In this tutorial, we will show you how to use Machine Learning Model Deployment.
Public Beta Warning
Machine Learning (ML) Model Deployment is currently in public beta. Some features may not work as
expected. Please bear with us and provide feedback using the feedback
button directly in the platform or through the feedback portal.
In the next steps, we will walk you through the process of deploying a model in your project. The model
is deployed through an integration with the MLflow platform.
There are two possible situations, depending on whether you need to create and register a new model,
or whether you already have an existing model you can use.
New ML Model
Let’s assume that you have just started exploring this feature and that no model has been created in
your ML/AI section yet. You will see this empty screen.
This means that you must first create an MLflow workspace; continue to the tab Workspaces.
Here you can create a new workspace. Click the green button New Workspace on the right,
and select the Python MLflow workspace.
Name it, e.g., My test workspace, and select its backend power: small, medium, or large. Then click
the button Create Workspace.
After a while, the workspace is created. You can connect to it using the generated credentials.
JupyterLab will open, and you should find an empty Jupyter notebook, where you can place your code to use
the MLflow server for training and registering the model.
Once you train your model and run the first experiment, you can go back to the Keboola Connection UI,
open the MLflow UI from there, and check the results.
If you are satisfied with the results, you can use the registered model for deployment. You can also
set the model’s stage. MLflow provides predefined stages
for common use cases, such as Staging, Production, and Archived,
and you can transition a model version from one stage to another.
Once ready, go back to the tab ML/AI Services in the Keboola Connection UI and deploy the model.
Existing ML Model
If models are already available (perhaps created by someone else before you), simply go to
the tab ML/AI Services, click the button Deploy Model, select one of the existing models,
and use it.
The model will be deployed, and a unique endpoint URL, which you can use for sending requests, will be generated.
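As a sketch of how such a request might look, here is a standard-library Python example. The endpoint URL and the column names are hypothetical, and the JSON payload uses MLflow 2.x's dataframe_split scoring format; older MLflow scoring servers expect a slightly different JSON shape.

```python
import json
import urllib.request

# Hypothetical value -- replace with the endpoint URL generated for your deployment
ENDPOINT_URL = "https://example.com/your-model-endpoint"

def build_payload(columns, rows):
    """Encode feature rows in MLflow's 'dataframe_split' scoring format."""
    body = {"dataframe_split": {"columns": columns, "data": rows}}
    return json.dumps(body).encode("utf-8")

def score(columns, rows, endpoint=ENDPOINT_URL):
    """POST the feature rows to the deployed model and return its predictions."""
    req = urllib.request.Request(
        endpoint,
        data=build_payload(columns, rows),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example call (requires a live endpoint; the feature names are illustrative):
# score(["sepal_length", "sepal_width", "petal_length", "petal_width"],
#       [[5.1, 3.5, 1.4, 0.2]])
```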
A successfully deployed model should look like this: