Note
After several years of inactivity, we’re excited to announce that Kale development has restarted! 🎉 Kale was widely appreciated by the community back in the day, and our current goal is to re-establish a solid baseline by updating all components to the latest versions and ensuring full compatibility with the most recent Kubeflow releases.
See all details in the Road to 2.0 issue
KALE (Kubeflow Automated pipeLines Engine) is a project that aims to simplify the Data Science experience of deploying Kubeflow Pipelines workflows.
Kubeflow is a great platform for orchestrating complex workflows on top of Kubernetes, and Kubeflow Pipelines provides the means to create reusable components that can be executed as part of workflows. The self-service nature of Kubeflow makes it extremely appealing for Data Science use, as it provides easy access to advanced distributed job orchestration, re-usability of components, Jupyter Notebooks, rich UIs and more. Still, developing and maintaining Kubeflow workflows can be hard for data scientists, who may not be experts in workflow orchestration platforms and related SDKs. Additionally, data science often involves processes of data exploration, iterative modelling and interactive environments (mostly Jupyter notebooks).
Kale bridges this gap by providing a simple UI to define Kubeflow Pipelines workflows directly from your JupyterLab interface, without the need to change a single line of code.
See the Kale v2.0 Demo video at the bottom of the README for more details.
Read more about Kale and how it works in this Medium post: Automating Jupyter Notebook Deployments to Kubeflow Pipelines with Kale
- Python 3.11+
- Kubeflow Pipelines v2.16.0+
  - The `securityContext` field in the Kubernetes executor config is not recognized by older KFP servers (`kfp[kubernetes]` < 2.16.0), causing pipeline submission to fail.
  - Install KFP as recommended in the official Kubeflow Pipelines Installation documentation (make sure to set `PIPELINE_VERSION=2.16.0` or later).
  - If you are upgrading from an earlier version, make sure you have `kfp[kubernetes]>=2.16.0` in your dependencies along with `kfp>=2.0.0`.
- A Kubernetes cluster (`minikube`, `kind`, or any K8s cluster)
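If you want to sanity-check an environment against these requirements at runtime, a minimal sketch is below. The `parse` and `meets` helpers are hypothetical (not part of Kale), and they assume plain numeric version strings; pre-releases such as `2.0.0b1` would need real PEP 440 parsing (e.g. the `packaging` library).

```python
# Hypothetical helper (not part of Kale) to check installed package versions
# against the requirements listed above.
from importlib.metadata import PackageNotFoundError, version

def parse(v):
    """'2.16.0' -> (2, 16, 0), so tuples compare in version order."""
    return tuple(int(part) for part in v.split(".")[:3])

def meets(package, minimum):
    """True if `package` is installed at version `minimum` or newer."""
    try:
        return parse(version(package)) >= parse(minimum)
    except PackageNotFoundError:
        return False

# e.g. meets("kfp", "2.0.0") should hold on a correctly set-up environment
```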
Important
Kale v2.0 is not yet released on PyPI. Until then, install from source:
```sh
git clone https://github.com/kubeflow-kale/kale.git
cd kale
make dev      # Set up development environment
make jupyter  # Start JupyterLab
```

See CONTRIBUTING.md for detailed setup instructions.
Once v2.0 is released, you'll be able to install from PyPI:
```sh
pip install "jupyterlab>=4.0.0" kubeflow-kale[jupyter]
jupyter lab
```
- Start your Kubernetes cluster and KFP:

  ```sh
  minikube start
  kubectl port-forward -n kubeflow svc/ml-pipeline-ui 8080:80
  ```

- Test the CLI:

  ```sh
  kale --nb examples/base/candies_sharing.ipynb --kfp_host http://127.0.0.1:8080 --run_pipeline
  ```

  This generates a pipeline in `.kale/` and submits it to KFP.

- Test the JupyterLab extension:

  - Open JupyterLab (`make jupyter` or `jupyter lab`)
  - Open a notebook from `examples/base/`
  - Click the Kale icon in the left panel
  - Enable the Kale panel with the toggle
You can test Kale in a Kubeflow-like notebook environment using Docker. The image
is based on the official Kubeflow notebook image (jupyter-scipy) with Kale
pre-installed.
```sh
make docker-build  # Build wheels + Docker image
make docker-run    # Start JupyterLab on http://localhost:8889
```

To connect to a KFP cluster, run these in separate terminals:
```sh
# Terminal 1: Serve the dev wheel (so compiled pipelines can install Kale)
make kfp-serve

# Terminal 2: Port-forward the KFP API
kubectl port-forward -n kubeflow svc/ml-pipeline 8080:8888

# Terminal 3: Start the container
make docker-run
```

`make docker-run` automatically configures:

- KFP API via `host.docker.internal` (works on macOS, Windows, and Linux)
- KFP UI links pointing to `localhost:8080` (so pipeline links open in your browser)
- Wheel server connectivity for compiled pipelines
Kale uses special cell types (tags) to organize your notebook into pipeline components. You can assign these types to cells using the Kale JupyterLab extension or by adding tags directly in the notebook metadata.
| Cell Type | Status | Description |
|---|---|---|
| Imports | ✅ Works | The code in this cell will be prepended to every step of the pipeline. Used for all import statements. All imports must be placed in cells tagged as imports. Importing libraries (pandas, tensorflow, etc.) in other cell types will cause pipeline execution errors. |
| Functions | ✅ Works | The code in this cell will be prepended to every step of the pipeline, after imports. Used for function and class definitions only. Do not include top-level executable statements. |
| Pipeline Parameters | ✅ Works | Define variables that will become pipeline parameters. If more than one Pipeline Parameters cell exists and a parameter is defined in each cell, only the final value will be taken. |
| Pipeline Metrics | ✅ Works | Print scalar metrics to have them transformed into pipeline metrics. |
| Step | ✅ Works | Regular pipeline steps with custom names. This is the default cell type for your data processing and ML logic. Each step can have dependencies on other steps. Steps can also define their own image and GPU requirements. |
| Skip Cell | ✅ Works | Cells marked as skip will be excluded from the pipeline. Useful for exploratory code or debugging that shouldn't be part of the production pipeline. |
Warning
Imports outside Imports cells won't be detected for automatic dependency installation, which causes ImportError at runtime if the package isn't pre-installed in the container image.
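As a rough illustration of how cell tags can look in the raw notebook JSON, here is a minimal sketch built with only the standard library. The tag strings (`imports`, `functions`, `pipeline-parameters`, `step:...`, `skip`), the cell contents, and the file name are assumptions based on the cell-type names above; the Kale JupyterLab extension is the authoritative source for the exact metadata format it writes.

```python
# Sketch: a minimal .ipynb file whose cells carry Kale-style tags in their
# metadata. Tag strings here are assumptions, not the verified Kale format.
import json

def tagged_cell(source, tag):
    """Build a minimal code cell carrying a single tag."""
    return {
        "cell_type": "code",
        "execution_count": None,
        "metadata": {"tags": [tag]},
        "outputs": [],
        "source": source.splitlines(keepends=True),
    }

notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        tagged_cell("import pandas as pd", "imports"),
        tagged_cell("def clean(df):\n    return df.dropna()", "functions"),
        tagged_cell("n_rows = 100", "pipeline-parameters"),
        tagged_cell("df = pd.DataFrame({'x': range(n_rows)})", "step:load_data"),
        tagged_cell("df = clean(df)", "step:preprocess"),
        tagged_cell("df.head()  # exploratory only", "skip"),
    ],
}

with open("tagged_pipeline.ipynb", "w") as f:
    json.dump(notebook, f, indent=1)
```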
Best Practices:
- Place all imports at the beginning of your notebook in cells tagged as `Imports`
- Keep function definitions pure: no side effects (modifying global variables or mutable parameters), prints, or imports
- Use `pipeline-parameters` cells for values you might want to tune between runs
- Use `skip` cells for exploratory analysis that shouldn't be in the pipeline
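The "pure functions" guideline above can be made concrete with a small, entirely hypothetical example (these functions are not part of Kale):

```python
# Illustration of the "keep function definitions pure" best practice.

# Good: pure -- the result depends only on the inputs, no side effects.
def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Bad: reads/writes a global and prints. Pipeline steps typically run in
# separate containers, so hidden state like this silently disappears
# between steps.
cache = {}

def normalize_and_log(values):
    cache["last"] = values          # side effect: mutates a global
    print("normalizing", values)    # side effect: I/O
    return normalize(values)
```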
Check out the example notebooks at examples/ to see cell types in action.
Head over to FAQ to read about some known issues and some of the limitations imposed by the Kale data marshalling model.
- Kale introduction blog post
- KubeCon NA Tutorial 2019: From Notebook to Kubeflow Pipelines: An End-to-End Data Science Workflow / video
- KubeCon EU Tutorial 2020: From Notebook to Kubeflow Pipelines with HP Tuning: A Data Science Journey / video
```sh
make dev      # Set up development environment
make test     # Run all tests
make jupyter  # Start JupyterLab
```

See CONTRIBUTING.md for detailed development instructions, including:
- Available make commands
- Testing with KFP clusters
- Building release artifacts
- Live reload setup
Watch the KubeFlow Kale Demo - Introduction video below.


