Experiment with the Pipelines Samples
Get started with the Kubeflow Pipelines notebooks and samples
You can learn how to build and deploy pipelines by running the samples provided in the Kubeflow Pipelines repository or by walking through a Jupyter notebook that describes the process.
Compiling the samples on the command line
This section shows you how to compile the Kubeflow Pipelines samples and deploy them using the Kubeflow Pipelines UI.
Before you start
Set up your environment:
- Clone or download the Kubeflow Pipelines samples.
- Install the Kubeflow Pipelines SDK.
- Activate your Python 3 environment if you haven’t done so already:
source activate <YOUR-PYTHON-ENVIRONMENT-NAME>
For example:
source activate mlpipeline
Choose and compile a pipeline
Examine the pipeline samples that you downloaded and choose one to work with. The sequential.py sample pipeline is a good one to start with.
Each pipeline is defined as a Python program. Before you can submit a pipeline to the Kubeflow Pipelines service, you must compile the pipeline to an intermediate representation. The intermediate representation takes the form of a YAML file compressed into a .tar.gz file.
Use the dsl-compile command to compile the pipeline that you chose:
dsl-compile --py [path/to/python/file] --output [path/to/output/tar.gz]
For example, to compile the sequential.py sample pipeline:
export DIR=[YOUR PIPELINES REPO DIRECTORY]/samples/basic
dsl-compile --py ${DIR}/sequential.py --output ${DIR}/sequential.tar.gz
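The compiled archive is an ordinary gzipped tarball, so you can inspect it with Python's standard library before uploading it. The sketch below builds a stand-in archive for illustration (the member name pipeline.yaml and the YAML contents are assumptions, not necessarily what the compiler emits) and then lists its members the same way you could list a real sequential.tar.gz:

```python
import io
import os
import tarfile
import tempfile

# Stand-in for the YAML workflow spec that compilation produces.
yaml_text = b"apiVersion: argoproj.io/v1alpha1\nkind: Workflow\n"

with tempfile.TemporaryDirectory() as d:
    archive = os.path.join(d, "sequential.tar.gz")

    # Build a small .tar.gz containing one YAML member, mimicking
    # the shape of a compiled pipeline archive.
    with tarfile.open(archive, "w:gz") as tar:
        info = tarfile.TarInfo("pipeline.yaml")
        info.size = len(yaml_text)
        tar.addfile(info, io.BytesIO(yaml_text))

    # Inspect the archive: list its members without extracting to disk.
    with tarfile.open(archive, "r:gz") as tar:
        names = tar.getnames()

print(names)  # ['pipeline.yaml']
```

The same `tarfile.open(path, "r:gz")` pattern works on the archive that dsl-compile writes, which is a quick way to confirm the compile step succeeded.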
Deploy the pipeline
Upload the generated .tar.gz file through the Kubeflow Pipelines UI. See the guide to getting started with the UI.
Building a pipeline in a Jupyter notebook
You can choose to build your pipeline in a Jupyter notebook. The sample notebooks walk you through the process.
It’s easiest to use the Jupyter services that are installed in the same cluster as the Kubeflow Pipelines system.
Note: The notebook samples don’t work on Jupyter notebooks outside the same cluster, because the Python library communicates with the Kubeflow Pipelines system through in-cluster service names.
Follow these steps to start a notebook:
Deploy Kubeflow:
Follow the GCP deployment guide, including the step to deploy Kubeflow using the Kubeflow deployment UI.
When Kubeflow is running, access the Kubeflow UI at a URL of the form https://<deployment-name>.endpoints.<project>.cloud.goog/.
Follow the Kubeflow notebooks setup guide to create a Jupyter notebook server and open the Jupyter UI.
Download the sample notebooks from https://github.com/kubeflow/pipelines/tree/master/samples/notebooks.
Upload these notebooks from the Jupyter UI: In Jupyter, go to the tree viewand find the upload button in the top right-hand area of the screen.
Open one of the uploaded notebooks.
Make sure the notebook kernel is set to Python 3. The Python version is atthe top right-hand corner in the Jupyter notebook view.
Follow the instructions in the notebook.
The following notebooks are available:
- KubeFlow pipeline using TFX OSS components: This notebook demonstrates how to build a machine learning pipeline based on TensorFlow Extended (TFX) components. The pipeline includes a TFDV step to infer the schema, a TFT preprocessor, a TensorFlow trainer, a TFMA analyzer, and a model deployer which deploys the trained model to tf-serving in the same cluster. The notebook also demonstrates how to build a component based on Python 3 inside the notebook, including how to build a Docker container.
- Lightweight Python components: This notebook demonstrates how to build simple Python components based on Python 3 and use them in a pipeline with fast iterations. If you use this technique, you don’t need to build a Docker container when you build a component. Note that the container image may not be self-contained, because the source code is not built into the container.
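The idea behind lightweight components can be sketched in plain Python. The function below follows the conventions the notebooks describe (standalone and type-annotated, with any imports placed inside the function body); the commented-out `func_to_container_op` call is an assumption about how you would register it with the Kubeflow Pipelines SDK, and the function itself runs locally as ordinary Python:

```python
# A minimal lightweight-component function (a hedged sketch, not taken
# from the sample notebooks). With the Kubeflow Pipelines SDK installed,
# you could turn it into a pipeline op without building a Docker image:
#   import kfp.components as comp
#   add_op = comp.func_to_container_op(add)
def add(a: float, b: float) -> float:
    """Add two numbers.

    Lightweight components should be self-contained: any imports they
    need must happen inside the function body, because the function is
    serialized and executed in a separate container.
    """
    return a + b

# The function still works locally as plain Python, which makes
# iteration fast before you wire it into a pipeline.
result = add(1.0, 2.0)
print(result)
```

Because no source code is baked into the container image, the trade-off the note above describes applies: the image alone is not a complete record of the component.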
Next steps
- Learn the various ways to use the Kubeflow Pipelines SDK.
- See how to build your own pipeline components.
- Read more about building lightweight components.