Overview of Kubeflow Pipelines

Understanding the goals and main concepts of Kubeflow Pipelines

Kubeflow Pipelines is a platform for building and deploying portable, scalable machine learning (ML) workflows based on Docker containers.

Quickstart

Run your first pipeline by following the pipelines quickstart guide.

What is Kubeflow Pipelines?

The Kubeflow Pipelines platform consists of:

  • A user interface (UI) for managing and tracking experiments, jobs, and runs.
  • An engine for scheduling multi-step ML workflows.
  • An SDK for defining and manipulating pipelines and components.
  • Notebooks for interacting with the system using the SDK.

The following are the goals of Kubeflow Pipelines:

  • End-to-end orchestration: enabling and simplifying the orchestration of machine learning pipelines.
  • Easy experimentation: making it easy for you to try numerous ideas and techniques and manage your various trials/experiments.
  • Easy re-use: enabling you to re-use components and pipelines to quickly create end-to-end solutions without having to rebuild each time.

In Kubeflow v0.1.3 and later, Kubeflow Pipelines is one of the Kubeflow core components. It’s automatically deployed during Kubeflow deployment.

Due to kubeflow/pipelines#1700 and kubeflow/pipelines#337, some non-critical pieces of functionality are currently available only on GKE clusters.

What is a pipeline?

A pipeline is a description of an ML workflow, including all of the components in the workflow and how they combine in the form of a graph. (See the screenshot below showing an example of a pipeline graph.) The pipeline includes the definition of the inputs (parameters) required to run the pipeline and the inputs and outputs of each component.

After developing your pipeline, you can upload and share it on the Kubeflow Pipelines UI.
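
One way to do this programmatically is through the SDK client. The snippet below is a minimal sketch only; the package file name and pipeline display name are placeholders, and the exact client method names can vary between SDK versions.

import kfp

# Upload a compiled pipeline package so it appears in the Kubeflow Pipelines UI.
# (File name and display name are placeholder examples, not values from this page.)
client = kfp.Client()
client.upload_pipeline('my_pipeline.tar.gz', pipeline_name='My pipeline')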

A pipeline component is a self-contained set of user code, packaged as a Docker image, that performs one step in the pipeline. For example, a component can be responsible for data preprocessing, data transformation, model training, and so on.
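
To make the idea concrete, here is a minimal sketch of a component built from a Python function using the SDK's lightweight component helper. The function, its logic, and the base image are illustrative only, and the helper's name can differ between SDK versions.

from kfp.components import create_component_from_func

def preprocess(message: str) -> str:
    """Illustrative one-step component: pretend to preprocess some text data."""
    return message.lower()

# Package the function as a pipeline component backed by a container image.
preprocess_op = create_component_from_func(preprocess, base_image='python:3.9')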

See the conceptual guides to pipelines and components.

Example of a pipeline

The screenshots and code below show the xgboost-training-cm.py pipeline, which creates an XGBoost model using structured data in CSV format. You can see the source code and other information about the pipeline on GitHub.

The runtime execution graph of the pipeline

The screenshot below shows the example pipeline’s runtime execution graph in the Kubeflow Pipelines UI:

XGBoost results on the pipelines UI

The Python code that represents the pipeline

Below is an extract from the Python code that defines the xgboost-training-cm.py pipeline. You can see the full code on GitHub.

@dsl.pipeline(
    name='XGBoost Trainer',
    description='A trainer that does end-to-end distributed training for XGBoost models.'
)
def xgb_train_pipeline(
    output,
    project,
    region='us-central1',
    train_data='gs://ml-pipeline-playground/sfpd/train.csv',
    eval_data='gs://ml-pipeline-playground/sfpd/eval.csv',
    schema='gs://ml-pipeline-playground/sfpd/schema.json',
    target='resolution',
    rounds=200,
    workers=2,
    true_label='ACTION',
):
    delete_cluster_op = DeleteClusterOp('delete-cluster', project, region).apply(gcp.use_gcp_secret('user-gcp-sa'))
    with dsl.ExitHandler(exit_op=delete_cluster_op):
        create_cluster_op = CreateClusterOp('create-cluster', project, region, output).apply(gcp.use_gcp_secret('user-gcp-sa'))
        analyze_op = AnalyzeOp('analyze', project, region, create_cluster_op.output, schema,
                               train_data, '%s/{{workflow.name}}/analysis' % output).apply(gcp.use_gcp_secret('user-gcp-sa'))
        transform_op = TransformOp('transform', project, region, create_cluster_op.output,
                                   train_data, eval_data, target, analyze_op.output,
                                   '%s/{{workflow.name}}/transform' % output).apply(gcp.use_gcp_secret('user-gcp-sa'))
        train_op = TrainerOp('train', project, region, create_cluster_op.output, transform_op.outputs['train'],
                             transform_op.outputs['eval'], target, analyze_op.output, workers,
                             rounds, '%s/{{workflow.name}}/model' % output).apply(gcp.use_gcp_secret('user-gcp-sa'))
        predict_op = PredictOp('predict', project, region, create_cluster_op.output, transform_op.outputs['eval'],
                               train_op.output, target, analyze_op.output, '%s/{{workflow.name}}/predict' % output).apply(gcp.use_gcp_secret('user-gcp-sa'))
        confusion_matrix_op = ConfusionMatrixOp('confusion-matrix', predict_op.output,
                                                '%s/{{workflow.name}}/confusionmatrix' % output).apply(gcp.use_gcp_secret('user-gcp-sa'))
        roc_op = RocOp('roc', predict_op.output, true_label, '%s/{{workflow.name}}/roc' % output).apply(gcp.use_gcp_secret('user-gcp-sa'))
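
As a sketch of what typically happens next (this step is not part of the pipeline source shown above, and the output file name is an arbitrary example), the pipeline function can be compiled into a package that you upload to the Kubeflow Pipelines UI:

import kfp.compiler as compiler

# Compile the pipeline function into a pipeline package for upload.
# (Output file name is an arbitrary example; older SDK versions produce
# .tar.gz/.zip archives, newer ones emit YAML.)
compiler.Compiler().compile(xgb_train_pipeline, 'xgboost-training-cm.tar.gz')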

Pipeline data on the Kubeflow Pipelines UI

The screenshot below shows the Kubeflow Pipelines UI for kicking off a run of the pipeline. The pipeline definition in your code determines which parameters appear in the UI form. The pipeline definition can also set default values for these parameters. The arrows on the screenshot indicate the parameters that do not have useful default values in this particular example:

Starting the XGBoost run on the pipelines UI

Outputs from the pipeline

The following screenshots show examples of the pipeline output visible on the Kubeflow Pipelines UI.

Prediction results:

Prediction output

Confusion matrix:

Confusion matrix

Receiver operating characteristics (ROC) curve:

ROC

Architectural overview

Pipelines architectural diagram

At a high level, the execution of a pipeline proceeds as follows:

  • Python SDK: You create components or specify a pipeline using the Kubeflow Pipelines domain-specific language (DSL).
  • DSL compiler: The DSL compiler transforms your pipeline’s Python code into a static configuration (YAML).
  • Pipeline Service: You call the Pipeline Service to create a pipeline run from the static configuration (a minimal sketch of this flow, from the SDK side, appears after this list).
  • Kubernetes resources: The Pipeline Service calls the Kubernetes API server to create the necessary Kubernetes resources (CRDs) to run the pipeline.
  • Orchestration controllers: A set of orchestration controllers execute the containers needed to complete the pipeline execution specified by the Kubernetes resources (CRDs). The containers execute within Kubernetes Pods on virtual machines. An example controller is the Argo Workflow controller (https://github.com/argoproj/argo), which orchestrates task-driven workflows.
  • Artifact storage: The Pods store two kinds of data:

    • Metadata: Experiments, jobs, runs, etc. Also single scalar metrics, generally aggregated for the purposes of sorting and filtering. Kubeflow Pipelines stores the metadata in a MySQL database.
    • Artifacts: Pipeline packages, views, etc. Also large-scale metrics like time series, usually used for investigating an individual run’s performance and for debugging. Kubeflow Pipelines stores the artifacts in an artifact store like Minio server or Cloud Storage. The MySQL database and the Minio server are both backed by the Kubernetes PersistentVolume (PV) subsystem.
  • Persistence agent and ML metadata: The Pipeline Persistence Agent watches the Kubernetes resources created by the Pipeline Service and persists the state of these resources in the ML Metadata Service. The Pipeline Persistence Agent records the set of containers that executed as well as their inputs and outputs. The input/output consists of either container parameters or data artifact URIs.

  • Pipeline web server: The Pipeline web server gathers data from various services to display relevant views: the list of pipelines currently running, the history of pipeline execution, the list of data artifacts, debugging information about individual pipeline runs, and execution status for individual pipeline runs.
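
The snippet below is a minimal sketch of this flow as seen from the SDK side, assuming a cluster where Kubeflow Pipelines is installed and reachable; the endpoint URL, bucket, and project values are placeholders, not values from this example.

import kfp

client = kfp.Client(host='http://<your-pipelines-endpoint>')  # placeholder endpoint

# The SDK compiles the pipeline function and asks the Pipeline Service to
# create a run; the service then creates the Kubernetes resources that the
# orchestration controllers execute.
client.create_run_from_pipeline_func(
    xgb_train_pipeline,  # the pipeline function shown earlier
    arguments={
        'output': 'gs://<your-bucket>/out',   # placeholder
        'project': '<your-gcp-project>',      # placeholder
    },
)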

Next steps