Model serving overview
Kubeflow supports two model serving systems that allow multi-framework model serving: KFServing and Seldon Core. Alternatively, you can use a standalone model serving system. This page gives an overview of the options, so that you can choose the framework that best supports your model serving requirements.
Multi-framework serving with KFServing or Seldon Core
KFServing and Seldon Core are both open source systems that allow multi-framework model serving. The following table compares KFServing and Seldon Core. A check mark (✓) indicates that the system (KFServing or Seldon Core) supports the feature specified in that row.
| Feature | Sub-feature | KFServing | Seldon Core |
|---|---|---|---|
| Framework | TensorFlow | ✓ sample | ✓ docs |
| | XGBoost | ✓ sample | ✓ docs |
| | scikit-learn | ✓ sample | ✓ docs |
| | NVIDIA TensorRT Inference Server | ✓ sample | ✓ docs |
| | ONNX | ✓ sample | ✓ docs |
| | PyTorch | ✓ sample | ✓ |
| Graph | Transformers | ✓ sample | ✓ docs |
| | Combiners | Roadmap | ✓ sample |
| | Routers including MAB | Roadmap | ✓ docs |
| Analytics | Explanations | ✓ sample | ✓ docs |
| Scaling | Knative | ✓ sample | |
| | GPU AutoScaling | ✓ sample | |
| | HPA | ✓ | ✓ docs |
| Custom | Container | ✓ sample | ✓ docs |
| | Language Wrappers | | ✓ Python, Java, R |
| | Multi-Container | | ✓ docs |
| Rollout | Canary | ✓ sample | ✓ docs |
| | Shadow | | ✓ |
| Istio | | ✓ | ✓ |
Notes:
- KFServing and Seldon Core share some technical features, including explainability (using Seldon Alibi Explain) and payload logging, among other areas.
- A commercial product, Seldon Deploy, supports both KFServing and Seldon in production.
- KFServing is part of the Kubeflow project ecosystem. Seldon Core is an external project supported within Kubeflow.
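Both systems expose deployed models over HTTP. As a rough illustration, the sketch below calls a KFServing model endpoint using the V1 data plane, which follows the TensorFlow Serving "predict" JSON protocol. The cluster IP, service hostname, model name, and input values are placeholders for your own deployment.

```python
import json
import requests

# Hypothetical values: replace with your InferenceService's details.
CLUSTER_IP = "1.2.3.4"  # address of the Istio ingress gateway
SERVICE_HOSTNAME = "my-model.my-namespace.example.com"
MODEL_NAME = "my-model"

# KFServing's V1 data plane uses the TensorFlow Serving "predict" JSON
# protocol: a JSON body with an "instances" list, one entry per input row.
payload = {"instances": [[6.8, 2.8, 4.8, 1.4]]}

response = requests.post(
    f"http://{CLUSTER_IP}/v1/models/{MODEL_NAME}:predict",
    headers={"Host": SERVICE_HOSTNAME},  # route through the ingress gateway
    data=json.dumps(payload),
)
response.raise_for_status()
print(response.json()["predictions"])
```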
Further information:
- KFServing: https://github.com/kubeflow/kfserving
- Seldon Core: https://docs.seldon.io/projects/seldon-core/en/latest/
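To illustrate the "Language Wrappers" row in the table above: Seldon Core can wrap a plain Python class as a serving container. The sketch below follows the documented Python wrapper contract (a class exposing a predict method); the model file name and loading logic are assumptions for illustration.

```python
# MyModel.py -- a minimal Seldon Core Python wrapper sketch.
# Seldon's builder packages a class like this into a serving container;
# the joblib model file name here is a placeholder.
import joblib

class MyModel:
    def __init__(self):
        # Load the trained model once, when the container starts.
        self._model = joblib.load("model.joblib")

    def predict(self, X, features_names=None):
        # X is an array of input rows; return the model's predictions.
        return self._model.predict(X)
```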
TensorFlow Serving
For TensorFlow models you can use TensorFlow Serving for real-time prediction. However, if you plan to use multiple frameworks, you should consider KFServing or Seldon Core as described above.
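For example, a TensorFlow Serving instance exposes a REST predict endpoint. The sketch below assumes a server running locally on the default REST port (8501) and serving the standard half_plus_two demo model.

```python
import json
import requests

# Assumes TensorFlow Serving is running locally on its default REST port
# (8501) and serving a model named "half_plus_two" (the standard demo model,
# which computes y = x/2 + 2 for each input).
url = "http://localhost:8501/v1/models/half_plus_two:predict"
payload = {"instances": [1.0, 2.0, 5.0]}

response = requests.post(url, data=json.dumps(payload))
response.raise_for_status()
# Expected output for this demo model: [2.5, 3.0, 4.5]
print(response.json()["predictions"])
```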
NVIDIA TensorRT Inference Server
NVIDIA TensorRT Inference Server is a REST and GRPC service for deep-learning inferencing of TensorRT, TensorFlow and Caffe2 models. The server is optimized to deploy machine learning algorithms on both GPUs and CPUs at scale.
You can use NVIDIA TensorRT Inference Server as a standalone system, but you should consider KFServing as described above. KFServing includes support for NVIDIA TensorRT Inference Server.
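As a quick way to verify a standalone server, the sketch below probes the HTTP health and status endpoints from the server's v1 HTTP API; the host and port (8000 is the server's default HTTP port) are assumptions about your deployment.

```python
import requests

# Assumed deployment details: adjust host/port for your server.
# 8000 is the TensorRT Inference Server's default HTTP port.
BASE = "http://localhost:8000"

# Liveness and readiness probes from the server's v1 HTTP API.
live = requests.get(f"{BASE}/api/health/live")
ready = requests.get(f"{BASE}/api/health/ready")
print("live:", live.status_code, "ready:", ready.status_code)

# /api/status reports the status of every loaded model.
status = requests.get(f"{BASE}/api/status")
print(status.text)
```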