Deploy pipelines, not models.
The following diagram shows the common flow of training and prediction in a data analytics project. The training phase consists of feature engineering (data transformation, then feature extraction and pre-processing) and model training; the prediction phase must apply the same feature engineering before model prediction, optionally followed by a prediction transformation. Deploying just the model part of the workflow is therefore not enough: the entire pipeline must be deployed.
Although pipeline frameworks already exist, such as the Scikit-learn and Spark ML pipelines, they cannot cover every stage, and data scientists almost always need to write extra code outside of them.
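As a minimal illustration (plain Python, no particular framework assumed, with made-up feature names and weights), an end-to-end pipeline chains feature engineering, the model, and an optional prediction transformation into a single callable, so all stages deploy together:

```python
def extract_features(record):
    # Feature engineering: turn a raw input record into model features.
    return [record["amount"] / 100.0, float(record["is_repeat"])]

def model_predict(features):
    # Stand-in for a trained model: a simple linear score.
    weights = [0.8, 1.5]
    return sum(w * x for w, x in zip(weights, features))

def transform_prediction(score):
    # Optional prediction transformation: raw score -> business label.
    return "high_risk" if score > 1.0 else "low_risk"

def pipeline(record):
    # The deployable unit is this whole chain, not model_predict alone.
    return transform_prediction(model_predict(extract_features(record)))
```

Deploying only `model_predict` would break scoring, because callers send raw records, not pre-computed features.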
DaaS is built on a Function-as-a-Service framework: it generates a custom scoring script that already includes the model's prediction step, and you then add your own pre-processing and post-processing functions to complete the pipeline. Finally, DaaS automatically provides REST APIs for the entire pipeline.
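A sketch of what such a scoring script can look like. The function names (`preprocess`, `postprocess`, `score`) and the feature fields are assumptions for illustration, and `model_predict` here is a trivial placeholder for the generated model call; the script DaaS actually generates may use different hooks:

```python
# Hypothetical scoring-script skeleton: the generated model call plus
# user-supplied pre- and post-processing around it.

def preprocess(records):
    # Custom pre-processing: raw JSON records -> feature vectors.
    return [[r["sepal_length"], r["sepal_width"]] for r in records]

def model_predict(features):
    # Placeholder for the generated model prediction; a trivial rule here.
    return [1 if f[0] > 5.0 else 0 for f in features]

def postprocess(predictions):
    # Custom post-processing: class indices -> readable labels.
    labels = {0: "setosa", 1: "other"}
    return [labels[p] for p in predictions]

def score(records):
    # The complete pipeline exposed as one REST-callable function.
    return postprocess(model_predict(preprocess(records)))
```

The REST API then fronts `score`, so clients always hit the full pipeline rather than the bare model.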
Open standards and open source
DaaS can deploy your AI and ML solutions into production at scale on Kubernetes, providing a highly reliable and scalable model deployment service. DaaS supports the major open-standard and open-source model formats: PMML, Scikit-learn, XGBoost, LightGBM, and Spark ML; ONNX, Keras, TensorFlow, PyTorch, and MXNet; and even custom models. They can be deployed in one click using the DaaS client library; visit DaaS-Client for details, where several example notebooks are available for reference.
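The deployment call might be sketched as below. The `DaasClient`, `publish`, and `deploy` names follow the DaaS-Client examples, but the exact signatures, arguments, and server URL here are assumptions; consult the DaaS-Client notebooks for the authoritative API:

```python
def deploy_with_daas(model, server_url, username, password):
    """Publish a trained model to a DaaS server and expose it as a REST API.

    Hypothetical wrapper around the DaaS-Client library; argument names and
    call signatures are assumptions based on its published examples.
    """
    from daas_client import DaasClient  # imported lazily inside the sketch

    client = DaasClient(server_url, username, password)
    # Publish registers the model with the DaaS server.
    client.publish(model, name="my-model")
    # Deploy turns the published model into a scalable REST scoring service.
    client.deploy(model_name="my-model", deployment_name="my-model-svc")
```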