Machine Learning Model Deployment using Docker Containers

Model deployment is the next, and a very important, step once you have finished training and developing your model. There are many ways to deploy a model depending on how it will be served: batch serving, online serving, real-time serving, or live streaming-based serving. In this article I am going to explain one deployment mechanism, online serving through APIs. I will show how to deploy models using a Docker container and run them in production efficiently and reliably.

Feature Store in Machine Learning

A feature store in machine learning stores features in both online and offline stores for model training and serving. It provides consistency between the data used to train a model and the data served to that model online. In other words, it guarantees that you're serving the same data to models during training and prediction, eliminating training-serving skew. Feast is one of the popular open source tools for building a feature store.
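To make this concrete, here is a minimal sketch of reading features from Feast's online store at serving time. It assumes a feature repository has already been defined and registered with `feast apply`; the feature view name (driver_hourly_stats), feature names, and entity key (driver_id) are placeholders in the style of Feast's quickstart, not part of this article.

from feast import FeatureStore

# Point Feast at the feature repository directory.
store = FeatureStore(repo_path=".")

# Fetch the latest feature values from the online store for one entity.
# The feature view, feature names, and entity key below are illustrative.
online_features = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
    entity_rows=[{"driver_id": 1001}],
).to_dict()

print(online_features)

Because the same feature definitions back both the offline store (used to build training datasets) and the online store (used at prediction time), the model sees consistent data in both phases.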

How to deploy machine learning models as microservices using FastAPI

As of today, FastAPI is one of the most popular web frameworks for building microservices in Python 3.6+. Deploying machine learning models in a microservice-based architecture makes code components reusable, easier to maintain and test, and quick to respond. FastAPI is built on ASGI (Asynchronous Server Gateway Interface) rather than the WSGI (Web Server Gateway Interface) that Flask uses, which is why it is faster than Flask-based APIs.
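The sketch below shows what such a prediction microservice could look like. It is a minimal example, not the article's full implementation: the model file name (model.pkl), the input fields, and the feature order are assumptions for illustration, and any scikit-learn-style model with a predict method would work.

import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load the trained model once at startup; "model.pkl" is a placeholder path.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

class PredictionRequest(BaseModel):
    # Example input schema; replace these with your model's actual features.
    feature_1: float
    feature_2: float

@app.post("/predict")
def predict(request: PredictionRequest):
    # Assemble the features in the order the model expects and return the result.
    prediction = model.predict([[request.feature_1, request.feature_2]])
    return {"prediction": prediction.tolist()}

Running the service with uvicorn (for example, uvicorn main:app --host 0.0.0.0 --port 8000) exposes the /predict endpoint; FastAPI validates the request body against the Pydantic model automatically and generates interactive documentation at /docs.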