Serving Machine Learning Models

Rishiraj Acharya
3 min read · Jan 5, 2023

As machine learning models become more prevalent across industries, the need to serve them in production environments has become increasingly important. Serving a machine learning model means making it available for prediction or inference, either to return real-time predictions to users or to generate batch predictions for offline use. There are several ways to serve a machine learning model, and which one is best depends on the specific requirements of your application. In this blog post, we will explore the various options for serving machine learning models and their trade-offs.

1: Serve the model locally

One option is to serve the model locally, directly from the machine where it is trained. This can be a simple and cost-effective solution if you only need to serve the model to a small number of users or make infrequent predictions.

To serve the model locally, you will need to expose an interface for making predictions, such as a command-line interface or a web server. You can use a framework such as Flask to build a simple web server that can receive requests, make predictions using the model, and return the results to the client.
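For illustration, here is a minimal sketch of such a Flask server, assuming a scikit-learn-style model saved with joblib. The file name model.pkl, the /predict route, and the JSON request format are placeholders, not a fixed convention:

```python
# Minimal sketch of a local Flask prediction server.
# Assumes a scikit-learn-style model saved as "model.pkl" (hypothetical path)
# and numeric feature vectors sent as JSON.
import joblib
from flask import Flask, request, jsonify

app = Flask(__name__)
model = joblib.load("model.pkl")  # hypothetical model file

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [[1.0, 2.0, 3.0]]}
    payload = request.get_json()
    predictions = model.predict(payload["features"])
    # Convert the NumPy array to a plain list for JSON serialization
    return jsonify({"predictions": predictions.tolist()})

if __name__ == "__main__":
    # Serves only on this machine; use a production WSGI server for real traffic
    app.run(host="127.0.0.1", port=5000)
```

A client on the same machine could then POST feature vectors to http://127.0.0.1:5000/predict and receive predictions back as JSON.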

One drawback of this approach is that the model can only be accessed from the machine where it is hosted, which may not be ideal if you need to serve the model to users from different locations. In addition, serving the model locally can be challenging if you need to scale to handle…
