
Machine Learning

Machine Learning Model Retraining - Part 3

Jul 28, 2022

5 min read

We have reached the finale of this three-part series: retraining. If you landed here before reading the first two parts, you can read them here: Data Collection and Preprocessing, and Model Training.


Retraining:

Ask any seasoned veteran and they will stress one point above all: ‘you do not deploy only once.’ Especially in ML, where every stage of the workflow depends on the others, retraining plays a major part.


Why do we have to retrain our model?

In real life, the datasets fed into the model differ from those used in the training, testing, and validation phases, resulting in data drift. To avoid model degradation, we need to retrain our models with new data on a regular basis.

Once your model has been through the training phase, like any good soldier, you march it into the real world expecting it to send back reliable predictions. But unfortunately, real-world datasets are entirely different and far from ideal.

For example, suppose we deploy a model in an e-commerce store to predict future cart abandonment. We expect predictions such as products that are likely to be abandoned or user segments likely to abandon products in carts. But that is rarely the case.

In real life, datasets don't come clean or sorted. We must account for user behavior, trends, market sentiment, and buying patterns, and our model should be able to adapt to such changes. Deploying once and moving on results in model drift, so we need to monitor the model, the environment, and the overall process.

This process of a model's performance degrading over time is called model drift. But how do you track model drift and ensure your model is performing to its abilities?

Measuring the performance of a live model is easier said than done, because you need to compare predictions against ground truth, which is rarely available right away. Furthermore, the predictions might not have been stored at all (a simple yet prevalent mistake); even when they are stored, you might not be able to access them, and in some cases predictions and labels cannot be joined back together.
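A minimal sketch of such logging follows; the file name and field layout are illustrative assumptions, not a prescribed schema. The key idea is to attach a unique ID and timestamp to every prediction so it can later be joined against ground-truth labels when they arrive:

```python
import csv
import os
import uuid
from datetime import datetime, timezone

# Minimal sketch: log every prediction with a unique ID and timestamp so it
# can later be joined against ground-truth labels when they arrive.
def log_prediction(features: dict, prediction, path: str = "predictions_log.csv"):
    record = {
        "prediction_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prediction": prediction,
        **features,
    }
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=record.keys())
        if new_file:  # write a header on the first record
            writer.writeheader()
        writer.writerow(record)
```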

It is also good practice to compare the feature distributions seen in training with those arriving at prediction time; since models are bound to degrade, comparing them and intervening before model drift occurs makes sense.
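As a rough illustration, a two-sample Kolmogorov-Smirnov test can flag when a numeric feature's live distribution has shifted away from its training distribution. The significance threshold below is an assumed default, not a universal rule:

```python
import numpy as np
from scipy.stats import ks_2samp

# Minimal sketch: compare a feature's training distribution against recent
# production values; a small p-value suggests the distributions differ.
def feature_drifted(train_values: np.ndarray,
                    live_values: np.ndarray,
                    alpha: float = 0.05) -> bool:
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

# Usage: drifted = feature_drifted(X_train[:, 0], recent_batch[:, 0])
```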

Here are some ways to improve model performance:

  • Assembling data from different sources and different datasets

  • Feature Engineering by deriving from raw data to improve predictive performance

  • Comparing different learning algorithms before selecting a model (see the sketch after this list)

  • Optimizing the chosen model for error estimation
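To illustrate the algorithm-comparison point above, here is a minimal sketch using scikit-learn's cross-validation on a synthetic dataset; the candidate models and scoring metric are placeholder choices:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Minimal sketch: score candidate algorithms with 5-fold cross-validation
# before committing to one. The synthetic dataset stands in for your own.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=42),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```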


When and how should I retrain my model?

There is no definitive post-deployment timeline that works for everyone; it depends on the problem you're solving, the algorithm and model you have trained, and the environment you have deployed to.

While monitoring the model, you can spot the drop in performance. Returning to our e-commerce store, let's say we deploy a model that predicts products likely to be bought together, but over time the predictions drift off target, and you start wasting time and resources on misdirected campaigns.

Now, there may be many reasons for the dip in performance: the products could be seasonal, buying behavior may have shifted, or trends may have changed. Retraining the model periodically keeps it up to date with the changing environment.

When retraining, it is often unnecessary to retrain the entire model from scratch; training it incrementally, batch after batch, is sufficient. This involves using a scheduler to run model training jobs on a cadence, for which NimbleBox.ai is the perfect platform.
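As a sketch of what batch-wise retraining can look like, scikit-learn estimators that support partial_fit can be updated incrementally inside a scheduled job. fetch_latest_batch is a hypothetical placeholder for however you pull new labeled data:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Minimal sketch: update a model incrementally on fresh batches instead of
# retraining from scratch. A scheduler would invoke retraining_job on a cadence.
model = SGDClassifier(loss="log_loss")

def retraining_job(fetch_latest_batch, classes=np.array([0, 1])):
    # fetch_latest_batch is a hypothetical callable returning (X_new, y_new)
    X_new, y_new = fetch_latest_batch()
    # classes is required on the first partial_fit call so the model knows
    # the full label set; passing it on later calls is harmless.
    model.partial_fit(X_new, y_new, classes=classes)
```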

Finally, if you have automated the drift detection process, it makes sense to trigger retraining automatically whenever the drift alarm goes off.
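Putting the earlier sketches together, an automated trigger could look roughly like this. feature_drifted and retraining_job are the sketches above; the data-access callables are hypothetical placeholders you would supply:

```python
# Minimal sketch: check recent production features for drift and kick off
# retraining when the alarm fires. In practice this would run on a schedule.
def monitor_and_retrain(load_reference_data, load_recent_data, fetch_latest_batch):
    X_ref = load_reference_data()   # features as seen at training time
    X_live = load_recent_data()     # recent production features

    drifted = [
        i for i in range(X_ref.shape[1])
        if feature_drifted(X_ref[:, i], X_live[:, i])
    ]
    if drifted:
        print(f"Drift detected in feature columns {drifted}; retraining.")
        retraining_job(fetch_latest_batch)
```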


What are the best practices for model training?

Here are the most important machine learning training best practices to keep in mind when retraining your model:

1. Always keep track of the problem you're trying to solve; sometimes the data changes, trends change, and the problem itself might no longer need solving. The last thing you want is months of effort going down the drain.

2. Data cleanliness is next to godliness; no matter which type of learning you choose to solve your business problem, data with less noise, misinformation, and disparity goes a long way.

3. Preprocessing is as essential as training and deploying; knowing your current accuracy level versus the accuracy you need to achieve lets you preprocess the data better.

4. Naming conventions and annotations are equally important; there will come a time when you need to run code checks periodically to ensure your model still predicts accurately. Code checks also let you keep better track of experiments.

5. Keep track of all experiments. Yes, experiments are incredible, and we pile them on like ice cream scoops, but beyond a certain stage you lose track of them (see the sketch after this list).

6. Constantly monitor your model for model drift, and retrain before it's too late.
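On the experiment-tracking point (item 5), even a lightweight append-only log beats losing runs. Here is a minimal sketch; dedicated tools such as MLflow or Weights & Biases do this far more robustly, and the file name below is illustrative:

```python
import json
import time

# Minimal sketch of lightweight experiment tracking: append each run's
# parameters and metrics to a JSON-lines file so no experiment is lost.
def log_experiment(params: dict, metrics: dict, path: str = "experiments.jsonl"):
    entry = {"timestamp": time.time(), "params": params, "metrics": metrics}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Usage:
# log_experiment({"model": "random_forest", "n_estimators": 100},
#                {"val_accuracy": 0.91})
```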


Conclusion:

We went over how a standard ML workflow operates in our previous articles and completed the picture with retraining in this one, covering data collection, preprocessing, the actual training, and retraining across the series. The upcoming article will cover deployment, performance metrics, and MLOps vs. DevOps.

Written By

Thinesh Sridhar

Technical Content Writer
