
The challenges of Machine Learning models

Machine Learning models are becoming increasingly popular as data science teams find new ways to apply them across a variety of industries and use cases. From predicting outcomes such as how fast a wound will heal to training a model to read and extract text from documents, ML models can be applied to almost any use case, provided the right data is available.


Much of the heavy lifting when building ML models happens in the early stages: gathering and understanding the data, modeling it, and training the model. However, actually putting the model into production has its own unique set of challenges, and these challenges can often make or break your project.


Here are five of the most common challenges teams face when trying to put an ML model into production.




1. Training data doesn’t match production data


The data that the model is trained on should be representative of the data that you’re going to have in production. There are two scenarios where we see this problem arise. In the first, teams will often want to use only the “cleanest” data to get the most accurate model, but unless the underlying data management process has been fixed, that clean data just doesn’t represent the real world! Once the model hits production, surprise outliers, new lines of business, and changing data types can really throw off how well a model will perform.


In the second, the model becomes the victim of “scope creep”. When a model produces great results, the first instinct is to think “how else could I use this” – but asking a model to perform something it wasn’t trained to do is asking for trouble. A retrain to incorporate related scenarios can prevent pain down the road.
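One way to catch a training/production mismatch early is to monitor whether the feature values arriving in production still look like the training data. Below is a minimal sketch of such a check; the function name, the mean-shift heuristic, and the 0.25 threshold are illustrative assumptions, not a standard, and real teams often use richer statistical tests.

```python
import statistics

def drift_alert(train_values, prod_values, threshold=0.25):
    """Flag a feature whose production distribution has shifted away
    from the training distribution.

    Compares means, scaled by the training standard deviation.
    `threshold` is an assumed tolerance, not a universal constant.
    """
    train_mean = statistics.mean(train_values)
    train_std = statistics.stdev(train_values)
    prod_mean = statistics.mean(prod_values)
    if train_std == 0:
        return prod_mean != train_mean
    # A shift of more than `threshold` training standard deviations
    # suggests the model is now seeing data it was not trained on.
    return abs(prod_mean - train_mean) / train_std > threshold

# Hypothetical example: production values clearly shifted upward
train = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
prod = [14.0, 15.2, 13.8, 14.5, 14.9, 15.0]
print(drift_alert(train, prod))  # True: clear shift
```

A check like this, run per feature on a schedule, turns "surprise outliers and new lines of business" from a silent accuracy drop into an alert that can trigger a retrain.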


2. The model changes the existing workflow


One of the key decisions your team will need to make early in any AI/ML project is where the model will reside within the workflow or process. Ideally, the model works in a way that doesn't alter the existing user workflow. Oftentimes this topic is not fully thought out, and the model ends up changing an already-efficient process, adding extra steps that reduce overall productivity.


3. Where will the model be hosted?


Typically, there are three main options for where to host an ML model: your cloud, your vendor's cloud, or on-premises. Sometimes any of these options will work. However, if your organization has strict compliance requirements, an on-premises solution may be the only way to go. This may require additional hardware spend, which could delay your launch date.


Additionally, a public cloud option may be all you need, but costs can escalate quickly depending on how much data you process.


4. Issues integrating with existing systems


ML models constantly ingest data and feed their output into one or more systems. Depending on the complexity of the data, you may run into issues when integrating the model with multiple systems. Will the data come through an API, or will an FTP file transfer be suitable? If it needs to be integrated through an API, does your current system already have one, or does one need to be built? What happens when there are issues or changes to the system and data cannot be retrieved? These are common questions we run into when dealing with different systems, and they need to be addressed early in the process.
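The last question above — what happens when data cannot be retrieved — is worth deciding in code, not in an incident. Here is a minimal sketch of one common answer: retry transient failures with backoff, then fail loudly so operators are alerted instead of the model silently scoring stale data. The function names, retry count, and backoff schedule are illustrative assumptions, not a specific vendor API.

```python
import time

def fetch_with_retry(fetch, retries=3, backoff_seconds=1.0):
    """Call `fetch` (any function that pulls data from an upstream
    system) and retry transient failures with exponential backoff.
    """
    for attempt in range(retries):
        try:
            return fetch()
        except (ConnectionError, TimeoutError) as exc:
            if attempt == retries - 1:
                # Surface the failure so the pipeline alerts operators
                # instead of quietly running on missing data.
                raise RuntimeError("upstream system unavailable") from exc
            time.sleep(backoff_seconds * (2 ** attempt))

# Hypothetical upstream source that fails twice, then succeeds
calls = {"n": 0}
def flaky_source():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return {"records": 42}

print(fetch_with_retry(flaky_source, backoff_seconds=0.01))  # {'records': 42}
```

The same wrapper works whether the `fetch` callable hits an API, pulls an FTP file, or reads a queue, which keeps the failure policy consistent across all the systems the model touches.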




5. User adoption


This is an issue that comes up weeks or months after an ML model has been successfully launched. A technical team has worked for months to develop, train, and deploy a model into production, but the end users were never fully educated on what to expect from the model or how to use it properly within their systems. This leads to a lack of adoption and ultimately poor results, since the model is not used enough to deliver any real benefit or ROI.


It is important to communicate the benefits and timeline of AI/ML projects to the end users so they know what to expect and how they can benefit from using the model. These end users will also be critical for providing feedback that feeds into future retrains.

Together we can eliminate the challenges of your ML models.

For more information on ML model development, or if you have a project you would like to discuss, please contact us.
