February 17, 2022

# 7 Reasons Dataset Management Tools Are Critical to MLOps

Data lies at the heart of any ML system, and dataset management tools should be at the core of any well-functioning MLOps workflow.

Daniel McNeela

Murphy’s Law, which says that “anything that can go wrong, will go wrong,” certainly applies to the practice of machine learning, given the inherent complexity of training and deploying a model. That complexity, when not controlled, can lead to incorrect predictions and unhappy customers.

Software engineers, too, understand the complexity that creating code entails. Their mechanisms for coping with it—their tools and best practices for maintaining clean, scalable, and reliable programs—all fall loosely under the umbrella of DevOps. Likewise, machine learning engineers have their own suite of techniques and applications for managing complexity, a subfield of ML engineering called ML operations, or MLOps.

While some MLOps tools and best practices are the same as those in the DevOps category, or incorporate similar concepts, ML engineering adds another layer of complexity. 

At the heart of any ML system is data, and dataset management tools sit at the core of any well-functioning MLOps workflow. Here are the primary benefits of incorporating MLOps into your deployment pipeline, with an eye toward dataset management tools.

1. Track ML Model Performance and Hyperparameter Settings

MLOps tools such as MLflow let you record model configurations and hyperparameter settings as different versions of a model are trained. This allows model training to become a repeatable process that can be easily managed, version-controlled, and monitored. That said, wouldn’t it be helpful to understand how shifts in your data distribution affect model hyperparameters and performance metrics? 
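Before getting to that, here is a minimal sketch of the kind of run tracking MLflow provides; the model choice, hyperparameter values, and dataset-version tag below are illustrative placeholders, not a prescribed setup.

```python
# A minimal sketch of experiment tracking with MLflow; the hyperparameters and
# the "dataset_version" tag are hypothetical values used for illustration.
import mlflow
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def train_and_log(X_train, y_train, X_val, y_val, n_estimators=100, max_depth=8):
    with mlflow.start_run():
        # Record the hyperparameters and the dataset version used for this run
        mlflow.log_params({"n_estimators": n_estimators, "max_depth": max_depth})
        mlflow.set_tag("dataset_version", "v1.2")  # hypothetical version label

        model = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth)
        model.fit(X_train, y_train)

        # Record validation performance so runs can be compared later
        mlflow.log_metric("val_f1", f1_score(y_val, model.predict(X_val)))
        return model
```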

Dataset management tools close this gap, letting you record how model performance and settings change when samples are added or removed, or when you slice into a particular segment of a dataset. They can track accuracy, precision, recall, F1 score, and other advanced metrics; log hyperparameter settings; provide version control; and even incorporate advanced visualizations such as confusion matrices.
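As an illustrative sketch of slice-level tracking, assuming a binary-classification DataFrame with hypothetical `slice`, `label`, and `pred` columns, per-segment precision, recall, and F1 can be computed like this:

```python
# A minimal sketch of per-slice metric tracking; the column names ("slice",
# "label", "pred") are hypothetical and a binary task is assumed.
import pandas as pd
from sklearn.metrics import precision_recall_fscore_support

def metrics_by_slice(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for slice_name, group in df.groupby("slice"):
        precision, recall, f1, _ = precision_recall_fscore_support(
            group["label"], group["pred"], average="binary", zero_division=0
        )
        rows.append({"slice": slice_name, "precision": precision,
                     "recall": recall, "f1": f1, "n": len(group)})
    return pd.DataFrame(rows)
```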

2. Improve Annotation Quality

One of the most challenging parts of making ML successful is curating labeled data to train a supervised model. Unfortunately, most data is unlabeled, so ML practitioners are left with two options: train an unsupervised model, or label their data, either algorithmically or via human intervention.

Unfortunately, the accuracy of even the best unsupervised models is usually not on a par with what you can expect from supervised learning, so many companies opt to have their data labeled. But when generating synthetic predictions or using outsourced human annotation, label quality can be a big concern. Imagine a binary classification dataset where 50% of the labels are incorrect. Good luck trying to get a model to learn anything useful! 

Because quality control in annotation is such a problem, dataset management tools are built to spot potentially fuzzy examples or low-confidence labels at the tail ends of the data distribution. Low-confidence labels can show up in one of two ways, although each has the same originating cause.

In the case of manual annotation, a low-confidence label might be applied to a tricky or ambiguous example for which two human annotators disagree on what the correct label should be. For example, should an image of a sphinx be labeled as a cat, a lion, or a bird? Or what label should be given to a blurry image that can’t be easily interpreted?

Low-confidence labels can also come from model predictions themselves. For example, most image classification models output probabilities corresponding to classes. If a model is given a frame from a self-driving car’s video feed and predicts the presence of a stop sign with probability 0.52 and a yield sign with probability 0.48, this would likely warrant investigation and data relabeling, followed by model retraining.

With a dataset management tool, these problematic samples can be identified and then sent for further validation by a human annotator. You can also automate, via APIs, traditionally cumbersome processes such as importing labels or metadata.
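As a minimal sketch of the model-confidence case, assuming an array of per-class probabilities is available, flagging samples whose top prediction falls below an illustrative threshold might look like this:

```python
# A minimal sketch of surfacing low-confidence predictions for human review.
# The 0.6 threshold is an illustrative choice, not a recommended default.
import numpy as np

def flag_for_review(probs: np.ndarray, threshold: float = 0.6) -> np.ndarray:
    """Return indices of samples whose top class probability falls below the
    threshold, i.e. the samples the model is least sure about (e.g. the
    0.52 stop sign vs. 0.48 yield sign case above)."""
    top_prob = probs.max(axis=1)  # confidence of the predicted class
    return np.where(top_prob < threshold)[0]

# Example: three samples, two classes; only the first sample is flagged.
probs = np.array([[0.52, 0.48], [0.95, 0.05], [0.10, 0.90]])
print(flag_for_review(probs))  # -> [0]
```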

3. Reduce Labeling Spend

The cost of labeling data can be substantial, especially when working with datasets at the scale needed to train deep learning models efficiently, since deep learning model performance is often strongly correlated with the amount of labeled data used in training. By identifying key, representative samples across different parts of the data distribution, dataset management tools can quickly surface the most important data points to send to annotators.

For example, say you have a dataset in which 50% of the images are of cats and 50% are of dogs, and say also that budgetary constraints mean that you can only afford to pay for manual annotation of 50% of all examples. In that case, you wouldn’t want annotators to label only the dog images or only the cat images; rather, you’d want an even split in which 50% of all dog images and 50% of all cat images are manually labeled. Likewise, within the dog images, you wouldn’t want your annotators to label only golden retrievers. You’d want to segment the data into subclasses such that the annotation process covers samples of each breed: chihuahua, Rottweiler, Dalmatian, bulldog, corgi, etc. In this way, the model will be able to generalize much better to unseen data.

Dataset management tools can help to achieve this even class and cluster balance in choosing samples for manual annotation in order to get the most bang for your buck in terms of downstream model performance. In effect, they can do this while ensuring that human labeling work is focused on edge cases and hard examples, not repeated needlessly on samples with high overlap or on samples where the model is already performing well enough. The remaining labels can then be predicted algorithmically, putting a huge dent in overall labeling spend.
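A rough sketch of how such balanced selection could be done by hand, assuming a hypothetical `subclass` column (e.g., breed) and a 50% labeling budget:

```python
# A minimal sketch of choosing a stratified subset for manual annotation under
# a fixed budget; the "subclass" column and 50% budget are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split

def select_for_annotation(df: pd.DataFrame, budget_fraction: float = 0.5) -> pd.DataFrame:
    # Stratify on the subclass column so each breed is represented in
    # proportion to its frequency, rather than exhausting the budget on a
    # single easy-to-find group (e.g. only golden retrievers).
    to_label, _ = train_test_split(
        df, train_size=budget_fraction, stratify=df["subclass"], random_state=0
    )
    return to_label
```

The remaining, unselected rows would then be candidates for algorithmic labeling rather than manual annotation.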

4. Increase Cross-Team Collaboration

It’s no secret that nuance can get lost in translation between teams. The problem is even more pronounced in the ML engineering world, where slight differences in data quality, numerical precision, or statistical inconsistencies can propagate into huge downstream errors. For example, improperly normalized data or numerical precision issues can cause losses to explode or gradients to vanish. Good dataset management tools provide functionality by which engineers, labelers, and ops specialists can easily track, share, and discuss data as well as model performance issues.

For example, collaborative data labeling ensures agreement between teams as to what a model should be predicting and eliminates confusion down the road. Having data centrally organized also gives you a common pool for different teams to pull from and minimizes the number of interlocking components in an already complex software system.

5. Integrate with (and Deploy Seamlessly to) the Cloud

Much machine learning (both training and inference) is performed on cloud servers, where compute resources can be scaled on demand, but the cloud adds another layer of complexity when it comes to handling your data. The best dataset management tools can ingest data natively from cloud databases or data stores and can propagate changes back to those same services.

This increases developer efficiency and eliminates the need for expensive middle steps that involve transferring large datasets back and forth across networks.
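As a minimal sketch of this kind of native cloud access, assuming the s3fs package is installed and using a hypothetical bucket path, pandas can read a dataset straight from object storage without an intermediate download step:

```python
# A minimal sketch of reading a dataset directly from cloud object storage;
# the bucket name and path are hypothetical, and s3fs is assumed to be
# installed so pandas can resolve the s3:// URL.
import pandas as pd

df = pd.read_parquet("s3://example-bucket/datasets/training/v1.parquet")
print(df.shape)
```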

6. Prevent (or Adapt to) Model Drift

Model drift is one of the most prominent issues in ML deployments. After a model is trained and deployed, its predictive performance can suffer as new data comes in and the data distribution begins to shift. This new data usually comes from interactions with customers and product users. 

Adapting to model drift requires periodic retraining of the model; however, the problem of identifying when and how often to retrain can be difficult to solve. 

Remember, retraining is not a cost-free process. Especially for large models, it can require huge expenditures in compute, developer, and other resources. For example, developers might be needed to provision expensive new cloud servers, manage Kubernetes clusters and training deployments, and ensure that the retrained model is integrated successfully back into production. Dataset management tools can help detect model performance degradation due to shifts in the data distribution and can help developers identify the optimal time to retrain a model in order to maximize performance while minimizing cost.
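A minimal sketch of one way to detect such a shift, using a two-sample Kolmogorov–Smirnov test on a single numeric feature; the synthetic data and the 0.05 significance cutoff are illustrative:

```python
# A minimal sketch of flagging distribution shift on one numeric feature with
# a two-sample Kolmogorov-Smirnov test; the 0.05 cutoff is an illustrative choice.
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Compare the feature's distribution at training time (reference)
    to the distribution observed in production (live)."""
    statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha

# Example: production data shifted by +1.0 relative to training data.
rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, size=1000)
prod_feature = rng.normal(1.0, 1.0, size=1000)
print(feature_has_drifted(train_feature, prod_feature))  # -> True
```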

7. Get Dataset Management Right

Controlling all the variables in an ML model deployment is difficult, and the dataset used to train the model is a particularly challenging variable. Dataset management is a commonly overlooked aspect of MLOps, but getting it right is vital to architecting a robust and scalable machine learning deployment that doesn’t burn a hole in a business’s budget.

The best dataset management tools help track model performance, improve annotation quality, achieve big reductions in labeling spend, increase cross-team collaboration, interface seamlessly with the cloud, and prevent or adapt to model drift. Integrating a dataset management tool into your MLOps workflow can revitalize your data warehousing and model deployment processes.
