Experiment management in the context of machine learning is the process of tracking experiment metadata (things like code versions, data versions, hyperparameters, environment, and metrics), organizing it in a meaningful way, and making it available to access and collaborate on within your organization.
In the next sections, you will see exactly what that means with examples and implementations.
Tracking ML experiments
What I mean by tracking is collecting all the metainformation about your machine learning experiments that is needed to:
share your results and insights with the team (and you in the future),
reproduce results of the machine learning experiments,
keep results that take a long time to generate safe.
Let’s go through all the pieces of an experiment that I believe should be recorded, one by one.
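The pieces discussed below can be summarized as a simple record per experiment. Here is a minimal sketch of what such a record might contain (the field names and values are my own, not from any particular tool):

```python
import json
from dataclasses import asdict, dataclass, field


@dataclass
class ExperimentRecord:
    """Minimal metadata worth saving for every experiment run."""
    code_version: str        # e.g. a git commit SHA
    data_version: str        # e.g. a hash of the dataset
    hyperparameters: dict
    environment: str         # e.g. path to environment.yaml or a Dockerfile
    metrics: dict = field(default_factory=dict)

    def save(self, path):
        """Persist the record as JSON so it can be diffed and shared."""
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)


record = ExperimentRecord(
    code_version="3f2a9c1",
    data_version="md5:9b1d0a7e",
    hyperparameters={"lr": 0.1, "n_trees": 100},
    environment="environment.yaml",
)
record.metrics["auc"] = 0.92
record.save("experiment_001.json")
```

With a record like this saved next to every run, you can always answer "what exactly produced this result?" later.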
Code version control for data science
Okay, in 2019 I think pretty much everyone working with code knows about version control. Failing to keep track of your code is a big, but obvious and easy-to-fix, problem. Should we just proceed to the next section? Not so fast.
Problem 1: Jupyter notebook version control
A large part of data science development happens in Jupyter notebooks, which are more than just code. Fortunately, there are tools that help with notebook versioning and diffing, such as nbdime and jupytext.
Data science people don't always follow software development best practices. You can always find someone (me included) who will ask:
“But how about tracking code in-between commits? What if someone runs an experiment without committing the code?”
One option is to explicitly forbid running code on dirty commits. Another option is to give users an additional safety net and snapshot code whenever they run an experiment. Each one has its pros and cons and it is up to you to decide.
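A safety net like that can be as simple as recording the current commit and whether the working tree is dirty before each run. A minimal sketch, assuming git is available on the machine (the function name is mine):

```python
import subprocess


def get_git_state(repo_dir="."):
    """Return (commit_sha, is_dirty) for repo_dir, or (None, False) outside a repo."""
    try:
        sha = subprocess.check_output(
            ["git", "-C", repo_dir, "rev-parse", "HEAD"],
            stderr=subprocess.DEVNULL,
        ).decode().strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return None, False
    # Any output from `status --porcelain` means uncommitted changes
    status = subprocess.check_output(
        ["git", "-C", repo_dir, "status", "--porcelain"]
    ).decode()
    return sha, bool(status.strip())
```

If `is_dirty` comes back True, you can either refuse to start the run or save the output of `git diff` alongside the experiment as a code snapshot.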
Tracking hyperparameters
Every machine learning model or pipeline has hyperparameters: the learning rate, the number of trees, or a missing-value imputation method, for example. Failing to keep track of hyperparameters can result in weeks of wasted time looking for them or retraining models. The good thing is that keeping track of hyperparameters can be really simple. Let's start with the way people tend to define them, and then we'll proceed to hyperparameter tracking:
Typically, this is a .yaml config file that contains all the information your script needs to run. For example:
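A config for a simple training script could look like this (all names and values here are made up for illustration):

```yaml
project: example_classification_project
data:
  train_path: data/train.csv
  valid_path: data/valid.csv
model:
  learning_rate: 0.1
  n_estimators: 100
  max_depth: 5
training:
  seed: 1234
  early_stopping_rounds: 50
```

Everything the run depends on lives in one file, which you can log as-is with every experiment.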
We all hardcode magic numbers sometimes, but it is not a great idea, especially if someone will need to take over your work. Ok, so I do like .yaml configs and passing arguments from the command line (options 1 and 2), but anything other than magic numbers is fine. What is important is that you log those parameters for every experiment.
If you decide to pass all parameters as the script arguments make sure to log them somewhere. It is easy to forget, so using an experiment management tool that does this automatically can save you here.
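Doing it by hand is only a few lines. A minimal sketch, assuming an argparse-based script (the parameter names and file paths are mine):

```python
import argparse
import json


def parse_and_log(argv=None, log_path="hyperparameters.json"):
    """Parse CLI hyperparameters and persist them before the run starts."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--lr", type=float, default=0.1)
    parser.add_argument("--n-trees", type=int, default=100)
    args = parser.parse_args(argv)
    # Dump every parameter of this run, including defaults
    with open(log_path, "w") as f:
        json.dump(vars(args), f, indent=2)
    return args


args = parse_and_log(["--lr", "0.01"])
```

The key point is that the dump happens unconditionally at the top of the script, so even a hastily launched run leaves a parameter trail behind.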
There is nothing as painful as having a perfect script on a perfect data version producing perfect metrics, only to discover that you don't remember which hyperparameters were passed as arguments. Note: a bonus of having your hyperparameters abstracted away entirely (options 1 and 2) is that you implicitly turn your training and evaluation scripts into an objective function that you can optimize automatically.
That means you can use readily available libraries and run hyperparameter optimization algorithms with virtually no additional work! If you are interested in the subject please check out my blog post series about hyperparameter optimization libraries in Python.
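As a sketch of what that means in practice: once everything a run needs is in `params`, the whole train-and-evaluate pipeline collapses into one function of `params` that any optimizer can call. The bodies below are dummies so the example is self-contained, and I use plain random search rather than any particular library:

```python
import random


def objective(params):
    """Train and evaluate with the given hyperparameters, return a score.

    A real version would invoke your training and evaluation scripts;
    here a dummy quadratic stands in, with its best score at lr == 0.1.
    """
    lr = params["lr"]
    return -(lr - 0.1) ** 2


random.seed(0)
best_params, best_score = None, float("-inf")
for _ in range(100):
    # Sample a candidate configuration and evaluate it
    params = {"lr": random.uniform(0.001, 1.0)}
    score = objective(params)
    if score > best_score:
        best_params, best_score = params, score

print(best_params)  # close to {'lr': 0.1}
```

Swapping the random sampler for a Bayesian optimization library is then a one-line change, since the objective interface stays the same.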
Tracking data versions
In real-life projects, data changes over time. Some typical situations include:
new images are added,
labels are improved,
mislabeled/wrong data is removed,
new data tables are discovered,
new features are engineered and processed,
validation and testing datasets change to reflect the production environment.
Whenever your data changes, the output of your analysis, report, or experiment will likely change even though the code and environment did not. That is why, to make sure you are comparing apples to apples, you need to keep track of your data versions.
Having almost everything versioned and getting different results can be extremely frustrating, and can mean a lot of time (and money) in wasted effort. The sad part is that you can do little about it afterward. So again, keep your experiment data versioned.
For the vast majority of use cases whenever new data comes in you can save it in a new location and log this location and a hash of the data. Even if the data is very large, for example when dealing with images, you can create a smaller metadata file with image paths and labels and track changes of that file.
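Computing and logging such a hash takes a few lines of standard-library Python (the paths in the comment are illustrative):

```python
import hashlib


def file_md5(path, chunk_size=1 << 20):
    """Return the MD5 hex digest of a file, read in chunks to handle large files."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()


# Log the location and hash of the data used in this experiment, e.g.:
# data_version = {"path": "data/train.csv", "md5": file_md5("data/train.csv")}
```

If the hash recorded with an old experiment matches today's file, you know you are comparing results on identical data.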
A wise man once told me:
“Storage is cheap, training a model for 2 weeks on an 8-GPU node is not.”
And if you think about it, logging this information doesn’t have to be rocket science.
You can calculate the hash yourself, use a simple data-versioning extension, or outsource hashing to a full-blown data versioning tool like DVC. Whichever option you decide is best for your project, please version your data.
Note: I know that 10x data scientists can read a data hash and know exactly what it is, but you may also want to log something a bit more readable for us mere mortals. For example, I wrote a simple function that lets you log a snapshot of your image directory to Neptune:
from neptunecontrib.versioning.data import log_image_dir_snapshots
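If you are not using Neptune, the same idea can be approximated in a few lines: walk the image directory and record a hash per file. This is a simplified, hypothetical stand-in for the function above, not its actual implementation:

```python
import hashlib
import os


def snapshot_dir(root):
    """Return {relative_path: md5} for every file under root.

    Filenames are sorted so two snapshots of the same directory diff cleanly.
    """
    snapshot = {}
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            snapshot[os.path.relpath(path, root)] = digest
    return snapshot
```

Saving this dictionary as JSON with each experiment gives you a human-readable answer to "which images, exactly, was this model trained on?"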
Tracking machine learning metrics
I have never found myself in a situation where I thought I had logged too many metrics for my experiment; have you? In a real-world project, the metrics you care about can change due to new discoveries or changing specifications, so logging more metrics can actually save you time and trouble in the future.
Either way, my suggestion is:
“Log metrics, log them all”
Typically, a metric is as simple as a single number, but I like to think of it as something a bit broader. To understand whether your model has improved, you may want to look at a chart, a confusion matrix, or the distribution of predictions. Those, in my view, are still metrics, because they help you measure the performance of your experiment.
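Even a confusion matrix boils down to a handful of numbers you can log alongside the scalar metrics. A dependency-free sketch (labels and predictions below are made up):

```python
from collections import Counter


def confusion_matrix(y_true, y_pred):
    """Return {(true_label, predicted_label): count} for paired label sequences."""
    return Counter(zip(y_true, y_pred))


y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1]
cm = confusion_matrix(y_true, y_pred)
# true positives, false negatives, false positives
print(cm[(1, 1)], cm[(1, 0)], cm[(0, 1)])  # prints: 2 1 1
```

Logged per experiment, these counts let you see not just whether the model improved, but which kinds of errors changed.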
Note: tracking metrics on both the training and validation datasets can help you assess the risk of the model not performing well in production. The smaller the gap, the lower the risk. A great resource is this Kaggle Days talk by Jean-François Puget.
Moreover, if you are working with data collected at different timestamps, you can assess model performance decay and suggest a proper retraining schedule: simply track metrics on different timeframes of your validation data and see how performance drops.
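A sketch of that idea, assuming each prediction comes with a time period; the records below are made up:

```python
from collections import defaultdict


def accuracy_by_period(records):
    """records: iterable of (period, y_true, y_pred); returns {period: accuracy}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for period, y_true, y_pred in records:
        hits[period] += int(y_true == y_pred)
        totals[period] += 1
    return {p: hits[p] / totals[p] for p in totals}


records = [
    ("2019-01", 1, 1), ("2019-01", 0, 0),
    ("2019-02", 1, 0), ("2019-02", 0, 0),
    ("2019-03", 1, 0), ("2019-03", 0, 1),
]
print(accuracy_by_period(records))
# {'2019-01': 1.0, '2019-02': 0.5, '2019-03': 0.0}
```

A monotonic drop like this across periods is the signal that the model is decaying and a retraining schedule is needed.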
Versioning data science environment
The majority of problems with environment versioning can be summarized by the infamous quote:
“I don’t understand, it worked on my machine.”
One approach that helps solve this issue can be called “environment as code” where the environment can be created by executing instructions (bash/yaml/docker) step-by-step. By embracing this approach you can switch from versioning the environment to versioning environment set-up code which we know how to do.
There are a few options that I know to be used in practice (this is by no means a full list of approaches).
Docker images are the preferred option, and there are a lot of resources on the subject. One that I particularly like is the “Learn Enough Docker to be useful” series by Jeff Hale.
In a nutshell, you define a Dockerfile with a set of instructions:
# Use miniconda3 as the base image
FROM continuumio/miniconda3

# Installation of jupyterlab
RUN pip install jupyterlab==0.35.6 && \
    pip install jupyterlab-server==0.2.0 && \
    conda install -c conda-forge nodejs

# Installation of Neptune and enabling the neptune extension
RUN pip install neptune-client && \
    pip install neptune-notebooks && \
    jupyter labextension install neptune-notebooks

# Setting up the Neptune API token as an env variable
ARG NEPTUNE_API_TOKEN
ENV NEPTUNE_API_TOKEN=$NEPTUNE_API_TOKEN

# Adding the current directory to the container
ADD . /mnt/workdir
WORKDIR /mnt/workdir
You build your environment from those instructions:
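Assuming the Dockerfile above sits in the current directory, building and entering the environment looks like this (the image tag is arbitrary):

```shell
docker build -t my-experiment-env .
docker run -it my-experiment-env
```

The resulting image is a frozen, shareable copy of the environment, so every experiment run inside it is reproducible down to the library versions.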
Conda environments are a simpler option, and in many cases they are enough to manage your environments with no problems. Conda doesn't give you as many options or guarantees as Docker does, but it can be enough for your use case.
The environment can be defined in a .yaml configuration file like this one:
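For illustration, a conda environment file could look like this (the environment name and pinned versions are made up):

```yaml
name: experiment-env
channels:
  - conda-forge
dependencies:
  - python=3.6
  - scikit-learn=0.21.2
  - pip
  - pip:
      - neptune-client
```

Recreating the environment from such a file is a single `conda env create -f environment.yaml`.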
What is pretty cool is that you can always dump the state of your environment to such config by running:
conda env export > environment.yaml
Simple and gets the job done.
You can always define all your bash instructions explicitly in a Makefile. For example:

setup:
	git clone git@github.com:neptune-ml/open-solution-mapping-challenge.git
	pip install -r requirements.txt
	curl -O https://www.kaggle.com/c/imagenet-object-localization-challenge/data/LOC_synset_mapping.txt

and set up the environment by running:

make setup
Those files are often difficult to read, and you give up a ton of additional features of conda and/or Docker, but it doesn't get much simpler than this. Now that you have your environment defined as code, make sure to log the environment file for every experiment.
Again, if you are using an experiment manager, it can snapshot your code and environment files whenever you create a new experiment, even if you forget to git commit them.
As much as I think tracking experimentation and ensuring the reproducibility of your work is important it is just a part of the puzzle. Once you have tracked hundreds of experiment runs you will quickly face new problems:
how to search through and visualize all of those experiments,
how to organize them into something that you and your colleagues can digest,
how to make this data shareable and accessible inside your team/organization?
This is where experiment management tools really come in handy. They let you:
visualize/compare experiment runs,
share experiment results and metadata (via an app and a programmatic query API).
With that, you and all the people on your team know exactly what is happening when it comes to model development. It makes it easy to track the progress, discuss problems, and discover new improvement ideas.
Working in creative iterations
Tools like that are a big help and a huge improvement over spreadsheets and notes. However, what I believe can take your machine learning projects to the next level is a focused experimentation methodology that I call creative iterations. I'd like to start with some pseudocode and explain it later:
time, budget, business_goal = business_specification()

creative_idea = initial_research(business_goal)

while time and budget and not business_goal:
    solution = develop(creative_idea)
    metrics = evaluate(solution, validation_data)
    if metrics > best_metrics:
        best_metrics = metrics
        best_solution = solution
    creative_idea = explore_results(best_solution)
In every project, there is a phase where the business_specification is created. It usually entails a timeframe, a budget, and a goal for the machine learning project. When I say goal, I mean a set of KPIs, business metrics, or, if you are super lucky, machine learning metrics. At this stage it is very important to manage business expectations, but that's a story for another day. If you are interested in those things, I suggest you take a look at some articles by Cassie Kozyrkov, for instance this one.

Assuming that you and your team know what the business goal is, you can do initial_research and cook up a baseline approach, a first creative_idea. Then you develop it and come up with a solution, which you need to evaluate to get your first set of metrics. Those, as mentioned before, don't have to be simple numbers (and often are not) but could be charts, reports, or user study results. Now you should study your solution and metrics, and explore_results.
It may be here where your project will end because:
your first solution is good enough to satisfy business needs,
you can reasonably expect that there is no way to reach business goals within the previously assumed time and budget,
you discover that there is a low-hanging fruit problem somewhere close and your team should focus their efforts there.
If none of the above apply, you list all the underperforming parts of your solution and figure out which ones could be improved and which creative_ideas can get you there. Once you have that list, you need to prioritize it based on expected goal improvements and budget. If you are wondering how you can estimate those improvements, the answer is simple: results exploration.
You have probably noticed that results exploration comes up a lot. That's because it is so important that it deserves its own section.
Model results exploration
This is an extremely important part of the process. You need to understand thoroughly where the current approach fails, how far you are, time- and budget-wise, from your goal, and what the risks are of using your approach in production. In reality, this part is far from easy, but mastering it is extremely valuable because:
it leads to business problem understanding,
it leads to focusing on the problems that matter and saves a lot of time and effort for the team and organization,
it leads to discovering new business insights and project ideas.
Some good resources I found on the subject are:
“Understanding and diagnosing your machine-learning models” PyData talk by Gael Varoquaux
“Creating correct and capable classifiers” PyData talk by Ian Ozsvald
Diving deeply into results exploration is a story for another day and another blog post, but the key takeaway is that investing your time in understanding your current solution can be extremely beneficial for your business.
In this article, I explained:
what experiment management is,
how organizing your model development process improves your workflow.
For me, adding experiment management tools to my “standard” software development best practices was an aha moment that made my machine learning projects more likely to succeed. I think if you give it a go, you will feel the same.