ROML, or “return on machine learning.” This week we coin an acronym, investigate how to calculate ROI on ML, and create our own handy framework.
What value does AI/ML provide?
Recently I have been thinking about the actual value that AI/ML provides, specifically how people calculate ROI when deploying AI/ML. A recent UBS report highlighted a Gartner survey showing that corporations are using a wide array of estimates to calculate ROI on AI/ML.
The same report mentioned that today’s AI ecosystems suffer from “A lack of ROI frameworks to help managers interpret the risk and rewards of their investments in AI technologies.” So I thought to myself: why not put together an ROI framework to help managers interpret the risk and rewards of their investments in AI technologies?
The risks are a little easier to sort out, so let’s start there.
The biggest risk I can think of when attempting any AI/ML project is deploying whatever system you build and getting worse results. For example, you develop an algorithm to target who should receive more emails, but when the algorithm starts sending more emails to those people, engagement drops and unsubscribe rates increase. A way around this problem is to run a test: serve your new algorithm to a small group of people and compare their results against a group that has not been served the algorithm. This way you get some sort of reassurance before going live.
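The holdout test described above can be sketched in a few lines. All of the numbers below (group sizes, unsubscribe counts, and the rollout threshold) are invented for illustration; a real test would also want a proper significance test rather than a fixed cutoff.

```python
# Hypothetical pre-rollout sanity check: compare a treatment group served
# by the new email-targeting model against a control group on the old
# cadence. Every figure here is made up for illustration.

def rate(events, population):
    """Share of the population that triggered an event (e.g. unsubscribed)."""
    return events / population

# Treatment: 5,000 users got model-driven email volume
treatment_unsubs = rate(events=180, population=5_000)   # 3.6% unsubscribed
# Control: 5,000 users kept the existing email cadence
control_unsubs = rate(events=150, population=5_000)     # 3.0% unsubscribed

# A simple go/no-go gate (arbitrary threshold): only roll out if
# unsubscribes rose by no more than half a percentage point over control.
safe_to_deploy = treatment_unsubs - control_unsubs <= 0.005

print(f"treatment: {treatment_unsubs:.1%}, control: {control_unsubs:.1%}")
print("roll out" if safe_to_deploy else "hold back and investigate")
```

In this made-up run the treatment group unsubscribes 0.6 points more than control, so the gate says to hold back, which is exactly the kind of bad surprise you want to catch before going live.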
Another big risk is the possibility of your data changing, specifically the properties of the distribution your data comes from. In ML parlance, if the data you use to make predictions changes, this is called covariate shift, while if the thing you are trying to predict changes, this is called concept shift. Building one model that tries to predict trends in fashion is difficult because fashion is always changing, and thus the data is always changing. As part of the risk assessment, you need to estimate how much you expect the properties of your data to change over time.
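One crude way to watch for covariate shift is to compare a feature’s live distribution against what the model was trained on. The sketch below just measures how far the live mean has drifted in training standard deviations; the sample values and the alert threshold are invented, and a production system would use a proper two-sample test instead.

```python
import statistics

def drift_score(train_sample, live_sample):
    """How many training standard deviations the live mean has moved
    from the training mean. A crude covariate-shift signal; real
    monitoring would use a two-sample test (e.g. Kolmogorov-Smirnov)."""
    mu = statistics.mean(train_sample)
    sigma = statistics.stdev(train_sample)
    return abs(statistics.mean(live_sample) - mu) / sigma

train = [10, 12, 11, 13, 12, 11, 10, 12]   # feature values at training time
live = [15, 16, 14, 17, 15, 16, 15, 14]    # same feature in production

if drift_score(train, live) > 2:           # arbitrary alert threshold
    print("possible covariate shift -- consider retraining")
```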
Finally, being able to scale whatever AI/ML you build is a huge risk. Whoever originally creates the model is probably using a sample dataset on a standard laptop, but once you deploy to production, the code may not be optimized to run on the entire population. Models should always be built with an eye toward how the solution will ultimately scale, so that when the time comes to flip the switch and apply it to millions of users, everything works.
To review, my risk framework is: estimate changes in your data, make sure it scales, then test, test, test.
Ok, on to rewards. For rewards, we are introducing a concept called “ROML.”
Just for clarification, we are pronouncing it Row-mIll, as in “We rowed to the mill.” I was originally considering Roy-Mill, as in “Roy jet-skied to the mill,” but then it would have been ROIML and I don’t really like how that looks aesthetically. It also doesn’t seem fair that Roy got to jet-ski while I had to put in the labor to row, but that is life.
Anyways, the formula for ROML is as follows.
ROML = Expected Machine Learning Value / (Development Costs + Compute Time + Maintenance Time)
ROML is the value you get from your AI/ML solution divided by the cost to develop the model, the cost of the hardware that keeps the model up and running, and the cost of any additional labor required to keep the model intact. For example, if it cost us $250k to develop, build, and continuously host the model, and we saw a $1mn reduction in cost, our ROML would be 4x.
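The formula and the $250k example above can be written as a tiny calculator. The function name and the split of the $250k across the three cost buckets are my own illustration, not anything from the text.

```python
# A minimal ROML calculator mirroring the formula above.

def roml(expected_value, development, compute, maintenance):
    """Expected ML value divided by the total cost to build and run the model."""
    return expected_value / (development + compute + maintenance)

# The $250k total / $1mn value example, with an invented split
# across the three cost buckets: 1,000,000 / 250,000 = 4x
ratio = roml(expected_value=1_000_000,
             development=150_000, compute=60_000, maintenance=40_000)
print(f"ROML: {ratio:.0f}x")
```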
Let’s look at each of these inputs more closely.
Expected Machine Learning Value
Sounds fancy, but really it’s just the savings, cost reduction, or incremental revenue you expect to get by putting your model into production. This is probably the hardest input to estimate, because who knows what you’ll get? You could use the results from the risk-reduction experiment you ran as a benchmark: if you got $1mn in incremental revenue by applying your model to 5% of the population, you could extrapolate to the larger population.
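The extrapolation in that last sentence is simple arithmetic, sketched below. Note that scaling linearly from a 5% pilot is an optimistic assumption; effects often shrink when applied to the whole population.

```python
# Extrapolating the pilot result from the text: $1mn incremental revenue
# on 5% of the population, scaled linearly to everyone.

pilot_revenue = 1_000_000   # incremental revenue observed in the pilot
pilot_share = 0.05          # fraction of the population in the pilot

expected_ml_value = pilot_revenue / pilot_share
print(f"extrapolated value: ${expected_ml_value:,.0f}")   # $20,000,000
```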
Next, you need to estimate the development costs of the model. Is this something an intern could build, or do you need data scientists, a developer, and a database engineer? Not every situation needs the latest and greatest deep learning algorithm, and choosing how complicated your AI/ML solution will be has a heavy influence on whether it will return a positive ROML.
Third, you will have to estimate the cost of processing all of the data and running the machine learning model. If your model’s hardware is going to eat a significant amount of your expected machine learning value, then you may want to reconsider the project. One beauty of doing things in a cloud environment is that you don’t have to purchase all of the hardware ahead of time; you can rent it instead. If you underestimate your hardware cost and get a lower ROML, at least you haven’t bought all of those servers.
Finally, maintenance time is any time you put into the model when things go wrong. And things will go wrong: databases will break, code will serve weird errors, so it’s always important to factor this in. Maintenance time also includes monitoring your model to see whether it is maintaining its accuracy. At some point you may need to retrain your model due to a whole host of factors (including shift, a risk we identified earlier), and this can have consequences for your ROML.
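The monitoring part of maintenance can be as simple as tracking live accuracy and flagging the model once it stays below a floor. The threshold, window, and accuracy history below are all invented for illustration.

```python
# Sketch of an accuracy-monitoring check: flag the model for retraining
# once several consecutive readings fall below a floor, so one noisy
# week does not trigger a retrain. All numbers are hypothetical.

RETRAIN_FLOOR = 0.80   # retrain if live accuracy stays below 80%

def needs_retraining(recent_accuracy, floor=RETRAIN_FLOOR, window=3):
    """True if the last `window` accuracy readings are all below the floor."""
    tail = recent_accuracy[-window:]
    return len(tail) == window and all(a < floor for a in tail)

weekly_accuracy = [0.91, 0.89, 0.86, 0.79, 0.78, 0.76]   # slow decay
print(needs_retraining(weekly_accuracy))   # True: three weeks below floor
```

Retraining triggered this way feeds straight back into the ROML denominator, since each retrain is maintenance time you should have budgeted for.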
Even better, we can link the risks to the rewards to the costs.
That’s it! If you have any questions, concerns, or comments, just shoot me a note, and if you want to make a better ROML visualization, that would help too.
Otherwise, best of luck rowing to whatever mill you seek out.
Subscribe at https://www.andrewvanaken.com/newsletter/
Copyright © 2018 Ogilvy, All rights reserved.