Errors

From Alexa not understanding what you are saying to algorithms classifying cats as avocados, machine learning systems are far from perfect. But as the scale of AI/ML grows, being wrong has serious consequences. This week we examine those consequences.

But first!

This week we ask our readers to take a quick survey about whether there are any topics you would like to see covered or things we could do better. Thank you very much!
Take the survey

How to be wrong

How to be wrong is a difficult question and a running debate among data people. When making a prediction, some people like to give a single estimate, while others use a range. Some like to measure error with the “mean absolute error,” others prefer the “root mean squared error.” When we classify things, we have metrics like F1, precision, recall, ROC AUC, and so on. People go to great lengths to be wrong.
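To make this concrete, here is a minimal sketch (using scikit-learn and made-up numbers, not any real dataset) of how the same predictions score differently under MAE versus RMSE, and how precision, recall, and F1 each summarize a different kind of classification mistake:

```python
# Illustrative comparison of error metrics on the same (made-up) predictions.
import numpy as np
from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             precision_score, recall_score, f1_score)

# Regression: MAE treats every miss equally, RMSE punishes the one big miss harder.
y_true = np.array([10.0, 12.0, 11.0, 50.0])
y_pred = np.array([11.0, 11.0, 12.0, 20.0])   # last prediction is badly off
print("MAE: ", mean_absolute_error(y_true, y_pred))
print("RMSE:", np.sqrt(mean_squared_error(y_true, y_pred)))

# Classification: precision, recall, and F1 summarize different kinds of mistakes.
labels = np.array([1, 0, 1, 1, 0, 1])
preds  = np.array([1, 0, 0, 1, 1, 1])
print("precision:", precision_score(labels, preds))
print("recall:   ", recall_score(labels, preds))
print("F1:       ", f1_score(labels, preds))
```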

But, depending on what you are doing, how you choose to be wrong matters a great deal. If you are building a test for cancer, you would probably want to calibrate your algorithm to produce more false positives (predicting cancer when there is none) than false negatives (predicting no cancer when there is), because in the former case you can get a second opinion or another test, while in the latter you go on living your life normally while a tumor grows…
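A rough sketch of what that calibration looks like in practice: with synthetic scores standing in for a real cancer model, lowering the decision threshold trades missed cancers (false negatives) for extra follow-up tests (false positives).

```python
# Synthetic sketch: lowering the decision threshold trades false negatives
# for false positives. Scores and prevalence are made-up, not a real model.
import numpy as np

rng = np.random.default_rng(0)
has_cancer = rng.random(1000) < 0.05                      # ~5% true cases
scores = np.where(has_cancer,
                  rng.normal(0.7, 0.2, 1000),             # sick patients tend to score higher
                  rng.normal(0.3, 0.2, 1000))             # healthy patients tend to score lower

for threshold in (0.5, 0.3):                              # a lower threshold flags more people
    flagged = scores >= threshold
    false_negatives = np.sum(~flagged & has_cancer)       # missed cancers (the costly error)
    false_positives = np.sum(flagged & ~has_cancer)       # healthy people sent for retests
    print(f"threshold={threshold}: FN={false_negatives}, FP={false_positives}")
```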

Aside from the cancer test

With how AI/ML is utilized now, if Netflix recommends a bad show to watch, or Google gives you the wrong amount of time it takes to get to the airport, typically the outcomes are not that bad. Researchers are aware of this and deploy their algorithms into the wild and collect feedback to make them better. Building AI/ML algorithms is a constant feedback loop.
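One hedged sketch of that loop, using scikit-learn's partial_fit on synthetic data to stand in for folding real-world feedback back into a deployed model:

```python
# Sketch of a deploy-and-learn loop: each round, the model sees new (synthetic)
# traffic, gets feedback labels, and is updated incrementally with partial_fit.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
model = SGDClassifier()
classes = np.array([0, 1])

for round_num in range(5):                         # each iteration = one deployment cycle
    X = rng.normal(size=(200, 3))                  # behavior observed in the wild
    y = (X[:, 0] > 0).astype(int)                  # feedback collected from users
    model.partial_fit(X, y, classes=classes)       # fold feedback back into the model
    print(f"round {round_num}: accuracy on this batch = {model.score(X, y):.2f}")
```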

The end of the loop

As AI spreads to more industries and applications, the cost of being wrong increases dramatically. Companies will have to change how they approach AI/ML problems when the stakes are much higher. The unfortunate event in which an Uber autonomous car killed someone in Arizona is a clear wake-up call for everyone, especially if we dive into the sequence of decisions behind it.

If we analyze the sequence of errors Uber made, red flags show up everywhere. Start with why the car plowed into the individual. The first explanation could be purely mechanical: the LIDAR sensor on top of the car could have missed the pedestrian in the road, causing the car to plow into them. The company that makes the LIDAR denies this.

If the LIDAR was working, then we have a bigger problem: it could turn out that Uber’s algorithms are not as accurate as the company thought. The New York Times pointed out that Uber was missing its own targets for how accurate it wanted the system to be.

There is also a rumor that Uber had turned the LIDAR off. Executives could have mistakenly concluded that their system was accurate enough to do without it. Since LIDAR is so expensive, car manufacturers are looking for ways to avoid the technology, and unless the cost comes down, mass-produced autonomous cars may not feature LIDAR at all.

Either Uber overestimated the accuracy of its system, the hardware failed, or there was clear negligence, but either way this error will likely cost Uber billions of dollars. By going through the sequence of events, we begin to realize how complex it is to make these decisions and to figure out who was wrong, and where.

After this, Uber paused its autonomous driving efforts until an investigation by the NTSB is completed. It is unclear when Uber will be allowed to resume testing autonomous vehicles, and for now it waits on the sidelines while other companies continue to test their systems. With this new development, Google has an interesting decision to make about when to launch its fleet of autonomous cars. Maybe it believes its cars are already safe enough to operate on the road. Or, since the autonomous-driving opportunity is so large, maybe it pushes back its timeline to make sure everything is thoroughly tested.

2nd and 3rd order errors

Another type of error that more companies will have to examine is improperly estimating the impact AI will have. We got a taste of that during the 2007–08 financial crisis. What is the impact of misclassifying whether risky borrowers will pay back a mortgage? It turns out to be a complete meltdown of the financial system. While the models used to score borrowers looked good on paper, they did not take into account that these investments were insured and spread across the world.
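As a rough illustration (with made-up numbers, not an estimate of the actual crisis), here is how a per-loan default model that looks fine in isolation understates portfolio risk once defaults become correlated through a shared market factor:

```python
# Made-up illustration of a second-order error: per-loan default risk looks
# manageable in isolation, but a shared market factor correlates defaults
# and fattens the tail of portfolio losses. Not calibrated to 2008 data.
import numpy as np

rng = np.random.default_rng(1)
n_loans, base_default_rate, trials = 10_000, 0.05, 2_000

# Independent defaults: realized default rates cluster tightly around 5%.
independent = rng.binomial(n_loans, base_default_rate, size=trials) / n_loans

# Correlated defaults: a bad "housing market" draw raises everyone's risk at once.
market_shock = np.maximum(rng.normal(0, 1, size=trials), 0)
shifted_rates = np.clip(base_default_rate + 0.05 * market_shock, 0, 1)
correlated = rng.binomial(n_loans, shifted_rates) / n_loans

print("worst default rate, independent model:", independent.max())
print("worst default rate, correlated model: ", correlated.max())
```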

Job automation is another tricky area. McKinsey estimates that “39 million to 73 million jobs could be destroyed, but about 20 million of those displaced workers can be shifted fairly easily into similar occupations.” But if that estimate is wrong, and those 20 million workers don’t find jobs, or 100 million jobs are automated instead, we have a problem.

As with all technological innovations, we have no idea what the outcome will be. But the increased use of AI/ML should make people think hard about the consequences of making mistakes. Maybe we need new error metrics, like the cost of being wrong, or the probability that we don’t even know we are wrong.
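One possible shape such a metric could take, sketched with hypothetical cost figures: weight each kind of mistake by what it actually costs, rather than counting all errors equally.

```python
# Hypothetical "cost of being wrong" metric: weight each error type by an
# assumed real-world cost instead of counting all mistakes equally.
import numpy as np

def cost_of_being_wrong(y_true, y_pred, cost_false_pos, cost_false_neg):
    """Average per-decision cost under asymmetric error costs."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    false_pos = np.sum((y_pred == 1) & (y_true == 0))
    false_neg = np.sum((y_pred == 0) & (y_true == 1))
    return (false_pos * cost_false_pos + false_neg * cost_false_neg) / len(y_true)

# Example: a missed detection is assumed to be 100x costlier than a false alarm.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
print(cost_of_being_wrong(y_true, y_pred, cost_false_pos=1.0, cost_false_neg=100.0))
```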

A Russian officer already saved us from nuclear war by identifying a radar error. Let’s hope machines don’t put us in that situation again.
