
The Perils of Black Box Machine Learning: Baseball, Movies, and Predicting Deaths in Game of Thrones

Posted by Jon Millis

22.04.2016 10:17 AM

Making predictions is fun. I was a huge baseball fan growing up. There was nothing quite like chatting with my dad and my friends, crunching basic statistics and watching games, reading scouting reports, and finally, expressing my opinion on what would happen (the Braves would win the World Series) and why things were happening (Manny Ramirez was on a hot streak because he was facing inexperienced left-handed pitchers). I was always right…unless I was wrong.*

One of the reasons business people, scientists and hobbyists like predictive modeling is that in many cases, it allows us to sharpen our predictions. There’s a reason Moneyball became a best-selling book: it was one of the first widely publicized examples of applying analytics to gain a competitive advantage, in this case by identifying the player statistics that translate most directly into winning baseball games. Predictive modeling was the engine that drove the Oakland A’s from a small-budget cellar-dweller to a perennial championship contender. By understanding the components of a valuable baseball player – not merely predicting his statistics – the A’s held on to a real advantage for years.

[Image: Oakland_As.jpg]

High five, Zito! You won 23 games for the price of a relief pitcher!

The A’s were ahead of their time, focusing on both forecasting wins and diagnosing the “why”. With this dual-pronged approach, they could make significant tweaks to change future outcomes. But much of the time, predictive modeling is different, taking the form of “black box” predictions: “Leonardo DiCaprio will win an Oscar,” or, “Your farm will yield 30 trucks’ worth of corn this season.” That’s nice to hear if you’re confident your system will be right 100% of the time. Sometimes you don’t need to know the “why”; you just need an answer. But more often, if you want to be sure you’re getting an accurate prediction, you need to understand not only what will happen, but why it will happen. How did you come to that conclusion? What are the different inputs or variables at play?

If, for example, a machine learning algorithm predicts that Leonardo DiCaprio will win an Oscar – but one of the deciding variables is that survival-movie stars always win the award when they wear a black bow tie to the ceremony – we would want to know this, so we could tweak our model and remove the false correlation, which likely has nothing to do with the outcome. We might then be left with a model that includes only box office sales, number of star actors, type of movie, and the number of stars given by Roger Ebert. That’s a model we can be more confident in because, as movie buffs, we’re mini “domain experts” who can confirm it makes sense. And to boot, we have full insight into why Leonardo will win the Oscar, so we can place a more confident bet in Vegas. (You know…if that’s your thing.) Operating inside the “black box” confines of most machine learning approaches would make that kind of iteration impossible.
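To make that concrete, here is a minimal sketch of the inspect-and-prune loop described above. It uses scikit-learn with an entirely made-up set of films and features (including the hypothetical “wore_black_bow_tie” variable), so treat it as an illustration of the idea rather than a real Oscar model.

```python
# A minimal sketch of "fit, inspect the why, prune the nonsense, refit".
# All feature names and data below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = [
    "box_office_millions",
    "num_star_actors",
    "is_survival_movie",
    "ebert_stars",
    "wore_black_bow_tie",  # the suspiciously convenient variable
]

# Tiny made-up training set: each row is a nominated film, label = won the Oscar.
X = np.array([
    [150, 2, 1, 3.5, 1],
    [ 90, 1, 0, 4.0, 0],
    [300, 3, 1, 3.0, 1],
    [ 40, 1, 0, 2.5, 0],
    [220, 2, 0, 4.0, 1],
    [ 60, 1, 1, 3.5, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# A linear model's "why" is right there in its coefficients, so a domain
# expert can eyeball each weight. (In practice you would standardize the
# features first so the weights are comparable.)
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>20}: {coef:+.3f}")

# If the bow-tie weight looks influential but makes no causal sense,
# drop that column and refit on the remaining, defensible features.
keep = [i for i, name in enumerate(feature_names) if name != "wore_black_bow_tie"]
pruned_model = LogisticRegression(max_iter=1000).fit(X[:, keep], y)
```

Because the model here is a plain logistic regression, the “why” sits right in its coefficients, which is exactly what lets a movie-buff domain expert veto the bow-tie variable and keep the features that make sense.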

[Image: Leo_at_the_Oscars.jpg]

I don’t always win the Oscars…but when I do, I wear a black bow tie.

That’s why my head continues to spin at the mistakenly godlike, magic-bullet appeal of black box modeling. It’s fantastic for certain applications: specifically, answering the question, “What will happen?” when you don’t care how the answer is derived, so long as it’s right. One example is a high-frequency trading application that executes a trade because its algorithm can predict with 95% accuracy when a stock will appreciate.

But for most things, the value of a prediction lies in understanding both the “what will happen” and the “why”. I almost shook my computer with frustration this morning when I read that a team of researchers at the Technical University of Munich had used artificial intelligence (more specifically, machine learning) to predict that Tommen Baratheon would be the first character killed off in the upcoming season of Game of Thrones – but gave no indication of how or why that would happen. It’s because the algorithm said so. Are you kidding me, guys?! That’s like saying, “Jon, you will eat a horrendous dinner tonight that will verrrrry likely leave you violently ill for days, buuuuuut you’ll unfortunately have to come back later to see how that happened or where you ate dinner, because we just don’t feel like telling you.” What good is a prediction without context and understanding? Will I get sick from the spinach in my fridge, from bad meatloaf at a restaurant, or from a coworker who decided to come over and sneeze on my food as I’m finishing up this blog post far too late on a Thursday night?? (Stay away from my food, Michael. STAY AWAY!!!) Without that context, I can’t make any change to improve my outcome of being home sick as a dog for an entire week.
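For what it’s worth, shipping the “why” alongside the “what” doesn’t have to be exotic. Below is a minimal sketch on a completely fabricated character-survival dataset (hypothetical features, invented labels, and in no way the Munich team’s actual model or data) showing how a prediction can arrive together with a rough ranking of the inputs that drove it, here via scikit-learn’s permutation importance.

```python
# A minimal sketch of "prediction plus the why", on a completely fabricated
# character-survival dataset. Feature names, data, and the survival rule are
# all invented for illustration; this is NOT the Munich team's model or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["num_allies", "is_royalty", "chapters_appeared", "owns_dragon"]

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.integers(0, 10, n),   # num_allies
    rng.integers(0, 2, n),    # is_royalty (0/1)
    rng.integers(1, 60, n),   # chapters_appeared
    rng.integers(0, 2, n),    # owns_dragon (0/1)
]).astype(float)
# Made-up rule purely for the demo: royalty with few allies tend to die.
y = ((X[:, 0] < 4) & (X[:, 1] == 1)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The "what": a prediction for one hypothetical character.
character = np.array([[2, 1, 30, 0]])
print("predicted to die:", bool(model.predict(character)[0]))

# The "why": which inputs the model's predictions actually lean on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>18}: {score:.3f}")
```

That importance ranking is exactly the kind of context the Game of Thrones prediction was missing: it tells you where to look, and what you might change, instead of just announcing that doom is coming.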

There’s a reason people see artificial intelligence and machine learning as fairy dust. A lot of the time it works, but it’s hard to use, requires technical expertise, and frequently operates in a total black box. I like to understand my predictions. That’s why, when I was a 10-year-old kid, I decided I’d work on bringing machine intelligence – the automated creation and interpretation of models – to the world and join Nutonian. Well…that may not be entirely true. More likely, I was trying to predict how well I’d have to hit a curveball to make it to the MLB.

 

*Sayings like this always remind me of one of my favorite off-the-field players of all time, Yogi Berra, a Hall of Famer known as much for his wit and turns of phrase as for his talent: http://m.mlb.com/news/article/151217962/yogisms-yogi-berras-best-sayings

 

Topics: Baseball, Game of Thrones, Machine Intelligence, Machine learning
