
Machine Intelligence with Michael Schmidt: Analytical models predict and describe the world around us

Posted by Michael Schmidt

23.06.2016 12:06 PM

Models are the foundation for predicting outcomes and making business decisions from data. But not all models are created equal. They range from simple trend analyses to deep, complex predictors and precise descriptions of how variables behave. One of the most powerful forms is the “analytical model” – that is, a model that can be analyzed, interpreted, and understood. Historically, analytical models have been the most challenging type of model to obtain, requiring tremendous skill and knowledge to create. Today, however, modern AI can infer these models directly from data.

A mathematical model (or analytical model) is a “description of a system using mathematical concepts and language. Mathematical models are used not only in the natural sciences (such as physics, biology, earth science, meteorology) and engineering disciplines (e.g. computer science, artificial intelligence), but also in the social sciences (such as economics, psychology, sociology and political science); physicists, engineers, statisticians, operations research analysts and economists use mathematical models most extensively. A model may help to explain a system and to study the effects of different components, and to make predictions about behaviour.”

Analytical modeling with machine intelligence

An example analytical model inferred by AI from data (Eureqa).

There’s a reason why every field of science uses math to describe and communicate the intrinsic patterns and concepts in nature, and why business analysts design mathematical models to analyze business outcomes. Essentially, these models give the most accurate predictions and the most concise explanation behind them. They allow us to forecast into the future and understand how things will react under entirely new circumstances.

Other forms of models are easier to create but less powerful to use. For example, linear models, polynomial models, and spline models can be used to fit curves and quantify past trends. They can estimate rates of change or interpolate between past values. Unfortunately, they are poor at extrapolating and predicting future data because they are too simple to capture general relationships. These models also often need to be quite large and high-dimensional to capture global variation in the data at all, which in turn makes them difficult or impossible to interpret.
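
To make that concrete, here is a quick toy sketch (synthetic data, plain NumPy; nothing here comes from Eureqa or the original post). A high-degree polynomial tracks the points it was fit to, then flies off the rails just outside that range.

# Toy illustration: a polynomial curve fit interpolates well but
# extrapolates poorly beyond the data it was trained on.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 3, 30)
y_train = np.sin(x_train) + rng.normal(scale=0.05, size=x_train.size)

# Fit a degree-7 polynomial over the observed range [0, 3].
poly = np.poly1d(np.polyfit(x_train, y_train, deg=7))

x_new = np.array([3.5, 4.0, 5.0])   # just beyond the training range
print("true values:      ", np.sin(x_new))
print("polynomial values:", poly(x_new))   # typically diverges badly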

Many open-source machine learning algorithms attempt to improve predictive accuracy over standard linear or nonparametric methods. Decision trees, neural networks, ensembles, and the like contain more complex and nonlinear operations that encode trends in the data more efficiently. To improve accuracy, however, they generally apply the same nonlinear transformation (such as a logistic function or split average) over and over, regardless of the actual underlying system. This makes the resulting models almost impossible to interpret meaningfully. They also demand significant expertise from those using them; entire competitions are held for experts to tune the parameters of these algorithms to keep them from overfitting the data, which limits where they can be applied.
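
As a rough sketch of that tuning problem (made-up data, scikit-learn; the specific settings are mine, not anything from the post): an unconstrained decision tree memorizes the noise in its training set, and capping its depth is exactly the kind of knob experts end up turning.

# Sketch: overfitting in a decision tree, and the kind of parameter
# tuning (here, max_depth) used to rein it in.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

deep = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)            # no depth limit
pruned = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_tr, y_tr)

print("unconstrained tree, test MSE:", mean_squared_error(y_te, deep.predict(X_te)))
print("max_depth=4 tree,   test MSE:", mean_squared_error(y_te, pruned.predict(X_te)))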

Deep learning methods can be viewed as an extreme case, producing enormously large, complex models. These models perform extremely well on a few particular types of problems with dense data, like images and text, where there are thousands of equally important inputs. Deep neural networks typically use every input available, even ones that are completely irrelevant or spurious, which makes them difficult to apply when the important variables and inputs are unknown ahead of time.
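
A small made-up example illustrates the point; scikit-learn's MLPRegressor stands in for a much deeper network here, and the data is synthetic. The network keeps non-zero weights on a column of pure noise rather than dropping it.

# Sketch: a neural network does not discard an irrelevant input on its own.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
x_relevant = rng.uniform(-1, 1, size=(500, 1))
x_noise = rng.normal(size=(500, 1))              # completely irrelevant input
X = np.hstack([x_relevant, x_noise])
y = 3.0 * x_relevant[:, 0] ** 2                  # depends only on the first column

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)

# Mean absolute first-layer weight attached to each input feature.
weights = np.abs(net.coefs_[0]).mean(axis=1)
print("relevant feature weight:", weights[0])
print("noise feature weight:   ", weights[1])    # typically well above zero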

The power of analytical models is that they use the least complexity possible to achieve the same accuracy as more complex methods. Instead of reapplying the same transformation over and over, the structure of the model is specific to the system being modeled. This makes the model’s structure special – it is by definition the best structure for the data, and the simplest and most elegant hypothesis for how the system works. The drawback to analytical models is that they require significant computational effort to discover.
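
For a sense of what an analytical model looks like in code, here is a minimal sketch using SciPy's curve_fit on synthetic data. This is not Eureqa, which searches for the functional form itself; here the exponential form is assumed up front, which is exactly the hard part an analytical modeling engine tries to automate.

# Sketch: once the functional form is known, an analytical model is just
# a formula plus a handful of interpretable parameters.
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, b, c):
    # Hypothetical "system": exponential decay toward an offset.
    return a * np.exp(-b * t) + c

rng = np.random.default_rng(3)
t = np.linspace(0, 5, 50)
y = model(t, 2.5, 1.3, 0.5) + rng.normal(scale=0.02, size=t.size)

params, _ = curve_fit(model, t, y)
print("recovered a, b, c:", params)   # close to 2.5, 1.3, 0.5

# The fitted expression y = a*exp(-b*t) + c, with three numbers, is the
# entire model: compact, predictive, and easy to interpret.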

Our mission with Eureqa has been to solve this challenge at scale, and we’ve already seen major impacts in both science/research and business/enterprise. For me personally, I’m most excited by the prospect of using machine intelligence for analytical modeling, where instead of completely automating a task or simply fitting data, the machines are making discoveries in the data we collect and interpreting them back to us automatically. Automation has never been so beneficial.

Topics: Analytical models, Deep learning, Machine learning

The Perils of Black Box Machine Learning: Baseball, Movies, and Predicting Deaths in Game of Thrones

Posted by Jon Millis

22.04.2016 10:17 AM

Making predictions is fun. I was a huge baseball fan growing up. There was nothing quite like chatting with my dad and my friends, crunching basic statistics and watching games, reading scouting reports, and finally, expressing my opinion on what would happen (the Braves would win the World Series) and why things were happening (Manny Ramirez was on a hot streak because he was facing inexperienced left-handed pitchers). I was always right…unless I was wrong.*

One of the reasons business people, scientists, and hobbyists like predictive modeling is that, in many cases, it allows us to sharpen our predictions. There’s a reason Moneyball became a best-selling book: it was one of the first widely publicized examples of applying analytics to gain a competitive advantage, in this case by identifying the player statistics that best translate into winning baseball games. Predictive modeling was the engine that drove the Oakland A’s from a small-budget cellar-dweller to a perennial championship contender. By understanding the components of a valuable baseball player – not merely predicting his statistics – the A’s held on to that advantage for years.

[Image: Oakland_As.jpg]

High five, Zito! You won 23 games for the price of a relief pitcher!

The A’s were ahead of their time, focusing on both forecasting wins and diagnosing the “why”. With this dual-pronged approach, they could make significant tweaks to change future outcomes. But predictive modeling often takes a different form: “black box” predictions. “Leonardo DiCaprio will win an Oscar,” or, “Your farm will yield 30 truckloads of corn this season.” That’s nice to hear if you’re confident your system will be right 100% of the time. Sometimes you don’t need to know the “why”; you just need an answer. But more often, if you want to be sure you’re getting an accurate prediction, you need to understand not only what will happen, but why it will happen. How did you come to that conclusion? What are the different inputs or variables at play?

If, for example, a machine learning algorithm predicts that Leonardo DiCaprio will win an Oscar – but one of the deciding variables is that survival-movie stars always win the award if they wear a black bow tie to the ceremony – we would want to know, so we could tweak our model and remove the false correlation, since it likely has nothing to do with the outcome. We might then be left with a model that includes only box office sales, number of star actors, type of movie, and the number of stars given by Roger Ebert. That’s a model we can be more confident in, because as movie buffs we’re mini “domain experts” who can confirm it makes sense. And to boot, we have full insight into why Leonardo will win the Oscar, so we can place a more confident bet in Vegas. (You know…if that’s your thing.) Operating inside the “black box” confines of most machine learning approaches would make that kind of iteration impossible.
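
If you want to picture that iteration, here is a tongue-in-cheek sketch (invented feature names, synthetic data, scikit-learn): fit a simple, transparent model, read off which variables drive it, and drop the one that makes no sense.

# Sketch: an interpretable model lets us spot and remove spurious inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["box_office_sales", "num_star_actors", "ebert_stars", "black_bow_tie"]

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 4))
# The "win" outcome depends only on box office sales and Ebert stars.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
for name, coef in zip(features, clf.coef_[0]):
    print(f"{name:>17}: {coef:+.2f}")

# If "black_bow_tie" ever showed a large coefficient, we would suspect a
# spurious correlation, drop the column, and refit. A black box never
# gives us that chance.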

[Image: Leo_at_the_Oscars.jpg]

I don’t always win the Oscars…but when I do, I wear a black bow tie.

That’s why my head continues to spin at the godlike, magic-bullet appeal that black box modeling has mistakenly earned. It’s fantastic for certain applications: specifically, answering the question “What will happen?” when you don’t care how the answer is derived, so long as it’s right. One example is a high-frequency trading application that executes trades because its algorithm can predict with 95% accuracy when a stock will appreciate.

But for most things, the value of a prediction lies in understanding both the “what will happen” and the “why”. I almost shook my computer with frustration this morning when I read that a team of researchers at the Technical University of Munich had used artificial intelligence (more specifically, machine learning) to predict that Tommen Baratheon would be the first character killed off in the upcoming season of Game of Thrones – but gave no indication of how or why that would happen. It’s because the algorithm said so. Are you kidding me, guys?! That’s like saying, “Jon, you will eat a horrendous dinner tonight that will verrrrry likely leave you violently ill for days, buuuuuut don’t bother coming back later to find out how that happened or where you ate, because we just don’t feel like telling you.” What good is a prediction without context and understanding? Will I get sick from the spinach in my fridge, from bad meatloaf at a restaurant, or from a coworker who decided to come over and sneeze on my food as I’m finishing up this blog post far too late on a Thursday night?? (Stay away from my food, Michael. STAY AWAY!!!) Without that context, I can’t make any change to improve my outcome of being home sick as a dog for an entire week.

There’s a reason that people see artificial intelligence and machine learning as fairy dust. A lot of the time it works, but it’s hard to use, requires technical expertise, and frequently operates in a total black box. I like to understand my predictions. That’s why, when I was a 10-year-old kid, I decided I’d work on bringing machine intelligence – the automated creation and interpretation of models – to the world and join Nutonian. Well…that may not be entirely true. More likely, I was trying to predict how well I’d have to hit a curveball to make it to the MLB.

 

*Sayings like this always remind me of one of my favorite off-the-field players of all-time, Yogi Berra, a Hall of Famer known as much for his wit and turns of phrases as his talent: http://m.mlb.com/news/article/151217962/yogisms-yogi-berras-best-sayings

 

Topics: Baseball, Game of Thrones, Machine Intelligence, Machine learning
