Machine Intelligence with Michael Schmidt: Analytical models predict and describe the world around us

Posted by Michael Schmidt

6/23/16 12:06 PM

Models are the foundation for predicting outcomes and forming business decisions from data. But not all models are created equal. Models range from simple trend analyses to deep, complex predictors and precise descriptions of how variables behave. One of the most powerful forms of model is an “analytical model” – that is, a model that can be analyzed, interpreted, and understood. In the past, analytical models have remained the most challenging type of model to obtain, requiring incredible skill and knowledge to create. However, modern AI today can infer these models directly from data.

A mathematical model (or analytical model) is a “description of a system using mathematical concepts and language. Mathematical models are used not only in the natural sciences (such as physics, biology, earth science, meteorology) and engineering disciplines (e.g. computer science, artificial intelligence), but also in the social sciences (such as economics, psychology, sociology and political science); physicists, engineers, statisticians, operations research analysts and economists use mathematical models most extensively. A model may help to explain a system and to study the effects of different components, and to make predictions about behaviour.”

Analytical modeling with machine intelligence

An example analytical model inferred by AI from data (Eureqa).

There’s a reason why every field of science uses math to describe and communicate the intrinsic patterns and concepts in nature, and why business analysts design mathematical models to analyze business outcomes. Essentially, these models give the most accurate predictions and the most concise explanation behind them. They allow us to forecast into the future and understand how things will react under entirely new circumstances.

Other forms of models are easier to create, but are less powerful to use. For example, linear models, polynomial models, and spline models can be used to fit curves and quantify past trends. They can estimate rates of change or interpolate between past values. Unfortunately, they are poor at extrapolating and predicting future data because they are too simple to capture general relationships. Similarly, these models often need to be quite large and high-dimensional in order to capture global variation in the data at all, which subsequently often makes them difficult or impossible to interpret.
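This interpolation-versus-extrapolation gap is easy to demonstrate. The sketch below (a toy illustration, not from the original post; it uses NumPy's `polyfit`) fits a cubic polynomial to one period of a sine wave. Inside the sampled range the fit tracks the curve reasonably well, but outside that range the polynomial's own cubic growth takes over and the prediction diverges from the true system:

```python
import numpy as np

# Sample a nonlinear "system": one period of a sine wave.
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x)

# Fit a simple cubic polynomial to the sampled data.
coeffs = np.polyfit(x, y, deg=3)
poly = np.poly1d(coeffs)

# Error inside the sampled range (interpolation)...
interp_err = abs(poly(np.pi / 2) - np.sin(np.pi / 2))
# ...versus error well outside it (extrapolation).
extrap_err = abs(poly(3 * np.pi) - np.sin(3 * np.pi))

print(f"interpolation error: {interp_err:.3f}")
print(f"extrapolation error: {extrap_err:.3f}")
```

The polynomial never captured the periodic structure of the system, only its local shape, so its forecasts break down as soon as it leaves the data it was fit on.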

Many open-source algorithms in machine learning attempt to improve predictive accuracy over standard linear or nonparametric methods. Decision trees, neural networks, ensembles, and the like contain more complex and nonlinear operations that are more efficient at encoding trends in the data. To improve accuracy, however, they generally apply the same nonlinear transformation over and over (such as a logistic function or split average), regardless of the actual underlying system. This makes these models almost impossible to interpret meaningfully. They also require significant expertise from those using them; entire competitions are held for experts to tune and control the parameters of these algorithms and models to prevent them from overfitting the data, limiting where they can be applied.
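To make the “same transformation over and over” point concrete, here is a minimal two-layer network in plain Python (the weights are arbitrary placeholders, chosen only for illustration). Every unit applies the identical logistic function; the model's structure encodes nothing about the system being modeled, which is why inspecting the weights tells you so little:

```python
import math

def logistic(z):
    """The fixed nonlinearity, reused by every unit in the network."""
    return 1.0 / (1.0 + math.exp(-z))

def tiny_network(x, w_hidden, w_out):
    # Layer 1: each hidden unit applies the SAME logistic transformation.
    hidden = [logistic(w * x) for w in w_hidden]
    # Layer 2: the output unit applies it yet again to a weighted sum.
    return logistic(sum(w * h for w, h in zip(w_out, hidden)))

# Arbitrary example weights -- nothing here reflects the underlying system.
y = tiny_network(0.5, w_hidden=[1.2, -0.7, 0.3], w_out=[0.5, 1.0, -0.4])
print(y)
```

Stacking more of these identical transformations can fit the data ever more closely, but the resulting pile of weights is not a description of the system in any human-readable sense.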

Deep learning methods in machine learning can be viewed as an extreme, producing enormously large, complex models. These models perform extremely well on a few particular types of problems that have dense data, like images and text, where there are thousands of equally important inputs. Deep neural networks will typically use every input available, even when completely irrelevant or spurious, which makes them difficult to use where the important variables and inputs are unknown ahead of time.

The power of analytical models is that they use the least amount of complexity possible to achieve the same accuracy. Instead of reapplying the same transformation over and over, the structure of the model is specific to the system being modeled. This makes the model’s structure special – it is by definition the best structure for the data, and the simplest and most elegant hypothesis on how the system works. The drawback to analytical models is that they require significant computational effort to infer.
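The search for that structure can be sketched as follows. This is a deliberately tiny, brute-force toy – not Eureqa's actual algorithm, which searches a vastly larger space of expressions – but it shows the core idea: score candidate symbolic forms against data from an unknown system and keep the one that explains it best:

```python
import math

# Data sampled from an "unknown" system (secretly y = x**2 + 1).
data = [(x, x**2 + 1) for x in [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]]

# A handful of hand-picked candidate expressions; a real symbolic-regression
# engine would generate and evolve these automatically.
candidates = {
    "x + 1":    lambda x: x + 1,
    "2*x":      lambda x: 2 * x,
    "x**2":     lambda x: x**2,
    "x**2 + 1": lambda x: x**2 + 1,
    "sin(x)":   lambda x: math.sin(x),
}

def sse(f):
    """Sum of squared errors of a candidate expression on the data."""
    return sum((f(x) - y) ** 2 for x, y in data)

best = min(candidates, key=lambda name: sse(candidates[name]))
print(best)  # -> x**2 + 1
```

The winning expression is both accurate and readable – a hypothesis a person can check – but notice that even this toy version had to evaluate every candidate against every data point, which hints at why the approach is computationally expensive at realistic scale.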

Our mission with Eureqa has been to solve this challenge at scale, and we’ve already seen major impacts in both science/research and business/enterprise. For me personally, I’m most excited by the prospect of using machine intelligence for analytical modeling, where instead of completely automating a task or simply fitting data, the machines are making discoveries in the data we collect and interpreting them back to us automatically. Automation has never been so beneficial.

Topics: Machine learning, Analytical models, Deep learning
