Michael Schmidt

Recent Posts

Machine Intelligence with Michael Schmidt: Searching data for causation

Posted by Michael Schmidt

7/27/16 10:03 AM

The holy grail of data analytics is finding “causation” in data: identifying which variables, inputs, and processes are driving the outcome of a problem. The entire field of econometrics, for example, is dedicated to studying and characterizing where causation exists. Actually proving causation, however, is extremely difficult, typically involving carefully controlled experiments. To even get started, analysts need to know which variables are important to include in the evaluation, which need to be controlled for, and which to ignore. From there, they can build a model, design an experiment to test its causal predictions, and iterate until they arrive at a conclusion.

Read More

Topics: Eureqa, Machine Intelligence, Causation

Machine Intelligence with Michael Schmidt: Analytical models predict and describe the world around us

Posted by Michael Schmidt

6/23/16 12:06 PM

Models are the foundation for predicting outcomes and forming business decisions from data. But not all models are created equal: they range from simple trend analyses to deep, complex predictors and precise descriptions of how variables behave. One of the most powerful forms of model is an “analytical model” – that is, a model that can be analyzed, interpreted, and understood. In the past, analytical models have been the most challenging type of model to obtain, requiring incredible skill and knowledge to create. Today, however, modern AI can infer these models directly from data.

Read More

Topics: Machine learning, Analytical models, Deep learning

Machine Intelligence with Michael Schmidt: OpenAI and doomsday artificial intelligence

Posted by Michael Schmidt

6/1/16 9:30 AM

Speaking at the Open Data Science Conference (ODSC) last week, I discussed where artificial intelligence is going, what it will automate, and what its impact will be on science, business, and jobs. While the impact from Eureqa has been overwhelmingly positive, many are warning about a darker future:

Read More

Topics: Artificial intelligence, Reinforcement learning, OpenAI

Machine Intelligence with Michael Schmidt: IBM’s Watson, Eureqa, and the race for smart machines

Posted by Michael Schmidt

5/16/16 11:12 AM

Three months ago I spoke at a conference affectionately titled “Datapalooza” sponsored by IBM. My talk covered how modern AI can infer the features and transformations that make raw data predictive. I’m not sure exactly how many IBM people were in the crowd, but two IBM database and analytics leads grabbed me after the talk:

Read More

Topics: Machine Intelligence, IBM Watson, Artificial intelligence

Setting and using validation data with Eureqa

Posted by Michael Schmidt

6/28/13 4:02 PM

Eureqa automatically splits your data into two groups: a training data set and a validation data set. The training data is used to optimize models, whereas the validation data is used to test how well models generalize to new data. Eureqa also uses the validation data to select the best models to display in the Eureqa user interface. This post describes how to use and control these data sets in Eureqa.

Default Splitting
By default, Eureqa will randomly shuffle your data and then split it into training and validation data sets based on the total size of your data. Eureqa will color these points differently in the user interface, and also provide separate statistics for each set when displaying stats.

All other error metrics shown in Eureqa, like the "Fit" column and "Error" shown in the Accuracy/Complexity plot, use the metric calculated with the validation data set.

Validation Data Settings

You can modify how Eureqa chooses the training and validation data sets in the Options | Advanced Genetic Program Settings menu.

Here you can change the portion of the data that is used for training and the portion that goes into validation. The two sets are allowed to overlap, but can also be set to be mutually exclusive.

For very small data sets (under a few hundred points), it is usually best to use almost all of the data for both training and validation. In these cases, model selection can be done using model complexity alone.

For very large data sets (over 1,000 rows), it is usually best to use a smaller fraction of the data for training. It is recommended to choose a fraction such that the training data is approximately 10,000 rows or fewer, and then use all the remaining data for validation.
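The sizing guidance above can be sketched as a small Python helper (split_rows is a hypothetical illustration, not part of Eureqa):

```python
import random

def split_rows(rows, small=300, max_train=10_000, seed=0):
    """Split rows into training and validation sets following the
    guidance above (hypothetical helper, not Eureqa's actual code)."""
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    if len(shuffled) <= small:
        # small data set: overlap and use (almost) everything for both
        return shuffled, shuffled
    # large data set: cap training at ~10,000 rows, validate on the rest
    n_train = min(len(shuffled), max_train)
    return shuffled[:n_train], shuffled[n_train:]

train, val = split_rows(list(range(25_000)))
print(len(train), len(val))  # 10000 15000
```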

Finally, you can also tell Eureqa whether or not to randomly shuffle the data before splitting. One reason to disable shuffling is if you want specific rows at the end of the data set to be used for validation.

Using Validation Data to Test Extrapolating Future Values

If you are using time-series data and are trying to predict future time-series values, you may want to create a validation data split that emphasizes the ability of the models to predict future values that were not used for optimizing the model directly.

To do this, you need to disable the random shuffling in the Options | Advanced Genetic Programming Options menu, and optionally make the training and validation data sets mutually exclusive. For example, you could set the first 75% of the data to be used for training, and the last 25% to be used for validation.

Now, the list of best solutions will be filtered by their ability to predict only future values - the last rows in the data set which were not used to optimize the models directly. 
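As a sketch, the same time-ordered split can be written in a few lines of Python (the helper name is made up for illustration):

```python
def time_series_split(rows, train_fraction=0.75):
    """Split already-ordered rows without shuffling: the earliest
    train_fraction of rows train the model, and the remaining (future)
    rows are held out for validation."""
    cut = int(len(rows) * train_fraction)
    return rows[:cut], rows[cut:]

rows = list(range(100))              # stand-in for 100 time-ordered data rows
train, val = time_series_split(rows)
print(len(train), len(val), val[0])  # 75 25 75
```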

Read More

Topics: Preparing Data, Advanced Techniques, Eureqa, Tutorial

Working with discontinuous data in Eureqa

Posted by Michael Schmidt

6/28/13 4:02 PM

Several features in Eureqa assume that your data is one continuous series of points by default, such as the smoothing features and numerical derivative operators. This post shows how to tell Eureqa that there are breaks in the data.

Entering discontinuity with a blank row:

Read More

Topics: Preparing Data, Advanced Techniques, Eureqa, Tutorial

Using date and time variables

Posted by Michael Schmidt

6/28/13 4:01 PM

This post describes the best way to convert date or time values into numeric time values that can be used in Eureqa.

Time Values in Eureqa:
Eureqa can only store date and time values as numeric values (e.g. total seconds or total days). Therefore, you need to pick a reference point to measure each time duration from, and units in which to measure it.

For example, you could convert a time value "8:31 am" to 8.52 total hours since midnight. Similarly for dates, you could convert a date like "Dec. 6, 1981 8:31 am" to 81.9 total years since 1900.
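Both conversions above can be done with Python's standard datetime module; for example (a rough sketch, where the 365.25-day year is an approximation):

```python
from datetime import datetime

def hours_since_midnight(t):
    # "8:31 am" -> 8.52 total hours since midnight
    return t.hour + t.minute / 60 + t.second / 3600

def years_since(t, ref=datetime(1900, 1, 1)):
    # approximate a year as 365.25 days
    return (t - ref).total_seconds() / (365.25 * 24 * 3600)

t = datetime(1981, 12, 6, 8, 31)          # Dec. 6, 1981 8:31 am
print(round(hours_since_midnight(t), 2))  # 8.52
print(round(years_since(t), 1))           # 81.9
```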

You need to make these date and time conversions to numeric duration values in another program, like Excel, before entering the data into Eureqa (see below for an example).


There are two common mistakes to avoid:

1) Do not concatenate date and time strings to get a numeric value. For example, do not convert a date like "1981-12-06" to 19811206. This representation of time is extremely nonlinear: it preserves order, but equal differences in the encoded value do not correspond to equal time differences. Additionally, the values are very large and numerically unstable.

2) Avoid measuring time durations from a very distant reference point. For example, if your data uses time values that span a few days, do not convert these time values to total seconds since the beginning of the century. The numeric values would be enormous and numerically unstable.

Instead, the best practice is to measure each time duration from the first time point in your data set.

Convert in Excel:

Many programs can convert date and time values to numeric time-duration values. In Excel, if you subtract two date cells, the result is the fractional number of days between the two dates. You could then convert days to hours or some other unit to get numeric values with reasonable magnitudes. For example:

    =(A1 - A$1)*24

filled down for all rows, would subtract the first date in cell A1 and convert the resulting day values into hours.
Another useful function is YEARFRAC, which converts the difference between a date and a reference date into a fractional number of years. For example:

    =YEARFRAC(A$1, A1)

filled down for all rows, returns the fractional number of years since the first date in cell A1.
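If you'd rather script the conversion than use Excel, the same subtract-and-rescale step looks like this in Python (the date strings below are made-up sample values):

```python
from datetime import datetime

raw = ["2013-06-01 00:00", "2013-06-01 12:00", "2013-06-02 06:00"]
dates = [datetime.strptime(s, "%Y-%m-%d %H:%M") for s in raw]

# measure each value as hours since the first time point in the data
first = dates[0]
hours = [(d - first).total_seconds() / 3600 for d in dates]
print(hours)  # [0.0, 12.0, 30.0]
```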

Read More

Topics: Preparing Data, Advanced Techniques, Eureqa, Techniques, Time Series

Custom error metrics and special Search Relations in Eureqa

Posted by Michael Schmidt

6/28/13 4:01 PM

Eureqa's Search Relation setting provides quite a bit of flexibility to search for different types of models. This post describes some advanced techniques of using the Search Relation setting to specify custom error metrics for the search to optimize; or more specifically, arbitrary custom loss functions for the fitness calculation.

Custom Fitness Using Minimize Difference

Eureqa has a built-in fitness metric named "Minimize difference". This fitness metric minimizes the signed difference between the left- and right-hand sides of the search relationship. For example, specifying:

y = f(x)

with the minimize difference fitness metric selected tells Eureqa to find an f(x) that minimizes y - f(x). A trivial solution to this relationship would be f(x) = negative infinity. However, you can enter other relations that are more useful. Consider the following search relation:

(y - f(x))^4 = 0

Here, the minimize difference fitness would minimize the 4th-power error.

In fact, you can enter any such expression, and f(x) can appear multiple times. For example:

max( abs(y - f(x)), (y - f(x))^2 ) = 0

would minimize the maximum of the absolute error and squared error, at each data point in the data set.
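To see what this relation optimizes, here is the same loss written out in plain Python (a sketch of the math, not Eureqa's internals):

```python
def max_abs_sq_loss(y, f_x):
    """Average of max(|error|, error^2) over all data points, mirroring
    the search relation max(abs(y - f(x)), (y - f(x))^2) = 0."""
    per_point = [max(abs(a - b), (a - b) ** 2) for a, b in zip(y, f_x)]
    return sum(per_point) / len(per_point)

# residuals of 0.5 and -2.0: max picks 0.5 (abs) and 4.0 (squared)
print(max_abs_sq_loss([0.0, 1.0], [0.5, 3.0]))  # 2.25
```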

Other Methods

There are many other possible ways to alter the fitness metric using the search relationship setting. For example, you could use a normal fitness metric (e.g. absolute or squared error) but transform both sides of the relation, wrapping each side of the search relation with a sigmoid function like tanh:

tanh(y) = tanh( f(x) )

Now, both the left and right sides get squashed down to a tanh function (an s-shaped curve that ranges -1 to 1) before being compared. This effectively caps large errors, reducing their impact on the fitness.
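The capping effect is easy to verify numerically (a quick sketch, not Eureqa code):

```python
import math

def squashed_sq_error(y, fx):
    # compare tanh(y) against tanh(f(x)); since tanh maps everything
    # into (-1, 1), a huge raw error contributes a bounded amount
    return (math.tanh(y) - math.tanh(fx)) ** 2

print(squashed_sq_error(0.0, 100.0))   # ~1.0 instead of 10000
print(squashed_sq_error(0.0, 1000.0))  # still ~1.0
```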

Even More Tricks

You can also use the search relationship to forbid certain values by exploiting NaN values (NaN = Not a Number). For example, consider the following search relation, which forbids models with negative values:

y = f(x) + 0*log( f(x) )

Notice the unusual 0*log(f(x)) term. Whenever f(x) is positive, the log is real-valued, and the multiplication with zero reduces the expression to y = f(x). However, whenever f(x) is negative, log(f(x)) is undefined, and produces a NaN value. Whenever a NaN appears in the fitness calculation, Eureqa automatically assigns the solution infinite error. Therefore, this search relationship tells Eureqa to find an f(x) that models y, but f(x) must be positive on each point in the data set.

This behavior can be used in other ways as well. Any operation that would produce an IEEE floating point NaN, undefined, or infinity will trigger Eureqa to assign infinite error. You can also add multiple terms like this to place multiple different constraints on solutions.
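A small Python sketch of how this NaN penalty behaves (illustrative only; Eureqa's internal handling is as described above):

```python
import math

def penalized_prediction(fx):
    # mirrors f(x) + 0*log(f(x)): valid when f(x) > 0, NaN otherwise
    return fx + 0 * math.log(fx) if fx > 0 else math.nan

def fitness(y, fx):
    # a NaN anywhere in the calculation means infinite error
    pred = penalized_prediction(fx)
    return math.inf if math.isnan(pred) else abs(y - pred)

print(fitness(2.0, 1.5))   # 0.5  (positive prediction is allowed)
print(fitness(2.0, -1.5))  # inf  (negative prediction is forbidden)
```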
Read More

Topics: Advanced Techniques, Eureqa, Tutorial, Custom Error Metrics

Use time-delays or time-lags of a variable in Eureqa

Posted by Michael Schmidt

6/28/13 4:01 PM

A time delay retrieves the value of a variable or expression at a fixed offset in the past, according to the time ordering or index of each data point in the data set. This post describes the time-delay building-blocks available in Eureqa and different modeling techniques with delayed values.

Time Delay Building Blocks:

Eureqa provides the delay(x, c) building-block to represent an arbitrary time-delay, where x can be any expression. The expression delay(x, c) returns the value of x at c time units in the past. When used as a building-block, Eureqa can automatically choose which expressions or variables to delay and optimize the time-delay amount c.

For example, given an arbitrary variable x whose values are ordered by some time variable t, the delayed value delay(x, 1.0) is equal to x at 1.0 time units into the past.

To use time-delay building-blocks, your data must have some notion of time or ordering, and you need to tell Eureqa which variable in your data represents the time or ordering value.

If you don't specify a time variable, Eureqa will use the row number in the spreadsheet as the time value of each data point.

If a particular delayed time value falls between two points in the data set, the value is linearly interpolated between the two data points using the time value.

Eureqa also provides the delay_var(x, c) building-block, which is identical to delay(x, c) except that it only accepts a variable as input. It is provided as a special case of delay(x, c) to let you constrain the types of delays used in solutions; otherwise the two are effectively identical.

Control the Fraction of Data Used for History

Notice that a delayed value is undefined for the first few time points in a data set: these points request previous values of x that lie before the first point in the data. Eureqa will automatically ignore these data points when calculating errors.

However, there is a way to control how much of the data set Eureqa is allowed to ignore - or effectively, to specify a maximum delay offset. You can limit the fraction of data used for time-delay history values in the Advanced Solutions Options menu.

The default maximum fraction is 50% of the data. If you find that Eureqa is identifying solutions with very large time delays, perhaps just to avoid modeling difficult features in the first half of the data set, you may want to reduce this fraction.

Additionally, you can control the number of delayed values per variable (including a zero delay of an ordinary variable use) in this dialog.

Fixed Time-delays:

Another way to model a value as a function of its previous values is with fixed delays. You can enter in fixed time-delays, or "lags" of the variable, directly into the Search Relationship option. For example:

    x = f( delay(x, 2.1), delay(x, 5.6) )

This search relationship tells Eureqa to find an equation that models the value of x as a function of its values at 2.1 and 5.6 time units in the past.

Minimum Time-delays:

You may also want to specify a minimum time-delay offset. If you entered a search relationship such as x = f(x), Eureqa would find the trivial answer f(x) = x. More likely, you want to find a model of x as a function of x at least some amount of time in the past. The way to do this is again to use a fixed delay, such as:

    x = f(delay(x, 3.21))

Here, 3.21 is the minimum time-delay. Now, if the time-delay building-blocks are enabled, Eureqa can delay this delayed input further if necessary.

Delay Differential Equations:

Another common use for time-delays is modeling with delay differential equations. Finding a delay differential equation is just like searching for an ordinary differential equation: for example, enter a search relationship like:

    D(y,t) = f(y)

while also enabling the time-delay building-blocks. This relationship has a trivial solution, however: Eureqa will return a slope formula such as

    f(y) = ( y - delay(y, 0.1) )/0.1

Therefore, you most likely want to limit the total number of delays per variable to one (which includes the zero delay of normal variable use). You can set this in the Advanced Solution Settings menu; the default is unlimited.

Implementing Delays Outside of Eureqa

In Matlab, you can implement a time delay using the interp1 function. For example, the expression delay(x, 1.23) would be implemented as:

    interp1(t, x, t - 1.23, 'linear')

Implementing delays in Excel is a little harder: you need to download an Excel add-on that provides an interpolation function. For example, the package XlXtrFun adds a function "Interpolate" that works much like Matlab's interp1. There are also other guides for linear interpolation in Excel.
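For reference, here is a pure-Python sketch of the same linearly interpolated delay (the delay function below is an illustration, not Eureqa's implementation):

```python
def delay(t, x, offset):
    """Return x evaluated at times t - offset via linear interpolation,
    like Matlab's interp1 above. t must be sorted ascending; points that
    fall before the first sample have no history and return None."""
    out = []
    for ti in t:
        target = ti - offset
        if target < t[0]:
            out.append(None)  # no history available; these rows are skipped
            continue
        # find the bracketing interval and interpolate
        for j in range(len(t) - 1):
            if t[j] <= target <= t[j + 1]:
                w = (target - t[j]) / (t[j + 1] - t[j])
                out.append(x[j] * (1 - w) + x[j + 1] * w)
                break
    return out

t = [0.0, 1.0, 2.0, 3.0]
x = [0.0, 10.0, 20.0, 30.0]
print(delay(t, x, 1.5))  # [None, None, 5.0, 15.0]
```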
Read More

Topics: Advanced Techniques, Eureqa, Tutorial, Techniques, Time Series

Normalizing data variables in Eureqa

Posted by Michael Schmidt

6/28/13 4:00 PM

While normalizing your data variables (rescaling the numeric values) is completely optional, it can greatly improve the performance of Eureqa, and numerical stability of solutions. This post discusses when and how to normalize variables in your data.

When to Normalize:

Eureqa works best when all variables in your data have small to medium magnitudes, on the order of 1 to 100. For example, if you have any variable that ranges over a million, it would be best to rescale the values to larger units.

Additionally, a variable's range of variation should be comparable in magnitude to its mean or offset. For example, if you have a variable that only varies between 100.0 and 100.5, it would be best to subtract off 100 so that it ranges between 0 and 0.5.

For example, consider a data set with two variables, a and b, where a has a large offset (near 10,000) and b is much smaller in magnitude. Plotted together, both variables look rather flat; you can't see any interesting variation because a has such a large offset. Subtracting an offset of 10,000 from a reveals some interesting variation in a, but b still looks flat, because a's magnitude is still large relative to b. Dividing the values of a by 50 then puts both variables on the same relative scale, and the interesting variation in each becomes visible.

This is ideally how your data should look before entering it into Eureqa: when the variables are reasonably scaled, Eureqa is most likely to utilize their variation to build accurate solutions.

How to Normalize a Variable:

First, consider changing the units of the data you enter into Eureqa. Could you measure values in meters instead of centimeters? Could you measure currency in millions-of-dollars instead of dollars? Pick units such that the numeric values have a range of approximately 1 to 100.

Second, consider measuring values from an offset. Could you measure time since the time of your first data point, instead of since the beginning of the year or century?

Third, check over your data and look for outliers. Are there any values that are drastically out of proportion with the rest? If so, consider removing that entire row from your data set or giving it a very low weight.

The general formula for normalizing a variable y is:

    y_normalized = (y - offset)/scale

where offset and scale are the normalization parameters. It's recommended that you pick offset and scale manually, so that the numeric values still have an intuitive meaning. However, if you truly don't care what the numeric values mean, a common approach is to set offset equal to the mean of the variable and scale to the standard deviation of the variable.
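The formula above is a one-liner in Python; a small sketch (the helper name is made up):

```python
import statistics

def normalize(values, offset=None, scale=None):
    """Apply y_normalized = (y - offset)/scale; if the parameters are not
    given, fall back to the mean/standard-deviation convention above."""
    if offset is None:
        offset = statistics.mean(values)
    if scale is None:
        scale = statistics.stdev(values)
    return [(v - offset) / scale for v in values]

# the variable from the example above, ranging 100.0 to 100.5:
print(normalize([100.0, 100.25, 100.5], offset=100.0, scale=1.0))
# [0.0, 0.25, 0.5]
```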

It's also recommended that you apply normalization before entering your data into Eureqa. However, you can also specify a normalization directly in the Eureqa Search Relation. For example, consider the search relation:

    y = f( x/1000 )

This tells Eureqa to find a model of y as a function of values of x that are divided by 1000.

Automatic Normalization Checks:

By default, Eureqa will check your data for extreme cases of variables that need to be normalized. When entering or modifying values in the Eureqa data view, you may encounter a warning message. For example, Eureqa might report that the variable y has a large offset: a mean value of about 1000 with a variation of only +/- 1.38. In that case, it would suggest subtracting 996 from each y value in your data set, while leaving the scale unchanged.

You can also modify this and specify what values to apply. Pick a scale and offset that makes sense and preserves meaning.

Read More

Topics: Preparing Data, Eureqa, Tutorial, Techniques, Normalize Data in Eureqa
