Blog

Customer Spotlight: NASA

Posted by Jess Lin

15.10.2014 10:30 AM

Customers from energy to healthcare to communications are leveraging Eureqa to transform their data into gold. As customers tell us their stories, we’ll continue to share them with you on our blog. Hopefully, they’ll inspire you to start thinking about how your company can leverage machine intelligence for a significant competitive advantage. Contact us if you have a Eureqa case study you would like to share.

Commercial airplane travel reigns as the king of safe long-distance travel, but its highly engineered, complex machinery carries inherent risks that can be introduced by human error. In the complex world of an airplane cockpit, Nutonian’s Eureqa® is helping NASA improve pilots’ split-second judgment of unexpected flight conditions and reduce flight fatality risk by more than 50%.

 

Since the mid-20th century, commercial aviation has become a popular means of transportation, with nearly 10 million total flights and 1 billion passengers across the US in 2013. Advances in aircraft design, engineering and navigation aids have dramatically improved safety and reduced accidents over the years, but the life of every passenger at 30,000 feet still rests solely in the pilot’s hands. Historically, more than half of all plane crashes have been caused by pilot error; while experience and training reduce risks, even veteran pilots can be fallible under stress.

Spurred by a 2010 Commercial Aviation Safety Team (CAST) report that analyzed patterns in aircraft accidents over the past decade, NASA has launched a three-year effort dedicated to tackling this deadly problem.

For more details, read the full case study here.

Topics: Case study, Eureqa, NASA

Customer Spotlight: Profusion Analytics

Posted by Jon Millis

13.10.2014 10:16 AM

Customers from energy to healthcare to communications are leveraging Eureqa to transform their data into gold. As customers tell us their stories, we’ll continue to share them with you on our blog. Hopefully, they’ll inspire you to start thinking about how your company can leverage machine intelligence for a significant competitive advantage. Contact us if you have a Eureqa case study you would like to share.

Knowledge is power. For cable providers, it’s also more efficient troubleshooting, happier customers and more revenue. Yet until recently, knowledge about how to optimize cable networks and detect service disruptions was underwhelming. If a customer or neighborhood faced a connection problem, cable companies would dispatch a number of vehicles in a “truck roll,” sending employees and thousands of dollars’ worth of equipment to tap into the local cable nodes and take the pulse of the broadcast spectrum. From there, they relied on the expertise and hunches of their employees to diagnose and patch up the issue at hand.

Fortunately, advances in modem chip set technology have made virtually every modem and cable box smarter. Modems are now almost universally equipped with tiny data tracking mechanisms that give cable providers a transparent view of their downstream broadcast spectrum. This means that when a subscriber calls in with an issue, a customer service representative can work to solve it remotely, keeping subscribers happy and saving the cable company time and money. But streamlining the data acquisition process does nothing to fundamentally change how networks are monitored and mended.

Profusion Analytics, a predictive analytics company in Greenwood, Colorado, empowers some of the largest cable providers in the country to pinpoint network problems quickly and accurately, often before the customer even knows an issue existed.

For more details, read the full case study here.

Topics: Case study, Eureqa

Nutonian Piercing the Veil of Distortion Over Mission-Critical Images

Posted by Jay Schuren

12.08.2014 10:00 AM

Imaging and advanced characterization are at the heart of a range of industries, from aircraft component inspection to medical imaging to CPU manufacturing and beyond. As society continues to push the envelope of technological innovation, demand for the best possible quantitative characterization is always present. Distortion, or warping of image data, arises from the imaging equipment itself; common examples are fisheye lenses and funhouse mirrors. A little-discussed issue is that changing the instrument settings often changes the distortion, limiting peak characterization performance across fields, reducing accuracy, and escalating the time and effort required for discovery.

The majority of the innovation in imaging systems has been focused on showing ever-smaller features. But those expensive high-magnification machines still have distortion issues that no one has addressed. Inspired by work with the Air Force Research Laboratory, Nutonian has developed tools able to dynamically eliminate image distortion. Applying Nutonian’s data science automation engine, Eureqa, allows users to rapidly identify specific relationships between the instrument settings/environmental conditions and the limiting distortions.


Now, instead of pretending that image distortion either doesn’t exist or remains static across different instrument settings, companies can computationally model, predict and calculate its behavior with a high degree of accuracy based on the measurements they take from an image. Understanding these causal relationships gives users the ability to “unwarp” image distortions, enabling accurate insight and peak performance when it really matters. The implications could mean saved lives and >10x improvements in quantitative measurements for systems such as Scanning Electron Microscopes, realizing peak performance even in outdated equipment.
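To make the general idea concrete, here is a minimal, hypothetical sketch: fit the relationship between an instrument setting and a simple radial-distortion coefficient, then use the fitted model to unwarp measured image coordinates. The setting values, the polynomial fit, and the first-order radial model are illustrative assumptions only; they are not Nutonian’s or the Air Force Research Laboratory’s actual distortion model, and in practice Eureqa searches for the form of the relationship itself rather than assuming one.

import numpy as np

# Calibration data (invented for illustration): a radial-distortion coefficient
# measured at several instrument settings, e.g., magnification levels.
setting = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k_measured = np.array([1.2e-7, 1.9e-7, 3.1e-7, 4.6e-7, 6.4e-7])

# Fit a low-order polynomial relating instrument setting -> distortion coefficient.
k_model = np.poly1d(np.polyfit(setting, k_measured, deg=2))

def unwarp(points, current_setting):
    """Correct (x, y) image points for the first-order radial distortion
    predicted at the given instrument setting."""
    k = k_model(current_setting)
    r2 = (points ** 2).sum(axis=1, keepdims=True)
    # Approximate inverse of the distortion model x_d = x_u * (1 + k * r^2)
    return points / (1.0 + k * r2)

warped = np.array([[100.0, 50.0], [200.0, -75.0]])
print(unwarp(warped, current_setting=3.5))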

Whether companies are looking for cracks in aircraft turbine blades or tumors in a mammogram, current limits of characterization systems govern the status quo for early identification. Applying Eureqa to a Scanning Electron Microscope and a mammography detector over a range of conditions resulted in >10x improvements in quantitative measurements. Gain competitive advantage with access to improved detection systems that will save lives, reduce costs and accelerate the development of next generation products.

Topics: Advanced Techniques, Big data, Case study, nutonian, U.S. Air Force

Eureqa Distills the Importance of Key Factors in Porsche Performance

Posted by Jon Millis

16.06.2014 10:00 AM

Nutonian users are not just analysts at large corporations. Hobbyists find interesting, innovative ways to apply Eureqa to their daily lives and passions. Below is a guest post from one of our favorite adrenaline-seeking customers in San Diego. (If you have a story that you’d like featured on our blog, we’d love to hear from you at contact@nutonian.com.)

William Ripka, Ph.D., is a retired scientist and longtime Porsche lover. After completing graduate work at the University of Illinois, he bought his first Porsche, which he still owns: a 1967 Porsche 912. Years later, in the process of restoring it, he met a fellow 912 driver who was leaving the racing scene and selling his race-equipped 1978 911 SC. Bill bought the car and, in 2010, began racing it against other members of the Porsche Club of America (PCA).

Trained as a scientist, Bill became interested in the technical aspects of racing and how data could be used to improve performance and the racing experience. One of the challenges of PCA racing is how the cars, which span more than 50 years of production, are classified into groups to ensure competitive racing. Currently, these classifications are done subjectively, and he thought a more analytical, data-driven approach might make more sense. Bill was searching for analytics software on the internet when he came across Nutonian. Below is his story of how he used Eureqa Desktop to test the validity of the PCA’s 16-group classification system by modeling the relative influence of different car features on performance. The results may lead to a future change in the way the PCA conducts its races.

Racing in time trials and autocrosses with the Porsche Club of America (PCA) Zone 8 is governed by certain “rules” that classify cars into groups to ensure fair competition. These cars range from 1960s models all the way to present 2014 models. Over the years, vast improvements have been made in horsepower, performance, and handling. The problem is how to classify these cars into actual competitive classes, so competition within a class provides a fair competitive environment.

Classes are currently determined by calculating performance level points based on horsepower, weight, year of model introduction, and wheel size. Added to these base points are points assessed for tire size over a specified standard size. Finally, performance points are added for any suspension, power train, fuel injection, and other modifications made over the stock configuration that could improve performance and handling. This results in an overall point score for a particular car that determines its class (CC01 to CC16).

While there is general agreement about the qualitative importance of these various factors, there has been no attempt to quantify them. Rather, the actual points assessed are based on the subjective judgment of a few experienced mechanics and drivers. This has led to some bizarre classifications: one class, CC05, contains a 1988 924S (160 hp), a 1986 911 Turbo (231 hp), a 1993 RS America (247 hp), and a 2010 987 Boxster (255 hp). This strongly suggests the relative point assessment for the various factors needs to be adjusted.

Determining the relative importance of the various factors and assessing appropriate performance points requires a driver. The problem is that drivers vary widely in their skills; a poor driver in a powerful car may not be competitive with an excellent driver in a much less powerful one. The only way one could objectively determine, for example, the relative importance of a tire size change versus a torsion bar change on performance would be to have the same expert driver drive each car at its limit or have a robot drive the cars. Obviously, that isn’t possible.

Photo: Bill leaving other drivers in the dust

In trying to address this problem, data was compiled for the top 15 drivers in the PCA San Diego region at various tracks over the last three years. The assumption is that these drivers are all excellent drivers with comparable skills, driving their various cars at the limit. If one accepts this assumption, it allows comparisons of different cars with different setups.

To do this, an “index” was calculated based on each driver’s best time relative to the top time of day (TTOD) at a particular track on a particular day; this accounts for the length of the track, the direction driven, conditions, etc. A driver with an index of 0.9 was therefore 90% as fast as the driver with the TTOD. The individual indexes for each driver at each track were then averaged over the three years, across all the events he participated in, to get an overall index. The indexes for all drivers ranged from about 0.8 to 1.0 (TTOD). In fact, over the last two years, the average index for any one driver was essentially constant at every track, independent of the track’s length, the direction driven, and weather conditions, indicating that it is a reasonable measure of car/driver performance.

With the index for all drivers ranging from 0.8 to 1.0, the average root-mean-square deviation for any one driver was 0.02 index points or less; that 0.02 RMS represents about 2-3 seconds at these tracks. The 15 expert drivers had indexes ranging from 0.932 to 0.986, and their cars fell in a range of classes (CC09 to CC16) under the current classification scheme.
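For readers who want to reproduce the index on their own timing data, here is a minimal sketch of the calculation described above: score each driver’s best time at an event against the top time of day, then average those scores per driver. The data values and field names below are made up for illustration and are not the actual PCA San Diego records.

from collections import defaultdict

# (driver, event, best lap time in seconds); values invented for illustration
results = [
    ("Driver A", "Willow Springs 2013", 92.4),
    ("Driver B", "Willow Springs 2013", 95.1),
    ("Driver A", "Chuckwalla 2014", 118.0),
    ("Driver B", "Chuckwalla 2014", 121.7),
]

# Top time of day (fastest best time) at each event
ttod = {}
for _, event, t in results:
    ttod[event] = min(ttod.get(event, t), t)

# Per-event index = TTOD / driver's best time (so the TTOD driver scores 1.0),
# then average each driver's indexes across all events entered
per_driver = defaultdict(list)
for driver, event, t in results:
    per_driver[driver].append(ttod[event] / t)

overall_index = {d: sum(v) / len(v) for d, v in per_driver.items()}
print(overall_index)  # Driver A: 1.0, Driver B: ~0.97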

Eureqa was used to determine if actual performance, as indicated by the index value, could be correlated with the factors used in the current subjective classification scheme, i.e., did it support the current system (based on a subjective analysis), or might it suggest a different one? The current formula for assessing points to determine class is:

points = (4000/(weight/hp)) + (stock front wheel size (“) + stock rear wheel size (“) - 12) + (year - 2010) + (sum of front and rear tire sizes - 2*205) + (Performance Equipment Points)

Recasting this with coefficients for each term,

points = A*(4000/(weight/hp)) + B*(stock front wheel size (“) + stock rear wheel size (“) - 12) + C*(year - 2010) + D*(sum of front and rear tire sizes - 2*205) + E*(Performance Equipment Points)

where A=B=C=D=E=1.0, and the relative contribution of each coefficient is 1/5 or 20%.

Most of the terms above are self-explanatory. The term containing the inverted weight-to-horsepower ratio, multiplied by 4000, is meant to create a steepening curve that assigns progressively higher points for each incremental improvement in the power-to-weight (PW) ratio.
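For reference, the current formula translates directly into a short function with every coefficient equal to 1.0. The argument names and the example car below are illustrative; wheel sizes are in inches and tire sizes are the front and rear section widths in millimeters, per the 205 mm standard above.

def class_points(weight, hp, front_wheel, rear_wheel, year,
                 front_tire, rear_tire, perf_points):
    """Current PCA Zone 8 classification points, all coefficients equal to 1.0."""
    return (
        4000.0 / (weight / hp)                 # power-to-weight term
        + (front_wheel + rear_wheel - 12)      # stock wheel size term (inches)
        + (year - 2010)                        # model year term
        + (front_tire + rear_tire - 2 * 205)   # tire width over the 205 mm standard
        + perf_points                          # performance equipment points
    )

# Made-up example car: 2800 lb, 247 hp, 16" wheels, 1993 model, 225 mm tires, 10 equipment points
print(class_points(2800, 247, 16, 16, 1993, 225, 225, 10))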

The points determine the class and are meant to be a measure of car performance. If this is true, the points should have a linear relationship to the index described above. A plot of the total class points for each of the 15 drivers/cars versus the performance index is shown in Table 1. The correlation is not particularly good.

Table 1. Index vs. Total Points (current classification system)

The data used in the Eureqa analysis is shown in Table 2 and normalized in Table 3.

Table 2. Raw Data



Table 3. Normalized Data 


The Eureqa analysis was run with the following search formula:

Index

The resulting formula from the analysis is shown below:

index = 0.02247649635*wheel + 0.01287558429*performance + 0.008444703921*tire + 0.007289825753*year + 0.004846957046*hpwt

Table 4 shows the Eureqa plot of observed vs. predicted index values.

Table 4. Observed vs. Predicted Index Values


Based on the formula found, the relative contribution of each coefficient for each term is:

(year - 2010): 9.1%

(4000/(wt/hp)): 9.1%

(front + rear wheel - 12): 42%

(front + rear tire - 2*205): 15.8%

(Performance Equipment Points): 24.1%

The old formula weights each coefficient equally at 20% (see above). Applying approximate correction factors of 0.5 (power-to-weight), 2 (wheels), 0.5 (year), 0.8 (tires), and 1.2 (performance equipment) gives:

new points = 0.5*(4000/(weight/hp)) + 2*(stock front wheel size(“) + stock rear wheel size (“)-12) + 0.5*(year-2010) + 0.8*(sum of front and rear tire size -2*(205)) + 1.2*(Performance Equipment Points)

This suggests the power-to-weight term (4000/(wt/hp)) was overestimated in importance and should carry half its current weight. The wheel term (front + rear - 12), on the other hand, should be doubled. The tire term (front + rear - 2*205) should be weighted at roughly 0.8x its current value, and the performance equipment term at about 1.2x.
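Expressed the same way as the earlier sketch, the re-weighted formula is a one-line change: scale each term by its correction factor. The argument names remain illustrative.

def new_class_points(weight, hp, front_wheel, rear_wheel, year,
                     front_tire, rear_tire, perf_points):
    """Re-weighted classification points using the approximate correction factors above."""
    return (
        0.5 * (4000.0 / (weight / hp))
        + 2.0 * (front_wheel + rear_wheel - 12)
        + 0.5 * (year - 2010)
        + 0.8 * (front_tire + rear_tire - 2 * 205)
        + 1.2 * perf_points
    )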

For years, Zone 8 Porsche enthusiasts assumed they were racing on a level playing field, their group classifications determined as a function of car performance and modifications. The PCA subjectively assessed the relative importance of weight-to-horsepower ratio, wheel size, car model year, and equipment modifications. After feeding race results and car metrics into Eureqa, it was determined that the current car classification model may be flawed, and a new one with more precise weights for each input may be warranted. The ideal result? More competitive races, between competitive drivers, in some of the most competitive performance automobiles in the world.

Let us know what you think of Bill’s research in the comments section below, and don’t forget to take Eureqa for a free spin on a project of your own.

Topics: Case study, Eureqa, Porsche Club of America

Customer Spotlight: University of Vermont

Posted by Jess Lin

17.02.2014 09:00 AM

We have amazing customers doing even more amazing things with their data. As our customers share their stories with us, we will share them with you here on our blog. Hopefully they will inspire you to think about what more you could be doing with your own data! Contact us if you have a Eureqa case study you would like to share.

Network theory has found strong applications in fields from the social sciences to physics. One prominent application is to social networks, where it is used to examine the structure of relationships between different social entities. Using network theory, researchers can discover nodes (people) that are hidden within a network, and even pinpoint fictional nodes that have been implanted into it.

Josh Bongard, professor of computer science at the University of Vermont, was already familiar with the Eureqa software from communications with Michael Schmidt. He and fellow professor Jim Bagrow decided to pit Eureqa against their enormous dataset of simulated social networks to create models that could discover missing nodes.

Bongard explains, “We could have used a support vector machine or some other state-of-the-art linear or nonlinear regression method to get a low-order polynomial approximation, but that wouldn’t have given us any real insight into the nature of the relationship between information flow and network structure.”

For more details, read the full case study here!

Jess

Topics: Case study, Eureqa
