
Machine Intelligence with Michael Schmidt: OpenAI and doomsday artificial intelligence

Posted by Michael Schmidt

01.06.2016 09:30 AM

Speaking at the Open Data Science Conference (ODSC) last week, I discussed where artificial intelligence is going, what it will automate, and what its impact will be on science, business, and jobs. While the impact from Eureqa has been overwhelmingly positive, many are warning about a darker future:

Robot.jpg

“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, he’s sure he can control the demon [but it] doesn’t work out.” –Elon Musk

In the quote above, Elon Musk is worried about a very specific area of AI research – the sentient, autonomous AI and robotics popularized in movies.

In fact, the press has characterized Eureqa as a “Robot Scientist” as well, speculating that advanced tasks like scientific inquiry may one day be automated by machines. Eureqa, however, was born out of the challenge of accelerating and scaling the complexity of problems we can tackle and solve – not of simply mimicking human behavior.

The areas of AI focused on simply learning tasks and replicating human behavior (e.g. IBM Watson or Google AlphaGo) are much hazier. It’s not clear what type of impact this trajectory will have.

Last month, the research group OpenAI, founded to support “beneficial” AI research, signaled that it is focused entirely on this type of AI. Its platform, OpenAI Gym, lets researchers develop and compare reinforcement learning algorithms. Reinforcement learning is a class of machine learning used for tasks like chat bots, video games, and robots. Interestingly, it doesn’t typically start with an existing data set to learn from; instead it attempts to learn to control an agent (like a robot) purely from the set of actions it can take and the state it currently observes.
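To make this concrete, here is a minimal sketch of what an OpenAI Gym program looks like: an agent in the classic CartPole environment that simply takes random actions. The random policy is only a stand-in; an actual reinforcement learning algorithm would replace it with one that improves based on the rewards it receives.

import gym

# Create a simple control task: keep a pole balanced on a moving cart.
env = gym.make("CartPole-v0")

for episode in range(5):
    state = env.reset()                      # the agent's current observation
    total_reward = 0.0
    done = False
    while not done:
        action = env.action_space.sample()   # placeholder policy: pick a random action
        state, reward, done, info = env.step(action)  # act, then observe the outcome
        total_reward += reward
    print("episode", episode, "total reward", total_reward)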

The downside of reinforcement learning is that it is not immediately applicable or natural for most business problems I observe today. That is, businesses are not clamoring for chat bots or interactive agents; they tend to have more data than they can analyze and are invested in putting it to work instead.

Of all the areas of machine learning and AI, reinforcement learning may be the furthest from practical use today. But early research is producing some exciting results, such as learning to play video games like Mario through pure trial and error.

It’s important to keep in mind, however, how far there is to go before the sentient AI systems Musk and OpenAI are alluding to may arise. Last week the White House Office of Science and Technology Policy concluded that despite improvements in areas like machine vision and speech understanding, AI research is still far from matching the flexibility and learning capability of the human mind. That said, I’ll be rooting for OpenAI to keep this area of AI beneficial for us as it matures.

Topics: Artificial intelligence, OpenAI, Reinforcement learning

Machine Intelligence with Michael Schmidt: IBM’s Watson, Eureqa, and the race for smart machines

Posted by Michael Schmidt

16.05.2016 11:12 AM

Three months ago I spoke at a conference affectionately titled “Datapalooza” sponsored by IBM. My talk covered how modern AI can infer the features and transformations that make raw data predictive. I’m not sure exactly how many IBM people were in the crowd, but two IBM database and analytics leads grabbed me after the talk:

“We love what you’re doing. The Watson team is attempting to do things like this internally but is nowhere near this yet.” – [names withheld]

What’s interesting is that Watson has been coming up more and more recently when I speak to customers. The billions of dollars IBM has invested in marketing Watson have created an air of mystery and hype around what artificial intelligence can do. In fact, IBM expects Watson to grow to over $1B per year in revenue in the next 18 months. Yet we haven’t seen any prospect choose Watson over Eureqa to date. So what’s going on?

Michael_Schmidt_presenting_Machine_Intelligence_at_IBM_Datapalooza.jpg

Speaking at IBM’s Datapalooza (2016) in Seattle, WA.

I remember the excitement in the AI lab (CCSL) at Cornell University when IBM’s Watson computer competed in the game show Jeopardy in 2011. A group of us watched live as the computer beat the show’s top player, Ken Jennings.

IBM had pioneered one of the most interactive AI systems in history. Instead of simulating chess moves farther ahead than before (as its predecessor Deep Blue had done), Watson appeared to actually “think.” It interpreted speech and searched data sources for a relevant response. It inspired similar technology, like Apple’s Siri and Microsoft’s Cortana, which came out over the next few years.

Unlike Apple, Google, Facebook, and others, however, IBM recognized an enormous opportunity in the market. Every business in the world today stockpiles data faster than it can be analyzed, and literally hundreds of billions of dollars in value lie in applications of this data. Perhaps the technology that could win quiz competitions like Jeopardy could unlock some of this value as well. IBM decided to step out of the safe confines of a specific application and attempt to tackle business data and real-world problems with commercial deployments of Watson.

Google_searches_for_IBM_Watson.png

Google search interest in IBM Watson over 10 years, tied to its Jeopardy appearance.

Coincidentally, I began working on the preliminary technology behind Eureqa around the same time. Eureqa was focused on a broader challenge: instead of trying to interpret sentences and look up responses, it was tasked with deducing how any arbitrary system behaves, given only data and observations of that system. It became the first AI that could think like a scientist and produce new explanations for how any system worked.
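Eureqa itself is proprietary, but the flavor of the approach – searching over candidate formulas until one explains the observations – can be sketched with the open-source gplearn library. The synthetic data and settings below are purely illustrative and are not drawn from Eureqa:

import numpy as np
from gplearn.genetic import SymbolicRegressor

# Synthetic "observations" generated by a hidden law: y = x0^2 + 3*x1
rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] ** 2 + 3 * X[:, 1]

# Evolve candidate formulas built from basic operators until one fits the data.
model = SymbolicRegressor(population_size=1000, generations=20,
                          function_set=("add", "sub", "mul"),
                          random_state=0)
model.fit(X, y)

# The output is a readable expression, not just a black-box prediction.
print(model._program)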

The similarity, and the power, of both Eureqa and Watson is that they are examples of Machine Intelligence – meaning the answers they output can be meaningfully interpreted and understood, as opposed to some statistical prediction or data visualization. But this is where the similarities end.

Watson’s great challenge has been adapting technology built for answering trivia questions to real business problems. Despite the prevalence of unstructured text data, very few new business problems appear to be blocked by the ability to look up relevant information in text. From the WSJ: “According to a review of internal IBM documents and interviews with Watson’s first customers, Watson is having more trouble solving real-life problems.”

The data that most businesses have today consists of log data, event data, sensor data, sales data, or other numeric data (data where Watson’s core technology doesn’t apply). The major questions they need answered are about what causes other things to happen, what triggers or blocks certain outcomes, or simply “what’s possible with the data I have, and how do I even begin?” To me, the key interest and success behind Eureqa has come from its applicability to real business data and problems: it finds physical relationships and interpretable models that answer these types of questions directly.

Earlier this year, IBM announced it is splitting the different components inside Watson into individual services rather than trying to deliver a complete solution for customers. IBM may no longer be a pioneer in the space, but perhaps it’s starting to acknowledge what businesses need most today.

 

Topics: Artificial intelligence, IBM Watson, Machine Intelligence

Analyze This: 6 Ways Retailers Put Artificial Intelligence To Work

Posted by Jon Millis

04.05.2016 12:08 PM

This week Retail TouchPoints, a leading retail analytics media company, ran an article by Scott Howser, our SVP of Products and Marketing. Scott is seemingly always on the cutting edge of big data, data science, and AI, having spent time driving success at companies like Vertica (acquired by HP), Hadapt (acquired by Teradata), and now Nutonian. Retail TouchPoints sought Scott’s expertise to find out how retailers are leveraging the latest hot technology: artificial intelligence. The article, “Analyze This: 6 Ways Retailers Put Artificial Intelligence To Work,” is reproduced below.

Retailers_put_artificial_intelligence_to_use-1.jpg

When a shopper walks into a retail store or visits a retailer’s website, data science isn’t likely top-of-mind. But everything — whether physical or online — is strategically placed to optimize the customer experience and maximize sales for the retailer, or at least it should be. From store location and product placement to which items are on sale and which employees are working, there is strategy behind these seemingly simple things to ensure the retailer makes the most of every aspect of its operation. And early adopters are turning to artificial intelligence (AI) to solve their most pressing data-driven problems.

Here are six ways retailers are putting AI to work.

New Store Optimization

When a new retail store opens in a certain location, it isn’t a result of blindly throwing darts at a map to see what sticks. For many years, retail companies have been applying analytics to determine where to add locations, but new technology is taking store optimization to a new level.

Looking at historical data such as sales, demographics, distance from competitors, nearby events and more allows retailers to be strategic about where and when to open a new location. AI applications that learn from this data can do more than just create and sort a list of best locations to open a store; they can actually provide retailers with an understanding of why, based on identifying the most important “drivers”/variables that contribute to new store success.

One of Nutonian’s (disclaimer: my employer) customers, for example, estimates that the difference between building a new store in a “good” location versus an “average” location equates to roughly $30 million in extra revenue per store per year. Based on sales data per store and per product, demographics, and proximity to competitors, this retailer can determine exactly where to build its next store — and even the type of store (large brick-and-mortar, outlet) it should be.

The same theory can be applied to closing poorly performing stores. By analyzing the aforementioned data with AI, this company discovered that it saves $10 to $15 million annually by not building stores in “bad” locations.
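As a rough sketch of what “identifying the drivers” can look like in code, the snippet below fits an interpretable model to a hypothetical table of store attributes and ranks which variables matter most for first-year sales. The file, column names, and choice of a random forest are assumptions made for illustration, not a description of any customer’s actual setup.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical table: one row per existing store, including the outcome of interest.
stores = pd.read_csv("store_history.csv")    # assumed file and columns, for illustration only
features = ["median_income", "population_density", "miles_to_competitor",
            "nearby_events_per_year", "parking_spaces"]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(stores[features], stores["first_year_sales"])

# Rank the candidate drivers of new-store success.
drivers = pd.Series(model.feature_importances_, index=features).sort_values(ascending=False)
print(drivers)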

Staffing

It’s Black Friday, your store is packed with customers, and you are dramatically understaffed despite feeling prepared for the busiest shopping day of the year. The reality is that unless companies are making business decisions — in this case, staffing properly — based on accurate data, a store can be left with either skyrocketing operating costs or a poor customer experience.

It goes without saying that retailers can expect heavy traffic during the holidays, but how do retail companies determine precisely how to staff stores? Should they add two more employees during this time? Five? Furthermore, what about staffing different locations?

Predictive models built with information such as historical traffic, sales and marketing efforts during certain times show retailers how to dynamically staff stores based on expected traffic. The result is lower costs for the retailer and a better store experience for the customer — a win-win.

Supply Chain Optimization

Inventory management is a huge challenge for retail companies. An excess of supplies leads to low turnover and decreased profitability. Yet stock-outs result in backorders, lost sales and dissatisfied customers. It’s a difficult balance that has significant impact on retailers’ revenue streams.

Retailers require forecasting models that show what items will be needed when and where — i.e. based on this time of year and this store location, here are the products you should stock to keep inventory and costs low, while maximizing sales.

Retail companies can achieve this by plugging data such as past sales for different products, events, marketing campaigns, etc. into AI apps that build predictive models to give retailers answers — prescriptive measures to ensure shipping and delivery logistics are optimized, and operational funds are allocated effectively.

Marketing

Marketing is important for any business, and knowing your company is making the best use of its marketing investments is critical. It’s also difficult to measure ROI.

Running campaigns is integral to growing, engaging, and converting audiences, but it’s not always easy to identify which campaigns are successful and under what conditions.

By looking at historical sales, marketing campaigns, web site discounts, events, and competitor events, retail companies can discover through data modeling applications that, for instance, Campaign “X” converted the highest number of customers via email blasts when distributed in the evening using specific keywords.

With this level of insight, retailers can use results to optimize marketing spend and know they are allocating marketing resources effectively.

Sales Forecasting

Accurate sales forecasting affects all facets of a company and its operations — revenues, resource and product planning, investor relations, etc. — and without a clear understanding of what’s driving ROI, retailers can’t effectively manage working capital requirements.

Analyzing and building predictive models from data such as historical sales for different products, marketing, event schedules, and weather provides companies with a clear picture of the road ahead, explains why it’s going to happen, and details how retailers can optimize their desired outcomes.
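As an illustration only, a toy version of such a model might look like the sketch below, with invented column names and a plain linear regression standing in for whatever a retailer would actually deploy:

import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical weekly history: sales plus the factors believed to drive them.
history = pd.read_csv("weekly_sales.csv")    # assumed file, for illustration only
features = ["marketing_spend", "promo_event", "avg_temperature", "holiday_week"]

model = LinearRegression()
model.fit(history[features], history["units_sold"])

# The coefficients are readable: they estimate how much each factor moves weekly sales.
for name, coef in zip(features, model.coef_):
    print(name, round(coef, 1))

# Forecast an upcoming week from planned marketing, promotions, and the weather outlook.
upcoming = pd.DataFrame([{"marketing_spend": 25000, "promo_event": 1,
                          "avg_temperature": 62, "holiday_week": 0}])
print("forecast:", model.predict(upcoming[features])[0])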

Improved Hiring Process

Employee turnover can cost companies thousands of dollars, so it’s critical to minimize costs by making sound personnel decisions before time and money are wasted. During the hiring process, retailers plug data such as historical employee performance and attributes (e.g. background, previous sales experience, previous jobs, focus, etc.) into modeling apps to gain a ranked list of the best potential candidates, and a list of growth areas for current candidates.

Retailers have all this data and more available to them. It’s just a matter of knowing how to use it and learn from it. Savvy retailers are using AI apps to create predictive models that will give them competitive insight, increased sales and improved customer experiences.

Topics: Artificial intelligence, New store optimization, Retail, Retail TouchPoints, Sales forecasting, Staffing

NSF Uses Artificial Intelligence to Tackle Illegal Tiger Poaching

Posted by Jon Millis

26.04.2016 09:23 AM

AI to the rescue. Forget doomsday scenarios of robots transforming humans into paperclips. This time it’s more positive. The National Science Foundation (NSF) announced it’s turned to artificial intelligence as a critical weapon in the fight against poaching.

Whether killed for skins, “medicine,” or trophies, tigers have been devastated by illegal hunters. Poachers have driven the population of wild tigers down from 60,000 in the early 1900s to just 3,200 today. And with protection relying heavily on human capital and resources that just aren’t there, governments and nonprofits have to get smarter about how they enforce the rule of law before tigers (as well as other species, forests, and coral reefs) disappear.

Tigers.jpg

Currently, ranger patrol routes are mostly “reactive,” keeping tabs on the areas that have been hit hard before and preventing what they can. An NSF-funded team at the University of Southern California, however, has built an AI-driven application called Protection Assistant for Wildlife Security (PAWS) that makes patrolling more predictive, and hence more effective. PAWS incorporates data on past patrols, evidence of poaching, and complex terrain information like topography to plan patrol routes that cover the areas where poaching is most likely, while minimizing elevation changes to save time and energy. As it receives more data, the system “learns” and improves its patrol planning. The application also randomizes patrol routes to avoid falling into predictable patterns that poachers can anticipate.
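The published PAWS work frames this game-theoretically; the toy sketch below captures only the two ideas described here – scoring candidate routes by predicted poaching risk net of elevation cost, then sampling among them rather than always choosing the single best route, so patrols stay hard to anticipate. The routes, numbers, and weighting are invented for illustration.

import random

# Each candidate patrol route: (name, predicted poaching risk covered, elevation gain in meters)
routes = [
    ("ridge_loop", 0.72, 540),
    ("river_trail", 0.65, 120),
    ("east_boundary", 0.58, 300),
    ("north_spur", 0.40, 80),
]

ELEVATION_PENALTY = 0.0005   # assumed trade-off between risk covered and climbing effort

def score(route):
    _, risk, elevation = route
    return risk - ELEVATION_PENALTY * elevation

# Rather than always patrolling the single best-scoring route (a pattern poachers could learn),
# sample a route at random, weighted by its score.
weights = [max(score(r), 0.01) for r in routes]
todays_route = random.choices(routes, weights=weights, k=1)[0]
print("Patrol today:", todays_route[0])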

The NSF said that since 2015, the non-governmental organizations Panthera and Rimbat have used PAWS to protect forests in Malaysia. The research won the Innovative Applications of Artificial Intelligence award for deployed applications, recognizing it as one of the best AI applications with measurable benefits.

This is not the first instance of leveraging AI for good. Unfortunately, the public is bombarded with negative depictions of AI, with stories like targeted online ads and Facebook’s almost eerie knowledge of its user base dominating the headlines. That’s because negativity sometimes sells. The truth is, like any technological advancement, the power of AI is in the hands of its users. AI can vastly improve human productivity and thus raise living standards, solve hard problems, and produce new breakthroughs. As more applications like PAWS come to light, we hope that more people will see the incredible good that comes from the power of data, supplementing human expertise to drive toward solutions for the most pressing social, economic, and environmental issues of our day.

Topics: Artificial intelligence
