Fun LoL to Teach Machines How to Learn More Efficiently

DARPA seeks mathematical framework to characterize fundamental limits of learning

It's not easy to put the intelligence in artificial intelligence. Current machine learning techniques generally rely on huge amounts of training data, vast computational resources, and time-consuming trial and error. Even then, the process typically results in learned concepts that aren't easily generalized to solve related problems or that can't be leveraged to learn more complex concepts. The process of machine learning could no doubt be made more efficient, but by how much? To date, very little is known about the limits of what could be achieved for a given learning problem, or even how such limits might be determined. To find answers to these questions, DARPA recently announced its Fundamental Limits of Learning (Fun LoL) program. The objective of Fun LoL is to investigate and characterize the fundamental limits of machine learning, with supporting theoretical foundations, to enable the design of systems that learn more efficiently.


"Weve seen advances in machine learning and AI enabling computers to beat human champions playing games like Jeopardy, chess, and most recently Go, the ancient Chinese strategy game," said Reza Ghanadan, DARPA program manager. "Whats lacking, however, is a fundamental theoretical framework for understanding the relationships among data, tasks, resources, and measures of performance—elements that would allow us to more efficiently teach tasks to machines and allow them to generalize their existing knowledge to new situations. With Fun LoL were addressing how the quest for the ultimate learning machine can be measured and tracked in a systematic and principled way."

As it stands now with machine learning, Ghanadan noted, even a small change in task often requires programmers to create an entirely new machine-teaching process. "If you slightly tweak a few rules of the game Go, for example, the machine won't be able to generalize from what it already knows. Programmers would need to start from scratch and reload a data set on the order of tens of millions of possible moves to account for the updated rules."

These kinds of challenges are especially pressing for the Department of Defense, whose specialized systems typically don't have large training sets to begin with and can't afford to rely on costly trial-and-error methods. Additionally, defense against complex threats requires machines to adapt and learn quickly, so it is important that they be able to generalize creatively from previously learned concepts.

Fun LoL seeks information regarding mathematical frameworks, architectures, and methods that would help answer questions such as:

How many training examples are necessary to achieve a given level of accuracy? (e.g., Would a training set with fewer than the 30 million moves that programmers provided to this year's winning machine have sufficed to beat a Go grand champion? How do you know? One classical, simplified way of bounding this is sketched after this list.)
What are the important trade-offs and their implications? (e.g., size, performance accuracy, and processing-power considerations)
How "efficient" is a given learning algorithm for a given problem?
How close does the expected performance of a learning algorithm come to what is achievable at the fundamental limit?
What are the effects of noise and error in the training data?
What are the potential gains possible due to the statistical structure of the model generating the data?
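
Existing learning theory offers only partial handles on the first of these questions. As one illustration (not the framework Fun LoL is soliciting), the textbook PAC bound for a finite hypothesis class ties the number of labeled examples to the desired accuracy and confidence. The sketch below uses purely hypothetical numbers chosen to show the shape of the trade-off.

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Classic PAC bound for a finite hypothesis class (realizable case):
    m >= (1/epsilon) * (ln|H| + ln(1/delta)) labeled examples suffice for a
    consistent learner to reach error <= epsilon with probability >= 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# Purely illustrative numbers: one million candidate hypotheses,
# 1% target error, 95% confidence.
print(pac_sample_bound(hypothesis_count=1_000_000, epsilon=0.01, delta=0.05))
# -> 1682 examples
```

Bounds of this kind are loose and hold only under strong assumptions; a framework that tightens and generalizes them across data, tasks, and resources is precisely the sort of result the program is after.
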
As a historical example of what Fun LoL is trying to achieve, Ghanadan pointed to a mathematical construct called Shannon's theorem that helped revolutionize communications theory. Shannon's theorem established a mathematical framework showing that for any given communication channel, it is possible to communicate information nearly error-free up to a computable maximum rate through that channel. The theorem addresses trade-offs in bandwidth, source data distribution, noise, methods of transmission, error-correction coding, measures of information, and other factors that affect determinations of communications efficiency. "Shannon's theorem provided the fundamental basis that catalyzed the widespread operational application of modern digital and wireless communications," Ghanadan said. "The goal of Fun LoL is to achieve a similar mathematical breakthrough for machine learning and AI."
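
For concreteness, the Shannon-Hartley form of that result gives the capacity of a noisy channel in closed form. The sketch below, with illustrative (not sourced) channel parameters, shows the kind of computable limit the program hopes to find an analogue of for learning.

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity of an additive-white-Gaussian-noise channel:
    C = B * log2(1 + S/N) bits per second. Below this rate, coding schemes
    exist that drive the error probability arbitrarily low; above it, none do."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Illustrative numbers: a 1 MHz channel at a 30 dB signal-to-noise ratio.
snr = 10 ** (30 / 10)                      # 30 dB corresponds to a factor of 1000
print(shannon_capacity(1e6, snr))          # ~9.97e6 bits per second
```

Fun LoL asks whether a comparably crisp, computable limit, with its attendant trade-offs among data, noise, and resources, can be established for learning problems.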

DARPA's Fun LoL is seeking information that could inform novel approaches to this problem. Technology areas that may be relevant include information theory, computer science theory, statistics, control theory, machine learning, AI, and cognitive science.

For more information, see the Request for Information on FedBizOpps at http://go.usa.gov/cJGsH. The deadline for responses is June 7, 2016.
