Sony Joins Forces with Cogitai to Conduct Research and Development for the Next Wave of Artificial Intelligence
Sony has invested in Cogitai and the two companies plan to collaborate towards the development of novel AI technologies using deep reinforcement learning with prediction technology that could be used as the basis for the next generation of AI applications and products.
NEW YORK, May 17, 2016 /PRNewswire/ -- Sony Corporation today announced that it has teamed with Cogitai, an artificial intelligence (AI) start-up focused on next generation technologies. Specifically, Sony has invested in Cogitai and the two companies plan to collaborate towards the development of novel AI technologies using deep reinforcement learning with prediction technology that could be used as the basis for the next generation of AI applications and products.
Cogitai was founded in September 2015 by three leading AI researchers, Mark Ring (CEO of Cogitai, and a pioneer in continual learning and reinforcement learning), Peter Stone (President and COO of Cogitai, a professor at the University of Texas at Austin, and a leader in reinforcement learning, multiagent systems and robotics), and Satinder Singh Baveja (Chief Scientist and CTO of Cogitai, a professor at the University of Michigan, and a leader in reinforcement learning, intrinsic motivation, and the combination of deep learning and reinforcement learning). The company aims to develop AI technology that empowers machines to learn continually from interaction with the real world, enabling everyday things that sense and act to get smarter, more skilled, and more knowledgeable with experience. In addition to its distinguished founders, Cogitai has also assembled a "Brain Trust" consisting of a number of the world's best academics in AI, who will be actively engaged in technology development.
AI has evolved over the past 60 years. In the early days, AI was based on programming computers to carry out specific tasks that had previously required human intelligence. However, programming machines to act intelligently turned out to be far more difficult than researchers had imagined, because of the richness and complexity underlying human knowledge and the sophistication of human understanding and perception. Machine learning emerged as a critical method to overcome this problem because it allowed machines to be trained with examples rather than through explicit programming. This approach is very effective when large amounts of high-quality examples are available. Deep learning, which took its current shape around 2010, is a particularly powerful form of machine learning capable of noticing fine subtleties in the data and making human-quality distinctions. However, to use these machine-learning methods, humans still have to label the data.
Reinforcement learning emerged in the 1980s, inspired by behavioral psychology. It enabled the development of intelligent systems that make their own choices: choosing actions, receiving rewards (and penalties), and improving their action selection from experience. This was a major breakthrough, but the representations of both knowledge and the world still had to be created by humans, which limited these systems to certain kinds of narrowly defined tasks. The combination of reinforcement learning and deep learning - called deep reinforcement learning - is considered to be the breakthrough that overcomes these limitations. In fact, the power of deep reinforcement learning was demonstrated by the recent success of AlphaGo, a Go-playing AI system from Google DeepMind. Such AI systems are able to learn to outperform humans on complex but still narrowly defined tasks.
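The action-and-reward loop described above can be illustrated with a minimal tabular Q-learning sketch, one of the classic reinforcement-learning algorithms. The two-state toy environment below is purely illustrative (it is not from the release or from either company's technology); it simply shows an agent improving its action selection from experienced rewards alone.

```python
import random

# Toy deterministic environment (illustrative only): two states, two actions.
# From state 0, action 1 moves the agent to state 1; from state 1, action 0
# earns a reward of 1 and returns to state 0. All other moves give no reward.
N_STATES, N_ACTIONS = 2, 2

def step(state, action):
    """Return (next_state, reward) for the toy environment."""
    if state == 0 and action == 1:
        return 1, 0.0
    if state == 1 and action == 0:
        return 0, 1.0
    return state, 0.0

# Tabular Q-values: the agent's learned estimate of each action's long-term worth.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

random.seed(0)
state = 0
for _ in range(2000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: nudge the estimate toward reward + discounted future value.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

# The greedy policy learned purely from experience: move right from state 0,
# then collect the reward from state 1.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)  # expected: [1, 0]
```

Deep reinforcement learning replaces the small Q table with a deep neural network, which is what lets systems such as AlphaGo handle state spaces far too large to enumerate.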
Sony and Cogitai both consider the next challenge for AI to be the creation of systems that can autonomously and continually learn from experience - autonomous cognitive development systems (or continual learning systems) that exhibit flexible competence and can learn to react properly in a wide variety of task domains.
These systems will allow machines to autonomously build up their own knowledge and skills from interactive experience with the real world, and then to share and extend their knowledge, skills and understanding with each other.
Sony has a long history of R&D in AI. In 1999, Sony announced AIBO, a fully autonomous robot, which featured many state-of-the-art AI technologies such as face recognition and speech recognition. These technologies were then incorporated into Sony's products and services, such as digital cameras and personalized TV program recommendation services.
In parallel, Sony established the Sony Intelligence Dynamics Laboratory in 2004, which studied the autonomous development of intelligence, an approach called Intelligence Dynamics. The technical hallmarks of Intelligence Dynamics are learning by prediction and self-development driven by intrinsic motivation - the capability of machines to develop skills autonomously as open-ended systems. These activities were transferred to Sony's corporate R&D group in 2006, and Sony has continued to study AI technologies, including deep learning as well as reinforcement learning.
Currently, activities based on AI technologies are managed by the System R&D Group at Sony Headquarters, where innovative products and services are created, including the Augmented Reality Experience (SmartAR) incorporated into the "AR Effect" app for Xperia™ smartphones, the activity recognition used in Xperia's Lifelog app, and the facial recognition login capabilities of PlayStation® 4. In February, Sony Mobile showcased Xperia Agent, which responds to a user's voice, provides useful information, assists with communication through voice and gestures, and controls home appliances.
In addition, in March 2016 Sony's Future Lab Program unveiled Project N, a neckband-style wearable device that provides a totally hands-free interactive interface for accessing music and audio information, without the need for an earpiece. Project N utilizes advanced audio signal processing and robust speech recognition.
Alongside such activities, Sony Computer Science Laboratories, Inc. (Sony CSL) conducts a broad range of AI research, from basic theoretical studies (such as computational information geometry, causality inference from noisy open-ended data, and the evolution of language and perception) to applications of AI in interactive music experiences, manufacturing processes, and numerous other domains. It is led by Dr. Hiroaki Kitano, a former president of the International Joint Conference on Artificial Intelligence (a premier international AI society), a founding president of RoboCup, and winner of the Computers and Thought Award in 1993. Sony CSL, located in Tokyo and Paris, represents a stronghold of AI research in the global AI community.
"Sony and Cogitai have complementary views on the future direction of AI," said Toshimoto Mitomo, Corporate Executive in charge of Intellectual Property and Mid-to-Long Term Business Development of Sony Corporation. "By working with Cogitai we can combine the expertise of some of the best brains in the field of AI with our world class engineers and technologies, such as sensing technologies, to develop products that may truly change the world."
"We are thrilled to have Sony's profound support in helping us to enable the next generation of artificial intelligence," said Dr. Mark Ring, one of Cogitai's three founders. "There are many companies pursuing different avenues with AI, but we feel confident that the technologies we plan to develop with Sony are the future direction for the industry."
"We believe that AI will be incorporated into numerous products and will eventually become commonplace," said Dr. Hiroaki Kitano, President and Chief Executive Officer, Sony CSL. "As this evolution happens, the most important thing to focus on is the benefit the technology brings to consumers. Because of this, the choice of domains, value propositions, and how one can align technologies to enable them to work together will be crucial. From this perspective the collaboration between Cogitai and Sony is a major milestone for the next wave of AI."
About Cogitai's Founders:
Mark Ring, Ph.D.
Dr. Ring's research revolves around a single focus: Continual Learning in Artificial Intelligence, which tries to answer one question: if you give an agent a single algorithm at its inception and then stand back and let it learn all on its own, what do you put into that algorithm to allow the agent to continue to learn, develop, and improve forever? How should an artificial agent begin the unending process of learning and development, so that it is constantly improving its ability to comprehend and interact with the world? His 1994 dissertation, Continual Learning in Reinforcement Environments, explored this and related issues in depth. Although many of the ideas discussed in the dissertation have since come into favor, at the time of their publication much of the work was far from the beaten path.
There are many potential mechanisms for artificial continual learning, but the first one he developed was called the Temporal Transition Hierarchies (TTH) algorithm, which intelligently and incrementally extended an agent's memory to help it resolve contradictions. More recent work has focused on organizing behaviors according to their similarities (using the "Motmap") and making predictions about long-term behaviors (Forecasts).
He has also worked on AI safety issues from a mathematical perspective using methods based on the theory of computation. Dr. Ring received his Ph.D. (1994) and his M.S. (1990) in Computer Science from the University of Texas at Austin.
Peter Stone, Ph.D.
He received his Ph.D. in 1998 and his M.S. in 1995 from Carnegie Mellon University, both in Computer Science. He received his B.S. in Mathematics from the University of Chicago in 1993.
After receiving his Ph.D., Dr. Stone continued at Carnegie Mellon as a Postdoctoral Fellow for one year. From 1999 to 2002 he was a Senior Technical Staff Member in the Artificial Intelligence Principles Research Department at AT&T Labs - Research. He then joined the faculty of the Computer Science Department at the University of Texas at Austin as an assistant professor. He was promoted to associate professor in 2007 and full professor in 2012.
Dr. Stone co-authored the papers that first proposed the robot soccer challenges around which RoboCup was founded. He is a vice president of the RoboCup Federation, the governing organization of RoboCup activities around the globe, and was a co-chair of RoboCup-2001 at IJCAI-01. He was a Program Co-Chair of AAMAS 2006, General Co-Chair of AAMAS 2011, and a Program Co-Chair of AAAI-14. He has developed teams of robot soccer agents that have won RoboCup championships in the simulation league (1998, 1999, 2003, 2005, 2011, 2012, 2013, 2014, 2015) and in the standard platform (2012) and small-wheeled robot (1997, 1998) leagues. He has also developed agents that have won trading agent competitions (2000, 2001, 2003, 2005, 2006, 2008, 2009, 2010, 2011, 2013). Professor Stone won the Computers and Thought Award in 2007.
Satinder Singh Baveja, Ph.D.
Dr. Singh is a Professor of Computer Science & Engineering at the University of Michigan where he also currently serves as the Director of the Artificial Intelligence Laboratory. He received his Ph.D. in Computer Science from the University of Massachusetts, Amherst, following a B.Tech. in Electrical Engineering from the Indian Institute of Technology, New Delhi, India. He joined the University of Michigan in 2002 after a Postdoctoral Fellowship in Brain and Cognitive Sciences at Massachusetts Institute of Technology, a Scientist position at Harlequin Inc., an Assistant Professorship at University of Colorado, Boulder, a Senior Research Scientist position at AT&T-Labs Research, and a Chief Scientist position at a venture capital company (Systek Capital).
Dr. Singh's research interests focus on the field of Reinforcement Learning, i.e., on building algorithms, theory, and architectures for software agents that can learn how to act in uncertain, complex, and dynamic environments. Specific interests include building models of dynamical systems from time-series data, learning good interventions in human-machine interaction, dealing with partial observability and hidden state in sequential decision-making, addressing the challenge of exploration-exploitation and delayed feedback, explaining animal and human decision making using computational models, and optimal querying in semi-autonomous agents based on the value of information. He is interested in applications in healthcare, robotics, and game playing. He is a Fellow of the Association for the Advancement of Artificial Intelligence, has received an outstanding faculty award from his department, and has published over 150 papers in his field.