We believe three components are critical for turning self-driving cars into a mass product: power-efficient hardware, optimized algorithms and a solid regulatory environment. While none of these components are fully ready at this stage, competition and advances in technology are speeding the process for the first two.
Though AI topics are not new to Japan, and the companies promoting these products were not entirely newcomers, interest in artificial intelligence has suddenly surged among Japan's middle-aged office workers.
An intelligent yet evil operating system connected to nearly every device we use on a daily basis. It sounds like science fiction, but are we starting to live in this kind of world?
By continuing to add more computing capabilities for AI on edge devices with NVIDIA Jetson, and more tools and platforms to accelerate robotics development, like Isaac and the Jetson robotics reference platforms, we can help researchers and companies build robots that are more capable, less expensive, and safer to deploy.
We believe that this technology will allow seniors and Alzheimer's disease patients to fully experience the joy of communication.
Conversational applications may seem simple on the surface, but building truly useful conversational experiences represents one of the hardest AI challenges solvable today.
The cognitive computing tech we developed enables ElliQ to not only react to commands but also proactively suggest activities for older adults, such as going for a walk based on the weather, reading the news, finding new music, or video-chatting with a friend.
Deep-Domain Conversational AI describes the AI technology which is required to build voice and chat assistants which can demonstrate deep understanding of any knowledge domain.
From DeepMind: For almost 20 years, the StarCraft game series has been widely recognised as the pinnacle of 1v1 competitive video games, and among the best PC games of all time. The original StarCraft was an early pioneer in eSports, played at the highest level by elite professional players since the late 90s, and remains incredibly competitive to this day. The StarCraft series’ longevity in competitive gaming is a testament to Blizzard’s design, and their continual effort to balance and refine their games over the years. StarCraft II continues the series’ renowned eSports tradition, and has been the focus of our work with Blizzard. DeepMind is on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be told how. Games are the perfect environment in which to do this, allowing us to develop and test smarter, more flexible AI algorithms quickly and efficiently, and also providing instant feedback on how we’re doing through scores... (more)
From James Charles, Derek Magee, David Hogg: The objective of this work is to build virtual talking avatars of characters fully automatically from TV shows. From this unconstrained data, we show how to capture a character’s style of speech, visual appearance and language in an effort to construct an interactive avatar of the person and effectively immortalize them in a computational model. We make three contributions (i) a complete framework for producing a generative model of the audiovisual and language of characters from TV shows; (ii) a novel method for aligning transcripts to video using the audio; and (iii) a fast audio segmentation system for silencing nonspoken audio from TV shows. Our framework is demonstrated using all 236 episodes from the TV series Friends (≈ 97hrs of video) and shown to generate novel sentences as well as character specific speech and video... (full paper)
From MIT News: Video-trained system from MIT’s Computer Science and Artificial Intelligence Lab could help robots understand how objects interact with the world. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have demonstrated an algorithm that has effectively learned how to predict sound: When shown a silent video clip of an object being hit, the algorithm can produce a sound for the hit that is realistic enough to fool human viewers. This “Turing Test for sound” represents much more than just a clever computer trick: Researchers envision future versions of similar algorithms being used to automatically produce sound effects for movies and TV shows, as well as to help robots better understand objects’ properties... (full article) (full paper)
From the OpenAI team: We're releasing the public beta of OpenAI Gym, a toolkit for developing and comparing reinforcement learning (RL) algorithms. It consists of a growing suite of environments (from simulated robots to Atari games), and a site for comparing and reproducing results. OpenAI Gym is compatible with algorithms written in any framework, such as TensorFlow and Theano. The environments are written in Python, but we'll soon make them easy to use from any language. We originally built OpenAI Gym as a tool to accelerate our own RL research. We hope it will be just as useful for the broader community. Getting started: If you'd like to dive in right away, you can work through our tutorial... (full intro post)
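What makes Gym environments interchangeable is a shared reset()/step() interface: the agent observes, acts, and receives a reward until the episode ends. As a minimal sketch of that agent-environment loop, the toy environment below (ToyEnv is a hypothetical stand-in, not part of Gym; a real run would obtain an environment from gym.make) mimics the contract:

```python
import random

class ToyEnv:
    """Hypothetical stand-in following the Gym reset()/step() contract."""
    def __init__(self, horizon=10):
        self.horizon = horizon  # episode ends after this many steps
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0  # initial observation

    def step(self, action):
        self.t += 1
        observation = float(self.t)
        reward = 1.0 if action == 1 else 0.0  # reward action 1
        done = self.t >= self.horizon          # episode-termination flag
        return observation, reward, done, {}   # obs, reward, done, info

# The standard RL loop: act until the environment signals "done".
env = ToyEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = random.choice([0, 1])  # random policy, like action_space.sample()
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

Because every environment exposes this same loop, the same agent code can be pointed at anything from CartPole to an Atari game by swapping the environment object.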
From Ross Goodwin on Medium: To call the film above surreal would be a dramatic understatement. Watching it for the first time, I almost couldn’t believe what I was seeing—actors taking something without any objective meaning, and breathing semantic life into it with their emotion, inflection, and movement. After further consideration, I realized that actors do this all the time. Take any obscure line of Shakespearean dialogue and consider that 99.5% of the audience who hears that line in 2016 would not understand its meaning if they read it in on paper. However, in a play, they do understand it based on its context and the actor’s delivery. As Modern English speakers, when we watch Shakespeare, we rely on actors to imbue the dialogue with meaning. And that’s exactly what happened inSunspring, because the script itself has no objective meaning. On watching the film, many of my friends did not realize that the action descriptions as well as the dialogue were computer generated. After examining the output from the computer, the production team made an effort to choose only action descriptions that realistically could be filmed, although the sequences themselves remained bizarre and surreal... (medium article with technical details) Here is the stage direction that led to Middleditch’s character vomiting an eyeball early in the film: C (smiles) I don’t know anything about any of this. H (to Hauk, taking his eyes from his mouth) Then what? H2 There’s no answer.
From MIT: This week a postdoc at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) developed a Trump Twitterbot that tweets out remarkably Trump-like statements, such as “I’m what ISIS doesn’t need.” The bot is based on an artificial-intelligence algorithm that is trained on just a few hours of transcripts of Trump’s victory speeches and debate performances... (MIT article) (Twitter feed)
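The article does not spell out the bot's algorithm, but the simplest approach to transcript-trained text generation is a bigram Markov chain: record which word follows which, then sample a chain. The sketch below is only an illustration of that general technique (train_bigram_model and generate are hypothetical names, not from the MIT project):

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, max_words=12, seed=None):
    """Walk the chain from a start word, sampling each successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        choices = model.get(out[-1])
        if not choices:
            break  # dead end: no observed successor
        out.append(rng.choice(choices))
    return " ".join(out)
```

Duplicates in each successor list make frequent transitions proportionally more likely, so the output statistically echoes the speaker's phrasing even though the model is trivially small.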
Destaco's Robohand RDH/RTH Series 2- and 3-jaw parallel grippers have a shielded design that deflects chips and other particulate for more reliable, repeatable operation in part-gripping applications ranging from the small and lightweight to the large and heavy. The RDH Series of rugged, multi-purpose parallel grippers is designed for high-particulate environments such as automotive engine-block and gantry-system applications, and is ideal for gripping heavy parts. The RTH Series of powerful, multi-purpose parallel grippers is designed for large, round-shaped parts in the same engine-block and gantry-system applications. Both series are available in eight sizes, covering small, lightweight parts through large, heavy ones.