DeepMind and Blizzard to Release StarCraft II as an AI Research Environment

From DeepMind: For almost 20 years, the StarCraft game series has been widely recognised as the pinnacle of 1v1 competitive video games, and among the best PC games of all time. The original StarCraft was an early pioneer in eSports, played at the highest level by elite professional players since the late 90s, and remains incredibly competitive to this day. The StarCraft series’ longevity in competitive gaming is a testament to Blizzard’s design, and their continual effort to balance and refine their games over the years. StarCraft II continues the series’ renowned eSports tradition, and has been the focus of our work with Blizzard. DeepMind is on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be told how. Games are the perfect environment in which to do this, allowing us to develop and test smarter, more flexible AI algorithms quickly and efficiently, and also providing instant feedback on how we’re doing through scores... (more)

Virtual Immortality: Reanimating characters from TV shows

From James Charles, Derek Magee, David Hogg: The objective of this work is to build virtual talking avatars of characters fully automatically from TV shows. From this unconstrained data, we show how to capture a character’s style of speech, visual appearance and language in an effort to construct an interactive avatar of the person and effectively immortalize them in a computational model. We make three contributions: (i) a complete framework for producing a generative model of the audio, visual appearance and language of characters from TV shows; (ii) a novel method for aligning transcripts to video using the audio; and (iii) a fast audio segmentation system for silencing non-spoken audio from TV shows. Our framework is demonstrated using all 236 episodes from the TV series Friends (≈ 97 hrs of video) and shown to generate novel sentences as well as character-specific speech and video... (full paper)
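The excerpt does not spell out how the audio segmentation in contribution (iii) works; as a loose illustration of the general idea of silencing non-spoken audio, here is a minimal short-time-energy gating sketch (NumPy only). The frame sizes and the -35 dB threshold are arbitrary assumptions for illustration, not the authors' method.

```python
import numpy as np

def silence_low_energy(audio, sr, frame_ms=25, hop_ms=10, threshold_db=-35.0):
    """Zero out frames whose short-time energy falls below a dB threshold.

    A crude stand-in for speech/non-speech segmentation: a real system would
    use a trained voice-activity or speaker model rather than a fixed gate.
    """
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    out = np.zeros_like(audio)
    peak = np.max(np.abs(audio)) + 1e-9
    for start in range(0, len(audio) - frame, hop):
        chunk = audio[start:start + frame]
        rms = np.sqrt(np.mean(chunk ** 2))
        level_db = 20 * np.log10(rms / peak + 1e-9)
        if level_db > threshold_db:
            out[start:start + frame] = chunk  # keep frames that look active
    return out

# Example: gate a synthetic signal (1 s of noise followed by 1 s of near-silence)
sr = 16000
signal = np.concatenate([np.random.randn(sr) * 0.3, np.random.randn(sr) * 0.001])
cleaned = silence_low_energy(signal, sr)
```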

Artificial Intelligence Produces Realistic Sounds That Fool Humans

From MIT News: Video-trained system from MIT’s Computer Science and Artificial Intelligence Lab could help robots understand how objects interact with the world. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have demonstrated an algorithm that has effectively learned how to predict sound: When shown a silent video clip of an object being hit, the algorithm can produce a sound for the hit that is realistic enough to fool human viewers. This “Turing Test for sound” represents much more than just a clever computer trick: Researchers envision future versions of similar algorithms being used to automatically produce sound effects for movies and TV shows, as well as to help robots better understand objects’ properties... (full article) (full paper)
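The article does not reproduce the model details. One common way to turn features predicted from a silent video into a playable waveform is example-based retrieval against a library of recorded impact sounds; the sketch below illustrates only that retrieval step with made-up feature vectors, and is not CSAIL's implementation.

```python
import numpy as np

def retrieve_sound(predicted_feature, exemplar_features, exemplar_waveforms):
    """Return the library waveform whose feature vector is closest (cosine
    similarity) to the feature vector predicted from the silent video clip."""
    lib = np.asarray(exemplar_features, dtype=float)
    q = np.asarray(predicted_feature, dtype=float)
    sims = lib @ q / (np.linalg.norm(lib, axis=1) * np.linalg.norm(q) + 1e-9)
    return exemplar_waveforms[int(np.argmax(sims))]

# Toy usage with random stand-ins for real sound features and waveforms
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 64))                      # 100 recorded hits, 64-D features
waveforms = [rng.normal(size=22050) for _ in range(100)]   # 1 s clips at 22.05 kHz
predicted = rng.normal(size=64)                            # what a video model might output
best_match = retrieve_sound(predicted, features, waveforms)
```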

OpenAI Gym Beta

From the OpenAI team: We're releasing the public beta of OpenAI Gym, a toolkit for developing and comparing reinforcement learning (RL) algorithms. It consists of a growing suite of environments (from simulated robots to Atari games), and a site for comparing and reproducing results. OpenAI Gym is compatible with algorithms written in any framework, such as TensorFlow and Theano. The environments are written in Python, but we'll soon make them easy to use from any language. We originally built OpenAI Gym as a tool to accelerate our own RL research. We hope it will be just as useful for the broader community. Getting started: If you'd like to dive in right away, you can work through our tutorial... (full intro post)
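For readers who cannot follow the tutorial link, the canonical first Gym program looks roughly like the sketch below: a random agent on CartPole using the beta-era reset/step API. The environment name and episode count are my choices.

```python
import gym

# Run a random agent on CartPole for a few episodes.
env = gym.make("CartPole-v0")
for episode in range(5):
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()               # replace with a learned policy
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print("episode", episode, "return", total_reward)
env.close()
```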

SUNSPRING by 32 Tesla K80 GPUs

From Ross Goodwin on Medium: To call the film above surreal would be a dramatic understatement. Watching it for the first time, I almost couldn’t believe what I was seeing: actors taking something without any objective meaning, and breathing semantic life into it with their emotion, inflection, and movement. After further consideration, I realized that actors do this all the time. Take any obscure line of Shakespearean dialogue and consider that 99.5% of the audience who hears that line in 2016 would not understand its meaning if they read it on paper. However, in a play, they do understand it based on its context and the actor’s delivery. As Modern English speakers, when we watch Shakespeare, we rely on actors to imbue the dialogue with meaning. And that’s exactly what happened in Sunspring, because the script itself has no objective meaning. On watching the film, many of my friends did not realize that the action descriptions as well as the dialogue were computer generated. After examining the output from the computer, the production team made an effort to choose only action descriptions that realistically could be filmed, although the sequences themselves remained bizarre and surreal... (medium article with technical details)

Here is the stage direction that led to Middleditch’s character vomiting an eyeball early in the film:

C
(smiles)
I don’t know anything about any of this.

H
(to Hauk, taking his eyes from his mouth)
Then what?

H2
There’s no answer.
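The technical details live in the linked Medium article; as a loose illustration of how a character-level generator like the one behind Sunspring turns model scores into strange-but-fluent text, here is a small temperature-sampling helper. It is hypothetical illustrative code, not Goodwin's.

```python
import numpy as np

def sample_with_temperature(logits, temperature=0.8, rng=None):
    """Sample an index from unnormalised scores; lower temperature is more
    conservative, higher temperature produces stranger, Sunspring-style output."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy usage: scores over a 5-token vocabulary produced by some trained model
next_token = sample_with_temperature([2.0, 1.0, 0.5, 0.1, -1.0], temperature=1.2)
```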

Postdoc's Trump Twitterbot Uses AI To Train Itself On Transcripts From Trump Speeches

From MIT: This week a postdoc at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) developed a Trump Twitterbot that tweets out remarkably Trump-like statements, such as “I’m what ISIS doesn’t need.” The bot is based on an artificial-intelligence algorithm that is trained on just a few hours of transcripts of Trump’s victory speeches and debate performances... (MIT article) (Twitter feed)
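The linked article describes the bot's actual model; as a much simpler stand-in for generating speaker-flavoured text from a small transcript corpus, here is a word-level Markov chain sketch. The toy corpus and order-2 setting are assumptions for illustration, not the CSAIL bot.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each n-gram of words to the words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20):
    """Walk the chain from a random starting n-gram to emit a short sentence."""
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        options = chain.get(tuple(out[-len(state):]))
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "we are going to win and we are going to win big believe me"  # stand-in transcript
print(generate(build_chain(corpus)))
```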


Featured Product

High Performance Servo Drives for localized and distributed control applications from Servo2Go.com

Servo2Go.com offers a broad selection of servo drives engineered to drive brushless and brush servomotors in torque, velocity or position mode, covering a wide range of input voltages and output power levels.