Virtual Immortality: Reanimating characters from TV shows

From James Charles, Derek Magee, and David Hogg: The objective of this work is to build virtual talking avatars of characters fully automatically from TV shows. From this unconstrained data, we show how to capture a character's style of speech, visual appearance and language in an effort to construct an interactive avatar of the person and effectively immortalize them in a computational model. We make three contributions: (i) a complete framework for producing a generative model of the audio-visual appearance and language of characters from TV shows; (ii) a novel method for aligning transcripts to video using the audio; and (iii) a fast audio segmentation system for silencing non-spoken audio from TV shows. Our framework is demonstrated using all 236 episodes from the TV series Friends (≈97 hrs of video) and shown to generate novel sentences as well as character-specific speech and video... (full paper)
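
The paper's alignment method (contribution ii) is not reproduced here; as a rough sketch of the general idea, carrying timestamps from a speech recognizer's output onto the known transcript words, consider the following Python fragment. The ASR word list and its (word, start-time) format are assumptions for illustration, not the paper's interface.

    # Sketch: transfer ASR timestamps onto a known transcript by word matching.
    # `asr_words` is a hypothetical list of (word, start_sec) pairs from any
    # speech recognizer; this is NOT the paper's method, just the general idea.
    from difflib import SequenceMatcher

    def align_transcript(transcript_words, asr_words):
        """Return (transcript_word, start_sec) pairs for matched words."""
        ref = [w.lower() for w in transcript_words]
        hyp = [w.lower() for w, _ in asr_words]
        matcher = SequenceMatcher(a=ref, b=hyp, autojunk=False)
        aligned = []
        for block in matcher.get_matching_blocks():
            for k in range(block.size):
                word = transcript_words[block.a + k]
                start = asr_words[block.b + k][1]
                aligned.append((word, start))
        return aligned

    # Toy usage: "doing" was misrecognized, so only two words get timestamps.
    transcript = ["how", "you", "doin"]
    asr = [("how", 12.3), ("you", 12.5), ("doing", 12.7)]
    print(align_transcript(transcript, asr))  # [('how', 12.3), ('you', 12.5)]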

Artificial Intelligence Produces Realistic Sounds That Fool Humans

From MIT News: Video-trained system from MIT's Computer Science and Artificial Intelligence Lab could help robots understand how objects interact with the world. Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have demonstrated an algorithm that has effectively learned how to predict sound: when shown a silent video clip of an object being hit, the algorithm can produce a sound for the hit that is realistic enough to fool human viewers. This "Turing Test for sound" represents much more than just a clever computer trick: researchers envision future versions of similar algorithms being used to automatically produce sound effects for movies and TV shows, as well as to help robots better understand objects' properties... (full article) (full paper)
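
The article doesn't include code, and the CSAIL model itself is a trained neural network; as a loose, simplified illustration of one piece of the pipeline described in the paper, retrieving the closest real recording once sound features have been predicted from video, here is a hypothetical nearest-neighbor sketch. The feature vectors and library names are invented for illustration.

    # Loose illustration of example-based retrieval: given a sound-feature
    # vector predicted from silent video, return the library waveform whose
    # features best match it. The predicted features and the library are
    # assumed inputs; the actual CSAIL prediction model is not reproduced.
    import numpy as np

    def retrieve_sound(predicted_features, library_features, library_waveforms):
        """Return the waveform whose feature vector is nearest the prediction."""
        dists = np.linalg.norm(library_features - predicted_features, axis=1)
        return library_waveforms[int(np.argmin(dists))]

    # Toy usage: three library sounds with 4-dim feature vectors.
    lib_feats = np.random.rand(3, 4)
    lib_waves = ["thud.wav", "clink.wav", "rustle.wav"]
    pred = np.random.rand(4)
    print(retrieve_sound(pred, lib_feats, lib_waves))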

OpenAI Gym Beta

From the OpenAI team: We're releasing the public beta of OpenAI Gym, a toolkit for developing and comparing reinforcement learning (RL) algorithms. It consists of a growing suite of environments (from simulated robots to Atari games), and a site for comparing and reproducing results. OpenAI Gym is compatible with algorithms written in any framework, such as TensorFlow and Theano. The environments are written in Python, but we'll soon make them easy to use from any language. We originally built OpenAI Gym as a tool to accelerate our own RL research. We hope it will be just as useful for the broader community. Getting started: If you'd like to dive in right away, you can work through our tutorial... (full intro post)
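
For flavor, here is the canonical pattern from the beta-era Gym API (as in the getting-started tutorial): make an environment, reset it, and step it with actions. A random agent on CartPole is the usual first example.

    # Minimal Gym loop: a random agent on CartPole (assumes `pip install gym`).
    import gym

    env = gym.make("CartPole-v0")
    for episode in range(5):
        observation = env.reset()                # initial observation
        total_reward, done = 0.0, False
        while not done:
            action = env.action_space.sample()   # random policy
            observation, reward, done, info = env.step(action)
            total_reward += reward
        print("episode", episode, "total reward", total_reward)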

SUNSPRING by 32 Tesla K80 GPUs

From Ross Goodwin on Medium: To call the film above surreal would be a dramatic understatement. Watching it for the first time, I almost couldn't believe what I was seeing: actors taking something without any objective meaning, and breathing semantic life into it with their emotion, inflection, and movement. After further consideration, I realized that actors do this all the time. Take any obscure line of Shakespearean dialogue and consider that 99.5% of the audience who hears that line in 2016 would not understand its meaning if they read it on paper. However, in a play, they do understand it based on its context and the actor's delivery. As Modern English speakers, when we watch Shakespeare, we rely on actors to imbue the dialogue with meaning. And that's exactly what happened in Sunspring, because the script itself has no objective meaning. On watching the film, many of my friends did not realize that the action descriptions as well as the dialogue were computer generated. After examining the output from the computer, the production team made an effort to choose only action descriptions that could realistically be filmed, although the sequences themselves remained bizarre and surreal... (medium article with technical details) Here is the stage direction that led to Middleditch's character vomiting an eyeball early in the film:

C (smiles) I don't know anything about any of this.
H (to Hauk, taking his eyes from his mouth) Then what?
H2 There's no answer.

Postdoc's Trump Twitterbot Uses AI To Train Itself On Transcripts From Trump Speeches

From MIT: This week a postdoc at MIT's Computer Science and Artificial Intelligence Lab (CSAIL) developed a Trump Twitterbot that tweets out remarkably Trump-like statements, such as "I'm what ISIS doesn't need." The bot is based on an artificial-intelligence algorithm that is trained on just a few hours of transcripts of Trump's victory speeches and debate performances... (MIT article) (Twitter feed)
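
The article doesn't publish the bot's code, and MIT describes it as a deep-learning model; as a much simpler stand-in that conveys the basic idea of generating text in someone's style from transcripts, here is a word-level Markov chain sketch.

    # Word-level Markov chain: a simplified stand-in for the bot's actual
    # deep-learning model, shown only to illustrate transcript-driven
    # text generation. The toy corpus below is invented.
    import random
    from collections import defaultdict

    def train(text, order=2):
        """Map each `order`-word state to the words observed to follow it."""
        words = text.split()
        model = defaultdict(list)
        for i in range(len(words) - order):
            model[tuple(words[i:i + order])].append(words[i + order])
        return model

    def generate(model, order=2, length=20):
        """Walk the chain from a random state until length or a dead end."""
        state = random.choice(list(model))
        out = list(state)
        for _ in range(length):
            followers = model.get(tuple(out[-order:]))
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "we are going to win and we are going to keep winning believe me"
    print(generate(train(corpus)))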

Featured Product

Universal Robots - Collaborative Robot Solutions

Universal Robots is a result of many years of intensive research in robotics. The product portfolio includes the UR5 and UR10 models, which handle payloads of up to 11.3 lbs. and 22.6 lbs. respectively. The six-axis robot arms weigh as little as 40 lbs. with reach capabilities of up to 51 inches. Repeatability of +/- .004" allows quick precision handling of even microscopically small parts. After an initial risk assessment, the collaborative Universal Robots can operate alongside human operators without cumbersome and expensive safety guarding. This makes it simple to move the lightweight robot around the production floor, addressing the needs of agile manufacturing even within small and medium-sized companies that regard automation as costly and complex. If the robots come into contact with an employee, the built-in force control limits the forces at contact, adhering to the current safety requirements on force and torque limitations. Because they can be programmed intuitively by non-technical users, the robot arms go from box to operation in less than an hour, and typically pay for themselves within 195 days. Since the first UR robot entered the market in 2009, the company has seen substantial growth, with the robotic arms now being sold in more than 50 countries worldwide.