CADE METZ for WIRED: HANNS TAPPEINER TYPES a few lines of code into his laptop and hits “return.” A tiny robot sits beside the laptop, looking like one of those anthropomorphic automobiles that show up in Pixar’s Cars movies. Almost instantly, it wakes up, rolls down the table, and counts to four. This is Cozmo—an artificially intelligent toy robot unveiled late last month by San Francisco startup Anki—and Tappeiner, one of the company’s founders, is programming the little automaton to do new things.
The programs are simple—he also teaches Cozmo to stack blocks—but they’re supposed to be simple. Tappeiner is using Anki’s newly unveiled software development kit—an SDK, in coder parlance—that he says even the greenest of coders can use to tweak the behavior of the toy robot. And that’s a big deal, at least according to Anki. The company claims the SDK is the first of its kind: a kit that lets anyone program such an intelligent robot, a robot that recognizes faces and navigates new environments and even mimics emotions. With the kit, Tappeiner says, “we’re trying to advance the field of robotics.” He compares the move to Apple letting people build apps for the iPhone. Cont'd...
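The article doesn't show Tappeiner's code, but Anki's SDK is Python-based. A minimal sketch of what an SDK-style routine might look like, assuming a robot object that exposes `drive_straight` and `say_text` commands — the `count_to` helper and `FakeRobot` stub here are hypothetical stand-ins, not Anki's actual API:

```python
# Hypothetical sketch of an SDK-style routine: any object exposing
# drive_straight() and say_text() can be driven by it. The real Anki
# SDK's interface differs; this only illustrates the programming model.

def count_to(robot, n):
    """Make the robot roll forward and speak the numbers 1..n."""
    for i in range(1, n + 1):
        robot.drive_straight(distance_mm=25)
        robot.say_text(str(i))

class FakeRobot:
    """Minimal stand-in so the sketch runs without hardware."""
    def __init__(self):
        self.log = []
    def drive_straight(self, distance_mm):
        self.log.append(("drive", distance_mm))
    def say_text(self, text):
        self.log.append(("say", text))

bot = FakeRobot()
count_to(bot, 4)
spoken = [t for kind, t in bot.log if kind == "say"]
print(spoken)  # ['1', '2', '3', '4']
```

The point of the SDK, as Anki frames it, is exactly this: a few lines of beginner-level scripting drive perception and behavior machinery that would otherwise take years to build.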
Samuel Bouchard for Engineering.com: Collaborative robots (also known as cobots) are changing how robots and humans interact in our factories and manufacturing facilities.
No longer separated by cages, humans and cobots can work beside each other on complex tasks from picking and placement to assembly and logistics.
Human-cobot systems bring together the best of human capabilities (complex reasoning, ease of learning new tasks, pattern and object recognition in cluttered environments) and robot functionality (the ability to perform complex, tedious tasks 24/7 and with high precision).
This close proximity between humans and cobots, and its advantages, is exciting for manufacturers, SMEs, and the robotics industry, but it also brings a unique set of safety challenges.
Enter ISO/TS 15066 – the world's first specification of safety requirements for collaborative robot applications. Cont'd...
From New York Magazine: Snowden’s body might be confined to Moscow, but the former NSA computer specialist has hacked a work-around: a robot. If he wants to make his physical presence felt in the United States, he can connect to a wheeled contraption called a BeamPro, a flat-screen monitor that stands atop a pair of legs, five-foot-two in all, with a camera that acts as a swiveling Cyclops eye. Inevitably, people call it the “Snowbot.” The avatar resides at the Manhattan offices of the ACLU, where it takes meetings and occasionally travels to speaking engagements. (You can Google pictures of the Snowbot posing with Sergey Brin at TED.) Undeniably, it’s a gimmick: a tool in the campaign to advance Snowden’s cause — and his case for clemency — by building his cultural and intellectual celebrity. But the technology is of real symbolic and practical use to Snowden, who hopes to prove that the internet can overcome the power of governments, the strictures of exile, and isolation... (full article)
Linda A. Thompson for Bloomberg: European lawmakers warn that the growing use of robots and artificial intelligence may cause job losses across the continent and send tax revenues plummeting if current tax frameworks aren't revised to account for the rise of the robotic workforce.
Practitioners told Bloomberg BNA that taxing robots as “electronic persons,” as the EU contemplates in a recent report, would hinder innovation and that other ways of taxing the value that robotics create should be explored.
The recent European Parliament Committee on Legal Affairs draft report recommends the European Commission adopt a resolution to require companies to report on “the extent and proportion of the contribution of robotics and AI to the economic results of a company for the purpose of taxation and social security contributions.” Its first paragraph references Frankenstein, and it comes amid mounting concerns that the rise of automation and artificial intelligence in the workplace will fundamentally alter economies, destroy jobs and jeopardize social welfare programs such as social security. Cont'd...
From All About Circuits: Google ATAP is bringing touchless interfaces to the market using a miniaturized radar chip no bigger than a dime. This is Project Soli.
Soli’s radar sensor is a marvel in many respects. For one thing, it solves a long-standing issue in gesture-recognition technology. Previous forays into the field yielded almost-answers such as stereo cameras (which, for example, have difficulty disambiguating overlapping fingers) and capacitive touch sensing (which struggles to interpret motion in a 3D context).
Google ATAP’s answer is radar.
Radar is capable of interpreting objects’ position and motion even through other objects, making it perfect for developing a sensor that can be embedded in different kinds of devices like smartphones... (full article)
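The physics the article alludes to can be sketched in a few lines: a moving hand shifts the frequency of the reflected wave (the Doppler effect), and a Fourier transform of the return recovers that shift, and hence the hand's radial velocity. This is an illustrative simulation only, with assumed numbers (a 60 GHz carrier, a hand moving at 0.5 m/s), not Soli's actual signal chain, which uses millimeter-wave FMCW hardware and machine-learned gesture classifiers:

```python
import numpy as np

# Recover a target's radial velocity from the Doppler shift in a
# simulated continuous-wave radar return. Illustrative only.

c = 3e8            # speed of light, m/s
f_carrier = 60e9   # 60 GHz millimeter-wave carrier (assumption)
v_true = 0.5       # hand moving toward the sensor at 0.5 m/s

# Doppler shift for a reflecting target: f_d = 2 * v * f_c / c
f_doppler = 2 * v_true * f_carrier / c   # = 200 Hz

fs = 4096.0                              # baseband sample rate, Hz
t = np.arange(4096) / fs                 # 1 second of samples
signal = np.cos(2 * np.pi * f_doppler * t)

# FFT the return, locate the spectral peak, invert the Doppler formula.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
f_est = freqs[np.argmax(spectrum)]
v_est = f_est * c / (2 * f_carrier)
print(round(v_est, 2))  # 0.5 (m/s)
```

Because radio waves pass through fabric and plastic, the same measurement works with the sensor hidden inside a device enclosure — the property the article highlights.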
The Star: Chinese appliances giant Midea moved a step closer to fulfilling its ambition to acquire German industrial robotics firm Kuka with two weekend deals raising its stake to nearly a majority.
Two of Kuka’s biggest German shareholders – technology company Voith and entrepreneur Friedhelm Loh – said they had decided to take up Midea’s offer of €115 (RM512) per share and sell their stakes.
German news agency DPA reported that Voith had agreed to sell its stake of 25.1% for €1.2bil (RM5.34bil).
And Loh told the business daily Handelsblatt he had decided to sell his stake of 10% for nearly €500mil (RM2.22bil).
Combined with its existing holding of 13.5% in Kuka, the two purchases mean Midea now holds 48.5%, or not far from the outright majority, in the Augsburg-based robot builder. Cont'd...
Yuri Kageyama for News Factor: The U.S. robotics expert tapped to head Toyota's Silicon Valley research company says the $1 billion investment by the giant Japanese automaker will start showing results within five years.
Gill Pratt told reporters that the Toyota Research Institute is also looking ahead to a more distant future, when there will be cars that anyone, including children and the elderly, can ride in on their own, as well as robots that help out in homes.
Pratt, a former program manager at the U.S. military's Defense Advanced Research Projects Agency, joined Toyota Motor Corp. first as a technical adviser when it set up its artificial intelligence research effort at Stanford University and MIT.
He said safety features will be the first types of AI applications to appear in Toyota vehicles. Such features are already offered on some models now being sold, such as sensors that help cars brake or warn drivers before a possible crash, and cars that drive themselves automatically into parking spaces or on certain roads.
"I expect something to come out during those five years," Pratt told reporters recently at Toyota's Tokyo office of the timeframe seen for the investment. Cont'd...
John DiPietro for NHVoice: Boston Dynamics has released a new video of its robot SpotMini. In the video, the robot runs around outside, navigates around objects in a home, and climbs stairs. The best part of the video is how delicately the robot picks up a wine glass and puts it into the dishwasher.
The wine-glass sequence is the highlight because it shows how skilled the robot is at handling delicate objects. For robots to operate safely around humans, they need to be able to sense their environment and to know their own strength.
SpotMini weighs 55 lbs, is all-electric, and runs for around 90 minutes on a charge, depending on what it is doing. The robot carries many sensors, including depth cameras, a solid-state gyro, and proprioception sensors in its limbs. Cont'd...
George Konidaris and Daniel Sorin of Duke University have developed a new technology that cuts robotic motion planning times by a factor of 10,000 while consuming a small fraction of the power of current options. Watch one of their robotic arms in action as they explain how their innovative solution works.
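Public descriptions of the Duke work attribute the speedup to moving collision checking into precomputed, massively parallel hardware: offline, each candidate motion (a roadmap edge) is mapped to the set of grid cells it sweeps; online, checking an edge against sensed obstacles reduces to a set intersection, and their circuitry checks every edge simultaneously. A toy software sketch of that precompute-then-query idea, with a made-up roadmap and grid:

```python
# Offline precomputation (done once per robot/roadmap): each edge id
# maps to the grid cells the arm sweeps through when executing it.
# These three edges and their cells are invented for illustration.
swept_cells = {
    "e1": {(0, 0), (0, 1), (0, 2)},
    "e2": {(0, 0), (1, 1), (2, 2)},
    "e3": {(2, 0), (2, 1), (2, 2)},
}

def valid_edges(occupied):
    """Online query: keep edges whose swept cells miss every obstacle.

    In software this loop is sequential; the Duke accelerator evaluates
    all edges at once in dedicated circuits, hence the large speedup.
    """
    return {e for e, cells in swept_cells.items() if not (cells & occupied)}

# A sensed obstacle occupies cell (1, 1): only e2 passes through it.
print(sorted(valid_edges({(1, 1)})))  # ['e1', 'e3']
```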
HEBOCON is a robot contest for the technically ungifted. They held the first tournament in Tokyo in July 19,2014... (Facebook page)
From MIT News: Video-trained system from MIT’s Computer Science and Artificial Intelligence Lab could help robots understand how objects interact with the world. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have demonstrated an algorithm that has effectively learned how to predict sound: When shown a silent video clip of an object being hit, the algorithm can produce a sound for the hit that is realistic enough to fool human viewers.
This “Turing Test for sound” represents much more than just a clever computer trick: Researchers envision future versions of similar algorithms being used to automatically produce sound effects for movies and TV shows, as well as to help robots better understand objects’ properties... (full article) (full paper)
Jiaji Zhou for RoboHub: The Manipulation Lab at the CMU Robotics Institute proposes a computational model that relates an applied robot action to the resultant object motion. Their research won the Best Conference Paper Award at ICRA 2016.
Understanding the mechanics of manipulation is essential for robots to autonomously interact with the physical world. One of the common manipulation scenarios involves pushing objects in a plane subject to dry friction. We propose a planar friction (force-motion) model that relates an applied robot action to the resultant object motion. Cont'd...
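For context, a classical approximation in the pushing literature (simpler than the convex polynomial model the CMU paper actually proposes) is the ellipsoid limit surface: the friction wrenches w = (fx, fy, torque) that the support surface can resist satisfy H(w) = w^T A w <= 1, and under quasi-static pushing the object's twist (vx, vy, omega) is normal to that surface at the applied wrench, i.e. proportional to A w. A sketch with hypothetical coefficients:

```python
import numpy as np

# Ellipsoid limit-surface model of planar pushing under dry friction.
# The diagonal coefficients below are made up for illustration; in
# practice they depend on the object's pressure distribution.
A = np.diag([1.0, 1.0, 4.0])

def twist_direction(wrench):
    """Unit twist produced by a friction wrench on the limit surface.

    Quasi-static assumption: motion is slow enough that the applied
    wrench always lies on the limit surface, so the twist is its
    outward normal, proportional to the gradient of w^T A w, i.e. A w.
    """
    v = A @ np.asarray(wrench, dtype=float)
    return v / np.linalg.norm(v)

# A pure force through the center of mass (zero torque) yields pure
# translation along the force direction, with no rotation.
print(twist_direction([1.0, 0.0, 0.0]))  # [1. 0. 0.]
```

Models like this let a planner predict where a pushed object ends up — the "applied action to resultant motion" mapping the excerpt describes.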
Eugene Kim for Business Insider: It wasn't until 2014 that Amazon really started to use the machines made by Kiva, the robotics company it bought for $775 million in 2012. Kiva makes robots that automate the picking and packing process at large warehouses.
But in the short two years they've been deployed across Amazon's warehouses, Kiva's robots have been a real cost saver, according to a new note published by Deutsche Bank on Wednesday.
The note, quoting Amazon exec Dave Clark, says Kiva robots have cut operating expenses by about 20%, which translates to roughly $22 million in cost savings for each fulfillment center.
Additionally, Deutsche Bank estimates Amazon could cut another $800 million in one-time cost savings once it deploys more Kiva robots across the 110 fulfillment centers that don't have them yet. Amazon uses Kiva robots in only 13 of its fulfillment centers currently. Cont'd...
Spencer Soper & Shannon Pettypiece for Bloomberg: Wal-Mart Stores Inc. is working with a robotics company to develop a shopping cart that helps customers find items on their lists and saves them from pushing a heavy cart through a sprawling store and parking lot, according to a person familiar with the matter.
Such carts are an emerging opportunity for robotics companies as brick-and-mortar stores look for innovative ways to match the convenience of Amazon.com Inc. and other online retailers, said Wendy Roberts, founder and chief executive officer of Five Elements Robotics.
Roberts, who spoke Tuesday on a robotics panel at the Bloomberg Technology Conference 2016, said her company was working with the “world’s largest retailer” on such a shopping cart.
That retailer is Wal-Mart, which is evaluating a prototype in its lab and giving feedback to the New Jersey robotics company, a person familiar said. Wal-Mart spokesman Ravi Jariwala said he couldn’t immediately comment on the robotic shopping cart. Cont'd...
Graham Templeton for ExtremeTech: Google’s artificial intelligence researchers are starting to have to code around their own code, writing patches that limit a robot’s abilities so that it continues to develop down the path desired by the researchers — not by the robot itself. It’s the beginning of a long-term trend in robotics and AI in general: once we’ve put in all this work to increase the insight of an artificial intelligence, how can we make sure that insight will only be applied in the ways we would like?
That’s why researchers from Google’s DeepMind and the Future of Humanity Institute have published a paper outlining a software “killswitch” they claim can stop those instances of learning that could make an AI less useful — or, in the future, less safe. It’s really less a killswitch than a blind spot, removing from the AI the ability to learn the wrong lessons. Cont'd...
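The technical core of the paper is "safe interruptibility": an agent whose learning rule is off-policy, like Q-learning, can be interrupted without the interruptions biasing what it learns, because the update target takes a max over actions regardless of which action the operator forced. A toy sketch — the states, actions, and constants here are illustrative, not from the paper:

```python
# Toy illustration of why off-policy learners tolerate interruptions:
# the operator can force a "safe" action, but the Q-learning update
# target uses max over actions regardless of the action actually taken,
# so interruptions change what the agent experiences, not the values it
# converges to -- it never learns to resist being stopped.

Q = {}                     # (state, action) -> estimated value
ALPHA, GAMMA = 0.5, 0.9    # learning rate and discount (assumed)
ACTIONS = ["left", "right"]
SAFE_ACTION = "left"

def choose(state, interrupted):
    """Greedy policy, except the operator can override it."""
    if interrupted:
        return SAFE_ACTION
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard off-policy Q-learning update."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

update("s0", "right", 1.0, "s1")
print(Q[("s0", "right")])  # 0.5
```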