Mary Jo Foley for All About Microsoft: In the late 2000s, Microsoft was all-in on robotics. By the middle of the 2010s, the company seemingly had all but abandoned the robotics space.
But this may be the year that Microsoft gets back into robotics, on multiple fronts.
When Microsoft founder Bill Gates was still involved in the day-to-day operations of the company, robotics was slated to be one of Microsoft's next big things. Microsoft built a programming model and framework for developers working on anything from Lego robots to industrial-scale robots. However, that product, "Microsoft Robotics Studio," never really went beyond the academic and hobbyist communities, and the company's ambitions in the space withered.
Cut to 2017. These days, the home for a good chunk of Microsoft's current robotics work is apparently in Microsoft Research (MSR) -- specifically in the AI + Research (AI+R) Group under executive vice president Harry Shum. (I say "apparently" here because Microsoft officials declined to answer any of my questions on the company's robotics initiatives.) Shum is known for his work in computer vision and graphics and has a Ph.D. in robotics from Carnegie Mellon. Cont'd...
Jared Newman for PCWorld: At the 2015 Build conference, Microsoft tried to prove that HoloLens is more than just a neat gimmick.
The company showed off several new demos for its “mixed reality” headset, which can map digital imagery onto the user’s physical surroundings. While previous demos had focused on fun ideas like a virtual Mars walk and a living room-sized version of Minecraft, the Build presentation emphasized real-world applications for businesses and education.
For instance, Microsoft showed how architects could use HoloLens to interact with 3D models, laid out virtually in front of them on a table. They might also be able to examine aspects of a building site at full scale, with virtual beams and walls rendered before their eyes.
Not all the presentations were so serious. Microsoft also showed off an actual robot whose controls appeared in the virtual space above the robot’s head. Users could then create a movement pattern for the robot by tapping on the ground. Another demo showed how users could create their own personal screens that followed them around in real space.
This paper uses NAO, the humanoid robot from Aldebaran Robotics, to demonstrate how MapleSim can be used to develop a robot model, and how the model can be further analyzed using the symbolic computation engine within Maple.
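The paper's workflow relies on Maple's symbolic engine; as a rough, language-neutral illustration of that kind of symbolic analysis, the sketch below derives the equation of motion for a single pendulum link using Python's sympy. This is a stand-in for one joint of a humanoid like NAO, not the paper's actual MapleSim model; the symbols (m, l, g, theta) are assumptions chosen for the example.

```python
import sympy as sp

t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)  # link mass, length, gravity
theta = sp.Function('theta')(t)               # joint angle as a function of time

# Kinetic and potential energy of a point mass at the end of a rigid link
T = sp.Rational(1, 2) * m * (l * theta.diff(t))**2
V = -m * g * l * sp.cos(theta)
L = T - V  # Lagrangian

# Euler-Lagrange equation: d/dt(dL/d(theta')) - dL/d(theta) = 0
eom = sp.simplify(sp.diff(L.diff(theta.diff(t)), t) - L.diff(theta))
print(eom)
```

Setting `eom` to zero recovers the familiar pendulum dynamics, m*l**2*theta'' + m*g*l*sin(theta) = 0 -- the same result Maple's symbolic engine would produce for this one-link case, before scaling the approach up to a multi-link model.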