Richard Erb to Become Executive Director of RoboUniverse
The magazine has undergone a major content update and redesign
Over the next decade, self-driving cars are expected to transform transportation worldwide.
Portwell's Intel Atom E3800 processor-based NANO-6060 helps power the students' unmanned surface vehicle (USV) to success
High-speed transfers reduce data bottlenecks and increase efficiency for high-end applications
Canadian robot makers join Yahoo!'s Marissa Mayer and Facebook's Mark Zuckerberg on the 40 Under 40 list.
"Ecovacs Robotics is honored to receive the esteemed CES Innovation Award, which recognizes outstanding design and engineering in consumer technology"
Du-Co Ceramics Integrates Robotics Into Ceramics Parts Manufacturing Process
"The unmanned K-MAX and Indago aircraft can work to fight fires day and night, in all weather, reaching dangerous areas without risking a life"
Award honors the robotics industry's remarkable technical accomplishments and their makers.
3D Robotics Launches X8+ Ready-to-Fly Personal Drone with Expandable Payload Capacity, Introduces FPV Kit
The ruggedized X8+ not only has the power to carry professional mirrorless system cameras, but also provides the lifting capacity for delivery and other real work, making it much more than a flying camera.
This solution is ideal for on-machine mounting and fast, precise format adjustment.
Because of the Nov. 14 submission deadline for this year's IEEE Conference on Computer Vision and Pattern Recognition (CVPR), several big image-recognition papers are coming out this week.

From Andrej Karpathy and Li Fei-Fei of Stanford: "We present a model that generates free-form natural language descriptions of image regions. Our model leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between text and visual data. Our approach is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate the effectiveness of our alignment model with ranking experiments on Flickr8K, Flickr30K and COCO datasets, where we substantially improve on the state of the art. We then show that the sentences created by our generative model outperform retrieval baselines on the three aforementioned datasets and a new dataset of region-level annotations..." ( website with examples ) ( full paper )

From Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan at Google: Show and Tell: A Neural Image Caption Generator ( announcement post ) ( full paper )

From Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel at the University of Toronto: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models ( full paper )

From Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan L. Yuille at Baidu Research/UCLA: Explain Images with Multimodal Recurrent Neural Networks ( full paper )

From Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell at UT Austin, UMass Lowell, and UC Berkeley: Long-term Recurrent Convolutional Networks for Visual Recognition and Description ( full paper )

All of these came from this Hacker News discussion.
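The core idea shared by these papers is a multimodal embedding: CNN features for image regions and RNN features for words are projected into one common vector space, where a sentence is scored against an image by letting each word align to its best-matching region. Here is a minimal numpy sketch of that alignment score, not the authors' code: the dimensions and the projection matrices `W_img` and `W_txt` are made-up stand-ins for quantities that would be learned, and random vectors stand in for real CNN/RNN features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for the sketch: raw image-region features,
# raw word features, and the shared multimodal embedding space.
d_image, d_text, d_embed = 8, 6, 4

# In the real models these projections are learned; here they are random.
W_img = rng.normal(size=(d_embed, d_image))
W_txt = rng.normal(size=(d_embed, d_text))

def embed(X, W):
    """Project feature rows into the shared space and L2-normalize them."""
    V = X @ W.T
    return V / np.linalg.norm(V, axis=1, keepdims=True)

def alignment_score(regions, words):
    """Image-sentence score: each word aligns to its best-matching region
    (max over regions of the cosine similarity), summed over words."""
    v = embed(regions, W_img)   # (n_regions, d_embed)
    s = embed(words, W_txt)     # (n_words, d_embed)
    sims = s @ v.T              # (n_words, n_regions) cosine similarities
    return sims.max(axis=1).sum()

# Toy example: 3 detected regions scored against a 5-word sentence.
regions = rng.normal(size=(3, d_image))
words = rng.normal(size=(5, d_text))
print(alignment_score(regions, words))
```

In training, a ranking objective pushes this score higher for matching image-sentence pairs than for mismatched ones, which is what lets the model later generate or retrieve descriptions for new image regions.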
Make-believe soft robot in Disney's newest movie has roots in real-world robotics research
On the third floor of the Department of Informatics there is a robotics laboratory that looks like a playroom. This is where researchers test how their robots can figure out how to move past barriers and other obstacles.
RoboDEX, a comprehensive trade show for robots, will be held in Tokyo, a center of the robotics industry, in 2018. Covering everything from robot development technology to robot applications, it attracts professionals throughout the robot industry as well as those considering adopting robots.