China's Hunger for Robots Marks Significant Shift

By TIMOTHY AEPPEL and MARK MAGNIER for WSJ.com - Having devoured many of the world's factory jobs, China is now handing them over to robots. China already ranks as the world's largest market for robotic machines. Sales last year grew 54% from a year earlier, and the boom shows every sign of continuing. China is projected to have more installed industrial robots than any other country by next year, according to the International Federation of Robotics. China's emergence as an automation hub contradicts many assumptions about robots and the global economy. Economists often view automation as a way for advanced economies to keep industries that might otherwise move offshore, or even to win them back through reshoring, since the focus is on reducing costly labor. That motivation hasn't gone away. But increasingly, robots are taking over work in developing countries, reducing the potential job creation associated with building new factories in the frontier markets of Asia, Africa or Latin America. Cont'd...

Could This Machine Push 3-D Printing into the Manufacturing Big Leagues?

Neil Hopkinson, a professor of mechanical engineering at the University of Sheffield in the United Kingdom, has been developing the new method, called high-speed sintering, for over a decade. Laser sintering machines build objects by using a single-point laser to melt and fuse thin layers of powdered polymer, one by one. Hopkinson replaced the laser system, which is both expensive and slow, with an infrared lamp and an ink-jet print head. The print head rapidly and precisely deposits patterns of radiation-absorbing material onto the powder bed; exposing the bed to infrared light then melts and fuses the treated powder, building up thin layers one by one—similar to the way laser sintering works, but much faster. Hopkinson's group has already shown that the method works at a relatively small scale. They've also calculated that, given a large enough build area, high-speed sintering is "on the order of 100 times faster" than laser sintering for certain kinds of parts, and that it can be cost-competitive with injection molding for making millions of small, complex parts at a time, says Hopkinson. Now the group will actually build the machine, using funding from the British government and a few industrial partners. Cont'd...

Widespread backing for UK robotics network

The Engineering and Physical Sciences Research Council (EPSRC) has announced the launch of a new robotics network that aims to foster academic and industry collaboration. The UK Robotics and Autonomous Systems Network (UK-RAS Network) will have a strong academic foundation, with a number of universities acting as founding members. According to the EPSRC, the network has already received strong support from major industrial partners, as well as from professional bodies such as the Royal Academy of Engineering, the IET, and the Institution of Mechanical Engineers. Globally, the market for service and industrial robots is estimated to reach $59.5 billion by 2020. A primary aim of the network will be to bring the UK's academic capabilities under national coordination, fuelling innovation in the robotics sector and taking advantage of the growth in the industry. Cont'd...

Rockwell brings factory-automation tools to smartphones, tablets

By John Schmid of the Journal Sentinel: The Texas facility that mass-produces State Fair corn dogs and Jimmy Dean Pancakes & Sausage on a Stick retooled itself recently as a hyper-automated smart factory. It installed 1,500 sensors to collect gigabytes of data on everything from raw meat inventories to wastewater and electrical usage. Then the Fort Worth factory took one extra step into the future of industrial technology: It added software that transmits all of that real-time data to smartphones and tablets, making it possible for plant managers to monitor their production network from anywhere on the factory floor — and during coffee breaks or vacations, as well. If they choose — so far, most don't — this new breed of mobile managers can even operate factory equipment remotely, shutting off pumps or speeding up production lines. Technology has made that sort of operation as easy as playing a smartphone video game, but it can be reckless because remotely operated equipment can endanger workers who are physically present. It's only a matter of time, some say, before factory controls migrate to Google Glass, the wearable displays mounted in eyeglass frames, or smart wristwatches. Cont'd...

Clearpath Robotics Announces Mobility Solution for Rethink Robotics' Baxter Robot

Clearpath Robotics announced the newest member of its robot fleet: an omnidirectional development platform called Ridgeback. The mobile robot is designed to carry heavy payloads and easily integrate with a variety of manipulators and sensors. Ridgeback was unveiled as a mobile base for Rethink Robotics' Baxter research platform at ICRA 2015 in Seattle, Washington. "Many of our customers have approached us looking for a way to use Baxter for mobile manipulation research - these customers inspired the concept of Ridgeback. The platform is designed so that Baxter can plug into Ridgeback and go," said Julian Ware, General Manager for Research Products at Clearpath Robotics. "Ridgeback includes all the ROS, visualization and simulation support needed to start doing interesting research right out of the box." Ridgeback's rugged drivetrain and chassis are designed to move manipulators and other heavy payloads with ease. Omnidirectional wheels provide precision control for forward, lateral or twisting movements in constrained environments. Like other Clearpath robots, Ridgeback is ROS-ready and designed for rapid integration of sensors and payloads; specific consideration has been given to integration of the Baxter research platform.
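
As a rough illustration of what driving a ROS-ready omnidirectional base looks like, here is a minimal sketch that publishes velocity commands; the /cmd_vel topic name and the rates are conventional ROS assumptions, not taken from Ridgeback's documentation:

    #!/usr/bin/env python
    # Minimal sketch: commanding an omnidirectional base over ROS.
    # The /cmd_vel topic and 10 Hz rate are conventional assumptions,
    # not Ridgeback's documented interface.
    import rospy
    from geometry_msgs.msg import Twist

    def strafe_demo():
        rospy.init_node('ridgeback_demo')
        pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rate = rospy.Rate(10)
        msg = Twist()
        msg.linear.x = 0.2   # forward, m/s
        msg.linear.y = 0.1   # lateral strafe, only possible on an omni base
        msg.angular.z = 0.3  # twist in place, rad/s
        while not rospy.is_shutdown():
            pub.publish(msg)
            rate.sleep()

    if __name__ == '__main__':
        strafe_demo()

The lateral linear.y component is the part a differential-drive base cannot execute; an omnidirectional platform like Ridgeback can follow all three components at once.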

NHL Goal Celebration Hack With A Hue Light Show And Real Time Machine Learning

From François Maillet: In Montréal this time of year, the city literally stops and everyone starts talking, thinking and dreaming about a single thing: the Stanley Cup Playoffs. Even most of those who don't normally care the least bit about hockey transform into die-hard fans of the Montréal Canadiens, or the Habs, as we also call them. Below is a YouTube clip of the epic goal celebration hack in action. In a single sentence: I trained a machine learning model to detect in real time that a goal was just scored by the Habs, based on the live audio feed of a game, and to trigger a light show using Philips Hues in my living room... ( full article )
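
The pipeline he describes reduces to a loop: featurize short windows of the live audio feed, run a binary goal/no-goal classifier, and fire the lights on a positive. A rough sketch follows; the MFCC features, the pre-trained classifier, and the Hue bridge address are illustrative assumptions, not Maillet's actual setup:

    # Rough sketch of a goal-detection loop: classify short audio windows
    # and trigger the Hue lights on a positive. The bridge IP, API username,
    # feature choice and hue value are assumptions for illustration.
    import librosa
    import requests

    HUE_URL = "http://192.168.1.10/api/myuser/lights/1/state"  # hypothetical bridge

    def features(window, sr=22050):
        # MFCCs are a common summary for crowd/commentary audio; the
        # author's exact features may differ.
        mfcc = librosa.feature.mfcc(y=window, sr=sr, n_mfcc=13)
        return mfcc.mean(axis=1)

    def on_goal():
        # Philips Hue REST API: light state is set with a PUT request.
        requests.put(HUE_URL, json={"on": True, "hue": 46920, "bri": 254})

    def run(stream, clf, sr=22050):
        # stream yields ~1 s chunks of live audio; clf is a pre-trained
        # binary classifier (goal / no goal).
        for window in stream:
            if clf.predict([features(window, sr)])[0] == 1:
                on_goal()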

QinetiQ North America Introduces DriveRobotics

From QinetiQ North America: By transforming manned industrial vehicles into unmanned robots, the DriveRobotics™ add-on appliqué kit lets building demolition and roadside construction companies switch to unmanned operation whenever operators face hazardous situations. The commercial robotic system, which can be installed in new vehicles as well as existing fleets, eliminates the need for a spotter. Remote-control capabilities enable machine use in demanding applications... ( full press release )

Gear Generator

About Gear Generator: Gear Generator is a tool for creating involute spur gears and downloading them in SVG format. In addition, it lets you compose full gear layouts with connected gears to design multi-gear systems with control of the input/output ratio and rotation speed. Gears can be animated at various speeds to demonstrate the working mechanism... ( link )
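
The ratio and speed controls the tool exposes come down to standard gear-train arithmetic: each meshing pair scales speed by the ratio of tooth counts and reverses direction. A small sketch of that math:

    # Speed/ratio math behind a connected gear train: each meshing pair
    # scales speed by the tooth ratio and flips rotation direction.
    def output_speed(input_rpm, teeth):
        """teeth: tooth counts along the train, e.g. [20, 40, 10]."""
        rpm = input_rpm
        for driver, driven in zip(teeth, teeth[1:]):
            rpm *= -driver / driven  # negative sign marks direction reversal
        return rpm

    # A 20-tooth gear at 60 rpm driving a 40-tooth gear: 30 rpm, reversed.
    print(output_speed(60, [20, 40]))  # -30.0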

Yale OpenHand Project

From Yale's OpenHand Project: This project intends to establish a series of open-source hand designs, and through the contributions of the open-source user community, result in a large number of useful design modifications and variations available to researchers. Based on the original SDM Hand, the Model T is the OpenHand Project's first released hand design, initially introduced at ICRA 2013. Its four underactuated fingers are differentially coupled through a floating pulley tree, allowing for equal force output on all finger contacts. Based on our lab's work with iRobot and Harvard on the iHY hand, which won the DARPA ARM program, the Model O replicates the hand topology common to several commercial hands, including ones from Barrett, Robotiq, and Schunk (among others). A commercial version of this hand is currently for sale by RightHand Robotics... ( homepage )
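
The floating-pulley coupling can be read as a simple tension balance: each floating pulley splits its input tension equally between its two branches, so one actuator's pull reaches all four fingers evenly, even after some fingers have stopped against the object. A toy model (the two-level tree layout is an assumption for illustration):

    # Toy model of a floating-pulley differential: every pulley splits its
    # input tension equally between two branches, so a single actuator
    # tension reaches four fingers as equal quarters.
    def finger_tensions(actuator_tension, levels=2):
        tensions = [actuator_tension]
        for _ in range(levels):  # two pulley levels -> four fingers
            tensions = [t / 2.0 for t in tensions for _ in (0, 1)]
        return tensions

    print(finger_tensions(8.0))  # [2.0, 2.0, 2.0, 2.0]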

Robo.Op: Opening Industrial Robotics

From MADLAB.CC: Robo.Op is an open hardware / open software platform for hacking industrial robots (IRs). Robo.Op makes it cheaper and easier to customize your IR for creative use, so you can explore the fringes of industrial robotics. The toolkit is made up of a modular physical prototyping platform, a simplified software interface, and a centralized hub for sharing knowledge, tools, and code... ( homepage ) ( github )  
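
The "simplified software interface" idea amounts to streaming small, human-readable commands at the robot controller instead of writing native robot-language programs. The sketch below shows only the general shape of such a bridge; the host, port, and message format are invented for illustration and are not Robo.Op's actual protocol (which lives in its GitHub repo):

    # Entirely hypothetical sketch of a prototyping bridge in the spirit
    # of Robo.Op: stream plain-text motion commands to a controller-side
    # server. Host, port, and message format are invented for illustration.
    import socket

    def send_move(x, y, z, host="192.168.0.100", port=5050):
        cmd = "MOVE {:.1f} {:.1f} {:.1f}\n".format(x, y, z)  # invented format
        with socket.create_connection((host, port), timeout=2.0) as sock:
            sock.sendall(cmd.encode("ascii"))

    send_move(300.0, 0.0, 450.0)  # millimeters in the robot's base frame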

VERSABALL Beer Pong Robot

From Empire Robotics: The VERSABALL is a squishy balloon membrane full of loose sub-millimeter particles. The soft ball gripper easily conforms around a wide range of target object shapes and sizes. In a process known as "granular jamming", air is quickly sucked out of the ball, which vacuum-packs the particles and hardens the gripper around the object to hold and lift it. The object releases when the ball is re-inflated. VERSABALL comes in multiple head shapes and sizes that use the same pneumatic base... ( Empire Robotics' site )
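
The grip cycle follows directly from that description: vent to stay soft and conform, pull vacuum to jam and harden, re-pressurize to release. A control-sequence sketch with a hypothetical valve interface:

    # Sketch of a granular-jamming grip cycle. The valve driver API is
    # hypothetical; the sequence (conform, vacuum to jam, re-inflate to
    # release) follows the description above.
    import time

    class JammingGripper:
        def __init__(self, valve):
            self.valve = valve  # hypothetical pneumatic valve driver

        def grip(self):
            self.valve.vent()        # soft state: conform around the object
            time.sleep(0.5)
            self.valve.vacuum()      # jam the particles; membrane hardens
            time.sleep(0.3)

        def release(self):
            self.valve.pressurize()  # re-inflate: particles unjam
            time.sleep(0.2)
            self.valve.vent()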

Robot Learning Manipulation Action Plans by "Watching" Unconstrained Videos from the World Wide Web

From Yezhou Yang, Yi Li, Cornelia Fermuller and Yiannis Aloimonos: In order to advance action generation and creation in robots beyond simple learned schemas, we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly generate the sequence of atomic actions that make up longer actions seen in video, in order to acquire knowledge for robots. The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a probabilistic manipulation action grammar based parsing module that aims at generating visual sentences for robot manipulation. Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by "watching" unconstrained videos with high accuracy... ( article at Kurzweilai.net ) ( original paper )
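
The shape of the pipeline: per-frame CNN outputs (a grasp type and an object label) are consolidated by the grammar into atomic-action "visual sentences". The greedy grouping below is only a simplified illustration; the paper's parser is probabilistic and considerably richer:

    # Simplified illustration of the pipeline's shape: per-frame CNN
    # outputs (grasp type, object label) are collapsed into atomic-action
    # triples. The real system uses a probabilistic grammar.
    from itertools import groupby

    def visual_sentences(frames):
        """frames: list of (grasp_type, object_label), one per video frame."""
        sentences = []
        for (grasp, obj), _ in groupby(frames):
            action = "grasp" if grasp != "rest" else "idle"
            sentences.append((action, grasp, obj))
        return sentences

    frames = [("power", "knife")] * 5 + [("rest", "none")] * 2
    print(visual_sentences(frames))
    # [('grasp', 'power', 'knife'), ('idle', 'rest', 'none')]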

John Carmack On Modern C++

Winter break homework from John Carmack. From the Gamasutra reprint article "In-depth: Functional programming in C++": A large fraction of the flaws in software development are due to programmers not fully understanding all the possible states their code may execute in. In a multithreaded environment, the lack of understanding and the resulting problems are greatly amplified, almost to the point of panic if you are paying attention. Programming in a functional style makes the state presented to your code explicit, which makes it much easier to reason about, and, in a completely pure system, makes thread race conditions impossible... ( full article )

Also, from "Lessons to learn from the Oculus development team when using the 'Modern C++' approach": Modern C++ doesn't necessarily imply the overuse of templates. Andrei Alexandrescu says of Modern C++ design: "Modern C++ Design defines and systematically uses generic components - highly flexible design artifacts that are mixable and matchable to obtain rich behaviors with a small, orthogonal body of code." Modern C++ is closely related to generic programming, which is probably why many developers neglect the modern C++ approach: they assume the code will be mostly implemented as templates, making it difficult to read and maintain. In the Oculus SDK, templates represent only 20% of all types defined, and most of them are related to the technical layer... ( full article )
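
The contrast Carmack describes is easiest to see side by side. The articles concern C++, but the idea is language-agnostic; a minimal illustration in Python:

    # The contrast Carmack describes: the impure version reads and writes
    # hidden state, so its result depends on call order and on what other
    # threads do; the pure version depends only on its inputs.
    total = 0

    def add_impure(x):       # hidden global state: racy under threads
        global total
        total += x
        return total

    def add_pure(total, x):  # explicit state: result fully determined by inputs
        return total + x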

OpenCV Vision Challenge

From the OpenCV Foundation: The OpenCV Foundation, with support from DARPA and Intel Corporation, is launching a community-wide challenge to update and extend the OpenCV library with state-of-the-art algorithms. An award pool of $50,000 is provided to reward submitters of the best-performing algorithms in the following 11 CV application areas: (1) image segmentation, (2) image registration, (3) human pose estimation, (4) SLAM, (5) multi-view stereo matching, (6) object recognition, (7) face recognition, (8) gesture recognition, (9) action recognition, (10) text recognition, (11) tracking. Conditions: The OpenCV Vision Challenge Committee will judge up to five best entries. You may submit a new algorithm developed by yourself or your implementation of an existing algorithm, even if you are not the author of the algorithm. You may enter any number of categories. If your entry wins the contest you will be awarded $1K. To win an additional $7.5K to $9K, you must contribute the source code as an OpenCV pull request under a BSD license. You acknowledge that your contributed code may be included, with your copyright, in OpenCV. You may explicitly enter code for any work you have submitted to CVPR 2015 or its workshops; we will not unveil it until after CVPR. Timeline: Submission Period: Now – May 8th, 2015. Winners Announcement: June 8th, 2015 at CVPR 2015. ( full details )

Deep Visual-Semantic Alignments for Generating Image Descriptions

Because of the Nov. 14th submission deadline for this year's IEEE Conference on Computer Vision and Pattern Recognition (CVPR), several big image-recognition papers are coming out this week.

From Andrej Karpathy and Li Fei-Fei of Stanford: We present a model that generates free-form natural language descriptions of image regions. Our model leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between text and visual data. Our approach is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate the effectiveness of our alignment model with ranking experiments on the Flickr8K, Flickr30K and COCO datasets, where we substantially improve on the state of the art. We then show that the sentences created by our generative model outperform retrieval baselines on the three aforementioned datasets and a new dataset of region-level annotations... ( website with examples ) ( full paper )

From Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan at Google: Show and Tell: A Neural Image Caption Generator ( announcement post ) ( full paper )

From Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel at the University of Toronto: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models ( full paper )

From Junhua Mao, Wei Xu, Yi Yang, Jiang Wang and Alan L. Yuille at Baidu Research/UCLA: Explain Images with Multimodal Recurrent Neural Networks ( full paper )

From Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell at UT Austin, UMass Lowell and UC Berkeley: Long-term Recurrent Convolutional Networks for Visual Recognition and Description ( full paper )

All of these came from this Hacker News discussion .
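
A common thread in these papers is scoring image-sentence pairs by embedding regions and words into one space. A stripped-down version of the alignment score in the Stanford paper (each word votes for its best-matching region) might look like this; the dimensions and random embeddings are placeholders:

    # Stripped-down image-sentence alignment score: embed image regions and
    # words in the same space, then sum each word's best region match.
    import numpy as np

    def alignment_score(regions, words):
        """regions: (R, d) region embeddings; words: (W, d) word embeddings."""
        sims = words @ regions.T       # (W, R) dot-product similarities
        return sims.max(axis=1).sum()  # each word votes for its best region

    rng = np.random.default_rng(0)
    img = rng.normal(size=(19, 128))   # e.g. 19 region embeddings
    sent = rng.normal(size=(7, 128))   # 7 word embeddings
    print(alignment_score(img, sent))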


Factory Automation - Featured Product

E4T & S4T Miniature Optical Kit Encoders - Available in CPRs up to 500

High resolution. Limited space. Not a problem for our latest miniature encoders, which provide precise feedback and are easy to install in smaller-sized applications. Now available in 400 and 500 CPR. Coming in at about the size of a nickel, we've added our proprietary Opto-ASIC sensor technology and improved quadrature for even greater motion control. Product features:
• 10 resolutions up to 360 CPR, plus new 400 and 500 CPR resolutions
• 288 configurations available, including single-ended and differential output
• Compact form factor: 0.866 in / 22.00 mm package outside diameter; 0.446 in / 11.33 mm package height; fits NEMA 8, 11, 14 and 17 motors
• Simple and efficient assembly: four-piece construction; push-on hub disk design (patent pending)
• 100 kHz frequency response
• Shafted version up to 0.25 in / 6.25 mm diameter
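
For reference, CPR is counts per revolution of the code disk; with standard 4x quadrature decoding each count contributes four edges, so shaft angle recovers as edges / (4 × CPR) × 360°. A quick sketch:

    # Converting a quadrature edge count to shaft angle. With 4x decoding,
    # a 500 CPR encoder yields 4 * 500 = 2000 edges per revolution.
    def counts_to_degrees(edges, cpr=500, decoding=4):
        return edges / (decoding * cpr) * 360.0

    print(counts_to_degrees(500))   # quarter turn: 90.0 degrees
    print(counts_to_degrees(2000))  # full revolution: 360.0 degrees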