We're seeing growing demand for interfaces that don't require a traditional frame grabber inside the personal computer (PC), as well as a migration towards smaller form factor PCs with embedded processing.

Gigabit Ethernet Interfaces and Robotics: John Phillips, Pleora Technologies

John Phillips | Pleora Technologies

Can you provide some background on Pleora and the company’s areas of specialty?

Pleora Technologies is headquartered in Ottawa, Canada and is one of the few companies worldwide that specializes in camera connectivity solutions and software for transmitting video and related data over Ethernet networks. System and camera manufacturers deploy our Gigabit Ethernet-based external frame grabber and embedded camera interface products in manufacturing, medical, defense, security and transportation applications that require high-speed, uncompressed video for real-time analysis. As a pioneer in the machine vision industry, we helped establish and continue to lead the development of camera interface standards that ensure seamless interoperability and simplified integration of Ethernet-based vision products.

What are some of the important trends that you see in terms of the interfaces used in robotics applications?

We’re seeing growing demand for interfaces that don’t require a traditional frame grabber inside the personal computer (PC), as well as a migration towards smaller form factor PCs with embedded processing. Standardization in the machine vision system market is also gaining widespread acceptance, with end-users increasingly soliciting proposals based on the GigE Vision standard. Across our end-markets there’s continuing demand for products that are easy to integrate and upgrade to help simplify design, speed time-to-market, lower operating costs and improve overall performance.

How does a traditional frame grabber work?

Quite simply, frame grabbers interface cameras to a PC. Traditionally they have been internal components of the PC, with the camera interface provided through a PCI or PCMCIA card. Increasingly, these internal devices are being replaced by external frame grabbers that connect to an extended range of PC form factors through the Ethernet or USB port.
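To make that concrete, here is a minimal capture loop. It uses OpenCV's generic VideoCapture as a stand-in for whatever interface the frame grabber exposes to the host; a real GigE Vision or USB3 Vision camera would typically be opened through a vendor SDK or a GenICam-based library instead, so treat the device index and the API choice as illustrative assumptions rather than a prescribed implementation.

```python
import cv2

cap = cv2.VideoCapture(0)   # index 0: first camera the OS exposes (assumed)
if not cap.isOpened():
    raise RuntimeError("no camera found at index 0")

for _ in range(100):        # grab 100 frames for this sketch, then stop
    ok, frame = cap.read()  # frame arrives as a NumPy BGR image array
    if not ok:
        break               # stream ended or device disconnected
    # ... hand `frame` to the vision-processing pipeline here ...

cap.release()
```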

Why are companies looking at alternatives to traditional frame grabbers?

One main reason is the continuing evolution of the PC, with increasing processing power in an ever-decreasing footprint. The smartphone in your pocket today provides more functionality than a desktop PC of just a few years ago. Similarly, processing capabilities that were once unheard of are now readily available in laptops, embedded PCs, small form factor PCs and single-board computers. As part of a machine vision system, smaller form factor computing platforms deliver considerable weight and footprint advantages as well as cost savings for systems that operate 24/7. However, as form factors shrink there is often no room for an internal frame grabber. Instead, standards like GigE Vision and USB3 Vision allow the use of external frame grabbers connecting to ports already built into these smaller computing platforms.

With external frame grabbers, end-users can more easily deploy a distributed computing system that integrates multiple cameras, small form factor PCs and remotely located processing, or upgrade an existing vision system with an installed base of high-performance or application-specific analog cameras. Rather than replace these expensive cameras with digital versions, we provide a network-attached external frame grabber that captures the video from multiple cameras, converts it to digital data, integrates other data sources and multicasts to multiple locations. Depending on the application, the extended reach of Ethernet means the PC can be located outside the work environment, which can simplify maintenance.
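As a rough illustration of the multicast distribution described here, the sketch below shows how a receiving node might join a multicast group using standard sockets. The group address and port are invented for the example, and a real GigE Vision stream would be reassembled into frames by a GVSP-aware library rather than read as raw UDP packets.

```python
import socket
import struct

GROUP, PORT = "239.192.0.1", 5000   # assumed multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the kernel (and, via IGMP, the network switch) to deliver the
# group's traffic to this host.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, sender = sock.recvfrom(65535)
    # ... reassemble packets into video frames with a stream-aware library ...
```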

You mentioned embedded processing.  What is it and how is it applied to robotics solutions?

The processor is the “brains” of an embedded system that performs a dedicated function within a larger mechanical or industrial setting. Where a typical PC is designed with the flexibility to support a range of functions for end-users, an embedded system is dedicated to one particular task, often with little or no user interface. In robotics most tasks or processes that are automated and repeated, including image and video processing within the vision system, are good candidates to be handled by an embedded processor.

Across our customer base there is a shift from traditional computing architectures, where processing happens at a desktop PC, to embedded systems. Embedded systems are now available that offer very high processing capabilities in an extremely small form factor. This means processing intelligence for the vision system can be located at different points in the network: in a roadside cabinet, up a gantry or mounted with a camera. In addition, power efficiencies help lower operating costs and reduce heat output, which prevents premature failure of other electronic components and increases reliability. The end result is greater system design flexibility, more intelligence at various points in the network, and performance and cost advantages.

What are some of the considerations when choosing to use embedded processors?

As part of the design process, system integrators need to ensure that the software development kit (SDK) they are using to interface to the camera can run on the embedded processor. An embedded system based on an ARM processor, for example, does not have a vision-specific interface, but it does support Ethernet. We're seeing more demand for external frame grabbers, which come with an SDK and drivers to simplify system design using off-the-shelf ARM-based embedded processing. Another significant design consideration is ensuring the image processing algorithm can run on the embedded processor. Libraries like OpenCV are available for both ARM and PowerPC, as the sketch below illustrates.
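For instance, a simple inspection step like the following runs unchanged on x86 and ARM builds of OpenCV, which is what makes porting an algorithm to an embedded board tractable. The input file name and the Canny edge thresholds are placeholder values for the example.

```python
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # assumed input image
if img is None:
    raise FileNotFoundError("part.png not found")

blurred = cv2.GaussianBlur(img, (5, 5), 0)   # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)          # assumed edge thresholds
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print(f"detected {len(contours)} candidate features")
```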

System integrators and manufacturers needn't be terribly concerned with changes to development environments and required skill sets. Certainly, switching a system from Windows to Linux isn't a trivial undertaking; however, the tool sets targeting embedded systems are becoming quite developer-friendly.

Do standards-based interfaces and embedded computing help companies get to market quicker?

Absolutely. Whether you’re designing a commercial machine vision system or upgrading a system within your own organization, standards and embedded computing help speed design time. Pleora has been deeply involved in the development of vision standards based on common PC interface and networking technologies because this ultimately drives design flexibility for our customers. By deploying a standards-based interface, designers can concentrate on image processing requirements rather than spending time reading standards documentation and testing implementations for compliance and interoperability.

With off-the-shelf embedded processors, designers have more flexibility in terms of which processing engines to use, and a wider range of options in terms of where to place processing intelligence within the network thanks to form factor advantages. One often overlooked benefit of embedded processing is the potential change in operating system. Microsoft Windows is often used, but in switching to an embedded computing platform the operating system can be changed to Linux. This often results in increased system reliability and faster startup times, as well as minor but noticeable performance improvements.

What are some of the other benefits in moving to Gigabit Ethernet?

Ethernet is a mature technology that is globally installed across multiple industries and applications, meaning designers of vision systems for robotics applications can lower overall system costs by using off-the-shelf components developed, maintained and sold to the mass market by established providers. In addition, the widespread adoption of Ethernet encourages the continuing evolution of standards by suppliers and end-users. For the machine vision industry, this allows the reuse of technology already tested and perfected by other industries. For example, the IEEE 1588 protocol, widely deployed over the past decade to synchronize wired and wireless telecom networks, is used in machine vision systems to synchronize network devices with sub-microsecond accuracy over the same Ethernet connection used for image and control signals.
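For readers unfamiliar with IEEE 1588, the correction a slave clock applies is derived from four timestamps exchanged in the protocol's Sync/Delay_Req handshake. The sketch below shows the standard offset and path-delay arithmetic; the nanosecond values are made up for illustration.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE 1588 timestamps: t1 = Sync sent (master clock),
    t2 = Sync received (slave clock), t3 = Delay_Req sent (slave clock),
    t4 = Delay_Req received (master clock)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way network path delay
    return offset, delay

# Example: the slave clock reads 250 ns ahead over a 350 ns path
offset, delay = ptp_offset_and_delay(t1=1_000, t2=1_600, t3=2_000, t4=2_100)
print(offset, delay)  # -> 250.0, 350.0
```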

In terms of overall performance, Ethernet provides image capture capabilities over a longer reach: up to 100 meters between network nodes over standard copper cabling, and even greater distances with switches or fiber extenders. Ethernet also enhances network flexibility and scalability, supporting almost every conceivable connectivity configuration, including point-to-point, point-to-multipoint and multipoint-to-multipoint.

With Gigabit Ethernet, machine vision system manufacturers and end-users don’t have to reinvent the wheel at either the physical or application layer interface. There are hardware and software options that provide out-of-the-box plug-and-play simplicity so designers can complete proof of concepts quickly and develop applications that are easy to integrate, upgrade and maintain.       

Looking forward, how do you see new technologies affecting the future of the robotics industry?

As in any industry, technologies that help deliver cost and performance advantages will drive the evolution of vision systems for robotics applications. The increasing performance capabilities of embedded processors will support the expanded deployment of solid-state and miniaturized computing platforms. For end-users this will bring significant benefits, including increased reliability, lower initial investment costs, and ongoing power consumption and system maintenance advantages. We're already seeing this trend in the defense sector, and it won't be long before it is adopted by the machine vision and robotics markets. As software development kits for embedded systems continue to mature, the barrier to entry for these systems is being continuously lowered, and that's a good thing for everyone designing advanced vision systems.

 

About John Phillips, Pleora Technologies

John Phillips is Senior Manager, Marketing at Pleora Technologies, responsible for product management, product marketing, and corporate communications.  Prior to joining Pleora, Mr. Phillips spent 10 years with March Networks in software development, sales and product line management, where he guided development of advanced video solutions for the security market and played a key role in the company becoming a recognized market leader. Before that, Mr. Phillips worked with Elcombe Systems and IBM. He holds a BSc in Computer Science from the University of Ottawa.

The content & opinions in this article are the author’s and do not necessarily represent the views of RoboticsTomorrow
