This blog post details the perspective on Cloud Robotics of Gajamohan Mohanarajah, CEO of Rapyuta Robotics, presented at the Robotics: Science and Systems (RSS) 2019 Workshop, Messe Freiburg, Germany, on 2019-06-23.

Cloud Robotics: A Perspective

Opinion from Gajamohan Mohanarajah, CEO | Rapyuta Robotics

The term “Cloud Robotics” was coined by James Kuffner, at the time a Google employee, in 2010 [1]. The concept referred to merging robotics and cloud computing to enable an “extended and shared brain” for robots. The idea was that robots could use the “extended brain” to offload heavy computation to data-center servers, and the “shared brain” to build a common database that collects and organizes information about the world (environment) and (robot) skills/behaviors.

Meanwhile, on the other side of the pond (Europe), RoboEarth [2], an EU-funded project that ran from 2010 to 2014, was busy trying to create an “internet for robots.” Ideally, this would be a place where robots could enjoy both a “shared” (via a common knowledge base) and an “extended” (via servers in the data center) brain.

The founding members of what would become Rapyuta Robotics made up the ETH Zurich team of the RoboEarth project. Our partners included six universities and the Dutch electronics giant Philips. For major results on the common database and on offloading computation, see [3] and [4].

The two sides of the pond did have some collaborations; e.g., James Kuffner was on RoboEarth’s Industrial Advisory Committee along with Brian Gerkey, now OSRF CEO [5].

Note that some of the ideas of Cloud Robotics can be traced back to the 1990s, to Inaba et al.’s work on remote-brained robots [9] and Goldberg et al.’s work on Internet telerobots [8].

 

Cloud Robotics – a new perspective

Where we are coming from

Based on our decade-long experience in the field of Cloud Robotics, especially the last four years as a company where we built and helped others build solutions, we think Cloud Robotics should have a broader definition. The motivation for this broader definition, and our approach to solving this challenge, comes from the following two key learnings/realizations:

  • People and processes over technology – We must first empathize with the user and understand their goals, their processes, and their constraints before selecting the technology. We should also be aware of the golden hammer, a cognitive bias that involves an over-reliance on a familiar technology.
  • Community over the individual – ‘in union there is strength.’ Period!

There is a bigger picture beyond the brain, i.e., computing. A brain needs a body – robot hardware – a composition of sensors to understand the environment and actuators to do useful physical work, all working in synchronization. Furthermore, a significant portion of the use cases involves multi-robot scenarios – i.e., multiple sets of computing, sensors, and actuators working in synchronization. Finally, on top of everything, there are people and processes that need to be followed.

Note that we are not disputing the benefits of connecting cloud computing and robots. We still believe in the benefits of having an extended and shared brain. What we are advocating now is to go beyond the brain, if the ultimate goal is to make robots more accessible.

Drawing from our conversations with end users and robotics solution developers, the current challenges preventing robotics from going mainstream are as follows:

  • Technical complexity: the extremely technical nature of robotics solutions drives away those who are not experts in the field.
  • Large capital expenditure: a substantial amount of capital is needed to implement a robotics solution.
  • Rigid design: robotics solutions are built for a very specific purpose, making them inflexible to environmental and process changes.
  • Limited access: most solutions provide only on-site physical access, which poses an operational and scaling challenge.
  • Proprietary interfaces: non-standardized APIs between software/hardware components create fragmentation and limit innovation.

 

The Aha Moment

The aha moment for us was when we saw some parallels between the old way of operating servers and modern cloud computing. Let us explain in more detail. 

Before the advent of cloud computing, a whole team of experts was required to install, configure, test, run, secure, and update servers and applications. Even the biggest companies with the best IT departments were struggling on this front, and small to mid-size businesses didn’t stand a chance.

Then came cloud computing. 

Computing was democratized – anyone had instant access to virtually unlimited computing resources, charged by the hour! This leveled the playing field by allowing garage startups to compete with giants. Formally, the National Institute of Standards and Technology (NIST) [6] defines the cloud as a model that gives computing resources (e.g., servers, storage, and software) the following essential characteristics:

  • On-demand self-service: Whenever you need computing resources, you can get them without any hand-holding from an expert.
  • Shared resource: You don’t own your servers; you share a common pool of resources.
  • Rapid elasticity: Scale up and down based on demand.
  • Ubiquitous access: Resources are accessible over the network/internet from anywhere.
  • Measured service: Resource usage is transparently measured at some level of abstraction appropriate to the type of service (e.g. storage, processing, bandwidth, and active user accounts) enabling metered services (a.k.a. “pay-per-use”), resource optimization, and predictive planning [7].

As a thought exercise, let’s apply the above characteristics to robotics and imagine what the world might look like:

 

On-demand self-service

End users provision and deprovision robots as needed. Given the heterogeneity of the robotic hardware and software components, the provisioning step may also include a composition step where the user composes the required configuration from a catalog of hardware and software components without the need for expert help. This will create an open market driving innovation and competitive pricing.
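As a rough sketch of what catalog-based self-service provisioning could look like, here is a minimal, self-contained example; the component names, catalog entries, and prices are hypothetical assumptions of ours, not an actual provisioning API:

```python
from dataclasses import dataclass, field
from typing import List

# All names, components, and prices below are hypothetical and for
# illustration only; this is not an actual Rapyuta Robotics API.

@dataclass
class Component:
    name: str           # e.g., "amr-base-v2" or "slam-stack"
    kind: str           # "hardware" or "software"
    hourly_rate: float  # metered price, USD per hour

@dataclass
class RobotConfiguration:
    components: List[Component] = field(default_factory=list)

    def add(self, component: Component) -> "RobotConfiguration":
        self.components.append(component)
        return self

    def hourly_cost(self) -> float:
        return sum(c.hourly_rate for c in self.components)

# A catalog the end user browses without expert help.
catalog = {
    "amr-base-v2": Component("amr-base-v2", "hardware", 3.50),
    "slam-stack": Component("slam-stack", "software", 0.40),
    "fleet-manager": Component("fleet-manager", "software", 0.25),
}

# Compose the required configuration, then request provisioning.
config = (
    RobotConfiguration()
    .add(catalog["amr-base-v2"])
    .add(catalog["slam-stack"])
    .add(catalog["fleet-manager"])
)
print("Provisioning request:", [c.name for c in config.components])
print(f"Estimated cost: ${config.hourly_cost():.2f}/hour")
```

The point is the shape of the workflow: browse a catalog, compose a configuration from heterogeneous hardware and software components, and get a transparent cost estimate without expert help.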

 

Shared resources

End users don’t have to own robots, because institutions with lots of capital will hold the hardware. Having all units of a specific type of hardware managed in a common way will drive economies of scale in hardware and lower maintenance costs.

 

Rapid elasticity

Only the “brain,” or computing, part of the robot can have rapid elasticity. Since the body, or hardware components, cannot move at the speed of light, you will need to wait some time for robots to be shipped and clear customs when crossing borders.

 

Ubiquitous access

End users can access their robots/robot fleets from anywhere – even using thin clients like a browser – and operate them in an intuitive way. E.g., this video shows our prior work that allows the end user to remotely launch and control an aerial robot using a web browser.

 

Measured service

Resource usage is measured and transparent to all users – end users, solution providers, hardware developers, software developers. E.g., how many picks did a robot do? How long did the robot run? How many times was the global planning API called? How much bandwidth/storage was used for logs/metrics? How much computation was consumed by the global map optimizer? This measured usage will enable pay-per-use, optimization, and predictive planning, thus optimizing cost.
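As a minimal sketch of what such metering could look like in practice, consider aggregating raw usage events into a pay-per-use invoice; the metric names, event format, and unit prices below are our own illustrative assumptions, not an actual metering API:

```python
from collections import defaultdict

# Hypothetical usage events emitted by a deployed solution; all metric
# names and unit prices are illustrative, not actual pricing.
usage_events = [
    {"source": "picker-01", "metric": "picks", "value": 1200},
    {"source": "picker-01", "metric": "runtime_hours", "value": 8.0},
    {"source": "fleet", "metric": "global_planner_calls", "value": 340},
    {"source": "fleet", "metric": "log_storage_gb", "value": 2.5},
]

unit_prices = {  # pay-per-use rates, USD
    "picks": 0.01,
    "runtime_hours": 2.00,
    "global_planner_calls": 0.002,
    "log_storage_gb": 0.10,
}

# Aggregate raw events per metric, then price them for the invoice.
totals = defaultdict(float)
for event in usage_events:
    totals[event["metric"]] += event["value"]

invoice = {m: round(v * unit_prices[m], 2) for m, v in totals.items()}
print(invoice)                        # transparent per-metric charges
print("Total:", sum(invoice.values()))
```

The same measured records that drive billing can feed resource optimization and predictive planning.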

A good start, but it feels a bit like shoehorning. Let’s emphasize the characteristics that are more relevant to robotics, tone down some, and merge others.

 

First, remove on-demand. 

Given the hardware components and safety requirements, it is hard to imagine an end user unilaterally provisioning a robotics solution on-demand, i.e., as soon as or whenever required without any human touchpoints, at least for the next couple of years. Unilateral consumption and flexible configuration of the deployed resources should be the first goal.

 

Second, remove ‘rapid’ in rapid elasticity.

Given the requirement of having the hardware on-premises, i.e., at the customer location, ‘rapid’ is a bit too much to aim for in the short/mid term, and based on user feedback it is not the burning need.

 

Finally, replace measured service with open interfaces. 

When we say ‘measured service’ outside the cloud-native domain, we get a lot of eye rolls. Let’s use ‘open interfaces’ instead, where ‘open’ is primarily used in the sense of transparency and community.

Done! Here comes the one-sentence version – drum roll, please:

Cloud Robotics is a model enabling self-service, elastic, and ubiquitous access to a shared pool of robotics resources with open interfaces.

Let’s try a slightly more detailed version with one of the solutions we’ve built, as shown in the figure below. The solution consists of a set of robots that collaborate with people in order to increase the efficiency of piece picking – the most human-intensive process in a warehouse.

The Cloud Robotics characteristics, when interpreted for the above pick-assist solution, give:

  • Self-service: the end user does not require an expert to configure and operate a robotics solution. 
  • Shared resources: the end user doesn’t own robots. Instead, robots are rented on a monthly basis.
  • Elasticity: re-configuration to fit the business process and to allow for upward or downward scaling with minimal effort.
  • Ubiquitous access: the state of the robotics system and its sub-components can be observed and controlled from anywhere by anyone who has the appropriate authorization.
  • Open interfaces: interfaces are well-defined, making third-party integration (e.g., with WMS/ERP systems) easier; see the sketch after this list. Transactions are transparent to the relevant stakeholders. For example, the hardware maintenance company has full access to the operational data, the hardware, and its usage, enabling them to perform timely maintenance and guarantee high uptime.
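To make ‘open interfaces’ concrete, here is a minimal sketch of the kind of well-defined contract a pick-assist solution could expose to a WMS/ERP; all type and method names are hypothetical, invented for illustration:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class PickOrder:
    order_id: str
    sku: str
    quantity: int
    destination: str  # e.g., a packing-station ID

class PickAssistInterface(ABC):
    """A hypothetical open interface between a WMS/ERP and the robots."""

    @abstractmethod
    def submit_order(self, order: PickOrder) -> str:
        """Queue a pick order; returns a tracking ID."""

    @abstractmethod
    def order_status(self, tracking_id: str) -> str:
        """Return 'queued', 'in_progress', or 'done'."""

    @abstractmethod
    def usage_report(self) -> dict:
        """Expose operational data (picks, runtime, faults) so that
        stakeholders such as the maintenance company can plan upkeep."""
```

A stable, documented contract like this is what lets third parties integrate, audit usage, and innovate on top of the solution without bespoke engineering.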

 

Aha! but…

Of course, there is a but. You probably started noticing this when we were trying to emphasize the characteristics. Here are the top three challenges with this new perspective:

  • Robots are heterogeneous
  • Robots are physically distributed
  • Robots need to be autonomous

The first two challenges – heterogeneity and physical distribution – can be better understood by looking at the ‘pets vs cattle’ idea used in cloud computing. In a pet service model, the resources are unique, lovingly hand-raised, and when they get sick, you nurse them back to health. You scale these resources up by fostering their growth, and when they are not growing or fall sick, everyone notices. Meanwhile, in a cattle service model, each resource is “almost identical to each other”, and “when one gets sick, you replace it with another one”. You scale these by creating more of them, and when one is unavailable, no one notices [10]. For the history of ‘pets vs cattle’ and its proper usage, see [11]. Although the ‘pets vs cattle’ analogy is used in the context of heterogeneity and scale in the cloud computing domain, we can also extend it to explain physical distribution: ignoring the pet cafes in Akihabara, pets stay at home, creating a special bond with the people living there, while cattle are raised as a group on a farm, i.e., co-located like servers in a data center.

Now the challenge is #robots-are-pets. We are not saying this because we love them so much. Here is our logical argument:

Heterogeneity – in order to achieve some amount of optimality, including financial viability, on a given task, robots need specialized mechanisms – ‘no free lunch’. We acknowledge the school of thought of building general-purpose, super-human robots like Doraemon to fix this problem, but we don’t see a viable solution arriving soon.

Physical distribution – robots have to be ‘on-site’, where they are expected to do useful physical work. This is different from a server, which can be placed in an arbitrary location as long as there is a good internet connection.

Now let’s move on to the third challenge – autonomy. Autonomy can be seen as a measure of how well you understand your surroundings and how well/fast you react to get the optimal reward – of course, with minimal human intervention. Given this definition, to get a feel for the challenge, let’s compare robots to the following three domains:

  • Cloud computing – input/sensing for the system is well structured (i.e., APIs are well defined) with no uncertainty.
  • IoT – sensor bandwidth is low, and in most cases there are no hard real-time requirements to close the loop.
  • Smartphones – compared to the sensors in a robot, the sensing in a smartphone is still limited, and with a smartphone the human user assesses the situation and chooses the appropriate app to use, a luxury robots don’t have.

Note that the above comparison relates only to autonomy; each of these domains has its own set of challenges, and a big salute goes to those who are solving them and inspiring us.

Now how are we going to herd the robots?

 

Ah! We see some light

The gist of the idea is this: pet herding requires a diverse set of skills/roles and resources that are at least an order of magnitude greater than what herding cattle requires. We are taking an open approach to building a technology stack (a.k.a. platform) that:

  • helps with the aspects of pet herding that can be automated
  • enables streamlined interaction between the pet owners, pet breeders, pet doctors, etc.

We will keep you updated on developments at our end. If you have any thoughts or opinions, please feel free to write to us. 


References

[1] Kuffner, James (2010). “Cloud-Enabled Robots.” IEEE-RAS International Conference on Humanoid Robotics. https://www.scribd.com/doc/47486324/Cloud-Enabled-Robots

[2] http://roboearth.ethz.ch

[3]  M. Tenorth, A. C. Perzylo, R. Lafrenz, and M. Beetz, “Representation and Exchange of Knowledge About Actions, Objects, and Environments in the RoboEarth Framework,” in IEEE Transactions on Automation Science and Engineering, vol. 10, no. 3, pp. 643-651, July 2013.

[4] G. Mohanarajah, D. Hunziker, R. D’Andrea and M. Waibel, “Rapyuta: A Cloud Robotics Platform,” in IEEE Transactions on Automation Science and Engineering, vol. 12, no. 2, pp. 481-493, April 2015.

[5] http://roboearth.ethz.ch/iac/index.html

[6] The NIST Definition of Cloud Computing. https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf

[7] https://www.techopedia.com/definition/14469/measured-service-cloud-computing 

[8] K. Goldberg and R. Siegwart, Eds., Beyond Webcams: An Introduction to Online Robots. Cambridge, MA, USA: MIT Press, 2002.

[9] M. Inaba, S. Kagami, F. Kanehiro, Y. Hoshino, and H. Inoue, “A Platform for Robotics Research Based on the Remote-Brained Robot Approach,” International Journal of Robotics Research, vol. 19, no. 10, pp. 933–954, 2000.

[10] https://medium.com/@Joachim8675309/devops-concepts-pets-vs-cattle-2380b5aab313

[11] http://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle/

 

The content & opinions in this article are the author’s and do not necessarily represent the views of RoboticsTomorrow
