Why robotics needs a radical reboot
12 Feb 2018 by Evoluted New Media
Intelligent robots that operate convincingly within the dynamic chaos of the real world are far from being a reality. For artificial systems to behave anything like living systems, we need to fundamentally rethink the standard view of what behaviour is all about, says Dr Rupert Young
Living systems employ novel and parsimonious methods which resolve and dissolve many of the seemingly intractable problems faced by conventional robotics.
Generally, standard robotics, including robot arms and self-driving cars, has been viewed as an engineering problem: the geometric manipulation of objects within three-dimensional space. This requires the robotic system to contain the equations of motion that govern the physics of those objects and their worlds.
Moving a robotic arm normally requires predicting in advance the joint angles needed to form a particular pose, which in turn requires equations to compute those angles, along with parametric knowledge such as mass, gravity and the length of the limbs. Self-driving cars require detailed mapping of the environment, and algorithms to plan and move objects through their virtual worlds. The trajectories and models are continually adjusted by checking that the real, perceived world matches the system’s predictions.
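To make the conventional route concrete, here is a minimal Python sketch (my own illustration, not from the article) of analytic inverse kinematics for a two-link planar arm. Note how the link lengths enter as parametric knowledge: if a limb changes, the equations must change with it.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Joint angles that place a 2-link planar arm's tip at (x, y).

    Illustrative only: l1 and l2 are the link lengths, parametric knowledge
    the robot must possess in advance for the pose to come out right.
    """
    # Law of cosines gives the elbow angle from the distance to the target.
    cos_elbow = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    # Shoulder angle: direction to the target, corrected for the bent elbow.
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```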
That might seem a valid approach at first, but it becomes increasingly complicated as environments grow more complex, as they do in the real world.
Humans and animals work in a very different way. Rather than trying to predict and compute the actions required to achieve a task, people vary their actions in order to perceive the world as they want. In other words, behaviour is a process of controlling perceptual input.
For example, catching a baseball is not about the computation of trajectories and intercept points, but merely about keeping the speed of the ball’s image on the retina constant. Operation of a robot arm can be achieved by a multitude of simple controllers, handling perceptual inputs such as relationships between joint angles, rather than by complex kinematics computations. As parametric knowledge is not required, parameters such as limb length can be changed without any adverse effect on the robotic system.
When driving, we turn the steering wheel to maintain our perception of the car between the white lines. We don’t turn the wheel to a specific angle or by a specific amount, but rather until we perceive the car between the lines. There are many factors that can affect the heading of the car: wheel balance, tyre pressures, rain, road surface and especially wind. And we don’t need to know anything about them, as we simply counteract their combined effects on the perceived position of the car.
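A minimal sketch of that loop in Python (the names, gains and the toy wind disturbance are my assumptions, purely for illustration): the controller has no model of the car or the weather; it simply keeps steering until the perceived position matches the reference.

```python
import random

reference = 0.0   # desired perception: the car centred between the lines
position = 1.5    # current perceived lateral offset in metres
GAIN, DT = 2.0, 0.05

for _ in range(200):
    error = reference - position        # goal perception minus current perception
    steering = GAIN * error             # act in proportion to the perceptual error
    wind = random.uniform(-0.3, 0.3)    # unmodelled disturbance
    position += (steering + wind) * DT  # the world combines action and disturbance

print(round(position, 2))  # hovers near 0.0, the disturbances never modelled
```

Nothing in the loop refers to wheel balance, tyre pressure or wind; their combined effect simply shows up in the perception and is counteracted there.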
Avoiding the demon
So, with the perceptual control approach there is no need to model the transfer function between, say, the steering wheel and the heading of the car. In fact, we could go as far as to say that there is no transfer function that could be modelled anyway. At least not in practice, unless you are some sort of Laplacian Demon who knows the entire state and dynamics of the universe.
But, you may say, in some cases transfer functions can be defined and robotic systems can be developed in that way. Well, yes, that is true in simulated environments. In a simulation the transfer function between the steering wheel and the heading can easily be defined, and a car can be controlled perfectly by this approach. But that is because with simulations we are acting as if we were Laplace’s Demons: it is we who define the universe in which the simulation runs.
It is similar with controlled environments such as the factory floor or the laboratory, where the uncertainty is limited and managed, allowing us to define some relatively modest models. However, in the real world we can’t do this; we can’t be Laplacian Demons, and there are no transfer functions to model. This is why robots have been stuck in predictable, structured environments, and why the modelling approach is not viable in the real world. The fact that the conventional approach works in simulation has misled researchers into thinking that it is valid in the real world.
Simple perception
The perceptual control system is a negative feedback process where the error between the goal and current perception drives the output action. By not requiring knowledge of the relationship between input and output, perceptual control systems overcome the complexity problem associated with predictive modelling. Although it seems as if we do prediction, it is only in the sense that we can set perceptual goals in advance of acting, not in the sense of predicting output by way of internal simulations of the physical world. The conventional approach may work after a fashion, but perceptual control shows that it is unnecessarily complex.
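Rendered as code, a single control unit is only a few lines (a minimal sketch; the function name and signature are mine, not the article’s):

```python
def control_step(reference, perception, output, gain, dt):
    """One iteration of a perceptual control unit.

    The error between the goal (reference) and the current perception drives
    the action; no model of how the output affects the world is required.
    """
    error = reference - perception
    return output + gain * error * dt  # adjust output so as to shrink the error
```

The same few lines serve for any controlled perception; only the reference, the gain and the source of the perceptual signal differ.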
There is another way that living systems overcome complexity, which is with the perception and control of high-level invariants. This is achieved with a hierarchical architecture of perceptual control systems. The higher up the hierarchy, the more complex the perceptions, and the more psychological, in the sense that they don’t correspond directly to physical properties or objects of the real world.
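A toy two-level illustration in Python (the quantities and gains are assumptions, chosen only to show the structure): the higher system controls a more abstract perception, grip pressure, not by acting on the world but by setting the reference of the joint-angle system below it.

```python
grip_reference = 0.5    # high-level goal: perceived grip pressure
angle_reference = 0.0   # low-level goal, written by the higher unit
angle = pressure = 0.0
DT = 0.01

for _ in range(2000):
    # Higher level: controls perceived pressure by adjusting the angle goal.
    angle_reference += 1.0 * (grip_reference - pressure) * DT
    # Lower level: controls the perceived joint angle by producing torque.
    torque = 20.0 * (angle_reference - angle)
    # Toy environment: torque closes the joint; pressure follows the angle.
    angle += torque * DT
    pressure = max(0.0, angle)

print(round(pressure, 2))  # settles at the high-level goal of 0.5
```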
This is directly relevant to self-driving cars and represents a good reason to be very wary of the ongoing hype around the imminent arrival of autonomous vehicles on our roads. There is more to driving than low-level object manipulation. It also requires dynamic response to novel circumstances, as well as psychological interpretation of the intentions of other control systems (that’s other drivers to you and me). So far, self-driving cars have largely been restricted to test environments. Unless the systems are able to incorporate understanding and control of high-level, psychological perceptions, that is where they will remain.
Any task can be deconstructed into a set of perceptual goals. Although the basic process is simple, there is a lot more going on in an everyday task than might be appreciated.
The perceptual control approach provides a very different way of thinking about behaviour, with a simple process that is applicable to all types and levels of behaviour. Demonstrating that complex internal models of the world are not required and that perception is the goal of behaviour rather than actions has profound implications not only for our understanding of ourselves as human beings, but also for behavioural sciences and robotics. The resultant control architecture is parsimonious and computationally lightweight making it ideal for implementing in artificial systems.
For robotics the benefits are significant: much simpler systems that are inherently adaptive and autonomous, infinitely scalable, and free of the complexity of the conventional computational and modelling approach.
The perceptual control approach to robotics is outlined in my paper in the Artificial Life journal: Young, R. (2017). A General Architecture for Robotics Systems: A Perception-Based Approach to Artificial Life. Artificial Life, 23(2).
Dr Rupert Young is an independent researcher and technologist with robotics start-up Perceptual Robots. He has a degree in Computing with Artificial Intelligence and a PhD in Robotics Vision.