Earlier this month, I had the complete, overwhelming pleasure of meeting a humanoid robot named TIAGo.
I met TIAGo at the Smart Factory Expo in Liverpool, where I was surrounded by industrial robots carrying out repetitive tasks such as assembling tiny parts or moving boxes. While I enjoyed watching the industrial robots go about their work in an orderly, consistent manner, none had the impact of TIAGo. Meeting him caught me up in a kaleidoscope of emotions: excited, surprised, shocked, amazed, all perfectly standard when meeting highly complex robotics, but also, oddly, maternal and teary.
But why? Why had coming face-to-face with the awe-inspiring leaps in robotics made me feel like I was interacting with a beloved pet dog, or a particularly cute child? This is what I had to understand.
Perhaps, as Dr Kate Darling, robot ethicist and researcher at the Massachusetts Institute of Technology (MIT) Media Lab, explains, it is simply personification, encouraged by film characters such as WALL-E, Marvin the Paranoid Android from the classic The Hitchhiker’s Guide to the Galaxy, or Baymax from Big Hero 6. Maybe his responsive chat, in a Northern accent no less, and his thoughtful expression made him appear more human to me and created this connection on my part.
Robots can certainly be designed to evoke a response in users, generally by manipulating features such as the eyes, the face as a whole, and movement, a finding reinforced by Dr Darling’s research. Faces, and the reading of faces, are central to human communication, both as markers of identity and as carriers of non-verbal social cues, so it stands to reason that a humanoid robot would create a better, more instantaneous connection with humans, as we physically recognise more of ourselves in it.
At the ACM/IEEE International Conference on Human Robot Interaction (HRI) in March 2018, roboticists from the University of Washington presented a paper entitled ‘Characterizing the Design Space of Rendered Robot Faces’, studying the importance, and the objectivity, of robot facial features.
They looked particularly at the ‘presence of a particular element on the face (e.g., mouth, nose, eyebrows, cheeks/blush); the colour of these elements and the face (and any additional features); and the size, shape, and placement of each element.’
TIAGo’s design means that he has circular spots of light in his eyes, within the camera lenses, giving the appearance of pupils, and the shape of the outline around his eyes suggests curiosity, or the act of considering. By also including a small, indented shape below his chin, which could be viewed as a mouth, TIAGo’s facial features read as friendly to me.
The movement style of robots also matters to humans. Over the course of a day, humans move hundreds of times: shifting in a chair, blinking, crossing and uncrossing their legs, or adjusting clothes or hair. These responses come from needing to adjust the body to remain comfortable. Robots, of course, have no such impulses, because they feel no bodily discomfort, and so they stay still. This can lead humans to view robots as too still, and any subsequent movement can then seem jarring.
Similarly, jerkiness of movement is an issue faced by roboticists. Human movement is generally fluid and flows smoothly, except in cases of older age or ill-health. Consider a colleague adjusting their glasses: the arm folds at the elbow while being raised at the shoulder, and the index finger extends to push the glasses up the bridge of the nose. Simultaneous, simple, and dynamic for a human; much more complex for a robot.
As with humans, ‘almost all robots have a movable body. Some […] have dozens of movable segments, typically made of metal or plastic. Like the bones in your body, the individual segments are connected together with joints.’
Actuators, a power source, and electrical circuitry, all controlled by the computer, work together to create robot movements. ‘The robot's computer controls everything attached to the circuit. To move the robot, the computer switches on all the necessary motors and valves,’ to achieve the desired motion. This can result in jerky, stunted mobility, due to several factors, from the age of the technology and heavy motors to dodgy WiFi connection speeds.
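To make the jerkiness concrete, here is a minimal, hypothetical sketch in Python (not any real robot’s control code): a single joint swept through 90 degrees. Switching a motor through a few coarse steps produces large, visible jumps between positions, while interpolating the same sweep into many small increments reads as smooth motion to an observer.

```python
def joint_trajectory(start, end, steps):
    """Joint angles (degrees) from start to end in `steps` equal increments."""
    return [start + (end - start) * i / steps for i in range(steps + 1)]

def biggest_jump(path):
    """The largest single change in angle between consecutive positions."""
    return max(abs(b - a) for a, b in zip(path, path[1:]))

jerky = joint_trajectory(0.0, 90.0, 3)    # three coarse 30-degree jumps
smooth = joint_trajectory(0.0, 90.0, 90)  # ninety small 1-degree increments

print(biggest_jump(jerky))   # 30.0 -- visibly stunted motion
print(biggest_jump(smooth))  # 1.0  -- reads as fluid
```

The same total motion takes place in both cases; only the granularity of the commanded steps differs, which is one reason older or under-powered hardware looks so mechanical.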
Robotics technology, and the construction of upgraded mechanical parts, is improving at an ever-increasing rate, as demonstrated recently by researchers at Northeastern University, who released a video in July this year showing their success in creating a robot arm that can smoothly mimic the actions of a researcher. This smoothness of motion was evident to me when meeting TIAGo, whose movements retained a fluidity, with the exception of his wheeled base. I was more surprised by the smooth motion of his arm and head movements, so naturally they took more of my focus. Although I knew he was wheeled, as Richard Waterstone explained during our meeting, his lack of standard “human” movement did not make me uncomfortable, as it would for some.
According to Darling, we are ‘biologically hardwired to project intent and life onto any movement in our personal space that seems autonomous,’ so perhaps it is not the style of movement, but the movement itself, that helps create a relationship between humans and robots. Contrastingly, for less modern robots, this jerkiness can be alienating, making them so obviously ‘different,’ or ‘other,’ when they are like us in so many other ways.
Freud’s The Uncanny and the Uncanny Valley
TIAGo was designed predominantly to deliver telepresence in foreign countries, using Virtual Reality to allow the user to “step into” his robotic presence: to see through his eyes, move with his body, and speak, with their own voice, through him. This, coupled with his human-like movement and features, is something some would find highly unsettling, a reaction described by Cyberselves MD Richard Waterstone:
“[F]rom Terminator movies and screaming headlines in the tabloids… [there is a] mistrust of robots and A.I. … People feel there is an intrinsic danger to robots doing that sort of work, or they will take their jobs, or kill us…”
But Why Do Some Find Robots So Discomforting?
Heavily based on the earlier theories of Ernst Jentsch, famed psychoanalyst Sigmund Freud wrote his 1919 essay ‘The Uncanny’ (Das Unheimliche). His work
‘repositioned the idea as the instance when something can be familiar and yet alien at the same time. He suggested that the ‘unheimlich’ (unhomely) was specifically in opposition to the ‘heimlich,’ which can mean homely and familiar but also secret and concealed or private. ‘Unheimlich’ therefore was not just unknown, but also, he argued, bringing out something that was hidden or repressed. He called it 'that class of frightening which leads back to what is known of old and long familiar.'
Freud’s complex theory relates to a very specific sense of what is frightening: a kind of fear with elements of obscurity and vagueness. It is not just generally frightening, and it is completely subjective, with a vast myriad of possible causes. Freud examined ‘the properties of persons, things, sensations, experiences and situations which arouse in us the feeling of uncanniness’, inferring ‘the unknown nature of the uncanny from what they all have in common’; most often, these are tied to experiences that are familiar, or homely, and unfamiliar, or unhomely, at the same time.
Robots, dolls, mannequins, and doppelgangers all fall into the category of the potentially uncanny, both because they are heimlich, homely and familiar as human, and because they are unheimlich, not human and other. They also commonly bring back memories from childhood, reviving illogical childish ideas such as the notion that toys are secretly alive with their own agendas, a notion that most of us as adults have long let go and forgotten. These ideas then re-emerge when we are reintroduced to humanoid objects, as described by Freud in his theory of the Uncanny.
Masahiro Mori’s theory of the Uncanny Valley expands on Freud’s earlier work and also discusses how human reactions to humanoid robotics can become uncanny. The Tokyo Institute of Technology robotics professor’s essay focussed on how he envisioned people's reactions to robots that looked and acted “human.”
‘In particular, he hypothesized that a person's response to a humanlike robot would abruptly shift from empathy to revulsion as it approached, but failed to attain, a lifelike appearance. This descent into eeriness is known as the uncanny valley.’
Industrial robot design is based purely on functionality and therefore, although such robots perform functions like those of human factory workers, their appearance does not matter. ‘Thus, given their lack of resemblance to human beings, in general, people hardly feel any affinity for them.’ They experience no feelings of revulsion, or the uncanny, while also feeling no positive associations. This can be seen on Mori’s graph below.
With humanoid robots, however, as their humanlike features increase, we respond to them with a heightened sense of affinity. Here, though, Mori diverges from Freud. Mori finds that the inclusion of movement can lend the robot, or a robotic object such as a prosthetic hand, a feeling of the uncanny, as observed with highly sophisticated robots designed for the 1970 World Exposition in Osaka, Japan.
‘For example, one robot had 29 pairs of artificial muscles in the face (the same number as a human being) to make it smile in a humanlike fashion. According to the designer, a smile is a dynamic sequence of facial deformations, and the speed of the deformations is crucial. When the speed is cut in half in an attempt to make the robot bring up a smile more slowly, instead of looking happy, its expression turns creepy. This shows how, because of a variation in movement, something that has come to appear very close to human—like a robot, puppet, or prosthetic hand—could easily tumble down into the uncanny valley.’
To get around this, Mori suggests building robotics that are like, but not exact copies of, humans, for example by putting eyeglasses on robot faces instead of mock eyeballs, or by making moveable hands from carved wood, so they retain the shape of a human hand without the element of falseness from skin-mimicking materials, which can be unsettling, or uncanny.
Are Robots akin to Animals?
Towards the end of her TED Talk, Dr Darling explored the idea that human-robot interaction (HRI) research is increasingly demonstrating that human and robot relationships are more like those between humans and animals, particularly pets.
‘Robots are in an interesting space where they’re these objects that move like agents, and that tricks our brain into projecting intent onto them and life onto them. So, we end up treating them more like animals than devices.’
This opens a whole new world of possibilities and suggests a way that the potential uncanniness of overly-humanoid robotics can be avoided.
This can be achieved by selecting facial features, and shapes for those features, that are viewed positively, excluding those that give humans a sense of unfriendliness, for example the inclusion of eyelids. Similarly, utilising smoother, more fluid movement can reduce the sense of the uncanny, according to Mori.
If future robotics uses what we know to be effective for facial features and movement, we can potentially remove the eerie, uncanny feeling that robots sometimes give, and with it, the barriers to increased robot application in our homes and workplaces. This will allow robots to ‘assume greater responsibility for work that’s dirty, dull or dangerous while simultaneously helping humans to sense, discover and build what would have been previously impossible.’
By utilising a less human, unheimlich outer casing for future robotics, while utilising what we know about creating faces from specific features, perhaps we can shape a world in which robots and humans can co-exist, and work better, together. Or as put by Alisa Kalegina: ‘Ultimately, for us, it comes down to thoughtful and conscious design choices. If we study what our impressions of faces are, we can create technology that fits more harmoniously into our lives.’
Without triggering uncanny feelings.
Baxter, the industrial robot, built by Rethink Robotics in 2011. Image from TechExplorist.com, ‘Baxter Robot: The Blue-Collar Robot,’ published by Amit Malewar, 24th March 2016, <https://www.techexplorist.com/baxter-robot-blue-collar-robot/2452/> accessed 06.12.2021
Quote and image from News.Northeastern.edu, ‘Nimble Robotic Arms That Perform Delicate Surgery May Be One Step Closer to Reality,’ <https://news.northeastern.edu/2021/07/23/is-it-possible-to-build-a-better-robotic-arm/> accessed 02.12.2021
Dr Kate Darling, ‘Why we have an emotional connection to robots,’ TED Talk, September 2018, <https://www.ted.com/talks/kate_darling_why_we_have_an_emotional_connection_to_robots?referrer=playlist-how_to_live_with_robots&language=en> accessed 06.12.2021
[17 & 18] iNews, ‘The New Breed: Why learning to treat robots like animals is good for humanity,’ published 19th April 2021, <https://inews.co.uk/news/technology/the-new-breed-dr-kate-darling-why-learning-treat-robots-like-animals-good-humanity-960388> accessed 06.12.2021