Richard Bright: Can we begin by you saying something about your background.
Shimon Whiteson: I’m an Associate Professor in the Department of Computer Science at the University of Oxford. I do research on artificial intelligence, with a focus on machine learning, with applications in robotics, sensor systems, and web systems.
RB: What is the TERESA project?
SW: TERESA is an EU-funded research project that I am coordinating in which six European institutions collaborate to build a semi-autonomous telepresence robot. A telepresence robot, sometimes called “Skype on a stick”, is a mobile robot with a video screen, camera, microphone, and speakers that allows a remotely located person, the “visitor”, to interact with people in the robot’s vicinity.
Unlike Skype, telepresence allows the visitor to move around and have spontaneous social interactions, e.g., to mingle at a cocktail party. However, the system places a lot of cognitive load on the visitor, who must control the robot and ensure it moves in a socially appropriate way. This requires doing manually a lot of things that would happen automatically if the visitor were physically present.
TERESA is a semi-autonomous telepresence system that alleviates this cognitive load by automatically navigating towards the people with whom the visitor wants to speak, and supporting the conversation with socially appropriate body language and positioning. This frees the visitor to focus on the conversation, improving the quality of the social interaction.
RB: How is TERESA made to be semi-autonomous and socially intelligent?
SW: Primarily using a machine learning technique called Learning from Demonstration. We had humans manually control the robot to demonstrate socially normative behaviour and then we used machine learning to synthesise control systems that behave in a way that is consistent with those demonstrations, and also generalise beyond them.
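[The idea of synthesising a controller from human demonstrations can be illustrated with a deliberately simplified sketch. The states, actions, and nearest-neighbour policy below are hypothetical and are not TERESA's actual implementation; they only show the basic mechanic of generalising from recorded (state, action) pairs.]

```python
import math

def nearest_neighbour_policy(demonstrations):
    """Build a simple Learning-from-Demonstration policy.

    demonstrations: list of (state, action) pairs recorded while a
    human teleoperates the robot. state is a tuple of floats,
    action is any label (e.g. a velocity command).
    The learned policy generalises by returning the action of the
    closest demonstrated state.
    """
    def policy(state):
        best_action, best_dist = None, float("inf")
        for demo_state, demo_action in demonstrations:
            dist = math.dist(state, demo_state)  # Euclidean distance
            if dist < best_dist:
                best_dist, best_action = dist, demo_action
        return best_action
    return policy

# Toy demonstrations: (distance_to_person_m, bearing_rad) -> command
demos = [
    ((2.0, 0.0), "approach"),
    ((0.8, 0.0), "stop"),
    ((2.0, 1.0), "turn_left"),
]

policy = nearest_neighbour_policy(demos)
print(policy((1.9, 0.1)))  # a state never demonstrated exactly
```

In practice, the learned controller would be a richer function approximator trained on many demonstrations, but the principle is the same: behaviour consistent with the demonstrations that also generalises beyond them.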
RB: How have users reacted to interacting with the robot?
SW: The reaction has been overwhelmingly positive. Our target group is elderly people and all our experiments are done at an elderly day centre in France. I was worried that the subjects would find the robot confusing or even scary but, on the contrary, they found it fun and exciting and were extremely keen to use it and suggest new (often impractical) features.
RB: What new insights have been revealed by the TERESA project into socially normative robot behaviour?
SW: As with many things in artificial intelligence, what seems simple often turns out to be devilishly difficult. Quantifying socially normative behaviour is difficult because it depends so much on social context: the right way to behave depends on what others are doing and saying, and robots still have quite a limited ability to perceive such context.
That said, it was surprising how easy it was to give the impression that the robot’s behaviour was at least modestly intelligent. A feature that my postdoc coded in a few hours, which enables the robot to adjust the height of its head to that of the person to whom the visitor is talking, already went 80% of the way towards achieving the impression of natural, human-like behaviour that we sought. Achieving the remaining 20% was then a massive amount of work.
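[The head-height feature described above can be sketched as a simple proportional controller. All names, limits, and the gain below are illustrative assumptions, not details of TERESA's code.]

```python
def head_height_command(person_head_height_m, current_height_m,
                        min_h=1.0, max_h=1.6, gain=0.5):
    """Step the robot's head toward the interlocutor's head height.

    Moves a fraction (gain) of the remaining error each control tick,
    with the target clamped to the actuator's travel range.
    All parameters here are illustrative.
    """
    target = min(max(person_head_height_m, min_h), max_h)
    return current_height_m + gain * (target - current_height_m)
```

Called repeatedly in a control loop, this converges smoothly on the person's head height rather than jumping there in one step, which is part of what makes the motion read as natural.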
RB: What do you see as the challenges and opportunities for multi-agent learning?
SW: With respect to robotics in particular, we are very limited by hardware. TERESA is loaded with expensive, heavy sensors and computers and still has nowhere near the perceptual capabilities it would need to replicate human-level social intelligence. The algorithms are there, but it may be decades before the hardware catches up.
RB: Do you think AI can be creative?
SW: Yes, and it often is. Creativity is fundamentally a search process, and searching is something that computers do quite well. There’s a long history of using AI to generate music, art, etc., either in isolation or together with humans. It’s amazing what deep learning is making possible in this respect. See, e.g., the NIPS deep learning art competition: https://deepart.io/nips/submissions/
RB: In the future, do you think there will be a blurring of boundaries between Man and Machine?
SW: I think that boundary is already quite blurry. The introduction of the smartphone was a critical milestone in this regard. Already many people feel they cannot function without their phones. It probably won’t be too long before we start seeing cyborgs, e.g., people with chips implanted in their brains, but at some level, that step will be less significant than the one we’ve already taken, as the human and the computer are already functioning as a single tightly coupled cognitive unit.
RB: How far should we take AI?
SW: There may be fundamental limits to how intelligent a machine we can build, but I don’t think we should artificially stop short of those limits. However, there are many applications of such intelligence that we should avoid or at least restrict. We are already seeing how AI can be used unethically, e.g., by governments to aid mass surveillance, and I think the ethical temptations will only become more abundant as AI continues to advance.