Cyborgs and I

Professor Kevin Warwick’s main research areas are artificial intelligence, biomedical systems, robotics and cyborgs. Because of his research as a self-experimenter he is frequently referred to as the world’s first cyborg. His experiments with implant technology led to him being featured as the cover story of the US magazine Wired. He achieved the world’s first direct electronic communication between two human nervous systems, the basis for thought communication. Another project extended human sensory input to include ultrasonics. He also linked his nervous system to the internet in order to control a robot hand directly from his neural signals, across the Atlantic Ocean. In this exclusive interview he discusses his ideas and work on AI, robotics and the future of humans ‘plugging’ into technology.

Kevin Warwick

Richard Bright: The February issue aims to explore what it means (and will mean) to be human in the age of Artificial Intelligence. It would be good to start with how we define intelligence, both in terms of human intelligence and our growing understanding of animal intelligence. It is obviously a complex, multi-faceted entity (and maybe difficult to answer?), but how would you define intelligence?

Kevin Warwick: I was a bit annoyed hearing people like Richard Dawkins and others saying that there has only ever been one intelligence, that human intelligence is something very special and different, that consciousness is something that’s human, and so on. I feel this is not really giving non-human animals much credibility at all; I felt it was pretty nasty. The more you study other animals, the more you see that they are intelligent, but in a very different way. It depends on how they sense the world and on how they live in the world. So, if you’re looking for a definition of intelligence in general, then it’s something like the mental processes that are needed in that particular creature. This could be very simple: in the case of a sea slug, just a few brain cells enable it to do its decision making, even if those decisions are fairly trivial. When you then apply it to artificial intelligence, machines are intelligent now. Artificial intelligence exists and there is no reason why machines can’t be conscious in their own way. Funnily enough, Alan Turing said similar things many years ago, that machines are intelligent but in a different way to humans. In a way, I’m not looking at it in any new way, but it is different to what quite a few philosophers and people like Richard Dawkins are saying, that intelligence is only a human thing. Then you get some people saying that some humans are intelligent while other humans are not, which would be ridiculous and could lead to worrying consequences.

RB: How does, and how can, machine intelligence enhance human intelligence? Also, following the work being done with Evolutionary AI do you think our views on what is intelligence might change?

KW: When you look at AI and machines and technology it really opens up your thoughts on what intelligence is all about. Machines can sense the world in a whole variety of ways. Machines can communicate in all sorts of different ways when you look at all the basics of intelligence. The human brain is a typical size; we all have similar-sized brains. Through natural evolution brains may gradually get bigger, with more connections, and there may well be a linearity in that in the biological sense. But we do have the opportunity of linking with technology and plugging it into the brain. There are currently lots of different areas of research looking at aspects of that, so the possibility of combining artificial intelligence and human intelligence is a real one. When you look positively at enhancement it really opens up all sorts of different possibilities. For me, communication is an enormous one. When we look at technology seriously and at how we communicate as humans now, it’s quite embarrassing that we’re still communicating like this, using mechanical pressure waves to communicate with each other, which are not really much connected with the thoughts, feelings, images and emotions that are going on inside our brains. So, I do think that the way we communicate at this moment is pretty pathetic compared to the way it could be if we link with AI.

Then you get unknowns. Machines are used quite often now to deal with multi-dimensional information, because they can make all the links that the human brain cannot; the human brain is stuck in three dimensions at best. The possibility, therefore, of linking the human brain with a machine that can deal with hundreds of dimensions of information is enormous; where that could go is difficult to comprehend. One thing that disappoints me, from a scientific point of view, is that we’re stuck on Earth: people have travelled to the Moon and that’s as far as we’ve gone. Obviously there are major problems with us going any further, due to the distances involved and the time that it takes. If we can start understanding things in more dimensions then maybe, just maybe, we could do it in a different way because we’re actually thinking in more dimensions. So, linking with AI, as well as providing some obvious wins such as communication and sensing, will provide us with some big unknowns which really turn everything on its head, such as multi-dimensional thinking.

RB: So, you’re not talking about just ‘using’ technology, you’re talking about linking with technology?

KW: Absolutely.

RB: This leads me on to my next question, which involves consciousness. Does consciousness require embodiment?

KW: That’s a very good question. If you take a look at what philosophers such as John Searle and Roger Penrose say, they don’t really talk about embodiment, but when they start talking about consciousness they’re generally talking about human consciousness. In my research we have made some little robots which have biological brains, so we’re growing and culturing the brains separately. Typically they only have 150,000 brain cells, but there is still a lot of connectivity. They can do simple things like avoiding obstacles, but they’ve all got their own characteristics. In a way you feel sorry for them, because they have very simple senses and they just move around in a little corral. After about three months they get a sort of premature dementia, which may well come from the fact that they’ve not really got much interaction with the world. Then you think, well, if all I could do was have some ultrasonic sense and no one communicated with me, and that was my only view of the world, it would be awful no matter how big my brain was. So, if we can give those creatures more inputs and outputs, and give them more to do, then maybe they would become conscious in a limited way. So, to answer your question, I think perhaps consciousness does require some form of embodiment, and it becomes part and parcel of it: you need some inputs and outputs to have some effect on the world. Otherwise it’s a ‘brain in a bottle’. So, it does need embodiment, but as for what the ‘body’ is, there are all sorts of possibilities.
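
The closed loop described above, a cultured neuronal network receiving ultrasonic input and driving a small robot, can be sketched roughly as follows. This is an illustrative simulation only, not the code used in those experiments; the class names, electrode numbering and parameter values are all assumptions made for the example.

```python
# Illustrative simulation of a culture-driven obstacle-avoiding robot.
# All names and numbers here are invented for the sketch.

import random
import time

class SimulatedCulture:
    """Stand-in for a multi-electrode array interface to a cultured network."""

    def __init__(self) -> None:
        self._last_input_hz = 0.0

    def stimulate(self, electrode: int, pulse_rate_hz: float) -> None:
        # In a real rig this would deliver electrical pulses to one electrode.
        self._last_input_hz = pulse_rate_hz

    def read_firing_rates(self) -> dict[int, float]:
        # Spikes/sec on two 'motor' electrodes: noisy echoes of the input,
        # standing in for the culture's own dynamics.
        return {1: self._last_input_hz * random.uniform(0.5, 1.5),
                2: self._last_input_hz * random.uniform(0.5, 1.5)}

def distance_to_pulse_rate(distance_m: float, max_rate_hz: float = 50.0) -> float:
    """Closer obstacles give faster stimulation pulses (an assumed encoding)."""
    return min(max_rate_hz, max_rate_hz * 0.5 / max(distance_m, 0.05))

def control_step(culture: SimulatedCulture, sonar_distance_m: float) -> tuple[float, float]:
    # 1. Encode the ultrasonic reading as stimulation on an input electrode.
    culture.stimulate(electrode=0, pulse_rate_hz=distance_to_pulse_rate(sonar_distance_m))
    time.sleep(0.05)  # allow the culture time to respond
    # 2. Decode activity on two output electrodes into differential wheel speeds,
    #    so an imbalance in firing steers the robot away from the obstacle.
    rates = culture.read_firing_rates()
    turn = (rates[2] - rates[1]) * 0.01
    forward = 0.2 if sonar_distance_m > 0.3 else 0.0  # stop pushing forward when close
    return forward - turn, forward + turn  # (left wheel, right wheel) speeds

if __name__ == "__main__":
    culture = SimulatedCulture()
    for d in (1.0, 0.5, 0.2):
        print(f"{d} m -> wheel speeds {control_step(culture, d)}")
```

Even in this toy version, the culture only ‘experiences’ whatever the sensor encoding and the decoder expose to it, which is the point made above about the cultures having very limited inputs and outputs.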

RB: Do you think consciousness is the key to artificial intelligence? Whether we assign it to it or whether it is actually a conscious ‘being’.

KW: Yes, I would say so, but we would have to be open-minded that it could take on all sorts of different forms. I think we have to get away from what human consciousness is. Someone like John Searle would say that if we take human neurons and put enough of them together then consciousness emerges, which again is looking at how wonderful humans are and how terrible everything else is, including machines. I can see why John and others use the argument, because it says humans are conscious and machines are not, which makes us somehow better than they are, and it satisfies the argument. But, in the process, it really does not provide much respect whatsoever for other creatures, including chimpanzees, which are not too far from ourselves. It also doesn’t respect machines, if they deserve any respect, that is.

RB: What is your view on Whole Brain Emulation?

KW: It is very difficult, given what we know scientifically at the moment, to actually copy a human brain and put it in a machine brain, bit for bit, process for process, cycle for cycle and so on. What is much easier is to open a port into the brain to plug into and connect to computers. Expanding the processing of the brain by plugging into it has effectively been done to some extent. That’s not to say that we can change memories and things like that.

RB: Whole Brain Emulation is more of a ‘thought experiment’ at the moment, giving rise to a whole set of problems around consciousness, identity etc.

KW: Exactly.

RB: Our brain power has an evolutionary limitation, including a limitation on our visual system, which can only see one small part of the electromagnetic spectrum; that has been a necessary survival property for thousands of years. Would we be able to cope with any enhancement of that?

KW: I think that the visual input is probably the best thing we’ve got but, as you say, it’s still a very limited frequency range when we look at all the signals out there. I know from my own experiments, where I dabbled with ultrasonics by having implants, that it was no problem. It took several weeks to train my brain to recognise pulses that were linked to ultrasonic sensors, but once we had done it, it was as if I’d always had this thing; my brain was fine with it. It wasn’t that I could sense, “oh, that’s exactly 2.6 metres away” or whatever it happened to be; it was that I could sense that something was there, and if it moved slightly towards me or slightly further away I could detect very small movements with high accuracy. That is pretty cool!

I think, bit by bit, as long as your brain can comprehend and make some sense of the signals, we can expand things to other frequencies like infrared, which I don’t think would be a problem at all. Even frequencies like X-rays we could probably manage. But we would have to put some kind of understanding on it. How far we can go with it is difficult to know; ultraviolet, for example, we may not be able to expand to, or it may not be simple to expand to.
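
As a rough illustration of the kind of mapping described here (not the firmware from the actual implant work), a single function can turn an out-of-range sensor reading into a stimulation pulse train whose spacing the brain could learn to interpret; the parameter values are invented for the example.

```python
# Toy sensory-substitution mapping: a sensor reading outside normal human range
# becomes the gap between stimulation pulses. Values are invented for the example.

def reading_to_pulse_interval(reading: float, full_scale: float,
                              min_interval_s: float = 0.02,
                              max_interval_s: float = 0.5) -> float:
    """Stronger (or nearer) readings give shorter gaps between pulses.

    Absolute values are not what the wearer learns; what matters is that small
    relative changes in the reading produce consistent, perceptible changes in rate.
    """
    level = min(max(reading / full_scale, 0.0), 1.0)  # clamp to [0, 1]
    return max_interval_s - level * (max_interval_s - min_interval_s)

# Example: an object at 2.6 m versus 2.5 m on a 5 m ultrasonic sensor gives a
# small but consistent shift in pulse spacing, the sort of change described above.
for distance_m in (2.6, 2.5):
    closeness = 5.0 - distance_m  # nearer objects -> larger reading
    print(distance_m, round(reading_to_pulse_interval(closeness, 5.0), 3))
```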

RB: The brain is very plastic and adaptive.

KW: And it’s mercenary as well. If it likes something and it enjoys something it will go for it, and if it doesn’t, it’s difficult to force it to. Brain cells like to communicate. If I plugged an implant into your brain and linked it to an implant in my brain and we started sending signals, I think our brain cells would go over to it big time. Some of them might stop doing other things that they are doing now, because they like to communicate and here’s a whole new way of communicating. Our brains may even biologically expand because of that.

RB: Do you think AI can be creative?
