Richard Bright: Can we begin by you saying something about your background?
Keith Frankish: I trained as a philosopher of mind, but I think of myself as a cognitive scientist — as someone bringing his particular skills to the cross-disciplinary enterprise of understanding the human mind. Most of my research has been concerned in some way with the conscious mind — with conscious belief and reasoning on the one hand and with conscious experience on the other. I might summarise my views by saying that the conscious mind is less fundamental than we suppose. The conscious mind is very important, of course, and the source of many of our uniquely human abilities, but I see it as a fragile superstructure, shaped by culture and heavily dependent on nonconscious processes. The nonconscious mind is the engine room of cognition. It’s this perspective that I want to bring to thinking about AI. When we wonder what artificial minds might be like, we naturally think about artificial versions of our own conscious minds, but I think that distorts our view. Artificial intelligences may be very different.
RB: The February issue aims to explore what it means (and will mean) to be human in the age of Artificial Intelligence. It would be good to start with how we define intelligence, both in terms of human intelligence and our growing understanding of animal intelligence. It is obviously a complex, multi-faceted concept (and perhaps a difficult question to answer), but how would you define intelligence?
KF: I’m inclined to adopt a minimal definition of intelligence as a problem-solving capacity — a capacity to respond to stimuli in ways that further some purpose or task. Even plants have intelligence in this sense. They have been designed by natural selection to perform certain tasks, such as maintaining the levels of vital nutrients, and they respond to stimuli in ways that help them achieve these tasks, such as by moving their leaves to face the sun. Other intelligent systems perform more demanding tasks, such as navigating around an environment, or recognising faces, and they can learn from experience and adapt their responses accordingly. I think that our minds and the minds of other animals are largely composed of special-purpose intelligent systems like this, designed by natural selection to perform specific tasks that are important for survival. But of course when we speak of ‘intelligence’ we are usually thinking of something much broader — the sort of general cognitive capacity measured by IQ tests. Again, we can think of this as a problem-solving capacity, but this time a general, open-ended one, which can be applied to any task. (IQ tests have different components — reasoning, knowledge, processing speed, and so on — but these are themselves general abilities.)
Note that I’ve said nothing about the more subjective aspects of intelligence, such as self-awareness, emotion, and consciousness. That’s not because I think they are irrelevant. For example, emotional responses are important for evaluating courses of action and making wise decisions. But I see them as important because they make us better problem solvers, not as important in their own right. I don’t think we should assume that artificial or alien intelligences will have them, at least in the same form we do. They might find different ways of doing things. So I don’t want to build them into the definition of intelligence.
Another thing to note is that I’ve defined general intelligence in a way that makes it problematic. How on earth could evolution (or human engineers) create a mechanism that could solve any problem? In fact, I don’t think it did. There is no general-purpose reasoning system in the brain. Rather, evolution — both biological and cultural — found tricks for getting the special-purpose systems to work together in ways that approximate to general intelligence. This is what created the conscious mind — the fragile superstructure I referred to. I’ll say a bit about how I think this happened because it’s central to the way I think of intelligence. (I should note that my views on this are heavily indebted to the work of the philosophers Daniel Dennett and Peter Carruthers, among others. I particularly recommend Dennett’s book Consciousness Explained and Carruthers’s The Centred Mind.)
I think there were three key components to the trick: language, self-stimulation, and mental imagery, which developed separately. Language gives us a universal representational medium, which can combine outputs from different specialist systems. It has a structure which facilitates complex thought and logical inference, and it can represent abstract ideas and imaginary situations. By self-stimulation, I mean stimulating one’s own mental subsystems by creating sensory inputs that focus and direct their activities. In particular, we can stimulate ourselves with language, questioning ourselves, instructing ourselves, prompting ourselves, and so on. The third element is the capacity to produce mental imagery and to imagine ourselves performing actions. (The latter ability probably depends on mechanisms that evolved for the guidance of action.) This enables new kinds of self-stimulation. We can talk to ourselves in inner speech (producing auditory images of the utterances), conjure up images of sights and sounds and other stimuli, and try out actions in imagination before performing them.
Such imagistic self-stimulation creates what we call the stream of consciousness, and it forms a new level of mental activity, which Daniel Dennett describes as a soft-wired ‘virtual machine’ running on the hardware of the biological brain. This virtual mind enables us to tackle problems beyond the scope of our unaided biological brains. When we confront a problem to which our mental processes don’t deliver a spontaneous response, we don’t have to remain baffled. We can do something — start questioning ourselves: How could I tackle this? What would help? Is there another way of looking at it? What if I did this? And we can imagine relevant scenes, objects, conversations, actions, and suchlike. These self-stimulations may then generate a spontaneous response — more inner speech, other sensory imagery, or an emotional reaction — which reframes the problem or provides a partial solution to it, and which prompts another round of self-stimulation, and so on. In this way, by engaging in cycles of self-stimulation and response, we can work our way through problems that would otherwise be beyond us. It is important to emphasise that the process needn’t be pre-planned. We don’t need to know in advance precisely which self-stimulations will solve our original problem. (If we did, then we would in effect already have solved it.) Rather, it is a process of trial and error, and we may make many false starts and encounter many dead ends before we get to a solution. At the same time, however, it won’t be completely random. We may have picked up useful tricks and developed hunches about what will work, based on past experience. And, of course, we can draw on vast stores of culturally transmitted knowledge and know-how, thanks again to the wonderfully flexible representational medium provided by human language.
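The cycle Keith describes here, a self-stimulus prompting a spontaneous response which in turn prompts further self-stimulation, has the shape of a simple trial-and-error loop. A minimal sketch of that shape in Python (all names and "associations" below are invented for illustration; this is a caricature of the idea, not a model of the mind):

```python
import random

random.seed(0)  # make the trial-and-error run repeatable

# Hypothetical caricature: "System 1" is a fast, automatic responder, and
# "System 2" is nothing over and above the loop of self-stimulation and
# response. The prompts and responses are invented.
ASSOCIATIONS = {
    "How could I tackle this?": "break it into parts",
    "What would help?": "recall a similar problem",
    "Is there another way of looking at it?": "invert the question",
}

def system1(prompt):
    """Fast, automatic response: a lookup that sometimes misses (a dead end)."""
    return ASSOCIATIONS.get(prompt)

def system2_solve(prompts, max_cycles=10):
    """Cycle through self-stimulations until enough partial solutions accrue."""
    notes = []
    for _ in range(max_cycles):
        prompt = random.choice(prompts)   # not pre-planned: trial and error
        response = system1(prompt)
        if response is not None:
            notes.append(response)        # a reframing or partial solution
        if len(notes) >= 2:               # crude stand-in for "problem solved"
            return notes
    return notes

notes = system2_solve(list(ASSOCIATIONS) + ["What if I did this?"])
```

The essential point the sketch preserves is that the loop, not any single component, does the solving: some prompts yield nothing (false starts), and progress accumulates across cycles.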
I think this distinction between the biological mind and the virtual mind is crucial to understanding human psychology, and I have argued that it corresponds to the distinction drawn by ‘dual-system’ theories of reasoning, as advocated, for example, by Daniel Kahneman in his book Thinking, Fast and Slow. Dual-system theories claim that the human mind has two different reasoning systems: System 1 (actually a large suite of subsystems), whose operations are fast, automatic, and nonconscious, and System 2, which is slow, controlled, and conscious. System 1 corresponds to the biological mind, and System 2 to the virtual mind. (System 2 processes are conscious, since sensory imagery is processed like actual sensory inputs — we are aware of imaged sights and sounds in much the same way that we are aware of real ones.) And it is by installing this virtual System 2 in our heads (that is, by developing regular habits of self-stimulation) that we achieve something close to general intelligence. Of course, you can’t install a System 2 on just any brain. You need to have a suitably rich suite of biological subsystems in place first, including a language system, before you can get the trick to work.
RB: How does, and how can, machine intelligence enhance human intelligence?
KF: In two very different ways, I think. I just distinguished two levels of mentality – the biological mind, composed of specialist subsystems, and this virtual mind, formed by self-stimulatory activities. So the first thing to ask is which system we’re thinking of enhancing: the biological System 1 or the virtual System 2? It is obvious that the methods would be very different. Enhancing the biological mind would mean getting deep into the hardware of the brain, installing specialist systems which interface with the biological ones at a neural level. This might involve creating self-organising systems, which could be implanted early in life and grow alongside the biological ones, forming complex neural interfaces with them. Technology like this is probably some way off. But enhancing System 2 is a completely different matter. In fact, we’ve been doing it for millennia. Self-stimulation isn’t limited to forming sensory imagery. We can use artefacts to stimulate our System 1 thinking and help us break down complex problems into simpler chunks. Think of using a calculator to solve a complicated mathematical problem. Instead of trying to solve the problem directly, we can follow an indirect path through interaction with the device. At each step the calculator provides us with new stimuli, creating new, simpler subproblems: which keys to press first, how to interpret the answer the calculator displays, what entries to key in next, and so on. The solutions to these simpler problems will typically be provided by non-conscious, System 1 processes, and the solution to the whole problem will be the product of cycles of System 1 thinking and electronic processes, which together constitute an artificially enhanced System 2 process.
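The calculator loop just described can be put in code: an invented miniature calculator supplies new stimuli (its display state), while the "agent" contributes only the easy step-by-step keystroke decisions. A hypothetical sketch (the calculator model is made up for illustration):

```python
# Hypothetical sketch of an artefact-enhanced System 2 process: the device
# does the arithmetic; the agent only answers the simpler subproblem at
# each step, namely which key to press next.

def press(state, key):
    """Tiny calculator: state is (accumulator, pending_op, current_entry)."""
    acc, op, entry = state
    if key.isdigit():
        return (acc, op, entry * 10 + int(key))
    if key in "+*=":
        acc = entry if op is None else {"+": acc + entry, "*": acc * entry}[op]
        return (acc, None if key == "=" else key, 0)
    raise ValueError(f"unknown key: {key}")

# The agent's keystroke plan: each press is an easy System 1 decision, and
# the overall computation emerges from the agent-device cycle.
state = (0, None, 0)
for key in "12+34=":
    state = press(state, key)

result = state[0]  # → 46
```

Nothing in the agent's repertoire computes 12 + 34 directly; the answer is the product of the cycle of key presses and electronic processes, which is the point of the example.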
The thing to stress about System 2 enhancements is that they are simple to adopt. The artefacts involved interface with our brains through our bodies and sense organs. We press the keys of the calculator and look at its display. So it is easy to add new enhancements; it just requires some training in using and interpreting the device. (We might be able to make the devices more efficient by developing interfaces that bypass the external organs, detecting motor commands in the brain and sending signals directly to afferent sensory pathways, but these interventions would still be shallow from a neural perspective.) For thousands of years, we humans have been enhancing our System 2 thinking with artefacts, from writing instruments and abacuses through to iPhones and smart glasses, and I think this sort of enhancement will accelerate rapidly in coming decades and that our virtual minds will become heavily dependent on external support.
RB: Do you think that, following the work being done with Evolutionary AI (evolutionary algorithms), our views on what intelligence is might change?
KF: Yes, I think so. We tend to take a top-down, anthropocentric view of intelligence. We naturally focus on our conscious, System 2 thinking – the serial, effortful, logical activity that Sherlock Holmes was so good at. From that perspective, of course, it is preposterous to claim that a plant has intelligence. But I’ve suggested that this System 2 thinking is a special kind of human activity, and that it is supported by a vast suite of fast, automatic intelligent systems (many shared with other animals), which evolved to deal with specific survival problems. And it’s not absurd to compare these special-purpose systems with those that regulate a plant’s metabolism. Now, Evolutionary AI takes its inspiration from these biological systems, and so it provides a corrective to that anthropocentric picture we have. At the same time, it is also changing our ideas of how artificial intelligence can be created. Early AI systems were intelligently designed by human engineers, who built dedicated hardware and programmed it with explicit instructions, imitating the methods of logical, System 2 thinking. But Evolutionary AI takes a very different approach. It relies on the trial-and-error methods of natural selection to discover structures and algorithms that best meet particular environmental demands. So, again, it is replacing our top-down picture of intelligence with a more biologically-based bottom-up one. I think that’s salutary.
RB: Your website has the intriguing title Tricks of the Mind. How would you characterize your view on consciousness?
KF: The title is intended to capture two ideas. The first is one I’ve already mentioned: that our conscious minds are the product of various tricks or contrivances, which transformed the powers of our biological brains. The second is that some features of our minds, as we ordinarily conceive of them, are illusory (tricks in the magic show sense). I think this is the case with what philosophers call phenomenal consciousness. When we try to look inwards (‘introspect’) and examine our own experiences, we tend to judge that they have an intrinsic quality to them, a subjective feel or ‘what-it’s-likeness’, which can’t be detected from the outside or described in physical terms. I think this is a sort of illusion, produced by our introspective mechanisms. Introspection misrepresents our experiences as having intrinsic qualities that they don’t really have. Experiences are really just brain processes, vast swathes of neural activity. Georges Rey draws an analogy with a child who thinks the creatures in a cartoon film are real. The child interprets a series of still images as living, moving beings. Similarly, we interpret introspected patterns of brain activity as simple intrinsic qualities. I call this view illusionism.
This isn’t to say that I deny the existence of consciousness. I just don’t conceptualise it in the phenomenal way. For an experience to be conscious, I say, is (roughly) for the information it carries to be widely available to other neural systems – for memory, reasoning, speech, emotion, and so on. So a conscious experience is one you can act on, report, reflect on, and so on. (This is sometimes called access consciousness, as opposed to phenomenal consciousness.) When I spoke of System 2 processes being conscious it was in this sense that I meant it.
RB: A super-intelligent AI could register information and solve problems at a level that would far exceed even the brightest human minds, but being made of a different substrate (e.g., silicon), would it have conscious experience?
KF: If we’re talking about phenomenal consciousness, then of course I say no, since I’m an illusionist about it. I don’t think anything has phenomenal consciousness, including us. But that’s a good question for anyone who does believe in phenomenal consciousness. They’d have to say, I think, that it’s an impenetrable mystery. We might hypothesise that phenomenal consciousness arises only in brains with similar biological properties to ours, in which case silicon-based AIs wouldn’t have it. Or we might speculate that it arises when certain information processing activities occur, in which case suitably programmed AIs would have it. How could we tell? Phenomenal consciousness is essentially private. Only the AI itself would know. (You might think we could find out by asking the AI, but that wouldn’t resolve the matter. The AI might think it was phenomenally conscious and sincerely report that it was, without actually being so.) This is one reason (not the only one) for being suspicious of the notion of phenomenal consciousness. However, a silicon AI might have the illusion of phenomenal consciousness, just as we do. I assume that the illusion can be explained in representational terms — in terms of how our experiences are represented to us in introspection — and similar representations could be implemented in other kinds of hardware.
RB: Is consciousness the key to Artificial Intelligence?
KF: Not in the phenomenal sense, for obvious reasons. The key, I think, is something different. As I suggested, I don’t think it’s feasible to try to design general intelligence as such. Rather, we’ll need to design — or evolve — specialist intelligences and then get them to work together in a way that approximates to general intelligence. Access consciousness will be a prerequisite for this, but it won’t be sufficient. It isn’t enough to make the systems share information between themselves; we need to get them to cooperate in pursuit of strategic goals. We might try to build an executive system that would organise the activities of the specialist systems, but designing such a system might pose as great a challenge as designing a general-purpose reasoner itself. It would be simpler to find a trick like the one we use. In fact, we might be able to use the very same trick: Equip our AI with a language system and a capacity for sensory imagery, and then encourage it to self-stimulate and develop a virtual, System 2 mind for itself. We might train it in self-stimulation, as we train children, by giving prompts that help it break down a complex problem: What do you think would help? What do you need to know? Could you look at it differently? What if you did this? (Our interactions with AIs may be much like those with precocious children.)
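The prompting regime imagined here resembles a simple scaffolding loop. A hypothetical sketch (the prompt list echoes the questions above; the model stub and all names are invented for illustration):

```python
# Hypothetical sketch of "training an AI in self-stimulation": canned
# scaffolding prompts are posed in turn until one elicits an answer.
# `ask_model` is any callable the reader supplies; the stub is invented.

PROMPTS = [
    "What do you think would help?",
    "What do you need to know?",
    "Could you look at it differently?",
    "What if you did this?",
]

def tutor_loop(ask_model, problem):
    """Pose scaffolding prompts in turn, stopping once an answer emerges."""
    transcript = []
    for prompt in PROMPTS:
        reply = ask_model(problem, prompt)
        transcript.append((prompt, reply))
        if reply.startswith("ANSWER:"):
            break
    return transcript

# A stand-in "model" that only responds to one kind of prompt.
def stub_model(problem, prompt):
    if "differently" in prompt:
        return "ANSWER: reframe it as a smaller problem"
    return "no spontaneous response"

transcript = tutor_loop(stub_model, "a hard problem")
```

The hope expressed in the interview is that, after enough externally supplied scaffolding of this kind, the system would internalise the prompts and run the loop on itself.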
It may be, then, that AIs will form System 2 minds for themselves. However, these minds are unlikely to resemble ours. Since System 2 thinking is the product of self-stimulation, its form is determined by the thinker’s language system, sensory apparatus, and working memory (which holds sensory imagery). AIs with more complex language systems, more powerful sensory apparatus, and larger working memories might have much richer, multi-dimensional streams – we might say seas – of consciousness.
RB: Given what we know about our own minds, can we expect to intentionally create artificial consciousness? Or will AI become conscious by itself?
KF: Of course, I don’t think we can create artificial phenomenal consciousness. But there is an interesting variant of the question: Can we expect to create the illusion of phenomenal consciousness in an AI? Or might it arise naturally as a side effect of other processes? As I said, I don’t see any reason why we couldn’t create the illusion artificially. But to understand exactly how to create it, we first need to understand how it is created in us. It might just be a consequence of the limitations of introspection, as I suggested earlier. We judge that our experiences have intrinsic, nonphysical properties because introspection represents them to us in a simplified, schematic way. (Recall Rey’s analogy with a cartoon; the illusion of movement occurs because the human visual system cannot register the individual frames.) If that’s so, then an AI with similarly limited introspective capacities would judge that it had phenomenal consciousness too — and perhaps start to puzzle over it, as we do. (Somewhat paradoxically, if we equipped it with superior introspective abilities, which represented its internal states more accurately, then it wouldn’t experience the illusion and would deny that it was phenomenally conscious!) So on this view, the illusion is a nonfunctional side effect of introspection. Another view, which has been eloquently defended by Nicholas Humphrey in his book Soul Dust, is that the illusion is an adaptation. Humphrey argues that our sense of being phenomenally conscious vastly enhances our lives and that certain brain processes have evolved specifically to produce it. Humphrey provides a sketch of these processes, which could serve as a basic blueprint for producing the illusion artificially. I think this is a very exciting area to explore.
RB: Does consciousness require embodiment?
KF: In one sense at least. I think that only our sensory states are conscious — sensory representations of features of the world and of our own bodies. (I’m using ‘conscious’ here in the access sense, of course.) For us, conscious experience is of a world independent of us, which we explore via sensory interfaces — a world that is felt, seen, heard, tasted, smelt, and so on. Our thoughts are conscious only if they have a sensory vehicle of some kind, such as an image of a heard sentence. I can’t imagine existing as a pure intellect, which just thinks in a completely nonsensory way. So that’s a reason for thinking that a body with sensory apparatus is needed in order to have consciousness that’s anything like ours. However, perhaps it needn’t be a physical body. Perhaps an artificial intelligence could have a simulated body, with which it explores a simulated world.
RB: How far should we take AI?
KF: The question assumes that we will have control of the process, and I’m not sure that we will. It’s not that I think AIs will rise up and enslave us. That’s a fantasy. (At least, we don’t need to worry about it till we have created complex artificial life, which can support itself and reproduce without human assistance. Until there are lots of dumb artificial creatures living around us, we don’t need to worry about smart ones taking over.) But there’s another way in which we may lose control of the process, and I think it’s a much more pressing worry. Earlier, I stressed how easy it is to artificially enhance our System 2 minds. We can simply plug in cognitive aids via sensory interfaces. We’re already doing this with things like smartphones and smart glasses, and as the devices get more powerful and their interfaces more streamlined, we’re going to do it more and more. I suspect we’re going to see a smart revolution, in which we offload cognitive drudgery onto electronics in much the same way that previous generations offloaded manual labour onto household appliances. Why should a lawyer spend years studying case law if they can buy a tiny earpiece that will instantly retrieve contextually relevant data as needed and feed it to them? We’ll also use smart technology to enrich our lives and enable us to do wonderful new things. We’ll be able to share experiences through video, audio, and tactile feeds, and develop new ways of working, socialising, and loving. We’ll be able to collaborate on vast new projects, working with people all around the world in real time. Our conscious minds will be radically enhanced, extended, and enriched.
It sounds marvellous, but the dangers are obvious. We’ll be hugely dependent on the technology, and our conscious minds will become even more fragile. A solar flare might knock out the devices we depend on and leave us cognitively disabled. We’ll have to trust the information we’re fed and won’t have the resources to assess its quality. And it will be easy for anyone with control over the technology to manipulate us. (We’re already seeing something like this with social media bots being used to manipulate opinion during elections. Imagine relying on similar bots to guide your work, your social relations, your personal life, your very thinking.) In short, we may be enslaved, not by artificial minds, but by our own enhanced minds. It’s not the master AIs we have to worry about but the servant ones. I don’t know what we can do about this. As I said, this enhancement is just a continuation of a process that’s been going on for thousands of years. It’s very natural for us, and I’m not sure we can stop it now. Those who resist the technology will be left behind. That’s a pessimistic note on which to end, but I think it’s one of the most pressing issues facing us.