Creating Artificial Consciousness

Ryota Kanai PhD is a neuroscientist working on the computational principles underlying consciousness and the brain, and the founder and CEO of an AI startup, Araya, Inc. in Tokyo. His goal is to create artificial consciousness using intrinsic motivation, deep neural networks, and integrated information, while taking inspiration from neuroscience. He formerly led a cognitive neuroscience lab at the Sackler Centre for Consciousness Science at the University of Sussex. In this exclusive interview he discusses his ideas and work in trying to understand consciousness by creating it.


Richard Bright: Can we begin by you saying something about your background?

Ryota Kanai: I would say my background is science. I worked on whatever seemed important for understanding consciousness. Before I founded Araya, Inc. a few years ago in Japan, I was studying cognitive neuroscience to understand the neural mechanisms underlying consciousness. Cognitive neuroscience is an interdisciplinary field that aims to understand the human brain by combining neuroscience (e.g. neuroimaging and computational neuroscience) with more traditional fields such as philosophy and psychology.

I was particularly interested in how the visual system works. The visual system is a relatively well understood part of the brain, and I hoped to uncover how the subjective experience of seeing (e.g. the redness of red) is linked to neuronal activity and anatomy in the visual brain.

After a while I felt neuroscience alone was not sufficient. Neuroscience is mostly about proving ideas with empirical observations, which is important, but I also thought what we really need is wilder ideas about consciousness. So I switched to a different approach, called the constructivist approach, where the aim is to understand consciousness by trying to create artificial consciousness. This gave me an opportunity to think about possible functions of consciousness with more imagination and to synthesise ideas from various fields, not limited to neuroscience. At Araya, our research team is trying to understand consciousness by creating it. Artificial consciousness will be particularly illuminating for designing Artificial General Intelligence (AGI).

RB: The February issue aims to explore what it means (and will mean) to be human in the age of Artificial Intelligence. It would be good to start with how we define intelligence, both in terms of human intelligence and our growing understanding of animal intelligence. It is obviously a complex, multi-faceted entity (and maybe difficult to pin down?), but how would you define intelligence?

RK: A simple definition of intelligence is the ability to solve problems. If an AI can solve a variety of tasks, including more complex ones, it is considered to be intelligent. But intelligence is not a single function; it is a combination of many constituent functions. So we need to build each of those.

Instead of giving a list of all the necessary functions, I would like to emphasise the importance of imagination in intelligence. By imagination, I mean the ability to represent novel situations so that we can interact with fictional scenarios and learn from them.

To be able to have imagination, we need to construct a model of the environment and the self from past experiences, and then apply the learned structure to new situations. Once we have the ability to imagine new situations, we can learn from such fictional situations. This is a great advantage, and it endows an agent with the flexibility to act adaptively in novel situations.

RB: How does, and how can, machine intelligence enhance human intelligence? And do you think that, following the work being done with Evolutionary AI, our views on what is intelligence might change?

RK: Machine intelligence definitely augments human intelligence. This is already happening with smartphones and computers. We rely on Google Maps to navigate, and machine translation is already quite useful. These are AI technologies. We may not notice them, but we use them all the time to enhance our capability in everyday life.

The point about evolutionary AI is interesting. We are currently limited by our own ideas of what intelligence is, which come from observations of human behaviour and sometimes other animals. But theoretically, there could be other forms of intelligence we are yet to discover. Evolutionary AI is one way to create such intelligence, because we can’t design the kind of intelligence we don’t have a concept for. Open-ended evolution could potentially generate new kinds of intelligence.

We often think about intelligence as a quantity, such as processing speed or the complexity of the problems it can solve. But there might be a few additional, qualitatively different steps beyond human intelligence. This is an interesting possibility to think about.

I tend to think there is something beyond human-level intelligence, but current AI research is still far from creating human-level AI. We still haven’t succeeded in producing human-like intelligence, and we need to figure out how we can build a machine that understands human language or makes scientific discoveries in physics or neuroscience.

RB: How would you characterize your view on consciousness?

RK: There are two aspects to consciousness. One is the function of consciousness. What is consciousness good for? What advantage did conscious creatures have for survival in the course of evolution? The other aspect is the internal experience accompanying computations in the brain. This subjective aspect is called phenomenal consciousness, and it is what makes consciousness a fascinating topic, and also extremely hard for science. Phenomenal consciousness is often considered to have no functional role; on that view, it is just an epiphenomenon.

As for the functional aspect (aka access consciousness), my current hypothesis is that the ability to generate representations detached from the current environment is the key function of consciousness. This ability corresponds to the imagination I mentioned earlier in the context of intelligence. Once we have the ability to internally generate representations of novel or fictional situations, we can use the simulated future to make sensible decisions. For example, I can mentally simulate how I would walk to the nearest station, or how a pile of books would collapse if I were to remove one of the books from the pile. This sort of imagination is extremely useful when we need to deal with new situations.

Consciousness is often associated with non-reflexive behaviour. Neuroscience has shown that sophisticated reflexes can be triggered without consciousness. So it seems we do not need consciousness if our interactions with the environment happen in real time in response to sensory inputs. On the other hand, taking an action based upon internally generated images requires conscious intention.

To be able to internally generate fictional representations, we need to have so-called generative models of the external world and the self. For example, when we mentally imagine how an object would fall due to gravity, we are applying our internal model of how things work in the environment. It’s like a physics simulator where the laws of physics are extracted from our previous interactions with the environment.
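[As an editorial illustration of this idea (a minimal sketch, not Araya’s actual system; the toy dynamics and all names here are assumptions), the following Python snippet fits a forward model to transitions experienced in a toy falling-object environment, then rolls it forward from a novel initial state, producing an “imagined” trajectory without any further interaction with the environment.]

```python
# Illustrative sketch only: learn a forward model of toy "physics"
# (a falling object), then use it to imagine trajectories offline.
import numpy as np

DT, G = 0.1, 9.8

def real_step(state):
    """The real environment: state = [height, velocity] under gravity."""
    h, v = state
    return np.array([h + DT * v, v - DT * G])

# Gather experience by interacting with the environment.
states = [np.array([10.0, 0.0])]
for _ in range(50):
    states.append(real_step(states[-1]))

# Fit an affine forward model: [s, 1] @ W ~= next state (least squares).
X = np.hstack([np.stack(states[:-1]), np.ones((50, 1))])
Y = np.stack(states[1:])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def imagine_step(state):
    """One step of 'imagination': the learned model, no environment."""
    return np.append(state, 1.0) @ W

# Roll the model forward from a situation never actually experienced.
s = np.array([5.0, 2.0])
for t in range(5):
    s = imagine_step(s)
    print(f"t={t + 1}: imagined height={s[0]:.2f}, velocity={s[1]:.2f}")
```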

The subjective aspect of consciousness is hard. My view is that the phenomenal experience of consciousness, or qualia such as the redness of red or the painfulness of pain, is essentially what information feels like as seen from the inside. Every experience we have is a form of information, which I believe is a physical phenomenon. There should be a one-to-one mapping (called an isomorphism) between information and consciousness. According to this view, the particular sensations we experience reflect the structure of information, and the actual substrates should not matter. As long as we have the same information structure, regardless of physical substrate, we should have the same internal experience or qualia.

To fully understand phenomenal consciousness, we need to look beyond neuroscience and AI research. We need to understand information. What is information as a physical phenomenon? The kind of information we are familiar with tends to be observer-relative, meaning there is an external observer who interprets the state of matter as encoding information about particular objects. So there is an external viewpoint. However, we need to find ways to define information independently of observers, by taking an intrinsic perspective. The contents of conscious experience do not depend on the interpretation of my brain activities by others. They are uniquely determined by the brain state alone. Defining information within a physical system independently of any observer is the key step towards understanding qualia. Tononi’s integrated information theory is attractive because it offers a formal expression to characterise information as seen from the inside, without assuming interpretative observers.

RB: A super-intelligent AI could register information and solve problems in ways that far exceed even the brightest human minds, but being made of a different substrate (e.g. silicon), would it have conscious experience? Or would it only be able to attain certain aspects of consciousness, such as, for example, learning and memory?

RK: Conscious experience depends on how functions are implemented in physical substrates. The same set of functions can be realised in multiple ways, and I believe the quality of conscious experience depends on how exactly those functions are physically implemented. For example, we might simulate the whole brain inside a computer, but the intrinsic information inside the computer could be very different from the intrinsic information within a biological brain. In this sense, a superintelligence could be completely unconscious. On the other hand, I feel it is unlikely that a superintelligent AI would remain unconscious. The internal mechanisms of an AI can be constrained by efficiency in energy and resources. With such constraints, internal mechanisms will likely converge to similar architectures, with recurrent networks and so on. So if super-intelligent AI is created under some efficiency constraint, I predict it will have consciousness.

RB: Given what we know about our own minds, can we expect to intentionally create artificial consciousness? Or will AI become conscious by itself?

RK: That’s exactly what we are doing at Araya. One working hypothesis is that the ability to perform mental simulation via information generation is the key to conscious AI. In a way, such designs already exist within AI research, and generative models of the environment and the self are already used in many situations. So AI research is already tapping into possible functions of consciousness without considering the implications for human consciousness. As a consequence of developing stronger and stronger AI systems, I expect that we will end up creating conscious AI, intentionally or unintentionally. The interesting thing about conscious AI is that we may discover kinds of consciousness different from biological consciousness.

RB: Does consciousness require embodiment?

RK: Not necessarily. I believe that consciousness is what information feels like from within, and as such we don’t need embodiment for consciousness as long as the system has information from an intrinsic viewpoint (this idea is in line with Tononi’s Integrated Information Theory). But when we consider building artificial consciousness, entities that are not embodied tend to be uninteresting. If an agent had no interaction with the environment, it wouldn’t learn any interesting structures or relationships of the environment and the self. Such a conscious AI would be like something dreaming continuously without any particular content, and it wouldn’t exhibit any meaningful behaviour.

Our conscious experiences are shaped by internal, generative models of the environment, which capture rich statistical regularities of the environment and the predictive relationships between our actions and the environment’s reactions. Only through the presence of such regularities can we discuss what distinguishes the information structure of visual experience from that of auditory experience. If our aim is to understand why particular sensations, such as the redness of red, are learned and embedded within a system such as the brain or an AI, it’s important to study embodied agents.

RB: What inspiration can we draw from experimental and theoretical neuroscience in advancing AI research?

RK: Neuroscience is limited in what it can offer in terms of implementation, because we don’t understand how the brain works. Instead, we need AI to understand the brain. AI research offers a lot of vocabulary for describing computational concepts, which is very useful when we speculate on how biological neural networks might work. We also need AI to understand the principles underlying highly complex systems. The amount of data we can gather in neuroscience is growing very fast, and we need better computational tools to extract knowledge from those data.

On the other hand, more psychological theories and concepts seem useful in guiding AI research. For example, the mental simulation I’m talking about here is built upon ideas in (neuro)psychology such as naïve physics and model-based reinforcement learning, as well as many psychophysical observations from consciousness research.

RB: How far should we take AI?

RK: As far as we can. I think it’s likely that the current AI hype will die out and we will get stuck. Of course, AI technologies will have commercial applications and will be part of our lives. To solve intelligence and consciousness at the fundamental level, we need to keep pressing forward. I don’t think there will be a singularity moment where it becomes impossible for humans to interfere with an AI’s recursive self-improvement. I’m more concerned that powerful AI technology will be owned by a small set of groups, such as big firms or governments, and used against the rest of the world. To make sure that AI technology benefits the entire world, it’s important to discuss how we share advances in AI technology. And if AIs become conscious, we should also consider their well-being.

RB: What are some of the challenges you hope to address in the coming years?

RK: As mentioned in this interview, we are trying to solve two problems. One is to implement putative functions of consciousness at a large scale. Specifically, the ability to imagine is potentially important for constructing AGI. A more ambitious goal is to understand information. We are developing computational tools to analyse internal information structure. Our next step in this direction is to compute the internal experiences of artificial deep neural networks and of biological brains in animals.

……………

Giulio Tononi’s integrated information theory (IIT) is an information-theoretical framework for connecting consciousness with physical substrates such as the brain. It proposes a measure of integrated information within a system, termed Φ (phi), which quantifies to what extent the system is conscious. This measure can in principle be applied to non-biological systems such as artificial intelligence.
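[For a concrete feel for what “integration” means, here is a drastically simplified editorial toy in Python. It is not the actual Φ of IIT, which involves cause-effect repertoires and distances between probability distributions; it merely takes the minimum mutual information across all bipartitions of a small binary system as a crude proxy for how irreducible the system is to independent parts.]

```python
# Toy proxy for integration (NOT real IIT Phi): minimum mutual
# information across bipartitions of a small binary system.
import itertools
import numpy as np

def mutual_information(p_joint):
    """Mutual information (bits) between the two axes of a joint matrix."""
    px = p_joint.sum(axis=1, keepdims=True)
    py = p_joint.sum(axis=0, keepdims=True)
    nz = p_joint > 0
    return float((p_joint[nz] * np.log2(p_joint[nz] / (px @ py)[nz])).sum())

def phi_proxy(p_states, n_units):
    """Minimise MI over all bipartitions of n binary units."""
    states = list(itertools.product([0, 1], repeat=n_units))
    units = range(n_units)
    best = np.inf
    for k in range(1, n_units // 2 + 1):
        for part_a in itertools.combinations(units, k):
            part_b = tuple(u for u in units if u not in part_a)
            # Marginalise the full distribution onto the two parts.
            joint = {}
            for s, p in zip(states, p_states):
                key = (tuple(s[u] for u in part_a),
                       tuple(s[u] for u in part_b))
                joint[key] = joint.get(key, 0.0) + p
            a_vals = sorted({a for a, _ in joint})
            b_vals = sorted({b for _, b in joint})
            M = np.zeros((len(a_vals), len(b_vals)))
            for (a, b), p in joint.items():
                M[a_vals.index(a), b_vals.index(b)] = p
            best = min(best, mutual_information(M))
    return best

# Three perfectly correlated units (only states 000 and 111): integrated.
p = np.zeros(8); p[0] = p[7] = 0.5
print(phi_proxy(p, 3))                 # 1.0 bit across every cut

# Three independent fair coins: no integration.
print(phi_proxy(np.full(8, 1 / 8), 3))  # 0.0 bits
```

[The correlated system scores 1 bit across every cut, while independent coins score 0, matching the intuition that an integrated whole carries information beyond its parts.]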

Naïve physics is our intuitive understanding of how the physical world works. For example, even without formal education in Newtonian physics and gravity, we can make predictions based on an intuitive understanding that physical objects tend to fall straight down.

Model-based reinforcement learning. Reinforcement learning is the study of how to learn a policy for taking actions that maximise reward through interaction with the environment. In model-based reinforcement learning, the agent uses a forward/predictive model of how the environment will react to future actions in order to plan its next actions and maximise reward.
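[As an editorial sketch of that loop (an assumed toy setup, not a specific published algorithm), the following Python snippet gives an agent a forward model of a one-dimensional world and has it plan by simulating random action sequences inside the model, executing the first action of the best imagined sequence.]

```python
# Minimal model-based planning sketch (random shooting in a toy world).
import numpy as np

rng = np.random.default_rng(1)

def predict(state, action):
    """Forward model of a 1-D world: move left (-1) or right (+1).
    Reward is highest at position 10 (a hypothetical goal)."""
    next_state = state + action
    reward = -abs(10 - next_state)
    return next_state, reward

def plan(state, horizon=5, n_candidates=100):
    """Simulate candidate action sequences in the model and return the
    first action of the sequence with the highest imagined return."""
    best_return, best_first = -np.inf, None
    for _ in range(n_candidates):
        actions = rng.choice([-1, +1], size=horizon)
        s, total = state, 0.0
        for a in actions:
            s, r = predict(s, a)
            total += r
        if total > best_return:
            best_return, best_first = total, actions[0]
    return best_first

state = 0
for step in range(12):
    action = plan(state)
    state, _ = predict(state, action)  # here the model doubles as the world
print("final position:", state)        # ends at/near the goal, 10
```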
