by Axel Cleeremans
Consciousness refers to the fact that information processing as carried out by the brain is accompanied by subjective experience: There is something it is like to recognize a face, to drink coffee, to smell an unfamiliar fragrance, to win a game of go. By contrast, there is nothing it is like for a computer, or even for a robot, to carry out its information processing: The computations occur “in the dark”, so to speak, and only a few would ascribe any modicum of experience to such devices. A central issue for the cognitive neurosciences is thus to elucidate the differences, if any, between information processing with and without consciousness. This can be achieved in different ways, ranging from behavioural to brain-imaging approaches (Frith, Perry, & Lumer, 1999). The study of consciousness entails addressing unique challenges at the boundary between theory and methodology because it requires one to combine first- and third-person data (Seth, Dienes, Cleeremans, Overgaard, & Pessoa, 2008).
While numerous theories of consciousness have now been proposed, only a few can be characterized as computational theories of consciousness (Atkinson, Thomas, & Cleeremans, 2000; Maia & Cleeremans, 2005). Amongst the latter, two big ideas dominate. The first is that consciousness amounts to “fame in the brain” (e.g., Baars, Dehaene, Dennett, Lamme): We are conscious of whatever representations have, at some point in time, come to dominate information processing through processes of global competition and constraint satisfaction.
The other idea is that consciousness specifically depends on the involvement of meta-representations (e.g., Rosenthal, Perner & Dienes): We are conscious of something in virtue of the fact that our first-order representations are, under certain circumstances, the target of higher-order representations. In other words, a mental state is conscious in virtue of the fact that one is conscious of being in that state!
Here, I would like to defend the idea that both perspectives critically depend on learning: Whether some representations come to dominate processing and hence “win the competition” must depend on processes of adaptation and automatization. Likewise, whether some representations come to redescribe other representations also depends on learning. Hence the suggestion that we learn to be conscious — the “Radical Plasticity Thesis” (Cleeremans, 2008). Action plays a critical role in this context, for there would not be anything to learn about if we were unable to influence our environment and other agents. However, I would additionally like to suggest that the brain learns not only about the consequences of the actions of its body on the environment, but also about itself. The core hypothesis is thus that the brain continuously and unconsciously learns about its own workings, thereby developing re-representations of its own internal states. These re-representations form the basis of conscious experience, and also underpin successful control of action. In a sense, then, this is the enactive perspective turned inwards: Consciousness is “signal detection on the mind”; the mind is the brain's theory about itself.
In this presentation, I will review computational theories of consciousness and elaborate on the “radical plasticity” idea, illustrating both with relevant evidence from different domains. In particular, I will focus on recent work dedicated to implementing Rosenthal’s “Higher-Order Thought” theory of consciousness in the form of neural networks capable of redescribing their own internal states to themselves (Cleeremans, Timmermans, & Pasquali, 2007). The conceptual and methodological implications of this view will also be surveyed, in particular the idea that consciousness is graded rather than dichotomous.
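The two-network architecture alluded to above can be caricatured in a few lines of code. The sketch below is not the Cleeremans, Timmermans, & Pasquali model itself, but a minimal illustration of the same idea under assumed settings: a first-order network trained by backpropagation on a noisy two-alternative discrimination, and a second-order “wagering” network that sees only the first-order network's hidden states and learns to predict whether the first-order decision is correct. The task, architecture sizes, and all parameter values are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- First-order network: noisy two-alternative discrimination. ---
# Stimuli are 1-D signals corrupted by noise; the network reports the sign.
n = 2000
signal = rng.choice([-1.0, 1.0], size=n)
x = (signal + rng.normal(0.0, 1.5, size=n)).reshape(-1, 1)
y = (signal > 0).astype(float).reshape(-1, 1)

# One hidden layer, so there are internal states available for redescription.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(300):
    h = sigmoid(x @ W1 + b1)          # first-order hidden states
    p = sigmoid(h @ W2 + b2)          # first-order decision
    d2 = (p - y) / n                  # cross-entropy gradient at the output
    dW2 = h.T @ d2; db2 = d2.sum(0)
    dh = d2 @ W2.T * h * (1 - h)
    dW1 = x.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

h = sigmoid(x @ W1 + b1)
decision = (sigmoid(h @ W2 + b2) > 0.5).astype(float)
correct = (decision == y).astype(float)

# --- Second-order "wagering" network: reads only the first-order hidden
# states and learns to predict whether the decision will be correct. ---
V = rng.normal(0, 0.5, (8, 1)); c = np.zeros(1)
for _ in range(300):
    w = sigmoid(h @ V + c)            # the wager: predicted correctness
    d = (w - correct) / n
    V -= lr * (h.T @ d); c -= lr * d.sum(0)

wager = sigmoid(h @ V + c)
# After training, wagers on correct trials should on average exceed
# wagers on error trials: the second-order network has extracted
# information about the reliability of the first-order states.
```

The crucial design point is that the second-order network never sees the stimulus: its only input is the first-order network's internal activity, so any above-chance wagering reflects learned knowledge about the system's own states rather than about the world.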