The Emergence of Consciousness

Despite the immediacy of qualia, it must be remembered that they result from at least one translation: that of a physical stimulus into action potential frequencies. For example, for an electromagnetic wave with a wavelength of 700 nm to acquire the qualium “red”, it must first be transduced by the photoreceptor cells in the retina. And it is the series of action potentials along particular pathways leading to the visual cortex that is the proximal cause, as it were, of the qualium.

Some authors even speak of a second translation in which these encoded impulses are converted into qualia. But that position is already controversial, because it is based on a certain philosophical conception of qualia.

Another philosophical presupposition, a materialist one in this case, establishes an equivalence between a qualium and a particular physicochemical state in the brain, in somewhat the same way that heat corresponds to molecular kinetic energy or that visible light corresponds to certain radiant electromagnetic energy.

Some authors, such as the philosopher Daniel Dennett, even try to convince us that in the end, what we call qualia do not actually exist. While it is hard to deny the existence of something very general such as “subjective experience”, the term “qualia”, being more precise and coming from the jargon of philosophers, is more vulnerable to criticism.

Another philosopher, Ned Block, half-jokingly defines qualia by borrowing the definition of jazz that is attributed to Louis Armstrong: “If you got to ask, you ain’t never gonna get to know.” But that leaves philosophers like Dennett unsatisfied. Dennett is convinced, for example, that some time in the future, the concept of qualia will have no more value than the concept of élan vital (“vital impetus”, or “vital force”), which was highly popular before the molecular mechanisms of life were understood but has now fallen into disuse.



WHAT IS CONSCIOUSNESS?

For many reasons, human consciousness is very hard to define. In particular, the kind of problem that it poses for science is very different from that of explaining physical phenomena such as falling objects, photosynthesis, or nuclear fusion. This difference has been characterized in various ways, with a number of different dichotomies.

For example, some authors have stressed the private nature of consciousness, which is accessible only from the viewpoint of the conscious subject, whereas physical phenomena are accessible to any observer.

Others have stressed the ineffability of consciousness: it cannot be effectively explained in terms of language, unlike physical phenomena, whose properties can be accurately expressed in terms such as mass or temperature.

In a 1974 article entitled “What is it Like to Be a Bat?”, the philosopher Thomas Nagel focused on these subjective properties of conscious human experience.

 

To do so, he tried to imagine the subjective viewpoint of an animal with a sensory spectrum very different from our own: a bat. Bats orient themselves in space by echolocation: they emit high-frequency shrieks, then use the echoes returned by obstacles or prey to locate them.

 

Nagel’s idea was to show that because humans are incapable of echolocation, they will never be able to subjectively feel “what it is like” to orient themselves in this way. Just as we humans, with our sense of sight, perceive not electromagnetic waves in the visible-light spectrum, but rather illuminated objects, bats may perceive their returning echoes not as sounds, but directly as objects. That, however, is something we will never know.

And this is exactly what is meant by the subjective side of conscious experience, compared with its objective side. In the bat’s case, the objective side is the acoustical physics of echolocation, which we can describe and understand, unlike its subjective side, which we cannot. Nagel therefore concludes that science has taught us many things about how a bat’s brain functions, but not “what it is like” to be a bat.

This subjective aspect of “what it is like” to have any given conscious state is also referred to as the phenomenological aspect of consciousness. A related term, qualia (the plural of qualium or quale), more specifically designates all of our direct impressions of things. Qualia are the immediate experiential aspects of sensations—to offer some crude examples, the particular redness of the red of an apple, or the coldness of ice. Some authors even extend the concept of qualia to our most basic thoughts and drives.

The problems that qualia pose for the scientific study of consciousness have led the philosopher David Chalmers to distinguish what he calls the “hard problem” of consciousness from the other, “easy” problems.

 

The philosophical literature is brimming with “thought experiments” attempting to grasp the essence of qualia. One of the most famous of these experiments was proposed by philosopher Frank Jackson in 1982. Imagine that Marie is one of the world’s greatest neurobiologists in the field of colour vision, but that she has lived her entire life enclosed in a room where everything is black and white. Everything that she knows about colour vision, she has learned from the books, printed in black ink on white pages, that she has been reading since she was a little girl. Thus Marie has come to know all of the relevant facts about how humans perceive colours.

Then suppose that one day, for the first time in her life, Marie leaves her room and sees the real colours of the world around her. She sees some red tulips and exclaims, “This is what it’s like to see red!”. As Jackson tells us in his thought experiment, at this point Marie appears to be experiencing something completely new. So how is it possible that even though she has had access to absolutely every imaginable piece of information about colour vision, she is now discovering something new by simply seeing a colour? This something new, Jackson’s fable concludes, is the qualium of the particular red of the particular flowers that she has seen.

In other words, even an extremely accurate knowledge of the brain and of the neural correlates of subjective consciousness does not appear to give access to the experience itself: what the subject is experiencing as a subject.

Jackson’s thought experiment naturally became the target of much commentary and criticism, particularly from thinkers who championed a materialist approach to consciousness. At the time he published his thought experiment, Jackson himself regarded consciousness as an epiphenomenon. But he later came to reject this idea: if Marie exclaimed when she saw colour for the first time, then this qualium was what caused her exclamation. Since epiphenomenalism does not accept that qualia can affect the physiology of the brain, and since Jackson was convinced that only physical causes can influence the physical world, there was a serious problem in accepting his original thought experiment as such.


It would be chauvinistic to make an a priori assertion that only humans can be conscious. For a theory of consciousness to be as general as possible, it must therefore also account for the possibility of non-human consciousness (for example, in animals or machines).

Some authors believe that the concept of intentionality, a sophisticated way of talking about mental representations, might facilitate the development of a general theory of consciousness. Toward the end of the 19th century, the German philosopher Franz Brentano (1838-1917) developed the idea that the essence of mental activity is to be object-directed. For Brentano, all consciousness is consciousness of something.

In this sense, language is intentional. For example, if you think of the word “Montreal”, you may imagine the sight of Mount Royal overlooking that city, or perhaps the Olympic Stadium with its famous mast. But how does our understanding of the thing that is signified by a word enable us to form a mental representation of that thing? This is the kind of problem raised by the concept of intentionality, and it is just as hard to solve as the “hard problem” of consciousness.


The biological research done during the 20th century discredited the notion of the existence of mental forces with properties distinct from physical forces. Over the course of this century, researchers compiled an impressive volume of data on the neural circuits of the brain and how they function, but never uncovered any sign of the presence of separate mental causes.

Some eminent 20th-century neurobiologists, such as John Eccles and Roger Sperry, defended the idea that the conscious mind was separate from the brain and could sometimes exert an independent influence on its operations. But now, in the early 21st century, most neurobiologists reject the idea of mental causes separate from the physical world.


PHILOSOPHICAL POSITIONS ON CONSCIOUSNESS

How can we explain the subjectivity of human consciousness, or, to use Thomas Nagel’s phrase, “what it is like” to be ourselves? Or, as David Chalmers would put it, how do we solve the “hard problem” of consciousness?

Today’s neuroscientists are trying to provide solutions to this problem, but philosophers have been grappling with it for centuries. Perhaps the two schools of philosophy that have had the most to say on this subject are dualism and materialism.


René Descartes (1596-1650)

According to substance dualism, the material world does in fact exist, but the subjective aspects of consciousness are of a different nature and constitute the other great substance of which the world is made. This immediately raises the question of the interaction between this subjective world and the physical one—a very difficult question to which the philosopher René Descartes, who distinguished the res extensa (material substance) from the res cogitans (thinking substance), provided an explanation that has since been refuted.

Even so, substance dualism has proven remarkably long-lived. As early as the 4th century B.C.E., Plato distinguished a mortal body from an immortal soul. In a later age, this thesis served Christian theologians’ purposes so well that they unabashedly backed it with all the authority that the political power of the Church conferred on them for many centuries.

But subsequently, substance dualism has been shaken to its very foundations by questions about what its detractors such as philosopher Gilbert Ryle have called the myth of the “ghost in the machine”. For example, if the human body is a physical machine piloted by a non-physical ghost hiding somewhere inside the human skull, where exactly is that ghost hiding? And is there only one such ghost, or are there many? Who animates the ghost itself, and through what force does this ghost affect the physical world?

In an attempt to retain the advantages of having two separate entities, but avoid the pitfalls of substance dualism, philosophers have developed several variants of dualism. These include:

  • property dualism, which accepts that humans are composed only of matter, but that this matter has two very distinct types of properties;

  • epiphenomenalism, which recognizes the causal effects of the brain on the mind, but not of the mind on the brain; and

  • emergentism, which holds that mental states are more than the sum of their material parts but can nevertheless interact with them.

The other major school of philosophy that has had something to say about consciousness is materialism. For materialists, the causal relationship between our mental states and our behaviours does not pose any problem, because both are part of the physical world. A subjective experience such as pain is quite real, but simply consists of the neuronal states that give rise to it.

This materialist framework is monistic: it holds that only matter exists. But within it, there have been two main interpretations about the nature of the mind.

The first, known as dual-aspect monism or neutral monism, has been championed by thinkers such as Baruch Spinoza, George Henry Lewes, Thomas Nagel, and Mark Solms. It states that matter is a single substance, but can be perceived from two different perspectives. Just as a curve remains one and the same line even though it can be described as concave or convex depending on the side from which it is viewed, so our psychophysical processes remain the same regardless of whether we are talking about them from a physical standpoint or a mental one.

Baruch Spinoza (1632-1677)

Thus, for the proponents of the dual-aspect theory, your brain can appear to you as something physical when you regard it as an object from the outside, but as something “mental” when you, as a subject, examine it from the inside (by introspection). Just as physicists can speak of light as a wave and a particle simultaneously, we can regard the body and the mind as simply the two sides of the same coin. The age-old distinction between mind and body may therefore have been nothing more than an artifact of perception.

Neuropsychoanalysis, a movement that tries to combine data from the neurosciences and psychoanalysis to achieve a better understanding of human consciousness, is based on dual-aspect theory.

The second main materialist interpretation of the nature of the mind is psychophysical identity theory. This theory postulates that there is an identity between a person’s conscious states and the physical states of his or her brain. In psychophysical identity theory, unlike in dual-aspect theory, the subjective and objective natures of consciousness cannot be regarded as two different aspects of the same thing, because they are one and the same thing. In other words, mental states can be completely reduced to physical ones, just as water can be reduced to its chemical formula H2O.

The problem then, of course, is to explain how the objective and the subjective, the brain and the mind, can be identical when they seem so different. Two different forms of identity, “type to type” identity and “token to token” identity, have been proposed. They lead to two variants of reductive materialism, one based on identity in the stronger sense and the other on identity in the weaker sense.

Eliminative materialism is even more radical than the two forms of materialism just discussed. Like materialist functionalism, eliminative materialism seeks to circumvent the difficulties inherent in materialism while accepting its basic premise: that matter is the only thing that exists.

Lastly, the philosophers of the mysterian school, whose best-known representative is Colin McGinn, are non-materialists who think that the problem of human consciousness simply surpasses human understanding. They refuse to believe that our subjective vision of colours, for example, is simply identical to the activity of a population of neurons in certain areas of the cortex.

At the same time, however, these philosophers do not want to return to dualism. They therefore argue that consciousness is a mystery, and that it is a mystery because our concepts of the mental and physical world are too crude to address the problem of the relationship between the body and the mind in an enlightening way. It is somewhat the same as the reason that monkeys will never perform differential calculus: it would require concepts that are inaccessible to their brains. In fact, every species has limits to its cognitive abilities, and understanding consciousness may just require some concepts that humans simply cannot access.

But the materialists say that the mysterians have given up too quickly, and that they base their conclusion on nothing more than their disbelief in the possibility that the brain’s grey matter can give rise to the world of bright colours that we experience every day. Some materialists believe that one way to make the identity between consciousness and matter less counterintuitive is to apply new concepts from the cognitive neurosciences to our thinking about the phenomenological aspect of consciousness.


    

Original modules
Tool Module: Similarities and Differences Between the Brain and a Computer
Tool Module: Cybernetics



The English mathematician Alan Turing (1912-1954) believed that in the relatively near future, scientists would manage to program a computer so as to give it conscious states. To determine when that goal would have been reached, he developed what is now known as the Turing Test. This test assumes that one is in communication with an entity that one cannot see, through some remote mechanism such as postal mail or e-mail. The task is to ask this entity questions so as to determine whether it is a human or a computer. If the entity is a machine, and it succeeds in fooling you into thinking that it is human, then it has passed the Turing Test and it can be assumed to have the same conscious states as a human being.

But a number of critics have objected that a computer that passes the Turing Test might simply be simulating conscious states in a very sophisticated way.


The idea that the essence of human thought is similar to the operation of computers—that it consists of symbolic representations manipulated by logical operations—continues to influence the cognitive sciences, even though this view is less common now than in the 1960s or 1970s.

While neuroscientists attempt to understand the operation of human consciousness directly, by analyzing its various components, artificial intelligence (AI) researchers attempt to build machines that resemble the human mind as closely as possible. These researchers hope that if they can succeed in building a machine whose responses can be mistaken for those of a human mind (see preceding sidebar), then we may learn what a system must contain in order for a consciousness to emerge from it.

But would this be a truly “human” consciousness in the sense that we experience it, or would it be a different form of consciousness, specific to this particular kind of machine? Or would it be only a simulation of consciousness, as some skeptics and the proponents of “weak AI” would have it (see the discussion of weak AI below)? Or perhaps the Dutch computer scientist Edsger Dijkstra had it right when he said: “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”

THEORIES OF CONSCIOUSNESS IN THE COGNITIVE SCIENCES

From 1946 to 1953, when the field of psychology was still dominated by behaviourism, the Macy Foundation sponsored a series of conferences in New York City and at Princeton University, in New Jersey. These conferences were attended by specialists from many disciplines, ranging from mathematics to psychology to anthropology, sociology, and neurobiology.

Scholars such as Wiener, Shannon, McCulloch, von Foerster, and von Neumann, who regularly attended these conferences, strongly advocated a multidisciplinary approach, which proved highly productive. What are now known as the Macy Conferences gave rise to the cybernetics movement. Now defined as the general science of communication and control in natural and artificial systems, cybernetics studies how information circulates.

A number of ideas that originated in cybernetics have gone on to profoundly influence all fields of science (biology, economics, ecology, and so on; see the Tool Module on cybernetics). For example, the concept of feedback has led to a better understanding of the numerous ways in which hormones control the human body.
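To make the idea of feedback more concrete, here is a minimal sketch, in Python, of a negative feedback loop of the kind often used to describe hormonal control. Everything in it is an assumption made for illustration: the set point, the secretion gain, and the clearance rate are arbitrary numbers, not values for any real hormone.

# Minimal negative-feedback sketch: secretion falls as the circulating level
# rises toward a set point, so the level stabilizes near (slightly below) the
# set point, as with simple proportional feedback. All numbers are invented.
set_point = 1.0        # target hormone concentration (arbitrary units)
level = 0.2            # current concentration
gain = 0.5             # how strongly the "gland" responds to the error
clearance = 0.1        # fraction of the hormone cleared at each time step

for step in range(50):
    secretion = max(0.0, gain * (set_point - level))  # feedback: the error drives secretion
    level += secretion - clearance * level            # secretion in, clearance out
    if step % 10 == 0:
        print(f"step {step:2d}: hormone level = {level:.3f}")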

Many biologists, such as Henri Laborit and Henri Atlan, were greatly influenced by concepts of cybernetics. This new science also quickly found applications in computing, then in its infancy, as well as in what would later become known as artificial intelligence (see sidebar).

The cyberneticists were also clearly interested in investigating the complex system par excellence: the human mind. And because they rejected all forms of idealism and shared a strong inclination toward materialism, they quite naturally included the study of the brain in their two approaches to complex systems:

  • the “top-down” approach (decomposition or reduction);

  • the “bottom-up” approach (global or systemic construction, or synthesis).

These two approaches tended to complement rather than contradict one another. They gave rise to the two main currents that developed subsequently in the cognitive sciences: cognitivism and connectionism, respectively.

The computers developed during World War II, though still very slow, were a great source of inspiration for the cognitivist (also known as the computational) approach. The classic use of the computer as a metaphor for the human mind (though we now know its limitations; see the Tool Module on the similarities and differences between the brain and a computer) thus led the cognitivists to believe that the mind translates the components of the external world into internal representations, exactly as a computer does.

According to the cognitivists, the mind then manipulates these internal symbolic representations according to certain predetermined rules so as to provide appropriate responses, or “outputs”. In other words, the cognitivists saw thought as a form of information processing.

This central paradigm of cognitivism dominated the cognitive sciences from the mid-1950s for almost 20 years. As Jerry Fodor, a student of Hilary Putnam, couched the argument, to think is to manipulate symbols, and cognition is nothing more than manipulating symbols the way that computers do. This premise and Fodor’s research inspired the various functionalist approaches, according to which the mind is organized into specialized modules that could, in principle, be implemented on physical platforms other than the brain, such as computers. This is the famous concept of “multiple realization”.

Once mental states had been equated with computer software and the brain with computer hardware, computer simulation and modelling became an ideal means of studying how the human mind operates. This field of inquiry came to be known as “artificial intelligence”, or AI (see sidebar).

The philosopher John Searle distinguished two positions regarding the possibilities of AI. According to the “strong AI” position, to be intelligent, all a machine needed was the right program. But Searle dealt this position a harsh blow with his Chinese Room argument.

According to the “weak AI” position, computers can only simulate the human mind. No matter how much computational power they have, they can never create a true intelligence or a genuine consciousness. Meteorologists’ computers, though they may be able to simulate the development of hurricanes with great accuracy, are never going to soak us to the skin or flatten our homes.

Cognitivism, inspired by the operation of computers that manipulate symbols without interpreting their meaning, is forced to reduce the brain to a simple syntactic device, and not a semantic one. Epistemologically speaking, this position is vulnerable to attack from many angles.

It was in this context that connectionism, the other major current in the cognitive sciences, developed in the 1980s.

The roots of connectionism lie in cybernetics and neurobiology, and accordingly, its primary analogy for the human mind is a network of numerous interconnected units. This new approach, based on networks of artificial neurons that process information in parallel, was developed to make the structure of cognitive models more closely approximate that of the brain.

Thus connectionism is a “bottom-up” approach. It is associated with philosophers such as Daniel Dennett and Douglas Hofstadter. In connectionism, mental representations are not discussed in terms of symbols, but instead are analyzed in terms of links among numerous distributed, co-operative, self-organizing agents.

Marvin Minsky, who inspired this approach, thus regards the cognitive system as a society of micro-agents that are capable of solving problems locally. Connectionists therefore believe that the analysis must penetrate down inside symbolic operations, to the “sub-symbolic” level.

In contrast to the computing analogy used in cognitivism, connectionism does not depend on complex algorithms that are executed sequentially, or on a control centre that processes all the information, because the networks of neurons in the brain are considered quite capable of doing without them. What is special about the brain’s neural networks, however, besides their distributed mode of operation, is that the effectiveness of the connections among them is altered by experience.

Connectionism was thus inspired directly by Hebb’s rule, which states that when two neurons tend to be activated simultaneously, their connections are strengthened; when the opposite is true, their connections become weaker. The connectivity of a system thus becomes inseparable from the history of its transformations. And cognition becomes the emergence of global states from the application of simple rules (such as Hebb’s rule) to a network of elements that are just as simple, but very numerous and highly interconnected. The big difference, compared with cognitivism, is that connectionism regards the networks of neurons as being not programmed, but trained (see box below), and regards a mental representation as a correspondence between an emergent global state and properties of the external world.
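As a rough illustration of how such a local rule can shape connectivity, here is a minimal sketch in Python (using NumPy). It is not taken from any particular connectionist model: the toy network, the random activity patterns, and the decay term standing in for the weakening of non-co-active connections are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(1)
n_units = 5                                     # a toy network of rate-coded units
W = np.zeros((n_units, n_units))                # connection strengths, initially zero
patterns = rng.integers(0, 2, (200, n_units)).astype(float)  # arbitrary 0/1 activity patterns

eta, decay = 0.01, 0.005                        # learning rate and decay rate
for x in patterns:
    # Hebb's rule: pairs of units that are active at the same time have their
    # connection strengthened (the outer product); the decay term stands in for
    # the weakening of connections that are not reinforced by co-activation.
    W += eta * np.outer(x, x) - decay * W
    np.fill_diagonal(W, 0.0)                    # no self-connections

print(np.round(W, 3))  # the strongest weights link the most frequently co-active units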

But with this approach to cognition, the notion of representation was to become increasingly problematic, because according to Minsky himself, the brain’s main activity consists of continuously modifying itself. What you experience today will influence the way you recall a memory which, far from always being the same, will be a reconstruction based on the current state of your brain. Moreover, this reconstructed memory will inevitably affect the way that your brain functions subsequently.

Consequently, unlike a machine that manufactures an object that has no effect on the machine’s operation, the brain is a machine whose processes constantly alter its subsequent operation.

In other words, with the brain, the results of processes become the processes themselves. Our cognitive processes are regarded not as representing a world that exists independently, but as causing the emergence of a world as something inseparable from the structures that embody the cognitive system. This perspective has led some researchers to seriously question whether there even is a pre-existing world from which the cognitive system extracts information. For example, in response to this tenacious metaphor of a cognitive agent that could not survive without a map of an external world, Francisco Varela developed his theory of enaction.

Unlike traditional artificial intelligence, in which all of the operations had to be written in advance by a programmer, artificial neural networks are not programmed, but rather trained. And for many tasks, such as face recognition, this approach has proven productive.

It would be very hard to define explicit rules for the way that we so readily recognize a characteristic such as a person’s gender from looking at their face. The connectionist approach provides far better results. The first step is to train the network of artificial neurons by showing it a series of photographs of faces and asking it to determine the sex of the people concerned. The next step is to tell it what mistakes it has made in performing this task. The network will then adjust the efficiency of the connections in its circuits to correct its errors and produce increasingly accurate responses.

In this particular example, a minimum network might consist of three layers of neurons, in which every neuron can make connections to several other neurons in the next layer. The first layer should have a large number of elements that can correspond fairly precisely to the dark and light areas in the photographs that the network is to be shown. The third layer—the output layer—should have only two elements, corresponding to the two genders. The second layer, between these two, would contain an indeterminate number of elements for which the efficiency of the connections with the elements in the two other layers can be adjusted during the training process.

If the training photographs are well chosen, then the network will be able to correctly recognize the genders of the people in new photographs that it is shown—even if the programmers won’t be able to describe in detail how the network is finding the right answers. For unlike traditional software applications, connectionist networks don’t do only what their programmers tell them to do: they devise some of the winning strategy themselves.
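As a minimal sketch of the kind of three-layer network described in this box, here is a short Python example using NumPy. Everything specific in it is an assumption made for illustration: the “photographs” are random pixel vectors with synthetic labels rather than real face images, the layer sizes are arbitrary, and error backpropagation is used as one common way of “telling the network what mistakes it has made”.

import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_hidden, n_classes = 64, 16, 2         # the three layers described above
X = rng.random((200, n_pixels))                   # 200 stand-in "photographs" (light/dark values)
y = (X[:, : n_pixels // 2].mean(axis=1) > 0.5).astype(int)   # synthetic stand-in "gender" labels
T = np.eye(n_classes)[y]                          # one output unit per class

W1 = rng.normal(0.0, 0.1, (n_pixels, n_hidden))   # adjustable connections, layer 1 -> 2
W2 = rng.normal(0.0, 0.1, (n_hidden, n_classes))  # adjustable connections, layer 2 -> 3

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for epoch in range(1000):
    hidden = sigmoid(X @ W1)                      # hidden-layer activity
    output = sigmoid(hidden @ W2)                 # the network's "responses"
    error = output - T                            # telling the network what mistakes it made
    # Backpropagation: nudge the connection strengths to reduce those errors
    delta_out = error * output * (1 - output)
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ delta_out / len(X)
    W1 -= learning_rate * X.T @ delta_hid / len(X)

predictions = sigmoid(sigmoid(X @ W1) @ W2).argmax(axis=1)
print("training accuracy:", (predictions == y).mean())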



    


“Men are aware of their own desires and ignorant of the causes by which those desires are determined.”

- Baruch Spinoza (1632-1677)


To demonstrate that change blindness can occur in everyday life, Daniel Simons devised several clever experiments. In one of these, an experimenter disguised as a construction worker approaches a pedestrian, pulls out a map, and asks him how to get somewhere. As the pedestrian starts to provide directions, looking and pointing at the map, two other experimenters, also disguised as workers, walk between the first experimenter and the pedestrian carrying a wooden door that momentarily blocks his view. At that moment, the first experimenter quickly changes places with one of the two others, whose face was hidden by the door. While the first experimenter slips away hidden behind the door, the second emerges, also holding a map in his hand. What happens then is pretty surprising: in about half of the trials, the pedestrian doesn’t even notice that it’s no longer the same person and continues providing directions as if nothing had happened!

Simons has also demonstrated another disturbing phenomenon, inattentional blindness, in an equally spectacular fashion.


Change blindness to a given object can be diminished when the object has more meaning for the individual concerned (for example, a cigarette lighter for a smoker, as opposed to a non-smoker). This suggests that the semantic value of an object for an individual helps that individual to extract it from the surrounding chaos.

This diminishing of change blindness for objects that have a subjective meaning for you probably comes into play in advertising. But the reverse situation—for example, if you became convinced that a given product was of no use to you—might well send it back into the unconscious chaos of things with no useful meaning that you come across every day but do not even notice. The meaning of a thing, its affective value for you, thus influences your conscious perception of it. And this holds true just as much for letting it enter your consciousness (for example, in an advertisement for a product that might interest you) as for keeping it out (for instance, when intellectually protecting yourself from political propaganda).


When individuals are exposed to a stimulus repeatedly without any consequences or positive reinforcement, they learn any possible associations with this stimulus more slowly. This phenomenon, known as latent inhibition, has been observed in a wide variety of mammals, ranging from mice to humans.

Many explanations have been offered as to why it is harder to form new associations with a stimulus that you have previously judged to be meaningless. Most of these explanations emphasize the adaptive value of this phenomenon, which removes from the field of conscious attention anything that is not directly useful for the task at hand. Latent inhibition also highlights the fact that the meaningfulness of certain things for an individual is learned, and not given a priori.

This unconscious sorting process keeps you from being so constantly assailed by inconsequential stimuli that you cannot focus on what is essential. Examples of this process include the way you smell the particular scent of a house when you first enter it, or hear the ticking of a clock on the nightstand when you first go to bed, but soon stop noticing them.


More and more experiments have demonstrated priming effects that are completely unconscious. In one such study, subjects on their way to a test where they were supposed to evaluate the sociability of strangers were stopped by the experimenter’s accomplice just before they entered the laboratory. The accomplice had his hands full and asked each subject to help him for a moment by holding his cup of coffee, which was hot in some cases and cold in others. The results of this study showed that those subjects who had held a cold cup of coffee assessed the strangers in the test as being colder, less sociable, and more egotistical than did the subjects who had held a hot cup of coffee!

Experimenters have also been able to influence other behaviours in a particular direction without the subjects’ becoming conscious of it. In one such experiment, the test group of subjects filled out a questionnaire in a room where there was a smell of lemon-scented cleaning liquid, while the control group filled out the questionnaire in a room where there was no particular smell. After completing the questionnaire, both groups were rewarded with a snack of cookies that crumbled very easily. The film of the subjects eating their snacks showed that the subjects who had been in the room with the cleaning-liquid scent were three times more likely to sweep up their cookie crumbs than the subjects in the control group.

These experiments and many others (follow the link below) reveal an unconscious brain that is far more active in choosing our behaviours than was once believed. A myriad of sensory indications that we do not perceive consciously might thus explain why we can be friendly and courteous in one situation but rude and irritated in another, even though we consciously perceive the two situations as similar. In such cases, perhaps one of these unconscious circuits that can guide our behaviour has been triggered by something without our knowing it.



When you are faced with a complex problem, are you better off trying to consciously make the best choice, or going with your intuition (in other words, your unconscious processes)? To try to answer this question, psychologist Ap Dijksterhuis asked a group of subjects to choose the best car by using a large file of information to consider a dozen different criteria carefully. After closing the file, half of the subjects were asked to think for a few moments before making their choice, while the other half were asked to do a puzzle (to prevent them from thinking about all that information).

Dijksterhuis found that the second group—the subjects who did not consciously ponder the problem—made the better choices! His explanation was that the subjects who had to make their decisions through conscious reasoning became so overloaded by all the information that they ended up giving weight to criteria that were not the most relevant. In this case, unconscious processes, which can deal with large volumes of information automatically, seem to have been capable of a better synthesis.

These results are consistent with those obtained by Antonio Damasio and with his theory of somatic markers, which states that we naturally rely on our emotions, which are expressed implicitly, to make our rational choices. Should we therefore give more importance to our intuition than to our reason? Not always, because in the reverse situation, where there are very few criteria for making a choice (between two dinner napkins, for example), conscious, controlled deliberation has proven more effective.


FLAWS IN THE CLASSICAL MODEL OF CONSCIOUSNESS

The development of the neurosciences as a major discipline within the cognitive sciences has gradually revealed the flaws in the classical model of consciousness. The neuroscientific data provided by brain-imaging experiments and many other experiments do not remotely agree with the idea that all of our mental processes are consciously accessible to us, or that our consciousness springs from a point in our brain as the result of our transparent perception of the world, or that the intentions that we can access consciously are sufficient causes for our behaviours.

For example, philosopher Daniel Dennett writes: “The idea of a special center in the brain is the most tenacious bad idea bedeviling our attempts to think about consciousness.” Dennett’s “multiple drafts” model of consciousness shattered the illusion of what he calls the Cartesian theatre.

What we experience consciously may in fact be only the tip of an iceberg whose submerged portion consists of countless unconscious processes. The following paragraphs describe some phenomena that show these unconscious processes at work. These phenomena are often quite complex, and these descriptions will be quite brief, but they will nevertheless demonstrate the many flaws in the classical model of consciousness. Note that the processes in question are described here as “unconscious” not in the sense that this word has in psychoanalysis, but simply because they are not subject to our conscious control.

The first phenomenon that we will examine involves situations where the conscious perception changes while the stimulus presented does not. As we shall see, such cases of rival perceptions pose problems for the classical model of consciousness. The phenomenon of binocular rivalry is an example of rival perceptions. In the experimental protocol used to study binocular rivalry, the subject looks into a special pair of binoculars in which each eyepiece presents a different image to each eye. This is a very artificial situation, because it not only separates the fields of vision of the two eyes but also presents them with differing information.

Under these conditions, the subject’s subjective perception will alternate between two states: sometimes the subject will see the image presented to the left eye, and other times the image presented to the right. This would be a pretty strange result, if the classical model were correct in describing conscious perception as a “transparent window on the world”. Which world would be the real one here: the one seen by the left eye, or the one seen by the right?

Another interesting detail: if, while this experiment is in progress, the subject’s brain activity is recorded, and they are asked to indicate which of the two images they are perceiving at given times, the activity observed in certain areas of their brains will vary according to the subjective experience that they report. This difference observed at the neuronal level thus reflects only the difference in subjective perception, because the objective stimulus has not changed. 

The Necker cube is another example of two rival perceptions that can arise from the same stimulus: in one, you seem to be viewing the cube from above and see its top face; in the other, you seem to be viewing it from below and see its bottom face.

Scientists have also discovered some situations where the reverse occurs: the conscious perception doesn’t change, even though the stimulus does. This phenomenon, known as change blindness, sheds further doubt on the classical model’s unitary, detailed view of our consciousness of the world.

It is true that when you look at a landscape, you have the impression of being aware of the entire scene in all its rich detail. And it is also true that if something appears in or disappears from the scene, you notice it immediately. The human visual system is indeed very sensitive to anything that creates an impression of movement in the scene, such as the appearance or disappearance of an object as in the animation below. When such an event occurs, you immediately turn your glance in the direction of the change, to try to identify it.

But what happens if you very briefly insert an empty screen between the two images? You thus artificially mask the appearance or disappearance, because the entire scene disappears for a moment, then quickly reappears, but with one object added or subtracted. Try this yourself by clicking the button below the image to the right. Did you notice the change as easily?

The fact that the mask makes it so much harder to identify what is changing in the scene suggests that, contrary to what the classical model of consciousness would have us believe, at any given moment we are processing only a small proportion of a visual scene consciously. We are thus never actually forming a detailed visual representation of the entire scene.

Some neurobiologists believe that the reason we have this illusion of being fully conscious of the entire scene is that we know that at any time, we can shift our attention from one point to another in the scene to check the details. According to these scientists, we are in a sense using the world itself as a form of external memory. We are also, in their view, processing the entire scene at all times, but only at a preconscious level that would let us identify certain details in it consciously if we wanted to.

Lastly, the sidebar on the research done by Daniel Simons shows that change blindness can also be observed outside the laboratory, in particular interpersonal situations.

Optical illusions are another common phenomenon that does not fit at all with the idea of consciousness as a faithful reflection of the reality that surrounds us. The very essence of an optical illusion is to give us a conscious perception that is incorrect, and hence different from reality. One can readily see the problem that this raises for the classical model of consciousness.

For instance, the picture to the right exemplifies how context influences our perception of an object’s size. We consciously perceive the yellow poker chip that is surrounded by larger black chips as being smaller than the yellow chip that is surrounded by smaller black ones.

But curiously, though our conscious perception of the size of the two chips may be influenced by this optical illusion, that is not the case for the actions that we direct toward them when they are presented as 3D objects through the effect of perspective, as in the image below.

If you were asked to try to pick up either of the yellow chips in this 3D picture, and the distance between your fingers when you performed the corresponding movement were measured, this distance would reflect the actual size of the chip, regardless of which of the two contexts it was presented in.

This result indicates that “visual perception” and “visually guided action” can be dissociated from each other. In other words, behaviours such as picking up an object are not misled by the erroneous conscious perception. It follows that these behaviours must be controlled by processes that escape consciousness.

Another example of this phenomenon is the old joke that if you want to ruin your opponent’s concentration in a tennis match, for example, all you have to do is compliment him on the accuracy of his serve, the smoothness of his return, and so on. Usually, that will make him self-conscious. He will start trying to use conscious movements to match the perfect accuracy of the movements that he makes unconsciously from years of constant practice, and he’ll end up sending the ball into the net!

The existence of an unconscious aspect of vision is also revealed in spectacular fashion by the phenomenon of blindsight (sometimes called “blind vision”). People who have blindsight have suffered damage to either their left or their right primary visual cortex and have consequently lost their sight in the opposite visual hemifield. But in experiments where a light stimulus is sometimes presented to such people’s blind hemifield and sometimes not, and they are asked to “take a chance” each time and say whether or not such a stimulus was present, they will respond accurately much more often than would happen at random! And when they are told how accurate their answers have been, they remain incredulous, convinced that they must have made random lucky guesses, because they say that they had seen nothing at all in that part of their visual field.

People with blindsight thus have some surprising residual visual capabilities. These capabilities appear to be made possible by the subcortical visual structures and by some neural pathways that lead directly from the lateral geniculate nucleus to visual areas V4 and V5, without passing first through primary visual area V1.

Thus, though the primary visual areas seem to be essential for conscious vision, there are a number of vision-guided behaviours that do not seem to require any conscious control. But how is this possible? Isn’t consciousness supposed to arise first, and action flow from it after that? Yet another chink in the armour of the classical model of consciousness!

And there are more. Just as with perception, there are entire areas of learning and memory that take place outside the realm of consciousness. First of all, at any given time, most of our memories are unconscious. We can remember them consciously, but they spend most of their time as unconscious traces in our nervous systems.

Second, there are the numerous “implicit” forms of memory. Simply acquiring a particular skill, such as bicycle riding or touch typing, involves procedural memory that we cannot access consciously. The same is true for the priming effect, in which past exposure to a relevant piece of information influences our cognitive processes without our even realizing it (see sidebar). For example, if you are given a long list of words to memorize, and one of these words recurs several times in the list, you will find it easier to recall this word, even if you didn’t consciously notice that it occurred more often than the others. (A good portion of advertising is based on this principle of unconscious preferential recognition.)

Studies of people with amnesia also have shown the great autonomy of this implicit memory system, which is often preserved despite a loss of explicit memory. The subjects in these studies, such as the famous patient H.M., would be presented every day with a problem such as the Tower of Hanoi puzzle, and every day they would say that this was the first time they had ever tried to solve it, but nevertheless, they would find the solution a bit more quickly every day.

It thus seems clear that we accomplish a multitude of tasks unconsciously, and that these unconscious processes are far more numerous than our conscious actions. Language might be cited as a final example, one which also shows that the two kinds of processes, conscious and unconscious, can be at work simultaneously: if you think about it, when you are having a conversation, you are forming conscious thoughts at the same time as you are using the syntax and vocabulary of your mother tongue completely automatically.

Given these many manifestations of unconscious processes, we can therefore, as a first approximation, distinguish not one but two sub-systems. The first one is conscious, often verbal or visual, and operates serially (“You can’t think of more than one thing at a time.”). The second is largely unconscious, often affective, and responds to stimuli automatically. It is composed of numerous units operating as massively parallel processors, so that it has a much greater processing capacity.

The demonstration that the majority of our cognitive processes are in fact unconscious is regarded as a veritable revolution that has ended the reign of the classical model of consciousness. This unconscious part of our minds, which is also far more “intelligent” than had previously been believed (see sidebar on difficult choices), continues to amaze scientists with the diversity of its processes: mental and sensorimotor automatisms, implicit knowledge and even implicit reasoning, semantic processing, and so on.

But these two sub-systems, the conscious and unconscious, do not suffice all on their own to manage the complexity of the real world that is so vastly underestimated by the classical model of consciousness. They are therefore supported by another system, composed of what are called our attentional processes.

 

Many data show that certain aspects of consciousness that seem unitary are in fact dissociable. For instance, patients with brain damage can sometimes display a complete dissociation between their performance and their awareness of this performance.

Consider conscious visual perception, for example. There is a perception disorder called visual form agnosia (or apperceptive agnosia), in which the patient cannot visually recognize the size, shape, or orientation of an object. Yet despite this major information deficit concerning the object, the person can still grasp it perfectly between his or her thumb and index finger.

There are also cases of the reverse condition, optic ataxia, in which people cannot reach for or grasp an object but can visually recognize its size, shape, and orientation.

In both cases, there is a complete dissociation between the conscious perceptive processing and the unconscious visual/motor processing. This same distinction is also found in the brain’s anatomy, between the ventral visual pathway and the dorsal visual pathway.

Anosognosia is an even more global syndrome in which the patient flatly denies the existence of a deficit that he or she has acquired as the result of a neurological injury. This was the case for a patient who was treated by V.S. Ramachandran. This patient had a paralyzed left arm, as the result of a stroke in the right hemisphere (anosognosia is almost always the result of a right-hemisphere injury). When Ramachandran asked her to point to him with her right arm, she did so without any problem. When he asked her to do so with her paralyzed left arm, however, that arm of course remained immobile, but she insisted that she had followed the instruction. And if Ramachandran told her that her left arm hadn’t moved, she answered that she had arthritis in her left shoulder, that it was causing her pain, and that he knew that very well.

Damage to the right hemisphere can also cause another spectacular type of dissociation: hemineglect. Patients with hemineglect simply can no longer consciously perceive the left half of their universe. For example, a man with hemineglect will shave only the right side of his face and eat only the food on the right side of his plate. If asked to draw a clock, he will cram all 12 hours into the right half of the dial. And if someone seated to his left speaks to him, he will respond to the person seated to his right.

Hemineglect also differs from more elementary perceptual disorders such as hemianopsia (loss of sight in one half of the visual field). When presented with a printed sentence, people with hemianopsia will turn their heads in order to see the whole sentence, whereas people with hemineglect will read only the words in its right-hand portion.

What makes hemineglect of interest for the study of consciousness is that the information its victims overlook consciously, they nevertheless seem to process unconsciously. For example, if they are shown two pictures, one to their left and one to their right, they will be unable to identify the one on the left. But curiously, if you ask them to take a chance and guess whether the image on the left was the same as the one on the right, they will guess correctly far more often than chance would predict. And other experiments have suggested that the brains of people with hemineglect can unconsciously process not only the basic physical traits of images, but more elaborate semantic information as well. These people thus show that there can be a dissociation between performance and awareness of performance. Such a finding may seem paradoxical under the classical model of consciousness, but it becomes intelligible when we adopt a more distributed model of the substrates of consciousness in the brain.
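To see why such above-chance "guessing" is taken as evidence of unconscious processing, here is a minimal calculation in Python. The session size and score are invented for illustration; the point is only that a patient who reports seeing nothing on the left, yet scores well above 50% on forced-choice questions, is very unlikely to be guessing at random.

from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of getting k or more answers right out of n by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical forced-choice session: 32 correct answers out of 40 trials
print(p_at_least(32, 40))   # about 0.0001: implausible as mere chance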

Another strange syndrome, prosopagnosia, occurs when someone suffers brain damage that makes them unable to recognize faces, even of people whom they know well. But when they are shown a picture of a friend’s face, even though they will consciously say that they are seeing that face for the first time, physiological signs such as minute changes in the moistness of their hands (as measured by variations in their skin conductance) show that they have actually recognized the face anyway—yet another example of the dissociation between unconscious and conscious performance.

Mental disorders such as schizophrenia provide other examples of dissociation that help us to understand consciousness. People with schizophrenia often attribute the intentions behind their actions not to themselves but to outside forces. Several authors have tried to explain this aspect of schizophrenia in terms of a dissociation between an intentional system that generates the action and a self-monitoring system that is not informed of the individual's own intentions.

There are other, rarer types of dissociation that are truly bizarre, such as alien hand syndrome, in which people have the impression that one of their hands is no longer under their control. Such people may, for example, watch with fright as their hand performs a complex task such as unbuttoning their shirt, when they are convinced that they did not give it the order to do so. In this pathology, which often involves damage to the corpus callosum (as in people with a split brain), the hand's action is perceived as responding to a foreign intention.

Yet another spectacular form of dissociation is dissociative fugue, better known from Hollywood movies but entirely real (it affects approximately 2 out of every 1,000 people in the United States). In extreme cases, someone may leave home, travel a long distance, and start a new life while remaining totally or partially amnesic about their previous one.

Certainly one of the best known forms of dissociation is dissociative identity disorder (formerly known as multiple personality disorder). People with this disorder alternate among two or more personalities without being able to control these changes. Each of these personalities usually has its own range of behaviours and does not share its explicit knowledge with the other personalities. But a transfer of information among the various personalities may take place in implicit memory. Once again, conscious and unconscious do not necessarily go together.

Just how strange can dissociative disorders get? Perhaps the strangest is Body Integrity Identity Disorder, or BIID, in which an individual requests the elective amputation of a body part that they say does not match the idealized image that they have of themselves. Paradoxically, these people do not feel complete until the day they succeed in getting the amputation.


When a correlation is found between a conscious experience and a neural event (in other words, when they are found to co-occur regularly), it must be interpreted cautiously, because it can mean several different things:

- the neural event may cause the conscious experience;

- the conscious experience may cause the neural event;

- some third event may cause both the neural event and the conscious experience;

- or the neural event and the conscious experience may actually be the same thing, even if the two do not seem at all alike.

SOME PROMISING CONCEPTS AND MODELS FROM THE NEUROSCIENCES

To develop neurobiological models of consciousness, scientists start by looking for what are called neural correlates of consciousness. This means identifying the variations in the activity of specific groups of neurons that reliably accompany the appearance of a particular piece of conscious content. For example, some experimental protocols commonly used to identify neural correlates of conscious visual perception are binocular rivalry, change blindness, and bistable images (images that can be interpreted in two different ways).
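As a purely schematic illustration of this logic (all of the data below are invented, and no specific study is being reproduced), a neural correlate is simply a signal that co-varies with what the subject reports perceiving, even when the physical stimulus does not change, as in binocular rivalry:

# Fabricated rivalry data: the stimulus on the retina is constant, but the
# reported percept alternates; does this neuron group's firing rate track it?
reported = ["face", "house", "face", "face", "house", "house", "face", "house"]
rate_hz  = [41.0,   12.5,    38.2,   45.1,   10.9,    14.3,    39.7,   11.8]

mean = lambda xs: sum(xs) / len(xs)
face_rates  = [r for p, r in zip(reported, rate_hz) if p == "face"]
house_rates = [r for p, r in zip(reported, rate_hz) if p == "house"]

print("mean rate when 'face' is reported :", mean(face_rates))    # ~41 Hz
print("mean rate when 'house' is reported:", mean(house_rates))   # ~12 Hz
# Activity that follows the report rather than the stimulus makes this group
# a candidate neural correlate of the conscious percept.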

Full neurobiological models of consciousness must be distinguished from such simple neural correlates. Identifying correlations between the activity of certain groups of neurons and certain subjective or phenomenological properties of consciousness can certainly help to define what is plausible when one is developing a model. But identifying such correlations does not automatically produce a comprehensive explanation that relates these neuronal activities to the phenomenon of consciousness.

Producing such explanations requires more general models that try to explain the many facets of consciousness by combining data from all branches of the contemporary cognitive neurosciences. Most of these models began to be developed in the early 1990s, in the wake of the first international conferences devoted essentially to the study of consciousness.

Some of these models of how consciousness operates take its psychological properties as their starting point. Others were developed on the basis of the brain structures that seem to play an important role in consciousness. Still other models are centred on the activity of the neurons themselves, in particular the timing of their discharges of nerve impulses.

These models also develop some concepts that are specific to their level of analysis. But because all of these models are intended to be firmly anchored in the neural substrate of consciousness, it is not surprising to see the same concepts recurring in several of them. Of course, each model may use these concepts in subtly different ways or partially redefine them, but the explanatory power of several of these concepts is increasingly being confirmed.

So before we give a brief overview of some of the major neurobiological models of consciousness, here is an equally brief presentation of their main concepts.

For Daniel Dennett, consciousness is about “fame in the brain”. At any moment, thousands of mental objects are forming and dissolving everywhere in the brain as they engage in a Darwinian competition with one another. The “self” might be regarded as what emerges from this competition. Thus, at any given time, there are many possible conscious states, but only one of these multiple versions, or “drafts”, as Dennett calls them, will get its moment of glory and become “famous”, i.e., conscious, for the space of a second. According to this model, consciousness cannot be located precisely in time or in any particular part of the brain, which rules out classical, “Cartesian theatre” models of consciousness.
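One way to make this picture concrete is the deliberately crude sketch below (it is not Dennett's own formalism, and the "mental objects" and their fluctuating levels of support are pure inventions): at each moment, many candidates compete, and whichever momentarily has the most support becomes "famous", that is, conscious.

import random

random.seed(1)
mental_objects = ["smell of coffee", "itchy foot", "unfinished email", "tune stuck in head"]

for moment in range(3):
    # support for each candidate fluctuates from moment to moment, everywhere in the brain
    support = {obj: random.random() for obj in mental_objects}
    famous = max(support, key=support.get)   # the momentary winner of the competition
    print(f"moment {moment}: conscious content -> {famous}")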

Another recurring concept in neurobiological models of consciousness, and one with many variants, is the “global workspace”. Originally developed by psychologist Bernard Baars, this concept is based on the observation that the human brain comprises several specialized systems (for perception, attention, language, etc.), each of which carries out its task at a level that does not reach the threshold of consciousness.

According to global workspace theory, consciousness becomes possible when these various subsystems pool certain results of their operations in a single “global workspace”. Once expressed in this forum, these results become accessible to the brain as a whole, and therefore conscious. Much as in Dennett's model of competing drafts, various elements enter into competition to capture our attention, depending on our interests of the moment, but only one at a time can occupy the global workspace, which explains why we can be conscious of only one thing at a time.

(Figure after Dehaene et al., 2003)

This neuronal workspace, according to Baars, thus serves as a site for information exchange. Other subsystems can then take advantage of this available information too, and it is this availability that constitutes consciousness, while the information processed by the subsystems in isolation remains unconscious. This conception of consciousness as something akin to a form of momentary working memory also provides an account of the interaction between conscious and unconscious processes that is observed in various phenomena.
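A minimal sketch of this architecture may help (the module names, contents, and salience values below are invented, and the code is not Baars's or anyone else's actual model): each subsystem computes locally and unconsciously, one result wins the competition for the workspace, and broadcasting it is what makes it globally available.

# Invented local results: (content, salience) produced by each specialized subsystem
local_results = {
    "vision":   ("red shape looming", 0.9),
    "audition": ("hum of the refrigerator", 0.2),
    "language": ("word on the tip of the tongue", 0.6),
}

# Competition: only one content at a time can occupy the global workspace
winner = max(local_results, key=lambda name: local_results[name][1])
workspace = local_results[winner][0]

# Broadcast: every subsystem now has access to the winning content ("conscious"),
# while the losing results remain local and unconscious
inboxes = {name: [workspace] for name in local_results}

print("broadcast (conscious) content:", workspace)
print("left unconscious:", [c for n, (c, s) in local_results.items() if n != winner])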

Jean-Pierre Changeux and Stanislas Dehaene took the concept of the global workspace one step further by defining a neuroanatomical basis for it—a sort of “neural circuit” for the conscious workspace. Their model was based on the pyramidal neurons of the cerebral cortex, with their long axons that can connect areas of the cortex that are distant from one another.

Changeux and Dehaene attempted to describe the various states that can be observed in this connectionist model of consciousness and then tried to identify the mechanisms that let the mind pass from one of these states to another. In contrast to Baars’s model and to several other brain-imaging studies that simply distinguished one conscious state from multiple unconscious ones, Changeux and Dehaene’s model distinguishes three different possible states of activation:

(Figure after Dehaene et al., 2006)

- a first, subliminal processing state in which there is not enough bottom-up activation to trigger wide-scale activation of the network;

- a second, preconscious state, in which there is enough activation to reach consciousness, but access is temporarily blocked by a lack of top-down attention;

- a third, conscious state, reached when a preconscious stimulus receives enough top-down attention to cross the consciousness threshold and enter the global workspace.
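The logic of these three regimes can be summarized in a few lines of code. The sketch below is only a caricature of the idea, with an arbitrary threshold and a simple yes/no attention flag, not values taken from Changeux and Dehaene's simulations:

def processing_state(bottom_up_strength, top_down_attention, threshold=0.5):
    """Classify a stimulus as subliminal, preconscious, or conscious."""
    if bottom_up_strength < threshold:
        return "subliminal"      # too weak to ignite wide-scale activation
    if not top_down_attention:
        return "preconscious"    # strong enough, but attention is elsewhere
    return "conscious"           # strong and attended: enters the global workspace

print(processing_state(0.2, True))    # subliminal
print(processing_state(0.8, False))   # preconscious
print(processing_state(0.8, True))    # conscious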

Francis Crick and Christof Koch also examine the neural correlates of consciousness, but their emphasis is on the circuits of the visual system. For Crick and Koch, the key to conscious processes lies in the synchronized neuronal oscillations that occur in the cortex at frequencies around 40 Hertz (35 to 75 Hz).

Neurons in the various visual areas of the brain that respond to different characteristics of the same object (form, colour, movement, etc.) do in fact fire synchronously at a particular frequency. And if there is another object located just beside the first one in the person's field of vision, other neurons in those visual areas also fire synchronously, but not at the same moments as the neurons associated with the first object.

According to Crick and Koch, it is from this temporal synchronization of the oscillations of neuronal activity that conscious perceptual units arise. This possible answer to the famous binding problem (how the brain combines the outputs of its many parallel sensory-processing modules into unified percepts) is now widely accepted as a working hypothesis.
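The following sketch illustrates, in a very schematic way, what "binding by synchrony" means operationally. The spike times are invented and the clustering rule is a crude stand-in for real phase-locking analyses: feature neurons that fire at the same phase of a roughly 40 Hz (25 ms) cycle get grouped together as belonging to the same object.

period_ms = 25.0   # one cycle at about 40 Hz

# Invented spike times (in ms) for four feature-coding neurons
spikes = {
    "red":    [2.1, 27.0, 52.2],    # fires early in each cycle
    "round":  [2.4, 26.8, 51.9],
    "blue":   [14.9, 40.1, 64.8],   # fires later in each cycle
    "square": [15.2, 39.8, 65.1],
}

def mean_phase(times):
    """Average position within the 25 ms cycle, as a fraction of the cycle."""
    return sum((t % period_ms) / period_ms for t in times) / len(times)

phases = {name: mean_phase(ts) for name, ts in spikes.items()}

# Crude grouping: neurons whose mean phases differ by less than 0.1 cycle are "bound"
groups = []
for name, ph in sorted(phases.items(), key=lambda kv: kv[1]):
    if groups and abs(ph - groups[-1][-1][1]) < 0.1:
        groups[-1].append((name, ph))
    else:
        groups.append([(name, ph)])

for g in groups:
    print("bound features:", [name for name, _ in g])   # {red, round} and {blue, square}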

Rodolfo Llinás focuses on a global form of neuronal synchronization that might prove essential for determining which particular perception becomes conscious. According to Llinás, the thalamus triggers cortical oscillations that sweep the brain from front to back in 25 milliseconds—in other words, 40 times per second, the same 40-Hz frequency that has often been associated with the conscious perceptual unit. Thus, in addition to the cortical oscillations that bind the various aspects of a perceived object together, there would be this second type of synchrony between a given neuronal assembly and these non-specific thalamic oscillations. The assembly that is in phase with these non-specific oscillations would then be the one that becomes conscious.

Gerald Edelman accords less importance to the specific activity of certain neurons than to the general organization of the brain’s circuits. He starts from the premise that consciousness has not always existed and that it appeared at some time in the course of the evolution of species just as it appears at a given time in the development of individual human beings. Edelman then attempts to identify the new brain architectures that led to the emergence of consciousness.

According to Edelman, a selective mechanism that he calls “neural Darwinism” creates a system of neural maps composed of neuronal assemblies that are responsible for our various perceptual abilities. When the brain receives a new stimulus, several of these maps are activated and send signals to one another. Edelman uses the term “re-entrant loops” to designate this pattern of interconnections among the various neural maps. The reciprocal connections between the thalamus and the cortex, also known as thalamocortical loops, are central to this model of “re-entrant maps”, whose looping operation constitutes the starting point for consciousness, according to Edelman.

Thus, in Edelman's view, consciousness is associated not with a permanent anatomical structure, but rather with an ephemeral pattern of activity that arises at various locations in the cortex, wherever these re-entrant loops permit. That is why Edelman and his colleague Giulio Tononi prefer to describe these conscious processes in terms of a “dynamic core”.
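To give a rough feel for what re-entrant signalling between maps means, here is a toy iteration (the maps, connection strengths, and threshold are all invented and bear no relation to Edelman's actual simulations): maps repeatedly exchange activation over reciprocal links, and the subset that keeps reinforcing itself forms a transient, distributed "core".

maps = ["shape_map", "colour_map", "motor_map", "auditory_map"]

# Invented reciprocal ("re-entrant") connection strengths between maps
links = {
    ("shape_map", "colour_map"):   0.8,
    ("shape_map", "motor_map"):    0.5,
    ("colour_map", "motor_map"):   0.4,
    ("auditory_map", "motor_map"): 0.1,
}
def w(a, b):
    return links.get((a, b)) or links.get((b, a)) or 0.0

activity = {"shape_map": 1.0, "colour_map": 0.9, "motor_map": 0.3, "auditory_map": 0.2}

for _ in range(5):   # a few rounds of re-entrant exchange (activity clipped to 1.0)
    activity = {
        m: min(1.0, 0.5 * activity[m]
                    + 0.5 * sum(w(m, n) * activity[n] for n in maps if n != m))
        for m in maps
    }

core = sorted(m for m, a in activity.items() if a > 0.3)
print({m: round(a, 2) for m, a in activity.items()})
print("transient core:", core)   # the strongly interconnected, co-active maps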

A dynamic view of consciousness is also taken by Walter J. Freeman, who uses the mathematics of non-linear dynamics to interpret the neuronal oscillations associated with conscious phenomena. According to Freeman, the brain responds to changes in the world by destabilizing its primary sensory cortices. The resulting new, chaotic oscillation patterns give the impression of being noise, but actually hide an underlying order from which new meanings are continuously constructed.

Consciousness thus plays the role of an operator who modulates these cerebral dynamics. Residing both nowhere and everywhere, this operator is continuously re-forming conscious contents that are supplied by the various parts of the brain and that undergo the rapid, extensive changes that we associate with human thought.
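The general point that deterministic dynamics can masquerade as noise is easy to demonstrate with a standard textbook example. The logistic map below is only a generic stand-in (it is not Freeman's model of cortical dynamics), but it shows how a completely rule-governed system can produce an irregular, noise-like trajectory that is nonetheless perfectly reproducible:

r, x = 3.9, 0.2          # parameter in the chaotic regime, arbitrary starting value
trajectory = []
for _ in range(20):
    x = r * x * (1 - x)  # deterministic update rule: no randomness anywhere
    trajectory.append(round(x, 3))

print(trajectory)        # looks erratic, yet rerunning gives exactly the same sequence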

But conscious thought and the decisions that arise from it do not involve abstract reasoning alone. For Antonio Damasio, one cannot speak of consciousness without including the constant monitoring of an affective loop in which the brain and the body engage in a continuous dialogue (via the autonomic nervous system, the endocrine system, etc.).

Damasio champions the idea that our conscious thoughts depend substantially on our visceral perceptions. For him, consciousness develops through the brain's monitoring of internal somatic states (notably via the insula), and this monitoring has evolved because it lets us use these somatic states to mark, or evaluate, our external perceptions. Damasio thus uses the concept of somatic markers to describe how the emotions of our inner world interact with our perceptions of the outside world.
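A toy decision sketch can convey what a somatic marker is supposed to do (the options, values, and weighting below are entirely made up, and this is not Damasio's own formulation): each option is evaluated not on its abstract value alone, but with a body-derived affective tag that can tip the balance.

options = {
    #                    abstract_value, somatic_marker (-1 = bad gut feeling, +1 = good)
    "risky deal":        (0.9,           -0.7),
    "safe investment":   (0.5,            0.4),
    "do nothing":        (0.1,            0.0),
}

def evaluate(abstract_value, somatic_marker, gut_weight=0.6):
    """Combine reasoned value with the body's affective tag on the option."""
    return (1 - gut_weight) * abstract_value + gut_weight * somatic_marker

scores = {name: evaluate(*vals) for name, vals in options.items()}
print(max(scores, key=scores.get))   # "safe investment": the gut feeling vetoes the risky deal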

One last concept, which gives the body and the environment an even more extensive role in the genesis of conscious processes, is called enaction. It was developed by Francisco Varela and is part of the intellectual current known as embodied cognition. The central idea of enaction is that the cognitive faculties develop because the body interacts with a given environment in real time.

From the enactive perspective, perception has nothing to do with passive reception but is inextricably linked with the way that the body/brain system guides its own actions in the local situation of the moment. In the terminology of enaction, our senses enable us to “enact” meanings—in other words, to modify our environment while also being constantly shaped by it.

The essence of cognition and consciousness therefore cannot be found in representations of a world completely external to ourselves, nor solely in a particular neuronal organization, but instead depends on all of the organism’s sensorimotor structures and its capabilities for bodily action, coupled with a particular environment.

These numerous concepts derived from neurobiological models of consciousness let us take the findings of the neurosciences into account in developing our understanding of consciousness. As for the explanations of why consciousness exists, they are at least as numerous.
