Attention schema theory

The attention schema theory (AST) of consciousness (or subjective awareness) is an evolutionary and neuropsychological scientific theory of consciousness developed by the neuroscientist Michael Graziano at Princeton University.[1][2] It proposes that brains construct subjective awareness as a schematic model of the process of attention.[1][2] The theory is materialist and shares similarities with the illusionist ideas of philosophers such as Daniel Dennett, Patricia Churchland, and Keith Frankish.[3][4]

Graziano proposed that an attention schema is like the body schema: just as the brain constructs a simplified model of the body to help monitor and control the body's movements, it constructs a simplified model of attention to help monitor and control attention. Because the information in that model portrays an imperfect and simplified version of attention, the brain concludes that it has a non-physical essence of awareness. The construct of subjective awareness is the brain's efficient but imperfect model of its own attention. This approach is intended to explain why awareness and attention are similar in many respects yet sometimes dissociated, and how the brain can be aware of both internal and external events; it also yields testable predictions.[2]

One goal of developing the AST is to allow people to eventually construct artificial consciousness. AST seeks to explain how an information-processing machine could act the way people do, insisting it has consciousness, describing consciousness in the ways that we do, and attributing similar properties to others. AST is a theory of how a machine insists it is more than a machine, even if it is not.

Summary of the theory

The AST describes how an information-processing machine can make the claim that it has a conscious, subjective experience of something.[5] In the theory, the brain is an information processor that is captive to the information constructed within it. The machine has no means to discern the difference between its claim and reality. In this approach, the challenge of explaining consciousness is not, “How does the brain produce an ineffable internal experience,” but rather, “How does the brain construct a quirky self-description, and what is the useful cognitive role of that self-model?” A crucial aspect of the theory is model-based knowledge. The brain constructs rich internal models that lie beneath the level of higher cognition and language. Cognition has partial access to those internal models, and we report the content of those models as though we are reporting literal reality.

The AST can be summarized in three broad points.[1] First, the brain is an information-processing device. Second, it has a capacity to focus its processing resources more on some signals than on others. That focus may be on select, incoming sensory signals, or it may be on internal information such as specific, recalled memories. That ability to process select information in a focused manner is sometimes called attention. Third, the brain not only uses the process of attention, but it also builds a set of information, or a representation, descriptive of attention. That representation, or internal model, is the attention schema.

In the theory, the attention schema provides the requisite information that allows the machine to make claims about consciousness. When the machine claims to be conscious of thing X – when it claims that it has a subjective awareness, or a mental possession, of thing X – the machine is using higher cognition to access an attention schema, and reporting the information therein.
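The three points above, together with this claim-making step, can be made concrete in a deliberately minimal sketch (hypothetical code; the class, the weighting factors, and the report wording are invented for illustration and are not drawn from the AST literature). The machine processes information (point one), focuses resources on one signal (point two), keeps only a coarse record of that focus in a schema (point three), and reads its "awareness" reports out of the schema rather than out of the mechanism itself.

```python
# Hypothetical toy sketch of the three points of AST plus the
# claim-making step; all names and numbers here are illustrative.

class ToyAgent:
    def __init__(self):
        # Point 3: the attention schema, a coarse model recording *that*
        # and *where* attention is deployed, not how it works.
        self.attention_schema = {"attending": False, "target": None}

    def attend(self, signals, target):
        # Point 2: focus processing resources on one signal.
        boosted = {name: strength * (2.0 if name == target else 0.5)
                   for name, strength in signals.items()}
        # Update the schema; the boost factors (the "mechanism") are
        # deliberately left out of it.
        self.attention_schema = {"attending": True, "target": target}
        return boosted

    def report(self):
        # Claims about awareness are read out of the schema, so they
        # inherit its simplifications.
        schema = self.attention_schema
        if schema["attending"]:
            return "I am aware of " + schema["target"]
        return "I am aware of nothing"

agent = ToyAgent()
agent.attend({"apple": 1.0, "background": 1.0}, target="apple")
print(agent.report())  # -> I am aware of apple
```

Note that `report` has no access to the boost factors inside `attend`: like the attention schema in the theory, the model it consults is silent on mechanism, so the agent's self-report is accurate but detail-poor.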

For example, suppose a person looks at an apple. When the person reports, “I have a subjective experience of that shiny red apple,” three items are linked together in that claim: the self, the apple, and a subjective experience. The claim about the presence of a self depends on cognitive access to a self model. Without a self model, without the requisite information, the system would be unable to make claims referencing the self. The claim about the presence of, and properties of, an apple depends on cognitive access to a model of the apple, presumably constructed in the visual system. Again, without the requisite information, the system would obviously be unable to make any claims about the apple or its visual properties. In the theory, the claim about the presence of subjective experience depends on cognitive access to an internal model of attention. That internal model does not provide a scientifically precise description of attention, complete with the details of neurons, lateral inhibitory synapses, and competitive signals. The model is silent on the physical mechanisms of attention. Instead, like all internal models in the brain, it is simplified and schematic for the sake of efficiency.

Accessing the information within these three linked internal models, the cognitive machinery claims that there is a self; that there is an apple; that the self has a mental possession of the apple; that the mental possession, in and of itself, is invisible and has no physically describable properties, but has a general location somewhere inside the body and a specific anchor to the apple; and that this mental essence empowers the self to understand, react to, and remember the apple. The machine, relying on that incomplete and inaccurate model of attention, claims to have a metaphysical consciousness of the apple.

In the AST, subjective experience, or consciousness, or the ineffable mental possession of something, is a simplified construct that is a fairly good, if detail-poor, description of the act of attending to something. The internal model of attention is not constructed at a higher cognitive level. It is not a cognitive self theory. It is not learned. Instead, it is constructed beneath the level of cognition and is automatic, much like the internal model of the apple and the internal model of the self. You cannot help constructing those models in those particular ways. In that sense, one could call the attention schema a perception-like model of attention, to distinguish it from a higher-order cognitive model such as a belief or an intellectually reasoned theory.

The AST explains how a machine with an attention schema contains the requisite information to claim to have a consciousness of something, whether consciousness of an apple, consciousness of a thought, or consciousness of the self; how the machine talks about consciousness in the same ways that we do; and how the machine, on accessing its internal information, does not find explanatory meta information, such as that it is a machine computing a conclusion, or that it is accessing an internal model, but instead learns only the narrow contents of the internal models. In AST, we are machines of that sort.

AST is consistent with the perspective called illusionism.[4] The term “illusion,” however, may have connotations that are not quite apt for this theory. Three issues with that label arise. First, many people equate an illusion with something dismissible or harmful. If we can see through the illusion, we are better off. Yet in the AST, the attention schema is a well-functioning internal model. It is not normally dysregulated or in error. Second, most people tend to equate an illusion with a mirage. A mirage falsely indicates the presence of something that actually does not exist. If consciousness is an illusion, then by implication nothing real is present behind the illusion. There is no “there” there. But in the AST, that is not so. Consciousness is a good, if detail-poor, account of something real: attention. We do have attention, a physical and mechanistic process that emerges from the interactions of neurons. When we claim to be subjectively conscious of something, we are providing a slightly schematized version of the literal truth. There is, indeed, a “there” there. Third, an illusion is experienced by something. Those who call consciousness an illusion must be extremely careful to define what they mean by “experience” so as to avoid circularity. But the AST is not a theory of how the brain has experiences. It is a theory of how a machine makes claims – how it claims to have experiences – and being stuck in a logic loop, or captive to its own internal information, it cannot escape making those claims.

In the theory, an attention schema did not evolve so that we could walk around claiming to have consciousness. Instead it evolved because it has fundamental adaptive uses in perception, cognition, and social interaction.

Two main types of functions have been proposed for the attention schema. One is to help control attention.[2] A fundamental principle of control theory is that a good controller should incorporate an internal model. Thus the brain's controller of attention should incorporate an internal model of attention – a set of information that is continuously updated and that reflects the dynamics and the changing state of attention. Since attention is one of the most pervasive and important processes in the brain, the proposed attention schema, helping to control attention, would be of fundamental importance to the system. A growing set of behavioral evidence supports this hypothesis. When subjective awareness of a visual stimulus is absent, people can still direct attention to that stimulus, but that attention loses some aspects of control. It is less stable over time and adapts less well when perturbations are introduced during training.[2][6] These findings support the proposal that awareness acts like the internal model for the control of attention.

A second proposed function of an attention schema is for social cognition – using the attention schema to model the attentional states of others as well as of ourselves.[1] A main advantage of this public, social use of an attention schema lies in behavioral prediction. As social animals, we survive in the world partly by predicting the behavior of other people. We also plan our futures partly by predicting our own actions. But attention is one of the dominant influences on behavior. What you are attending to, you are likely to behave toward. What you are not attending to, you are much less likely to behave toward. A good model of attention, of its dynamics and consequences, would be useful for predicting behavior.

Analogy to the body schema

AST was developed in analogy to the psychological and neuroscientific work on the body schema, an area of research to which Graziano contributed heavily in his previous publications.[1] In this section, the central ideas of AST are explained by use of the analogy to the body schema.

Suppose a person, Kevin, has reached out and grasped an apple. You ask Kevin what he is holding. He can tell you that the object is an apple, and he can describe the properties of the apple. The reason is that Kevin's brain has constructed a schematic description of the apple, also sometimes called an internal model. The internal model is a set of information, such as about size, color, shape, and location, that is constantly updated as new signals are processed. The model allows Kevin's brain to react to the apple and even predict how the apple may behave in different circumstances. Kevin's brain has constructed an apple schema. His cognitive and linguistic processors have some access to that internal model of an apple, and thus Kevin can verbally answer questions about the apple.

Now you ask Kevin, “How are you holding the apple? What is your physical relationship to the apple?” Once again Kevin can answer. The reason is that, in addition to an internal model of the apple, Kevin's brain also constructs an internal model of his body, including of his arm and hand. That internal model, also sometimes called the body schema, is a set of information, constantly updated as new signals are processed, that specifies the size and shape of Kevin's limbs, how they are hinged, how they tend to move, the state they are in at each moment, and what state they are likely to be in over the next few moments. The primary purpose of this body schema is to allow Kevin's brain to control movement. Because he knows the state that his arm is in, he can better guide its movement. A side-effect of his body schema is that he can explicitly talk about his body. His cognitive and linguistic processors have some access to the body schema, and therefore Kevin can answer, “I am grasping the apple with my hand, while my arm is outstretched.”

The body schema is limited. If you ask Kevin, “How many muscles are in your arm? Where do they attach to the bones?” he cannot answer based on his body schema. He may have intellectual knowledge gleaned from a book, but he has no immediate insight into the muscles of his particular arm. The body schema lacks that level of mechanistic detail.

AST takes this analysis one step further. Kevin is doing more than physically grasping the apple. He is also paying attention to the apple.

To understand AST, it is necessary to specify the correct definition of attention. The word is used colloquially in many ways, leading to some possible confusion. Here it is used to mean that Kevin's brain has focused some resources on the processing of the apple. The internal model of the apple has been boosted in signal strength, and as a result Kevin's brain processes the apple deeply, is more likely to store information about it in memory, and is more likely to trigger a behavioral response to the apple. In this definition of the word, attention is a mechanistic, data-handling process. It involves a relative deployment of processing resources to a specific signal.
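The sense of attention used here – a relative deployment of processing resources toward one signal – can be sketched in a few lines (hypothetical code; the gain factor and the normalization scheme are arbitrary illustrative choices, not a model drawn from the literature):

```python
# Illustrative sketch of attention as relative resource deployment.
# The gain and normalization are invented for illustration only.

def deploy_attention(signals, focus, gain=2.0):
    """Boost the focused signal, then renormalize so that resources
    committed to one signal are withdrawn from its competitors."""
    raw = {name: strength * (gain if name == focus else 1.0)
           for name, strength in signals.items()}
    total = sum(raw.values())
    return {name: value / total for name, value in raw.items()}

weights = deploy_attention({"apple": 1.0, "sound": 1.0, "memory": 1.0},
                           focus="apple")
# The attended signal now claims half of the processing resources,
# making deep processing, memory storage, and a behavioral response
# more likely for the apple than for its competitors.
```

The renormalization step reflects the relative character of attention noted above: boosting one signal necessarily comes at the expense of the others.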

Now you ask Kevin, “What is your mental relationship to the apple?” Kevin can answer this question too. The reason, according to AST, is that Kevin's brain constructs not only an internal model of the apple, and an internal model of his body, but also an internal model of his attention. That attention schema is a set of information that describes what attention is, what its most basic properties are, what its dynamics and consequences are, and what state it is in at any particular moment. Kevin's cognitive and linguistic machinery has some access to that internal model, and therefore Kevin can describe his mental relationship to the apple. However, just as in the case of the body schema, the attention schema lacks information about the mechanistic details. It does not contain information about the neurons, synapses, or electrochemical signals that make attention possible. As a result, Kevin reports having a property that lacks any clear physical attributes. He says, “I have a mental grasp of the apple. That mental possession, in and of itself, has no physical properties. It just is. It’s vaguely located inside me. It is what allows me to know about that apple. It allows me to remember the apple. It allows me to choose to react to the apple. It’s my mental self taking hold of the apple – my experience of the apple.” Here Kevin is describing a subjective, experiential consciousness of the apple. Consciousness, as he describes it, seems to transcend physical mechanism, only because it is an incomplete description of a physical mechanism. Kevin's account of his consciousness is a partial, schematic description of his state of attention.

The example given here relates to a consciousness of an apple. However, the same logic could apply to anything – consciousness of a sound, a memory, or oneself as a whole.

In AST, because we claim to be conscious, something in the brain must have computed the requisite information about consciousness to enable the system to output that claim. AST proposes an adaptive function for that information: it serves as an internal model of one of the brain's most important processes, attention.

Control of attention

The central hypothesis in AST is that the brain constructs an internal model of attention, the attention schema. Its primary adaptive function is to enable better, more flexible control of attention. In control theory, a control system works better and more flexibly if it incorporates an internal model of the item it controls. An automatic airplane-piloting system works better if it incorporates a model of the dynamics of the airplane. A climate controller for a building works better if it incorporates a rich, predictive model of the building's airflow and temperature dynamics. Likewise, the brain's controller of attention works better by constructing a rich internal model of what attention is, how it changes over time, what its consequences are, and what state it is in at any moment.
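The control-theory point can be demonstrated with a toy simulation (hypothetical code; the one-step command delay, the gain, and the error measure are arbitrary illustrative choices). A controller that keeps a forward model of its own pending command tracks a target more stably than an otherwise identical controller that reacts to raw observations alone.

```python
# Toy demonstration that a controller benefits from an internal model.
# The plant responds to commands with a one-step delay; all numbers
# here are illustrative, not drawn from the AST experiments.

def run(steps=30, target=10.0, use_model=True, gain=0.8):
    x = 0.0          # plant state (the thing being controlled)
    pending = 0.0    # command still "in the pipeline" (one-step delay)
    errors = []
    for _ in range(steps):
        if use_model:
            # Internal model: predict the state after the pending
            # command takes effect, and act on the prediction.
            predicted = x + pending
        else:
            # Model-free: react to the raw observation only.
            predicted = x
        u = gain * (target - predicted)
        x += pending              # delayed command now takes effect
        pending = u
        errors.append(abs(target - x))
    return sum(errors[-10:])      # residual error late in the run

# The modeled controller settles onto the target; the model-free one
# keeps overshooting because it ignores its own command in flight.
print(run(use_model=True) < run(use_model=False))  # -> True
```

The model-free controller oscillates because each command is issued against a stale observation; the internal model compensates for the controller's own lag, which is the kind of benefit the theory attributes to an attention schema.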

Most of the experimental research on AST therefore focuses on the control of attention. Specifically, when people are relatively less aware of a visual stimulus, yet still directing attention to it, does attention behave as though its controller has a weakened or absent internal model? Initial experiments suggest that this may be true, though many more experiments would be needed to make the case.[2][6]

Social cognition

According to one proposal in AST, not only does the brain construct an attention schema to model its own state of attention, but it also uses the same mechanism to model other people's states of attention. In effect, just as we attribute awareness to ourselves, we also attribute it to others. In this proposal, one of the main adaptive functions of an attention schema is for use in social cognition. Some of the research into AST therefore focuses on the overlap between one's own claims of awareness and one's attributions of awareness to others. Initial research using brain scanning in humans suggests that both processes recruit cortical networks that converge on the temporoparietal junction.[7][8]

References

  1. 1.0 1.1 1.2 1.3 1.4 Lua error in package.lua at line 80: module 'strict' not found.
  2. 2.0 2.1 2.2 2.3 2.4 2.5 Lua error in package.lua at line 80: module 'strict' not found.
  3. Lua error in package.lua at line 80: module 'strict' not found.
  4. 4.0 4.1 Lua error in package.lua at line 80: module 'strict' not found.
  5. Lua error in package.lua at line 80: module 'strict' not found.
  6. 6.0 6.1 Lua error in package.lua at line 80: module 'strict' not found.
  7. Lua error in package.lua at line 80: module 'strict' not found.
  8. Lua error in package.lua at line 80: module 'strict' not found.
