10.03: Visual Illusions
Psychologists have analyzed perceptual systems for more than a century. Vision and hearing have received the most attention by far, but other perceptual systems, like those for smell, taste, movement, balance, touch, and pain, have also been studied extensively. Perception scientists use a variety of approaches to study these systems—they design experiments, study neurological patients with damaged brain regions, and create perceptual illusions that toy with the brain’s efforts to interpret the sensory world.
The creation and testing of perceptual illusions has been a fruitful approach to the study of perception—particularly visual perception—since the early days of psychology. People often think that visual illusions are simply amusing tricks that provide us with entertainment. Many illusions are fun to experience, but perception scientists create illusions based on their understanding of the perceptual system. Once they have created a successful illusion, scientists can explore what people experience, what parts of the brain are involved in interpreting the illusion, and what variables increase or diminish its strength. Scientists are not alone in this interest. Visual artists have discovered and used many illusion-producing principles for centuries, allowing them to create the experience of depth, movement, light and shadow, and relative size on two-dimensional canvases.
Depth Illusions
When we look at the world, we are not very good at detecting the absolute qualities of things—their exact size or color or shape. What we are very good at is judging objects in the context of other objects and conditions. Let’s take a look at a few illusions to see how they are based on insights about our perception. Look at Figure 2 below. Which of the two horizontal yellow lines looks longer, the top one or the bottom one?
Most people experience the top line as longer. In fact, the two lines are exactly the same length. This experience is called the Ponzo illusion. Even though you know that the lines are the same length, it is difficult to see them as identical. Our perceptual system takes the context into account, here using the converging “railroad tracks” to produce an experience of depth. Then, using some impressive mental geometry, our brain adjusts the experienced length of the top line to be consistent with the size it would have if it were that far away: if two lines are the same length on my retina, but different distances from me, the more distant line must actually be longer. You experience a world that “makes sense” rather than a world that reflects the actual objects in front of you.
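The mental geometry here can be written out. As a rough illustration (our own formulation, not from the original text, and assuming small angles), an object of physical size $S$ at distance $D$ subtends a visual angle of roughly

$$\theta \approx \frac{S}{D}, \qquad \text{so} \qquad S \approx \theta D.$$

If the two lines subtend the same visual angle $\theta$, but the converging tracks lead the brain to assign the top line a greater distance $D$, then the inferred physical size $S = \theta D$ comes out larger for the top line, which is just what most viewers report.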
There are many depth illusions. It is difficult to see the drawing on the left below as a two-dimensional figure. The converging lines and smaller square at the center seem to coax our perceptual systems into seeing depth, even though we know that the drawing is flat. This urge to see depth is probably so strong because our ability to use two-dimensional information to infer a three-dimensional world is essential for allowing us to operate in the world. The picture on the right below is of a driving tunnel, something you would need to process at high speed if you were in a car going through it. Your quick and detailed use of converging lines and other cues allows you to make sense of this 3-D world.
Light and Size Illusions
Depth is not the only quality in the world that shows how we adjust what we experience to fit the surrounding world. Look at the two gray squares in the figure below. Which one looks darker?
Most people experience the square on the right as the darker of the two gray squares. You’ve probably already guessed that the squares are actually identical in shade, but the surrounding area—black on the left and white on the right—influences how our perceptual systems interpret the gray area. In this case, the greater contrast between the white surrounding area and the gray square on the right results in the experience of a darker square.
Here is another example below. The two triangular figures are identical in shade, but the triangle on the left looks lighter against the dark background of the cross when compared to the triangle in the white area on the right.
Our visual systems work with more than simple contrast. They also use our knowledge of how the world works to adjust our perceptual experience. Look at the checkerboard below. There are two squares with letters in them, one marked “A” and the other “B”. Which one of those two squares is darker?
This seems like an easy comparison, but the truth is that squares A and B are identical in shade. Our perceptual system adjusts our experience by taking some visual information into account. First, if we take the checkerboard pattern into account, “A” is one of the “dark squares” and “B” is one of the “light squares.” Perhaps even more impressive, our visual system notices that “B” is in a shadow. Objects in shadow appear darker, so our experience is adjusted to take account of the effect of the shadow, resulting in square B being perceived as lighter than square A, which sits in the bright light. And if you really don’t believe your eyes, take a look at a video showing the same color tiles here.
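One compact way to summarize this adjustment (again, our own illustrative formulation, not from the original text) is that perceived lightness behaves like an estimate of surface reflectance $\hat{R}$, obtained by discounting the estimated illumination $\hat{I}$ from the luminance $L$ that actually reaches the eye:

$$\hat{R} \approx \frac{L}{\hat{I}}.$$

Squares A and B send the same luminance $L$ to the eye, but the visual system assigns B a lower estimated illumination $\hat{I}$ because B sits in a shadow, so B's inferred reflectance, and hence its perceived lightness, is higher.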
Link to Learning
If you want to explore more visual illusions, here is a great site with dozens of interesting illusions created by Michael Bach.
Ebbinghaus in the Real World
Imagine that you are in a golf competition in which you are putting against someone with the same experience and skill that you have. There is one problem: Your opponent gets to putt into a hole that is 10% larger than the hole you have to use. You’d probably think that the competition was unfairly biased against you.
Now imagine a somewhat different situation. You and your opponent are about equal in ability and the holes you are using are the same size, but the hole that your opponent is using looks 10% larger than the one you are using. Would your opponent have an unfair advantage now?
If you read the earlier section on the Ebbinghaus effect, you have an idea how psychologists could exploit your perceptual system (and your opponent’s) to test this very question.
Psychologist Jessica Witt and her colleagues Sally Linkenauger and Dennis Proffitt recruited research participants with no unusual golf experience to participate in a putting task. They competed against themselves rather than against another person.
The experimenters made the task challenging by using a hole with a 2-inch diameter, which is about half the diameter of the hole you will find on a golf course. An overhead projector mounted on the ceiling of their lab allowed them to project Ebbinghaus circles around the putting hole. Half of the participants saw the putting hole surrounded by circles that were smaller than the hole in the center; the other half saw surrounding black circles that were larger.
Participants putted from about 11½ feet away. They took 10 putts in one condition, and then 10 in the other condition. Half of the participants putted with the large surrounding circles first and half saw the small surrounding circles first. This procedure is called counterbalancing. If there is any advantage (e.g., getting better over time with practice) or disadvantage (e.g., getting tired of putting), counterbalancing ensures that both conditions are equally exposed to the positive or negative effects of which task goes first or second. Failure to take account of this type of problem means that you may have a confounding variable—practice or fatigue—that influences performance. A confounding variable is something that could influence performance but is not the variable under study. We try to control (that is, neutralize) potentially confounding variables so they cannot be the cause of performance differences. So, for instance, if everyone did the large surrounding circles condition first and then the small surrounding circles condition, differences in performance could be due to the order of conditions (leading to practice or fatigue effects) rather than the size of the surrounding circles. Counterbalancing does not get rid of the effects of practice or fatigue for any particular person, but—across all the participants—practice or fatigue should affect both conditions (both types of Ebbinghaus circles) equally.
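For readers who like to see procedures concretely, here is a minimal sketch of counterbalanced assignment in Python. The condition names and participant count are our own placeholders, not details taken from the study.

```python
import random

# Minimal sketch of counterbalanced assignment (illustrative only; the
# condition names and participant count are placeholders, not details
# taken from the study).
CONDITIONS = ("large_surrounding_circles", "small_surrounding_circles")

def counterbalance(participant_ids):
    """Give half of the participants each of the two condition orders."""
    ids = list(participant_ids)
    random.shuffle(ids)                 # randomize who gets which order
    half = len(ids) // 2
    orders = {}
    for pid in ids[:half]:
        orders[pid] = CONDITIONS        # large circles first, then small
    for pid in ids[half:]:
        orders[pid] = CONDITIONS[::-1]  # small circles first, then large
    return orders

# Example with 20 hypothetical participants: any practice or fatigue
# effect now falls on each condition order equally often across the sample.
order_assignment = counterbalance(range(1, 21))
```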
The experimenters wanted to know two things. First, did they actually produce the Ebbinghaus illusion? Remember: there is no guarantee that people see or think the way your theory says they should. So just before starting to putt in a particular condition, each participant drew a circle using a computerized drawing tool, attempting to match the exact size of the putting hole. This is better than simply asking, “Do you see the illusion?” The drawing task attempts to measure directly what participants perceive.
Second, the experimenters wanted to see if the perceived size of the hole influenced putting accuracy. They recorded the success or failure of each putt. Each participant could get a score of 0 to 10 successful putts in each condition.
Methods Summary
Recap the steps you’ve read about thus far:
1. The participant practices putting to get used to the task.
2. The participant completes the first condition (large surrounding circles for half of the participants and small surrounding circles for the other half).
• The participant draws a circle corresponding to his or her estimation of the actual size of the putting hole. This allows the experimenters to determine if the Ebbinghaus effect actually occurred.
• The participant putts 10 times in this condition.
3. The participant completes the second condition (whichever condition they have not yet done).
• The participant draws a circle corresponding to his or her estimation of the actual size of the putting hole.
• The participant putts 10 times in this condition.
This is not the only experiment that has used a sports context to study the effects of illusions. Other experiments have shown that people hit softballs better when the balls are perceived as larger. People score higher in darts when the board appears larger. Athletes kick field goals and return tennis balls more successfully when the goal posts or tennis balls appear larger. In all of these studies, the balls or boards or goal posts were not actually larger, but they were perceived as larger because the experimenters created illusions. Skilled athletes often report that targets appear larger or time slows down when they are “in the zone”, as if practice and skill create their own perceptual illusions that increase confidence and make difficult challenges feel easier.
Link to Learning
Watch this interview with psychologist Jessica Witt to see her talk about how her research utilizing the Ebbinghaus illusion impacts a golfer’s perception and performance. You can also read more about similar variations of her research here.
A Final Note: Science Doesn’t Always Produce Simple Results
Professor Witt’s study had interesting results; however, they weren’t quite as simple as we have made them seem. The researchers actually had two different hole sizes: 2 inches and 4 inches. The Ebbinghaus circles were adjusted to be relatively larger or smaller than the putting hole.
The Ebbinghaus illusion worked for the smaller (2-inch) putting holes, but not for the larger (4-inch) putting holes. In other words, when people drew the circles as they perceived them (the “manipulation check” dependent variable), they drew different-sized circles for the 2-inch holes (the Ebbinghaus illusion), but the same-size circles for the 4-inch holes (no Ebbinghaus illusion).
For the larger (4-inch) putting holes, putting accuracy was the same in the two conditions. This didn’t bother the experimenters, because—as we have already noted—the participants did not experience the Ebbinghaus illusion with the larger holes. If the holes were perceived as the same size, then self-confidence should not have been affected and, in turn, putting should not have been better in one condition than the other.
In the research paper, the experimenters suggest a few technical reasons that the larger hole might not have produced the Ebbinghaus illusion, but they admit that they have no definitive explanation. That’s okay. Science often yields messy results—and these can be the basis for new experiments and sometimes for really interesting discoveries. The world is not as simple as our theories try to make it seem. Happily, in science, as in many aspects of life, you learn more from your failures than your successes, so good scientists don’t try to hide from results they don’t expect.
10.04: Top-Down vs. Bottom-Up (Conceptually-driven vs. Data-driven Processing)
While our sensory receptors are constantly collecting information from the environment, it is ultimately how we interpret that information that affects how we interact with the world. Perception refers to the way sensory information is organized, interpreted, and consciously experienced. Perception involves both bottom-up and top-down processing. Bottom-up processing refers to the fact that perceptions are built from sensory input. On the other hand, how we interpret those sensations is influenced by our available knowledge, our experiences, and our thoughts. This is called top-down processing.
Look at the shape in Figure 16 below. Seen alone, your brain engages in bottom-up processing. There are two thick vertical lines and three thin horizontal lines. There is no context to give it a specific meaning, so there is no top-down processing involved.
Now, look at the same shape in two different contexts. Surrounded by sequential letters, your brain expects the shape to be a letter and to complete the sequence. In that context, you perceive the lines to form the shape of the letter “B.”
Surrounded by numbers, the same shape now looks like the number “13.”
When given a context, your perception is driven by your cognitive expectations. Now you are processing the shape in a top-down fashion.
One way to think of this concept is that sensation is a physical process, whereas perception is psychological. For example, upon walking into a kitchen and smelling the scent of baking cinnamon rolls, the sensation is the scent receptors detecting the odor of cinnamon, but the perception may be “Mmm, this smells like the bread Grandma used to bake when the family gathered for holidays.”
Although our perceptions are built from sensations, not all sensations result in perception. In fact, we often don’t perceive stimuli that remain relatively constant over prolonged periods of time. This is known as sensory adaptation. Imagine entering a classroom with an old analog clock. Upon first entering the room, you can hear the ticking of the clock; as you begin to engage in conversation with classmates or listen to your professor greet the class, you are no longer aware of the ticking. The clock is still ticking, and that information is still affecting sensory receptors of the auditory system. The fact that you no longer perceive the sound demonstrates sensory adaptation and shows that while closely associated, sensation and perception are different.
10.05: Multisensory Perception
Although it has been traditional to study the various senses independently, most of the time, perception operates in the context of information supplied by multiple sensory modalities at the same time. For example, imagine if you witnessed a car collision. You could describe the stimulus generated by this event by considering each of the senses independently; that is, as a set of unimodal stimuli. Your eyes would be stimulated with patterns of light energy bouncing off the cars involved. Your ears would be stimulated with patterns of acoustic energy emanating from the collision. Your nose might even be stimulated by the smell of burning rubber or gasoline. However, all of this information would be relevant to the same thing: your perception of the car collision. Indeed, unless someone were to explicitly ask you to describe your perception in unimodal terms, you would most likely experience the event as a unified bundle of sensations from multiple senses. In other words, your perception would be multimodal. The question is whether the various sources of information involved in this multimodal stimulus are processed separately by the perceptual system or not.
For the last few decades, perceptual research has pointed to the importance of multimodal perception: the effects on the perception of events and objects in the world that are observed when there is information from more than one sensory modality. Most of this research indicates that, at some point in perceptual processing, information from the various sensory modalities is integrated. In other words, the information is combined and treated as a unitary representation of the world.
Behavioral Effects of Multimodal Perception
Although neuroscientists tend to study very simple interactions between neurons, the fact that they’ve found so many crossmodal areas of the cortex seems to hint that the way we experience the world is fundamentally multimodal. Our intuitions about perception are consistent with this; it does not seem as though our perception of events is constrained to the perception of each sensory modality independently. Rather, we perceive a unified world, regardless of the sensory modality through which we perceive it.
It will probably require many more years of research before neuroscientists uncover all the details of the neural machinery involved in this unified experience. In the meantime, experimental psychologists have contributed to our understanding of multimodal perception through investigations of the behavioral effects associated with it. These effects fall into two broad classes. The first class—multimodal phenomena—concerns the binding of inputs from multiple sensory modalities and the effects of this binding on perception. The second class—crossmodal phenomena—concerns the influence of one sensory modality on the perception of another (Spence, Senkowski, & Roder, 2009).
Multimodal Phenomena
Audiovisual Speech
Multimodal phenomena concern stimuli that generate simultaneous (or nearly simultaneous) information in more than one sensory modality. As discussed above, speech is a classic example of this kind of stimulus. When an individual speaks, she generates sound waves that carry meaningful information. If the perceiver is also looking at the speaker, then that perceiver also has access to visual patterns that carry meaningful information. Of course, as anyone who has ever tried to lipread knows, there are limits on how informative visual speech information is.
Even so, visual speech information alone can support surprisingly robust speech perception in some perceivers. Most people assume that deaf individuals are much better at lipreading than individuals with normal hearing. It may come as a surprise to learn, however, that some individuals with normal hearing are also remarkably good at lipreading (sometimes called “speechreading”). In fact, there is a wide range of speechreading ability in both normal hearing and deaf populations (Andersson, Lyxell, Rönnberg, & Spens, 2001). However, the reasons for this wide range of performance are not well understood (Auer & Bernstein, 2007; Bernstein, 2006; Bernstein, Auer, & Tucker, 2001; Mohammed et al., 2005).
How does visual information about speech interact with auditory information about speech? One of the earliest investigations of this question examined the accuracy of recognizing spoken words presented in a noisy context, much like in the example above about talking at a crowded party. To study this phenomenon experimentally, some irrelevant noise (“white noise”—which sounds like a radio tuned between stations) was presented to participants. Embedded in the white noise were spoken words, and the participants’ task was to identify the words. There were two conditions: one in which only the auditory component of the words was presented (the “auditory-alone” condition), and one in which both the auditory and visual components were presented (the “audiovisual” condition). The noise levels were also varied, so that on some trials the noise was very loud relative to the loudness of the words, and on other trials the noise was very soft relative to the words. Sumby and Pollack (1954) found that the accuracy of identifying the spoken words was much higher in the audiovisual condition than in the auditory-alone condition. In addition, the pattern of results was consistent with the Principle of Inverse Effectiveness: the advantage gained by audiovisual presentation was highest when auditory-alone performance was lowest (i.e., when the noise was loudest). At these noise levels, the audiovisual advantage was considerable: it was estimated that allowing the participant to see the speaker was equivalent to turning the volume of the noise down by over half. Clearly, the audiovisual advantage can have dramatic effects on behavior.
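The inverse-effectiveness pattern is easy to express concretely. The sketch below uses invented accuracy values (they are not Sumby and Pollack's data) purely to show how an audiovisual gain score is computed and why it peaks at the loudest noise level.

```python
# Illustrative sketch of the Principle of Inverse Effectiveness. The
# accuracy values below are invented for the example; the point is how
# the audiovisual gain grows as auditory-alone performance drops.
auditory_alone = {"soft noise": 0.90, "medium noise": 0.55, "loud noise": 0.15}
audiovisual = {"soft noise": 0.95, "medium noise": 0.75, "loud noise": 0.55}

for level in ("soft noise", "medium noise", "loud noise"):
    gain = audiovisual[level] - auditory_alone[level]
    print(f"{level}: auditory-alone {auditory_alone[level]:.2f}, "
          f"audiovisual {audiovisual[level]:.2f}, gain {gain:+.2f}")
# The largest gain (+0.40) appears at the loudest noise level, where
# auditory-alone accuracy is lowest -- the inverse-effectiveness pattern.
```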
Another phenomenon using audiovisual speech is a very famous illusion called the “McGurk effect” (named after one of its discoverers). In the classic formulation of the illusion, a movie is recorded of a speaker saying the syllables “gaga.” Another movie is made of the same speaker saying the syllables “baba.” Then, the auditory portion of the “baba” movie is dubbed onto the visual portion of the “gaga” movie. This combined stimulus is presented to participants, who are asked to report what the speaker in the movie said. McGurk and MacDonald (1976) reported that 98 percent of their participants reported hearing the syllable “dada”—which was in neither the visual nor the auditory components of the stimulus. These results indicate that when visual and auditory information about speech is integrated, it can have profound effects on perception.
Interactive Element
Watch this video to see an example of the McGurk effect.
10.06: Subliminal Perception
The idea of subliminal perception—that stimuli presented below the threshold for awareness can influence thoughts, feelings, or actions—is a fascinating and kind of creepy one. Can messages you are unaware of, embedded in movies or ads or the music playing in the grocery store, really influence what you buy? Many such claims of the power of subliminal perception have been made. One of the most famous came from a market researcher who claimed that the message “Eat Popcorn,” briefly flashed throughout a movie, increased popcorn sales by more than 50%, although he later admitted that the study was made up (Merikle, 2000).
Psychologists have worked hard to investigate whether this is a valid phenomenon. Studying subliminal perception is more difficult than it might seem, because of the difficulty of establishing what the threshold for consciousness is, or even of determining what type of threshold is important; for example, Cheesman and Merikle (1984, 1986) make an important distinction between objective and subjective thresholds. The bottom line is that there is some evidence that individuals can be influenced by stimuli they are not aware of, but how complex the stimuli can be, and the extent to which unconscious material can affect behavior, are not settled.
10.07: Synesthesia
Synesthesia is a condition in which a sensory stimulus presented in one modality evokes a sensation in a different modality. In the 19th century, Francis Galton observed that a certain proportion of the general population who were otherwise normal had a hereditary condition he dubbed “synesthesia”: a sensory stimulus presented through one modality spontaneously evoked a sensation experienced in an unrelated modality. For example, an individual may experience a specific color for every given note (“C sharp is red”), or every printed number or letter may be tinged with a specific hue (e.g., 5 is indigo and 7 is green). The specificity of the colors remains stable over time within any given individual, but the same note or letter does not necessarily evoke the same color in different people. Although long regarded as a curiosity, synesthesia has seen a tremendous resurgence of interest in the last decade. It used to be regarded as a rare condition, but recent estimates suggest that it affects 4% of the population, and the most common form appears to be letters or sounds associated with color. Most individuals report having had the experience as far back in childhood as they can remember. As Galton himself noted, the condition tends to run in families, and recent work suggests a genetic basis.
Synesthesia was previously believed to be six times more common in women than in men, based on responses to newspaper ads. However, Simner and colleagues, testing a large population for synesthesia, found no difference between the sexes. Sometimes, sensory deficiency can lead to one sensory input evoking sensations in a different modality. For example, after early visual deprivation due to a disease that attacked the retinas, touch stimuli can produce “visual light,” and after a thalamic lesion leading to a loss of tactile sensation, sounds can elicit touch sensations. This probably occurs because the tactile or auditory sensory input begins to cross-activate the deprived cortical areas. This could be regarded as a form of acquired synesthesia.
10.08: McGurk Effect-Bimodal Speech Perception
Interactive Element
Watch this video to understand the impact of the McGurk effect on perception.
11: Attention
We use the term “attention” all the time, but what processes or abilities does that concept really refer to? This module will focus on how attention allows us to select certain parts of our environment and ignore other parts, and what happens to the ignored information. A key concept is the idea that we are limited in how much we can do at any one time. So we will also consider what happens when someone tries to do several things at once, such as driving while using electronic devices.
11.01: What is Attention?
Before we begin exploring attention in its various forms, take a moment to consider how you think about the concept. How would you define attention, or how do you use the term? We certainly use the word very frequently in our everyday language: “ATTENTION! USE ONLY AS DIRECTED!” warns the label on the medicine bottle, meaning be alert to possible danger. “Pay attention!” pleads the weary seventh-grade teacher, not warning about danger (with possible exceptions, depending on the teacher) but urging the students to focus on the task at hand. We may refer to a child who is easily distracted as having an attention disorder, although we also are told that Americans have an attention span of about 8 seconds, down from 12 seconds in 2000, suggesting that we all have trouble sustaining concentration for any amount of time (from www.Statisticbrain.com). How that number was determined is not clear from the Web site, nor is it clear how attention span in the goldfish—9 seconds!—was measured, but the fact that our average span reportedly is less than that of a goldfish is intriguing, to say the least.
William James wrote extensively about attention in the late 1800s. An often-quoted passage (James, 1890/1983) beautifully captures how intuitively obvious the concept of attention is, while it remains very difficult to define in measurable, concrete terms:
Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others. (pp. 381–382)
Notice that this description touches on the conscious nature of attention, as well as the notion that what is in consciousness is often controlled voluntarily but can also be determined by events that capture our attention. Implied in this description is the idea that we seem to have a limited capacity for information processing, and that we can only attend to or be consciously aware of a small amount of information at any given time.
Many aspects of attention have been studied in the field of psychology. In some respects, we define different types of attention by the nature of the task used to study it. For example, a crucial issue in World War II was how long an individual could remain highly alert and accurate while watching a radar screen for enemy planes, and this problem led psychologists to study how attention works under such conditions. When watching for a rare event, it is easy to allow concentration to lag. (This continues to be a challenge today for TSA agents, charged with looking at images of the contents of your carry-on items in search of knives, guns, or shampoo bottles larger than 3 oz.) Attention in the context of this type of search task refers to the level of sustained attention or vigilance one can maintain. In contrast, divided attention tasks allow us to determine how well individuals can attend to many sources of information at once. Spatial attention refers specifically to how we focus on one part of our environment and how we move attention to other locations in the environment. These are all examples of different aspects of attention, but an implied element of most of these ideas is the concept of selective attention; some information is attended to while other information is intentionally blocked out. This module will focus on important issues in selective and divided attention, addressing these questions:
• Can we pay attention to several sources of information at once, or do we have a limited capacity for information?
• How do we select what to pay attention to?
• What happens to information that we try to ignore?
• Can we learn to divide attention between multiple tasks?
11.02: History of Attention
There has been a large increase in research activity in the area of attention since the 1950s. This research has focused not only on attention, but also how attention is related to memory and executive functioning. Human learning and behaviour are dependent on our ability to pay attention to our environment, retain and retrieve information, and use cognitive strategies. An understanding of the development of attention is also critical when we consider that deficits in attention often lead to difficulties in school and in the work force. Thus, attention is an important topic in the study of psychology, specifically in the areas of development (see Part II of this book), learning (Part III), and psychological disorders (see the section on ADHD in Part IV). There is no doubt that an understanding of attention and related concepts is critical to our understanding of human cognition and learning.
Introduction to the History of Research on Attention
The study of attention is a major part of contemporary cognitive psychology and cognitive neuroscience. Attention plays a critical role in essentially all aspects of perception, cognition, and action, influencing the choices we make. The study of attention has been of interest to the field of psychology since its earliest days. However, many ideas about attention can be traced to philosophers in the 18th and 19th centuries, preceding the foundation of the field of psychology. The topic of attention was originally discussed by philosophers. Among the issues considered were the role of attention in conscious awareness and thought, and whether attention was directed voluntarily or involuntarily toward objects or events. The characterization of attention provided by each philosopher reflected that individual's larger metaphysical views of the nature of things and how we come to know the world. For instance, Juan Luis Vives (1492-1540) recognized the role of attention in forming memories. Gottfried Leibniz (1646-1716) introduced the concept of apperception, which refers to an act that is necessary for an individual to become conscious of a perceptual event. He noted that without apperception, information does not enter conscious awareness. Leibniz said, "Attention is a determination of the soul to know something in preference to other things". In summary, many philosophers gave attention a central role in perception and thinking. They introduced several important issues, such as the extent to which attention is directed automatically or intentionally. These topics continue to be examined and evaluated in contemporary research. Although they conducted little experimental research themselves, their conceptual analysis of attention laid the foundation for the scientific study of attention in ensuing years.

The philosophical analyses of attention led to some predictions that could be tested experimentally. In addition, in the mid-1800s, psychophysical methods were being developed that allowed the relation between physical stimulus properties and their corresponding psychological perceptions to be measured. Wilhelm Wundt, who established the first laboratory devoted to psychological research in 1879, was responsible for introducing the study of attention to the field. In addition, the relation between attention and perception was one of the first topics to be studied in experimental psychology. Wundt held that attention was an inner activity that caused ideas to be present to differing degrees in consciousness. He distinguished between perception, which was the entry into the field of attention, and apperception, which was responsible for entry into the inner focus. He assumed that the focus of attention could narrow or widen, a view that has also enjoyed popularity in recent years.

At the end of the 19th century, Hermann von Helmholtz (1821-1894) argued that attention is essential for visual perception. Using himself as a subject and pages of briefly visible printed letters as stimuli, he found that attention could be directed in advance of the stimulus presentation to a particular region of the page, even though the eyes were kept fixed at a central point. He also found that attention was limited: the letters in by far the largest part of the visual field, even in the vicinity of the fixation point, were not automatically perceived.
William James's (1890/1950) views on attention are probably the most well known of the early psychologists. In his famous Principles of Psychology (1890), James asserted that "the faculty of voluntarily bringing back a wandering attention, over and over again, is the very root of judgment, character, and will." His definition of attention is also widely quoted. According to James (1890), "It is taking possession by the mind, in clear and vivid form, of one of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state." Moreover, according to James, the immediate effects of attention are to make us perceive, conceive, distinguish, and remember better than we otherwise could—both more successive things and each thing more clearly. It also shortens "reaction time." James's definition also mentions clearness, which Titchener (1908/1973) viewed as the central aspect of attention. Pillsbury (1908/1973) agreed with Titchener, indicating that "the essence of attention as a conscious process is an increase in the clearness of one idea or a group of ideas at the expense of others." Researchers at the beginning of the 20th century debated how this increased clearness is obtained. In summary, around 1860, the philosophical approach dominated the study of psychology in general and attention especially. During the period from 1890 to 1909, the study of attention was transformed, as was the field of psychology as a whole, into one of scientific inquiry with an emphasis on experimental investigations. However, given that behaviourism came to dominate psychology in the next period, at least in the United States, the study of attentional mechanisms was largely delayed until the middle of the 20th century.
Although one often reads that research on attention essentially ceased during the period of 1910-1949, attention research never disappeared completely, and interest in the topic rose again with the advent of contemporary cognitive psychology. Lovie (1983) compiled tables showing the numbers of papers on attention listed in Psychological Abstracts and its predecessor, Psychological Index, in five-year intervals from 1910 to 1960, showing that studies on the topic were conducted throughout these decades. Among the important works on attention was that of Jersild (1927), who published a classic monograph, "Mental Set and Shift."
Another significant contribution during this era was the discovery of the psychological refractory period effect by Telford (1931). He noted that numerous studies showed that stimulation of neurons was followed by a refractory phase during which the neurons were less sensitive to stimulation. Stroop (1935/1992) also published what is certainly one of the most widely cited studies in the field of psychology, in which he demonstrated that stimulus information that is irrelevant to the task can have a major impact on performance (see below for John Ridley Stroop and the impact of the Stroop Color-Word Task on research on attention). Paschal (1941), Gibson (1940), and Mowrer, Rayman, and Bliss (1940) also conducted research on attention, such as work on preparatory set or mental set. In sum, although the proportion of psychological research devoted to the topic of attention was much less during this time period than during preceding decades, many important discoveries were made, which have influenced contemporary research on the topic.
The period from 1950 to 1974 saw a revival of interest in the characterization of human information processing. Research on attention during this period was characterized by an interplay between technical applications and theory. Mackworth (1950) reported experiments on the maintenance of vigilance that exemplified this interaction and set the stage for extensive research on the topic over the remainder of the 20th century. This research originated from concerns about the performance of radar operators in World War II detecting infrequently occurring signals. Cherry (1953) conducted one of the seminal studies of attention during this period, examining the problem of selective attention, or, as he called it, “the cocktail party phenomenon”. He used a procedure called dichotic listening in which he presented different messages to each ear through headphones. Broadbent (1958) developed the first complete model of attention, called Filter Theory (see below). Treisman (1960) reformulated Broadbent's Filter Theory into what is now called the Filter-Attenuation Theory (see below). In the early 1970s, there was a shift from studying attention mainly with auditory tasks to studying it mainly with visual tasks. A view that regards attention as a limited-capacity resource that can be directed toward various processes became popular. Kahneman’s (1973) model is the most well known of these unitary capacity or resource theories.
According to this model, attention is a single resource that can be divided among different tasks in different amounts. The basic idea behind these models is that multiple tasks should produce interference when they compete for the limited-capacity resources. Also, in this time period, the first controlled experiments using psychophysiological techniques to study attention were conducted on humans. These experiments used methods that allow brain activity relating to the processing of a stimulus, called event-related potentials, to be measured using electrodes placed on the scalp. In sum, the research during this period yielded considerable information about the mechanisms of attention. The most important development was the introduction of detailed information-processing models of attention.

Research on attention blossomed during the last quarter of the 20th century. Multiple-resource models have emerged from many studies showing that it is easier to perform two tasks together when the tasks use different stimulus or response modalities than when they use the same modalities. Treisman and Gelade (1980) also developed a highly influential variant of the Spotlight Theory called the Feature Integration Theory to explain the results from visual search studies, in which subjects are to detect whether a target is present among distracters. Priming studies have also been popular during the most recent period of attention research. In such studies, a prime stimulus precedes the imperative stimulus to which the subject is to respond; the prime can be the same as or different from some aspect of the imperative stimulus. In addition, a major focus has been on gathering neuropsychological evidence pertaining to the brain mechanisms that underlie attention. Cognitive neuroscience, of which studies of attention are a major part, has made great strides due to the continued development of neuroimaging technologies. The converging evidence provided by neuropsychological and behavioral data promises to advance the study of attention significantly in the first half of the 21st century.
Finally, significant advances have also been made toward expanding the theories and methods of attention to address a range of applied problems. Two major areas can be identified. The first one concerns ergonomics in its broadest sense, ranging from human-machine interactions to improvement of work environments such as mental workload and situation awareness. The second major area of application is clinical neuropsychology, which has benefited substantially from adopting cognitive models and experimental methods to describe and investigate cognitive deficits in neurological patients. There is also work being done on the clinical application of attentional strategies (e.g., mindfulness training) in the treatment of a wide range of psychological disorders (see section on mindfulness).
John Ridley Stroop and The Stroop Effect
For over half a century, the Stroop effect has been one of the best-known standard demonstrations in undergraduate psychology courses and laboratories. In this cognitive task, participants asked to name the color of the ink in which an incompatible color word is printed (e.g., to say “red” aloud in response to the stimulus word GREEN printed in red ink) take longer than when asked to name the color in a control condition (e.g., to say "red" to the stimulus XXXXX printed in red ink). This effect, now known as the Stroop effect, was first reported in the classic article “Studies of Interference in Serial Verbal Reactions,” published in the Journal of Experimental Psychology in 1935. Since then, this phenomenon has become one of the most well known in the history of psychology.
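As a concrete illustration, here is a minimal sketch of how Stroop-style trials can be generated. The color set is our own choice, and note that Stroop's original control used letter strings such as XXXXX, whereas this sketch uses congruent color words, a common control in modern versions of the task.

```python
import random

# Minimal sketch of Stroop-style trial generation (illustrative only;
# the color set is an assumption, not Stroop's original stimuli).
COLORS = ["red", "green", "blue", "yellow"]

def make_trial(congruent):
    """Return (word, ink_color); the correct response is the ink color."""
    ink = random.choice(COLORS)
    if congruent:
        word = ink                                          # word matches ink
    else:
        word = random.choice([c for c in COLORS if c != ink])  # mismatch
    return word.upper(), ink

# e.g., an incongruent trial such as ('GREEN', 'red'): the participant
# must say "red", and responses are reliably slower than on congruent
# or neutral trials.
print(make_trial(congruent=False))
```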
Stroop’s article has become one of the most cited articles in the history of experimental psychology, with more than 700 studies seeking to explain some nuance of the Stroop effect, along with thousands of others directly or indirectly influenced by it (MacLeod, 1992). However, at the time of its publication it had relatively little impact, because it appeared at the height of behaviourism in America (MacLeod, 1991). For the next thirty years after its publication, almost no experimental investigations of the Stroop effect occurred; between 1935 and 1964, only 16 articles directly examining the Stroop effect were published. In the 1960s, with the advent of information processing as the dominant perspective in cognitive psychology, Stroop's work was rediscovered. Since then, the annual number of studies rose quickly, until by 1969 the number of articles settled in at just over 20 annually, where it appears to have remained (MacLeod, 1992).
Donald Broadbent and Dichotic Listening
Donald E. Broadbent has been praised for his outstanding contributions to the field of psychology since the 1950s, most notably in the area of attention. In fact, despite the undeniable role that attention plays in almost all psychological processes, research in this area was neglected by psychologists for the first half of the twentieth century (Massaro, 1996). During that time, behaviourists ignored the role of attention in human behaviour. Behaviourism was characterized by a stimulus-response approach, emphasizing the association between a stimulus and a response, but without identifying the cognitive operations that lead to that response (Reed, 2000). Subsequently, in the mid-1950s, a growing number of psychologists became interested in the information-processing approach as opposed to the stimulus-response approach. It was Broadbent’s elaboration of the idea of the human organism as an information-processing system that led to a systematic study of attention, and more generally, to the interrelation of scientific theory and practical application in the study of psychology.
Dichotic Listening Experiments
In 1952, Broadbent published his first report in a series of experiments that involved a dichotic listening paradigm. In that report, he was concerned with a person’s ability to answer one of two messages that were delivered at the same time, but one of which was irrelevant.
The participants were required to answer a series of Yes-No questions about a visual display over a radio-telephone. For example, the participant would be asked, “S-1 from G.D.O. Is there a heart on Position 1? Over,” to which the participant should answer, “G.D.O. from S-1. Yes, over.” Participants in Groups I, II, III, and IV heard two successive series of messages, in which two voices (G.D.O. and Turret) spoke simultaneously during some of the messages. Only one of the voices was addressing S-1; the other addressed S-2, S-3, S-4, S-5, or S-6. Participants were assigned to the following five groups:
• Group I: instructed to answer the message for S-1 and ignore the other on both runs
• Group II: instructed on one run to answer only the message from G.D.O., and on the second run was provided with a visual cue, before the pairs of messages began, for the name of the voice to be answered
• Group III: were given the same directions as Group I on one run, and on the other run had the experimenter indicate the correct voice verbally after the two messages had reached the “over” stage
• Group IV: had the correct voice indicated in all cases, but in one run it was before the messages began (like in Group II) and in the other run it was after the messages had finished (like in Group III)
• Group V: under the same conditions as Group I, heard the same recordings as Groups I, II, III, and IV, but then also heard two new recordings. One recording had a voice that addressed S-1 and a voice that addressed T-2, T-3, T-4, T-5, or T-6 (thus the simultaneous messages were more distinct than for the other groups). The other recording had this same differentiation of messages, but also had both voices repeat the call-sign portion of the message (i.e., “S-1 from G.D.O., S-1 from G.D.O.”)
For Groups I and II, it is important to note that the overall proportion of failures to answer the correct message was 52%. Results from Groups III and IV indicated that delaying knowledge of the correct voice until the message is completed makes that knowledge almost useless. More specifically, Broadbent (1952) stated:
“The present case is an instance of selection in perception (attention). Since the visual cue to the correct voice is useless when it arrives towards the ends of the message, it is clear that process of discarding part of the information contained in the mixed voices has already taken place…It seems possible that one of the two voices is selected for response without reference to its correctness, and that the other is ignored…If one of the two voices is selected (attended to) in the resulting mixture there is no guarantee that it will be the correct one, and both call signs cannot be perceived at once any more than both messages can be received and stored till a visual cue indicates the one to be answered”. (p. 55)
In 1954, Broadbent used the same procedure as discussed above with slight modifications. In that case, he found evidence of the positive impact that spatial separation of the messages has on paying attention to and understanding the correct message. The dichotic listening paradigm has been utilized in numerous other publications, both by Broadbent and by other psychologists working in the field of cognition. For example, Cherry (1953) investigated how we can recognize what one person is saying when others are speaking at the same time, which he described as the “cocktail party problem” (p. 976). In his experiment, subjects listened to simultaneous messages and were instructed to repeat one of the messages word by word or phrase by phrase.
Information-Processing and the Filter Model of Attention
Cognitive psychology is often called human information processing, which reflects the approach taken by many cognitive psychologists in studying cognition. The stage approach, with the acquisition, storage, retrieval, and use of information in a number of separate stages, was influenced by the computer metaphor and the way people enter, store, and retrieve data from a computer (Reed, 2000). The stages in an information-processing model are listed below; a schematic sketch of the pipeline follows the list:
• Sensory Store: brief storage for information in its original sensory form
• Filter: part of attention in which some perceptual information is blocked out and not recognized, while other information is attended to and recognized
• Pattern Recognition: stage in which a stimulus is recognized
• Selection: stage that determines what information a person will try to remember
• Short-Term Memory: memory with limited capacity, that lasts for about 20-30 seconds without attending to its content
• Long-Term Memory: memory that has no capacity limit and lasts from minutes to a lifetime
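To make the stage sequence concrete, here is a schematic sketch of the pipeline in Python. The function signatures, data types, and the all-or-nothing filter rule are our own simplifications for illustration, loosely following Broadbent's description rather than any published implementation.

```python
# Schematic sketch of the stage model listed above (illustrative only).

def sensory_store(inputs):
    """Hold raw input briefly in its original sensory form."""
    return list(inputs)

def attentional_filter(items, attended_channel):
    """Pass only the attended channel; block the rest (all-or-nothing)."""
    return [item for item in items if item["channel"] == attended_channel]

def pattern_recognition(items):
    """Recognize each attended stimulus (trivially labeled here)."""
    return [{**item, "recognized": True} for item in items]

def select_for_memory(items, capacity=4):
    """Choose what enters limited-capacity short-term memory."""
    return items[:capacity]

# Two simultaneous messages, one per ear, as in dichotic listening:
inputs = [{"channel": "left", "content": "camping trip story"},
          {"channel": "right", "content": "Abe Lincoln story"}]
stm = select_for_memory(
    pattern_recognition(
        attentional_filter(sensory_store(inputs), attended_channel="left")))
print(stm)  # only the attended (left-ear) message reaches short-term memory
```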
Using an information-processing approach, Broadbent collected data on attention (Reed, 2000). He used a dichotic listening paradigm (see above section), asking participants to listen simultaneously to messages played in each ear, and based on the difficulty that participants had in listening to the simultaneous messages, proposed that a listener can attend to only one message at a time (Broadbent, 1952; Broadbent, 1954). More specifically, he asked enlisted men in England's Royal Army to listen to three pairs of digits. One digit from each pair was presented to one ear at the same time that the other digit from the pair was presented to the other ear. The subjects were asked to recall the digits in whatever order they chose, and almost all of the correct reports involved recalling all of the digits presented to one ear, followed by all the digits presented to the other ear. A second group of participants were asked to recall the digits in the order they were presented (i.e., as pairs). Performance was worse than when they were able to recall all digits from one ear and then the other.
To account for these findings, Broadbent hypothesized that the mechanism of attention was controlled by two components: a selective device or filter located early in the nervous system, and a temporary buffer store that precedes the filter (Broadbent, 1958). He proposed that the filter was tuned to one channel or the other, in an all-or-nothing manner. Broadbent’s filter model, described in his book Perception and Communication (1958), was one of the first information-processing models to be examined by psychologists.
Shortly after, it was discovered that if the unattended message became highly meaningful (for example, hearing one’s name, as in Cherry's Cocktail Party Effect mentioned above), then attention would switch automatically to the new message. This result led to the paradox that the content of the message is understood before it is selected, indicating that Broadbent needed to revise his theory (Craik & Baddeley, 1995). Broadbent did not shy away from this task. In fact, he saw all scientific theories as temporary statements, a method of integrating current evidence in a coherent manner. According to Craik and Baddeley (1995), although Broadbent always presented his current theories firmly and persuasively, he never took the position of obstinately defending an outmoded theory. When he published his second book on the topic, Decision and Stress (1971), he used his filter model as the starting point, to which he applied modifications and added concepts “to accommodate new findings that the model itself had stimulated” (Massaro, 1996, p. 141). Despite its inconsistencies with emerging findings, the filter model was the first information-processing model of human cognition and remains among the most influential.
Anne Treisman and Feature Integration Theory
Anne Treisman is one of the most influential cognitive psychologists in the world today. For over four decades, she has been using innovative research methods to define fundamental issues in the area of attention and perception. Best known for her Feature Integration Theory (1980, 1986), Treisman’s hypotheses about the mechanisms involved in information processing have formed a starting point for many theorists in this area of research.
In 1967, while Treisman worked as a visiting scientist in the psychology department at Bell Telephone Laboratories, she published an influential paper in Psychological Review that was central to the development of selective attention as a scientific field of study. This paper articulated many of the fundamental issues that continue to guide studies of attention to this day. While at Bell, Treisman’s research interests began to expand (Anon, 1991). Although she remained intrigued by the role of attention on auditory perception, she was now also fascinated by the way this construct modulates perception in the visual modality.
In the following years, Treisman returned to Oxford, where she accepted a position as University Lecturer in the Psychology Department and was appointed a Fellow of St. Anne’s College (Treisman, 2006). Here, she began to explore the notion that attention is involved in integrating separate features to form visual perceptual representations of objects. Using a stopwatch and her children as research participants, she found that the search for a red ‘X’ among red ‘Os’ and blue ‘Xs’ was slow and laborious compared to the search for either shape or colour alone (Gazzaniga et al., 2002). These findings were corroborated by results from testing adult participants in the laboratory and provided the basis of a new research program, in which Treisman conducted experiments exploring the relationships between feature integration, attention, and object perception (Treisman & Gelade, 1980).
In 1976, Treisman’s marriage to Michel Treisman ended. She remarried in 1978, to Daniel Kahneman, a fellow psychologist who would go on to win the Nobel Prize for Economics in 2002. Shortly thereafter, Treisman and Kahneman accepted positions at the University of British Columbia, Canada. In 1980, Treisman and Gelade published a seminal paper proposing her enormously influential Feature Integration Theory (FIT). Treisman’s research demonstrated that during the early stages of object perception, early vision encodes features such as color, form, and orientation as separate entities (in "feature maps") (Treisman, 1986). Focused attention to these features recombines the separate features resulting in correct object perception. In the absence of focused attention, these features can bind randomly to form illusory conjunctions (Treisman & Schmidt, 1982; Treisman, 1986). Feature integration theory has had an overarching impact both within and outside the area of psychology.
Feature Integration Theory Experiments
According to Treisman’s Feature Integration Theory, perception of objects is divided into two stages (a sketch of the search-time pattern the theory predicts follows the list):
1. Pre-Attentive Stage: The first stage in perception is so named because it happens automatically, without effort or attention by the perceiver. In this stage, an object is analyzed into its features (e.g., color, texture, shape). Treisman suggests that the reason we are unaware of the breakdown of an object into its elementary features is that this analysis occurs early in the perceptual processes, before we have become conscious of the object. Evidence: Treisman created a display of four objects flanked by two black numbers. This display was flashed on a screen for one-fifth of a second and followed by a random-dot masking field in order to eliminate residual perception of the stimuli. Participants were asked to report the numbers first, followed by what they saw at each of the four locations where the shapes had been. In 18 percent of trials, participants reported seeing objects that consisted of a combination of features from two different stimuli (i.e., color and shape). These combinations of features from different stimuli are called illusory conjunctions (Treisman and Schmidt, 1982). The experiment also showed that these illusory conjunctions could occur even if the stimuli differ greatly in shape and size. According to Treisman, illusory conjunctions occur because early in the perceptual process, features may exist independently of one another, and can therefore be incorrectly combined in laboratory settings when briefly flashed stimuli are followed by a masking field (Treisman, 1986).
2. Focused Attention Stage: During this second stage of perception, features are recombined to form whole objects. Evidence: Treisman repeated the illusory conjunction experiment, but this time participants were instructed to ignore the flanking numbers and to focus their attention on the four target objects. Results demonstrated that this focused attention eliminated illusory conjunctions, so that all shapes were paired with their correct colours (Treisman & Schmidt, 1982). This experiment demonstrates the role of attention in the correct perception of objects.
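To make the two-stage account concrete, here is a minimal simulation, in Python, of the reaction-time pattern Feature Integration Theory predicts for visual search: search for a single feature is roughly independent of display size, while search for a conjunction of features slows as items are added. The base time, slope, and noise values are illustrative assumptions, not Treisman's data.

```python
import random

def search_rt(set_size, conjunction, base_rt=450.0, slope=30.0, noise_sd=20.0):
    """Simulated reaction time (ms) for one visual search trial.

    Feature search (e.g., any red item among blue ones) is treated as
    parallel: RT does not grow with set size. Conjunction search (a red X
    among red Os and blue Xs) is treated as serial: RT grows roughly
    linearly with the number of items. All parameters are illustrative.
    """
    rt = base_rt + (slope * set_size if conjunction else 0.0)
    return rt + random.gauss(0.0, noise_sd)

def mean_rt(set_size, conjunction, trials=500):
    return sum(search_rt(set_size, conjunction) for _ in range(trials)) / trials

for n in (4, 8, 16, 32):
    print(f"set size {n:2d}: feature ~{mean_rt(n, False):4.0f} ms, "
          f"conjunction ~{mean_rt(n, True):4.0f} ms")
```

The flat feature-search function and the steadily rising conjunction-search function are the signature pattern reported by Treisman and Gelade (1980).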
11.03: Selective Attention and Models of Attention
THE COCKTAIL PARTY
Selective attention is the ability to select certain stimuli in the environment to process, while ignoring distracting information. One way to get an intuitive sense of how attention works is to consider situations in which attention is used. A party provides an excellent example for our purposes. Many people may be milling around; there is a dazzling variety of colors, sounds, and smells; the buzz of many conversations is striking. With so many conversations going on, how is it possible to select just one and follow it? You don't have to be looking at the person talking; you may be listening with great interest to some gossip while pretending not to hear.
However, once you are engaged in conversation with someone, you quickly become aware that you cannot also listen to other conversations at the same time. You also are probably not aware of how tight your shoes feel or of the smell of a nearby flower arrangement. On the other hand, if someone behind you mentions your name, you typically notice it immediately and may start attending to that (much more interesting) conversation. This situation highlights an interesting set of observations. We have an amazing ability to select and track one voice, visual object, etc., even when a million things are competing for our attention, but at the same time, we seem to be limited in how much we can attend to at one time, which in turn suggests that attention is crucial in selecting what is important. How does it all work?
DICHOTIC LISTENING STUDIES
This cocktail party scenario is the quintessential example of selective attention, and it is essentially what some early researchers tried to replicate under controlled laboratory conditions as a starting point for understanding the role of attention in perception (e.g., Cherry, 1953; Moray, 1959). In particular, they used dichotic listening and shadowing tasks to evaluate the selection process. Dichotic listening simply refers to the situation when two messages are presented simultaneously to an individual, with one message in each ear. In order to control which message the person attends to, the individual is asked to repeat back or “shadow” one of the messages as he hears it. For example, let’s say that a story about a camping trip is presented to John’s left ear, and a story about Abe Lincoln is presented to his right ear. The typical dichotic listening task would have John repeat the story presented to one ear as he hears it. Can he do that without being distracted by the information in the other ear?
People can become pretty good at the shadowing task, and they can easily report the content of the message that they attend to. But what happens to the ignored message? Typically, people can tell you if the ignored message was a man’s or a woman’s voice, or other physical characteristics of the speech, but they cannot tell you what the message was about. In fact, many studies have shown that people in a shadowing task were not aware of a change in the language of the message (e.g., from English to German; Cherry, 1953), and they didn’t even notice when the same word was repeated in the unattended ear more than 35 times (Moray, 1959)! Only the basic physical characteristics, such as the pitch of the unattended message, could be reported.
On the basis of these types of experiments, it seems that we can answer the first question about how much information we can attend to very easily: not very much. We clearly have a limited capacity for processing information for meaning, making the selection process all the more important. The question becomes: How does this selection process work?
MODELS OF SELECTIVE ATTENTION
Broadbent's Filter Model. Many researchers have investigated how selection occurs and what happens to ignored information. Donald Broadbent was one of the first to try to characterize the selection process. His Filter Model was based on the dichotic listening tasks described above as well as other types of experiments (Broadbent, 1958). He found that people select information on the basis of physical features: the sensory channel (or ear) that a message was coming in, the pitch of the voice, the color or font of a visual message. People seemed vaguely aware of the physical features of the unattended information, but had no knowledge of the meaning. As a result, Broadbent argued that selection occurs very early, with no additional processing for the unselected information. In a flowchart of the model, information would pass from sensory input through a selective filter that admits only one channel, chosen by its physical characteristics, with only the selected channel passing on to be processed for meaning.
TREISMAN’S ATTENUATION MODEL
Broadbent's model makes sense, but if you think about it you already know that it cannot account for all aspects of the Cocktail Party Effect. What doesn't fit? The fact is that you tend to hear your own name when it is spoken by someone, even if you are deeply engaged in a conversation. We mentioned earlier that people in a shadowing experiment were unaware of a word in the unattended ear that was repeated many times—and yet many people noticed their own name in the unattended ear even if it occurred only once.
Anne Treisman (1960) carried out a number of dichotic listening experiments in which she presented two different stories to the two ears. As usual, she asked people to shadow the message in one ear. As the stories progressed, however, she switched the stories to the opposite ears. Treisman found that individuals spontaneously followed the story, or the content of the message, when it shifted from the left ear to the right ear. Then they realized they were shadowing the wrong ear and switched back.
Results like this, and the fact that you tend to hear meaningful information even when you aren’t paying attention to it, suggest that we do monitor the unattended information to some degree on the basis of its meaning. Therefore, the filter theory can’t be right to suggest that unattended information is completely blocked at the sensory analysis level. Instead, Treisman suggested that selection starts at the physical or perceptual level, but that the unattended information is not blocked completely, it is just weakened or attenuated. As a result, highly meaningful or pertinent information in the unattended ear will get through the filter for further processing at the level of meaning. The figure below shows information going in both ears, and in this case there is no filter that completely blocks nonselected information. Instead, selection of the left ear information strengthens that material, while the nonselected information in the right ear is weakened. However, if the preliminary analysis shows that the nonselected information is especially pertinent or meaningful (such as your own name), then the Attenuation Control will instead strengthen the more meaningful information.
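The contrast between Broadbent's filter and Treisman's attenuator can be sketched in a few lines of code. This is a toy model, not a published implementation: the gain of 0.2, the threshold, and the pertinence boost for one's own name (here, "John" from the earlier shadowing example) are all invented values chosen to reproduce the qualitative pattern.

```python
RECOGNITION_THRESHOLD = 0.5        # illustrative value
PERTINENCE_BOOST = {"john": 0.45}  # assumed: one's own name is highly pertinent

def reaches_awareness(word, attended, model="attenuation"):
    """Toy contrast between Broadbent's filter and Treisman's attenuator."""
    if attended:
        return True      # the shadowed channel is fully processed
    if model == "filter":
        return False     # Broadbent: unattended input is blocked completely
    signal = 0.2         # Treisman: attenuated but not blocked (assumed gain)
    signal += PERTINENCE_BOOST.get(word.lower(), 0.0)
    return signal >= RECOGNITION_THRESHOLD

for word in ("camping", "john"):
    print(word,
          "| filter:", reaches_awareness(word, attended=False, model="filter"),
          "| attenuation:", reaches_awareness(word, attended=False))
```

Under the filter model nothing in the unattended ear reaches awareness; under the attenuation model, ordinary words stay below threshold but the highly pertinent name breaks through, which is exactly the cocktail party observation the filter model could not explain.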
LATE SELECTION MODELS
Other selective attention models have been proposed as well. A late selection or response selection model proposed by Deutsch and Deutsch (1963) suggests that all information in the unattended ear is processed on the basis of meaning, not just the selected or highly pertinent information. However, only the information that is relevant for the task response gets into conscious awareness. This model is consistent with ideas of subliminal perception; in other words, you don't have to be aware of or attending to a message for it to be fully processed for meaning.
This late selection model looks a lot like the early selection model—only the location of the selective filter has changed, with the assumption that analysis of meaning occurs before selection, but that only the selected information becomes conscious.
MULTIMODE MODEL
Why did researchers keep coming up with different models? Because no single model seemed to account for all the data: some findings indicate that non-selected information is blocked completely, whereas others suggest that it can be processed for meaning. The multimode model addresses this apparent inconsistency, suggesting that the stage at which selection occurs can change depending on the task. Johnston and Heinz (1978) demonstrated that under some conditions, we can select what to attend to at a very early stage and do not process the content of the unattended message very much at all. Analyzing physical information, such as attending to information based on whether it is a male or female voice, is relatively easy; it occurs automatically, rapidly, and doesn't take much effort. Under the right conditions, we can also select what to attend to on the basis of the meaning of the messages.
However, the late selection option—processing the content of all messages before selection—is more difficult and requires more effort. The benefit, though, is that we have the flexibility to change how we deploy our attention depending upon what we are trying to accomplish, which is one of the greatest strengths of our cognitive system.
This discussion of selective attention has focused on experiments using auditory material, but the same principles hold for other perceptual systems as well. Neisser (1979) investigated some of the same questions with visual materials by superimposing two semi-transparent video clips and asking viewers to attend to just one series of actions. As with the auditory materials, viewers often were unaware of what went on in the other clearly visible video. Twenty years later, Simons and Chabris (1999) explored and expanded these findings using similar techniques, and triggered a flood of new work in an area referred to as inattentional blindness. We touch on those ideas below, and you can also refer to another Noba Module, Failures of Awareness: The Case of Inattentional Blindness for a more complete discussion.
11.04: Divided Attention
In spite of the evidence of our limited capacity, we all like to think that we can do several things at once. Some people claim to be able to multitask without any problem: reading a textbook while watching television and talking with friends; talking on the phone while playing computer games; texting while driving. The fact is that we sometimes can seem to juggle several things at once, but the question remains whether dividing attention in this way impairs performance.
Is it possible to overcome the limited capacity that we experience when engaging in cognitive tasks? We know that with extensive practice, we can acquire skills that do not appear to require conscious attention. As we walk down the street, we don’t need to think consciously about what muscle to contract in order to take the next step. Indeed, paying attention to automated skills can lead to a breakdown in performance, or “choking” (e.g., Beilock & Carr, 2001). But what about higher level, more mentally demanding tasks: Is it possible to learn to perform two complex tasks at the same time?
DIVIDED ATTENTION TASKS
In a classic study that examined this type of divided attention task, two participants were trained to take dictation for spoken words while reading unrelated material for comprehension (Spelke, Hirst, & Neisser, 1976). In divided attention tasks such as these, each task is evaluated separately, in order to determine baseline performance when the individual can allocate as many cognitive resources as necessary to one task at a time. Then performance is evaluated when the two tasks are performed simultaneously. A decrease in performance for either task would suggest that even if attention can be divided or switched between the tasks, the cognitive demands are too great to avoid disruption of performance. (We should note here that divided attention tasks are designed, in principle, to see if two tasks can be carried out simultaneously. A related research area looks at task switching and how well we can switch back and forth among different tasks [e.g., Monsell, 2003]. It turns out that switching itself is cognitively demanding and can impair performance.)
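The arithmetic behind such evaluations is simple enough to state as code. The sketch below uses made-up accuracy numbers (not Spelke et al.'s data) to show how a dual-task cost would be computed from single-task baselines.

```python
def dual_task_cost(single, dual):
    """Proportional performance drop relative to the single-task baseline."""
    return (single - dual) / single

# Hypothetical accuracy scores for a dictation-while-reading study.
dictation_alone, reading_alone = 0.92, 0.88
dictation_dual, reading_dual = 0.90, 0.87

print(f"dictation cost: {dual_task_cost(dictation_alone, dictation_dual):.1%}")
print(f"reading cost:   {dual_task_cost(reading_alone, reading_dual):.1%}")
```

A cost near zero for both tasks is what would support the claim that the two tasks can genuinely be performed concurrently.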
The focus of the Spelke et al. (1976) study was whether individuals could learn to perform two relatively complex tasks concurrently, without impairing performance. The participants received plenty of practice—the study lasted 17 weeks and they had a 1-hour session each day, 5 days a week. These participants were able to learn to take dictation for lists of words and read for comprehension without affecting performance in either task, and the authors suggested that perhaps there are not fixed limits on our attentional capacity. However, changing the tasks somewhat, such as reading aloud rather than silently, impaired performance initially, so this multitasking ability may be specific to these well-learned tasks. Indeed, not everyone could learn to perform two complex tasks without performance costs (Hirst, Neisser, & Spelke, 1978), although the fact that some can is impressive.
DISTRACTED DRIVING
More relevant to our current lifestyles are questions about multitasking while texting or having cell phone conversations. Research designed to investigate, under controlled conditions, multitasking while driving has revealed some surprising results. Certainly there are many possible types of distractions that could impair driving performance, such as applying makeup using the rearview mirror, attempting (usually in vain) to stop the kids in the backseat from fighting, fiddling with the CD player, trying to negotiate a handheld cell phone, a cigarette, and a soda all at once, or eating a bowl of cereal while driving (!). But we tend to have a strong sense that we CAN multitask while driving, and cars are being built with more and more technological capabilities that encourage multitasking. How good are we at dividing attention in these cases?
Most people acknowledge the distraction caused by texting while driving, and the reason seems obvious: your eyes are off the road and at least one hand (often both) is engaged while texting. However, the problem is not simply one of occupied hands or eyes, but rather that the cognitive demands on our limited-capacity systems can seriously impair driving performance (Strayer, Watson, & Drews, 2011). The effect of a cell phone conversation on performance (such as not noticing someone's brake lights or responding more slowly to them) is just as significant when the individual is having a conversation with a hands-free device as with a handheld phone; the same impairments do not occur when listening to the radio or a book on tape (Strayer & Johnston, 2001). Moreover, studies using eye-tracking devices have shown that drivers are less likely to later recognize objects that they did look at when using a cell phone while driving (Strayer & Drews, 2007). These findings demonstrate that cognitive distractions such as cell phone conversations can produce inattentional blindness, or a lack of awareness of what is right before your eyes (see also Simons & Chabris, 1999). Sadly, although we all like to think that we can multitask while driving, in fact the percentage of people who can truly perform cognitive tasks without impairing their driving performance is estimated to be about 2% (Watson & Strayer, 2010).
11.05: Subitizing
There are theories that apply to a small number of closely related phenomena. One such narrow theory concerns a very specific quantitative ability called subitizing. This refers to people's ability to quickly and accurately perceive the number of objects in a scene without counting them—as long as the number is four or fewer. Several theories have been proposed to explain subitizing. Among them is the idea that small numbers of objects are associated with easily recognizable patterns. For example, people know immediately that there are three objects in a scene because the three objects tend to form a "triangle" and it is this pattern that is quickly perceived.
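The subitizing pattern is easy to state quantitatively: enumeration is fast and nearly flat up to about four items, after which each additional item adds a large serial-counting cost. The sketch below encodes that discontinuity; the limit of four is from the text, but the timing parameters are invented for illustration.

```python
def enumeration_rt(n_items, subitizing_limit=4,
                   base_rt=400.0, subitize_slope=50.0, count_slope=300.0):
    """Rough reaction-time profile (ms) for reporting how many objects appear.

    Within the subitizing range the cost per item is small; beyond it,
    each extra item must be counted serially. Timing values are illustrative.
    """
    if n_items <= subitizing_limit:
        return base_rt + subitize_slope * n_items
    within_limit = base_rt + subitize_slope * subitizing_limit
    return within_limit + count_slope * (n_items - subitizing_limit)

for n in range(1, 9):
    print(n, f"{enumeration_rt(n):.0f} ms")
```

The sharp change of slope at four items is the behavioral signature that any theory of subitizing, pattern-based or otherwise, has to explain.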
Narrow theories have their place in psychological research. Broad theories organize more phenomena but tend to be less formal and less precise in their predictions; narrow theories organize fewer phenomena but tend to be more formal and more precise in their predictions.
11.06: Auditory Attention
More than 50 years ago, experimental psychologists began documenting the many ways that our perception of the world is limited, not by our eyes and ears, but by our minds. We appear able to process only one stream of information at a time, effectively filtering other information from awareness. To a large extent, we perceive only that which receives the focus of our cognitive efforts: our attention.
Imagine the following task, known as dichotic listening: You put on a set of headphones that play two completely different speech streams, one to your left ear and one to your right ear.
Your task is to repeat each syllable spoken into your left ear as quickly and accurately as possible, mimicking each sound as you hear it. When performing this attention-demanding task, you won’t notice if the speaker in your right ear switches to a different language or is replaced by a different speaker with a similar voice. You won’t notice if the content of their speech becomes nonsensical. In effect, you are deaf to the substance of the ignored speech. But, that is not because of the limits of your auditory senses. It is a form of cognitive deafness, due to the nature of focused, selective attention. Even if the speaker on your right headphone says your name, you will notice it only about one-third of the time (Conway, Cowan, & Bunting, 2001).
And, at least by some accounts, you only notice it that often because you still devote some of your limited attention to the ignored speech stream (Holender, 1986). In this task, you will tend to notice only large physical changes (e.g., a switch from a male to a female speaker), but not substantive ones, except in rare cases.
This selective listening task highlights the power of attention to filter extraneous information from awareness while letting in only those elements of our world that we want to hear.
Focused attention is crucial to our powers of observation, making it possible for us to zero in on what we want to see or hear while filtering out irrelevant distractions. But, it has consequences as well: We can miss what would otherwise be obvious and important signals.
12.01: Approaches to Pattern Recognition
Template Matching
One way for people to recognize objects in their environment would be for them to compare their representations of those objects with templates stored in memory. For example, if I can achieve a match between the large red object I see in the street and my stored representation of a London bus, then I recognize a London bus. However, one difficulty for this theory is illustrated by the classic "THE CAT" demonstration, in which the two words are written so that the middle letter of each is exactly the same ambiguous character.
Here, we have no problem reading the middle letters as an 'H' in one word and an 'A' in the other, even though they are identical. A second problem is that we continue to recognize most objects regardless of the perspective we see them from (e.g., from the front, side, back, bottom, or top). This would suggest we have a nearly infinite store of templates, which hardly seems credible.
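A template matcher is easy to write down, and writing it down makes its brittleness obvious. In this sketch the letters are hypothetical 5×3 binary grids; the input is scored against each stored template cell by cell, and the best match wins.

```python
# Hypothetical binary grids standing in for stored templates.
TEMPLATES = {
    "T": ["###",
          ".#.",
          ".#.",
          ".#.",
          ".#."],
    "L": ["#..",
          "#..",
          "#..",
          "#..",
          "###"],
}

def match_score(image, template):
    """Fraction of grid cells where image and template agree."""
    cells = [(i, j) for i in range(len(template)) for j in range(len(template[0]))]
    return sum(image[i][j] == template[i][j] for i, j in cells) / len(cells)

def recognize(image):
    return max(TEMPLATES, key=lambda letter: match_score(image, TEMPLATES[letter]))

# The same 'L' shifted down one row: its foot falls off the grid.
shifted_L = ["...",
             "#..",
             "#..",
             "#..",
             "#.."]
print(recognize(shifted_L), round(match_score(shifted_L, TEMPLATES["L"]), 2))
```

A human still reads the shifted pattern effortlessly as an L, but the literal match drops from 1.0 to 0.8, and every further shift, rotation, or change of font erodes it more. Handling all of these with stored pictures alone would require the implausibly large stock of templates the text describes.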
Prototypes
An alternative to template theory is based on prototype matching. Instead of comparing a visual array to a stored template, the array is compared to a stored prototype, the prototype being a kind of average of many other patterns. The perceived array does not need to match the prototype exactly in order for recognition to occur, so long as there is a family resemblance. For example, if I am looking down on a London bus from above, its qualities of size and redness enable me to recognize it as a bus, even though the shape does not match my prototype. There is good evidence that people do form prototypes after exposure to a series of related stimuli.
For instance, in one study people were shown a series of patterns that were related to a prototype, but not the prototype itself. When later shown a series of distractor patterns plus the prototype, the participants identified the prototype as a pattern they had seen previously.
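Prototype formation can be sketched as simple averaging: expose the model to several distorted exemplars, store only their mean, and let recognition be a similarity test against that mean. The feature vectors and noise level below are invented; the qualitative result mirrors the study just described, in which the never-seen prototype feels more familiar than the studied items.

```python
import random

def make_exemplar(prototype, noise=0.3):
    """Distort each feature value of the prototype a little."""
    return [v + random.gauss(0.0, noise) for v in prototype]

def learn_prototype(exemplars):
    """Store only the average of the experienced patterns."""
    return [sum(values) / len(exemplars) for values in zip(*exemplars)]

def similarity(a, b):
    return -sum((x - y) ** 2 for x, y in zip(a, b))  # negative squared distance

true_prototype = [1.0, 0.0, 1.0, 1.0, 0.0]          # hypothetical feature vector
studied = [make_exemplar(true_prototype) for _ in range(20)]
stored = learn_prototype(studied)

# The never-seen prototype is typically closer to the stored average
# than the individual exemplars that were actually studied.
print("prototype:   ", round(similarity(true_prototype, stored), 3))
print("studied item:", round(similarity(studied[0], stored), 3))
```

Because the stored average washes out the idiosyncratic noise of each exemplar, the prototype itself ends up closest to what was stored, which is why participants "recognize" it.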
Feature Analysis
Feature-matching theories propose that we decompose visual patterns into a set of critical features, which we then try to match against features stored in memory. For example, in memory I have stored the information that the letter "Z" comprises two horizontal lines, one oblique line, and two acute angles, whereas the letter "Y" has one vertical line, two oblique lines, and one acute angle. I have similar stored knowledge about other letters of the alphabet. When I am presented with a letter of the alphabet, the process of recognition involves identifying the types of lines and angles and comparing these to stored information about all letters of the alphabet. If presented with a "Z", as long as I can identify the features then I should recognise it as a "Z", because no other letter of the alphabet shares this combination of features. The best known model of this kind is Oliver Selfridge's Pandemonium.
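The Z/Y scheme just described translates directly into code. In this Pandemonium-flavored sketch, each stored letter description is a tally of features, and the letter whose tally best matches the observed features wins; the two feature inventories follow the text, while the scoring rule is an illustrative simplification.

```python
# Stored feature descriptions, following the text's examples for Z and Y.
LETTER_FEATURES = {
    "Z": {"horizontal": 2, "oblique": 1, "vertical": 0, "acute_angle": 2},
    "Y": {"horizontal": 0, "oblique": 2, "vertical": 1, "acute_angle": 1},
}

def recognize(observed):
    """Each 'cognitive demon' is scored by how far its letter's stored
    features differ from the observed ones; the closest letter wins."""
    def mismatch(letter):
        stored = LETTER_FEATURES[letter]
        return sum(abs(count - observed.get(feature, 0))
                   for feature, count in stored.items())
    return min(LETTER_FEATURES, key=mismatch)

print(recognize({"horizontal": 2, "oblique": 1, "acute_angle": 2}))  # -> Z
print(recognize({"oblique": 2, "vertical": 1, "acute_angle": 1}))    # -> Y
```

Because no other stored letter shares Z's exact combination of features, identifying the features suffices to identify the letter, just as the text argues.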
One source of evidence for feature matching comes from Hubel and Wiesel's research, which found that the visual cortex of cats contains neurons that only respond to specific features (e.g. one type of neuron might fire when a vertical line is presented, another type of neuron might fire if a horizontal line moving in a particular direction is shown).
Some authors have distinguished between local features and global features. In a paper titled Forest before trees David Navon suggested that "global" features are processed before "local" ones. He showed participants large letter "H"s or "S"s that were made up of smaller letters, either small Hs or small Ss. People were faster to identify the larger letter than the smaller ones, and the response time was the same regardless of whether the smaller letters (the local features) were Hs or Ss. However, when required to identify the smaller letters people responded more quickly when the large letter was of the same type as the smaller letters.
One difficulty for feature-matching theory comes from the fact that we are normally able to read slanted handwriting that does not seem to conform to the feature description given above. For example, if I write a letter "L" in a slanted fashion, I cannot match this to a stored description that states that L must have a vertical line. Another difficulty arises from trying to generalise the theory to the natural objects that we encounter in our environment.
12.02: Face Recognition Systems
Prosopagnosia
Faces provide information about one's gender, age, ethnicity, emotional state, and perhaps most importantly, they identify the owner. Thus, the ability to recognize an individual just by looking at their face is crucial for human social interaction. Prosopagnosia is a cognitive condition characterized by a relatively selective impairment in face recognition. The disorder can be acquired or developmental in nature, with the latter also referred to as "congenital" or "hereditary" prosopagnosia. The condition occurs in the absence of any neurological damage, socio-emotional dysfunction or lower-level visual deficits [4], and may affect 2–2.5% of the adult population [7] and 1.2–4% of those in middle childhood.
In the last 20 years, individuals with developmental prosopagnosia (DP) have been used to make theoretical inferences about the development and functioning of the cognitive and neural architecture of the typical and impaired face recognition system. Given that some individuals also report moderate-to-severe psychosocial consequences of the condition, there has been increasing interest in the accurate diagnosis of DP via objective testing. Many researchers diagnose the condition using a combination of the Cambridge Face Memory Test (CFMT [17]) and the Cambridge Face Perception Test (CFPT [18]) - regarded as the leading objective tests of face recognition - and a famous faces test. Participants are thought to meet the diagnostic criteria for DP when their scores are considered together, and in many cases this will mean that DP is determined when individuals score atypically on at least two of these three measures.
Unlike those with acquired prosopagnosia, those with DP have no point of comparison, nor do they experience an abrupt loss of their face recognition skills: many individuals tested in our laboratory did not become aware of their difficulties until mid or even late adulthood (see also [33,34]). This is likely due to a combination of reasons. For instance, many people with prosopagnosia can identify people via voice, gait, and general appearance and manner [15]. Face recognition difficulties have also been reported to be highly heritable (e.g., refs [39,40]), and individuals may be comparing their abilities to family members who are equally poor at recognizing faces. Consequently, these individuals may not become aware of their difficulties for a long period of time. Additionally, some people with DP devise their own strategies to recognize others and cope relatively well with their difficulties [33]. This may conceal the condition from other people, or even falsely suggest to the individuals themselves that they recognize others in the same manner as most of the general population.
If an unaffected person is to recognize the traits of DP in others (as would typically be required to identify the condition in children), they must first know that the condition exists and have an understanding of its behavioral manifestation on an everyday level.
12.03: Concepts and Categories
Categories
Consider the following set of objects: some dust, papers, a computer monitor, two pens, a cup, and an orange. What do these things have in common? Only that they all happen to be on my desk as I write this. This set of things can be considered a category, a set of objects that can be treated as equivalent in some way. But, most of our categories seem much more informative— they share many properties. For example, consider the following categories: trucks, wireless devices, weddings, psychopaths, and trout. Although the objects in a given category are different from one another, they have many commonalities. When you know something is a truck, you know quite a bit about it. The psychology of categories concerns how people learn, remember, and use informative categories such as trucks or psychopaths.
The mental representations we form of categories are called concepts. There is a category of trucks in the world, and I also have a concept of trucks in my head. We assume that people’s concepts correspond more or less closely to the actual category, but it can be useful to distinguish the two, as when someone’s concept is not really correct.
Concepts are at the core of intelligent behavior. We expect people to be able to know what to do in new situations and when confronting new objects. If you go into a new classroom and see chairs, a blackboard, a projector, and a screen, you know what these things are and how they will be used. You’ll sit on one of the chairs and expect the instructor to write on the blackboard or project something onto the screen. You do this even if you have never seen any of these particular objects before, because you have concepts of classrooms, chairs, projectors, and so forth, that tell you what they are and what you’re supposed to do with them. Furthermore, if someone tells you a new fact about the projector—for example, that it has a halogen bulb—you are likely to extend this fact to other projectors you encounter. In short, concepts allow you to extend what you have learned about a limited number of objects to a potentially infinite set of entities.
You know thousands of categories, most of which you have learned without careful study or instruction. Although this accomplishment may seem simple, we know that it isn’t, because it is difficult to program computers to solve such intellectual tasks. If you teach a learning program that a robin, a swallow, and a duck are all birds, it may not recognize a cardinal or peacock as a bird. As we’ll shortly see, the problem is that objects in categories are often surprisingly diverse.
Concepts are not unique to adult humans: animals and human infants have them as well (Mareschal, Quinn, & Lea, 2010). Squirrels may have a concept of predators, for example, that is specific to their own lives and experiences. However, animals likely have many fewer concepts and cannot understand complex concepts such as mortgages or musical instruments.
Nature of Categories
Traditionally, it has been assumed that categories are well-defined. This means that you can give a definition that specifies what is in and out of the category. Such a definition has two parts. First, it provides the necessary features for category membership: What must objects have in order to be in it? Second, those features must be jointly sufficient for membership: If an object has those features, then it is in the category. For example, if I defined a dog as a four-legged animal that barks, this would mean that every dog is four-legged, an animal, and barks, and also that anything that has all those properties is a dog.
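Stated as code, a well-defined category is just a predicate: each conjunct is a necessary feature, and the conjunction of them is sufficient. The toy definition below is the text's own dog example, and running it shows exactly where the next paragraph finds trouble.

```python
def is_dog(thing):
    """The text's toy definition: a four-legged animal that barks.
    Each feature is claimed necessary; jointly they are claimed sufficient."""
    return thing["legs"] == 4 and thing["is_animal"] and thing["barks"]

fido = {"legs": 4, "is_animal": True, "barks": True}
old_barkless_dog = {"legs": 4, "is_animal": True, "barks": False}

print(is_dog(fido))              # True
print(is_dog(old_barkless_dog))  # False, yet no one doubts she is a dog
```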
Unfortunately, it has not been possible to find definitions for many familiar categories. Definitions are neat and clear-cut; the world is messy and often unclear. For example, consider our definition of dogs. In reality, not all dogs have four legs; not all dogs bark. I knew a dog that lost her bark with age (this was an improvement); no one doubted that she was still a dog. It is often possible to find some necessary features (e.g., all dogs have blood and breathe), but these features are generally not sufficient to determine category membership (you also have blood and breathe but are not a dog).
Even in domains where one might expect to find clear-cut definitions, such as science and law, there are often problems. For example, many people were upset when Pluto was downgraded from its status as a planet to a dwarf planet in 2006. Upset turned to outrage when they discovered that there was no hard-and-fast definition of planethood: “Aren’t these astronomers scientists? Can’t they make a simple definition?” In fact, they couldn’t. After an astronomical organization tried to make a definition for planets, a number of astronomers complained that it might not include accepted planets such as Neptune and refused to use it. If everything looked like our Earth, our moon, and our sun, it would be easy to give definitions of planets, moons, and stars, but the universe has sadly not conformed to this ideal.
Fuzzy Categories
Borderline Items
Experiments also showed that the psychological assumptions of well-defined categories were not correct. Hampton (1979) asked subjects to judge whether a number of items were in different categories. He did not find that items were either clear members or clear nonmembers. Instead, he found many items that were just barely considered category members and others that were just barely not members, with much disagreement among subjects. Sinks were barely considered as members of the kitchen utensil category, and sponges were barely excluded. People just included seaweed as a vegetable and just barely excluded tomatoes and gourds. Hampton found that members and nonmembers formed a continuum, with no obvious break in people’s membership judgments. If categories were well defined, such examples should be very rare. Many studies since then have found such borderline members that are not clearly in or clearly out of the category.
McCloskey and Glucksberg (1978) found further evidence for borderline membership by asking people to judge category membership twice, separated by two weeks. They found that when people made repeated category judgments such as “Is an olive a fruit?” or “Is a sponge a kitchen utensil?” they changed their minds about borderline items—up to 22 percent of the time. So, not only do people disagree with one another about borderline items, they disagree with themselves! As a result, researchers often say that categories are fuzzy, that is, they have unclear boundaries that can shift over time.
Typicality
A related finding that turns out to be most important is that even among items that clearly are in a category, some seem to be “better” members than others (Rosch, 1973). Among birds, for example, robins and sparrows are very typical. In contrast, ostriches and penguins are very atypical (meaning not typical). If someone says, “There’s a bird in my yard,” the image you have will be of a smallish passerine bird such as a robin, not an eagle or hummingbird or turkey.
You can find out which category members are typical merely by asking people to rate them. Typicality is perhaps the most important variable in predicting how people interact with categories, influencing everything from how quickly people can categorize an item to how readily they learn and remember it.
We can understand the two phenomena of borderline members and typicality as two sides of the same coin. Think of the most typical category member: This is often called the category prototype. Items that are less and less similar to the prototype become less and less typical. At some point, these less typical items become so atypical that you start to doubt whether they are in the category at all. Is a rug really an example of furniture? It’s in the home like chairs and tables, but it’s also different from most furniture in its structure and use. From day to day, you might change your mind as to whether this atypical example is in or out of the category. So, changes in typicality ultimately lead to borderline members.
Source of Typicality
Intuitively, it is not surprising that robins are better examples of birds than penguins are, or that a table is a more typical kind of furniture than is a rug. But given that robins and penguins are known to be birds, why should one be more typical than the other? One possible answer is the frequency with which we encounter the object: We see a lot more robins than penguins, so they must be more typical. Frequency does have some effect, but it is actually not the most important variable (Rosch, Simpson, & Miller, 1976). For example, I see both rugs and tables every single day, but one of them is much more typical as furniture than the other.
The best account of what makes something typical comes from Rosch and Mervis’s (1975) family resemblance theory. They proposed that items are likely to be typical if they (a) have the features that are frequent in the category and (b) do not have features frequent in other categories. Let’s compare two extremes, robins and penguins. Robins are small flying birds that sing, live in nests in trees, migrate in winter, hop around on your lawn, and so on. Most of these properties are found in many other birds. In contrast, penguins do not fly, do not sing, do not live in nests or in trees, do not hop around on your lawn. Furthermore, they have properties that are common in other categories, such as swimming expertly and having wings that look and act like fins. These properties are more often found in fish than in birds.
According to Rosch and Mervis, then, it is not because a robin is a very common bird that makes it typical. Rather, it is because the robin has the shape, size, body parts, and behaviors that are very common among birds—and not common among fish, mammals, bugs, and so forth.
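Rosch and Mervis's family resemblance idea can be computed directly: credit each of an item's features by how many category members share it, and subtract credit for features shared with a contrast category. The miniature bird and fish feature lists below are invented for illustration; only the scoring logic matters.

```python
BIRDS = {
    "robin":   {"flies", "sings", "nests_in_trees", "small"},
    "sparrow": {"flies", "sings", "nests_in_trees", "small"},
    "eagle":   {"flies", "nests_in_trees", "large"},
    "penguin": {"swims", "fin_like_wings", "large"},
}
FISH = {
    "trout": {"swims", "fin_like_wings", "small"},
    "shark": {"swims", "fin_like_wings", "large"},
}

def family_resemblance(item, category, contrast):
    """Within-category feature frequency minus contrast-category frequency."""
    score = 0
    for feature in category[item]:
        score += sum(feature in features for features in category.values())
        score -= sum(feature in features for features in contrast.values())
    return score

for bird in BIRDS:
    print(f"{bird:8s} {family_resemblance(bird, BIRDS, FISH):3d}")
```

Robins and sparrows come out with high scores because their features recur across birds and rarely appear among fish; the penguin's score is dragged down by swimming and fin-like wings, precisely the "more typical of fish" properties the text mentions.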
In a classic experiment, Rosch and Mervis (1975) made up two new categories, with arbitrary features. Subjects viewed example after example and had to learn which example was in which category. Rosch and Mervis constructed some items that had features that were common in the category and other items that had features less common in the category. The subjects learned the first type of item before they learned the second type. Furthermore, they then rated the items with common features as more typical. In another experiment, Rosch and Mervis constructed items that differed in how many features were shared with a different category. The more features were shared, the longer it took subjects to learn which category the item was in. These experiments, and many later studies, support both parts of the family resemblance theory.
Category Hierarchies
Many important categories fall into hierarchies, in which more concrete categories are nested inside larger, abstract categories. For example, consider the categories: brown bear, bear, mammal, vertebrate, animal, entity. Clearly, all brown bears are bears; all bears are mammals; all mammals are vertebrates; and so on. Any given object typically does not fall into just one category—it could be in a dozen different categories, some of which are structured in this hierarchical manner. Examples of biological categories come to mind most easily, but within the realm of human artifacts, hierarchical structures can readily be found: desk chair, chair, furniture, artifact, object.
Brown (1958), a child language researcher, was perhaps the first to note that there seems to be a preference for which category we use to label things. If your office desk chair is in the way, you’ll probably say, “Move that chair,” rather than “Move that desk chair” or “piece of furniture.” Brown thought that the use of a single, consistent name probably helped children to learn the name for things. And, indeed, children’s first labels for categories tend to be exactly those names that adults prefer to use (Anglin, 1977).
This preference is referred to as a preference for the basic level of categorization, and it was first studied in detail by Eleanor Rosch and her students (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). The basic level represents a kind of Goldilocks effect, in which the category used for something is not too small (northern brown bear) and not too big (animal), but is just right (bear). The simplest way to identify an object’s basic-level category is to discover how it would be labeled in a neutral situation. Rosch et al. (1976) showed subjects pictures and asked them to provide the first name that came to mind. They found that 1,595 names were at the basic level, with 14 more specific names (subordinates) used. Only once did anyone use a more general name (superordinate). Furthermore, in printed text, basic-level labels are much more frequent than most subordinate or superordinate labels (e.g., Wisniewski & Murphy, 1989).
The preference for the basic level is not merely a matter of labeling. Basic-level categories are usually easier to learn. As Brown noted, children use these categories first in language learning, and superordinates are especially difficult for children to fully acquire. People are faster at identifying objects as members of basic-level categories (Rosch et al., 1976).
Rosch et al. (1976) initially proposed that basic-level categories cut the world at its joints, that is, merely reflect the big differences between categories like chairs and tables or between cats and mice that exist in the world. However, it turns out that which level is basic is not universal. North Americans are likely to use names like tree, fish, and bird to label natural objects. But people in less industrialized societies seldom use these labels and instead use more specific words, equivalent to elm, trout, and finch (Berlin, 1992). Because Americans and many other people living in industrialized societies know so much less than our ancestors did about the natural world, our basic level has “moved up” to what would have been the superordinate level a century ago. Furthermore, experts in a domain often have a preferred level that is more specific than that of non-experts. Birdwatchers see sparrows rather than just birds, and carpenters see roofing hammers rather than just hammers (Tanaka & Taylor, 1991). This all suggests that the preferred level is not (only) based on how different categories are in the world, but that people’s knowledge and interest in the categories has an important effect.
One explanation of the basic-level preference is that basic-level categories are more differentiated: The category members are similar to one another, but they are different from members of other categories (Murphy & Brownell, 1985; Rosch et al., 1976). (The alert reader will note a similarity to the explanation of typicality I gave above. However, here we’re talking about the entire category and not individual members.) Chairs are pretty similar to one another, sharing a lot of features (legs, a seat, a back, similar size and shape); they also don’t share that many features with other furniture. Superordinate categories are not as useful because their members are not very similar to one another. What features are common to most furniture? There are very few. Subordinate categories are not as useful, because they’re very similar to other categories: Desk chairs are quite similar to dining room chairs and easy chairs. As a result, it can be difficult to decide which subordinate category an object is in (Murphy & Brownell, 1985). Experts can differ from novices in which categories are the most differentiated, because they know different things about the categories, therefore changing how similar the categories are.
The claim that the basic level is learned first is somewhat controversial, as some say that infants learn superordinates before anything else (Mandler, 2004). If that is true, then it is very puzzling that older children have great difficulty learning the correct meaning of words for superordinates, as well as in learning artificial superordinate categories (Horton & Markman, 1980; Mervis, 1987). It seems fair to say that the answer to this question is not yet fully known.
Conclusion: So, what is Cognitive Psychology?
Ultimately, cognitive psychology is the scientific investigation of human cognition, that is, all our mental abilities: perceiving, learning, remembering, thinking, reasoning, and understanding. It is closely related to the highly interdisciplinary field of cognitive science and is influenced by artificial intelligence, computer science, philosophy, anthropology, linguistics, biology, physics, and neuroscience.
The term “cognition” stems from the Latin word “cognoscere” or "to know". Fundamentally, cognitive psychology studies how people acquire and apply knowledge or information.
Imagine the following situation: A young man, let’s call him Knut, is sitting at his desk, reading some papers which he needs to complete a psychology assignment. In his right hand he holds a cup of coffee. With his left one he reaches for a bag of sweets without removing the focus of his eyes from the paper. Suddenly he stares up to the ceiling of his room and asks himself: “What is happening here?”
Probably everybody has had experiences like the one described above. Even though at first sight there is nothing exciting happening in this everyday situation, a lot of what is going on here is highly interesting particularly for researchers and students in the field of Cognitive Psychology. They are involved in the study of lots of incredibly fascinating processes which we are not aware of in this situation. Roughly speaking, an analysis of Knut's situation by Cognitive Psychologists would look like this:
Knut has a problem; he really needs to do his assignment. To solve this problem, he has to perform loads of cognition. The light reaching his eyes is transduced into electrical signals traveling through several stations to his visual cortex. Meanwhile, complex nets of neurons filter the information flow and compute the contrast, colour, patterns, positions in space, and motion of the objects in Knut's environment. Stains and lines on the page become words; words get meaning; the meaning is put into context, analyzed for its relevance to Knut's problem, and finally maybe stored in some part of his memory. At the same time an appetite for sweets is creeping from Knut's hypothalamus, a region in the brain responsible for controlling the needs of an organism. This appetite finally causes Knut to reach out for his sweets.
Now, let us take a look into the past to see how Cognitive Psychology developed its terminology and methods for interpreting ourselves on the basis of brain, behaviour, and theory.
01: Cognitive Psychology and the Brain
Early thinkers claimed that knowledge was stored in the brain.
Renaissance and Beyond
Renaissance philosophers of the 17th century generally agreed with Nativists and even tried to show the structure and functions of the brain graphically. But empiricist philosophers also had very important ideas. According to David Hume, the internal representations of knowledge are formed obeying particular rules, and these creations and transformations take effort and time. This is actually the basis of much current research in Cognitive Psychology. In the 19th century, Wilhelm Wundt and Franciscus Cornelis Donders conducted experiments measuring the reaction time required for a response, the further interpretation of which gave rise to Cognitive Psychology some 55 years later.
20th Century and the Cognitive Revolution
During the first half of the 20th century, a radical turn in the investigation of cognition took place. Behaviourists like Burrhus Frederic Skinner claimed that internal mental operations – such as attention, memory, and thinking – are only hypothetical constructs that cannot be observed or proven. Therefore, Behaviourists asserted, mental constructs are not as important and relevant as the study and experimental analysis of behaviour (directly observable data) in response to some stimulus. According to Watson and Skinner, man could be objectively studied only in this way. The popularity of behaviourist theory led the investigation of mental events and processes to be abandoned for about 50 years.
In the 1950s, scientific interest returned to attention, memory, images, language processing, thinking, and consciousness. The "failure" of Behaviourism heralded a new period in the investigation of cognition, called the Cognitive Revolution. This was characterized by a revival of already existing theories and the rise of new ideas such as various communication theories. These theories emerged mainly from the previously created information theory, giving rise to experiments in signal detection and attention in order to form a theoretical and practical understanding of communication.
Modern linguists suggested new theories on language and grammar structure, which were correlated with cognitive processes. Chomsky’s Generative Grammar and Universal Grammar theory, proposed language hierarchy, and his critique of Skinner’s “Verbal Behaviour” are all milestones in the history of Cognitive Science. Theories of memory and models of its organization gave rise to models of other cognitive processes. Computer science, especially artificial intelligence, re-examined basic theories of problem solving and the processing and storage of memory, language processing and acquisition.
For clarification: Further discussion on the "behaviorist" history.
Although the above account reflects the most common version of the rise and fall of behaviorism, it is a misrepresentation. In order to understand the founding of cognitive psychology, it must be placed in an accurate historical context. Theoretical disagreements exist in every science, but these disagreements should be based on an honest interpretation of the opposing view. There is a general tendency to draw a false equivalence between Skinner and Watson. It is true that Watson rejected the role that mental or conscious events played in the behavior of humans; in hindsight, this was an error. However, if we examine the historical context of Watson's position, we can better understand why he went to such extremes. He, like many young psychologists of the time, was growing frustrated with the lack of practical progress in psychological science. The focus on consciousness was yielding inconsistent, unreliable, and conflicting data. Excited by the progress coming from Pavlov's work with elicited responses, and looking to the natural sciences for inspiration, Watson rejected the study of mental events and pushed psychology to study stimulus-response relations as a means to better understand human behavior. This new school of psychology, "behaviorism," became very popular.

Skinner's school of thought, although inspired by Watson, takes a very different approach to the study of unobservable mental events. Skinner proposed that the distinction between "mind" and "body" brought with it irreconcilable philosophical baggage. He proposed that the events going on "within the skin," previously referred to as mental events, be called private events. This would bring the private experiences of thinking, reasoning, feeling, and the like back into the scientific fold of psychology. However, Skinner proposed that these were things we are doing rather than events going on in a theorized mental place. For Skinner, the question was not whether a mental world exists; it was whether we need to appeal to the existence of a mental world in order to explain the things going on inside our heads, just as the natural sciences ask whether we need to assume the existence of a creator in order to account for phenomena in the natural world.

For Skinner, it was an error for psychologists to point to these private (mental) events as causes of behavior. Instead, he suggested that these too had to be explained through the study of how one's behavior evolves as a matter of experience. For example, we could say that a student studies because she "expects" to do better on an exam if she does. To "expect" might sound like an acceptable explanation for the behavior of studying; however, Skinner would ask why she "expects." The answer to this question would yield the true explanation of why the student is studying. To "expect" is to do something, to behave "in our head," and thus must also be explained.
The cognitive psychologist Henry Roediger pointed out that many psychologists erroneously subscribe to the stereotyped version of behaviorism's history presented above. He also pointed to the successful rebuttal of Chomsky's review of Verbal Behavior. The evidence for the utility of Skinner's book can be seen in the abundance of actionable data it has generated, including therapies unmatched by any modern linguistic account of language. Roediger reminded his readers that, in fact, we all measure behavior; some simply choose to make more assumptions about its origins than others. He recalled how, even as a cognitive psychologist, he has been criticized for not making more assumptions about his data. The law of parsimony tells us that when choosing an explanation for a set of data about observable behavior (the data all psychologists collect), we must be careful not to make assumptions beyond those necessary to explain the data. This is where the main division lies between modern-day behavior analysts and cognitive psychologists: not in the rejection of our private experiences, but in how these experiences are studied. Behavior analysts study them in relation to our learning history and the brain correlates of that history. They use this information to design environments that change our private experience by changing our interaction with the world. After all, it is through our interaction with the world that our private experiences evolve. This is a far cry from the mechanical stimulus-response psychology of John Watson. Academic honesty requires that we make a good faith effort to understand what we wish to criticize; in missing this, many psychologists also miss the many successful real-world applications that Skinner's analysis has generated.
Neuroinformatics, which is inspired by the natural structure of the human nervous system, tries to build neuronal structures out of artificial neurons. In addition, neuroinformatics serves as a source of evidence for psychological models, for example models of memory. An artificial neuron network "learns" words and behaves like "real" neurons in the brain. If the results of the artificial neuron network are quite similar to the results of real memory experiments, this supports the model. In this way psychological models can be "tested". Furthermore, this work helps in building artificial neuron networks that possess skills similar to humans', such as face recognition.
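The idea of "testing" a psychological model with artificial neurons can be made concrete with a minimal Hebbian pattern associator: store a few patterns by strengthening connections between co-active units, then check whether a degraded cue still retrieves the stored pattern, much as a partial cue prompts human recall. Everything here is a toy illustration, not any specific published model.

```python
def train(patterns):
    """Hebbian learning: a connection strengthens whenever two units
    are active together. Patterns are lists of +1/-1 values."""
    n = len(patterns[0])
    weights = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    weights[i][j] += p[i] * p[j]
    return weights

def recall(weights, cue):
    """One update step: each unit takes the sign of its weighted input."""
    n = len(cue)
    net = [sum(weights[i][j] * cue[j] for j in range(n)) for i in range(n)]
    return [1 if x >= 0 else -1 for x in net]

stored = [[1, 1, -1, -1, 1, -1],
          [-1, 1, 1, -1, -1, 1]]
weights = train(stored)

noisy_cue = [1, 1, -1, -1, -1, -1]   # first pattern with one unit flipped
print(recall(weights, noisy_cue) == stored[0])  # True: the pattern is restored
```

If a network like this reproduces the error patterns and cue effects seen in human memory experiments, that similarity counts as support for the underlying model, which is the logic the paragraph describes.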
If more were understood about the ways humans process information, it would be much simpler to build artificial structures with the same or similar abilities. The area of cognitive development research tries to describe how children develop their cognitive abilities from infancy to adolescence. Theories of knowledge representation were at first strongly concerned with sensory inputs. Current scientists claim to have evidence that our internal representation of reality is not a one-to-one reproduction of the physical world; rather, it is stored in some abstract or neurochemical code. Tolman, Bartlett, Norman, and Rumelhart conducted experiments on cognitive mapping, in which inner knowledge seemed not only to be related to sensory input, but also to be modified by a kind of knowledge network shaped by past experience.
Newer methods, like electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), have given researchers the possibility to measure brain activity and correlate it with mental states and processes. All these new approaches to the study of human cognition and psychology have defined the field of Cognitive Psychology, a fascinating field which tries to answer what is quite possibly the most interesting question posed since the dawn of reason. There is still a lot to discover, to answer and to ask again, but first we want to make you more familiar with the concept of Cognitive Psychology.
1.02: What is Cognitive Psychology
The easiest answer to this question is: “Cognitive Psychology is the study of thinking and the processes underlying mental events.” Of course this creates the new problem of what a mental event actually is. There are many possible answers for this.
Let us look at Knut again to give you some more examples and make things clearer. He needs to focus on reading his paper, so all his attention is directed at the words and sentences he perceives through his visual pathways. Other stimuli and information that enter his cognitive apparatus – maybe some street noise or the fly crawling along a window – are not that relevant at this moment and are therefore attended to much less. Many higher cognitive abilities are also subject to investigation. Knut's situation could be explained as a classical example of problem solving: he needs to get from his present state – an unfinished assignment – to a goal state – a completed assignment – and has certain operators to achieve that goal. Both Knut's short- and long-term memory are active. He needs his short-term memory to integrate what he is reading with the information from earlier passages of the paper. His long-term memory helps him remember what he learned in the lectures he took and what he read in other books. And of course Knut's ability to comprehend language enables him to make sense of the letters printed on the paper and to relate the sentences in a proper way.
This situation can be considered to reflect mental events like perception, comprehension and memory storage. Some scientists think that our emotions cannot be considered separate from cognition, so that hate, love, fear or joy are also sometimes looked at as part of our individual minds. Cognitive psychologists study questions like: How do we receive information about the outside world? How do we store it and process it? How do we solve problems? How is language represented?
Cognitive Psychology is the field of psychology that studies mental processes, including perception, thinking, memory, and judgment. A mainstay of cognitive psychology is the idea that sensation and perception are distinct processes.
1.03: Relations to Neuroscience
Cognitive Neuropsychology
Of course it would be very convenient if we could understand the nature of cognition without understanding the nature of the brain itself. But unfortunately it is very difficult, if not impossible, to build and test theories about our thinking in the absence of neurobiological constraints. Neuroscience comprises the study of neuroanatomy, neurophysiology, brain functions and related psychological and computer-based models. For years, investigations at the neuronal level were completely separated from those at the cognitive or psychological level. The thinking process is so vast and complex that there are too many conceivable ways in which cognitive operations could be accomplished.
Neurobiological data provide physical evidence for theoretical approaches to the investigation of cognition; they narrow the research space and make it much more exact. The correlation between brain pathology and behaviour supports scientists in their research: it has long been known that different types of brain damage – traumas, lesions and tumours – affect behaviour and cause changes in mental functions. The rise of new technologies allows us to see and investigate brain structures and processes never seen before, providing a wealth of information and material for building simulation models which help us understand processes in our mind. As neuroscience alone is not always able to explain all the observations made in laboratories, neurobiologists turn towards Cognitive Psychology in order to find models of brain and behaviour on an interdisciplinary level – Cognitive Neuropsychology. This “inter-science” acts as a bridge that connects and integrates the two most important domains of research on the human mind and their respective methods. Research at one level provides constraints, correlations and inspirations for research at the other.
Neuroanatomy Basics
The basic building blocks of the brain are a special sort of cells called neurons. There are approximately 100 billion neurons involved in information processing in the brain. When we look at the brain superficially we cannot see these neurons, but rather two halves called the hemispheres. The hemispheres may differ in size and function, as we will see later in the book, but principally each of them can be subdivided into four parts called lobes: the temporal, parietal, occipital and frontal lobe. This division of modern neuroscience is supported by the up- and down-bulging structure of the brain's surface. The bulges are called gyri (singular gyrus), the creases sulci (singular sulcus); they, too, are involved in information processing. The different tasks performed by different subdivisions of the brain, such as attention, memory and language, cannot be viewed as separate from each other; nevertheless, some parts play a key role in a specific task. For example, the parietal lobe has been shown to be responsible for orientation in space and our relation to it, while the occipital lobe is mainly responsible for visual perception and imagination. Summed up, brain anatomy poses some basic constraints on what is possible for us, and a better understanding will help us find better therapies for cognitive deficits as well as guide research for cognitive psychologists. It is one goal of our book to present the complex interactions between the different levels on which the brain can be described, and their implications for Cognitive Neuropsychology.
Methods
Newer methods, like EEG and fMRI, allow researchers to correlate the behaviour of a participant in an experiment with brain activity measured simultaneously. It is possible to record neurophysiological responses to certain stimuli or to find out which brain areas are involved in the execution of certain mental tasks. EEG measures electric potentials along the scalp through electrodes attached to a cap. While its spatial resolution is not very precise, its temporal resolution lies within the range of milliseconds. fMRI exploits the fact that increased brain activity goes along with increased blood flow in the active region; the haemoglobin in the blood has magnetic properties that are registered by the fMRI scanner. The spatial resolution of fMRI is very precise in comparison to EEG. On the other hand, its temporal resolution is only in the range of 1–2 seconds.
1.04: Conclusion
Remember the scenario described at the beginning of the chapter. Knut was asking himself “What is happening here?” It should have become clear that this question cannot be simply answered with one or two sentences. We have seen that the field of Cognitive Psychology comprises a lot of processes and phenomena of which every single one is subject to extensive research to understand how cognitive abilities are produced by our brain. In the following chapters of this WikiBook you will see how the different areas of research in Cognitive Psychology are trying to solve the initial question raised by Knut.
2.01: Introduction
Same place, different day. Knut is sitting at his desk again, staring at a blank paper in front of him, while nervously playing with a pen in his right hand. Just a few hours left to hand in his essay and he has not written a word. All of a sudden he smashes his fist on the table and cries out: "I need a plan!"
What Knut is confronted with is something each of us encounters in daily life. He has a problem – and he does not really know how to solve it. But what exactly is a problem? Are there strategies for solving problems? These are just a few of the questions we want to answer in this chapter.
We begin the chapter with a short description of what psychologists regard as a problem. Afterwards we present different approaches to problem solving, starting with the Gestalt psychologists and ending with modern search strategies connected to artificial intelligence. In addition, we consider how experts solve problems, and finally we take a closer look at two topics: the neurophysiological background on the one hand, and the question of what role evolution plays in problem solving on the other.
The most basic definition is: “A problem is any given situation that differs from a desired goal.” This definition is very useful for discussing problem solving in terms of evolutionary adaptation, as it allows us to understand every aspect of (human or animal) life as a problem. This includes issues like finding food in harsh winters, remembering where you left your provisions, making decisions about which way to go, and learning, repeating and varying all kinds of complex movements. Though all these problems were of crucial importance during the evolutionary process that created us the way we are, they are by no means solved exclusively by humans. We find a most amazing variety of solutions to them in nature (just consider, e.g., by which means a bat hunts its prey, compared to a spider). In this chapter we will mainly focus on problems that are not solved by animals or evolution, that is, all kinds of abstract problems (e.g. playing chess). Furthermore, we will not consider situations with an obvious solution to be problems. Imagine Knut decides to take a sip of coffee from the mug next to his right hand. He does not even have to think about how to do this. This is not because the situation itself is trivial (a robot capable of recognising the mug, deciding whether it is full, then grabbing it and moving it to Knut’s mouth would be a highly complex machine), but because in the context of all possible situations it is so trivial that it no longer is a problem our consciousness needs to be bothered with. The problems we discuss in the following all require some conscious effort, though some seem to be solved without our being able to say exactly how we arrived at the solution. Still, we will find that the strategies we use to solve these problems are often applicable to more basic problems, too.
Non-trivial, abstract problems can be divided into two groups:
Well-defined Problems
For many abstract problems it is possible to find an algorithmic solution. We call all those problems well-defined that can be properly formalised, which comes along with the following properties:
• The problem has a clearly defined given state. This might be the line-up of a chess game, a given formula you have to solve, or the set-up of the towers of Hanoi game (which we will discuss later).
• There is a finite set of operators, that is, of rules you may apply to the given state. For the chess game, e.g., these would be the rules that tell you which piece you may move to which position.
• Finally, the problem has a clear goal state: the equation is solved for x, all discs are moved to the right stack, or the other player is checkmated.
Not surprisingly, a problem that fulfils these requirements can be implemented algorithmically (also see convergent thinking). Therefore many well-defined problems can be very effectively solved by computers, like playing chess.
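As a minimal sketch of what such a formalisation might look like (the toy problem below is invented for illustration and is not from the text), a well-defined problem can be captured by an explicit given state, a finite set of operators, and a goal test:

```python
# A tiny well-defined problem: the given state is a counter at 0,
# the operators are "add 3" and "add 5", and the goal state is 11.
initial_state = 0
operators = {"add3": lambda s: s + 3, "add5": lambda s: s + 5}

def is_goal(state):
    return state == 11

# A solution is simply a sequence of operators that leads from the
# given state to the goal state: 0 -> 3 -> 6 -> 11.
solution = ["add3", "add3", "add5"]
state = initial_state
for name in solution:
    state = operators[name](state)
assert is_goal(state)
```

Because all three components are explicit, a computer can search for the operator sequence mechanically – which is exactly what the search strategies discussed later in this chapter do.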
Ill-defined Problems
Though many problems can be properly formalised (sometimes only if we accept an enormous complexity), there are others where this is not the case. Good examples are all kinds of tasks that involve creativity and, generally speaking, all problems for which it is not possible to clearly define a given state and a goal state. Formalising a problem of the kind “Please paint a beautiful picture” may be impossible. Still, this is a problem most people would be able to approach in one way or another, even if the results differ greatly from person to person. And while Knut might judge that picture X is gorgeous, you might completely disagree.
Nevertheless, ill-defined problems often involve sub-problems that are perfectly well-defined. On the other hand, many everyday problems that seem completely well-defined involve – when examined in detail – a great deal of creativity and ambiguity.
If we think of Knut's fairly ill-defined task of writing an essay, he will not be able to complete it without first understanding the text he has to write about. This step is the first – well-defined – subgoal Knut has to reach.
2.02: Restructuring – The Gestalt Approach
One dominant approach to Problem Solving originated from Gestalt psychologists in the 1920s. Their understanding of problem solving emphasises behaviour in situations requiring relatively novel means of attaining goals and suggests that problem solving involves a process called restructuring. Since this indicates a perceptual approach, two main questions have to be considered:
• How is a problem represented in a person's mind?
• How does solving this problem involve a reorganisation or restructuring of this representation?
This is what we are going to do in the following part of this section.
How is a problem represented in the mind?
In current research, internal and external representations are distinguished: the former are regarded as the knowledge and structure of memory, while the latter are defined as the knowledge and structure of the environment, such as physical objects or symbols whose information can be picked up and processed by the perceptual system autonomously. In contrast, the information in internal representations has to be retrieved by cognitive processes.
Generally speaking, problem representations are models of the situation as experienced by the agent. Representing a problem means to analyse it and split it into separate components:
• objects, predicates
• state space
• operators
• selection criteria
Therefore the efficiency of Problem Solving depends on the underlying representations in a person’s mind, which usually also involves personal aspects. Analysing the problem domain according to different dimensions, i.e., changing from one representation to another, results in arriving at a new understanding of a problem. This is basically what is described as restructuring. The following example illustrates this:
Two boys of different age are playing badminton. The older one is a more skilled player, and therefore it is predictable for the outcome of usual matches who will be the winner. After some time and several defeats the younger boy finally loses interest in playing, and the older boy faces a problem, namely that he has no one to play with anymore.
The usual options, according to M. Wertheimer (1945/82), at this point of the story range from 'offering candy' and 'playing another game' to 'not playing to full ability' and 'shaming the younger boy into playing'. All those strategies aim at making the younger stay.
And this is what the older boy comes up with: He proposes that they should try to keep the bird in play as long as possible. Thus they change from a game of competition to one of cooperation. They'd start with easy shots and make them harder as their success increases, counting the number of consecutive hits. The proposal is happily accepted and the game is on again.
The key in this story is that the older boy restructured the problem and found out that he used an attitude towards the younger which made it difficult to keep him playing. With the new type of game the problem is solved: the older is not bored, the younger not frustrated.
New representations can make a problem either more difficult or much easier to solve. The latter case seems related to insight – the sudden realisation of a problem’s solution.
Insight
There are two very different ways of approaching a goal-oriented situation. In one case an organism readily reproduces the response to the given problem from past experience. This is called reproductive thinking.
The second way requires something new and different to achieve the goal, prior learning is of little help here. Such productive thinking is (sometimes) argued to involve insight. Gestalt psychologists even state that insight problems are a separate category of problems in their own right.
Tasks that might involve insight usually have certain features: they require something new and non-obvious to be done, and in most cases they are difficult enough that the initial solution attempt is predictably unsuccessful. When you solve a problem of this kind you often have a so-called "aha-experience" – the solution pops up all of a sudden. At one moment you have no idea of the answer and do not even feel you are making any progress trying out different ideas, but in the next second the problem is solved.
For all those readers who would like to experience such an effect, here is an example of an insight problem: Knut is given four pieces of a chain, each made up of three links. The task is to join them all into a single closed loop, and he has only 15 cents: opening a link costs 2 cents, closing a link costs 3 cents. What should Knut do?
(Solution: open all three links of one piece – 6 cents – and use them to connect the remaining three pieces into a loop – 9 cents.)
To show that solving insight problems involves restructuring, psychologists created a number of problems that were more difficult to solve for participants who had been provided with previous experience, since it was harder for them to change the representation of the given situation (see Fixation). Sometimes a hint may lead to the insight required to solve the problem – and this is also true for involuntarily given ones. For instance, it might help you in a memory game if someone accidentally drops a card on the floor and you see its other side. Although such help is not obviously a hint, its effect does not differ from that of intended help.
For non-insight problems the opposite is the case. Solving arithmetical problems, for instance, requires schemas, through which one can get to the solution step by step.
Fixation
Sometimes, previous experience or familiarity can even make problem solving more difficult. This is the case whenever habitual directions get in the way of finding new directions – an effect called fixation.
Functional fixedness
Functional fixedness concerns the solution of object-use problems. The basic idea is that when the usual way of using an object is emphasised, it will be far more difficult for a person to use that object in a novel manner. An example for this effect is the candle problem: Imagine you are given a box of matches, some candles and tacks. On the wall of the room there is a cork-board. Your task is to fix the candle to the cork-board in such a way that no wax will drop on the floor when the candle is lit. – Got an idea?
Explanation: The clue is just the following: when people are confronted with a problem and given certain objects to solve it, it is difficult for them to figure out that they could use them in a different (not so familiar or obvious) way. In this example the box has to be recognised as a support rather than as a container.
A further example is the two-string problem: Knut is left in a room with a chair and a pair of pliers and given the task of tying together two strings that are hanging from the ceiling. The problem he faces is that he can never reach both strings at the same time because they are too far apart. What can Knut do?
Solution: Knut has to recognise that he can use the pliers in a novel function – as the weight of a pendulum. He can tie them to one of the strings, push it away, hold the other string and just wait for the first one to swing towards him. If necessary, Knut can even climb on the chair, but he is not that small, we suppose . . .
Mental fixedness
Functional fixedness, as involved in the examples above, illustrates a mental set – a person’s tendency to respond to a given task in a manner based on past experience. Because Knut maps an object onto a particular function, he has difficulty varying the way he uses it (pliers as the pendulum's weight).
One approach to studying fixation was to study wrong-answer verbal insight problems. It was shown that, when failing to solve a problem, people tend to give an incorrect answer rather than no answer at all.
A typical example: people are told that on a lake the area covered by water lilies doubles every 24 hours and that it takes 60 days to cover the whole lake. Then they are asked how many days it takes to cover half the lake. The typical response is '30 days', whereas 59 days is correct: since the area doubles each day, the lake must be half covered exactly one day before it is fully covered.
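A few lines of Python make the doubling argument concrete (the unit of area is chosen arbitrarily for illustration):

```python
lake = 2 ** 60   # choose units so the lilies cover 1 unit on day 0 and the lake is full on day 60
area, day = 1, 0
while area < lake / 2:
    area *= 2    # the covered area doubles every 24 hours
    day += 1
print(day)       # 59 -- half covered on day 59, fully covered on day 60
```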
These wrong solutions are due to an inaccurate interpretation, and hence representation, of the problem. This can happen because of sloppiness (a quick, shallow reading of the problem and/or weak monitoring of the efforts made to come to a solution). In this case, error feedback should help people reconsider the problem features, notice the inadequacy of their first answer, and find the correct solution. If, however, people are truly fixated on their incorrect representation, being told that the answer is wrong does not help. In a study by P.I. Dallop and R.L. Dominowski in 1992 these two possibilities were contrasted: in approximately one third of the cases error feedback led to right answers, so only approximately one third of the wrong answers were due to inadequate monitoring.[1]
Another approach is the study of examples with and without a preceding analogous task. In cases such as the water-jug task, analogous thinking can indeed lead to a correct solution, but taking a different route might make the case much simpler:
Imagine Knut again. This time he is given three jugs with different capacities and is asked to measure out a required amount of water. Of course he is not allowed to use anything besides the jugs and as much water as he likes. In the first case the sizes are 127 litres, 21 litres and 3 litres, while 100 litres are desired.
In the second case Knut is asked to measure 18 litres from jugs of 39, 15 and 3 litres.
In fact, participants who had faced the 100-litre task first carried its complicated solution over to the second task (fill the big jug, then pour off the middle jug once and the small jug twice: 127 − 21 − 2 × 3 = 100, and likewise 39 − 15 − 2 × 3 = 18). Others, who did not know about the complex task, solved the 18-litre case by simply adding three litres to 15.
2.03: Problem Solving as a Search Problem
The idea of regarding problem solving as a search problem originated with Alan Newell and Herbert Simon while they were trying to design computer programs that could solve certain problems. This led them to develop a program called General Problem Solver, which was able to solve well-defined problems by applying heuristics to the user's input. This input consisted of objects and the operations that could be performed on them.
As we already know, every problem is composed of an initial state, intermediate states and a goal state (also: desired or final state), while the initial and goal states characterise the situations before and after solving the problem. The intermediate states describe any possible situation between initial and goal state. The set of operators builds up the transitions between the states. A solution is defined as the sequence of operators which leads from the initial state across intermediate states to the goal state.
The simplest method to solve a problem, defined in these terms, is to search for a solution by just trying one possibility after another (also called trial and error).
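As a sketch of how such a search can be organised systematically rather than haphazardly, the following breadth-first search enumerates states for the water-jug task from the previous section (the state encoding and function name are our own illustration): a state records how much water each jug holds, and the operators are filling a jug, emptying it, or pouring one jug into another.

```python
from collections import deque

def solve_jugs(capacities, target):
    """Breadth-first search over jug states; returns the first sequence
    of states in which some jug holds exactly the target amount."""
    start = (0,) * len(capacities)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        state = path[-1]
        if target in state:
            return path
        for i, amount in enumerate(state):
            successors = [
                state[:i] + (capacities[i],) + state[i + 1:],  # fill jug i
                state[:i] + (0,) + state[i + 1:],              # empty jug i
            ]
            for j in range(len(state)):                        # pour jug i into jug j
                if i != j:
                    poured = min(amount, capacities[j] - state[j])
                    s = list(state)
                    s[i] -= poured
                    s[j] += poured
                    successors.append(tuple(s))
            for nxt in successors:
                if nxt not in seen:                            # avoid revisiting states
                    seen.add(nxt)
                    queue.append(path + [nxt])
    return None

# 18-litre case: the search finds 15 + 3 litres poured together into the big jug.
print(solve_jugs((39, 15, 3), 18))
```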
As already mentioned above, an organised search following a specific strategy might not be helpful for finding a solution to an ill-defined problem, since it is impossible to formalise such problems in a way that allows a search algorithm to find a solution.
As an example we can again take Knut and his essay: he has to work out his own opinion and formulate it, and he has to make sure he understands the source texts. But there are no predefined operators he can use; there is no recipe for how to arrive at an opinion, let alone for how to write it down.
Means-End Analysis
In Means-End Analysis you try to reduce the difference between the initial state and the goal state by creating subgoals until a subgoal can be reached directly (you probably know several examples of recursion, which works on this basis).
An example of a problem that can be solved by Means-End Analysis is the “Towers of Hanoi”:
Towers of Hanoi – A well defined problem
The initial state of this problem is described by the different-sized discs being stacked in order of size on the first of three pegs (the “start-peg”). The goal state is described by these discs being stacked on the third peg (the “end-peg”) in exactly the same order.
There are three operators:
• You are allowed to move one single disc from one peg to another one
• You are only able to move a disc if it is on top of one stack
• A disc cannot be put onto a smaller one.
In order to use Means-End Analysis we have to create subgoals. One possible way of doing this is described in the picture:
1. Moving the discs lying on the biggest one onto the second peg.
2. Shifting the biggest disc to the third peg.
3. Moving the other ones onto the third peg, too.
You can apply this strategy again and again in order to reduce the problem to the case where you only have to move a single disc – which is then something you are allowed to do.
Strategies of this kind can easily be formulated for a computer; the respective algorithm for the Towers of Hanoi would look like this:
1. move n-1 discs from A to B
2. move disc #n from A to C
3. move n-1 discs from B to C
where n is the total number of discs, A is the first peg, B the second, C the third one. Now the problem is reduced by one with each recursive loop.
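The three steps above translate directly into a short recursive function – a straightforward Python rendering of the algorithm (the peg labels are arbitrary):

```python
def hanoi(n, start, spare, end):
    """Print the moves that transfer n discs from start to end."""
    if n == 0:
        return
    hanoi(n - 1, start, end, spare)            # 1. move n-1 discs out of the way
    print(f"move disc {n}: {start} -> {end}")  # 2. move disc #n to the end-peg
    hanoi(n - 1, spare, start, end)            # 3. move the n-1 discs on top of it

hanoi(3, "A", "B", "C")  # prints the 7 moves needed for three discs
```

Each recursive call reduces the problem by one disc, exactly as the subgoal analysis describes.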
Means-end analysis is important for solving everyday problems – like getting the right train connection: first of all, you have to figure out where you catch the first train and where you want to arrive. Then you have to look for possible changes in case you do not get a direct connection. Finally, you have to figure out the best times of departure and arrival and the platforms on which you leave and arrive, and make it all fit together.
Analogies
Analogies describe similar structures and interconnect them to clarify and explain certain relations. In a recent study, for example, a song that gets stuck in your head is compared to an itch in the brain that can only be scratched by repeating the song over and over again.
Restructuring by Using Analogies
One special kind of restructuring, already mentioned in the discussion of the Gestalt approach, is analogical problem solving. Here, to find a solution to one problem – the so-called target problem – an analogous solution to another problem – the source problem – is presented.
An example of this kind of strategy is the radiation problem posed by K. Duncker in 1945:
As a doctor you have to treat a patient with a malignant, inoperable tumour buried deep inside the body. There exists a special kind of ray which is perfectly harmless at low intensity, but at sufficiently high intensity is able to destroy the tumour – as well as the healthy tissue on its way to it. What can be done to avoid the latter?
When participants in an experiment were asked this question, most of them could not come up with the appropriate answer. Then they were told a story that went something like this:
A General wanted to capture his enemy's fortress. He gathered a large army to launch a full-scale direct attack, but then learned that all the roads leading directly to the fortress were mined. The mines were designed in such a way that small groups of the fortress-owner's men could pass them safely, but any large group of men would set them off. The General then figured out the following plan: he divided his troops into several smaller groups and made each of them march down a different road, timed in such a way that the entire army would reunite exactly when reaching the fortress and could attack at full strength.
Here, the story about the General is the source problem, and the radiation problem is the target problem. The fortress is analogous to the tumour and the big army corresponds to the highly intensive ray; consequently a small group of soldiers represents a ray at low intensity. The solution to the problem is to split the ray up, as the General did with his army, and send the now harmless rays towards the tumour from different angles in such a way that they all meet when reaching it. No healthy tissue is damaged, but the tumour itself is destroyed by the ray at its full intensity.
M. Gick and K. Holyoak presented Duncker's radiation problem to groups of participants in 1980 and 1983. Only 10 percent of them were able to solve the problem right away; 30 percent could solve it when they had read the story of the General beforehand. After being given an additional hint – to use the story as help – 75 percent of them solved the problem.
With these results, Gick and Holyoak concluded that analogical problem solving depends on three steps (see the sketch after this list):
1. Noticing that an analogical connection exists between the source and the target problem.
2. Mapping corresponding parts of the two problems onto each other (fortress → tumour, army → ray, etc.)
3. Applying the mapping to generate a parallel solution to the target problem (using little groups of soldiers approaching from different directions → sending several weaker rays from different directions)
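A deliberately simple sketch can illustrate these three steps (the mapping table and phrasing below are our own invention): once the corresponding parts are mapped (step 2), the source solution can be mechanically rewritten as a parallel solution to the target problem (step 3).

```python
# Step 2: map corresponding parts of the source problem onto the target.
mapping = {
    "army": "ray",
    "fortress": "tumour",
    "small groups of soldiers": "low-intensity rays",
    "roads": "angles of approach",
}

# Step 1 noticed the analogy; step 3 applies the mapping to the
# source solution to generate the parallel target solution.
source_solution = ("divide the army into small groups of soldiers, "
                   "send them along different roads, "
                   "and let them converge on the fortress")

target_solution = source_solution
for source_part, target_part in mapping.items():
    target_solution = target_solution.replace(source_part, target_part)

print(target_solution)
```

Of course, the hard cognitive work lies in noticing the analogy and finding the right mapping in the first place – the rewriting itself is trivial.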
Next, Gick and Holyoak started looking for factors that could help with the noticing and the mapping steps – for example, discovering the basic concept linking the source and the target problem.
Schema
The concept that links the target problem with the analogy (the “source problem”) is called a problem schema. Gick and Holyoak induced the activation of a schema in their participants by giving them two stories and asking them to compare and summarise them. This activation of problem schemata is called “schema induction”.
The two presented texts were picked from six stories which described analogical problems and their solutions. One of these stories was “The General” (see the example above).
After completing this task the participants were asked to solve the radiation problem. The experiment showed that, in order to solve the target problem, reading two stories with analogical problems is more helpful than reading only one: after reading two stories, 52% of the participants were able to solve the radiation problem (as noted above, only 30% were able to solve it after reading only one story, namely “The General”).
Gick and Holyoak also found that the quality of the schemata the participants developed differed. They classified them into three groups:
• Good schemata: the participant recognised that the same concept was used to solve both problems (21% of the participants created a good schema, and 91% of them were able to solve the radiation problem).
• Intermediate schemata: the participant figured out that the problems share the same underlying idea (here: many small forces together solve the problem) (20% created one; 40% of them found the right solution).
• Poor schemata: these were hardly related to the target problem; in many poor schemata the participant only noticed that the hero of the story was rewarded for his efforts (59% created one; 30% of them found the right solution).
The process of using a schema or analogy, i.e. applying it to a novel situation, is called transduction: one can use a common strategy to solve problems of a new kind.
Creating a good schema and finally getting to a solution is a problem-solving skill that requires practice and some background knowledge.
2.04: How do Experts Solve Problems
With the term expert we describe someone who devotes large amounts of his or her time and energy to one specific field of interest, subsequently reaching a certain level of mastery in it. It should come as no surprise that experts tend to be better at solving problems in their field than novices (people who are beginners or not as well trained in a field) are. They are faster at coming up with solutions and have a higher rate of correct solutions. But what is the difference between the way experts and non-experts solve problems? Research on the nature of expertise has come up with the following conclusions:
• Experts know more about their field,
• their knowledge is organised differently, and
• they spend more time analysing the problem.
When it comes to problems that are situated outside the experts' field, their performance often does not differ from that of novices.
Knowledge: An experiment by Chase and Simon (1973a, b) dealt with the question of how well experts and novices can reproduce positions of chess pieces on a chessboard after seeing them only briefly. The results showed that experts were far better at reproducing actual game positions, but that their performance was comparable to that of novices when the chess pieces were arranged randomly on the board. Chase and Simon concluded that the superior performance on actual game positions was due to the ability to recognise familiar patterns: a chess expert has up to 50,000 patterns stored in his memory. In comparison, a good player might know about 1,000 patterns by heart, and a novice only a few or none at all. This very detailed knowledge is of crucial help when an expert is confronted with a new problem in his field. Still, it is not the sheer size of knowledge that makes an expert more successful; experts also organise their knowledge quite differently from novices.
Organisation: In 1982, M. Chi and her co-workers took a set of 24 physics problems and presented them to a group of physics professors as well as to a group of students with only one semester of physics. The task was to group the problems based on their similarities. As it turned out, the students tended to group the problems based on their surface structure (similarities between the objects used in the problems, e.g. in the sketches illustrating them), whereas the professors used their deep structure (the general physical principles underlying the problems) as criteria. By recognising the actual structure of a problem, experts are able to connect the given task to relevant knowledge they already have (e.g. another problem they solved earlier which required the same strategy).
Analysis: Experts often spend more time analysing a problem before actually trying to solve it. This way of approaching a problem may often look like a slow start, but in the long run this strategy is much more effective. A novice, on the other hand, might start working on the problem right away, but often has to realise that he has reached a dead end because he chose the wrong path at the very beginning.
2.05: Creative Cognition
We have already introduced many ways to solve a problem, mainly strategies that can be used to find the “correct” answer. But there are also problems which do not require a “right” answer – it is time for creative productiveness!
Imagine you are given three objects; your task is to invent a completely new object that is related to nothing you know. Then try to describe its function and how else it could be used. Difficult? Well, you are free to think creatively, with no risk of giving an incorrect answer. For example, think of what could be constructed from a half-sphere, wire and a handle. The results are amazing: a lawn lounger, global earrings, a sled, a water weigher, a portable agitator, ... [2]
Divergent Thinking
The term divergent thinking describes a way of thinking that does not lead to one goal, but is open-ended. Problems that are solved this way can have a large number of potential 'solutions' of which none is exactly 'right' or 'wrong', though some might be more suitable than others.
Solving a problem like this involves indirect and productive thinking and is mostly very helpful when somebody faces an ill-defined problem, i.e. when either the initial state or the goal state cannot be stated clearly and the operators are either insufficient or not given at all.
The process of divergent thinking is often associated with creativity, and it undoubtedly leads to many creative ideas. Nevertheless, research has shown that there is only a modest correlation between performance on divergent thinking tasks and other measures of creativity. Additionally, it was found that processes resulting in original and practical inventions also heavily involve searching for solutions, being aware of structures, and looking for analogies.
Thus, divergent thinking alone is not an appropriate tool for making an invention. You also need to analyse the problem in order to make the suggested solution, i.e. the invention, appropriate.
Convergent Thinking
Convergent thinking is a problem-solving technique that unites different ideas or fields to find a single solution. The focus of this mindset is on speed, logic and accuracy: identifying facts, reapplying existing techniques and gathering information. Its most important characteristic is that there is only one correct answer – an answer is either right or wrong. This type of thinking is associated with established science and standard procedures. People inclined to this type of thinking reason logically, memorise patterns, solve well-structured problems and do well on scientific tests. Most school subjects sharpen this type of thinking ability.
Research shows that the creative process involves both types of thought processes, but experts recommend not joining the two in one session. For example, in the first 30 minutes you might invite everyone on your team to brainstorm new ideas (which involves divergent thinking); within these 30 minutes all ideas are only recorded, not judged – for instance, by saying that an idea is irrelevant because of a limited budget. Once all the ideas have been collected, the next session – analysis and decision making, which involves convergent thinking – begins. Research also shows that doing creative work causes mood swings, and that the two types of thinking create two different moods: convergent thinking tends to create a negative mood, while divergent thinking creates a positive one. Research by J.A. Horne in 1988 revealed that lack of sleep strongly affects the performance of people on divergent thinking tasks, whereas performance on convergent tasks is more likely to remain intact. Whichever mindset you tend towards, use your talents wisely, and practise both types of thinking so you can use them in balance at the right times.
2.06: Neurophysiological Background
Presenting neurophysiology in its entirety would be enough to fill several books. Fortunately, we do not have to concern ourselves with most of these facts; instead, let us focus on the aspects that are really relevant to problem solving. Nevertheless, this topic is quite complex, and problem solving cannot be attributed to one single brain area. Rather, there are systems of several brain areas working together to perform a specific task. This is best shown by an example:
In 1994, Paolo Nichelli and coworkers used positron emission tomography (PET) to localise the brain areas involved in solving various chess problems. The following overview shows which brain area was active during each specific task:
Task – Location of brain activity:
• Identifying chess pieces – pathway from the occipital to the temporal lobe (also called the "what"-pathway of visual processing)
• Determining the location of pieces – pathway from the occipital to the parietal lobe (also called the "where"-pathway of visual processing)
• Thinking about making a move – premotor area
• Remembering a piece's move – hippocampus (forming new memories)
• Planning and executing strategies – prefrontal cortex
Lobes of the Brain
One of the key tasks, namely planning and executing strategies, is performed by a brain area which also plays an important role in several other tasks correlated with problem solving – the prefrontal cortex (PFC). This becomes clear if we look at examples of damage to the PFC and its effects on the ability to solve problems.
Patients with a lesion in this brain area have difficulty switching from one behavioural pattern to another. A well-known example is the Wisconsin Card Sorting Task: a patient with a PFC lesion who is told to sort out all blue cards from a deck will continue sorting out the blue ones, even after the experimenter tells him to sort out all brown cards. Transferred to a more complex problem, this person would most likely fail, because he is not flexible enough to change his strategy after running into a dead end.
Another example is the one of a young homemaker, who had a tumour in the frontal lobe. Even though she was able to cook individual dishes, preparing a whole family meal was an infeasible task for her.
As the examples above illustrate, the structure of our brain seems to be of great importance for problem solving and, more generally, for our cognitive life. But how was our cognitive apparatus designed? How did perception-action integration, as a central species-specific property, come about?
2.07: The Evolutionary Perspective
Charles Darwin developed the theory of evolution, which was primarily meant to explain why there are so many different kinds of species. This theory is also important for psychology because it explains how species were designed by evolutionary forces and what their goals are. Knowing the goals of a species makes it possible to explain and predict its behaviour.
The process of evolution involves several components. One is natural selection – a feedback process that 'chooses' among 'alternative designs' on the basis of how good the respective variation is. The result of natural selection is adaptation, a process that constantly tests the variations among individuals in relation to the environment: if variations are useful they get passed on; if not, they remain unimportant.
Another component of the evolutionary process is sexual selection, i.e. the enhancement of certain sex characteristics that give individuals an advantage in competing with other individuals of the same sex, or an increased ability to attract individuals of the opposite sex.
Altruism is a further component of the evolutionary process, which will be explained in more detail in the following chapter Evolutionary Perspective on Social Cognitions.
2.08: Summary and Conclusion
After reading this WikiChapter, Knut was relieved that he had not wasted his time on the essay – quite the opposite! He now has a new view on problem solving – and recognises his problem as a well-defined one:
His initial state was the blank paper without any philosophical sentences on it. The goal state was right in front of his mind's eye: him – grinning broadly – handing in the essay with some carefully developed arguments.
He decides to use the technique of Means-End Analysis and creates several subgoals:
1. Read important passages again
2. Summarise parts of the text
3. Develop an argumentative structure
4. Write the essay
5. Look for typos
Right after he hands in his essay Knut will go on reading this WikiBook. He now looks forward to turning the page over and to discovering the next chapter...
3.01: Introduction
Why do we live in cities? Why do we often choose to work together? Why do we enjoy sharing our spare time with others? These are questions of Social Cognition and its evolutionary development.
The term Social Cognition describes all abilities necessary to act adequately in a social system. Basically, it is the study of how we process social information, especially its storage, retrieval and application to social situations. Social Cognition is a common skill among various species.
In the following, the focus will be on Social Cognition as a human skill. Important concepts and their development during childhood will be explained. Having built up a conceptual basis for the term, we will then look at this skill from an evolutionary perspective and present the common theories on the origin of Social Cognition.
The publication by Michael Tomasello et al. in the journal Behavioral and Brain Sciences (2005) [1] will serve as a basis for this chapter.
3.02: Social Cognition
The human faculty of Social Cognition
Playing football as a complex social activity
Humans are by far the most talented species at reading the minds of others; that is, we are able to successfully predict what other humans perceive, intend, believe, know or desire. Among these abilities, understanding the intentions of others is crucial. It allows us to resolve possible ambiguities of physical actions. For example, if you were to see someone breaking a car window, you would probably assume he was trying to steal a stranger's car. He would need to be judged differently if he had lost his car keys and it was his own car that he was trying to break into. Humans also collaborate and interact culturally: we perform complex collaborative activities, like building a house together or playing football as a team. Over time this led to powerful concepts of organizational levels like societies and states. The reason for this intense development can be traced back to the concept of Shared Intentionality.
Shared Intentionality
An intentional action is an organism's intelligent behavioural interaction with its environment towards a certain goal state. This is the concept of Problem Solving, which was already described in the previous chapter.
The social interaction of agents in an environment who understand each other as acting intentionally gives rise to Shared Intentionality. This means that the agents work together towards a shared goal in collaborative interaction, with coordinated action roles and mutual knowledge of one another. The nature of the activity or its complexity is not important, as long as the action is carried out in the described fashion. It is important to mention that the notion of shared goals means that the internal goals of each agent include the intentions of the others. This can easily be misinterpreted. For example, take a group of apes on a hunt. They appear to be acting in a collaborative way; however, it is reasonable to assume that they do not have coordinated action roles or a shared goal – they could just be acting towards the same individual goal. Summing up, the important characteristics of the behaviour in question are that the agents are mutually responsive, have the goal of achieving something together, and coordinate their actions with distributed roles and action plans.
The strictly human faculty to participate in collaborative actions that involve shared goals and socially coordinated action plans is also called Joint Intention. This requires an understanding of the goals and perceptions of other involved agents, as well as sharing and communicating these, which again seems to be a strictly human behaviour. Due to our special motivation to share psychological states, we also need certain complex cognitive representations. These representations are called dialogic cognitive representations, because they have as content mostly social engagement. This is especially important for the concept of joint intentions, since we need not only a representation for our own action plan, but also for our partner's plan. Joint Intentions are an essential part of Shared Intentionality.
Dialogic cognitive representations are closely related with the communication and use of linguistic symbols. They allow in some sense a form of collective intentionality, which is important to construct social norms, conceptualize beliefs and, most importantly, share them. In complex social groups the repeated sharing of intentions in a particular interactive context leads to the creation of habitual social practices and beliefs. That may form normative or structural aspects of a society, like government, money, marriage, etc. Society might hence be seen as a product and an indicator of Social Cognition.
The social interaction that builds ground for activities involving Shared Intentionality is proposed to be divided into three groups:
• Dyadic engagement: The simple sharing of emotions and behaviour, by means of interaction and direct mutual response between agents. Dyadic interactions between human infants and adults are called protoconversations: turn-taking sequences of touching, facial expressions and vocalisations. The exchange of emotions is the most important outcome of this interaction.
• Triadic engagement: Two agents act together towards a shared goal, while monitoring the perception and goal-direction of the other agent. They focus on the same problem and coordinate their actions respectively, which makes it possible to predict following events.
• Collaborative engagement: The combination of Joint Intentions and attention. At this point, the agents share a goal and act in complementary roles with a complex action plan and mutual knowledge about the selective attention and the intentions of one another. The latter aspect allows the agents to assist each other and reverse or take over roles.
These different levels of social engagement require the understanding of different aspects of intentional action, as introduced above, and presuppose the motivation to share psychological states with each other.
Development of Social Cognition during childhood
Children making social experiences
A crucial point for Social Cognition is the comprehension of intentional action. Children's understanding of intentional action can basically be divided into three groups, each representing a more complex level of grasp.
1. The first to be mentioned is the identification of animate action. This means that after a couple of months, babies can differentiate between motion caused by some external influence and actions an organism has performed by itself, as an animate being. At this stage, however, the child does not yet have any understanding of the potential goals of the observed actor, so it is still incapable of predicting the behaviour of others.
2. The next stage of comprehension includes the understanding that the organism acts with persistence towards achieving a goal. Children can now distinguish accidental incidents from intentional actions, and failed from successful attempts. This ability develops after about 9 months. With this new perspective, the child also learns that the person it observes has a certain perception – thus a certain amount of behaviour prediction becomes possible. This is the essential difference between the first and the second stage.
3. At around 14 months of age, children fully comprehend intentional action and the basics of rational decision making. They realise that an actor pursuing a goal may have a variety of action plans to achieve it and is choosing between them. Furthermore, a certain sense for the selective attention of an agent develops, which allows a broad variety of predictions of behaviour in a given environment. In addition, children acquire the skill of cultural learning: when they observe how an individual successfully reaches a goal, they memorise the procedure and can use the same method to reach their own goals. This is called imitative learning, which turns out to be an extremely powerful tool. By applying this technique, children also learn how things are conventionally done in their culture.
So far we have discussed what Social Cognition is about. But how could this behaviour develop during evolution? At first glance, Darwin's theory of the survival of the fittest does not support the development of social behaviour: caring for others, and not just for oneself, seems to decrease fitness. Nevertheless, various theories have been formulated which try to explain Social Cognition from an evolutionary perspective. We will present three influential theories as summarised by Steven Gaulin and Donald McBurney.[2]
Group Selection
Moai at Rano Raraku
Vero Wynne-Edwards first proposed this theory in the 1960s. From an evolutionary perspective, a group is a number of individuals who affect each other's fitness. Group Selection means that if any individual benefits its group, the group is more likely to survive and pass on its predispositions to the next generation. This in turn improves the individual's chance to spread its genetic material. In this theory, then, a social organism is more likely to spread its genes than a selfish one. The distinction from the classical theory of evolution is that not only the fittest individuals are likely to survive, but also the fittest groups.
An example would be the history of the Rapa Nui, the natives of Easter Island, who handled their resources extremely wastefully in order to build giant heads made of stone. After a while, the island was completely deforested, because the tree trunks were needed to transport the stones. The resulting lack of food led to the breakdown of their civilization.
A society that had handled its resources more moderately and providently would not have met such a fate. However, if both societies had lived on one island, the prudent group might not have survived, because it would not have been able to keep its resources from the wasteful one.
This points to the problem of Group Selection: it explains things properly only under particular circumstances. Additionally, every theory about groups should include the phenomenon of migration. In this simple form, the theory cannot handle selfish behaviour of some agents in altruistic groups: altruistic groups that include selfish members would turn into purely selfish ones over time, because altruistic agents would work for selfish agents, thereby increasing the cheaters' fitness while decreasing their own. Thus, Group Selection may not be a sufficient explanation for the development of Social Cognition.
Kin Selection
Since altruistic populations are vulnerable to cheaters, there must be a mechanism that allows altruism to be maintained by natural selection. The Kin Selection approach explains how altruistic genes can spread without being eliminated by selfish behaviour. The theory was developed by William D. Hamilton and John Maynard Smith in 1964.[3] The basic principle of Kin Selection is to benefit somebody who is genetically related, for example by sharing food. For the altruistic individual, this means a reduction of its own fitness while increasing the fitness of its relative. However, the closer the recipient is related to the altruist, the more likely the recipient is to share the altruistic genes. The loss of fitness can be compensated, since the genes of the altruistically behaving agent then have the chance to be spread indirectly through the recipient: the relative might reproduce and pass the altruistic genes on to the next generation.
In principle, the disadvantage for the giver should always be less than the increased fitness of the recipient. This relation between costs and benefit is expressed by Hamilton's rule, which additionally takes the relatedness of altruist and recipient into account:
$r\cdot b>c$
where
r is the genetic relatedness between altruist and recipient (a coefficient between zero and one),
b is the reproductive benefit, or increased fitness, for the recipient, and
c is the altruist's reproductive cost, or the reduction of his fitness, in the performed action.
If the product of relatedness and benefit outweighs the costs for the giver, the altruistic action should be performed. The closer the recipient is genetically related, the higher costs are acceptable.
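Expressed in code, the rule is a one-line comparison. The following sketch is our own illustration (the function name and the numbers are invented for the example); it shows how the acceptable cost rises with relatedness:

```python
def hamilton_favours(r: float, b: float, c: float) -> bool:
    """Hamilton's rule: altruistic action is favoured when r * b > c.

    r -- genetic relatedness of altruist and recipient (0 to 1)
    b -- reproductive benefit (increased fitness) for the recipient
    c -- reproductive cost (reduced fitness) for the altruist
    """
    return r * b > c

# Helping a full sibling (r = 0.5) pays off only if the benefit is
# more than twice the cost; for a cousin (r = 0.125), more than 8 times.
print(hamilton_favours(r=0.5, b=3.0, c=1.0))    # True:  0.5 * 3.0 = 1.5 > 1.0
print(hamilton_favours(r=0.125, b=3.0, c=1.0))  # False: 0.125 * 3.0 = 0.375 < 1.0
```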
Ant colonies provide evidence for Kin Selection
Examples of kin-selected altruism can be found in populations of social insects like ants, termites or bees. An ant colony, for instance, consists of one fertile queen and several hundred or more sterile female workers. While the queen is the only one reproducing, the workers are, among other things, responsible for brood care. Because of the haplodiploid sex determination of ants, the workers are more closely related to the sisters they raise (75%) than they would be to their own offspring (50%). They therefore pass on more of their genes than if they bred on their own.
According to Hamilton's rule, altruism is only favoured if directed towards relatives, that is, if $r > 0$. Kin Selection theory therefore accounts only for genetic relatives. Altruism, however, occurs among unrelated individuals as well. This issue is addressed by the theory of Reciprocal Altruism.
Reciprocal Altruism
The theory of Reciprocal Altruism describes beneficial behaviour performed in expectation of future reciprocity. This form of altruism is not a selfless concern for the welfare of others; it denotes mutual cooperation between repeatedly interacting individuals in order to maximise their individual utility. In social life an individual can benefit from mutual cooperation, but each one can also do even better by exploiting the cooperative efforts of others. Game Theory allows a formalisation of the strategic possibilities in such situations. It can be shown that altruistic behaviour can be more successful (in terms of utility) than purely self-interested strategies and therefore leads to better fitness and survivability.
In many cases social interactions can be modelled by the Prisoner's Dilemma, which provides the basis of our analysis. The classical Prisoner's Dilemma is as follows: Knut and his friend are arrested by the police. The police have insufficient evidence for a conviction and, having separated both prisoners, visit each of them to offer the same deal: if one testifies for the prosecution against the other and the other remains silent, the betrayer goes free and the silent accomplice receives the full ten-year sentence. If both stay silent, the police can sentence both prisoners to only six months in jail on a minor charge. If each betrays the other, each will receive a two-year sentence.
Possible outcomes of the Prisoner's Dilemma:
| Prisoner 1 \ Prisoner 2 | Cooperate | Defect |
| --- | --- | --- |
| Cooperate | 6 months each | 10 years / free |
| Defect | free / 10 years | 2 years each |
Each prisoner has two strategies to choose from: remain silent (cooperate) or testify (defect). Assume Knut wants to minimize his time in jail. If Knut's friend cooperates, it is better to defect and go free than to cooperate and spend six months in jail. If Knut's friend defects, then Knut should defect too, because two years in jail are better than ten. The same holds for the other prisoner. So defection is the dominant strategy in the Prisoner's Dilemma, even though both would do better if they cooperated. In a one-shot game a rational player would always defect; but what happens if the game is played repeatedly?
One of the most effective strategies in the iterated Prisoner's Dilemma is called Tit for Tat: cooperate in the first game, then do whatever your opponent did in the previous game. Playing Tit for Tat means maintaining cooperation as long as the opponent does. If the opponent defects, he gets punished in succeeding games by defection in return until cooperation is restored. With this strategy rational players can sustain the cooperative outcome, at least for indefinitely long games (like life).[4] Clearly, Tit for Tat can only be expected to evolve in the presence of a mechanism to identify and punish cheaters.
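As a toy illustration (our own sketch, not from the original text), the iterated game can be simulated using the jail terms from the table above, counted in months:

```python
# Months in jail for (my move, opponent's move); C = cooperate, D = defect.
SENTENCE = {("C", "C"): 6, ("C", "D"): 120, ("D", "C"): 0, ("D", "D"): 24}

def tit_for_tat(opp_history):
    return "C" if not opp_history else opp_history[-1]  # copy the opponent's last move

def always_defect(opp_history):
    return "D"

def play(strategy1, strategy2, rounds=10):
    hist1, hist2, jail1, jail2 = [], [], 0, 0
    for _ in range(rounds):
        move1, move2 = strategy1(hist2), strategy2(hist1)
        jail1 += SENTENCE[(move1, move2)]
        jail2 += SENTENCE[(move2, move1)]
        hist1.append(move1)
        hist2.append(move2)
    return jail1, jail2

print(play(tit_for_tat, tit_for_tat))    # (60, 60): cooperation is sustained
print(play(tit_for_tat, always_defect))  # (336, 216): exploited once, then retaliation
```

Note that two Tit for Tat players (60 months each) do far better than two unconditional defectors would (240 months each), which is the game-theoretic core of the argument.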
Assuming that species are not able to choose between different strategies, but rather that their strategic behaviour is hard-wired, we can finally come back to the evolutionary perspective. In The Evolution of Cooperation, Robert Axelrod formalised Darwin's emphasis on individual advantage in terms of game theory.[5] Based on the concept of an evolutionarily stable strategy in the context of the Prisoner's Dilemma game, he showed how cooperation can get started in an asocial world and can resist invasion once fully established.
3.04: Conclusion
Summing up, Social Cognition is a very complex skill and can be seen as the foundation of our current society. On account of the concept of Shared Intentionality, humans show by far the most sophisticated form of social cooperation. Although it may not seem obvious, Social Cognition is in fact compatible with the theory of evolution, and various reasonable approaches can be formulated. These theories are all based on a rather selfish drive to pass on our genetic material, so it remains questionable whether deep-rooted altruism and completely selfless behaviour truly exist.
3.05: References
1. Tomasello, M., et al. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675–735.
2. Gaulin, S. J. C, & McBurney, D. H. (2003). Evolutionary Psychology. New Jersey: Prentice-Hall.
3. Hamilton, W. D. (1964). The genetical evolution of social behaviour I and II. Journal of Theoretical Biology, 7, 17-52.
4. Aumann, R. J. (1959). Acceptable Points in General Cooperative n-Person Games. Contributions to the Theory of Games IV, Annals of Mathematics Study, 40, 287-324.
5. Axelrod, R. (1984). The Evolution of Cooperation. New York: Basic Books.
Behavioural and neuroscientific methods are used to gain insight into how the brain influences the way individuals think, feel, and act. There is an array of methods which can be used to analyze the brain and its relationship to behavior. Well-known techniques include EEG (electroencephalography), which records the brain's electrical activity, and fMRI (functional magnetic resonance imaging), which produces detailed images of brain structure and activity. Other methods, such as the lesion method, are less known, but still influential in today's neuroscience research.
Methods can be organized into the following categories: anatomical, physiological, and functional. Other approaches include modulating brain activity, analyzing behaviour, and computational modelling.
4.02: Lesion Method
In the lesion method, patients with brain damage are examined to determine which brain structures are damaged and how this influences the patient's behavior. Researchers attempt to correlate a specific brain area to an observed behavior by using reported experiences and research observations. Researchers may conclude that the loss of functionality in a brain region causes behavioral changes or deficits in task performance. For example, a patient with a lesion in the parietal-temporal-occipital association area will exhibit agraphia, a condition in which he/she is not able to write, despite having no deficits in motor ability. If damage to a particular brain region (structure X) is shown to correlate with a specific change in behavior (behavior Y), researchers may deduce that structure X has a relation to behavior Y.
In humans, lesions are most often caused by tumors or strokes. Through current brain imaging technologies, it is possible to determine which area was damaged during a stroke. Loss of function in the stroke victim may then be correlated with that damaged brain area. While lesion studies in humans have provided key insights into brain organization and function, lesion studies in animals offer many advantages.
First, animals used in research are reared in controlled environmental conditions that limit variability between subjects. Second, researchers are able to measure task performance in the same animal before and after a lesion, which allows for within-subject comparison. Third, researchers can observe control groups of animals that either did not undergo surgery or that had surgery in another brain area. These benefits increase the accuracy of hypothesis testing, which is more difficult in human research, where before-after comparisons and control experiments are usually not available.
Visualization of iron rod passing through brain of Phineas Gage
To strengthen conclusions regarding a brain area and task performance, researchers may look for a double dissociation. The goal of this method is to show that two dissociations are independent: by comparing two patients with differing brain damage and complementary disease patterns, researchers may localize a different behaviour in each brain area. Broca's area is a region of the brain responsible for language processing, comprehension and speech production. Patients with a lesion in Broca's area exhibit Broca's aphasia, or non-fluent aphasia. These patients are unable to speak fluently; a sentence produced by a patient with damage to Broca's area may sound like: "I ... er ... wanted ... ah ... well ... I ... wanted to ... er ... go surfing ... and ..er ... well...". Wernicke's area, on the other hand, is responsible for speech comprehension. A patient with a lesion in this area has Wernicke's aphasia. Such patients may be able to produce language, but lack the ability to produce meaningful sentences; they may produce 'word salad': "I then did this chingo for some hours after my dazi went through meek and been sharko". Patients with Wernicke's aphasia are often unaware of their speech deficits and may believe that they are speaking properly.
Certainly one of the most famous lesion cases was that of Phineas Gage. On 13 September 1848, Gage, a railroad construction foreman, was using an iron rod to tamp an explosive charge into a body of rock when premature explosion of the charge blew the rod through his left jaw and out the top of his head. Miraculously, Gage survived, but reportedly underwent a dramatic personality change as a result of the destruction of one or both of his frontal lobes. The uniqueness of Gage's case (and the ethical impossibility of repeating the "treatment" in other patients) makes it difficult to draw generalizations from it, but it does illustrate the core idea behind the lesion method. Further problems stem from the persistent distortions in published accounts of Gage; see the Wikipedia article on Phineas Gage.
CAT
X-ray picture.
CAT scanning was invented in 1972 by the British engineer Godfrey N. Hounsfield and the South African (later American) physicist Allan Cormack.
CAT (Computed Axial Tomography) is an x-ray procedure which combines many x-ray images with the aid of a computer to generate cross-sectional views and, when needed, 3D images of the internal organs and structures of the human body. A large donut-shaped x-ray machine takes x-ray images at many different angles around the body. These images are processed by a computer to produce cross-sectional pictures of the body. In each of these pictures the body is seen as an x-ray "slice", which is recorded on film. This recorded image is called a tomogram.
CAT scans are performed to analyze, for example, the head, where traumatic injuries (such as blood clots or skull fractures), tumors, and infections can be identified. In the spine, the bony structure of the vertebrae can be accurately defined, as can the anatomy of the spinal cord. CAT scans are also extremely helpful in defining body organ anatomy, including visualizing the liver, gallbladder, pancreas, spleen, aorta, kidneys, uterus, and ovaries. The amount of radiation a person receives during a CAT scan is minimal; in men and non-pregnant women it has not been shown to produce any adverse effects. However, a CAT scan does carry some risks. If the subject or patient is pregnant, another type of exam may be recommended to reduce the risk of exposing her fetus to radiation. In cases of asthma or allergies it is also recommended to avoid this type of scanning, because when a CAT scan requires a contrast medium, there is a slight risk of an allergic reaction to it. Certain medical conditions (diabetes, asthma, heart disease, kidney problems or thyroid conditions) also increase the risk of a reaction to the contrast medium.
MRI
Although CAT scanning was a breakthrough, in many cases it has been superseded by magnetic resonance imaging (MRI), a method of looking inside the body without using x-rays, harmful dyes or surgery. Instead, radio waves and a strong magnetic field are used to provide remarkably clear and detailed pictures of internal organs and tissues.
MRI head side
History and Development of MRI
MRI is based on a physics phenomenon called nuclear magnetic resonance (NMR), which was discovered in the 1940s by Felix Bloch (working at Stanford University) and Edward Purcell (at Harvard University). In this resonance, a magnetic field and radio waves cause atoms to give off tiny radio signals. In 1970, Raymond Damadian, a medical doctor and research scientist, discovered the basis for using magnetic resonance imaging as a tool for medical diagnosis. Four years later a patent was granted, the world's first issued in the field of MRI. In 1977, Dr. Damadian completed the construction of the first whole-body MRI scanner, which he called the "Indomitable". The medical use of magnetic resonance imaging has developed rapidly: the first MRI equipment in healthcare was available at the beginning of the 1980s, and in 2002, approximately 22,000 MRI scanners were in use worldwide and more than 60 million MRI examinations were performed.
A full size MRI-Scanner.
Common Uses of the MRI Procedure
Because of its detailed and clear pictures, MRI is widely used to diagnose sports-related injuries, especially those affecting the knee, elbow, shoulder, hip and wrist. Furthermore, MRI of the heart, aorta and blood vessels is a fast, non-invasive tool for diagnosing artery disease and heart problems. The doctors can even examine the size of the heart-chambers and determine the extent of damage caused by a heart disease or a heart attack. Organs like lungs, liver or spleen can also be examined in high detail with MRI. Because no radiation exposure is involved, MRI is often the preferred diagnostic tool for examination of the male and female reproductive systems, pelvis and hips and the bladder.
Risks
An undetected metal implant may be affected by the strong magnetic field. MRI is generally avoided in the first 12 weeks of pregnancy. Scientists usually use other methods of imaging, such as ultrasound, on pregnant women unless there is a strong medical reason to use MRI.
Reconstruction of nerve fibers
There has been a further development of MRI: DT-MRI (diffusion tensor magnetic resonance imaging) enables the measurement of the restricted diffusion of water in tissue and produces a three-dimensional image of it. The principle of using a magnetic field to measure diffusion was described as early as 1965 by the chemists Edward O. Stejskal and John E. Tanner. After the development of MRI, Michael Moseley introduced the principle into MR imaging in 1984, and further fundamental work was done by Denis Le Bihan in 1985. In 1994 the engineer Peter J. Basser published optimized mathematical models of an older diffusion-tensor model.[1] This model is commonly used today and supported by all new MRI devices.
The DT-MRI technique takes advantage of the fact that the mobility of water molecules in brain tissue is restricted by obstacles like cell membranes. In nerve fibers, mobility is only possible along the axons, so measuring the diffusion reveals the course of the main nerve fibers. The data of one diffusion tensor are too much to show in a single image, so there are different techniques for visualizing different aspects of the data:
• cross-section images
• tractography (reconstruction of main nerve fibers)
• tensor glyphs (complete illustration of diffusion-tensor information)
Diffusion changes in a characteristic way in patients with specific diseases of the central nervous system, so these diseases can be discerned with the diffusion-tensor technique. The main applications are the diagnosis of apoplectic strokes and medical research on diseases involving changes of the white matter, like Alzheimer's disease or multiple sclerosis. Disadvantages of DT-MRI are that it is far more time-consuming than ordinary MRI and that it produces large amounts of data, which first have to be visualized by the different methods before they can be interpreted.
fMRI
fMRI (Functional Magnetic Resonance Imaging) is based on nuclear magnetic resonance (NMR). The method works as follows: all atomic nuclei with an odd number of protons have a nuclear spin. A strong magnetic field is applied around the tested object, which aligns all spins parallel or antiparallel to it. The spins resonate with an oscillating magnetic field at a specific frequency, which can be computed for each atom type: the nuclei's usual spin is disturbed, which induces a voltage signal s(t), after which they return to the equilibrium state. At this level different tissues can be identified, but there is no information about their location. Therefore the strength of the magnetic field is varied gradually across space, so that there is a correspondence between frequency and location, and with the help of Fourier analysis one-dimensional location information can be obtained. Combining several such measurements makes it possible to obtain a 3D image.
fMRI picture
The central idea of fMRI is to look at the areas with increased blood flow. Deoxygenated haemoglobin disturbs the magnetic signal, so areas with a changed blood-oxygen-level-dependent (BOLD) signal can be identified; higher BOLD signal intensities arise from decreases in the concentration of deoxygenated haemoglobin. An fMRI experiment usually lasts 1-2 hours. The subject lies in the magnet, a particular form of stimulation is set up, and MRI images of the subject's brain are taken. In the first step a high-resolution single scan is taken; this is used later as a background for highlighting the brain areas which were activated by the stimulus. In the next step a series of low-resolution scans is taken over time, for example 150 scans, one every 5 seconds. For some of these scans the stimulus is presented, and for some it is absent. The low-resolution brain images in the two cases can be compared to see which parts of the brain were activated by the stimulus. The rest of the analysis is done using a series of tools which correct distortions in the images, remove the effect of the subject moving their head during the experiment, and compare the low-resolution images taken when the stimulus was off with those taken when it was on. The final statistical image shows up bright in those parts of the brain which were activated by the experiment. These activated areas are then shown as coloured blobs on top of the original high-resolution scan. This image can also be rendered in 3D.
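The on-versus-off comparison can be caricatured as a voxelwise statistical test. The sketch below is a deliberately simplified toy with synthetic data; the array sizes and significance threshold are invented for illustration, and real fMRI pipelines involve many more steps:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Synthetic low-resolution scan series: 75 volumes per condition,
# each a 16 x 16 x 8 grid of voxel intensities.
scans_on = rng.normal(100.0, 5.0, size=(75, 16, 16, 8))
scans_off = rng.normal(100.0, 5.0, size=(75, 16, 16, 8))
scans_on[:, 8, 8, 4] += 10.0  # pretend one voxel responds to the stimulus

# Voxelwise two-sample t-test between stimulus-on and stimulus-off scans.
t_map, p_map = stats.ttest_ind(scans_on, scans_off, axis=0)

# "Bright" voxels in the statistical image: those activated by the stimulus.
print(np.argwhere(p_map < 0.001))  # includes [8 8 4] (plus perhaps a few chance hits)
```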
fMRI has moderately good spatial resolution but poor temporal resolution, since one fMRI frame is about 2 seconds long. Moreover, the temporal response of the blood supply, which is the basis of fMRI, is slow relative to the electrical signals that define neuronal communication. Therefore, some research groups work around this issue by combining fMRI with data collection techniques such as electroencephalography (EEG) or magnetoencephalography (MEG), which have much higher temporal resolution but rather poorer spatial resolution.
PET
Positron emission tomography, also called PET imaging or a PET scan, is a diagnostic examination that involves the acquisition of physiologic images based on the detection of radiation from the emission of positrons. It is currently the most effective way to check for cancer recurrences. Positrons are tiny particles emitted from a radioactive substance administered to the patient. This radiopharmaceutical is injected into the patient and its emissions are measured by a PET scanner, which consists of an array of detectors surrounding the patient. Using the gamma-ray signals given off by the injected radionuclide, PET measures the amount of metabolic activity at a site in the body, and a computer reassembles the signals into images. PET's ability to measure metabolism is very useful in diagnosing Alzheimer's disease, Parkinson's disease, epilepsy and other neurological conditions, because it can precisely illustrate areas where brain activity differs from the norm. It is also one of the most accurate methods available to localize areas of the brain causing epileptic seizures and to determine whether surgery is a treatment option. PET is often used in conjunction with an MRI or CT scan through "fusion" to give a full three-dimensional view of an organ.
The methods mentioned up to now examine the metabolic activity of the brain. In other cases, however, one wants to measure the electrical activity of the brain, or the magnetic fields produced by that electrical activity. The methods discussed so far do a great job of identifying where activity occurs in the brain, but they do not measure brain activity on a millisecond-by-millisecond basis. This can be done by electromagnetic recording methods, for example single-cell recording or electroencephalography (EEG). These methods measure brain activity very quickly and over longer periods of time, and therefore give very good temporal resolution.
Single cell
In the single-cell method, an electrode is placed into the brain cell of interest, making it possible for the experimenter to record the electrical output of the cell contacted by the exposed electrode tip. This is useful for studying the underlying ion currents responsible for the cell's resting potential. The researchers' goal is then to determine, for example, whether the cell responds to sensory information from only specific details of the world or from many stimuli; that is, whether the cell is sensitive to input in only one sensory modality or is multimodal in sensitivity. One can also find out which properties of a stimulus make cells in a region fire, and whether the animal's attention towards a certain stimulus influences the cell's response.
Single-cell studies are of limited use for studying the human brain, since the method is too invasive to be common; hence it is most often used in animals. There are just a few cases in which single-cell recording is also applied in humans. People with epilepsy sometimes have the epileptic tissue surgically removed. A week before surgery, electrodes are implanted into the brain, or placed on its surface during the surgery, to better isolate the source of seizure activity; this decreases the possibility that useful tissue will be removed. Due to the limitations of this method in humans, there are other methods for measuring electrical activity, which we discuss next.
EEG
One of the most famous techniques to study brain activity is probably electroencephalography (EEG). Most people might know it as a technique used clinically to detect aberrant activity, such as in epilepsy and other disorders.
An electroencephalogram (EEG) is obtained via electrodes placed on the scalp, which collect the weak electrical signals produced by the human brain and amplify them for recording. EEG thus measures the voltage fluctuations generated by ionic currents flowing within the neurons of the brain. EEG can contribute to the diagnosis of brain-related diseases, but because it is susceptible to interference, it is usually used in combination with other methods.
EEG is most commonly used to diagnose epilepsy, because epilepsy causes abnormal EEG readings. It is also used to diagnose sleep disorders, coma, cerebrovascular disease and brain death. Brain waves were once a first-line method to diagnose tumors, strokes and other focal brain diseases, but this use has declined with the advent of high-resolution anatomical imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT). Unlike CT and MRI, however, EEG has a high temporal resolution. Therefore, although the spatial resolution of EEG is limited, it is still a valuable tool for research and diagnostics, especially for studies that require time resolution in the millisecond range.
| Name | Frequency (Hz) | About |
| --- | --- | --- |
| Delta (δ) | 0.1–3 | Deep sleep, no dreams |
| Theta (θ) | 4–7 | Adults under stress, especially disappointment or frustration |
| Alpha (α) | 8–12 | Relaxed, calm, eyes closed, but awake |
| Beta (β), low range | 12.5–16 | Relaxed but concentrating |
| Beta (β), middle range | 16.5–20 | Thinking, processing incoming external information (hearing or thinking) |
| Beta (β), high range | 20.5–28 | Excitement, anxiety |
| Gamma (γ) | 25–100 (normally 40) | Heightened awareness, happiness, stress reduction, meditation |
| Lambda (λ) | varies with the power generated | Evoked about 100 ms after the eye is stimulated by light (also known as P100) |
| P300 | varies with the power generated | Evoked about 300 ms after seeing or hearing something recognised in the brain |
In experiments this technique is used to show brain activity in certain psychological states, such as alertness or drowsiness. To measure brain activity, metal electrodes are placed on the scalp. Each electrode, also known as a lead, makes a recording of its own. Next, a reference is needed to provide a baseline against which the value of each recording electrode is compared. This reference electrode must not be placed over muscles, because muscle contractions are driven by electrical signals of their own; usually it is placed on the mastoid bone, which is located behind the ear.
During an EEG, the electrodes are placed according to a standard layout. Over the right hemisphere, electrodes are labelled with even numbers; odd numbers are used for those on the left hemisphere, and those on the midline are labelled with a z. The capital letters stand for the location of the electrode (C = central, F = frontal, Fp = frontal pole, O = occipital, P = parietal and T = temporal).
After placing each electrode at the right position, the electrical potential can be measured. This potential has a particular voltage and a particular frequency, and depending on a person's state the frequency and form of the EEG signal differ. If a person is awake, beta activity can be recognized, meaning the frequency is relatively fast. Just before someone falls asleep, one can observe alpha activity, which has a slower frequency. The slowest frequencies, called delta activity, occur during sleep. Patients who suffer from epilepsy show an increase in the amplitude of firing that can be observed on the EEG record. EEG can also be used to help answer experimental questions. In the study of emotion, for example, one can see that in depression there is greater alpha suppression over right frontal areas than over left ones, from which one can conclude that depression is accompanied by greater activation of right frontal regions than of left frontal regions.
The disadvantage of EEG is that electric conductivity, and therefore the measured electrical potentials, vary widely from person to person and also over time, because the various tissues (brain matter, blood, bones etc.) have different conductivities for electrical signals. That is why it is sometimes not clear from which exact brain region an electrical signal originates.
ERP
Whereas EEG recordings provide a continuous measure of brain activity, event-related potentials (ERPs) are recordings linked to the occurrence of an event, such as the presentation of a stimulus. When a stimulus is presented, the electrodes placed on a person's scalp record changes in the brain generated by the thousands of neurons under the electrodes. By measuring the brain's response to an event we can learn how different types of information are processed. Presenting the word "eats" or "bake", for example, causes a positive potential at about 200 ms, from which one can conclude that our brain processes these words 200 ms after their presentation. This positive potential is followed by a negative one at about 400 ms, called the N400 (where N stands for negative and 400 for the time). In general, a letter P or N denotes whether the deflection of the electrical signal is positive or negative, and a number represents, on average, how many hundreds of milliseconds after stimulus presentation the component appears. Event-related potentials are of special interest to researchers because different components of the response indicate different aspects of cognitive processing. For example, in the sentences "The cats won't eat" and "The cat won't bake", the N400 response for the word "eat" is smaller than for the word "bake", from which one can draw the conclusion that our brain needs 400 ms to register information about a word's meaning. Furthermore, one can figure out where this activity occurs in the brain by looking at the position on the scalp of the electrodes that pick up the largest response.
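In practice, ERP components are usually extracted by averaging many stimulus-locked EEG epochs, so that activity unrelated to the event averages out. A minimal sketch of this averaging step (synthetic signal; the sampling rate, onsets and window are invented for illustration):

```python
import numpy as np

fs = 1000                                # sampling rate in Hz (assumed)
eeg = np.random.randn(60 * fs)           # one minute of noisy EEG from one electrode
onsets = np.arange(1000, 58000, 1000)    # sample indices of stimulus presentations

# Cut an epoch from 100 ms before to 600 ms after each stimulus,
# then average across trials: the average is the event-related potential.
pre, post = int(0.100 * fs), int(0.600 * fs)
epochs = np.stack([eeg[t - pre: t + post] for t in onsets])
erp = epochs.mean(axis=0)

# Mean amplitude 350-450 ms after onset: a crude window for an N400-like component.
print(erp[pre + 350: pre + 450].mean())
```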
MEG
Magnetoencephalography (MEG) is related to electroencephalography (EEG). However, instead of recording electrical potentials on the scalp, it uses the magnetic fields near the scalp to index brain activity. The magnetic field can be used to locate a current dipole in the brain, because the field pattern reflects the dipole's position and strength. These magnetic fields are recorded with devices called SQUIDs (superconducting quantum interference devices).
MEG is mainly used to localize the source of epileptic activity and to locate the primary sensory cortices, which is helpful because, once located, they can be avoided during neurosurgical intervention. Furthermore, MEG can be used to understand more about the neurophysiology underlying psychiatric disorders such as schizophrenia. In addition, MEG can be used to examine a variety of cognitive processes, such as language, object recognition and spatial processing, in people who are neurologically intact.
MEG has some advantages over EEG. First, magnetic fields are less influenced than electrical currents by conduction through brain tissue, cerebrospinal fluid, the skull and the scalp. Second, the strength of the magnetic field gives information about how deep within the brain the source is located. However, MEG also has some disadvantages. The magnetic fields produced by the brain are about 100 million times smaller than that of the earth; because of this, shielded rooms, made out of aluminium, are required, which makes MEG more expensive. Another disadvantage is that MEG cannot detect the activity of cells with certain orientations within the brain: for example, magnetic fields created by cells whose long axes are radial to the surface are invisible.
TMS
History: Transcranial magnetic stimulation (TMS) is an important technique for modulating brain activity. The first modern TMS device was developed by Anthony Barker in Sheffield in 1985, after 8 years of research. The field has developed rapidly since then, with many researchers using TMS to study a variety of brain functions. Today, researchers also try to develop clinical applications of TMS; because it has long-lasting effects on brain activity, it has been considered a possible alternative to antidepressant medication.
Method: TMS applies the principle of electromagnetic induction to an isolated brain region. A wire-coil electromagnet is held over the fixed head of the subject. It induces small, localized, and reversible changes in the living brain tissue, affecting especially the parts of the motor cortex lying directly underneath. By altering the firing patterns of the neurons, the influenced brain area is temporarily disabled. Repetitive TMS (rTMS) describes, as the name reveals, the application of many short stimulations at a high frequency, and is more common than single-pulse TMS. The effects of this procedure last up to weeks, and the method is in most cases used in combination with measuring methods, for example to study the effects in detail.
Application: The TMS method gives more evidence about the functionality of certain brain areas than measuring methods on their own, and it was very helpful in mapping the motor cortex. For example, while rTMS is applied to the prefrontal cortex, the subject is not able to build up short-term memory; this shows that the prefrontal cortex is directly involved in the process of short-term memory. Measuring methods on their own, by contrast, can only establish a correlation between the processes. Since early researchers were already aware that TMS could cause suppression of visual perception, speech arrest, and paraesthesias, TMS has been used to map specific brain functions in areas other than the motor cortex. Several groups have applied TMS to the study of visual information processing, language production, memory, attention, reaction time and even more subtle brain functions such as mood and emotion. Because the long-term effects of TMS on the brain have not yet been investigated properly, experiments are not yet performed on deeper brain regions like the hypothalamus or the hippocampus in humans. Although the potential utility of TMS as a treatment tool in various neuropsychiatric disorders is rapidly increasing, its use in depression is the most extensively studied clinical application to date. For instance, in 1994 George and Wassermann hypothesized that intermittent stimulation of important prefrontal cortical brain regions might also cause downstream changes in neuronal function that would result in an antidepressant response. Here again, the method's effects are not yet understood well enough for use in clinical treatment today. Although it is too early to tell whether TMS has long-lasting therapeutic effects, the tool has clearly opened up new hopes for clinical exploration and treatment of various psychiatric conditions. Further work on understanding normal mental phenomena and how TMS affects them appears crucial for advancement. A critically important line of research that will ultimately guide clinical parameters is to combine TMS with functional imaging to directly monitor TMS effects on the brain. Since TMS at different frequencies appears to have divergent effects on brain activity, TMS combined with functional brain imaging will help to better delineate not only the behavioural neuropsychology of various psychiatric syndromes, but also some of the pathophysiologic circuits in the brain.
tDCS
Transcranial Direct Current Stimulation: The principle of tDCS is similar to that of TMS. Like TMS, it is a non-invasive and painless method of stimulation: the excitability of brain regions is modulated by the application of a weak electrical current.
History and development: It was first observed that electrical current applied to the skull led to an alleviation of pain. Scribonius Largus, the court physician to the Roman emperor Claudius, found that the current released by the electric ray had positive effects on headaches. In the Middle Ages the same property of another fish, the electric catfish, was used to treat epilepsy. Around 1800, so-called galvanism (concerned with what is today the subject of electrophysiology) came up; scientists like Giovanni Aldini experimented with electrical effects on the brain, and a medical application of his findings was the treatment of melancholy. During the twentieth century, electrical stimulation was a controversial but nevertheless widespread method among neurologists and psychiatrists for the treatment of several kinds of mental disorders (e.g. electroconvulsive therapy by Ugo Cerletti).
Mechanism: In tDCS, two electrodes are fixed to the skull. About 50 percent of the direct current applied to the skull reaches the brain. The current, delivered by a direct-current battery, is usually around 1 to 2 mA. The modulation of activity in the brain regions depends on the strength of the current, the duration of stimulation, and the direction of current flow. The former two mainly affect the strength of the modulation and its persistence beyond the actual stimulation, while the latter differentiates the modulation itself. The direction of the current (anodal or cathodal) is defined by the polarity and position of the electrodes. Within tDCS, two distinct ways of stimulation exist: in anodal stimulation, the anode is placed near the brain region to be stimulated, and analogously, in cathodal stimulation the cathode is placed near the target region. In anodal stimulation, the positive charge leads to a depolarization of the membrane potential in the underlying brain regions, whereas in cathodal stimulation hyperpolarization occurs due to the negative charge applied. Brain activity is thereby modulated: anodal stimulation leads to generally higher activity in the stimulated brain region. This result can also be verified with MRI scans, where increased blood flow in the target region indicates a successful anodal stimulation.
Applications: As for TMS, there are various fields of application, ranging from mapping cognitive functions onto brain regions to the treatment of mental disorders. Compared to TMS, an advantage of tDCS is that it cannot only decrease but also increase the activity of a target brain region. The method could therefore provide an even more suitable treatment for mental disorders such as depression. tDCS has also already proven helpful for stroke patients by improving their motor skills.
Besides methods that measure the brain's physiology and anatomy, it is also important to have techniques for analyzing behaviour in order to gain better insight into cognition. Compared to the neuroscientific methods, which concentrate on the neuronal activity of brain regions, behavioural methods focus on the overt behaviour of a test person. This can be realized by well-defined behavioural methods (e.g. eye tracking), test batteries (e.g. IQ tests) or measurements designed to answer specific questions about human behaviour. Furthermore, behavioural methods are often used in combination with all kinds of neuroscientific methods mentioned above; whenever there is an overt reaction to a stimulus (e.g. a picture), these behavioural methods can be useful. Another goal of a behavioural test is to examine how damage to the central nervous system influences cognitive abilities.
A Concept of a behavioural test
Tests are performed to answer certain questions about human behaviour. In order to find an answer, a test strategy has to be developed. First it has to be carefully considered how to design the test so that the measurement results provide an accurate answer to the initial question: how can the test be conducted so that confounding variables are minimal and the focus really is on the problem? When an appropriate test arrangement is found, defining the test variables is the next step. The test is then conducted, and probably repeated until a sufficient amount of data is collected. The next step is the evaluation of the resulting data with suitable statistical methods. If the test reveals a significant result, further questions may arise about the neuronal activity underlying the behaviour; then neuroscientific methods are useful to investigate correlated brain activity. Methods that have proved to provide good evidence on recurring questions about the cognitive abilities of subjects can be brought together in a test battery.
Example: Question: Does a noisy surrounding affect the ability to solve a certain problem?
Possible test design: Expose half of the subjects to a silent environment and the other half to a noisy environment while they solve the same task. In this example, confounding variables might be the different cognitive abilities of the participants; test variables could be the time needed to solve the problem, the loudness of the noise, etc. If the statistical evaluation shows significance, a probable further question is: how does noise affect brain activity on the neuronal level? A sketch of the statistical evaluation follows below.
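A minimal sketch of how the evaluation for this example could be run, using a two-sample t-test (the solution times below are invented for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical solution times in seconds for the two groups.
silent = np.array([41, 38, 45, 40, 39, 43, 37, 44])
noisy = np.array([48, 52, 46, 50, 47, 53, 49, 51])

# Two-sample t-test: do the group means differ more than chance allows?
t, p = stats.ttest_ind(silent, noisy)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Significant: the noisy environment appears to affect performance.")
```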
If you are interested in conducting a behavioural test on your own, visit the socialpsychology.org website.[2]
Test batteries
A neuropsychological assessment utilizes test batteries that give an overview of a person's cognitive strengths and weaknesses by analyzing various cognitive abilities. A neuropsychological test battery is used by a neuropsychologist to assess brain dysfunctions that can arise from developmental, neurological or psychiatric issues. Such batteries can appraise various mental functions and the overall intelligence of a person.
Firstly, there are test batteries designed to assess whether a person suffers from brain damage or not. They generally work well in discriminating those with brain damage from neurologically intact individuals, but worse when it comes to discriminating them from those with psychiatric disorders. The most popular such test, the Halstead-Reitan battery, assesses abilities ranging from basic sensory processing to complex reasoning. Furthermore, the Halstead-Reitan battery provides information on the cause of the damage, the brain areas that were harmed, and the stage the damage has reached; such information is valuable in developing a rehabilitation program. Another test battery, the Luria-Nebraska battery, is twice as fast to administer as the Halstead-Reitan. Its subtests are ordered according to twelve content scales (e.g. motor functions, reading, memory etc.). These two test batteries do not focus only on the absolute level of performance, but look at the qualitative manner of performance as well, which allows a more comprehensive understanding of the cognitive impairment.
Another type of test battery, the so-called IQ test, aims to measure the overall cognitive performance of an individual. The most commonly used tests for estimating intelligence are the Wechsler family of intelligence tests. Age-appropriate versions exist for small children from age 2 years and 6 months, school-aged children, and adults. For example, the Wechsler Intelligence Scale for Children, fifth edition (WISC-V), measures various cognitive abilities in children between 6 and 16 years of age. The test consists of multiple subtests that form five main indexes of cognitive performance: verbal reasoning skills, inductive reasoning skills, visuo-spatial processing, processing speed and working memory. Performance is analyzed both in comparison to a normative sample of similarly aged peers and within the test subject, assessing personal strengths and weaknesses.
The Eye Tracking Procedure
Another important procedure for analyzing behavior and cognition is eye tracking: measuring either where we are looking (the point of gaze) or the motion of an eye relative to the head. There are different techniques for measuring the movement of the eyes, and the instrument that does the tracking is called an eye tracker. The first non-intrusive tracker was invented by George Buswell.
Eye tracking has a long history, starting back in the 1800s. In 1879, Louis Émile Javal noticed that reading does not involve smooth sweeping of the eye along the text but rather a series of short stops, called fixations. This observation was one of the first attempts to examine the eye's directions of interest. The book Alfred L. Yarbus published in 1967, after important eye-tracking research, is one of the most quoted eye-tracking publications ever. The eye-tracking procedure itself is not that complicated: video-based eye trackers are frequently used, in which a camera focuses on one or both eyes and records their movements while the viewer looks at some stimulus. Most modern eye trackers use contrast to locate the center of the pupil and create corneal reflections using infrared or near-infrared non-collimated light.
There are two general types of eye-tracking techniques. Bright-pupil tracking, an effect similar to red eye, appears when the illumination source is in line with the optical path; when the source is offset from the optical path, the pupil appears dark (dark-pupil tracking). Bright-pupil tracking creates great contrast between the iris and the pupil, which allows tracking in lighting conditions from dark to very bright, but it is not effective for outdoor tracking. There are also different eye-tracking setups: some are head-mounted, some require the head to be stable, and some automatically track the head during motion. The sampling rate of most of them is 30 Hz, but for rapid eye movement, for example during reading, the tracker must run at 240, 350 or even 1000-1250 Hz in order to capture the details of the movement. Eye movements are divided into fixations and saccades: when the eye movement pauses in a certain position there is a fixation, and a saccade when it moves to another position. The resulting series of fixations and saccades is called a scan path. Interestingly, most information from the eye is received during fixations, not during saccades. A fixation lasts about 200 ms during the reading of a text and about 350 ms during the viewing of a scene, while a saccade towards a new goal takes about 200 ms. Scan paths are used in analyzing cognitive intent, interest and salience; a simple way to extract them from raw gaze samples is sketched below.
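One common way to split gaze samples into fixations and saccades, which the text does not spell out, is a simple velocity threshold (an I-VT-style classifier). The sampling rate and threshold below are illustrative assumptions, not standard values from the text:

```python
import numpy as np

def classify_gaze(x, y, fs=1000.0, threshold=30.0):
    """Label each gaze sample as 'fixation' or 'saccade'.

    x, y      -- gaze position in degrees of visual angle
    fs        -- tracker sampling rate in Hz
    threshold -- angular velocity threshold in deg/s (assumed value)
    """
    vx = np.gradient(x) * fs          # horizontal velocity per sample
    vy = np.gradient(y) * fs          # vertical velocity per sample
    speed = np.hypot(vx, vy)          # total angular velocity
    return np.where(speed > threshold, "saccade", "fixation")

# Consecutive runs of 'fixation' and 'saccade' labels form the scan path.
```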
Eye tracking has a wide range of applications: it is used to study a variety of cognitive processes, mostly visual perception and language processing, as well as human-computer interaction; it is also helpful for marketing and medical research. In recent years eye tracking has generated a great deal of interest in the commercial sector. Commercial eye-tracking studies present a target stimulus to consumers while a tracker records the movement of the eyes. Some of the latest applications are in the field of automotive design: eye tracking can analyze a driver's level of attentiveness while driving and prevent drowsiness from causing accidents.
4.07: Modeling Brain-Behaviour
Another major method used in cognitive neuroscience is the use of neural networks (computer modelling techniques) in order to simulate the action of the brain and its processes. These models help researchers to test theories of neuropsychological functioning and to derive principles governing brain-behaviour relationships.
A basic neural network.
In order to simulate mental functions in humans, a variety of computational models can be used. The basic component of most such models is a "unit", which one can imagine as showing neuron-like behaviour. Units receive input from other units, and these inputs are summed to produce a net input. The net input to a unit is then transformed into that unit's output, mostly via a sigmoid function. Units are connected together in layers; most models consist of an input layer, an output layer and a "hidden" layer, as shown on the right. The input layer simulates the taking up of information from the outside world, the output layer simulates the response of the system, and the "hidden" layer performs the transformations necessary for the computation under investigation. The units of different layers are connected via connection weights, which represent the degree of influence that a unit in one layer has on a unit in another.
The most interesting and important property of these models is that they are able to "learn" without being provided specific rules. This ability can be compared to the human ability to learn one's native language: nobody spells out "the rules" by which it is learned. Computational models learn by extracting the regularity of relationships through repeated exposure, which occurs via "training", in which input patterns are provided over and over again. The adjustment of the connection weights between units, as mentioned above, is responsible for learning within the system. Learning occurs through changes in the interrelationships between units, a process thought to be similar to what happens in the nervous system.
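A minimal sketch of such a network (our own toy example, not from the text): an input, hidden and output layer of sigmoid units whose connection weights are adjusted through repeated exposure to training patterns, here the XOR problem:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # turns a unit's net input into its output

rng = np.random.default_rng(seed=1)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output weights

# Training patterns presented over and over again (here: XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

for _ in range(20000):                 # repeated exposure ("training")
    hidden = sigmoid(X @ W1 + b1)      # hidden-layer outputs
    out = sigmoid(hidden @ W2 + b2)    # network response
    # Adjust the connection weights to reduce the error (backpropagation).
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0, keepdims=True)

print(out.round(2))  # typically approaches [[0], [1], [1], [0]] without explicit rules
```

No rule for XOR is ever given to the network; the regularity is extracted purely from the repeated presentation of the input-output patterns, which is the point made above.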
4.08: References
1. Filler, A. G. The history, development, and impact of computed imaging in neurological diagnosis and neurosurgery: CT, MRI, DTI. Nature Precedings. DOI: 10.1038/npre.2009.3267.4.
2. Socialpsychology.org
• Ward, Jamie (2006). The Student's Guide to Cognitive Neuroscience. New York: Psychology Press.
• Banich, Marie T. (2004). Cognitive Neuroscience and Neuropsychology. Houghton Mifflin Company. ISBN 0618122109.
• Gazzaniga, Michael S. (2000). Cognitive Neuroscience. Blackwell Publishers. ISBN 0631216596.
• Sparknotes.com (accessed 27.06.2007).
• Maeda, Fumiko, & Pascual-Leone, Alvaro (2003). Transcranial magnetic stimulation: studying motor neurophysiology of psychiatric disorders. Springer-Verlag.
• Ilmoniemi, Risto J., & Karhu, Jari. A report by the BioMag Laboratory, Helsinki University Central Hospital, and Nexstim Ltd.
• Jorge, Ricardo E., Robinson, Robert G., Tateno, Amane, Narushima, Kenji, Acion, Laura, Moser, David, Arndt, Stephan, & Chemerinski, Eran. Repetitive Transcranial Magnetic Stimulation as Treatment of Poststroke Depression: A Preliminary Study.
• Moates, Danny R. An Introduction to Cognitive Psychology.
Happiness, sadness, anger, surprise, disgust and fear. All these words describe some kind of abstract inner states in humans, in some cases difficult to control. We usually call them feelings or emotions. But what is the reason that we are able to "feel"? Where do emotions come from and how are they caused? And are emotions and feelings the same thing? Or are we supposed to differentiate?
These are all questions that cognitive psychology deals with in emotion research. Emotion research in cognitive science is not much older than twenty years. The reason for this perhaps lies in the fact that much of the cognitive psychology tradition was based on computer-inspired information-processing models of cognition.
This chapter gives an overview of the topic for a better understanding of motivation and emotion. It provides information about theories concerning the causes of motivation and emotion in the human brain, their processes, their role in the human body, and the connection between the two topics. We will try to show the current state of research, some examples of psychological experiments, and different points of view on the issue of emotions. At the end we will briefly outline some disorders to emphasize the importance of emotions for social interaction.
5.02: Motivation
About Drives and Motives
Motivation is a broad notion that refers to the initiation, control and maintenance of bodily and mental activities. It is described by inner processes and variables which are used to explain behavioral changes. Motivations are commonly separated into two types:
1. Drives: acts of motivation, like thirst or hunger, that serve primarily biological purposes.
2. Motives: acts of motivation driven primarily by social and psychological mechanisms.
Motivation is an intervening variable, which means that it is not directly observable. Therefore, in order to study motivation, one must approach it through variables which are measurable and observable:
• Observable conditions of variation (independent variables[1])
• Indicators of behavior (dependent variables[2]), e.g. rate of learning, level of activity, ...
There are two major methodologies used to manipulate drives and motives in experiments:
Stimulation: initiating motives through aversive stimuli such as shocks, loud noise, heat or cold. Conversely, attractive stimuli can activate drives which lead to positive affective states, e.g. sexual drives.
Deprivation: withholding access to elementary requirements of biological or psychological health, such as nutrition or social contact. As a result, it leads to motives or drives which are not common for the species under normal conditions.
A theory of motivation was conceived by Abraham Maslow in 1970 (Maslow's hierarchy of needs). He considered two kinds of motivation:
1. Deficiency motivation: moves humans to restore their physical and psychological balance.
2. Growth motivation: moves people beyond past events and states of their personal development.
Maslow argues that everyone has a hierarchy of needs (see picture). According to this, our innate needs can be ordered in a hierarchy, starting with the "basic" ones and heading towards the more highly developed aspects of humanity. The hypothesis is that a human is ruled by the lower needs as long as they are not satisfied; once they are satisfied in an adequate manner, the human turns to the higher needs. (Compare the chapter on attention.)
Hierarchy of needs, Maslow (1970)
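As a toy illustration of the "lower needs rule until satisfied" hypothesis, the following sketch (our own, not Maslow's) picks out the lowest need whose satisfaction falls below an assumed threshold; the need names and the threshold value are illustrative assumptions:

```python
HIERARCHY = ["physiological", "safety", "belonging", "esteem", "self-actualization"]
ADEQUATE = 0.8  # assumed level at which a need counts as satisfied

def dominant_need(satisfaction):
    """Return the lowest need in the hierarchy that is not yet satisfied."""
    for need in HIERARCHY:
        if satisfaction.get(need, 0.0) < ADEQUATE:
            return need
    return "self-actualization"  # all lower needs are adequately met

# A person who is fed but feels unsafe is, on this rule, ruled by safety needs:
print(dominant_need({"physiological": 0.9, "safety": 0.4, "belonging": 0.9}))
```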
Nevertheless, throughout history one can find examples of people who willingly practiced deprivation through isolation, celibacy or hunger strike. These people may be exceptions to this hypothesis, but they may also have other, more pressing motives or drives which induce them to behave in this way.
It seems that individuals are able to resist certain motives via personal cognitive states. The capacity for cognitive reasoning and willing is a typically human feature, and the fact that it can contribute to many psychological diseases indicates that humans are not always capable of handling every mental state that arises. Humans are able to manipulate their motives without knowing their real emotional and psychological causes. This introduces the problem that the interplay of consciousness, unconsciousness and whatever else might be taken into account is essentially unknown. Neuroscience cannot yet provide a concrete explanation of the neurological substructures of motives, but there has been considerable progress in understanding the neurological processes underlying drives.
The Neurological Regulation of Drives
The Role of the Hypothalamus
The purpose of drives is to correct disturbances of homeostasis, which is controlled by the hypothalamus. Deviations from the optimal range of a regulated parameter, such as temperature, are detected by neurons concentrated in the periventricular zone of the hypothalamus. These neurons then produce an integrated response to bring the parameter back to its optimal value. This response generally consists of:
1. Humoral response
2. Visceromotor response
3. Somatic motor response
When you are dehydrated, freezing or exhausted, the appropriate humoral and visceromotor responses are activated automatically,[3] e.g. body fat reserves are mobilized, urine production is inhibited, you shiver, blood is shunted away from the body surface, and so on. But it is much faster and more effective to correct these disturbances by eating, drinking water, or actively seeking or generating warmth by moving. These are examples of drives generated by the somatic motor system, and they are driven by the activity of the lateral hypothalamus.
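The logic of such a regulated parameter can be sketched as a simple feedback loop. The following toy example (illustrative values only; the real circuitry is vastly more complex) checks a parameter against its optimal range and, on deviation, issues the three response types listed above:

```python
# Toy feedback loop for a regulated parameter, e.g. core temperature in
# degrees Celsius. All numbers are illustrative.
OPTIMUM = 37.0
TOLERANCE = 0.5

def detect_deviation(value):
    """Sensor stage (periventricular neurons): deviation from the optimum."""
    return value - OPTIMUM

def integrated_response(deviation):
    """Issue the three response types described above."""
    if abs(deviation) <= TOLERANCE:
        return []
    direction = "raise" if deviation < 0 else "lower"
    return [
        f"humoral response: {direction} parameter via pituitary hormones",
        f"visceromotor response: {direction} parameter via ANS balance",
        f"somatic motor response: behaviour that will {direction} the parameter",
    ]

for action in integrated_response(detect_deviation(35.8)):
    print(action)
```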
For illustration, we will give a brief overview of the neural basis of the regulation of feeding behavior, which is divided into long-term and short-term regulation.
The long-term regulation of feeding behavior prevents energy shortfalls and concerns the regulation of body fat and feeding. In the 1940s the "dual center" model was popular, which divided the hypothalamus into a "hunger center" (lateral hypothalamus) and a "satiety center" (ventromedial hypothalamus). This theory developed from the observations that bilateral lesions of the lateral hypothalamus cause anorexia, a severely diminished appetite for food (lateral hypothalamic syndrome), while bilateral lesions of the ventromedial hypothalamus cause overeating and obesity (ventromedial hypothalamic syndrome). However, this "dual center" model has proved overly simplistic. The reason why hypothalamic lesions affect body fat and feeding behavior has in fact much to do with leptin signaling. Adipocytes (fat cells) release the hormone leptin, which regulates body mass by acting directly on neurons of the arcuate nucleus[4] of the hypothalamus that decrease appetite and increase energy expenditure. A fall in leptin levels stimulates another type of arcuate nucleus neuron[5] as well as neurons in the lateral hypothalamus,[6] which activate the parasympathetic division of the ANS and stimulate feeding behavior.

The short-term regulation of feeding behavior deals with appetite and satiety. Until 1999 scientists believed that hunger was merely the absence of satiety. This changed with the discovery of a peptide called ghrelin, which is highly concentrated in the stomach and is released into the bloodstream when the stomach is empty. In the arcuate nucleus it activates neurons[7] that strongly stimulate appetite and food consumption. The meal finally ends through the concerted action of several satiety signals, such as gastric distension and the release of insulin.[8] But it seems that animals do not eat only because they want food to satisfy their hunger; they also eat because they like food in a merely hedonistic sense. Research on humans and animals suggests that "liking" and "wanting" are mediated by separate circuits in the brain.
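The push and pull of these signals can be caricatured as a small set of rules. The sketch below is purely illustrative: reducing the hormones to four scalar values and the thresholds chosen are our assumptions, not findings from the literature.

```python
def feeding_drive(leptin, ghrelin, gastric_distension, insulin):
    """True if the signals described above favour eating (toy rule)."""
    hunger = ghrelin > 0.5 or leptin < 0.3        # empty stomach or low fat reserves
    satiety = gastric_distension > 0.7 or insulin > 0.6
    return hunger and not satiety

# Empty stomach, normal fat reserves, no satiety signals yet: drive to eat.
print(feeding_drive(leptin=0.5, ghrelin=0.8, gastric_distension=0.1, insulin=0.2))  # True
# After the meal, gastric distension and insulin end the episode.
print(feeding_drive(leptin=0.5, ghrelin=0.2, gastric_distension=0.9, insulin=0.7))  # False
```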
The Role of Dopamine in Motivation
In the early 1950s, Peter Milner and James Olds conducted an experiment in which a rat had an electrode implanted in its brain, so that the brain could be locally stimulated at any time. The rat was placed in a box which contained a lever for food and water and a lever that delivered a brief stimulus to the brain when stepped on. At the beginning the rat wandered about the box and stepped on the levers by accident, but before long it was pressing the stimulus lever repeatedly. This behavior is called electrical self-stimulation. Sometimes the rats would become so involved in pressing the lever that they would forget about food and water, stopping only after collapsing from exhaustion. Electrical self-stimulation apparently provided a reward that reinforced the habit of pressing the lever. Researchers were able to identify the most effective sites for self-stimulation in the brain: the mesocorticolimbic dopamine system. Drugs that block dopamine receptors reduced the self-stimulation behavior of the rat, and in the same way these drugs greatly reduced the pressing of a lever for food even when the rat was hungry. These experiments suggested a mechanism by which natural rewards (food, water, sex) reinforce particular behaviors.

Dopamine also plays an important role in addiction to drugs like heroin, nicotine and cocaine: these drugs either stimulate dopamine release (heroin, nicotine) or enhance dopamine action (cocaine) in the nucleus accumbens. Chronic stimulation of this pathway causes a down-regulation of the dopamine "reward" system, and this adaptation leads to the phenomenon of drug tolerance. Indeed, drug discontinuation in addicted animals is accompanied by a marked decrease in dopamine release and function in the nucleus accumbens, leading to craving for the discontinued drug. The exact role of dopamine in motivating behavior continues to be debated. However, much evidence suggests that animals are motivated to perform behaviors that stimulate dopamine release in the nucleus accumbens and related structures.
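The reinforcement logic of the Olds and Milner experiment can be illustrated with a minimal reinforcement-learning loop, in which a reward signal (standing in for dopamine release) strengthens the value of the action that produced it. This is a conceptual sketch under invented parameters, not a model of the actual neural circuitry:

```python
import random

q = {"press_lever": 0.0, "wander": 0.0}   # learned value of each action
ALPHA = 0.1                               # learning rate
EPSILON = 0.1                             # exploration probability

def reward(action):
    # Self-stimulation is rewarding; wandering is not.
    return 1.0 if action == "press_lever" else 0.0

for _ in range(500):
    if random.random() < EPSILON:
        action = random.choice(list(q))    # explore, as the rat did at first
    else:
        action = max(q, key=q.get)         # exploit the learned values
    q[action] += ALPHA * (reward(action) - q[action])

print(q)  # the value of "press_lever" approaches 1.0: the habit is reinforced
```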
5.03: Emotions
Basics
In contrast to previous research, modern brain-based neuroscience has taken a more pragmatic approach to the field of emotions: emotions are unquestionably brain-related processes which deserve scientific study, whatever their purpose may be.
One interpretation regards emotions as "action schemes" that lead to particular behaviours essential for survival. It is important to distinguish between conscious aspects of emotion, such as subjective (often bodily) feelings, and unconscious aspects, such as the detection of a threat. This will be discussed later in conjunction with awareness of emotion. It is also important to differentiate between a mood and an emotion: a mood refers to a state in which an emotion occurs frequently or continuously. As an example, fear is an emotion; anxiety is a mood.
The first question which arises is how to categorise emotions. They could be treated as a single entity, but it may make more sense to distinguish between them. This leads to the question of whether some emotions, like happiness or anger, are more basic than others, like jealousy or love, and whether emotions depend on culture and/or language.
One of the most influential ethnographic studies, by Ekman and Friesen, compared facial expressions of emotion across different cultures and concluded that there are six basic types of emotion expressed in faces, namely sadness, happiness, disgust, surprise, anger and fear, independent of culture and language. An alternative approach is to differentiate between emotions not by categorising them but by measuring their intensity along different dimensions, e.g. their valence and their arousal. If this theory were true, one might expect to find different brain regions which selectively process positive or negative emotions.
Six basic types of emotions expressed in faces
Complex emotions like jealousy, love and pride differ from basic emotions in that they involve awareness of oneself in relation to other people and one's attitude towards them. Hence they come along with a more complex attributional process, which is required to appreciate the thoughts and beliefs of other people. Complex emotions are also more likely to depend on cultural influences than basic emotions. If you think of Knut feeling embarrassment, you have to consider what kind of action he committed, in which situation, and how this action raised the disapproval of other people.
Awareness and Emotion
Awareness is closely connected with changes in the environment or in one's psycho-physiological state. Why recognise changes rather than stable states? An answer could be that changes are an important indicator of our situation: they show that our situation is unstable, and paying attention to or focusing on them might increase the chance of survival. A change bears more information than repetitive events and therefore appears more exciting; repetition reduces excitement. Once we think we have extracted the most important information from a situation or event, we become unaware of that event or of certain facts about it.
Current research in this field suggests that changes are needed for emotions to emerge, so we can say that emotion is strongly attention-dependent: the event has to draw our attention. No recognition, no emotion. But do we always make an emotional evaluation when we are aware of certain events? How relevant does the change have to be for our recognition? Emotional changes are highly personally significant, meaning that they require a relation to our personal self.
Significance presupposes order and relations. Relations are to meaning as colours are to vision: a necessary condition, but not its whole content. One determines the significance and scope of a change by, for example, the event's impact (its strength), its reality, its relevance, and factors related to the background circumstances of the subject. We feel no emotion in response to changes which we perceive as unimportant or unrelated. Roughly, one can say that emotions express our attitude toward unstable, significant objects which are somehow related to us.
This is also connected to the fact that we respond more strongly to novel experiences, to things that are unexpected or not yet seen. When children get new toys they are very excited at first, but after a while (as one can observe, or simply remember from one's own childhood) they show less interest in the toy. This shows that emotional responses decline over time, an aspect called the process of adaptation. The threshold of awareness keeps rising if the stimulus level is constant; hence awareness decreases, and the organism withdraws its consciousness from more and more events. The person becomes fed up; it has had enough. The opposite effect is also possible. It is known as the process of facilitation: in this case the threshold of awareness diminishes.
Consciousness then focuses on an increasing number of events; this happens when new stimuli are encountered. The process of adaptation may prevent us from endlessly repeating actions: without it, a human would not be able to learn anything new and would be caught in an infinite loop. The emotional environment contains not only what is, and what will be, experienced but also all that could be, or that one desires to be, experienced; for the emotional system, all such possibilities are posited as simultaneously there and are compared with each other.
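The interplay of adaptation and facilitation just described can be sketched as a moving awareness threshold. All numbers here are arbitrary; the point is only the direction of the dynamics:

```python
# A constant stimulus raises the awareness threshold (adaptation); a novel
# stimulus lowers it again (facilitation).
threshold = 1.0
for t in range(6):
    stimulus = 1.5                           # the same, repetitive event
    aware = stimulus > threshold
    print(f"t={t}: threshold={threshold:.1f}, aware={aware}")
    if aware:
        threshold += 0.2                     # adaptation: the bar creeps up

threshold -= 0.8                             # facilitation: novelty lowers the bar
print(f"after a novel event: threshold={threshold:.1f}")
```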
Whereas intellectual thinking expresses a detached and objective manner of comparison, the emotional comparison is made from a personal and interested perspective; intellectual thinking may be characterised as an attempt to overcome the personal emotional perspective. It is quite difficult to give an external description of something that is tied to an intrinsic, personal perspective, but it is possible. In the following, the most popular theories will be presented, together with a rough overview of the neural substrates of emotion.
The Neural Correlate of Emotion
Papez Circuit
James W. Papez was the originator of the Papez circuit theory (1937). He was the first to try to explain emotions in a neurofunctional way. Papez discovered the circuit after injecting the rabies virus into a cat's hippocampus and observing its effects on the brain. The Papez circuit is chiefly involved in the cortical control of emotion. The corpus mamillare (part of the hypothalamus) plays a central role. The Papez circuit involves several regions of the brain, with the following course:
• The hippocampus projects via the fornix to the corpus mamillare.
• From there, neurons project via the fasciculus mamillothalamicus to the nucleus anterior of the thalamus, and then to the gyrus cinguli.
• The connection between the gyrus cinguli and the hippocampus closes the circuit.
In 1949 Paul MacLean extended this theory by hypothesizing that regions like the amygdala and the orbitofrontal cortex work together with the circuit to form an emotional brain. However, the theory of the Papez circuit could not be maintained: for one thing, some regions of the circuit can no longer be related to the functions originally ascribed to them, and secondly, the current state of research suggests that each basic emotion has its own circuit. Furthermore, the assumption that the limbic system is solely responsible for these functions is outdated; other cortical and non-cortical structures of the brain have an enormous bearing on the limbic system. The emergence of emotion is thus always an interaction of many parts of the brain.
Amygdala and Fear
The amygdala (Latin for "almond"; anatomically the corpus amygdaloideum) is located in the left and right temporal lobes. It belongs to the limbic system and is essentially involved in the emergence of fear. In addition, the amygdala plays a decisive role in the emotional evaluation and recognition of situations, as well as in the analysis of potential threats. It handles external stimuli and induces vegetative reactions, which may help prepare the body for fight or flight by increasing heart and breathing rate. This small mass of grey matter is also responsible for learning on the basis of reward or punishment. If both parts of the amygdala are destroyed, the person loses their sensation of fear and anger. Experiments with patients whose amygdala is damaged show the following: the participants were impaired, to a lesser degree, in recognizing facial anger and disgust, and they could not match pictures of the same person when the expressions were different. Beyond that, Winston, O'Doherty and Dolan report that amygdala activation was independent of whether subjects engaged in incidental viewing or explicit emotion judgements; other regions (including the ventromedial frontal lobes), however, were activated only when explicit judgements about the emotion were made. This was interpreted as a reinstatement of the "feeling" of the emotion.

Further studies show that there is a slow route to the amygdala via the primary visual cortex and a fast subcortical route from the thalamus to the amygdala. The amygdala is activated by unconscious fearful expressions in healthy participants and also in "blindsight" patients with damage to the primary visual cortex. The fast route is imprecise and induces fast, unconscious reactions towards a threat before you consciously notice it and can react properly via the slow route. This was shown by experiments with persons who have a snake phobia (ophidiophobia) or a spider phobia (arachnophobia). When shown a snake, the snake phobics exhibited a bodily reaction before they reported seeing the snake; a similar reaction was not observable in the spider phobics. With spiders, the results were the other way around.
Recognition of Other Emotional Categories
Another basic emotional category which is largely independent of the other emotions is disgust. The word literally means "bad taste", and the emotion is evolutionarily related to contamination through ingestion. Patients with Huntington's disease have problems recognizing disgust. The insula, a small region of cortex buried beneath the temporal lobes, plays an important role in processing facial expressions of disgust. Furthermore, about half of the patients with a damaged amygdala have problems with facial expressions of sadness. Damage to the ventral regions of the basal ganglia causes a deficit in the selective perception of anger, so this brain area could be responsible for the perception of aggression. Happiness cannot be selectively impaired because it is supported by a more widely distributed network.
Functional Theories
In order to explain human emotions, that is, to discover how they arise and how they are represented in the brain, researchers have worked out several theories. In the following, the most important views will be discussed.
James-Lange Theory
The James-Lange theory of emotion states that the self-perception of bodily changes produces emotional experience: for example, you are happy because you are laughing, or you feel sad because you are crying. Alternatively, when a person sees a spider, he or she might experience fear. One problem with this theory is that it is not clear what kind of processing leads to the changes in bodily state and whether this process can be seen as part of the emotion itself. Moreover, people paralyzed from the neck down, who have little awareness of sensory input, are still able to experience emotions. Also, research by Schachter and Singer has shown that changes in bodily state are not enough to produce emotions. Because of this, an extension of the theory was necessary.
Two Factor Theory
The two factor theory views emotion as a compound of two factors: physiological arousal and cognition. Schachter and Singer (1962) conducted well-known studies in this field. They injected participants with adrenaline (called epinephrine in the USA), a drug that causes a number of effects such as increased blood flow to the muscles and increased heart rate. The result was that the presence of the drug in the body did not by itself lead to experiences of emotion; only in the presence of a cognitive setting, such as an angry man in the room, did participants self-report an emotion. Contrary to the James-Lange theory, this study suggests that bodily changes can only support conscious emotional experiences but do not create emotions. The interpretation of a given physiological state as a certain emotion therefore depends on the subject's circumstances.
Somatic Marker Hypothesis
This more recent theory of emotion (from A. Damasio) emphasizes the role of bodily states and implies that "somatic marker" signals influence behaviour, particularly reasoning and decision-making. Somatic markers are connections between previous situations, which are stored in the cortex, and the bodily feeling of such situations (stored, for example, in the amygdala). It follows that somatic markers are very useful during decision-making, because on the grounds of previously acquired knowledge they can give immediate feedback on whether one option "feels" better than another. People who cheat or murder without feeling anything may lack the somatic markers that would otherwise prevent them from doing so.
In order to investigate this hypothesis, a gambling task was used. Four decks of cards (A, B, C, D) lay on the table, and participants had to draw one card at a time. The back of each card showed either a monetary penalty or a gain. The players were told to play so as to win the most money. Playing from decks A and B leads to a loss of money, whereas choosing from decks C and D leads to gains. Persons without a brain lesion learned to avoid decks A and B, but players with such damage did not.
Reading Minds
Empathy is the ability to appreciate others' emotions and their point of view. Simulation theory states that the same neural and cognitive resources are used when perceiving the emotional expressions of others as when producing those actions and expressions oneself. If you are watching a movie in which one person touches another, the same neural mechanism (in the somatosensory cortex) is activated as if you were physically touched. Further studies have investigated empathy for pain: when you see someone experiencing pain, the brain region involved in expecting another person's pain overlaps with the region involved in experiencing that pain oneself.
Mood and Memory
When we store a memory, we not only record the sensory data but also our mood and emotional state. Our current mood thus affects which memories are most effortlessly available to us, such that when we are in a good mood we recollect good memories (and vice versa). Since memory is associative in nature, this also means that we tend to store happy memories in a linked set. There are two different ways in which mood shapes how we remember past events:
Mood-congruence
Memory occurs where the current mood helps the recall of mood-congruent material (e.g. characters in stories who feel the way the reader feels while reading), regardless of our mood at the time the material was stored. Thus, when we are happy, we are more likely to remember happy events; remembering all the negative events of our past when depressed is likewise an example of mood congruence. This means that in a happy mood you are more likely to remember a funeral at which you happened to be happy, while in a sad mood you are more likely to remember a party at which you were sad, even though funerals are typically sad and parties happy.
Mood-dependency
Memory occurs where the congruence of the current mood with the mood at the time of storage helps the recall of that memory. When we are happy, we are more likely to remember other times when we were happy. So, if you want to remember something, get into the mood you were in when you experienced it. You can easily try this yourself: bring yourself into a certain mood by listening to the saddest or happiest music you know, then learn a list of words, and later try to recall the list in the other mood or in the same mood. You will find that you remember the list better when you are in the same mood as when you learned it.
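The funeral/party example can be captured in a few lines: treat each stored memory as carrying the mood in which it was encoded, and let retrieval favour the memory whose stored mood is closest to the current one. This is a toy sketch with invented mood values, not an empirical model:

```python
# Mood is coded on a single axis from -1 (sad) to +1 (happy); each memory
# stores the mood in which it was encoded.
memories = [
    ("funeral at which I was oddly happy", +0.6),
    ("party at which I was sad",           -0.7),
    ("ordinary Tuesday",                    0.0),
]

def most_available(current_mood, memories):
    # The smaller the gap between current and stored mood, the easier the recall.
    return min(memories, key=lambda m: abs(current_mood - m[1]))

print(most_available(+0.8, memories))  # happy mood: the happy funeral surfaces first
print(most_available(-0.8, memories))  # sad mood: the sad party surfaces first
```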
5.04: Disorders
Without balanced emotions, one's ability to interact in a social network will be affected in some way (e.g. in reading minds). In this part of the chapter some grave disorders will be presented: depression, autism, and antisocial behaviour disorders such as psychopathy and sociopathy. It is important to mention that these disorders will mainly be considered with regard to their impact on social competence. For a full account of the characteristics of each disorder, we recommend reading the corresponding articles provided by Wikipedia.
Autism
Autism is thought to be an innate condition with individual forms distributed along a broad spectrum. This means that symptoms can range from minor behavioral problems to major mental deficits, but there is always some impairment of social competence. The American Psychiatric Association characterizes autism as "the presence of markedly abnormal or impaired development in social interaction and communication and a markedly restricted repertoire of activities and interests" (Diagnostic and Statistical Manual, DSM-IV, 1994). The deficits in social competence are sometimes divided into the so-called "triad of impairments":
(1) Social interaction: difficulties with social relationships, for example appearing distanced and indifferent to other people.
(2) Social communication: autists have problems with verbal and non-verbal communication; for example, they do not fully understand the meaning of common gestures, facial expressions or tones of voice. They often show reduced or even no eye contact, avoid body contact such as shaking hands, and have difficulties understanding metaphors and "reading between the lines".
(3) Social imagination: autists lack social imagination, which manifests in difficulties in the development of interpersonal play and imagination, for example a limited range of imaginative activities, possibly copied and pursued rigidly and repetitively.
All forms of autism can already be recognized during childhood and therefore disturb the proper socialization of the afflicted child. Autistic children are often less interested in playing with other children, but may, for example, love to arrange their toys with utmost care. Unable to interpret emotional expressions and social rules, autists are prone to show inappropriate behaviour towards the people around them. Since autists may not be obviously impaired, other people may misunderstand their actions as provocation.
There are still other features of autism: autists often show stereotyped behaviour and feel quite uncomfortable when things change in the routines and environment they are used to. Very rarely, a person with autism may have a remarkable talent, such as memorizing a whole city panorama including, for example, the exact number of windows in each of the buildings.
There are several theories which try to explain autism or particular features of it. In an experiment conducted by Baron-Cohen and colleagues (1995), cartoons were presented to normal and autistic children showing a smiley face in the centre of each picture and four different sweets, one in each corner (see picture below). The smiley, named Charlie, was gazing at one of the sweets. The children were asked questions such as: "Which chocolate does Charlie want?"
Autistic children were able to detect where the smiley was looking but unable to infer its 'desires'. (Adapted from Ward, J. (2006). The Student's Guide to Cognitive Neuroscience. Hove: Psychology Press, p. 316.)
Normal children could easily infer Charlie's desires from his gaze direction, whereas autistic children could not.
Additional evidence from other experiments suggests that autists are unable to use eye-gaze information to interpret people's desires and predict their behaviour, which is crucial for social interaction. Another proposal to explain autistic characteristics suggests that autists lack representations of other people's mental states ("mindblindness", proposed by Baron-Cohen, 1995b).
Depression
Depression is a disorder involving emotional dysfunction, characterized by a state of intense sadness, melancholia and despair. The disorder affects social and everyday life. There are many different forms of depression which differ in strength and duration. People affected by depression suffer from anxiety, distorted thinking, dramatic mood changes and many other symptoms. They feel sad, and everything seems bleak to them, which leads to an extremely negative view of themselves and of their current and future situation. These factors can lead to the loss of a normal social life, which may affect the depressed person even further; suffering from depression and losing one's social network can thereby become a vicious circle.
Psychopathy and Sociopathy
Psychopathy and sociopathy are nowadays subsumed under the notion of antisocial behaviour disorders, but experts still disagree about whether they are really separate disturbances or rather forms of other disorders, e.g. autism. Psychopaths and sociopaths often come into conflict with their social environment because they repeatedly violate social and moral rules. Acquired sociopathy manifests in the inability to form lasting relationships, irresponsible behaviour, quickness to anger, and exceptionally strong egocentric thinking. While acquired sociopathy is characterised by impulsive antisocial behaviour that often brings no personal advantage, developmental psychopathy manifests in goal-directed and self-initiated aggression.

Acquired sociopathy is caused by brain injury, especially to the orbitofrontal cortex (frontal lobe), and is thought to reflect a failure to use emotional cues together with a loss of social knowledge. Sociopaths are therefore unable to control and plan their behaviour in a socially adequate manner. In contrast to sociopaths, psychopaths do not become angry for minor reasons; rather, they act aggressively without any understandable reason at all, which might be due to an inability to understand and distinguish between moral rules (concerning the welfare of others) and conventions (consensus rules of society). Furthermore, they may feel no guilt or empathy for their victims. Psychopathy is probably caused by a failure to process the distress cues of others: psychopaths are unable to understand the sad and fearful expressions that would normally suppress aggression (Blair, 1995). It is important to mention that they are nevertheless able to detect stimuli that are threatening to themselves.
5.05: Summary
We hope that this chapter gave you an overview and answered the questions we posed at the beginning. As one can see, this young field of cognitive science is broad and not yet completely researched. Many different theories have been proposed to explain emotion and motivation, such as the James-Lange theory, which claims that bodily changes lead to emotional experiences. This theory led to the two factor theory, which in contrast says that bodily changes only support emotional experiences, whereas the newest theory (the somatic marker hypothesis) states that somatic markers support decision-making. When analyzing emotions, one has to distinguish between conscious aspects, like a feeling, and unconscious aspects, like the detection of a threat. Presently, researchers distinguish six basic emotions that are independent of cultural aspects. In contrast to these basic emotions, other emotions also involve social awareness. So, emotions are important not only for our survival but for our social life, too. Reading faces helps us to communicate and to interpret the behaviour of other people. Many disorders impair this ability, leaving the afflicted person unable to integrate into the social community.

Another important part of understanding emotions is awareness: we pay attention chiefly to new things in order to avoid processing unimportant information. Moods also affect our memory: we can remember things better if we are in the same mood as in the original situation, and if the things we want to remember are connoted in the same way as our current mood. We also outlined the topic of motivation, which is crucial to initiating and upholding our mental and bodily activities. Motivation consists of two parts: drives (biological needs) and motives (primarily social and psychological mechanisms). One important theory is Maslow's hierarchy of needs, which states that higher motivations are only aspired to once lower needs are satisfied. Having touched here on mood and memory, the next chapter turns to memory and language.
5.06: References
1. Independent variables are the circumstances of major interest in an experiment. The participant only reacts to them and cannot actively change them; they are independent of his or her behaviour.
2. The measured behaviour is called the dependent variable.
3. In the humoral response, hypothalamic neurons stimulate or inhibit the release of pituitary hormones into the bloodstream; in the visceromotor response, neurons in the hypothalamus adjust the balance of sympathetic and parasympathetic outputs of the autonomic nervous system (ANS).
4. αMSH neurons and CART neurons of the arcuate nucleus. αMSH (alpha-melanocyte-stimulating hormone) and CART (cocaine- and amphetamine-regulated transcript) are anorectic peptides, which activate the pituitary hormones TSH (thyroid-stimulating hormone) and ACTH (adrenocorticotropic hormone); these have the effect of raising the metabolic rate of cells throughout the body.
5. NPY neurons and AgRP neurons. NPY (neuropeptide Y) and AgRP (agouti-related peptide) are orexigenic peptides, which inhibit the secretion of TSH and ACTH.
6. MCH (melanin-concentrating hormone) neurons, which have extremely widespread connections in the brain, including direct monosynaptic innervation of most of the cerebral cortex, which is involved in organizing and initiating goal-directed behaviors, such as raiding the refrigerator.
7. The NPY- and AgRP neurons.
8. The pancreatic hormone insulin, released by the β cells of the pancreas, acts directly on the arcuate and ventromedial nuclei of the hypothalamus. It appears to operate in much the same way as leptin to regulate feeding behavior, with the difference that its primary stimulus for release is an increased blood glucose level.
Books
• Zimbardo, Philip G. (1995, 12th edition). Psychology and Life. Inc. Scott, Foresman and Company, Glenview, Illinois. ISBN 020541799X
• Banich, Marie T. (2004). Cognitive Neuroscience and Neuropsychology. Houghton Mifflin Company. ISBN 0618122109
• Robert A. Wilson and Frank C. Keil. (2001). The MIT Encyclopedia of Cognitive Sciences (MITECS). Bradford Book. ISBN 0262731444
• Antonio R. Damasio. (1994) reprinted (2005). Descartes' Error: Emotion, Reason and the Human Brain. Penguin Books. ISBN 014303622X
• Antonio R. Damasio. (1999). The Feeling of what Happens. Body and Emotion in the Making of Consciousness. Harcourt Brace & Company. ISBN 0099288761
• Aaron Ben-Ze'ev (Oct 2001). The Subtlety of Emotions.(MIT CogNet). ISBN 0262523191
• Ward, J. (2006). The Student's Guide to Cognitive Neuroscience. Hove: Psychology Press. ISBN 1841695351
Journals
• Dalgleish, Tim. The emotional brain.
• (1) Leonard, C.M., Rolls, E.T., Wilson, F.A.W. & Baylis, C.G. Neurons in the amygdala of the monkey with responses selective for faces. Behav. Brain Res. 15, 159-176 (1985)
• (2) Adolphs, R., Tranel, D., Damasio, H. & Damasio, A. Impaired recognition of emotion in facial expressions following bilateral damage of the human amygdala. Nature 372, 669-672 (1994)
• (3) Young, A. W. et al. Face processing impairments after amygdalotomy. Brain 118, 15-24 (1995)
• (4) Calder, A. J. et al. Facial emotion recognition after bilateral amygdala damage: Differentially severe impairment of fear. Cognit. Neuropsychol. 13, 699-745 (1996)
• (5) Scott, S. K. et al. Impaired auditory recognition of fear and anger following bilateral amygdala lesions. Nature 385, 254-257 (1997)
• (6) Cahill, L., Babinsky, R., Markowitsch, H. J. & McGaugh, J. L. The amygdala and emotional memory. Nature 377, 295-296 (1995)
• (7) Wood, Jacqueline N. & Grafman, Jordan (02/2003). Human Prefrontal Cortex. Nature Reviews Neuroscience
• (8) Brothers, L., Ring, B. & Kling, A. Response of neurons in the macaque amygdala to complex social stimuli. Behav. Brain Res. 41, 199-213 (1990)
• (9) Bear, M.F., Connors, B.W., Paradiso, M.A. (2006, 3rd edition). Neuroscience: Exploring the Brain. Lippincott Williams & Wilkins. ISBN 0-7817-6003-8
6.01: Introduction
Imagine our friend Knut, whom we have already met in earlier chapters of this book, hastily walking through his apartment looking everywhere for a gold medal that he won many years ago at a swimming contest. The medal is very important to him, since it was his recently deceased mother who had insisted on his participating. The medal reminds him of the happy times in his life. But now he does not know where it is. He is sure that he last saw it two days ago, but searching through his recent experiences he is not able to recall where he put it.
So what exactly enables Knut to remember the swimming contest and why does the medal trigger the remembrance of the happy times in his life? Also, why is he not able to recall where he has put the medal, even though he is capable of scanning through most of his experiences of the last 48 hours?
Memory, with all of its different forms and features, is the key to answering these questions. When people talk about memories, they are subconsciously talking about "the capacity of the nervous system to acquire and retain usable skills and knowledge, which allows living organisms to benefit from experience".[1] Yet, how does this so-called memory function? In the process of answering this question, many different models of memory have evolved. Distinctions are drawn between Sensory Memory, Short Term Memory and Long Term Memory based on the period of time information remains accessible after it is first encountered. Sensory Memory, which can further be divided into Echoic and Iconic Memory, has the smallest time span for accessibility of information. With Short Term and Working Memory, information is accessible seconds to minutes after it is first encountered, while Long Term Memory has an accessibility period ranging from minutes to decades. This chapter discusses these different types of memory and further gives an insight into memory phenomena like False Memory and Forgetting. Finally, we will consider the biological foundations of memory in human beings and the biological changes that occur when learning takes place and information is stored.
6.02: Types of Memory
In the following section, we will discuss the three different types of memory and their respective characteristics: Sensory Memory, Short Term (STM) or Working Memory (WM) and Long Term Memory (LTM).
Sensory Memory
This type of memory has the shortest retention time, from mere milliseconds up to about five seconds. Roughly, Sensory Memory can be subdivided into two main kinds:
Sensory Memory
• Iconic Memory (visual input)
• Echoic Memory (auditory input)
While Iconic and Echoic Memory have been well researched, there are other types of Sensory Memory, like haptic, olfactory, etc., for which no sophisticated theories exist so far.
It should be noted, though, that according to Atkinson and Shiffrin (1968),[2] Sensory Memory was initially considered to be the same thing as Iconic Memory; Echoic Memory was added to the concept of Sensory Memory following research by Darwin and others (1972).[3] Let us consider the following intuitive example of Iconic Memory: we probably all know the phenomenon that it seems possible to draw lines, figures or names with lighted sparklers by moving the sparkler fast enough in a dark environment. Physically, however, there are no such things as lines of light. So why can we nevertheless see such figures? This is due to Iconic Memory. Roughly speaking, we can think of this subtype of memory as a kind of photographic memory, but one which only lasts for a very short time (milliseconds, up to a second). The image of the light of the sparkler remains in our memory (persistence of vision) and thus makes it seem as if the light leaves lines in the dark. The term "Echoic Memory", as the name already suggests, refers to auditory input; here the persistence time is a little longer than for Iconic Memory (up to five seconds).
At the level of Sensory Memory, no manipulation of the incoming information occurs; the information is simply transferred to Working Memory. 'Transfer' here implies that the amount of information is reduced, because the capacity of Working Memory is not large enough to cope with all the input coming from our sense organs. The next paragraphs deal with the different theories of selection that describe this transfer of information from Sensory Memory to Working Memory.
One of the first experiments researching the phenomenon of attention was the shadowing task (Cherry et al., 1953),[4] which deals with the filtering of auditory information. The subject wears earphones and is presented with a different story in each ear. He or she has to listen to the message in one ear and repeat it out loud (shadowing). When asked afterwards about the content of the two stories, participants can repeat only the story from the shadowed side; they do not know the content of the other ear's story. From these results Broadbent derived his Filter Theory (1958).[5] This theory proposes that the filtering of information is based on specific physical properties of stimuli: for every frequency there exists a distinct nerve pathway, and attentional control selects which pathway is active, thereby controlling which information is passed on to Working Memory. In this way it is possible to follow the utterances of one person with a certain voice frequency even though there are many other sounds in the surroundings. But imagine a situation in which the so-called cocktail party effect applies: while having a conversation in a loud crowd at a party and listening to your interlocutor, you will immediately switch to listening to another conversation if its content is semantically relevant to you, e.g. if your name is mentioned.
So filtering evidently also happens semantically. The shadowing task described above was modified so that the semantic content of a sentence was split between the ears, and the subject, although shadowing only one ear, was able to repeat the whole sentence because he or she was following the semantic content unconsciously.
Reacting to this effect of semantic filtering, new theories were developed. Two important ones are the Attenuation Theory (Treisman, 1964)[6] and the Late Selection Theory (Deutsch & Deutsch, 1963).[7] The former proposes that we attenuate information which is less relevant but do not filter it out completely; thereby semantic information from ignored frequencies can also be analyzed, though not as efficiently as that from the attended frequencies. The Late Selection Theory presumes instead that all information is analyzed first and the decision about its importance is made afterwards. Treisman and Geffen conducted an experiment to find out which of the theories holds. It was again a variant of the shadowing task: the subjects had to shadow one ear, but in addition they had to pay attention to a certain sound which could appear in either ear. If the sound occurred, the subject had to react in a certain way (for example, knock on the table). The result was that subjects identified the sound on the shadowed ear in 87% of all cases, but only in 8% of the cases on the ignored side. This shows that the information on the ignored side must be attenuated, since the rate of identification is lower. If the Late Selection Theory held, the subjects would analyze all information and would be able to identify the sound equally well on the ignored side as on the shadowed side. Since this is not the case, Treisman's Attenuation Theory explains the empirical results more accurately.
Illustration of the attention control models: a) Treisman's Attenuation Theory and b) Deutsch & Deutsch's Late Selection Theory.
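Treisman's idea that the ignored channel is attenuated rather than blocked can be expressed as a gain applied before a detection threshold. In this toy sketch the gain and threshold are invented numbers, chosen only to echo the pattern of results (frequent detection on the shadowed ear, rare detection on the ignored ear, breakthrough for highly salient content such as one's own name):

```python
def detected(signal_strength, attended, threshold=0.5):
    """Attenuation, not a hard filter: the ignored channel keeps a small gain."""
    gain = 1.0 if attended else 0.1
    return signal_strength * gain > threshold

print(detected(0.8, attended=True))    # True: target on the shadowed ear
print(detected(0.8, attended=False))   # False: the same target is usually missed
print(detected(6.0, attended=False))   # True: very salient content breaks through
```

A hard filter in Broadbent's sense would correspond to a gain of exactly zero on the unattended channel, which could never explain the 8% detection rate or the cocktail party effect.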
Short Term Memory
Short Term Memory (STM) was initially discussed by Atkinson and Shiffrin (1968).[8] Short Term Memory is the link between Sensory Memory and Long Term Memory (LTM). Later, Baddeley proposed a more sophisticated approach and called this interface Working Memory (WM). We will first look at the classical Short Term Memory model and then go on to the concept of Working Memory.
As the name suggests, information is retained in the Short Term Memory for a rather short period of time (15–30 seconds).
Short Term Memory
If we look up a phone number in the phone book and hold it in mind long enough to dial the number, it is stored in Short Term Memory. This is an example of a piece of information which can be remembered for a short period of time. According to George Miller (1956),[9] the capacity of Short Term Memory is five to nine pieces of information ("the magical number seven, plus or minus two"). The term "piece of information", or chunk as it is also called, might strike one as a little vague. All of the following count as chunks: single digits or letters, whole words, or even sentences and the like. Experiments, also done by Miller, have shown that chunking (the process of bundling information) is a useful method for memorizing more than just single items in the ordinary sense. Gobet et al. defined a chunk as "a collection of elements that are strongly associated with one another but are weakly associated with other chunks" (Goldstein, 2005).[10] A very intuitive example of chunking information is the following:
Try to remember the following digits:
• 0 3 1 2 1 9 8 2
But you could also try another strategy to remember these digits:
• 03. 12. 1982.
With this strategy you have bundled eight pieces of information (eight digits) into three pieces, using a date schema to help remember them.
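The date example amounts to a simple re-coding step, which can be written out explicitly (a trivial sketch using the digits above):

```python
digits = "03121982"
chunks = [digits[0:2], digits[2:4], digits[4:8]]  # day, month, year
print(".".join(chunks))  # "03.12.1982": three chunks instead of eight items
```

The eight raw digits sit at the upper edge of Miller's span, while the three date chunks fit comfortably within it; the grouping, not the amount of raw information, is what changes.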
A famous experiment concerning chunking was conducted by Chase and Simon (1973)[11] with novice and expert chess players. When asked to remember certain arrangements of chess pieces on the board, the experts performed significantly better than the novices. However, if the pieces were arranged arbitrarily, i.e. not corresponding to possible game situations, both the experts and the novices performed equally poorly. Experienced chess players do not try to remember the single positions of the pieces in a plausible game situation, but whole bundles of pieces as seen before in games. In implausible game situations this strategy cannot work, which shows that chunking (as done by experienced chess players) enhances performance only in specific memory tasks.
From Short Term Memory to Baddeley’s Working Memory Model
Baddeley and Hitch (1974)[12] drew attention to a problem with the Short Term Memory model. Under certain conditions it seems possible to do two different tasks simultaneously, even though the STM, as proposed by Atkinson and Shiffrin, should be regarded as a single, undivided unit. An example of performing two tasks simultaneously would be the following: a person is asked to memorize four numbers and then to read a text (unrelated to the first task). Most people are able to recall the four numbers correctly after the reading task, so apparently both memorizing numbers and reading a text carefully can be done at the same time. According to Baddeley and Hitch, the result of this experiment indicates that the number task and the reading task are handled by two different components of Short Term Memory. They therefore coined the term "Working Memory" instead of "Short Term Memory", to indicate that this kind of memory enables us to perform several cognitive operations at a time using different parts of the Working Memory.
Working Memory
According to Baddeley, Working Memory is limited in its capacity (the same limitations hold as for Short Term Memory) and the Working Memory is not only capable of storage, but also of the manipulation of incoming information. Working Memory consists of three parts:
• Phonological Loop
• Visuospatial Sketch Pad
• Central Executive
We will consider each module in turn:
The Phonological Loop is responsible for auditory and verbal information, such as phone numbers, people's names, or the general understanding of what other people are talking about. We could roughly say that it is a system specialized for language. It can again be subdivided into an active and a passive part. The storage of information belongs to the passive part, and a stored trace fades after about two seconds if the information is not explicitly rehearsed; rehearsal, the repetition of information which deepens the memory, is the active part of the Phonological Loop. There are three well-known phenomena supporting the idea that the Phonological Loop is specialized for language: the phonological similarity effect, the word-length effect and articulatory suppression. When words that sound similar are confused, we speak of the phonological similarity effect. The word-length effect refers to the fact that it is more difficult to memorize a list of long words than a list of short words. Let us look at the phenomenon of articulatory suppression in a little more detail and consider the following experiment:
Participants are asked to memorize a list of words while saying "the, the, the ..." out loud. With respect to the word-length effect, what we find is that the difference in performance between lists of long and short words is levelled out: both lists are memorized equally poorly. The explanation given by Baddeley et al. (1986),[13] who conducted this experiment, is that the constant repetition of the word "the" prevents the rehearsal of the words on the lists, regardless of whether the list contains long or short words. The findings become even more striking if we compare the memory performance in the following experiment (also conducted by Baddeley and his co-workers in 1986):
Participants were again asked to say "the, the, the ..." out loud, but instead of memorizing words from a list of short or long words, their task was to remember words that were either spoken to them or shown to them in writing on paper. The results indicated that the participants' performance was significantly better when the words were presented visually rather than read out aloud to them. Baddeley concluded from this that performance in a memory task is improved if the two stimuli can be dealt with in distinct components of the Working Memory. In other words, since the reading of words is handled in the Visuospatial Sketch Pad, whereas the saying of "the" belongs to the Phonological Loop, the two tasks do not "block" each other. The rather poor performance for hearing words while speaking can be explained by the fact that both hearing and speaking are dealt with in the Phonological Loop, so the two tasks conflict with each other, decreasing memorization performance.
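The passive store's two-second fade and the blocking effect of articulatory suppression can be summarized in a two-line toy model. The 2 s constant comes from the text above; everything else is an illustrative simplification:

```python
def retained(seconds_since_rehearsal, decay=2.0):
    """Passive store: the trace survives only about 2 s without rehearsal."""
    return seconds_since_rehearsal < decay

print(retained(1.0))  # True: rehearsal refreshed the trace recently enough
print(retained(2.5))  # False: suppression blocked rehearsal, the trace has faded
```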
In the Visuospatial Sketch Pad, visual and spatial information is handled; that is, information about the position and properties of objects can be stored. As we have seen above, performance decreases when two tasks handled by the same component have to be done simultaneously. Let us consider a further example that illustrates this effect. Brandimonte and co-workers (1992)[14] conducted an experiment in which participants were asked to say "la, la, la ..." out loud while mentally subtracting a partial image from a given whole image (the subtraction had to be done mentally because the two images were presented only briefly). The interesting result was that performance, compared to doing the subtraction task alone, not only did not decrease while saying "la, la, la ...", it even increased. According to Brandimonte, this was because the subtraction task is easier when handled in the Visuospatial Sketch Pad rather than in the Phonological Loop (both the given and the resulting pictures could also be named, i.e. verbalized, a task that belongs to the Phonological Loop). Since the subtraction of a partial image from a whole image is easier if done visually, performance increased when participants were forced to perform the task visually, i.e. when they were forced to use the component best suited to the task.

We have seen that the Phonological Loop and the Visuospatial Sketch Pad deal with rather different kinds of information, which must nonetheless interact in order to do certain tasks. The component connecting these two systems is the Central Executive, which co-ordinates the activity of both. Imagine the following situation: you are driving a car, and your friend in the passenger seat has the map and gives you directions. The directions are given verbally, i.e. handled by the Phonological Loop, while the perception of the traffic, street lights, etc. is obviously visual, i.e. dealt with in the Visuospatial Sketch Pad. If you now try to follow the directions given by your friend, it is necessary to combine both kinds of information, the verbal and the visual. This important connection between the two components is made by the Central Executive. It also links Working Memory to Long Term Memory and controls storage in, and retrieval from, Long Term Memory. The process of storage is influenced by how long information is held in Working Memory and by how much it is manipulated: information is stored for a longer time if it is semantically interpreted and related to other information already stored in Long Term Memory. This is called Deep Processing. Purely syntactical processing (e.g. reading a text for typos) is called Shallow Processing. Baddeley also proposes further capabilities of the Central Executive:
• Initiating movement
• Control of conscious attention
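To make the component-interference idea concrete, here is a minimal toy sketch in Python. It is not Baddeley's own formalism: the component assignments follow the text, while the baseline and interference numbers are invented purely for illustration.

```python
# Toy sketch (not Baddeley's actual model): performance on a memory task is
# assumed to drop sharply when a concurrent task occupies the same Working
# Memory component, and only slightly otherwise. All numbers are invented.

COMPONENT = {
    "say 'the'": "phonological_loop",
    "hear words": "phonological_loop",
    "see written words": "visuospatial_sketchpad",
    "subtract partial image": "visuospatial_sketchpad",
}

def predicted_performance(memory_task, concurrent_task, baseline=1.0):
    if COMPONENT[memory_task] == COMPONENT[concurrent_task]:
        return baseline - 0.4   # same component: the tasks block each other
    return baseline - 0.05      # different components: little interference

# Remembering heard words while saying "the": both use the Phonological Loop.
print(predicted_performance("hear words", "say 'the'"))          # 0.6
# Remembering written words while saying "the": different components.
print(predicted_performance("see written words", "say 'the'"))   # 0.95
```

The asymmetry it prints mirrors the experiments above: concurrent articulation hurts memory for heard words far more than memory for written ones.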
Problems which arise with the Working Memory approach
In theory, all information has to pass through Working Memory in order to be stored in Long Term Memory. However, cases have been reported in which patients could form Long Term Memories even though their short-term memory abilities were severely reduced. This clearly poses a problem for the modal model approach. Shallice and Warrington (1970)[15] therefore suggested that there must be another route by which information can enter Long Term Memory besides Working Memory.
Long Term Memory
As the name already suggests, Long Term Memory is the system in which memories are stored for a long time. "Long" in this sense means anything from a few minutes to several years, decades, or even a lifetime.
Similar to Working Memory, Long Term Memory can be subdivided into different types. A major distinction is made between Declarative (conscious) and Implicit (unconscious) Memory. These two subtypes are each split into two components: Episodic and Semantic Memory in the case of Declarative Memory, and Priming Effects and Procedural Memory in the case of Implicit Memory. In contrast to Short Term or Working Memory, the capacity of Long Term Memory is theoretically unlimited. Opinions differ as to whether information remains in Long Term Memory forever or whether information can be deleted. The main argument for the latter view is that apparently not all information ever stored in LTM can be recalled. Theories that regard Long Term Memories as not subject to deletion, however, emphasize a useful distinction between the existence of information and the ability to retrieve or recall that information at a given moment. There are several theories about the "forgetting" of information; these are covered in the section "Forgetting and False Memory".
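The taxonomy just described can be restated compactly in code. The sketch below adds no new claims; the leaf descriptions merely summarize the definitions given in the following sections.

```python
# The taxonomy of Long Term Memory described above, restated as a nested
# dictionary purely for overview; the leaf examples are illustrative.

LONG_TERM_MEMORY = {
    "Declarative (conscious)": {
        "Episodic": "personally experienced events, tied to time and place",
        "Semantic": "facts, concepts, vocabulary, scripts",
    },
    "Implicit (unconscious)": {
        "Priming Effects": "prior exposure influences later responses",
        "Procedural": "highly practiced skills, e.g. tying shoelaces",
    },
}

for subtype, components in LONG_TERM_MEMORY.items():
    print(subtype)
    for name, description in components.items():
        print(f"  {name}: {description}")
```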
Declarative Memory
Let us now consider the two types of Declarative Memory: Episodic and Semantic Memory. Episodic Memory refers to memories of particular events that somebody has experienced (autobiographical information). Typically, those memories are connected to specific times and places. Semantic Memory, on the other hand, refers to knowledge about the world that is not connected to personal events: vocabulary, concepts, numbers and facts are stored in Semantic Memory. Another subtype of memories stored in Semantic Memory is that of so-called Scripts. Scripts are something like blueprints of what happens in a certain situation, for example what usually happens when you visit a restaurant (you get the menu, you order your meal, eat it and pay the bill). Semantic and Episodic Memory are usually closely related to one another, i.e. memory of facts might be enhanced by interaction with memory of personal events and vice versa. For example, the factual question of whether people put vinegar on their chips might be answered positively by remembering the last time you saw someone eating fish and chips. The other way around, good Semantic Memory about certain things, such as football, can contribute to more detailed Episodic Memory of a particular personal event, like watching a football match. A person who barely knows the rules of the game will most probably have a less specific memory of the personal event of watching it than a football expert will.
Implicit Memory
We now turn to the two types of Implicit Memory. As the name suggests, both are usually active where unconscious memories are concerned. This is most evident for Procedural Memory, though it must be said that the distinction between the two types is not as clear-cut as in the case of Declarative Memory, and that both categories are often collapsed into the single category of Procedural Memory. If we do draw the distinction between Priming Effects and Procedural Memory, the latter is responsible for highly skilled activities that can be performed without much conscious effort, such as tying shoelaces or driving a car once those activities have been practiced sufficiently; it can be thought of as a kind of movement plan. As regards the Priming Effect, consider the following experiment conducted by Perfect and Askew (1994):[16]
Participants were asked to read a magazine without paying attention to the advertisements. Afterwards, different advertisements were presented to them; some had occurred in the magazine, others had not. The participants were told to rate the presented advertisements with respect to different criteria, such as how appealing, memorable or eye-catching they were. The result was that, in general, the advertisements that had been in the magazine received higher rankings than those that had not. Additionally, when the participants were asked which advertisements they had actually seen in the magazine, recognition was very poor (on average only 2.8 of the 25 advertisements were recognized). This experiment shows that the participants performed implicit learning (as can be seen from the high rankings of advertisements they had seen before) without being conscious of it (as can be seen from the poor recognition rate). This is an example of the Priming Effect.
Final overview of all different types of memory and their interaction
As important as memory is, the process of forgetting is familiar to everybody.
Therefore one might wonder:
• Why do we forget at all?
• What do we forget?
• How do we forget?
Why do we forget at all?
One answer is something you could call "mental hygiene". It is not useful to remember every little detail of your life and your surroundings; rather, it would be a disadvantage, because you might not be able to recall the important things quickly enough amid an overload of facts in your memory. It is therefore important that unused memories are "cleaned up" so that only relevant information is stored.
What do we forget and how?
There are different theories about how things are forgotten. One theory proposes that the capacity of Long Term Memory is infinite. This would mean that all memories are in fact stored in LTM, but that some information can no longer be recalled due to the factors mentioned in the following paragraphs:
There are two main theories about the causes of forgetting:
• The Trace Decay Theory states that you need to follow a certain path, or trace, to recall a memory. If this path has not been used for some time, the activity of the information decreases, i.e. the trace fades or decays, which leads to difficulty or inability in recalling the memory.
• The Interference Theory proposes that all memories interfere with each other. One distinguishes between two kinds of interferences:
• Proactive Interference:
Earlier memories influence new ones or hinder the formation of new ones.
• Retroactive Interference:
Old memories are changed by new ones, perhaps even so much that the original memory is completely "lost".
• Which of the two theories applies in your opinion?
• Do you agree with a mixture of the two?
In 1885 Hermann Ebbinghaus conducted several self-experiments to study human forgetting. He memorized lists of meaningless syllables, like "WUB" and "ZOF", and over several weeks tried to recall as many as possible after certain intervals of time. He found that forgetting can be described by an almost logarithmic curve, the so-called forgetting curve, which you can see on the left.
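The forgetting curve is nowadays often approximated by an exponential decay, R = e^(-t/S), where R is the fraction retained, t the elapsed time, and S the relative strength of the memory. This is a modern simplification rather than Ebbinghaus's original fit, and the numbers below are purely illustrative:

```python
# A common modern approximation of the forgetting curve (not Ebbinghaus's
# original formula): retention R decays exponentially with time t, where S
# is the relative strength of the memory. S = 30 is an invented value.
import math

def retention(t_hours, strength):
    """R = exp(-t / S): fraction of material still retained after t hours."""
    return math.exp(-t_hours / strength)

for t in (0, 1, 9, 24, 48, 144):   # roughly the kind of intervals Ebbinghaus used
    print(f"after {t:>3} h: {retention(t, strength=30):.2f} retained")
```

Rehearsal can be modeled as increasing S, which flattens the curve; without it, most of the loss happens early, exactly the shape Ebbinghaus observed.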
These theories about forgetting already make clear that memory is not a reliable recorder: it is a construction based on what actually happened plus additional influences, such as other knowledge, experiences, and expectations. Thus false memories are easily created.
In general, people's memories tend to be changed in three systematic directions. These tendencies are called
Biases in memory
One distinguishes between three major types:
• Egocentric Bias
It makes one see oneself in the best possible light.
• Consistency Bias
It makes one perceive one's basic attitudes as remaining consistent over time.
• Positive Change Bias
It makes one perceive things as generally improving.
(For a list of further memory biases see: List of memory biases)
There are moments in our lives that we are sure we will never forget. It is generally believed that memories of events in which we are emotionally involved are retained longer than others, and that we remember every little detail of them. These kinds of memories are called Flashbulb Memories.
Their accuracy, however, is an illusion. The more time passes, the more these memories change, while our feeling of certainty and accuracy increases. Examples of Flashbulb Memories are one's wedding, the birth of one's child, or tragedies like September 11th.
Interesting changes in memory can also occur due to Misleading Postevent Information (MPI): information given by another person after an event can, so to speak, reshape your memory of it in a certain respect. This effect was shown in an experiment by Loftus and Palmer (1974):[17] The subjects watched a film containing several car accidents. Afterwards they were divided into three groups, each questioned differently. The control group was not asked about the speed of the cars at all; in the other groups, questions with a particular key verb were posed: one group was asked how fast the cars were going when they "hit" each other, the other how fast they were going when they "smashed" into each other. One week later all participants were asked whether they had seen broken glass in the films. Both the estimates of speed and the number of people claiming to have seen broken glass increased steadily from the control group to the "smashed" group.
Based on this Misinformation Effect the Memory Impairment Hypothesis was proposed.
This hypothesis states that suggested and more detailed information received after the actual memory was formed can replace the old memory.
Keeping such misleading information in mind, one can imagine how easily eyewitness testimony can be manipulated, purposely or accidentally. Depending on which questions the witnesses are asked, they might later remember seeing, for example, a weapon or not.
These kinds of changes in memory are present in everyone on a daily basis. But there are other cases: people with a brain lesion sometimes suffer from Confabulation. They construct absurd and incomplete memories that can even contradict other memories or what they know. Although such people might even be aware of the absurdity of their memories, they remain firmly convinced of them. (See Helen Phillips' article "Mind fiction: Why your brain tells tall tales".)
Repressed and Recovered Memories
If one cannot remember an event or detail, this does not mean that the memory is completely lost. Instead, one would say that these memories are repressed, meaning that they cannot easily be remembered. The process of remembering them is called recovery.
Recovery of a repressed memory usually occurs due to a retrieval cue: an object or a scene that reminds one of something which happened long ago.
Traumatic events, which happened during childhood for example, can be recovered with the help of a therapist. This way, perpetrators have been brought to trial after decades.
Still, the correctness of the “recovered” memory is not guaranteed: as we know, memory is not reliable and if the occurrence of an event is suggestible one might produce a false memory.
Look at the illustration to the right to be able to relate to these processes.
How did the memory for an event become what it is?
Beyond these everyday distortions, errors in memory and amnesia can result from damage to the brain. The following paragraphs present the most important brain regions enabling memory and describe the effects of damage to them.
In this section, we will first consider how information is stored in synapses and then discuss two regions of the brain that are mainly involved in forming new memories, namely the amygdala and the hippocampus. To show what effects memory diseases can have and how they are classified, we will discuss a case study of amnesia and two other common examples of amnesic diseases: Korsakoff's amnesia and Alzheimer's disease.
Information storage
The idea that physiological changes at synapses happen during learning and memory was first introduced by Donald Hebb.[18] It has in fact been shown that activity at a synapse leads to structural changes there and to enhanced firing in the postsynaptic neuron. Since this enhanced firing lasts for several days or weeks, we speak of Long Term Potentiation (LTP). During this process, existing synaptic proteins are altered and new proteins are synthesized at the modified synapse. What does all this have to do with memory? It has been discovered that LTP is most easily generated in regions of the brain involved in learning and memory, especially the hippocampus, about which we will say more later. Hebb also proposed that not just a single synapse between two neurons is involved: a particular group of neurons becomes more likely to fire together, so that an experience is represented by the firing of this group. This is the principle that "what fires together wires together".
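Hebb's principle is often written as a simple weight-update rule, Δw = η · x_pre · x_post. The sketch below is the generic textbook rule, not a model of the actual biochemistry of LTP; the learning rate and activity values are invented for illustration.

```python
# Minimal sketch of a Hebbian weight update (a generic textbook rule,
# not the biochemistry of LTP): a synapse is strengthened whenever the
# pre- and postsynaptic neurons are active at the same time.

def hebbian_update(w, pre, post, eta=0.1):
    """delta_w = eta * pre * post; co-active neurons strengthen the synapse."""
    return w + eta * pre * post

w = 0.5  # initial synaptic weight (arbitrary)
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:  # activity pairs
    w = hebbian_update(w, pre, post)
    print(f"pre={pre}, post={post} -> w={w:.2f}")
# Only the co-active trials (1, 1) increase w: "what fires together
# wires together".
```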
Amygdala
The amygdala is involved in the modulation of memory consolidation.
Following any learning event, the Long Term Memory for the event is not formed instantaneously. Rather, information regarding the event is slowly assimilated into long term storage over time, a process referred to as memory consolidation, until it reaches a relatively permanent state. During the consolidation period, memory can be modulated. In particular, it appears that emotional arousal following a learning event influences the strength of the subsequent memory for that event: greater emotional arousal enhances a person's retention of the event. Experiments have shown that administering stress hormones to individuals immediately after they learn something enhances their retention when they are tested two weeks later.

The amygdala, especially its basolateral nuclei, is involved in mediating the effects of emotional arousal on the strength of memory for an event. James McGaugh and colleagues trained animals on a variety of learning tasks and found that drugs injected into the amygdala after training affect the animals' subsequent retention of the task. These tasks include basic Pavlovian tasks such as inhibitory avoidance, where a rat learns to associate a mild footshock with a particular compartment of an apparatus, and more complex tasks such as spatial or cued water mazes, where a rat learns to swim to a platform to escape the water. When a drug that activated the amygdala was injected after training, the animals had better memory for the task; when a drug that inactivated the amygdala was injected, memory for the task was impaired.

Despite the importance of the amygdala in modulating memory consolidation, however, learning can occur without it, although such learning appears to be impaired, as in the fear conditioning impairments that follow amygdala damage. Evidence from work with humans indicates a similar role of the amygdala: amygdala activity at the time of encoding information correlates with retention of that information. This correlation, however, depends on the relative "emotionality" of the information. More emotionally arousing information increases amygdalar activity, and that activity correlates with retention.
Hippocampus
Psychologists and neuroscientists dispute the precise role of the hippocampus, but generally agree that it plays an essential role in the formation of new memories about experienced events (Episodic or Autobiographical Memory).
Some researchers prefer to consider the hippocampus as part of a larger medial temporal lobe memory system responsible for general declarative memory (memories that can be explicitly verbalized — these would include, for example, memory for facts in addition to episodic memory). Some evidence supports the idea that, although these forms of memory often last a lifetime, the hippocampus ceases to play a crucial role in the retention of the memory after a period of consolidation. Damage to the hippocampus usually results in profound difficulties in forming new memories (anterograde amnesia), and normally also affects access to memories prior to the damage (retrograde amnesia). Although the retrograde effect normally extends some years prior to the brain damage, in some cases older memories remain intact - this sparing of older memories leads to the idea that consolidation over time involves the transfer of memories out of the hippocampus to other parts of the brain. However, researchers have difficulties in testing the sparing of older memories and, in some cases of retrograde amnesia, the sparing appears to affect memories formed decades before the damage to the hippocampus occurred, so its role in maintaining these older memories remains controversial.
Amnesia
As already mentioned in the preceding section about the hippocampus, there are two types of amnesia: retrograde and anterograde amnesia.
Different types of Amnesia
Amnesia can occur after damage to a number of regions in the medial temporal lobe and their surrounding structures. The patient H.M. is probably the best known amnesic patient. Removing his medial temporal lobes, including the hippocampus, was intended to treat his severe epilepsy. After the surgery, H.M. was no longer able to remember things which had happened after his 16th birthday, eleven years before the operation; given the definitions above, he suffered from retrograde amnesia. Because his hippocampus had been removed, he was also unable to learn new information, so he suffered from anterograde amnesia as well. His Implicit Memory, however, was still working: in procedural memory tests, for example, he still performed well. When he was asked to draw a star on a piece of paper shown to him in a mirror, he initially performed as badly as any other participant, but after some weeks his performance improved, even though he could not remember having done the task many times before. Thus H.M.'s Declarative Memory showed severe deficits while his Implicit Memory remained intact.

Another quite common cause of amnesia is Korsakoff's syndrome, also called Korsakoff's amnesia. It is usually elicited by long-term alcoholism, via a prolonged deficiency of vitamin B1, and is associated with pathology of the midline diencephalon, including the dorsomedial thalamus.

Alzheimer's disease is probably the best known type of amnesia because it is the most common in our society: over 40 percent of people older than 80 are affected. It is a neurodegenerative disease, and the brain region most affected is the entorhinal cortex, which forms the main input and output of the hippocampus, so damage here is mostly severe. Knowing that the hippocampus is especially involved in forming new memories, one can already guess that patients have difficulties learning new information. In late stages of Alzheimer's disease, retrograde amnesia and impairments of other cognitive abilities, which we will not discuss here, may also occur.
This figure shows the brain structures which are involved in forming new memories
Final checklist of what you should keep in mind
1. Why does memory exist?
2. What is sensory memory?
3. What is the distinction between Short Term memory and Working Memory?
4. What is Long Term Memory and which brain area(s) are involved in forming new memories?
5. Remember the main results of the theories discussed (for example: What does the Filter Theory show?)
6. Don’t forget why we forget!
1. Quotation from www.wwnorton.com.
2. Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. Spence & J. Spence (Eds.), The psychology of learning and motivation (Vol. 2). New York: Academic Press.
3. Darwin, C. J., Turvey, M. T., & Crowder, R. G. (1972). An auditory analogue of the Sperling partial report procedure: Evidence for brief auditory storage. Cognitive Psychology, 3, 255-267.
4. Cherry, E. C. (1953). Some experiments on the recognition of speech with one and with two ears. Journal of the Acoustical Society of America, 25, 975-979.
5. Broadbent, D. E. (1958). Perception and communication. New York: Pergamon.
6. Treisman, A. M. (1964). Monitoring and storage of irrelevant messages and selective attention. Journal of Verbal Learning and Verbal Behavior, 3, 449-459.
7. Deutsch, J. A., & Deutsch, D. (1963). Attention: Some theoretical considerations. Psychological Review, 70, 80-90.
8. Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. Spence & J. Spence (Eds.), The psychology of learning and motivation (Vol. 2). New York: Academic Press.
9. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.
10. Goldstein, E. B. (2005). Cognitive Psychology. London: Thomson Learning, page 157.
11. Chase, W. G., & Simon, H. A. (1973). The mind's eye in chess. In W. G. Chase (Ed.), Visual information processing. New York: Academic Press.
12. Baddeley, A. D., & Hitch, G. (1974). Working memory. In G. A. Bower (Ed.), Recent advances in learning and motivation (Vol. 8). New York: Academic Press.
13. Baddeley, A. D. (1986). Working Memory. Oxford: Oxford University Press.
14. Brandimonte, M. A., Hitch, G. J., & Bishop, D. V. M. (1992). Influence of short-term memory codes on visual image processing: Evidence from image transformation tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 157-165.
15. Shallice, T., & Warrington, E. K. (1970). Independent functioning of verbal memory stores: A neuropsychological study. Quarterly Journal of Experimental Psychology, 22, 261-273.
16. Perfect, T. J., & Askew, C. (1994). Print adverts: Not remembered but memorable. Applied Cognitive Psychology, 8, 693-703.
17. Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning and Verbal Behavior, 13, 585-589.
18. Hebb, D. O. (1949). The organization of behavior. New York: Wiley.
Everyday memory - Eyewitness testimony
Introduction
Witness psychology is the study of the human being as an observer and reporter of events. It concerns how accurately and in what detail we register what is happening, how well we remember what we observed, what causes us to forget or misremember, and our ability to assess the reliability and credibility of others' accounts. It is the study of observation and memory for large and small events in life, from everyday trivialities to the dramatic and traumatic events that shake our lives (Magnussen, 2010).
Basic concepts
The eyewitness identification literature has developed a number of definitions and concepts that require explanation. Each definition and concept is described below.
A lineup is a procedure in which a criminal suspect (or a picture of the suspect) is placed among other people (or pictures of other people) and shown to an eyewitness to see whether the witness will identify the suspect as the culprit in question. The term suspect should not be confused with the term culprit: a suspect might or might not be the culprit; a suspect is merely suspected of being the culprit (Wells & Olson, 2003).
Fillers are people in the lineup who are not suspects. Fillers, sometimes called foils or distractors, are known-innocent members of the lineup; identification of a filler therefore does not result in charges being brought against that filler. A culprit-absent lineup is one in which an innocent suspect is embedded among fillers, and a culprit-present lineup is one in which a guilty suspect (the culprit) is embedded among fillers. The primary literature sometimes calls these target-absent and target-present lineups (Wells & Olson, 2003).
A simultaneous lineup is one in which all lineup members are presented to the eyewitness at once and is the most common lineup procedure in use by law enforcement. A sequential lineup, on the other hand, is one in which the witness is shown only one person at a time but with the expectation that there are several lineup members to be shown (Wells & Olson, 2003).
A lineup's functional size is the number of lineup members who are "viable" choices for the eyewitness. For example, if the eyewitness described the culprit as a tall male with dark hair and the suspect is the only lineup member who is tall with dark hair, then the lineup's functional size would be 1.0 even if there were 10 fillers. Today, functional size is used generically to mean the number of lineup members who fit the eyewitness's description of the culprit (Wells & Olson, 2003).
Mock witnesses are people who did not actually witness the crime but are asked to pick a person from the lineup based solely on the eyewitness's verbal description of the culprit: they are shown the lineup and asked to indicate who they think the offender is. Mock witnesses are used to test the functional size of a lineup (Wells & Olson, 2003). A sketch of how such a test can be scored is given below.
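A standard way to score a mock-witness test, stated here as an assumption since the text does not give the formula, is to divide the number of mock witnesses by the number who pick the suspect; a fair six-person lineup should then score near 6, and a biased one near 1.

```python
# Hypothetical illustration of estimating a lineup's functional size from
# mock-witness data; the choice data below are invented.

def functional_size(mock_choices, suspect_position):
    """(Number of mock witnesses) / (number who picked the suspect)."""
    picks_of_suspect = sum(1 for c in mock_choices if c == suspect_position)
    if picks_of_suspect == 0:
        return float("inf")  # no mock witness chose the suspect
    return len(mock_choices) / picks_of_suspect

# 30 mock witnesses pick from a 6-person lineup (positions 1-6);
# the suspect stands at position 3 and is picked 19 times.
choices = [3] * 19 + [1, 2, 2, 4, 4, 5, 5, 6, 6, 6, 1]
print(functional_size(choices, suspect_position=3))  # ~1.58: a biased lineup
```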
The diagnosticity of suspect identification is the ratio of the accurate identification rate with a culprit-present lineup to the inaccurate identification rate with a culprit-absent lineup. The diagnosticity of "not there" is the ratio of "not there" response rates with culprit-absent lineups to "not there" response rates with culprit-present lineups. The diagnosticity of filler identifications is the ratio of filler identification rates with culprit-absent lineups to filler identification rates with culprit-present lineups (Wells & Olson, 2003). The three ratios are illustrated in the sketch below.
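The three ratios can be computed in a few lines; all identification rates in this sketch are invented for the example.

```python
# Illustrative sketch of the three diagnosticity ratios defined above,
# using invented identification rates from hypothetical experiments.

def ratio(a, b):
    return a / b if b else float("inf")

# Hypothetical outcome rates (proportions of witnesses):
culprit_present = {"suspect_id": 0.54, "filler_id": 0.21, "not_there": 0.25}
culprit_absent  = {"suspect_id": 0.12, "filler_id": 0.38, "not_there": 0.50}

# Diagnosticity of suspect identification:
# accurate IDs (culprit present) vs. mistaken IDs (culprit absent).
print(ratio(culprit_present["suspect_id"], culprit_absent["suspect_id"]))  # 4.5

# Diagnosticity of "not there" responses.
print(ratio(culprit_absent["not_there"], culprit_present["not_there"]))    # 2.0

# Diagnosticity of filler identifications.
print(ratio(culprit_absent["filler_id"], culprit_present["filler_id"]))    # ~1.81
```

The higher the first ratio, the more informative a suspect identification is about actual guilt.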
Among variables that affect eyewitness identification accuracy, a system variable is one that is, or could be, under the control of the criminal justice system, while an estimator variable is one that is not. Estimator variables include lighting conditions at the time of witnessing and whether the witness and culprit are of the same or of different races. System variables include the instructions given to eyewitnesses prior to viewing a lineup and the functional size of a lineup. The distinction between estimator and system variables has assumed great significance in the eyewitness identification literature since it was introduced in the late 1970s. In large part, the prominence of this distinction attests to the applied nature of the eyewitness identification literature. Whereas the literature on estimator variables permits some degree of postdiction that might be useful for assessing the chances of mistaken identification after the fact, the system variable literature permits specification of how eyewitness identification errors might be prevented in the first place (Wells & Olson, 2003).
History and Reliability
The criminal justice system relies heavily on eyewitness identification for investigating and prosecuting crimes. Psychology has built the only scientific literature on eyewitness identification and has warned the justice system of problems with eyewitness identification evidence. Recent DNA exoneration cases have corroborated the warnings of eyewitness identification researchers by showing that mistaken eyewitness identification was the largest single factor contributing to the conviction of innocent people (Wells & Olson, 2003).
Psychological researchers who began programs in the 1970s, however, have consistently articulated concerns about the accuracy of eyewitness identification. Using various methodologies, such as filmed events and live staged crimes, eyewitness researchers have noted that mistaken identification rates can be surprisingly high and that eyewitnesses often express certainty when they mistakenly select someone from a lineup. Although their findings were quite compelling to the researchers themselves, it was not until the late 1990s that criminal justice personnel began taking the research seriously. This change in attitude about the psychological literature on eyewitness identification arose primarily from the development of forensic DNA tests in the 1990s (Wells & Olson, 2003). More than 100 people who were convicted prior to the advent of forensic DNA have now been exonerated by DNA tests, and more than 75% of these people were victims of mistaken eyewitness identification. The apparent prescience of the psychological literature regarding problems with eyewitness identification has given eyewitness identification research a rising prominence in the criminal justice system. Because most crimes do not include DNA-rich biological traces, reliance on eyewitness identification for solving crimes has not been significantly diminished by the development of forensic DNA tests. The vast criminal justice system itself has never conducted an experiment on eyewitness identification (Wells & Olson, 2003).
Research
The experimental method has dominated the eyewitness literature, and most of the experiments are lab based. Lab-based experimental methods for studying eyewitness issues have strengths and weaknesses. The primary strength of experimental methods is that they are proficient at establishing cause-effect relations. This is especially important for research on system variables, because one needs to know whether a particular system manipulation actually causes better or worse performance. In the real world, many variables can operate at the same time and in interaction with one another (Wells, Memon & Penrod, 2006).
Multicollinearity can be quite a problem in archival/field research, because it can be very difficult to sort out which (correlated) variables are really responsible for observed effects. The control of variables that is possible in experimental research can bring clarity to causal relationships that are obscured in archival research. For example, experiments on stress during witnessing have shown, quite compellingly, that stress interferes with the ability of eyewitnesses to identify a central person in a stressful situation. However, when Yuille and Cutshall (1986) studied multiple witnesses to an actual shooting, they found that those who reported higher stress had better memories for details than did those who reported lower stress. Why the different results? In the experimental setting, stress was manipulated while other factors were held constant; in the actual shooting, those who were closer to the incident reported higher levels of stress (presumably because of their proximity) but also had a better view. Thus, in the actual case, stress and view covaried. The experimental method is not well suited to postdiction with estimator variables; that is, there may be limits to generalizing from experiments to actual cases. One reason is that the levels of estimator variables in experiments are fixed and not necessarily fully representative of the values observed in actual cases. In addition, it is not possible to include all interesting and plausible interactions among variables in any single experiment (or even in a modest number of experiments). Clearly, generalizations to actual cases are best undertaken on the basis of a substantial body of experimental research conducted across a wide variety of conditions and employing a wide variety of variables. Nevertheless, the literature is largely based on experiments due to a clear preference by eyewitness researchers to learn about cause and effect. Furthermore, "ground truth" (the actual facts of the witnessed event) is readily established in experiments, because the witnessed events are creations of the experimenters. This kind of ground truth is difficult, if not impossible, to establish when analyzing actual cases (Wells et al., 2006).
Memory
The world is complex. Any natural situation or scene contains infinitely more physical and social information than the brain is able to detect, and the brain's ability to record information is limited. In studies of immediate memory for strings of numbers read once, it turns out that most people begin to go wrong when the number of single digits exceeds five (Nordby, Raanaas & Magnussen, 2002). The limits of what humans are capable of processing lead to an automatic selection of information. This selection is partially controlled by external factors, the factors in our environment that capture our attention (Magnussen, 2010). In witness psychology we often talk about weapon focus, in which eyewitnesses attend to the weapon, which reduces their memory for other information (Eysenck & Keane, 2010). The selection of information in a situation of cognitive overload is also governed by psychological factors, the characteristics of the person who is observing: the emotional state and the explicit and implicit expectations of what will happen. Psychologists call such expectations cognitive schemas. Cognitive schemas form a sort of hypothesis or map of the world based on past experiences. These hypotheses or mental maps of the world determine what information the brain selects, how it interprets it, and whether it will be remembered. When information is uncertain or ambiguous, the psychological factors are strong (Magnussen, 2010).
Eyewitness testimony can be distorted via confirmation bias, i.e., event memory is influenced by the observer's expectations. In a study by Lindholm and Christianson (1998), Swedish and immigrant students saw a videotaped simulated robbery in which the perpetrator seriously wounded a cashier with a knife. After watching the video, participants were shown color photographs of eight men: four Swedes and four immigrants. Both Swedish and immigrant participants were twice as likely to select an innocent immigrant as an innocent Swede. Immigrants are overrepresented in Swedish crime statistics, and this influenced participants' expectations concerning the likely ethnicity of the criminal (Eysenck & Keane, 2010).
Bartlett (1932) explained why our memory is influenced by our expectations. He argued that we possess numerous schemas, packets of knowledge stored in long-term memory, and that these schemas lead us to form certain expectations; they can distort our memory by causing us to reconstruct an event's details based on "what must have been true" (Eysenck & Keane, 2010). What information we select, and how we interpret it, is thus partially controlled by cognitive schemas. Many cognitive schemas are generalized and to a large extent automatic and non-conscious, such as the expectation that the world around us is stable and does not change spontaneously. Such generalized expectations are basically economical, ensuring that we do not have to devote much energy to monitoring the routine events of daily life; but they also mean that in certain situations we may overlook important but unexpected information, or supplement our memory with details that are consistent with the schema but do not actually exist (Magnussen, 2010).
Estimator variables
First, estimator variables are central to our understanding of when and why eyewitnesses are most likely to make errors, and informing police, prosecutors, judges, and juries about the conditions that can affect the accuracy of an eyewitness account is important. Second, our understanding of the importance of any given system variable is, at least at the extremes, dependent on the levels of the estimator variables. Consider a case in which a victim eyewitness is abducted and held for 48 hours by an unmasked perpetrator; the witness has repeated viewings of the perpetrator, lighting is good, and so on. We have every reason to believe that this witness has a deep and lasting memory of the perpetrator's face. Then, within hours of being released, the eyewitness views a lineup. Under these conditions, we would not expect system variables to have much impact. For instance, a lineup that is biased against an innocent suspect is not likely to lead this eyewitness to choose the innocent person, because her memory is too strong to be influenced by lineup bias. On the other hand, when an eyewitness's memory is weaker, system variables have a stronger impact. Psychologists have investigated the effects on identification accuracy of a large number of estimator variables: witness, crime, and perpetrator characteristics. Here we recount findings concerning several variables that have received significant research attention and achieved high levels of consensus among experts (based on items in a survey by Kassin, Tubb, Hosch, & Memon, 2001) or have been the subject of interesting recent research (Wells et al., 2006).
References
Eysenck, M. W., & Keane, M. T. (2010). Cognitive Psychology: A Student's Handbook (6th ed.). New York: Psychology Press.
Magnussen, S. (2010). Vitnepsykologi: Pålitelighet og troverdighet i dagligliv og rettssal. Oslo: Abstrakt forlag.
Nordby, K., Raanaas, R. K., & Magnussen, S. (2002). The expanding telephone number. I: Keying briefly presented multiple-digit numbers. Behaviour & Information Technology, 21, 27-38.
Wells, G. L., Memon, A., & Penrod, S. D. (2006). Eyewitness evidence: Improving its probative value. Psychological Science in the Public Interest, 7(2), 45-75.
Wells, G. L., & Olson, E. A. (2003). Eyewitness testimony. Annual Review of Psychology, 54, 277-295. doi:10.1146/annurev.psych.54.101601.145028
Introduction
"You need memory to keep track of the flow of conversation" [1]
The interaction between memory and language may not seem very obvious at first, but it is necessary for holding a proper conversation. Memory is the component for storing and retrieving information: it lets us remember both what has just been said and information heard earlier that might be important for the conversation. Language, in turn, serves to follow the conversational partner, to understand what he says, and to reply to him in an understandable way.
This is not a simple process which can be learned within days. In childhood everybody learns to communicate, a process lasting for years.
So how does this work? Possible answers to the question of language acquisition are presented in this chapter. The section also provides an insight into malfunctions in the brain. Concerning dysfunctions, the following questions arise: How can the system of language and memory be damaged? What causes language impairments? How do the impairments become apparent? These are some of the topics dealt with in this chapter.
Up to now, the full depth of memory and language could not be explored, because the available research resources are insufficient. Moreover, the connection between memory and language mostly becomes apparent when an impairment arises: by comparing a healthy brain with an impaired one, certain brain areas can be explored, and it is then possible to find out what function a brain area has and how a dysfunction manifests itself.
7.02: Basics
Memory
Memory is the ability of the nervous system to receive and keep information. It is divided into three parts: Sensory memory, Short-term memory and Long-term memory. Sensory memory holds information for milliseconds and is separated into two components. The iconic memory is responsible for visual information, whereas auditory information is processed in the echoic memory. Short-term memory keeps information for at most half a minute. Long-term memory, which can store information over decades, consists of the conscious explicit and the unconscious implicit memory. Explicit memory, also known as declarative, can be subdivided into semantic and episodic memory. Procedural memory and priming effects are components of the implicit memory.
Brain regions involved in memory:
• Frontal lobe, parietal lobe, dorsolateral prefrontal cortex: Short-term Memory / Working Memory
• Hippocampus: transfer from Short-term Memory to Long-term Memory
• Medial temporal lobe (neocortex): Declarative Memory
• Amygdala, cerebellum: Procedural Memory
For detailed information see chapter Memory
Language
Language is an essential system of communication which highly influences our lives. It uses sounds, symbols and gestures for the purpose of communication. The visual and auditory systems of the human body are the entrance pathways by which language enters the brain; the motor system, responsible for speech and writing production, serves as its exit pathway. The nature of language lies in the brain processes between the sensory and motor systems, especially between visual or auditory input and written or spoken output. The biggest part of our knowledge about brain mechanisms for language is derived from studies of language deficits resulting from brain damage. Even though there are about 10,000 different languages and dialects in the world, all of them express the subtleties of human experience and emotion.
For detailed information see chapters Comprehension and Neuroscience of Comprehension
A phenomenon which occurs daily and in everybody's life is the acquisition of language. However, scientists are not yet able to explain the underlying processes in detail or to define the point at which language acquisition commences, even though they agree that it happens long before the first word is spoken.
Theorists like Catherine Snow and Michael Tomasello think that the acquisition of language skills begins at birth; others claim that it commences already in the womb. Newborns are not able to speak, yet their babbling activates the brain regions later involved in speech production.
The ability to understand the meaning of words begins before the first birthday, even though the words cannot yet be pronounced. The phonological representation of words in memory changes between the stage of repetitive syllable-babbling and the one-word stage. At first children associate words with concrete objects; this is followed by an extension to the whole class of objects. After a period of overgeneralisation, the child's system of concepts approaches the adult one. To test the assumption that children understand the meaning of words this early, researchers at MIT let children watch two video clips of "Sesame Street" while they simultaneously heard the sentences "Cookie Monster is tickling Big Bird" or "Big Bird is tickling Cookie Monster". The babies consistently looked more at the video corresponding to the sentence, which is evidence that they comprehend sentences more complex than those they can produce during the one-word period.
The different stages of speech production are listed below.
• 6th month – Stage of babbling: systematic combining of vowels and consonants.
• 7th–10th month – Stage of repetitive syllable-babbling: a higher proportion of consonants, each paired with a vowel into monosyllables (da, ma, ga) or reduplicated babbling (mama, dada, gaga).
• 11th–12th month – Stage of variegated babbling: combination of different consonants and vowels (bada, dadu).
• 12th month – Usage of first words, according to John Locke (1995): prephonological, consonant-vowel(-consonant) forms (car, hat).
Locke's date for the first word is only a general tendency. Other researchers, like the German psychologist Charlotte Bühler (1928), place the first word around the tenth month, whereas Elizabeth Bates et al. (1992) proposed a period between eleven and thirteen months. The one-word stage described above can last from two to ten months. By the second year of life a vocabulary of about 50 words evolves, four times more than the child actually utters; two thirds of the language produced is still babbling. After this stage the vocabulary increases rapidly: the so-called vocabulary spurt causes an increment of about one word every two hours. From now on, children learn to hold fluent conversations with a simple grammar that still contains errors.
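A quick back-of-the-envelope calculation shows what a rate of one word every two hours implies; the waking-hours figure is an assumption for illustration.

```python
# Rough arithmetic for the vocabulary spurt quoted above: one new word
# about every two waking hours. The 14 waking hours are an assumed value.
waking_hours_per_day = 14
words_per_day = waking_hours_per_day / 2
print(words_per_day)          # 7.0 new words per day
print(words_per_day * 365)    # 2555.0 new words per year, roughly
```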
As you can see in the following example, the length of the sentences and the grammatical output changes a lot. While raising his son, Knut keeps a tally of his son’s speech production, to see how fast the language develops:
Speech diary of Knut’s son Andy:
(Year; Month)
2;3: Play checkers. Big drum. I got horn. A bunny rabbit walk.
2;4: See marching bear go? Screw part machine. That busy bulldozer truck.
2;5: Now put boots on. Where wrench go? Mommy talking bout lady. What that paper clip doing?
2;6: Write a piece a paper. What that egg doing? I lost a shoe. No, I don't want to sit seat.
2;7: Where piece a paper go? Ursula has a boot on. Going to see kitten. Put the cigarette down. Dropped a rubber band. Shadow has hat just like that. Rintintin don't fly, Mommy.
2;8: Let me get down with the boots on. Don't be afraid a horses. How tiger be so healthy and fly like kite? Joshua throw like a penguin.
2;9: Where Mommy keep her pocket book? Show you something funny. Just like turtle make mud pie.
2;10: Look at that train Ursula brought. I simply don't want put in chair. You don't have paper. Do you want little bit, Cromer? I can't wear it tomorrow.
2;11: That birdie hopping by Missouri in bag? Do want some pie on your face? Why you mixing baby chocolate? I finish drinking all up down my throat. I said why not you coming in? Look at that piece a paper and tell it. We going turn light on so you can't see.
3;0: I going come in fourteen minutes. I going wear that to wedding. I see what happens. I have to save them now. Those are not strong mens. They are going sleep in wintertime. You dress me up like a baby elephant.
3;1: I like to play with something else. You know how to put it back together. I gon' make it like a rocket to blast off with. I put another one on the floor. You went to Boston University? You want to give me some carrots and some beans? Press the button and catch it, sir. I want some other peanuts. Why you put the pacifier in his mouth? Doggies like to climb up.
3;2: So it can't be cleaned? I broke my racing car. Do you know the light wents off? What happened to the bridge? When it's got a flat tire it's need a go to the station. I dream sometimes. I'm going to mail this so the letter can't come off. I want to have some espresso. The sun is not too bright. Can I have some sugar? Can I put my head in the mailbox so the mailman can know where I are and put me in the mailbox? Can I keep the screwdriver just like a carpenter keep the screwdriver? [2]
Obviously children are able to conjugate verbs and decline nouns using regular rules. Producing irregular forms is more difficult, because they have to be learnt and stored in Long-term memory one by one. It is not the repetition of words but the observation of speech that is important for acquiring grammatical skills. Around the third birthday the complexity of language increases exponentially and reaches a rate of about 1000 syntactic types.
Another interesting field concerning the correlation between memory and language is multilingualism. For children raised bilingually, the question arises how the two languages are separated or combined in the brain. Scientists assume that lexical information in particular is stored independently for each language, whereas the semantic and syntactic levels may be unified. Experiments have shown that bilinguals have a larger memory span when they listen to words not only in one but in both languages.
Reading about disorders concerning memory and language, one might think of amnesia or aphasia, both common diseases of the brain regions concerned. But in dealing with the correlation of memory and language, we want to introduce only diseases which involve loss of memory as well as loss of language.
Alzheimer's Disease
Discovered in 1906 by Alois Alzheimer, this disease is the most common type of dementia. Alzheimer's is characterised by symptoms like loss of memory, loss of language skills and impairments in skilled movements. Additionally, other cognitive functions connected to the frontal and temporal lobes, such as planning or decision-making, can be reduced. The correlation between memory and language is very important in this context, because the two work together to sustain conversations; when both are impaired, communication becomes a difficult task. People with Alzheimer's have reduced working memory capacity, so they cannot keep in mind all of the information they heard during a conversation. They also forget the words they need to denote items and their desires, and to understand what they are told. Affected persons also change their behaviour: they become anxious, suspicious or restless, and they may have delusions or hallucinations.

In the early stages of the disorder, sick persons become less energetic or suffer small losses of memory, but they are still able to dress themselves, to eat and to communicate. The middle stages of the disease are characterised by problems of navigation and orientation: patients do not find their way home or even forget where they live. In the late stages, the patients' ability to speak, read and write decreases enormously. They are no longer able to name objects or to talk about their feelings and desires, so their family and the nursing staff have great problems finding out what the patients want to tell them. In the end state, the sick persons show no response or reaction; they lie in bed, have to be fed and are totally helpless. Most die four to six years after diagnosis, although the disease can last from three to twenty years. One reason for this uncertainty is the difficulty of distinguishing Alzheimer's from other related disorders: only after death, when the shrinkage of the brain can be seen, can one definitely say that a person was affected by Alzheimer's disease.
"Genetic Science Learning Center, University of Utah, http://learn.genetics.utah.edu/ A comparison of the two brains:
In the Alzheimer brain:
• The cortex shrivels up, damaging areas involved in thinking, planning and remembering.
• Shrinkage is especially severe in the hippocampus, an area of the cortex that plays a key role in the formation of new memories.
• Ventricles (fluid-filled spaces within the brain) grow larger.
Scientists say that long before the first symptoms appear, the nerve cells that store and retrieve information have already begun to degenerate. There are two theories explaining the causes of Alzheimer's disease. The first describes plaques, protein fragments that damage the connections between nerve cells. They arise when small fragments are released from nerve cell walls and associate with other fragments from outside the cell. These combined fragments, called plaques, attach to the outside of nerve cells and destroy the connections; the nerve cells then begin to die because they are no longer provided with nutrients, and stimuli are no longer transmitted. The second theory holds that tangles limit the functions of nerve cells. Tangles are twisted fibers of another protein that form inside brain cells and destroy the vital cell-transport system, which is made of proteins. Scientists have not yet determined the exact role of plaques and tangles.
"Genetic Science Learning Center, University of Utah, http://learn.genetics.utah.edu/
- Alzheimer tissue has many fewer nerve cells and synapses than a healthy brain.
- Plaques, abnormal clusters of protein fragments, build up between nerve cells.
Dead and dying nerve cells contain tangles, which are made up of twisted fibers of another protein.
Alzheimer’s progress is separated into three stages: In the early stages (1), tangles and plaques begin to evolve in brain areas where learning, memory, thinking and planning takes place. This may begin 20 years before diagnosis. In the middle stages (2), plaques and tangles start to spread to areas of speaking and understanding speech. Also the sense of where your body is in relation to objects around you is reduced. This may last from 2–10 years. In advanced Alzheimer’s disease (3), most of the cortex is damaged, so that the brain starts to shrink seriously and cells begin to die. The people lose their ability to speak and communicate and they do not recognise their family or people they know. This stage may generally last from one to five years.
Today, more than 18 million people suffer from Alzheimer's disease; in Germany alone there are nearly 800,000. The number of affected persons is increasing enormously. Alzheimer's is often associated only with old people: five percent of people older than 65 years and fifteen to twenty percent of people older than 80 years suffer from it. But people in their late thirties and forties can also be affected by the heritable form of the disease. The probability of suffering from Alzheimer's when one's parents have the typical old-age form is not very high.
Autism
Autism is a neurodevelopmental condition which causes disorders in several fields. For the last decade, autism has been studied in the light of Autistic Spectrum Disorders, which include mild and severe autism as well as Asperger's syndrome. Individuals with autism, for example, have restricted perception and problems in information processing. The intellectual giftedness often associated with autism holds only for a minority of people with autism; the majority possess normal intelligence or are below average.
There are different types of autism, including:
• Asperger’s syndrome – usually arising at the age of three
• infantile autism – arising between nine and eleven months after birth
The latter is important because it shows the correlation between memory and language in the children's behaviour very clearly. Two types of infantile autism are low functioning autism (LFA) and high functioning autism (HFA): LFA describes children with an IQ lower than 80, HFA those with an IQ higher than 80. The disorders in both types are similar, but they are more strongly developed in children with LFA.
The disorders are mainly defined by the following aspects:
1. the inability of normal social interaction, e.g. amicable relations to other children
2. the inability of ordinary communication, e.g. disorder of spoken language/idiosyncratic language
3. stereotypical behaviour, e.g. stereotypical and restricted interests with an atypical content
To demonstrate the inability to manage normal communication and language, the University of Pittsburgh and the ESRC performed experiments to provide possible explanations. Sentences, stories or numbers were presented to children with autism and to healthy children. The researchers concluded that the disorders in people with HFA and LFA are caused by an impairment in declarative memory. This impairment leads to difficulties in learning and remembering sentences, stories or personal events, whereas the ability to learn numbers is preserved. It has been shown that these children are not able to link words they hear to their general knowledge, so the words are only partially learnt, with an idiosyncratic meaning. This explains why children with LFA and HFA differ in their way of thinking from healthy children: it is often difficult for them to understand others and vice versa.

Furthermore, scientists believe that the process of language learning depends on an initial vocabulary of fully meaningful words. It is assumed that these children do not possess such a vocabulary, and thus their language development is impaired. In a few cases the acquisition of language fails completely, so that some children are not able to use language at all. This inability to learn and use language can be a consequence of an impairment of declarative memory. It might also cause a low IQ, because the process of learning is language-mediated. In HFA the IQ is not significantly lower than that of healthy children, which correlates well with their better understanding of word meanings; they have a milder form of autism. The experiments have also shown that adults with autism do not have problems with the handling of language. A reason for this might be that they have been taught to use it during development, or perhaps they acquired this ability through reading and writing.

The causes of autism have not yet been explored sufficiently to know how to help and support people with autism in everyday life. It is still not clear whether these conditions are really caused by genetic disorders; it is also possible that other neurological malfunctions, such as brain damage or biochemical peculiarities, are responsible. Research is only beginning to answer these questions.
7.05: References and Resources
1. E. B. Goldstein, Cognitive Psychology: Connecting Mind, Research, and Everyday Experience, p. 137, Thomson Wadsworth, 2005
2. S. Pinker, The Language Instinct, p. 269f.
Books
Steven Pinker: The Language Instinct; The Penguin Press, 1994; ISBN 0140175296
Gisela Klann-Delius: Spracherwerb; Sammlung Metzler, Bd. 325; Verlag J. B. Metzler; Stuttgart, Weimar, 1999; ISBN 3476103218
Arnold Langenmayr: Sprachpsychologie - Ein Lehrbuch; Verlag für Psychologie, Hogrefe, 1997; ISBN 3801710440
Mark F. Bear, Barry W. Connors, Michael A. Paradiso: Neuroscience - Exploring the Brain; Lippincott Williams & Wilkins, 3rd edition, 2006; ISBN 0781760038
8.01: Introduction and History
Mental imagery was already discussed by the early Greek philosophers. Socrates sketched a relation between perception and imagery by assuming that visual sensory experience creates images in the human mind which are representations of the real world. Later, Aristotle stated that "thought is impossible without an image". At the beginning of the 18th century, Bishop Berkeley proposed another role for mental images - similar to the ideas of Socrates - in his theory of idealism: he assumed that our whole perception of the external world consists only of mental images.
At the end of the 19th century, Wilhelm Wundt - the generally acknowledged founder of experimental psychology and cognitive psychology - called imagery, sensations and feelings the basic elements of consciousness. Furthermore, he had the idea that the study of imagery supports the study of cognition, because thinking is often accompanied by images. This remark was taken up by some psychologists and gave rise to the imageless-thought debate, which discussed the same question Aristotle had already asked: Is thought possible without imagery?
In the early 20th century, when Behaviourism became the mainstream of psychology, Watson argued that there is no visible evidence of images in human brains and that the study of imagery is therefore worthless. This general attitude towards the value of research on imagery did not change until the birth of cognitive psychology in the 1950s and 1960s.
Later on, imagery has often been believed to play a very large, even pivotal, role in both memory (Yates, 1966; Paivio, 1986) and motivation (McMahon, 1973). It is also commonly believed to be centrally involved in visuo-spatial reasoning and inventive or creative thought.
8.02: Concept
Imagination is the ability to form images, percepts and concepts that are not being perceived through sight, hearing or the other senses at that moment. It is the work of the mind that helps create fantasy. Imagination helps provide meaning to experience and understanding to knowledge; it is a basic ability through which people make sense of the world, and it plays a key role in the learning process. A basic training method for cultivating imagination is listening to stories, in which the accuracy of the wording is a fundamental factor in "generating the world". Through imagination we combine what we touch, see and hear into a coherent "picture".
In the professional use of the term in psychology, imagination means the process of reviving in the mind percepts of objects formerly presented in perception. Because this use conflicts with everyday language, some psychologists prefer to describe this process as "imaging" or "imagery", or to speak of "reproductive" as opposed to "productive" or "constructive" imagination. Images of imagination are seen with the "mind's eye". Imagination can also be expressed through fairy tales or imagined situations, and many famous inventions and entertainment products were created from someone's imagination. One hypothesis about the evolution of human imagination is that it allowed conscious beings to solve problems by means of mental simulation.
Children are often considered an especially imaginative group: because their ways of thinking have not yet fully formed, they have fewer ideological restrictions and rules than adults, and they are therefore often highly imaginative. Children frequently use stories or pretend play to exercise their imagination. When children engage in fantasy, they play at two levels: at the first level, they use role-play to act out what they have created with their imagination; at the second level, they pretend that the make-believe situation is real and treat it as a game.
In terms of behaviour, what they create seems to them a real reality, one that already exists in the myth of the story. Artists also need a great deal of imagination, because they are engaged in creative work, and imagination helps them break existing rules and bring aesthetics into new forms. In the everyday use of the term, imagination is the process of forming in the mind an image that has not been experienced before, or that is at least partially composed of new combinations of previously experienced elements. Typical examples are fairy tales and fiction: the forms of fantasy inspired by fantasy novels and science fiction encourage readers to pretend that these stories are real by resorting to objects, such as books or places, that do not exist outside the fictional world. Imagination in this sense, not being limited to the precise knowledge required by practical needs, is somewhat exempt from objective restraints. The ability to imagine oneself in the position of another person is very important for social relationships and social understanding. Einstein said: "Imagination... is more important than knowledge. Knowledge is limited. Imagination encompasses the world."
However, in all fields imagination has limits: an imagination that violates the basic laws of thought, the necessary principles of practical possibility, or the reasonable probabilities of a given case is considered a mental disorder. The same limitations apply to imagination in the realm of scientific hypothesis: the advancement of scientific research is largely due to provisional explanations that are constructed by imagination, but such hypotheses must be framed in relation to previously ascertained facts and coordinated with the principles of the particular science. Imagination is thus an experimental part of thinking, used to create theories and ideas based on functions: taking objects from real perception, it uses complex "what-if" combinations to envision new ideas or to change existing ones. This part of thinking is crucial to improving how new and old tasks are carried out. Such experimental ideas can be tried out safely in mental simulation; then, if an idea is plausible and its function real, it can be implemented in reality. Imagination is a key to new developments of the mind, shared with others and progressing collectively. Imagination can be divided into involuntary imagination (dreams and daydreams) and voluntary imagination (reproductive imagination, creative imagination, dreams of the future).
8.03: The Imagery Debate
Imagine yourself back on vacation again. You are now walking along the beach, projecting images of white benzene molecules onto the horizon. Suddenly you realise that there are two real little white dots beneath your projection. Curiously, you walk towards them until your visual field is filled by two serious-looking but fiercely debating scientists. As they take notice of your presence, they invite you to take a seat and listen to the still unsolved imagery debate.
Today's imagery debate is mainly influenced by two opposing theories: on the one hand Zenon Pylyshyn's propositional theory, and on the other hand Stephen Kosslyn's spatial representation theory of imagery processing.
Theory of propositional representation
The theory of propositional representation was proposed by Zenon Pylyshyn in 1973. He described mental imagery as an epiphenomenon: something that accompanies cognitive processing but is not part of it. Mental images do not show us how the mind works exactly; they only show us that something is happening - just like the display of a compact disc player. The flashing lights indicate that something is happening, and we may even be able to infer what is happening, but the display does not show us how the processes inside the player work. Even if the display were broken, the player would still continue to play music.
Representation
The basic idea of propositional representation is that relationships between objects are represented by symbols, not by spatial mental images of the scene. For example, a bottle under a table would be represented by a formula made of symbols, such as UNDER(BOTTLE, TABLE). The term proposition is borrowed from the domains of logic and linguistics and denotes the smallest possible entity of information. Each proposition can be either true or false.
If there is a sentence like "Debby donated a big amount of money to Greenpeace, an organization which protects the environment", it can be recapitulated by the propositions "Debby donated money to Greenpeace", "The amount of money was big" and "Greenpeace protects the environment". The truth value of the whole sentence depends on the truth values of its constituents. Hence, if one of the propositions is false, so is the whole sentence.
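The decomposition of a sentence into propositions, and the dependence of the sentence's truth value on its constituents, can be sketched in a few lines of code. The following is a minimal illustration, assuming a toy encoding of propositions as predicate-argument tuples; it is not part of any psychological model or existing library.
```python
# A minimal sketch of propositional representation, assuming a toy
# encoding in which each proposition is a predicate with arguments
# and a truth value. The predicate names are illustrative only.

from typing import NamedTuple, Tuple

class Proposition(NamedTuple):
    predicate: str
    arguments: Tuple[str, ...]
    true: bool

# The example sentence decomposed into its constituent propositions:
propositions = [
    Proposition("DONATE", ("DEBBY", "MONEY", "GREENPEACE"), True),
    Proposition("BIG", ("MONEY",), True),
    Proposition("PROTECT", ("GREENPEACE", "ENVIRONMENT"), True),
]

# The whole sentence is true only if all of its propositions are true.
sentence_is_true = all(p.true for p in propositions)
print(sentence_is_true)  # True; flipping any proposition makes it False
```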
Propositional networks
This model does not imply that a person remembers the sentence or its single propositions in their exact literal wording. Rather, it is assumed that the information is stored in memory in a propositional network.
In Figure 1, each circle represents a single proposition. Regarding the fact that some components are connected to more than one proposition, they construct a network of propositions. Propositional networks can also have a hierarchy, if a single component of a proposition is not a single object, but a proposition itself. An example of a hierarchical propositional network describing the sentence "John believes that Anna will pass her exam" is illustrated in Figure 2.
Complex objects and schemes
Even complex objects can be generated and described by propositional representation. A complex object like a ship would consist of a structure of nodes which represent the ship's properties and the relationships between these properties.
Almost all humans have concepts of commonly known objects like ships or houses in their mind. These concepts are abstractions of complex propositional networks and are called schemes. For example our concept of a house includes propositions like:
```Houses have rooms.
Houses can be made from wood.
Houses have walls.
Houses have windows.
...
```
Listing all of these propositions does not show the structure of relationships between these propositions. Instead, a concept of something can be arranged in a schema consisting of a list of attributes and values, which describe the properties of the object. Attributes describe possible forms of categorisation, while values represent the actual value for each attribute. The schema representation of a house looks like this:
```House
Category: building
Material: stone, wood
Contains: rooms
Function: shelter for humans
Shape: rectangular
...
```
The hierarchical structure of schemes is organised in categories. For example, "house" belongs to the category "building" (which has of course its own schema) and contains all attributes and values of the parent schema plus its own specific values and attributes. This way of organising objects in our environment into hierarchical models enables us to recognize objects we have never seen before in our life, because they can possibly be related to categories we already know.
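Since the text above describes schemas as hierarchically organised, with a child schema inheriting the attributes of its parent category, a tiny sketch can make the lookup mechanism concrete. The snippet below is a simplified illustration, assuming plain dictionaries as schemas; the attribute names are invented for the example.
```python
# A minimal sketch of schema inheritance, assuming schemas are plain
# dictionaries whose "category" entry points to a parent schema.
# Attribute lookup walks up the category chain, so "house" inherits
# everything defined for "building".

schemas = {
    "building": {
        "category": None,
        "attributes": {"function": "shelter", "material": ["stone", "wood"]},
    },
    "house": {
        "category": "building",
        "attributes": {"contains": "rooms", "shape": "rectangular"},
    },
}

def lookup(schema_name, attribute):
    """Return an attribute value, searching parent schemas if needed."""
    while schema_name is not None:
        schema = schemas[schema_name]
        if attribute in schema["attributes"]:
            return schema["attributes"][attribute]
        schema_name = schema["category"]
    return None

print(lookup("house", "shape"))     # rectangular (own attribute)
print(lookup("house", "function"))  # shelter (inherited from "building")
```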
Experimental support
In an experiment performed by Wiseman and Neisser in 1974, people were shown a picture which at first sight seems to consist of random black and white shapes. After some time the subjects realise that there is a dalmatian dog in it. The results show that people who recognise the dog remember the picture better than people who do not recognise it. A possible explanation is that the picture is stored in memory not as a picture, but as a proposition.
In an experiment by Weisberg in 1969, subjects had to memorise sentences like "Children who are slow eat bread that is cold". The subjects were then given a cue word and asked to name the first word from the sentence that came to mind. Almost all subjects responded to the cue word "slow" with the word "children", although the word "bread" is positioned closer to "slow" in the sentence than "children" is. An explanation for this is that the sentence is stored in memory using the three propositions "children are slow", "children eat bread" and "bread is cold". The subjects associated "children" with the cue "slow" because both belong to one proposition, while "bread" and "slow" belong to different ones. Similar evidence was found in another experiment by Ratcliff and McKoon in 1978.
Theory of spatial representation
Stephen Kosslyn's theory, which opposes Pylyshyn's propositional approach, implies that images are not represented only by propositions. He tried to find evidence for a spatial representation system that constructs mental, analogous, three-dimensional models.
The primary role of this system is to organize spatial information in a general form that can be accessed by either perceptual or linguistic mechanisms. It also provides coordinate frameworks to describe object locations, thus creating a model of a perceived or described environment. The advantage of a coordinate representation is that it is directly analogous to the structure of real space and captures all possible relations between objects encoded in the coordinate space. These frameworks also reflect differences in the salience of objects and locations consistent with the properties of the environment, as well as the ways in which people interact with it. Thus, the representations created are models of physical and functional aspects of the environment.
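To contrast this with the propositional formula UNDER(BOTTLE, TABLE) from earlier, a coordinate representation stores only locations and derives relations from them on demand. The following is a minimal sketch under that assumption; the coordinates and the tolerance threshold are invented for illustration.
```python
# A minimal sketch of a coordinate (spatial) representation, assuming
# objects are stored as points in a 2-D frame. Unlike the propositional
# formula UNDER(BOTTLE, TABLE), relations here are not stored at all:
# they are read off the coordinates on demand.

import math

locations = {"bottle": (2.0, 0.0), "table": (2.0, 1.0), "ship": (8.0, 0.0)}

def distance(a, b):
    (x1, y1), (x2, y2) = locations[a], locations[b]
    return math.hypot(x2 - x1, y2 - y1)

def is_under(a, b):
    # "a is under b" if they roughly share an x-position and a is lower.
    (x1, y1), (x2, y2) = locations[a], locations[b]
    return abs(x1 - x2) < 0.5 and y1 < y2

print(is_under("bottle", "table"))  # True, derived from geometry
print(distance("bottle", "ship"))   # 6.0
```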
Encoding
What, then, can be said about the primary components of cognitive spatial representation? Certainly, the distinction between the external world and our internal view of it is essential, and it is helpful to explore the relationship between the two further from a process-oriented perspective.
The classical approach assumes a complex internal representation in the mind that is constructed through a series of specific perceived stimuli, and that these stimuli generate specific internal responses. Research dealing specifically with geographic-scale space has worked from the perspective that the macro-scale physical environment is extremely complex and essentially beyond the control of the individual. This research, such as that of Lynch and of Golledge (1987) and his colleagues, has shown that there is a complex of behavioural responses generated from correspondingly complex external stimuli, which are themselves interrelated. Moreover, the results of this research offer a view of our geographic knowledge as a highly interrelated external/internal system. Using landmarks encountered within the external landscape as navigational cues is the clearest example of this interrelationship.
The rationale is as follows: We gain information about our external environment from different kinds of perceptual experience; by navigating through and interacting directly with geographic space as well as by reading maps, through language, photographs and other communication media. Within all of these different types of experience, we encounter elements within the external world that act as symbols. These symbols, whether a landmark within the real landscape, a word or phrase, a line on a map or a building in a photograph, trigger our internal knowledge representation and generate appropriate responses. In other words, elements that we encounter within our environment act as external knowledge stores.
Each external symbol has meaning that is acquired through the sum of the individual perceiver's previous experience. That meaning is imparted by both the specific cultural context of that individual and by the specific meaning intended by the generator of that symbol. Of course, there are many elements within the natural environment not "generated" by anyone, but that nevertheless are imparted with very powerful meaning by cultures (e.g., the sun, moon and stars). Man-made elements within the environment, including elements such as buildings, are often specifically designed to act as symbols as at least part of their function. The sheer size of downtown office buildings, the pillars of a bank facade and church spires pointing skyward are designed to evoke an impression of power, stability or holiness, respectively.
These external symbols are themselves interrelated, and specific groupings of symbols may constitute self-contained external models of geographic space. Maps and landscape photographs are certainly clear examples of this. Elements of differing form (e.g., maps and text) can also be interrelated. These various external models of geographic space correspond to external memory. From the perspective just described, the total sum of any individual's knowledge is contained in a multiplicity of internal and external representations that function as a single, interactive whole. The representation as a whole can therefore be characterised as a synergistic, self-organising and highly dynamic network.
Experimental support
Interaction
Early experiments on imagery were done as early as 1910 by Perky, who tried to find out whether there is any interaction between imagery and perception using a simple mechanism. Subjects were told to project an image of a common object, such as a ship, onto a wall. Without their knowledge there was a back projection which subtly shone through the wall. They then had to describe this picture, or were questioned about, for example, the orientation or the colour of the ship. In Perky's experiment, none of the 20 subjects recognised that their description of the picture did not arise from their own mind but was completely influenced by the picture shown to them.
Image Scanning
Another seminal line of research in this field was Kosslyn's image-scanning experiments in the 1970s. Referring to the example of the mental representation of a ship, he found a further linearity in the movement of the mental focus from one part of the ship to another: the reaction time of the subjects increased with the distance between the two parts, which indicates that we actually create a mental picture of scenes while trying to solve small cognitive tasks. Interestingly, this visual ability can also be observed in the congenitally blind, as Marmor and Zaback (1976) found. Presuming that the underlying processes are the same as in sighted subjects, it can be concluded that there is a more deeply encoded system that has access to more than the visual input.
Mental Rotation Task
Other advocates of the spatial representation theory, Shepard and Metzler, developed the mental rotation task in 1971. Two objects are presented to a participant at different angles, and the task is to decide whether the objects are identical or not. The results show that reaction time increases linearly with the rotation angle between the objects; the participants mentally rotate one object in order to match it to the other. Inferring such mental processes from reaction times is known as "mental chronometry".
Together with Paivio's memory research, this experiment was crucial for establishing the importance of imagery within cognitive psychology, because it showed the similarity of imagery to the processes of perception. For a mental rotation of 40° the subjects needed two seconds on average, whereas for a 140° rotation the reaction time increased to four seconds. From this it can be concluded that people in general have a mental object rotation rate of about 50° per second.
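The 50° per second figure follows from fitting a straight line to the two data points just mentioned; the slope gives the rotation rate, and the intercept estimates the time spent on everything other than rotating. A quick check of this arithmetic, assuming exactly the values from the text:
```python
# A back-of-the-envelope check of the rotation rate reported above,
# assuming a linear relation RT = intercept + angle / rate between
# rotation angle (degrees) and reaction time (seconds).

angles = (40.0, 140.0)   # rotation angles from the text
times = (2.0, 4.0)       # mean reaction times from the text

slope = (times[1] - times[0]) / (angles[1] - angles[0])  # seconds per degree
rate = 1.0 / slope                                       # degrees per second
intercept = times[0] - slope * angles[0]                 # non-rotation overhead

print(rate)       # 50.0 deg/s, matching the rate stated in the text
print(intercept)  # 1.2 s spent on encoding and responding, not rotating
```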
Spatial Frameworks
Although most research on mental models has focussed on text comprehension, researchers generally believe that mental models are perceptually based. Indeed, people have been found to use spatial frameworks like those created for texts to retrieve spatial information about observed scenes (Bryant, 1991). Thus, people create the same sorts of spatial memory representations no matter if they read about an environment or see it themselves.
Size and the visual field
If an object is observed from different distances, it is harder to perceive details when the object is far away, because the object then fills only a small part of the visual field. Kosslyn carried out an experiment in 1973 to find out whether this is also true for mental images, which would show the similarity between spatial representation and the perception of the real environment. He told participants to imagine objects that are far away and objects that are near; after asking them about details, he concluded that details can be observed better when the object is near and fills the visual field. He also told the participants to imagine animals of different sizes next to one another, for example an elephant and a rabbit. The elephant filled much more of the visual field than the rabbit, and it turned out that the participants were able to answer questions about the elephant more rapidly than about the rabbit. After that, the participants had to imagine the small animal next to an even smaller animal, such as a fly. This time the rabbit filled the bigger part of the visual field, and again questions about the bigger animal were answered faster. The result of Kosslyn's experiments is that people can observe more details of an object when it fills a bigger part of their mental visual field. This provides evidence that mental images are represented spatially.
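The perceptual side of this effect is simple geometry: the visual angle an object subtends shrinks with viewing distance, so a distant or small object covers less of the visual field. A small worked example, with object sizes and a viewing distance invented purely for illustration:
```python
# Visual angle subtended by an object of a given size at a given
# distance. The sizes and distance below are made up for illustration.

import math

def visual_angle_deg(size_m, distance_m):
    """Angle (in degrees) the object spans in the visual field."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

print(visual_angle_deg(3.0, 5.0))  # elephant-sized object at 5 m: ~33 deg
print(visual_angle_deg(0.3, 5.0))  # rabbit-sized object at 5 m: ~3.4 deg
```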
Discussion
Since the 1970s, many experiments have enriched our knowledge about imagery and memory to a great extent in the course of the two opposing points of view in the imagery debate. The back and forth of supporting evidence was marked by many clever ideas. The following section is an example of the potential of such controversies.
In 1978, Kosslyn expanded his image-scanning experiment from objects to real distances represented on maps. In the picture you see our island with all the places you have encountered in this chapter. Try to imagine how far away from each other they are. This is exactly the experiment performed by Kosslyn. Again, he successfully predicted a linear dependency between reaction time and spatial distance, supporting his model.
In the same year, Pylyshyn answered with what is called the "tacit-knowledge explanation", because he supposed that the participants incorporate knowledge about the world without noticing it. The map is decomposed into nodes with edges in between; the increase in time, he thought, was caused by the different number of nodes visited until the goal node is reached.
Only four years later, Finke and Pinker published a counter-model. Picture (1) shows a surface with four dots, which was presented to the subjects. After two seconds it was replaced by picture (2), with an arrow on it. The subjects had to decide whether the arrow pointed at one of the former dots. The result was that they reacted more slowly the farther the arrow was from a dot. Finke and Pinker concluded that, within two seconds, the distances could only have been stored in a spatial representation of the surface.
To sum up, it is commonly believed that imagery and perception share certain features but also differ in some respects. For example, perception is a bottom-up process that originates with an image on the retina, whereas imagery is a top-down mechanism that originates when activity is generated in higher visual centres without an actual stimulus. Another distinction is that perception occurs automatically and remains relatively stable, whereas imagery needs effort and is fragile. But as psychological discussions failed to single out one right theory, the debate has now been translocated to neuroscience, whose methods have improved promisingly throughout the last three decades.
8.04: Neuropsychological Approach
Investigating the brain - a way to resolve the imagery debate?
Until the late 1980s, visual imagery was investigated by psychological studies relying solely on behavioural experiments. By that time, research on the brain by electrophysiological measurements such as the event-related potential (ERP) and brain-imaging techniques (fMRI, PET) became possible. It was therefore hoped that neurological evidence about how the brain responds to visual imagery would help to resolve the imagery debate.
We will see that many results from neuroscience support the theory that imagery and perception are closely connected and share the same physiological mechanisms. Nevertheless, the contradictory phenomenon of double dissociations between imagery and perception shows that the overlap is not perfect. A theory that tries to take all the neuropsychological results into account and gives an explanation for the dissociations will therefore be presented at the end of this section.
Support for shared physiological mechanisms of imagery and perception
Brain-imaging experiments in the 1990s confirmed what previous electrophysiological measurements had already suggested. In these experiments, participants' brain activity was measured using either PET or fMRI, both while they were creating visual images and while they were not. The experiments showed that imagery creates activity in the striate cortex, which, being the primary visual receiving area, is also active during visual perception. Figure 8 (not included yet due to copyright issues) shows how activity in the striate cortex increased both when a person perceived an object ("stimulus on") and when the person created a visual image of it ("imagined stimulus"). Although the striate cortex has not become activated by imagery in all brain-imaging studies, most results indicate that it is activated when participants are asked to create detailed images.
Another approach to understanding imagery uses studies of people with brain damage, in order to determine whether imagery and perception are affected in the same way. Often, patients with perceptual problems also have problems creating images, as in the case of people who have lost both the ability to see colour and the ability to create colours through imagery. Another example is that of a patient with unilateral neglect, which is due to damage to the parietal lobes and causes the patient to ignore objects in one half of his visual field. When the patient was asked to imagine himself standing at a place familiar to him and to describe the things he was seeing, it was found that he neglected not only the left side of his perceptions but also the left side of his mental images, as he could only name objects that were on the right-hand side of his mental image.
The idea that mental imagery and perception share physiological mechanisms is thus supported both by brain-imaging experiments with normal participants and by the effects of brain damage, as in patients with unilateral neglect. However, contradictory results have also been observed, indicating that the underlying mechanisms of perception and imagery cannot be identical.
Double dissociation between imagery and perception
A double dissociation exists when a single dissociation (one function is present, another is absent) can be demonstrated in one person and the complementary single dissociation can be demonstrated in another person. Regarding imagery and perception, a double dissociation has been observed, as there are both patients with normal perception but impaired imagery and patients with impaired perception but normal imagery. Accordingly, one patient with damage to his occipital and parietal lobes was able to recognise objects and draw accurate pictures of objects placed before him, but was unable to draw pictures from memory, which requires imagery. Conversely, another patient suffering from visual agnosia was unable to identify pictures of objects even though he could recognise parts of them; for example, he did not recognise a picture of an asparagus but labelled it a "rose twig with thorns". On the other hand, he was able to draw very detailed pictures from memory, a task depending on imagery.
As double dissociation usually suggests that two functions rely on different brain regions or physiological mechanisms, the described examples imply that imagery and perception do not share exactly the same physiological mechanisms. This of course conflicts with the evidence from brain imaging measurements and other cases of patients with brain damage mentioned above that showed a close connection between imagery and perception.
Interpretation of the neuropsychological results
A possible explanation for the paradox - that on the one hand there is strong evidence for parallels between perception and imagery, while on the other hand the observed double dissociation conflicts with these results - goes as follows. The mechanisms of imagery and perception overlap only partially, so that the mechanisms responsible for imagery are located mainly in higher visual centres, whereas the mechanisms underlying perception are located at both lower and higher centres (Figure 9, not included yet due to copyright issues). Accordingly, perception is thought to constitute a bottom-up process that starts with an image on the retina and involves processing in the retina, the Lateral Geniculate Nucleus, the striate cortex and higher cortical areas. In contrast, imagery is said to start as a top-down process, as its activity is generated in higher visual centres without any actual stimulus, that is, without an image on the retina. This theory provides explanations for both the patient with impaired perception but normal imagery and the patient with normal perception but impaired imagery. In the first case, the patient's perceptual problems could be explained by damage to early processing stages in the cortex, and his preserved ability to create images by the intactness of higher areas of the brain. Similarly, in the latter case, the patient's impaired imagery could be caused by damage to higher-level areas while the lower centres remained intact. Even though this explanation fits several cases, it does not fit all of them. Consequently, further research must still develop an explanation that can account sufficiently for the relation between perception and imagery.
8.05: Imagery and Memory
Besides the imagery debate, which is concerned with the question of how we imagine objects, persons or situations and involve our senses in these mental pictures, questions concerning memory remain open. In this part of the chapter on imagery we deal with the questions of how images are encoded in the brain and how they are recalled from memory. In search of answers, three major theories have evolved. Each explains the encoding and recall processes differently, and, as usual, validating experiments have been carried out for all of them.
The common-code theory
This view of memory and recall theorises that images and words access semantic information in a single conceptual system that is neither word-like nor spatial. The common-code model hypothesises that, for example, images and words both require analogous processing before accessing semantic information; the semantic information of all sensory input is thus encoded in the same way. The consequence is that when you remember, for instance, a situation in which you watched an apple fall from a tree, the visual information about the falling of the apple and the information about the sound it made when it hit the ground are both constructed on the fly in the relevant brain regions (e.g. visual images in the visual cortex) out of one code stored in the brain. A further claim of this model is that images require less time than words to access the common conceptual system: images can be discriminated more quickly because they share a smaller set of possible alternatives, whereas words have to be picked out of a much larger set of ambiguous possibilities in the mental dictionary. The strongest criticism of this model is that it does not specify where this common code is ultimately stored.
The abstract-propositional theory
This theory rejects any notion of a distinction between verbal and non-verbal modes of representation; instead it describes representations of experience or knowledge in terms of an abstract set of relations and states - in other words, propositions. The theory postulates that recall of an image is better if the person recalling it has some connection to the meaning of the image. For example, if you look at an abstract picture in which a bunch of lines is drawn that you cannot combine with each other in any meaningful way, recalling this picture will be very hard, if not impossible. The assumed reason is that there is no connection to propositions that could describe some part of the picture, and no connection to a propositional network that could reconstruct parts of it. In the other case, you look at a picture with lines that you can combine with each other in a meaningful way. Here the recall process should succeed, because you can scan for a proposition which shares at least one attribute with the meaning of the image you recognised; this proposition then returns the information necessary to recall it.
The dual-code theory
Unlike the common-code and abstract-propositional approaches, this model postulates that words and images are represented in functionally distinct verbal and non-verbal memory systems. To investigate this model, Roland and Friberg (1985) ran an experiment in which the subjects had either to perform a verbal mnemonic task or to imagine walking the route to their home through their neighbourhood. While the subjects performed one of these tasks, their brain was scanned with positron emission tomography (PET). Figure 10 combines scans of the brains of subjects who performed the first and the second task.
Figure 10: Green dots represent regions which showed a higher activity during the walking home task; yellow dots represent regions which showed a higher activity during the mnemonic task.
As the picture shows, different brain areas are involved in the processing of verbal and spatial information. The areas that were active during the walking-home task are the same areas that are active during visual perception and information processing, while the areas that showed activity during the mnemonic task include Broca's area, where language processing is normally located. This can be considered evidence that both representation types are somehow connected with their modalities, as Paivio's dual-coding theory suggests (Anderson, 1996). Can you think of other examples that argue for the dual-code theory? For instance, you walk along the beach in the evening and there are some beach bars ahead. You order a drink, and next to you, you see a person who seems familiar to you. While you finish your drink, you try to remember the name of this person but fail, even though you can remember where you saw the person last and perhaps what you talked about in that situation. Now imagine another situation: you walk through the city and pass some coffee bars, and from one of them you hear a song. You are sure that you know the song, but you can remember neither the name of the artist, nor the name of the song, nor where you have heard it. Both examples can be interpreted as indicating that in these situations you can recall the information you perceived in the past, but you fail to remember the propositions you connected to it.
In this area of research there are, of course, other unanswered questions - for example, why we cannot imagine smell, how the recall processes are performed, or where the storage of images is located. The imagery debate is still going on, and conclusive evidence showing which of the models explains the connection between imagery and memory is missing. For now the dual-code theory seems to be the most promising model.
8.06: References
Anderson, John R. (1996). Kognitive Psychologie: Eine Einführung. Heidelberg: Spektrum Akademischer Verlag.
Bryant, D. J., B. Tversky, et al. (1992). "Internal and External Spatial Frameworks for Representing Described Scenes." Journal of Memory and Language 31: 74-98.
Couclelis, H., Golledge, R., and Tobler, W. (1987). Exploring the anchor-point hypothesis of spatial cognition. Journal of Environmental Psychology, 7, 99-122.
E. Bruce Goldstein, Cognitive Psychology: Connecting Mind, Research, and Everyday Experience (2005) - ISBN 0-534-57732-6.
Marmor, G.S. and Zaback, L.A. (1976). Mental Rotation in the blind: Does mental rotation depend on visual imagery?. Journal of Experimental Psychology: Human Perception and Performance, 2, 515-521.
Roland, P. E. & Friberg, L. (1985). Localization of cortical areas activated by thinking. Journal of Neurophysiology, 53, 1219-1243.
Paivio, A. (1986). Mental representation: A dual-coding approach. New York: Oxford University Press.
8.07: Links and Further Reading
Cognitive Psychology Osnabrueck
Dr. Rolf A. Zwaan's Homepage with many Papers
Articles
Cherney, Leora (2001): Right Hemisphere Brain Damage
Grodzinsky, Yosef (2000): The neurology of syntax: Language use without Broca’s area.
Mueller, H. M., King, J. W. & Kutas, M. (1997). Event-related potentials elicited by spoken relative clauses; Cognitive Brain Research 4:193-203.
Mueller, H.M. & Kutas, M. (1996). What’s in a name? Electrophysiological differences between spoken nouns, proper names and one’s own name; NeuroReport 8:221-225.
Revised in July 2007 by: Alexander Blum (Spatial Representation, Discussion of the Imagery Debate, Images), Daniel Elport (Propositional Representation), Alexander Lelais (Imagery and Memory), Sarah Mueller (Neuropsychological approach), Michael Rausch (Introduction, Publishing)
Authors of the first version (2006): Wendy Wilutzky, Till Becker, Patrick Ehrenbrink (Propositional Representation), Mayumi Koguchi, Da Shengh Zhang (Spatial Representation, Intro, Debate).
9.01: Introduction
"Language is the way we interact and communicate, so, naturally, the means of communication and the conceptual background that’s behind it, which is more important, are used to try to shape attitudes and opinions and induce conformity and subordination. Not surprisingly, it was created in the more democratic societies." - Chomsky
Language is a central part of everyday life, and communication is a natural human necessity. For these reasons there has been great interest in their properties. However, describing the processes of language turns out to be quite hard.
We can define language as a system of communication through which we code and express our feelings, thoughts, ideas and experiences.[1]
Plato was already concerned with the nature of language in his dialogue "Cratylus", where he discussed early ideas about principles that are important in modern linguistics, namely morphology and phonology. Gradually, philosophers, natural scientists and psychologists became interested in features of language.
Since the emergence of cognitive science in the 1950s and Chomsky's criticism of the behaviourist view, language has been seen as a cognitive ability of humans, thus connecting linguistics with other major fields like computer science and psychology. Today, psycholinguistics is a discipline in its own right, and its most important topics are the acquisition, production and comprehension of language.
Especially in the 20th century, many studies concerning communication were conducted, evoking new views on old facts. New techniques like CT, MRI, fMRI and EEG, as described in Methods of Behavioural and Neuroscience Methods, made it possible to observe the brain in detail during communication processes.
Later on, an overview of the most popular experiments and observed effects is presented. But in order to understand those, one needs to have a basic idea of semantics and syntax, as well as of the linguistic principles for processing words, sentences and full texts.
Finally, some questions will arise: How is language affected by culture? Or, in philosophical terms, what is the relationship between language and thought?
9.02: Historical Review on Psycholinguistics and Neurolinguistics
Starting with philosophical approaches, the nature of human language has always been a topic of interest. Galileo in the 16th century saw human language as the most important invention of humans. Later, in the 18th century, psychologists began the scientific study of language. Wilhelm Wundt (founder of the first laboratory of psychology) saw language as the mechanism by which thoughts are transformed into sentences. The observations of Wernicke and Broca (see chapter 9) were milestones in the study of language as a cognitive ability. In the early 1900s the behaviouristic view strongly influenced the study of language. In 1957 B. F. Skinner published his book "Verbal Behavior", in which he proposed that the learning of language can be seen as a mechanism of reinforcement. In the same year, Noam Chomsky (quoted at the beginning of this chapter) published "Syntactic Structures". He proposed that the ability to acquire language is somehow coded in the genes, which led him to the idea that the underlying basis of language is similar across cultures: there might be some kind of universal grammar as a base, independent of what kind of language (including sign language) humans use. Chomsky later published a review of Skinner's "Verbal Behavior" in which he presented arguments against the behaviouristic view. There are still some scientists who are convinced that a mentalist approach like Chomsky's is not needed, but by now most agree that human language has to be seen as a cognitive ability.
Current goals of Psycholinguistics
A natural language can be analysed at a number of different levels. In linguistics we distinguish between phonology (sounds), morphology (words), syntax (sentence structure), semantics (meaning) and pragmatics (use). Linguists try to find systematic descriptions capturing the regularities inherent in the language itself. But a description of natural language merely as an abstract structured system cannot be enough. Psycholinguists rather ask how the knowledge of language is represented in the brain and how it is used. Today's most important research topics are:
1. comprehension: How humans understand spoken as well as written language, how language is processed and what interactions with memory are involved.
2. speech production: Both the physical aspect of speech production, and the mental process that stands behind the uttering of a sentence.
3. acquisition: How people learn to speak and understand a language.
9.03: Characteristic Features
What is a language? What kinds of languages exist? Are there characteristic features that are unique to human language?
There are plenty of approaches to describing languages. Especially in computational linguistics, researchers try to find formal definitions for different kinds of languages. But for psychology, aspects of language beyond its function as a pure system of communication are of central interest. Language is also a tool we use for social interactions, ranging from the exchange of news to the identification of social groups by their dialect. We use it to express our feelings, thoughts, ideas and so on.
Although there are many ways to communicate (consider non-human language), humans take their system of communication - human language - to be unique. But what is it that makes human language so special and unique?
Four major criteria have been proposed by Professor Franz Schmalhofer from the University of Osnabrück as explained below:
• semanticity
• displacement
• creativity
• structure dependency
Semanticity means the usage of symbols. Symbols can refer either to objects or to relations between objects. In human language, words are the basic form of symbols. For example, the word "book" refers to an object made of paper on which something might be written. A relational symbol is the verb "to like", which refers to somebody's sympathy for something or someone.
The criterion of displacement means that not only objects or relations in the present can be described: there are also symbols which refer to objects in another time or place. The word "yesterday" refers to the day before, and objects mentioned in a sentence with "yesterday" belong to another time than the present one. Displacement concerns the communication of events which have happened or will happen, and the objects belonging to those events.
Given a range of symbols to communicate with, these symbols can be combined in new ways. Creativity is probably the most important feature: our communication is not restricted to a fixed set of topics or predetermined messages. A finite set of symbols can be combined into an infinite number of sentences and meanings, which makes the creation of novel messages possible. How creative human language is can be illustrated by some simple examples, like the process that creates verbs from nouns. New words can be created which did not exist before, and yet we are able to understand them.
Examples:
leave the boat on the beach -> beach the boat
keep the aeroplane on the ground -> ground the aeroplane
write somebody an e-mail -> e-mail somebody
Creative systems are also found in other aspects of language, like the way sounds are combined to form new words; e.g. prab, orgu or zabi could be imagined as names for new products.
To avoid an arbitrary combination of symbols without any regular arrangement, "true" languages need structure dependency. When combining symbols, the syntax is relevant: a change in symbol order may change the meaning of the sentence. For example, "The dog bites the cat" obviously has a different meaning from "The cat bites the dog", based purely on the different word order of the two sentences, as the sketch below illustrates.
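The role of word order can be made concrete with a toy parser that assigns thematic roles purely by position. This is only an illustrative sketch of the idea, not a linguistically adequate parser:
```python
# A minimal sketch of structure dependency, assuming a toy grammar in
# which word order alone assigns the agent and patient roles. The same
# three words yield different propositions in different orders.

def parse_svo(sentence):
    """Parse a simple subject-verb-object sentence into a proposition."""
    subject, verb, obj = sentence.split()
    return {"predicate": verb, "agent": subject, "patient": obj}

print(parse_svo("dog bites cat"))  # agent: dog, patient: cat
print(parse_svo("cat bites dog"))  # agent: cat, patient: dog
```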
Non-Human Language - Animal Communication
Forms of Communication
As mentioned before, human language is just one of quite a number of communication forms. Different forms of communication can be found in the animal world: from a little moth to a giant whale, all animals appear to make use of communication.
Humans are not the only ones who use facial expressions to stress utterances or feelings; facial expressions are also found among apes. The expression "smiling", for example, indicates cooperativeness and friendliness in both the human and the ape world, while an ape showing its teeth indicates a willingness to fight.
Posture is a very common communicative tool among animals. Lowering the front part of the body and extending the front legs is a dog's signal that it is playful, whereas lowering the full body is a dog's postural way of showing submissiveness. Postural communication is known in both human and non-human primates.
Besides facial expression, gesture and posture, which are found in human communication, there are other communicative devices which are either barely noticeable to humans, like scent, or not found among humans at all, like light, colour and electricity. Chemicals used for a communicative function are called pheromones. Pheromones are used to mark territory or to signal reproductive readiness. For animals, scent is a very important tool which dominates their mating behaviour. Humans are influenced in their mating behaviour by scent as well, but there are more factors in that behaviour, so scent does not predominate.
Insects use species-dependent light patterns to signal identity, sex and location. The octopus, for example, changes colour to signal territorial defence and readiness to mate. In the world of birds, colour is widespread, too: the male peacock has colourful feathering to impress peahens as part of its mating behaviour. These ways of communication help animals to live in a community and survive in a certain environment.
Characteristic Language Features in Animal Communication
As mentioned above, it is possible to describe the uniqueness of human language by four criteria (semanticity, displacement, creativity and structure dependency), which are important devices in human language for establishing clear communication. To see whether these criteria exist in animal communication - i.e. whether animals possess a "true" language - several experiments with non-human primates were performed. Non-human primates were taught American Sign Language (ASL) and a specially developed token language to determine to what extent they are capable of linguistic behaviour. Can semanticity, displacement, creativity and structure dependency be found in non-human language?
Experiments
1. In 1931, the comparative psychologist Winthrop Niles Kellogg and his wife started an experiment with a chimpanzee, which they raised alongside their own child. The purpose was to see how environment influences development: could a chimpanzee become more like a human? Eventually the experiment was abandoned, partly because the behaviour of their son became more and more chimpanzee-like, and partly because of the exhaustion of running the experiment while raising two infants at the same time.
2. Human language: In 1948, in Orange Park, Florida, Keith and Cathy Hayes tried to teach English words to a chimpanzee named Viki, who was raised as if she were a human child. The chimpanzee was taught to "speak" easy English words like "cup". The experiment failed, since with their supralaryngeal anatomy and vocal fold structure it is impossible for chimpanzees to produce human speech sounds. The failure of the Viki experiment made scientists wonder to what extent non-human primates are able to communicate linguistically.
3. Sign language: From 1965 to 1972 the first important evidence of rudimentary linguistic behaviour came from "Washoe", a young female chimpanzee. The experimenters Allen and Beatrice Gardner conducted an experiment in which Washoe learned 130 signs of American Sign Language within three years. When shown pictures of a duck and asked WHAT THAT?, she combined the signs WATER and BIRD to create WATER BIRD, as she had not learned the sign for DUCK (the words in capital letters refer to the signs the apes use to communicate with the experimenter).
It was claimed that Washoe was able to arbitrarily combine signs spontaneously and creatively. Some scientists criticised the ASL experiment of Washoe because they claimed that ASL is a loose communicative system and strict syntactic rules are not required. Because of this criticism different experiments were developed and performed which focus on syntactic rules and structure dependency as well as on creative symbol combination.
A non-human primate named "Kanzi" was trained by Savage-Rumbaugh in 1990. Kanzi was able to deal with 256 geometric symbols and understood complex instructions like GET THE ORANGE THAT IS IN THE COLONY ROOM. The experimenter worked with rewards.
A question which arose was whether these non-human primates were able to deal with human-like linguistic capacities or if they were just trained to perform a certain action to get the reward.
For more detailed explanations of the experiments see The Mind of an Ape.
Can the characteristic language features be found in non-human communication?
Creativity seems to be present in animal communication, as Washoe, among others, showed with the creation of WATER BIRD for DUCK. However, some critics claimed that such creativity is often accidental, or that, as in the case of Washoe's WATER BIRD, the creation relies on the fact that water and a bird were both present; it may be only because of this presence that Washoe invented the word WATER BIRD.
In the case of Kanzi a certain form of syntactic rule was observed: in 90% of Kanzi's sentences, the invitation to play came first and then the type of game Kanzi wanted to play, like CHASE HIDE, TICKLE SLAP and GRAB SLAP. The problem observed was that it is not always easy to recognise the order of signs: often facial expressions and hand signs are performed at the same time. One ape signed the sentence I LIKE COKE by hugging itself for "like" while forming the sign for "coke" with its hands; noticing an order in this signed sentence was not possible.
A certain structure dependency could be observed in Kanzi's active and passive sentences. When Matata, a fellow chimpanzee, was grabbed, Kanzi signed GRAB MATATA, and when Matata was performing an action such as biting, Kanzi produced MATATA BITE. It has not yet been proved, however, that genuinely symbolic behaviour is occurring. Although there is plenty of evidence that creativity and displacement occur in animal communication, some critics claim that this evidence can be traced back to training; they argue that linguistic behaviour cannot be proved, as it is more likely to be trained use of linguistic devices. Apes show syntactic behaviour only to a small degree, and they are not able to produce sentences containing embedded structures. Some linguists claim that because of such a lack of linguistic features, non-human communication cannot be a "true" language. Although we do not know the capacity of an ape's mind, the range of meanings observed in apes' wild life does not seem to approach the capaciousness of semanticity in human communication. Furthermore, apes do not seem to care much about displacement, as they apparently do not communicate about imagined pasts or futures.
All in all, non-human primate communication, consisting of graded series of signals, shows little arbitrariness. The results with non-human primates have led to a controversial discussion about linguistic behaviour, with many researchers claiming that the results were influenced by training.
For humans language is a communication form suited to the patterns of human life. Other communication systems are better suited for fellow creatures and their mode of existence.
Now that we know that there is a difference between animal communication and human language, we will look at the features of human language in detail.
Language Comprehension & Production
Language features - Syntax and Semantics
In this chapter the main question will be: how do we understand sentences? To find an answer, it is necessary to take a closer look at the structure of languages. The most important properties every human language provides are rules which determine the permissible sentences, and a hierarchical structure (phonemes as basic sounds constitute words, which in turn constitute phrases, which constitute sentences, which constitute texts). These features of a language enable humans to create new, unique sentences. The fact that all human languages have a common ground, even though they developed completely independently of one another, may lead to the conclusion that the ability to process language is innate. Further evidence for an inborn universal grammar comes from observations of deaf children who were not taught a language and developed their own form of communication exhibiting the same basic constituents.
Two basic abilities humans need in order to communicate are interpreting the syntax of a sentence and knowing the meaning of single words; in combination, these enable them to understand the semantics of whole sentences. Many experiments have been done to find out how humans carry out syntactic and semantic interpretation, and how syntax and semantics work together to construct the right meaning of a sentence. Physiological experiments have been done in which, for example, the event-related potential (ERP) in the brain was measured, as well as behavioristic experiments using mental chronometry, the measurement of the time course of cognitive processes. The physiological experiments showed that the syntactic and the semantic interpretation of a sentence take place separately from each other. These results are presented below in more detail.
|
textbooks/socialsci/Psychology/Cognitive_Psychology/Cognitive_Psychology_and_Cognitive_Neuroscience_(Wikibooks)/09%3A_Comprehension/9.03%3A_Characteristic_Features.txt
|
Semantical incorrectness in a sentence evokes an N400 in the ERP The exploration of the semantic sentence processing can be done by the measurement of the event-related potential (ERP) when hearing a semantical correct sentence in comparison to a semantical incorrect sentence. For example in one experiment three reactions to sentences were compared:
Semantically correct: “The pizza was too hot to eat.” Semantically wrong: “The pizza was too hot to drink.” Semantically wrong: “The pizza was too hot to cry.”
In such experiments the ERP evoked by the correct sentence is considered to show the ordinary sentence processing. The variations in the ERP in case of the incorrect sentences in contrast to the ERP of the correct sentence show at what time the mistake is recognized. In case of semantic incorrectness there was observed a strong negative signal about 400ms after perceiving the critical word which did not occure, if the sentence was semantically correct. These effects were observed mainly in the paritial and central area. There was also found evidence that the N400 is the stronger the less the word fits semantically. The word “drink” which fits a little bit more in the context caused a weaker N400 than the word “cry”. That means the intensity of the N400 correlates with the degree of the semantic mistake. The more difficult it is to search for a semantic interpretation of a sentence the higher is the N400 response.
To examine the syntactical aspects of the sentence processing a quite similar experiment as in the case of the semantic processing was done. There were used syntactical correct sentences and incorrect sentences, such as (correct:)“The cats won´t eat…” and (incorrect:)“The cats won´t eating…”. When hearing or reading a syntactical incorrect sentence in contrast to a syntactical correct sentence the ERP changes significantly on two different points of time. First of all there a very early increased response to syntactical incorrectness after 120ms. This signal is called the ‘early left anterior negativity’ because it occurs mainly in the left frontal lobe. This advises that the syntactical processing is located amongst others in Broca's area which is located in the left frontal lobe. The early response to syntactical mistakes also indicates that the syntactical mistakes are detected earlier than semantic mistakes.
The other change in the ERP when perceiving a syntactical wrong sentence occurs after 600ms in the paritial lobe. The signal is increasing positively and is therefore called P600. Possibly the late positive signal is reflecting the attempt to reconstruct the grammatical problematic sentence to find a possible interpretation. File:Cpnp3001.jpg Syntactical incorrectness in a sentence evokes after 600ms a P600 in the electrodes above the paritial lobe.
To summarize the three important ERP-components: First of all there occurs the ELAN at the left frontal lobe which shows a violation of syntactical rules. After it follows the N400 in central and paritial areas as a reaction to a semantical incorrectness and finally there occurs a P600 in the paritial area which probably means a reanalysis of the wrong sentence.
9.05: Behavioristic Approach Parsing a Sentence
Behavioristic experiments about how human beings parse a sentence often use syntactically ambiguous sentences. Because it is easier to realize that sentence-analysing mechanisms called parsing take place when using sentences in which we cannot automatically constitute the meaning of the sentence. There are two different theories about how humans parse sentences. The syntax-first approach claims that syntax plays the main part whereas semantics has only a supporting role, whereas the interactionist approach states that both syntax and semantics work together to determine the meaning of a sentence. Both theories will be explained below in more detail.
The Syntax-First Approach of Parsing The syntax-first approach concentrates on the role of syntax when parsing a sentence. That humans infer the meaning of a sentence with help of its syntactical structure (Kako and Wagner 2001) can easily be seen when considering Lewis Carroll´s poem ‘Jabberwocky’:
"Twas brillig, and the slithy toves Did gyre and gimble in the wabe: All mimsy were the borogoves, And the mome raths outgrabe."
Although most of the words in the poems have no meaning one may ascribe at least some sense to the poem because of its syntactical structure.
There are many different syntactic rules that are used when parsing a sentence. One important rule is the principle of late closure which means that a person assumes that a new word he perceives is part of the current phrase. That this principle is used for parsing sentences can be seen very good with help of a so called garden-path sentence. Experiments with garden-path sentences have been done by Frazier and Fayner 1982. One example of a garden-path sentence is: “Because he always jogs a mile seems a short distance to him.” When reading this sentence one first wants to continue the phrase “Because he always jogs” by adding “a mile” to the phrase, but when reading further one realizes that the words “a mile” are the beginning of a new phrase. This shows that we parse a sentence by trying to add new words to a phrase as long as possible. Garden-path sentences show that we use the principle of late closure as long it makes syntactically sense to add a word to the current phrase but when the sentence starts to get incorrect semantics are often used to rearrange the sentence. The syntax-first approach does not disregard semantics. According to this approach we use syntax first to parse a sentence and semantics is later on used to make sense of the sentence.
Apart from experiments which show how syntax is used for parsing sentences there were also experimens on how semantics can influence the sentence processing. One important experiment about that issue has been done by Daniel Slobin in 1966. He showed that passive sentences are understood faster if the semantics of the words allow only one subject to be the actor. Sentences like “The horse was kicked by the cow.” and “The fence was kicked by the cow.” are grammatically equal and in both cases only one syntactical parsing is possible. Nevertheless the first sentence semantically provides two subjects as possible actors and therefore it needs longer to parse this sentence. By measuring this significant difference Daniel Slobin showed that semantics play an important role in parsing a sentence, too.
|
textbooks/socialsci/Psychology/Cognitive_Psychology/Cognitive_Psychology_and_Cognitive_Neuroscience_(Wikibooks)/09%3A_Comprehension/9.04%3A_Physiological_Approach.txt
|
The interactionist approach ascribes a more central role to semantics in parsing a sentence. In contrast to the syntax-first approach, the interactionist theory claims that syntax is not used first but that semantics and syntax are used simultaneously to parse the sentence and that they work together in clarifying the meaning. There have been several experiments which provide evidence that semantics are taken into account from the very beginning reading a sentence. Most of these experiments are working with the eye-tracking techniques and compare the time needed to read syntactical equal sentences in which critical words cause or prohibit ambiguity by semantics. One of these experiments was done by John Trueswell and coworkers in 1994. He measured the eye movement of persons when reading the following two sentences:
The defendant examined by the lawyer turned out to be unreliable. The evidence examined by the lawyer turned out to be unreliable.
He observed that the time needed to read the words “by the lawyer” took longer in case of the first sentence because in the first sentence the semantics first allow an interpretation in which the defendant is the one who examines, while the evidence only can be examined. This experiment shows that the semantics also play a role while reading the sentence which supports the interactionist approach and argues against the theory that semantics are only used after a sentence has been parsed syntactically. Inferences Creates Coherence
Coherence is the semantic relation of information in different parts of a text to each other. In most cases coherence is achieved by inference; that means that a reader draws information out of a text that is not explicitly stated in this text. For further information the chapter [Neuroscience of Text Comprehension] should be considered.
9.07: Situation Model
A situation model is a mental representation of what a text is about. This approach proposes that the mental representation people form as they read a story does not indicate information about phrases, sentences, paragraphs, but a representation in terms of the people, objects, locations, events described in the story (Goldstein 2005, p. 374)
9.08: Using Language
Conversations are dynamic interactions between two or more people (Garrod &Pickering, 2004 as cited in Goldstein 2005). The important thing to mention is that conversation is more than the act of speaking. Each person brings in his or her knowledge and conversations are much easier to process if participants bring in shared knowledge. In this way, participants are responsible of how they bring in new knowledge. H.P. Grice proposed in 1975 a basic principle of conversation and four “conversational maxims.” His cooperative principle states that “the speaker and listener agree that the person speaking should strive to make statements that further the agreed goals of conversation.” The four maxims state the way of how to achieve this principle.
1. Quantity: The speaker should try to be informative, no over-/underinformation.
2. Quality: Do not say things which you believe to be false or lack evidence of.
3. Manner: Avoiding being obscure or ambiguous.
4. Relevance: Stay on topic of the exchange.
An example of a rule of conversation incorporating three of those maxims is the given-new-contract. It states that the speaker should construct sentences so that they include given and new information. (Haviland & Clark, 1974 as cited in Goldstein, 2005). Consequences of not following this rule were demonstrated by Susan Haviland and Herbert Clark by presenting pairs of sentences (either following or ignoring the given-new-contract) and measuring the time participants needed until they fully understood the sentence. They found that participants needed longer in pairs of the type:
``` We checked the picnic supplies.
The beer was warm.
Rather than:
We got some beer out of the trunk.
The beer was warm.
```
The reason that it took longer to comprehend the second sentence of the first pair is that inferencing has to be done (beer has not been mentioned as being part of the picnic supplies). (Goldstein, 2005, p. 377-378)
9.09: Language Culture and Cognition
In the parts above we saw that there has been a lot of research of language, from letters through words and sentences to whole conversations. Most of the research described in the parts above was processed by English speaking researchers and the participants were English speaking as well. Can those results be generalised for all languages and cultures or might there be a difference between English speaking cultures and for example cultures with Asian or African origin?
Imagine our young man from the beginning again: Knut! Now he has to prepare a presentation with his friend Chang for the next psychology seminar. Knut arrives at his friend’s flat and enters his living-room, glad that he made it there just in time. They have been working a few minutes when Chang says: ”It has become cold in here!“ Knut remembers that he did not close the door, stands up and...”stop! What is happening here?!“
This part is concerned with culture and its connection to language. Culture, not necessarily in the sense of "high culture" like music, literature and arts but culture is the "know-how" a person must have to tackle his or her daily life. This know-how might include high culture but it is not necessary.
|
textbooks/socialsci/Psychology/Cognitive_Psychology/Cognitive_Psychology_and_Cognitive_Neuroscience_(Wikibooks)/09%3A_Comprehension/9.06%3A_The_Interactionist_Approach_of_Parsing.txt
|
Scientists wondered in how far culture affects the way people use language. In 1991 Yum studied the indirectness of statements in Asian and American conversations. The statement "Please shut the door" was formulated by Americans in an indirect way. They might say something like "The door is open" to signal that they want to door to be shut. Even more indirect are Asian people. They often do not even mention the door but they might say something like "It is somewhat cold today". Another cultural difference affecting the use of language was observed by Nisbett in 2003 in observation about the way people pose questions. When American speaker ask someone if more tea is wanted they ask something like "More tea?". Different to this Asian people would ask if the other one would like more drinking as for Asians it seems obvious that tea is involved and therefore mentioning the tea would be redundant. For Americans it is the other way round. For them it seems obvious that drinking is involved so they just mention the tea.
This experiment and similar ones indicate that people belonging to Asian cultures are often relation orientated. Asians focus on relationships in groups. Contrasting, the Americans concentrate on objects. The involved object and its features are more important than the object's relation to other objects. These two different ways of focusing shows that language is affected by culture.
A experiment which clearly shows these results is the mother-child interaction which was observed by Fernald and Morikawa in 1993. They studied mother-child talk of Asian and American mothers. An American mother trying to show and explain a car to her child often repeated the object "car" and wants the child to repeat it as well. The mother focuses on the features of the car and labels the importance of the object itself. The Asian mother shows the toy car to her child, gives the car to the child and wants it to give the car back. The mother shortly mentions that the object is a car but concentrates on the importance of the relation and the politeness of giving back the object.
Realising that there are plenty differences in how people of different cultures use language the question arises if languages affects the way people think and perceive the world.
9.11: What is the Connection between Language and Cognition
Sapir-Whorf Hypothesis
In the 1950s Edward Sapir and Benjamin Whorf proposed the hypothesis that the language of a culture affects the way people think and perceive. The controversial theory was question by Elenor Rosch who studied colour perception of Americans and Danis who are members of an stone-age agricultural culture in the Iran. Americans have several different categories for colour as for example blue, red, yellow and so on. Danis just have two main colour categories. The participants were ask to recall colours which were shown to them before. That experiment did not show significant differences in colour perception and memory as the Sapir-Whorf hypothesis presumes. File:Color-naming exp.jpg Color-naming experiment by Roberson et al. (2000)
Categorical Perception
Nevertheless a support for the Sapir-Whorf hypothesis was Debi Roberson's demonstration for categorical perception based on the colour perception experiment by Rosch. The participants, a group of English-speaking British and another group of Berinmos from New Guinea were ask to name colours of a board with colour chips. The Berinmos distinguish between five different colour categories and the denotation of the colour names is not equivalent to the British colour denotation. Apart from these differences there are huge differences in the organisation of the colour categories. The colours named green and blue by British participants where categorised as nol which also covers colours like light-green, yellow-green, and dark blue. Other colour categories differ similarly.
The result of Roberson's experiment was that it is easier for British people to discriminate between green and blue whereas Berinmos have less difficulties distinguishing between Nol and Wap. The reaction to colour is affected by language, by the vocabulary we have for denoting colours. It is difficult for people to distinguish colours from the same colour category but people have less trouble differentiating between colours from different categories. Both groups have categorical colour perception but the results for naming colours depends on how the colour categories were named. All in all it was shown that categorical perception is influenced by the language use of different cultures.
These experiments about perception and its relation to cultural language usage leads to the question whether thought is related to language with is cultural differences.
9.12: Is Thought Dependent on or even caused by Language
Historical theories
An early approach was proposed by J.B. Watson‘s in 1913. His peripheralist approach was that thought is a tiny not noticeable speech movement. While thinking a person performs speech movements as he or she would do while talking. A couple year later, in 1921 Wittgenstein poses the theory that the limits of a person's language mean the limits of that person's world. As soon as a person is not able to express a certain content because of a lack of vocabulary that person is not able to think about those contents as they are outside of his or her world. Wittgenstein's theory was doubted by some experiments with babies and deaf people.
Present research
To find some evidence for the theory that language and culture is affecting cognition Lian-hwang Chiu designed an experiment with American and Asian children. The children were asked to group objects in pairs so that these objects fit together. On picture that was shown to the children there was a cow, a chicken and some grass. The children had to decided which of the two objects fitted together. The American children mostly grouped cow and chicken because of group of animals they belong to. Asian children more often combined the cow with the grass as there is the relation of the cow normally eating grass.
In 2000 Chui repeated the experiment with words instead of pictures. A similar result was observed. The American children sorted their pairs taxonomically. Given the words "panda", "monkey" and "banana" American children paired "panda" and monkey". Chinese children grouped relationally. They put "monkey" with "banana". Another variation of this experiment was done with bilingual children. When the task was given in English to the children they grouped the objects taxonomically. A Chinese task caused a relational grouping. The language of the task clearly influenced on how to group the objects. That means language may affects the way people think.
The results of plenty experiments regarding the relation between language, culture and cognition let assume that culture affects language and cognition is affected by language.Our way of thinking is influenced by the way we talk and thought can occur without language but the exact relation between language and thought remains to be determined.
|
textbooks/socialsci/Psychology/Cognitive_Psychology/Cognitive_Psychology_and_Cognitive_Neuroscience_(Wikibooks)/09%3A_Comprehension/9.10%3A_Culture_and_Language.txt
|
Historical Review of Psycholinguistics & Neurolinguistics
Starting with philosophical approaches, the nature of human language has always been a topic of interest. In the 17th century Galileo described human language as the most important of human inventions. In the late 19th century psychologists began the scientific study of language: Wilhelm Wundt (founder of the first laboratory of psychology) saw language as the mechanism by which thoughts are transformed into sentences, and the observations of Wernicke and Broca (see chapter 9) were milestones in the study of language as a cognitive ability. In the early 1900s the behaviouristic view strongly influenced the study of language. In 1957 B. F. Skinner published his book "Verbal Behavior", in which he proposed that the learning of language can be seen as a mechanism of reinforcement. In the same year Noam Chomsky (quoted at the beginning of this chapter) published "Syntactic Structures", proposing that the ability to acquire language is somehow coded in the genes. That led him to the idea that the underlying basis of language is similar across cultures: there might be some kind of universal grammar as a base, independent of which language (including sign language) humans use. Chomsky later published a review of Skinner's "Verbal Behavior" in which he presented arguments against the behaviouristic view. Some scientists are still convinced that no mentalist approach of the kind Chomsky proposed is needed, but most now agree that human language has to be seen as a cognitive ability.
Current goals of Psycholinguistics
A natural language can be analysed at a number of different levels. In linguistics we distinguish between phonology (sounds), morphology (words), syntax (sentence structure), semantics (meaning), and pragmatics (use). Linguists try to find systematic descriptions capturing the regularities inherent in the language itself. But a description of natural language merely as an abstract structured system cannot be enough. Psycholinguists rather ask how the knowledge of language is represented in the brain, and how it is used. Today's most important research topics are:
1) comprehension: How humans understand spoken as well as written language, how language is processed and what interactions with memory are involved.
2) speech production: Both the physical aspect of speech production and the mental process that stands behind the uttering of a sentence.
3) acquisition: How people learn to speak and understand a language.
Characteristic features
What is a language? What kinds of languages exist? Are there characteristic features that are unique to human language?
There are plenty of approaches to describing languages. Especially in computational linguistics, researchers try to find formal definitions for different kinds of languages. But for psychology, aspects of language beyond its function as a pure system of communication are of central interest. Language is also a tool we use for social interactions, from the exchange of news up to the identification of social groups by their dialect, and we use it for expressing our feelings, thoughts, ideas, etc.
Although there are plenty of ways to communicate (consider non-human language below), humans expect their system of communication - the human language - to be unique. But what is it that makes human language so special and unique?
Four major criteria have been proposed by Professor Franz Schmalhofer of the University of Osnabrück, as explained below:
- semanticity
- displacement
- creativity
- structure dependency
Semanticity means the usage of symbols. Symbols can refer either to objects or to relations between objects. In human language, words are the basic form of symbols. For example, the word "book" refers to an object made of paper on which something might be written; a relation symbol is the verb "to like", which refers to the sympathy of somebody for something or someone.
The criterion of displacement means that not only objects or relations in the present can be described: there are also symbols which refer to objects in another time or place. The word "yesterday" refers to the day before, and objects mentioned in a sentence with "yesterday" belong to a time other than the present one. Displacement is about communicating events which have happened or will happen, and the objects belonging to those events.
Creativity is probably the most important feature. Given a range of symbols, these symbols can be combined in new ways, so our communication is not restricted to a fixed set of topics or predetermined messages: the combination of a finite set of symbols yields an infinite number of sentences and meanings, making the creation of novel messages possible. How creative human language is can be illustrated by some simple examples, like the process that creates verbs from nouns. New words can be created which did not exist before, yet we are able to understand them.
Examples:
leave the boat on the beach -> beach the boat
keep the aeroplane on the ground -> ground the aeroplane
write somebody an e-mail -> e-mail somebody
Creative systems are also found in other aspects of language, like the way sounds are combined to form new words; e.g. prab, orgu and zabi could be imagined as names for new products.
To avoid an arbitrary combination of symbols without any regular arrangement, "true" languages need structure dependency: when symbols are combined, their order matters, and a change in symbol order can change the meaning of the sentence. For example, "The dog bites the cat" obviously has a different meaning from "The cat bites the dog", based purely on the different word arrangement of the two sentences.
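The role of word order can be made concrete with a toy sketch. The following Python snippet is purely illustrative (the function and its minimal "grammar" are our own invention, not a model from the literature): it assigns semantic roles from position alone, so reordering the same symbols yields a different meaning.
```
# Toy illustration of structure dependency: in a rigid subject-verb-object
# (SVO) order, the position of a symbol determines its semantic role, so
# swapping "dog" and "cat" changes who does what to whom.

def interpret_svo(sentence):
    # Drop articles so "the dog bites the cat" -> ["dog", "bites", "cat"].
    words = [w for w in sentence.lower().rstrip(".").split() if w not in ("the", "a")]
    agent, action, patient = words  # assumes a simple transitive clause
    return {"agent": agent, "action": action, "patient": patient}

print(interpret_svo("The dog bites the cat"))  # dog is the agent
print(interpret_svo("The cat bites the dog"))  # same symbols, roles reversed
```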
Forms of Communication
As mentioned before, human language is just one of quite a number of forms of communication. Different forms of communication can be found in the animal world: from a little moth to a giant whale, all animals appear to make use of communication.
Humans are not the only ones who use facial expressions to stress utterances or feelings; facial expressions are also found among apes. "Smiling", for example, indicates cooperativeness and friendliness in both the human and the ape world, whereas an ape showing its teeth indicates a willingness to fight.
Posture is a very common communicative tool among animals. Lowering the front part of the body and extending the front legs is a dog's sign that it is playful, whereas lowering the full body is a dog's postural way of showing submissiveness. Postural communication is known in both human and non-human primates.
Besides facial expression, gesture and posture, which are found in human communication, there are other communicative devices which are either only subconsciously noticed by humans, like scent, or cannot be found among humans at all, like light, colour and electricity. Chemicals used for a communicative function are called pheromones; they are used to mark territory or to signal reproductive readiness. For animals, scent is a very important tool which dominates their mating behaviour. Humans are influenced in their mating behaviour by scent as well, but since other factors contribute to that behaviour, scent is not predominant.
Some insects, such as fireflies, use species-dependent light patterns to signal identity, sex and location, and the octopus changes colour to signal territorial defence and mating readiness. In the world of birds colour is widespread, too: the male peacock has colourful feathering to impress peahens as part of its mating behaviour. These ways of communication help animals live in a community and survive in a certain environment.
Characteristic Language Features in Animal Communication
As mentioned above, it is possible to describe the uniqueness of human language by four criteria (semanticity, displacement, creativity and structure dependency), which are important devices in human language for forming clear communication. To see whether these criteria exist in animal communication - i.e. whether animals possess a "true" language - several experiments with non-human primates were performed. Non-human primates were taught American Sign Language (ASL) and a specially developed token language to find out to what extent they are capable of linguistic behaviour. Can semanticity, displacement, creativity and structure dependency be found in non-human language?
Experiments
1. Human language: In 1948, in Orange Park, Florida, Keith and Cathy Hayes tried to teach English words to a chimpanzee named Viki, who was raised as if she were a human child. The chimpanzee was taught to "speak" simple English words like "cup". The experiment failed: with the supralaryngeal anatomy and vocal fold structure that chimpanzees have, it is impossible for them to produce human speech sounds. The failure of the Viki experiment made scientists wonder how far non-human primates are able to communicate linguistically.
2. Sign language: From 1965 to 1972 the first important evidence of rudiments of linguistic behaviour came from "Washoe", a young female chimpanzee. The experimenters Allen and Beatrice Gardner taught Washoe 130 signs of American Sign Language within three years. When shown pictures of a duck and asked WHAT THAT?, Washoe combined the symbols WATER and BIRD to create WATER BIRD, as she had not learned a sign for DUCK (words in capital letters refer to the signs the apes use to communicate with the experimenters).
It was claimed that Washoe was able to combine signs spontaneously and creatively. Some scientists criticised the Washoe ASL experiment, claiming that ASL is a loose communicative system which does not require strict syntactic rules. Because of this criticism, further experiments were developed and performed which focus on syntactic rules and structure dependency as well as on creative symbol combination.
A non-human primate named "Kanzi" was trained by Savage-Rumbaugh in 1990. Kanzi was able to deal with 256 geometric symbols and understood complex instructions like GET THE ORANGE THAT IS IN THE COLONY ROOM. The experimenter worked with rewards.
The question arose whether these non-human primates possessed human-like linguistic capacities or were merely trained to perform a certain action to get a reward.
For more detailed explanations of the experiments see The Mind of an Ape.
Can the characteristic language features be found in non-human communication?
Creativity seems to be present in animal communication, as Washoe, among others, showed with the creation of WATER BIRD for DUCK. Some critics, however, claimed that such creativity is often accidental, or that, as in the case of Washoe's WATER BIRD, the creation relies on the fact that water and a bird were both present; only because of this presence did Washoe invent the word WATER BIRD.
In the case of Kanzi a certain form of syntactic rule was observed: in 90% of Kanzi's sentences the invitation to play came first, followed by the type of game Kanzi wanted to play, as in CHASE HIDE, TICKLE SLAP and GRAB SLAP. A problem, however, is that it is not always easy to recognise the order of signs, since facial expressions and hand signs are often performed at the same time. One ape signed the sentence I LIKE COKE by hugging itself for "like" while simultaneously forming the sign for "coke" with its hands, so no order could be discerned in this sign sentence.
A certain structure dependency could be observed in Kanzi's active and passive sentences: when Matata, a fellow chimpanzee, was grabbed, Kanzi signed GRAB MATATA, and when Matata was performing an action such as biting, Kanzi produced MATATA BITE. Still, it has not been proved that symbolic behaviour is occurring. Although there is plenty of evidence that creativity and displacement occur in animal communication, some critics claim that this evidence can be traced back to drill and training: linguistic behaviour, they argue, cannot be proved, because the animals are more likely trained to use linguistic devices correctly. Apes show syntactic behaviour only to a small degree, and they are not able to produce sentences containing embedded structures. Some linguists claim that because of this lack of linguistic features, non-human communication cannot be a "true" language. Although we do not know the capacity of an ape's mind, the range of meanings observed in apes' lives in the wild does not seem to approach the capaciousness of the semanticity of human communication. Furthermore, apes do not seem to care much about displacement, as they apparently do not communicate about imaginary pasts or futures.
All in all, non-human primate communication, consisting of graded series of signals, shows little arbitrariness. The results with non-human primates led to a controversial discussion about linguistic behaviour, with many researchers claiming that the results were an effect of training.
For humans, language is a communication form suited to the patterns of human life. Other communication systems are better suited to other creatures and their modes of existence.
Now that we know that there is a difference between animal communication and human language, we will look at the features of human language in detail.
Language features – Syntax and Semantics
In this chapter the main question will be: how do we understand sentences? To find an answer, it is necessary to have a closer look at the structure of languages. The most important properties every human language provides are rules which determine the permissible sentences, and a hierarchical structure (phonemes as basic sounds, which constitute words, which in turn constitute phrases, which constitute sentences, which constitute texts). These features of a language enable humans to create new, unique sentences. The fact that all human languages have a common ground, even though they developed completely independently from one another, may lead to the conclusion that the ability to process language is innate. Further evidence for an inborn universal grammar comes from observations of deaf children who were not taught a language and developed their own form of communication exhibiting the same basic constituents. Two basic abilities humans need in order to communicate are interpreting the syntax of a sentence and knowing the meaning of single words; in combination, these enable them to understand the semantics of whole sentences. Many experiments have been done to find out how humans carry out syntactic and semantic interpretation and how syntax and semantics work together to construct the right meaning of a sentence. Physiological experiments have been done in which, for example, the event-related potential (ERP) in the brain was measured, as well as behavioristic experiments using mental chronometry, the measurement of the time-course of cognitive processes. Physiological experiments showed that the syntactic and the semantic interpretation of a sentence take place separately from each other. These results will be presented below in more detail.
Physiological Approach
Semantics
Semantic incorrectness in a sentence evokes an N400 in the ERP.
The exploration of semantic sentence processing can be done by measuring the event-related potential (ERP) evoked by a semantically correct sentence in comparison with a semantically incorrect one. For example, in one experiment the reactions to three sentences were compared:
Semantically correct: "The pizza was too hot to eat."
Semantically wrong: "The pizza was too hot to drink."
Semantically wrong: "The pizza was too hot to cry."
In such experiments the ERP evoked by the correct sentence is taken to show ordinary sentence processing. The variations in the ERP for the incorrect sentences, in contrast to the ERP for the correct sentence, show at what time the mistake is recognized. In the case of semantic incorrectness, a strong negative signal was observed about 400 ms after the critical word was perceived, which did not occur if the sentence was semantically correct. These effects were observed mainly in the parietal and central areas. Evidence was also found that the N400 is stronger the less well the word fits semantically: the word "drink", which fits the context a little better, caused a weaker N400 than the word "cry". That means the intensity of the N400 correlates with the degree of the semantic mistake; the more difficult the search for a semantic interpretation of a sentence, the higher the N400 response.
Syntax
Syntactic incorrectness in a sentence can evoke an ELAN (early left anterior negativity) in the electrodes above the left frontal lobe after 120 ms.
To examine the syntactic aspects of sentence processing, an experiment quite similar to the one on semantic processing was done. Syntactically correct and incorrect sentences were used, such as (correct) "The cats won't eat…" and (incorrect) "The cats won't eating…". When a syntactically incorrect sentence is heard or read, in contrast to a syntactically correct sentence, the ERP changes significantly at two different points in time. First, there is a very early increased response to syntactic incorrectness after 120 ms. This signal is called the 'early left anterior negativity' because it occurs mainly in the left frontal lobe. This suggests that syntactic processing is located, among other places, in Broca's area, which lies in the left frontal lobe. The early response also indicates that syntactic mistakes are detected earlier than semantic mistakes.
The other change in the ERP when a syntactically wrong sentence is perceived occurs after 600 ms in the parietal lobe. The signal is a positive deflection and is therefore called the P600. Possibly this late positive signal reflects the attempt to reconstruct the grammatically problematic sentence to find a possible interpretation.
Syntactic incorrectness in a sentence evokes after 600 ms a P600 in the electrodes above the parietal lobe.
To summarize the three important ERP components: first the ELAN occurs at the left frontal lobe, showing a violation of syntactic rules; then follows the N400 in central and parietal areas as a reaction to semantic incorrectness; and finally a P600 occurs in the parietal area, which probably reflects a reanalysis of the wrong sentence.
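How such components are quantified can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the pipeline of any study cited here: it grand-averages simulated single-trial EEG epochs time-locked to the critical word and reports the mean voltage in rough, textbook-style time windows for the ELAN, N400 and P600. All numbers (sampling rate, window edges, noise level) are invented for the example.
```
import numpy as np

def mean_amplitude(trials, times, window):
    """Average over trials, then over the samples falling inside `window` (seconds)."""
    erp = trials.mean(axis=0)                         # grand average across trials
    mask = (times >= window[0]) & (times < window[1])
    return erp[mask].mean()

# Simulated data: 40 trials, 1-second epochs sampled at 500 Hz.
rng = np.random.default_rng(0)
times = np.arange(0, 1.0, 1 / 500)
trials = rng.normal(0.0, 5.0, size=(40, times.size))     # background EEG noise (microvolts)
trials -= 3.0 * np.exp(-((times - 0.4) ** 2) / 0.005)    # inject an N400-like negative dip

for name, window in [("ELAN", (0.10, 0.20)), ("N400", (0.30, 0.50)), ("P600", (0.50, 0.80))]:
    print(f"{name}: {mean_amplitude(trials, times, window):+.2f} microvolts")
```
Because only an N400-like dip is injected, the N400 window shows a clearly negative mean while the other two windows hover around zero, which is the basic logic of comparing component amplitudes across conditions.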
Behavioristic Approach – Parsing a Sentence
Behavioristic experiments on how human beings parse a sentence often use syntactically ambiguous sentences, because the sentence-analysing mechanisms, called parsing, are easier to observe with sentences whose meaning we cannot constitute automatically. There are two different theories about how humans parse sentences: the syntax-first approach claims that syntax plays the main part and semantics only a supporting role, whereas the interactionist approach states that syntax and semantics work together to determine the meaning of a sentence. Both theories are explained below in more detail.
The Syntax-First Approach of Parsing
The syntax-first approach concentrates on the role of syntax in parsing a sentence. That humans infer the meaning of a sentence with the help of its syntactic structure (Kako and Wagner 2001) can easily be seen by considering Lewis Carroll's poem 'Jabberwocky':
"Twas brillig, and the slithy toves Did gyre and gimble in the wabe: All mimsy were the borogoves, And the mome raths outgrabe."
Although most of the words in the poem have no meaning, one may ascribe at least some sense to it because of its syntactic structure.
Many different syntactic rules are used when parsing a sentence. One important rule is the principle of late closure, which means that a person assumes each new word he perceives to be part of the current phrase. That this principle is used in parsing can be seen very well with the help of a so-called garden-path sentence. Experiments with garden-path sentences were done by Frazier and Rayner (1982). One example of a garden-path sentence is: "Because he always jogs a mile seems a short distance to him." When reading this sentence, one first wants to continue the phrase "Because he always jogs" by adding "a mile" to it, but on reading further one realizes that the words "a mile" are the beginning of a new phrase. This shows that we parse a sentence by trying to add new words to a phrase for as long as possible. Garden-path sentences show that we use the principle of late closure as long as it makes syntactic sense to add a word to the current phrase; when the sentence starts to become incorrect, semantics is often used to rearrange it. The syntax-first approach does not disregard semantics: according to this approach, we use syntax first to parse a sentence, and semantics is used later on to make sense of it.
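The late-closure preference can be mimicked with a toy incremental parser. The sketch below is deliberately minimal and entirely our own construction (the attachment test is hand-written for this one example sentence, not a general grammar): it attaches each incoming word to the open phrase whenever possible and is forced to close the phrase only at "seems", exactly the point where readers of the garden-path sentence must reanalyse "a mile".
```
# Toy late-closure parser: keep attaching words to the open phrase, close it
# only when attachment is impossible. A full parser would then also move
# "a mile" into the new phrase on reanalysis; here we only show where the
# initial (garden-path) parse breaks down.

FINITE_VERBS = {"jogs", "seems"}  # hand-listed for this one example

def fits_current_phrase(phrase, word):
    # A second finite verb cannot join a phrase that already contains one.
    return not (word in FINITE_VERBS and any(w in FINITE_VERBS for w in phrase))

def parse_late_closure(words):
    phrases, current = [], []
    for word in words:
        if current and not fits_current_phrase(current, word):
            phrases.append(current)  # forced closure: the garden-path moment
            current = []
        current.append(word)         # late closure: prefer attachment
    phrases.append(current)
    return phrases

sentence = "because he always jogs a mile seems a short distance to him"
print(parse_late_closure(sentence.split()))
# [['because', 'he', 'always', 'jogs', 'a', 'mile'],
#  ['seems', 'a', 'short', 'distance', 'to', 'him']]
```
Note that "a mile" ends up in the first phrase: that is the initial misparse the heuristic predicts, and the reason readers experience a momentary breakdown at "seems".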
Apart from experiments which show how syntax is used in parsing sentences, there were also experiments on how semantics can influence sentence processing. One important experiment on this issue was done by Daniel Slobin in 1966. He showed that passive sentences are understood faster if the semantics of the words allow only one subject to be the actor. Sentences like "The horse was kicked by the cow." and "The fence was kicked by the cow." are grammatically equal, and in both cases only one syntactic parsing is possible. Nevertheless, the first sentence semantically provides two possible actors, and therefore it takes longer to parse. By measuring this significant difference, Daniel Slobin showed that semantics, too, plays an important role in parsing a sentence.
The Interactionist Approach of Parsing
The interactionist approach ascribes a more central role to semantics in parsing a sentence. In contrast to the syntax-first approach, the interactionist theory claims that syntax is not used first, but that semantics and syntax are used simultaneously to parse the sentence and work together in clarifying the meaning. Several experiments have provided evidence that semantics is taken into account from the very beginning of reading a sentence. Most of these experiments work with eye-tracking techniques and compare the time needed to read syntactically equal sentences in which critical words cause or prevent ambiguity through their semantics. One of these experiments was done by John Trueswell and coworkers in 1994. They measured the eye movements of persons reading the following two sentences:
The defendant examined by the lawyer turned out to be unreliable.
The evidence examined by the lawyer turned out to be unreliable.
Trueswell observed that reading the words "by the lawyer" took longer in the first sentence, because there the semantics initially allow an interpretation in which the defendant is the one who examines, while the evidence can only be examined. This experiment shows that semantics also plays a role while the sentence is being read, which supports the interactionist approach and argues against the theory that semantics is only used after a sentence has been parsed syntactically.
Inference Creates Coherence
Coherence is the semantic relation of information in different parts of a text to each other. In most cases coherence is achieved by inference: the reader draws information out of a text that is not explicitly stated in it. For further information see the chapter Neuroscience of Text Comprehension.
Situation Model
A situation model is a mental representation of what a text is about. This approach proposes that the mental representation people form as they read a story does not encode information about phrases, sentences or paragraphs, but is instead a representation in terms of the people, objects, locations and events described in the story (Goldstein, 2005, p. 374).
For a more detailed description of situation models, see the chapter Situation Models and Inferencing.
9.16: Using Language
Conversations are dynamic interactions between two or more people (Garrod & Pickering, 2004, as cited in Goldstein, 2005). The important thing to mention is that conversation is more than the act of speaking: each person brings in his or her knowledge, and conversations are much easier to process if the participants bring in shared knowledge. In this way, participants are responsible for how they bring in new knowledge. H. P. Grice proposed in 1975 a basic principle of conversation and four "conversational maxims". His cooperative principle states that "the speaker and listener agree that the person speaking should strive to make statements that further the agreed goals of conversation." The four maxims state how to achieve this principle:
1. Quantity: Be as informative as required; neither over- nor under-inform.
2. Quality: Do not say things which you believe to be false or lack evidence for.
3. Manner: Avoid being obscure or ambiguous.
4. Relevance: Stay on the topic of the exchange.
An example of a rule of conversation incorporating three of those maxims is the given-new contract. It states that the speaker should construct sentences so that they include both given and new information (Haviland & Clark, 1974, as cited in Goldstein, 2005). The consequences of not following this rule were demonstrated by Susan Haviland and Herbert Clark, who presented pairs of sentences (either following or ignoring the given-new contract) and measured the time participants needed until they fully understood the second sentence. They found that participants needed longer with pairs of the type:
```
We checked the picnic supplies.
The beer was warm.
```
rather than:
```
We got some beer out of the trunk.
The beer was warm.
```
It took longer to comprehend the second sentence of the first pair because an inference has to be drawn: the beer was not mentioned as being part of the picnic supplies (Goldstein, 2005, pp. 377-378).
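The logic of the measurement can be sketched as follows. This is a hypothetical mini-analysis, not Haviland and Clark's data: the reading times are invented, and only the shape of the comparison (mean comprehension time with vs. without a required inference) reflects the study.
```
# Compare mean comprehension times (ms) for "The beer was warm." after a
# context that does or does not introduce the beer. All values are made up.
from statistics import mean

inference_needed = [1160, 1230, 1105, 1290, 1180]  # after "We checked the picnic supplies."
given_directly   = [980, 1010, 940, 1050, 990]     # after "We got some beer out of the trunk."

slowdown = mean(inference_needed) - mean(given_directly)
print(f"Mean slowdown when an inference is required: {slowdown:.0f} ms")
```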
9.17: Language, Culture and Cognition
In the parts above we saw that there has been a lot of research on language, from letters through words and sentences to whole conversations. Most of this research was carried out by English-speaking researchers with English-speaking participants. Can those results be generalised to all languages and cultures, or might there be a difference between English-speaking cultures and, for example, cultures of Asian or African origin?
Imagine our young man from the beginning again: Knut! Now he has to prepare a presentation with his friend Chang for the next psychology seminar. Knut arrives at his friend's flat and enters the living-room, glad that he made it there just in time. They have been working for a few minutes when Chang says: "It has become cold in here!" Knut remembers that he did not close the door, stands up and... stop! What is happening here?!
This part is concerned with culture and its connection to language. Culture is meant here not necessarily in the sense of "high culture" like music, literature and the arts; rather, culture is the "know-how" a person must have to tackle his or her daily life. This know-how might include high culture, but it need not.
Culture and Language
Scientists wondered to what extent culture affects the way people use language. In 1991 Yum studied the indirectness of statements in Asian and American conversations. The request "Please shut the door" was formulated by Americans in an indirect way: they might say something like "The door is open" to signal that they want the door to be shut. Asian speakers are even more indirect: they often do not even mention the door, but might say something like "It is somewhat cold today". Another cultural difference affecting the use of language was observed by Nisbett in 2003, in observations of the way people pose questions. When American speakers ask someone if more tea is wanted, they ask something like "More tea?". Asian speakers, in contrast, would ask whether the other person would like to drink more, since for Asians it seems obvious that tea is involved, and mentioning the tea would therefore be redundant. For Americans it is the other way round: for them it seems obvious that drinking is involved, so they just mention the tea.
This experiment and similar ones indicate that people belonging to Asian cultures are often relation-orientated: Asians focus on relationships between members of a group. Americans, by contrast, concentrate on objects: the involved object and its features are more important than the object's relations to other objects. These two different ways of focusing show that language use is affected by culture.
An experiment which clearly shows these results is the mother-child interaction observed by Fernald and Morikawa in 1993, who studied the mother-child talk of Asian and American mothers. An American mother trying to show and explain a car to her child often repeated the word "car" and wanted the child to repeat it as well; the mother focused on the features of the car and stressed the importance of the object itself. The Asian mother showed the toy car to her child, gave the car to the child and asked for it back; she briefly mentioned that the object is a car, but concentrated on the importance of the relation and the politeness of giving the object back.
Realising that there are plenty of differences in how people of different cultures use language, the question arises whether language affects the way people think and perceive the world.
What is the connection between language and cognition?
Sapir-Whorf Hypothesis
In the 1950s Edward Sapir and Benjamin Whorf proposed the hypothesis that the language of a culture affects the way its speakers think and perceive. The controversial theory was questioned by Eleanor Rosch, who studied the colour perception of Americans and of the Dani, members of a Stone Age agricultural culture in Irian Jaya, New Guinea. Americans have several different colour categories, for example blue, red, yellow and so on; the Dani have just two main colour categories. The participants were asked to recall colours which had been shown to them before. The experiment did not show the significant differences in colour perception and memory that the Sapir-Whorf hypothesis would predict.
Color-naming experiment by Roberson et al. (2000)
Categorical Perception
Nevertheless, support for the Sapir-Whorf hypothesis came from Debi Roberson's demonstration of categorical perception, based on Rosch's colour perception experiment. The participants, a group of English-speaking British and a group of Berinmo speakers from New Guinea, were asked to name the colours of chips on a board. The Berinmo distinguish five colour categories, and the denotations of their colour names are not equivalent to the British colour denotations. Beyond these differences, there are large differences in the organisation of the colour categories: the colours named green and blue by British participants were categorised as nol, which also covers colours like light green, yellow-green and dark blue. Other colour categories differ similarly.
The result of Roberson's experiment was that it is easier for British people to discriminate between green and blue, whereas the Berinmo have less difficulty distinguishing between nol and wap. The reaction to colour is affected by language, by the vocabulary we have for denoting colours: it is difficult for people to distinguish colours from the same colour category, but they have less trouble differentiating colours from different categories. Both groups show categorical colour perception, but the results for naming colours depend on how the colour categories are drawn. All in all, it was shown that categorical perception is influenced by the language use of different cultures.
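The categorical-perception claim can be illustrated with a small simulation. The sketch below is our own toy model, not Roberson's stimuli or analysis: hues are points on an abstract 0-1 scale, each language is reduced to the position of a single category boundary, and perception is simulated as the true hue plus Gaussian noise. The prediction it reproduces is that two chips with the same physical separation are told apart more reliably when a category boundary falls between them.
```
import numpy as np

rng = np.random.default_rng(1)

def discrimination_rate(hue_a, hue_b, boundaries, noise=0.08, trials=10_000):
    """Chips count as 'told apart' when their noisy hues land in different categories."""
    categorize = lambda h: np.digitize(h, boundaries)
    a = categorize(hue_a + rng.normal(0.0, noise, trials))
    b = categorize(hue_b + rng.normal(0.0, noise, trials))
    return (a != b).mean()

english_boundaries = [0.50]   # a single green/blue-style boundary at 0.5
berinmo_boundaries = [0.65]   # the same region of hue space split differently

# Same physical separation (0.1), once straddling a boundary, once not:
print("English, cross-category:  ", discrimination_rate(0.45, 0.55, english_boundaries))
print("English, within-category: ", discrimination_rate(0.60, 0.70, english_boundaries))
print("Berinmo, cross-category:  ", discrimination_rate(0.60, 0.70, berinmo_boundaries))
```
The same physical pair (0.60, 0.70) is discriminated poorly under the English boundary but well under the Berinmo one, mirroring the nol/wap advantage described above.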
These experiments on perception and its relation to cultural language usage lead to the question whether thought is related to language, with its cultural differences.
Is thought dependent on, or even caused by language?
Historical theories
An early approach was proposed by J. B. Watson in 1913. His peripheralist approach held that thought consists of tiny, imperceptible speech movements: while thinking, a person performs the speech movements he or she would perform while talking. A few years later, in 1921, Wittgenstein proposed the theory that the limits of a person's language mean the limits of that person's world: as soon as a person is unable to express a certain content for lack of vocabulary, that person is unable to think about that content, as it lies outside his or her world. Wittgenstein's theory was cast into doubt by experiments with babies and deaf people.
Present research
To find evidence for the theory that language and culture affect cognition, Lian-hwang Chiu designed an experiment with American and Asian children. The children were asked to group objects in pairs so that the objects fit together. One picture shown to the children contained a cow, a chicken and some grass, and the children had to decide which two objects fitted together. The American children mostly grouped cow and chicken, because both belong to the group of animals. Asian children more often combined the cow with the grass, based on the relation that a cow normally eats grass.
In 2000 Chiu repeated the experiment with words instead of pictures, with a similar result. The American children sorted their pairs taxonomically: given the words "panda", "monkey" and "banana", American children paired "panda" and "monkey". Chinese children grouped relationally: they put "monkey" with "banana". Another variation of this experiment was done with bilingual children: when the task was given in English, the children grouped the objects taxonomically; a Chinese task caused relational grouping. The language of the task clearly influenced how the objects were grouped. That means language may affect the way people think.
The results of many experiments on the relation between language, culture and cognition suggest that culture affects language and that cognition is affected by language. Our way of thinking is influenced by the way we talk, and thought can occur without language, but the exact relation between language and thought remains to be determined.
9.18: References
1. E. B. Goldstein, "Cognitive Psychology - Connecting Mind, Research, and Everyday Experience" (2005), page 346
Books
• O'Grady, W.; Dobrovolsky, M.; Katamba, F.: Contemporary Linguistics. Copp Clark Pitman Ltd. (1996)
• Banich, M. T.: Neuropsychology. The Neural Bases of Mental Function. (1997)
• Goldstein, E. B.: Cognitive Psychology: Connecting Mind, Research and Everyday Experience. (2005)
• Akmajian, A.; Demers, R. A.; Farmer, A. K.; Harnish, R. M.: Linguistics - An Introduction to Language and Communication, Fifth Edition. The MIT Press, Cambridge, Massachusetts / London, England (2001)
• Yule, G.: The Study of Language, Second Edition. Cambridge University Press (1996)
• Premack, D.; Premack, A. J.: The Mind of an Ape. W. W. Norton & Co Ltd. (1984)
Journals
• MacCorquodale, K.: On Chomsky's Review of Skinner's Verbal Behavior. Journal of the Experimental Analysis of Behavior (1970), Vol. 13, Nr. 1, pp. 83-99
• Stemmer, N.: Skinner's Verbal Behavior, Chomsky's Review, and Mentalism. Journal of the Experimental Analysis of Behavior (1990), Vol. 54, Nr. 3, pp. 307-315
• Chomsky, N.: Collateral Language. TRANS, Internet Journal for Cultural Sciences (2003), Nr. 15
What is happening inside my head when I listen to a sentence? How do I process written words? This chapter will take a closer look on brain processes concerned with language comprehension. Dealing with natural language understanding, we distinguish between the neuroscientific and the psycholinguistic approach. As text understanding spreads through the broad field of cognitive psychology, linguistics, and neurosciences, our main focus will lay on the intersection of two latter, which is known as neurolinguistics.
Different brain areas need to be examined in order to find out how words and sentences are processed. For a long time, scientists were restricted to drawing conclusions about the functions of brain areas from lesions to them. During the last 40 years, techniques for brain imaging and ERP measurement have been established which allow a more accurate identification of the brain regions involved in language processing.
Scientific studies of these phenomena are generally divided into research on auditory and on visual language comprehension; we will discuss both. Note that it is not enough to examine English: to understand language processing in general, we also have to look at non-Indo-European languages and at other language systems such as sign language. But first of all we will be concerned with a rough localization of language in the brain.
10.02: Lateralization of Language
Although functional lateralization studies find that individual differences in personality or cognitive style do not favor one hemisphere or the other, some brain functions do occur predominantly on one side of the brain. Language tends to be on the left and attention on the right (Nielsen, Zielinski, Ferguson, Lainhart & Anderson, 2013). There is a lot of evidence that each brain hemisphere has its own distinct functions in language comprehension. Most often, the right hemisphere is referred to as the non-dominant hemisphere and the left as the dominant hemisphere. This distinction is called lateralization (from the Latin lateralis, "of the side"), and the first evidence for it came from experiments with split-brain patients. Following a top-down approach, we first discuss the right hemisphere, which might play the major role in higher-level comprehension but is not yet well understood. Much research has been done on the left hemisphere; we will discuss why it might be dominant before the following sections turn to the fairly well-understood fundamental processing of language in that hemisphere.
Functional asymmetry
Anatomical differences between left and right hemisphere
Initially we consider the most apparent aspect of a differentiation between left and right hemisphere: their differences in shape and structure. As is visible to the naked eye, there is a clear asymmetry between the two halves of the human brain: the right hemisphere typically has a bigger, wider and farther extended frontal region than the left hemisphere, whereas the left hemisphere is bigger, wider and extends farther in its occipital region (M. T. Banich, "Neuropsychology", ch. 3, p. 92). A certain part of the temporal lobe's surface, the planum temporale, is significantly larger on the left side in most human brains. It is localized near Wernicke's area and other auditory association areas, so we may already speculate that the left hemisphere is more strongly involved in processes of language and speech.
In fact, such left laterality of language functions is evident in 97% of the population (D. Purves, "Neuroscience", ch. 26, p. 649). But the percentage of human brains in which a "left dominance" of the planum temporale is traceable is only 67% (D. Purves, "Neuroscience", ch. 26, p. 648). Which other factors play a role remains unsolved.
Evidence for functional asymmetry from "split brain" patients
In severe cases of epilepsy, a rarely performed but well-known surgical method to reduce the frequency of epileptic seizures is the so-called corpus callosotomy: a radical cut through the connecting "communication bridge" between the right and left hemisphere, the corpus callosum. The result is a "split brain". For patients whose corpus callosum is cut, the risk of accidental physical injury is mitigated, but the side effect is striking: because of this complete transection, the two halves of the brain are no longer able to communicate adequately. This situation provides the opportunity to study the differentiation of functionality between the hemispheres. The first experiments with split-brain patients were performed by Roger Sperry and his colleagues at the California Institute of Technology in the 1960s and 1970s (D. Purves, "Neuroscience", ch. 26, p. 646). They led researchers to sweeping conclusions about the laterality of speech and the organization of the human brain in general.
A digression on the laterality of the visual system
Visual system
A visual stimulus, located within the left visual field, projects onto the nasal (inner) part of the left eye’s retina and onto the temporal (outer) part of the right eye’s retina. Images on the temporal retinal region are processed in the visual cortex of the same side of the brain (ipsilateral), whereas nasal retinal information is mapped onto the opposite half of the brain (contralateral).
The stimulus within the left visual field thus arrives entirely in the right visual cortex to be processed. In "healthy" brains this information also reaches the left hemisphere via the corpus callosum and can be integrated there. In split-brain patients this flow of signals is interrupted; the stimulus remains "invisible" to the left hemisphere.
Split Brain Experiments
The experiment we consider now is based on the laterality of the visual system: what is seen in the left half of the visual field is processed in the right hemisphere, and vice versa. Exploiting this principle, a test operator presents the picture of an object to one half of the visual field, while the participant is instructed to name the object seen and to pick it out, blindly, from a set of real objects with the hand contralateral to the stimulated visual field.
It can be shown that a picture, for example the drawing of a die, which has been presented only to the left hemisphere, can be named by the participant ("I saw a die") but cannot be picked out with the left hand (no idea which object on the table to choose). Conversely, the participant is unable to name the die if it was recognized in the right hemisphere, but easily picks it out of the heap of objects with the left hand.
These outcomes are clear evidence of the human brain's functional asymmetry. The left hemisphere dominates functions of speech and language processing but cannot handle spatial tasks such as vision-independent object recognition. The right hemisphere dominates spatial functions but cannot process words and meanings independently. A second experiment showed that a split-brain patient can follow a written command (like "Get up now!") only if it is presented to the left hemisphere; the right hemisphere can only "understand" pictorial instructions.
The following table (D. Purves, "Neuroscience", ch.26, pg.647) gives a rough distinction of functions:
Left Hemisphere:
• analysis of the right visual field
• language processing
• writing
• speech

Right Hemisphere:
• analysis of the left visual field
• visuospatial tasks
• object and face recognition
First, it is important to keep in mind that these distinctions describe only functional dominances, not exclusive competences. In cases of unilateral brain damage, one half of the brain often takes over tasks of the other. Furthermore, the experiment works only for stimuli presented for less than a second, because not only the corpus callosum but also some subcortical commissures serve interhemispheric transfer. In general, both hemispheres contribute simultaneously to performance, playing complementary roles in processing.
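The routing logic of these experiments is compact enough to capture in a few lines of code. The following Python sketch is our own illustration (the names and structure are hypothetical, not from the source); it encodes only the three rules stated above: contralateral projection of the visual fields, left-lateralized speech, and contralateral control of the hands.

# A minimal sketch (not from the source) of the split-brain experiment's logic.
# The rules follow the text above; everything else is an illustrative choice.

def process_stimulus(visual_field, corpus_callosum_intact):
    """Return which hemisphere(s) receive a briefly flashed stimulus."""
    # Contralateral projection: left visual field -> right hemisphere, and vice versa.
    receiving = "right" if visual_field == "left" else "left"
    if corpus_callosum_intact:
        return {"left", "right"}  # information crosses via the corpus callosum
    return {receiving}            # split brain: information stays on one side

def can_name(hemispheres):
    # Speech is lateralized to the left hemisphere in most people.
    return "left" in hemispheres

def can_pick_with(hand, hemispheres):
    # Each hand is controlled by the contralateral hemisphere.
    controller = "right" if hand == "left" else "left"
    return controller in hemispheres

# Die flashed in the right visual field of a split-brain patient:
seen_by = process_stimulus("right", corpus_callosum_intact=False)
print(can_name(seen_by))               # True  -> "I saw a die"
print(can_pick_with("left", seen_by))  # False -> the left hand cannot find it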
A digression on handedness
An important issue when exploring differences in brain organization is handedness, the tendency to use the left or the right hand to perform activities. Throughout history, left-handers, who make up only about 10% of the population, have often been considered abnormal. They were said to be evil, stubborn and defiant, and were, even until the mid-20th century, forced to write with their right hand.
The most commonly accepted idea as to how handedness relates to the hemispheres is the brain hemisphere division of labour. Since both speaking and handiwork require fine motor skills, the presumption is that it is more efficient to have one hemisphere do both rather than divide them up. Since in most people the left side of the brain controls speaking, right-handedness predominates. The theory also predicts that left-handed people have a reversed division of labour.
In right-handers, verbal processing is mostly done in the left hemisphere and visuospatial processing in the right: about 95% of right-handers control speech output with the left hemisphere, and only 5% with the right. Left-handers, on the other hand, have a heterogeneous brain organization: their brains are organized in the same way as right-handers', in the opposite way, or such that both hemispheres contribute to verbal processing. Typically, in about 70% of left-handers speech is controlled by the left hemisphere, in 15% by the right, and in 15% by either hemisphere. Averaged across all types of left-handedness, it appears that left-handers are less lateralized.
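As a back-of-the-envelope check, the population-wide rate of left-hemisphere speech dominance can be computed from these handedness-specific figures. The 90/10 split between right- and left-handers is taken from the paragraph above; the result lands close to, though a little below, the 96–97% figures cited elsewhere in this chapter, which rest on different samples.

# Share of the whole population with left-hemisphere speech dominance,
# combining the chapter's handedness-specific rates (assuming ~90% of
# people are right-handed, as stated in this digression).
right_handed, left_handed = 0.90, 0.10
p_left_speech = right_handed * 0.95 + left_handed * 0.70
print(round(p_left_speech, 3))  # 0.925, i.e. roughly 93%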
When damage occurs to the left hemisphere, for example, the resulting visuospatial deficit is usually more severe in left-handers than in right-handers. Such dissimilarities may derive, in part, from differences in brain morphology, as suggested by asymmetries in the planum temporale. Still, it can be assumed that left-handers have less division of labour between their two hemispheres than right-handers do and are more likely to lack neuroanatomical asymmetries.
There have been many theories about why people are left-handed and what the consequences may be. Some hold that left-handers have a shorter life span, higher accident rates, or more autoimmune disorders. According to the theory of Geschwind and Galaburda, sex hormones, the immune system, and profiles of cognitive abilities are related to whether a person is left-handed or not. In addition, many genetic models have been proposed, yet the causes and consequences of left-handedness still remain a mystery (M. T. Banich, "Neuropsychology", ch. 3, p. 119).
The right hemisphere
The role of the right hemisphere in text comprehension
The experiments with "split-brain" patients, and evidence to be discussed shortly, suggest that the right hemisphere is usually not dominant in language comprehension (though it is in some cases, e.g. in about 15% of left-handers). What is most often ascribed to the right hemisphere is broader cognitive functioning. When this part of the brain is damaged, or when temporal regions of the right hemisphere are removed, cognitive-communication problems can result, such as impaired memory, attention problems and poor reasoning (L. Cherney, 2001). Investigations lead to the conclusion that the right hemisphere processes information in a gestalt and holistic fashion, with a special emphasis on spatial relationships. This gives it an advantage in tasks such as differentiating two distinct faces, because it examines things in a global manner; it also responds to lower spatial, and also auditory, frequencies. The right hemisphere is not devoid of linguistic abilities, however: it is capable of reading most concrete words and can make simple grammatical comparisons (M. T. Banich, "Neuropsychology", ch. 3, p. 97). But in order to function in such a way, there must be some sort of communication between the brain halves.
Prosody - the sound envelope around words
Consider how differently the simple statement "She did it again" can be interpreted in the following context, taken from Banich:

LYNN: Alice is way into this mountain-biking thing. After breaking her arm, you'd think she'd be a little more cautious. But then yesterday, she went out and rode Captain Jack's. That trail is gnarly – narrow with lots of tree roots and rocks. And last night, I heard that she took a bad tumble on her way down.

SARA: She did it again.

Does Sara say this with rising pitch, or emphatically with falling intonation? In the first case she is asking whether Alice has injured herself again; in the second she is asserting something she knows or imagines: that Alice managed to hurt herself a second time. Obviously the sound envelope around words – prosody – matters.
Reason to believe that the recognition of prosodic patterns takes place in the right hemisphere arises when you consider patients with damage to an anterior region of the right hemisphere. They suffer from aprosodic speech: their utterances are all at the same pitch; they might sound like a robot from the '80s. Another phenomenon arising from brain damage is dysprosodic speech, in which the patient speaks with disordered intonation. This is not due to a right-hemisphere lesion but arises from damage to the left hemisphere: the left hemisphere delivers ill-timed prosodic cues to the right hemisphere, and proper intonation suffers.
Beyond words: Inference from a neurological point of view
On the word level, current studies are mostly consistent with each other and with findings from brain lesion studies. But when it comes to the more complex understanding of whole sentences, texts and storylines, the findings are split. According to E. C. Ferstl's review "The Neuroanatomy of Text Comprehension: What's the story so far?" (2004), there is evidence both for and against right-hemisphere regions playing the key role in pragmatics and text comprehension. On the current state of knowledge, we cannot say exactly how and where cognitive functions like building situation models and inferencing work together with "pure" language processes.
As this chapter is concerned with the neurology of language, it should be remarked that patients with right-hemisphere damage have difficulties with inferencing. Consider the following sentence:
With mosquitoes, gnats, and grasshoppers flying all about, she came across a small black bug that was being used to eavesdrop on her conversation.
You might have to reinterpret the sentence until you realize that "small black bug" refers not to an animal but to a spy device. People with damage to the right hemisphere have problems doing so. They have difficulty following the thread of a story and making inferences about what has been said. Furthermore, they have a hard time understanding non-literal aspects of sentences such as metaphors; they might be genuinely horrified to hear that someone was "crying her eyes out".
The reader is referred to the next chapter for a detailed discussion of situation models.
The left hemisphere
Further evidence for left hemisphere dominance: The Wada technique
Before we turn to the concrete functionality of the left hemisphere, further evidence for its dominance is provided by the so-called Wada technique, which tests which hemisphere is responsible for speech output and is usually applied to epilepsy patients before surgery. It is not a brain imaging technique; rather, it simulates a brain lesion. One of the hemispheres is anesthetized by injecting a barbiturate (sodium amobarbital) into one of the patient's carotid arteries. The patient is then asked to name a number of items on cards. If he cannot, despite having been able to do so an hour earlier, the anesthetized hemisphere is the one responsible for speech output. The test must be done for both sides, since there is a chance that the patient produces speech bilaterally. The probability of that is not very high: according to Rasmussen & Milner (as referred to in Banich, p. 293), it occurs in only 15% of left-handers and in no right-handers. (It is still unclear where these differences in left-handers' brains come from.)
That means that in most people only one hemisphere "produces" speech output – and in 96% of right-handers and 70% of left-handers it is the left one. The findings of the brain lesion studies about asymmetry are confirmed here: normally (in healthy right-handers), the left hemisphere controls speech output.
Explanations of left hemisphere dominance
Two theories of why the left hemisphere might have special language capacities are still discussed. The first states that left-hemisphere dominance is due to a specialization for the precise temporal control of oral and manual articulators. The main argument here is that gestures related to a story line are most often made with the right hand, which is controlled by the left hemisphere, whereas other hand movements occur equally often with both hands. The second theory says that the left hemisphere is dominant because it is specialized for linguistic processing; it rests on a single patient, a speaker of American Sign Language with a left-hemisphere lesion, who could neither produce nor comprehend ASL but could still communicate with gestures in non-linguistic domains.
How innate is the organisational structure of the brain?
Not only cases of left-handers but also brain imaging studies have shown examples of bilateral language processing: according to ERP studies (by Bellugi et al. 1994 and Neville et al. 1993, as cited in E. Dąbrowska, "Language, Mind and Brain", 2004, p. 57), people with Williams syndrome (WS) also have no dominant hemisphere for language. WS patients have many physical and mental disorders but show, compared to their otherwise poor cognitive abilities, very good linguistic skills – and these skills do not rely on one dominant hemisphere; both contribute equally. So, whilst the majority of the population has a dominant left hemisphere for language processing, there are a variety of exceptions to that dominance. Given that there are different "organisation possibilities" in individual brains, Dąbrowska (p. 57) suggests that the organisational structure of the brain could be less innate and fixed than is commonly thought.
10.03: Auditory Language Processing
This section will explain where and how language is processed. To avoid overlap with visual processes, we first concentrate on spoken language. Scientists have developed three approaches to gathering information about this issue: the first two are based on brain lesions, namely aphasias, whereas the most recent relies on modern brain-imaging techniques.
Neurological Perspective
The neurological perspective describes the pathways language follows in order to be comprehended. Scientists have revealed that there are specific areas in the brain where specific tasks of language processing take place. The best-known areas are Broca's and Wernicke's areas.
Broca’s aphasia
Broca's and Wernicke's area
One of the best-known aphasias is Broca's aphasia, which leaves patients unable to speak fluently; they also have great difficulty producing words. Comprehension, however, is relatively intact in these patients. Because the symptoms do not result from motor problems of the vocal musculature, a brain region responsible for linguistic output must be lesioned. Broca concluded that this region must be located ventrally in the frontal lobe, anterior to the motor strip. Recent research suggests that Broca's aphasia also results from damage to subcortical tissue and white matter, not cortical tissue alone.
Example of spontaneous speech – task: What do you see in this picture?
„O, yea. Det‘s a boy an‘ girl... an‘ ... a ... car ... house... light po‘ (pole). Dog an‘ a ... boat. ‚N det‘s a ... mm ... a ... coffee, an‘ reading. Det‘s a ... mm ... a ... det‘s a boy ... fishin‘.“ (Adapted from „Principles of Neuroscience“ 4th edition, 2000, p 1178)
Wernicke‘s aphasia
Another famous aphasia, Wernicke's aphasia, shows roughly the opposite symptoms. Patients suffering from Wernicke's aphasia usually speak very fluently and pronounce words correctly, but the words are combined senselessly – "word salad" is the most common description. Understanding these patients is especially difficult, because they use paraphasias (substitution of a word, in verbal paraphasia; of a word with similar meaning, in semantic paraphasia; or of a phoneme, in phonemic paraphasia) and neologisms. For patients with Wernicke's aphasia, even the comprehension of simple sentences is very difficult, and their ability to process auditory and written language input is impaired. With some knowledge of brain structure and function, one can infer that the area damaged in Wernicke's aphasia lies at the junction of the temporal, parietal and occipital regions, near Heschl's gyrus (the primary auditory area), because the areas receiving and interpreting sensory information (posterior cortex) and those connecting sensory information to meaning (parietal lobe) are likely to be involved.
Example of spontaneous speech – task: What do you see in this picture?
„Ah, yes, it‘s ah ... several things. It‘s a girl ... uncurl ... on a boat. A dog ... ‘S is another dog ... uh-oh ... long‘s ... on a boat. The lady, it‘s a young lady. An‘ a man a They were eatin‘. ‘S be place there. This ... a tree! A boat. No, this is a ... It‘s a house. Over in here ... a cake. An‘ it‘s, it‘s a lot of water. Ah, all right. I think I mentioned about that boat. I noticed a boat being there. I did mention that before ... Several things down, different things down ... a bat ... a cake ... you have a ...“ (adapted from „Principles of Neuroscience“ 4th edition, 2000, p 1178)
Conduction aphasia
Wernicke predicted that a disruption of the connection between Broca's and Wernicke's areas – conduction aphasia – would lead to severe problems in repeating just-heard sentences, rather than to problems with the comprehension and production of speech. Indeed, patients suffering from this kind of aphasia show an inability to reproduce sentences: they often make phonemic paraphasias, may substitute or leave out words, or may say nothing at all. Investigations determined that the "connection cable" between Wernicke's and Broca's areas, the arcuate fasciculus, is almost invariably damaged in conduction aphasia. That is why conduction aphasia is also regarded as a disconnection syndrome (behavioural dysfunction caused by damage to the connection between two intact brain regions).
Example of the repetition of the sentence „The pastry-cook was elated“:
„The baker-er was /vaskerin/ ... uh ...“ (adapted from „Principles of Neuroscience“ 4th edition, 2000, p 1178)
Transcortical motor aphasia and global aphasia
Transcortical motor aphasia, another syndrome caused by a disrupted connection, is very similar to Broca's aphasia, with the difference that the ability to repeat is preserved. In fact, people with transcortical motor aphasia often exhibit echolalia, the compulsion to repeat what they have just heard. The damage usually lies outside Broca's area, sometimes more anterior and sometimes more superior. Individuals with transcortical sensory aphasia have symptoms similar to those of Wernicke's aphasia, except that they too show echolalia. Lesions covering great parts of the left hemisphere lead to global aphasia, an inability both to comprehend and to produce language, because more than just Broca's or Wernicke's area is damaged. (Banich, 1997, pp. 276–282)
Overview of the effects of aphasia from the neurological perspective
Type of Aphasia | Spontaneous Speech | Paraphasia | Comprehension | Repetition | Naming
Broca's | Nonfluent | Uncommon | Good | Poor | Poor
Wernicke's | Fluent | Common (verbal) | Poor | Poor | Poor
Conduction | Fluent | Common (literal) | Good | Poor | Poor
Transcortical motor | Nonfluent | Uncommon | Good | Good (echolalia) | Poor
Transcortical sensory | Fluent | Common | Poor | Good (echolalia) | Poor
Global | Nonfluent | Variable | Poor | Poor | Poor
(Adapted from Benson, 1985, p. 32, as cited in Banich, 1997, p. 287)
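Read column-wise, the table amounts to a small decision tree over fluency, comprehension and repetition. The sketch below simply transcribes the table into Python; it is an illustration of the taxonomy, not an established clinical algorithm (naming, which is poor in all six types, is omitted).

# The six aphasia types of the table, keyed by the three discriminating dimensions.
def classify_aphasia(fluent, comprehension_good, repetition_good):
    if fluent:
        if comprehension_good:
            # Fluent speech and good comprehension, but repetition fails.
            return "conduction" if not repetition_good else "no aphasia of these six types"
        return "transcortical sensory" if repetition_good else "Wernicke's"
    if comprehension_good:
        return "transcortical motor" if repetition_good else "Broca's"
    return "global"  # nonfluent, poor comprehension (repetition is also poor)

print(classify_aphasia(fluent=True,  comprehension_good=False, repetition_good=False))  # Wernicke's
print(classify_aphasia(fluent=False, comprehension_good=True,  repetition_good=True))   # transcortical motor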
Psychological Perspective
Since the 1960s, psychologists and psycholinguists have tried to work out how language is organised and represented in the brain. Patients with aphasias provided good evidence for the localization and discrimination of the three main components of language comprehension and production: phonology, syntax and semantics.
Phonology
Phonology deals with the processing of meaningful speech sounds. A distinction is drawn between the phonemic representation of a speech sound – phonemes being the smallest units of sound that lead to different meanings (e.g. the /b/ and /p/ in bet and pet) – and the phonetic representation. The latter concerns the fact that a speech sound may be produced differently in different contexts: for instance, the /p/ in pill sounds different from the /p/ in spill, since the former is aspirated and the latter is not.
To examine which brain parts are responsible for phonetic representation, patients with Broca's and Wernicke's aphasia can be compared. The speech of patients with Broca's aphasia is non-fluent: they have problems producing the correct phonetic and phonemic representation of a sound. People with Wernicke's aphasia show no problems speaking fluently, but they too sometimes produce the wrong phoneme. This indicates that Broca's area is mainly involved in phonological production and that phonemic and phonetic representation do not take place in the same part of the brain. Scientists have examined speech production at a more precise level – the level of the distinctive features of phonemes – to see which features patients with aphasia get wrong.
A distinctive feature describes a manner or place of articulation. /t/ (as in touch) and /s/ (as in such), for example, are produced at the same place but in a different manner. /t/ and /d/ are produced at the same place and in the same manner, but they differ in voicing.
Results show that in fluent as well as non-fluent aphasia, patients usually mix up only one distinctive feature, not two. In general, errors connected to the place of articulation are more common than those linked to voicing. Interestingly, some aphasia patients are well aware of the differing features of two phonemes yet are unable to produce the right sound. This suggests that even though such patients have great difficulty pronouncing words correctly, their comprehension of words can remain quite good. This pattern is characteristic of Broca's aphasia, while Wernicke's aphasia shows the contrary: patients pronounce words correctly but cannot understand what the words mean, which is why they often utter phonologically well-formed strings (neologisms) that are not real words with a meaning.
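The claim that substitution errors typically differ from the target in exactly one distinctive feature can be made concrete with a toy feature table. The feature values below are standard phonology; the representation (Python dicts and names) is our own illustrative choice, not from the source.

# Toy distinctive-feature table for four English consonants.
PHONEMES = {
    "t": {"place": "alveolar", "manner": "stop",      "voiced": False},
    "d": {"place": "alveolar", "manner": "stop",      "voiced": True},
    "s": {"place": "alveolar", "manner": "fricative", "voiced": False},
    "p": {"place": "bilabial", "manner": "stop",      "voiced": False},
}

def feature_distance(a, b):
    """Number of distinctive features in which two phonemes differ."""
    return sum(PHONEMES[a][f] != PHONEMES[b][f] for f in PHONEMES[a])

print(feature_distance("t", "d"))  # 1 (voicing only)  -> a typical aphasic error
print(feature_distance("t", "s"))  # 1 (manner only)
print(feature_distance("d", "p"))  # 2 -> rarely confused, per the findings above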
Syntax
Syntax describes the rules by which words must be arranged to yield meaningful sentences. Speakers generally know the syntax of their mother tongue implicitly and notice at once when a word is out of order in a sentence. People with aphasia, however, often have problems with the parsing of sentences, with respect not only to the production of language but also to the comprehension of sentences. Patients showing an inability to comprehend and produce sentences usually have some kind of anterior aphasia, also called agrammatic aphasia. This can be revealed in tests with sentences: such patients cannot easily distinguish between active and passive voice when both agent and object could plausibly play the active part. For example, they see no difference between "The boy chased the girl" and "The boy was chased by the girl", but they do understand both "The boy saw the apple" and "The apple was seen by the boy", because there they can enlist semantics and do not have to rely on syntax alone. Patients with posterior aphasia, such as Wernicke's aphasia, do not show these symptoms, as their speech is fluent and comprehension by purely syntactic means remains possible; for them, the semantic aspect is the problem, as discussed in the next part.
Semantics
Semantics deals with the meaning of words and sentences. It has been shown that patients suffering from posterior aphasia have severe problems understanding simple texts, although their knowledge of syntax is intact. The semantic deficit is often examined with a Token Test, in which patients have to point to objects referred to in simple sentences. As might be guessed, people with anterior aphasia have no problems with semantics, yet they may fail to understand longer sentences, where knowledge of syntax is involved as well.
Overview of the effects of aphasia from the psychological perspective
 | anterior aphasia (e.g. Broca's) | posterior aphasia (e.g. Wernicke's)
Phonology | phonetic and phonemic representation affected | phonemic representation affected
Syntax | affected | no effect
Semantics | no effect | affected
In general, studies of lesioned patients have shown that anterior areas are needed for speech output and posterior regions for speech comprehension. As mentioned above, anterior regions are also more important for syntactic processing, while posterior regions are involved in semantic processing. Such a strict division of the parts of the brain and their responsibilities is nevertheless not possible, because posterior regions must be important for more than just sentence comprehension: patients with lesions in this area can neither comprehend speech nor produce meaningful speech. (Banich, 1997, pp. 283–293)
Evidence from Advanced Neuroscience Methods
Measuring the functions of both normal and damaged brains has been possible since the 1970s, when the first brain imaging techniques were developed. With them, we are able to "watch the brain working" while the subject is, e.g., listening to a joke. These methods (described further in chapter 4) show whether the earlier findings are correct and make them more precise.
Generally, imaging shows that certain functional brain regions are much smaller than estimated in brain lesion studies, and that their boundaries are more distinct (cf. Banich, p. 294). The exact location varies from person to person, so pooling the results of many brain lesion studies previously led to overestimates of the size of functional regions. Stimulating brain tissue electrically (during epilepsy surgery) and observing the outcome (e.g. errors in naming tasks), for example, has led to much better knowledge of where language processing areas are located.
PET studies (Fiez & Petersen, 1993, as cited in Banich, p. 295) have shown that both anterior and posterior regions are in fact activated in language comprehension and processing, but with different strengths – in agreement with the lesion studies. The more active speech production an experiment requires, the more frontal the main activation: for example, when the presented words must be repeated.
Another result (Raichle et al. 1994, as referred to in Banich, p. 295) was that the familiarity of the stimuli plays a big role. When the subjects were presented with well-known stimulus sets in well-practised experimental tasks and had to repeat the words, anterior regions were activated – regions known to cause conduction aphasia when damaged. But when the words were new, and/or the subjects had never done such a task before, the activation was recorded more posteriorly. That is, when you repeat an unexpected word, the most active brain tissue lies roughly beneath your upper left ear, but when you knew in advance which word to repeat, it is a bit nearer to your left eye.
10.04: Visual Language Processing
The processing of written language takes place when we read or write, and it is thought to happen in a neural processing system distinct from that for auditory language. Reading and writing rely on vision, whereas spoken language is first mediated by the auditory system, so the language systems responsible for written language processing have to interact with a different sensory system from the one involved in spoken language processing.
Visual language processing in general begins when the visual forms of letters (e.g. "c" or "C") are mapped onto abstract letter identities. These are then mapped onto a word form and the corresponding semantic representation (the "meaning" of the word, i.e. the concept behind it). Observations of patients who lost a language ability due to brain damage revealed disease patterns indicating a difference between the perception (reading) and the production (writing) of visual language, just as is found in non-visual language processing.
Alexic patients retain the ability to write while being unable to read, whereas patients with agraphia can read but cannot write. Although alexia and agraphia often occur together as a result of damage to the angular gyrus, patients have been found with alexia but no agraphia (e.g. Greenblatt 1973, as cited in M. T. Banich, "Neuropsychology", p. 296) and with agraphia but no alexia (e.g. Hécaen & Kremin, 1976, as cited in M. T. Banich, "Neuropsychology", p. 296). This double dissociation suggests separate neural control systems for reading and writing.
Since double dissociations are also found between phonological and surface alexia, experimental results support the theory that perception and production of written language are each subdivided into separate neural circuits. The two-routes model shows how these neural circuits are believed to provide pathways from written words to thoughts and from thoughts to written words.
Two routes model
1.1. Each route derives the meaning of a word, or the word for a meaning, in a different way
The two-routes model, as its name says, comprises two routes. Each of them derives the meaning of a word (or the word for a meaning) in a different way, depending on how familiar we are with the word.
Using the phonological route means having an intermediate step between the perception and the comprehension of written language. This intermediate step consists in applying grapheme-to-phoneme rules, which determine the phonological representation for a given grapheme. A grapheme is the smallest written unit of a word that represents a phoneme (e.g. "sh" in "shore"); a phoneme is the smallest phonological unit of a word distinguishing it from another word (e.g. the /b/ and /k/ in "bat" and "cat"). People who are learning to read, or who encounter new words, often use the phonological route to arrive at a meaning representation: they construct a phoneme for each grapheme and then combine the individual phonemes into a sound pattern that is associated with a certain meaning (see 1.1).
The direct route is supposed to work without an intermediate phonological representation, so that print is directly associated with word meaning. One situation in which the direct route must be taken is reading an irregular word like "colonel", where applying grapheme-to-phoneme rules would lead to an incorrect phonological representation.
According to Taft (1982, as referred to in M. T. Banich, "Neuropsychology", p. 297) and others, the direct route is supposed to be faster than the phonological route, since it skips the "phonological detour", and is therefore said to be used for known words (see 1.1). However, this is just one point of view; others, like Chastain (1987, as referred to in M. T. Banich, "Neuropsychology", p. 297), postulate a reliance on the phonological route even in skilled readers.
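A minimal sketch of the two routes, under the assumption of a toy lexicon and a toy rule table (real English grapheme-to-phoneme rules are far larger and context-sensitive), might look like this:

# Toy grapheme-to-phoneme rules (phonological route) and toy lexicon (direct route).
GRAPHEME_TO_PHONEME = {"sh": "ʃ", "o": "ɒ", "p": "p", "t": "t"}
LEXICON = {  # visual word form -> (pronunciation, meaning)
    "shop": ("ʃɒp", "a place where goods are sold"),
    "colonel": ("ˈkɜːnəl", "a military officer"),  # irregular: rules would fail
}

def phonological_route(word):
    """Assemble a pronunciation from grapheme-to-phoneme rules (greedy, longest match first)."""
    phonemes, i = [], 0
    while i < len(word):
        for size in (2, 1):  # try two-letter graphemes like "sh" before single letters
            chunk = word[i:i + size]
            if chunk in GRAPHEME_TO_PHONEME:
                phonemes.append(GRAPHEME_TO_PHONEME[chunk])
                i += size
                break
        else:
            return None  # no rule applies to this letter sequence
    return "".join(phonemes)

def direct_route(word):
    return LEXICON.get(word)  # known whole forms only; no sounding-out

print(phonological_route("shop"))     # 'ʃɒp' -- works even for novel regular strings
print(phonological_route("colonel"))  # None  -- rules cannot handle the irregular word
print(direct_route("colonel"))        # ('ˈkɜːnəl', 'a military officer')

Note how the division of labour in this sketch mirrors the alexias discussed next: knocking out phonological_route leaves known words readable but non-words unreadable (phonological alexia), while knocking out direct_route forces rule-based reading and produces regularity errors on words like "colonel" (surface alexia).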
The processing of written language in reading
1.2. Regularity effects are common in cases of surface alexia
Several kinds of alexia can be differentiated, often according to whether the phonological or the direct route is impaired. Patients with brain lesions participated in experiments where they had to read out regular words, irregular words and non-words. Reading of non-words, for example, requires access to the phonological route, since there cannot be a "stored" meaning or sound representation for a novel combination of letters.
Patients with a lesion in temporal structures of the left hemisphere (the exact location varies) suffer from so-called surface alexia. They show characteristic symptoms that suggest a strong reliance on the phonological route. Very common are regularity effects, that is, mispronunciations of irregularly spelled words like "colonel" or "yacht" (see 1.2): these words are pronounced according to grapheme-to-phoneme rules, which for them is simply wrong (although high-frequency irregularly spelled words may be preserved in some cases).
Furthermore, the rule-derived pronunciation of a word is reflected in reading-comprehension errors. When asked to describe the meaning of the word "bear", people suffering from surface alexia may answer something like "a beverage", because the sound pattern they derive for "bear" is the same as that for "beer". This goes along with a tendency to confuse homophones (words that sound the same but are spelled differently and have different meanings). These patients are, however, still able to read non-words with a regular spelling, since they can apply grapheme-to-phoneme rules to them.
1.3. Patients with phonological alexia have to rely on the direct route
In contrast, phonological alexia is characterised by a disruption of the phonological route, due to lesions in more posterior temporal structures of the left hemisphere. Patients can read familiar regular and irregular words by making use of stored information about the meaning associated with a particular visual form (so there is no regularity effect as in surface alexia). However, they are unable to process unknown words or non-words, since they have to rely on the direct route (see 1.3).
Word-class effects and morphological errors are common, too. Nouns, for example, are read better than function words and sometimes even better than verbs. Affixes that do not change the grammatical class or meaning of a word (inflectional affixes) are often substituted (e.g. "farmer" instead of "farming"). Furthermore, concrete words are read with a lower error rate than abstract ones like "freedom" (the concreteness effect).
Deep alexia shares many symptoms with phonological alexia, such as the inability to read out non-words. Just as in phonological alexia, patients make mistakes on word inflections as well as function words, and they show visually based errors on abstract words ("desire" → "desert"). In addition, people with deep alexia misread words as different words with a strongly related meaning ("woods" instead of "forest"), a phenomenon referred to as semantic paralexia. Coltheart (as referred to in the "Handbook of Neurolinguistics", ch. 41-3, p. 563) postulates that reading in deep alexia is mediated by the right hemisphere: when large lesions affecting language abilities other than reading prevent access to the left hemisphere, the right-hemispheric language store is used, and lexical entries stored there are accessed and used as input to left-hemisphere output systems.
Overview alexia
The processing of written language in spelling
1.4. The phonological route is supposed to make use of phoneme-to-grapheme rules, while the direct route links thought to writing without an intermediary phonetic representation
Just as in reading, two separate routes – a phonological and a direct route – are thought to exist for spelling. The phonological route is supposed to make use of phoneme-to-grapheme rules, while the direct route links thought to writing without an intermediary phonetic representation (see 1.4).
It should be noted that phoneme-to-grapheme rules (used for spelling) are not simply the reverse of grapheme-to-phoneme rules. The most common phoneme for the grapheme "k" is /k/; the most common grapheme for the phoneme /k/, however, is "c".

Phonological agraphia is caused by a lesion in the left supramarginal gyrus, which is located in the parietal lobe above the posterior section of the Sylvian fissure (M. T. Banich, "Neuropsychology", p. 299). The ability to write regular and irregular words is preserved, while the ability to write non-words is not. This, together with poor retrieval of affixes (which are not stored lexically), indicates an inability to associate spoken words with their orthographic form via phoneme-to-grapheme rules. Patients rely on the direct route, i.e. on orthographic word-form representations stored in lexical memory.

Lesions at the conjunction of the posterior parietal lobe and the parieto-occipital junction cause so-called lexical agraphia, sometimes also referred to as surface agraphia. As the name indicates, it parallels surface alexia in that patients have difficulty accessing lexical-orthographic representations of words. Lexical agraphia is characterised by poor spelling of irregular words but good spelling of regular words and non-words. When asked to spell irregular words, patients often commit regularization errors, spelling the word in a phonologically plausible way (for example, "whisk" might be written as "wisque").
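The k/c asymmetry noted at the start of this subsection is easy to demonstrate: mapping a spelling to sound and back through the two most common correspondences need not restore the original spelling. The one-entry rule tables below are illustrative assumptions, not a full rule set.

# Reading direction: most common phoneme for a given grapheme.
grapheme_to_phoneme = {"k": "/k/", "c": "/k/"}

# Spelling direction: most common grapheme for a given phoneme.
phoneme_to_grapheme = {"/k/": "c"}

word = "kite"
# Map each letter to its phoneme and back; letters without a rule pass through.
respelled = "".join(
    phoneme_to_grapheme.get(grapheme_to_phoneme.get(ch, ""), ch) for ch in word
)
print(respelled)  # 'cite' -- going to sound and back does not restore the spelling

The same one-way logic underlies the regularization errors described above: a patient spelling by phoneme-to-grapheme rules can produce "wisque" for "whisk" without ever violating a rule.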
Overview agraphia
Evidence from Advanced Neuroscience Methods
How can we find evidence for the theory of the two routes? So far, neuroscientific research has not been able to establish that there are neural circuits corresponding to the system described above. The problem in deciding between processing on two routes and processing on a single route (as proposed e.g. by Seidenberg & McClelland, as referred to in M. T. Banich, "Neuropsychology", p. 308) is that it is not clear what pattern of brain activation would indicate the one or the other.

To investigate whether there are one or two systems, neuroimaging studies examine correlations between the activation of the angular gyrus, which is thought to be a crucial brain area in written language processing, and other brain regions. It was found that during the reading of non-words (which should strongly engage the phonological route), activation correlates mostly with brain regions involved in phonological processing, e.g. superior temporal regions (BA 22) and Broca's area. During the reading of normal words (which should strongly engage the direct route), the highest correlations were found in occipital and ventral cortex. This at least suggests that there are two distinct routes, though conclusions drawn from highest correlations cannot ensure it. What neuroimaging studies do establish is that the use of the phonological and the direct route strongly overlaps – which is rather unsurprising, since it is quite reasonable that fluent readers mix both routes. Other studies additionally provide data in which the brain regions activated during the reading of non-words and of normal words differ.

ERP studies suggest that the left hemisphere possesses some sort of mechanism that responds to combinations of letters in a string, to its orthography, and/or to the phonological representation of the string. ERP waves differ, during early analysis of the visual form of a string, depending on whether the string is a correct word or just pronounceable nonsense (Posner & McCandliss, 1993, as referred to in M. T. Banich, "Neuropsychology", pp. 307–308). This indicates that the mechanism is sensitive to the correctness of words.
The right hemisphere, in contrast to the left, is not involved in the abstract mapping of word meaning but is rather responsible for encoding word-specific visual forms. ERP and PET studies provide evidence that the right hemisphere responds more strongly than the left to letter-like strings. Moreover, divided-visual-field studies reveal that the right hemisphere can distinguish between different shapes of the same letter (e.g. in different handwritings) better than the left hemisphere can. The contributions of the two hemispheres to visual language processing thus complement one another: the right hemisphere first recognizes a written word as a letter sequence, regardless of exactly how the letters look, and the language network in the left hemisphere then builds up an abstract representation of the word – its comprehension.
10.05: Other Symbolic Systems
Most neurolinguistic research is concerned with the production and comprehension of English, either written or spoken. However, looking at different language systems from a neuroscientific perspective can substantiate as well as differentiate acknowledged theories of language processing. The following section shows how neurological research on three symbolic systems, each differing from English in some respect, has made it possible to distinguish – at least to some extent – brain regions that deal with the modality of a language (and therefore may vary from language to language, depending on whether the language in question is e.g. spoken or signed) from brain regions that seem to be necessary to language processing in general, regardless of whether we are dealing with signed, spoken, or even musical language.
Kana and Kanji
Kana and kanji are the two writing systems used in parallel in the Japanese language. Since they take different approaches to representing words, studying Japanese patients with alexia offers a good opportunity to test the hypothesis of two different routes to meaning explicated in the previous section.
The English writing system is phonological: each grapheme in written English roughly represents one speech sound, a consonant or a vowel. There are, however, other possible approaches to writing down a spoken language. In syllabic systems like the Japanese kana, one grapheme stands for one syllable. If written English were syllabic, it might include a symbol for the syllable "nut", appearing both in "donut" and in "peanut". Syllabic systems are sound-based: since the graphemes represent units of spoken words rather than meaning directly, an auditory representation of the word has to be created in order to arrive at the meaning. Reading syllabic systems should therefore require an intact phonological route. In addition to kana, Japanese also uses a logographic writing system called kanji, in which one grapheme represents a whole word or concept. Unlike phonological and syllabic systems, logographic systems comprise no systematic relationship between visual forms and the way they are pronounced; instead, each visual form is directly associated with the pronunciation and meaning of the corresponding word. Reading kanji should therefore require an intact direct route to meaning.
The hypothesis of two different routes to meaning is confirmed by the fact that, after brain damage, there can be a double dissociation between kana and kanji: some Japanese patients can read kana but not kanji (surface alexia), whereas others can read kanji but not kana (phonological alexia). In addition, there is evidence that different brain regions of Japanese native speakers are active while reading kana and kanji, although, as with English native speakers, these regions also overlap.
Since the distinction between the direct and the phonological route also makes sense for Japanese, it may be a general principle that reading written languages relies on two (at least partially) independent systems, both using different strategies to arrive at the meaning of a written word: either associating the visual form directly with the meaning (the direct route), or using an auditory representation as an intermediary between the visual form and the meaning of the word (the phonological route).
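The kana/kanji contrast maps directly onto the two routes from the previous section. In the sketch below (toy entries of our own choosing, romanized), reading kana assembles sound from a syllabary table, while reading kanji is a whole-form lexicon lookup:

# Kana: sound-based, one grapheme per syllable -> exercises the phonological route.
KANA_TO_SYLLABLE = {"や": "ya", "ま": "ma"}

# Kanji: logographic, one grapheme per word/concept -> exercises the direct route.
KANJI_LEXICON = {"山": ("yama", "mountain")}

def read_kana(word):
    # Spared in surface alexia: assemble the sound syllable by syllable.
    return "".join(KANA_TO_SYLLABLE[ch] for ch in word)

def read_kanji(word):
    # Spared in phonological alexia: look the whole form up in the lexicon.
    return KANJI_LEXICON[word]

print(read_kana("やま"))  # 'yama'
print(read_kanji("山"))   # ('yama', 'mountain')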
Sign Language
From a linguistic perspective, sign languages share many features with spoken languages: there are many regionally bounded sign languages, each with a distinct grammar and lexicon. Since, at the same time, sign languages differ from spoken languages in the way words are "uttered", i.e. in modality, neuroscientific research on them can yield valuable insights into whether there are general neural mechanisms dealing with language regardless of its modality.
Structure of SL
Sign languages are phonological languages: every meaningful sign consists of several phonemes (which used to be called cheremes, from Greek χερι, "hand", until their cognitive equivalence to the phonemes of spoken languages was realized) that carry no meaning as such but are nevertheless important for distinguishing the meanings of signs. One distinctive feature of SL phonemes is the place of articulation: one hand shape can have different meanings depending on whether it is produced at eye, nose, or chin level. Other features determining the meaning of a sign are hand shape, palm orientation, movement, and non-manual markers (e.g. facial expressions).
To express syntactic relationships, sign languages exploit the advantages of the visuo-spatial medium in which the signs are produced; the syntactic structure of sign languages therefore often differs from that of spoken languages. Two important features of most sign languages' grammars (including American Sign Language (ASL), Deutsche Gebärdensprache (DGS) and several other major sign languages) are directionality and the simultaneous encoding of elements of information:
• Directionality
The direction in which the sign is made often determines the subject and the object of a sentence. Nouns in SL can be 'linked' to a particular point in space, and later in the discourse they can be referred to by pointing to that same spot again (this is functionally related to pronouns in English). The object and the subject can then be switched by changing the direction in which the sign for a transitive verb is made.
• Simultaneous encoding of elements of information
The visual medium also makes it possible to encode several pieces of information simultaneously. Consider e.g. the sentence "The flight was long and I didn't enjoy it". In English, the information about the duration and the unpleasantness of the flight has to be encoded sequentially, by adding more words: to enrich the utterance "The flight was long" with the information about the unpleasantness of the flight, another clause ("I did not enjoy it") has to be added, so the length of the sentence grows. In sign language, however, increasing the information in an utterance does not necessarily increase its length. To convey that a long flight experienced in the past was unpleasant, one can use the single sign for "flight" with the past-tense marker, moved in a way that represents the attribute "long", combined with a facial expression of disaffection. Since all these features are signed simultaneously, no additional time is needed to utter "The flight was long and I didn't enjoy it" as compared to "The flight was long" (see the sketch below).
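The contrast between simultaneous and sequential encoding can be pictured as a data-structure difference: a sign is a record whose fields are articulated at once, while a spoken sentence is a list whose elements each take time. The field names and values below are illustrative assumptions, not real ASL/DGS notation.

from dataclasses import dataclass

@dataclass
class Sign:
    hand_shape: str
    location: str    # place of articulation, e.g. chin level
    movement: str    # e.g. a long, drawn-out path representing "long"
    non_manual: str  # e.g. facial expression of disaffection
    tense: str

# One sign, one unit of signing time, several pieces of information at once:
flight = Sign("flat-hand", "neutral space", "long-extended-path",
              "disaffection", "past")

# The spoken equivalent must add words, i.e. grow in length:
spoken = ["the", "flight", "was", "long", "and", "I", "did", "not", "enjoy", "it"]
print(len(spoken), "sequential words vs. 1 simultaneous sign")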
Neurology of SL
Since sentences in SL are encoded visually, and since SL grammar is often based on visual rather than sequential relationships among signs, one might suggest that the processing of SL depends mainly on the right hemisphere, which is chiefly concerned with visual and spatial tasks. However, there is evidence that the processing of signed and spoken language may be equally dependent on the left hemisphere, i.e. that the same basic neural mechanism may be responsible for all language functioning, regardless of modality (i.e. whether the language is spoken or signed).
The importance of the left hemisphere in SL processing is indicated, e.g., by the fact that signers with a damaged right hemisphere may show no aphasia, whereas, as in hearing subjects, lesions in the left hemisphere of signers can result in subtle linguistic difficulties (Gordon, 2003). Furthermore, studies of aphasic native signers have shown that damage to anterior portions of the left hemisphere (Broca's area) results in a syndrome similar to Broca's aphasia: the patients lose fluency of communication and are unable to use syntactic markers correctly or inflect verbs, although the words they sign are semantically appropriate. In contrast, patients with damage to posterior portions of the superior temporal gyrus (Wernicke's area) can still properly inflect verbs and set up and retrieve nouns from a discourse locus, but the sequences they sign have no meaning (Poizner, Klima & Bellugi, 1987). So, as with spoken languages, anterior and posterior portions of the left hemisphere seem to be responsible for the syntax and the semantics of the language, respectively. Hence, it does not matter to the brain's "syntax processing mechanisms" whether syntax is conveyed simultaneously through spatial markers or successively through word order and morphemes added to words: the same underlying mechanisms might be responsible for syntax in both cases.
Further evidence for the same underlying mechanisms for spoken and signed languages comes from studies in which fMRI has been used to compare the language processing of:
• 1. congenitally deaf native signers of British Sign Language,
• 2. hearing native signers of BSL (usually hearing children of deaf parents)
• 3. hearing signers who have learned BSL after puberty
• 4. non-signing subjects
Investigating language processing in these different groups allows some distinctions between the factors influencing language organization in the brain: to what extent deafness influences the organization of language, as compared to merely having SL as a first language (1 vs. 2); to what extent learning SL from birth differs from learning it after puberty (1, 2 vs. 3); and how language is organized in signers as compared to speakers (1, 2, 3 vs. 4).
These studies have shown that typical areas in the left hemisphere are activated both in native English speakers given written stimuli and in native signers given signs as stimuli. Moreover, some areas are equally activated in deaf subjects processing sign language and in hearing subjects processing spoken language – a finding which suggests that these areas constitute the core language system, regardless of language modality (Gordon, 2003).
Unlike speakers, however, signers also show strong activation of the right hemisphere. This is partly due to the necessity of processing visuo-spatial information. Some of those areas (e.g. the angular gyrus), however, are activated only in native signers and not in hearing subjects who learned SL after puberty. This suggests that the way sign languages (and languages in general) are learned changes with age: late learners' brains are unable to recruit certain brain regions specialized for processing this language (Newman et al., 1998).
We have seen that evidence from aphasias as well as from neuroimaging suggests that the same underlying neural mechanisms are responsible for signed and spoken languages. It is natural to ask whether these neural mechanisms are even more general, i.e. whether they can process any type of symbolic system with a syntax and semantics. One example of such a more general symbolic system is music.
Music
Like language, music is a human universal in which combinatorial principles govern the organization of discrete elements (tones) into structures (phrases) that convey meaning: music is a symbolic system with a special kind of syntax and semantics. It is therefore interesting to ask whether music and natural language share neural mechanisms – whether the processing of music depends on the processing of language or the other way round, or whether the underlying mechanisms are completely separate. By investigating the neural mechanisms underlying music, we might find out whether the neural processes behind language are unique to the domain of natural language, i.e. whether language is modular. Up to now, research in the neurobiology of music has yielded contradictory evidence regarding these questions.
On the one hand, there is evidence for a double dissociation of language and music abilities. People suffering from amusia are unable to perceive harmony or to remember and recognize even very simple melodies; at the same time, they have no problems comprehending or producing speech. There is even a case of a patient who developed amusia without aprosodia: although she could not recognize tones in musical sequences, she could still make use of pitch, loudness, rate and rhythm to convey meaning in spoken language (Pearce, 2005). This highly selective problem in processing music can result from brain damage or be inborn; in some cases it runs in families, suggesting a genetic component. The complementary syndrome of amusia also exists: after suffering brain damage in the left hemisphere, the Russian composer Shebalin lost his speech functions, but his musical abilities remained intact (Zatorre, McGill, 2005).
On the other hand, neuroimaging data suggest that language and music share a common mechanism for processing syntactic structures. The P600 ERP in Broca's area, measured in response to ungrammatical sentences, is also elicited in subjects listening to musical chord sequences lacking harmony (Patel, 2003); the expectation of typical sequences in music could therefore be mediated by the same neural mechanisms as the expectation of grammatical sequences in language.
A possible resolution of this apparent contradiction is the dual-system approach (Patel, 2003), according to which music and language share procedural mechanisms (frontal brain areas) responsible for processing the general aspects of syntax, but in each case these mechanisms operate on different representations (posterior brain areas): notes in the case of music and words in the case of language.
10.06: Outlook
Many questions remain to be answered; for example, it is still unclear whether there is a distinct language module (one that could be removed without affecting other brain functions) or not. As Evelyn C. Ferstl points out in her review, the next step after identifying the distinct small regions responsible for subtasks of language processing will be to find out how they work together to build up the language network.
10.07: References and Further Reading
Books - English
• Brigitte Stemmer & Harry A. Whitaker: Handbook of Neurolinguistics. Academic Press (1998). ISBN 0126660557
• Marie T. Banich: Neuropsychology. The Neural Bases of Mental Function (1997).
• Ewa Dąbrowska: Language, Mind and Brain. Edinburgh University Press Ltd. (2004)
• Evelyn C. Ferstl (a review): The functional neuroanatomy of text comprehension. What's the story so far? In: Schmalhofer, F. & Perfetti, C. A. (Eds.), Higher Level Language Processes in the Brain: Inference and Comprehension Processes. Lawrence Erlbaum (2004)
• Poizner, Klima & Bellugi: What the Hands Reveal about the Brain. MIT Press (1987)
• N. Chomsky: Aspects of the Theory of Syntax. MIT Press (1965). ISBN 0262530074
• Neville & Bavelier: Variability in the effects of experience on the development of cerebral specializations: Insights from the study of deaf individuals. Washington, D.C.: US Government Printing Office (1998)
• Newman et al.: Effects of Age of Acquisition on Cortical Organization for American Sign Language: an fMRI Study. NeuroImage, 7(4), part 2 (1998)
Books - German
• Müller, H.M. & Rickert, G. (Hrsg.): Neurokognition der Sprache. Stauffenberg Verlag (2003)
11.01: Introduction
An important function and property of the human cognitive system is the ability to extract important information from textually and verbally described situations. This ability plays a vital role in understanding and remembering. But what happens to this information after it is extracted, how do we represent it, and how do we use it for inferencing? With this chapter we introduce the concept of a "situation model" (van Dijk & Kintsch, 1983; "mental model": Johnson-Laird, 1983), which is the mental representation of what a text is about. We discuss what these representations might look like and present the various experiments that try to tackle these questions empirically. By assuming situations to be encoded by perceptual symbols (Barsalou, 1999), the theory of situation models touches many aspects of cognitive philosophy, linguistics and artificial intelligence. At the beginning of this chapter, we explain why situation models are important and what we use them for. Next we focus on the theory itself by introducing the primary types of information (the situation model components), its levels of representation, and finally two other basic types of knowledge used in situation model construction and processing (general world knowledge and referent-specific knowledge).
Situation models not only form a central concept in theories of situated cognition that helps us understand how situational information is collected and how new information gets integrated; they can also explain many other phenomena. According to van Dijk & Kintsch, situation models are responsible for processes like domain expertise, translation, learning from multiple sources, and completely understanding situations just by reading about them. According to most researchers in this area, these situation models consist of five dimensions, which we will explain later. When new information concerning one of these dimensions is extracted, the situation model is changed accordingly. The bigger the change in the situation model, the more time the reader needs to understand the situation with the new information. If there are contradictions, i.e. new information which does not fit into the model, the reader fails to understand the text and probably has to reread parts of it to build up a better model. Several experiments have shown that texts involving only small changes in the five dimensions of text understanding are easier to understand. It has also been found that a text is easier to understand if the important information is mentioned more explicitly. For this reason several researchers have written about the importance of foregrounding important information (see Zwaan & Radvansky, 1998 for a detailed list). The other important issue about situation models is their multidimensionality. Here the important questions are how the different dimensions are related and what weight each has in constructing the model. Some researchers claim that the weight of the dimensions shifts according to the situation being described. Introducing such claims will be the final part of this chapter and aims to acquaint you with current and future research goals.
The VIP: Rolf A. Zwaan
Rolf A. Zwaan, born September 13, 1962 in Rotterdam (the Netherlands), is a very important person for this topic, since he has conducted the most research on it (92 publications in total) and most of our data is taken from his work. Zwaan did his MA (1986) and his Ph.D. (1992) at Utrecht University (Netherlands), both cum laude. Since then he has collected multiple awards, such as the Developing Scholar Award (Florida State University, 1999) and a Fellowship of the Hanse Institute for Advanced Study (Delmenhorst, Germany, 2003), and has become a member of several professional organisations, such as the Psychonomic Society, the Cognitive Science Society and the American Psychological Society. Since 2007 he has been Chair of Biological & Cognitive Psychology at Erasmus University Rotterdam (Netherlands).
11.02: Why do we need Situation Models
Many tasks based on language processing can only be explained by the use of situation models. The so-called situation model, or mental model, consists of five different dimensions, which draw on different sources of information. Situation models help us comprehend a text or even a single sentence; moreover, the theory explains much better how we comprehend and combine several texts and sentences. In the following, some examples are listed of why we really need situation models.
Integration of information across sentences
Integration of information across sentences is more than just understanding a set of sentences. For example:
“Gerhard Schroeder is in front of some journalists. Looking forward to new ideas is nothing special for the Ex-German chancellor. It is like in the good old days in 1971 when the leader of the Jusos was behind the polls and talked about changes.”
This example only makes sense to the reader if he is aware that "Gerhard Schroeder", "Ex-German chancellor" and "the leader of the Jusos in 1971" are one and the same person. If we build up a situation model, "Gerhard Schroeder" is our token in this example. Every piece of information which comes up will be linked to this token, based on grammatical and world knowledge. The definite article in the second sentence refers to the individual in the first sentence; this link is based on grammatical knowledge, since a definite article indicates a connection to an individual in a previous sentence. If there were an indefinite article, we would have to build a new token for a new individual. The third sentence is linked to the token by domain knowledge: it has to be known that Gerhard Schroeder was the leader of the Jusos in 1971; otherwise the connection can only be guessed. We can see that an integrated situation model is needed to comprehend the connection between the three sentences, as the sketch below illustrates.
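The token-building rules just described can be sketched in a few lines of code. The following minimal Python sketch is our own illustration of the simplified rule from the text (a definite article links back to an existing token, an indefinite article introduces a new one); the `Token` class and `resolve_reference` function are hypothetical names, not part of any established model.
```
# Minimal sketch of token building during comprehension (hypothetical names).
# Simplified rule from the text: a definite article refers back to an
# existing token; an indefinite article introduces a new token.

class Token:
    """A discourse referent together with the information linked to it."""
    def __init__(self, name):
        self.name = name
        self.facts = []

def resolve_reference(noun_phrase, article, tokens):
    if article == "definite":
        # Link to an existing token if grammatical or world knowledge allows.
        for token in tokens:
            if noun_phrase == token.name or noun_phrase in token.facts:
                return token
    # Indefinite article, or no matching token: create a new referent.
    token = Token(noun_phrase)
    tokens.append(token)
    return token

tokens = []
schroeder = resolve_reference("Gerhard Schroeder", "indefinite", tokens)
schroeder.facts.append("Ex-German chancellor")  # supplied by world knowledge
same = resolve_reference("Ex-German chancellor", "definite", tokens)
print(same is schroeder)  # True: both expressions point to one token
```
Without the world-knowledge fact stored on the token, the definite reference in the sketch would create a second token, which mirrors a reader who cannot connect the second sentence back to Gerhard Schroeder.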
Explanation of similarities in comprehension performances across modalities
The explanation of similarities in comprehension performance across modalities also requires situation models. Whether we read a newspaper article, watch a report on television or listen to a report on the radio, we come up with a similar understanding of the same information conveyed through different modalities. Thus we create a mental representation of the information or event that does not depend on the modality itself. Furthermore, there is empirical evidence for this intuition: Baggett (1979) found that students who saw a short film and students who heard a spoken version of the events in the short film produced structurally similar recall protocols. There were differences in the protocols of the two groups, but these were due to content aspects; for example, the text version explicitly stated that a boy was on his way to school, whereas in the movie this had to be inferred.
Domain expertise on comprehension
Situation models also explain the effects of domain expertise on comprehension. In detail, this means that a person A whose verbal skills are weaker than those of person B can still outperform person B if A has more knowledge of the topic domain. Evidence for this intuition comes from a study by Schneider and Körkel (1989), who compared the recall of "experts" and novices for a text about a soccer match. The study included three grades: 3rd, 5th and 7th. One important result was that the 3rd grade soccer experts outperformed the 7th grade novices: the experts recalled 54% of the units in the text, the novices only 42%. The explanation is quite simple: the 3rd grade experts built up a situation model and used knowledge from their long-term memory (Ericsson & Kintsch, 1995), whereas the 7th grade novices had only the text from which to build a situation model. Further studies support the theory that domain expertise can compensate for verbal ability, e.g. Fincher-Kiefer, Post, Greene & Voss (1988) or Yekovich, Walker, Ogle & Thompson (1990).
Explanation of translation skills
Another example of why we need situation models comes from attempts to explain translation. Translating a sentence or a text from one language to another is not simply done by translating each word and building a new sentence structure until the sentence seems sound. Consider the example of a Dutch sentence:
From such examples we can conclude that translation between Dutch and English does not operate at the lexical-semantic level but at the situation level - in this example, "don't do something (an action) before you have done something else (another action)". Other studies have found that the ability to construct situation models during translation is important for translation skill (Zwaan, Ericsson, Lally and Hill, 1998).
Multiple source learning
People are able to learn about a domain from multiple documents. This phenomenon, too, can be explained by a situation model. For example, if we try to learn something about the "Cold War", we use different documents as sources of information. The information in one document may be similar to that in other documents; referents can be the same, and certain relationships within the "Cold War" can only be figured out by using several documents. So what we are really doing when learning and reasoning is integrating information from different documents into a common situation model, which organizes the information we have learned.
We have seen that we need situation models in various tasks of language processing, but they are not needed in all of them. An example is proofreading. A proofreader checks every word for its correctness. This ability does not require constructing situation models; instead, the task uses the resources of long-term memory, in which the correct spelling of each word is stored. The procedure is like:
This is done word by word. It is unnecessary to create situation models for this language processing task.
11.03: Multidimensionality of Situation Models
Space
Very often, objects that are spatially close to us are more relevant than more distant objects. Therefore, one would expect the same for situation models. Consistent with this idea, comprehenders are slower to recognise words denoting objects distant from a protagonist than those denoting objects close to the protagonist (Glenberg, Meyer & Lindem, 1987).
When comprehenders have extensive knowledge of the spatial layout of the setting of the story (e.g., a building), they update their representations according to the location and goals of the protagonist. They have the fastest mental access to the room that the protagonist is currently in or is heading to. For example, they can more readily say whether or not two objects are in the same room if the room mentioned is one of these rooms than if it is some other room in the building (e.g., Morrow, Greenspan, & Bower, 1987). This makes perfect sense intuitively because these are the rooms that would be relevant to us if we were in the situation.
People’s interpretation of the meaning of a verb denoting movement of people or objects in space, such as to approach, depends on their situation models. For example, comprehenders interpret the meaning of approach differently in The tractor is just approaching the fence than in The mouse is just approaching the fence. Specifically, they interpret the distance between the figure and the landmark as being longer when the figure is large (tractor) compared with when it is small (mouse). The comprehenders’ interpretation also depends on the size of the landmark and the speed of the figure (Morrow & Clark, 1988). Apparently, comprehenders behave as if they are actually standing in the situation, looking at the tractor or mouse approaching a fence.
Time
We assume by default that events are narrated in their chronological order, with nothing left out. Presumably this assumption exists because this is how we experience events in everyday life. Events occur to us in a continuous flow, sometimes in close succession, sometimes in parallel, and often partially overlapping. Language allows us to deviate from chronological order, however. For example, we can say, “Before the psychologist submitted the manuscript, the journal changed its policy.” The psychologist submitting the manuscript is reported first, even though it was the last of the two events to occur. If people construct a situation model, this sentence should be more difficult to process than its chronological counterpart (the same sentence, but beginning with “After”). Recent neuroscientific evidence supports this prediction. Event-related brain potential (ERP) measurements indicate that “before” sentences elicit, within 300 ms, greater negativity than “after” sentences. This difference in potential is primarily located in the left anterior part of the brain and is indicative of greater cognitive effort (Münte, Schiltz, & Kutas, 1998). In real life, events follow each other seamlessly. However, narratives can have temporal discontinuities, when writers omit events not relevant to the plot. Such temporal gaps, typically signalled by phrases such as a few days later, are quite common in narratives. Nonetheless, they present a departure from everyday experience. Therefore, time shifts should lead to (minor) disruptions of the comprehension process. And they do. Reading times for sentences that introduce a time shift tend to be longer than those for sentences that do not (Zwaan, 1996).
All other things being equal, events that happened just recently are more accessible to us than events that happened a while ago. Thus, in a situation model, enter should be less accessible after An hour ago, John entered the building than after A moment ago, John entered the building. Recent probe-word recognition experiments support this prediction (e.g., Zwaan, 1996).
Causation
As we interact with the environment, we have a strong tendency to interpret event sequences as causal sequences. It is important to note that, just as we infer the goals of a protagonist, we have to infer causality; we cannot perceive it directly. Singer and his colleagues (e.g., Singer, Halldorson, Lear, & Andrusiak, 1992) have investigated how readers use their world knowledge to validate causal connections between narrated events. Subjects read sentence pairs, such as 1a and then 1b or 1a’ and then 1b, and were subsequently presented with a question like 1c:
(1a) Mark poured the bucket of water on the bonfire.
(1a’) Mark placed the bucket of water by the bonfire.
(1b) The bonfire went out.
(1c) Does water extinguish fire?
Subjects were faster in responding to 1c after the sequence 1a-1b than after 1a’-1b. According to Singer, the reason for the speed difference is that the knowledge that water extinguishes fire was activated to validate the events described in 1a-1b. However, because this knowledge cannot be used to validate 1a’-1b, it was not activated when subjects read that sentence pair.
Intentionality
We are often able to predict people’s future actions by inferring their intentionality, i.e. their goals. For example, when we see a man walking over to a chair, we assume that he wants to sit, especially when he has been standing for a long time. Thus, we might generate the inference “He is going to sit.” Keefe and McDaniel (1993) presented subjects with sentences like After standing through the 3-hr debate, the tired speaker walked over to his chair (and sat down) and then with probe words (e.g., sat, in this case). Subjects took about the same amount of time to name sat when the clause about the speaker sitting down was omitted and when it was included. Moreover, naming times were significantly faster in both of these conditions than in a control condition in which it was implied that the speaker remained standing.
Protagonists and Objects
Comprehenders are quick to make inferences about protagonists, presumably in an attempt to construct a more complete situation model. Consider, for example, what happens after subjects read the sentence The electrician examined the light fitting. If the following sentence is She took out her screwdriver, their reading speed is slowed down compared with when the second sentence is He took out his screwdriver. This happens because she provides a mismatch with the stereotypical gender of an electrician, which the subjects apparently inferred while reading the first sentence (Carreiras, Garnham, Oakhill, & Cain, 1996).
Comprehenders also make inferences about the emotional states of characters. For example, if we read a story about Paul, who wants his brother Luke to be good at baseball, the concept of "pride" becomes activated in our mind when we read that Luke receives the Most Valuable Player Award (Gernsbacher, Goldsmith, & Robertson, 1992). Thus, just as in real life, we make inferences about people's emotions when we comprehend stories.
11.04: Processing Frameworks
Introduction
In the process of language and text comprehension, new information has to be integrated into the current situation model. This is achieved by a processing framework. There are various theories and insights on this process, most of which model only one or a few aspects of situation models and language comprehension.
A list of theories, insights and developments in language comprehension frameworks:
• an interactive model of comprehension (Kintsch and van Dijk, 1978)
• early Computational Model (Miller & Kintsch, 1980)
• Construction-Integration Model (Kintsch, 1988)
• Structure-Building-Framework (Gernsbacher,1990)
• Capacity Constraint Reader Model (Just, Carpenter, 1992)
• Constructivist framework (Graesser, Singer, Trabasso, 1994)
• Event Indexing Model (Zwaan, Langston, Graesser, 1995)
• Landscape Model (van den Broek, Risden, Fletcher, & Thurlow, 1996)
• Capacity-constrained construction-integration Model (Goldman, Varma, Coté, 1996)
• The Immersed Experiencer Framework (Zwaan, 2003)
In this part of the chapter on situation models we will discuss several of these models, starting with the early work of Kintsch in the 1970s and 1980s and then moving on to the popular later models that build on it.
An interactive Model of Comprehension
This model was developed in the late 1970s; it is the basis for many later models such as the Construction-Integration Model or even the Immersed Experiencer Framework. According to Kintsch and van Dijk (1978), text comprehension proceeds in cycles. In every cycle a few propositions are processed; their number is limited by the capacity of short-term memory, i.e. 7 plus or minus 2. In every cycle the new propositions are connected to the existing ones, so that they form a connected and hierarchical set.
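As an illustration of this cyclic idea, here is a minimal sketch. It assumes propositions arrive as a flat list and uses a fixed buffer of seven items for the short-term memory limit; the chunk size and the recency-based selection are our simplifications, since Kintsch and van Dijk select the propositions to keep active by their position in a coherence hierarchy rather than by recency.
```
# Illustrative sketch of cycle-based proposition processing
# (after Kintsch & van Dijk, 1978; heavily simplified).

STM_CAPACITY = 7  # short-term memory limit, 7 plus or minus 2

def comprehend(propositions, chunk_size=3):
    stm = []          # propositions kept active across cycles
    text_memory = []  # everything processed so far
    for i in range(0, len(propositions), chunk_size):
        cycle = propositions[i:i + chunk_size]  # a few propositions per cycle
        stm.extend(cycle)
        text_memory.extend(cycle)
        # Keep only the most recent propositions active: a crude stand-in
        # for selecting the most relevant ones in the coherence graph.
        stm = stm[-STM_CAPACITY:]
    return text_memory, stm

processed, active = comprehend([f"P{n}" for n in range(12)])
print(active)  # only the last seven propositions remain active
```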
Early Computational Model
This computational model by Miller and Kintsch tried to implement earlier theories of comprehension, derive predictions from them and compare these with behavioural studies and experiments. It consisted of several modules. One was a chunking program: its task was to read in one word at a time, identify whether it formed a proposition and decide whether to integrate it or not. This part of the model was not done computationally. Next in the input order was the Microstructure Coherence Program (MCP). The MCP sorted the propositions and stored them in the Working Memory Coherence Graph, whose task was to decide which propositions should be kept active during the next processing cycle. All propositions were stored in the Long Term Memory Coherence Graph, which decided which propositions should be transferred back into working memory, or could construct a whole new working memory graph with a different superordinate node. The problem with this computational model was its really low performance, but it still led to further research that tried to overcome its shortcomings.
Event-Indexing Model
The Event-Indexing Model was first proposed by Zwaan, Langston and Graesser (1995). It makes claims about how incoming information is processed during comprehension and how it is represented in long-term memory.
According to the Event-Indexing Model, all incoming events and actions are split into five indexes. The five indexes are the same as the five situational dimensions, though Zwaan & Radvansky (1998) claim that there are possibly more dimensions, which might be found in future research. One basic point of this model is the processing time for integrating new events into the current model: it is easier to integrate a new incoming event if it shares indexes with a previous event, and the more indexes the new event shares, the more easily it is integrated into the situation model. This prediction by Zwaan & Radvansky (1998) is supported by prior research (Zwaan, Magliano and Graesser, 1995). The other important point of the Event-Indexing Model concerns the representation in long-term memory. Zwaan & Radvansky (1998) predict that this representation is a network of nodes that encode the events. The nodes are linked through situational links according to the indexes they share; a link encodes not only that two nodes share indexes but also, through its strength, the number of shared indexes (see the sketch below). This second point already hints at what the Event-Indexing Model lacks. There are several things it does not include: for example, it encodes neither the temporal order of the events nor the direction of the causal relationships. The biggest disadvantage of the Event-Indexing Model is clearly that it treats the different dimensions as separate entities, even though they probably interact with each other.
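The link-strength idea can be made concrete in a few lines. In this minimal sketch (our own illustration, not Zwaan and colleagues' implementation), the five indexes of an event are a dictionary, and the strength of the link between two events is simply the number of indexes on which they agree; the example values are invented.
```
# Sketch of link strength in the Event-Indexing Model (illustration only).
# Each event is indexed on the five situational dimensions.

DIMENSIONS = ("time", "space", "causation", "intentionality", "protagonist")

def link_strength(event_a, event_b):
    """Number of shared indexes = strength of the situational link."""
    return sum(event_a[d] == event_b[d] for d in DIMENSIONS)

e1 = {"time": "t1", "space": "kitchen", "causation": None,
      "intentionality": "make tea", "protagonist": "John"}
e2 = {"time": "t1", "space": "kitchen", "causation": None,
      "intentionality": "make tea", "protagonist": "John"}
e3 = {"time": "t2", "space": "garden", "causation": None,
      "intentionality": "read", "protagonist": "Mary"}

print(link_strength(e1, e2))  # 5 shared indexes: easy to integrate
print(link_strength(e1, e3))  # 1 shared index: harder to integrate
```
The model's prediction that an event sharing more indexes with the current model is integrated faster corresponds to a higher score here; a network of such scored links is the proposed long-term memory representation.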
Zwaan & Radvansky (1998) updated the Event-Indexing Model with some new features. The new model splits the processed information into three types: the situational framework, the situational relations and the situational content. The situational framework grounds the situation in space and time, and its construction is obligatory; if no information is given, the framework is probably built up from default values retrieved from prior world knowledge, or an empty variable is instantiated. The situational relations are based on the five situational dimensions, which are analysed through the Event-Indexing Model. This kind of situational information includes not the basic information given in the situational framework but the relationships between the different entities or nodes in the network. In contrast to the situational framework, the situational relations are not obligatory: if no information is given, or no inferences between entities are possible, there is simply no relationship there. There is also an index that assigns importance to the different relations; this importance reflects how necessary the information is for understanding the situation, how easily it could be inferred if it were not mentioned, and how easily it can later be remembered. Another distinction this theory makes is between functional and non-functional relations (Carlson-Radvansky & Radvansky, 1996; Garrod & Sanford, 1989): functional relations describe the interaction between different entities, whereas non-functional relations hold between non-interacting entities. The situational content consists of the entities in the situation, such as protagonists and objects, and their properties. Like situational relations, these are only integrated explicitly into the situation model if they are necessary for understanding the situation; nonetheless, the central and most important entities and their properties are again obligatory. It is proposed that, in order to keep the processing time low, non-essential information is only represented by something like a pointer, so that this information can be retrieved if necessary.
The Immersed Experiencer Framework
The Immersed Experiencer Framework (IEF) is based on prior processing framework models (see above for a detailed list) but also tries to incorporate several other research findings. For example, it was found that during comprehension brain regions are activated that are very close to, or even overlap with, brain regions active during perception of, or action upon, a word's referent (Isenberg et al., 2000; Martin & Chao, 2001; Pulvermüller, 1999, 2002). During comprehension there is also a visual representation of the shape and orientation of objects (Dahan & Tanenhaus, 2002; Stanfield & Zwaan, 2002; Zwaan et al., 2002; Zwaan & Yaxley, in press a, b), and visual-spatial information primes sentence processing (Boroditsky, 2000). These visual representations can interfere with comprehension (Fincher-Kiefer, 2001). Findings from Glenberg, Meyer, & Lindem (1987), Kaup & Zwaan (in press), Morrow et al. (1987), Horton & Rapp (in press), Trabasso & Suh (1993) and Zwaan et al. (2000) suggest that information which is part of the situation and the text is more active in the reader's mind than information which is not included. The fourth research finding is that people move their eyes and hands during comprehension in a way consistent with the described situation (Glenberg & Kaschak, in press; Klatzky et al., 1989; Spivey et al., 2000).
The main point of the Immersed Experiencer Framework is the idea that words activate experiences with their referents. For example, "an eagle in the sky" activates a visual experience of an eagle with outstretched wings, while "an eagle in the nest" activates a different visual experience. According to Zwaan (2003), the IEF should be seen as an engine for making predictions about language comprehension, which are then suggested for further research.
According to the IEF, the process of language comprehension consists of three components: activation, construal and integration. Each component works at a different level: activation works at the word level, construal at the clause level, and integration at the discourse level. Though the IEF shares many points with earlier models of language comprehension, it differs in some main respects. For example, it suggests that language comprehension involves action and perceptual representations rather than amodal propositions (Zwaan, 2003).
11.05: Levels of Representation in Language and Text Comprehension
Many theories try to explain the situation model (or so-called mental model) in terms of different representations. Several of these theories deal with how comprehension gets from the text to the situation model itself: how many levels are included or needed, and whether the model is constructed in one step, like:
Sentence → Situation Model
Or are there intermediate levels that have to be passed through before the model is constructed? Below, three different accounts are presented that try to explain how a situation model is constructed from a text.
Propositional Representation
The propositional representation approach claims that a sentence is restructured and then stored in that form; the included information does not get lost. Consider the simple sentence:
“George loves Sally”, whose propositional representation is [LOVES(GEORGE, SALLY)].
It is easy to see that the propositional representation is easy to create while the information remains available, as the sketch below illustrates.
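To make the bracket notation concrete, here is a minimal sketch of a proposition as a data structure; the `Proposition` class is our hypothetical illustration, not a standard implementation.
```
# A proposition as a predicate plus arguments, mirroring [LOVES(GEORGE, SALLY)].

class Proposition:
    def __init__(self, predicate, *arguments):
        self.predicate = predicate
        self.arguments = arguments

    def __repr__(self):
        return f"[{self.predicate}({', '.join(self.arguments)})]"

p = Proposition("LOVES", "GEORGE", "SALLY")
print(p)               # [LOVES(GEORGE, SALLY)]
print(p.arguments[0])  # GEORGE - the original information is still available
```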
Three levels of representation
Fletcher (1994); van Dijk & Kintsch (1983); Zwaan & Radvansky (1998)
This theory says that there exist three levels of representation: the surface form, the text base and the situation model. In this example, the sentence "The frog ate the bug." is already the surface form. We naturally create semantic relations to understand the sentence. The next level is the text base: [EAT(FROG, BUG)] is the propositional representation, and the text base is close to this kind of representation, except that it is rather spatial. Finally, the situation model is constructed from the text base representation. We can see that the situation model does not include any kind of text; it is a mental picture of the information in the sentence itself.
Two levels of representation
Frank, Koppen, Noordman & Vonk (to appear); Zwaan (2004)
This theory is like the three-levels theory, but the text base level is left out: the theory claims that the situation model is created directly from the sentence, and no text base level is needed.
There are also situation model theories addressing direct experience: not only text comprehension is done via situation models, but learning through direct experience is handled by situation models, too.
KIWi-Model
A unified model by Schmalhofer
One unified model, the so-called KIWi model, tries to explain how text representation and direct experience interact with a situation model. Additionally, domain knowledge is integrated; domain knowledge is used in forming a situation model in various tasks, such as simple sentence comprehension (see the section "Why do we need Situation Models"). The KIWi model shows that there is a permanent interaction between "text representation → situation model" and between "sensory encoding → situation model". These interactions support the idea of a permanent updating of the mental model.
11.06: Inferencing
Inferencing is used to build up complex situation models from limited information. For example, in 1973 John Bransford and Marcia Johnson conducted a memory experiment in which two groups read variations of the same sentences.
The first group read the text "John was trying to fix the bird house. He was pounding the nail when his father came out to watch him do the work"
The second group read the text "John was trying to fix the bird house. He was looking for the nail when his father came out to watch him do the work"
After reading, test statements were presented to the participants. These statements contained the word hammer, which did not occur in the original sentences, e.g.: "John was using a hammer to fix the birdhouse. He was looking for the nail when his father came out to watch him". Participants in the first group reported having seen 57% of the test statements, while participants in the second group reported having seen only 20%.
As one can see, the first group showed a tendency to believe they had seen the word hammer. The participants in this group made the inference that John used a hammer to pound the nail. This memory test is a good example to get an idea of what is meant by making inferences and how they are used to complete situation models.
While reading a text, inferencing creates information which is not explicitly stated in the text; hence it is a creative process. It is very important for text understanding in general, because texts cannot include all the information needed to understand the sense of a story. Texts usually leave out what is known as world knowledge: knowledge about situations, persons or items that most people share and which therefore does not need to be stated explicitly. Each person should be able to infer this kind of information, for example that we usually use hammers to pound nails. It would be impossible to write a text if it had to include all the information it deals with, if there were no such thing as inferencing, or if inferencing were not done automatically by our brain.
There are a number of different kinds of inferences:
Anaphoric Inference
This kind of inference usually connects objects or persons from one sentence to another; it is therefore responsible for connecting cross-sentence information. E.g. in "John hit the nail. He was proud of his stroke", we directly infer that "he" and "his" relate to "John". We normally make this kind of inference quite easily. But in sentences where several persons and the words referring to them are mixed up, people can have problems understanding the story at first. This is normally regarded as bad writing style.
Instrumental Inference
This type of inference is about the tools and methods used in the text, like the hammer in the example above. For example, if you read about somebody flying to New York, you would not infer that this person built a hang glider and jumped off a cliff, but that he or she used a plane, since nothing else is mentioned in the text and a plane is the most common way of flying to New York. If there is no specific information about tools, instruments and methods, we get this information from our general world knowledge.
Causal Inference
Causal inference is the conclusion that one event in the text caused another, as in "He hit his nail. So his finger ached". The first sentence gives the reason why the situation described in the second sentence came about. It would be more difficult to draw a causal inference in an example like "He hit his nail. So his father ran away", although with some imagination one could still construct an inference here.
Causal inferences create causal connections between text elements. These connections are separated into local and global connections. Local connections are made within a range of 1 to 3 sentences; this depends on factors like the capacity of working memory and concentration during reading. Global connections are drawn between the information in one sentence and the background information gathered so far about the whole text. Problems can occur with causal inferences when a story is inconsistent: for example, vegans eating steak would be inconsistent. An interesting fact about causal inferences (Goldstein, 2005) is that inferences which are not easily drawn at first are easier to remember later. This may be because they required more mental processing capacity when the inference was drawn, so these "not-so-easy" inferences seem to be marked in a way that makes them easier to remember.
Predictive / Forward Inference
Predictive/forward inferences use the reader's general world knowledge to build predictions about the consequences of what is currently happening in the story into the situation model.
Integrating Inferences into Situation Models
The question of how models enter inferential processes is highly controversial in the two disciplines of cognitive psychology and artificial intelligence. A.I. has given deep insights into psychological processes since the two disciplines crossed paths, and the two together form main pillars of cognitive science. The arguments in the two fields are largely independent of each other, although they have much in common.
Johnson-Laird (1983) distinguishes three types of reasoning theories in which inferencing plays an important role. The first class is geared to logical calculi and has been implemented in many formal systems. The programming language Prolog arises from this way of dealing with reasoning, and in psychology many theories postulate formal rules of inference, a "mental logic". These rules work in a purely syntactic way and so are "context free", blind to the content they operate on. A simple example clarifies the problem with this type of theory:
``` If patients have cystitis, then they are given penicillin.
```
and the logical conclusion:
``` If patients have cystitis and are allergic to penicillin, then they are given penicillin
```
This conclusion is logically valid, but it clearly violates our common sense. The sketch below makes the problem explicit.
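In this minimal sketch (our own illustration), a purely syntactic rule fires whenever its condition matches, regardless of any further facts; blocking the unwanted conclusion requires building content (the allergy exception) into the rule itself.
```
# A context-free rule fires on its condition alone (monotonic reasoning).

def syntactic_rule(facts):
    # "If patients have cystitis, then they are given penicillin."
    if "cystitis" in facts:
        return "give penicillin"

print(syntactic_rule({"cystitis"}))                        # give penicillin
print(syntactic_rule({"cystitis", "penicillin allergy"}))  # give penicillin (!)

# Blocking the absurd conclusion requires a content-specific exception:
def rule_with_exception(facts):
    if "cystitis" in facts and "penicillin allergy" not in facts:
        return "give penicillin"
    return "choose another treatment"

print(rule_with_exception({"cystitis", "penicillin allergy"}))
```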
The second class of theories postulates content-specific rules of inference. Their origin lies in programming languages and production systems. They work with forms like "If x is a, then x is b": if one wants to show that x is b, showing that x is a becomes a subgoal of the argumentation. The idea of basing psychological theories of reasoning on content-specific rules was discussed by Johnson-Laird and Wason, and various sorts of such theories have been proposed. A related idea is that reasoning depends on the accumulation of specific examples within a connectionist framework, where the distinction between inference and recall is blurred.
The third class of theories is based on mental models and does not use any rules of inference. Comprehenders build mental models of the things they hear or read, and these models are permanently updated: a model, once built, is equipped with new features from the incoming information as long as there is no information that conflicts with the model. If a conflict arises, the model is generally rebuilt so that the conflicting information fits into the new model.
11.07: Important Topics of Current Research
Linguistic Cues versus World Knowledge
According to many researchers, language is a set of processing instructions on how to build up a situation model of the represented situation (Gernsbacher, 1990; Givon, 1992; Kintsch, 1992; Zwaan & Radvansky, 1998). As mentioned, readers use lexical cues and information to connect the different situational dimensions and integrate them into the model. Another important factor is prior world knowledge, which also influences how the different pieces of information in a situation model are related. The relation between linguistic cues and world knowledge is therefore an important topic of current and future research in the area of situation models.
Multidimensionality
Another important aspect of current research in the area of situation models is their multidimensionality. The main questions are how the different dimensions relate to each other, how they influence one another, whether they interact at all, and which of them interact. Most studies in the field have dealt with only one or a few of the situational dimensions.
11.08: References
Ashwin Ram, et al. (1999) Understanding Language Understanding - chapter 5
Baggett, P. (1979). Structurally equivalent stories in movie and text and the effect of the medium on recall. Journal of Verbal Learning and Verbal Behavior, 18, 333-356.
Bertram F. Malle, et al. (2001) Intentions and Intentionality - chapter 9
Boroditsky, L. (2000). Metaphoric Structuring: Understanding time through spatial metaphors. Cognition, 75, 1-28.
Carlson-Radvansky, L. A., & Radvansky, G. A. (1996). The influence of functional relations on spatial term selection. Psychological Science, 7, 56-60.
Carreiras, M., et al. (1996). The use of stereotypical gender information in constructing a mental model: Evidence from English and Spanish. Quarterly Journal of Experimental Psychology, 49A, 639-663.
Dahan, D., & Tanenhaus, M.K. (2002). Activation of conceptual representations during spoken word recognition. Abstracts of the Psychonomic Society, 7, 14.
Ericsson, K. A., & Kintsch, W. (1995). Long-term working memory. Psychological Review, 102, 211-245.
Farah, M. J., & McClelland, J. L. (1991). A computational model of semantic memory impairment: modality specificity and emergent category specificity. Journal of Experimental Psychology: General, 210, 339-357.
Fincher-Kiefer (2001). Perceptual components of situation models. Memory & Cognition, 29 , 336-343.
Fincher-Kiefer, R., et al. (1988). On the role of prior knowledge and task demands in the processing of text. Journal of Memory and Language, 27, 416-428.
Garrod, S. C., & Sanford, A. J. (1989). Discourse models as interfaces between language and the spatial world. Journal of Semantics, 6, 147-160.
Gernsbacher, M.A. (1990), Language comprehension as structure building. Hillsdale, NJ: Erlbaum.
Glenberg, A. M., & Kaschak, M. P. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9, 558-565.
Glenberg, A. M., et al. (1987) Mental models contribute to foregrounding during text comprehension. Journal of Memory and Language 26:69-83.
Givon, T. (1992), The grammar of referential coherence as mental processing instructions, Linguistics, 30, 5-55.
Goldman, S.R., et al. (1996). Extending capacity-constrained construction integration: Towards "smarter" and flexible models of text comprehension. Models of understanding text (pp. 73–113).
Goldstein, E.Bruce, Cognitive Psychology, Connecting Mind, Research, and Everyday Experience (2005) - ISBN 0-534-57732-6.
Graesser, A. C., Singer, M., & Trabasso, T. (1994), Constructing inferences during narrative text comprehension. Psychological Review, 101, 371-395.
Holland, John H. , et al. (1986) Induction.
Horton, W.S., Rapp, D.N. (in press). Occlusion and the Accessibility of Information in Narrative Comprehension. Psychonomic Bulletin & Review.
Isenberg, N., et al. (1999). Linguistic threat activates the human amygdala. Proceedings of the National Academy of Sciences, 96, 10456-10459.
Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.
John R. Koza, et al. (1996) Genetic Programming
Just, M. A., & Carpenter, P. A. (1992). A capacity hypothesis of comprehension: Individual differences in working memory. Psychological Review, 99, 122-149.
Kaup, B., & Zwaan, R.A. (in press). Effects of negation and situational presence on the accessibility of text information. Journal of Experimental Psychology: Learning, Memory, and Cognition.
Keefe, D. E., & McDaniel, M. A. (1993). The time course and durability of predictive inferences. Journal of Memory and Language, 32, 446-463.
Kintsch, W. (1988), The role of knowledge in discourse comprehension: A construction-integration model, Psychological Review, 95, 163-182.
Kintsch, W., & van Dijk, T. A. (1978), Toward a model of text comprehension and production, Psychological Review, 85, 363-394.
Kintsch, W. (1992), How readers construct situation models for stories: The role of syntactic cues and causal inferences. In A. E Healy, S. M. Kosslyn, & R. M. Shiffrin (Eds.), From learning processes to cognitive processes. Essays in honor of William K. Estes (Vol. 2, pp. 261 – 278).
Klatzky, R.L., et al. (1989). Can you squeeze a tomato? The role of motor representations in semantic sensibility judgments. Journal of Memory and Language, 28, 56-77.
Martin, A., & Chao, L. L. (2001). Semantic memory and the brain: structure and processes. Current Opinion in Neurobiology, 11, 194-201.
McRae, K., et al. (1997). On the nature and scope of featural representations of word meaning. Journal of Experimental Psychology: General, 126, 99-130.
Mehler, Jacques, & Franck, Susana. (1995) Cognition on Cognition - chapter 9
Miceli, G., et al. (2001). The dissociation of color from form and function knowledge. Nature Neuroscience, 4, 662-667.
Morrow, D., et al. (1987). Accessibility and situation models in narrative comprehension. Journal of Memory and Language, 26, 165-187.
Pulvermüller, F. (1999). Words in the brain's language. Behavioral and Brain Sciences, 22, 253-270.
Pulvermüller, F. (2002). A brain perspective on language mechanisms: from discrete neuronal ensembles to serial order. Progress in Neurobiology, 67, 85–111.
Schmalhofer, F., MacDaniel, D. Keefe (2002). A Unified Model for Predictive and Bridging Inferences
Schneider, W., & Körkel, J. (1989). The knowledge base and text recall: Evidence from a short-term longitudinal study. Contemporary Educational Psychology, 14, 382-393.
Singer, M., et al. (1992). Validation of causal bridging inferences. Journal of Memory and Language, 31, 507-524.
Spivey, M.J., et al. (2000). Eye movements during comprehension of spoken scene descriptions. Proceedings of the Twenty-second Annual Meeting of the Cognitive Science Society (pp. 487–492).
Stanfield, R.A. & Zwaan, R.A. (2001). The effect of implied orientation derived from verbal context on picture recognition. Psychological Science, 12, 153-156.
Talmy, Leonard,(2000) Toward a Cognitive Semantics - Vol. 1 - chapter1
van den Broek, P., et al. (1996). A "landscape" view of reading: Fluctuating patterns of activation and the construction of a memory representation. In B. K. Britton & A. C. Graesser (Eds.), Models of understanding text (pp. 165–187).
Van Dijk, T. A., and W. Kintsch. (1983).Strategies of discourse comprehension.
Yekovich, F.R., et al. (1990). The influence of domain knowledge on inferencing in low-aptitude individuals. In A. C. Graesser & G. H. Bower (Eds.), The psychology of learning and motivation (Vol. 25, pp. 175–196). New York: Academic Press.
Zwaan, R.A. (1996). Processing narrative time shifts. Journal of Experimental Psychology: Learning, Memory and Cognition, 22, 1196-1207
Zwaan, R.A. (2003). The Immersed Experiencer: Toward an embodied theory of language comprehension. In B.H. Ross (Ed.), The Psychology of Learning and Motivation, Vol. 44. New York: Academic Press.
Zwaan, R. A., et al. (1998). Situation-model construction during translation. Manuscript in preparation, Florida State University.
Zwaan, R. A., et al. ( 1995 ). The construction of situation models in narrative comprehension: An event-indexing model. Psychological Science, 6, 292-297.
Zwaan, R. A., et al. (1995). Dimensions of situation model construction in narrative comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 386-397.
Zwaan, R. A., & Radvansky, G. A. (1998). Situation models in language comprehension and memory. Psychological Bulletin, 123(2), 162-185.
Zwaan, R.A., et al. (2002). Do language comprehenders routinely represent the shapes of objects? Psychological Science, 13, 168-171.
Zwaan, R.A., & Yaxley, R.H. (a). Spatial iconicity affects semantic-relatedness judgments. Psychonomic Bulletin & Review.
Zwaan, R.A., & Yaxley, R.H. (b). Hemispheric differences in semantic-relatedness judgments. Cognition.
11.09: Links
Cognitive Psychology Osnabrück
Summer School course on Situation Models and Embodied Language Processes
Dr. Rolf A. Zwaan's Homepage with many Papers
International Hanse-Conference on Higher level language processes in the brain: Inference and Comprehension Processes 2003
University of Notre Dame Situation Model Research Group
12.01: Introduction
Most human cognitive abilities rely on or interact with what we call knowledge. How do people navigate through the world? How do they solve problems, how do they comprehend their surroundings, and on what basis do they make decisions and draw inferences? For all these questions, knowledge, the mental representation of the world, is part of the answer.
What is knowledge? According to Merriam-Webster's online dictionary, knowledge is "the range of one's information and understanding" and "the circumstance or condition of apprehending truth or fact through reasoning". Thus, knowledge is a structured collection of information that can be acquired through learning, perception or reasoning.
This chapter deals with the structures, both in human brains and in computational models, that represent knowledge about the world. First, the idea of concepts and categories as a model for storing and sorting information is introduced; then the concept of semantic networks and, closely related to these ideas, an attempt to explain the way humans store and handle information. Apart from the biological aspect, we also discuss knowledge representation in artificial systems, which can be helpful tools for storing and accessing knowledge and for drawing quick inferences.
After looking at how knowledge is stored and made available in the human brain and in artificial systems, we will take a closer look at the human brain with regard to hemispheric specialisation. This topic is connected not only to knowledge representation, since the two hemispheres differ in which type of knowledge is stored in each of them, but also to many other chapters of this book. Where, for example, is memory located, and which parts of the brain are relevant for emotions and motivation? In this chapter we focus on the general differences between the right and the left hemisphere. We consider the question of whether they differ in what and how they process information and give an overview of experiments that have contributed to scientific progress in this field.
12.02: Knowledge Representation in the Brain
Concepts and Categories
Concepts are mental representations that are essential for many cognitive functions, including memory, reasoning and using/understanding language. One function of concepts, the categorisation of knowledge, has been studied intensely, and in the course of this chapter we will focus on it.
Imagine you wake up every single morning and start wondering about all the things you have never seen before. Think about how you would feel if an unknown car parked in front of your house. You have seen thousands of cars but since you have never seen this specific car in this particular position, you would not be able to provide yourself with any explanation. Since we are able to find an explanation, the questions we need to ask ourselves are: How are we able to abstract from prior knowledge and why do we not start all over again if we are confronted with a slightly new situation? The answer is easy: We categorise knowledge. Categorisation is the process by which things are placed into groups called categories.
Categories are so-called "pointers of knowledge". You can imagine a category as a box in which similar objects are grouped and which is labeled with common properties and other general information about the category. Our brain does not only memorise specific examples of members of a category, but also stores general information that all members have in common and which therefore defines the category. Coming back to the car example, this means that our brain does not only store what your car, your neighbors' car and your friends' car look like, but also provides us with the general information that most cars have four wheels, need to be fueled, and so on. Because categorisation immediately allows us to get a general picture of a scene by recognising new objects as members of a category, it saves us much time and energy that we would otherwise have to spend investigating new objects. It helps us to focus on the important details in our environment and enables us to draw the correct inferences. To make this obvious, imagine yourself standing at the side of a road, wanting to cross it. A car approaches from the left. Now, the only thing you need to know about this car is the general information provided by the category: that it will run you over if you don't wait until it has passed. You don't need to care about the car's color, number of doors and so on. If you were not able to immediately assign the car to the category "car" and infer the necessity to step back, you would get hit because you would still be busy examining the details of that specific and unknown car. Categorisation has therefore proved very helpful for survival during evolution, and it allows us to navigate quickly and efficiently through our environment.
Definitional Approach
Take a look at the following picture! You will see four different kinds of cars. They differ in shape, color and other features, nonetheless you are probably sure that they are all cars.
What makes us so convinced about the identity of these objects? Maybe we can try to find a definition which describes all these cars. Do all of them have four wheels? No, some have only three. Do all cars run on petrol? No, that is not true for all cars either. Apparently we will fail to come up with a definition. The reason for this failure is that we have to generalise to make a definition; that might work for geometrical objects, but obviously not for natural things, which do not share completely identical features within one category. This is why it is problematic to find an appropriate definition. There are, however, similarities between members of one category, so what about this resemblance? The famous philosopher and linguist Ludwig Wittgenstein asked himself this question and claimed to have found a solution: he developed the idea of family resemblance, which means that members of a category resemble each other in several ways. For example, cars differ in shape, color and many other properties, but every car somehow resembles other cars. The following two approaches determine categories by similarity.
Prototype Approach
The prototype approach was proposed by Rosch in 1973. A prototype is an average of all members of a particular category; it is not an actual, existing member of the category. Even a large variation of features among members within one category can be accommodated by this approach. Differences among category members are expressed as different degrees of prototypicality: members that resemble the prototype very strongly are highly prototypical, while members that differ from the prototype in many ways have low prototypicality. There is a connection to the idea of family resemblance, and indeed some experiments showed that high prototypicality and high family resemblance are strongly correlated. The typicality effect describes the fact that highly prototypical members are recognised faster as members of a category. For example, participants had to decide whether statements like "A penguin is a bird." or "A sparrow is a bird." are true. Their decisions were much faster for "sparrow", a highly prototypical member of the category "bird", than for an atypical member such as "penguin". Participants also tend to name prototypical members first when asked to list objects of a category: in the bird example, they rather list "sparrow" than "penguin", which is quite an intuitive result. In addition, highly prototypical objects are strongly affected by priming.
Exemplar Approach
The typicality effect can also be explained by a further approach, which is concerned with exemplars. Like a prototype, an exemplar serves as a standard of comparison for the category. The difference is that exemplars are actually existing members of a category that a person has encountered in the past. Categorisation still relies on the similarity of an object to a standard; only that the standard here consists of many stored examples, each one called an exemplar, rather than a single average.
Again the typicality effect can be shown: objects that are similar to many examples we have encountered are classified faster than objects that are similar to only a few examples. You have seen sparrows more often in your life than penguins, so you should recognise a sparrow faster.
For both the prototype and the exemplar approach there are experiments whose results support one or the other. Some researchers argue that the exemplar approach has fewer problems with highly variable categories and with atypical cases within categories. The category "games", for instance, is difficult to capture with the prototype approach: how would you find an average case for all games, such as football, golf and chess? The reason may be that the exemplar approach stores "real" category members together with all the information about the individual exemplars, which can be useful when encountering other members later. Another dimension along which the approaches can be compared is how well they work for categories of different sizes: the exemplar approach seems to work better for smaller categories, while prototypes do better for larger categories.
Some researchers concluded that people may use both approaches: when we initially learn about a category, we average the exemplars we have seen into a prototype; it would be counterproductive in early learning to already take into account the exceptions a category has. As we get to know some of these exemplars in more detail, the exemplar information becomes stronger.
“We know generally what cats are (the prototype), but we know specifically our own cat the best (an exemplar).” (Minda & Smith, 2001)
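The computational difference between the two approaches can be made concrete with a small simulation. The following Python sketch is our own illustration (the feature values and the similarity measure are invented): it rates a new item either by its distance to the category average (prototype) or by its average distance to all stored members (exemplars). Both versions rate a sparrow-like item as more typical than a penguin-like one, mirroring the typicality effect.

```
import numpy as np

# Toy bird stimuli described by two made-up features, e.g. (size, flight ability)
exemplars = np.array([
    [0.20, 0.90],   # sparrow
    [0.30, 0.80],   # robin
    [0.25, 0.85],   # finch
    [0.90, 0.00],   # penguin (atypical member)
])

def prototype_similarity(item, exemplars):
    """Prototype view: compare the item to the average of all stored members."""
    prototype = exemplars.mean(axis=0)
    return -np.linalg.norm(item - prototype)

def exemplar_similarity(item, exemplars):
    """Exemplar view: average the similarity to every stored member."""
    distances = np.linalg.norm(exemplars - item, axis=1)
    return -distances.mean()

sparrow_like = np.array([0.22, 0.88])
penguin_like = np.array([0.85, 0.05])

for name, item in [("sparrow-like", sparrow_like), ("penguin-like", penguin_like)]:
    print(name,
          "prototype:", round(prototype_similarity(item, exemplars), 3),
          "exemplar:", round(exemplar_similarity(item, exemplars), 3))
```

Under both measures the sparrow-like item comes out as more similar, which is why reaction-time data alone cannot always separate the two theories.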
Hierarchical Organization of Categories
Now that we know about the different approaches to how we form categories, let us look at the structure of a category and the relationships between categories. The basic idea is that larger categories can be split up into more specific and smaller ones.
Rosch stated that by this process three levels of categorization are created: the superordinate level (e.g. "animal"), the basic level (e.g. "dog") and the subordinate level (e.g. "retriever").
Interestingly, the loss of information when moving from the basic up to the superordinate level is quite large, whereas the gain in information when moving from the basic down to the subordinate level is rather small. Scientists wanted to find out whether one of these levels is preferred over the others. They asked participants to name presented objects as quickly as possible. The result was that the subjects tended to use the basic-level name, which carries the optimal amount of stored information: a picture of a retriever would be named "dog" rather than "animal" or "retriever". It is important to note that the levels differ from person to person, depending on factors such as expertise and culture.
One factor that influences our categorization is knowledge itself. Experts pay more attention to specific features of objects in their area than non-experts do. For example, after being shown pictures of birds, bird experts tend to give the subordinate name (blackbird, sparrow), while non-experts just say "bird". The basic level in an expert's area of interest is lower than the basic level of a layperson. Knowledge and experience therefore affect categorization.
Another factor is culture. Imagine a people living in close contact with their natural environment, who therefore have greater knowledge about plants than, for example, students in Germany. If you ask the latter what they see in nature, they use the basic-level term "tree"; people living closer to nature, given the same task, tend to answer with lower-level concepts such as "oak tree".
Representation of Categories in the Brain
There is evidence that some areas in the brain are selective for different categories, but it is not very likely that there is a corresponding brain area for each category. Results of neurophysiological research point to a kind of double dissociation for living and non-living things: fMRI studies suggest that they are indeed represented in different brain areas. It is important to note that there is nevertheless much overlap between the brain areas activated by different categories. Moreover, going one step further down to the physiological level, there is a connection to mental categories, too. There seem to exist neurons that respond better to objects of a particular category, so-called "category-specific neurons". These neurons fire not only in response to one object but to many objects within one category. This leads to the idea that many neurons probably fire when a person recognises a particular object, and that the combined firing pattern of these neurons may represent the object.
Semantic Networks
The "Semantic Network approach" proposes that concepts of the mind are arranged in networks, in other words, in a functional storage-system for the `meanings' of words. Of course, the concept of a semantic net is very flexible. In a graphical illustration of such a semantic net, concepts of our mental dictionary are represented by nodes, which in this way represent a piece of knowledge about our world.
The properties of a concept can be placed, or "stored", next to the node representing that concept. Links between the nodes indicate the relationships between the concepts. The links can not only show that a relationship exists; they can also indicate the kind of relation, for example by their length.
Every concept in the net stands in a dynamic relation to other concepts, which may have prototypically similar characteristics or functions.
Collins and Quillian's Model
Semantic Network according to Collins and Quillian with nodes, links, concept names and properties.
One of the first scientists to think about structural models of human memory that could be run on a computer was Ross Quillian (1967). Together with Allan Collins, he developed a semantic network of related categories with a hierarchical organisation.
In the picture on the right-hand side, Collins and Quillian's network with properties added at each node is shown. As already mentioned, the nodes forming the skeleton are interconnected by links, and concept names are attached to the nodes. As in the paragraph "Hierarchical Organisation of Categories", general concepts are at the top and more particular ones at the bottom. Looking at the concept "car", one gets the information that a car has four wheels, has an engine and has windows, and furthermore moves around, needs fuel and is man-made.
These pieces of information must be stored somewhere. It would take too much space if every detail had to be stored at every level. So the information about cars in general is stored at the "car" node, and further information about specific cars, e.g. a BMW, is stored at the lower level, where you do not need the fact that the BMW has four wheels if you already know that it is a car. This way of storing shared properties at a higher-level node is called cognitive economy.
To avoid redundancy, Collins and Quillian conceived of this as an inheritance principle: information that is shared by several concepts is stored at the highest parent node to which it applies, so that all child nodes below that node can also access the information about those properties. There are exceptions, however. Sometimes a special car has not four wheels but three; this specific property is then stored at the child node.
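A minimal Python sketch of this inheritance principle (the concepts and properties are made up for illustration; Collins and Quillian did not, of course, specify an implementation): each property is stored once at the highest node to which it applies, a lookup walks up the parent links, and an exception stored at a child node overrides the inherited value.

```
# Each node stores its parent and only its own (non-inherited) properties.
network = {
    "vehicle": {"parent": None,      "properties": {"moves around": True}},
    "car":     {"parent": "vehicle", "properties": {"wheels": 4, "needs fuel": True}},
    "BMW":     {"parent": "car",     "properties": {}},            # inherits everything
    "trike":   {"parent": "car",     "properties": {"wheels": 3}}, # exception stored locally
}

def lookup(concept, prop):
    """Walk up the hierarchy until the property is found (cognitive economy)."""
    while concept is not None:
        node = network[concept]
        if prop in node["properties"]:
            return node["properties"][prop]
        concept = node["parent"]
    return None

print(lookup("BMW", "wheels"))        # 4, inherited from "car"
print(lookup("trike", "wheels"))      # 3, the local exception wins
print(lookup("BMW", "moves around"))  # True, inherited from "vehicle"
```

Note that retrieving an inherited property requires more steps up the hierarchy, which is exactly the intuition behind the longer verification times discussed next.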
The logical structure of the network is convincing, since it predicts that the time needed to retrieve a concept correlates with distances in the network. This correlation was demonstrated with the sentence-verification technique: in experiments, participants had to answer statements about concepts with "yes" or "no". It indeed took longer to say "yes" when the nodes bearing the concepts were further apart.
The phenomenon that adjacent concepts become activated is called spreading activation. These concepts are far more easily accessed by memory; they are "primed". This was studied and supported by David Meyer and Roger Schvaneveldt (1971) with a lexical-decision task: participants had to decide whether the items of letter-string pairs were words or non-words. They were faster at identifying real word pairs if the concepts of the two words were close together in the assumed network.
While the model can answer many questions, it has some flaws.
The typicality effect is one of them. It is known that "reaction times for more typical members of a category are faster than for less typical members" (MITECS). This contradicts the assumption of Collins' and Quillian's model that distance in the network determines reaction time. It was also determined experimentally that some shared properties are stored at specific nodes after all, which calls cognitive economy into question. Furthermore, there are examples of faster concept retrieval even though the distances in the network are longer.
These points led to another version of the semantic network approach: the Collins and Loftus model.
Collins and Loftus Model
Collins and Loftus (1975) tried to resolve these problems, among other extensions, by using shorter or longer links depending on relatedness and by interconnecting concepts that were formerly not directly linked. The former hierarchical structure was also replaced by a structure individual to each person. As shown in the picture on the right, the new model represents interpersonal differences, such as those acquired during a human's lifespan; they manifest themselves in the layout and the varying lengths of the links between the same concepts.
An example: The concept "vehicle" is connected to car, truck or bus by short links, and to fire engine or ambulance with longer links.
After these enhancements, the model is so powerful that some researchers criticised it for being too flexible. In their opinion, the model is no longer a scientific theory, because it cannot be disproved. Furthermore, we do not know how long these links actually are in our heads: how should they be measured, and could they be measured at all?
Connectionist Approach
Every concept in a semantic net stands in a dynamic relation to other concepts, which can have prototypically similar characteristics or functions. The neural networks in the brain are organised similarly. Furthermore, it is useful to include the features of "spreading activation" and "parallel distributed activity" in such a semantic net in order to cope with the complexity of our very sophisticated environment.
Basic Principles of Connectionism
Connectionists achieved this by modeling their networks on the neural networks of the nervous system. Every node of the diagram represents a neuron-like processing unit. These units can be divided into three subgroups: input units, which are activated by stimulation from the environment; hidden units, which receive signals from input units and pass them on to output units; and output units, whose pattern of activation represents the initial stimulus. Excitatory and inhibitory connections between units, just like synapses in the brain, allow input to be analyzed and evaluated. For computing the outcome of such a system, a certain "weight" is attached to each connection, mimicking the strength of a synapse in the human nervous system.
It needs to be emphasized that connectionist networks are not models of how the nervous system actually works. They are a hypothetical approach to representing categories in network patterns. Another name for the connectionist approach is the parallel distributed processing (PDP) approach, since processing takes place along parallel lines and the output is distributed across many units.
Operation of Connectionist Networks
First a stimulus is presented to the input units. The links then pass the signal on to the hidden units, which distribute it to the output units via further links. On the first trial, the output units show a wrong pattern. After many repetitions, the pattern finally becomes correct. This is achieved by back propagation: the error signals are sent back to the hidden units and the signals are reprocessed. During these repeated trials, the connection weights are gradually calibrated on the basis of the error signals, in order to eventually produce the right output pattern. After having achieved a correct pattern for one stimulus, the system is ready to learn a new concept.
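The following NumPy sketch illustrates this cycle on a toy problem; the stimuli, network size, learning rate and number of repetitions are arbitrary choices for illustration, not a model of any particular brain process. A signal flows from input units through hidden units to output units, the error at the output is propagated back, and the weights are nudged a little on every repetition until the output pattern matches the target.

```
import numpy as np

rng = np.random.default_rng(1)

# Toy stimuli and target patterns (made up): output 1 for the two middle
# patterns and 0 otherwise (the classic XOR category structure).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

# Connection weights and biases, initialised randomly
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass: input units -> hidden units -> output units
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Back propagation: error signals flow backwards and adjust the weights
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2))  # after training, the outputs should approach the targets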
Evaluating Connectionism
The PDP approach is important for studies of knowledge representation. It is far from perfect, but it is moving in the right direction. The learning process enables the system to make generalizations, because similar concepts create similar patterns: after learning one car, the system can recognize similar patterns as other cars, or may even predict what other cars look like. Furthermore, the system is robust against damage. Damage to single units will not cause the system's total breakdown, but will only delete the patterns that use those units. This is called graceful degradation and is often found in patients with brain lesions. These two arguments lead to a third: the PDP approach is organized similarly to the human brain, and some effective computer programs developed on this basis have been able to predict the consequences of human brain damage.
On the other hand, the connectionist approach is not without problems. Formerly learned concepts can be overwritten by new concepts. In addition, PDP cannot explain processes more complex than concept learning. Neither can it explain the phenomenon of rapid learning, which does not require extensive training. It is assumed that rapid learning takes place in the hippocampus, while gradual conceptual learning is located in the cortex.
In conclusion, the PDP approach can explain some features of knowledge representation very well but fails for some complex processes.
Mental Representation
There are different theories on how living beings, especially humans, encode information into knowledge. We may think of diverse mental representations of the same object. When reading the written word "car", we call this a discrete symbol: it matches all imaginable cars and is therefore not bound to one special vehicle. It is an abstract, or amodal, representation. This is different if we instead see a picture of a car, say a red sports car. Now we speak of a non-discrete symbol: an image that appears in front of our inner eye and that fits only cars of sufficiently similar appearance.
Propositional Approach
The Propositional Approach is one possible way to model mental representations in the human brain. It works with discrete symbols which are strongly interconnected. The use of discrete symbols necessitates clear definitions of each symbol, as well as information about the syntactic rules and the context dependencies under which the symbols may be used. The symbol "car" is only comprehensible for people who understand English and who have seen a car before and therefore know what a car is. The Propositional Approach is an explicit way to explain mental representation.
Definitions of propositions differ between fields of research and are still under discussion. One possibility is the following: "Traditionally in philosophy a distinction is made between sentences and the ideas underlying those sentences, called propositions. A single proposition may be expressed by an almost unlimited number of sentences. Propositions are not atomic, however; they may be broken down into atomic elements called 'concepts'."
In addition, mental propositions deal with the storage, retrieval and interconnection of information as knowledge in the human brain. There is a large debate about whether the brain really works with propositions, or whether it processes information to and from knowledge in another way, or perhaps in more than one way.
Imagery Approach
One possible alternative to the Propositional Approach is the Imagery Approach. Since here the representation of knowledge is understood as the storage of images as we see them, it is also called the analogical or perceptual approach. In contrast to the Propositional Approach, it works with non-discrete symbols and is modality-specific. It is an implicit approach to mental representation. The picture of the sports car implicitly includes seats of some kind; if it is additionally mentioned that they are off-white, the image changes to a more specific one. How two non-discrete symbols combine is not as predetermined as it is for discrete symbols: the picture of the off-white seats may exist without the red car around it, just as the red car did before without the off-white seats. The Imagery and the Propositional Approaches are also discussed in chapter 8.
12.03: Computational Knowledge Representation
Computational knowledge representation is concerned with how knowledge can be represented symbolically and how it can be manipulated in automated ways. Almost all of the theories mentioned above evolved in symbiosis with computer science. On the one hand, computer science uses the human brain as an inspiration for computational systems, on the other hand, artificial models are used to further our understanding of the biological basis of knowledge representation.
Knowledge representation is connected to many other fields related to information processing, e.g. logic, linguistics, reasoning, and the philosophical aspects of these fields. In particular, it is one of the crucial topics of Artificial Intelligence, as it deals with information encoding, storing and usage for computational models of cognition.
There are three main points that need to be addressed with regard to computational knowledge representation: The process, the formalisms and the applications of knowledge engineering.
Knowledge Engineering
The process of developing computational knowledge-based systems is called knowledge engineering. This process involves assessing the problem, developing a structure for the knowledge base and implementing actual knowledge into the knowledge base. The main task for knowledge engineers is to identify an appropriate conceptual vocabulary.
There are different kinds of knowledge, for instance rules of games, attributes of objects and temporal relations, and each type is expressed best by its own specific vocabulary. Related conceptual vocabularies that are able to describe objects and their relationships are called ontologies. These conceptual vocabularies are highly formal, and each is able to express meaning in a specific field of knowledge. They are used for queries and assertions to knowledge bases and make the sharing of knowledge possible. In order to represent different kinds of knowledge in one framework, Jerry Hobbs (1985) proposed the principle of ontological promiscuity, whereby several ontologies are mixed together to cover a range of different knowledge types.
A query to a system that represents knowledge about a world made of everyday items, and that can perform actions in this world, may look like this: "Take the cube from the table!" This query could be processed as follows. First, since we live in a temporal world, the action needs to be represented in a way that can be broken down into successive steps. Secondly, we make general statements about the rules of our system, for example that gravitational forces have a certain effect. Finally, we work out the chain of tasks that have to be done to take the cube from the table: 1) reach out for the cube with the hand, 2) grab it, 3) raise the hand with the cube, and so on. Logical reasoning is the perfect tool for this task, because a logical system can also recognise whether the task is possible at all.
There is a problem with the procedure described above, called the frame problem. The system in the example deals with changing states: the actions that take place change the environment, that is, the cube changes its place. Yet so far the system makes no propositions about the table. We need to make sure that after the cube is picked up, the table does not change its state: it should not disappear or break down. This could happen, since the table is no longer needed; the system states that the cube is in the hand and omits any information about the table. In order to tackle the frame problem, special axioms or similar devices have to be stated. The frame problem has not been solved completely, but there are different approaches towards a resolution. Some add spatial and temporal boundaries of objects to the system's world (Hayes 1985). Others try more direct modeling and perform transformations on state descriptions: before the transformation the cube is on the table; after the transformation the table still exists, but independently of the cube.
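To make the direct-modeling idea concrete, here is a small Python sketch (the state representation and the action are invented for illustration, not a method from the literature): an action produces a new state description by copying everything it does not mention, so the table keeps existing after the cube has been picked up.

```
# The world as an explicit state description
state = {
    "cube":  {"location": "table"},
    "table": {"exists": True},
    "hand":  {"holding": None},
}

def pick_up(state, obj):
    # Copy the whole state first: everything not mentioned by the action
    # carries over unchanged (this plays the role of the frame axioms).
    new_state = {k: dict(v) for k, v in state.items()}
    new_state[obj]["location"] = "hand"
    new_state["hand"]["holding"] = obj
    return new_state

after = pick_up(state, "cube")
print(after["cube"]["location"])  # 'hand'
print(after["table"])             # unchanged: {'exists': True}
```

The cost of this simplicity is obvious: the copy must enumerate the whole world, which is exactly what makes the frame problem hard for large, richly described domains.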
Knowledge Representation Formalisms
The type of knowledge representation formalism determines how information is stored. Most knowledge representation applications are developed for a specific purpose, for example a digital map for robot navigation or a graph-like account of events for visualising stories.
Each knowledge representation formalism needs a strict syntax, semantics and inference procedure in order to be clear and computable. Most formalisms use the following devices to express information more clearly: ideas from the semantic network approach, hierarchies of concepts (e.g. vehicle -> car -> truck) and property inheritance (e.g. red cars have four wheels since cars have four wheels). Some formalisms make it possible to add new information to the system without creating inconsistencies, and some allow a "closed-world" assumption, under which anything not stated in the knowledge base is taken to be false. For example, if the information that there is gravitation on Earth were omitted, a closed-world system would wrongly conclude that there is none.
A general problem for knowledge representation formalisms is that expressive power and efficient deductive reasoning trade off against each other. If a formalism has great expressive power, it can describe a wide range of different information, but it cannot draw inferences from given data efficiently; an example is second-order logic. Conversely, a formalism with an efficient inference procedure can draw conclusions quickly, but can describe only a restricted range of information. An example is propositional logic restricted to Horn clauses: a Horn clause is a disjunction of literals with at most one positive literal, which admits a very good decision procedure for inference but cannot express generalisations such as disjunctive conclusions. The logic programming language Prolog is based on Horn clauses. The formalism therefore has to be tailored to the application of the KR system by striking a compromise between expressiveness and deductive complexity: in order to gain deductive power, expressiveness is sacrificed, and vice versa.
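A flavour of why Horn clauses infer so well: because each rule has a single positive conclusion, a simple forward-chaining loop suffices. The following Python sketch (the facts and rules are invented toy knowledge) applies every rule whose body is already known, until nothing new can be concluded.

```
# Each Horn rule is (body, head): if every fact in the body is known,
# the head may be concluded.
rules = [
    ({"is_car"}, "has_wheels"),
    ({"is_car"}, "has_engine"),
    ({"has_wheels", "has_engine"}, "can_drive"),
]
facts = {"is_car"}

changed = True
while changed:
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:  # body is a subset of known facts
            facts.add(head)
            changed = True

print(facts)  # all four facts are now derived, including 'can_drive'
```

This loop runs in time polynomial in the number of rules and facts, which is exactly the kind of tractable inference that more expressive formalisms give up.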
With the growth of the field of knowledge bases, many different standards have been developed. They all have different syntactic restrictions. To allow intertranslation, different "interchange" formalisms have been created. One example is the Knowledge Interchange Format which is basically first-order set theory plus LISP (Genesereth et al. 1992).
Applications of Knowledge Representation
Computational knowledge representation is mostly used not as a model of cognition but to make pools of information accessible, i.e. as an extension of database technology. In these cases general rules and models are not needed: with growing storage media, it has become feasible to create simple knowledge bases stating all the specific facts. The information is stored as sentential knowledge, that is, knowledge saved in the form of sentences comparable to propositions and program code. Knowledge is here seen as a reservoir of useful information rather than as supporting a model of cognitive activity. Such "compute-intensive" representations, which simply list all the particular facts rather than stating general rules, allow the use of statistical techniques such as Markov simulation, but seem to abandon any claim to psychological plausibility.
Artificial Intelligence
Artificial intelligence (AI) is intelligence added to a system, defined as the intelligence of an artificial entity; this system is generally a computer. Intelligence is created and incorporated into a machine in order to enable it to do work the way human beings can. Fields that use artificial intelligence include expert systems, computer games, fuzzy logic, artificial neural networks and robotics. Many things that seem difficult for human intelligence are relatively unproblematic for computers, for example transforming equations, solving integrals, or playing chess or backgammon. On the other hand, things that seem to demand little intelligence from humans are still difficult to realise computationally, for example object and face recognition or playing football.
Although AI has a strong connotation of science fiction, it forms a very important branch of computer science, dealing with behaviour, learning and intelligent adaptation in machines. Research in AI involves making machines automate tasks that require intelligent behaviour. Examples include control, planning and scheduling, answering customer questions and making diagnoses, as well as recognising handwriting, speech and faces. These have become separate disciplines, focused on providing solutions to real-life problems. AI systems are now often used in economics, medicine, engineering and the military, and have been built into several home computer and video game applications. Artificial intelligence research not only wants to understand what an intelligent system is, but also to construct one. There is no fully satisfactory definition of "intelligence": (1) intelligence is the ability to acquire knowledge and use it, or (2) intelligence is what is measured by an intelligence test.
Broadly speaking, AI is divided into two schools: conventional AI and computational intelligence (CI). Conventional AI mostly involves methods now classified as machine learning, characterised by formalism and statistical analysis; it is also known as symbolic AI, logical AI, or GOFAI (Good Old-Fashioned Artificial Intelligence). Its methods include:

1. Expert systems: apply reasoning capabilities to reach conclusions. An expert system can process a large amount of known information and provide conclusions based on it.
2. Case-based reasoning
3. Bayesian networks
4. Behaviour-based AI: a modular method for building AI systems by hand

Computational intelligence involves iterative development or learning (e.g. tuning parameters, as in connectionist systems). The learning is based on empirical data and is associated with non-symbolic AI and soft computing. Its main methods include:

1. Neural networks: systems with very strong pattern-recognition capabilities
2. Fuzzy systems: techniques for reasoning under uncertainty, widely used in modern industrial and consumer-product control systems
3. Evolutionary computation: applies biologically inspired concepts such as populations, mutation and "survival of the fittest" to generate increasingly better solutions to a problem; these methods divide mainly into evolutionary algorithms (e.g. genetic algorithms) and swarm intelligence (e.g. ant algorithms)

With hybrid intelligent systems, attempts have been made to combine these two groups: expert inference rules can be generated through neural networks, or production rules can be obtained from statistical learning, as in ACT-R. A promising newer approach, intelligence amplification, tries to achieve artificial intelligence in an evolutionary development process, as a side effect of amplifying human intelligence through technology.
History of artificial intelligence

In the early 17th century, René Descartes argued that an animal's body is nothing but a complicated machine. Blaise Pascal invented the first mechanical digital calculating machine in 1642. In the 19th century, Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines. Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which overhauled formal logic. Warren McCulloch and Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity" in 1943, which laid the foundation for neural networks.

The 1950s were a period of active effort in AI. The first working AI programs were written in 1951 to run on the Ferranti Mark I machine at the University of Manchester (UK): a draughts-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz. John McCarthy coined the term "artificial intelligence" at the first conference devoted to the subject, in 1956; he also invented the Lisp programming language. Alan Turing introduced the "Turing test" as a way to operationalise tests of intelligent behaviour. Joseph Weizenbaum built ELIZA, a chatterbot implementing Rogerian psychotherapy. During the 1960s and 1970s, Joel Moses demonstrated the power of symbolic reasoning for integration problems in the Macsyma program, the first successful knowledge-based program in the field of mathematics. Marvin Minsky and Seymour Papert published Perceptrons, which demonstrated the limits of simple neural networks, and Alain Colmerauer developed the computer language Prolog. Ted Shortliffe demonstrated the power of rule-based systems for knowledge representation and inference in medical diagnosis and therapy, in what is sometimes called the first expert system. Hans Moravec developed the first computer-controlled vehicle to negotiate cluttered obstacle courses autonomously.

In the 1980s, neural networks became widely used thanks to the back-propagation algorithm, first described by Paul John Werbos in 1974. In 1982, physicists such as Hopfield used statistical techniques to analyse the storage and optimisation properties of neural networks, and the psychologists David Rumelhart and Geoff Hinton continued research on neural network models of memory. In 1985 at least four research groups rediscovered the back-propagation learning algorithm, which was successfully applied in computer science and psychology. The 1990s marked large gains in various fields of AI and demonstrations of various applications. Most notably, Deep Blue, a chess-playing computer, defeated Garry Kasparov in a famous six-game match in 1997. DARPA stated that the costs saved by applying AI methods to scheduling units in the first Gulf War repaid the US government's entire investment in AI research since 1950. The DARPA Grand Challenge, which began in 2004 and continues to this day, is a race for a $2 million prize in which vehicles drive themselves without communication with humans, using GPS, computers and sophisticated sensors, across several hundred miles of challenging desert terrain.
12.04: Hemispheric Distribution
After having dealt with how knowledge is stored in the brain, we now turn to the question of whether the brain is specialised and, if it is specialised, which functions are located where and which knowledge is present in which hemisphere. These questions can be subsumed under the topic “hemispheric specialisation” or “lateralisation of processing” which looks at the differences in processing between the two hemispheres of the human brain.
Differences between the hemispheres can be traced back as far as 3.5 million years. Evidence for this comes from fossils of australopithecines (an ancient ancestor of Homo sapiens). Because these differences have been present for so long and survived selective pressure, they must be useful in some way for our cognitive processes.
Differences in Anatomy and Chemistry
Although at first glance the two hemispheres look identical, they differ in various ways.
Concerning anatomy, some areas are larger and the tissue contains more dendritic spines in one hemisphere than in the other. An example is what used to be called "Broca's area" in the left hemisphere. This area, which is, among other things, important for speech production, shows greater branching in the left hemisphere than in the corresponding area of the right hemisphere. Because of the left hemisphere's importance for language, with which we will deal later, one can conclude that anatomical differences have consequences for lateralisation of function.
Neurochemistry is another domain in which the hemispheres differ: the left hemisphere is dominated by the neurotransmitter dopamine, whereas the right hemisphere shows higher concentrations of norepinephrine. Theories suggest that modules specialised for particular cognitive processes are distributed over the brain according to the neurotransmitter they need. Thus, a cognitive function relying on dopamine would be located in the left hemisphere.
The Corpus Callosum
The two hemispheres are interconnected via the corpus callosum, the major cortical connection. With its 250 million nerve fibres it is like an Autobahn for neural data connecting the two hemispheres. There are in fact smaller connections between the hemispheres, but these are little paths in comparison. All detailed higher-order information must pass through the corpus callosum when being transferred from one hemisphere to the other. The transfer time, which can be measured with ERP, lies between 5 and 20 ms.
Historic Approaches
Hemispheric specialisation has been of interest since the days of Paul Broca and Karl Wernicke, who discovered the importance of the left hemisphere for speech in the 1860s. Broca examined a number of patients who could not produce speech but whose understanding of language was not impaired, whereas Wernicke examined patients with the opposite symptoms (i.e. who could produce speech but did not understand anything). Both Broca and Wernicke found that their patients' brains had damage to distinct areas of the left hemisphere.
Because in those days language was seen as the cognitive process superior to all others, the left hemisphere was believed to be superior to the right, a view expressed in the "cerebral dominance theory" developed by J. H. Jackson. The right hemisphere was seen as a "spare tire [...] having few functions of its own" (Banich, p. 94). This view was not challenged until the 1930s. In that decade and the following ones, research dramatically changed this picture. Of special importance for showing the role of the right hemisphere was Sperry, who conducted several experiments in 1974 for which he won the Nobel Prize in Physiology or Medicine in 1981.
Experiments with Split-Brain Patients
Sperry's experiments involved people with "split-brain syndrome", the result of a commissurotomy, an operation in which the corpus callosum is sectioned so that communication between the hemispheres is severed. With his pioneering experiments, Sperry wanted to find out whether the left hemisphere really plays such an important role in speech processing as suggested by Broca and Wernicke.
Sperry used different experimental designs in his studies, but the basic assumption behind all experiments of this type was that perceptual information received on one side of the body is processed in the contralateral hemisphere of the brain. In one of the experiments, blindfolded subjects had to recognise objects by touching them with one hand only. Sperry then asked the patients to name the object they felt and found that they could not name it when touching it with the left hand (which is connected to the right hemisphere). The question that arose was whether this inability was due to the right hemisphere really being a "spare tire" or due to something else. Sperry therefore changed the design of his experiment so that patients now had to show that they recognised the objects by using them the right way: for example, if they recognised a pencil, they would use it to write. With this changed design, no difference in performance between the two hands was found.
In a different experiment conducted by Sperry et al., the word "sky" was shown to one visual field and "scraper" to the other. The patients then had to draw with one hand the whole word they had seen. They were not able to synthesise this to "skyscraper"; instead they drew, for instance, a scraper overlapped by a cloud. It was concluded that each hemisphere took control of the hand to draw what it had seen.
Experiments with Patients with other Brain-Lesions
Other experiments have been conducted to gain more knowledge about hemispheric specialisation. They involved epileptic individuals who were about to undergo surgery in which parts of one hemisphere would be removed. Before the surgery it was important to find out which hemisphere was responsible for speech in the individual. This was done using the Wada technique, in which a barbiturate is injected into one of the arteries supplying the brain with blood. Shortly after the injection, the contralateral side of the body is paralysed. If the person is still able to speak, the anaesthetised hemisphere is not responsible for speech production in this individual. With this technique it was estimated that 95% of all adult right-handers use their left hemisphere for speech.
Research with people who have brain lesions or have undergone a commissurotomy has some major drawbacks. The reason for such surgery is usually epileptic seizures, so it is possible that these brains are not typical or received damage to other areas during the surgery. Also, these studies were performed with very limited numbers of subjects, so the statistical reliability may not be high.
Experiments with Neurologically Intact Individuals
In addition to experiments with brain-severed patients, studies with neurologically intact individuals have been conducted to measure perceptual asymmetries. These usually employ one of three methods: the "divided visual field technique", "dichaptic presentation" and "dichotic presentation". Each again rests on the basic assumption that perceptual information received on one side of the body is processed in the contralateral hemisphere.
Highly simplified picture of the visual pathway.
The divided visual field technique is based on the fact that the visual field can be divided into the right (RVF) and left visual field (LVF). Each visual field is processed independently of the other, in the contralateral hemisphere. The technique includes two different experimental designs: the experimenter can present one picture in just one of the visual fields and have the subject respond to this stimulus, or present two different pictures, one in each visual field.
A problem with the divided visual field technique is that the stimulus must be presented for less than 200 ms, because this is how long the eyes can look at one point before shifting the visual field.
In the dichaptic presentation technique, the subject is presented with two objects at the same time, one in each hand (cf. Sperry's experiments).
The dichotic presentation technique enables researchers to study the processing of auditory information: different information is presented simultaneously to each ear. Experiments with these techniques found that a sensory stimulus is processed 20 to 100 ms faster when it is initially directed to the hemisphere specialised for that task, and that the response is then 10% more accurate.
Three hypotheses have been offered to explain this: the direct access theory, the callosal relay model and the activating-orienting model. The direct access theory assumes that information is processed in the hemisphere to which it is initially directed; this may result in less accurate responses if that hemisphere is the unspecialised one. The callosal relay model states that information initially directed to the wrong hemisphere is transferred to the specialised hemisphere over the corpus callosum; this transfer is time-consuming and causes the loss of information. The activating-orienting model assumes that a given input activates the specialised hemisphere, and that this activation then places additional attention on the side contralateral to the activated hemisphere, "making perceptual information on that side even more salient" (Banich).
Common Results
All the experiments mentioned above share some basic findings: the left hemisphere is superior at verbal tasks such as processing speech, speech production and letter recognition, whereas the right hemisphere excels at non-verbal tasks such as face recognition, tasks that involve spatial skills such as judging line orientation, or distinguishing different pitches of sound. This is evidence against the cerebral dominance theory, which declared the right hemisphere a spare tire. In fact, both hemispheres are distinct and excel at different tasks, and neither can be removed without a strong impact on cognitive performance.
Although the hemispheres are so distinct and are experts at their assigned functions, each also has limited abilities to perform the tasks for which the other hemisphere is specialised. The picture above gives an overview of which hemisphere gives rise to which ability.
Differences in Processing
Experiment on local and global processing with patients with left- or right-hemisphere damage
There are two sets of approaches to the question of hemispheric specialisation. One set of theories approaches the topic by asking "What tasks is each hemisphere specialised for?". Theories in this set link the hemispheres' different abilities to process sensory information to their different abilities for higher cognitive skills. One such theory is the "spatial frequency hypothesis", which states that the left hemisphere is important for fine-detail analysis and high spatial frequencies in visual images, whereas the right hemisphere is important for low spatial frequencies. This is the approach we have pursued above.
The other approach does not focus on what type of information is processed by each hemisphere but rather on how each hemisphere processes information. This set of theories assumes that the left hemisphere processes information in an analytic, detail- and function-focused way and that it places more importance on temporal relations between information, whereas the right hemisphere is believed to go about the processing of information in a holistic way, focusing on spatial relations and on appearance rather than on function.
The picture above shows an exemplary response to different target stimuli in an experiment on global and local processing with patients who have right- or left-hemisphere damage. Patients with damage to the right hemisphere often show a lack of attention to the global form, but recognise details with no problem; for patients with left-hemisphere damage, the reverse is true. This experiment supports the assumption that the hemispheres differ in the way they process information.
Interaction of the Hemispheres
Why is transfer between the hemispheres needed at all if they are so distinct in functioning, anatomy and chemistry, and if the transfer degrades the quality of information and takes time? The reason is that the hemispheres, although so different, do interact. This interaction has important advantages: as studies by Banich and Belger have shown, it may "enhance the overall processing capacity under high demand conditions" (Banich). (Under low-demand conditions transfer makes less sense, because the cost of transferring the information to the other hemisphere is higher than the advantage of parallel processing.)
The two hemispheres can interact over the corpus callosum in different ways. This is measured by first computing the performance of each hemisphere individually and then measuring the overall performance of the whole brain. In some tasks one hemisphere may dominate the other, so that the overall performance is as good or bad as that of one single hemisphere. Surprisingly, the dominating hemisphere may well be the less specialised one, providing another example of a situation where parallel processing is less effective than processing in just one half of the brain.
Another way of how the hemispheres interact is that overall processing is an average of performance of the two individual hemispheres.
The third, most surprising way the hemispheres can interact is that when performing a task together, they behave totally differently than when performing the same task individually. This can be compared to the social behaviour of people: individuals behave differently in groups than they would by themselves.
Individual Factors Influencing Lateralisation
After having looked at hemispheric specialisation from a general point of view, we now want to focus on differences between individuals. Aspects that may have an impact on lateralisation are age, gender and handedness.
Age could be one factor determining to what extent each hemisphere is used for specific tasks. Researchers have suggested that lateralisation develops with age until puberty; thus infants should not have functionally lateralised brains. Here are four pieces of evidence against this hypothesis:
Infants already show the same brain anatomy as adults: the brain of a newborn is already lateralised. Following the hypothesis that anatomy is linked to function, this means that lateralisation does not develop at a later period in life.
Differences in perceptual asymmetries, i.e. superior performance at processing verbal vs. non-verbal material in the different hemispheres, do not change in children aged 5 to 13: children aged 5 process the material the same way 13-year-olds do.
Experiments with 1-week-old infants showed increased interest in verbal material when it was presented to the right ear rather than the left, and increased interest in non-verbal material when it was presented to the left ear. The infants' interest was measured by the frequency of sucking on a soother.
Although children who underwent hemispherectomy (the surgical removal of one hemisphere) do develop the cognitive skills of the missing hemisphere (in contrast to adults or adolescents, who can only partly compensate for missing brain parts), they do not develop these skills to the same extent as a child who lost the other hemisphere. For example, a child whose right hemisphere has been removed will develop spatial skills, but not to the extent of a child whose left hemisphere has been removed and who thus still possesses the right hemisphere.
Handedness is another factor that might influence brain lateralisation. There is statistical evidence that left-handers have a different brain organisation than right-handers. About 10% of the population is left-handed. Whereas 95% of right-handed people process verbal material predominantly in the left hemisphere, no such high figure for verbal superiority of one hemisphere exists for left-handers: 70% of left-handers process verbal material in the left hemisphere, 15% process verbal material in the right hemisphere (the functions of the hemispheres are simply switched around), and the remaining 15% are not lateralised, meaning that they process language in both hemispheres. Thus, as a group, left-handers seem to be less lateralised; however, a single left-handed individual can be just as lateralised as the average right-hander.
Gender is also an aspect that is believed to have an impact on hemispheric specialisation. In animal studies, it was found that hormones create brain differences between the genders that are related to reproductive functions. In humans it is hard to determine to what extent differences are really caused by hormones and to what extent culture and schooling are responsible.
One brain area for which a difference between the genders has been reported is the corpus callosum. Although one study found that it is larger in women than in men, these results could not be replicated. Instead it was found that the posterior part of the corpus callosum is more bulbous in women than in men. This might, however, be related to the fact that the average woman has a smaller brain than the average man, so the bulbousness of the posterior section might be related to brain size rather than gender.
In experiments that measure performance in various tasks between the genders the cultural aspect is of great importance because men and women might use different problem solving strategies due to schooling.
Summary
Although the two hemispheres look like each other's mirror images at first glance, this impression is misleading. On closer inspection, the hemispheres differ not only in their anatomy and chemistry, but most importantly in their function. Although both hemispheres can perform all basic cognitive tasks, there exists a specialisation for specific cognitive demands. In most people, the left hemisphere is an expert at verbal tasks, whereas the right hemisphere has superior abilities in non-verbal tasks. Despite this functional distinctness, the hemispheres communicate with each other via the corpus callosum.
It is exactly this communication that is cut in split-brain patients, a fact utilised by Sperry's experiments. These stand out among experiments measuring perceptual asymmetries because they were the first to refute the cerebral dominance theory, and they earned recognition through the Nobel Prize in Physiology or Medicine.
Individual factors such as age and gender seem to have no or very little impact on hemispheric functioning, while handedness is associated with a different degree of lateralisation at the group level.
12.05: References
Wilson, Robert A. & Keil, Frank C. (Eds.) (2006). The MIT Encyclopedia of the Cognitive Sciences (MITECS) (online version, July 2006). Bradford Books.
Knowledge Representation
Goldstein, E. Bruce (2005). Cognitive Psychology: Connecting Mind, Research, and Everyday Experience. Thomson Wadsworth. Ch. 8, Knowledge, 265-308.
Sowa, John F. (2000). Knowledge Representation: Logical, Philosophical, and Computational Foundations. Brooks/Cole.
Slides concerning Knowledge from: http://www.cogpsy.uos.de/ , Knowledge: Propositions and images. Knowledge: Concepts and categories.
Minda, J. P. & Smith, J. D. (2001). Prototypes in category learning: The effects of category size, category structure, and stimulus complexity. Journal of Experimental Psychology: Learning, Memory, & Cognition, 27, 775–799.
Hemispheric Distribution
Banich, Marie T. (1997). Neuropsychology: The Neural Bases of Mental Function. Houghton Mifflin Company. Ch. 3, Hemispheric Specialisation, 90-123.
Hutsler, J. J., Gillespie, M. E., & Gazzaniga, M. S. (2002). The evolution of hemispheric specialisation. In Bizzi, E., Calissano, P., & Volterra, V. (Eds.), Frontiers of Life, Volume III: The Intelligent Systems. Academic Press: New York.
Birbaumer, N. & Schmidt, R. F. (1996). Biologische Psychologie (3. Auflage). Springer Verlag: Berlin-Heidelberg. Ch. 24, Plastizität, Lernen, Gedächtnis; Ch. 27, Kognitive Prozesse (Denken).
Kandel, Eric R., Schwartz, James H., & Jessell, Thomas M. (2000). Principles of Neural Science (4th ed.). McGraw-Hill. Part IX, Ch. 62, Learning and Memory.
Ivanov, Vjaceslav V. (1983). Gerade und Ungerade: Die Asymmetrie des Gehirns und der Zeichensysteme. S. Hirzel Verlag: Stuttgart.
Green, David W., et al. (1996). Cognitive Science: An Introduction. Blackwell Publishers Ltd. Ch. 10, Learning and Memory (David Shanks).
No matter which public topic you discuss or which personal issue you worry about, you need reasons for your opinions and arguments. Moreover, the ability to reason underlies the cognitive processes of decision making and choosing among alternatives.
Every one of us uses these two abilities in everyday life to the utmost. Let us, therefore, consider the following scene from Knut's life:
“It is again a rainy afternoon in Osnabrück (Germany), and as Knut and his wife are tired of observing the black crows in their garden, they decide to escape the shabby weather and spend their holidays in Spain. Knut has never been to Spain before and is pretty excited. They will leave the next day, so he is packing his bag. The crucial things first: some underwear, some socks, a pair of pyjamas, and his wash bag with a toothbrush, shampoo, soap, sun milk and insect spray. But Knut cannot find the insect spray, until his wife tells him that she lost it and will buy some new. He advises her to take an umbrella on the way to the chemist, as it is raining outside, before he turns back to his packing task. But what did he already pack into his bag? He immediately remembers and continues, packing his clothing into the bag while considering which pieces match each other, and finally his iPod, as he exclusively listens to music with this device. Since the two of them are going on summer holidays, Knut packs mainly shorts and T-shirts. After approximately half an hour, he is finally convinced that he has done everything necessary for some fine holidays.”
With regard to this sketch of Knut's holiday preparation, we will explain the basic principles of reasoning and decision making. In the following, it will be shown how much cognitive work is necessary for this fragment of everyday life. After presenting an insight into the topic, we will illustrate what kind of brain lesions lead to what kind of impairments of these two cognitive features.
13: Reasoning and Decision Making
In a process of reasoning, available information is taken into account in the form of premises. Through a process of inference, a conclusion is reached on the basis of these premises. The conclusion's information content goes beyond that of the premises. To make this clear, consider the following thought Knut entertains before planning his holiday:
```
1. Premise: In all countries in southern Europe it is pretty warm during summer.
2. Premise: Spain is in southern Europe.
Conclusion: Therefore, in Spain it is pretty warm during summer.
```
The conclusion in this example follows directly from the premises, but it contains information that is not explicitly stated in them. This is a typical feature of reasoning. In the following, we distinguish between the two major kinds of reasoning, deductive and inductive, which are often seen as complements of one another.
Deductive reasoning
Deductive reasoning is concerned with syllogisms in which the conclusion follows logically from the premises. The following example about Knut makes this process clear:
```
1. Premise: Knut knows: If it is warm, one needs shorts and T-Shirts.
2.Premise: He also knows that it is warm in Spain during summer.
Conclusion: Therefore, Knut reasons that he needs shorts and T-Shirts in Spain.
```
In the given example the premises state rather general information, and the resulting conclusion is about a more specific case that can be inferred from them.
In what follows, we differentiate between the two major kinds of syllogisms, namely categorical and conditional ones.
Categorical syllogisms
In categorical syllogisms the premises typically begin with “all”, “none” or “some”, and the conclusion starts with “therefore” or “hence”. Syllogisms of this kind describe a relationship between two categories; in the example given above in the introduction to deductive reasoning, these categories are Spain and the need for shorts and T-Shirts. Two different approaches serve the study of categorical syllogisms: the normative approach and the descriptive approach.
The normative approach
The normative approach is based on logic and deals with the problem of categorizing conclusions as either valid or invalid. “Valid” means that the conclusion follows logically from the premises, whereas “invalid” means the contrary. Two basic principles and a method called Euler Circles (Figure 1) have been developed to help judge validity. The first principle, formulated by Aristotle, says: “If the two premises are true, the conclusion of a valid syllogism must be true” (cp. Goldstein, 2005). The second principle states that “the validity of a syllogism is determined only by its form, not its content.” These two principles explain why the following syllogism is (surprisingly) valid:
```
All flowers are animals.
All animals can jump.
Therefore, all flowers can jump.
```
Even though it is quite obvious that neither the first premise nor the conclusion is factually true, the whole syllogism is still valid: applying formal logic to the example, the conclusion follows from the premises.
Figure 1, Euler Circles
Because validity depends only on form, a syllogism can be displayed formally with symbols or letters, and its relationships can be shown graphically with the help of diagrams. There are various ways to represent a premise graphically. Starting with a circle to represent the first premise and adding one or more circles for the second one (Figure 1), the crucial move is to compare the constructed diagrams with the conclusion and to check whether they contradict it. If they agree with one another, the syllogism is valid. The syllogism displayed in Figure 1 is obviously valid: the conclusion shows that the set of everything that can jump contains the animals, which in turn contain the flowers. This agrees with the two premises, which state that flowers are animals and that animals are able to jump. The method of Euler Circles is a good device for making syllogisms easier to grasp.
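The Euler-circle idea can also be mimicked with ordinary set operations. The following sketch is our illustration, not part of the original text, and the category members are invented: each category is a Python set, and “All A are B” is read as “A is a subset of B”. Since the subset relation is transitive, the conclusion holds in every model in which the premises hold, which is exactly what validity of the form means.
```
# A minimal sketch of the Euler-circle idea using Python sets (illustrative
# members, not from the text). "All A are B" is modelled as: A is a subset of B.
flowers = {"rose", "tulip"}
animals = flowers | {"crow", "dog"}         # premise 1: all flowers are animals
things_that_jump = animals | {"kangaroo"}   # premise 2: all animals can jump

assert flowers <= animals                   # "All flowers are animals."
assert animals <= things_that_jump          # "All animals can jump."
assert flowers <= things_that_jump          # conclusion: "All flowers can jump."
# The subset relation is transitive, so whenever the premises hold the
# conclusion must hold too - the *form* is valid, whatever the content.
print("valid form")
```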
The descriptive approach
The descriptive approach is concerned with estimating people's ability to judge validity and with explaining judgment errors. This psychological approach uses two methods to determine people's performance:
```
Method of evaluation (the preferred one): People are given two premises and a conclusion, and their task is to judge whether the syllogism is valid or not.
Method of production: Participants are supplied with two premises and asked to produce a logically valid conclusion (if one is possible).
```
Using the method of evaluation, researchers found typical misjudgments of syllogisms. Premises starting with “All”, “Some” or “No” create a particular atmosphere that influences a person in the process of judging. One frequent mistake is to judge a syllogism incorrectly as valid when the two premises as well as the conclusion start with “All”. The atmosphere often points to the right answer, but it is definitely not reliable and can guide the person to a rash decision. This phenomenon is called the atmosphere effect.
In addition to the form of a syllogism, its content is likely to influence a person's decision as well, causing the person to set logical thinking aside. The belief bias is the tendency to judge syllogisms with believable conclusions as valid, and syllogisms with unbelievable conclusions as invalid. Given a conclusion such as “Some bananas are pink”, hardly any participants would judge the syllogism as valid, even though it may be valid given its premises (e.g. “Some bananas are fruits. All fruits are pink.”).
Mental models of deductive reasoning
So far we have not considered what mental processes might occur when people try to determine whether a syllogism is valid. After observing that Euler Circles can be used to determine the validity of a syllogism, Philip Johnson-Laird (1999) wondered whether people would use such circles naturally, without any instruction in how to use them. He found that they do not work for some more complex syllogisms, and that although such problems can be solved by applying logical rules, most people solve them by imagining the situation. This is the basic idea behind mental models – specific situations represented in a person's mind that can be used to help determine the validity of syllogisms – as a way of solving deductive reasoning problems. The basic principle behind the Mental Model Theory is: a conclusion is valid only if it cannot be refuted by any model of the premises. This theory is popular because it makes predictions that can be tested and because it can be applied without any knowledge of the rules of logic. But researchers still face problems in determining how people reason about syllogisms, among them the fact that people use a variety of different strategies and that some people are better at solving syllogisms than others.
Effects of culture on deductive reasoning
People can be influenced by the content of syllogisms rather than by logic when judging their validity. Psychologists have wondered whether people are also influenced by their cultures when judging, and have therefore carried out cross-cultural experiments in which reasoning problems were presented to people of different cultures. They observed that people from different cultures judge these problems differently: people tend to use evidence from their own experience (empirical evidence) and to ignore the evidence presented in the syllogism (theoretical evidence).
Conditional syllogisms
Another type of syllogism is the “conditional syllogism”. Just like the categorical one, it has two premises and a conclusion; the difference is that the first premise has the form “If … then”. Syllogisms of this kind are common in everyday life. Consider the following example from the story about Knut:
```
1. Premise: If it is raining, Knut's wife gets wet.
2. Premise: It is raining.
Conclusion: Therefore, Knut's wife gets wet.
```
Conditional syllogisms are typically given in the abstract form: “If p then q”, where “p” is called the antecedent and “q” the consequent.
Forms of conditional syllogisms
There are four major forms of conditional syllogisms, namely Modus Ponens, Modus Tollens, Denying the Antecedent and Affirming the Consequent. These are illustrated in the table below (Table 1) by means of the conditional syllogism above (i.e. if it is raining, Knut's wife gets wet). The table indicates the premises and the resulting conclusions, and shows whether these are valid or not. The lowermost entry for each form displays the relative number of correct judgements people make about the validity of the conclusion.
Table 1, Different kinds of conditional syllogisms
Modus Ponens
Description: The antecedent of the first premise is affirmed in the second premise.
Formal: If P then Q. P. Therefore Q.
Example: If it is raining, Knut's wife gets wet. It is raining. Therefore Knut's wife gets wet.
Validity: VALID
Correct judgements: 97% correctly identify this as valid.

Modus Tollens
Description: The consequent of the first premise is negated in the second premise.
Formal: If P then Q. Not-Q. Therefore Not-P.
Example: If it is raining, Knut's wife gets wet. Knut's wife does not get wet. Therefore it is not raining.
Validity: VALID
Correct judgements: 60% correctly identify this as valid.

Denying the Antecedent
Description: The antecedent of the first premise is negated in the second premise.
Formal: If P then Q. Not-P. Therefore Not-Q.
Example: If it is raining, Knut's wife gets wet. It is not raining. Therefore Knut's wife does not get wet.
Validity: INVALID
Correct judgements: 40% correctly identify this as invalid.

Affirming the Consequent
Description: The consequent of the first premise is affirmed in the second premise.
Formal: If P then Q. Q. Therefore P.
Example: If it is raining, Knut's wife gets wet. Knut's wife gets wet. Therefore it is raining.
Validity: INVALID
Correct judgements: 40% correctly identify this as invalid.
Obviously, the validity of the syllogisms with valid conclusions is easier to judge correctly than the validity of the ones with invalid conclusions. The conclusion in the case of the modus ponens is plainly valid: in the example it is very clear that Knut's wife gets wet if it is raining.
The validity of the modus tollens is more difficult to recognize. In the example, if Knut's wife does not get wet, it cannot be raining, because the first premise says that if it is raining, she gets wet. So the only way for Knut's wife to stay dry is for it not to be raining. Consequently, the conclusion is valid.
The validity of the remaining two kinds of conditional syllogisms is judged correctly by only 40% of people. In denying the antecedent, the second premise says that it is not raining. But it does not follow logically that Knut's wife does not get wet – obviously rain is not the only way for her to get wet. It could also be that the sun is shining and Knut is testing his new water pistol on her. So this kind of conditional syllogism does not lead to a valid conclusion.
In affirming the consequent, the second premise says that Knut's wife gets wet. But again, the reason for this can be circumstances other than rain, so it does not follow logically that it is raining. In consequence, the conclusion of this syllogism is invalid.
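These validity judgments can be checked mechanically. The short sketch below is our illustration, not part of the original text: it enumerates all truth assignments of P and Q and tests, for each of the four forms in Table 1, whether the conclusion is true in every case in which both premises are true.
```
# Check the four conditional forms by brute force over all truth assignments.
# A form is valid only if the conclusion is true whenever both premises are.
from itertools import product

def implies(p, q):
    return (not p) or q   # truth-functional reading of "If P then Q"

forms = {
    "modus ponens":             (lambda p, q: p,     lambda p, q: q),
    "modus tollens":            (lambda p, q: not q, lambda p, q: not p),
    "denying the antecedent":   (lambda p, q: not p, lambda p, q: not q),
    "affirming the consequent": (lambda p, q: q,     lambda p, q: p),
}

for name, (second_premise, conclusion) in forms.items():
    valid = all(conclusion(p, q)
                for p, q in product([True, False], repeat=2)
                if implies(p, q) and second_premise(p, q))
    print(f"{name}: {'VALID' if valid else 'INVALID'}")
```
The output matches Table 1: only modus ponens and modus tollens come out VALID.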
These four kinds of syllogisms show that it is not always easy to judge the validity of conclusions correctly. The following passages deal with further errors people make in the process of conditional reasoning.
The Wason Selection Task
The Wason Selection Task is a famous experiment which shows that people make more errors when reasoning about abstract items than when reasoning about real-world items (Wason, 1966).
In the abstract version of the Wason Selection Task, four cards are shown to the participants, each with a letter on one side and a number on the other (Figure 2, yellow cards). The task is to indicate the minimum number of cards that have to be turned over to test whether the following rule is observed: “If there is a vowel on one side, then there is an even number on the other side.” 53% of participants selected the ‘E’ card, which is correct, because turning this card over is necessary for testing the truth of the rule. However, another card needs to be turned over as well. 64% indicated that the ‘4’ card has to be turned over, which is wrong: whatever is on the back of the ‘4’, the rule cannot be violated, since the rule says nothing about what must be behind a consonant or an even number. Only 4% of participants answered correctly that the ‘7’ card needs to be turned over in addition to the ‘E’, since a vowel behind the ‘7’ would break the rule. The correctness of turning over these two cards becomes more obvious if the same task is stated in terms of real-world items instead of vowels and numbers. One of the experiments demonstrating this was the beer/drinking-age problem used by Richard Griggs and James Cox (1982). This experiment is identical to the Wason Selection Task, except that everyday terms (beer, soda and ages) were used on the cards instead of numbers and letters (Figure 2, green cards). Griggs and Cox gave the following rule to the participants: “If a person is drinking beer, then he or she must be older than 19 years.” In this case 73% of participants answered correctly, namely that the cards with “Beer” and “14 years” on them have to be turned over to test whether the rule is kept.
Figure 2, The Wason Selection Task
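The card logic can be spelled out in a few lines. The sketch below is our illustration, not from the text, and it assumes ‘K’ as the unnamed consonant card: a card can falsify the rule only if it shows a vowel (an odd number might hide behind it) or an odd number (a vowel might hide behind it).
```
# Which cards can falsify "if vowel on one side, then even number on the other"?
def must_turn(visible):
    vowels = set("AEIOU")
    if visible in vowels:          # a vowel might hide an odd number
        return True
    if visible.isdigit() and int(visible) % 2 == 1:
        return True                # an odd number might hide a vowel
    return False                   # consonants and even numbers cannot break the rule

for card in ["E", "K", "4", "7"]:  # 'K' is an assumed stand-in for the consonant card
    print(card, "-> turn over" if must_turn(card) else "-> leave")
# E -> turn over, K -> leave, 4 -> leave, 7 -> turn over
```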
Why is the performance better in the case of real–world items?
There are two different approaches which explain why participants’ performance is significantly better in the case of the beer/drinking-age problem than in the abstract version of the Wason Selection Task, namely one approach concerning permission schemas and an evolutionary approach.
The regulation “If one is 19 years or older, then he/she is allowed to drink alcohol” is known to everyone from everyday experience; such a rule is called a permission schema. Because the participants have already learned this permission schema, they can apply it to the real-world version of the Wason Selection Task, which improves their performance. For the abstract version of the task, in contrast, no such everyday permission schema exists.
The evolutionary approach concerns the important human ability of cheater-detection. It states that an important aspect of human behaviour, especially in our evolutionary past, is the ability of two persons to cooperate in a way that benefits both of them. As long as each person receives a benefit for whatever he/she does in favour of the other, their social exchange works well. But if someone cheats and receives benefits from others without giving anything back, problems arise (see also the chapter on the Evolutionary Perspective on Social Cognitions). It is assumed that the ability to detect cheaters became part of humans' cognitive makeup during evolution. This cognitive ability improves performance in the beer/drinking-age version of the Wason Selection Task, as it allows people to detect a cheating person who does not behave according to the rule. Cheater-detection does not help in the abstract version of the task, as vowels and numbers, unlike human beings, do not behave or cheat at all.
Inductive reasoning
In the previous sections we discussed deductive reasoning, which reaches conclusions by applying logical rules to a set of premises. However, many problems cannot be represented in a way that would make it possible to use these rules to reach a conclusion. This subchapter is about a way of drawing conclusions for such problems as well: inductive reasoning.
Figure 3, Deductive and inductive reasoning
Inductive reasoning is the process of making simple observations of a certain kind and applying these observations via generalization to a different problem to reach a decision. Hence one infers from a specific case to a general principle, which is just the opposite of the procedure of deductive reasoning (Figure 3). A good example of inductive reasoning is the following:
```
Premise: All crows Knut and his wife have ever seen are black.
Conclusion: Therefore, they reason that all crows on earth are black.
```
In this example it is obvious that Knut and his wife infer from their simple observations about the crows they have seen to a general principle about all crows. Considering Figure 4, this means that they infer from the subset (yellow circle) to the whole (blue circle). As in this example, it is typical of inductive reasoning that the premises are believed to support the conclusion but do not guarantee it.
Figure 4
Forms of inductive reasoning
The two different forms of inductive reasoning are “strong” and “weak” induction. In the former, the truth of the premises makes the truth of the conclusion very likely. The example given in the previous section is of this form: the premise (“All crows Knut and his wife have ever seen are black”) gives good evidence for the conclusion (“All crows on earth are black”) to be true. Nevertheless, it is still possible, although very unlikely, that not all crows are black.
On the contrary, conclusions reached by "weak induction" are supported by the premises in a rather weak manner. In this approach the truth of the premises makes the truth of the conclusion possible, but not likely. An example for this kind of reasoning is the following:
```
Premise: Knut always listens to music on his iPod.
Conclusion: Therefore, he reasons that all music is heard on iPods.
```
In this instance the conclusion is obviously false. The information the premise contains is not very representative and although it is true, it does not give decisive evidence for the truth of the conclusion.
To sum up, strong inductive reasoning reaches conclusions that are very probable, whereas the conclusions reached through weak inductive reasoning are, given the premises, unlikely to be true.
Reliability of conclusions
If the strength of the conclusion of an inductive argument has to be determined, three factors concerning the premises play a decisive role. The following example, which refers to Knut and his wife and the observations they made about the crows (see previous sections), displays these factors:
Number of observations: When Knut and his wife observe the crows in Spain in addition to the black crows in Germany, the number of observations supporting their conclusion obviously increases.
Representativeness of observations: The representativeness of these observations is increased if Knut and his wife observe the crows at all different times of day and night and see that they are black every time; theoretically, the crows might change their colour at night, which would make the conclusion that all crows are black wrong.
Quality of the evidence: The quality of the evidence that all crows are black increases if Knut and his wife add scientific measurements which support the conclusion; for example, they could find out that the crows' genes determine that the only colour they can have is black.
Conclusions reached through a process of inductive reasoning are never definitely true, as no one has seen all crows on earth and it remains possible, although very unlikely, that there is a green or brown specimen. The three factors mentioned contribute decisively to the strength of an inductive argument: the stronger these factors are, the more reliable are the conclusions reached through induction.
Processes and constraints
In a process of inductive reasoning people often make use of certain heuristics which in many cases lead quickly to adequate conclusions but sometimes cause errors. In the following, two of these heuristics (the availability heuristic and the representativeness heuristic) are explained. Subsequently, the confirmation bias is introduced, which can skew people's reasoning towards their own opinions without their realising it.
The availability heuristic
Things that are more easily remembered are judged to be more prevalent. An example is an experiment by Lichtenstein et al. (1978). The participants were asked to choose, from two different lists, the causes of death that occur more often. Because of the availability heuristic, people judged more “spectacular” causes like homicide or tornadoes to cause more deaths than causes like asthma. The reason the subjects answered this way is that films and television news very often feature spectacular and interesting causes of death, which makes this information much more available to the subjects in the experiment.
Another effect of the availability heuristic is called illusory correlation: people tend to judge according to stereotypes. It seems to them that correlations exist between certain events which in reality do not. This is what is known by the term “prejudice”: a much oversimplified generalization about a group of people. Usually a correlation seems to exist between negative features and a certain class of people (often fringe groups). If, for example, one's neighbour is jobless and very lazy, one tends to correlate these two attributes and to form the prejudice that all jobless people are lazy. This illusory correlation occurs because one takes into account information which is readily available and judges it to be prevalent.
The representativeness heuristic
If people have to judge the probability of an event, they try to find a comparable event and assume that the two events have a similar probability. Amos Tversky and Daniel Kahneman (1974) presented the following task to the participants in an experiment: “We randomly chose a man from the population of the U.S., Robert, who wears glasses, speaks quietly and reads a lot. Is it more likely that he is a librarian or a farmer?” Most of the participants answered that Robert is a librarian, which is an effect of the representativeness heuristic: Robert, with his attributes of speaking quietly and wearing glasses, resembles the image of a typical librarian more than that of a typical farmer. Of course this judgment may be in error: Robert was randomly chosen from a population in which farmers far outnumber librarians, so it is perfectly possible that he is a farmer even though he speaks quietly and wears glasses.
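A quick calculation makes the error concrete; the numbers below are invented for illustration and are not from the text. The point is only that base rates matter: if farmers greatly outnumber librarians, Bayes' rule can still favour “farmer” even when the description resembles a librarian far more.
```
# Unnormalized posterior odds for "librarian" vs "farmer" given the description.
# All four numbers are assumptions made up for this sketch.
p_librarian = 0.0005             # assumed base rate of librarians among U.S. men
p_farmer    = 0.01               # assumed base rate of farmers among U.S. men
p_desc_given_librarian = 0.9     # the description fits a typical librarian well
p_desc_given_farmer    = 0.1     # ...and a typical farmer poorly

print("librarian:", p_desc_given_librarian * p_librarian)  # 0.00045
print("farmer   :", p_desc_given_farmer * p_farmer)        # 0.00100 -> more likely
```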
Figure 5, Feminist bank tellers
The representativeness heuristic also leads to errors in reasoning in cases where the conjunction rule is violated. This rule states that the conjunction of two events is never more likely than either of the single events alone. An example is the case of the feminist bank teller (Tversky & Kahneman, 1983). If we are introduced to a woman of whom we know that she is very interested in women's rights and participated in many political activities in college, and we are asked to decide whether it is more likely that she is a bank teller or a feminist bank teller, we are drawn to the latter, because the facts we have learnt about her resemble the event of a feminist bank teller more than the event of merely being a bank teller.
But it is in fact much more likely that somebody is just a bank teller than that someone is a feminist in addition to being a bank teller. This is illustrated in Figure 5, where the green square, which stands for being a bank teller, is much larger and thus more probable than the smaller violet square, which displays the conjunction of bank tellers and feminists and is a subset of the bank tellers.
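The conjunction rule itself can be written in one line; the numbers in this sketch are invented and serve only to make the inequality concrete.
```
# P(A and B) = P(A) * P(B | A) can never exceed P(A), because P(B | A) <= 1.
p_bank_teller = 0.02            # assumed probability of being a bank teller
p_feminist_given_teller = 0.3   # assumed share of bank tellers who are feminists

p_feminist_bank_teller = p_bank_teller * p_feminist_given_teller
assert p_feminist_bank_teller <= p_bank_teller
print(p_bank_teller, p_feminist_bank_teller)   # 0.02 vs 0.006
```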
Confirmation bias
This phenomenon describes the tendency of people to search for and weight information according to what they themselves already believe to be true or good. If, for example, someone believes that Friday the thirteenth brings bad luck, he will take special note of every negative event on that date but will be inattentive to negative events on other days. This behaviour strengthens the belief that a relationship exists between Friday the thirteenth and bad luck. The example shows that not all the actual information is taken into account in reaching a conclusion, but only the information that supports one's own belief. This effect leads to errors, as people reason in a subjective manner when personal interests and beliefs are involved.
All the mentioned factors influence the subjective probability of an event so that it differs from the actual probability (probability heuristic). Of course all of these factors do not always appear alone, but they influence one another and can occur in combination during the process of reasoning.
Why inductive reasoning at all?
All the described constraints show how error-prone inductive reasoning is, and so the question arises: why do we use it at all?
Inductive inferences are important nevertheless, because they act as shortcuts in our reasoning. It is much easier and faster to apply the availability heuristic or the representativeness heuristic to a problem than to take into account all information concerning the current topic and draw a conclusion by using logical rules.
The following list of very ordinary events involves a lot of inductive reasoning, although one does not realize it at first view. It points out the importance of this cognitive ability:
The sunrise every morning and the sunset in the evening, the change of seasons, the TV programme, the fact that a chair does not collapse when we sit on it, or the light bulb that lights up after we push the button.
All of these cases are conclusions derived from processes of inductive reasoning. Accordingly, one assumes that the chair one is sitting on will not collapse, as the chairs on which one sat before did not collapse. This does not guarantee that the chair will not break into pieces, but it is nevertheless a helpful conclusion to assume that the chair remains stable, as this is very probable. To sum up, inductive reasoning is advantageous in situations where deductive reasoning is simply not applicable, because only evidence, not proven facts, is available. As such situations occur rather often in everyday life, living without the use of inductive reasoning is inconceivable.
Induction vs. deduction
The table below (Table 2) summarises the most prevalent properties and differences between deductive and inductive reasoning which are important to keep in mind.
Table 2, Induction vs. deduction
Deductive reasoning
Premises: Stated as facts or general principles ("It is warm in the summer in Spain.")
Conclusion: More specific than the information the premises provide; reached directly by applying logical rules to the premises.
Validity: If the premises are true, the conclusion must be true.
Usage: More difficult to use (mainly in logical problems); one needs facts which are definitely true.

Inductive reasoning
Premises: Based on observations of specific cases ("All crows Knut and his wife have seen are black.")
Conclusion: More general than the information the premises provide; reached by generalizing from the premises' information.
Validity: If the premises are true, the conclusion is probably true.
Usage: Often used in everyday life (fast and easy); evidence is used instead of proved facts.
13.02: Decision Making
Depending on the level of consequences involved, each process of making a decision requires appropriate effort and various aspects to be considered. The following excerpt from the story about Knut makes this obvious: “After considering facts like the warm weather in Spain and shirts and shorts being much more comfortable in this case (information gathering and likelihood estimation), Knut reasons that he needs them for his vacation. In consequence, he finally makes the decision to pack mainly shirts and shorts in his bag (final act of choosing).” Now it seems as if there could be no decision making without previous reasoning, but that is not true. Of course there are situations in which someone decides to do something spontaneously, with no time to reason about it. We will not go into detail here, but you might think about questions like: why do we choose one or the other option in such a case?
Choosing among alternatives
The psychological process of decision making constantly accompanies situations in daily life. Thinking about Knut again, we can imagine him deciding between packing more blue or more green shirts for his vacation (which would have only minor consequences), but also deciding whether to apply for a specific job or to have children with his wife (which would substantially influence important circumstances of his future life). The examples mentioned are personal decisions; professional decisions, dealing for example with economic or political issues, are just as important.
The utility approach
Figure 6, Relation between (monetary) gains/losses and their subjective value according to Prospect Theory
There are three different ways to analyze decision making. The normative approach assumes a rational decision-maker with well-defined preferences. While the rational choice theory is based on a priori considerations, the descriptive approach is based on empirical observations and on experimental studies of choice behavior. The prescriptive enterprise develops methods in order to improve decision making. According to Manktelow and Reber's definition, “utility refers to outcomes that are desirable because they are in the person's best interest” (Reber, A. S., 1995; Manktelow, K., 1999). This normative/descriptive approach characterizes optimal decision making as the maximization of expected utility, measured in monetary value. Such an approach can be helpful in gambling theories, but it also has several disadvantages. People do not necessarily focus on the monetary payoff, since they find value in things other than money, such as fun, free time, family and health. This is not a fundamental problem, however, because the graph in Figure 6, which shows the relation between (monetary) gains/losses and their subjective value (utility), applies equally to these other valued things. Therefore, not choosing the maximal monetary value does not automatically indicate an irrational decision process.
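As a sketch of the normative rule, the snippet below treats an option as a list of (probability, payoff) pairs and computes its expected utility; the rational agent of the normative approach picks the option with the highest value. The options mirror the sure-gain-versus-gamble choice discussed under the framing effect below; the square-root utility is merely an assumed illustration of diminishing subjective value, not something taken from the text.
```
# Expected utility: EU(option) = sum of p * u(x) over its (p, x) outcomes.
import math

def expected_utility(option, utility=lambda x: x):
    return sum(p * utility(x) for p, x in option)

sure_gain = [(1.0, 100)]
gamble    = [(0.5, 200), (0.5, 0)]

print(expected_utility(sure_gain))             # 100.0
print(expected_utility(gamble))                # 100.0 -> equal in raw money
# With a concave (diminishing-returns) utility the sure gain wins:
print(expected_utility(sure_gain, math.sqrt))  # 10.0
print(expected_utility(gamble, math.sqrt))     # ~7.07
```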
Misleading effects
Even respecting the considerations above, there may still be problems in making the “right” decision because of various misleading effects, which mainly arise from the constraints of inductive reasoning. In general this means that our model of a situation or problem may not be ideal for solving it in an optimal way. The following three points are typical examples of such effects.
Subjective models
This effect is quite similar to the illusory correlations mentioned before in the section about the constraints of inductive reasoning. The problem is that the models people create may be misleading, since they rely on subjective speculation. An example could be deciding where to move on the basis of typical national stereotypes (e.g. always good pizza, nice weather and a relaxed lifestyle in Italy, in contrast to somewhat boring food and steady rain in Great Britain). The predicted events need not match the events that actually occur (Kahneman & Tversky, 1982; Dunning & Parpal, 1989).
Focusing illusion
Another misleading effect is the so-called focusing illusion. By considering only the most obvious aspects when making a decision (e.g. the weather), people often neglect other really important outcomes (e.g. circumstances at work). This effect occurs more often when people judge others' lives than when they judge their own.
Framing effect
A problem can be described in different ways and thereby evoke different decision strategies. If a problem is specified in terms of gains, people tend to use a risk-aversion strategy, while a problem described in terms of losses leads them to apply a risk-taking strategy. An example of the same problem producing predictably different choices is the following experiment: A group of people asked to imagine themselves \$300 richer than they are is confronted with the choice of a sure gain of \$100 or an equal chance to gain \$200 or nothing. Most people avoid the risk and take the sure gain, i.e. they follow the risk-aversion strategy. Alternatively, if people are asked to imagine themselves \$500 richer than they actually are and are given the options of a sure loss of \$100 or an equal chance to lose \$200 or nothing, the majority opts for the risk of losing \$200, taking the risk-seeking or risk-taking strategy. This phenomenon is known as the framing effect, and it can also be read off Figure 6 above, whose curve is concave for gains and convex for losses (Foundations of Cognitive Psychology, Levitin, D. J., 2002).
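The curve in Figure 6 is easy to write down explicitly. In the sketch below, the functional form and the parameters (alpha = beta = 0.88, lambda = 2.25) are the estimates commonly cited for Tversky and Kahneman's prospect theory; they are used here as assumptions for illustration rather than as material from this text.
```
# Prospect-theory-style value function: concave for gains, convex and
# steeper for losses ("losses loom larger than gains").
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha            # concave over gains -> risk aversion
    return -lam * ((-x) ** beta)     # convex, steeper over losses -> risk seeking

for x in (-200, -100, 100, 200):
    print(x, round(value(x), 1))
# Note |value(-100)| > value(100): a loss hurts more than an equal gain pleases.
```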
Justification in decision making
Decision making often includes the need to assign a reason for the decision and thereby justify it. This factor is illustrated by an experiment by A. Tversky and E. Shafir (1992): a very attractive vacation package was offered to a group of students who had just passed an exam and to another group of students who had just failed the exam and had the chance to rewrite it after the coming holidays. All students had the options of buying the ticket straight away, staying at home, or paying \$5 to keep open the option of buying it later. At this point there was no difference between the two groups: the number of students who passed the exam and decided to book the flight (justified as a deserved reward) was about the same as the number of students who failed and booked the flight (justified as consolation and time to recover before the retake). A third group of students, who were told they would receive their results two days later, was confronted with the same problem. The majority decided to pay \$5 and keep the option open until they got their results. The conclusion is that even though the actual exam result does not influence the decision, it is required in order to provide a rationale.
Executive functions
Figure 7, Left frontal lobe
The question now arises how this cognitive ability of making decisions is realized in the human brain. As we already know, there are a number of different tasks involved in the whole process, so there has to be something that coordinates and controls those brain activities – namely the executive functions. They are the brain's conductor, instructing other brain regions to perform, or be silenced, and generally coordinating their synchronized activity (Goldberg, 2001). Thus, they are responsible for optimizing the performance of all “multi-threaded” cognitive tasks.
Locating those executive functions is rather difficult, as they cannot be appointed to a single brain region. Traditionally, they have been equated with the frontal lobes, or rather the prefrontal regions of the frontal lobes; but it is still an open question whether all of their aspects can be associated with these regions.
Nevertheless, we will concentrate on the prefrontal regions of the frontal lobes, to get an impression of the important role of the executive functions within cognition. Moreover, it is possible to subdivide these regions into functional parts. But it is to be noted that not all researchers regard the prefrontal cortex as containing functionally different regions.
Executive functions in practice
According to Norman and Shallice, there are five types of situations in which executive functions may be needed in order to optimize performance, as the automatic activation of behaviour would be insufficient. These are situations involving...
1. ...planning or decision making.
2. ...error correction or trouble shooting.
3. ...responses containing novel sequences of actions.
4. ...technical difficulties or dangerous circumstances.
5. ...the control of action or the overcoming of strong habitual responses.
The following parts will take a closer look at each of these points, mainly referring to brain-damaged individuals.
Surprisingly, intelligence in general is not affected in cases of frontal lobe injuries (Warrington, James & Maciejewski, 1986). However, dividing intelligence into crystallised intelligence (based on previously acquired knowledge) and fluid intelligence (meant to rely on the current ability of solving problems), emphasizes the executive power of the frontal lobes, as patients with lesions in these regions performed significantly worse in tests of fluid intelligence (Duncan, Burgess & Emslie, 1995).
1. Planning or decision making
Impairments in abstract and conceptual thinking
To solve many tasks it is important to be able to use given information. In many cases this means that material has to be processed in an abstract rather than a concrete manner. Patients with executive dysfunction have difficulties with abstraction, as a card sorting experiment shows (Delis et al., 1992):
The cards show names of animals and black or white triangles placed above or below the word. The cards can be sorted with attention to different attributes of the animals (living on land or in water, domestic or dangerous, large or small) or of the triangles (black or white, above or below the word). People with frontal lobe damage fail at this task because they cannot even conceptualize the properties of the animals or the triangles and thus are unable to deduce a sorting rule for the cards (in contrast, some individuals merely perseverate: they find a sorting criterion but are unable to switch to a new one).
These problems might be due to a general difficulty in strategy formation.
Goal directed behavior
Let us again take Knut into account to get an insight into the field of goal directed behaviour. In principle, this is nothing but problem solving, since it is about organizing behavior towards a goal. Thus, when Knut is packing his bag for his holiday, he obviously has a goal in mind (in other words, he wants to solve a problem), namely to get ready before the plane leaves. There are several steps necessary during the process of reaching a certain goal (a small sketch after this list illustrates the bookkeeping they involve):
Goal must be kept in mind
Knut should never forget that he has to pack his bag in time.
Dividing into subtasks and sequencing
Knut packs his bag in a structured way: he starts with the crucial things and then goes on with the rest.
Completed portions must be kept in mind
If Knut already packed enough underwear into his bag, he would not need to search for more.
Flexibility and adaptability
Imagine that Knut wants to pack his favourite T-Shirt, but he realizes that it is dirty. In this case, Knut has to adapt to this situation and has to pick another T-Shirt that was not in his plan originally.
Evaluation of actions
Along the way of reaching his ultimate goal Knut constantly has to evaluate his performance in terms of ‘How am I doing considering that I have the goal of packing my bag?’.
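The sketch announced above is a toy illustration invented for this text, loosely following Knut's packing episode. The goal string, the ordered subtask list, the record of completed portions and the swapped-in T-shirt correspond, line by line, to the requirements just listed.
```
# Toy bookkeeping for goal directed behavior (all names invented).
goal = "bag packed before the plane leaves"          # the goal is kept in mind
plan = ["underwear", "socks", "pyjamas", "wash bag",
        "favourite T-shirt", "iPod"]                 # subtasks, handled in sequence
completed = []                                       # completed portions

while plan:
    item = plan.pop(0)
    if item == "favourite T-shirt":                  # flexibility: it is dirty,
        item = "another T-shirt"                     # so the plan is adapted
    completed.append(item)

# evaluation of actions: is the goal attained?
print("done" if not plan else "keep packing", "-", len(completed), "items packed")
```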
Executive dysfunction and goal directed behavior
The breakdown of executive functions impairs goal directed behavior to a large extent. The exact way cannot be stated in general; it depends on the specific brain regions that are damaged. It is thus quite possible that an individual with a particular lesion has problems with two or three of the five points described above and performs within the average range when the other abilities are tested. However, if only one link is missing from the chain, the whole plan may become very hard or even impossible to master. Furthermore, the particular hemisphere affected plays a role as well.
Another interesting result is that lesions in the frontal lobes of the left and right hemispheres impair different abilities. While a lesion in the right hemisphere causes trouble in making recency judgements, a lesion in the left hemisphere impairs performance only when the presented material is verbal, or in a variation of the experiment that requires self-ordered sequencing. From this we know that the ability to sequence behaviour is not only located in the frontal lobe, but resides particularly in the left hemisphere when motor action is concerned.
Problems in sequencing
In an experiment by Milner (1982), people were shown a sequence of cards with pictures. The experiment included two different tasks: recognition trials and recency trials. In the former, the patients were shown two different pictures, one of which had appeared in the sequence before, and they had to decide which one it was. In the latter, they were shown two pictures that had both appeared before and had to name the one that had been shown more recently. The results showed that people with lesions in temporal regions have more trouble with the recognition trials, while patients with frontal lesions have difficulties with the recency trials, since anterior regions are important for sequencing. This is because the recognition trials demand a properly functioning recognition memory, and the recency trials a properly functioning memory for item order. These two are dissociable and seem to be processed in different areas of the brain.
The frontal lobe is not only important for sequencing but is also thought to play a major role in working memory. This idea is supported by the fact that lesions in the lateral regions of the frontal lobe are much more likely to impair the ability of 'keeping things in mind' than damage to other areas of the frontal cortex is.
But this is not all there is to sequencing. To reach a goal in the best possible way, it is important that a person is able to figure out which sequence of actions, which strategy, best suits the purpose, in addition to just being able to develop a correct sequence. This is shown by the 'Tower of London' task (Shallice, 1982), which is similar to the famous 'Tower of Hanoi' task. Here, three balls must be placed on three poles of different lengths (one pole can hold three balls, the second two, and the third only one) in such a way that a changeable goal position is attained from a fixed initial position in as few moves as possible. Patients with damage to the left frontal lobe in particular proved to work inefficiently and ineffectively on this task: they needed many moves and engaged in actions that did not lead toward the goal.
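Because the task rewards finding a minimum-move sequence, it is instructive to see what an optimal solver looks like. The sketch below is our illustration (the ball colours and the goal position are invented); it encodes the pole capacities 3, 2 and 1 from the task description and finds a shortest solution by breadth-first search, i.e. exactly the efficient sequencing that these patients fail to produce.
```
# Shortest Tower of London solution via breadth-first search.
from collections import deque

CAPACITY = (3, 2, 1)   # how many balls each pole can hold

def neighbours(state):
    """All states reachable by moving one top ball to a pole with room."""
    for src in range(3):
        if not state[src]:
            continue
        for dst in range(3):
            if dst != src and len(state[dst]) < CAPACITY[dst]:
                pegs = list(map(list, state))
                pegs[dst].append(pegs[src].pop())
                yield tuple(map(tuple, pegs))

def solve(start, goal):
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        state, moves = frontier.popleft()
        if state == goal:
            return moves
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, moves + 1))

start = (("red", "green", "blue"), (), ())   # bottom-to-top on the long pole
goal  = (("green",), ("blue", "red"), ())    # an invented goal position
print(solve(start, goal), "moves")           # 4 moves is optimal here
```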
Problems with the interpretation of available information
Quite often, when we want to reach a goal, we get hints on how to do it best. This means we have to be able to interpret the available information in terms of what the appropriate strategy would be. For many patients with executive dysfunction this is not easy either: they have trouble using such information and engage in inefficient actions. Thus it takes them much longer to solve a task than healthy people, who use the extra information to develop an effective strategy.
Problems with self-criticism and -monitoring
The last problem for people with frontal lobe damage that we want to present here is the last point in the above list of abilities important for proper goal directed behavior: the ability to evaluate one's actions, an ability that is missing in most patients. These people are therefore very likely to 'wander off task' and engage in behavior that does not help them attain their goal. In addition, they are often unable to determine whether their task is completed at all. Possible reasons are a lack of motivation or a lack of concern about one's performance (frontal lobe damage is usually accompanied by changes in emotional processing), but these are probably not the only explanations for these problems.
Another important brain region in this context, the medial portion of the frontal lobe, is responsible for detecting behavioral errors made while working towards a goal. This has been shown by ERP experiments in which an error-related negativity appeared about 100 ms after an error was made. If this area is damaged, the mechanism can no longer work properly, and the patient loses the ability to detect errors and thus to monitor his own behavior.
However, it must be added in the end that although executive dysfunction causes an enormous number of problems in behaving appropriately towards a goal, most patients, when assigned a task, are indeed eager to solve it but are simply unable to do so.
2. Error correction and trouble shooting
Figure 8, Example for the WCST: Cards sorted according to shape (a), number (b) or color (c) of the objects
The most famous experiment for investigating error correction and trouble shooting is the Wisconsin Card Sorting Test (WCST). A participant is presented with cards that show certain objects; the cards are defined by the shape, color and number of the objects on them. The cards now have to be sorted according to a rule based on one of these three criteria. The participant does not know which rule is the right one, but has to infer it from the experimenter's positive or negative feedback. At some point, after the participant has found the correct rule for sorting the cards, the experimenter changes the rule, and the previously correct sorting now leads to negative feedback. The participant has to notice the change and adapt to it by sorting the cards according to the new rule.
Patients with executive dysfunction have problems identifying the rule in the first place: it takes them noticeably longer because they have trouble using the given information to draw a conclusion. But once they have started sorting correctly and the rule changes, they keep sorting the cards according to the old rule, even though many of them notice the negative feedback. They are simply unable to switch to another sorting principle, or at least they need many tries to learn the new one. They perseverate.
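A toy simulation makes the perseveration pattern concrete. It is our illustration, not a clinical model, and all the numbers in it are invented: both simulated sorters have already learned the first rule, the rule silently changes halfway through, and only the sorter that abandons a refuted hypothesis on negative feedback recovers.
```
# WCST-like toy: a flexible sorter vs. a perseverating one.
import random

RULES = ("color", "shape", "number")

def run(switch_on_error):
    rule, guess = "color", "color"   # the sorter has learned the first rule
    errors = 0
    for trial in range(40):
        if trial == 20:
            rule = "shape"           # the experimenter silently changes the rule
        if guess != rule:            # negative feedback on this trial
            errors += 1
            if switch_on_error:      # flexible: try a different hypothesis
                guess = random.choice([r for r in RULES if r != guess])
    return errors

random.seed(0)
print("flexible sorter     :", run(True))   # a few errors around the switch
print("perseverating sorter:", run(False))  # 20 errors: sticks with 'color'
```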
Problems in shifting and modifying strategies
Intact neuronal tissue in the frontal lobe is also crucial for another executive function connected with goal directed behavior that we described above: flexibility and adaptability. Persons with frontal lobe damage have difficulties in shifting their way of thinking, that is, in creating a new plan after recognizing that the original one cannot be carried out for some reason. Thus they are not able to modify their strategy according to the new problem. Even when it is clear that one hypothesis cannot be the right one for solving a task, patients stick to it nevertheless and are unable to abandon it (so-called 'tunnel vision').
Moreover, such persons do not use as many appropriate hypotheses for creating a strategy as people with damage to other brain regions do. The particular way this shows in patients again cannot be stated in general, but depends on the nature of the shift that has to be made.
These problems in 'redirecting' one's strategies stand in contrast to the actual 'act of switching' between tasks, which is yet another problem for patients with frontal lobe damage. Since the control system that leads task switching as such is independent of the parts that actually perform the tasks, task switching is particularly impaired in patients with lesions to the dorsolateral prefrontal cortex, while at the same time they have no trouble performing the single tasks alone. This, of course, causes many problems in goal directed behavior because, as was said before, most tasks consist of smaller subtasks that have to be completed.
3. Responses containing novel sequences of actions
Many clinical tests have been done that require patients to develop strategies for dealing with novel situations. In the Cognitive Estimation Task (Shallice & Evans, 1978), patients are presented with questions whose answers are unlikely to be known. People with damage to the prefrontal cortex have major difficulties producing estimates for questions like: “How many camels are in Holland?”.
In the FAS Test (Miller, 1984), subjects have to generate sequences of words (not proper names) beginning with a certain letter (“F”, “A” or “S”) in a one-minute period. This test involves developing new strategies, selecting between alternatives and avoiding repetition of previously given answers. Patients with left lateral prefrontal lesions are often impaired on it (Stuss et al., 1998).
4. Technical difficulties or dangerous circumstances
A single mistake in a dangerous situation may easily lead to serious injury, while a mistake in a technically difficult situation (e.g. building a house of cards) obviously leads to failure. In such situations, automatic activation of responses would clearly be insufficient, and executive functions seem to be the only solution to such problems.
Wilkins, Shallice and McCarthy (1987) were able to demonstrate a connection between dangerous or difficult situations and the prefrontal cortex, as patients with lesions to this area were impaired in experiments involving dangerous or difficult situations. The ventromedial and orbitofrontal cortex may be particularly important for these aspects of executive functions.
5. Control of action or the overcoming of strong habitual responses
Deficits in initiation, cessation and control of action
We start by describing the effects of losing the ability to initiate an action. A person with executive dysfunction is likely to have trouble beginning to work on a task without strong help from outside; in addition, people with left frontal lobe damage often show impaired spontaneous speech, while people with right frontal lobe damage rather show poor nonverbal fluency. One reason is that such a person has no intention, desire or concern of his or her own to solve the task, since this is yet another characteristic of executive dysfunction. But it is also due to a psychological effect often connected with the loss of proper executive functioning: psychological inertia. As in physics, inertia here means that an action is very hard to initiate, but once started, it is just as hard to shift or stop. This engagement in repetitive behavior is called perseveration (cp. the WCST).
Another problem caused by executive dysfunction can be observed in patients suffering from the so-called environmental dependency syndrome: their actions are impelled or obligated by their physical or social environment. This manifests itself in many different ways and depends to a large extent on the individual's personal history. Examples are patients who begin to type when they see a computer keyboard, who start washing the dishes upon seeing a dirty kitchen, or who hang up pictures on the walls when finding a hammer, nails and pictures on the floor. These people appear to act impulsively, as if they had lost their 'free will': their behavior shows a lack of control over their actions. This is because an impairment of the executive functions causes a disconnection between thought and action. Such patients know that their actions are inappropriate, but as in the WCST they cannot control what they are doing. Even if they are told by which attribute to sort the cards, they keep sorting them according to the old rule, owing to major difficulties in translating these directions into action.
What is needed to avoid problems like these are the abilities to start, stop or change an action but very likely also the ability to use information to direct behavior.
Deficits in cognitive estimation
Besides their difficulties in producing estimates for questions whose answers are unlikely to be known, patients with lesions to the frontal lobes have problems with cognitive estimation in general.
Cognitive estimation is the ability to use known information to make reasonable judgments or deductions about the world. The inability to perform cognitive estimation is the third type of deficit often observed in individuals with executive dysfunction. It is known that people with executive dysfunction have a relatively unaffected knowledge base; what they lack is not the knowledge itself but the ability to make inferences based on it. Various effects of this can be observed. For example, patients with frontal lobe damage have difficulty estimating the length of the spine of an average woman. Making such a realistic estimate requires inference from other knowledge: knowing that the average woman is about 5 ft 6 in (168 cm) tall, and considering that the spine runs about one third to one half the length of the body, and so on. Patients with such a dysfunction have difficulties not only in their estimates of cognitive information but also in estimates of their own capacities (such as their ability to direct activity in a goal-oriented manner or to control their emotions). Prigatano, Altman and O'Brien (1990) reported that when patients with anterior lesions associated with diffuse axonal injury to other brain areas are asked how capable they are of performing tasks such as scheduling their daily activities or preventing their emotions from affecting those activities, they grossly overestimate their abilities. Smith and Milner (1988) found that although individuals with frontal lobe damage have no difficulty determining whether an item appeared in a specific inspection series, they find it difficult to estimate how frequently an item occurred. This may reflect difficulties not only in cognitive estimation but also in memory tasks that place a premium on remembering temporal information; both difficulties (in cognitive estimation and in temporal sequencing) may contribute to a reduced ability to estimate frequency of occurrence.
Despite these impairments, some estimation abilities are preserved in patients with frontal lobe damage. Although such patients have problems estimating how well they can prevent their emotions from affecting their daily activities, they are as good at judging how many clues they will need to solve a puzzle as patients with temporal lobe damage or neurologically intact people.
Theories of frontal lobe function in executive control
In order to explain why patients with frontal lobe damage have difficulties in performing executive functions, four major approaches have been developed. Each of them improves our understanding of the role of frontal regions in executive functions, but none of these theories covers all the deficits observed.
Role of working memory
The most anatomically specific approach assumes that the dorsolateral prefrontal area of the frontal lobe is critical for working memory. Working memory, which has to be clearly distinguished from long-term memory, keeps information on-line for use in performing a task. The approach was not designed to account for the broad array of dysfunctions; it focuses on the three following deficits:
1. Sequencing information and directing behavior toward a goal
2. Understanding of temporal relations between items and events
3. Some aspects of environmental dependency and perseveration
Research on monkeys has been helpful to develop this approach (the delayed-response paradigm, Goldman-Rakic, 1987, serves as a classical example).
Role of Controlled Versus Automatic Processes
There are two theories based on the underlying assumption that the frontal lobes are especially important for controlling behavior in non-experienced situations and for overriding stimulus-response associations, but contribute little to automatic and effortless behavior (Banich, 1997).
Stuss and Benson (1986) consider control over behavior to occur in a hierarchical manner. They distinguish between three different levels, of which each is associated with a particular brain region. In the first level sensory information is processed automatically by posterior regions, in the next level (associated with the executive functions of the frontal lobe) conscious control is needed to direct behavior toward a goal and at the highest level controlled self-reflection takes place in the prefrontal cortex.
This model is appropriate for explaining deficits in goal-oriented behavior, in dealing with novelty, the lack of cognitive flexibility and the environmental dependency syndrome. Furthermore it can explain the inability to control action consciously and to criticise oneself. The second model developed by Shalice (1982) proposes a system consisting of two parts that influence the choice of behavior. The first part, a cognitive system called contention scheduling, is in charge of more automatic processing. Various links and processing schemes cause a single stimulus to result in an automatic string of actions. Once an action is initiated, it remains active until inhibited. The second cognitive system is the supervisory attentional system which directs attention and guides action through decision processes and is only active “when no processing schemes are available, when the task is technically difficult, when problem solving is required and when certain response tendencies must be overcome” (Banich , 1997).
This theory accounts for the observation that such patients show few deficits in routine situations but marked problems in dealing with novel tasks (e.g., the Tower of London task, Shallice, 1982), since no schemes in contention scheduling exist for dealing with them. Impulsive action is another characteristic of patients with frontal lobe damage that the theory can explain: even when asked not to do certain things, such patients stick to their routines and cannot inhibit their automatic behavior.
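To make the two-system architecture concrete, here is a minimal Python sketch of contention scheduling plus a supervisory attentional system in the spirit of Shallice's model. The schema table, goal format, and function names are illustrative assumptions for exposition, not part of any published implementation.

```python
# Minimal sketch of Shallice's (1982) two-system idea. All names and
# rules here are illustrative, not any published implementation.

# Contention scheduling: well-learned stimulus -> action schemas that
# fire automatically once triggered.
SCHEMAS = {
    "phone_rings": "answer_phone",
    "red_light": "brake",
    "doorbell": "open_door",
}

def supervisory_attentional_system(stimulus, habitual_action, goal):
    """Override the habitual response when the situation is novel or
    when a routine response must be suppressed."""
    if habitual_action is None:            # novel situation: no schema exists
        return f"problem_solve({stimulus})"
    if goal and habitual_action in goal.get("forbidden", []):
        return goal["preferred"]           # inhibit the routine response
    return habitual_action                 # routine case: SAS stays idle

def act(stimulus, goal=None):
    habitual = SCHEMAS.get(stimulus)       # contention scheduling proposes
    return supervisory_attentional_system(stimulus, habitual, goal)

# Routine behaviour: contention scheduling alone suffices.
print(act("red_light"))                    # -> brake
# Conflicting situations recruit the SAS; with frontal damage (no SAS),
# the habitual action would run off instead.
print(act("phone_rings",
          goal={"forbidden": ["answer_phone"], "preferred": "ignore"}))  # -> ignore
```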
Use of Scripts
The approach based on scripts, which are sets of events, actions, and ideas that are linked to form a unit of knowledge, was developed by Schank (1982) among others. A script contains information about the setting in which an event occurs, the set of events needed to achieve the goal, and the end event that terminates the action. Such managerial knowledge units (MKUs) are supposed to be stored in the prefrontal cortex. They are organized in a hierarchical manner, being abstract at the top and more specific toward the bottom.
Damage to scripts leads to an inability to behave in a goal-directed manner, to greater difficulty with novel than with familiar situations (because retrieving the MKU of a novel event is harder), and to deficits in the initiation and cessation of action (because MKUs specify the beginning and ending of an action).
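As a rough illustration of a script as a data structure, the following sketch encodes an MKU with a setting, an ordered event list, and an end event, plus a link to a more abstract parent script. All field names and examples are hypothetical, not Schank's notation.

```python
# Illustrative sketch of a script (managerial knowledge unit) as a data
# structure; field names are assumptions for exposition.
from dataclasses import dataclass, field

@dataclass
class Script:
    setting: str                                 # where the event takes place
    events: list = field(default_factory=list)   # ordered actions toward the goal
    end_event: str = ""                          # the event terminating the action
    parent: "Script" = None                      # link to a more abstract MKU

# Abstract MKU at the top of the hierarchy ...
dining = Script(setting="any restaurant",
                events=["enter", "order", "eat", "pay"],
                end_event="leave")
# ... and a more specific script below it.
fast_food = Script(setting="fast-food restaurant",
                   events=["enter", "order at counter", "pay", "eat"],
                   end_event="clear tray and leave",
                   parent=dining)

def run(script):
    """Goal-directed behaviour = executing the stored event sequence,
    terminated by the end event. Script damage removes the ordering
    and the stop condition."""
    for step in script.events:
        print("do:", step)
    print("stop:", script.end_event)

run(fast_food)
```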
Role of a goal list
The perspective of artificial intelligence and machine learning introduced an approach which assumes that each person has a goal list containing the task's requirements or goals. This list is fundamental to guiding behavior, and since frontal lobe damage disrupts the ability to form a goal list, the theory helps to explain difficulties in abstract thinking, perceptual analysis, verbal output, and staying on task. It can also account for the strong environmental influence on patients with frontal lobe damage, due to the lack of internal goals and the difficulty of organizing actions toward a goal.
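The following minimal sketch, with entirely hypothetical goals and cues, illustrates the core idea: an intact goal list keeps behavior internally driven, while an empty or disrupted list leaves behavior at the mercy of environmental cues.

```python
# Hedged sketch of the goal-list idea from AI; the structure is
# illustrative, not a specific published system.
goal_list = ["find recipe", "buy ingredients", "cook dinner"]

def next_action(goals, environment_cues):
    if goals:                       # internal goals dominate behaviour
        return "work on: " + goals[0]
    # With a disrupted goal list (as after frontal damage), behaviour
    # falls under the control of whatever the environment offers.
    return "react to: " + environment_cues[0]

print(next_action(goal_list, ["television is on"]))   # goal-directed
print(next_action([], ["television is on"]))          # environment-dependent
```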
| Brain Region | Possible Function (Left Hemisphere) | Possible Function (Right Hemisphere) | Brodmann's Areas Involved |
| --- | --- | --- | --- |
| Ventrolateral prefrontal cortex (VLPFC) | Retrieval and maintenance of semantic and/or linguistic information | Retrieval and maintenance of visuospatial information | 44, 45, 47 (44 & 45 = Broca's area) |
| Dorsolateral prefrontal cortex (DLPFC) | Selecting a range of responses and suppressing inappropriate ones; manipulating the contents of working memory | Monitoring and checking of information held in mind, particularly in conditions of uncertainty; vigilance and sustained attention | 9, 46 |
| Anterior prefrontal cortex; frontal pole; rostral prefrontal cortex | Multitasking; maintaining future intentions and goals while currently performing other tasks or subgoals | Same | 10 |
| Anterior cingulate cortex (dorsal) | Monitoring in situations of response conflict and error detection | Same | 24 (dorsal) & 32 (dorsal) |
13.03: Summary
It is important to keep in mind that reasoning and decision making are closely connected: decision making is in many cases preceded by a process of reasoning. People's everyday lives are decisively shaped by the coordinated interplay of these two cognitive capacities. This coordination, in turn, is realized by the executive functions, which appear to be mainly located in the frontal lobes of the brain.
13.04: References
• Krawczyk, Daniel (2018). Reasoning: The Neuroscience of How We Think. Elsevier.
• Goldstein, E. Bruce (2005). Cognitive Psychology: Connecting Mind, Research, and Everyday Experience. Thomson Wadsworth.
• Banich, Marie T. (1997). Neuropsychology: The Neural Bases of Mental Function. Houghton Mifflin.
• Wilson, Robert A. & Keil, Frank C. (1999). The MIT Encyclopedia of the Cognitive Sciences. Massachusetts: Bradford Book.
• Ward, Jamie (2006). The Student's Guide to Cognitive Neuroscience. Psychology Press.
• Levitin, D. J. (2002). Foundations of Cognitive Psychology.
• Schmalhofer, Franz. Slides from the course Cognitive Psychology and Neuropsychology, Summer Term 2006/2007, University of Osnabrueck.
13.05: Links
Reasoning
Quiz to check whether you understood the difference between deduction and induction
Short text with graphics
Reasoning in geometry
Euler circles
Wason Selection Task
Difference: Induction, Deduction
Decision making
How to make good decisions
Making ethical decisions
Web-published journal by the Society for Judgment and Decision Making
Executive functions
Detailed document (PDF) from the Technical University of Dresden (in German)
Text from the Max Planck Society, Munich (in English)
Short description and an extensive link list
Executive functions & ADHD
14.01: Introduction - Until Now
Developing from the information processing approach, present-day cognitive psychology differs from classical psychological approaches both in the methods used and in its interdisciplinary connections to other sciences. Apart from rejecting introspection as a valid method for analysing mental phenomena, cognitive psychology introduces further, mainly computer-based, techniques which until now have been outside the range of classical psychology.
By using brain-imaging-techniques like fMRI, cognitive psychology is able to analyse the relation between the physiology of the brain and mental processes. In the future, cognitive psychology will likely focus on computer-based methods even more. Thus, the field will profit from improvements in the area of IT. For example, contemporary fMRI scans are plagued by many possible sources of error, which should be solved in the future, thereby improving the power and precision of the technique. In addition, computational approaches can be combined with classical behavioural approaches, where one infers a participant's mental states from exhibited behaviour.
Cognitive psychology, however, does not only rely on methods developed by other branches of science. It also collaborates with closely related fields, including artificial intelligence, neuroscience, linguistics and the philosophy of mind. The advantage of this multidisciplinary approach is clear: different perspectives on the topic make it possible to test hypotheses using different techniques and to eventually develop new conceptual frameworks for thinking about the mind. Often, modern studies of cognitive psychology criticise classical information processing approaches, which opens the door for other approaches to acquire additional importance. For example, the classical approach has been modified to a parallel information processing approach, which is thought to be closer to the actual functioning of the brain.
14.02: Today's Approaches
The current use of brain imaging
How are the known brain imaging methods used? What kind of information can be derived using these methods?
fMRI
fMRI is a non-invasive imaging method that pictures active structures of the brain at high spatial resolution. The participant lies in the scanner while the brain is imaged; structures that become active while the participant performs a task can then be identified in the recordings.
How?
When parts of the brain are active, their metabolism is also stimulated. Blood, which plays an important role in metabolic transport, flows to the active nerve cells. The haemoglobin in red blood cells carries oxygen (oxyhaemoglobin) to the active region, where the oxygen is consumed; after delivering its oxygen, the haemoglobin becomes deoxyhaemoglobin. This leads to local changes in the relative concentrations of oxyhaemoglobin and deoxyhaemoglobin, and to changes in local blood volume and blood flow. Oxygenated haemoglobin is diamagnetic (the material tends to leave a magnetic field), whereas deoxygenated haemoglobin is paramagnetic (the material tends to migrate into a magnetic field). The magnetic resonance signal of blood is therefore slightly different depending on the level of oxygenation.
By detecting the magnetic properties mentioned above, the fMRI scanner can determine alterations in blood flow and blood volume and construct a picture showing the brain and its activated parts. From recordings made while the participant performs a task, the researcher can infer which brain regions are involved. This is an indirect measure, as it is the metabolism that is measured and not the neuronal activity itself. Furthermore, this imaging method has good spatial resolution (where the activity occurs) but low temporal resolution (when the activity occurs), as the measured response lags the neuronal activity.
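A small numerical sketch can illustrate why the measurement lags: the predicted BOLD signal is the neural event convolved with a slow haemodynamic response function (HRF). The gamma-shaped HRF below is a crude textbook approximation, not the output of any particular scanner or analysis package.

```python
# Why fMRI has low temporal resolution: the measured BOLD response is
# the neural event smeared through a slow haemodynamic response function.
import numpy as np

dt = 0.1                                  # time step in seconds
t = np.arange(0, 30, dt)
hrf = (t ** 5) * np.exp(-t)               # crude gamma-like HRF, peaks ~5 s
hrf /= hrf.sum()

neural = np.zeros_like(t)
neural[int(1 / dt)] = 1.0                 # a brief neural event at t = 1 s

bold = np.convolve(neural, hrf)[: len(t)] # predicted BOLD time course
print("neural event at 1.0 s; BOLD peak at about "
      f"{t[np.argmax(bold)]:.1f} s")      # several seconds later
```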
EEG
The electroencephalogram (EEG) is another non-invasive brain imaging method. Electrical signals from the human brain are recorded while the participant performs a task; what is measured is the summed electrical activity of large populations of neurons.
The electrical activity is measured by attaching electrodes to the scalp. In most cases the electrodes are mounted on a cap that the participant wears. Installing the cap correctly on the participant's head is time-consuming, but correct placement is very important for the outcome. To ensure that the signals sum, the electrodes have to be installed in a standardised geometric and parallel configuration. This technique is applied to measure event-related potentials (ERPs): potential changes that are temporally correlated with an emotional, sensory, cognitive, or motor event. In an experiment, a certain event has to be repeated again and again; the ERP can then be extracted by averaging across these repetitions. The method is not only time-consuming; many disrupting factors also complicate the measurement. Moreover, EEG has very high temporal resolution but very low spatial resolution: it is hardly possible to measure activity in deeper brain regions or to identify the source of the activity from the recordings alone.
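The averaging logic behind ERP extraction can be illustrated with a short simulation: a small stimulus-locked potential buried in much larger background noise becomes visible once many trials are averaged, since uncorrelated noise shrinks roughly with the square root of the trial count. All numbers here are arbitrary illustrations.

```python
# Sketch of ERP extraction by trial averaging.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 0.6, 0.002)                    # 600 ms epoch at 500 Hz
erp = 5.0 * np.exp(-((t - 0.3) ** 2) / 0.002)   # true ERP: bump at 300 ms

n_trials = 200
trials = erp + rng.normal(0, 10.0, (n_trials, len(t)))  # noise > signal

average = trials.mean(axis=0)   # noise shrinks ~ 1/sqrt(n_trials)
print("single-trial SNR is poor; averaged peak at "
      f"{t[np.argmax(average)] * 1000:.0f} ms")  # recovers ~300 ms
```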
Interdisciplinary Approaches
Cognitive Science
Cognitive science is a multidisciplinary science. It comprises areas of cognitive psychology, linguistics, neuroscience, artificial intelligence, cognitive anthropology, computer science, and philosophy. Cognitive science concentrates on studying the intelligent behaviour of humans, which includes perception, learning, memory, thought, and language. Research in the cognitive sciences is based on naturalistic research methods such as cognitive neuropsychology, introspection, psychological experimentation, mathematical modelling, and philosophical argumentation.
In the beginning of the cognitive sciences the most common method was introspection: the test subject evaluated his or her own cognitive processes. In these experiments the researchers used experienced subjects, because the subjects had to analyse and report their own thinking. Problems occur in interpreting the results when a subject gives different reports of the same process. Obviously, a clear separation is needed between the matters that can be studied by introspection and those for which the method is not adequate.
Computational modelling in cognitive science treats the mind as a machine. This approach seeks to express theoretical ideas through computational models that generate behaviour similar to that of humans. Mathematical modelling of this kind is often based on flow charts, and a model's quality is judged by how closely its results match those of humans given the same input.
Nowadays researchers in the cognitive sciences often use theoretical and computational models. "This does not exclude their primary method of experimentation with human participants. In cognitive sciences it is also important to bring the theories and the experimenting together. Because it comprises so many fields of science it is important to bring together the most appropriate methods from all these fields. The psychological experiments should be interpreted through a theory that expresses mental representations and procedures. The most productive and revealing way to perform research in cognitive sciences is to combine different approaches and methods together. This ensures overall picture from the research area and it comprises the viewpoints of all the different fields." (Thagard, Cognitive Science)

Nevertheless, Cognitive Science has not yet succeeded in bringing the different areas together, and it is nowadays criticised for not having established itself as a science of its own. Rather few scientists really describe themselves as cognitive scientists. Furthermore, the basic metaphor of the brain functioning like a computer is challenged, as are the distinctions between such models and nature (cf. Eysenck & Keane, Cognitive Psychology, pp. 519-520). This of course leaves much work for the future. Cognitive Science has to work on better models that explain natural processes and that can reliably make predictions, and these models have to combine multiple mental phenomena. In addition, a general "methodology for relating a computational model's behaviour to human behaviour" has to be worked out; hereby the strength of such models can be increased. Apart from that, Cognitive Science needs to establish an identity with prominent researchers who avow themselves to Cognitive Science. And finally its biggest goal, the creation of a general unifying theory of human cognition (see Theory Part), has to be reached (cf. ibid, p. 520).
Experimental Cognitive Psychology
Psychological experimentation studies mental functions indirectly, by inference from observed behaviour. Such studies are performed to find causal relations and the factors influencing behaviour. The researcher observes visible actions and draws conclusions from these observations. Variables are changed one at a time, and the effect of each change is observed. The benefit of experimental research is that the manipulated factors can be altered in nearly any way the researcher wants, which makes it possible to establish causal relations.
As the classical approach within the field, experimental studies have been the basis for the development of numerous modern approaches within contemporary cognitive psychology. Its empirical methods have been developed and verified over time, and the results gained were a foundation for many enhancements contributed to the field of psychology.
Taking into consideration the established character of experimental cognitive psychology, one might think that methodological changes are rather negligible. But recent years have seen a discussion of whether the results of experimental cognitive psychology remain valid in the "real world" at all. A major objection is that the artificial environment of an experiment may cause certain facts and relationships to be unintentionally ignored, because for reasons of clarity numerous factors are suppressed (cf. Eysenck & Keane, Cognitive Psychology, pp. 514-515). A possible example is research concerning attention: since the attention of the participant is mainly governed by the experimenter's instructions, its focus is essentially predetermined. Therefore "relatively little is known of the factors that normally influence the focus of attention" (ibid, p. 514).

Furthermore, it turns out to be problematic that mental phenomena are often examined in isolation. In trying to make the experimental setup as concise as possible (in order to get clearly interpretable results), one decouples the aspect at issue from adjacent and interacting mental processes. This leads to the problem that the results are valid only in the idealised experimental setting and not in "real life," where multiple mental phenomena interact with each other and numerous outer stimuli influence mental processes. The validity gained by such studies can be characterised only as internal validity (the results are valid in the special circumstances created by the experimenter) and not as external validity (the results stay valid in changed and more realistic circumstances) (cf. ibid, p. 514).

These objections have led to experiments designed to refer more closely to "real life," in which "real-world" phenomena like absent-mindedness, everyday memory, or reading gain importance. Nevertheless, the discussion remains whether such experiments really deliver new information about mental processes, and whether these everyday-phenomenon studies become broadly accepted greatly depends on the results that current experiments deliver.
Another issue concerning experimental setups in cognitive psychology is the way individual differences are handled. In general, the results of an experiment are analysed by an analysis of variance, with the consequence that effects due to individual differences are averaged out and not taken into further consideration. Such a procedure seems highly questionable, especially in light of an investigation by Bowers in 1973, which showed that over 30% of the variance in such studies is due to individual differences or their interaction with the current situation (cf. ibid, p. 515). One challenge for future experimental cognitive psychology is therefore the analysis of individual differences, and finding ways to include knowledge about such differences in general studies.
Cognitive Neuroscience
Another approach towards a better understanding of human cognition is cognitive neuroscience. Cognitive neuroscience lies at the interface between traditional cognitive psychology and the brain sciences. It is a science whose approach is characterised by attempts to derive cognitive level theories from various types of information, such as computational properties of neural circuits, patterns of behavioural damage as a result of brain injury or measurements of brain activity during the execution of cognitive tasks (cf. www.psy.cmu.edu). Cognitive neuroscience helps to understand how the human brain supports thought, perception, affection, action, social process and other aspects of cognition and behaviour, including how such processes develop and change in the brain over time (cf. www.nsf.gov).
Cognitive neuroscience has emerged in the last decade as an intensely active and influential discipline, forged from interactions among the cognitive sciences, neurology, neuroimaging, physiology, neuroscience, psychiatry, and other fields. New methods for non-invasive functional neuroimaging of subjects performing psychological tasks have been of particular importance for this discipline. Non-invasive functional neuroimaging includes positron emission tomography (PET), functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), optical imaging (near infra-red spectroscopy, NIRS), anatomical MRI, and diffusion tensor imaging (DTI). The findings of cognitive neuroscience are directed towards enabling a basic scientific understanding of a broad range of issues involving the brain, cognition and behaviour (cf. www.nsf.gov).
Cognitive neuroscience has become a very important approach to understanding human cognition, since its results can clarify functional brain organisation, such as the operations performed by a particular brain area and the system of distributed, discrete neural areas supporting a specific cognitive representation. These findings can also reveal the effect of individual differences (including even genetic variation) on brain organisation (cf. www.psy.cmu.edu, www.nsf.gov). Another strength is that cognitive neuroscience provides ways to "obtain detailed information about the brain structures involved in different kinds of cognitive processing" (Eysenck & Keane, Cognitive Psychology, p. 521). Techniques such as MRI and CAT scans have proved of particular value when used on patients to discover which brain areas are damaged; before non-invasive methods were developed, localisation of "brain damage could only be established by post mortem examination" (ibid). Knowing which brain areas are related to which cognitive processes gives a clearer view of functional brain regions and hence, in the end, helps toward a better understanding of human cognitive processes. Cognitive neuroscience also serves as a tool to demonstrate the reality of theoretical distinctions. For example, it has been argued by many theorists that implicit memory can be divided into perceptual and conceptual implicit memory; support for that view has come from PET studies showing that perceptual and conceptual priming tasks affect different areas of the brain (cf. ibid, pp. 521-522). However, cognitive neuroscience is not able to stand alone and answer all questions about human cognition; it has limitations concerning data collection and data validity. In most neuroimaging studies, data are collected from several individuals and then averaged, and concern has arisen about such averaging because of the existence of significant individual differences. Raichle (1998) responded that although individual brains differ, general organising principles emerge that transcend these differences; nevertheless, a broadly accepted solution to the problem has yet to be found (cf. ibid, p. 522).
Cognitive Neuropsychology
Cognitive neuropsychology maps the connection between brain functions and cognitive behaviour. Patients with brain damage have been the most important source of evidence in neuropsychology. Neuropsychology also examines dissociations ("forgetting"), double dissociations, and associations (connections between two things formed by cognition). It uses technological research methods to create images of the functioning brain. There are many different techniques for scanning the brain; the most common are EEG (electroencephalography), MRI and fMRI (functional magnetic resonance imaging), and PET (positron emission tomography).
Cognitive neuropsychology became very popular because it delivers good evidence: theories developed for normal individuals can be tested against patients with brain damage, and new theories have been established on the basis of neuropsychological results. Nevertheless, certain limitations of the approach as it stands today cannot be left out of consideration.

First, people with the same mental disability often do not have the same lesion (cf. ibid, pp. 516-517). In such cases researchers have to be careful with their interpretation: in general it can only be concluded that all the areas injured in these patients could play a role in the mental phenomenon, not which part really is decisive. For this reason, future experiments in this area tend to study a rather small number of people with closely similar lesions, or to compare results from groups with similar syndromes and different lesions. In addition, the situation often turns out to be the reverse: some patients have quite similar lesions but show rather different behaviour (cf. ibid, p. 517). One probable reason is that the patients differ in age and lifestyle (cf. Banich, Neuropsychology, p. 55). With better technologies it will become easier to distinguish the cases in which the different personalities really make the difference from those in which the lesions are not entirely equal. The individual brain structures which may cause different reactions to similar lesions will also become a focus of research.

Another problem for cognitive neuropsychology is that suitable patients are rare. The patients of interest have lesions from accidents or injuries suffered during war, and the lesions differ in their nature: often multiple brain regions are damaged, which makes it very hard to determine which of them is responsible for the phenomenon being examined. This dependency on the chance availability of patients will remain, so predictions concerning this aspect of the research are not very reliable.

Apart from that, it is not yet possible to localise some mental processes, such as creative thought or organisational planning, in the brain (cf. Eysenck & Keane, Cognitive Psychology, p. 517). A possible outcome of research is that these activities rely on parallel processing, which would support the modification of the information processing theory discussed later. But if it turns out that many mental processes depend on such parallel processing, this would be a major drawback for cognitive neuropsychology, whose core is the modularisation of the brain and the corresponding phenomena.

In this context, the risks of overestimation and underestimation have to be mentioned. Underestimation occurs because cognitive neuropsychology often identifies only the most important brain region for a mental task, ignoring other regions related to it; this could prove fundamental if parallel processing is indeed crucial to many mental activities. Overestimation occurs when fibres that merely pass through the damaged brain region are lesioned too: the researcher concludes that the region plays an important role in the phenomenon under analysis, even though it only carried the information through (cf. ibid).
Modern technologies and experiments here have to be developed in order to provide valid and precise results.
Unifying Theories
A unified theory of cognitive science serves to bring together all the vantage points one can take toward the brain and mind. If a theory could be formed that incorporates all the discoveries of the disciplines mentioned above, a full understanding of the mind would be within reach.
ACT-R
ACT-R is a cognitive architecture; the acronym stands for Adaptive Control of Thought - Rational. It provides tools for modelling human cognition and consists mainly of five components: perceptual-motor modules, declarative memory, procedural memory, chunks, and buffers. The declarative memory stores facts in knowledge units, the chunks. These are transmitted through the modules' respective buffers, each of which holds one chunk at a time. The procedural memory is the only component without a buffer of its own, but it can access the contents of the other buffers, for example those of the perceptual-motor modules, which are the interface with the (simulated) outer world. Processing is accomplished by predefined production rules, written in LISP. The main figure behind ACT-R is John R. Anderson, who attributes the inspiration to Allen Newell.
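Real ACT-R models are written against the Lisp-based architecture; the toy Python sketch below only illustrates the flavour of chunks, buffers, and a production rule matching against the goal buffer. All names and the example fact are assumptions for exposition.

```python
# Toy production-system cycle in the spirit of ACT-R (illustrative only).
goal_buffer = {"task": "add", "a": 3, "b": 4, "answer": None}
declarative_memory = [                      # chunks: facts as dicts
    {"type": "addition-fact", "a": 3, "b": 4, "sum": 7},
]

def retrieve(pattern):
    """Retrieval buffer: return the first chunk matching the pattern."""
    for chunk in declarative_memory:
        if all(chunk.get(k) == v for k, v in pattern.items()):
            return chunk
    return None

# Production rule: IF the goal is an unsolved addition, THEN retrieve
# the matching fact and update the goal buffer.
def production_solve_addition(goal):
    if goal["task"] == "add" and goal["answer"] is None:
        fact = retrieve({"type": "addition-fact",
                         "a": goal["a"], "b": goal["b"]})
        if fact:
            goal["answer"] = fact["sum"]

production_solve_addition(goal_buffer)
print(goal_buffer["answer"])   # -> 7
```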
SOAR
SOAR is another cognitive architecture; the acronym stands for State, Operator And Result. It enables one to model complex human capabilities, and its goal is to create an agent with human-like behaviour. Its working principles are the following: problem solving is a search in a problem space; permanent knowledge is represented by production rules in the production memory; temporary knowledge is represented by objects in the working memory; new goals are created only when a dead end is reached; and the learning mechanism is chunking. Chunking works as follows: if SOAR encounters an impasse that it cannot resolve with its usual technique, it uses "weaker" strategies to circumvent the dead end. If one of these attempts leads to success, the respective route is saved as a new rule, a chunk, preventing the impasse from occurring again. SOAR was created by John Laird, Allen Newell, and Paul Rosenbloom.
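The chunking mechanism can be sketched schematically: when no stored rule applies (an impasse), the system falls back to slower problem solving and caches the outcome as a new rule. This is an illustrative toy, not SOAR's actual syntax or decision cycle.

```python
# Toy illustration of SOAR-style chunking (purely schematic).
production_memory = {}          # state -> response; the learned "chunks"

def slow_search(state):
    # Stand-in for deliberate problem solving in a subgoal.
    return "solved(" + state + ")"

def decide(state):
    if state in production_memory:          # routine: a stored rule fires
        return production_memory[state]
    result = slow_search(state)             # impasse: create a subgoal
    production_memory[state] = result       # chunking: save a new rule
    return result

decide("blocked door")    # impasse, resolved by search, then chunked
decide("blocked door")    # now handled directly by the learned rule
print(production_memory)
```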
Neural Networks
There are two types of neural networks: biological and artificial.
A biological neural network consists of neurons which are physically or functionally connected with each other. Since each neuron can connect to many other neurons, the number of possible connections is exponentially high. The connections between neurons are called synapses. Signalling along these synapses happens via electrical signalling or via chemical signalling, which in turn induces electrical signals; the chemical signalling works by means of various neurotransmitters.
Artificial neural networks are divided by their goals: artificial intelligence on the one hand and cognitive modelling on the other. Cognitive modelling networks try to simulate biological neural networks in order to gain a better understanding of them, for example of the brain. Until now the complexity of the brain and similar structures has prevented a complete model from being devised, so cognitive modelling focuses on smaller parts such as specific brain regions. Networks in artificial intelligence are used to solve distinct problems. Though the goals differ, the methods applied are very similar. An artificial neural network consists of artificial neurons (nodes) connected by mathematical functions, which can be functions of other functions, and so on. The actual work is done by following the connections according to their weights. Weights are properties of the connections that define how strongly a specific route contributes to the network's output; by changing the weights, the network optimises its main function. In this way it is possible to solve problems for which it is impossible to write a function "by hand".
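A single artificial neuron already shows the idea of weights being adjusted from examples rather than programmed by hand. The perceptron sketch below, with an arbitrary learning rate and the logical AND as its target, is a minimal illustration, not a model of any real brain circuit.

```python
# Minimal artificial neural network: one neuron (perceptron) whose
# connection weights are learned from examples.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
y = np.array([0, 0, 0, 1])                        # target: logical AND

w = np.zeros(2)
b = 0.0
for _ in range(10):                               # a few training passes
    for xi, target in zip(X, y):
        out = 1 if xi @ w + b > 0 else 0
        error = target - out
        w += 0.1 * error * xi                     # strengthen/weaken weights
        b += 0.1 * error

print([(1 if xi @ w + b > 0 else 0) for xi in X])  # -> [0, 0, 0, 1]
```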
14.03: Future Research
Brain imaging/activity measuring
As described in sections 2.1 and 2.2, the brain imaging methods have complementary disadvantages: fMRI has low temporal resolution, while EEG has low spatial resolution. An interdisciplinary approach is to combine both methods in order to achieve both high spatial and high temporal resolution. This technique (simultaneous EEG measurement in the MR scanner) is used, for instance, in studying children with extratemporal epilepsy, where it is important to assign the temporal progress of a seizure to the region in which it has its roots. In December 2006 a conference in Munich discussed another application of this combination of methods: the study of Alzheimer's disease. It could become possible to recognise this disease very early, which could lead to new therapies that reduce the speed and extent of cell death. Brain imaging methods are not only useful in medical contexts; other disciplines could benefit from them and derive new conclusions. For social psychologists, for instance, brain imaging is of interest: experiments with psychopathic personalities are only one way to explore human behaviour. For literary scholars there could be a possibility to study stylistic devices and their effect on humans while reading a poem. Another aim of future research is to synchronise the direction of gaze with the stimulus that triggered the change of direction; this complex project needs data from eye-tracking experiments together with data from fMRI studies.
Making unifying theories more unifying
Since the mind is a single system, it should be possible to explain it as such, without having to take a different perspective for every approach (neurological, psychological, computational). Having such a theory would enable us to understand our brain far more thoroughly than we do now, and might eventually lead to everyday applications. But so far there is no working unified theory of cognition that fulfils the requirements stated by Allen Newell in his book Unified Theories of Cognition. According to Newell, a unified theory of cognition (UTC) has to explain: how intelligent organisms respond flexibly to the environment; how they exhibit goal-directed behaviour and choose goals rationally (including in response to interrupts; see the previous point); how they use symbols; and how they learn from experience. Even Newell's own implementation, SOAR, does not reach these goals.
Promising experiments
Below are the abstracts of a few recent findings.
Unintentional language switch: Kho, K.H., Duffau, H., Gatignol, P., Leijten, F.S.S., Ramsey, N.F., van Rijen, P.C. & Rutten, G-J.M. (2007), Utrecht. Abstract:
We present two bilingual patients without language disorders in whom involuntary language switching was induced. The first patient switched from Dutch to English during a left-sided amobarbital Wada-test. Functional magnetic resonance imaging yielded a predominantly left-sided language distribution similar for both languages. The second patient switched from French to Chinese during intraoperative electrocortical stimulation of the left inferior frontal gyrus. We conclude that the observed language switching in both cases was not likely the result of a selective inhibition of one language, but the result of a temporary disruption of brain areas that are involved in language switching. These data complement the few lesion studies on (involuntary or unintentional) language switching, and add to the functional neuroimaging studies of switching, monitoring, and controlling the language in use.
Bilateral eye movements and memory: Parker, A. & Dagnall, N. (2007), Manchester Metropolitan University. Abstract: One hundred and two participants listened to 150 words, organised into ten themes (e.g. types of vehicle), read by a male voice. Next, 34 of these participants moved their eyes left and right in time with a horizontal target for thirty seconds (saccadic eye movements); 34 participants moved their eyes up and down in time with a vertical target; the remaining participants stared straight ahead, focussed on a stationary target. After the eye movements, all the participants listened to a mixture of words: 40 they'd heard before, 40 completely unrelated new words, and 10 words that were new but which matched one of the original themes. In each case the participants had to say which words they'd heard before, and which were new. The participants who'd performed sideways eye movements performed better in all respects than the others: they correctly recognised more of the old words as old, and more of the new words as new. Crucially, they were fooled less often by the new words whose meaning matched one of the original themes - that is, they correctly recognised more of them as new. This is important because mistakenly identifying one of these 'lures' as an old word is taken as a laboratory measure of false memory. The performance of the participants who moved their eyes vertically, or who stared ahead, did not differ from each other. Episodic memory improvement induced by bilateral eye movements is hypothesized to reflect enhanced interhemispheric interaction, which is associated with superior episodic memory (S. D. Christman & R. E. Propper, 2001). Implications for neuropsychological mechanisms underlying eye movement desensitization and reprocessing (F. Shapiro, 1989, 2001), a therapeutic technique for posttraumatic stress disorder, are discussed.
Is the job satisfaction–job performance relationship spurious? A meta-analytic examination: Nathan A. Bowling (Department of Psychology, Wright State University). Abstract:
The job satisfaction–job performance relationship has attracted much attention throughout the history of industrial and organizational psychology. Many researchers and most lay people believe that a causal relationship exists between satisfaction and performance. In the current study, however, analyses using meta-analytic data suggested that the satisfaction–performance relationship is largely spurious. More specifically, the satisfaction–performance relationship was partially eliminated after controlling for either general personality traits (e.g., Five Factor Model traits and core self-evaluations) or for work locus of control and was almost completely eliminated after controlling for organization-based self-esteem. The practical and theoretical implications of these findings are discussed.
Mirror-touch synesthesia is linked with empathy: Michael J. Banissy & Jamie Ward (Department of Psychology, University College London). Abstract: Watching another person being touched activates a similar neural circuit to actual touch and, for some people with 'mirror-touch' synesthesia, can produce a felt tactile sensation on their own body. In this study, we provide evidence for the existence of this type of synesthesia and show that it correlates with heightened empathic ability. This is consistent with the notion that we empathize with others through a process of simulation.
Discussion points
Where are the limitations of research? Can we rely on our intuitive idea of our mind? What impact could a complete understanding of the brain have on everyday life?
Brain activity as a false friend
In several experiments the outcome is not unambiguous, which hinders a direct derivation of conclusions from the data. In experiments with psychopathic personalities, researchers had to weaken their thesis that persons with missing activity in the frontal lobe are predetermined to be violent psychopaths or unethical murderers. Missing activity in the frontal lobe leads to a disregulation of the threshold for emotional, impulsive, or violent actions. But this can also be an advantage, for example for firefighters or police officers, who have to withstand strong pressures and need a higher threshold. So missing frontal activity is not a sufficient criterion for a psychopathic personality.
14.04: Conclusion
Today's work in the field of cognitive psychology gives several hints as to how future work in this area may look. In practical applications, improvements will probably be driven mainly by the limitations one faces today; here in particular the newer subfields of cognitive psychology will develop quickly. How such changes will look depends heavily on the character of future developments in technology: improvements in cognitive neuropsychology and cognitive neuroscience especially depend on advances in imaging techniques. The theoretical framework of the field will be influenced by such developments as well. The parallel processing theory may still be modified according to new insights in computer science; thereby, or eventually through the acceptance of one of the already existing overarching theories, the theoretical basis for current research could be reunified. Whether it takes another thirty years to fulfil Newell's dream of such a theory, or whether it will happen rather quickly, is still open. As a rather young science, cognitive psychology is still subject to elementary changes; all its practical and theoretical domains are steadily being modified. Whether the trends mentioned in this chapter are dead ends or will cause a revolution of the field is hard to predict.
14.05: References
Anderson, John R. & Lebiere, Christian (1998). The Atomic Components of Thought. Lawrence Erlbaum Associates.
Banich, Marie T. (1997). Neuropsychology: The Neural Bases of Mental Function. Houghton Mifflin Company.
Goldstein, E. Bruce (2004). Cognitive Psychology. Wadsworth.
Lyon, G. Reid & Rumsey, Judith M. (1996). Neuroimaging: A Window to the Neurological Foundations of Learning and Behaviour in Children. Baltimore.
Eysenck, M. W. & Keane, M. T. (2000). Cognitive Psychology: A Student's Handbook. Psychology Press.
Thagard, Paul (2004). Cognitive Science. In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy.
1.01: Chapter Overview
When experimental psychology arose in the nineteenth century, it was a unified discipline. However, as the experimental method began to be applied to a larger and larger range of psychological phenomena, this new discipline fragmented, causing what became known in the 1920s as the “crisis in psychology,” a crisis that has persisted to the present day.
Cognitive science arose in the 1950s when it became apparent that a number of different disciplines, including psychology, computer science, linguistics and philosophy, were fragmenting. Some researchers responded to this situation by viewing cognition as a form of information processing. In the 1950s, the only plausible notion of information processing was the kind that was performed by a recent invention, the digital computer. This singular notion of information processing permitted cognitive science to emerge as a highly unified discipline.
A half century of research in cognitive science, though, has been informed by alternative conceptions of both information processing and cognition. As a result, the possibility has emerged that cognitive science itself is fragmenting. The purpose of this first chapter is to note the existence of three main approaches within the discipline: classical cognitive science, connectionist cognitive science, and embodied cognitive science. The existence of these different approaches leads to obvious questions: What are the core assumptions of these three different schools of thought? What are the relationships between these different sets of core assumptions? Is there only one cognitive science, or are there many different cognitive sciences? Chapter 1 sets the stage for asking such questions; the remainder of the book explores possible answers to them.
1.02: A Fragmented Psychology
Modern experimental psychology is rooted in two seminal publications from the second half of the nineteenth century (Schultz & Schultz, 2008), Fechner’s (1966) Elements of Psychophysics, originally published in 1860, and Wundt’s Principles of Physiological Psychology, originally published in 1873 (Wundt & Titchener, 1904). Of these two authors, it is Wundt who is viewed as the founder of psychology, because he established the first experimental psychology laboratory—his Institute of Experimental Psychology—in Leipzig in 1879, as well as the first journal devoted to experimental psychology, Philosophical Studies, in 1881 (Leahey, 1987).
Fechner’s and Wundt’s use of experimental methods to study psychological phenomena produced a broad, unified science.
This general significance of the experimental method is being more and more widely recognized in current psychological investigation; and the definition of experimental psychology has been correspondingly extended beyond its original limits. We now understand by ‘experimental psychology’ not simply those portions of psychology which are directly accessible to experimentation, but the whole of individual psychology. (Wundt & Titchener, 1904, p. 8)
However, not long after its birth, modern psychology began to fragment into competing schools of thought. The Würzburg school of psychology, founded in 1896 by Oswald Külpe, a former student of Wundt’s, challenged Wundt’s views on the scope of psychology (Schultz & Schultz, 2008). The writings of the functionalist school being established in North America were critical of Wundt’s structuralism (James, 1890a, 1890b). Soon, behaviourism arose as a reaction against both structuralism and functionalism (Watson, 1913).
Psychology’s fragmentation soon began to be discussed in the literature, starting with Bühler’s 1927 “crisis in psychology” (Stam, 2004), and continuing to the present day (Bower, 1993; Driver-Linn, 2003; Gilbert, 2002; Koch, 1959, 1969, 1976, 1981, 1993; Lee, 1994; Stam, 2004; Valsiner, 2006; Walsh-Bowers, 2009). For one prominent critic of psychology’s claim to scientific status,
psychology is misconceived when seen as a coherent science or as any kind of coherent discipline devoted to the empirical study of human beings. Psychology, in my view, is not a single discipline but a collection of studies of varied cast, some few of which may qualify as science, whereas most do not. (Koch, 1993, p. 902)
The fragmentation of psychology is only made more apparent by repeated attempts to find new approaches to unify the field, or by rebuttals against claims of disunity (Drob, 2003; Goertzen, 2008; Henriques, 2004; Katzko, 2002; Richardson, 2000; Smythe & McKenzie, 2010; Teo, 2010; Valsiner, 2006; Walsh-Bowers, 2009; Watanabe, 2010; Zittoun, Gillespie, & Cornish, 2009).
The breadth of topics being studied by any single psychology department is staggering; psychology correspondingly uses an incredible diversity of methodologies. It is not surprising that Leahey (1987, p. 3) called psychology a “large, sprawling, confusing human undertaking.” Because of its diversity, it is likely that psychology is fated to be enormously fragmented, at best existing as a pluralistic discipline (Teo, 2010; Watanabe, 2010).
If this is true of psychology, then what can be expected of a more recent discipline, cognitive science? Cognitive science would seem likely to be even more fragmented than psychology, because it involves not only psychology but also many other disciplines. For instance, the website of the Cognitive Science Society states that the Society,
brings together researchers from many fields that hold a common goal: understanding the nature of the human mind. The Society promotes scientific interchange among researchers in disciplines comprising the field of Cognitive Science, including Artificial Intelligence, Linguistics, Anthropology, Psychology, Neuroscience, Philosophy, and Education. (Cognitive Science Society, 2013)
The names of all of these disciplines are proudly placed around the perimeter of the Society’s logo.
When cognitive science appeared in the late 1950s, it seemed to be far more unified than psychology. Given that cognitive science draws from so many different disciplines, how is this possible?
1.03: A Unified Cognitive Science
When psychology originated, the promise of a new, unified science was fuelled by the view that a coherent object of enquiry (conscious experience) could be studied using a cohesive paradigm (the experimental method). Wundt defined psychological inquiry as “the investigation of conscious processes in the modes of connexion peculiar to them” (Wundt & Titchener, 1904, p. 2). His belief was that using the experimental method would “accomplish a reform in psychological investigation comparable with the revolution brought about in the natural sciences.” As experimental psychology evolved, the content areas that it studied became markedly differentiated, leading to a proliferation of methodologies. The fragmentation of psychology was a natural consequence.
Cognitive science arose as a discipline in the mid-twentieth century (Boden, 2006; Gardner, 1984; Miller, 2003), and at the outset seemed more unified than psychology. In spite of the diversity of talks presented at the “Special Interest Group in Information Theory” at MIT in 1956, cognitive psychologist George Miller,
left the symposium with a conviction, more intuitive than rational, that experimental psychology, theoretical linguistics, and the computer simulation of cognitive processes were all pieces from a larger whole and that the future would see a progressive elaboration and coordination of their shared concerns. (Miller, 2003, p. 143)
The cohesiveness of cognitive science was, perhaps, a natural consequence of its intellectual antecedents. A key inspiration to cognitive science was the digital computer; we see in Chapter 2 that the invention of the computer was the result of the unification of ideas from the diverse fields of philosophy, mathematics, and electrical engineering.
Similarly, the immediate parent of cognitive science was the field known as cybernetics (Ashby, 1956; de Latil, 1956; Wiener, 1948). Cybernetics aimed to study adaptive behaviour of intelligent agents by employing the notions of feedback and information theory. Its pioneers were polymaths. Not only did cyberneticist William Grey Walter pioneer the use of EEG in neurology (Cooper, 1977), he also invented the world’s first autonomous robots (Bladin, 2006; Hayward, 2001; Holland, 2003a; Sharkey & Sharkey, 2009). Cybernetics creator Norbert Wiener organized the Macy Conferences (Conway & Siegelman, 2005), which were gatherings of mathematicians, computer scientists, psychologists, psychiatrists, anthropologists, and neuroscientists, who together aimed to determine the general workings of the human mind. The Macy Conferences were the forerunners of the interdisciplinary symposia that inspired cognitive scientists such as George Miller.
What possible glue could unite the diversity of individuals involved first in cybernetics, and later in cognitive science? One answer is that cognitive scientists are united in sharing a key foundational assumption that cognition is information processing (Dawson, 1998). As a result, a critical feature of cognition involves representation or symbolism (Craik, 1943). The early cognitive scientists,
realized that the integration of parts of several disciplines was possible and desirable, because each of these disciplines had research problems that could be addressed by designing ‘symbolisms.’ Cognitive science is the result of striving towards this integration. (Dawson, 1998, p. 5)
Assuming that cognition is information processing provides a unifying principle, but also demands methodological pluralism. Cognitive science accounts for human cognition by invoking an information processing explanation. However, information processors themselves require explanatory accounts framed at very different levels of analysis (Marr, 1982; Pylyshyn, 1984). Each level of analysis involves asking qualitatively different kinds of questions, and also involves using dramatically different methodologies to answer them.
Marr (1982) proposed that information processors require explanations at the computational, algorithmic, and implementational levels. At the computational level, formal proofs are used to determine what information processing problem is being solved. At the algorithmic level, experimental observations and computer simulations are used to determine the particular information processing steps that are being used to solve the information processing problem. At the implementational level, biological or physical methods are used to determine the mechanistic principles that actually instantiate the information processing steps. In addition, a complete explanation of an information processor requires establishing links between these different levels of analysis.
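A simple illustration of the distinction (not an example from Marr himself): sorting a list is one computational-level problem, yet it admits several algorithmic-level solutions, each of which could in turn run on very different physical implementations.

```python
# One computational-level problem, two algorithmic-level solutions.
def insertion_sort(xs):
    out = []
    for x in xs:                      # algorithm 1: insert each item in place
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):                   # algorithm 2: divide and conquer
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [3, 1, 2]
# Same computational-level answer, different algorithmic-level processes:
print(insertion_sort(data) == merge_sort(data))   # -> True
```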
An approach like Marr’s is a mandatory consequence of assuming that cognition is information processing (Dawson, 1998). It also makes cognitive science particularly alluring. This is because cognitive scientists are aware not only that a variety of methodologies are required to explain information processing, but also that researchers from a diversity of areas can be united by the goal of seeking such an explanation.
As a result, definitions of cognitive science usually emphasize co-operation across disciplines (Simon, 1980). Cognitive science is “a recognition of a fundamental set of common concerns shared by the disciplines of psychology, computer science, linguistics, economics, epistemology, and the social sciences generally” (Simon, 1980, p. 33). Interviews with eminent cognitive scientists reinforce this theme of interdisciplinary harmony and unity (Baumgartner & Payr, 1995). Indeed, it would appear that cognitive scientists deem it essential to acquire methodologies from more than one discipline.
For instance, philosopher Patricia Churchland learned about neuroscience at the University of Manitoba Medical School by “doing experiments and dissections and observing human patients with brain damage in neurology rounds” (Baumgartner & Payr, 1995, p. 22). Philosopher Daniel Dennett improved his computer literacy by participating in a year-long working group that included two philosophers and four AI researchers. AI researcher Terry Winograd studied linguistics in London before he went to MIT to study computer science. Psychologist David Rumelhart observed that cognitive science has “a collection of methods that have been developed, some uniquely in cognitive science, but some in related disciplines. . . . It is clear that we have to learn to appreciate one another’s approaches and understand where our own are weak” (Baumgartner & Payr, 1995, p. 196).
At the same time, as it has matured since its birth in the late 1950s, concerns about cognitive science’s unity have also arisen. Philosopher John Searle stated, “I am not sure whether there is such a thing as cognitive science” (Baumgartner & Payr, 1995, p. 203). Philosopher John Haugeland claimed that “philosophy belongs in cognitive science only because the ‘cognitive sciences’ have not got their act together yet” (p. 103). AI pioneer Herbert Simon described cognitive science as a label “for the fact that there is a lot of conversation across disciplines” (p. 234). For Simon, “cognitive science is the place where they meet. It does not matter whether it is a discipline. It is not really a discipline—yet.”
In modern cognitive science there exist intense disagreements about what the assumption “cognition is information processing” really means. From one perspective, modern cognitive science is fragmenting into different schools of thought—classical, connectionist, embodied—that have dramatically different views about what the term information processing means. Classical cognitive science interprets this term as meaning rule-governed symbol manipulations of the same type performed by a digital computer. The putative fragmentation of cognitive science begins when this assumption is challenged. John Searle declared, “I think that cognitive science suffers from its obsession with the computer metaphor” (Baumgartner & Payr, 1995, p. 204). Philosopher Paul Churchland declared, “we need to get away from the idea that we are going to achieve Artificial Intelligence by writing clever programs” (p. 37).
Different interpretations of information processing produce variations of cognitive science that give the strong sense of being mutually incompatible. One purpose of this book is to explore the notion of information processing at the foundation of each of these varieties. A second is to examine whether these notions can be unified.
One reason that Wilhelm Wundt is seen as the founder of psychology is because he established its first academic foothold at the University of Leipzig. Wundt created the first experimental psychology laboratory there in 1879. Psychology was officially part of the university calendar by 1885. Today, hundreds of psychology departments exist at universities around the world.
Psychology is clearly healthy as an academic discipline. However, its status as a science is less clear. Sigmund Koch, a noted critic of psychology (Koch, 1959, 1969, 1976, 1981, 1993), argued in favor of replacing the term psychology with the psychological studies because of his view that it was impossible for psychology to exist as a coherent discipline.
Although it is much younger than psychology, cognitive science has certainly matured into a viable academic discipline. In the fall of 2010, the website for the Cognitive Science Society listed 77 universities around the world that offered cognitive science as a program of study. Recent developments in cognitive science, though, have raised questions about its scientific coherence. To parallel Koch, should we examine “cognitive science,” or is it more appropriate to inquire about “the cognitive sciences”? Investigating this issue is one theme of the current book.
According to psychologist George Miller (2003), cognitive science was born on September 11, 1956. At this early stage, the unity of cognitive science was not really an issue. Digital computers were a relatively recent invention (Goldstine, 1993; Lavington, 1980; Williams, 1997; Zuse, 1993). At the time, they presented a unified notion of information processing to be adopted by cognitive science. Digital computers were automatic symbol manipulators (Haugeland, 1985): they were machines that manipulated symbolic representations by applying well-defined rules; they brought symbolic logic to mechanized life. Even though some researchers had already noted that the brain may not work exactly like a computer, the brain was still assumed to be digital, because the all-or-none generation of an action potential was interpreted as being equivalent to assigning a truth value in a Boolean logic (McCulloch & Pitts, 1943; von Neumann, 1958).
Classical cognitive science, which is the topic of Chapter 3, was the first school of thought in cognitive science and continues to dominate the field to this day. It exploited the technology of its era by interpreting “information processing” as meaning “rule-governed manipulation of symbols” (Feigenbaum & Feldman, 1995). This version of the information processing hypothesis bore early fruit, producing major advances in the understanding of language (Chomsky, 1957, 1959b, 1965) and of human problem solving (Newell, Shaw, & Simon, 1958; Newell & Simon, 1961, 1972). Later successes with this approach led to the proliferation of “thinking artifacts”: computer programs called expert systems (Feigenbaum & McCorduck, 1983; Kurzweil, 1990). Some researchers have claimed that the classical approach is capable of providing a unified theory of thought (Anderson, 1983; Anderson et al., 2004; Newell, 1990).
The successes of the classical approach were in the realm of well-posed problems: problems with unambiguously defined knowledge states and goal states, as well as explicitly defined operations for converting one state of knowledge into another. If a problem is well posed, then its solution can be described as a search through a problem space, and a computer can be programmed to perform this search (Newell & Simon, 1972). However, this emphasis led to growing criticisms of the classical approach. One general issue was whether human cognition went far beyond what could be captured just in terms of solving well-posed problems (Dreyfus, 1992; Searle, 1980; Weizenbaum, 1976).
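To make the idea of searching a problem space concrete, here is a minimal sketch in which a toy well-posed problem, with an explicit start state, goal state, and operators, is solved by breadth-first search. The particular problem and the operator names are assumptions chosen for illustration, not an example taken from Newell and Simon.

```python
from collections import deque

def solve(start, goal, operators):
    """Breadth-first search through a problem space.

    A well-posed problem supplies an unambiguous start state, a goal
    state, and explicit operators that map one state to another.
    """
    frontier = deque([(start, [])])  # (state, operator names applied so far)
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, op in operators:
            successor = op(state)
            if successor not in visited:
                visited.add(successor)
                frontier.append((successor, path + [name]))
    return None  # (a real solver would also bound the search)

# Toy problem space: transform the number 1 into 10 using two operators.
operators = [("double", lambda n: n * 2), ("add1", lambda n: n + 1)]
print(solve(1, 10, operators))  # ['double', 'double', 'add1', 'double']
```

An ill-posed problem, by contrast, offers no such unambiguous state encoding or operator set, which is why it resists this style of analysis.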
Indeed, the classical approach was adept at producing computer simulations of game playing and problem solving, but was not achieving tremendous success in such fields as speech recognition, language translation, or computer vision. “An overall pattern had begun to take shape. . . . an early, dramatic success based on the easy performance of simple tasks, or low-quality work on complex tasks, and then diminishing returns, disenchantment, and, in some cases, pessimism” (Dreyfus, 1992, p. 99).
Many abilities that humans are expert at without training, such as speaking, seeing, and walking, seemed to be beyond the grasp of classical cognitive science. These abilities involve dealing with ill-posed problems. An ill-posed problem is deeply ambiguous, has poorly defined knowledge states and goal states, and involves poorly defined operations for manipulating knowledge. As a result, it is not well suited to classical analysis, because a problem space cannot be defined for an ill-posed problem. This suggests that the digital computer provides a poor definition of the kind of information processing performed by humans. “In our view people are smarter than today’s computers because the brain employs a basic computational architecture that is more suited to deal with a central aspect of the natural information processing tasks that people are so good at” (Rumelhart & McClelland, 1986c, p. 3).
Connectionist cognitive science reacted against classical cognitive science by proposing a cognitive architecture that is qualitatively different from that inspired by the digital computer metaphor (Bechtel & Abrahamsen, 2002; Churchland, Koch, & Sejnowski, 1990; Churchland & Sejnowski, 1992; Clark, 1989, 1993; Horgan & Tienson, 1996; Quinlan, 1991). Connectionists argued that the problem with the classical notion of information processing was that it ignored the fundamental properties of the brain. Connectionism cast itself as a neuronally inspired, biologically plausible alternative to classical cognitive science (Bechtel & Abrahamsen, 2002; McClelland & Rumelhart, 1986; Rumelhart & McClelland, 1986c). “No serious study of mind (including philosophical ones) can, I believe, be conducted in the kind of biological vacuum to which [classical] cognitive scientists have become accustomed” (Clark, 1989, p. 61).
The architecture proposed by connectionism was the artificial neural network (Caudill & Butler, 1992a, 1992b; Dawson, 2004, 2005; De Wilde, 1997; Muller & Reinhardt, 1990; Rojas, 1996). An artificial neural network is a system of simple processors, analogous to neurons, which operate in parallel and send signals to one another via weighted connections that are analogous to synapses. Signals detected by input processors are converted into a response that is represented as activity in a set of output processors. Connection weights determine the input-output relationship mediated by a network, but they are not programmed. Instead, a learning rule is used to modify the weights. Artificial neural networks learn from example.
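These ideas can be made concrete with a small sketch in the spirit of an early perceptron: a single output unit whose weights are not programmed but adjusted by an error-correcting learning rule. The task (learning logical OR from examples) and all parameter values are assumptions chosen for illustration only.

```python
import random

random.seed(0)  # reproducible illustration

def step(signal):
    return 1.0 if signal > 0 else 0.0  # all-or-none response

def train(examples, n_inputs, epochs=50, rate=0.1):
    """Adjust connection weights from examples via error correction."""
    # weights[0] is a bias; the rest play the role of synaptic weights.
    weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs + 1)]
    for _ in range(epochs):
        for inputs, target in examples:
            signal = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))
            error = target - step(signal)  # mismatch drives learning
            weights[0] += rate * error
            for i, x in enumerate(inputs):
                weights[i + 1] += rate * error * x
    return weights

# The network is never programmed with OR; it learns it from examples.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train(examples, n_inputs=2)
for inputs, target in examples:
    response = step(w[0] + sum(wi * x for wi, x in zip(w[1:], inputs)))
    print(inputs, "->", response, "target:", target)
```

The point of the sketch is the division of labour: the experimenter supplies examples, and the learning rule, not a programmer, determines the final input-output relationship.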
Artificial neural networks negate many of the fundamental properties of the digital computer (von Neumann, 1958). Gone was the notion that the brain was a digital symbol manipulator governed by a serial central controller. In its place, the processes of the brain were described as subsymbolic and parallel (Smolensky, 1988); control of these processes was decentralized. Gone was the classical distinction between structure and process, in which a distinct set of explicit rules manipulated discrete symbols stored in a separate memory. In its place, the brain was viewed as a distributed system in which problem solutions emerged from the parallel activity of a large number of simple processors: a network was both structure and process, and networks both stored and modified information at the same time (Hillis, 1985). Gone was the assumption that information processing was akin to doing logic (Oaksford & Chater, 1991). In its place, connectionists viewed the brain as a dynamic, statistical pattern recognizer (Churchland & Sejnowski, 1989; Grossberg, 1980; Smolensky, 1988).
With all such changes, though, connectionism still concerned itself with cognition as information processing—but of a different kind: “These dissimilarities do not imply that brains are not computers, but only that brains are not serial digital computers” (Churchland, Koch, & Sejnowski, 1990, p. 48, italics original).
Connectionist models of cognition have had as long a history as have classical simulations (Dawson, 2004; Medler, 1998). McCulloch and Pitts described powerful neural network models in the 1940s (McCulloch, 1988a), and Rosenblatt’s (1958, 1962) perceptrons were simple artificial neural networks that were not programmed, but instead learned from example. Such research waned in the late 1960s as the result of proofs about the limitations of simple artificial neural networks (Minsky & Papert, 1988; Papert, 1988).
However, the limitations of early networks were overcome in the mid-1980s, by which time new techniques had been discovered that permitted much more powerful networks to learn from examples (Ackley, Hinton, & Sejnowski, 1985; Rumelhart, Hinton, & Williams, 1986b). Because of these new techniques, modern connectionism has achieved nearly equal status to classical cognitive science. Artificial neural networks have been used to model a wide range of ill-posed problems, have generated many expert systems, and have successfully simulated domains once thought to be exclusive to the classical approach (Bechtel & Abrahamsen, 2002; Carpenter & Grossberg, 1992; Enquist & Ghirlanda, 2005; Gallant, 1993; Gluck & Myers, 2001; Grossberg, 1988; Kasabov, 1996; Pao, 1989; Ripley, 1996; Schmajuk, 1997; Wechsler, 1992).
In a review of a book on neural networks, Hanson and Olson (1991, p. 332) claimed that “the neural network revolution has happened. We are living in the aftermath.” This revolution, as is the case with most, has been messy and acrimonious, markedly departing from the sense of unity that cognitive science conveyed at the time of its birth. A serious and angry debate about the merits of classical versus connectionist cognitive science rages in the literature.
On the one hand, classical cognitive scientists view the rise of connectionism as being a rebirth of the associationist and behaviourist psychologies that cognitivism had successfully replaced. Because connectionism eschewed rules and symbols, classicists argued that it was not powerful enough to account for the regularities of thought and language (Fodor & McLaughlin, 1990; Fodor & Pylyshyn, 1988; Pinker, 2002; Pinker & Prince, 1988). “The problem with connectionist models is that all the reasons for thinking that they might be true are reasons for thinking that they couldn’t be psychology” (Fodor & Pylyshyn, 1988, p. 66). A Scientific American news story on a connectionist expert system included Pylyshyn’s comparison of connectionism to voodoo: “People are fascinated by the prospect of getting intelligence by mysterious Frankenstein-like means—by voodoo! And there have been few attempts to do this as successful as neural nets” (Stix, 1994, p. 44). The difficulty with interpreting the internal structure of connectionist networks has been used to argue against their ability to provide models, theories, or even demonstrations to cognitive science (McCloskey, 1991).
On the other hand, and not surprisingly, connectionist researchers have replied in kind. Some of these responses have been arguments about problems that are intrinsic to the classical architecture (e.g., slow, brittle models) combined with claims that the connectionist architecture offers solutions to these problems (Feldman & Ballard, 1982; Rumelhart & McClelland, 1986c). Others have argued that classical models have failed to provide an adequate account of experimental studies of human cognition (Oaksford, Chater, & Stenning, 1990). Connectionist practitioners have gone as far as to claim that they have provided a paradigm shift for cognitive science (Schneider, 1987).
Accompanying claims for a paradigm shift is the view that connectionist cognitive science is in a position to replace an old, tired, and failed classical approach. Searle (1992, p. 247), in a defense of connectionism, has described traditional cognitivist models as being “obviously false or incoherent.” Some would claim that classical cognitive science doesn’t study the right phenomena. “The idea that human activity is determined by rules is not very plausible when one considers that most of what we do is not naturally thought of as problem solving” (Horgan & Tienson, 1996, p. 31). Paul Churchland noted that “good old-fashioned artificial intelligence was a failure. The contribution of standard architectures and standard programming artificial intelligence was a disappointment” (Baumgartner & Payr, 1995, p. 36). Churchland went on to argue that this disappointment will be reversed with the adoption of more brain-like architectures.
Clearly, the rise of connectionism represents a fragmentation of cognitive science. This fragmentation is heightened by the fact that connectionists themselves freely admit that there are different notions about information processing that fall under the connectionist umbrella (Horgan & Tienson, 1996; Rumelhart & McClelland, 1986c). “It is not clear that anything has appeared that could be called a, let alone the, connectionist conception of cognition” (Horgan & Tienson, 1996, p. 3).
If the only division within cognitive science was between classical and connectionist schools of thought, then the possibility of a unified cognitive science still exists. Some researchers have attempted to show that these two approaches can be related (Dawson, 1998; Smolensky & Legendre, 2006), in spite of the differences that have been alluded to in the preceding paragraphs. However, the hope for a unified cognitive science is further challenged by the realization that a third school of thought has emerged that represents a reaction to both classical and connectionist cognitive science.
This third school of thought is embodied cognitive science (Chemero, 2009; Clancey, 1997; Clark, 1997; Dawson, Dupuis, & Wilson, 2010; Robbins & Aydede, 2009; Shapiro, 2011). Connectionist cognitive science arose because it felt that classical cognitive science did not pay sufficient attention to a particular part of the body, the brain. Embodied cognitive science critiques both classical and connectionist approaches because both ignore the whole body and its interaction with the world. Radical versions of embodied cognitive science aim to dispense with mental representations completely, and argue that the mind extends outside the brain, into the body and the world (Agre, 1997; Chemero, 2009; Clancey, 1997; Clark, 2008; Clark & Chalmers, 1998; Noë, 2009; Varela, Thompson, & Rosch, 1991; Wilson, 2004).
A key characteristic of embodied cognitive science is that it abandons methodological solipsism (Wilson, 2004). According to methodological solipsism (Fodor, 1980), representational states are individuated only in terms of their relations to other representational states. Relations of the states to the external world—the agent’s environment—are not considered. “Methodological solipsism in psychology is the view that psychological states should be construed without reference to anything beyond the boundary of the individual who has those states” (Wilson, 2004, p. 77).
Methodological solipsism is reflected in the sense-think-act cycle that characterizes both classical and connectionist cognitive science (Pfeifer & Scheier, 1999). The sense-think-act cycle defines what is also known as the classical sandwich (Hurley, 2001), in which there is no direct contact between sensing and acting. Instead, thinking—or representations—is the “filling” of the sandwich, with the primary task of planning action on the basis of sensed data. Both classical and connectionist cognitive science adopt the sense-think-act cycle because both have representations standing between perceptual inputs and behavioural outputs. “Representation is an activity that individuals perform in extracting and deploying information that is used in their further actions” (Wilson, 2004, p. 183).
Embodied cognitive science replaces the sense-think-act cycle with sense-act processing (Brooks, 1991, 1999; Clark, 1997, 1999, 2003; Hutchins, 1995; Pfeifer & Scheier, 1999). According to this alternative view, there are direct links between sensing and acting. The purpose of the mind is not to plan action, but is instead to coordinate sense-act relations. “Models of the world simply get in the way. It turns out to be better to use the world as its own model” (Brooks, 1991, p. 139). Embodied cognitive science views the brain as a controller, not as a planner. “The realization was that the so-called central systems of intelligence—or core AI as it has been referred to more recently—was perhaps an unnecessary illusion, and that all the power of intelligence arose from the coupling of perception and actuation systems” (Brooks, 1999, p. viii).
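A minimal sketch can convey the flavour of sense-act processing; the sensors, behaviours, and priority ordering below are illustrative assumptions rather than Brooks' actual architecture. Each behaviour couples sensing directly to acting, with no world model or planner in between.

```python
def reactive_controller(sensors):
    """Map raw sensor readings directly to a motor command.

    Behaviours are checked in priority order; the first one whose
    sensory condition holds issues the action. No representation of
    the world is built or consulted: the world is its own model.
    """
    behaviours = [  # (condition on raw senses, action), highest priority first
        (lambda s: s["bumper"], "reverse"),
        (lambda s: s["obstacle_left"], "turn_right"),
        (lambda s: s["obstacle_right"], "turn_left"),
        (lambda s: True, "go_forward"),  # default wandering behaviour
    ]
    for condition, action in behaviours:
        if condition(sensors):
            return action

print(reactive_controller({"bumper": False, "obstacle_left": True,
                           "obstacle_right": False}))  # turn_right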
In replacing the sense-think-act cycle with the sense-act cycle, embodied cognitive science distances itself from classical and connectionist cognitive science. This is because sense-act processing abandons planning in particular and the use of representations in general. Brooks (1999, p. 170) wrote: “In particular I have advocated situatedness, embodiment, and highly reactive architectures with no reasoning systems, no manipulable representations, no symbols, and totally decentralized computation.” Other theorists make stronger versions of this claim: “I hereby define radical embodied cognitive science as the scientific study of perception, cognition, and action as necessarily embodied phenomena, using explanatory tools that do not posit mental representations” (Chemero, 2009, p. 29).
The focus on sense-act processing leads directly to the importance of embodiment. Embodied cognitive science borrows a key idea from cybernetics: that agents are adaptively linked to their environment (Ashby, 1956; Wiener, 1948). This adaptive link is a source of feedback: an animal’s actions on the world can change the world, which in turn will affect later actions. Embodied cognitive science also leans heavily on Gibson’s (1966, 1979) theory of direct perception. In particular, the adaptive link between an animal and its world is affected by the physical form of the animal—its embodiment. “It is often neglected that the words animal and environment make an inseparable pair” (Gibson, 1979, p. 8). Gibson proposed that sensing agents “picked up” properties that indicated potential actions that could be taken on the world. Again, the definition of such affordances requires taking the agent’s form into account.
Embodied cognitive science also distances itself from both classical and connectionist cognitive science by proposing the extended mind hypothesis (Clark, 1997, 1999, 2003, 2008; Wilson, 2004, 2005). According to the extended mind hypothesis, the mind is not separated from the world by the skull. Instead, the boundary between the mind and the world is blurred, or has disappeared. A consequence of the extended mind is cognitive scaffolding, where the abilities of “classical” cognition are enhanced by using the external world as support. A simple example of this is extending memory by using external aids, such as notepads. However, full-blown information processing can be placed into the world if appropriate artifacts are used. Hutchins (1995) provided many examples of navigational tools that externalize computation. “It seems that much of the computation was done by the tool, or by its designer. The person somehow could succeed by doing less because the tool did more” (p. 151).
Embodied cognitive science provides another fault line in a fragmenting cognitive science. With notions like the extended mind, the emphasis on action, and the abandonment of representation, it is not clear at first glance whether embodied cognitive science is redefining the notion of information processing or abandoning it altogether. “By failing to understand the source of the computational power in our interactions with simple ‘unintelligent’ physical devices, we position ourselves well to squander opportunities with so-called intelligent computers” (Hutchins, 1995, p. 171).
Further fragmentation is found within the embodied cognition camp (Robbins & Aydede, 2009; Shapiro, 2011). Embodied cognitive scientists have strong disagreements amongst themselves about the degree to which each of their radical views is to be accepted. For instance, Clark (1997) believed there is room for representation in embodied cognitive science, while Chemero (2009) did not.
In summary, early developments in computer science led to a unitary notion of information processing. When information processing was adopted as a hypothesis about cognition in the 1950s, the result was a unified cognitive science. However, a half century of developments in cognitive science has led to a growing fragmentation of the field. Disagreements about the nature of representations, and even about their necessity, have spawned three strong camps within cognitive science: classical, connectionist, and embodied. Fragmentation within each of these camps can easily be found. Given this situation, it might seem foolish to ask whether there exist any central ideas that can be used to unify cognitive science. However, the asking of that question is an important thread that runs through the current book.
1.05: Cognitive Science - Pre-paradigmatic
In the short story The Library of Babel, Jorge Luis Borges (1962) envisioned the universe as the Library, an infinite set of hexagonal rooms linked together by a spiral staircase. Each room held exactly the same number of books, each book being exactly 410 pages long, all printed in an identical format. The librarians hypothesize that the Library holds all possible books, that is, all possible arrangements of a finite set of orthographic symbols. They believe that “the Library is total and that its shelves register . . . all that is given to express, in all languages” (p. 54).
Borges’ librarians spend their lives sorting through mostly unintelligible volumes, seeking those books that explain “humanity’s basic mysteries” (Borges, 1962, p. 55). Central to this search is the faith that there exists a language in which to express these answers. “It is verisimilar that these grave mysteries could be explained in words: if the language of philosophers is not sufficient, the multiform Library will have produced the unprecedented language required, with its vocabularies and grammars” (p. 55).
The fictional quest of Borges’ librarians mirrors an actual search for ancient texts. Scholasticism was dedicated to reviving ancient wisdom. It was spawned in the tenth century when Greek texts preserved and translated by Islamic scholars made their way to Europe and led to the creation of European universities. It reached its peak in the thirteenth century with Albertus Magnus’ and Thomas Aquinas’ works on Aristotelian philosophy. A second wave of scholasticism in the fifteenth century was fuelled by new discoveries of ancient texts (Debus, 1978). “The search for new classical texts was intense in the fifteenth century, and each new discovery was hailed as a major achievement” (Debus, 1978, p. 4). These discoveries included Ptolemy’s Geography and the only copy of Lucretius’ De rerum natura, which later revived interest in atomism.
Borges’ (1962) emphasis on language is also mirrored in the scholastic search for the wisdom of the ancients. The continued discovery of ancient texts led to the Greek revival in the fifteenth century (Debus, 1978), which enabled this treasure trove of texts to be translated into Latin. In the development of modern science, Borges’ “unprecedented language” was first Greek and then Latin.
The departure from Latin as the language of science was a turbulent development during the scientific revolution. Paracelsus was attacked by the medical establishment for presenting medical lectures in his native Swiss German in 1527 (Debus, 1978). Galileo published his 1612 Discourse on Bodies in Water in Italian, an act that enraged his fellow philosophers of the Florentine Academy (Sobel, 1999). For a long period, scholars who wrote in their vernacular tongue had to preface their writings with apologies and explanations of why this did not represent a challenge to the universities of the day (Debus, 1978).
Galileo wrote in Italian because “I must have everyone able to read it” (Sobel, 1999, p. 47). However, from some perspectives, writing in the vernacular actually produced a communication breakdown, because Galileo was not disseminating knowledge in the scholarly lingua franca, Latin. Galileo’s writings were examined as part of his trial. It was concluded that “he writes in Italian, certainly not to extend the hand to foreigners or other learned men” (Sobel, 1999, p. 256).
A different sort of communication breakdown is a common theme in modern philosophy of science. It has been argued that some scientific theories are incommensurable with others (Feyerabend, 1975; Kuhn, 1970). Incommensurable scientific theories are theories that are impossible to compare because there is no logical or meaningful relation between some or all of the theories’ terms. Kuhn argued that this situation would occur if, within a science, different researchers operated under different paradigms. “Within the new paradigm, old terms, concepts, and experiments fall into new relationships one with the other. The inevitable result is what we must call, though the term is not quite right, a misunderstanding between the two schools” (Kuhn, 1970, p. 149). Kuhn saw holders of different paradigms as being members of different language communities—even if they wrote in the same vernacular tongue! Differences in paradigms caused communication breakdowns.
The modern fragmentation of cognitive science might be an example of communication breakdowns produced by the existence of incommensurable theories. For instance, it is not uncommon to see connectionist cognitive science described as a Kuhnian paradigm shift away from classical cognitive science (Horgan & Tienson, 1996; Schneider, 1987). When embodied cognitive science is discussed in Chapter 5, we see that it too might be described as a new paradigm.
To view the fragmentation of cognitive science as resulting from competing, incommensurable paradigms is also to assume that cognitive science is paradigmatic. Given that cognitive science as a discipline is less than sixty years old (Boden, 2006; Gardner, 1984; Miller, 2003), it is not impossible that it is actually pre-paradigmatic. Indeed, one discipline to which cognitive science is frequently compared—experimental psychology—may also be pre-paradigmatic (Buss, 1978; Leahey, 1992).
Pre-paradigmatic sciences exist in a state of disarray and fragmentation because data are collected and interpreted in the absence of a unifying body of belief. “In the early stages of the development of any science different men confronting the same range of phenomena, but not usually all the same particular phenomena, describe and interpret them in different ways” (Kuhn, 1970, p. 17). My suspicion is that cognitive science has achieved some general agreement about the kinds of phenomena that it believes it should be explaining. However, it is pre-paradigmatic with respect to the kinds of technical details that it believes are necessary to provide the desired explanations.
In an earlier book, I argued that the assumption that cognition is information processing provided a framework for a “language” of cognitive science that made interdisciplinary conversations possible (Dawson, 1998). I demonstrated that when this framework was applied, there were more similarities than differences between classical and connectionist cognitive science. The source of these similarities was the fact that both classical and connectionist cognitive science adopted the information processing hypothesis. As a result, both schools of thought can be examined and compared using Marr’s (1982) different levels of analysis. It can be shown that classical and connectionist cognitive sciences are highly related at the computational and algorithmic levels of analysis (Dawson, 1998, 2009).
In my view, the differences between classical and connectionist cognitive science concern the nature of the architecture, the primitive set of abilities or processes that are available for information processing (Dawson, 2009). The notion of an architecture is detailed in Chapter 2. One of the themes of the current book is that debates between different schools of thought in cognitive science are pre-paradigmatic discussions about the possible nature of the cognitive architecture.
These debates are enlivened by the modern rise of embodied cognitive science. One reason that classical and connectionist cognitive science can be easily compared is that both are representational (Clark, 1997; Dawson, 1998, 2004). However, some schools of thought in embodied cognitive science are explicitly anti-representational (Brooks, 1999; Chemero, 2009; Noë, 2004). As a result, it is not clear that the information processing hypothesis is applicable to embodied cognitive science. One of the goals of the current book is to examine embodied cognitive science from an information processing perspective, in order to use some of its key departures from both classical and connectionist cognitive science to inform the debate about the architecture.
The search for truth in the Library of Babel had dire consequences. Its librarians “disputed in the narrow corridors, proffered dark curses, strangled each other on the divine stairways, flung the deceptive books into the air shafts, met their death cast down in a similar fashion by the inhabitants of remote regions. Others went mad” (Borges, 1962, p. 55). The optimistic view of the current book is that a careful examination of the three different schools of cognitive science can provide a fruitful, unifying position on the nature of the cognitive architecture.
1.06: Plan of Action
A popular title for surveys of cognitive science is What is cognitive science? (Lepore & Pylyshyn, 1999; von Eckardt, 1995). Because that title is already taken, a different one is used for the current book. But steering the reader towards an answer to this excellent question is the primary purpose of the current manuscript.
Answering the question What is cognitive science? resulted in the current book being organized around two central themes. One is to introduce key ideas at the foundations of three different schools of thought: classical cognitive science, connectionist cognitive science, and embodied cognitive science. A second is to examine these ideas to see whether these three “flavours” of cognitive science can be unified. As a result, this book is presented in two main parts.
The purpose of Part I is to examine the foundations of the three schools of cognitive science. It begins in Chapter 2, with an overview of the need to investigate cognitive agents at multiple levels. These levels are used to provide a framework for considering potential relationships between schools of cognitive science. Each of these schools is also introduced in Part I. I discuss classical cognitive science in Chapter 3, connectionist cognitive science in Chapter 4, and embodied cognitive science in Chapter 5.
With the foundations of the three different versions of cognitive science laid out in Part I, in Part II, I turn to a discussion of a variety of topics within cognitive science. The purpose of these discussions is to seek points of either contention or convergence amongst the different schools of thought.
The theme of Part II is that the key area of disagreement amongst classical, connectionist, and embodied cognitive science is the nature of the cognitive architecture. However, this disagreement also provides an opportunity: reflecting on the technical details of the architecture offers one route towards a unified cognitive science. This is because the properties of the architecture—regardless of the school of thought—are at best vaguely defined. For instance, Searle (1992, p. 15) observed that “‘intelligence,’ ‘intelligent behavior,’ ‘cognition’ and ‘information processing,’ for example are not precisely defined notions. Even more amazingly, a lot of very technically sounding notions are poorly defined—notions such as ‘computer,’ ‘computation,’ ‘program,’ and ‘symbol.’”
In Part II, I also present a wide range of topics that permit the different schools of cognitive science to make contact. It is hoped that my treatment of these topics will show how the competing visions of the different schools of thought can be coordinated in a research program that attempts to specify an architecture of cognition inspired by all three schools.
2.01: Chapter Overview
Cognitive science is an intrinsically interdisciplinary field of study. Why is this so? In the current chapter, I argue that the interdisciplinary nature of cognitive science necessarily emerges because it assumes that cognition is information processing. The position I take is that explanations of information processors require working at four different levels of investigation, with each level involving a different vocabulary and being founded upon the methodologies of different disciplines.
The chapter begins with a historical treatment of logicism, the view that thinking is equivalent to performing mental logic, and shows how this view was converted into the logical analysis of relay circuits by Claude Shannon. Shannon’s work is then used to show that a variety of different arrangements of switches in a circuit can perform the same function, and that the same logical abilities can be constructed from different sets of core logical properties. Furthermore, any one of these sets of logical primitives can be brought to life in a variety of different physical realizations.
The consequence of this analysis is that information processors must be explained at four different levels of investigation. At the computational level, one asks what kinds of information processing problems can be solved by a system. At the algorithmic level, one asks what procedures are being used by a system to solve a particular problem of interest. At the architectural level, one asks what basic operations are used as the foundation for a specific algorithm. At the implementational level, one asks what physical mechanisms are responsible for bringing a particular architecture to life.
My goal in this chapter is to introduce these different levels of investigation. Later chapters reveal that different approaches within cognitive science have differing perspectives on the relative importance, and on the particular details, of each level.
2.02: Machines and Minds
Animism is the assignment of lifelike properties to inanimate, but moving, objects. Animism characterizes the thinking of young children, who may believe that a car, for instance, is alive because it can move on its own (Piaget, 1929). Animism was also apparent in the occult tradition of the Renaissance; the influential memory systems of Lull and of Bruno imbued moving images with powerful, magical properties (Yates, 1966).
Animism was important to the development of scientific and mathematical methods in the seventeenth century: “The Renaissance conception of an animistic universe, operated by magic, prepared the way for a conception of a mechanical universe, operated by mathematics” (Yates, 1966, p. 224). Note the animism in the introduction to Hobbes’ (1967) Leviathan:
For seeing life is but a motion of limbs, the beginning whereof is in some principal part within; why may we not say, that all Automata (Engines that move themselves by means of springs and wheeles as doth a watch) have an artificial life? For what is the Heart, but a Spring; and the Nerves, but so many Springs; and the Joynts, but so many Wheeles, giving motion to the whole Body, such as was intended by the Artificer? (Hobbes, 1967, p. 3)
Such appeals to animism raised new problems. How were moving humans to be distinguished from machines and animals? Cartesian philosophy grounded humanity in mechanistic principles, but went on to distinguish humans-as-machines from animals because only the former possessed a soul, whose essence was “only to think” (Descartes, 1960, p. 41).
Seventeenth-century philosophy was the source of the mechanical view of man (Grenville, 2001; Wood, 2002). It was also the home of a reverse inquiry: was it possible for human artifacts, such as clockwork mechanisms, to become alive or intelligent?
By the eighteenth century, such philosophical ponderings were fuelled by “living machines” that had made their appearance to great public acclaim. Between 1768 and 1774, Pierre and Henri-Louis Jaquet-Droz constructed elaborate clockwork androids that wrote, sketched, or played the harpsichord (Wood, 2002). The eighteenth-century automata of Jacques de Vaucanson, on display for a full century, included a flute player and a food-digesting duck. Von Kempelen’s infamous chess-playing Turk first appeared in 1770; it was in and out of the public eye until its destruction by fire in 1854 (Standage, 2002).
Wood (2002, p. xxvii) notes that all automata are presumptions “that life can be simulated by art or science or magic. And embodied in each invention is a riddle, a fundamental challenge to our perception of what makes us human.” In the eighteenth century, this challenge attracted the attention of the Catholic Church. In 1727, Vaucanson’s workshop was ordered destroyed because his clockwork servants, who served dinner and cleared tables, were deemed profane (Wood, 2002). The Spanish Inquisition imprisoned both Pierre Jaquet-Droz and his writing automaton!
In spite of the Church’s efforts, eighteenth-century automata were popular, tapping into a nascent fascination with the possibility of living machines. This fascination has persisted uninterrupted to the present day, as evidenced by the many depictions of robots and cyborgs in popular fiction and films (Asimov, 2004; Caudill, 1992; Grenville, 2001; Ichbiah, 2005; Levin, 2002; Menzel, D’Aluisio, & Mann, 2000).
Not all modern automata were developed as vehicles of entertainment. The late 1940s saw the appearance of the first autonomous robots, which resembled, and were called, Tortoises (Grey Walter, 1963). These devices provided “mimicry of life” (p. 114) and were used to investigate the possibility that living organisms were simple devices that were governed by basic cybernetic principles. Nonetheless, Grey Walter worried that animism might discredit the scientific merit of his work:
We are daily reminded how readily living and even divine properties are projected into inanimate things by hopeful but bewildered men and women; and the scientist cannot escape the suspicion that his projections may be psychologically the substitutes and manifestations of his own hope and bewilderment. (Grey Walter, 1963, p. 115)
While Grey Walter’s Tortoises were important scientific contributions (Bladin, 2006; Hayward, 2001; Holland, 2003b; Sharkey & Sharkey, 2009), the twentieth century saw the creation of another, far more important, automaton: the digital computer. The computer is rooted in seventeenth-century advances in logic and mathematics. Inspired by the Cartesian notion of rational, logical, mathematical thought, the computer brought logicism to life.
Logicism is the idea that thinking is identical to performing logical operations (Boole, 2003). By the end of the nineteenth century, numerous improvements to Boole’s logic led to the invention of machines that automated logical operations; most of these devices were mechanical, but electrical logic machines had also been conceived (Buck & Hunka, 1999; Jevons, 1870; Marquand, 1885; Mays, 1953). If thinking was logic, then thinking machines—machines that could do logic—existed in the late nineteenth century.
The logic machines of the nineteenth century were, in fact, quite limited in ability, as we see later in this chapter. However, they were soon replaced by much more powerful devices. In the first half of the twentieth century, the basic theory of a general computing mechanism had been laid out in Alan Turing’s account of his universal machine (Hodges, 1983; Turing, 1936). The universal machine was a device that “could simulate the work done by any machine. . . . It would be a machine to do everything, which was enough to give anyone pause for thought” (Hodges, 1983, p. 104). The theory was converted into working universal machines—electronic computers—by the middle of the twentieth century (Goldstine, 1993; Reid, 2001; Williams, 1997).
The invention of the electronic computer made logicism practical. The computer’s general ability to manipulate symbols made the attainment of machine intelligence seem plausible to many, and inevitable to some (Turing, 1950). Logicism was validated every time a computer accomplished some new task that had been presumed to be the exclusive domain of human intelligence (Kurzweil, 1990, 1999). The pioneers of cognitive science made some bold claims and some aggressive predictions (McCorduck, 1979): in 1956, Herbert Simon announced to a mathematical modelling class that “Over Christmas Allen Newell and I invented a thinking machine” (McCorduck, 1979, p. 116). It was predicted that by the late 1960s most theories in psychology would be expressed as computer programs (Simon & Newell, 1958).
The means by which computers accomplished complex information processing tasks inspired theories about the nature of human thought. The basic workings of computers became, at the very least, a metaphor for the architecture of human cognition. This metaphor is evident in philosophy in the early 1940s (Craik, 1943).
My hypothesis then is that thought models, or parallels, reality—that its essential feature is not ‘the mind,’ ‘the self,’ ‘sense data’ nor ‘propositions,’ but is symbolism, and that this symbolism is largely of the same kind which is familiar to us in mechanical devices which aid thought and calculation. (Craik, 1943, p. 57)
Importantly, many modern cognitive scientists do not see the relationship between cognition and computers as being merely metaphorical (Pylyshyn, 1979a, p. 435): “For me, the notion of computation stands in the same relation to cognition as geometry does to mechanics: It is not a metaphor but part of a literal description of cognitive activity.”
Computers are special devices in another sense: in order to explain how they work, one must look at them from several different perspectives. Each perspective requires a radically different vocabulary to describe what computers do. When cognitive science assumes that cognition is computation, it also assumes that human cognition must be explained using multiple vocabularies.
In this chapter, I provide an historical view of logicism and computing to introduce these multiple vocabularies, describe their differences, and explain why all are needed. We begin with the logicism of George Boole, which, when transformed into modern binary logic, defined the fundamental operations of modern digital computers.
2.03: From the Laws of Thought to Binary Logic
In 1854, with the publication of An Investigation of the Laws of Thought, George Boole (2003) invented modern mathematical logic. Boole’s goal was to move the study of thought from the domain of philosophy into the domain of mathematics:
There is not only a close analogy between the operations of the mind in general reasoning and its operations in the particular science of Algebra, but there is to a considerable extent an exact agreement in the laws by which the two classes of operations are conducted. (Boole, 2003, p. 6)
Today we associate Boole’s name with the logic underlying digital computers (Mendelson, 1970). However, Boole’s algebra bears little resemblance to our modern interpretation of it. The purpose of this section is to trace the trajectory that takes us from Boole’s nineteenth-century calculus to the twentieth-century invention of truth tables that define logical functions over two binary inputs.
Boole did not create a binary logic; instead he developed an algebra of sets. Boole used symbols such as $x,\,y,$ and $z$ to represent classes of entities. He then defined “signs of operation, as +, –, ×, standing for those operations of the mind by which the conceptions of things are combined or resolved so as to form new conceptions involving the same elements” (Boole, 2003, p. 27). The operations of his algebra were those of election: they selected subsets of entities from various classes of interest (Lewis, 1918).
For example, consider two classes: $x$ (e.g., “black things”) and $y$ (e.g., “birds”). Boole’s expression $x\,+\,y$ performs an “exclusive or” of the two constituent classes, electing the entities that were “black things,” or were “birds,” but not those that were “black birds.”
Elements of Boole’s algebra pointed in the direction of our more modern binary logic. For instance, Boole used multiplication to elect entities that shared properties defined by separate classes. So, continuing our example, the set of “black birds” would be elected by the expression $xy$. Boole also recognized that if one multiplied a class with itself, the result would simply be the original set again. Boole wrote his fundamental law of thought as $xx\,=\,x$, which can also be expressed as $x^2\,=\,x$. He realized that if one assigned numerical quantities to $x$, then this law would only be true for the values 0 and 1. “Thus it is a consequence of the fact that the fundamental equation of thought is of the second degree, that we perform the operation of analysis and classification, by division into pairs of opposites, or, as it is technically said, by dichotomy” (Boole, 2003, pp. 50–51). Still, this dichotomy was not to be exclusively interpreted in terms of truth or falsehood, though Boole exploited this representation in his treatment of secondary propositions. Boole typically used 0 to represent the empty set and 1 to represent the universal set; the expression $1\,–\,x$ elected those entities that did not belong to $x$.
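Boole's election operations map neatly onto ordinary finite sets. The following sketch is my illustration, not Boole's own notation: multiplication becomes intersection, his exclusive addition becomes symmetric difference, and $1\,–\,x$ becomes complementation relative to a universal set; the example classes are assumed for demonstration.

```python
universe = {"raven", "crow", "coal", "swan", "dove"}   # Boole's 1
x = {"raven", "crow", "coal"}                          # "black things"
y = {"raven", "crow", "swan", "dove"}                  # "birds"

print(x & y)                 # xy: elects the "black birds"
print(x ^ y)                 # Boole's x + y: black or bird, but not both
print((x & x) == x)          # fundamental law of thought: xx = x
print(universe - x)          # 1 - x: everything that is not black
print(x & (universe - x))    # x(1 - x) = 0: no contradictory entities
```

Running the last line prints set(), Python's empty set, which plays the role of Boole's 0.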
Boole’s operations on symbols were purely formal. That is, the actions of his logical rules were independent of any semantic interpretation of the logical terms being manipulated.
We may in fact lay aside the logical interpretation of the symbols in the given equation; convert them into quantitative symbols, susceptible only of the values 0 and 1; perform upon them as such all the requisite processes of solution; and finally restore to them their logical interpretation. (Boole, 2003, p. 70)
This formal approach is evident in Boole’s analysis of his fundamental law. Beginning with $x^2\,=\,x$, Boole applied basic algebra to convert this expression into $x\,–\,x^2\,=\,0$. He then simplified this expression to $x(1\,–\,x)\,=\,0$. Note that none of these steps are logical in nature; Boole would not be able to provide a logical justification for his derivation. However, he did triumphantly provide a logical interpretation of his result: 0 is the empty set, 1 the universal set, $x$ is some set of interest, and $1\,–\,x$ is the negation of this set. Boole’s algebraic derivation thus shows that the intersection of $x$ with its negation is the empty set. Boole noted that, in terms of logic, the equation $x(1\,–\,x)\,=\,0$ expressed,
that it is impossible for a being to possess a quality and not to possess that quality at the same time. But this is identically that ‘principle of contradiction’ which Aristotle has described as the fundamental axiom of all philosophy. (Boole, 2003, p. 49)
It was important for Boole to link his calculus to Aristotle, because Boole not only held Aristotelian logic in high regard, but also hoped that his new mathematical methods would both support Aristotle’s key logical achievements as well as extend Aristotle’s work in new directions. To further link his formalism to Aristotle’s logic, Boole applied his methods to what he called secondary propositions. A secondary proposition was a statement about a proposition that could be either true or false. As a result, Boole’s analysis of secondary propositions provides another glimpse of how his work is related to our modern binary interpretation of it.
Boole applied his algebra of sets to secondary propositions by adopting a temporal interpretation of election. That is, Boole considered that a secondary proposition could be true or false for some duration of interest. The expression $xy$ would now be interpreted as electing a temporal period during which both propositions $x$ and $y$ are true. The symbols 0 and 1 were also given temporal interpretations, meaning “no time” and “the whole of time” respectively. While this usage differs substantially from our modern approach, it has been viewed as the inspiration for modern binary logic (Post, 1921).
Boole’s work inspired subsequent work on logic in two different ways. First, Boole demonstrated that an algebra of symbols was possible, productive, and worthy of exploration: “Boole showed incontestably that it was possible, by the aid of a system of mathematical signs, to deduce the conclusions of all these ancient modes of reasoning, and an indefinite number of other conclusions” (Jevons, 1870, p. 499). Second, logicians noted certain idiosyncrasies of and deficiencies with Boole’s calculus, and worked on dealing with these problems. Jevons also wrote that Boole’s examples “can be followed only by highly accomplished mathematical minds; and even a mathematician would fail to find any demonstrative force in a calculus which fearlessly employs unmeaning and incomprehensible symbols” (p. 499). Attempts to simplify and correct Boole produced new logical systems that serve as the bridge between Boole’s nineteenth-century logic and the binary logic that arose in the twentieth century.
Boole’s logic is problematic because certain mathematical operations do not make sense within it (Jevons, 1870). For instance, because addition defined the “exclusive or” of two sets, the expression $x\,+\,x$ had no interpretation in Boole’s system. Jevons believed that Boole’s interpretation of addition was deeply mistaken and corrected this by defining addition as the “inclusive or” of two sets. This produced an interpretable additive law, $x\,+\,x\,=\,x$, that paralleled Boole’s multiplicative fundamental law of thought.
Jevons’ (1870) revision of Boole’s algebra led to a system that was simple enough to permit logical inference to be mechanized. Jevons illustrated this with a three-class system, in which upper-case letters (e.g., $A$) picked out those entities that belonged to a set and lower-case letters (e.g., $a$) picked out those entities that did not belong. He then produced what he called the logical abecedarium, which was the set of possible combinations of the three classes. In his three-class example, the abecedarium consisted of eight combinations: $ABC,\,ABc,\,AbC,\,Abc,\,aBC,\,aBc,\,abC,$ and $abc$. Note that each of these combinations is a multiplication of three terms in Boole’s sense, and thus elects an intersection of three different classes. As well, with the improved definition of logical addition, different terms of the abecedarium could be added together to define some set of interest. For example, Jevons (but not Boole!) could elect the class $B$ with the following expression: $B\,=\,ABC\,+\,ABc\,+\,aBC\,+\,aBc$.
Jevons (1870) demonstrated how the abecedarium could be used as an inference engine. First, he used his set notation to define concepts of interest, such as in the example $A$ = iron, $B$ = metal, and $C$ = element. Second, he translated propositions into intersections of sets. For instance, the premise “Iron is metal” can be rewritten as “$A$ is $B$,” which in Boole’s algebra becomes $AB$, and “metal is element” becomes $BC$. Third, given a set of premises, Jevons removed the terms that were inconsistent with the premises from the abecedarium: the only terms consistent with the premises $AB$ and $BC$ are $ABC,\,aBC,\,abC,$ and $abc$. Fourth, Jevons inspected and interpreted the remaining abecedarium terms to perform valid logical inferences. For instance, from the four remaining terms in Jevons’ example, we can conclude that “all iron is element,” because $A$ is only paired with $C$ in the terms that remain, and “there are some elements that are neither metal nor iron,” or $abC$. Of course, the complete set of entities that is elected by the premises is the logical sum of the terms that were not eliminated.
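Jevons' elimination procedure is mechanical enough to state in a few lines of code. The sketch below is an illustration of the method rather than a reconstruction of his logical piano: it generates the abecedarium for three classes, strikes out the terms inconsistent with the premises "iron is metal" ($AB$) and "metal is element" ($BC$), and prints the survivors.

```python
from itertools import product

def abecedarium(n):
    """All 2**n combinations of n classes and their complements."""
    return list(product([True, False], repeat=n))

def show(term):
    return "".join(letter if present else letter.lower()
                   for letter, present in zip("ABC", term))

terms = abecedarium(3)  # ABC, ABc, AbC, Abc, aBC, aBc, abC, abc

# "A is B" strikes out any term containing A together with b;
# "B is C" strikes out any term containing B together with c.
premises = [lambda t: not (t[0] and not t[1]),   # iron is metal (AB)
            lambda t: not (t[1] and not t[2])]   # metal is element (BC)

surviving = [t for t in terms if all(p(t) for p in premises)]
print([show(t) for t in surviving])  # ['ABC', 'aBC', 'abC', 'abc']
```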
Jevons (1870) created a mechanical device to automate the procedure described above. The machine, known as the “logical piano” because of its appearance, displayed the 16 different combinations of the abecedarium for working with four different classes. Premises were entered by pressing keys; the depression of a pattern of keys removed inconsistent abecedarium terms from view. After all premises had been entered in sequence, the terms that remained on display were interpreted. A simpler variation of Jevons’ device, originally developed for four-class problems but more easily extendable to larger situations, was invented by Allan Marquand (Marquand, 1885). Marquand later produced plans for an electric version of his device that used electromagnets to control the display (Mays, 1953). Had this device been constructed, and had Marquand’s work come to the attention of a wider audience, the digital computer might have been a nineteenth-century invention (Buck & Hunka, 1999).
With respect to our interest in the transition from Boole’s work to our modern interpretation of it, note that the logical systems developed by Jevons, Marquand, and others were binary in two different senses. First, a set and its complement (e.g., $A$ and $a$) never co-occurred in the same abecedarium term. Second, when premises were applied, an abecedarium term was either eliminated or not. These binary characteristics of such systems permitted them to be simple enough to be mechanized.
The next step towards modern binary logic was to adopt the practice of assuming that propositions could either be true or false, and to algebraically indicate these states with the values 1 and 0. We have seen that Boole started this approach, but that he did so by applying awkward temporal set-theoretic interpretations to these two symbols.
The modern use of 1 and 0 to represent true and false arises later in the nineteenth century. British logician Hugh McColl’s (1880) symbolic logic used this notation, which he borrowed from the mathematics of probability. American logician Charles Sanders Peirce (1885) also explicitly used a binary notation for truth in his famous paper “On the algebra of logic: A contribution to the philosophy of notation.” This paper is often cited as the one that introduced the modern usage (Ewald, 1996). Peirce extended Boole’s work on secondary propositions by stipulating an additional algebraic law of propositions: for every element $x$, either $x\,=\,0$ or $x\,=\,1$, producing a system known as “the two-valued algebra” (Lewis, 1918).
The two-valued algebra led to the invention of truth tables, which became established in the literature in the early 1920s (Post, 1921; Wittgenstein, 1922) but were likely in use much earlier. There is evidence that Bertrand Russell and his then student Ludwig Wittgenstein were using truth tables as early as 1910 (Shosky, 1997). It has also been argued that Charles Peirce and his students probably were using truth tables as early as 1902 (Anellis, 2004).
Truth tables make explicit an approach in which primitive propositions ($p,\,q,\,r,$ etc.) that could only adopt values of 0 or 1 are used to produce more complex expressions. These expressions are produced by using logical functions to combine simpler terms. This approach is known as “using truth-value systems” (Lewis & Langford, 1959). Truth-value systems essentially use truth tables to determine the truth of functions of propositions (i.e., of logical combinations of propositions). “It is a distinctive feature of this two-valued system that when the property, 0 or 1, of the elements $p,\,q,$ etc., is given, any function of the elements which is in the system is thereby determined to have the property 0 or the property 1” (p. 199).
Consider Table 1, which provides the values of three different functions (the last three columns of the table) depending upon the truth values of two simple propositions (the first two columns of the table):
| $p$ | $q$ | $p\cdot q$ | $p+q$ | $p\cdot (p+q)$ |
|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 |
| 1 | 0 | 0 | 1 | 1 |
| 0 | 1 | 0 | 1 | 0 |
| 0 | 0 | 0 | 0 | 0 |
Table 1. Examples of the truth value system for two elementary propositions and some of their combinations. The possible values of p and q are given in the first two columns. The resulting values of different functions of these propositions are provided in the remaining columns.
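The entries in Table 1 can be checked mechanically. Here is a minimal sketch of that check (my illustration of the truth-value method, not code from any cited source); note that the column for $p\cdot (p+q)$ comes out identical to the column for $p$ itself, an instance of the absorption law.

```python
print("p q | p.q  p+q  p.(p+q)")
for p in (1, 0):
    for q in (1, 0):
        conjunction = p * q            # p . q as multiplication
        disjunction = max(p, q)        # p + q, read inclusively
        combination = p * disjunction  # p . (p + q)
        print(p, q, "|", conjunction, "  ", disjunction, "  ", combination)
```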
Truth-value systems result in a surprisingly simple approach to defining basic or primitive logical functions. When the propositions p and q are interpreted as being only true or false, there are only four possible combinations of these two propositions, i.e., the first two columns of Table 1. A primitive function is one that is defined over p and q and that takes on a truth value for each of these four combinations.
Given that in a truth-value system a function can only take on the value of 0 or 1, there are only 16 different primitive functions that can be defined for combinations of the binary inputs p and q (Ladd, 1883). These primitive functions are provided in Table 2; each row of the table shows the truth values of one function for each combination of the inputs. An example logical notation for each function is provided in the last column of the table. This notation was used by Warren McCulloch (1988b), who attributed it to earlier work by Wittgenstein.
Not surprisingly, an historical trajectory can also be traced for the binary logic defined in Table $PageIndex{2}$. Peirce’s student Christine Ladd actually produced the first five columns of that table in her 1883 paper, including the conversion of the first four numbers in a row from a binary to a base 10 number. However, Ladd did not interpret each row as defining a logical function. Instead, she viewed the columns in terms of set notation and each row as defining a different “universe.” The interpretation of the first four columns as the truth values of various logical functions arose later with the popularization of truth tables (Post, 1921; Wittgenstein, 1922).
Table 2-2. Truth tables for all possible functions of pairs of propositions. Each function has a truth value for each possible combination of the truth values of $p$ and $q$, given in the first four columns of the table. The Number column converts the first four values in a row into a binary number (Ladd, 1883). The logical notation for each function is taken from Warren McCulloch (1988b).
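Because the full contents of Table 2-2 are not reproduced here, a short sketch can stand in for the counting argument. The following Python fragment (my illustration; McCulloch's notation for each function is omitted) enumerates all 2^4 = 16 possible output columns and, following Ladd (1883), treats each column as a binary number:

```python
# Why Table 2-2 has exactly 16 rows: a two-input function assigns 0 or 1
# to each of the four input combinations, so there are 2**4 = 16 possible
# assignments.

inputs = [(1, 1), (1, 0), (0, 1), (0, 0)]

for n in range(16):
    # Bit i of n is the function's output for inputs[i], most significant first.
    column = [(n >> (3 - i)) & 1 for i in range(4)]
    print("outputs:", column, " number:", n)
```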
Truth tables, and the truth-value system that they support, are very powerful. They can be used to determine whether any complex expression, based on combinations of primitive propositions and primitive logical operations, is true or false (Lewis, 1932). In the next section we see the power of the simple binary truth-value system, because it is the basis of the modern digital computer. We also see that bringing this system to life in a digital computer leads to the conclusion that one must use more than one vocabulary to explain logical devices.
2.04 From the Formal to the Physical
The short story The Dreams in the Witch-house by Lovecraft (1933) explored the link between mathematics and magic. The story explained how a student discovers that the act of writing out mathematical equations can alter reality. This alteration provided an explanation of how the accused Salem witch Keziah Mason escaped her seventeenth-century captors:
She had told Judge Hathorne of lines and curves that could be made to point out directions leading through the walls of space to other spaces and beyond. . . . Then she had drawn those devices on the walls of her cell and vanished. (Lovecraft, 1933, p. 140)
This strange link between the formal and the physical was also central to another paper written in the same era as Lovecraft's story. The author was Claude Shannon, and the paper's title was "A symbolic analysis of relay and switching circuits" (Shannon, 1938). However, his was not a work of fiction. Instead, it was a brief version of what is now known as one of the most important master's theses ever written (Goldstine, 1993). It detailed the link between Boolean algebra and electrical circuits, and showed how mathematical logic could be used to design, test, and simplify circuits. "The paper was a landmark in that it helped to change digital circuit design from an art to a science" (p. 120).
Shannon had a lifelong interest in both mathematics and mechanics. While his most influential papers were mathematical in focus (Shannon, 1938, 1948), he was equally famous for his tinkering (Pierce, 1993). His mechanical adeptness led to the invention of a number of famous devices, including Theseus, a mechanical maze-solving mouse. Later in his career Shannon seemed to take more pride in the gadgets that he had created and collected than in his numerous impressive scientific awards (Horgan, 1992).
Shannon’s combined love of the mathematical and the mechanical was evident in his education: he completed a double major in mathematics and electrical engineering at the University of Michigan (Calderbank & Sloane, 2001). In 1936, he was hired as a research assistant at MIT, working with the differential analyzer of Vannevar Bush. This machine was a pioneering analog computer, a complex array of electrical motors, gears, and shafts that filled an entire room. Its invention established Bush as a leader in electrical engineering as well as a pioneer of computing (Zachary, 1997). Bush, like Shannon, was enamored of the link between the formal and the physical. The sight of the differential analyzer at work fascinated Bush “who loved nothing more than to see things work. It was only then that mathematics—his sheer abstractions—came to life” (Zachary, 1997, p. 51).
Because of his work with Bush's analog computer, Shannon was prepared to bring another mathematical abstraction to life when the opportunity arose. The differential analyzer had to be physically reconfigured for each problem that was presented to it, which in part required configuring circuits that involved more than one hundred electromechanical relays, which were used as switches. In the summer of 1937, Shannon worked at Bell Labs and saw that engineers there were confronted with designing more complex systems that involved thousands of relays. At the time, this was laborious work that was done by hand. Shannon wondered if there was a more efficient approach. He discovered one when he realized that there was a direct mapping between switches and Boolean algebra, to which Shannon had been exposed in his undergraduate studies.
An Internet search will lead to many websites suggesting that Shannon recognized that the opening or closing of a switch could map onto the notions of "false" or "true." Actually, Shannon's insight involved the logical properties of combinations of switches. In an interview that originally appeared in Omni magazine in 1987, he noted "It's not so much that a thing is 'open' or 'closed,' the 'yes' or 'no' that you mentioned. The real point is that two things in series are described by the word 'and' in logic, so you would say this 'and' this, while two things in parallel are described by the word 'or'" (Liversidge, 1993).

In particular, Shannon (1938) viewed a switch (Figure 2-1A) as a source of impedance: when the switch was closed, current could flow and the impedance was 0, but when the switch was open (as illustrated in the figure) the impedance was infinite; Shannon used the symbol 1 to represent this state. As a result, if two switches were connected in series (Figure 2-1B), current would only flow if both switches were closed. Shannon represented this as the sum x + y. In contrast, if switch x and switch y were connected in parallel (Figure 2-1C), then current would flow through the circuit if either (or both) of the switches were closed. Shannon represented this circuit as the product xy.

Shannon's (1938) logical representation is a variation of the two-valued logic that was discussed earlier. The Boolean version of this logic represented false with 0, true with 1, or with addition, and and with multiplication. Shannon's version represented false with 1, true with 0, or with multiplication, and and with addition. But because Shannon's reversal of the traditional logic is complete, the two are equivalent. Shannon noted that the basic properties of the two-valued logic were true of his logical interpretation of switches: "Due to this analogy any theorem of the calculus of propositions is also a true theorem if interpreted in terms of relay circuits" (p. 714).
Figure 2-1. (A) An electrical switch, labelled x. (B) Switches x and y in series. (C) Switches x and y in parallel.

The practical implication of Shannon's (1938) paper was that circuit design and testing were no longer restricted to hands-on work in the physical domain. Instead, one could use pencil and paper to manipulate symbols using Boolean logic, designing a circuit that could be proven to generate the desired input-output behaviour. Logical operations could also be used to ensure that the circuit was as simple as possible by eliminating unnecessary logical terms: "The circuit may then be immediately drawn from the equations" (p. 713). Shannon illustrated this technique with examples that included a "selective circuit" that would permit current when 1, 3, or 4—but not 0 or 2—of its relays were closed, as well as an electric combination lock that would only open when its 5 switches were depressed in a specific order.

Amazingly, Shannon was not the first to see that electrical circuits were logical in nature (Burks, 1975)! In 1886, Charles Peirce wrote a letter to his student Allan Marquand suggesting how the latter's logic machine (Marquand, 1885) could be improved by replacing its mechanical components with electrical ones. Peirce provided diagrams of a serial 3-switch circuit that represented logical conjunction (and) and a parallel 3-switch circuit that represented logical disjunction (or). Peirce's nineteenth-century diagrams would not have been out of place in Shannon's twentieth-century paper.
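The switch algebra just described is easy to express in a few lines of code. The sketch below (my illustration, using Shannon's conventions as summarized above) represents a closed switch as 0 and an open switch as 1, composes switches in series and in parallel, and checks the duality with Boolean AND and OR:

```python
# Shannon's (1938) switch algebra: 0 denotes a closed switch (current flows),
# 1 an open switch (infinite impedance). Series combination is written as a
# sum, parallel combination as a product.

def series(x, y):
    # Current flows only if both switches are closed (0 + 0).
    return min(x + y, 1)

def parallel(x, y):
    # Current flows if either switch is closed.
    return x * y

# Reading 0 as "true" (conducting), series behaves as AND and parallel as OR,
# which is the complete reversal of Boolean logic noted in the text.
for x in (0, 1):
    for y in (0, 1):
        assert (series(x, y) == 0) == ((x == 0) and (y == 0))
        assert (parallel(x, y) == 0) == ((x == 0) or (y == 0))
```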
In Lovecraft’s (1933) story, the witch Keziah “might have had excellent reasons for living in a room with peculiar angles; for was it not through certain angles that she claimed to have gone outside the boundaries of the world of space we know?” Shannon’s (1938) scholarly paper led to astonishing conclusions for similar reasons: it detailed equivalence between the formal and the physical. It proved that electric circuits could be described in two very different vocabularies: one the physical vocabulary of current, contacts, switches and wires; the other the abstract vocabulary of logical symbols and operations.
2.05 Multiple Procedures and Architectures
According to a Chinese proverb, we all like lamb, but each has a different way to cook it. This proverb can be aptly applied to the circuits of switches for which Shannon (1938) developed a logical interpretation. Any of these circuits can be described as defining a logical function that maps inputs onto an output: the circuit outputs a current (or not) depending on the pattern of currents controlled by one or more switches that flow into it. However, just like lamb, there are many different ways to “cook” the input signals to produce the desired output. In short, many different circuits can be constructed to compute the same input-output function.
To illustrate this point, let us begin by considering Shannon’s (1938) selective circuit, which would be off when 0 or 2 of its 4 relays were closed, but which would be on when any other number of its relays was closed. In Shannon’s original formulation, 20 components—an arrangement of 20 different switches—defined a circuit that would behave in the desired fashion. After applying logical operations to simplify the design, Shannon reduced the number of required components from 20 to 14. That is, a smaller circuit that involved an arrangement of only 14 different switches delivered the same input-output behaviour as did the 20-switch circuit.
When these two different versions of the selective circuit are compared, the result of the comparison depends on the perspective taken. On the one hand, they are quite different: they involve different numbers of components, related to one another by completely different patterns of wiring. On the other hand, in spite of these obvious differences in details, at a more abstract level the two designs are identical, in the sense that both produce the same input-output mapping. That is, if one built a truth table for either circuit that listed the circuit's conductivity (output) as a function of all possible combinations of its 4 relays (inputs), the two truth tables would be identical. One might say that the two circuits use markedly different procedures (i.e., arrangements of internal components) to compute the same input-output function. They generate the same behaviour, but for different reasons.
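Shannon's 20-switch and 14-switch selective circuits are too large to reproduce here, but a toy example in the same spirit (my own, not Shannon's) makes the point. By the absorption law, the function p·(p + q) from Table 2-1 has the same truth table as p alone, so a two-gate circuit and a bare wire are procedurally different yet computationally identical:

```python
# Two "circuits" with different internal arrangements but identical
# input-output behaviour.

def circuit_a(p, q):
    # Two gates: an OR feeding an AND, computing p.(p + q).
    return p & min(p + q, 1)

def circuit_b(p, q):
    # No gates at all: just the wire carrying p.
    return p

table_a = [circuit_a(p, q) for p in (0, 1) for q in (0, 1)]
table_b = [circuit_b(p, q) for p in (0, 1) for q in (0, 1)]
assert table_a == table_b   # same function, different procedure
```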
Comparisons between different devices are further complicated by introducing the notion of an architecture (Brooks, 1962). In computer science, the term architecture was originally used by Frederick P. Brooks Jr., a pioneering force in the creation of IBM’s early computers. As digital computers evolved, computer designers faced changing constraints imposed by new hardware technologies. This is because new technologies defined anew the basic information processing properties of a computer, which in turn determined what computers could and could not do. A computer’s architecture is its set of basic information processing properties (Blaauw & Brooks, 1997, p. 3): “The architecture of a computer system we define as the minimal set of properties that determine what programs will run and what results they will produce.”
The two different versions of Shannon’s (1938) selective circuit were both based on the same architecture: the architecture’s primitives (its basic components) were parallel and serial combinations of pairs of switches. However, other sets of primitives could be used.
An alternative architecture could use a larger number of what Shannon (1938) called special types of relays or switches. For instance, we could take each of the 16 logical functions listed in Table 2-2 and build a special device for each. Each device would take two currents as input, and would convert them into an appropriate output current. For example, the XOR device would only deliver a current if only one of its input lines was active; it would not deliver a current if both its input lines were either active or inactive—behaving exactly as it is defined in Table 2-2. It is easy to imagine building some switching circuit that used all of these logic gates as primitive devices; we could call this imaginary device “circuit x.”
The reason that the notion of architecture complicates (or enriches!) the comparison of devices is that the same circuit can be created from different primitive components. Let us define one additional logic gate, the NOT gate, which does not appear in Table 2-2 because it has only one input signal. The NOT gate reverses or inverts the signal that is sent into it. If a current is sent into a NOT gate, then the NOT gate does not output a current. If a current is not sent into a NOT gate, then the gate outputs a current. The first NOT gate—the first electromechanical relay—was invented by American physicist Joseph Henry in 1835. In a class demonstration, Henry used an input signal to turn off an electromagnet from a distance, startling his class when the large load lifted by the magnet crashed to the floor (Moyer, 1997).
The NOT gate is important, because it can be used to create any of the Table 2-2 operations when combined with two other operators that are part of that table: AND, which McCulloch represented as p·q, and OR, which McCulloch represented as p ∨ q. To review, if the only special relays available are NOT, AND, and OR, then one can use these three primitive logic blocks to create any of the other logical operations that are given in Table 2-2 (Hillis, 1998). "This idea of a universal set of blocks is important: it means that the set is general enough to build anything" (p. 22).
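As a small illustration of this universality claim (my sketch, not Hillis's), the code below composes the XOR operation, which appears in Table 2-2, from nothing but NOT, AND, and OR:

```python
# Building XOR ("one or the other, but not both") from the universal set.

def NOT(p):    return 1 - p
def AND(p, q): return p * q
def OR(p, q):  return min(p + q, 1)

def XOR(p, q):
    # (p AND NOT q) OR (NOT p AND q)
    return OR(AND(p, NOT(q)), AND(NOT(p), q))

for p in (0, 1):
    for q in (0, 1):
        assert XOR(p, q) == (1 if p != q else 0)
```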
To consider the implications of the universal set of logic gates to comparing circuits, let us return to our imaginary circuit x. We could have two different versions of this circuit, based on different architectures. In one, the behaviour of the circuit would depend upon wiring up some arrangement of all the various logical operations given in Table 2-2, where each operation is a primitive—that is, carried out by its own special relay. In the other, the arrangement of the logical operations would be identical, but the logical operations in Table 2-2 would not be primitive. Instead, we would replace each special relay from the first circuit with a circuit involving NOT, AND, and OR that would produce the desired behaviour.
Let us compare these two different versions of circuit x. At the most abstract level, they are identical, because they are generating the same input-output behaviour. At a more detailed level—one that describes how this behaviour is generated in terms of how the logical operations of Table 2-2 are combined together—the two are also identical. That is, the two circuits are based on the same combinations of the Table 2-2 operations. However, at a still more detailed level, the level of the architecture, the two circuits are different. For the first circuit, each logical operation from Table 2-2 would map onto a physical device, a special relay. This would not be true for the second circuit. For it, each logical operation from Table 2-2 could be decomposed into a combination of simpler logical operations—NOT, AND, OR—which in turn could be implemented by simple switches. The two circuits are different in the sense that they use different architectures, but these different architectures are used to create the same logical structure to compute the same input-output behaviour.
We now can see that Shannon's (1938) discoveries have led us to a position where we can compare two different electrical circuits by asking three different questions. First, do the two circuits compute the same input-output function? Second, do the two circuits use the same arrangement of logical operations to compute this function? Third, do the two circuits use the same architecture to bring these logical operations to life? Importantly, the comparison between two circuits can lead to affirmative answers to some of these questions, and negative answers to others. For instance, Shannon's two selective circuits use different arrangements of logical operations, but are based on the same architecture, and compute the same input-output function. The two versions of our imaginary circuit x compute the same input-output function, and use the same arrangement of logical operations, but are based on different architectures.
Ultimately, all of the circuits we have considered to this point are governed by the same physical laws: the laws of electricity. However, we will shortly see that it is possible to have two systems that have affirmative answers to the three questions listed in the previous paragraph, but are governed by completely different physical laws.
2.06 Relays and Multiple Realizations
Many of the ideas that we have been considering in this chapter have stemmed from Shannon’s (1938) logical interpretation of relay circuits. But what is a relay? A relay is essentially a remote-controlled switch that involves two separate circuits (Gurevich, 2006). One of these circuits involves a source of current, which can be output through the relay’s drain. The second circuit controls the relay’s gate. In an electromechanical relay, the gate is an electromagnet (Figure 2-2). When a signal flows through the gate, the magnet becomes active and pulls a switch closed so that the source flows through the drain. When the gate’s signal is turned off, a spring pulls the switch open, breaking the first circuit, and preventing the source from flowing through the drain.
Figure 2-2. A relay, in which a signal through an electromagnetic gate controls a switch that determines whether the current from the source will flow through the drain.
The relay shown in Figure 2-2 can be easily reconfigured to convert it into a NOT gate. This is accomplished by having the switch between the source and the drain pulled open by the gate, and having it closed by a spring when the gate is not active. This was how, in 1835, Joseph Henry turned the power off to a large electromagnet, causing it to drop its load and startle his class (Moyer, 1997).
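The two wirings just described can be captured in a small simulation (my sketch, based on the description above): an ordinary relay conducts when its gate is active, while the reconfigured relay conducts only when its gate is inactive, which is exactly the behaviour of a NOT gate:

```python
# A relay couples two circuits: the gate signal controls whether current
# from the source reaches the drain.

def relay(gate, source=1):
    # Ordinary relay: the magnet pulls the switch closed, so the drain
    # carries current only when the gate is active.
    return source if gate else 0

def not_gate(gate, source=1):
    # Reconfigured relay: the magnet pulls the switch open, and a spring
    # closes it when the gate is inactive (Henry's 1835 demonstration).
    return 0 if gate else source

for g in (0, 1):
    print("gate =", g, " relay ->", relay(g), " NOT ->", not_gate(g))
```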
The type of relay shown in Figure 2-2 was critically important to the development of the telegraph in the mid-nineteenth century. Telegraphs worked by sending electrical pulses—dots and dashes—long distances over copper wire. As the signals travelled, they weakened in intensity. In order to permit a message to be communicated over a long distance, the signal would have to be re-amplified at various points along its journey. Relays were the devices that accomplished this. The weak incoming signals were still strong enough to activate a relay’s magnet. When this happened, a stronger current—provided by the source—was sent along the telegraph wire, which was connected to the relay’s drain. The relay mechanism ensured that the pattern of pulses being sent along the drain matched the pattern of pulses that turned the gate on and off. That is, the periods of time during which the relay’s switch was closed matched the durations of the dots and dashes that operated the relay’s magnet. The ability of a telegraph company to communicate messages over very long distances depended completely on the relays that were interspersed along the company’s network.
This dependence upon relays played a critical role in the corporate warfare between competing telegraph companies. In 1874, the only relay in use in the telegraph industry was an electromagnetic one invented by Charles Grafton Page; the patent for this device was owned by Western Union. An imminent court decision was going to prevent the Automatic Telegraph Company from using this device in its own telegraph system because of infringement on the patent.
The Automatic Telegraph Company solved this problem by commissioning Thomas Edison to invent a completely new relay, one that avoided the Page patent by not using magnets (Josephson, 1961). Edison used a rotating chalk drum to replace the electromagnet. This is because Edison had earlier discovered that the friction of a wire dragging along the drum changed when current flowed through the wire. This change in friction was sufficient to be used as a signal that could manipulate the gate controlling the circuit between the source and the drain. Edison’s relay was called a motograph.
Edison’s motograph is of interest to us when it is compared to the Page relay. On the one hand, the two devices performed the identical function; indeed, Edison’s relay fit exactly into the place of the page relay:
First he detached the Page sounder from the instrument, an intensely interested crowd watching his every movement. From one of his pockets he took a pair of pliers and fitted [his own motograph relay] precisely where the Page sounder had been previously connected, and tapped the key. The clicking—and it was a joyful sound—could be heard all over the room. There was a general chorus of surprise. ‘He’s got it! He’s got it!’ (Josephson, 1961, p. 118)
On the other hand, the physical principles governing the two relays were completely different. The key component of one was an electromagnet, while the critical part of the other was a rotating drum of chalk. In other words, the two relays were functionally identical, but physically different. As a result, if one were to describe the purpose, role, or function of each relay, then the Page relay and the Edison motograph would be given the same account. However, if one were to describe the physical principles that accomplished this function, the account of the Page relay would be radically different from the account of the Edison motograph—so different, in fact, that the same patent did not apply to both. Multiple realization is the term used to recognize that different physical mechanisms can bring identical functions to life.
The history of advances in communications and computer technology can be described in terms of evolving multiple realizations of relays and switches. Electromagnetic relays were replaced by vacuum tubes, which could be used to rapidly switch currents on and off and to amplify weak signals (Reid, 2001). Vacuum tubes were replaced by transistors built from semiconducting substances such as silicon. Ultimately, transistors were miniaturized to the point that millions could be etched into a single silicon chip.
One might suggest that the examples listed above are not as physically different as intended, because all are electrical in nature. But relays can be implemented in many nonelectrical ways as well. For example, nanotechnology researchers are exploring various molecular ways in which to create logic gates (Collier et al., 1999; Okamoto, Tanaka, & Saito, 2004). Similarly, Hillis (1998) described in detail a hydraulic relay, in which the source and drain involve a high-pressure water line and a weaker input flow controls a valve. He pointed out that his hydraulic relay is functionally identical to a transistor, and that it could therefore be used as the basic building block for a completely hydraulic computer. “For most purposes, we can forget about technology [physical realization]. This is wonderful, because it means that almost everything that we say about computers will be true even when transistors and silicon chips become obsolete” (p. 19).
Multiple realization is a key concept in cognitive science, particularly in classical cognitive science, which is the topic of Chapter 3. Multiple realization is in essence an argument that while an architectural account of a system is critical, it really doesn’t matter what physical substrate is responsible for bringing the architecture into being. Methodologically this is important, because it means that computer simulation is a viable tool in cognitive science. If the physical substrate doesn’t matter, then it is reasonable to emulate the brain-based architecture of human cognition using completely different hardware—the silicon chips of the digital computer. Theoretically, multiple realization is also important because it raises the possibility that non-biological systems could be intelligent and conscious. In a famous thought experiment (Pylyshyn, 1980), each neuron in a brain is replaced with a silicon chip that is functionally equivalent to the replaced neuron. Does the person experience any changes in consciousness because of this change in hardware? The logical implication of multiple realization is that no change should be experienced. Indeed, the assumption that intelligence results from purely biological or neurological processes in the human brain may simply be a dogmatic attempt to make humans special when compared to lower animals or machines (Wiener, 1964, p. 31): “Operative images, which perform the functions of their original, may or may not bear a pictorial likeness to it. Whether they do or not, they may replace the original in its action, and this is a much deeper similarity.”
2.07 Multiple Levels of Investigation and Explanation
Imagine bringing several different calculating devices into a class, with the goal of explaining how they work. How would you explain those devices? The topics that have been covered in the preceding pages indicate that several different approaches could—and likely should—be taken.
One approach would be to explain what was going on at a physical or implementational level. For instance, if one of the devices was an old electronic calculator, then you would feel comfortable in taking it apart to expose its internal workings. You would likely see an internal integrated circuit. You might explain how such circuits work by talking about the properties of semiconductors and how different layers of a silicon semiconductor can be doped with elements like arsenic or boron to manipulate conductivity (Reid, 2001) in order to create components like transistors and resistors.
Interestingly, the physical account of one calculator will not necessarily apply to another. Charles Babbage’s difference engine was an automatic calculator, but was built from a set of geared columns (Swade, 1993). Slide rules were the dominant method of calculation prior to the 1970s (Stoll, 2006) and involved aligning rulers that represented different number scales. The abacus is a set of moveable beads mounted on vertical bars and can be used by experts to perform arithmetic calculations extremely quickly (Kojima, 1954). The physical accounts of each of these three calculating devices would be quite different from the physical account of any electronic calculator.
A second approach to explaining a calculating device would be to describe its basic architecture, which might be similar for two different calculators that have obvious physical differences. For example, consider two different machines manufactured by Victor. One, the modern 908 pocket calculator, is a solar-powered device that is approximately 3" × 4" × ½" in size and uses a liquid crystal display. The other is the 1800 desk machine, which was introduced in 1971 with the much larger dimensions of 9" × 11" × 4½". One reason for the 1800’s larger size is the nature of its power supply and display: it plugged into a wall socket, and it had to be large enough to enclose two very large (inches-high!) capacitors and a transformer. It also used a gas discharge display panel instead of liquid crystals. In spite of such striking physical differences between the 1800 and the 908, the “brains” of each calculator are integrated circuits that apply arithmetic operations to numbers represented in binary format. As a result, it would not be surprising to find many similarities between the architectures of these two devices.
Of course, there can be radical differences between the architectures of different calculators. The difference engine did not use binary numbers, instead representing values in base 10 (Swade, 1993). Claude Shannon's THROBAC computer's input, output, and manipulation processes were all designed for quantities represented as Roman numerals (Pierce, 1993). Given that they were designed to work with different number systems, it would be surprising to find many similarities between the architectures of THROBAC, the difference engine, and the Victor electronic machines.
A third approach to explaining various calculators would be to describe the procedures or algorithms that these devices use to accomplish their computations. For instance, what internal procedures are used by the various machines to manipulate numerical quantities? Algorithmic accounts could also describe more external elements, such as the activities that a user must engage in to instruct a machine to perform an operation of interest. Different electronic calculators may require different sequences of key presses to compute the same equation.
For example, my own experience with pocket calculators involves typing in an arithmetic expression by entering symbols in the same order in which they would be written down in a mathematical expression. For instance, to subtract 2 from 4, I would enter “4 – 2 =” and expect to see 2 on display as the result. However, when I tested to see if the Victor 1800 that I found in my lab still worked, I couldn’t type that equation in and get a proper response. This is because this 1971 machine was designed to be easily used by people who were more familiar with mechanical adding machines. To subtract 2 from 4, the following expression had to be entered: “4 + 2 –”. Apparently the “=” button is only used for multiplication and division on this machine!
More dramatic procedural differences become evident when comparing devices based on radically different architectures. A machine such as the Victor 1800 adds two numbers together by using its logic gates to combine two memory registers that represent digits in binary format. In contrast, Babbage’s difference engine represents numbers in decimal format, where each digit in a number is represented by a geared column. Calculations are carried out by setting up columns to represent the desired numbers, and then by turning a crank that rotates gears. The turning of the crank activates a set of levers and racks that raise and lower and rotate the numerical columns. Even the algorithm for processing columns proceeds in a counterintuitive fashion. During addition, the difference engine first adds the odd-numbered columns to the even-numbered columns, and then adds the even-numbered columns to the odd-numbered ones (Swade, 1993).
A fourth approach to explaining the different calculators would be to describe them in terms of the relation between their inputs and outputs. Consider two of our example calculating devices, the Victor 1800 and Babbage's difference engine. We have already noted that they differ physically, architecturally, and procedurally. Given these differences, what would classify both of these machines as calculating devices? The answer is that they are both calculators in the sense that they generate the same input-output pairings. Indeed, all of the different devices that have been mentioned in the current section are considered to be calculators for this reason. In spite of the many-levelled differences between the abacus, electronic calculator, difference engine, THROBAC, and slide rule, at a very abstract level—the level concerned with input-output mappings—these devices are equivalent.
To summarize the discussion to this point, how might one explain calculating devices? There are at least four different approaches that could be taken, and each approach involves answering a different question about a device. What is its physical nature? What is its architecture? What procedures does it use to calculate? What input-output mapping does it compute?
Importantly, answering each question involves using very different vocabularies and methods. The next few pages explore the diversity of these vocabularies. This diversity, in turn, accounts for the interdisciplinary nature of cognitive science.
2.08 Formal Accounts of Input-Output Mappings
For a cyberneticist, a machine was simply a device for converting some input into some output—and nothing more (Ashby, 1956, 1960; Wiener, 1948, 1964). A cyberneticist would be concerned primarily with describing a machine such as a calculating device in terms of its input-output mapping. However, underlying this simple definition was a great deal of complexity.
First, cybernetics was not interested in the relation between a particular input and output, but instead was interested in a general account of a machine’s possible behaviour “by asking not ‘what individual act will it produce here and now?’ but ‘what are all the possible behaviours that it can produce?’” (Ashby, 1956, p. 3).
Second, cybernetics wanted not only to specify what possible input-output pairings could be generated by a device, but also to specify what behaviours could not be generated, and why: "Cybernetics envisages a set of possibilities much wider than the actual, and then asks why the particular case should conform to its usual particular restriction" (Ashby, 1956, p. 3).
Third, cybernetics was particularly concerned about machines that were nonlinear, dynamic, and adaptive, which would result in very complex relations between input and output. The nonlinear relationships between four simple machines that interact with each other in a network are so complex that they are mathematically intractable (Ashby, 1960).
Fourth, cybernetics viewed machines in a general way that not only ignored their physical nature, but was not even concerned with whether a particular machine had been (or could be) constructed or not. “What cybernetics offers is the framework on which all individual machines may be ordered, related and understood” (Ashby, 1956, p. 2).
How could cybernetics study machines in such a way that these four different perspectives could be taken? To accomplish this, the framework of cybernetics was exclusively mathematical. Cyberneticists investigated the input-output mappings of machines by making general statements or deriving proofs that were expressed in some logical or mathematical formalism.
By the late 1950s, research in cybernetics proper had begun to wane (Conway & Siegelman, 2005); at this time cybernetics began to evolve into the modern field of cognitive science (Boden, 2006; Gardner, 1984; Miller, 2003). Inspired by advances in digital computers, cognitive science was not interested in generic “machines” as such, but instead focused upon particular devices that could be described as information processors or symbol manipulators.
Given this interest in symbol manipulation, one goal of cognitive science is to describe a device of interest in terms of the specific information processing problem that it is solving. Such a description is the result of performing an analysis at the computational level (Dawson, 1998; Marr, 1982; Pylyshyn, 1984).
A computational analysis is strongly related to the formal investigations carried out by a cyberneticist. At the computational level of analysis, cognitive scientists use formal methods to prove what information processing problems a system can—and cannot—solve. The formal nature of computational analyses lends them particular authority: "The power of this type of analysis resides in the fact that the discovery of valid, sufficiently universal constraints leads to conclusions . . . that have the same permanence as conclusions in other branches of science" (Marr, 1982, p. 331).
However, computational accounts do not capture all aspects of information processing. A proof that a device is solving a particular information processing problem is only a proof concerning the device’s input-output mapping. It does not say what algorithm is being used to compute the mapping or what physical aspects of the device are responsible for bringing the algorithm to life. These missing details must be supplied by using very different methods and vocabularies.
2.09 Algorithms from Artifacts
Neuroscientist Valentino Braitenberg imagined a world comprising domains of both water and land (Braitenberg, 1984). In either of these domains one would find a variety of agents who sense properties of their world, and who use this information to guide their movements through it. Braitenberg called these agents “vehicles.” In Braitenberg’s world of vehicles, scientists encounter these agents and attempt to explain the internal mechanisms that are responsible for their diverse movements. Many of these scientists adopt what Braitenberg called an analytic perspective: they infer internal mechanisms by observing how external behaviours are altered as a function of specific changes in a vehicle’s environment. What Braitenberg called analysis is also called reverse engineering.
We saw earlier that a Turing machine generates observable behaviour as it calculates the answer to a question. A description of a Turing machine’s behaviours— be they by design or by artifact—would provide the sequence of operations that were performed to convert an input question into an output answer. Any sequence of steps which, when carried out, accomplishes a desired result is called an algorithm (Berlinski, 2000). The goal, then, of reverse engineering a Turing machine or any other calculating device would be to determine the algorithm it was using to transform its input into a desired output.
Calculating devices exhibit two properties that make their reverse engineering difficult. First, they are often what are called black boxes. This means that we can observe external behaviour, but we are unable to directly observe internal properties. For instance, if a Turing machine was a black box, then we could observe its movements along, and changing of symbols on, the tape, but we could not observe the machine state of the machine head.
Second, and particularly if we are faced with a black box, another property that makes reverse engineering challenging is that there is a many-to-one relationship between algorithm and mapping. This means that, in practice, a single input-output mapping can be established by one of several different algorithms. For example, there are so many different methods for sorting a set of items that hundreds of pages are required to describe the available algorithms (Knuth, 1997). In principle, an infinite number of different algorithms exist for computing a single input-output mapping of interest (Johnson-Laird, 1983).
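Sorting makes this many-to-one relationship easy to demonstrate. In the sketch below (a standard illustration, not drawn from Knuth's text), selection sort and merge sort proceed by entirely different internal steps yet define exactly the same input-output mapping:

```python
# Two algorithms, one input-output function.

def selection_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        # Repeatedly swap the smallest remaining item into place.
        j = min(range(i, len(xs)), key=lambda k: xs[k])
        xs[i], xs[j] = xs[j], xs[i]
    return xs

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    # Recursively sort each half, then merge the sorted halves.
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

data = [3, 1, 4, 1, 5, 9, 2, 6]
assert selection_sort(data) == merge_sort(data)  # identical mapping
```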
The problem with reverse engineering a black box is this: if there are potentially many different algorithms that can produce the same input-output mapping, then mere observations of input-output behaviour will not by themselves indicate which particular algorithm is used in the device's design. However, reverse engineering a black box is not impossible. In addition to the behaviours that it was designed to produce, the black box will also produce artifacts. Artifacts can provide a great deal of information about internal and unobservable algorithms.
Imagine that we are faced with reverse engineering an arithmetic calculator that is also a black box. Some of the artifacts of this calculator provide relative complexity evidence (Pylyshyn, 1984). To collect such evidence, one could conduct an experiment in which the problems presented to the calculator were systematically varied (e.g., by using different numbers) and measurements were made of the amount of time taken for the correct answer to be produced. To analyze this relative complexity evidence, one would explore the relationship between characteristics of problems and the time required to solve them.
For instance, suppose that one observed a linear increase in the time taken to solve the problems 9 × 1, 9 × 2, 9 × 3, et cetera. This could indicate that the device was performing multiplication by doing repeated addition (9, 9 + 9, 9 + 9 + 9, and so on) and that every “+ 9” operation required an additional constant amount of time to be carried out. Psychologists have used relative complexity evidence to investigate cognitive algorithms since Franciscus Donders invented his subtractive method in 1869 (Posner, 1978).
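A small simulation (my illustration of the hypothetical repeated-addition device, not an actual experiment) shows why this signature is diagnostic: if multiplication is implemented as repeated addition, the count of primitive steps, and hence the response time, grows linearly with the multiplier:

```python
# Relative complexity evidence from a simulated black box.

def multiply_by_repeated_addition(a, b):
    total, steps = 0, 0
    for _ in range(b):
        total += a          # one primitive "+ a" operation
        steps += 1
    return total, steps

for b in range(1, 6):
    product, steps = multiply_by_repeated_addition(9, b)
    print(f"9 x {b} = {product}  ({steps} addition steps)")
```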
Artifacts can also provide intermediate state evidence (Pylyshyn, 1984). Intermediate state evidence is based upon the assumption that an input-output mapping is not computed directly, but instead requires a number of different stages of processing, with each stage representing an intermediate result in a different way. To collect intermediate state evidence, one attempts to determine the number and nature of these intermediate results.
For some calculating devices, intermediate state evidence can easily be collected. For instance, the intermediate states of the Turing machine's tape, the abacus' beads, or the difference engine's gears are in full view. For other devices, though, the intermediate states are hidden from direct observation. In this case, clever techniques must be developed to measure internal states as the device is presented with different inputs. One might measure changes in electrical activity in different components of an electronic calculator as it worked, in an attempt to acquire intermediate state evidence.
Artifacts also provide error evidence (Pylyshyn, 1984), which may also help to explore intermediate states. When extra demands are placed on a system’s resources, it may not function as designed, and its internal workings are likely to become more evident (Simon, 1969). This is not just because the overtaxed system makes errors in general, but because these errors are often systematic, and their systematicity reflects the underlying algorithm.
Because we rely upon their accuracy, we would hope that error evidence would be difficult to collect for most calculating devices. However, error evidence should be easily available for calculators that might be of particular interest to us: humans doing mental arithmetic. We might find, for instance, that overtaxed human calculators make mistakes by forgetting to carry values from one column of numbers to the next. This would provide evidence that mental arithmetic involved representing numbers in columnar form, and performing operations column by column (Newell & Simon, 1972). Very different kinds of errors would be expected if a different approach was taken to perform mental arithmetic, such as imagining and manipulating a mental abacus (Hatano, Miyake, & Binks, 1977).
In summary, discovering and describing what algorithm is being used to calculate an input-output mapping involves the systematic examination of behaviour. That is, one makes and interprets measurements that provide relative complexity evidence, intermediate state evidence, and error evidence. Furthermore, the algorithm that will be inferred from such measurements is in essence a sequence of actions or behaviours that will produce a desired result.
The discovery and description of an algorithm thus involves empirical methods and vocabularies, rather than the formal ones used to account for input-output regularities. Just as it would seem likely that input-output mappings would be the topic of interest for formal researchers such as cyberneticists, logicians, or mathematicians, algorithmic accounts would be the topic of interest for empirical researchers such as experimental psychologists.
The fact that computational accounts and algorithmic accounts are presented in different vocabularies suggests that they describe very different properties of a device. From our discussion of black boxes, it should be clear that a computational account does not provide algorithmic details: knowing what input-output mapping is being computed is quite different from knowing how it is being computed. In a similar vein, algorithmic accounts are silent with respect to the computation being carried out.
For instance, in Understanding Cognitive Science, Dawson (1998) provides an example machine table for a Turing machine that adds pairs of integers. Dawson also provides examples of questions to this device (e.g., strings of blanks, 0s, and 1s) as well as the answers that it generates. Readers of Understanding Cognitive Science can pretend to be the machine head by following the instructions of the machine table, using pencil and paper to manipulate a simulated ticker tape. In this fashion they can easily convert the initial question into the final answer—they fully understand the algorithm. However, they are unable to say what the algorithm accomplishes until they read further in the book.
2.10 Behaviour by Design and by Artifact
What vocabulary is best suited to answer questions about how a particular input-output mapping is calculated? To explore this question, let us consider an example calculating device, a Turing machine (Turing, 1936). This calculator processes symbols that are written on a ticker-tape memory divided into cells, where each cell can hold a single symbol. To use a Turing machine to add (Weizenbaum, 1976), a user would write a question on the tape, that is, the two numbers to be added together. They would be written in a format that could be understood by the machine. The Turing machine would answer the input question by reading and rewriting the tape. Eventually, it would write the sum of the two numbers on the tape—its answer—and then halt.
How does a Turing machine generate answers to the written questions? A Turing machine consists of a machine head whose actions are governed by a set of instructions called the machine table. The machine head will also be in one of a set of possible physical configurations called machine states. The machine head reads a symbol on the tape. This symbol, in combination with the current machine state, determines which machine table instruction to execute next. An instruction might tell the machine head to write a symbol, or to move one cell to the left or the right along the ticker tape. The instruction will also change the machine head's machine state.
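A minimal simulator makes these mechanics concrete. The sketch below uses an assumed toy machine table (not Dawson's addition machine): it scans right across a block of 1s and appends one more, computing the successor of a unary number:

```python
# A toy Turing machine: machine table entries map (state, symbol read)
# to (symbol to write, head movement, next state).

machine_table = {
    ("s0", "1"): ("1", +1, "s0"),    # scan right across the 1s
    ("s0", " "): ("1",  0, "halt"),  # hit a blank: write a 1 and halt
}

def run(tape, state="s0", head=0):
    tape = list(tape) + [" "]        # pad the tape with one blank cell
    while state != "halt":
        write, move, state = machine_table[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).strip()

print(run("111"))   # -> "1111"
```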
A Turing machine does not answer questions instantly. Instead, it takes its time, moving back and forth along the tape, reading and writing symbols as it works. A long sequence of actions might be observed and recorded, such as “First the machine head moves four cells to the right. Then it stops, and replaces the 1 on the tape with a 0. Then it moves three cells to the left.”
The record of the observed Turing machine behaviours would tell us a great deal about its design. Descriptions such as “When given Question A, the machine generated Answer X” would provide information about the input-output mapping that the Turing machine was designed to achieve. If we were also able to watch changes in machine states, more detailed observations would be possible, such as “If the machine head is in State 1 and reads a ‘1’ on the tape, then it moves one cell left and adopts State 6.” Such observations would provide information about the machine table that was designed for this particular device’s machine head.
Not all Turing machine behaviours occur by design; some behaviours are artifacts. Artifacts occur because of the device’s design but are not explicitly part of the design (Pylyshyn, 1980, 1984). They are unintentional consequences of the designed procedure.
For instance, the Turing machine takes time to add two numbers together; the time taken will vary from question to question. The amount of time taken to answer a question is a consequence of the machine table, but is not intentionally designed into it. The time taken is an artifact because Turing machines are designed to answer questions (e.g., “What is the sum of these two integers?”); they are not explicitly designed to answer questions in a particular amount of time.
Similarly, as the Turing machine works, the ticker tape adopts various intermediate states. That is, during processing the ticker tape will contain symbols that are neither the original question nor its eventual answer. Answering a particular question will produce a sequence of intermediate tape states; the sequence produced will also vary from question to question. Again, the sequence of symbol states is an artifact. The Turing machine is not designed to produce a particular sequence of intermediate states; it is simply designed to answer a particular question.
One might think that artifacts are not important because they are not explicit consequences of a design. However, in many cases artifacts are crucial sources of information that help us reverse engineer an information processor that is a “black box” because its internal mechanisms are hidden from view.
2.11 Architectures against Homunculi
We have described an algorithm for calculating an input-output mapping as a sequence of operations or behaviours. This description is misleading, though, because the notion of sequence gives the impression of a linear ordering of steps. However, we would not expect most algorithms to be linearly organized. For instance, connectionist cognitive scientists would argue that more than one step in an algorithm can be carried out at the same time (Feldman & Ballard, 1982). As well, most algorithms of interest to classical cognitive scientists would likely exhibit a markedly hierarchical organization (Miller, Galanter, & Pribram, 1960; Simon, 1969). In this section, I use the notion of hierarchical organization to motivate the need for an algorithm to be supported by an architecture.
What does it mean for an algorithm to be hierarchical in nature? To answer this question, let us again consider the situation in which behavioural measurements are being used to reverse engineer a calculating black box. Initial experiments could suggest that an input-output mapping is accomplished by an algorithm that involves three steps (Step 1 → Step 2 → Step 3). However, later studies could also indicate that each of these steps might themselves be accomplished by sub-algorithms.
For instance, it might be found that Step 1 is accomplished by its own four-step sub-algorithm (Step a → Step b → Step c → Step d). Even later it could be discovered that one of these sub-algorithms is itself the product of another sub-sub-algorithm. Such hierarchical organization is common practice in the development of algorithms for digital computers, where most programs are organized systems of functions, subfunctions, and sub-subfunctions. It is also a common characteristic of cognitive theories (Cummins, 1983).
The hierarchical organization of algorithms can pose a problem, though, if an algorithmic account is designed to explain a calculating device. Consider our example where Step 1 of the black box’s algorithm is explained by being hierarchically decomposed into the sub-algorithm “Step a → Step b → Step c → Step d.” On closer examination, it seems that nothing has really been explained at all. Instead, we have replaced Step 1 with a sequence of four new steps, each of which requires further explanation. If each of these further explanations is of the same type as the one to account for Step 1, then this will in turn produce even more steps requiring explanation. There seems to be no end to this infinite proliferation of algorithmic steps that are appearing in our account of the calculating device.
This situation is known as Ryle’s regress. The philosopher Gilbert Ryle raised it as a problem with the use of mentalistic terms in explanations of intelligence:
Must we then say that for the hero’s reflections how to act to be intelligent he must first reflect how best to reflect to act? The endlessness of this implied regress shows that the application of the criterion of appropriateness does not entail the occurrence of a process of considering this criterion. (Ryle, 1949, p. 31)
Ryle’s regress occurs when we explain outer intelligence by appealing to inner intelligence.
Ryle’s regress is also known as the homunculus problem, where a homunculus is an intelligent inner agent. The homunculus problem arises when one explains outer intelligence by appealing to what is in essence an inner homunculus. For instance, Fodor noted the obvious problems with a homuncular explanation of how one ties their shoes:
And indeed there would be something wrong with an explanation that said, 'This is the way we tie our shoes: we notify a little man in our head who does it for us.' This account invites the question: 'How does the little man do it?' but, ex hypothesi, provides no conceptual mechanisms for answering such questions. (Fodor, 1968a, p. 628)
Indeed, if one proceeds to answer the invited question by appealing to another homunculus within the “little man,” then the result is an infinite proliferation of homunculi.
To solve Ryle’s regress an algorithm must be analyzed into steps that do not require further decomposition in order to be explained. This means when some function is decomposed into a set of subfunctions, it is critical that each of the subfunctions be simpler than the overall function that they work together to produce (Cummins, 1983; Dennett, 1978; Fodor, 1968a). Dennett (1978, p. 123) noted that “homunculi are bogeymen only if they duplicate entire the talents they are rung in to explain.” Similarly, Fodor (1968a, p. 629) pointed out that “we refine a psychological theory by replacing global little men by less global little men, each of whom has fewer unanalyzed behaviors to perform than did his predecessors.”
If the functions produced in a first pass of analysis require further decomposition in order to be themselves explained, then the subfunctions that are produced must again be even simpler. At some point, the functions become so simple—the homunculi become so stupid—that they can be replaced by machines. This is because at this level all they do is answer “yes” or “no” to some straightforward question. “One discharges fancy homunculi from one’s scheme by organizing armies of such idiots to do the work” (Dennett, 1978, p. 124).
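A concrete sketch of such a decomposition (my example, not Fodor's or Dennett's) shows the regress bottoming out. Multi-digit addition is explained by a column-by-column sub-algorithm, which in turn rests on a single-digit primitive so simple that it could be replaced by a lookup table or a small logic circuit:

```python
# Discharging homunculi by decomposition.

def add_digit(a, b, carry):
    # Primitive "idiot": answers a fixed question about three small numbers.
    s = a + b + carry
    return s % 10, s // 10

def add_numbers(x, y):
    # Sub-algorithm: apply the primitive column by column, right to left.
    xs, ys = str(x)[::-1], str(y)[::-1]
    digits, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        a = int(xs[i]) if i < len(xs) else 0
        b = int(ys[i]) if i < len(ys) else 0
        d, carry = add_digit(a, b, carry)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return int("".join(reversed(digits)))

assert add_numbers(478, 356) == 834
```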
The set of subfunctions that exist at this final level of decomposition belongs to what computer scientists call the device's architecture (Blaauw & Brooks, 1997; Brooks, 1962; Dasgupta, 1989). The architecture defines what basic abilities are built into the device. For a calculating device, the architecture would specify three different types of components: the basic operations of the device, the objects to which these operations are applied, and the control scheme that decides which operation to carry out at any given time (Newell, 1980; Simon, 1969). To detail the architecture is to specify "what operations are primitive, how memory is organized and accessed, what sequences are allowed, what limitations exist on the passing of arguments and on the capacities of various buffers, and so on" (Pylyshyn, 1984, p. 92).
What is the relationship between an algorithm and its architecture? In general, the architecture provides the programming language in which an algorithm is written. “Specifying the functional architecture of a system is like providing a manual that defines some programming language. Indeed, defining a programming language is equivalent to specifying the functional architecture of a virtual machine” (Pylyshyn, 1984, p. 92).
This means that algorithms and architectures share many properties. Foremost of these is that they are both described as operations, behaviours, or functions, and not in terms of physical makeup. An algorithm is a set of functions that work together to accomplish a task; an architectural component is one of the simplest functions—a primitive operation—from which algorithms are composed. In order to escape Ryle’s regress, one does not have to replace an architectural function with its physical account. Instead, one simply has to be sure that such a replacement is available if one wanted to explain how the architectural component works. It is no accident that Pylyshyn (1984) uses the phrase functional architecture in the quote given above.
Why do we insist that the architecture is functional? Why don’t we appeal directly to the physical mechanisms that bring an architecture into being? Both of these questions are answered by recognizing that multiple physical realizations are possible for any functional architecture. For instance, simple logic gates are clearly the functional architecture of modern computers. But we saw earlier that functionally equivalent versions of these gates could be built out of wires and switches, vacuum tubes, semiconductors, or hydraulic valves.
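A brief sketch can make the point about multiple realization concrete (Python, purely illustrative). The two AND gates below have entirely different internal workings, yet they are functionally indistinguishable, which is all that matters at the level of the functional architecture:

def and_gate_arithmetic(a, b):
    return a * b                         # realized via multiplication

def and_gate_via_nand(a, b):
    nand = lambda x, y: 1 - (x & y)      # realized via a NAND primitive
    return nand(nand(a, b), nand(a, b))  # the NAND of a NAND is an AND

# Same truth table over the bits 0 and 1, different inner "physics."
for a in (0, 1):
    for b in (0, 1):
        assert and_gate_arithmetic(a, b) == and_gate_via_nand(a, b)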
To exit Ryle’s regress, we have to discharge an algorithm’s homunculi. We can do this by identifying the algorithm’s programming language—by saying what its architecture is. Importantly, this does not require us to say how, or from what physical stuff, the architecture is made! “Whether you build a computer out of transistors, hydraulic valves, or a chemistry set, the principles on which it operates are much the same” (Hillis, 1998, p. 10).
2.12: Implementing Architectures
At the computational level, one uses a formal vocabulary to provide a rigorous description of input-output mappings. At the algorithmic level, a procedural or behavioural vocabulary is employed to describe the algorithm being used to calculate a particular input-output mapping. The functional architecture plays a special role at the algorithmic level, for it provides the primitive operations from which algorithms are created. Thus we would expect the behavioural vocabulary used for algorithms to also be applied to the architecture.
The special nature of the architecture means that additional behavioural descriptions are required. A researcher must also collect behavioural evidence to support his or her claim that some algorithmic component is in fact an architectural primitive. One example of this, which appears when the ideas that we have been developing in this chapter are applied to the science of human cognition, is to conduct behavioural experiments to determine whether a function is cognitively impenetrable (Pylyshyn, 1984; Wright & Dawson, 1994). We return to this kind of evidence in Chapter 3.
Of course, the fundamental difference between algorithm and architecture is that only the latter can be described in terms of physical properties. Algorithms are explained in terms of the architectural components in which they are written. Architectural components are explained by describing how they are implemented by some physical device. At the implementational level a researcher uses a physical vocabulary to explain how architectural primitives are brought to life.
An implementational account of the logic gates illustrated in Figure 2-1 would explain their function by appealing to the ability of metal wires to conduct electricity, to the nature of electric circuits, and to the impedance of the flow of electricity through these circuits when switches are open (Shannon, 1938). An implementational account of how a vacuum tube creates a relay of the sort illustrated in Figure 2-2 would appeal to what is known as the Edison effect, in which electricity can mysteriously flow through a vacuum and the direction of this flow can be easily and quickly manipulated to control the gate between the source and the drain (Josephson, 1961; Reid, 2001).
That the architecture has dual lives, both physical and algorithmic (Haugeland, 1985), leads to important philosophical issues. In the philosophy of science there is a great deal of interest in determining whether a theory phrased in one vocabulary (e.g., chemistry) can be reduced to another theory laid out in a different vocabulary (e.g., physics). One approach to reduction is called the “new wave” (Churchland, 1985; Hooker, 1981). In a new wave reduction, the translation of one theory into another is accomplished by creating a third, intermediate theory that serves as a bridge between the two. The functional architecture is a bridge between the algorithmic and the implementational. If one firmly believed that a computational or algorithmic account could be reduced to an implementational one (Churchland, 1988), then a plausible approach to doing so would be to use the bridging properties of the architecture.
The dual nature of the architecture plays a role in another philosophical discussion, the famous “Chinese room argument” (Searle, 1980). In this thought experiment, people write questions in Chinese symbols and pass them through a slot into a room. Later, answers to these questions, again written in Chinese symbols, are passed back to the questioner. The philosophical import of the Chinese room arises when one looks into the room to see how it works.
Inside the Chinese room is a native English speaker—Searle himself—who knows no Chinese, and for whom Chinese writing is a set of meaningless squiggles. The room contains boxes of Chinese symbols, as well as a manual for how to put these together in strings. The English speaker is capable of following these instructions, which are the room’s algorithm. When a set of symbols is passed into the room, the person inside can use the instructions and put together a new set of symbols to pass back outside. This is the case even though the person inside the room does not understand what the symbols mean, and does not even know that the inputs are questions and the outputs are answers. Searle (1980) uses this example to pose a pointed question: where in this room is the knowledge of Chinese? He argues that it is not to be found, and then uses this point to argue against strong claims about the possibility of machine intelligence.
But should we expect to see such knowledge if we were to open the door to the Chinese room and peer inside? Given our current discussion of the architecture, the answer is probably no. This is because if we could look inside the “room” of a calculating device to see how it works—to see how its physical properties bring its calculating abilities to life—we would not see the input-output mapping, nor would we see a particular algorithm in its entirety. At best, we would see the architecture and how it is physically realized in the calculator. The architecture of a calculator (e.g., the machine table of a Turing machine) would look as much like the knowledge of arithmetic calculations as Searle and the instruction manual would look like knowledge of Chinese. However, we would have no problem recognizing the possibility that the architecture is responsible for producing calculating behaviour!
Because the architecture is simply the primitives from which algorithms are constructed, it is responsible for algorithmic behaviour—but doesn’t easily reveal this responsibility on inspection. That the holistic behaviour of a device would not be easily seen in the actions of its parts was recognized in Leibniz’ mill, an early eighteenth-century ancestor to the Chinese room.
In his Monadology, Gottfried Leibniz wrote:
Supposing there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill. Now, on going into it he would find only pieces working upon one another, but never would he find anything to explain Perception. It is accordingly in the simple substance, and not in the composite nor in a machine that the Perception is to be sought. (Leibniz, 1902, p. 254)
Leibniz called these simple substances monads and argued that all complex experiences were combinations of monads. Leibniz’ monads are clearly an antecedent of the architectural primitives that we have been discussing over the last few pages. Just as thoughts are composites in the sense that they can be built from their component monads, an algorithm is a combination or sequence of primitive processing steps. Just as monads cannot be further decomposed, the components of an architecture are not explained by being further decomposed, but are instead explained by directly appealing to physical causes. Just as the pieces of Leibniz’ mill would look like working parts, and not like the product they created, the architecture produces, but does not resemble, complete algorithms.
The Chinese room would be a more compelling argument against the possibility of machine intelligence if one were to look inside it and actually see its knowledge. This would mean that its homunculi were not discharged, and that intelligence was not the product of basic computational processes that could be implemented as physical devices.
2.13: Levelling the Field
The logic machines that arose late in the nineteenth century, and the twentieth-century general-purpose computers that they evolved into, are examples of information processing devices. It has been argued in this chapter that in order to explain such devices, four different vocabularies must be employed, each of which is used to answer a different kind of question. At the computational level, we ask what information processing problem is being solved by the device. At the algorithmic level, we ask what procedure or program is being used to solve this problem. At the architectural level, we ask from what primitive information capabilities is the algorithm composed. At the implementational level, we ask what physical properties are responsible for instantiating the components of the architecture.
As we progress from the computational question through questions about algorithm, architecture, and implementation we are moving in a direction that takes us from the very abstract to the more concrete. From this perspective each of these questions defines a different level of analysis, where the notion of level is to be taken as “level of abstractness.” The main theme of this chapter, then, is that to fully explain an information processing device one must explain it at four different levels of analysis.
The theme that I’ve developed in this chapter is an elaboration of an approach with a long history in cognitive science that has been championed in particular by Pylyshyn (1984) and Marr (1982). This historical approach, called the tri-level hypothesis (Dawson, 1998), is used to explain information devices by performing analyses at three different levels: computational, algorithmic, and implementational. The approach that has been developed in this chapter agrees with this view, but adds to it an additional level of analysis: the architectural. We will see throughout this book that an information processing architecture has properties that separate it from both algorithm and implementation, and that treating it as an independent level is advantageous.
The view that information processing devices must be explained by multiple levels of analysis has important consequences for cognitive science, because the general view in cognitive science is that cognition is also the result of information processing. This implies that a full explanation of human or animal cognition also requires multiple levels of analysis.
Not surprisingly, it is easy to find evidence of all levels of investigation being explored as cognitive scientists probe a variety of phenomena. For example, consider how classical cognitive scientists explore the general phenomenon of human memory.
At the computational level, researchers interested in the formal characterization of cognitive processes (such as those who study cognitive informatics [Wang, 2003, 2007]), provide abstract descriptions of what it means to memorize, including attempts to mathematically characterize the capacity of human memory (Lopez, Nunez, & Pelayo, 2007; Wang, 2009; Wang, Liu, & Wang, 2003).
At the algorithmic level of investigation, the performance of human subjects in a wide variety of memory experiments has been used to reverse engineer “memory” into an organized system of more specialized functions (Baddeley, 1990) including working memory (Baddeley, 1986, 2003), declarative and nondeclarative memory (Squire, 1992), semantic and episodic memory (Tulving, 1983), and verbal and imagery stores (Paivio, 1971, 1986). For instance, the behaviour of the serial position curve obtained in free recall experiments under different experimental conditions was used to pioneer cognitive psychology’s proposal of the modal memory model, in which memory was divided into a limited-capacity, short-term store and a much larger-capacity, long-term store (Waugh & Norman, 1965). The algorithmic level is also the focus of the art of memory (Yates, 1966), in which individuals are taught mnemonic techniques to improve their ability to remember (Lorayne, 1998, 2007; Lorayne & Lucas, 1974).
That memory can be reverse engineered into an organized system of subfunctions leads cognitive scientists to determine the architecture of memory. For instance, what kinds of encodings are used in each memory system, and what primitive processes are used to manipulate stored information? Richard Conrad’s (1964a, 1964b) famous studies of confusion in short-term memory indicated that it represented information using an acoustic code. One of the most controversial topics in classical cognitive science, the “imagery debate,” concerns whether the primitive form of spatial information is imagery, or whether images are constructed from more primitive propositional codes (Anderson, 1978; Block, 1981; Kosslyn, Thompson, & Ganis, 2006; Pylyshyn, 1973, 1981a, 2003b).
Even though classical cognitive science is functionalist in nature and (in the eyes of its critics) shies away from biology, it also appeals to implementational evidence in its study of memory. The memory deficits revealed in patient Henry Molaison after his hippocampus was surgically removed to treat his epilepsy (Scoville & Milner, 1957) provided pioneering biological support for the functional separations of short-term from long-term memory and of declarative memory from nondeclarative memory. Modern advances in cognitive neuroscience have provided firm biological foundations for elaborate functional decompositions of memory (Cabeza & Nyberg, 2000; Poldrack et al., 2001; Squire, 1987, 2004). Similar evidence has been brought to bear on the imagery debate as well (Kosslyn, 1994; Kosslyn et al., 1995; Kosslyn et al., 1999; Kosslyn, Thompson, & Alpert, 1997).
In the paragraphs above I have taken one tradition in cognitive science (the classical) and shown that its study of one phenomenon (human memory) reflects the use of all of the levels of investigation that have been the topic of the current chapter. However, the position that cognitive explanations require multiple levels of analysis (e.g., Marr, 1982) has not gone unchallenged. Some researchers have suggested that this process is not completely appropriate for explaining cognition or intelligence in biological agents (Churchland, Koch, & Sejnowski, 1990; Churchland & Sejnowski, 1992).
For instance, Churchland, Koch, & Sejnowski (1990, p. 52) observed that “when we measure Marr’s three levels of analysis against levels of organization in the nervous system, the fit is poor and confusing.” This observation is based on the fact that there appear to be a great many different spatial levels of organization in the brain, which suggests to Churchland, Koch, & Sejnowski that there must be many different implementational levels, which implies in turn that there must be many different algorithmic levels.
The problem with this argument is that it confuses ontology with epistemology. That is, Churchland, Koch, & Sejnowski (1990) seemed to be arguing that Marr’s levels are accounts of the way nature is—that information processing devices are literally organized into the three different levels. Thus when a system appears to exhibit, say, multiple levels of physical organization, this brings Marr-as-ontology into question. However, Marr’s levels do not attempt to explain the nature of devices, but instead provide an epistemology—a way to inquire about the nature of the world. From this perspective, a system that has multiple levels of physical organization would not challenge Marr, because Marr and his followers would be comfortable applying their approach to the system at each of its levels of physical organization.
Other developments in cognitive science provide deeper challenges to the multiple-levels approach. As has been outlined in this chapter, the notion of multiple levels of explanation in cognitive science is directly linked to two key ideas: 1) that information processing devices invite and require this type of explanation, and 2) that cognition is a prototypical example of information processing. Recent developments in cognitive science represent challenges to these key ideas. For instance, embodied cognitive science takes the position that cognition is not information processing of the sort that involves the rule-governed manipulation of mentally represented worlds; it is instead the control of action on the world (Chemero, 2009; Clark, 1997, 1999; Noë, 2004, 2009; Robbins & Aydede, 2009). Does the multiple-levels approach apply if the role of cognition is radically reconstrued?
Churchland, Koch, & Sejnowski (1990, p. 52) suggested that “which really are the levels relevant to explanation in the nervous system is an empirical, not an a priori, question.” One of the themes of the current book is to take this suggestion to heart by seeing how well the same multiple levels of investigation can be applied to the three major perspectives in modern cognitive science: classical, connectionist, and embodied. In the next three chapters, I begin this pursuit by using the multiple levels introduced in Chapter 2 to investigate the nature of classical cognitive science (Chapter 3), connectionist cognitive science (Chapter 4), and embodied cognitive science (Chapter 5). Can the multiple levels of investigation be used to reveal principles that unify these three different and frequently mutually antagonistic approaches? Or is modern cognitive science beginning to fracture in a fashion similar to what has been observed in experimental psychology?
3.01: Chapter Overview
When cognitive science arose in the late 1950s, it did so in the form of what is now known as the classical approach. Inspired by the nature of the digital electronic computer, classical cognitive science adopted the core assumption that cognition was computation. The purpose of the current chapter is to explore the key ideas of classical cognitive science that provide the core elements of this assumption.
The chapter begins by showing that the philosophical roots of classical cognitive science are found in the rationalist perspective of Descartes. While classical cognitive scientists agree with the Cartesian view of the infinite variety of language, they do not use this property to endorse dualism. Instead, taking advantage of modern formal accounts of information processing, they adopt models that use recursive rules to manipulate the components of symbolic expressions. As a result, finite devices—physical symbol systems—permit an infinite behavioural potential. Some of the key properties of physical symbol systems are reviewed.
One consequence of viewing the brain as a physical substrate that brings a universal machine into being is that cognition can be simulated by other universal machines, such as digital computers. As a result, the computer simulation of human cognition becomes a critical methodology of the classical approach. One issue that arises is validating such simulations. The notions of weak and strong equivalence are reviewed, with the latter serving as the primary goal of classical cognitive science.
To say that two systems—such as a simulation and a human subject—are strongly equivalent is to say that both are solving the same information processing problem, using the same algorithm, based on the same architecture. Establishing strong equivalence requires collecting behavioural evidence of the types introduced in Chapter 2 (relative complexity, intermediate state, and error evidence) to reverse engineer a subject’s algorithm. It also requires discovering the components of a subject’s architecture, which involves behavioural evidence concerning cognitive impenetrability as well as biological evidence about information processing in the brain (e.g., evidence about which areas of the brain might be viewed as being information processing modules). In general, the search for strong equivalence by classical cognitive scientists involves conducting a challenging research program that can be described as functional analysis or reverse engineering.
The reverse engineering in which classical cognitive scientists are engaged involves using a variety of research methods adopted from many different disciplines. This is because this research strategy explores cognition at all four levels of investigation (computational, algorithmic, architectural, and implementational) that were introduced in Chapter 2. The current chapter is organized in a fashion that explores computational issues first, and then proceeds through the remaining levels to end with some considerations about implementational issues of importance to classical cognitive science.
3.02: Mind, Disembodied
In the seventh century, nearly the entire Hellenistic world had been conquered by Islam. The Greek texts of philosophers such as Plato and Aristotle had already been translated into Syriac; the new conquerors translated these texts into Arabic (Kuhn, 1957). Within two centuries, these texts were widely available in educational institutions that ranged from Baghdad to Cordoba and Toledo. By the tenth century, Latin translations of these Arabic texts had made their way to Europe. Islamic civilization “preserved and proliferated records of ancient Greek science for later European scholars” (Kuhn, 1957, p. 102).
The availability of the ancient Greek texts gave rise to scholasticism in Europe during the middle ages. Scholasticism was central to the European universities that arose in the twelfth century, and worked to integrate key ideas of Greek philosophy into the theology of the Church. During the thirteenth century, scholasticism achieved its zenith with the analysis of Aristotle’s philosophy by Albertus Magnus and Thomas Aquinas.
Scholasticism, as a system of education, taught its students the wisdom of the ancients. The scientific revolution that took flight in the sixteenth and seventeenth centuries arose in reaction to this pedagogical tradition. The discoveries of such luminaries as Newton and Leibniz were only possible when the ancient wisdom was directly questioned and challenged.
The seventeenth-century philosophy of René Descartes (1996, 2006) provided another example of fundamental insights that arose from a reaction against scholasticism. Descartes’ goal was to establish a set of incontestable truths from which a rigorous philosophy could be constructed, much as mathematicians used methods of deduction to derive complete geometries from a set of foundational axioms. “The only order which I could follow was that normally employed by geometers, namely to set out all the premises on which a desired proposition depends, before drawing any conclusions about it” (Descartes, 1996, p. 9).
Descartes began his search for truth by applying his own, new method of inquiry. This method employed extreme skepticism: any idea that could possibly be doubted was excluded, including the teachings of the ancients as endorsed by scholasticism. Descartes, more radically, also questioned ideas supplied by the senses because “from time to time I have found that the senses deceive, and it is prudent never to trust completely those who have deceived us even once” (Descartes, 1996, p. 12). Clearly this approach brought a vast number of concepts into question, and removed them as possible foundations of knowledge.
What ideas were removed? All notions of the external world could be false, because knowledge of them is provided by unreliable senses. Also brought into question is the existence of one’s physical body, for the same reason. “I shall consider myself as not having hands or eyes, or flesh, or blood or senses, but as falsely believing that I have all these things” (Descartes, 1996, p. 15).
Descartes initially thought that basic, self-evident truths from mathematics could be spared, facts such as 2 + 3 = 5. But he then realized that these facts too could be reasonably doubted.
How do I know that God has not brought it about that I too go wrong every time I add two and three or count the sides of a square, or in some even simpler matter, if that is imaginable? (Descartes, 1996, p. 14)
With the exclusion of the external world, the body, and formal claims from mathematics, what was left for Descartes to believe in? He realized that in order to doubt, or even to be deceived by a malicious god, he must exist as a thinking thing. “I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind” (Descartes, 1996, p. 17). And what is a thinking thing? “A thing that doubts, understands, affirms, denies, is willing, is unwilling, and also imagines and has sensory perceptions” (p. 19).
After establishing his own existence as incontestably true, Descartes used this fact to prove the existence of a perfect God who would not deceive. He then established the existence of an external world that was imperfectly sensed.
However, a fundamental consequence of Descartes’ analysis was a profound division between mind and body. First, Descartes reasoned that mind and body must be composed of different “stuff.” This had to be the case, because one could imagine that the body was divisible (e.g., through losing a limb) but that the mind was impossible to divide.
Indeed the idea I have of the human mind, in so far as it is a thinking thing, which is not extended in length, breadth or height and has no other bodily characteristics, is much more distinct than the idea of any corporeal thing. (Descartes, 1996, p. 37)
Further to this, the mind was literally disembodied—the existence of the mind did not depend upon the existence of the body.
Accordingly this ‘I,’ that is to say, the Soul by which I am what I am, is entirely distinct from the body and is even easier to know than the body; and would not stop being everything it is, even if the body were not to exist. (Descartes, 2006, p. 29)
Though Descartes’ notion of mind was disembodied, he acknowledged that mind and body had to be linked in some way. The interaction between mind and brain was famously housed in the pineal gland: “The mind is not immediately affected by all parts of the body, but only by the brain, or perhaps just by one small part of the brain, namely the part which is said to contain the ‘common’ sense” (Descartes, 1996, p. 59). What was the purpose of this type of interaction? Descartes noted that the powers of the mind could be used to make decisions beneficial to the body, to which the mind is linked: “For the proper purpose of the sensory perceptions given me by nature is simply to inform the mind of what is beneficial or harmful for the composite of which the mind is a part” (p. 57).
For Descartes the mind, as a thinking thing, could apply various rational operations to the information provided by the imperfect senses: sensory information could be doubted, understood, affirmed, or denied; it could also be elaborated via imagination. In short, these operations could not only inform the mind of what would benefit or harm the mind-body composite, but could also be used to plan a course of action to obtain the benefits or avoid the harm. Furthermore, the mind— via its capacity for willing—could cause the body to perform the desired actions to bring this plan into fruition. In Cartesian philosophy, the disembodied mind was responsible for the “thinking” in a sense-think-act cycle that involved the external world and the body to which the mind was linked.
Descartes’ disembodiment of the mind—his claim that the mind is composed of different “stuff ” than is the body or the physical world—is a philosophical position called dualism. Dualism has largely been abandoned by modern science, including cognitive science. The vast majority of cognitive scientists adopt a very different philosophical position called materialism. According to materialism, the mind is caused by the brain. In spite of the fact that it has abandoned Cartesian dualism, most of the core ideas of classical cognitive science are rooted in the ideas that Descartes wrote about in the seventeenth century. Indeed, classical cognitive science can be thought of as a synthesis between Cartesian philosophy and materialism. In classical cognitive science, this synthesis is best expressed as follows: cognition is the product of a physical symbol system (Newell, 1980). The physical symbol system hypothesis is made plausible by the existence of working examples of such devices: modern digital computers.
3.03: Mechanizing the Infinite
We have seen that the disembodied Cartesian mind is the thinking thing that mediates the sensing of, and acting upon, the world. It does so by engaging in such activities as doubting, understanding, affirming, denying, perceiving, imagining, and willing. These activities were viewed by Descartes as being analogous to a geometer’s use of rules to manipulate mathematical expressions. This leads us to ask, in what medium is thought carried out? What formal rules does it employ? What symbolic expressions does it manipulate?
Many other philosophers were sympathetic to the claim that mental activity was some sort of symbol manipulation. Thomas Hobbes is claimed as one of the philosophical fathers of classical cognitive science because of his writings on the nature of the mind:
“When a man Reasoneth, hee does nothing else but conceive a summe totall, from Addition of parcels; or conceive a Remainder, from Substraction of one summe from another.” Such operations were not confined to numbers: “These operations are not incident to Numbers only, but to all manner of things that can be added together, and taken one out of another.” (Hobbes, 1967, p. 32)
Hobbes noted that geometricians applied such operations to lines and figures, and that logicians applied these operations to words. Thus it is not surprising that Hobbes described thought as mental discourse—thinking, for him, was language-like.
Why were scholars taken by the idea that language was the medium in which thought was conducted? First, they agreed that thought was exceptionally powerful, in the sense that there were no limits to the creation of ideas. In other words, man in principle was capable of an infinite variety of different thoughts. “Reason is a universal instrument which can operate in all sorts of situations” (Descartes, 2006, p. 47). Second, language was a medium in which thought could be expressed, because it too was capable of infinite variety. Descartes expressed this as follows:
For it is a very remarkable fact that there are no men so dull-witted and stupid, not even madmen, that they are incapable of stringing together different words, and composing them into utterances, through which they let their thoughts be known. (Descartes, 2006, p. 47)

Modern linguists describe this as the creative aspect of language (Chomsky, 1965, 1966). “An essential property of language is that it provides the means for expressing indefinitely many thoughts and for reacting appropriately in an indefinite range of new situations” (Chomsky, 1965, p. 6).
While Descartes did not write a great deal about language specifically (Chomsky, 1966), it is clear that he was sympathetic to the notion that language was the medium for thought. This is because he used the creative aspect of language to argue in favor of dualism. Inspired by the automata that were appearing in Europe in his era, Descartes imagined the possibility of having to prove that sophisticated future devices were not human. He anticipated the Turing test (Turing, 1950) by more than three centuries by using language to separate man from machine.
For we can well conceive of a machine made in such a way that it emits words, and even utters them about bodily actions which bring about some corresponding change in its organs . . . but it is not conceivable that it should put these words in different orders to correspond to the meaning of things said in its presence. (Descartes, 2006, p. 46)
Centuries later, similar arguments still appear in philosophy. For instance, why is a phonograph recording of someone’s entire life of speech an inadequate simulation of that speech (Fodor, 1968b)? “At the very best, phonographs do what speakers do, not what speakers can do” (p. 129).
Why might it be impossible for a device to do what speakers can do? For Descartes, language-producing machines were inconceivable because machines were physical and therefore finite. Their finite nature made it impossible for them to be infinitely variable.
Although such machines might do many things as well or even better than any of us, they would inevitably fail to do some others, by which we would discover that they did not act consciously, but only because their organs were disposed in a certain way. (Descartes, 2006, pp. 46–47)
In other words, the creativity of thought or language was only possible in the infinite, nonphysical, disembodied mind.
It is this conclusion of Descartes’ that leads to a marked distinction between Cartesian philosophy and classical cognitive science. Classical cognitive science embraces the creative aspect of language. However, it views such creativity from a materialist, not a dualist, perspective. Developments in logic and in computing that have occurred since the seventeenth century have produced a device that Descartes did not have at his disposal: the physical symbol system. And—seemingly magically—a physical symbol system is a finite artifact that is capable of an infinite variety of behaviour.
By the nineteenth century, the notion of language as a finite system that could be infinitely expressive was well established (Humboldt, 1999, p. 91): “For language is quite peculiarly confronted by an unending and truly boundless domain, the essence of all that can be thought. It must therefore make infinite employment of finite means.” While Humboldt’s theory of language has been argued to presage many of the key properties of modern generative grammars (Chomsky, 1966), it failed to provide a specific answer to the foundational question that it raised: how can a finite system produce the infinite? The answer to that question required advances in logic and mathematics that came after Humboldt, and which in turn were later brought to life by digital computers.
While it had been suspected for centuries that all traditional pure mathematics can be derived from the basic properties of natural numbers, confirmation of this suspicion was only obtained with advances that occurred in the nineteenth and twentieth centuries (Russell, 1993). The “arithmetisation” of mathematics was established in the nineteenth century, in what are called the Dedekind-Peano axioms (Dedekind, 1901; Peano, 1973). This mathematical theory defines three primitive notions: 0, number, and successor. It also defines five basic propositions: 0 is a number; the successor of any number is a number; no two numbers have the same successor; 0 is not the successor of any number; and the principle of mathematical induction. These basic ideas were sufficient to generate the entire theory of natural numbers (Russell, 1993).
Of particular interest to us is the procedure that is used in this system to generate the set of natural numbers. The set begins with 0. The next number is 1, which can be defined as the successor of 0, as s(0). The next number is 2, which is the successor of 1, s(1), and is also the successor of the successor of 0, s(s(0)). In other words, the successor function can be used to create the entire set of natural numbers: 0, s(0), s(s(0)), s(s(s(0))), and so on.
The definition of natural numbers using the successor function is an example of simple recursion; a function is recursive when it operates by referring to itself. The expression s(s(0)) is recursive because the first successor function takes as input another version of itself. Recursion is one method by which a finite system (such as the Dedekind-Peano axioms) can produce infinite variety, as in the set of natural numbers.
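A minimal sketch in Python may help here (the tuple representation of s is my own convenience, not part of the Dedekind-Peano formalism). Numbers are built by applying the successor function, and evaluated by recursively unwinding it:

def s(n):
    return ("s", n)        # the successor of n

zero = 0
three = s(s(s(zero)))      # s(s(s(0)))

def to_int(n):
    # Recursive: the value of s(m) is one more than the value of m.
    return 0 if n == 0 else 1 + to_int(n[1])

print(to_int(three))  # prints 3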
Recursion is not limited to the abstract world of mathematics, nor is its only role to generate infinite variety. It can work in the opposite direction, transforming the large and complex into the small and simple. For instance, recursion can be used to solve a complex problem by reducing it to a simple version of itself. This problem-solving approach is often called divide and conquer (Knuth, 1997).
One example of this is the famous Tower of Hanoi problem (see Figure 3-1), first presented to the world as a wooden puzzle by French mathematician Edouard Lucas in 1883. In this puzzle, there are three locations, A, B, and C. At the start of this problem there is a set of differently sized wooden discs stacked upon one another at location A. Let us number these discs 0, 1, 2, and so on, where the number assigned to a disc indicates its size. The goal for the problem is to move this entire stack to location C, under two restrictions: first, only one disc can be moved at a time; second, a larger disc can never be placed upon a smaller disc.
Figure 3-1. The starting configuration for a five-disc version of the Tower of Hanoi problem.
The simplest version of the Tower of Hanoi problem starts with only disc 0 at location A. Its solution is completely straightforward: disc 0 is moved directly to location C, and the problem is solved. The problem is only slightly more complicated if it starts with two discs stacked on location A. First, disc 0 is moved to location B. Second, disc 1 is moved to location C. Third, disc 0 is moved from A to C, stacked on top of disc 1, and the problem has been solved.
What about a Tower of Hanoi problem that begins with three discs? To solve this more complicated problem, we can first define a simpler subproblem: stacking discs 0 and 1 on location B. This is accomplished by doing the actions defined in the preceding paragraph, with the exception that the goal location is B for the subproblem. Once this subtask is accomplished, disc 2 can be moved directly to the final goal, location C. Now, we solve the problem by moving discs 0 and 1, which are stacked on B, to location C, by again using a procedure like the one described in the preceding paragraph.
This account of solving a more complex version of the Tower of Hanoi problem points to the recursive nature of divide and conquer: we solve the bigger problem by first solving a smaller version of the same kind of problem. To move a stack of n discs to location C, we first move the smaller stack of n – 1 discs to location B. “Moving the stack” is the same kind of procedure for the n discs and for the n – 1 discs. The whole approach is recursive in the sense that to move the big stack, the same procedure must first be used to move the smaller stack on top of the largest disc.
The recursive nature of the solution to the Tower of Hanoi is made obvious if we write a pseudocode algorithm for moving the disks. Let us call our procedure MoveStack (). It will take four arguments: the number of discs in the stack to be moved, the starting location, the “spare” location, and the goal location. So, if we had a stack of three discs at location A, and wanted to move the stack to location C using location B as the spare, we would execute MoveStack (3, A, B, C).
The complete definition of the procedure is as follows:
MoveStack (N, Start, Spare, Goal)
If N = 0
Exit
Else
MoveStack (N – 1, Start, Goal, Spare)
Move disc N – 1 from Start to Goal
MoveStack (N – 1, Spare, Start, Goal)
EndIf
Note the explicit recursion in this procedure: MoveStack () calls itself to move the smaller stack of discs that sits on top of the disc it is about to move. Note too that the recursive nature of this program means that it is flexible enough to work with any value of N. Figure 3-2 illustrates an intermediate state that occurs when this procedure is applied to a five-disc version of the problem.
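The same procedure runs directly in Python (a sketch; the disc numbering follows the text, so the largest of N discs is disc N – 1, and the print format is my own):

def move_stack(n, start, spare, goal):
    # Move a stack of n discs from start to goal, using spare as workspace.
    if n == 0:
        return
    move_stack(n - 1, start, goal, spare)                 # clear the n - 1 smaller discs onto spare
    print("move disc", n - 1, "from", start, "to", goal)  # the largest disc moves directly
    move_stack(n - 1, spare, start, goal)                 # restack the smaller discs on top

move_stack(3, "A", "B", "C")  # solves the three-disc problem in seven moves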
Figure 3-2. An intermediate state that occurs when MoveStack () is applied to a five-disc version of the Tower of Hanoi.
In the code given above, recursion was evident because MoveStack () called itself. There are other ways in which recursion can make itself evident. For instance, recursion can produce hierarchical, self-similar structures such as fractals (Mandelbrot, 1983), whose recursive nature is immediately evident through visual inspection. Consider the Sierpinski triangle (Mandelbrot, 1983), which begins as an equilateral triangle (Figure 3-3).
Figure 3-3. The root of the Sierpinski triangle is an equilateral triangle.
The next step in creating the Sierpinski triangle is to take Figure 3-3 and reduce it to exactly half of its original size. Three of these smaller triangles can be inscribed inside of the original triangle, as is illustrated in Figure 3-4.
Figure 3-4. The second step of constructing a Sierpinski triangle.
The rule used to create Figure 3-4 can be applied recursively and (in principle) infinitely. One takes the smaller triangle that was used to create Figure 3-4, makes it exactly half of its original size, and inscribes three copies of this still smaller triangle into each of the three triangles that were used to create Figure 3-4. This rule can be applied recursively to inscribe smaller triangles into any of the triangles that were added to the figure in a previous stage of drawing. Figure 3-5 shows the result when this rule is applied four times to Figure 3-4.
Figure 3-5. The Sierpinski triangle that results when the recursive rule is applied four times to Figure 3-4.
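The construction rule itself is naturally expressed as a recursive procedure. The Python sketch below (the coordinate representation is my own) subdivides a triangle into three half-size copies of itself, to any depth:

def sierpinski(triangle, depth):
    # Recursively subdivide a triangle into three half-size copies of itself.
    if depth == 0:
        return [triangle]
    (ax, ay), (bx, by), (cx, cy) = triangle
    # Midpoints of the three sides form the inscribed central triangle.
    ab = ((ax + bx) / 2, (ay + by) / 2)
    bc = ((bx + cx) / 2, (by + cy) / 2)
    ca = ((cx + ax) / 2, (cy + ay) / 2)
    # Recurse on the three corner triangles; the centre is left empty.
    return (sierpinski(((ax, ay), ab, ca), depth - 1)
            + sierpinski((ab, (bx, by), bc), depth - 1)
            + sierpinski((ca, bc, (cx, cy)), depth - 1))

unit = ((0.0, 0.0), (1.0, 0.0), (0.5, 1.0))
print(len(sierpinski(unit, 4)))  # prints 81, i.e., 3 ** 4 small triangles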
The Sierpinski triangle, and all other fractals that are created by recursion, are intrinsically self-similar. That is, if one were to take one of the smaller triangles from which Figure 3-4 is constructed and magnify it, one would still see the hierarchical structure that is illustrated above. The structure of the whole is identical to the (smaller) structure of the parts. In the next section, we see that the recursive nature of human language reveals itself in the same way.
3.04: Phrase Markers and Fractals
Consider a finite set of elements (e.g., words, phonemes, morphemes) that can, by applying certain rules, be combined to create a sentence or expression that is finite in length. A language can be defined as the set of all of the possible expressions that can be generated in this way from the same set of building blocks and the same set of rules (Chomsky, 1957). From this perspective, one can define a grammar as a device that can distinguish the set of grammatical expressions from all other expressions, including those that are generated from the same elements but which violate the rules that define the language. In modern linguistics, a basic issue to investigate is the nature of the grammar that defines a natural human language.
Chomsky (1957) noted that one characteristic of a natural language such as English is that a sentence can be lengthened by inserting a clause into its midst. As we see in the following section, this means that the grammar of natural languages is complicated enough that simple machines, such as finite state automata, are not powerful enough to serve as grammars for them.
The complex, clausal structure of a natural language is instead captured by a more powerful device—a Turing machine—that can accommodate the regularities of a context-free grammar (e.g., Chomsky, 1957, 1965). A context-free grammar can be described as a set of rewrite rules that convert one symbol into one or more other symbols. The application of these rewrite rules produces a hierarchically organized symbolic structure called a phrase marker (Radford, 1981). A phrase marker is a set of points or labelled nodes that are connected by branches. Nonterminal nodes represent syntactic categories; at the bottom of a phrase marker are the terminal nodes that represent lexical items (e.g., words). A phrase marker for the simple sentence Dogs bark is illustrated in Figure 3-6.
Figure 3-6. A phrase marker for the sentence Dogs bark.
The phrase marker for a sentence can be illustrated as an upside-down tree whose structure is grown from the root node S (for sentence). The application of the rewrite rule S → NP VP produces the first layer of the Figure 3-6 phrase marker, showing how the nodes NP (noun phrase) and VP (verb phrase) are grown from S. Other rewrite rules that are invoked to create that particular phrase marker are NP → N′, N′ → N, N → dogs, VP → V′, V′ → V, and V → bark. When any of these rewrite rules are applied, the symbol to the left of the → is rewritten as the symbol or symbols to the right. In the phrase marker, this means the symbols on the right of the → are written as nodes below the original symbol, and are connected to the originating node above, as is shown in Figure 3-6.
In a modern grammar called x-bar syntax (Jackendoff, 1977), nodes like NP and VP in Figure 3-6 are symbols that represent phrasal categories, nodes like N and V are symbols that represent lexical categories, and nodes like N′ and V′ are symbols that represent categories that are intermediate between lexical categories and phrasal categories. Such intermediate categories are required to capture some regularities in the syntax of natural human languages.
In some instances, the same symbol can be found on both sides of the → in a rewrite rule. For instance, one valid rewrite rule for the intermediate node of a noun phrase is N′ → AP N′, where AP represents an adjective phrase. Because the same symbol occurs on each side of the rule, the context-free grammar is recursive. One can apply this rule repeatedly to insert clauses of the same type into a phrase. This is shown in Figure 3-7, which illustrates phrase markers for noun phrases that might apply to my dog Rufus. The basic noun phrase is the dog. If this recursive rule is applied once, it permits a more elaborate noun phrase to be created, as in the cute dog. Recursive application of this rule permits the noun phrase to be elaborated indefinitely (e.g., the cute brown scruffy dog).
Figure 3-7. Phrase markers for three noun phrases: (A) the dog, (B) the cute dog, and (C) the cute brown scruffy dog. Note the recursive nature of (C).
The recursive nature of a context-free grammar is revealed in a visual inspection of a phrase marker like the one illustrated in Figure 3-7C. As one inspects the figure, one sees the same pattern recurring again and again, as was the case with the Sierpinski triangle. The recursive nature of a context-free grammar produces self-similarity within a phrase marker. The recursion of such a grammar is also responsible for its ability to use finite resources (a finite number of building blocks and a finite number of rewrite rules) to produce a potentially infinite variety of expressions, as in the sentences of a language, each of which is represented by its own phrase marker.
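The finite-means, infinite-variety point can be made concrete with a toy version of the recursive noun phrase grammar (a Python sketch; the rule set paraphrases the rules discussed above, N′ is written NBAR because identifiers cannot carry the bar, and the word list is invented):

import random

grammar = {
    "NP":   [["DET", "NBAR"]],
    "NBAR": [["AP", "NBAR"], ["N"]],  # the recursive rule: NBAR -> AP NBAR
    "DET":  [["the"]],
    "AP":   [["cute"], ["brown"], ["scruffy"]],
    "N":    [["dog"]],
}

def expand(symbol):
    # Recursively rewrite a symbol until only terminal words remain.
    if symbol not in grammar:
        return [symbol]                    # a terminal word
    rule = random.choice(grammar[symbol])  # pick one rewrite rule
    return [word for part in rule for word in expand(part)]

print(" ".join(expand("NP")))  # e.g., "the cute scruffy dog"

Because NBAR can rewrite into a phrase containing another NBAR, this five-rule grammar already generates an unbounded set of noun phrases.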
3.05: Behaviourism, Language, and Recursion
Behaviourism viewed language as merely being observable behaviour whose development and elicitation were controlled by external stimuli:
A speaker possesses a verbal repertoire in the sense that responses of various forms appear in his behavior from time to time in relation to identifiable conditions. A repertoire, as a collection of verbal operants, describes the potential behavior of a speaker. To ask where a verbal operant is when a response is not in the course of being emitted is like asking where one’s knee-jerk is when the physician is not tapping the patellar tendon. (Skinner, 1957, p. 21)
Skinner’s (1957) treatment of language as verbal behaviour explicitly rejected the Cartesian notion that language expressed ideas or meanings. To Skinner, explanations of language that appealed to such unobservable internal states were necessarily unscientific:
It is the function of an explanatory fiction to allay curiosity and to bring inquiry to an end. The doctrine of ideas has had this effect by appearing to assign important problems of verbal behavior to a psychology of ideas. The problems have then seemed to pass beyond the range of the techniques of the student of language, or to have become too obscure to make further study profitable. (Skinner, 1957, p. 7)
Modern linguistics has explicitly rejected the behaviourist approach, arguing that behaviourism cannot account for the rich regularities that govern language (Chomsky, 1959b).
The composition and production of an utterance is not strictly a matter of stringing together a sequence of responses under the control of outside stimulation and intraverbal association, and that the syntactic organization of an utterance is not something directly represented in any simple way in the physical structure of the utterance itself. (Chomsky, 1959b, p. 55)
Modern linguistics has advanced beyond behaviourist theories of verbal behaviour by adopting a particularly technical form of logicism. Linguists assume that verbal behaviour is the result of sophisticated symbol manipulation: an internal generative grammar.
By a generative grammar I mean simply a system of rules that in some explicit and well-defined way assigns structural descriptions to sentences. Obviously, every speaker of a language has mastered and internalized a generative grammar that expresses his knowledge of his language. (Chomsky, 1965, p. 8)
A sentence’s structural description is represented by using a phrase marker, which is a hierarchically organized symbol structure that can be created by a recursive set of rules called a context-free grammar. In a generative grammar another kind of rule, called a transformation, is used to convert one phrase marker into another.
The recursive grammars that have been developed in linguistics serve two purposes. First, they formalize key structural aspects of human languages, such as the embedding of clauses within sentences. Second, they explain how finite resources are capable of producing an infinite variety of potential expressions. This latter accomplishment represents a modern rebuttal to dualism; we have seen that Descartes (1996) used the creative aspect of language to argue for the separate, nonphysical existence of the mind. For Descartes, machines were not capable of generating language because of their finite nature.
Interestingly, a present-day version of Descartes’ (1996) analysis of the limitations of machines is available. It recognizes that a number of different information processing devices exist that vary in complexity, and it asks which of these devices are capable of accommodating modern, recursive grammars. The answer to this question provides additional evidence against behaviourist or associationist theories of language (Bever, Fodor, & Garrett, 1968).
Figure 3-8. How a Turing machine processes its tape.
In Chapter 2, we were introduced to one simple—but very powerful—device, the Turing machine (Figure 3-8). It consists of a machine head that manipulates the symbols on a ticker tape, where the ticker tape is divided into cells, and each cell is capable of holding only one symbol at a time. The machine head can move back and forth along the tape, one cell at a time. As it moves it can read the symbol on the current cell, which can cause the machine head to change its physical state. It is also capable of writing a new symbol on the tape. The behaviour of the machine head—its new physical state, the direction it moves, the symbol that it writes—is controlled by a machine table that depends only upon the current symbol being read and the current state of the device. One uses a Turing machine by writing a question on its tape, and setting the machine head into action. When the machine head halts, the Turing machine’s answer to the question has been written on the tape.
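A minimal interpreter makes these mechanics concrete (a Python sketch; the machine-table format and the bit-flipping example are my own, not from the text):

def run_turing(table, tape, state="start"):
    # Run a machine table over a tape until the machine halts.
    cells, head = dict(enumerate(tape)), 0
    while state != "halt":
        symbol = cells.get(head, "_")                # "_" marks a blank cell
        state, write, move = table[(state, symbol)]  # consult the machine table
        cells[head] = write                          # the head may rewrite the cell
        head += 1 if move == "R" else -1             # and move one cell either way
    return "".join(cells[i] for i in sorted(cells))

# Example machine: sweep right, flipping every bit, and halt on a blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing(flip, "0110"))  # prints 1001_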
What is meant by the claim that different information processing devices are available? It means that systems that are different from Turing machines must also exist. One such alternative to a Turing machine is called a finite state automaton (Minsky, 1972; Parkes, 2002), which is illustrated in Figure 3-9. Like a Turing machine, a finite state automaton can be described as a machine head that interacts with a ticker tape. There are two key differences between a finite state machine and a Turing machine.
Figure 3-9. How a finite state automaton processes the tape. Note the differences between Figures 3-9 and 3-8.
First, a finite state machine can only move in one direction along the tape, again one cell at a time. Second, a finite state machine can only read the symbols on the tape; it does not write new ones. The symbols that it encounters, in combination with the current physical state of the device, determine the new physical state of the device. Again, a question is written on the tape, and the finite state automaton is started. When it reaches the end of the question, the final physical state of the finite state automaton represents its answer to the original question on the tape.
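For contrast, a finite state automaton can be sketched in just a few lines (Python; the even-a’s machine is my own example). Its entire memory is the single state variable, updated on one left-to-right pass over a read-only tape:

def run_fsa(transitions, start, accepting, tape):
    # One left-to-right pass; the tape is never rewritten or revisited.
    state = start
    for symbol in tape:
        state = transitions[(state, symbol)]  # next state depends only on (state, symbol)
    return state in accepting

# Example: accept strings over {a, b} containing an even number of a's.
even_as = {
    ("even", "a"): "odd",  ("even", "b"): "even",
    ("odd",  "a"): "even", ("odd",  "b"): "odd",
}
print(run_fsa(even_as, "even", {"even"}, "babab"))  # True: two a's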
It is obvious that a finite state automaton is a simpler device than a Turing machine, because it cannot change the ticker tape, and because it can only move in one direction along the tape. However, finite state machines are important information processors. Many of the behaviours in behaviour-based robotics are produced using finite state machines (Brooks, 1989, 1999, 2002). It has also been argued that such devices are all that is required to formalize behaviourist or associationist accounts of behaviour (Bever, Fodor, & Garrett, 1968).
What is meant by the claim that an information processing device can “accommodate” a grammar? In the formal analysis of the capabilities of information processors (Gold, 1967), there are two answers to this question. Assume that knowledge of some grammar has been built into a device’s machine head. One could then ask whether the device is capable of accepting a grammar. In this case, the “question” on the tape would be an expression, and the task of the information processor would be to accept the string, if it is grammatical according to the device’s grammar, or to reject the expression, if it does not belong to the grammar. Another question to ask would be whether the information processor is capable of generating the grammar. That is, given a grammatical expression, can the device use its existing grammar to replicate the expression (Wexler & Culicover, 1980)?
In Chapter 2, it was argued that one level of investigation to be conducted by cognitive science was computational. At the computational level of analysis, one uses formal methods to investigate the kinds of information processing problems a device is solving. When one uses formal methods to determine whether some device is capable of accepting or generating some grammar of interest, one is conducting an investigation at the computational level.
One famous example of such a computational analysis was provided by Bever, Fodor, and Garrett (1968). They asked whether a finite state automaton was capable of accepting expressions that were constructed from a particular artificial grammar. Expressions constructed from this grammar were built from only two symbols, a and b. Grammatical strings in this language were "mirror images," because the pattern used to generate expressions was b^N a b^N, where N is the number of bs on each side of the central a. Valid expressions generated from this grammar include a, bbbbabbbb, and bbabb. Expressions that cannot be generated from the grammar include ab, babb, bb, and bbbabb.
While this artificial grammar is very simple, it has one important property: it is recursive. That is, a simple context-free grammar can be defined to generate its potential expressions. This grammar consists of two rules, where Rule 1 is S → a, and Rule 2 is a → bab. A string is begun by using Rule 1 to generate an a. Rule 2 can then be applied to produce the string bab. If Rule 2 is applied recursively to the a at the centre of the string, then longer expressions will be produced that are always consistent with the pattern b^N a b^N.
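The recursion is easy to demonstrate. The following sketch applies Rule 2 repeatedly to the a introduced by Rule 1; because the string always contains exactly one a, each application rewrites the central symbol and lengthens the expression while preserving the b^N a b^N pattern.

# Generating b^N a b^N by applying Rule 2 (a -> bab) N times to the
# central a produced by Rule 1 (S -> a).
def generate(n):
    expression = "a"                                 # Rule 1: S -> a
    for _ in range(n):
        # The string contains exactly one a, so this rewrites the centre.
        expression = expression.replace("a", "bab")  # Rule 2: a -> bab
    return expression

print([generate(n) for n in range(4)])  # ['a', 'bab', 'bbabb', 'bbbabbb']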
Bever, Fodor, and Garrett (1968) proved that a finite state automaton is not capable of accepting the strings generated from this recursive grammar. Because a finite state machine can only move in one direction along the tape and cannot write to it, its only memory is its current state. With a finite number of states, it has no way of keeping track of an arbitrarily large number of bs read before the a, and so it cannot compare that count to the number of bs read after the a. Because it cannot return to earlier portions of the tape, it cannot deal with recursive languages that have embedded clausal structure.
Bever, Fodor, and Garrett (1968) used this result to conclude that associationism (and radical behaviourism) was not powerful enough to deal with the embedded clauses of natural human language. As a result, they argued that associationism should be abandoned as a theory of mind. The impact of this proof is measured by the lengthy responses to this argument by associationist memory researchers (Anderson & Bower, 1973; Paivio, 1986). We return to the implications of this argument when we discuss connectionist cognitive science in Chapter 4.
While finite state automata cannot accept the recursive grammar used by Bever, Fodor, and Garrett (1968), Turing machines can (Révész, 1983). Their ability to move in both directions along the tape provides them with a memory that enables them to match the number of leading bs in a string with the number of trailing bs.
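One classic way a Turing machine exploits this ability is to shuttle back and forth along the tape, marking off one leading b and one matching trailing b on each pass, and accepting if only the central a survives. The sketch below compresses that shuttling into a two-pointer scan over a Python list standing in for the tape; it illustrates the idea rather than transcribing any particular proof.

# Accepting b^N a b^N the way a Turing machine can: repeatedly cross off a
# matching pair of leading and trailing bs. Writing marks on the tape is
# the memory that a finite state automaton lacks.
def accepts(string):
    tape = list(string)
    left, right = 0, len(tape) - 1
    while left < right:
        if tape[left] != "b" or tape[right] != "b":
            return False
        tape[left] = tape[right] = "X"   # cross off one pair of bs
        left += 1
        right -= 1
    return left == right and tape[left] == "a"

print(accepts("bbabb"), accepts("bbbabb"))  # -> True False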
Modern linguistics has concluded that the structure of human language must be described by grammars that are recursive. Finite state automata are not powerful enough devices to accommodate grammars of this nature, but Turing machines are. This suggests that an information processing architecture that is sufficiently rich to explain human cognition must have the same power—must be able to answer the same set of questions—as do Turing machines. This is the essence of the physical symbol system hypothesis (Newell, 1980), which is discussed in more detail below. The Turing machine, as we saw in Chapter 2 and will discuss further below, is a universal machine, and classical cognitive science hypothesizes that "this notion of symbol system will prove adequate to all of the symbolic activity this physical universe of ours can exhibit, and in particular all the symbolic activities of the human mind" (Newell, 1980, p. 155).
Underdetermination and Innateness
The ability of a device to accept or generate a grammar is central to another computational level analysis of language (Gold, 1967). Gold performed a formal analysis of language learning which revealed a situation that is known as Gold’s paradox (Pinker, 1979). One solution to this paradox is to adopt a position that is characteristic of classical cognitive science, and which we have seen is consistent with its Cartesian roots. This position is that a good deal of the architecture of cognition is innate.
Gold (1967) was interested in the problem of how a system could learn the grammar of a language on the basis of a finite set of example expressions. He considered two different situations in which the learning system could be presented with expressions. In informant learning, the learner is presented with either valid or invalid expressions, and is also told about their validity, i.e., told whether they belong to the grammar or not. In text learning, the only expressions that are presented to the learner are grammatical.
Whether a learner is undergoing informant learning or text learning, Gold (1967) assumed that learning would proceed as a succession of presentations of expressions. After each expression was presented, the language learner would generate a hypothesized grammar. Gold proposed that each hypothesis could be described as being a Turing machine that would either accept the (hypothesized) grammar or generate it. In this formalization, the notion of “learning a language” has become “selecting a Turing machine that represents a grammar” (Osherson, Stob, & Weinstein, 1986).
According to Gold's (1967) algorithm, a language learner would have a current hypothesized grammar. When a new expression was presented to the learner, a test would be conducted to see if the current grammar could deal with the new expression. If the current grammar succeeded, then it was retained. If the current grammar failed, then a new grammar—a new Turing machine—would have to be selected.
Under this formalism, when can we say that a grammar has been learned? Gold defined language learning as the identification of the grammar in the limit. When a language is identified in the limit, this means that the current grammar being hypothesized by the learner does not change even as new expressions are encountered. Furthermore, it is expected that this state will occur after a finite number of expressions have been encountered during learning.
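Gold's procedure can be summarized as a simple loop. Everything in the sketch below is schematic: candidate_grammars stands in for an enumeration of Turing machines (assumed to contain a correct one), and consistent stands in for the test of whether a hypothesized grammar accommodates the expressions seen so far.

# A schematic version of Gold's procedure: keep the current hypothesized
# grammar until an expression falsifies it, then move to the next candidate.
# Identification in the limit means the yielded hypothesis eventually stops
# changing, after some finite number of presented expressions.
def learn(expressions, candidate_grammars, consistent):
    seen = []
    hypothesis = 0                        # index of the current candidate
    for expression in expressions:        # expressions arrive one at a time
        seen.append(expression)
        while not consistent(candidate_grammars[hypothesis], seen):
            hypothesis += 1               # current grammar failed: select anew
        yield candidate_grammars[hypothesis]

# Toy usage: candidate "grammars" are predicates over single expressions.
candidates = [lambda e: e == "a", lambda e: set(e) <= {"a", "b"}]
ok = lambda g, seen: all(g(e) for e in seen)
print([g is candidates[1] for g in learn(["a", "ba"], candidates, ok)])
# -> [False, True]: the hypothesis changes once, then is stable.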
In the previous section, we considered a computational analysis in which different kinds of computing devices were presented with the same grammar. Gold (1967) adopted an alternative approach: he kept the information processing constant—that is, he always studied the algorithm sketched above—but he varied the complexity of the grammar that was being learned, and he varied the conditions under which the grammar was presented, i.e., informant learning versus text learning.
In computer science, a formal description of any class of languages (human or otherwise) relates its complexity to the complexity of a computing device that could generate or accept it (Hopcroft & Ullman, 1979; Révész, 1983). This has resulted in a classification of grammars known as the Chomsky hierarchy (Chomsky, 1959a). In the Chomsky hierarchy, the simplest grammars are regular, and they can be accommodated by finite state automata. The next most complicated are context-free grammars, which can be processed by pushdown automata (a pushdown automaton is a finite state automaton augmented with an unbounded last-in, first-out stack memory). Next are the context-sensitive grammars, which are the domain of linear bounded automata (i.e., a device like a Turing machine, but whose ticker tape is bounded in length by the size of the input). The most complex grammars are the unrestricted grammars, which can only be dealt with by Turing machines.
Gold (1967) used formal methods to determine the conditions under which each class of grammars could be identified in the limit. He was able to show that text learning could only be used to acquire the simplest class of grammars. In contrast, Gold found that informant learning permitted context-sensitive and context-free grammars to be identified in the limit.
Gold’s (1967) research was conducted in a relatively obscure field of theoretical computer science. However, Steven Pinker brought it to the attention of cognitive science more than a decade later (Pinker, 1979), where it sparked a great deal of interest and research. This is because Gold’s computational analysis revealed a paradox of particular interest to researchers who studied how human children acquire language.
Gold's (1967) proofs indicated that informant learning was powerful enough that a complex grammar could be identified in the limit. Such identification was not possible with text learning. Gold's paradox emerged because research strongly suggests that children are text learners, not informant learners (Pinker, 1979, 1994, 1999). It is estimated that 99.93 percent of the language to which children are exposed is grammatical (Newport, Gleitman, & Gleitman, 1977). Furthermore, whenever feedback about language grammaticality is provided to children, it is not systematic enough to be used to select a grammar (Marcus, 1993).
Gold’s paradox is that while he proved that grammars complex enough to model human language could not be text learned, children learn such grammars—and do so via text learning! How is this possible?
Gold’s paradox is an example of a problem of underdetermination. In a problem of underdetermination, the information available from the environment is not sufficient to support a unique interpretation or inference (Dawson, 1991). For instance, Gold (1967) proved that a finite number of expressions presented during text learning were not sufficient to uniquely determine the grammar from which these expressions were generated, provided that the grammar was more complicated than a regular grammar.
There are many approaches available for solving problems of underdetermination. One that is most characteristic of classical cognitive science is to simplify the learning situation by assuming that some of the to-be-learned information is already present because it is innate. For instance, classical cognitive scientists assume that much of the grammar of a human language is innately available before language learning begins.
The child has an innate theory of potential structural descriptions that is sufficiently rich and fully developed so that he is able to determine, from a real situation in which a signal occurs, which structural descriptions may be appropriate to this signal. (Chomsky, 1965, p. 32)
If the existence of an innate, universal base grammar—a grammar used to create phrase markers—is assumed, then a generative grammar of the type proposed by Chomsky can be identified in the limit (Wexler & Culicover, 1980). This is because learning the language is simplified to the task of learning the set of transformations that can be applied to phrase markers. More modern theories of transformational grammars have reduced the number of transformations to one, and have described language learning as the setting of a finite number of parameters that determine grammatical structure (Cook & Newson, 1996). Again, these grammars can be identified in the limit on the basis of very simple input expressions (Lightfoot, 1989). Such proofs are critical to cognitive science and to linguistics, because if a theory of language is to be explanatorily adequate, then it must account for how language is acquired (Chomsky, 1965).
Rationalist philosophers assumed that some human knowledge must be innate. Empiricist philosophers reacted against this view, holding that experience is the only source of knowledge. For the empiricists, the mind was a tabula rasa, waiting to be written upon by the world. Classical cognitive scientists are comfortable with the notion of innate knowledge, and have used problems of underdetermination to argue against the modern tabula rasa assumed by connectionist cognitive scientists (Pinker, 2002, p. 78): "The connectionists, of course, do not believe in a blank slate, but they do believe in the closest mechanistic equivalent, a general-purpose learning device." The role of innateness is an issue that separates classical cognitive science from connectionism, and will be encountered again when connectionism is explored in Chapter 4.
Physical Symbol Systems
Special-purpose logic machines had been developed by philosophers in the late nineteenth century (Buck & Hunka, 1999; Jevons, 1870; Marquand, 1885). However, abstract descriptions of how devices could perform general-purpose symbol manipulation did not arise until the 1930s (Post, 1936; Turing, 1936). The basic properties laid out in these mathematical theories of computation define what is now known as a physical symbol system (Newell, 1980; Newell & Simon, 1976). The concept of a physical symbol system defines "a broad class of systems that is capable of having and manipulating symbols, yet is also realizable within our physical universe" (Newell, 1980, p. 136).
A physical symbol system operates on a finite set of physical tokens called symbols. These are components of a larger physical entity called a symbol structure or a symbolic expression. The system also includes a set of operators that can create, modify, duplicate, or destroy symbols, as well as some sort of control that selects, at any given time, which operation to apply. A physical symbol system produces, over time, an evolving or changing collection of expressions. These expressions represent or designate entities in the world (Newell, 1980; Newell & Simon, 1976). As a result, the symbol manipulations performed by such a device permit new meanings to be derived, in the same way as new knowledge is arrived at in the proofs discovered by logicians and mathematicians (Davis & Hersh, 1981).
The abstract theories that describe physical symbol systems were not developed into working artifacts until nearly the midpoint of the twentieth century. “Our deepest insights into information processing were achieved in the thirties, before modern computers came into being. It is a tribute to the genius of Alan Turing” (Newell & Simon, 1976, p. 117). The first digital computer was the Z3, invented in Germany in 1941 by Konrad Zuse (1993). In the United States, the earliest computers were University of Pennsylvania’s ENIAC (created 1943–1946) and EDVAC (created 1945–1950), Harvard’s MARK I (created 1944), and Princeton’s IAS or von Neumann computer (created 1946–1951) (Burks, 2002; Cohen, 1999). The earliest British computer was University of Manchester’s “Baby,” the small-scale experimental machine (SSEM) that was first activated in June, 1948 (Lavington, 1980).
Although specific details vary from machine to machine, all digital computers share three general characteristics (von Neumann, 1958). First, they have a memory for the storage of symbolic structures. In what is now known as the von Neumann architecture, this is a random access memory (RAM) in which any memory location can be immediately accessed—without having to scroll through other locations, as in a Turing machine—by using the memory’s address. Second, they have a mechanism separate from memory that is responsible for the operations that manipulate stored symbolic structures. Third, they have a controller for determining which operation to perform at any given time. In the von Neumann architecture, the control mechanism imposes serial processing, because only one operation will be performed at a time.
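A toy fetch-execute loop shows all three characteristics at once. The single-instruction machine below is invented for illustration: program and data share one randomly accessible memory, a separate mechanism performs the symbol manipulation, and the controller executes exactly one operation per cycle.

# A toy von Neumann machine: instructions and data share one randomly
# accessible memory; a controller fetches and executes one instruction
# at a time (serial processing).
def run(memory):
    pc = 0                                       # controller: program counter
    while memory[pc][0] != "HALT":
        op, a, b, dest = memory[pc]              # fetch from any address directly
        if op == "ADD":
            memory[dest] = memory[a] + memory[b] # the separate processing mechanism
        pc += 1                                  # exactly one operation per cycle
    return memory

# Addresses 0-1 hold the program; addresses 3-5 hold data.
memory = [("ADD", 3, 4, 5), ("HALT", 0, 0, 0), None, 10, 32, None]
print(run(memory)[5])  # -> 42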
Perhaps the earliest example of serial control is the nineteenth-century punched cards used to govern the patterns in silk that were woven by Joseph Marie Jacquard’s loom (Essinger, 2004). During weaving, at each pass of the loom’s shuttle, holes in a card permitted some thread-controlling rods to be moved. When a rod moved, the thread that it controlled was raised; this caused the thread to be visible in that row of the pattern. A sequence of cards was created by tying cards together end to end. When this “chain” was advanced to the next card, the rods would be altered to create the appropriate appearance for the silk pattern’s next row.
The use of punched cards turned the Jacquard loom into a kind of universal machine: one changed the pattern being produced not by changing the loom, but simply by loading it with a different set of punched cards. Thus not only did Jacquard invent a new loom, but he also invented the idea of using a program to control the actions of a machine. Jacquard’s program was, of course, a sequence of punched cards. Their potential for being applied to computing devices in general was recognized by computer pioneer Charles Babbage, who was inspired by Jacquard’s invention (Essinger, 2004).
By the late 1950s, it became conventional to load the program—then known as the “short code” (von Neumann, 1958)—into memory. This is called memory-stored control; the first modern computer to use this type of control was Manchester’s “Baby” (Lavington, 1980). In Chapter 2 we saw an example of this type of control in the universal Turing machine, whose ticker tape memory holds both the data to be manipulated and the description of a special-purpose Turing machine that will do the manipulating. The universal Turing machine uses the description to permit it to pretend to be the specific machine that is defined on its tape (Hodges, 1983).
In a physical symbol system that employs memory-stored control, internal characteristics will vary over time. However, the time scale of these changes will not be uniform (Newell, 1990). The data that is stored in memory will likely be changed rapidly. However, some stored information—in particular, the short code, or what cognitive scientists would call the virtual machine (Pylyshyn, 1984, 1991), that controls processing would be expected to be more persistent. Memory-stored control in turn chooses which architectural operation to invoke at any given time. In a digital computer, the architecture would not be expected to vary over time at all because it is fixed, that is, literally built into the computing device.
The different characteristics of a physical symbol system provide a direct link back to the multiple levels of investigation that were the topic of Chapter 2. When such a device operates, it is either computing some function or solving some information processing problem. Describing this aspect of the system is the role of a computational analysis. The computation being carried out is controlled by an algorithm: the program stored in memory. Accounting for this aspect of the system is the aim of an algorithmic analysis. Ultimately, a stored program results in the device executing a primitive operation on a symbolic expression stored in memory. Identifying the primitive processes and symbols is the domain of an architectural analysis. Because the device is a physical symbol system, primitive processes and symbols must be physically realized. Detailing the physical nature of these components is the goal of an implementational analysis.
The invention of the digital computer was necessary for the advent of classical cognitive science. First, computers are general symbol manipulators. Their existence demonstrated that finite devices could generate an infinite potential of symbolic behaviour, and thus supported a materialist alternative to Cartesian dualism. Second, the characteristics of computers, and of the abstract theories of computation that led to their development, in turn resulted in the general notion of physical symbol system, and the multiple levels of investigation that such systems require.
The final link in the chain connecting computers to classical cognitive science is the logicist assumption that cognition is a rule-governed symbol manipulation of the sort that a physical symbol system is designed to carry out. This produces the physical symbol system hypothesis: “the necessary and sufficient condition for a physical system to exhibit general intelligent action is that it be a physical symbol system” (Newell, 1980, p. 170). By necessary, Newell meant that if an artifact exhibits general intelligence, then it must be an instance of a physical symbol system. By sufficient, Newell claimed that any device that is a physical symbol system can be configured to exhibit general intelligent action—that is, he claimed the plausibility of machine intelligence, a position that Descartes denied.
What did Newell (1980) mean by general intelligent action? He meant,
the same scope of intelligence seen in human action: that in real situations behavior appropriate to the ends of the system and adaptive to the demands of the environment can occur, within some physical limits. (Newell, 1980, p. 170)
In other words, human cognition must be the product of a physical symbol system. Thus human cognition must be explained by adopting all of the different levels of investigation that were described in Chapter 2.
Componentiality, Computability, and Cognition
In 1840, computer pioneer Charles Babbage displayed a portrait of loom inventor Joseph Marie Jacquard for the guests at the famous parties in his home (Essinger, 2004). The small portrait was incredibly detailed. Babbage took great pleasure in the fact that most people who first saw the portrait mistook it to be an engraving. It was instead an intricate fabric woven on a loom of the type that Jacquard himself invented.
The amazing detail of the portrait was the result of its being composed of 24,000 rows of weaving. In a Jacquard loom, punched cards determined which threads would be raised (and therefore visible) for each row in the fabric. Each thread in the loom was attached to a rod; a hole in the punched card permitted a rod to move, raising its thread. The complexity of the Jacquard portrait was produced by using 24,000 punched cards to control the loom.
Though Jacquard’s portrait was impressively complicated, the process used to create it was mechanical, simple, repetitive—and local. With each pass of the loom’s shuttle, weaving a set of threads together into a row, the only function of a punched card was to manipulate rods. In other words, each punched card only controlled small components of the overall pattern. While the entire set of punched cards represented the total pattern to be produced, this total pattern was neither contained in, nor required by, an individual punched card as it manipulated the loom’s rods. The portrait of Jacquard was a global pattern that emerged from a long sequence of simple, local operations on the pattern’s components.
In the Jacquard loom, punched cards control processes that operate on local components of the "expression" being woven. The same is true of physical symbol systems. Physical symbol systems are finite devices that are capable of producing an infinite variety of potential behaviour. This is possible because the operations of a physical symbol system are recursive. However, this explanation is not complete. In addition, the rules of a physical symbol system are local or componential, in the sense that they act on local components of an expression, not on the expression as a whole.
For instance, one definition of a language is the set of all of its grammatical expressions (Chomsky, 1957). Given this definition, it is logically possible to treat each expression in the set as an unanalyzed whole to which some operation could be applied. This is one way to interpret a behaviourist theory of language (Skinner, 1957): each expression in the set is a holistic verbal behaviour whose likelihood of being produced is a result of reinforcement and stimulus control of the expression as a whole.
However, physical symbol systems do not treat expressions as unanalyzed wholes. Instead, the recursive rules of a physical symbol system are sensitive to the atomic symbols from which expressions are composed. We saw this previously in the example of context-free grammars that were used to construct the phrase markers of Figures 3-6 and 3-7. The rules in such grammars do not process whole phrase markers, but instead operate on the different components (e.g., nodes like S, N, VP) from which a complete phrase marker is constructed.
The advantage of operating on symbolic components, and not on whole expressions, is that one can use a sequence of very basic operations—writing, changing, erasing, or copying a symbol—to create an overall effect of far greater scope than might be expected. As Henry Ford said, nothing is particularly hard if you divide it into small jobs. We saw the importance of this in Chapter 2 when we discussed Leibniz’ mill (Leibniz, 1902), the Chinese room (Searle, 1980), and the discharging of homunculi (Dennett, 1978). In a materialist account of cognition, thought is produced by a set of apparently simple, mindless, unintelligent actions—the primitives that make up the architecture.
The small jobs carried out by a physical symbol system reveal that such a system has a dual nature (Haugeland, 1985). On the one hand, symbol manipulations are purely syntactic—they depend upon identifying a symbol’s type, and not upon semantically interpreting what the symbol stands for. On the other hand, a physical symbol system’s manipulations are semantic—symbol manipulations preserve meanings, and can be used to derive new, sensible interpretations.
Interpreted formal tokens lead two lives: syntactical lives, in which they are meaningless markers, moved according to the rules of some self-contained game; and semantic lives, in which they have meanings and symbolic relations to the outside world. (Haugeland, 1985, p. 100)
Let us briefly consider these two lives. First, we have noted that the rules of a physical symbol system operate on symbolic components of a whole expression. For this to occur, all that is required is that a rule identifies a particular physical entity as being a token or symbol of a particular type. If the symbol is of the right type, then the rule can act upon it in some prescribed way.
For example, imagine a computer program that is playing chess. For this program, the “whole expression” is the total arrangement of game pieces on the chess board at any given time. The program analyzes this expression into its components: individual tokens on individual squares of the board. The physical characteristics of each component token can then be used to identify to what symbol class it belongs: queen, knight, bishop, and so on. Once a token has been classified in this way, appropriate operations can be applied to it. If a game piece has been identified as being a “knight,” then only knight moves can be applied to it—the operations that would move the piece like a bishop cannot be applied, because the token has not been identified as being of the type “bishop.”
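The point can be made concrete with a fragment of such a program. The board encoding and move generator below are hypothetical simplifications; what matters is that the knight-move operation is licensed purely by a token's identified type, never by what the token means.

# Rules apply to a token because of its identified type. Knight moves can
# be applied to a piece only if it has been classified as a "knight".
KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(square):
    col, row = square
    return [(col + dc, row + dr) for dc, dr in KNIGHT_OFFSETS
            if 0 <= col + dc < 8 and 0 <= row + dr < 8]

board = {(1, 0): "knight", (2, 0): "bishop"}   # token -> identified type

for square, piece_type in board.items():
    if piece_type == "knight":                 # a purely syntactic test of type
        print(square, "->", knight_moves(square))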
Similar syntactic operations are at the heart of a computing device like a Turing machine. When the machine head reads a cell on the ticker tape (another example of componentiality!), it uses the physical markings on the tape to determine that the cell holds a symbol of a particular type. This identification—in conjunction with the current physical state of the machine head—is sufficient to determine which instruction to execute.
To summarize, physical symbol systems are syntactic in the sense that their rules are applied to symbols that have been identified as being of a particular type on the basis of their physical shape or form. Because the shape or form of symbols is all that matters for the operations to be successfully carried out, it is natural to call such systems formal. Formal operations are sensitive to the shape or form of individual symbols, and are not sensitive to the semantic content associated with the symbols.
However, it is still the case that formal systems can produce meaningful expressions. The punched cards of a Jacquard loom only manipulate the positions of thread-controlling rods. Yet these operations can produce an intricate woven pattern such as Jacquard’s portrait. The machine head of a Turing machine reads and writes individual symbols on a ticker tape. Yet these operations permit this device to provide answers to any computable question. How is it possible for formal systems to preserve or create semantic content?
In order for the operations of a physical symbol system to be meaningful, two properties must be true. First, the symbolic structures operated on must have semantic content. That is, the expressions being manipulated must have some relationship to states of the external world that permits the expressions to represent these states. This relationship is a basic property of a physical symbol system, and is called designation (Newell, 1980; Newell & Simon, 1976). “An expression designates an object if, given the expression, the system can either affect the object itself or behave in ways dependent on the object” (Newell & Simon, 1976, p. 116).
Explaining designation is a controversial issue in cognitive science and philosophy. There are many different proposals for how designation, which is also called the problem of representation (Cummins, 1989) or the symbol grounding problem (Harnad, 1990), occurs. The physical symbol system hypothesis does not propose a solution, but necessarily assumes that such a solution exists. This assumption is plausible to the extent that computers serve as existence proofs that designation is possible.
The second semantic property of a physical symbol system is that not only are individual expressions meaningful (via designation), but the evolution of expressions—the rule-governed transition from one expression to another—is also meaningful. That is, when some operation modifies an expression, this modification is not only syntactically correct, but it will also make sense semantically. As rules modify symbolic structures, they preserve meanings in the domain that the symbolic structures designate, even though the rules themselves are purely formal. The application of a rule should not produce an expression that is meaningless. This leads to what is known as the formalist's motto: "If you take care of the syntax, then the semantics will take care of itself" (Haugeland, 1985, p. 106).
The assumption that applying a physical symbol system’s rules preserves meaning is a natural consequence of classical cognitive science’s commitment to logicism. According to logicism, thinking is analogous to using formal methods to derive a proof, as is done in logic or mathematics. In these formal systems, when one applies rules of the system to true expressions (e.g., the axioms of a system of mathematics which by definition are assumed to be true [Davis & Hersh, 1981]), the resulting expressions must also be true. An expression’s truth is a critical component of its semantic content.
It is necessary, then, for the operations of a formal system to be defined in such a way that 1) they only detect the form of component symbols, and 2) they are constrained in such a way that manipulations of expressions are meaningful (e.g., truth preserving). This results in classical cognitive science’s interest in universal machines.
A universal machine is a device that is maximally flexible in two senses (Newell, 1980). First, its behaviour is responsive to its inputs; a change in inputs will be capable of producing a change in behaviour. Second, a universal machine must be able to compute the widest variety of input-output functions that is possible. This "widest variety" is known as the set of computable functions.
A device that can compute every possible input-output function does not exist. The Turing machine was invented and used to prove that there exist some functions that are not computable (Turing, 1936). However, the subset of functions that are computable is large and important:
It can be proved mathematically that there are infinitely more functions than programs. Therefore, for most functions there is no corresponding program that can compute them. . . . Fortunately, almost all these noncomputable functions are useless, and virtually all the functions we might want to compute are computable. (Hillis, 1998, p. 71)
A major discovery of the twentieth century was that a number of seemingly different symbol manipulators were all identical in the sense that they all could compute the same maximal class of input-output pairings (i.e., the computable functions). Because of this discovery, these different proposals are all grouped together into the class “universal machine,” which is sometimes called the “effectively computable procedures.” This class is “a large zoo of different formulations” that includes “Turing machines, recursive functions, Post canonical systems, Markov algorithms, all varieties of general purpose digital computers, [and] most programming languages” (Newell, 1980, p. 150).
Newell (1980) proved that a generic physical symbol system was also a universal machine. This proof, coupled with the physical symbol system hypothesis, leads to a general assumption in classical cognitive science: cognition is computation, the brain implements a universal machine, and the products of human cognition belong to the class of computable functions.
The claim that human cognition is produced by a physical symbol system is a scientific hypothesis. Evaluating the validity of this hypothesis requires fleshing out many additional details. What is the organization of the program that defines the physical symbol system for cognition (Newell & Simon, 1972)? In particular, what kinds of symbols and expressions are being manipulated? What primitive operations are responsible for performing symbol manipulation? How are these operations controlled? Classical cognitive science is in the business of fleshing out these details, being guided at all times by the physical symbol system hypothesis.
The Intentional Stance
According to the formalist's motto (Haugeland, 1985), by taking care of the syntax, one also takes care of the semantics. The reason for this is that, like the rules in a logical system, the syntactic operations of a physical symbol system are constrained to preserve meaning. The symbolic expressions that a physical symbol system evolves will have interpretable designations.
We have seen that the structures a physical symbol system manipulates have two different lives, syntactic and semantic. Because of this, there is a corollary to the formalist’s motto, which might be called the semanticist’s motto: “If you understand the semantics, then you can take the syntax for granted.” That is, if you have a semantic interpretation of a physical symbol system’s symbolic expressions, then you can use this semantic interpretation to predict the future behaviour of the system—the future meanings that it will generate—without having to say anything about the underlying physical mechanisms that work to preserve the semantics.
We have seen that one of the fundamental properties of a physical symbol system is designation, which is a relation between the system and the world that provides interpretations to its symbolic expressions (Newell, 1980; Newell & Simon, 1976). More generally, it could be said that symbolic expressions are intentional—they are about some state of affairs in the world. This notion of intentionality is rooted in the philosophy of Franz Brentano (Brentano, 1995). Brentano used intentionality to distinguish the mental from the physical: “We found that the intentional in-existence, the reference to something as an object, is a distinguishing characteristic of all mental phenomena. No physical phenomenon exhibits anything similar” (p. 97).
To assume that human cognition is the product of a physical symbol system is to also assume that mental states are intentional in Brentano’s sense. In accord with the semanticist’s motto, the intentionality of mental states can be used to generate a theory of other people, a theory that can be used to predict the behaviour of another person. This is accomplished by adopting what is known as the intentional stance (Dennett, 1987).
The intentional stance uses the presumed contents of someone’s mental states to predict their behaviour. It begins by assuming that another person possesses intentional mental states such as beliefs, desires, or goals. As a result, the intentional stance involves describing other people with propositional attitudes.
A propositional attitude is a statement that relates a person to a proposition or statement of fact. For example, if I said to someone “Charles Ives’ music anticipated minimalism,” they could describe me with the propositional attitude “Dawson believes that Charles Ives’ music anticipated minimalism.” Propositional attitudes are of interest to philosophy because they raise a number of interesting logical problems. For example, the propositional attitude describing me could be true, but at the same time its propositional component could be false (for instance, if Ives’ music bore no relationship to minimalism at all!). Propositional attitudes are found everywhere in our language, suggesting that a key element of our understanding of others is the use of the intentional stance.
In addition to describing other people with propositional attitudes, the intentional stance requires that other people are assumed to be rational. To assume that a person is rational is to assume that there are meaningful relationships between the contents of mental states and behaviour. To actually use the contents of mental states to predict behaviour—assuming rationality—is to adopt the intentional stance.
For instance, given the propositional attitudes “Dawson believes that Charles Ives’ music anticipated minimalism” and “Dawson desires to only listen to early minimalist music,” and assuming that Dawson’s behaviour rationally follows from the contents of his intentional states, one might predict that “Dawson often listens to Ives’ compositions.” The assumption of rationality, “in combination with home truths about our needs, capacities and typical circumstances, generates both an intentional interpretation of us as believers and desirers and actual predictions of behavior in great profusion” (Dennett, 1987, p. 50).
Adopting the intentional stance is also known as employing commonsense psychology or folk psychology. The status of folk psychology, and of its relation to cognitive science, provides a source of continual controversy (Christensen & Turner, 1993; Churchland, 1988; Fletcher, 1995; Greenwood, 1991; Haselager, 1997; Ratcliffe, 2007; Stich, 1983). Is folk psychology truly predictive? If so, should the theories of cognitive science involve lawful operations on propositional attitudes? If not, should folk psychology be expunged from cognitive science? Positions on these issues range from eliminative materialism’s argument to erase folk-psychological terms from cognitive science (Churchland, 1988), to experimental philosophy’s position that folk concepts are valid and informative, and therefore should be empirically examined to supplant philosophical concepts that have been developed from a purely theoretical or analytic tradition (French & Wettstein, 2007; Knobe & Nichols, 2008).
In form, at least, the intentional stance or folk psychology has the appearance of a scientific theory. The intentional stance involves using a set of general, abstract laws (e.g., the principle of rationality) to predict future events. This brings it into contact with an important view of cognitive development known as the theory-theory (Gopnik & Meltzoff, 1997; Gopnik, Meltzoff, & Kuhl, 1999; Gopnik & Wellman, 1992; Wellman, 1990). According to the theory-theory, children come to understand the world by adopting and modifying theories about its regularities. That is, the child develops intuitive, representational theories in a fashion that is analogous to a scientist using observations to construct a scientific theory. One of the theories that a child develops is a theory of mind that begins to emerge when a child is three years old (Wellman, 1990).
The scientific structure of the intentional stance should be of no surprise, because this is another example of the logicism that serves as one of the foundations of classical cognitive science. If cognition really is the product of a physical symbol system, if intelligence really does emerge from the manipulation of intentional representations according to the rules of some mental logic, then the semanticist’s motto should hold. A principle of rationality, operating on propositional attitudes, should offer real predictive power.
However, the logicism underlying the intentional stance leads to a serious problem for classical cognitive science. This is because a wealth of experiments has shown that human reasoners deviate from principles of logic or rationality (Hastie, 2001; Tversky & Kahneman, 1974; Wason, 1966; Wason & Johnson-Laird, 1972). “A purely formal, or syntactic, approach to [reasoning] may suffer from severe limitations” (Wason & Johnson-Laird, 1972, p. 244). This offers a severe challenge to classical cognitive science’s adherence to logicism: if thinking is employing mental logic, then how is it possible for thinkers to be illogical?
It is not surprising that many attempts have been made to preserve logicism by providing principled accounts of deviations from rationality. Some of these attempts have occurred at the computational level and have involved modifying the definition of rationality by adopting a different theory about the nature of mental logic. Such attempts include rational analysis (Chater & Oaksford, 1999) and probabilistic theories (Oaksford & Chater, 1998, 2001). Other, not unrelated approaches involve assuming that ideal mental logics are constrained by algorithmic and architectural-level realities, such as limited memory and real-time constraints. The notion of bounded rationality is a prototypical example of this approach (Chase, Hertwig, & Gigerenzer, 1998; Evans, 2003; Hastie, 2001; Rubinstein, 1998; Simon, Egidi, & Marris, 1995).
The attempts to preserve logicism reflect the importance of the intentional stance, and the semanticist’s motto, to cognitive science. Classical cognitive science is committed to the importance of a cognitive vocabulary, a vocabulary that invokes the contents of mental states (Pylyshyn, 1984).
Structure and Process
The physical symbol systems of classical cognitive science make a sharp distinction between symbols and the rules that manipulate them. This is called the structure/process distinction. For instance, in a Turing machine the symbols reside in one medium (the ticker tape) that is separate from another medium (the machine head) that houses the operators for manipulating symbols. Whatever the specific nature of cognition's universal machine, if it is a classical physical symbol system, then it will exhibit the structure/process distinction.
In general, what can be said about the symbols that define the structure that is manipulated by a physical symbol system? It has been argued that cognitive science’s notion of symbol is ill defined (Searle, 1992). Perhaps this is because apart from the need that symbols be physically distinctive, so that they can be identified as being tokens of a particular type, symbols do not have definitive properties. Symbols are arbitrary, in the sense that anything can serve as a symbol.
The arbitrary nature of symbols is another example of the property of multiple realization that was discussed in Chapter 2.
What we had no right to expect is the immense variety of physical ways to realize any fixed symbol system. What the generations of digital technology have demonstrated is that an indefinitely wide array of physical phenomena can be used to develop a digital technology to produce a logical level of essentially identical character. (Newell, 1980, p. 174)
This is why universal machines can be built out of gears (Swade, 1993), LEGO (Agulló et al., 2003), electric train sets (Stewart, 1994), hydraulic valves, or silicon chips (Hillis, 1998).
The arbitrariness of symbols, and the multiple realization of universal machines, is rooted in the relative notion of universal machine. By definition, a machine is universal if it can simulate any other universal machine (Newell, 1980). Indeed, this is the basic idea that justifies the use of computer simulations to investigate cognitive and neural functioning (Dutton & Starbuck, 1971; Gluck & Myers, 2001; Lewandowsky, 1993; Newell & Simon, 1961; O’Reilly & Munakata, 2000).
For any class of machines, defined by some way of describing its operational structure, a machine of that class is defined to be universal if it can behave like any machine of the class. This puts simulation at the center of the stage. (Newell, 1980, p. 149)
If a universal machine can be simulated by any other, and if cognition is the product of a universal machine, then why should we be concerned about the specific details of the information processing architecture for cognition? The reason for this concern is that the internal aspects of an architecture—the relations between a particular structure-process pairing—are not arbitrary. The nature of a particular structure is such that it permits some, but not all, processes to be easily applied. Therefore some input-output functions will be easier to compute than others because of the relationship between structure and process. Newell and Simon (1972, p. 803) called these second-order effects.
Consider, for example, one kind of representation: a table of numbers, such as Table 3-1, which provides the distances in kilometres between pairs of cities in Alberta (Dawson, Boechler, & Valsangkar-Smyth, 2000). One operation that can easily be applied to symbols organized in such a fashion is table lookup. For instance, perhaps I am interested in knowing the distance that I would travel if I drove from Edmonton to Fort McMurray. Applying table lookup to Table 3-1, by looking for the number at the intersection of the Edmonton row and the Fort McMurray column, quickly informs me that the distance is 439 kilometres. This is because the tabular form of this information makes distances between places explicit, so that they can be "read off of" the representation in a seemingly effortless manner.
Other information cannot be so easily gleaned from a tabular representation. For instance, perhaps I am interested in determining the compass direction that points from Edmonton to Fort McMurray. The table does not make this information explicit—directions between cities cannot be simply read off of Table 3-1.
Table 3-1. Distances in kilometres between cities in Alberta, Canada.
However, this does not mean that the table does not contain information about direction. Distance-like data of the sort provided by Table 3-1 can be used as input to multidimensional scaling (MDS), a statistical technique related to factor analysis (Romney, Shepard, & Nerlove, 1972; Shepard, Romney, & Nerlove, 1972). This statistical analysis converts the table of distances into a map-like representation of the objects that would produce the set of distances in the table. Dawson et al. (2000) performed such an analysis on the Table 3-1 data and obtained the map that is given in Figure 3-10. This map makes the relative spatial locations of the cities obvious; it could be used to simply "read off" compass directions between pairs of places.
Figure 3-10. Results of applying MDS to Table 3-1.
"Reading off" information from a representation intuitively means accessing this information easily—by using a small number of primitive operations. If this is not possible, then information might still be accessed by applying a larger number of operations, but this will take more time. The ease of accessing information is a result of the relationship between structure and process.
The structure-process relationship, producing second-order effects, underscores the value of using relative complexity evidence, a notion that was introduced in Chapter 2. Imagine that a physical symbol system uses a tabular representation of distances. Then we would expect it to compute functions involving distance very quickly, but it would be much slower to answer questions about direction. In contrast, if the device uses a map-like representation, then we would expect it to answer questions about direction quickly, but take longer to answer questions about distance (because, for instance, measuring operations would have to be invoked).
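A sketch makes this second-order effect visible. The 439-kilometre entry comes from the text above; the coordinates are made up for illustration and are not those of Figure 3-10. With the tabular structure, distance is a single lookup while direction must be derived; with the map-like structure, the situation reverses.

import math

# Tabular structure: distances are explicit, read off in one operation.
distances = {("Edmonton", "Fort McMurray"): 439}

def distance_from_table(a, b):
    return distances[(a, b)]               # one primitive operation

# Map-like structure: coordinates make direction easy, distance derived.
coords = {"Edmonton": (0.0, 0.0), "Fort McMurray": (1.8, 3.5)}  # made up

def bearing_from_map(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.degrees(math.atan2(x2 - x1, y2 - y1))  # compass-style angle

def distance_from_map(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x2 - x1, y2 - y1)    # must be computed, not read off

print(distance_from_table("Edmonton", "Fort McMurray"))      # fast: 439
print(round(bearing_from_map("Edmonton", "Fort McMurray")))  # easy from the map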
In summary, while structures are arbitrary, structure-process relations are not. They produce second-order regularities that can affect such measures as relative complexity evidence. Using such measures to investigate structure-process relations provides key information about a system’s algorithms and architecture.
The physical symbol system hypothesis defines classical cognitive science. This school of thought can be viewed as the modern derivative of Cartesian philosophy. It views cognition as computation, where computation is the rule-governed manipulation of symbols. Thus thinking and reasoning are viewed as the result of performing something akin to logical or mathematical inference. A great deal of this computational apparatus must be innate.
However, classical cognitive science crucially departs from Cartesian philosophy by abandoning dualism. Classical cognitive science instead adopts a materialist position that mechanizes the mind. The technical notion of computation is the application of a finite set of recursive rules to a finite set of primitives in order to evolve a set of finite symbolic structures or expressions. This technical definition of computation is beyond the capabilities of some devices, such as finite state automata, but can be accomplished by universal machines such as Turing machines or electronic computers. The claim that cognition is the product of a device belonging to the same class of artifacts as Turing machines and digital computers is the essence of the physical symbol system hypothesis, and the foundation of classical cognitive science.
Since the invention of the digital computer, scholars have seriously considered the possibility that the brain was also a computer of this type. For instance, the all-or-none nature of a neuron’s action potential has suggested that the brain is also digital in nature (von Neumann, 1958). However, von Neumann went on to claim that the small size and slow speed of neurons, in comparison to electronic components, suggested that the brain would have a different architecture than an electronic computer. For instance, von Neumann speculated that the brain’s architecture would be far more parallel in nature.
Von Neumann’s (1958) speculations raise another key issue. While classical cognitive scientists are confident that brains belong to the same class as Turing machines and digital computers (i.e., all are physical symbol systems), they do not expect the brain to have the same architecture. If the brain is a physical symbol system, then what might its architecture be like?
Many classical cognitive scientists believe that the architecture of cognition is some kind of production system. The model of production system architecture was invented by Newell and Simon (Newell, 1973; Newell & Simon, 1961, 1972) and has been used to simulate many psychological phenomena (Anderson, 1983; Anderson et al., 2004; Anderson & Matessa, 1997; Meyer et al., 2001; Meyer & Kieras, 1997a, 1997b; Newell, 1990; Newell & Simon, 1972). Production systems have a number of interesting properties, including a distinctive mix of parallel and serial processing.
A production system is a general-purpose symbol manipulator (Anderson, 1983; Newell, 1973; Newell & Simon, 1972). Like other physical symbol systems, production systems exhibit a marked distinction between symbolic expressions and the rules for manipulating them. They include a working memory that is used to store one or more symbolic structures, where a symbolic structure is an expression that is created by combining a set of atomic symbols. In some production systems (e.g., Anderson, 1983) a long-term memory, which also stores expressions, is present as well. The working memory of a production system is analogous to the ticker tape of a Turing machine or to the random access memory of a von Neumann computer.
The process component of a production system is a finite set of symbol-manipulating rules that are called productions. Each production is a single rule that pairs a triggering condition with a resulting action. A production works by scanning the expressions in working memory for a pattern that matches its condition. If such a match is found, then the production takes control of the memory and performs its action. A production’s action is some sort of symbol manipulation—adding, deleting, copying, or moving symbols or expressions in the working memory.
A typical production system is a parallel processor in the sense that all of its productions search working memory simultaneously for their triggering patterns. However, it is a serial processor—like a Turing machine or a digital computer— when actions are performed to manipulate the expressions in working memory. This is because in most production systems only one production is allowed to operate on memory at any given time. That is, when one production finds its triggering condition, it takes control for a moment, disabling all of the other productions. The controlling production manipulates the symbols in memory, and then releases its control, which causes the parallel scan of working memory to recommence.
We have briefly described two characteristics, structure and process, that make production systems examples of physical symbol systems. The third characteristic, control, reveals some additional interesting properties of production systems.
On the one hand, stigmergy is used to control a production system, that is, to choose which production acts at any given time. Stigmergic control occurs when different agents (in this case, productions) do not directly communicate with each other, but conduct indirect communication by modifying a shared environment (Theraulaz & Bonabeau, 1999). Stigmergy has been used to explain how a colony of social insects might coordinate their actions to create a nest (Downing & Jeanne, 1988; Karsai, 1999). The changing structure of the nest elicits different nest-building behaviours; the nest itself controls its own construction. When one insect adds a new piece to the nest, this will change the later behaviour of other insects without any direct communication occurring.
Production system control is stigmergic if the working memory is viewed as being analogous to the insect nest. The current state of the memory causes a particular production to act. This changes the contents of the memory, which in turn can result in a different production being selected during the next cycle of the architecture.
On the other hand, production system control is usually not completely stigmergic. This is because the stigmergic relationship between working memory and productions is loose enough to produce situations in which conflicts occur. Examples of this type of situation include instances in which more than one production finds its triggering pattern at the same time, or when one production finds its triggering condition present at more than one location in memory at the same time. Such situations must be dealt with by additional control mechanisms. For instance, priorities might be assigned to productions so that in a case where two or more productions were in conflict, only the production with the highest priority would perform its action.
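These control properties (parallel matching, serial action, and priority-based conflict resolution) can be captured in a few lines. The sketch below is a generic miniature, not the architecture of Newell and Simon; the two productions and their working-memory tokens are invented for illustration.

# A miniature production system: all productions match working memory in
# parallel; conflict resolution (here, priority) lets exactly one act.
def run(productions, working_memory, cycles=10):
    for _ in range(cycles):
        # Parallel phase: every production scans working memory at once.
        matched = [p for p in productions if p["condition"](working_memory)]
        if not matched:
            break                                  # nothing triggered: halt
        # Conflict resolution: only the highest-priority match may act.
        winner = max(matched, key=lambda p: p["priority"])
        winner["action"](working_memory)           # serial phase: one action fires
    return working_memory

productions = [
    {"priority": 1,
     "condition": lambda wm: "goal:greet" in wm and "hello" not in wm,
     "action": lambda wm: wm.add("hello")},
    {"priority": 2,
     "condition": lambda wm: "hello" in wm and "goal:greet" in wm,
     "action": lambda wm: wm.discard("goal:greet")},
]
print(run(productions, {"goal:greet"}))  # -> {'hello'}

Note the stigmergy in even this tiny example: the productions never address one another; each acts only because the previous one changed their shared working memory.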
Production systems have provided an architecture—particularly if that architecture is classical in nature—that has been so successful at simulating higher-order cognition that some researchers believe that production systems provide the foundation for a unified theory of cognition (Anderson, 1983; Anderson et al., 2004; Newell, 1990). Production systems illustrate another feature that is also typical of this approach to cognitive science: the so-called classical sandwich (Hurley, 2001).
Imagine a very simple agent that was truly incapable of representation and reasoning. Its interactions with the world would necessarily be governed by a set of reflexes that would convert sensed information directly into action. These reflexes define a sense-act cycle (Pfeifer & Scheier, 1999).
In contrast, a more sophisticated agent could use internal representations to decide upon an action, by reasoning about the consequences of possible actions and choosing the action that was reasoned to be most beneficial (Popper, 1978, p. 354): “While an uncritical animal may be eliminated altogether with its dogmatically held hypotheses, we may formulate our hypotheses, and criticize them. Let our conjectures, our theories die in our stead!” In this second scenario, thinking stands as an intermediary between sensation and action. Such behaviour is not governed by a sense-act cycle, but is instead the product of a sense-think-act cycle (Pfeifer & Scheier, 1999).
Hurley (2001) has argued that the sense-think-act cycle is the stereotypical form of a theory in classical cognitive science; she called this form the classical sandwich. In a typical classical theory, perception can only indirectly inform action, by sending information to be processed by the central representational processes, which in turn decide which action is to be performed.
Production systems exemplify the classical sandwich. The first production systems did not incorporate sensing or acting, in spite of a recognized need to do so. “One problem with psychology’s attempt at cognitive theory has been our persistence in thinking about cognition without bringing in perceptual and motor processes” (Newell, 1990, p. 15). This was also true of the next generation of production systems, the adaptive control of thought (ACT) architecture (Anderson, 1983). ACT “historically was focused on higher level cognition and not perception or action” (Anderson et al., 2004, p. 1038).
More modern production systems, such as EPIC (executive-process interactive control) (Meyer & Kieras, 1997a, 1997b), have evolved to include sensing and acting. EPIC simulates the performance of multiple tasks and can produce the psychological refractory period (PRP). When two tasks can be performed at the same time, the stimulus onset asynchrony (SOA) between the tasks is the length of time from the start of the first task to the start of the second task. When SOAs are long, the time taken by a subject to make a response is roughly the same for both tasks. However, for SOAs of half a second or less, it takes a longer time to perform the second task than it does to perform the first. This increase in response time for short SOAs is the PRP.
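The PRP pattern itself can be captured with simple arithmetic. The sketch below implements the classic central-bottleneck account with made-up stage durations; it is offered only to make the latency pattern concrete, and it is emphatically not how EPIC works—EPIC assumes no structural central bottleneck and instead attributes the PRP to executive scheduling strategies.

```python
# Toy central-bottleneck account of the PRP (hypothetical durations, in ms).
def rt2(soa, p1=100, c1=150, p2=100, c2=150, m2=100):
    # Task 2's central stage cannot begin until task 1's central stage
    # has finished AND task 2's perceptual stage is complete.
    central2_start = max(p1 + c1, soa + p2)
    return central2_start + c2 + m2 - soa  # RT measured from task-2 onset

for soa in (50, 150, 300, 600):
    print(soa, rt2(soa))
# 50 -> 450, 150 -> 350, 300 -> 350, 600 -> 350: short SOAs inflate the
# second response time (the PRP); long SOAs leave it at its 350 ms baseline.
```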
EPIC is an advanced production system. One of its key properties is that productions in EPIC can act in parallel. That is, at any time cycle in EPIC processing, all productions that have matched their conditions in working memory will act to alter working memory. This is important; when multiple tasks are modelled there will be two different sets of productions in action, one for each task. EPIC also includes sensory processors (such as virtual eyes) and motor processors, because actions can constrain task performance. For example, EPIC uses a single motor processor to control two “virtual hands.” This results in interference between two tasks that involve making responses with different hands.
While EPIC (Meyer & Kieras, 1997a, 1997b) explicitly incorporates sensing, acting, and thinking, it does so in a fashion that still exemplifies the classical sandwich. In EPIC, sensing transduces properties of the external world into symbols to be added to working memory. Working memory provides symbolic expressions that guide the actions of motor processors. Thus working memory centralizes the “thinking” that maps sensations onto actions. There are no direct connections between sensing and acting that bypass working memory. EPIC is an example of sense-think-act processing.
Radical embodied cognitive science, which is discussed in Chapter 5, argues that intelligence is the result of situated action; it claims that sense-think-act processing can be replaced by sense-act cycles, and that the rule-governed manipulation of expressions is unnecessary (Chemero, 2009). In contrast, classical researchers claim that production systems that include sensing and acting are sufficient to explain human intelligence and action, and that embodied theories are not necessary (Vera & Simon, 1993).
It follows that there is no need, contrary to what followers of SA [situated action] seem sometimes to claim, for cognitive psychology to adopt a whole new language and research agenda, breaking completely from traditional (symbolic) cognitive theories. SA is not a new approach to cognition, much less a new school of cognitive psychology. (Vera & Simon, 1993, p. 46)
We see later in this book that production systems provide an interesting medium that can be used to explore the relationship between classical, connectionist, and embodied cognitive science.
3.12 Weak Equivalence and the Turing Test
There are two fundamentals that follow from accepting the physical symbol system hypothesis (Newell, 1980; Newell & Simon, 1976). First, general human intelligence is the product of rule-governed symbol manipulation. Second, because they are universal machines, any particular physical symbol system can be configured to simulate the behaviour of another physical symbol system.
A consequence of these fundamentals is that digital computers, which are one type of physical symbol system, can simulate another putative member of the same class, human cognition (Newell & Simon, 1961, 1972; Simon, 1969). More than fifty years ago it was predicted “that within ten years most theories in psychology will take the form of computer programs, or of qualitative statements about the characteristics of computer programs” (Simon & Newell, 1958, pp. 7–8). One possible measure of cognitive science’s success is that a leading critic of artificial intelligence has conceded that this particular prediction has been partially fulfilled (Dreyfus, 1992).
There are a number of advantages to using computer simulations to study cognition (Dawson, 2004; Lewandowsky, 1993). The difficulties in converting a theory into a working simulation can identify assumptions that the theory hides. The formal nature of a computer program provides new tools for studying simulated concepts (e.g., proofs of convergence). Programming a theory forces a researcher to provide rigorous definitions of the theory’s components. “Programming is, again like any form of writing, more often than not experimental. One programs, just as one writes, not because one understands, but in order to come to understand” (Weizenbaum, 1976, p. 108).
However, computer simulation research provides great challenges as well. Chief among these is validating the model, particularly because one universal machine can simulate any other. A common criticism of simulation research is that it is possible to model anything, because modelling is unconstrained:
Just as we may wonder how much the characters in a novel are drawn from real life and how much is artifice, we might ask the same of a model: How much is based on observation and measurement of accessible phenomena, how much is based on informed judgment, and how much is convenience? (Oreskes, Shrader-Frechette, & Belitz, 1994, p. 644)
Because of similar concerns, mathematical psychologists have argued that computer simulations are impossible to validate in the same way as mathematical models of behaviour (Estes, 1975; Luce, 1989, 1999). Evolutionary biologist John Maynard Smith called simulation research “fact free science” (Mackenzie, 2002).
Computer simulation researchers are generally puzzled by such criticisms, because their simulations of cognitive phenomena must conform to a variety of challenging constraints (Newell, 1980, 1990; Pylyshyn, 1984). For instance, Newell’s (1980, 1990) production system models aim to meet a number of constraints that range from behavioural (flexible responses to environment, goal-oriented, operate in real time) to biological (realizable as a neural system, develop via embryological growth processes, arise through evolution).
In validating a computer simulation, classical cognitive science becomes an intrinsically comparative discipline. Model validation requires theoretical analyses and empirical observations to evaluate the relationship between a simulation and the subject being simulated. In adopting the physical symbol system hypothesis, classical cognitive scientists are further committed to the assumption that this relationship is complex, because it can be established (as argued in Chapter 2) at many different levels (Dawson, 1998; Marr, 1982; Pylyshyn, 1984). Pylyshyn has argued that model validation can take advantage of this and proceed by imposing severe empirical constraints: establishing that a model provides an appropriate account of its subject at the computational, algorithmic, and architectural levels of analysis. Let us examine this position in more detail.
First, consider a relationship between model and subject that is not listed above—a relationship at the implementational level of analysis. Classical cognitive science’s use of computer simulation methodology is a tacit assumption that the physical structure of its models does not need to match the physical structure of the subject being modelled.
The basis for this assumption is the multiple realization argument that we have already encountered. Cognitive scientists describe basic information processes in terms of their functional nature and ignore their underlying physicality. This is because the same function can be realized in radically different physical media. For instance, AND-gates can be created using hydraulic channels, electronic components, or neural circuits (Hillis, 1998). If hardware or technology were relevant—if the multiple realization argument were false—then computer simulations of cognition would be absurd. Classical cognitive science ignores the physical when models are validated. Let us now turn to the relationships between models and subjects that classical cognitive science cannot and does not ignore.
In the most abstract sense, both a model and a modelled agent can be viewed as opaque devices, black boxes whose inner workings are invisible. From this perspective, both are machines that convert inputs or stimuli into outputs or responses; their behaviour computes an input-output function (Ashby, 1956, 1960). Thus the most basic point of contact between a model and its subject is that the input-output mappings produced by one must be identical to those produced by the other. Establishing this fact is establishing a relationship between model and subject at the computational level.
To say that a model and subject are computing the same input-output function is to say that they are weakly equivalent. It is a weak equivalence because it is established by ignoring the internal workings of both model and subject. There are an infinite number of different algorithms for computing the same input-output function (Johnson-Laird, 1983). This means that weak equivalence can be established between two different systems that use completely different algorithms.
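A toy example makes weak equivalence vivid. The two hypothetical functions below always return the same output for the same input, yet their algorithms differ completely:

```python
# Two weakly equivalent systems: same input-output function, different algorithms.
def sum_iterative(n):
    total = 0
    for i in range(1, n + 1):  # serial accumulation: O(n) steps
        total += i
    return total

def sum_closed_form(n):
    return n * (n + 1) // 2    # Gauss's formula: O(1) steps

# Black-box (input-output) testing cannot tell these two apart...
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(1000))
# ...but relative complexity evidence could: one slows as n grows, the
# other does not.
```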
Weak equivalence leaves open the possibility that two systems produce the right behaviours for the wrong reasons. Weak equivalence is also sometimes known as Turing equivalence. This is because weak equivalence is at the heart of a criterion proposed by computer pioneer Alan Turing to determine whether a computer program had achieved intelligence (Turing, 1950). This criterion is called the Turing test.
Turing (1950) believed that a device’s ability to participate in a meaningful conversation was the strongest test of its general intelligence. His test involved a human judge conducting, via teletype, a conversation with an agent. In one instance, the agent was another human. In another, the agent was a computer program. Turing argued that if the judge could not correctly determine which agent was human, then the computer program must be deemed intelligent. Descartes (2006) subscribed to a similar logic. Turing and Descartes both believed in the power of language to reveal intelligence; however, Turing believed that machines could attain linguistic power, while Descartes did not.
A famous example of the application of the Turing test is provided by a model of paranoid schizophrenia, PARRY (Colby et al., 1972). This program interacted with a user by carrying on a conversation—it was a natural language communication program much like the earlier ELIZA program (Weizenbaum, 1966). However, in addition to processing the structure of input sentences, PARRY also computed variables related to paranoia: fear, anger, and mistrust. PARRY’s responses were thus affected not only by the user’s input, but also by its evolving affective states. PARRY’s contributions to a conversation became more paranoid as the interaction was extended over time.
A version of the Turing test was used to evaluate PARRY’s performance (Colby et al., 1972). Psychiatrists used teletypes to interview PARRY as well as human paranoids. Forty practising psychiatrists read transcripts of these interviews in order to distinguish the human paranoids from the simulated ones. They were only able to do this at chance levels. PARRY had passed the Turing test: “We can conclude that psychiatrists using teletyped data do not distinguish real patients from our simulation of a paranoid patient” (p. 220).
The problem with the Turing test, though, is that in some respects it is too easy to pass. This was one of the points of the pioneering conversation-making program, ELIZA (Weizenbaum, 1966), which was developed to engage in natural language conversations. Its most famous version, DOCTOR, modelled the conversational style of an interview with a humanistic psychotherapist. ELIZA’s conversations were extremely compelling. “ELIZA created the most remarkable illusion of having understood the minds of the many people who conversed with it” (Weizenbaum, 1976, p. 189). Weizenbaum was intrigued by the fact that “some subjects have been very hard to convince that ELIZA is not human. This is a striking form of Turing’s test” (Weizenbaum, 1966, p. 42).
However, ELIZA’s conversations were not the product of natural language understanding. It merely parsed incoming sentences, and then put fragments of these sentences into templates that were output as responses. Templates were ranked on the basis of keywords that ELIZA was programmed to seek during a conversation; this permitted ELIZA to generate responses rated as being highly appropriate. “A large part of whatever elegance may be credited to ELIZA lies in the fact that ELIZA maintains the illusion of understanding with so little machinery” (Weizenbaum, 1966, p. 43).
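The flavour of this machinery can be conveyed in a few lines. The sketch below is a drastically simplified, hypothetical reconstruction—its rules, ranks, and templates are invented here and are far cruder than Weizenbaum's actual script:

```python
import re

# ELIZA-style keyword matching (a hypothetical miniature, not Weizenbaum's code).
# Each rule: (rank, pattern, template); higher-ranked keywords win.
RULES = [
    (10, re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (5,  re.compile(r"\bI (.*)", re.I),    "Why do you {0}?"),
    (0,  re.compile(r"(.*)"),              "Please tell me more."),
]

def respond(sentence):
    for rank, pattern, template in sorted(RULES, key=lambda r: r[0], reverse=True):
        m = pattern.search(sentence)
        if m:
            # Echo a fragment of the input back inside a canned template.
            return template.format(*m.groups())

print(respond("I am sad about my work"))
# -> "How long have you been sad about my work?"
```

No understanding occurs anywhere in such a program; the apparent coherence of its replies is supplied by the canned template and by the fragment of the user's own words embedded in it.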
Indeed, much of the apparent intelligence of ELIZA is a contribution of the human participant in the conversation, who assumes that ELIZA understands its inputs and that even strange comments made by ELIZA are made for an intelligent reason.
The ‘sense’ and the continuity the person conversing with ELIZA perceives is supplied largely by the person himself. He assigns meanings and interpretations to what ELIZA ‘says’ that confirm his initial hypothesis that the system does understand, just as he might do with what a fortune-teller says to him. (Weizenbaum, 1976, p. 190)
Weizenbaum believed that natural language understanding was beyond the capability of computers, and also believed that ELIZA illustrated this belief. However, ELIZA was received in a fashion that Weizenbaum did not anticipate, and which was opposite to his intent. He was so dismayed that he wrote a book that served as a scathing critique of artificial intelligence research (Weizenbaum, 1976, p. 2): “My own shock was administered not by any important political figure in establishing his philosophy of science, but by some people who insisted on misinterpreting a piece of work I had done.”
The ease with which ELIZA was misinterpreted—that is, the ease with which it passed a striking form of Turing’s test—caused Weizenbaum (1976) to question most research on the computer simulation of intelligence. Much of Weizenbaum’s concern was rooted in AI’s adoption of Turing’s (1950) test as a measure of intelligence.
An entirely too simplistic notion of intelligence has dominated both popular and scientific thought, and this notion is, in part, responsible for permitting artificial intelligence’s perverse grand fantasy to grow. (Weizenbaum, 1976, p. 203)
However, perhaps a more reasoned response would be to adopt a stricter means of evaluating cognitive simulations. While the Turing test has been enormously influential for more than fifty years, researchers are aware of its limitations and have proposed a number of ways to make it more sensitive (French, 2000).
For instance, the Total Turing Test (French, 2000) removes the teletype and requires that a simulation of cognition be not only conversationally indistinguishable from a human, but also physically indistinguishable. Only a humanoid robot could pass such a test, and it could do so only by speaking and behaving (in very great detail) in ways indistinguishable from a human. A fictional version of the Total Turing Test is the Voight-Kampff scale described in Dick’s (1968) novel Do Androids Dream of Electric Sheep? This scale used behavioural measures of empathy, including pupil dilation, to distinguish humans from androids.
3.13 Towards Strong Equivalence
The Turing test has had a long, influential history (French, 2000). However, many would agree that it is flawed, perhaps because it is too easily passed. As a consequence, some have argued that artificial intelligence research is very limited (Weizenbaum, 1976). Others have argued for more stringent versions of the Turing test, such as the Total Turing Test.
Classical cognitive science recognizes that the Turing test provides a necessary, but not a sufficient, measure of a model’s validity. The test examines model and subject only at the level of their input-output relationship, so it can establish no more than weak equivalence: systems that use very different algorithms and architectures can still compute the same function.
Classical cognitive science has the goal of going beyond weak equivalence. It attempts to do so by establishing additional relationships between models and subjects: identities between both algorithms and architectures. This is an attempt to establish what is known as strong equivalence (Pylyshyn, 1984). Two systems are said to be strongly equivalent if they compute the same input-output function (i.e., if they are weakly equivalent), accomplish this with the same algorithm, and bring this algorithm to life with the same architecture. Cognitive scientists are in the business of making observations that establish the strong equivalence of their models to human thinkers.
Classical cognitive science collects these observations by measuring particular behaviours that are unintended consequences of information processing, and which can therefore reveal the nature of the algorithm that is being employed. Newell and Simon (1972) named these behaviours second-order effects; in Chapter 2 these behaviours were called artifacts, to distinguish them from the primary or intended responses of an information processor. In Chapter 2, I discussed three general classes of evidence related to artifactual behaviour: intermediate state evidence, relative complexity evidence, and error evidence.
Note that although similar in spirit, the use of these three different types of evidence to determine the relationship between the algorithms used by model and subject is not the same as something like the Total Turing Test. Classical cognitive science does not require physical correspondence between model and subject. However, algorithmic correspondences established by examining behavioural artifacts put much stronger constraints on theory validation than simply looking for stimulus-response correspondences. To illustrate this, let us consider some examples of how intermediate state evidence, relative complexity evidence, and error evidence can be used to validate models.
One important source of information that can be used to validate a model is intermediate state evidence (Pylyshyn, 1984). Intermediate state evidence involves determining the intermediate steps that a symbol manipulator takes to solve a problem, and then collecting evidence to determine whether a modelled subject goes through the same intermediate steps. Intermediate state evidence is notoriously difficult to collect, because human information processors are black boxes—we cannot directly observe internal cognitive processing. However, clever experimental paradigms can be developed to permit intermediate states to be inferred.
A famous example of evaluating a model using intermediate state evidence is found in some classic and pioneering research on human problem solving (Newell & Simon, 1972). Newell and Simon collected data from human subjects as they solved problems; their method of data collection is known as protocol analysis (Ericsson & Simon, 1984). In protocol analysis, subjects are trained to think out loud as they work. A recording of what is said by the subject becomes the primary data of interest.
The logic of collecting verbal protocols is that the thought processes involved in active problem solving are likely to be stored in a person’s short-term memory (STM), or working memory. Cognitive psychologists have established that items stored in such a memory are stored as an articulatory code that permits verbalization to maintain the items in memory (Baddeley, 1986, 1990; Conrad, 1964a, 1964b; Waugh & Norman, 1965). As a result, asking subjects to verbalize their thinking steps is presumed to provide accurate access to current cognitive processing, and to do so with minimal disruption. “Verbalization will not interfere with ongoing processes if the information stored in STM is encoded orally, so that an articulatory code can readily be activated” (Ericsson & Simon, 1984, p. 68).
In order to study problem solving, Newell and Simon (1972) collected verbal protocols for problems that were difficult enough to engage subjects and generate interesting behaviour, but simple enough to be solved. For instance, when a subject was asked to decode the cryptarithmetic problem DONALD + GERALD = ROBERT after being told that D = 5, they solved the problem in twenty minutes and produced a protocol that was 2,186 words in length.
The next step in the study was to create a problem behaviour graph from a subject’s protocol. A problem behaviour graph is a network of linked nodes. Each node represents a state of knowledge. For instance, in the cryptarithmetic problem such a state might be the observation that “R is odd.” A horizontal link from a node to a node on its right represents the application of an operation that changed the state of knowledge. An example operation might be “Find a column that contains a letter of interest and process that column.” A vertical link from a node to a node below represents backtracking. In many instances, a subject would reach a dead end in a line of thought and return to a previous state of knowledge in order to explore a different approach. The 2,186-word protocol produced a problem behaviour graph that consisted of 238 different nodes.
The initial node in a problem behaviour graph represents a subject’s starting state of knowledge when given a problem. A node near the end of the problem behaviour graph represents the state of knowledge when a solution has been achieved. All of the other nodes represent intermediate states of knowledge. Furthermore, in Newell and Simon’s (1972) research, these intermediate states represent very detailed elements of knowledge about the problem as it is being solved.
The goal of the simulation component of Newell and Simon’s (1972) research was to create a computer model that would generate its own problem behaviour graph. The model was intended to produce a very detailed mimicry of the subject’s behaviour—it was validated by examining the degree to which the simulation’s problem behaviour graph matched the graph created for the subject. The meticulous nature of such intermediate state evidence provided additional confidence for the use of verbal protocols as scientific data. “For the more information conveyed in their responses, the more difficult it becomes to construct a model that will produce precisely those responses adventitiously—hence the more confidence we can place in a model that does predict them” (Ericsson & Simon, 1984, p. 7).
Newell and Simon (1972) created a computer simulation by examining a subject’s problem behaviour graph, identifying the basic processes that it revealed in its links between nodes, and coding each of these processes as a production in a production system. Their model developed from the protocol for the DONALD + GERALD = ROBERT problem consisted of only 14 productions. The behaviour of this fairly small program was able to account for 75 to 80 percent of the human subject’s problem behaviour graph. “All of this analysis shows how a verbal thinking-aloud protocol can be used as the raw material for generating and testing a theory of problem solving behavior” (Newell & Simon, 1972, p. 227).
The contribution of Newell and Simon’s (1972) research to classical cognitive science is impossible to overstate. One of their central contributions was to demonstrate that human problem solving could be characterized as searching through a problem space. A problem space consists of a set of knowledge states—starting state, one or more goal states, and a potentially large number of intermediate states—that each represent current knowledge about a problem. A link between two knowledge states shows how the application of a single rule can transform the first state into the second. A problem behaviour graph is an example of a problem space. Searching the problem space involves finding a route—a sequence of operations—that will transform the initial state into a goal state. From this perspective, problem solving becomes the domain of control: finding as efficiently as possible an acceptable sequence of problem-solving operations. An enormous number of different search strategies exist (Knuth, 1997; Nilsson, 1980); establishing the strong equivalence of a problem-solving model requires collecting evidence (e.g., using protocol analysis) to ensure that the same search or control strategy is used by both model and agent.
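The idea of searching a problem space can be expressed compactly in code. The sketch below runs a generic breadth-first control strategy over a toy problem invented for this example; Newell and Simon's programs used far richer knowledge states, operators, and control strategies:

```python
from collections import deque

# Search through a toy problem space: states are knowledge states,
# operators transform states, and a solution is a route from start to goal.
def solve(start, goal, operators):
    frontier = deque([(start, [])])  # breadth-first control strategy
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path              # the sequence of operations found
        for name, op in operators:
            successor = op(state)
            if successor not in visited:
                visited.add(successor)
                frontier.append((successor, path + [name]))

# Toy problem: transform 2 into 11 using two operators.
ops = [("double", lambda s: s * 2), ("add1", lambda s: s + 1)]
print(solve(2, 11, ops))  # ['double', 'add1', 'double', 'add1']
```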
A second kind of evidence that is used to investigate the validity of a model is relative complexity evidence (Pylyshyn, 1984). Relative complexity evidence generally involves examining the relative difficulty of problems, to see whether the problems that are hard (or easy) for a model are the same problems that are hard (or easy) for a modelled subject. The most common kind of relative complexity evidence collected by cognitive scientists is response latency (Luce, 1986; Posner, 1978). It is assumed that the time taken for a system to generate a response is an artifactual behaviour that can reveal properties of an underlying algorithm and be used to examine the algorithmic relationship between model and subject.
One domain in which measures of response latency have played an important role is the study of visual cognition (Kosslyn & Osherson, 1995; Pinker, 1985). Visual cognition involves solving information processing problems that involve spatial relationships or the spatial layout of information. It is a rich domain of study because it seems to involve qualitatively different kinds of information processing: the data-driven or preattentive detection of visual features (Marr, 1976; Richards, 1988; Treisman, 1985), top-down or high-level cognition to link combinations of visual features to semantic interpretations or labels (Jackendoff, 1983, 1987; Treisman, 1986, 1988), and processing involving visual attention or visual routines that include both data-driven and top-down characteristics, and which serve as an intermediary between feature detection and object recognition (Cooper & Shepard, 1973a, 1973b; Ullman, 1984; Wright, 1998).
Visual search tasks are frequently used to study visual cognition. In such a task, a subject is usually presented with a visual display consisting of a number of objects. In the odd-man-out version of this task, in one half of the trials one of the objects (the target) is different from all of the other objects (the distractors). In the other half of the trials, the only objects present are distractors. Subjects have to decide as quickly and accurately as possible whether a target is present in each display. The dependent measures in such tasks are search latency functions, which represent the time required to detect the presence or absence of a target as a function of the total number of display elements.
Pioneering work on visual search discovered the so-called pop-out effect: the time required to detect the presence of a target that is characterized by one of a small number of unique features (e.g., colour, orientation, contrast, motion) is largely independent of the number of distractor elements in a display, producing a search latency function that is essentially flat (Treisman & Gelade, 1980). This is because, regardless of the number of elements in the display, when the target is present it seems to pop out of the display, bringing itself immediately to attention. Notice how the target pops out of the display illustrated in Figure 3-11.
Figure 3-11. Unique features pop out of displays, regardless of display size.
In contrast, the time to detect a target defined by a unique combination of features generally increases with the number of distractor items, producing search latency functions with positive slopes. Figure 3-12 illustrates visual search in objects that are either connected or unconnected (Dawson & Thibodeau, 1998); connectedness is a property that is not local, but is only defined by relations between multiple features (Minsky & Papert, 1988). The larger the number of display items, the longer it takes to find the target when it is present in the display. Is there a target in Figure 3-12? If so, is it harder to find than the one that was present in Figure 3-11?
Figure 3-12. Unique combinations of features do not pop out.
Search latency results such as those described above, which revealed that some objects pop out but others do not, formed the basis for feature integration theory (Treisman, 1985, 1986, 1988; Treisman & Gelade, 1980; Treisman & Gormican, 1988; Treisman, Sykes, & Gelade, 1977). Feature integration theory is a multistage account of visual cognition. In the first stage, preattentive processors register the locations of a small set of primitive visual features on independent feature maps. These maps represent a small number of properties (e.g., orientation, colour, contrast, movement) that also appear to be transduced by early neural visual detectors (Livingstone & Hubel, 1988). If such a feature is unique to a display, then it will be the only active location in its feature map. This permits pop out to occur, because the location of the unique, primitive feature is preattentively available.
Unique combinations of features do not produce unique activity in a single feature map and therefore cannot pop out. Instead, they require additional processing in order to be detected. First, attentional resources must be used to bring the various independent feature maps into register with respect to a master map of locations. This master map of locations will indicate what combinations of features coexist at each location in the map. Second, a “spotlight” of attention is used to scan the master map of locations in search of a unique object. Because this attentional spotlight can only process a portion of the master map at any given time, and because it must be scanned from location to location on the master map, it takes longer for unique combinations of features to be found. Furthermore, the search of the master map will become longer and longer as more of its locations are filled, explaining why the latency to detect unique feature combinations is affected by the number of distractors present.
Relative complexity evidence can also be used to explore some of the components of feature integration theory. For example, several researchers have proposed models of how the attentional spotlight is shifted to detect targets in a visual search task (Fukushima, 1986; Gerrissen, 1991; Grossberg, 1980; Koch & Ullman, 1985; LaBerge, Carter, & Brown, 1992; Sandon, 1992). While the specific details of these models differ, their general structure is quite similar. First, these models represent the display being searched as an array of processors whose activities encode the visual distinctiveness of the location that each processor represents (i.e., how different it is in appearance relative to its neighbours). Second, these processors engage in a winner-take-all (WTA) competition (Feldman & Ballard, 1982) to identify the most distinctive location. This competition is defined by lateral inhibition: each processor uses its activity as an inhibitory signal in an attempt to reduce the activity of its neighbours. Third, the display element at the winning location is examined to see whether or not it is the target. If it is, the search stops. If it is not, activity at this location either decays or is inhibited (Klein, 1988), and a new WTA competition is used to find the next most distinctive location in the display.
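The winner-take-all cycle just described can be sketched in a few lines. The following is a schematic illustration with invented parameters, not a faithful implementation of any of the cited models:

```python
import numpy as np

# Winner-take-all via lateral inhibition (hypothetical parameters).
# Each unit's activity codes the distinctiveness of one display location;
# every unit tries to suppress all of the others.
def winner_take_all(distinctiveness, inhibition=0.2, steps=50):
    a = np.array(distinctiveness, dtype=float)
    for _ in range(steps):
        others = a.sum() - a  # summed activity of each unit's competitors
        a = np.maximum(0.0, a + 0.1 * (a - inhibition * others))
    return int(np.argmax(a))

print(winner_take_all([0.30, 0.31, 0.30, 0.90]))
# -> 3: the most distinctive location wins the attentional spotlight.
```

In a full search model, if the winning location turned out not to contain the target, its activity would be suppressed or allowed to decay, and the competition would run again to select the next most distinctive location.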
Models of this type provide a straightforward account of the search latency functions obtained for targets defined by unique conjunctions of features. They also lead to a unique prediction: if inhibitory processes are responsible for directing the shift of the attentional spotlight, then search latency functions should be affected by the overall adapting luminance of the display. This is because there is a greater degree of inhibition during the processing of bright visual displays than there is for dimmer displays (Barlow, Fitzhugh, & Kuffler, 1957; Derrington & Lennie, 1982; Ransom-Hogg & Spillmann, 1980; Rohaly & Buchsbaum, 1989).
A visual search study was conducted to test this prediction (Dawson & Thibodeau, 1998). Modifying a paradigm used to study the effect of adaptive luminance on motion perception (Dawson & Di Lollo, 1990), Dawson and Thibodeau (1998) had subjects perform a visual search task while viewing the displays through neutral density filters that modified display luminance while not affecting the relative contrast of elements. There were two major findings that supported the kinds of models of attentional shift described above. First, when targets pop out, the response latency of subjects was not affected by adaptive luminance. This is consistent with feature integration theory, in the sense that a shifting attentional spotlight is not required for pop out to occur. Second, for targets that did not pop out, search latency functions were affected by the level of adaptive luminance. For darker displays, both the intercept and the slope of the search latency functions increased significantly. This is consistent with the hypothesis that this manipulation interferes with the inhibitory processes that guide shifts of attention.
A third approach to validating a model involves the use of error evidence. This approach assumes that errors are artifacts, in the sense that they are a natural consequence of an agent’s information processing, and that they are not a deliberate or intended product of this processing.
One source of artifactual errors is the way information processing can be constrained by limits on internal resources (memory or attention) or by external demands (the need for real time responses). These restrictions on processing produce bounded rationality (Simon, 1982). Another reason for artifactual errors lies in the restrictions imposed by the particular structure-process pairing employed by an information processor. “A tool too gains its power from the fact that it permits certain actions and not others. For example, a hammer has to be rigid. It can therefore not be used as a rope” (Weizenbaum, 1976, p. 37). Like a tool, a particular structure-process pairing may not be suited for some tasks and therefore produces errors when faced with them.
One example of the importance of error evidence is found in the large literature on human, animal, and robot navigation (Cheng, 2005; Cheng & Newcombe, 2005; Healy, 1998; Jonsson, 2002; Milford, 2008). How do organisms find their place in the world? One approach to answering this question is to set up small, manageable indoor environments. These “arenas” can provide a variety of cues to animals that learn to navigate within them. If an agent is reinforced for visiting a particular location, what cues does it use to return to this place?
One paradigm for addressing this question is the reorientation task invented by Ken Cheng (1986). In the reorientation task, an agent is typically placed within a rectangular arena. Reinforcement is typically provided at one of the corner locations in the arena. That is, the agent is free to explore the arena, and eventually finds a reward at a location of interest—it learns that this is the “goal location.” The agent is then removed from the arena, disoriented, and returned to an (often different) arena, with the task of using the available cues to relocate the goal. Of particular interest are experimental conditions in which the arena has been altered from the one in which the agent was originally trained.
An arena that is used in the reorientation task can provide two different kinds of navigational information: geometric cues and feature cues (Cheng & Newcombe, 2005). Geometric cues are relational, while feature cues are not.
A geometric property of a surface, line, or point is a property it possesses by virtue of its position relative to other surfaces, lines, and points within the same space. A non-geometric property is any property that cannot be described by relative position alone. (Gallistel, 1990, p. 212)
In a rectangular arena, metric properties (e.g., wall lengths, angles between walls) combined with an agent’s distinction between left and right (e.g., the long wall is to the left of the short wall) provide geometric cues. Non-geometric cues or feature cues can be added as well. For instance, one arena wall can have a different colour than the others (Cheng, 1986), or different coloured patterns can be placed at each corner of the arena (Kelly, Spetch, & Heth, 1998).
One question of interest concerns the relative contributions of these different cues for reorientation. This is studied by seeing how the agent reorients after it has been returned to an arena in which cues have been altered. For example, the feature cues might have been moved to new locations. This places feature cues in conflict with geometric cues. Will the agent move to a location defined by geometric information, or will it move to a different location indicated by feature information? Extensive use of the reorientation task has revealed some striking regularities.
Some of the most interesting regularities found in the reorientation task pertain to a particular error in reorientation. In an arena with no unique feature cues (no unique wall colour, no unique pattern at each corner), geometric cues are the only information available for reorienting. However, geometric cues cannot uniquely specify a goal location in a rectangular arena. This is because the geometric cues at the goal location (e.g., 90° angle, shorter wall to the left and longer wall to the right) are identical to the geometric cues present at the diagonally opposite corner (often called the rotational location). Under these conditions, the agent will produce rotational error (Cheng, 1986, 2005). When rotational error occurs, the trained agent goes to the goal location at above-chance levels; however, the animal goes to the rotational location equally often. Rotational error is usually taken as evidence that the agent is relying upon the geometric properties of the environment.
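A toy encoding shows why geometry alone guarantees this error. In the sketch below (corner labels and cue signatures invented for this example), each corner of a rectangular arena is described only by its geometric cues:

```python
# Geometric cue signatures for the four corners of a rectangular arena
# (hypothetical encoding): which wall lies to the agent's left and right
# when it faces the corner.
corners = {
    "A": ("short-left", "long-right"),  # goal corner
    "B": ("long-left", "short-right"),
    "C": ("short-left", "long-right"),  # diagonally opposite the goal
    "D": ("long-left", "short-right"),
}
goal_signature = corners["A"]
print([c for c, sig in corners.items() if sig == goal_signature])
# -> ['A', 'C']: geometry cannot separate the goal from the rotational
# corner, so a purely geometric agent divides its visits between the two.
```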
When feature cues are present in a rectangular arena, a goal location can be uniquely specified. In fact, when feature cues are present, an agent should not even need to pay attention to geometric cues, because the geometric cues are no longer required to solve the task. However, evidence suggests that geometric cues still influence behaviour even when they are unnecessary.
First, in some cases subjects continue to make some rotational errors even when feature cues specify the goal location (Cheng, 1986; Hermer & Spelke, 1994). Second, when feature cues present during training are removed from the arena in which reorientation occurs, subjects typically revert to generating rotational error (Kelly, Spetch, & Heth, 1998; Sovrano, Bisazza, & Vallortigara, 2003). Third, in studies in which local features are moved to new locations in the new arena, there is a conflict between geometric and feature cues. In this case, reorientation appears to be affected by both types of cues. The animals will not only increase their tendency to visit the corner marked by the feature cues that previously signalled the goal, but will also produce rotational error for two other locations in the arena (Brown, Spetch, & Hurd, 2007; Kelly, Spetch, & Heth, 1998).
Rotational error is an important phenomenon in the reorientation literature, and it is affected by a complex interaction between geometric and feature cues. A growing variety of models of reorientation are appearing in the literature, including models consistent with the symbol-manipulating fundamental of classical cognitive science (Cheng, 1986; Gallistel, 1990), neural network models that are part of connectionist cognitive science (Dawson et al., 2010), and behaviour-based robots that are the domain of embodied cognitive science (Dawson, Dupuis, & Wilson, 2010; Nolfi, 2002). All of these models have two things in common. First, they can produce rotational error and many of its nuances. Second, this error is produced as a natural byproduct of a reorientation algorithm; the errors produced by the models are used in aid of their validation.
Classical cognitive scientists often develop theories in the form of working computer simulations. These models are validated by collecting evidence that shows they are strongly equivalent to the subjects or phenomena being modelled. This begins by demonstrating weak equivalence: that both model and subject are computing the same input-output function. The quest for strong equivalence is furthered by using intermediate state evidence, relative complexity evidence, and error evidence to demonstrate, in striking detail, that both model and subject are employing the same algorithm.
However, strong equivalence can only be established by demonstrating an additional relationship between model and subject. Not only must model and subject be employing the same algorithm, but both must also be employing the same primitive processes. Strong equivalence requires architectural equivalence.
The primitives of a computer simulation are readily identifiable. A computer simulation should be a collection of primitives that are designed to generate a behaviour of interest (Dawson, 2004). In order to create a model of cognition, one must define the basic properties of a symbolic structure, the nature of the processes that can manipulate these expressions, and the control system that chooses when to apply a particular rule, operation, or process. A model makes these primitive characteristics explicit. When the model is run, its behaviour shows what these primitives can produce.
While identifying a model’s primitives should be straightforward, determining the architecture employed by a modelled subject is far from easy. To illustrate this, let us consider research on mental imagery.
Mental imagery is a cognitive phenomenon in which we experience or imagine mental pictures. Mental imagery is often involved in solving spatial problems (Kosslyn, 1980). For instance, imagine being asked how many windows there are on the front wall of the building in which you live. A common approach to answering this question would be to imagine the image of this wall and to inspect the image, mentally counting the number of windows that are displayed in it. Mental imagery is also crucially important for human memory (Paivio, 1969, 1971, 1986; Yates, 1966): we are better at remembering items if we can create a mental image of them. Indeed, the construction of bizarre mental images, or of images that link two or more items together, is a standard tool of the mnemonic trade (Lorayne, 1985, 1998, 2007; Lorayne & Lucas, 1974).
An early achievement of the cognitive revolution in psychology (Miller, 2003; Vauclair & Perret, 2003) was a rekindled interest in studying mental imagery, an area that had been neglected during the reign of behaviourism (Paivio, 1971, 1986). In the early stages of renewed imagery research, traditional paradigms were modified to solidly establish that concept imageability was a key predictor of verbal behaviour and associative learning (Paivio, 1969). In later stages, new paradigms were invented to permit researchers to investigate the underlying nature of mental images (Kosslyn, 1980; Shepard & Cooper, 1982).
For example, consider the relative complexity evidence obtained using the mental rotation task (Cooper & Shepard, 1973a, 1973b; Shepard & Metzler, 1971). In this task, subjects are presented with a pair of images. In some instances, the two images are of the same object. In other instances, the two images are different (e.g., one is a mirror image of the other). The orientation of the images can also be varied—for instance, they can be rotated to different degrees in the plane of view. The angular disparity between the two images is the key independent variable. A subject’s task is to judge whether the images are the same or not; the key dependent measure is the amount of time required to respond.
In order to perform the mental rotation task, subjects first construct a mental image of one of the objects, and then imagine rotating it to the correct orientation to enable them to judge whether it is the same as the other object. The standard finding in this task is that there is a linear relationship between response latency and the amount of mental rotation that is required. From these results it has been concluded that “the process of mental rotation is an analog one in that intermediate states in the process have a one-to-one correspondence with intermediate stages in the external rotation of an object” (Shepard & Cooper, 1982, p. 185). That is, mental processes rotate mental images in a holistic fashion, through intermediate orientations, just as physical processes can rotate real objects.
Another source of relative complexity evidence concerning mental imagery is the image scanning task (Kosslyn, 1980; Kosslyn, Ball, & Reiser, 1978). In the most famous version of this task, subjects are first trained to create an accurate mental image of an island map on which seven different locations are marked. Then subjects are asked to construct this mental image, focusing their attention at one of the locations. They are then provided with a name, which may or may not be one of the other map locations. If the name is of another map location, then subjects are instructed to scan across the image to it, pressing a button when they arrive at the second location.
In the map-scanning version of the image-scanning task, the dependent variable was the amount of time from the naming of the second location to a subject’s button press, and the independent variable was the distance on the map between the first and second locations. The key finding was a nearly perfect linear relationship between latency and distance (Kosslyn, Ball, & Reiser, 1978): an increased distance led to an increased response latency, suggesting that the image had spatial extent, and that it was scanned at a constant rate.
The scanning experiments support the claim that portions of images depict corresponding portions of the represented objects, and that the spatial relations between portions of the image index the spatial relations between the corresponding portions of the imaged objects. (Kosslyn, 1980, p. 51)
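The constant-rate claim amounts to fitting a line to the latency-distance data and reading the scan rate off its slope. The sketch below uses made-up numbers, not Kosslyn, Ball, and Reiser's actual data:

```python
import numpy as np

# Hypothetical image-scanning data: latency grows linearly with the
# imagined distance to be scanned.
distance_cm = np.array([2.0, 5.0, 9.0, 12.0, 16.0])
latency_s = np.array([0.95, 1.10, 1.28, 1.43, 1.61])
slope, intercept = np.polyfit(distance_cm, latency_s, 1)
print(round(float(slope), 3), round(float(intercept), 2))  # 0.047 0.86
# Each extra centimetre of imagined distance adds a constant increment
# (~47 ms here) of scan time, as expected if a spatial medium is scanned
# at a constant rate.
```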
The relative complexity evidence obtained from tasks like mental rotation and image scanning provided the basis for a prominent account of mental imagery known as the depictive theory (Kosslyn, 1980, 1994; Kosslyn, Thompson, & Ganis, 2006). This theory is based on the claim that mental images are not merely internal representations that describe visuospatial information (as would be the case with words or with logical propositions), but instead depict this information because the format of an image is quasi-pictorial. That is, while a mental image is not claimed to literally be a picture in the head, it nevertheless represents content by resemblance.
There is a correspondence between parts and spatial relations of the representation and those of the object; this structural mapping, which confers a type of resemblance, underlies the way images convey specific content. In this respect images are like pictures. Unlike words and symbols, depictions are not arbitrarily paired with what they represent. (Kosslyn, Thompson, & Ganis, 2006, p. 44)
The depictive theory specifies primitive properties of mental images, which have sometimes been called privileged properties (Kosslyn, 1980). What are these primitives? One is that images occur in a spatial medium that is functionally equivalent to a coordinate space. A second is that images are patterns that are produced by activating local regions of this space to produce an “abstract spatial isomorphism” (Kosslyn, 1980, p. 33) between the image and what it represents. This isomorphism is a correspondence between an image and a represented object in terms of their parts as well as spatial relations amongst these parts. A third is that images not only depict spatial extent, they also depict properties of visible surfaces such as colour and texture.
These privileged properties are characteristic of the format of mental images—the structure of images as symbolic expressions. When such a structure is paired with particular primitive processes, certain types of questions are easily answered. These processes are visual in nature: for instance, mental images can be scanned, inspected at different apparent sizes, or rotated. The coupling of such processes with the depictive structure of images is well suited to solving visuospatial problems. Other structure-process pairings—in particular, logical operations on propositional expressions that describe spatial properties (Pylyshyn, 1973)—do not make spatial information explicit and arguably will not be as adept at solving visuospatial problems. Kosslyn (1980, p. 35) called the structural properties of images privileged because their possession “[distinguishes] an image from other forms of representation.”
That the depictive theory makes claims about the primitive properties of mental images indicates quite clearly that it is an account of cognitive architecture. That it is a theory about architecture is further supported by the fact that the latest phase of imagery research has involved supplementing behavioural data with evidence concerning the cognitive neuroscience of imagery (Kosslyn, 1994; Kosslyn et al., 1995; Kosslyn et al., 1999; Kosslyn, Thompson, & Alpert, 1997; Kosslyn, Thompson, & Ganis, 2006). This research has attempted to ground the architectural properties of images in topographically organized regions of the cortex.
Computer simulation has proven to be a key medium for evaluating the depictive theory of mental imagery. Beginning with work in the late 1970s (Kosslyn & Shwartz, 1977), the privileged properties of mental images have been converted into a working computer model (Kosslyn, 1980, 1987, 1994; Kosslyn et al., 1984; Kosslyn et al., 1985). In general terms, over time these models represent an elaboration of a general theoretical structure: long-term memory uses propositional structures to store spatial information. Image construction processes convert this propositional information into depictive representations on a spatial medium that enforces the primitive structural properties of images. Separate from this medium are primitive processes that operate on the depicted information (e.g., scan, inspect, interpret). This form of model has shown that the privileged properties of images that define the depictive theory are sufficient for simulating a wide variety of the regularities that govern mental imagery.
The last few paragraphs have introduced Kosslyn’s (e.g., 1980) depictive theory, its proposals about the privileged properties of mental images, and the success that computer simulations derived from this theory have had at modelling behavioural results. All of these topics concern statements about primitives in the domain of a theory or model about mental imagery. Let us now turn to one issue that has not yet been addressed: the nature of the primitives employed by the modelled subject, the human imager.
The status of privileged properties espoused by the depictive theory has been the subject of a decades-long imagery debate (Block, 1981; Tye, 1991). At the heart of the imagery debate is a basic question: are the privileged properties parts of the architecture or not? The imagery debate began with the publication of a seminal paper (Pylyshyn, 1973), which proposed that the primitive properties of images were not depictive, but were instead descriptive properties based on a logical or propositional representation. This position represents the basic claim of the propositional theory, which stands as a critical alternative to the depictive theory.
The imagery debate continues to the present day; propositional theory’s criticism of the depictive position has been prolific and influential (Pylyshyn, 1981a, 1981b, 1984, 2003a, 2003b, 2003c, 2007). The imagery debate has been contentious, has involved a number of subtle theoretical arguments about the relationship between theory and data, and has shown no signs of being clearly resolved. Indeed, some have argued that it is a debate that cannot be resolved, because it is impossible to identify data appropriate for differentiating the depictive and propositional theories (Anderson, 1978). In this section, the overall status of the imagery debate is not of concern. We are instead interested in a particular type of evidence that has played an important role in the debate: evidence concerning cognitive penetrability (Pylyshyn, 1980, 1984, 1999).
Recall from the earlier discussion of algorithms and architecture that Newell (1990) proposed that the rate of change of various parts of a physical symbol system would differ radically depending upon which component was being examined. Newell observed that data should change rapidly, stored programs should be more enduring, and the architecture that interprets stored programs should be even more stable. This is because the architecture is wired in. It may change slowly (e.g., in human cognition because of biological development), but it should be the most stable information processing component. When someone claims that they have changed their mind, we interpret this as meaning that they have updated their facts, or that they have used a new approach or strategy to arrive at a conclusion. We don’t interpret this as a claim that they have altered their basic mental machinery—when we change our mind, we don’t change our cognitive architecture!
The cognitive penetrability criterion (Pylyshyn, 1980, 1984, 1999) is an experimental paradigm that takes advantage of the persistent “wired in” nature of the architecture. If some function is part of the architecture, then it should not be affected by changes in cognitive content—changing beliefs should not result in a changing architecture. The architecture is cognitively impenetrable. In contrast, if some function changes because of a change in content that is semantically related to the function, then this is evidence that it is not part of the architecture.
If a system is cognitively penetrable then the function it computes is sensitive, in a semantically coherent way, to the organism’s goals and beliefs, that is, it can be altered in a way that bears some logical relation to what the person knows. (Pylyshyn, 1999, p. 343)
The architecture is not cognitively penetrable.
Cognitive penetrability provides a paradigm for testing whether a function of interest is part of the architecture or not. First, some function is measured as part of a pre-test. For example, consider Figure 3-13, which presents the Müller-Lyer illusion, first described in 1889 (Gregory, 1978). In a pre-test, it would be determined whether you experience this illusion. Some measurement would be made to determine whether you judge the horizontal line segment of the top arrow to be longer than the horizontal line segment of the bottom arrow.
Second, a strong manipulation of a belief related to the function that produces the Müller-Lyer illusion would be performed. You, as a subject, might be told that the two horizontal line segments were equal in length. You might be given a ruler, and asked to measure the two line segments, in order to convince yourself that your experience was incorrect and that the two lines were of the same length.
Figure 3-13. The Müller-Lyer illusion.
Third, a post-test would determine whether you still experienced the illusion. Do the line segments still appear to be of different length, even though you are armed with the knowledge that this appearance is false? This illusion has had such a long history because its appearance is not affected by such cognitive content. The mechanism that is responsible for the Müller-Lyer illusion is cognitively impenetrable.
This paradigm has been applied to some of the standard mental imagery tasks in order to show that some of the privileged properties of images are cognitively penetrable and therefore cannot be part of the architecture. For instance, in his 1981 dissertation, Liam Bannon examined the map scanning task for cognitive penetrability (for methodological details, see Pylyshyn, 1981a). Bannon reasoned that the instructions given to subjects in the standard map scanning study (Kosslyn, Ball, & Reiser, 1978) instilled a belief that image scanning was like scanning a picture. Bannon was able to replicate the Kosslyn, Ball, and Reiser results in one condition. However, in other conditions the instructions were changed so that the images had to be scanned to answer a question, but no beliefs about scanning were instilled. In one study, Bannon had subjects shift attention from the first map location to the second (named) location, and then judge the compass direction from the second location to the first. In this condition, the linearly increasing relationship between distance and time disappeared. Image scanning appears to be cognitively penetrable, challenging some of the architectural claims of depictive theory. “Images can be examined without the putative constraints of the surface display postulated by Kosslyn and others” (Pylyshyn, 1981a, p. 40).
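The logic of Bannon’s test can be made concrete with a small computation. The sketch below, written with invented illustrative numbers, regresses response time on inter-landmark distance in each instruction condition and compares the slopes; a positive slope under picture-scanning instructions alongside a flat slope under neutral instructions is the chronometric signature of penetrability.

```python
# Hypothetical illustration of how penetrability is assessed in map
# scanning: regress response time on inter-landmark distance in each
# instruction condition and compare the slopes. All numbers are invented.

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

distances = [2.0, 4.0, 6.0, 8.0, 10.0]   # cm between landmarks on the map

# Standard instructions ("scan across the image"): time grows with distance.
rt_scan = [1.10, 1.35, 1.61, 1.84, 2.10]      # seconds

# Bannon-style instructions (judge compass direction; no scanning belief):
# response times are roughly flat across distance.
rt_neutral = [1.42, 1.38, 1.45, 1.40, 1.44]

print("scan instructions:    slope = %.3f s/cm" % slope(distances, rt_scan))
print("neutral instructions: slope = %.3f s/cm" % slope(distances, rt_neutral))

# A positive slope in one condition and a near-zero slope in the other
# means that the time-distance relation changes with the subject's beliefs
# about the task: the signature of cognitive penetrability.
```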
The cognitive penetrability paradigm has also been applied to the mental rotation task (Pylyshyn, 1979b). Pylyshyn reasoned that if mental rotation is accomplished by primitive mechanisms, then it must be cognitively impenetrable. One prediction that follows from this reasoning is that the rate of mental rotation should be independent of the content being rotated—an image depicting simple content should, by virtue of its putative architectural nature, be rotated at the same rate as a different image depicting more complex content.
Pylyshyn (1979b) tested this hypothesis in two experiments and found evidence of cognitive penetration. The rate of mental rotation was affected by practice, by the content of the image being rotated, and by the nature of the comparison task that subjects were asked to perform. As was the case with image scanning, it would seem that the “analog” rotation of images is not primitive, but is instead based on simpler processes that do belong to the architecture.
The more carefully we examine phenomena, such as the mental rotation findings, the more we find that the informally appealing holistic image-manipulation views must be replaced by finer grained piecemeal procedures that operate upon an analyzed and structured stimulus using largely serial, resource-limited mechanisms. (Pylyshyn, 1979b, p. 27)
Cognitive penetrability has played an important role in domains other than mental imagery. For instance, in the literature concerned with social perception and prediction, there is debate between a classical theory called theory-theory (Gopnik & Meltzoff, 1997; Gopnik & Wellman, 1992) and a newer approach called simulation theory (Gordon, 1986, 2005b), which is nicely situated in the embodied cognitive science that is the topic of Chapter 5. There is a growing discussion about whether cognitive penetrability can be used to discriminate between these two theories (Greenwood, 1999; Heal, 1996; Kuhberger et al., 2006; Perner et al., 1999; Stich & Nichols, 1997). Cognitive penetrability has also been applied to various topics in visual perception (Raftopoulos, 2001), including face perception (Bentin & Golland, 2002) and the perception of illusory motion (Dawson, 1991; Dawson & Wright, 1989; Wright & Dawson, 1994).
While cognitive penetrability is an important tool when faced with the challenge of examining the architectural equivalence between model and subject, it is not without its problems. For instance, despite its application to the study of mental imagery, the imagery debate rages on, suggesting that penetrability evidence is not as compelling or powerful as its proponents might hope. Perhaps one reason for this is that it seeks a null result—the absence of an effect of cognitive content on cognitive function. While cognitive penetrability can provide architectural evidence for strong equivalence, other sources of evidence are likely required. One source of such additional evidence is cognitive neuroscience.
3.15: Modularity of Mind
Classical cognitive science assumes that cognition is computation, and endorses the physical symbol system hypothesis. As a result, it merges two theoretical positions that in the seventeenth century were thought to be in conflict. The first is Cartesian rationalism, the notion that the products of thought were rational conclusions drawn from the rule-governed manipulation of pre-existing ideas. The second is anti-Cartesian materialism, the notion that the processes of thought are carried out by physical mechanisms.
The merging of rationalism and materialism has resulted in the modification of a third idea, innateness, which is central to both Cartesian philosophy and classical cognitive science. According to Descartes, the contents of some mental states were innate, and served as mental axioms that permitted the derivation of new content (Descartes, 1996, 2006). Variations of this claim can be found in classical cognitive science (Fodor, 1975). However, it is much more typical for classical cognitive science to claim innateness for the mechanisms that manipulate content, instead of claiming it for the content itself. According to classical cognitive science, it is the architecture that is innate.
Innateness is but one property that can serve to constrain theories about the nature of the architecture (Newell, 1990). It is a powerful assumption that leads to particular predictions. If the architecture is innate, then it should be universal (i.e., shared by all humans), and it should develop in a systematic pattern that can be linked to biological development. These implications have guided a tremendous amount of research in linguistics over the last several decades (Jackendoff, 2002). However, innateness is but one constraint, and many radically different architectural proposals might all be consistent with it. What other constraints might be applied to narrow the field of potential architectures?
Another constraining property is modularity (Fodor, 1983). Modularity is the claim that an information processor is not just one homogeneous system used to handle every information processing problem, but is instead a collection of special-purpose processors, each of which is especially suited to deal with a narrower range of more specific problems. Modularity offers a general solution to what is known as the packing problem (Ballard, 1986).
The packing problem is concerned with maximizing the computational power of a physical device with limited resources, such as a brain with a finite number of neurons and synapses. How does one pack the maximal computing power into a finite brain? Ballard (1986) argued that many different subsystems, each designed to deal with a limited range of computations, will be easier to fit into a finite package than will be a single general-purpose device that serves the same purpose as all of the subsystems.
Of course, in order to enable a resource-limited system to solve the same class of problems as a universal machine, a compromise solution to the packing problem may be required. This is exactly the stance adopted by Fodor in his influential 1983 monograph The Modularity of Mind. Fodor imagined an information processor that used general central processing, which he called isotropic processes, operating on representations delivered by a set of special-purpose input systems that are now known as modules.
If, therefore, we are to start with anything like Turing machines as models in cognitive psychology, we must think of them as embedded in a matrix of subsidiary systems which affect their computations in ways that are responsive to the flow of environmental events. The function of these subsidiary systems is to provide the central machine with information about the world. (Fodor, 1983, p. 39)
According to Fodor (1983), a module is a neural substrate that is specialized for solving a particular information processing problem. It takes input from transducers, preprocesses this input in a particular way (e.g., computing three-dimensional structure from transduced motion signals [Hildreth, 1983; Ullman, 1979]), and passes the result of this preprocessing on to central processes. Because modules are specialized processors, they are domain specific. Because the task of modules is to inform central processing about the dynamic world, modules operate in a fast, mandatory fashion. In order for modules to be fast, domain-specific, and mandatory devices, they will be “wired in,” meaning that a module will be associated with fixed neural architecture. A further consequence of this is that a module will exhibit characteristic breakdown patterns when its specialized neural circuitry fails. All of these properties entail that a module will exhibit informational encapsulation: it will be unaffected by other modules or by higher-level results of isotropic processes. In other words, modules are cognitively impenetrable (Pylyshyn, 1984). Clearly any function that can be shown to be modular in Fodor’s sense must be a component of the architecture.
Fodor (1983) argued that modules should exist for all perceptual modalities, and that there should also be modular processing for language. There is a great deal of evidence in support of this position.
For example, consider visual perception. Evidence from anatomy, physiology, and clinical neuroscience has led many researchers to suggest that there exist two distinct pathways in the human visual system (Livingstone & Hubel, 1988; Maunsell & Newsome, 1987; Ungerleider & Mishkin, 1982). One is specialised for processing visual form, i.e., detecting an object’s appearance: the “what pathway.” The other is specialised for processing visual motion, i.e., detecting an object’s changing location: the “where pathway.” This evidence suggests that object appearance and object motion are processed by distinct modules. Furthermore, these modules are likely hierarchical, comprising systems of smaller modules. More than 30 distinct visual processing modules, each responsible for processing a very specific kind of information, have been identified (van Essen, Anderson, & Felleman, 1992).
A similar case can be made for the modularity of language. Indeed, the first biological evidence for the localization of brain function was Paul Broca’s presentation of the aphasic patient Tan’s brain to the Paris Société d’Anthropologie in 1861 (Gross, 1998). This patient had profound agrammatism; his brain exhibited clear abnormalities in a region of the frontal lobe now known as Broca’s area. The Chomskyan tradition in linguistics has long argued for the distinct biological existence of a language faculty (Chomsky, 1957, 1965, 1966). The hierarchical nature of this faculty—the notion that it is a system of independent submodules— has been a fruitful avenue of research (Garfield, 1987); the biological nature of this system, and theories about how it evolved, are receiving considerable contemporary attention (Fitch, Hauser, & Chomsky, 2005; Hauser, Chomsky, & Fitch, 2002). Current accounts of neural processing of auditory signals suggest that there are two pathways analogous to the what-where streams in vision, although the distinction between the two is more complex because both are sensitive to speech (Rauschecker & Scott, 2009).
From both Fodor’s (1983) definition of modularity and the vision and language examples briefly mentioned above, it is clear that neuroscience is a key source of evidence about modularity. “The intimate association of modular systems with neural hardwiring is pretty much what you would expect given the assumption that the key to modularity is informational encapsulation” (p. 98). This is why modularity is an important complement to architectural equivalence: it is supported by seeking data from cognitive neuroscience that complements the cognitive penetrability criterion.
The relation between modular processing and evidence from cognitive neuroscience leads us to a controversy that has arisen from Fodor’s (1983) version of modularity. We have listed a number of properties that Fodor argues are true of modules. However, Fodor also argues that these same properties cannot be true of central or isotropic processing. Isotropic processes are not informationally encapsulated, domain specific, fast, mandatory, associated with fixed neural architecture, or cognitively impenetrable. Fodor proceeds to conclude that because isotropic processes do not have these properties, cognitive science will not be able to explain them.
I should like to propose a generalization; one which I fondly hope will someday come to be known as ‘Fodor’s First Law of the Nonexistence of Cognitive Science.’ It goes like this: the more global (e.g., the more isotropic) a cognitive process is, the less anybody understands it. (Fodor, 1983, p. 107)
Fodor’s (1983) position that explanations of isotropic processes are impossible poses a strong challenge to a different field of study, called evolutionary psychology (Barkow, Cosmides, & Tooby, 1992), which is controversial in its own right (Stanovich, 2004). Evolutionary psychology attempts to explain how psychological processes arose via evolution. This requires the assumption that these processes provide some survival advantage and are associated with a biological substrate, so that they are subject to natural selection. However, many of the processes of particular interest to evolutionary psychologists involve reasoning, and so would be classified by Fodor as being isotropic. If they are isotropic, and if Fodor’s first law of the nonexistence of cognitive science is true, then evolutionary psychology is not possible.
Evolutionary psychologists have responded to this situation by proposing the massive modularity hypothesis (Carruthers, 2006; Pinker, 1994, 1997), an alternative to Fodor (1983). According to the massive modularity hypothesis, most cognitive processes—including high-level reasoning—are modular. For instance, Pinker (1994, p. 420) has proposed that modular processing underlies intuitive mechanics, intuitive biology, intuitive psychology, and the self-concept. The mind is “a collection of instincts adapted for solving evolutionarily significant problems—the mind as a Swiss Army knife” (p. 420). The massive modularity hypothesis proposes to eliminate isotropic processing from cognition, spawning modern discussions about how modules should be defined and about what kinds of processing are modular or not (Barrett & Kurzban, 2006; Bennett, 1990; Fodor, 2000; Samuels, 1998).
The modern debate about massive modularity indicates that the concept of module is firmly entrenched in cognitive science. The issue in the debate is not the existence of modularity, but is rather modularity’s extent. With this in mind, let us return to the methodological issue at hand, investigating the nature of the architecture. To briefly introduce the types of evidence that can be employed to support claims about modularity, let us consider another topic made controversial by proponents of massive modularity: the modularity of musical cognition.
As we have seen, massive modularity theorists see a pervasive degree of specialization and localization in the cognitive architecture. However, one content area that these theorists have resisted classifying as modular is musical cognition. One reason for this is that evolutionary psychologists are hard pressed to explain how music benefits survival. “As far as biological cause and effect are concerned, music is useless. It shows no signs of design for attaining a goal such as long life, grandchildren, or accurate perception and prediction of the world” (Pinker, 1997, p. 528). As a result, musical processing is instead portrayed as a tangential, nonmodular function that is inconsequentially related to other modular processes. “Music is auditory cheesecake, an exquisite confection crafted to tickle the sensitive spots of at least six of our mental faculties” (p. 534).
Not surprisingly, researchers interested in studying music have reacted strongly against this position. There is currently a growing literature that provides support for the notion that musical processing—in particular the perception of rhythm and of tonal profile—is indeed modular (Alossa & Castelli, 2009; Peretz, 2009; Peretz & Coltheart, 2003; Peretz & Hyde, 2003; Peretz & Zatorre, 2003, 2005). The types of evidence reported in this literature are good examples of the ways in which cognitive neuroscience can defend claims about modularity.
One class of evidence concerns dissociations that are observed in patients who have had some type of brain injury. In a dissociation, an injury to one region of the brain disrupts one kind of processing but leaves another unaffected, suggesting that the two kinds of processing are separate and are associated with different brain areas. Those who do not believe in the modularity of music tend to see music as being strongly related to language. However, musical processing and language processing have been shown to be dissociated. Vascular damage to the left hemisphere of the Russian composer Shebalin produced severe language deficits but did not affect his ability to continue composing some of his best works (Luria, Tsvetkova, & Futer, 1965). Reciprocal evidence indicates that there is in fact a double dissociation between language and music: bilateral damage to the brain of another patient produced severe problems in music memory and perception but did not affect her language (Peretz et al., 1994).
Another class of evidence comes from dissociations involving music that are related to congenital brain disorders. Musical savants demonstrate such a dissociation: they exhibit low general intelligence but at the same time demonstrate exceptional musical abilities (Miller, 1989; Pring, Woolf, & Tadic, 2008). Again, the dissociation is double. Approximately 4 percent of the population is tone deaf, suffering from what is called congenital amusia (Ayotte, Peretz, & Hyde, 2002; Peretz et al., 2002). Congenital amusics are musically impaired, but they are of normal intelligence and have normal language abilities. For instance, they have normal spatial abilities (Tillmann et al., 2010), and while they have short-term memory problems for musical stimuli, they have normal short-term memory for verbal materials (Tillmann, Schulze, & Foxton, 2009). Finally, there is evidence that congenital amusia is genetically inherited, which would be a plausible consequence of the modularity of musical processing (Peretz, Cummings, & Dube, 2007).
A third class of evidence that cognitive neuroscience can provide about modularity comes from a variety of techniques that noninvasively measure regional brain activity as information processing occurs (Cabeza & Kingstone, 2006; Gazzaniga, 2000). Brain imaging data can be used to seek dissociations and attempt to localize function. For instance, by seeing which regions of the brain are active during musical processing but not active when a nonmusical control task is performed, a researcher can attempt to associate musical functions with particular areas of the brain.
Brain imaging techniques have been employed by cognitive neuroscientists interested in studying musical processing (Peretz & Zatorre, 2003). Surprisingly, given the other extensive evidence concerning the dissociation of music, this kind of evidence has not provided as compelling a case for the localization of musical processing in the human brain (Warren, 2008). Instead, it appears to reveal that musical processing invokes activity in many different areas throughout the brain (Schuppert et al., 2000). “The evidence of brain imaging studies has demonstrated that music shares basic brain circuitry with other types of complex sound, and no single brain area can be regarded as exclusively dedicated to music” (Warren, 2008, p. 34). This is perhaps to be expected, under the assumption that “musical cognition” is itself a fairly broad notion, and that it is likely accomplished by a variety of subprocesses, many of which are plausibly modular. Advances in imaging studies of musical cognition may require considering finer distinctions between musical and nonmusical processing, such as studying the areas of the brain involved with singing versus those involved with speech (Peretz, 2009).
Disparities between behavioural evidence concerning dissociations and evidence from brain imaging studies do not necessarily bring the issue of modularity into question. These disparities might simply reveal the complicated relationship between the functional and the implementational nature of an architectural component. For instance, imagine that the cognitive architecture is indeed a production system. An individual production, functionally speaking, is ultra-modular. However, it is possible to create systems in which the modular functions of different productions do not map onto localized physical components, but are instead defined as a constellation of physical properties distributed over many components (Dawson et al., 2000). We consider this issue in a later chapter where the relationship between production systems and connectionist networks is investigated in more detail.
Nevertheless, the importance of using evidence from neuroscience to support claims about modularity cannot be overstated. In the absence of such evidence, arguments that some function is modular can be easily undermined.
For instance, Gallistel (1990) has argued that the processing of geometric cues by animals facing the reorientation task is modular in Fodor’s (1983) sense. This is because the processing of geometric cues is mandatory (as evidenced by the pervasiveness of rotational error) and not influenced by “information about surfaces other than their relative positions” (Gallistel, 1990, p. 208). However, a variety of theories that are explicitly nonmodular are capable of generating appropriate rotational error in a variety of conditions (Dawson, Dupuis, & Wilson, 2010; Dawson et al., 2010; Miller, 2009; Miller & Shettleworth, 2007, 2008; Nolfi, 2002). As a result, the modularity of geometric cue processing is being seriously re-evaluated (Cheng, 2008).
In summary, many researchers agree that the architecture of cognition is modular. A variety of different kinds of evidence can be marshaled to support the claim that some function is modular and therefore part of the architecture. This evidence is different from, and can complement, evidence about cognitive penetrability. Establishing the nature of the architecture is nonetheless challenging and requires combining varieties of evidence from behavioural and cognitive neuroscientific studies.
3.16: Reverse Engineering
Methodologically speaking, what is classical cognitive science? The goal of classical cognitive science is to explain an agent’s cognitive abilities. Given an intact, fully functioning cognitive agent, the classical cognitive scientist must construct a theory of the agent’s internal processes. The working hypothesis is that this theory will take the form of a physical symbol system. Fleshing this hypothesis out will involve proposing a theory, and hopefully a working computer simulation, that will make explicit proposals about the agent’s symbol structures, primitive processes, and system of control.
Given this scenario, a classical cognitive scientist will almost inevitably engage in some form of reverse engineering.
In reverse engineering, one figures out what a machine was designed to do. Reverse engineering is what the boffins at Sony do when a new product is announced by Panasonic, or vice versa. They buy one, bring it back to the lab, take a screwdriver to it, and try to figure out what all the parts are for and how they combine to make the device work. (Pinker, 1997, p. 21)
The reverse engineering conducted by classical cognitive science is complicated by the fact that one can’t simply take cognitive agents apart with a screwdriver to learn about their design. However, the assumption that the agent is a physical symbol system provides solid guidance and an effective methodology.
The methodology employed by classical cognitive science is called functional analysis (Cummins, 1975, 1983). Functional analysis is a top-down form of reverse engineering that maps nicely onto the multiple levels of investigation that were introduced in Chapter 2.
Functional analysis begins by choosing and defining a function of interest to explain. Defining a function of interest entails an investigation at the computational level. What problem is being solved? Why do we say this problem is being solved and not some other? What constraining properties can be assumed to aid the solution to the problem? For instance, we saw earlier that a computational theory of language learning (identifying a grammar in the limit) might be used to motivate possible properties that must be true of a language or a language learner.
The next step in a functional analysis is to decompose the function of interest into a set of subcomponents that has three key properties. First, each subcomponent is defined functionally, not physically. Second, each subcomponent is simpler than the original function. Third, the organization of the subcomponents—the flow of information from one component to another—is capable of producing the input-output behaviour of the original function of interest. “Functional analysis consists in analyzing a disposition into a number of less problematic dispositions such that the programmed manifestation of these analyzing dispositions amounts to a manifestation of the analyzed disposition” (Cummins, 1983, p. 28). These properties permit the functional analysis to proceed in such a way that Ryle’s regress will be avoided, and that eventually the homunculi produced by the analysis (i.e., the functional subcomponents) can be discharged, as was discussed in Chapter 2.
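The character of this analytic step can be conveyed with a toy example. The sketch below is purely illustrative (the capacity and its subfunctions are invented, and none is offered as a serious psychological proposal): a simple “recall” disposition is analyzed into three simpler, functionally defined subfunctions whose programmed organization reproduces the original input-output behaviour.

```python
# A toy functional analysis, for illustration only: the capacity "recall"
# is analyzed into three simpler, functionally defined subfunctions whose
# organized execution reproduces its input-output behaviour.

def encode(item):
    """Subfunction 1: convert a stimulus into an internal code."""
    return item.lower()

def store(memory, code):
    """Subfunction 2: place an internal code into a memory store."""
    memory.append(code)

def retrieve(memory):
    """Subfunction 3: read the stored codes back out."""
    return list(memory)

def recall(items):
    """The analyzed capacity: nothing over and above the programmed
    interaction of the simpler subfunctions, each of which is defined
    by what it does, not by how it is physically built."""
    memory = []
    for item in items:
        store(memory, encode(item))
    return retrieve(memory)

print(recall(["CAT", "Dog", "fish"]))   # ['cat', 'dog', 'fish']
```

Note that each subfunction is simpler than the analyzed capacity, and that the explanation appeals only to their organization, which is why such a decomposition avoids positing an unanalyzed homunculus.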
The analytic stage of a functional analysis belongs to the algorithmic level of analysis. This is because the organized system of subfunctions produced at this stage is identical to a program or algorithm for producing the overall input-output behaviour of the agent. However, the internal cognitive processes employed by the agent cannot be directly observed. What methods can be used to carve up the agent’s behaviour into an organized set of functions? In other words, how can observations of behaviour support decisions about functional decomposition?
The answer to this question reveals why the analytic stage belongs to the algorithmic level of analysis. It is because the empirical methods of cognitive psychology are designed to motivate and validate functional decompositions.
For example, consider the invention that has become known as the modal model of memory (Baddeley, 1986), which was one of the triumphs of cognitivism in the 1960s (Shiffrin & Atkinson, 1969; Waugh & Norman, 1965). According to this model, to-be-remembered information is initially kept in primary memory, which has a small capacity and short duration, and codes items acoustically. Without additional processing, items will quickly decay from primary memory. However, maintenance rehearsal, in which an item from memory is spoken aloud and thus fed back to the memory in renewed form, will prevent this decay. With additional processing like maintenance rehearsal, some of the items in primary memory pass into secondary memory, which has large capacity and long duration, and employs a semantic code.
The modal memory model was inspired and supported by experimental data. In a standard free-recall experiment, subjects are asked to remember the items from a presented list (Glanzer & Cunitz, 1966; Postman & Phillips, 1965). The first few items presented are better remembered than the items presented in the middle— the primacy effect. Also, the last few items presented are better remembered than the middle items—the recency effect. Further experiments demonstrated a functional dissociation between the primacy and recency effects: variables that influenced one effect left the other unaffected. For example, introducing a delay before subjects recalled the list eliminated the recency effect but not the primacy effect (Glanzer & Cunitz, 1966). If a list was presented very quickly, or was constructed from low-frequency words, the primacy effect—but not the recency effect—vanished (Glanzer, 1972). To explain such functional dissociation, researchers assumed an organized system of submemories (the modal model), each with different properties.
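A minimal simulation shows how this organization of submemories yields the serial position curve. The sketch below is loosely in the spirit of rehearsal-buffer versions of the modal model; the buffer capacity and transfer probability are invented values, chosen only to make the effects visible.

```python
import random

random.seed(1)

LIST_LENGTH = 15       # items per studied list
BUFFER_SIZE = 3        # capacity of primary memory (invented value)
P_TRANSFER = 0.15      # chance, per rehearsal, that an item is copied
                       # into secondary memory (invented value)

def free_recall_trial(delay=False):
    """One simulated trial: items enter a limited rehearsal buffer, and
    rehearsal probabilistically copies them into secondary memory."""
    buffer, secondary = [], set()
    for item in range(LIST_LENGTH):
        if len(buffer) == BUFFER_SIZE:
            buffer.pop(random.randrange(BUFFER_SIZE))   # displacement
        buffer.append(item)
        for rehearsed in buffer:                        # maintenance rehearsal
            if random.random() < P_TRANSFER:
                secondary.add(rehearsed)
    if delay:
        buffer = []    # a filled delay empties primary memory before recall
    return secondary.union(buffer)

def recall_curve(trials=2000, **kwargs):
    counts = [0] * LIST_LENGTH
    for _ in range(trials):
        for item in free_recall_trial(**kwargs):
            counts[item] += 1
    return [round(c / trials, 2) for c in counts]

print("immediate recall:", recall_curve())
print("delayed recall:  ", recall_curve(delay=True))
# Immediate recall shows primacy (early items enjoy the most rehearsal)
# and recency (late items are still in the buffer); the delay condition
# eliminates recency while leaving primacy intact.
```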
The analytic stage of a functional analysis is iterative. That is, one can take any of the subfunctions that have resulted from one stage of analysis and decompose it into an organized system of even simpler sub-subfunctions. For instance, as experimental techniques were refined, the 1960s notion of primary memory has been decomposed into an organized set of subfunctions that together produce what is called working memory (Baddeley, 1986, 1990). Working memory is decomposed into three basic subfunctions. The central executive is responsible for operating on symbols stored in buffers, as well as for determining how attention will be allocated across simultaneously ongoing tasks. The visuospatial buffer stores visual information. The phonological loop is used to store verbal (or speech-like) information. The phonological loop has been further decomposed into subfunctions. One is a phonological store that acts as a memory by holding symbols. The other is a rehearsal process that preserves items in the phonological store.
We saw in Chapter 2 that functional decomposition cannot proceed indefinitely if the analysis is to serve as a scientific explanation. Some principles must be applied to stop the decomposition in order to exit Ryle’s regress. For Cummins’ (1983) functional analysis, this occurs with a final stage—causal subsumption. To causally subsume a function is to explain how physical mechanisms bring the function into being. “A functional analysis is complete when the program specifying it is explicable via instantiation—i.e., when we can show how the program is executed by the system whose capacities are being explained” (p. 35). Cummins called seeking such explanations of functions the subsumption strategy. Clearly the subsumption strategy is part of an architectural level of investigation, employing evidence involving cognitive impenetrability and modularity. It also leans heavily on evidence gathered from an implementational investigation (i.e., neuroscience).
From a methodological perspective, classical cognitive science performs reverse engineering, in the form of functional analysis, to develop a theory (and likely a simulation) of cognitive processing. This enterprise involves both formal and empirical methods as well as the multiple levels of investigation described in Chapter 2. At the same time, classical cognitive science will also be involved in collecting data to establish the strong equivalence between the theory and the agent by establishing links between the two at the different levels of analysis, as we have been discussing in the preceding pages of the current chapter.
3.17: What is Classical Cognitive Science?
The purpose of the current chapter was to introduce the foundations of classical cognitive science, the “flavour” of cognitive science that first emerged in the late 1950s and the school of thought that still dominates modern cognitive science. The central claim of classical cognitive science is that “cognition is computation.” This short slogan has been unpacked in this chapter to reveal a number of philosophical assumptions, which guide a variety of methodological practices.
The claim that cognition is computation, put in its modern form, is identical to the claim that cognition is information processing. Furthermore, classical cognitive science views such information processing in a particular way: it is processing that is identical to that carried out by a physical symbol system, a device like a modern digital computer. As a result, classical cognitive science adopts the representational theory of mind. It assumes that the mind contains internal representations (i.e., symbolic expressions) that are in turn manipulated by rules or processes that are part of a mental logic or a (programming) language of thought. Further to this, a control mechanism must be proposed to explain how the cognitive system chooses what operation to carry out at any given time.
The classical view of cognition can be described as the merging of two distinct traditions. First, many of its core ideas—appeals to rationalism, computation, innateness—are rooted in Cartesian philosophy. Second, it rejects Cartesian dualism by attempting to provide materialist explanations of representational processing. The merging of rationality and materialism is exemplified by the physical symbol system hypothesis. A consequence of this is that the theories of classical cognitive science are frequently presented in the form of working computer simulations.
In Chapter 2, we saw that the basic properties of information processing systems required that they be explained at multiple levels. Not surprisingly, classical cognitive scientists conduct their business at multiple levels of analysis, using formal methods to answer computational questions, using simulation and behavioural methods to answer algorithmic questions, and using a variety of behavioural and biological methods to answer questions about architecture and implementation.
The multidisciplinary nature of classical cognitive science is revealed in its most typical methodology, a version of reverse engineering called functional analysis. We have seen that the different stages of this type of analysis are strongly related to the multiple levels of investigations that were discussed in Chapter 2. The same relationship to these levels is revealed in the comparative nature of classical cognitive science as it attempts to establish the strong equivalence between a model and a modelled agent.
The success of classical cognitive science is revealed by its development of powerful theories and models that have been applied to an incredibly broad range of phenomena, from language to problem solving to perception. This chapter has emphasized some of the foundational ideas of classical cognitive science at the expense of detailing its many empirical successes. Fortunately, a variety of excellent surveys exist to provide a more balanced account of classical cognitive science’s practical success (Bechtel, Graham, & Balota, 1998; Bermúdez, 2010; Boden, 2006; Gleitman & Liberman, 1995; Green, 1996; Kosslyn & Osherson, 1995; Lepore & Pylyshyn, 1999; Posner, 1991; Smith & Osherson, 1995; Stillings, 1995; Stillings et al., 1987; Thagard, 1996; Wilson & Keil, 1999).
Nevertheless, classical cognitive science is but one perspective, and it is not without its criticisms and alternatives. Some cognitive scientists have reacted against its avoidance of the implementational (because of multiple realization), its reliance on the structure/process distinction, its hypothesis that cognitive information processing is analogous to that of a digital computer, its requirement of internal representations, and its dependence on the sense-think-act cycle. Chapter 4 turns to the foundations of a different “flavour” of cognitive science that is a reaction against the classical approach: connectionist cognitive science.
The previous chapter introduced the elements of classical cognitive science, the school of thought that dominated cognitive science when it arose in the 1950s and which still dominates the discipline today. However, as cognitive science has matured, some researchers have questioned the classical approach. The reason for this is that in the 1950s, the only plausible definition of information processing was that provided by a relatively new invention, the electronic digital computer. Since the 1950s, alternative notions of information processing have arisen, and these new notions have formed the basis for alternative approaches to cognition.
The purpose of the current chapter is to present the core elements of one of these alternatives, connectionist cognitive science. The chapter begins with several sections (4.1 through 4.4) that describe the core properties of connectionism and of the artificial neural networks that connectionists use to model cognitive phenomena. These elements are presented as a reaction against the foundational assumptions of classical cognitive science. Many of these elements are inspired by issues related to the implementational level of investigation. That is, connectionists aim to develop biologically plausible or neuronally inspired models of information processing.
The chapter then proceeds with an examination of connectionism at the remaining three levels of investigation. The computational level of analysis is the focus of Sections 4.5 through 4.7. These sections investigate the kinds of tasks that artificial neural networks can accomplish and relate them to those that can be accomplished by the devices that have inspired the classical approach. The general theme of these sections is that artificial neural networks belong to the class of universal machines.
Sections 4.8 through 4.13 focus on the algorithmic level of investigation of connectionist theories. Modern artificial neural networks employ several layers of processing units that create interesting representations which are used to mediate input-output relationships. At the algorithmic level, one must explore the internal structure of these representations in an attempt to inform cognitive theory. These sections illustrate a number of different techniques for this investigation.
Architectural issues are the topics of Sections 4.14 through 4.17. In particular, these sections show that researchers must seek the simplest possible networks for solving tasks of interest, and they point out that some interesting cognitive phenomena can be captured by extremely simple networks.
The chapter ends with an examination of the properties of connectionist cognitive science, contrasting the various topics introduced in the current chapter with those that were explored in Chapter 3 on classical cognitive science.
4.02: Nurture versus Nature
The second chapter of John Locke’s (1977) An Essay Concerning Human Understanding, first published in 1690, begins as follows:
It is an established opinion among some men that there are in the understanding certain innate principles; some primary notions, characters, as it were, stamped upon the mind of man, which the soul receives in its very first being, and brings into the world with it. (Locke, 1977, p. 17)
Locke’s most famous work was a reaction against this view; of the “some men” being referred to, the most prominent was Descartes himself (Thilly, 1900).
Locke’s Essay criticized Cartesian philosophy, questioning its fundamental teachings, its core principles and their necessary implications, and its arguments for innate ideas, not to mention all scholars who maintained the existence of innate ideas (Thilly, 1900). Locke’s goal was to replace Cartesian rationalism with empiricism, the view that the source of ideas was experience. Locke (1977) aimed to show “how men, barely by the use of their natural faculties, may attain to all of the knowledge they have without the help of any innate impressions” (p. 17). Locke argued for experience over innateness, for nurture over nature.
The empiricism of Locke and his descendants provided a viable and popular alternative to Cartesian philosophy (Aune, 1970). It was also a primary influence on some of the psychological theories that appeared in the late nineteenth and early twentieth centuries (Warren, 1921). Thus it should be no surprise that empiricism is reflected in a different form of cognitive science, connectionism. Furthermore, just as empiricism challenged most of the key ideas of rationalism, connectionist cognitive science can be seen as challenging many of the elements of classical cognitive science.
Surprisingly, the primary concern of connectionist cognitive science is not classical cognitive science’s nativism. It is instead the classical approach’s excessive functionalism, due largely to its acceptance of the multiple realization argument. Logic gates, the core element of digital computers, are hardware independent because different physical mechanisms could be used to bring the two-valued logic into being (Hillis, 1998). The notion of a universal machine is an abstract, logical one (Newell, 1980), which is why physical symbol systems, computers, or universal machines can be physically realized using LEGO (Agulló et al., 2003), electric train sets (Stewart, 1994), gears (Swade, 1993), hydraulic valves (Hillis, 1998) or silicon chips (Reid, 2001). Physical constraints on computation do not seem to play an important role in classical cognitive science.
To connectionist cognitive science, the multiple realization argument is flawed because connectionists believe that the information processing responsible for human cognition depends critically on the properties of particular hardware, the brain. The characteristics of the brain place constraints on the kinds of computations that it can perform and on the manner in which they are performed (Bechtel & Abrahamsen, 2002; Churchland, Koch, & Sejnowski, 1990; Churchland & Sejnowski, 1992; Clark, 1989, 1993; Feldman & Ballard, 1982).
Brains have long been viewed as being different kinds of information processors than electronic computers because of differences in componentry (von Neumann, 1958). While electronic computers use a small number of fast components, the brain consists of a large number of very slow components, that is, neurons. As a result, the brain must be a parallel processing device that “will tend to pick up as many logical (or informational) items as possible simultaneously, and process them simultaneously” (von Neumann, 1958, p. 51).
Von Neumann (1958) argued that neural information processing would be far less precise, in terms of decimal point precision, than electronic information processing. However, this low level of neural precision would be complemented by a comparatively high level of reliability, where noise or missing information would have far less effect than it would for electronic computers. Given that the basic architecture of the brain involves many connections amongst many elementary components, and that these connections serve as a memory, the brain’s memory capacity should also far exceed that of digital computers.
The differences between electronic and brain-like information processing are at the root of connectionist cognitive science’s reaction against classic cognitive science. The classical approach has a long history of grand futuristic predictions that fail to materialize (Dreyfus, 1992, p. 85): “Despite predictions, press releases, films, and warnings, artificial intelligence is a promise and not an accomplished fact.” Connectionist cognitive science argues that this pattern of failure is due to the fundamental assumptions of the classical approach that fail to capture the basic principles of human cognition.
Connectionists propose a very different theory of information processing— a potential paradigm shift (Schneider, 1987)—to remedy this situation. Even staunch critics of artificial intelligence research have indicated a certain sympathy with the connectionist view of information processing (Dreyfus & Dreyfus, 1988; Searle, 1992). “The fan club includes the most unlikely collection of people. . . . Almost everyone who is discontent with contemporary cognitive psychology and current ‘information processing’ models of the mind has rushed to embrace the ‘connectionist alternative’” (Fodor & Pylyshyn, 1988, p. 4).
What are the key problems that connectionists see in classical models? Classical models invoke serial processes, which make them far too slow to run on sluggish componentry (Feldman & Ballard, 1982). They involve explicit, local, and digital representations of both rules and symbols, making these models too brittle. “If in a digital system of notations a single pulse is missing, absolute perversion of meaning, i.e., nonsense, may result” (von Neumann, 1958, p. 78). Because of this brittleness, the behaviour of classical models does not degrade gracefully when presented with noisy inputs, and such models are not damage resistant. All of these issues arise from one underlying theme: classical algorithms reflect the kind of information processing carried out by electronic computers, not the kind that characterizes the brain. In short, classical theories are not biologically plausible.
Connectionist cognitive science “offers a radically different conception of the basic processing system of the mind-brain, one inspired by our knowledge of the nervous system” (Bechtel & Abrahamsen, 2002, p. 2). The basic medium of connectionism is a type of model called an artificial neural network, or a parallel distributed processing (PDP) network (McClelland & Rumelhart, 1986; Rumelhart & McClelland, 1986c). Artificial neural networks consist of a number of simple processors that perform basic calculations and communicate the results to other processors by sending signals through weighted connections. The processors operate in parallel, permitting fast computing even when slow componentry is involved. They exploit implicit, distributed, and redundant representations, making these networks not brittle. Because networks are not brittle, their behaviour degrades gracefully when presented with noisy inputs, and such models are damage resistant. These advantages accrue because artificial neural networks are intentionally biologically plausible or neuronally inspired.
Classical cognitive science develops models that are purely symbolic and which can be described as asserting propositions or performing logic. In contrast, connectionist cognitive science develops models that are subsymbolic (Smolensky, 1988) and which can be described as statistical pattern recognizers. Networks use representations (Dawson, 2004; Horgan & Tienson, 1996), but these representations do not have the syntactic structure of those found in classical models (Waskan & Bechtel, 1997). Let us take a moment to describe in a bit more detail the basic properties of artificial neural networks.
An artificial neural network is a computer simulation of a “brain-like” system of interconnected processing units (see Figures 4-1 and 4-5 later in this chapter). In general, such a network can be viewed as a multiple-layer system that generates a desired response to an input stimulus. That is, like the devices described by cybernetics (Ashby, 1956, 1960), an artificial neural network is a machine that computes a mapping between inputs and outputs.
A network’s stimulus or input pattern is provided by the environment and is encoded as a pattern of activity (i.e., a vector of numbers) in a set of input units. The response of the system, its output pattern, is represented as a pattern of activity in the network’s output units. In modern connectionism—sometimes called New Connectionism—there will be one or more intervening layers of processors in the network, called hidden units. Hidden units detect higher-order features in the input pattern, allowing the network to make a correct or appropriate response.
The behaviour of a processor in an artificial neural network, which is analogous to a neuron, can be characterized as follows. First, the processor computes the total signal (its net input) being sent to it by other processors in the network. Second, the unit uses an activation function to convert its net input into internal activity (usually a continuous number between 0 and 1) on the basis of this computed signal. Third, the unit converts its internal activity into an output signal, and sends this signal on to other processors. A network uses parallel processing because many, if not all, of its units will perform their operations simultaneously.
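These three steps translate directly into a few lines of code. The sketch below is a generic illustration rather than any particular published model; the weights and signals are arbitrary, and the logistic equation is used as the activation function, a common though not universal choice.

```python
import math

def net_input(signals, weights):
    """Step 1: sum the weighted signals arriving at the unit."""
    return sum(s * w for s, w in zip(signals, weights))

def activation(net):
    """Step 2: squash the net input into internal activity between 0 and 1
    with the logistic function, one common choice of activation function."""
    return 1.0 / (1.0 + math.exp(-net))

def unit_output(signals, weights):
    """Step 3: here the output signal is simply the internal activity;
    other networks apply a further output function."""
    return activation(net_input(signals, weights))

# Arbitrary illustrative numbers: two excitatory and one inhibitory connection.
incoming = [1.0, 0.5, 1.0]
weights = [0.8, 0.4, -1.2]   # sign gives connection type, size gives strength
print(unit_output(incoming, weights))   # net = -0.2, activity ≈ 0.45
```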
The signal sent by one processor to another is a number that is transmitted through a weighted connection, which is analogous to a synapse. The connection serves as a communication channel that amplifies or attenuates signals being sent through it, because these signals are multiplied by the weight associated with the connection. The weight is a number that defines the nature and strength of the connection. For example, inhibitory connections have negative weights, and excitatory connections have positive weights. Strong connections have strong weights (i.e., the absolute value of the weight is large), while weak connections have near-zero weights.
The pattern of connectivity in a PDP network (i.e., the network’s entire set of connection weights) defines how signals flow between the processors. As a result, a network’s connection weights are analogous to a program in a conventional computer (Smolensky, 1988). However, a network’s “program” is not of the same type that defines a classical model. A network’s program does not reflect the classical structure/process distinction, because networks do not employ either explicit symbols or rules. Instead, a network’s program is a set of causal or associative links from signaling processors to receiving processors. The activity that is produced in the receiving units is literally caused by having an input pattern of activity modulated by an array of connection weights between units. In this sense, connectionist models seem markedly associationist in nature (Bechtel, 1985); they can be comfortably related to the old associationist psychology (Warren, 1921).
Artificial neural networks are not necessarily embodiments of empiricist philosophy. Indeed, the earliest artificial neural networks did not learn from experience; they were nativist in the sense that they had to have their connection weights “hand wired” by a designer (McCulloch & Pitts, 1943). However, their associationist characteristics resulted in a natural tendency for artificial neural networks to become the face of modern empiricism. This is because associationism has always been strongly linked to empiricism; empiricist philosophers invoked various laws of association to explain how complex ideas could be constructed from the knowledge provided by experience (Warren, 1921). By the late 1950s, when computers were being used to bring networks to life, networks were explicitly linked to empiricism (Rosenblatt, 1958). Rosenblatt’s artificial neural networks were not hand wired. Instead, they learned from experience to set the values of their connection weights.
What does it mean to say that artificial neural networks are empiricist? A famous passage from Locke (1977, p. 54) highlights two key elements: “Let us then suppose the mind to be, as we say, white paper, void of all characters, without any idea, how comes it to be furnished? . . . To this I answer, in one word, from experience.”
The first element in the above quote is the “white paper,” often described as the tabula rasa, or the blank slate: the notion of a mind being blank in the absence of experience. Modern connectionist networks can be described as endorsing the notion of the blank slate (Pinker, 2002). This is because prior to learning, the pattern of connections in modern networks has no pre-existing structure. The networks either start literally as blank slates, with all connection weights being equal to zero (Anderson et al., 1977; Eich, 1982; Hinton & Anderson, 1981), or they start with all connection weights being assigned small, randomly selected values (Rumelhart, Hinton, & Williams, 1986a, 1986b).
The second element in Locke’s quote is that the source of ideas or knowledge or structure is experience. Connectionist learning rules provide a modern embodiment of this notion. Artificial neural networks are exposed to environmental stimulation— activation of their input units—which results in changes to connection weights. These changes furnish a network’s blank slate, resulting in a pattern of connectivity that represents knowledge and implements a particular input-output mapping.
In some systems, called self-organizing networks, experience shapes connectivity via unsupervised learning (Carpenter & Grossberg, 1992; Grossberg, 1980, 1987, 1988; Kohonen, 1977, 1984). When learning is unsupervised, networks are only provided with input patterns. They are not presented with desired outputs that are paired with each input pattern. In unsupervised learning, each presented pattern causes activity in output units; this activity is often further refined by a winner-take-all competition in which a single output unit wins the right to be paired with the current input pattern. Once the output unit is selected via internal network dynamics, its connection weights, and possibly the weights of neighbouring output units, are updated via a learning rule.
Networks whose connection weights are modified via unsupervised learning develop sensitivity to statistical regularities in the inputs and organize their output units to reflect these regularities. For instance, in a famous kind of self-organizing network called a Kohonen network (Kohonen, 1984), output units are arranged in a two-dimensional grid. Unsupervised learning causes the grid to organize itself into a map that reveals the discovered structure of the inputs, where related patterns produce neighbouring activity in the output map. For example, when such networks are presented with musical inputs, they often produce output maps that are organized according to the musical circle of fifths (Griffith & Todd, 1999; Todd & Loy, 1991).
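The core of such unsupervised learning can be sketched compactly. The code below implements a bare-bones winner-take-all update in the spirit of Kohonen’s rule, without the output grid and neighbourhood function of a full self-organizing map; the learning rate and input patterns are invented for illustration.

```python
import math
import random

random.seed(0)

N_INPUTS, N_OUTPUTS = 4, 3
LEARNING_RATE = 0.2   # invented value

# Start each output unit with small random incoming weights.
weights = [[random.uniform(-0.1, 0.1) for _ in range(N_INPUTS)]
           for _ in range(N_OUTPUTS)]

def winner(pattern):
    """Winner-take-all: the output unit whose weight vector lies closest
    to the input pattern wins the competition."""
    dists = [math.dist(w, pattern) for w in weights]
    return dists.index(min(dists))

def learn(pattern):
    """Move only the winner's weights toward the current input, so that
    unit becomes ever more tuned to patterns of this kind."""
    j = winner(pattern)
    weights[j] = [w + LEARNING_RATE * (x - w)
                  for w, x in zip(weights[j], pattern)]

# Two clusters of unlabelled input patterns; no desired outputs are given.
patterns = [[1, 1, 0, 0], [0.9, 1, 0.1, 0], [0, 0, 1, 1], [0, 0.1, 1, 0.9]]
for _ in range(50):
    for p in patterns:
        learn(p)

for p in patterns:
    print(p, "-> output unit", winner(p))
# After training, patterns from the same cluster activate the same unit:
# the network has discovered the statistical structure of its inputs.
```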
In cognitive science, most networks reported in the literature are not self-organizing and are not structured via unsupervised learning. Instead, they are networks that are instructed to mediate a desired input-output mapping. This is accomplished via supervised learning, in which it is assumed that the network has an external teacher. The network is presented with an input pattern and produces a response to it. The teacher compares the response generated by the network to the desired response, usually by calculating the amount of error associated with each output unit. The teacher then provides this error as feedback to the network. A learning rule uses the error feedback to modify weights in such a way that the next time this pattern is presented to the network, the amount of error it produces will be smaller.
A variety of learning rules, including the delta rule (Rosenblatt, 1958, 1962; Stone, 1986; Widrow, 1962; Widrow & Hoff, 1960) and the generalized delta rule (Rumelhart, Hinton, & Williams, 1986b), are supervised learning rules that work by correcting network errors. (The generalized delta rule is perhaps the most popular learning rule in modern connectionism and is discussed in more detail in Section 4.9.) This kind of learning involves the repeated presentation of a number of input-output pattern pairs, called a training set. Ideally, with enough presentations of a training set, the amount of error produced for each member of the training set will be negligible, and it can be said that the network has learned the desired input-output mapping. Because these techniques require many presentations of a set of patterns for learning to be completed, they have sometimes been criticized as examples of “slow learning” (Carpenter, 1989).
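A minimal sketch of error-correcting supervised learning with the delta rule might look like the following. The linear output units, learning rate, epoch count, and toy training set are simplifying assumptions made for illustration.

```python
import numpy as np

def train_delta(inputs, targets, lr=0.1, epochs=100):
    """Supervised learning: on each presentation, the 'teacher'
    computes the error for every output unit, and weights change
    in proportion to that error."""
    n_in, n_out = inputs.shape[1], targets.shape[1]
    w = np.zeros((n_in, n_out))           # blank slate
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = x @ w                     # network's response (linear units)
            error = t - y                 # desired minus actual response
            w += lr * np.outer(x, error)  # delta rule weight update
    return w

# A tiny training set of input-output pattern pairs.
inputs = np.array([[1.0, 0.0], [0.0, 1.0]])
targets = np.array([[0.0, 1.0], [1.0, 0.0]])
w = train_delta(inputs, targets)
```

The repeated sweeps through the training set illustrate why such procedures have been labelled “slow learning”: error shrinks gradually over many presentations rather than in a single step.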
Connectionism’s empiricist and associationist nature casts it close to the very position that classical cognitivism reacted against: psychological behaviourism (Miller, 2003). Modern classical arguments against connectionist cognitive science (Fodor & Pylyshyn, 1988) cover much of the same ground as arguments against behaviourist and associationist accounts of language (Bever, Fodor, & Garrett, 1968; Chomsky, 1957, 1959a, 1959b, 1965). That is, classical cognitive scientists argue that artificial neural networks, like their associationist cousins, do not have the computational power to capture the kinds of regularities modelled with recursive rule systems.
However, these arguments against connectionism are flawed. We see in later sections that computational analyses of artificial neural networks have proven that they too belong to the class “universal machine.” As a result, the kinds of input-output mappings that have been realized in artificial neural networks are both vast and diverse. One can find connectionist models in every research domain that has also been explored by classical cognitive scientists. Even critics of connectionism admit that “the study of connectionist machines has led to a number of striking and unanticipated findings; it’s surprising how much computing can be done with a uniform network of simple interconnected elements” (Fodor & Pylyshyn, 1988, p. 6).
That connectionist models can produce unanticipated results is a direct result of their empiricist nature. Unlike their classical counterparts, connectionist researchers do not require a fully specified theory of how a task is accomplished before modelling begins (Hillis, 1988). Instead, they can let a learning rule discover how to mediate a desired input-output mapping. Connectionist learning rules serve as powerful methods for developing new algorithms of interest to cognitive science. Hillis (1988, p. 176) has noted that artificial neural networks allow “for the possibility of constructing intelligence without first understanding it.”
One problem with connectionist cognitive science is that the algorithms that learning rules discover are extremely difficult to retrieve from a trained network (Dawson, 1998, 2004, 2009; Dawson & Shamanski, 1994; McCloskey, 1991; Mozer & Smolensky, 1989; Seidenberg, 1993). This is because these algorithms involve distributed, parallel interactions amongst highly nonlinear elements. “One thing that connectionist networks have in common with brains is that if you open them up and peer inside, all you can see is a big pile of goo” (Mozer & Smolensky, 1989, p. 3).
In the early days of modern connectionist cognitive science, this was not a concern. This was a period of what has been called “gee whiz” connectionism (Dawson, 2009), in which connectionists modelled phenomena that were typically described in terms of rule-governed symbol manipulation. In the mid-1980s it was sufficiently interesting to show that such phenomena might be accounted for by parallel distributed processing systems that did not propose explicit rules or symbols. However, as connectionism matured, it was necessary for its researchers to spell out the details of the alternative algorithms embodied in their networks (Dawson, 2004). If these algorithms could not be extracted from networks, then “connectionist networks should not be viewed as theories of human cognitive functions, or as simulations of theories, or even as demonstrations of specific theoretical points” (McCloskey, 1991, p. 387). In response to such criticisms, connectionist cognitive scientists have developed a number of techniques for recovering algorithms from their networks (Berkeley et al., 1995; Dawson, 2004, 2005; Gallant, 1993; Hanson & Burr, 1990; Hinton, 1986; Moorhead, Haig, & Clement, 1989; Omlin & Giles, 1996).
What are the elements of connectionism, and how do they relate to cognitive science in general and to classical cognitive science in particular? The purpose of the remainder of this chapter is to explore the ideas of connectionist cognitive science in more detail.
Associations
Classical cognitive science has been profoundly influenced by seventeenth-century Cartesian philosophy (Descartes, 1996, 2006). The Cartesian view that thinking is equivalent to performing mental logic—that it is a mental discourse of computation or calculation (Hobbes, 1967)—has inspired the logicism that serves as the foundation of the classical approach. Fundamental classical notions, such as the assumption that cognition is the result of rule-governed symbol manipulation (Craik, 1943) or that innate knowledge is required to solve problems of underdetermination (Chomsky, 1965, 1966), have resulted in the classical approach being viewed as a newer variant of Cartesian rationalism (Paivio, 1986). One key classical departure from Descartes is its rejection of dualism.
Classical cognitive science is the modern rationalism, and one of its key ideas is recursion: it appeals to recursive rules to permit finite devices to generate an infinite variety of potential behaviour. Connectionist cognitive science has very different philosophical roots. Connectionism is the modern form of empiricist philosophy (Berkeley, 1710; Hume, 1952; Locke, 1977), in which knowledge is not innate but is instead provided by sensing the world. “No man’s knowledge here can go beyond his experience” (Locke, 1977, p. 83). If recursion is fundamental to the classical approach’s rationalism, then what notion is fundamental to connectionism’s empiricism? The key idea is association: different ideas can be linked together, so that if one arises, the association between them causes the other to arise as well.
For centuries, philosophers and psychologists have studied associations empirically, through introspection (Warren, 1921). These introspections have revealed the existence of sequences of thought that occur during thinking. Associationism attempted to determine the laws that would account for these sequences of thought.
The earliest detailed introspective account of such sequences of thought can be found in the 350 BC writings of Aristotle (Sorabji, 2006, p. 54): “Acts of recollection happen because one change is of a nature to occur after another.” For Aristotle, ideas were images (Cummins, 1989). He argued that a particular sequence of images occurs either because this sequence is a natural consequence of the images, or because the sequence has been learned by habit. Recall of a particular memory, then, is achieved by cuing that memory with the appropriate prior images, which initiate the desired sequence of images. “Whenever we recollect, then, we undergo one of the earlier changes, until we undergo the one after which the change in question habitually occurs” (Sorabji, 2006, p. 54). Aristotle’s analysis of sequences of thought is central to modern mnemonic techniques for remembering ordered lists (Lorayne, 2007; Lorayne & Lucas, 1974).
Aristotle noted that recollection via initiating a sequence of mental images could be a deliberate and systematic process. This was because the first image in the sequence could be selected so that it would be recollected fairly easily. Recall of the sequence, or of the target image at the end of the sequence, was then dictated by lawful relationships between adjacent ideas. Thus Aristotle invented laws of association.
Aristotle considered three different kinds of relationships between the starting image and its successor: similarity, opposition, and (temporal) contiguity:
And this is exactly why we hunt for the successor, starting in our thoughts from the present or from something else, and from something similar, or opposite, or neighbouring. By this means recollection occurs. (Sorabji, 2006, p. 54)
In more modern associationist theories, Aristotle’s laws would be called the law of similarity, the law of contrast, and the law of contiguity or the law of habit.
Aristotle’s theory of memory was essentially ignored for many centuries (Warren, 1921). Instead, pre-Renaissance and Renaissance Europe were more interested in the artificial memory—mnemonics—that was the foundation of Greek oratory. These techniques were rediscovered during the Middle Ages in the form of Ad Herennium, a circa 86 BC text on rhetoric that included a section on enhancing the artificial memory (Yates, 1966). Ad Herennium described the mnemonic techniques invented by Simonides circa 500 BC. While the practice of mnemonics flourished during the Middle Ages, it was not until the seventeenth century that advances in associationist theories of memory and thought began to flourish.
The rise of modern associationism begins with Thomas Hobbes (Warren, 1921). Hobbes’ (1967) notion of thought as mental discourse was based on his observation that thinking involved an orderly sequence of ideas. Hobbes was interested in explaining how such sequences occurred. While Hobbes’ own work was very preliminary, it inspired more detailed analyses carried out by the British empiricists who followed him.
Empiricist philosopher John Locke coined the phrase association of ideas, which first appeared as a chapter title in the fourth edition of An Essay Concerning Human Understanding (Locke, 1977). Locke’s work was an explicit reaction against Cartesian philosophy (Thilly, 1900); his goal was to establish experience as the foundation of all thought. He noted that connections between simple ideas might not reflect a natural order. Locke explained this by appealing to experience:

Ideas that in themselves are not at all of kin, come to be so united in some men’s minds that it is very hard to separate them, they always keep in company, and the one no sooner at any time comes into the understanding but its associate appears with it. (Locke, 1977, p. 122)

Eighteenth-century British empiricists expanded Locke’s approach by exploring and debating possible laws of association. George Berkeley (1710) reiterated Aristotle’s law of contiguity and extended it to account for associations involving different modes of sensation. David Hume (1952) proposed three different laws of association: resemblance, contiguity in time or place, and cause or effect. David Hartley, one of the first philosophers to link associative laws to brain function, saw contiguity as the primary source of associations and ignored Hume’s law of resemblance (Warren, 1921).
Debates about the laws of association continued into the nineteenth century. James Mill (1829) endorsed only the law of contiguity, explicitly denying Hume’s laws of cause and effect and of resemblance. Mill’s ideas were challenged and modified by his son, John Stuart Mill. In his revised version of his father’s book (Mill & Mill, 1869), the younger Mill posited a completely different set of associative laws, which included a reintroduction of Hume’s law of similarity. He also replaced his father’s linear, mechanistic account of complex ideas with a “mental chemistry” that endorsed nonlinear emergence: in this mental chemistry, when complex ideas were created via association, the resulting whole was more than just the sum of its parts. Alexander Bain (1855) refined the associationism of John Stuart Mill, proposing four different laws of association and attempting to reduce all intellectual processes to these laws. Two of these were the familiar laws of contiguity and of similarity.
Bain was the bridge between philosophical and psychological associationism (Boring, 1950). He stood,
exactly at a corner in the development of psychology, with philosophical psychology stretching out behind, and experimental physiological psychology lying ahead, in a new direction. The psychologists of the twentieth century can read much of Bain with hearty approval; perhaps John Locke could have done the same. (Boring, 1950, p. 240)
One psychologist who approved of Bain was William James; he frequently cited Bain in his Principles of Psychology (James, 1890a). Chapter 14 of this work provided James’ own treatment of associationism. James criticized philosophical associationism’s emphasis on associations between mental contents. James proposed a mechanistic, biological theory of associationism instead, claiming that associations were made between brain states:
We ought to talk of the association of objects, not of the association of ideas. And so far as association stands for a cause, it is between processes in the brain—it is these which, by being associated in certain ways, determine what successive objects shall be thought. (James, 1890a, p. 554, original italics)
James (1890a) attempted to reduce other laws of association to the law of contiguity, which he called the law of habit and expressed as follows: “When two elementary brain-processes have been active together or in immediate succession, one of them, on reoccurring, tends to propagate its excitement into the other” (p. 566). He illustrated the action of this law with a figure (James, 1890a, p. 570, Figure 40), a version of which is presented as Figure 4-1.
Figure 4-1. A distributed memory, initially described by James (1890a) but also part of modern connectionism.
Figure 4-1 illustrates two ideas, A and B, each represented as a pattern of activity in its own set of neurons. A is represented by activity in neurons a, b, c, d, and e; B is represented by activity in neurons l, m, n, o, and p. The assumption is that A represents an experience that occurred immediately before B. When B occurs, activating its neurons, residual activity in the neurons representing A permits the two patterns to be associated by the law of habit. That is, the “tracts” connecting the neurons (the “modifiable connections” in Figure 4-1) have their strengths modified.
The ability of A’s later activity to reproduce B is due to these modified connections between the two sets of neurons.
The thought of A must awaken that of B, because a, b, c, d, e, will each and all discharge into l through the paths by which their original discharge took place. Similarly they will discharge into m, n, o, and p; and these latter tracts will also each reinforce the other’s action because, in the experience B, they have already vibrated in unison. (James, 1890a, p. 569)
James’ (1890a) biological account of association reveals three properties that are common to modern connectionist networks. First, his system is parallel: more than one neuron can be operating at the same time. Second, his system is convergent: the activity of one of the output neurons depends upon receiving or summing the signals sent by multiple input neurons. Third, his system is distributed: the association between A and B is the set of states of the many “tracts” illustrated in Figure 4-1; there is not just a single associative link.
James’s (1890a) law of habit was central to the basic mechanism proposed by neuroscientist Donald Hebb (1949) for the development of cell assemblies. Hebb provided a famous modern statement of James’ law of habit:
When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased. (Hebb, 1949, p. 62)
This makes explicit the modern connectionist idea that learning is modifying the strength of connections between processors. Hebb’s theory inspired the earliest computer simulations of memory systems akin to the one proposed by James (Milner, 1957; Rochester et al., 1956). These simulations revealed a critical role for inhibition that led Hebb (1959) to revise his theory. Modern neuroscience has discovered a phenomenon called long-term potentiation that is often cited as a biologically plausible instantiation of Hebb’s theory (Brown, 1990; Gerstner & Kistler, 2002; Martinez & Derrick, 1996; van Hemmen & Senn, 2002).
The journey from James through Hebb to the first simulations of memory (Milner, 1957; Rochester et al., 1956) produced a modern associative memory system called the standard pattern associator (McClelland, 1986). The standard pattern associator, which is structurally identical to Figure 4-1, is a memory capable of learning associations between pairs of input patterns (Steinbuch, 1961; Taylor, 1956) or learning to associate an input pattern with a categorizing response (Rosenblatt, 1962; Selfridge, 1956; Widrow & Hoff, 1960).
The standard pattern associator is empiricist in the sense that its knowledge is acquired by experience. Usually the memory begins as a blank slate: all of the connections between processors start with weights equal to zero. During a learning phase, pairs of to-be-associated patterns simultaneously activate the input and output units in Figure 4-1. With each presented pair, all of the connection weights— the strength of each connection between an input and an output processor—are modified by adding a value to them. This value is determined in accordance with some version of Hebb’s (1949) learning rule. Usually, the value added to a weight is equal to the activity of the processor at the input end of the connection, multiplied by the activity of the processor at the output end of the connection, and multiplied by some fractional value called a learning rate. The mathematical details of such learning are provided in Chapter 9 of Dawson (2004).
The standard pattern associator is called a distributed memory because its knowledge is stored throughout all the connections in the network, and because this one set of connections can store several different associations. During a recall phase, a cue pattern is used to activate the input units. This causes signals to be sent through the connections in the network. These signals are equal to the activation value of an input unit multiplied by the weight of the connection through which the activity is being transmitted. Signals received by the output processors are used to compute net input, which is simply the sum of all of the incoming signals. In the standard pattern associator, an output unit’s activity is equal to its net input. If the memory is functioning properly, then the pattern of activation in the output units will be the pattern that was originally associated with the cue pattern.
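Putting the learning and recall phases just described together, a minimal sketch of a standard pattern associator might look like the following. The pattern values, learning rate, and class structure are illustrative assumptions, not details taken from any of the cited models.

```python
import numpy as np

class PatternAssociator:
    """Distributed memory: a single weight matrix stores
    associations between input and output patterns."""
    def __init__(self, n_in, n_out):
        self.w = np.zeros((n_out, n_in))   # blank slate: all weights zero

    def learn(self, cue, target, lr=0.25):
        # Hebb-style update: each weight grows by (output activity
        # x input activity x learning rate).
        self.w += lr * np.outer(target, cue)

    def recall(self, cue):
        # Net input to each output unit: the sum of (input activity
        # x connection weight); output activity equals net input.
        return self.w @ cue

memory = PatternAssociator(n_in=4, n_out=4)
a = np.array([1.0, -1.0, 1.0, -1.0])   # pattern A (the cue)
b = np.array([-1.0, 1.0, 1.0, -1.0])   # pattern B (the response)
memory.learn(a, b)
print(memory.recall(a))                 # reproduces pattern B
```

Because the association is spread across all sixteen weights rather than stored in a single link, further pairs can be superimposed on the same matrix, which is precisely what makes the memory distributed.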
The standard pattern associator is the cornerstone of many models of memory created after the cognitive revolution (Anderson, 1972; Anderson et al., 1977; Eich, 1982; Hinton & Anderson, 1981; Murdock, 1982; Pike, 1984; Steinbuch, 1961; Taylor, 1956). These models are important, because they use a simple principle—James’ (1890a, 1890b) law of habit—to model many subtle regularities of human memory, including errors in recall. In other words, the standard pattern associator is a kind of memory that has been evaluated with the different kinds of evidence cited in Chapters 2 and 3, in an attempt to establish strong equivalence.
The standard pattern associator also demonstrates another property crucial to modern connectionism: graceful degradation. How does this distributed model behave if it is presented with a noisy cue, or with some other cue that was never presented during training? It generates a response that has the same degree of noise as its input (Dawson, 1998, Table 3-1). That is, there is a match between the quality of the memory’s input and the quality of its output.
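Continuing the hypothetical sketch above, this behaviour can be demonstrated by corrupting the cue with noise; this illustrates the principle, not Dawson’s (1998) actual simulations.

```python
# Graceful degradation: a noisy cue yields a proportionally noisy
# response rather than an outright failure of recall.
rng = np.random.default_rng(seed=1)
noisy_cue = a + rng.normal(0, 0.3, size=a.shape)
print(memory.recall(noisy_cue))   # still close to pattern B, degraded
                                  # in proportion to the noise in the cue
```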
The graceful degradation of the standard pattern associator reveals that it is sensitive to the similarity of noisy cues to other cues that were presented during training. Thus modern pattern associators provide some evidence for James’ (1890a) attempt to reduce other associative laws, such as the law of similarity, to the basic law of habit or contiguity.
In spite of the popularity and success of distributed associative memories as models of human learning and recall (Hinton & Anderson, 1981), they are extremely limited in power. When networks learn via the Hebb rule, they produce errors when they are overtrained, are easily confused by correlated training patterns, and do not learn from their errors (Dawson, 2004). An error-correcting rule called the delta rule (Dawson, 2004; Rosenblatt, 1962; Stone, 1986; Widrow & Hoff, 1960) can alleviate some of these problems, but it does not eliminate them. While association is a fundamental notion in connectionist models, other notions are required by modern connectionist cognitive science. One of these additional ideas is nonlinear processing.