Sensation & Perception (book): chapter summaries

Chapter 1 Summary
INTRODUCTION
1. Sensation and perception are central to, and often precede, almost all aspects of human behavior and thought. There are many practical applications of our increased understanding of sensation and perception.
2. Gustav Fechner invented several clever methods for measuring the relationship between physical changes in the world and consequent psychological changes in observers. These methods remain in use today. Using Fechner's methods, researchers can measure the smallest levels of stimulus that can be detected (absolute thresholds) and the smallest differences that can be detected (difference thresholds, or just noticeable differences).
3. A more recent development for understanding performance, signal detection theory, permits us to simulate changes in the perceiver (e.g., internal noise and biases) in order to understand perceptual performance better.
4. We learn a great deal about perception by understanding the biological structures and processes involved. One early observation—the doctrine of specific nerve energies—expresses the fact that we are aware only of the activity of our own nervous systems. For this reason, what matters is which nerves are stimulated, not how they are stimulated. The central nervous system reflects specializations for the senses, from the cranial nerves to the areas of the cerebral cortex involved in perception.
5. The essential activities of all neurons, including those involved in sensory processes, are chemical and electrochemical. Neurons communicate with each other through neurotransmitters, molecules that cross the synapse from the axon of one neuron to the dendrite of the next. Nerve impulses are electrochemical; voltages change along the axon as electrically charged ions (sodium and potassium) pass in and out of the membranes of nerve cells.
6. Recordings of individual neurons enable us to measure the lowest level of stimulus required for a neuron to fire (absolute threshold). Both the rate and the timing pattern of neural firing provide additional information about how the brain encodes stimuli in the world.
7. Electroencephalography (EEG) and magnetoencephalography (MEG) measure the activity of many neurons with great precision in timing, but not in brain location.
8. Positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) measure metabolic changes to precisely localize the activity of many neurons in the brain, but with little precision in timing.
9. Fourier analysis is a mathematical tool that helps researchers break down complex sounds and images in ways that permit better understanding of how sounds and images are sensed and perceived (see the sketch below).
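To make point 9 concrete, here is a minimal sketch of Fourier analysis in Python (the tone frequencies, amplitudes, and sampling rate are arbitrary illustrative choices): a "complex" sound built from two pure tones is decomposed back into its sinusoidal components.

```python
import numpy as np

fs = 8000                        # sampling rate in Hz (illustrative choice)
t = np.arange(0, 1.0, 1.0 / fs)  # one second of time samples

# A complex sound: a 440 Hz tone plus a quieter 1000 Hz tone
sound = 1.0 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# Fourier analysis: how much energy is present at each frequency?
spectrum = np.abs(np.fft.rfft(sound))
freqs = np.fft.rfftfreq(len(sound), 1.0 / fs)

# The two largest peaks recover the component frequencies
print(sorted(freqs[np.argsort(spectrum)[-2:]]))  # -> [440.0, 1000.0]
```

The same decomposition applies to images, where the components are spatial frequencies (stripes of different widths and orientations) rather than tones.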
Chapter 2 Summary
The First Steps in Vision: From Light to Neural Signals
1. This chapter provided some insight into the complex journey that is required for us to see stars and other spots of light. The path of the light was traced from a distant star through the eyeball to its absorption by photoreceptors and its transduction into neural signals. In subsequent chapters we'll learn how those signals are transmitted to the brain and translated into the experience of perception.
2. Light, on its way to becoming a sensation (a visual sensation, that is), can be absorbed, scattered, reflected, transmitted, or refracted. It can become a sensation only when it's absorbed by a photoreceptor in the retina.
3. Vision begins in the retina, when light is absorbed by rods or cones. The retina is like a minicomputer that transduces light energy into neural energy.
4. The retina sends information to the brain via ganglion cells, neurons whose axons make up the optic nerves. Retinal ganglion cells have center-surround receptive fields and are concerned with changes in contrast (the difference in intensity between adjacent bits of the scene).
5. The visual system deals with large variations in overall light intensity by (a) regulating the amount of light entering the eyeball, (b) using different types of photoreceptors in different situations, and (c) effectively throwing away photons we don't need.
6. Age-related macular degeneration (AMD) is a disease associated with aging that affects the macula. The leading cause of visual loss among the elderly in the United States, AMD gradually destroys sharp central vision, making it difficult to read, drive, and recognize faces.
7. Retinitis pigmentosa (RP) is a family of hereditary diseases characterized by the progressive death of photoreceptors and degeneration of the pigment epithelium. In the most common form of the disease, patients first notice vision problems in their peripheral vision and under low light conditions, situations in which rods play the dominant role in collecting light.

Chapter 3 Summary
Spatial Vision: From Spots to Stripes
1. In this chapter we followed the path of image processing from the eyeball to the brain. Neurons in the cerebral cortex translate the array of stars perceived by retinal ganglion cells into the beginnings of forms and patterns. The primary visual cortex is organized into thousands of tiny computers, each responsible for determining the orientation, width, color, and other characteristics of the stripes in one small portion of the visual field. In Chapter 4 we will continue this story by seeing how other parts of the brain combine the outputs from these minicomputers to produce a coherent representation.
2. Perhaps the most important feature of image processing is the remarkable transformation of information from the circular receptive fields of retinal ganglion cells to the elongated receptive fields of the cortex.
3. Cortical neurons are highly selective along a number of dimensions, including stimulus orientation, size, direction of motion, and eye of origin.
4. Neurons with similar preferences are often arranged in columns in primary visual cortex.
5. Selective adaptation provides a powerful, noninvasive tool for learning about stimulus specificity in human vision.
6. The human visual cortex contains pattern analyzers that are specific to spatial frequency and orientation (see the sketch after this list).
7. Normal visual development requires normal visual experience. Abnormal visual experience early in life can cause massive changes in cortical physiology that result in a devastating and permanent loss of spatial vision.
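A standard way to model these orientation- and frequency-tuned pattern analyzers is a Gabor filter: a sinusoidal grating windowed by a Gaussian, which mimics the elongated receptive field of a V1 simple cell. The sketch below uses arbitrary parameter values purely for illustration.

```python
import numpy as np

def gabor(size=31, wavelength=8.0, orientation_deg=45.0, sigma=5.0):
    """A size x size Gabor patch: an oriented, spatial-frequency-tuned filter."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    theta = np.deg2rad(orientation_deg)
    x_rot = x * np.cos(theta) + y * np.sin(theta)   # rotate to preferred orientation
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian window
    carrier = np.cos(2 * np.pi * x_rot / wavelength)    # sinusoidal grating
    return envelope * carrier

# The filter responds far more to a matching grating than to one rotated 90 degrees:
rf = gabor(orientation_deg=45.0)
response_matched = np.sum(rf * gabor(orientation_deg=45.0))
response_orthogonal = np.sum(rf * gabor(orientation_deg=135.0))
print(response_matched > response_orthogonal)  # -> True
```

Banks of such filters at different orientations and spatial frequencies are one common computational stand-in for the "tiny computers" of primary visual cortex.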
Chapter 4 Summary
Perceiving and Recognizing Objects
1. A series of extrastriate visual areas continue the work of visual processing. Emerging from V1 (primary visual cortex) are two broad streams of processing: one going into the temporal lobe and the other into the parietal lobe. The temporal pathway seems specifically concerned with what a stimulus might be. This chapter follows that pathway. (The parietal where pathway will be considered in later chapters.)
2. After early visual processes have extracted basic features from the visual input, it is the job of middle vision to organize these features into the regions, surfaces, and objects that can, in turn, serve as input to object recognition and scene-understanding processes.
3. Perceptual "committees" serve as an important metaphor in this chapter. The idea is that many semi-independent processes are working on the input at the same time. Different processes may come to different conclusions about the presence of an edge or the relationship between two elements in the input. Under most circumstances, we see the single conclusion that the committees settle on. The math of Bayes' theorem is one way to formalize this process of finding the most likely explanation for input (see the sketch after this list).
4. Multiple processes seek to carve the input into regions and to define the edges of those regions, and many rules are involved in this parsing of the image. For example, image elements are likely to group together if they are similar in color or shape, if they are near each other, or if they are connected to each other. Many of these grouping principles were first articulated by members of the Gestalt school.
5. Other, related processes seek to determine if a region is part of a foreground figure (like this black O) or part of the background (like the white area around the O). These rules of grouping and figure–ground assignment are driven by an implicit understanding of the physics of the world. Thus, events that are very unlikely to happen by chance (e.g., two contours parallel to each other) are taken to have meaning. (Those parallel contours are likely to be part of the same figure.)
6. The processes that divide visual input into objects and background have to deal with many complexities. Among these are occlusion—the fact that parts of objects may be hidden behind other objects—and the fact that objects themselves have a structure. Is your nose an object or a part of a larger whole? What about glasses or hair or a wig?
7. Template models of object recognition hold that an object in the world is recognized when its image fits a particular representation in the brain in the way that a key fits a lock. It has always been hard to see how naïve template models could work, because of the astronomical number of templates required: we might need one "lock" for every object in every orientation in every position in the visual field.
8. Structural models propose that objects are recognized by the relationships of their parts. Thus, an H could be defined as two parallel lines with a perpendicular line joining them between their centers. A cat would be more difficult, but similar in principle. In their pure form, such models are viewpoint-independent: the orientation of the H doesn't matter. Object recognition, however, is often viewpoint-dependent, suggesting that the correct model lies between the extremes of naïve template matching and pure structural description.
9. Faces are an interesting special case in which viewpoint is very important. Upright faces are much easier to recognize than inverted faces. Moreover, some regions of the brain seem to be specifically interested in faces. They lie near other regions in the temporal lobes that are important for recognition of other sorts of objects.
10. Recent physiological work showed very specific responses to very specific objects (e.g., the actress Jennifer Aniston) in the human temporal lobe. Other work showed that the first, rough acts of object recognition take place so fast that they must be accomplished by the first, feed-forward sweep of activity from the eyes to the higher processing centers of the visual system. However, routine perception of objects requires feedback from higher visual areas to those lying earlier in the pathway.
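The Bayesian idea in point 3 can be written out in a few lines. In this minimal sketch (all probabilities are invented for illustration), two "committee members" disagree about an ambiguous contour, and Bayes' theorem combines prior knowledge with the sensory evidence to pick the most likely interpretation:

```python
# Two candidate interpretations of an ambiguous image region
priors = {"shadow": 0.7, "object_edge": 0.3}       # P(hypothesis), from experience
likelihoods = {"shadow": 0.2, "object_edge": 0.9}  # P(evidence | hypothesis)

# Bayes' theorem: P(h | e) = P(e | h) * P(h) / P(e)
p_evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / p_evidence for h in priors}

print({h: round(p, 2) for h, p in posteriors.items()})  # {'shadow': 0.34, 'object_edge': 0.66}
print(max(posteriors, key=posteriors.get))              # the committee's verdict
```

Here the edge interpretation wins even though shadows are more common a priori, because the evidence favors it strongly; with weaker evidence, the prior would dominate.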
Chapter 5 Summary
The Perception of Color
1. Probably the most important fact to know about color vision is that lights and surfaces look colored because a particular distribution of wavelengths of light is being analyzed by a particular visual system. Color is a mental phenomenon, not a physical phenomenon. Many animal species have some form of color vision. It seems to be important for identifying possible mates, possible rivals, and good things to eat. Color vision has evolved several times in several different ways in the animal kingdom.
2. Rod photoreceptors are sensitive to low (scotopic) light levels. There is only one type of rod photoreceptor. It yields one "number" for each location in the visual field. Rods can support only a one-dimensional representation of color, from dark to light. Thus, scotopic vision is achromatic vision.
3. There are three types of cone photoreceptors, each having a different sensitivity to the wavelengths of light. Cones operate at brighter light levels than rods, producing three "numbers" at each location; the pattern of activity over the different cone types defines the color.
4. If two regions of an image produce the same response in the three cone types, they will look identical; that is, they will be metamers. And they will look identical even if the physical wavelengths coming from the two regions are different (see the sketch after this list).
5. In additive color mixture, two or more lights are mixed. Adding a light that looks blue to a light that looks yellow will produce a light that looks white (if we pick the right blue and yellow). In subtractive color mixture, filters, paints, or other pigments that absorb some wavelengths and reflect others are mixed. Mixing a typical blue paint and a typical yellow paint will subtract most long and short wavelengths from the light reflected by the mixture, and the result will look green.
6. Color blindness is typically caused by the congenital absence or abnormality of one cone type—usually the L- or M-cone, and usually in males. Most color-blind individuals are not blind to differences in wavelength. Rather, their color perception is based on the outputs of two cone types instead of the normal three.
7. A single type of cone cannot be used, by itself, to discriminate between wavelengths of light. To enable discrimination, information from the three cones is combined to form three cone-opponent processes. Cones sensitive to long wavelengths (L-cones) are pitted against medium-wavelength cones (M-cones) to create an (L – M) process that is roughly sensitive to the redness or greenness of a region. The summed signal from L- and M-cones (L + M) is pitted against the signal from short-wavelength cones (S-cones) to create a process roughly sensitive to the blueness or yellowness of a region. The third process is sensitive to the overall brightness of a region.
8. Color appearance is arranged around opponent colors: red versus green, and blue versus yellow. This color opponency involves further reprocessing of the cone signals from cone-opponent processes into color-opponent processes.
9. The visual system tries to disentangle the properties of surfaces in the world (e.g., the "red" color of a strawberry) from the properties of the illuminant (e.g., the "golden" light of evening), even though surface and illuminant information are combined in the input to the eyes. Mechanisms of color constancy use implicit knowledge about the world to correct for the influence of different illuminants and to keep the strawberry looking red under a wide range of conditions.
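Points 3, 4, and 7 can be captured in a short numerical sketch. The cone sensitivities and light spectra below are toy numbers (not real cone fundamentals), chosen so that two physically different lights produce identical L, M, and S excitations and are therefore metamers; the cone-opponent signals are computed from the same three numbers.

```python
import numpy as np

# Toy cone sensitivities at four wavelength bands (short -> long);
# rows are L-, M-, and S-cones
sensitivity = np.array([
    [0.1, 0.4, 0.7, 0.9],   # L-cones favor long wavelengths
    [0.2, 0.5, 0.8, 0.3],   # M-cones favor medium wavelengths
    [0.9, 0.5, 0.1, 0.0],   # S-cones favor short wavelengths
])

light_a = np.array([0.0, 1.0, 1.0, 0.5])  # one spectral power distribution
light_b = np.array([0.4, 0.2, 1.4, 0.5])  # a physically different light

for light in (light_a, light_b):
    L, M, S = sensitivity @ light  # the three "numbers" at one retinal location
    print("L, M, S:", round(L, 2), round(M, 2), round(S, 2))  # same for both lights
    print("  L - M:", round(L - M, 2), "   (L + M) - S:", round(L + M - S, 2))
```

Because the visual system sees only the three cone excitations, any two lights that match on those three numbers are indistinguishable, no matter how different their physical spectra are.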
Chapter 6 Summary
Space Perception and Binocular Vision
1. Reconstructing a three-dimensional world from two non-Euclidean, curved, two-dimensional retinal images is one basic problem faced by the brain.
2. A number of monocular cues provide information about three-dimensional space. These include occlusion, various size and position cues, aerial perspective, linear perspective, motion cues, accommodation, and convergence.
3. Having two eyes is an advantage for a number of reasons, some of which have to do with depth perception. It is important to remember, however, that it is possible to reconstruct the three-dimensional world from a single two-dimensional image. Two eyes have other advantages over just one: expanding the visual field, permitting binocular summation, and providing redundancy if one eye is damaged.
4. Having two laterally separated eyes connected to a single brain also provides us with important information about depth through the geometry of the small differences between the images in each eye. These differences, known as binocular disparities, give rise to stereoscopic depth perception (see the sketch after this list).
5. Random dot stereograms show that we don't need to know what we're seeing before we see it in stereoscopic depth. Binocular disparity alone can support shape perception.
6. Stereopsis has been exploited to, literally, add depth to entertainment—from nineteenth-century photos to twenty-first-century movies. It has also served to enhance the perception of information in military and medical settings.
7. The difficulty of matching an image element in one eye with the correct element in the other eye is known as the correspondence problem. The brain uses several "tricks" to solve the problem. For example, it reduces the initial complexity of the problem by matching large "blobs" in the low-spatial-frequency information before trying to match every high-frequency detail.
8. Single neurons in the primary visual cortex and beyond have receptive fields that cover a region in three-dimensional space, not just the two-dimensional image plane. Some neurons seem to be concerned with a crude in-front/behind judgment. Other neurons are concerned with more precise, metrical depth perception.
9. When the stimuli on corresponding loci in the two eyes are different, we experience a continuous perceptual competition between the two eyes known as binocular rivalry. Rivalry is part of the effort to make the best guess about the current state of the world based on the current state of the input.
10. All of the various monocular and binocular depth cues are combined (unconsciously) according to what prior knowledge tells us about the probability of the current event. Making the wrong guess about the cause of visual input can lead to illusions. Bayes' theorem is the basis of one type of formal understanding of the rules of combination.
11. Stereopsis emerges suddenly at about 4 months of age in humans, and it can be disrupted through abnormal visual experience during a critical period early in life.
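The geometry behind point 4 can be sketched with a pinhole-camera simplification of the two eyes: depth is inversely proportional to horizontal disparity. The interocular separation below is a typical adult value; the "focal length" of the eye is a rough illustrative figure.

```python
interocular = 0.065  # distance between the two eyes, in meters (typical adult)
focal = 0.017        # rough image distance of the eye's optics, in meters

def depth_from_disparity(disparity_m):
    """Depth in meters from the horizontal offset between the two retinal images."""
    return interocular * focal / disparity_m

# Nearby points produce large disparities; distant points, tiny ones
for disparity in (1.1e-3, 1.1e-4, 1.1e-5):
    print(f"retinal disparity {disparity:.6f} m -> depth {depth_from_disparity(disparity):6.1f} m")
```

The rapid shrinking of disparity with distance is one reason stereopsis is most useful for judging depth within a few meters of the observer.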
Chapter 7 Summary
Attention and Scene Perception
1. Attention is a vital aspect of perception because we cannot process all of the input from our senses. The term attention refers to a large set of selective mechanisms that enable us to focus on some stimuli at the expense of others. Though this chapter talked almost exclusively about visual attention, attentional mechanisms exist in all sensory domains.
2. In vision, it is possible to direct attention to one location or one object. If something happens at an attended location, we will be faster to respond to it. It can be useful to refer to the "spotlight" of attention, though deployments of attention differ in important ways from movements of a spotlight.
3. In visual search tasks, observers typically look for a target item among a number of distractor items. If the target is defined by a salient basic feature, such as its color or orientation, search is very efficient and the number of distractors has little influence on the reaction time (the time required to find the target). If no basic feature information can guide the deployment of attention, then search is inefficient, as if each item needed to be examined one after the other. Search can be of intermediate efficiency if some feature information is available (e.g., if we're looking for a red car, we don't need to examine the blue objects in the parking lot). (See the sketch after this list.)
4. Search for objects in real scenes is guided by the known features of the objects, by the salient features in the scene, and by a variety of scene-based forms of guidance. For example, if you're looking for your can of soda, you will guide your attention to physically plausible locations (horizontal surfaces) and logically sensible places (the desk or counter, probably not the floor).
5. Attention varies over time as well as space. In the attentional-blink paradigm, observers search for two items in a rapid stream of stimuli that appear at fixation. Attention to the first target makes it hard to find the second if the second appears within 200 to 500 ms of the first. When two identical items appear in the stream of stimuli, a different phenomenon, repetition blindness, makes it hard to detect the second instance.
6. The effects of attention manifest themselves in several different ways in the brain. In some cases, attention is marked by a general increase in neural activity. In other cases, attention to a particular attribute tunes cells more sharply for that attribute. And in still other cases, attention to a stimulus or location causes receptive fields to shrink so as to exclude unattended stimuli. All of these effects might be the result of a single, underlying physiological mechanism of attention.
7. Damage to the parietal lobe of the brain produces deficits in visual attention. Damage to the right parietal lobe can lead to neglect, a disorder in which it is hard to direct attention into the contralesional (in this case, the left) visual field. Neglect patients may ignore half of an object or of their own body. Balint syndrome is the result of bilateral damage to the parietal lobe. Patients with this disorder may show simultagnosia, an inability to see more than one object at a time.
8. Scene perception involves both selective and nonselective processing. Tasks like visual search make extensive use of selective processing. Nonselective processing allows observers to appreciate the mean and variance of features across many objects (or proto-objects). Thus, you know the average orientation of trees in the woods (vertical) before knowing whether any particular tree is oriented perfectly vertically. Using spatial-frequency information, even without segmenting the scene into regions and objects, the nonselective pathway can provide information about the nature of a scene (e.g., whether it's natural or man-made).
9. Picture memory experiments show that people can remember thousands of images after only a second or two of exposure to each. In contrast, change blindness experiments show that people can miss large changes in scenes if those changes do not markedly alter the meaning of the scene.
10. Our perceptual experience of scenes consists of nonselective processing of the layout and ensemble statistics of the scene, combined with selective processing of a very few objects at each moment. However, the final experience is an inference based on all of the preceding processing, not merely the sum of that processing. Usually this inference is adequate because we can rapidly check the world to determine whether the chair, the book, and the desk are still there. In the lab, however, we can use phenomena like inattentional blindness and change blindness to reveal the limits of our perception, and it is becoming increasingly clear that those limits can have real-world consequences.
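The efficiency differences described in point 3 are conventionally summarized by the slope of the function relating reaction time to the number of items on the screen. A minimal sketch, with invented intercepts and slopes:

```python
def reaction_time_ms(set_size, base_ms, ms_per_item):
    """Linear summary of search data: RT = intercept + slope * set size."""
    return base_ms + ms_per_item * set_size

for n in (4, 8, 16, 32):
    feature = reaction_time_ms(n, base_ms=450, ms_per_item=1)    # efficient search
    unguided = reaction_time_ms(n, base_ms=450, ms_per_item=40)  # inefficient search
    print(f"{n:2d} items: feature search ~{feature:4.0f} ms, unguided ~{unguided:4.0f} ms")
```

A slope near 0 ms per item is the signature of efficient, feature-guided search; slopes of tens of milliseconds per item suggest that attention is being deployed to items more or less one at a time.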
Chapter 8 Summary
Motion Perception
1. Like color or orientation, motion is a primary perceptual dimension that is coded at various levels in the brain. Motion information is used to determine where objects are going and when they're likely to get there, and to help us move through our environment without being hit in the head by flying objects.
2. We can build a simple motion-detecting circuit by using linear filters that delay and sum information (and are followed by nonlinearities). (See the sketch after this list.)
3. V1 neurons view the world through a small window, leading to the well-known aperture problem (that is, a V1 neuron is unable to tell which elements correspond with one another when an object moves through its receptive field).
4. Strong physiological and behavioral evidence indicates that the middle temporal area (MT) is involved in the perception of global motion.
5. Aftereffects for motion, like those for orientation or color, can provide important insights into the underlying mechanisms in humans.
6. Luminance-defined (first-order) motion and contrast- or texture-defined (second-order) motion appear to be analyzed by separate systems.
7. The brain has to figure out which retinal motion arises in the world, and which arises because of eye movements. Moreover, the brain must suppress the motion signals generated by our eye movements, or the world will be pretty "smeared."
8. Motion information is critically important to us for navigating around our world, avoiding imminent collision, and recognizing the movement of animals and people.
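Point 2's delay-and-compare circuit is essentially a Reichardt-style motion detector. In this minimal sketch (stimulus timing and parameters are invented for illustration), two neighboring "receptors" feed a delayed copy of one channel into a multiplication with the other; subtracting the two mirror-image products gives a signed, direction-selective output.

```python
import numpy as np

def motion_detector(left, right, delay=1):
    """Opponent delay-and-multiply output: positive = rightward motion."""
    left_delayed = np.roll(left, delay)
    right_delayed = np.roll(right, delay)
    left_delayed[:delay] = 0   # discard wraparound from np.roll
    right_delayed[:delay] = 0
    rightward = np.sum(left_delayed * right)  # delayed left coincides with right
    leftward = np.sum(right_delayed * left)   # mirror-image subunit
    return rightward - leftward               # the opponent (subtractive) stage

t = np.arange(100)
at_left = np.isin(t, [10, 30, 50, 70]).astype(float)  # events at the left receptor
at_right = np.roll(at_left, 1)  # the same events one step later: rightward motion

print(motion_detector(at_left, at_right))  # -> 4.0 (rightward)
print(motion_detector(at_right, at_left))  # -> -4.0 (leftward)
```

The multiplication is the nonlinearity the summary mentions; a purely linear circuit could not prefer one direction of motion over the other.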
Chapter 9 Summary
Hearing: Physiology and Psychoacoustics
1. Sounds are fluctuations of pressure. Sound waves are defined by the frequency, intensity (amplitude), and phase of the fluctuations. Sound frequency and intensity correspond to our perceptions of pitch and loudness, respectively.
2. Sound is funneled into the ear by the outer ear, made more intense by the middle ear, and transformed into neural signals by the inner ear.
3. In the inner ear, cilia on the tops of inner hair cells are flexed by pressure fluctuations in ways that provide information about frequency and intensity to the auditory nerve and the brain. Auditory nerve fibers convey information through both the rate and the timing patterns with which they fire.
4. Different characteristics of sounds are processed at multiple places in the brain stem before information reaches the cortex. Information from both ears is brought together very early in the chain of processing. At each stage of auditory processing, including primary auditory cortex, neurons are organized in relation to the frequencies of sounds (tonotopically).
5. Humans and other mammals can hear sounds across an enormous range of intensities (see the sketch after this list). Not all sound frequencies are heard as being equally loud, however. Hearing across such a wide range of intensities is accomplished by the use of many auditory neurons. Different neurons respond to different levels of intensity. In addition, more neurons overall respond when sounds are more intense.
6. Series of channels (or filters) process sounds within bands of frequency. Depending on frequency, these channels vary in how wide (how many frequencies) or narrow they are. Consequently, it is easier to detect differences between some frequencies than between others. When energy from multiple frequencies is present, lower-frequency energy makes it relatively more difficult to hear higher frequencies.
7. Hearing loss is caused by damage to the bones of the middle ear, to the hair cells in the cochlea, or to the neurons in the auditory nerve. Although hearing aids are helpful to listeners with hearing impairment, they cannot restore hearing as well as glasses can improve vision.
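Because audible intensities span such an enormous range (point 5), sound levels are expressed on a logarithmic decibel scale. This sketch uses the standard 20-micropascal reference for dB SPL; the example pressures are rough, typical values.

```python
import math

P0 = 20e-6  # reference pressure in pascals, roughly the threshold of hearing

def db_spl(pressure_pa):
    """Sound pressure level: 20 * log10(p / p0), in dB SPL."""
    return 20 * math.log10(pressure_pa / P0)

for label, p in [("near threshold", 20e-6),
                 ("conversation", 0.02),
                 ("rock concert", 2.0)]:
    print(f"{label:15s} {p:9.6f} Pa -> {db_spl(p):5.1f} dB SPL")
```

A millionfold range of sound pressures compresses into a 0-to-120 dB scale that tracks perceived loudness far better than raw pressure does.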
Chapter 10 Summary
Hearing in the Environment
1. Listeners use small differences in time and intensity across the two ears to learn the direction in the horizontal plane (azimuth) from which a sound comes (see the sketch after this list).
2. Time and intensity differences across the two ears are not sufficient to fully indicate the location from which a sound comes. In particular, time and intensity differences are not sufficient to indicate whether sounds come from the front or the back, or from higher or lower (elevation).
3. The pinna, ear canal, head, and torso alter the intensities of different frequencies for sounds coming from different places in space, and listeners use these changes in intensity across frequency to identify the location from which a sound comes.
4. Perception of auditory distance is similar to perception of visual depth in that no single characteristic of the signal can inform a listener about how distant a sound source is. Listeners must combine intensity, spectral composition, and the relative amounts of direct and reflected energy of sounds to estimate distance to a sound source.
5. Many natural sounds, including musical instruments and human speech, have rich harmonic structure with energy at integer multiples of the fundamental frequency, and listeners are especially good at perceiving the pitch of harmonic sounds.
6. Important perceptual qualities of complex sounds are timbre (conveyed by the relative amounts of energy at different frequencies) and the onset and offset properties of attack and decay, respectively.
7. Because all the sounds in the environment are summed into a single waveform that reaches each ear, a major challenge for hearing is to separate sound sources in the combined signal. This general process is known as auditory scene analysis. Sound source segregation succeeds by using multiple characteristics of sounds, including spatial location, similarity in frequency and timbre, onset properties, and familiarity.
8. In everyday environments, sounds to which a person is listening often are interrupted by other, louder sounds. Perceptual restoration is a process by which missing or degraded acoustic signals are perceptually replaced.
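The interaural time difference (ITD) cue in point 1 can be approximated with the common spherical-head formula, ITD = (r / c) * (theta + sin theta). The head radius and speed of sound below are typical textbook values; the formula is an idealization, not a measurement.

```python
import math

r = 0.0875  # approximate adult head radius, in meters
c = 343.0   # speed of sound in air, in meters per second

def itd_microseconds(azimuth_deg):
    """Spherical-head approximation of the interaural time difference."""
    theta = math.radians(azimuth_deg)
    return (r / c) * (theta + math.sin(theta)) * 1e6

for az in (0, 15, 45, 90):  # 0 = straight ahead, 90 = directly to one side
    print(f"azimuth {az:2d} deg -> ITD {itd_microseconds(az):6.1f} microseconds")
```

Even the largest ITD (a sound directly to one side) is well under a millisecond, and a source mirrored behind the head produces the same ITD as one in front, which is exactly the front/back ambiguity described in point 2.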
Chapter 11 Summary
Music and Speech Perception
1. Musical pitch has two dimensions: tone height and tone chroma (see the sketch after this list). Musical notes are combined to form chords. Notes and chords vary in duration and are combined to form melodies.
2. Melodies are learned psychological entities defined by patterns of rising and falling musical pitches, with different durations and rhythms.
3. Rhythm is important to music, and to auditory perception more broadly. The process of perceiving sound sequences is biased to hear rhythm.
4. Humans evolved to be able to produce an extremely wide variety of sounds that can be used by languages. The production of speech sounds has three basic components: respiration, phonation, and articulation. Speech sounds vary in many dimensions, including intensity, duration, periodicity, and noisiness.
5. In terms of articulation and acoustics, speech sounds vary according to the speech sounds that precede and follow them (coarticulation). Because of coarticulation, listeners cannot use any single acoustic feature to identify a vowel or consonant. Instead, listeners must use multiple properties of the speech signal.
6. In general, listeners discriminate speech sounds only as well as they can label them. This is categorical perception, which has also been shown for the perception of many other complex, familiar auditory and visual stimuli.
7. How people perceive speech depends very much on their experience with speech sounds within a language. This experience includes learning which of the many acoustic features in speech tend to co-occur. Because of the role of experience in how we hear speech, it is often difficult to perceive and produce new speech sounds from a second language following experience with a first language.
8. One of the ways that infants learn words is to use their experience with the co-occurrence of speech sounds.
9. Speech sounds are processed in both hemispheres of the brain much like other complex sounds, until they become part of the linguistic message. Then, speech is further processed in anterior and ventral regions, mostly in the left superior temporal cortex, but also in the superior posterior temporal cortex.
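Tone height and tone chroma (point 1) fall out of the equal-tempered scale: frequency doubles every octave (height keeps rising), while note names repeat every 12 semitones (chroma cycles). A minimal sketch using the standard A4 = 440 Hz tuning and MIDI-style note numbering:

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def frequency_hz(midi_note):
    """Equal temperament: each semitone multiplies frequency by 2**(1/12)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)  # MIDI note 69 is A4

# The same chroma (A) at increasing tone heights: each octave doubles frequency
for note in (45, 57, 69, 81):
    name = NOTE_NAMES[note % 12]  # chroma repeats every 12 semitones
    octave = note // 12 - 1       # conventional octave numbering
    print(f"{name}{octave}: {frequency_hz(note):7.2f} Hz")
```

Notes an octave apart share a chroma and sound alike, which is part of why melodies retain their identity when transposed by octaves.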
Chapter 12 Summary
Spatial Orientation and the Vestibular System
1. The vestibular organs are a set of sense organs located in the inner ear that sense head motion and gravity and contribute to our sense of spatial orientation.
2. The vestibular organs include three semicircular canals (horizontal, anterior, and posterior) that sense angular motion, and two otolith organs (the utricle and the saccule) that sense both gravity and linear acceleration.
3. Hair cells are the mechanoreceptors that convert both orientation with respect to gravity and head motion into signals that are sent to the brain.
4. The vestibular organs make a predominant contribution to our sense of spatial orientation, which includes three sensory modalities: linear motion, angular motion, and tilt. Direction and amplitude are qualities that define each of these three sensory modalities.
5. Our sense of spatial orientation results from a combination of information from multiple sensory systems. The vestibular and visual systems make predominant contributions to our sense of spatial orientation.
6. The vestibular system contributes to our sense of spatial orientation, but the brain processes the vestibular signals to yield perceptions that differ substantially from the responses found on the afferent neurons.
7. We are exquisitely sensitive to head motion, even in the dark, recognizing the directions of rotation, linear motion, and tilt at very low thresholds.
8. The vestibular system contributes to dynamic visual acuity by evoking compensatory vestibulo-ocular reflexes (VORs).
9. The vestibular system helps maintain balance via postural reflexes.
10. No area of the cortex exclusively devoted to processing vestibular information has been identified. Areas of the cortex that respond to vestibular stimuli also respond to visual and other sensory stimuli.
11. Vestibular problems are widespread, and treatments are limited. For Ménière's syndrome patients, for example, the symptoms may become so disabling that patients accept treatments that yield permanent disability just to be rid of the symptoms.

Chapter 13 Summary
Touch
1. The sense of touch produces a number of distinct sensory experiences. Each type of experience is mediated by its own sensory receptor system(s). Tactile receptors are responsive not only to pressure, but also to vibration, changes in temperature, and noxious stimulation. The kinesthetic system, which also contributes to our sense of touch, is further involved in sensing limb position and the movement of our limbs in space. Pleasant or emotional touch is another form of sensory specialization.
2. Four classes of pressure-sensitive (mechano-) receptors have been found within hairless skin, and another five classes within hairy skin. The organs used to sense limb position and movement (namely, our muscles, tendons, and joints) are more deeply situated within the body. Thermoreceptors respond to changes in skin temperature that occur, for example, when we contact objects that are warmer or cooler than our bodies. Nociceptors signal tissue damage (or its potential) and give rise to sensations of pain.
3. The pathways from touch receptors to the brain are complex. Two major pathways have been identified: a fast one (the dorsal column–medial lemniscal pathway) that carries information from mechanoreceptors, and a slower one (the spinothalamic pathway) that carries thermal and nociceptive information. Only the second pathway synapses when it first enters the spinal cord. These pathways project to the thalamus and from there to the primary somatosensory area, located in the parietal lobe just behind the central sulcus. This area contains several somatotopically organized subregions, in which adjacent areas of the body project to adjacent areas of the brain. The neural organization of the brain for touch has been shown to be remarkably plastic, even in adults.
4. Downward pathways from the brain play an important role in the perception of pain. According to the gate control theory, signals along these pathways interact at the spinal cord with those from the periphery of the body. Such interactions can block the pain signals that would otherwise be sent forward to the brain. The sensation of pain is further moderated by areas in the cortex.
5. Investigators have measured sensitivity to mechanical force by applying nylon hairs of different diameters to the skin. They determine the spatial acuity of the skin by measuring the two-point touch threshold, and more precisely by having people discriminate the orientation of gratings applied to the skin. Tactile pressure sensitivity and spatial acuity vary with body site because of varying concentrations of different types of mechanoreceptors. The minimum depression of the skin needed to feel a stimulus vibrating at a particular rate (frequency) provides a measure of vibration sensitivity.
6. The sense of touch is intimately related to our ability to perform actions. Signals from the mechanoreceptors are necessary for simple actions such as grasping and lifting an object. Conversely, our own movements determine how touch receptors respond and, hence, which properties of the concrete world we can feel. Touch is better adapted to feeling the material properties of objects than it is to feeling their geometric features (e.g., shape), particularly when an object is large enough to extend beyond the fingertip.
7. Like other sensory modalities, touch gives rise to internal representations of the world, which convey the positions of objects using the body as a spatial reference system. Touch-derived representations are inputs to higher-level functions like the allocation of attention and integration with information from other modalities.
8. The psychological study of touch is useful for a number of applications. Virtual touch environments that transmit forces to the touch receptors can provide a basis for training people to perform remote operations like surgery and perhaps, in the future, will convey the illusion of touched objects over the Internet.

Chapter 14 Summary
Olfaction
1. Olfaction is one of the two chemical senses; the other is taste (discussed in the next chapter). To be perceived as scent, a chemical must possess certain physical properties; however, even some molecules that possess these characteristics cannot be smelled. Olfaction has some unique physiological properties, one of which is that only about 35% of the genes that code for olfactory receptors in humans are functional. Another unusual feature is that most smells also stimulate the somatosensory system via the trigeminal nerve, and it is often impossible to distinguish the contribution of olfactory sensation from trigeminal stimulation.
2. The dominant biochemical theory of odor perception—shape-pattern theory—contends that the fit between a molecule and an olfactory receptor (OR) determines which molecules are detected as scents, and that specific odorant molecules activate arrays of ORs, producing specific patterns of neural activation for each perceived scent. However, this theory is not universally accepted, and alternate explanations exist (e.g., vibration theory).
3. Recently, researchers demonstrated a closer connection between the visual system and olfaction than had ever before been thought to exist. Two examples are binaral rivalry and the observation that what we're smelling influences what we concomitantly register as seeing. There is also a difference between active sniffing and passive inhalation of odors at both the neurological and functional levels. Active sniffing may also have therapeutic applications for patients suffering from extreme physical disabilities.
4. Almost all odors that we encounter in the real world are mixtures, and we appear not to be very good at analyzing the discrete chemical components of scent mixtures. Olfaction is thus primarily a synthetic, as opposed to analytical, sense. However, analytical olfactory ability can be developed with training. True odor imagery is also weak (or nonexistent) for most people, but training, as in the case of odor experts, appears to facilitate this ability.
5. The psychophysical study of smell has shown that various odorant intensity levels and various cognitive functions are required for odor detection, discrimination, and recognition. Identification differs from odor recognition in that, in the former, one must come up with a name for the olfactory sensation. It is very difficult to name even very familiar odors—an experience known as the tip-of-the-nose phenomenon—one of several indications that linguistic processing is highly disconnected from olfactory experience. Unlike the case with other sensory experiences, however, we do not need to access any semantic information about an odor in order to respond to it appropriately, as long as it is familiar.
6. Another important discrepancy between the physical experience and the psychological experience of odors is the difference between receptor adaptation and cognitive habituation. Receptor adaptation occurs after continual odorant exposure over a number of minutes, can be undone after a few minutes away from the odorant, and is explained by a basic biochemical mechanism. By contrast, cognitive habituation occurs after long-term exposure (e.g., in a living or work environment) to a particular odor and takes weeks away from the odor to undo. Psychological influences can have strong effects on both perceived odor adaptation and habituation. At present, the physiological mechanisms responsible for cognitive habituation are not fully understood.
7. The most immediate and basic response we have to an odor is whether we like it or not; this is hedonic evaluation. Odor hedonics are measured by pleasantness, familiarity, and intensity ratings. Pleasantness and familiarity are linearly related to odor liking; odor intensity has a more complex relationship to hedonic perception. Substantial evidence suggests that our hedonic responses to odors are learned and not innate, even for so-called stenches. That we have learned to like or dislike various odors, rather than being born with hardwired responses, is evolutionarily adaptive for generalist species like humans. The caveats to the learned proposition are odors that are highly trigeminally irritating (pain-inducing) and the potential genetic variability in the number and types of receptors expressed across individuals, which may influence olfactory sensitivity (intensity) and hence odor hedonic perception.
8. The key to olfactory associative learning is the emotional value of the context in which the odor is first encountered. If the emotional context is good, the odor will be liked; if it is bad, the odor will be disliked. Previously acquired emotional associations with odors also underlie validated aromatherapy effects. Emotional potency further distinguishes odor-evoked memories from memories triggered by other sensory cues. The neuroanatomy of the olfactory and limbic systems and their neuroevolutionary development illustrate how emotional processing and olfactory processing are uniquely and intimately interrelated.
9. Pheromones are chemicals emitted by individuals that affect the physiology and/or behavior of other members of the same species; they may or may not have any smell. In all mammals that have been shown to use pheromones for communication, detection is mediated through the vomeronasal organ (VNO) and processed by the accessory olfactory bulb (AOB). Humans do not possess a functional VNO or AOB, and evidence for human pheromones is controversial. However, human chemosignals that are processed through the olfactory system appear to have some influence on hormonal status and sexual arousal.
Chapter 15 Summary
Taste
1. Flavor is produced by the combination of taste and retronasal olfaction (olfactory sensations produced when odorants in the mouth are forced up behind the palate into the nose). Flavor sensations are localized to the mouth, even though the retronasal olfactory sensations come from the olfactory receptors high in the nasal cavity.
2. Taste buds are globular clusters of cells (like the segments in an orange). The tips of some of the cells (microvilli) contain sites that interact with taste molecules. Those sites fall into two groups: ion channels that mediate responses to salts and acids, and G protein–coupled receptors that bind to sweet and bitter compounds.
3. The tongue has a bumpy appearance because of structures called papillae. Filiform papillae (the most numerous) have no taste buds. Taste buds are found in the fungiform papillae (front of the tongue), foliate papillae (rear edges of the tongue), and circumvallate papillae (rear center of the tongue), as well as on the roof of the mouth.
4. Taste projects ipsilaterally from the tongue to the medulla, thalamus, and cortex. It projects first to the insula in the cortex, and from there to the orbitofrontal cortex, an area where taste can be integrated with other sensory input (e.g., retronasal olfaction).
5. Taste and olfaction play very different roles in the perception of foods and beverages. Taste is the true nutritional sense; taste receptors are tuned to molecules that function as important nutrients. Bitter taste is a poison detection system. Sweet taste enables us to respond to the sugars that are biologically useful to us: sucrose, glucose, and fructose. Salty taste enables us to identify sodium, a mineral crucial to survival because of its role in nerve conduction and muscle function. Sour taste permits us to avoid acids in concentrations that might injure tissue.
6. Umami, the taste produced by monosodium glutamate, has been suggested as a fifth basic taste that detects protein. However, umami lacks one of the most important properties of a basic taste: hardwired affect. Some individuals like umami, but others do not. The presence of glutamate receptors in the gut suggests that protein detection occurs there. Digestion breaks down proteins into their constituent amino acids, and the glutamate released stimulates gut glutamate receptors, leading to conditioned preferences for the sensory properties of the foods containing the protein.
7. The importance of taste to survival requires that we be able to recognize each taste quality independently, even when it is present in a mixture. By coding taste quality with labeled lines, in much the same way that frequencies are coded in hearing, nature has ensured that we have this important capability. These labeled lines are noisy. For example, acids are able to stimulate fibers mediating saltiness as well as those mediating sourness. Thus, acids tend to taste both salty and sour.
8. Foods do not taste the same to everyone. The Human Genome Project revealed that we carry about 25 genes for bitter taste. The most studied bitter receptor responds to PROP and shows allelic variation in humans, leading to the designations "PROP nontaster" for those who taste the least bitterness and "PROP taster" for those who taste the most. In addition, humans vary in the number of fungiform papillae (and thus taste buds) they possess.
Those with the most taste buds are called supertasters and live in a "neon" taste world; those with the fewest live in a "pastel" taste world. Psychologists discovered these differences by testing people's ability to match the sensory intensities of stimuli from different modalities. For example, the bitterness of black coffee matches the pain of a mild headache to nontasters but resembles a severe headache to supertasters. The way foods taste affects palatability, which in turn affects diet. Poor diet contributes to diseases like cancer and cardiovascular disease.
9. For taste, unlike olfaction, liking and disliking are hardwired; for example, babies are born liking sweet and salty and disliking bitter. When we become deficient in salt or sucrose, liking for salty and sweet tastes, respectively, increases. Junk foods are constructed to appeal to these preferences. Liking the burn of chili peppers, on the other hand, is acquired and, with the exception of some pets, is essentially limited to humans. Because taste buds are surrounded by pain fibers, supertasters perceive much greater burn from chilis than do nontasters.