Binaural fusion

From Infogalactic: the planetary knowledge core

Binaural fusion or binaural integration is a cognitive process that involves the "fusion" of different auditory information presented binaurally, that is, to each ear. In humans, this process is essential to understanding speech, as one ear may pick up more information about the speech stimulus than the other.

The process of binaural fusion is important for computing the location of sound sources in the horizontal plane (sound localization), and it is important for sound segregation.[1] Sound segregation refers to the ability to identify acoustic components from one or more sound sources.[2] The binaural auditory system is highly dynamic and capable of rapidly adjusting its tuning properties depending on the context in which sounds are heard. Each eardrum moves one-dimensionally; the auditory brain analyzes and compares the movements of both eardrums to extract physical cues and synthesize auditory objects.[3]

When stimulation from a sound reaches the ear, the eardrum deflects in a mechanical fashion, and the three middle ear bones (ossicles) transmit the mechanical signal to the cochlea, where hair cells transform the mechanical signal into an electrical signal. The auditory nerve, also called the cochlear nerve, then transmits action potentials to the central auditory nervous system.[3]

In binaural fusion, inputs from both ears integrate and fuse to create a complete auditory picture at the brainstem. Therefore, the signals sent to the central auditory nervous system represent this complete picture: information integrated from both ears rather than from a single ear.

Binaural fusion is responsible for what is known as the cocktail party effect, the ability of a listener to hear a particular speaker against other interfering voices.[3]

The binaural squelch effect is a result of nuclei of the brainstem processing timing, amplitude, and spectral differences between the two ears. Sounds are integrated and then separated into auditory objects. For this effect to take place, neural integration from both sides is required.[4]

Anatomy

Transmissions from the superior olivary complex (SOC), in the pons of the brainstem, travel along the lateral lemniscus to the inferior colliculus (IC), located in the midbrain. Signals are then relayed to the thalamus and on through the ascending auditory pathway.

As sound travels into the inner ear of mammals, it encounters the hair cells that line the basilar membrane of the cochlea.[5] The cochlea receives the auditory information to be binaurally integrated. At the cochlea, this information is converted into electrical impulses that travel by means of the cochlear nerve, which spans from the cochlea to the ventral cochlear nucleus, located in the pons of the brainstem.[6] The lateral lemniscus projects from the cochlear nucleus to the superior olivary complex (SOC), a set of brainstem nuclei consisting primarily of two nuclei, the medial superior olive (MSO) and the lateral superior olive (LSO), and the major site of binaural fusion. The subdivision of the ventral cochlear nucleus that concerns binaural fusion is the anterior ventral cochlear nucleus (AVCN).[3] The AVCN consists of spherical bushy cells and globular bushy cells and can also transmit signals to the medial nucleus of the trapezoid body (MNTB), whose neurons project to the MSO. Transmissions from the SOC travel to the inferior colliculus (IC) via the lateral lemniscus. At the level of the IC, binaural fusion is complete. The signal ascends to the thalamocortical system, and sensory inputs to the thalamus are then relayed to the primary auditory cortex.[3][7][8][9]

Function

The ear functions to analyze and encode a sound’s dimensions.[10] Binaural fusion is responsible for preventing the perception of multiple sound images arising from a sound source and its reflections. The advantages of this phenomenon are more noticeable in small rooms, decreasing as the reflective surfaces are placed farther from the listener.[11]

Central auditory system

The central auditory system converges inputs from both ears (inputs that contain no explicit spatial information) onto single neurons within the brainstem. This system contains many subcortical sites with integrative functions. The auditory nuclei collect, integrate, and analyze this afferent supply;[10] the outcome is a representation of auditory space.[3] The subcortical auditory nuclei are responsible for the extraction and analysis of the dimensions of sounds.[10]

The integration of a sound stimulus is a result of analyzing the frequency (pitch), intensity, and spatial localization of the sound source.[12] Once a sound source has been identified, the cells of the lower auditory pathways are specialized to analyze physical sound parameters.[3] Summation is observed when a sound of fixed intensity is perceived as roughly twice as loud when heard with both ears rather than with only one. This process, called binaural summation, is the result of the different acoustics at each ear, which depend on where the sound is coming from.[4]

The cochlear nerve spans from the cochlea of the inner ear to the ventral cochlear nuclei in the pons of the brainstem, relaying auditory signals to the superior olivary complex, where they are binaurally integrated.

Medial superior olive and lateral superior olive

The MSO contains cells that function in comparing inputs from the left and right cochlear nuclei.[13] The tuning of neurons in the MSO favors low frequencies, whereas those in the LSO favor high frequencies.[14]

GABAB receptors in the LSO and MSO are involved in balancing excitatory and inhibitory inputs. The GABAB receptors are coupled to G proteins and provide a way of regulating synaptic efficacy. Specifically, GABAB receptors modulate excitatory and inhibitory inputs to the LSO.[3] Whether a GABAB receptor acts as excitatory or inhibitory on the postsynaptic neuron depends on the exact location and action of the receptor.[1]

Sound localization

Sound localization is the ability to correctly identify the directional location of sounds. The direction of a sound stimulus in the horizontal plane is called its azimuth; in the vertical plane it is referred to as its elevation. The time, intensity, and spectral differences in the sound arriving at the two ears are used in localization. Localization of low-frequency sounds is accomplished by analyzing the interaural time difference (ITD). Localization of high-frequency sounds is accomplished by analyzing the interaural level difference (ILD).[4]
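The geometry behind the ITD cue can be made concrete with a common textbook approximation, the Woodworth spherical-head model, which predicts the ITD from the azimuth angle alone. The sketch below is illustrative only; the head radius and speed of sound are typical values, not measurements:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate ITD in seconds for a distant source at the given azimuth,
    using the Woodworth spherical-head formula: ITD = (a/c) * (theta + sin theta).
    Head radius and speed of sound are typical illustrative values."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source directly ahead produces no delay; one at 90 degrees gives the maximum.
print(round(woodworth_itd(90) * 1000, 3), "ms")  # prints: 0.656 ms
```

The maximum ITD of roughly 0.65 ms for an adult-sized head is the full range over which the brainstem's timing comparison must operate, which is why the preservation of spike timing described below matters.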

Mechanism

Binaural hearing

Action potentials originate in the hair cells of the cochlea and propagate to the brainstem; both the timing of these action potentials and the signal they transmit provide information to the SOC about the orientation of sound in space. The processing and propagation of action potentials is rapid, and therefore, information about the timing of the sounds that were heard, which is crucial to binaural processing, is conserved.[15] Each eardrum moves in one dimension, and the auditory brain analyzes and compares the movements of both eardrums in order to synthesize auditory objects.[3] This integration of information from both ears is the essence of binaural fusion. The binaural system of hearing involves sound localization in the horizontal plane, contrasting with the monaural system of hearing, which involves sound localization in the vertical plane.[3]

Superior olivary complex

The primary stage of binaural fusion, the processing of binaural signals, occurs at the SOC, where afferent fibers of the left and right auditory pathways first converge. This processing occurs because of the interaction of excitatory and inhibitory inputs in the LSO and MSO.[1][3][13] The SOC processes and integrates binaural information, in the form of ITD and ILD, entering the brainstem from the cochleae. This initial processing of ILD and ITD is regulated by GABAB receptors.[1]

ITD and ILD

The auditory space of binaural hearing is constructed based on the analysis of differences in two binaural cues in the horizontal plane: sound level, or ILD, and arrival time at the two ears, or ITD, which allow the sound heard at each eardrum to be compared.[1][3] ITD is processed in the MSO and results from sounds arriving earlier at one ear than the other; this occurs when the sound does not arise from directly in front of or directly behind the hearer. ILD is processed in the LSO and results from the shadowing effect produced at the ear that is farther from the sound source. Outputs from the SOC are targeted to the dorsal nucleus of the lateral lemniscus as well as the IC.[3]
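The level cue can be expressed numerically: the ILD is simply the ratio of the sound levels at the two ears, in decibels. A minimal sketch, with illustrative RMS values standing in for head-shadowed ear signals:

```python
import math

def ild_db(left_rms, right_rms):
    """Interaural level difference in decibels; positive values mean the
    left ear receives the stronger signal."""
    return 20 * math.log10(left_rms / right_rms)

# Head shadowing: the near ear at twice the far ear's RMS level is ~6 dB louder.
print(round(ild_db(0.2, 0.1), 1))  # prints: 6.0
```

Because head shadowing is strongest when the wavelength is short relative to the head, this cue is most informative at high frequencies, consistent with the high-frequency tuning of the LSO noted above.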

Lateral superior olive

LSO neurons are excited by inputs from one ear and inhibited by inputs from the other, and are therefore referred to as IE neurons. Excitatory inputs are received at the LSO from spherical bushy cells of the ipsilateral cochlear nucleus, which combine inputs coming from several auditory nerve fibers. Inhibitory inputs are received at the LSO from globular bushy cells of the contralateral cochlear nucleus.[3]

Medial superior olive

MSO neurons are excited bilaterally, meaning that they are excited by inputs from both ears, and they are therefore referred to as EE neurons.[3] Fibers from the left cochlear nucleus terminate on the left dendrites of MSO neurons, and fibers from the right cochlear nucleus terminate on the right dendrites of MSO neurons.[13] Excitatory inputs to the MSO from spherical bushy cells are mediated by glutamate, and inhibitory inputs to the MSO from globular bushy cells are mediated by glycine. MSO neurons extract ITD information from binaural inputs and resolve small differences in the time of arrival of sounds at each ear.[3] Outputs from the MSO and LSO are sent via the lateral lemniscus to the IC, which integrates the spatial localization of sound. In the IC, acoustic cues have been processed and filtered into separate streams, forming the basis of auditory object recognition.[3]
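The ITD extraction attributed to the MSO is often illustrated with the classic Jeffress delay-line model: an array of coincidence detectors, each pairing one ear's signal with an internally delayed copy of the other's, so that the detector whose internal delay cancels the external ITD responds most strongly. The toy sketch below works on sampled signals with sample-level delays; it is a signal-processing caricature, not a model of real MSO circuitry:

```python
import numpy as np

def jeffress_best_delay(left, right, max_delay=20):
    """Toy Jeffress-style delay-line model. Each candidate internal delay d
    plays the role of one coincidence detector; the dot product measures how
    strongly the delayed signals coincide. The winning delay (in samples) is
    the ITD estimate. All parameters here are hypothetical."""
    best_delay, best_response = 0, -np.inf
    for d in range(-max_delay, max_delay + 1):
        shifted = np.roll(right, -d)             # internal compensating delay
        response = float(np.dot(left, shifted))  # coincidence strength
        if response > best_response:
            best_delay, best_response = d, response
    return best_delay

rng = np.random.default_rng(1)
left = rng.standard_normal(2000)
right = np.roll(left, 5)  # the right ear hears the same sound 5 samples later
print(jeffress_best_delay(left, right))  # prints: 5
```

The exhaustive search over internal delays mirrors the idea of a spatial map of best delays across the detector array; the model is contested as a literal description of the mammalian MSO, but it captures why preserved spike timing is essential to the computation.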

Binaural fusion abnormalities in autism

Current research is investigating the dysfunction of binaural fusion in individuals with autism. Autism, a neurodevelopmental disorder, is associated with many symptoms of impaired brain function, including degradation of hearing, both unilateral and bilateral.[16] Individuals with autism who experience hearing loss exhibit symptoms such as difficulty listening against background noise and impairments in sound localization. Both the ability to distinguish a particular speaker from background noise and the process of sound localization are key products of binaural fusion. They are particularly related to the proper function of the SOC, and there is increasing evidence that morphological abnormalities within the brainstem, namely in the SOC, of autistic individuals are a cause of these hearing difficulties.[17] The neurons of the MSO of individuals with autism display atypical anatomical features, including atypical cell shape and orientation of the cell body as well as stellate and fusiform formations.[18] Data also suggest that neurons of the LSO and MNTB of autistic individuals show distinct dysmorphology, such as irregular stellate and fusiform shapes and smaller than normal size. Moreover, a significant depletion of SOC neurons is seen in the brainstem of autistic individuals. All of these structures play a crucial role in the proper functioning of binaural fusion, so their dysmorphology may be at least partially responsible for these auditory symptoms in autistic patients.[17]

References

