Taking an audio signal and transmitting/receiving it digitally is a multi-stage process, with each step influencing the quality of the transmitted sounds. This article provides a primer on the steps involved in the process for both near- and far-field transmission of signals.

Digital signal processing has opened up innovative ways in which an audio signal can be manipulated. This flexibility allows the development of algorithms to improve the sound quality of the audio signal and opens up new ways in which audio signals can be stored and transmitted. Whereas FM has long been the standard for analog wireless transmission in the hearing aid world, digital is fast becoming the new norm. This paper takes a behind-the-scenes look at some of the basic components of a wireless digital hearing aid that transmits audio data so that readers may appreciate the complexity of such a system.

All wireless digital hearing aids share the same functional stages shown in Figure 1. All analog audio signals must first be digitized through a process called analog-to-digital conversion (ADC). The sampled data are then coded in a specific way (audio codec) for wireless transmission. An antenna (or transmitter) using radio waves (a form of electromagnetic (EM) wave) transmits these signals, and a receiving antenna (or receiver) paired to the transmitter detects the transmitted signal. The signal is then decoded (audio codec) and sent to the digital hearing aid for processing. The processed signal then goes through digital-to-analog conversion (DAC) before it is output through the hearing aid receiver.

FIGURE 1. Functional stages of a wireless digital hearing aid.

Each one of these steps can have a significant impact on the final power consumption of the hearing aids, the delay of the transmitted sounds, and the overall sound quality of the signal (to be discussed in Part 2). Thus, to understand wireless digital hearing aids, one must understand some principles of digital sampling, audio codec (coding and decoding), and transceiver (transmitter and receiver) technology.

Digital Sampling


The process in which a digital system takes a continuous signal (ie, analog), samples it, and quantizes the amplitude so that the signal is discrete in amplitude (ie, no longer continuous) is known as analog-to-digital conversion (ADC). The digitized signal is a sequence of data samples (strings of “1” and “0”) which represent the finite amplitudes of the audio signal over time.

Sampling frequency. The number of times per second at which we measure the amplitude of an analog signal is the sampling frequency or sampling rate. To capture all the frequencies within a signal, the sampling frequency must be at least twice the highest frequency in that signal. For example, if an audio signal contains frequencies up to 8000 Hz, a sampling frequency of 16,000 Hz or higher must be used to sample the audio. Figure 2 shows an example of a 1000 Hz sine wave that is sampled at two different frequencies: 1333 Hz and 2000 Hz. As can be seen, sampling at 1333 Hz incorrectly rendered the 1000 Hz sinusoid as a 333 Hz sinusoid (Figure 2a). When the same signal is sampled at 2000 Hz, the original waveform is accurately reconstructed as a 1000 Hz sine wave (Figure 2b).

FIGURE 2. The effect of sampling frequency on a 1000 Hz waveform. The sample on the left (A) was reconstructed using a sampling frequency of 1333 Hz, causing distortion, whereas the 2000 Hz sampling frequency produced an accurate rendering of the signal.
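
To see the aliasing arithmetic behind Figure 2, consider a minimal sketch (Python; the folding formula is standard sampling theory, and the function name is ours). A sampled tone is indistinguishable from its “folded” image around multiples of the sampling frequency:

```python
def alias_frequency(tone_hz, fs_hz):
    """Apparent frequency of a tone_hz sinusoid sampled at fs_hz:
    the spectrum folds so every tone lands within [0, fs_hz / 2]."""
    f = tone_hz % fs_hz
    return min(f, fs_hz - f)

print(alias_frequency(1000, 1333))  # 333 -> the distorted case of Figure 2a
print(alias_frequency(1000, 2000))  # 1000 -> captured correctly (Figure 2b)
```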

Bit depth (or bit resolution). Digital systems use binary digits (0, 1) or bits to represent the amplitude of the sampled signal. The precision at which the amplitude variations within the audio signal can be reflected is determined by the bit resolution (or bit depth) of the digital processor. As the number of bits in a processor (or bit resolution) increases, finer amplitude differentiation becomes possible.

Figure 3 shows the difference in resolution when a sinusoid is quantized using 1 bit, 3 bits, and 5 bits. The blue line is the analog signal, while the red line is the digital representation of the signal. The space between the blue and red lines (in yellow) is the quantization noise. Note that, as the number of bits increases, the resolution of the signal increases (the waveform becomes smoother) and the quantization noise decreases. In other words, the dynamic range (the range of possible values between the most intense and the least intense sound) increases.



FIGURE 3. The effect of bit resolution on the output waveform (the blue line is the original sinusoid). The red line represents the digitized sinusoid. The difference between the red and blue lines (in yellow) is the quantization noise.
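
As a rule of thumb, each added bit extends the dynamic range by roughly 6 dB. The sketch below (Python with NumPy; a plain uniform quantizer of our own choosing, not the converter in any actual product) shows how the quantization noise shrinks as bit depth grows:

```python
import numpy as np

def quantize(signal, bits):
    """Uniformly quantize a signal in [-1, 1] using 2**bits steps."""
    step = 2.0 / 2 ** bits
    return np.round(signal / step) * step

t = np.linspace(0, 1e-3, 1000)
x = np.sin(2 * np.pi * 1000 * t)          # stand-in for the analog sinusoid
for bits in (1, 3, 5, 16):
    noise = x - quantize(x, bits)         # the yellow area in Figure 3
    snr_db = 10 * np.log10(np.mean(x**2) / np.mean(noise**2))
    print(f"{bits:2d} bits: quantization SNR ~ {snr_db:5.1f} dB")
```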

Perceptually, a signal that is processed with a high bit resolution will sound clearer, sharper, and cleaner than the same signal processed with a lower bit resolution. It is not that more bits are needed to represent a more intense signal (or fewer bits for a soft sound); rather, more bits are needed when loud and soft sounds are present together (ie, the level fluctuates) and one is interested in preserving the relative amplitudes of these sounds (ie, the dynamic range).

Sampling trade-offs: current drain. When an analog signal is converted into a digital form, the amount of information (number of bits) or size of the digital signal is the product of the sampling frequency, the bit resolution, and the duration of the sampling. A digital processor that samples at a high frequency with a high bit resolution produces more bits than one that uses a lower bit resolution and/or a lower sampling frequency. This means that more of the nuances of the input signal are preserved. Perceptually, this corresponds to a less noisy signal with better sound quality. Unfortunately, more bits also mean more computation, more memory, and a longer transmission time. Ultimately, this demands a higher current drain. Thus, a constant challenge for engineers is to seek the highest sampling frequency and the greatest bit resolution without significantly increasing the current drain.

Digital representation. Digital signals are represented as a string of 1’s and 0’s. To ensure that the data can be used correctly, other information is added to the beginning of the data string. This is called a “header” or the “command data.” This includes information such as the sampling rate, the number of bits per sample, and the number of audio channels present.

Figure 4 shows an example of what an audio header may look like (along with the digital audio). In this case, the 12-bit header consists of three 4-bit words—indicating how many channels it contains (mono or stereo), the sampling rate, and the number of bits per sample. The hearing aid processor reads the header first before it processes the data string.

FIGURE 4. Digital audio with header information.
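
A minimal sketch of how a 12-bit header of three 4-bit words could be packed and unpacked (Python; the field codes, eg, 0x1 for stereo, are hypothetical values for illustration, not an actual hearing aid format):

```python
def pack_header(channels, rate_code, bits_code):
    """Pack three 4-bit words into one 12-bit header value."""
    assert all(0 <= w < 16 for w in (channels, rate_code, bits_code))
    return (channels << 8) | (rate_code << 4) | bits_code

def unpack_header(header):
    """Recover the three 4-bit words: (channels, rate, bits per sample)."""
    return (header >> 8) & 0xF, (header >> 4) & 0xF, header & 0xF

header = pack_header(0x1, 0x2, 0x3)   # hypothetical: stereo, 16 kHz, 16 bits
print(f"{header:012b}")               # 000100100011
print(unpack_header(header))          # (1, 2, 3)
```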

Digital-to-analog conversion. To convert the processed digital string back into an analog signal (such as after processing by the hearing aid processor), a digital-to-analog converter (DAC) is needed. The DAC reads the instructions in the header and decodes the data at the same rate at which the audio was originally sampled. The output is low-pass filtered to smooth the transitions between voltages (the yellow shaded area in Figure 3). The signal is finally sent to an audio speaker (or receiver).

Audio Data Compression or Audio Codec

Rationale for data compression. When audio is converted from an analog to a digital format, the resulting digital audio data can be quite large. For example, one minute of stereo audio recorded at a sampling frequency of 44,100 Hz (samples per second) with 16-bit resolution results in over 84 Mbits of information. This requires about 10.5 Mbytes (MB) of storage (1 byte = 8 bits). That’s why an audio CD with a capacity of 783 MB can hold only about 74 minutes of music.
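
The arithmetic is simply sampling rate × bit depth × channels × duration; a quick check (Python):

```python
fs, bits, channels = 44_100, 16, 2             # CD-quality stereo
bits_per_minute = fs * bits * channels * 60
megabytes = bits_per_minute / 8 / 1_000_000    # 8 bits per byte
print(f"{bits_per_minute / 1e6:.1f} Mbit = {megabytes:.2f} MB per minute")
print(f"A 783 MB disc holds about {783 / megabytes:.0f} minutes")
```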

To increase the number of songs that can be stored on the CD, one can either digitize the songs with a lower bit resolution or sample them at a lower sampling frequency. Unfortunately, a lower bit resolution will decrease the amplitude resolution of the audio signal and increase the quantization noise. Decreasing the sampling frequency will limit the range of frequencies that are captured and lose some of the detail in the songs. Thus, neither approach offers an acceptable way to reduce the size of the data file while maintaining the sound quality of the music.

Data compression (or data codec, short for “data coding and decoding”) allows digital data to be stored more efficiently, thus reducing the amount of physical memory required to store the data. Authors’ Note: Data compression should not be confused with amplitude compression, which is the compression or reduction of the dynamic range of an audio signal. Unless specifically intended, data compression generally does not reduce or alter the amplitude of the audio signal, but it does reduce the physical size (number of bits) that the audio signal occupies.

The transmission bit rate—or how much data (in number of bits) a transmitter is capable of sending in unit time—is a property of the transmitting channel. It depends on the available power supply, the criterion for acceptable sound quality of the transmitted signal, and also the integrity of the codec that is used to code and decode the transmitted signal. So, for example, while a higher bit rate usually means more data can be transmitted (and a better sound quality by inference), it does not guarantee sound quality because sound quality also depends on how well the codec system works.

How quickly an audio sample is transmitted (or downloaded) is important in the music world. The amount of downloading time is related to the size of the file and the bit rate of the transmitting channel. For example, a 4-minute song of 35 MB takes over 9 minutes to download over an average high-speed Internet connection (a bit rate of 512 kbit/s). If the same song is compressed using the mp3 encoding technique, it is approximately 4 MB in size and takes approximately 1 minute to download. Thus, another reason for data compression (or codec) is to reduce the size of the “load” (or file) so the same data can be transmitted faster within the limits of the transmission channel without losing its quality.
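
Again, the numbers follow from simple division (Python; we read the quoted connection speed as 512 kbit/s):

```python
def download_minutes(file_mb, rate_kbit_s):
    """Transfer time for a file of file_mb megabytes over a channel
    with a bit rate of rate_kbit_s kilobits per second."""
    return file_mb * 8_000 / rate_kbit_s / 60

print(f"{download_minutes(35, 512):.1f} min")   # ~9.1 min, uncompressed
print(f"{download_minutes(4, 512):.1f} min")    # ~1.0 min, mp3-compressed
```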

A digital wireless hearing aid that transmits audio from one hearing aid to the other, or from a TV, cell phone, etc, to the hearing aid, faces the same (or greater) constraints as a music download. Because of the need for acceptable current consumption, the bit rate of current wireless digital hearing aids is typically lower than that of a high-speed Internet connection. To transmit the digital audio in real time without noticeable delays or artifacts, some intelligent means of reducing the size of the audio data is critical. (Note: this is not a necessary consideration for transmission of parametric data, such as hearing aid gain settings, because of the relatively small size and non-redundant nature of such data.)

Audio coding. The various algorithms that are used to code and decode an audio signal are called audio codec. The choice of a codec is based on several factors, such as the maximum available transmission bit rate, the desired audio quality of the transmitted signal, the complexity of the wireless platform, and the ingenuity of the design engineers. These decisions affect the effectiveness of the codec.

One can code a signal intelligently so it has good sound quality but fewer bits (thus requiring a lower transmission bit rate). Conversely, if the codec is not “intelligent” or if the original signal does not have a good sound quality, no transmission system at any bit rate can improve the sound quality.

There are two components in the audio encoding process: 1) audio coding, which involves “packaging” the audio signal into a smaller size, and 2) channel coding, which involves adding error-correction codes to handle potentially corrupted data during transmission. Protocol data, such as header information for data exchange, are also included prior to transmission.

Approaches to audio coding: lossless vs lossy. The objective for audio coding is to reduce the size of the audio file without removing pertinent information. Luckily, audio signals have large amounts of redundant information. These redundancies may be eliminated without affecting the identity and quality of the signal. Audio coding takes advantage of this property to reduce the size of the audio files. The two common approaches—lossless and lossy—may be used alone or in combination (these approaches may be used with other proprietary approaches as well).

Lossless codec. The systems that take advantage of the informational redundancy in audio signals are called lossless systems. These systems use “redundancy prediction algorithms” to compile all the redundant or repeated information in the audio signal. They then store the audio more efficiently with fewer bits but no information is lost. For example, the number 454545454545 can be coded as a 12-digit number by the computer. But the same number can also be coded as 6(45) to be read as 45 repeated 6 times.

This is the process used when computers compress files into a ZIP file. It is used in applications where exact data retention—such as computer programs, spreadsheets, computer text, etc—is necessary.
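
A minimal sketch of the idea in Python (mirroring the 6(45) example; real lossless codecs use far more sophisticated prediction, but the principle is the same: store the repetition count rather than the repetitions, and recover the data exactly):

```python
def encode_repeats(s, unit):
    """Code s as (count, unit) if s is unit repeated; None otherwise."""
    count, remainder = divmod(len(s), len(unit))
    return (count, unit) if remainder == 0 and unit * count == s else None

def decode_repeats(code):
    count, unit = code
    return unit * count

code = encode_repeats("454545454545", "45")
print(code)                  # (6, '45') -- "45 repeated 6 times"
print(decode_repeats(code))  # 454545454545 -- bit-for-bit the original
```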

Lossy codec. The systems that take advantage of perceptual redundancy in audio coding are called lossy systems. They use “irrelevance algorithms” which apply existing knowledge of psychoacoustics to aid in eliminating sounds that are outside the normal perceptual limits of the human auditory system. For example, it is known that, when two sounds are presented simultaneously, the louder sound will exert a masking effect on the softer sound. The amount of masking depends on the closeness of the spectra of the two sounds. Because of masking effects, it is inconsequential perceptually if one does not code the softer sound while a louder one is present. Lossy audio coding algorithms are capable of very high data reduction, yet in these systems the output signal is not an exact replica of the input signal (even though they may be perceptually identical).

This type of codec is commonly used in mp3 technology. JPEG (Joint Photographic Experts Group) compression is another example of lossy data compression used in the visual domain.
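
The toy sketch below (Python with NumPy) conveys the flavor of an “irrelevance” rule: it discards spectral components far below the strongest one. It is a deliberately crude stand-in for a real psychoacoustic masking model; the 40 dB threshold and frame length are arbitrary illustrative choices:

```python
import numpy as np

def toy_lossy_code(frame, keep_db=40.0):
    """Zero out spectral bins more than keep_db below the strongest bin
    (a crude stand-in for masking-based irrelevance coding)."""
    spectrum = np.fft.rfft(frame)
    level_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    keep = level_db >= level_db.max() - keep_db
    return np.where(keep, spectrum, 0), keep

fs = 16_000
t = np.arange(160) / fs                           # one 10 ms frame
frame = np.sin(2*np.pi*1000*t) + 0.001*np.sin(2*np.pi*3000*t)
coded, kept = toy_lossy_code(frame)
print(f"bins kept: {kept.sum()} of {kept.size}")  # most bins are discarded
decoded = np.fft.irfft(coded, n=frame.size)       # close, but not bit-true
```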

Channel coding. One important consideration when sending any type of data (analog or digital) is the potential for errors to be introduced into the signal by electromagnetic interference during transmission. This is especially pertinent for wireless systems. Consequently, efforts must be made to ensure that the transmitted data are received correctly.

Channel coding algorithms provide a method to handle transmission errors. To achieve that objective, channel coding algorithms specify ways to check the accuracy of the received data. They also include additional codes that specify how errors can be handled.

Because there are no required standards on how these errors must be handled, channel coding algorithms vary widely among manufacturers. Some devices simply ignore and drop the data that are in error; some wait for the correct data to be sent; and others can correct the data that are in error. The various approaches can affect the robustness of the transmission and the sound quality of the transmitted signal.
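
As a minimal illustration of the idea (and not the scheme of any particular manufacturer), the sketch below uses a 3x repetition code with majority-vote decoding, which can correct any single flipped bit per triple (Python):

```python
def channel_encode(bits):
    """Repeat each bit three times (a very simple error-correcting code)."""
    return [b for b in bits for _ in range(3)]

def channel_decode(coded):
    """Majority vote over each triple corrects any single flipped bit."""
    return [int(sum(coded[i:i+3]) >= 2) for i in range(0, len(coded), 3)]

sent = channel_encode([1, 0, 1, 1])   # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] = 1                           # one bit corrupted in transit
print(channel_decode(sent))           # [1, 0, 1, 1] -- error corrected
```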

Before sending the encoded digital audio (and the error-correction codes), the encoder prepends a header to the data, following the protocol for wireless transmission. In this case, the header includes the address of the receiver, command data, and a data-type identification code that specifies which data are instructions, which are audio data, and which are error-correction codes. It also includes information on how to verify that the transmitted data are correct, and how to handle “errors” if and when they are encountered.

Audio decoding. When a coded audio signal is received, it needs to be decoded so the original information can be retrieved. The receiver first examines the header information from the received coded signals so it knows how the received data should be handled. The received data then go through the channel decoder to ensure that the transmitted data are correct. Any transmission errors are handled at this channel decoding stage according to the error-correction codes of the channel codec. The channel-decoded signal then feeds through the audio decoder which unpacks the compressed digital audio data to restore the “original” digital audio.

“Bit-true” vs “non bit-true” decoding. There are two approaches to audio codec: bit-true and non bit-true. In a bit-true codec, the decoder knows exactly how the encoder works, so it can decode the audio faithfully with the least current drain. Because it knows how the data are coded, it is prepared to handle any “errors” that it encounters during the transmission. A bit-true system is a dedicated system.

A non bit-true codec is an open system that allows multiple manufacturers to produce files that can be decoded by the same decoder. An example is the codec used in mp3 players. The advantages of a non bit-true system are its flexibility, adaptability, and ease of implementation by various manufacturers; it can save development time and resources. A potential problem is that quality is not always ensured because different implementations are allowed. And because the decoder does not know the encoder, errors introduced during transmission may not be corrected effectively and/or efficiently. This leads to dropouts and increased noise, and it may degrade the quality of the transmitted audio.

Wireless Transmission

Why wireless? Wireless allows the transfer of information (or audio data) over distance (from less than a meter to thousands of miles) without the use of any wires or cables. Although wireless exposes the transmitted data to potential interference from other signals, the convenience it offers and the possibility of transferring data over long distances (such as via satellite) make it a desirable tool for data transmission.

The challenge for engineers is to minimize the potential for transmission errors (from interference) while keeping reasonable power consumption. Today, wireless transmission technology is also applied to hearing aids to bring about improvements in communication performance never before possible.

Vehicles for transmission: Electromagnetic (EM) waves. Wireless transmission is achieved through the use of electromagnetic (EM) waves. An EM wave is a transverse wave with both an electric component and a magnetic component. EM waves by themselves are not audible unless they are converted into sound waves (longitudinal waves). One useful property of an EM wave is that it is easily modified (modulated) by another signal. This makes EM waves excellent carriers of data.

Electromagnetic waves cover a wide range of frequencies. The choice of carrier frequency depends on how much information needs to be sent, how much power is available, the transmission distance, how many other devices are using that frequency, local laws and regulations, and terrestrial factors such as mountains or buildings that may be in the path of the transmission. Higher carrier frequencies can carry more information than lower frequency carriers. On the other hand, lower frequencies require less power for transmission.

The spectrum of electromagnetic waves in use today can be divided into different categories. Visible light is one form of electromagnetic wave and is marked in the center of Figure 5. On the left side of the spectrum are the frequencies used for radio transmission (radio waves). These waves have a longer wavelength (and thus lower frequencies) than light and are commonly used for most types of wireless communication. One can see that most AM and FM radio broadcasts use frequencies in the 10^6 to 10^8 Hz region.

FIGURE 5. The electromagnetic (EM) spectrum, with visible light near the center and most transmission carrier frequencies in the lower-frequency (longer-wavelength) region.

Far-field vs near-field transmission. Traditional wireless transmission systems use an antenna to transmit an EM wave through the air. The farther the wave travels from the transmitter, the weaker its strength. However, the rate at which the EM wave’s amplitude decreases depends on how far the signal has propagated relative to its wavelength.

An intended transmission distance that is much greater than the wavelength of the carrier is classified as far-field; in contrast, a distance much shorter than the wavelength is called near-field. Thus, the distinction between a far- and a near-field depends not only on the physical distance, but also on the frequency of the carrier. In the far field, both the electric and magnetic (or inductive) field strengths decrease in proportion to 1/r, where r is the distance from the transmitter. In the near field, on the other hand, the magnetic field strength is dominated by a component that decreases at a rate of 1/r^3, as shown in Figure 6.

FIGURE 6. Difference between far-field and near-field attenuation of the magnetic field.

The difference in the rate of decrease between the two components suggests that they may be utilized for different applications. Most wireless technologies today use both the electric and magnetic fields of EM waves for far-field transmission. In the area of hearing aids and assistive devices, this usually means a distance of 10 to 50 m. Because of the greater distance of far-field transmission, interference to and from other transmitted signals is likely to occur, depending on their relative levels. For transmission over a short distance (less than 1 m, or near-field), the magnetic or inductive component is used instead because it retains its signal strength over that short distance. In addition to a lower current consumption, the shorter distance means less interference to and from other transmitted signals, which results in greater security and greater immunity from interference.
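
The practical consequence of the two decay laws is easy to quantify (Python; the 25 cm reference distance is an illustrative ear-to-ear figure of our own choosing):

```python
import math

def level_db(r, r0, exponent):
    """Field level at distance r relative to r0 for a 1/r**exponent law."""
    return 20 * math.log10((r0 / r) ** exponent)

r0 = 0.25                                     # reference distance, meters
for r in (0.5, 1.0, 10.0):
    print(f"r = {r:4.1f} m: far-field (1/r) {level_db(r, r0, 1):6.1f} dB, "
          f"near-field (1/r^3) {level_db(r, r0, 3):7.1f} dB")
```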

Bluetooth: A common far-field communication protocol. Bluetooth is a commonly used radio frequency (RF) wireless standard in many communication devices today. It is a wireless protocol for exchanging data over distances of up to 100 meters (thus, far-field) and uses an EM carrier in the 2.4 GHz band divided into 79 channels, each 1 MHz wide.

Bluetooth is described as a protocol because it offers a predefined method of exchanging data between multiple devices. This means that two devices connected via Bluetooth (ie, Bluetooth compatible) must meet certain requirements before they can exchange data. This qualifies it as an open or non bit-true system. The openness and connectivity are major reasons for its widespread use in consumer electronics today.

Historically, Bluetooth was developed around the time computer wireless networks (Wi-Fi) became available. Wireless networks also use the 2.4 GHz carrier frequency band but have a channel bandwidth of 22 MHz. This allows wireless networks to send more information over a greater distance, but at the expense of higher power consumption. By restricting the range of the transmission, engineers were able to reduce the power consumption of Bluetooth. This enables devices smaller than notebook computers (eg, cell phones, PDAs, etc) to also utilize Bluetooth.

However, the power consumption of Bluetooth is still not low enough to permit its integration into a hearing aid. A typical Bluetooth chip requires a current drain of 45 milliamperes (mA) to as high as 80 mA for operation. If a Bluetooth chip were embedded in a hearing aid that uses a #10 battery (with a capacity of 80 mAh), the battery would last only 1 to 2 hours!
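
The battery-life estimate is simply capacity divided by current drain (Python):

```python
battery_mah = 80                     # typical #10 hearing aid battery
for drain_ma in (45, 80):            # the quoted Bluetooth current drains
    print(f"{drain_ma} mA drain: {battery_mah / drain_ma:.1f} hours")
# 45 mA -> ~1.8 h; 80 mA -> 1.0 h; hence "1 to 2 hours"
```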

Another problem with Bluetooth is the audio delay inherent in the standard Bluetooth audio profile. In creating a standard that is adaptable to many different devices, Bluetooth must complete many procedures to ensure a proper communication link between devices. This delays the transmission of signals. For example, a delay of up to 150 ms may be noted between the direct sound and the sound transmitted from a TV using Bluetooth. When a delayed audio signal is mixed with the direct signal, a poorer sound quality—ranging from a “metallic” sound to an “echo”—may be perceived, depending on the amount of delay. Excessive delay, such as 150 ms, can also lead to a loss of synchrony between the visual and audio signals. Figure 7 shows the perceptual artifacts that may result from mixing direct sounds with transmitted sounds at various delays.

FIGURE 7. The consequences of direct and delayed transmitted signals on the perception of sound. Delays in excess of 10 ms become problematic.

Near-field magnetic induction (NFMI). The limited capacity of today’s hearing aid batteries makes it impractical to use Bluetooth exclusively for far-field transmission to the hearing aids.

The rapid rate of attenuation of the magnetic field (shown in Figure 6) means high signal strength in close proximity and low signal strength beyond. This ensures accurate transmission of data between intended devices (such as hearing aids). The rapid decay also means that the signal will not be strong enough to interfere with other near-field devices in the environment, nor will it be interfered with by other unintended near-field devices. A shorter transmission range also allows a lower carrier frequency, which reduces the power consumption.

This makes the magnetic or inductive component of the EM wave an ideal technology to integrate within hearing aids for near-field or short-range communication. On the other hand, the relative orientation of the antennae (between the transmitter and the receiver) may affect the sensitivity of the reception. Remote controls and wireless CROS hearing aids are prime examples of this form of technology.

Streamers and relay: A solution that incorporates inductive and Bluetooth. Using an inductive signal for wireless communication between hearing aids makes sense because of its security and low power requirement. However, connecting to external electronic devices (such as a cell phone or TV) would then be impossible. A practical system must therefore combine inductive technology with Bluetooth connectivity (or other far-field protocols).

This can be achieved using an external device (outside the hearing aid) that houses both forms of wireless technology. This device, which includes Bluetooth (and other far-field) technology, can be larger than a hearing aid and can accommodate a larger battery than standard hearing aid batteries. Thus, it can connect with external devices (such as cell phones) that are Bluetooth compatible.

The device should also have near-field magnetic (inductive) technology to communicate with the wearer’s hearing aids when it is placed close to them. Thus, a Bluetooth signal can be received by this device and then re-transmitted to the hearing aid. This is the basis of the “streamers” used in many wireless hearing aids today.

FIGURE 8. A relay device that receives a Bluetooth signal and re-transmits it to the hearing aid on the other end.

Signal Transmission

Analog transmission. EM waves are used to carry the audio information so they may be transmitted wirelessly over a distance. This is accomplished by a process called modulation—where the EM wave (the carrier) is altered in a specific way (ie, modulated) to carry the desired signal.

There are two common analog modulation schemes: amplitude modulation (AM) and frequency modulation (FM). The signal that modulates the carrier is an audio signal (eg, speech or music). The same mechanism of modulation may be used in both far-field and near-field transmissions.

For amplitude modulation (AM), the amplitude of the carrier frequency is altered (or modulated) according to the amplitude of the signal that it is carrying. In Figure 9, observe how the amplitude-modulated signal shows the same amplitude change over time as the sine wave that is used to modulate the carrier. The valleys of the sine wave reduce the amplitude of the carrier waveform, and the peaks of the signal increase the amplitude of the carrier waveform.

For frequency modulation (FM), the frequency of the carrier is modulated according to the amplitude of the signal that is sent. Figure 9 displays how the frequency-modulated signal tracks the amplitude change of the sine wave by altering the spacing (or frequency) of the carrier waveform. Waveforms that are spaced farther apart (lower frequency) represent the valleys of the sine wave, and waveforms that are closer together (higher frequency) represent the peaks. Both AM and FM receivers demodulate the received signal and reconstruct the audio signal based on how the AM or FM signal was modulated.

FIGURE 9. Analog modulation schemes—amplitude modulation (AM) and frequency modulation (FM).
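
A compact sketch of both modulation schemes (Python with NumPy; the carrier, message, and deviation values are illustrative choices, not those of any actual FM system):

```python
import numpy as np

fs = 200_000                                  # simulation rate, Hz
t = np.arange(0, 0.01, 1 / fs)                # 10 ms of signal
message = np.sin(2 * np.pi * 500 * t)         # the audio to be carried
carrier_hz = 10_000

# AM: the message scales the carrier's amplitude (0.5 = modulation depth)
am = (1 + 0.5 * message) * np.sin(2 * np.pi * carrier_hz * t)

# FM: the message steers the carrier's instantaneous frequency
deviation_hz = 2_000
inst_freq = carrier_hz + deviation_hz * message
fm = np.sin(2 * np.pi * np.cumsum(inst_freq) / fs)
```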

The Federal Communications Commission (FCC) regulates the use of the radio portion of the EM spectrum in the United States. In the field of amplification, the three frequency bands commonly used for FM systems are: 169-176 MHz (H Band), 180-187 MHz (J Band), and 216-217 MHz (N Band). The frequency band used in near-field transmission (and in remote controls) is typically around 10-15 MHz (although earlier systems use a lower carrier frequency). The frequency band used for Bluetooth is the 2.4-2.5 GHz band, classified as one of several “Industrial, Scientific, and Medical” (ISM) bands.

Digital transmission. The preceding discussion described the use of an analog audio signal to modulate a high-frequency EM carrier. In that process, an analog signal is transmitted. When the signal that needs to be transmitted is digital, an analog modulation scheme is not appropriate. In addition to the fact that the signal itself is digital (thus requiring digital transmission), there are other benefits of digital transmission.

Any form of signal transmission can be affected or contaminated by EM interference or noise. This is especially the case when the receiver is far from the source, because the signal level decreases with distance (see Figure 6) while the noise level from other EM sources remains constant (ie, the signal-to-noise ratio decreases). Thus, sound quality (and even speech intelligibility) decreases as the distance increases.

On the other hand, a digital signal (“1” and “0”) is not as easily affected by the interfering EM noise. As long as the magnitude of the interfering noise does not change the value of the bit (from “1” to “0” and vice versa), the signal keeps its identity. Thus, digital transmission is more resistant to EM interference than analog transmission.

FIGURE 10. Hypothetical sound quality as a function of interference between analog and digital transmissions.

This suggests that the sound quality of a digitally transmitted signal may remain natural (and less noisy) up to a much higher level of EM interference than an analog signal. Figure 10 shows the hypothetical sound quality difference between an analog transmission and a digital transmission as a function of distance and/or interference.

How is digital transmission accomplished? In digital transmission, a technique called “Frequency Shift Keying” (FSK) is used. This modulation scheme uses two different frequencies around the carrier frequency to represent the “1” and “0” used in the binary representation. For example, a “1” may be assigned the frequency 10.65 MHz and a “0” the frequency 10.55 MHz for a carrier at 10.6 MHz. Each time a “1” needs to be sent, the transmitter will send out a 10.65 MHz signal; each time a “0” needs to be sent, a signal at 10.55 MHz will be sent.

Like analog modulation, when the transmitted signal (or pulse train) is received, it needs to be demodulated into “1” and “0” to recreate the digital sequence. This is done by the demodulator at the receiver end. Frequencies around 10.55 MHz will be identified as a “0,” and those around 10.65 MHz as a “1.” Typically, two points per bit are sampled to estimate the bit identity.
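
A sketch of FSK around the article’s 10.6 MHz carrier (Python with NumPy; the bit rate, simulation rate, and correlation-based demodulator are our illustrative choices; real receivers do this in dedicated hardware):

```python
import numpy as np

fs, bit_rate = 50_000_000, 100_000     # simulation rate and bit rate, Hz
f0, f1 = 10.55e6, 10.65e6              # '0' and '1' tones around 10.6 MHz
spb = fs // bit_rate                   # samples per bit

def fsk_modulate(bits):
    """Emit a burst of f1 for each '1' and a burst of f0 for each '0'."""
    freqs = np.repeat([f1 if b else f0 for b in bits], spb)
    return np.sin(2 * np.pi * np.cumsum(freqs) / fs)

def fsk_demodulate(signal):
    """Per bit, pick whichever tone correlates better with the waveform."""
    t = np.arange(spb) / fs
    ref0, ref1 = np.exp(-2j*np.pi*f0*t), np.exp(-2j*np.pi*f1*t)
    return [int(abs(signal[i:i+spb] @ ref1) > abs(signal[i:i+spb] @ ref0))
            for i in range(0, len(signal), spb)]

print(fsk_demodulate(fsk_modulate([1, 0, 1, 1])))   # [1, 0, 1, 1]
```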

While this approach is sufficient for typical operation, errors (identification of a “1” as “0” and vice versa) can still occur under adverse conditions (such as intense EM interference from another source). Thus, an important consideration in wireless antenna and receiver design is how to handle corrupted transmitted signals so the retrieved signal is as faithful as possible to the original.

Summary

The process of taking an audio signal and transmitting/receiving it digitally is a multi-stage process, each step of which can affect the quality of the transmitted sounds. The following sequence summarizes the steps involved (for both near- and far-field transmissions):

1) The audio signal (eg, from a TV) is converted into digital form through analog-to-digital conversion (ADC).
2) The digital signal goes through an audio encoding process to reduce its size (audio coding).
3) The encoded signal goes through channel coding to include error correction codes (channel coding).
4) Header information is included.
5) The coded signal is modulated through FSK (or other techniques) and prepared for broadcast (modulation).
6) The modulated signal is broadcast through the antenna (transmission by antenna).
7) The modulated signal is received by the antenna (reception by antenna).
8) The signal is demodulated to retrieve the digital codes (demodulation).
9) The header information is read.
10) The digital codes go through channel decoding to correct for errors (channel decoding).
11) The signals go through audio decoding to “decompress” the data and return it to as close to its original form as possible (audio decoding).
12) The decoded digital signal can be processed by the hearing aid processor (DSP processing).
13) The processed signal leaves the hearing aid through a digital-to-analog converter to return to its analog form (DAC).

Correspondence can be addressed to HR or Francis Kuk, PhD, at .

Citation for this article:

Kuk F, Crose B, Korhonen P, Kyhn T, Mørkebjerg M, Rank ML, Kidmose P, Jensen MH, Larsen SM, Ungstrup M. Digital wireless hearing aids, Part 1: A primer. Hearing Review. 2010;17(3):54-67.