Saturday, May 29, 2010

Modern Hearing Aids, what you need to know to make an educated choice.

Modern digital hearing aids are complex electronic devices, really microcomputers dedicated to processing sound. Whilst it is a good idea to research the features of the aid and at least understand the rudiments, it is easy to get bogged down in the tech speak and the manufacturers' marketing blurb.

The easiest way to understand hearing aids, and to answer what they will do for you and how they will help you in your daily life, is to relate the level of technology of the aid and the age of the platform to your lifestyle needs.

First off, the age of the platform and the level of technology. Every manufacturer designs and releases a new platform from time to time. The platform is based on the chipsets or microprocessors in the aids. The manufacturer will release the new top-end aid on this platform and then the downgrades. These are the levels of technology within the platform, which are basically split into high, medium, low and basic.

The best way to understand this is to equate these levels to lifestyle needs.

High: Busy professional, young person with an active lifestyle, older person with an active lifestyle, child.

Mid: Most of the rest of us; generally active, but the activities tend to be in less riotous settings.

Low: More sedentary lifestyle; occasionally active, but mostly home life and family gatherings.

Basic: According to my wife this is me. I dispute this, but nevertheless: for somebody who wants to listen to the TV and radio, take phone calls and entertain the occasional visitor.

So the three questions you need answered are: how old is the platform? How does the aid fit into the technology levels? Who makes the aid? Armed with these answers you can then do some research to see if what you have been told is true and how the aid fits into your lifestyle.

This brings us to your lifestyle. Be honest about your lifestyle needs, weigh them against your budget and then make a decision on the purchase. Each level of technology will help you to hear better, absolutely no doubt. The key is where they will help you to hear better, and in which sound situations.

My best advice is to buy the best you can afford, and buy two if you need two. Two cheaper aids will help you hear better than one expensive one. Always buy an aid from the latest platform from any given manufacturer. Once you research the platform age and tech level of the aids you can make an educated choice. You should also think clearly about the type of aid, whether ITE (in the ear) or BTE (behind the ear).

Whilst the emotional choice tends to be the discreet in-the-ear model, that may well not be the right aid for you. In fact, for reliability, a behind-the-ear aid is probably a better choice for several reasons. It will be more reliable in the long term and probably last a lot longer, provided your dog, baby or cat doesn't eat it. Once that choice is made and a purchase is completed, you can then have realistic expectations about what the devices will help you accomplish. Then, to look after them, read the following.

Choice and maintenance of hearing devices

Thursday, May 27, 2010

Taking care of your hearing aids and why you should choose carefully.

After parting with probably a large chunk of your hard-earned cash for a hearing device, it makes real sense to take care of it to the best of your ability. The daily care for hearing instruments, whether BTEs or custom ITEs, is relatively similar. At the end of every day, take the aids off and open the battery door; opening the battery door allows the battery to breathe and conserves power.

In the case of a BTE, wipe the aid case and the mould with a dry cloth. Check the sound bore of the mould and the tubing to see if any cerumen (wax), detritus or moisture has built up during the day. If you notice some, separate the mould from the aid and use a puffer ball to dislodge any foreign object. This should also force out any large beads of condensation. The mould and tube can then be placed in a soak box, a small tub that can be filled with warm water and a wash tab. This will remove any bacteria or stubborn cerumen. When you remove it the next morning, it is important to dry the mould and tubing thoroughly; again, the puffer ball will help here.

It is a good idea to place the BTE instrument in a dry box overnight. A dry box is a small tub which holds a drying pastille and your aid; overnight it will remove any moisture that has built up over the day's use. Moisture is your enemy: while manufacturers take every pain to ensure that hearing aid circuits are protected, nothing, and I do mean nothing, is foolproof. A build-up of moisture can wreak havoc with the working of an aid. It can cause weakened amplification and the failure of components within the aid.

With an ITE, just wipe the case with a dry cloth to clean it, then check the wax cap. With the brush supplied, and holding the ITE with the wax cap facing the floor, gently brush the wax cap area. Then gently brush the microphone covers. These actions will remove any detritus that has gathered during the day. It is a good idea to place the aid in a dry box overnight, particularly during the summer. Moisture again is a huge issue, particularly for a custom aid: because of its placement in the ear it is prone to moisture build-up in susceptible ears.

Whilst I understand the emotional choice that is made when a Patient picks an ITE, in practice I have always advised them to go for a BTE. You are about to spend a large amount of money on a sensitive electronic device that will change your life for the better. Although manufacturers take every pain to try and protect it, it cannot be hermetically sealed; it needs to remain open to the ear to work. The ear is a hostile place: it is warm, wet and oily. I ask you this question, and I appeal to you to answer it honestly and think about the ramifications.

WOULD YOU SPEND TWO AND A HALF GRAND ON A FLAT SCREEN TV AND WHEEL IT INTO YOUR SAUNA EVERY DAY AND RUB BABY OIL INTO IT?

I would assume most of you have answered no, so why do you do it to your hearing aid, your one link to a more normal lifestyle? The one thing that allows you a fuller enjoyment of your day-to-day life? Go for a BTE: they get smaller and smaller every day, and with thin-tube applications they are almost as discreet as an ITE. Make the right choice for you, but make it knowing all the facts.

Geoff

Otovation OTOpod M2, exciting new wireless Audiometer

INTRODUCING OTOPOD M2
The OTOPod M2 is a wireless and truly portable diagnostic audiometer and fitting solution.
The device's small size and elegant form, combined with wireless operation, make it convenient to use for both the hearing care professional and the patient. The OTOPod is also great for use outside the office in a domiciliary setting, where the comfortable handheld unit with integrated patient response button is ideal. The device and accessories can be contained within a small over-the-shoulder camera-style bag, which is included.
Plus, the OTOPod can be used with a variety of transducers, including TDH-39, EAR 5A inserts, and Sennheiser circumaurals. The transducers connect to the OTOPod with a mini-DIN connector for a nice secure connection. The M2 can be operated within NOAH to provide a complete diagnostic testing and fitting process for the following Widex hearing aid families: Passion 440, mind 440, mind 330, Passion 115, Passion 110, Passion 105, Inteo, AIKIA, Flash, REAL, Senso, Bravo, and Bravissimo.
OTOPOD FEATURES
• NOAH® 3 Compatible - Symphony NOAH Module and Widex's Compass 5.0 Software
• Air and bone conduction and pre-recorded speech testing
• Talk forward and back feature for sound booth use
• Wireless operation up to 3 meters (10 ft.) from your PC
• Advanced auto test features for air and bone conduction testing
• Fully customizable testing with dozens of parameters
• Print full-page reports to your local or network printer
• Use on rechargeable AA battery power or with power supply
• Highly portable, quick setup and easy to use
• Includes our OTONet wireless adapter for communicating with the PC
• Programs a wide range of Widex hearing aids

For a fuller overview of the features and the data sheets and standards of the OTOPod, please point your browser to www.otovation.com.




Otovation OTOPod M2 with Bone and Inserts.

Wednesday, May 26, 2010

Adjusting to hearing aids, the “oh my god that’s what the world sounds like?” moment.

When first fitted with hearing aids, there is a period of adjustment that varies from Patient to Patient. During the initial period, what you hear will be very different from what you have been used to. During this period you should slowly build up your use of the aids from one or two hours a day to all day, over a period of 14 days.

After having an untreated hearing loss, usually for up to ten years before you sought treatment, the most difficult part of adjusting to hearing aids is learning to listen. This may seem a stupid statement; we listen all the time, don't we? That may be true, but there is a real difference between listening and actively hearing.

While wearing the hearing aids you will hear sounds that you may not have heard for several years, and sounds that you have been aware of will come through in a completely different manner. It will take your brain some time to become familiar with this sound information again. It usually takes about six to eight weeks for the average brain to get used to this new method of hearing.

However, this is only the beginning: your appreciation of the sounds you hear will continue to increase incrementally over a six to twelve month period. This varies from Patient to Patient; in certain circumstances, the initial adjustment period may take up to six months, depending on the age of the patient and the condition of the brain.

A good rule of thumb, though, is to allow roughly six to eight weeks to become completely comfortable with any hearing aid. The biggest surprise to most new users is how they suddenly perceive their own voice. Those who suffer from extreme loss of hearing often do not understand that they are supposed to hear their own voice when they are talking.

It may seem strange to these individuals to actually hear their own voice while speaking, and on some occasions it can actually be disturbing. For a first-time user of hearing aids, hearing their own voice clearly for the first time can be either a moment of illumination or a confusing experience.

Most commonly, a new hearing aid wearer may think they're shouting because their voice sounds louder than it normally has in the recent past. They may notice certain background and environmental noises that they were previously unaware of due to the level of their hearing loss. All of these new sounds may seem horrible to the person. The key is that a new user is counselled to understand that these sounds are the sound of life.

Further, these sounds will fade in importance as time progresses and the natural function of the brain begins to return. The brain now has to identify each sound, choose to ignore or listen to it, and focus on picking out the speech from the background noise. Another adjustment that needs to be made is not by the new user but by their family and acquaintances. They will now have to remember that they no longer need to speak as loudly in the presence of the hearing aid wearer.

There are many different aspects of getting a new hearing aid – some of them positive, and some of them negative. Those who are around someone with a new hearing aid need to remember to employ patience as everyone becomes acclimatized to the situation. Most hearing-impaired people can benefit from hearing aids, although there are some who may not get on so well.

Many factors, including the severity of the hearing loss, the length of time without auditory stimulation, attitude and age of the patient, and the patient’s ability to interpret what they hear will have an impact on how well the patient adapts to the aid and ultimately how  successful treatment with a hearing aid will be.

The most important point to remember is that getting a hearing aid does not make everything perfect, nor will it solve all your problems in every situation, but it definitely will improve your ability to communicate with other people by improving your ability to hear them.

Friday, May 21, 2010

Medical Record Card

The Widex medical record card will be released under the Widex Associate Programme soon. We have designed the card so that an Audiologist can use it to record all the information that is needed to meet regulatory requirements.

The layout of the card is such that it leads an audiologist through a consultation in a logical manner, right through to the conclusion of a demonstration. It will prompt the user to ask all the questions that are required and to follow all the steps.

The card can be used as a simple record and prompt tool, or, with some further training and explanation of the underlying strategies, as a powerful consultative tool. Using it in the second manner will allow you to connect with your Patient, making their level of understanding clearer to you. It will also give you clues as to the stage of the Patient journey they are at.

It will help you clarify their concerns and give them the answers to questions you may not otherwise have known were being asked. It will also allow you to meet and slowly overcome objections in a non-threatening manner.
No matter which way you use it, as a skim-through prompt and record or as a more in-depth consultative tool, we think it will make your practice easier.

Geoff

Friday, May 14, 2010

Client Orientated Scale of Improvement, COSI and its uses for your practice.

The COSI has started to become an integral part of best practice in the last few years. What exactly is the COSI and why is it good for your Practice? The COSI is simply a very well designed piece of A4 paper, or a form in one of many audiological software packages; it is present in NOAH and also in the Widex Compass software. The COSI is a place to record a Patient's problem situations or lifestyle needs and then trace their ongoing success with a prescribed hearing system. Many Dispensers shy away from the COSI because they are not sure where it will fit, either in their consultation or in follow-up, nor do they understand how it may be the most powerful tool they have ever used, on several different and important levels.

At its simplest level the COSI is used to record the difficulties that a Patient has in their day-to-day life. The crux is how you as a professional achieve this, and whether you use the golden opportunity it gives you to emotionally connect with the Patient. The COSI can allow you to connect with a Patient, gain acknowledgement of lifestyle impact from that Patient, manage a Patient's expectations and gain agreement on a set strategy for dealing with the core issues.

So, how is this simple form going to achieve all this? Simply put, you use it to ask pertinent questions and record the answers. I hear you say, "I ask about the problem areas already." You may, but do you ask in the right way, do you ask enough of the right questions and, most importantly, do you listen? The questions you ask need to be open-ended, and crucially you also have to remember that the answers you are being given may never have been voiced before. You may feel that you have heard them a thousand times, but this Patient may never have uttered them before. So do them the courtesy of listening.

Ask, "Where are the areas in which you have problems? The areas I would like to talk to you about are the ones that you feel cause you most problems in your daily life and relationships. We can record five areas, but really we will concentrate mostly on three of them." When they give you the areas, dig deeper: "So you say at the family table; who are you seated with, and what type of conversation are we talking about, animated or quiet? What exactly are the problems you suffer?"

Dig deep, keep asking those open-ended questions, questions that cannot simply be answered with a yes or a no. When you have found out all of the information and led the Patient on a journey through that situation, ask them, "How does that make you feel?" If you have done your job properly they will tell you. It might be emotional, it might even be hard to listen to, but it generally will be the truth, unvarnished and direct. There may be tears; if so it may be awkward for you. I generally put my hand on their arm for a moment in order to acknowledge the pain.

Go through each situation they have recorded and do the same thing; generally you will not have to ask the loaded question again, they will tell you without prompting. At the end, ask them which three problems they would most like to try to fix. When you have identified these areas, grade their current ability, ask them their expected ability and then, most importantly, agree a realistic final ability. Tell them directly what you feel you can do well and, more importantly, what you may not be able to do so well.

Why should you do this? There are several reasons: Practice efficacy, human capacity, commercial sense, but most importantly, it is the right thing to do for the Patient. You will make a strong emotional connection with your Patient; they will believe that you are interested in their problems and, more importantly, that you are interested in dealing with their problems. They will believe that you are an honest, compassionate and caring practitioner, and if for some reason they cannot do business with you, they will make sure to tell their friends to do business with you.

You will help them to truly recognise their difficulties and acknowledge the impact on their lifestyle. You will also manage their expectations openly and usually without Patient rancour. You will gain agreement for a course of action and in fact plan that action out. The Patient begins to talk about whens instead of ifs. You help your Patient acknowledge trauma and then lead them through it to a solution, and you allow them to openly express their feelings, perhaps for the first time. All of this from a well-designed piece of paper.

So for every type of dispenser, from the most hardened commercial operator to the most Patient-centred, there is a pay-off from the COSI. More importantly, there is a pay-off for their Patients.

Friday, May 7, 2010

Hearing the Sounds of Spring with Hearing Aids

Springtime is here! Just listen to the sounds of birds chirping, and children laughing and playing outside after months of being cooped up indoors.

What – you can’t hear them? Before spring turns to summer (and new ambient sounds will be filling your environment), make sure your hearing is up to scratch. Just as the nature around us is experiencing a rebirth and renewal of sorts, so should your ears.
Millions affected with hearing loss

If you are one of those people who can't hear the sounds of spring (or any other season, for that matter), you are far from alone. In fact, according to the National Institute on Deafness and Other Communication Disorders (NIDCD), approximately 17 percent of American adults, or 36 million people, report some degree of hearing loss, making hearing loss "among the leading public health concerns."

Here's a breakdown of the numbers, as reported by the Better Hearing Institute (BHI):

* 3 in 10 people over age 60 have hearing loss;
* 1 in 6 baby boomers (ages 41-59), or 14.6 percent, have a hearing problem;
* 1 in 14 Generation Xers (ages 29-40), or 7.4 percent, already have hearing loss;
* At least 1.4 million children (18 or younger) have hearing problems;
* It is estimated that 3 in 1,000 infants are born with serious to profound hearing loss.

As you can see, the numbers are staggering, but they don’t tell the whole story. The part that is missing here is that a considerable number of people with hearing loss who could benefit from hearing aids, don’t.

Studies show that 4 in 10 people with moderate to severe hearing loss use hearing aids, and only 1 in 10 people with mild impairment do. Research also demonstrates that, on average, people wait seven years before purchasing a hearing aid after learning of their hearing loss, while others never do. This means that millions of people walk around with untreated hearing loss, missing out on conversations, activities, interactions, and job opportunities.

With such clear and undisputed benefits, why do so many people forego treatment?

Among the primary reasons is the cost of hearing aids, which ranges, on average, from $1,800 to $5,000 per ear. While it is true that this price (not refunded by Medicare or most private insurers) is very high, it may still be affordable if you take the initial price and spread it out over the several years that an average hearing device will last. That works out at a very reasonable price of only around $3 a day, the cost of a cup of coffee.

Other reasons advanced in surveys for not wearing hearing aids are really moot points – cosmetic considerations and fear of change.

C'mon, get real: hearing aids make you look old but straining to hear everyone around you doesn't? Wearing hearing aids can actually make you appear younger to those around you, because you are actively participating in conversations and answering questions appropriately.
Hearing Aids & Clear benefits

Today's digital hearing aids are to thank for the increase in overall consumer satisfaction with hearing aids, as seen in recent surveys. Satisfaction and perceived benefit are at an all-time high thanks to innovative technologies and ease-of-use features, as well as a new focus on design.

The hearing aids you see people wearing today are a far cry from what you remember gramps wearing. They are now an accessory you can customize for your style and personality.

Some of the many benefits advanced digital hearing aids will provide you are:

* Improved hearing in background noise thanks to directional microphone technology
* Improved comfort due to technologies such as digital noise reduction and wind noise reduction while outdoors
* Automatic feedback suppression to reduce the unwanted whistling that hearing aid wearers often experience
* Addicted to your cellphone or MP3 player? Many of today's digital hearing aids can connect wirelessly to Bluetooth-enabled devices, allowing you to stay connected and to turn your hearing aids into personal headsets.

This list could go on for some time. The point is that digital technology has not only brought us high-definition TVs; it has also brought us high-definition hearing.

You may think we are being dramatic here, but it is well established that by cutting sufferers off from social interactions and limiting their employment possibilities, hearing loss can cause feelings of sadness, isolation, and even depression. It can also put them at risk of accidents and serious injury if, say, they don't hear the warning signs of fire alarms or oncoming traffic.

On the positive side, the same studies show that use of hearing aids has a beneficial effect not only on hearing per se, but also on mental and emotional well-being, as well as the overall quality of life.

BHI says that, based on research, hearing aid use can boost:

* Earning power
* Communication in relationships
* Intimacy and warmth in family relationships
* Emotional stability
* Sense of control over life events
* Perception of mental functioning
* Physical health

These are all compelling reasons why you should not wait any longer to get tested and fitted: Spring is here, and while amplification may not help you hear the flowers grow, it will help you enjoy all the other sounds around you.

Contributor
Carolyn Smaka, Au.D., Associate Editor, Healthy Hearing

This article found at Healthy Hearing website: http://www.healthyhearing.com/articles/46405-hearing-aids-springtime

Thursday, May 6, 2010

Digital Wireless Hearing Aids, Part 1: A Primer

Hearing Review - March 2010

Hearing Instrument Technology

Digital Wireless Hearing Aids, Part 1: A Primer

by Francis Kuk, PhD; Bryan Crose; Petri Korhonen, MSc; Thomas Kyhn; Martin Mørkebjerg, MSc; Mike Lind Rank, PhD; Preben Kidmose, PhD; Morten Holm Jensen, PhD; Søren Møllskov Larsen, MSc; and Michael Ungstrup, MSc

Taking an audio signal and transmitting/receiving it digitally is a multi-stage process, with each step influencing the quality of the transmitted sounds. This article provides a primer about the steps involved in the process for both near- and far-field transmission of signals.

Digital signal processing has opened up innovative ways in which an audio signal can be manipulated. This flexibility allows the development of algorithms to improve the sound quality of the audio signal and opens up new ways in which audio signals can be stored and transmitted. Whereas FM has been the standard of analog wireless transmission used in the hearing aid world, digital is fast becoming the new norm for wireless transmission. This paper takes a behind-the-scenes look at some of the basic components of a wireless digital hearing aid that transmits audio data so that readers may appreciate the complexity of such a system.

All wireless digital hearing aids share the same functional stages shown in Figure 1. All analog audio signals must be digitized first through a process called analog-to-digital conversion (ADC). The sampled data is then coded in a specific way (audio codec) for wireless transmission. An antenna (or transmitter) using radio waves (a form of electromagnetic (EM) waves) is used to transmit these signals, and a receiving antenna (or receiver) paired to the transmitter detects the transmitted signal. The signal is then decoded (audio codec) and sent to the digital hearing aid for processing. The processed signal then goes through a digital-to-analog conversion (DAC) process again before it is output through the hearing aid receiver.

FIGURE 1. Functional stages of a wireless digital hearing aid.

Each one of these steps can have significant impact on the final power consumption of the hearing aids, the delay of the transmitted sounds, and the overall sound quality of the signal (to be discussed in Part 2). Thus, to understand wireless digital hearing aids, it is necessary that one understands some principles of digital sampling, audio codec (coding and decoding), and transceiver (transmitter and receiver) technology.

Digital Sampling

Francis Kuk, PhD, is director of audiology, and Bryan Crose, BS, and Petri Korhonen, MSc, are research engineers at the Widex Office of Research in Clinical Amplification (ORCA), Lisle, Ill, a division of Widex Hearing Aid Co, Long Island City, NY. Thomas Kyhn, BS, Martin Mørkebjerg, MSc, Mike Lind Rank, PhD, Preben Kidmose, PhD, Morten Holm Jensen, PhD, Søren Møllskov Larsen, MSc, and Michael Ungstrup, MSc, are research engineers at Widex A/S in Lynge, Denmark.

The process in which a digital system takes a continuous signal (ie, analog), samples it, and quantizes the amplitude so that the signal is discrete in amplitude (ie, no longer continuous) is known as analog-to-digital conversion (ADC). The digitized signal is a sequence of data samples (strings of “1” and “0”) which represent the finite amplitudes of the audio signal over time.

Sampling frequency. The number of times at which we measure the amplitude of an analog signal in one second is the sampling frequency or sampling rate. To capture all the frequencies within a signal, the sampling frequency must be at least twice the highest frequency in that signal. For example, if an audio signal has frequencies up to 8000 Hz, a sampling frequency of 16,000 Hz or higher must be used to sample the audio. Figure 2 shows an example of a 1000 Hz sine wave that is sampled at two different frequencies: 1333 Hz and 2000 Hz. As can be seen, the sampling frequency of 1333 Hz incorrectly sampled the 1000 Hz sinusoid as a 333 Hz sinusoid (Figure 2a, below left). When the same signal is sampled at 2000 Hz, the original waveform is accurately reconstructed as a 1000 Hz sine wave (Figure 2b, below right).

FIGURE 2. The effect of sampling frequency on a 1000 Hz waveform. The sample on the left (A) was reconstructed using a sampling frequency of 1333 Hz, causing distortion, whereas the 2000 Hz sampling frequency produced an accurate rendering of the signal.
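To make the aliasing effect concrete, here is a minimal Python sketch (our own illustration, not taken from the article; we sample the "adequate" case at 8000 Hz rather than exactly 2000 Hz to stay comfortably above the Nyquist limit) that estimates the apparent frequency of a sampled 1000 Hz tone:

```python
import numpy as np

def apparent_frequency(f_signal, f_sample, duration=1.0):
    """Sample a sinusoid and return the frequency of the largest FFT peak."""
    n = int(f_sample * duration)
    t = np.arange(n) / f_sample
    samples = np.sin(2 * np.pi * f_signal * t)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(n, d=1.0 / f_sample)
    return freqs[np.argmax(spectrum)]

# A 1000 Hz tone sampled at 1333 Hz shows up as roughly 333 Hz (aliased) ...
print(apparent_frequency(1000, 1333))
# ... while a sampling rate well above 2 x 1000 Hz preserves it as 1000 Hz.
print(apparent_frequency(1000, 8000))
```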

Bit depth (or bit resolution). Digital systems use binary digits (0, 1) or bits to represent the amplitude of the sampled signal. The precision at which the amplitude variations within the audio signal can be reflected is determined by the bit resolution (or bit depth) of the digital processor. As the number of bits in a processor (or bit resolution) increases, finer amplitude differentiation becomes possible.

Figure 3 shows the difference in resolution when a sinusoid is sampled at 1 bit, 3 bits, and 5 bits. The blue line is the analog signal while the red line is the digital representation of the signal. The space between the blue and red lines (in yellow) is the quantization noise. Note that, as the number of bits increases, the resolution of the signal increases (becomes smoother) and the quantization noise decreases. In other words, the dynamic range (range of possible values between the most intense sound and the least intense sound) increases.

FIGURE 3. The effect of bit resolution on the output waveform (the blue line is the original sinusoid). The red line represents the digitized sinusoid. The difference between the red and blue lines (in yellow) is the quantization noise.

Perceptually, a signal that is processed with a high bit resolution will sound clearer, sharper, and cleaner than the same signal that is processed with a lower bit resolution. One shouldn’t think that more bits are needed to represent a more intense signal (or fewer bits for a soft sound); however, more bits are needed when loud and soft sounds are presented together (ie, fluctuations in level) and one is interested in preserving the relative amplitudes of these sounds (ie, dynamic range).
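As a hedged illustration of bit depth and quantization noise (our own sketch, with arbitrary signal parameters), the following Python snippet quantizes a sinusoid at several bit depths and reports the resulting signal-to-noise ratio:

```python
import numpy as np

def quantize(signal, bits):
    """Uniformly quantize a signal in [-1, 1] to the given bit depth."""
    step = 2.0 / (2 ** bits)
    return np.round(signal / step) * step

def quantization_snr_db(signal, bits):
    """Signal-to-quantization-noise ratio in dB."""
    noise = signal - quantize(signal, bits)
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

t = np.linspace(0, 1, 16000, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)

for bits in (3, 5, 8, 16):
    print(bits, "bits ->", round(quantization_snr_db(sine, bits), 1), "dB SNR")
# Roughly 6 dB of dynamic range is gained for every additional bit.
```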

Sampling trade-offs: current drain. When an analog signal is converted into a digital form, the amount of information (number of bits) or size of the digital signal is a product of the sampling frequency, the bit resolution, and the duration of the sampling. A digital processor that uses a high bit resolution sampling at a high frequency results in more bits than ones that use a lower bit resolution and/or a lower sampling frequency. This means that more of the nuances of the input signal are available. Perceptually, this corresponds to a less noisy signal with a better sound quality. Unfortunately, more bits also mean more computations, larger memory, and longer time to transmit. Ultimately, this demands a higher current drain. Thus, a constant challenge for engineers is to seek the highest sampling frequency and the greatest bit resolution without significantly increasing the current drain.

Digital representation. Digital signals are represented as a string of 1’s and 0’s. To ensure that the data can be used correctly, other information is added to the beginning of the data string. This is called a “header” or the “command data.” This includes information such as the sampling rate, the number of bits per sample, and the number of audio channels present.

Figure 4 shows an example of what an audio header may look like (along with the digital audio). In this case, the 12-bit header consists of three 4-bit words—indicating how many channels it contains (mono or stereo), the sampling rate, and the number of bits per sample. The hearing aid processor reads the header first before it processes the data string.

FIGURE 4. Digital audio with header information.
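Purely as an illustration of the idea (the field layout and code values below are invented for the example and do not correspond to any real hearing aid protocol), a 12-bit header of three 4-bit words can be packed and unpacked like this:

```python
def pack_header(channels, rate_code, bits_code):
    """Pack three 4-bit fields into a single 12-bit header value."""
    assert all(0 <= v < 16 for v in (channels, rate_code, bits_code))
    return (channels << 8) | (rate_code << 4) | bits_code

def unpack_header(header):
    """Recover the three 4-bit fields from a 12-bit header value."""
    return (header >> 8) & 0xF, (header >> 4) & 0xF, header & 0xF

# Hypothetical example: mono (1), sampling-rate code 3, bits-per-sample code 4.
header = pack_header(1, 3, 4)
print(bin(header), unpack_header(header))   # -> (1, 3, 4)
```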

Digital-to-analog conversion. To convert the processed digital string back into an analog signal (such as after processing by the hearing aid processor), a digital-to-analog converter (DAC) is needed. The DAC reads the instructions on the header and decodes the data at the same rate at which the audio is originally sampled. The output is low-pass filtered to smooth the transitions between voltages (the yellow shaded area in Figure 3). The signal is finally sent to an audio speaker (or receiver).

Audio Data Compression or Audio Codec

Rationale for data compression. When audio is converted from an analog to a digital format, the resulting size of the digital audio data can be quite large. For example, one minute of stereo audio recorded at a sampling frequency of 44,100 Hz (or samples per second) at a 16-bit resolution results in over 84 Mbits of information. This requires 10.5 Mbytes (MB) of storage (1 byte = 8 bits). That’s why an audio CD with a capacity of 783 Mbytes (MB) can hold only 74 minutes of songs.
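The arithmetic behind those figures is easy to verify; the short calculation below (ours) reproduces the numbers quoted above:

```python
sample_rate = 44_100          # samples per second
bit_depth = 16                # bits per sample
channels = 2                  # stereo
seconds = 60                  # one minute of audio

bits = sample_rate * bit_depth * channels * seconds
print(bits / 1e6, "Mbits")            # ~84.7 Mbits per minute
print(bits / 8 / 1e6, "MB")           # ~10.6 MB per minute of stereo audio
print(783 / (bits / 8 / 1e6), "min")  # ~74 minutes fit on a 783 MB CD
```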

To increase the number of songs that can be stored on the CD, one can either digitize the songs with a lower bit resolution, or sample them at a lower sampling frequency. Unfortunately, a lower bit resolution will decrease the amplitude resolution of the audio signal and increase the quantization noise. Decreasing the sampling frequency will limit the range of frequencies that are captured and lose some details of the songs. Thus, neither approach offers an acceptable solution to reduce the size of the data file and yet maintain the sound quality of the music.

Data compression (or data codec, short for “data coding and decoding”) allows digital data to be stored more efficiently, thus reducing the amount of physical memory required to store the data. Authors’ Note: Data compression should not be confused with amplitude compression, which is the compression or reduction of the dynamic range of an audio signal. Unless specifically intended, data compression generally does not reduce or alter the amplitude of the audio signal, but it does reduce the physical size (number of bits) that the audio signal occupies.

The transmission bit rate—or how much data (in number of bits) a transmitter is capable of sending in unit time—is a property of the transmitting channel. It depends on the available power supply, the criterion for acceptable sound quality of the transmitted signal, and also the integrity of the codec that is used to code and decode the transmitted signal. So, for example, while a higher bit rate usually means more data can be transmitted (and a better sound quality by inference), it does not guarantee sound quality because sound quality also depends on how well the codec system works.

How quickly an audio sample is transmitted (or downloaded) is important in the music world. The amount of downloading time is related to the size of the file and the bit rate of the transmitting channel. For example, a 4-minute song of 35 MB takes over 9 minutes to download using an average high-speed Internet connection (a bit rate of 512 kbit/s). If the same song is compressed using the MP3 encoding technique, it is approximately 4 MB in size and takes approximately 1 minute to download. Thus, another reason for data compression (or codec) is to reduce the size of the "load" (or file) so the same data can be transmitted faster within the limits of the transmission channel without losing its quality.
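Assuming the quoted connection speed refers to 512 kilobits per second, those download times follow directly from the file sizes; a quick check (ours):

```python
def download_minutes(size_mb, rate_kbps):
    """Minutes needed to transfer size_mb megabytes at rate_kbps kilobits per second."""
    return size_mb * 8 * 1000 / rate_kbps / 60

print(round(download_minutes(35, 512), 1))   # ~9.1 min for the uncompressed song
print(round(download_minutes(4, 512), 1))    # ~1.0 min for the ~4 MB mp3
```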

A digital wireless hearing aid that transmits audio from one hearing aid to the other, or from a TV/cell phone, etc, to the hearing aid, has the same (or more) constraints as a music download. Because of the need for acceptable current consumption, the bit rate of current wireless digital hearing aids is typically lower than the high-speed Internet. In order to transmit the online digital audio without any noticeable delays or artifacts, some intelligent means for reducing the size of the audio data file is critical. (Note: this is not a necessary consideration for transmission of parametric data, such as hearing aid gain settings, because of the relatively small size and non-redundant nature of such data.)

Audio coding. The various algorithms that are used to code and decode an audio signal are called audio codec. The choice of a codec is based on several factors, such as the maximum available transmission bit rate, the desired audio quality of the transmitted signal, the complexity of the wireless platform, and the ingenuity of the design engineers. These decisions affect the effectiveness of the codec.

One can code a signal intelligently so it has good sound quality but fewer bits (thus requiring a lower transmission bit rate). Conversely, if the codec is not “intelligent” or if the original signal does not have a good sound quality, no transmission system at any bit rate can improve the sound quality.

There are two components in the audio encoding process: 1) Audio coding which involves “packaging” of the audio signals to a smaller size, and 2) Channel coding which involves adding error correction codes to handle potential corrupted data during the transmission. Protocol data, such as header information for data exchange, is also included prior to transmission.

Approaches to audio coding: lossless vs lossy. The objective for audio coding is to reduce the size of the audio file without removing pertinent information. Luckily, audio signals have large amounts of redundant information. These redundancies may be eliminated without affecting the identity and quality of the signal. Audio coding takes advantage of this property to reduce the size of the audio files. The two common approaches—lossless and lossy—may be used alone or in combination (these approaches may be used with other proprietary approaches as well).

Lossless codec. The systems that take advantage of the informational redundancy in audio signals are called lossless systems. These systems use “redundancy prediction algorithms” to compile all the redundant or repeated information in the audio signal. They then store the audio more efficiently with fewer bits but no information is lost. For example, the number 454545454545 can be coded as a 12-digit number by the computer. But the same number can also be coded as 6(45) to be read as 45 repeated 6 times.

This is the process used when computers compress files into a ZIP file. It is used in applications where exact data retention—such as computer programs, spreadsheets, computer text, etc—is necessary.
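A toy run-length coder, illustrating the redundancy idea behind lossless coding with the "45" example above (this is only a sketch of the principle, not the algorithm any real codec uses), might look like this in Python:

```python
def rle_encode(text, unit=2):
    """Run-length encode repeats of fixed-size units, e.g. '454545' -> [(3, '45')]."""
    runs = []
    i = 0
    while i < len(text):
        chunk = text[i:i + unit]
        count = 1
        while text[i + count * unit:i + (count + 1) * unit] == chunk:
            count += 1
        runs.append((count, chunk))
        i += count * unit
    return runs

def rle_decode(runs):
    """Expand the (count, chunk) pairs back into the original string."""
    return "".join(chunk * count for count, chunk in runs)

encoded = rle_encode("454545454545")
print(encoded)                                  # [(6, '45')]: "45 repeated 6 times"
print(rle_decode(encoded) == "454545454545")    # lossless: True
```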

Lossy codec. The systems that take advantage of perceptual redundancy in audio coding are called lossy systems. They use “irrelevance algorithms” which apply existing knowledge of psychoacoustics to aid in eliminating sounds that are outside the normal perceptual limits of the human auditory system. For example, it is known that, when two sounds are presented simultaneously, the louder sound will exert a masking effect on the softer sound. The amount of masking depends on the closeness of the spectra of the two sounds. Because of masking effects, it is inconsequential perceptually if one does not code the softer sound while a louder one is present. Lossy audio coding algorithms are capable of very high data reduction, yet in these systems the output signal is not an exact replica of the input signal (even though they may be perceptually identical).

This type of codec is commonly used in mp3 technology. JPEG (Joint Photographic Experts Group) compression is another example of lossy data compression used in the visual domain.

Channel coding. One important consideration when sending any type of data (analog or digital) is the potential of the introduction of errors into the signal from electromagnetic interference during the transmission process. This is especially pertinent for wireless systems. Consequently, efforts must be made to ensure that the transmitted data are received correctly.

Channel coding algorithms provide a method to handle transmission errors. To achieve that objective, channel coding algorithms specify ways to check the accuracy of the received data. They also include additional codes that specify how errors can be handled.

Because there are no required standards on how these errors must be handled, channel coding algorithms vary widely among manufacturers. Some devices simply ignore and drop the data that are in error; some wait for the correct data to be sent; and others can correct the data that are in error. The various approaches can affect the robustness of the transmission and the sound quality of the transmitted signal.
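As a minimal, hypothetical illustration of error detection (real channel codecs are far more elaborate and, as noted, vary by manufacturer), the sketch below appends an even-parity bit to each data word and flags words whose parity no longer checks out after transmission:

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """True if the received word (data + parity bit) still has even parity."""
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])    # -> [1, 0, 1, 1, 1]
print(parity_ok(word))             # True: no error detected

corrupted = word.copy()
corrupted[2] ^= 1                  # flip one bit "in transit"
print(parity_ok(corrupted))        # False: single-bit error detected
```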

Before sending the encoded digital audio (and the error correction codes), the encoder generates a header to the data following the protocol for wireless transmission. In this case, the header includes the address of the receiver, command data, and a data-type identification code that specifies which data are instructions, which are audio data, and which are error-correction codes. In addition, it also includes information on how to make sure that the transmitted data are correct; and how to handle “errors” if and when they are encountered.

Audio decoding. When a coded audio signal is received, it needs to be decoded so the original information can be retrieved. The receiver first examines the header information from the received coded signals so it knows how the received data should be handled. The received data then go through the channel decoder to ensure that the transmitted data are correct. Any transmission errors are handled at this channel decoding stage according to the error-correction codes of the channel codec. The channel-decoded signal then feeds through the audio decoder which unpacks the compressed digital audio data to restore the “original” digital audio.

“Bit-true” vs “non bit-true” decoding. There are two approaches to audio codec: bit-true and non bit-true. A bit-true codec means the decoder knows the encoder so it can decode the audio faithfully with the least current drain. Because it knows how the data are coded, it is prepared to handle any “errors” that it encounters during the transmission. A bit-true system is a dedicated system.

A non bit-true codec is an open system that allows multiple manufacturers to produce files that can be decoded by the same decoder. An example is the codec used in mp3 players. The advantage of a non bit-true system is its flexibility, adaptability, and ease of implementation by various manufacturers; it can save development time and resources. A potential problem is that the quality is not always ensured because different implementations are allowed. And because the decoder does not know the encoder, errors that are introduced during the transmission may not be corrected effectively and/or efficiently. This leads to drop outs and increased noise, and it may degrade the quality of the transmitted audio.

Wireless Transmission

Why wireless? Wireless allows the transfer of information (or audio data) over distance (from less than a meter to over thousands of miles) without the use of any wires or cables. Although wireless opens up the transmitted data to potential interference by other signals, the convenience that it offers and the possibility that data can be transferred over a long distance (such as a satellite) make it a desirable tool for data transmission.

The challenge for engineers is to minimize the potential for transmission errors (from interference) while keeping reasonable power consumption. Today, wireless transmission technology is also applied to hearing aids to bring about improvements in communication performance never before possible.

Vehicles for transmission: Electromagnetic (EM) waves. Wireless transmission is achieved through the use of electromagnetic (EM) waves. This is a type of transverse wave which has both an electric component and a magnetic component. EM waves by themselves are not audible unless they are converted to a sound wave (a longitudinal wave). One property of an EM wave is its ease of being modified by another signal. This makes EM waves excellent carriers of data.

Electromagnetic waves cover a wide range of frequencies. The choice of carrier frequency depends on how much information needs to be sent, how much power is available, the transmission distance, how many other devices are using that frequency, local laws and regulations, and terrestrial factors such as mountains or buildings that may be in the path of the transmission. Higher carrier frequencies can carry more information than lower frequency carriers. On the other hand, lower frequencies require less power for transmission.

The spectra of electromagnetic waves that are used today can be divided into different categories. Visible light is one form of electromagnetic waves and it is marked in the center of Figure 5. On the left side of the spectrum are the frequencies for radio transmission (or radio waves). These waves have a longer wavelength (and thus lower frequencies) than light and are commonly used for most types of wireless communication. One can see that most AM and FM radios use frequencies between the 10^6 and 10^8 Hz regions.

FIGURE 5. The electromagnetic (EM) spectra, with visible light near the center and most of our transmission carrier frequencies in the lower/longer frequency regions.

Far-field vs near-field transmission. Traditional wireless transmission systems use an antenna to transmit an EM wave through the air. The farther the wave is from the transmitter, the weaker its strength. However, the rate of decrease of the EM wave amplitude depends on how far the signal propagates.

An intended distance that is much greater than the wavelength of the carrier is classified as a far-field; in contrast, a distance much shorter than the wavelength is called a near-field. Thus, the distinction between a far- and a near-field not only depends on the physical distance, but also on the frequency of the carrier. In a far-field, both the electric and magnetic (or inductive) field strengths decrease with distance at a rate of 1/r. On the other hand, in a near-field, the magnetic field strength is dominated by a component which decreases at a rate of 1/r^3, as shown in Figure 6.

FIGURE 6. Difference between far-field and near-field attenuation of the magnetic field.

The difference in the rate of decrease between the two components suggests that they may be utilized for different applications. Most wireless technologies today use both the electric and magnetic fields of EM waves for far-field transmission. In the area of hearing aids and assistive devices, this usually suggests a distance of 10 to 50 m. Because of the greater distance of far-field transmission, interference from and on other transmitted signals is likely to occur depending on the relative levels of the transmitted signals. For transmission over a short distance (less than 1 m, or near-field), the magnetic or inductive component is used instead because it retains its signal strength over the short distance. In addition to a lower current consumption, the shorter distance would mean less interference from and on other transmitted signals. This results in a greater security of the transmitted signals and immunity from other transmitted signals.
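A rough numerical illustration of the 1/r versus 1/r^3 behaviour (an idealized sketch that ignores antenna design and propagation details) shows why the inductive link fades so quickly with distance:

```python
def far_field_relative(r, r0=1.0):
    """Far-field strength relative to the value at r0, falling off as 1/r."""
    return r0 / r

def near_field_relative(r, r0=1.0):
    """Near-field (inductive) strength relative to r0, dominated by a 1/r^3 term."""
    return (r0 / r) ** 3

for r in (1.0, 2.0, 10.0):
    print(f"r = {r:>4} m   far-field: {far_field_relative(r):.3f}   "
          f"near-field: {near_field_relative(r):.6f}")
# Doubling the distance costs about 6 dB in the far field but about 18 dB in the
# near field, which is why inductive links stay private at short range.
```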

Bluetooth: A common far-field communication protocol. Bluetooth is a commonly used radio frequency (RF) wireless standard in many communication devices today. It is a wireless protocol for exchanging data up to 100 meters (thus, far-field) and uses the EM wave to carry data at a carrier frequency of 2.4 GHz with a bandwidth of 1 MHz (79 different channels).

Bluetooth is described as a protocol because it offers a predefined method of exchanging data between multiple devices. This means that two devices connected with a Bluetooth connection (ie, Bluetooth compatible) must meet certain requirements before they can exchange data. This qualifies it as an open or non bit-true system. The openness and connectivity are major reasons for its proliferated use in consumer electronics today.

Historically, Bluetooth was developed when computer wireless networks (Wi-Fi) became available. Wireless networks also use a 2.4 GHz carrier frequency band, but have a channel bandwidth of 22 MHz. This allows wireless networks to send more information over a farther distance, but at the expense of high power consumption. By restricting the range of the transmission, engineers are able to reduce the power consumption of Bluetooth. This enables devices smaller than notebook computers (eg, cell phones, PDAs, etc) to also utilize Bluetooth.

However, the power consumption of Bluetooth is still not low enough to permit its integration into a hearing aid. A typical Bluetooth chip requires a current drain from 45 milliAmps (mA) to as high as 80 mA for operation. If a Bluetooth chip were embedded in a hearing aid that uses a #10 battery (with a capacity of 80 mAh), the battery would only last 1 to 2 hours before it expires!
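That battery estimate is simple to check from the figures quoted above:

```python
battery_capacity_mah = 80          # a typical #10 hearing aid battery

for drain_ma in (45, 80):          # the quoted Bluetooth current drain range
    hours = battery_capacity_mah / drain_ma
    print(f"{drain_ma} mA drain -> ~{hours:.1f} h of battery life")
# ~1.0 to ~1.8 hours, in line with the 1 to 2 hours mentioned above.
```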

Another problem with Bluetooth is the audio delay inherent in the standard Bluetooth audio profile. In creating a standard that is adaptable to many different devices, Bluetooth has to satisfy many procedures to ensure a proper communication link between devices. This delays the immediate transmission of signals. For example, a delay of up to 150 ms may be noted between the direct sound and the transmitted sound from a TV using Bluetooth. When a delayed audio signal is mixed with the direct signal, a poorer sound quality—ranging from a “metallic” sound to an “echo”—may be perceived depending on the amount of delay. Excessive delay, such as 150 ms, could lead to a dis-synchrony between the visual and audio signals. Figure 7 shows the perceptual artifacts that may result from mixing direct sounds with transmitted sounds at various delays.

FIGURE 7. The consequences of direct and delayed transmitted signals on the perception of sound. Delays in excess of 10 ms become problematic.

Near-field magnetic induction (NFMI). The limited capacity of today’s hearing aid batteries makes it impractical to use Bluetooth exclusively for far-field transmission to the hearing aids.

The rapid rate of attenuation of the magnetic field (shown in Figure 6) would suggest high signal strength within a close proximity and low signal strength beyond. This ensures accurate transmission of data between intended devices (such as hearing aids). The rapid decay characteristics mean that its signal strength will not be sufficient to interfere with other near-field devices in the environment, nor will it be interfered with by other unintended near-field devices. A shorter range of transmission will also require a lower carrier frequency, reducing the power consumption.

This makes magnetic or inductive EM wave an ideal technology to be integrated within hearing aids for near-field or short-range communication. On the other hand, the orientation of the antennae (between the transmitter and the receiver) may affect the sensitivity of the reception. A remote control and wireless CROS hearing aids are prime examples of this form of technology.

Streamers and relay: A solution that incorporates inductive and Bluetooth. Using an inductive signal for wireless communication between hearing aids makes sense because of the security and low power requirement. However, connecting to external electronic devices (such as cell phone or TV) would become impossible. A solution which takes advantage of inductive technology and Bluetooth connectivity (or other far-field protocols) is needed to result in a practical solution.

This can be achieved using an external device (outside the hearing aid) which houses and uses both forms of wireless technologies. This device, which includes Bluetooth (and other far-field protocols) technology, can be larger than a hearing aid and accommodate a larger battery than standard hearing aid batteries. Thus, it connects with external devices (such as cell phones, etc) that are Bluetooth compatible.

The device should also have near-field magnetic (inductive) technology to communicate with the wearer’s hearing aids when it is placed close to the hearing aids. Thus, a Bluetooth signal could be received by this device then re-transmitted from this device to the hearing aid. This is the basis of the “streamers” used in many wireless hearing aids today.

FIGURE 8. A relay device that receives a Bluetooth signal and re-transmits it to the hearing aid on the other end.

Signal Transmission

Analog transmission. EM waves are used to carry the audio information so they may be transmitted wirelessly over a distance. This is accomplished by a process called modulation—where the EM wave (the carrier) is altered in a specific way (ie, modulated) to carry the desired signal.

There are two common analog modulation schemes: amplitude modulation (AM) and frequency modulation (FM). The signal that modulates the carrier is an audio signal (eg, speech or music). The same mechanism of modulation may be used in both far-field and near-field transmissions.

For amplitude modulation (AM), the amplitude of the carrier frequency is altered (or modulated) according to the amplitude of the signal that it is carrying. In Figure 9, observe how the amplitude-modulated signal shows the same amplitude change over time as the sine wave that is used to modulate the carrier. The valleys of the sine wave reduce the amplitude of the carrier waveform, and the peaks of the signal increase the amplitude of the carrier waveform.

For frequency modulation (FM), the frequency of the carrier is modulated according to the amplitude of the signal that is sent. Figure 9 displays how the frequency-modulated signal shows the amplitude change of the sine wave by altering the closeness (or frequency) of the carrier waveform. Waveforms that are more spaced apart (lower frequency) represent the valleys of the sine wave, and waveforms that are closer together (higher frequency) represent the peaks of the sine wave. Both AM and FM receivers demodulate the received signal and reconstruct the audio signal based on how the AM or FM signal is modulated.

FIGURE 9. Analog modulation schemes—amplitude modulation (AM) and frequency modulation (FM).
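For readers who want to see the two schemes side by side, the short Python sketch below generates an AM and an FM waveform from the same low-frequency message tone (the carrier, message, and deviation values are arbitrary choices for illustration):

```python
import numpy as np

fs = 100_000                         # simulation sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)       # 10 ms of signal
fc, fm = 10_000, 500                 # carrier and message frequencies (Hz)
message = np.sin(2 * np.pi * fm * t)

# AM: the carrier amplitude follows the message.
am = (1 + 0.5 * message) * np.cos(2 * np.pi * fc * t)

# FM: the instantaneous carrier frequency follows the message.
freq_deviation = 2_000               # Hz of deviation at the message peaks
phase = 2 * np.pi * fc * t + 2 * np.pi * freq_deviation * np.cumsum(message) / fs
fm_wave = np.cos(phase)

print(am.shape, fm_wave.shape)       # two 1000-sample waveforms, ready to plot
```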

The Federal Communications Commission (FCC) regulates the use of the radio portion of the EM spectrum in the United States. In the field of amplification, the three frequency bands that are commonly used for FM systems include: 169-176 MHz (H Band), 180-187 MHz (J Band), and 216-217 MHz (N Band). The frequency band that is used in near-field transmission (and in remote) is typically around 10-15 MHz (although earlier systems still use a lower carrier frequency). The frequency band that is used for Bluetooth is the 2.4-2.5 GHz band. This frequency band is classified as one of several “Industrial, Scientific, and Medical” (ISM) bands.

Digital transmission. The previous discussion relates the use of an analog audio signal to modulate a high frequency EM carrier. In the process, the analog signal is being transmitted. When the signal that needs to be transmitted is digital, the analog modulation scheme will not be appropriate. In addition to the fact that the signal itself is digital (thus requiring digital transmission), there are other benefits of digital transmission.

Any form of signal transmission can be affected or contaminated by EM interference or noise. This is especially the case when the transmitted signal is farther away from the source because of the decrease in signal level (see Figure 6) and the constant noise level from other EM interferences (ie, the “signal-to-noise” level decreases). Thus sound quality (and even speech intelligibility) decreases as the distance increases.

On the other hand, a digital signal (“1” and “0”) is not as easily affected by the interfering EM noise. As long as the magnitude of the interfering noise does not change the value of the bit (from “1” to “0” and vice versa), the signal keeps its identity. Thus, digital transmission is more resistant to EM interference than analog transmission.

FIGURE 10. Hypothetical sound quality as a function of interference between analog and digital transmissions.

This suggests that the sound quality of a signal that is digitally transmitted may remain more natural (and less noisy) than an analog signal until a much higher level of EM interference. Figure 10 shows the hypothetical sound quality difference between an analog transmission and a digital transmission as a function of distance and/or interference.

How is digital transmission accomplished? In digital transmission, a technique called “Frequency Shift Keying” (FSK) is used. This modulation scheme uses two different frequencies around the carrier frequency to represent the “1” and “0” used in the binary representation. For example, a “1” may be assigned the frequency 10.65 MHz and a “0” the frequency 10.55 MHz for a carrier at 10.6 MHz. Each time a “1” needs to be sent, the transmitter will send out a 10.65 MHz signal; each time a “0” needs to be sent, a signal at 10.55 MHz will be sent.

Like analog modulation, when the transmitted signal (or pulse train) is received by the receiver, it needs to be demodulated into “1” and “0” to recreate the digital sequence. This is done by the demodulator at the receiver end. Frequencies around the 10.55 MHz will be identified as a “0,” and those around 10.65 MHz a “1.” Typically, two points per bit are sampled to estimate the bit identity.
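A minimal FSK sketch (the two tone frequencies are taken from the example above; the bit duration, sample rate, and correlation-based detector are our own simplifications) shows modulation and demodulation of a short bit sequence:

```python
import numpy as np

fs = 100e6                 # simulation sample rate (Hz)
bit_duration = 10e-6       # 10 microseconds per bit (assumed for the example)
f0, f1 = 10.55e6, 10.65e6  # frequencies representing "0" and "1"

def fsk_modulate(bits):
    """Concatenate a tone burst at f0 or f1 for each bit."""
    n = round(fs * bit_duration)
    t = np.arange(n) / fs
    tones = {0: np.sin(2 * np.pi * f0 * t), 1: np.sin(2 * np.pi * f1 * t)}
    return np.concatenate([tones[b] for b in bits])

def fsk_demodulate(signal):
    """Decide each bit by correlating its window against the two candidate tones."""
    n = round(fs * bit_duration)
    t = np.arange(n) / fs
    ref0, ref1 = np.sin(2 * np.pi * f0 * t), np.sin(2 * np.pi * f1 * t)
    bits = []
    for start in range(0, len(signal), n):
        window = signal[start:start + n]
        bits.append(int(abs(np.dot(window, ref1)) > abs(np.dot(window, ref0))))
    return bits

tx = [1, 0, 1, 1, 0, 0, 1]
print(fsk_demodulate(fsk_modulate(tx)) == tx)   # True: bit sequence recovered
```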

While this approach is sufficient for typical operations, errors (identification of a "1" as "0" and vice versa) could still occur under adverse conditions (such as intense EM interference from another source). Thus, an important consideration in a wireless antenna or receiver design is how to handle the corrupted transmitted signal so the retrieved signal is as accurate as possible to the original signal.

Summary

The process of taking an audio signal and transmitting/receiving it digitally is a multi-stage process, each step of which can affect the quality of the transmitted sounds. The following sequence summarizes all the steps involved in the process (for both near- and far-field transmissions):

1) The audio signal (eg, from TV) is digitized through an analog-to-digital conversion process into a digital form (ADC).
2) The digital signal goes through an audio encoding process to reduce its size (audio coding).
3) The encoded signal goes through channel coding to include error correction codes (channel coding).
4) Header information is included.
5) The coded signal is modulated through FSK (or other techniques) and prepared for broadcast (modulation).
6) The modulated signal is broadcast through the antenna (transmission by antenna).
7) The modulated signal is received by the antenna (reception by antenna).
8) The signal is demodulated to retrieve the digital codes (demodulation).
9) The header information is read.
10) The digital codes go through channel decoding to correct for errors (channel decoding).
11) The signal goes through audio decoding to "decompress" it and return it to as much of its original form as possible (audio decoding).
12) The decoded digital signal can be processed by the hearing aid processor (DSP processing).
13) The processed signal leaves the hearing aid through a digital-to-analog converter to return to its analog form (DAC).

Correspondence can be addressed to HR or Francis Kuk, PhD, at fkuk@aol.com.