The use of evidence-based principles (EBP) has gained wide support in recent years. EBP relies on science to drive clinical office/practice decisions. Boiled down to its essence, EBP takes clinical research from the laboratory into the real-world study of patient benefit and satisfaction as a whole. Specific to the field of hearing health care, it asks the question, "What does the scientific evidence reveal about hearing aid testing and fitting practices relative to their ability to create satisfied customers?" and it assesses the level of scientific support for implementing clinical procedures in the real world.
The need for using evidence-based principles (or, at the very least, advocacy for a basic working knowledge of EBP) in hearing care is not new; it has been endorsed through the years by many people. For example, the first seminar that HR reported on this subject was presented by J. Gail Neely, MD, of Washington University School of Medicine at an American Auditory Society meeting over 6 years ago (see June 2000 HR, p 67). Neely, who worked under Alvan Feinstein, MD (one of the fathers of modern epidemiology), gave an eye-opening lecture on the need for "progressive scholarship" on the part of hearing care professionals. He detailed the five levels of study methodology in evidence-based medicine, from most dependable to least dependable: 1) randomized clinical trials with low false-negative and false-positive rates; 2) randomized trials with higher false-negative and false-positive rates; 3) non-randomized trials with concurrent cohort comparisons; 4) non-randomized trials with historical comparisons; and 5) case studies without controls. (Note: A sixth level of evidence often cited in EBP is expert opinion.) Progressive scholarship, as Neely defined it, requires that professionals understand EBP concepts and hone their ability to assess the level of evidence in any particular study; recognize that some things learned in professional education are either obsolete or later found to be completely wrong; acknowledge that continuing education seminars, while useful, are not especially effective in changing practice procedures; and realize that offices/practices that do not adhere to progressive scholarship are doomed to fall behind. In other words, you have to be an avid reader and rigorously maintain your skepticism, while keeping an open and discerning mind about things that might improve your client care.
Robyn Cox, PhD, and Michael Valente, PhD, recently guest-edited two landmark special issues of the Journal of the American Academy of Audiology (July/August and November/December 2005) that focus on EBP and are must-reads. When the EBP approach was used to answer the question, "What do we really know about hearing aid testing and fitting?" the answers from members of the Independent Hearing Aid Fitting Forum (IHAFF), who served as authors in the special editions, were somewhat surprising. As Cox put it, "The overriding observation was that most of the research has many limitations, and the evidence that exists pointing to effective strategies in hearing rehabilitation tends to be relatively weak." Indeed, Killion & Gudmundsen showed that measurement of unaided speech recognition in quiet was not correlated with the efficacy of the hearing aid fitting. Additionally, out of about 300 articles identified as candidates by Killion & Gudmundsen, only five met the review criteria for that edition of JAAA! (Also see Brian Taylor's discussion of EBP on p 38, and Brent Edwards' article, "What Outsiders Tell Us About the Hearing Industry," in the March edition of HR, pgs 88-92.)
The take-home message is that EBP can go a long way toward helping us reassess current fitting procedures, refine research methodology, and, ultimately, develop a dynamic, scientifically supported Best Practices protocol for administering care to individuals with hearing impairment. As Sergei Kochkin, PhD, demonstrated in his article on value and hearing aids (February 2003 HR, pgs 12-26), once everyone gets on the same page using such a best-practices protocol, customer satisfaction can increase dramatically.