How can professionals make informed decisions, based on scientific facts, for their patients and practices?

Ruth Bentler, PhD, is a professor at the University of Iowa; Cheryl Eiler, MA, is a research audiologist at the Starkey Hearing Research Center in Berkeley, Calif; Benjamin Hornsby, PhD, is an assistant professor at Vanderbilt Bill Wilkerson Center in Nashville, Tenn; Sheila T. Moodie, MClSc, is a research audiologist at the National Centre for Audiology in London, Ontario, Canada; Laurel Olson, MA, is manager of clinical product research at Starkey Laboratories, Eden Prairie, Minn; and Michael Valente, PhD, is professor and director of adult audiology at Washington University School of Medicine in St Louis.

The culture of our profession is evolving. All stakeholders must share the same goal: improved patient care, based on management steeped in strong supporting scientific evidence.

Audiologists and other hearing care professionals are expected to assess the individual needs of their patients and—using the best available evidence coupled with clinical experience—prescribe the best technology for them. However, the strength of evidence (from a traditional evidence-based practice perspective) supporting the use of differing types of hearing aid technologies (eg, methods of compression, noise reduction, directional processing, and feedback-reduction strategies) is often weak.

Likewise, new technologies are being introduced at a rapid pace, and the time between the introduction of new technology and the peer-reviewed publication of clinical research quantifying its benefit—or lack thereof—can be years. As a result, the utility of the research in helping clinicians make informed decisions is significantly compromised.

Additionally, consensus on how to verify the performance and benefits of a new technology is difficult to achieve, and the efficacy (laboratory performance/benefit) or effectiveness (real-world performance/benefit) of these new technologies can vary based on the measures used during the assessment. Given these constraints, it may be difficult for clinicians to draw strong conclusions about the potential benefits of current and new technologies for their patients.

Mountains (or Molehills?) of Evidence

Evidence-based practice (EBP) has its roots in medicine. EBP is defined as:

“…the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients…[by] integrating individual clinical expertise with the best available external clinical evidence from systematic research.”1

The literature is full of examples from medicine in which practices deemed “best practice” at the time turned out to be wrong or even harmful once evaluated with scientific rigor: the ancient Greek practice of bloodletting for a variety of ailments, including hypertension2; the 19th-century use of opium to treat diabetes; and the 1940s practice of “oxygenating” premature infants to prevent retrolental fibroplasia, a condition later found to be caused, not cured, by the treatment. The list goes on.

The application of EBP principles in hearing care has taken root in the past decade. Academics, clinicians, and manufacturers all have important roles in the use of these principles for sound decision-making. Understanding our roles, and roadblocks, is important for the successful movement of knowledge into everyday practice.

Concurrently, the audiological field has witnessed an explosion in the availability of both technology and published literature. Clinicians have access to new diagnostic tools, measurement tools, processing schemes, and even style designs every few months. Research publications have also become increasingly abundant: the number of papers published in the main audiology journals has grown from about 200 per year in 1960 to roughly 1,700 per year.3 A clinician would need to read nearly five papers a day, every day of the year, just to keep up (1,700 ÷ 365 ≈ 4.7). If the hearing science literature is added, the total rises to about 4,350 papers per year, or roughly 12 papers daily!

The task is daunting. Yet, in this era of increased accountability (third-party payers, legislation, and ethics), the clinician is often forced to make clinical management decisions without good supporting data being available. Further, it is often unclear to the clinician whether data provided by industry constitute evidence or marketing. All of these factors present significant roadblocks to the use of EBP principles.

EBP, EBD, and Levels of Evidence

First, we must distinguish between EBP and evidence-based design (EBD). For research and development (R&D) purposes in industry, the principles of EBP hold, but with several caveats or hurdles. EBD is an evidence-based approach to hearing instrument design that encourages the development of features with proven patient benefit. That proven patient benefit often comes from a slightly different model of evidence gathering. Internal decision-making leads to new algorithm development, new microphone designs, or even new case styling.

The key word here is internal. For any manufacturer to move forward and maintain or improve market share, it must develop its own internal criteria for an “evidence base.” While that evidence is often generated internally, any advertisement, brochure, or other promotional claim must follow certain rules of content. That is, industry may initiate and utilize internal research in the development of new products and features in a proprietary manner. However, if the manufacturer plans to make claims relative to a new product or algorithm, the Guidelines for Hearing Aid Manufacturers for Substantiation of Performance Claims4 developed by the Hearing Industries Association (HIA) provide the protocol for obtaining the scientific data to substantiate those claims. HIA developed these guidelines in a consensus document that complied with the substantiation requirements of the Food and Drug Administration and the Federal Trade Commission. The guidelines also provide a basis for hearing aid manufacturers to privately resolve disputes about promotional claims.

In summary, research is carried out at various levels within the industry. Some efforts are proprietary; other efforts are intended to provide substantiation of advertising claims. Whether or not these internal efforts are used as evidence in the clinician’s critical evaluation of available data depends upon the level of the evidence. Although several hierarchies have been proposed, most follow along these lines:

Level 1a: Well-designed meta-analysis

Level 1b: Well-designed randomized controlled trial

Level 2a: Well-designed controlled study without randomization

Level 2b: Well-designed quasi-experimental study

Level 3: Well-designed nonexperimental study (eg, case study)

Level 4: Expert opinion, consensus statement, etc.
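For readers who think in code, the hierarchy can be pictured as an ordered scale. The sketch below is purely illustrative and is not part of any summit material or published grading tool; the EvidenceLevel names and the stronger() helper are hypothetical, chosen only to show how sources might be compared once a level has been assigned.

```python
from enum import IntEnum

class EvidenceLevel(IntEnum):
    """Ordered evidence hierarchy; a lower value means stronger evidence."""
    META_ANALYSIS = 1       # Level 1a: well-designed meta-analysis
    RANDOMIZED_TRIAL = 2    # Level 1b: well-designed randomized controlled trial
    CONTROLLED_STUDY = 3    # Level 2a: controlled study without randomization
    QUASI_EXPERIMENTAL = 4  # Level 2b: quasi-experimental study
    NONEXPERIMENTAL = 5     # Level 3: nonexperimental study (eg, case study)
    EXPERT_OPINION = 6      # Level 4: expert opinion, consensus statement, etc

def stronger(a: EvidenceLevel, b: EvidenceLevel) -> EvidenceLevel:
    """Return whichever source sits higher on the hierarchy."""
    return min(a, b)

# Example: a randomized trial outranks a consensus statement.
assert stronger(EvidenceLevel.RANDOMIZED_TRIAL,
                EvidenceLevel.EXPERT_OPINION) == EvidenceLevel.RANDOMIZED_TRIAL
```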


Information provided in a brochure or instructional booklet cannot be construed as evidence according to this hierarchy. Information provided in trade magazines may or may not be construed as evidence. That is because, in addition to the level delineation, the quality of the research must be considered in determining its value for decision-making: appropriate design, adequate sample size, blinding, and sound statistical treatment all contribute to a designation of high quality.

Inherent in this is the assumption that the study has undergone the rigorous scrutiny of the peer-review process. Trade magazine “data,” however carefully gathered and presented, typically has not undergone the scrutiny of the scientific community and cannot be considered to carry the same weight as evidence from peer-reviewed journals. If industry-generated research (for EBD) is to be included in the critical review (for EBP), that research must also be available to the practitioner in a peer-reviewed format.

Summit Recommendations

The Key Issues

In an effort to organize the discussion and recommendations of the summit participants, four issues/questions were defined:

  1. How does our profession speed up the validation of new evidence-based advances in hearing aid technology so that clinicians can decide whether or not to put them into practice?
  2. How do we disseminate information in a format that clinicians can look at and clearly interpret?
  3. How do we change clinical practice to be evidence-based?
  4. How do we ensure that students understand that what they do in the clinic should be based on the best evidence-based research available?
The participants’ recommendations on each of these issues follow.

  1. Good research can be carried out in many settings, including clinics, university laboratories, and industry. However, good research takes considerable time and money. To continue moving forward on both EBP and EBD in the hearing care field, increased investment in collaborative university-industry relationships and research consortia could be beneficial. Potential barriers include how best to involve university researchers in the R&D process while ensuring that:
    • Protection of intellectual property for both parties is preserved;
    • The R&D product development time line is achieved; and
    • University ethical mandates are followed.

    Another barrier to improving the communication of new findings is that peer-reviewed journal publication takes months, even years, to complete. That does not help the “speed up the process so clinicians can decide” problem related to the implementation of EBP in audiologic management. Both academic and industry settings are capable of providing the evidence. While large clinical trials may yield higher levels of evidence, multiple smaller studies can be pooled toward the same result. Considering “effect size” across studies provides a larger data set in a shorter time frame. (Effect size refers to the magnitude of a result; a larger effect size is more likely to indicate an important and clinically significant finding. Combined with confidence intervals, effect sizes give clinicians useful information for clinical decision-making. A minimal computational sketch of this pooling idea appears after this list.)

    Funding this research must be a cooperative effort. Academics interested in the efficacy/effectiveness of new technologies must derive their operating costs from industry, private foundations, and limited federal sources. Industry-based researchers may have the R&D budgets, but not the time or resources to undertake the necessary research on a broad enough scale. Collaborative efforts across academic, industry, and private practice settings can ensure the availability of the necessary evidence in a more timely manner.

  2. Disseminating “best practice” information in a format that clinicians can clearly interpret is a challenge. There is sufficient evidence to suggest that most clinicians do not engage in EBP, but rather rely on tradition, recent experience, and what they learned years ago in graduate school.5 In one survey of speech and hearing professionals,6 the reasons cited for this were insufficient time in the workplace (70%) and lack of available evidence (48%). At the same time, the respondents reported being “very likely” to use colleagues (68%), continuing education opportunities (50%), peer-reviewed literature (37%), and professional organization policy statements (25%) to obtain the necessary evidence for clinical decision-making.

    There are several ways to provide evidence in a more comprehensive manner, rather than expecting that clinicians will have the time and library resources to review evidence:

    • Rating evidence levels. Structured abstracts that include “Level of Evidence” ratings would provide easier access to important findings. The assigned levels could be determined by an independent, trained panel of reviewers.
    • White papers. Position papers are another viable vehicle to get the foundation of evidence established. Our professional organizations (eg, ASHA, AAA) can accomplish this task by committee effort and compilation. Publication of these position papers online and in trade magazines makes them readily accessible.
    • Attending and participating in conferences. Unfortunately, since “passive dissemination of information” generally leads to only small changes in clinical practice,7 we need to ensure that continuing education opportunities at state and national meetings provide engaging activities for attendees. Interactive educational meetings and the use of opinion leaders have been more consistently effective in promoting behavioral change among health professionals than the distribution of practice guidelines and lectures.7
    • Emphasizing EBP principles in all educational and training activities. At the very basic level, training programs need to include the principles and process in their curriculum so that clinicians can most efficiently consume the available evidence for their ongoing clinical practices.
  3. Changing clinical practice to be more evidence-based is a significant challenge. Change at the individual clinician level is important; equally important, however, is modifying the organizational environments in which people work so that evidence-based decision-making fits the context of the individual’s work.8 For example, if the profession of audiology is to thrive as a doctoral-level profession, we will need to ensure that our daily clinical practice presents us as highly qualified professionals who understand the determinants of improved research uptake and use among hearing health care providers. In addition, we will need to develop methods of evaluating the outcomes of our evidence-based practices. Doing so will take commitment on the part of training programs, clinicians, and industry personnel.
  4. We can ensure that students understand that what they do in the clinic is, and should be, based on sound research. That attitude must become the underpinning of our profession, instilled from the classroom onward. Within each training program, the value of implementing EBP must be emphasized. Across a number of health-science professions, lack of confidence in performing critical appraisal of research and limited knowledge of statistical analysis have emerged as other key barriers to research use.

This underscores the importance of teaching these skills during graduate-level training and of reinforcing them through continuing education. And, as any good educator will attest, these skills must be practiced and refined throughout the clinical career.
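Returning to the pooling idea flagged in recommendation 1, here is a minimal sketch, assuming each small study reports a standardized mean difference (eg, Cohen’s d) and its standard error. It pools the studies with fixed-effect, inverse-variance weighting and reports a 95% confidence interval. All numbers are invented for illustration, and the function name is hypothetical.

```python
import math

def pool_effect_sizes(effects, std_errs):
    """Fixed-effect, inverse-variance pooling of study effect sizes.

    effects:  per-study standardized mean differences (eg, Cohen's d)
    std_errs: the corresponding standard errors
    Returns (pooled_effect, (ci_low, ci_high)) using a 95% CI.
    """
    weights = [1.0 / se ** 2 for se in std_errs]          # precision weights
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))             # SE of pooled estimate
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Three hypothetical small studies of the same hearing aid feature
# (all numbers invented for illustration):
effects = [0.45, 0.30, 0.52]   # standardized mean differences
std_errs = [0.20, 0.25, 0.18]  # standard errors

pooled, (lo, hi) = pool_effect_sizes(effects, std_errs)
print(f"Pooled effect: {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# -> Pooled effect: 0.45 (95% CI 0.22 to 0.68)
```

Note the design choice: fixed-effect pooling assumes the studies estimate the same underlying effect; when study populations or protocols differ, a random-effects model would be the more defensible choice.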

In summary, the culture of our profession is evolving. All stakeholders must share the same goal: improved patient care, based on management steeped in supporting scientific evidence. The academic environment holds the most responsibility for training future clinicians, thus instilling the value of good scientific principles in clinical management. The clinician holds responsibility for consuming the available evidence and using the data to support good practice decisions. Industry holds responsibility for following good research practice in the development of training and advertising materials. Good training, good research, and good collaboration among the stakeholders are essential.

References

  1. Sackett D, Rosenberg W, Gray J, Haynes R, Richardson W. Evidence based medicine: what it is and what it isn’t. Br Med J. 1996;312:71-72.
  2. Clutterbuck H. On the Proper Administration of Blood-Letting, for the Prevention and Cure of Disease. London; 1840. Available at: www.library.ucla.edu/biomed/his/blood/clutterbuck.html. Accessed May 22, 2007.
  3. Thorne PR. Evidence-based audiology and clinical practice. Aust NZ J Audiol. 2003;25(1):10-15.
  4. Hearing Industries Association. Guidelines for Hearing Aid Manufacturers for Substantiation of Performance Claims. Alexandria, Va: HIA; 2002.
  5. Eisenberg J. What does the evidence mean? Can the law and medicine be reconciled? J Health Polit Policy Law. 2001;26(2):369-381.
  6. Mullen R. Survey tests members’ understanding of evidence-based practice. ASHA Leader. 2005;10(5):4,14.
  7. Bero LA, Grilli R, Grimshaw JM, Harvey E, Oxman AD, Thomson MA. Getting research findings into practice: closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. Br Med J. 1998;317:465-468.
  8. Lomas J. Postscript: Understanding evidence-based decision-making—or, why keyboards are irrational. In: Lemieux-Charles L, Champagne F, eds. Using Knowledge and Evidence in Health Care: Multidisciplinary Perspectives. Toronto: University of Toronto Press; 2004.

Correspondence can be addressed to HR or Ruth Bentler.