Validating an instrument for selecting interventions to change physician practice patterns

KEY POINTS FOR CLINICIANS

  • One size probably does not fit all when bringing physicians new information that might change their practice.
  • Physicians differ measurably in what they consider credible sources of information, the weight they assign to practical concerns, and their willingness to diverge from group norms in practice.
  • Interventions that bring new knowledge into practice can be tailored to physicians’ perspectives. Further research may show this approach to be more useful to physicians and more likely to succeed than current approaches.

We previously proposed a theoretical framework for selecting the most effective strategies for changing physicians’ practice patterns.1 This framework called for classifying physicians into 4 categories based on how they respond to new information about the effectiveness of clinical practices, then selecting the strategy best suited to each physician’s response style. In this paper we describe the development and validation of a psychometric instrument to classify physicians into the 4 categories. This is one more element in our ongoing effort to answer, rigorously and specifically, basic questions about the adoption of evidence-based practices; for example, how can we increase physicians’ use of proven interventions, such as β-blockers after myocardial infarction or tight blood pressure control for patients with type 2 diabetes? How can we reduce physicians’ use of disproved therapies, such as oral β-agonist tocolytics for preterm labor or antibiotics for viral illnesses?

The literature is rife with examples of single-mode and multimode studies using educational interventions, positive and negative incentives, group and individualized feedback, sanctions, regulations, academic detailing, and patient-demand interventions to bring about changes in physician practice.2-5 Advocates of these approaches cite published examples of their success in changing clinical practices; in all cases, however, published and unpublished instances of failure exist as well. The lack of a consistent pattern of success or failure has led to a growing recognition that no single strategy will ever be a “magic bullet”;5 therefore, the selection of practice change strategies must be based on specific situations and settings.6-8 However, it is still not known which characteristics of the setting matter most and which approach will work in a specific setting and situation.

We believe that one key factor in selecting effective strategies is the audience. Businesses learned long ago that market segmentation, in which products are advertised differently to people who have different needs, values, and views, is crucial to success in sales. Similarly, our theoretical framework posits that selecting the most appropriate change strategy requires first classifying clinicians according to how they respond to new information about the effectiveness of clinical strategies. We distinguish 4 classification categories: seekers, receptives, traditionalists, and pragmatists.1

Physician categories and underlying factors

Seekers consider systematically gathered, published data (rather than personal experience or authority) the most reliable source of knowledge. They critically appraise the data themselves and value what they view as correct practice over pragmatic concerns, such as seeing patients quickly and efficiently. Most notably, seekers make evidence-driven practice changes even when the changes are out of step with local medical culture.

Like seekers, receptives are evidence-oriented, but they generally rely on the judgment of respected others for critical appraisal of new information. Receptives are likely to act on information from a scientifically and clinically sound source. Although they do not always hew to local medical culture, receptives generally depart from local practice only when the evidence is sufficiently compelling.

Traditionalists view clinical experience and authority as the most reliable basis for practice, and therefore rely on personal experience and the judgment and teachings of clinical leaders for guidance. The term “traditionalist” is not meant to suggest that the practitioner follows older, more traditional medical practices; rather, it relates to the physician’s traditional view of clinical experience as the ultimate basis of knowledge. The traditionalist may be an early adopter of new technologies if a respected clinical leader suggests them. Traditionalists are not greatly concerned with how their practices fit local medical culture, and are more concerned with practicing correctly than efficiently.

Pragmatists focus on the day-to-day demands of a busy practice. Acutely aware of the many competing claims on their scarce time from patients, colleagues, employees, insurers, and hospitals, pragmatists evaluate calls to change their practice in terms of anticipated impact on time, workload, patient flow, and patient satisfaction rather than scientific validity or congruence with local medical culture. Pragmatists may view either evidence or experience as the most reliable foundation for practice, and may be willing to diverge from local norms when doing so is not disruptive; their primary concern, however, is efficiency.


As we emphasized in our original formulation, our categorizations refer to trait, not state; that is, the categories describe general response tendencies, not moment-to-moment clinical decision making. It is therefore not contradictory for a physician to respond as a seeker in one instance and a pragmatist in another, or for the same person to show traditionalist responses to one topic and receptive responses to another. (Most actual clinical behavior is, of necessity, pragmatic most of the time.)

We hypothesize that these physician response styles represent various combinations of 3 underlying factors:

  1. Extent to which scientific evidence, rather than clinical experience and authority, is perceived as the best source of knowledge about good practice (evidence vs experience).
  2. Degree of comfort with clinical practices that are out of step with the local community’s practices or the recommendations of leaders (nonconformity).
  3. Importance attached to managing workload and patient flow while maintaining general patient satisfaction (practicality).

Not all possible combinations of the 3 factors exist, and some combinations are behaviorally indistinguishable—that is, they produce the same response style. The manner in which these 3 factors define the 4 types of physicians is shown in Table 1. In this paper we report the results of 3 iterations in the development of a psychometric instrument to measure these factors.

TABLE 1

Hypothesized factor loading by physician type

Physician type    Evidence vs experience   Nonconformity   Practicality
Seekers           Extreme evidence end     High            Not high
Receptives        Toward evidence end      Moderate        Not high
Traditionalists   Toward experience end    Variable        Not high
Pragmatists       Variable                 Variable        High

Methods

To test the hypothesized relationship between physician category and response to practice change interventions, we needed an instrument for assessing physicians on the 3 underlying attributes so that, based on those attributes, we could subsequently place them in the 4 information-response categories. We created several questions addressing each of our hypothesized factors and refined them for clarity. The question pool was further refined in consultation with active practitioners serving on commissions and committees of the American Academy of Family Physicians, who represented a variety of nonacademic perspectives on clinical practice and learning. An 18-item psychometric instrument was prepared and pilot tested on a convenience sample of 112 family physicians in Iowa and Michigan who were participating in other research projects.

The results of that pilot test were used to prepare a second version, which was tested with 328 physicians at a regional CME conference and 889 physicians within the national Veterans Health Administration system, for a total of 1217. The sample comprised 234 family physicians; 848 internists; 29 obstetrician-gynecologists; 27 general practitioners; 24 emergency physicians; and a small number of general surgeons, pediatricians, psychiatrists, and other specialists. The results from the second version guided the preparation of the third (Figure), which was tested on a sample of 64 family physicians at 2 CME events.

Because of the free-choice manner in which the instruments were distributed, it was not possible to calculate an exact response rate; however, the total number of participants equaled slightly more than 75% of the total number of instruments distributed.

To refine the instrument at each iteration, we began with a factor analysis using the principal-components method and orthogonal varimax rotation. The eigenvalues from the factor analysis were used to determine the number of factors in the optimum solution. The instrument’s questions were assigned to these factors based on the factor on which they loaded most heavily in the rotated solution. Cronbach α was calculated for each factor scale. At each iteration, questions loading less than 0.35 on all factors in the rotated varimax solution were dropped. Questions loading on 2 factors were revised for clearer wording in the subsequent draft. New questions were added to factor scales on which too few questions were loading. All analyses were performed using Intercooled Stata 7.0 statistical software (StataCorp, College Station, TX) on a Linux workstation.
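The item-retention loop above hinges on the Cronbach α of each candidate scale. The coefficient is standard and simple to compute from item-level responses; a minimal sketch in pure Python (the response matrices in the comments are illustrative, not data from the study):

```python
def cronbach_alpha(responses):
    """Cronbach alpha for one scale.

    responses: list of rows, one row per respondent, each row a list of
    that respondent's item scores on the scale. Uses population variance
    throughout; sample variance works too, as long as it is consistent.
    """
    k = len(responses[0])   # number of items on the scale
    n = len(responses)      # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([row[i] for row in responses]) for i in range(k)]
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

With perfectly correlated items (each respondent giving identical scores on both items of a 2-item scale), α is exactly 1; less consistent items pull it toward 0.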

The results of the factor analysis were compared with the theory after the second and third iterations. Physicians were scaled on the 3 factors by summing the responses to the items of each scale, with strongly agree (SA) = 5 and strongly disagree (SD) = 1 (reversing the numbers for items phrased in the opposite manner). Normalization (adjusting scores to account for scales that included more items, resulting in larger maximum scores) was considered but rejected, because normalized scores proved more confusing than unequal scales when the results were presented to audiences.
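The scoring rule just described — Likert responses coded strongly agree = 5 through strongly disagree = 1, reverse-worded items flipped, then summed within each scale — can be sketched as follows. The item-to-scale assignments reuse Table 3, but which specific items were reverse-worded is not stated in the text, so the choice below is a hypothetical assumption for illustration:

```python
# Item-to-scale assignments from Table 3 (third iteration).
SCALES = {
    "evidence_experience": [1, 3, 9, 12, 16, 17],
    "nonconformity":       [2, 5, 7, 11, 13, 15],
    "practicality":        [4, 6, 8, 10, 14],
}
# HYPOTHETICAL: the paper does not identify the reverse-worded items.
REVERSE_ITEMS = {3, 7}

def score_scales(responses):
    """responses: dict mapping item number (1-17) to a Likert value 1-5."""
    scores = {}
    for scale, items in SCALES.items():
        total = 0
        for item in items:
            value = responses[item]
            if item in REVERSE_ITEMS:
                value = 6 - value   # reverse-score: 5 <-> 1, 4 <-> 2
            total += value
        scores[scale] = total
    return scores
```

A respondent answering "neutral" (3) throughout scores at each scale's midpoint, since reverse-scoring leaves 3 unchanged.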

We used the scale scores to classify the physicians into the 4 types (seeker, receptive, traditionalist, and pragmatist). We performed the factor analyses and interpretations as described in Tables 2, 3, and 4, then translated the hypothesized relationships in Table 1 into specific cutoffs as shown in Tables 5 and 6 (for the second and third iterations, respectively). The chosen cutoff points were necessarily somewhat arbitrary; to prove them optimal, we must complete an external validation study comparing physicians’ behavior with their scale scores, which is now underway. The current data address the instrument’s development and internal consistency.


TABLE 2

Factor analysis solutions

            Eigenvalue, by number of factors in solution
Iteration      1       2       3       4
    1        2.88    1.67    1.44    1.23
    2        1.95    1.20    0.809   0.387
    3        3.35    2.31    1.60    0.821

TABLE 3

Scale interpretations

Scale   Interpretation        Questions (on iteration 3)*
  1     Evidence–experience   1, 3, 9, 12, 16, 17
  2     Nonconformity         2, 5, 7, 11, 13, 15
  3     Practicality          4, 6, 8, 10, 14
*See Figure.

TABLE 4

Scale internal consistencies

            Cronbach α at each iteration
Iteration   Evidence–experience   Nonconformity   Practicality
    1              0.63               0.61            0.54
    2              0.70               0.59            0.48
    3              0.79               0.74            0.68

TABLE 5

Scale scores by physician type, second iteration

Physician type    Evidence vs experience (range, 5–25)   Nonconformity (range, 4–20)   Practicality (range, 4–20)
Seekers           Extreme evidence end: ≥20              High: >12                     Not high: ≤14
Receptives        Toward evidence end: ≥15               Moderate: ≤12                 Not high: ≤14
Traditionalists   Toward experience end: <15             Variable                      Not high: ≤14
Pragmatists       Variable                               Variable                      High: >14

TABLE 6

Scale scores by physician type, third iteration (depicted in the Figure)

Physician type    Evidence vs experience (range, 6–30)   Nonconformity (range, 6–30)   Practicality (range, 5–25)
Seekers           Extreme evidence end: ≥22              High: >18                     Not high: ≤14
Receptives        Toward evidence end: ≥18               Moderate: ≤18                 Not high: ≤14
Traditionalists   Toward experience end: <18             Variable                      Not high: ≤14
Pragmatists       Variable                               Variable                      High: >14
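Read as a decision rule, the third-iteration cutoffs check practicality first, because a high practicality score defines pragmatists regardless of the other two scales, and then split the remaining physicians by evidence–experience and nonconformity. A minimal sketch under that reading (score combinations falling between the rows of the table, such as evidence 18–21 with high nonconformity, are not assigned a type and return None):

```python
def classify(evidence, nonconformity, practicality):
    """Classify a physician from third-iteration scale scores.

    Cutoffs follow Table 6: evidence 6-30, nonconformity 6-30,
    practicality 5-25.
    """
    if practicality > 14:                         # high practicality
        return "pragmatist"
    if evidence >= 22 and nonconformity > 18:     # extreme evidence end, high
        return "seeker"
    if evidence >= 18 and nonconformity <= 18:    # toward evidence end, moderate
        return "receptive"
    if evidence < 18:                             # toward experience end
        return "traditionalist"
    return None  # combination not assigned a type in Table 6
```

For example, a physician scoring 25 on evidence–experience, 20 on nonconformity, and 10 on practicality would fall in the seeker row.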

FIGURE Psychometric Instrument


Results

For the first, second, and third iterations, we received 106, 1120, and 61 usable completed instruments, respectively. At every stage of the instrument’s development, factor analysis showed that a 3-factor model fit best. The eigenvalues declined rapidly when more than 3 factors were extracted (Table 2), indicating that additional factors would not improve the solution.

Orthogonal rotation and interpretation of the questions making up each factor produced 3 psychologically meaningful scales (Table 3) that corresponded closely to our theoretical model; the same 3 scales emerged at each iteration. The scales are named after the hypothesized factors: evidence–experience, nonconformity, and practicality. The Cronbach α for each scale at each iteration is presented in Table 4.

Using the above-described classification scheme (with specific cutoffs detailed in Tables 5 and 6), the 1181 physicians who completed the instrument in the second and third iterations were classified as follows: 2.5% seekers; 57.0% receptives; 12.6% traditionalists; and 27.9% pragmatists. Different cutoff values would yield somewhat different percentages, but seekers are very few using any reasonable value.

Discussion

These results are consistent with the theoretical construct of 3 factors underlying our physician classification scheme and demonstrate that those factors can be measured on scales with reasonable internal consistency. Moreover, not all possible combinations of the 3 factors occurred, which is consistent with the 4-types theory depicted in Table 1. For example, the theory predicts no physicians who are strongly evidence-based and strongly conformist, and that combination did not occur; physicians who are strongly evidence-based and strongly nonconformist (the seekers) did. Few physicians selected either extreme for any factor, but, with the exception of nonconformity, a broad range of responses existed across all of the factors.

These findings show that physicians differ in their attitudes toward new information about the effectiveness or appropriateness of clinical strategies, and that those differences are measurable and quantifiable. Quantifying those differences was a major step forward in testing our theoretical framework for selecting effective practice change strategies.

The next step is to demonstrate external validity by showing that differences in physician behavior are consistent with demonstrable differences in attitudes. Such a study is underway at this writing. A trial of practice change interventions guided by the categorization scheme should be carried out subsequently.

The categories we propose do not reflect bimodal distributions of attributes; physicians are distributed relatively uniformly all along the 3 scales. The categories are useful descriptors, not absolute pigeonholes.

The results suggest to us that there is fertile ground for applied psychometrics and cognitive science research related to changing clinical practices. Such work may help illuminate the murky results of practice change intervention and guideline implementation studies to date. Further cognitive research about our own theoretical framework is likely to identify factors and complexities that we have not yet addressed.

Acknowledgments

The authors thank Mark Ebell, MD, MS, for his assistance in revising the instrument; Judith Zemencuk and Bonnie Boots-Miller of the Ann Arbor Veterans Administration Health Services Research and Development offices for their assistance in distributing the instrument to and collecting data from Veterans Administration physicians; Janice Klos for her help in gathering data from Michigan Academy of Family Physicians member physicians; Van Harrison, PhD, and his staff for their help in enlisting the participation of physicians at Michigan CME events; and, of course, the Veterans Administration, Michigan Academy of Family Physicians, and Michigan physicians who graciously completed instruments for this project.

References

1. Wyszewianski L, Green LA. Strategies for changing clinicians’ practice patterns: a new perspective. J Fam Pract 2000;49:461-4.

2. Eisenberg JM. Doctors’ Decisions and the Cost of Medical Care. Ann Arbor: Health Administration Press, 1986.

3. Davis D, O’Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA 1999;282:867-74.

4. Wensing M, van der Weijden T, Grol R. Implementing guidelines and innovations in general practice: which interventions are effective? Br J Gen Pract 1998;48:991-7.

5. Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. CMAJ 1995;153:1423-31.

6. Grol R. Beliefs and evidence in changing clinical practice. BMJ 1997;315:418-21.

7. Cabana MD, Rand CS, Powe NR, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999;282:1458-65.

8. Woolf SH. Changing physician practice behavior: the merits of a diagnostic approach. J Fam Pract 2000;49:126-9.

Author and Disclosure Information

LEE A. GREEN, MD, MPH
DANIEL W. GORENFLO, PHD
LEON WYSZEWIANSKI, PHD
Ann Arbor, Michigan
From the University of Michigan Medical School (L.A.G., D.W.G.) and the University of Michigan School of Public Health (L.W.), Ann Arbor, MI. Presented in part at the North American Primary Care Research Group meeting of 2000. The authors report no competing interests. Address reprint requests to Lee Green, MD, MPH, Department of Family Medicine, 1018 Fuller, Campus 0708, Ann Arbor, MI 48109. E-mail: greenla@umich.edu.

The Journal of Family Practice 2002;51(11):938-942.

The categories we propose do not reflect bimodal distributions of attributes; physicians are distributed relatively uniformly all along the 3 scales. The categories are useful descriptors, not absolute pigeonholes.

The results suggest to us that there is fertile ground for applied psychometrics and cognitive science research related to changing clinical practices. Such work may help illuminate the murky results of practice change intervention and guideline implementation studies to date. Further cognitive research about our own theoretical framework is likely to identify factors and complexities that we have not yet addressed.

· Acknowledgments ·

The authors thank Mark Ebell, MD, MS for his assistance in revising the instrument; Judith Zemencuk and Bonnie Boots-Miller of the Ann Arbor Veterans Administration Health Services Research and Development offices for their assistance in distributing the instrument to and collecting data from Veterans Administration physicians; Janice Klos for her help in gathering data from Michigan Academy of Family Practice member physicians; Van Harrison, PhD and his staff for their help in enlisting the participation of physicians at Michigan CME events; and of course, the Veterans Administration, Michigan Academy of Family Practice, and Michigan physicians who graciously completed instruments for this project.

KEY POINTS FOR CLINICIANS

  • One size probably does not fit all when bringing physicians new information that might change their practice.
  • Physicians differ measurably in what they consider credible sources of information, the weight they assign to practical concerns, and their willingness to diverge from group norms in practice.
  • Interventions that bring new knowledge into practice can be tailored to physicians’ perspectives. Further research may show this approach to be more useful to physicians and more likely to succeed than current approaches.

We previously proposed a theoretical framework for selecting the most effective strategies for changing physicians’ practice patterns.1 This framework called for classifying physicians into 4 categories based on how they respond to new information about the effectiveness of clinical practices, then selecting the strategy best suited to each physician’s response style. In this paper we describe the development and validation of a psychometric instrument to classify physicians into the 4 categories. This is one more element in our ongoing effort to answer, rigorously and specifically, basic questions about the adoption of evidence-based practices; for example, how can we increase physicians’ use of proven interventions, such as β-blockers after myocardial infarction or tight blood pressure control for patients with type 2 diabetes? How can we reduce physicians’ use of disproved therapies, such as oral β-agonist tocolytics for preterm labor or antibiotics for viral illnesses?

The literature is rife with examples of singlemode and multimode studies using educational interventions, positive and negative incentives, group and individualized feedback, sanctions, regulations, academic detailing, and patient-demand interventions to bring about changes in physician practice. 2-5 Advocates of these approaches cite published examples of their success in changing clinical practices; in all cases, however, published and unpublished instances of failure exist as well. The lack of a consistent pattern of success or failure has led to a growing recognition that no single strategy will ever be a “magic bullet”5 ; therefore, the selection of practice change strategies must be based on specific situations and settings.6-8 However, it is still not known what characteristics of the setting matter most and which approach will work in a specific setting and situation.

We believe that one key factor in selecting effective strategies is the audience. Businesses learned long ago that market segmentation, in which products are advertised differently to people who have different needs, values, and views, is crucial to success in sales. Similarly, our theoretical framework posits that selecting the most appropriate change strategy requires first classifying clinicians according to how they respond to new information about the effectiveness of clinical strategies. We distinguish 4 classification categories: seekers, receptives, traditionalists, and pragmatists.1

Physician categories and underlying factors

Seekers consider systematically gathered, published data (rather than personal experience or authority) the most reliable source of knowledge. They critically appraise the data themselves and value what they view as correct practice over pragmatic concerns, such as seeing patients quickly and efficiently. Most notably, seekers make evidence-driven practice changes even when the changes are out of step with local medical culture.

Like seekers, receptives are evidence-oriented, but they generally rely on the judgment of respected others for critical appraisal of new information. Receptives are likely to act on information from a scientifically and clinically sound source. Although they do not always hew to local medical culture, receptives generally depart from local practice only when the evidence is sufficiently compelling.

Traditionalists view clinical experience and authority as the most reliable basis for practice, and therefore rely on personal experience and the judgment and teachings of clinical leaders for guidance. The term “traditionalist” is not meant to suggest that the practitioner follows older, more traditional medical practices; rather, it relates to the physician’s traditional view of clinical experience as the ultimate basis of knowledge. The traditionalist may be an early adopter of new technologies if a respected clinical leader suggests them. Traditionalists are not greatly concerned with how their practices fit local medical culture, and are more concerned with practicing correctly than efficiently.

Pragmatists focus on the day-to-day demands of a busy practice. Acutely aware of the many competing claims on their scarce time from patients, colleagues, employees, insurers, and hospitals, pragmatists evaluate calls to change their practice in terms of anticipated impact on time, workload, patient flow, and patient satisfaction rather than scientific validity or congruence with local medical culture. Pragmatists may view either evidence or experience as the most reliable foundation for practice, and may be willing to diverge from local norms when doing so is not disruptive; their primary concern, however, is efficiency.

 

 

As we emphasized in our original formulation, our categorizations refer to trait, not state; that is, the categories describe general response tendencies, not moment-to-moment clinical decision making. It is incorrect to say that a physician responds as a seeker in one instance and a pragmatist in another, or that the same person shows traditionalist responses to one topic and receptive responses to another. (Most actual clinical behavior is, of necessity, pragmatic most of the time.)

We hypothesize that these physician response styles represent various combinations of 3 underlying factors:

  1. Extent to which scientific evidence, rather than clinical experience and authority, is perceived as the best source of knowledge about good practice (evidence vs experience).
  2. Degree of comfort with clinical practices that are out of step with the local community’s practices or the recommendations of leaders (nonconformity).
  3. Importance attached to managing workload and patient flow while maintaining general patient satisfaction (practicality).

Not all possible combinations of the 3 factors exist, and some combinations are behaviorally indistinguishable—that is, they produce the same response style. The manner in which these 3 factors define the 4 types of physicians is shown in Table 1. In this paper we report the results of 3 iterations in the development of a psychometric instrument to measure these factors.

TABLE 1

Hypothesized factor loading by physician type

Physician type     Evidence vs experience    Nonconformity    Practicality
Seekers            Extreme evidence end      High             Not high
Receptives         Toward evidence end       Moderate         Not high
Traditionalists    Toward experience end     Variable         Not high
Pragmatists        Variable                  Variable         High

Methods

To test the hypothesized relationship between physician category and response to practice change interventions, we needed to develop an instrument for assessing physicians on the underlying 3 attributes so that, based on those attributes, we could subsequently place them in the 4 information response categories. We created several questions addressing each of our hypothesized factors and refined them for clarity. The question pool was further refined in consultation with active practitioners serving on commissions and committees of the American Academy of Family Practice, who represented a variety of nonacademic perspectives on clinical practice and learning. An 18-item psychometric instrument was prepared and pilot tested on a convenience sample of 112 family physicians in Iowa and Michigan who were participating in other research projects.

The results of that pilot test were used to prepare a second version, which was tested with 328 physicians at a regional CME conference and 889 physicians within the national Veterans Health Administration system, for a total of 1217. The sample comprised 234 family physicians; 848 internists; 29 obstetrician/gynecologists; 27 general practitioners; 24 emergency physicians; and a small number of general surgeons, pediatricians, psychiatrists, and other specialists. The results from the second version guided the preparation of the third (Figure), which was tested on a sample of 64 family physicians at 2 CME events.

Because of the free-choice manner in which the instruments were distributed, it was not possible to calculate an exact response rate; however, the total number of participants equaled slightly more than 75% of the total number of instruments distributed.

To refine the instrument at each iteration, we began with a factor analysis using the principal-components method and orthogonal varimax rotation. The eigenvalues from the factor analysis were used to determine the number of factors in the optimum solution. The instrument’s questions were assigned to these factors based on the factor on which they loaded most heavily in the rotated solution, and Cronbach α was calculated for each factor scale. At each iteration, questions loading less than 0.35 on all factors in the rotated varimax solution were dropped; questions loading on 2 factors were reworded for clarity in the subsequent draft; and new questions were added to factor scales on which too few questions loaded. All analyses were performed using Intercooled Stata 7.0 statistical software (Stata Corp, College Station, TX) on a Linux workstation.
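
The analyses themselves were run in Stata, but the eigenvalue step can be illustrated with a short sketch. The code below is an assumption for illustration only (NumPy rather than Stata, with simulated Likert responses); it computes the eigenvalues of the item correlation matrix, which is what Table 2 reports for candidate 1- to 4-factor solutions:

```python
import numpy as np

def item_eigenvalues(X):
    """Eigenvalues of the item correlation matrix, sorted descending.

    X is an (n_respondents, n_items) matrix of Likert responses
    (strongly agree = 5 ... strongly disagree = 1). The number of
    factors to retain is chosen where these values drop off sharply.
    """
    R = np.corrcoef(X, rowvar=False)             # item-by-item correlations
    return np.sort(np.linalg.eigvalsh(R))[::-1]  # largest eigenvalue first

# Simulated responses: 60 respondents, 5 hypothetical items
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(60, 5))
vals = item_eigenvalues(X)
```

Because a correlation matrix has ones on its diagonal, the eigenvalues always sum to the number of items; the retained factors are those that account for the steep initial part of that total.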

The results of the factor analysis were compared with the theory after the second and third iterations. Physicians were scaled on the 3 factors by summing the responses to the items of each scale, with strongly agree (SA) = 5 and strongly disagree (SD) = 1 (reversing the numbers for items phrased in the opposite manner). Normalization (adjusting scores to account for scales that included more items, resulting in larger maximum scores) was considered but rejected, because normalized scores proved more confusing than unequal scales when the results were presented to audiences.
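
That scoring step can be sketched as follows. The item-to-scale assignments follow Table 3 (third iteration), but the text does not say which items are reverse-phrased, so the reversed set below is a made-up example:

```python
# Item-to-scale assignment from Table 3 (third iteration)
SCALES = {
    "evidence_experience": [1, 3, 9, 12, 16, 17],
    "nonconformity":       [2, 5, 7, 11, 13, 15],
    "practicality":        [4, 6, 8, 10, 14],
}
REVERSED = {3, 7, 10}  # hypothetical reverse-phrased items

def scale_scores(responses):
    """Sum Likert responses per scale, with SA = 5 ... SD = 1.

    responses: dict mapping item number (1-17) to its Likert value.
    Reverse-phrased items are flipped (5 becomes 1, etc.) before summing.
    """
    return {
        scale: sum((6 - responses[i]) if i in REVERSED else responses[i]
                   for i in items)
        for scale, items in SCALES.items()
    }

# A respondent answering "neutral" (3) to every item lands at the
# midpoint of each scale's range
neutral = scale_scores({i: 3 for i in range(1, 18)})
```

Note that because the scales have different numbers of items, their maximum scores differ; as described above, this was left as-is rather than normalized.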

We used the scale scores to classify the physicians into the 4 types (seeker, receptive, traditionalist, and pragmatist). We performed the factor analyses and interpretations as described in Tables 2, 3, and 4, then translated the hypothesized relationships in Table 1 into specific calculations as shown in Tables 5 and 6 (for the second and third iterations, respectively). The chosen cutoff points were necessarily somewhat arbitrary; to prove them optimal, we must complete an external validation study of the physicians’ behavior vs their scale scores, which is now underway. The current data address the instrument’s development and internal consistency.

TABLE 2

Factor analysis solutions

                Eigenvalues by number of factors in solution
Iteration       1        2        3        4
1               2.88     1.67     1.44     1.23
2               1.95     1.20     0.809    0.387
3               3.35     2.31     1.60     0.821

TABLE 3

Scale interpretations

Scale    Interpretation         Questions (on iteration 3)*
1        Evidence–experience    1, 3, 9, 12, 16, 17
2        Nonconformity          2, 5, 7, 11, 13, 15
3        Practicality           4, 6, 8, 10, 14

*See Figure.

TABLE 4

Scale internal consistencies

                Cronbach α at each iteration
Iteration       Evidence–experience    Nonconformity    Practicality
1               0.63                   0.61             0.54
2               0.70                   0.59             0.48
3               0.79                   0.74             0.68

TABLE 5

Scale scores by physician type, second iteration

Physician type     Evidence vs experience       Nonconformity      Practicality
                   (range, 5–25)                (range, 4–20)      (range, 4–20)
Seekers            Extreme evidence end: ≥20    High: >12          Not high: ≤14
Receptives         Toward evidence end: ≥15     Moderate: ≤12      Not high: ≤14
Traditionalists    Toward experience end: <15   Variable           Not high: ≤14
Pragmatists        Variable                     Variable           High: >14

TABLE 6

Scale scores by physician type, third iteration (depicted in the Figure)

Physician type     Evidence vs experience       Nonconformity      Practicality
                   (range, 6–30)                (range, 6–30)      (range, 5–25)
Seekers            Extreme evidence end: ≥22    High: >18          Not high: ≤14
Receptives         Toward evidence end: ≥18     Moderate: ≤18      Not high: ≤14
Traditionalists    Toward experience end: <18   Variable           Not high: ≤14
Pragmatists        Variable                     Variable           High: >14

FIGURE Psychometric Instrument


Results

For the first, second, and third iterations, we received 106, 1120, and 61 usable completed instruments, respectively. At every stage of the instrument’s development, factor analysis showed that a 3-factor model fit best: the eigenvalues declined rapidly when more than 3 factors were extracted (Table 2), indicating that additional factors would not improve the solution.

Orthogonal rotation and interpretation of the questions making up each factor produced 3 psychologically meaningful scales (Table 3) that correspond closely to our theoretical model; the same 3 scales emerged at each iteration. The scales are named for the factors of the theory above: evidence–experience, practicality, and nonconformity. The Cronbach α for each scale at each iteration is presented in Table 4.

Using the above-described classification scheme (with specific cutoffs detailed in Tables 5 and 6), the 1181 physicians who completed the instrument in the second and third iterations were classified as follows: 2.5% seekers; 57.0% receptives; 12.6% traditionalists; and 27.9% pragmatists. Different cutoff values would yield somewhat different percentages, but seekers are very few using any reasonable value.

Discussion

These results are consistent with the theoretical construct of 3 factors underlying our physician classification scheme and demonstrate that those factors can be measured on scales with reasonable internal consistency. The data are consistent with the theory on which the instrument was developed. Not all possible combinations of the 3 factors exist, which is consistent with the 4-types theory depicted in Table 1. For example, there should be no physicians who are strongly evidence-based and strongly conformist, and that combination does not occur. However, there are physicians who are strongly evidence-based and strongly nonconformist (the seekers). Few physicians scored at either extreme of any factor but, with the exception of nonconformity, a broad range of scores was observed on each scale.

These findings show that physicians differ in their attitudes toward new information about the effectiveness or appropriateness of clinical strategies, and that those differences are measurable and quantifiable. Quantifying those differences was a major step forward in testing our theoretical framework for selecting effective practice change strategies.

The next step is to demonstrate external validity by showing that differences in physician behavior are consistent with demonstrable differences in attitudes. Such a study is underway at this writing. A trial of practice change interventions guided by the categorization scheme should be carried out subsequently.

The categories we propose do not reflect bimodal distributions of attributes; physicians are distributed relatively uniformly all along the 3 scales. The categories are useful descriptors, not absolute pigeonholes.

The results suggest to us that there is fertile ground for applied psychometrics and cognitive science research related to changing clinical practices. Such work may help illuminate the murky results of practice change intervention and guideline implementation studies to date. Further cognitive research about our own theoretical framework is likely to identify factors and complexities that we have not yet addressed.

· Acknowledgments ·

The authors thank Mark Ebell, MD, MS for his assistance in revising the instrument; Judith Zemencuk and Bonnie Boots-Miller of the Ann Arbor Veterans Administration Health Services Research and Development offices for their assistance in distributing the instrument to and collecting data from Veterans Administration physicians; Janice Klos for her help in gathering data from Michigan Academy of Family Practice member physicians; Van Harrison, PhD and his staff for their help in enlisting the participation of physicians at Michigan CME events; and of course, the Veterans Administration, Michigan Academy of Family Practice, and Michigan physicians who graciously completed instruments for this project.

References

1. Wyszewianski L, Green LA. Strategies for changing clinicians’ practice patterns: a new perspective. J Fam Pract 2000;49:461-4.

2. Eisenberg JM. Doctors’ Decisions and the Cost of Medical Care. Ann Arbor: Health Administration Press, 1986.

3. Davis D, O’Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA 1999;282:867-74.

4. Wensing M, van der Weijden T, Grol R. Implementing guidelines and innovations in general practice: which interventions are effective? Br J Gen Pract 1998;48:991-7.

5. Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. CMAJ 1995;153:1423-31.

6. Grol R. Beliefs and evidence in changing clinical practice. BMJ 1997;315:418-21.

7. Cabana MD, Rand CS, Powe NR, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999;282:1458-65.

8. Woolf SH. Changing physician practice behavior: the merits of a diagnostic approach. J Fam Pract 2000;49:126-9.


Issue
The Journal of Family Practice - 51(11)
Page Number
938-942