
Paradigm Shift Happens


by Martin Scanlan - September 28, 2015

David Sackett, a physician who died in May 2015 at age 80, helped usher in a paradigm shift in medicine toward evidence-based practice and the application of improvement science to health care reform. A parallel paradigm shift is afoot in educational research.

Kuhn (1962) introduced the concept of paradigm shift over a half century ago. It captures the notion that our worldviews are shaped by relatively integrated—and frequently tacit—sets of theoretical presuppositions. Breakthroughs in the social and natural sciences frequently demand abandoning these presuppositions, allowing us to see anew. Kuhn’s phrase has caught on, and in the ensuing decades we have seen no shortage of ambitious efforts to identify, shape, and disrupt our operative paradigms. Few have succeeded as marvelously and profoundly as physician David Sackett, who died in May 2015 at the age of 80.


ORIGINS OF A PARADIGM SHIFT


Sackett—alternately referred to as a giant amongst giants and a troublemaker—is credited as the grandfather of a paradigm shift in medicine. Born outside Chicago in 1934, he spent much of his professional career in Canada and Britain. Sackett pushed against the accepted paradigms in the medical field from early in his career. An anecdote from his time in medical school at the University of Chicago in 1959 is revealing. Sackett was working with a teenage patient who had contracted Hepatitis A. While the standard of care at the time dictated that the boy be consigned to bed rest, Sackett was sympathetic to his desperate pleas to be allowed to move. Instead of simply accepting the conventional wisdom, Sackett sought evidence about what range of treatment options existed. After several dead ends, he found a 1955 clinical trial report by Tom Chambers, a U.S. Army doctor who had experimented with treating hepatitis patients during the Korean War. Sackett (2010) recounts the nature of this clinical trial:


Employing what I increasingly came to recognize as “elegant simplicity,” Tom and his colleagues allocated soldiers who met pre-defined Hepatitis criteria at random either to bed rest (continuously in bed, save for one trip daily to the bathroom and one trip to the shower weekly), or to be up and about as much as the patients wanted (with no effort made to control their activity save 1-hour rests after meals) throughout their hospital stay. The time to recovery (as judged by liver function testing) was indistinguishable between the comparison groups, and no recurrent jaundice was observed. (p. 255)


Surprised by this promising report, Sackett apologized to the boy on bed rest, and allowed him to get up and move about as he wished. The boy fully recovered without incident. Sackett, on the other hand, never did. Instead, a paradigm shift was launched:


I became a “trouble-maker,” constantly questioning conventional therapeutic wisdom, and offending especially the subspecialists when they pontificated (I thought) about how I ought to be treating my patients…ten years after I discharged my Hepatitis patient, armed with some book-learning and blessed with brilliant colleagues, I began to emulate these mentors by converting my passive skepticism into active inquiry, addressing such questions as: Why do you have to be a physician in order to provide first-contact primary care? Are the ‘experts’ correct that teaching people with raised blood pressure all about their illness really makes them more likely to take their medicine? Just because the aorto-coronary arterial bypass is good for ischaemic hearts, should we accept claims that extracranial–intracranial arterial bypass is good for ischaemic brains? (2010, p. 255)


Sackett’s advocacy for using clinical data to guide interventions provoked a paradigm shift in the field—evidence-based medicine.


EVIDENCE-BASED MEDICINE


Sackett, Rosenberg, Gray, Haynes, and Richardson (1996) describe evidence-based medicine as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research” (p. 71). The shift toward evidence-based medicine initially faced resistance from many angles. Some argued that it was nothing new—that this was simply how things were already done. Others criticized it as an arrogant effort to simplistically cut costs. Still others denigrated it as a cookbook approach that suppresses clinicians’ freedom to be guided by their own professional judgment.


Over time, however, evidence-based medicine came to be embraced as a productive, liberating, and innovative approach to improving medical practice. Physicians engaged in evidence-based practice must balance their own experiential knowledge with the best available external evidence. Sackett et al. (1996) describe this balance:


Without clinical expertise, practice risks becoming tyrannized by evidence, for even excellent external evidence may be inapplicable to or inappropriate for an individual patient. Without current best evidence, practice risks becoming rapidly out of date to the detriment of patients (p. 72).


This paradigm shift toward evidence-based practices is part of a broader movement to apply improvement science in health care (Berwick, 2008). Improvement science is the process of sifting and winnowing through evidentiary chains to determine the best course of action (Berwick, 2008). This does not reduce practice to a mechanistic procedure. To be sure, physicians blend craft and science, their work laden with drama and mystery, professional judgment, and improvisation (Reilly, 2013). Yet the basic processes of improvement science are increasingly accepted as fundamental drivers of systemic reform to improve the quality of care both in medicine and beyond (Gawande, 2007, 2011).


IMPROVEMENT SCIENCE IN EDUCATION


In a parallel manner to health care, both evidence-based education and the discipline of improvement science are beginning to seed fundamental changes in the field of education. Two trends exemplify this: design-based research and networked improvement communities.


DESIGN-BASED RESEARCH


First, design-based research (DBR) has started to receive more attention. Visualize research in education as a spectrum. At one end of the spectrum is traditional research. As in other social sciences, traditional education research is conceptualized and conducted from the vantage point of scholars, with practitioners agreeing to participate as research subjects. A key advantage of this type of research is that the researcher is afforded the time and training to conduct rigorous data gathering and analysis. The researcher is also well positioned to engage in the inquiry, having some distance from the practice itself. A disadvantage, however, is that the results of such research often do not transfer directly into the work of practitioners in the field.


At the other end of this educational research spectrum is action research. Action research is conceptualized and conducted solely by practitioners. As such, it is eminently applicable to the practitioner’s field, regardless of discipline. Yet because action researchers and practitioners are one and the same, they are heavily invested in the outcomes and, at times, have trouble critically reflecting on evidence. Moreover, they often lack the time and training that would allow them to engage in a rigorous process of data gathering and analysis.


DBR strikes a middle ground on this spectrum (Anderson & Shattuck, 2012). Practitioners conceptualize the work and research questions, identifying pressing problems of practice to examine. Researchers, who bring time, talent, and distance from the practice itself, are responsible for gathering and analyzing data. In this way, DBR reflects a paradigm shift in how evidence-based claims are generated in education. Instead of these claims being solely the purview of the researcher, DBR creates a shared responsibility for this work between the academy and the field. While not synonymous, this development of DBR in education is not unlike movements in other fields, such as the rise of translational research in medicine (Rubio et al., 2010).


NETWORKED IMPROVEMENT COMMUNITIES


A related trend that hints at a nascent paradigm shift in education is networked improvement communities (NICs). Sociocultural theories of learning hold that our purposeful interactions with others in “communities of practice” shape what we know, can do, and believe (Gee, 2008; Lave & Wenger, 1991; Wenger, 1998). Efforts to promote professional learning communities reflect such theory (Hipp & Huffman, 2010; Pappano, 2007). Yet frequently, the actual structures that facilitate learning in communities of practice remain ambiguous and unarticulated (Scanlan, 2013). NICs help confront this weakness.


NICs are narrowly tailored communities of practice that address specific tasks through disciplined inquiry. Bryk, Gomez, Grunow, and LeMahieu (2015) define NICs as endorsing “shared, precise, measurable targets” (p. 11) that lead to tangible progress in solving discrete problems. As such, NICs are congruent with the DBR movement to better link educational research with improved practice (Anderson & Shattuck, 2012).


NICs engage in rapid, iterative processes of initiating small innovations, prototyping, failing, reporting, and adjusting based on those failures (Bryk et al., 2015). To accomplish this, NIC participants articulate a working theory of improvement: a series of tightly crafted hypotheses that drive improvement efforts. This theory of improvement addresses three questions: (a) What specifically are we trying to accomplish? (b) What change might we introduce, and why? (c) How will we know that a change is actually an improvement?


Multiple tools, routines, and practices guide NICs in developing their working theory of improvement. NICs reflect a shift in applying the discipline of improvement science to educational reform. They reflect a deepening understanding of how to scaffold data use in a manner that leads to scalable changes in practice (Coburn & Turner, 2011).


PARADIGM SHIFT HAPPENS


Together, NICs and DBR are provoking a paradigm shift in the processes and products of educational research. This is reflected, for instance, in recent requests for proposals from foundations (e.g., the Research-Practice Partnership grant from the Spencer Foundation) and the federal government (e.g., the Partnerships and Collaborations Focused on Problems of Practice or Policy grant from the Institute of Education Sciences). The shift also appears in large initiatives of the Carnegie Foundation to promote systemic reform across whole school systems (e.g., Baltimore and Austin) and sectors (e.g., community colleges). It further appears in more modest initiatives, such as the effort to create a network of bilingual Catholic schools. As scholars and practitioners, we would do well to follow Sackett’s lead, asking the difficult questions, pushing against accepted truths, and helping this shift happen.


References


Anderson, T., & Shattuck, J. (2012). Design-based research: A decade of progress in education research. Educational Researcher, 41(1), 16-25.


Berwick, D. (2008). The science of improvement. Journal of the American Medical Association, 299(10), 1182-1184.


Bryk, A., Gomez, L., Grunow, A., & LeMahieu, P. (2015). Learning to improve: How America's schools can get better at getting better. Cambridge, MA: Harvard Education Press.


Coburn, C., & Turner, E. (2011). Research on data use: A framework and analysis. Measurement, 9(4), 173-206.


Gawande, A. (2007). Better. New York, NY: Metropolitan Books.


Gawande, A. (2011). The checklist manifesto: How to get things right. New York, NY: Picador.


Gee, J. P. (2008). A sociocultural perspective on opportunity to learn. In P. Moss, D. Pullin, J. P. Gee, E. Haertel, & L. J. Young (Eds.), Assessment, equity, and opportunity to learn (pp. 76-108). Cambridge, UK: Cambridge University Press.


Hipp, K. K., & Huffman, J. B. (2010). Demystifying professional learning communities. Lanham, MD: Rowman & Littlefield Education.


Kuhn, T. (1962). The structure of scientific revolutions. Chicago, IL: University of Chicago Press.


Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. New York, NY: Cambridge University Press.


Pappano, L. (2007). More than "making nice": Getting teachers to (truly) collaborate. Harvard Education Letter, 23(2), 1-3.


Reilly, B. (2013). One doctor: Close calls, cold cases, and the mysteries of medicine. New York, NY: Atria Books.


Rubio, D. M., Schoenbaum, E. E., Lee, L. S., Schteingart, D. E., Marantz, P. R., Anderson, K. E., & Esposito, K. (2010). Defining translational research: Implications for training. Academic Medicine, 85(3), 470-475.


Sackett, D. (2010). A 1955 clinical trial report that changed my career. Journal of the Royal Society of Medicine, 103(6), 254-255.


Sackett, D., Rosenberg, W., Gray, J. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. BMJ, 312, 71-72.


Scanlan, M. (2013). A learning architecture: How school leaders can design for learning social justice. Educational Administration Quarterly, 49(2), 348-391.


Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. New York, NY: Cambridge University Press.



