
Chapter 4: Measure Learning Environments, Not Just Students, to Support Learning and Development


by David Paunesku & Camille A. Farrington - 2020

Background: Young people are more likely to develop into effective learners, productive adults, and engaged citizens when their learning environments afford them certain kinds of experiences. For example, students are more likely to succeed when they experience a sense of belonging in school or experience schoolwork as personally relevant.

Purpose: How can schools systematically ensure they provide every one of their students with the important developmental experiences they need to succeed and thrive? To answer this question, we offer a conceptual framework that integrates insights from empirical literatures in education, psychology, and developmental science; innovations from early warning indicator methods; and our own experiences as researchers working in partnership with practitioners to build more equitable and developmentally supportive learning environments.

Research Design: Integrative Perspective

Recommendations: We posit that schools currently pay a great deal of attention to the results of effective learning (e.g., high test scores), but not nearly enough attention to the causes of effective learning (e.g., assignments that are relevant enough to motivate students). We propose that schools could foster learning and development more systematically and more equitably if they started to measure, not just downstream learning outcomes, but also the upstream developmental experiences that make those outcomes more likely to unfold.

In recent years, public education has witnessed surging interest in two parallel developments: (1) a more integrated and holistic understanding of humans’ social, emotional, and academic development (SEAD), and (2) the use of early warning indicators (EWIs) that enable educators to recognize and address nascent conditions before those conditions produce harmful consequences for young people. Interest in each of these developments is warranted by the impact they have shown—when applied effectively—for supporting students and improving schools. In this article, we weave together insights from each of these developments into an applied, conceptual framework that is intended to inform systematic school improvement efforts. The framework centers on a contemporary, integrated understanding of SEAD, and it leverages the affordances of EWIs to support educators in applying insights from SEAD research systematically. The framework strives to answer the following questions: What do students need to thrive inside learning settings? And, how can educators systematically improve their own ability to meet those needs?


Before we explain in detail how and why our framework weaves together advancements from SEAD research and the early indicator movement, we describe the historical context in which these parallel advancements have taken place, and we consider the strengths and limitations of each approach. We then discuss how these two powerful approaches can be brought together in practice to build educators’ capacity to support young people’s learning and development. In laying out our arguments, we synthesize insights from empirical literatures in education, psychology, and developmental science, and we draw on our own experiences as researchers working in partnership with practitioners to apply these ideas in the field.


SEAD AND EWIs IN HISTORICAL CONTEXT


The growing interest in supporting students’ holistic social, emotional, and academic development and in the use of EWIs has arisen in the context of three decades of academic standards reform, in which test-based accountability has been the primary lever for trying to improve schools and accelerate student learning. This longstanding, highly impactful reform movement was motivated in large part by stark disparities in educational achievement by race and socioeconomic class between students in different states, districts, schools, and even academic tracks within schools (Mehta, 2015).


The logic of standards-based accountability was to eradicate these differences in performance and outcomes by elevating and standardizing our expectations for the academic achievement of all student groups. While some good has come from this policy approach, it has fallen short of achieving the laudable goals envisioned by its proponents (Mehta, 2015). Regrettably, we see only pockets of improvement, little change in disparities, and a concerted backlash against the narrowing of public education to student performance on standardized tests.


Further, there have been several unintended consequences of an education agenda that has focused almost exclusively on narrow academic outcomes, measured on an annual basis, to make high-stakes decisions about students, teachers, and schools. These include the deprioritization of untested subjects such as science, civics, art, and music (Au, 2007; Berliner, 2011; McMurrer, 2007); data routines that convey a restricted role for teachers in their students’ development, focused only on discrete academic skills (Valli & Buese, 2007); and a professional culture for educators often characterized by punitive accountability (Hargreaves & Braun, 2013; Webb, 2005). Together, these unintended consequences have engendered anxiety and undermined educators’ agency and professionalism while breeding defensiveness and burnout (Dworkin, 2009; Friedman, 2000; Mathison & Freeman, 2006). Unfortunately, some of these consequences may be most harmful to the intended beneficiaries of the standards-based reform movement: low-income students and students of color in racially segregated and under-resourced schools and districts (President’s Committee on the Arts and the Humanities, 2011). Given the small and uneven gains garnered from decades of intense focus on academic test scores, the education field is looking for another way forward.


How can the emergent focus on the integration of social, emotional, and academic development counterbalance the unintended consequences of test-based reform? The shift away from solely academic outcomes to include social and emotional skills and competencies has provided a richer and more holistic picture of the role that schools can play in supporting students’ development into healthy, engaged, and productive adults. In parallel, early warning indicators—measures that forecast important developmental outcomes and milestones—were created to provide educators with timely, actionable information about students’ developmental trajectories. The growing use and impact of these indicators has demonstrated that measures are much more useful for improvement if they enable educators to recognize problems early on, before it is too late to take corrective action.


HOW COULD EWIs BE LEVERAGED TO SUPPORT LEARNING AND DEVELOPMENT?


In this article, we argue that an integrated approach to student development—supported by complementary EWIs—holds significant promise for improving students’ experiences and outcomes. We propose that such an approach could serve as an antidote to some of the unforeseen consequences of standards-based reform, while simultaneously reinforcing its commitment to rigor and its basic premise that the benefits of public education should accrue equitably to all students. We believe the promise of this integrated approach can be realized under a new measurement paradigm that provides educators with rapid and ongoing insights about the social and emotional experiences that their classrooms are eliciting in their students.


Why do educators need information about the way their students experience their classrooms? Decades of science suggest that students’ social and emotional experiences matter for their academic learning as well as for their overall development, and that those experiences are profoundly shaped by the contexts in which students learn (Cantor et al., 2018; Farrington et al., 2012; Jones & Kahn, 2017; National Academies of Sciences, Engineering, and Medicine, 2018). The social and academic opportunities afforded to students in their classrooms, as well as how students experience these interactions and opportunities, substantially influence their motivation and cognitive engagement in learning (National Research Council and the Institute of Medicine, 2004). Further, there is evidence that classrooms create differential experiences for students by race, class, gender, and other social categories (Cohen & Steele, 2002; Steele, 1997; Walton & Cohen, 2011). It is clear that educators have enormous power to shape learning contexts and to influence students’ experiences within those contexts (Darling-Hammond et al., 2019; Osher et al., 2018).


We conceptualize classroom conditions as “upstream” factors that influence a host of “downstream” outcomes, including academic achievement, educational attainment, mental health, identity development, and trust in public institutions (see Figure 1; Love, 2019; Nagaoka et al., 2015; Valenzuela, 1999). Teachers can play an essential, upstream role in students’ development by creating classroom conditions that engender engaged learning and healthy development while proactively preventing behavioral or learning problems that may emerge in a more dysfunctional or less supportive classroom environment.


Research also suggests that human beings—in this case, students and educators—can improve much more quickly when provided rapid, specific, and actionable feedback (Askew, 2004; Hattie & Timperley, 2007). This implies that, when it comes to creating environments that foster healthy social, emotional, and academic development, educators could more quickly improve their own capacity if they were afforded such feedback. In this article, we describe how many schools are already leveraging EWIs to get such useful feedback, and we lay out how this approach could be adapted to support the systematic improvement of learning environments.


These are complementary scientific insights: Research that illuminates the conditions under which students thrive meets research that highlights how educators could rapidly improve in creating those conditions. In this article, we synthesize these insights into a new paradigm for SEAD measurement that shifts the focus from the downstream social–emotional and academic competencies of individual students to the upstream conditions in learning environments that give rise to those competencies. In describing this paradigm, we lay out a conceptual framework that draws on existing literature in psychology, human development, and education policy—and on our own experience as researchers working in partnership with educators to improve schools.

THE CRITICAL ROLE OF LEARNING ENVIRONMENTS IN SEAD


After three decades of high-stakes accountability in education, one response to an often myopic focus on summative test scores has been to shift the pendulum away from academic outcomes and onto social and emotional skills and competencies. This interest is warranted by the extensive empirical literatures showing that students’ developmental trajectories and outcomes are powerfully influenced by the social and emotional dimensions of their learning experiences—by students’ feelings and beliefs in and about school (Dweck et al., 2014; Farrington et al., 2012; Quay & Romero, 2015).


However, a disadvantage of any approach that counterposes a social and emotional focus against an academic focus is that it denies the fundamentally integrated nature of social, emotional, and academic development (Jones & Kahn, 2017). Students’ social and emotional experiences do not just matter for students’ social and emotional development; they powerfully affect students’ desire and ability to learn academic content as well (Immordino-Yang et al., 2018).


Currently, in most schools and districts with an explicit focus on “social–emotional learning,” the work is understood as implementing evidence-based programs to develop students’ social–emotional competencies (Jones & Bouffard, 2012; Weissberg & O’Brien, 2004). In the elementary grades, these generally focus on developing students’ self-regulatory or interpersonal skills (e.g., strategies for recognizing one’s emotions, controlling one’s behavior, or taking another’s perspective) (Rimm-Kaufman & Hulleman, 2015). In middle grades and high school, social–emotional learning work often focuses on self-regulated learning or other “success skills” to improve academic performance and reduce risk behaviors (Collaborative for Academic, Social, and Emotional Learning, 2015) or restorative practices in response to disciplinary infractions (Karp & Breslin, 2001).


Developing students’ competencies in ways that enable them to engage fully in their school work, persevere in the face of challenges, and learn from their mistakes is a worthy and important goal. In taking on this important mission, the default approach of many educators—trained in the student data-centered practices spawned by No Child Left Behind—would be to measure students’ baseline SEAD competencies and identify deficits, implement a program to explicitly teach the requisite skills to students, and periodically measure students for growth. As intuitive as it may be, this approach departs in important ways from our best current understanding of the ways social, emotional, and academic development unfolds.


A more expansive and contemporary understanding of social, emotional, and academic development recognizes the integrated nature of learning and development and the important role that students’ learning experiences play in their holistic development. Concomitant with this more expansive view is a shift in focus toward creating equitable and developmentally supportive learning environments rather than on measuring and remediating individual student competencies.


Emerging evidence suggests that SEAD competencies are not necessarily something students “have” or “don’t have” in particular “amounts,” but rather that social, emotional, and even academic competencies are cued and exhibited to a greater or lesser extent in different environments. When middle and high school students responded to survey questions about their own mindsets, perseverance, use of learning strategies, and academic behaviors in the context of two different classrooms, differences in the classroom environments across those two classes predicted measurable differences in student self-reports on these intrapersonal variables (Farrington et al., 2019). More developmentally supportive and psychologically attuned classrooms brought forth more socially, emotionally, and academically competent students.


The research evidence makes clear that human beings naturally become more engaged in learning when they perceive that key motivational conditions are in place. Students work harder, learn more, and perform better on academic tasks when they feel safe, valued, and respected in a learning environment (Farrington et al., 2012; Sakiz et al., 2012; Wentzel, 1997); when they see how schoolwork is relevant to their own lives and goals (Hulleman & Harackiewicz, 2009; Yeager et al., 2014a); and when they are in environments that frame challenges and setbacks as opportunities to stretch and grow (Blackwell et al., 2007; Paunesku et al., 2015; Yeager et al., 2014b). The research examples below more fully illustrate these points.


Example 1: In a study designed to increase the perceived relevance of science, students who were initially uninterested in science were randomly assigned to make connections between their science lessons and their personal lives. They earned semester science grades 23 percentile points higher than comparable peers (Hulleman & Harackiewicz, 2009).


Example 2: When students of color received critical feedback on an essay from their teacher, they were four times more likely to revise their essay—and they earned final grades 18 percentile points higher on average—if their teachers also expressed assurance that students could meet the high expectations (Yeager et al., 2014b).


Human beings only learn when they engage—when they put forth effort, focus attention, and persist through the inevitable challenges of learning difficult skills and concepts. Therefore, it is critical for educators to create conditions that foster such engagement in their classrooms. The science of learning and development suggests that schools could more effectively cultivate academic engagement and achievement and close opportunity gaps if educators could effectively attend to the social and emotional dimensions of learning.


Focusing on creating psychologically supportive learning environments for all students, rather than focusing on measuring and changing the behaviors of individual students, has two clear advantages for educators. First, good teachers already understand that part of their job is to create a welcoming and intellectually engaging classroom for all learners. Supporting teachers with robust science on how to do that just makes good sense. Teachers can use identified strategies to ensure that students feel like they belong in class, believe they can be successful in class, and see the intellectual skills they develop in class as relevant to their lives—all of which, in turn, affect students’ academic engagement and achievement (Dweck et al., 2014; Farrington et al., 2012; Ryan & Deci, 2000).


In contrast, intervening to address the social and emotional “deficits” of an individual student may feel to teachers like something far outside of their wheelhouse. Such a highly targeted intervention may necessitate more advanced training in psychology—and still fail if the environment conflicts with the intervention (Yeager & Walton, 2011; Yeager et al., 2019). Furthermore, as described in the next section, we do not have particularly reliable or practical ways to measure such individual-level social–emotional “deficits” or competencies (Duckworth & Yeager, 2015).


There is significant potential to help schools become more equitable and excellent learning environments by focusing on the conditions for healthy social, emotional, and academic development. However, we propose that this full potential will not be realized under the traditional SEAD measurement paradigm currently used in education.

PROBLEMS WITH THE CURRENT SEAD MEASUREMENT PARADIGM


If we want to systematically improve social, emotional, and academic development, we need to measure it. However, as stated above, there are many problems with the summative, individualistic, and accountability-oriented approach to measurement that is prevalent in today’s K–12 education system. When educators trained in that measurement paradigm have tried to support students’ social and emotional development, they have understandably borrowed from the dominant academic measurement playbook and set out to diagnose and remediate deficits in individual students’ social and emotional skills. This presents a number of problems.


SEAD Is Social and Contextual, Not Just Individual

When it comes to learning academic skills, it is important and feasible to assess whether individual students have mastered particular skills. However, the traditional measurement paradigm’s focus on measuring discrete skills in individual students belies the fundamentally social and context-dependent nature of SEAD. Measuring individual SEAD skills and competencies in isolation from the contexts and experiences in which they are cued or developed can provide educators with an inaccurate picture and imply a misleading course of action. For example, if a student was assessed as not demonstrating any leadership skills, it would be hard to disentangle a true deficit in leadership skills from an environment that provided no opportunities to demonstrate student leadership. Research is clear that specific experiences and conditions, over time, foster students’ academic engagement, learning, identity development, and prosocial behavior (National Research Council and the Institute of Medicine, 2004), but traditional SEAD measurement approaches do not assess the degree to which these developmental experiences and conditions are present. Furthermore, despite researchers’ best efforts, current SEAD measures still cannot reliably measure change in such competencies at the individual level (Duckworth & Yeager, 2015).


SOCIAL–EMOTIONAL EXPERIENCE DOES NOT GROW THE WAY ACADEMIC SKILLS DO


We argue here that academic achievement and social–emotional development are fundamentally different phenomena. Traditional academic competence grows systematically over time. The average fourth grader earns higher math scores than the average third grader, and we reasonably expect a student’s math knowledge (and hence their math scaled score) to increase every year. Taking a baseline measure and assessing growth over time makes sense when measuring math knowledge. It is akin to a parent marking a child’s height on a wall each birthday. In contrast, many social–emotional constructs don’t move in a one-way, linear direction. A student’s sense of belonging, feelings of relevance, and focused engagement wax and wane over the course of a day, a year, and a K–12 career. We don’t expect a student to demonstrate “more belonging” each year, so why would we apply a measurement paradigm built for linear growth to measure a student’s progress in belonging? Imagine putting pencil marks on the wall to measure your own engagement in work over time!


IF WE WANT TO UNDERSTAND STUDENTS’ LEARNING EXPERIENCES, WE HAVE TO ASK STUDENTS


When assessing students’ academic competencies, it is important to use an objective external criterion. One would get a much better sense of whether students know how to divide fractions by observing them divide fractions than by asking them to rate their own skill at dividing fractions. In contrast, students are the most qualified reporters of their subjective learning experiences.


We can learn much from various attempts to measure the quality of classrooms and teachers. Student surveys have been used to capture school and classroom conditions, and the high-quality versions of these surveys have been found to be more predictive of students’ learning outcomes than have observations by trained observers (Ferguson, 2012). This result is perhaps unsurprising, given that students log substantial hours in their classrooms and therefore are likely to be better judges of classroom quality than a third party on a short visit could ever be.


Survey measures have typically been used under the implicit assumption that there is some “objective” reality that is captured best by the average of students’ perceptions of a learning environment. This assumption denies the essential role of subjective experience in learning and development. In reality, there is not one average reality in a classroom of 30 students; there are actually 30 different realities. That “same” environment may be experienced profoundly differently by students who are positioned differently in a class or a society, and what matters for any given student’s learning and development is the reality he or she experiences—the meaning the student makes of his or her school and classroom experience. For example, when an anagram task was described in a study as a “diagnostic test” of verbal abilities rather than a “puzzle,” this seemingly neutral description activated negative cultural stereotypes and elicited stereotype threat in Black students (Steele & Aronson, 1995; for a review of stereotype threat research, see Schmader & Johns, 2003). That stereotype threat substantially reduced the cognitive performance of Black but not White students because this stereotype was only relevant to Black students. Efforts to measure and improve developmental environments should reflect the important reality that the same environment can have differential effects.  
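The point that a classroom average can mask divergent realities can be made concrete with a small sketch. The survey item, the group labels, and the numbers below are entirely hypothetical; the sketch simply shows why experience measures should be disaggregated by subgroup rather than reported only as a classroom mean.

```python
# Hypothetical data: 1-5 agreement with "I feel like I belong in this class."
# Group labels "X" and "Y" stand in for any social categories of interest.
from statistics import mean
from collections import defaultdict

responses = [
    {"group": "X", "belonging": 5}, {"group": "X", "belonging": 4},
    {"group": "Y", "belonging": 2}, {"group": "Y", "belonging": 1},
]

# The single classroom average looks unremarkable...
overall = mean(r["belonging"] for r in responses)
print(overall)  # -> 3

# ...but disaggregating reveals two very different experienced realities.
by_group = defaultdict(list)
for r in responses:
    by_group[r["group"]].append(r["belonging"])

for g, vals in sorted(by_group.items()):
    print(g, mean(vals))  # X 4.5, then Y 1.5
```

The same aggregation choice applies regardless of survey platform: reporting only the mean erases exactly the between-group differences that an equity-focused measure needs to surface.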


Of course, there are important limitations to even the best traditional survey measures. One mistake schools make in surveying students is not to use the data they collect in a way that is responsive to what students had to say. We wonder how often students experience “survey fatigue” because they are tired of expressing their opinions when it is clear to them that no one is listening. This is particularly likely with year-end survey data. Much like year-end test scores, they provide educators with information after the fact, too late to respond in any meaningful way. This problem with year-end test scores is exactly what gave rise to the use of academic EWIs.


Despite the challenges outlined above with respect to the measurement of social, emotional, and academic development, we believe that an alternative approach to SEAD measurement could sidestep many of these problems and play an essential role in the creation of developmentally supportive learning environments that allow students to thrive. We argue that such an approach would integrate important lessons from the methods used by proponents of academic EWIs. In the next section, we introduce some of the powerful advantages afforded by the use of academic EWIs in traditional educational contexts. Then we describe a new paradigm that integrates insights about the contextual nature of SEAD with the methodological advantages afforded by academic EWIs.



Figure 1. Healthy learning and development results from a causal cascade that starts with supportive learning environments. (For a concrete example of this cascade in action, see end of the section “Early Warning Indicators: Looking Upstream to Support Improvement.”)

EARLY WARNING INDICATORS: LOOKING UPSTREAM TO SUPPORT IMPROVEMENT


In this section, we focus on the powerful affordances of EWIs, and we consider how these affordances could be coupled with insights from SEAD research to better support educators in creating thriving developmental environments. EWIs solve a problem that is inherent to all summative outcomes (whether narrowly academic or more holistic): Summative outcomes are generally available too late to inform corrective action.


One antidote to this late availability of summative outcomes has been to leverage EWIs to look “upstream” in a developmental process, and, thereby, get early (as opposed to late) indication that a student or group of students is on a troubling trajectory and that a course adjustment is needed (see Figure 1). The growing interest in EWIs is justified by the prominent role they have played in successful school improvement efforts (Allensworth & Easton, 2005; Balfanz, 2009; Balfanz et al., 2007; Kurlaender et al., 2019).


It is helpful to consider EWIs in the context of the extensive literature on effective feedback, which shows that individuals’ skills improve more quickly when they receive feedback that is both specific and timely (Butler & Winne, 1995; Juwah et al., 2004; Orsmond et al., 2002; Shute, 2008). Intuitively, it is much easier to improve when one receives timely, specific feedback. For example, it would be hard to get better at playing the piano without the immediate feedback afforded by being able to hear when one has played an incorrect note. Nor would we expect a student to quickly gain skills as an essayist or scientist without feedback from instructors or peers.


It is unsurprising, therefore, that educators and educational institutions can improve more quickly when provided with the specific, timely feedback afforded by high-quality EWIs. For example, the University of Chicago Consortium on School Research developed the Ninth Grade On-Track Indicator to help Chicago Public Schools (CPS) elevate high school graduation rates, which was a top priority for CPS in the 2000s and 2010s (Allensworth, 2013; Allensworth & Easton, 2005, 2007).  


As described more extensively elsewhere (see Allensworth & Easton, 2005, 2007; Gwynne et al., 2012), the On-Track Indicator enabled school staff to identify ninth graders whose grades and attendance placed them on a trajectory that would reduce their likelihood of graduating on time three years later. This timely information helped school staff revise policies and target additional resources to the ninth graders who were off track (Allensworth, 2013), and contributed to a 24-percentage-point increase in graduation rates over a 10-year period (Pitcher et al., 2016).
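The logic of an indicator like On-Track can be sketched in a few lines. The rule below paraphrases the published CPS definition (roughly: at least five full-year course credits earned and no more than one semester F in a core course; see Allensworth & Easton, 2005); the field names, thresholds, and data are illustrative, not CPS's actual system.

```python
# Illustrative on-track-style indicator; thresholds approximate the
# published CPS ninth-grade definition but are assumptions here.

def is_on_track(credits_earned: float, core_semester_fs: int) -> bool:
    """Return True if a ninth grader meets both on-track criteria."""
    return credits_earned >= 5.0 and core_semester_fs <= 1

students = [
    {"id": "A", "credits_earned": 6.0, "core_semester_fs": 0},
    {"id": "B", "credits_earned": 5.5, "core_semester_fs": 2},
    {"id": "C", "credits_earned": 4.0, "core_semester_fs": 1},
]

# Flag students for early intervention while course correction is possible.
off_track = [s["id"] for s in students
             if not is_on_track(s["credits_earned"], s["core_semester_fs"])]
print(off_track)  # -> ['B', 'C']
```

The value of such an indicator lies less in the computation, which is trivial, than in its timing: it converts data already collected during ninth grade into a signal that arrives years before the graduation outcome it forecasts.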


Similarly, the Talent Development model uses indicators of school-level performance and capitalizes on early and strategic intervention at specific grade levels (Kemple et al., 2005). In Philadelphia, this approach led to improvements in attendance, academic course credits earned, and rates of promotion to the next grade (Balfanz et al., 2006; Herlihy & Kemple, 2005; Kemple et al., 2005; Mac Iver, 2004). Early indicators have also been used to great effect in San Jose’s schools (Kless, 2013; Legters & Clark, 2015), and, more recently, in Boston Public Schools (Balfanz & Byrnes, 2019).


As the examples above and in the rest of this volume illustrate, EWIs can be a powerful tool for improving student outcomes and educational practice. These indicators can provide timely and specific information that enables educators to recognize and address problems at earlier stages than otherwise possible. However, the prevalent uses of EWIs in education have important limitations when seen through the lens of integrated social, emotional, and academic development.


As described in the sections above, one of the problems with a narrow focus on summative academic outcomes, like test scores, is that those measures are anchored to a misleadingly narrow understanding of academic development—an understanding that is divorced from academic development’s social and emotional context.


The most widely used leading indicators tend to register deficits in individual students’ skills and behaviors, rather than the deficits in the developmental contexts in which those students are learning. For example, existing EWIs may reveal that students are skipping class but not that the class being skipped has a toxic climate that is driving students away (Allensworth & Easton, 2005, 2007). Although EWIs that measure students’ skills and behaviors have proven useful in a number of contexts, one of their shortcomings is that such usage makes the things that students are “doing wrong” more salient than the things the system is doing wrong. This can reinforce a deficit-based narrative that blames students—or students’ families or communities—for poor developmental outcomes instead of putting the focus on the systemic reforms needed to create positive developmental contexts for all students.


An even more fundamental limitation with using students’ skills and behaviors as EWIs is that doing so reveals the results, rather than the causes, of social, emotional, and academic development. Consider how a concrete example maps to the causal cascade presented in Figure 1. Say that a student is starting to skip class because of its toxic learning climate. A traditional EWI, like On-Track, would support “early intervention” by enabling educators to recognize that a student’s absences (“student’s behavior” in Figure 1, box 3) put them at elevated risk for dropout (“Student Learning & Development” in Figure 1, box 4). This is certainly “earlier” than only recognizing the problem once a student has dropped out. However, an integrated understanding of SEAD affords the opportunity to look even earlier in the causal cascade.


What if educators could measure students’ experiences (Figure 1, box 2) instead of the behaviors and learning outcomes that result from those experiences (Figure 1, boxes 3–4)? For example, if educators could recognize that students experience a particular learning environment as toxic, they could intervene to improve that environment before students become so disengaged that they start to skip class. This is the opportunity we explore below.

A NEW CONCEPTUAL FRAMEWORK FOR SEAD MEASUREMENT AND IMPROVEMENT


Over the course of this article, we have made two complementary, overarching arguments. First, we argued that students’ subjective experiences of their learning environments powerfully support—or undermine—their development into avid learners, healthy human beings, productive adults, and engaged citizens. As a corollary, educators and schools should strive to establish supportive and equitable learning environments that afford such positive developmental experiences to all young people. Second, we argued that systematic efforts to improve learning environments so that they afford those experiences call for a different approach to measurement. Whereas schools typically measure the skills and competencies of individual students against external criteria, systematic improvement efforts should look “upstream” and measure the characteristics of learning environments, not just the competencies of the individual students learning in them. Finally, in considering how to measure the excellence or equity of a learning environment, schools should recognize that learning environments are developmentally supportive or toxic not by virtue of some external criterion, but by virtue of the subjective experiences they elicit in different learners.


This framework implies that educators should (1) regularly assess the extent to which different students are having developmentally conducive learning experiences, and (2) actively experiment to identify practices that most effectively and equitably enhance their own students’ experiences along developmentally important dimensions.


In this final section, we explore some of the concrete issues involved in operationalizing this approach. To apply this framework, educators need a practical way to measure developmentally important learning experiences, analytical support to interpret the resulting data, and actionable strategies for improving students’ experiences. They must also be psychologically prepared to invite and productively respond to constructive feedback from their students. Recognizing that these hurdles could be significant barriers to broad adoption of this new paradigm, our teams have been working since 2017 to develop a web-based tool called Copilot.


We designed Copilot to help educators systematically learn about and improve their students’ learning experiences along developmentally important dimensions. Educators use Copilot to periodically assess students’ learning experiences (via student surveys); to learn principles and strategies they can leverage to improve those experiences (through web-based learning modules); and to assess whether the practice changes they implement meaningfully improve students’ experiences (by collecting additional rounds of student feedback after implementing a new practice).


In this section, we describe guiding principles (see Table 1) that emerged as we developed and adapted Copilot in response to feedback from educators who were using it—as well as educators who were considering whether to use it. This development and adaptation process can best be understood as a user-centered design process that was intended to result in an impactful tool, rather than a formal research project. We did not make systematic efforts to formally document how implementation differed at different sites or types of sites. Instead, we solicited ongoing input from the teachers, administrators, program staff, and education school faculty who were using Copilot so that we could adapt it to better serve their efforts to improve students’ learning experiences. Over the first three years of this process, we partnered with urban and suburban middle school and high school teachers and administrators in five states, with a national nonprofit that supports new teachers, and with a graduate school of education. Generally, educators used Copilot periodically over a semester or school year, and many resumed use in the next school year.


Our examples are rooted in the context of our work on the Copilot platform only for the sake of brevity and coherence. Our experiences advising schools and other educational organizations outside of the Copilot context affirm that these guiding principles are relevant to many attempts to improve student experience systematically—not just those that leverage a particular software platform.


Table 1. Supporting Educators to Systematically Improve Students’ Learning Experiences. Guiding Principles:


Measure malleable experiences that predict developmental outcomes.

Provide clear guidance for improvement.

Scaffold experimentation in the service of adaptation.

Focus on improvement and avoid blame.

Ensure students feel heard.


MEASURE MALLEABLE EXPERIENCES THAT PREDICT DEVELOPMENTAL OUTCOMES


Certain experiences are more important than others for healthy development. To decide which classroom experiences we should help educators measure, we consulted an extensive literature (for reviews, see Dweck et al., 2011; Farrington et al., 2012; National Research Council and the Institute of Medicine, 2004; Ryan & Deci, 2000). We sought to prioritize the measurement of experiences that reliably support academic motivation, engagement, and success, and we aimed to select experiences that teachers could readily influence. As a result, we prioritized the following three classroom experiences during the first two years of piloting (additional experience measures are currently being piloted):


Teacher Caring: Whether or not students feel cared for by teachers, and the overall quality of student–teacher relationships, can have profound effects on students’ engagement and investment in learning (e.g., Noddings, 2012; Sakiz et al., 2012; Velasquez et al., 2013; Wentzel, 1997).


Feedback for Growth: Students need supportive feedback that helps them recognize their own potential to grow and succeed (e.g., Cohen et al., 1999; Hattie & Clarke, 2018; Yeager et al., 2014a).


Meaningful Work: Students are more likely to invest in and engage with schoolwork when they see how it connects to their personal aspirations and lives outside of school (e.g., Hulleman & Harackiewicz, 2009; Paunesku et al., 2015; Yeager et al., 2014b).


PROVIDE CLEAR GUIDANCE FOR IMPROVEMENT


Most educators are not yet experts in social, emotional, and academic development, and they often want support to interpret and act on the results of student experience measures (Hough et al., 2017). We strove to provide such support by


using intuitive language when introducing the developmental importance of specific subjective experiences;


providing transparent feedback concerning what could be improved—e.g., teachers are provided with reports that show what percentage of students agreed with specific statements about their experiences in that class (see Figure 2.A);


immediately pointing educators to relevant best practices when results revealed opportunities to improve (see Figure 2.B); and


scaffolding the process of reflecting on data and planning improvements, e.g., through practice journals (see Figure 3).




Figure 2. (A) A run chart shows teachers what percent of their students agree with specific statements about their class. (B) A library of best practices points teachers to research and strategies pertinent to instantiating specific learning experiences (see equitablelearning.org).




Figure 3. Screenshot from a Copilot practice journal that prompts teachers to reflect on students’ feedback and plan practice changes in response
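The percent-agreement summaries behind a run chart like the one in Figure 2.A reduce to simple arithmetic. The sketch below is our own illustration, assuming 1–5 Likert-style responses in which a rating of 4 or 5 counts as agreement; the item wording and coding are hypothetical, not Copilot’s actual survey items.

```python
# Minimal sketch of the percent-agreement summary behind a run chart.
# Responses are hypothetical 1-5 Likert ratings; 4 or 5 counts as "agree".

def percent_agree(responses: list[int], agree_at: int = 4) -> float:
    """Percentage of students who agreed with a survey statement."""
    if not responses:
        return 0.0
    return 100.0 * sum(r >= agree_at for r in responses) / len(responses)

# Two hypothetical survey rounds for the same (invented) item:
item = "My teacher gives me feedback that helps me grow"
cycle_1 = [2, 3, 4, 5, 3, 2]
cycle_2 = [4, 3, 4, 5, 4, 5]

print(item)
print(f"  cycle 1: {percent_agree(cycle_1):.0f}% agree")   # 33% agree
print(f"  cycle 2: {percent_agree(cycle_2):.0f}% agree")   # 83% agree
```

Plotting these percentages by survey round, per item, yields the run chart a teacher would review between improvement cycles.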


SCAFFOLD EXPERIMENTATION IN THE SERVICE OF ADAPTATION


Without a doubt, certain best practices can help teachers establish a better learning environment (Jones & Kahn, 2017; National Research Council and the Institute of Medicine, 2004). However, students and learning contexts are highly variable, and so are the specific ways that educators elect to implement best practices. Because of this variability in context and implementation, high-impact educational practices that worked in one context all too often prove ineffective in new contexts or in the hands of new practitioners (see Bryk et al., 2015; Schneider, 2014; Yeager & Walton, 2011). Continuous improvement methods afford a powerful methodology for overcoming this perennial implementation challenge: They recommend that practitioners measure the local impact of a practice so that they can recognize whether the practice should be adopted, adapted, or abandoned (Bryk et al., 2015).


The Copilot platform scaffolds this approach by enabling educators to (1) define multiple, iterative cycles in which they will work to enhance specific, developmentally important learning experiences; (2) learn and select from multiple best practices relevant to enhancing targeted experiences; and (3) easily measure whether the targeted student experiences are improving over time as new practices are implemented (see Figure 2.A). In doing so, this approach recognizes that educators usually need to adapt implementation to their own students, and it strives to support educators to experiment with those adaptations until they find those that work for them and for their students.
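The adopt-adapt-abandon logic of a continuous improvement cycle can be sketched as a comparison of percent agreement across survey rounds. The decision thresholds below are illustrative assumptions of ours, not Copilot’s actual rules; real decisions would also weigh context, sample size, and the educator’s own judgment.

```python
# Illustrative adopt/adapt/abandon decision after one improvement cycle.
# Thresholds are hypothetical; percentages are percent agreement on a
# targeted experience measure before and after a practice change.

def cycle_decision(before_pct: float, after_pct: float,
                   gain_to_adopt: float = 10.0) -> str:
    """Compare percent agreement before and after a practice change."""
    change = after_pct - before_pct
    if change >= gain_to_adopt:
        return "adopt"    # clear improvement: keep the practice
    if change > 0:
        return "adapt"    # modest improvement: tweak implementation, remeasure
    return "abandon"      # no improvement: try a different practice

assert cycle_decision(before_pct=40.0, after_pct=65.0) == "adopt"
assert cycle_decision(before_pct=40.0, after_pct=45.0) == "adapt"
assert cycle_decision(before_pct=40.0, after_pct=38.0) == "abandon"
```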


FOCUS ON IMPROVEMENT AND AVOID BLAME


The barriers to educators collecting and acting on student experience data are not just conceptual and technical. They are also psychological. When individuals feel afraid or attacked, it is harder for them to problem-solve creatively; they are also more likely to reject the offending information (Baas et al., 2008; Brehm, 1966). Unfortunately, many teachers are accustomed to measures being used for punitive accountability purposes, which can make them hesitant to collect data that could reveal their own areas for improvement. Recognizing this professional context, we took multiple steps to reduce teachers’ concerns about the possibility that their students’ feedback might be used unfairly against them. For example, only teachers can access their own results unless they choose to share them. Copilot learning modules also explicitly explain that the student feedback can be used for improvement but not for accountability purposes:


Copilot reports are designed to help teachers learn how to support their students more effectively. They should not be used to evaluate or compare between teachers . . . because students enter the classroom with mindsets and expectations that have been colored by their prior experiences. For example, students who have been mistreated by their teachers in the past may come into a new classroom expecting their new teacher to also treat them poorly. That expectation could lead them to rate their experiences more negatively than they otherwise would. . . . It would be unfair to penalize a teacher for a student’s previous experiences, but it would also be unfair to that student not to work hard to create more positive experiences that would inspire them to engage and learn. (Paunesku & Diaz de Lewis, 2019)


ENSURE STUDENTS FEEL HEARD


Educators increasingly survey their students, but the experience of sharing one’s opinion is not necessarily the experience of feeling heard. In the course of developing and iterating on the Copilot platform, we heard from some teachers that their students hated filling out the surveys; meanwhile, other teachers gushed that their students loved filling them out. When we investigated this puzzling bifurcation, its cause quickly became clear. Students tended to appreciate the survey when their teachers introduced it by explaining that they wanted students’ feedback so that they could improve as teachers—and when they explained what they would do differently in response to the feedback. In doing so, teachers modeled a growth mindset and made students feel that their opinions mattered. In the words of one student whose teacher was using Copilot: “I think she cares what I think because she is asking us to take this survey about her teaching. If she didn’t care what I had to say, she wouldn’t be asking us to do this survey.” In contrast, when teachers simply demanded that students complete surveys while giving no indication that their responses were valued—or that they would affect the teacher’s practices—their students understandably grew frustrated. This latter approach is coercive, and it engenders resistance because it violates students’ sense of autonomy and control (Turner, 2005).

DISCUSSION AND FUTURE WORK


In this article, we have put forward a conceptual framework that integrates innovations from the EWI movement and insights from research on social, emotional, and academic development. Our framework starts by recognizing that students are more likely to develop into avid learners, healthy human beings, productive adults, and engaged citizens when their learning environments engender specific, developmentally conducive experiences. Seen through that lens, the role of schools and educators should center on ensuring that all students are afforded those experiences.


We argue that schools will only fulfill this fundamental mission—to provide essential developmental experiences to all students effectively and equitably—if they pay attention to students’ experiences and leverage the tools of continuous improvement to identify and fine-tune the practices that engender such experiences for their own students. Schools generally pay too much attention to the downstream outcomes that result from students’ ongoing exposure to certain experiences and not enough attention to the quality of the experiences themselves. Shifting the focus of measurement upstream to include experiences—and not just outcomes—would afford more timely and specific feedback about what needs to change in learning environments, and it would clarify how educators can proactively change schools and educational practices to better support students. Finally, we point out that many of the most objectively important developmental experiences are actually subjective experiences. Students’ interpretations of what happened matter as much as, or more than, what “objectively” transpired. To measure those subjective experiences, schools must ask students themselves about the nature of their experiences.


Last but not least, we provided guiding principles for the application of this framework, and concrete examples of their operationalization through our work on the Copilot platform—an online tool that we created to help educators adopt the approach we are advocating. Even though our attempts to apply our framework in the field are still in their infancy, we have been encouraged by the strides educators have made to improve their students’ learning experiences when equipped with these evolving tools and guidance. With the support of the Copilot software, hundreds of teachers assessed whether their students were having developmentally conducive learning experiences, and they experimented with new practices in order to identify practices that would enhance their students’ experiences along developmentally important dimensions. The teachers who did so overwhelmingly improved the quality of their students’ classroom experiences (see Gripshover & Paunesku, 2019).


Whereas prior research has consistently observed that adolescents’ academic experiences and motivation decline over time (Alspaugh, 1998; Eccles et al., 1991; Busteed, 2013), over 90% of the teachers who used Copilot to collect multiple rounds of student feedback successfully improved one or more of the student experiences they measured (see Gripshover & Paunesku, 2019). Given that those experiences are important predictors of motivational, academic, and developmental outcomes, it seems reasonable to assume that those teachers’ efforts will help their students become more academically motivated and successful than they otherwise would have been. For example, students who rated the measured experiences positively in a pilot study were 30% more likely to earn an A or B in the positively rated class (see Gripshover & Paunesku, 2019). Those benefits were pronounced for students of color; Black males who experienced positive learning conditions in a class were almost two times more likely to earn an A or B than those who did not.


In the years ahead, we hope to build on this promising approach with a growing community of like-minded educators, researchers, and educational advocates. For example, in the coming school year our two organizations—PERTS and the UChicago Consortium—are working with the National Equity Project to support a network of Midwestern school districts. Schools in these districts will start to pilot student experience measures and best practice recommendations with the goal of enabling educators to quickly recognize and mitigate racial disparities in the learning experiences afforded to different groups of students.


The importance of such interdisciplinary partnerships is hard to overstate: they represent a crucial route to building, testing, and spreading the use of student experience measures and practice recommendations. To date, we have enabled educators to rapidly measure only a small handful of the many experiences that affect social, emotional, and academic development. Furthermore, although a number of effective practices are already documented, we have little idea which practices are most relevant to various students in various settings, or how best to train educators to implement those practices. Those are critical questions that must be answered before the full potential of this promising approach can be realized. Yet, despite the hard work ahead, we cannot help but be inspired by the rapid improvements that teachers made when—for the first time—they were afforded a practical way to systematically elicit and respond to meaningful feedback from their students.


References

 

Allensworth, E. M. (2013). The use of ninth-grade early warning indicators to improve Chicago schools. Journal of Education for Students Placed at Risk (JESPAR), 18(1), 68–83.

 

Allensworth, E. M., & Easton, J. Q. (2005). The on-track indicator as a predictor of high school graduation. UChicago Consortium on Chicago School Research. https://consortium.uchicago.edu/sites/default/files/2018-10/p78.pdf

 

Allensworth, E. M., & Easton, J. Q. (2007). What matters for staying on-track and graduating in Chicago public high schools: A close look at course grades, failures, and attendance in the freshman year. UChicago Consortium on Chicago School Research. https://consortium.uchicago.edu/sites/default/files/2018-10/07%20What%20Matters%20Final.pdf

 

Alspaugh, J. W. (1998). Achievement loss associated with the transition to middle school and high school. The Journal of Educational Research, 92(1), 20–25.

 

Askew, S. (Ed.). (2004). Feedback for learning. Routledge.

 

Au, W. (2007). High-stakes testing and curricular control: A qualitative metasynthesis. Educational Researcher, 36(5), 258–267.

 

Balfanz, R. (2009). Can the American high school become an avenue of advancement for all? The Future of Children, 19(1), 17–36.

 

Balfanz, R., & Byrnes, V. (2019). College, career and life readiness: A look at high school indicators of post-secondary outcomes in Boston. ERIC. https://files.eric.ed.gov/fulltext/ED596471.pdf

 

Balfanz, R., Herzog, L., & Mac Iver, D. J. (2007). Preventing student disengagement and keeping students on the graduation path in urban middle-grades schools: Early identification and effective interventions. Educational Psychologist, 42(4), 223–235.

 

Balfanz, R., Mac Iver, D. J., & Byrnes, V. (2006). The implementation and impact of evidence-based mathematics reforms in high-poverty middle schools: A multi-site, multi-year study. Journal for Research in Mathematics Education, 37(1), 33–64.

 

Baas, M., De Dreu, C. K., & Nijstad, B. A. (2008). A meta-analysis of 25 years of mood-creativity research: Hedonic tone, activation, or regulatory focus? Psychological Bulletin, 134(6), 779–806.

 

Berliner, D. (2011). Rational responses to high stakes testing: The case of curriculum narrowing and the harm that follows. Cambridge Journal of Education, 41(3), 287–302.

 

Blackwell, L. S., Trzesniewski, K. H., & Dweck, C. S. (2007). Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention. Child Development, 78(1), 246–263.

 

Brehm, J. W. (1966). A theory of psychological reactance. Academic Press.

 

Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America’s schools can get better at getting better. Harvard Education Press.

 

Busteed, B. (2013, January 7). The school cliff: Student engagement drops with each school year. Gallup. https://news.gallup.com/opinion/gallup/170525/school-cliff-student-engagement-drops-school-year.aspx

 

Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245–281.

 

Cantor, P., Osher, D., Berg, J., Steyer, L., & Rose, T. (2019). Malleability, plasticity, and individuality: How children learn and develop in context. Applied Developmental Science, 23(4), 307–337.

 

Cohen, G. L., & Steele, C. M. (2002). A barrier of mistrust: How negative stereotypes affect cross-race mentoring. In J. Aronson (Ed.), Improving academic achievement: Impact of psychological factors on education (pp. 303–327). Academic Press.

 

Cohen, G. L., Steele, C. M., & Ross, L. D. (1999). The mentor’s dilemma: Providing critical feedback across the racial divide. Personality and Social Psychology Bulletin, 25(10), 1302–1318.

 

Collaborative for Academic, Social, and Emotional Learning. (2015). 2015 CASEL Guide: Effective social and emotional learning programs, middle and high school edition.

 

Darling-Hammond, L., Flook, L., Cook-Harvey, C., Barron, B., & Osher, D. (2019). Implications for educational practice of the science of learning and development. Applied Developmental Science. https://doi.org/10.1080/10888691.2018.1537791

 

Duckworth, A. L., & Yeager, D. S. (2015). Measurement matters: Assessing personal qualities other than cognitive ability for educational purposes. Educational Researcher, 44(4), 237–251.

 

Dweck, C. S., Walton, G. M., & Cohen, G. L. (2014). Academic tenacity: Mindsets and skills that promote long-term learning. Bill & Melinda Gates Foundation.

 

Dworkin, A. G. (2009). Teacher burnout and teacher resilience: Assessing the impacts of the school accountability movement. In L. J. Saha & A. G. Dworkin (Eds.), International handbook of research on teachers and teaching (pp. 491–502). Springer.

 

Eccles, J. S., Lord, S., & Midgley, C. (1991). What are we doing to early adolescents? The impact of educational contexts on early adolescents. American Journal of Education, 99(4), 521–542.

 

Farrington, C. A., Porter, S., & Klugman, J. (2019). Do classroom environments matter for noncognitive aspects of student performance and students’ course grades? (Working paper). UChicago Consortium on School Research.

 

Farrington, C. A., Roderick, M., Allensworth, E., Nagaoka, J., Keyes, T. S., Johnson, D. W., & Beechum, N. O. (2012). Teaching adolescents to become learners: The role of noncognitive factors in shaping school performance—A critical literature review. UChicago Consortium on Chicago School Research.

 

Ferguson, R. F. (2012). Can student surveys measure teaching quality? Phi Delta Kappan, 94(3), 24–28.

 

Friedman, I. A. (2000). Burnout in teachers: Shattered dreams of impeccable professional performance. Journal of Clinical Psychology, 56(5), 595–606.

 

Gripshover, S., & Paunesku, D. (2019). How can schools support academic success while fostering healthy social and emotional development? PERTS, Stanford University. http://perts.net/optimal-conditions

 

Gwynne, J., Pareja, A. S., Ehrlich, S. B., & Allensworth, E. M. (2012). What matters for staying on-track and graduating in Chicago public schools: A focus on English language learners. UChicago Consortium on Chicago School Research. https://consortium.uchicago.edu/sites/default/files/2018-10/ELL%20Report.pdf

 

Hargreaves, A., & Braun, H. (2013). Data-driven improvement and accountability. National Education Policy Center. http://nepc.colorado.edu/publication/data-driven-improvement-accountability/

 

Hattie, J., & Clarke, S. (2018). Visible Learning: Feedback. Routledge.

 

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.

 

Herlihy, C. M., & Kemple, J. (2005). The Talent Development Middle School model: Impacts through the 2002–2003 school year. An update to the December 2004 report. MDRC.

 

Hough, H., Kalogrides, D., & Loeb, S. (2017). Using surveys of students' social–emotional learning and school climate for accountability and continuous improvement. Policy Analysis for California Education (PACE).

 

Hulleman, C. S., & Harackiewicz, J. M. (2009). Promoting interest and performance in high school science classes. Science, 326(5958), 1410–1412.

 

Immordino-Yang, M. H., Darling-Hammond, L., & Krone, C. (2018). The brain basis for integrated social, emotional, and academic development: How emotions and social relationships drive learning. National Commission on Social, Emotional, and Academic Development. The Aspen Institute.

 

Jones, S. M., & Bouffard, S. M. (2012). Social and emotional learning in schools: From programs to strategies and commentaries. Social Policy Report, 26(4), 1–33.

 

Jones, S. M., & Kahn, J. (2017). The evidence base for how we learn: Supporting students’ social, emotional, and academic development. Consensus statements of evidence from the Council of Distinguished Scientists, National Commission on Social, Emotional, and Academic Development. The Aspen Institute.

 

Juwah, C., Macfarlane-Dick, D., Matthew, B., Nicol, D., Ross, D., & Smith, B. (2004). Enhancing student learning through effective formative feedback. The Higher Education Academy. https://www.heacademy.ac.uk/sites/default/files/resources/id353_senlef_guide.pdf

 

Karp, D. R., & Breslin, B. (2001). Restorative justice in school communities. Youth & Society, 33(2), 249–272.

 

Kemple, J. J., Herlihy, C. M., & Smith, T. J. (2005). Making progress toward graduation: Evidence from the talent development high school model. MDRC. https://www.mdrc.org/sites/default/files/full_432.pdf

 

Kless, L. (2013). San Jose Unified School District, 2010-2013: Building a culture of evidence-based practice around college readiness. Voices in Urban Education, 38, 23–27.

 

Kurlaender, M., Reed, S., & Hurtt, A. (2019). Improving college readiness: A research summary and implications for practice. PACE. https://edpolicyinca.org/sites/default/files/R_Kurlaender_Aug19.pdf

 

Legters, N., & Clark, E. (2015). Sharing accountability and results: How San Jose Unified School District is taking its College Readiness Indicator System (CRIS) to scale and transforming leadership culture on the way. Everyone Graduates Center. https://educationnorthwest.org/sites/default/files/sharing-accountability-results-sjusd.pdf


Love, B. L. (2019). We want to do more than survive: Abolitionist teaching and the pursuit of educational freedom. Beacon Press.

 

Mac Iver, M. A. (2004). Systemic supports for comprehensive school reform: The institutionalization of Direct Instruction in an urban school system. Journal of Education for Students Placed at Risk (JESPAR), 9(3), 303–321.

 

Mathison, S., & Freeman, M. (2006). Teacher stress and high stakes testing. In R. G. Lambert & C. J. McCarthy (Eds.), Understanding teacher stress in an age of accountability (pp. 43–63). IAP.

 

McMurrer, J. (2007). Choices, changes, and challenges: Curriculum and instruction in the NCLB era. Center on Education Policy.

 

Mehta, J. (2015). The allure of order: High hopes, dashed expectations, and the troubled quest to remake American schooling. Oxford University Press.

 

Nagaoka, J., Farrington, C. A., Ehrlich, S. B., & Heath, R. (2015). Foundations for young adult success: A developmental framework. UChicago Consortium on Chicago School Research.

 

National Academies of Sciences, Engineering, and Medicine. (2018). How people learn II: Learners, contexts, and cultures. The National Academies Press. https://doi.org/10.17226/24783

 

National Research Council and the Institute of Medicine. (2004). Engaging schools: Fostering high school students’ motivation to learn. The National Academies Press.

 

Noddings, N. (2012). The caring relation in teaching. Oxford Review of Education, 38(6), 771–781.

 

Orsmond, P., Merry, S., & Reiling, K. (2002). The use of exemplars and formative feedback when using student derived marking criteria in peer and self-assessment. Assessment & Evaluation in Higher Education, 27(4), 309–323.

 

Osher, D., Cantor, P., Berg, J., Steyer, L., & Rose, T. (2018). Drivers of human development: How relationships and context shape learning and development. Applied Developmental Science. https://doi.org/10.1080/10888691.2017.1398650

 

Paunesku, D., & Diaz de Lewis, L. (2019). Copilot-Elevate [Computer software]. http://perts.net/elevate

 

Paunesku, D., Walton, G. M., Romero, C. L., Smith, E. N., Yeager, D. S., & Dweck, C. S. (2015). Mindset interventions are a scalable treatment for academic underachievement. Psychological Science, 26(6), 784–793.

 

Pitcher, M. A., Duncan, S. J., Nagaoka, J., Moeller, E., Dickerson, L., & Beechum, N. O. (2016). A capacity-building model for school improvement. University of Chicago, Network for College Success. https://ncs.uchicago.edu/page/capacity-building-model-school-improvement

 

President’s Committee on the Arts and the Humanities. (2011). Reinvesting in arts education: Winning America’s future through creative schools.

 

Quay, L., & Romero, C. (2015). What we know about learning mindsets from scientific research. Mindset Scholars Network. http://mindsetscholarsnetwork.org/wp-content/uploads/2015/09/What-We-Know-About-Learning-Mindsets.pdf

 

Rimm-Kaufman, S. E., & Hulleman, C. S. (2015). SEL in elementary school settings: Identifying mechanisms that matter. In J. A. Durlak, C. E. Domitrovich, R. P. Weissberg, & T. P. Gullotta (Eds.), The handbook of social and emotional learning (pp. 151–166). Guilford Press.

Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55, 68–78.

Sakiz, G., Pape, S. J., & Hoy, A. W. (2012). Does perceived teacher affective support matter for middle school students in mathematics classrooms? Journal of School Psychology, 50(2), 235–255.

 

Schmader, T., & Johns, M. (2003). Converging evidence that stereotype threat reduces working memory capacity. Journal of Personality and Social Psychology, 85(3), 440–452.

 

Schneider, J. (2014). From the Ivory Tower to the schoolhouse: How scholarship becomes common knowledge in education. Harvard Education Press.

 

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.

 

Steele, C. M. (1997). A threat in the air: How stereotypes shape intellectual identity and performance. American Psychologist, 52, 613–629.

 

Steele, C. M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of Personality and Social Psychology, 69(5), 797–811.

 

Turner, J. C. (2005). Explaining the nature of power: A three-process theory. European Journal of Social Psychology, 35(1), 1–22.

 

Valenzuela, A. (1999). Subtractive schooling: U.S.-Mexican youth and the politics of caring. State University of New York Press.

 

Valli, L., & Buese, D. (2007). The changing roles of teachers in an era of high-stakes accountability. American Educational Research Journal, 44(3), 519–558.

 

Velasquez, A., West, R., Graham, C., & Osguthorpe, R. (2013). Developing caring relationships in schools: A review of the research on caring and nurturing pedagogies. Review of Education, 1(2), 162–190.

 

Walton, G. M., & Cohen, G. L. (2011). A brief social-belonging intervention improves academic and health outcomes among minority students. Science, 331, 1447–1451.

 

Webb, P. T. (2005). The anatomy of accountability. Journal of Education Policy, 20(2), 189–208.

 

Weissberg, R. P., & O’Brien, M. U. (2004). What works in school-based social and emotional learning programs for positive youth development. The Annals of the American Academy of Political and Social Science, 591(1), 86–97.

 

Wentzel, K. R. (1997). Student motivation in middle school: The role of perceived pedagogical caring. Journal of Educational Psychology, 89(3), 411–419.

 

Yeager, D. S., Hanselman, P., Walton, G. M., Murray, J., Crosnoe, R., Muller, C., Tipton, E., Schneider, B., Hulleman, C. S., Hinojosa, C. P., Paunesku, D., Romero, C., Flint, K., Roberts, A., Trott, J., Iachan, R., Buontempo, J., Hooper, S. Y., Carvalho, C., . . . Dweck, C. S. (2019). A national experiment reveals where a growth mindset improves achievement. Nature, 573, 364–369.

 

Yeager, D. S., Paunesku, D., D’Mello, S., Spitzer, B. J., & Duckworth, A. L. (2014a). Boring but important: A self-transcendent purpose for learning fosters academic self-regulation. Journal of Personality and Social Psychology, 107(4), 559–580.

 

Yeager, D. S., Purdie-Vaughns, V., Garcia, J., Apfel, N., Brzustoski, P., Master, A., Hessert, W. T., Williams, M. E., & Cohen, G. L. (2014b). Breaking the cycle of mistrust: Wise interventions to provide critical feedback across the racial divide. Journal of Experimental Psychology, 143(2), 804–824.

 

Yeager, D. S., & Walton, G. M. (2011). Social-psychological interventions in education: They’re not magic. Review of Educational Research, 81(2), 267–301.

                    

        

        

             








Cite This Article as: Teachers College Record, Volume 122, Number 14, 2020, pp. 1–26. https://www.tcrecord.org


About the Authors

  • David Paunesku
    Stanford University
    DAVID PAUNESKU, Ph.D., is the executive director of PERTS, a center he cofounded at Stanford University. PERTS helps educators apply insights from the psychological sciences to improve students’ educational experiences and outcomes. He coauthored “A Brief Intervention to Encourage Empathic Discipline Cuts Suspension Rates in Half Among Adolescents” (Proceedings of the National Academy of Sciences).

  • Camille A. Farrington
    University of Chicago Consortium on School Research
    CAMILLE A. FARRINGTON, Ph.D., is a managing director and senior research associate at the UChicago Consortium on School Research. Her work focuses on understanding how learning environments provide opportunities for positive developmental experiences for students and how learning settings shape students’ beliefs, behaviors, performance, and development. She coauthored Foundations for Young Adult Success: A Developmental Framework (University of Chicago Consortium on Chicago School Research).