
The Social Psychology of Homophily: The Collective Sentiments of Education Advocacy Groups

by Jonathan A. Supovitz, Christian Kolouch & Alan J. Daly - 2020

Background/Context: As a major area of civic decision making, public education is a central arena for advocacy groups seeking to influence policy debates. An emerging body of research examines advocates’ use of social media. While debates about policy can be thought of as a clash of large ideas contained within frames, cognitive linguists note that framing strategies are activated by the particular words that advocates choose to convey their positions.

Purpose/Objective/Research Question/Focus of Study: This study examined the vociferous debate surrounding the Common Core State Standards on Twitter during the height of state adoption in 2014 and 2015. Combining social network analysis and natural language processing techniques, we first identified the organically forming factions within the Common Core debate on Twitter and then captured the collective psychological sentiments of these factions.

Research Design: The study employed quantitative statistical comparisons of the frequency of words used by members of different factions around the Common Core on Twitter that are associated in prior research with four psychological characteristics: mood, motivation, conviction, and thinking style.

Data Collection and Analysis: Tweets containing at least one of three hashtags (#commoncore, #ccss, or #stopcommoncore) were downloaded from Twitter from November 2014 to October 2015. The resulting data set consisted of more than 500,000 tweets and retweets from more than 100,000 distinct actors. We then ran a community detection algorithm to identify the structural subcommunities, or factions. To measure the four psychological characteristics, we adapted Pennebaker and colleagues’ Linguistic Inquiry and Word Count libraries. We then connected the individual tweet authors to their faction based on the results of the social network analysis community detection algorithm. Using these groups, and the standardized results for each psychological characteristic/dimension, we performed a series of analyses of variance with Bonferroni corrections to test for differences in the psychological characteristics among the factions.

Findings/Results: For each of the four psychological characteristics, we found different patterns among the different factions. Educators opposed to the Common Core had the highest level of drive motivation, use of sad words, and use of words associated with a narrative thinking style. Opponents of the Common Core from outside education exhibited an affiliative drive motivation, a narrative thinking style, high levels of anger words, and low levels of conviction in their choice of language. Supporters of the Common Core used words that represented a more analytic thinking style, stronger levels of conviction, and words associated with a higher level of achievement orientation.

Conclusions/Recommendations: Individuals on Twitter, mostly strangers to each other, band together to form fluid communities as they share positions on particular issues. On Twitter, these bonds are formed by behavioral choices to follow, retweet, and mention others. This study reveals how like-minded individuals create a collective sentiment through their specific choice of words to express their views. By analyzing the underlying psychological characteristics associated with language, we show the distinct pooled psychologies of activists as they engaged together in political activity in an effort to influence the political environment.

In democratic systems, advocacy groups play central roles in shaping the conditions for policy making. Political scientists define advocacy organizations as interest groups that are independent of formal governing institutions and attempt to influence the actions of government (Burstein, 1998; Walker, 1991). Andrews and Edwards (2004) add that advocacy groups are those who “make public interest claims either promoting or resisting social change that, if implemented, would conflict with the social, cultural, political, or economic interests or values of society” (p. 481). Thus, advocacy groups represent different social factions that seek to bend the direction of public policy toward their beliefs and interests.

As a major area of public decision making that reflects social values, public education is a central arena for advocacy groups seeking to influence policy debates. From school desegregation struggles in the 1960s, to responses to the A Nation at Risk Report of the 1980s, to the charter school movement beginning in the 1990s, advocates have sought to influence the direction of policy (Labaree, 2007; Stein, 2004). Each of these debates, and positioning around them, reflects different views about the social purpose of schooling and the role of public education and how it should be carried out (Labaree, 1997; Tyack & Cuban, 1995).

Researchers have long examined the strategies and influence of advocacy groups in a variety of education areas. Teacher unions are perhaps the most frequently chronicled formal advocacy groups in education (Bascia, 1998; Poole, 2001). Other examples in education include analyses of nonprofit groups and foundations that position themselves as supporters and opponents on such issues as charter schools (DeBray-Pelot et al., 2007; Kirst, 2007), immigrant education (Olsen, 2009), and private-school vouchers (Lubienski & Jameson-Brewer, 2016; Scott, 2011). More recently, researchers have noted how education policy think tanks, supported by private foundation funding, have used research and dissemination strategies to gain more traction in policy circles (Scott, 2009; Scott et al., 2009).

An emerging body of research has also begun exploring advocacy organizations’ use of social media (Bortree & Seltzer, 2009; Edwards & Hoefer, 2010; Greenberg & MacAulay, 2009; Supovitz & McGuinn, 2017; Supovitz et al., 2018). Because social media is more personalized, messages can be targeted, potentially broadening advocacy participation (Bennett, 2012). Guo and Saxton (2014) examined the social media use of 188 nonprofit advocacy organizations and learned that 93% of the organizations used social media to build an online community and call its members to action. Lovejoy and Saxton (2012) examined the Twitter use of the 100 largest nonprofit organizations and found that they used the medium extensively to inform their followers and to strategically engage their stakeholders using community-building practices. Auger (2013) found that advocacy organizations were using social media to “persuade people to their point of view, for the most part through use of one-way communication” (p. 369).

Analysis of advocacy group messaging has mostly focused on the ways in which groups frame their communications and the particular language that they use to enact these frames. A substantial set of political science literature identifies issue framing as a powerful means of shaping public perceptions and attitudes about political issues (Brewer & Gross 2005; Nicholson & Howard 2003; Sniderman & Theriault, 2004). Advocates seek to gain influence by emphasizing particular aspects of an issue that mobilize their constituencies (Riker et al., 1996; Schattschneider, 1960). Through repeated use of framing, ideas enter the public discourse and, eventually, can become widely accepted. Nelson and Oxley (1999) showed how political framing affects public opinion by experimentally demonstrating how the portrayal of news had a significant influence on subjects’ beliefs and opinions about an issue. Gormley (2012) conducted several studies of issue framing in education, including an experimental study of how issue framing influenced views about the early childhood education policy of Head Start. Stein (2004) conducted a detailed case study of the ways in which Title I of the Elementary and Secondary Education Act of 1965 was framed as an extension of the Johnson administration’s efforts to reduce poverty in the United States.

While debates about policy can be thought of as a clash of large ideas contained within frames, cognitive linguists have noted that framing strategies are activated by the particular words that advocates choose to convey their perspective. Fillmore (1976) argued that language and framing were inseparable. According to Lakoff (2008), “language that fits that worldview activates that worldview, strengthening it” (p. 8). The linguistic choices that issue framers make reinforce the message and trigger the emotional connections that we make to a text. Such discourse analysis examines the social orientations conveyed by language choices (Burr, 1995; Gee, 2005). Discourse analysists argue that speech is a form of action that contains formal functions, social intentions, and unintended consequences (Fairclough, 1995; Luke, 2002; Van Dijk, 1997; Wetherell & Potter, 1988). Discourse examinations have largely focused on careful qualitative investigations of the choices, patterns, and meaning of language in speech and text (Wooffitt, 2005).

An emerging strand of language analysis, called natural language processing (NLP), involves large-scale investigations made possible by computing power. NLP is a form of discourse analysis that analyzes naturally occurring language habits or patterns. Among the many NLP tools are sentiment analysis (Liu, 2012), word counting (Pennebaker, 2011), and interactional sociolinguistics (Nguyen et al., 2016). Each is a computerized analytic system designed to examine a different aspect of language use in order to identify linguistic or syntactical patterns with the aim of providing insights into a text or other form of communication.

This study combines social network analysis and the NLP technique of word counting to first identify the organically forming subcommunities, or factions, within the Common Core debate on Twitter and then to capture the collective psychological sentiments of these factions.


The concept of social capital underlies social network theory. Social capital theory emphasizes the value of both the collective system of relationships within which people live and work, and the resources accrued by individual members through their activities, relationships, and positioning within a social community (Bourdieu, 1989; Burt, 1992; Coleman, 1988). Social network theory holds that information, ideas, and other resources flow through a network, which is influenced by the quality of interactions, or ties, among the actors in the community (Lin, 2001; Scott & Carrington, 2011). Lin (2001) argued that social capital consists of “the resources embedded in social relations and social structure which can be mobilized when an actor wishes to increase the likelihood of success in purposive action” (p. 24).

From a social perspective, resources are not physical or financial assets, but rather the relational elements that flow through underlying social networks, including knowledge, advice, and innovation (Daly & Finnigan, 2010). Further, it is the structure that results from the ties between individuals in a system that determines access to resources (Burt, 1992; Coleman, 1988; Granovetter, 1982; Putnam, 1995).

Researchers also have found that different kinds of ties are important. Strong ties, often measured by quantity or quality of interactions, support the transfer of tacit, nonroutine, or complex knowledge (Hansen, 2002; Marsden & Campbell, 1984; Reagans & McEvily, 2003; Uzzi, 1997). By contrast, weak ties allow brokering opportunities between actors (Burt, 1997) and access to nonredundant, novel information (Granovetter, 1973). Scholarship suggests that different types of interactions within a network enable access to different kinds of information (Haythornthwaite, 2005; Tenkasi & Chesmore, 2003). Regardless of whether the ties are strong or weak, research also indicates that people gravitate toward those who share common backgrounds and experiences, a phenomenon called homophily (Lin, 1999; McPherson et al., 2001).


In this study, we used word counting, an NLP technique, to measure different psychological characteristics in written or spoken texts. Word counting is performed by comparatively analyzing a person’s use of certain words, each belonging to a specific word library that has been empirically shown to reflect a different aspect of an individual’s psychology. Popularized by social psychologist James Pennebaker of the University of Texas at Austin, word counting focuses not on the content of what people say, but on the specific choices of language that they use to convey their message (Pennebaker, 2011). Here we focus on four dimensions of psychology: mood, thinking style, drive motivation, and conviction. We chose these four because they have a research lineage in the word-counting literature; each is described briefly next.

Mood. This study investigates three dimensions of the psychological characteristic of mood: sadness, happiness, and anger. To investigate the distinction between sadness and happiness located in word choices, Stirman and Pennebaker (2001) compared the poetry of suicidal and nonsuicidal poets. In the study, the poets who committed suicide—those who assumedly struggled with sadness or depression—used far more “I” words (I, me, my, etc.), increased numbers of causal words (based, effects, intend, provoke, etc.), and more past- and future-tense verbs. The other poets used far more “we” words (we, us, our, etc.), fewer causal words, and more of what are considered concrete nouns (dog, sister, house, etc.). Stirman and Pennebaker (2001) found that the two groups made different word choices even when they wrote poems about similar subjects. The authors hypothesized that sadness is a form of isolation, and therefore, the sad poets employed “I” or “my” more frequently. By contrast, the happier poets articulated their sadness using “us” or “we” because they shared their experience with others and understood that they were not alone with their feelings (Pennebaker, 2011). A similar dichotomy exists in the groups’ use of causal words, or those words directly associated with active thought. By various definitions, sadness (depression) is an active cognitive process, something strongly linked to introspection, self-reflection, or the consideration of past tragic events (Greenberg & Pyszczynski, 1986). Happiness, on the other hand, is a sentiment to be felt and enjoyed. Happy people therefore use fewer words associated with acute cognition, including fewer causal and insight words.

To understand the words associated with anger, Pennebaker and Lay (2002) examined the spontaneous press conferences of New York City mayor Rudy Giuliani, before and after 9/11, when he was “variously referred to in the media as . . . a man seething with anger and self-righteousness” (p. 251). Pennebaker and Lay (2002) found that pronoun use in particular shed light on Giuliani’s state of mind. While sad people use “I” and happy people use “we,” angry people use a lot of “you” (yours, you’re, y’all, etc.) mixed with “he,” “she,” or “they.” Anger is measured by second- and third-person pronoun use, the number of angry words used (abuse, damn, enrage, idiot, etc.), and the number of present-tense verbs found in a specific text. The prevalence of the present tense reflects the fact that anger is another active emotion, one that is directed outward, inspired by an issue at hand and turned toward the offending party: “they” or “you” (Dodds et al., 2011).

Drive motivation. David McClelland’s needs theory (1985) hypothesized that people are driven in three distinct ways: by a need for power, a need for affiliation, or an underlying need for achievement. McClelland’s theory was based on analyses of subjects’ interpretations of ambiguous pictures on the Thematic Apperception Test (TAT), a psychological projection tool developed in the 1930s (Morgan, 1935). After administering the test, psychologists analyzed the transcripts of patient responses to glean insight into their minds.

As technology progressed, needs-theory researchers developed a computer-based scoring system to analyze the word choices of TAT respondents. McClelland and colleague David Winter found high correlations between subject assessments on the TAT and their other verbal and written expression (Winter & McClelland, 1978). This model relied on the idea that those driven by power used power words (e.g., ambition, manage, master, obey), those driven by achievement used achievement words (e.g., accomplish, challenge, overcome, strive), and people pushed by the construction of relationships used words associated with affiliation (e.g., ally, collaborate, communicate, interact). Among the many studies that have used his technique, Winter profiled the drives of naval officers (Winter, 1978), South African leaders (Winter, 1980), and every American president from 1789 to 1981 (Winter, 1987).

Conviction. How can we differentiate between truth and lies when liars insist that they are telling the truth? Pennebaker (2011) examined word choices in courtroom transcripts of successful criminal convictions in which defendants were subsequently prosecuted for perjury if they were believed to have lied during their original criminal trial. The ensuing perjury conviction or exoneration, typically based on DNA evidence or eyewitness accounts, provided either a validation or repudiation of the original testimony. Thus, Pennebaker had transcripts of original testimony and perjury testimony in cases that were subsequently proved to be true or false by subsequent evidence. Pennebaker (2011) found that defendants made significantly different word choices when lying or telling the truth. This finding has been replicated by additional investigations, using a variety of methods, that validated the claim that our words reveal people’s level of conviction (Bond & Lee, 2005; Larcker & Zakolyukina, 2012; Little & Skillicorn, 2008; Louwerse et al., 2010; Pennebaker et al. 2007).

In these studies, researchers consistently found that liars used certain types of words in their explanations that differed from those used by people telling the truth. Specifically, dishonest people employed fewer “I” words, more third-person pronouns, fewer number words (one, two, hundred, thousand, etc.), far fewer details such as concrete nouns, and, most commonly, more of what are called discrepancy words (would, should, could, etc.). People telling the truth, on the other hand, used far fewer emotional words—both positive and negative—more words related to time (yesterday, today, hour), more number words, and fewer of both causal words (made, make, intention, enact) and insight words (know, reasons, remember, think). The fact that liars use fewer number and time words correlates directly with the lack of detail in many lies. According to Pennebaker et al. (2007), the decreased use of causal and insight words in the written or spoken texts of people telling the truth occurs because our honest experiences are our own; when we retell them, we do not need to think, whereas constructing a lie is a much more arduous cognitive task.

We refer to this as a measure of conviction rather than lying because lying implies knowingly telling a falsehood, as opposed to believing that something is true even though it is factually false. We believe that liars who believe their misstatements can speak with the same conviction as truth-tellers.

Thinking style. Pennebaker also developed a theory of the psychological characteristic of thinking styles as expressed in word choices. He argued that people think in distinctly different ways, which are reflected in their use of distinct word bundles. As an introductory psychology professor, Pennebaker taught William James’s stream of consciousness theory to students as an example of tracking our nonlinear cognitive function. To teach this concept, Pennebaker assigned his students stream of consciousness diaries, asking them to spend specific amounts of time writing down anything that moved through their heads. Over the years, Pennebaker accumulated thousands of these diaries. Using these data, Pennebaker and King (1999) categorized pertinent words into function word groups (prepositions, pronouns, conjunctions, etc.). Using exploratory factor analysis, they identified three thinking styles: analytic thinkers, who seem to understand the world through division and distinctions and tend to group and order people, places, and events into distinct categories; narrative thinkers, who predominantly interpret information through stories and focus their thoughts on individual experiences and anecdotes; and formal thinkers, who use language that is more stodgy and emotionally distant.

Analytic thinkers use more exclusives (but, without, except), negations (no, nor, nothing), tentative words (maybe, perhaps), and quantifiers (some, many, more, less). They also show a higher degree of cognitive complexity, an intellectual habit reflected by their reliance on causal (because, reason, effect) and insight words (realize, think, mean). Narrative thinkers use more personal pronouns (reflecting their focus on people), past-tense verbs, and conjunctions (particularly words such as “with” and “together”). Highly formal thinking and writing typically includes big words (words greater than six letters) and high rates of articles (a, an, the), nouns, numbers, and prepositions. At the same time, formal writing has very few “I” words, verbs (especially present tense), discrepancy words (e.g., would, should, could), and common adverbs (really, very, so) (Pennebaker, 2011).


To apply these concepts to the social psychology of advocacy groups in the Common Core debate on Twitter, we investigate the following three research questions:


What were the social network groupings of the participants of the Common Core debate on Twitter, and what were their common interests?


What were the mood, drive motivation, thinking style, and conviction of the advocacy groups in the Common Core debate on Twitter, and did they differ by faction?


How do the psychological characteristics of the different groups help us to understand the social psychology of the different factions?


Most literally, the Common Core State Standards (CCSS) are a set of expectations of what students should know and be able to do in mathematics and English language arts at each grade level from kindergarten through 12th grade. The CCSS evolved out of a theory of action for school improvement first articulated in the 1990s (Smith & O’Day, 1991; Vinovskis, 1996). They were based on the systemic reform effort of the 1990s, which was founded on three general principles. First, ambitious standards were developed by each state to provide a set of targets for what students ought to know and be able to do at key grade junctures. Second, states measured progress toward the standards through aligned assessments, coupled with rewards and sanctions to hold educators accountable. Third, local actors retained flexibility in organizing capacity to determine how best to meet the academic expectations.

This structure of clear goals (standards), measures (assessments), and incentives (accountability) at the state level, combined with implementation autonomy, fits with America’s historical conception of education as a locally organized effort. Having each state develop its own standards and assessment systems produced a lot of variation in the quality and rigor of state educational systems across the country, which contributed to a perception of disappointment with the standards-based reform movement of the 1990s (Hamilton et al., 2008).

The new standards were named the “Common Core” because they were intended to eliminate the variation in the quality of state standards experienced in the 1990s (McDonnell & Weatherford, 2013). They were developed at the behest of the state governors and chief state school officers to avoid the charge of federal intrusion—which came nonetheless after the Obama administration incentivized states to adopt the CCSS with the Race to the Top funding competition and provided the financing for the Common Core testing consortia. The CCSS were adopted by the legislatures in 46 states and the District of Columbia in 2010.

Since their adoption, the CCSS have become increasingly controversial and politicized, with overall public opinion trending toward opposition to the standards and opinion being increasingly split by political party. Support for the CCSS declined from 65% to 49% between 2013 and 2015. These trends are distinctly partisan, with Democratic attitudes largely holding steady while Republican support plummeted 20 points, from 57% in 2013 to just 37% in 2015 (Education Next, 2016). Many states (including Indiana, Oklahoma, Missouri, New Jersey, Tennessee, and West Virginia) have withdrawn, renaming and reintroducing similar standards to sidestep the political maelstrom. More than half of the states have withdrawn from the associated Common Core–aligned test consortia.

A few studies have been conducted on the Common Core debate on Twitter using both qualitative discourse analysis and NLP techniques. Supovitz and Reinkordt (2017) used qualitative techniques to analyze the frames, metaphors, and lexical markers underlying the linguistic choices of Common Core opponents on Twitter, who activated five central metaphors reinforcing the overall frame of the standards as a threat to children. Collectively, Supovitz and Reinkordt (2017) found, these frames, and the metaphors and the language that triggered them, appealed to the value systems of both conservatives and liberals and contributed to the broad coalition from both within and outside of education that aligned in opposition to the standards. In a separate analysis of the same data, Supovitz et al. (2018) identified two forms of linguistic valence in the tweets about the Common Core: “policyspeak,” which refers to the cooler, more rational language of policy analysis, in which debate is based on the merits of the evidence and the logic of the argument, and “politicalspeak,” which is more emotional and appeals to people’s passions. After hand coding a sample of the tweets, they found that supporters of the standards were significantly more likely to use policyspeak, whereas opponents were more likely to employ politicalspeak. In another study of Common Core Twitter data overlapping the time period of this one, Wang and Fikis (2017) used sentiment analysis, a computer-based NLP technique, to detect sentiment (positive, negative, neutral) in Common Core–related tweets from December 2014 to December 2015. They found that both the overall sentiment and that of opinion leaders were overwhelmingly negative toward the CCSS.

Supovitz and McGuinn (2017) conducted interviews with 19 education advocacy groups that supported the CCSS in late 2013 and early 2014 and analyzed their communication strategies. They found that their web-based media strategies focused on toolkits and guides with only a thin social media presence, “based on a mistaken belief that Common Core supporters felt that information campaigns would be enough to clarify the benefits of the standards” (p. 24).


Here we describe the data from Twitter analyzed in the study, how we identified structural subcommunities, or factions, from the data, and how we measured the psychological characteristics of each of the factions and tested for differences among them.

Data. The data were downloaded from Twitter’s Application Programming Interface (API) for a one-year period between November 2014 and October 2015. The data are part of a larger study first presented on an interactive website, www.hashtagcommoncore.com (Supovitz et al., 2017). The tweets for this study were identified based on three hashtags: #commoncore, #ccss, and #stopcommoncore. The resulting data set consisted of 507,734 tweets and retweets from 100,247 distinct actors.

Identifying structural subcommunities/factions. To understand the inner structure and clustering of the interactions of the overall network, we ran a community detection algorithm to identify and represent structural subcommunities, or factions. The algorithm sorts people into groups based on the extent of their ties to other actors. A faction is defined as a group of individuals with more ties to one another than to members of other groups, even though group boundaries are somewhat porous. It is important to note that we did not “preassign” individuals to these factions a priori based on their attributes; rather, we let their interactive activity on Twitter determine the structural group to which they belonged. This does not mean that members of a subcommunity did not communicate with others outside the community; their membership reflects the dominant interactions during the period of the study. Nor does it mean that individuals self-affiliated with a particular group or faction; their Twitter activity connected them as such. When we ran the subcommunity algorithm, we found four major factions comprising the Common Core network in 2014–2015, although each of the four contained additional subgroupings as well.
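The faction-detection step can be sketched with a modularity-based community detection routine. The following is a hypothetical illustration using the third-party networkx library on a toy interaction graph; the actor names, edges, and the specific algorithm (greedy modularity maximization) are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: modularity-based community detection on a small
# retweet/mention graph. All names and edges are invented for illustration.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy interaction graph: an edge means one actor retweeted or mentioned another.
edges = [
    ("a1", "a2"), ("a2", "a1"), ("a1", "a3"), ("a3", "a2"),  # dense cluster 1
    ("b1", "b2"), ("b2", "b3"), ("b3", "b1"),                # dense cluster 2
    ("a3", "b1"),                                            # one cross-cluster tie
]
G = nx.Graph()  # community detection here operates on the undirected projection
G.add_edges_from(edges)

# Greedy modularity maximization groups actors with denser within-group ties
# than ties across groups -- the "faction" definition used in the study.
factions = greedy_modularity_communities(G)
for i, members in enumerate(factions):
    print(i, sorted(members))
```

In the study this step ran over the full 100,247-actor network; the same principle applies, with each detected community treated as one faction.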

Measuring the psychological characteristics and testing for differences among factions. To measure the four psychological characteristics—mood, motivation, conviction, and thinking style—and their dimensions for each faction, we adapted the Linguistic Inquiry and Word Count (LIWC) libraries that Pennebaker and colleagues had developed for each psychological characteristic (Pennebaker et al., 2007). We first ran our entire data set of tweets through the LIWC program to locate all words used in the CCSS Twitter debate that matched the exhaustive LIWC libraries, thereby creating libraries specific to each psychological characteristic in our context. The word libraries ranged in size from 23 words to just over 900 words, depending on the psychological characteristic and dimension thereof. The Appendix shows information about the libraries for each psychological characteristic and dimension, the number of words in each library, and examples of the words contained in each library.

Once the libraries were built, we used Python, an open-source object-oriented programming language, to create search routines to comb through each of the 507,734 tweets (including retweets) and match the words to those in the libraries. This procedure produced a count of the total words in each tweet and the words that matched those in each library. We then aggregated these up from the tweet level to the actor level, which generated the total number of words used by each actor and the words used by that actor that were contained in the word library for that psychological characteristic/dimension. This gave us a stable measure of the proportion of matched words to total words for each individual for each library. It is important to recognize that this analytic approach identified words, not tweets, associated with each psychological characteristic. Thus, our analysis is at the word level, not tweet level, for factional members. Because we wanted to remove anomalies in the data—for example, someone could have tweeted five words, of which three matched those in the library—we decided to remove any individual who tweeted fewer than 15 words over the one-year period. This reduced our individual actor sample by about 20%, from 100,247 to 80,671.
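The word-matching and actor-level aggregation described above can be sketched as follows. The mini-libraries, tweets, actor names, and tokenization are invented for illustration (the study's LIWC-derived libraries contained up to roughly 900 words each); only the 15-word minimum comes from the text.

```python
# Illustrative sketch of word counting and tweet-to-actor aggregation.
# Libraries and tweets are made up; only the MIN_WORDS cutoff is from the study.
import re
from collections import defaultdict

libraries = {
    "anger": {"damn", "idiot", "enrage", "abuse"},
    "achievement": {"accomplish", "challenge", "overcome", "strive"},
}

tweets = [
    ("actor1", "We must overcome this challenge and accomplish real reform"),
    ("actor1", "We will strive for better standards for all of our students"),
    ("actor2", "This damn policy will enrage parents"),
]

# Aggregate from tweet level to actor level: total words and matches per library.
totals = defaultdict(int)
matches = defaultdict(lambda: defaultdict(int))
for actor, text in tweets:
    words = re.findall(r"[a-z']+", text.lower())  # crude tokenizer (assumption)
    totals[actor] += len(words)
    for lib, vocab in libraries.items():
        matches[actor][lib] += sum(w in vocab for w in words)

# Proportion of matched words to total words, per actor per library,
# dropping actors with fewer than 15 words over the period (as in the study).
MIN_WORDS = 15
props = {a: {lib: matches[a][lib] / totals[a] for lib in libraries}
         for a in totals if totals[a] >= MIN_WORDS}
print(props)
```

Here actor2 tweeted only six words and is dropped, mirroring the roughly 20% of actors excluded by the 15-word threshold.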

Next, we standardized (with z scores, µ = 0; SD = 1) within each library and averaged across them (where relevant). This served to equally weight across the different-sized libraries within a psychological characteristic/dimension. This was necessary because of the imbalance of the number of words across libraries. For example, the sadness dimension of mood contains three libraries: focus future words, “I” words, and sad words. Because the library of “I” words (19 words) is far smaller than that of focus future words (113 words) or sad words (192 words), the unstandardized effects of focus future and sad words would swamp the effects of “I” words. By standardizing the libraries of a psychological characteristic/dimension before averaging across them, we equally weighted across the libraries, therefore producing an unbiased average for each individual. In two cases, Conviction and the Formal Thinking dimension of Thinking Style, we reverse coded a subset of the libraries (as noted in the Appendix) so that the greater use of the words in the library was always aligned with higher levels of the sentiment. We did this by multiplying the standardized results for the specified libraries by -1 before averaging across them.
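The standardization step can be illustrated with a minimal sketch: z-score each library's proportions across actors, then average across a dimension's libraries. The three actors and their proportions are invented; reverse coding is shown as a comment since none of the sadness libraries require it.

```python
# Minimal sketch of within-library standardization and cross-library averaging
# for the sadness dimension of mood. Data are invented for illustration.
import statistics

# Proportions of matched words for three hypothetical actors across the three
# sadness libraries (focus-future words, "I" words, sad words).
scores = {
    "focus_future": [0.02, 0.05, 0.08],
    "i_words":      [0.10, 0.01, 0.04],
    "sad":          [0.03, 0.03, 0.09],
}

def zscores(xs):
    # Standardize to mean 0, SD 1 (population SD, an assumption of this sketch).
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

# Standardize within each library so that large libraries (e.g., 192 sad words)
# cannot swamp small ones (e.g., 19 "I" words).
standardized = {lib: zscores(xs) for lib, xs in scores.items()}

# For reverse-coded libraries (Conviction, Formal Thinking), the study
# multiplied the z-scores by -1 before averaging, e.g.:
# standardized["some_lib"] = [-z for z in standardized["some_lib"]]

# Average across the libraries to yield one equally weighted sadness score
# per actor.
sadness = [statistics.mean(vals) for vals in zip(*standardized.values())]
print(sadness)
```

Because each library is centered before averaging, the resulting scores sum to zero across actors, giving an unbiased relative measure of the dimension.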

Finally, we connected the individual tweeters to their faction based on the results of the social network analysis community detection algorithm. Using these groups and the standardized results scores for each psychological characteristic/dimension, we then performed a series of analyses of variance (ANOVAs) with Bonferroni corrections to test for differences in the psychological characteristics and their dimensions among factions.
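A minimal sketch of the omnibus ANOVA with Bonferroni-corrected pairwise comparisons, using simulated standardized scores for three hypothetical factions rather than the study's actual data:

```python
import numpy as np
from scipy import stats

# Hypothetical standardized sentiment scores for three factions (illustrative).
rng = np.random.default_rng(1)
groups = {"supporters": rng.normal(0.15, 1, 500),
          "educators_opposed": rng.normal(0.00, 1, 300),
          "outside_opposed": rng.normal(-0.05, 1, 800)}

# Omnibus one-way ANOVA testing for any difference among the factions.
f_stat, p_omnibus = stats.f_oneway(*groups.values())

# Pairwise post-hoc t-tests with a Bonferroni correction: multiply each
# raw p-value by the number of comparisons, capped at 1.
names = list(groups)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
bonferroni = {}
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    bonferroni[(a, b)] = min(p * len(pairs), 1.0)
```

The Bonferroni adjustment guards against inflated Type I error when, as here, each characteristic requires several pairwise faction comparisons.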


Here we describe the different factions in the Common Core debate on Twitter based on their structural subcommunity groupings, and differences among the factions in the psychological characteristics of their mood, drive motivation, thinking style, and conviction.


The subcommunity analysis addressed Research Question 1. It identified four major factions based on the predominance of within-group ties, along with a set of people who were not connected to any of these factions. The four groups are shown in Figure 1. These groups were identified based on their behavior on Twitter: using the Common Core-related hashtags that were the basis of the data collection, they interacted far more with those within their faction than with those outside it. After identifying the factions through the modularity analysis, we examined the content of their tweets and noted the commonalities that led to the affiliation of those within each faction. Our interpretation of the factions led us to name the groups as follows:


Opponents of the Common Core From Outside Education: This was the largest faction, consisting of 39% of the participants, and largely comprised those outside education who tended to oppose the Common Core. These were actors who took a largely anti–Common Core position because of their interest in other advocacy issues (antifederalism, privacy issues, political partisanship, etc.) that are often conflated with the Common Core debate.

Supporters of the Common Core: This group of about 26,000 actors, or 26% of the ecosystem, primarily comprised individuals and organizations within the education sector who tended to support the Common Core or were connected in the Twitterverse to those who supported the Common Core.

Educators Opposed to the Common Core: This group, which made up about 10% of the participants in the #commoncore conversation, comprised organizations and individuals within education who also largely opposed the Common Core. These actors tended to be people who were against the Common Core for reasons related to both the standards themselves (developmentally inappropriate, ignore social and emotional issues) and education issues tied to the standards (anti-testing, etc.).

Other: About 22,000 actors, or about 22% of the Common Core network on Twitter, did not fit into any of the other major subcommunities.

Finally, we found that the dragnet captured about 2,500 people who used #ccss. These were participants from Costa Rica who, during this period, were tweeting about the nation’s social security system, the Caja Costarricense de Seguro Social, which also uses the acronym CCSS.
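The modularity-based faction detection described above can be illustrated on a toy interaction graph. The study does not specify its exact community detection algorithm, so this sketch uses networkx's greedy modularity maximization as a stand-in; the two "camps" and their edges are invented for illustration.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy retweet/mention graph: two hypothetical camps that interact mostly
# internally, connected by a single bridging edge (illustrative only).
G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),   # camp 1
                  ("x", "y"), ("y", "z"), ("x", "z"),   # camp 2
                  ("c", "x")])                          # weak bridge

# Modularity maximization groups nodes so that within-group ties
# predominate over between-group ties, recovering the two camps.
communities = greedy_modularity_communities(G)
```

At the scale of the study (over 100,000 actors), the same logic partitions the hashtag network into the factions described above, each with far denser internal than external interaction.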


The analyses identifying the relative use of words associated with each psychological characteristic and dimension by each faction, and testing for differences among them, addressed the second research question. Before conducting these analyses (described in the Data and Analysis section), we removed both the Costa Ricans and those who tweeted fewer than 15 words over the course of the year. The results are shown in Table 1. The first part of Table 1 includes the standardized (z score) means and standard deviations for each faction for each psychological characteristic and dimension. The second part of Table 1 shows the results of the omnibus F test for group differences, and the results of three post-hoc tests of differences between the three factions of interest.


For each of the psychological characteristics and dimensions, we found different patterns among the factions. The first three rows show the results for the three dimensions of motivation: power, achievement, and affiliation. For power motivation, the results showed significant differences among all three groups, F(3, 78609) = 662.95, p = .000, with the educators opposed to the Common Core using the most power motivation words (µ = .099), the opponents of the Common Core from outside education the second most (µ = .023), and the supporters of the Common Core the fewest (µ = -.276).

These results were nearly reversed for achievement motivation. Again, there were significant differences among all three factions, F(3, 78609) = 245.91, p = .000, but the supporters of the Common Core had the highest average use of achievement motivation words (µ = .154), the educators opposed to the Common Core the second highest (µ = -.007), and the opponents of the standards from outside education the lowest (µ = -.047).

Again, the results were different for the affiliation motivation, with the omnibus F test indicating a difference among the groups, F(3, 78609) = 65.70, p = .000. The opponents of the Common Core from outside education used significantly more affiliation motivation words than did either the supporters of the standards (mean difference .042, SE .009, p = .000) or the opponents of the standards from within education (mean difference .042, SE .013, p = .006). There was no significant difference in the use of affiliation motivation words between the two within-education groups (mean difference .001, SE .013, p = 1.00).

There were also differences in the words used by the three factions in their expression of the three dimensions of mood. There was an overall difference in the use of anger words among the three communities, F(3, 78609) = 456.09, p = .000. The CCSS opponents from outside education used significantly more words associated with anger (µ = .078) than did either the CCSS opponents from within education (µ = .032, mean difference .046, SE .007, p = .000) or the CCSS supporters (µ = -.085, mean difference .164, SE .005, p = .000). There was also a significant difference in the use of anger words between the opponents of the Common Core from within education and the supporters of the Common Core (mean difference .117, SE .007, p = .000).

In terms of the use of sad words, there were again significant differences among the three groups, F(3, 78609) = 250.88, p = .000, with all three factions differing significantly from one another. The educators opposed to the Common Core used the highest proportion of words associated with sadness (µ = .119); the opponents of the standards from outside education used the second highest proportion (µ = .032); and the supporters of the Common Core used sad words least frequently as a proportion of the content of their tweets (µ = -.071). There were no statistical differences among the three factions in the use of words associated with happiness.

The use of words expressing conviction also differed significantly among the three factions, F(3, 78609) = 352.26, p = .000. Of the three groups, the supporters of the Common Core had the highest use of conviction words (µ = .028), the educators who opposed the standards the second highest (µ = .001), and the opponents from outside education the lowest (µ = -.046). Post hoc tests indicated differences between all three groups, with CCSS supporters significantly higher than educators opposed to the CCSS (mean difference .027, SE .004, p = .000), and educators opposed to the CCSS significantly higher than opponents from outside education (mean difference .047, SE .004, p = .000).

There were also many differences among the factions in the use of words associated with the three thinking styles. In terms of words associated with an analytic thinking style, there were significant differences among the three groups, F(3, 78609) = 181.78, p = .000. The supporters of the Common Core had the highest use of analytic thinking style words (µ = .039), the educators opposed to the Common Core the second highest (µ = .022), and the opponents of the Common Core from outside education the lowest (µ = .003). Post hoc analyses indicated significant differences among all three groups.

By contrast, the two factions that opposed the Common Core had the highest proportions of words indicative of a narrative thinking style. Educators who opposed the standards had the highest average standardized use of narrative thinking style words (µ = .074), and opponents of the standards from outside education had the second highest (µ = .061). Post hoc analyses indicated that both of these groups used significantly more narrative thinking style words in their tweets than did the supporters of the Common Core (mean difference .121, SE .008, p = .000, and mean difference .108, SE .005, p = .000, respectively). There was no statistical difference between the two communities opposed to the Common Core (mean difference .013, SE .008, p = .549).

Lastly, the supporters of the Common Core used the highest proportion of formal thinking style words (µ = .062), followed by the opponents of the Common Core from outside education (µ = .014); the educators opposed to the Common Core used the fewest formal thinking style words (µ = -.036). All of these differences were statistically significant.

Finally, it is interesting to note that the participants in the Common Core debate who were not affiliated with any of the major factions (identified in Table 1 as “other”) had the lowest levels of words associated with virtually all the sentiments. This is likely because this group comprises a heterogeneous mix of people with no consistent perspective on the standards.


The results reveal several important insights about the social psychology of the major subcommunities engaged in the Common Core debate on Twitter in 2014–2015. However, before interpreting the meaning of these results, we need to highlight a distinction between our analyses and the theoretical foundation on which they are built. While the psychological characteristics identified in word choices by Pennebaker (2011) and others are specifically associated with individuals, we have transferred these psychological characteristics to groups via their membership in social networks. In doing so, we are moving from the terrain of psychology to that of social psychology.

One of the tenets of the connections between psychological characteristics and word choices in the word counting literature is that word use transcends context and is embedded in the psychology of the individual. Thus, for example, Pennebaker (2011) found that the word choices associated with different thinking styles transcended different kinds of writing; Winter and McClelland (1978) and Winter (1978, 1980, 1987) generalized drive motivation to an attribute of the individual, not just to a particular situation or position on a specific issue; and Stirman and Pennebaker (2001) found that sad words were littered across the poetry of anguished poets, regardless of the topic of any given poem.

Yet, our analyses differ from these examples in several important ways. First, through our analyses, we have sought to understand the overriding collective sentiment of disparate groups of individuals through their views on the single issue of the Common Core. In doing so, we seek to make statements about group sentiments rather than the psychological characteristics of individual members. Second, group membership is based on particular coalitions around a specific issue. We note the fluidity of factional membership and how connections can shift over time, depending on patterns of followership, retweeting, and mentioning. Hypothetically, we could test the mutability of these collective sentiments if there were another issue that cleaved the members of these groups similarly, but we have no reason to believe that the groups (particularly those within education) would similarly separate on another issue, and we do not have the data to test this hypothesis. Thus, we have no basis to claim that the collective sentiments we are associating with the different communities are fixed to that collective group, independent of time and issue.

While this technique could certainly be applied to other contexts, we believe that our interpretations of the results make sense solely in the context of the particular issue of the Common Core State Standards. While individual members of the factions may share psychological characteristics, collective groups have a social psychology that depends on the particular configuration of the group, especially where group membership is in flux. This is where homophily becomes an important consideration in making sense of the results. Homophily, defined as the tendency of individuals to associate with similar others, may be a contributing factor that reinforces the patterns in the results. It could be that the members of the factions, in reading the messages of their fellow community members, internalize the word choices made by their confederates, which in turn influences the words they choose, either consciously or subconsciously. Thus, we may be witnessing a self-reinforcing cycle in which group affiliation exposes members to language choices, which in turn induces replication of similar language in subsequent tweets, retweets, and mentions. Homophilic tendencies may be supercharging the results.

This said, how might we interpret the different word use across the different subcommunities? A summary picture of the collective psychological sentiments of the three subcommunities in the Common Core debate is shown in Figure 2. From this, we can see that the supporters of the standards had the highest levels of conviction and achievement motivation, and an analytic and formal thinking style. They also had the lowest levels of narrative thinking and use of anger words.


The educators opposed to the Common Core had the highest level of drive motivation and use of sad words, and a narrative thinking style. Opponents of the Common Core from outside education exhibited an affiliative drive motivation, a narrative thinking style, high levels of anger words, and low levels of conviction in their choice of language.

Delving deeper, the two groups opposing the Common Core had the highest use of words associated with power motivation. Rather than viewing these factions as made up of individuals who were driven by power, we interpret their use of power words to mean that they felt that the standards movement threatened their sense of influence over education. We interpret that the educators who opposed the Common Core felt that the standards were removing some of their agency as educators—that the demands of the standards constrained their sense of autonomy to construct the educational experiences of children. The faction made up of opponents of the Common Core from outside education may have similarly felt threatened in that the standards movement infringed on local control of education, which has traditionally allowed different groups to assert their individual influence within their communities. Interestingly, the low use of power words on the part of members of the faction supporting the Common Core belies the conventional wisdom that Common Core advocates sought to consolidate control of the education of American children.

By contrast, the faction supporting the Common Core had the highest use of words associated with achievement motivation. On the surface, it might seem that Common Core supporters’ relatively high use of achievement words could be linked to the successful adoption of the standards. However, we do not believe that this is an accurate interpretation, particularly given that the CCSS movement was under attack in states across the country at the time of our analysis. Rather, we interpret the high measure of achievement words as rooted in the language of the standards themselves. Because the reform focused on achievement, promoting the standards naturally drew on many of the terms found in the achievement library. By simply promoting the CCSS, or even discussing the merits of standards, advocates were more likely to use terms such as “achieve,” “success,” “performance,” “test,” and “assessment,” all of which are achievement-oriented words. So, simply by touting the standards’ ability to improve student performance, CCSS advocates elevated their measured achievement orientation.

By contrast, the opponents of the standards from outside education had the highest use of words associated with an affiliation motivation, while there was no difference in affiliation word use between either of the two groups of educators. This could be due to the members of this group feeling a stronger bond among themselves because they viewed themselves as part of a larger movement—with the CCSS as just one strand of their common cause—and therefore used more affiliation-oriented words.

We interpret differences in the language associated with the mood states of the three different factions as a reflection of both their position and the status of the reform at the time of their activity. The high use of words associated with anger in the two groups opposed to the Common Core is plausibly related to both their opposition to the reform in general and their frustration with the adoption of the standards in spite of their efforts. Sadness, by contrast, is measured by the use of “I” words, future-tense verbs, and negative emotion words. Future-tense verbs show a consistent awareness of the future, a quality empirically associated with apprehension, anxiety, and negative emotions (Borkovec, 2002). In this case, the use of future focus verbiage among educators opposed to the CCSS could possibly stem from their worries about the implementation of the CCSS and how it might affect the future of education. Further, their habitual use of negative emotion words may reflect their generally negative appraisal of the standards and the surrounding Common Core debate. Conversely, the CCSS supporters’ relatively low use of negative words and future-tense verbs may reflect their positive support for the reform and the immediacy of their argument and position as it relies on the current implementation of the standards.

The conviction scale assesses the extent to which people believe what they are saying. Based on the words they used, the supporters of the Common Core had the highest level of belief in their position and arguments. When writing their tweets, they used high numbers of concrete nouns, number words, and time words, all of which highlight their use of concrete, analytical arguments. By contrast, the low use of conviction words among CCSS opponents from outside education could mean a number of things. One interpretation is that by attempting to connect the Common Core to other issues, such as data mining and business profiteering, faction members had to make more strained arguments, which lowered their levels of conviction. Another interpretation is that they simply did not believe what they were saying, but instead used explosive rhetoric to raise the passion of their constituencies and galvanize support for their cause.

Words associated with an analytic thinking style are contained in seven distinct word libraries, including causal words (e.g., effect, trigger, infer), insight words (e.g., explains, decides, proves), negations, prepositions, conjunctions, and quantitative terms (e.g., average, group, most). The supporters of the CCSS measured highest in analytic thinking style language, likely because of their greater use of analytic arguments: an analytic argument is one based in making distinctions, and such arguments are predicated on analytic thinking. Interestingly, these results are also consistent with findings from Supovitz et al. (2018), showing that supporters of the standards used policyspeak, whereas opponents of the standards tended to use politicalspeak. Policyspeak is a rational, analytically oriented form of speech, rooted in structured and measured argumentation, akin to an analytic thinking style. Politicalspeak, used more frequently by CCSS opponents, is a more emotional method of communication, rooted in impassioned opinion.

Our analysis of narrative thinking found that both groups of Common Core opponents used significantly more narrative thinking style words than did the group supporting the Common Core. A plausible explanation for the similarity between the two groups opposing the Common Core, and their difference from the CCSS supporters, arises out of the definition of narrative thinking: Narrative thinkers interpret the world through individual experience and articulate their thoughts through story-telling. Narrative thinkers also tend to see issues and interpret information at a personal level, involving interactions rather than distinctions and relationships rather than differences. A personalized interpretation makes sense if we consider that the members of the factions opposing the standards tended to view the Common Core debate as a personal rather than a systems issue. Throughout the debate, members of the faction of opponents from outside education raised concerns that the standards dehumanized the education process for individual participants, turning students into acquiescent numbers or troves for large-scale data mining. By contrast, the relatively low use of narrative thinking style words in the faction of people supporting the Common Core may reflect their overall view that the standards were a strategy for systemic improvement, one that would build up the system to elevate overall performance. Their linguistic choices suggest that they engaged less with the personal implications of the CCSS, while emphasizing the overall benefits.

Supporters of the Common Core had the highest use of words associated with a formal thinking style. This group’s high use of formal words parallels their high use of analytic thinking words. The similarity lies in the types of words that formal thinkers use: high rates of long words (longer than six letters) and prepositions, and low numbers of emotion words and “I” words. These word types, like analytic word types, lend themselves to concrete, academic argumentation rather than impassioned emotional speech.

In this study, we have demonstrated that groups of strangers who are brought together by a common view on an education policy issue also have distinctive collective psychological sentiments that differ from those with different views on the issue. Although this is an important development, it raises a number of questions. First, what is the sequence of sentimental homophily? Do people with common views on an issue share psychological characteristics before they band together around a specific issue? Or do their shared sentiments grow out of their interactions together? Relatedly, do shared sentiments grow stronger over time?

Individuals on Twitter, mostly strangers to each other, band together in fluid communities that share a position on a particular issue. Their bond is formed by their behavioral choices to follow, retweet, and mention each other on Twitter. In creating the tweets that make up the pool of words collectively used by the group, individuals with their own psychologies choose words to express their sentiments on the topic. Thus, their collective clusters of words, and the collective sentiments they express, reflect the pooled psychologies of the individuals and their common position on the issue at hand. As this examination of advocacy group sentiments shows, their messages reflect more than just words.


Andrews, K. T., & Edwards, B. (2004). Advocacy organizations in the U.S. political process. Annual Review of Sociology, 30, 479–506.


Auger, G. A. (2013). Fostering democracy through social media: Evaluating diametrically opposed nonprofit advocacy organizations’ use of Facebook, Twitter, and YouTube. Public Relations Review, 39(4), 369–376.


Bascia, N. (1998). Teacher unions and educational reform. In A. Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), International handbook of educational change (pp. 896–915). Kluwer Academic.

Bennett, W. L. (2012). The personalization of politics political identity, social media, and changing patterns of participation. The Annals of the American Academy of Political and Social Science, 644(1), 20–39.


Bond, G. D., & Lee, A. Y. (2005). Language of lies in prison: Linguistic classification of prisoners’ truthful and deceptive natural language. Applied Cognitive Psychology, 19(3), 313–329. 


Borkovec, T. D. (2002). Life in the future versus life in the present. Clinical Psychology: Science and Practice, 9(1), 76–80.


Bortree, D. S., & Seltzer, T. (2009). Dialogic strategies and outcomes: An analysis of environmental advocacy groups’ Facebook profiles. Public Relations Review, 35(3), 317–319.


Bourdieu, P. (1989). Social space and symbolic power. Sociological Theory, 7(1), 14–25.


Brewer, P. R., & Gross, K. (2005). Values, framing, and citizens’ thoughts about policy issues: Effects on content and quantity. Political Psychology, 26(6), 929–948. https://doi.org/10.1111/j.1467-9221.2005.00451.x


Burr, V. (1995). An introduction to social constructionism. Routledge.


Burstein, P. (1998). Interest organizations, political parties, and the study of democratic politics. In A. N. Costain & A. S. McFarland (Eds.), Social movements and American political institutions (pp. 39–56). Rowman & Littlefield.


Burt, R. S. (1992). Structural holes. Harvard University Press.


Burt, R. S. (1997). The contingent value of social capital. Administrative Science Quarterly, 42(2), 339–365.


Coleman, J. S. (1988). Social capital in the creation of human capital. American Journal of Sociology, 94, 95–120.


Daly, A. J., & Finnigan, K. S. (2010). A bridge between worlds: Understanding network structure to understand change strategy. Journal of Educational Change, 11(2), 111–138.


DeBray-Pelot, E. H., Lubienski, C. A., & Scott, J. T. (2007). The institutional landscape of interest group politics and school choice. Peabody Journal of Education, 82(2–3), 204–230.


Dodds, P. S., Harris, K. D., Kloumann, I. M., Bliss, C. A., & Danforth, C. M. (2011). Temporal patterns of happiness and information in a global social network: Hedonometrics and Twitter. PLoS ONE, 6, e26752. doi:10.1371/journal.pone.0026752


Education Next. (2016). Results from the 2016 Education Next Poll. http://educationnext.org/2016-ednext-poll-interactive/


Edwards, H. R., & Hoefer, R. (2010). Are social work advocacy groups using Web 2.0 effectively? Journal of Policy Practice, 9(3–4), 220–239.


Fairclough, N. (1995). Critical discourse analysis: The critical study of language. Language in Social Life Series. Longman.


Fillmore, C. J. (1976). Frame semantics and the nature of language. Annals of the New York Academy of Sciences, 280(1), 20–32. https://doi.org/10.1111/j.1749-6632.1976.tb25467.x


Gee, J. P. (2005). An introduction to discourse analysis: Theory and method (2nd ed.). Routledge.


Gormley, W. T., Jr. (2012). Voices for children: Rhetoric and public policy. Brookings Institution Press.


Granovetter, M. S. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360–1380.


Granovetter, M. S. (1982). The strength of weak ties: A network theory revisited. In P. V. Marsden & N. Lin (Eds.), Social structure and network analysis (pp. 105–130). Sage.


Greenberg, J., & MacAulay, M. (2009). NPO 2.0? Exploring the web presence of environmental nonprofit organizations in Canada. Global Media Journal, 2(1), 63–88.


Greenberg, J., & Pyszczynski, T. (1986). Persistent high self-focus after failure and low self-focus after success: The depressive self-focusing style. Journal of Personality and Social Psychology, 50(5), 1039–1044.


Guo, C., & Saxton, G. D. (2014). Tweeting social change: How social media are changing nonprofit advocacy. Nonprofit and Voluntary Sector Quarterly, 43(1), 57–79.


Hamilton, L., Stecher, B., & Yuan, K. (2008). Standards-based reform in the United States: History, research, and future directions (No. RP-1384). RAND Education.


Hansen, M. (2002). Knowledge networks: Explaining effective knowledge sharing in multiunit companies. Organization Science, 13(3), 232–248.


Haythornthwaite, C. (2005). Social networks and Internet connectivity effects. Information, Community & Society, 8(2), 125–147.


Kirst, M. W. (2007). Politics of charter schools: Competing national advocacy coalitions meet local politics. Peabody Journal of Education, 82(2–3), 184–203.


Labaree, D. F. (1997). Public goods, private goods: The American struggle over educational goals. American Educational Research Journal, 34(1), 39–81.


Labaree, D. F. (2007). Education, markets, and the public good: The selected works of David F. Labaree. Routledge.


Lakoff, G. (2008). The political mind: Why you can’t understand 21st-century politics with an 18th-century brain. Penguin.


Larcker, D. F., & Zakolyukina, A. A. (2012). Detecting deceptive discussions in conference calls. Journal of Accounting Research, 50(2), 495–540. doi:10.1111/j.1475-679x.2012.00450.x


Lin, N. (1999). Social networks and status attainment. Annual Review of Sociology, 25(1), 467–487.


Lin, N. (2001). Social capital: A theory of social structure and action (Vol. 19). Cambridge University Press.


Little, A., & Skillicorn, B. (2008, June). Detecting deception in testimony. In Proceedings of the IEEE International Conference on Intelligence and Security Informatics (pp. 13–18). Taipei, Taiwan.


Liu, B. (2012). Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1), 1–167.


Louwerse, M., Lin, K., Drescher, A., & Soumin, G. (2010). Linguistic cues predict fraudulent events in a corporate social network. Cognitive Science Journal, 32, 961–966. http://csjarchive.cogsci.rpi.edu/proceedings/2010/papers/0306/paper0306.pdf 


Lovejoy, K., & Saxton, G. D. (2012). Information, community, and action: How nonprofit organizations use social media. Journal of Computer-Mediated Communication, 17(3), 337–353.


Lubienski, C., & Jameson Brewer, T. (2016). An analysis of voucher advocacy: Taking a closer look at the uses and limitations of “gold standard” research. Peabody Journal of Education, 91(4), 455–472.


Luke, A. (2002). Beyond science and ideology critique: Developments in critical discourse analysis. Annual Review of Applied Linguistics, 22, 96–110.


Marsden, P. V., & Campbell, K. E. (1984). Measuring tie strength. Social Forces, 63(2), 482–501.


McClelland, D. C. (1985). Human motivation. Scott Foresman.


McDonnell, L., & Weatherford, S. M. (2013). Organized interests and the Common Core. Educational Researcher, 42, 488–497.


McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1), 415–444.


Morgan, C. D. (1935). A method for investigating fantasies. Archives of Neurology & Psychiatry, 34(2), 289–306.


Nelson, T. E., & Oxley, Z. M. (1999). Issue framing effects on belief importance and opinion. The Journal of Politics, 61(4), 1040–1067. https://doi.org/10.2307/2647553


Nguyen, D., Doğruöz, A. S., Rosé, C. P., & de Jong, F. (2016). Computational sociolinguistics: A survey. Computational Linguistics, 42(3), 537–593.


Nicholson, S. P., & Howard, R. M. (2003). Framing support for the Supreme Court in the aftermath of Bush v. Gore. Journal of Politics, 65(3), 676–695. https://doi.org/10.1111/1468-2508.00207


Olsen, L. (2009). The role of advocacy in shaping immigrant education: A California case study. Teachers College Record, 111(3), 817–850.


Pennebaker, J. W. (2011). The secret life of pronouns: What our words say about us. Bloomsbury Press. 


Pennebaker, J. W., Chung, C. K., Ireland, M., Gonzales, A., & Booth, R. J. (2007). The development and psychometric properties of LIWC2007. LIWC.net.


Pennebaker, J. W., & King, L. A. (1999). Linguistic styles: Language use as an individual difference. Journal of Personality and Social Psychology, 77(6), 1296–1312.


Pennebaker, J. W., & Lay, T. C. (2002). Language use and personality during crises: Analyses of Mayor Rudolph Giuliani’s press conferences. Journal of Research in Personality, 36(3), 271–282.


Poole, W. L. (2001). The teacher unions’ role in 1990s educational reform: An organizational evolution perspective. Educational Administration Quarterly, 37(2), 173–196.


Putnam, R. D. (1995). Tuning in, tuning out: The strange disappearance of social capital in America. PS: Political Science & Politics, 28(4), 664–683.


Reagans, R., & McEvily, B. (2003). Network structure and knowledge transfer: The effects of cohesion and range. Administrative Science Quarterly, 48(2), 240–267.


Riker, W. H., Calvert, R. L., & Wilson, R. K. (1996). The strategy of rhetoric: Campaigning for the American Constitution. Yale University Press.


Schattschneider, E. E. (1960). The semi-sovereign people. Holt, Rinehart and Winston.


Scott, J. (2000). Social network analysis (2nd ed.). Sage.


Scott, J. (2009). The politics of venture philanthropy in charter school policy and advocacy. Educational Policy, 23(1), 106–136.


Scott, J. T. (2011). Market-driven education reform and the racial politics of advocacy. Peabody Journal of Education, 86(5), 580–599.


Scott, J., & Carrington, P. (2011). The SAGE handbook of social network analysis. SAGE.


Scott, J., Lubienski, C., & DeBray-Pelot, E. (2009). The politics of advocacy in education. Educational Policy, 23(1), 3–14.


Smith, M. S., & O’Day, J. A. (1991). Systemic school reform. In S. H. Fuhrman & B. Malen (Eds.), The politics of curriculum and testing: The 1990 yearbook of the Politics of Education Association (pp. 233–267). Falmer Press.


Sniderman P. M., & Theriault S. M. (2004). The structure of political argument and the logic of issue framing. In W. E. Saris & P. M. Sniderman (Eds.), Studies in public opinion: Attitudes, nonattitudes, measurement error, and change (pp. 133–165). Princeton University Press.


Stein, S. J. (2004). The culture of education policy. Teachers College Press.


Stirman, S. W., & Pennebaker, J. W. (2001). Word use in the poetry of suicidal and nonsuicidal poets. Psychosomatic Medicine, 63(4), 517–522.


Supovitz, J., Daly, A. J., & Del Fresno, M. (2018). The Common Core debate on Twitter and the rise of the activist public. Journal of Educational Change. https://doi.org/10.1007/s10833-018-9327-2


Supovitz, J., Daly, A., Del Fresno, M., & Kolouch, C. (2017, May). #commoncore Project, Part 2. http://www.hashtagcommoncore.com


Supovitz, J., & McGuinn, P. (2017). Interest group activity in the context of Common Core implementation. Educational Policy. Advance online publication. https://doi.org/10.1177/0895904817719516

Supovitz, J., & Reinkordt, E. (2017). Keep your eye on the metaphor: The framing of the Common Core on Twitter. Education Policy Analysis Archives, 25(31). http://dx.doi.org/10.14507/epaa.25.2285


Tenkasi, R., & Chesmore, M. (2003). Social networks and planned organizational change. Journal of Applied Behavioral Science, 39(3), 281–300.


Tyack, D. B., & Cuban, L. (1995). Tinkering toward utopia. Harvard University Press.

Uzzi, B. (1997). Social structure and competition in interfirm networks: The paradox of embeddedness. Administrative Science Quarterly, 42(1), 35–67.


Van Dijk, T. (1997). Analysing discourse analysis. Discourse & Society, 8(1), 5–6.


Vinovskis, M. A. (1996). An analysis of the concept and uses of systemic educational reform. American Educational Research Journal, 33(1), 53–85.


Walker, J. L. (1991). Mobilizing interest groups in America. University of Michigan Press.


Wang, Y., & Fikis, D. (2017). Common Core standards on Twitter: Public sentiment and opinion leaders. Educational Policy. Advance online publication. https://doi.org/10.1177/0895904817723739


Wetherell, M., & Potter, J. (1988). Discourse analysis and the identification of interpretative repertoires. In C. Antaki (Ed.), Analysing everyday explanation (pp. 168–183). Sage.


Winter, D. G. (1978). Navy leadership and management competencies: Convergence among interviews, test scores, and supervisors’ ratings [Unpublished paper]. Wesleyan University and McBer & Company.


Winter, D. G. (1980). An exploratory study of the motives of Southern African political leaders measured at a distance. Political Psychology, 2(2), 75–85.


Winter, D. G. (1987). Leader appeal, leader performance, and the motive profiles of leaders and followers: A study of American presidents and elections. Journal of Personality and Social Psychology, 52(1), 196–202.


Winter, D. G., & McClelland, D. C. (1978). Thematic analysis: An empirically derived measure of the effects of liberal arts education. Journal of Educational Psychology, 70(1), 8–16.

Wooffitt, R. (2005). Conversation analysis and discourse analysis: A comparative and critical introduction. Sage.



Summary of Libraries Used to Measure Mood, Drive, Conviction, and Thinking Style

Library Name                  Example Words From Library

Anger Words                   dumb, fight, frustrated
Focus Present Words           ask, die, go, infer, meet
I Words                       I, my, me, I’m
You Words                     you, your, u, you’re

Focus Past Words              came, did, gave, felt, got, saw
Noun Words                    children, education, amendment
Positive Emotion Words        free, helping, please
We Words                      we, our, us

Focus Future Words            wants, will, going, tonight
I Words                       I, my, me, I’m
Sad Words                     suffer, failing, lost, reject

Power Words                   big, control, demand

Affiliation Words             our, colleague, we, alliance, encourage

Achievement Words             overcome, proud, tried

Auxiliary Verbs*              is, will, have, are
                              how, so, and, as
Discrepancy Words*            must, need, if
I Words                       I, my, me, I’m
Negative Emotions             rotten, wrong, problem, defend
                              one, five, sixth, year, grade
Positive Emotions*            easy, free, please, ready
                              his, you, your, we, our
Social Words*                 human, kids, public, talking, love
Time Words                    now, stop, new, end
Word Length > 6 chars
You Words*                    you, your, u, you’re
3rd-Person POV*               his, he, they, their

Causal Words                  reasonable, how, using, because
                              how, so, and, as
Insight Words                 know, learn, think, explain
                              don’t, no, not, can’t
                              parents, help, our, we
                              more, all, every, much, another
Tentative Words               if, or, try, may

Article Words                 a, an, the
Common Adverbs*               how, why, just, so, about
Discrepancy Words*            must, need, if
I Words*                      I, my, me, I’m
                              parents, help, our, we
Word Length > 6 chars
3rd-Person POV                his, he, they, their
Common Adverbs                how, why, just, so, about
                              how, so, and, as
                              his, you, your, we, our
Social Words                  human, kids, public, talking, parents, love, fight

* Reverse coded during analysis.
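The libraries above are applied as simple word counts: a tweet's score on a library is the share of its words that appear in that library (reverse-coded libraries are inverted before aggregation). A minimal sketch of this scoring step, using tiny made-up libraries seeded with the example words above (the full LIWC dictionaries are proprietary, so these sets are illustrative only):

```python
import re

# Illustrative word libraries (truncated examples, not the actual LIWC lists).
LIBRARIES = {
    "anger": {"dumb", "fight", "frustrated"},
    "positive_emotion": {"free", "helping", "please"},
    "affiliation": {"our", "colleague", "we", "alliance", "encourage"},
}

def library_scores(text):
    """Return each library's hit count as a share of total words in `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty tweets
    return {name: sum(w in lib for w in words) / total
            for name, lib in LIBRARIES.items()}

scores = library_scores("We encourage our colleague to fight for free education")
# e.g., 4 of the 9 words ("we", "encourage", "our", "colleague") are
# affiliation words, so scores["affiliation"] is 4/9.
```

In the study itself these per-tweet proportions were standardized and compared across factions with ANOVAs; the sketch covers only the counting step.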

Cite This Article as: Teachers College Record, Volume 122, Number 6, 2020, pp. 1–32. https://www.tcrecord.org ID Number: 23304

About the Author
  • Jonathan A. Supovitz
    University of Pennsylvania
    JONATHAN A. SUPOVITZ, Ph.D., is a professor of education policy and leadership at the University of Pennsylvania’s Graduate School of Education and executive director of the Consortium for Policy Research in Education (CPRE).
  • Christian Kolouch
    University of Pennsylvania
    CHRISTIAN KOLOUCH is a researcher at the Consortium for Policy Research in Education at the University of Pennsylvania.
  • Alan Daly
    University of California at San Diego
    ALAN J. DALY, Ph.D., is a professor of education in the Department of Education Studies at the University of California at San Diego.