Teaching “Against” Social Media: Confronting Problems of Profit in the Curriculum

by Daniel G. Krutka, Stefania Manca, Sarah M. Galvin, Christine Greenhow, Matthew J. Koehler & Emilia Askari - 2019

Educators increasingly teach with social media in varied ways, but they may do so without considering the ways in which social media corporations profit from their uses or compromise transparency, equity, health, safety, and democracy through the design of platforms. There is a lack of scholarship that addresses the curricular topics that educators might investigate to teach about social media platforms and the potential challenges they pose for education and society. In this article, we draw on sociotechnical theories that conceive of social media as microsystems to understand the relationship between users, education, and social media companies. We identify and describe five topics concerning social media design that educators can consider and investigate with students in a variety of settings: user agreements and use of data; algorithms of oppression, echo, and extremism; distraction, user choice, and access for nonusers; harassment and cyberbullying; and gatekeeping for accurate information. In each case, we suggest curricular possibilities for teaching about social media platforms that draw from intersections of curriculum, media, and educational studies.

Across much of the world, social media increasingly mediates various aspects of people's personal, communal, and political lives. In the United States, more Americans find news via social media than via newspapers (Shearer, 2018); 90% of teens use social media, and almost half claim to be "almost constantly" online (Jiang, 2018); and more than two thirds (68%) of adults use Facebook, with almost three quarters (74%) of those users visiting the site at least once a day (Gramlich, 2018). By not charging for data through their Free Basics initiative, Facebook provides the primary access to the Internet for many citizens in more than 40 countries (Yim, Gomez, & Carter, 2016). Educators have responded disparately to this seismic media shift. While some teachers and professors ignored or dismissed social media, some banned it in their schools or classrooms, and others embraced these platforms in assorted ways (Carpenter & Krutka, 2014). Educators from kindergarten to higher education have used social media platforms to varying degrees for their own professional development (Greenhalgh & Koehler, 2017; Greenhow & Askari, 2017; Greenhow, Galvin, Brandon, & Askari, in press; Manca & Ranieri, 2016b, 2016c; Niu, 2019; Veletsianos & Kimmons, 2016), taught with social media for varying in-class and out-of-class activities with students (Gao, Luo, & Zhang, 2012; Greenhow & Gleason, 2012; Greenhow, Menzer, & Gibbins, 2015; Greenhow & Robelia, 2009a, 2009b; Krutka, Bergman, Flores, Mason, & Jack, 2014; Manca & Ranieri, 2016a; Nowell, 2014), used social media for communication with parents or community members (Cho & Jimerson, 2017; Ophir, Rosenberg, Asterhan, & Schwarz, 2016), and engaged in various forms of activism (Gonzalez-Lizarraga, Becerra-Traver, & Yanez-Diaz, 2016). However, we believe there is a greater need to teach about social media as capitalist, profit-based companies, particularly in light of the numerous scandals that have hit social media companies in recent years.
Because Facebook, for example, does not sell a product to users, the product is often us, the users, as our personal data and consumer attention are monetized, and voter attention is strategized. The revelation that Cambridge Analytica harvested millions of Facebook users' personal data without consent for political purposes, data that informed targeted ads from both political candidates and Russian operatives (Dutt, Deb, & Ferrara, 2019), resulted in increased outcry from users, politicians, and consumer activists (Perrin, 2018). A class action lawsuit recently revealed that Facebook pushed developers of games like Angry Birds and PetVille to encourage "friendly fraud," which permitted children to spend money without the permission of parents (Halverson, 2019). Facebook has even created "shadow profiles" for, and tracked, nonusers while also collecting data from text messages beyond their sites or apps (Tufekci, 2018a). While Facebook CEO Mark Zuckerberg has repeated a pattern of violating user trust or privacy and then issuing similar apologies to the community, Tufekci (2018a) argued that its neglect of user data privacy is a combination of self-serving impulses: namely, profit motives, the structural incentives inherent to the company's business model, and the one-sided ideology of its founders and some executives. These scandals, ideology, and business model underscore the reality that Facebook is a publicly traded company driven by profit motives.

While social media companies present unique problems for educators, problems of profit are not new to the field of education. Public schools are institutions that should serve the common good, but neoliberal capitalist reformers have sought to reframe schools around the ethics of competition and profit (Apple, 2006). Similarly, the educational technology sector has increasingly infiltrated schools, and networks of corporate and state policy actors are progressively subsuming public education interests into corporate concerns (Player-Koro, Rensfeldt, & Selwyn, 2018; Selwyn, 2016; Selwyn, Nemorin, Bulfin, & Johnson, 2018). Schools and districts often sign exorbitant contracts to be "Microsoft," "Apple," or "Google" schools that provide these corporations with valuable market exposure to young consumers and their data (e.g., Singer, 2017). Similarly, while some social media platforms are designed specifically for education, other startups and corporations profit from educational uses of their "free" social media platforms. Social media companies are unique in their microtargeting of personalized ads, their delayed attention to and resources dedicated to preventing individual and collective harm, their indecipherable or even surreptitious collection and selling of personal data, and their apps designed for ubiquitous personal smartphones to distract users wherever they go (Vaidhyanathan, 2018; van Dijck, Poell, & de Waal, 2018). We believe that educators must confront, or teach "against," the components of social media driven by neoliberal, profit-motivated impulses. Educators, particularly those of us encouraging students to log on to these platforms, should consider questions such as: How do social media companies make money? To what degree do profit motives compromise transparency, equity, health, safety, and democracy? And, finally, how should educators and their students respond in the face of the moral and ethical failures of technology in society (Frank & Torphy, 2019, this yearbook; Luppicini, 2010)?
In this exploratory chapter, we seek to identify key issues, frame discussions, and offer curricular suggestions for educators and scholars. We will address how educators and their students can seek to investigate issues, raise awareness, and possibly effect change with the aim that companies might:

1. be transparent in user agreements and the use of data;
2. create algorithms to decrease oppression, echo, and extremism;
3. design to reduce distraction, increase transparency, and encourage user choice;
4. thwart harassment and cyberbullying, identity impersonation, and various forms of violence; and
5. create gatekeeping mechanisms to address information accuracy, disinformation, and democracy.

In the next section, we frame social media as sociotechnical microsystems and then draw on these ideas across each section of the chapter.

CRITICAL VIEWS OF SOCIAL MEDIA AS MICROSYSTEMS

Online platforms have penetrated and disrupted every sector of Western societies and contributed to shifts in economic markets and labor relations, political institutions and democratic processes, civic practices and personal relations (van Dijck et al., 2018). Scholars in media studies and related disciplines have gradually focused more on the social, material, cultural, and political dimensions of these infrastructures. Considering these rapid shifts and disruptions, Plantin and Punathambekar (2019) contended that many of the most widely used digital platforms "now seem to operate as infrastructures themselves" (p. 163). We believe this infrastructural perspective can frame our understanding of these digital platforms and their impacts on education. With this broad conceptual background in mind, in this chapter, we draw on critical information studies and sociotechnical theories that conceive of social media as microsystems to understand the relationship between users, education, and social media companies. Critical studies scholars who employ a sociotechnical approach offer a conceptual lens for understanding social media platforms and their effects on our lives as citizens (Bijker, Hughes, & Pinch, 1987; Huysman & Wulf, 2006; Williams & Edge, 1996). Like any technological device or service, social media are multilevel, sociotechnical systems whose information (software) level is strictly intertwined with individual and societal levels. That is, social media are interlinkages between technology, people, and the social environments that they inhabit; these intersections are influenced by organizational and technological components of social media platforms and how these components are conceived and implemented. For instance, social media companies make decisions about the forms of communication they support (e.g., text and word restrictions, page templates, video, images, likes, emoji), the collection and sharing of user data, and the design of algorithms to privilege certain types of posts or their threading. In turn, users constantly negotiate whether and how to appropriate social media platforms in their quotidian habits, from leisure to financial services (e.g., Durkin, Mulholland, & McCartan, 2015). Thus, social media can be understood as microsystems whereby social media companies both influence and reflect values of societies, such as the neoliberal and libertarian ethics of Silicon Valley. Social media have affected diverse institutions like politics, work, family, religion, and education.
The influence that social media have exerted on society at all levels can be understood as a two-way process of mediatization by which social media, as an independent institution with a logic of its own, contribute to the transformation of social institutions and become part of those same institutions (Couldry & Hepp, 2017; Hepp, 2013; Hjarvard, 2013). Furthermore, a sociotechnical perspective emphasizes digital microsystems as both technocultural constructs and socioeconomic structures (van Dijck, 2013a). That is, social media platforms can be understood as multilayered microsystems consisting of six interdependent components, three components for each of the two main interrelated layers: the technocultural and the socioeconomic. In this model, the technocultural layer consists of (1) a technology component, namely, a number of services that help encode activities into a computational architecture that steers user behavior; (2) a user/usage component, which consists of user agency (e.g., choices users make) and usage or participation in the system; and (3) a content component, which determines the standardization of content and the uniform delivery of products or services. The socioeconomic layer consists of (1) an ownership component, which governs commercial and nonprofit social media platforms according to different policies; (2) a governance component, which is constituted by technical and social protocols and sets of rules for managing user activities; and (3) a business model component, which mediates the engineering of connectivity through subscription or free-use models. Considering the technocultural layer of the model, social media platforms can be understood as a set of relations that constantly need to be performed through encoded (i.e., programmed) sociality in order to shape different kinds of social connections. According to this view, social media platforms are sociocultural constructs whose technology components use (meta)data, algorithms, protocols, interfaces, and default settings to encode sociality at the same time, and by the same means, that they may exploit it. Features like Facebook's News Feed, for instance, allow users to monitor members' activities and generate suggestions for new friends or new posts or events with which to engage. As a professional social networking service, LinkedIn codes connections between professionals or job seekers and employers, and e-commerce companies such as Amazon code customers' preferences and buying behaviors. Moreover, social media platforms afford implicit participation (i.e., usage inscribed in the engineers' design by means of the coding mechanisms) and explicit participation (i.e., how users interact with social media) by means of standardized content to facilitate connectedness among users. The former is enacted when users fill their profiles with personal information, and the latter when they like or comment on Facebook posts. As opposed to connectedness, connectivity refers to automated forms of connections that are engineered and manipulated on social media platforms. For example, while Facebook and LinkedIn deploy similar principles of connectivity and narrative, Facebook imposes an algorithm-based News Feed on every member's page to convey friends' posts, and LinkedIn displays features that afford professional self-promotion (van Dijck, 2013b). In social media platforms like Twitter, a double paradox is implemented.
First, while the functions of following and trending presume that all users are equal and all content is carried indiscriminately, in practice, filtering mechanisms inscribe more attention to some Twitter users and tweets, thus favoring certain accounts and trends. Second, although Twitter claims to be an online "town hall" for networked communication, the platform has proved to be a powerful instrument for opinion manipulation (van Dijck, 2013a). Moreover, academic social network sites have been found to reinforce cultures of competitive self-monitoring and self-branding among scholars that reflect neoliberal and positivist university policies to measure quantifiable impact (Duffy & Pooley, 2017). In addition to their technocultural aspects described earlier, which can foster and exploit sociality, social media also have socioeconomic structures and can be analyzed in terms of their ownership status, governance, and business models. Ownership components govern commercial and nonprofit platforms according to different policies, while governance regards mechanisms by which communication and data traffic are managed and includes terms and conditions agreements that govern, and legally frame, the social media provider-user relationship. Over time, the ownership status of most social media platforms has changed from nonprofit to corporate owner-centered enterprises (e.g., Facebook's initial public offering was in 2012) in order to yield power to investors and annex valuable algorithms and patents owned by other companies. While not necessarily classified as social media, Google offers a related example of increasing control over end-user experience and user data through the ever-growing integration of search engines, user-based software systems, and online advertising systems. At the same time, partnership agreements between platforms or digital media companies and API-based services to gain mutual access to data streams have become common. Terms of service, which rule the use of (meta)data by service providers and third parties, include clauses about the platform owner's right to use or sell these (meta)data, while very few define the rights of users to access their data or to control how their data are mined or resold. Moreover, because of varying regulations in national privacy laws (e.g., the European Union's General Data Protection Regulation), control over the terms of service is primarily in the hands of social media companies that can change conditions without users' prior consent. Finally, business models mediate the engineering of connectivity (i.e., automated forms of social connections) through subscription models, such as in the case of social media platforms like ResearchGate and Academia.edu, which have supplemented their basic free-of-charge services with paid subscriptions as part of their overall business plan. However, in general "it is far from transparent how Facebook and other platforms utilize their data to influence traffic and monetize engineered streams of information" (van Dijck, 2013a, p. 12). Overall, most social media services have gradually adjusted privacy norms and accepted monetization based on the management of users' data, while the model of free user services and free content is made possible through continuous microtargeting of advertising to specific users.
However, many users are well aware of a platform's commercial motives and profit-driven strategies, yet they still make calculated decisions whether to utilize it based on how much they will benefit (van Dijck, 2013a, p. 41). Social media platforms like Facebook often espouse values of "web for all" and "free speech" messages, but these values that promote civic participation, open knowledge sharing, and transparency of monetization are often subsumed by profit interests (Friesen & Lowe, 2012; Pooley, 2017). As sociotechnical social media microsystems are increasingly means of mediatization, van Dijck (2015) cautions against the influx of California-based platforms into various parts of the United States, Europe, and Asia, which has resulted in the Internet as a "corporately steered, data-based vehicle that shapes the world's communication and information needs" (p. 1). Recently, Zuboff (2019) dubbed the profit model of Google, Facebook, and other companies, which "unilaterally claims human experience as free raw material for translation into behavioral data" (p. 8), "surveillance capitalism." In this section, we have considered social media as microsystems as a lens through which to interpret the problems created by profit-driven companies and analyze how educators might teach against them for transparency, democracy, and justice. In the next sections, we identify five primary areas of concern: user agreements and use of data; algorithms of oppression, echo, and extremism; design for distraction, easy user choice, or access for nonusers; harassment and cyberbullying; and gatekeeping for accurate information. We explain and frame each issue and then offer possible ways educators might respond.

BE TRANSPARENT IN USER AGREEMENTS AND THE USE OF DATA

With regard to the socioeconomic layer, the governance component is conceived to design and implement the set of rules for managing user activities and all the data concerned with those uses. As argued earlier, concerns about privacy and the misuse of private user data by social media companies need to be addressed by educators, who might teach with the aim of raising awareness about privacy issues and advocating for companies to increase transparency in user agreements and their uses of data. Social historians note that privacy is not an inalienable human right, but rather a social concept that varies across communities, generations, and individuals (Moore, 2017). The United States has an ongoing history of court battles to determine whether, and to what degree, privacy is a constitutionally protected right. As digital communications technology and social media have become increasingly ubiquitous over the past two decades, corporations managing that technology have gathered increasing amounts of data about their users, often enabled by terms of service agreements. Digitization and datafication are progressively inscribed and embedded in everyday life, and mechanisms of dataveillance can be both manifest and hidden mechanisms of social oppression (Noble, 2018; Schäfer & Van Es, 2017; van Dijck, 2014). Social media companies have faced criticisms not only for their mishandling of users' exposure to hacks, but also, in the case of Facebook, for deliberate violation of privacy stipulations in their user agreement (Romm & Dwoskin, 2019).
The harvesting of personal information by a third party, Cambridge Analytica, which then used those data for the microtargeting of cursory political ads, brought the issue of social media data privacy to public and legislative attention. In fact, following these revelations, many U.S. Americans changed their relationship with Facebook by either adjusting their privacy settings, taking a break from the service, or deleting the app or their account entirely (Perrin, 2018). In this digital context, citizens should be concerned about electoral manipulation and eroding expectations of privacy. Citizens, and media educators, around the world are demonstrating increasing concerns about the dangers in how businesses monetize users and protect personal data. In Europe, for instance, the Horizon Report Europe stressed that "with social media use rapidly growing across Europe there is a need for schools to develop policies and guidelines so that students can better and more safely leverage these platforms and tools for learning" (Johnson et al., 2014, p. 10). The passage and implementation of the General Data Protection Regulation (EU) 2016/679, which regulates European Union (EU) law on data protection and privacy for all individuals within the EU and the European Economic Area, also addressed the export of personal data outside the EU and European Economic Area. The main aim of the General Data Protection Regulation is to give control to individuals over their personal data and simplify the regulatory environment for international business by unifying the regulation within the EU. However, research suggests that tweens, teens, and other young adults do not consider the loss of privacy a top social media concern. In Anderson and Jiang's (2018) survey of 743 teens across the United States, 76% said that social media had a positive or neutral impact on their lives, whereas just 24% reported that social media's effect on them was negative overall. Among concerns mentioned by youth who felt that social media was a negative influence, privacy did not make the top five reasons listed; teens were more concerned about bullying, relationship harm, unrealistic views of other people's lives, spending too much time on social media, and peer pressure. These results are consistent with Pangrazio and Selwyn's (2018) study, in which the authors focused on how groups of teens responded after they were made aware of the ways in which their social media data were used by advertisers and other data customers. Youth expressed a range of motivations that prevented them from more actively managing the privacy of their personal social media data. First, participants ages 13–17 reported their disinclination to challenge the social norm of openness created by social media platforms. As the authors stated, "Interacting with others online is a statement of trust and there was a reluctance to jeopardize that through behaving abnormally" (Pangrazio & Selwyn, 2018, p. 7). In addition, participants found the complex task of data management daunting. While youth were not overly concerned with most forms of data, they were disturbed when presented with their own specific geolocational tracking data. Other studies suggest that youth care about privacy in other ways. In a national survey of 802 U.S. teens, Madden et al. (2013) found that while teens were increasingly sharing information about themselves, they did take some steps to restrict and prune their profiles.
Similarly, boyd's (2014) ethnographic work suggested that youth do care about privacy, but often in ways related to privacy from, for example, parental and authority figures. Both areas of interest and disinterest regarding privacy and data use can be part of a media education curriculum that helps students make informed individual and social decisions.

TEACHING ABOUT PRIVACY AND TERMS OF SERVICE

Educators can help students not only better understand the landscape of social media privacy and data collection laws and policies, but also identify possible ways to take action on these issues. Pangrazio and Selwyn (2019) recommended a need for education around personal data literacies that include five domains: data identification, understandings, reflexivity, uses, and tactics. In this approach, students seek answers to the following questions:

● What are personal data?
● What are the origins, circulations, and uses of these different types of personal data?
● What are the implications of these different types of personal data for myself and others?
● How can I manage and make use of these different types of personal data?
● How can I "do" personal data differently?

Answering such questions can encourage users to make informed decisions about uploading an image to Facebook, turning on geolocation settings on an app, or holding companies accountable for using data responsibly. Moreover, we offer two general ways to confront problematic data collection and indecipherable user agreements: industry self-regulation or regulatory legislation. To achieve the former, students might pressure social media companies to increase transparency and respect privacy. For example, browser apps such as AdBlock (2018) have enabled Internet users to view which companies buy information about users on websites they visit, including social media sites. Could social media companies integrate such features into their design? Not surprisingly, Facebook actively inserted code to block ad transparency tools created by several organizations (e.g., ProPublica, Mozilla, Who Targets Me) in late 2018 (Merrill & Tobin, 2019), which suggests that the company requires public pressure to shift its policies. Toward regulatory legislation, U.S. senators have sponsored, but not as of this writing passed, the Honest Ads Act aimed at increasing the transparency of online political advertisements (Merrill & Tobin, 2019). Students might create media to raise awareness about such legislation and work with organizations to lobby legislators. Tufekci (2018b) proffered that social media companies could start charging users for their services and stop collecting and selling so much user data. In other words, companies such as Facebook, Twitter, and Google could make their services, not their users' data, the product. Of course, such changes would require addressing how pay models might increase the digital divide for citizens who cannot afford access to the network. A partial step would be to continue to sell some user data in aggregate, but to stop collecting and selling data at the level of the individual user. Pushes for regulatory legislation in Europe offer starting points for what laws may be necessary in other countries to force or incentivize social media companies to respect their users with transparent and ethical data privacy policies. Furthermore, education about social media privacy and data collection laws and policies is also needed.
Peltier, Milne, Phelps, and Barrett (2010) offer detailed suggestions about how to teach about privacy to marketing students, presumably at the postsecondary level. Barrett, an author and the Global Privacy and Public Policy Executive at Acxiom (p. 19), explained that his company is among the largest private data brokers in the world and scrapes information from public social media posts into profiles it sells to advertisers (Acxiom, 2018; Singer, 2012). Barrett said that for practitioners, "the marketing curriculum needs to play an increased role in helping develop faculty and students' sensitivity to information privacy so that in the future there is less rather than more calls for regulation of marketing practices" (Peltier et al., 2010, p. 242). Moreover, in U.S. law schools, privacy has long held a place in standard course lists, covering the federal and state Freedom of Information Acts and, more recently, legislation and case law focused on digital information, including social media data. There is less curriculum for teaching privacy on social media beyond the fields of business and law. Perhaps the most direct approach to increasing understanding of social media user agreements, privacy, and digital data is to encourage students to read, even in part, the agreements themselves. Students might develop criteria for ethical and user-friendly user agreements, apply those criteria to terms of service, and then begin media campaigns (e.g., social media posts, videos, persuasive letters) aimed at pressuring companies about unethical data collection. Before developing criteria, students might refer to the site Terms of Service; Didn't Read (2018), a nonprofit that rates terms of service agreements created by social media companies and other digital businesses. For another perspective, students could visit the site of a legal-document generation service such as Rocket Lawyer (2018) and consider the choices and rationale they might provide when creating their own terms of use for a hypothetical website. A related curricular suggestion is to encourage students to study, and reflect on, how and why companies lobby the U.S. Congress on privacy issues. Because many social media companies are headquartered in the United States (including Facebook, Instagram, WhatsApp, Snapchat, and YouTube), attempts by U.S. lawmakers to legislate privacy and data collection impact millions of social media users globally. ProPublica, a nonpartisan and widely respected journalism nonprofit, maintains a database tracking congressional lobbying efforts by companies and industries. Data are collected from filings that lobbyists are required by U.S. law to submit to the U.S. government. A search of privacy-related lobbying (Lobbying Arrangements, 2018) found that clients hiring lobbyists in 2018 to influence members of Congress about privacy-related legislation included the Direct Marketing Association, the Association of National Advertisers, Google, Comcast, and little-known groups such as the 21st Century Privacy Coalition and the National Business Coalition on E-Commerce and Privacy. Although no social media companies are listed as individual lobbying clients on privacy in 2018, social media companies are key participants in information-industry groups. Researching these groups, their lobbyists, and the bills for which they advocate could help students to better understand privacy issues, how lobbying influences legislation, and what action might be needed.
In some cases, social media companies post interesting case studies on the impact and reach of advertising on their sites (Facebook, 2018; Instagram, 2018; YouTube, 2018). Although these case studies are authored for potential advertisers, they can be analyzed by students for insights into the commercial value of the data that they, as social media users, provide to Facebook and Google for free. Many students likely will see the value to users of receiving targeted ads, as well as the value to the advertisers. Of course, social media data can be of value to civic-oriented institutions as well as advertisers. Columbia University offers a case study and teaching prompts exploring ethical questions surrounding how a news organization in New Jersey used public Facebook posts in its coverage of a college student's murder (Mizner, 2010). Finally, a variety of videos online detail digital privacy concerns that could make useful prompts for classroom discussion (CollegeHumor, 2016; Hoback, Khana, & Ramos, 2013; Oliver, 2018a, 2018b; Messick & Gavrilovic, 2014; Science Studio, 2018). Because these issues influence not just youth but also adults and families, educators might consider offering the community resources or workshops.

CREATE ALGORITHMS TO REDUCE OPPRESSION, ECHO, AND EXTREMISM

At the technocultural layer, the influence of algorithms in our social media ecology subtly governs what, when, and how content is presented to users. The very existence of these algorithms is hidden or opaque to most users, and very few understand what specific algorithms are used to populate their feeds and timelines in various social media platforms. This problem led Jaron Lanier (2018) to ask, "How can we remain autonomous in a world where you are under constant surveillance and are constantly being prodded by algorithms run by some of the richest corporations in history, which have no way of making money except by being paid to manipulate your behaviour?" (p. 2). Designers of any platform must use algorithms to specify what content is displayed and in what order: Instagram, YouTube, Facebook, and Twitter all do this. In doing so, algorithms do the work of encoding the computational architecture that steers users' behavior. Algorithms can range from simple (e.g., display all information or posts in chronological order) to complex (e.g., based on prior activity and choices). For example, Facebook has shifted from a simple algorithm that displayed posts in chronological order, to a more complex EdgeRank algorithm calculated from three factors (i.e., affinity, weight, and time decay scores), to now using an even more complex algorithm based in machine learning with thousands of factors (Bucher, 2012). These algorithms are "not merely modelled on a set of pre-existing cultural assumptions, but also on anticipated or future-oriented assumptions about valuable and profitable interactions that are ultimately geared towards commercial and monetary purposes" (Bucher, 2012, p. 1169). By making visible the algorithms that social media platforms use, educators might leverage them to teach ways to reduce oppression, echo chambers, and extremism. Social media algorithms often operate without users' conscious understanding of what choices are made on their behalf. Most adult Facebook users (53%) admit they do not understand how the site's News Feed works, how it classifies their interests, or why certain posts or users are favored (Gramlich, 2018; Hitlin & Rainie, 2019; Smith, 2018a).
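To make the contrast between simple and complex ranking concrete, the following is a minimal, hypothetical Python sketch of an EdgeRank-style score computed from the three factors named above (affinity, weight, and time decay). It is an illustrative simplification for discussion, not Facebook's actual algorithm; the feed items, values, and decay rate are invented.

```python
import math

def edgerank_style_score(affinity, weight, age_hours, decay_rate=0.05):
    """Toy relevance score built from the three EdgeRank factors described above:
    affinity (tie strength between viewer and poster), weight (how much the
    platform values the interaction type), and time decay (older items count
    less). An illustrative simplification, not Facebook's real algorithm."""
    time_decay = math.exp(-decay_rate * age_hours)
    return affinity * weight * time_decay

# Hypothetical feed items: (description, affinity, weight, age in hours)
feed = [
    ("close friend's photo", 0.9, 1.5, 2),
    ("acquaintance's text post", 0.2, 1.0, 1),
    ("page ad with many reactions", 0.1, 2.0, 0.5),
]

# Order the feed the way a simple engagement-driven algorithm might
ranked = sorted(feed, key=lambda item: edgerank_style_score(*item[1:]), reverse=True)
for description, affinity, weight, age in ranked:
    print(f"{description}: score = {edgerank_style_score(affinity, weight, age):.3f}")
```

Changing the decay rate or the weights and re-sorting the same items is one quick way to show students how small design choices reorder a feed.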
The need for increased media and information literacy (Grizzle et al., 2014) stems from a need to understand the negative implications of the choices algorithms make on users' behalf. For example, algorithms can increase the likelihood of users experiencing echo chambers or filter bubbles as posts are prioritized according to prior behavior. Oftentimes, the result is a social media landscape where users can interact almost exclusively with people similar to them socially and politically and receive little attitude-challenging information that may be critical to democratic discourses (Bakshy, Messing, & Adamic, 2015; Messing & Westwood, 2014; Pariser, 2011). The information bubbles in which users live can be leveraged by nefarious and dangerous groups that understand that algorithms prioritize some information over others and find willing partners in social media companies like Facebook. For example, the misinformation campaign conducted by Russia to sow political discord during the U.S. elections was able to simultaneously target Facebook users in the Black Lives Matter (Glaser, 2018) and Blue Lives Matter groups (Ackerman, 2018). These campaigns use many tools to manipulate their messaging, but primarily the approach centers on generating carefully targeted micromessaging, followed by a series of phony networked accounts that simulate high levels of natural user interaction. In doing so, political campaigns leverage algorithms that favor material amplifying their message, so that this activity, these interactions, and the associated ads show up prominently in users' feeds. The same tools are used by extremist and terrorist groups to spread their dangerous messages and recruit followers across various media platforms (e.g., Klausen, 2015), particularly because YouTube algorithms can lead users to more extreme content. Such microtargeting is central to the profit motives of social media companies, namely Facebook. Algorithms can also reinforce oppression. For example, Noble (2018) deftly described disturbing accounts of racism reified by the ubiquitous Google search engine. She documented how simple searches about "black girls" returned pornographic search results, ads, and disturbing auto-complete suggestions when searches were in progress. Examples like these have fueled the need for what Seaver (2018) called an "anthropology of algorithms" that calls for rejecting algorithms as neutral, stable, and technical objects. Instead, he argued that we view algorithms as complex sociotechnical systems in which people operate within cultural systems that bring with them values and context. In a similar vein, many have argued for extending employment practices at technology firms not only to diversify the culture but also to add new voices and break the cycle of problematic conceptions of race present within these cultures (e.g., Noble, 2018). Considering itself a socially benevolent force, Facebook uses algorithms to determine the trustworthiness of users and the likelihood of suicide, with this latter information passed on to law enforcement; the company thereby takes on the dubious role of operating as a health care professional (Spencer, 2019).

TEACHING ABOUT SOCIAL MEDIA ALGORITHMS

Although there is great power in the algorithms that drive social media platforms (Beer, 2017; Noble, 2018), this power is often subtle enough not to be noticed by students (or most users) who only pay attention to the results that algorithms produce.
Before teachers and students can teach against some of the pitfalls created by algorithms (e.g., echo chambers, extremism, disinformation, oppression), teachers and students alike can work to develop an understanding of what algorithms are and how they influence what we see on social media platforms. Few curricula teach about these issues despite their prevalence in the lives of today's citizens. Some educators, however, have begun to address the need for such explorations. For example, Garcia (2016) provided an educator's toolkit to guide an activity in which students discuss, evaluate, and consider implications of Instagram algorithms. Even without ready access to prepared curricula on the role of algorithms and social media, there are many potential avenues to creating experiences for students customized by grade level and prior experience. To increase awareness of the role of algorithms, resources produced by the computational thinking community (Barr & Stevenson, 2011) can be used as a starting point for helping students to understand, in a general sense, what algorithms do and how they work. To specifically address the role of algorithms in social media, teachers can engage students in guided activities that investigate how algorithms influence students' experiences with social media (Vaidhyanathan, 2018). For example, teachers could structure activities that ask students the following questions:

● Why do some Google/Facebook/YouTube searches yield different results for different users? Does it matter who is searching? Does it matter where and when you are searching?
● Out of all your friends or followers on Facebook/Twitter/Instagram, do posts from the same subset of friends occur more regularly than others? Why?

Students can also deconstruct algorithms to better understand their role in social media. Teachers could engage students in figuring out the rules that drive their favorite social media platforms' work. For example:

● Out of all the content on your favorite social media platform, what criteria are used to decide what appears in what order? Is there one factor, or many?
● What can you do when you make content on your favorite social media platform to increase the chances of it being seen by others?
● What can you do within your favorite platform to change what you're seeing? How does that work?

Students can also construct algorithms of their own as a means of understanding how alternative algorithms impact what they see in social media (a simple sketch of such an exercise appears at the end of this section). For example, students could capture a random sample of content from the range of their friends/followers on a social media platform and put each piece of content on an index card. Using these pieces of content, students could:

● propose different sets of rules (algorithms) to prioritize content;
● explore how different rules lead to different orderings of content;
● ask questions about potential biases that stem from each of these new sets of rules; and
● form ideas about what algorithms are fairer, more democratic, or more useful.

Armed with the knowledge of how algorithms work and how they might work differently, students and teachers are better positioned to teach against the pitfalls of algorithms and even advocate for companies or lawmakers to make changes for algorithms that are more transparent, just, and democratic.
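As one way to run the index-card activity described above, the following hypothetical Python sketch applies two competing rule sets, purely chronological and engagement-weighted, to the same invented sample of posts so students can compare the resulting orderings. The posts, fields, and boost values are assumptions for illustration; students would substitute content and criteria drawn from their own feeds.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    minutes_ago: int
    likes: int
    sponsored: bool

# Hypothetical "index card" content gathered by students from a feed
sample = [
    Post("friend_a", "lunch photo", 30, 4, False),
    Post("news_page", "breaking local story", 120, 250, False),
    Post("brand_x", "sneaker ad", 10, 900, True),
    Post("friend_b", "school fundraiser", 300, 12, False),
]

def chronological(posts):
    # Rule set 1: newest first, nothing else considered
    return sorted(posts, key=lambda p: p.minutes_ago)

def engagement_weighted(posts):
    # Rule set 2: most-liked first, with sponsored content boosted
    return sorted(posts, key=lambda p: p.likes + (500 if p.sponsored else 0), reverse=True)

for name, rule in [("chronological", chronological), ("engagement-weighted", engagement_weighted)]:
    print(f"\n{name} ordering:")
    for post in rule(sample):
        print(f"  {post.author}: {post.text}")
```

Comparing the two printed orderings makes the discussion questions above tangible: which rule set surfaces the fundraiser, which buries it, and who benefits from each choice.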
DESIGN TO REDUCE DISTRACTION AND INCREASE TRANSPARENCY AND CHOICE

With reference to the user/usage component of the technocultural layer of social media microsystems, another aspect to consider in teaching about social media is design. Social media companies design their platforms to attract and hold a user's attention: "A flash, a movement, or an odd sound can pull attention away from this page to something in the periphery of your field of vision" (Vaidhyanathan, 2018, p. 80). Technologists therefore develop "persuasive technologies" aimed at shifting users' attitudes or behaviors (Fogg, 2003), which for social media companies amounts to avoiding friction that might result in less time on device. Taking into account these designs for distraction, which generate profits through increased exposure to advertising, educators must teach for greater transparency so that students understand social media platform designs, become equipped to make choices about how they spend their time and attention in these spaces, and consider how to challenge companies that design platforms in ways that encourage unhealthy uses. As mentioned previously, social media platforms are also socioeconomic structures that combine ownership, governance, and business models; the business models of today's social media platforms depend on human attention, a valuable and limited resource that is courted, obtained, and resold by companies that seek to profit in attention markets. Indeed, the last decade has witnessed the rise and spread of a global attention industry, with annual revenue of approximately $500 billion and comprising businesses that depend on the resale of human attention (Wu, 2017, p. 2). Social media platforms are designed to attract attention by offering news, information, free services, and entertainment alongside our social networks: people such as family, friends, coworkers, friends of friends, and online friends whom we trust and know to varying degrees. Social media companies like Facebook, Instagram, YouTube, Twitter, and others act like attention brokers, reselling human attention to advertisers for cash. To challenge the monetization of us, educators must understand and teach about how social media platforms are designed for this attention industry, and with what potential impacts on users, consumers, citizens, and learners. In this section, we provide examples of how social media designs capture users' attention, before turning to recommendations for teaching about social media to reduce distraction, increase transparency, and encourage users' choice. Mason (2019) argued that media education should begin by considering how particular media are created (p. 2). The messages conveyed via the medium of social media are often brief, disjointed, and even shallow when compared with other media forms (e.g., long-form print). Therefore, the business model of social media companies requires keeping your attention on these numerous brief messages. Advertisers must steal the consumer's attention, hold it for a moment, and get the consumer to take some action, like clicking on a web page or application (Vaidhyanathan, 2018). Social media companies use attention-targeting strategies in their designs; they manage the glut of information by deciding for us what is valuable or interesting through algorithms and advertising.
On Facebook, for instance, because users are tagged with the content they upload and share, information (and misinformation and disinformation) can spread by word-of-mouth through one's Friends network. The Ice Bucket Challenge, a campaign to generate money for amyotrophic lateral sclerosis (ALS), provides one example of this. Because a person could name those challenged and tag them, linking profiles to content, more than 2.5 million Challenge videos circulated on Facebook and raised $98.2 million. Tufekci (2017) pointed out that ad-financed platforms "use algorithms, complex software, to control visibility, sometimes drowning out activist messages in favor of more advertiser-friendly content" (p. xxix). This can mean that algorithms amplify feel-good stories like the Ice Bucket Challenge and bury more difficult topics that confront issues of injustice, such as #BlackLivesMatter. The problem, then, is that the issue or cause that makes the catchiest Facebook campaign, with support from social influencers like celebrities, can generate the greatest influx of resources even if it is not especially urgent, important, or even true (Vaidhyanathan, 2018). This has been evident as more extreme political or conspiracy theory videos on YouTube are often recommended by the algorithm aimed at keeping users watching (Tufekci, 2017). Another design-to-distract feature of popular social media is the integration of the commercial and the social; items meant to inform or entertain or persuade (e.g., to buy, donate, play) occur alongside each other (Vaidhyanathan, 2018). User interactions reinforce this blurring of content types as people post updates that are appealing, entertaining, or provocative enough to spark their friends' or followers' engagement (e.g., using like, comment, reply, retweet, share buttons). As users craft their social media profiles, they signal affiliations, interests, and identity; platforms like Facebook detect and harness these signals and patterns of interaction to create a "churning lattice of affiliations" (Vaidhyanathan, 2018, p. 83), which advertisers can use to target consumers, distract them away from some content, and move them to consider a purchase based on interests. Thus, as platforms track patterns in our behavior and preferences as users, ads judged as relevant to us appear alongside posts by our friends. Ironically, we are potentially driven to distraction by the very same social media design features we use to cultivate our online identity and attract others' attention to us. However, different economic models could encourage companies to design platforms that reduce distraction and encourage choice, transparency, and access for nonsubscribers, and companies could actually benefit from adopting these models. Indeed, there are advantages for social media companies in revising their designs and in creating more ethical and balanced spaces for users. For instance, there is some evidence that the current "casino-like" design techniques (Vaidhyanathan, 2018, p. 218) adopted by social media companies described earlier may lead to such technostress and social media exhaustion over time that individuals quit social media temporarily or long term (Cao & Sun, 2018; Luqman, Cao, Ali, Masood, & Yu, 2017). Luqman and colleagues (2017) asserted that as the volume of status updates, photos, and other online materials on Facebook doubles every year, the result is "feature overload and information overload" (p. 552).
Indeed, researchers have found that the social, cognitive, and hedonic uses of Facebook induce stress and fatigue in users, thereby contributing to people's intention to voluntarily discontinue social media use. To mitigate such stress, the researchers suggest that social media designers provide more opportunities for users to manage feature and information overload. Platforms could be designed, for instance, to give users choices over the configuration of their posts or messages and whether they are visible or remain hidden from view (Luqman et al., 2017).

TEACHING ABOUT SOCIAL MEDIA DESIGNS

Educators can help students understand social media designs and actions they can take individually to mitigate social media distraction, stress, and fatigue and to collectively challenge social media companies' platform designs that encourage unhealthy uses. Here are a few suggestions for getting started:

● Ask students: What do you do on social media and why? How do you think your social media platforms influence your actions positively and negatively? In addition, applications like AntiSocial, or a social media diary assignment (Damico & Krutka, 2018), can help students become more conscious of the quantity and quality of time spent on social media, and their uses, habits, and preferences.
● Talk with students about aspects of social media platform designs addressed in this chapter. Ask students: Why are many platforms designed to distract? How do social media companies benefit from distraction? What are the advantages or disadvantages of social media design features for individuals and for society?
● Encourage students to abstain from social media or go on a "techno-fast," during which they reflect in a journal or in class discussion about what they notice during this time. What are they experiencing emotionally, socially, cognitively?
● Introduce students to mindfulness, or the psychological process of bringing their attention to experiences occurring in the present moment, and practice strategies that help students focus their attention and reflect on how to be intentional, contemplative, and mindful when online (Levy, 2016; Rheingold, 2014).
● Join movements that emphasize a more balanced, thoughtful approach to the social media ecology, like that of Silicon Valley parents who intentionally limit screen time for their children because of concerns about technology's negative impacts on children's and teens' psychological and social development (Weller, 2018).
● Encourage students to engage in hashtag activism or networked campaigns to pressure companies to change aspects of their platforms that induce unhealthy, habitual, or mindless uses.

THWART HARASSMENT AND CYBERBULLYING, IDENTITY IMPERSONATION, AND VARIOUS FORMS OF VIOLENCE

Other issues related to the user/usage component of the technocultural layer concern problems of harassment, cyberbullying, identity impersonation, and other forms of violence on the platforms of social media companies. Cyberbullying and online harassment are widespread issues on social media. Fifty-nine percent of teens (Anderson, 2018) and 22% of undergraduates (Whitaker & Kowalski, 2015) surveyed reported experiencing cyberbullying. Victims of cyberbullying face health problems that include anxiety, lower confidence and self-esteem, depression, mood disorders, increased aggression, psychosomatic symptoms, and suicide (Chen, Ho, & Lwin, 2016; O'Reilly et al., 2018; Watts, Wagner, Velasquez, & Behrens, 2017).
The suicides of young people in the United States, such as Tyler Clementi (2010, age 18, university student), Sadie Riggs (2017, age 15, high school student), and Gabriella Green (2018, age 12, middle school student), provide examples of the painful toll that online bullying and harassment can take in the lives of young people. While the term online harassment is often used interchangeably with cyberbullying, cyberbullying is more specifically defined as one particular type of harassment (Ševčíková & Šmahel, 2009). Cyberbullying is a purposeful and repeated series of actions in online spaces intended to cause harm, such as defamation, embarrassment, denigration, exclusion, deception, or manipulation (Milosevic, 2016; Watts et al., 2017; Whitaker & Kowalski, 2015). While bullying is not a new phenomenon, online spaces offer different forms of access to individuals and audiences. Cyberbullying is distinguished from offline bullying because it affords higher anonymity, lower empathy for victims, increased separation of self from one's actions, and increased access to victims (Cassidy, Faucher, & Jackson, 2013; Hinduja & Patchin, 2015; Watts et al., 2017). On platforms that allow pseudonyms and anonymity, like Twitter, both individuals and organized groups can harass vulnerable, oppressed, or minoritized groups with the aim of intimidating or silencing their targets (Tufekci, 2017). While Facebook's "real-name" policy has caused problems for transgender and Indigenous users in particular, it can help to avoid anonymous attacks in some cases. Laws and policies in every U.S. state confront bullying, cyberbullying, and various forms of harassment (U.S. Department of Health and Human Services, 2017), and 34 states explicitly prohibit cyberbullying in schools (National Conference of State Legislatures, 2015a, 2015b). Despite these legislative efforts, suicide rates among adolescents and college students continue to rise (American Foundation for Suicide Prevention, 2017). Have social media companies done enough to confront this problem? Social media corporations cannot control all elements that lead to cyberbullying, including media exposure and personal and environmental factors (Chen et al., 2016); however, they can still contribute to the maintenance of safer online spaces. Milosevic (2016) reviewed the cyberbullying policies of 14 social media platforms (i.e., Facebook, Facebook-owned Instagram, Twitter, Ask.fm, Google-owned YouTube, Yik Yak, the Secret app, Google+, Yahoo!-owned Tumblr, Snapchat, Whisper, and the messenger apps Voxer, Facebook-owned WhatsApp, and KIK) and interviewed platform representatives, nongovernmental organizations, and e-safety experts in the United States and the European Union. She found that social media corporations' mechanisms for managing cyberbullying included "reporting tools, blocking and filtering software, geofencing [limiting access to a platform by geographic location, such as restricting users on Yik Yak to those within the university campus], human or automated moderation systems such as supervised machine learning, as well as anti-bullying educational materials" (Milosevic, 2016, p. 5165). In addition to cyberbullying, other forms of political and identity harassment can serve to intimidate, censor, or distress users. Users can be harassed by individuals, groups, or even governments through doxing (i.e., disseminating private information about a person or group), campaigns of continuous harassment, or other means.
Targeted political activists have long called on social media companies to do more to stop abuse, demands to which platforms acquiesce with varying urgency and effectiveness, but bots, trolls, and users can often find ways to continue their online assaults. Tufekci (2017) explained the challenge of confronting such attacks:

Meanwhile, throughout most of 2016, Twitter would do fairly little to take down prominent racist or misogynist accounts that were using the platform to organize harassment of minorities. For example, in that same year, I reported a Twitter account that did nothing but tweet pictures of dead children to relatively high-profile accounts, including mine. The response I got from Twitter dryly said, "We reviewed your report carefully and found that there was no violation of Twitter's Rules regarding abusive behavior." On most platforms, a copyright claim is the demand for censorship or takedown that gets enforced most quickly while sustained and organized efforts to drive women, minorities, and dissidents off the twenty-first-century public squares are allowed to flourish. (p. 293)

Such experiences can cause mental anguish, weariness, and other emotions that can result in self-silencing. Reporting (e.g., notifying Instagram of posted images of violence), blocking (e.g., unfriending someone on Facebook), and content filtering tools (e.g., muting words or accounts on Twitter) are perhaps the most common antibullying and antiharassment mechanisms, all of which depend on users to take action. However, as the preceding quote indicates, these mechanisms do not always provide resolution. Social media companies have also employed supervised machine learning, which can work without user prompting by using algorithms to automatically assess content and catch cyberbullying as it happens, rather than waiting until content is reported to the company. Although less accurate than user-dependent mechanisms, this approach can help filter blatant bullying, which may include foul language. For example, after rising concerns from users, Instagram announced its application of DeepText, an algorithm created by Facebook, to identify and remove comments identified as bullying (Holston, 2018). Shortly after loosening privacy settings for teens in October 2013, Facebook launched a Bullying Prevention Hub in collaboration with the Yale Center for Emotional Intelligence (see www.facebook.com/safety/bullying/). More recently, Facebook has launched the "Charting a Course for an Oversight Board for Content Decisions" initiative, which claims the company will abide by an oversight board concerning Facebook's content decisions (Facebook, 2019). Social media users, citizens, and students should follow the appointments, stances, and influence of Facebook's international board of experts and organizations to address matters such as free expression, technology and democracy, procedural fairness, and human rights. However, is this enough to ensure that individual users and groups are protected? On a small scale, youth are able to identify and report inappropriate content they might encounter on social media on a daily basis, but aggressive online attacks that come at a high frequency and possibly from multiple users may still negatively impact victims before reporting and blocking options are enacted.
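To illustrate, in miniature, the supervised machine learning approach to moderation described above, the following hypothetical Python sketch trains a toy comment classifier on a handful of invented labeled examples using scikit-learn. It is not Instagram's DeepText; production systems are trained on vastly larger datasets and, as the surrounding discussion notes, still miss context and make consequential errors.

```python
# Toy supervised text classifier in the spirit of the automated moderation
# systems described above. Requires scikit-learn; the training data here are
# invented and far too small for real use.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical labeled examples: 1 = bullying/harassing, 0 = benign
train_comments = [
    "you are worthless and everyone hates you",
    "nobody wants you here, just leave",
    "great photo, congrats on the award",
    "see you at practice tomorrow",
]
train_labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
classifier = MultinomialNB().fit(vectorizer.fit_transform(train_comments), train_labels)

new_comments = ["everyone hates your posts", "congrats on the new job"]
for comment, flag in zip(new_comments, classifier.predict(vectorizer.transform(new_comments))):
    print(f"{'FLAG FOR REVIEW' if flag else 'ok'}: {comment}")
```

A sketch like this can help students see both why such systems scale better than user reporting and why, trained on word counts alone, they struggle with sarcasm, coded language, and coordinated attacks.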
Social media platforms place much of the onus of safety on users themselves, expecting them to self-protect and self-advocate using provided tools (e.g., blocking aggressors on Facebook or reporting an inappropriate story on Snapchat), but users who are less proficient with these features or who are overwhelmed with bullying content will still be at risk. For instance, Tufekci (2017) wrote about a Turkish woman whose political statements on Twitter were met with such large-scale harassment and bullying that using the platform became impossible. She could not report and block every offending comment on her own. Rather than protecting her expression, Twitter enabled harassers to silence her. In some cases, such harassment is neither random nor unique, but instead a political strategy by groups or authoritarian regimes to silence activists. As Facebook has expanded aggressively across the world, it has rarely employed enough local people who understand the culture and language well enough to identify not just cyberbullying, but also the targeted harassment of political activists (Tufekci, 2017).

There is a notable lack of legal obligations placed on social media companies that might protect individuals like the woman Tufekci described. In fact, no laws are in place that require corporations to maintain particular anticyberbullying policies (Milosevic, 2016). Under Section 230 of the Communications Decency Act in the United States, social media companies are defined only as intermediaries and thus are not liable for cyberbullying on their platforms as long as the corporation was not directly involved in the offending content. While such a law may have made sense for the small start-up Internet companies for which it was passed in the mid-1990s, it seems inadequate for the social media behemoths that exist today. Such laws can also shape what companies may do to protect users; cyberbullying policies that require the corporation to take action may place the company at risk by jeopardizing its intermediary status (Milosevic, 2016). All 50 U.S. states have laws against (cyber)bullying, and all but eight also outline model policies to direct and guide school districts' defense against (cyber)bullying among students (U.S. Department of Health and Human Services, 2017), but these well-intentioned initiatives place no onus on the social media companies themselves. In Michigan, for instance, cyberbullying is punishable by up to five years in jail and a $5,000 fine, yet a platform that facilitates such a crime is left unregulated (DeVito, 2018). In Europe, data protection legislation is now being applied to issues of cyberbullying, online harassment, and identity theft.

TEACHING ABOUT CYBERBULLYING AND HARASSMENT

Numerous researchers studying students from middle school through college agree on the importance of increasing awareness and preparing students with knowledge and strategies to combat bullying (Chen et al., 2016; Hamm et al., 2015; O'Reilly et al., 2018; Watts et al., 2017; White & Carmody, 2016). Schools and parents often support zero-tolerance, punishment-based antibullying approaches, but such interventions can miss meaningful context, potentially causing more harm and leaving students feeling misunderstood (boyd, 2014; Cassidy et al., 2013).
Alternatively, we suggest that teachers take a holistic approach to fostering students' development as digital citizens who can think critically about their social media practices and take individual or social action for safer, more democratic, and more just online spaces. Building self-esteem and learning to show empathy and good character on social media should be at the forefront of cyberbullying lessons. Approaches such as Morrison's (2006) restorative justice practices confront the management of emotions, including shame and pride, within instances of (cyber)bullying. Educators should raise awareness about the interplay between mental health and social media experiences (Cassidy et al., 2013; Chen et al., 2016). Beginning in the elementary years, young students can address questions such as:

● What do kindness and respect look like online?
● How can I communicate respectfully with others?
● What do I do when others are unkind or disrespectful?

Beyond building social-emotional skills, students need to know and feel comfortable using the options they have to protect themselves with blocking features and to report cyberbullying they witness online (Hamm et al., 2015; Watts et al., 2017). Students' first line of defense against cyberbullying comes from their own abilities to recognize, report, and block dangerous content and users. Hands-on explorations of social media platforms and antibullying resources could take place in the classroom and incorporate the following:

● Students might investigate third-party resources or applications that can help maintain safer online environments. For example, www.stopbullying.gov (U.S. Department of Health and Human Services, 2017) offers bullying support catered to youth, and Twitter Block Train, a browser extension for Chrome, allows users to block anyone following a particular page.
● Students might summarize, compare, or critique privacy, blocking, and reporting options across various platforms. Students could create videos or online resources (e.g., wikis) that explain various platforms, offer tips to peers, and suggest revisions to policies.
● Students could also contact social media platforms, lawmakers, or safety organizations to communicate critiques of platforms and request changes.

Additionally, teachers should focus on the digital literacies that students need to navigate platforms, recognize danger, and assess the risks and consequences of their own posts (Cassidy et al., 2013). In Van Royen, Poels, Vandebosch, and Adam's (2017) study, prompting adolescents to reflect on their potential audience before posting proved an effective strategy for getting youth to reconsider before sharing hurtful or sensitive content. In the classroom, students could dive more deeply into the consequences of cyberbullying and explore the following:

● The prevalence of cyberbullying today and its impact on users' health. Furthermore, what data on cyberbullying are platforms collecting and releasing to the public? Reviewing news stories or contacting staff from various platforms could allow students to better understand how social media companies address, and fail to address, cyberbullying and harassment online.
● The lasting effects of online harassment, not just for individuals but for democracies worldwide, given that activists can be censored and can self-censor in the face of targeted attacks.
● The differential treatment of groups, particularly women, people of color, LGBTQ users, and other groups who may face hate crimes or increased harassment online.
Teachers should consider how students might develop or improve anticyberbullying initiatives, address online harassment on their campuses, and take other measures (Cassidy et al., 2013; O'Reilly et al., 2018). In this way, students can participate in and share the responsibility of creating and maintaining safer online communities and more vibrant democracies. Finally, educators might consider and advocate for changes that social media companies like Facebook can make (e.g., changing privacy settings, dedicating more employees) to prevent harassment on their mediums, even if doing so cuts into the company's bottom line.

CREATE GATEKEEPING MECHANISMS TO ADDRESS INFORMATION ACCURACY, DISINFORMATION, AND DEMOCRACY

In creating gatekeeping mechanisms to address information accuracy and democracy, users should consider both the technocultural and the socioeconomic layers. Whereas the platforms' services and user agency orient both content production and participation in the system, technical and social protocols affect reciprocal rights and responsibilities. Such protocols, as with Facebook, for example, place conspicuously one-sided responsibilities on users to learn about, respond to, and change the behaviors needed to move toward a more trustworthy news environment. Misinformation has a long history via tabloids and yellow journalism, and disinformation has long circulated, whether as Russian active measures or as campaigns to discredit people of color (Mason, Krutka, & Stoddard, 2018; Woodson, King, & Kim, 2019). However, social media algorithms often amplify inaccurate but engaging, profit-making misinformation from less credible sources (a dynamic sketched briefly below). In usurping some of the historical gatekeeping responsibilities of journalists, social media platforms and their users are left with increased responsibilities. While the 2016 U.S. elections raised the profile of fake news, social media platforms push misinformation in disparate ways that threaten democracy (Journell, 2019; McNamee, 2019; Vaidhyanathan, 2018). Because of concerns about the distribution and amplification of misinformation and disinformation on social media platforms, particularly around civic issues, educators might teach about social media companies to ensure that information accuracy and democracy are prioritized.

Social media contribute to the transformation of social institutions and also become part of those same institutions. According to a recent survey, sizable shares of Americans believe that digital companies privilege and censor the views of certain groups over others, and as politicians politicize this point, platforms constantly seek to avoid regulation by appeasing various politicians' concerns through algorithmic and terms-of-service violation decisions (Smith, 2018b). Tufekci's (2017) participation and research have highlighted the fragility of networked movements in sustaining democratic social change, especially as authoritarian regimes flood social media spaces with distraction and disinformation. Social media companies, namely Facebook, have also been accused of undermining democratic culture and processes through the amplification of disinformation (i.e., fake news); Russian targeting of foreign citizens to increase hyperpartisan divisions in the United States, Britain, and Ukraine; and the stoking of violence by murderous authoritarians in the Philippines and Myanmar (Beam, Hutchens, & Hmielowski, 2018; Dutt et al., 2019; Jacoby, 2018; Vaidhyanathan, 2018).
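To illustrate the amplification dynamic described above, the brief sketch below contrasts a purely engagement-based ranking with one that discounts engagement by source credibility. The posts, numbers, and weighting are invented for illustration; no platform's actual ranking algorithm is public or this simple.

    # A minimal, hypothetical sketch of why engagement-only ranking can surface
    # low-credibility content: the first scoring function knows nothing about
    # accuracy, so a sensational false story outranks a sober accurate one.
    posts = [
        {"headline": "Shocking claim goes viral", "likes": 900, "shares": 400, "credibility": 0.2},
        {"headline": "Careful report from local paper", "likes": 300, "shares": 80, "credibility": 0.9},
    ]

    def engagement_score(post):
        # Rewards only the signals tied to attention and ad revenue.
        return post["likes"] + 3 * post["shares"]

    def credibility_weighted_score(post):
        # One possible gatekeeping tweak: discount engagement by source credibility.
        return engagement_score(post) * post["credibility"]

    for score in (engagement_score, credibility_weighted_score):
        ranked = sorted(posts, key=score, reverse=True)
        print(score.__name__, "->", [p["headline"] for p in ranked])

Students could experiment with the weights and then discuss who should decide what counts as credibility, which is itself a contested gatekeeping question.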
However, further scholarly investigation is needed into how these mediums undercut social and political movements, how they fail to mitigate the concrete, physical risks they can create for activists, and what capacities they hold for building and maintaining more peripheral dissident publics (McCosker, 2015; Mundt, Ross, & Burnett, 2018). Tufekci (2018b) argued that social media companies have failed to dedicate the necessary staff and resources to addressing the array of problems created by the high traffic on their platforms. Of course, mitigating the downsides of social media is critical because numerous studies suggest that democratic activists have benefited from social media in a number of countries (Milošević-Đorđević & Žeželj, 2017). In addition to notable examples such as the Arab Spring, Occupy Wall Street, and #BlackLivesMatter (Anderson, Toor, Rainie, & Smith, 2018; Hal-Hasan, Yim, & Lucas, 2019), political and social activism has been reported across a growing range of social media contexts and diverse socioeconomic, cultural, and ethnic settings (López García, Llorca Abad, Valera Ordaz, & Peris Blanes, 2018; O'Byrne & Hale, 2018; Sandoval-Almazan & Gil-Garcia, 2014). Because democracies require informed citizens, and social media constitute a vital component of our new media ecology, there have been pushes in education circles for different forms of information or media literacy, among other approaches (Journell, 2019).

TEACHING ABOUT INFORMATION ACCURACY AND DEMOCRACY

For teachers and students, one way to frame the skills needed to operate within today's interconnected information, media, and technological landscape is as "media and information literacy" (Grizzle et al., 2014, p. 16). One of the most prominent skills in this framework is the ability to critically analyze and evaluate information content in terms of its authority, credibility, and current purpose. With the rise of disinformation (i.e., fake news) and influence campaigns on social media (Lazer et al., 2018), coupled with the growing extent to which people turn to social media to read news (Gottfried & Shearer, 2016), the need for teachers and students alike to critically evaluate information for its veracity, credibility, and purpose is more important than ever. While the media and information literacy approach is more holistic in its aims than other options, it is emblematic of how much of the emerging social media curriculum is grounded in media literacy or information literacy approaches centered on text analysis, particularly in response to the spread of misinformation and disinformation online.

McGrew, Breakstone, Ortega, Smith, and Wineburg (2018) developed assessments and researched the ability of middle school, high school, and university students to exhibit civic online reasoning in evaluating the truthfulness of online content, including social media posts. Overall, they concluded that students are "not prepared to navigate the maelstrom of information online" (p. 185). Approximately half of participants, for example, were rated as beginners who distrusted all tweets instead of considering the source, evidence, and corroborating sources. Moreover, young people tend to believe that a social media post is credible when the message aligns with their beliefs (e.g., confirmation bias), but media education can improve such judgments (Kahne & Bowyer, 2017).
Middaugh (2018), therefore, pointed out that isolated academic tasks may not translate to users' integrated social practices, and thus educators and researchers must also "investigate how youth learn to integrate concerns for accuracy and evidence into everyday practices of consuming and sharing media" (p. 33). Moreover, learning from and contributing to an informed dialogue requires educators to address the numerous psychosocial factors (e.g., motivated reasoning, the backfire effect, preference bubbles) and design features (e.g., filter bubbles, algorithmic amplification, microtargeted political ads) that students and citizens scroll past online (Journell, 2019). Similarly, Pangrazio and Selwyn (2018) argued that efforts to cultivate digital literacies also need to encompass the social and ethical aspects of social media, as "technical skills alone are not sufficient in preparing young people for the complex situations and decisions they must navigate as part of use" (p. 7). Conceptions of social media literacy or social media education comprise a combination of technological, cognitive, social, and ethical skills needed for the critical evaluation of social media (Benson & Morgan, 2016; Vanwynsberghe, 2014). Social media users are expected not only to understand the accuracy of information but also to be conscious prosumers (i.e., producers and consumers) able to create appropriate and meaningful content to share with others (Gammon & White, 2011). Therefore, we recommend that educators focus on five general areas:

● Focused tasks, such as the assessments created by the Stanford History Education Group (https://sheg.stanford.edu/), to investigate civic online reasoning around the credibility of social media sources, evidence, and source corroboration.
● Situated activities, as Middaugh (2018) recommended, in which students identify strategies, practice participating and sharing in online civic activities that they find important, and then reflect on the ethical and democratic considerations.
● Psychosocial and technical activities that help students consider how both their prior views and platform design influence what they learn, share, and feel. The new book edited by Journell (2019) offers a range of activities and approaches that educators might use in their classrooms.
● Media comparison activities, as Mason (2018) has suggested, whereby students analyze how different media forms influence messages. This can include historical investigations into how new media forms (e.g., the printing press, television, social media) have shifted civic structures, comparisons of how students advocate for an issue differently using different media, or even fasting from social media to better understand its influence on political participation.
● Pressuring social media companies and legislators to create gatekeeping mechanisms that can better address information accuracy, disinformation, and their effects on democracy.

Information accuracy is fundamental to democratic citizenship. Educators and researchers must investigate the possibilities and challenges of each of these general areas to better determine effective ways of confronting the problem of information accuracy that has been amplified on the platforms created by social media companies.
CONCLUSION: CONFRONTING SOCIAL MEDIA FOR INDIVIDUAL AND SOCIAL CHANGE

While we believe that educators and scholars have begun to grapple with how to teach with and about social media in some areas, there has been less curricular attention paid to the ways that social media companies prioritize profits in the design of platforms, algorithms, and business models, and to how these decisions influence users. Profit motives seem to have kept these companies from prioritizing the resources, people, and ethics needed to shape online spaces that are transparent, democratic, mindful, safe, and accurate. When educators view social media as microsystems that are both technocultural constructs and socioeconomic structures, the evident and concealed ways in which corporations make decisions to extract wealth from us, minimize costs, and maximize profits over the common good require us to attend to such issues in an emerging social media curriculum.

In each section of this article, we have suggested ways that educators might teach users to better address the problems of social media. Educators often focus on teaching students the individual social media skills to navigate the media ecosystem in smart and safe ways. However, because many of the problems we mention are created by social media companies, these companies can solve them in ways that are far more effective than the efforts of individual users trying to swim upstream against the currents that corporations create. Educators should teach about social media to ensure that students know how to make individual and collective decisions for the common good. O'Byrne and Hale (2018) hopefully contended that "perhaps there is an opportunity to skillfully employ digital spaces to resist harmful discourses and usher in social and political movements to improve the lives of all citizens" (p. 6). Because many of these topics are also of concern to parents and community members, we believe that educators should consider how they might work with adults beyond the classroom to deepen commitments to ethical online practices.

There are at least three ways that citizens might see large-scale changes to the problematic aspects of social media companies and their profit motives. First, social media companies need more competition. Competition among companies might provide users with more choices and result in better options. However, the phenomenon of the network effect, whereby platforms are valuable primarily because the people with whom we want to connect are already on the medium, means that new platforms face an uphill climb against powerful companies like Facebook, Twitter, Snapchat, and LinkedIn, among others. Moreover, as newer social media platforms have gained popularity, Facebook in particular has been quick to copy or purchase them, eroding competition (Tufekci, 2018a). Second, social media companies require legislative regulation. As we have mentioned in this article, advocates have already pushed for stricter enforcement of EU law on data protection and privacy. U.S. legislators have, thus far unsuccessfully, pursued legislation like the Honest Ads Act, while others have recommended that Section 230 of the Communications Decency Act be rewritten to increase anticompetitive social media companies' accountability for the content posted on their platforms.
In February 2019, British lawmakers argued that a parliamentary study of technology companies showed that Facebook "intentionally and knowingly" violated both data privacy and anti-competition laws, and they called for increased regulation to hold Facebook and other technology companies accountable (Romm, 2019). At the time of this writing, Facebook is also under investigation by the Federal Trade Commission in the United States concerning whether the company violated a 2011 agreement on privacy with the U.S. government, an investigation that may result in a multi-billion-dollar fine (Romm, 2019). Finally, social media companies and citizens need to develop clearer technoethical standards around these critical issues of privacy, attention, and disinformation. While citizens may have to push for such ethical standards before industries hold themselves accountable, companies can also set standards and policies to encourage ethical behavior. For example, following the recent revelation that Facebook was surveilling the cell phone usage of minors against Apple's stricter privacy policies, Apple announced it would reduce the platform's ability to disseminate apps (Breland, 2019). While lawmakers should regulate such actions, other technology companies can also push back against their peers. Even allowing journalists and researchers more access to social media data might help these companies identify many of the problems they currently struggle to address.

In addition to teaching students skills to effectively navigate social media spaces, we believe that part of any effective social media education should involve efforts to take collective action to pressure, lobby, and regulate social media companies through various means. Social media companies should make clear to users and the larger public how their businesses operate and what they are selling. Of course, as some of these companies have already made clear, they are unlikely to make changes without public pressure. Educators may encourage multimedia campaigns, pursue hashtag activism (such as #DeleteFacebook campaigns), or contact legislators to push for regulation. Organizations like ProPublica, for example, have shown that social media companies will respond to public pressure and legal action (e.g., Tobin, 2018; Tobin, Merrill, Waldron, & Parris, 2018), and scholars have put forth different business models that could alleviate many profit-driven problems (e.g., Tufekci, 2018b). Students may demand more staffing to address harassment, advocate nationalizing or breaking up social media companies, or seek legislative or policy change. Educators also do not have to begin inquiries into social media problems with all the answers. For example, Whiting (2019) detailed how her students were able to identify social media problems and use design thinking to generate solutions. Social media scholars in the field of education should also continue efforts to investigate these technoethical issues to better understand the phenomenon, amplify diverse and interdisciplinary perspectives, and promote possible solutions (Rehm, Manca, Brandon, & Greenhow, 2019, this yearbook; Greenhow, Cho, Dennen, & Fishman, 2019, this yearbook). We hope that researchers might consider pursuing more in-depth studies in each of the topics mentioned in this chapter.
Moreover, to go beyond Western-centric visions for social media, scholars might also highlight the ways social media companies operate in other countries where the technocultural and socioeconomic constructs of social media and instant messaging services differ. Such studies can help illuminate how citizens, lawmakers, and companies operate in different countries and also how individuals use U.S. platforms like Facebook or Twitter, or widely used mediums like WeChat or VKontakte. As we have seen, because social media platforms are affected by sociocultural patterns that are encoded in their design features, different sociotechnical systems influence social and educational usage differently (Zhao, Shchekoturov, & Shchekoturova, 2017). We need more explorations of diverse sociotechnical microsystems, their specific characteristics, and their impact on usage in cross-country studies. If educators and scholars are to teach and research against the negative aspects of social media, we might just move toward a future in which our privacy, mental health, life balance, and democratic participation are not out of our control.

References

Ackerman, S. (2018, May). Russians' biggest Facebook ad promoted Blue Lives Matter. Daily Beast. Retrieved from https://www.thedailybeast.com/russians-biggest-facebook-ad-promoted-blue-lives-matter
Acxiom. (2018). Providing the world's best data. Retrieved from https://www.acxiom.com/what-we-do/data
AdBlock. (2018). Retrieved from Chrome Web Store website: https://chrome.google.com/webstore/detail/adblock/gighmmpiobklfepjocnamgkkbiglidom
American Foundation for Suicide Prevention. (2017). Suicide statistics. Retrieved from https://afsp.org/about-suicide/suicide-statistics/
Anderson, M. (2018). A majority of teens have experienced some form of cyberbullying. Pew Research Center. Retrieved from http://www.pewinternet.org/2018/09/27/a-majority-of-teens-have-experienced-some-form-of-cyberbullying/
Anderson, M., & Jiang, J. (2018). Teens, social media & technology 2018. Pew Research Center. Retrieved from http://www.pewinternet.org/2018/05/31/teens-social-media-technology-2018
Anderson, M., Toor, S., Rainie, L., & Smith, A. (2018). Activism in the social media age. Pew Research Center. Retrieved from http://www.pewinternet.org/2018/07/11/activism-in-the-social-media-age/
Apple, M. W. (2006). Educating the right way: Markets, standards, god, and inequality (2nd ed.). New York, NY: Routledge.
Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132.
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved and what is the role of the computer science education community? ACM Inroads, 2(1), 48–54.
Beam, M. A., Hutchens, M. J., & Hmielowski, J. D. (2018). Facebook news and (de)polarization: Reinforcing spirals in the 2016 U.S. election. Information, Communication & Society, 21(7), 940–958.
Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. https://doi.org/10.1080/1369118X.2016.1216147
Benson, V., & Morgan, S. (2016). Social university challenge: Constructing pragmatic graduate competencies for social networking. British Journal of Educational Technology, 47(3), 465–473.
Bijker, W. E., Hughes, T. P., & Pinch, T. J. (Eds.). (1987). The social construction of technological systems. Cambridge, MA: The MIT Press.
boyd, d. (2014). It's complicated: The social lives of networked teens. New Haven, CT: Yale University Press.
Breland, A. (2019, January 31). Apple is doing more to police Facebook than the U.S. government. Mother Jones. Retrieved from https://www.motherjones.com/politics/2019/01/apple-reguates-facebook/
Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180.
Cao, X., & Sun, J. (2018). Exploring the effect of overload on the discontinuous intention of social media users: An SOR perspective. Computers in Human Behavior, 81, 10–18.
Carpenter, J. P., & Krutka, D. G. (2014). How and why educators use Twitter: A survey of the field. Journal of Research on Technology in Education, 46(4), 414–434.
Cassidy, W., Faucher, C., & Jackson, M. (2013). Cyberbullying among youth: A comprehensive review of current international research and its implications and application to policy and practice. School Psychology International, 34, 575–612.
Chen, L., Ho, S. S., & Lwin, M. O. (2016). A meta-analysis of factors predicting cyberbullying perpetuation and victimization: From the social cognitive and media effects approach. New Media & Society, 19, 1194–1213.
Cho, V., & Jimerson, J. B. (2017). Managing digital identity on Twitter: The case of school administrators. Educational Management Administration & Leadership, 45(5), 884–900.
CollegeHumor. (2016). #Adamruinseverything: The terrifying cost of free websites [Video file]. Retrieved from https://www.youtube.com/watch?v=5pFX2P7JLwA
Couldry, N., & Hepp, A. (2017). The mediated construction of reality. Cambridge, England: Polity Press.
Damico, N., & Krutka, D. G. (2018). Social media diaries and fasts: Educating for digital mindfulness with pre-service teachers. Teaching and Teacher Education, 73, 109–119.
DeVito, L. (2018, December 26). Cyberbullying is now a crime in Michigan punishable by jail time. Detroit Metro Times. Retrieved from https://www.metrotimes.com/news-hits/archives/2018/12/28/cyberbullying-is-now-a-crime-in-michigan-punishable-by-jail-time
Duffy, B. E., & Pooley, J. D. (2017). Facebook for academics: The convergence of self-branding and social media logic on Academia.edu. Social Media + Society, 3(1), 1–11.
Durkin, M., Mulholland, G., & McCartan, A. (2015). A socio-technical perspective on social media adoption: A case from retail banking. International Journal of Bank Marketing, 33(7), 944–962.
Dutt, R., Deb, A., & Ferrara, E. (2019). Senator, we sell ads: Analysis of the 2016 Russian Facebook ads campaign. In L. Akoglu, E. Ferrara, M. Deivamani, R. Baeza-Yates, & P. Yogesh (Eds.), Advances in data science (pp. 151–168). Communications in Computer and Information Science: Vol. 941. Singapore: Springer.
Facebook. (2018). Success stories. Retrieved from https://www.facebook.com/business/success
Facebook. (2019). Charting a course for an oversight board for content decisions. Retrieved from https://newsroom.fb.com/news/2019/01/oversight-board/
Fogg, B. J. (2003). Persuasive technology: Using computers to change what we think and do. San Francisco, CA: Morgan Kaufmann.
Frank,* K. A., & Torphy,* K. T. (2019). Social media, who cares? A dialogue between a millennial and a curmudgeon (*equal authorship). Teachers College Record, 121(14). Retrieved from https://www.tcrecord.org/Content.asp?ContentId=23064
Friesen, N., & Lowe, S. (2012). The questionable promise of social media for education: Connective learning and the commercial imperative. Journal of Computer Assisted Learning, 28, 183–194.
Gammon, M. A., & White, J. (2011). (Social) media literacy: Challenges and opportunities for higher education. In C. Wankel (Ed.), Educating educators with social media (pp. 329–345). Cutting-Edge Technologies in Higher Education: Vol. 1. Bingley, England: Emerald Group.
Gao, F., Luo, T., & Zhang, K. (2012). Tweeting for learning: A critical analysis of research on microblogging in education published in 2008–2011. British Journal of Educational Technology, 43(5), 783–801.
Garcia, X. (2016). Discussing the impacts of social media algorithms. Science Friday. Retrieved from http://www.sciencefriday.com/wp-content/uploads/2016/04/InstagramAlgorithimsStudentWorksheet-1.pdf
Glaser, A. (2018, May). Russian trolls were obsessed with Black Lives Matter. Slate. Retrieved from https://slate.com/technology/2018/05/russian-trolls-are-obsessed-with-black-lives-matter.html
Gonzalez-Lizarraga, M. G., Becerra-Traver, M. T., & Yanez-Diaz, M. B. (2016). Cyberactivism: A new form of participation for university students. Comunicar, 46(24), 47–54.
Gottfried, J., & Shearer, E. (2016). News use across social media platforms 2016. Pew Research Center. Retrieved from http://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/
Gramlich, J. (2018). 8 facts about Americans and Facebook. Pew Research Center. Retrieved from http://www.pewresearch.org/fact-tank/2018/10/24/facts-about-americans-and-facebook/
Greenhalgh, S. P., & Koehler, M. J. (2017). 28 days later: Twitter hashtags as just in time teacher professional development. TechTrends, 61, 273–281. doi:10.1007/s11528-016-0142
Greenhow, C., & Askari, E. (2017). Learning and teaching with social network sites: A decade of research in K-12 related education. Education and Information Technologies, 22(2), 623–645.
Greenhow, C., Cho, V., Dennen, V., & Fishman, B. (2019). Education and social media: Research directions to guide a growing field. Teachers College Record, 121(14). Retrieved from https://www.tcrecord.org/Content.asp?ContentId=23039
Greenhow, C., Galvin, S., Brandon, D., & Askari, E. (2018, October). A decade of research on K-12 teaching with social media: Insights on the state-of-the-field. Paper presented at the #Cloud2Class Conference, East Lansing, Michigan.
Greenhow, C., Galvin, S., Brandon, D., & Askari, E. (in press). Fifteen years of research on K-12 education and social media: Insights on the state of the field. Teachers College Record.
Greenhow, C., & Gleason, B. (2012). Twitteracy: Tweeting as a new literacy practice. The Educational Forum, 76(4), 464–478.
Greenhow, C., Menzer, M., & Gibbins, T. (2015). Re-thinking scientific literacy: Arguing science issues in a niche Facebook application. Computers & Human Behavior, 53, 593–604.
Greenhow, C., & Robelia, E. (2009a). Old communication, new literacies: Social network sites as social learning resources. Journal of Computer-Mediated Communication, 14, 1130–1161.
Greenhow, C., & Robelia, E. (2009b). Informal learning and identity formation in online social networks. Learning, Media and Technology, 34(2), 119–140.
Grizzle, A., Moore, P., Dezuanni, M., Asthana, S., Wilson, C., Banda, F., & Onumah, C. (2014). Media and information literacy: Policy and strategy guidelines. Retrieved from UNESCO website: http://unesdoc.unesco.org/images/0022/002256/225606e.pdf
Hal-Hasan, A., Yim, D., & Lucas, H. C. (2019). A tale of two movements: Egypt during the Arab Spring and Occupy Wall Street. IEEE Transactions on Engineering Management, 66(1), 84–97.
Halverson, N. (2019, January 24). Facebook knowingly duped game-playing kids and their parents out of money. Reveal: The Center for Investigative Reporting. Retrieved from https://www.revealnews.org/article/facebook-knowingly-duped-game-playing-kids-and-their-parents-out-of-money/
Hamm, M. P., Newton, A. S., Chisholm, A., Shullhan, J., Milne, A., Sundar, P., . . . Hartling, L. (2015). Prevalence and effect of cyberbullying on children and young people: A scoping review of social media studies. Clinical Review & Education, 169, 770–777.
Hepp, A. (2013). Cultures of mediatization. Cambridge, England: Polity Press.
Hinduja, S., & Patchin, J. W. (2015). Bullying beyond the schoolyard: Preventing and responding to cyberbullying. Thousand Oaks, CA: Corwin.
Hitlin, P., & Rainie, L. (2019). Facebook algorithms and personal data. Washington, DC: Pew Research Center.
Hjarvard, S. (2013). The mediatization of culture and society. London, England: Routledge.
Hoback, C., Khana, N., & Ramos, J. (Producers), & Hoback, C. (Director). (2013). Terms and conditions may apply [Motion picture]. USA: Variance Films.
Holston, L. M. (2018, May 1). Instagram unveils a bully filter. The New York Times. Retrieved from https://www.nytimes.com/2018/05/01/technology/instagram-bully-filter.html?rref=collection%2Ftimestopic%2FCyberbullying
Huysman, M., & Wulf, V. (2006). IT to support knowledge sharing in communities, towards a social capital analysis. Journal of Information Technology, 21(1), 40–51.
Instagram. (2018). Success stories. Retrieved from https://business.instagram.com/success?
Jacoby, J. (2018). The Facebook dilemma. Boston, MA: FRONTLINE. Retrieved from https://www.pbs.org/wgbh/frontline/film/facebook-dilemma/
Jiang, J. (2018). Teens who are constantly online are just as likely to socialize with their friends offline. Pew Research Center. Retrieved from http://www.pewresearch.org/fact-tank/2018/11/28/teens-who-are-constantly-online-are-just-as-likely-to-socialize-with-their-friends-offline/
Johnson, L., Adams Becker, S., Estrada, V., Freeman, A., Kampylis, P., Vuorikari, R., & Punie, Y. (2014). Horizon Report Europe: 2014 schools edition. Luxembourg, Luxembourg: Publications Office of the European Union, & Austin, TX: The New Media Consortium.
Journell, W. (Ed.). (2019). Unpacking fake news: An educator's guide to navigating the media with students. New York, NY: Teachers College Press.
Kahne, J., & Bowyer, B. (2017). Educating for democracy in a partisan age: Confronting the challenges of motivated reasoning and misinformation. American Educational Research Journal, 54(1), 3–34.
Klausen, J. (2015). Tweeting the Jihad: Social media networks of Western foreign fighters in Syria and Iraq. Studies in Conflict and Terrorism, 38(1), 1–22.
Krutka, D. G., Bergman, D. J., Flores, R., Mason, K., & Jack, A. R. (2014). Microblogging about teaching: Nurturing participatory cultures through collaborative online reflection with pre-service teachers. Teaching and Teacher Education, 40, 83–93.
Lanier, J. (2018). Ten arguments for deleting your social media accounts right now. New York, NY: Henry Holt.
Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., . . . Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096.
Levy, D. (2016). Mindful tech: How to bring balance to our digital lives. New Haven, CT: Yale University Press.
Lobbying arrangements results for privacy. (2018). ProPublica. Retrieved from https://projects.propublica.org/represent/lobbying/search?utf8=%E2%9C%93&search=privacy&commit=Search
López García, G., Llorca Abad, G., Valera Ordaz, L., & Peris Blanes, A. (2018). Los debates electorales, ¿el último reducto frente la mediatización? Un estudio de caso de las elecciones generales españolas de 2015 [Electoral debates: The last stronghold against mediatization? A case study of the 2015 Spanish general elections]. Palabra Clave, 21(3), 772–797.
Luppicini, R. (2010). Technoethics and the evolving knowledge society: Ethical issues in technological design, research, development, and innovation. Hershey, PA: Idea Group.
Luqman, A., Cao, X., Ali, A., Masood, A., & Yu, L. (2017). Empirical investigation of Facebook discontinues usage intentions based on SOR paradigm. Computers in Human Behavior, 70, 544–555.
Madden, M., Lenhart, A., Cortesi, S., Gasser, U., Duggan, M., Smith, A., & Beaton, M. (2013). Teens, social media, and privacy. Pew Research Center. Retrieved from http://www.pewinternet.org/2013/05/21/teens-social-media-and-privacy/
Manca, S., & Ranieri, M. (2016a). Facebook and the others. Potentials and obstacles of social media for teaching in higher education. Computers & Education, 95, 216–230.
Manca, S., & Ranieri, M. (2016b). Is Facebook still a suitable technology-enhanced learning environment? An updated critical review of the literature from 2012 to 2015. Journal of Computer Assisted Learning, 32(6), 503–528.
Manca, S., & Ranieri, M. (2016c). Yes for sharing, no for teaching! Social media in academic practices. The Internet and Higher Education, 29, 63–74.
Mason, L. E. (2018). Media. In D. G. Krutka, A. M. Whitlock, & M. Helmsing (Eds.), Keywords in the social studies: Concepts and conversations (pp. 293–304). New York, NY: Peter Lang.
Mason, L. E. (2019). Media literacy and pragmatism. In G. Cappello, M. Ranieri, & B. Thevenin (Eds.), The international encyclopedia of media literacy (pp. 1–5). San Francisco, CA: John Wiley. Advance online publication. https://doi.org/10.1002/9781118978238.ieml0119
Mason, L. E., Krutka, D., & Stoddard, J. (2018). Media literacy, democracy, and the challenge of fake news. Journal of Media Literacy Education, 10(2), 1–10.
McCosker, A. (2015). Social media activism at the margins: Managing visibility, voice and vitality affects. Social Media + Society, 1(2), 1–11. https://doi.org/10.1177/2056305115605860
McGrew, S., Breakstone, J., Ortega, T., Smith, M., & Wineburg, S. (2018). Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory & Research in Social Education, 46(2), 165–193.
McNamee, R. (2019). Zucked: Waking up to the Facebook catastrophe. New York, NY: Penguin Press.
Merrill, J. B., & Tobin, A. (2019, January 28). Facebook moves to block ad transparency tools, including ours. ProPublica. Retrieved from https://www.propublica.org/article/facebook-blocks-ad-transparency-tools/
Messick, G., & Gavrilovic, M. (Producers). (2014, August 24). The data brokers: Selling your personal information. 60 Minutes. Retrieved from https://www.cbsnews.com/news/data-brokers-selling-personal-information-60-minutes/
Messing, S., & Westwood, S. J. (2014). Selective exposure in the age of social media: Endorsements trump partisan source affiliation when selecting news online. Communication Research, 41(8), 1042–1063.
Middaugh, E. (2018). Civic media literacy in a transmedia world: Balancing personal experience, factual accuracy and emotional appeal as media consumers and circulators. Journal of Media Literacy Education, 10(2), 33–52.
Milosevic, T. (2016). Social media companies' cyberbullying policies. International Journal of Communication, 10, 5164–5185.
Milošević-Đorđević, J. S., & Žeželj, I. L. (2017). Civic activism online: Making young people dormant or more active in real life? Computers in Human Behavior, 70, 113–118.
Mizner, D. (2010). The Facebook conundrum: The New Haven Independent and the Annie Le murder. Knight Case Studies Initiative, Graduate School of Journalism, Columbia University. Retrieved from https://casestudies.ccnmtl.columbia.edu/case/FacebookConundrum/
Moore, B. (2017). Privacy: Studies in social and cultural history. New York, NY: Routledge.
Morrison, B. (2006). School bullying and restorative justice: Toward a theoretical understanding of the role of respect, pride, and shame. Journal of Social Issues, 62, 371–392.
Mundt, M., Ross, K., & Burnett, C. M. (2018). Scaling social movements through social media: The case of Black Lives Matter. Social Media + Society, 4(4), 1–14. https://doi.org/10.1177/2056305118807911
National Conference of State Legislatures. (2015a). Cyberbullying. Retrieved from http://www.ncsl.org/research/education/cyberbullying.aspx
National Conference of State Legislatures. (2015b). State cyberstalking and cyberharassment laws.
Niu, L. (2019). Using Facebook for academic purposes: Current literature and directions for future research. Journal of Educational Computing Research, 56(8), 1384–1406.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York, NY: New York University Press.
Nowell, S. D. (2014). Using disruptive technologies to make digital connections: Stories of media use and digital literacy in secondary classrooms. Educational Media International, 51(2), 109–123.
O'Byrne, W. I., & Hale, J. (2018). Employing digital spaces to resist harmful discourses: Intersections of learning, technology, and politics showing up in the lowcountry. Learning, Media and Technology, 43(49), 390–399.
Oliver, J. (2018a, July 30). John Oliver: Last week tonight: New Facebook ad campaign [Video file]. Retrieved from https://www.youtube.com/watch?v=8ROdly-iIYQ
Oliver, J. (2018b, September 23). Facebook: Last week tonight with John Oliver (HBO) [Video file]. Retrieved from https://www.youtube.com/watch?v=OjPYmEZxACM
Ophir, Y., Rosenberg, H., Asterhan, C. S. C., & Schwarz, B. B. (2016). In times of war, adolescents do not fall silent: Teacher-student social network communication in wartime. Journal of Adolescence, 46, 98–106.
O'Reilly, M., Dogra, N., Whiteman, N., Hughes, J., Eruyar, S., & Reilly, P. (2018). Is social media bad for mental health and wellbeing? Exploring the perspective of adolescents. Clinical Child Psychology and Psychiatry, 23, 601–613.
Pangrazio, L., & Selwyn, N. (2018). It's not like it's life or death or whatever: Young people's understandings of social media data. Social Media + Society, 4(3), 1–9.
Pangrazio, L., & Selwyn, N. (2019). Personal data literacies: A critical literacies approach to enhancing understandings of personal digital data. New Media & Society, 21(2), 419–437.
Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. London, England: Penguin Press.
Peltier, J. W., Milne, G. R., Phelps, J. E., & Barrett, J. T. (2010). Teaching information privacy in marketing courses: Key educational issues for principles of marketing and elective marketing courses. Journal of Marketing Education, 32(2), 224–246.
Perrin, A. (2018). Americans are changing their relationship with Facebook. Pew Research Center. Retrieved from http://www.pewresearch.org/fact-tank/2018/09/05/americans-are-changing-their-relationship-with-facebook/
Plantin, J.-C., & Punathambekar, A. (2019). Digital media infrastructures: Pipes, platforms, and politics. Media, Culture & Society, 41(2), 163–174.
Player-Koro, C., Rensfeldt, A. B., & Selwyn, N. (2018). Selling tech to teachers: Education trade shows as policy events. Journal of Education Policy, 33(5), 682–703.
Pooley, J. (2017, August 15). Scholarly communications shouldn't just be open, but non-profit too. LSE Impact Blog. Retrieved from http://blogs.lse.ac.uk/impactofsocialsciences/2017/08/15/scholarly-communications-shouldnt-just-be-open-but-non-profit-too/
Rehm, M., Manca, S., Brandon, C., & Greenhow, C. (2019). Beyond disciplinary boundaries: Mapping educational science in the discourse on social media. Teachers College Record, 121(14). Retrieved from https://www.tcrecord.org/Content.asp?ContentId=23050
Rheingold, H. (2014). Net smart: How to thrive online. Cambridge, MA: The MIT Press.
Rocket Lawyer. (2018). Make your free website terms of use. Retrieved from https://www.rocketlawyer.co.uk/documents-and-forms/website-terms-and-conditions.rl#
Romm, T. (2019, February 17). Facebook intentionally and knowingly violated U.K. privacy and competition rules, British lawmakers say. The Washington Post. Retrieved from https://www.washingtonpost.com/technology/2019/02/18/facebook-intentionally-knowingly-violated-uk-privacy-competition-rules-british-lawmakers-say
Romm, T., & Dwoskin, E. (2019, January 18). U.S. regulators have met to discuss imposing a record-setting fine against Facebook for privacy violations. The Washington Post. Retrieved from https://www.washingtonpost.com/technology/2019/01/18/us-regulators-have-met-discuss-imposing-record-setting-fine-against-facebook-some-its-privacy-violations/
Sandoval-Almazan, R., & Gil-Garcia, J. R. (2014). Towards cyberactivism 2.0? Understanding the use of social media and other information technologies for political activism and social movements. Government Information Quarterly, 31(3), 365–378.
Schäfer, M. T., & Van Es, K. F. (Eds.). (2017). The datafied society: Studying culture through data. Amsterdam, The Netherlands: Amsterdam University Press.
Science Studio. (2018, March 1). Google & Facebook use YOUR DATA for cool/creepy things [Video file]. Retrieved from https://www.youtube.com/watch?v=JoHP7YxFdXQ
Seaver, N. (2018). What should an anthropology of algorithms do? Cultural Anthropology, 33(3), 375–385.
Selwyn, N. (2016). Is technology good for education? Cambridge, England: Polity.
Selwyn, N., Nemorin, S., Bulfin, S., & Johnson, N. F. (2018). Everyday schooling in the digital age: High school, high tech? London, UK: Routledge.
Ševčíková, A., & Šmahel, D. (2009). Online harassment and cyberbullying in the Czech Republic. Journal of Psychology, 217(4), 227–229.
Shearer, E. (2018). Social media outpaces print newspapers in the U.S. as a news source. Pew Research Center. Retrieved from http://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/
Singer, N. (2012, June 16). Mapping, and sharing, the consumer genome. The New York Times. Retrieved from https://www.nytimes.com/2012/06/17/technology/acxiom-the-quiet-giant-of-consumer-database-marketing.html
Singer, N. (2017, May 13). How Google took over the classroom. The New York Times. Retrieved from https://www.nytimes.com/2017/05/13/technology/google-education-chromebooks-schools.html
Smith, A. (2018a). Many Facebook users don't understand how the site's news feed works. Pew Research Center. Retrieved from http://www.pewresearch.org/fact-tank/2018/09/05/many-facebook-users-dont-understand-how-the-sites-news-feed-works/
Smith, A. (2018b). Public attitudes toward technology companies. Pew Research Center. Retrieved from http://www.pewinternet.org/2018/06/28/public-attitudes-toward-technology-companies/
Spencer, M. K. (2019, January 4). Facebook's suicide algorithms are invasive. Medium. Retrieved from https://medium.com/futuresin/facebooks-suicide-algorithms-is-invasive-25e4ef33beb5
Terms of Service Didn't Read. (2018). About. Retrieved from https://tosdr.org/index.html
Tobin, A. (2018, July 25). Facebook promises to bar advertisers from targeting ads by race or ethnicity. Again. ProPublica. Retrieved from https://www.propublica.org/article/facebook-promises-to-bar-advertisers-from-targeting-ads-by-race-or-ethnicity-again
Tobin, A., Merrill, J. B., Waldron, L., & Parris, T. (2018, April 10). We have some follow-ups for Facebook, and we want your help. ProPublica. Retrieved from https://www.propublica.org/article/facebook-mark-zuckerberg-testimony-capitol-hill-how-you-can-help
Tufekci, Z. (2017). Twitter and tear gas: The power and fragility of networked protest. New Haven, CT: Yale University Press.
Tufekci, Z. (2018a, April 6). Why Zuckerberg's 14-year apology tour hasn't fixed Facebook. Wired. Retrieved from https://www.wired.com/story/why-zuckerberg-15-year-apology-tour-hasnt-fixed-facebook/
Tufekci, Z. (2018b, December 17). Yes, big platforms could change their business models. Wired. Retrieved from https://www.wired.com/story/big-platforms-could-change-business-models/
U.S. Department of Health and Human Services. (2017). Laws & policies. Retrieved from https://www.stopbullying.gov/laws/index.html#skip
Vaidhyanathan, S. (2018). Antisocial media: How Facebook disconnects us and undermines democracy. Oxford, England: Oxford University Press.
van Dijck, J. (2013a). The culture of connectivity: A critical history of social media. Oxford, England: Oxford University Press.
van Dijck, J. (2013b). You have one identity: Performing the self on Facebook and LinkedIn. Media, Culture & Society, 35(2), 199–215.
van Dijck, J. (2014). Datafication, dataism and dataveillance. Surveillance & Society, 12(2), 197–208.
van Dijck, J. (2015). After connectivity: The era of connectication. Social Media + Society, 1(1), 1–2. https://doi.org/10.1177/2056305115578873
van Dijck, J., Poell, T., & de Waal, M. (2018). The platform society: Public values in a connective world. New York, NY: Oxford University Press.
Van Royen, K., Poels, K., Vandebosch, H., & Adam, P. (2017). Thinking before posting? Reducing cyber harassment on social networking sites through a reflective message. Computers in Human Behavior, 66, 345–352.
Vanwynsberghe, H. (2014). How users balance opportunity and risk: A conceptual exploration of social media literacy and measurement (Unpublished doctoral dissertation). Ghent University, Ghent, Belgium.
Veletsianos, G., & Kimmons, R. (2016). Scholars in an increasingly open and digital world: How do education professors and students use Twitter? The Internet and Higher Education, 30, 1–10.
Watts, L. K., Wagner, J., Velasquez, B., & Behrens, P. I. (2017). Cyberbullying in higher education: A literature review. Computers in Human Behavior, 69, 268–274.
Weller, C. (2018, February 18). Silicon Valley parents are raising their kids tech free. Business Insider. Retrieved from https://www.businessinsider.com/silicon-valley-parents-raising-their-kids-tech-free-red-flag-2018-2/
Whitaker, E., & Kowalski, R. M. (2015). Cyberbullying via social media. Journal of School Violence, 14, 11–29.
White, W. E., & Carmody, D. (2016). Preventing online victimization: College students' views on intervention and prevention. Journal of Interpersonal Violence, 33, 2291–2307.
Whiting, J. (2019, January 16). Here's what happened when students solved social media problems with design thinking. EdSurge. Retrieved from https://www.edsurge.com/amp/news/2019-01-16-here-s-what-happened-when-students-solved-social-media-problems-with-design-thinking
Williams, R., & Edge, D. (1996). The social shaping of technology. Research Policy, 25(6), 865–899.
Woodson, A. N., King, L. J., & Kim, E. (2019). Real recognizes real: Thoughts on race, fake news, and naming our truths. In W. Journell (Ed.), Unpacking fake news: An educator's guide to navigating the media with students (pp. 30–41). New York, NY: Teachers College Press.
Wu, T. (2017). Blind spot: The attention economy and the law. Antitrust Law Journal. http://dx.doi.org/10.2139/ssrn.2941094
Yim, M., Gomez, R., & Carter, M. S. (2016). Facebook's Free Basics: For or against community development? The Journal of Community Informatics, 12(2), 217–225.
YouTube. (2018). See how businesses like yours succeeded with YouTube. Retrieved from https://www.youtube.com/intl/en-GB/yt/advertise/success-stories/
Zhao, S., Shchekoturov, A. V., & Shchekoturova, S. D. (2017). Personal profile settings as cultural frames: Facebook versus VKontakte. Journal of Creative Communications, 12(3), 171–184.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for the future at the new frontier of power. New York, NY: Hachette Book Group.