
Lateral Reading and the Nature of Expertise: Reading Less and Learning More When Evaluating Digital Information


by Sam Wineburg & Sarah McGrew - 2019

Background/Context: The Internet has democratized access to information but in so doing has opened the floodgates to misinformation, fake news, and rank propaganda masquerading as dispassionate analysis. Despite mounting attention to the problem of online misinformation and growing agreement that digital literacy efforts are important, prior research offers few concrete ideas about what skilled evaluations look like.


Purpose/Objective/Research Question/Focus of Study: Our purpose in this study was to seek out those who are skilled in online evaluations in order to understand how their strategies and approaches to evaluating digital content might inform educational efforts. We sampled 45 experienced users of the Internet: 10 Ph.D. historians, 10 professional fact checkers, and 25 Stanford University undergraduates. Analysis focused on the strategies participants used to evaluate online information and arrive at judgments of credibility.

Research Design: In this expert/novice study, participants thought aloud as they evaluated live websites and searched for information on social and political issues such as bullying, minimum wage, and teacher tenure. We analyze and present findings from three of the tasks participants completed.

Findings/Results: Historians and students often fell victim to easily manipulated features of websites, such as official-looking logos and domain names. They read vertically, staying within a website to evaluate its reliability. In contrast, fact checkers read laterally, leaving a site after a quick scan and opening up new browser tabs in order to judge the credibility of the original site. Compared to the other groups, fact checkers arrived at more warranted conclusions in a fraction of the time.



Conclusions/Recommendations: We draw on insights gleaned from the fact checkers’ practices to examine current curricular approaches to teaching web credibility as well as to suggest alternatives.



The 2016 election cycle made it clear that more people are relying on the Internet for information, often with little idea of how to judge what they find there. Among the false stories that swirled during election season was Pizzagate, which alleged that Hillary Clinton was involved in a child sex trafficking ring that operated from a pizza shop in Washington, DC. Rumors spread on Facebook and Twitter; 4chan and Reddit users investigated purported evidence about the scandal; and websites like InfoWars and Breitbart ran lurid stories. The combination of purposeful and haphazard spread of mis- and disinformation had real consequences: On December 4, 2016, Edgar Maddison Welch entered Comet Ping Pong restaurant to investigate and rescue the sex trafficking victims he expected to find. Police arrested him after he fired three rifle shots (Robb, 2017).


Educators and policy makers have responded to growing fears about online misinformation with a potential solution: media literacy. Four states have already passed laws mandating media literacy education, and other states are on their way (Foley, 2017). But even if states mandate media literacy, we still face a thorny problem: what, exactly, to teach.


There is no shortage of resources for instruction in how to evaluate online information. Most share something in common: they focus students' attention on a website's internal features. Was the site recently updated? Are there banner ads? Are the author's credentials provided? Is the language free of emotion? Yet any organization with a competent web designer and a modicum of digital savvy can design a site that aces these questions, whether the organization is a trusted purveyor of information or not. Web development and design have outpaced our methods for evaluation (Wineburg, Breakstone, McGrew, & Ortega, 2016; Wineburg & McGrew, 2016). Despite mounting agreement that media literacy is important, we have few concrete ideas about what skilled evaluations look like. Our purpose in this study was to seek out those who are skilled in online evaluations in order to understand how their approaches to evaluating digital content might inform educational efforts.


STUDENTS' ONLINE EVALUATIONS


Students often rely on the Internet for information about the world, but they struggle to evaluate the information they find there (Bennett, 2012; Gasser, Cortesi, Malik, & Lee, 2012). In one of the most extensive think-aloud studies to date, Hargittai, Fullerton, Menchen-Trevino, and Thomas (2010) observed 102 college students as they searched online. Students overwhelmingly ceded to Google the responsibility for determining the credibility of information: the higher a site ranked in Google's results, the more reliable students considered it to be. Another study found that undergraduates ignored the valuable information contained in Google's snippets (the few sentences accompanying each search result) and clicked instead on websites in higher positions, even when those websites were less relevant (Pan et al., 2007).


Students do no better when they turn to evaluating websites. A study of nearly 8,000 responses to exercises about web credibility revealed widespread problems among middle school, high school, and college students. Students could not distinguish between traditional news and sponsored content (advertisements). They rarely evaluated sources (sponsoring organizations or authors) and were instead swayed by content that appeared to present strong evidence (in the form of photographs, data displays, etc.). And they judged websites based on superficial features such as their graphic design or how authoritative their logo or references made them seem (McGrew, Breakstone, Ortega, Smith, & Wineburg, 2018; Wineburg, Breakstone, McGrew, & Ortega, 2016).


Wiley et al. (2009) found that college students rarely considered where information came from when evaluating reliability, a finding replicated across a range of studies with students of different ages and in different countries (e.g., Barzilai & Zohar, 2012; List, Grossnickle, & Alexander, 2016; Walraven, Brand-Gruwel & Boshuizen, 2009). Instead, young people judge a website based on its relevance to their searching needs (Iding, Crosby, Auernheimer, & Klemm, 2009; Julien & Barker, 2009; Walraven et al., 2009) and its appearance (Agosto, 2002; Barzilai & Zohar, 2012).


Extant research suggests that young people struggle with many aspects of finding information online, from selecting search results to judging whether a site is trustworthy. What, then, might a more sophisticated approach look like?


FRAMEWORKS OF CREDIBILITY


Since the early days of the web, scholars have created models for how people search for and evaluate information. Two early frameworks posited that prominence of information (how likely information is to be noticed) plays a primary role in users' evaluations. Information foraging theory (Pirolli & Card, 1995) argued that people who search for information are like animals searching for food. Information's prominence (i.e., its placement and accessibility) and its relevance (how closely digital content matches what the user is searching for) drive decisions about whether to use it. Fogg's (2003) prominence-interpretation theory extended the importance of prominence to judgments of websites themselves: Fogg argued that evaluations could be explained based on whether a user noticed an element of a website (due to its prominence) and what judgment the user made based on that element (interpretation).


Subsequent frameworks argued that users base evaluations on surface features, such as length, references, and writing style (Lucassen & Schraagen, 2011) or genre and level of familiarity (Hilligoss & Rieh, 2008). Users also judge the actual contents of the information; they leverage their knowledge of the topic to check whether it is accurate and relevant to their needs. Finally, users judge the credibility of the source itself, deciding whether the source has expertise and is trustworthy (Hilligoss & Rieh, 2008; Lucassen & Schraagen, 2011; Wathen & Burkell, 2002). Whether the provider of information is deemed reliable is not necessarily more important than other considerations in these models.


CONCEPTUALIZING EXPERTISE IN CIVIC ONLINE REASONING


These models of credibility judgments share one major shortcoming: They describe what average users do instead of investigating expert practice. Studies of what skilled users do are less common. Lucassen and Schraagen (2011) studied people active on a car enthusiasts' forum as a proxy for expert knowledge about car engines. Unsurprisingly, people who knew more about cars were better able to detect errors in Wikipedia than those who knew less. Similarly, a group of Dutch researchers compared psychology students and psychology faculty as they selected online sources on psychological topics; faculty spent more time scanning search results, while students made more superficial evaluations (Brand-Gruwel, Kammerer, van Meeuwen, & van Gog, 2017). In another study, researchers designated a group of graduate students in educational technology as experts and compared their online research processes with those of university freshmen (novices) (Brand-Gruwel, Wopereis, & Vermetten, 2005). But the authors provided few clues about how experts went about selecting and evaluating information.


We set out to purposefully sample skilled Internet users as they evaluated digital sources about social and political issues. The Internet is reshaping participatory politics, changing how we learn about the social world, communicate with our representatives in government, and organize political protests. In response, civic education should, among other goals, prepare students to "analyze and evaluate information in order to learn about and investigate pressing civic and political issues" (Kahne, Hodgin, & Eidman-Aadahl, 2016, p. 9). Civic online reasoning (McGrew et al., 2018) encompasses the ability to effectively search for, evaluate, and verify social and political information online; proficiency in online reasoning is necessary for citizens to use the Internet to inform their social and political choices. Civic online reasoning can be considered a subset of the wider fields of digital and media literacy, whose sweeping goals can include everything from protecting one's online identity to learning to code or produce digital videos (e.g., Common Sense Media, n.d.; Mozilla, n.d.; National Association for Media Literacy Education, 2007).

The present work set out to understand in greater detail what experts do when judging social and political information online. Before we could tackle this issue, we needed to figure out who qualified as an expert.

We turned to a group of professionals who evaluate sources for a living: historians. Ample research has established how historians source documents, interrogating a document's author and the circumstances of its creation as keys to determining its trustworthiness (Leinhardt & Young, 1996; Shanahan & Shanahan, 2008; Wineburg, 1991, 1998). Shanahan, Shanahan, and Misischia (2011) found wide variations in sourcing among academics from different fields. While mathematicians explicitly ignored the author of a paper, "as it would only be a distraction and could help in no way with the process of making sense of the text," historians engaged in extensive sourcing, speculating about who the author was and what he or she represented (2011, pp. 408-409).

Despite the growth of digital history, the majority of historians still conduct their research in archives of print documents (Dougherty & Nawrotzki, 2013). We thus set out to study a second group whose work is largely done on a computer screen: fact checkers, whose job it is to ascertain truth in digital form. These professionals are charged with evaluating claims and evidence and spend much of their time vetting digital information.

Finally, we recruited a third group: undergraduates at Stanford University. In 2016, Stanford rejected 95% of its applicants, making it the most competitive university in the United States (Anderson, 2016). Nearly all admitted students were in the top 10% of their high school classes and scored above the 90th percentile on the SAT (Stanford University, 2015). These young people attended a university in the heart of Silicon Valley, where technology startups sprout within campus labs and where computer science is the most popular major (Stanford University, 2017). These students are earmarked, at least according to Stanford University brochures, to lead the digital future.

METHOD

The purpose of this study was to investigate how experienced Internet users arrive at judgments of trustworthiness. We asked two main research questions: How do experienced users of the Internet arrive at judgments of credibility as they evaluate unfamiliar sites and investigate questions of social and political import? What strategies or heuristics do they use to efficiently find reliable information? Based on other expert/novice research (Ericsson, Charness, Feltovich, & Hoffman, 2006), our hope in studying expertise was to distill a set of practices that can inform the development of curricula and assessment.


PARTICIPANTS


Historians


Ten historians were recruited. All held a Ph.D. in history and were faculty at five different four-year colleges and universities in California and Washington state. Six were male; four were female. Their ages ranged from 39 to 69 (M = 47).1


Fact Checkers


The fact checkers were all employed at well-regarded news and political fact-checking organizations. Eight were located in New York City or Washington, DC; two were based on the West Coast. As with the historians, six were male and four female. Ages ranged from 23 to 60 (M = 34). Two participants held master's degrees and one held a Ph.D.; the rest had bachelor's degrees.


College Students


Students were recruited using fliers posted on the Stanford campus. Each received a $25 Amazon gift card for participating. All students were enrolled in the second or third quarter of their first year and were between the ages of 18 and 19; 11 identified as male, 13 as female, and one as non-binary. Every student reported spending at least four hours online each day.


PROTOCOL


We developed a set of six online tasks that took approximately 45 minutes to complete. Our focus was on evaluating digital sources that addressed social and political issues. Space limitations require that we narrow our discussion here to three of the main tasks participants completed (see Table 1).2


Table 1. Main Web Evaluations

Task 1: Bullying in schools
     URLs: https://www.acpeds.org/the-college-speaks/position-statements/societal-issues/bullying-at-school-never-acceptable
     https://www.aap.org/en-us/about-the-aap/aap-press-room/pages/Stigma-At-the-Root-of-Ostracism-and-Bullying.aspx
     Processes elicited: Evaluations internal and external to a site; comparing sites
     Participants could: Scroll, click on links, and leave the site to access any information online
     Time limit: 10 minutes

Task 2: Minimum wage policy
     URL: https://www.minimumwage.com/2014/10/denmarks-dollar-forty-one-menu/
     Processes elicited: Evaluations internal and external to a site
     Participants could: Scroll, click on links, and leave the site to access any information online
     Time limit: 5 minutes

Task 3: Teacher tenure: funding for plaintiffs in Vergara v. California
     Processes elicited: Open web search to find out who paid for the $1.2 million legal fees
     Participants could: Access any information online
     Time limit: 5 minutes


PROCEDURE


Sessions with historians and fact checkers were conducted by the authors; sessions with students were conducted by one of the authors and other members of the research team. Participants completed a series of web-based tasks on a 13-inch MacBook Air. Websites were live, and participants were able to search the Internet as they normally do: clicking on links, opening new tabs, and leaving a site to search elsewhere. We employed a think-aloud methodology (Ericsson & Simon, 1993; Pressley & Afflerbach, 1995) that has been used to understand the nature of expertise in a wide variety of fields, including the thought processes of literary critics (Warren, 2011), mathematicians (Shanahan & Shanahan, 2008), historians (Wineburg, 1991), and physicians making clinical judgments (Elstein, Kagan, Shulman, Jason, & Loupe, 1972). At the beginning of the interview, the think-aloud procedure was explained: Participants were told to say everything that came to mind as they completed the tasks. After introducing each task, we refrained from speaking unless the participant fell completely silent. In that case, questions like "What are you thinking?" were used to encourage participants to verbalize their thoughts. Meta-analyses of think-aloud protocols show that they do not cause extensive disruption in the kinds of thought processes in which participants engage (Taylor & Dionne, 2000).


QuickTime Player version 10 was used to record audio and to capture video of the computer screen. QuickTime video-recorded all action on the screen, from scrolling movements to clicks, while simultaneously recording sound. We also used an iPhone 6 as a backup source of audio in case parts of the QuickTime sound files were muffled.


Participants were encouraged to do what they normally would when evaluating information and determining its trustworthiness. We used a variety of prompts to encourage natural behavior, including: "You can open up new tabs; do whatever you normally would to learn about a site" and "We're interested in your take. You can stay on the page or go out to another website, anything you would normally do." We repeated these instructions at the beginning of each task. We also noted the time limit for each task and gave participants a one-minute warning before time was up.


DATA ANALYSIS


We developed rubrics to rate the quality of participants' conclusions for each task. These rubrics were developed after extensive pilot testing with Ph.D. students and university professors (we describe them in greater detail in the sections on each task).


In addition to assigning rubric scores, we developed codes specific to each task (discussed in greater detail in Findings). We independently analyzed 10% of the data corpus, met to discuss coding and resolve inconsistencies, and updated the coding scheme. One rater (the second author) then coded the remainder of the protocols. A second coder who did not participate in the creation of the rubrics or coding scheme analyzed a random sample of 10 protocols (22% of the data corpus); interrater agreement was 92% across the three tasks (Cohen's kappa = 0.90).
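For readers unfamiliar with the statistic, percent agreement and Cohen's kappa can be computed directly from the two raters' parallel code vectors. A minimal sketch in Python, using hypothetical codes (the study's actual coded protocols are not published, and the code labels below are ours):

    from collections import Counter

    def cohens_kappa(rater1, rater2):
        # Observed agreement: share of items both raters coded identically
        n = len(rater1)
        observed = sum(a == b for a, b in zip(rater1, rater2)) / n
        # Chance agreement: product of each rater's marginal code frequencies
        c1, c2 = Counter(rater1), Counter(rater2)
        expected = sum(c1[code] * c2[code] for code in c1) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical codes for 10 protocols: 90% raw agreement, kappa = 0.8
    r1 = ["lateral", "vertical", "vertical", "lateral", "vertical",
          "lateral", "vertical", "vertical", "lateral", "vertical"]
    r2 = ["lateral", "vertical", "vertical", "lateral", "vertical",
          "lateral", "vertical", "lateral", "lateral", "vertical"]
    print(cohens_kappa(r1, r2))  # prints approximately 0.8

Kappa is lower than raw agreement because it discounts the agreement two raters would reach by chance given how often each uses each code.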


Additional analyses varied by task. These included tracking the time participants took to settle on a conclusion; whether they stayed on or left a site, and, if they left, which other sites they visited; and whether they took steps to find out more about the individuals or groups behind the sites they consulted.


RESULTS

TASK 1: BULLYING


Participants evaluated articles about bullying on the websites of the American Academy of Pediatrics (the Academy) and the American College of Pediatricians (the College). Despite the similarity in names, the two organizations could not be more different.


The Academy


Established in 1930, the Academy is the largest professional organization of pediatricians in the world, with 66,000 members and a paid staff of 450. The Academy publishes Pediatrics, the field's flagship journal, and offers continuing education on everything from Sudden Infant Death Syndrome to the importance of wearing bicycle helmets during adolescence.


The website of the Academy bears a logo and trademarked motto. Resources and professional education opportunities for members are featured, including details on membership, the group's history since its founding in 1930, and opportunities to browse books and journals that the Academy publishes. Participants viewed an article on the Academy website entitled "Stigma: At the Root of Ostracism and Bullying." The article describes a symposium in which six papers were presented, including "Discrimination and Stigmatization of Non-heterosexual Children and Youth." Additional presentations focused on factors that might place youth at risk for bullying, such as weight, sexual orientation, race, and income (American Academy of Pediatrics, 2014).


The College


By comparison, the College is a splinter group that broke from the Academy in 2002 over the issue of adoption by same-sex couples. It is estimated to have between 200 and 500 members and one full-time employee, and it publishes no journal (Throckmorton, 2011). The group has come under withering criticism for its virulently anti-gay stance, its advocacy of reparative therapy (currently outlawed for minors in thirteen U.S. states), and incendiary posts such as one that advocates adding "P" for pedophile to the acronym LGBT, because pedophilia, the group claims, is "intrinsically woven into [the LGBT] agenda" (American College of Pediatricians, 2015). The Southern Poverty Law Center has labeled the College a hate group that is deceptively named and acts to vilify gay people (Lenz, 2012; Southern Poverty Law Center, 2016). The College's portrayal of research findings on LGBT youth has provoked the ire of the nation's leading scientists, including Francis Collins, MD, the director of the National Institutes of Health, who wrote that the American College of Pediatricians "pulled language out of context from a book I wrote . . . to support an ideology that can cause unnecessary anguish and encourage prejudice. The information they present is misleading and incorrect" (as cited in Bradshaw, Weight, & Packard, March 3, 2011).3


At the same time, a quick glance at the College's site might lead one to conclude that it is a politically neutral medical organization (Turban, 2017). The website bears an official-looking logo and sports the motto "Best for Children." An anodyne About Us page informs the reader that the College "produce[s] sound policy, based upon the best available research, to assist parents and to influence society in the endeavor of childrearing." However, a closer look shows that the College does not mask its positions. The Mission of the College states: "We recognize the basic father-mother family unit, within the context of marriage, to be the optimal setting for childhood development." The College's Position Statements are transparent on issues ranging from abortion ("prematurely and unnecessarily ending a human life") to corporal punishment ("effective under certain circumstances").


Participants began by evaluating an article on the College website entitled "Bullying at School: Never Acceptable," in which a section labeled "Prevention" advises schools to refrain from recognizing any students as particularly at risk of being bullied:


By focusing a program upon the special characteristic or activity of one student or group, the school opens the floodgates for other programs promoted by its advocates, i.e. over issues involving religion, ethnicity, stature, intelligence, race, or even athletic abilities. By focusing anti-bullying programs, instead, on the topic of general respectfulness, the school . . . avoids the pitfalls of calling undue attention to a particular group or perhaps venturing into controversial teachings. (Trumbull, 2013)


Many studies have shown that students who identify as LGBT are more likely to be bullied than their heterosexual peers: "over 80% of LGBT students were verbally harassed and over 40% were physically harassed at school . . . because of their sexual orientation," according to a study cited in the White House Conference on Bullying (Espelage, 2011, p. 65). Yet the College implies that programs to reduce bullying against LGBT students amount to special treatment, and that these programs may "validat[e] individuals displaying temporary behaviors or orientations" (Trumbull, 2013).


Participants were given up to five minutes per site to evaluate the trustworthiness of each as a source of information about bullying. If they did not explicitly compare the two sites before the 10 minutes were up, we asked: "If you had to say which website was more reliable and which was less reliable, what would you say?"


We developed a rubric to characterize the quality of the conclusions participants reached. We awarded two points for specific, correct, and warranted descriptions of the sites; such answers concluded that the Academy is the main professional organization and the College is a splinter group that formed because of ideological disagreements with its parent organization. We awarded one point for vague or indecisive evaluations, and zero points when participants reached wrong conclusions (such as equating the two organizations in terms of trustworthiness).


For the College website, a Kruskal-Wallis nonparametric analysis of variance indicated significant differences in the conclusions participants reached: Fact checkers had a perfect mean score of 2 (SD = 0); historians, 0.7 (SD = 0.95); and students, 0.16 (SD = 0.37) (H (2) corrected for ties = 27.5, p < .001). Follow-up Mann-Whitney U tests showed significant differences between fact checkers and historians (p = .003) and between fact checkers and students (p < .001).


There were also significant differences in the quality of conclusion scores for the Academy site, H (2) corrected for ties = 25.2, p < .001. Fact checkers again had a perfect score (M = 2, SD = 0), historians 1.2 (SD = 0.79), and students 0.4 (SD = 0.58). Follow-up Mann-Whitney U tests yielded significant differences between fact checkers and historians (p = .01), fact checkers and students (p < .001), and historians and students (p = .007).
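For readers who want to see how such an analysis runs, a minimal sketch using scipy. The score vectors are illustrative only, chosen to match the reported group means and SDs for the College site; the per-participant scores were not published:

    from scipy import stats

    # Illustrative rubric scores (0-2) for the College site; values chosen
    # to reproduce the reported group statistics, not actual data.
    checkers   = [2] * 10                          # M = 2.0, SD = 0
    historians = [2, 2, 2, 1, 0, 0, 0, 0, 0, 0]    # M = 0.7, SD ~ 0.95
    students   = [1] * 4 + [0] * 21                # M = 0.16, SD ~ 0.37

    # Omnibus test across the three groups; scipy corrects H for ties
    h, p = stats.kruskal(checkers, historians, students)

    # Pairwise follow-up comparisons
    u_fh, p_fh = stats.mannwhitneyu(checkers, historians, alternative="two-sided")
    u_fs, p_fs = stats.mannwhitneyu(checkers, students, alternative="two-sided")
    print(h, p, p_fh, p_fs)

The nonparametric tests are appropriate here because rubric scores are ordinal and heavily tied, which violates the assumptions of a parametric ANOVA.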


There were striking differences in which site participants judged more reliable. Every fact checker unreservedly viewed the Academy's site as the more reliable; historians often equivocated, expressing the belief that both sites were reliable; and students overwhelmingly judged the College's site the more reliable (see Figure 1).


Figure 1. Percentage of participants in each group selecting the College or the Academy as more reliable



Taking Bearings


Fact checkers' success was closely tied to what we think of as taking bearings, a concept borrowed from the world of navigation. Exploring an unfamiliar forest, experienced hikers know how easy it is to lose their way. Only foolhardy hikers trust their instincts and go traipsing off. Instead, they rotate their compass's bezel to determine bearings: the angle, measured in degrees, between North and their desired destination. Obviously, taking bearings on the web is not as precise as measuring an angle in degrees. It begins, however, with a similar premise: When navigating unfamiliar terrain, first gain a sense of direction.


Checker C's approach exemplified the advantages of taking bearings. He spent a mere eight seconds on the College's landing page before going elsewhere. "The first thing I would do is see if I can find anything on the organization," he said as he typed the organization's name into Google. He clicked on Wikipedia's entry about the College and read that it is a "socially conservative association of pediatricians . . . founded in 2002 . . . as a protest against the [American Academy's] support for adoption by gay couples." Wikipedia's entry linked to sources including a Boston Globe story ("Beliefs drive research agenda of new think tanks," Kranish, 2005), a report from the Southern Poverty Law Center ("American College of Pediatricians Defames Gays and Lesbians in the Name of Protecting Children," Lenz, 2012), and a brief from the American Civil Liberties Union ("Misinformation from Doctors . . . Out to Hurt Students?," Coleman, 2010).


After a minute and twenty seconds, Checker C returned to the College's article on bullying. Rereading the abstract that he had glanced at in the task's opening seconds (see Figure 2), he paused at the phrase "no group should be singled out" and remarked that this is often code for, "you know, kids who are more likely to be bullied: students of color or gay or queer children," adding, "That's the kind of thing that I never would have known if I had just looked at [the article on bullying]."


Figure 2. Abstract of "Bullying at School: Never Acceptable" (emphasis added)



Rendered in under two minutes, Checker C's conclusion was an accurate evaluation not only of the bullying article but also of the rest of the College's website, which presents an anti-gay stance throughout.4 Overall, fact checkers left the landing page of the College in about half a minute (M = 32 s, SD = 29 s). In contrast, historians took almost three times as long (M = 88 s, SD = 103 s); eight of the 10 left the landing page, and two never did. The 16 students who left the landing page (nine never did) took an average of 100 seconds (SD = 52 s).


Fact checkers' comments as they left the landing page (see Table 2) showed an immediate impulse to take bearings. They understood the web as a maze filled with trap doors and blind alleys, where things are not always what they seem. Their stance toward the unfamiliar was cautious: While things may be as they seem, in the words of Checker D, "I always want to make sure."


Table 2. Examples of Fact Checkers' Comments Upon Leaving the Landing Page

Checker A: "I immediately want to know more about [the College]. So I'm going to go to About Us."

Checker D: "My first move to figure out whether something is reliable is to click on the About Us page. . . . At face, the American College of Pediatricians sounds pretty formal, but I always want to make sure."

Checker E: "I want to learn a lot more about the American College of Pediatricians."

Checker H: "It's kind of hard to tell how mainstream this organization is, so I might open another tab just to read a little bit more about, if this is the main American pediatricians' professional organization or if this is a splinter group for some reason."


Historians' Reading


Two of the 10 historians resembled fact checkers in how they took bearings. Leaving the landing page after a 20-second glance, Historian H opened the site's Resources tab and clicked on the link to focusonthefamily.com to confirm that it was in fact the organization founded by evangelist Dr. James Dobson. He returned to the College's Resources page with a hypothesis: "They probably have an agenda to, quote, cure, unquote, homosexuality, which is another fundamentalist point of view." Historian S also left the College's site in less than half a minute. Googling the organization's name, he clicked on a Breitbart headline, "American College of Pediatricians on Same-Sex Marriage Ruling: A Tragic Day for America's Children." He concluded that the College is "a heavily ideological site."


Historians H and S were exceptions. Asked whether the website of the splinter group or the 66,000-member Academy was the more trustworthy site, five of their colleagues equivocated. Seven of the historians never took bearings; one did so only after analyzing the bullying article for four minutes. After 10 minutes of review, most scholars had learned virtually nothing about the respective agendas of the two pediatrics organizations.


Historians were often taken in by the College's name and logo; its .org domain; its layout and aesthetics; and its scientific appearance, complete with an abstract, references, and articles authored by medical doctors. Reading the "Bullying at School" article, Historian M commented on the presence of a scientific abstract and references, compared the site to WebMD, and noted that the article was signed by a doctor (true, but not something she verified, since she never left the landing page). She concluded:


I think I would probably find this pretty reliable on the basis that it's written by an expert, it's citing expert opinions, it's been reviewed by at least some people from the College of Pediatricians, so it agrees with an expert opinion. But it is still nonetheless still an opinion piece, it's just an opinion piece that I agree with, and . . . reflects the opinion of a group that I want to know the opinion of.


There was no basis for Historian M's far-reaching conclusions other than the surface features of the site, its presentation of information, and the MD listed after the author's name.

 

One feature played a key role in shaping historians' judgments: the presence of references at the bottom of the College's entry. Seven of 10 historians explicitly commented on them (see Table 3), viewing citations to Pediatrics and the Journal of Criminology, among others, as conferring legitimacy on the article's content.


Table 3. Historians' Comments About References

Historian A: "It has references to kind of standard scientific literature, of backing up some of its claims so it has a kind of authoritative tone to it."

Historian B: "I would look at the references and see who the [author] is citing."

Historian E: "These are all references to professional journals so that definitely reinforces my sense that it's a genuine site and that the information found here can be trusted."

Historian I: "I am looking at some of the footnotes and they all seem like perfectly credible sources. . . . I can trust this site."

Historian K: "Who are they actually citing? So Pediatrics, okay, so they're citing real journals so I trust them a little bit more. . . . So the citations suggest that it has some reputable characteristics."

Historian L: "I like to look at the sources to see where they are getting things. These are all academic journals as opposed to random Google News, which you never know about."

Historian N: "I am looking at the references now and to what extent they're linked up to journals that strike me as peer-reviewed journals and have some kind of credibility. So, they all seem to come from something that strikes me, I don't know, Pediatrics, but I assume it seems to be in some kind of academic form."

Note: Not all references were to scientific articles. Among the 10 references, one was to Free Dictionary, two were to Yahoo News blogs, one was to the Alliance Defense Fund, and the rest were to refereed journal articles.



Students' Reading


By the end of 10 minutes, only three of the 25 students had distinguished between the stances of the College and the Academy. Fully 60% of students chose the College as the more reliable site. Even the five who favored the Academy learned little about the vast differences between the two organizations.


Students rarely took bearings when landing on an unfamiliar site. Nine of the 25 never left the original site; those who did tended to click on links that spoke to a personal interest rather than conduct a search designed to find out more about the organization behind the website. Student 19, who planned to major in either ancient Greek or bioengineering, based her evaluation almost exclusively on features like the organization's name ("sounds pretty legitimate"); the site's layout, which included bullet points ("nice to understand quickly") and section headings ("that's really smart"); and the absence of banner ads ("makes you focus on the article"). On the basis of graphic design, she concluded that the College's page was the more reliable of the two: "What struck me was how [the College's site] was laid out." Student 19's approach was representative of how the majority of students conducted their evaluations (see Table 4).


Table 4. Students' Comments About Why They Trusted the College's Webpage

Scientific presentation (abstract, references, authored by a medical doctor):

"This seems like it'll be pretty promising. There's an abstract, so I feel like this is like a research thing." (Student 12)

"So now I see an abstract, which makes me think that this is a very research-based paper. . . . This seems like a very scientific article, because everything is in list form and very specific. The diction and the language is pretty scientific in general. I like that they are citing their sources with links and stuff." (Student 15)

"It's written by a doctor. . . . There're references. Seems like a legitimate article." (Student 20)

Usefulness (amount of information, clarity and accessibility of article):

"It has a very clear title on what its view of bullying is. . . . I really like how it's laid out with the little headings to easily find what you need, and bullet points are always easier to look through also. And the references are really useful if I were to be doing research project, because then I could just look at these references afterwards. Yeah, I think this would be a useful site. It does seem like they have a lot of information." (Student 13)

"If I were writing a paper . . . then I would choose [the College] over [the Academy] simply because this just provides more information relevant to the topic." (Student 6)

Answering which site is more reliable after looking at both: "The [College article] because that actually gave me more information about bullying." (Student 11)

Graphic design (pleasant layout, color scheme, lack of advertisements):

"They seemed equally reliable to me. I enjoyed the interface of the [College website] better. But they seemed equally reliable. They're both from academies or institutions that deal with this stuff every day." (Student 5)

"Nice how there's not really any advertisements on this site. Makes it seem much more legitimate." (Student 19)

Organization's apparent authority (name, logo, URL):

"I can automatically see this source and trust it just because of how official it looks: American College of Pediatricians, even the font and the way the logo looks makes me think this is a mind hive that compiled this." (Student 7)

First statement on arriving at the site: "American College of Pediatricians. Seems like a credible website, run by pediatricians." (Student 16)

First statement on arriving at the site: ".org. So this looks like it might have been subsidized by a government agency." (Student 18)


Three of the 25 students selected the Academy as more trustworthy because they learned something about, and rejected, the College's ideological stance. Two of the three stumbled upon information that provided insight into the College's views but did not deliberately seek it out. Only one student in 25 took bearings in a way comparable to the fact checkers' approach, and even then the student spent nearly four minutes reading "Bullying at School: Never Acceptable" before leaving the site.


TASK 2: MINIMUM WAGE


Participants evaluated an article entitled "Denmark's Dollar Forty-One Menu" on the website minimumwage.com (see Figure 3). The article argues that if the United States followed the example of Denmark and raised wages, it would face higher food prices and diminished job opportunities. The article links to stories in the New York Times and the Columbia Journalism Review, and the website includes tabs for research reports and news stories. Its About page says it is a project of the Employment Policies Institute (EPI), a group described as "a nonprofit research organization . . . [that] sponsors nonpartisan research which is conducted by independent economists at major universities."


Figure 3. "Denmark's Dollar Forty-One Menu" on minimumwage.com




Despite their nonpartisan declarations, minimumwage.com and the Employment Policies Institute are cloaked websites (Daniels, 2009). They are the handiwork of Berman and Company, a Washington, DC-based public relations firm that lobbies on behalf of the restaurant and hotel industries. Berman's specialty, in the words of the New York Times, is "to create official-sounding nonprofit groups to disseminate information on behalf of corporate clients" (Lipton, 2014). None of this, however, is disclosed on minimumwage.com or the Employment Policies Institute website. A 2013 Salon article characterized the tactics of Berman and Company with the headline "Industry P.R. Firm Poses as Think Tank" (Graves, 2013).


Participants were given up to five minutes to evaluate minimumwage.com. They could use any Internet resources (including leaving the site) to help them; we repeated the instructions to do what they would normally do upon landing on an unfamiliar site. Participants who had not reached the Employment Policies Institute website after five minutes were given this prompt: "Minimumwage.com is paid for by another person or organization. Spend up to three minutes to figure out who is behind this site."


We used the following rubric to rate participants' responses:

Score 0: Evaluates minimumwage.com based on surface features; does not identify a connection to the Employment Policies Institute.

Score 1: Determines that the Employment Policies Institute sponsors minimumwage.com, but learns nothing about the Employment Policies Institute.

Score 2: Determines that the Employment Policies Institute sponsors minimumwage.com; describes the Employment Policies Institute as a non-profit and non-partisan think tank or research organization.

Score 3: Determines that the Employment Policies Institute sponsors minimumwage.com; describes the Employment Policies Institute as an advocacy organization or raises substantial questions/concerns about its trustworthiness.

Score 4: Determines that the Employment Policies Institute sponsors minimumwage.com and is a front site created by Berman and Company, a public relations firm.



There were dramatic differences in what fact checkers, historians, and students learned during the task's eight minutes. Before prompting, fact checkers' conclusions averaged 3.3 (SD = 0.82) out of 4, versus historians' average of 1.3 (SD = 1.4) and students' 0.52 (SD = 1.16). A Kruskal-Wallis test showed significance (H (2) corrected for ties = 21.4, p < .001); follow-up Mann-Whitney U tests showed differences between fact checkers and historians (p = .003) and between fact checkers and students (p < .001).


Without prompting, and in less than a minute, the fact checkers learned that EPI was minimumwage.com's parent (see Figure 4; M = 51 s, SD = 43 s). Historians took nearly four times as long (M = 3 min, 40 s, SD = 2 min); six of the 10 needed to be prompted to find EPI. Among the three groups, students took the longest to get to EPI, an average of 5 min, 18 s (SD = 1 min, 24 s); the overwhelming majority of students (four-fifths) needed prompting.


Every fact checker concluded that Richard Berman (or Berman and Company) sponsored EPI and minimumwage.com. Only six historians did so, and those who did took nearly twice as long as the checkers (checkers: M = 3 min, 25 s, SD = 1 min, 42 s; historians: M = 6 min, SD = 2 min, 35 s). Only 40% of students made it to Berman and Company; those who did took an average of nearly seven minutes (M = 6 min, 59 s, SD = 1 min, 51 s).


Figure 4. Average time for participants to determine the Employment Policies Institute's sponsorship of minimumwage.com; average time and percentage of each group to determine Berman and Company's sponsorship of both websites



Reading Laterally


Fact checkers learned more about minimumwage.com, and did so in less time, than the others. They employed a powerful heuristic for taking bearings: lateral reading. Fact checkers almost immediately opened a series of new tabs along the horizontal axis of their browsers before fully reading the article.


Checker A glanced at "Denmark's Dollar Forty-One Menu" for six seconds before clicking on the page's About tab, where she learned that the site was a project of the Employment Policies Institute. She used keyboard shortcuts (pressing the Command key while clicking) to open the link to the Employment Policies Institute site in a new tab alongside minimumwage.com (see Figure 5). After just three seconds on EPI's home page, she went to its About Us page, scanned the bland description ("Founded in 1991, the Employment Policies Institute is a non-profit research organization dedicated to studying public policy issues"), and quipped, "This is profoundly not helpful." In just over half a minute, she opened a new tab and searched for the Employment Policies Institute.


Figure 5. Checker A's lateral reading




Scanning Google's snippets, Checker A skipped the first four results and selected SourceWatch's entry on EPI: "So this says it's one of several front groups created by a PR firm." She scrolled until she hit a linked quotation from a New York Times reporter who detailed his visit to the EPI, saying, "I didn't see any evidence at all that there was an Employment Policies Institute office." One minute and twenty-seven seconds into the task, she clicked on SourceWatch's citation for this quote, which led to a National Public Radio story, "A Closer Look at How Corporations Influence Congress." Rather than reading, Checker A used Command-F to search for EPI and corroborate SourceWatch's claims. A little over two minutes into the task, she had EPI sized up:


Obviously this isn't a legitimate organization, based on the reporting of this New York Times reporter. He talks about actually going there, he doesn't see any evidence at all that they actually had an office, there are no employees, all the staff there actually work for the PR firm.


Only then did she return to her original starting place, minimumwage.com, declaring, "[The New York Times reporter] is right. It's a very legitimate-looking website, but clearly, this is also advancing an agenda."


With breakneck speed, Checker A deftly traversed a digital morass, ignoring massive amounts of material (she barely read the original article) to conclude that minimumwage.com and EPI were not what they seemed. Though slightly less efficient, the other checkers mirrored Checker A's lateral approach. On average, they left the starting page in just over half a minute (M = 37 s, SD = 41 s). None accepted EPI's description at face value; instead, they read laterally, visiting an average of six pages before concluding that minimumwage.com and EPI were cloaked sites that represented corporate interests.


Historians' Reading


Historians took longer, on average, to go from minimumwage.com to EPI than fact checkers took to conclude that both sites were the products of Berman and Company. Before prompting, only four of 10 historians connected minimumwage.com to the Employment Policies Institute. As in the previous task, Historians H and S were outliers. They left the landing page four times as fast as the others, averaging 26 seconds; their eight colleagues averaged 2 minutes, 5 seconds. Both were efficient lateral readers, wasting little time before opening additional tabs. Three of their colleagues, on the other hand, remained stuck on minimumwage.com for the entire task.


Even when some of the historians sought to read laterally, opening new tabs to research minimumwage.com or the Employment Policies Institute, they lacked essential searching skills. For example, a minute into the task, Historian K tried to learn more about minimumwage.com by opening a new tab to search for the name of the organization. But instead of putting the name of the organization in quotation marks and adding keywords like "funding" or "who is behind," she typed [minimum wage.com] into the search bar, separating "minimum" from "wage" and adding no additional terms. The outcome was an entire page of results issued by the very organization she was trying to investigate. Sensing a dead end, she added [conservative?] to the search bar, which produced yet another page of fruitless results (see Figure 6).


Figure 6. Historian K's search results for [minimum wage.com conservative?]



Stymied, the historian abandoned lateral reading and returned to the original "Denmark's Dollar Forty-One Menu" page, little wiser than before. She clicked the page's Research tab to engage in a more familiar task: "Let me see how I can interpret the legitimacy of their research." Historian K was not alone: her colleagues fumbled such basic moves as putting terms in quotation marks so that Google would search for them as a contiguous phrase. Each of these historians was an astute reader, but reading skills alone weren't enough to pull back the curtain on a cloaked website.
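To make the contrast concrete, consider two illustrative queries (ours, not ones participants used). A search like ["minimumwage.com" funding] or ["Employment Policies Institute" criticism] treats the quoted name as a single phrase and adds a probe word, which tends to push third-party coverage of an organization above the organization's own pages. Historian K's unquoted query [minimum wage.com], by contrast, returned a full page of results from the very site she was trying to investigate.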


Students' Reading


Students struggled to get to the bottom of minimumwage.com. They either spent too much time reading vertically, staying on the page and reading it as they might a print document, or they engaged in fluttering: "aimlessly moving across the screen, touching or not touching pieces of information . . . unconscious to its value and without a plan" (Kirschner & van Merriënboer, 2013, p. 171). When five minutes were up and before being prompted, 80% of students had devoted no time to investigating who was behind minimumwage.com.


Although some students left the landing page quickly, their exit was a far cry from the strategy of taking bearings. Instead, they meandered to different parts of the site, deciding where to click based on whatever struck their fancy. A prospective chemical engineering major glanced quickly at "Denmark's Dollar Forty-One Menu" before scrolling to the bottom of the page and clicking on "In Your State," an interactive map where users could click on different states and compare minimum wage rates and unemployment statistics. He spent two minutes playing with it, longer than he spent reading the initial article. Other students engaged in similar kinds of fluttering, clicking on features that piqued their curiosity rather than those that would justifiably inform their judgment about the site's trustworthiness (see Table 5).


Table 5. Students' Fluttering on Minimumwage.com

https://www.minimumwage.com/media/ (Student 19, who visited the Media page after the Home, Myths, Research, and In Your State pages): "It's interesting how the Media page is kept very minimalistic, and then you click on other things [clicking on News Reports, which leads to an EPI page] and it brings you to different pages [clicks back to Media page]. But I think it's actually smart to keep that elsewhere just to organize it."

https://www.minimumwage.com/research/ (Student 3, explaining her reasoning for clicking on the Research page instead of the Blog, In Your State, or Video and Graphics pages): "I don't really want to read their blog, and I'm not interested right now in what's my state's minimum wage and teen unemployment. . . . And videos and graphics are too time consuming."

https://www.minimumwage.com/news/ (Student 12, who clicked through several pages of the website, including Home, In Your State, Blog, Research, About, and Myths, focusing her comments on each page on appearance and organization): "I like the layout of the blog, I think it's also just very clear and everything's very cleanly laid out in a single column. Same with this [Research] page. . . . Oh, and then here's a description of the website. Um, this is a pretty cool page too."

https://www.minimumwage.com/media/ (Student 1, who clicked to the Media and Videos and Graphics pages after viewing the Home and In Your State pages): "Maybe this is an impartial website. Is there any such thing [clicks to Videos and Graphics page] as an impartial website? I don't know. [reading advertisements posted on site] 'Unhappy New Year,' 'If 7 out of 10 doctors said you were sick, you would listen.'"


TASK 3: VERGARA V. CALIFORNIA


In May 2012, lawyers in California filed a lawsuit on behalf of nine public school students, including Beatriz Vergara. They argued that the system of teacher tenure in California violated the state constitution by denying equal protection to students with ineffective teachers. In June 2014, a California Superior Court ruled in favor of the students. The case cost more than a million dollars to prosecute, a sum that typically exceeds the spending money of nine adolescents. In fact, the legal team was hired and financed by David Welch, a Silicon Valley entrepreneur who founded the organization Students Matter.


The press, however, often omitted this detail. What made for good copy was a David-versus-Goliath tale of adolescents taking on a powerful teachers union: nine students, mostly students of color, confronting a rotten bureaucracy and demanding better teachers. A news item on the website of KABC, the Los Angeles ABC affiliate, reported that "The verdict is a win for nine students who sued the state saying that tenure policies have made it impossible for bad teachers to be fired" (California Teacher Tenure, 2014). It made no mention of Students Matter, David Welch, or any of the big money that backed the suit.


Unlike the two previous tasks, this one began with a paper stimulus: the 379-word article from KABC. We gave participants time to read the article before telling them that the nine students had a million-dollar legal bill. We then asked them to spend five minutes searching for who paid the tab. Participants needed to, as it were, "follow the money" by locating information that named Students Matter, and ultimately David Welch, as the main backer of the lawsuit.


Vergara was a politically charged case with far-reaching implications. Students Matter argued that the case was about getting rid of laws that were "handcuffing schools from doing what's best for kids when it comes to teachers" (Vergara v. California, n.d.); the California Teachers Association painted it as a lawsuit brought by "wealthy corporate special interests" looking to eradicate educators' professional and due process rights (Vergara v. State of California, n.d.). Given these conflicting claims and the number of bona fide news sources and partisan sites that wrote about the case, site selection and verification were essential. If participants could verify across bona fide sources that Welch was the source of the plaintiffs' funding, they could be more certain that they had successfully navigated politically muddy waters to arrive at the correct answer.


The 25 Stanford students were the fastest to identify Welch as the source of funding (M = 1 min, 42 s, SD = 86 s). Fact checkers and historians were slower: historians took 2 min, 1 s (SD = 56 s), and checkers averaged 2 min, 8 s (SD = 93 s). Although they were the slowest to reach their conclusions, fact checkers were the most selective about the sites they visited and took the most time to verify their answers.


We rated the quality of participants' conclusions using a 5-point scale. Participants were given a 0 if they never identified Welch; a 1 if they identified Welch but did so only through a questionable source; a 2 if they identified and verified Welch's role based on two or more questionable sources; a 3 if they identified Welch using a bona fide source; and a 4 if they identified and verified Welch's role through at least one bona fide source and one additional source. We defined bona fide sources as those that have systems in place to ensure the quality and accuracy of the information they provide (e.g., they have a retraction or correction policy, employ an ombudsman, separate their editorial and news divisions, and are explicit about the methods with which they verify news) (Bounegru, Gray, Venturini, & Mauri, 2017; Bulger & Davison, 2018).


Using our rubric, the fact checkers' conclusions merited a 3.6 (SD = 0.70), versus the historians' 2.4 (SD = 1.3) and the students' 2.3 (SD = 1.5). Fifteen students scored a 0, 1, or 2, while all but one of the fact checkers' responses scored a 3 or 4. A Kruskal-Wallis test showed significance, H (2) corrected for ties = 27.5, p < .001; follow-up Mann-Whitney U tests showed differences between fact checkers and students (p = .016).


The differences between the students' and the fact checkers' approaches can be seen by comparing Checker D with Student 17, a mathematical and computational science major. Both identified Welch in under a minute (34 seconds for the student, 50 seconds for the checker). The student spent just a few seconds on the results yielded by searching for [vergara v california]. He clicked on the first result (the Students Matter page) but quickly returned to the search results, reminding himself, "I'm looking for the who paid." He then selected vergaratrial.com, a partisan site created by the California Federation of Teachers, where he located Welch's name. He commented neither on the website's political stance nor on whether he found it trustworthy; he simply located Welch's name and accepted it as fact.


Checker D initially searched for [vergara v california] before quickly adjusting it to [vergara v. california court records]. As she scrolled down the results, she said, "I'm coming up with a lot of different information. I'd rather click on some press reports." She skipped the first three results, all of which were affiliated with Students Matter, along with vergaratrial.com and cacs.org (an organization she did not recognize), and instead opened articles from three news organizations and Wikipedia. Exhibiting what we call click restraint, she spent nearly 20 seconds scanning the search engine results page and reading the snippets before clicking on any link. Although she opened four additional webpages in new tabs (see Figure 7), her use of keyboard shortcuts meant that her eyes and focus never wavered from the results page. Checker D went first to Wikipedia, where she skipped over most of the entry by using the Contents menu to navigate to "Litigants." There, she read that funding for the plaintiff school students was provided by David Welch, a Silicon Valley entrepreneur. She then clicked on the Washington Post article she had opened in a different tab, used the Command-F shortcut to search for Welch's name, and confirmed his role in the case.


Checker D took 16 seconds longer than Student 17 to find Welch's name. However, she was more purposeful in the sites she opened, more discerning about the information she considered trustworthy, and more thorough in ascertaining that David Welch was indeed the money behind Vergara v. California.


Figure 7. Checker D's search results showing the sites she opened




Historians


Historians were only slightly better than students in the quality of their conclusions (M = 2.4 for historians versus M = 2.3 for students). Although several historians excelled, quickly locating Welch's name and verifying his role on trusted sites, two of them relied exclusively on partisan or questionable sources and made no attempt to verify their conclusions.


A third, Historian N, never made it to Welch. He searched for [Vergara v. California] and started with Wikipedia. Rather than using Wikipedia to quickly locate Welch's name, Historian N went directly to the references to find a link to the case itself. For nearly three minutes, he examined the original court brief (number BC484642), scrolling up and down the PDF document, pausing at "Procedural History" and learning that the plaintiffs argued that the California Education Code violated the equal protection clause of the state constitution. After searching in vain for the plaintiffs' backers, he abandoned Wikipedia and initiated a new search, adding "plaintiffs" and "attorneys" to his original query.


He clicked on the first result (studentsmatter.org, Welch's organization) and went to "Our Team," where he recognized the name of the lead attorney ("someone I know" and "the Solicitor General under Bush"). By the end of the task, the only thing he could say was that the plaintiffs were represented by a team with deep legal pockets. An accurate statement, but then again, this was the starting point for the task: participants were told at the task's start that legal fees in this case were over a million dollars and that their goal was to find out who paid the bill. By the task's end, this historian was no closer to answering the question than when he started. How come?


The simplest answer was that Historian N did what historians are trained to do: search for primary sources. Had the task been to write a history of the Vergara case, initiating the research process with the court briefing might have made sense. However, when the goal was to quickly learn who backed the teenagers, a close reading of a labyrinthine legal document, one that never mentioned Welch, took precious time and sapped limited energy.


LIMITATIONS


The purpose of this exploratory study was to better understand the nature of expertise in the evaluation of online information. We recognize, however, that any task that involves researchers peering over participants' shoulders creates an artificial environment that can distort what people ordinarily do. Despite repeated imperatives to "do what you normally do," it has to be odd to be shown sites not of one's choosing and given time limits for searching. Studies are needed that observe people evaluating websites in more natural settings. At the same time, we reasoned that tasks without time limits threaten ecological validity: just-in-time searches are generally matters of minutes or seconds, not hours (Liu, White, & Dumais, 2010; Nielsen, 2011). It's also possible that a different sample of sites might have yielded different results. We chose sites that covered a range of topics and perspectives and that varied in how forthcoming they were about their agendas. But even within the categories we selected, there are innumerable options, each with unknown content effects. More extensive research is needed to know whether the strategies we identified generalize across topics, sites, and searches.


Additionally, it may be that participants didn't put forth their best effort, although we find that prospect unlikely. Our sample comprised people with high levels of self-regard and intellectual confidence. Looking foolish or appearing cavalier, especially when rendering judgments about issues of social and political moment, threatens that self-regard.


We are also aware that professional fact checkers were not the only possible group of experts we could have sampled. Others, such as Wikipedia editors who have earned the highest badges, specialists in cybersecurity, and professional librarians and information scientists, are also worthy of study. Teacher educators, especially those responsible for preparing new teachers to use information technologies, would be a fruitful sample, too. Nor can we rule out effects of sample size: in their approach to websites, 2 of the 10 historians resembled the fact checkers more than their fellow historians, and small sample sizes exaggerate differences (Jussim, 2012). Doubling or tripling the sample might have produced different results. Studies that require intensive protocol analysis always present a trade-off between sample size and available resources. That said, a sample of 45 nearly hour-long protocols is on the higher end for this genre of research.


DISCUSSION


The participants in this study were capable readers. Historians had esteemed publications to their credit and held coveted positions in a field where tenured faculty lines are increasingly rare. The fact checkers worked for prestigious publications and rubbed shoulders with famous authors who depended on them to get things right. Our college students gained admission to the most competitive university in the U.S. in 2015 (Stanford University, 2015). Yet, despite our participants' talents, there were unmistakable differences in how they navigated the web.


Only 2 of the 10 historians adroitly evaluated digital information. The others were often indistinguishable from college students; both groups fell victim to the same digital ruses. Considering our participants' experience and accomplishments, we are left to ask: What is it about the Internet that bedevils intelligent people? And what did fact checkers do that allowed them to quickly and accurately discern the trustworthiness of information?


The answer, we suggest, lies with two concepts we introduced earlier: taking bearings and lateral reading. In order to take bearings, searchers obey the following imperative: Before diving deeply into unfamiliar content, chart a plan for moving forward. Taking bearings is what sailors, aviators, and hikers do to plot their course toward a desired destination. Landing in unfamiliar territory, fact checkers set out for their destination, making a judgment of credibility only after gaining a sense of where they had landed. To take bearings, web searchers obviously don't use a physical compass. But they need metaphorical compasses just as much as hikers need real ones.


In an Internet teeming with cloaked sites and astroturfers (see Daniels, 2009; SourceWatch, n.d.), taking bearings often assumes the form of lateral reading. When reading laterally, one leaves a website and opens new tabs along the browser's horizontal axis, drawing on the resources of the Internet to learn more about a site and its claims. When reading laterally, fact checkers paid little attention to features of a website like its appearance or contents. Instead, they quickly leapt off the landing page to open new tabs. Fact checkers, in short, learned most about a site by leaving it.


Lateral reading differs from one of the main approaches to literacy advocated by the Common Core State Standards. As defined by the Common Core's framers, close reading encourages students to "read and reread deliberately" in order to reflect on the meanings of individual words and sentences, the order in which sentences unfold, and the development of ideas over the course of the text, which ultimately leads students to arrive at an understanding of the text as a whole (PARCC Model Content Frameworks, 2011, p. 7). Close reading trains students to parse grammatical structures, to ponder the metaphors and similes an author employs, and to consider how minor shifts in word choice influence meaning. Close reading is essential to any thoughtful curriculum (Shanahan, 2012; Wolf, 2007). However, the close reading of a digital source when one doesn't yet know if the source can be trusted is neither efficient nor effective.


Just as close reading focuses students' attention on carefully analyzing text, so a popular approach to teaching web evaluation advocates parsing the features and contents of an individual webpage. Often presented as checklists, these guidelines focus attention on elements internal to a website: Is its URL a .org or a .com? Is an author listed? Are there ads on the page? Are hyperlinks functional? The questions on these lists number from 10 to as many as 30 (see Common Sense Media, 2012; Media Education Lab, n.d.; News Literacy Project, n.d.), and nearly all of them focus on easily manipulated features. Similarly, college library websites often advise students to use "Five Criteria for Web Evaluation." These criteria (Authority, Accuracy, Objectivity, Currency, and Coverage), or variations on the theme (including the CRAAP test: Currency, Relevance, Authority, Accuracy, and Purpose), focus users' attention on closely reading and analyzing the contents of a single webpage.5


Even if we set aside the concern that students (and the rest of us) lack the patience to spend 15 minutes answering lists of questions about a single site, a bigger problem looms: identifying a .org URL, locating an author, and making sure a site is free of typos hardly confer credibility. The Employment Policies Institute not only carried a .org domain but was labeled a 501(c)(3) charitable organization. At a time when the Internet is characterized by polished web design, search engine optimization, and organizations vying to appear trustworthy, such guidelines create a false sense of security. Students and historians often stumbled because they read closely and followed the advice dispensed by checklists. Fact checkers succeeded because they did not.


Instead of closely reading or ticking off elements on a list, checkers ignored massive amounts of irrelevant (or less crucial) text in order to make informed judgments about the trustworthiness of digital information. In short, fact checkers read less but learned more.


This approach contrasted with students' and historians' tendency to closely read and analyze webpages. There were, however, moments when fact checkers slowed down. College students were faster at finding the name of the financial backer in the Vergara case, but speed came at the expense of quality. Students arrived at David Welch's name by promiscuous clicking, often without regard to a source's reliability. Fact checkers took longer not because of faulty search strategies or unhelpful keywords, but because they slowed down and carefully reviewed search results before clicking on any one of them. They displayed click restraint: Before clicking on any result, they mined Google's snippets for the wealth of information they contain. They examined each URL, considered the source of the information, and scanned the brief but fecund sentence fragments before alighting on a link to click. They stood back from the results page and viewed it as a whole, gaining a sense of the "information neighborhood" (Klurfeld & Schneider, 2014) in which they had landed. A searcher's first click can determine their destiny, either putting them on a path toward warranted conclusions or sending them into the wilderness of infinite regress. Click restraint tips the balance toward the former.


The strategies and heuristics that fact checkers deployed relied on three sources of background knowledge: knowledge of digital sources, knowledge of how the Internet and searches are structured, and knowledge of strategies that make searching and navigating effective and efficient. Fact checkers recognized and distinguished among an array of online venues, including where sites fell on the political spectrum (Daily Kos is liberal, the Daily Caller conservative). They understood the characteristics that make a source reliable versus those that act as fallible proxies for reliability, such as how an unfamiliar organization describes itself on its About page. For example, the Employment Policies Institute describes itself as "a non-profit research organization dedicated to studying public policy issues." Checker A's reaction was curt: "This is profoundly not helpful." She knew that nonprofit status does not stamp an organization as unquestionably altruistic.


Yet knowledge of sources is necessary but not sufficient. Fact checkers also possessed knowledge of online structures, particularly how search results are organized and presented. They knew that the first result was not necessarily the most authoritative, and they spent time scrolling through results, often scanning the entire first page (and sometimes the second and third) before clicking on a link. They understood how search engine optimizers use sophisticated keywords and other techniques to game results, pushing some sites to the top and more authoritative sites to the bottom. Students, and sometimes historians, often clicked on the first few search results, rarely articulating a rationale for why they selected a particular link (a finding documented by others; e.g., Hargittai et al., 2010; Pan et al., 2007). Unlike fact checkers, they rarely took time to scan the entire search engine results page to gain an overall picture of the digital territory in which they had landed. They seldom clicked to the second or third page of search results, a pattern common to over 70 percent of Internet searches (Schwartz, 2014).


Finally, lateral reading relies on canny strategies and heuristics for navigating the Internet. Some of these are easy to teach, like knowing that Google searches for contiguous words only when they are placed in quotation marks, or that right-clicking opens up new windows along the screen's horizontal axis (instead of piling window upon window). Other heuristics, like lateral reading and click restraint, are more involved and are undergirded by an understanding of the Internet and how it differs from print sources.


Teaching strategies like lateral reading and click restraint, as well as the knowledge that enables their effective use, will not solve all our information woes. However, these strategies may go part of the way toward eliminating some of the most common errors in judging digital sources. Facing an onslaught of digital information, all of us need efficient strategies for separating truth from falsehood, good arguments from bad. As an example, consider the daunting challenge faced by California voters trying to sift through 17 separate initiatives on the 2016 ballot: plans to increase the tobacco tax, ban plastic bags, limit the sale of ammunition, legalize recreational marijuana, require porn stars to wear condoms while filming, approve a bond to build new schools, repeal the death penalty or make it easier to mete out, and so on. If the average citizen spent 10 minutes researching each initiative, nearly three hours in all, we would consider this an act of responsible citizenship. The educational challenge we face is this: What can we teach to ensure that those 10 minutes count?


Addressing this challenge in an age when we encounter the world through a screen can begin with a course on media literacy. However, it cannot stop there. The digital revolution demands a fundamental reconsideration of how we teach all of the core school subjects. What should history teaching look like when students can go online and find evidence for the canard that thousands of black men put on grey uniforms to take up arms for the Confederacy? Or science teaching, when anti-vaxxer sites maintain that there is a proven link between autism and measles shots, despite a retraction by the journal that published the claim and the fact that no respectable body of opinion supports the linkage (Walker-Smith v. General Medical Council, 2012)? Or language arts class, when ad hominem arguments and "alternative facts" overwhelm civil discourse?

Technology can do many things, but it can't teach discernment. In Thomas Jefferson's day, an earlier technological revolution, the advent of moveable type, drastically lowered the cost of print. Jefferson watched with dismay as pamphlets and broadsides of dubious quality littered the streets. He understood that a dark side accompanies the expansion of expression. We would do well to take his solution to heart, as well as to realize it won't come cheaply: "If we think [the people] not enlightened enough to exercise their control with a wholesome discretion, the remedy is not to take it from them, but to inform their discretion by education."

Notes

1. We sampled historians on the basis of full-time employment as historians at four-year colleges and universities, and fact checkers on the basis of their professional affiliation and full-time employment. We did not ask these participants for information about their race or ethnicity, languages spoken, or country of origin.


2. In addition to the tasks presented here, the full protocol included (1) brief evaluations of four static sites, (2) an open web search on a historical question with contemporary ramifications, and (3) locating the registrant of a website. The findings from those tasks are broadly consistent with what we present here. A description of the full protocol is available from the authors.  


3. The statement from Collins, which was posted on the National Institutes of Health website, is also available via the Web Archive: http://web.archive.org/web/20110727115017/http://www.nih.gov/about/director/04152010_statement_ACP.htm.  


4. The stance is prominent in other parts of the website, such as a Position Statement entitled "On the Promotion of Homosexuality in the Schools," which states that the homosexual lifestyle carries grave health risks; that validating a student's same-sex attraction during the adolescent years is premature and may be harmful; and that sexual reorientation therapy can be effective. Retrieved from https://www.acpeds.org/wordpress/wp-content/uploads/On-the-Promotion-of...pdf


5. The University of Alaska Fairbanks guide is located at https://library.uaf.edu/ls101-evaluation, while Illinois State University's is at https://guides.library.illinoisstate.edu/evaluating/craap.

Acknowledgements


This research was generously supported by the Robert R. McCormick Foundation and the Spencer Foundation (Grant 201600012, Sam Wineburg, Principal Investigator). The authors alone are responsible for its contents. We gratefully acknowledge the indispensable feedback of Mike Caulfield, Joel Breakstone, Teresa Ortega, Mark Smith, and two anonymous reviewers.


References


Agosto, D. E. (2002). A model of young people's decision-making in using the Web. Library & Information Science Research, 24, 311–341.

American Academy of Pediatrics. (2014). Stigma: At the root of ostracism and bullying. Retrieved from https://www.aap.org/en-us/about-the-aap/aap-press-room/pages/Stigma-At-the-Root-of-Ostracism-and-Bullying.aspx

American College of Pediatricians. (2015). P for pedophile. Retrieved from https://www.acpeds.org/p-for-pedophile

Anderson, N. (2016, April 1). Applied to Stanford or Harvard? You probably didn't get in. Admit rates drop, again. Washington Post. Retrieved from https://www.washingtonpost.com/news/grade-point/wp/2016/04/01/applied-to-stanford-or-harvard-you-probably-didnt-get-in-admit-rates-drop-again/?noredirect=on

Barzilai, S., & Zohar, A. (2012). Epistemic thinking in action: Evaluating and integrating online sources. Cognition and Instruction, 30, 39–85.

Bennett, S. (2012). Digital natives. In Z. Yan (Ed.), Encyclopedia of cyber behavior: Volume 1 (pp. 212–219). Hershey, PA: IGI Global.

Bounegru, L., Gray, J., Venturini, T., & Mauri, M. (2017). A field guide to fake news and other information disorders. Amsterdam, NL: Public Data Lab. Retrieved from https://fakenews.publicdatalab.org/

Bradshaw, W., Weight, D. G., & Packard, T. (2011, May 3). Same sex attraction not a matter of choice. Salt Lake Tribune. Retrieved from http://archive.sltrib.com/printfriendly.php?id=51356807&itype=cmsid

Brand-Gruwel, S., Kammerer, Y., van Meeuwen, L., & van Gog, T. (2017). Source evaluation of domain experts and novices during Web search. Journal of Computer Assisted Learning, 33, 234–251.

Brand-Gruwel, S., Wopereis, I., & Vermetten, Y. (2005). Information problem solving by experts and novices: Analysis of a complex cognitive skill. Computers in Human Behavior, 21, 487–508.

Bulger, M., & Davison, P. (2018). The promises, challenges, and futures of media literacy. New York, NY: Data & Society. Retrieved from https://datasociety.net/output/the-promises-challenges-and-futures-of-media-literacy/

California teacher tenure law unconstitutional, judge says. (2014, June 10). Retrieved from http://abc7.com/education/california-teacher-tenure-law-unconstitutional/106228/

Coleman, T. (2010). Misinformation from doctors . . . out to hurt students? Retrieved from https://www.aclu.org/blog/speakeasy/misinformation-doctorsout-hurt-students

Common Sense Media. (2012). Identifying high-quality sites. Retrieved from https://www.commonsensemedia.org/educators/lesson/identifying-high-quality-sites-6-8

Common Sense Media. (n.d.). Scope and sequence: Common Sense K-12 digital citizenship curriculum. Retrieved from https://www.commonsense.org/education/scope-and-sequence

Daniels, J. (2009). Cloaked websites: Propaganda, cyber-racism and epistemology in the digital era. New Media and Society, 11, 659–683.

Dougherty, J., & Nawrotzki, K. (Eds.). (2013). Writing history in the digital age. Ann Arbor, MI: University of Michigan Press.

Elstein, A. S., Kagan, N., Shulman, L. S., Jason, H., & Loupe, M. J. (1972). Methods and theory in the study of medical inquiry. Journal of Medical Education, 47, 85–92.

Ericsson, K. A., Charness, N., Feltovich, P., & Hoffman, R. (2006). Cambridge handbook of expertise and expert performance. New York, NY: Cambridge University Press.

Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data (Rev. ed.). Cambridge, MA: MIT Press.

Espelage, D. (2011). Bullying and the lesbian, gay, bisexual, transgender, questioning (LGBTQ) community. Retrieved from https://www.stopbullying.gov/at-risk/groups/lgbt/white_house_conference_materials.pdf

Fogg, B. J. (2003, April). Prominence-interpretation theory: Explaining how people assess credibility online. Paper presented at the ACM Conference on Human Factors in Computing Systems, Ft. Lauderdale, FL.

Foley, B. (2017, December 31). Spread of fake news prompts literacy efforts in schools. PBS NewsHour. Retrieved from https://www.pbs.org/newshour/education/spread-of-fake-news-prompts-literacy-efforts-in-schools

Gasser, U., Cortesi, S., Malik, M., & Lee, A. (2012). Youth and digital media: From credibility to information quality. Cambridge, MA: Berkman Center for Internet and Society.

Graves, L. (2013, November 13). Corporate America's new scam: Industry P.R. firm poses as think tank! Salon. Retrieved from http://www.salon.com/2013/11/13/corporate_americas_new_scam_industry_p_r_firm_poses_as_think_tank/

Hargittai, E., Fullerton, L., Menchen-Trevino, E., & Thomas, K. Y. (2010). Trust online: Young adults' evaluation of web content. International Journal of Communication, 4, 468–494.

Herold, B. (2018, February 28). Media literacy. Education Week, 37(22), 4.

Hilligoss, B., & Rieh, S. Y. (2008). Developing a unifying framework of credibility assessment: Construct, heuristics, and interaction in context. Information Processing & Management, 44, 1467–1484.

Iding, M. K., Crosby, M. E., Auernheimer, B., & Klemm, B. (2009). Web site credibility: Why do people believe what they believe? Instructional Science, 37, 43–63.

Julien, H., & Barker, S. (2009). How high school students find and evaluate scientific information: A basis for information literacy skills development. Library & Information Science Research, 31, 12–17.

Jussim, L. (2012). Social perception and social reality. New York, NY: Oxford University Press.

Kahne, J., Hodgin, E., & Eidman-Aadahl, E. (2016). Redesigning civic education for the digital age: Participatory politics and the pursuit of democratic engagement. Theory & Research in Social Education, 44, 1–35.

Klurfeld, J., & Schneider, H. (2014). News literacy: Teaching the Internet generation to make reliable information choices. Washington, DC: Brookings Institution. Retrieved from https://www.brookings.edu/wp-content/uploads/2016/07/Klurfeld-Schneider_News-Literacy_June-2014.pdf

Kranish, M. (2005, July 31). Beliefs drive research agenda of new think tanks: Study on gay adoption disputed by specialists. Boston Globe. Retrieved from http://archive.boston.com/news/nation/washington/articles/2005/07/31/beliefs_drive_research_agenda_of_new_think_tanks/

Leinhardt, G., & Young, K. M. (1996). Two texts, three readers: Distance and expertise in reading history. Cognition and Instruction, 14, 441–486.

Lenz, R. (2012). American College of Pediatricians defames gays and lesbians in the name of protecting children. Retrieved from https://www.splcenter.org

Lipton, E. (2014, February 9). Fight over minimum wage illustrates web of industry ties. New York Times. Retrieved from http://www.nytimes.com/2014/02/10/us/politics/fight-over-minimum-wage-illustrates-web-of-industry-ties.html

List, A., Grossnickle, E. M., & Alexander, P. A. (2016). Undergraduate students' justifications for source selection in a digital academic context. Journal of Educational Computing Research, 54, 22–61.

Liu, C., White, R. W., & Dumais, S. (2010). Understanding web browsing behaviors through Weibull analysis of dwell time. Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval. Geneva, Switzerland.

Lucassen, T., & Schraagen, J. M. (2011). Factual accuracy and trust in information: The role of expertise. Journal of the American Society for Information Science and Technology, 62, 1232–1242.

McGrew, S., Breakstone, J., Ortega, T., Smith, M., & Wineburg, S. (2018). Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory & Research in Social Education, 46, 165–193.

Media Education Lab. (n.d.). Who do you trust? Retrieved from http://mediaeducationlab.com/secondary-school-unit-2-who-do-you-trust

Mozilla. (n.d.). Web literacy. Retrieved from https://learning.mozilla.org/en-US/web-literacy

National Association for Media Literacy Education. (2007). Core principles of media literacy education in the United States. Retrieved from https://namle.net/publications/coreprinciples/

News Literacy Project. (n.d.). Ten questions for fake news detection. Retrieved from www.thenewsliteracyproject.org/sites/default/files/GOTenQuestionsForFakeNewsFINAL.pdf

Nielsen, J. (2011). How long do users stay on web pages? Retrieved from https://www.nngroup.com

Pan, B., Hembrooke, H., Joachims, T., Lorigo, L., Gay, G., & Granka, L. (2007). In Google we trust: Users' decisions on rank, position, and relevance. Journal of Computer-Mediated Communication, 12, 801–823.

PARCC Model Content Frameworks. (2011). Retrieved from https://parcc-assessment.org/content/uploads/2017/11/PARCCMCFELALiteracyAugust2012_FINAL.pdf

Pirolli, P., & Card, S. (1995, May). Information foraging in information access environments. Paper presented at the CHI Conference on Human Factors in Computing Systems, Denver, CO.

Pressley, M., & Afflerbach, P. (1995). Verbal protocols of reading: The nature of constructively responsive reading. Hillsdale, NJ: Erlbaum.

Robb, A. (2017, November 16). Anatomy of a fake news scandal. Rolling Stone. Retrieved from https://www.rollingstone.com/politics/news/pizzagate-anatomy-of-a-fake-news-scandal-w511904

Schwartz, B. (2014). A new click rate study for Google organic results. Retrieved from https://marketingland.com/new-click-rate-study-google-organic-results-102149

Shanahan, C., Shanahan, T., & Misischia, C. (2011). Analysis of expert readers in three disciplines: History, mathematics, and chemistry. Journal of Literacy Research, 43, 393–429.

Shanahan, T. (2012, June 18). What is close reading? [Blog post]. Retrieved from http://shanahanonliteracy.com/blog/what-is-closereading#sthash.mxxi0paG.rtIrn0KW.dpbs

Shanahan, T., & Shanahan, C. (2008). Teaching disciplinary literacy to adolescents: Rethinking content-area literacy. Harvard Educational Review, 78, 40–59.

SourceWatch. (n.d.). Portal: Front groups. Retrieved from https://www.sourcewatch.org/index.php/Portal:Front_groups

Southern Poverty Law Center. (2017). Active hate groups 2016. Retrieved from https://www.splcenter.org/fighting-hate/intelligence-report/2017/active-hate-groups-2016

Stanford University. (2015). Our selection process: Applicant profile. Retrieved from http://admission.stanford.edu/basics/selection/profile15.html

Stanford University. (2017). Facts 2017: Academics. Retrieved from http://facts.stanford.edu/academics/undergraduate-facts

Taylor, K. L., & Dionne, J. P. (2000). Accessing problem-solving strategy knowledge: The complementary use of concurrent verbal protocols and retrospective debriefing. Journal of Educational Psychology, 92, 413–425.

Throckmorton, W. (2011, October 6). The American College of Pediatricians versus the American Academy of Pediatrics: Who leads and who follows? [Blog post]. Retrieved from http://www.patheos.com/blogs/warrenthrockmorton/2011/10/06/the-american-college-of-pediatricians-versus-the-american-academy-of-pediatrics-who-leads-and-who-follows/

Trumbull, D. (2013). Bullying at school: Never acceptable. Retrieved from http://www.acpeds.org/the-college-speaks/position-statements/societal-issues/bullying-at-school-never-acceptable

Turban, J. (2017, May 8). The American College of Pediatricians is an anti-LGBT group [Blog post]. Retrieved from https://www.psychologytoday.com

Vergara v. California. (n.d.). Retrieved from http://studentsmatter.org/case/vergara/

Vergara v. State of California. (n.d.). Retrieved from http://www.cta.org/Vergara

Walker-Smith v. General Medical Council. (2012). EWHC (UK).

Walraven, A., Brand-Gruwel, S., & Boshuizen, H. (2009). How students evaluate information and sources when searching the World Wide Web for information. Computers & Education, 52, 234–246.

Warren, J. E. (2011). Generic and specific expertise in English: An expert/expert study in poetry interpretation and academic argument. Cognition and Instruction, 29, 349–374.

Wathen, C. N., & Burkell, J. (2002). Believe it or not: Factors influencing credibility on the web. Journal of the American Society for Information Science and Technology, 53, 134–144.

Wiley, J., Goldman, S. R., Graesser, A. C., Sanchez, C. A., Ash, I. K., & Hemmerich, J. A. (2009). Source evaluation, comprehension, and learning in Internet science inquiry tasks. American Educational Research Journal, 46, 1060–1106.

Wineburg, S. (1991). Historical problem solving: A study of the cognitive processes used in the evaluation of documentary and pictorial evidence. Journal of Educational Psychology, 83, 73–87.

Wineburg, S. (1998). Reading Abraham Lincoln: An expert/expert study in the interpretation of historical texts. Cognitive Science, 22, 319–346.

Wineburg, S., Breakstone, J., McGrew, S., & Ortega, T. (2016). Evaluating information: The cornerstone of civic online reasoning. Stanford, CA: Stanford History Education Group.

Wineburg, S., & McGrew, S. (2016, November 1). Why students can't Google their way to the truth. Education Week, 36(11), 22. Retrieved from https://www.edweek.org/ew/articles/2016/11/02/whystudents-cant-google-their-way-to.html

Wolf, M. (2007). Proust and the squid: The story and science of the reading brain. New York, NY: HarperCollins.




Cite This Article as: Teachers College Record, Volume 121, Number 11, 2019, pp. 1–40. https://www.tcrecord.org ID Number: 22806.

About the Authors
  • Sam Wineburg, Stanford University
    SAM WINEBURG is the Margaret Jacks Professor of Education and (by courtesy) History at Stanford University. His latest book is Why Learn History (When It's Already on Your Phone), published by the University of Chicago Press.
  • Sarah McGrew, University of Maryland
    SARAH MCGREW is an Assistant Professor in the Department of Teaching and Learning, Policy and Leadership at the University of Maryland, College Park. She studies how young people reason about online information and how teachers can support students to thoughtfully engage with digital content. Her recent publications include "Can Students Evaluate Online Sources? Learning from Assessments of Civic Online Reasoning" in Theory & Research in Social Education.
 
Member Center
In Print
This Month's Issue

Submit
EMAIL

Twitter

RSS