Information literacy as a factor of success at lateral reading

Shifting the responsibility for trust. Literacy research. Features of vertical and lateral reading. The impact of fake news on audiences. Use of web search during a validation task. Use of lateral reading by respondents with a high level of information literacy.

Category: Sociology and social studies
Type: degree thesis
Language: English
Date added: 28.08.2020
File size: 267.8 K


FEDERAL STATE AUTONOMOUS EDUCATIONAL INSTITUTION

FOR HIGHER PROFESSIONAL EDUCATION

NATIONAL RESEARCH UNIVERSITY HIGHER SCHOOL OF ECONOMICS

St. Petersburg School of Social Sciences and Area Studies

Bachelor's project

Information Literacy As a Factor of Success at Lateral Reading

Aleksandr Pavlovich Nikulin

Saint Petersburg 2020

Contents

Introduction

1. Motivation and Research Objectives

2. Previous research

2.1 Credibility definition

2.2 Literacy: Digital & Information

2.3 Fact-checking task

3. Results

3.1 Information Literacy Test

3.2 Discussion and Conclusion

References

Appendix

Introduction

In August 2019, a fake post began to spread on Instagram, promising that reposting it would protect the user's account from new terms of use. Referring to some fake media, it claimed that Instagram would soon change its privacy policy so that images from profiles could be used against users in court cases. To avoid this, the user had to repost the text of the message, thereby "forbidding" Instagram to use his or her images and threatening legal action otherwise. This is an example of an online hoax that has been debunked many times (Mikkelson, 2012). This time, however, it was taken for real by many influencers, i.e., users with a large number of subscribers, mostly from the fashion industry (Ilchi, 2019). It seems surprising that a long-known hoax could so easily trick so many influencers and opinion leaders. Among others, the US Secretary of Energy reposted it in his personal account.

The previous example is a relatively harmless fake, but there are more serious cases. For example, in June 2019, against the backdrop of the political crisis in Sudan, accounts appeared on Instagram claiming that for each subscription and repost, one meal would be sent to starving civilians, which was not true (Lorenz, 2019). One of these accounts gained 400,000 subscribers in just one week.

These accounts also spread misinformation, and many simply changed their nicknames to attract more subscribers. The proliferation of such accounts creates real problems because it interferes with real charities, many of which do not have Instagram accounts or get lost among the impostors. There are many such examples, and fake news in general reaches a larger audience than the truth (Vosoughi et al., 2018). But why does this happen? One of the key factors is the general level of digital literacy and critical thinking in a society (Pennycook & Rand, 2019b).

1. Motivation and Research Objectives

Digital literacy is an extension of the concept of literacy defined as the ability to access information in its various forms, analyse, evaluate and create new messages (Koltay, 2011). Unlike conventional literacy which refers mainly to information in printed form, digital literacy extends to the critical use of digital tools. Researchers distinguish several types of literacy, including information literacy, news literacy, and media literacy, whose definitions overlap greatly. A key part of any literacy is critical evaluation of information and its reliability. Hence, in this work I will be using the terms “information literacy” and “digital literacy” interchangeably.

Research shows that fake news spreads less effectively among people with higher levels of digital literacy and education in general (Afassinou, 2014). A higher level of digital literacy is also associated with a weaker negative impact of the Internet on adolescents and with lower risks when seeking health information online. Low levels of digital literacy, by contrast, can lead to reduced participation in many important areas of society, compounding the impact of the digital divide (Deursen, 2017). Therefore, increasing digital literacy and credibility assessment skills is an important public concern, both as a way to counter the spread of misinformation and as a means to curb the digital divide.

The spread of fake information can be countered from several directions. It can be prevented on the platform side by automatic machine learning methods or by fact-checking labels and warnings (Henry et al., 2020). It is also possible to use the help of crowdsourcing platforms. These methods have their pros and cons (Walter et al., 2019). However, in this work I argue that, based on past research, the most effective way to curb fake information is the digital literacy approach, which consists in increasing users' critical ability to adequately assess information credibility.

I am therefore primarily interested in various digital literacy measures and their importance in relation to information credibility evaluation. The main purpose of this work is to test the new heuristic of information credibility validation proposed by Wineburg & McGrew (2017) and called “lateral reading”, as well as to better understand what factors of digital literacy can help in the successful use of the new heuristic and critical information evaluation.

Concisely, the main idea of lateral reading is to assess the credibility of a source not by its internal characteristics or text, but by what the rest of the web says about it, opening many additional links, i.e., "reading laterally". In addition to testing the new heuristic, the secondary goals of the paper are to evaluate the level of information literacy of Russian university students and to estimate the relationship between information literacy, the use of lateral reading, and critical reading of the news. I am interested not only in the successful identification of fake news, but also in the possible contributing factors of digital literacy. Thus, the main questions are:

RQ1: How will respondents approach the fact-checking problem depending on their level of information literacy?

RQ2: Is the information literacy level positively associated with success in the use of lateral reading?

The main hypotheses are:

H1: Respondents with higher information literacy levels will be more successful in fact-checking and critical evaluation of the source's credibility.

H2: Respondents with higher levels of information literacy will use lateral reading more often than others.

H3: Those who use lateral reading are more successful in a task that requires fact-checking information on the Internet.

A recent study (Breakstone et al., 2018) proposes not to teach people a single most effective method of estimating the validity of information but to help them learn a variety of skills and the ability to combine different approaches, thus increasing the robustness, efficiency, and engagement important for critical thinking. It is hypothesized that the relationship between the adequate assessment of information credibility and information literacy is not direct but depends on other factors, as shown in Figure 1: people with higher levels of information literacy use more tools when assessing information, thereby increasing their success. In this work, I have also tried to test this relationship.

Figure 1. A causal model of the relationship between information literacy and credibility evaluation.

The main contribution of this study is a better understanding of the factors of success at credibility evaluation, as well as a comparison of the new heuristic to assess the credibility of the information with more traditional methods. I also tested a model for the relationship between information literacy and credibility evaluation. Based on the results, recommendations can be made for the development of training programs for laypersons, which can be used in digital literacy courses and civic education. As an additional result, I collected a dataset on the level of information literacy among Russian students.

2. Previous research

2.1 Credibility definition

The concept of credibility has been studied for quite a long time: the first studies concerned newspapers, then television, and in the 2000s the concept found application in the context of the Internet (Flanagin & Metzger, 2000). There are many definitions of information credibility, but the most common one treats credibility as believability or trustworthiness. Information is considered trustworthy when it seems objective, fair, and reliable. Information credibility is also a major aspect of information quality (Hilligoss & Rieh, 2008). However, information credibility depends on a subjective evaluation, and perceived credibility can be influenced by many factors, such as a website's internal characteristics, design, and loading speed (Wathen & Burkell, 2002), text style (Bromme et al., 2015; Thomm & Bromme, 2012), or argument strength (Li & Suh, 2015). Therefore, in this paper perceived credibility is used as a proxy for a person's ability to correctly determine the truthfulness of a text, on the assumption that his or her opinion and the ground truth are highly correlated. Media studies divide credibility into message credibility and source credibility, while the main factors manipulating credibility perception fall into three types: (1) source characteristics, (2) message characteristics, and (3) the medium or channel (social media, for example) (Wathen & Burkell, 2002).

Shifting the Credibility Responsibilities

While the number of Internet users grows steadily, so does the amount of information online, yet the attention paid to each publication decreases. In 2019 alone, the number of people using the Internet increased by 366 million new users, while the total number of users exceeded half the world's population (Hootsuite & We Are Social, 2019). Not only the number of users is increasing, but also the amount of information produced and its importance. Recent surveys show that 68 percent of adult Americans received news through social media in 2016, with similar trends in other countries (Gottfried & Shearer, 2016). Among adolescents, the reported percentage is higher, about 78 percent, and higher still for some social media networks. Moreover, teenagers use an increasing variety of social media, which further contributes to attention diffusion (NW et al., 2018). The average time that users spend on one post is falling (Nielsen, 2011). Thus, users' attention span decreases, which may have a negative effect on critical thinking (Greenfield, 2009), the lack of which is an important factor in susceptibility to fake news (Pennycook & Rand, 2019b).

The Internet, and especially social networking sites, has become the main source of news and content, thus increasing the risks associated with the distribution of incorrect, false, or propagandistic information. As earlier researchers noted, unlike traditional media with strict moderation and sufficiently transparent centralization, early Internet users did not have established ways to check the accuracy of information: publications on the Internet did not go through an editorial stage, while the Internet itself was decentralized and more anonymous. In other words, the responsibility for content verification passed to consumers rather than producers, as it had been before (Flanagin & Metzger, 2000). On the whole, the border between information producers and consumers has become less clear.

To date, much has changed for the better in the quality of online information, but new challenges are arising. Serious media outlets have appeared on the Internet whose activities are regulated by law and whose materials are thoroughly edited and resonate in society. However, as technology has evolved, it has become easier to create fakes on a much larger scale using algorithms such as automatic fake news generators (Fitch, 2019; Knight, 2019) or deepfakes (Chesney & Citron, 2018), and to distribute them by exploiting personalized news feeds on social networks or by creating botnets (Bastos & Mercea, 2017; Shao et al., 2018). At the same time, a recent study shows that the overall level of consumption of fakes is quite low, accounting for about 0.15% of the total daily media diet of Americans (Allen et al., 2020). Unfortunately, there is no similar research for other countries.

2.2 Literacy: Digital & Information

There are several concepts related to the use of the Internet and digital tools, the main ones being digital literacy, news literacy, information literacy, and media literacy. I follow Koltay (2011) in distinguishing between them. For this paper, the most relevant are "digital literacy" and "information literacy".

Information literacy is defined as the ability to understand, locate, verify and use information (Koltay, 2011). This concept emphasizes the ability to navigate and find information, which takes on additional meanings in the context of the Internet.

Information literacy is a broader concept that is much easier to use in practice because it allows for a more objective assessment through task completion rather than self-reported measures. In this paper, these two terms are used interchangeably, with the specific emphasis that both of them imply the ability to critically assess information. Despite the similarity of the concepts, recent research shows that information literacy, not digital literacy, is a significant predictor of the ability to identify fake news (Jones-Jang et al., 2019). Older studies have argued in favor of digital literacy, though. Therefore, the jury is still out on which aspects of online literacy are the most important and reliable.

Literacy Rate

Several generations of digital natives have grown up in the last three decades, who have been familiar with the way the Internet works since childhood. One might think that the ease with which they navigate in the digital environment is directly and positively related to the level of their digital literacy, including credibility evaluation, but studies show that this is not the case (Helsper & Eynon, 2010).

In fact, the new generations have problems with assessing credibility, just like digital immigrants. For example, a study of more than a thousand students (Hargittai et al., 2010) found that for young adults, search engines are paramount in evaluating credibility: in selecting a site and evaluating its trustworthiness, they trusted its rank in the search engine output more than the author's credentials. The latter were checked by only about 10 percent of the students, thus shifting responsibility for information credibility onto the search engine. Pan et al. (2007) came to similar results, showing that young adults often click on websites not because those are the most relevant to their query or task, but because they occupy a higher position in the search output, ignoring the abstracts provided by the search engine. Young adults tend to judge a site's credibility by its appearance (Agosto, 2002), its user-friendly interface, and its suitability for their search needs via the "keyword-matching heuristic", rather than by considering the source of the information (List et al., 2016) and corroborative evidence across different sites (Wiley et al., 2009). See Wineburg & McGrew (2017) for a detailed review.

A recent study shows that little has changed in the past decade. A survey of 3,446 students found that most of them could not tell the difference between news stories and ads and could not see why a link between a site about climate change and the fossil fuel industry could affect its credibility. Instead of trying to figure out who was behind the site, they mainly focused on its internal characteristics such as appearance, the "about us" page, and the site domain (Breakstone et al., 2019). In other words, when estimating credibility, users tend not to follow the most effective strategies but rely on seemingly objective indicators at hand which are, in most cases, internal to the site and can be manipulated.

The ability to differentiate reliable information from unreliable remains a crucial skill in the information age. Higher skills of information credibility evaluation are a significant predictor of how successful a learner will be in tasks that require searching for and processing information over the Internet (Barzilai & Zohar, 2012; Wiley et al., 2009). Such skills are also needed in everyday life as an important component of political and civic life. According to King (2019), there is plenty of evidence that social media can have a major impact on election and voting results through focused targeting and fake news. This is even more relevant where there is political polarization and people tend to believe more in fake news supporting their preferred candidate (Allcott & Gentzkow, 2017). Accordingly, a higher level of digital literacy is positively correlated with a critical reaction to social media campaigns and critical political engagement. As King (2019) and other papers conclude, the key determinant of democracy and equality is "thinking critically about information and knowledge". If digital literacy skills are low, participation in many important areas of society, such as political, economic, cultural, and health, can shrink (Deursen, 2017).

Above and beyond politics, health information is another area for which having sufficient digital literacy is crucial as inaccurate health-related information can cause harm (Diviani et al., 2016). At the same time, however, the Internet currently does not provide reliable health information for individuals (Daraz et al., 2019). Yet, researchers observe that most users do not question the quality of existing online health information or rely on evaluation criteria that are not recommended by recognized web quality guidelines (Diviani et al., 2016). When used correctly, however, the Internet can provide information on safe health practices and, in some cases, enhance critical thinking (Goldstein & Usdin, 2017; Strasburger et al., 2010).

Therefore, one of the major challenges for society today is to develop effective strategies for evaluating online information as part of digital literacy, and to create training programs based on them.

Fighting the Fakes

Fake information is a problem for platforms, editors, and, in the long run, the audience. It is possible to fight misinformation on the platform side; for example, several major platforms recently banned the publication of deepfakes (Bickert, 2020; Peters, 2020). However, not all platforms are equally successful in doing so: while the number of interactions with false content on Facebook has been decreasing since 2016, it is increasing on Twitter, although both platforms are taking steps to combat false information (Allcott et al., 2019). It is difficult to say why this is the case. Platform changes are always implemented post factum because it is hard to predict new ways to bypass the platform's limitations, and when such ways appear, the response is often slow enough to let the fake news go viral (Funke, 2018).

Moreover, studies show that platform-provided fact-checks of fake news almost never reach the consumers of such news (Guess et al., 2018). Warnings and fact-check tags in the feed may also have a side effect, reducing confidence even in reliable headlines (Clayton et al., 2019), while generally having a positive effect in reducing misperceptions (Bode & Vraga, 2015; Walter et al., 2019).

One way to implement massive alerts and fact-checks is offered by crowdsourcing platforms based on public opinion. Recent research shows that this approach can provide a quality of credibility evaluation comparable to that of professional fact-checkers (Pennycook & Rand, 2019a), suggesting that, on average, society is able to recognize fakes. Sometimes a backfire effect can be observed, when people confronted with facts become even more convinced that they are right; however, this happens quite rarely (Wood & Porter, 2019). The crowdsourcing method does not help at the individual level, though. Moreover, it cannot help people with low levels of digital literacy when they search for and validate information, as most platforms provide no such warnings.

Thus, the most accessible and flexible way to assess credibility rests with users themselves, especially outside popular platforms where there is no centralised control over the quality of information. As a consequence, there is a need to raise the general level of digital literacy.

Reading: Vertical & Lateral

The most popular method of information evaluation is the checklist approach, or vertical reading, which is prone to mistakes when evaluating online materials. Many modern recommendations and recognized guidelines are based on it, such as the CRAAP (currency, relevance, authority, accuracy, purpose) technique and its variations (Detar, 2018). Most such lists rely on internal characteristics of the source (except for authority), which are easy to manipulate, so this approach has been criticized (Meola, 2004): it is possible to find sites that pass this test completely but whose information is not reliable (Breakstone et al., 2018). This mirrors the way young adults in the studies above assess credibility by the internal characteristics of the site or the search engine output, which research shows is not effective enough.

Professional fact-checkers use lateral reading rather than checklist techniques in their work, and they are more effective at assessing the credibility of information than even PhDs in history who work with information sources professionally (Wineburg & McGrew, 2017). Lateral reading allows them to quickly and efficiently decide how much credibility to give a source without actually reading the source itself, once they find out what the rest of the web has to say about it. In addition to better fake identification, this approach has a broader rationale: searching for proof in other sources has an effect similar to using crowd wisdom, which, according to research (Pennycook & Rand, 2019a), can be compared to the level of professional fact-checkers.

When there is not enough time to study a source, reading it in full may be the least effective way to evaluate credibility: there are many fact-checking resources on the web, and it is faster to verify individual facts there and form an overall opinion on credibility. It may then turn out that there is no need to read the whole text or, on the contrary, that it is worth spending more time on it. In this way, lateral reading makes it possible to clarify the context around the source and the value of the information obtained.

Thus, the checklist approach is not the best way to check information for validity, and professionals using lateral reading are more successful than others (Breakstone et al., 2018; Wineburg & McGrew, 2017). Wineburg and McGrew (2017) conclude that professionals should be imitated when creating training programs on information credibility evaluation, and that people should be taught to read laterally first of all. However, there is no direct evidence, other than the Wineburg & McGrew working paper, that fact-checkers' success is due only to the new approach and not to other factors such as experience, differences in cognitive abilities, or the choice of assessment tasks. It is possible that the effectiveness of different approaches also varies depending on the type of task or other control variables.

Methods

The first step of this study was to choose a measure with which to evaluate respondents' level of information literacy, as well as to define a fact-checking task by selecting a case whose credibility is precisely known but which nevertheless causes many disputes in the media. Here, the main topic of the task was the "climategate" email controversy. In addition to the fact-checking task and the information literacy test, it was necessary to measure control variables known from the literature. The second step was to sample respondents for an online survey and collect the data. Finally, the collected data were analysed to test the hypotheses.

Measures

The online survey consisted of three sections (Figure 2), each containing no more than six questions and taking 5-10 minutes to complete. The completion time was not accounted for in the models. Before sending out the survey, I ran several supervised pre-tests to correct errors and bugs and to add explanations where necessary. At the beginning of the survey, its purpose was explained to respondents, and the anonymity and security of their personal data were stated. It was specifically mentioned that, when taking the survey, respondents could use any source, including web search.

Figure 2. The Survey structure

Controls

According to several studies, the evaluation of information credibility may be influenced by demographic variables, so questions were asked about age, gender, university, and year of study, which may also be related to information literacy (Podgornik et al., 2016). No questions were asked about political views; they can have an impact, but only for conservative people (Walter et al., 2019), and the ability to recognize fakes is linked more to analytical thinking than to political bias (Pennycook & Rand, 2019b).

Information literacy

The Information Literacy Test (ILT) proposed and tested on students by Podgornik et al. (2016) was used to measure respondents' information literacy. However, the original ILT consists of 40 questions and takes an average of 30 minutes to complete; given that it was only one part of the survey, this was too long. Therefore, I followed Jones-Jang et al. (2019), who used this questionnaire in a reduced form of six questions. I selected six questions from the test that did not overlap in topic and were not related to working with scientific citations. I also translated the questions into Russian, as the original scale was in English. I updated some of the answers in the translation, e.g. replaced the daily newspaper with a news aggregator, as the latter is a more usual source of news for the student population targeted in this study.

Each ILT question tests the knowledge of an information literacy competence according to the ACRL IL standard, such as determining what information is needed and accessing, finding, using, and understanding information (Table 6).

Table 1. The percentage of correct answers to questions.

#   Question                                                                  Mean
1   The most reliable, proven, concise and comprehensive description of
    an unknown specialized concept can be found in ______.                    0.47
2   At university, you were asked to describe in writing the impact of
    human activity on climate change. After the first search query, you
    received a huge number of results. What should you do next?               0.72
3   In which list are the sources of information correctly ordered from
    the least to the most officially recognized and verified?                 0.65
4   Which of the listed data is "raw", unprocessed?                           0.41
5   Which statement about GMOs (genetically modified organisms) is not
    the author's personal opinion?                                            0.83
6   What is the best way to describe a commercial claiming that sunflower
    oil produced by a certain brand does not contain cholesterol?             0.81

Each question offers a choice of four options, only one of which is correct. A correct answer was coded 1, a wrong answer 0. For example, one of the questions asks: "In which of the following lists are the sources of information correctly ordered from the least to the most officially recognized and verified?" Respondents are then given four different options, where the correct one is "blog, news aggregator, popular science publication, scientific journal". To create an index, the correct answers were summed across questions. However, not all questions were positively correlated with each other (Figure 3).

Figure 3. Cross-correlation between questions in the Information Literacy Test.

Questions 1 and 2 showed a negative correlation with most of the other questions, which also lowered the reliability of the scale (α = 0.3). Therefore, the negatively correlated questions were not used for the index (M = 2.72, SD = 0.99, range 0-4). The low reliability of the scale may be explained by the choice of the most non-overlapping questions for the short version of the ILT. Jones-Jang et al. (2019) likewise observed a low α value of 0.52; the authors considered that in their case discriminant validity was more important for the study. The whole questionnaire with correct answers and translations can be found in the Appendix.
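The index construction described above can be sketched as follows. This is a minimal illustration with hypothetical answer data, not the actual survey dataset; the item-selection rule (dropping items whose average correlation with the rest is negative) is an assumption about how the filtering was done:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of 0/1 answers."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 0/1 answers of six respondents to the six ILT questions
# (illustrative only; not the survey data).
answers = np.array([
    [1, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 1, 0, 0, 1],
    [1, 0, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
])

# Scale reliability; alpha is low (even negative) when items correlate negatively.
alpha = cronbach_alpha(answers)

# Cross-correlation between questions (cf. Figure 3); columns are variables.
corr = np.corrcoef(answers, rowvar=False)

# Drop items whose average correlation with the rest is negative
# (in the study, questions 1 and 2), then sum the rest into the index.
k = answers.shape[1]
keep = [j for j in range(k) if (corr[j].sum() - 1) / (k - 1) >= 0]
index = answers[:, keep].sum(axis=1)
```

With real data, the resulting `index` would be the 0-4 information literacy score used in the analyses.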

2.3 Fact-checking task

In the original study of lateral reading, Wineburg and McGrew (2017) asked respondents to perform six tasks: to estimate the credibility of a site, to find information on request, and to compare several sites with one another. The entire survey took over 30 minutes, which, based on pre-tests, I estimated was too long for the online survey format. As a result, I decided to keep only one task, in which respondents read a fragment of text about global warming and then evaluated its credibility on a scale from 0 to 10, where the correct answer was 0. During data processing, the scale was reversed so that a value of 10 means "not reliable": the higher a respondent scores on this scale, the more correct his or her answer. In addition, respondents were asked to indicate, with multiple answer choices, which sources they used to make their decision. This made it possible to track how respondents approached the assignment, which was useful for later analysis and helped test one of the hypotheses.
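The recoding step above amounts to a one-line transformation; the scores below are hypothetical, not taken from the actual data:

```python
# Hypothetical raw credibility ratings on the original 0-10 scale,
# where 0 was the correct answer ("the fragment is not reliable").
raw_scores = [0, 3, 10, 7, 1]

# Reverse the scale so that 10 now stands for "not reliable":
# the higher the reversed score, the more correct the answer.
reversed_scores = [10 - s for s in raw_scores]
print(reversed_scores)  # → [10, 7, 0, 3, 9]
```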

As the case for the fact-checking task, I selected a fragment of an article on global warming published in Novaya Gazeta by the journalist and columnist Yulia Latynina. Novaya Gazeta is known for its investigations, such as the persecution of gays in Chechnya (Gordienko & Milashina, 2017) or the suicides of school pupils and "death groups" on the Vkontakte social networking site (Mursalieva, 2016). It is also known for its liberal and oppositional agenda.

After its publication, the article resonated in the Russian media, and some outlets responded with criticism (Ivanov, 2020). The main theme of the article, and of the fragment offered to the respondents, was an attempt to debunk the "hockey stick", the often-criticized graph that presents one of the most visible pieces of evidence of global warming. The article criticizes the conclusions scientists have drawn from temperature change graphs and questions the anthropogenic causes of climate change and the ethics of climate scientists.

As one of its main arguments, the article cites the emails of scientists that became available during "climategate", the scandal associated with a leaked archive of electronic documents and other data from the Climatic Research Unit (CRU) at the University of East Anglia. The messages, taken out of context, are often used by climate change denialists. Numerous committees were convened and found no cases of unethical conduct, and some major media later apologized (Romm, 2010). Moreover, even the possible discrediting of several scientists would not negate the body of data accumulated by climate scientists, among whom there is broad consensus (Cook et al., 2016). The full text of the task can be found in the Appendix.

Thus, the respondents were supposed to read the fragment and then form an opinion on how accurately the text portrays the situation around Climategate. The reference to the original publication was kept, and an explanation of Climategate and the "hockey stick" graph was provided. In the task description, the respondents were asked to use web search to complete the task and clarify unclear terms, and then to base their opinion on facts from different sources. Given the one-page length of the proposed fragment, the most advantageous strategy would be not to read the text first but to look up official information on Climategate, that is, to use lateral reading. Respondents who did so would evaluate the fragment's credibility as low or null; respondents who turned to other strategies of credibility evaluation, such as source credibility (of the newspaper or the author), would rate the text's credibility much higher.

Participants

Students of Russian universities took part in the online survey (N = 140). The data were collected in April 2020 using Google Forms, with recruitment through thematic university communities and chats on the VK social network, so the sample was not random. Students were selected mainly because the ILT was developed for testing students and because earlier works used a similar design.

The average age of respondents was 20.57 years (SD = 1.84). Respondents came from 19 different universities, but the majority studied at three of them: National Research University Higher School of Economics, HSE (n = 61), St.Petersburg State University (n = 30), and St.Petersburg Peter the Great University (n = 10). The top five universities are listed in Table 2.

Table 2. Top 5 universities in the study.

University | N
National Research University Higher School of Economics (St.Petersburg) | 61
St.Petersburg State University | 30
St.Petersburg Peter the Great University | 10
Chernyshevsky State University | 6
ITMO University | 3

There is also a gender imbalance in the data: only 30 percent of respondents marked their gender as "male", while 70 percent marked "female" (Figure 4). This imbalance can be explained not only by the non-random sample but also by the existing imbalance at the HSE, where similarly disproportionate ratios are observed. There were no significant gender differences in ILT scores, however, so this should not be a major problem. On a scale from 1 to 5, where 5 denotes master's students, most respondents were in their fourth year of study (M = 3, SD = 1.31).

Figure 4. Age distribution by gender.

3. Results

3.1 Information Literacy Test

As can be seen in Table 1, the most difficult item for respondents was Question 4 ("Which of the listed data is "raw", unprocessed?"). Only 41 percent of respondents answered correctly. The second most frequently chosen answer was "data on population growth presented in tables", selected by 28.8% of respondents. The next most difficult item was Question 1, which asked for the source offering the most verified definition of a new concept. Only 46.8% of respondents chose the correct option ("lexicon or encyclopedia"); the next most popular one was "scientific article" (34.5%).

Questions 5 and 6 were the easiest to answer, with 81% and 83% correct answers, respectively. At the same time, the last question required some knowledge and erudition about cholesterol and where it occurs, so it was not simple. Perhaps because of its more complex formulation, people used web search to answer it, which produced the highest score. Also, only 65% of respondents were able to choose the right answer in Question 3, which required sorting sources by the degree of credibility of their information. Most of the mistakes were made when choosing between a "blog" and a "news aggregator": a news aggregator imposes far stricter requirements on the selection of information and the moderation of sources than a blog, where the author expresses a personal opinion.

In general, the average ILT score is much higher than in the study by Jones-Jang et al. (2019), where the maximum share of correct answers did not exceed 0.53. The distribution is left-skewed (Figure 5). This may be explained by sampling from students, as many universities now offer courses on digital literacy. The ILT score also correlates with age (r = 0.27, p = 0.001) and the year of study (r = 0.19, p = 0.02), but not with gender (p = 0.8).
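The correlations reported here are plain Pearson coefficients. As a minimal sketch (with made-up numbers, not the survey data), the coefficient can be computed directly from its definition:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ages and ILT scores, not the survey responses
ages = [18, 19, 20, 21, 22, 23]
ilt = [3, 4, 4, 5, 5, 6]
r = pearson_r(ages, ilt)
```

With real data, `scipy.stats.pearsonr` would also return the p-value alongside the coefficient.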

Figure 5. Information Literacy Test score distribution.

Fact-checking task

A value close to 10 means that the respondent marked the text as untrustworthy; 0 means that the text was estimated as trustworthy. The average is close to the middle point of 5 (M = 4.71, SD = 2.12). Males are more skeptical and assign lower credibility to the text, but the relationship is not statistically significant (p = 0.053), which is probably explained by the lower proportion of males in the sample (n = 52). Relationships between the year of study (r = 0.15, p = 0.06), age (r = 0.13, p = 0.13) and the credibility rating were not significant, either.

Figure 4. Credibility rating distribution by gender.

Most respondents relied on the internal features of the text, intuition, or erudition and background knowledge to evaluate text credibility. Only 35.9% of respondents used web search to answer the question.

Multiple choices were available in the question on the sources used (Table 3). On average, respondents used two sources to evaluate credibility (SD = 0.89). The most frequent combination was internal features and intuition (13.1% of respondents); internal features together with intuition and background knowledge were used by 9.49%; internal features alone by 12.4%; and web search alone by only 4.3%.

The number of sources used does not depend on gender (p = 0.3), age (r = 0.05, p = 0.5) or year of study (p = 0.2). There are also no significant differences within the groups by the source used.

Table 3. Sources used by respondents to evaluate text credibility.

Source used to answer the question | Percentage
Personal school and professional knowledge | 39.6
Intuition | 54.0
Web search | 35.3
Internal features within the text | 64.0
Credible source | 7.2

The credibility assessment approach differs depending on the ILT result. The number of sources used correlates positively with the ILT score (r = 0.31, p = 0.0002); that is, respondents with a higher level of information literacy used more sources to evaluate text credibility. A one-way ANOVA shows a significant difference in ILT scores between respondents who used one source and those who used two (p = 0.01) or three (p = 0.0005), as shown in Figure 5. However, this may be explained by the small size of the group that used only one approach.

Figure 5. Difference in number of approaches used during the fact-checking task by ILT score.
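The group comparison can be sketched as a plain one-way ANOVA. The data below are hypothetical ILT scores for three groups (one, two, and three approaches used), not the survey responses:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of samples."""
    values = [v for g in groups for v in g]
    n, k = len(values), len(groups)
    grand_mean = sum(values) / n
    group_means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (gm - grand_mean) ** 2
                     for g, gm in zip(groups, group_means))
    ss_within = sum((v - gm) ** 2
                    for g, gm in zip(groups, group_means) for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical ILT scores grouped by number of approaches used (1, 2, 3)
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```

In practice `scipy.stats.f_oneway` gives the same F statistic along with the p-value, and the pairwise group differences would then need a post-hoc test such as Tukey's HSD.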

To understand which factors make the greatest contribution and to account for control variables, a linear model was built (Table 4). According to the model, the ILT score is a significant predictor of the number of sources used during credibility evaluation, while the control variables are not significant and do not affect the dependent variable. Each additional ILT point increases the number of approaches used by 0.29.
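The reported coefficient comes from an OLS model with controls. As a simplified sketch of the bivariate case (invented numbers, one predictor, no control variables), the slope can be recovered from the normal equations:

```python
def ols_simple(x, y):
    """Intercept and slope of y = b0 + b1*x by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    return my - b1 * mx, b1

# Hypothetical data where each extra ILT point adds ~0.3 approaches
ilt_scores = [2, 3, 4, 5, 6]
n_approaches = [1.6, 1.9, 2.2, 2.5, 2.8]
b0, b1 = ols_simple(ilt_scores, n_approaches)
```

The full model in Table 4 would instead be fit with a multiple-regression routine such as `statsmodels.api.OLS`, which also reports the significance of each predictor.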

In order to test Hypothesis 2, I also compared the groups of respondents who did and did not use web search during the fact-checking task. Since the ILT score is not normally distributed, a Wilcoxon rank-sum test for independent samples was used, with the alternative hypothesis that the group that did not use web search has a lower average ILT score. According to the test, respondents who did not use web search during the task have a statistically significantly lower ILT score on average than those who did (p = 0.02), although the effect size is small (r = 0.2). I then checked Hypothesis 3 in a similar way by comparing credibility scores between the web search usage groups, but found no significant difference (r = 0.04, p = 0.5).
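The comparison can be sketched with a normal-approximation rank-sum test and the effect size r = z / √N. The groups below are hypothetical scores, and the implementation ignores tie correction for brevity:

```python
import math

def rank_sum_test(group_a, group_b):
    """Normal-approximation Wilcoxon rank-sum (Mann-Whitney) test.
    Returns the z statistic and the effect size r = z / sqrt(N)."""
    pooled = sorted(group_a + group_b)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # assumes distinct values
    n1, n2 = len(group_a), len(group_b)
    # Mann-Whitney U from the rank sum of the first group
    u = sum(rank[v] for v in group_a) - n1 * (n1 + 1) / 2
    z = (u - n1 * n2 / 2) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return z, z / math.sqrt(n1 + n2)

# Hypothetical ILT scores: group that skipped web search vs group that used it
z, r_effect = rank_sum_test([1, 2, 3], [4, 5, 6])
```

For real data with ties, `scipy.stats.mannwhitneyu` handles the tie correction and exact p-values.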

Table 4. OLS regression predicting number of sources used during fact-checking task.

There is no significant relationship between the ILT score and the final text credibility rating (r = 0.08, p = 0.34), nor between the number of sources used during the task and the final rating (r = 0.04, p = 0.64). The credibility score also appears to be the same on average across the groups (Figure 6).

Figure 6. Credibility rating by number of sources used.

For a more detailed result, a linear model was built (Table 5). Depending on the cross-correlations, it is possible to remove several variables from the index and obtain a different result (Model 2.1 & Model 2.2), but neither model is significant, and both have a very low coefficient of determination. Therefore, I cannot speak of a confirmation of Hypothesis 1. Comparing Model 1 and Model 2, the mediation effect can be estimated with Causal Mediation Analysis, using a bootstrap for the confidence interval; it is not significant (p = 0.98), and the reported average causal mediation effect is low (ACME = 0.003). Hence, I cannot point to a significant mediation between the ILT result and the credibility evaluation result through the number of different approaches used (Figure 1).

Table 5. OLS regression predicting text credibility rating.
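The logic of the mediation estimate can be illustrated as follows. This is a sketch on synthetic data, not the survey: the indirect effect (ACME) is the product a·b, where a is the slope of the mediator on the predictor and b is the partial coefficient of the mediator in the outcome model (recovered here via the Frisch-Waugh trick), with a percentile bootstrap for the confidence interval:

```python
import math
import random

def slope(x, y):
    """OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def residuals(x, y):
    """Residuals of the simple regression y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = slope(x, y)
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

def acme(x, m, y):
    """Indirect effect a*b: a from m ~ x; b is the partial coefficient
    of m in y ~ x + m, obtained by regressing residuals on residuals."""
    a = slope(x, m)
    b = slope(residuals(x, m), residuals(x, y))
    return a * b

random.seed(0)
x = [random.gauss(0, 1) for _ in range(200)]      # e.g. ILT score
m = [2 * xi + random.gauss(0, 0.5) for xi in x]   # mediator, a ~ 2
y = [3 * mi + random.gauss(0, 0.5) for mi in m]   # outcome, b ~ 3
point = acme(x, m, y)                             # indirect effect ~ 6

# Percentile bootstrap for the ACME confidence interval
boot = []
for _ in range(200):
    idx = [random.randrange(len(x)) for _ in range(len(x))]
    boot.append(acme([x[i] for i in idx], [m[i] for i in idx],
                     [y[i] for i in idx]))
boot.sort()
ci = (boot[4], boot[194])  # roughly the 2.5th and 97.5th percentiles
```

The actual analysis relied on a Causal Mediation Analysis procedure (for example, the R `mediation` package); this sketch only shows the shape of the a·b estimate and its bootstrap interval.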

Thus, the results of the analysis confirm Hypothesis 2 (with little effect size), but do not confirm Hypotheses 1 and 3.

3.2 Discussion and Conclusion

Several conclusions can be drawn from the study. First of all, if lateral reading is operationalized as the use of web search during the credibility evaluation task, then respondents with higher levels of information literacy did use lateral reading significantly more often, and they also used more approaches when evaluating information credibility. Contrary to expectations, the analysis did not show a significant relationship between the level of information literacy and success at credibility evaluation. Similarly, the theoretical mediation model suggested at the beginning of this paper was not confirmed. These results provide interesting insights for more nuanced answers to the research questions, but they are not consistent with the literature I relied on in the theoretical framework. The possible reasons are discussed below.

First, the overall results on information literacy and success at assessing credibility should be discussed in more detail. Although most students did well on the information literacy questions, the overall level of information literacy skills remains low, which is consistent with most studies in the literature review. The average credibility score in the evaluation task is higher than the neutral answer, meaning that the majority of respondents found the text accurate, although this is not the case. Moreover, verifying this would have been much easier and more cost-effective than reading the whole text, because the climate change case is quite clear and reliable information and investigations are easy to find through web search.

However, the use of web search to establish text credibility does not directly indicate respondents' ability to analyze information. The low rate of its use speaks more to the fact that, as we already know, people are rational and passive and try to complete tasks with minimal expense, which in this case means using their background knowledge and intuition to save time. This is especially relevant in an online survey, where the researcher cannot control how much time a person spends on the task, which is similar to how a person interacts with information in reality. There is an explanation for this: people often feel that, unlike others, they are far less sensitive to false information (Cohen et al., 1988; Cohen & Davis, 2016), and can therefore rely on their own knowledge, which, as previous studies (Pennycook & Rand, 2019) and the results of this work indicate, may well be incorrect or biased.

The only significant variable among the control questions was age - for predicting the information literacy level, not the credibility score. The older the respondents, the better they answered the Information Literacy Test questions. It seems important to distinguish between the "knowledge-that" which students obtain in their studies - by passing similar tests at university and being more familiar with the very concept of a test - and the "knowledge-how" that students do not receive without personal motivation. This may explain the higher ILT scores, as opposed to the paper by Jones-Jang et al. (2019), where the sample consisted not of students but of people 49 years old on average, who had not had such experience of taking tests, a practice that became commonplace not so long ago. Thus, through their studies students get more of the "knowledge-that" which helps them take tests, but they do not get the "knowledge-how" that is obtained with age via real critical analysis of information. Therefore, assessment through a test may not be a sufficient proxy for measuring the ability of respondents to correctly estimate information credibility.

Respondents successfully answered most of the ILT questions, but some questions caused difficulties. The scores for these questions were also negatively correlated with the other questions, which probably reduced the overall reliability of the survey. It is also interesting that the level of information literacy did not correlate with the credibility score, which challenges the literature (Jones-Jang et al., 2019; Wineburg & McGrew, 2017).

There may be several reasons for this, such as the sampling applied, the adaptation and translation of the survey, or the choice of the fact-checking assignment. Sampling problems do not seem significant, as even the gender imbalance had no significant impact on either the ILT or the credibility score. The sample can also be called fairly homogeneous, because most of the students came from roughly the same universities or city and were of similar age, which means there were no significant differences in the control variables. The choice of the credibility assignment may not reflect the real skills of the respondents; however, even assuming that the use of web search cannot serve as a sufficient indicator of lateral reading, the assignment was similar to research designs tested before (Breakstone et al., 2018; Wineburg & McGrew, 2017) and, at the very minimum, measured the level of erudition and background knowledge, which should be positively associated with the level of information literacy. Therefore, the reason for these contradictory results is likely to be the ILT adaptation or the insufficient sample size, given that the effect size was small.

The effect of mediation, i.e. the idea that credibility score depends on the level of information literacy through a different number of approaches during the analysis of information, has not been confirmed. This does not mean that people should not be taught how to use different approaches when interacting with information on the Internet. The main model predicting the credibility score was not significant; the mediation analysis does not give us much new information, either. I chose the answer options for different approaches based on a description of the observations of how respondents solved problems in Wineburg & McGrew (2017). Perhaps this is not an exhaustive list of approaches, and respondents used some other approaches that I did not specify in the choice options. This could have had a strong effect by reducing the predictive power of the model. In the future, more detailed analysis of how different approaches interact with each other is required - this would be much more useful information than just the number of sources used.

The adapted version of the survey had low reliability, which is consistent with a similar study (Jones-Jang et al., 2019) but is still lower than that reported for similar question sets. In the original study (Podgornik et al., 2016), the full questionnaire had a reliability (α = 0.74) that is considered acceptable. A possible reason for the low reliability of the questions in this study could be the choice of a particular subset of questions, as well as their localization during translation into Russian. This is further supported by the fact that, despite the low reliability of the overall index, individual questions appear related to the competences they test. For example, adding questions related to the ability to search for information, to understand data, and to look up definitions increases the quality of the credibility score predictive model, but decreases the quality of the model predicting the number of sources used. Similarly, if a question related to the ability to search for information is removed from the model, the quality of prediction of the number of sources used increases. That is, there is a positive correlation between individual questions and the competences related to them, but together the predictive power of the index is small. The reasons for the negative correlations within the index are not completely clear. Perhaps, to improve the result, one should take more questions from the original questionnaire or use another, ready-made information literacy scale in the survey language. Initially, the questions were chosen so that the online survey would not take too much time, but most respondents had no difficulty completing it anyway, so more questions could have been added.

Thus, this study was only partially successful, showing that in order to increase the number of different approaches as well as the use of lateral reading, it is necessary to improve the information literacy level in the first place. However, the study failed to reproduce the results of studies of similar design, so it would be incorrect to generalize the conclusions and make recommendations based on two unconfirmed hypotheses.

This research has significant limitations: a sample consisting only of students, which reduces external validity; the format, since a simple online survey without eye-tracking or even screen recording cannot reliably show how exactly the respondents performed the task; and, last but not least, the lack of a pre-test of the adapted scales in Russian, which reduces their reliability. Studies show that even small changes introduced in translation can significantly distort a survey's meaning (Sousa et al., 2017), so further research should properly adapt the survey into Russian and select the questions carefully; back-translation and pilot samples can be used for this purpose.
