
ORIGINAL RESEARCH article

Front. Phys., 26 April 2022
Sec. Social Physics
This article is part of the Research Topic Editor's Challenge in Social Physics: Misinformation and Cooperation.

Exploring the Effect of Spreading Fake News Debunking Based on Social Relationship Networks

  • School of Management, Harbin Institute of Technology, Harbin, China

Fake news spreads rapidly on social networks. The aim of this study was to compare the characteristics of the social relationship networks (SRNs) of refuters and non-refuters to provide a scientific basis for developing effective strategies for debunking fake news. First, based on six types of fake news published on Sina Weibo (a Chinese microblogging website) during 2015–2019 in China, a deep learning method was used to build text classifiers for identifying debunked posts (DPs) and non-debunked posts (NDPs). Refuters and non-refuters were filtered out, and their follower–followee relationships on social media were obtained. Second, the differences between DPs and NDPs were compared in terms of the volume and growth rate of the posts across various types of fake news. The SRNs of refuters and non-refuters and the k-core decompositions of these SRNs were constructed, and the differences in the growth rates between DPs and NDPs were explored. Business-related fake news was found to be debunked most effectively; society-related fake news, the most widely spread in China, was debunked poorly; and science- and politics-related fake news was debunked worst. Additionally, more celebrity accounts, larger node sizes with follower–followee relationships in the SRNs, and more weakly connected components were found to lead to a faster growth rate in the dissemination of posts, regardless of whether the posts were DPs or NDPs. This study can help practitioners develop more effective strategies for debunking fake news on social media in China.

Introduction

Since the 2016 U.S. presidential election, “fake news” has become a common term in the mainstream vernacular [1, 2]. Similar to Western countries, fake news is also prevalent in China [3], and it largely involves online rumors (yao yan in Chinese [4]). Furthermore, with the rapid development of social networking services (SNSs), each user in a social network has become both a spreader and receiver of information, and millions of people present, comment on, or share various topics on social media every day [5]. In addition, the emergence of social media as an information dissemination channel has reduced the gap between content producers and consumers and profoundly changed the way users obtain information, debate, and shape their attitudes [6, 7]. Although authoritative organizations such as the government and news media have made considerable efforts to debunk fake news, social media enables individuals to rapidly distribute fake news through social networks owing to its abundance of users and complex network structure, causing considerable panic in society [8, 9].

Effective debunking can be considered a competition between debunked and non-debunked information. Thus, to reduce and combat the proliferation of fake news on social media, we must better identify the differences in how debunked and non-debunked information spread on social media, make debunked information an effective hedge against non-debunked information, and understand the structure and functions of these technologically advanced social networks [10].

A better understanding of the debunking strategies associated with different types of fake news can help address fake news more specifically on social media. Because most researchers have concentrated on politically sensitive or field-specific fake news on social media, it has been difficult to derive common rules applicable to countering all types of fake news generated within a certain field [11]. Although previous studies have comprehensively evaluated the differences between the spread of facts and that of rumors on Twitter across various topics [12], and between the typical features of rumor and anti-rumor accounts on Sina Weibo [9], only a few studies have thoroughly assessed the differences between various types of debunked and non-debunked fake news based on real data from Chinese social media.

Social media (e.g., Sina Weibo) are information dissemination platforms based on social relationships (follower-followee relationships), wherein information dissemination is closely related to people’s social relationships [13], especially in networks that are large, complex, heterogeneous, and scalable [14]. Thus, people’s social relationships and information dissemination networks are interrelated and mutually reinforcing. The breadth and depth of people’s social relationship networks (SRNs) determine the breadth and depth of the information they obtain and how far the dissemination of this information can spread. Although the methods used for the study of fake news, such as fake news debunking detection (e.g., using features extracted from news articles and their social contexts, such as textual features and users’ profiles [15]) and diffusion network structure analysis (e.g., reposting or commenting networks [16]), have been identified, the ways in which users’ social relationships on social networks influence the spread of debunking messages have not been widely investigated. Additionally, these problems are important to address because, if a debunking methodology for fake news is not shared quickly and widely on social networks, people will fail to combat fake news in a timely and effective manner. Consequently, false information will continue to misguide public opinion on social media [17].

To meet the aforementioned objectives, first, we used a dataset containing 49,278 posts from 176 fake news events published on Sina Weibo from July 2015 to September 2019 and divided the fake news into six topics. Second, we developed a text classifier using deep learning by applying a long short-term memory (LSTM) algorithm to identify debunked posts (DPs) and non-debunked posts (NDPs), filtered the corresponding refuters and non-refuters, and obtained 74,987 follower-followee relationships between these refuters and non-refuters. Third, we analyzed the differences in the volume and growth rates between the DPs and NDPs for each type of fake news by comparing them in terms of the number and cumulative probability distribution of the posts. Fourth, for each type of fake news, we constructed SRNs involving refuters and non-refuters and the k-core decompositions of these SRNs; then, we investigated the proportion of each account type, the network size of the k-core decompositions of the SRNs, and the number of weakly connected components to explore the differences in the growth rates between DPs and NDPs. Finally, we analyzed the reasons for these differences across different types of fake news. Based on these results, we propose personalized and real-time governance strategies to serve as a guide for promoting healthier behavior among social media users and minimize the spread of fake news.

The contributions of this study are significant in both theory and practice. First, unlike previous research that focused on the sharing of fake news [6, 18], in this study, using deep learning and social network analysis methods, we focus on comparing the differences between the debunking and non-debunking of fake news across various topics and systematically construct, compare, and analyze the SRNs of refuters and non-refuters to provide strategies for combating various categories of fake news on Chinese social media at the macro level. Thus, we shift the scholarly focus from the dominant area of fake news information sharing to the emerging area of employing users, and the social relationships among them, to combat various types of fake news on Chinese social media platforms. Second, on a practical level, this study expands the "fake news" literature in China by focusing on various types of day-to-day online rumors on social media drawn from a large volume of fake news datasets. It provides insight to practitioners such as social media managers, government staff, news authorities, and media staff on ways to debunk different types of fake news using targeted and personalized governance strategies.

The remainder of this paper is structured as follows. Related Work introduces work related to this study. Materials and Methods describes the data and methods. Results details the results of the experiments. Discussion presents a discussion on this study, the limitations of this study, and a scope for future work. Conclusion provides the conclusions of the study.

Related Work

Before introducing the study problem, we provide brief remarks on the terminology used. Researchers have provided different interpretations of the definitions and connotations of fake news, misinformation, and rumors; these terms are often used interchangeably in academic research [19]. First, we disregarded the politicized nature that the term "fake news" has acquired, especially since the 2016 U.S. presidential elections. Second, we adopted a broader definition of news on social media: it includes any information (e.g., text, emoticons, and links) posted on social media [12]. Third, we did not consider the intentions of those who posted the online information [2] or the differences between automated social robots and humans [12]. For these reasons, and owing to its useful scientific meaning and construction, we retained the term "fake news," which is used by most researchers, to represent our research objective [1, 8]; this term refers to false news or rumors that authorities (e.g., government agencies, state media, and other authoritative organizations) have determined to be false.

Social media platforms employ different approaches to combat online fake news. The first key strategy is to undermine economic incentives and shift the focus to developing technical solutions that help users make more informed decisions [20], for example, by showing warning messages and relying on fact-checking units [2, 8, 17, 21–24]. In this context, Hoaxy is a platform used for the collection, detection, analysis, and fact-checking of fraudulent online content from various viewpoints [22]. It was tested by collecting approximately 1,442,295 tweets and articles from 249,659 different users [16]. Content from news websites and social media was fed into a database that was updated regularly, and it was analyzed to extract different hidden patterns [22]. Pennycook et al. selected fake news headlines from Snopes.com, a third-party website that fact-checks news stories, to investigate whether warning tags would effectively reduce belief in fake news through a prominent intervention that involved attaching warnings to the headlines of news stories disputed by third-party fact-checkers [24]. Although researchers and online fact-checking organizations are continuously improving their fact-checking measures against the spread of fake news, most fact-checking processes depend on human labor, which requires considerable time and money [25, 26]. Despite recent advancements in automatic detection, identification models for fact-checking lack the required adaptive and systematic applications [2]. Table 1 lists popular fact-checking analysis tools that are used to check the authenticity of online content.


TABLE 1. Summary of some previous work related to debunking fake news.

Social network analysis has become a widely accepted tool [27]. Thus, the second key strategy is broadcasting denials to the public to prevent the exposure of individuals to fake news on social media [8], which can reduce the possibility of fake news spreading [28]. Fake news often spreads over social media through interpersonal communication [23], and personal involvement remains a salient construct of the spread of fake news on social media [29]. Thus, a key approach to combating the spread of fake news is to use the power of social relationships on social networks to spread messages from one individual to another [30]. Some researchers have examined interventions on social networks that might be effective in debunking fake news by using source detection, i.e., finding the person or location from whom or where the false information in the social network or web started spreading [16, 31–35] (see Table 1). For example, Shelke and Attar provided a state-of-the-art survey of different source detection methodologies, along with the available datasets and experimental setups, for single and multiple misinformation sources [35]. Other researchers have preferred to investigate the propagation dynamics of fake news [16]. As shown in Table 1, the prominent fake news diffusion models available in the literature can be classified into three major categories: soft computing [36], epidemiological [37, 38], and mathematical approaches [39–41]. These approaches are hard to execute because most of these studies are based on complex mathematical, physical, and epidemiological models; furthermore, in the real world, users may go beyond merely controlling the simulation settings in the diffusion models [17].

The third key strategy is fake news detection. Different artificial intelligence algorithms, along with cognitive psychology and mathematical models, are used to identify false content [16]. As assessing the veracity of a news story is complex from an engineering point of view, the research community is approaching this task from different perspectives; Table 1 presents some prominent research on the datasets, experimental settings, training, validation, and testing methods used in various machine learning and deep learning technologies, as well as other methods to address the issue [16, 42].

As presented in Table 1, the first research aspect is the study of datasets and experimental settings. Different formats of datasets are used for content and behavioral analyses, such as text tweets, images, headlines, news articles, URLs, and users' comments, suggestions, and discussions on particular events [16]. Most researchers used Twitter, Sina Weibo, and Facebook's Application Programming Interface (API) as data sources for collecting and analyzing rumors and fake news [12, 43, 44], whereas others preferred a data repository such as FakeNewsNet, which contains two comprehensive datasets, PolitiFact and GossipCop, to facilitate research in the field of fake news analysis [45]. These datasets collect multi-dimensional information from news content and social contexts, as well as spatiotemporal data from diverse news domains [16]. In addition, some researchers compared several of the widely used datasets and experimental setups in detail [46].

The second research aspect is the study of handcrafted feature extraction. Machine and deep learning are prominent techniques for designing models to detect false information. The effectiveness of these algorithms depends mainly on pattern analysis and the extraction of textual [12, 47, 48], image [47, 49], user [50], message [50, 51], propagation [52], structural [48], temporal [53], and linguistic features [16, 54] (see Table 1).

The third research aspect is the study of network structures. Network structures are innovative methods of assessing the credibility of a target article [55]. For example, Ishida and Kuraya proposed a bottom-up approach with relative, mutual, and dynamic credibility evaluation using a dynamic relational network (or mutual evaluation model) of related news articles, wherein each node can evaluate, and in turn be evaluated by, other nodes for credibility based on the consistency of the node's content [56]. In addition, researchers have used scalable synthetic graph generators to model network structures and the user connectivity of online social networks. These generators provide a wide variety of generative graph models that researchers can use to generate graphs based on the extraction of different features such as propagation, temporal, connectivity, and follower–followee relationships [16]. For example, Edunov et al. proposed Darwini, a graph generator that captures several core characteristics of real graphs and can be used efficiently to study the propagation and detection of false content by generating different social connections in the form of a graph [57]. To accomplish this, Darwini produces local clustering coefficients, degree distributions, node PageRanks, eigenvalues, and many other metrics [57].

The fourth research aspect is the study of machine learning and deep learning classifiers. Some scholars have investigated real-time data on social media sites using stance detection methods to identify people who are supportive, neutral, or opposed to fake news [58]. These methods are widely based on machine or deep learning algorithms to achieve open or target-specific classification [59–61]. For example, Pérez-Rosas et al. focused on the linguistic differences between fake news and legitimate news content by using machine learning techniques, including a linear support vector machine (SVM) classifier, and obtained 78% accuracy in detecting fake news on two novel datasets [62]. The major disadvantage of machine-learning-based models is their dependence on hand-crafted features, which require exhaustive, meticulous, detailed, and biased human effort; thus, recent work is shifting the trend toward deep-learning-based models [16]. For example, Zarrella and Marsh applied a recurrent neural network (RNN) initialized with features learned via distant supervision to tackle the SemEval-2016 Task 6 (Detecting Stance in Tweets, Subtask A: Supervised Frameworks) [63]. In parallel, they trained embeddings of words and phrases with the word2vec skip-gram method. This effort achieved the top score among 19 systems, with an F1 score of 67.8%, and a score of 71.1% in a non-official test. Poddar et al. developed a novel neural architecture for detecting the veracity of a rumor using the stances of people engaging in a conversation about it on Twitter [64]. Taking the conversation tree structure into consideration, the proposed CT-Stance model (a stance predictor model) achieved the best performance, with an accuracy of 79.86%, when considering all three realistically available signals (target tweet, conversation sequence, and time).
In addition, there are other fake news detection methods, such as the use of cognitive psychology to analyze human perceptions. For example, Kumar and Geethakumari explored the use of cognitive psychology concepts to evaluate the spread of misinformation, disinformation, and propaganda in online social networks by examining four main ingredients: the coherency of the message, credibility of the source, consistency of the message, and general acceptability of the message; they used the collaborative filtering property of social networks to detect any existing misinformation, disinformation, and propaganda [65].

Based on the analysis of the aforementioned related work, some previous studies on fake news debunking are summarized in Table 1. First, the differences between various types of debunked and non-debunked fake news based on real data from Chinese social media have not been comprehensively assessed; second, the ways in which users' social relationships on social networks influence the spread of debunking messages have also not been fully investigated. Unlike previous studies, we used a deep learning method to build text classifiers for identifying debunked posts (DPs) and non-debunked posts (NDPs) and filtered out refuters and non-refuters. Then, using a social network analysis method, we focused on an empirical analysis of real data from Chinese social media platforms by comparing the characteristics of the SRNs of refuters and non-refuters, and we sought to discover useful strategies for effectively debunking different categories of fake news.

Materials and Methods

Study Context and Data Collection

Sina Weibo, often referred to as the "Chinese Twitter," is one of the most influential social network platforms in China [68]. Twitter is a widely used microblogging platform worldwide whose content is mainly written in English, while Sina Weibo is the most famous microblogging platform in China, where all posts are written in Chinese [69]. Social media are interactive, computer-mediated technologies that allow people to create, share, or exchange information [69]. The increasing popularity of social media websites and Web 2.0 has led to an exponential growth of user-generated content, especially text content, on the Internet [69]; this generates a large amount of unstructured text data and provides data support for our research. Thus, to avoid selection bias, our data were collected from the Zhiwei Data Sharing Platform's (hereinafter, Zhiwei Data) database, which contains fake news about public events posted on Sina Weibo from July 2015 to September 2019, all of which had been verified as "false." We obtained 176 widely spread fake news events, comprising a total of 49,278 posts. We also cooperated with Zhiwei Data to obtain user profiles through the business API of Sina Weibo, including demographic characteristics such as users' names, genders, account types, locations, numbers of posts, followees, and followers, the sources of their posts, and their posting times.

As publicly available data were used in this study, we only referred to the summarized results and did not derive any sensitive data. Information on the individuals studied in this research has not been published elsewhere.

Methodology

Identification of Debunked Fake News Posts

Fake news contains partly true, false, and mixed information; thus, researchers need to know what kinds of opinions people express when communicating on social media, which can be determined through stance detection [58]. A stance is a person's opinion on or attitude toward some target entity, idea, or event, determined from a posting, e.g., "in favor of," "neutral," or "against" [70], or "support," "deny," "comment," or "query" [71]. According to the content of the 176 fake news events, we considered both subjective expressions and their corresponding targets, which might not be explicitly mentioned, and labeled each stance with respect to a specific target of each event; each stance was assigned one of two relative standpoints, "against" and "other," corresponding to DPs from refuters and NDPs from non-refuters, respectively. Additionally, a text classifier was developed to detect DPs, which indicated that the people who made these posts were against the fake news. Although we acknowledge that binary labeling has certain limitations, for our current research, its straightforwardness and simplicity offered advantages that outweighed its weaknesses given our main research objectives [72].

Stance detection is an area with closely related subjects [58]; it is typically modeled as a supervised learning process, which requires a training dataset to achieve good results. Therefore, to obtain a fully available dataset, we asked three members of our team to label posts with an "against" or "other" stance. Two members with a detailed understanding of fake news labeled 10,000 posts randomly selected from the 49,278 posts. We based this labeling on the conversations stemming from direct and nested replies to the posts originating from the fake news [71]. Next, these two members discussed all the annotation results and reannotated the posts to resolve their disagreements. Finally, a third member randomly selected 1,000 of the 10,000 posts for annotation to calculate the intercoder reliability. Cohen's kappa is a popular descriptive statistic for summarizing the cross-classification of two nominal variables with the same n ≥ 2 categories [73, 74]. An n×n table can, for example, be obtained by cross-classifying the ratings of two observers who have each classified a group of objects into n categories; in this case, the n×n table can be referred to as an agreement table, since it reflects how the ratings of the two observers agree and disagree [73]. Accordingly, to calculate the inter-annotator agreement, we assessed the validity of the annotation scheme using Cohen's kappa. The Cohen's kappa (κ) value for the members was 0.889 (p < 0.001), indicating good agreement between them [75]. Finally, 5,613 DPs were labeled from the 10,000 posts.
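As an illustration of the agreement statistic, Cohen's kappa can be computed directly from two raters' label lists; the labels below are invented for demonstration and are not the study's annotations.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters who labeled the same items."""
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the raters' marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Invented "against"/"other" labels for illustration only.
a = ["against", "other", "against", "against", "other", "other"]
b = ["against", "other", "against", "other", "other", "other"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

A kappa of 0.889, as obtained in the study, indicates agreement far above the 0.667 of this toy example.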

Using external references from a stop-word dictionary, we used Python (version 3.6.2) regular expressions to clean the sample data (including the compile and sub methods in Python's re package): we removed the relevant stop words (based on the stop-word list produced by the Harbin Institute of Technology), URLs, and punctuation, and corrected any misspelled words. Then, we created a user dictionary that includes terms related to the 176 fake news events, and we used the Jieba Chinese text segmentation module in Python, one of the most widely used word segmentation tools for Chinese, to segment the sample data. We then used Google's word2vec for word embedding [76]. The vector representations of words learned by word2vec can capture deep semantic relationships between words, contributing to better text classification [77].
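The cleaning step can be sketched with Python's re module as follows; the stop-word set and the example post are illustrative only (the study used the Harbin Institute of Technology stop-word list and jieba for segmentation).

```python
import re

# Illustrative stop words only; the study used the stop-word list
# produced by the Harbin Institute of Technology.
STOP_WORDS = {"的", "了", "是"}

URL_RE = re.compile(r"https?://\S+|www\.\S+")
# Keep CJK characters, letters, and digits; drop punctuation and symbols.
NON_TEXT_RE = re.compile(r"[^0-9A-Za-z\u4e00-\u9fff]+")

def clean_post(text):
    """Strip URLs and punctuation from a post via re.compile/re.sub."""
    return NON_TEXT_RE.sub("", URL_RE.sub(" ", text))

def filter_stop_words(tokens):
    """Drop stop words from a segmented token list (e.g., jieba.lcut output)."""
    return [t for t in tokens if t not in STOP_WORDS]

print(clean_post("这是谣言！详见 https://example.com 。"))  # → 这是谣言详见
```

The cleaned text would then be segmented (e.g., with jieba) before stop-word filtering and word2vec embedding.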

A long short-term memory (LSTM [78]) network was developed on the basis of recurrent neural networks (RNNs), which are capable of processing serialized information through their recurrent structures, to solve problems related to vanishing or exploding gradients [79]. LSTM shows remarkable ability in processing natural language; in particular, on Chinese social media posts, where words have complex context-dependent relationships, LSTM models perform well in text classification tasks [79, 80]. Therefore, a deep neural network with an LSTM algorithm was employed to build the text classifier. In LSTM model training, a hyperparameter is a parameter whose value is set before the model is trained. Generally, the hyperparameters need to be optimized, and a set of optimal hyperparameters is selected to improve the quality of learning [81]. To prevent overfitting during training, we selected appropriate combinations of parameters to train the model; the hyperparameter configuration was as follows: batch size = 512; maximum sentence length = 70; dropout = 0.5; activation = sigmoid; loss function = binary cross-entropy.
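The output activation and loss named in this configuration can be written out explicitly; the sketch below is a generic illustration of the sigmoid and binary cross-entropy functions, not the study's training code.

```python
import math

def sigmoid(z):
    """Map a logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def binary_cross_entropy(y_true, y_prob, eps=1e-12):
    """Mean binary cross-entropy over (label, probability) pairs."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(y_true)

# A maximally uncertain prediction (p = 0.5) costs ln 2 ≈ 0.693 per item.
print(round(binary_cross_entropy([1, 0], [sigmoid(0.0), sigmoid(0.0)]), 3))  # → 0.693
```

In the classifier, the sigmoid maps the LSTM's final output to the probability that a post is a DP, and this loss is minimized during training.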

Then, we tested word vector dimensions of 50, 100, 150, …, 450, and 500 to find the best dimension. We chose accuracy and the F1 score as metrics to measure the performance of the classifiers [82]. A 10-fold cross-validation technique was adopted to train our classifier and evaluate the performance of the classification algorithm. The experimental results of the classification are presented in Table 2. Compared with the other dimensions, a word vector dimension of 350 yielded a relatively high accuracy of 90.98% and an F1 score of 92.05%. Thus, we used a word vector dimension of 350 to build our classifier. Finally, we obtained 25,856 DPs from 6,114 refuters and 23,422 NDPs from 8,285 non-refuters using our classifier. A flowchart for identifying debunked fake news posts is shown in Figure 1 [see procedures (1–5) in Figure 1].
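The two evaluation metrics can be computed from confusion-matrix counts as follows (a generic sketch in which class 1 plays the role of "DP"; the labels are invented for demonstration).

```python
def accuracy_f1(y_true, y_pred):
    """Accuracy and F1 (positive class = 1) from parallel label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    acc = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return acc, f1

acc, f1 = accuracy_f1([1, 1, 0, 0], [1, 0, 0, 0])
print(acc, round(f1, 3))  # → 0.75 0.667
```

Under 10-fold cross-validation, these metrics would be averaged over the held-out folds.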


TABLE 2. Results of the classification.


FIGURE 1. Process for research procedures.

Qualitative Classification of Account Types and Fake News Topics

First, to analyze and examine how various accounts (refuters and non-refuters) were involved in DPs and NDPs, we referred to account classification standards based on the Sina Weibo certification on their homepages, which includes ten types of accounts: ordinary, media, government, celebrity, Weibo’s got talent, enterprise, campus, organization, website, and Weibo girl.

Second, we referenced the fake news classification standards of Twitter [12] and previous studies regarding the rumor classification standards of Sina Weibo in China [83]. We divided the fake news into six categories using the Zhiwei Data’s classification criteria: society, health, business, science and technology (hereinafter referred to as science), disaster, and politics and finance (hereinafter referred to as politics) (Table 3). We also asked three annotators from Zhiwei Data to label 176 events according to these six categories. Additionally, the Cohen’s kappa (κ) value for the annotators was 0.961 (p<0.001), indicating that the classification results were robust.


TABLE 3. Examples for six categories of fake news (Translated into English from Chinese).

For each type of fake news, we used the number of posts to indicate the spreading volumes of the DPs and NDPs. We also sorted the posts of each event in ascending order of posting time and calculated the relative hour of each post with respect to the first post in the event. For the six types of fake news, we aggregated all their corresponding events. Here, the epoch t was set to 1 h because of data sparsity and the 24-h daily schedule. Therefore, for the six types of fake news, we used p_r to represent the cumulative probability of propagation of the DPs or NDPs at any hour t, indicating the growth rates of their spread, based on the method of Liu et al. [84], and we defined the following:

$$p_r = \frac{n_{(i,j)}(t)}{n_{(i,j)}(t')}, \tag{1}$$

where $n_{(i,j)}(t)$ represents the cumulative total of posts of type j in fake news of category i up to time t; $t'$ denotes the time at which the propagation of the fake news of category i ends; j indicates the post type (j = 0, DPs; j = 1, NDPs); and i indicates the fake news category (i = 1, 2, …, 6, representing society, health, business, science, disaster, and politics, respectively).
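Eq. 1 amounts to a cumulative fraction of posts over time; a minimal sketch, with invented relative-hour data, is:

```python
def cumulative_propagation(hours, t):
    """p_r: fraction of posts (one type, one category) made no later than
    hour t, relative to the total once propagation has ended (Eq. 1)."""
    return sum(1 for h in hours if h <= t) / len(hours)

# Invented relative posting hours of DPs for one fake-news category.
dp_hours = [0, 0, 1, 1, 1, 2, 3, 5, 8, 13]
print(cumulative_propagation(dp_hours, 1))   # → 0.5 (5 of 10 posts by hour 1)
print(cumulative_propagation(dp_hours, 13))  # → 1.0 (propagation has ended)
```

A steeper rise of this curve over early hours corresponds to a faster growth rate of the DPs or NDPs.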

Social Relationship Network Construction and Analysis

On Sina Weibo, users (followers) may choose to follow any other users (followees) and thus automatically receive all the posts published by their followees, as on Twitter [85]. Here, an SRN with weak ties is formed; users can easily follow many people without talking to them directly. Thus, we filtered out all the follower–followee relationships (hereinafter, following relationships) of the 6,114 refuters and 8,285 non-refuters involved in spreading the fake news. We also filtered the source nodes (followers) and target nodes (followees) in the following relationships of the users involved in fake news propagation and finally obtained 74,987 following relationships.

A set of following relationships and the set of Sina Weibo users connected by these relationships form an SRN [85]. Relationships between people can be captured as graphs, where vertices represent entities and edges represent the connections among them [86]. A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices of the subgraph [87]. The k-core decomposition of a graph maintains, for each vertex, the max-k value: the maximum k for which a k-core containing the vertex exists [86]. k-core decomposition is often used in large-scale network analysis [86]; for example, it was recently applied to several real-world networks (the Internet, the WWW, etc.) and turned out to be an important tool for visualizing complex networks and interpreting the cooperative processes within them [87, 88]. Thus, to explore the network structure of the participants' social relationships, for both refuters and non-refuters, we established SRNs under each topic and their k-core decompositions based on the following relationships between the users (2 × 2 × 6 = 24 networks); the networks were directed and unweighted. In an SRN, each user is a node; if user i follows user j, then there is a directed edge from i to j, and isolated nodes denote users without a following relationship with anyone in the network. To better investigate the following relationships between refuters in DPs and those between non-refuters in NDPs, for each type of fake news, we divided all the users participating in the discussion into four categories: the SRN of refuters (refuter–refuter SRN, R–R), the k-core decomposition of the SRN of refuters (Rk–Rk), the SRN of non-refuters (non-refuter–non-refuter SRN, NR–NR), and the k-core decomposition of the SRN of non-refuters (NRk–NRk).
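The k-core extraction underlying these decompositions can be sketched with a simple peeling procedure; this is a generic implementation on a toy undirected graph, not the study's code, and the study's networks are the directed SRNs described above.

```python
def k_core(adj, k):
    """Nodes of the k-core: iteratively peel vertices with fewer than
    k neighbours until every remaining vertex has degree >= k."""
    nodes = set(adj)
    changed = True
    while changed:
        changed = False
        for n in list(nodes):
            if sum(1 for m in adj[n] if m in nodes) < k:
                nodes.discard(n)
                changed = True
    return nodes

# Toy following network: a triangle {a, b, c} plus a pendant node d.
g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(sorted(k_core(g, 2)))  # → ['a', 'b', 'c']
```

Peeling the pendant node d leaves the 2-core {a, b, c}; applied to an SRN, this strips loosely attached accounts and exposes its densely connected following core.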

We computed the following set of basic network properties, which allowed us to encode each network according to a tuple of features: (1) the average degree (<K>), (2) the diameter (D), (3) the average clustering coefficient (CC), and (4) the average path length (L) [15, 89], where <K> is characterized by the average of the degrees of the nodes in the SRN and measures the average influence of the nodes in the SRN; D is characterized by the maximum value of the shortest distance between any two nodes in the network and measures the maximum length of any following relationship in the network; CC is characterized by the average of the clustering coefficients of the nodes in the network and measures the cohesive size of the network; and L is characterized by the average value of the shortest distance between the nodes in the network, which measures the average length of any following relationship in the network.
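
A minimal sketch of how these four properties can be computed with networkx; the toy edge list is hypothetical, and (as is conventional for these measures) D, CC, and L are computed on the undirected view of the network.

```python
import networkx as nx

# Hypothetical following network: an edge (i, j) means i follows j.
G = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a"), ("d", "a")])
U = G.to_undirected()

n = U.number_of_nodes()
avg_degree = sum(d for _, d in U.degree()) / n      # <K>
diameter = nx.diameter(U)                           # D
avg_clustering = nx.average_clustering(U)           # CC
avg_path_len = nx.average_shortest_path_length(U)   # L
```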

Note that our analysis is based on the fact that fake news propagation had already ended; we did not construct a reposting network. The network indicators (1–4) above only describe the basic properties of a network and cannot explain the differences in the propagation of DPs and NDPs. Therefore, to compare SRNs more effectively, we defined measurement indices based on the following three considerations. First, some accounts exhibit distinct characteristics in the spread of fake news [18], and there may be individual differences; that is, different types of nodes may play different roles in the spread. Second, in interpersonal communication, fake news tends to flow first to the neighbors, friends, and colleagues of the spreaders and then circulate within certain regions and groups; thus, we used the number of nodes with social relationships to measure the magnitude of this spreading effect. Third, a shortcut between two local cliques is conducive to the spread of information, and the number of weakly connected components in a social network indicates the magnitude of this influence. Accordingly, we defined the following three measurement indices:

(1) Proportion of account types (Ratio)

To avoid differences in the sizes of the SRNs of different types of fake news, we considered the relative index of the proportion of account types participating in DPs and NDPs as follows:

Ratio = DP_ik / NDP_ik,    (2)

where DP_ik and NDP_ik represent the proportion of account type k among the DPs and NDPs, respectively, for fake news of category i, where i = 1, 2, …, 6 represents society, health, business, science, disaster, and politics, respectively, and k = 1, 2, …, 5 represents ordinary users, media, government, celebrity, and other types of accounts (including Weibo got talent, enterprise, campus, organization, website, and Weibo girl accounts), respectively.
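
A minimal sketch of Eq. 2 (the account counts below are hypothetical):

```python
# Ratio index (Eq. 2, sketched on hypothetical counts): the share of
# account type k among DPs divided by its share among NDPs for one
# fake-news category.
def ratio(dp_counts, ndp_counts, account_type):
    """dp_counts / ndp_counts: dicts mapping account type -> post count."""
    dp_share = dp_counts[account_type] / sum(dp_counts.values())
    ndp_share = ndp_counts[account_type] / sum(ndp_counts.values())
    return dp_share / ndp_share

dp = {"media": 60, "celebrity": 20, "ordinary": 20}
ndp = {"media": 30, "celebrity": 50, "ordinary": 20}
r_celebrity = ratio(dp, ndp, "celebrity")   # 0.2 / 0.5 = 0.4
```

A Ratio below 1 for a given account type means that type is relatively under-represented among the debunking posts.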

(2) Network size of the k-core decomposition of an SRN (Size)

To avoid the differences in the scales of the SRNs of different types of fake news, we considered analyzing the ratio of the number of nodes in the SRNs before and after k-core decomposition to measure the scale of the SRNs and determine the size of nodes with following relationships in the k-core decompositions of the SRNs; accordingly, we defined size as follows:

Size = K_Nodes_ij / Nodes_ij,    (3)

where Nodes_ij and K_Nodes_ij represent the number of nodes in the SRN before and after its k-core decomposition, respectively, for fake news of category i and group j, for refuters or non-refuters, where i = 1, 2, …, 6 represents society, health, business, science, disaster, and politics, respectively, and j = 0, 1 represents refuters and non-refuters, respectively.
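
A minimal sketch of Eq. 3 (the node counts below are hypothetical):

```python
# Size index (Eq. 3): the fraction of an SRN's nodes that survive
# k-core decomposition, i.e. for k = 1 the share of users that have at
# least one following relationship in the network.
def size_index(n_nodes_srn, n_nodes_kcore):
    return n_nodes_kcore / n_nodes_srn

size = size_index(200, 109)   # hypothetical counts -> 0.545
```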

(3) Number of weakly connected components (Nwcc)

A weakly connected component of a directed graph is a maximal subgraph in which, for each pair of vertices (u, v), there exists a path between u and v when edge directions are ignored [15].
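
Counting weakly connected components amounts to running a traversal over the undirected view of the directed graph; a minimal sketch on hypothetical edges:

```python
# Count weakly connected components of a directed graph by BFS over
# its undirected view, per the definition above (toy data).
from collections import defaultdict, deque

def n_weakly_connected(edges, nodes=()):
    adj = defaultdict(set)
    all_nodes = set(nodes)
    for u, v in edges:               # ignore edge directions
        adj[u].add(v)
        adj[v].add(u)
        all_nodes |= {u, v}
    seen, count = set(), 0
    for start in all_nodes:
        if start in seen:
            continue
        count += 1                   # new component found
        queue = deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            for nb in adj[node] - seen:
                seen.add(nb)
                queue.append(nb)
    return count

nwcc = n_weakly_connected([("a", "b"), ("c", "b"), ("x", "y")])  # -> 2
```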

Additionally, we used the open-source software Gephi 0.9.2 to visualize the networks and calculate the network properties [90, 91]. In the networks, nodes were colored according to the ten types of accounts, and the size of each node corresponded to its total degree (in-degree plus out-degree), that is, the number of followees and followers of the account, which is a measure of its influence. The Fruchterman-Reingold layout algorithm was used to compute the graph layout by force-directed placement [92].
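
The layout step can also be reproduced outside Gephi, for example with networkx's implementation of the Fruchterman-Reingold force-directed algorithm (the toy graph below is hypothetical):

```python
import networkx as nx

# Hypothetical following network; the layout assigns each node a 2-D
# position via force-directed placement (fixed seed for reproducibility).
G = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a"), ("d", "a")])
pos = nx.fruchterman_reingold_layout(G, seed=42)   # node -> (x, y)
```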

Statistical Analyses

Pearson chi-square (χ2) tests were performed to compare the differences in the distributions of DPs and NDPs across the fake news topics, as well as the differences in the account distributions of DPs and NDPs across the topics [93]. Wilcoxon rank-sum tests were used to measure the differences in the number of posts between DPs and NDPs across each fake news topic over time [94]. Both tests were conducted in SPSS for Windows, version 25.0.0 (IBM Corporation).
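
The same two tests can be reproduced outside SPSS, for example with scipy. The 2 × 2 contingency table below uses the society- and business-related DP/NDP counts reported in the Results; the two time series are hypothetical.

```python
from scipy import stats

# Pearson chi-square on a topic x stance contingency table
# (rows: society- and business-related fake news; columns: DP, NDP).
table = [[15844, 15501],
         [2265, 1329]]
chi2, p, dof, expected = stats.chi2_contingency(table)

# Wilcoxon rank-sum test on per-period post counts for DPs vs. NDPs
# (hypothetical counts for illustration).
dp_counts = [12, 30, 45, 80, 95]
ndp_counts = [10, 25, 33, 60, 70]
stat, p_rs = stats.ranksums(dp_counts, ndp_counts)
```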

Research Procedures

The process followed in this study is illustrated in Figure 1. First, we collected 49,278 posts of 176 fake news events on Sina Weibo from July 2015 to September 2019. Second, we used the LSTM algorithm to build a stance classifier, which divided posts into DPs and NDPs and filtered out refuters and non-refuters for further analysis. Third, we divided the account types into ten categories and fake news into six topics. Fourth, for each type of fake news, we obtained 74,987 following relationships of refuters and non-refuters and constructed SRNs and their k-core decomposition SRNs. Fifth, we analyzed the differences in the volume and spreading growth rates between DPs and NDPs for each type of fake news by comparing the posts in terms of number and cumulative probability distribution. For each type of fake news, we analyzed the differences in the spreading growth rates of DPs and NDPs, as well as the reasons for these differences by investigating the proportion of account types, the network sizes of the k-core decompositions of the SRNs, and the number of weakly connected components. Finally, we provide recommendations for combating fake news on social media.

Results

Development of Debunked and Non-Debunked Posts Over Time

Distribution of Debunked and Non-Debunked Posts

To better compare DPs and NDPs, we used graphs for clearer data representation. The quarterly numbers of all DPs and NDPs diffused on Sina Weibo from July 2015 to September 2019 are shown in Figure 2A. Owing to the scarcity and imbalance of the dataset in certain periods, and for better comparability with research on Twitter [12], we used quarterly data in Figure 2A. The spread of fake news increased yearly, particularly in 2018 and 2019. Similarly, the spread of DPs increased gradually but significantly, especially in the third and fourth quarters of 2018; furthermore, the number of DPs was much higher than that of NDPs, and the gap between them widened over time (p < 0.001).


FIGURE 2. Descriptive analysis of DPs and NDPs: (A) quarterly DPs and NDPs and (B) the number of DPs and NDPs under the six fake news topics. Notes: Science represents science and technology and Politics denotes politics and finance.

The total numbers of DPs and NDPs for the various types of fake news are shown in Figure 2B. The Pearson chi-square test yielded χ2(25) = 117110 (p < 0.001), indicating a very high degree of statistical significance. Figure 2B demonstrates that society-related fake news was the largest category on Chinese social media platforms, with 15,844 DPs and 15,501 NDPs, followed by health-related fake news with 4,292 DPs and 3,981 NDPs and business-related fake news with 2,265 DPs and 1,329 NDPs; this differs from the distribution on Twitter (politics, urban legends, business, terrorism, and war, respectively [12]). DPs consistently outnumbered NDPs. We attribute this to our research data being historical data collected over a long period, during which the cumulative number of DPs mostly exceeded that of NDPs. However, the observation that more DPs than NDPs circulated in total is insufficient to prove that the debunking was effective.

Spreading Growth Rates of Debunked and Non-Debunked Posts

For the spread of fake news, early debunking is important to minimize harmful effects. Therefore, we further examined the differences in the propagation growth rates of DPs and NDPs. Using Eq. 1, we calculated pr for each type of fake news and plotted the cumulative probability distributions of DPs and NDPs for the six fake news categories (Figure 3). Owing to the imbalanced dataset, the number of posts varied and was greater during certain periods. The cumulative probability distributions in Figure 3 reveal the differences in the spreading growth rates of DPs and NDPs: for the same propagation time, a larger pr represents a faster spreading rate. For instance, as depicted in Figure 3D, after the first 70 h, the pr of the NDPs is always higher than that of the DPs, indicating that the NDPs spread faster than the DPs. Figure 3 also indicates the time required for DPs and NDPs to reach a given proportion of their respective totals: for the same value of pr (e.g., pr = 0.5, i.e., half of the total DPs or NDPs had propagated), a shorter time represents a faster growth rate.
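
A minimal sketch of such a cumulative-probability curve, assuming that pr(t) is the cumulative share of posts published by time t, consistent with its use here (Eq. 1 itself is defined earlier in the paper); the hourly counts are hypothetical:

```python
# Cumulative share of posts over time (hypothetical hourly counts):
# pr[t] is the fraction of the total posts published by the end of
# hour t, so a curve that rises faster indicates faster spread.
def cumulative_pr(posts_per_hour):
    total = sum(posts_per_hour)
    pr, running = [], 0
    for n in posts_per_hour:
        running += n
        pr.append(running / total)
    return pr

pr = cumulative_pr([40, 30, 20, 10])   # -> [0.4, 0.7, 0.9, 1.0]
```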


FIGURE 3. Growth rate of debunked posts (DPs) and non-debunked posts (NDPs) in six categories of fake news over time: (A) Society, (B) Health, (C) Business, (D) Science, (E) Disaster, and (F) Politics. Notes: Science represents science and technology and Politics represents politics and finance.

Figure 3 shows that, for all types of fake news, both NDPs and DPs spread rapidly during the first third of the propagation period; however, the details of the spreading patterns differ. Except for business-related fake news (Figure 3C, where the pr of DPs is close to that of NDPs; not statistically significant, p = 0.943), NDPs spread faster than DPs for the other five types of fake news (Figures 3A,B,D–F, where the pr of NDPs is higher than that of DPs; statistically significant, p ∈ [0.000, 0.036]). For all types of fake news, in the first third of the propagation period, the pr of NDPs reached 0.8 or more, that is, more than 80% of the total NDPs had spread; however, the pr of DPs did not always reach 0.8, for example, for science- (Figure 3D) and politics-related (Figure 3F) fake news. Therefore, the spreading patterns of DPs and NDPs differed across categories of fake news, indicating that the patterns may be related to the characteristics of the event. Specifically, among the six types, business-related fake news was debunked best, with DPs catching up to the speed of NDPs, whereas for the other types, DPs did not spread as rapidly as NDPs. Fake news related to science (Figure 3D) and politics (Figure 3F) was debunked worst, indicating that the response of the DPs was slower and the debunking messages were released less promptly than in the other categories.

Social Relationship Networks of Refuter–Refuter and Non-Refuter–Non-Refuter

Distribution of Account Types

To investigate the different roles played by the various types of accounts in the growth rates of DPs and NDPs, we first examined the proportions of accounts in all types of fake news; the results are shown in Figure 4. For both DPs and NDPs, Pearson chi-square tests indicated a very high degree of statistical significance in the account distributions under the different types of fake news [Figure 4A: χ2(20) = 937.751, p < 0.001; Figure 4B: χ2(20) = 349.891, p < 0.001]. Media and celebrity accounts produced most of the DPs across the six types of fake news; media accounts typically accounted for the largest proportion of DPs, averaging 57.64% across the six types of fake news events, followed by celebrity accounts, averaging 20.60%. Among the NDPs, media and celebrity accounts were again the most common, with average proportions of 31.75% and 43.89%, respectively. Celebrity accounts accounted for the largest proportion in four types of fake news events, namely politics (55.39%), science (49.15%), business (46.28%), and society (40.39%), whereas media accounts accounted for the largest proportion in health- (42.02%) and disaster-related (38.46%) fake news.


FIGURE 4. Proportion of account types in (A) debunked posts (DPs) and (B) non-debunked posts (NDPs) under six types of fake news. Notes: Other types of accounts include Weibo got talent, enterprise, campus, organization, website, and Weibo girl accounts.

Celebrity and media accounts played an important role in the spreading growth rates of DPs and NDPs. Therefore, invoking the “80/20” distribution [95], we focused mainly on the roles of media and celebrity accounts in the propagation of DPs and NDPs, referring to Eq. 2; the results are shown in Table 4. For business-related fake news, which achieved a better debunking effect in the spread of DPs, the relative proportion of celebrity accounts in DPs to those in NDPs was the highest (Ratio2 = 0.708), whereas the relative proportion of media accounts in DPs to those in NDPs was the lowest (Ratio1 = 1.495). In contrast, for the two types of fake news with the worst debunking effects, namely science- and politics-related fake news, the relative proportion of celebrity accounts in DPs to those in NDPs was the lowest, at 0.292 and 0.361, respectively, whereas that of media accounts was the highest, at 2.486 and 2.600, respectively.


TABLE 4. Relative proportion of account types in DPs and those in NDPs under six types of fake news.

Comparison of Differences in Networks of Refuters and Non-Refuters

To analyze the factors affecting the growth rates of DPs and NDPs for the six types of fake news, we constructed SRNs of refuters and non-refuters, along with the corresponding k-core decompositions of these SRNs, and sought to discover the relevant factors in terms of network characteristics. We portrayed the SRNs of refuters and non-refuters for each type of fake news using the four basic network properties (Figure 5). In the SRNs of refuters and non-refuters for each type of fake news event, the category with a better debunking effect (business) and those with worse debunking effects (society, health, disaster, politics, and science) did not show significant differences in <K>, D, CC, or L.


FIGURE 5. SRNs under six types of fake news: (A) of refuters (R–R) in society-related fake news, (B) of non-refuters (NR–NR) in society-related fake news, (C) R–R in health-related fake news, (D) NR–NR in health-related fake news, (E) R–R in business-related fake news, (F) NR–NR in business-related fake news, (G) R–R in science-related fake news, (H) NR–NR in science-related fake news, (I) R–R in disaster-related fake news, (J) NR–NR in disaster-related fake news, (K) R–R in politics-related fake news, and (L) NR–NR in politics-related fake news.

As the basic network properties in Figure 5 did not distinguish the two groups, to further analyze the influence of the following relationships between nodes (refuters or non-refuters) on the growth rates of DPs and NDPs, we performed k-core decomposition (k = 1) on the SRNs to remove the isolated nodes (degree = 0) without any following relationship, obtaining k-core decompositions composed only of nodes with following relationships. Furthermore, for the different types of fake news, we examined the differences in the network sizes of the k-core decompositions of the SRNs and the numbers of weakly connected components, as shown in Figure 6.


FIGURE 6. k-core decompositions of SRNs under six types of fake news: (A) SRN of refuters after k-core decomposition (Rk–Rk) in society-related fake news, (B) SRN of non-refuters after k-core decomposition (NRk–NRk) in society-related fake news, (C) Rk–Rk in health-related fake news, (D) NRk–NRk in health-related fake news, (E) Rk–Rk in business-related fake news, (F) NRk–NRk in business-related fake news, (G) Rk–Rk in science-related fake news, (H) NRk–NRk in science-related fake news, (I) Rk–Rk in disaster-related fake news, (J) NRk–NRk in disaster-related fake news, (K) Rk–Rk in politics-related fake news, and (L) NRk–NRk in politics-related fake news.

First, to analyze the impact of the number of nodes with following relationships in the SRN on the spread of information, the network size of the k-core decompositions of the SRNs was examined using Eq. 3. The network size of Rk–Rk was smaller than that of NRk–NRk in the five fake news categories of society, health, disaster, politics, and science; for example, in society-related fake news, Rk–Rk (Size = 0.545) < NRk–NRk (Size = 0.647). In contrast, in business-related fake news, which had a better debunking effect, Rk–Rk (Size = 0.340) > NRk–NRk (Size = 0.234).

Second, the number of weakly connected components in the k-core decompositions of the SRNs also differed between business-related fake news and the other five types. Specifically, the number of weakly connected components of Rk–Rk was smaller than that of NRk–NRk in the five fake news categories of society, health, disaster, politics, and science; for example, in health-related fake news, Rk–Rk (Nwcc = 9) < NRk–NRk (Nwcc = 16). In contrast, in business-related fake news, which had a better debunking effect, Rk–Rk (Nwcc = 11) > NRk–NRk (Nwcc = 9).

Discussion

Our results highlight the differences in the growth rates between DPs and NDPs under different fake news topics from the perspective of SRNs on social media in China, explore the reasons for these differences in depth, and provide the following key insights.

First, as shown in Figure 2, our results indicated that the spread of fake news is increasing yearly, similar to the findings reported by Vosoughi et al. [12]. On Chinese social media, people have distinct preferences for fake news on different topics. Specifically, society- and health-related fake news were highly important to Chinese society [4]: society-related fake news had 15,844 DPs and 15,501 NDPs, followed by health-related fake news with 4,292 DPs and 3,981 NDPs, which differed from the distribution on Twitter (politics, urban legends, business, terrorism, and war, respectively [12]). Thus, on the one hand, our results confirmed that information-spreading patterns differ between categories of fake news events [96] and are determined by event attributes [84]; on the other hand, they confirmed that differences in spread across countries can be attributed to cultural differences, the news media environment, and other environmental factors [97]. Furthermore, as shown in Figure 3, the debunking effect of business-related fake news was better than that of the other types. This finding is inconsistent with previous research on fake news on Twitter, which found that falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, with the effects more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information [12]. In contrast, in our findings, business news had a high pr value in DPs, reflecting the high industry sensitivity of enterprises to fake news on Chinese social media platforms. For the other five types of fake news, especially science- and politics-related fake news, the growth rate of the pr of DPs was slower than that of NDPs.
One possible reason lies in the nature of individuals’ relationships with their country, society, and national culture: social norms directly influence an individual’s belief system, which supports the existence of confirmation bias [6] and may mean that an individual requires more time to change preexisting beliefs. Therefore, practitioners should urgently strengthen the control and management of these two categories of fake news, and it is necessary to raise public awareness in these two areas immediately. From a cognitive perspective, fake news has a strong preemptive advantage in its spread, resulting in a phenomenon called “failure of refutation,” in which fake news persists despite real information refuting it in the early stages of the spread. Thus, we propose that practitioners empower ordinary users with knowledge and technology in the relevant fields, which can improve users’ ability to recognize information. For example, government and mainstream media should improve their control and governance programs. This can be accomplished by explaining the truth accurately and increasing the credibility of reliable information through clear and concise factual statements, thereby increasing users’ ability to recognize the truth, building immunity to false information, and helping them overcome inherent human biases.

Second, for fake news on different topics, the results in Table 4 showed that, regardless of whether the posts were DPs or NDPs, celebrity accounts (Ratio) played a significant role in promoting the spread of information [98]. Users perceive social media posts by influencers (e.g., celebrity accounts) as particularly credible and engaging [99, 100]. Specifically, studies of health-related fake news have shown that celebrity involvement can be an effective tool for health organizations to convey messages that counter fake news [100]. Studies of politics-related fake news have likewise indicated that celebrities may participate in fake news as propagators or endorsers; their status affords them extended social reach and the ability to greatly influence fake news propagation and persuasion, for example, when governments and partisans use fake news for political framing [101]. Previous studies have mostly examined celebrities at a micro level (e.g., within specific fake news events); our results are more generalizable, as they are drawn at a macro level across various categories of fake news on Chinese social media platforms. A convincing explanation, according to social facilitation theory [102], is that the opinions of influential leaders tend to become dominant and are constantly reinforced by those around them, making their views highly consistent and strong. That is, people typically believe that influential users are naturally right and that following such users grants them safety; this leads them to accept messages from influential rebuttal sources without judgment.
Media opinions and information in newspapers, radio, and television are usually not presented directly to the general audience; they pass through opinion leaders for personalized interpretation before flowing to the less active part of the population, i.e., “media information, then opinion leaders, then the general audience.” Opinion leaders act as secondary information dissemination hubs and conduits of information in the spreading process; they filter and shape news and share their interpretations with others [103] and therefore have a profound impact on opinion formation and diffusion [104]. Celebrities on Sina Weibo can be considered opinion leaders [105], and our analysis shows that they play a crucial role in the dissemination of information. Therefore, we propose improving the control of fake news spread during debunking by paying attention to opinion leaders such as celebrities, focusing on the review of the information they release, and effectively gatekeeping this information before it is released. Moreover, opinion leaders such as celebrities should take up the role of promoting the importance of debunking fake news and implementing it among the general public.

Third, as shown in Figure 6, for fake news on different topics, messages were more likely to spread in networks with following relationships; that is, the larger the number of nodes (Size) with following relationships in the k-core decompositions of the SRNs, the faster the information spread. This indicates that a wide distribution of nodes promotes the flow of information. Social media platforms such as Sina Weibo are information dissemination platforms based on social relationships, where information dissemination and people’s social relationships are interrelated and mutually reinforcing. Fake news tends to spread first to the neighbors, friends, and colleagues of the spreaders and then circulate within certain areas and social groups. The breadth and depth of people’s SRNs determine the breadth and depth of the information they obtain and how far the information they spread can reach. Our results in Figure 6 also showed that the greater the number of weakly connected components (Nwcc), the easier it was for information to spread. Granovetter’s weak-tie theory illustrates that the shortcuts formed by weak ties across different social circles are conducive to the spread of messages between those circles [106, 107]. Our results show that a larger Nwcc and Size are associated with wider dissemination, in accordance with previous studies [107]. However, most previous studies reached these conclusions based on single-topic fake news, such as the claim that “iodized salt is radiation-proof” [107], whereas our findings are more general and reliable, as they are drawn from the macroscopic spread of various types of fake news on Chinese social media platforms.
In addition, according to previous research, the number of very strong interpersonal ties is small, but the number of low-strength ties is large [108]. For example, on Sina Weibo, two-way following relationships mostly comprise relatives, friends, colleagues, and other strong connections, whereas one-way following relationships mostly comprise weak connections; the latter form the weakly connected components of the SRNs. Therefore, on the one hand, we propose making full use of social networks among users in the debunking process to achieve the following goals: first, improving netizens’ ability to distinguish between true and fake news, including teaching them how to fact-check information and educating them in media literacy; and second, improving netizens’ critical thinking and reducing the herding effect caused by their following relationships. On the other hand, we propose that, to form an effective hedge against the weak ties of NDPs during debunking, multi-departmental, multi-faceted, and multi-directional joint refutations should be established to spread debunking messages in different circles, and a linkage mechanism should be formed. Fake news refuters should integrate support from different rebuttal agents, such as government departments, mainstream media outlets, and opinion leaders, to establish a rapid debunking mechanism for online fake news and to expand the influence of authoritative news so that fake news can be debunked quickly and accurately. In this way, governments can promote the dissemination of debunking information by broadening the channels for accurate information disclosure, increasing weak-tie connections with the public, and shortening the average path length between individuals in the networks [107].

However, we acknowledge that our research has certain limitations, which point to promising topics for further research. First, we analyzed only posts on Sina Weibo, which limits the data size and multi-platform coverage; we will consider adding data from other social media platforms, such as Baidu Tieba and WeChat. Second, most of our results come from a comparative analysis of users’ SRNs. In future research, we will consider using simulation methods, such as applying physical models, to enhance the robustness of our results [109]. We will also consider the internal mechanisms behind the time delays of DPs among different categories of fake news and analyze why people opt to refute or accept the DPs. Third, although we focused on comparing the debunking and non-debunking of fake news across various topics and obtained general and reliable findings based on a large-scale, multi-category fake news dataset, some of our findings validated intuitive expectations or previous studies. In the future, by investigating different types of fake news on Chinese social media platforms, we hope to uncover more interesting and surprising results based on social relationship networks. For example, previous research indicated that structural positions stratified by variables such as gender, age, ethnicity, paid employment, educational attainment, income, and family responsibilities may shape the formation of social relationships and one’s ability to gain valuable information [110]. Therefore, we hope to investigate more deeply how the attributes of social relationship networks affect the spread of debunking information. Finally, to keep the research objective focused, we detected only two stances.
In a future study, we will consider a more detailed refutation analysis based on multiple classifications using deep learning algorithms, not only evaluating the true and false labels of fake news but also drawing on sociology, psychology, and communication theory to examine controversy detection, the degree of divergence, and group polarization. We will also consider using fake news data on the COVID-19 epidemic for an in-depth study.

Conclusion

The growth of social networking has made social media platforms such as Sina Weibo a breeding ground for fake news. Based on six types of fake news spread on Sina Weibo, we first investigated the differences in the volume and growth rates of posts between DPs and NDPs by comparing their numbers and cumulative probability distributions. Second, we used three indices to explore the network characteristics of the following relationships of refuters and non-refuters, namely the proportion of account types, the network size of the k-core decompositions of the SRNs, and the number of weakly connected components, to uncover the deeper reasons for the differences in the growth rates of DPs and NDPs. Our results showed that business-related fake news was debunked best; society-related fake news, the most widely spread type in China, was debunked poorly; and science- and politics-related fake news was debunked worst. Additionally, regardless of whether the posts were DPs or NDPs, more celebrity accounts, more nodes with following relationships in the SRNs, and more weakly connected components led to a faster growth rate of dissemination. Finally, based on these results, we examined the reasons for the differences in the growth rates of DPs and NDPs and proposed countermeasures and recommendations for fake news debunking management that can be implemented by governments and decision-making institutions.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors; further inquiries can be directed to the corresponding author.

Author Contributions

XW: Conceptualization, Methodology, Software, Visualization, Formal analysis, Data Curation, Writing. FC: Methodology, Formal analysis. NM: Methodology. GY: Conceptualization, Writing, Funding acquisition.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 72074060).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

The authors would like to acknowledge the editor’s contribution and thank the reviewers for their helpful comments and recommendations. The authors would also like to acknowledge all the authors cited in the references and to thank the Zhiwei Data Sharing Platform (http://university.zhiweidata.com/) for providing the data used in this study.

References

1. Grinberg N, Joseph K, Friedland L, Swire-Thompson B, Lazer D. Fake News on Twitter during the 2016 U.S. Presidential Election. Science (2019) 363(6425):374–8. doi:10.1126/science.aau2706

2. Zhang X, Ghorbani AA. An Overview of Online Fake News: Characterization, Detection, and Discussion. Inf Process Management (2020) 57(2):102025. doi:10.1016/j.ipm.2019.03.004

3. Cheng Y, Lee C-J. Online Crisis Communication in a Post-truth Chinese Society: Evidence from Interdisciplinary Literature. Public Relations Rev (2019) 45(4):101826. doi:10.1016/j.pubrev.2019.101826

4. Guo L. China's "Fake News" Problem: Exploring the Spread of Online Rumors in the Government-Controlled News Media. Digital Journalism (2020) 8(8):992–1010. doi:10.1080/21670811.2020.1766986

5. Chen X, Wang N. Rumor Spreading Model Considering Rumor Credibility, Correlation and Crowd Classification Based on Personality. Sci Rep (2020) 10(1):5887. doi:10.1038/s41598-020-62585-9

6. Del Vicario M, Bessi A, Zollo F, Petroni F, Scala A, Caldarelli G, et al. The Spreading of Misinformation Online. Proc Natl Acad Sci USA (2016) 113(3):554–9. doi:10.1073/pnas.1517441113

7. Wang D, Qian Y. Echo Chamber Effect in the Discussions of Rumor Rebuttal about COVID-19 in China: Existence and Impact. J Med Internet Res (2021).

8. Lazer DMJ, Baum MA, Benkler Y, Berinsky AJ, Greenhill KM, Menczer F, et al. The Science of Fake News. Science (2018) 359(6380):1094–6. doi:10.1126/science.aao2998

9. Wu Y, Deng M, Wen X, Wang M, Xiong X. Statistical Analysis of Dispelling Rumors on Sina Weibo. Complexity (2020) 2020. doi:10.1155/2020/3176593

10. Gradoń KT, Hołyst JA, Moy WR, Sienkiewicz J, Suchecki K. Countering Misinformation: A Multidisciplinary Approach. Big Data Soc (2021) 8(1):20539517211013848.

11. Liu Y, Jin X, Shen H, Bao P, Cheng X. A Survey on Rumor Identification over Social Media. Chin J Comput (2018) 40(1):1–23.

12. Vosoughi S, Roy D, Aral S. The Spread of True and False News Online. Science (2018) 359(6380):1146–51. doi:10.1126/science.aap9559

13. Xu K, Zheng X, Cai Y, Min H, Gao Z, Zhu B, et al. Improving User Recommendation by Extracting Social Topics and Interest Topics of Users in Uni-Directional Social Networks. Knowledge-Based Syst (2018) 140:120–33. doi:10.1016/j.knosys.2017.10.031

14. Ahn Y-Y, Han S, Kwak H, Moon S, Jeong H. Analysis of Topological Characteristics of Huge Online Social Networking Services. In: Proceedings of the 16th International Conference on World Wide Web (2007).

15. Pierri F, Piccardi C, Ceri S. Topology Comparison of Twitter Diffusion Networks Effectively Reveals Misleading Information. Sci Rep (2020) 10(1):1372–9. doi:10.1038/s41598-020-58166-5

16. Meel P, Vishwakarma DK. Fake News, Rumor, Information Pollution in Social Media and Web: A Contemporary Survey of State-of-the-Arts, Challenges and Opportunities. Expert Syst Appl (2020) 153:112986. doi:10.1016/j.eswa.2019.112986

17. Pal A, Chua AYK, Hoe-Lian Goh D. Debunking Rumors on Social Media: The Use of Denials. Comput Hum Behav (2019) 96:110–22. doi:10.1016/j.chb.2019.02.022

18. Oehmichen A, Hua K, Amador Diaz Lopez J, Molina-Solana M, Gomez-Romero J, Guo Y-k. Not All Lies Are Equal. A Study into the Engineering of Political Misinformation in the 2016 US Presidential Election. IEEE Access (2019) 7:126305–14. doi:10.1109/access.2019.2938389

19. Wang Q, Yang X, Xi W. Effects of Group Arguments on Rumor Belief and Transmission in Online Communities: An Information Cascade and Group Polarization Perspective. Inf Management (2018) 55(4):441–9. doi:10.1016/j.im.2017.10.004

20. Jung A-K, Ross B, Stieglitz S. Caution: Rumors Ahead—A Case Study on the Debunking of False Information on Twitter. Big Data Soc (2020) 7(2):2053951720980127. doi:10.1177/2053951720980127

21. AlRubaian M, Al-Qurishi M, Al-Rakhami M, Hassan MM, Alamri A. CredFinder: A Real-Time Tweets Credibility Assessing System. In: 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE (2016).

22. Shao C, Hui PM, Wang L, Jiang X, Flammini A, Menczer F, et al. Anatomy of an Online Misinformation Network. PLoS One (2018) 13:e0196087. doi:10.1371/journal.pone.0196087

23. Jang J-w, Lee E-J, Shin SY. What Debunking of Misinformation Does and Doesn't. Cyberpsychol Behav Soc Networking (2019) 22(6):423–7.

24. Pennycook G, Bear A, Collins ET, Rand DG. The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines without Warnings. Management Sci (2020) 66(11):4944–57. doi:10.1287/mnsc.2019.3478

25. Dale R. NLP in a Post-truth World. Nat Lang Eng (2017) 23(2):319–24. doi:10.1017/s1351324917000018

26. Shu K, Sliva A, Wang S, Tang J, Liu H. Fake News Detection on Social Media. SIGKDD Explor Newsl (2017) 19(1):22–36. doi:10.1145/3137597.3137600

27. Stieglitz S, Mirbabaie M, Ross B, Neuberger C. Social Media Analytics - Challenges in Topic Discovery, Data Collection, and Data Preparation. Int J Inf Manag (2018) 39:156–68. doi:10.1016/j.ijinfomgt.2017.12.002

28. Zhang N, Huang H, Su B, Zhao J, Zhang B. Dynamic 8-state ICSAR Rumor Propagation Model Considering Official Rumor Refutation. Physica A: Stat Mech Its Appl (2014) 415:333–46. doi:10.1016/j.physa.2014.07.023

29. Chua AYK, Banerjee S. Intentions to Trust and Share Online Health Rumors: An Experiment with Medical Professionals. Comput Hum Behav (2018) 87:1–9. doi:10.1016/j.chb.2018.05.021

30. Tripathy RM, Bagchi A, Mehta S. Towards Combating Rumors in Social Networks: Models and Metrics. Intell Data Anal (2013) 17(1):149–75. doi:10.3233/ida-120571

31. Zhu K, Ying L. Information Source Detection in the SIR Model: A Sample-Path-Based Approach. IEEE/ACM Trans Networking (2014) 24(1):408–21.

32. Liu Y, Gao C, She X, Zhang Z. A Bio-Inspired Method for Locating the Diffusion Source with Limited Observers. In: 2016 IEEE Congress on Evolutionary Computation (CEC). IEEE (2016).

33. Choi J, Moon S, Woo J, Son K, Shin J, Yi Y. Rumor Source Detection under Querying with Untruthful Answers. In: IEEE INFOCOM 2017 - IEEE Conference on Computer Communications. IEEE (2017).

34. Louni A, Subbalakshmi KP. Who Spread that Rumor: Finding the Source of Information in Large Online Social Networks with Probabilistically Varying Internode Relationship Strengths. IEEE Trans Comput Soc Syst (2018) 5(2):335–43. doi:10.1109/tcss.2018.2801310

35. Shelke S, Attar V. Source Detection of Rumor in Social Network - A Review. Online Soc Networks Media (2019) 9:30–42. doi:10.1016/j.osnem.2018.12.001

36. Han S, Zhuang F, He Q, Shi Z, Ao X. Energy Model for Rumor Propagation on Social Networks. Physica A: Stat Mech Its Appl (2014) 394:99–109. doi:10.1016/j.physa.2013.10.003

37. Turenne N. The Rumour Spectrum. PLoS One (2018) 13:e0189080. doi:10.1371/journal.pone.0189080

38. Wang X, Li Y, Li J, Liu Y, Qiu C. A Rumor Reversal Model of Online Health Information during the COVID-19 Epidemic. Inf Process Management (2021) 58(6):102731. doi:10.1016/j.ipm.2021.102731

39. Cheng J-J, Liu Y, Shen B, Yuan W-G. An Epidemic Model of Rumor Diffusion in Online Social Networks. Eur Phys J B (2013) 86(1):1–7. doi:10.1140/epjb/e2012-30483-5

40. Tong G, Wu W, Du D-Z. Distributed Rumor Blocking with Multiple Positive Cascades. IEEE Trans Comput Soc Syst (2018) 5(2):468–80. doi:10.1109/tcss.2018.2818661

41. Pham DV, Nguyen GL, Nguyen TN, Pham CV, Nguyen AV. Multi-topic Misinformation Blocking with Budget Constraint on Online Social Networks. IEEE Access (2020) 8:78879–89. doi:10.1109/access.2020.2989140

42. Saquete E, Tomás D, Moreda P, Martínez-Barco P, Palomar M. Fighting Post-truth Using Natural Language Processing: A Review and Open Challenges. Expert Syst Appl (2020) 141:112943. doi:10.1016/j.eswa.2019.112943

43. Zubiaga A, Aker A, Bontcheva K, Liakata M, Procter R. Detection and Resolution of Rumours in Social Media: A Survey. ACM Comput Surv (2018) 51(2):1–36.

44. Li Z, Zhang Q, Du X, Ma Y, Wang S. Social Media Rumor Refutation Effectiveness: Evaluation, Modelling and Enhancement. Inf Process Management (2021) 58(1):102420. doi:10.1016/j.ipm.2020.102420

45. Shu K, Mahudeswaran D, Liu H. FakeNewsTracker: A Tool for Fake News Collection, Detection, and Visualization. Comput Math Organ Theor (2019) 25(1):60–71. doi:10.1007/s10588-018-09280-3

46. Al-Qurishi M, Al-Rakhami M, Alrubaian M, Alarifi A, Rahman SMM, Alamri A. Selecting the Best Open Source Tools for Collecting and Visualizing Social Media Content. In: 2015 2nd World Symposium on Web Applications and Networking (WSWAN). IEEE (2015).

47. Yang Y, Zheng L, Zhang J, Cui Q, Li Z, Yu PS. TI-CNN: Convolutional Neural Networks for Fake News Detection. arXiv preprint arXiv:1806.00749 (2018).

48. Vicario MD, Quattrociocchi W, Scala A, Zollo F. Polarization and Fake News. ACM Trans Web (2019) 13(2):1–22. doi:10.1145/3316809

49. Jin Z, Cao J, Zhang Y, Zhou J, Tian Q. Novel Visual and Statistical Image Features for Microblogs News Verification. IEEE Trans Multimedia (2016) 19(3):598–608.

50. Yang F, Liu Y, Yu X, Yang M. Automatic Detection of Rumor on Sina Weibo. In: Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics (2012).

51. Lin D, Lv Y, Cao D. Rumor Diffusion Purpose Analysis from Social Attribute to Social Content. In: 2015 International Conference on Asian Language Processing (IALP). IEEE (2015).

52. Castillo C, Mendoza M, Poblete B. Information Credibility on Twitter. In: Proceedings of the 20th International Conference on World Wide Web (2011).

53. Shu K, Mahudeswaran D, Wang S, Lee D, Liu H. FakeNewsNet: A Data Repository with News Content, Social Context, and Spatiotemporal Information for Studying Fake News on Social Media. Big Data (2020) 8(3):171–88. doi:10.1089/big.2020.0062

54. Vosoughi S, Mohsenvand MN, Roy D. Rumor Gauge. ACM Trans Knowl Discov Data (2017) 11(4):1–36. doi:10.1145/3070644

55. Zhou S, Ng ST, Lee SH, Xu FJ, Yang Y. A Domain Knowledge Incorporated Text Mining Approach for Capturing User Needs on BIM Applications. Eng Construction Architectural Management (2019).

56. Ishida Y, Kuraya S. Fake News and Its Credibility Evaluation by Dynamic Relational Networks: A Bottom up Approach. Proced Computer Sci (2018) 126:2228–37. doi:10.1016/j.procs.2018.07.226

57. Edunov S, Logothetis D, Wang C, Ching A, Kabiljo M. Darwini: Generating Realistic Large-Scale Social Graphs. arXiv preprint arXiv:1610.00664 (2016).

58. Lillie AE, Middelboe ER. Fake News Detection Using Stance Classification: A Survey. arXiv preprint arXiv:1907.00181 (2019).

59. Xu R, Zhou Y, Wu D, Gui L, Du J, Xue Y. Overview of NLPCC Shared Task 4: Stance Detection in Chinese Microblogs. In: Natural Language Understanding and Intelligent Applications. Springer (2016). p. 907–16. doi:10.1007/978-3-319-50496-4_85

60. Aker A, Derczynski L, Bontcheva K. Simple Open Stance Classification for Rumour Analysis. arXiv preprint arXiv:1708.05286 (2017). doi:10.26615/978-954-452-049-6_005

61. Dungs S, Aker A, Fuhr N, Bontcheva K. Can Rumour Stance Alone Predict Veracity? In: Proceedings of the 27th International Conference on Computational Linguistics (2018).

62. Pérez-Rosas V, Kleinberg B, Lefevre A, Mihalcea R. Automatic Detection of Fake News. arXiv preprint arXiv:1708.07104 (2017).

63. Zarrella G, Marsh A. MITRE at SemEval-2016 Task 6: Transfer Learning for Stance Detection. arXiv preprint arXiv:1606.03784 (2016). doi:10.18653/v1/s16-1074

64. Poddar L, Hsu W, Lee ML, Subramaniyam S. Predicting Stances in Twitter Conversations for Detecting Veracity of Rumors: A Neural Approach. In: 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE (2018).

65. Kumar KK, Geethakumari G. Detecting Misinformation in Online Social Networks Using Cognitive Psychology. Human-centric Comput Inf Sci (2014) 4(1):1–22. doi:10.1186/s13673-014-0014-x

66. Wang Z, Sui J. Multi-level Attention Residuals Neural Network for Multimodal Online Social Network Rumor Detection. Front Phys (2021) 9:466. doi:10.3389/fphy.2021.711221

67. Conforti C, Pilehvar MT, Collier N. Towards Automatic Fake News Detection: Cross-Level Stance Detection in News Articles. In: Proceedings of the First Workshop on Fact Extraction and VERification (FEVER) (2018).

68. Rodríguez CP, Carballido BV, Redondo-Sama G, Guo M, Ramis M, Flecha R. False News Around COVID-19 Circulated Less on Sina Weibo Than on Twitter. How to Overcome False Information? Int Multidisciplinary J Soc Sci (2020) 9(2):107–28.

69. Chen W, Lai KK, Cai Y. Exploring Public Mood toward Commodity Markets: A Comparative Study of User Behavior on Sina Weibo and Twitter. Internet Res (2020) 31(3):1102–19. doi:10.1108/intr-02-2020-0055

70. Mohammad SM, Sobhani P, Kiritchenko S. Stance and Sentiment in Tweets. ACM Trans Internet Technol (2017) 17(3):1–23. doi:10.1145/3003433

71. Derczynski L, Bontcheva K, Liakata M, Procter R, Hoi GWS, Zubiaga A. SemEval-2017 Task 8: RumourEval: Determining Rumour Veracity and Support for Rumours. arXiv preprint arXiv:1704.05972 (2017).

72. Yu Y, Duan W, Cao Q. The Impact of Social and Conventional Media on Firm Equity Value: A Sentiment Analysis Approach. Decis Support Syst (2013) 55(4):919–26. doi:10.1016/j.dss.2012.12.028

73. Warrens MJ. Weighted Kappa Is Higher Than Cohen's Kappa for Tridiagonal Agreement Tables. Stat Methodol (2011) 8(2):268–72. doi:10.1016/j.stamet.2010.09.004

74. Fleiss JL, Cohen J, Everitt BS. Large Sample Standard Errors of Kappa and Weighted Kappa. Psychol Bull (1969) 72(5):323–7. doi:10.1037/h0028106

75. Cohen J. A Coefficient of Agreement for Nominal Scales. Educ Psychol Meas (1960) 20(1):37–46. doi:10.1177/001316446002000104

76. Mikolov T, Chen K, Corrado G, Dean J. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781 (2013).

77. Zhang D, Xu H, Su Z, Xu Y. Chinese Comments Sentiment Classification Based on Word2vec and SVMperf. Expert Syst Appl (2015) 42(4):1857–63. doi:10.1016/j.eswa.2014.09.011

78. Hochreiter S, Schmidhuber J. Long Short-Term Memory. Neural Comput (1997) 9(8):1735–80. doi:10.1162/neco.1997.9.8.1735

79. Li Y, Wang X, Xu P. Chinese Text Classification Model Based on Deep Learning. Future Internet (2018) 10(11):113. doi:10.3390/fi10110113

80. Chang C, Masterson M. Using Word Order in Political Text Classification with Long Short-Term Memory Models. Polit Anal (2020) 28(3):395–411. doi:10.1017/pan.2019.46

81. Fei R, Yao Q, Zhu Y, Xu Q, Li A, Wu H, et al. Deep Learning Structure for Cross-Domain Sentiment Classification Based on Improved Cross Entropy and Weight. Scientific Programming (2020) 2020. doi:10.1155/2020/3810261

82. Yu B, Chen W, Zhong Q, Zhang H. Specular Highlight Detection Based on Color Distribution for Endoscopic Images. Front Phys (2021) 8:575. doi:10.3389/fphy.2020.616930

83. Lin D, Ma B, Jiang M, Xiong N, Lin K, Cao D. Social Network Rumor Diffusion Predication Based on Equal Responsibility Game Model. IEEE Access (2018) 7:4478–86.

84. Liu C, Zhan X-X, Zhang Z-K, Sun G-Q, Hui PM. How Events Determine Spreading Patterns: Information Transmission via Internal and External Influences on Social Networks. New J Phys (2015) 17(11):113045. doi:10.1088/1367-2630/17/11/113045

85. Huang R, Sun X. Weibo Network, Information Diffusion and Implications for Collective Action in China. Inf Commun Soc (2014) 17(1):86–104. doi:10.1080/1369118x.2013.853817

86. Sarıyüce AE, Gedik B, Jacques-Silva G, Wu K-L, Çatalyürek ÜV. Incremental K-Core Decomposition: Algorithms and Evaluation. VLDB J (2016) 25(3):425–47. doi:10.1007/s00778-016-0423-8

87. Dorogovtsev SN, Goltsev AV, Mendes JF. K-core Organization of Complex Networks. Phys Rev Lett (2006) 96:040601. doi:10.1103/PhysRevLett.96.040601

88. Alvarez-Hamelin JI, Dall'Asta L, Barrat A, Vespignani A. K-core Decomposition: A Tool for the Visualization of Large Scale Networks. arXiv preprint cs/0504107 (2005).

89. Newman MEJ. The Structure and Function of Complex Networks. SIAM Rev (2003) 45(2):167–256. doi:10.1137/s003614450342480

90. Bastian M, Heymann S, Jacomy M. Gephi: An Open Source Software for Exploring and Manipulating Networks. In: Third International AAAI Conference on Weblogs and Social Media (2009).

91. Grandjean M. Gephi: Introduction to Network Analysis and Visualisation (2015).

92. Fruchterman TMJ, Reingold EM. Graph Drawing by Force-Directed Placement. Softw Pract Exper (1991) 21(11):1129–64. doi:10.1002/spe.4380211102

93. Sharpe D. Chi-square Test Is Statistically Significant: Now What? Pract Assess Res Eval (2015) 20(1):8.

94. Kerby DS. The Simple Difference Formula: An Approach to Teaching Nonparametric Correlation. Compr Psychol (2014) 3:11. doi:10.2466/11.it.3.1

95. Nelson JL, Taneja H. The Small, Disloyal Fake News Audience: The Role of Audience Availability in Fake News Consumption. New Media Soc (2018) 20(10):3720–37. doi:10.1177/1461444818758715

96. Crane R, Sornette D. Robust Dynamic Classes Revealed by Measuring the Response Function of a Social System. Proc Natl Acad Sci (2008) 105(41):15649–53. doi:10.1073/pnas.0803685105

97. DiFonzo N, Bordia P. Rumor Psychology: Social and Organizational Approaches. American Psychological Association (2007). doi:10.1037/11503-000

98. Fath BP, Fiedler A, Li Z, Whittaker DH. Collective Destination Marketing in China: Leveraging Social Media Celebrity Endorsement. Tourism Anal (2017) 22(3):377–87. doi:10.3727/108354217x14955605216113

99. Djafarova E, Rushworth C. Exploring the Credibility of Online Celebrities' Instagram Profiles in Influencing the Purchase Decisions of Young Female Users. Comput Hum Behav (2017) 68:1–7. doi:10.1016/j.chb.2016.11.009

100. Malik A, Khan ML, Quan-Haase A. Public Health Agencies Outreach through Instagram during the COVID-19 Pandemic: Crisis and Emergency Risk Communication Perspective. Int J Disaster Risk Reduction (2021) 61:102346. doi:10.1016/j.ijdrr.2021.102346

101. George J, Gerhart N, Torres R. Uncovering the Truth about Fake News: A Research Model Grounded in Multi-Disciplinary Literature. J Management Inf Syst (2021) 38(4):1067–94. doi:10.1080/07421222.2021.1990608

102. Zajonc RB. Social Facilitation. Science (1965) 149(3681):269–74. doi:10.1126/science.149.3681.269

103. Luqiu LR, Schmierbach M, Ng Y-L. Willingness to Follow Opinion Leaders: A Case Study of Chinese Weibo. Comput Hum Behav (2019) 101:42–50. doi:10.1016/j.chb.2019.07.005

104. Zhao Y, Kou G, Peng Y, Chen Y. Understanding Influence Power of Opinion Leaders in E-Commerce Networks: An Opinion Dynamics Theory Perspective. Inf Sci (2018) 426:131–47. doi:10.1016/j.ins.2017.10.031

105. Wang Z, Liu H, Liu W, Wang S. Understanding the Power of Opinion Leaders' Influence on the Diffusion Process of Popular Mobile Games: Travel Frog on Sina Weibo. Comput Hum Behav (2020) 109:106354. doi:10.1016/j.chb.2020.106354

106. Granovetter MS. The Strength of Weak Ties. Am J Sociol (1973) 78(6):1360–80. doi:10.1086/225469

107. Wei J, Bu B, Guo X, Gollagher M. The Process of Crisis Information Dissemination: Impacts of the Strength of Ties in Social Networks. Kybernetes (2014). doi:10.1108/k-03-2013-0043

108. Petróczi A, Nepusz T, Bazsó F. Measuring Tie-Strength in Virtual Social Networks. Connections (2007) 27(2):39–52.

109. Sobkowicz P. Whither Now, Opinion Modelers? Front Phys (2020). p. 461.

110. Lai G, Wong O. The Tie Effect on Information Dissemination: The Spread of a Commercial Rumor in Hong Kong. Social Networks (2002) 24(1):49–75. doi:10.1016/s0378-8733(01)00050-8

Keywords: fake news, stance detection, deep learning, debunking, refuter, social relationship network

Citation: Wang X, Chao F, Ma N and Yu G (2022) Exploring the Effect of Spreading Fake News Debunking Based on Social Relationship Networks. Front. Phys. 10:833385. doi: 10.3389/fphy.2022.833385

Received: 11 December 2021; Accepted: 18 February 2022;
Published: 26 April 2022.

Edited by:

Matjaž Perc, University of Maribor, Slovenia

Reviewed by:

Marija Mitrovic Dankulov, University of Belgrade, Serbia
Valerio Restocchi, University of Edinburgh, United Kingdom

Copyright © 2022 Wang, Chao, Ma and Yu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Guang Yu, yug@hit.edu.cn
