23 May 2022
The internet today is a vast space overloaded with information intended for quick consumption. Many people seek information from online media sources every day, but this habit carries the risk of exposure to false information, fabricated arguments and fake news, especially on the subject of COVID-19, an evolving disease that is still relatively new in human history. Fact-checking and debunking are therefore considered necessary to correct and clarify misinformation.
A research project led by Dr Zhang Xinzhi, Assistant Professor in the Department of Journalism, is currently examining the fact-checking and debunking practices of professional communicators (such as health journalists, the public health sector, and professional organisations) on social media, and using an experimental design to evaluate the effectiveness of these clarification and debunking messages during the COVID-19 pandemic.
“I am interested in the factors influencing the success or failure of fact-checking messages by professional communicators, against the backdrop of what is known as ‘information disorder’ on social media,” says Dr Zhang, whose research focuses on how people receive, process, and engage with public information on digital media, and on how AI, digital media platforms, and big data technologies are changing news production and content delivery.
A global study of three regions
Titled “Why fact-checking fails? Factors influencing the effectiveness of corrective messages countering misinformation on social media: A comparison of Hong Kong, the United States, and the Netherlands”, the project examines evidence of communication breakdown in the three regions, studies current practices in clarifying and debunking misinformation, and investigates why fact-checking and debunking messages may backfire and fail to overturn misinformation.
Supported by the General Research Fund, the project is expected to be completed by August 2022. The research team comprises Dr Zhang and four co-investigators based in Europe and North America, forming a global cross-disciplinary group with expertise in computational social science, human-computer interaction, and sociolinguistics.
“We look at the Hong Kong SAR, the United States and the Netherlands because these are three digitally advanced societies with high social media penetration rates,” says Dr Zhang. “At the same time, there are substantial differences among the three societies in terms of their media landscape, culture and social values, which will make our research results more comprehensive and representative.”
Collaborative research among experts
The team have been drawing on a number of sources to triangulate their findings, including public reports from fact-checking centres and their social media platforms, posts and announcements from governments and the public health sector, and news articles and journalists’ public social media posts, in addition to comparative online survey experiments in Hong Kong, the United States and the Netherlands.
A particular focus has been placed on the subject of COVID-19. “We would like to find out the reasons and factors for the acceptance or denial of COVID-related debunking messages. Are the messages being communicated transparently, coherently, logically and in the right language?” To answer these questions, Dr Zhang and his teammates are running a series of online experiments to test hypotheses about how “message factors” (the use of language and logic, the source and its credibility, and distribution mechanisms) and “psychological factors” (readers’ psychological orientation) affect people’s attitudes and behavioural responses to digital content.
Getting messages right for better communication
Although the project is still in progress, the team have already made some insightful discoveries. Dr Zhang observed, for example, that when health journalists in the United States actively use the “@” function to mention and interact with other users (mostly elite and influential users) on social media, doing so may undermine user engagement in terms of likes and shares, whereas embedding multimedia content or traceable external sources (such as hyperlinks) makes a post more visible. The researchers have also found that the titles of social media posts by professional fact-checking organisations often contain clickbait elements that trigger more likes and shares; yet posts with clickbait receive fewer likes and shares when they are published by universities and mainstream media. Dr Zhang therefore argues for the careful use of the “@” function and clickbait in professional messages, and he highlights the importance of transparency and diversity when communicating with the public.
The project’s final findings will help government authorities, media organisations, journalists and technology companies design more effective messages and communication strategies to provide the public with correct and verified information, especially in the health domain.
As for the next stage, Dr Zhang and his colleagues plan to carry out further experiments and in-depth interviews to understand the nuanced patterns of people’s information seeking and processing in response to different message formats and distribution agents, from social media to emerging AI applications such as virtual reality channels and smart devices. Their work will help people in different societies discern truth and make informed judgments in an online world where misinformation is now so prevalent.