Our projects aim to enhance research capabilities and deepen expertise to mitigate misinformation. They also develop insights, tools, policies, and best practices around the use of the Internet. They provide a platform for our collaborations with industry partners and catalyse strategic alliances across institutions both in Singapore and globally.
This five-year project brings together an interdisciplinary team to investigate digital information creation, dissemination and consumption through the lens of consumer behaviour.
The goal is to identify vulnerabilities in the digital information pipeline and to develop tools and strategies that instil digital information resilience in online citizens.
Research team: Chen Tsuhan, Lee Mong Li, Simon Chesterman, Mohan Kankanhalli, Noah Lim, Ho Teck Hua, Wynne Hsu, Kan Min-Yen, Kokil Jaidka, Terence Sim, Araz Taeihagh, Robby Tan, Anthony Tung, Xiao Xiaokui, and Audrey Yue.
There are multiple frameworks and definitions of digital wellbeing, such as
digital wellness, ICT use for wellbeing, and digital literacy and online safety, spanning
supranational, regional, national and local blueprints and applications, with no
common framework. In collaboration with the DQ Institute, this project develops a working definition of digital wellbeing, identifies its core pillars, and designs a global assessment framework to evaluate the impact of digital wellbeing.
Working Paper
Research team: Audrey Yue, Natalie Pang, Park Yuhyun (DQ Institute), Zhang Renwen, Lim Ee Peng (SMU)
This research seeks to advance the understanding of deepfake perception by adopting a communicative approach. Specifically, it seeks to fill the gap in empirical evidence about the psychological relationship users establish with deepfake technology. The work packages will observe user attitudes towards and knowledge of deepfake technologies, and characterise the impact of deepfakes on users, particularly youths.
Research team: María Teresa Soto Sanfiel, Terence Sim, Saifuddin Ahmed (NTU)
Decoding AI and Data Governance for Improving the Digital Health of Internet Platforms.
This project aims to understand the propagation of disinformation on digital
platforms, the role of policy frameworks in governing them, and the governance of data on
digital platforms, particularly in the context of fake news and deepfake content.
Research Article
Research team: Araz Taeihagh, M Ramesh, Angela Ke Li
Characterising WhatsApp “super spreaders” across the population.
The objective of this project is to establish metrics of potential misinformation, understand the base rate at which misinformation is shared on WhatsApp, and identify 'super spreaders' responsible for sending and receiving a large volume of misinformation.
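As a minimal illustrative sketch (not the project's actual methodology, and with all names and thresholds hypothetical), identifying 'super spreaders' can be framed as flagging users whose volume of flagged shares sits far above the population base rate:

```python
from collections import Counter

def find_super_spreaders(shares, threshold_factor=2):
    """Flag users whose count of flagged (potential-misinformation)
    shares greatly exceeds the population base rate.

    shares: list of (user_id, is_flagged) tuples.
    Returns user_ids whose flagged-share count is at least
    threshold_factor times the mean count among flagged sharers.
    """
    counts = Counter(user for user, flagged in shares if flagged)
    if not counts:
        return []
    mean = sum(counts.values()) / len(counts)
    return sorted(u for u, c in counts.items() if c >= threshold_factor * mean)

# Hypothetical share log: user "a" shares flagged content far more than others.
shares = [("a", True)] * 30 + [("b", True), ("c", True), ("c", False)]
print(find_super_spreaders(shares))  # prints ['a']
```

In practice a study like this would use richer signals (forward counts, group reach, temporal patterns) rather than a single mean-based cutoff; the sketch only shows the base-rate comparison in its simplest form.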
CTIC Webinar
Research team: Jean Liu, Maria De Iorio, Eddie Tong
Fact-checking with Evidence Retrieval and Knowledge Graph.
Automated fact checking is a complex task involving evidence extraction, evidence reasoning and entailment. This project performs fine-grained evidence extraction at the sentence level, combined with knowledge graphs for greater reasoning capacity. The evidence retrieval technology has been implemented in the LetsCheck platform.
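To illustrate the evidence-extraction step only, here is a toy sketch of sentence-level retrieval using word overlap. This is a hypothetical stand-in for the learned retrieval models a system like LetsCheck would actually use, and the function name and scoring are assumptions for illustration:

```python
def retrieve_evidence(claim, documents, top_k=2):
    """Rank candidate sentences by word overlap with the claim
    (a toy proxy for learned sentence-level evidence retrieval).

    claim: the statement to verify.
    documents: list of raw text strings, split into sentences on '.'.
    Returns up to top_k sentences with nonzero overlap, best first.
    """
    claim_words = set(claim.lower().split())
    sentences = [s.strip() for doc in documents
                 for s in doc.split(".") if s.strip()]
    scored = sorted(
        ((len(claim_words & set(s.lower().split())), s) for s in sentences),
        key=lambda pair: -pair[0],
    )
    return [s for score, s in scored[:top_k] if score > 0]

claim = "vaccines cause fever"
docs = ["Vaccines can cause mild fever. The sky is blue."]
print(retrieve_evidence(claim, docs))  # prints ['Vaccines can cause mild fever']
```

The retrieved sentences would then feed an entailment model (does the evidence support, refute, or say nothing about the claim?), with the knowledge graph supplying relations the surface text does not state.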
Research team: Wynne Hsu, Kokil Jaidka, Lee Mong Li
Quantifying and Recommending Diversified News for Fact Checking using Knowledge Graphs.
This project explores ways to quantitatively measure the perspectives of news articles and to assess the severity of information cocoons on social media. Algorithms will be developed to recommend relevant news articles with diversified perspectives, giving news consumers a clearer picture of events by comparing articles with different perspectives and examining different sources reporting the same news,
thereby helping them discern misinformation and misinterpretations of facts or half-truths.
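One simple way to quantify how diversified a set of recommended articles is (a hedged sketch, not the project's actual measure; the labels and normalisation are assumptions) is the normalised Shannon entropy of their perspective labels:

```python
import math

def perspective_diversity(labels):
    """Normalised Shannon entropy of perspective labels across a set
    of recommended articles: 0.0 means one-sided (an information
    cocoon), 1.0 means perspectives are maximally balanced.
    """
    if not labels:
        return 0.0
    total = len(labels)
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy

print(perspective_diversity(["left", "right"]))  # prints 1.0
print(perspective_diversity(["left", "left"]))   # prints 0.0
```

A recommender could then prefer candidate articles that raise this score for a user's reading history, rather than only maximising relevance.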
This project aims to explore the effects that hype (distortion and exaggeration) in scientific communication has on different audiences. Specifically, it observes the extent to which hype in science-related news affects its credibility and the reputation of scientists and science institutions. It also analyses the impact of hyped news on the well-being of audiences. The project focuses on hype in the areas of artificial intelligence, health communication, quantum physics and misinformation.
Research team: María Teresa Soto-Sanfiel, José Ignacio Latorre, Chong Chin-Wen
The COVID-19 pandemic has revealed the centrality of the Internet in disseminating timely information and building solidarity. As most countries in the world experience one form of lockdown or another, the Internet has also become the key arena for seeking connection. There is an urgent need, now more than ever, to better understand the significance and complexity of the human, technological and policy dimensions of the Internet in shaping information, trust and community.
Inoculation against Fake News? An experiment using fake news game in the context of COVID-19.
This pilot project draws on multi-disciplinary approaches spanning sociology, media psychology, lab experimentation, and game design. It adapts the biological theory of inoculation to study information perception and credibility persuasion. The game prototype is novel and has potential for public education.
Abstract of preliminary key findings report
Research team: Catherine Wong, Olivia Jensen, Elmie Nekmat
Achieving Digital Well-being: A Community-based approach.
This pilot project evaluates the impact of digital readiness on policy and social
capital, and aims to bridge the gap between policy frameworks and practical implementation to build support for
digital well-being in communities.
Research team: Natalie Pang, Carol Soon, Chew Han Ei
The COVID-19 outbreak demonstrated the importance of responsible public discourse. Many false rumours circulated on social media about the virus, inciting fear and paranoia. In some cases, fake news can also accelerate the spread of the virus, e.g. bad advice that causes the public not to seek treatment or that encourages risky behaviour.
Taken to extremes, rumours and false narratives, when left unchecked, can encourage xenophobic comments or finger-pointing, and deepen societal fissures.