On 28 April 2025, the Centre for Trusted Internet and Community (CTIC) at the National University of Singapore hosted the inaugural Information Gyroscope Symposium on Mis-, Dis-, and Mal-Information (iGYRO SMDM) 2025. Held at the Innovation 4.0 Seminar Room, the symposium convened over 100 researchers, policymakers, technologists, legal scholars, and members of the public to address the escalating challenges of misinformation, disinformation, and malinformation (MDM) in digital ecosystems. Watch the highlights to relive the moments!
Keynote Highlights
Prof. Nakov delivering his keynote presentation
The symposium commenced with a keynote by Prof. Preslav Nakov from the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), who discussed "Factuality challenges in the era of large language models," emphasising the complexities of ensuring truthfulness in AI-generated content. View slides
Prof. Paterson delivering her keynote presentation
In the afternoon, Prof. Jeannie Paterson from the University of Melbourne delivered a keynote on "Legal and policy choices in making platforms liable for mis-, dis-, and mal-information," exploring the evolving legal frameworks and platform responsibilities in the digital information landscape. View slides
Panel Discussion: The Art & Science of Mitigating MDM
Panellists from left to right: Asst. Prof. Jaidka (moderator), Prof. Chen, Prof. Nakov, Prof. Paterson, Prof. Chesterman, and Prof. Lim
A dynamic panel, moderated by Asst. Prof. Kokil Jaidka, featured insights from Prof. Nakov, Prof. Paterson, Prof. Noah Lim, Prof. Tsuhan Chen, and Prof. Simon Chesterman. The discussion delved into interdisciplinary strategies for combating MDM, highlighting the necessity of integrating technological, behavioural, and policy-driven approaches.
Oral Presentations: Cutting-Edge Research
Oral presenters (left to right) during the Q&A session: Dr. Fan, Dr. Qi, Asst. Prof. Jaidka, Dr. Li, Dr. Loh, and Dr. Putra
The symposium featured innovative research from the iGYRO project, highlighting interdisciplinary approaches to tackling MDM. Presentations introduced tools like T-lens, an AI system that predicts user responses to AI-generated content, and SNIFFER, a multimodal large language model for detecting and explaining out-of-context misinformation. Other talks examined the effects of elite cues and political polarisation on misinformation susceptibility, legal perspectives on remedies for online harm, and gendered experiences of digital wellbeing, offering a holistic view of the challenges and solutions in today’s information landscape.
Real or fake? Exploring human perception of AI-generated content - Dr. Shaojing Fan presented research on how individuals discern the authenticity of AI-generated content, highlighting cognitive biases and the need for improved digital literacy. View slides
The elite effect on misinformation susceptibility - Asst. Prof. Kokil Jaidka examined how endorsements from elite figures influence public susceptibility to misinformation, suggesting strategies for countering such effects.
SNIFFER: A multimodal LLM for explainable out-of-context misinformation detection - Dr. Peng Qi introduced SNIFFER, a novel multimodal large language model designed to detect and explain out-of-context misinformation by analysing both textual and visual content. View slides
Political polarisation vs veracity and congruence as a determinant of believing and sharing the news - Dr. Wencong Li explored how political polarisation affects individuals' likelihood to believe and share news, regardless of its veracity.
The misapprehension of online harm remedies in misinformation and disinformation - Dr. Eka Putra discussed misconceptions surrounding legal remedies for online harms caused by misinformation, advocating for clearer policy frameworks. View slides
The paradox of women’s digital wellbeing on Reddit - Dr. Renae Loh analysed women's experiences on Reddit, revealing how the platform simultaneously serves as a space for empowerment and exposure to digital harms.
Demos & Posters
One of our researchers demonstrating his project to an attendee
During the lunch break, attendees engaged with a series of demos and poster presentations, offering hands-on experiences with tools and studies aimed at enhancing digital information resilience. These exhibits underscored the symposium’s commitment to practical solutions and interdisciplinary collaboration.
M3CHECK – A multilingual, multi-clue, multi-hop fact-checking demonstration system that enhances transparency and user understanding in verifying information accuracy. View poster
Disrupting deepfakes – Passports contain invisible anti-copying patterns that become visible whenever they are photocopied. Applying the same idea, this work embeds an invisible watermark in facial videos that produces obvious artefacts whenever the footage is used to generate deepfakes, protecting the video from manipulation.
Faithful logical reasoning with symbolic chain-of-thought – SymbCoT combines symbolic expressions with chain-of-thought to enhance logical reasoning. View poster
KALEIDO – An interpretable news retrieval system that diversifies news consumption by retrieving articles offering varied perspectives on a topic, aiming to counteract echo chambers. View poster
Diffusion facial forgery detection – Introducing DiFF, a large dataset of diffusion-generated facial forgeries, with a novel technique to improve detection accuracy. View poster
AI-generated image detection – A novel multi-cue aggregation network (MCAN) integrates spatial, frequency, and chromaticity cues, while chromaticity inconsistency (CI) reduces lighting effects and highlights noise differences. View poster
Shadowbans as a misinformation control tool – Auditing over 25,000 U.S. Twitter accounts to assess the effectiveness of shadowbans. View poster
RAMP – Fast influence maximisation under matroid constraints: a high-performance algorithm for selecting multiple influence seed sets in social networks, with applications to viral marketing and misinformation control. View poster
Robustness of LLM-based multi-agent collaboration – Examining how LLM-driven systems handle knowledge conflicts in collaborative programming. View poster
Building trust in the gen-AI era – Analysing global regulatory frameworks and AI tools to address MDM risks and improve digital resilience. View poster
The growing power of Big Tech in policymaking – Exploring how Big Tech influences policymaking through digital platforms and generative AI. View poster
Group photo of our iGYRO team
The iGYRO SMDM 2025 symposium marked a significant step in fostering a holistic understanding of MDM. By bridging diverse disciplines and perspectives, the event laid the groundwork for future collaborations and innovations in the pursuit of trustworthy digital information ecosystems.
This project is funded by the Ministry of Education, Singapore, under its MOE AcRF Tier 3 Grant. For more information, visit the iGYRO SMDM 2025 event page and iGYRO project page.