Call for Papers
Modern communication no longer relies solely on classic media such as newspapers or television; it increasingly takes place over social networks, in real time, and with live interactions among users. The increase in the speed and volume of available information, however, has also led to a growth in the amount and quality of misleading content, disinformation and propaganda. Consequently, the fight against disinformation, in which news agencies and NGOs (among others) engage on a daily basis to prevent citizens’ opinions from being distorted, has become even more crucial and demanding, especially with regard to sensitive topics such as politics, health and religion.
Disinformation campaigns leverage, among other means, market-ready AI-based tools for content creation and modification: hyper-realistic visual, speech, textual and video content has emerged under the collective name of “deepfakes”, undermining the perceived credibility of media content. It is therefore all the more crucial to counter these advances by devising new analysis tools that can detect synthetic and manipulated content, are accessible to journalists and fact-checkers, are robust and trustworthy, and possibly build on AI to achieve greater performance.
Future research on multimedia disinformation detection relies on combining different modalities and on adopting the latest advances in deep learning approaches and architectures. These developments raise new challenges and questions that need to be addressed in order to reduce the effects of disinformation campaigns. The workshop, in its second edition, welcomes contributions related to different aspects of AI-powered disinformation detection, analysis and mitigation.
Topics of interest include but are not limited to:
- Disinformation detection in multimedia content (e.g., video, audio, texts, images)
- Multimodal verification methods
- Synthetic and manipulated media detection
- Multimedia forensics
- Disinformation spread and effects in social media
- Analysis of disinformation campaigns in societally sensitive domains
- Robustness of media verification against adversarial attacks and real-world complexities
- Fairness and non-discrimination of disinformation detection in multimedia content
- Explaining disinformation and disinformation detection technologies to non-expert users
- Temporal and cultural aspects of disinformation
- Dataset sharing and governance in AI for disinformation
- Datasets for disinformation detection and multimedia verification
- Open resources, e.g., datasets, software tools
- Multimedia verification systems and applications
- System fusion, ensembling and late fusion techniques
- Benchmarking and evaluation frameworks
The workshop is supported under the H2020 project AI4Media “A European Excellence Centre for Media, Society and Democracy”, and the Horizon Europe project vera.ai “VERification Assisted by Artificial Intelligence”.
Paper submission due: extended until March 14, 2023
Acceptance notification: March 31, 2023
Camera-ready papers due: April 20, 2023
Workshop @ ACM ICMR 2023: June 12, 2023
Roberto Caldelli (CNIT, Florence, Italy; Universitas Mercatorum, Rome, Italy)
10:30 – 11:00 (EET)
Coffee break and networking
11:00 – 11:50 (EET)
Session 1: AI for Audio Analysis
- Synthetic speech detection through audio folding
Davide Salvi, Paolo Bestagini, Stefano Tubaro
- SpoTNet: A spoofing-aware transformer network for effective synthetic speech detection
Awais Khan, Khalid Mahmood Malik
11:50 – 13:00 (EET)
Session 2: Improving AI generalization
- Autoencoder-based data augmentation for deepfake detection
Dan-Cristian Stanciu, Bogdan Ionescu
- Improving synthetically generated images detection in cross-concept settings
Pantelis Dogoulis, Giorgos Kordopatis-Zilos, Ioannis Kompatsiaris, Symeon Papadopoulos
- Synthetic misinformers: Generating and combating multimodal misinformation
Stefanos-Iordanis Papadopoulos, Christos Koutlis, Symeon Papadopoulos, Panagiotis Petrantonakis
13:00 – 14:00 (EET)
Lunch break and networking
14:00 – 15:00 (EET)
Keynote 2: “Controllable image generation and manipulation”
Ioannis Patras (Queen Mary University of London, London, United Kingdom)
15:00 – 15:40 (EET)
Session 3: AI for (Dis-)Information Analysis
- In the spotlight: The Russian government’s use of official Twitter accounts to influence discussions about its war in Ukraine
- Examining European press coverage of the no-vax movement: An NLP framework
David Alonso del Barrio, Daniel Gatica-Perez
15:40 – 16:00 (EET)
Coffee break and networking
16:00 – 16:45 (EET)
Open Discussion on MAD Challenges and Opportunities
16:45 – 17:00 (EET)
When preparing your submission, please adhere strictly to the ACM ICMR 2023 instructions to ensure an appropriate reviewing process and inclusion in the ACM Digital Library proceedings.
MAD’23 paper requirements
Submissions to the MAD workshop are expected to be long papers (8-page limit, plus additional pages for references) and to comply with a double-blind review process. Details on ensuring this compliance can be found on the website linked above.
Submissions are now closed; thanks to all contributors!
All papers are available online in the Proceedings of the 2nd ACM International Workshop on Multimedia AI against Disinformation on the ACM website.
Looking back at MAD’23 – and ahead: A recap of the 2nd ACM International Workshop on Multimedia AI against Disinformation is available on the vera.ai project website.
For any questions about the workshop, please contact the organizers via email at email@example.com.
The 2nd ACM International Workshop on Multimedia AI against Disinformation (MAD’23) will be organized with the ACM International Conference on Multimedia Retrieval (ACM ICMR 2023).
For any questions concerning the ACM ICMR 2023 conference, please contact the organizers via the conference website: https://icmr2023.org/.
About MAD 2023
The 2nd ACM International Workshop on Multimedia AI against Disinformation (MAD’23) will be organized with the ACM International Conference on Multimedia Retrieval (ACM ICMR 2023) and is supported under the H2020 project AI4Media “A European Excellence Centre for Media, Society and Democracy”, and the Horizon Europe project vera.ai “VERification Assisted by Artificial Intelligence”.