October 19-20, 2025
Call for Papers

Multimodal systems are transforming AI by enabling models to understand and act across language, vision, and other modalities, powering advances in robotics, autonomous driving, and scientific discovery. Yet these capabilities raise serious safety and trustworthiness concerns, especially as traditional safeguards fall short in multimodal contexts. The Workshop on Safe and Trustworthy Multimodal AI Systems (SaFeMM-AI) at ICCV 2025 brings together the computer vision community to address challenges including, and beyond, hallucinations, privacy leakage, and jailbreak vulnerabilities, and to advance the development of safer, more robust, and more reliable multimodal models.
Archival track - will appear in ICCV proceedings
Full-Paper Submission
Full-Paper Notifications
Non-archival track - will NOT appear in ICCV proceedings
Short-Paper Submission
Camera-Ready Full-Paper (Archival Track)
Short-Paper Notifications
Camera-Ready Short-Paper (Non-Archival Track)
Workshop Day
Our workshop focuses on advancing the development of multimodal AI systems that can robustly handle unsafe or adversarial inputs and consistently generate safe, reliable, and trustworthy outputs. Topics of interest include but are not limited to:
Paper Submission Information for Short Papers (Active):
Submitted papers must be formatted using the ICCV 2025 Author Kit and are limited to four pages, including figures and tables; additional pages are allowed only for references. We strongly encourage authors to carefully follow the ICCV Author Guidelines, as our workshop adheres to the same formatting and submission policies as the main conference. Please also check our Call for Papers document, which applies to short-paper submissions as well, except for the page limit.
Accepted full papers will be included in the official ICCV 2025 workshop proceedings. Both accepted full papers and short papers will be presented in the workshop poster session, so at least one author must register for the workshop and present the poster.
Note: Full-paper (archival track) submission has closed; the submission platform is currently open for short-paper submissions only.
| Time | Event |
|---|---|
| 9:00 | Opening Remarks |
| 9:15 | Keynote Talk by Prof. Yao Qin |
| 9:45 | Orals/Spotlight 1 |
| 10:00 | Keynote Talk by Prof. Yarin Gal |
| 10:30 | Networking/Coffee Break |
| 11:00 | Keynote Talk by Prof. Florian Tramèr |
| 11:30 | Orals/Spotlight 2 |
| 11:45 | Orals/Spotlight 3 |
| 12:00 | Lunch Break |
| 13:30 | Poster Session |
| 15:00 | Networking/Coffee Break |
| 15:30 | Keynote Talk by Prof. Yoshua Bengio |
| 16:00 | Orals/Spotlight 4 |
| 16:15 | Orals/Spotlight 5 |
| 16:30 | Panel Discussion |
| 17:00 | Awards and Closing Remarks |
Yoshua Bengio is a world-leading expert in artificial intelligence, renowned for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award alongside Geoffrey Hinton and Yann LeCun. He is a Full Professor at Université de Montréal and the Founder and Scientific Advisor of Mila - Quebec AI Institute. He has received numerous awards, including the prestigious Killam Prize and Herzberg Gold Medal in Canada, a CIFAR AI Chair, Spain's Princess of Asturias Award, and the VinFuture Prize. He is a Fellow of both the Royal Society of London and the Royal Society of Canada, a Knight of the Legion of Honor of France, an Officer of the Order of Canada, and a member of the UN's Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology. In 2024, Yoshua Bengio was named one of TIME magazine's 100 most influential people in the world. Concerned about the social impact of AI, he actively contributed to the Montreal Declaration for the Responsible Development of Artificial Intelligence and currently chairs the International Scientific Report on the Safety of Advanced AI. In June 2025, he launched LawZero, a new non-profit AI safety research organization that prioritizes safety over commercial imperatives.
Yarin Gal leads the Oxford Applied and Theoretical Machine Learning (OATML) group. He is an Associate Professor of Machine Learning in the Department of Computer Science at the University of Oxford. He is also the Tutorial Fellow in Computer Science at Christ Church, Oxford, a Turing AI Fellow at the Turing Institute, and Director of Research at the UK Government's AI Security Institute (AISI, formerly the Frontier AI Taskforce). Prior to his move to Oxford, he was a Research Fellow in Computer Science at St Catharine's College at the University of Cambridge. He obtained his PhD from the Cambridge machine learning group, working with Prof. Zoubin Ghahramani and funded by the Google Europe Doctoral Fellowship.
Florian Tramèr is an Assistant Professor of Computer Science at ETH Zürich, where he leads the Secure and Private AI (SPY) Lab. He is a member of the Information Security Institute and ZISC, and an associated faculty member of the ETHZ AI Center. His research lies at the intersection of Computer Security, Machine Learning, and Cryptography. His work studies the worst-case behavior of deep learning systems from an adversarial perspective, aiming to understand and mitigate long-term threats to user safety and privacy. Under his leadership, the SPY Lab investigates the robustness and trustworthiness of machine learning systems, often using adversarial attacks to probe and improve their security.
Yao Qin is an Assistant Professor in the Department of Electrical and Computer Engineering, affiliated with the Department of Computer Science, at UC Santa Barbara, where she also co-leads the REAL AI initiative. She is also a Senior Research Scientist at Google DeepMind, working on Gemini Multimodal. She obtained her PhD in Computer Science at UC San Diego, advised by Prof. Garrison W. Cottrell. During her PhD, she also interned under the supervision of Geoffrey Hinton, Ian Goodfellow, and others.
Stay up to date by following us on @SaFeMMAI.
For any inquiries, feel free to reach out via email at safemm.ai.workshop@gmail.com, or contact the organizers directly: Carlos Hinojosa and Yinpeng Dong.