SaFeMM-AI: Safe and Trustworthy Multimodal AI Systems

ICCV 2025 @ Honolulu, Hawaii

October 19-20, 2025

Introduction

Multimodal systems are redefining the boundaries of AI, enabling models that can understand, generate, and act across language, vision, and beyond. These capabilities are unlocking powerful applications in robotics, autonomous driving, AI-generated content, and scientific discovery.


However, these powerful capabilities also introduce complex challenges related to safety, trustworthiness, and ethical deployment. For instance, multimodal large language models (MLLMs), which integrate diverse inputs and outputs, pose new safety risks, ranging from hallucinations and privacy violations to adversarial attacks and biased behavior. Traditional safeguards designed for text-based models often fail in the multimodal setting, leaving critical vulnerabilities in visual understanding and cross-modal alignment.


The Workshop on Safe and Trustworthy Multimodal AI Systems (SaFeMM-AI) at ICCV 2025 brings together the computer vision community to address these challenges and advance the development of safer, more robust, and more reliable multimodal models. We welcome works that propose novel methods for mitigating multimodal hallucinations, safeguarding user privacy, and defending against adversarial or jailbreak attacks. We also encourage submissions that address bias, fairness, ethical transparency, and interpretability, as well as new evaluation protocols or benchmarks for assessing safety and trustworthiness in MLLMs and agentic systems.

Important Dates

  • June 19, 2025 (23:59 AoE)

    Paper Submission

  • July 11, 2025

    Paper Notification

  • August 11, 2025

    Camera-Ready Paper

  • October 19-20, 2025

    Workshop Day

Call for Papers

Our workshop focuses on advancing the development of multimodal AI systems that can robustly handle unsafe or adversarial inputs and consistently generate safe, reliable, and trustworthy outputs. Topics of interest include, but are not limited to:

  • Mitigating hallucinations in multimodal large language models (MLLMs)
  • Safeguarding user privacy in multimodal systems
  • Defending against adversarial and jailbreak attacks
  • Bias, fairness, ethical transparency, and interpretability
  • Evaluation protocols and benchmarks for assessing safety and trustworthiness in MLLMs and agentic systems

Please submit to our workshop through OpenReview. For more details and submission guidelines, please check our Call for Papers document.

Schedule

The workshop schedule will be available soon.

Speakers

Our amazing speakers will be announced soon.

Organizers

Main Organizers

  • Carlos Hinojosa, KAUST
  • Yinpeng Dong, Tsinghua University
  • Adel Bibi, University of Oxford
  • Jindong Gu, University of Oxford

Committee Members

  • Yichi Zhang, Tsinghua University
  • Andres Villa, KAUST
  • Chen Zhao, KAUST
  • Philip Torr, University of Oxford

Contact

Stay up to date by following us on @SaFeMMAI.

For any inquiries, feel free to reach out to us via email at safemm.ai.workshop@gmail.com. You may also contact the organizers directly: Carlos Hinojosa and Yinpeng Dong.

Sponsorship

We are seeking sponsors to help fund travel grants, the best paper award, and other workshop initiatives. If you or your organization is interested in supporting SaFeMM-AI, please reach out to the organizing team via email (safemm.ai.workshop@gmail.com).