October 19-20, 2025
Call for Papers
Multimodal systems are redefining the boundaries of AI, enabling models that can understand, generate, and act across language, vision, and beyond. These capabilities are unlocking powerful applications in robotics, autonomous driving, AI-generated content, and scientific discovery.
However, these powerful capabilities also introduce complex challenges related to safety, trustworthiness, and ethical deployment. For instance, multimodal large language models (MLLMs), which integrate diverse inputs and outputs, pose new safety risks, ranging from hallucinations and privacy violations to adversarial attacks and biased behavior. Traditional safeguards designed for text-based models often fail in the multimodal setting, leaving critical vulnerabilities in visual understanding and cross-modal alignment.
The Workshop on Safe and Trustworthy Multimodal AI Systems (SaFeMM-AI) at ICCV 2025 brings together the computer vision community to address these challenges and advance the development of safer, more robust, and more reliable multimodal models. We welcome works that propose novel methods for mitigating multimodal hallucinations, safeguarding user privacy, and defending against adversarial or jailbreak attacks. We also encourage submissions that address bias, fairness, ethical transparency, and interpretability, as well as new evaluation protocols or benchmarks for assessing safety and trustworthiness in MLLMs and agentic systems.
Paper Submission
Paper Notification
Camera-Ready Paper
Workshop Day
Our workshop focuses on advancing the development of multimodal AI systems that can robustly handle unsafe or adversarial inputs and consistently generate safe, reliable, and trustworthy outputs. Topics of interest include but are not limited to:
Please submit to our workshop at OpenReview. For more details and submission guidelines please check our Call for Papers document.
The workshop schedule will be available soon.
Stay up to date by following us on @SaFeMMAI.
For any inquiries, feel free to reach out to us via email at safemm.ai.workshop@gmail.com. You may also contact the organizers directly: Carlos Hinojosa and Yinpeng Dong.
We are seeking sponsors to help fund travel grants, the best paper award, and other workshop initiatives. If you or your organization is interested in supporting SaFeMM-AI, please reach out to the organizing team via email (safemm.ai.workshop@gmail.com).