
Audiovisual media is becoming an increasingly dominant form of online information consumption. From firsthand, “in the wild” video footage of natural disasters to professionally edited news coverage of major political events, videos serve as rich sources of information for producing factual, grounded articles. Especially for actively unfolding events, grounding articles in video can help combat misinformation and provide journalists and analysts with tools to quickly synthesize new developments.
Research groups have independently begun addressing this challenge, leading to parallel yet disconnected efforts to define the research space. ACL 2025 hosted the first MAGMaR workshop, focused on Video Event Retrieval. This year’s iteration focuses on two primary areas: (1) the retrieval of multimodal content spanning text, images, audio, and video; and (2) retrieval-augmented generation, with an emphasis on multimodal retrieval and grounded generation. To further these goals, we are again hosting a shared task, extending it this year to full grounded article generation from multiple videos.
Relevant topics include document retrieval, multimodal retrieval, retrieval-augmented generation (RAG), multimodal RAG, multimodal question answering, and research on video, image, and audio understanding.
This workshop is organized in support of ACL's Special Interest Group on Image and Language (SIGIL).
The workshop will be a one-day hybrid event to allow remote participation, and will be co-located with ACL 2026 in San Diego, USA, on July 4th.
This shared task focuses on retrieving relevant videos and generating grounded reports that respond to information needs. Given a query describing a real-world current event, participating systems must identify pertinent videos from a large multilingual, multimodal collection and use that evidence to produce a coherent and informative written report.
There are two tracks:

1. Video retrieval: given a query describing a current event, return a ranked list of relevant videos from the multilingual, multimodal collection.
2. Report generation: given a query and the video collection, produce a coherent written report grounded in the retrieved video evidence.
Teams may submit to either track or both. Additional details on submission formats, evaluation, and task instructions are available in the shared task repository.
Submissions will be collected via Google Form:
https://docs.google.com/forms/d/1B_J_iJqisqmcOsNaL_K25hWV_13eF9oQ7xLuVinHd10
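To illustrate the task setup, the sketch below shows a minimal retrieve-then-generate baseline in Python. It is not the official pipeline or submission format: the toy video collection, the `retrieve` and `generate_report` helpers, and the use of text metadata in place of full video understanding are all illustrative assumptions; see the shared task repository for the actual data and submission formats.

```python
# Minimal retrieve-then-generate baseline sketch (illustrative only).
# Assumes each video is represented by text metadata such as a title,
# description, or automatic caption/transcript.
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical collection: video IDs mapped to text metadata.
videos = {
    "vid_001": "Flood waters rise in a coastal city after a tropical storm.",
    "vid_002": "Press conference announcing national election results.",
    "vid_003": "Rescue crews evacuate residents from flooded neighborhoods.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank videos by cosine similarity between the query and metadata."""
    ids = list(videos)
    # normalize_embeddings=True makes the dot product equal cosine similarity.
    doc_emb = model.encode([videos[i] for i in ids], normalize_embeddings=True)
    q_emb = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_emb @ q_emb
    order = np.argsort(-scores)[:k]
    return [ids[i] for i in order]

def generate_report(query: str, video_ids: list[str]) -> str:
    """Placeholder generation step: a real system would prompt a multimodal
    LLM with the retrieved videos as grounding evidence."""
    evidence = "\n".join(f"- {vid}: {videos[vid]}" for vid in video_ids)
    return f"Report on: {query}\nGrounded in:\n{evidence}"

if __name__ == "__main__":
    q = "flooding after a tropical storm"
    print(generate_report(q, retrieve(q)))
```

A competitive system would replace the text-metadata embeddings with multimodal (video, audio, text) representations and the placeholder generation step with a grounded multimodal language model.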
| Time | Program |
|---|---|
| 9:30 - 9:45 am | Welcome Remarks: Reno Kriz (Johns Hopkins University) |
| 9:45 - 10:30 am | Keynote 1: Nanyun (Violet) Peng (UCLA) |
| 10:30 - 11:00 am | Break |
| 11:00 am - 12:30 pm | Oral Presentations |
| 12:30 - 2:00 pm | Lunch |
| 2:00 - 3:30 pm | Poster Session |
| 3:30 - 4:00 pm | Break |
| 4:00 - 4:45 pm | Keynote 2: Chenliang Xu (University of Rochester) |
| 4:45 - 5:00 pm | Paper Awards and Closing |