Doctors often rely on medical images, along with exams, lab tests, and patient histories, to help them diagnose patients. But even the best vision-language models designed to interpret these images make mistakes. Sometimes they hallucinate. To address this problem, MBZUAI researchers have developed a new approach called MOTOR, a step towards making AI tools more accurate in clinical settings. MOTOR combines retrieval-augmented generation (RAG) with an algorithm called optimal transport: it retrieves clinically relevant images and text, ranks them by relevance, and feeds the best matches to a vision-language model for processing. The research, led by PhD student Mai A. Shaaban, was presented at #MICCAI2025. Read more about MOTOR here:
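For readers curious what optimal-transport-based reranking of retrieved candidates can look like, here is a minimal, self-contained sketch. The function names, feature shapes, and the simple Sinkhorn solver below are illustrative assumptions for a generic RAG pipeline, not MOTOR's actual implementation.

```python
# Minimal sketch: rerank retrieved candidates by optimal-transport cost to the query.
# All names and shapes are illustrative assumptions, not MOTOR's actual code.
import numpy as np

def sinkhorn_plan(a, b, cost, reg=0.1, n_iters=200):
    """Entropy-regularized OT: transport plan between histograms a (m,) and b (n,)."""
    K = np.exp(-cost / reg)                      # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]           # (m, n) transport plan

def ot_rerank(query_feats, candidate_feats_list, top_k=3):
    """Score each retrieved candidate by how cheaply its features transport onto the query's.

    query_feats:          (m, d) array, e.g. patch/token embeddings of the query image + report
    candidate_feats_list: list of (n_i, d) arrays, one per retrieved image/text candidate
    Returns indices of the top_k candidates (lower total transport cost = more relevant).
    """
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    costs = []
    for cand in candidate_feats_list:
        c = cand / np.linalg.norm(cand, axis=1, keepdims=True)
        cost = 1.0 - q @ c.T                                  # cosine-distance cost matrix
        a = np.full(len(q), 1.0 / len(q))                     # uniform mass on query features
        b = np.full(len(c), 1.0 / len(c))                     # uniform mass on candidate features
        plan = sinkhorn_plan(a, b, cost)
        costs.append(np.sum(plan * cost))                     # total transport cost
    return np.argsort(costs)[:top_k]
```

In a full RAG pipeline, the top-ranked images and reports returned by a reranker like this would be inserted into the vision-language model's prompt as grounding context before it generates an answer.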