MMPT’21: International Joint Workshop on Multi-Modal Pre-Training for Multimedia Understanding
- Bei Liu,
- Jianlong Fu,
- Shizhe Chen,
- Qin Jin,
- Alexander Hauptmann,
- Yong Rui
ICMR 2021
Pre-training has emerged as a powerful way to learn strong representations in many fields (e.g., natural language processing, computer vision). In the last few years, we have witnessed a large body of research on multi-modal pre-training that has achieved state-of-the-art performance on many multimedia tasks (e.g., image-text retrieval, video localization, speech recognition). In this workshop, we aim to gather researchers working on related topics for deeper and more insightful discussion. We also intend to attract more researchers to explore and investigate opportunities for designing and applying innovative pre-training models to multimedia tasks.