Multimodal Matching Transformer for Live Commenting
Automatic live commenting aims to provide real-time comments on videos for viewers.
It encourages user engagement on online video sites, and is also a good benchmark for video-to-text generation.
Recent work on this task adopts encoder-decoder models to generate comments.
However, these methods do not explicitly model the interaction between videos and comments, so they tend to generate generic, popular comments that are often irrelevant to the videos.
In this work, we aim to improve the relevance between live comments and videos by explicitly modeling the interactions among the different modalities.
To this end, we propose a multimodal matching transformer to capture the relationships among comments, vision, and audio.
The proposed model builds on the transformer framework and iteratively learns attention-aware representations for each modality.
We evaluate the model on a publicly available live commenting dataset.
Experiments show that the multimodal matching transformer outperforms state-of-the-art methods.
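To make the cross-modal attention idea concrete, the following is a minimal sketch, not the authors' implementation: one block in which the comment (text) representation attends to the concatenated video and audio features, as would be stacked to obtain iterative, attention-aware representations. All names, dimensions, and sequence lengths here are hypothetical placeholders.

```python
# Minimal sketch of cross-modal attention among text, vision, and audio.
# Hypothetical dimensions and names; not the paper's actual architecture.
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    """One iteration: a modality attends to the other modalities' features."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query, context):
        # query:   (batch, len_q, dim) features of one modality (e.g. comment)
        # context: (batch, len_c, dim) features of the other modalities
        attended, _ = self.attn(query, context, context)
        return self.norm(query + attended)  # residual connection + layer norm

# Toy usage with random features standing in for encoded modalities.
batch, dim = 2, 256
text  = torch.randn(batch, 20, dim)   # comment tokens
video = torch.randn(batch, 16, dim)   # video frames
audio = torch.randn(batch, 30, dim)   # audio frames

block = CrossModalBlock(dim)
# The comment representation attends to the fused video-audio context;
# stacking such blocks yields iteratively refined, attention-aware features.
text_updated = block(text, torch.cat([video, audio], dim=1))
print(text_updated.shape)  # torch.Size([2, 20, 256])
```

In a full matching model, analogous blocks would update the vision and audio representations against the other modalities, and the resulting features would be scored against candidate comments; this sketch only illustrates the attention mechanism itself.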