Adaptive Contrastive Learning on Multimodal Transformer for Review Helpfulness Prediction
Modern Review Helpfulness Prediction systems depend on multiple modalities, typically text and images. Unfortunately, contemporary approaches pay little attention to refining the representations of cross-modal relations and tend to suffer from inferior optimization, which can harm the model's predictions in numerous cases. To overcome these issues, we propose Multimodal Contrastive Learning for the Multimodal Review Helpfulness Prediction (MRHP) problem, which concentrates on the mutual information between input modalities to explicitly model cross-modal relations. In addition, we introduce an Adaptive Weighting scheme for our contrastive learning approach to increase flexibility in optimization. Lastly, we propose a Multimodal Interaction module to address the unaligned nature of multimodal data, thereby helping the model produce more reasonable multimodal representations. Experimental results show that our method outperforms prior baselines and achieves state-of-the-art results on two publicly available benchmark datasets for the MRHP problem.
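The abstract does not spell out the loss, but the general shape of a cross-modal contrastive objective with adaptive weighting can be sketched as follows. This is a minimal illustration, not the authors' formulation: the InfoNCE-style loss, the cosine-similarity score, and the particular weighting rule (downweighting pairs the model already gets right) are all assumptions made for the example.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def adaptive_contrastive_loss(text_emb, image_emb, tau=0.07):
    """Illustrative InfoNCE-style loss over matched text/image pairs.

    text_emb[i] and image_emb[i] are assumed to come from the same review
    (a positive pair); all other images in the batch act as negatives.
    The per-sample weight (1 - p_pos) is a hypothetical adaptive scheme
    that emphasizes harder positives, not the paper's exact rule.
    """
    n = len(text_emb)
    losses = []
    for i in range(n):
        sims = [cosine(text_emb[i], image_emb[j]) / tau for j in range(n)]
        m = max(sims)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in sims]
        p_pos = exps[i] / sum(exps)   # softmax probability of the matched image
        weight = 1.0 - p_pos          # adaptive weight: easy pairs contribute less
        losses.append(weight * -math.log(p_pos))
    return sum(losses) / n
```

With well-aligned embeddings the matched image dominates the softmax, so both the log term and the adaptive weight shrink; misaligned batches produce a larger loss.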
Thong Nguyen, Xiaobao Wu, Anh Tuan Luu, Zhen Hai and Lidong Bing