# Relation-aware Hierarchical Attention Framework for Video Question Answering

- This is the repository of the RHA model. The paper was published at ICMR 2021; you can find it here.

- The code is modified from the baseline STAGE model. Some comments or code may be hard to read, since I have not yet had time to clean it up. If you have any questions, please open an issue.

If this work is helpful, please cite it as:

```bibtex
@article{li2021relation,
  title={Relation-aware Hierarchical Attention Framework for Video Question Answering},
  author={Li, Fangtao and Bai, Ting and Cao, Chenyu and Liu, Zihe and Yan, Chenghao and Wu, Bin},
  journal={arXiv preprint arXiv:2105.06160},
  year={2021}
}
```