VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding
MLA
Xu, Hu, et al. VLM: Task-Agnostic Video-Language Model Pre-Training for Video Understanding. 2021. EBSCOhost, widgets.ebscohost.com/prod/customlink/proxify/proxify.php?count=1&encode=0&proxy=&find_1=&replace_1=&target=https://search.ebscohost.com/login.aspx?direct=true&site=eds-live&scope=site&db=edsarx&AN=edsarx.2105.09996&authtype=sso&custid=ns315887.
APA
Xu, H., Ghosh, G., Huang, P.-Y., Arora, P., Aminzadeh, M., Feichtenhofer, C., Metze, F., & Zettlemoyer, L. (2021). VLM: Task-agnostic video-language model pre-training for video understanding. arXiv:2105.09996.
Chicago
Xu, Hu, Gargi Ghosh, Po-Yao Huang, Prahal Arora, Masoumeh Aminzadeh, Christoph Feichtenhofer, Florian Metze, and Luke Zettlemoyer. 2021. “VLM: Task-Agnostic Video-Language Model Pre-Training for Video Understanding.” http://widgets.ebscohost.com/prod/customlink/proxify/proxify.php?count=1&encode=0&proxy=&find_1=&replace_1=&target=https://search.ebscohost.com/login.aspx?direct=true&site=eds-live&scope=site&db=edsarx&AN=edsarx.2105.09996&authtype=sso&custid=ns315887.