Dublin City University and partners’ participation in the INS and VTT tracks at TRECVID 2016
- Author
Marsden, M., Mohedano, E., McGuinness, K., Calafell, A., Giró-i-Nieto, X., O’Connor, N. E., Zhou, J., Azevedo, L., Daudert, T., Davis, B., Hürlimann, M., Afli, H., Du, J., Ganguly, D., Li, W., Way, A., and Smeaton, A. F.
- Subjects
Artificial intelligence, Image processing, Machine learning, Digital video, Imaging systems, Computational linguistics, Multimedia systems
- Abstract
Dublin City University participated with a consortium of colleagues from NUI Galway and Universitat Politècnica de Catalunya in two tasks at TRECVid 2016: Instance Search (INS) and Video to Text (VTT). For the INS task we developed a framework consisting of face detection and representation plus place detection and representation, with user annotation of the top-ranked videos. For the VTT task we ran 1,000 concept detectors from the VGG-16 deep CNN on 10 keyframes per video and submitted four runs for caption re-ranking: BM25, Fusion, word2vec, and a fusion of baseline BM25 and word2vec. Using the same pre-processing for caption generation, we used the open-source image-to-caption CNN-RNN toolkit NeuralTalk2 to generate a caption for each keyframe and then combined them.
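
As a rough illustration of the fused re-ranking idea mentioned above, the sketch below combines a BM25 lexical score with a word2vec-style embedding similarity to re-rank candidate captions against a video's detected concept labels. This is a minimal sketch only: the `rank_bm25` package, the helper names, the equal-weight fusion, and the input format are assumptions for illustration, not the authors' actual implementation.

```python
# Minimal sketch of fused caption re-ranking: BM25 lexical score + word2vec-style
# embedding similarity. Weights, helpers, and the BM25 library are assumptions,
# not the system described in the paper.
import numpy as np
from rank_bm25 import BM25Okapi  # assumed off-the-shelf BM25 implementation


def embedding_similarity(query_tokens, caption_tokens, word_vectors):
    """Cosine similarity between mean word vectors of query and caption."""
    def mean_vec(tokens):
        vecs = [word_vectors[t] for t in tokens if t in word_vectors]
        dim = len(next(iter(word_vectors.values())))
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

    q, c = mean_vec(query_tokens), mean_vec(caption_tokens)
    denom = np.linalg.norm(q) * np.linalg.norm(c)
    return float(q @ c / denom) if denom else 0.0


def fused_rerank(video_concepts, candidate_captions, word_vectors, alpha=0.5):
    """Re-rank candidate captions by fusing normalised BM25 and embedding scores.

    video_concepts: list of concept-label tokens detected for the video
                    (e.g. top VGG-16 concepts over its keyframes).
    """
    tokenized = [c.lower().split() for c in candidate_captions]
    bm25 = BM25Okapi(tokenized)
    bm25_scores = np.array(bm25.get_scores(video_concepts))
    if bm25_scores.max() > 0:                       # scale BM25 into [0, 1]
        bm25_scores = bm25_scores / bm25_scores.max()
    emb_scores = np.array([
        embedding_similarity(video_concepts, toks, word_vectors)
        for toks in tokenized
    ])
    fused = alpha * bm25_scores + (1 - alpha) * emb_scores
    order = np.argsort(-fused)                      # best caption first
    return [(candidate_captions[i], float(fused[i])) for i in order]
```

BM25 scores are unbounded, so they are scaled to [0, 1] here before being averaged with the cosine similarity; any such normalisation and weighting choice is illustrative rather than taken from the paper.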