
The MediaMill TRECVID 2006 semantic video search engine

Authors:
Snoek, C.G.M.
van Gemert, J.C.
Gevers, Th.
Huurnink, B.
Koelma, D.C.
van Liempt, M.
de Rooij, O.
van de Sande, K.E.A.
Seinstra, F.J.
Smeulders, A.W.M.
Thean, A.
Veenman, C.J.
Worring, M.
TNO Industrie en Techniek
Intelligent Sensory Information Systems (IVI, FNWI)
Source:
TREC Video Retrieval Evaluation, TRECVID 2006, 13-14 November 2006, Gaithersburg, MD, USA, Proceedings of the 4th TRECVID Workshop
Publication Year:
2006
Publisher:
Gaithersburg, MD, USA: National Institute of Standards and Technology, 2006.

Abstract

In this paper we describe our TRECVID 2006 experiments. The MediaMill team participated in two tasks: concept detection and search. For concept detection we use the MediaMill Challenge as our experimental platform. The MediaMill Challenge divides the generic video indexing problem into a visual-only, textual-only, early fusion, late fusion, and combined analysis experiment. We provide a baseline implementation for each experiment together with baseline results, which we made available to the TRECVID community. The Challenge package was downloaded more than 80 times, and we anticipate that it has been used by several teams for their 2006 submissions. Our Challenge experiments focus specifically on visual-only analysis of video (run id: B_MM). We extract image features at global, regional, and keypoint level, which we combine with various supervised learners. A late fusion approach over visual-only analysis methods using the geometric mean was our most successful run; with this run we surpass the Challenge baseline by more than 50%. Our concept detection experiments resulted in the best score for three concepts, namely desert, flag us, and charts. What is more, using LSCOM annotations, our visual-only approach generalizes well to a set of 491 concept detectors. To handle such a large thesaurus in retrieval, an engine is developed which automatically selects a set of relevant concept detectors based on text matching and ontology querying. The suggestion engine is evaluated as part of the automatic search task (run id: A-MM) and forms the entry point for our interactive search experiments (run id: A-MM). Here we experiment with query by object matching and with two browsers for interactive exploration: the CrossBrowser and the novel NovaBrowser. It was found that the NovaBrowser is able to produce the same results as the CrossBrowser, but with less user interaction.
Similar to previous years, our best interactive search runs yield top performance, ranking 2nd and 6th overall. Again a lot has been learned during this year's TRECVID campaign; we highlight the most important lessons at the end of this paper.
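The abstract's most successful concept-detection run fuses the outputs of several visual-only analysis methods with the geometric mean. As a minimal sketch of that fusion step (the function name, score ranges, and example values are illustrative assumptions, not taken from the paper):

```python
import math

def geometric_mean_fusion(scores):
    """Late fusion of per-method confidence scores for one shot/concept pair.

    `scores` holds one probability in (0, 1] per visual analysis method.
    Computed in log space so that multiplying many small scores does not
    underflow. Illustrative sketch only, not the paper's implementation.
    """
    if not scores or any(s <= 0.0 or s > 1.0 for s in scores):
        raise ValueError("expected non-empty scores in (0, 1]")
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# Hypothetical example: three detectors scoring the same shot for one concept.
fused = geometric_mean_fusion([0.9, 0.6, 0.7])
```

Compared with an arithmetic mean, the geometric mean penalizes a shot heavily when even one method assigns it a low score, so all fused methods must agree for a high combined rank.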

Details

Language:
English
Database:
OpenAIRE
Accession number:
edsair.dedup.wf.001..45464703270edfd2be5bf06e224a50d7