Fast Near-Duplicate Video Retrieval via Motion Time Series Matching
01 July 2012
This paper introduces a method for the efficient comparison and retrieval of near duplicates of a query video from a video database. The method generates video signatures by computing histograms of optical-flow orientations at feature points in uniformly sampled video frames and concatenating them over time to produce time series, which are then aligned and matched. Major incline matching, a data reduction and peak alignment method for time series, is adapted for faster performance. The resulting method is robust against a number of common transformations, including flipping, cropping, picture-in-picture, photometric changes, and the addition of noise and other artifacts. We demonstrate the efficacy of this approach through experimental results on the MUSCLE VCD 2007 dataset and a dataset derived from TRECVID 2009. The proposed method is shown to provide good precision at significantly higher speeds than those reported in the literature.

Index Terms-- video copy detection, video retrieval, optical flow, near duplicates, time series.

1. INTRODUCTION

The increasing quantity of videos shared online through sites such as YouTube has introduced numerous challenges in copyright enforcement and video search. Content-based video retrieval presents a potential solution to both of these problems. The task of content-based video retrieval is to determine whether a given query video has a duplicate in a set of videos [1]. Further complicating the task, query videos may be distorted in a variety of ways, including scaling, compression, cropping, camcording, photometric changes such as shifts in gamma, and the addition of artifacts such as captions or picture-in-picture.
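As a rough illustration of the signature construction described in the abstract, the sketch below samples frames uniformly from a video, tracks feature points between consecutive sampled frames, and bins the resulting optical-flow orientations into a per-frame histogram, concatenating the histograms over time into a time series. It assumes Shi-Tomasi corners and pyramidal Lucas-Kanade flow via OpenCV; the paper does not specify these particular choices, and the sampling step, number of feature points, and histogram bin count shown here are placeholder parameters, not values fixed by the method.

```python
import cv2
import numpy as np

def motion_signature(video_path, sample_step=5, n_bins=8, max_corners=200):
    """Sketch: per-frame histograms of optical-flow orientations at tracked
    feature points, concatenated over time into a (frames x bins) array.
    Detector, flow method, and parameters are illustrative assumptions."""
    cap = cv2.VideoCapture(video_path)
    histograms = []
    ok, prev = cap.read()
    if not ok:
        return np.empty((0, n_bins))
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        if frame_idx % sample_step != 0:
            continue  # uniform temporal sampling
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Detect feature points in the previous sampled frame (Shi-Tomasi).
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=7)
        hist = np.zeros(n_bins, dtype=float)
        if pts is not None:
            # Track the points into the current frame with pyramidal Lucas-Kanade.
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            good = status.ravel() == 1
            flow = (nxt[good] - pts[good]).reshape(-1, 2)
            if len(flow) > 0:
                # Histogram of flow orientations for this sampled frame.
                angles = np.arctan2(flow[:, 1], flow[:, 0])
                counts, _ = np.histogram(angles, bins=n_bins,
                                         range=(-np.pi, np.pi))
                hist = counts.astype(float)
                if hist.sum() > 0:
                    hist /= hist.sum()  # normalize to a distribution
        histograms.append(hist)
        prev_gray = gray
    cap.release()
    return np.vstack(histograms) if histograms else np.empty((0, n_bins))
```

Each column of the returned array can be read as a one-dimensional time series of orientation-bin weights over the sampled frames; series of this kind are what would subsequently be aligned and matched, for example by the major incline matching step mentioned in the abstract.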