Lookup NU author(s): Dr Mohsen Naqvi, Professor Jonathon Chambers
Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
We investigate the problem of visual tracking of multiple human speakers in an office environment. In particular, we propose novel solutions to the following challenges: (1) robust and computationally efficient modeling and classification of the changing appearance of the speakers under a variety of lighting conditions and camera resolutions; (2) dealing with full or partial occlusions when multiple speakers cross or come into very close proximity; (3) automatic initialization of the trackers, or re-initialization when the trackers lose lock, e.g. due to limited camera views.

First, we develop new algorithms for appearance modeling of the moving speakers based on dictionary learning (DL), using an off-line training process. In the tracking phase, the histograms (coding coefficients) of the image patches derived from the learned dictionaries are used to generate the likelihood functions based on Support Vector Machine (SVM) classification. This likelihood function is then used in the measurement step of the classical particle filtering (PF) algorithm. To improve the computational efficiency of generating the histograms, a soft voting technique based on approximate Locality-constrained Soft Assignment (LcSA) is proposed to reduce the number of dictionary atoms (codewords) used for histogram encoding.

Second, an adaptive identity model is proposed to track multiple speakers whilst dealing with occlusions. This model is updated online using Maximum a Posteriori (MAP) adaptation, where the adaptation rate is controlled by the spatial relationship between the subjects.

Third, to enable automatic initialization of the visual trackers, we exploit audio information, namely the Direction of Arrival (DOA) angle derived from microphone array recordings. Such information provides, a priori, the number of speakers and constrains the search space for the speakers' faces.
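The LcSA encoding described above can be illustrated with a short sketch. This is not the authors' implementation; it is a minimal NumPy version of the general idea, assuming Gaussian weighting over each patch descriptor's k nearest dictionary atoms (with all other atoms set to zero) followed by average pooling into a histogram. The function name, parameters `k` and `beta`, and the pooling choice are illustrative assumptions.

```python
import numpy as np

def lcsa_encode(patches, dictionary, k=5, beta=10.0):
    """Approximate Locality-constrained Soft Assignment (LcSA) sketch.

    Each patch descriptor is soft-assigned to its k nearest dictionary
    atoms with Gaussian weights; all other atoms receive zero weight,
    which keeps the encoding cost low compared with full soft assignment.

    patches:    (n, d) array of local image-patch descriptors
    dictionary: (m, d) array of learned atoms (codewords)
    returns:    (m,) average-pooled histogram of coding coefficients
    """
    # Squared Euclidean distance between every patch and every atom.
    d2 = ((patches[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)

    codes = np.zeros_like(d2)
    nearest = np.argsort(d2, axis=1)[:, :k]     # k nearest atoms per patch
    rows = np.arange(patches.shape[0])[:, None]
    w = np.exp(-beta * d2[rows, nearest])       # Gaussian (locality) weights
    codes[rows, nearest] = w / w.sum(1, keepdims=True)  # normalize per patch

    return codes.mean(axis=0)                   # pool patches into a histogram
```

In the pipeline described in the abstract, such a histogram would then be scored by the trained SVM, whose output drives the likelihood in the particle filter's measurement step.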
The proposed system is tested on a number of sequences from three publicly available and challenging data corpora (AV16.3, EPFL pedestrian data set and CLEAR) with up to five moving subjects.
Author(s): Barnard M, Koniusz P, Wang W, Kittler J, Naqvi SM, Chambers JA
Publication type: Article
Publication status: Published
Journal: IEEE Transactions on Multimedia
Print publication date: 01/04/2014
Online publication date: 22/01/2014
Acceptance date: 27/11/2013
ISSN (print): 1520-9210
ISSN (electronic): 1941-0077