Video abstraction based on fMRI-driven visual attention model

Han, Junwei, Li, Kaiming, Shao, Ling, Hu, Xintao, He, Sheng, Guo, Lei, Han, Jungong and Liu, Tianming (2014) Video abstraction based on fMRI-driven visual attention model. Information Sciences, 281. pp. 781-796. ISSN 0020-0255

Full text not available from this repository.
Official URL: http://dx.doi.org/10.1016/j.ins.2013.12.039

Abstract

The explosive growth of digital video data poses a profound challenge for succinct, informative, and human-centric representation of video content. This quickly evolving research topic is typically called ‘video abstraction’. We are motivated by the facts that the human brain is the end-evaluator of multimedia content and that the brain’s responses can quantitatively reveal its attentional engagement in comprehending video. We propose a novel video abstraction paradigm that leverages functional magnetic resonance imaging (fMRI) to monitor and quantify the brain’s responses to video stimuli; these responses are used to guide the extraction of visually informative segments from videos. Specifically, the brain regions most relevant to video perception and cognition are identified and form brain networks. The propensity for synchronization (PFS), derived from spectral graph theory, is then computed over these brain networks to yield benchmark attention curves from the fMRI-measured brain responses to a number of training video streams. These benchmark attention curves guide and optimize the combination of a variety of low-level visual features produced by the Bayesian surprise model. In particular, in the training stage, the optimization objective is to ensure that the learned attentional model correlates well with the brain’s responses and reflects the attention that viewers pay to video content. In the application stage, the attention curves predicted by the learned and optimized attentional model serve as an effective benchmark for abstracting testing videos. Evaluations on a set of video sequences from the TRECVID database demonstrate the effectiveness of the proposed framework.
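
The abstract names two computational components: per-feature Bayesian surprise and a training step that weights the surprise channels so their combination correlates with the fMRI-derived (PFS) benchmark attention curve. The Python sketch below illustrates both ideas under stated assumptions; the Gaussian belief model for surprise, the four toy feature channels, and the correlation-maximizing optimizer are illustrative stand-ins, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of: (1) Bayesian surprise per feature
# channel, as the KL divergence between a running Gaussian belief before and
# after each observation, and (2) learning channel weights that maximize the
# Pearson correlation between the combined surprise curve and a benchmark
# attention curve. All model choices here are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import pearsonr


def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) )."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)


def bayesian_surprise(channel, prior_var=1.0, obs_var=0.5):
    """Per-frame surprise: KL between the posterior and prior Gaussian belief
    over the channel mean, updated recursively (conjugate Gaussian model)."""
    mu, var = channel[0], prior_var
    surprise = np.zeros(len(channel))
    for t, x in enumerate(channel):
        post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
        post_mu = post_var * (mu / var + x / obs_var)
        surprise[t] = gaussian_kl(post_mu, post_var, mu, var)
        mu, var = post_mu, post_var
    return surprise


def learn_weights(feature_surprise, benchmark):
    """Non-negative channel weights whose linear combination best correlates
    with the benchmark (fMRI-derived) attention curve."""
    k = feature_surprise.shape[0]

    def neg_corr(w):
        combined = w @ feature_surprise
        r, _ = pearsonr(combined, benchmark)
        return -r

    res = minimize(neg_corr, x0=np.ones(k) / k, bounds=[(1e-6, None)] * k)
    return res.x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T = 300                                  # frames in a training clip
    channels = rng.normal(size=(4, T))       # toy low-level feature channels
    surprise = np.vstack([bayesian_surprise(c) for c in channels])
    benchmark = 0.7 * surprise[2] + 0.3 * rng.random(T)   # stand-in for a PFS curve
    w = learn_weights(surprise, benchmark)
    print("learned channel weights:", np.round(w, 3))
```

In the paper's terms, the stand-in `benchmark` array would be replaced by the PFS-based attention curve computed from fMRI responses to the same training clip, and the learned weighting would then predict attention curves for unseen test videos.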

Item Type: Article
Uncontrolled Keywords: Video abstraction; Visual attention; Functional magnetic resonance imaging; Propensity for synchronization; Bayesian surprise model
Subjects: C800 Psychology
G400 Computer Science
G700 Artificial Intelligence
Department: Faculties > Engineering and Environment > Computer and Information Sciences
Depositing User: Paul Burns
Date Deposited: 10 Jun 2015 10:35
Last Modified: 12 Oct 2019 22:30
URI: http://nrl.northumbria.ac.uk/id/eprint/22816
