Robust Iterative Quantization for Efficient ℓp-norm Similarity Search

Guo, Yuchen, Ding, Guiguang, Han, Jungong and Jin, Xiaoming (2016) Robust Iterative Quantization for Efficient ℓp-norm Similarity Search. In: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence, pp. 3382-3388. ISBN 978-1-57735-771-1

Full text not available from this repository.

Iterative Quantization (ITQ) has been one of the most successful hashing-based nearest-neighbor search methods for large-scale information retrieval in the past few years, owing to its simplicity and superior performance. However, its performance degrades significantly on noisy data. Moreover, its applicability is limited because the distortion measure is restricted to the ℓ2 norm. In this paper, we propose an ITQ+ algorithm that enhances both the robustness and the generality of the original ITQ algorithm. Specifically, an ℓp,q-norm loss function is proposed to support ℓp-norm similarity search rather than only ℓ2-norm search. Although changing the loss function to the ℓp,q-norm makes our algorithm more robust and generic, it poses the challenge of minimizing the resulting orthogonality-constrained ℓp,q-norm objective, which is non-smooth and non-convex. To solve this problem, we propose a novel and efficient optimization scheme. Extensive experiments on benchmark datasets demonstrate that ITQ+ substantially outperforms the original ITQ algorithm, especially when searching for similar items in noisy data.
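To make the abstract's objective concrete, the following is a minimal sketch of an ℓp,q-norm quantization loss in the ITQ style. It assumes one common convention for the matrix ℓp,q-norm (row-wise ℓp-norms aggregated with an ℓq-norm); the paper's exact definition, code assignment, and optimization scheme may differ, and the function names here are illustrative, not from the paper.

```python
import numpy as np

def lpq_norm(E, p, q):
    # One common ℓp,q-norm convention for a residual matrix E:
    # take the ℓp-norm of each row, then the ℓq-norm of those values.
    # With p = q = 2 this reduces to the Frobenius norm used by ITQ.
    row_norms = np.sum(np.abs(E) ** p, axis=1) ** (1.0 / p)
    return np.sum(row_norms ** q) ** (1.0 / q)

def quantization_loss(X, R, p=1.0, q=1.0):
    # ITQ-style quantization distortion under an ℓp,q-norm:
    # rotate the (zero-centered, PCA-projected) data X by the
    # orthogonal matrix R, binarize to the nearest hypercube
    # vertex, and measure the residual. ITQ uses p = q = 2;
    # smaller p and q downweight large, noise-driven residuals.
    V = X @ R                  # rotated data
    B = np.sign(V)             # binary codes in {-1, +1}
    B[B == 0] = 1              # resolve sign(0) arbitrarily to +1
    return lpq_norm(B - V, p, q)
```

Minimizing this objective over orthogonal R is the non-smooth, non-convex problem the abstract refers to; the sketch only evaluates the loss, it does not implement the paper's optimization scheme.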

Item Type: Book Section
Subjects: G700 Artificial Intelligence
Department: Faculties > Engineering and Environment > Computer and Information Sciences
Depositing User: Becky Skoyles
Date Deposited: 03 Jan 2017 14:40
Last Modified: 12 Oct 2019 22:27
