Learning Fast Sparsifying Transforms
Date Available: 2016-11-30
Type: software
Data Creator: Rusu, Cristian
Publisher: University of Edinburgh
Relation (Is Referenced By): https://arxiv.org/abs/1611.08230
Citation: Rusu, Cristian. (2016). Learning Fast Sparsifying Transforms, [software]. University of Edinburgh. https://doi.org/10.7488/ds/1573.

Description:
Given a dataset, the task of learning a transform that allows sparse representations of the data bears the name of dictionary learning. In many applications, these learned dictionaries represent the data much better than static, well-known transforms (Fourier, Hadamard, etc.). The main downside of learned transforms is that they lack structure and are therefore not computationally efficient, unlike their classical counterparts. This poses several difficulties, especially on power-limited hardware such as mobile devices, and discourages the application of sparsity techniques in such scenarios. In this paper we construct orthonormal and non-orthonormal dictionaries that are factorized as a product of a few basic transformations. In the orthonormal case, we solve exactly the dictionary update problem for one basic transformation, which can be viewed as a generalized Givens rotation, and then propose to construct orthonormal dictionaries as products of these transformations, guaranteeing their fast manipulation. We also propose a method to construct fast square but non-orthonormal dictionaries, factorized as a product of a few transforms that can be viewed as a further generalization of Givens rotations to the non-orthonormal setting. We show that the proposed transforms balance data representation performance and computational complexity very well. We also compare against classical fast transforms and learned general and orthonormal transforms.
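To illustrate why a dictionary factored as a product of a few basic rotations is fast to apply, the sketch below implements classical (plain, not the paper's generalized) Givens rotations. Each rotation touches only two coordinates, so applying g rotations costs O(g) operations instead of the O(n^2) of a dense n-by-n dictionary. All function names here are hypothetical illustrations, not the released software's API.

```python
import math

def apply_givens(x, i, j, theta):
    """Apply one Givens rotation G(i, j, theta) to the vector x in place.

    Only coordinates i and j change, so the cost is O(1) regardless of
    the dimension of x. A dictionary built as a product of g such
    rotations can therefore be applied in O(g) operations.
    """
    c, s = math.cos(theta), math.sin(theta)
    xi, xj = x[i], x[j]
    x[i] = c * xi - s * xj
    x[j] = s * xi + c * xj
    return x

def apply_transform(x, rotations):
    """Apply a product of Givens rotations (a fast orthonormal transform).

    `rotations` is a list of (i, j, theta) triples; applying them in
    sequence multiplies x by the product of the corresponding rotation
    matrices without ever forming a dense matrix.
    """
    for i, j, theta in rotations:
        apply_givens(x, i, j, theta)
    return x

# Orthonormality check: rotations preserve the Euclidean norm.
x = apply_transform([1.0, 2.0, 3.0, 4.0],
                    [(0, 1, math.pi / 4), (2, 3, math.pi / 3)])
```

The paper's generalized Givens rotations extend this idea (and further relax it in the non-orthonormal case) while keeping the same two-coordinate, constant-cost structure per factor.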