Low-rank optimization on Tucker tensor varieties

Abstract

In the realm of tensor optimization, the low-rank Tucker decomposition is crucial for reducing the number of parameters and saving storage. We explore the geometry of Tucker tensor varieties—the set of tensors with bounded Tucker rank—which is notably more intricate than that of the well-explored matrix varieties. We give an explicit parametrization of the tangent cone of Tucker tensor varieties and leverage its geometry to develop provable gradient-related line-search methods for optimization on Tucker tensor varieties. To the best of our knowledge, this is the first work concerning geometry and optimization on Tucker tensor varieties. In practice, low-rank tensor optimization suffers from the difficulty of choosing a reliable rank parameter. To this end, we build on the established geometry and propose a Tucker rank-adaptive method that aims to identify an appropriate rank with guaranteed convergence. Numerical experiments on tensor completion show that the proposed methods achieve favorable recovery performance compared with other state-of-the-art methods. The rank-adaptive method performs best across various rank parameter selections and is indeed able to find an appropriate rank.
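To illustrate the low-rank Tucker decomposition that underlies this work, below is a minimal sketch (not taken from the paper) of a truncated higher-order SVD in NumPy: a tensor of Tucker rank at most (r1, ..., rd) is represented by a small core tensor and one factor matrix per mode, which is where the reduction in parameters and storage comes from. All function names here are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a truncated higher-order SVD (HOSVD), assuming a dense NumPy tensor.
import numpy as np

def unfold(tensor, mode):
    """Mode unfolding: move `mode` to the front and flatten the remaining modes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_multiply(tensor, matrix, mode):
    """n-mode product: multiply `tensor` by `matrix` along the given mode."""
    folded = np.moveaxis(tensor, mode, 0)
    out = (matrix @ folded.reshape(folded.shape[0], -1))
    out = out.reshape((matrix.shape[0],) + folded.shape[1:])
    return np.moveaxis(out, 0, mode)

def hosvd(tensor, ranks):
    """Truncated HOSVD: leading left singular vectors of each unfolding as factors,
    then the core is the projection of the tensor onto those factor subspaces."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)
    return core, factors

# Example: compress a 30 x 30 x 30 tensor to Tucker rank (5, 5, 5),
# storing 5^3 + 3 * 30 * 5 numbers instead of 30^3.
X = np.random.rand(30, 30, 30)
G, Us = hosvd(X, (5, 5, 5))
X_approx = G
for mode, U in enumerate(Us):
    X_approx = mode_multiply(X_approx, U, mode)
print("relative error:", np.linalg.norm(X - X_approx) / np.linalg.norm(X))
```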

Publication
arXiv:2311.18324