LMAOPT: Low-rank Matrix OPTimization

  • LMAOPT-v1.0 is a MATLAB code collection for solving three special cases of the following low-rank matrix optimization problem: $$ \Phi^{\star} := \min_{U\in\mathbb{R}^{m\times r}, V\in\mathbb{R}^{n\times r}}\Big\{ \Phi(U,V) := \phi(\mathcal{A}(UV^T) - B) \Big\}, $$ where $\phi$ is a proper, closed, and convex function from $\mathbb{R}^l$ to $\mathbb{R}\cup\{+\infty\}$, $\mathcal{A}$ is a linear operator from $\mathbb{R}^{m\times n}$ to $\mathbb{R}^l$, and $B\in\mathbb{R}^l$ is a given observation vector. We are mainly interested in the case $r \ll \min\{m, n\}$.
    Currently, we provide the code to solve three special cases of the above problem:
    • Quadratic loss: $\phi(\cdot) = \frac{1}{2}\Vert\cdot\Vert_2^2$;
    • Quadratic loss and symmetric case: $\phi(\cdot) = \frac{1}{2}\Vert\cdot\Vert_2^2$ and $U = V$; and
    • Nonsmooth loss with a tractable proximal operator: for instance, $\phi(\cdot) = \Vert\cdot\Vert_1$.
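    To make the problem setup concrete, here is a minimal NumPy sketch (not part of the MATLAB package) of the quadratic-loss case, with $\mathcal{A}$ taken to be an entry-sampling operator as in matrix completion; all names below are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, r, l = 6, 5, 2, 12

    # A sampling operator A: R^{m x n} -> R^l that picks l entries of X
    # (a matrix-completion-style linear map; chosen here for illustration).
    idx = rng.choice(m * n, size=l, replace=False)
    A = lambda X: X.ravel()[idx]

    # A ground-truth rank-r factorization and the observed vector B.
    U_true = rng.standard_normal((m, r))
    V_true = rng.standard_normal((n, r))
    B = A(U_true @ V_true.T)

    def Phi(U, V):
        """Quadratic loss: phi(.) = 0.5 * ||.||_2^2 applied to A(U V^T) - B."""
        return 0.5 * np.linalg.norm(A(U @ V.T) - B) ** 2

    print(Phi(U_true, V_true))  # 0.0 at the ground-truth factors
    ```

    Swapping the `0.5 * || . ||_2^2` line for `np.abs(...).sum()` gives the nonsmooth $\ell_1$ case.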

    LMAOPT is implemented by Quoc Tran-Dinh at the Department of Statistics and Operations Research (STAT&OR), University of North Carolina at Chapel Hill (UNC). This is joint work with Zheqi Zhang at STAT&OR, UNC.

    LMAOPT-v1.0 can be downloaded HERE. For further information, please read Readme3.txt.

    The theory and algorithms implemented in LMAOPT can be found in the following manuscript:
    1. Q. Tran-Dinh and Z. Zhang. Extended Gauss-Newton and Gauss-Newton ADMM algorithms for low-rank matrix optimization. STAT&OR Tech. Report UNC-REPORT-2016.a, (2016).
      Available at:
      Abstract: We develop a generic Gauss-Newton (GN) framework for solving a class of nonconvex optimization problems involving low-rank matrix variables. As opposed to the standard Gauss-Newton method, our framework allows one to handle a general smooth convex cost function via its surrogate. The main per-iteration complexity consists of inverting two rank-size matrices and at most six small matrix multiplications to compute a closed-form Gauss-Newton direction, plus a backtracking line search. We show, under mild conditions, that the proposed algorithm converges globally and locally to a stationary point of the original nonconvex problem. We also show empirically that the Gauss-Newton algorithm achieves much more accurate solutions than the well-studied alternating direction method (ADM). We then specialize our Gauss-Newton framework to the symmetric case, where ADM is not applicable without lifting variables, and prove its convergence. Next, we incorporate our Gauss-Newton scheme into the alternating direction method of multipliers (ADMM) to design a GN-ADMM algorithm for solving the low-rank optimization problem. We prove that, under mild conditions and a proper choice of the penalty parameter, GN-ADMM converges globally to a stationary point of the original problem. Finally, we apply our algorithms to several practical problems such as low-rank approximation, matrix completion, robust low-rank matrix recovery, and matrix recovery in quantum tomography. The numerical experiments provide encouraging results that motivate the use of nonconvex optimization.
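      The ADM baseline that the abstract compares against can be sketched in NumPy for the simplest setting, where $\mathcal{A}$ is the identity and $\phi$ is the quadratic loss: each subproblem in $U$ or $V$ is a linear least-squares problem with a closed form involving only an $r\times r$ (rank-size) inverse. This is a hedged illustration of alternating minimization, not the paper's Gauss-Newton direction or the LMAOPT code:

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      m, n, r = 30, 20, 3

      # An exactly rank-r target matrix M (A = identity for simplicity).
      M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

      # Alternating minimization for min_{U,V} 0.5 * ||U V^T - M||_F^2:
      # each update solves a least-squares subproblem in closed form,
      # requiring only the inverse of an r x r Gram matrix.
      U = rng.standard_normal((m, r))
      V = rng.standard_normal((n, r))
      for _ in range(50):
          U = M @ V @ np.linalg.inv(V.T @ V)
          V = M.T @ U @ np.linalg.inv(U.T @ U)

      err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
      print(err)  # small relative error on a noiseless rank-r target
      ```

      The paper's point is that, beyond this simple case, the GN direction handles a general smooth convex $\phi$ and the symmetric constraint $U = V$, where plain alternating updates are not directly applicable.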
  • DECOPT: DEcomposition OPTimization

    SCOPT: Self-Concordant OPTimization

    BMISolver: Optimization involving Bilinear Matrix Inequalities

    SCPCVX: Sequential Convex Programming - CVX Interface

    Software and codes that I use for my research

    I have been using many open-source and commercial software packages in my research and find them useful and valuable. Here are some of them.