Software
LMAOPT: Low-rank Matrix OPTimization
LMAOPTv1.0 is a MATLAB code collection for solving three special cases of the following low-rank matrix optimization problem:
$$
\Phi^{\star} := \min_{U\in\mathbb{R}^{m\times r}, V\in\mathbb{R}^{n\times r}}\Big\{ \Phi(U,V) := \phi(\mathcal{A}(UV^T) - B) \Big\},
$$
where $\phi$ is a proper, closed, and convex function from $\mathbb{R}^l$ to $\mathbb{R}\cup\{+\infty\}$, $\mathcal{A}$ is a linear operator from $\mathbb{R}^{m\times n}$ to $\mathbb{R}^l$, and $B\in\mathbb{R}^l$ is a given observation vector. Here, we are mainly interested in the case $r \ll \min\{m, n\}$.
Specifically, we currently provide code for the following three special cases:
- Quadratic loss: $\phi(\cdot) = \frac{1}{2}\Vert\cdot\Vert_2^2$;
- Quadratic loss, symmetric case: $\phi(\cdot) = \frac{1}{2}\Vert\cdot\Vert_2^2$ and $U = V$; and
- Nonsmooth loss with a tractable proximal operator: for instance, $\phi(\cdot) = \Vert\cdot\Vert_1$.
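As a concrete illustration of the first case, here is a minimal MATLAB sketch (not part of LMAOPT; the sizes, sampling operator, and noise level are assumptions for illustration) that evaluates the quadratic-loss objective $\Phi(U,V) = \frac{1}{2}\Vert\mathcal{A}(UV^T) - B\Vert_2^2$ for a matrix-completion instance, where $\mathcal{A}$ samples a subset of entries:

% Minimal sketch (illustration only): quadratic-loss objective
% Phi(U,V) = 0.5*||A(U*V') - B||_2^2 for matrix completion,
% where the linear operator A samples the entries indexed by idx.
m = 50; n = 40; r = 3;                     % sizes with r << min(m, n)
U = randn(m, r); V = randn(n, r);          % low-rank factors
idx = randperm(m*n, round(0.3*m*n))';      % 30% of the entries observed
Aop = @(X) X(idx);                         % A: entry-sampling operator
B = Aop(U*V') + 0.01*randn(numel(idx), 1); % noisy observations B
Phi = 0.5*norm(Aop(U*V') - B)^2;           % objective value at (U, V)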
LMAOPT is implemented by Quoc Tran-Dinh at the Department of Statistics and Operations Research (STAT&OR), University of North Carolina at Chapel Hill (UNC).
This is joint work with Zheqi Zhang at STAT&OR, UNC.
DOWNLOAD
LMAOPTv1.0 can be downloaded HERE.
For further information, please read Readme3.txt.
REFERENCES
The theory and algorithms implemented in LMAOPT can be found in the following manuscript:
- Q. Tran-Dinh and Z. Zhang. Extended Gauss-Newton and Gauss-Newton ADMM algorithms for low-rank matrix optimization. STAT&OR Tech. Report UNC-REPORT-2016.a, 2016.
Available at:
http://arxiv.org/abs/1606.03358
Abstract:
We develop a generic Gauss-Newton (GN) framework for solving a class of nonconvex optimization problems involving low-rank matrix variables.
As opposed to the standard Gauss-Newton method, our framework allows one to handle a general smooth convex cost function via its surrogate.
The main per-iteration complexity consists of the inversion of two rank-size matrices and at most six small matrix multiplications to compute a closed-form Gauss-Newton direction, plus a backtracking line-search.
We show, under mild conditions, that the proposed algorithm converges globally and locally to a stationary point of the original nonconvex problem.
We also show empirically that the Gauss-Newton algorithm achieves much more accurate solutions than the well-studied alternating direction method (ADM).
We then specialize our Gauss-Newton framework to handle the symmetric case, where ADM is not applicable without lifting variables, and prove its convergence.
Next, we incorporate our Gauss-Newton scheme into the alternating direction method of multipliers (ADMM) to design a GN-ADMM algorithm for solving the low-rank optimization problem.
We prove that, under mild conditions and a proper choice of the penalty parameter, GN-ADMM globally converges to a stationary point of the original problem.
Finally, we apply our algorithms to several practical problems such as low-rank approximation, matrix completion, robust low-rank matrix recovery, and matrix recovery in quantum tomography.
The numerical experiments provide encouraging results that motivate the use of nonconvex optimization.
DECOPT: DEcomposition OPTimization

DECOPTv1.0 is a MATLAB software package for solving constrained convex optimization problems of the form:
$$
\begin{array}{ll}
\displaystyle\min_{\mathbf{x}\in\mathbb{R}^p, \mathbf{r}\in\mathbb{R}^n} & f(\mathbf{x}) + g(\mathbf{r}) \\
\text{s.t.} & \mathbf{A}\mathbf{x} - \mathbf{r} = \mathbf{b}, ~\mathbf{l}\leq \mathbf{x}\leq \mathbf{u},
\end{array}
$$
where $f$ and $g$ are two convex functions, $\mathbf{A}\in\mathbb{R}^{n\times p}$, $\mathbf{b}\in\mathbb{R}^n$, and $\mathbf{l}, \mathbf{u}\in\mathbb{R}^p$ are the lower and upper bounds on $\mathbf{x}$.
Here, we assume that $f$ and $g$ are proximally tractable, i.e., the proximal operators $\mathrm{prox}_{f}$ and $\mathrm{prox}_g$ are efficient to compute (see below).
DECOPT aims at solving this constrained convex optimization problem for any convex functions $f$ and $g$ whose proximal operators are provided.
The following special cases have been customized in DECOPT:
- Basis pursuit:
$$ \min_{\mathbf{x}}\Big\{ \Vert\mathbf{x}\Vert_1 : \mathbf{A}(\mathbf{x}) = \mathbf{b}, ~\mathbf{l} \leq \mathbf{x}\leq\mathbf{u} \Big\}.$$
- $\ell_1/\ell_2$ unconstrained LASSO problem:
$$ \min_{\mathbf{x}}\Big\{ \frac{1}{2}\Vert\mathbf{A}(\mathbf{x}) - \mathbf{b}\Vert_2^2 + \lambda\Vert\mathbf{x}\Vert_1\Big\}.$$
- $\ell_1/\ell_1$ convex problem:
$$ \min_{\mathbf{x}}\Big\{ \Vert\mathbf{A}(\mathbf{x}) - \mathbf{b}\Vert_1 + \lambda\Vert\mathbf{x}\Vert_1 \Big\}.$$
- Square-root $\ell_1/\ell_2$ LASSO problem:
$$ \min_{\mathbf{x}}\Big\{ \Vert\mathbf{A}(\mathbf{x}) - \mathbf{b}\Vert_2 + \lambda\Vert\mathbf{x}\Vert_1 \Big\}.$$
- $\ell_1/\ell_2$ - the basis pursuit denoising (BPDN) problem:
$$ \min_{\mathbf{x}}\Big\{ \Vert\mathbf{x}\Vert_1 : \Vert\mathbf{A}(\mathbf{x}) - \mathbf{b}\Vert_2 \leq \delta\Big\}.$$
- $\ell_2/\ell_1$ - the $\ell_1$-constrained LASSO problem:
$$ \min_{\mathbf{x}}\Big\{ \frac{1}{2}\Vert\mathbf{A}(\mathbf{x}) - \mathbf{b}\Vert_2^2 : \Vert\mathbf{x}\Vert_1 \leq \delta \Big\}.$$
Here, $\lambda > 0$ is a penalty parameter, $\delta > 0$ is a given noise level parameter,
$\mathbf{b}$ is the observed measurement vector, and $\mathbf{A}$ is a linear operator.
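For instance, the $\ell_1/\ell_2$ LASSO above fits the template with $f(\mathbf{x}) = \lambda\Vert\mathbf{x}\Vert_1$, $g(\mathbf{r}) = \frac{1}{2}\Vert\mathbf{r}\Vert_2^2$, and the constraint $\mathbf{A}\mathbf{x} - \mathbf{r} = \mathbf{b}$; both proximal operators are then available in closed form, which is what proximal tractability means here. A minimal MATLAB sketch of these two operators (illustration only, not DECOPT's internal API):

% Minimal sketch (illustration only): closed-form proximal operators for
% the LASSO splitting f(x) = lambda*||x||_1 and g(r) = 0.5*||r||_2^2,
% so that the constraint A*x - r = b recovers the LASSO objective.
lambda = 0.1;                                             % penalty (assumed)
prox_f = @(x, gam) sign(x).*max(abs(x) - gam*lambda, 0);  % soft-thresholding
prox_g = @(r, gam) r./(1 + gam);                          % scaling for 0.5*||r||_2^2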
DECOPT is developed by Quoc Tran-Dinh at the Laboratory for Information and Inference Systems (LIONS), EPFL, Lausanne, Switzerland.
This is joint work with V. Cevher at EPFL/LIONS.
DOWNLOAD
DECOPTv1.0 can be downloaded HERE.
For further information, please read Readme2.txt.
REFERENCES
The theory and algorithms implemented in DECOPT can be found in the following papers:
- Q. Tran-Dinh and V. Cevher. Constrained convex minimization via model-based excessive gap. Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS), Montreal, Canada, 2014.
Available at:
http://papers.nips.cc/paper/5574-constrained-convex-minimization-via-model-based-excessive-gap
Abstract:
We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct first-order primal-dual methods with optimal convergence rates on the primal objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-center selection strategy, our framework subsumes the augmented Lagrangian, alternating direction, and dual fast-gradient methods as special cases, where our rates apply.
- Q. Tran-Dinh and V. Cevher. A Primal-Dual Algorithmic Framework for Constrained Convex Minimization. LIONS Tech. Report EPFL-REPORT-199844, 2014.
Available at:
http://arxiv.org/abs/1406.5403
Abstract:
We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, augmented Lagrangian as well as the alternating direction method-of-multipliers methods as its special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates for all.
SCOPT: Self-Concordant OPTimization
SCOPTv1.0 - A MATLAB package for Composite Self-concordant (like) Minimization
SCOPT is a MATLAB implementation of proximal-gradient, proximal quasi-Newton, proximal Newton, and path-following interior-point algorithms for solving composite convex minimization problems involving self-concordant and self-concordant-like cost functions:
$$
\displaystyle\min_{\mathbf{x}\in\mathbb{R}^n}\left\{ F(\mathbf{x}) := f(\mathbf{x}) + g(\mathbf{x}) \right\},
$$
where $f$ is a self-concordant or self-concordant-like function, and $g$ is a possibly nonsmooth convex function.
- Briefly, the function $f$ is called self-concordant (with parameter $M_f > 0$) if $\vert\varphi'''(t)\vert \leq M_f\varphi''(t)^{3/2}$, where $\varphi(t) := f(\mathbf{x} + t\mathbf{u})$ for $\mathbf{x}\in\mathrm{dom}(f)$, $\mathbf{u}\in\mathbb{R}^n$, and $\mathbf{x} + t\mathbf{u}\in\mathrm{dom}(f)$.
Examples: $f(\mathbf{X}) := -\log(\det(\mathbf{X}))$ (for a symmetric positive definite matrix $\mathbf{X}$), $f(\mathbf{x},t) := -\log(t^2 - \Vert\mathbf{x}\Vert_2^2)$, and $f(\mathbf{x}) := -\sum_{i=1}^n\log(b_i - \mathbf{a}_i^T\mathbf{x})$ are all self-concordant.
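The following minimal MATLAB sketch (the data $\mathbf{a}$, $\mathbf{x}$, $\mathbf{u}$ and the one-term simplification are assumptions for illustration) numerically checks the defining inequality for a single term of the third example, where it holds with equality and $M_f = 2$:

% Minimal sketch: check |phi'''(0)| <= 2*phi''(0)^(3/2) for the barrier
% term phi(t) = -log(b - a'*(x + t*u)), i.e., self-concordance with M_f = 2.
a = randn(5, 1); x = randn(5, 1); u = randn(5, 1);  % assumed data
b = a'*x + 1;                  % choose b so that x is strictly feasible
c = b - a'*x; d = a'*u;        % then phi(t) = -log(c - d*t)
phi2 = d^2/c^2;                % phi''(0) >= 0
phi3 = 2*d^3/c^3;              % phi'''(0)
fprintf('lhs = %.4f, rhs = %.4f\n', abs(phi3), 2*phi2^(3/2));  % lhs = rhs here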
The function $g$ is usually assumed to have a proximal operator
$$
\mathrm{prox}_{g}(\mathbf{x}) := \mathrm{arg}\!\displaystyle\min_{\mathbf{z}\in\mathbb{R}^n}\left\{g(\mathbf{z}) + \frac{1}{2}\Vert\mathbf{z} - \mathbf{x}\Vert_2^2\right\},
$$
that is "efficient" to compute (i.e., given in closed form or computable by polynomial-time algorithms). If the proximal operator of $g$ can be computed efficiently, then we say that $g$ is tractably proximal.
SCOPT is developed by Quoc Tran-Dinh at the Laboratory for Information and Inference Systems (LIONS), EPFL, Lausanne, Switzerland.
This is joint work with A. Kyrillidis and V. Cevher at EPFL/LIONS.
SCOPT consists of several algorithms customized for solving the following convex optimization problems (but is not limited to them):
- Sparse Inverse Covariance Estimation: An instance of this problem class can be expressed as follows:
$$
\min_{\mathbf{X}\succ 0}\left\{ -\log(\det(\mathbf{X})) + \mathrm{trace}(\boldsymbol{\Sigma}\mathbf{X}) + \rho\Vert\mathrm{vec}(\mathbf{X})\Vert_1 \right\},
$$
where $\rho > 0$ is a penalty parameter (a minimal objective-evaluation sketch for this instance appears after this list).
Reference: J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, vol. 9, no. 3, pp. 432-441, 2008.
- Sparse Poisson Intensity Reconstruction: As an example, this problem has the following form:
$$
\min_{\mathbf{x}\geq 0}\left\{ \sum_{i=1}^m\mathbf{a}_i^T\mathbf{x} - \sum_{i=1}^m y_i\log(\mathbf{a}_i^T\mathbf{x}) + \rho\Vert\mathbf{x}\Vert_{\mathrm{TV}}\right\},
$$
where $y_i\in\mathbb{N}$, $\rho > 0$ is a penalty parameter, and $\Vert\cdot\Vert_{\mathrm{TV}}$ is the TV (total-variation) norm.
Reference: Z. T. Harmany, R. F. Marcia, and R. M. Willett. This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms - Theory and Practice. IEEE Trans. Image Processing, vol. 21, no. 3, pp. 1084-1096, 2012.
- Heteroscedastic LASSO: This problem is formulated as follows:
$$
\min_{\sigma > 0, \boldsymbol{\beta}\in\mathbb{R}^p}\left\{ -\log(\sigma) + \frac{1}{2n}\Vert\mathbf{X}\boldsymbol{\beta} - \sigma\mathbf{y}\Vert_2^2 + \rho\Vert\boldsymbol{\beta}\Vert_1\right\},
$$
where $\rho > 0$ is a penalty parameter.
Reference: N. Städler, P. Bühlmann, and S. van de Geer. $\ell_1$-penalization for mixture regression models. TEST, vol. 19, no. 2, pp. 209-256, 2010.
- Sparse logistic and multinomial logistic regression: This problem can be written in the following mathematical form:
$$
\min_{\mathbf{X}\in\mathbb{R}^{m\times p}}\left\{ \frac{1}{N}\sum_{i=1}^N\left[ \log\left(1 + \sum_{j=1}^m\exp(\mathbf{W}_{(j)}^T\mathbf{X}_{(i)})\right) - \sum_{j=1}^m y_i^{(j)}\mathbf{W}_{(j)}^T\mathbf{X}_{(i)}\right] + \rho\Vert\mathrm{vec}(\mathbf{X})\Vert_1 \right\},
$$
where $\mathbf{W}_{(j)}$ and $y_i^{(j)}$ are input data, and $\rho > 0$ is a penalty parameter.
Reference: B. Krishnapuram, M. Figueiredo, L. Carin, and A. Hartemink. Sparse multinomial logistic regression: Fast algorithms and generalization bounds. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), vol. 27, pp. 957-968, 2005.
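As referenced in the first item above, here is a minimal MATLAB sketch (the dimensions, data, and trial point are assumptions for illustration) that evaluates the sparse inverse covariance objective at a positive definite point:

% Minimal sketch: evaluate -log(det(X)) + trace(Sigma*X) + rho*||vec(X)||_1
% at a positive definite X; Sigma is a toy empirical covariance matrix.
p = 10; rho = 0.1;                        % assumed size and penalty
S = randn(2*p, p); Sigma = (S'*S)/(2*p);  % toy covariance (positive semidefinite)
X = inv(Sigma + rho*eye(p));              % some positive definite trial point
F = -log(det(X)) + trace(Sigma*X) + rho*norm(X(:), 1);  % objective value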

In addition to the problems mentioned above, SCOPT also contains code that can be customized to solve the following constrained convex optimization problem:
$$
g^{\star} := \min_{\mathbf{x}\in\Omega}g(\mathbf{x}),
$$
where $g$ is a convex function with a tractable proximal operator, and $\Omega$ is a nonempty, closed, and convex set in $\mathbb{R}^p$ endowed with a self-concordant barrier $f$ (i.e., $f$ is self-concordant with $M_f = 2$ and $\vert\varphi'(t)\vert \leq \sqrt{\nu}\,\varphi''(t)^{1/2}$ for a given $\nu > 0$, where $\varphi(t) := f(\mathbf{x} + t\mathbf{u})$).
Example: The clustering problem in unsupervised learning can be reformulated as the above constrained convex problem using the max-norm:
$$
\begin{array}{ll}
\min_{\mathbf{K},\mathbf{R}, \mathbf{L}}& \Vert\mathrm{vec}(\mathbf{K} - \mathbf{A})\Vert_1 \\
\textrm{s.t.} & \begin{bmatrix}\mathbf{R} & \mathbf{K}\\ \mathbf{K}^T & \mathbf{L}\end{bmatrix}\succeq 0, ~\mathbf{R}_{ii} \leq 1, \mathbf{L}_{ii}\leq 1, ~i=1,\dots, p.
\end{array}
$$
Reference: A. Jalali and N. Srebro. Clustering using max-norm constrained optimization. In Proc. of the International Conference on Machine Learning (ICML 2012), pp. 1-17, 2012.
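A minimal MATLAB sketch (the affinity matrix and the trial point are assumptions for illustration) that checks the max-norm constraints above and evaluates the $\ell_1$ objective:

% Minimal sketch: check feasibility of (K, R, L) for the max-norm
% constraints above and evaluate ||vec(K - A)||_1 for a toy affinity A.
p = 5; A = double(rand(p) > 0.5); A = max(A, A');  % toy symmetric affinity
K = 0.5*eye(p); R = eye(p); L = eye(p);            % a simple feasible point
Z = [R, K; K', L];                                 % the block matrix above
feasible = (min(eig(Z)) >= 0) && all(diag(R) <= 1) && all(diag(L) <= 1);
obj = norm(reshape(K - A, [], 1), 1);              % l1 objective value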
DOWNLOAD
SCOPTv1.0 can be downloaded HERE.
For further information, please read Readme.txt.
REFERENCES
The theory and algorithms implemented in SCOPT can be found in the following papers:

- Q. Tran-Dinh, A. Kyrillidis, and V. Cevher. A proximal Newton framework for composite minimization: Graph learning without Cholesky decomposition and matrix inversions. Proceedings of the 30th International Conference on Machine Learning (ICML), JMLR W&CP 28(2): 271-279, 2013.
Available at: https://arxiv.org/abs/1301.1459
- Q. Tran-Dinh, A. Kyrillidis, and V. Cevher. Composite Self-concordant Minimization. Journal of Machine Learning Research, 2014 (to appear).
Available at: http://arxiv.org/abs/1308.2867
Abstract:
We propose a variable metric framework for minimizing the sum of a self-concordant function and a possibly nonsmooth convex function, endowed with an easily computable proximal operator. We theoretically establish the convergence of our framework without relying on the usual Lipschitz gradient assumption on the smooth part. An important highlight of our work is a new set of analytic step-size selection and correction procedures based on the structure of the problem. We describe concrete algorithmic instances of our framework for several interesting applications and demonstrate them numerically on both synthetic and real data.

- Q. Tran-Dinh, A. Kyrillidis, and V. Cevher. An inexact proximal path-following algorithm for constrained convex minimization. SIAM J. Optimization, vol. 24, no. 4, pp. 1718-1745, 2014.
Available at: http://arxiv.org/abs/1311.1756
Abstract:
Many scientific and engineering applications feature nonsmooth convex minimization problems over convex sets. In this paper, we address an important instance of this broad class where we assume that the nonsmooth objective is equipped with a tractable proximity operator and that the convex constraint set affords a self-concordant barrier. We provide a new joint treatment of proximal and self-concordant barrier concepts and illustrate that such problems can be efficiently solved without lifting the problem dimensions, as in the disciplined convex optimization approach. We propose an inexact path-following algorithmic framework and theoretically characterize the worst-case analytical complexity of this framework when the proximal subproblems are solved inexactly. To show the merits of our framework, we apply its instances to both synthetic and real-world applications, where it shows advantages over standard interior-point methods. As a by-product, we describe how our framework can obtain points on the Pareto frontier of regularized problems with self-concordant objectives in a tuning-free fashion.
BMISolver: Optimization involving Bilinear Matrix Inequalities
BMIsolverv1.1 - a MATLAB software package for BMI optimization
BMIsolver is a MATLAB package for solving optimization problems with BMI (bilinear matrix inequality) constraints arising in feedback controller design (spectral abscissa, H2, H-infinity, and mixed H2/H-infinity optimization problems). It is implemented based on the methods proposed in [1] and [2], which are joint work with S. Gumussoy and W. Michiels. The code is developed by Quoc Tran-Dinh at the Department of Electrical Engineering (ESAT/SCD) and the Optimization in Engineering Center (OPTEC), KU Leuven, Belgium (under the supervision of Prof. M. Diehl). The current version requires YALMIP as a modeling language. We recommend SeDuMi as an SDP solver.
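To give a flavor of the problem class, here is a minimal sketch of posing a BMI in YALMIP (the toy system matrices and the specific stabilization BMI are assumptions for illustration; this is not BMIsolver's API):

% Minimal sketch: a static-state-feedback stabilization condition posed as
% a BMI in YALMIP; the constraint is bilinear in (P, K) via P*(A + B*K).
A = [0 1; -2 -1]; B = [0; 1];               % toy system data (assumed)
P = sdpvar(2, 2, 'symmetric');              % Lyapunov matrix variable
K = sdpvar(1, 2);                           % static feedback gain variable
Acl = A + B*K;                              % closed-loop system matrix
bmi = [P >= eye(2), Acl'*P + P*Acl <= -1e-3*eye(2)];  % BMI constraint set
% A plain SDP solver such as SeDuMi cannot handle 'bmi' directly; the
% convex-concave decomposition and inner convex approximation methods in
% [1] and [2] solve it via a sequence of convexified SDP subproblems.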
Download BMIsolver
- A recently updated version (v1.1) can be found HERE.
- Third-party packages: YALMIP (modeling language) and SeDuMi (SDP solver).
References
- Q. Tran-Dinh, S. Gumussoy, W. Michiels, and M. Diehl. Combining Convex-Concave Decompositions and Linearization Approaches for Solving BMIs, with Application to Static Output Feedback. Tech. Report, July 2011, [pdf].
- Q. Tran-Dinh, W. Michiels, and M. Diehl. An inner convex approximation algorithm for BMI optimization and applications in control. Tech. Report, December 2011, [pdf].
SCPCVX: Sequential Convex Programming - CVX Interface
Software and codes that I use for my research
I have been using many open-source as well as commercial software packages for my research and find them useful and valuable. Here are some of them.