L1 Optimization in MATLAB
Hi guys,
I am trying to solve a slightly modified L1 optimization problem in MATLAB:
argmin_x ||x - d||_2^2 + ||F*x||_1
where F is a low-rank matrix and d is a given vector; x is the optimization variable. Could you suggest the best way to solve this in MATLAB?
Accepted Answer
More Answers (1)
Sravan Karrena on 21 Mar 2019
Edited: Walter Roberson on 21 Mar 2019
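One way is to cast this as a quadratic program and hand it to quadprog (Optimization Toolbox). A sketch of the idea, assuming d is a column vector with the same length as x: introduce y = F*x and slack variables t with -t <= y <= t, so that ||F*x||_1 can be replaced by sum(t); the variables are stacked as z = [x; y; t]. Note that the reported fval differs from the original objective by the constant d'*d.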
s  = size(F,1);                               % rows of F (length of y and t)
nx = size(F,2);                               % length of x
f   = [-2*d; zeros(s,1); ones(s,1)];          % linear term: -2*d'*x + sum(t)
H   = blkdiag(2*eye(nx), zeros(s), zeros(s)); % quadratic term: x'*x
Aeq = [F, -eye(s), zeros(s)];                 % equality constraint: F*x - y = 0
beq = zeros(s,1);
A = [zeros(s,nx), eye(s), -eye(s); zeros(s,nx), -eye(s), -eye(s)];  % y - t <= 0 and -y - t <= 0
b = zeros(2*s,1);
[zopt, fval] = quadprog(H, f, A, b, Aeq, beq);
xopt = zopt(1:nx)                             % minimizer of the original problem
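For a quick sanity check, here is a minimal, self-contained test of the same formulation; the sizes, the rank r, and the random F and d below are made-up assumptions, not data from the question (requires the Optimization Toolbox for quadprog):
% Synthetic low-rank test problem (assumed values).
rng(0);
s = 8; nx = 5; r = 2;                 % problem sizes and rank of F (arbitrary)
F = randn(s,r) * randn(r,nx);         % random low-rank matrix
d = randn(nx,1);                      % random target vector
% Build and solve the QP exactly as above.
f   = [-2*d; zeros(s,1); ones(s,1)];
H   = blkdiag(2*eye(nx), zeros(s), zeros(s));
Aeq = [F, -eye(s), zeros(s)];
beq = zeros(s,1);
A   = [zeros(s,nx), eye(s), -eye(s); zeros(s,nx), -eye(s), -eye(s)];
b   = zeros(2*s,1);
z   = quadprog(H, f, A, b, Aeq, beq);
xopt = z(1:nx);
% The objective at xopt should not exceed the objective at other points,
% e.g. at x = d or x = 0.
obj = @(x) norm(x - d)^2 + norm(F*x, 1);
fprintf('obj(xopt) = %.4f, obj(d) = %.4f, obj(0) = %.4f\n', ...
        obj(xopt), obj(d), obj(zeros(nx,1)));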