The problem of sparse signal recovery is to reconstruct a sparse signal given a number of linear measurements of the signal. The problem has been studied extensively in the literature related to compressive sensing (Candès and Wakin 2008; Donoho 2006) and model selection in statistics and machine learning (Tibshirani 1996; Efron et al.). Signal sparsity is typically imposed using an \(\ell _1\)-norm penalty on the signal's linear transform coefficients or on its gradient map. The essence of proximal gradient algorithms is to divide the objective function into a smooth part and a non-smooth part, then take a proximal step (w.r.t. the non-smooth part) along the gradient of the smooth part. Examples include the method of [8] and Nesterov's accelerated gradient methods [9], which use soft-thresholding as the proximal operator for the \(\ell _1\)-norm. The standard proximal gradient method, also known as iterative soft-thresholding when applied to this problem, has a low computational cost per iteration but a slow convergence rate. In the realm of deterministic optimization, the sequences generated by iterative algorithms (such as proximal gradient descent) exhibit finite activity, i.e., the set of nonzero coordinates of the iterates eventually stabilizes.

In particular, we show that when the measurement matrix satisfies the restricted isometry property (RIP), one of the proposed algorithms, with an appropriate setting of a parameter based on the RIP constants, converges linearly to the optimal solution up to the noise level. Furthermore, our analysis explicitly shows that more observations lead to not only more accurate recovery but also faster convergence. In addition, in setting (iii), a practical variant of the proposed algorithms does not rely on the RIP constants, and our results for sparse signal recovery improve on previous results in the sense that the recovery error bound is smaller. Finally, our empirical studies provide further support for the proposed homotopy proximal mapping algorithms and verify the theoretical results.
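The proximal step for the \(\ell _1\)-norm has a closed form: elementwise soft-thresholding. A minimal sketch of one iterative soft-thresholding (ISTA) step for the \(\ell _1\)-regularized least-squares objective, splitting it into a smooth gradient step and a non-smooth proximal step (function names and the step-size choice here are ours, for illustration only):

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||x||_1: elementwise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista_step(x, A, b, lam, eta):
    """One proximal gradient (ISTA) step for min 0.5*||Ax - b||^2 + lam*||x||_1.

    The gradient step handles the smooth least-squares part; the
    soft-thresholding step handles the non-smooth l1 part. A standard
    step size is eta = 1 / ||A||_2^2 (inverse Lipschitz constant).
    """
    grad = A.T @ (A @ x - b)  # gradient of the smooth part
    return soft_threshold(x - eta * grad, eta * lam)
```

With `eta` no larger than the inverse of the largest eigenvalue of \(A^\top A\), each such step is guaranteed not to increase the objective.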
In this paper, we present novel yet simple homotopy proximal mapping algorithms for reconstructing a sparse signal from (noisy) linear measurements of the signal, or for learning a sparse linear model from observed data; the former task is well known in the field of compressive sensing and the latter is known as model selection in statistics and machine learning. The algorithms adopt a simple proximal mapping of the \(\ell _1\) norm at each iteration and gradually reduce the regularization parameter for the \(\ell _1\) norm. We prove global linear convergence of the proposed homotopy proximal mapping (HPM) algorithms for recovering the sparse signal under three different settings: (i) sparse signal recovery from noiseless measurements, (ii) sparse signal recovery from noisy measurements, and (iii) nearly-sparse signal recovery from sub-Gaussian noisy measurements.