Multi-parameter regularization for high-dimensional learning
Summary of project results
Making accurate predictions is a crucial factor in many systems (such as medical treatment and prevention, geomathematics, social dynamics, and financial computations) for cost savings, efficiency, health, safety, and organizational purposes. At the same time, in most real-life applications one has at one's disposal only incomplete or rough high-dimensional data, and extracting a predictive model from them is an impossible task unless one can rely on some a-priori knowledge of the properties of the expected model. The impossibility of making a reliable prediction results from a combination of factors, the most relevant being the incompleteness of the data, their roughness or noisiness, and their intrinsically high-dimensional nature, which can cause the various phenomena collected under the name of the curse of dimensionality.

In the current project we are developing a comprehensive analysis of techniques and numerical methods for making reliable predictions from measured data. The fundamental challenge, the curse of dimensionality, is to be overcome by incorporating additional information on top of the available data, through optimization by means of multi-parameter regularization and by studying different candidate core models together with additional sets of constraints. In this joint international project we unify two important aspects, or philosophies: we develop multi-parameter regularization in both Hilbert and Banach spaces, so as to retain the flexibility required for solving various tasks while exploiting the numerical convenience usually inherent to Hilbert spaces.

In line with these aspects, we explore three research directions. The first leads to multi-penalty regularization in Banach spaces, where so far only a few theoretically justified results on the adaptive selection of multiple regularization parameters and regularization spaces have been obtained. The second generalizes the known results to effective learning in high dimensions: here we focus on two main mechanisms of dimensionality reduction, assuming that the target function has a special representation or format and then recasting the learning problem in the framework of multi-penalty regularization with adaptively chosen parameters. The third direction lies on the border between regularization theory and meta-learning. In many algorithms, for numerical simulation purposes but even more crucially in data analysis, certain parameters need to be tuned for optimal performance, measured in terms of speed or of the resulting (approximation) quality; this calls for a fast choice rule for the parameters, possibly given certain low-dimensional features of the data. This issue has not been systematically studied in the context of high-dimensional learning, and the current project aims to shed light on this promising but as yet unexplored area by considering meta-learning-based regularization, which presupposes that the parameters of a regularization method are determined from experience with the method in similar applications. The project directions described above may, in the future, serve as a solid bridge across regularization, learning, and approximation theories and can play a fundamental role in various practical applications. A minimal numerical illustration of the central multi-penalty construction is sketched below.
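The following minimal sketch illustrates the basic object behind the first research direction: a Tikhonov functional with two penalty terms and an adaptive choice of both regularization parameters. It is an illustration only, not one of the project's algorithms; the linear forward operator `A`, the first-difference penalty `D`, the noise level `delta`, and the discrepancy-type grid scan are all hypothetical choices made for this sketch.

```python
import numpy as np

# Multi-penalty Tikhonov regularization (illustrative sketch):
# recover u from noisy data y = A u + noise by minimizing
#     ||A u - y||^2 + alpha * ||u||^2 + beta * ||D u||^2
# over u, where D is a first-difference operator promoting smoothness.
# Problem sizes, operators, and the noise level are hypothetical choices.

rng = np.random.default_rng(0)
n, m = 40, 80                                   # underdetermined: fewer data than unknowns
A = rng.standard_normal((n, m))                 # assumed linear forward operator
u_true = np.sin(np.linspace(0, 3 * np.pi, m))   # smooth ground truth
delta = 0.05                                    # assumed noise level
y = A @ u_true + delta * rng.standard_normal(n)

D = np.diff(np.eye(m), axis=0)                  # first-difference penalty operator

def solve(alpha, beta):
    """Closed-form minimizer of the two-penalty quadratic functional."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(m) + beta * (D.T @ D),
                           A.T @ y)

# Naive adaptive parameter choice: scan a grid of (alpha, beta) and keep
# the pair whose residual best matches the expected noise magnitude
# (a discrepancy-type rule).
best, best_gap = None, np.inf
for alpha in np.logspace(-4, 1, 12):
    for beta in np.logspace(-4, 1, 12):
        u = solve(alpha, beta)
        gap = abs(np.linalg.norm(A @ u - y) - delta * np.sqrt(n))
        if gap < best_gap:
            best, best_gap = (alpha, beta, u), gap

alpha_star, beta_star, u_star = best
rel_err = np.linalg.norm(u_star - u_true) / np.linalg.norm(u_true)
print(f"chosen parameters: alpha={alpha_star:.2e}, beta={beta_star:.2e}")
print(f"relative reconstruction error: {rel_err:.3f}")
```

In the Banach-space and sparsity-promoting settings studied in the project, quadratic penalties of this kind would be replaced by non-smooth ones (for instance l1-norms) and the closed-form solve by alternating iterative thresholding, as in the first publication listed below; a meta-learning rule in the spirit of the third research direction would replace the grid scan by a map, learned from previously solved problems, from low-dimensional data features to good parameter values.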
Project-related publications (selection)
- Minimization of multi-penalty functionals by alternating iterative thresholding and optimal parameter choices, Inverse Problems, 30(12):125003 (34pp), 2014
V. Naumova and S. Peter
(See online at https://doi.org/10.1088/0266-5611/30/12/125003)
- Parameter choice strategies for multipenalty regularization, SIAM J. Numer. Anal., 52(4):1770-1794, 2014
M. Fornasier, V. Naumova and S.V. Pereverzyev
(See online at https://doi.org/10.1137/130930248)
- Quasi-linear compressed sensing, Multiscale Model. Simul., 12(2):725-754, 2014
M. Ehler, M. Fornasier and J. Sigl
(See online at https://doi.org/10.1137/130929928)
- Nonlinear residual minimization by iteratively reweighted least squares, Computational Optimization and Applications, 64(3):755-792, 2016
J. Sigl
(See online at https://doi.org/10.1007/s10589-016-9829-x)
- A Machine Learning Approach to Optimal Tikhonov Regularisation I: Affine Manifolds
E. De Vito, M. Fornasier and V. Naumova
- Harmonic Mean Iteratively Reweighted Least Squares for Low-Rank Matrix Recovery, Journal of Machine Learning Research, 19(47):1-49, 2018
C. Kuemmerle and J. Sigl
- Sparse PCA from Inaccurate and Incomplete Measurements
M. Fornasier, J. Maly and V. Naumova