Project Details
Multi-parameter regularization in high-dimensional learning
Applicant
Professor Dr. Massimo Fornasier
Subject Area
Mathematics
Term
from 2014 to 2018
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 254193214
Making accurate predictions is a crucial factor in many systems for cost savings, efficiency, health, safety, and organizational purposes. Motivated by the increased demand for robust predictive methods, in this joint international project we are developing a comprehensive analysis of techniques and numerical methods for performing reliable predictions from roughly measured high-dimensional data. The challenge of working with high-dimensional noisy data shall be overcome by incorporating additional information on top of the available data, through optimization by means of multi-parameter regularization, and by studying different candidate core models together with additional sets of constraints. We address three fundamental objectives: the first two are methodological in nature, while the last one is applicative. The first objective is to develop comprehensive theoretical and numerical approaches to multi-penalty regularization in Banach spaces, which may be reproducing kernel Banach spaces or spaces of sparsely represented functions. This is motivated by the expected geometrical/structured features of high-dimensional data, which may not be well represented in the framework of (typically more isotropic) Hilbert spaces. Moreover, this is a rather open research field where only preliminary results are available. The second objective is to use multi-penalty regularization in Banach spaces for high-dimensional supervised learning. Here we focus on two main mechanisms of dimensionality reduction: we assume that our function has a special representation/format and then recast the learning problem into the framework of multi-penalty regularization with adaptively chosen parameters. As the last objective, we shall apply the methodologies developed in the first two tasks to meta-learning for optimal parameter choices of algorithms.
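To illustrate the multi-penalty idea in a concrete (and deliberately simplified) setting, the sketch below combines two penalties around a linear model: an ℓ1 term promoting sparsity and an ℓ2 term promoting stability, solved by proximal gradient descent. The specific penalties, solver, and parameter values are hypothetical illustrations, not the project's actual formulation.

```python
import numpy as np

def multi_penalty(A, y, alpha, beta, n_iter=500):
    """Minimize ||Ax - y||^2 + alpha*||x||_1 + beta*||x||_2^2
    by proximal gradient descent (ISTA-style iteration).

    The smooth part is ||Ax - y||^2 + beta*||x||_2^2; the ell-1
    term is handled by its proximal map (soft-thresholding).
    """
    m, n = A.shape
    # Step size 1/L, where L = 2*sigma_max(A)^2 + 2*beta is a
    # Lipschitz constant of the smooth part's gradient.
    step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2 + 2 * beta)
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ x - y) + 2 * beta * x
        z = x - step * grad
        # Soft-thresholding: prox of step*alpha*||.||_1.
        x = np.sign(z) * np.maximum(np.abs(z) - step * alpha, 0.0)
    return x
```

With a sparse ground truth and mild noise, the two penalty weights `alpha` and `beta` play exactly the role of the multiple regularization parameters discussed above: their relative size steers the solution between sparsity and smoothness.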
In many algorithms, for numerical simulation purposes but even more crucially in data analysis, certain parameters need to be tuned for optimal performance, measured in terms either of speed or of resulting (approximation) quality. This calls for the development of a fast choice rule for the parameters, possibly based on certain features of the data, which may nevertheless retain a rather high dimensionality. Such a rule shall be learned by training on previous applications of the algorithm. It appears that this issue has not been systematically studied in the context of high-dimensional learning. The above-mentioned project directions may, in the future, serve as a solid bridge across regularization, learning, and approximation theories and can play a fundamental role in various practical applications.
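The meta-learning direction can be sketched in a toy setting, under the (purely illustrative) assumptions that the only relevant data feature is the noise level and that the tuned parameter is a single ridge regularization weight. On previous tasks, where the ground truth is known, the best parameter is found by grid search; a simple log-linear rule mapping the feature to the parameter is then fitted and applied to new tasks.

```python
import numpy as np

def best_alpha_by_grid(A, y, x_true, alphas):
    """Grid search on a training task: pick the alpha whose ridge
    solution is closest to the known ground truth x_true."""
    errs = []
    for a in alphas:
        x = np.linalg.solve(A.T @ A + a * np.eye(A.shape[1]), A.T @ y)
        errs.append(np.linalg.norm(x - x_true))
    return alphas[int(np.argmin(errs))]

def learn_rule(features, log_alphas):
    """Fit log(alpha) ~ w0 + w1 * feature by least squares."""
    X = np.column_stack([np.ones_like(features), features])
    w, *_ = np.linalg.lstsq(X, log_alphas, rcond=None)
    return w

def predict_alpha(w, feature):
    """Apply the learned choice rule to a new task's feature."""
    return np.exp(w[0] + w[1] * feature)
```

Training on a handful of tasks with varying noise levels yields a rule that can propose a regularization weight for a fresh data set at negligible cost, which is the essence of the fast parameter-choice rule described above.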
DFG Programme
Research Grants
International Connection
Austria
Participating Institution
Österreichische Akademie der Wissenschaften
Johann Radon Institute for Computational and Applied Mathematics (RICAM)
Participating Person
Professor Dr. Sergei Pereverzyev