Title: Randomized methods for dimension reduction
Authors: Benjamin Erichson - ICSI and UC Berkeley (United States) [presenting]
Abstract: In the era of Big Data, vast amounts of data are being collected and curated across the social, physical, engineering, biological, and ecological sciences. Techniques for dimensionality reduction, such as principal component analysis (PCA), are essential to the analysis of high-dimensional data. These methods exploit redundancies in the data to find low-rank, parsimonious models that reveal the underlying structure of the data. Classically, highly accurate deterministic matrix algorithms are used for this task. However, the emergence of large-scale datasets has severely challenged our computational ability to analyze data. Over the last decade, randomness has been shown to be an effective strategy for quickly producing approximate answers to familiar problems such as dimension reduction. Thus, the paradigm of randomized methods provides a scalable architecture for modern data science applications. These methods scale with the intrinsic rank of the data rather than the ambient dimensions of the measurement space. A brief overview of randomized methods for dimension reduction will be given.
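
To make the idea concrete, the following is a minimal sketch (not the speaker's implementation) of a randomized SVD in the style of Halko, Martinsson, and Tropp: a random test matrix samples the range of the data matrix, a small QR factorization yields an orthonormal basis, and a deterministic SVD is then applied to a much smaller projected matrix. The function name, oversampling parameter, and power-iteration count are illustrative choices, not part of the abstract.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, n_iter=2, seed=0):
    """Approximate rank-`rank` SVD of A via random range sampling.

    Cost is driven by the target rank k = rank + oversample, not by
    the ambient dimensions of A -- the point made in the abstract.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = rank + oversample
    # Gaussian test matrix: k random directions sample the range of A.
    Omega = rng.standard_normal((n, k))
    Y = A @ Omega
    # Optional power iterations sharpen the decay of the sampled spectrum.
    for _ in range(n_iter):
        Y = A @ (A.T @ Y)
    # Orthonormal basis Q for the (approximate) range of A.
    Q, _ = np.linalg.qr(Y)
    # Project A onto the basis: B is only k x n, cheap to decompose.
    B = Q.T @ A
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_hat
    return U[:, :rank], s[:rank], Vt[:rank]

# Usage: an exactly rank-5 matrix is recovered to near machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 200))
U, s, Vt = randomized_svd(A, rank=5)
rel_err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The same sampling idea underlies randomized PCA: center the columns of A first, and the right singular vectors give the principal directions.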