Matrix superconcentration inequalities
Tatiana Brailovskaya

February 26th 2025

Operator norms of Gaussian random matrices with arbitrary variance profiles exhibit superconcentration phenomena that generalize the seminal Tracy-Widom bounds beyond the i.i.d. case.

This talk will be broadcast at 13:30 GMT on November 27th 2024 via MS Teams only.

Meeting ID: 393 527 089 878
Passcode: dsm7py

Abstract

One way to understand the concentration of the norm of a random matrix X with Gaussian entries is to apply a standard concentration inequality, such as the one for Lipschitz functions of i.i.d. standard Gaussian variables, which yields subgaussian tail bounds on the norm of X. However, as was shown by Tracy and Widom in the 1990s, when the entries of X are i.i.d., the norm of X exhibits even sharper concentration. The phenomenon of a function of many i.i.d. variables having strictly smaller tails than those predicted by classical concentration inequalities is sometimes referred to as "superconcentration", a term originally coined by Chatterjee. I will discuss novel results that can be interpreted as superconcentration inequalities for the norm of X, where X is a Gaussian random matrix with independent entries and an arbitrary variance profile. We can also view our results as a nonhomogeneous extension of Tracy-Widom-type upper tail estimates for the norm of X.
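
As a rough illustration of the gap the abstract alludes to (a standard back-of-the-envelope sketch, not a statement of the talk's results): Gaussian concentration for Lipschitz functions says that for g ~ N(0, I_n) and any L-Lipschitz function f,

\[
\mathbb{P}\bigl( |f(g) - \mathbb{E} f(g)| \ge t \bigr) \le 2\, e^{-t^2/(2L^2)} .
\]

For an n x n matrix X with i.i.d. N(0, 1/n) entries, the map from the underlying standard Gaussian entries to \|X\| is 1/\sqrt{n}-Lipschitz (the operator norm is 1-Lipschitz with respect to the Frobenius norm), so the inequality above only guarantees fluctuations of \|X\| of order n^{-1/2}. Tracy-Widom asymptotics show that the true fluctuations are of the much smaller order n^{-2/3}, which is the superconcentration phenomenon in question.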
