Title: Sparsity in artificial neural networks (ANN)
Authors: Sylvain Sardy - University of Geneva (Switzerland) [presenting]
Abstract: In the spirit of the lasso, the many parameters of the ANN are estimated with an L1 penalty. This has the advantage of performing variable selection (e.g., gene selection, or feature selection in an image) and of avoiding overfitting. The regularization parameter lambda is selected with the Quantile Universal Threshold. This method requires no estimation of nuisance parameters (such as sigma in Gaussian regression) and can retrieve the sparsity of the underlying ANN in certain regimes.
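The L1-penalized estimation described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: it fits a one-hidden-layer ReLU network by proximal gradient descent on the squared loss with an L1 penalty on the weights, which sets many weights exactly to zero. The fixed value of lam is an assumption for illustration; the abstract instead selects lambda via the Quantile Universal Threshold.

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of the L1 penalty: shrinks entries toward zero
    # and sets small ones exactly to zero (the source of sparsity).
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def train_sparse_ann(X, y, n_hidden=8, lam=0.2, lr=0.01, n_iter=2000, seed=0):
    # One-hidden-layer ReLU network fit by proximal gradient descent.
    # NOTE: illustrative sketch only; lam is fixed here rather than
    # chosen by the Quantile Universal Threshold as in the abstract.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    W1 = rng.normal(scale=0.5, size=(p, n_hidden))
    b1 = np.zeros(n_hidden)
    w2 = rng.normal(scale=0.5, size=n_hidden)
    for _ in range(n_iter):
        H = np.maximum(X @ W1 + b1, 0.0)        # hidden ReLU activations
        r = H @ w2 - y                          # residuals
        g_w2 = H.T @ r / n                      # gradient w.r.t. output weights
        g_H = np.outer(r, w2) * (H > 0)         # backprop through the ReLU
        g_W1 = X.T @ g_H / n
        g_b1 = g_H.mean(axis=0)
        # Gradient step on the smooth loss, then proximal (soft-threshold) step.
        W1 = soft_threshold(W1 - lr * g_W1, lr * lam)
        w2 = soft_threshold(w2 - lr * g_w2, lr * lam)
        b1 -= lr * g_b1                         # bias left unpenalized
    return W1, b1, w2

# Toy data: only the first 2 of 10 inputs carry signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = np.sin(X[:, 0]) + X[:, 1]
W1, b1, w2 = train_sparse_ann(X, y)
# Many first-layer weights are driven exactly to zero, performing
# variable selection on the inputs.
sparsity = np.mean(W1 == 0.0)
```

Because the proximal step zeroes out coordinates exactly (unlike plain gradient descent on a penalized loss), the fitted network is sparse, and inputs whose entire weight row is zero are deselected.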