COMPSTAT 2022 Submission
A0252
Title: Compression-enabled interpretability of deep learning models for scientific discovery
Authors: Reza Abbasi Asl - University of California, San Francisco (United States) [presenting]
Abstract: In the past decade, machine learning research has focused intensely on developing models with remarkably high predictive capabilities. In particular, models based on deep learning principles have shown promise for scientific discovery in domains such as neuroscience and healthcare. However, the huge number of parameters in these models has made them difficult for domain experts to interpret. We will discuss the role of model compression in building more interpretable and more stable deep learning models in the context of two computational neuroscience studies. First, we will introduce a family of iterative model compression algorithms for deep learning models. We will then discuss their role in building interpretable voxel-wise models of human brain activity evoked by natural movies. These compressed models reveal increased category selectivity along the ventral visual pathway in the human visual cortex, with higher stability than uncompressed models. Next, we will investigate a compression-enabled, stability-driven model interpretation framework to characterize complex biological neurons in the non-human primate visual cortex. This visualization uncovers the diversity of stable patterns captured by these neurons. Overall, these findings suggest the importance of model compression for stability-driven interpretation of deep learning models in scientific applications.
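The abstract does not specify which compression algorithm the talk uses. As one illustrative sketch only, the snippet below implements iterative magnitude pruning, a common compression scheme in which each step zeroes out a fraction of the smallest-magnitude weights that remain; all function names and parameters here are hypothetical, not taken from the authors' work.

```python
import numpy as np

def magnitude_prune(weights, frac):
    """One pruning step (illustrative, not the authors' algorithm):
    zero out the smallest-magnitude `frac` of the currently nonzero weights."""
    w = weights.copy()
    flat = w.ravel()                      # view into the copy
    nz = np.flatnonzero(flat)             # indices of surviving weights
    k = int(frac * nz.size)               # how many to remove this step
    if k == 0:
        return w
    order = np.argsort(np.abs(flat[nz]))  # sort survivors by magnitude
    flat[nz[order[:k]]] = 0.0             # drop the k smallest
    return w

def iterative_prune(weights, frac_per_step, steps):
    """Apply pruning repeatedly; in a real pipeline each step would be
    followed by fine-tuning before pruning again."""
    w = weights.copy()
    for _ in range(steps):
        w = magnitude_prune(w, frac_per_step)
    return w

# Toy example: a single 64x64 weight matrix standing in for one layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_pruned = iterative_prune(w, frac_per_step=0.2, steps=3)
sparsity = float((w_pruned == 0).mean())  # fraction of weights removed overall
```

Because each step prunes 20% of the *remaining* weights, three steps leave roughly half the parameters, and the surviving large-magnitude weights are what a stability-driven interpretation would then examine.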