CMStatistics 2023
B0545
Title: Interpreting deep neural networks towards trustworthy AI
Authors: Bin Yu - UC Berkeley (United States) [presenting]
Abstract: The adaptive wavelet distillation (AWD) interpretation method is described for pre-trained deep learning models. AWD is shown both to outperform deep neural networks and to be interpretable in the motivating cosmology problem and in an externally validating cell biology problem. Moreover, an investigation is discussed into the effects of pre-training data distributions on large language models (LLMs) fine-tuned for pathology report classification. Finally, the need to quality-control the entire data science life cycle is addressed, in order to build models that yield trustworthy, interpretable data results, through the predictability-computability-stability (PCS) framework and documentation for veridical data science.