A0253
Title: Contrastive learning on multimodal analysis of electronic health records
Authors: Doudou Zhou - National University of Singapore (Singapore) [presenting]
Abstract: Electronic health record (EHR) systems contain a wealth of multimodal clinical data, including structured clinical codes and unstructured notes. However, many existing EHR studies have either concentrated on a single modality or merged modalities in a rudimentary fashion, treating structured and unstructured data as separate entities and neglecting the inherent synergy between them. Despite the great success of multimodal contrastive learning in vision-language applications, its potential remains under-explored for multimodal EHR data, particularly in terms of theoretical understanding. To accommodate the statistical analysis of multimodal EHR data, a novel generative model for multimodal feature embeddings is proposed, and a multimodal contrastive loss is designed to learn the multimodal EHR feature representation. The theoretical analysis demonstrates the advantage of multimodal learning over single-modality learning and connects the minimizer of the contrastive loss to the singular value decomposition of a pointwise mutual information matrix. This connection paves the way for a privacy-preserving algorithm tailored to multimodal EHR feature representation learning. Simulation studies show that the proposed algorithm performs well under a variety of configurations, and its clinical utility is further validated on real-world EHR data.
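The abstract does not spell out the loss or the factorization, but a minimal sketch of the stated PMI-SVD connection might look as follows, assuming the factorization operates on an aggregate code-by-concept co-occurrence matrix. The function name, the positive-PMI truncation, and the toy dimensions are illustrative choices, not the paper's specification.

import numpy as np

def pmi_embeddings(cooccur, dim=10, eps=1e-8):
    """Embed two modalities via SVD of a pointwise mutual information matrix.

    cooccur[i, j] counts how often clinical code i and note concept j
    co-occur in the same patient record. Returns rank-`dim` embeddings
    for the rows (codes) and columns (note concepts).
    """
    total = cooccur.sum()
    p_joint = cooccur / total                   # empirical p(i, j)
    p_row = p_joint.sum(axis=1, keepdims=True)  # marginal p(i)
    p_col = p_joint.sum(axis=0, keepdims=True)  # marginal p(j)
    # Positive PMI keeps the log well-defined when a pair never co-occurs.
    pmi = np.log(np.maximum(p_joint, eps) / (p_row @ p_col))
    pmi = np.maximum(pmi, 0.0)
    # Truncated SVD of the PMI matrix: the step the abstract links to
    # the minimizer of the multimodal contrastive loss.
    u, s, vt = np.linalg.svd(pmi, full_matrices=False)
    scale = np.sqrt(s[:dim])
    row_emb = u[:, :dim] * scale     # code embeddings
    col_emb = vt[:dim, :].T * scale  # note-concept embeddings
    return row_emb, col_emb

# Toy usage: 4 clinical codes x 3 note concepts.
rng = np.random.default_rng(0)
counts = rng.integers(0, 20, size=(4, 3)).astype(float)
code_emb, note_emb = pmi_embeddings(counts, dim=2)

One reading of the privacy claim, and the assumption this sketch makes, is that the factorization consumes only aggregate co-occurrence counts rather than patient-level records; the exact privacy-preserving mechanism is not specified in the abstract.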