B0440
Title: Learning finite mixture models by minimum Wasserstein distance estimator
Authors: Jiahua Chen - University of British Columbia (Canada) [presenting]
Abstract: When a population exhibits heterogeneity, finite mixture models offer an easy interpretation: the population is composed of several homogeneous subpopulations, all from a parametric distribution family. As early as 1894, Pearson fitted a two-component Gaussian mixture to a crab data set, suggesting the existence of two subspecies. Pearson used the method of moments, likely for ease of numerical computation. Contemporary statistical practice favours learning by maximum likelihood, for its statistical efficiency and the convenient EM algorithm. The maximum likelihood estimator (MLE) searches the assumed distribution family for the distribution that attains the minimum Kullback-Leibler divergence from the empirical distribution. This minimum distance principle can be applied to learn mixtures under any distance between distributions. In the machine learning community, the Wasserstein distance has drawn increasing attention for its intuitive geometric interpretation, and it has been successfully employed in many new applications. We study the minimum Wasserstein distance estimator for learning finite Gaussian mixtures. We establish its statistical consistency and demonstrate its superior performance in some applications compared with a penalized version of the MLE; the penalty is needed because the plain MLE is not well defined for finite Gaussian mixtures, as the likelihood is unbounded when a component variance shrinks to zero at an observation.
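To make the minimum distance principle concrete, the following is a minimal sketch, not the authors' estimator: it fits a two-component univariate Gaussian mixture by minimizing the 1-Wasserstein distance to the empirical distribution, using the fact that in one dimension W1 equals the L1 distance between the two cumulative distribution functions. The simulated data, the parameterization, and the choice of optimizer are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' method): minimum
# 1-Wasserstein distance fit of a two-component Gaussian mixture in 1-D.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Simulated heterogeneous sample: two Gaussian subpopulations.
x = np.sort(np.concatenate([rng.normal(-2.0, 1.0, 300),
                            rng.normal(2.0, 1.0, 700)]))
grid = np.linspace(x[0] - 5.0, x[-1] + 5.0, 4000)
# Empirical CDF evaluated on the grid.
F_n = np.searchsorted(x, grid, side="right") / x.size

def w1(theta):
    """1-Wasserstein distance between the empirical distribution and the
    candidate mixture; in 1-D, W1 equals the integral of |F_n - F|."""
    w = 1.0 / (1.0 + np.exp(-theta[0]))            # mixing weight in (0, 1)
    mu1, mu2 = theta[1], theta[2]
    s1, s2 = np.exp(theta[3]), np.exp(theta[4])    # positive scales
    F = w * norm.cdf(grid, mu1, s1) + (1.0 - w) * norm.cdf(grid, mu2, s2)
    return np.sum(np.abs(F_n - F)[:-1] * np.diff(grid))  # Riemann sum

theta0 = np.array([0.0, -1.0, 1.0, 0.0, 0.0])      # crude starting values
res = minimize(w1, theta0, method="Nelder-Mead",
               options={"maxiter": 5000, "fatol": 1e-10})
print("weight :", 1.0 / (1.0 + np.exp(-res.x[0])))
print("means  :", res.x[1:3])
print("scales :", np.exp(res.x[3:5]))
```

Reparameterizing the weight through a logistic transform and the scales through logarithms keeps the search unconstrained; a grid-based Riemann sum stands in for exact integration of |F_n - F|. Note that, unlike the likelihood, this objective stays bounded even as a component scale shrinks toward zero.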