EcoSta 2022
A0977
Title: Pseudo-Mallows for preference learning and personalized recommendation
Authors: Qinghua Liu - University of Oslo (Norway) [presenting]
Abstract: The Mallows model has proven useful for learning personal preferences from highly incomplete data and can be applied to recommender systems. However, inference based on MCMC is slow, preventing its use in real-time applications. We propose the Pseudo-Mallows distribution over the set of all permutations of $n$ items, to approximate the posterior distribution arising from a Mallows likelihood. The Pseudo-Mallows distribution is a product of univariate discrete Mallows-like distributions, constrained to remain in the space of permutations. In a variational setting, we optimise the variational order parameter by minimising a marginalized KL-divergence. We propose an approximate algorithm for this discrete optimization, and conjecture a certain form of the optimal variational order that depends on the data. Empirical evidence and some theory support our conjecture. Sampling from the Pseudo-Mallows distribution allows fast preference learning, compared to alternative MCMC-based options, when the data exist in the form of partial rankings of the items or of clicks on some items. Through simulations and a real-life case study, we demonstrate that the Pseudo-Mallows model learns personal preferences very well and makes recommendations much more efficiently, while maintaining accuracy similar to that of the exact Bayesian Mallows model.
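To illustrate the kind of construction the abstract describes, the sketch below draws a permutation from a product of univariate discrete Mallows-like distributions, constrained to stay in the permutation space by sampling ranks without replacement. This is a hypothetical illustration under simplifying assumptions (items visited in consensus order, footrule-style weights `exp(-alpha * |r - rho_i|)`), not the authors' exact Pseudo-Mallows construction or variational order.

```python
import numpy as np

def pseudo_mallows_sample(rho, alpha, rng=None):
    """Draw one permutation of ranks 1..n from a product of univariate
    Mallows-like distributions, constrained to the permutation space.

    rho   : consensus ranking (rank of each item), e.g. [2, 1, 3]
    alpha : dispersion; larger alpha concentrates mass near rho

    Illustrative sketch only -- the item-visiting order below (by rho)
    is an assumption, not the optimal variational order from the paper.
    """
    rng = np.random.default_rng(rng)
    n = len(rho)
    remaining = list(range(1, n + 1))   # ranks still unassigned
    sample = np.empty(n, dtype=int)
    for i in np.argsort(rho):           # visit items in consensus order
        # univariate Mallows-like weights over the remaining ranks
        w = np.exp(-alpha * np.abs(np.array(remaining) - rho[i]))
        w /= w.sum()
        r = rng.choice(len(remaining), p=w)
        sample[i] = remaining.pop(r)    # without replacement => permutation
    return sample
```

Because each item takes a rank from the shrinking pool of remaining ranks, every draw is a valid permutation, and no MCMC chain is needed: one pass over the items yields one sample.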