Learning Inconsistent Preferences with Gaussian Processes
- Siu Lun Chau,
- Javier González,
- Dino Sejdinovic
International Conference on Artificial Intelligence and Statistics (AISTATS 2022)
We revisit the widely used preferential Gaussian processes (pgp) of Chu and Ghahramani (2005) and challenge their modelling assumption that imposes rankability of data items via latent utility function values. We propose a generalisation of pgp which can capture more expressive latent preferential structures in the data and can thus be used to model inconsistent preferences, i.e. where transitivity is violated, or to discover clusters of comparable items via spectral decomposition of the learned preference functions. We also study the properties of the associated covariance kernel functions and their reproducing kernel Hilbert spaces (RKHSs), giving a simple construction that satisfies universality in the space of preference functions. Finally, we provide an extensive set of numerical experiments on simulated and real-world datasets showcasing the competitiveness of our proposed method with the state of the art. Our experimental findings support the conjecture that violations of rankability are ubiquitous in real-world preferential data.
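To make the rankability assumption concrete, here is a minimal NumPy sketch (our own illustration, not the paper's code): a preference function g(x, x') should be skew-symmetric, g(x, x') = -g(x', x), and rankable data corresponds to the special case g(x, x') = u(x) - u(x') for a latent utility u, which forces transitivity. A rock-paper-scissors style cycle cannot be written in that form, which is the kind of inconsistent preference the generalised model is meant to capture.

```python
import numpy as np

# Toy preference matrix over three items (hypothetical example):
# G[i, j] > 0 means item i is preferred to item j.
items = ["rock", "paper", "scissors"]
G = np.array([
    [0.0, -1.0,  1.0],   # rock loses to paper, beats scissors
    [1.0,  0.0, -1.0],   # paper beats rock, loses to scissors
    [-1.0, 1.0,  0.0],   # scissors loses to rock, beats paper
])

# A valid preference function is skew-symmetric: g(x, x') = -g(x', x).
assert np.allclose(G, -G.T)

# Transitivity is violated: rock > scissors and scissors > paper, yet paper > rock.
assert G[0, 2] > 0 and G[2, 1] > 0 and G[1, 0] > 0

# If G came from a utility u, then G[i, j] = u[i] - u[j] would imply
# G[i, j] == G[i, k] + G[k, j] for all k. Here that additivity fails:
print(G[0, 1], "vs", G[0, 2] + G[2, 1])   # -1.0 vs 2.0

# Since G is skew-symmetric, 1j * G is Hermitian, so a spectral
# decomposition with real eigenvalues exists; the paper uses spectral
# structure of learned preference functions to find comparable clusters.
eigvals = np.linalg.eigvalsh(1j * G)
print(np.round(eigvals, 3))
```

The additivity check is the practical symptom of non-rankability: no single utility vector reproduces all three pairwise signs at once.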