|FIELD||AI and Natural Sciences|
|DATE||August 25 (Wed), 2021|
|TITLE||Riemannian Distortion Measures for Non-Euclidean Data|
A growing number of problems in machine learning involve data that is non-Euclidean. For such problems, a naive application of vector space learning algorithms produces results that depend on the choice of local coordinates used to parametrize the data. At the same time, many unsupervised learning problems eventually reduce to an optimization in which the criterion being minimized can be interpreted as the distortion associated with a mapping between two curved spaces. Exploiting this distortion-minimizing perspective, we first show that a large class of unsupervised learning problems involving non-Euclidean data can be naturally framed as seeking a mapping between two Riemannian manifolds that is closest to being an isometry. We then propose a family of coordinate-invariant first-order distortion measures that quantify the proximity of the mapping to an isometry, and apply them to manifold learning and autoencoder training for non-Euclidean data sets. Case studies ranging from human mass-shape data to MRI images demonstrate the performance advantages of our Riemannian distortion minimization framework.
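To make the "proximity to isometry" idea concrete, the following is a minimal sketch of one possible coordinate-invariant first-order distortion at a single point. It is an illustrative choice, not necessarily the measure proposed in the talk: given the Jacobian J of a map between manifolds with local metric matrices H (domain) and G (codomain), it compares the pullback metric J^T G J against H through the eigenvalues of H^{-1}(J^T G J), which are invariant under changes of local coordinates and all equal 1 exactly when the map is a local isometry at that point.

```python
import numpy as np

def pullback_metric(J, G):
    # Pullback of the codomain metric G under a map with Jacobian J:
    # in local coordinates, (f*G) = J^T G J.
    return J.T @ G @ J

def distortion(J, H, G):
    # Illustrative first-order distortion at one point: the sum of squared
    # deviations of the eigenvalues of A = H^{-1}(J^T G J) from 1,
    # computed as tr((A - I)^2). This depends only on the eigenvalues of
    # H^{-1}(f*G), so it is invariant to coordinate changes, and it
    # vanishes iff the map is a local isometry at that point.
    A = np.linalg.solve(H, pullback_metric(J, G))
    D = A - np.eye(H.shape[0])
    return np.trace(D @ D)

# Identity map between Euclidean patches: a local isometry, zero distortion.
d = 2
H = np.eye(d)
G = np.eye(d)
print(distortion(np.eye(d), H, G))        # 0.0

# Uniform scaling by 2: lengths are distorted, so the measure is nonzero.
# Here A = 4I, giving tr((3I)^2) = 9 * d = 18.
print(distortion(2.0 * np.eye(d), H, G))  # 18.0
```

In practice such a pointwise term would be integrated over the domain (weighted by the Riemannian volume form) to obtain the training objective minimized by a manifold learning method or autoencoder.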