Towards Disentangled Representation Learning in Practice




Citable link (URI): http://hdl.handle.net/10900/170059
http://nbn-resolving.org/urn:nbn:de:bsz:21-dspace-1700593
http://dx.doi.org/10.15496/publikation-111386
Document type: Dissertation
Date of publication: 2025-09-08
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät (Faculty of Science)
Department: Computer Science
Referee: Brendel, Wieland (Prof. Dr.)
Date of oral examination: 2024-09-16
DDC classification: 004 - Computer science
Keywords:
representation learning
disentanglement
self-supervised learning
unsupervised learning
concept learning
License: http://tobias-lib.uni-tuebingen.de/doku/lic_ohne_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_ohne_pod.php?la=en

Abstract:

While the success of deep learning is underpinned by learning representations of data, what information the learned representations extract remains a mystery. In our first contribution (C1), we show that state-of-the-art approaches to self-supervised visual representation learning extract the aspects, or factors of variation (FoVs), of the data that are invariant to the data augmentations applied during training, discarding the variant FoVs. In studying augmentations used in practice, we find that while object class is left invariant, position, hue, and rotation information tend to be discarded, which is problematic for tasks beyond object recognition, e.g., object localization. In our second contribution (C2), we show that such approaches can yield disentangled representations, in which every FoV is extracted separately in the representation, provided that all FoVs are variant to the augmentations; notably, this assumption is not met by augmentations used in practice. In our third contribution (C3), we present evidence that this assumption can be met in natural video, where FoVs undergo transitions that are typically small in magnitude with occasional large jumps, characteristic of a temporally sparse distribution. While challenges remain for real-world disentanglement, our contributions provide guidance to the field in the pursuit of progress in representation learning.
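To illustrate the mechanism behind C1, the following is a minimal NumPy sketch of an InfoNCE-style contrastive loss, the objective family behind methods such as SimCLR (the function name, toy embeddings, and temperature value are illustrative and not taken from the dissertation). Because the loss is minimized when the two augmented views of an image map to the same embedding, any FoV that the augmentations vary (e.g., hue or position) carries no information useful for the objective and tends to be discarded:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Toy InfoNCE loss: views z1[i] and z2[i] form the positive pair."""
    # Normalize embeddings so that dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # all pairwise similarities
    # Log-softmax over each row; positives sit on the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))
    return -log_probs[idx, idx].mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))

# A representation that is invariant to the augmentation maps both
# views to identical embeddings and attains a low loss ...
loss_invariant = info_nce_loss(z, z)
# ... while a representation that retains the varied FoVs produces
# differing view embeddings and a higher loss.
loss_variant = info_nce_loss(z, rng.normal(size=(8, 16)))
```

Since gradient descent drives the encoder toward the low-loss (invariant) solution, the augmentation choices directly determine which FoVs survive in the representation, which is the effect C1 measures.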
