Group Supervised Learning: Extending Self-Supervised Learning to Multi-Device Settings
10 December 2021
We introduce a novel problem setting for self-supervised learning, Time-Synchronous Multi-Device Systems, in which data from multiple data-generating devices must be used jointly during contrastive training. To address it, we propose a novel training setup, Group Supervised Learning (GSL), which extends contrastive learning by contrasting time-series data gathered from different devices. GSL comprises three main components: Device Selection, Data Sampling, and a novel loss function that enables contrastive learning across a group of devices. We compared GSL against semi-supervised and fully-supervised baselines and found that our proposal is data-efficient and outperforms the baselines by as much as 0.15 in micro F1-score across two human activity recognition datasets.
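The abstract does not spell out the loss, but the core idea of contrasting time-synchronized data across devices can be illustrated with a minimal sketch, assuming an NT-Xent-style objective in which windows recorded at the same time step by different devices form positive pairs and all other windows in the batch act as negatives. The function name, tensor shapes, and temperature below are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def cross_device_contrastive_loss(embeddings: torch.Tensor,
                                  temperature: float = 0.1) -> torch.Tensor:
    """Illustrative cross-device contrastive loss (not the paper's exact GSL loss).

    embeddings: (num_devices, num_windows, dim); embeddings[d, t] is the
    encoding of the window recorded at time index t by device d. Windows
    that share a time index but come from different devices are treated
    as positives; all other windows in the batch act as negatives.
    """
    num_devices, num_windows, dim = embeddings.shape
    z = F.normalize(embeddings.reshape(num_devices * num_windows, dim), dim=1)

    # Pairwise cosine-similarity logits, with self-similarity excluded.
    logits = (z @ z.t()) / temperature
    logits.fill_diagonal_(float('-inf'))

    # Positive mask: same time index, different row (hence different device).
    time_idx = torch.arange(num_windows).repeat(num_devices)
    pos_mask = time_idx.unsqueeze(0) == time_idx.unsqueeze(1)
    pos_mask.fill_diagonal_(False)

    # NT-Xent-style multi-positive loss: average log-probability of positives.
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_mask.sum(dim=1))
    return loss.mean()

# Toy usage: 3 devices, 8 time-aligned windows, 64-dimensional embeddings.
emb = torch.randn(3, 8, 64)
print(cross_device_contrastive_loss(emb))
```

In this sketch, Device Selection and Data Sampling would correspond to choosing which devices contribute to the first tensor dimension and which time-aligned windows populate the second, upstream of the loss itself.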