Sound-Based Sleep Staging by Exploiting Real-World Unlabeled Data

Mar 2, 2023
ICLR Workshop 2023

Abstract

With growing interest in sleep monitoring at home, sound-based sleep staging with deep learning has emerged as a potential solution. However, collecting labeled data in home environments is difficult because it requires installing medical equipment in the home. To address this, we propose novel training approaches that exploit readily accessible real-world sleep sound data. Our key contributions are a new semi-supervised learning technique, sequential consistency loss, which accounts for the time-series nature of sleep sounds, and a semi-supervised contrastive learning method that handles out-of-distribution data in unlabeled home recordings. We evaluated our model on various datasets, including a labeled home sleep sound dataset and the public PSG-Audio dataset, demonstrating its robustness and generalizability across real-world scenarios.
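The paper does not spell out the sequential consistency loss here, so the sketch below is only a rough illustration of how a sequence-level consistency objective of this kind is commonly formed in PyTorch: pseudo-labels from a weakly augmented view of a sound-epoch sequence supervise the strongly augmented view, with a confidence mask applied per epoch. The function name, shapes, and threshold are assumptions, not the authors' actual formulation.

```python
# Hypothetical sketch of a sequence-level consistency loss (FixMatch-style),
# NOT the paper's actual formulation: names, shapes, and the confidence
# threshold are assumptions for illustration only.
import torch
import torch.nn.functional as F

def seq_consistency_loss(logits_weak: torch.Tensor,
                         logits_strong: torch.Tensor,
                         threshold: float = 0.95) -> torch.Tensor:
    """Consistency between predictions on weakly/strongly augmented views.

    Args:
        logits_weak:   (batch, time, classes) logits from a weakly augmented
                       sequence of sleep-sound epochs.
        logits_strong: (batch, time, classes) logits from a strongly augmented
                       view of the same sequence.
        threshold:     confidence cutoff for keeping pseudo-labels.
    """
    with torch.no_grad():
        probs = logits_weak.softmax(dim=-1)      # per-epoch class probabilities
        conf, pseudo = probs.max(dim=-1)         # (batch, time)
        mask = (conf >= threshold).float()       # keep only confident epochs

    # Cross-entropy between strong-view predictions and weak-view pseudo-labels,
    # averaged over the confident epochs of each sequence.
    ce = F.cross_entropy(logits_strong.flatten(0, 1), pseudo.flatten(),
                         reduction="none").view_as(mask)
    return (ce * mask).sum() / mask.sum().clamp_min(1.0)


if __name__ == "__main__":
    b, t, c = 4, 20, 5  # e.g., 5 sleep stages, 20 sound epochs per sequence
    loss = seq_consistency_loss(torch.randn(b, t, c), torch.randn(b, t, c))
    print(loss.item())
```

In such a setup, the time dimension lets the loss operate over a whole sequence of sleep epochs at once rather than independent clips, which is the "sequential" aspect the abstract refers to; the exact masking and augmentation strategy in the paper may differ.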

Authors

JongMok Kim
Daewoo Kim
Eunsung Cho
Hai Hong Tran
Joonki Hong
Dongheon Lee
JungKyung Hong
In-Young Yoon
Jeong-Whun Kim
Hyeryung Jang

Acknowledgments

Nojun Kwak was supported by an NRF grant (2021R1A2C3006659) and an IITP grant (2021-0-01343) funded by the Korean government. Hyeryung Jang was supported by an NRF grant (2021R1F1A1063288) funded by the Korean government (MSIT).