1 shared paper found.
1.
JZY (2022-07-31 16:12):
#paper DOI: 10.1109/TNSRE.2021.3110665 Multi-View Spatial-Temporal Graph Convolutional Networks With Domain Generalization for Sleep Stage Classification. This paper was accepted by IEEE TNSRE, a top journal in the brain-computer interface field. The work proposes a multi-view spatial-temporal graph convolutional network with domain generalization for sleep staging. The model uses a domain-generalization approach to effectively address inter-subject variability, extracting de-personalized (subject-invariant) sleep features without requiring any target-domain data, which improves the model's generalization. At the same time, it fully models the spatial properties of multi-view brain networks (a brain functional-connectivity view and a brain spatial-distance view). Compared with existing SOTA models, it achieves the best performance.
Abstract:
Sleep stage classification is essential for sleep assessment and disease diagnosis. Although previous attempts to classify sleep stages have achieved high classification performance, several challenges remain open: 1) How to effectively utilize time-varying spatial and temporal features from multi-channel brain signals remains challenging. Prior works have not been able to fully utilize the spatial topological information among brain regions. 2) Due to the many differences found in individual biological signals, how to overcome the differences of subjects and improve the generalization of deep neural networks is important. 3) Most deep learning methods ignore the interpretability of the model to the brain. To address the above challenges, we propose a multi-view spatial-temporal graph convolutional network (MSTGCN) with domain generalization for sleep stage classification. Specifically, we construct two brain view graphs for MSTGCN based on the functional connectivity and physical distance proximity of the brain regions. The MSTGCN consists of graph convolutions for extracting spatial features and temporal convolutions for capturing the transition rules among sleep stages. In addition, an attention mechanism is employed for capturing the most relevant spatial-temporal information for sleep stage classification. Finally, domain generalization and MSTGCN are integrated into a unified framework to extract subject-invariant sleep features. Experiments on two public datasets demonstrate that the proposed model outperforms the state-of-the-art baselines.
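To make the two core operations in the abstract concrete, here is a minimal NumPy sketch (not the authors' code) of a spatial graph convolution over EEG channels followed by a temporal convolution over consecutive sleep epochs. The adjacency matrix, feature shapes, and helper names (`normalize_adjacency`, `graph_conv`, `temporal_conv`) are illustrative assumptions; the real MSTGCN uses learned weights, attention, and a domain-generalization branch on top of these building blocks.

```python
import numpy as np

def normalize_adjacency(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, standard in GCNs.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def graph_conv(X, A_norm, W):
    # X: (epochs, channels, features). Propagates channel features over
    # the brain graph (e.g. a functional-connectivity view), then ReLU.
    return np.maximum(A_norm @ X @ W, 0.0)

def temporal_conv(H, kernel):
    # H: (epochs, channels, features). A 1-D convolution along the epoch
    # axis captures transition rules between neighboring sleep stages.
    k = len(kernel)
    out = np.zeros((H.shape[0] - k + 1,) + H.shape[1:])
    for t in range(out.shape[0]):
        out[t] = sum(kernel[i] * H[t + i] for i in range(k))
    return out

rng = np.random.default_rng(0)
n_epochs, n_channels, n_feats = 10, 6, 4          # toy sizes, for illustration
X = rng.standard_normal((n_epochs, n_channels, n_feats))
A_func = rng.random((n_channels, n_channels))
A_func = (A_func + A_func.T) / 2                  # symmetric "connectivity" view
W = rng.standard_normal((n_feats, n_feats))       # learnable weights in practice

H = graph_conv(X, normalize_adjacency(A_func), W)
Z = temporal_conv(H, kernel=[0.25, 0.5, 0.25])
print(Z.shape)  # (8, 6, 4): 10 epochs shrunk by the length-3 temporal kernel
```

In the paper's multi-view setting, a second graph built from electrode physical distances would be processed the same way and the two views fused, with attention weighting the most informative spatial-temporal features.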