GlobalFusion: A Global Attentional Deep Learning Framework for Multisensor Information Fusion

Shengzhong Liu, Shuochao Yao, Jinyang Li, Dongxin Liu, Tianshi Wang, Huajie Shao, Tarek Abdelzaher

UbiComp '20: The ACM International Joint Conference on Pervasive and Ubiquitous Computing 2020
Session: Human Activity Recognition II

Abstract
The paper enhances deep-neural-network-based inference in sensing applications by introducing a lightweight attention mechanism, called the global attention module, for multi-sensor information fusion. This mechanism uses information collected from higher layers of the neural network to selectively amplify the influence of informative features and suppress unrelated noise at the fusion layer. We integrate this mechanism into a new end-to-end learning framework, called GlobalFusion, in which two global attention modules are deployed for spatial fusion and sensing-modality fusion, respectively. Through an extensive evaluation on four public human activity recognition (HAR) datasets, we demonstrate the effectiveness of GlobalFusion at improving information fusion quality. The new approach outperforms state-of-the-art algorithms on all four datasets by a clear margin. We also show that the learned attention weights agree well with human intuition. We then validate the efficiency of GlobalFusion by measuring its inference time and energy consumption on commodity IoT devices. The global attention modules induce only negligible overhead.
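
To make the idea concrete, below is a minimal sketch (not the authors' implementation) of an attention-weighted fusion layer, assuming PyTorch and hypothetical dimensions: a global context vector pooled from higher layers scores each sensor's features, and the resulting weights amplify informative sensors and suppress noisy ones before fusion.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalAttentionFusion(nn.Module):
    """Fuses features from several sensors using attention weights
    conditioned on a global context vector (illustrative sketch only)."""

    def __init__(self, feature_dim: int, context_dim: int):
        super().__init__()
        # Scores each sensor's features against the global context.
        self.score = nn.Linear(feature_dim + context_dim, 1)

    def forward(self, sensor_feats: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # sensor_feats: (batch, num_sensors, feature_dim)
        # context:      (batch, context_dim), e.g. pooled higher-layer features
        num_sensors = sensor_feats.shape[1]
        ctx = context.unsqueeze(1).expand(-1, num_sensors, -1)
        # Attention logits per sensor, normalized across sensors.
        logits = self.score(torch.cat([sensor_feats, ctx], dim=-1)).squeeze(-1)
        weights = F.softmax(logits, dim=1)  # (batch, num_sensors)
        # Re-weight each sensor's features, then fuse by summation.
        return (weights.unsqueeze(-1) * sensor_feats).sum(dim=1)


if __name__ == "__main__":
    # Toy example: 4 sensors, 64-d features, 128-d global context.
    fusion = GlobalAttentionFusion(feature_dim=64, context_dim=128)
    feats = torch.randn(8, 4, 64)
    context = torch.randn(8, 128)
    print(fusion(feats, context).shape)  # torch.Size([8, 64])

The same pattern could, in principle, be applied twice, once across spatial sensor locations and once across sensing modalities, which mirrors the two fusion stages described in the abstract; the exact scoring function and fusion order used by GlobalFusion are defined in the paper.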

DOI: https://doi.org/10.1145/3380999
Web: https://ubicomp.org/ubicomp2020/

Remote Presentations for the ACM International Joint Conference on Pervasive and Ubiquitous Computing 2020 (UbiComp '20)
