Spatio-Temporal Networks for Human Activity Recognition based on Optical Flow in Omnidirectional Image Scenes
Author: | Seidel, Roman
---|---
EAN: | 9783961002054
Subject: | Technology
Language: | English
Pages: | 212
Format: | Paperback / softcover
Publication date: | 01.03.2024
Price: | 21.90 €
This dissertation exploits the human ability to perceive motion in order to infer human activity from data with artificial neural networks. One of its main aims is to determine which modalities, namely RGB images, optical flow and human keypoints, are best suited for human activity recognition (HAR) in omnidirectional data. Since these modalities are not yet available for omnidirectional cameras, they are generated synthetically with a 3D indoor simulation, resulting in a large-scale dataset called OmniFlow. Because ground-truth omnidirectional optical flow is lacking, the OmniFlow dataset is validated using Test-Time Augmentation. Compared with the baseline, Recurrent All-Pairs Field Transforms (RAFT) trained on the FlyingChairs and FlyingThings3D datasets, it was found that only about 1,000 images are needed for fine-tuning to reach a very low End-Point Error (EPE). For the evaluation at activity level, two state-of-the-art convolutional neural networks (CNNs) were used: the Temporal Segment Network (TSN) for the RGB-image and optical-flow modalities, and PoseC3D for the human-keypoint modality. Both CNNs were trained and validated on OmniFlow and on the real-world dataset OmniLab. For both networks, three hyperparameters were varied and the top-1, top-5 and mean accuracies were reported. In addition, confusion matrices showing the class-wise accuracy of the 15 activity classes are given for the RGB-image, optical-flow and human-keypoint modalities.
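The End-Point Error mentioned above is the standard optical-flow metric: the mean Euclidean distance between predicted and ground-truth flow vectors. Below is a minimal NumPy sketch of that formula; the function name and array shapes are illustrative, not taken from the dissertation.

```python
import numpy as np

def end_point_error(flow_pred: np.ndarray, flow_gt: np.ndarray) -> float:
    """Average End-Point Error (EPE) between two flow fields.

    Both arrays have shape (H, W, 2), where the last axis holds the
    (u, v) displacement per pixel. EPE is the mean Euclidean distance
    between predicted and ground-truth flow vectors.
    """
    diff = flow_pred - flow_gt                # per-pixel vector error
    epe_map = np.linalg.norm(diff, axis=-1)   # per-pixel L2 norm, shape (H, W)
    return float(epe_map.mean())              # average over all pixels

# Tiny usage example with random flow fields
rng = np.random.default_rng(0)
pred = rng.normal(size=(4, 4, 2))
gt = rng.normal(size=(4, 4, 2))
print(f"EPE: {end_point_error(pred, gt):.3f}")
```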
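Test-Time Augmentation, used here to validate OmniFlow, aggregates a model's predictions over several transformed versions of the same input. The listing does not spell out the augmentation set, so the sketch below assumes a single horizontal-flip augmentation for a generic flow estimator `model`; note that un-flipping a flow field also requires negating its horizontal component.

```python
import numpy as np

def predict_with_tta(model, img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    """Average a flow prediction over identity and horizontal flip.

    `model(img1, img2)` is assumed to return a flow field of shape (H, W, 2)
    for image pairs of shape (H, W, C).
    """
    flow = model(img1, img2)

    # Second pass on horizontally flipped inputs
    flow_flipped = model(img1[:, ::-1], img2[:, ::-1])
    flow_unflipped = flow_flipped[:, ::-1].copy()  # mirror back along the width axis
    flow_unflipped[..., 0] *= -1.0                 # undo the sign change of u

    return 0.5 * (flow + flow_unflipped)

# Usage with a dummy estimator that returns zero flow
dummy = lambda a, b: np.zeros((*a.shape[:2], 2))
img = np.zeros((4, 6, 3))
print(predict_with_tta(dummy, img, img).shape)  # (4, 6, 2)
```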
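The reported top-1, top-5 and mean accuracies, and the class-wise confusion matrices, follow standard definitions. This is a generic NumPy sketch assuming per-sample class scores and integer labels; it is not code from the thesis.

```python
import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int = 1) -> float:
    """Fraction of samples whose true label is among the k highest scores.

    `scores` has shape (N, C); `labels` has shape (N,) with class indices.
    """
    top_k = np.argsort(scores, axis=1)[:, -k:]        # indices of the k best classes
    hits = (top_k == labels[:, None]).any(axis=1)
    return float(hits.mean())

def confusion_matrix(preds: np.ndarray, labels: np.ndarray,
                     num_classes: int = 15) -> np.ndarray:
    """Rows are true classes, columns are predicted classes."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(labels, preds):
        cm[t, p] += 1
    return cm

def mean_class_accuracy(cm: np.ndarray) -> float:
    """Mean of per-class recalls (diagonal over row sums)."""
    row_sums = np.maximum(cm.sum(axis=1), 1)  # guard against empty classes
    return float(np.mean(np.diag(cm) / row_sums))

# Usage with random scores over the 15 activity classes
rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 15))
labels = rng.integers(0, 15, size=100)
print(top_k_accuracy(scores, labels, k=5))
print(mean_class_accuracy(confusion_matrix(scores.argmax(axis=1), labels)))
```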