Institutional-Repository, University of Moratuwa.  

Combined static and motion features for deep-networks-based activity recognition in videos

dc.contributor.author Ramasinghe, S
dc.contributor.author Rajasegaran, J
dc.contributor.author Jayasundara, V
dc.contributor.author Ranasinghe, K
dc.contributor.author Rodrigo, R
dc.contributor.author Pasqual, AA
dc.date.accessioned 2023-04-20T08:51:56Z
dc.date.available 2023-04-20T08:51:56Z
dc.date.issued 2019
dc.identifier.citation Ramasinghe, S., Rajasegaran, J., Jayasundara, V., Ranasinghe, K., Rodrigo, R., & Pasqual, A. A. (2019). Combined static and motion features for deep-networks-based activity recognition in videos. IEEE Transactions on Circuits and Systems for Video Technology, 29(9), 2693–2707. https://doi.org/10.1109/TCSVT.2017.2760858 en_US
dc.identifier.issn 1051-8215 en_US
dc.identifier.uri http://dl.lib.uom.lk/handle/123/20900
dc.description.abstract Activity recognition in videos in a deep-learning setting—or otherwise—uses both static and pre-computed motion components. The method of combining the two components, while keeping the burden on the deep network low, remains uninvestigated. Moreover, it is not clear what the level of contribution of individual components is, and how to control the contribution. In this work, we use a combination of CNN-generated static features and motion features in the form of motion tubes. We propose three schemas for combining static and motion components: based on a variance ratio, principal components, and Cholesky decomposition. The Cholesky-decomposition-based method allows the control of contributions. The ratio given by variance analysis of static and motion features matches well with the experimental optimal ratio used in the Cholesky-decomposition-based method. The resulting activity recognition system is better than or on par with the existing state-of-the-art when tested with three popular datasets. The findings also enable us to characterize a dataset with respect to its richness in motion information. en_US
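The abstract's Cholesky-based fusion with controllable contributions can be illustrated with a minimal sketch. This is an assumption about the general technique, not the authors' exact formulation: the Cholesky factor of the 2x2 matrix [[1, r], [r, 1]] yields fusion weights r and sqrt(1 - r^2), so a single parameter r controls how strongly the fused feature tracks the static stream versus the motion stream. The function name `cholesky_fuse` and the feature dimensions are hypothetical.

```python
import numpy as np

def cholesky_fuse(static_feat, motion_feat, r):
    """Fuse two feature vectors with a controllable contribution ratio r.

    The Cholesky factor of [[1, r], [r, 1]] is
        L = [[1, 0], [r, sqrt(1 - r^2)]],
    whose second row gives the fusion weights:
        fused = r * static + sqrt(1 - r^2) * motion.
    r = 1 keeps only the static stream; r = 0 keeps only motion.
    """
    # Standardize each stream so the weights act on comparable scales.
    s = (static_feat - static_feat.mean()) / (static_feat.std() + 1e-8)
    m = (motion_feat - motion_feat.mean()) / (motion_feat.std() + 1e-8)
    return r * s + np.sqrt(1.0 - r ** 2) * m

rng = np.random.default_rng(0)
static = rng.normal(size=512)   # stand-in for CNN static features
motion = rng.normal(size=512)   # stand-in for motion-tube features
fused = cholesky_fuse(static, motion, r=0.7)
print(fused.shape)  # (512,)
```

Because the two standardized streams each have unit variance, the fused vector also has (approximately) unit variance for any r, which is what makes r an interpretable contribution knob rather than a raw mixing weight.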
dc.language.iso en en_US
dc.publisher IEEE en_US
dc.subject Activity recognition en_US
dc.subject Fusing features en_US
dc.subject Convolutional Neural Networks (CNN) en_US
dc.subject Recurrent Neural Networks (RNN) en_US
dc.subject Long Short-Term Memory (LSTM) en_US
dc.title Combined static and motion features for deep-networks-based activity recognition in videos en_US
dc.type Article-Full-text en_US
dc.identifier.year 2019 en_US
dc.identifier.journal IEEE Transactions on Circuits and Systems for Video Technology en_US
dc.identifier.issue 9 en_US
dc.identifier.volume 29 en_US
dc.identifier.database IEEE Xplore en_US
dc.identifier.pgnos 2693 - 2707 en_US
dc.identifier.doi 10.1109/TCSVT.2017.2760858 en_US