- Title: Quantitative movement analysis using single-camera videos
- Speaker: Łukasz Kidziński, Ph.D., Stanford University
- Time: Tuesday, February 9, 2021, at 10:00 a.m. Pacific Time
This event has passed. View the recorded talk and additional resources below.
Many neurological and musculoskeletal diseases impair movement, which limits people’s function and social participation. Quantitative assessment of motion is critical to medical decision-making but is currently possible only with expensive motion capture systems and highly trained personnel. We have developed machine learning-based algorithms for quantifying gait pathology using a single commodity camera, such as those found in smartphones. Our methods aim to increase access to quantitative motion analysis in clinics and at home and to enable researchers to conduct studies of neurological and musculoskeletal disorders at an unprecedented scale.
In the first half of this webinar, we will present our findings from analyzing a dataset of 1792 videos of patients from Gillette Children’s Specialty Healthcare (Kidziński, Ł., Yang, B., Hicks, J.L. et al., 2020). We used these data to train machine learning models and found that single-camera recordings can predict gait parameters with clinically relevant accuracy. Our predictions include cadence, speed, and peak knee flexion, as well as the Gait Deviation Index, a holistic measure of gait abnormality, and the likelihood of undergoing surgery.
The second half of the webinar will be a short hands-on tutorial aimed at helping attendees get started running deep learning algorithms in the cloud to analyze movement from a single-camera video. We will demonstrate the OpenPose pose estimation software; highlight common preprocessing issues, such as missing data, noise, and bias; and discuss techniques to resolve them. We will also guide attendees in using the preprocessed data to derive estimates of metrics used in research and clinical applications. You will have time to try the tutorial during the webinar, and you are welcome to explore it on your own beforehand.
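The preprocessing and metric-estimation steps described above can be sketched in a few lines of Python. This is a minimal illustration, not the tutorial's actual code: it assumes you already have a per-frame vertical trajectory for one ankle keypoint (e.g., extracted from OpenPose output), fills occlusion gaps by linear interpolation, smooths detector jitter, and counts steps as peaks to estimate cadence. The function names, the smoothing parameters, and the synthetic signal are all illustrative choices.

```python
import numpy as np
from scipy.signal import find_peaks, savgol_filter

def fill_gaps(series):
    """Linearly interpolate over NaN frames (e.g., occluded keypoints)."""
    series = np.asarray(series, dtype=float)
    idx = np.arange(len(series))
    valid = ~np.isnan(series)
    return np.interp(idx, idx[valid], series[valid])

def estimate_cadence(ankle_y, fps=30.0):
    """Estimate cadence (steps/min) from one ankle's vertical trajectory.

    Each local peak in the smoothed signal is treated as one step; the
    minimum peak spacing of 0.4 s is an illustrative assumption.
    """
    y = fill_gaps(ankle_y)
    y = savgol_filter(y, window_length=11, polyorder=3)  # suppress jitter
    peaks, _ = find_peaks(y, distance=int(0.4 * fps))
    duration_min = len(y) / fps / 60.0
    return len(peaks) / duration_min

# Synthetic example: a 10 s clip at 30 fps with 2 steps/s and a dropout
fps = 30.0
t = np.arange(0, 10, 1 / fps)
ankle_y = np.sin(2 * np.pi * 2.0 * t)  # 2 steps per second
ankle_y[40:45] = np.nan                # simulated occlusion
print(estimate_cadence(ankle_y, fps))  # approximately 120 steps/min
```

Real pose-estimation output is far noisier than this synthetic sine wave, which is why the tutorial devotes time to missing data, noise, and bias before any metrics are computed.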
Kidziński, Ł., Yang, B., Hicks, J.L. et al. Deep neural networks enable quantitative movement analysis using single-camera videos. Nat Commun 11, 4054 (2020).
This webinar is offered jointly with the Mobilize Center, an NIH-funded Biomedical Technology Resource Center at Stanford University, as part of their webinar series.
Łukasz Kidziński is a research associate in the Neuromuscular Biomechanics Lab at Stanford University, applying state-of-the-art computer vision and reinforcement learning algorithms to improve clinical decisions and treatments. Previously, he was a researcher in the CHILI group (Computer-Human Interaction in Learning and Instruction) at EPFL in Switzerland, where he developed methods for measuring and improving user engagement in massive open online courses. He obtained a Ph.D. in mathematical statistics from the Université Libre de Bruxelles, working on frequency-domain methods for dimensionality reduction in time series.