Biomechanics
Developing a markerless motion capture system for repositioning human body models
Diego Pensado (he/him/his)
Student
St. Mary's University
San Antonio, Texas, United States
Seth Mischo
Graduate Research Student
Virginia Tech Wake Forest School of Biomedical Engineering
Winston-Salem, North Carolina, United States
F. Scott Gayzik, Ph.D.
Associate Professor
Wake Forest University School of Medicine, United States
The Global Human Body Models Consortium (GHBMC) develops Finite Element (FE) models of the human body to simulate biomechanics and kinematics under a variety of boundary conditions. Different models incorporate different anthropometries, ranging from a 5th percentile to a 95th percentile model suite. These GHBMC models are validated against experimental impact tests and must replicate the test configuration as biofidelically as possible. Precise distance and angle measurements, such as joint angles and distances from impactors, are required to obtain accurate simulation results. This study aimed to develop a markerless motion capture system, using machine learning techniques for human pose estimation, to reposition GHBMC models.
The YOLOv7 machine learning detection algorithm1 was used to develop the repositioning tool. The algorithm was integrated with a post-hoc stereo vision (PHSV) setup to form the human pose estimation2 system. PHSV was implemented by placing two cameras at perpendicular angles so that one camera defined the x- and y-axes and the other defined the y- and z-axes. This setup is not strictly stereo vision, as the cameras do not triangulate points into 3D coordinates in real time. A subject stood with their full body in frame so that a variety of anatomical landmarks could be tracked. The YOLOv7 inference code was modified to output the coordinates of key points and their confidence scores. A Python script was created to read the 2D coordinates from cameras 1 and 2 for each frame and transform them into 3D coordinates. Joint angles were calculated from the resulting skeletal motion and applied to a primitive pedestrian model for repositioning.
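A minimal sketch of the PHSV post-processing step is shown below. It assumes the modified YOLOv7 runs have already exported per-frame key points as (x, y, confidence) arrays for each camera; the function and variable names (fuse_phsv, front_xy, side_zy) are illustrative rather than the project's actual code, and camera-to-camera scale calibration is omitted for brevity.

```python
import numpy as np

# COCO-style keypoint indices used by the YOLOv7 pose model (17 keypoints).
HIP_R, KNEE_R, ANKLE_R = 12, 14, 16

def fuse_phsv(front_xy, side_zy, conf_front, conf_side, conf_min=0.5):
    """Combine 2D key points from two perpendicular cameras into 3D points.

    front_xy : (17, 2) array of (x, y) pixel coordinates from the front camera
    side_zy  : (17, 2) array of (z, y) pixel coordinates from the side camera
    Key points below the confidence threshold in either view are set to NaN.
    """
    pts3d = np.full((17, 3), np.nan)
    valid = (conf_front >= conf_min) & (conf_side >= conf_min)
    pts3d[valid, 0] = front_xy[valid, 0]   # x from the front view
    pts3d[valid, 1] = front_xy[valid, 1]   # y is shared by both views
    pts3d[valid, 2] = side_zy[valid, 0]    # z (depth) from the side view
    return pts3d

def joint_angle(p_prox, p_joint, p_dist):
    """Angle in degrees at p_joint between the two adjacent body segments."""
    u, v = p_prox - p_joint, p_dist - p_joint
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

if __name__ == "__main__":
    # Synthetic single-frame data standing in for the two camera outputs.
    rng = np.random.default_rng(0)
    front_xy = rng.uniform(0, 640, size=(17, 2))
    side_zy = rng.uniform(0, 640, size=(17, 2))
    conf = np.ones(17)
    pts = fuse_phsv(front_xy, side_zy, conf, conf)
    print("Right knee angle:", joint_angle(pts[HIP_R], pts[KNEE_R], pts[ANKLE_R]))
```

Because the two views share only the y-axis, discarding key points that fall below the confidence threshold in either camera avoids mixing unreliable depth estimates into the joint-angle calculation.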
Results: Pose estimation and GHBMC repositioning were accurately achieved with the YOLOv7 markerless motion capture algorithm and the PHSV system. Using machine learning techniques and PHSV, a system was developed that converts 2D key point positions into 3D coordinates and joint angles. The method successfully repositioned the primitive pedestrian model and can be extended to support other GHBMC models.
Conclusion and Discussion: This study indicates that markerless motion capture algorithms can be used to reposition GHBMC models for FE simulations. The method highlights the possibility of using machine learning for subject-specific skeletal motion tracking. Developing a subject-specific approach would significantly increase the applicability of simulation data in domains ranging from medical procedures to athletic performance evaluation. A more robust camera setup would also improve the fidelity of the anatomical landmarking process, yielding higher resolution coordinates, improved pose estimation, better model repositioning, and ultimately more accurate FE predictions. Challenges remain in developing systems for real-world, real-time applications, because variables such as lighting conditions, clothing, and other environmental factors affect the reliability of the data. Nonetheless, this proof-of-concept study supports the view that accurate markerless motion capture systems can be developed and reliably deployed for model repositioning.
This project was supported by the Global Human Body Models Consortium and by the NSF REU Site (Award #1950281) in the Department of Biomedical Engineering at Wake Forest University School of Medicine.
1Wang, C.-Y., A. Bochkovskiy, H.-Y. M. Liao. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv:2207.02696, DOI:10.48550/arXiv.2207.02696, 2022.
2Augmented Startups. YOLOv7 Pose-Estimation Tutorial, 2023. <https://github.com/augmentedstartups/pose-estimation-yolov7> [accessed 20 July 2023].