Sprinter with motion capture markers and metrics overlaid

Calibration process

PoseMech

PoseMech is a multi-stage computer vision and biomechanical inference pipeline. It begins with a deep learning–based 2D convolutional neural network (CNN) pose estimator that extracts body keypoints from monocular RGB video. These 2D landmarks are temporally stabilized and passed to a 3D reconstruction module, which infers depth-aware kinematic structure from geometric constraints and learned pose priors to produce a coherent 3D skeletal representation.

The reconstructed pose is then mapped into a biomechanical computation layer built on 57 anatomically structured landmarks. These landmarks define segment coordinate systems for the trunk, pelvis, and upper and lower extremities, enabling calculation of joint angles, segment orientations, and task-specific kinematic metrics. From this articulated landmark model, PoseMech estimates joint flexion/extension angles and derives workload-relevant biomechanical indicators through kinematic transformations and inverse-dynamics–inspired approximations.

By combining 2D CNN perception, 3D pose reconstruction, and the structured 57-landmark model, PoseMech converts raw visual motion into quantitative ergonomic assessments. This layered architecture enables contactless estimation of biomechanical workload while preserving anatomical interpretability and computational scalability for real-world deployment.
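As a minimal sketch of the joint-angle step described above: once 3D landmarks are available, a flexion/extension angle at a joint can be computed from the two adjacent segment vectors. The function name, landmark choices, and coordinates below are illustrative assumptions, not PoseMech's actual API.

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` between the proximal and distal segment vectors."""
    u = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against floating-point values slightly outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Hypothetical knee angle from hip, knee, ankle landmarks (metres, camera frame)
hip, knee, ankle = [0.0, 1.0, 0.0], [0.0, 0.5, 0.05], [0.0, 0.0, 0.0]
knee_deg = joint_angle(hip, knee, ankle)
```

In practice the per-frame angles would be computed over the stabilized landmark trajectories, yielding the time-series signals shown in the biomechanical signal plots.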

Precision Graph
Outdoor mocap in the mountains.

PoseMech — Interactive Real-Time Pipeline Demo

Four-panel view of the PoseMech workflow: synchronized 2D capture, mesh overlay, real-time biomechanical signals, and an open scene visualization. Use the buttons to switch plot channels.

2D Capture
3D Mesh Overlay
Biomech Signal Plots
Open Scene (Biomech Viewer)

Block Diagram & System Structure

Pipeline: Capture → 2D Keypoints → 3D Reconstruction / Mesh → Biomech Estimation → Visualization & Reports.
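The stages in the block diagram can be sketched as composed functions. All names, signatures, and stub outputs below are hypothetical placeholders for illustration; PoseMech's internal modules are not shown here.

```python
def detect_keypoints_2d(frame):
    """CNN pose-estimator stage (stub): frame -> list of (x, y, confidence)."""
    return [(0.5, 0.5, 0.9)]  # placeholder keypoint

def lift_to_3d(keypoints_2d):
    """3D reconstruction stage (stub): 2D keypoints -> 3D landmarks."""
    return [(x, y, 0.0) for x, y, _ in keypoints_2d]

def estimate_biomech(landmarks_3d):
    """Biomechanical estimation stage (stub): landmarks -> metric dict."""
    return {"knee_flexion_deg": 12.0}

def run_pipeline(frame):
    """Capture -> 2D keypoints -> 3D reconstruction -> biomech estimation."""
    keypoints_2d = detect_keypoints_2d(frame)
    landmarks_3d = lift_to_3d(keypoints_2d)
    return estimate_biomech(landmarks_3d)
```

Each stage consumes only the previous stage's output, so individual modules (e.g. the pose estimator) can be swapped without touching the rest of the pipeline.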

PoseMech block diagram

Related Publications

  • Omidokun, J., Egeonu, D., Jia, B., & Yang, L. (2024). Leveraging digital perceptual technologies for remote perception and analysis of human biomechanical processes: A contactless approach for workload and joint force assessment. arXiv preprint, arXiv:2404.01576.
  • Egeonu, D., Jia, B., Omidokun, J., & Yang, L. (2024). Biomechanical assessment of exoskeleton intervention for injured and recovering workers: A simulation study of bending tasks. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 68(1).
  • Omidokun, J., Egeonu, D., Jia, B., & Yang, L. (2025). Leveraging digital perceptual technologies for analysis of human biomechanical processes: A contactless approach for workload assessment. IISE Transactions on Occupational Ergonomics and Human Factors, 1–14.
  • Egeonu, D., Omidokun, J., & Jia, B. (2025). Field-applicable ground reaction force estimation through deep learning: Enhancing biomechanical and ergonomic assessments. IISE Annual Conference Proceedings, 1–6.