Abstract
Motion capture (mo-cap) technology is widely used across many fields, with animation in film and video games among its primary applications. However, traditional mo-cap systems are costly because they rely on specialized hardware and software. This paper presents a low-cost, real-time mo-cap system designed specifically for animation. The system tracks a person's movements using 3D human pose estimation and maps those movements onto a virtual model using inverse kinematics. It is implemented with MediaPipe’s BlazePose model for pose estimation and StereoArts’ full-body inverse kinematics algorithm in Unity, and it supports input from both prerecorded video and webcams, offering flexibility in how motion data is captured. While the system performs well under controlled conditions (good lighting, a single person in the frame, and slow, clear movements), it struggles with input that includes self-occlusions, motion jitter, poor lighting, or multiple people. Traditional motion capture systems are less affected by these issues but cost significantly more, making this system a more accessible alternative despite its limitations. More broadly, this work demonstrates the potential of computer vision and machine learning-based mo-cap as an alternative to traditional systems.
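To illustrate the inverse-kinematics step described above (mapping an estimated joint position onto a virtual limb), here is a minimal sketch of an analytic two-bone IK solver in Python. This is an illustrative example only, not the StereoArts full-body algorithm the thesis actually uses in Unity; the function names, the 2D simplification, and the law-of-cosines approach are assumptions made for clarity.

```python
import math

def two_bone_ik(tx, ty, l1, l2):
    """Illustrative 2D two-bone IK via the law of cosines.

    Given a target point (tx, ty) and bone lengths l1 (upper) and l2
    (lower), return (shoulder, elbow) joint angles in radians that
    place the end of the second bone at the target (clamped to the
    reachable range if the target is too near or too far).
    """
    d = math.hypot(tx, ty)
    # Clamp the target distance to the annulus the chain can reach;
    # the small epsilon avoids division by zero when d would be 0.
    d = max(max(abs(l1 - l2), 1e-9), min(l1 + l2, d))
    # Relative elbow rotation from the law of cosines.
    cos_e = (d * d - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_e)))
    # Shoulder angle: direction to the target minus the interior
    # angle between the first bone and the target direction.
    cos_a = (l1 * l1 + d * d - l2 * l2) / (2.0 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_a)))
    return shoulder, elbow

def fk(shoulder, elbow, l1, l2):
    """Forward kinematics, used to verify the IK solution."""
    return (l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow),
            l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow))
```

In a full system, a solver of this kind would be driven frame by frame by the pose estimator's joint predictions, which is what lets the virtual model mirror the tracked person in real time.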
Advisor
As’ad, Asa’d
Department
Computer Science
Recommended Citation
Mwindaare, Torence, "A Low-Cost, Real-Time Motion Capture Framework for Virtual Model Animation Using Pose Estimation and Inverse Kinematics" (2025). Senior Independent Study Theses. Paper 11295.
https://openworks.wooster.edu/independentstudy/11295
Keywords
motion capture, model animation, pose estimation, inverse kinematics, computer vision, BlazePose, Unity
Publication Date
2025
Degree Granted
Bachelor of Arts
Document Type
Senior Independent Study Thesis
© Copyright 2025 Torence Mwindaare