Research Project Outcomes: Mitigating Bias in Motion Capture Technology

Author: Tech Ethics Lab

An exhibitor demonstrates motion capture (mo-cap) technology at SIGGRAPH, an annual conference renowned for its impact on computer graphics and interactive techniques.

Motion capture, or mo-cap, is a process that digitally tracks and records the movements of objects or living beings in space. This cutting-edge technique, with its potential to create virtual worlds that closely mimic reality, has a range of applications in film and TV, science, and gaming. Yet it continues to be built on outdated and flawed data.

The early development of mo-cap technology owes much to government initiatives. In 1955, the U.S. Air Force conducted a pivotal study that used the bodies of white, athletic males to design an optimal cockpit, focusing on pilots' range of motion. The researchers also used male cadavers to replicate human anatomy. Twenty years later, a study prepared for the National Highway Traffic Safety Administration followed similar methodologies, using male cadavers to develop impact-protection systems for vehicles.

Today, outdated methods persist. For example, designers of fall detection technology hire stunt actors to simulate falls instead of involving older adults.

“The disconnect between the real social world and how mo-cap is developed to intervene calls for urgent change,” says Mona Sloane, coauthor of an analysis funded by the Notre Dame-IBM Technology Ethics Lab. Sloane is an assistant professor of data science and an assistant professor of media studies at the University of Virginia. “The field of mo-cap is growing, potentially scaling the impact of the assumptions baked into these systems. We must ensure these applications are safe and inclusive for all users.”

Flaws embedded in the standards for mo-cap data influence design and pose significant safety risks for individuals who do not conform to the preconceived “typical” body type.

Sloane, along with Abigail Jacobs from the University of Michigan, Emanuel Moss from Intel Labs and the University of Virginia, and Cornell research assistants Emma Harvey and Hauke Sandhaus, presented their findings at ACM CHI 2024 in May. The distinguished human-computer interaction conference awarded the team a Best Paper Honorable Mention for their groundbreaking research.

“Mo-cap innovation relies on a gold standard that preserves the body archetype prevalent in the early days of the technology: white males,” Sloane says. “Social assumptions about what type of body is considered ‘normal’ are mathematically codified in data and evaluation standards still used today.”

Hidden biases, real-world implications

Sloane and her collaborators performed a systematic literature review of 278 mo-cap-related studies. In most cases, they found that mo-cap systems captured the movements of “those who are male, white, able-bodied, and of unremarkable weight.”
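
A minimal Python sketch of the kind of demographic tally such a review might involve is shown below. The field names and example records are illustrative assumptions, not the authors' actual coding scheme or data.

from collections import Counter

# Each reviewed study is summarized as a dictionary of reported subject attributes.
# These records are invented for illustration only.
studies = [
    {"id": "study_001", "sex": "male", "race": "white", "able_bodied": True},
    {"id": "study_002", "sex": "male", "race": "unreported", "able_bodied": True},
    {"id": "study_003", "sex": "female", "race": "white", "able_bodied": True},
]

def tally(records, field):
    # Count how often each value of `field` appears across the reviewed studies.
    return Counter(str(record.get(field, "unreported")) for record in records)

for field in ("sex", "race", "able_bodied"):
    counts = tally(studies, field)
    total = sum(counts.values())
    shares = {value: f"{count / total:.0%}" for value, count in counts.items()}
    print(f"{field}: {shares}")

A tally like this makes representation gaps visible at a glance, which is the kind of pattern the review surfaced across the 278 studies.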

The team also discovered numerous outdated inferences that were carried over to later studies and continue to influence present-day mo-cap research. “Assumptions can create biases which can manifest in everyday devices because they are embedded into mo-cap validation processes,” Sloane says. “It is unsurprising that this has real-world impacts.”

For example, mo-cap technology is utilized in the design of kitchen appliances. By recording and analyzing user movements, designers can identify patterns and habits, leading to more intuitive control placements and features that align with people’s natural behaviors. Equipment, however, is often designed based on the average dimensions and reach of a specific body type. Users who do not fit these standards can find the appliances more challenging to use.

Biases in data standards can significantly impact safety and health. Since crash test dummies are predominantly modeled on male bodies, female occupants experience higher injury rates. Additionally, sensors and imaging technologies in surgical navigation systems may exhibit inherent biases, leading to reduced accuracy for patients with higher body mass indexes.

Sloane says engineers must understand the origins of the gold standard they measure against and confirm that it is, in fact, gold. Technicians should also scrutinize assumptions as part of their jobs to design technologies for all.

“We need more general awareness of the ways in which assumptions find their way into AI,” Sloane says, “because that will help developers be more mindful about the datasets they work with and the evaluation methods they use.”

A human solution

A dangerous assumption is that mo-cap technology is neutral. Sloane and her colleagues want engineers and technicians to know how social aspects have been embedded into mathematical models that seem objective or infrastructural.

“It is well known that every element of a technical system has a social origin, and those who build these systems are prone to bake in their own perspectives,” Sloane says. “This often goes back to the foundational aspects of how a technology is designed and how it is evaluated to answer the question, 'Is it any good?'”

She says it is an important moment for mo-cap systems, as there may still be time to catch and avoid potentially dangerous assumptions before they are further codified into AI-based applications.

Mo-cap systems generate detailed representations of bodies by gathering data from sensors attached to subjects, capturing their movements through space. These schematics are then integrated into tools, including open-source libraries of movement data and measurement systems, which establish baseline standards for human motion.
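
As a rough sketch of that pipeline (not the authors' code), the snippet below records marker trajectories and compares a subject's range of motion against a baseline assumed to come from an existing reference dataset. The marker names and baseline figures are hypothetical; the point is that whatever population the reference dataset represents becomes the standard every new subject is measured against.

import numpy as np

# Marker trajectories: one (frames x 3) array of positions in meters per marker.
# The recordings here are random stand-ins for real sensor data.
rng = np.random.default_rng(0)
recording = {
    "left_knee": rng.normal(loc=0.5, scale=0.1, size=(300, 3)),
    "right_knee": rng.normal(loc=0.5, scale=0.1, size=(300, 3)),
}

# A hypothetical reference range of vertical motion (in meters) derived from an
# existing dataset. If that dataset over-represents one body type, every
# comparison against it inherits the bias.
reference_vertical_range = {"left_knee": 0.45, "right_knee": 0.45}

for marker, frames in recording.items():
    observed = frames[:, 2].max() - frames[:, 2].min()
    deviation = observed - reference_vertical_range[marker]
    print(f"{marker}: observed {observed:.2f} m, "
          f"deviation from reference baseline {deviation:+.2f} m")

The choice of reference values is exactly where social assumptions enter: a subject whose movement differs from the baseline is flagged as deviant, even if the baseline itself is the unrepresentative element.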

While the film industry initially brought mo-cap technology into the public spotlight, its applications extend far beyond entertainment. Coaches and trainers utilize mo-cap to analyze athletes’ movements, enhancing their performance through precise feedback. Surgeons leverage mo-cap for training, enabling them to practice complex procedures in controlled environments. Many home fitness enthusiasts use virtual reality headsets to engage in workouts that make them feel like they are with an instructor, increasing their motivation.

“Collecting your own mo-cap data is expensive and cumbersome, which makes existing datasets ever more important,” Sloane says. “Standards and benchmarks also grow in importance as mo-cap innovation accelerates. We have to ensure they take into account the diversity of real people and real social situations.”

Key takeaways

Sloane and her collaborators uncover how historical and social representations of bodies influence modern technological systems, often resulting in software and hardware that do not work equally for all populations, experiences, or purposes.

Their analysis identified three major historical periods defined by central measurement and validation practices. The Foundation Era (1930-1979) concentrated on anthropometry, the scientific study of human body measurements and proportions. The Standardization Era (1980-1999) introduced markers for recording and analyzing human movement. The Innovation Era (2000-present) has witnessed the development of less intrusive and more accessible technologies, expanding mo-cap applications. By tracing errors across these periods, the researchers show how entrenched assumptions have been codified into the data standards still in use today.

By providing insight into the social practices that shape AI technology design, the researchers emphasize the need to examine deep-seated foundational assumptions, as they have profound and practical real-world implications.

To be genuinely inclusive, technological ecosystems must serve a broader range of people and address existing inequalities. Sloane and her collaborators’ findings highlight the importance of rethinking how mo-cap technologies are shaped and validated, a critical step toward a more equitable AI landscape. By scrutinizing core assumptions, they aim to ensure fairness and relevance for all.

. . .

Since 2021, the Notre Dame-IBM Technology Ethics Lab has issued calls for proposals to support interdisciplinary research in technology ethics. The 2022–2023 CFPs, focusing on “Auditing AI,” emphasized the need to evaluate and ensure ethical standards in AI systems. Among the 15 projects selected was a proposal by Mona Sloane (University of Virginia), Abigail Jacobs (University of Michigan), and Emanuel Moss (Intel Labs and the University of Virginia), “Expanding AI Audits to Include Instruments: Accountability, Measurements, and Data in Motion Capture Technology.” The project expands AI audit frameworks to include hardware and data-collection instruments, assessing their underlying assumptions and the validity of those assumptions in specific contexts, with a focus on motion capture (mo-cap) technology. Sloane and her colleagues advocate for developing a comprehensive audit framework to address these considerations. The Notre Dame-IBM Technology Ethics Lab, a critical component of the Institute for Ethics and the Common Good and the Notre Dame Ethics Initiative, promotes interdisciplinary research and policy leadership in technology ethics and is supported by a $20 million investment from IBM.

 

Originally published by Tech Ethics Lab at techethicslab.nd.edu on July 29, 2024.