Research Project Outcomes: Driving inclusive urban growth through community-centric AI audits

Author: Tech Ethics Lab

[Image: View of Singapore at night with a graphic depicting interconnectedness]
Singapore's Wi-Fi communication network is the backbone of its smart city infrastructure, enabling connectivity, supporting intelligent systems, and enhancing residents’ quality of life.

Singapore has established itself as a global leader in smart city development, known for its innovative use of advanced technology and its forward-thinking practices. The city-state consistently ranks at the top of the IMD Smart City Index, reflecting its unwavering commitment to building a sustainable urban future.

Singapore set a high standard for what it means to be a smart city in the 21st century with the launch of its Smart Nation initiative in 2014. This ambitious government program aims to integrate technology into every aspect of daily life, stimulate economic growth, and create a highly efficient, technology-driven ecosystem. The project reflects Singapore's strategic vision of transforming itself into a hub of innovation, leveraging technology to create a more connected, responsive, and resilient city-state.

Artificial Intelligence has been a cornerstone of Singapore's tech-driven progress, considerably elevating living standards. By analyzing vast datasets, AI and machine learning have become essential tools for predicting and optimizing resource consumption, minimizing waste, and improving transportation efficiency. These advancements have been instrumental in enabling more effective management of renewable energy sources and reducing the city-state's carbon footprint.

However, as Singapore advances its data-driven initiatives, it faces the significant challenge of ensuring that AI systems are designed to benefit all segments of society. A concerted effort must be made to support vulnerable groups—including older adults, individuals with disabilities, and marginalized communities—so that everyone shares the benefits of technological progress equitably.

The Centre for AI and Data Governance (CAIDG) at Singapore Management University has put forward a recommendation for advancing Singapore's future-ready objectives. A study co-authored by CAIDG Research Associate Wenxi Zhang and her team, supported by the Notre Dame-IBM Technology Ethics Lab, highlights the need for a framework that completes the feedback loop in AI audits.

The team proposes that external auditors collaborate with internal auditors to validate their assessments by incorporating community feedback. This approach ensures that the voices of marginalized groups are heard, making AI audits more responsive to the broader community's concerns.
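
To make that loop concrete, here is a minimal illustrative sketch in Python. It is not code from the study; every name in it (CommunityFeedback, AuditFinding, close_the_loop) is hypothetical. It simply encodes the rule the team describes: an internal assessment counts as validated only once it has passed external review and no serious community feedback contradicts it.

```python
from dataclasses import dataclass, field

@dataclass
class CommunityFeedback:
    """One report from a community member about a deployed AI system (hypothetical model)."""
    group: str      # e.g., "older adults" or "persons with disabilities"
    concern: str    # free-text description of the problem experienced
    severity: int   # 1 (minor friction) to 5 (blocks access to a service)

@dataclass
class AuditFinding:
    """An internal audit conclusion awaiting external and community validation."""
    system: str
    internal_assessment: str
    externally_reviewed: bool = False
    feedback: list = field(default_factory=list)

def close_the_loop(finding: AuditFinding, threshold: int = 3) -> str:
    """Confirm a finding only when it has been externally reviewed and no
    community report at or above the severity threshold contradicts it."""
    if not finding.externally_reviewed:
        return "open: awaiting external auditor review"
    escalations = [f for f in finding.feedback if f.severity >= threshold]
    if escalations:
        groups = sorted({f.group for f in escalations})
        return "reopened: community concerns from " + ", ".join(groups)
    return "closed: assessment validated against community feedback"

# A finding that auditors had marked low-risk is reopened once feedback
# from a vulnerable group contradicts it.
finding = AuditFinding("social-services chatbot", "low risk of unequal access",
                       externally_reviewed=True)
finding.feedback.append(CommunityFeedback(
    "older adults", "voice menu times out before I can respond", 4))
print(close_the_loop(finding))  # -> reopened: community concerns from older adults
```

The design choice the sketch highlights is that community feedback is not advisory: it can reopen a finding that internal and external auditors had already agreed on.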

Zhang and her colleagues, Sharanya Shanmugam, Jason Grant Allen, Willow Wong, and Olivia Xu, concentrated on human-AI interactions within community settings. They propose a systematic approach to monitor potential risks from intelligent systems.

“Technological advancement must be balanced with community-centric values to ensure Singapore’s ongoing tech-driven success,” Zhang says. “By actively engaging community members, our approach ensures the ethical and equitable deployment of AI-driven platforms for all.”

Traditional AI auditing frameworks help organizations understand risks, implement best practices, and comply with regulations. However, the CAIDG researchers emphasize that these overarching elements, while critical, must be expanded if AI systems are to earn full confidence and credibility.

The team argues that including perspectives from those directly interacting with AI-based systems is equally important. “Users’ experiences and insights are invaluable for uncovering unforeseen risks and challenges that may not be apparent during the initial design and implementation phases,” Zhang says.

By acknowledging the importance of user feedback, the CAIDG researchers emphasize the need for a more holistic approach to AI governance—one that combines technical rigor with real-world experiences.

Beyond compliance

While it's crucial to base AI audits on procedural regularity, harm mitigation, and operational independence, an equally essential yet frequently overlooked aspect is how various communities perceive and experience AI once it is deployed.

CAIDG Research Associate Sharanya Shanmugam says what’s needed are “mechanisms that support people’s engagement with AI deployments and document—from the ground up—potential risks to collective equity.”

Shanmugam says populations facing greater challenges often interact with AI in ways that deviate from the experiences of more typical users. For example, vulnerable groups may struggle to navigate AI-driven systems designed for social services. If these challenges are not adequately addressed, AI technologies could inadvertently create barriers to essential resources, further marginalizing these populations.

The researchers highlight that Singapore already has frameworks integrating residents’ perspectives into AI oversight. One such system is the Model AI Governance Framework, which provides guidelines for the responsible deployment of AI. It emphasizes two fundamental principles: AI-assisted decision-making must be explainable, transparent, and fair, and AI systems should be human-centric.

While these guidelines offer a solid foundation, their impact could be amplified by making them legally binding. Currently, organizations are encouraged to voluntarily strengthen their governance and risk management practices, with companies tasked with deciding the appropriate level of human oversight in AI-driven decision-making processes.

The CAIDG team also suggests that mandating human participation in AI audits could help align these systems with technical standards and societal values.

Exploring community-centric AI audits

In August 2023, the researchers convened a roundtable discussion, bringing together experts from diverse sectors, including industry, government, academia, nonprofits, and finance. The objective was to assess the feasibility and practicality of implementing community-centric AI audit frameworks within Singapore’s regulatory structure.

The participants emphasized that AI developers should prioritize end-users' sentiments, acknowledging them as critical stakeholders in the development process. They pointed out that this approach aligns with initiatives like Singapore’s AI Verify, a testing framework that assesses AI system performance against internationally recognized principles through standardized tests. (AI Verify is consistent with global AI governance standards, including those established by the European Union and the Organisation for Economic Co-operation and Development.)
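
For a rough sense of what a standardized test against a principle such as fairness can look like in practice, the sketch below computes a demographic parity gap, a common fairness measure. This is a generic illustration under our own assumptions, not AI Verify’s actual interface, and the 0.10 tolerance is an arbitrary placeholder.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-outcome rates between groups,
    plus the per-group rates. A large gap means the system's outcomes differ
    materially across communities and should be flagged for review."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit sample: 1 = service granted, 0 = denied.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)                      # {'A': 0.667, 'B': 0.4}
print(f"parity gap = {gap:.2f}")  # 0.27
tolerance = 0.10                  # arbitrary threshold for this illustration
print("flag for review" if gap > tolerance else "within tolerance")
```

Standardizing checks like this one is what lets different auditors, internal and external, compare results for the same system on equal footing.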

The participants also identified several challenges. They noted that the term "audit" often carries negative connotations in the tech industry, evoking thoughts of burdensome paperwork and rigid regulations—factors hindering creativity and agility. The discussion also explored incentive-based (carrot) and compliance-based (stick) strategies for developing community-focused AI audit frameworks.

Additionally, the participants emphasized the need for assessment teams with a well-rounded mix of strong technical skills, regulatory knowledge, and specialized expertise to conduct effective and balanced audits. These teams must be able to recognize the strengths and limitations of different AI techniques for different audiences while also identifying opportunities to optimize and enhance AI and machine learning systems.

Core findings

Singapore has positioned itself as a global leader in urban technology, with smart AI and machine learning platforms playing pivotal roles in transforming daily life. However, sustaining this progress requires a commitment to inclusivity, ensuring the benefits of these technologies are accessible to everyone, regardless of their background or circumstances.

A promising approach to achieving this inclusivity is the community-centric AI audit framework proposed by researchers from Singapore Management University. Their model aims to create a more equitable technological landscape by incorporating procedural regularity, harm mitigation, and operational independence while emphasizing community input to address AI’s cultural and societal impacts.

As Singapore advances in developing its AI systems, it must address potential inequities that disproportionately affect vulnerable groups, including marginalized communities. The CAIDG framework stresses the importance of integrating community feedback into AI audits, allowing these voices to be heard and to play a role in shaping the technology’s future.

The researchers contend that by making AI audits more inclusive and legally binding, Singapore can strengthen its AI systems’ credibility and societal alignment. Such an approach would ensure that the technology not only supports the city-state’s development goals but also respects and empowers all members of society.

...

Since 2021, the Notre Dame-IBM Technology Ethics Lab has issued calls for proposals to support interdisciplinary research in technology ethics. The 2022–2023 CFPs, focusing on “Auditing AI,” emphasized the need to evaluate and ensure ethical standards in AI systems. One of the 15 projects selected was a proposal titled “AI Audits for Whom? A Community-Centric Approach to Rebuilding Public Trust” by Wenxi Zhang, Sharanya Shanmugam, Jason Grant Allen, Willow Wong, and Olivia Xu at the Centre for AI and Data Governance (CAIDG) at Singapore Management University. The CAIDG researchers stress the crucial role of ethical AI governance and responsible data use in building public trust. They advocate for incorporating community perspectives into AI decision-making and audits. The Notre Dame–IBM Technology Ethics Lab, a critical component of the Institute for Ethics and the Common Good and the Notre Dame Ethics Initiative, promotes interdisciplinary research and policy leadership in technology ethics and is supported by a $20 million investment from IBM.

Originally published by Tech Ethics Lab at techethicslab.nd.edu on September 05, 2024.