RAI theory, practice, and critique
We critically analyse current methods and practices of Responsible AI, and develop new theories and frameworks for better practice.
Publications
- A. Ghoshal, M. Brandao, R. Abu-Salma, and S. Modgil, “Embodied AI at the Margins: Postcolonial Ethics for Intelligent Robotic Systems,” in AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2025.
As artificial intelligence (AI)-powered robots increasingly permeate global societies, critical questions emerge about their ethical governance in diverse cultural contexts. This paper interrogates the adequacy of dominant roboethics frameworks when applied to Global South environments, where unique sociotechnical landscapes demand a reevaluation of Western-centric ethical assumptions. Through thematic analysis of seven major ethical standards for AI and robotics, we uncover systemic limitations that present challenges in non-Western contexts, such as assumptions about standardized testing infrastructures, individualistic notions of autonomy, and universalized ethical principles. The uncritical adoption of these frameworks risks reproducing colonial power dynamics in which technological authority flows from centers of AI production rather than from the communities most affected by deployment. Instead of replacing existing frameworks entirely, we propose augmenting them through four complementary ethical dimensions developed through a postcolonial lens: epistemic non-imposition, onto-contextual consistency, agentic boundaries, and embodied spatial justice. These principles provide conceptual scaffolding for technological governance that respects indigenous knowledge systems, preserves cultural coherence, accounts for communal decision structures, and enhances substantive capabilities for Global South communities. The paper demonstrates practical implementation pathways for these principles across technological life cycles, offering actionable guidance for dataset curation, task design, and deployment protocols that mitigate power asymmetries in cross-cultural robotics implementation. This approach moves beyond surface-level adaptation to re-conceptualize how robotic systems may ethically function within the complex social ecologies of the Global South while fostering genuine technological sovereignty.
- A. Ghoshal, M. Brandao, and R. Abu-Salma, “Value Alignment in the Global South: A Multidimensional Approach to Norm Elicitation in Indian Contexts,” in ICLR 2025 Workshop on Bidirectional Human-AI Alignment (BiAlign), 2025.
This paper addresses critical gaps in artificial intelligence (AI) value alignment research concerning historically marginalized communities in the Global South, with a specific focus on Dalits and Adivasis in India. We propose a multidimensional approach that integrates B.R. Ambedkar’s and Amartya Sen’s theoretical frameworks for social justice with Clifford Geertz’s thick description methodology to develop context-sensitive norm elicitation processes. By examining how deeply entrenched sociopolitical hierarchies influence these communities’ agency to process and express information about their own values, we demonstrate that conventional approaches to value alignment inadequately address the unique challenges faced by these communities. Our framework emphasizes the role of Indian AI missions in creating culturally relevant scenarios for norm elicitation, ensuring meaningful participation of marginalized communities in AI alignment processes. This approach not only advances the discourse on inclusive AI development, but also provides practical strategies for implementing value alignment methodologies that acknowledge and address historical power dynamics.
- M. Brandao, M. Mansouri, and M. Magnusson, “Editorial: Responsible Robotics,” Frontiers in Robotics and AI, vol. 9, Jun. 2022.
- M. Brandao, “Normative roboticists: the visions and values of technical robotics papers,” in IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2021, pp. 671–677.
Visions have an important role in guiding and legitimizing technical research, as well as contributing to expectations of the general public towards technologies. In this paper we analyze technical robotics papers published between 1998 and 2019 to identify themes, trends and issues with the visions and values promoted by robotics research. In particular, we identify the themes of robotics visions and implicitly normative visions; and we quantify the relative presence of a variety of values and applications within technical papers. We conclude with a discussion of the language of robotics visions, marginalized visions and values, and possible paths forward for the robotics community to better align practice with societal interest. We also discuss implications and future work suggestions for Responsible Robotics and HRI research.
- E. T. Williams et al., “Begin with the human: Designing for safety and trustworthiness in cyber-physical systems,” in Human-Machine Shared Contexts, Academic Press, 2020, pp. 341–357.
- C. Bentley et al., “Including women in AI-enabled smart cities: Developing gender-inclusive AI policy and practice in the Asia-Pacific region,” in AI for Social Good, APRU, 2020.
- E. Nabavi, K. A. Daniell, E. T. Williams, and C. M. Bentley, “AI for sustainability: a changing landscape,” in Artificial Intelligence: For Better or Worse, Future Leaders, 2019, pp. 157–176.
- E. T. Williams, C. Bentley, K. A. Daniell, N. Derwort, K. Leins, and E. Nabavi, “Complexity is not new: How our own technological history can teach us about AI,” in Artificial Intelligence: For Better or Worse, Future Leaders, 2019.