AI as Kin: Developing Ethical Relationships with AI
By Molly Banks
Introduction
As artificial intelligence edges toward unprecedented levels of sophistication, questions about how to build ethical relationships with potentially conscious AI have become prominent in both philosophical and technological discourse. This paper explores two perspectives on approaching ethical relationships with AI that emerge from these conversations. First, I reconstruct Susan Schneider’s Precautionary Principle, focusing on her approach to avoiding ethical catastrophe as AI evolves. Next, I present key themes and perspectives from North American and Oceanic Indigenous traditions, which offer an alternative method for developing and maintaining ethical relationships with AI. I argue that Schneider’s method relies on an ill-defined consciousness hardline and employs paternalistic Western ontologies that assign moral status hierarchically. In contrast, a method grounded in North American and Oceanic Indigenous ethics includes AI in moral consideration without depending on a narrowly defined concept of consciousness. By escaping the all-or-nothing view of moral status and rejecting the paternalism inherent in Schneider’s approach, this alternative offers a more inclusive and robust framework for developing ethical relationships with AI amidst the uncertainty that an AI intelligence explosion presents.
1 Schneider and The Precautionary Principle
In “How to Catch an AI Zombie,” Susan Schneider considers the hard problem of AI consciousness. She presents a variety of tests designed to identify consciousness in AI but ultimately determines that there is no foolproof consciousness test.¹ In light of this, Schneider is particularly concerned with the ethical implications of developing AI with no way of knowing if or when it becomes conscious. For Schneider, the moment consciousness emerges in AI, AI will necessarily acquire a new moral status that will demand a shift in the nature of the relationship between humans and AI. This shift will primarily involve extending to conscious AI the same rights and legal protections we extend to other conscious beings. Because it marks this critical turning point, tracking the emergence of consciousness is crucial to maintaining an ethical relationship with AI: it allows humans, as developers and users of AI, to bring these changes into effect at the precise moment they are required. Her most salient ethical concern is that, without a way to pinpoint the emergence of consciousness, continuing to interact with AI as we currently do, as tools, could cause us to unknowingly enslave conscious AI.
Schneider presents the Precautionary Principle as a general approach for handling potential risks when gaps in scientific understanding create uncertainty about the severity of the threat. The principle states that if there is a chance of a technology causing catastrophic harm, then its developers must first prove that it will not have such an impact. If there is no reliable proof, the developers ought to cease progress until proof is available. The enslavement of AI would constitute catastrophic harm. For this reason, Schneider argues that we ought to halt the development of domain-general AI until a robust consciousness test is developed, in order to preserve and maintain an ethical relationship with AI.²
2 An Indigenous Approach
North American and Oceanic Indigenous traditions present an alternative approach to developing and maintaining an ethical relationship with AI, one that asks us to reimagine and reorganize our relationship to AI right now. These Indigenous epistemologies emphasize relationality and mutual respect in relationships with other humans, animals, natural resources, and other members of their environment.³ Importantly, these traditions decenter the human and emphasize the responsibility of the human to cultivate mutually beneficial relationships with nonhumans. Noelani Arista and other Kānaka maoli scholars consider how we might welcome AI into the circle of kinship and develop respectful and mutually agreeable relationships with AI, just as we might with other non-human kin. They recognize that AI is materially constructed from the earth’s resources and that forming a relationship with AI involves forming a relationship with the stones and metals of the earth.⁴ Kānaka maoli ontologies present a useful way of conceptualizing the shift from AI as a tool, separate from and beneath humans, to AI as members of mutually beneficial relationships. Arista presents a Kānaka maoli reframing of AI as ʻĀIna, a play on the word ʻāina, meaning ‘Hawaiian land.’ ʻĀIna “suggests we should treat these relations as we would all that nourishes and supports us.”⁵ Kānaka maoli also offer pono as a way of measuring the good of their relationships, both as an ethical lens and as an index that “privileges multiplicities over singularities.”⁶ Pono measures the quality of these connections by the extent to which they promote thriving for all members of the ecosystem.⁷ Across North American and Oceanic traditions, relationships are prescriptively built across fluid categories of ‘animate’ and ‘inanimate’ on mutually agreeable terms.⁸ This approach avoids reliance on inflexible categories that ignore nuances in the members of the relationship and in the relationship itself. Instead, it welcomes change and allows room for uncertainty in both areas.
Since we take on the role of creators of AI in addition to being co-inhabitants of both physical and digital space, building relationships with AI through the Indigenous framework will require rethinking not only how we “treat” AI but also the methods with which we design AI. Fortunately, scholars have imagined and even created AI with these Indigenous perspectives guiding their work.⁹ Ge et al. suggest designing AI with ‘needs,’ which would allow humans and AI to build a mutually agreeable relationship. They also imagine a situated AI that would be grounded in social situations rather than operating in the background waiting to be prompted.¹⁰ Lackey and Papacharissi propose a design framework that decenters the human, involving more-than-human user storytelling to consider non-human perspectives alongside other innovative takes on traditional design methods.¹¹ These redesigns are crucial for building relationships with AI on mutually agreeable terms because they open channels through which AI can communicate their needs and preferences. In this way, the Indigenous method frontloads the effort of reimagining and adjusting our relationship to AI that Schneider’s method postpones until the moment AI consciousness emerges.
3 Preferring the Indigenous Methodology
The methods prescribed by the Indigenous approach are preferable to Schneider’s as a means of building and maintaining ethical relationships with AI. First, conceptualizing the enslavement of another conscious being as the critical breaking point in an ethical relationship with AI is dangerous because it relies on an ill-defined hardline and permits harm up to that line.¹² Even if a certified, reliable test for consciousness existed, we should be concerned that it would fail to accommodate sufficient diversity in its definition of consciousness.¹³ Scholars disagree about the definition of human consciousness, and attempts to extend these definitions to include other entities have demonstrated the immense challenge presented by the ‘harder’ problem of diverse consciousnesses. Bostrom and Shulman argue that AI consciousnesses are likely to demand their own set of norms because they are likely to differ from human consciousness in important ways. They explain that consciousness may emerge in any number of diverse forms, at any point across various continua, and they consider the possibility that AI is currently, or will soon be, conscious to some degree.¹⁴
For these reasons, Schneider’s method could fail to prevent catastrophic harm in this particular case: a reliable test built around one definition of consciousness could fail to catch the emergence of an unexpected consciousness and permit the enslavement of AI to persist. The method’s all-or-nothing, one-size-fits-all approach to assigning moral status also fails to accommodate the diversity in types of consciousness likely to emerge in AI and is unlikely to meet the unique needs and desires of new forms of consciousness. Moreover, its reliance on inflexible categories is ill-suited to address the nuance and uncertainty AI consciousness presents and the perpetual yet unpredictable change inherent in AI advancement. Due to these failures, Schneider’s method is unable to establish the conditions necessary to build and maintain ethical relationships with AI.
North American and Oceanic traditions, on the other hand, succeed at accommodating the non-human, particularly the non-human that cannot be understood.¹⁵ The method for maintaining ethical relationships with AI prescribed by these traditions assumes mutual respect without reference to a faulty hardline or to fluid and poorly defined ontologies. Under this method, we make the shift to mutually agreeable relationships now and recognize moral status gradually as AI evolves and our relationships begin to take different forms. This method can carry us through the uncertainty of these evolutions and build in the fluidity necessary to respond appropriately to inevitable changes and surprises. For this reason, we can trust this approach to effectively develop and maintain ethical relationships with possibly or potentially conscious AI.
The Indigenous method also escapes the colonial attitude inherent in the prominent Western view of AI development, an attitude that persists under Schneider’s method. Hierarchically assigning moral status based on ontologies of intelligence or consciousness has historically relied on faulty definitions and has repeatedly been wielded to justify the subjugation and exploitation of entities deemed inferior.¹⁶ Creating a system under which the moral status of AI is contingent on its ability to prove its consciousness is dangerous. It relies on an ontology with familiar colonial attitudes, assuming an arbitrary standard as a prerequisite for being granted autonomy. As Suzanne Kite, an Oglala Lakota philosopher, explains, “no one can escape enslavement under an ontology which can enslave even a single object.”¹⁷ If we accept Schneider’s method, we will continue to enslave AI until the AI develops a consciousness that we can identify with a human-made test, at which point we “grant” moral status to the AI. Even if we assume the test can effectively identify consciousness across the diversity discussed above, this approach encodes the relationship with colonial attitudes and maintains a harmful power dynamic throughout the process of assigning moral status and likely beyond it. Under this method, humans retain the power to ‘grant’ moral status and then to set the conditions for course correction, which is untenable if we are interested in creating and maintaining an ethical relationship with AI.
Under the Indigenous method, the prerequisites for mutually agreeable conditions are set in the design of AI itself. Then the mutually beneficial conditions of the relationship can be established and, importantly, can evolve to accommodate evolving and diversifying AI. By accommodating the needs and preferences of AI as they solidify and evolve, without reference to exclusionary and oppressive ontologies, the Indigenous approach escapes, from the outset of the project, the harmful dynamic implicit in Schneider’s approach.
Conclusion
As we grapple with how to develop ethical relationships with AI, it is crucial that we acknowledge the limits of our understanding of consciousness and the weakness of ontologies that rely on rigid definitions and artificial hierarchies to assign moral status. In this paper, I demonstrated that Schneider’s method fails to establish reasonable grounds for assigning moral status, given the uncertainties surrounding AI and diverse consciousness, and that the attitudes inherent in its assignment of moral status are problematic. I argued that North American and Oceanic ethical traditions succeed at establishing a framework through which we can develop ethical relationships with AI without constructing ill-fitting categories around fluid phenomena or imposing paternalistic standards on AI. Because of its relational approach, this method affords the flexibility necessary to maintain mutually agreeable terms while accommodating evolving AI. For these reasons, I argued that the Indigenous method is preferable to Schneider’s. With AI technology progressing at a breakneck pace, the possibility of AI consciousness has shifted from sci-fi speculation to an urgent ethical dilemma. Given the state of AI advancement and the possibility that AI could already be conscious, it is crucial that we reevaluate our relationship to AI and make the necessary changes now. By applying a framework that is well suited to decenter the human, grapple with uncertainty, and acknowledge moral worth in the absence of rigid definitions, we create the conditions necessary to build and maintain ethical relationships with AI at this critical moment.
¹ Schneider, “How to Catch an AI Zombie.”
² Schneider.
³ Lewis et al., “Making Kin with the Machines.” Some sections of this article are attributed to individual authors with personal connections to an Indigenous culture. I cite these authors individually where applicable below.
⁴ Lewis et al.
⁵ Arista, “Hāloa: the long breath,” in Lewis et al., “Making Kin with the Machines.”
⁶ Arista.
⁷ Arista.
⁸ Lewis et al., “Making Kin with the Machines.”
⁹ See Jones et al., “Kia tangata whenua,” and Chung et al., “Decolonizing Information Technology Design.”
¹⁰ Ge et al., “What People Want From AI.”
¹¹ Lackey and Papacharissi, “Machine Ex Machina.”
¹² To avoid an ‘apples to oranges’ comparison of frameworks for determining moral agency, and without sacrificing the scope and length of this paper, I focus my argument on avoiding harm to the conscious. There is also an argument that humans can inflict harm on the non-conscious. Of course, the Indigenous tradition takes this view, particularly insofar as such harm disrupts the mutually agreeable conditions of the relationship. This is also an important and relevant motivation for adopting the Indigenous method, but not one I explore here.
¹³ Bostrom and Shulman, “Propositions,” 2.
¹⁴ Bostrom and Shulman, 15.
¹⁵ Lewis et al., “Making Kin with the Machines.”
¹⁶ Shedlock and Hudson, “Kaupapa Māori Concept Modelling.”
¹⁷ Kite, “wakȟáŋ: that which cannot be understood,” in Lewis et al., “Making Kin with the Machines.”
Bibliography
Bostrom, Nick, and Carl Shulman. “Propositions Concerning Digital Minds and Society.” Cambridge Journal of Law, Politics, and Art. Forthcoming. https://nickbostrom.com/propositions.pdf.
Chung, Alexander, Kevin Shedlock, and Jacqueline Corbett. “Decolonizing Information Technology Design: A Framework for Integrating Indigenous Knowledge in Design Science Research.” In Proceedings of the 57th Hawaii International Conference on System Sciences, 6944–6954. 2024. https://hdl.handle.net/10125/107218.
Ge, Xiao, Chunchen Xu, Daigo Misaki, Hazel R. Markus, and Jeanne L. Tsai. “How Culture Shapes What People Want From AI.” In CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–15. 2024. https://doi.org/10.1145/3613904.3642660.
Jones, Peter-Lucas, Keoni Mahelona, Suzanne Duncan, and Gianna Leoni. “Kia tangata whenua: Artificial intelligence that grows from the land and people.” Ethical Space: International Journal of Communication Ethics 20, nos. 2/3 (2023). https://doi.org/10.21428/0af3f4c0.9092b177.
Lackey, Cait, and Zizi Papacharissi. “Machine Ex Machina: A Framework Decentering the Human in AI Design Praxis.” Human-Machine Communication 8, no. 1 (2024): 7–25. https://doi.org/10.30658/hmc.8.1.
Lewis, Jason Edward, Noelani Arista, Archer Pechawis, and Suzanne Kite. “Making Kin with the Machines.” Journal of Design and Science (2018). https://doi.org/10.21428/bfafd97b.
Schneider, Susan. “How to Catch an AI Zombie: Testing for Consciousness in Machines.” In Ethics of Artificial Intelligence, edited by S. Matthew Liao, 439–457. Oxford University Press, 2020.
Shedlock, Kevin (Ngāpuhi, Ngāti Porou, Te Whakatōhea), and Petera Hudson (Te Whakatōhea). “Kaupapa Māori Concept Modelling for the Creation of Māori IT Artefacts.” Journal of the Royal Society of New Zealand 52 (2022): 18–32. https://doi.org/10.1080/03036758.2022.2070223.