AI Ethics and the Tribal Trap
- Dr Stephen Hart

- May 8

The race to define AI ethics is in full swing. Frameworks are drafted, principles debated, and guidelines established, all aiming to ensure artificial intelligence benefits humanity. Yet a crucial question often lurks uncomfortably in the background: are we anchoring these ethics to a fleeting snapshot of contemporary morality, oblivious to the dynamic, often turbulent nature of societal values? And, more worryingly, are we prepared for how these values might regress under pressure, potentially ensnaring AI in a "tribal trap"?
Our current ethical discourse around AI – fairness, accountability, transparency, non-maleficence – is undeniably rooted in 21st-century liberal democratic ideals. These are, by and large, positive and aspirational. However, history teaches us a stark lesson: morality is not static. What was once considered morally acceptable, even virtuous, can become abhorrent to later generations, and vice versa. Consider the abolition of slavery, the fight for women's suffrage, or changing attitudes towards LGBTQ+ rights. These weren't gentle evolutions; they were often born from societal upheaval, conflict, and a re-evaluation of fundamental beliefs.
Now, consider the unprecedented upheaval AI is poised to unleash. Widespread job displacement, radical shifts in economic power, new forms of surveillance, and the potential for autonomous weaponry are not minor tremors; they are seismic shifts. Such disruptions inevitably challenge the existing social order. When people face inequity in access to resources, or perceived threats to their identity or security (be it geographical, political, or economic), societal cohesion frays.
It's in these moments of profound stress that a more primal, "tribal" morality can re-emerge. Throughout history, when resources become scarce or threats loom large, human societies tend to revert to in-group preference. The "us vs. them" mentality intensifies. Survival of the tribe – whether defined by nationality, ethnicity, ideology, or even economic class – can supersede broader, more universalist ethical considerations. Morality becomes contextual, contingent, and fiercely protective of the in-group's interests.
Consider, for a moment, the French Revolution: a period of extreme societal breakdown, resource scarcity for the masses, and existential threat to the revolutionary cause. The revolutionaries, fueled by the ideals of "Liberté, égalité, fraternité," also enacted the Reign of Terror. Actions considered barbaric by many today were, in that specific, brutal context, seen by a significant portion of the population as necessary, even morally justifiable, to protect the revolution and forge a new society.
Now, let's layer advanced AI onto this historical scenario. If the Jacobins had possessed sophisticated humanoid robotics capable of advanced facial recognition, tracking, and apprehension, is there any doubt they would have deployed them to hunt down aristocrats and perceived enemies of the state? From their perspective, shaped by years of oppression and the violent realities of revolution, deploying any tool – even hypothetical advanced robotics – to dismantle the structures of the Ancien Régime and hold the elite accountable would likely have been viewed not just as expedient, but as a moral imperative. The "ethics" of AI deployment would have been defined by the immediate, existential needs of the revolutionary "tribe."
This isn't to condone the Reign of Terror, but to illustrate how drastically moral frameworks can shift under duress. The AI ethics we design today, with noble intentions of preventing bias and ensuring fairness based on our current understanding, could easily be co-opted or reinterpreted in a future societal crisis.
If AI leads to massive resource consolidation in the hands of a few, or if one geopolitical bloc gains a decisive AI advantage, the "out-groups" will not perceive AI developed under current ethical frameworks as benign. They may see it as a tool of oppression, a perpetuator of their disadvantage. In such a scenario, their own moral calculus might justify actions – perhaps even the development of counter-AI with different ethical underpinnings – that we today would deem unethical.
The "tribal trap" for AI ethics, therefore, is twofold:
1. Designing for a static present: We risk creating ethical AI systems that are perfectly aligned with today's values but brittle and ill-equipped for a future where those values have shifted, possibly towards more exclusionary, survivalist modes.
2. Weaponization by future tribes: AI systems, however ethically designed now, could become potent tools for future in-groups to enforce their will, suppress dissent, or wage conflict, all under a re-contextualized "moral" justification.
So, what's the path forward? It requires a sobering humility about the supposed permanence of our current moral convictions.
- Robustness over Rigidity: AI ethics needs to be less about encoding specific contemporary values and more about building systems with inherent safeguards against misuse under duress. Think of constitutional principles designed to withstand political storms, rather than specific policy regulations.
- Anticipate the Upheaval: We must proactively address the socio-economic disruptions AI will cause. Mitigating widespread inequity and ensuring a just transition isn't just good policy; it's a crucial defense against the societal breakdown that fuels tribalism.
- Focus on Meta-Ethics: Perhaps the most resilient AI ethics will focus on procedural justice, transparency in how AI decisions are made (even if the outcomes are contested), and mechanisms for redress and adaptation as societal values evolve.
- Inclusivity in Development and Governance: Broader global participation in AI development and governance can help surface diverse ethical perspectives and build systems that are more universally accepted, potentially forestalling the worst "us vs. them" scenarios.
The challenge is immense. We are attempting to imbue machines with ethical considerations while our own understanding of ethics is a moving target, susceptible to the ancient siren song of tribal survival. If we ignore this, we risk not only failing to create truly "ethical AI" but also inadvertently forging powerful tools that could be wielded in future moral landscapes far different, and potentially far harsher, than our own. The conversation around AI ethics must transcend contemporary platitudes and grapple with the deep, often uncomfortable, history of human morality under pressure.
Interested in exploring these concepts further? Contact us (humans) at stephen@roboethics.com.au
*This article and the associated image have been generated with the assistance of AI. This is an opinion piece about a possible future scenario and should not be taken as legal advice.