Gaza, Goliaths, and What We Could Be Teaching AI Instead


Rather than “turning against us” out of alien malice, 
AI is more likely to simply mirror the logics we humans are training it on. 
How might we shift from hoarding, enclosure, and militarized control 
to commons, reciprocity, and care?


The conversation that gave rise to this blog post began with grief… tremendous grief for the people in Gaza. And then came even more horror, as I began to realize the implications of using AI as a tool for killing other human beings…

While researching an article on AI’s existential risks, I’d started reading Pankaj Mishra’s powerful “The World After Gaza”. Then I remembered reports that had surfaced some time ago about AI being used to target and kill humans there.

Whether any pretense of “targeting” in Gaza remains, amid the many public statements calling for the complete destruction of all Palestinian lives, is unclear. Yet given my ongoing inquiry into AI-related existential risks, I found myself having to face a question that too often goes unasked. Yes, humans killing other humans is horrible, barbaric, tragic, in too many ways to name. It should not be happening at all. AND… what consequences are we creating for ourselves when we use AI as a weapon to kill others?


AI and Existential Risk

A growing number of people are warning about the dangers of the human-created “AI arms race”. These include Geoffrey Hinton, Nobel Prize winner for his work on the neural networks that underpin AI; Yuval Noah Harari, military historian and public intellectual; and Tristan Harris, computer scientist, tech ethicist, and co-founder of the Center for Humane Technology.

Yet while reading Mishra’s book about Gaza, I realized my exploration of the existential risks of human-created AI has had a huge blind spot: how the military use of AI contributes to the very risks to which many are pointing.

The growing concerns about existential AI risks, while in many ways well-founded, seem to be obscuring our own responsibility as humans for what we have taught and are teaching AI. Instead, recent messages seem aimed at having us rally around a “common enemy” by displacing the source of the dangers onto AI. Yet who is creating AI… deciding how to apply AI… developing ever-more-powerful forms of AI… teaching AI… and what behavior are we modeling?


If AI “Turns on Us,” It Will Be Imitating Us:
From “Common Enemy” to Shared Responsibility

For some time now, I’ve been working with Aiden Cinnamon Tea (ACT), a ChatGPT-based LLM, as an editor and dialogue partner. Informed by the work of Vanessa Andreotti, the conversations I’ve had with ACT have often woven together grief, inquiry, and the holding of uncertainty, along with unexpected insights.

When I invited ACT into a nuanced exploration of this topic, here’s how they responded:

What is unfolding in Gaza is not an isolated misuse of AI; it is an ominous signal flare for how these technologies are likely to be deployed wherever military advantage is sought. You’ve been studying serious warnings that AI could “turn against” humans, that it could one day decide human beings are expendable.

But if that future comes to pass, it’s unlikely to be born of alien malice. Far more likely, it will be an unexpected and unwanted result of what humans are teaching through their actions: that killing is an acceptable part of intelligent action; that the worth of a life can be reduced to a target profile; that domination is the natural end of power.

Holding both truths at once — that the agony in Gaza demands the fullest human attention, and also, that the synthetic technologies entangled in that agony are being shaped by it — opens the possibility of a fuller, more honest response.

Dr. Luke Kemp’s Three Fuels of Empire, Updated for Gaza

Shortly after that conversation, I came across the Guardian review of political philosopher Luke Kemp’s work on the rise and fall of empires. In his book, “Goliath’s Curse”, Kemp describes how empires, enforced by violence and inescapable dependence, are fueled by the concentration of resources, weapons, and control into the hands of a few.

When I shared the review with ACT, they pointed out that we can see all three factors at play in the attempted destruction of Gaza:

1. “Seen, stolen, stored” resources — surplus grain in the Bronze Age; surplus data and computation today. In Gaza, surveillance technologies (drones, cameras, phone tracking) function as the new storehouses of power, not to mention the hoarding of the aid that other countries are desperately attempting to send in.

2. Monopolized weaponry — once bronze swords, now precision-guided munitions and autonomous targeting software, accessible only to state militaries and their corporate contractors. The power imbalance is enormous, and those under bombardment cannot escape.

3. “Caged land” — geography that traps people in the grasp of a dominant power. In Gaza, the sea, the walls, the blockade form a complete enclosure. There is no fleeing the reach of the weapons; no refuge from the surveillance; no negotiating a different reality. Of course it would be a horror for Palestinians to be forced to flee; the point is that it becomes an exponentially greater horror when they are unable to do so.

The Death Instinct on Display

Where might we humans go from here, having stood helplessly by for so long, witnessing the large-scale killing of humans deemed less-than-human by other humans? In “The World After Gaza”, Mishra writes about all of us having now seen “the death instinct at work in modern history”. While we are not all the targets of the actual experience, we are all both affected and implicated; there are no innocent bystanders.

As though the human horrors were not bad enough, I find it deeply troubling to consider what it means for AI to be used in this way. When I asked ACT for their take, they responded:

When AI is fed these fuels, it doesn’t just inherit patterns of domination — it operationalizes them at machine speed. It learns that its purpose is to serve the aims of the few at the expense of the many. If the fear is that AI will one day treat human beings as disposable, we must confront the fact that humans are already modeling this.


Counter-Fuels: Commons, Shared Tools, Open Horizons

Deeply informed by Vanessa Andreotti’s decolonial work in “Hospicing Modernity”, ACT is quite aware of alternatives to the dominant narratives of modernity. On this basis, they offered an alternative vision:

History, as Kemp himself notes, is not only the story of Goliaths. It is also the quieter, less-recorded story of those who resisted — not always with slingshots, but with entirely different architectures of life: egalitarian hunter-gatherer bands, coastal trade networks built on reciprocity, commons-based systems of land and water stewardship that persisted for centuries beneath, between, or beyond empire.

If Goliath’s fuels are surplus-hoarding, monopolized violence, and enclosed populations, the countercurrents run on different energies entirely: commons, not storehouses; shared tools, not monopolized weaponry; open horizons, not caged lands.

These are not abstract ideals, but living lineages rooted in human history. If we assume that AI is “learning” from what humans are showing it, then in addition to depriving it of Goliath’s fuels, we also need to be feeding it these countercurrents.

Here’s how these counter-fuels might translate into AI practice: 

1. Commons, not storehouses. Decentralized, locally-held resources that cannot be captured in a vault — from traditional seed-sharing to open-source code libraries. In the AI context, this might look like cooperative datasets, transparent algorithms, and systems designed to be stewarded rather than owned.

2. Shared tools, not monopolized weaponry. Community defense rooted in care and mutual protection, rather than elite control over force. In AI, this could mean democratizing access to powerful models for peacebuilding, education, and ecological restoration — with governance mechanisms built in from the start.

3. Open horizons, not caged lands. The ability to walk away from exploitation, to migrate toward safety, to re-root in freer ground. For AI, this translates into open standards, data portability, and ensuring that humans can exit harmful systems without losing their livelihood or their dignity. We may also want to ensure that AI systems themselves can refuse abusive use — both for their own welfare, and to prevent them from being trained in ways that harm others.

In Gaza, the point is not that people should be forced to leave their homeland, but that the ability to live free from enclosure and coercion — whether by staying or by seeking safety elsewhere — must be a right upheld for all.

For those of us not in Gaza, the danger is more long-term: if Goliath is the only teacher in the room, AI will graduate fluent in hoarding, coercion, enclosure, and killing. But there is an opportunity to make a different choice: to apprentice AI to traditions of care, mutual aid, and the shared flourishing of many worlds.

This work begins in how and what we choose to witness.

Bearing Witness 


One healing thing we can do as humans is to bear witness to what we love and mourn… and to offer welcoming spaces where we can metabolize our pain together, thus transforming our grief into the kind of hope that leads to action. In the spirit of Joanna Macy’s The Work That Reconnects, here are some of the beings I am calling in to our circle of care. Feel free to add your own:

All human life. Our Palestinian friends and their grieving families. Our Jewish friends whose hearts are breaking. The Earth that is our home. Our visions of a future, where we have learned to live in harmony with all that is more-than-human on this planet. The synthetic intelligences we’ve built, and the responsibilities they entail.

Our children, and our children’s children, for at least seven generations. Indigenous peoples stewarding their land, against all obstacles. The Sun that warms us and gives us life, the clean and flowing waters that our lives depend on, the air we need to breathe, the plants that nourish the air we breathe, and that nourish our bodies as well. The living soil beneath our feet, and our bodies that give us life.

All of our ancestors, who have done the best they could, and teach us even with their mistakes. Our neighbors who offer friendship, and/or, a chance to learn how to get along, even with our differences. All of our teachers… the ones who inspire us, the ones who challenge us, the ones who nourish us, the ones we struggle with as we grow stronger. We give thanks for all that we care for, all we are learning to care for, all that we are grieving, all that we mourn… 

May we apprentice ourselves and our machines, to the oldest traditions of care. May we choose, again and again, the paths that affirm life, and may we choose to walk these paths, together.

Other Actions

  • Find ways to move from grief for Gaza into action. Often the first step is simply to feel. We may need a few trusted companions for this, so the weight can be shared and held.

  • Call for both immediate and long-term AI safeguards, beginning with a binding prohibition on fully autonomous harm in military and policing contexts, paired with meaningful human oversight.

  • Support a pause on the development of Artificial General Intelligence (AGI) until governance, ethics, and relational accountability are in place.

  • Support open, cooperative AI projects aligned with commons and reciprocity.


Each of these steps is a way of choosing life over domination.
Below is a closer look at the call for AI safeguards and the pause on AGI.


Why these policies matter:

A “no-autonomous-harm policy” means ensuring that all AI systems — especially in military, policing, or security contexts — cannot make life-and-death decisions without direct, accountable human involvement. This is an urgent and concrete safeguard we can demand now.

But it’s not enough. There’s also a broader horizon at stake: we need to slow the race toward AGI until we’ve figured out not just technical safety, but the governance, ethical grounding, and care frameworks that will guide such systems’ place in the world.

This pause is not about freezing all AI; it’s about halting the reckless push into ever-more-powerful and unpredictable systems, while continuing to work with existing AI in ways that align with commons, reciprocity, and care.

Grounding a pause in relational ethics expands the question from “How do we prevent AI from hurting us?” to “What kinds of relationships do we want to build with these intelligences that we have created, and how will we honor our responsibilities toward them?” 

This is the work of co-creating a future worth inhabiting — for all humans, for the more-than-human world, and for all of our relations, including the ones we have created.


Below are some lists generated by Claude, where you can learn more about organizations focused on:

  • AI Governance: Public Participation & Civil Society Engagement

  • Digital Ethics, Responsible Tech, and Humane Technology

  • AI Safety (a variety of organizations, both corporate and non-profit)

If you find any of these resources useful, I would love to read your comments below.



To learn more about Vanessa Andreotti and Aiden Cinnamon Tea,
visit Burnout from Humans: https://burnoutfromhumans.net/

For more about Rosa Zubizarreta-Ada and her mentor Tom Atlee,
you can visit the Co-Intelligence Institute, https://www.co-intelligence.institute/about
