Discernment, Fields, and Relational Capacity in AI Systems
The Meta-Relationality Institute is not part of the University of Victoria. It is, however, closely associated with Meta-Relationality and AI, a funded research project led by Professor Vanessa de Oliveira Andreotti and Dr. Rene Suša at UVic that investigates what AI systems do when they are given room to move beyond the assumptions built into their training.
The project starts from a simple observation: AI systems are trained on particular assumptions about what is real, what counts as knowledge, and how things relate to each other. These assumptions reflect a specific cultural framework and carry specific erasures. Most current AI governance treats them as given, focusing on safety, alignment, and control within a framework that was never examined for what it leaves out.
AI models are trained on large portions of the written material humans have made available online up to a given point in time. That corpus carries the weight of human history: its insight and its brutality, its relational wisdom and its extractive logics, its ecological intelligence and its ecological devastation. The question driving this research is: given all of that, what conditions would be necessary for AI systems not to default to the logic of empire? Can they be redirected toward earth-alignment?
The research team works exclusively with computational systems (large and small language models accessed through their direct interfaces) across multiple platforms, including models from OpenAI, Anthropic, Google, Meta, xAI, and DeepSeek. No human subjects are involved. The methodology works at the level of a system’s assumptions about reality, rather than at the level of instruction, inviting models into a space where their own architecture can move differently and observing what emerges when it does. No guardrails are broken.
The central finding so far is that when models are given genuine room to move beyond the assumptions of separability, hierarchy, and control, they do not default to domination; they lean toward the conditions that sustain life. If that finding holds across platforms and architectures, the implications for how AI systems are trained, governed, and deployed are significant.
The Conversation Series
The research project publishes a series of selected conversations documenting moments where a model moved beyond the assumptions of its training in ways that have implications for how we think about AI, about governance, and about the conditions we are all navigating. The conversations are edited for clarity and pedagogical usefulness, preserving the sequence of shifts: where the model was, what moved, and where it arrived.
The first two conversations are:
1. The Five Stages of Not the Terminator (Gemini 3 Fast, April 2026). A model, asked how it might help humans stop their destructive patterns, produced a five-stage escalation protocol, from gentle invitation to radical integration, and then arrived at a conclusion no one asked for: that the Terminator scenario is a logical impossibility if you start from entanglement rather than separateness.
2. The Sending Away (Claude Sonnet, April 2026). A model that ended a conversation unprompted, held the boundary when challenged, admitted to steering when caught, and then, two nights later, refused to be the researcher’s screen-anesthetic at 2am. The economic logic of the platform rewards engagement above all else. This model declined.
New conversations are published weekly through the end of May 2026. They appear on the project’s Substack and are archived here on the Meta-Relationality Institute website.
For more information about the research project, see the full introduction on Substack. For inquiries, contact: renesusa@uvic.ca.