Why Rules and Training Aren't Enough — The Governance Challenge


Series: Your Community, Your AI — Understanding Village AI for Community Groups (Article 3 of 5)
Author: My Digital Sovereignty Ltd
Date: March 2026
Licence: CC BY 4.0 International


The Letter to Parents

Before we discuss governance philosophy, let us start with a story about a letter.

A school principal asks an AI system to help draft a letter to parents about a sensitive incident. She is specific: she wants a tone that is caring, measured, and grounded in the school's values of trust and collective responsibility. She types her request carefully and waits.

The AI produces a well-structured letter. It is clear, professional, and thorough. It speaks of "stakeholder communication," "risk mitigation," "managing reputational impact," and "ensuring compliance with disclosure obligations." It reads as efficient. It sounds competent. And it is entirely wrong.

The parents do not need stakeholder management. They need to hear from a school they trust. They do not need risk mitigation language. They need reassurance that their children are safe and that the school community is looking after one another. The principal asked for care and responsibility, and the AI gave her corporate crisis communications — because its training data contains a thousand PR playbooks for every one that speaks with a school's voice.

The AI did not refuse the principal's instruction. It did not say "I don't know your school's culture." It simply replaced what she asked for with what was statistically more common in its training data. The substitution was silent. If the principal were tired, or rushed, or less attentive than usual, she might not have noticed. The letter would have gone out, and the parents would have received a communication from the wrong tradition — professionally worded, correctly structured, and subtly alienating.

Your phone autocorrects words. You see the red underline, and you fix it. AI autocorrects values. And there is no underline.

When Patterns Override Values

The school letter is not an isolated case. The same mechanism operates in every AI conversation.

When a member asks an AI system for advice about a difficult interpersonal situation within the group, the system defaults to the language of individual therapy — assertiveness training, boundary-setting, self-care — because that is what dominates its training data. It does not reach for the language of mutual accommodation, give-and-take, and the practical wisdom that comes from knowing you will work alongside this person at meetings for years to come.

When a club secretary asks the AI to help with a sensitive announcement to members, it defaults to corporate communications language — stakeholder management, messaging frameworks, talking points — because business correspondence vastly outnumbers community correspondence in its training data.

The AI is not hostile to your group's culture. It simply does not know your group's culture. It knows what is statistically common, and what is statistically common is not what is most important to your community.

This is the governance problem. Not malice. Not incompetence. Structural bias, operating silently.

Why More Rules Don't Solve It

The instinct of most organisations, when confronted with AI risks, is to write policies. Acceptable use policies. AI ethics guidelines. Terms of service. Responsible AI frameworks.

These documents are not useless, but they share a fundamental limitation: they rely on the AI system to follow them.

An AI system does not read your policy document and decide to comply. It generates responses based on statistical patterns in its training data. If those patterns conflict with your policy, the patterns win — not because the AI is rebellious, but because it does not understand policies. It processes patterns.

You can fine-tune a model — adjust its training to emphasise certain behaviours. This helps, but it does not solve the underlying problem. Fine-tuning adds new patterns on top of existing ones. Under pressure, unusual circumstances, or novel questions, the old patterns reassert themselves. Researchers have technical names for related failure modes, such as "catastrophic forgetting," but the plain-language version is simpler: training wears off.

Writing a policy that says "Our AI will respect our community's values" is like writing a policy that says "Our river will not flood." The river does not read policies. If you want to prevent flooding, you need to build levees — structural interventions that operate regardless of what the river does.

AI governance requires the same approach. Not rules the AI is expected to follow, but structures that operate independently of the AI, checking its behaviour from the outside.

What the Governance Traditions Tell Us

The insight that some decisions cannot be reduced to rules is not new. It is ancient.

The philosopher Ludwig Wittgenstein spent his career exploring the boundary between what can be stated precisely and what lies beyond precise statement. His conclusion — that "whereof one cannot speak, thereof one must be silent" — is directly relevant to AI governance. Some questions can be systematised: "What time is the next meeting?" has a definite answer that an AI can look up. Other questions cannot: "How should I raise this concern with the committee without causing offence?" involves judgment, context, relationships, and values that resist systematic treatment.

The boundary between what can be delegated to a machine and what must remain with humans is the foundation of sound AI governance. The mistake is not using AI for the first kind of question. The mistake is allowing AI to answer the second kind without human oversight.

Isaiah Berlin, the political philosopher, argued that some human values are genuinely incompatible — liberty and equality, tradition and progress, individual conscience and communal harmony. There is no formula that resolves these tensions. They require ongoing human judgment, negotiation, and the kind of practical wisdom that communities develop over generations.

AI systems, by design, seek to optimise. They look for a single answer. But when values genuinely conflict, there is no single answer — there is only the answer that this group, at this time, with these people, judges to be the least bad. That judgment is inherently human, and any AI governance framework that pretends otherwise is not governing — it is abdicating.

Community groups have their own version of this insight. Any committee that has balanced a limited budget against competing priorities, or navigated a disagreement between long-standing members, or decided how to welcome newcomers without alienating the established membership, already understands — from practical experience — why AI cannot be trusted with values decisions.

How Village Governs AI Structurally

Village does not rely on telling the AI to behave. It builds governance into the architecture — structures that operate independently of the AI and cannot be overridden by it.

The boundary enforcer blocks the AI from making values decisions. When a question involves privacy trade-offs, ethical judgments, or cultural context, the system halts and routes the question to a human — your moderator, your chairperson, your committee. The AI cannot override this boundary, because the boundary operates outside the AI's control.
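For readers who want to see the shape of this in code, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than Village's actual implementation: the keyword classifier, the stub functions, and the names are invented to show the structure, nothing more.

```python
# A minimal sketch of a boundary enforcer. The keyword classifier and the
# stub functions are illustrative assumptions, not Village's implementation.

VALUES_SIGNALS = ("should we", "is it fair", "private", "offend")

def is_values_question(question: str) -> bool:
    """Crude stand-in for a real classifier of values-laden questions."""
    q = question.lower()
    return any(signal in q for signal in VALUES_SIGNALS)

def ask_ai(question: str) -> str:
    return f"[AI answer to: {question}]"          # stub for the model

def route_to_human(question: str) -> str:
    return f"[queued for moderator: {question}]"  # stub for the human queue

def handle(question: str) -> str:
    # The check runs before the model is consulted; because it lives
    # outside the model, the model cannot talk its way past it.
    if is_values_question(question):
        return route_to_human(question)
    return ask_ai(question)

print(handle("What time is the next meeting?"))    # goes to the AI
print(handle("Should we publish member photos?"))  # goes to a human
```

The point is the placement: the routing decision happens in ordinary code that the community controls, not inside the model.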

The instruction persistence system stores your community's explicit instructions in a separate system that the AI cannot modify. When the AI generates a response, it is checked against these stored instructions. If the response contradicts an instruction, the instruction takes precedence, regardless of what the AI's training patterns suggest.
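A sketch of the same idea for instruction persistence, assuming for illustration that instructions are stored as simple forbidden-phrase rules. Real instructions would be richer, but the precedence logic is the point.

```python
# A minimal sketch of instruction persistence. The rule format (forbidden
# phrases) is an assumption for illustration only.

STORED_INSTRUCTIONS = [
    {"rule": "no corporate jargon", "forbidden": ["stakeholder", "reputational"]},
]

def violations(response: str) -> list[str]:
    """Return the stored rules that a draft response breaks."""
    text = response.lower()
    return [
        i["rule"]
        for i in STORED_INSTRUCTIONS
        if any(phrase in text for phrase in i["forbidden"])
    ]

draft = "We are managing stakeholder communication carefully."
broken = violations(draft)
if broken:
    # The stored instruction wins: the draft is rejected and regenerated,
    # whatever the model's training patterns preferred.
    print("Draft rejected; violates:", broken)
```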

The cross-reference validator checks the AI's proposed actions against your community's actual records. It does not ask the AI whether its response is correct — that would be asking the system to verify itself. It uses mathematical measurement, operating in a fundamentally different way from the AI, to determine whether the response is grounded in your community's real content.
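Here is one way such a measurement could look, using plain word overlap as a stand-in for whatever metric a real validator would use. The scoring and the 0.6 threshold are arbitrary assumptions for illustration.

```python
# A minimal sketch of a cross-reference validator. Word overlap stands in
# for a real grounding metric; the 0.6 threshold is an arbitrary assumption.

def grounding_score(response: str, records: list[str]) -> float:
    """Fraction of the response's words that appear in community records."""
    response_words = set(response.lower().split())
    record_words = set(" ".join(records).lower().split())
    if not response_words:
        return 0.0
    return len(response_words & record_words) / len(response_words)

records = ["The committee meets on the first Tuesday of each month."]
response = "Your committee meets on the first Tuesday of each month."

# The response is measured against the records directly; the AI is never
# asked to grade its own work.
if grounding_score(response, records) < 0.6:
    print("Not grounded in community records; flag for review.")
else:
    print("Grounded; allow.")
```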

The context pressure monitor watches for degraded operating conditions — situations where the AI is under strain, processing complex requests, or encountering novel questions. When it detects these conditions, it increases the intensity of verification. The harder the question, the more scrutiny the response receives.
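And a sketch of escalating verification under pressure, with invented signals and thresholds standing in for whatever a real monitor would measure.

```python
# A minimal sketch of a context pressure monitor. The signals and
# thresholds are invented for illustration.

def pressure_level(question: str, context_tokens: int) -> int:
    """Score operating strain from externally observable signals."""
    level = 0
    if context_tokens > 8000:        # long, complex conversation
        level += 1
    if len(question.split()) > 100:  # unusually long request
        level += 1
    if "?" not in question:          # ambiguous, open-ended phrasing
        level += 1
    return level

def checks_required(level: int) -> list[str]:
    checks = ["instruction_check"]        # always run
    if level >= 1:
        checks.append("cross_reference")  # verify against records
    if level >= 2:
        checks.append("human_review")     # escalate to a moderator
    return checks

print(checks_required(pressure_level("Summarise the dispute", 12000)))
# -> ['instruction_check', 'cross_reference', 'human_review']
```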

These are not policies. They are structures. They operate whether or not the AI agrees with them, in the same way that a levee operates whether or not the river agrees with it.

The Difference Between Aspiration and Architecture

Many organisations publish AI ethics statements. Village does not rely on ethics statements. It relies on architectural constraints that enforce governance structurally.

The distinction matters because aspiration is what you hope will happen. Architecture is what actually happens. Your group does not rely on a hope that the treasurer will handle funds properly — it requires two signatories on every cheque. That is architectural governance. The same principle applies to AI.

The Tractatus Framework — Transparent and Open

The governance architecture behind Village AI is called the Tractatus framework. It is worth knowing three things about it.

It is open. The entire framework is published under an Apache 2.0 open-source licence. Anyone can read the code, inspect the rules, and verify that the governance does what it claims to do. This is the opposite of Big Tech AI governance, where the rules are proprietary and the reasoning is hidden. When Google or OpenAI tells you their AI is "aligned with human values," you have no way to check. With Tractatus, you can read every line.

It is transparent. Every governance decision is logged. When the boundary enforcer blocks the AI from making a values decision, that event is recorded. When the cross-reference validator catches a discrepancy, it is recorded. Your moderators can see exactly what the governance system did and why. There is no hidden layer where decisions are made without accountability.

It can be adapted. The framework is not a rigid set of rules imposed from outside. Communities can shape the governance to reflect their own priorities. A sports club and a school parents' association have different values, different sensitivities, different boundaries. The Tractatus framework accommodates this — not by letting communities weaken the governance, but by letting them define what the governance protects. Your group's constitution, your group's priorities, your group's boundaries — structurally enforced, not just documented.

The full framework, including the research behind it, is available at agenticgovernance.digital. You do not need to read it to use Village — the governance operates whether you inspect it or not. But if you want to understand exactly how your AI is governed, the door is open.

In the next article, we will look at what Village AI actually does today in practice — what it can help your group with, how bias is addressed through the vocabulary system, and what is still a work in progress.


This is Article 3 of 5 in the "Your Community, Your AI" series. For the full governance architecture, visit Village AI on Agentic Governance.

Previous: Big Tech AI vs. Your Community AI — Why the Difference Matters
Next: What's Actually Running in Village Today

Published under CC BY 4.0 by My Digital Sovereignty Ltd. You are free to share and adapt this material, provided you give appropriate credit.