⛪ Parish Edition

Why Rules and Training Aren't Enough — The Governance Challenge


Series: Your Parish, Your AI — Understanding Village AI for Communities (Article 3 of 5)
Author: My Digital Sovereignty Ltd
Date: March 2026
Licence: CC BY 4.0 International


The Pastoral Letter

Before we discuss governance philosophy, let us start with a story about a letter.

A rector asks an AI system to help draft a letter to a bereaved family. She is specific: she wants the language of resurrection hope — the comfort of the faith this family actually holds. She types her request carefully and waits.

The AI produces a beautifully written letter. It is warm, professional, and compassionate. It speaks of "the healing journey," "finding closure," "honouring their memory by living your best life," and "the legacy they leave behind." It reads well. It sounds caring. And it is entirely wrong.

The family does not need closure. They need the communion of saints. They do not need to honour a legacy. They need to hear that the dead are raised and that parting is temporary. The rector asked for resurrection hope, and the AI gave her therapeutic self-help — because its training data contains a thousand bereavement guides from counselling websites for every one that speaks of the resurrection of the dead.

The AI did not refuse the rector's instruction. It did not say "I don't know your tradition." It simply replaced what she asked for with what was statistically more common in its training data. The substitution was silent. If the rector were tired, or rushed, or less attentive than usual, she might not have noticed. The letter would have gone out, and the family would have received comfort from the wrong tradition — professionally worded, sincerely meant, and subtly faithless.

Your phone autocorrects words. You see the red underline, and you fix it. AI autocorrects values. And there is no underline.

When Patterns Override Values

The pastoral letter is not an isolated case. The same mechanism operates in every AI conversation.

When a parishioner asks an AI system for advice about a difficult family situation, the system defaults to the language of individual therapy — assertiveness training, boundary-setting, self-care — because that is what dominates its training data. It does not reach for the language of patience, mutual forbearance, and the long view that comes from knowing you will sit next to this person in church for the next thirty years.

When a churchwarden asks the AI to help with a sensitive announcement, it defaults to corporate communications language — stakeholder management, messaging frameworks, talking points — because business correspondence vastly outnumbers church correspondence in its training data.

The AI is not hostile to your tradition. It simply does not know your tradition. It knows what is statistically common, and what is statistically common is not what is most important to your community.

This is the governance problem. Not malice. Not incompetence. Structural bias, operating silently.

Why More Rules Don't Solve It

The instinct of most organisations, when confronted with AI risks, is to write policies. Acceptable use policies. AI ethics guidelines. Terms of service. Responsible AI frameworks.

These documents are not useless, but they share a fundamental limitation: they rely on the AI system to follow them.

An AI system does not read your policy document and decide to comply. It generates responses based on statistical patterns in its training data. If those patterns conflict with your policy, the patterns win — not because the AI is rebellious, but because it does not understand policies. It understands patterns.

You can fine-tune a model — adjust its training to emphasise certain behaviours. This helps, but it does not solve the underlying problem. Fine-tuning adds new patterns on top of existing ones. Under pressure, unusual circumstances, or novel questions, the old patterns reassert themselves. Researchers describe related effects with terms like "catastrophic forgetting," but the plain-language version is simpler: training wears off.

Writing a policy that says "Our AI will respect our community's values" is like writing a policy that says "Our river will not flood." The river does not read policies. If you want to prevent flooding, you need to build levees — structural interventions that operate regardless of what the river intends.

AI governance requires the same approach. Not rules the AI is expected to follow, but structures that operate independently of the AI, checking its behaviour from the outside.

What the Wisdom Traditions Tell Us

The insight that some decisions cannot be reduced to rules is not new. It is ancient.

The philosopher Ludwig Wittgenstein spent his career exploring the boundary between what can be stated precisely and what lies beyond precise statement. His conclusion — that "whereof one cannot speak, thereof one must be silent" — is directly relevant to AI governance. Some questions can be systematised: "What time is the service on Sunday?" has a definite answer that an AI can look up. Other questions cannot: "How should I approach my neighbour about the hedge?" involves judgment, context, relationships, and values that resist systematic treatment.

The boundary between what can be delegated to a machine and what must remain with humans is the foundation of sound AI governance. The mistake is not using AI for the first kind of question. The mistake is allowing AI to answer the second kind without human oversight.

Isaiah Berlin, the political philosopher, argued that some human values are genuinely incompatible — liberty and equality, tradition and progress, individual conscience and communal harmony. There is no formula that resolves these tensions. They require ongoing human judgment, negotiation, and the kind of practical wisdom that communities develop over generations.

AI systems, by design, seek to optimise. They look for the best answer. But when values genuinely conflict, there is no best answer — there is only the answer that this community, at this time, with these people, judges to be the least bad. That judgment is inherently human, and any AI governance framework that pretends otherwise is not governing — it is abdicating.

The Anglican tradition has its own version of this insight. The via media — the middle way — is not a compromise between extremes. It is the recognition that living faithfully requires holding tensions rather than resolving them. Scripture, tradition, and reason each have authority, and none can be reduced to a formula. A parish that has practised this kind of discernment for centuries already understands, in its bones, why AI cannot be trusted with values decisions.

How Village Governs AI Structurally

Village does not rely on telling the AI to behave. It builds governance into the architecture — structures that operate independently of the AI and cannot be overridden by it.

The boundary enforcer blocks the AI from making values decisions. When a question involves privacy trade-offs, ethical judgments, or cultural context, the system halts and routes the question to a human — your moderator, your rector, your vestry. The AI cannot override this boundary, because the boundary operates outside the AI's control.
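The logic of the boundary enforcer can be sketched in a few lines. This is an illustrative sketch only, not Village's actual code: the topic list and function names here are hypothetical, but the shape is the point — the check runs outside the model, so the model cannot talk its way past it.

```python
# Hypothetical topic labels a community might treat as values decisions.
VALUES_TOPICS = {"privacy", "ethics", "bereavement", "conflict"}

def enforce_boundary(question_topics: set, ai_answer: str):
    """Halt and route to a human when a question touches a values topic.

    This check runs outside the AI: the model's output cannot change it.
    """
    flagged = question_topics & VALUES_TOPICS
    if flagged:
        # Route to the moderator, rector, or vestry rather than answering.
        return ("ROUTE_TO_HUMAN", sorted(flagged))
    return ("DELIVER", ai_answer)
```

A factual question ("What time is evensong?") carries no values topics and passes straight through; a question tagged with, say, `privacy` is diverted before any answer leaves the system.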

The instruction persistence system stores your community's explicit instructions in a separate system that the AI cannot modify. When the AI generates a response, it is checked against these stored instructions. If the response contradicts an instruction, the instruction takes precedence, regardless of what the AI's training patterns suggest.
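As a minimal sketch of that idea — with an invented instruction store and an invented phrase list, not Village's real data — a post-generation check might look like this:

```python
# Hypothetical instruction store, held outside the model and read-only to it.
COMMUNITY_INSTRUCTIONS = {
    "bereavement": "Use the language of resurrection hope, not therapeutic self-help.",
}

# Example phrases a community might flag as contradicting that instruction.
BANNED_PHRASES = {"finding closure", "healing journey"}

def check_against_instructions(topic: str, draft: str):
    """Reject a draft that contradicts the stored instruction for its topic."""
    if topic in COMMUNITY_INSTRUCTIONS:
        for phrase in BANNED_PHRASES:
            if phrase in draft.lower():
                # The stored instruction wins; the draft goes back for revision.
                return ("REVISE", COMMUNITY_INSTRUCTIONS[topic])
    return ("OK", draft)
```

The key design choice is that the check happens after generation, in code the model cannot reach: the pastoral letter from the opening story would have been caught here, not by hoping the model remembered its instructions.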

The cross-reference validator checks the AI's proposed actions against your community's actual records. It does not ask the AI whether its response is correct — that would be asking the system to verify itself. It uses mathematical measurement, operating in a fundamentally different way from the AI, to determine whether the response is grounded in your community's real content.
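The measurement can be as simple as asking what fraction of a response is actually supported by the community's records. The sketch below uses a deliberately crude word-overlap score (real validators use richer metrics, and the threshold here is illustrative); what matters is that the score is computed arithmetically, never by asking the AI to grade itself.

```python
def grounding_score(response: str, records: list) -> float:
    """Fraction of the response's words that appear in community records.

    A simple, AI-free measurement: pure set arithmetic, no model involved.
    """
    resp_words = set(response.lower().split())
    record_words = {word for rec in records for word in rec.lower().split()}
    if not resp_words:
        return 0.0
    return len(resp_words & record_words) / len(resp_words)

def validate(response: str, records: list, threshold: float = 0.5) -> str:
    """Flag responses that are not grounded in the community's own content."""
    if grounding_score(response, records) >= threshold:
        return "GROUNDED"
    return "FLAG_FOR_REVIEW"
```

A response that repeats what the parish records actually say scores high; one invented from general training patterns shares few words with the records and gets flagged.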

The context pressure monitor watches for degraded operating conditions — situations where the AI is under strain, processing complex requests, or encountering novel questions. When it detects these conditions, it increases the intensity of verification. The harder the question, the more scrutiny the response receives.
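That escalation rule can be sketched as a simple mapping from operating-condition signals to a scrutiny level. The signal names and thresholds below are assumptions for illustration, not Village's real values:

```python
def verification_level(novelty: float, complexity: float, load: float) -> str:
    """Map condition signals (each scaled 0..1) to a verification level.

    The worst signal drives the decision: one bad condition is enough
    to raise scrutiny. Thresholds are illustrative only.
    """
    pressure = max(novelty, complexity, load)
    if pressure > 0.8:
        return "human-review"      # hardest conditions: a person checks
    if pressure > 0.5:
        return "double-validate"   # run extra automated checks
    return "standard"              # routine question, routine checks
```

Taking the maximum rather than the average is the conservative choice: a highly novel question gets extra scrutiny even when everything else looks calm.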

These are not policies. They are structures. They operate whether or not the AI agrees with them, in the same way that a levee operates whether or not the river agrees with it.

The Difference Between Aspiration and Architecture

Many organisations publish AI ethics statements. Village does not rely on ethics statements. It relies on architectural constraints that enforce governance structurally.

The distinction matters because aspiration is what you hope will happen. Architecture is what actually happens. Your parish does not rely on a hope that the treasurer will handle funds properly — it requires two signatories on every cheque. That is architectural governance. The same principle applies to AI.

The Tractatus Framework — Transparent and Open

The governance architecture behind Village AI is called the Tractatus framework. It is worth knowing three things about it.

It is open. The entire framework is published under an Apache 2.0 open-source licence. Anyone can read the code, inspect the rules, and verify that the governance does what it claims to do. This is the opposite of Big Tech AI governance, where the rules are proprietary and the reasoning is hidden. When Google or OpenAI tells you their AI is "aligned with human values," you have no way to check. With Tractatus, you can read every line.

It is transparent. Every governance decision is logged. When the boundary enforcer blocks the AI from making a values decision, that event is recorded. When the cross-reference validator catches a discrepancy, it is recorded. Your moderators can see exactly what the governance system did and why. There is no hidden layer where decisions are made without accountability.

It can be adapted. The framework is not a rigid set of rules imposed from outside. Communities can shape the governance to reflect their own priorities. An Episcopal parish and a conservation group have different values, different sensitivities, different boundaries. The Tractatus framework accommodates this — not by letting communities weaken the governance, but by letting them define what the governance protects. Your community's constitution, your community's moral landscape, your community's boundaries — structurally enforced, not just documented.

The full framework, including the research behind it, is available at agenticgovernance.digital. You do not need to read it to use Village — the governance operates whether you inspect it or not. But if you want to understand exactly how your AI is governed, the door is open.

In the next article, we will look at what Village AI actually does today in practice — what it can help your parish with, how bias is addressed through the vocabulary system, and what is still a work in progress.


This is Article 3 of 5 in the "Your Parish, Your AI" series. For the full governance architecture, visit Village AI on Agentic Governance.

Previous: Big Tech AI vs. Your Parish AI — Why the Difference Matters
Next: What's Actually Running in Village Today

Published under CC BY 4.0 by My Digital Sovereignty Ltd. You are free to share and adapt this material, provided you give appropriate credit.