Insights

Articles on AI governance, data sovereignty, and community-governed technology from My Digital Sovereignty.

New to AI governance?

Start with What Is AI, Really? — a plain-language guide to AI concepts for community and not-for-profit leaders.

Guardian Agents

Guardian Agents: How Village AI Holds Itself Accountable

What Guardian Agents do for community members — confidence badges, source analysis, and AI that shows its working. Four layers of verification built on mathematics, not more AI.

Why We Built Guardian Agents

The architectural reasoning behind Village's AI accountability system. Why "add guardrails" is insufficient, the recursive trust problem, and four design principles that resolve it.

Guardian Agents and the Philosophy of AI Accountability

How Wittgenstein, Berlin, Ostrom, and Te Ao Māori converge on the same architectural requirements for governing AI in community contexts. Four philosophical commitments that demanded specific engineering responses.

AI Governance Series

What Is AI, Really? A Guide for Community and Not-for-Profit Leaders

Core AI concepts explained in plain language — what AI is, how it works, why it hallucinates, and why community governance of AI is fundamentally different from corporate AI adoption.

Governing AI in Community and Not-for-Profit Contexts

Risk identification and baseline governance practices. The specific risk profile for not-for-profits, practical governance questions, and Te Mana Raraunga principles for indigenous data sovereignty.

Models of AI Governance for Communities and Not-for-Profits

Four distinct governance approaches — from vendor-centric to community-sovereign — with honest trade-offs and decision factors for choosing between them.

Village AI as a Situated Language Layer

How Village implements community-sovereign AI in practice — a small, locally-trained language model running on community-controlled infrastructure, one that understands its community's specific context and values.

Stories

When Your AI Assistant Nearly Destroys What It Was Hired to Fix

The psychological dimension of AI over-trust. An incident where a capable AI assistant confidently proposed a fix that would have locked the founder out of his own community — and the automation bias that almost let it happen.