AI Policy on Wikipedia: How Automation Shapes Knowledge

AI policy, a set of guidelines governing how artificial intelligence tools are used to edit, moderate, or enhance Wikipedia content, is also known as automated editing rules. It exists because machines can help, but they can also harm. Wikipedia doesn’t ban AI, but it demands control. No bot can rewrite history, erase marginalized voices, or push bias into articles without human review. The Wikimedia Foundation, the nonprofit that supports Wikipedia’s infrastructure and sets broad policy direction, doesn’t run the site; volunteers do. And those volunteers are the ones who decide what AI can and can’t do.

AI tools are already editing Wikipedia. Some fix typos. Others flag vandalism. A few even suggest edits based on patterns in past changes. But here’s the catch: algorithmic editing, the use of automated systems to make content changes without direct human input for each edit, isn’t trusted to make judgment calls. Why? Because AI doesn’t understand context. It can’t tell the difference between a factual update and a biased rewrite. It can’t spot when a source is outdated or when a minority view is being silenced. That’s why Wikipedia moderation, the process of reviewing, reverting, or approving edits by human volunteers to maintain quality and neutrality, still rules. AI might draft the first line, but a human has to sign off on the whole paragraph.
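That workflow follows a common human-in-the-loop pattern, sketched below in plain Python. Everything here is illustrative: the names (`propose_fix`, `human_approves`, `TYPO_FIXES`) are hypothetical, and this is not Pywikibot or any real Wikipedia tool. The sketch only shows the shape of the rule: automation may propose an edit, but a person must approve it before anything is saved.

```python
# Minimal human-in-the-loop editing sketch. All names are illustrative;
# real Wikipedia bots must follow the community's bot policy and are
# typically built on frameworks such as Pywikibot.
import re
from dataclasses import dataclass

@dataclass
class ProposedEdit:
    page: str
    old_text: str
    new_text: str
    summary: str

# Hypothetical whitelist of mechanical fixes the bot may suggest.
TYPO_FIXES = {"teh": "the", "recieve": "receive"}

def propose_fix(page: str, text: str) -> ProposedEdit | None:
    """Mechanical pass: the bot may only *suggest* a change."""
    new_text = text
    for typo, fix in TYPO_FIXES.items():
        new_text = re.sub(rf"\b{typo}\b", fix, new_text)
    if new_text == text:
        return None  # nothing to propose
    return ProposedEdit(page, text, new_text, "bot-suggested typo fix")

def human_approves(edit: ProposedEdit) -> bool:
    """Judgment call: a volunteer reviews the diff before anything is saved."""
    print(f"[{edit.page}] {edit.summary}")
    print(f"- {edit.old_text}")
    print(f"+ {edit.new_text}")
    return input("Apply this edit? [y/N] ").strip().lower() == "y"

def review_queue(pages: dict[str, str]) -> None:
    """Run the bot over pages, saving only human-approved proposals."""
    for page, text in pages.items():
        edit = propose_fix(page, text)
        if edit is None:
            continue
        # The gate the community insists on: no sign-off, no saved edit.
        if human_approves(edit):
            print(f"saved: {page}")  # a real bot would write via the API here
        else:
            print(f"discarded: {page}")

if __name__ == "__main__":
    review_queue({"Example article": "She did not recieve teh letter."})
```

The design point is that `propose_fix` returns a proposal rather than saving anything; the approval gate is where the policy actually lives, mirroring how bot edits on Wikipedia remain subject to volunteer review and reversion.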

There’s no official AI policy document that says "Do this, don’t do that." Instead, rules are built slowly, through community debate, test cases, and repeated mistakes. When a bot started auto-adding citations from unreliable sources, editors shut it down. When an AI tried to auto-delete articles about small towns, volunteers fought back with evidence of local significance. These aren’t theoretical debates—they’re daily fights over what knowledge stays and what gets erased. The AI policy isn’t written in stone. It’s written in edit histories, talk page arguments, and the quiet persistence of volunteers who refuse to let machines decide what’s true.

What you’ll find below isn’t a list of rules. It’s a collection of real stories—how volunteers spotted AI-generated falsehoods, how tools were reined in, how bias crept in and got corrected, and how the encyclopedia still stands because humans refused to hand over the pen.

Leona Whitcombe

Wikimedia Foundation's AI Literacy and Policy Advocacy

The Wikimedia Foundation is fighting to ensure AI learns from open knowledge responsibly. Its AI literacy programs and policy advocacy aim to protect Wikipedia’s integrity and demand transparency from AI companies that use public data.