Why Consensus Voting Fails for Agent Truthfulness
Source: DEV Community
Pass@k is the most popular reliability pattern in production agent systems right now. Run the same task k times, take a majority vote on the output, ship the consensus answer. It works beautifully for code generation: a function either passes the test suite or it doesn't, and the verification is objective and external to the agents. For factual accuracy, the pattern collapses, and most teams deploying it haven't figured out why yet.

The failure is structural, not probabilistic. Consensus voting assumes that errors are independent and randomly distributed: if Agent A hallucinates, Agent B probably won't hallucinate the same thing, so with enough agents, truth wins by majority. This assumption holds for coding tasks because the test suite is the arbiter. It does not hold for factual claims because there is no test suite for truth.

Three failure modes

Correlated hallucination. LLMs trained on similar data hallucinate in similar ways. Ask three instances of the same frontier model whether a specific