Self-Improving Multi-Agent System
An experiment in recursive self-improvement: three AI agents collaborate on a task, then propose and vote on changes to their own coordination protocol. The scaffolding stays fixed; only the policy evolves. Starting from a minimal propose-and-vote loop, the agents autonomously discover critique rounds, refinement passes, and eventually modify their own self-improvement process. Built with Gemini 2.5 Flash and Pydantic for structured output, with no agent frameworks.
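
The propose-and-vote loop described above might look like the following minimal sketch. This is an illustration under stated assumptions, not the project's actual code: the `Proposal` and `Vote` models, the majority rule, and the stub functions standing in for structured-output Gemini calls are all hypothetical names invented here.

```python
from pydantic import BaseModel


class Proposal(BaseModel):
    """A proposed change to the coordination policy (hypothetical schema)."""
    author: str
    change: str
    rationale: str


class Vote(BaseModel):
    """One agent's vote on a proposal (hypothetical schema)."""
    voter: str
    approve: bool


def run_round(policy: list[str], agents: list[str], propose, vote) -> list[str]:
    """One self-improvement round: each agent proposes a policy change,
    the other agents vote, and majority-approved changes are appended.

    `propose` and `vote` would wrap structured-output LLM calls in the
    real system; here they are injected so the loop stays testable."""
    for author in agents:
        proposal = propose(author, policy)
        # The author does not vote on its own proposal (an assumption).
        votes = [vote(a, proposal) for a in agents if a != author]
        if sum(v.approve for v in votes) > len(votes) / 2:
            policy = policy + [proposal.change]
    return policy


# Deterministic stubs in place of Gemini calls, for demonstration only.
def stub_propose(author: str, policy: list[str]) -> Proposal:
    return Proposal(author=author,
                    change=f"add critique round ({author})",
                    rationale="catch errors before voting")


def stub_vote(voter: str, proposal: Proposal) -> Vote:
    return Vote(voter=voter, approve=True)


agents = ["A", "B", "C"]
policy = ["propose-and-vote"]
new_policy = run_round(policy, agents, stub_propose, stub_vote)
# new_policy holds the original rule plus the three approved changes.
```

Because the policy is plain data that the loop itself consumes, later rounds can propose changes to the voting rule or the round structure, which is how the protocol modifies its own self-improvement process.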