Autonomous AI Workflows
In this guide, we'll cover how to set up an AI coding agent to test its own changes against your Kubernetes cluster using mirrord. You'll configure per-service mirrord configs and an AGENTS.md so the agent runs your existing E2E tests after every change.
Tip: This guide builds on How to Test AI-Generated Code with mirrord. Start there if you haven't set up mirrord for AI workflows yet.

Prerequisites
A Kubernetes cluster with your services running (staging or dev)
An existing E2E test suite that covers your critical happy paths (Playwright, Cypress, Jest, pytest, bash scripts, or any test runner)
An AI coding agent that can execute shell commands (Claude Code, Cursor, Codex)
Step 1: Set up mirrord configs per service
Create a mirrord config for each service an agent might work on. If you already have a config from the How to Test AI-Generated Code with mirrord guide, you can reuse it. For multi-service repos, create one config per service in .mirrord/:
.mirrord/mirrord-order-service.json:
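The file contents aren't reproduced here, so below is a minimal sketch of what such a config might look like. The target path, namespace, and header name are assumptions; adjust them to match your cluster and service:

```json
{
  "target": {
    "path": "deployment/order-service",
    "namespace": "staging"
  },
  "feature": {
    "env": true,
    "fs": "read",
    "network": {
      "incoming": {
        "mode": "steal",
        "http_filter": {
          "header_filter": "x-agent-session: .*"
        }
      },
      "outgoing": true
    }
  }
}
```

With `steal` mode plus an `http_filter`, only requests carrying the session header are redirected to the agent's local process; all other traffic continues to the remote pod.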
Tip: You can auto-generate configs for every service using the mirrord skills package for Claude Code, or the meta-prompt in Using mirrord with AI Agents for any tool.
Step 2: Write AGENTS.md for autonomous operation
The AGENTS.md file is what turns a code-generation agent into an autonomous one. Use strong imperative language; agents respond more reliably to direct instructions.
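The guide doesn't reproduce the file here, so below is an illustrative sketch. The config path, script name, and namespace are assumptions carried over from this guide's example:

```markdown
# AGENTS.md

## Verifying changes

After EVERY code change, you MUST verify it against the cluster:

1. Start the service with mirrord:
   `mirrord exec -f .mirrord/mirrord-order-service.json -- npm start`
2. Run the E2E suite: `./scripts/e2e-order.sh`
3. If any test fails, fix your code and run the suite again. Do NOT open
   a PR while tests are failing.

Rules:
- NEVER edit the E2E tests to make them pass.
- NEVER target a namespace other than `staging`.
- ALWAYS include your session header on test requests.
```

The all-caps imperatives are deliberate: they are the kind of direct instruction agents follow most reliably.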
Tip: For per-tool setup (Claude Code, Cursor, Copilot, Windsurf), see How to Set Up AI Tools with mirrord.
Example: AI agent testing an order service
This example uses the MetalBear playground, a sample microservices app with an order service, inventory, and payment processing.
The E2E test script
This script tests the happy path: create an order, verify it's confirmed. It runs against real Postgres, real payment service, real inventory service. Any regression in the order flow fails the script. Adjust the port and base URL to match your service.
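The original script isn't shown above, so here is a sketch of what such a happy-path check could look like. The port (8080), the `POST /orders` endpoint, the request body, and the `x-agent-session` header are all assumptions; substitute your service's real API:

```shell
#!/usr/bin/env bash
# e2e-order.sh -- happy-path check for the order service (sketch).
# Assumed: the service listens on localhost:8080 and exposes POST /orders,
# returning JSON that contains a "status" field.
set -euo pipefail

BASE_URL="${ORDER_BASE_URL:-http://localhost:8080}"
SESSION_ID="${AGENT_SESSION_ID:-local-dev}"

# Pull the "status" value out of a JSON order response (avoids a jq dependency).
order_status() {
  grep -o '"status"[[:space:]]*:[[:space:]]*"[^"]*"' <<<"$1" \
    | head -n1 | sed 's/.*"\([^"]*\)"$/\1/'
}

run_happy_path() {
  echo "Creating order via $BASE_URL/orders (session: $SESSION_ID)"
  local resp status
  # Create an order against the real downstream services.
  resp="$(curl -sf -X POST "$BASE_URL/orders" \
    -H 'content-type: application/json' \
    -H "x-agent-session: $SESSION_ID" \
    -d '{"items": [{"sku": "mug", "quantity": 1}]}')"

  # Verify the order came back confirmed.
  status="$(order_status "$resp")"
  if [[ "$status" != "confirmed" ]]; then
    echo "FAIL: expected order status 'confirmed', got '$status'" >&2
    return 1
  fi
  echo "PASS: order confirmed"
}
```

Run it while the service is up locally under mirrord, for example with `bash -c 'source e2e-order.sh && run_happy_path'`.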
The agent in action
Task: "Add a discount_cents field to the order response based on order total."
Here's what the autonomous loop looks like, regardless of which AI coding agent you use:
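That loop can be sketched as a small wrapper the agent invokes after each edit. The mirrord config path, start command, and script name in the comments are assumptions carried over from the earlier steps:

```shell
#!/usr/bin/env bash
# Sketch of the agent's verify loop: start the service under mirrord,
# run the E2E suite against it, then tear everything down.
set -euo pipefail

run_and_verify() {
  local start_cmd="$1"  # e.g. 'mirrord exec -f .mirrord/mirrord-order-service.json -- npm start'
  local test_cmd="$2"   # e.g. './scripts/e2e-order.sh'
  local status=0

  bash -c "$start_cmd" &            # launch the service in the background
  local app_pid=$!
  sleep 2                           # crude readiness wait; poll a health endpoint in real use

  bash -c "$test_cmd" || status=$?  # run the E2E suite, capture its result

  kill "$app_pid" 2>/dev/null || true   # always clean up the service process
  wait "$app_pid" 2>/dev/null || true
  return "$status"
}

# The agent repeats: edit code, then
#   run_and_verify 'mirrord exec -f .mirrord/mirrord-order-service.json -- npm start' \
#                  './scripts/e2e-order.sh'
# until the suite passes, and only then opens a PR.
```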
The agent caught and fixed the bug without human intervention. Nobody wrote new tests for this change. The existing E2E tests acted as guardrails — the agent could change the code freely, but the happy paths that the team already validated were protected. The engineer reviews a PR that already includes proof nothing broke.
Architecture patterns for safe autonomous agents
Scoped permissions
Give agents the minimum Kubernetes access they need. Create a dedicated service account with access scoped to their target namespace. If you're using the mirrord Operator, you can use Policies to control which targets agents can access and what traffic modes they can use.
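As a starting point, a scoped service account might look like the following. This is only a sketch: the names and namespace are placeholders, and the exact resources and verbs mirrord needs vary by version and mode, so check the mirrord permissions documentation before adopting it:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ai-agent
  namespace: staging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ai-agent-mirrord
  namespace: staging
rules:
  # Read access to workloads so mirrord can resolve its targets.
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list"]
  # mirrord typically launches its agent as a Job in the target namespace.
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ai-agent-mirrord
  namespace: staging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ai-agent-mirrord
subjects:
  - kind: ServiceAccount
    name: ai-agent
    namespace: staging
```

Binding the Role in a single namespace (rather than a ClusterRole) keeps an agent from reaching workloads outside its sandbox.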
Isolated namespaces
For teams running multiple agents concurrently, use separate namespaces or mirrord for Teams' session management to prevent agents from interfering with each other. See Sharing the Cluster.
Traffic filtering
The http_filter in mirrord configs controls which traffic is stolen from the remote pod. Only requests matching the header are redirected to the local process; everything else flows to the remote pod normally. Your E2E tests hit localhost directly, so they reach the local process regardless of the header, but including the header in test requests is good practice for consistency.
In your mirrord config, set a unique header per agent run:
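The snippet isn't shown above, so here is a sketch of the relevant fragment. The header name `x-agent-session` and the value `run-7f3a` are hypothetical; generate a fresh value for each run:

```json
{
  "feature": {
    "network": {
      "incoming": {
        "mode": "steal",
        "http_filter": {
          "header_filter": "x-agent-session: run-7f3a"
        }
      }
    }
  }
}
```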
Use unique session identifiers per agent run to prevent collisions.
Database branching
For agents that need to write to the database without affecting shared staging data, use mirrord's database branching. Each agent session gets an isolated copy of the database, so writes are safe and won't corrupt shared state. This lets agents test full read/write flows against real schema and data.
Warning: Keep your AI agent in approval mode until you're comfortable with the workflow. Start with one service at a time. Never target production clusters.
Next steps
How to Test AI-Generated Code with mirrord: test AI-generated code against your Kubernetes cluster step by step
How to Set Up AI Tools with mirrord: per-tool config for Cursor, Claude Code, Copilot, and Codex
Using mirrord with AI Agents: auto-generate mirrord configs and AGENTS.md for your repo
Sharing the Cluster: manage concurrent agent sessions with mirrord for Teams