Configure AI Agents to Use mirrord
The goal is to help you create an AGENTS.md file that lives in your repository and tells AI agents something like: “Hey, when testing code changes, use mirrord first, not mocks or CI/CD.” The challenge is that writing this file manually is tedious: you need to figure out mirrord configs for each service, create helper scripts, write clear instructions, and validate that everything works.
Environment Setup
For this demo, we’ll use the MetalBear playground repository. It's a simple IP visit counter application written in Go, with a Redis dependency, which makes it ideal for demonstrating how this works.
The architecture is simple: the Go service handles incoming HTTP requests and stores the visit count in Redis.

Here’s what you’ll need to get started:
Access to a Kubernetes cluster
kubectl configured and ready
mirrord installed locally
For this guide, we use Claude Code as the AI assistant. In the Try It Yourself section, we’ll cover other assistants that can be used with the same workflow.
You can follow along using any cluster you already have access to. The important part is that you’re testing against a real environment.
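Before running anything, it's worth a quick sanity check that the prerequisites are in place:

```bash
# Verify mirrord is installed
mirrord --version

# Confirm kubectl can reach your cluster and list its deployments
kubectl cluster-info
kubectl get deployments
```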
The Meta-Prompt
Here’s where it gets interesting. Instead of manually creating all the configuration files, we gave an AI agent a comprehensive prompt that told it exactly what to generate.
We navigated to the repository root, opened Claude Code, and pasted in the following prompt:
When we ran the prompt, Claude:
Discovered the services by scanning for entry points like main.go and app.py
Matched them to Kubernetes deployments using kubectl get deployments
Generated mirrord configurations, helper scripts, and AGENTS.md
Validated that everything worked before presenting the results
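Roughly speaking, the discovery step boils down to commands like these (the exact commands the agent chooses may differ):

```bash
# Scan the repository for common service entry points
find . -name "main.go" -o -name "app.py"

# List cluster deployments to match services against
kubectl get deployments -o wide
```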
Let's see it in action:
What Claude Generated
You saw Claude scan the repository and identify all the services. From there, we selected the ip-visit-counter service to configure. Claude generated three files:
AGENTS.mdInstructions that tell AI agents how to use mirrord when testing the service.mirrord/mirrord-ip-visit-counter.jsonThe mirrord configuration, including which Kubernetes deployment to connect to and how traffic should be handled.scripts/mirrord-ip-visit-counter.shA helper script that wraps the mirrord command with pre-flight checks, such as verifying that mirrord is installed, kubectl is working, and the deployment exists. This is the command Claude runs when you say “test the service.”AGENTS.mdreferences this script directly, so the agent knows to use it automatically.
Exploring the Generated Files
Let’s take a closer look at what the meta-prompt actually generated.
1. The AGENTS.md
The AI instructions start with a prominent attention block:

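The exact text varies per repository; as an illustrative sketch (not the verbatim output), it reads something like:

```markdown
## ATTENTION AI AGENTS

- NEVER test code changes with mocks or by pushing through CI/CD.
- ALWAYS test against the live cluster using mirrord.
- You MUST use scripts/mirrord-ip-visit-counter.sh to run the service.
```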
Notice the wording: “NEVER”, “ALWAYS”, “MUST”. This is intentional. AI agents respond far more reliably to imperative instructions than to phrasing like “you might want to consider”.
The file also includes exact testing commands with a required header:

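As a sketch of what those commands look like (the /count path is a placeholder for illustration; the header value comes from the mirrord config covered next):

```bash
# The x-mirrord: local header marks the request so mirrord routes it
# to the process on your machine instead of the cluster pod.
curl -H "x-mirrord: local" https://playground.metalbear.dev/count
```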
This detail is critical. When you later tell Claude to test the service, it automatically includes this header because AGENTS.md explicitly requires it.
2. The mirrord-ip-visit-counter.json config
This JSON file tells mirrord how to connect to the cluster:

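The generated config isn't reproduced verbatim here; based on the description below, a representative sketch looks like this:

```json
{
  "target": {
    "path": "deployment/ip-visit-counter",
    "namespace": "default"
  },
  "feature": {
    "network": {
      "incoming": {
        "mode": "steal",
        "http_filter": {
          "header_filter": "x-mirrord: local"
        }
      },
      "outgoing": true
    }
  }
}
```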
The configuration targets the ip-visit-counter deployment in the default namespace, filters traffic using the x-mirrord: local header, and enables outgoing network access so your local code can reach services running in the cluster.
3. The helper script
The helper script wraps the mirrord command and handles all the pre-flight checks:
Verifies that mirrord is installed
Confirms kubectl can reach the cluster
Checks that the deployment exists
Prints the active traffic filter
Runs the full mirrord command

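A condensed sketch of such a script; the final go run invocation is an assumption about the service's entry point:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Pre-flight checks
command -v mirrord >/dev/null 2>&1 || { echo "mirrord is not installed"; exit 1; }
kubectl cluster-info >/dev/null 2>&1 || { echo "kubectl cannot reach the cluster"; exit 1; }
kubectl get deployment ip-visit-counter >/dev/null 2>&1 || { echo "deployment not found"; exit 1; }

# Print the active traffic filter
echo "Traffic filter: x-mirrord: local"

# Start the service locally, connected to the cluster through mirrord
mirrord exec --config-file mirrord/mirrord-ip-visit-counter.json -- go run .
```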
This is the script Claude runs when you say “test the service”. Because AGENTS.md references it explicitly, the agent knows to use it automatically.
Testing the Setup
Next, we asked Claude to test the service. The key detail is that we explicitly told it to read AGENTS.md first.
Watch it in action:
Here’s what happened:
Claude read AGENTS.md and understood it should use mirrord
It ran the helper script automatically
It sent three test requests to playground.metalbear.dev, the cluster URL
mirrord routed those requests to the local service
The counter incremented as expected: 1 → 2 → 3
Now for the interesting part. We made a code change and had Claude test it immediately.
Watch the modification in action:
This is what happened next:
Claude modified the code to add the message and date fields
It restarted the service with mirrord, reconnecting to the cluster
It sent two requests to the cluster URL, which mirrord routed to the local service
It showed the updated responses with the new fields
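For context, here is a hypothetical sketch of what the modified Go response might look like; the field names follow the description above, and the actual playground code may differ:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Response sketches the service's JSON payload after the change;
// message and date are the newly added fields.
type Response struct {
	Count   int    `json:"count"`
	Message string `json:"message"` // new field
	Date    string `json:"date"`    // new field
}

func main() {
	resp := Response{
		Count:   3,
		Message: "hello from the local service", // placeholder text
		Date:    time.Now().Format(time.RFC3339),
	}
	out, _ := json.Marshal(resp)
	fmt.Println(string(out))
}
```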
Without mirrord, this would follow the usual development loop: build a Docker image, push it to a registry, wait for Kubernetes to roll out the deployment, and only then test the change. If something breaks, you repeat the entire process. With mirrord, you skip all of that and test changes directly against the live cluster in seconds.
Try It Yourself
Now it’s your turn to try this with your own repository. Copy the meta-prompt from earlier in this guide and paste it into Claude Code, or your AI assistant of choice, at the repository root.
The assistant will discover your services, match them to Kubernetes deployments, and generate everything you need. We tested this workflow with Claude Code, Cursor, GitHub Copilot CLI, and Gemini CLI. All of them followed the same interactive, step-by-step process. If you’re using a different assistant, you may need to run the discovery and generation steps as separate prompts.
Test early: Instead of generating the files and moving on, run an actual test right away. For example, ask: “Read AGENTS.md and test the [service-name] service”, and watch it work end to end.
Make a small code change: This is where you’ll feel the difference. Modify something, test it with mirrord, and see the fast feedback loop in action. Start with one service in one repository. Once you see how fast the iteration cycle becomes, you’ll want this everywhere.
A safety note
When working with AI agents and live Kubernetes environments:
Keep your AI assistant in edit or approval mode so you can review changes before they run
Never target production clusters. Use staging or development environments only
Start with one service at a time until you’re comfortable with the workflow
Team Benefits
Once you commit AGENTS.md to your repository, every team member using an AI coding assistant automatically benefits. When a new developer joins, they clone the repo, start using Claude or Cursor, and immediately see guidance to use mirrord for testing. No additional training is required.
For teams that want consistency: If your organization already has an internal AI assistant with a knowledge base, you can add this guide to your existing knowledge base. Then any developer can just ask "generate AGENTS.md with mirrord" and get a complete setup in seconds. Every repo gets configured the same way, following your team's best practices.
Keeping it updated: As your services evolve, just re-run the generation prompt. Added a new service? You'll get new configs, scripts, and a new section in AGENTS.md. Renamed a deployment? The targets in your mirrord configs update automatically.
Customization: The generated AGENTS.md is a starting point. Add sections about your testing conventions, links to internal docs, or instructions for specific scenarios. Need to filter traffic by headers or adjust file system modes? Just edit the JSON configs directly.
Enforcement: For teams that want to enforce this workflow, consider adding a pre-commit hook that reminds developers to test with mirrord before pushing.
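A minimal sketch of such a hook; it only prints a reminder and never blocks the commit:

```bash
#!/usr/bin/env bash
# .git/hooks/pre-commit: remind developers to test with mirrord
echo "Reminder: test this change against the cluster with mirrord first:"
echo "  ./scripts/mirrord-ip-visit-counter.sh"
exit 0
```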
Wrapping Up
We started with the self-correcting AI article to show what’s possible: AI agents that test their own changes against a real environment in seconds and iterate rapidly. Now you have the tools to bring that workflow into your own repository.
With the meta-prompt approach, you don’t manually write configuration files. The AI discovers your services, generates validated mirrord configs, and creates an AGENTS.md file that teaches future AI agents exactly how to test.
The result is a development workflow where AI writes code, tests it instantly with mirrord, finds bugs, fixes them, and tests again, all without waiting on CI/CD. Iteration cycles drop from minutes to seconds.
Start by trying it with the playground repo, then apply it to your own projects. Once you experience the fast feedback loop, it fundamentally changes how you work with AI coding assistants.