Multi-Cluster Setup
This guide covers how to set up multi-cluster mirrord. It involves installing the operator on all clusters, choosing an authentication method, and configuring the Primary cluster to connect to downstream clusters.
Prerequisites
Before you start, make sure you have:
- The mirrord operator 3.141.0+ (Helm chart 1.50.0+) ready to install on all clusters.
- kubectl access to all clusters.
- For EKS IAM authentication: AWS CLI and eksctl installed.
Authentication Methods
For each downstream cluster, you must specify an authType that determines how the Primary mirrord operator authenticates to it.
Bearer Token (authType: bearerToken)
Uses ServiceAccount tokens that are automatically refreshed via the Kubernetes TokenRequest API. Good for most setups where the Primary cluster can reach the downstream cluster's API server.
You generate an initial token manually during setup. After that, the operator auto-refreshes the token before it expires using the TokenRequest API. The refreshed token keeps the same lifetime as the original.
EKS IAM (authType: eks)
For AWS EKS clusters. The Primary operator generates short-lived tokens using its IAM role (via IRSA). No secrets to manage - tokens are generated and refreshed automatically every 10 minutes.
On the Primary cluster, the operator pod gets AWS credentials through IRSA (sa.roleArn). It uses those credentials to generate a presigned STS URL, which is sent to the downstream cluster as a bearer token. The downstream EKS cluster validates the token with AWS STS, then maps the IAM role to a Kubernetes group via an Access Entry. Kubernetes RBAC on the downstream cluster grants permissions to that group.
No Kubernetes Secret is needed - authentication is entirely through IAM.
mTLS (authType: mtls)
For clusters that require client certificate authentication. You provide the client certificate and key in the cluster configuration or Secret.
Kubernetes does not auto-refresh mTLS client certificates. You are responsible for rotating the certificates you provide before they expire.
Fields per Auth Type
| Field | bearerToken | eks | mtls |
| --- | --- | --- | --- |
| server | Required | Required | Required |
| caData | Optional | Optional | Optional |
| bearerToken | Required (initial only, auto-refreshed after) | — | — |
| region | — | Required | — |
| tlsCrt | — | — | Required |
| tlsKey | — | — | Required |
Setting Up Downstream Clusters
Every downstream cluster needs the mirrord operator installed with the operator.multiClusterMember Helm chart value set to true. This creates the ServiceAccount, ClusterRoles, and ClusterRoleBindings that the Primary operator needs to manage sessions on that cluster.
Bearer Token / mTLS Clusters
Generate an initial token (bearer token auth only)
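A minimal sketch of generating the initial token with kubectl's TokenRequest support. The ServiceAccount name (mirrord-operator), namespace (mirrord), and token duration are assumptions here — match them to your operator install:

```shell
# Request an initial bearer token for the downstream operator's ServiceAccount.
# ServiceAccount name, namespace, and duration are placeholders for your install.
kubectl create token mirrord-operator \
  --namespace mirrord \
  --duration 24h
```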
Save this token - you'll need it when configuring the Primary cluster. This initial token is only needed once during setup. After the Primary operator starts, it auto-refreshes tokens using the TokenRequest API before they expire.
For mTLS, skip this step. Instead, you'll provide the client certificate and key when configuring the Primary cluster.
EKS IAM Clusters
EKS IAM authentication lets the Primary operator authenticate to downstream EKS clusters using its AWS IAM role. No Kubernetes Secrets to manage — the operator generates short-lived tokens from its IAM identity.
How EKS IAM Authentication Works
The Primary operator pod needs to talk to downstream EKS clusters. To do that, it needs a token. Here's how the token gets created and accepted:
1. The pod gets AWS credentials — the sa.roleArn annotation on the Primary operator's ServiceAccount tells EKS to inject AWS credentials into the pod via IRSA. Now the pod can act as that IAM role.
2. The pod creates a token — the operator uses those AWS credentials to generate a presigned GetCallerIdentity STS URL. This URL is used as a Kubernetes bearer token that EKS understands.
3. The downstream cluster validates the token — when the downstream EKS cluster receives this token, it calls AWS STS to verify the identity. AWS responds: "This is IAM role X."
4. EKS maps the IAM role to a Kubernetes group — the Access Entry on the downstream cluster maps that IAM role to a Kubernetes group (e.g. mirrord-operator-envoy). Now the operator is authenticated as a member of that group.
5. Kubernetes RBAC grants permissions — the ClusterRoleBindings on the downstream cluster (created by Helm with multiClusterMemberIamGroup) grant the mirrord-operator-envoy group the necessary permissions.
What Goes Where
| Component | Where | Purpose |
| --- | --- | --- |
| OIDC Identity Provider | Primary cluster (AWS) | Enables IRSA so the pod can assume an IAM role |
| IAM role + trust policy | AWS IAM | The identity the operator pod assumes. Has no AWS permissions — only used as a Kubernetes identity |
| sa.roleArn in Helm | Primary cluster | Annotates the operator's ServiceAccount so the pod gets AWS credentials for the IAM role |
| Access Entry | Each downstream EKS cluster (AWS) | Maps the IAM role to a Kubernetes group. Created via aws eks create-access-entry |
| multiClusterMemberIamGroup in Helm | Each downstream cluster | Creates ClusterRoleBindings that grant permissions to the Kubernetes group |
The Primary cluster does not need an Access Entry. The operator pod runs inside the Primary cluster, so it authenticates using its ServiceAccount — no IAM token needed. The Access Entries are only needed on downstream clusters where the pod authenticates from the outside.
Setup Steps
Associate an OIDC Identity Provider with the Primary cluster
This is a one-time AWS setup. It registers the Primary EKS cluster's OIDC issuer as an IAM Identity Provider, which is what enables IRSA — the mechanism that lets pods assume IAM roles.
Skip this if already done (e.g. if other workloads in the cluster already use IRSA).
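Assuming you use eksctl (the cluster name and region are placeholders), the association looks like:

```shell
# Register the Primary cluster's OIDC issuer as an IAM Identity Provider (enables IRSA).
# <primary-cluster> and <region> are placeholders for your environment.
eksctl utils associate-iam-oidc-provider \
  --cluster <primary-cluster> \
  --region <region> \
  --approve
```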
Create the IAM role with a trust policy
Create an IAM role that the operator pod will assume. The trust policy controls who can assume this role — it should only allow the Primary operator's ServiceAccount.
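A trust policy sketch, assuming the operator's ServiceAccount is mirrord-operator in the mirrord namespace; the account ID, region, and OIDC issuer ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:mirrord:mirrord-operator",
          "oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```

Create the role with aws iam create-role, passing this document via --assume-role-policy-document.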
You can find the OIDC issuer ID from the Primary cluster:
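For example (the cluster name is a placeholder):

```shell
# Print the Primary cluster's OIDC issuer URL; the trailing path segment is the issuer ID.
aws eks describe-cluster \
  --name <primary-cluster> \
  --query "cluster.identity.oidc.issuer" \
  --output text
```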
This IAM role does not need any IAM policies attached (no s3:*, sqs:*, etc.). It has zero AWS permissions. All actual permissions come from Kubernetes RBAC on the downstream clusters. The role is only used as an identity.
Create an EKS Access Entry on each downstream cluster
This is an AWS-level configuration (not a Kubernetes resource). It tells each downstream EKS cluster: "When this IAM role authenticates, map it to the mirrord-operator-envoy Kubernetes group."
Run this for each downstream EKS cluster:
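A sketch with placeholder names — substitute your cluster name, account ID, and the IAM role created earlier:

```shell
# Map the operator's IAM role to the mirrord-operator-envoy group on a downstream cluster.
# <downstream-cluster>, <ACCOUNT_ID>, and <mirrord-operator-role> are placeholders.
aws eks create-access-entry \
  --cluster-name <downstream-cluster> \
  --principal-arn arn:aws:iam::<ACCOUNT_ID>:role/<mirrord-operator-role> \
  --kubernetes-groups mirrord-operator-envoy
```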
The Access Entry itself doesn't grant any Kubernetes permissions — it only establishes the identity mapping. Permissions come from RBAC (next step).
Install the operator on each downstream cluster
Install the operator with multiClusterMember and multiClusterMemberIamGroup:
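A minimal sketch, assuming the chart is installed from the metalbear Helm repository — verify the repository, chart name, and any license values against your existing operator install:

```shell
# Install the operator as a multi-cluster member with IAM-group bindings.
# Repository/chart names and namespace are assumptions; value paths follow this guide.
helm upgrade --install mirrord-operator metalbear/mirrord-operator \
  --namespace mirrord --create-namespace \
  --set operator.multiClusterMember=true \
  --set operator.multiClusterMemberIamGroup=mirrord-operator-envoy
```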
This creates ClusterRoleBindings that grant the mirrord-operator-envoy Kubernetes group the permissions needed to manage sessions. The Access Entry (previous step) maps the IAM role to this group, so the Primary operator gets these permissions when it authenticates.
Install the operator on the Primary cluster with sa.roleArn
On the Primary cluster, set sa.roleArn so the operator pod can assume the IAM role:
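A sketch of the install command — chart names and namespace are assumptions, and the multi-cluster values covered later are omitted here:

```shell
# Install the Primary operator; sa.roleArn annotates its ServiceAccount for IRSA.
# <ACCOUNT_ID> and <mirrord-operator-role> are placeholders.
helm upgrade --install mirrord-operator metalbear/mirrord-operator \
  --namespace mirrord --create-namespace \
  --set sa.roleArn=arn:aws:iam::<ACCOUNT_ID>:role/<mirrord-operator-role>
```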
This annotates the operator's ServiceAccount with eks.amazonaws.com/role-arn. When the pod starts, EKS injects AWS credentials into the pod automatically. The operator uses these credentials to generate tokens for each downstream cluster.
See the Configuring the Primary Cluster section below for the full Helm values.
Configuring the Primary Cluster
Install the operator on the Primary cluster with multi-cluster enabled and all downstream clusters configured.
Helm Values
The cluster key names in the clusters map should match the real cluster names. For EKS clusters this is especially important — the operator uses the key as the EKS cluster name when signing IAM tokens.
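A hypothetical values sketch using only the field names this guide mentions — the exact nesting may differ in your chart version, so check it against the chart's values schema:

```yaml
# Hypothetical sketch: field names (server, authType, region, bearerToken,
# caData, isDefault) come from this guide; the nesting is an assumption.
operator:
  clusters:
    prod-eks:                       # key must match the real EKS cluster name
      server: https://<eks-api-endpoint>
      authType: eks
      region: us-east-1
    staging:
      server: https://<staging-api-server>:6443
      authType: bearerToken
      bearerToken: <initial-token>  # auto-refreshed after the operator starts
      caData: <base64-ca-bundle>
      isDefault: true
```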
Where Data Is Stored
When you provide cluster configuration in the Helm values, the chart splits it into two places. Non-sensitive configuration (server, caData, authType, region, isDefault, namespace) goes into the ConfigMap (clusters-config.yaml). Sensitive credentials (bearerToken, tls.crt, tls.key) go into a Secret (mirrord-cluster-<name>).
For EKS IAM clusters, no Secret is created — everything is in the ConfigMap since authentication is through IAM, not stored credentials.
Manual Secret Creation
If you prefer to manage secrets outside of Helm values, you can create the Secret manually. The Secret must be labeled with operator.metalbear.co/remote-cluster-credentials=true and named mirrord-cluster-<cluster-name>. The cluster configuration (server, authType, etc.) still needs to be in the Helm values or the clusters-config.yaml ConfigMap.
For bearer token:
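A sketch assuming the operator namespace is mirrord; the Secret key name (bearerToken) follows the description above:

```shell
# Create and label the bearer-token Secret for one downstream cluster.
# Name pattern and label are required by the operator; namespace is a placeholder.
kubectl create secret generic mirrord-cluster-<cluster-name> \
  --namespace mirrord \
  --from-literal=bearerToken=<token>
kubectl label secret mirrord-cluster-<cluster-name> \
  --namespace mirrord \
  operator.metalbear.co/remote-cluster-credentials=true
```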
For mTLS:
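A sketch assuming the operator namespace is mirrord; the key names (tls.crt, tls.key) follow the description above:

```shell
# Create and label the mTLS Secret for one downstream cluster.
# Certificate/key file paths and the namespace are placeholders.
kubectl create secret generic mirrord-cluster-<cluster-name> \
  --namespace mirrord \
  --from-file=tls.crt=./client.crt \
  --from-file=tls.key=./client.key
kubectl label secret mirrord-cluster-<cluster-name> \
  --namespace mirrord \
  operator.metalbear.co/remote-cluster-credentials=true
```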
EKS IAM clusters do not need a Secret at all. They authenticate using the operator's IAM role and only need the cluster configuration in the ConfigMap.
RBAC — How Permissions Work
When the Primary operator connects to a downstream cluster, it needs permissions to list targets, create sessions, run health checks, and more. These permissions are set up automatically by the Helm chart on each downstream cluster.
The chart creates two ClusterRoles (permission definitions):
| ClusterRole | Grants | Created on |
| --- | --- | --- |
| mirrord-operator-envoy | General operations: listing targets, managing parent sessions, syncing database branches, reading pods/deployments, health checks | All member clusters |
| mirrord-operator-envoy-remote | Creating and managing child sessions | Member clusters only (not Primary) |
A ClusterRole by itself doesn't grant anything — it only defines what actions are possible. ClusterRoleBindings connect the ClusterRole to an identity (a ServiceAccount or a group).
How Bindings Differ by Auth Type
| Auth type | Identity granted permissions | What creates the bindings |
| --- | --- | --- |
| Bearer token / mTLS | mirrord-operator-envoy ServiceAccount | multiClusterMember=true |
| EKS IAM | mirrord-operator-envoy Kubernetes group | multiClusterMemberIamGroup=mirrord-operator-envoy |
For EKS IAM, the Access Entry maps the IAM role to the mirrord-operator-envoy Kubernetes group. The ClusterRoleBindings grant permissions to that group. So the chain is: IAM role -> Access Entry -> Kubernetes group -> ClusterRoleBinding -> ClusterRole -> permissions.
In practice:
- Bearer token / mTLS downstream cluster: multiClusterMember=true
- EKS IAM downstream cluster: multiClusterMember=true + multiClusterMemberIamGroup=mirrord-operator-envoy
Verify the Connection
After installing the operator on all clusters, verify that the Primary can reach all downstream clusters:
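One way to inspect the operator's status from the Primary cluster — the resource name here is an assumption based on the operator's operator.metalbear.co API group, so adjust it to whatever kubectl api-resources reports in your install:

```shell
# Dump the operator resource, including its status, from the Primary cluster.
# The resource name is an assumption; check `kubectl api-resources | grep metalbear`.
kubectl get mirrordoperators.operator.metalbear.co --output yaml
```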
Look for the status.connected_clusters section:
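The shape is roughly as follows (all values are illustrative placeholders):

```yaml
# Illustrative sketch of the status section; values are placeholders.
status:
  connected_clusters:
    prod-eks:
      license_fingerprint: <fingerprint>
      operator_version: 3.141.0
    staging:
      error: "connection refused"   # shown instead when a cluster is unreachable
```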
Each connected cluster should show license_fingerprint and operator_version. If a cluster is not connected, you'll see an error field instead.
Token Refresh
| Auth type | Refresh | Notes |
| --- | --- | --- |
| Bearer Token | Automatic via TokenRequest API | Refreshed before expiration. Only the initial token is manual. |
| EKS IAM | Automatic every 10 minutes | Tokens are presigned STS URLs, valid for 15 minutes. No Secrets involved. |
| mTLS | Not auto-refreshed | You must rotate certificates manually before they expire. |
FAQ
Q: Do developers need to know about multi-cluster? A: No. The developer experience is identical to single-cluster. Developers run mirrord exec as usual and the operator handles everything. Note that multi-cluster sessions only work when the developer connects to the Primary cluster — connecting directly to a downstream cluster will start a regular single-cluster session on that cluster.
Q: Can the Primary cluster also run workloads? A: Yes. By default, the Primary cluster participates as a workload cluster. Set managementOnly: true only if the Primary has no application pods.
Q: What happens if a downstream cluster is unreachable? A: The session creation will fail. The operator reports connection errors in the MirrordOperator status (see Verify the Connection above).
Q: Can I mix authentication methods? A: Yes. Each downstream cluster can use a different authType. You can have some clusters using bearer tokens, others using EKS IAM, and others using mTLS.
Q: Do I need the same operator version on all clusters? A: It's recommended. Version mismatches may cause compatibility issues.
Q: Does the IAM role need AWS permissions? A: No. The IAM role used for EKS IAM authentication has zero AWS permissions. It's only used as an identity. All actual permissions come from Kubernetes RBAC on the downstream clusters via the Access Entry and ClusterRoleBindings.