KubeCon Amsterdam 2026: Our Recap
We’re back from KubeCon + CloudNativeCon Europe 2026 in Amsterdam, and we’ve had a few days to decompress, compare notes, and process everything. Before the conference we put together a guide to what we expected, so consider this the companion piece: what actually happened, what landed differently than we thought, and why this particular KubeCon felt significant.
The World Outside, the Community Inside
It would feel dishonest to write a recap without acknowledging what’s going on beyond the walls of the convention center. The world is in a turbulent place right now. We even met one attendee who was on call during the conference, actively failing over infrastructure away from data centers affected by events far outside the cloud native bubble.
And yet, walking through the halls, the energy was overwhelmingly positive. The cloud native community showed up strong, vibrant, and hopeful. Europe doesn’t always get the credit it deserves when it comes to open source and cloud native innovation, especially next to the US. But the scale is hard to ignore: nearly 25 million developers on GitHub across the EU and over 155 million public-project contributions in the past year. The numbers back up the energy, and you could feel it in the room. Several people told us they actually prefer the EU edition of KubeCon to the North American one. That’s anecdotal, of course, but it wasn’t a one-off comment. It came up again and again.
The Big Themes
In our pre-conference guide we flagged AI agents, platform engineering, and security and sovereignty as the themes to watch. All three were prominent, but a few areas landed harder than we expected.
Inference Routing Is Now an Infrastructure Problem
There was a lot of talk around how organizations are handling inference traffic. Not just “which model do we call,” but the full routing question: how do you balance across different models, geographic regions, and cost tiers in a way that actually scales? For a lot of the larger companies at the conference, this has moved out of the ML team’s domain and firmly into the infrastructure layer. It came up in sessions, at booths, and over drinks. It’s a distributed systems problem now, and the Kubernetes community is treating it like one.
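To make the shape of the problem concrete, here’s a minimal sketch in Go of the kind of decision these routing layers make. Everything in it is hypothetical (the backend names, fields, and weights are ours, not from any specific project), but it shows why this is infrastructure work: weighted selection across models, regions, and cost tiers under caller constraints.

```go
package main

import (
	"fmt"
	"math/rand"
)

// Backend describes one place an inference request could land.
// All names and fields are illustrative, not from any real project.
type Backend struct {
	Name     string
	Model    string  // which model this endpoint serves
	Region   string  // where it runs (latency and sovereignty both care)
	CostTier string  // e.g. "premium", "standard", "spot"
	Weight   float64 // routing weight, tuned for cost and capacity
}

// pick does weighted random selection over the backends that
// satisfy the caller's region and cost-tier constraints.
func pick(backends []Backend, region, costTier string) (Backend, error) {
	var eligible []Backend
	var total float64
	for _, b := range backends {
		if (region == "" || b.Region == region) &&
			(costTier == "" || b.CostTier == costTier) {
			eligible = append(eligible, b)
			total += b.Weight
		}
	}
	if len(eligible) == 0 {
		return Backend{}, fmt.Errorf("no backend for region=%q tier=%q", region, costTier)
	}
	r := rand.Float64() * total
	for _, b := range eligible {
		if r -= b.Weight; r <= 0 {
			return b, nil
		}
	}
	return eligible[len(eligible)-1], nil
}

func main() {
	backends := []Backend{
		{Name: "gpu-eu-1", Model: "llama-70b", Region: "eu-west", CostTier: "standard", Weight: 3},
		{Name: "gpu-eu-2", Model: "llama-8b", Region: "eu-west", CostTier: "spot", Weight: 5},
		{Name: "api-us-1", Model: "hosted-frontier", Region: "us-east", CostTier: "premium", Weight: 1},
	}
	b, err := pick(backends, "eu-west", "")
	if err != nil {
		panic(err)
	}
	fmt.Printf("routing to %s (%s in %s)\n", b.Name, b.Model, b.Region)
}
```

The toy version fits in a function; the production version, with health checks, token-aware load, and failover across providers, is exactly the kind of distributed systems problem people were describing.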
Agentic AI Stole the Show
If there was a single breakout theme, this was it. The Agentics Day co-located event we highlighted before the conference delivered on its promise, and it set the tone for the rest of the week. What we heard repeatedly was that teams aren’t just experimenting with AI agents for operations anymore, they’re deploying them. Incident response, automated remediation, monitoring that can reason about what it’s seeing.
That said, the reception wasn’t universally warm. Several attendees told us they felt real cynicism on the show floor whenever AI got shoehorned into a vendor pitch. And honestly, that’s a fair reaction when so much of the messaging feels forced, bolting “AI-powered” onto products where it doesn’t add obvious value. But in our view, dismissing the whole trend because the marketing is clumsy would be a mistake. The underlying shift is real, even if the packaging needs work.
Platform Teams Are Absorbing LLMOps
This one snuck up on us. We expected platform engineering to be a major theme, and it was, but the specific pattern that kept coming up was platform teams taking ownership of model-serving infrastructure. Organizations that already have a mature internal developer platform are extending it to handle LLM workloads: serving endpoints, prompt versioning, evaluation pipelines. The abstraction layer is the same, but what’s running underneath has changed. If your platform team hasn’t started fielding these requests yet, the conversations at KubeCon suggest it’s a matter of time.
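As a rough illustration of what “the abstraction layer is the same” can look like, here’s a hedged sketch of a hypothetical internal platform spec. None of these types come from a real platform, but the pattern, an ordinary service spec with an optional LLM block, reflects what teams described: the same deploy, observe, and rollback workflow, with model-specific pieces tucked underneath.

```go
package platform

// ServiceSpec is what a developer submits to a hypothetical internal
// developer platform; the types here are illustrative only.
type ServiceSpec struct {
	Name     string
	Image    string
	Replicas int

	// LLM is nil for ordinary services. When set, the platform
	// provisions model-serving infrastructure behind the same
	// workflow developers already use for everything else.
	LLM *LLMSpec
}

// LLMSpec captures the extra pieces an LLM workload needs.
type LLMSpec struct {
	Model         string // e.g. a self-hosted open-weights model
	PromptVersion string // pinned prompt revision, rolled back like code
	EvalPipeline  string // evaluation job that gates traffic shifts
	GPUs          int
}
```

The interesting design choice is what stays out of the developer’s hands: prompt versions and eval gates live in the spec, while GPU scheduling and serving runtimes stay the platform team’s problem.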
The Rise of AI Gateways
This one felt like it was crystallizing in real time at the conference. As more teams integrate MCP servers, make LLM calls from their services, and run inference alongside traditional workloads, the question of who governs all of that becomes unavoidable. Rate limiting, access control, cost tracking, security policy enforcement. AI gateways are emerging as the answer, and while the category is still early, the demand was clearly there in the conversations we had. We actually wrote about this last month in our post on building an LLM gateway on Kubernetes that addresses risks like prompt injection and data leakage, so it was validating to see the same concerns come up so consistently on the show floor.
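For a flavor of what an AI gateway actually has to do, here’s a minimal sketch in Go: a reverse proxy in front of a hypothetical internal model-serving endpoint that checks a team credential, applies a naive per-team rate limit, and logs requests as a stand-in for cost tracking. The header name, upstream URL, and limits are all our own assumptions; a real gateway would add token-level metering, policy enforcement, and protections against risks like prompt injection.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
	"time"
)

// quota is a naive fixed-window request counter per team key.
// Purely illustrative; a production gateway would use a proper
// token bucket and shared, distributed state.
type quota struct {
	mu     sync.Mutex
	counts map[string]int
	window time.Time
	limit  int
}

func (q *quota) allow(team string) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	if time.Since(q.window) > time.Minute {
		q.counts = map[string]int{}
		q.window = time.Now()
	}
	q.counts[team]++
	return q.counts[team] <= q.limit
}

func main() {
	// Hypothetical internal model-serving backend.
	upstream, err := url.Parse("http://llm-serving.internal:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)
	q := &quota{counts: map[string]int{}, window: time.Now(), limit: 60}

	http.HandleFunc("/v1/", func(w http.ResponseWriter, r *http.Request) {
		team := r.Header.Get("X-Team-Key") // hypothetical auth header
		if team == "" {
			http.Error(w, "missing credentials", http.StatusUnauthorized)
			return
		}
		if !q.allow(team) {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		// Cost-tracking hook: a real gateway would meter tokens from
		// the model's response; here we just log the request.
		log.Printf("team=%s path=%s", team, r.URL.Path)
		proxy.ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8443", nil))
}
```

Even this toy version makes the governance point: once every LLM call flows through one choke point, rate limiting, access control, and cost attribution all become solvable in one place.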
Running Your Own Models Is Becoming Much More Common
This was particularly pronounced at a European KubeCon. Data sovereignty requirements under EU regulation and the straightforward economics of high-volume inference are pushing more organizations to self-host LLMs and smaller language models on their own infrastructure. A year ago this felt like a niche choice for the most privacy-conscious teams. Now it’s becoming standard practice in regulated industries.
Security Ran Through Everything
Not only as a standalone track, but as a thread that kept surfacing in almost every conversation. Supply chain integrity, credential management, the new risks that come with AI model provenance and tampering. The attack surface has expanded significantly, and the community knows it. This wasn’t something people were talking about in isolation. It came up at the AI sessions, the platform talks, the booth conversations. If there was a single undercurrent that connected all the other themes, it was the shared recognition that security can’t be an afterthought at any layer.
mirrord Mini-Golf Was a Blast
If you visited our Activation Booth, you already know about the mirrord mini-golf course. The idea was simple: putt your way through a physical representation of the cloud developer loop, with one route following the legacy path and a mirrord shortcut lane going straight into the staging zone. Around 400 players came through over the course of the conference, and the data told the story we were hoping it would.

Mini golf completion times: legacy vs mirrord route
The average completion time on the legacy route was roughly 46.3 seconds. The mirrord route? About 8.8 seconds. That’s over a 5x difference, which is actually a disappointment by our standards, since mirrord typically delivers closer to a 10x improvement in the real world. The mini-golf was fun, but the numbers gave people an anchor for the conversations that followed, and those conversations about dev loop friction were some of the best we had all week.
What This KubeCon Meant for MetalBear
We need to take a moment here because this conference was a milestone for us. This was the most effort and the most presence MetalBear has ever had at a KubeCon. Two booths on the show floor, a mini-golf activation, a side event co-hosted with friends, and an outstanding team of 10 that showed up with energy from start to finish.

The MetalBear team at KubeCon Amsterdam 2026
For a company with humble beginnings, this event blew our minds. And we don’t take it for granted.
On the swag front, our Dutch Golden Age masters-inspired stickers and t-shirts were a hit. We leaned into the Amsterdam setting, and it clearly resonated, because by the morning of the last day we had barely anything left to hand out. If you managed to grab one, wear it well.

Dutch Golden Age masters-inspired MetalBear swag
We want to thank everyone who came by, whether you were a customer sharing how mirrord fits into your workflow, an open source user with feedback, a practitioner who’d never heard of us but stuck around for a demo, or someone who just wanted to play mini-golf. Those conversations are the best part of KubeCon for us.
Wrapping Up
KubeCon Amsterdam 2026 reinforced something we’ve felt building for a while: the cloud native community is not slowing down. AI is reshaping infrastructure, and the field as a whole is moving faster than any single conference can capture. Platform engineering is maturing into something that genuinely helps developers. Security has become everyone’s problem, not just the security team’s. And the European cloud native ecosystem, taken as a whole, is a force to be reckoned with.
Follow us on LinkedIn for updates on where we’ll be next.