
Thursday, January 29, 2026, 11:00am ET
AI coding assistants are generating more code than ever, but the productivity gains haven’t been as dramatic as expected. That’s because code generation was never the main bottleneck. Testing and integration were.
The problem is compounded with AI-generated code, which often lacks full awareness of your application's architecture and microservices, so issues surface late in staging or production.
In this workshop, we'll show how to safely test AI-generated code directly against your staging Kubernetes environment without deploying. mirrord, an open-source development tool, lets your local code run in the real context of your cluster alongside existing services.
Through a live demo with Cursor + mirrord, you’ll see how this approach enables fast, realistic feedback on AI-generated changes, catching integration issues earlier and reducing reliance on slow CI pipelines and deployments.
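To give a sense of the workflow ahead of the session, running a local process through mirrord looks roughly like this. The workload and command names below are hypothetical examples, not taken from the demo:

# Point kubectl at the staging cluster, then run the local build through
# mirrord, targeting an existing workload (names here are made up):
mirrord exec --target deployment/orders-service -- node server.js

The local process then picks up the target's incoming traffic and environment from the cluster, so AI-generated changes can be exercised against real dependencies before anything is deployed.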
What You'll Learn
Who Should Attend
Meet Your Instructors

Arsh Sharma
Sr DevRel, MetalBear
Arsh is a Senior DevRel Engineer at MetalBear, a CNCF Ambassador, and a recipient of the Kubernetes Contributor Award for his open-source contributions. He loves tinkering with new projects in the cloud ecosystem and writing about his learnings. He has contributed to CNCF projects such as cert-manager and Kyverno, and in a previous role was part of the open-source Kubernetes team at VMware.
Anton Weiss
Chief Cluster Whisperer, PerfectScale by DoiT
Anton has built a storied career creating engaging and informative content that helps practitioners navigate the complexities of day-to-day Kubernetes operations. With previous experience as a CD Unit Leader, Head of DevOps, CTO, and CEO, he has worn many hats as a consultant, instructor, and public speaker. He is passionate about leveraging his expertise to support the DevOps, Platform Engineering, and Kubernetes communities.