Wednesday, September 25th 2024, 12pm EST.
Load balancers are a staple of scalable, high-throughput, high-availability architectures, and they work great for scaling web services. When requests take longer, though, things get complicated: requests can pile up on some backends; bursts of traffic can send latency through the roof; and by the time autoscaling kicks in, it can be too late, too expensive, or both.
Asynchronous architectures and message queues can help a lot here, especially when combined with event-driven autoscaling.
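As a taste of what that pattern can look like on Kubernetes, here is a minimal sketch of an event-driven autoscaling manifest, assuming KEDA as the autoscaler and RabbitMQ as the queue (this announcement doesn't name the specific tools, and the resource names below are placeholders):

```yaml
# Hypothetical sketch: a KEDA ScaledObject that scales a "worker" Deployment
# based on RabbitMQ queue depth. The names (worker, tasks, rabbitmq-auth)
# are illustrative; adjust them to your own cluster.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker           # the Deployment consuming the queue
  minReplicaCount: 0       # scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
    - type: rabbitmq
      metadata:
        queueName: tasks
        mode: QueueLength  # scale on the number of pending messages
        value: "10"        # target roughly 10 messages per replica
      authenticationRef:
        name: rabbitmq-auth  # TriggerAuthentication holding the AMQP host URI
```

Instead of scaling on CPU, replicas track the backlog itself, so bursty or long-running work gets capacity when messages actually pile up.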
We're going to see how to implement that pattern on Kubernetes, leveraging:
Who should join:
Meet our Experts
Part of the Docker founding team, and Docker Community Advocate from 2013 to 2018. These days he teaches Kubernetes at Enix, a French Cloud Native shop.
When he's not busy with computers, he collects musical instruments, and can arguably play the theme of Zelda on a dozen of them.
Anton Weiss
Chief Cluster Whisperer, PerfectScale
Software delivery optimization expert and Kubernetes fanboy. With previous experience as a CD Unit Leader, Head of DevOps, CTO, and CEO, he has worn many hats as a consultant, instructor, and public speaker.
He is passionate about leveraging his expertise to support the needs of DevOps, Platform Engineering, and Kubernetes communities.