Looking to optimize your AI usage costs and enhance security by self-hosting your LLMs? It’s now much easier to do with KAITO!
In this webinar, we will explore how you can self-host and fine-tune large language models (LLMs) in a Kubernetes environment using KAITO, the Kubernetes AI Toolchain Operator. We will cover the benefits of leveraging Kubernetes for scalable and efficient model deployment, and demonstrate how KAITO simplifies the orchestration and management of LLMs. Attendees will gain insights into the practical steps involved in setting up a self-hosted LLM, customizing it to meet specific needs, and optimizing performance. This session is ideal for technical specialists and platform engineers looking to enhance their AI capabilities with robust and flexible infrastructure solutions.
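To give a flavor of what the session covers: with KAITO, deploying a self-hosted LLM comes down to declaring a `Workspace` custom resource, and the operator provisions the GPU nodes and serves the model for you. A minimal sketch is below; the exact API version, preset name, and instance type are illustrative and may differ in your KAITO release, so check the project documentation before applying.

```yaml
# Minimal KAITO Workspace sketch (field names per the KAITO Workspace CRD;
# apiVersion, preset, and instanceType are examples — verify against your version).
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-falcon-7b
resource:
  # GPU VM size the operator should provision for inference (example SKU)
  instanceType: "Standard_NC12s_v3"
  labelSelector:
    matchLabels:
      apps: falcon-7b
inference:
  # Preset model image maintained by the KAITO project (example preset)
  preset:
    name: "falcon-7b"
```

Once applied with `kubectl apply -f workspace.yaml`, the operator reconciles the resource, brings up the node pool, and exposes an inference endpoint inside the cluster — the workflow we will walk through live.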
What you will learn:
Who should attend:
Meet our Experts
Alessandro Vozza
Sr. Technical Specialist Application Innovation & AI, Microsoft
Alessandro, a seasoned community leader, has spent the last few years architecting cloud-native infrastructures for Microsoft customers, energizing the Dutch tech community, and helping professionals achieve CKx certification. With over 25 years immersed in open-source technologies, Alessandro is deeply passionate about the cloud-native ecosystem. He's now back at Microsoft as a Senior Technical Specialist in Application Innovation & AI.
Anton Weiss
Chief Cluster Whisperer, PerfectScale
Software delivery optimization expert and Kubernetes fanboy. With previous experience as a CD Unit Leader, Head of DevOps, CTO, and CEO, he has worn many hats as a consultant, instructor, and public speaker.
He is passionate about leveraging his expertise to support the needs of DevOps, Platform Engineering, and Kubernetes communities.