LLMOps: The Art of Implementation
Deploy, Monitor, and Support Your LLM App
A 3-week workshop for engineers on the Ops stack for LLM applications. You'll learn to deploy, monitor, and support LLM applications, including advanced RAG and multi-modal apps.
- Date: Monday, March 4 - Wednesday, March 20
- Time: 4-7pm PT on Mondays and Wednesdays
- Cost: $2,000 for individuals | $6,000 for teams
Dive deep into the world of LLMs with this intensive three-week workshop. This meticulously designed course bridges the gap between theoretical knowledge and practical application, providing you with hands-on experience in building, evaluating, and scaling LLM applications.
You'll navigate through the complexities of the LLMOps lifecycle, distinguishing it from traditional MLOps. From mastering advanced Retrieval-Augmented Generation (RAG) techniques to deploying multi-modal applications on the cloud, you'll learn to operationalize LLMs at scale, ensuring continuous improvement through effective monitoring and feedback loops.
Gain a comprehensive understanding of the LLMOps lifecycle, how it diverges from MLOps, and operationalize multiple LLM apps from start to finish
Learn to construct applications using advanced RAG techniques and establish monitoring and feedback loops for continuous improvement
Construct an evaluation pipeline with best-in-class metrics and learn the intricacies of scaling multi-modal applications as cloud API services
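To give a flavor of what an evaluation pipeline looks like, here is a minimal sketch in plain Python. The metric, the stub model, and the example data are all illustrative assumptions, not workshop materials; production pipelines use richer metrics and real model calls.

```python
def exact_match(prediction, reference):
    # Simplest possible metric: 1.0 if the normalized strings match, else 0.0
    return float(prediction.strip().lower() == reference.strip().lower())

def evaluate(examples, model):
    # Average a metric over (input, reference) pairs
    scores = [exact_match(model(x), ref) for x, ref in examples]
    return sum(scores) / len(scores)

# Hypothetical stub model, standing in for a real LLM call
model = lambda x: {"capital of France?": "Paris"}.get(x, "unknown")
examples = [("capital of France?", "paris"), ("capital of Spain?", "Madrid")]
print(evaluate(examples, model))  # 0.5
```

The same loop generalizes: swap `exact_match` for semantic-similarity or LLM-as-judge scoring, and `model` for an API call, and you have the skeleton of a continuous-evaluation job.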
This workshop includes two live sessions a week on Mondays and Wednesdays, plus additional practice on your own time. You will also have access to Slack to collaborate with your peers and instructors outside of class time.
- Learn what differentiates LLMOps from MLOps
- Master prompt engineering management and experiment tracking
- Enhance LLM generation with RAG and fine-tuning
- Host and deploy an LLM on your own infrastructure
- Review practical techniques for evaluating LLMs, optimizing performance, and fine-tuning for specific tasks
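As a taste of the RAG techniques covered, here is a toy retrieval step in plain Python. The bag-of-words "embedding" and the sample documents are illustrative assumptions only; real RAG systems use dense vector embeddings and a vector store.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding": word counts (real RAG uses dense vectors)
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query and return the top k
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "LLMOps covers deployment and monitoring of LLM apps",
    "RAG augments prompts with retrieved context",
    "Fine-tuning adapts a model to a task",
]
print(retrieve("how does RAG add context to prompts?", docs))
# → ['RAG augments prompts with retrieved context']
```

The retrieved passages are then prepended to the prompt so the LLM can ground its answer in them; that augmentation step, plus monitoring retrieval quality over time, is where the operational work lives.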
This workshop is for you if:
You are a software, ML, or DevOps engineer looking to expand your knowledge of LLM deployments
You have a foundational understanding of MLOps and wish to extend your expertise into LLMOps
You are eager to learn about the latest advancements in AI applications and how to operationalize these technologies at scale
"67% of companies saw revenue increase due to AI adoption"
- McKinsey Tech Trends Outlook 2022
Featured Speaker: Dr. Andrew Ng
Dr. Andrew Ng is a globally recognized leader in AI. He is the founder of DeepLearning.AI, the founder and CEO of Landing AI, and an advisor to FourthBrain.