LLM & RAG: From Chatbot to Agent
LLM & RAG: From Chatbot to Agent provides participants with the conceptual foundation and technical know-how to apply Large Language Models (LLMs) effectively through Retrieval-Augmented Generation (RAG). This intensive 5-day course is designed for technical professionals, data scientists, ML engineers, and technical leaders who want to understand how LLMs work, how retrieval pipelines improve accuracy and factual grounding, and how to deploy such systems responsibly in production environments. The course bridges the gap between theory and practice, helping learners design, build, and evaluate RAG architectures for real-world use cases. Through a combination of conceptual lectures, technical deep-dives, and hands-on implementation exercises, participants gain practical experience building three progressively sophisticated systems: a basic LLM chatbot, a functional RAG pipeline, and an optimized, production-ready RAG application.
What You'll Learn
Day 1: Foundation and First Implementation
Participants begin with a thorough grounding in Large Language Model fundamentals, including transformer architecture, training processes, tokenization, and embeddings. You'll explore how LLMs generate text through sampling strategies and learn prompt engineering techniques to optimize responses. The afternoon session focuses on practical implementation: connecting to LLM APIs, working with open-source models, and building your first functional chatbot with optimized latency and response quality.
Day 2: Retrieval-Augmented Generation
The second day addresses the critical limitations of standalone LLMs (hallucinations, knowledge cutoff dates, and lack of domain-specific knowledge) and introduces RAG as the solution. Participants will master information retrieval fundamentals, including keyword search (TF-IDF, BM25), semantic search with embeddings, hybrid search techniques, and metadata filtering.
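The two retrieval styles covered on Day 2 can be illustrated with a minimal, library-free sketch. The scoring functions below are simplified stand-ins for classroom intuition only: plain term overlap rather than the full TF-IDF/BM25 formulas, and cosine similarity over precomputed embedding vectors.

```python
import math
from collections import Counter

def keyword_score(query: str, doc: str) -> int:
    """Crude keyword-overlap score (in the spirit of TF-IDF/BM25, without IDF weighting)."""
    q_terms = Counter(query.lower().split())
    d_terms = Counter(doc.lower().split())
    return sum(min(q_terms[t], d_terms[t]) for t in q_terms)

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity, the usual comparison for semantic search over embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

docs = ["retrieval augmented generation grounds answers",
        "large language models generate fluent text"]
query = "retrieval augmented generation"
best = max(docs, key=lambda d: keyword_score(query, d))  # keyword search picks docs[0]
```

A hybrid system, as discussed in the course, would combine both signals, e.g. by a weighted sum of keyword and embedding scores.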
You'll learn to work with vector databases (FAISS, Pinecone, Chroma, Weaviate) and implement comprehensive evaluation strategies. The hands-on session guides you through building a complete RAG system, from knowledge base construction to chatbot integration.
Day 3: Advanced RAG Techniques
The third day explores advanced optimization techniques that separate prototype systems from production-ready applications. Topics include approximate nearest neighbor (ANN) algorithms for scaling, advanced chunking strategies, query parsing and rewriting, cross-encoders, and reranking methods. Participants will also learn about agentic RAG systems that can use tools and make autonomous decisions, as well as advanced prompt engineering techniques such as few-shot learning and chain-of-thought prompting. The afternoon focuses on evaluation strategies, covering component-level testing, end-to-end evaluation, and cost and latency optimization.
Day 4: Production RAG & Deployment
The fourth day focuses on taking RAG systems from prototype to production. Participants will explore multimodal RAG for processing text, images, and documents, and learn critical production considerations including logging, monitoring, quantization for deployment, security, data privacy, and bias mitigation. The afternoon covers the strategic choice between RAG and fine-tuning, and culminates in a hands-on session to improve and productionize the RAG chatbot built on the previous days.
Day 5: From Chatbot to Agent
The final day introduces the paradigm shift from passive chatbots to autonomous AI agents. Participants will explore function calling, tool-use patterns, and the Model Context Protocol (MCP) as the open standard for connecting LLMs to external tools and data sources. The morning covers MCP architecture, connecting to existing MCP servers, and agent orchestration patterns.
The afternoon is dedicated to hands-on building: participants will create their own MCP server from scratch, implement custom tools, and transform their RAG chatbot into a fully functional AI agent.
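The function-calling pattern at the heart of Day 5 can be sketched in a few lines. This is an illustrative assumption, not a specific provider's API or the MCP wire format: the model is imagined to emit a JSON tool call, which the host program parses and dispatches to a local tool.

```python
import json

# Stand-in tool registry; in a real agent these would be MCP tools or API functions
# (both tool names and their behavior here are hypothetical).
TOOLS = {
    "search_docs": lambda query: f"Top passage for '{query}'",  # stand-in retriever
    "get_time": lambda: "2025-01-01T09:00:00Z",                 # stand-in clock tool
}

def dispatch(tool_call: str) -> str:
    """Parse a model-emitted call like {"name": ..., "arguments": {...}}
    and execute the matching local tool with its arguments."""
    call = json.loads(tool_call)
    fn = TOOLS[call["name"]]
    return fn(**call.get("arguments", {}))

result = dispatch('{"name": "search_docs", "arguments": {"query": "RAG"}}')
```

In an agent loop, the tool result is fed back to the model, which decides whether to call another tool or produce a final answer; MCP standardizes how such tools are discovered and invoked across servers.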
Learning Outcomes
• Understand the principles and architecture of Large Language Models (LLMs) and RAG systems
• Implement a basic RAG pipeline integrating an LLM and a retrieval mechanism
• Evaluate and fine-tune RAG performance for factual accuracy and user intent
• Identify opportunities for responsible adoption of RAG solutions within organizations
• Explain business and ethical implications of AI-driven retrieval systems
Training Method
This course combines theoretical instruction with hands-on practical exercises, demonstrations, and discussions. Participants will engage in small project work to build and test RAG-enabled applications using open-source tools.
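As a preview of the project work, a minimal end-to-end RAG loop fits in a few lines. The embedding and generation steps below are toy stand-ins (a bag-of-words "embedding" and a returned prompt instead of a real LLM call, both assumptions for illustration):

```python
def embed(text: str) -> list[int]:
    # Toy embedding: word counts over a tiny fixed vocabulary (not a real model).
    vocab = ["rag", "retrieval", "llm", "vector"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k passages whose toy embeddings best match the query (dot product)."""
    q = embed(query)
    def score(doc: str) -> int:
        return sum(a * b for a, b in zip(q, embed(doc)))
    return sorted(corpus, key=score, reverse=True)[:k]

def answer(query: str, corpus: list[str]) -> str:
    """Assemble a grounded prompt from retrieved context; a real system would
    send this prompt to an LLM API instead of returning it."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = ["RAG combines retrieval with an LLM.", "Vector search finds similar passages."]
```

In the course projects, the toy pieces are replaced with a real embedding model, a vector database such as FAISS or Chroma, and an LLM API call.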
Certification
Certificate of Participation
Prerequisites
The course is aimed at AI practitioners, data scientists, machine learning engineers, and solution architects seeking both conceptual understanding and hands-on experience with Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) systems. Technical managers interested in the practical use and integration of LLMs in production environments are also welcome.
Planning and location
09:00 - 16:00
09:00 - 16:00
09:00 - 16:00
09:00 - 16:00
09:00 - 16:00