This section starts with an overview of AI, ML, and Neural Networks, followed by an in-depth look at neurons and their role in deep learning. Hands-on exercises, like experimenting with neural networks and setting up access to Google Gemini models, reinforce these concepts.
The section also introduces platforms like Hugging Face, guiding learners through its features, community, and practical use of AI models. Additionally, it covers Natural Language Processing (NLP), focusing on how Large Language Models (LLMs) handle tasks related to Natural Language Understanding (NLU) and Natural Language Generation (NLG). Quizzes are included throughout to assess understanding of key concepts.
Lesson | Title | Description |
---|---|---|
1 | Intro to AI, ML, Neural Networks, and Gen AI | Explore the evolution of Artificial Intelligence over the past two decades. This lesson provides an overview of AI technologies like Machine Learning (ML), Neural Networks, and Generative AI, laying the foundation for deeper understanding. |
2 | Neurons, Neural & Deep Learning Networks | Delve into the basic building blocks of Generative AI—neurons and neural networks. Understand how deep learning networks work and why they are pivotal to AI models. |
3 | Exercise: Try out a Neural Network for Solving Math Equations | Interact with a neural network to solve mathematical problems, demystifying the underlying mechanisms. This hands-on exercise helps reinforce your understanding of how these networks operate. |
4 | A Look at Generative AI Model as a Black Box | Gain insight into how a Generative AI model functions from an external perspective. This lesson simplifies complex AI models by exploring their behavior without diving into technical intricacies. |
5 | Quiz: Fundamentals of Generative AI Models | Test your knowledge of Generative AI and its core concepts through this quiz, reinforcing your understanding of the material covered so far. |
6 | An Overview of Generative AI Applications | Learn how to build Generative AI applications. Understand the process of accessing models, and explore the differences between open-source and closed-source models. |
7 | Exercise: Set Up Access to Google Gemini Models | Experience setting up access to a Google Gemini hosted model. This hands-on exercise teaches you how to integrate these models into your code for real-world applications (a minimal access sketch follows this table). |
8 | Introduction to Hugging Face | Discover the capabilities of Hugging Face, a leading platform for AI models. Learn about its inference endpoints, gated models, and libraries essential for building AI applications. |
9 | Exercise: Check Out the Hugging Face Portal | Walk through the Hugging Face portal to familiarize yourself with its features. This exercise will help you navigate its tools and understand how to leverage its resources effectively. |
10 | Exercise: Join the Community and Explore Hugging Face | Create an account on Hugging Face, request access to gated models, and generate access tokens. This exercise will enable you to interact with models using your tokens. |
11 | Quiz: Generative AI and Hugging Face | Check your understanding of Generative AI and Hugging Face with this quiz, designed to review key concepts and practical skills you’ve acquired. |
12 | Intro to Natural Language Processing (NLP, NLU, NLG) | Learn the fundamentals of Natural Language Processing (NLP) and its subsets, Natural Language Understanding (NLU) and Natural Language Generation (NLG). This lesson introduces the key concepts that power AI language models. |
13 | NLP with LLMs | Explore how Large Language Models (LLMs) handle NLP tasks. Understand the basics of transformer architecture and the differences between encoder-only and decoder-only models. |
14 | Exercise: Try Out NLP Tasks with Hugging Face Models | Use the Hugging Face portal to find and apply models for specific NLP tasks. This exercise helps solidify your understanding of LLMs in practical applications. |
15 | Quiz: NLP with LLMs | Test your grasp of NLP concepts, including NLP, NLU, NLG, and how LLMs execute these tasks, with this knowledge-check quiz. |
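
As a taste of lesson 7, here is a minimal sketch of calling a hosted Gemini model, assuming the `google-generativeai` Python package and a `GOOGLE_API_KEY` environment variable; the model name is illustrative, not prescribed by the course.

```python
# Minimal sketch of calling a hosted Gemini model.
# Assumes: pip install google-generativeai, and GOOGLE_API_KEY set in the env.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice
response = model.generate_content("Explain a neural network in one sentence.")
print(response.text)
```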
This section begins by explaining how models are named, providing insight into the structure and capabilities of different models. It then delves into various model types, including instruct, embedding, and chat models, highlighting their unique functions. The section also covers core NLP tasks like next word prediction and fill-mask, followed by detailed lessons on inference control parameters, such as randomness, diversity, and output length controls, which allow for precise tuning of model behavior. Hands-on exercises and quizzes reinforce these concepts, while an introduction to In-Context Learning reveals how models can learn from examples, simulating human learning processes.
Lesson | Title | Description |
---|---|---|
1 | Model Naming Scheme | Learn how creators or providers assign names to AI models, and what these names reveal about their architecture, capabilities, and intended use cases. |
2 | Instruct, Embedding, and Chat Models | Explore the key differences between instruct models, embedding models, and chat models, and see how platforms like Hugging Face use them to build AI applications. |
3 | Quiz: Instruct, Embedding, and Chat Models | Test your understanding of base, instruct, embedding, and chat models by completing this quiz and reinforcing key concepts from the lessons. |
4 | Next Word Prediction by LLM and Fill Mask Task | Discover how language models predict the next word in a sequence and tackle the fill-mask task, a common NLP challenge that evaluates a model’s vocabulary knowledge. |
5 | Model Inference Control Parameters | Dive into decoding parameters and understand how they shape a model’s output, with a walkthrough of commonly used controls in transformer-based models. |
6 | Randomness Control Inference Parameters | Understand how randomness is controlled in model outputs using hyperparameters like temperature, top-p, and top-k, to fine-tune creative or deterministic outputs (a decoding sketch follows this table). |
7 | Exercise: Set Up Cohere Key and Try Out Randomness Control Parameters | Get hands-on with the Cohere API, register for a key, and explore randomness control by adjusting key parameters to impact model output. |
8 | Diversity Control Inference Parameters | Learn how to use frequency and presence penalties to manage the diversity of responses generated by a model. |
9 | Output Length Control Parameters | Explore how max output tokens and stop sequences help control the length of the model’s generated content for more focused results. |
10 | Exercise: Try Out Decoding or Inference Parameters | Apply what you’ve learned by tuning decoding parameters in real-world tasks to see how they affect the model’s behavior and outputs. |
11 | Quiz: Decoding Hyper-parameters | Check your understanding of decoding hyperparameters like temperature, max tokens, and others by taking this quiz. |
12 | Introduction to In-Context Learning | Learn how In-Context Learning allows models to mimic human learning by using examples, including techniques like zero-shot and few-shot learning. |
13 | Quiz: In-Context Learning | Assess your knowledge of In-Context Learning, and concepts like zero-shot, few-shot, and fine-tuning through this comprehensive quiz. |
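
To make lessons 5–11 concrete, here is a hedged sketch of the main decoding controls using the Hugging Face `generate()` API; `gpt2` is just a small stand-in checkpoint, not the course's chosen model.

```python
# Decoding/inference parameters in one call, using a small stand-in model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The future of AI is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,          # enable sampling so the randomness controls apply
    temperature=0.8,         # randomness: sharpens/flattens the distribution
    top_k=50,                # randomness: sample from the 50 likeliest tokens
    top_p=0.9,               # randomness: nucleus sampling cutoff
    repetition_penalty=1.2,  # diversity: discourage repeated tokens
    max_new_tokens=40,       # output length control
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```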
This section begins with a hands-on exercise on installing and using the Hugging Face Transformers library in Python, followed by an exploration of task pipelines and pipeline classes. The section also covers using the Hugging Face Hub to interact with model endpoints and manage repositories. Practical exercises include a summarization task using Hugging Face models, allowing learners to experiment with abstractive and extractive summarization. Finally, the section dives into using the Hugging Face CLI for managing models and caching, ensuring efficient workflows. Quizzes throughout help solidify knowledge.
Lesson | Title | Description |
---|---|---|
1 | Exercise: Install & Work with Hugging Face Transformers Library | Get an overview of the Hugging Face Transformers library, followed by a step-by-step guide on how to install it and use it in Python for building AI applications. |
2 | Transformers Library Pipeline Classes | Understand how task pipelines work in Hugging Face, explore key pipeline classes, and see practical demonstrations of their use for tasks like text classification and translation (a minimal pipeline sketch follows this table). |
3 | Quiz: Hugging Face Transformers Library | Test your knowledge of the Hugging Face Transformers library, including how to use task pipelines effectively in various applications. |
4 | Hugging Face Hub Library & Working with Endpoints | Learn how to interact with the Hugging Face Hub to access model endpoints, manage model repositories, and integrate them into your projects for inference tasks. |
5 | Quiz: Hugging Face Hub Library | Check your understanding of the Hugging Face Hub, its endpoints, and the inference classes used to streamline model interaction. |
6 | Exercise: Proof of Concept (PoC) for Summarization Task | Explore both abstractive and extractive summarization methods, then apply Hugging Face models to implement a summarization task and experiment with real data. |
7 | Hugging Face CLI Tools and Model Caching | Learn how to use the Hugging Face CLI to manage tasks, including model caching and cache cleanup, while streamlining workflows with locally stored models. |
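
Lesson 2's pipeline usage might look like the following minimal sketch; with no model specified, the library falls back to a default checkpoint per task, so the first run downloads weights.

```python
# Minimal task pipelines; defaults are used when no model is named.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("The course material was clear and practical."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

translator = pipeline("translation_en_to_fr")
print(translator("Pipelines hide the tokenizer and model details."))
```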
This section starts with an exploration of tensors, the multi-dimensional arrays fundamental to neural networks, and how they are processed within the pipeline classes. The section then covers model configuration classes to understand model architectures and parameters. It also explains the role of tokenizers and demonstrates their use through Hugging Face tokenizer classes. The concept of logits and their application in task-specific classes is examined, followed by an introduction to auto model classes for flexible model handling. The section concludes with a quiz to test your knowledge and an exercise focused on building a question-answering system, applying the learned concepts in a practical scenario.
Lesson | Title | Description |
---|---|---|
1 | Model Input/Output and Tensors | Learn the foundational concept of tensors, the multi-dimensional arrays that neural networks consume and produce. Understand how pipeline classes transform tensors into meaningful task outputs. |
2 | Hugging Face Model Configuration Classes | Explore model configuration classes to compare and understand the underlying architecture of Hugging Face models, including parameters like hidden layers and vector dimensions. |
3 | Model Tokenizers & Tokenization Classes | Dive into the critical role of tokenizers in converting text into input for models. This lesson explains what tokenizers are and demonstrates how to use Hugging Face tokenizer classes effectively. |
4 | Working with Logits | Learn what logits represent in machine learning, and explore their use in Hugging Face task-specific classes. This lesson includes a code walkthrough showing logits in action (a related sketch follows this table). |
5 | Hugging Face Models Auto Classes | Discover the flexibility of auto model classes, which automatically load appropriate models for various tasks. See how they simplify working with different Hugging Face models in practice. |
6 | Quiz: Hugging Face Classes | Test your knowledge of Hugging Face tokenizers, model configurations, and auto model classes with this quiz, designed to reinforce key concepts covered in the lessons. |
7 | Exercise: Build a Question Answering System | Learn about different types of Question/Answering tasks, then design and implement your own question-answering system using Hugging Face models, combining theory with hands-on practice. |
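
A short sketch tying together lessons 3–5 — tokenizer in, logits out, both loaded through auto classes; the sentiment checkpoint named below is a common public model chosen for illustration.

```python
# Tokenizer -> model -> logits -> probabilities, via auto classes.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)   # text -> token-ID tensors
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Transformers make NLP approachable.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits               # raw, unnormalized scores

probs = torch.softmax(logits, dim=-1)             # logits -> probabilities
label = model.config.id2label[int(probs.argmax())]
print(label, probs.tolist())
```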
This section covers key challenges faced by LLMs and introduces strategies to address them, such as prompt engineering, model grounding and conditioning, and transfer learning. The section also delves into various prompting techniques, including few-shot, zero-shot, chain of thought, and self-consistency, to improve LLM responses. Additionally, the section explores the Tree of Thoughts technique for solving reasoning and logical problems. Overall, the section equips learners with the knowledge and skills to effectively utilize and optimize LLMs for various tasks.
Lesson | Title | Description |
---|---|---|
1 | Challenges with Large Language Models | Learn about the key challenges faced by LLMs and preview the strategies, such as prompt engineering, grounding, and transfer learning, used to address them. |
2 | Model Grounding and Conditioning | Explore In-Context Learning (ICL) from the LLM-challenges perspective. Understand prompt engineering practices, transfer learning, and fine-tuning. |
3 | Exercise: Explore the Domain Adapted Models | Find domain-adapted models on Hugging Face for specific industries or tasks. |
4 | Prompt Engineering and Practices (1 of 2) | Learn about prompt structure and general best practices. |
5 | Prompt Engineering and Practices (2 of 2) | Continue discussing prompt engineering best practices. |
6 | Quiz & Exercise: Prompting Best Practices | Test your understanding of prompt engineering and practice fixing prompts. |
7 | Few-Shot & Zero-Shot Prompts | Understand how LLMs learn from few-shot prompts and the data requirements for ICL, fine-tuning, and pre-training. Learn best practices for few-shot and zero-shot prompts (an example prompt follows this table). |
8 | Quiz & Exercise: Few-Shot Prompts | Test your knowledge of few-shot and zero-shot prompting and practice fixing prompts for Named Entity Recognition (NER). |
9 | Chain of Thought Prompting Technique | Learn about the Chain of Thought (CoT) technique and how it enhances LLM responses. |
10 | Quiz & Exercise: Chain of Thought | Test your understanding of the CoT technique. |
11 | Self-Consistency Prompting Technique | Learn about the self-consistency technique and how it enhances LLM responses. |
12 | Tree of Thoughts Prompting Technique | Learn how the tree of thoughts technique can be used for solving reasoning and logical problems. Compare it to other techniques. |
13 | Quiz & Exercise: Tree of Thought | Test your knowledge of various prompting techniques and apply them to solve a task. |
14 | Exercise: Creative Writing Workbench (v1) | Use your knowledge of prompting techniques to build a creative workbench for a marketing team. |
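
For reference, lesson 7's idea can be boiled down to a prompt like the one below; the reviews and label format are hypothetical examples, not course material.

```python
# An illustrative few-shot prompt for sentiment labeling; any provider's
# completion API could consume a prompt shaped like this.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day.
Sentiment: Positive

Review: The screen cracked within a week.
Sentiment: Negative

Review: Setup took five minutes and everything just worked.
Sentiment:"""
# The model is expected to continue with "Positive" — it learns the task from
# the in-prompt examples alone, with no fine-tuning (in-context learning).
```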
This section covers key concepts such as prompt templates, few-shot prompt templates, prompt model specificity, LLM invocation, streaming responses, batch jobs, and Fake LLMs. Additionally, the section introduces the LangChain Expression Language (LCEL) and its essential Runnable classes for constructing complex LLM chains. By the end of this section, learners will have a solid understanding of LangChain’s core components and be able to effectively use them to build various LLM-based applications.
Lesson | Title | Description |
---|---|---|
1 | Prompt Templates | Learn about LangChain template classes for creating complex and reusable templates. |
2 | Few-Shot Prompt Template & Example Selectors | Explore LangChain FewShotPromptTemplate and example selector classes. |
3 | Prompt Model Specificity | Understand that there’s no universal prompt for all LLMs and learn how to address this challenge. |
4 | LLM Invoke, Streams, Batches & Fake LLM | Learn how to invoke LLMs, stream responses, implement batch jobs, and use Fake LLMs for development. |
5 | Exercise: Interact with LLM Using LangChain | Practice invoking, streaming, and batching with LLMs, and experiment with Fake LLMs. |
6 | Exercise: LLM Client Utility | Understand how the LLM client utility is implemented. |
7 | Quiz: Prompt Templates, LLM, and Fake LLM | Test your knowledge of prompt templates, LLMs, and Fake LLMs. |
8 | Introduction to LangChain Expression Language | Learn about LangChain chains and components, the LangChain Expression Language (LCEL), and see a demo of LCEL usage. |
9 | Exercise: Create Compound Sequential Chain | Build a compound sequential chain using LCEL and the pipe operator (a minimal chain sketch follows this table). |
10 | LCEL: Runnable Classes (1 of 2) | Learn about essential Runnable classes for building gen AI task chains. |
11 | LCEL: Runnable Classes (2 of 2) | Continue learning about essential Runnable classes for building gen AI task chains. |
12 | Exercise: Try Out Common LCEL Patterns | Familiarize yourself with common LCEL patterns using the LCEL cheatsheet and how-tos documentation. |
13 | Exercise: Creative Writing Workbench v2 | Re-write the creative writing workbench project using LCEL and Runnable classes. |
14 | Quiz: LCEL, Chains and Runnables | Test your knowledge of LCEL, Runnables, and chains. |
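
Here is a minimal LCEL sketch in the spirit of lessons 4–9, assuming recent `langchain-core`/`langchain-community` import paths (they have moved between versions); the `FakeListLLM` returns canned responses, so no API key is needed.

```python
# Prompt template -> LLM -> parser, composed with the LCEL pipe operator.
from langchain_community.llms import FakeListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Write a tagline for {product}.")
llm = FakeListLLM(responses=["Brew boldly, sip slowly."])  # canned dev response

# The pipe operator chains Runnables: prompt -> llm -> output parser.
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"product": "a coffee maker"}))
```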
This section starts by comparing different data formats and highlighting the importance of structured outputs. The section then introduces LangChain output parsers as a valuable tool for extracting structured information from LLM responses. Through hands-on exercises, learners will practice using specific output parsers like EnumOutputParser and PydanticOutputParser. A comprehensive project, the creative writing workbench, is included to demonstrate the practical application of these concepts. Finally, the section covers strategies for handling parsing errors, providing learners with essential knowledge for building robust LLM applications that produce structured outputs.
Lesson | Title | Description |
---|---|---|
1 | Challenges with Structured Responses | Compare structured, unstructured, and semi-structured data. Understand the need for structured LLM responses and best practices for achieving them. |
2 | LangChain Output Parsers | Learn about LangChain output parsers and how to use different types. |
3 | Exercise: Use the EnumOutputParser | Write code to use the LangChain EnumOutputParser. |
4 | Exercise: Use the PydanticOutputParser | Write code to use the LangChain PydanticOutputParser (a short sketch follows this table). |
5 | Project: Creative Writing Workbench | Understand the application requirements and your tasks for the creative writing workbench project. |
6 | Project: Solution Walkthrough (1 of 2) | Step-by-step solution for the creative writing workbench project (part 1). |
7 | Project: Solution Walkthrough (2 of 2) | Step-by-step solution for the creative writing workbench project (part 2). |
8 | Handling Parsing Errors | Learn various patterns for handling parsing errors in LLM responses and use LangChain utility classes for error handling. |
9 | Quiz and Exercise: Parsers, Error Handling | Test your knowledge of output parsers and try out the Retry Output Parser. |
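
Lesson 4's parser in miniature — a hedged sketch assuming Pydantic v2 and current `langchain-core` import paths; the `BookIdea` schema is invented for illustration.

```python
# Parse an LLM's JSON response into a typed object with PydanticOutputParser.
from langchain_core.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field

class BookIdea(BaseModel):  # hypothetical schema for illustration
    title: str = Field(description="working title")
    genre: str = Field(description="one-word genre")

parser = PydanticOutputParser(pydantic_object=BookIdea)

# get_format_instructions() yields text to embed in the prompt so the LLM
# knows to answer in JSON matching the schema.
print(parser.get_format_instructions())

# parse() validates a raw LLM response string into the typed object.
idea = parser.parse('{"title": "The Glass Orchard", "genre": "mystery"}')
print(idea.title, idea.genre)
```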
The lessons in this section explore the nature of data used for pre-training LLMs, the various sources of such data, and the processes involved in creating datasets. Additionally, the section introduces the Hugging Face Datasets library and its capabilities, allowing learners to access and work with real-world datasets used for LLM training and testing. Through hands-on exercises, learners will gain practical experience in using the Datasets library, accessing data from Hugging Face, and creating and publishing their own datasets. This section equips learners with the necessary knowledge and skills to effectively select, prepare, and manage datasets for LLM development.
Lesson | Title | Description |
---|---|---|
1 | Dataset for LLM Pre-training | Explore the nature of data used for LLM pre-training, sources of datasets, and the dataset creation process. |
2 | Hugging Face Datasets and the Datasets Library | Learn about the Hugging Face Datasets library and the datasets it hosts. Discover a real dataset used for pre-training and testing LLMs. |
3 | Exercise: Use Features of Datasets Library | Learn to use the Datasets library’s capabilities, access data on Hugging Face, and split datasets (a brief sketch follows this table). |
4 | Exercise: Create and Publish a Dataset on Hugging Face | Learn how to create datasets and publish them on Hugging Face. |
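
Lesson 3's core moves might look like this minimal sketch; `imdb` is a small public dataset used here purely for illustration.

```python
# Load a public dataset from the Hugging Face Hub and split it.
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
print(ds[0]["text"][:80], ds[0]["label"])

# Split off a held-out set; train_test_split returns a DatasetDict.
splits = ds.train_test_split(test_size=0.1, seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)
```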
This section starts with an introduction to contextual understanding and foundational elements of the Transformer architecture, including encoder and decoder models. The section explores vectors, vector spaces, and how embeddings are generated by large language models (LLMs), followed by methods for measuring semantic similarity.
Advanced topics include working with Sentence-BERT (SBERT), building classification and paraphrase mining tasks, and utilizing the LangChain library for embeddings. The section covers various search techniques such as lexical, semantic, and kNN search, and introduces optimization metrics like Recall, QPS, and Latency. It also provides hands-on exercises to build a movie recommendation engine and work with search algorithms, including FAISS, LSH, IVF, PQ, and HNSW. The section concludes with lessons on benchmarking ANN algorithms for similarity search.
Lesson | Title | Description |
---|---|---|
1 | What is the Meaning of Contextual Understanding? | In this lesson, you will explore the concept of context and how contextual understanding plays a key role in natural language processing and AI models. |
2 | Building Blocks of Transformer Architecture | Gain a foundational understanding of the Transformer architecture, covering key models like encoder-only, decoder-only, and encoder-decoder structures. |
3 | Intro to Vectors, Vector Spaces, and Embeddings | Learn the basics of vectors, vector spaces, and embeddings in AI. Understand how LLMs generate embeddings, using newsgroup postings as a real-world example. |
4 | Measuring Semantic Similarity | Discover how to measure semantic similarity, compare various distance metrics, and understand the strengths and weaknesses of different similarity measuring methods. |
5 | Quiz: Vectors, Embeddings, Similarity | Test your knowledge of key concepts like contextual understanding, semantic similarity, transformer architecture, vectors, and embeddings in this comprehensive quiz. |
6 | Sentence Transformer Models (SBERT) | Learn what SBERT is and explore its various use cases. This lesson also covers multiple SBERT models and how they enhance sentence-level embeddings. |
7 | Working with Sentence Transformers | Explore the sentence transformers library to simplify the use of SBERT models. See these models in action for generating embeddings and powering different NLP tasks. |
8 | Exercise: Work with Classification and Mining Tasks | Gain hands-on experience with classification and paraphrase mining tasks. Use instructions to code these tasks and deepen your understanding of how embeddings work in practice. |
9 | Creating Embeddings with LangChain | Learn how to use LangChain’s embedding model classes to generate embeddings and evaluate semantic similarity. Additionally, explore techniques for caching embeddings. |
10 | Exercise: CacheBackedEmbeddings Classes | Learn to optimize the performance of applications by leveraging the CacheBackedEmbeddings class, which improves the speed and efficiency of embedding generation. |
11 | Lexical, Semantic, and kNN Search | Understand key search concepts such as lexical search, semantic search, and k-nearest neighbors (kNN), illustrated with practical use cases for each search type. |
12 | Search Efficiency and Search Performance Metrics | Discover techniques for optimizing semantic search algorithms and dive into critical performance metrics like Recall, Queries Per Second (QPS), and Latency. Explore trade-offs between these metrics. |
13 | Search Algorithms, Indexing, ANN, FAISS | Learn about the differences between RDBMS indexing and semantic search indexing. This lesson covers index training, build time, and the popular FAISS library for fast similarity search. |
14 | Quiz & Exercise: Try Out FAISS for Similarity Search | Test your understanding of search concepts with a quiz, followed by a hands-on exercise using the FAISS library to implement a similarity search algorithm (a code sketch follows this table). |
15 | Search Algorithm: Locality-Sensitive Hashing (LSH) | Learn the logic behind the Locality-Sensitive Hashing (LSH) algorithm and how to tune its configuration for optimal performance. See it in action using FAISS. |
16 | Search Algorithm: Inverted File Index (IVF) | Get an overview of the Inverted File Index (IVF) search algorithm, its configuration parameters, and a live code demonstration of IVF in practice. |
17 | Search Algorithm: Product Quantization (PQ) | Understand the concept of Product Quantization (PQ), its configuration parameters, and see a code example showing how to use PQ for efficient search. |
18 | Search Algorithm: HNSW (1 of 2) | This two-part lesson introduces the widely used search algorithm, Hierarchical Navigable Small World (HNSW). Part 1 covers its structure and key concepts. |
19 | Search Algorithm: HNSW (2 of 2) | In Part 2, continue exploring the HNSW algorithm, focusing on its implementation, performance benefits, and real-world applications. |
20 | Quiz & Exercise: Search Algorithms & Metrics | Test your understanding of various search algorithms and performance metrics through this quiz, followed by a hands-on exercise to reinforce the concepts. |
21 | Project: Build a Movie Recommendation Engine | Use a provided movie database with embeddings to design and implement a movie recommendation engine. Apply your knowledge of embeddings and search algorithms to solve a real-world task. |
22 | Benchmarking ANN Algorithms | Learn how to perform ANN (Approximate Nearest Neighbors) benchmarking. This lesson introduces the benchmarking process and walks you through a live demonstration on an ANN benchmarking platform. |
23 | Exercise: Benchmark the ANN Algorithms | Gain hands-on experience with ANN algorithm benchmarking. This exercise also helps deepen your understanding of the Recall metric through practical experimentation. |
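
As a pocket version of lessons 6–14, the sketch below pairs sentence-transformers embeddings with a flat FAISS index; `all-MiniLM-L6-v2` is a common public model, and normalizing the vectors lets inner product stand in for cosine similarity.

```python
# Semantic similarity search: SBERT embeddings + a flat FAISS index.
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # common public SBERT model
docs = ["A movie about space travel.",
        "A romantic comedy set in Paris.",
        "Astronauts stranded on Mars."]
embeddings = model.encode(docs, normalize_embeddings=True)

# With unit-length vectors, inner product equals cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)

query = model.encode(["films about astronauts"], normalize_embeddings=True)
scores, ids = index.search(query, 2)  # top-2 nearest neighbors
print([docs[i] for i in ids[0]], scores[0])
```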
This section begins by addressing the challenges faced when using in-memory semantic search libraries, focusing on scalability and performance. The section introduces various vector databases available today, offering guidance on selecting the right database for different workloads.
Hands-on exercises allow learners to work with ChromaDB and integrate custom embedding models for enhanced vector searches. Key search techniques like chunking, symmetric and asymmetric searches are explored in depth. Learners will also gain insights into LangChain’s document loaders, text splitters for chunking, and retrievers for extracting relevant data. Advanced topics such as search scores and Maximal Marginal Relevancy (MMR) are covered, with practical examples. The section concludes with a project on implementing the Pinecone vector database and a quiz to test knowledge of vector databases and search optimization techniques.
Lesson | Title | Description |
---|---|---|
1 | Challenges with Semantic Search Libraries | Learn about the key challenges faced when using in-memory semantic search libraries, including scalability, performance, and memory limitations. |
2 | Introduction to Vector Databases | This lesson introduces vector databases, covering their role in storing and searching high-dimensional vectors. Explore the different databases available today and get tips on selecting the right one for your workload. |
3 | Exercise: Try Out ChromaDB | Gain hands-on experience by working with ChromaDB, a popular vector database. This exercise will guide you through setting up and performing vector searches with real-world data (a short sketch follows this table). |
4 | Exercise: Custom Embeddings | Learn how to integrate custom embedding models with ChromaDB. This hands-on exercise will help you apply custom embeddings to optimize your vector search workflows. |
5 | Chunking, Symmetric & Asymmetric Searches | Understand the concepts of chunking, and the difference between symmetric and asymmetric searches. Learn how these techniques improve search efficiency in vector databases. |
6 | LangChain Document Loaders | Explore LangChain document loaders, learning how to integrate various types of documents, such as PDFs, into your vector search pipeline with practical examples and demonstrations. |
7 | LangChain Text Splitters for Chunking | Learn how to use LangChain text splitter classes for chunking large documents. Discover the factors that determine chunk size and follow a code walkthrough to see how chunking is applied. |
8 | LangChain Retrievers & Vector Stores | Dive into LangChain retrievers and understand what vector stores are. A code walkthrough will demonstrate how to use retrievers to pull relevant information from vector databases. |
9 | Search Scores and Maximal-Marginal-Relevancy (MMR) | Learn about search scores and the concept of Maximal Marginal Relevancy (MMR). This lesson includes a practical demonstration of how to perform an MMR search using LangChain. |
10 | Project: Pinecone Adoption @ Company | In this project, you will explore the Pinecone vector database. Set up a free account, and implement a vector index using Pinecone to understand how it works in a real-world scenario. |
11 | Quiz: Vector Databases, Chunking, Text Splitters | Test your understanding of vector databases, chunking, document loaders, and retrievers in this quiz, ensuring you grasp the core concepts covered in the lessons. |
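
A short sketch of lessons 3 and 7 combined — chunk a text, load it into ChromaDB, and query it; it assumes the `chromadb` and `langchain-text-splitters` packages, and Chroma's default embedding model (downloaded on first use).

```python
# Chunk a document with a LangChain text splitter, then index and query it
# in an in-memory ChromaDB collection.
import chromadb
from langchain_text_splitters import RecursiveCharacterTextSplitter

text = "LangChain loaders pull in documents. Splitters chunk them. " * 20
splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=20)
chunks = splitter.split_text(text)

client = chromadb.Client()  # in-memory instance; persistent clients also exist
collection = client.create_collection("docs")
collection.add(documents=chunks, ids=[f"chunk-{i}" for i in range(len(chunks))])

results = collection.query(query_texts=["how are documents chunked?"], n_results=2)
print(results["documents"][0])
```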
This section covers key topics such as the fundamentals of conversational UIs, the differences between single-turn and multi-turn conversations, and the essential elements of the Streamlit framework. Additionally, the section explores the importance of conversation memory for maintaining context in chatbot interactions and introduces LangChain conversation memory classes for managing conversation history. Through hands-on exercises, learners will gain practical experience in building interactive chatbots using Streamlit and incorporating conversation memory capabilities. Finally, a project-based exercise allows learners to apply their knowledge to create a real-world PDF document summarization application.
Lesson | Title | Description |
---|---|---|
1 | Introduction to Streamlit Framework | Learn about Streamlit, a framework for creating user interfaces. |
2 | Exercise: Build a Hugging Face LLM Playground | Build a Streamlit application for interacting with any Hugging Face model and host it on Hugging Face Spaces. |
3 | Building Conversational User Interfaces | Understand how conversational interfaces or chatbot UIs are built and the difference between single-turn and multi-turn conversations. |
4 | Exercise: Build a Chatbot with Streamlit | Learn about Streamlit framework elements and build a chatbot user interface (a minimal skeleton follows this table). |
5 | LangChain Conversation Memory | Explore conversation memory for history, challenges with managing large conversation history, and LangChain conversation memory classes. |
6 | Quiz & Exercise: Building Chatbots with LangChain | Test your understanding of topics covered in this section and learn how to manage conversation history (context) using LangChain ConversationSummaryMemory. |
7 | Project: PDF Document Summarizer Application | Understand the requirements and build a PDF document summarization application using Streamlit and LangChain classes. |
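
Lesson 4's chatbot UI reduces to a skeleton like this; the echo reply is a placeholder where a real LLM call (and LangChain memory) would go.

```python
# Minimal multi-turn chatbot UI in Streamlit; run with: streamlit run app.py
import streamlit as st

st.title("Course Chatbot")

if "messages" not in st.session_state:
    st.session_state.messages = []       # multi-turn history lives in session state

for msg in st.session_state.messages:    # replay history on each rerun
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if prompt := st.chat_input("Ask me anything"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)
    reply = f"You said: {prompt}"        # placeholder for a real LLM response
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)
```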
This section starts by introducing the concept of RAG and its benefits, followed by an exploration of LangChain’s chain creation functions for building RAG pipelines. The section then discusses challenges associated with conversational RAG and demonstrates a common issue in such scenarios. To address these challenges, learners will build a smart retriever using LangChain utility functions. The section also introduces advanced retrieval patterns like Multi Query Retriever (MQR), Parent Document Retriever (PDR), and Multi Vector Retriever (MVR), providing detailed explanations and code examples for each. Additionally, learners will explore techniques like ranking, sparse, dense, and ensemble retrievers, as well as Long Context Reorder (LCR) and contextual compression. By the end of this section, learners will have a comprehensive understanding of advanced retrieval techniques and their applications in RAG, enabling them to build more effective and sophisticated LLM-based applications.
Lesson | Title | Description |
---|---|---|
1 | Introduction to Retrieval Augmented Generation (RAG) | Learn about RAG, conversational RAG, and see a basic RAG code walkthrough (a minimal chain sketch follows this table). |
2 | LangChain RAG Pipelines | Understand LangChain’s chain creation functions and the challenges of conversational RAG, demonstrated through a common issue. |
3 | Exercise: Build Smart Retriever with LangChain | Use LangChain utility functions to address the conversation interface issue demonstrated in the previous lesson. |
4 | Quiz: RAG and Retrievers | Test your understanding of naive and advanced RAG. |
5 | Pattern: Multi Query Retriever (MQR) | Learn about the Multi Query Retriever (MQR) pattern, its flow, and try it out in code. |
6 | Pattern: Parent Document Retriever (PDR) | Learn about the Parent Document Retriever (PDR) pattern, its flow, and try it out in code. |
7 | Pattern: Multi Vector Retriever (MVR) | Learn about the Multi Vector Retriever (MVR) pattern, its use cases, and try it out in code. |
8 | Quiz: MQR, PDR and MVR | Test your knowledge of MQR, PDR, and MVR. |
9 | Ranking, Sparse, Dense & Ensemble Retrievers | Learn about sparse retrievers and the Ensemble retriever. See the Ensemble retriever in action. |
10 | Pattern: Long Context Reorder (LCR) | Learn about the Long Context Reorder (LCR) technique, its flow, and see it in action in code. |
11 | Quiz: Ensemble & Long Context Retrievers | Test your knowledge of Ensemble and Long Context Retrievers. |
12 | Pattern: Contextual Compressor | Learn about the contextual compressor technique, its flow, and try it out in code. |
13 | Pattern: Merger Retriever | Learn about the Merger Retriever (also known as the Lord of the Retrievers), its flow, and try it out in code. |
14 | Quiz: Contextual Compressors and Merger Retriever Patterns | Test your knowledge of advanced retrievers. |
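
Lesson 1's naive RAG pipeline, sketched with stand-ins: `FakeEmbeddings` and `FakeListLLM` keep it runnable without API keys (so retrieval quality is meaningless here), and the import paths assume recent LangChain releases plus `faiss-cpu`.

```python
# Naive RAG in LCEL: retrieve -> format context -> prompt -> LLM -> string.
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.llms import FakeListLLM
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough

vectorstore = FAISS.from_texts(
    ["Pinecone is a managed vector database.", "FAISS runs in-process."],
    embedding=FakeEmbeddings(size=32),  # random vectors; swap in a real model
)
retriever = vectorstore.as_retriever(search_kwargs={"k": 1})

prompt = PromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
llm = FakeListLLM(responses=["FAISS runs in-process."])  # canned dev response

chain = (
    {"context": retriever | (lambda docs: "\n".join(d.page_content for d in docs)),
     "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
print(chain.invoke("Which library runs in-process?"))
```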
This section begins with an introduction to agents, their interaction with tools and toolkits, and the fundamentals of Agentic RAG. Hands-on exercises allow learners to build both single-step and multi-step agents, exploring their internal workings, including creating an agent without LangChain to better understand the mechanics.
The section also covers LangChain tools, file management toolkits, and utilities for building agentic solutions. Advanced topics include the ReAct framework for multi-step agents, which enhances an agent’s reasoning capabilities. Learners will create a question-answering ReAct agent using external search tools like Tavily. The section wraps up with quizzes and exercises that apply these lessons, helping learners develop practical experience with Agentic RAG and LangChain-based solutions.
Lesson | Title | Description |
---|---|---|
1 | Introduction to Agents, Tools, and Agentic RAG | This lesson introduces the concept of agents, covering their purpose, how they interact with tools and toolkits, and explains the idea of Agentic RAG (Retrieval-Augmented Generation) in AI workflows. |
2 | Exercise: Build a Single-Step Agent without LangChain | In this hands-on exercise, you’ll create a single-step agent from scratch, allowing you to explore the internal mechanics of agents and understand how they operate in real-world scenarios. |
3 | LangChain Tools and Toolkits | Learn about LangChain’s tools and toolkit classes that simplify agent development. You will get hands-on experience using built-in tools to better understand their functions and integration into agentic workflows. |
4 | Quiz: Agents, Tools & Toolkits | Test your understanding of agents, tools, and toolkits in this interactive quiz, ensuring you have grasped the core concepts from the lessons. |
5 | Exercise: Try Out the FileManagement Toolkit | This hands-on exercise focuses on the LangChain FileManagement toolkit, giving you practical experience in managing files and data within an agentic system. |
6 | How Do We Humans & LLMs Think? | Explore the cognitive process behind human thinking and relate it to how LLMs (Large Language Models) process information, laying the groundwork for building more advanced AI agents. |
7 | ReAct Framework & Multi-Step Agents | Discover the ReAct framework, which is essential for building multi-step agents. Learn the benefits of this framework and how it enables agents to reason and act in complex tasks. |
8 | Exercise: Build Question/Answering ReAct Agent | In this practical exercise, you’ll build an Agentic RAG solution by creating a question-answering agent that uses the Tavily search engine to retrieve contextual information for accurate answers (a minimal agent sketch follows this table). |
9 | Exercise: Build a Multi-Step ReAct Agent | This exercise walks you through building a multi-step ReAct agent from the ground up without using LangChain, giving you a deeper understanding of how the ReAct framework works behind the scenes. |
10 | LangChain Utilities for Building Agentic-RAG Solutions | Learn about the various LangChain utility functions and classes that simplify the development of Agentic RAG solutions, making it easier to build more robust and scalable AI systems. |
11 | Exercise: Build an Agentic-RAG Solution using LangChain | In this exercise, you’ll rewrite a multi-step agent using LangChain, applying the knowledge gained to streamline the development process and leverage LangChain’s powerful utilities. |
12 | Quiz: Agentic RAG and ReAct | This quiz will test your understanding of Agentic RAG and the ReAct framework, ensuring you’re ready to apply these concepts in real-world applications. |
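
As a glimpse of lessons 7–9, here is a hedged ReAct sketch: a `FakeListLLM` replays canned ReAct-formatted steps so the loop runs offline, and a toy `word_count` tool stands in for Tavily; in the exercises you would swap in a real LLM and the Tavily search tool.

```python
# A ReAct agent loop with a canned LLM and a toy tool, for offline study.
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.llms import FakeListLLM
from langchain_core.prompts import PromptTemplate
from langchain_core.tools import tool

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

tools = [word_count]  # stand-in for a real search tool like Tavily

# A ReAct prompt must expose {tools}, {tool_names}, and {agent_scratchpad}.
prompt = PromptTemplate.from_template(
    "Answer the question using these tools:\n{tools}\n"
    "Use the format:\nThought: ...\nAction: one of [{tool_names}]\n"
    "Action Input: ...\nObservation: ...\n(repeat)\nFinal Answer: ...\n\n"
    "Question: {input}\n{agent_scratchpad}"
)

# Canned responses in ReAct format: first a tool call, then the final answer.
llm = FakeListLLM(responses=[
    "Thought: I should count the words.\nAction: word_count\n"
    "Action Input: to be or not to be",
    "Thought: I now know the answer.\nFinal Answer: 6 words",
])

agent = create_react_agent(llm, tools, prompt)   # wires reasoning + tool calls
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
print(executor.invoke({"input": "How many words in 'to be or not to be'?"})["output"])
```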