Turn the stub tools from Ch 4 into real implementations: embed project documentation with OpenAI embeddings, build a FAISS index, implement search_docs with vector similarity search, and implement generate_test_cases using retrieved context.
Build a complete, multi-turn AI agent from scratch using LangGraph — with persistent memory via SQLite checkpointer, proper streaming output, structured tool schemas, and multi-conversation thread support.
A practical introduction to LangChain’s core building blocks and LangGraph’s stateful graph abstraction — including messages, @tool, StateGraph, nodes, edges, and a complete Hello World agent.
What exactly is an AI agent, how does it differ from a chatbot or an LLM pipeline, and when should you actually use one? This chapter covers the agent loop, real use cases, and the honest reasons not to build an agent.
A deep dive into the four core components of an AI agent system, and why Context Engineering — managing everything in the LLM’s context window — matters far more than just writing good prompts.
Why I wrote this series, what you’ll build, and how to follow along — an overview of all nine chapters covering the full lifecycle of a production AI agent.
A curated 18-month learning roadmap for becoming an AI Speech Engineer — covering foundations, core technologies (ASR, TTS, Speaker Verification, Diarization, Voice Conversion), and the latest Audio Language Models, distilled from 6 years of hands-on experience.
An analysis of why language models hallucinate: statistical pressures in training and evaluation procedures reward confident guessing over acknowledging uncertainty.
Speaker Diarization answers “Who spoke when?” — covering core concepts, traditional and modern end-to-end approaches, and the latest Sortformer model for speaker segmentation.
Understanding entropy and why it’s a core concept in decision trees, neural networks, and loss functions like cross-entropy.
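As a taste of what that article covers, here is a minimal sketch of Shannon entropy and cross-entropy for discrete distributions (function names are illustrative, not from the article):

```python
import math

def entropy(p):
    """Shannon entropy H(p) = -sum(p_i * log2(p_i)), in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    """Cross-entropy H(p, q) = -sum(p_i * log2(q_i)); equals H(p) only when q == p."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

fair = [0.5, 0.5]      # fair coin: maximum uncertainty for two outcomes
biased = [0.9, 0.1]    # skewed coin: more predictable, lower entropy

print(entropy(fair))                 # 1.0 bit
print(entropy(biased))               # ~0.469 bits
print(cross_entropy(fair, biased))   # > entropy(fair): cost of modeling p with the wrong q
```

The gap between cross-entropy and entropy (the KL divergence) is exactly the penalty a model pays for predicting with the wrong distribution, which is why cross-entropy is the standard classification loss.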