NotebookLM Got New Updates: How I Built a Learning System That Matches How My Brain Works
The full story of Wyndo's evolution using NotebookLM.
I recently wrote the viral NotebookLM post (read it here). This is supplementary material and a continuation of that. Follow me on LinkedIn and read my newsletter: The AI Maker Newsletter. I’m Wyndo, and I’ll be your guide today. I help small teams transform their productivity. I also help creators automate research, writing, and distribution end-to-end so they can scale their business without drowning in manual work.
My goal is making AI accessible for everyday life with simple strategies to build smarter, work faster, and live better (Building in public 🙏).
I’m the leading AI voice from Indonesia 🇮🇩 (Ed.)
WHAT IS NOTEBOOKLM?
🌏 NotebookLM is an AI research tool and thinking partner from Google that keeps upgrading its capabilities. It now offers Deep Research and chat goals, with more seamless and natural conversations. The promise of NotebookLM as a research partner is big:
My Story 📜
“A few weeks ago my NotebookLM post took off and suddenly growth accelerated way faster than I expected. Wild to think 8,000+ people actually want to read about my systematic but chaotic experiments building in public and hacking automated workflows.” - Wyndo
I’m humbled by the support I’ve received. What AI tool are you currently breaking? Would love to hear what you’re testing.
Guides on this 🗺️
Google is about to release Gemini 3.0 Pro and a slew of new AI products any day now, likely in the third week of November. That’s just days away, which means NotebookLM is going to get even more powerful. But where to start?
How to use NotebookLM for personalized knowledge synthesis
How I Learned Complex Topics 10x Faster with NotebookLM
How to Use Google’s Deep Research, Learn About and NotebookLM Together
NotebookLM is Constantly Improving its Features
The Google AI that makes research feel effortless
Master NotebookLM in 1 Hour: Complete Walkthrough
How to learn very quickly with NotebookLM
I Figured Out How to Make NotebookLM Write Like Me
I Used Google’s NotebookLM for 2 Years and It Changed the Way I Learn Forever
Please bookmark this list of guides to read later. Restack the article if you think it might be useful to others.
JUST START HERE 🔖
Click the title of this article to read it on the web; it’s not recommended to read on mobile or in your inbox.
NotebookLM Got New Updates: How I Built a Learning System That Matches How My Brain Works
By Wyndo, October & November 2025.

Seven months ago, NotebookLM let you upload sources, ask questions, get AI-powered answers, and generate basic Audio Overviews as AI podcasts.
Now? It’s a completely different tool.
If you’re returning to NotebookLM after a few months (or you’re new here), here’s what’s changed and where to find it in this guide:
Source curation:
Discover with filters – Specify source types (Reddit only, YouTube only, PDFs only) instead of getting random results
Discover with Deep Research – Run deep research to get in-depth reports with cited sources in one shot.
Multi-format sources – Add sources in more formats: Google Sheets, PDFs, Microsoft Word docs, and images.
Google Drive integration – Specify which Google Docs and Slides you want to use as sources.
Multi-format learning:
Custom Reports – Briefing docs, study guides, blog posts with instructions that match your knowledge level
Custom Audio Overviews – Podcast conversations you can customize by format (Deep Dive, Brief, Critique, Debate) and focus
Custom Video Overviews – AI-narrated presentations with Nano Banana animations in six visual styles (Watercolor, Papercraft, Anime, Whiteboard, Retro Print, Heritage), and custom visuals plus two format options: Explainer (comprehensive) or Brief (quick highlights)
Testing what you learned:
Custom Flashcards – Scenario-based cards with adjustable difficulty and explanation buttons
Custom Quiz – Multiple-choice questions testing integration, tradeoffs, and failure modes
Smarter conversations:
Custom Chat Goals – Set specific roles, voices, or goals for how NotebookLM responds to you
Expanded processing – Full 1 million token context window, 6x more conversation capacity, automatically saved chat history
The breakthrough? Every feature now adapts to where YOU are in your learning journey, not where some tutorial writer thinks you should be.
Now, let me show you what happened when I actually used all of this to learn something I knew nothing about…
I wanted to build my own AI agent chatbot for my newsletter with RAG (Retrieval Augmented Generation) capabilities using LangChain.
But, I had no idea where to start.
Every tutorial assumed I already understood vector databases, embeddings, and retrieval pipelines. Documentation was written for developers who lived and breathed Python. Stack Overflow threads casually threw around terms like “chunking strategies” and “similarity search” like everyone should just know what that meant.
I was stuck in that frustrating place where you know enough to be dangerous but not enough to be useful. I understood ChatGPT and Claude. I’d built automations in Make.com, Zapier, n8n, Relay, etc., that felt pretty sophisticated.
But LangChain?
Building actual AI agents from code?
That felt like I was being asked to perform surgery after only watching YouTube videos.
The gap between no-code automation and actual AI agent development felt impossibly wide. I couldn’t comprehend it.
Then I remembered the tool that had already changed my learning once before.
Seven months ago, when I was just starting the AI Maker newsletter, I wrote about NotebookLM. If this is your first time hearing about it, you might want to check that post to learn the basics. That post became my most popular piece by far—readers clearly connected with the idea of turning AI into a personalized learning system instead of just another research tool.
But here’s what I didn’t expect: NotebookLM itself kept evolving.
While I was focused on building AI automation workflows and exploring new AI tools, NotebookLM quietly added features that completely changed what’s possible. We’re not talking about minor updates—we’re talking about the ability to customize how AI teaches you based on exactly where you are in your learning journey.
And that changes everything when you’re trying to bridge the gap from no-code automation to actually building an agent in code.
So when I hit the wall with LangChain and RAG systems while trying to build an AI agent, I went back to NotebookLM. But this time, instead of just uploading sources and asking questions, I built a complete learning ecosystem that met me exactly where I was: an AI operator who understood AI concepts but had never built anything through actual agent development.
Before we continue, a quick shoutout to Jason Spielman, the design lead behind NotebookLM. His early sketches and experiments made today’s version possible. I was surprised by the complexity of the design—like watching a factory‑finished product arrive at your door, ready to use. If you enjoy the art of building a product, you might want to check out his story of NotebookLM’s early days.
It’s beautiful, I promise!
Now, back to the point.
Here’s what happened when I stopped trying to learn like a developer and instead built a system that matched how my brain actually works.
The learning system I built
Before I even started learning, I had to solve the first problem: What should I actually be reading?
When you’re learning something completely new, you don’t even know which sources are good. Google “LangChain RAG tutorial” and you get 10,000 results.
Which ones are for beginners? Which ones are outdated? Which ones assume knowledge I don’t have?
This is where NotebookLM’s “Discover” feature became my first breakthrough.
Phase 1: Curating the right sources (not just any sources)
Most people don’t realize you can customize what kind of sources NotebookLM finds for you. Instead of just asking “find me information about RAG,” you can be specific about the SOURCE TYPE you need.
Here’s how I used Discover customization:
Customization #1
“Find me sources from Reddit only”
Reddit threads have real developers admitting what makes them confused. You get threads like “I’m a beginner trying to understand RAG, here’s what finally made it click.” These people are usually one step ahead of you, explaining their breakthroughs. Perfect for building initial mental models without the complication of buzzwords and jargon.
Customization #2
“Find me YouTube videos only”
YouTube has been my biggest learning source over the years because it usually has beginner guides I can easily follow. It’s also packed with content that can take you from beginner to intermediate.
Customization #3
“Find me official documentation PDFs from LangChain, OpenAI, and ChromaDB only”
Documentation is useless if you start here—it’s written for people who already understand basics. But once you have mental models from Reddit and YouTube, official docs suddenly make sense. Specifying “PDF only” or “from LangChain.com” prevents random blog posts and ensures authoritative sources.
Customization #4
“Find me sources from top tech publications only—AWS blogs, Google Cloud documentation, enterprise case studies”
These sources explain WHY companies build RAG systems and real-world architecture decisions. Once you understand HOW to build, you need production best practices, cost optimization, and security considerations that only enterprise sources cover.
By customizing the types of sources, I create a diverse set of materials to deepen my understanding of the topic I’m learning. This helps me avoid bias from any single publication and lets me see multiple perspectives, which makes my learning stronger than ever.
Additionally, you can use files in your Google Drive as sources. This is a new feature they recently added.
🚨 Pro tip: Want to level up your sources collection? Use Perplexity’s Deep Research to collect high‑quality sources, then ask it to compile the citations as raw links. Copy those links into NotebookLM as your source set. Now you’ve got a curated research hub, fast.
Update on NotebookLM’s new feature: Let Deep Research do the hunting for you
Important note: NotebookLM just upgraded its Discover feature with Deep Research capabilities. Let’s explore it together.
Sometimes, I didn’t even know what sources to look for. I knew I wanted to understand RAG systems, but I didn’t know the right terminology, the key players, or which questions I should be asking.
This is where Deep Research became my secret weapon.
NotebookLM’s Deep Research feature acts like a dedicated researcher working for you. You give it a question, it creates a research plan, browses hundreds of websites on your behalf, and generates an organized report with cited sources—all in a few minutes.
But here’s what makes it different from just googling: the report isn’t the end product. It’s the starting point.
How I actually used it:
I asked Deep Research: “What are the most common mistakes beginners make when building their first RAG system, and what should they learn first?”
Deep Research went to work. It browsed developer blogs, Reddit threads, YouTube tutorials, and technical documentation. It refined its search as it learned more about the topic.
A few minutes later, I had:
A detailed report synthesizing what it found
Cited sources for every claim
One-click import to add both the report AND the sources directly into my notebook
I think this is game-changing!
Remember my earlier Discover customizations where I searched Reddit separately, then YouTube separately, then official docs separately? Deep Research did all of that simultaneously, but smarter. It found sources I never would have thought to look for because I didn’t know they existed.
Even better, while Deep Research ran in the background, I could keep adding other sources to my notebook. It didn’t freeze my workflow. By the time it finished, I had a curated research base that would have taken me hours to assemble manually.
What I’ve realized is that Deep Research is more than just a feature for finding information: it’s the feature that kickstarted my entire learning system.
🚨 Note: To expand a bit more on the Sourcing feature, NotebookLM now allows you to upload multi‑format sources such as Google Sheets, PDFs, Microsoft Word docs, and images. Make sure you give them a try as part of your learning sources.
Now, let’s move on to the next phase.
Phase 2: Building foundation through different learning styles
After curating my sources, I had the raw materials. Now I needed to actually learn from them. But here’s the problem: sitting down to read documentation felt overwhelming, and I had limited focused time each day.
This is where NotebookLM’s multi‑format generation shines. Instead of forcing one learning style, I matched formats to context—audio for commuting, video for visual clarity, and written reports for depth.
Let’s break them down.
1. Build the base with customized reports
When I was about to learn something new, I didn’t even know what I didn’t know. Terms like “vector embeddings,” “semantic search,” and “retrieval pipelines” were everywhere, but I couldn’t tell which ones were essential vs nice-to-have.
NotebookLM has a Report feature with multiple formats: blog posts, briefing documents, study guides, and four additional auto‑suggested options based on the documents you upload. You can generate them using custom instructions or write your own.
But I prefer to start with my own instructions.
Here are some custom instructions you can follow to create your own report:
Customization #1: Anchor new knowledge to what you already know
Instruction:
“Explain [NEW TOPIC] by contrasting it with [SOMETHING I ALREADY UNDERSTAND]”
My example:
“Explain LangChain and RAG by contrasting them with how Make.com works”
Why this worked: Instead of learning in a vacuum, I anchored new concepts to something I already understood.
NotebookLM explained: “In Make.com, you build workflows that follow predetermined steps. With LangChain, you’re building systems where the AI decides which steps to take based on the user’s question. Make.com is a recipe you follow exactly. LangChain is giving the AI ingredients and letting it cook.”
Suddenly, I understood why I needed to learn this differently.
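That contrast can be sketched in a few lines of plain Python. This is a toy illustration, not real Make.com or LangChain code: a fixed workflow runs the same predetermined steps every time, while an agent-style system chooses its next step per request. The keyword check below is a crude stand-in for the LLM making that decision.

```python
# Toy stand-ins for two "tools" an automation might call.
def summarize(text: str) -> str:
    return "summary of: " + text

def translate(text: str) -> str:
    return "translation of: " + text

# "Recipe" style (Make.com-like): every input follows the same fixed steps.
def fixed_workflow(text: str) -> str:
    return translate(summarize(text))

# "Agent" style (LangChain-like): the system picks which tool to use.
# A real agent would let an LLM make this choice; a keyword check
# stands in for that decision here.
def agent_route(question: str) -> str:
    if "translate" in question.lower():
        return translate(question)
    return summarize(question)
```

The point isn’t the code itself but the shape: in the recipe, the path is decided before any input arrives; in the agent, the path is decided by the input.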
Customization #2: Progressive complexity layering
Instruction:
“Start with the simplest possible explanation of [TOPIC], then layer in complexity”
My example:
“Start with the simplest possible explanation of RAG, then layer in complexity”
Why this worked: Prevented information overload on day one.
What I learned:
Simple version: RAG = giving AI access to your own documents so it can answer questions about them
Next layer: It works by converting documents into numbers (embeddings) and finding relevant chunks when users ask questions
Technical layer: Vector databases store embeddings, similarity search retrieves relevant chunks, LLM generates answers using retrieved context
Each layer built on the previous one instead of dumping everything at once.
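The three layers above can be sketched as a dependency-free toy in Python. The word-count “embedding” and the stubbed-out LLM are placeholders for real models, but the flow (embed, store, retrieve, generate) is the genuine RAG shape:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy embedding: a word-count vector (real systems use learned models).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Similarity between two "vectors": higher means more related.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Vector database": just a list of (chunk, embedding) pairs here.
docs = ["RAG retrieves relevant chunks from your documents",
        "Bananas are rich in potassium"]
store = [(d, embed(d)) for d in docs]

def retrieve(question: str, k: int = 1) -> list:
    q = embed(question)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

def answer(question: str) -> str:
    context = " ".join(retrieve(question))
    return f"Answer based on: {context}"  # a real LLM call would go here
```

Swapping in a real embedding model, a real vector database, and a real LLM turns this toy into the technical layer described above without changing the overall flow.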
Customization #3: Skill level progression map
Instruction:
“Explain this topic in 4 passes: (1) Absolute beginner with no jargon, (2) Intermediate with key terms, (3) Advanced with technical depth, (4) Expert-level insights about what most people get wrong”
My example:
“Used this to understand the progression from simple chatbot → documentation assistant → research agent”
Why this worked: Gave me a progression path instead of trying to build the advanced version immediately.
The clarity I gained:
Beginner: FAQ chatbot (needs: basic RAG, simple embeddings, single source)
Intermediate: Documentation assistant (needs: chunk optimization, multiple sources, metadata filtering)
Advanced: Research agent (needs: complex retrieval strategies, re-ranking, citation tracking)
Expert-level insights: AI agents work best for specific use cases rather than broad ones
I realized I didn’t need to master everything—I just needed to build the beginner version first.
2. Audio overview to build a customized podcast
I had absorbed the written report, but I needed reinforcement during “dead time.”
For example: my afternoon walk and gym sessions.
NotebookLM’s Audio Overview creates podcast-style conversations between two AI hosts. You can customize the format (Deep Dive, Brief, Critique, Debate), the length, AND the focus. And you can ask questions as it plays.
🚨 Pro tip: NotebookLM’s mobile app lets me listen on the go, but I want more control. I’d like to download the podcast and upload it to Spotify, so it’s one tap away and blends seamlessly with the playlists I already follow.
Here’s my three-podcast strategy that worked incredibly well:
Customization #1: Beginner interviewing expert
The setup:
Use “Deep Dive” format with default length.
Instruction:
“Target a specific audience: Have the first host act as a complete beginner who knows nothing about [TOPIC], and the second host act as an expert. The beginner should interview the expert, asking basic questions someone new would actually ask.”
My example:
“Have the first host act as a complete beginner who knows nothing about AI agents, and the second host act as an expert. The beginner should interview the expert about building RAG systems.”
Why this works:
You hear YOUR actual questions being asked by the beginner host
The expert explains without assuming prior knowledge
The interview format feels natural, not like a lecture
What I heard:
Beginner: “Okay, but why can’t I just paste my documents into ChatGPT?”
Expert: “Great question! Here’s the problem with that approach...”
This felt like listening to myself learn in real-time.
Customization #2: Expert debate
The setup:
Use “Debate” format with default length.
Instruction:
“Have two experts debate different approaches to [TOPIC]. One argues for [APPROACH A], the other argues for [APPROACH B]. They should discuss tradeoffs, not just argue who’s right.”
My example:
“Have two experts debate whether beginners should start with simple RAG or jump straight to advanced retrieval strategies. Discuss the tradeoffs of each approach.”
Why this works:
Hearing both sides prevents you from thinking there’s only one “right” way
You learn decision-making frameworks, not just facts
Shows you what experienced people actually argue about
What I heard:
Expert 1: “Start simple so you understand fundamentals”
Expert 2: “But simple approaches create bad habits you have to unlearn”
Both: Discussion of when each makes sense
This helped me understand there are different paths I can pursue, not one perfect answer.
Customization #3: Expert critique
The Setup:
Use “Critique” format with default length.
Instruction:
“Have an expert review the sources and offer constructive criticism: What’s missing? What’s oversimplified? What do beginners commonly misunderstand from this material?”
My example:
“Have an expert review these RAG tutorials and point out what’s missing, what’s oversimplified, and what beginners commonly misunderstand.”
Why this works:
Reveals gaps in the sources you collected
Warns you about common misconceptions BEFORE you develop them
Shows you what to be skeptical about
What I heard:
“These tutorials make RAG look simple, but they skip chunk optimization entirely”
“Most guides don’t mention that your first RAG system will probably return bad results”
“Notice how none of these sources discuss cost implications”
This critique saved me from thinking I understood everything after the first two listens.
3. Visual clarity using customized video overview
Even after understanding the concepts through reading and listening, I sometimes still need someone to explain things like a 101 presentation: something that gets me up to speed and readies me for more complex topics. This is where video comes into play.
NotebookLM’s Video Overview generates presentations with AI narration. The newest update uses Nano Banana—Gemini’s image generation model—to create contextual illustrations that help you understand and remember the content. You can choose from six visual styles (Watercolor, Papercraft, Anime, Whiteboard, Retro Print, Heritage) and two formats: Explainer for comprehensive coverage or Brief for quick highlights. For a visual learner, this can be your jam.
Recently, NotebookLM launched its custom visual feature where you can create any visual that you can think of:
You can explore with some things like: 8-bit pixelated, pop art, hyper realistic, super cute, 3D, etc. The limit is your imagination.
Now, let’s move on to my three-video strategy that built on what I learned from the podcasts:
🚨 Pro tip: Pick a visual style that matches your learning vibe. I used Whiteboard for technical concepts because it felt like a classroom lecture, but Papercraft for comparison tables because the layered look made differences pop visually.
Customization #1: Structured learning path
Instruction:
“Describe the show structure: Start with what I need to understand first, then what comes next, then what comes after that. Organize by learning sequence, not by topic complexity.”
My example:
“Start with what RAG is and why it exists, then explain the core components I need to know, then show me the basic workflow, then end with what I should learn first vs what can wait.”
Why this works:
Text slides organize information in clear learning order
You see the big picture before diving into details
Narration explains each slide while you read
What I actually saw:
Slide 1: “What is RAG?” with 3-4 bullet points
Slide 2: “Core Components” listing embeddings, vector DB, retrieval, LLM
Slide 3: “The Basic Workflow” with numbered steps
No complex diagrams—just organized text that built understanding step by step.
Customization #2: Comparison table presentation
Instruction:
“Target a specific use case: Present this to help me choose between [OPTION A], [OPTION B], and [OPTION C] by comparing them across key factors like complexity, cost, time investment, and best use cases.”
My example:
“Present this to help me choose how to build an AI agent by comparing complexity, when to use each, and what I need to learn for each.”
Why this works:
Simple text-based comparison slides show differences clearly
Narration walks through the tradeoffs while you read
Helps you make decisions, not just consume information
What I actually saw:
Slide showing three columns with agent type, capability, and best for
The AI narrator expands on each slide beyond the on-screen text
Table highlighting key differences
The visualization works well to help me choose my path. I could see how I would approach building my AI agent.
Customization #3: Mistake prevention checklist
Instruction:
“Focus on a specific topic: Create a presentation listing the most common mistakes beginners make with [TOPIC]. For each mistake, explain what goes wrong and what to do instead.”
My example:
“List the most common mistakes beginners make building RAG systems. For each mistake, explain what goes wrong (like using chunks that are too small or too large) and what to do instead.”
Why this works:
Slide-by-slide checklist format is easy to reference later
Text clearly states mistake → consequence → solution
Prevents hours of debugging obvious problems
What I actually saw:
Slide 1: “Mistake #1: Chunk Size” with bullet points explaining the problem
Slide 2: “What Happens” describing bad retrieval results
Slide 3: “What to Do Instead” with specific guidance
This presentation became my troubleshooting reference guide whenever I ran into problems.
Phase 3: Test your knowledge by validating what you think you know
After absorbing information through reports, podcasts, and videos, I thought I understood AI agents and RAG systems. I could explain concepts when reading about them. I could nod along when hearing explanations.
But I needed to test myself to make sure I had understood the whole concept.
This is where NotebookLM’s testing features revealed the gaps between “I think I know this” and “I actually know this.”
1. Use Flashcards to test my understanding
I thought I understood concepts, but when I tried to implement them, I realized I was just memorizing definitions without actually understanding how things worked together.
NotebookLM’s flashcards let you generate cards with custom instructions and ask for explanations on any answer. You can choose how many cards to make and dial the difficulty up or down. Net result: a highly tailored learning session.
Here’s how I used flashcard customizations strategically:
Customization #1: Scenario-based testing
Instructions:
“Create scenario-based flashcards that test my decision-making about [TOPIC], not just definitions. Present real situations where I need to choose the right approach.”
My example:
“Create scenario-based flashcards that test my decision-making when building a RAG AI agent, not just definitions”
Why this works:
Forces you to APPLY knowledge, not just recognize it
Reveals whether you understand WHEN to use what you learned
Exposes gaps in practical understanding
Example card I got:
“A user uploads a 200-page PDF manual and wants to ask questions about it. Do you need: (a) fine-tuning, (b) RAG, (c) prompt engineering, or (d) function calling?”
I confidently answered “prompt engineering” because I thought you could just paste the content in the prompt.
Wrong.
The answer was RAG because 200 pages exceeds context windows and you need retrieval to pull relevant sections. This flashcard revealed I didn’t actually understand when RAG was necessary vs optional.
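That decision point can be sketched as a simple check. The 4-characters-per-token heuristic and the 128k-token window below are illustrative assumptions, not exact figures for any particular model:

```python
def estimate_tokens(text: str) -> int:
    # Rough rule of thumb: roughly 4 characters per token in English text.
    return len(text) // 4

def needs_rag(document: str, context_window: int = 128_000) -> bool:
    # If the document can't fit in the prompt, you need retrieval
    # to pull only the relevant sections per question.
    return estimate_tokens(document) > context_window

# A 200-page manual at an assumed ~3,000 characters per page:
manual = "x" * 200 * 3000
print(needs_rag(manual))  # exceeds the window, so RAG is required -> True
```

The real decision involves more than size (cost per call, latency, answer quality), but context-window overflow is the hard constraint that makes RAG necessary rather than optional.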
Customization #2: Common mistakes for beginners
Instructions:
“Focus on common mistakes beginners make with [TOPIC]. Create flashcards that test whether I can identify what’s wrong and why.”
My example:
“Focus on common mistakes beginners make when building an AI agent”
Why this works:
Prepares you for problems before you encounter them
Tests understanding of failure modes
Builds troubleshooting intuition
Example card that saved me from future headaches:
See? The flashcards are fully tailored to my custom instructions. If I want to ask about the reasoning behind an answer, I can click “Explain,” and it will take me to the Chat section where I can explore more.
Customization #3: Learn terminology for better understanding
Instructions:
“Create flashcards for the specific terms I keep confusing in [TOPIC]. Ask me to explain the difference between similar concepts in one sentence each.”
My example:
“Create flashcards for the specific terms I keep confusing: embeddings vs vectors vs vector databases, semantic search vs keyword search, retrieval vs generation.”
Why this works:
Clarifies terminology blocking your comprehension
Forces precise understanding, not vague “I kind of get it”
Makes documentation suddenly readable
The terminology that was blocking me:
“Explain the difference between embeddings, vectors, and vector databases in one sentence each”
Embeddings = converting text into numbers
Vectors = those numbers in array form
Vector databases = storage systems optimized for finding similar vectors
“What’s the difference between semantic search and keyword search?”
Keyword = finding exact word matches
Semantic = finding meaning matches (even with different words)
Getting this terminology straight made reading documentation 10x easier.
2. Quiz it: test how the ideas apply and connect
If you’re like me, you’re not satisfied with just flashcards. You’ll look for other ways to test your understanding of the new topics you’re learning. That’s why I tuned the quiz to stress‑test harder and reveal whether I can synthesize knowledge and solve real problems.
NotebookLM’s Quiz feature generates multiple-choice questions with explanations, unlike flashcards, which don’t provide answer options.
Here’s my quiz customization strategy:
Customization #1: Integration testing across concepts
Instructions:
“Create quiz questions that require me to combine multiple concepts from [TOPIC] to solve problems. Don’t test concepts in isolation—test whether I understand how they work together.”
My Example:
“Create quiz questions that require me to combine multiple RAG concepts—embeddings, chunk size, retrieval quality, and LLM generation—to solve real implementation problems.”
Why this works:
Tests whether you understand the SYSTEM, not just parts
Reveals gaps in how concepts connect
Prepares you for real-world complexity where nothing works in isolation
Example question I got: “Your RAG chatbot returns accurate information but users complain answers lack context. The issue is most likely: (a) Wrong embedding model, (b) Chunk size too small, (c) Vector database error, (d) LLM not understanding the question”
I guessed (a) because embedding seemed most technical.
Wrong.
Answer: (b) Chunk size too small—retrieves precise matches but loses surrounding context needed for complete answers. This question revealed I didn’t understand how chunk size affects answer quality, only that it affects retrieval.
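Here’s a tiny sketch of that effect, with chunk sizes chosen arbitrarily for illustration: a small chunk size yields precise fragments whose surrounding sentences land in other chunks, while a larger size keeps the full troubleshooting flow together.

```python
def chunk(text: str, size: int) -> list[str]:
    # Naive chunker: split into groups of `size` words
    # (real pipelines also use overlap and sentence boundaries).
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

passage = ("Restart the router. Wait thirty seconds before plugging it back "
           "in. If the light stays red, the firmware update likely failed.")

small = chunk(passage, 5)   # precise fragments; context split across chunks
large = chunk(passage, 50)  # one chunk that keeps the whole procedure intact
```

With `small`, a query about the red light might retrieve only “If the light stays red, the firmware…” without the restart steps that precede it; with `large`, the retrieved chunk carries the full context the answer needs.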
Customization #2: Tradeoff decision testing
Instructions:
“Create quiz questions about the tradeoffs in [TOPIC]. For each question, make me choose between options where each has valid pros and cons—there’s no universally ‘correct’ answer without context.”
My Example:
“Create quiz questions about architectural tradeoffs in RAG systems. Make me choose between options based on different priorities like speed vs accuracy, cost vs quality, simplicity vs power.”
Why this works:
Prepares you for real decisions with no perfect answer
Tests understanding of WHY you’d choose each option
Builds decision-making frameworks, not just knowledge
Example question I got: “For a chatbot answering 10,000 questions/day, which approach balances cost and quality: (a) Retrieve 3 chunks with high-quality embeddings, (b) Retrieve 10 chunks with standard embeddings, (c) Retrieve 1 chunk with re-ranking, (d) Retrieve 5 chunks with hybrid search”
This wasn’t about “right” answer—it was about understanding the tradeoff:
More chunks = better context but higher cost
Better embeddings = better retrieval but slower/expensive
Re-ranking = improved relevance but added complexity
Hybrid search = best results but most expensive
The quiz forced me to think through implications, not just pick the fanciest option.
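The cost side of that tradeoff is easy to see with back-of-the-envelope arithmetic. All the numbers below (tokens per chunk, price per thousand tokens, query volume) are made up for illustration:

```python
def daily_cost(chunks_per_query: int, tokens_per_chunk: int = 500,
               queries_per_day: int = 10_000,
               price_per_1k_tokens: float = 0.001) -> float:
    # Context tokens sent to the LLM scale linearly with retrieved chunks,
    # and that cost multiplies across every query of the day.
    tokens = chunks_per_query * tokens_per_chunk * queries_per_day
    return tokens / 1000 * price_per_1k_tokens

print(daily_cost(3))   # fewer chunks: cheaper, less context per answer
print(daily_cost(10))  # more chunks: richer context, ~3x the daily cost
```

Re-ranking and hybrid search add their own compute on top of this, which is why “retrieve more chunks” is never a free upgrade.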
Customization #3: Failure mode prediction
Instructions:
“Create quiz questions asking me to predict what will break or fail in [TOPIC]. Present scenarios and ask what problem will occur and why.”
My Example:
“Create quiz questions asking me to predict what breaks when building AI agent systems. Present implementation scenarios and ask what problem will occur.”
Why this works:
Builds debugging intuition before you need it
Tests understanding of how systems fail, not just how they work
Prepares you for troubleshooting real problems
Example question I got:
The correct answer was D because specificity is necessary to build an AI agent that actually solves business problems and meets real needs.
If you’re unsure about your answer, ask for a hint. It’ll nudge you toward the right one.
Phase 4: Conversations that adapt to your learning style
After testing myself with flashcards and quizzes, I learned something frustrating: I knew WHAT was wrong, but I didn’t always understand WHY.
The quiz told me my chunk size was too small, but I couldn’t visualize what “too small” actually meant in practice. The flashcard said I needed RAG instead of prompt engineering, but I couldn’t explain when that decision point actually happens in real projects.
I had the facts. Yet I still didn’t have a strong understanding.
This is where Custom Chat Goals changed everything.
NotebookLM now lets you customize chat to adopt a specific goal, voice, or role. Simply put, you can program how the AI responds based on exactly where you are in your learning journey.
Here’s how I used chat customization to go from “I got the answer wrong” to “I actually understand why”:
Customization #1: The 5th-grade teacher
The setup:
Click the configuration icon in chat, then write your custom goal.
My instruction:
“Act as a teacher explaining to a 5th grader. Break down complex topics into simple, real-world analogies. No jargon unless you explain it in plain English first. Use examples from everyday life that anyone can understand.”
Why this works:
When you’re learning something completely new, technical explanations just create more confusion. You need to understand the CONCEPT before you learn the technical terms.
What I actually experienced:
Me: “Why do I need vector databases for RAG?”
NotebookLM:
Turned out vector databases weren’t scary. They were just an organized filing system.
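As a rough sketch of that “organized filing system” idea, here is a minimal in-memory vector store. The vectors are hand-made toys and the document titles are my own illustrative assumptions; a real system would generate vectors with an embedding model:

```python
import math

# A minimal in-memory "vector database": it files each document under a
# list of numbers (its vector) and retrieves by nearness, not exact match.
# Vectors below are hand-made toys, not real embeddings.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """How closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

store: dict[str, list[float]] = {
    "Resetting your router":      [0.9, 0.1, 0.0],
    "Chocolate cake recipe":      [0.0, 0.2, 0.9],
    "WiFi troubleshooting steps": [0.8, 0.3, 0.1],
}

def search(query_vector: list[float], top_k: int = 1) -> list[str]:
    """Return the titles whose vectors sit closest to the query vector."""
    ranked = sorted(store,
                    key=lambda title: cosine_similarity(store[title], query_vector),
                    reverse=True)
    return ranked[:top_k]

# A query vector representing "fix my internet" lands near the networking
# documents and far from the recipe, even with zero shared keywords.
print(search([0.85, 0.2, 0.05], top_k=2))
```

That is the whole filing system: store by vector, look up by nearness. Everything RAG adds on top is bookkeeping around these two operations.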
Customization #2: The “show me, don’t tell me” explainer
My instruction:
“When I ask about a concept, give me a concrete before-and-after example showing what happens WITHOUT this concept versus WITH this concept. Use real scenarios, not abstract explanations. Show me the actual difference it makes.”
Why this works:
I learn by seeing examples, not by reading definitions. If I can see what breaks without something and what works with it, I understand why it matters.
What I actually experienced:
Me: “What’s the difference between regular search and semantic search?”
NotebookLM: “Let me show you with a real example.
WITHOUT semantic search (keyword matching): User asks: ‘How do I fix my internet connection?’ Your system searches for documents containing: ‘fix’ + ‘internet’ + ‘connection’ Returns: A document titled ‘Internet Connection Requirements’ (has the keywords but doesn’t solve the problem)
WITH semantic search (meaning matching): User asks: ‘How do I fix my internet connection?’ Your system searches for documents ABOUT: troubleshooting connectivity issues Returns: A document titled ‘WiFi Not Working? Try These Steps’ (doesn’t have exact keywords but actually solves the problem)
The difference: Keyword search finds exact word matches. Semantic search finds what the person actually MEANS.”
Now I understood why semantic search mattered—it finds solutions, not just matching words.
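The before-and-after above can be sketched in a few lines, with toy two-dimensional vectors standing in for real embeddings (the document texts and vector values are illustrative assumptions, not output from any real search system):

```python
import math

# Keyword search counts literal word overlap; semantic search compares
# meaning vectors. Toy corpus and hand-made vectors for illustration only.

docs = {
    "Internet Connection Requirements": "minimum internet connection speed required",
    "WiFi Not Working? Try These Steps": "troubleshoot restart your router check cables",
}

def keyword_score(query: str, text: str) -> int:
    """Count how many query words literally appear in the document."""
    return len(set(query.lower().split()) & set(text.lower().split()))

query = "how do i fix my internet connection"

best_keyword = max(docs, key=lambda t: keyword_score(query, t + " " + docs[t]))
print("keyword search picks:", best_keyword)   # wins on 'internet' + 'connection'

# Semantic search: compare direction of meaning vectors instead of words.
vectors = {
    "Internet Connection Requirements":  [0.2, 0.9],  # about specs, not fixing
    "WiFi Not Working? Try These Steps": [0.9, 0.2],  # about troubleshooting
}
query_vector = [0.85, 0.3]  # 'fix my internet' is close to troubleshooting

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

best_semantic = max(vectors, key=lambda t: cosine(vectors[t], query_vector))
print("semantic search picks:", best_semantic)
```

The keyword scorer picks the spec sheet because it shares two words with the query; the vector comparison picks the troubleshooting guide because it shares the query’s meaning.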
Customization #3: The “test after every answer” accountability partner
My instruction:
“After you answer my question, always end your response with a follow-up question that tests whether I actually understood what you just explained. Don’t just ask ‘does that make sense?’ - ask me something that proves I can apply the concept you just taught me.”
Why this works:
Here’s the problem with passive learning: you read an explanation, it makes sense in the moment, you think “got it,” and you move on. Then three days later when you try to actually use it, you realize you didn’t understand it at all.
This customization forces active learning. Every answer I get from NotebookLM ends with a question I have to answer. I can’t just nod along—I have to prove I understood by applying it immediately.
What I actually experienced:
Me: “What’s the difference between embeddings and vectors?”
NotebookLM: “Embeddings are the process of converting text into numerical representations. Vectors are the actual arrays of numbers that result from that process. Think of it like this: ‘embedding’ is the action (like ‘baking’), and ‘vector’ is the result (like ‘bread’). When you embed the word ‘dog,’ you get a vector like [0.2, 0.5, 0.8, ...].
Now here’s my question for you: If I told you I have two vectors - one for the word ‘king’ and one for the word ‘queen’ - and they’re very close together numerically, what does that tell you about those two words?”
Me: “Um... that they have similar meanings?”
NotebookLM: “Exactly! You just demonstrated you understand that vector proximity = semantic similarity. That’s the core concept. Ready for the next question?”
See what happened? I couldn’t just passively consume the answer. I had to immediately apply what I learned to prove I got it.
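The king/queen exchange above boils down to one calculation, cosine similarity, shown here with tiny hand-made vectors (real embeddings have hundreds of dimensions and come from a trained model; these three-number toys are only for illustration):

```python
import math

# 'Vector proximity = semantic similarity': related words get vectors
# that point in nearly the same direction, unrelated words do not.

vectors = {
    "king":   [0.90, 0.80, 0.10],
    "queen":  [0.85, 0.90, 0.15],
    "banana": [0.10, 0.05, 0.90],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

print(cosine_similarity(vectors["king"], vectors["queen"]))   # near 1.0: similar
print(cosine_similarity(vectors["king"], vectors["banana"]))  # much lower: unrelated
```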
What actually changed
After a week of learning this way, I finally understood exactly what I needed to know now versus what I could figure out later. I can start building the AI agent chatbot for my newsletter, so readers get full access to my knowledge, pulling resources across my newsletter and social notes, whenever they want to learn or explore my content archives.
Here’s what surprised me most about this whole process:
I thought NotebookLM’s new features were about generating different formats. Podcast, video, flashcards, quiz—just more ways to consume the same information, right?
Wrong.
Each format solved a different learning problem I didn’t even know I had:
Reports gave me the foundation, but I wasn’t going to re-read them during my commute.
Podcasts let me learn while walking, but I couldn’t visualize how components connected.
Videos showed me the structure, but I thought I understood more than I actually did.
Flashcards revealed I was just recognizing answers, not truly understanding them.
Quizzes exposed that I couldn’t actually apply what I’d learned.
Custom chat goals let me shape a learning style that matches how my brain works.
What I realized was that every feature was built to adapt to my learning needs by meeting me where I was.
Seven months ago, I wrote about NotebookLM as a learning tool. That post became my most popular because readers connected with the idea of turning AI into a personalized learning system instead of following generic tutorials.
But what I didn’t realize then—and what I understand now—is this:
The future of learning isn’t about AI teaching you.
It’s about you teaching AI how to teach YOU.
Every customization I shared in this post was me telling NotebookLM where my knowledge gaps were, how my brain makes connections, what learning style works for my situation. The AI didn’t magically know this. I had to design my own learning path.
And honestly? That’s the real skill worth developing.
If you’re trying to learn something complex right now, whether it’s AI development like me, or marketing, or fitness, or anything else, stop forcing yourself through tutorials that weren’t designed for where you are.
Instead, think about:
What do you already know that you can anchor this new thing to?
Where are you actually confused right now?
Are you testing if you recognize information or if you can actually apply it?
Then use NotebookLM’s customization to build the learning system you need.
The prompts I shared aren’t magic formulas. They’re examples of designing learning that matches where you are and where you’re trying to go. Your customizations will look different because your learning needs are different.
And that’s exactly the point.
How’s NotebookLM working for you? Drop your go‑to tips and tricks in the comments.
I’m building more systems like this—AI workflows that solve real problems I’m actually facing. Some save me hours every week. Others took weeks of experimentation to get right.
If you want to see how I’m figuring this stuff out in real-time, subscribe to AI Maker. I share the complete breakdowns, the exact systems, the mistakes I made, and what actually works.
Because the best way to learn isn’t reading about what’s possible.
It’s watching someone figure things out, copying what works, and learning as you go.