Lesson 03: The Bottleneck | Managing the Working Memory Limit
Your brain can only hold ~4 chunks at once. So can your AI.
In this lesson, we confront the most brutal constraint in your cognitive architecture: Working Memory. For decades, psychology taught that the limit was "7 plus or minus 2." Then Cowan (2001) made the case that the real number is about 4. This changes everything about how you should learn — and how you should engineer prompts.
━━━━━━━━━━━━━━━━━━━━
🧠 WHAT YOU'LL LEARN:
00:00 — Recap: Lesson 02 (The Gatekeeper / RAS)
01:30 — The Biological Hardware: Mission, Mechanism, Problem
05:00 — What Working Memory Actually Is
08:30 — The Bridge: Context Windows & Token Limits
12:00 — Breach Protocol: 4 Chunking Strategies
16:00 — The Build: Chunking Protocol (Architect + Builder tracks)
20:00 — Key Takeaways + Lesson 04 Preview
━━━━━━━━━━━━━━━━━━━━
🔬 KEY CONCEPTS:
• Working Memory capacity: ~4 chunks (Cowan, 2001), not 7±2 (Miller, 1956)
• Duration: 15-30 seconds without rehearsal
• WM handles both storage AND manipulation simultaneously
• LLM context windows are the AI equivalent of Working Memory
• "Lost in the middle" effect applies to both humans and LLMs
• Chunking: the universal hack for fixed-capacity systems
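Chunking can be sketched in a few lines of code. This is a minimal illustration (the function name and example are mine, not from the lesson) of packing a flat stream of items into WM-safe groups of at most 4, mirroring the ~4-chunk capacity above:

```python
# Minimal chunking sketch: split a flat list into groups of at most
# `size` items, mirroring the ~4-chunk working-memory limit.
# Illustrative only — not the lesson's official template.

def chunk(items, size=4):
    """Split a flat list into consecutive groups of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Ten single digits are ten units to hold; chunked, they are three:
digits = list("4155552671")
print(chunk(digits))
# → [['4', '1', '5', '5'], ['5', '5', '2', '6'], ['7', '1']]
```

The same move — fewer, larger, labeled units — is what phone-number grouping, chess-pattern recognition (Chase & Simon, 1973), and prompt sectioning all exploit.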
━━━━━━━━━━━━━━━━━━━━
🛠️ BUILD ASSIGNMENTS:
ARCHITECT TRACK:
Design a manual "Chunking Protocol" template using the SCAN → GROUP → LABEL → SEQUENCE → VERIFY framework.
BUILDER TRACK:
Write a "Chunking Protocol" system prompt that instructs an AI to automatically decompose any documentation into WM-safe units.
STACK: Combine with your Lesson 02 High-Salience tools for maximum effect.
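For the Builder track, here is one possible starting point — a hedged sketch, not the lesson's official prompt. The wording of `CHUNKING_PROTOCOL` and the helper `build_messages` are illustrative; adapt the five framework steps to your own stack:

```python
# Illustrative Builder-track sketch: a system prompt encoding the
# SCAN -> GROUP -> LABEL -> SEQUENCE -> VERIFY framework, plus a helper
# that pairs it with a document in the common chat-message format.
# Prompt wording is an assumption, not the lesson's canonical template.

CHUNKING_PROTOCOL = """\
You are a Chunking Protocol agent. Decompose any documentation you
receive into working-memory-safe units:
1. SCAN: list every distinct concept the text introduces.
2. GROUP: merge related concepts into at most 4 chunks per section.
3. LABEL: give each chunk a short, memorable name.
4. SEQUENCE: order chunks from prerequisite to dependent.
5. VERIFY: confirm no chunk contains more than 4 sub-items.
Output one section per chunk: label first, details indented below.
"""

def build_messages(doc_text):
    """Pair the protocol with a document, ready for a chat-style API."""
    return [
        {"role": "system", "content": CHUNKING_PROTOCOL},
        {"role": "user", "content": doc_text},
    ]
```

Stacked with the Lesson 02 High-Salience tools, the LABEL step is where salience cues (bolding, naming, front-loading) do the most work.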
━━━━━━━━━━━━━━━━━━━━
📚 REFERENCES & FURTHER READING:
Core Research:
• Cowan, N. (2001). "The magical number 4 in short-term memory: A reconsideration of mental storage capacity." Behavioral and Brain Sciences, 24(1), 87-114.
https://doi.org/10.1017/S0140525X0100...
• Miller, G. A. (1956). "The magical number seven, plus or minus two: Some limits on our capacity for processing information." Psychological Review, 63(2), 81-97.
https://doi.org/10.1037/h0043158
Lost in the Middle (LLMs):
• Liu, N. F., et al. (2023). "Lost in the Middle: How Language Models Use Long Contexts." arXiv:2307.03172.
https://arxiv.org/abs/2307.03172
Chunking Research:
• Gobet, F., et al. (2001). "Chunking mechanisms in human learning." Trends in Cognitive Sciences, 5(6), 236-243.
https://doi.org/10.1016/S1364-6613(00...
• Chase, W. G., & Simon, H. A. (1973). "Perception in chess." Cognitive Psychology, 4(1), 55-81.
https://doi.org/10.1016/0010-0285(73)...
Context Windows & AI:
• IBM. "What is a context window?" IBM Think.
https://www.ibm.com/think/topics/cont...
• Anthropic. "Long context prompting tips." Anthropic Docs.
https://docs.anthropic.com/en/docs/bu...
Working Memory & Education:
• Sweller, J. (1988). "Cognitive load during problem solving: Effects on learning." Cognitive Science, 12(2), 257-285.
https://doi.org/10.1207/s15516709cog1...
• Baddeley, A. (2000). "The episodic buffer: a new component of working memory?" Trends in Cognitive Sciences, 4(11), 417-423.
https://doi.org/10.1016/S1364-6613(00...