We’ve always thought of large language models (LLMs) like Claude, GPT-4, and Gemini as just next-word predictors—but new research from Anthropic tells a very different story. In this video, I break down their blog post “Tracing the Thoughts of a Large Language Model” and explore what’s really happening under the hood.
LINKS:
www.anthropic.com/news/tracing-thoughts-language-m…
www.anthropic.com/research/auditing-hidden-objecti…
arxiv.org/pdf/2503.21934v1
openai.com/index/chain-of-thought-monitoring/
epoch.ai/frontiermath/the-benchmark
www.lesswrong.com/posts/8ZgLYwBmB3vLavjKE/some-les…
transformer-circuits.pub/2025/attribution-graphs/b…
transformer-circuits.pub/2025/attribution-graphs/b…
bbycroft.net/llm
RAG Beyond Basics Course:
prompt-s-site.thinkific.com/courses/rag
Let's Connect:
🦾 Discord: discord.com/invite/t4eYQRUcXB
☕ Buy me a Coffee: ko-fi.com/promptengineering
🔴 Patreon: www.patreon.com/PromptEngineering
💼Consulting: calendly.com/engineerprompt/consulting-call
📧 Business Contact: engineerprompt@gmail.com
Become Member: tinyurl.com/y5h28s6h
💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
Sign up for the localGPT newsletter:
tally.so/r/3y9bb0
00:00 How LLMs work
02:49 Next Word Prediction vs. Planning Ahead
03:28 Interpreting LLM Reasoning
04:14 Comparing LLMs to Computer Vision Models
05:20 The Biology of a Large Language Model
05:53 Universal Language of Thought
12:00 LLMs and Mathematical Reasoning
15:23 Faithfulness in Chain of Thought
20:05 LLMs and Hallucinations
22:06 Understanding Jailbreaks