
LLM Capabilities & Limitations

Topic
Large Language Model (LLM) literacy encompasses understanding how transformer-based models generate text through next-token prediction trained on massive text corpora; why they hallucinate, producing confident-sounding false statements because they optimize for plausible continuation rather than factual accuracy; what they are genuinely excellent at (pattern synthesis, translation, summarization, ideation, drafting); what they consistently fail at (precise arithmetic, verified factual recall, causal reasoning, tasks requiring real-world grounding); and how prompt design affects output quality.
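The "plausible continuation rather than factual accuracy" point can be made concrete with a toy sketch. The snippet below is not a real LLM; it is a minimal bigram model over a tiny hypothetical corpus (the corpus text, the `following` table, and the `next_token` helper are all illustrative inventions). Like a real next-token predictor, it continues text in proportion to what it has seen, so a false claim in the training data is reproduced with the same confidence as a true one:

```python
import random
from collections import Counter, defaultdict

# Hypothetical training corpus. Note the noisy sentence: a plausible
# but false claim that the model will learn alongside the true ones.
corpus = (
    "the capital of france is paris . "
    "the capital of france is lyon . "
    "the capital of spain is madrid ."
).split()

# Next-token statistics: count which token follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(prev):
    """Sample the next token in proportion to how often it followed `prev`."""
    counts = following[prev]
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

# The model continues "is" with whatever was statistically common after
# it -- sometimes "paris", sometimes "lyon" -- because it optimizes for
# a plausible continuation, not for truth.
print([next_token("is") for _ in range(5)])
```

Scaled up by many orders of magnitude, with transformers replacing the bigram table, the same dynamic underlies hallucination: the model has no separate notion of "fact", only of what continuations are statistically likely.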

Role

LLMs are the first AI technology to be adopted at mass scale by people with no technical background: hundreds of millions of users deploy them as universal assistants without any model of what they actually are or why they fail in the ways they do. The result is predictable and measurable: people cite AI-generated hallucinations as facts, make decisions based on outputs that sound authoritative but are statistically plausible confabulations, and simultaneously fail to use LLMs for the tasks where they are genuinely transformative. LLM literacy, knowing specifically when to trust, when to verify, and how to prompt for the type of output you need, is the most immediately valuable component of AI literacy for most people.
