Last week, a CEO I was coaching stopped me mid-session with a straightforward question: "Sorry, but what does that actually mean?" I had just mentioned LLMs for the nth time.
It was a perfect reminder that while I try to avoid technical jargon, every leader should understand some foundational AI concepts to leverage this technology effectively.
So today, I'm breaking down five essential AI concepts in plain language: the building blocks you'll need to confidently speak about and interact with AI in your role or organization.
1. Large Language Models (LLMs): The Engine Behind Modern AI
Large Language Models (LLMs) are why we have the AI we know today. These systems, trained on massive datasets, power tools like ChatGPT, Claude, and Gemini.
LLMs have absorbed a huge share of publicly available text, including web pages, public-domain books, Wikipedia, and even Reddit threads. This lets them quickly retrieve, summarize, and even create content based on the patterns they've learned.

(This is the massive amount of data GPT-3 was trained on. It’s believed that GPT-4o was trained on a dataset that’s several times larger.)
What makes LLMs powerful isn't just their knowledge; it's their ability to make connections across different domains and generate new insights. And you can tap into all of it without a Computer Science degree: regular human language will do. (See "Prompting" below.)
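If you're curious what "regular human language" looks like in practice, here's a minimal sketch in Python using OpenAI's official openai library (assuming the package is installed and an OPENAI_API_KEY environment variable is set; the model name is just one example and changes over time):

```python
# pip install openai
from openai import OpenAI

# The client reads your OPENAI_API_KEY from the environment.
client = OpenAI()

# No special syntax needed: the prompt is plain human language.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: swap in whichever model you have access to
    messages=[{"role": "user", "content": "Explain what an LLM is in two sentences."}],
)

print(response.choices[0].message.content)
```

Claude, Gemini, and other providers follow a near-identical pattern: plain text in, plain text out.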
2. Tokens: The Building Blocks of AI Language
Tokens are how AI actually processes language—breaking words, parts of words, and even punctuation into smaller pieces that it can work with mathematically.
For example, the phrase "I need help with my presentation" might be broken into tokens like "I", "need", "help", "with", "my", "present", and "ation". Each token is then mapped to a numeric code, and those numbers are what the model actually computes with.

(Most words are a single token, but longer words split: "tokenization," for instance, becomes two tokens.)
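You can see tokenization for yourself with OpenAI's open-source tiktoken library. Here's a minimal sketch (assuming the package is installed; the exact splits vary from model to model):

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the tokenizer used by several recent OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

phrase = "I need help with my presentation"
token_ids = enc.encode(phrase)  # each token becomes a numeric code

print(token_ids)
for tid in token_ids:
    # Decode each ID individually to see which text piece it represents.
    print(tid, "->", repr(enc.decode([tid])))
```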
Why Tokens Lead to Hallucinations
Here's where things get tricky: AI is fundamentally a mathematical model, so it doesn't genuinely understand the meaning behind these tokens. It's just incredibly skilled at predicting the next likely token based on the patterns it has seen.
This explains why even the best AI models can sometimes confidently deliver completely fabricated responses—a phenomenon known as "hallucination."

I gave an example to a group of Lead with AI students I coached last week: if an AI model's training data included the incorrect phrase "the sun is black" enough times, it might confidently state this as fact—not because it believes it, but because the tokens statistically align with its learned patterns.
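To make that concrete, here's a toy sketch of what "picking the next token" amounts to (all probabilities invented for illustration):

```python
# Toy next-token prediction. A real model scores ~100,000 candidate tokens;
# here we fake a tiny probability table for the context "the sun is".
context = "the sun is"
candidates = {
    "bright": 0.46,
    "shining": 0.30,
    "hot": 0.17,
    "black": 0.07,  # bad training data could push this score up
}

# The model simply takes the statistically most likely continuation.
next_token = max(candidates, key=candidates.get)
print(context, next_token)  # "the sun is bright", chosen by probability, not belief
```

If "the sun is black" appeared often enough in the training data, that 0.07 would climb, and the model would state it just as confidently.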
When you're working with AI, remember this key insight: behind the scenes, it's playing a sophisticated numbers game with these tokens, not truly understanding content the way humans do.