Token
LLM & Language Models

The basic unit of text that AI models process, roughly equivalent to a word or word fragment. AI pricing, context windows, and rate limits are all measured in tokens.
Tokens are the atoms of AI language processing. Before an AI model reads your text, it breaks it into tokens using a tokenizer. In English, one token is roughly 3/4 of a word — 'hello' is 1 token, 'unbelievable' is 3 tokens, and a 1,000-word essay is about 1,300 tokens.
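The 3/4-of-a-word rule of thumb can be sketched in a few lines. This is a rough heuristic for estimation only, not a real tokenizer (actual models use tokenizers such as OpenAI's tiktoken, which split text by learned subword rules):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: 1 token ~ 0.75 English words.

    Heuristic only -- a real tokenizer splits on learned subword
    units, so counts vary by model and by text.
    """
    words = len(text.split())
    return round(words / 0.75)

# A 1,000-word essay comes out to roughly 1,333 tokens:
print(round(1000 / 0.75))  # 1333
```

For exact counts, run the model's own tokenizer; the heuristic is only for quick back-of-the-envelope planning.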
Tokens matter for three practical reasons: (1) pricing — API costs are charged per token (GPT-4o charges $2.50 per million input tokens), (2) context window — a 128K-token limit holds roughly 96,000 words of total conversation, and (3) generation speed — models generate tokens one at a time, so longer responses take longer.
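The pricing and context-window arithmetic above can be sketched directly. A minimal sketch, using the $2.50-per-million-input-token GPT-4o price and the 0.75 words-per-token rule of thumb stated in the text:

```python
PRICE_PER_MILLION_INPUT = 2.50  # USD per 1M input tokens (GPT-4o, from the text)
WORDS_PER_TOKEN = 0.75          # rough English rule of thumb

def input_cost_usd(tokens: int) -> float:
    """Estimated cost of sending `tokens` input tokens at the price above."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_INPUT

def context_capacity_words(context_tokens: int) -> int:
    """Approximate how many English words fit in a context window."""
    return int(context_tokens * WORDS_PER_TOKEN)

print(input_cost_usd(10_000))           # 0.025  -> 10K input tokens ~ $0.025
print(context_capacity_words(128_000))  # 96000  -> a 128K window ~ 96,000 words
```

Prices and limits vary by model and change over time, so treat the constants as illustrative inputs rather than fixed facts.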
Understanding tokens helps you optimize AI usage: shorter prompts cost less, staying within context limits prevents conversation amnesia, and knowing your token budget helps you plan complex workflows.
Real-World Example
When OpenAI charges $2.50 per million tokens, or Claude offers a 200K-token context window, understanding tokens (roughly 0.75 words each) helps you estimate costs and capacity.
FAQ
What is a token?
The basic unit of text that AI models process — roughly equivalent to a word or word fragment. AI pricing, context windows, and rate limits are all measured in tokens.
How are tokens used in practice?
When OpenAI charges $2.50 per million tokens, or Claude offers a 200K-token context window, understanding tokens (roughly 0.75 words each) helps you estimate costs and capacity.
What concepts are related to tokens?
Key related concepts include Context Window, LLM (Large Language Model), API (Application Programming Interface), and Tokenization. Understanding these together gives a more complete picture of how tokens fit into the AI landscape.