Temperature
A setting that controls how random or creative an AI's responses are. Low temperature = predictable and focused. High temperature = diverse and creative.
Temperature is the most important generation parameter you can adjust. It typically ranges from 0 to 2 and controls the randomness of the model's token selection: the model's raw scores (logits) are divided by the temperature before being converted into probabilities. At temperature 0, the model always picks the highest-probability next token, so output is deterministic. At higher temperatures, lower-probability tokens get a real chance of being chosen, which makes output more creative but riskier.
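The mechanics above can be sketched in a few lines. This is a minimal illustration, not any particular model's implementation; the function name `sample_token` and the example logits are made up for the demo.

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Pick a next-token index from raw logits at a given temperature."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        # Greedy decoding: always take the highest-scoring token.
        return int(np.argmax(logits))
    scaled = logits / temperature       # low T sharpens, high T flattens
    scaled -= scaled.max()              # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))
```

At `temperature=0` the same logits always yield the same token; as temperature rises, the sampled tokens start to vary.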
Practical guidelines: use low temperature (0-0.3) for factual tasks, code generation, data extraction, and anything where consistency matters. Use medium temperature (0.5-0.7) for general writing and conversation. Use high temperature (0.8-1.2) for creative writing, brainstorming, and when you want surprising outputs.
Temperature interacts with other parameters like top_p (nucleus sampling) and frequency_penalty. Most AI applications set these behind the scenes, but understanding temperature helps you use AI tools with adjustable settings more effectively.
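For context on that interaction, here is a minimal sketch of nucleus (top_p) filtering, which runs alongside temperature in most sampling pipelines. The function name and example numbers are illustrative, not any library's API:

```python
def top_p_filter(probs, p=0.9):
    """Nucleus sampling: keep the smallest set of top tokens whose
    cumulative probability reaches p, then renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}
```

Temperature reshapes the distribution first; top_p then trims the long tail of unlikely tokens, so the two settings jointly determine how adventurous the output can be.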
Real-World Example
When ChatGPT gives you a bland, predictable answer, the temperature is probably set low. When it gives wildly creative but occasionally nonsensical output, the temperature is high.
FAQ
What is Temperature?
A setting that controls how random or creative an AI's responses are. Low temperature = predictable and focused. High temperature = diverse and creative.
How is Temperature used in practice?
When ChatGPT gives you a bland, predictable answer, the temperature is probably set low. When it gives wildly creative but occasionally nonsensical output, the temperature is high.
What concepts are related to Temperature?
Key related concepts include Top-p (Nucleus Sampling), Token, and LLM (Large Language Model). Understanding these together gives a more complete picture of how temperature fits into the AI landscape.