Context Window
AI & Agents

The maximum number of tokens an LLM can process in a single request. The context window determines how much schema metadata, query results, error messages, and instructions an agent can consider at once. Larger context windows enable more complex database interactions but increase latency and cost.
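A minimal sketch of how an agent might budget its inputs against a fixed context window. The function names and the ~4-characters-per-token estimate are illustrative assumptions; a real system would count tokens with the model's actual tokenizer.

```python
# Sketch: fit prioritized agent inputs into a context window budget.
# approx_tokens uses a crude ~4 chars/token heuristic (assumption);
# production code would call the model's tokenizer instead.

def approx_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fit_to_window(pieces: list[tuple[str, str]], window: int) -> list[str]:
    """Keep (label, text) pieces in priority order until the budget runs out."""
    kept, used = [], 0
    for label, text in pieces:
        cost = approx_tokens(text)
        if used + cost > window:
            break  # drop lower-priority context rather than overflow the window
        kept.append(label)
        used += cost
    return kept

# Hypothetical agent inputs, highest priority first.
pieces = [
    ("instructions", "You are a SQL agent." * 10),
    ("schema", "CREATE TABLE users (id INT, name TEXT);" * 50),
    ("query_results", "id,name\n1,Ada\n" * 200),
]
print(fit_to_window(pieces, window=600))  # query results get dropped first
```

This illustrates the tradeoff in the definition: a larger `window` lets the agent keep more query results and schema detail, at the price of sending more tokens per request.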