Enabling language models to use tools through structured function calls
Product Announcement

OpenAI introduced function calling in the Chat Completions API, allowing GPT-4 and GPT-3.5-turbo to generate structured JSON arguments for developer-defined functions.
Developers could define functions with names, descriptions, and parameter schemas. The model would decide when to call a function and generate valid JSON arguments. This turned language models from text generators into decision-making agents that could interact with external systems — query databases, call APIs, execute code, and chain multiple tool calls together.
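The flow described above can be sketched in Python. The tool schema follows the Chat Completions `tools` format (a function with a name, description, and JSON Schema parameters); the weather function, its parameters, and the simulated model output are hypothetical stand-ins, since a real round trip through the API is omitted here.

```python
import json

# Hypothetical tool definition in the Chat Completions "tools" format.
# The function name, parameters, and data below are illustrative.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["city"],
            },
        },
    }
]

def get_weather(city, unit="celsius"):
    # Stand-in implementation; a real tool would query a weather service.
    return {"city": city, "temp": 21, "unit": unit}

# The model decides to call the function and emits its arguments as a
# JSON string; the developer parses them and runs the matching function.
simulated_tool_call = {
    "name": "get_weather",
    "arguments": '{"city": "Paris", "unit": "celsius"}',
}

args = json.loads(simulated_tool_call["arguments"])
result = get_weather(**args)
print(result)  # {'city': 'Paris', 'temp': 21, 'unit': 'celsius'}
```

The key design point is that the model never executes anything itself: it only emits the function name and arguments, and the developer's code performs the actual call, keeping external effects under application control.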
Function calling was the technical enabler for the AI agent wave of 2023-2024. LangChain, AutoGPT, BabyAGI, CrewAI, and dozens of agent frameworks were built on the ability of models to reliably call tools. Without function calling, "AI agents" would have remained unreliable prompt-engineering hacks. With it, they became a legitimate application pattern.
Later updates added parallel function calling (multiple tool calls in one response) and the ability to force the model to call a specific function. These refinements made agent architectures more efficient and predictable, enabling complex multi-step workflows where the model could query multiple data sources simultaneously.