OpenAI Just Announced New Models with Function Calling Capabilities
A great feature for developers.
Large language models keep getting better in terms of performance and capabilities, allowing developers to create new LLM-based tools and applications more easily.
OpenAI just announced two new models and a great new feature: function calling.
Developers mostly use the gpt-4 and gpt-3.5-turbo models through the API. OpenAI updated these models with more steerable versions.
The new version of gpt-3.5-turbo supports a 16k context window, which is a major improvement over the standard 4k version.
There is also good news about costs: a reduction of 25% on input tokens for gpt-3.5-turbo and of 75% on the embeddings model.
All these enhancements are great, but I think the most appealing one for the developer community is function calling.
Function calling
Function calling lets you describe functions to the model and have the model return a JSON object containing the arguments for calling one of them.
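To make this concrete, here is a minimal sketch using the Python SDK as it looked at the time of this announcement (openai below version 1.0). The get_current_weather function and its schema are illustrative placeholders I made up for this example, not something defined by the API itself.

```python
import json
import openai  # assumes the pre-1.0 openai SDK, current at the time of the announcement

openai.api_key = "YOUR_API_KEY"  # placeholder

# Describe the function the model is allowed to "call".
# get_current_weather is a hypothetical function used only for illustration.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a given city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Paris"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",  # a function-calling-capable model version
    messages=[{"role": "user", "content": "What's the weather like in Paris?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether to call a function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the function name plus its arguments as a JSON string.
    args = json.loads(message["function_call"]["arguments"])
    print(message["function_call"]["name"], args)  # e.g. get_current_weather {'city': 'Paris'}
```

Because the arguments come back as JSON matching the schema you declared, you can parse them directly instead of scraping them out of free-form text.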
One of the struggles when creating LLM-based applications is getting the output in a structured format of your choice. Prompt engineering helps with this: you can specify the type and structure of…