Functionary Medium V2.4
Functionary Medium V2.4 is a language model that can interpret and execute functions or plugins. It decides when a function call is needed, whether to run calls in parallel or serially, and only triggers functions when necessary, which keeps it efficient and fast. Because it can analyze function outputs and ground its responses in them, Functionary Medium V2.4 is one of the strongest open-source alternatives to GPT-4 for tool use. It also supports code interpretation and achieves state-of-the-art performance in Function Calling Accuracy.
Model Overview
Meet Functionary-Medium-V2.4, a cutting-edge language model that can interpret and execute functions/plugins. Imagine having a personal assistant that can understand and use various tools to help you with your tasks. That’s what Functionary-Medium-V2.4 can do!
Capabilities
Functionary-Medium-V2.4 brings several capabilities to the table:
- Intelligent parallel tool use: The model can use multiple tools at the same time to help you with your tasks.
- Analyze tool outputs: It can understand the outputs of different tools and provide relevant responses.
- Decide when to use tools: The model can decide when to use tools and when to provide a normal chat response.
- Support code interpreter: It can even interpret and execute code.
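To picture parallel tool use concretely, here is a sketch of what a single assistant turn with two simultaneous calls might look like. The shape below follows the OpenAI-style tool-call convention; the `get_weather` function and its arguments are hypothetical, for illustration only.

```python
import json

# Hypothetical assistant turn containing two parallel tool calls,
# in an OpenAI-style response shape (illustrative, not the exact wire format).
assistant_turn = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {"type": "function",
         "function": {"name": "get_weather",
                      "arguments": json.dumps({"city": "Hanoi"})}},
        {"type": "function",
         "function": {"name": "get_weather",
                      "arguments": json.dumps({"city": "Tokyo"})}},
    ],
}

# Each call can be dispatched independently; the outputs are then fed
# back to the model as tool messages so it can compose a final answer.
for call in assistant_turn["tool_calls"]:
    fn = call["function"]
    print(fn["name"], json.loads(fn["arguments"]))
```

Because both calls arrive in one turn, a client can execute them concurrently instead of round-tripping through the model twice.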
How it Works
The model uses a specially designed prompt template called “v2PromptTemplate” that breaks down each turn into from, recipient, and content portions. It converts function definitions into a format similar to TypeScript definitions and injects these definitions as system prompts.
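The turn layout above can be sketched as a small rendering helper. This is a minimal illustration, assuming `<|from|>`, `<|recipient|>`, and `<|content|>` marker tokens; the authoritative template lives in the Functionary repository, and the helper names here are hypothetical.

```python
# Minimal sketch of the v2 turn layout: each turn is rendered as
# from / recipient / content portions (marker tokens are an assumption).

def render_turn(frm: str, recipient: str, content: str) -> str:
    """Render one conversation turn with its three portions."""
    return f"<|from|>{frm}\n<|recipient|>{recipient}\n<|content|>{content}\n"

def render_prompt(messages: list[dict]) -> str:
    """Concatenate rendered turns into a single prompt string."""
    return "".join(
        render_turn(m["from"], m.get("recipient", "all"), m["content"])
        for m in messages
    )

prompt = render_prompt([
    {"from": "system", "content": "You are a helpful assistant."},
    {"from": "user", "content": "What is the weather in Hanoi?"},
])
print(prompt)
```

The `recipient` portion is what lets the model address a specific function by name instead of replying to the user directly.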
Performance
Functionary-Medium-V2.4 achieves state-of-the-art performance in Function Calling Accuracy on the SGD dataset, scoring 88.11%. This metric covers accurate prediction of function calls, including both function name prediction and argument extraction.
But how does it compare to other models? Let’s take a look at the table below:
| Model Name | Function Calling Accuracy (Name & Arguments) |
|---|---|
| Functionary-Medium-V2.4 | 88.11% |
| OpenAI-gpt-3.5-turbo-1106 | 71.64% |
| OpenAI-gpt-4-1106-preview | 76.29% |
As you can see, Functionary-Medium-V2.4 outperforms both OpenAI-gpt-3.5-turbo-1106 and OpenAI-gpt-4-1106-preview in Function Calling Accuracy.
Limitations
While Functionary-Medium-V2.4 is a powerful tool, it’s not perfect. Let’s talk about some of its limitations.
- Understanding Functions: The model can interpret and execute functions, but it may miss the context and nuances behind them.
- Function Calling Accuracy: While Functionary-Medium-V2.4 achieves state-of-the-art performance in Function Calling Accuracy, it’s still not 100% accurate.
- Limited Domain Knowledge: The model is trained on a specific dataset and might not have the same level of knowledge or expertise as a human in a particular domain.
Format
The model uses a unique architecture to interpret and execute functions/plugins. It can analyze the outputs of these functions and provide relevant responses. But what does this mean for you, the user?
- Architecture: The model uses a specially designed prompt template called “v2PromptTemplate” to break down each turn into three parts: from, recipient, and content.
- Supported Data Formats: The model supports code interpreter input and can understand function definitions in JSON Schema Objects.
- Input Requirements: To use the model, you'll need to provide conversations in its expected chat format, with any function definitions supplied as JSON Schema Objects.
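To make the format concrete, here is a hypothetical `get_weather` function described as a JSON Schema Object, plus a simplified sketch of how such a definition could be rendered in the TypeScript-like style the prompt uses. Both the function and the converter are illustrative assumptions; the real converter in the Functionary tooling handles nesting, enums, and more.

```python
# A hypothetical function definition as a JSON Schema Object
# (OpenAI-style shape, for illustration only).
get_weather = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

def to_ts_signature(fn: dict) -> str:
    """Render a definition in a TypeScript-like style (simplified sketch;
    optional parameters get a '?' suffix, as in TypeScript)."""
    props = fn["parameters"]["properties"]
    required = set(fn["parameters"].get("required", []))
    params = ", ".join(
        f"{name}{'' if name in required else '?'}: {spec['type']}"
        for name, spec in props.items()
    )
    return f"// {fn['description']}\ntype {fn['name']} = (_: {{{params}}}) => any;"

print(to_ts_signature(get_weather))
```

Rendering definitions this way is what lets the model read available tools as compact, code-like system-prompt text rather than raw JSON.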