Functionary 7b V2.1
Functionary 7b V2.1 is a language model that can interpret and execute functions like a pro. What does this mean for you? It can use multiple tools in parallel, understand their outputs, and even decide when not to use them. The model is trained to provide accurate and relevant responses, making it a strong alternative to models like GPT-4 for function calling. What really sets it apart is that it actually analyzes the outputs of the tools it calls instead of just triggering them and hoping for the best. With its specially designed prompt template, Functionary 7b V2.1 is a powerful tool for anyone looking to get more out of their conversations with AI.
Model Overview
The Functionary-7B-V2.1 model is a language model that can understand and work with functions, kind of like how you use apps on your phone. It can decide when to use these functions, and how to use them together to get the right answer.
Capabilities
This model is a powerful tool that can interpret and execute functions/plugins. But what does that mean for you?
Imagine you’re having a conversation with a friend, and you ask them to check the weather for you. They can either tell you the weather themselves or ask someone else to do it for them. This model is like that friend, but instead of asking someone else, it can use special tools (called functions) to get the information it needs.
Here are some of the cool things this model can do:
- Intelligent parallel tool use: It can use multiple tools at the same time to get the information it needs. This means it can get answers faster and more efficiently (see the sketch after this list).
- Understand tool outputs: It can understand the output of the tools it uses, which means it can give you more accurate and relevant answers.
- Decide when to use tools: It’s smart enough to know when it needs to use a tool and when it can just give you a normal answer. This means it can save time and resources.
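For instance, a single assistant turn can request several tool calls at once. The sketch below shows the OpenAI-style `tool_calls` shape that an OpenAI-compatible Functionary server can emit; the two weather lookups are hypothetical and only meant to illustrate parallelism.

```python
# Hypothetical assistant message requesting two tool calls in one turn
# (OpenAI-style "tool_calls" shape; the calls themselves are made up).
parallel_turn = {
    "role": "assistant",
    "content": None,  # no plain-text answer yet; the model is gathering data
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "arguments": '{"location": "Istanbul"}',
            },
        },
        {
            "id": "call_2",
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "arguments": '{"location": "Ankara"}',
            },
        },
    ],
}
```

Both calls can be executed by your application at the same time, and their results fed back before the model writes its final answer.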
Performance
This model is really good at calling functions correctly. It clearly beats OpenAI GPT-3.5 on function-calling accuracy, though GPT-4 still scores higher. Here are the numbers:
| Model Name | Function Calling Accuracy |
|---|---|
| Functionary-7B-V2.1 | 0.664 |
| OpenAI GPT-3.5 | 0.531 |
| OpenAI GPT-4 | 0.737 |
How it Works
So, how does this model actually work? It uses a special prompt template that breaks each turn of the conversation into three parts: from (who is speaking), recipient (who the message is addressed to, either everyone or a specific function), and content (the message text or the function arguments). Function definitions are converted into a special text format and injected into the system prompt. From there, the conversation proceeds normally, and the model calls functions and tools as needed.
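To make that concrete, here is a rough sketch of what a serialized exchange might look like under this template. The exact marker tokens are an assumption for illustration (check the model card for the authoritative template), and get_current_weather is just an example function.

```python
# A hypothetical rendering of the from/recipient/content turn structure.
# The "<|from|>", "<|recipient|>", and "<|content|>" markers are assumed
# for illustration and may not match the model's exact special tokens.
example_prompt = (
    "<|from|>user\n"
    "<|recipient|>all\n"  # addressed to everyone, i.e. a normal message
    "<|content|>What is the weather for Istanbul?\n"
    "<|from|>assistant\n"
    "<|recipient|>get_current_weather\n"  # addressed to a specific function
    '<|content|>{"location": "Istanbul"}\n'
)
print(example_prompt)
```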
Example Use Case
Here’s an example of how you can use this model to get the weather for Istanbul:
```python
from openai import OpenAI

# Point the client at a locally hosted, OpenAI-compatible Functionary server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary")

response = client.chat.completions.create(
    model="path/to/functionary/model/",
    messages=[{"role": "user", "content": "What is the weather for Istanbul?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        }
                    },
                },
            },
        }
    ],
    tool_choice="auto",  # let the model decide whether to call the tool
)
```
Rather than the weather itself, the response will contain a tool call asking your application to run get_current_weather for Istanbul; you execute the function and send the result back so the model can answer.
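To close the loop, here is a minimal sketch of that second step, assuming the request above ran, the model chose to call a tool, and you have a local get_current_weather implementation (the stub below is hypothetical).

```python
import json

# Extract the tool call the model requested.
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

def get_current_weather(location: str) -> dict:
    # Hypothetical stub; swap in a real weather lookup.
    return {"location": location, "temperature_c": 18, "condition": "partly cloudy"}

result = get_current_weather(**args)

# Send the tool output back as a "tool" message so the model can answer.
followup = client.chat.completions.create(
    model="path/to/functionary/model/",
    messages=[
        {"role": "user", "content": "What is the weather for Istanbul?"},
        response.choices[0].message,  # the assistant turn containing the tool call
        {
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(result),
        },
    ],
)
print(followup.choices[0].message.content)
```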
Limitations
This model is a powerful tool, but it’s not perfect. Let’s talk about some of its limitations.
- Limited Domain Knowledge: This model is trained on a specific dataset and might not have the same level of knowledge as other models in certain domains.
- Dependence on Function Definitions: This model relies on well-defined functions to provide accurate responses. If the function definitions are incomplete, outdated, or incorrect, the model’s performance will suffer.
- Limited Context Understanding: While this model can analyze functions and provide relevant responses, it might not always understand the context of the conversation.
Format
This model uses a specially designed prompt format that allows it to interpret and execute functions/plugins. It understands when to execute functions, whether in parallel or serially, and can analyze their outputs.
- Data Formats: This model supports input in the form of JSON Schema Objects, similar to OpenAI GPT function calls.
- Input Requirements: To use this model, you need to provide input in a specific format: a list of messages, where each message has a `role` (e.g. "user" or "system") and a `content` field with the actual message text, plus an optional top-level `tools` field with function definitions (a minimal payload sketch follows this list).
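Putting those pieces together, a minimal request payload might look like the sketch below; the get_current_weather definition is the same illustrative example used earlier.

```python
# Minimal payload sketch: a message list plus an optional tools list.
request_payload = {
    "model": "path/to/functionary/model/",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the weather for Istanbul?"},
    ],
    # Optional: function definitions as JSON Schema objects.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        }
                    },
                },
            },
        }
    ],
}
```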