Functionary Medium V3.1
Functionary Medium V3.1 is an AI model that interprets and executes functions and plugins. It can analyze tool outputs, decide when to use tools, and ground its responses in their results. The model is designed to be efficient and handles tasks like intelligent parallel tool use, function-output analysis, and decision-making, making it useful for everything from simple conversations to complex, tool-assisted problem-solving. Functions are defined with JSON schemas and invoked based on user input, which allows for flexible and dynamic interactions. What makes it unique is that it can decide when to use tools and when to give a normal chat response, combining human-like conversation with technical capability. It is also one of the best open-source alternatives to GPT-4.
Model Overview
The Functionary model is a language model that can interpret and execute functions/plugins. It acts as an assistant that understands a set of user-provided tools and decides how to use them to complete tasks.
Capabilities
Intelligent Tool Use
The model uses tools and functions intelligently: it decides when a tool is needed and how to combine multiple tools to get the best results.
Analyzing Outputs
The model can analyze the outputs of tools and functions and use that information to give better, grounded responses.
Deciding When to Use Tools
The model can decide when it’s better to use a tool or function, and when it’s better to just give a normal chat response.
Code Interpreter
The model has a built-in code interpreter, which allows it to understand and execute code.
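For example, the interpreter can be enabled by adding it to the tools list alongside ordinary functions. A minimal sketch: the `{"type": "code_interpreter"}` entry follows the convention used by the Functionary project, but verify the exact format against the model card for this version.

```python
# Minimal sketch: declaring a code-interpreter tool next to a regular
# function. The {"type": "code_interpreter"} entry follows the Functionary
# project's convention; check the model card for your version.
tools = [
    {"type": "code_interpreter"},  # lets the model write and run code
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",  # ordinary JSON-schema function
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"}
                },
                "required": ["location"],
            },
        },
    },
]
```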
Key Features
- Intelligent parallel tool use
- Analyzing tool outputs and providing relevant responses
- Deciding when to use tools or provide normal chat responses
- Truly one of the best open-source alternatives to GPT-4
How to Get Started
To use the Functionary model, you’ll need to:
- Import the necessary libraries: `AutoModelForCausalLM` and `AutoTokenizer` from `transformers`
- Load the Functionary model: `model = AutoModelForCausalLM.from_pretrained("meetkai/functionary-medium-v3.1")`
- Define your tools: `tools = [...]` (see the JSON example under Code Examples below)
- Create a prompt: `final_prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False)`
- Tokenize it: `inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda")`
- Run the model: `pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer)`
Example Use Cases
Getting the Current Weather
Ask: “What is the weather in Istanbul and Singapore respectively?”

You can use the Functionary model to get the current weather in different cities by defining a tool like `get_current_weather` and passing in the city names as parameters. Remember to format your function calls correctly, using the `<function=...>` format, and only call one function at a time.
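For this query, the decoded model output contains calls in the tag format described under Input Format below. An illustrative first call (since only one function is called at a time, the model would request Singapore in a follow-up turn once the Istanbul result is returned):

```
<function=get_current_weather>{"location": "Istanbul"}</function>
```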
Performance
The Functionary model is built to execute functions and return accurate, grounded responses. Here is how it performs in terms of speed, accuracy, and efficiency.
Speed
Suppose you need the current weather in Istanbul and Singapore. The model can process multiple function calls in parallel, so requests like this resolve quickly, and single function calls are executed just as efficiently.
Accuracy
Accuracy is crucial when executing functions. The Functionary model has a high accuracy rate, particularly when analyzing function outputs and producing relevant responses, and it is designed to handle complex function outputs as well.
Efficiency
Efficiency means using the right functions at the right time. The Functionary model triggers functions only as needed, and when multiple function calls are required it analyzes them and determines the best course of action, as sketched below.
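In practice, your application code dispatches whatever calls the model emits and feeds the results back. A minimal sketch, assuming the model's output has already been parsed into a list of call dicts (the `execute_tool` helper, `TOOL_REGISTRY`, and the parsed structure are hypothetical, for illustration):

```python
import json

# Hypothetical registry mapping tool names to Python callables.
TOOL_REGISTRY = {
    "get_current_weather": lambda location: f"22C and sunny in {location}",
}

def execute_tool(name: str, arguments: dict) -> str:
    """Dispatch a single parsed tool call to its implementation."""
    if name not in TOOL_REGISTRY:
        return f"error: unknown tool {name!r}"
    return TOOL_REGISTRY[name](**arguments)

# Example: calls parsed from the model's <function=...> output (illustrative).
parsed_calls = [
    {"name": "get_current_weather", "arguments": {"location": "Istanbul"}},
    {"name": "get_current_weather", "arguments": {"location": "Singapore"}},
]

# Each result would be appended to the conversation as a tool message
# before asking the model to continue.
results = [execute_tool(c["name"], c["arguments"]) for c in parsed_calls]
print(json.dumps(results, indent=2))
```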
Limitations
The Functionary model is a powerful tool, but it’s not perfect. Here are some of its limitations:
Understanding Function Outputs
- The Functionary model can analyze function outputs, but it may not always grasp the context or nuances of the output.
- Ambiguous outputs that are open to interpretation may be misread.
Deciding When to Use Functions
- The Functionary model can decide when to use functions, but it may not always make the right decision.
- It may invoke a function that is not actually relevant to the conversation or task at hand.
Providing Relevant Responses
- The Functionary model can provide relevant responses grounded in function outputs, but it may not always succeed.
- Complex outputs that require specialized knowledge to interpret may exceed what it can accurately summarize.
Limitations of Function Definitions
- The Functionary model uses JSON Schema Objects to define functions, which may not capture all the nuances of a function’s behavior.
- Complex or dynamic behavior can’t always be expressed in a static JSON definition.
Technical Limitations
- The Functionary model requires specific formatting and syntax to work correctly, which can be a challenge for users who are not familiar with it.
- A forgotten required parameter or malformed syntax can produce invalid calls; validating arguments before executing them helps, as shown in the sketch below.
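One practical mitigation is to validate the model's arguments against the function's JSON schema before executing anything. A minimal sketch using the `jsonschema` package (the schema mirrors the `get_current_weather` example under Code Examples; the helper function is illustrative):

```python
import json
from jsonschema import validate, ValidationError  # pip install jsonschema

# Schema for get_current_weather, as declared in the tools list.
WEATHER_SCHEMA = {
    "type": "object",
    "properties": {
        "location": {"type": "string"},
    },
    "required": ["location"],
}

def safe_parse_arguments(raw_json: str) -> dict | None:
    """Parse and validate model-produced arguments; return None if invalid."""
    try:
        args = json.loads(raw_json)
        validate(instance=args, schema=WEATHER_SCHEMA)
        return args
    except (json.JSONDecodeError, ValidationError) as err:
        print(f"Rejected tool call: {err}")
        return None

print(safe_parse_arguments('{"location": "Istanbul"}'))  # valid -> dict
print(safe_parse_arguments('{}'))  # missing required parameter -> None
```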
By understanding these limitations, you can use the Functionary model more effectively and get the most out of its capabilities.
Format
The Functionary model uses a transformer architecture. It can interpret and execute functions/plugins, and it determines when to execute these functions, whether in parallel or serially.
Data Formats
This model supports JSON Schema Objects for function definitions, similar to OpenAI GPT function calls. It also accepts input in the form of text sequences, which can include function calls in a specific format.
Input Format
When calling functions, the input must be in the following format:

`<function={function_name}>{parameters}</function>`

Where:

- `start_tag` is `<function=`
- `function_name` is the name of the function to be called
- `parameters` is a JSON dict with the function argument name as key and function argument value as value
- `end_tag` is `</function>`

For example:

`<function=get_current_weather>{"location": "Istanbul"}</function>`
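Because the format is tag-delimited, such calls are easy to extract from decoded output. A minimal parsing sketch (not part of the model's API; the regex and helper are illustrative):

```python
import json
import re

# Matches <function=NAME>{...json...}</function> in decoded model output.
CALL_PATTERN = re.compile(r"<function=(\w+)>(.*?)</function>", re.DOTALL)

def parse_function_calls(text: str) -> list[dict]:
    """Extract name/arguments pairs from a decoded generation."""
    calls = []
    for name, raw_args in CALL_PATTERN.findall(text):
        calls.append({"name": name, "arguments": json.loads(raw_args)})
    return calls

output = '<function=get_current_weather>{"location": "Istanbul"}</function>'
print(parse_function_calls(output))
# [{'name': 'get_current_weather', 'arguments': {'location': 'Istanbul'}}]
```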
Output Format
The model’s output is a JSON object containing the following fields:

- `role`: the role of the message (e.g. “user” or “assistant”)
- `content`: the content of the message
- `tool_calls`: a list of function calls made by the model
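For the Istanbul example, a parsed assistant message might look like the following. This is illustrative: the exact shape depends on the parsing layer you use (e.g. Functionary's OpenAI-compatible server), and `arguments` here follows the OpenAI convention of a JSON-encoded string.

```json
{
  "role": "assistant",
  "content": null,
  "tool_calls": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "arguments": "{\"location\": \"Istanbul\"}"
      }
    }
  ]
}
```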
Code Examples
To use this model, you can use the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model. trust_remote_code=True is required because
# the repository ships custom code (including generate_tool_use).
tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-medium-v3.1")
model = AutoModelForCausalLM.from_pretrained(
    "meetkai/functionary-medium-v3.1",
    device_map="auto",
    trust_remote_code=True,
)

# Declare the tools available to the model as JSON Schema Objects.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    }
                },
                "required": ["location"],
            },
        },
    }
]

messages = [
    {"role": "user", "content": "What is the weather in Istanbul?"}
]

# Render the chat template (messages + tool definitions) into a prompt string.
final_prompt = tokenizer.apply_chat_template(
    messages, tools, add_generation_prompt=True, tokenize=False
)
inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda")

# generate_tool_use is provided by the model's custom code.
pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer)
print(tokenizer.decode(pred.cpu()[0]))
```
This code defines a `get_current_weather` function and uses the model to generate a response to the user’s question “What is the weather in Istanbul?”.
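To complete the loop, you would execute the requested function yourself and send the result back for a final natural-language answer. A minimal sketch of the follow-up turn, reusing `tokenizer`, `model`, and `tools` from above; the assistant/tool message shapes follow the OpenAI-style convention and are an assumption here, so check the chat template for the exact format your version expects:

```python
# Hypothetical result from running get_current_weather ourselves.
weather_result = '{"temperature": "22C", "condition": "sunny"}'

# Append the model's tool call and the tool's result to the conversation.
# Message shapes are assumed (OpenAI-style); verify against the chat template.
messages += [
    {
        "role": "assistant",
        "tool_calls": [
            {
                "type": "function",
                "function": {
                    "name": "get_current_weather",
                    "arguments": '{"location": "Istanbul"}',
                },
            }
        ],
    },
    {"role": "tool", "name": "get_current_weather", "content": weather_result},
]

# Re-render the prompt and generate the grounded final answer.
final_prompt = tokenizer.apply_chat_template(
    messages, tools, add_generation_prompt=True, tokenize=False
)
inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda")
pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer)
print(tokenizer.decode(pred.cpu()[0]))
```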