Functionary 7b V2.1

Executable function model

Functionary 7b V2.1 is a language model that can interpret and execute functions. What does this mean for you? It can use multiple tools in parallel, understand their outputs, and even decide when not to use them at all. The model is trained to provide accurate and relevant responses, making it a capable alternative to models like GPT-4 for function calling. With its advanced architecture and specially designed prompt template, Functionary 7b V2.1 is a powerful tool for anyone looking to get more out of their conversations with AI.

Meetkai · Updated 7 months ago

Model Overview

The Functionary-7B-V2.1 model is a language model that can understand and work with functions, kind of like how you use apps on your phone. It can decide when to use these functions, and how to use them together to get the right answer.

Capabilities

This model is a powerful tool that can interpret and execute functions/plugins. But what does that mean for you?

Imagine you’re having a conversation with a friend, and you ask them to check the weather for you. They can either tell you the weather themselves or ask someone else to do it for them. This model is like that friend, but instead of asking someone else, it can use special tools (called functions) to get the information it needs.

Here are some of the cool things this model can do:

  • Intelligent parallel tool use: It can use multiple tools at the same time to get the information it needs. This means it can get answers faster and more efficiently.
  • Understand tool outputs: It can understand the output of the tools it uses, which means it can give you more accurate and relevant answers.
  • Decide when to use tools: It’s smart enough to know when it needs to use a tool and when it can just give you a normal answer. This means it can save time and resources.
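The parallel tool use described above can be sketched as a local dispatch loop: the model emits a batch of tool calls in a single turn, and the application executes each one and returns the results. The tool names and registry below are illustrative, not part of the model's API.

```python
import json

# Hypothetical local tool registry; names and behaviors are illustrative.
TOOLS = {
    "get_current_weather": lambda location: f"72F and sunny in {location}",
    "get_definition": lambda term: f"{term}: a definition would go here",
}

def run_tool_calls(tool_calls):
    """Execute a batch of tool calls the model emitted in one assistant turn."""
    results = []
    for call in tool_calls:
        fn = TOOLS[call["name"]]
        args = json.loads(call["arguments"])  # arguments arrive as a JSON string
        results.append({"name": call["name"], "content": fn(**args)})
    return results

# Two calls issued in parallel within a single turn:
batch = [
    {"name": "get_current_weather", "arguments": '{"location": "New York City"}'},
    {"name": "get_definition", "arguments": '{"term": "artificial intelligence"}'},
]
results = run_tool_calls(batch)
```

Each result is then sent back to the model as a tool message so it can compose its final answer.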

Performance

This model is really good at calling functions correctly, outperforming some other popular models on function-calling accuracy. How does it compare? Let's take a look:

| Model Name | Function Calling Accuracy |
|---|---|
| Functionary-7B-V2.1 | 0.664 |
| OpenAI GPT-3.5 | 0.531 |
| OpenAI GPT-4 | 0.737 |

How it Works

So, how does this model actually work? Well, it uses a special prompt template that breaks down each turn into three parts: from, recipient, and content. It then converts function definitions into a special text format and injects them as system prompts. Finally, it starts the conversation and uses the functions and tools as needed.
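The turn structure described above can be sketched in a few lines. The exact special tokens used to mark each field are an assumption here; what matters is that every turn carries a from (who is speaking), a recipient (who it is addressed to, e.g. a function name), and the content itself.

```python
def render_turn(from_, recipient, content):
    # Illustrative rendering of one turn; the <|...|> marker tokens are an
    # assumption based on the from/recipient/content structure described above.
    return f"<|from|>{from_}\n<|recipient|>{recipient}\n<|content|>{content}\n"

# Function definitions are converted to text and injected as a system prompt,
# then the user's message starts the conversation.
prompt = (
    render_turn("system", "all", "// Supported functions:\n// get_current_weather(location: string)")
    + render_turn("user", "all", "What is the weather for Istanbul?")
)
```

When the model wants to call a function, it sets the recipient of its own turn to that function's name instead of replying to the user directly.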

Examples

  • User: What is the current weather in New York City?
    Assistant: I'll check the current weather for you. // get_current_weather({ location: "New York City" })
    Assistant: The current weather in New York City is mostly cloudy with a high of 75F and a low of 50F.
  • User: Can you translate 'Hello, how are you?' to Spanish?
    Assistant: I'll translate that for you. // translate({ text: "Hello, how are you?", language: "Spanish" })
    Assistant: The translation is: 'Hola, ¿cómo estás?'
  • User: What is the definition of artificial intelligence?
    Assistant: I'll look up the definition for you. // get_definition({ term: "artificial intelligence" })
    Assistant: Artificial intelligence refers to the development of computer systems that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, and decision-making.

Example Use Case

Here’s an example of how you can use this model to get the weather for Istanbul:

from openai import OpenAI

# Point the client at a locally served Functionary model.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary")

response = client.chat.completions.create(
    model="path/to/functionary/model/",
    messages=[{"role": "user", "content": "What is the weather for Istanbul?"}],
    tools=[{"type": "function", "function": {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {"type": "object", "properties": {
            "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
        }},
    }}],
    tool_choice="auto",  # let the model decide whether to call the tool
)
print(response.choices[0].message)

This first response will contain a tool call to get_current_weather rather than the weather itself; once you execute the function and send its result back, the model composes an answer with the current weather for Istanbul.
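The second round can be sketched as follows: after executing the tool locally, you append both the assistant's tool call and a tool message carrying the result, then call the API again so the model can write the final answer. The `tool_call_id` and message shapes mirror the OpenAI chat format; the weather value here is made up for illustration.

```python
import json

def append_tool_result(messages, tool_call_id, name, arguments, result):
    """Record a tool call and its result in the conversation history."""
    messages.append({
        "role": "assistant",
        "tool_calls": [{
            "id": tool_call_id,
            "type": "function",
            "function": {"name": name, "arguments": json.dumps(arguments)},
        }],
    })
    messages.append({
        "role": "tool", "tool_call_id": tool_call_id,
        "name": name, "content": result,
    })
    return messages

messages = [{"role": "user", "content": "What is the weather for Istanbul?"}]
messages = append_tool_result(
    messages, "call_0", "get_current_weather",
    {"location": "Istanbul"}, "18C and partly cloudy",  # illustrative result
)
# Passing `messages` back to client.chat.completions.create(...) would now
# let the model answer the user using the tool's output.
```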

Limitations

This model is a powerful tool, but it’s not perfect. Let’s talk about some of its limitations.

  • Limited Domain Knowledge: This model is trained on a specific dataset and might not have the same level of knowledge as other models in certain domains.
  • Dependence on Function Definitions: This model relies on well-defined functions to provide accurate responses. If the function definitions are incomplete, outdated, or incorrect, the model’s performance will suffer.
  • Limited Context Understanding: While this model can analyze functions and provide relevant responses, it might not always understand the context of the conversation.

Format

This model uses a special architecture that allows it to interpret and execute functions/plugins. This model can understand when to execute functions, whether in parallel or serially, and can analyze their outputs.

  • Data Formats: This model supports input in the form of JSON Schema Objects, similar to OpenAI GPT function calls.
  • Input Requirements: To use this model, you need to provide input in a specific format. This includes a list of messages, where each message has a role (e.g. “user” or “system”), a content field with the actual message text, and optionally a tools field with function definitions.
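The input requirements above amount to a request body like the following, where the function's parameters follow JSON Schema, as in OpenAI-style function calling. The `required` field is a common JSON Schema addition, shown here as an illustration.

```python
import json

# Minimal request body in the format described above.
request = {
    "messages": [
        {"role": "user", "content": "What is the weather for Istanbul?"},
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {  # a JSON Schema object
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                },
                "required": ["location"],
            },
        },
    }],
}
body = json.dumps(request)
```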