Commit fa5fa07

FilipFan and HuanzhiMao authored
feat(model): support FunctionGemma-270m-it for local inference (#1279)
FunctionGemma is a specialized variant of Gemma 3 270M optimized for function calling. However, it introduces a unique formatting convention for tool definitions and calls that is incompatible with the existing `GemmaHandler`. This commit adds local support for FunctionGemma-270m-it by:

- Adding a `FunctionGemmaHandler` for local inference.
- Implementing prompt formatting that matches the official documentation.

References:
- https://ai.google.dev/gemma/docs/functiongemma/formatting-and-best-practices#base-prompt-structure
- https://ai.google.dev/gemma/docs/functiongemma/model_card#benchmark-results

Co-authored-by: Huanzhi Mao <huanzhimao@gmail.com>
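The incompatibility described above is about how tool calls are serialized. The handler body is not shown in this diff, but BFCL's local-inference handlers generally implement a decode step that parses the model's raw text into a list of `{function_name: {param: value}}` dicts. As a hedged illustration only (the function name and the assumption of Python-style call syntax are mine, not code from this commit), such a decode step could be sketched as:

```python
import ast


def decode_function_calls(raw_output: str):
    """Parse Python-style call syntax, e.g. 'get_weather(city="Paris")',
    into a [{name: {arg: value}}] structure. Illustrative sketch only;
    not the actual FunctionGemmaHandler implementation."""
    calls = []
    # A raw output may hold several comma-separated calls: "[f(a=1), g(b=2)]".
    tree = ast.parse(raw_output.strip().strip("[]"), mode="eval")
    nodes = tree.body.elts if isinstance(tree.body, ast.Tuple) else [tree.body]
    for node in nodes:
        if not isinstance(node, ast.Call):
            continue
        name = ast.unparse(node.func)  # handles dotted names like math.sqrt
        # Keyword arguments must be literals for literal_eval to accept them.
        args = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
        calls.append({name: args})
    return calls
```

Parsing with `ast` rather than regex keeps nested literals (lists, dicts) intact and rejects arbitrary code, since `literal_eval` only accepts literal expressions.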
1 parent 11fc254 commit fa5fa07

File tree: 4 files changed (+541, -0 lines)


berkeley-function-call-leaderboard/SUPPORTED_MODELS.md
Lines changed: 1 addition & 0 deletions

@@ -50,6 +50,7 @@ For model names containing `{...}`, multiple versions are available. For example
 | Gemini-3-Pro-Preview | Function Calling | Google | gemini-3-pro-preview-FC |
 | Gemini-3-Pro-Preview | Prompt | Google | gemini-3-pro-preview |
 | Gemma-3-{1b,4b,12b,27b}-it | Prompt | Self-hosted 💻 | google/gemma-3-{1b,4b,12b,27b}-it |
+| FunctionGemma-270m-it | Function Calling | Self-hosted 💻 | google/functiongemma-270m-it-FC |
 | GLM-4-9b-Chat | Function Calling | Self-hosted 💻 | THUDM/glm-4-9b-chat |
 | GLM-4.5 | Function Calling | Zhipu AI | glm-4.5-FC |
 | GLM-4.5-Air | Function Calling | Zhipu AI | glm-4.5-air-FC |

berkeley-function-call-leaderboard/bfcl_eval/constants/model_config.py
Lines changed: 13 additions & 0 deletions

@@ -39,6 +39,7 @@
 )
 from bfcl_eval.model_handler.local_inference.falcon_fc import Falcon3FCHandler
 from bfcl_eval.model_handler.local_inference.gemma import GemmaHandler
+from bfcl_eval.model_handler.local_inference.functiongemma import FunctionGemmaHandler
 from bfcl_eval.model_handler.local_inference.glm import GLMHandler
 from bfcl_eval.model_handler.local_inference.granite import (
     GraniteFunctionCallingHandler,

@@ -1247,6 +1248,18 @@ class ModelConfig:
         is_fc_model=False,
         underscore_to_dot=False,
     ),
+    "google/functiongemma-270m-it-FC": ModelConfig(
+        model_name="google/functiongemma-270m-it",
+        display_name="FunctionGemma-270m-it (FC)",
+        url="https://ai.google.dev/gemma/docs/functiongemma",
+        org="Google",
+        license="gemma-terms-of-use",
+        model_handler=FunctionGemmaHandler,
+        input_price=None,
+        output_price=None,
+        is_fc_model=True,
+        underscore_to_dot=False,
+    ),
     "meta-llama/Llama-3.1-8B-Instruct-FC": ModelConfig(
         model_name="meta-llama/Llama-3.1-8B-Instruct",
         display_name="Llama-3.1-8B-Instruct (FC)",
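The registration above follows a plain registry pattern: a dict keyed by model ID whose entries carry metadata plus the handler class to instantiate. A minimal self-contained sketch of that pattern (field names mirror the diff, but the stub handler and the `get_handler` dispatch function are assumptions for illustration, not BFCL's actual API):

```python
from dataclasses import dataclass


@dataclass
class ModelConfig:
    """Trimmed-down stand-in for BFCL's config entry (illustrative only)."""
    model_name: str
    display_name: str
    model_handler: type
    is_fc_model: bool = False


class FunctionGemmaHandler:
    """Stub standing in for the real handler class."""
    def __init__(self, model_name: str):
        self.model_name = model_name


# Registry keyed by the model ID used on the leaderboard.
MODEL_CONFIG = {
    "google/functiongemma-270m-it-FC": ModelConfig(
        model_name="google/functiongemma-270m-it",
        display_name="FunctionGemma-270m-it (FC)",
        model_handler=FunctionGemmaHandler,
        is_fc_model=True,
    ),
}


def get_handler(model_id: str):
    # Look up the config entry and instantiate its handler class.
    config = MODEL_CONFIG[model_id]
    return config.model_handler(config.model_name)
```

Keeping the handler class (rather than an instance) in the config lets the framework construct handlers lazily, only for the models actually being evaluated.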

berkeley-function-call-leaderboard/bfcl_eval/constants/supported_models.py
Lines changed: 1 addition & 0 deletions

@@ -107,6 +107,7 @@
     "google/gemma-3-4b-it",
     "google/gemma-3-12b-it",
     "google/gemma-3-27b-it",
+    "google/functiongemma-270m-it-FC",
     "meta-llama/Llama-3.1-8B-Instruct-FC",
     "meta-llama/Llama-3.1-8B-Instruct",
     "meta-llama/Llama-3.1-70B-Instruct-FC",

0 commit comments
