New RubyLLM::Schema/JSON schema powered params DSL for Tools
Enables full JSON schema support for parameters in Tools.
You can use the excellent RubyLLM::Schema DSL, or a custom JSON schema.
It's more elegant and more powerful than the old param helper, which is
still supported.
Fixes #480. Fixes #76.
1. **Inheritance:** Must inherit from `RubyLLM::Tool`.
2. **`description`:** A class method defining what the tool does. Crucial for the AI model to understand its purpose. Keep it clear and concise.
3. **`params`:** (v1.9+) The DSL for describing your input schema. Declare nested objects, arrays, enums, and optional fields in one place. If you only need flat keyword arguments, the older `param` helper (v1.0+) remains available. See [Using the `param` Helper for Simple Tools](#using-the-param-helper-for-simple-tools).
4. **`execute` Method:** The instance method containing your Ruby code. It receives the keyword arguments defined by your schema and returns the payload the model will see (typically a String, Hash, or `RubyLLM::Content`).
> The tool's class name is automatically converted to a snake_case name used in the API call (e.g., `WeatherLookup` becomes `weather_lookup`); this is the name the LLM uses to call the tool. You can override this by defining a `name` method in your tool class.
{: .note }
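The conversion is a standard CamelCase to snake_case transform. A minimal, illustrative sketch of the idea (RubyLLM's own implementation may differ):

```ruby
# Convert a tool class name to the snake_case name the model sees.
# Illustrative only; RubyLLM's actual conversion lives inside the gem.
def tool_name(class_name)
  class_name
    .gsub(/([A-Z]+)([A-Z][a-z])/, '\1_\2') # acronym boundaries: "HTTPCheck" -> "HTTP_Check"
    .gsub(/([a-z\d])([A-Z])/, '\1_\2')     # lower-to-upper boundaries
    .downcase
end

tool_name("WeatherLookup") # => "weather_lookup"
```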
## Declaring Parameters

RubyLLM ships with two complementary approaches:

* The **`params` DSL** for expressive, structured inputs. (v1.9+)
* The **`param` helper** for quick, flat argument lists. (v1.0+)

Start with the DSL whenever you need anything beyond a handful of simple strings: it keeps complex schemas maintainable and identical across every provider.

### params DSL
{: .d-inline-block }

v1.9.0+
{: .label .label-green }

When you need nested objects, arrays, enums, or union types, the `params do ... end` DSL produces the JSON Schema that function-calling models expect while staying Ruby-flavoured.

RubyLLM bundles the DSL through [`ruby_llm-schema`](https://github.com/danielfriis/ruby_llm-schema), so every project has the same schema builders out of the box.
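As a concrete sketch, the class below declares a nested schema. Builder names (`string`, `object`, `enum:`) follow RubyLLM::Schema and may differ in detail; the small stand-in base class exists only so the snippet runs without the gem installed. In a real app you would inherit from `RubyLLM::Tool`.

```ruby
# Stand-in base class so this sketch is self-contained; with the gem
# installed you would inherit from RubyLLM::Tool instead.
class SketchTool
  def self.description(text)
    @description = text
  end

  def self.params(&block)
    @params_block = block # stored for schema generation; not evaluated here
  end
end

class WeatherLookup < SketchTool
  description "Gets current weather for a location"

  # Hypothetical schema; builder names follow RubyLLM::Schema.
  params do
    object :location, description: "Where to look up the weather" do
      string :city
      string :country, description: "ISO 3166 code, e.g. 'US'"
    end
    string :units, enum: %w[metric imperial], description: "Unit system"
  end

  def execute(location:, units: "metric")
    "Sunny in #{location[:city]} (#{units})"
  end
end

WeatherLookup.new.execute(location: { city: "Berlin" })
# => "Sunny in Berlin (metric)"
```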
### Using the `param` Helper for Simple Tools

If your tool just needs a few scalar arguments, stick with the `param` helper. RubyLLM translates these declarations into JSON Schema under the hood.

```ruby
class Distance < RubyLLM::Tool
  description "Calculates distance between two cities"

  param :origin, desc: "Origin city name"
  param :destination, desc: "Destination city name"
  param :units, type: :string, desc: "Unit system (metric or imperial)", required: false

  def execute(origin:, destination:, units: "metric")
    # ...
  end
end
```
### Supplying JSON Schema Manually
{: .d-inline-block }

v1.9.0+
{: .label .label-green }

Prefer to own the JSON Schema yourself? Pass a schema hash (or a class/object responding to `#to_json_schema`) directly to `params`. RubyLLM normalizes symbol keys, deep-duplicates the schema, and sends it to providers unchanged, giving you full control when you need it.
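For example, a hand-rolled schema is just a hash. The schema content below is illustrative, and the `params` call is shown in a comment since running the class itself requires the gem:

```ruby
require "json"

# A hand-written JSON Schema describing the tool's input. Symbol keys are
# fine; RubyLLM normalizes them before the schema reaches the provider.
weather_schema = {
  type: :object,
  properties: {
    city:  { type: :string, description: "City name, e.g. 'Berlin'" },
    units: { type: :string, enum: %w[metric imperial] }
  },
  required: [:city]
}

# Inside a tool class you would then declare:
#   params weather_schema

puts JSON.pretty_generate(weather_schema)
```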
## Returning Rich Content from Tools
> Raising an exception in `on_tool_call` breaks the conversation flow: the LLM expects a tool response after requesting a tool call, so an unhandled error can leave the chat in an inconsistent state. Consider using better models or clearer tool descriptions to prevent loops instead of hard limits.
{: .warning }
## Advanced Tool Metadata

### Provider-Specific Parameters
{: .d-inline-block }

v1.9.0+
{: .label .label-green }
Some providers accept additional metadata alongside the JSON Schema—for example, Anthropic’s `cache_control` hints. Use `with_params` to declare these once on the tool class and RubyLLM will merge them into the payload when the provider supports the keys.
Provider metadata is passed through verbatim—turn on `RUBYLLM_DEBUG=true` if you want to inspect the final payload while experimenting.
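To make the pass-through concrete, here is an illustrative sketch of the merge semantics. The `input_schema` field name follows Anthropic's tool format, and the real serialization happens inside RubyLLM:

```ruby
# Illustrative only: keys declared via with_params are merged into the
# serialized tool definition verbatim when the provider understands them.
tool_payload = {
  name: "weather_lookup",
  description: "Gets current weather",
  input_schema: { type: "object", properties: {} } # Anthropic-style field name
}

extra = { cache_control: { type: "ephemeral" } } # e.g. Anthropic prompt caching

merged = tool_payload.merge(extra)
# merged now carries both the schema and the provider-specific metadata
```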
## Advanced: Halting Tool Continuation
After a tool executes, the LLM normally continues the conversation to explain what happened. In rare cases, you might want to skip this and return the tool result directly.