Support removing tools #238
Conversation
👍 LGTM, would be useful!
@tpaulshippy I was playing with this and had a problem where the LLM tried to call a tool that was defined in the chat's first message, which I subsequently removed before sending a later message. (Seems like this was discussed a bit in #229 (comment).) I was using […].

The current behavior of `execute_tool` (Line 138 in a35aa11):

```ruby
def execute_tool(tool_call)
  tool = tools[tool_call.name.to_sym]
  args = tool_call.arguments
  tool.call(args)
end
```

The simplest is probably to change it to […].
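The suggested replacement itself wasn't captured above; one possible nil-safe variant (purely illustrative, not necessarily what this PR or the commenter proposes) would be:

```ruby
def execute_tool(tool_call)
  tool = tools[tool_call.name.to_sym]
  # Fail with a clear message when the model calls a tool that has since
  # been removed, instead of raising NoMethodError on nil.
  raise ArgumentError, "Unknown tool: #{tool_call.name}" unless tool

  tool.call(tool_call.arguments)
end
```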
I thought the comment from Carmine was about previous tool calls in the message history. You seem to be talking about the LLM trying to call a tool that was previously available, right? Do you have the full payload of the request where you saw this? Super curious why the LLM would try to call a tool it's not given (even if it was given previously). Was there anything in the payload that would tell the LLM about the tool?
@tpaulshippy Ah I see -- I think what was breaking it in my case was removing a tool before another […]. I was trying to implement a tool use limit, where a tool could only be used N times and would then remove itself from the chat's tools.

As a minimal example, here's a tool that removes itself from `chat.tools` after its first call:

```ruby
class GetNextWordTool < RubyLLM::Tool
  description "Returns the next word"

  def initialize(words, chat)
    @words = words
    @chat = chat
  end

  def execute
    result = @words.shift || ""
    @chat.tools.delete(:get_next_word) # Removes itself after first call
    result
  end
end

chat = RubyLLM.chat(provider: :ollama, model: "qwen3:8b").with_temperature(0.6)
chat.with_tools(GetNextWordTool.new(["unpredictable", "beginnings"], chat))
chat.ask("/nothink Use the get_next_word tool to get the first word. Then, call the get_next_word tool a second time to get the second word. Respond with a JSON array containing these two words. Do not guess. Use the tool twice.")
```

which results in:
So I'm happy to concede that this issue probably shouldn't block this PR! 😄 Maybe just add a note to the docs that it's unsafe to remove tools from within a tool call? 🚀
Hi @tpaulshippy, I guess this can be useful; however, it would be great to have one test where we hit the real LLMs. I'd suggest copying this test, but removing the Weather tool between the two executions:
ruby_llm/spec/ruby_llm/chat_tools_spec.rb
Lines 48 to 63 in 4ff492d
```ruby
CHAT_MODELS.each do |model_info| # rubocop:disable Style/CombinableLoops
  model = model_info[:model]
  provider = model_info[:provider]
  it "#{provider}/#{model} can use tools in multi-turn conversations" do # rubocop:disable RSpec/ExampleLength,RSpec/MultipleExpectations
    chat = RubyLLM.chat(model: model, provider: provider)
                  .with_tool(Weather)

    response = chat.ask("What's the weather in Berlin? (52.5200, 13.4050)")
    expect(response.content).to include('15')
    expect(response.content).to include('10')

    response = chat.ask("What's the weather in Paris? (48.8575, 2.3514)")
    expect(response.content).to include('15')
    expect(response.content).to include('10')
  end
end
```
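A sketch of the suggested variation, assuming this PR adds a removal method along the lines of `chat.remove_tool(:weather)` (the exact method name is an assumption, not confirmed in this thread):

```ruby
CHAT_MODELS.each do |model_info|
  model = model_info[:model]
  provider = model_info[:provider]
  it "#{provider}/#{model} can continue after a tool is removed" do
    chat = RubyLLM.chat(model: model, provider: provider)
                  .with_tool(Weather)

    response = chat.ask("What's the weather in Berlin? (52.5200, 13.4050)")
    expect(response.content).to include('15')

    # Remove the tool between the two turns, as suggested above.
    # `remove_tool` is an assumed name for the API this PR introduces.
    chat.remove_tool(:weather)

    response = chat.ask("What's the weather in Paris? (48.8575, 2.3514)")
    expect(response.content).not_to be_empty
  end
end
```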
@crmne Done! It's interesting to see what each model does.
I think the better API is `.with_tool(nil)`. It's the same as `with_schema`.
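A sketch of what that proposal might look like in use (this is the commenter's suggestion, not merged behavior; `Weather` is the example tool from the spec above):

```ruby
chat = RubyLLM.chat.with_tool(Weather) # register a tool
chat.with_tool(nil)                    # proposed: clear it again, mirroring chat.with_schema(nil)
```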
Are you suggesting this would replace […]?
Introduces config.log_stream_debug to control detailed streaming debug output, including chunks and accumulator states. Can be enabled via the RUBYLLM_STREAM_DEBUG environment variable.
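A minimal sketch of enabling this, based only on the names in the commit message:

```ruby
RubyLLM.configure do |config|
  config.log_stream_debug = true # or export RUBYLLM_STREAM_DEBUG=true in the environment
end
```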
- Add on_tool_result callback that fires after tool execution
- Fix Rails integration to chain user callbacks with persistence callbacks (fixes crmne#306)
- Update documentation for event callbacks
- Add tests for callback functionality
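A hedged sketch of the new callback; the block argument is an assumption based on the commit message:

```ruby
chat = RubyLLM.chat
chat.on_tool_result do |result|
  puts "Tool returned: #{result.inspect}" # fires after each tool execution
end
```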
- Convert all provider modules to classes inheriting from RubyLLM::Provider
- Enable per-instance configuration and connection management
- Store config in provider instance to avoid passing it around everywhere
- Improve local vs remote provider detection
- Add openai_use_system_role config option for OpenAI-compatible servers
- Document openai_use_system_role configuration option
- Clean up unnecessary comments throughout codebase

Fixes crmne#195
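For example, the new option might be set like this (a sketch: the server URL is a placeholder, and `openai_api_base` is assumed to be the existing setting for pointing at OpenAI-compatible endpoints):

```ruby
RubyLLM.configure do |config|
  config.openai_api_key = ENV["OPENAI_API_KEY"]
  config.openai_api_base = "http://localhost:1234/v1" # placeholder OpenAI-compatible server
  config.openai_use_system_role = true                # option named in the commit above
end
```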
JRuby handles keyword arguments differently in initialize methods. Changed to use the **options pattern to ensure compatibility.

Use an optional hash argument instead of keyword arguments for ErrorMiddleware to ensure compatibility with JRuby's argument handling.
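An illustrative sketch of the options-hash pattern these two commits describe (the class and option names here are hypothetical, not the actual middleware code):

```ruby
# Keyword arguments in initialize can trip up JRuby's argument handling,
# so the middleware takes a plain options hash instead.
class ErrorMiddleware
  def initialize(app, options = {})
    @app = app
    @provider = options[:provider] # read settings from the hash rather than keywords
  end

  def call(env)
    @app.call(env)
  end
end
```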
Anthropic models don't support structured output. Removed the capability from both direct and Bedrock providers. Fixes crmne#330
Tools can now return halt to prevent the automatic continuation that normally follows tool execution. This enables:
- Agent handoffs where sub-agents handle responses
- Token-saving terminal operations
- Breaking the tool→response→tool loop when needed

The halt helper returns a Tool::Halt object that stops continuation while preserving the tool result in conversation history.

Resolves crmne#126, crmne#256, crmne#326
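A sketch of a tool using the `halt` helper described above (the tool itself is invented for illustration):

```ruby
class HandoffTool < RubyLLM::Tool
  description "Hands the conversation off to a billing sub-agent"

  def execute
    # Returning halt wraps the result in a Tool::Halt, so the chat stops here
    # instead of sending the tool result back to the LLM for commentary.
    halt "Transferred to the billing agent."
  end
end
```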
- Reference existing tool execution flow to show what halt skips
- Add sub-agent and terminal operation examples with clear comments
- Explain the difference between normal flow and halt flow
- Clarify that context isn't automatically shared between agents

Makes the complex topic of conversation halting easier to understand.
- Mark as advanced feature with clear warning
- Emphasize that sub-agents work perfectly without halt
- Clarify it's just an optimization to skip LLM commentary
- Reduce emphasis to prevent overuse

Makes it clear that halt is optional and rarely needed.
Having trouble getting mistral to pass the test. And I don't have gpustack set up, so I don't have that cassette.
Superseded by da5dc02.
What this does
I realized recently just how many tokens tools can take.
Here's an example of saying "Hello" to Bedrock with 4 basic local tools + the tools from the Playwright MCP:

This call took 3024 input tokens.
Without the Playwright MCP, the call takes 842 tokens.
In a chat with an agentic interface, I want the option to add/remove tools at will to save on tokens.
It also simplifies tool selection for the LLM if there are fewer tools to choose from.
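To illustrate the intended workflow (the removal method name below is a placeholder for this sketch; the API this PR actually adds may differ, and `Weather` is the example tool from the test suite):

```ruby
chat = RubyLLM.chat.with_tool(Weather)
chat.ask("What's the weather in Berlin? (52.5200, 13.4050)")

# Drop the tool once it's no longer needed so later requests don't pay
# its token cost. `remove_tool` is an assumed name, not confirmed here.
chat.remove_tool(:weather)
chat.ask("Summarize our conversation so far.")
```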
Type of change
Scope check
Quality check
- I ran overcommit --install and all hooks pass
- No changes to auto-generated files (models.json, aliases.json)
API changes
Related issues
Resolves #229