Replies: 4 comments 2 replies
-
Makes sense to me. What do you want to do with this? Add it to the documentation or to the canonical providers? Either way, send us a PR!
-
Well, the good news is that we're building native model inference straight into Goose, which should mean no more mucking around with ports, context windows, etc. Keep an eye out for it; it should soon be in main and then released. We would very much like people like you to figure out what works, what doesn't, and what could work better if we had the right parameters.
-
This is already possible, I think. OpenWebUI exposes OpenAI-compatible endpoints. If you add an OpenAI provider to Goose with the host `http://{your OWUI url}/api` and your OWUI API key, you should be able to select the models from the list. This includes workspaces, so you can configure your workspace and use it in Goose.
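Since OpenWebUI's `/api` endpoints speak the OpenAI wire protocol, any OpenAI-style client can target them. Here is a minimal sketch that builds such a chat-completions request with only the Python standard library; the host, API key, and model name are placeholders, and the request is constructed but not sent, since that would need a live server:

```python
import json
import urllib.request

# Placeholders -- substitute your own OpenWebUI URL, API key, and model.
OWUI_HOST = "http://localhost:3000"
OWUI_KEY = "sk-your-owui-key"

def build_chat_request(model, prompt):
    """Build an OpenAI-style chat-completions request for OpenWebUI."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{OWUI_HOST}/api/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {OWUI_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("qwen3-vl:4b-instruct-q8_0", "hello")
# urllib.request.urlopen(req) would send it against a running instance.
```

This is the same shape of request that Goose's OpenAI provider sends, which is why pointing Goose at the OWUI `/api` base works.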
-
Why? The reason behind it was the lack of tool calls from Goose and the good results I was getting in OWUI.
The other reason is that being able to use OWUI as a model provider allows me to use meta models, where I can specify the context length, the temperature, and so on, as well as define how tools should be called (default or native).
You can see in the following config the `mrdev` model, which is a metamodel with `qwen3-vl:4b-instruct-q8_0` as its base model.
Results, to be honest, are now much better, hence sharing it here.
How though?

1. Create a folder called `custom_providers` under the Goose config folder.
2. Create a file in it named `custom_open_web_ui.json`.
3. Fill the file with the following (adapt the base URL to your server address):

```json
{
  "name": "custom_open_web_ui",
  "engine": "openai",
  "display_name": "Open Web UI",
  "description": "Custom Open Web UI provider",
  "api_key_env": "CUSTOM_OPEN_WEB_UI_API_KEY",
  "base_url": "https://my.openwebui.server/api/chat/completions",
  "models": [
    {
      "name": "qwen3-vl:4b-instruct-q8_0",
      "context_limit": 128000,
      "input_token_cost": null,
      "output_token_cost": null,
      "currency": null,
      "supports_cache_control": null
    },
    {
      "name": "mrdev",
      "context_limit": 128000,
      "input_token_cost": null,
      "output_token_cost": null,
      "currency": null,
      "supports_cache_control": null
    }
  ],
  "headers": null,
  "timeout_seconds": 1800,
  "supports_streaming": false,
  "requires_auth": true
}
```

Please note, you'll need to enable API keys in your OpenWebUI instance, following this guide:
https://docs.openwebui.com/getting-started/advanced-topics/monitoring/
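The steps above can be sketched as a shell session. The config path `~/.config/goose` is an assumption (it varies by platform), the server URL and key are placeholders, and the provider file here is trimmed for illustration, so prefer the full JSON above in case Goose requires the other fields:

```shell
# Assumed Goose config location -- adjust for your platform.
GOOSE_CONFIG="${GOOSE_CONFIG:-$HOME/.config/goose}"
mkdir -p "$GOOSE_CONFIG/custom_providers"

# Trimmed provider file for illustration; the full JSON above
# is what was actually tested.
cat > "$GOOSE_CONFIG/custom_providers/custom_open_web_ui.json" <<'EOF'
{
  "name": "custom_open_web_ui",
  "engine": "openai",
  "api_key_env": "CUSTOM_OPEN_WEB_UI_API_KEY",
  "base_url": "https://my.openwebui.server/api/chat/completions"
}
EOF

# The provider reads the key from the environment variable named above.
export CUSTOM_OPEN_WEB_UI_API_KEY="sk-your-owui-key"

# Sanity-check that the file is valid JSON before starting Goose.
python3 -m json.tool "$GOOSE_CONFIG/custom_providers/custom_open_web_ui.json"
```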
Hope this helps!

P.S. Wondering how hard it would be to have an official OpenWebUI model provider, so models and metamodels could be pulled from it without having to write them into the config.