I have installed open-notebook using Docker on WSL (Ubuntu 24.04). I successfully configured API keys for both Google AI (Gemini) and Alibaba Cloud BaiLian (DashScope) following the instructions. The connections seem to work fine – for example, I can use chat models like gemini-2.5-flash and qwen-plus in conversations.
However, when I try to set up the embedding model in the settings, I cannot retrieve any embedding models from either provider. The dropdown list for embedding models is empty (or doesn't show any options). I have also tried manually entering the model names (e.g., text-embedding-004 for Google and text-embedding-v2 for Alibaba), but they don't seem to be recognized – the system still indicates that no embedding model is available.
Could anyone guide me on how to properly configure embedding models for these providers? Is there a specific step I missed? Do I need to enable embedding services separately, or is there a different endpoint for embeddings?
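In case it helps with diagnosis: one way to check whether the keys and model names are valid outside open-notebook is to call each provider's embedding endpoint directly. A minimal sketch below builds the raw requests; the endpoint URLs (Google's `embedContent` REST route and DashScope's OpenAI-compatible `/embeddings` route) are my assumptions from the providers' public docs, not something confirmed by open-notebook itself:

```python
# Sketch: build raw embedding requests for both providers so the API keys
# and model names can be verified outside open-notebook.
# Endpoint URLs below are assumptions based on each provider's REST docs.
import json

def build_gemini_embed_request(api_key: str, model: str, text: str):
    """Google Generative Language API embedContent request (REST)."""
    url = (f"https://generativelanguage.googleapis.com/v1beta/"
           f"models/{model}:embedContent?key={api_key}")
    payload = {"content": {"parts": [{"text": text}]}}
    return url, payload

def build_dashscope_embed_request(api_key: str, model: str, text: str):
    """DashScope OpenAI-compatible embeddings request."""
    url = "https://dashscope.aliyuncs.com/compatible-mode/v1/embeddings"
    headers = {"Authorization": f"Bearer {api_key}",
               "Content-Type": "application/json"}
    payload = {"model": model, "input": text}
    return url, headers, payload

if __name__ == "__main__":
    # Send each with requests.post(...) (or curl) and check for an HTTP 200
    # and an embedding vector in the response body.
    url, payload = build_gemini_embed_request("KEY", "text-embedding-004", "hello")
    print(url)
    print(json.dumps(payload))
```

If a direct call like this succeeds but the dropdown in open-notebook stays empty, that would point at the app's model-discovery step rather than the provider configuration.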
Any help would be greatly appreciated. Thank you!