Is your feature request related to a problem? Please describe.
Currently, when adding custom models, the LLM provider and its corresponding API key cannot be reused; each model has to be configured with its own provider details and key. This makes it inconvenient to add multiple models under a single provider.
Describe the solution you'd like
It would be helpful to introduce provider-level management for "OpenAI API compatible providers," so that a provider and its API key are configured once and multiple custom models can then be managed under that provider. This would lower the barrier to configuring several models and make the setup more user-friendly; a rough sketch of one possible configuration shape follows below.
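For illustration only, here is a minimal sketch of what a provider-level configuration could look like. The `Provider` and `CustomModel` structures, field names, and values are all hypothetical and not taken from the current codebase; the point is simply that the base URL and API key live on the provider, and models only reference it.

```python
from dataclasses import dataclass, field


@dataclass
class CustomModel:
    # Model identifier passed to the provider's chat/completions endpoint (hypothetical field)
    model_id: str
    display_name: str


@dataclass
class Provider:
    # One OpenAI API compatible provider, configured a single time (hypothetical structure)
    name: str
    base_url: str
    api_key: str
    # Any number of custom models reuse the provider's base_url and api_key
    models: list[CustomModel] = field(default_factory=list)


# Example: one provider entry, several models sharing the same endpoint and key
provider = Provider(
    name="my-openai-compatible-endpoint",
    base_url="https://api.example.com/v1",
    api_key="sk-...",
    models=[
        CustomModel(model_id="model-a", display_name="Model A"),
        CustomModel(model_id="model-b", display_name="Model B"),
    ],
)
```

With a shape like this, adding another model becomes a single list entry rather than re-entering the full provider and key each time.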
Additional context
This change could streamline model configuration and improve scalability for both teams and individual users. I'm open to ideas and discussion on possible solutions or implementation strategies.