added support for custom openai api #43

Open

sebaxzero wants to merge 2 commits into rashadphz:main from sebaxzero:custom-openai-api

Conversation

@sebaxzero

Support for a custom OpenAI-compatible API, using the default llama_index.llms.openai (no requirements change)

backend:

  • created a "CUSTOM" model mapping with "gpt-4" as its model name so llama_index uses an 8k context.
  • added an elif branch for ChatModel.CUSTOM to chat, related queries, and the validator.
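
The mapping and the extra elif branch might look roughly like this (a minimal sketch; the enum members, the MODEL_NAMES dict, and the build_llm_kwargs helper are illustrative names, not the PR's actual code):

```python
import os
from enum import Enum

class ChatModel(str, Enum):
    GPT_4O = "gpt-4o"
    CUSTOM = "custom"

# Map each chat model to the model name handed to llama_index.
# CUSTOM reuses "gpt-4" so llama_index assumes an 8k context window.
MODEL_NAMES = {
    ChatModel.GPT_4O: "gpt-4o",
    ChatModel.CUSTOM: "gpt-4",
}

def build_llm_kwargs(model: ChatModel) -> dict:
    """Return keyword arguments for llama_index's OpenAI wrapper."""
    if model == ChatModel.GPT_4O:
        return {"model": MODEL_NAMES[model]}
    elif model == ChatModel.CUSTOM:
        # Point the OpenAI client at the custom host instead of api.openai.com.
        return {
            "model": MODEL_NAMES[model],
            "api_base": os.environ["CUSTOM_HOST"],
            "api_key": os.environ.get("CUSTOM_API_KEY", "local"),
        }
    raise ValueError(f"unsupported model: {model}")
```
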

frontend:

  • added Custom API as a selectable model.

docker:

environment variables:

CUSTOM_HOST=your-custom-host
CUSTOM_API_KEY=your-custom-api-key

.env example for lm-studio server:

CUSTOM_HOST=http://localhost:1234/v1
CUSTOM_API_KEY=local

In most cases the API key is not needed.
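
Reading those variables with an optional key could be sketched like this (custom_endpoint_settings is a hypothetical helper, not code from the PR):

```python
import os

def custom_endpoint_settings() -> tuple[str, str]:
    """Read the custom endpoint config; tolerate a missing API key,
    since most local servers (e.g. LM Studio) ignore it anyway."""
    host = os.environ["CUSTOM_HOST"]  # required, e.g. http://localhost:1234/v1
    api_key = os.environ.get("CUSTOM_API_KEY") or "local"  # placeholder if unset
    return host, api_key
```
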

Why use CUSTOM instead of OPENAI naming?

llama_index uses OPENAI_API_BASE while openai uses OPENAI_BASE_URL; adding a new variable makes sure it does not conflict with cloud OpenAI usage.
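
The effect of keeping a separate variable can be sketched as resolving the base URL per model rather than through a global override (resolve_api_base and DEFAULT_OPENAI_BASE are illustrative names):

```python
import os

# Cloud OpenAI keeps its default endpoint; only the custom model
# consults CUSTOM_HOST, so OPENAI_API_BASE is never touched.
DEFAULT_OPENAI_BASE = "https://api.openai.com/v1"

def resolve_api_base(model: str) -> str:
    if model == "custom":
        return os.environ.get("CUSTOM_HOST", DEFAULT_OPENAI_BASE)
    return DEFAULT_OPENAI_BASE
```
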

tested with lm-studio local server using:

  • llama3 8B (also some fine-tuned versions like Hermes 2 Theta)
  • mistral v0.3 7B
  • phi 3 mini

preview:

[Screenshot: Captura de pantalla 2024-06-02 220940]

@vercel

vercel bot commented Jun 3, 2024

@sebaxzero is attempting to deploy a commit to rashadphil's projects Team on Vercel.

A member of the Team first needs to authorize it.

@ramanveerji


Can you also add Cohere support for search?

