fix: Update watsonx.ai provider to use LiteLLM mixin and list all models #3674
Conversation
Signed-off-by: Bill Murdock <[email protected]>
Signed-off-by: Bill Murdock <[email protected]>
Nits - please run pre-commits locally, thanks!
@github-actions run precommit
⏳ Running pre-commit hooks on PR #3674...
🤖 Applied by @github-actions bot via pre-commit workflow
✅ Pre-commit hooks completed successfully! 🔧 Changes have been committed and pushed to the PR branch.
```diff
  # Add metrics to the chunk
- if self.telemetry and chunk.usage:
+ if self.telemetry and hasattr(chunk, "usage") and chunk.usage:
```
this looks unrelated
FWIW, my tests for watsonx.ai wouldn't succeed without this fix (as noted in the PR description). I am fine with letting some other PR put the fix in, but I think this PR should probably wait until that one is in if that's the plan.
does watsonx return usage information? we need to fix/adapt in the adapter, not in the core. putting this in the core obscures the issue.
This line was already handling the case where there was no usage information by checking chunk.usage. So I would argue this change is just doing what the line was already doing, but in a more robust way. FWIW, the watsonx.ai API does return a usage block, e.g.:
"usage": {
"completion_tokens": 54,
"prompt_tokens": 79,
"total_tokens": 133
},
Notably though, it doesn't put the usage information on every chunk; it only puts it on the final chunk in the stream. I can see that when I call the streaming REST API directly. However, I don't see it when I call LiteLLM directly with stream=True. I do see it when I call LiteLLM without streaming, FWIW. So I think LiteLLM might be dropping the usage information from the last chunk. Here is how I tested this in a notebook:
```python
import asyncio

import litellm
import nest_asyncio

# Allow asyncio.run() inside the notebook's already-running event loop.
nest_asyncio.apply()


async def get_litellm_response():
    # Call LiteLLM's async completion API against watsonx.ai with streaming enabled.
    return await litellm.acompletion(
        model="watsonx/meta-llama/llama-3-3-70b-instruct",
        messages=[{"role": "user", "content": "What is the capital of France?"}],
        stream=True,
    )


async def print_litellm_response():
    # Print every streamed chunk so we can inspect whether any of them carry usage info.
    response = await get_litellm_response()
    async for chunk in response:
        print(chunk)


asyncio.run(print_litellm_response())
```
With all that said, even if LiteLLM were correctly including this on the last chunk, we'd still have the issue that it is missing from all of the other chunks (unless LiteLLM put in an explicit None for this field on every other chunk). So I still think we should adopt this change here and let the line handle both a missing attribute AND an explicit None.
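For reference, the same handling could be written with getattr, which covers both a missing usage attribute and an explicit None in one expression (just a sketch of the same idea as the hasattr version in the diff):

```python
# Treat a chunk with no usage attribute the same as one whose usage is None.
usage = getattr(chunk, "usage", None)
if self.telemetry and usage:
    ...  # add metrics to the chunk, as in the diff above
```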
ok, fair enough 😄
Signed-off-by: Bill Murdock <[email protected]>
lgtm
```python
    api_key=self.get_api_key(),
    api_base=self.api_base,
)
logger.info(f"params to litellm (openai compat): {params}")
```
avoid this, it'll not only produce a lot of output but it'll also log user data
Good catch, thx! I put that in there thinking I wasn't pushing this file anyway, and then forgot to take it out when we wound up moving the b64_encode_openai_embeddings_response method into the class. Fixed now.
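As an aside, if some trace of the request is still useful for debugging, one option is to log only the parameter names at debug level so prompts and other user data never reach the logs (a sketch, not something this PR adds):

```python
# Log which parameters were passed to LiteLLM, but not their values.
logger.debug("litellm request param keys: %s", sorted(params.keys()))
```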
as a bonus, if you can record inference responses for watsonx we'll be able to make additional changes w/o needing a key
Signed-off-by: Bill Murdock <[email protected]>
```python
        )
        return await self._get_openai_client().completions.create(**params)  # type: ignore

    async def check_model_availability(self, model):
        return True
```
why is this always true?
I thought that was the way to signal that we don't have a static list, but I looked more closely at how the OpenAI mixin handles this issue, and I reimplemented to mirror what it does.
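Roughly, mirroring the OpenAI mixin means checking the requested model against the dynamically fetched model list instead of returning True unconditionally. A minimal sketch, where list_provider_model_ids is a hypothetical helper standing in for however the mixin exposes the remote catalog:

```python
async def check_model_availability(self, model: str) -> bool:
    # Fetch the provider's model catalog and check membership, rather than
    # unconditionally reporting every model as available.
    available_models = await self.list_provider_model_ids()  # hypothetical helper
    return model in available_models
```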
Matt writes:
How do I do that?
Signed-off-by: Bill Murdock <[email protected]>
Signed-off-by: Bill Murdock <[email protected]>
```python
    assert inference_adapter.client.api_key == api_key


@pytest.mark.parametrize(
```
why do you need this instead of using the parametrized version?
The goal of this function is to be a general-purpose test of the x-llamastack-provider-data API key header for inference providers that use the LiteLLM mixin, just like the one above tests the inference providers that use the OpenAI mixin. Right now we only have one LiteLLM-based provider, but having a list here makes it easier to add more of these tests if/when we add more LiteLLM-based providers, which I think we would do if there were big demand for another LLM service that isn't OpenAI compatible.
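For concreteness, the shape of the parametrization might look something like this (the provider IDs and the test body are illustrative, not the actual test code):

```python
import pytest

# Today this list has a single entry; more LiteLLM-based providers can be added later.
LITELLM_PROVIDER_CASES = [
    pytest.param("watsonx", id="watsonx"),
]


@pytest.mark.parametrize("provider_id", LITELLM_PROVIDER_CASES)
def test_litellm_provider_data_api_key(provider_id):
    # Sketch only: a real test would construct the adapter for provider_id and assert
    # that an API key passed via the x-llamastack-provider-data header takes effect.
    assert provider_id
```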
@jwm4, the issue you referenced was actually fixed a few weeks ago. So, switching from ibm_watsonx_ai to litellm is not the fix. Help me understand why we need to switch? Thanks!
@leseb here is what happened:
So what are the benefits here? I think the original goal of standardizing on the mixins wound up being misleading, because all of the others wound up being standardized on the OpenAI mixin and this one wound up being standardized on the LiteLLM mixin. In theory there could be some benefit to being standardized on the LiteLLM mixin in the future if there is ever big demand for more providers that are compatible with LiteLLM but not the OpenAI client. However, I think the main advantage of this PR is that it no longer uses a static list of supported models. We certainly could revert to using the watsonx.ai client libraries instead of LiteLLM and also keep the change that adds dynamic model listing. But then I think the question would be why do that work when this version is working now?
Thanks for the very detailed answer. My concern with litellm is that it has a problematic license when shipping in enterprise products. On top of that, we have migrated most of our providers to use the openai mixin, and I believe there is an intention to only support openai-compatible inference providers in the near future. I believe we have an opportunity to remove litellm as a dependency, along with the litellm mixin, and I think we should seize it. To not waste anybody's time on a decision, I suggest you revert the changes on the litellm mixin and keep the models list change. We can address the litellm part later. Thoughts?
I would prefer we make a decision first and then act on it. Reverting the changes to use the litellm mixin and keeping the models list change seems like it would be a reasonable way to go for the reasons you describe, but I would want to hear from @mattf first before I started coding it.
Incidentally, a third alternative to either depending on LiteLLM or going back to depending on the IBM watsonx client library is to re-implement using only the built-in
there are no plans to exclude providers that don't implement an openai compatible api. if we can simplify the WatsonX adapter by using LiteLLM, because LiteLLM has already done the hard adapting work, we should do it. anyone who wants to exclude LiteLLM can clip the providers that use it. pulling related utilities together and near to where they are used is also a good outcome.
There is no explicit plan but that's my intention, hope the team will align on that. Let's postpone that discussion though.
Alright, let's not waste any more time on this: the utilities for LiteLLM (the mixins) are good for now, so let's use them. Let's revisit their removal later too. Thanks!
What does this PR do?
This PR updates the watsonx.ai inference provider to use the LiteLLM mixin and to dynamically list all available models. It also fixes a bug in b64_encode_openai_embeddings_response, which was trying to enumerate over a dictionary and then reference elements of the dictionary using .field instead of ["field"]. That method is called by the LiteLLM mixin for embedding models, so the fix is needed to get the watsonx.ai embedding models to work.

Closes #3165
It is also related to a line item in #3387, but it doesn't really address that goal (because it uses the LiteLLM mixin, not the OpenAI one). I tried the OpenAI mixin and it doesn't work with watsonx.ai, presumably because the watsonx.ai service is not OpenAI compatible. It works with LiteLLM because LiteLLM has a provider implementation for watsonx.ai.
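To illustrate the dictionary-access bug described above, here is a standalone sketch (not the actual b64_encode_openai_embeddings_response code):

```python
# Plain dicts don't support attribute access, so .field raises AttributeError.
item = {"embedding": [0.1, 0.2, 0.3], "index": 0}

# Buggy:  item.embedding     -> AttributeError: 'dict' object has no attribute 'embedding'
# Fixed:  item["embedding"]  -> subscript access, which is what the fix switches to
embedding = item["embedding"]
```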
Test Plan
The test script below goes back and forth between the OpenAI and watsonx providers. The idea is that the OpenAI provider shows how it should work and then the watsonx provider output shows that it is also working with watsonx. Note that the result from the MCP test is not as good (the Llama 3.3 70b model does not choose tools as wisely as gpt-4o), but it is still working and providing a valid response. For more details on setup and the MCP server being used for testing, see the AI Alliance sample notebook that these examples are drawn from.
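The script itself is not reproduced here; as a rough idea of its shape, here is a sketch assuming a locally running Llama Stack server exposing an OpenAI-compatible endpoint (the base URL, API key, and model IDs are assumptions to adapt to your deployment):

```python
from openai import OpenAI

# Assumption: a Llama Stack server is running locally at this base URL.
client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

PROMPT = "What is the capital of France?"

# One model routed to the OpenAI provider, one routed to the watsonx provider.
for model_id in ["openai/gpt-4o", "watsonx/meta-llama/llama-3-3-70b-instruct"]:
    response = client.chat.completions.create(
        model=model_id,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(model_id, "->", response.choices[0].message.content)
```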