Conversation
chrsmith
left a comment
We need the `codyLLMConfigurationResolver` to have the same behavior as it did before we recently updated the code to be aware of the modelconfig.
It should just be a matter of returning the "model ID" where a model ID is expected, or a "provider ID / model ID" where that is expected... but from the sounds of it, it isn't clear what the expected behavior is.
Is it the case that the completion model is either the "model ID" (`claude-3-sonnet`) OR the "provider-and-model-id" (`anthropic/starcoder`)?
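To make the ambiguity concrete, here is a minimal sketch of how a client might normalize the two shapes. The names `ModelRef` and `parseModelRef` are hypothetical, not the actual client API; this only illustrates the two formats under discussion.

```typescript
// Hypothetical helper: accepts either a bare model ID ("claude-3-sonnet")
// or a provider-and-model-id ("anthropic/starcoder") and splits it.
interface ModelRef {
    provider: string | null // null when the input is a bare model ID
    modelID: string
}

function parseModelRef(raw: string): ModelRef {
    const idx = raw.indexOf('/')
    if (idx === -1) {
        // Bare model ID, e.g. "claude-3-sonnet"
        return { provider: null, modelID: raw }
    }
    // Provider-and-model-id, e.g. "anthropic/starcoder"
    return { provider: raw.slice(0, idx), modelID: raw.slice(idx + 1) }
}

console.log(parseModelRef('claude-3-sonnet')) // { provider: null, modelID: 'claude-3-sonnet' }
console.log(parseModelRef('anthropic/starcoder')) // { provider: 'anthropic', modelID: 'starcoder' }
```

If the server can return either shape depending on the field, the client needs to know which one to expect per field, which is exactly the crisp contract that seems to be missing here.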
In other words, we can modify this code any way we wish without breaking the internal Sourcegraph instance. But without having a crisp understanding of what the client is expecting from us, that's going to be problematic...
Thinking aloud, I'll just take a very close look at the previous behavior and try to infer how this was supposed to work. (Or if it is just "return whatever was specified in the site config", then just hard-code it based on the things we are expecting client side?)
Currently, clients fail to parse the autocomplete model. Context can be found in this thread.
This issue is not reproducible in all cases. Clients (tested in VS Code only) may resolve the autocomplete model based on the client configuration and feature flags before parsing the provider and model data from the connected Sourcegraph instance. However, if we disable the client configuration and feature flags (make `resolveDefaultModelFromVSCodeConfigOrFeatureFlags` always return `null`), creating the provider configuration for autocomplete consistently fails (at least for me).

The current implementation of `codyLLMConfigurationResolver` returns:

- `provider`
- `completionModel`

This PR proposes to return only the model ID as `completionModel`. This adjustment allows clients to correctly resolve the provider and model for completions.

I don't believe this is a comprehensive solution or that it aligns with the new model configuration approach. With this change, `chatModel` and `fastChatModel` use model IDs prefixed with providers, while code completions use a combination of `provider` and `completionModel` (without the provider prefix, as it's a separate field).

This PR is primarily for demonstration purposes. For more details, refer to the referenced Linear thread.
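To make the asymmetry concrete, here is an illustrative sketch of the resolver payload shape under this change. Field names mirror the GraphQL fields discussed above; the model values are placeholders taken from this thread, not real site-config output.

```typescript
// Illustrative only: approximate codyLLMConfiguration payload after this PR.
// Chat fields keep the provider prefix inside the model ID, while
// completions split the provider into its own field.
const codyLLMConfiguration = {
    chatModel: 'anthropic/claude-3-sonnet', // provider-prefixed
    fastChatModel: 'anthropic/claude-3-sonnet', // provider-prefixed
    provider: 'anthropic', // separate provider field for completions
    completionModel: 'claude-3-sonnet', // bare model ID, no prefix
}

// Chat fields embed the provider; the completion field does not.
console.log(codyLLMConfiguration.chatModel.includes('/')) // true
console.log(codyLLMConfiguration.completionModel.includes('/')) // false
```

This inconsistency between the chat and completion fields is why the change reads as a demonstration rather than a final design.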
Test plan
Tested locally in VS Code.
Configuration used for testing
`resolveDefaultModelFromVSCodeConfigOrFeatureFlags` always returning `null`