WIP: Model with dependency injection support #169
Draft
Note: this change is built on top of #168.
Tools, prompts, and even telemetry support feel a little involved to construct.
While the Vercel AI SDK supports LanguageModel middleware to add some customization to models, I found that middleware to be lacking to some extent, because most functionality is implemented in the streamText and generateText commands.
This change experiments with a way of adding dependency injection to the language model. It also sets a few defaults and wraps streamText/generateText rather than the lower-level calls. This way all models automatically get telemetry and some other defaults enabled without us repeating the settings.
The alternative language model is implemented by the AugmentedLanguageModel type (for lack of a better name). The model accepts a providerRegistry and a model or model id. If an id is given, we look up the actual LanguageModel right before calling into the Vercel AI SDK.
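A minimal sketch of that deferred lookup, assuming invented shapes for `ProviderRegistry` and `LanguageModel` (the PR's actual types will differ; names here are illustrative only):

```typescript
// Hypothetical stand-ins for the AI SDK types; not the real interfaces.
type LanguageModel = { modelId: string };

type ProviderRegistry = {
  languageModel(id: string): LanguageModel;
};

class AugmentedLanguageModel {
  constructor(
    private registry: ProviderRegistry,
    // Either a concrete model or a model id to be resolved later.
    private model: LanguageModel | string,
  ) {}

  // Resolve the id to an actual LanguageModel only right before
  // calling into the AI SDK, as the PR describes.
  resolve(): LanguageModel {
    return typeof this.model === "string"
      ? this.registry.languageModel(this.model)
      : this.model;
  }
}
```

Deferring the lookup means the registry can be swapped (for example in tests) without rebuilding every model definition.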
The augmented model can also be configured with system prompts, tools, and toolsets. All of these can be plain values, but optionally support dependency injection.
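The "plain value or injected" idea can be sketched as a small union type plus a resolver. This is an assumption about the shape, not the PR's actual API:

```typescript
// A config entry is either a plain value or a factory taking dependencies.
type Injectable<Deps, T> = T | ((deps: Deps) => T);

function resolveInjectable<Deps, T>(value: Injectable<Deps, T>, deps: Deps): T {
  // Note: this simple typeof check would misfire if T itself were a
  // function type; good enough for a sketch.
  return typeof value === "function"
    ? (value as (deps: Deps) => T)(deps)
    : value;
}
```

A system prompt can then be a static string or a function of the injected dependencies, and the model resolves it uniformly.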
With these changes the full model can be defined as a `const` for easier sharing, for example with test code. To define the agent's chat model I didn't reuse the builders we normally use, but instead used the lower-level functions and prompts:
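A sketch of what such a shareable `const` definition could look like; `defineModel` and its options are hypothetical stand-ins for the PR's lower-level helpers:

```typescript
// Invented helper types for illustration only.
type ChatDeps = { userName: string };

type ModelDefinition<Deps> = {
  system: (deps: Deps) => string;
};

// Identity helper that just pins the Deps type parameter.
function defineModel<Deps>(def: ModelDefinition<Deps>): ModelDefinition<Deps> {
  return def;
}

// The whole model lives in one const, so tests can import it and
// exercise the prompt functions directly.
const chatModel = defineModel<ChatDeps>({
  system: (deps) => `You are a helpful assistant for ${deps.userName}.`,
});
```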
By adding all features to the model we can group prompts and tools at creation time:
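Grouping everything on the model means call sites no longer repeat prompts and tools. A minimal sketch with invented names (the real streamText/generateText wrappers will look different):

```typescript
// Illustrative tool and model shapes, not the AI SDK's.
type Tool = { description: string; execute: (input: string) => string };

type GroupedModel = {
  system: string;
  tools: Record<string, Tool>;
};

function createModel(system: string, tools: Record<string, Tool>): GroupedModel {
  return { system, tools };
}

// A call-site helper only needs the model; the prompt and tools
// travel with it instead of being passed at every call.
function callTool(model: GroupedModel, name: string, input: string): string {
  return model.tools[name].execute(input);
}
```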
Because the model dependencies are typed, TypeScript will tell us if we miss a dependency when calling streamText/generateText:
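A sketch of how carrying the dependencies as a type parameter surfaces a missing one at compile time; the names here are invented for illustration:

```typescript
// The model carries its required dependencies in the Deps type parameter.
type ModelWithDeps<Deps> = {
  system: (deps: Deps) => string;
};

// A generate-style wrapper demands the same Deps the model declares.
function generate<Deps>(model: ModelWithDeps<Deps>, deps: Deps): string {
  return model.system(deps);
}

const localizedModel: ModelWithDeps<{ locale: string }> = {
  system: (deps) => `Answer in ${deps.locale}.`,
};

// generate(localizedModel, {});  // compile error: `locale` is missing
const prompt = generate(localizedModel, { locale: "en" }); // OK
```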