feat: adding SAP AI Core as a Supported Provider #1865
schardosin wants to merge 24 commits into maximhq:main
Conversation
📝 Walkthrough
Summary by CodeRabbit
Adds SAP AI Core as a first-class provider: provider implementation, OAuth2 token cache, deployment discovery/cache, multi-backend routing (OpenAI/Bedrock/Vertex) with streaming, types, DB migration/storage, UI/schema/forms/icons, extensive tests, docs, and CI/release secret exposure.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Client
    participant Bifrost as Bifrost\n(SAPAICoreProvider)
    participant TokenCache as TokenCache\n(auth.go)
    participant DeploymentCache as DeploymentCache\n(deployment.go)
    participant SAP as SAP AI Core API
    participant Backend as Backend Service\n(OpenAI/Bedrock/Vertex)

    Client->>Bifrost: ChatCompletion request
    Bifrost->>Bifrost: Validate key/config
    Bifrost->>TokenCache: GetToken(clientID, clientSecret, authURL)
    TokenCache->>TokenCache: Check cache (expiry buffer)
    alt missing/expired
        TokenCache->>SAP: POST /oauth/token
        SAP-->>TokenCache: access_token + expires_in
        TokenCache->>TokenCache: Store token
    end
    TokenCache-->>Bifrost: Access token
    Bifrost->>DeploymentCache: GetDeploymentID(modelName, staticMap)
    DeploymentCache->>DeploymentCache: Check TTL / cache
    alt cache miss/expired
        DeploymentCache->>SAP: GET /v2/lm/deployments
        SAP-->>DeploymentCache: Deployments list
        DeploymentCache->>DeploymentCache: Cache mapping
    end
    DeploymentCache-->>Bifrost: DeploymentID + BackendType
    Bifrost->>Backend: Forward translated request (via SAP endpoints + headers)
    Backend-->>Bifrost: Response / stream events
    Bifrost-->>Client: Translated response / stream
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
transports/config.schema.json (1)
1474-1514: ⚠️ Potential issue | 🔴 Critical
Fix critical JSON syntax errors in the vllm and sapaicore key config schema blocks.
The transports/config.schema.json file fails to parse as valid JSON. The vllm_key_config object (lines 1459–1471) is missing its closing brackets ] and } before sapaicore_key_config begins at line 1474. Additionally, the oneOf array (lines 1515–1650) contains multiple missing commas between items (e.g., after lines 1488, 1518, 1543, and 1568) and an incomplete/malformed transition between the vllm and sapaicore config branches (around lines 1597–1599). These errors prevent the schema from being parsed, blocking all config loading. As the authoritative source of truth for configuration, this file must be valid JSON before any other changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/config.schema.json` around lines 1474 - 1514, The JSON schema is invalid: close the missing array/object delimiters for the vllm_key_config block (ensure vllm_key_config's closing ] and } are present) before the sapaicore_key_config object, add the missing commas between items in the top-level oneOf array (insert commas after the vllm entry and other listed entries around the noted lines), and correct the malformed transition between the vllm and sapaicore branches so each branch is a well-formed JSON object in the oneOf array; verify symbols vllm_key_config, sapaicore_key_config and the containing oneOf array are syntactically valid JSON and run a JSON linter to confirm parsing succeeds.
🧹 Nitpick comments (3)
core/providers/sapaicore/errors_test.go (1)
175-178: The conditional assertion may not catch a missing EventID.
The current check only fails if EventID is non-nil but has the wrong value. If the parser unexpectedly fails to copy the nested event_id, this test would pass silently.
♻️ Suggested improvement for stronger assertion
```diff
 // Error-level event_id
-if result.Error != nil && result.Error.EventID != nil && *result.Error.EventID != "err_evt_456" {
-	t.Errorf("expected error event_id 'err_evt_456', got %v", result.Error.EventID)
+if result.Error == nil {
+	t.Fatal("expected error field to be set")
+}
+if result.Error.EventID == nil {
+	t.Fatal("expected error event_id to be set")
+}
+if *result.Error.EventID != "err_evt_456" {
+	t.Errorf("expected error event_id 'err_evt_456', got %q", *result.Error.EventID)
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/errors_test.go` around lines 175 - 178, The test's conditional only fails when EventID is non-nil and wrong, so update the assertion to first require that result.Error and result.Error.EventID are non-nil and then assert *result.Error.EventID == "err_evt_456"; specifically modify the check around result.Error.EventID in errors_test.go to fail if EventID is nil and to call t.Errorf (or t.Fatalf) when the value is not "err_evt_456" so missing or incorrect event_id both cause test failures.

core/providers/sapaicore/deployment.go (1)
156-157: Escape resourceGroup in query URLs.
Lines 156 and 227 interpolate resourceGroup directly into the query string. Use url.QueryEscape(resourceGroup) to avoid malformed URLs for reserved characters.
🔧 Proposed fix
```diff
+import "net/url"
 ...
-deploymentsURL := fmt.Sprintf("%s/lm/deployments?status=RUNNING&resourceGroup=%s", normalizedURL, resourceGroup)
+deploymentsURL := fmt.Sprintf("%s/lm/deployments?status=RUNNING&resourceGroup=%s", normalizedURL, url.QueryEscape(resourceGroup))
 ...
-deploymentsURL := fmt.Sprintf("%s/lm/deployments?status=RUNNING&resourceGroup=%s", normalizedURL, resourceGroup)
+deploymentsURL := fmt.Sprintf("%s/lm/deployments?status=RUNNING&resourceGroup=%s", normalizedURL, url.QueryEscape(resourceGroup))
```

Also applies to: 227-228
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/deployment.go` around lines 156 - 157, The code builds query URLs by interpolating resourceGroup directly (see deploymentsURL) which can produce malformed URLs for reserved characters; update the URL construction in core/providers/sapaicore/deployment.go (places that set deploymentsURL and the other URL around lines 227-228) to use url.QueryEscape(resourceGroup) when embedding the resource group into the query string, and add the net/url import if missing so the URL encoding is applied consistently.

core/providers/sapaicore/sapaicore.go (1)
732-737: Use provider-key-aware unsupported-operation errors consistently.
These methods hardcode schemas.SAPAICore in NewBifrostOperationError, while later methods already use NewUnsupportedOperationError(..., provider.GetProviderKey()). Align all unsupported handlers to the provider-key-aware pattern for alias-correct error metadata.
Based on learnings: "In provider implementations, when raising unsupported-operation errors, pass provider.GetProviderKey() to NewUnsupportedOperationError so error messages and ExtraFields.Provider reflect the provider's alias."
Also applies to: 740-1283
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/sapaicore.go` around lines 732 - 737, The unsupported-operation handlers in this file (e.g., the return in TextCompletion and other methods between the blocks around the current diff and up through lines 740-1283) use providerUtils.NewBifrostOperationError with a hardcoded schemas.SAPAICore; update each of these to call providerUtils.NewUnsupportedOperationError and pass provider.GetProviderKey() (instead of schemas.SAPAICore) so the error metadata uses the provider alias—keep the same error message text and underlying fmt.Errorf("unsupported operation") but replace the constructor and provider identifier in every unsupported handler (look for usages in methods like TextCompletion and the other unsupported stubs in sapaicore.go).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/release-pipeline.yml:
- Around line 569-573: The test-core job is missing the SAP AI Core environment
secrets required by the provider tests; update the test-core job's env block to
add SAP_AI_CORE_CLIENT_ID, SAP_AI_CORE_CLIENT_SECRET, SAP_AI_CORE_AUTH_URL,
SAP_AI_CORE_BASE_URL, and SAP_AI_CORE_RESOURCE_GROUP (using the corresponding
GitHub secrets: ${{ secrets.SAP_AI_CORE_CLIENT_ID }}, ${{
secrets.SAP_AI_CORE_CLIENT_SECRET }}, ${{ secrets.SAP_AI_CORE_AUTH_URL }}, ${{
secrets.SAP_AI_CORE_BASE_URL }}, ${{ secrets.SAP_AI_CORE_RESOURCE_GROUP }}),
ensuring the names match exactly so core/providers/sapaicore/*_test.go can
access credentials during go test.
In `@core/providers/sapaicore/auth_test.go`:
- Around line 153-205: The tests recompute time.Now() when checking expiration,
causing flakiness; change the expiration check to use the same reference
variable now (and optionally define buffer := 30*time.Second) used to construct
tt.expiresAt so it's deterministic—i.e., in the loop compute isExpired using
now.Add(30*time.Second).Before(token.expiresAt) (negated as currently written)
and reference cachedToken, tt.expiresAt, and the 30s buffer constant to locate
where to update the logic.
In `@core/providers/sapaicore/auth.go`:
- Around line 59-73: The code currently holds tc.mu.Lock() across the network
call to tc.fetchToken, which blocks other readers; change to check the cache
under a read lock (tc.mu.RLock()/RUnlock()), return if valid, otherwise release
the lock, call tc.fetchToken(clientID, clientSecret, authURL) without any lock,
then re-acquire the write lock (tc.mu.Lock()) to re-check tc.tokens[key] (in
case another goroutine updated it) and either use the cached value or store the
newly fetched token and expiresAt before unlocking; ensure you still return the
token and error paths correctly.
In `@core/providers/sapaicore/deployment.go`:
- Around line 99-122: The code currently holds dc.mu around the external call to
dc.fetchDeployments; release the write lock before making the API/token call to
avoid blocking readers. Change the flow in the function that references dc.mu,
cacheKey, dc.deployments and dc.ttl to: 1) acquire dc.mu, check cacheKey and
model freshness (as you already do), and if stale or missing, record that you
will refresh, then immediately dc.mu.Unlock() before calling
dc.fetchDeployments(clientID, clientSecret, authURL, baseURL, resourceGroup). 2)
After fetchDeployments returns, re-acquire dc.mu and perform a second
check/update of dc.deployments[cacheKey] (double-checked locking) to store the
fresh deployments and return the deploymentID/backend or the appropriate error.
Ensure no external calls occur while dc.mu is held.
In `@core/providers/sapaicore/sapaicore.go`:
- Around line 305-310: The code mutates request.Params (clearing
MaxCompletionTokens and Temperature) in place for reasoning models via
isOpenAIReasoningOrGPT5Model, which breaks streaming and retries; instead create
a copy of the request (or at least a copied Params struct) before clearing those
fields and use that copy for the outbound call so the original request remains
unchanged, and apply the same non-mutating stripping logic in the streaming path
as well (mirror the change where the streaming/send logic is implemented so both
sync and stream use the sanitized copy).
In `@core/providers/sapaicore/types.go`:
- Around line 7-57: Rename the exported provider types to prefix them with
"SAPAICore" to follow the provider-type naming rule: change BackendType ->
SAPAICoreBackendType, BackendOpenAI/Bedrock/Vertex ->
SAPAICoreBackendOpenAI/Bedrock/Vertex, DeploymentStatus ->
SAPAICoreDeploymentStatus (and its constants
DeploymentStatusRunning/Stopped/Pending/Dead), DeploymentResource ->
SAPAICoreDeploymentResource, DeploymentDetails -> SAPAICoreDeploymentDetails,
DeploymentResourceDetails -> SAPAICoreDeploymentResourceDetails, BackendDetails
-> SAPAICoreBackendDetails, BackendModel -> SAPAICoreBackendModel, and
DeploymentsResponse -> SAPAICoreDeploymentsResponse; update all
references/imports/usages across the package to the new names and keep existing
JSON tags unchanged to preserve serialization. Ensure you update any switch/type
assertions and exported documentation/comments that reference the old
identifiers.
In `@docs/providers/supported-providers/overview.mdx`:
- Line 37: Update the SAP AI Core capability matrix row for "SAP AI Core
(`sapaicore/<model>`)" so that the Responses and Responses (stream) columns are
marked supported (change their cells from ❌ to ✅) to reflect the Bedrock-backed
SAP AI Core implementation; locate the row containing the exact label "SAP AI
Core (`sapaicore/<model>`)" in the table and modify the two relevant columns for
Responses and Responses (stream) accordingly so the table accurately matches the
provider implementation/docs.
In `@docs/providers/supported-providers/sapaicore.mdx`:
- Around line 51-68: The example config under sapaicore_key_config uses the
wrong key name: replace the "token_url" property with "auth_url" in the JSON
sample and the surrounding prose so it matches the SAP AI Core service key
schema (also update the second occurrence noted around the later example).
Ensure the description list uses `auth_url` ( OAuth2 token endpoint URL from
service key) and keep other keys (client_id, client_secret, base_url,
resource_group) unchanged.
- Around line 126-187: Replace the current Tabs ("Gateway (OpenAI model)",
"Gateway (Claude model)", "Gateway (Gemini model)", "Go SDK") with the required
three tabs: "Web UI", "API", and "config.json"; move the curl examples for each
model (e.g., model strings like "sapaicore/gpt-4o",
"sapaicore/anthropic--claude-3.5-sonnet", "sapaicore/gemini-1.5-pro") into the
API tab, provide a brief Web UI example showing the minimal user message payload
(e.g., messages: [{"role":"user","content":"Hello!"}] and
max_completion_tokens), and create a config.json tab containing a sample
provider config/JSON equivalent (showing Provider=SAPAICore, Model, and
MaxCompletionTokens fields analogous to the Go SDK snippet); ensure tab titles
exactly match "Web UI", "API", and "config.json" and that examples preserve the
Authorization header and token placeholders.
In `@framework/configstore/clientconfig.go`:
- Around line 379-390: The key hashing logic in GenerateKeyHash currently
ignores SAPAICoreKeyConfig, so updates to SAP AI Core credentials/deployments
won't change the computed hash and can cause stale reconciliation; modify
GenerateKeyHash to include SAPAICoreKeyConfig fields (ClientID, ClientSecret,
AuthURL, BaseURL, ResourceGroup and Deployments) in the hash computation (same
way other provider configs are included), and ensure the redaction-aware values
(or raw values if used elsewhere) are consistently used so changes to
SAPAICoreKeyConfig affect the generated key hash; update any parallel hashing
code paths that mirror GenerateKeyHash (the other block noted in the comment) to
include the same SAPAICoreKeyConfig fields for parity.
In `@framework/configstore/rdb.go`:
- Line 295: GenerateKeyHash in framework/configstore/clientconfig.go currently
omits SAPAICoreKeyConfig from the hash payload, so changes to that config won't
change the generated hash; update GenerateKeyHash to include SAPAICoreKeyConfig
using the same pattern as
AzureKeyConfig/VertexKeyConfig/BedrockKeyConfig/ReplicateKeyConfig/VLLMKeyConfig:
if key.SAPAICoreKeyConfig != nil, marshal it (using sonic.Marshal as used for
the others), handle and propagate any marshal error, and write the marshalled
bytes to the hash before finalizing the hash string.
In `@framework/configstore/tables/key.go`:
- Around line 321-343: The code is taking pointers into fields of
SAPAICoreKeyConfig (e.g., assigning k.SAPAICoreClientID =
&k.SAPAICoreKeyConfig.ClientID etc.) and later encrypting those pointers
in-place, which mutates the caller-owned SAPAICoreKeyConfig; instead, create new
standalone variables or value copies for ClientID, ClientSecret, AuthURL,
BaseURL, ResourceGroup (e.g., local vars like sapaicoreClientIDValue) and set
the SAPAICoreClientID/ClientSecret/AuthURL/BaseURL/ResourceGroup pointers to
point to those new copies (or perform encryption on the copies), then run the
encryption logic that currently operates in the encrypt block (the code
referenced around where encryption occurs) against those copies so the original
SAPAICoreKeyConfig fields are never overwritten or mutated.
- Around line 68-69: SAPAICoreResourceGroup is declared with
gorm:"type:varchar(255)" but is encrypted in the BeforeSave flow (see the
encryption logic around the BeforeSave/Save hooks at lines referencing
encryption), so change the gorm tag on SAPAICoreResourceGroup to use type:text
to accommodate ciphertext longer than 255 bytes; also review other fields of
type *schemas.EnvVar that are encrypted (e.g., the code referenced near the
BeforeSave hook) and ensure they likewise use gorm:"type:text" to avoid DB write
failures.
In `@ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx`:
- Around line 574-691: The SAP AI Core form inputs are missing required
data-testid attributes for testing; add data-testid attributes to the
interactive components referenced by the FormField names to follow the
<entity>-<element>-<qualifier> convention: for FormField name
`key.sapaicore_key_config.client_id` add data-testid="sapaicore-client-id-input"
to the EnvVarInput inside that FormField; for
`key.sapaicore_key_config.client_secret` add
data-testid="sapaicore-client-secret-input"; for
`key.sapaicore_key_config.auth_url` add data-testid="sapaicore-auth-url-input";
for `key.sapaicore_key_config.base_url` add
data-testid="sapaicore-base-url-input"; for
`key.sapaicore_key_config.resource_group` add
data-testid="sapaicore-resource-group-input"; and for
`key.sapaicore_key_config.deployments` add
data-testid="sapaicore-deployments-textarea" to the Textarea, ensuring the
attributes are placed on the actual interactive elements (EnvVarInput/Textarea)
not the wrapper components.
- Line 169: The SAP AI Core form block lacks test IDs on its interactive fields;
update the EnvVarInput and Textarea components inside the SAP AI Core section
(the conditional block guarded by isSAPAICore in apiKeysFormFragment.tsx) to add
data-testid attributes using the naming convention: add
data-testid="sapaicore-client-id-input" to the client ID EnvVarInput,
data-testid="sapaicore-client-secret-input" to the client secret EnvVarInput,
data-testid="sapaicore-auth-url-input" to the auth URL EnvVarInput,
data-testid="sapaicore-base-url-input" to the base URL EnvVarInput,
data-testid="sapaicore-resource-group-input" to the resource group EnvVarInput,
and data-testid="sapaicore-deployments-textarea" to the deployments Textarea so
tests can reliably target these elements.
In `@ui/lib/constants/config.ts`:
- Line 51: Update the sapaicore placeholder string to use the dotted Anthropic
model name format; in ui/lib/constants/config.ts locate the sapaicore constant
(the entry with value "e.g. gpt-4o, anthropic--claude-3-5-sonnet,
gemini-1.5-pro") and replace the incorrect hyphenated Anthropic example with the
dotted form "anthropic--claude-3.5-sonnet" so it matches the rest of the
codebase conventions.
In `@ui/lib/schemas/providerForm.ts`:
- Around line 279-282: The validation currently skips API key value checks for
providers in noApiKeyProviders (["vertex","bedrock","sapaicore"]) but doesn't
require sapaicore_key_config when effectiveProviderType === "sapaicore",
allowing empty credentials; update the data.keys.forEach validation to, when
effectiveProviderType === "sapaicore", assert that
keys[index].sapaicore_key_config exists and is non-empty (and call ctx.addIssue
with a clear message and path on failure) while keeping the existing checks for
other providers (use the same ctx.addIssue pattern used for missing key.value).
In `@ui/lib/types/schemas.ts`:
- Around line 165-169: The hasEnvVarValue helper treats whitespace-only strings
as present; update checks in hasEnvVarValue to trim envVar.env_var and
envVar.value before testing length (e.g., use envVar.env_var?.trim().length > 0
and envVar.value?.trim().length > 0) and preserve the existing from_env logic so
whitespace-only values are considered absent.
---
Outside diff comments:
In `@transports/config.schema.json`:
- Around line 1474-1514: The JSON schema is invalid: close the missing
array/object delimiters for the vllm_key_config block (ensure vllm_key_config's
closing ] and } are present) before the sapaicore_key_config object, add the
missing commas between items in the top-level oneOf array (insert commas after
the vllm entry and other listed entries around the noted lines), and correct the
malformed transition between the vllm and sapaicore branches so each branch is a
well-formed JSON object in the oneOf array; verify symbols vllm_key_config,
sapaicore_key_config and the containing oneOf array are syntactically valid JSON
and run a JSON linter to confirm parsing succeeds.
---
Nitpick comments:
In `@core/providers/sapaicore/deployment.go`:
- Around line 156-157: The code builds query URLs by interpolating resourceGroup
directly (see deploymentsURL) which can produce malformed URLs for reserved
characters; update the URL construction in
core/providers/sapaicore/deployment.go (places that set deploymentsURL and the
other URL around lines 227-228) to use url.QueryEscape(resourceGroup) when
embedding the resource group into the query string, and add the net/url import
if missing so the URL encoding is applied consistently.
In `@core/providers/sapaicore/errors_test.go`:
- Around line 175-178: The test's conditional only fails when EventID is non-nil
and wrong, so update the assertion to first require that result.Error and
result.Error.EventID are non-nil and then assert *result.Error.EventID ==
"err_evt_456"; specifically modify the check around result.Error.EventID in
errors_test.go to fail if EventID is nil and to call t.Errorf (or t.Fatalf) when
the value is not "err_evt_456" so missing or incorrect event_id both cause test
failures.
In `@core/providers/sapaicore/sapaicore.go`:
- Around line 732-737: The unsupported-operation handlers in this file (e.g.,
the return in TextCompletion and other methods between the blocks around the
current diff and up through lines 740-1283) use
providerUtils.NewBifrostOperationError with a hardcoded schemas.SAPAICore;
update each of these to call providerUtils.NewUnsupportedOperationError and pass
provider.GetProviderKey() (instead of schemas.SAPAICore) so the error metadata
uses the provider alias—keep the same error message text and underlying
fmt.Errorf("unsupported operation") but replace the constructor and provider
identifier in every unsupported handler (look for usages in methods like
TextCompletion and the other unsupported stubs in sapaicore.go).
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (38)
- .github/workflows/pr-tests.yml
- .github/workflows/release-pipeline.yml
- core/bifrost.go
- core/internal/llmtests/account.go
- core/providers/sapaicore/auth.go
- core/providers/sapaicore/auth_test.go
- core/providers/sapaicore/converters.go
- core/providers/sapaicore/converters_test.go
- core/providers/sapaicore/deployment.go
- core/providers/sapaicore/deployment_test.go
- core/providers/sapaicore/errors.go
- core/providers/sapaicore/errors_test.go
- core/providers/sapaicore/sapaicore.go
- core/providers/sapaicore/sapaicore_test.go
- core/providers/sapaicore/types.go
- core/sapaicore.test
- core/schemas/account.go
- core/schemas/bifrost.go
- core/utils.go
- docs/docs.json
- docs/openapi/openapi.json
- docs/providers/supported-providers/overview.mdx
- docs/providers/supported-providers/sapaicore.mdx
- framework/configstore/clientconfig.go
- framework/configstore/migrations.go
- framework/configstore/rdb.go
- framework/configstore/tables/key.go
- transports/bifrost-http/handlers/providers.go
- transports/bifrost-http/lib/config.go
- transports/config.schema.json
- ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx
- ui/lib/constants/config.ts
- ui/lib/constants/icons.tsx
- ui/lib/constants/logs.ts
- ui/lib/schemas/providerForm.ts
- ui/lib/types/config.ts
- ui/lib/types/schemas.ts
- ui/lib/utils/validation.ts
👮 Files not reviewed due to content moderation or server errors (1)
- ui/lib/constants/icons.tsx
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/PR-1865.md`:
- Around line 135-137: Update the PR description under the "## Related issues"
section to explicitly state the relationship between issues: keep "Closes `#233`"
and add a clarifying line such as "Also relates to `#123`" or "Contributes to
`#123`" so it's clear whether this PR closes `#233` and/or contributes to `#123`; edit
the text near the existing "Closes `#233`" entry in PR-1865.md to include both
issue numbers and the exact relationship.
- Around line 72-75: Replace the inline token in the curl examples (the
Authorization: Bearer YOUR_BIFROST_KEY header shown in the curl -X POST
http://localhost:8080/v1/chat/completions snippet and the other instances at the
indicated ranges) with environment-variable substitution (e.g., use
Authorization: Bearer $BIFROST_KEY) and add a short preceding note showing how
to set the env var (export BIFROST_KEY=...) so examples do not encourage pasting
live tokens into shell history or process lists.
Actionable comments posted: 5
🧹 Nitpick comments (2)
ui/lib/schemas/providerForm.ts (1)
2-2: Use relative import paths in ui/lib modules.
This changed import uses @/lib/...; keep it relative to match the UI import rule for this directory. As per the coding guidelines, ui/**/*.{ts,tsx} must use relative imports from the ui/lib directory for constants, utilities, and types.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ui/lib/schemas/providerForm.ts` at line 2, The import in providerForm.ts uses an absolute alias "@/lib/utils/validation" which violates the ui/lib rule; update the import to use a relative path to the utils module (so the symbols isValidDeployments, isValidVertexAuthCredentials, isValidEnvVar are imported from the local ../utils/validation module) and ensure the rest of the file still references those functions unchanged.

ui/lib/types/schemas.ts (1)
172-209: Consider extracting shared deployments validation into one helper.
This deployments refine logic is now repeated across provider schemas. Centralizing it will reduce drift and keep behavior consistent across Azure/Vertex/Bedrock/SAP AI Core.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ui/lib/types/schemas.ts` around lines 172 - 209, Extract the repeated deployments validation into a single helper (e.g., isValidDeployments or validateDeploymentsString) and use it inside the schema .refine calls instead of duplicating the logic; update sapaicoreKeyConfigSchema (and the other provider schemas that currently inline the same refine) to call that helper to determine validity, keeping the same error message and path (deployments) so behavior remains identical across Azure/Vertex/Bedrock/SAP AI Core.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/providers/sapaicore/auth.go`:
- Around line 37-40: cacheKey currently concatenates clientID and authURL with
":" which can collide if either value contains ":"; update cacheKey to produce
an unambiguous key (e.g. use a deterministic encoding such as JSON-encoding the
pair, or prefix clientID with its length like fmt.Sprintf("%d:%s:%s",
len(clientID), clientID, authURL), or use a NUL byte separator) so different
(clientID, authURL) pairs never collide; modify the cacheKey function to
implement one of these encodings and return that instead of
clientID+":"+authURL.
In `@core/providers/sapaicore/deployment.go`:
- Around line 174-175: The code interpolates resourceGroup directly into query
strings (e.g., building deploymentsURL from normalizedURL and resourceGroup),
which can break requests for values with reserved characters; change the
construction to percent-encode resourceGroup (use net/url.QueryEscape or build
query via url.Values / url.URL.Query()) before appending it to normalizedURL,
and apply the same fix to the other occurrence that builds a URL at lines
referenced (the second interpolation around 245-246) so both deploymentsURL and
the other URL use encoded query parameters instead of raw interpolation.
- Around line 58-60: The current deploymentCacheKey (deploymentCacheKey)
concatenates baseURL and resourceGroup with ":" which can produce ambiguous
collisions; change it to produce an unambiguous key by encoding the two inputs
or hashing their pair (e.g., compute SHA-256 over baseURL + a null separator +
resourceGroup and return the hex string) so the key is unique for each distinct
pair and cannot collide for different inputs.
In `@core/providers/sapaicore/sapaicore.go`:
- Around line 314-317: Guard against nil req.Params before mutating
model-specific fields: in the block that checks
isOpenAIReasoningOrGPT5Model(req.Model) ensure req.Params is non-nil (allocate a
new Params struct) before setting req.Params.MaxCompletionTokens = nil and
req.Params.Temperature = nil; apply the same nil-check/initialization pattern to
the analogous code around the other occurrence (lines handling the same fields
at the second location) so neither path can panic when Params is omitted.
In `@ui/lib/types/schemas.ts`:
- Around line 251-275: In the first refine that validates path ["value"], don't
fail early when sapaicore_key_config is partially filled: keep the existing
check that returns true when hasRequiredFields is true, but if hasAnyField is
true and hasRequiredFields is false, do not return false — instead let the
refine pass (return true) so the second refine can enforce completeness; update
the block that references sapaicore_key_config, hasEnvVarValue,
hasRequiredFields, and hasAnyField accordingly, and apply the same change to the
other occurrences noted (around the other similar refine blocks).
---
Nitpick comments:
In `@ui/lib/schemas/providerForm.ts`:
- Line 2: The import in providerForm.ts uses an absolute alias
"@/lib/utils/validation" which violates the ui/lib rule; update the import to
use a relative path to the utils module (so the symbols isValidDeployments,
isValidVertexAuthCredentials, isValidEnvVar are imported from the local
../utils/validation module) and ensure the rest of the file still references
those functions unchanged.
In `@ui/lib/types/schemas.ts`:
- Around line 172-209: Extract the repeated deployments validation into a single
helper (e.g., isValidDeployments or validateDeploymentsString) and use it inside
the schema .refine calls instead of duplicating the logic; update
sapaicoreKeyConfigSchema (and the other provider schemas that currently inline
the same refine) to call that helper to determine validity, keeping the same
error message and path (deployments) so behavior remains identical across
Azure/Vertex/Bedrock/SAP AI Core.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- core/providers/sapaicore/auth.go
- core/providers/sapaicore/auth_test.go
- core/providers/sapaicore/deployment.go
- core/providers/sapaicore/errors_test.go
- core/providers/sapaicore/sapaicore.go
- docs/providers/supported-providers/overview.mdx
- docs/providers/supported-providers/sapaicore.mdx
- framework/configstore/clientconfig.go
- framework/configstore/tables/key.go
- ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx
- ui/lib/constants/config.ts
- ui/lib/schemas/providerForm.ts
- ui/lib/types/schemas.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- docs/providers/supported-providers/overview.mdx
- ui/lib/constants/config.ts
- docs/providers/supported-providers/sapaicore.mdx
- ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx
Actionable comments posted: 5
🧹 Nitpick comments (3)
ui/lib/types/schemas.ts (1)
172-208: LGTM with optional refactor suggestion. The SAP AI Core key config schema follows the established patterns used by other provider schemas. The deployments validation logic is correct.
Consider extracting the duplicated deployments validation refine (used by `azureKeyConfigSchema`, `vertexKeyConfigSchema`, `bedrockKeyConfigSchema`, and now `sapaicoreKeyConfigSchema`) into a shared helper function to reduce code duplication.
♻️ Optional: Extract shared deployments validator
```ts
// Shared deployments validation helper
const deploymentsRefineConfig = {
  validator: (deployments: Record<string, string> | string | undefined) => {
    if (!deployments) return true;
    if (typeof deployments === "object") return true;
    if (typeof deployments === "string") {
      const trimmed = deployments.trim();
      if (trimmed === "") return true;
      if (trimmed.startsWith("env.")) return true;
      try {
        const parsed = JSON.parse(trimmed);
        return typeof parsed === "object" && parsed !== null && !Array.isArray(parsed);
      } catch {
        return false;
      }
    }
    return false;
  },
  config: {
    message: "Deployments must be a valid JSON object or an environment variable reference",
    path: ["deployments"] as const,
  },
};

// Usage in schemas:
export const sapaicoreKeyConfigSchema = z
  .object({
    // ... fields
  })
  .refine((data) => deploymentsRefineConfig.validator(data.deployments), deploymentsRefineConfig.config);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ui/lib/types/schemas.ts` around lines 172 - 208, Duplicate deployments validation logic appears across sapaicoreKeyConfigSchema and other provider schemas; extract it into a shared helper (e.g., validateDeployments or deploymentsRefineConfig) and replace the inline .refine callbacks in sapaicoreKeyConfigSchema, azureKeyConfigSchema, vertexKeyConfigSchema, and bedrockKeyConfigSchema with a call to that helper (use .refine((data) => validateDeployments(data.deployments), { message: "...", path: ["deployments"] })). Ensure the helper accepts Record<string,string> | string | undefined and implements the same trimming, env. prefix, empty-string, JSON-parse, and object checks so behavior remains identical.
core/providers/sapaicore/auth.go (1)
59-87: Consider per-key refresh deduplication to prevent token refresh stampedes. Concurrent misses for the same key can all call `fetchToken` at once. A per-key `singleflight` (or equivalent) would reduce external auth pressure.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/auth.go` around lines 59 - 87, The current GetToken flow uses a global mutex tc.mu and allows concurrent cache misses for the same key to all call tc.fetchToken, causing refresh stampedes; change this by introducing per-key deduplication (e.g., use golang.org/x/sync/singleflight or a per-key in-progress map) keyed by the same key variable so only one caller invokes tc.fetchToken(clientID, clientSecret, authURL) for a given key while others wait for the result; on success publish the token into tc.tokens (cachedToken) under tc.mu as you already do, and ensure callers receive the same token/error returned by the singleflight to preserve current double-check semantics.
core/providers/sapaicore/sapaicore.go (1)
750-1303: Use `NewUnsupportedOperationError(..., provider.GetProviderKey())` consistently for unsupported APIs. Most unsupported stubs currently use `NewBifrostOperationError` with a static provider constant; aligning with the provider-key-based unsupported helper keeps metadata and aliasing consistent.
Based on learnings: In provider implementations, when raising unsupported-operation errors, pass `provider.GetProviderKey()` to `NewUnsupportedOperationError` so error messages and ExtraFields.Provider reflect the provider's alias.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/sapaicore.go` around lines 750 - 1303, Replace calls to providerUtils.NewBifrostOperationError(...) in the unsupported-operation stubs with providerUtils.NewUnsupportedOperationError(..., provider.GetProviderKey()) so the provider alias is used consistently; update each unsupported method in this diff (e.g., TextCompletion, TextCompletionStream, CountTokens, Speech, SpeechStream, Transcription, TranscriptionStream, ImageGeneration, ImageGenerationStream, ImageEdit, ImageEditStream, ImageVariation, BatchCreate, BatchList, BatchRetrieve, BatchCancel, BatchResults, FileUpload, FileList, FileRetrieve, FileDelete, FileContent, ContainerCreate, ContainerList, ContainerRetrieve, ContainerDelete, ContainerFileCreate, ContainerFileList, ContainerFileRetrieve, ContainerFileContent, ContainerFileDelete) to call providerUtils.NewUnsupportedOperationError with the same human-readable message, the fmt.Errorf("unsupported operation") error, and provider.GetProviderKey() as the provider identifier.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/providers/sapaicore/auth.go`:
- Around line 94-101: The token URL normalization incorrectly appends
"oauth/token" when authURL ends with "/oauth/token/"; update the logic in the
auth URL handling (references: authURL and tokenURL) so you first
normalize/truncate any trailing slash after "/oauth/token" (accept both
"/oauth/token" and "/oauth/token/") and ensure tokenURL ends exactly with
"/oauth/token" without duplicating segments; implement this by checking
TrimSuffix or explicit checks for both "/oauth/token" and "/oauth/token/" and
setting tokenURL accordingly.
- Around line 128-133: The error currently includes the raw token-endpoint body
via string(resp.Body()) which can leak sensitive data; update the OAuth2 error
path so the providerUtils.NewBifrostOperationError call (in the token request
handling where resp, resp.StatusCode() and fmt.Sprintf are used) does not embed
resp.Body(); instead format a status-only message (e.g., "OAuth2 token request
failed with status %d") or include only a sanitized/truncated diagnostic (e.g.,
first N chars with non-printables removed) before calling
providerUtils.NewBifrostOperationError so no full upstream response body is
surfaced.
In `@core/providers/sapaicore/deployment.go`:
- Around line 196-201: The error construction in deployment.go currently
includes the full upstream response body (string(resp.Body())) which can leak
sensitive data; update the NewBifrostOperationError call in the block that
checks resp.StatusCode() (and the similar block at the 267-272 occurrence) to
omit the raw resp.Body() and instead provide a status-focused or
sanitized/truncated message (e.g., "deployments request failed with status %d"
or include a redacted/truncated snippet via a sanitizer/truncation helper) when
building the fmt.Sprintf; keep the use of providerUtils.NewBifrostOperationError
and the resp.StatusCode() check, but remove direct exposure of resp.Body() from
the error text.
- Around line 343-350: normalizeBaseURL currently mishandles inputs ending with
"/v2/" causing "/v2/v2"; update the logic in normalizeBaseURL to first normalize
trailing slashes for the "/v2" segment: if the baseURL ends with "/v2/" return
baseURL without the trailing slash (or normalize to end with "/v2"); if it
already ends with "/v2" return as-is; otherwise if it ends with "/" append "v2";
else append "/v2". Modify the function normalizeBaseURL to implement this
ordering and checks so inputs like "…/v2/" become "…/v2" and others are handled
correctly.
In `@core/providers/sapaicore/sapaicore.go`:
- Around line 1315-1452: The SSE scan loop using scanner (for scanner.Scan()) in
the streaming handler does not check scanner.Err() after the loop, so
I/O/token-size errors are silent; add a post-loop check (if err :=
scanner.Err(); err != nil) that logs the error via logger.Warn (include the
error), and handle it appropriately—e.g., send an error chunk through
providerUtils.ProcessAndSendResponse or close responseChan/cleanup so the caller
knows the stream failed—ensure this check runs after the existing loop that
updates usage, finishReason, chunkIndex, and toolCallIndex.
---
Nitpick comments:
In `@core/providers/sapaicore/auth.go`:
- Around line 59-87: The current GetToken flow uses a global mutex tc.mu and
allows concurrent cache misses for the same key to all call tc.fetchToken,
causing refresh stampedes; change this by introducing per-key deduplication
(e.g., use golang.org/x/sync/singleflight or a per-key in-progress map) keyed by
the same key variable so only one caller invokes tc.fetchToken(clientID,
clientSecret, authURL) for a given key while others wait for the result; on
success publish the token into tc.tokens (cachedToken) under tc.mu as you
already do, and ensure callers receive the same token/error returned by the
singleflight to preserve current double-check semantics.
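The per-key deduplication this prompt asks for can be sketched without external dependencies. Everything below (`group`, `call`, `do`) is illustrative and not Bifrost's actual code; `golang.org/x/sync/singleflight` provides the same behavior off the shelf:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// call tracks one in-flight fetch; waiters block on done.
type call struct {
	done  chan struct{}
	token string
	err   error
}

// group collapses concurrent fetches for the same key into a single
// upstream call, the way singleflight.Group.Do would.
type group struct {
	mu       sync.Mutex
	inflight map[string]*call
}

func (g *group) do(key string, fetch func() (string, error)) (string, error) {
	g.mu.Lock()
	if c, ok := g.inflight[key]; ok {
		g.mu.Unlock()
		<-c.done // another caller is already fetching; wait for its result
		return c.token, c.err
	}
	c := &call{done: make(chan struct{})}
	g.inflight[key] = c
	g.mu.Unlock()

	c.token, c.err = fetch()
	close(c.done) // publish the result to any waiters

	g.mu.Lock()
	delete(g.inflight, key)
	g.mu.Unlock()
	return c.token, c.err
}

func main() {
	g := &group{inflight: make(map[string]*call)}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			tok, _ := g.do("clientID|authURL", func() (string, error) {
				time.Sleep(50 * time.Millisecond) // simulated OAuth round trip
				return "access-token", nil
			})
			fmt.Println(tok)
		}()
	}
	wg.Wait() // with the sleep, the four goroutines typically share one fetch
}
```

On success the caller would still publish the token into the cache under the existing mutex, preserving the double-check semantics the prompt mentions.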
In `@core/providers/sapaicore/sapaicore.go`:
- Around line 750-1303: Replace calls to
providerUtils.NewBifrostOperationError(...) in the unsupported-operation stubs
with providerUtils.NewUnsupportedOperationError(..., provider.GetProviderKey())
so the provider alias is used consistently; update each unsupported method in
this diff (e.g., TextCompletion, TextCompletionStream, CountTokens, Speech,
SpeechStream, Transcription, TranscriptionStream, ImageGeneration,
ImageGenerationStream, ImageEdit, ImageEditStream, ImageVariation, BatchCreate,
BatchList, BatchRetrieve, BatchCancel, BatchResults, FileUpload, FileList,
FileRetrieve, FileDelete, FileContent, ContainerCreate, ContainerList,
ContainerRetrieve, ContainerDelete, ContainerFileCreate, ContainerFileList,
ContainerFileRetrieve, ContainerFileContent, ContainerFileDelete) to call
providerUtils.NewUnsupportedOperationError with the same human-readable message,
the fmt.Errorf("unsupported operation") error, and provider.GetProviderKey() as
the provider identifier.
In `@ui/lib/types/schemas.ts`:
- Around line 172-208: Duplicate deployments validation logic appears across
sapaicoreKeyConfigSchema and other provider schemas; extract it into a shared
helper (e.g., validateDeployments or deploymentsRefineConfig) and replace the
inline .refine callbacks in sapaicoreKeyConfigSchema, azureKeyConfigSchema,
vertexKeyConfigSchema, and bedrockKeyConfigSchema with a call to that helper
(use .refine((data) => validateDeployments(data.deployments), { message: "...",
path: ["deployments"] })). Ensure the helper accepts Record<string,string> |
string | undefined and implements the same trimming, env. prefix, empty-string,
JSON-parse, and object checks so behavior remains identical.
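For illustration, the rule those schemas share (empty string, an `env.` reference, or a JSON object are valid) can be stated compactly. This is a Go transcription of the TypeScript logic; `validDeployments` is a hypothetical name, not a function in the repository:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// validDeployments accepts an empty string, an "env." environment-variable
// reference, or a JSON object; arrays, scalars, and null are rejected.
func validDeployments(deployments string) bool {
	trimmed := strings.TrimSpace(deployments)
	if trimmed == "" || strings.HasPrefix(trimmed, "env.") {
		return true
	}
	var parsed map[string]any
	// Unmarshal succeeds only for JSON objects; "null" leaves parsed nil.
	return json.Unmarshal([]byte(trimmed), &parsed) == nil && parsed != nil
}

func main() {
	fmt.Println(validDeployments(`{"gpt-4o": "dep-123"}`)) // object: valid
	fmt.Println(validDeployments("env.SAP_DEPLOYMENTS"))   // env reference: valid
	fmt.Println(validDeployments("[1, 2]"))                // array: invalid
}
```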
📒 Files selected for processing (6)
- core/providers/sapaicore/auth.go
- core/providers/sapaicore/auth_test.go
- core/providers/sapaicore/deployment.go
- core/providers/sapaicore/deployment_test.go
- core/providers/sapaicore/sapaicore.go
- ui/lib/types/schemas.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- core/providers/sapaicore/deployment_test.go
Actionable comments posted: 1
♻️ Duplicate comments (5)
core/providers/sapaicore/deployment.go (2)
343-350: ⚠️ Potential issue | 🟠 Major
Normalize trailing slashes before enforcing `/v2`. Lines 343-350 mis-normalize inputs ending with `/v2/` into `/v2/v2`.
🔧 Suggested fix

```diff
 func normalizeBaseURL(baseURL string) string {
-	if strings.HasSuffix(baseURL, "/v2") {
-		return baseURL
-	}
-	if strings.HasSuffix(baseURL, "/") {
-		return baseURL + "v2"
-	}
-	return baseURL + "/v2"
+	trimmed := strings.TrimRight(baseURL, "/")
+	if strings.HasSuffix(trimmed, "/v2") {
+		return trimmed
+	}
+	return trimmed + "/v2"
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/deployment.go` around lines 343 - 350, The normalizeBaseURL function mis-handles inputs that end with "/v2/" (producing "/v2/v2"); to fix, first trim trailing slashes from baseURL (e.g., strings.TrimRight(baseURL, "/")) then check if the trimmed value already ends with "/v2" and return it, otherwise append "/v2"; update the normalizeBaseURL implementation accordingly so inputs like "https://api.example.com/v2/" normalize to "https://api.example.com/v2" and inputs without a trailing segment become "/v2" appended.
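As a standalone sketch of the fix described above (`normalizeBaseURL` here is a hypothetical reconstruction, not the repository's function):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeBaseURL trims trailing slashes before the suffix check,
// so ".../v2/" can no longer become ".../v2/v2".
func normalizeBaseURL(baseURL string) string {
	trimmed := strings.TrimRight(baseURL, "/")
	if strings.HasSuffix(trimmed, "/v2") {
		return trimmed
	}
	return trimmed + "/v2"
}

func main() {
	for _, in := range []string{
		"https://api.example.com",
		"https://api.example.com/",
		"https://api.example.com/v2",
		"https://api.example.com/v2/",
	} {
		fmt.Println(normalizeBaseURL(in)) // all four print the same URL
	}
}
```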
196-201: ⚠️ Potential issue | 🟠 Major
Avoid returning raw upstream deployment response bodies in errors. Line 198 and Line 269 include `string(resp.Body())`, which can leak provider internals/sensitive data via surfaced errors.
🔧 Suggested fix

```diff
-	fmt.Sprintf("deployments request failed with status %d: %s", resp.StatusCode(), string(resp.Body())),
-	fmt.Errorf("HTTP %d", resp.StatusCode()),
+	fmt.Sprintf("deployments request failed with status %d", resp.StatusCode()),
+	fmt.Errorf("http %d", resp.StatusCode()),
```

```diff
-	fmt.Sprintf("model listing request failed with status %d: %s", resp.StatusCode(), string(resp.Body())),
-	fmt.Errorf("HTTP %d", resp.StatusCode()),
+	fmt.Sprintf("model listing request failed with status %d", resp.StatusCode()),
+	fmt.Errorf("http %d", resp.StatusCode()),
```
Also applies to: 267-272
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/deployment.go` around lines 196 - 201, The error construction in deployment.go that calls providerUtils.NewBifrostOperationError currently includes string(resp.Body()), which risks leaking upstream/sensitive data; update the error message in the code paths that build the "deployments request failed" and similar errors (the call sites that use resp and NewBifrostOperationError) to omit the raw response body and instead include only safe context (e.g., HTTP status and a generic message or a redacted/truncated indicator), and if needed record the full resp.Body() to a secure debug log (not returned in the error) for troubleshooting; ensure all occurrences (the failing deployments branch and the other occurrence around lines where resp.Body() was interpolated) are updated consistently.
core/providers/sapaicore/sapaicore.go (1)
1325-1452: ⚠️ Potential issue | 🟠 Major
Handle `scanner.Err()` after the Vertex SSE scan loop. When `scanner.Scan()` exits due to read/token-size errors, the current code silently continues; stream failures can be masked.
🔧 Suggested fix

```diff
 	for scanner.Scan() {
 		...
 	}
+
+	if err := scanner.Err(); err != nil {
+		logger.Warn("vertex sse stream scan error: %v", err)
+		return
+	}

 	// Send final chunk with usage
 	if finishReason != nil || usage.TotalTokens > 0 {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/sapaicore.go` around lines 1325 - 1452, The SSE read loop using scanner.Scan() needs to check scanner.Err() after the loop ends and surface any read/token-size errors instead of silently returning; after the for scanner.Scan() loop, call if err := scanner.Err(); err != nil { logger.Error("Vertex stream scanner error: %v", err); /* send an error response via providerUtils.ProcessAndSendResponse or push an error into responseChan similar to other failure paths, include providerName/model and ctx cancellation if appropriate */ } to log and propagate the error; update the surrounding code paths that currently return silently so they either cancel ctx or send a failure Bifrost-style response (use providerUtils.GetBifrostResponseForStreamResponse and responseChan) so callers see stream failures.
core/providers/sapaicore/auth.go (2)
128-133: ⚠️ Potential issue | 🟠 Major
Do not include raw token-endpoint bodies in surfaced errors. Line 130 currently embeds `string(resp.Body())`; this can leak upstream details/tokens through error paths.
🔧 Suggested fix

```diff
 if resp.StatusCode() != fasthttp.StatusOK {
 	return "", 0, providerUtils.NewBifrostOperationError(
-		fmt.Sprintf("OAuth2 token request failed with status %d: %s", resp.StatusCode(), string(resp.Body())),
-		fmt.Errorf("HTTP %d", resp.StatusCode()),
+		fmt.Sprintf("oauth2 token request failed with status %d", resp.StatusCode()),
+		fmt.Errorf("http %d", resp.StatusCode()),
 		schemas.SAPAICore,
 	)
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/auth.go` around lines 128 - 133, The error currently includes the raw HTTP response body via string(resp.Body()), which can leak sensitive token data; update the error returned from the token request (the providerUtils.NewBifrostOperationError call that uses resp.StatusCode() and string(resp.Body())) to omit the response body and only include non-sensitive context (e.g., fmt.Sprintf("OAuth2 token request failed with status %d", resp.StatusCode())). If you need the body for debugging, log it at a debug level separately (do not include it in the surfaced error). Reference resp.StatusCode(), resp.Body(), and the providerUtils.NewBifrostOperationError invocation when making the change.
94-101: ⚠️ Potential issue | 🟠 Major
Normalize token URL before suffix checks. Lines 96-101 mishandle inputs ending with `/oauth/token/` and can produce `.../oauth/token/oauth/token`.
🔧 Suggested fix

```diff
-	tokenURL := authURL
-	if !strings.HasSuffix(tokenURL, "/oauth/token") {
-		if strings.HasSuffix(tokenURL, "/") {
-			tokenURL += "oauth/token"
-		} else {
-			tokenURL += "/oauth/token"
-		}
-	}
+	tokenURL := strings.TrimRight(authURL, "/")
+	if !strings.HasSuffix(tokenURL, "/oauth/token") {
+		tokenURL += "/oauth/token"
+	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/auth.go` around lines 94 - 101, Normalize authURL by trimming trailing slashes before checking/appending the token path to avoid producing ".../oauth/token/oauth/token"; specifically, in the logic that sets tokenURL from authURL (the tokenURL and authURL variables in auth.go), first do a strings.TrimRight(authURL, "/") into a normalized variable, then check if strings.HasSuffix(normalized, "oauth/token") and set tokenURL to either normalized (if it already ends with oauth/token) or normalized + "/oauth/token" otherwise.
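The normalization the prompt describes, as a self-contained sketch (`buildTokenURL` is an illustrative name, not the repository's code):

```go
package main

import (
	"fmt"
	"strings"
)

// buildTokenURL trims trailing slashes first, so ".../oauth/token" and
// ".../oauth/token/" resolve to the same endpoint without duplication.
func buildTokenURL(authURL string) string {
	tokenURL := strings.TrimRight(authURL, "/")
	if !strings.HasSuffix(tokenURL, "/oauth/token") {
		tokenURL += "/oauth/token"
	}
	return tokenURL
}

func main() {
	fmt.Println(buildTokenURL("https://tenant.authentication.example.com/"))
	fmt.Println(buildTokenURL("https://tenant.authentication.example.com/oauth/token/"))
}
```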
🧹 Nitpick comments (1)
core/providers/sapaicore/sapaicore.go (1)
750-1303: Use `provider.GetProviderKey()` in unsupported stubs for consistency. These stubs hardcode `schemas.SAPAICore`; using `provider.GetProviderKey()` keeps error metadata consistent with alias-aware provider patterns.
🔧 Suggested fix (pattern)

```diff
-	return nil, providerUtils.NewBifrostOperationError(
+	return nil, providerUtils.NewBifrostOperationError(
 		"TextCompletion is not supported by SAP AI Core provider - use ChatCompletion instead",
 		fmt.Errorf("unsupported operation"),
-		schemas.SAPAICore,
+		provider.GetProviderKey(),
 	)
```
Apply the same replacement across similar unsupported-operation stubs in this file.
Based on learnings: In provider implementations, when raising unsupported-operation errors, pass `provider.GetProviderKey()` so error context reflects provider aliases consistently.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/sapaicore.go` around lines 750 - 1303, The unsupported-operation stubs (e.g., TextCompletion, TextCompletionStream, CountTokens, Speech, SpeechStream, Transcription, TranscriptionStream, ImageGeneration, ImageGenerationStream, ImageEdit, ImageEditStream, ImageVariation, BatchCreate, BatchList, BatchRetrieve, BatchCancel, BatchResults, FileUpload, FileList, FileRetrieve, FileDelete, FileContent, ContainerCreate, ContainerList, ContainerRetrieve, ContainerDelete, ContainerFileCreate, ContainerFileList, ContainerFileRetrieve, ContainerFileContent, ContainerFileDelete) currently pass schemas.SAPAICore into providerUtils.NewBifrostOperationError; update each call to use provider.GetProviderKey() instead of the hardcoded schemas.SAPAICore so error metadata remains alias-aware and consistent with the rest of the provider code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/providers/sapaicore/types.go`:
- Around line 101-103: The ContextWindow values for the model entries "gpt-4.1",
"gpt-4.1-mini", and "gpt-4.1-nano" are incorrect (currently 1047576); update
each entry's ContextWindow field to the correct value 1048576 while leaving
MaxTokens unchanged so the map entries for these keys reflect the proper context
window.
---
Duplicate comments:
In `@core/providers/sapaicore/auth.go`:
- Around line 128-133: The error currently includes the raw HTTP response body
via string(resp.Body()), which can leak sensitive token data; update the error
returned from the token request (the providerUtils.NewBifrostOperationError call
that uses resp.StatusCode() and string(resp.Body())) to omit the response body
and only include non-sensitive context (e.g., fmt.Sprintf("OAuth2 token request
failed with status %d", resp.StatusCode())). If you need the body for debugging,
log it at a debug level separately (do not include it in the surfaced error).
Reference resp.StatusCode(), resp.Body(), and the
providerUtils.NewBifrostOperationError invocation when making the change.
- Around line 94-101: Normalize authURL by trimming trailing slashes before
checking/appending the token path to avoid producing
".../oauth/token/oauth/token"; specifically, in the logic that sets tokenURL
from authURL (the tokenURL and authURL variables in auth.go), first do a
strings.TrimRight(authURL, "/") into a normalized variable, then check if
strings.HasSuffix(normalized, "oauth/token") and set tokenURL to either
normalized (if it already ends with oauth/token) or normalized + "/oauth/token"
otherwise.
In `@core/providers/sapaicore/deployment.go`:
- Around line 343-350: The normalizeBaseURL function mis-handles inputs that end
with "/v2/" (producing "/v2/v2"); to fix, first trim trailing slashes from
baseURL (e.g., strings.TrimRight(baseURL, "/")) then check if the trimmed value
already ends with "/v2" and return it, otherwise append "/v2"; update the
normalizeBaseURL implementation accordingly so inputs like
"https://api.example.com/v2/" normalize to "https://api.example.com/v2" and
inputs without a trailing segment become "/v2" appended.
- Around line 196-201: The error construction in deployment.go that calls
providerUtils.NewBifrostOperationError currently includes string(resp.Body()),
which risks leaking upstream/sensitive data; update the error message in the
code paths that build the "deployments request failed" and similar errors (the
call sites that use resp and NewBifrostOperationError) to omit the raw response
body and instead include only safe context (e.g., HTTP status and a generic
message or a redacted/truncated indicator), and if needed record the full
resp.Body() to a secure debug log (not returned in the error) for
troubleshooting; ensure all occurrences (the failing deployments branch and the
other occurrence around lines where resp.Body() was interpolated) are updated
consistently.
In `@core/providers/sapaicore/sapaicore.go`:
- Around line 1325-1452: The SSE read loop using scanner.Scan() needs to check
scanner.Err() after the loop ends and surface any read/token-size errors instead
of silently returning; after the for scanner.Scan() loop, call if err :=
scanner.Err(); err != nil { logger.Error("Vertex stream scanner error: %v",
err); /* send an error response via providerUtils.ProcessAndSendResponse or push
an error into responseChan similar to other failure paths, include
providerName/model and ctx cancellation if appropriate */ } to log and propagate
the error; update the surrounding code paths that currently return silently so
they either cancel ctx or send a failure Bifrost-style response (use
providerUtils.GetBifrostResponseForStreamResponse and responseChan) so callers
see stream failures.
---
Nitpick comments:
In `@core/providers/sapaicore/sapaicore.go`:
- Around line 750-1303: The unsupported-operation stubs (e.g., TextCompletion,
TextCompletionStream, CountTokens, Speech, SpeechStream, Transcription,
TranscriptionStream, ImageGeneration, ImageGenerationStream, ImageEdit,
ImageEditStream, ImageVariation, BatchCreate, BatchList, BatchRetrieve,
BatchCancel, BatchResults, FileUpload, FileList, FileRetrieve, FileDelete,
FileContent, ContainerCreate, ContainerList, ContainerRetrieve, ContainerDelete,
ContainerFileCreate, ContainerFileList, ContainerFileRetrieve,
ContainerFileContent, ContainerFileDelete) currently pass schemas.SAPAICore into
providerUtils.NewBifrostOperationError; update each call to use
provider.GetProviderKey() instead of the hardcoded schemas.SAPAICore so error
metadata remains alias-aware and consistent with the rest of the provider code.
📒 Files selected for processing (7)
- core/providers/sapaicore/auth.go
- core/providers/sapaicore/converters.go
- core/providers/sapaicore/converters_test.go
- core/providers/sapaicore/deployment.go
- core/providers/sapaicore/deployment_test.go
- core/providers/sapaicore/sapaicore.go
- core/providers/sapaicore/types.go
🧹 Nitpick comments (1)
core/providers/sapaicore/types.go (1)
74-118: Consider unexporting the model config map for better encapsulation. While `SAPAICoreModelConfigs` is currently only read through `GetSAPAICoreModelConfig()`, unexporting it reduces the surface area for accidental misuse and follows Go conventions for mutable package-level state. If other packages need access, a copy-returning accessor like `GetAllSAPAICoreModelConfigs()` can be added.
♻️ Suggested refactor

```diff
-var SAPAICoreModelConfigs = map[string]SAPAICoreModelConfig{
+var sapaicoreModelConfigs = map[string]SAPAICoreModelConfig{
 	// ... config entries
 }

 func GetSAPAICoreModelConfig(modelName string) SAPAICoreModelConfig {
-	if config, ok := SAPAICoreModelConfigs[modelName]; ok {
+	if config, ok := sapaicoreModelConfigs[modelName]; ok {
 		return config
 	}
 	return SAPAICoreModelConfig{MaxTokens: 8192, ContextWindow: 200000}
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/types.go` around lines 74 - 118, Unexport the package-level map SAPAICoreModelConfigs by renaming it to sapaiCoreModelConfigs and update GetSAPAICoreModelConfig to read from the new name; then add a new exported accessor GetAllSAPAICoreModelConfigs() that returns a copy of the map (to avoid exposing mutable state) so callers can still enumerate configs without mutating package internals. Ensure references to SAPAICoreModelConfigs in the file are updated to sapaiCoreModelConfigs and that GetAllSAPAICoreModelConfigs performs a shallow copy of entries before returning.
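The copy-returning accessor the prompt sketches could look like the following; the type, model name, and values below are illustrative stand-ins for the real `SAPAICoreModelConfig` map:

```go
package main

import (
	"fmt"
	"maps"
)

type modelConfig struct {
	MaxTokens     int
	ContextWindow int
}

// unexported package-level state; external callers go through accessors.
var modelConfigs = map[string]modelConfig{
	"example-model": {MaxTokens: 32768, ContextWindow: 1048576},
}

func getModelConfig(name string) modelConfig {
	if c, ok := modelConfigs[name]; ok {
		return c
	}
	return modelConfig{MaxTokens: 8192, ContextWindow: 200000} // fallback default
}

// allModelConfigs returns a shallow copy so callers cannot mutate the map.
func allModelConfigs() map[string]modelConfig {
	return maps.Clone(modelConfigs)
}

func main() {
	cp := allModelConfigs()
	cp["example-model"] = modelConfig{}                        // mutating the copy…
	fmt.Println(getModelConfig("example-model").MaxTokens)     // …leaves the original intact
}
```

`maps.Clone` (Go 1.21+) is enough here because the values are plain structs; a deep copy would only be needed if the values held pointers or slices.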
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@core/providers/sapaicore/types.go`:
- Around line 74-118: Unexport the package-level map SAPAICoreModelConfigs by
renaming it to sapaiCoreModelConfigs and update GetSAPAICoreModelConfig to read
from the new name; then add a new exported accessor
GetAllSAPAICoreModelConfigs() that returns a copy of the map (to avoid exposing
mutable state) so callers can still enumerate configs without mutating package
internals. Ensure references to SAPAICoreModelConfigs in the file are updated to
sapaiCoreModelConfigs and that GetAllSAPAICoreModelConfigs performs a shallow
copy of entries before returning.
Actionable comments posted: 1
♻️ Duplicate comments (2)
core/providers/sapaicore/sapaicore.go (1)
1329-1334: ⚠️ Potential issue | 🟠 Major
Scanner failures are logged but not surfaced to stream consumers. At Line 1329, `scanner.Err()` is only logged. This can end a stream as partial success without signaling an error chunk to callers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/sapaicore.go` around lines 1329 - 1334, The scanner error is only logged (scanner.Err()) and not propagated to consumers; modify the SSE streaming code so that when scanner.Err() != nil and ctx.Err() == nil you both log (logger.Warn) and surface the failure to clients by emitting an error chunk/event over the same stream writer used in this function (e.g., send an SSE "error" message or use the existing stream.Send/sendChunk helper) and then return the scanner error from the streaming function so callers receive a non-success result.
core/providers/sapaicore/auth.go (1)
146-148: ⚠️ Potential issue | 🟡 Minor
Use lowercase text for wrapped `error` values. Line 147 uses `fmt.Errorf("HTTP %d", ...)`; this should be lowercase (`"http %d"`).
🛠️ Suggested change

```diff
-	fmt.Errorf("HTTP %d", resp.StatusCode()),
+	fmt.Errorf("http %d", resp.StatusCode()),
```
As per coding guidelines: "Error strings must be lowercase with no trailing punctuation per Go conventions."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/auth.go` around lines 146 - 148, The wrapped error string in core/providers/sapaicore/auth.go currently uses fmt.Errorf("HTTP %d", resp.StatusCode()) which violates Go error string conventions; change that wrapped error to use lowercase and no trailing punctuation (e.g., fmt.Errorf("http %d", resp.StatusCode())) in the OAuth2 token request failure handling so the created error follows the guidelines and still reflects resp.StatusCode().
🧹 Nitpick comments (3)
core/providers/sapaicore/sapaicore.go (1)
205-208: Escape `deploymentID` before inserting into the URL path. At Line 207, `deploymentID` is interpolated raw into a path segment. Path-escape it to prevent malformed URLs when IDs contain reserved characters.
♻️ Suggested change

```diff
 import (
 	"bufio"
 	"context"
 	"errors"
 	"fmt"
 	"io"
 	"maps"
+	"net/url"
 	"strings"
 	"time"
@@
 func buildRequestURL(baseURL, deploymentID, path string) string {
 	normalizedURL := normalizeBaseURL(baseURL)
-	return fmt.Sprintf("%s/inference/deployments/%s%s", normalizedURL, deploymentID, path)
+	return fmt.Sprintf("%s/inference/deployments/%s%s", normalizedURL, url.PathEscape(deploymentID), path)
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/sapaicore.go` around lines 205 - 208, The buildRequestURL function inserts deploymentID directly into the path; path-escape deploymentID (e.g., using url.PathEscape) before formatting the string so reserved characters don't break the URL. Update buildRequestURL to call a path-escape on the deploymentID variable and use the escaped value in the fmt.Sprintf call that builds the "/inference/deployments/%s%s" path.
core/providers/sapaicore/auth.go (1)
48-50: Normalize `authURL` before building the token cache key. At Line 49, semantically equivalent URLs (e.g., with/without trailing slash) create different cache entries, which reduces cache hit rate and makes `ClearToken` less predictable across formats.
♻️ Suggested change

```diff
 func cacheKey(clientID, authURL string) string {
-	return fmt.Sprintf("%d:%s:%s", len(clientID), clientID, authURL)
+	normalizedAuthURL := strings.TrimRight(authURL, "/")
+	return fmt.Sprintf("%d:%s:%s", len(clientID), clientID, normalizedAuthURL)
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/auth.go` around lines 48 - 50, Normalize authURL before building the token cache key in cacheKey (and any callers like ClearToken): parse authURL (using net/url), canonicalize it by lowercasing the scheme and host, remove any trailing slash from the path (or collapse multiple slashes), then use the normalized string when formatting the key in cacheKey so semantically equivalent URLs produce the same cache entry.core/providers/sapaicore/deployment.go (1)
60-62: Canonicalize `baseURL` in `deploymentCacheKey` to avoid duplicate cache buckets.

At Line 61, equivalent endpoints can hash to different keys if the URL format differs (`...`, `.../`, `.../v2`), causing avoidable cache misses.

♻️ Suggested change
```diff
 func deploymentCacheKey(baseURL, resourceGroup string) string {
-	return fmt.Sprintf("%d:%s:%s", len(baseURL), baseURL, resourceGroup)
+	normalizedBaseURL := normalizeBaseURL(baseURL)
+	return fmt.Sprintf("%d:%s:%s", len(normalizedBaseURL), normalizedBaseURL, resourceGroup)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/deployment.go` around lines 60 - 62, The current deploymentCacheKey uses raw baseURL which allows semantically equivalent URLs to produce different keys; update deploymentCacheKey to canonicalize baseURL before hashing: parse baseURL with net/url, lowercase scheme/host, remove default ports, normalize the path with path.Clean, strip any trailing slash (except root) and optional API version segments if needed, then use that canonicalized URL (instead of raw baseURL) in fmt.Sprintf; keep the function name deploymentCacheKey and same parameter list so callers are unchanged.
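The canonicalization steps this prompt describes can be sketched with the standard library alone. This is a minimal illustration, not Bifrost's actual code: the function name `canonicalBaseURL` and the fall-back-to-raw-on-parse-error behavior are assumptions made for the sketch.

```go
package main

import (
	"fmt"
	"net/url"
	"path"
	"strings"
)

// canonicalBaseURL lowercases scheme/host, drops default ports, cleans the
// path, and strips trailing slashes so equivalent URLs share one cache key.
// Illustrative sketch only; error handling falls back to the raw input.
func canonicalBaseURL(raw string) string {
	u, err := url.Parse(raw)
	if err != nil {
		return raw // prefer a suboptimal key over failing the request
	}
	u.Scheme = strings.ToLower(u.Scheme)
	host := strings.ToLower(u.Host)
	// Drop default ports for http/https.
	if (u.Scheme == "https" && strings.HasSuffix(host, ":443")) ||
		(u.Scheme == "http" && strings.HasSuffix(host, ":80")) {
		host = host[:strings.LastIndex(host, ":")]
	}
	u.Host = host
	if u.Path != "" {
		u.Path = path.Clean(u.Path) // collapses "//" and trailing "/"
		if u.Path == "/" {
			u.Path = ""
		}
	}
	return strings.TrimRight(u.String(), "/")
}

func main() {
	fmt.Println(canonicalBaseURL("HTTPS://API.Example.com:443/v2/")) // https://api.example.com/v2
	fmt.Println(canonicalBaseURL("https://api.example.com/v2"))      // https://api.example.com/v2
}
```

Both variants produce the same key, which is the property the review is after.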
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/providers/sapaicore/deployment.go`:
- Around line 202-204: The error strings created with fmt.Errorf("HTTP %d", ...)
in deployment.go (used when building the wrapped error for the deployments
request response and the other occurrence around the same block) should follow
Go conventions: change the message to lowercase (e.g., fmt.Errorf("http %d",
resp.StatusCode())) and remove trailing punctuation; update both occurrences
referenced by the wrapped error creation that currently use resp.StatusCode()
(and the related formatted message paired with body and schemas.SAPAICore) so
the returned error text is lowercase and consistent.
---
Duplicate comments:
In `@core/providers/sapaicore/auth.go`:
- Around line 146-148: The wrapped error string in
core/providers/sapaicore/auth.go currently uses fmt.Errorf("HTTP %d",
resp.StatusCode()) which violates Go error string conventions; change that
wrapped error to use lowercase and no trailing punctuation (e.g.,
fmt.Errorf("http %d", resp.StatusCode())) in the OAuth2 token request failure
handling so the created error follows the guidelines and still reflects
resp.StatusCode().
In `@core/providers/sapaicore/sapaicore.go`:
- Around line 1329-1334: The scanner error is only logged (scanner.Err()) and
not propagated to consumers; modify the SSE streaming code so that when
scanner.Err() != nil and ctx.Err() == nil you both log (logger.Warn) and surface
the failure to clients by emitting an error chunk/event over the same stream
writer used in this function (e.g., send an SSE "error" message or use the
existing stream.Send/sendChunk helper) and then return the scanner error from
the streaming function so callers receive a non-success result.
---
Nitpick comments:
In `@core/providers/sapaicore/auth.go`:
- Around line 48-50: Normalize authURL before building the token cache key in
cacheKey (and any callers like ClearToken): parse authURL (using net/url),
canonicalize it by lowercasing the scheme and host, remove any trailing slash
from the path (or collapse multiple slashes), then use the normalized string
when formatting the key in cacheKey so semantically equivalent URLs produce the
same cache entry.
In `@core/providers/sapaicore/deployment.go`:
- Around line 60-62: The current deploymentCacheKey uses raw baseURL which
allows semantically equivalent URLs to produce different keys; update
deploymentCacheKey to canonicalize baseURL before hashing: parse baseURL with
net/url, lowercase scheme/host, remove default ports, normalize the path with
path.Clean, strip any trailing slash (except root) and optional API version
segments if needed, then use that canonicalized URL (instead of raw baseURL) in
fmt.Sprintf; keep the function name deploymentCacheKey and same parameter list
so callers are unchanged.
In `@core/providers/sapaicore/sapaicore.go`:
- Around line 205-208: The buildRequestURL function inserts deploymentID
directly into the path; path-escape deploymentID (e.g., using url.PathEscape)
before formatting the string so reserved characters don’t break the URL. Update
buildRequestURL to call a path-escape on the deploymentID variable and use the
escaped value in the fmt.Sprintf call that builds the
"/inference/deployments/%s%s" path.
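To illustrate the escaping behavior the prompt above relies on, here is a small standalone sketch; the `buildPath` helper is invented for the example, and only `url.PathEscape` itself is the real standard-library API:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildPath shows why deployment IDs should be path-escaped: reserved
// characters such as "/" would otherwise change the path's shape.
func buildPath(deploymentID, suffix string) string {
	return fmt.Sprintf("/inference/deployments/%s%s", url.PathEscape(deploymentID), suffix)
}

func main() {
	fmt.Println(buildPath("d-12345", "/chat/completions"))
	// /inference/deployments/d-12345/chat/completions
	fmt.Println(buildPath("bad/../id", "/chat/completions"))
	// /inference/deployments/bad%2F..%2Fid/chat/completions
}
```

A hostile or malformed ID stays confined to a single path segment instead of rewriting the route.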
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`core/go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (5)
- core/go.mod
- core/providers/sapaicore/auth.go
- core/providers/sapaicore/deployment.go
- core/providers/sapaicore/deployment_test.go
- core/providers/sapaicore/sapaicore.go
✅ Files skipped from review due to trivial changes (1)
- core/go.mod
🧹 Nitpick comments (2)
core/providers/sapaicore/deployment.go (2)
60-62: Canonicalize `baseURL` inside cache-key generation.

`deploymentCacheKey` keys on raw `baseURL`, so equivalent URLs (`...`, `.../`, `.../v2`) can land in different buckets. Normalizing before key construction improves hit rate and makes `ClearCache` behavior less form-dependent.
103-116: Consider refreshing once before returning “model not deployed” on cache miss.

Line 109-Line 115 and Line 136-Line 140 return immediately when the cache entry is fresh but the model is absent. That effectively negative-caches misses for the full TTL, so newly created deployments may stay undiscoverable until expiry. A one-time miss-triggered refresh (preferably coalesced per cache key) would reduce this stale window.
Also applies to: 131-141
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/deployment.go` around lines 103 - 116, The code currently returns "model not deployed" immediately when a fresh cache entry exists but modelName is missing (using dc.deployments, cached.fetchedAt, dc.ttl, cached.modelToDeployment), which causes negative caching; change the logic to perform one on-demand refresh for that cacheKey before returning the error: when modelName is absent and the cache is fresh, release the lock, coalesce a single refresh for that cacheKey (e.g., a per-key refreshInProgress flag or singleflight), fetch and update dc.deployments, then re-acquire the lock and re-check cached.modelToDeployment; only if still missing return providerUtils.NewBifrostOperationError. Ensure the refresh is coalesced to avoid thundering herds and preserve proper locking to prevent races.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@core/providers/sapaicore/deployment.go`:
- Around line 103-116: The code currently returns "model not deployed"
immediately when a fresh cache entry exists but modelName is missing (using
dc.deployments, cached.fetchedAt, dc.ttl, cached.modelToDeployment), which
causes negative caching; change the logic to perform one on-demand refresh for
that cacheKey before returning the error: when modelName is absent and the cache
is fresh, release the lock, coalesce a single refresh for that cacheKey (e.g., a
per-key refreshInProgress flag or singleflight), fetch and update
dc.deployments, then re-acquire the lock and re-check cached.modelToDeployment;
only if still missing return providerUtils.NewBifrostOperationError. Ensure the
refresh is coalesced to avoid thundering herds and preserve proper locking to
prevent races.
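A stdlib-only sketch of the per-key coalescing this prompt suggests is below; in practice `golang.org/x/sync/singleflight` provides the same behavior off the shelf. All names here (`refreshGroup`, the key, the fetched value) are hypothetical:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// call holds the result of one in-flight refresh, shared by all waiters.
type call struct {
	wg  sync.WaitGroup
	val string
	err error
}

// refreshGroup coalesces concurrent refreshes per key so a burst of cache
// misses triggers a single upstream fetch instead of a thundering herd.
type refreshGroup struct {
	mu       sync.Mutex
	inflight map[string]*call
}

func (g *refreshGroup) Do(key string, fetch func() (string, error)) (string, error) {
	g.mu.Lock()
	if g.inflight == nil {
		g.inflight = make(map[string]*call)
	}
	if c, ok := g.inflight[key]; ok {
		g.mu.Unlock()
		c.wg.Wait() // a refresh for this key is already running; share its result
		return c.val, c.err
	}
	c := new(call)
	c.wg.Add(1)
	g.inflight[key] = c
	g.mu.Unlock()

	c.val, c.err = fetch() // the network call happens outside the lock
	c.wg.Done()

	g.mu.Lock()
	delete(g.inflight, key)
	g.mu.Unlock()
	return c.val, c.err
}

func main() {
	var g refreshGroup
	var fetches int32
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			g.Do("rg-default", func() (string, error) {
				atomic.AddInt32(&fetches, 1)
				time.Sleep(20 * time.Millisecond) // simulate the deployments API call
				return "d-12345", nil
			})
		}()
	}
	wg.Wait()
	fmt.Println("fetches coalesced to:", atomic.LoadInt32(&fetches))
}
```

Overlapping callers wait on the same `call` and share one result, which bounds upstream load during token or deployment refresh storms.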
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@framework/configstore/tables/key.go`:
- Around line 673-700: The reconstruction guard currently only creates sapConfig
when any of the EnvVar fields (SAPAICoreClientID, SAPAICoreClientSecret,
SAPAICoreAuthURL, SAPAICoreBaseURL, SAPAICoreResourceGroup) are non-nil, causing
rows that only have SAPAICoreDeploymentsJSON to be dropped; update the
conditional that decides to allocate sapConfig to also check
SAPAICoreDeploymentsJSON (and that it's non-empty) so SAPAICoreKeyConfig is
rebuilt when deployments JSON exists, then proceed with the existing
json.Unmarshal into sapConfig.Deployments and assignment to
k.SAPAICoreKeyConfig.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
transports/config.schema.json (1)
1519-1695: ⚠️ Potential issue | 🔴 Critical

JSON schema is invalid and unparseable due to multiple syntax errors in the key-config union block.

Line 1518 never closes the `vllm_key_config` required array and object; it transitions directly to `sapaicore_key_config` on line 1519. Additionally, the `oneOf` cases contain duplicate condition entries with missing commas (lines 1567, 1597, 1622, 1647, 1672), and the `sapaicore` case inserted at line 1673 is structurally malformed and incomplete. The file fails to parse as JSON with error "Expecting ',' delimiter: line 1519 column 15".

The proposed fix correctly:
- Closes `vllm_key_config` with `]`, `"additionalProperties": false`, and `}`
- Positions `sapaicore_key_config` as a proper sibling entry
- Removes duplicate anyOf conditions throughout
- Adds the missing comma separators in anyOf arrays
- Creates a properly structured final `sapaicore_key_config` oneOf case

Proposed fix
```diff
 "vllm_key_config": {
   "type": "object",
   "properties": {
     "url": {
       "type": "string",
       "minLength": 1,
       "description": "VLLM server base URL (can use env. prefix)"
     },
     "model_name": {
       "type": "string",
       "minLength": 1,
       "description": "Exact model name served on this VLLM instance"
     }
   },
   "required": [
     "url",
     "model_name"
-  "sapaicore_key_config": {
+  ],
+  "additionalProperties": false
+},
+"sapaicore_key_config": {
   "type": "object",
   "properties": {
     "client_id": {
       "type": "string",
       "description": "SAP AI Core OAuth2 client ID (can use env. prefix)"
     },
     "client_secret": {
       "type": "string",
       "description": "SAP AI Core OAuth2 client secret (can use env. prefix)"
     },
     "auth_url": {
       "type": "string",
       "description": "SAP AI Core OAuth2 token URL (can use env. prefix)"
     },
     "base_url": {
       "type": "string",
       "description": "SAP AI Core API base URL (can use env. prefix)"
     },
     "resource_group": {
       "type": "string",
       "description": "SAP AI Core resource group"
     },
     "deployments": {
       "type": "object",
       "additionalProperties": { "type": "string" },
       "description": "Model to deployment ID mappings"
     }
   },
   "required": [
     "client_id",
     "client_secret",
     "auth_url",
     "base_url",
     "resource_group"
   ],
   "additionalProperties": false
 }
 },
 "oneOf": [
   {
     "not": {
       "anyOf": [
         { "required": ["azure_key_config"] },
         { "required": ["vertex_key_config"] },
         { "required": ["bedrock_key_config"] },
-        { "required": ["vllm_key_config"] }
-        { "required": ["azure_key_config"] },
-        { "required": ["vertex_key_config"] },
-        { "required": ["bedrock_key_config"] },
+        { "required": ["vllm_key_config"] },
         { "required": ["sapaicore_key_config"] }
       ]
     }
   },
   {
     "required": ["azure_key_config"],
     "not": {
       "anyOf": [
         { "required": ["vertex_key_config"] },
         { "required": ["bedrock_key_config"] },
-        { "required": ["vllm_key_config"] }
-        { "required": ["vertex_key_config"] },
-        { "required": ["bedrock_key_config"] },
+        { "required": ["vllm_key_config"] },
         { "required": ["sapaicore_key_config"] }
       ]
     }
   },
   {
     "required": ["vertex_key_config"],
     "not": {
       "anyOf": [
         { "required": ["azure_key_config"] },
         { "required": ["bedrock_key_config"] },
-        { "required": ["vllm_key_config"] }
-        { "required": ["azure_key_config"] },
-        { "required": ["bedrock_key_config"] },
+        { "required": ["vllm_key_config"] },
         { "required": ["sapaicore_key_config"] }
       ]
     }
   },
   {
     "required": ["bedrock_key_config"],
     "not": {
       "anyOf": [
         { "required": ["azure_key_config"] },
         { "required": ["vertex_key_config"] },
-        { "required": ["vllm_key_config"] }
-        { "required": ["azure_key_config"] },
-        { "required": ["vertex_key_config"] },
+        { "required": ["vllm_key_config"] },
         { "required": ["sapaicore_key_config"] }
       ]
     }
   },
   {
     "required": ["vllm_key_config"],
     "not": {
       "anyOf": [
         { "required": ["azure_key_config"] },
         { "required": ["vertex_key_config"] },
-        { "required": ["bedrock_key_config"] }
-        "required": [
-          "sapaicore_key_config"
-        ],
-        "not": {
-          "anyOf": [
-            { "required": ["azure_key_config"] },
-            { "required": ["vertex_key_config"] },
-            { "required": ["bedrock_key_config"] }
+        { "required": ["bedrock_key_config"] },
+        { "required": ["sapaicore_key_config"] }
+      ]
+    }
+  },
+  {
+    "required": ["sapaicore_key_config"],
+    "not": {
+      "anyOf": [
+        { "required": ["azure_key_config"] },
+        { "required": ["vertex_key_config"] },
+        { "required": ["bedrock_key_config"] },
+        { "required": ["vllm_key_config"] }
       ]
     }
   }
 ],
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/config.schema.json` around lines 1519 - 1695, The JSON schema's key-config union is malformed: close the vllm_key_config definition (add the missing closing ] and object braces and "additionalProperties": false), move sapaicore_key_config so it is a sibling property (not injected mid-object), remove duplicated anyOf condition entries, and add the missing commas between array elements inside each oneOf/anyOf block; finally, add a dedicated oneOf case for "sapaicore_key_config" mirroring the other provider cases (required: ["sapaicore_key_config"] plus a not.anyOf that excludes the other provider keys) so the oneOf union is valid and parseable.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@transports/bifrost-http/lib/config.go`:
- Around line 709-718: The legacy key-hash fallback is missing
ReplicateKeyConfig when constructing the schemas.Key passed to
configstore.GenerateKeyHash, which can misclassify key changes; update both
places that build the fallback payload (the schemas.Key literal used with
configstore.GenerateKeyHash) to include ReplicateKeyConfig:
dbKey.ReplicateKeyConfig so the generated hash includes replication settings and
reconcile correctly.
- Around line 175-186: The mapping that materializes virtual-key provider keys
into schemas.Key currently omits the ReplicateKeyConfig field, causing
provider-specific metadata to be dropped; update the constructor/mapper that
maps tableKey (the object with fields ID, Name, Value, Models, Weight via
getWeight, Enabled, UseForBatchAPI, AzureKeyConfig, VertexKeyConfig,
BedrockKeyConfig, SAPAICoreKeyConfig, ConfigHash) to include ReplicateKeyConfig:
set ReplicateKeyConfig: tableKey.ReplicateKeyConfig so the full key definition
is preserved when creating schemas.Key.
In `@transports/config.schema.json`:
- Around line 2354-2383: The schema for provider_with_sapaicore_config is
missing the pricing_overrides property; update the
provider_with_sapaicore_config object to include a "pricing_overrides" property
that references the existing definition (e.g., "$ref":
"#/$defs/pricing_overrides") consistent with the other provider_with_*_config
definitions, and ensure its presence follows the same required/optional
treatment as the peers and preserves "additionalProperties": false.
---
Outside diff comments:
In `@transports/config.schema.json`:
- Around line 1519-1695: The JSON schema's key-config union is malformed: close
the vllm_key_config definition (add the missing closing ] and object braces and
"additionalProperties": false), move sapaicore_key_config so it is a sibling
property (not injected mid-object), remove duplicated anyOf condition entries,
and add the missing commas between array elements inside each oneOf/anyOf block;
finally, add a dedicated oneOf case for "sapaicore_key_config" mirroring the
other provider cases (required: ["sapaicore_key_config"] plus a not.anyOf that
excludes the other provider keys) so the oneOf union is valid and parseable.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- core/schemas/bifrost.go
- docs/openapi/openapi.json
- transports/bifrost-http/lib/config.go
- transports/config.schema.json
- ui/lib/types/schemas.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- core/schemas/bifrost.go
- ui/lib/types/schemas.ts
♻️ Duplicate comments (1)
transports/bifrost-http/lib/config.go (1)
174-187: ⚠️ Potential issue | 🟠 Major

`ReplicateKeyConfig` is still missing from virtual-key materialization.

The past review flagged this same issue. Despite the "Addressed" status, `ReplicateKeyConfig` is not included in this `schemas.Key` construction. This will silently drop Replicate-specific key metadata when materializing virtual-key provider keys.

🐛 Proposed fix
```diff
 keysToAddToProvider = append(keysToAddToProvider, schemas.Key{
 	ID:                 tableKey.KeyID,
 	Name:               tableKey.Name,
 	Value:              tableKey.Value,
 	Models:             tableKey.Models,
 	Weight:             getWeight(tableKey.Weight),
 	Enabled:            tableKey.Enabled,
 	UseForBatchAPI:     tableKey.UseForBatchAPI,
 	AzureKeyConfig:     tableKey.AzureKeyConfig,
 	VertexKeyConfig:    tableKey.VertexKeyConfig,
 	BedrockKeyConfig:   tableKey.BedrockKeyConfig,
+	ReplicateKeyConfig: tableKey.ReplicateKeyConfig,
 	SAPAICoreKeyConfig: tableKey.SAPAICoreKeyConfig,
 	ConfigHash:         tableKey.ConfigHash,
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/lib/config.go` around lines 174 - 187, The virtual-key materialization is omitting ReplicateKeyConfig causing Replicate metadata loss; update the keysToAddToProvider append that creates schemas.Key (in the keysToAddToProvider construction) to include ReplicateKeyConfig: set ReplicateKeyConfig: tableKey.ReplicateKeyConfig alongside the other config fields (e.g., AzureKeyConfig, VertexKeyConfig, BedrockKeyConfig, SAPAICoreKeyConfig) so schemas.Key captures Replicate-specific data during materialization.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@transports/bifrost-http/lib/config.go`:
- Around line 174-187: The virtual-key materialization is omitting
ReplicateKeyConfig causing Replicate metadata loss; update the
keysToAddToProvider append that creates schemas.Key (in the keysToAddToProvider
construction) to include ReplicateKeyConfig: set ReplicateKeyConfig:
tableKey.ReplicateKeyConfig alongside the other config fields (e.g.,
AzureKeyConfig, VertexKeyConfig, BedrockKeyConfig, SAPAICoreKeyConfig) so
schemas.Key captures Replicate-specific data during materialization.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- core/providers/sapaicore/converters.go
- transports/bifrost-http/lib/config.go
- transports/config.schema.json
🚧 Files skipped from review as they are similar to previous changes (1)
- transports/config.schema.json
… fix image handling and compaction toolConfig; map developer role
- Add missing Provider interface methods (Rerank, Video*)
- Update OpenAI handler calls for new signatures (responseHandler, ErrorConverter params)
- Fix UI component merge issues in icons, schemas, and form fragments
- Fix key.go BeforeSave/AfterFind with SAP AI Core field handling
- Fix providers.go key redaction merge for SAP AI Core fields
…eaming) for Bedrock-backed models
…clarify native tool calling via Bedrock Converse
…r, Responses API support, tool calling, testing steps)
- Fix GenerateKeyHash omitting SAPAICoreKeyConfig (config drift bug)
- Release write locks before network calls in auth/deployment caches
- Copy request params before mutation; add reasoning model filtering to streaming
- Use type:text for encrypted SAPAICoreResourceGroup column
- Value-copy SAPAICoreKeyConfig fields before pointer assignment in BeforeSave
- Add sapaicore_key_config validation in provider form superRefine
- Fix overview.mdx Responses columns for SAP AI Core
- Replace token_url with auth_url in docs config examples
- Align doc tabs to Web UI / API / config.json format
- Add data-testid attributes to SAP AI Core form fields
- Stabilize auth_test.go with single time reference
- Make errors_test.go EventID assertion unconditional
- Fix placeholder model name to use dotted format
- Trim whitespace in hasEnvVarValue helper
- Use length-prefixed cache keys to prevent collision in auth and deployment caches
- URL-encode resourceGroup in deployment API query strings
- Guard against nil Params before reasoning model param filtering (both sync/stream)
- Fix schemas.ts first refine to pass through when partially filled (let second refine report specific error)
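The length-prefixed key scheme this commit refers to can be demonstrated in a few lines; the helper names below are illustrative:

```go
package main

import "fmt"

// naiveKey can collide when an input contains the separator; prefixedKey
// disambiguates by encoding len(id), so the first field's boundary is fixed.
func naiveKey(id, u string) string    { return id + ":" + u }
func prefixedKey(id, u string) string { return fmt.Sprintf("%d:%s:%s", len(id), id, u) }

func main() {
	fmt.Println(naiveKey("a:b", "c") == naiveKey("a", "b:c"))       // true — collision
	fmt.Println(prefixedKey("a:b", "c") == prefixedKey("a", "b:c")) // false — distinct
}
```

Two different credential pairs that would otherwise flatten to the same string stay distinct, which is what keeps cache entries from bleeding across keys.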
…ing convention Renamed: BackendType, BackendOpenAI/Bedrock/Vertex, DeploymentStatus and constants, DeploymentResource, DeploymentDetails, DeploymentResourceDetails, BackendDetails, BackendModel, DeploymentsResponse, TokenResponse, ModelConfig, ModelConfigs, GetModelConfig, CachedDeployment — all prefixed with SAPAICore. JSON tags unchanged to preserve wire format.
- Fix token URL normalization: strip trailing slash before appending /oauth/token (prevents double-append on URLs ending with /oauth/token/)
- Truncate response body in error messages to 256 chars to prevent sensitive data leak (auth.go, deployment.go - 3 occurrences)
- Fix normalizeBaseURL: strip trailing slash before /v2 check (prevents /v2/v2)
- Add scanner.Err() check after Vertex SSE scan loop (catches silent I/O errors)
- Add singleflight to TokenCache.GetToken() to coalesce concurrent refresh requests (prevents thundering herd on token expiry)
- Replace NewBifrostOperationError with NewUnsupportedOperationError in 30 stubs (consistent error format with provider name and request type metadata)
- Add test case for normalizeBaseURL with trailing /v2/
…dalone copies Rename local variables (e.g. clientID -> sapaicoreClientIDValue) to make it explicit that these are standalone value copies, not aliases of the caller-owned SAPAICoreKeyConfig fields. Encryption operates on these copies so the original config is never mutated.
…ruction guard Rows with only deployments JSON (and no EnvVar fields) were silently dropped because the conditional only checked the five EnvVar pointers. Now also checks SAPAICoreDeploymentsJSON so the config is rebuilt when deployments exist.
…k and pricing_overrides to schema
- Add ReplicateKeyConfig to both GenerateKeyHash fallback call sites in config.go so key-change detection includes replication settings
- Add pricing_overrides property to provider_with_sapaicore_config in config.schema.json, consistent with all other provider_with_*_config defs
- Fix CachedTokens -> CachedReadTokens in converters.go (upstream rename)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
transports/config.schema.json (1)
1519-1695: ⚠️ Potential issue | 🔴 Critical

JSON syntax errors in virtual key config block (blocking schema validation).

The `sapaicore_key_config` insertion and `oneOf` branch structures have missing commas and improperly closed objects, causing JSON parse failures at line 1519. The file will not load until these syntax errors are fixed.

🔧 Suggested structural fix
```diff
 "required": [
   "url",
   "model_name"
-"sapaicore_key_config": {
+],
+"additionalProperties": false
+},
+"sapaicore_key_config": {
   "type": "object",
   "properties": {
     "client_id": {
@@
   "required": [
     "client_id",
     "client_secret",
     "auth_url",
     "base_url",
     "resource_group"
   ],
   "additionalProperties": false
 }
 },
 "oneOf": [
   {
     "not": {
       "anyOf": [
         { "required": ["azure_key_config"] },
         { "required": ["vertex_key_config"] },
         { "required": ["bedrock_key_config"] },
-        { "required": ["vllm_key_config"] }
-        {
-          "required": [
-            "azure_key_config"
-          ]
-        },
-        {
-          "required": [
-            "vertex_key_config"
-          ]
-        },
-        {
-          "required": [
-            "bedrock_key_config"
-          ]
-        },
-        {
-          "required": [
-            "sapaicore_key_config"
-          ]
-        }
+        { "required": ["vllm_key_config"] },
+        { "required": ["sapaicore_key_config"] }
       ]
     }
   },
@@
   {
     "required": ["vllm_key_config"],
     "not": {
       "anyOf": [
         { "required": ["azure_key_config"] },
         { "required": ["vertex_key_config"] },
-        { "required": ["bedrock_key_config"] }
-        "required": [
-          "sapaicore_key_config"
-        ],
-        "not": {
-          "anyOf": [
-            {
-              "required": [
-                "azure_key_config"
-              ]
-            },
-            {
-              "required": [
-                "vertex_key_config"
-              ]
-            },
-            {
-              "required": [
-                "bedrock_key_config"
-              ]
-            }
+        { "required": ["bedrock_key_config"] },
+        { "required": ["sapaicore_key_config"] }
       ]
     }
+  },
+  {
+    "required": ["sapaicore_key_config"],
+    "not": {
+      "anyOf": [
+        { "required": ["azure_key_config"] },
+        { "required": ["vertex_key_config"] },
+        { "required": ["bedrock_key_config"] },
+        { "required": ["vllm_key_config"] }
+      ]
+    }
   }
 ],
```

As the authoritative source of truth for all config.json fields, this schema file must be syntactically valid JSON before merging.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/config.schema.json` around lines 1519 - 1695, The schema has JSON syntax and structural errors around sapaicore_key_config and the surrounding oneOf branches: there are missing commas between array items, improperly closed objects/brackets, and duplicated/misaligned required blocks for azure_key_config, vertex_key_config, bedrock_key_config and vllm_key_config; fix by correcting punctuation (add missing commas), properly close the objects/arrays, remove duplicated redundant required entries, and ensure each oneOf branch is a valid object with the correct "required" and "not": { "anyOf": [...] } arrays (include a dedicated branch for "sapaicore_key_config" analogous to the others). Locate and repair the blocks referencing sapaicore_key_config, oneOf, azure_key_config, vertex_key_config, bedrock_key_config, and vllm_key_config so the JSON parses and the exclusive-key-config logic remains intact.
♻️ Duplicate comments (3)
transports/bifrost-http/lib/config.go (1)
175-186: ⚠️ Potential issue | 🟠 Major

Preserve `ReplicateKeyConfig` in virtual-key full-key materialization.

The `schemas.Key` construction still drops `ReplicateKeyConfig`, which can silently strip provider-specific key metadata when full key definitions are sourced from governance virtual keys.

💡 Suggested fix
```diff
 keysToAddToProvider = append(keysToAddToProvider, schemas.Key{
 	ID:                 tableKey.KeyID,
 	Name:               tableKey.Name,
 	Value:              tableKey.Value,
 	Models:             tableKey.Models,
 	Weight:             getWeight(tableKey.Weight),
 	Enabled:            tableKey.Enabled,
 	UseForBatchAPI:     tableKey.UseForBatchAPI,
 	AzureKeyConfig:     tableKey.AzureKeyConfig,
 	VertexKeyConfig:    tableKey.VertexKeyConfig,
 	BedrockKeyConfig:   tableKey.BedrockKeyConfig,
+	ReplicateKeyConfig: tableKey.ReplicateKeyConfig,
 	SAPAICoreKeyConfig: tableKey.SAPAICoreKeyConfig,
 	ConfigHash:         tableKey.ConfigHash,
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/lib/config.go` around lines 175 - 186, The schemas.Key construction in the virtual-key full-key materialization is omitting ReplicateKeyConfig, causing provider-specific metadata to be lost; update the code that maps tableKey to schemas.Key (the block building ID/Name/Value/... ConfigHash) to include ReplicateKeyConfig: set ReplicateKeyConfig: tableKey.ReplicateKeyConfig (or the correct field name) so the schemas.Key preserves that field when materializing full keys from governance virtual keys; ensure any downstream uses of schemas.Key expect and propagate ReplicateKeyConfig as well.

core/providers/sapaicore/deployment.go (1)
196-203: ⚠️ Potential issue | 🟠 Major

Avoid embedding upstream deployment response bodies in surfaced errors.
Line 202 and Line 277 include response body text in errors. Use status-only messages to reduce sensitive detail leakage.
🔧 Proposed fix
```diff
-	body := string(resp.Body())
-	if len(body) > 256 {
-		body = body[:256] + "...(truncated)"
-	}
 	return nil, providerUtils.NewBifrostOperationError(
-		fmt.Sprintf("deployments request failed with status %d: %s", resp.StatusCode(), body),
+		fmt.Sprintf("deployments request failed with status %d", resp.StatusCode()),
 		fmt.Errorf("http %d", resp.StatusCode()),
 		schemas.SAPAICore,
 	)
@@
-	body := string(resp.Body())
-	if len(body) > 256 {
-		body = body[:256] + "...(truncated)"
-	}
 	return nil, providerUtils.NewBifrostOperationError(
-		fmt.Sprintf("model listing request failed with status %d: %s", resp.StatusCode(), body),
+		fmt.Sprintf("model listing request failed with status %d", resp.StatusCode()),
 		fmt.Errorf("http %d", resp.StatusCode()),
 		schemas.SAPAICore,
 	)
```

Also applies to: 271-278
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/deployment.go` around lines 196 - 203, The error currently embeds the upstream HTTP response body when calling providerUtils.NewBifrostOperationError (see the block using resp.StatusCode() and resp.Body()), which can leak sensitive data; change these error returns to include only the HTTP status (e.g., "deployments request failed with status %d") and a minimal wrapped error (fmt.Errorf("http %d")) without including resp.Body(), and make the same change for the other occurrence around the second providerUtils.NewBifrostOperationError call (the block at ~271-278) so no response body text is ever interpolated into surfaced errors.

core/providers/sapaicore/auth.go (1)
140-147: ⚠️ Potential issue | 🟠 Major

Do not surface token-endpoint response bodies in errors.
At Line 146, surfaced error text still includes upstream body content. Keep this status-only to avoid leaking provider-side details.
🔧 Proposed fix
```diff
-	body := string(resp.Body())
-	if len(body) > 256 {
-		body = body[:256] + "...(truncated)"
-	}
 	return "", 0, providerUtils.NewBifrostOperationError(
-		fmt.Sprintf("OAuth2 token request failed with status %d: %s", resp.StatusCode(), body),
-		fmt.Errorf("HTTP %d", resp.StatusCode()),
+		fmt.Sprintf("oauth2 token request failed with status %d", resp.StatusCode()),
+		fmt.Errorf("http %d", resp.StatusCode()),
 		schemas.SAPAICore,
 	)
```
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/auth.go` around lines 140 - 147, The error currently includes the upstream token-endpoint response body (string(resp.Body())) in the NewBifrostOperationError message; remove any inclusion of the response body and only report the HTTP status to avoid leaking provider data. Locate the block that checks resp.StatusCode() != fasthttp.StatusOK and update the providerUtils.NewBifrostOperationError call (and its fmt.Sprintf message) so it only mentions the status code (e.g., "OAuth2 token request failed with status %d") and do not append or log body/truncated body; keep the existing fmt.Errorf("HTTP %d") or equivalent for the internal error but do not expose resp.Body().
🧹 Nitpick comments (1)
core/providers/sapaicore/sapaicore.go (1)
573-659: Consider extracting shared Bedrock stream transport scaffolding.
`handleBedrockChatCompletionStream` and `handleBedrockResponsesStream` duplicate request setup, error handling, cancellation plumbing, and stream lifecycle management. A shared helper would reduce drift risk and make future fixes (timeouts, header behavior, cancellation semantics) one-place changes.

Also applies to: 948-1034
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/sapaicore.go` around lines 573 - 659, The two functions handleBedrockChatCompletionStream and handleBedrockResponsesStream duplicate request construction, Do/response status/error handling, cancellation setup, response release (releaseStreamingResponseNoDrain), and goroutine lifecycle plumbing; extract a shared helper (e.g., performBedrockStreamRequest) that accepts parameters: ctx, postHookRunner, token, config, deploymentID, requestURLPath (or full URL), jsonBody, expectedAcceptHeader, providerName, request.Model, a streamProcessor callback (e.g., func(ctx, bodyStream, responseChan) that will call processBedrockConverseEventStream or the other processor), and logger; have the helper build fasthttp.Request, set headers (Authorization, AI-Resource-Group, Accept), call provider.client.Do, handle Do errors and non-OK status via ParseSAPAICoreError, create responseChan, start the goroutine that defers releaseStreamingResponseNoDrain(resp, logger), sets up SetupStreamCancellation(resp.BodyStream()), handles ctx cancellation/timeouts via providerUtils.HandleStreamCancellation/Timeout, calls the provided streamProcessor with resp.BodyStream() and responseChan, and finally close(responseChan); then replace handleBedrockChatCompletionStream and handleBedrockResponsesStream to call this helper with appropriate convertToBedrockConverse/other body and the correct processor callback.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/providers/sapaicore/sapaicore.go`:
- Around line 65-73: The TokenCache.Cleanup() (and DeploymentCache cleanup) is
only invoked from SAPAICoreProvider.Shutdown(), which causes expired entries to
persist in long-lived processes; update the provider to run periodic cleanup in
background and/or perform opportunistic cleanup on cache access: add a cleanup
goroutine started when SAPAICoreProvider is created (e.g., in the
constructor/initializer that returns *SAPAICoreProvider) that calls
tokenCache.Cleanup() and deploymentCache.Cleanup() on a configurable interval
and stops on Shutdown(), and also add quick opportunistic cleanup calls inside
TokenCache/DeploymentCache accessors (e.g., Get/Set methods used by
SAPAICoreProvider) to prune expired entries during normal operations so caches
don't rely solely on Shutdown().
In `@core/utils.go`:
- Around line 99-111: The helper hasAzureEntraIDCredentials(providerType, key)
is unused; update the key filtering in the key-evaluation logic that currently
calls CanProviderKeyValueBeEmpty(baseProviderType) (the checks referenced in
core/bifrost.go where keys are filtered) to also allow empty API key values when
hasAzureEntraIDCredentials(baseProviderType, key) returns true, i.e., change the
emptiness condition to treat the key as valid if CanProviderKeyValueBeEmpty(...)
|| hasAzureEntraIDCredentials(...); alternatively, if you prefer not to support
this behavior, remove the hasAzureEntraIDCredentials function and its comment to
eliminate dead code. Ensure you reference the hasAzureEntraIDCredentials and
CanProviderKeyValueBeEmpty symbols when making the change.
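A minimal sketch of the adjusted emptiness condition, with stub predicates standing in for the real `CanProviderKeyValueBeEmpty` and `hasAzureEntraIDCredentials` (the struct fields here are illustrative, not the actual schemas.Key layout):

```go
package main

import "fmt"

// key is a stand-in for schemas.Key; only the fields needed for the
// sketch are included.
type key struct {
	Value         string
	AzureTenantID string
	AzureClientID string
}

// canProviderKeyValueBeEmpty stands in for the real helper; the
// provider list here is illustrative only.
func canProviderKeyValueBeEmpty(provider string) bool {
	return provider == "vertex"
}

// hasAzureEntraIDCredentials stands in for the helper named in the
// review: Entra ID auth does not need an API key value.
func hasAzureEntraIDCredentials(provider string, k key) bool {
	return provider == "azure" && k.AzureTenantID != "" && k.AzureClientID != ""
}

// keyIsUsable mirrors the suggested condition: an empty API key value
// is acceptable when the provider allows it OR when Entra ID
// credentials are present.
func keyIsUsable(provider string, k key) bool {
	if k.Value != "" {
		return true
	}
	return canProviderKeyValueBeEmpty(provider) || hasAzureEntraIDCredentials(provider, k)
}

func main() {
	fmt.Println(keyIsUsable("azure", key{AzureTenantID: "t", AzureClientID: "c"}))
}
```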
In `@docs/providers/supported-providers/sapaicore.mdx`:
- Around line 175-193: The providers example currently uses an array but the
config loader expects "providers" to be an object map keyed by provider name;
change the snippet so "providers" is an object with a "sapaicore" key (not an
array element), and change "keys" from an array to a map keyed by the key id
(e.g. "sapaicore-key-1") where that entry contains the "sapaicore_key_config"
block (client_id, client_secret, auth_url, base_url, resource_group). Update the
example in sapaicore.mdx so callers can look up providers["sapaicore"] and
providers["sapaicore"].keys["sapaicore-key-1"] as expected.
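Per the comment above, the corrected sapaicore.mdx snippet would look roughly like the following (an object map keyed by provider name, with `keys` keyed by key id); the field values are illustrative:

```json
{
  "providers": {
    "sapaicore": {
      "keys": {
        "sapaicore-key-1": {
          "sapaicore_key_config": {
            "client_id": "env.SAP_AI_CORE_CLIENT_ID",
            "client_secret": "env.SAP_AI_CORE_CLIENT_SECRET",
            "auth_url": "env.SAP_AI_CORE_AUTH_URL",
            "base_url": "env.SAP_AI_CORE_BASE_URL",
            "resource_group": "default"
          }
        }
      }
    }
  }
}
```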
In `@ui/lib/types/schemas.ts`:
- Line 278: The fallback key-value validation currently only checks
data.value?.value (the return line using data.value?.value?.length), which
rejects EnvVar-style inputs; update the validator that contains that return
statement so it also accepts EnvVar forms by checking the env-var properties
(e.g., data.value?.from_env or data.value?.env_var) in addition to
data.value?.value — modify the condition to return true when either a non-empty
value array exists or an env-var key is present (handle both from_env and
env_var keys to cover EnvVar inputs).
---
Outside diff comments:
In `@transports/config.schema.json`:
- Around line 1519-1695: The schema has JSON syntax and structural errors around
sapaicore_key_config and the surrounding oneOf branches: there are missing
commas between array items, improperly closed objects/brackets, and
duplicated/misaligned required blocks for azure_key_config, vertex_key_config,
bedrock_key_config and vllm_key_config; fix by correcting punctuation (add
missing commas), properly close the objects/arrays, remove duplicated redundant
required entries, and ensure each oneOf branch is a valid object with the
correct "required" and "not": { "anyOf": [...] } arrays (include a dedicated
branch for "sapaicore_key_config" analogous to the others). Locate and repair
the blocks referencing sapaicore_key_config, oneOf, azure_key_config,
vertex_key_config, bedrock_key_config, and vllm_key_config so the JSON parses
and the exclusive-key-config logic remains intact.
---
Duplicate comments:
In `@core/providers/sapaicore/auth.go`:
- Around line 140-147: The error currently includes the upstream token-endpoint
response body (string(resp.Body())) in the NewBifrostOperationError message;
remove any inclusion of the response body and only report the HTTP status to
avoid leaking provider data. Locate the block that checks resp.StatusCode() !=
fasthttp.StatusOK and update the providerUtils.NewBifrostOperationError call
(and its fmt.Sprintf message) so it only mentions the status code (e.g., "OAuth2
token request failed with status %d") and do not append or log body/truncated
body; keep the existing fmt.Errorf("HTTP %d") or equivalent for the internal
error but do not expose resp.Body().
In `@core/providers/sapaicore/deployment.go`:
- Around line 196-203: The error currently embeds the upstream HTTP response
body when calling providerUtils.NewBifrostOperationError (see the block using
resp.StatusCode() and resp.Body()), which can leak sensitive data; change these
error returns to include only the HTTP status (e.g., "deployments request failed
with status %d") and a minimal wrapped error (fmt.Errorf("http %d")) without
including resp.Body(), and make the same change for the other occurrence around
the second providerUtils.NewBifrostOperationError call (the block at ~271-278)
so no response body text is ever interpolated into surfaced errors.
In `@transports/bifrost-http/lib/config.go`:
- Around line 175-186: The schemas.Key construction in the virtual-key full-key
materialization is omitting ReplicateKeyConfig, causing provider-specific
metadata to be lost; update the code that maps tableKey to schemas.Key (the
block building ID/Name/Value/... ConfigHash) to include ReplicateKeyConfig: set
ReplicateKeyConfig: tableKey.ReplicateKeyConfig (or the correct field name) so
the schemas.Key preserves that field when materializing full keys from
governance virtual keys; ensure any downstream uses of schemas.Key expect and
propagate ReplicateKeyConfig as well.
---
Nitpick comments:
In `@core/providers/sapaicore/sapaicore.go`:
- Around line 573-659: The two functions handleBedrockChatCompletionStream and
handleBedrockResponsesStream duplicate request construction, Do/response
status/error handling, cancellation setup, response release
(releaseStreamingResponseNoDrain), and goroutine lifecycle plumbing; extract a
shared helper (e.g., performBedrockStreamRequest) that accepts parameters: ctx,
postHookRunner, token, config, deploymentID, requestURLPath (or full URL),
jsonBody, expectedAcceptHeader, providerName, request.Model, a streamProcessor
callback (e.g., func(ctx, bodyStream, responseChan) that will call
processBedrockConverseEventStream or the other processor), and logger; have the
helper build fasthttp.Request, set headers (Authorization, AI-Resource-Group,
Accept), call provider.client.Do, handle Do errors and non-OK status via
ParseSAPAICoreError, create responseChan, start the goroutine that defers
releaseStreamingResponseNoDrain(resp, logger), sets up
SetupStreamCancellation(resp.BodyStream()), handles ctx cancellation/timeouts
via providerUtils.HandleStreamCancellation/Timeout, calls the provided
streamProcessor with resp.BodyStream() and responseChan, and finally
close(responseChan); then replace handleBedrockChatCompletionStream and
handleBedrockResponsesStream to call this helper with appropriate
convertToBedrockConverse/other body and the correct processor callback.
ℹ️ Review info
Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 95f5518a-e92a-4ffa-9298-83358bab9384
⛔ Files ignored due to path filters (1)
`core/go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (40)
- .github/workflows/pr-tests.yml
- .github/workflows/release-pipeline.yml
- core/bifrost.go
- core/go.mod
- core/internal/llmtests/account.go
- core/providers/sapaicore/auth.go
- core/providers/sapaicore/auth_test.go
- core/providers/sapaicore/converters.go
- core/providers/sapaicore/converters_test.go
- core/providers/sapaicore/deployment.go
- core/providers/sapaicore/deployment_test.go
- core/providers/sapaicore/errors.go
- core/providers/sapaicore/errors_test.go
- core/providers/sapaicore/sapaicore.go
- core/providers/sapaicore/sapaicore_test.go
- core/providers/sapaicore/types.go
- core/sapaicore.test
- core/schemas/account.go
- core/schemas/bifrost.go
- core/utils.go
- docs/PR-1865.md
- docs/docs.json
- docs/openapi/openapi.json
- docs/providers/supported-providers/overview.mdx
- docs/providers/supported-providers/sapaicore.mdx
- framework/configstore/clientconfig.go
- framework/configstore/migrations.go
- framework/configstore/rdb.go
- framework/configstore/tables/key.go
- transports/bifrost-http/handlers/providers.go
- transports/bifrost-http/lib/config.go
- transports/config.schema.json
- ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx
- ui/lib/constants/config.ts
- ui/lib/constants/icons.tsx
- ui/lib/constants/logs.ts
- ui/lib/schemas/providerForm.ts
- ui/lib/types/config.ts
- ui/lib/types/schemas.ts
- ui/lib/utils/validation.ts
✅ Files skipped from review due to trivial changes (1)
- docs/PR-1865.md
🚧 Files skipped from review as they are similar to previous changes (14)
- core/providers/sapaicore/errors_test.go
- core/providers/sapaicore/errors.go
- transports/bifrost-http/handlers/providers.go
- framework/configstore/rdb.go
- .github/workflows/release-pipeline.yml
- framework/configstore/migrations.go
- core/internal/llmtests/account.go
- docs/providers/supported-providers/overview.mdx
- core/providers/sapaicore/auth_test.go
- ui/lib/constants/config.ts
- .github/workflows/pr-tests.yml
- ui/lib/constants/logs.ts
- core/providers/sapaicore/sapaicore_test.go
- ui/lib/utils/validation.ts
👮 Files not reviewed due to content moderation or server errors (1)
- ui/lib/constants/icons.tsx
…config format, and accept env var key values

- Wire up hasAzureEntraIDCredentials in all 4 key-filtering locations in core/bifrost.go so Azure keys with Entra ID credentials pass validation even when the API key value is empty
- Fix sapaicore.mdx config.json example to use object-keyed providers format instead of an array, matching the schema and other provider docs
- Update UI key-value validator in schemas.ts to accept env var references (from_env + env_var) via the existing hasEnvVarValue helper
🧹 Nitpick comments (2)
ui/lib/types/schemas.ts (2)
175-212: Extract deployments-string validation into a shared helper. Lines 186-206 duplicate the same deployments parsing/validation already used in other provider schemas. Centralizing this avoids drift when rules change.
♻️ Proposed refactor
```diff
+const isValidDeploymentsValue = (deployments: Record<string, string> | string | undefined): boolean => {
+  if (!deployments) return true;
+  if (typeof deployments === "object") return true;
+  const trimmed = deployments.trim();
+  if (trimmed === "") return true;
+  if (trimmed.startsWith("env.")) return true;
+  try {
+    const parsed = JSON.parse(trimmed);
+    return typeof parsed === "object" && parsed !== null && !Array.isArray(parsed);
+  } catch {
+    return false;
+  }
+};
 export const sapaicoreKeyConfigSchema = z
   .object({
     client_id: envVarSchema.optional(),
     client_secret: envVarSchema.optional(),
     auth_url: envVarSchema.optional(),
     base_url: envVarSchema.optional(),
     resource_group: envVarSchema.optional(),
     deployments: z.union([z.record(z.string(), z.string()), z.string()]).optional(),
   })
-  .refine(
-    (data) => {
-      // If deployments is not provided, it's valid
-      if (!data.deployments) return true;
-      // If it's already an object, it's valid
-      if (typeof data.deployments === "object") return true;
-      // If it's a string, check if it's valid JSON or an env variable
-      if (typeof data.deployments === "string") {
-        const trimmed = data.deployments.trim();
-        // Allow empty string
-        if (trimmed === "") return true;
-        // Allow env variables
-        if (trimmed.startsWith("env.")) return true;
-        // Validate JSON format
-        try {
-          const parsed = JSON.parse(trimmed);
-          return typeof parsed === "object" && parsed !== null && !Array.isArray(parsed);
-        } catch {
-          return false;
-        }
-      }
-      return false;
-    },
+  .refine((data) => isValidDeploymentsValue(data.deployments),
     {
       message: "Deployments must be a valid JSON object or an environment variable reference",
       path: ["deployments"],
     },
   );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ui/lib/types/schemas.ts` around lines 175 - 212, The deployments string/object validation in sapaicoreKeyConfigSchema is duplicated elsewhere; extract the logic into a shared helper (e.g., isValidDeploymentsString or validateDeploymentsValue) and replace the inline refine predicate in sapaicoreKeyConfigSchema with a call to that helper; ensure the helper accepts the same union type (string | Record<string,string> | undefined), implements the exact rules (empty string allowed, env.* prefix allowed, JSON parse check that result is a non-null non-array object), and update other provider schemas to call the same helper so all deployments parsing/validation is centralized.
254-307: Consolidate SAP field-state checks used by both refines. Lines 257-272 and 290-303 repeat the same required/any-field predicates. A shared helper would reduce mismatch risk between the two validators.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ui/lib/types/schemas.ts` around lines 254 - 307, The two refine blocks duplicate the same SAP AI Core field checks; extract the repeated predicates into small helpers (e.g., sapaicoreHasRequired(cfg) and sapaicoreHasAny(cfg)) that use hasEnvVarValue on cfg.client_id, cfg.client_secret, cfg.auth_url, cfg.base_url, and cfg.resource_group, then replace the inline boolean expressions in both refine callbacks with calls to these helpers (use data.sapaicore_key_config to pass cfg). Ensure the helpers are defined in the same module and referenced by both the first refine (where you check hasAnyField/hasRequiredFields) and the second refine (where you check hasAnyField/hasAllRequired).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@ui/lib/types/schemas.ts`:
- Around line 175-212: The deployments string/object validation in
sapaicoreKeyConfigSchema is duplicated elsewhere; extract the logic into a
shared helper (e.g., isValidDeploymentsString or validateDeploymentsValue) and
replace the inline refine predicate in sapaicoreKeyConfigSchema with a call to
that helper; ensure the helper accepts the same union type (string |
Record<string,string> | undefined), implements the exact rules (empty string
allowed, env.* prefix allowed, JSON parse check that result is a non-null
non-array object), and update other provider schemas to call the same helper so
all deployments parsing/validation is centralized.
- Around line 254-307: The two refine blocks duplicate the same SAP AI Core
field checks; extract the repeated predicates into small helpers (e.g.,
sapaicoreHasRequired(cfg) and sapaicoreHasAny(cfg)) that use hasEnvVarValue on
cfg.client_id, cfg.client_secret, cfg.auth_url, cfg.base_url, and
cfg.resource_group, then replace the inline boolean expressions in both refine
callbacks with calls to these helpers (use data.sapaicore_key_config to pass
cfg). Ensure the helpers are defined in the same module and referenced by both
the first refine (where you check hasAnyField/hasRequiredFields) and the second
refine (where you check hasAnyField/hasAllRequired).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: cc889254-b15b-445f-8788-65681f8cb9d2
📒 Files selected for processing (3)
- core/bifrost.go
- docs/providers/supported-providers/sapaicore.mdx
- ui/lib/types/schemas.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- docs/providers/supported-providers/sapaicore.mdx
…, fix schema, and refactor streaming

- Add periodic background cleanup goroutine (every 5 min) to prune expired token and deployment cache entries, stopped on Shutdown()
- Add non-blocking opportunistic cleanup (TryLock) in TokenCache.GetToken and DeploymentCache.resolveDeployment cache-miss paths
- Remove upstream response body from error messages in auth.go and deployment.go (3 locations) to avoid leaking provider data
- Fix broken JSON syntax in config.schema.json: close vllm_key_config required array, add sapaicore_key_config as proper property, rebuild oneOf with 6 clean exclusive branches including sapaicore
- Add missing ReplicateKeyConfig and VLLMKeyConfig to virtual-key materialization in transports config.go
- Extract performBedrockStreamRequest shared helper to deduplicate handleBedrockChatCompletionStream and handleBedrockResponsesStream
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
transports/bifrost-http/lib/config.go (1)
2074-2092: ⚠️ Potential issue | 🟠 Major

Include `VLLMKeyConfig` when mapping DB keys to in-memory representation.

The key mapping in `loadDefaultProviders` includes `ReplicateKeyConfig` and `SAPAICoreKeyConfig` but omits `VLLMKeyConfig`. This is inconsistent with the virtual-key materialization at lines 174-189, which correctly includes `VLLMKeyConfig`. Without this field, vLLM-specific configuration (URL, ModelName) would be lost when loading from the database.

Suggested fix

```diff
 keys[i] = schemas.Key{
 	ID:                 dbKey.ID,
 	Name:               dbKey.Name,
 	Value:              dbKey.Value,
 	Models:             dbKey.Models,
 	Weight:             dbKey.Weight,
 	Enabled:            dbKey.Enabled,
 	UseForBatchAPI:     dbKey.UseForBatchAPI,
 	AzureKeyConfig:     dbKey.AzureKeyConfig,
 	VertexKeyConfig:    dbKey.VertexKeyConfig,
 	BedrockKeyConfig:   dbKey.BedrockKeyConfig,
 	ReplicateKeyConfig: dbKey.ReplicateKeyConfig,
+	VLLMKeyConfig:      dbKey.VLLMKeyConfig,
 	SAPAICoreKeyConfig: dbKey.SAPAICoreKeyConfig,
 	ConfigHash:         dbKey.ConfigHash,
 	Status:             dbKey.Status,
 	Description:        dbKey.Description,
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/lib/config.go` around lines 2074 - 2092, The mapping loop in loadDefaultProviders that converts dbProvider.Keys into schemas.Key omits VLLMKeyConfig, so vLLM URL/ModelName is lost; update the mapping inside the for i, dbKey := range dbProvider.Keys loop to set VLLMKeyConfig: dbKey.VLLMKeyConfig on the constructed schemas.Key (alongside the existing ReplicateKeyConfig and SAPAICoreKeyConfig) so schemas.Key preserves vLLM-specific configuration when loading from the DB.
♻️ Duplicate comments (1)
core/providers/sapaicore/auth.go (1)
143-147: ⚠️ Potential issue | 🟡 Minor

Normalize wrapped HTTP error text to lowercase.

Line 146 uses `fmt.Errorf("HTTP %d", ...)`; per Go conventions, this should be lowercase (`"http %d"`).

🔧 Proposed fix

```diff
-		fmt.Errorf("HTTP %d", resp.StatusCode()),
+		fmt.Errorf("http %d", resp.StatusCode()),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/auth.go` around lines 143 - 147, The wrapped HTTP error string in the providerUtils.NewBifrostOperationError call is using an uppercase "HTTP" literal; update the fmt.Errorf invocation inside core/providers/sapaicore/auth.go (the block that constructs the error when resp.StatusCode() != fasthttp.StatusOK, surrounding the NewBifrostOperationError call) to use a lowercase format string "http %d" so the wrapped error text is normalized to lowercase.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/providers/sapaicore/sapaicore.go`:
- Around line 124-127: Shutdown() currently closes provider.stopCleanup
unconditionally, causing a panic if called twice; add a sync.Once field (e.g.,
cleanupOnce) to the SAPAICoreProvider struct and wrap the
close(provider.stopCleanup) call in cleanupOnce.Do(func() {
close(provider.stopCleanup) }) so the channel is closed only once and Shutdown
becomes idempotent; ensure the new sync.Once field is initialized by zero value
(no explicit init required).
In `@transports/bifrost-http/lib/config.go`:
- Around line 786-796: The legacy key-hash fallback in reconcileProviderKeys
builds a schemas.Key for configstore.GenerateKeyHash but omits the VLLMKeyConfig
field, so vLLM provider keys produce incomplete hashes; update the struct
literal (the one passed to configstore.GenerateKeyHash where dbKey fields are
mapped—currently listing Name, Value, Models, Weight, AzureKeyConfig,
VertexKeyConfig, BedrockKeyConfig, ReplicateKeyConfig, SAPAICoreKeyConfig) to
also include VLLMKeyConfig: dbKey.VLLMKeyConfig so GenerateKeyHash receives the
vLLM config when computing dbKeyHash.
- Around line 711-721: The legacy key-hash fallback is missing the VLLM-specific
config, so GenerateKeyHash may miss vLLM changes; update the schemas.Key
construction used for dbKeyHash in config.go to include the VLLMKeyConfig field
(i.e., add VLLMKeyConfig: dbKey.VLLMKeyConfig) alongside ReplicateKeyConfig and
SAPAICoreKeyConfig when building the payload passed to
configstore.GenerateKeyHash so vLLM URL/ModelName changes are detected.
---
Outside diff comments:
In `@transports/bifrost-http/lib/config.go`:
- Around line 2074-2092: The mapping loop in loadDefaultProviders that converts
dbProvider.Keys into schemas.Key omits VLLMKeyConfig, so vLLM URL/ModelName is
lost; update the mapping inside the for i, dbKey := range dbProvider.Keys loop
to set VLLMKeyConfig: dbKey.VLLMKeyConfig on the constructed schemas.Key
(alongside the existing ReplicateKeyConfig and SAPAICoreKeyConfig) so
schemas.Key preserves vLLM-specific configuration when loading from the DB.
---
Duplicate comments:
In `@core/providers/sapaicore/auth.go`:
- Around line 143-147: The wrapped HTTP error string in the
providerUtils.NewBifrostOperationError call is using an uppercase "HTTP"
literal; update the fmt.Errorf invocation inside
core/providers/sapaicore/auth.go (the block that constructs the error when
resp.StatusCode() != fasthttp.StatusOK, surrounding the NewBifrostOperationError
call) to use a lowercase format string "http %d" so the wrapped error text is
normalized to lowercase.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 279781a1-8919-41d3-85b4-7865594f5563
📒 Files selected for processing (5)
- core/providers/sapaicore/auth.go
- core/providers/sapaicore/deployment.go
- core/providers/sapaicore/sapaicore.go
- transports/bifrost-http/lib/config.go
- transports/config.schema.json
…ash fallbacks

- Wrap close(stopCleanup) in sync.Once so Shutdown() can be called multiple times without panicking on a double-close
- Add VLLMKeyConfig to both legacy GenerateKeyHash fallback sites in mergeProviderKeys and reconcileProviderKeys so vLLM URL/model changes are detected during config reconciliation
♻️ Duplicate comments (1)
core/providers/sapaicore/sapaicore.go (1)
1344-1349: ⚠️ Potential issue | 🟠 Major

Propagate Vertex SSE scanner failures to the client/post-hook chain.

At line 1345, scanner errors are only logged. That can close the stream without a terminal error chunk, making truncated streams look like normal completion. Emit a stream error and return before final-success handling.

🔧 Proposed fix

```diff
 	// Check for scanner errors (I/O errors, buffer overflow, etc.)
 	if err := scanner.Err(); err != nil {
-		if ctx.Err() == nil { // Only log if not cancelled
+		if ctx.Err() == nil { // Only log/send if not cancelled
 			logger.Warn("Vertex SSE scanner error: %v", err)
+			streamErr := providerUtils.NewBifrostOperationError("vertex sse scanner error", err, providerName)
+			ctx.SetValue(schemas.BifrostContextKeyStreamEndIndicator, true)
+			providerUtils.ProcessAndSendBifrostError(ctx, postHookRunner, streamErr, responseChan, logger)
 		}
+		return
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/sapaicore/sapaicore.go` around lines 1344 - 1349, The scanner.Err() block currently only logs via logger.Warn and allows normal completion; instead propagate the error into the stream/post-hook chain and abort processing so the stream is not treated as a successful completion. Replace the logger.Warn call in the scanner error branch (the scanner.Err() / ctx.Err() check) with code that (1) constructs/propagates a terminal stream error to the client/post-hook chain (call your stream error propagation helper or return an error from this handler so upstream can emit an error chunk), and (2) immediately return/stop further success handling so the final-success path is not executed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@core/providers/sapaicore/sapaicore.go`:
- Around line 1344-1349: The scanner.Err() block currently only logs via
logger.Warn and allows normal completion; instead propagate the error into the
stream/post-hook chain and abort processing so the stream is not treated as a
successful completion. Replace the logger.Warn call in the scanner error branch
(the scanner.Err() / ctx.Err() check) with code that (1) constructs/propagates a
terminal stream error to the client/post-hook chain (call your stream error
propagation helper or return an error from this handler so upstream can emit an
error chunk), and (2) immediately return/stop further success handling so the
final-success path is not executed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a25166e3-007e-483d-be3e-175ba2906b93
📒 Files selected for processing (2)
- core/providers/sapaicore/sapaicore.go
- transports/bifrost-http/lib/config.go
🚧 Files skipped from review as they are similar to previous changes (1)
- transports/bifrost-http/lib/config.go
Hi @akshaydeo, by the way, it has been extensively tested; I have been using it for everything I am doing with SAP AI Core. Thank you,
Hi @akshaydeo, Thank you
hi - thanks for the PR. I will review and merge with the next round 🙇 |
Thank you, appreciated! :) |
Summary
Add SAP AI Core as a first-class provider in Bifrost with OAuth2 authentication, deployment resolution, and multi-backend routing (OpenAI, Anthropic/Amazon via Bedrock, Google Vertex/Gemini). This PR implements native tool/function calling for Anthropic (via Bedrock Converse) and Gemini, fixes vision input handling, and adds Responses API support (non-streaming and streaming) for Bedrock-backed models.
Changes
- (`/converse`, `/converse-stream`) for native tool calls
- `$ref` cleanup (`resolveSchemaRefs`)
- `developer` -> `user`
- `ensureToolConfigForHistory()` builds a stub `toolConfig` from history (`toolUse`/`toolResult`) to satisfy Bedrock

Notable design decisions/trade-offs:
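The `$ref` cleanup mentioned in the changes can be illustrated with a simplified resolver; this sketch handles only local `#/$defs/...` references and no cycles, and is not the actual `resolveSchemaRefs` from converters.go:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveSchemaRefs inlines local "#/$defs/<name>" references so the
// resulting tool input schema contains no $ref keys (Bedrock rejects
// them) and drops the now-redundant $defs section. Self-referential
// schemas would recurse forever; the real converter may guard that.
func resolveSchemaRefs(schema map[string]any, defs map[string]any) any {
	if ref, ok := schema["$ref"].(string); ok {
		name := strings.TrimPrefix(ref, "#/$defs/")
		if def, ok := defs[name].(map[string]any); ok {
			return resolveSchemaRefs(def, defs)
		}
		return schema // unknown ref: leave as-is
	}
	out := make(map[string]any, len(schema))
	for k, v := range schema {
		if k == "$defs" {
			continue // definitions are inlined, so drop the section
		}
		if sub, ok := v.(map[string]any); ok {
			out[k] = resolveSchemaRefs(sub, defs)
		} else {
			out[k] = v
		}
	}
	return out
}

func main() {
	schema := map[string]any{
		"type": "object",
		"properties": map[string]any{
			"item": map[string]any{"$ref": "#/$defs/Item"},
		},
		"$defs": map[string]any{
			"Item": map[string]any{"type": "string"},
		},
	}
	defs := schema["$defs"].(map[string]any)
	fmt.Println(resolveSchemaRefs(schema, defs))
}
```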
Type of change
Affected areas
How to test
- `client_id`, `client_secret`, `auth_url`, `base_url`, `resource_group`
- `deployments` map for faster routing
- Add `{"type":"input_image","image_url":"https://.../image.jpg"}` to user content; expect normal processing with no blank-text chunks

If adding new configs or environment variables, document them here:
`SAP_AI_CORE_CLIENT_ID`, `SAP_AI_CORE_CLIENT_SECRET`, `SAP_AI_CORE_AUTH_URL`, `SAP_AI_CORE_BASE_URL`, `SAP_AI_CORE_RESOURCE_GROUP`

Screenshots/Recordings
Breaking changes
Related issues
Closes #233
Security considerations
Checklist
docs/contributing/README.md and followed the guidelines