
👻 update sample-provider-settings.yaml #1395

Open

ibolton336 wants to merge 2 commits into konveyor:main from ibolton336:docs/update-sample-provider-settings

Conversation

Member

@ibolton336 ibolton336 commented May 1, 2026

Summary

Replaces the confusing YAML anchor/alias configuration pattern in sample-provider-settings.yaml with a straightforward flat layout, and upgrades LLM no-response log messages from `silly` to `warn` with actionable context.

Why

Provider settings overhaul

The old config used YAML anchors (&active) and aliases (*active) to select the active provider. This pattern was:

  • Confusing for users — most people don't know YAML anchor syntax, so they'd copy-paste blocks and silently break the reference chain
  • Noisy — every provider was an uncommented live block with empty credentials, making it hard to see which one was actually active
  • Hard for the extension to manage programmatically — the settings UI needs to read/write the active config cleanly
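To make the first bullet concrete, here is a hypothetical sketch of the copy-paste failure mode (the `MyCustomOpenAI` block name is invented for illustration):

```yaml
models:
  OpenAI: &active          # the anchor stays on the original block
    provider: ChatOpenAI
  MyCustomOpenAI:          # the block the user copied and actually edited
    provider: ChatOpenAI

# Still resolves to the untouched OpenAI block above, so the user's
# edits are silently ignored.
active: *active
```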

The new format puts a single explicit active: block at the top (what you see is what runs), with all other providers as commented-out reference examples below.

Logging upgrade

When the LLM returns no response, the old code logged at the `silly` level — effectively invisible at default log settings. These messages are now `warn`-level and suggest a possible provider configuration issue, so users can actually diagnose problems.

Before / After

sample-provider-settings.yaml

Before — YAML anchor/alias pattern

```yaml
models:
  OpenAI: &active
    environment:
      OPENAI_API_KEY: "" # Required
    provider: ChatOpenAI
    args:
      model: gpt-4o # Required

  AzureChatOpenAI:
    environment:
      AZURE_OPENAI_API_KEY: "" # Required
    provider: AzureChatOpenAI
    args:
      azureOpenAIApiDeploymentName: "" # Required
      azureOpenAIApiVersion: "" # Required

  # ... more providers, all uncommented ...

  JustAnExample:
    environment:
      ANY_KEY_1: "any environment variable needed for this model provider"
    provider: "provider-string"
    args:
      anyArgumentName1: "argument one"

# Move the `&active` anchor to the desired block and restart the server.
active: *active
```
After — flat active block with commented reference examples

```yaml
# The active provider configuration used by the extension.
# Update this block directly or use the chat panel settings UI.
active:
  environment:
    OPENAI_API_KEY: "" # Required
  provider: ChatOpenAI
  args:
    model: gpt-4o # Required

# ──────────────────────────────────────────────────────────────────────
# Provider reference — copy a block above to `active:` and fill in
# the required fields.
# ──────────────────────────────────────────────────────────────────────

# --- OpenAI ---
# active:
#   environment:
#     OPENAI_API_KEY: ""
#   provider: ChatOpenAI
#   args:
#     model: gpt-4o

# --- Amazon Bedrock ---
# active:
#   environment:
#     AWS_ACCESS_KEY_ID: ""
#     AWS_SECRET_ACCESS_KEY: ""
#     AWS_DEFAULT_REGION: ""
#   provider: ChatBedrock
#   args:
#     model: meta.llama3-70b-instruct-v1:0

# --- Anthropic ---
# active:
#   environment:
#     ANTHROPIC_API_KEY: ""
#   provider: ChatAnthropic
#   args:
#     model: claude-sonnet-4-20250514

# ... more providers ...
```

LLM no-response logging (analysisIssueFix.ts, diagnosticsIssueFix.ts)

Before:

```typescript
this.logger.silly("AnalysisIssueFix returned undefined response");
```

After:

```typescript
this.logger.warn(
  `AnalysisIssueFix: LLM returned no response for file "${fileName}". ` +
    `This may indicate a model provider configuration issue.`,
);
```
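The guard pattern above can be sketched as a small self-contained helper. `guardLlmResponse` and `Logger` are hypothetical names for illustration, not the PR's actual code; the point is that an empty LLM result produces one actionable `warn` entry instead of a `silly` message filtered out at default log levels.

```typescript
// Minimal logger surface assumed for this sketch.
interface Logger {
  warn(message: string): void;
}

// Pass a valid response through unchanged; on a missing/empty response,
// emit a warn-level message that names the file and points at provider
// configuration as the likely cause.
function guardLlmResponse(
  response: string | undefined,
  fileName: string,
  logger: Logger,
): string | undefined {
  if (response === undefined || response.length === 0) {
    logger.warn(
      `AnalysisIssueFix: LLM returned no response for file "${fileName}". ` +
        `This may indicate a model provider configuration issue.`,
    );
    return undefined;
  }
  return response;
}
```

A caller can then branch on `undefined` without re-checking the log level itself.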

Changes

| File | What changed |
| --- | --- |
| `vscode/core/resources/sample-provider-settings.yaml` | Replaced anchor/alias pattern with flat `active:` block + commented examples; added Anthropic provider; removed `ALWAYS_APPLIED_KEY` placeholder |
| `agentic/src/nodes/analysisIssueFix.ts` | `silly` → `warn` with file name context for AnalysisIssueFix and SummarizeHistory no-response cases |
| `agentic/src/nodes/diagnosticsIssueFix.ts` | `silly` → `warn` for PlanFixes, FixGeneralIssues, and FixJavaDependencyIssues no-response cases |

Testing

  • Config file is documentation only — no runtime behavior change
  • Logging changes are severity-only (`silly` → `warn`) with improved message text

ibolton336 added 2 commits May 1, 2026 14:26
When the LLM returns no response, log at warn level with actionable
messages instead of silly level. Helps diagnose misconfigured model
providers.

Affected nodes: AnalysisIssueFix, SummarizeHistory, PlanFixes,
FixGeneralIssues, FixJavaDependencyIssues

Signed-off-by: Ian Bolton <ibolton@redhat.com>
Refreshes the example provider settings with current model
provider options and clearer documentation comments.

Signed-off-by: Ian Bolton <ibolton@redhat.com>
@ibolton336 ibolton336 requested a review from a team as a code owner May 1, 2026 18:33
Contributor

coderabbitai Bot commented May 1, 2026

Warning

Rate limit exceeded

@ibolton336 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 44 minutes and 3 seconds before requesting another review.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 14739352-d380-4820-b967-eec08b806215

📥 Commits

Reviewing files that changed from the base of the PR and between 3d97606 and 9c3200d.

📒 Files selected for processing (3)
  • agentic/src/nodes/analysisIssueFix.ts
  • agentic/src/nodes/diagnosticsIssueFix.ts
  • vscode/core/resources/sample-provider-settings.yaml


ibolton336 added a commit to ibolton336/editor-extensions that referenced this pull request May 1, 2026
Signed-off-by: Ian Bolton <ibolton@redhat.com>
ibolton336 changed the title from "docs: update sample-provider-settings.yaml" to "docs: simplify sample-provider-settings.yaml and upgrade LLM no-response logging" May 1, 2026
ibolton336 changed the title from "docs: simplify sample-provider-settings.yaml and upgrade LLM no-response logging" to "👻 update sample-provider-settings.yaml" May 1, 2026
ibolton336 added a commit to ibolton336/editor-extensions that referenced this pull request May 1, 2026
Signed-off-by: Ian Bolton <ibolton@redhat.com>