Conversation

@jamsea (Contributor) commented Oct 22, 2025

This PR adds consistent error propagation to the OpenAI base LLM service, matching the pattern already used in the Anthropic service.

Changes

  • Added ErrorFrame import
  • Added catch-all exception handler
  • Properly handles CancelledError by re-raising
  • Logs exceptions with full traceback
  • Pushes ErrorFrame upstream for error detection
  • Maintains existing timeout handling

Problem Solved

Previously, OpenAI LLM service errors were suppressed and not communicated through the frame pipeline. This made it impossible to implement automatic error recovery strategies like LLM hot-switching.

Benefits

  • Enables consistent error detection across all LLM services
  • Allows downstream error monitors to detect failures
  • Makes LLM hot-switching and fallback strategies possible
  • Improves debugging with proper error logging

- Import ErrorFrame from pipecat.frames.frames
- Add catch-all exception handler matching Anthropic service pattern
- Handle asyncio.CancelledError by re-raising to maintain proper cancellation
- Catch and log all other exceptions with full traceback
- Push ErrorFrame upstream on exceptions for error detection
- Maintain existing httpx.TimeoutException handling
- Ensure LLMFullResponseEndFrame always sent in finally block

This change enables consistent error propagation across all LLM services,
allowing downstream error monitors to detect failures and implement
fallback strategies like LLM hot-switching.

Fixes issue where OpenAI service errors were suppressed and not
communicated through the frame pipeline, making automatic error
recovery impossible.
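The handler structure described in the bullets above can be sketched as a minimal, self-contained example. The frame classes and the `FakeLLMService` below are illustrative stand-ins, not pipecat's actual API; only the control flow (re-raise `CancelledError`, log and push an `ErrorFrame` on any other exception, always emit the end frame in `finally`) mirrors the change:

```python
import asyncio
import traceback


class ErrorFrame:
    """Stand-in for pipecat's ErrorFrame; carries an error message."""
    def __init__(self, error: str):
        self.error = error


class LLMFullResponseEndFrame:
    """Stand-in for pipecat's end-of-response frame."""


class FakeLLMService:
    """Illustrative sketch of the error-propagation pattern in this PR."""

    def __init__(self):
        self.pushed_frames = []

    async def push_frame(self, frame):
        # In pipecat this would push the frame into the pipeline;
        # here we just record it for inspection.
        self.pushed_frames.append(frame)

    async def _process_context(self, context):
        raise RuntimeError("simulated OpenAI API failure")

    async def process(self, context):
        try:
            await self._process_context(context)
        except asyncio.CancelledError:
            # Re-raise so task cancellation still propagates normally.
            raise
        except Exception as e:
            # Log with full traceback, then surface the error in-pipeline
            # instead of silently suppressing it.
            traceback.print_exc()
            await self.push_frame(ErrorFrame(f"{e}"))
        finally:
            # The end frame is always sent, even on failure.
            await self.push_frame(LLMFullResponseEndFrame())


async def main():
    service = FakeLLMService()
    await service.process(context=None)
    return [type(f).__name__ for f in service.pushed_frames]


print(asyncio.run(main()))  # → ['ErrorFrame', 'LLMFullResponseEndFrame']
```

Because the `ErrorFrame` is pushed into the pipeline rather than swallowed, a downstream monitor can observe it and trigger a fallback such as hot-switching to another LLM service.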

codecov bot commented Oct 22, 2025

Codecov Report

❌ Patch coverage is 16.66667% with 5 lines in your changes missing coverage. Please review.

Files with missing lines                  Patch %   Lines
src/pipecat/services/openai/base_llm.py   16.66%    5 Missing ⚠️

Files with missing lines                  Coverage Δ
src/pipecat/services/openai/base_llm.py   32.91% <16.66%> (-1.06%) ⬇️

@jamsea jamsea requested review from aconchillo and markbackman and removed request for aconchillo October 22, 2025 08:07
            await self.start_processing_metrics()
            await self._process_context(context)
        except asyncio.CancelledError:
            raise
A contributor commented:
If we are not doing anything, there's no need to handle it; by default it's already re-raised.
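The reviewer's point can be checked with a small standalone script: since Python 3.8, `asyncio.CancelledError` derives from `BaseException` rather than `Exception`, so a generic `except Exception` clause does not swallow cancellation, and a bare `except asyncio.CancelledError: raise` with no cleanup is redundant:

```python
import asyncio


async def worker():
    try:
        await asyncio.sleep(10)
    except Exception:
        # CancelledError is a BaseException (Python 3.8+), so this
        # clause never catches cancellation.
        print("never reached on cancellation")


async def main():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0.01)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        return "cancellation propagated"
    return "cancellation swallowed"


print(asyncio.run(main()))  # → cancellation propagated
```

An explicit `except asyncio.CancelledError` handler is only needed when cleanup work must run before re-raising.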

