
fix(prompts): prevent IndexError when LLM provided via constructor with empty models config #1334


Open

Pouyanpi wants to merge 1 commit into develop from fix/index-error-prompts

Conversation

Pouyanpi (Collaborator)

Description

When a main LLM object is provided directly to the LLMRails constructor while the YAML config contains an empty models list, the system throws an IndexError:

File "/nemoguardrails/llm/prompts.py", line 142, in get_task_model
    return _models[0]
           ~~~~~~~^^^
IndexError: list index out of range

Root Cause

The get_task_model function in prompts.py attempted to access the first element of the _models list without checking whether it was empty. The error occurred when all of the following held (a minimal reproduction is sketched after the list):

  1. An LLM was provided via the constructor (not in the config)
  2. The config had an empty models list (models: [])
  3. The system tried to find a model for prompt selection

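As an illustration, the failing scenario can be reproduced with a sketch along these lines. FakeLLM is a hypothetical stand-in for any LangChain-compatible LLM object; RailsConfig.from_content and the LLMRails constructor are the real entry points involved:

    from nemoguardrails import LLMRails, RailsConfig

    # YAML config with an explicitly empty models list
    config = RailsConfig.from_content(yaml_content="models: []\n")

    # The LLM is supplied directly to the constructor instead of the config.
    # Before this fix, prompt/model resolution later raised the IndexError above.
    llm = FakeLLM()  # hypothetical placeholder for any LangChain LLM
    rails = LLMRails(config=config, llm=llm)
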
Solution

Implemented a tactical fix by adding a safety check before accessing the list (sketched below):

  • Check if _models list is not empty before accessing _models[0]
  • Return None when no matching models are found
  • The existing code already handles a None return gracefully by defaulting to the "unknown" model

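A rough sketch of the pattern follows. The function name, signature, and filtering here are illustrative rather than the actual prompts.py code; only the empty-list guard mirrors the fix:

    # Illustrative sketch only; the real get_task_model lives in
    # nemoguardrails/llm/prompts.py and its signature/filtering may differ.
    # The point is the guard before indexing into the filtered list.
    def get_task_model_sketch(models, model_type="main"):
        _models = [m for m in models if getattr(m, "type", None) == model_type]
        if _models:
            return _models[0]
        # models: [] (LLM supplied via the constructor) -> no match; return None
        # instead of raising IndexError. Callers treat None as the "unknown" model.
        return None
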
Notes

This is a tactical/bandaid fix as discussed in the bug report.

@Pouyanpi Pouyanpi added this to the v0.16.0 milestone Aug 15, 2025
@Pouyanpi Pouyanpi requested a review from Copilot August 15, 2025 09:50
@Pouyanpi Pouyanpi self-assigned this Aug 15, 2025
@Pouyanpi Pouyanpi added the bug Something isn't working label Aug 15, 2025
Copilot AI left a comment

Pull Request Overview

This PR fixes an IndexError that occurred when an LLM was provided via the constructor but the config had an empty models list. The fix adds a safety check to prevent accessing an empty list.

  • Added a safety check in get_task_model() to prevent IndexError when models list is empty
  • Added comprehensive test coverage for the edge case scenarios
  • Ensured the function gracefully returns None when no models are found

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

  • nemoguardrails/llm/prompts.py: Added a safety check to prevent IndexError when accessing an empty models list
  • tests/test_llmrails.py: Added an integration test for the LLMRails constructor with an empty models config
  • tests/test_llm_task_manager.py: Added unit tests for get_task_model edge cases

@Pouyanpi Pouyanpi changed the title fix(prompts): prevent IndexError when LLM provided via constructor wi… fix(prompts): prevent IndexError when LLM provided via constructor with empty models config Aug 15, 2025
fix(prompts): prevent IndexError when LLM provided via constructor with empty models config

- Add check in get_task_model to handle empty _models list gracefully
- Return None instead of throwing IndexError when no models match
- Add comprehensive test coverage for various model configuration scenarios

Fixes the issue where providing an LLM object directly to LLMRails constructor
would fail if the YAML config had an empty models list.
@Pouyanpi Pouyanpi force-pushed the fix/index-error-prompts branch from 7dd02d7 to 68b5d24 on August 15, 2025 09:53
@codecov-commenter

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 70.63%. Comparing base (52ac7ed) to head (68b5d24).

Additional details and impacted files
@@           Coverage Diff            @@
##           develop    #1334   +/-   ##
========================================
  Coverage    70.63%   70.63%           
========================================
  Files          161      161           
  Lines        16304    16305    +1     
========================================
+ Hits         11516    11517    +1     
  Misses        4788     4788           
Flag Coverage Δ
python 70.63% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.

Files with missing lines Coverage Δ
nemoguardrails/llm/prompts.py 91.76% <100.00%> (+0.09%) ⬆️
Labels
bug Something isn't working

3 participants