Object file range expansion #561
Conversation
- Implemented range expansion for string fields in object files, allowing patterns like [1-5] to create multiple objects.
- Added validation to ensure expanded lists have the same length.
- Updated documentation to include usage examples and details on the new feature.
- Added unit tests to verify range expansion functionality and error handling for mismatched lengths.
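As a rough illustration of the behavior described above, here is a minimal standalone sketch. The names `expand_value` and `expand_item` are hypothetical, and only simple numeric `[a-b]` brackets are handled; the SDK's actual implementation supports more (commas, letters, escapes).

```python
import re
from itertools import product

# Simplification: only numeric [start-end] brackets, ascending.
_BRACKET = re.compile(r"\[(\d+-\d+)\]")


def expand_value(value: str) -> list[str]:
    """Expand every numeric [a-b] bracket in a string into all combinations."""
    # re.split with a capturing group alternates: constant, bracket body, constant, ...
    parts = _BRACKET.split(value)
    groups: list[list[str]] = []
    for i, part in enumerate(parts):
        if i % 2 == 0:
            groups.append([part])  # constant segment
        else:
            start, end = part.split("-", 1)
            groups.append([str(n) for n in range(int(start), int(end) + 1)])
    return ["".join(combo) for combo in product(*groups)]


def expand_item(item: dict) -> list[dict]:
    """Expand all string fields containing bracket patterns; lengths must match."""
    fields = {
        k: expand_value(v)
        for k, v in item.items()
        if isinstance(v, str) and _BRACKET.search(v)
    }
    if not fields:
        return [item]
    lengths = {len(v) for v in fields.values()}
    if len(lengths) > 1:
        raise ValueError("fields expanded to different lengths")
    out = []
    for i in range(lengths.pop()):
        new = dict(item)
        for k, values in fields.items():
            new[k] = values[i]
        out.append(new)
    return out
```

With this sketch, `expand_item({"name": "AMS[1-3]", "status": "active"})` yields three objects named AMS1 through AMS3, while two fields expanding to different lengths raise an error, mirroring the validation rule described in the PR.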
Walkthrough: Adds range expansion for string fields in object files so patterns like [1-5] produce multiple objects.

Pre-merge checks and finishing touches:
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Deploying infrahub-sdk-python with Cloudflare Pages

Latest commit: cf01aa5
Status: ✅ Deploy successful!
Preview URL: https://23d2d427.infrahub-sdk-python.pages.dev
Branch Preview URL: https://atg-20250930-ifc-1878.infrahub-sdk-python.pages.dev
Codecov Report

❌ Patch coverage is …
```diff
@@            Coverage Diff             @@
##           develop     #561     +/-  ##
===========================================
+ Coverage    75.76%   76.06%   +0.29%
===========================================
  Files          101      102       +1
  Lines         8969     9073     +104
  Branches      1768     1794      +26
===========================================
+ Hits          6795     6901     +106
+ Misses        1688     1685       -3
- Partials       486      487       +1
```
Flags with carried forward coverage won't be shown.
... and 2 files with indirect coverage changes
Actionable comments posted: 3
🧹 Nitpick comments (2)
infrahub_sdk/spec/object.py (1)
224-225: Consider consistency with validate_format.

In validate_format (line 208), self.data is updated with the expanded data, but here in process, self.data is not mutated. This inconsistency could lead to confusion or unexpected behavior if calling code relies on self.data being updated after process() is called.

Consider either:
- Updating self.data here as well for consistency, or
- Removing the mutation in validate_format and returning the expanded data from expand_data_with_ranges without side effects

```diff
 async def process(self, client: InfrahubClient, branch: str | None = None) -> None:
     schema = await client.schema.get(kind=self.kind, branch=branch)
     expanded_data = self.expand_data_with_ranges()
+    self.data = expanded_data
     for idx, item in enumerate(expanded_data):
```
infrahub_sdk/spec/range_expansion.py (1)

1-118: Consider adding complete type hints.

While function parameters and return types have hints, internal variables and nested helper functions could benefit from complete typing. This would improve IDE support and catch potential type errors. For example:

```python
def _pairwise(lst: list[int]) -> list[tuple[int, int]]:
    it = iter(lst)
    return list(zip(it, it))
```

Note: the existing hints are already quite good, but adding hints to nested helpers like _pairwise would make the codebase more maintainable.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- changelog/560.added.md (1 hunks)
- docs/docs/python-sdk/topics/object_file.mdx (1 hunks)
- infrahub_sdk/spec/object.py (4 hunks)
- infrahub_sdk/spec/range_expansion.py (1 hunks)
- tests/unit/sdk/spec/test_object.py (2 hunks)
- tests/unit/sdk/test_range_expansion.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
When implementing Infrahub checks, subclass InfrahubCheck and override validate(data); do not implement or rely on a check() method
Files:
infrahub_sdk/spec/object.py
tests/unit/sdk/spec/test_object.py
infrahub_sdk/spec/range_expansion.py
tests/unit/sdk/test_range_expansion.py
tests/unit/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Place and write unit tests under tests/unit/ (isolated component tests)
Files:
tests/unit/sdk/spec/test_object.py
tests/unit/sdk/test_range_expansion.py
tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use the custom pytest plugin (infrahub_sdk.pytest_plugin) fixtures for clients, configuration, and Infrahub-specific testing
Files:
tests/unit/sdk/spec/test_object.py
tests/unit/sdk/test_range_expansion.py
docs/**/*.{md,mdx}
📄 CodeRabbit inference engine (CLAUDE.md)
Follow the Diataxis framework and classify docs as Tutorials, How-to guides, Explanation, or Reference
Structure How-to guides with required frontmatter and sections: introduction, prerequisites, step-by-step steps, validation, related resources
Structure Topics (Explanation) docs with introduction, concepts & definitions, background & context, architecture & design, connections, further reading
Use a professional, concise, informative tone with consistent structure across documents
Use proper language tags on all code blocks
Include both async and sync examples where applicable using the Tabs component
Add validation steps to guides to confirm success and expected outputs
Use callouts for warnings, tips, and important notes
Define new terms on first use and use domain-relevant terminology consistent with Infrahub’s model/UI
Conform to markdown style rules in .markdownlint.yaml and Vale styles in .vale/styles/
Files:
docs/docs/python-sdk/topics/object_file.mdx
🧬 Code graph analysis (3)
infrahub_sdk/spec/object.py (2)
- infrahub_sdk/spec/range_expansion.py: range_expansion (88-118)
- infrahub_sdk/exceptions.py: ValidationError (115-127), ObjectValidationError (130-137)

tests/unit/sdk/spec/test_object.py (3)
- tests/unit/sdk/conftest.py: client (32-33), mock_schema_query_01 (1871-1878)
- infrahub_sdk/spec/object.py: ObjectFile (627-649), validate_format (204-220), validate_format (642-646), spec (631-634)
- infrahub_sdk/exceptions.py: ValidationError (115-127)

tests/unit/sdk/test_range_expansion.py (1)
- infrahub_sdk/spec/range_expansion.py: range_expansion (88-118)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: unit-tests (3.12)
- GitHub Check: unit-tests (3.10)
- GitHub Check: unit-tests (3.13)
- GitHub Check: integration-tests-latest-infrahub
🔇 Additional comments (16)
changelog/560.added.md (1)

1-1: LGTM! The changelog entry clearly describes the new range expansion feature, including the validation requirements and the scope of changes.

infrahub_sdk/spec/object.py (3)

3-4: LGTM! The new imports for copy and re are necessary for the range expansion functionality.

13-13: LGTM! Importing range_expansion and MATCH_PATTERN from the new module is appropriate for the integration.

207-209: LGTM! The integration correctly expands the data before validation and updates self.data with the expanded result. Each expanded item is then validated individually.

docs/docs/python-sdk/topics/object_file.mdx (1)
198-280: LGTM! The documentation is well structured, clear, and comprehensive. It follows the Diataxis framework for Explanation/Topic documentation with:
- Clear introduction and overview
- Multiple concrete examples showing single and multiple field expansion
- An error scenario with expected output
- Important notes about behavior and limitations

All code blocks have proper language tags, and the tone is professional and concise. As per coding guidelines.
tests/unit/sdk/test_range_expansion.py (1)

1-107: LGTM! Comprehensive test coverage for the range_expansion function. The tests cover:
- Basic numeric and alphabetic ranges
- Mixed ranges with multiple patterns
- Edge cases (empty brackets, no brackets, single values)
- Malformed inputs
- Special characters and unicode
- Whitespace handling
- Descending ranges
- Consecutive bracketed patterns

All tests are clear, focused, and follow pytest conventions. As per coding guidelines.
tests/unit/sdk/spec/test_object.py (6)

46-56: LGTM! The location_expansion fixture provides a clean test case for basic single-field range expansion.

59-70: LGTM! The location_expansion_multiple_ranges fixture tests the scenario where multiple fields expand to the same length (5 values each). This validates the equal-length requirement.

73-84: LGTM! The location_expansion_multiple_ranges_bad_syntax fixture tests the error case where two fields expand to different lengths (5 vs 6), which should trigger a validation error.

114-123: LGTM! The test correctly validates that a single field with range pattern AMS[1-5] expands to 5 objects with the expected names.

126-137: LGTM! The test validates that multiple fields with matching expansion lengths (both expand to 5 values) are correctly expanded and paired. The assertions verify that both the name and description fields are properly expanded.

140-147: LGTM! The test correctly validates that mismatched expansion lengths trigger a ValidationError with the appropriate error message.

infrahub_sdk/spec/range_expansion.py (4)
4-4: LGTM! The regex pattern correctly matches bracket expressions containing word characters, commas, and hyphens.

59-66: LGTM! The function correctly extracts match positions and builds the cartesian product list for expansion.

69-85: LGTM! The expansion logic correctly uses cartesian products to generate all combinations and properly interleaves constant segments with expanded values.

88-118: LGTM! Comprehensive documentation. The main function is well structured, with clear orchestration of helper functions. The docstring provides excellent examples covering numeric, alphabetic, and mixed ranges. Good attribution to the Netutils inspiration.
infrahub_sdk/spec/object.py (outdated)

```python
def expand_data_with_ranges(self) -> list[dict[str, Any]]:
    """Expand any item in self.data with range pattern in any value. Supports multiple fields, requires equal expansion length."""
    range_pattern = re.compile(MATCH_PATTERN)
    expanded = []
    for item in self.data:
        # Find all fields to expand
        expand_fields = {}
        for key, value in item.items():
            if isinstance(value, str) and range_pattern.search(value):
                try:
                    expand_fields[key] = range_expansion(value)
                except Exception:
                    # If expansion fails, treat as no expansion
                    expand_fields[key] = [value]
        if not expand_fields:
            expanded.append(item)
            continue
        # Check all expanded lists have the same length
        lengths = [len(v) for v in expand_fields.values()]
        if len(set(lengths)) > 1:
            raise ValidationError(f"Range expansion mismatch: fields expanded to different lengths: {lengths}")
        n = lengths[0]
        # Zip expanded values and produce new items
        for i in range(n):
            new_item = copy.deepcopy(item)
            for key, values in expand_fields.items():
                new_item[key] = values[i]
            expanded.append(new_item)
    return expanded
```
Fix exception handling and error construction.

The method has two critical issues:

1. Lines 182-184: Bare except Exception catches all exceptions and silently treats expansion failures as "no expansion". This can hide bugs in the range_expansion function and make debugging difficult. Consider catching specific exceptions or at minimum logging the error.
2. Line 191: The ValidationError constructor expects identifier as the first positional argument, but you're passing a formatted message string. Based on the ValidationError class definition, this should be:

```python
raise ValidationError(
    identifier="range_expansion",
    message=f"Range expansion mismatch: fields expanded to different lengths: {lengths}",
)
```

Apply this diff to fix both issues:

```diff
             if isinstance(value, str) and range_pattern.search(value):
                 try:
                     expand_fields[key] = range_expansion(value)
-                except Exception:
+                except Exception as e:
                     # If expansion fails, treat as no expansion
+                    # Log the error for debugging
+                    # Note: In production, consider whether to fail fast or continue
                     expand_fields[key] = [value]
         if not expand_fields:
             expanded.append(item)
             continue
         # Check all expanded lists have the same length
         lengths = [len(v) for v in expand_fields.values()]
         if len(set(lengths)) > 1:
-            raise ValidationError(f"Range expansion mismatch: fields expanded to different lengths: {lengths}")
+            raise ValidationError(
+                identifier="range_expansion",
+                message=f"Range expansion mismatch: fields expanded to different lengths: {lengths}"
+            )
```
🤖 Prompt for AI Agents
In infrahub_sdk/spec/object.py around lines 171 to 199, the range expansion
silently swallows all exceptions and constructs ValidationError incorrectly;
update the except to catch specific expansion errors (or catch Exception but
re-raise after logging) so expansion failures are not hidden—log the caught
error with context (including the key and value) and then fall back to
single-value behavior only if the error indicates non-expandable input,
otherwise re-raise; also change the ValidationError raise to use the correct
constructor signature, e.g. raise ValidationError(identifier="range_expansion",
message=f"Range expansion mismatch: fields expanded to different lengths:
{lengths}").
```python
def _escape_brackets(s: str) -> str:
    return s.replace("\\[", "__LBRACK__").replace("\\]", "__RBRACK__")


def _unescape_brackets(s: str) -> str:
    return s.replace("__LBRACK__", "[").replace("__RBRACK__", "]")
```
Consider collision-resistant escape sequences.

The current placeholder strings __LBRACK__ and __RBRACK__ could collide with user input that already contains these exact strings, leading to incorrect unescaping. While this is an edge case, consider using more collision-resistant placeholders (e.g., with a UUID or special characters unlikely to appear in typical input).
🤖 Prompt for AI Agents
In infrahub_sdk/spec/range_expansion.py around lines 7 to 12, the current escape
tokens "__LBRACK__"/"__RBRACK__" can collide with user data; replace them with
module-level, collision-resistant constants created once (e.g., incorporate a
UUID or use a binary/sentinel sequence unlikely in input) so the same unique
tokens are used by both _escape_brackets and _unescape_brackets; update both
functions to reference those constants instead of hard-coded strings and ensure
they remain deterministic for the process (generate the UUID at import time) so
escaping and unescaping remain consistent.
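One possible shape for the collision-resistant placeholders suggested above. This is a sketch, not the SDK's code; the sentinel format (NUL bytes plus a per-process UUID) is an arbitrary choice.

```python
import uuid

# Generated once at import time so escape/unescape stay consistent within a
# process; the UUID makes accidental collision with user input vanishingly
# unlikely, and the NUL bytes cannot appear in typical field values.
_TOKEN = uuid.uuid4().hex
_LBRACK = f"\x00LB{_TOKEN}\x00"
_RBRACK = f"\x00RB{_TOKEN}\x00"


def escape_brackets(s: str) -> str:
    """Replace escaped brackets with process-unique sentinels."""
    return s.replace("\\[", _LBRACK).replace("\\]", _RBRACK)


def unescape_brackets(s: str) -> str:
    """Restore literal brackets from the sentinels."""
    return s.replace(_LBRACK, "[").replace(_RBRACK, "]")
```

Unlike the `__LBRACK__` placeholders, a string that happens to contain `__LBRACK__` as user data now round-trips unchanged.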
```python
def _char_range_expand(char_range_str: str) -> list[str]:
    """Expands a string of numbers or single-character letters."""
    expanded_values: list[str] = []
    # Special case: if no dash and no comma, and multiple characters, error if not all alphanumeric
    if "," not in char_range_str and "-" not in char_range_str and len(char_range_str) > 1:
        if not char_range_str.isalnum():
            raise ValueError(f"Invalid non-alphanumeric range: [{char_range_str}]")
        return list(char_range_str)

    for value in char_range_str.split(","):
        if not value:
            # Malformed: empty part in comma-separated list
            return [f"[{char_range_str}]"]
        if "-" in value:
            start_char, end_char = value.split("-", 1)
            if not start_char or not end_char:
                expanded_values.append(f"[{char_range_str}]")
                return expanded_values
            # Check if it's a numeric range
            if start_char.isdigit() and end_char.isdigit():
                start_num = int(start_char)
                end_num = int(end_char)
                step = 1 if start_num <= end_num else -1
                expanded_values.extend(str(i) for i in range(start_num, end_num + step, step))
            # Check if it's an alphabetical range (single character)
            elif len(start_char) == 1 and len(end_char) == 1 and start_char.isalpha() and end_char.isalpha():
                start_ord = ord(start_char)
                end_ord = ord(end_char)
                step = 1 if start_ord <= end_ord else -1
                is_upper = start_char.isupper()
                for i in range(start_ord, end_ord + step, step):
                    char = chr(i)
                    expanded_values.append(char.upper() if is_upper else char)
            else:
                # Mixed or unsupported range type, append as-is
                expanded_values.append(value)
        else:
            # If the value is a single character or valid alphanumeric string, append
            if not value.isalnum():
                raise ValueError(f"Invalid non-alphanumeric value: [{value}]")
            expanded_values.append(value)
    return expanded_values
```
Inconsistent error handling for malformed input.

The function handles malformed input inconsistently:
- Lines 20-21 and 53-54 raise ValueError for non-alphanumeric values
- Lines 25-27 and 30-32 return the malformed pattern as-is (wrapped in brackets)

This inconsistency could lead to unpredictable behavior. Consider standardizing the approach: either consistently raise exceptions for all malformed input, or consistently return malformed patterns as-is with clear documentation of this behavior.

Example of inconsistency:

```python
# Raises ValueError
_char_range_expand("a@b")
# Returns ["[a--b]"] without raising
_char_range_expand("a--b")
```
🤖 Prompt for AI Agents
In infrahub_sdk/spec/range_expansion.py around lines 15 to 56, error handling is
inconsistent: some malformed inputs raise ValueError while other malformed
patterns are returned wrapped in brackets; standardize by raising ValueError for
all malformed cases. Specifically, change the branches that currently
return/[append] bracketed patterns (empty parts in comma list, missing start or
end in a hyphen range, and mixed/unsupported range types) to raise ValueError
with a clear message including the offending input, and keep the existing
ValueError behavior for non-alphanumeric single values so all malformed inputs
uniformly raise exceptions.
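For reference, the happy-path range semantics under discussion (numeric and single-letter ranges, including descending direction) can be exercised with a standalone sketch. `expand_range` is a hypothetical helper mirroring those branches, not imported from the SDK.

```python
def expand_range(start: str, end: str) -> list[str]:
    """Expand 'start-end' where both ends are ints or single letters.

    Direction is inferred from the endpoints, so "5-1" and "C-A" count down.
    Anything else raises, unlike the snippet above which mixes raising with
    returning the pattern as-is.
    """
    if start.isdigit() and end.isdigit():
        a, b = int(start), int(end)
        step = 1 if a <= b else -1
        return [str(i) for i in range(a, b + step, step)]
    if len(start) == len(end) == 1 and start.isalpha() and end.isalpha():
        a, b = ord(start), ord(end)
        step = 1 if a <= b else -1
        return [chr(i) for i in range(a, b + step, step)]
    raise ValueError(f"unsupported range: {start}-{end}")
```

Raising uniformly, as the review suggests, makes the helper's contract a simple one: every return value is a successful expansion.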
Looks great !! well done
…reusability and clarity
@dgarros Thank you. I've improved it a little to support nested expansion like so:

```yaml
---
apiVersion: infrahub.app/v1
kind: Object
spec:
  kind: DcimDevice
  data:
    - name: Device [1-5]
      status: active
      location: wm-site1
      interfaces:
        kind: InterfacePhysical
        data:
          - name: GigabitEthernet[1-3]/0/[1-24]
            role: access
```
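Assuming each numeric [start-end] bracket expands inclusively and multiple brackets in one value combine as a cartesian product, the example above yields 3 x 24 = 72 interfaces per device. A quick sketch to check the arithmetic (the helper name is hypothetical):

```python
import re


def bracket_counts(pattern: str) -> list[int]:
    """Count how many values each numeric [start-end] bracket expands to."""
    return [
        abs(int(b) - int(a)) + 1
        for a, b in re.findall(r"\[(\d+)-(\d+)\]", pattern)
    ]


devices = 5  # "Device [1-5]"
per_device = 1
for n in bracket_counts("GigabitEthernet[1-3]/0/[1-24]"):
    per_device *= n  # 3 slots times 24 ports
```

So applying the object file creates 5 devices and 72 interfaces per device, 360 interfaces in total.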
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
infrahub_sdk/spec/object.py (1)

474-483: Handle MANY_OBJ_DICT_LIST before calling expand_data_with_ranges.

The else branch currently treats every non-ONE_OBJ format the same, so when rel_info.format == RelationshipDataFormat.MANY_OBJ_DICT_LIST (i.e. value is a dict with a "data" list), you end up passing a dict into expand_data_with_ranges, which expects a list and will raise at runtime. Add an explicit case to unwrap the dict's "data" first:

```diff
 if rel_info.format == RelationshipDataFormat.ONE_OBJ:
     expanded_data = expand_data_with_ranges(data=[value])
     nodes = await cls.create_related_nodes(…)
     clean_data[key] = nodes[0]
-else:
-    expanded_data = expand_data_with_ranges(data=value)
+elif rel_info.format == RelationshipDataFormat.MANY_OBJ_DICT_LIST:
+    expanded_data = expand_data_with_ranges(data=value["data"])
+    nodes = await cls.create_related_nodes(…)
+    clean_data[key] = nodes
+else:
+    expanded_data = expand_data_with_ranges(data=value)
     nodes = await cls.create_related_nodes(…)
     clean_data[key] = nodes
```

This ensures expand_data_with_ranges always receives a list.
♻️ Duplicate comments (1)

infrahub_sdk/spec/object.py (1)

170-199: Address the previously flagged exception handling and ValidationError constructor issues.

The two critical issues identified in the previous review remain unresolved:

1. Lines 181-183: The bare except Exception silently swallows all expansion failures, which can hide bugs in the range_expansion function and make debugging difficult.
2. Line 190: The ValidationError constructor expects identifier as the first positional argument, but the code passes a formatted message string directly.

Please apply the fixes suggested in the previous review:

```diff
             if isinstance(value, str) and range_pattern.search(value):
                 try:
                     expand_fields[key] = range_expansion(value)
-                except Exception:
+                except Exception as e:
                     # If expansion fails, treat as no expansion
+                    # Log the error for debugging
                     expand_fields[key] = [value]
         if not expand_fields:
             expanded.append(item)
             continue
         # Check all expanded lists have the same length
         lengths = [len(v) for v in expand_fields.values()]
         if len(set(lengths)) > 1:
-            raise ValidationError(f"Range expansion mismatch: fields expanded to different lengths: {lengths}")
+            raise ValidationError(
+                identifier="range_expansion",
+                message=f"Range expansion mismatch: fields expanded to different lengths: {lengths}"
+            )
```
🧹 Nitpick comments (4)

infrahub_sdk/spec/object.py (4)

208-210: Consider the side effect of mutating self.data.

The code mutates self.data by assigning the expanded result (line 209). This means subsequent calls to validate_format will operate on already-expanded data, which could lead to unexpected behavior if the method is called multiple times or if the data is used elsewhere. Consider whether this mutation is intentional and necessary, or whether you should work only with expanded_data without mutating self.data.

225-226: Inconsistency with the validate_format approach.

In process, the code expands data but does NOT mutate self.data, whereas in validate_format (lines 208-209), self.data is mutated. This inconsistency could be confusing. Consider standardizing the approach: either always mutate self.data in both methods, or never mutate it in either.

351-352: Use consistent variable naming.

The variable is named extended_data here, but elsewhere in the file (lines 208, 225, 462, 474, 568) it's named expanded_data. Apply this diff for consistency:

```diff
-        extended_data = expand_data_with_ranges(data=data["data"])
-        for idx, peer_data in enumerate(extended_data):
+        expanded_data = expand_data_with_ranges(data=data["data"])
+        for idx, peer_data in enumerate(expanded_data):
```

568-570: Remove the unnecessary blank line.

Line 569 is an unnecessary blank line that can be removed for consistency with the rest of the codebase:

```diff
         expanded_data = expand_data_with_ranges(data=data["data"])
-
         for idx, peer_data in enumerate(expanded_data):
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- infrahub_sdk/spec/object.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
When implementing Infrahub checks, subclass InfrahubCheck and override validate(data); do not implement or rely on a check() method
Files:
infrahub_sdk/spec/object.py
🧬 Code graph analysis (1)
infrahub_sdk/spec/object.py (2)
- infrahub_sdk/spec/range_expansion.py: range_expansion (88-118)
- infrahub_sdk/exceptions.py: ValidationError (115-127), ObjectValidationError (130-137)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: unit-tests (3.13)
- GitHub Check: unit-tests (3.11)
- GitHub Check: unit-tests (3.12)
- GitHub Check: unit-tests (3.10)
- GitHub Check: integration-tests-latest-infrahub
🔇 Additional comments (3)
infrahub_sdk/spec/object.py (3)

3-4: LGTM! Standard library imports correctly added to support range expansion functionality.

13-13: LGTM! Import statement correctly references the new range expansion module.

462-471: LGTM! The range expansion correctly wraps the single object dict in a list before expansion, then extracts the first node from the result.
…ObjectFileData class
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
infrahub_sdk/spec/object.py (2)
460-483: Critical issues with range expansion in non-mandatory relationships.

The expansion logic at lines 462 and 474 has multiple critical problems.

Line 462 (ONE_OBJ format):
- expand_data_with_ranges(data=[value]) wraps the value dict and attempts expansion, but since range patterns are nested inside value["data"], not at the top level of value, the expansion is ineffective.
- Even if expansion worked, passing a list (expanded_data) to create_related_nodes when format is ONE_OBJ causes a type mismatch. create_related_nodes at line 539 expects a dict for ONE_OBJ format, not a list.
- Semantically, expanding a ONE_OBJ relationship into multiple objects conflicts with the cardinality constraint.

Line 474 (other formats):
- For MANY_OBJ_DICT_LIST format, value is a dict like {"data": [...], "kind": "..."}. Passing this dict directly to expand_data_with_ranges(data=value) causes a type error, as the function expects a list and will iterate over the dict's keys instead of items.
- For MANY_OBJ_DICT_LIST, this also causes double expansion: once here and again at line 568 in create_related_nodes.
- For MANY_OBJ_LIST_DICT format, value is a list, which matches the function signature, but expansion still won't reach the nested "data" dicts.

Recommended fix: remove the expansion at lines 462 and 474 and let create_related_nodes handle expansion internally where appropriate:
- Line 568 already handles MANY_OBJ_DICT_LIST correctly by expanding data["data"].
- Consider adding similar expansion logic for MANY_OBJ_LIST_DICT format in create_related_nodes (lines 584-608) if needed.
- For ONE_OBJ format, either don't support range expansion (document this limitation), or handle it specially by expanding value["data"] if it's a single dict with range patterns.

Apply this diff to remove the incorrect expansion:

```diff
 elif not rel_info.is_reference and not rel_info.is_mandatory:
     if rel_info.format == RelationshipDataFormat.ONE_OBJ:
-        expanded_data = expand_data_with_ranges(data=[value])
         nodes = await cls.create_related_nodes(
             client=client,
             position=position,
             rel_info=rel_info,
-            data=expanded_data,
+            data=value,
             branch=branch,
             default_schema_kind=default_schema_kind,
         )
         clean_data[key] = nodes[0]
     else:
-        expanded_data = expand_data_with_ranges(data=value)
         nodes = await cls.create_related_nodes(
             client=client,
             position=position,
             rel_info=rel_info,
-            data=expanded_data,
+            data=value,
             branch=branch,
             default_schema_kind=default_schema_kind,
         )
         clean_data[key] = nodes
```
584-608: Consider adding range expansion for the MANY_OBJ_LIST_DICT format.

The MANY_OBJ_DICT_LIST format includes range expansion at line 568, but the MANY_OBJ_LIST_DICT format does not. This inconsistency means range patterns in MANY_OBJ_LIST_DICT relationship data won't be expanded. Consider adding expansion for each item's nested data:

```diff
 if isinstance(data, list) and rel_info.format == RelationshipDataFormat.MANY_OBJ_LIST_DICT:
     for idx, item in enumerate(data):
         context["list_index"] = idx
         peer_kind = item.get("kind") or rel_info.peer_kind
         peer_schema = await cls.get_peer_schema(
             client=client, peer_kind=peer_kind, branch=branch, default_schema_kind=default_schema_kind
         )
         if parent_node:
             rel_info.find_matching_relationship(peer_schema=peer_schema)
             context.update(rel_info.get_context(value=parent_node.id))
-        node = await cls.create_node(
-            client=client,
-            schema=peer_schema,
-            position=position + [rel_info.name, idx + 1],
-            data=item["data"],
-            context=context,
-            branch=branch,
-            default_schema_kind=default_schema_kind,
-        )
-        nodes.append(node)
+        # Expand item["data"] if it contains range patterns
+        expanded_items = expand_data_with_ranges(data=[item["data"]])
+        for exp_idx, expanded_item_data in enumerate(expanded_items):
+            node = await cls.create_node(
+                client=client,
+                schema=peer_schema,
+                position=position + [rel_info.name, idx + 1, exp_idx + 1] if len(expanded_items) > 1 else position + [rel_info.name, idx + 1],
+                data=expanded_item_data,
+                context=context,
+                branch=branch,
+                default_schema_kind=default_schema_kind,
+            )
+            nodes.append(node)
     return nodes
```
♻️ Duplicate comments (1)

infrahub_sdk/spec/object.py (1)

170-198: Address the critical issues flagged in the previous review.

Both issues from the previous review remain unaddressed:

1. Lines 181-183: Bare except Exception silently swallows all exceptions from range_expansion, hiding bugs and making debugging difficult.
2. Line 190: The ValidationError constructor expects identifier as the first parameter, but a formatted message string is passed instead.

Apply this diff to fix both issues:

```diff
             if isinstance(value, str) and range_pattern.search(value):
                 try:
                     expand_fields[key] = range_expansion(value)
-                except Exception:
+                except Exception as e:
                     # If expansion fails, treat as no expansion
+                    # Log the error for debugging
+                    # Note: In production, consider whether to fail fast or continue
                     expand_fields[key] = [value]
         if not expand_fields:
             expanded.append(item)
             continue
         # Check all expanded lists have the same length
         lengths = [len(v) for v in expand_fields.values()]
         if len(set(lengths)) > 1:
-            raise ValidationError(f"Range expansion mismatch: fields expanded to different lengths: {lengths}")
+            raise ValidationError(
+                identifier="range_expansion",
+                message=f"Range expansion mismatch: fields expanded to different lengths: {lengths}"
+            )
```
🧹 Nitpick comments (1)

infrahub_sdk/spec/object.py (1)

225-226: Consider whether the mutation in validate_format is necessary.

Range expansion is correctly applied here before processing. However, note that validate_format (line 209) mutates self.data by reassigning it to expanded_data, while process does not. Since process re-expands from the original self.data, the mutation in validate_format appears unnecessary unless there is a specific reason to persist the expanded data. Consider removing it for consistency:

```diff
         expanded_data = expand_data_with_ranges(self.data)
-        self.data = expanded_data
         for idx, item in enumerate(expanded_data):
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- infrahub_sdk/spec/object.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
When implementing Infrahub checks, subclass InfrahubCheck and override validate(data); do not implement or rely on a check() method
Files:
infrahub_sdk/spec/object.py
🧬 Code graph analysis (1)
infrahub_sdk/spec/object.py (2)
- infrahub_sdk/spec/range_expansion.py: range_expansion (88-118)
- infrahub_sdk/exceptions.py: ValidationError (115-127), ObjectValidationError (130-137)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: unit-tests (3.12)
- GitHub Check: unit-tests (3.10)
- GitHub Check: unit-tests (3.13)
- GitHub Check: unit-tests (3.11)
- GitHub Check: unit-tests (3.9)
- GitHub Check: integration-tests-latest-infrahub
- GitHub Check: Cloudflare Pages
🔇 Additional comments (5)
infrahub_sdk/spec/object.py (5)
`3-4`: **LGTM!** All necessary imports for range expansion are present and correctly structured. Also applies to: 13-13

`208-210`: **LGTM!** Range expansion is correctly applied before validation, and the expanded data is properly used for iteration.

`351-352`: **LGTM!** Range expansion is correctly applied to nested relationship data before validation.

`568-569`: **LGTM!** Range expansion is correctly applied to the nested relationship data (`data["data"]`) before creating related nodes.

`170-198`: **Ignore the duplicate `expand_data_with_ranges` definition warning.** A search confirms only one definition exists in `infrahub_sdk/spec/object.py`; the duplication concern can be disregarded. Likely an incorrect or invalid review comment.
… streamline data handling
Actionable comments posted: 0
♻️ Duplicate comments (1)
infrahub_sdk/spec/object.py (1)
`170-198`: **Critical: Past review issues remain unaddressed.**

The implementation still has the two critical issues previously flagged:

1. Lines 181-182: The bare `except Exception:` silently swallows all exceptions from `range_expansion`, making debugging impossible. If `range_expansion` has a bug or receives malformed input, users will never know. You should catch specific exceptions or log the error with context (field name and value) before falling back.
2. Line 190: The `ValidationError` constructor is called incorrectly. Based on the exception definition, it expects `identifier` and `message` as separate parameters, not a formatted string as the first argument.

Apply this diff to fix both issues:

```diff
 if isinstance(value, str) and range_pattern.search(value):
     try:
         expand_fields[key] = range_expansion(value)
-    except Exception:
-        # If expansion fails, treat as no expansion
+    except Exception as e:
+        # Log expansion failure for debugging
+        # Consider whether to fail fast or continue based on your requirements
         expand_fields[key] = [value]
 if not expand_fields:
     expanded.append(item)
     continue
 # Check all expanded lists have the same length
 lengths = [len(v) for v in expand_fields.values()]
 if len(set(lengths)) > 1:
-    raise ValidationError(f"Range expansion mismatch: fields expanded to different lengths: {lengths}")
+    raise ValidationError(
+        identifier="range_expansion",
+        message=f"Range expansion mismatch: fields expanded to different lengths: {lengths}"
+    )
```

Additionally, consider enhancing the error message to include which fields were involved and their individual lengths for better debugging.
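To make the corrected control flow concrete, here is a self-contained sketch of per-item expansion with the equal-length check, including per-field lengths in the error message as suggested. All names are illustrative, and `_expand_one` is a trivial stand-in for the SDK's `range_expansion`, not its real logic:

```python
import re

RANGE_PATTERN = re.compile(r"\[\d+-\d+\]")

def _expand_one(value: str) -> list[str]:
    # Trivial stand-in: expands only the first [a-b] pattern in the string.
    m = re.search(r"\[(\d+)-(\d+)\]", value)
    a, b = int(m.group(1)), int(m.group(2))
    return [value[: m.start()] + str(i) + value[m.end():] for i in range(a, b + 1)]

def expand_item(item: dict) -> list[dict]:
    """Expand one object dict; all range fields must yield equal lengths."""
    expand_fields: dict[str, list[str]] = {}
    for key, value in item.items():
        if isinstance(value, str) and RANGE_PATTERN.search(value):
            try:
                expand_fields[key] = _expand_one(value)
            except (ValueError, AttributeError) as exc:
                # Surface the failing field instead of silently swallowing it
                raise ValueError(f"Range expansion failed for {key!r}={value!r}") from exc
    if not expand_fields:
        return [item]
    lengths = {key: len(values) for key, values in expand_fields.items()}
    if len(set(lengths.values())) > 1:
        # Report per-field lengths so the mismatch is easy to locate
        raise ValueError(f"Range expansion mismatch, per-field lengths: {lengths}")
    count = next(iter(lengths.values()))
    return [
        {key: expand_fields[key][i] if key in expand_fields else value
         for key, value in item.items()}
        for i in range(count)
    ]
```

For example, `expand_item({"name": "eth[1-2]", "speed": 1000})` produces two objects sharing the non-range field, while mismatched range lengths across fields raise an error naming each field's length.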
🧹 Nitpick comments (1)
infrahub_sdk/spec/object.py (1)
`225-226`: **Consider optimizing redundant expansion.**

If `validate_format` has already been called (which assigns expanded data back to `self.data` at line 209), this expansion is redundant since `self.data` already contains expanded items. However, if `process` can be called independently without prior validation, this expansion is necessary.

Consider either:

- Adding a flag to track whether expansion has occurred to avoid redundant work
- Documenting that `validate_format` should be called before `process`
- Accepting the redundancy as a safety measure (current approach)

Given the "Chill" review setting, this is noted as a potential optimization rather than a required change.
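If the first option (a tracking flag) were adopted, the idea could be sketched as follows. Names are illustrative only, and the expansion helper is a no-op stand-in; this is not the real `ObjectFile` class from the SDK:

```python
def expand_data_with_ranges(data: list) -> list:
    # Stand-in for the SDK helper; the real one expands [a-b] patterns.
    return list(data)

class ObjectFileSketch:
    """Illustrative only: a flag prevents re-running expansion when both
    validate_format() and process() are called on the same instance."""

    def __init__(self, data: list) -> None:
        self.data = data
        self._expanded = False
        self.expansion_calls = 0  # instrumentation for the demo

    def _ensure_expanded(self) -> None:
        if not self._expanded:
            self.data = expand_data_with_ranges(self.data)
            self.expansion_calls += 1
            self._expanded = True

    def validate_format(self) -> None:
        self._ensure_expanded()
        # ... validation of each expanded item would go here ...

    def process(self) -> None:
        self._ensure_expanded()  # no-op if validate_format already ran
        # ... node creation would go here ...
```

Calling `validate_format()` and then `process()` on the same instance runs expansion exactly once, which removes the redundancy without requiring a particular call order.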
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`infrahub_sdk/spec/object.py` (6 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
When implementing Infrahub checks, subclass InfrahubCheck and override validate(data); do not implement or rely on a check() method
Files:
infrahub_sdk/spec/object.py
🧬 Code graph analysis (1)
infrahub_sdk/spec/object.py (3)
- `infrahub_sdk/spec/range_expansion.py` (1): `range_expansion` (88-118)
- `infrahub_sdk/exceptions.py` (2): `ValidationError` (115-127), `ObjectValidationError` (130-137)
- `infrahub_sdk/schema/__init__.py` (2): `get` (228-256), `get` (523-563)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: unit-tests (3.12)
- GitHub Check: unit-tests (3.9)
- GitHub Check: unit-tests (3.13)
- GitHub Check: unit-tests (3.11)
- GitHub Check: unit-tests (3.10)
- GitHub Check: integration-tests-latest-infrahub
- GitHub Check: documentation
- GitHub Check: Cloudflare Pages
🔇 Additional comments (3)
infrahub_sdk/spec/object.py (3)
`208-210`: **LGTM with note on data mutation.** The expansion is correctly applied before validation. Note that `self.data` is mutated to hold the expanded results, which is appropriate given that validation and processing both need to operate on expanded data.

`351-352`: **LGTM: Correctly handles nested expansion.** The expansion of `data["data"]` ensures that range patterns in nested relationship objects are expanded before validation. This is necessary because top-level expansion only handles the main objects.

`566-567`: **LGTM: Consistent with validation path.** The expansion here mirrors the validation path at lines 351-352, ensuring nested relationship objects are expanded before creation. Note that if validation was called first, this is redundant since `validate_related_nodes` performs the same expansion (though without mutating the original data). This redundancy is acceptable for safety.
Implements #560
Summary by CodeRabbit
New Features
Documentation
Tests