Add command to generate Pydantic model based on a GraphQL query #546
base: develop
Conversation
Walkthrough

Adds a new infrahubctl CLI group "graphql" with subcommands export-schema and generate-return-types, implemented in infrahub_sdk/ctl/graphql.py and registered in cli_commands.py. Introduces GraphQL support modules under infrahub_sdk/graphql (constants, plugin, query, renderers, utils) and updates package exports. Adds documentation and changelog entries, test fixtures and unit tests for rendering and plugins, updates pyproject to include ariadne-codegen extras, and removes a Vale case-swap rule for "GraphQL".

Pre-merge checks and finishing touches: ✅ 5 checks passed.
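For orientation, a typical invocation of the two new subcommands might look like the sketch below. The `--destination` flag is an assumption inferred from the `destination` option in the export-schema code reviewed further down; the rest matches the generated CLI reference.

```console
$ infrahubctl graphql export-schema --destination schema.graphql
$ infrahubctl graphql generate-return-types queries/ --schema schema.graphql
```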
Codecov Report: ❌ Patch coverage is …

```diff
@@            Coverage Diff            @@
##           develop     #546      +/-   ##
===========================================
- Coverage    75.85%   75.46%    -0.40%
===========================================
  Files          101      108       +7
  Lines         8971     9243     +272
  Branches      1769     1839      +70
===========================================
+ Hits          6805     6975     +170
- Misses        1680     1781     +101
- Partials       486      487       +1
```

Flags with carried forward coverage won't be shown. ... and 3 files with indirect coverage changes.
Deploying infrahub-sdk-python with Cloudflare Pages

| | |
| --- | --- |
| Latest commit: | c2cf606 |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://72959b29.infrahub-sdk-python.pages.dev |
| Branch Preview URL: | https://dga-20250913-graphql-return.infrahub-sdk-python.pages.dev |
Force-pushed from 9f31369 to 6754510 (Compare)

Add support for fragment within the same file

Force-pushed from 6754510 to 911ca81 (Compare)
Actionable comments posted: 8
🧹 Nitpick comments (9)
tests/fixtures/unit/test_graphql_plugin/python01_after_annotation.py (1)
24-33: Consider the verbosity of wrapper models for simple fields.

The separate `BaseModel` classes for `name`, `description`, and `status` (each containing only `value: Optional[str]`) create a deeply nested structure. While this may accurately reflect the GraphQL schema, it adds verbosity. If these models are auto-generated from the GraphQL schema (as suggested by the fixture location), this is acceptable. However, if manually maintained, consider whether the GraphQL schema could be simplified.
pyproject.toml (1)
49-49: Approve ariadne-codegen dependency.

Version ^0.15.2 is available on PyPI and has no known security advisories; consider updating to ^0.15.3 for the latest patch fixes, though 0.15.2 is acceptable.
docs/docs/python-sdk/guides/python-typing.mdx (1)
130-151: Consider enhancing the example workflow with complete code samples.

The workflow steps are clear, but the example would benefit from showing a complete `.gql` file example and a more detailed usage example with error handling.

Consider adding a concrete `.gql` file example in step 1:

```graphql
# queries/get_devices.gql
query GetDevices {
  InfraDevice {
    edges {
      node {
        id
        name { value }
        status { value }
      }
    }
  }
}
```

And enhancing step 3 with error handling:

```python
from pydantic import ValidationError

from .queries.get_devices import GetDevicesQuery

try:
    response = await client.execute_graphql(query=MY_QUERY)
    data = GetDevicesQuery(**response)
    # Access type-safe fields
    for edge in data.infra_device.edges:
        print(f"Device: {edge.node.name.value}")
except ValidationError as e:
    print(f"Invalid response structure: {e}")
```

tests/unit/sdk/graphql/conftest.py (1)
6-13: Consider consolidating duplicate enum definitions.

The `MyStrEnum` and `MyIntEnum` classes are duplicated in both this conftest file and `tests/unit/sdk/graphql/test_renderer.py` (lines 8-15). To follow DRY principles, consider keeping these enum definitions only in conftest.py and removing them from test_renderer.py, since fixtures defined here are already shared across test modules.

Apply this approach:

- Keep the enum definitions here in conftest.py
- Remove lines 8-15 from test_renderer.py
- Import the enums in test_renderer.py if needed:

```python
from .conftest import MyStrEnum, MyIntEnum
```
infrahub_sdk/ctl/graphql.py (4)
38-54: Consider handling non-file, non-directory paths explicitly.

If `query_path` exists but is neither a regular file nor a directory (e.g., a broken symlink or a special file), the function will return an empty list from `glob()` without any indication to the user. This may lead to confusion when no files are found.

Apply this diff to add an explicit check:

```diff
     if not query_path.is_dir() and query_path.is_file():
         return [query_path]

+    if not query_path.is_dir():
+        raise ValueError(f"{query_path} is neither a file nor a directory")
     return list(query_path.glob("**/*.gql"))
```
57-74: Consider catching parse exceptions for better error messages.

`parse(queries_str)` on line 66 can raise `GraphQLSyntaxError` if the query file contains invalid GraphQL syntax. Currently, this exception propagates to the caller. While the caller in `generate_return_types` does catch generic `ValueError`, `GraphQLSyntaxError` is not a subclass of `ValueError` and will not be caught, potentially causing an unhandled exception.

Apply this diff to catch and re-raise as `ValueError` for consistent error handling:

```diff
     queries_str = queries_path.read_text(encoding="utf-8")
-    queries_ast = parse(queries_str)
+    try:
+        queries_ast = parse(queries_str)
+    except Exception as exc:
+        raise ValueError(f"Failed to parse GraphQL query: {exc}") from exc

     validation_errors = validate(
```
77-88: Ensure parent directory exists before writing generated files.

Line 87 writes to `file_path` without verifying that the parent directory exists. If the directory structure has been modified or deleted between discovery and generation, this will raise a `FileNotFoundError`.

Apply this diff to ensure the directory exists:

```diff
     for file_name, module in package._result_types_files.items():
         file_path = directory / file_name
+        file_path.parent.mkdir(parents=True, exist_ok=True)
         insert_fragments_inline(module, fragment)
```

Note on private attribute access: this function accesses private attributes (`_result_types_files`, `_add_comments_to_code`, `_generated_files`) from `PackageGenerator`. While this is necessary for the plugin architecture and wrapper functionality, it creates tight coupling with ariadne-codegen internals. Consider adding a comment documenting this dependency or, if ariadne-codegen provides public APIs for these operations in the future, migrating to those.
112-118: Consider clarifying default query path behavior.

The `query` parameter defaults to `Path.cwd()` (line 115), meaning the command will search the current working directory for `.gql` files if no path is provided. This might be unexpected for users who expect an explicit path argument. Consider either:

- Making the parameter required (`query: Path = typer.Argument(..., help=...)`), or
- Adding a note in the help text that it defaults to the current directory:

```diff
-    query: Path = typer.Argument(Path.cwd(), help="Location of the GraphQL query file(s)."),
+    query: Path = typer.Argument(Path.cwd(), help="Location of the GraphQL query file(s). Defaults to current directory."),
```

infrahub_sdk/graphql/utils.py (1)
24-30: Refactor to avoid modifying a list during iteration.

Lines 26-28 remove an item from `module.body` while iterating over it. Although the immediate `return` prevents issues in practice, this pattern is fragile and will silently skip subsequent fragment imports if multiple exist.

Apply this diff to handle all fragment imports safely:

```diff
 def remove_fragment_import(module: ast.Module) -> ast.Module:
     """Remove the fragment import from the module."""
-    for item in module.body:
-        if isinstance(item, ast.ImportFrom) and item.module == "fragments":
-            module.body.remove(item)
-            return module
-    return module
+    module.body = [
+        item for item in module.body
+        if not (isinstance(item, ast.ImportFrom) and item.module == "fragments")
+    ]
+    return module
```

Alternatively, if only the first fragment import should be removed (and multiples are invalid), document this explicitly in the docstring.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `poetry.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (21)
- .vale/styles/Infrahub/branded-terms-case-swap.yml (0 hunks)
- changelog/+gql-command.added.md (1 hunks)
- docs/docs/infrahubctl/infrahubctl-graphql.mdx (1 hunks)
- docs/docs/python-sdk/guides/python-typing.mdx (1 hunks)
- infrahub_sdk/ctl/cli_commands.py (2 hunks)
- infrahub_sdk/ctl/graphql.py (1 hunks)
- infrahub_sdk/graphql/__init__.py (1 hunks)
- infrahub_sdk/graphql/constants.py (1 hunks)
- infrahub_sdk/graphql/plugin.py (1 hunks)
- infrahub_sdk/graphql/query.py (1 hunks)
- infrahub_sdk/graphql/renderers.py (3 hunks)
- infrahub_sdk/graphql/utils.py (1 hunks)
- pyproject.toml (5 hunks)
- tests/fixtures/unit/test_graphql_plugin/python01.py (1 hunks)
- tests/fixtures/unit/test_graphql_plugin/python01_after_annotation.py (1 hunks)
- tests/fixtures/unit/test_graphql_plugin/python02.py (1 hunks)
- tests/fixtures/unit/test_graphql_plugin/python02_after_annotation.py (1 hunks)
- tests/unit/sdk/graphql/conftest.py (1 hunks)
- tests/unit/sdk/graphql/test_plugin.py (1 hunks)
- tests/unit/sdk/graphql/test_query.py (1 hunks)
- tests/unit/sdk/graphql/test_renderer.py (1 hunks)
💤 Files with no reviewable changes (1)
- .vale/styles/Infrahub/branded-terms-case-swap.yml
🧰 Additional context used
📓 Path-based instructions (5)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
When implementing Infrahub checks, subclass InfrahubCheck and override validate(data); do not implement or rely on a check() method
Files:
tests/fixtures/unit/test_graphql_plugin/python02.py
tests/fixtures/unit/test_graphql_plugin/python01.py
tests/unit/sdk/graphql/conftest.py
infrahub_sdk/graphql/__init__.py
infrahub_sdk/graphql/query.py
tests/unit/sdk/graphql/test_renderer.py
infrahub_sdk/graphql/plugin.py
infrahub_sdk/graphql/renderers.py
infrahub_sdk/ctl/graphql.py
infrahub_sdk/graphql/constants.py
infrahub_sdk/graphql/utils.py
tests/fixtures/unit/test_graphql_plugin/python01_after_annotation.py
tests/unit/sdk/graphql/test_plugin.py
infrahub_sdk/ctl/cli_commands.py
tests/fixtures/unit/test_graphql_plugin/python02_after_annotation.py
tests/unit/sdk/graphql/test_query.py
tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use the custom pytest plugin (infrahub_sdk.pytest_plugin) fixtures for clients, configuration, and Infrahub-specific testing
Files:
tests/fixtures/unit/test_graphql_plugin/python02.py
tests/fixtures/unit/test_graphql_plugin/python01.py
tests/unit/sdk/graphql/conftest.py
tests/unit/sdk/graphql/test_renderer.py
tests/fixtures/unit/test_graphql_plugin/python01_after_annotation.py
tests/unit/sdk/graphql/test_plugin.py
tests/fixtures/unit/test_graphql_plugin/python02_after_annotation.py
tests/unit/sdk/graphql/test_query.py
tests/unit/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Place and write unit tests under tests/unit/ (isolated component tests)
Files:
tests/unit/sdk/graphql/conftest.py
tests/unit/sdk/graphql/test_renderer.py
tests/unit/sdk/graphql/test_plugin.py
tests/unit/sdk/graphql/test_query.py
infrahub_sdk/ctl/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
infrahub_sdk/ctl/**/*.py
: Build CLI commands with Typer
Organize and keep all CLI commands within infrahub_sdk/ctl/
Files:
infrahub_sdk/ctl/graphql.py
infrahub_sdk/ctl/cli_commands.py
docs/**/*.{md,mdx}
📄 CodeRabbit inference engine (CLAUDE.md)
docs/**/*.{md,mdx}
: Follow the Diataxis framework and classify docs as Tutorials, How-to guides, Explanation, or Reference
Structure How-to guides with required frontmatter and sections: introduction, prerequisites, step-by-step steps, validation, related resources
Structure Topics (Explanation) docs with introduction, concepts & definitions, background & context, architecture & design, connections, further reading
Use a professional, concise, informative tone with consistent structure across documents
Use proper language tags on all code blocks
Include both async and sync examples where applicable using the Tabs component
Add validation steps to guides to confirm success and expected outputs
Use callouts for warnings, tips, and important notes
Define new terms on first use and use domain-relevant terminology consistent with Infrahub’s model/UI
Conform to markdown style rules in .markdownlint.yaml and Vale styles in .vale/styles/
Files:
docs/docs/python-sdk/guides/python-typing.mdx
docs/docs/infrahubctl/infrahubctl-graphql.mdx
🧠 Learnings (1)
📚 Learning: 2025-08-24T13:35:12.998Z
Learnt from: CR
PR: opsmill/infrahub-sdk-python#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T13:35:12.998Z
Learning: Applies to infrahub_sdk/ctl/**/*.py : Build CLI commands with Typer
Applied to files:
infrahub_sdk/ctl/cli_commands.py
🧬 Code graph analysis (11)
tests/fixtures/unit/test_graphql_plugin/python02.py (1)
- tests/fixtures/unit/test_graphql_plugin/python01.py (2): CreateDevice (8-9), CreateDeviceInfraDeviceUpsert (12-14)

tests/fixtures/unit/test_graphql_plugin/python01.py (2)
- tests/fixtures/unit/test_graphql_plugin/python02.py (2): CreateDevice (6-7), CreateDeviceInfraDeviceUpsert (10-12)
- tests/fixtures/unit/test_graphql_plugin/python02_after_annotation.py (2): CreateDevice (9-10), CreateDeviceInfraDeviceUpsert (13-15)

tests/unit/sdk/graphql/conftest.py (1)
- infrahub_sdk/protocols_base.py (1): Enum (111-112)

infrahub_sdk/graphql/__init__.py (2)
- infrahub_sdk/graphql/query.py (2): Mutation (46-77), Query (31-43)
- infrahub_sdk/graphql/renderers.py (3): render_input_block (152-212), render_query_block (83-149), render_variables_to_string (69-80)

infrahub_sdk/graphql/query.py (1)
- infrahub_sdk/graphql/renderers.py (3): render_input_block (152-212), render_query_block (83-149), render_variables_to_string (69-80)

tests/unit/sdk/graphql/test_renderer.py (1)
- infrahub_sdk/graphql/renderers.py (2): render_input_block (152-212), render_query_block (83-149)

infrahub_sdk/ctl/graphql.py (4)
- infrahub_sdk/async_typer.py (2): AsyncTyper (11-31), command (29-31)
- infrahub_sdk/ctl/client.py (1): initialize_client (10-25)
- infrahub_sdk/ctl/utils.py (1): catch_exception (78-106)
- infrahub_sdk/graphql/utils.py (2): insert_fragments_inline (13-21), remove_fragment_import (24-30)

tests/fixtures/unit/test_graphql_plugin/python01_after_annotation.py (1)
- tests/fixtures/unit/test_graphql_plugin/python02.py (2): CreateDevice (6-7), CreateDeviceInfraDeviceUpsert (10-12)

tests/unit/sdk/graphql/test_plugin.py (1)
- infrahub_sdk/graphql/plugin.py (4): FutureAnnotationPlugin (12-28), generate_result_types_module (23-28), generate_result_types_module (56-62), generate_result_types_module (81-86)

tests/fixtures/unit/test_graphql_plugin/python02_after_annotation.py (3)
- tests/fixtures/unit/test_graphql_plugin/python01.py (2): CreateDevice (8-9), CreateDeviceInfraDeviceUpsert (12-14)
- tests/fixtures/unit/test_graphql_plugin/python01_after_annotation.py (2): CreateDevice (8-9), CreateDeviceInfraDeviceUpsert (12-14)
- tests/fixtures/unit/test_graphql_plugin/python02.py (2): CreateDevice (6-7), CreateDeviceInfraDeviceUpsert (10-12)

tests/unit/sdk/graphql/test_query.py (1)
- infrahub_sdk/graphql/query.py (2): Mutation (46-77), Query (31-43)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: unit-tests (3.9)
- GitHub Check: unit-tests (3.13)
- GitHub Check: unit-tests (3.12)
- GitHub Check: unit-tests (3.11)
- GitHub Check: integration-tests-latest-infrahub
- GitHub Check: unit-tests (3.10)
- GitHub Check: Cloudflare Pages
🔇 Additional comments (35)
pyproject.toml (2)
73-73: LGTM! ariadne-codegen properly added to extras.

The dependency is correctly included in both the `ctl` and `all` extras, making it available for CLI GraphQL codegen functionality. Also applies to: 86-86.

135-135: LGTM! Test fixture exclusions are appropriate.

Excluding the generated test fixtures from Ruff and suppressing FA100 (future annotations) for these files makes sense, as they represent expected output from the codegen process and don't need to follow all project conventions. Also applies to: 281-283.
tests/fixtures/unit/test_graphql_plugin/python01_after_annotation.py (4)
1-6: LGTM! Imports are correct for Pydantic v2 with forward references.

The use of `from __future__ import annotations` enables forward reference strings without quotes, which is appropriate for the nested model structure.

8-9: LGTM! Field alias matches GraphQL convention.

The alias `"InfraDeviceUpsert"` correctly maps the Pythonic snake_case field name to the GraphQL PascalCase field name.

36-38: LGTM! model_rebuild() calls are necessary for forward references.

In Pydantic v2, when using forward references (even with `from __future__ import annotations`), calling `model_rebuild()` after all referenced models are defined ensures proper field resolution.
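To illustrate the pattern for readers unfamiliar with it, here is a minimal, self-contained sketch (model names are invented for the example, not taken from the fixtures):

```python
from __future__ import annotations

from typing import Optional

from pydantic import BaseModel


class Device(BaseModel):
    name: DeviceName  # forward reference: DeviceName is defined below


class DeviceName(BaseModel):
    value: Optional[str] = None


# Resolve the forward reference now that DeviceName is defined.
Device.model_rebuild()

print(Device(name=DeviceName(value="atl1-edge1")))
```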
1-39: No changes needed for the python01 fixtures. The identical before/after fixtures correctly test idempotency of the FutureAnnotationPlugin.

tests/fixtures/unit/test_graphql_plugin/python01.py (1)

1-39: LGTM with caveat! Model structure is well-formed.

The Pydantic models correctly represent a nested GraphQL response structure with:

- Proper use of `from __future__ import annotations`
- Appropriate Field aliases for GraphQL naming conventions
- Correct `model_rebuild()` calls for forward reference resolution

However, this file appears identical to `python01_after_annotation.py`. See the verification request in that file's review regarding the intended relationship between these fixtures.

changelog/+gql-command.added.md (1)
1-1
: LGTM! Clear and concise changelog entry.The changelog entry accurately describes the new feature and follows the expected format.
infrahub_sdk/graphql/constants.py (1)
1-1
: LGTM! Well-structured type mapping constant.The mapping correctly defines Python-to-GraphQL non-null type conversions for common scalar types. Centralizing this in a constants module improves maintainability.
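As a sketch of what such a mapping typically contains (illustrative entries, not the actual constant):

```python
VARIABLE_TYPE_MAPPING = [
    (str, "String!"),
    (int, "Int!"),
    (float, "Float!"),
    (bool, "Boolean!"),
]
```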
infrahub_sdk/ctl/cli_commands.py (2)
29-29
: LGTM! Import follows established pattern.The import of
graphql_app
is consistent with other CLI subcommand imports in this file.
67-67
: LGTM! CLI registration follows established pattern.The registration of the
graphql
command group follows the same pattern as other CLI subcommands in this file. As per coding guidelines, CLI commands are properly organized using Typer ininfrahub_sdk/ctl/
.Based on learnings
docs/docs/python-sdk/guides/python-typing.mdx (1)
105-128
: LGTM! Clear introduction and rationale for Pydantic model generation.The section provides a strong rationale for generating Pydantic models from GraphQL queries, covering type safety, IDE support, documentation, and validation. The command examples are clear and include both current directory and specific file usage patterns.
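Both usage patterns can be sketched as follows (the file path is illustrative):

```console
$ infrahubctl graphql generate-return-types
$ infrahubctl graphql generate-return-types queries/get_devices.gql --schema schema.graphql
```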
docs/docs/infrahubctl/infrahubctl-graphql.mdx (2)
1-21
: LGTM! Clear command group overview.The main command group documentation follows the standard CLI reference format with proper usage block, options, and subcommand list.
22-36
: LGTM! Clear export-schema documentation.The
export-schema
command documentation is complete and follows the standard reference format with usage, options, and defaults clearly specified.tests/unit/sdk/graphql/test_plugin.py (3)
14-36
: LGTM! Fixture setup is clean and properly typed.The fixtures correctly load test resources using
get_fixtures_dir()
and appropriate encoding. Return type annotations are present.
38-48
: LGTM! Test correctly verifies idempotency of the plugin.The test appropriately checks that the plugin doesn't duplicate the
__future__
import when already present. Theoperation_definition=None
argument aligns with the plugin's signature (the parameter is intentionally unused in the implementation).
51-62
: LGTM! Test correctly verifies annotation insertion.The test appropriately validates that the plugin inserts the
__future__
import when it's missing. Together with the first test, this provides good coverage of both plugin branches.infrahub_sdk/graphql/__init__.py (1)
1-12
: LGTM! Clean public API organization.The module properly exposes the GraphQL utilities through explicit imports and
__all__
declaration, making the public interface clear.tests/unit/sdk/graphql/conftest.py (1)
16-113
: LGTM! Well-structured test fixtures.The fixtures provide comprehensive test data covering various GraphQL scenarios (no filters, aliases, fragments, different filter types). The naming is clear and the data structures properly represent the different test cases.
infrahub_sdk/graphql/renderers.py (5)
9-9: LGTM! Good separation of concerns.

Moving `VARIABLE_TYPE_MAPPING` to a constants module improves code organization.

12-66: LGTM! Comprehensive value conversion with excellent documentation.

The `convert_to_graphql_as_string` function properly handles various Python types (None, str, bool, Enum, list, BaseModel) and converts them to GraphQL string representations. The docstring provides clear examples and the implementation correctly uses `json.dumps()` for string escaping.

69-80: LGTM! Clean variable rendering implementation.

The function properly maps Python types to GraphQL variable type strings using the imported `VARIABLE_TYPE_MAPPING`.
83-149: LGTM! Well-documented query block renderer.

The `render_query_block` function correctly handles:

- Special GraphQL keys (`@filters`, `@alias`)
- Nested structures with proper indentation
- Recursive rendering for complex queries

The comprehensive docstring with examples makes the function's behavior clear.
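To make the special keys concrete, here is a sketch of the kind of transformation they drive. The input shape matches the test fixtures; the exact whitespace of the output is an assumption about the renderer's defaults:

```python
query_data = {
    "device": {
        "@filters": {"name__value": "$name"},  # rendered as GraphQL arguments
        "name": {"@alias": "device_name", "value": None},  # rendered as an alias
    }
}
```

which would render along these lines:

```graphql
device(name__value: $name) {
    device_name: name {
        value
    }
}
```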
152-212: LGTM! Robust input block renderer.

The `render_input_block` function properly handles:

- Nested dictionaries with recursive rendering
- Lists with mixed item types (dicts and primitives)
- Proper indentation at all nesting levels

The docstring clearly documents the differences from `render_query_block` (no special keys like `@filters` or `@alias`).

tests/unit/sdk/graphql/test_renderer.py (1)
104-245: LGTM! Comprehensive test coverage.

The test functions thoroughly validate:

- Query block rendering with and without filters/aliases/fragments
- Input block rendering with nested structures and lists
- Different indentation levels (offset=2, indentation=2)
- Line-by-line assertion of expected output

The tests provide excellent coverage of the rendering functions.

infrahub_sdk/graphql/query.py (3)
infrahub_sdk/graphql/query.py (3)
8-28
: LGTM! Clean base class design.
BaseGraphQLQuery
provides a solid foundation with:
- Type-safe attributes (
query_type
,indentation
)- Flexible constructor accepting query, variables, and name
- Reusable
render_first_line()
method that properly formats the query opening with variables
31-43
: LGTM! Clean Query class implementation.The
Query
class properly:
- Sets
query_type = "query"
- Delegates rendering to
render_query_block
with appropriate parameters- Supports optional
convert_enum
parameter- Returns properly formatted GraphQL query string
46-77
: LGTM! Well-structured Mutation class.The
Mutation
class correctly:
- Extends
BaseGraphQLQuery
with mutation-specific attributes (mutation
,input_data
)- Renders both input block (mutation arguments) and query block (return fields)
- Uses proper indentation and formatting
- Supports
convert_enum
parameter throughoutinfrahub_sdk/graphql/plugin.py (2)
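A sketch of the separation between the input block and the return-field block in a rendered mutation (the mutation name comes from the test fixtures; the return fields and whitespace are assumptions):

```graphql
mutation {
    InfraDeviceUpsert(
        data: {name: {value: "atl1-edge1"}}
    ) {
        ok
        object {
            id
        }
    }
}
```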
infrahub_sdk/graphql/plugin.py (2)

31-62: LGTM! Type hint normalization is correctly implemented.

The recursive traversal of `Subscript` nodes correctly handles nested `List` annotations (e.g., `List[List[X]]` → `list[list[X]]`) and applies the transformation to all annotated assignments within class definitions. The integration with `FutureAnnotationPlugin.insert_future_annotation` ensures `from __future__ import annotations` is present before normalizing type hints.
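A minimal runnable sketch of the normalization being described (the real plugin restricts the rewrite to annotated assignments inside class definitions):

```python
import ast


def normalize_list_hints(tree: ast.AST) -> None:
    """Rewrite typing.List[...] annotations to builtin list[...] in place."""
    for node in ast.walk(tree):
        if isinstance(node, ast.Subscript) and isinstance(node.value, ast.Name) and node.value.id == "List":
            node.value.id = "list"


module = ast.parse("x: List[List[int]]")
normalize_list_hints(module)
print(ast.unparse(module))  # x: list[list[int]]
```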
65-86: LGTM! BaseModel import replacement is correctly implemented.

The plugin correctly locates the `from base_model import BaseModel` import generated by ariadne-codegen and replaces it with `from pydantic import BaseModel`. The error handling for missing imports is appropriate, and the AST transformation is accurate.
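The replacement amounts to a one-line change in each generated module:

```diff
-from base_model import BaseModel
+from pydantic import BaseModel
```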
tests/fixtures/unit/test_graphql_plugin/python02_after_annotation.py (1)

1-20: LGTM! Fixture correctly demonstrates Pydantic v2 forward reference handling.

This test fixture appropriately uses `from __future__ import annotations` with string-quoted forward references and `model_rebuild()` calls to verify the FutureAnnotationPlugin behavior. The pattern aligns with Pydantic v2 best practices for resolving forward references.
tests/fixtures/unit/test_graphql_plugin/python02.py (1)

1-17: LGTM! Fixture correctly demonstrates the standard forward reference pattern.

This test fixture uses standard string-quoted forward references without `from __future__ import annotations`, with appropriate `model_rebuild()` calls. This provides the baseline for testing FutureAnnotationPlugin transformations.
tests/unit/sdk/graphql/test_query.py (2)

3-3: LGTM! Import path updated to reflect module reorganization.

The updated import from `infrahub_sdk.graphql.query` aligns with the PR's GraphQL module refactoring. The Query and Mutation classes maintain the same API, ensuring test compatibility. Based on coding guidelines.

16-291: LGTM! Comprehensive test coverage for Query and Mutation rendering.

The test suite thoroughly covers various scenarios: no variables, empty filters, filters with variables, enum handling, and mutation rendering. Tests follow pytest best practices with clear naming and explicit assertions. Based on coding guidelines.
infrahub_sdk/graphql/utils.py (1)
4-10: LGTM! Clean sentinel pattern for class lookup.

The function correctly identifies the first class definition and returns a clear sentinel value (-1) when none exists. The implementation is straightforward and well-documented.
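A minimal sketch of the sentinel pattern being praised here, reconstructed from the description above (the actual implementation lives in infrahub_sdk/graphql/utils.py):

```python
import ast


def get_class_def_index(module: ast.Module) -> int:
    """Return the index of the first class definition in the module, or -1 if none exists."""
    for index, item in enumerate(module.body):
        if isinstance(item, ast.ClassDef):
            return index
    return -1


module = ast.parse("import ast\n\nclass Foo:\n    pass\n")
print(get_class_def_index(module))  # 1
```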
## `infrahubctl graphql generate-return-types`

Create Pydantic Models for GraphQL query return types

**Usage**:

```console
$ infrahubctl graphql generate-return-types [OPTIONS] [QUERY]
```

**Arguments**:

* `[QUERY]`: Location of the GraphQL query file(s). [default: /Users/damien/projects/infrahub-sdk-python-pink]

**Options**:

* `--schema PATH`: Path to the GraphQL schema file. [default: schema.graphql]
* `--config-file TEXT`: [env var: INFRAHUBCTL_CONFIG; default: infrahubctl.toml]
* `--help`: Show this message and exit.
🛠️ Refactor suggestion | 🟠 Major
Consider clarifying the QUERY argument default value.

The documentation shows `[default: /Users/damien/projects/infrahub-sdk-python-pink]` for the QUERY argument, which appears to be a hardcoded absolute path from the developer's local environment rather than a generic default. The default path should be relative or described generically. Update line 50 to show the actual runtime default (likely the current working directory) rather than a hardcoded absolute path:

```diff
-* `[QUERY]`: Location of the GraphQL query file(s). [default: /Users/damien/projects/infrahub-sdk-python-pink]
+* `[QUERY]`: Location of the GraphQL query file(s). [default: .]
```

This appears to be auto-generated documentation that captured a local path. Verify the actual default in the code (likely `Path.cwd()` based on the AI summary) and update accordingly.
> if you don't have your own Python module, it's possible to use relative path by having the `procotols.py` in the same directory as your script/transform/generator
Minor typo in the comment above this section.

Line 103 contains a typo: "procotols.py" should be "protocols.py". Apply this diff to fix it:

```diff
-> if you don't have your own Python module, it's possible to use relative path by having the `procotols.py` in the same directory as your script/transform/generator
+> if you don't have your own Python module, it's possible to use relative path by having the `protocols.py` in the same directory as your script/transform/generator
```
```python
@app.command()
@catch_exception(console=console)
async def export_schema(
    destination: Path = typer.Option("schema.graphql", help="Path to the GraphQL schema file."),
    _: str = CONFIG_PARAM,
) -> None:
    """Export the GraphQL schema to a file."""

    client = initialize_client()
    response = await client._get(url=f"{client.address}/schema.graphql")
    destination.write_text(response.text)
    console.print(f"[green]Schema exported to {destination}")
```
Validate the HTTP response before writing the schema file.

Lines 107-108 fetch the schema and write `response.text` directly to the file without checking the HTTP status code. If the server returns an error (e.g., 404, 500), the error response body will be written to the schema file, resulting in an invalid schema that will cause downstream failures.

Apply this diff to validate the response:

```diff
     client = initialize_client()
     response = await client._get(url=f"{client.address}/schema.graphql")
+    if response.status_code != 200:
+        raise ValueError(f"Failed to fetch schema: HTTP {response.status_code} - {response.text}")
+    destination.parent.mkdir(parents=True, exist_ok=True)
     destination.write_text(response.text)
     console.print(f"[green]Schema exported to {destination}")
```

Note on private method access: line 107 calls `client._get()`, which is a private method. While this may be necessary if the public API doesn't expose a suitable method, consider documenting this dependency or requesting a public method from the `InfrahubClient` maintainers.
```python
    for directory, gql_files in gql_per_directory.items():
        for gql_file in gql_files:
            try:
                definitions = get_graphql_query(queries_path=gql_file, schema=graphql_schema)
            except ValueError as exc:
                print(f"Error generating result types for {gql_file}: {exc}")
                continue
            queries = filter_operations_definitions(definitions)
            fragments = filter_fragments_definitions(definitions)

            package_generator = get_package_generator(
                schema=graphql_schema,
                fragments=fragments,
                settings=ClientSettings(
                    schema_path=str(schema),
                    target_package_name=directory.name,
                    queries_path=str(directory),
                    include_comments=CommentsStrategy.NONE,
                ),
                plugin_manager=plugin_manager,
            )

            try:
                for query_operation in queries:
                    package_generator.add_operation(query_operation)
            except ParsingError as exc:
                console.print(f"[red]Unable to process {gql_file.name}: {exc}")
            module_fragment = package_generator.fragments_generator.generate()

            generate_result_types(directory=directory, package=package_generator, fragment=module_fragment)

            for file_name in package_generator._result_types_files.keys():
                console.print(f"[green]Generated {file_name} in {directory}")
```
Inconsistent error logging and a potential issue with partial generation.

Two issues in the generation loop:

1. Inconsistent logging: line 144 uses `print()` while lines 165 and 171 use `console.print()`. This creates inconsistent output formatting.
2. Partial generation after parsing error: lines 161-165 catch `ParsingError` from `add_operation` and continue, but lines 166-168 proceed to generate types using the potentially incomplete `package_generator`. If some operations failed to parse, the generated types may be incomplete or incorrect, yet files are still written and success messages are printed (lines 170-171).

Apply this diff to fix both issues:
```diff
         except ValueError as exc:
-            print(f"Error generating result types for {gql_file}: {exc}")
+            console.print(f"[red]Error generating result types for {gql_file}: {exc}")
             continue
         queries = filter_operations_definitions(definitions)
         fragments = filter_fragments_definitions(definitions)

         package_generator = get_package_generator(
             schema=graphql_schema,
             fragments=fragments,
             settings=ClientSettings(
                 schema_path=str(schema),
                 target_package_name=directory.name,
                 queries_path=str(directory),
                 include_comments=CommentsStrategy.NONE,
             ),
             plugin_manager=plugin_manager,
         )

+        parsing_failed = False
         try:
             for query_operation in queries:
                 package_generator.add_operation(query_operation)
         except ParsingError as exc:
             console.print(f"[red]Unable to process {gql_file.name}: {exc}")
+            parsing_failed = True
+
+        if parsing_failed:
+            continue
+
         module_fragment = package_generator.fragments_generator.generate()

         generate_result_types(directory=directory, package=package_generator, fragment=module_fragment)

         for file_name in package_generator._result_types_files.keys():
             console.print(f"[green]Generated {file_name} in {directory}")
```
```python
class FutureAnnotationPlugin(Plugin):
    @staticmethod
    def insert_future_annotation(module: ast.Module) -> ast.Module:
        # First check if the future annotation is already present
        for item in module.body:
            if isinstance(item, ast.ImportFrom) and item.module == "__future__" and item.names[0].name == "annotations":
                return module

        module.body.insert(0, ast.ImportFrom(module="__future__", names=[ast.alias(name="annotations")], level=0))
        return module
```
Guard against empty or multiple names in the ImportFrom check.

Line 17 accesses `item.names[0].name` without verifying that `item.names` is non-empty. While rare, an `ImportFrom` node from `__future__` could have an empty `names` list or contain multiple imports (e.g., `from __future__ import annotations, division`). This can raise an `IndexError` or fail to detect the annotation import correctly.

Apply this diff to improve the check:

```diff
     def insert_future_annotation(module: ast.Module) -> ast.Module:
         # First check if the future annotation is already present
         for item in module.body:
-            if isinstance(item, ast.ImportFrom) and item.module == "__future__" and item.names[0].name == "annotations":
+            if isinstance(item, ast.ImportFrom) and item.module == "__future__":
+                if any(alias.name == "annotations" for alias in item.names):
-                return module
+                    return module

         module.body.insert(0, ast.ImportFrom(module="__future__", names=[ast.alias(name="annotations")], level=0))
         return module
```
```python
def insert_fragments_inline(module: ast.Module, fragment: ast.Module) -> ast.Module:
    """Insert the Pydantic classes for the fragments inline into the module."""
    module_class_def_index = get_class_def_index(module)

    fragment_classes: list[ast.ClassDef] = [item for item in fragment.body if isinstance(item, ast.ClassDef)]
    for idx, item in enumerate(fragment_classes):
        module.body.insert(module_class_def_index + idx, item)

    return module
```
Handle the edge case when the module has no class definitions.

If `get_class_def_index` returns -1 (no classes found), line 19 will insert at index -1, silently placing fragments in an unintended position rather than failing gracefully or documenting this behavior.

Consider one of these approaches:

Option 1: Document the behavior

```diff
 def insert_fragments_inline(module: ast.Module, fragment: ast.Module) -> ast.Module:
-    """Insert the Pydantic classes for the fragments inline into the module."""
+    """Insert the Pydantic classes for the fragments inline into the module.
+
+    If no class definitions exist in module, fragments are appended to the end.
+    """
```

Option 2: Raise an error for unexpected input

```diff
 def insert_fragments_inline(module: ast.Module, fragment: ast.Module) -> ast.Module:
     """Insert the Pydantic classes for the fragments inline into the module."""
     module_class_def_index = get_class_def_index(module)
+    if module_class_def_index == -1:
+        raise ValueError("Module must contain at least one class definition to insert fragments")
```

Option 3: Append explicitly when no classes exist

```diff
 def insert_fragments_inline(module: ast.Module, fragment: ast.Module) -> ast.Module:
     """Insert the Pydantic classes for the fragments inline into the module."""
     module_class_def_index = get_class_def_index(module)
+    if module_class_def_index == -1:
+        module.body.extend(fragment_classes)
+        return module
```
```python
class MyStrEnum(str, Enum):
    VALUE1 = "value1"
    VALUE2 = "value2"


class MyIntEnum(int, Enum):
    VALUE1 = 12
    VALUE2 = 24
```
🛠️ Refactor suggestion | 🟠 Major
Remove duplicate enum definitions.

These enum classes are already defined in `tests/unit/sdk/graphql/conftest.py` (lines 6-13). Remove the duplicates here and import from conftest if needed.

```diff
-class MyStrEnum(str, Enum):
-    VALUE1 = "value1"
-    VALUE2 = "value2"
-
-
-class MyIntEnum(int, Enum):
-    VALUE1 = 12
-    VALUE2 = 24
```
Committable suggestion skipped: line range outside the PR's diff.
```python
@pytest.fixture
def query_data_no_filter():
    data = {
        "device": {
            "name": {"value": None},
            "description": {"value": None},
            "interfaces": {"name": {"value": None}},
        }
    }

    return data


@pytest.fixture
def query_data_alias():
    data = {
        "device": {
            "name": {"@alias": "new_name", "value": None},
            "description": {"value": {"@alias": "myvalue"}},
            "interfaces": {"@alias": "myinterfaces", "name": {"value": None}},
        }
    }

    return data


@pytest.fixture
def query_data_fragment():
    data = {
        "device": {
            "name": {"value": None},
            "...on Builtin": {
                "description": {"value": None},
                "interfaces": {"name": {"value": None}},
            },
        }
    }

    return data


@pytest.fixture
def query_data_empty_filter():
    data = {
        "device": {
            "@filters": {},
            "name": {"value": None},
            "description": {"value": None},
            "interfaces": {"name": {"value": None}},
        }
    }

    return data


@pytest.fixture
def query_data_filters_01():
    data = {
        "device": {
            "@filters": {"name__value": "$name"},
            "name": {"value": None},
            "description": {"value": None},
            "interfaces": {
                "@filters": {"enabled__value": "$enabled"},
                "name": {"value": None},
            },
        }
    }
    return data


@pytest.fixture
def query_data_filters_02():
    data = {
        "device": {
            "@filters": {"name__value": "myname", "integer__value": 44, "enumstr__value": MyStrEnum.VALUE2},
            "name": {"value": None},
            "interfaces": {
                "@filters": {"enabled__value": True, "enumint__value": MyIntEnum.VALUE1},
                "name": {"value": None},
            },
        }
    }
    return data
```
🛠️ Refactor suggestion | 🟠 Major
Remove duplicate fixture definitions.

All these fixtures (`query_data_no_filter`, `query_data_alias`, `query_data_fragment`, `query_data_empty_filter`, `query_data_filters_01`, `query_data_filters_02`, `input_data_01`) are already defined in `tests/unit/sdk/graphql/conftest.py` (lines 16-113). Pytest automatically discovers and makes conftest fixtures available to test modules, so these duplicates should be removed.
```diff
-@pytest.fixture
-def query_data_no_filter():
-    data = {
-        "device": {
-            "name": {"value": None},
-            "description": {"value": None},
-            "interfaces": {"name": {"value": None}},
-        }
-    }
-
-    return data
-
-
-@pytest.fixture
-def query_data_alias():
-    data = {
-        "device": {
-            "name": {"@alias": "new_name", "value": None},
-            "description": {"value": {"@alias": "myvalue"}},
-            "interfaces": {"@alias": "myinterfaces", "name": {"value": None}},
-        }
-    }
-
-    return data
-
-
-@pytest.fixture
-def query_data_fragment():
-    data = {
-        "device": {
-            "name": {"value": None},
-            "...on Builtin": {
-                "description": {"value": None},
-                "interfaces": {"name": {"value": None}},
-            },
-        }
-    }
-
-    return data
-
-
-@pytest.fixture
-def query_data_empty_filter():
-    data = {
-        "device": {
-            "@filters": {},
-            "name": {"value": None},
-            "description": {"value": None},
-            "interfaces": {"name": {"value": None}},
-        }
-    }
-
-    return data
-
-
-@pytest.fixture
-def query_data_filters_01():
-    data = {
-        "device": {
-            "@filters": {"name__value": "$name"},
-            "name": {"value": None},
-            "description": {"value": None},
-            "interfaces": {
-                "@filters": {"enabled__value": "$enabled"},
-                "name": {"value": None},
-            },
-        }
-    }
-    return data
-
-
-@pytest.fixture
-def query_data_filters_02():
-    data = {
-        "device": {
-            "@filters": {"name__value": "myname", "integer__value": 44, "enumstr__value": MyStrEnum.VALUE2},
-            "name": {"value": None},
-            "interfaces": {
-                "@filters": {"enabled__value": True, "enumint__value": MyIntEnum.VALUE1},
-                "name": {"value": None},
-            },
-        }
-    }
-    return data
```
Actionable comments posted: 0
♻️ Duplicate comments (2)
infrahub_sdk/ctl/graphql.py (2)
98-110: HTTP response validation is still missing.

Lines 107-108 still write `response.text` directly to the schema file without validating the HTTP status code. If the server returns an error response (e.g., 404, 500), the error body will be written to the schema file, corrupting it and causing downstream failures in `generate_return_types`.

Apply this diff to validate the response:

```diff
     client = initialize_client()
     response = await client._get(url=f"{client.address}/schema.graphql")
+    response.raise_for_status()
     destination.write_text(response.text)
     console.print(f"[green]Schema exported to {destination}")
```

Note: line 107 uses the private method `client._get()`. If a public method becomes available, prefer that to avoid coupling to internal implementation details.
141-173
: Inconsistent logging and partial generation issues remain unresolved.Two issues persist in this loop:
Inconsistent logging: Line 146 still uses
print()
while lines 167 and 173 useconsole.print()
, creating inconsistent output formatting.Partial generation after parsing error: Lines 163-173 catch
ParsingError
fromadd_operation
and continue processing. Lines 168-173 proceed to generate fragments and write files even when operations failed to parse, resulting in incomplete or incorrect generated types. Success messages are still printed despite the error.Apply this diff to fix both issues:
except ValueError as exc: - print(f"Error generating result types for {gql_file}: {exc}") + console.print(f"[red]Error generating result types for {gql_file}: {exc}") continue queries = filter_operations_definitions(definitions) fragments = filter_fragments_definitions(definitions) package_generator = get_package_generator( schema=graphql_schema, fragments=fragments, settings=ClientSettings( schema_path=str(schema), target_package_name=directory.name, queries_path=str(directory), include_comments=CommentsStrategy.NONE, ), plugin_manager=plugin_manager, ) + parsing_failed = False try: for query_operation in queries: package_generator.add_operation(query_operation) except ParsingError as exc: console.print(f"[red]Unable to process {gql_file.name}: {exc}") + parsing_failed = True + + if parsing_failed: + continue + module_fragment = package_generator.fragments_generator.generate() generate_result_types(directory=directory, package=package_generator, fragment=module_fragment) for file_name in package_generator._result_types_files.keys(): console.print(f"[green]Generated {file_name} in {directory}")
🧹 Nitpick comments (1)
infrahub_sdk/ctl/graphql.py (1)
77-89
: Consider using public APIs instead of private attributes.This function accesses multiple private attributes and methods from
PackageGenerator
(_result_types_files
,_add_comments_to_code
,_generated_files
). While this may be necessary given the currentariadne-codegen
API, it creates tight coupling and risks breakage if the library changes its internals.If public APIs are available or can be requested from the library maintainers, prefer those. Otherwise, document this dependency and consider wrapping this logic in a way that isolates the private-attribute access.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
docs/docs/infrahubctl/infrahubctl-graphql.mdx
(1 hunks)infrahub_sdk/ctl/graphql.py
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- docs/docs/infrahubctl/infrahubctl-graphql.mdx
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
When implementing Infrahub checks, subclass InfrahubCheck and override validate(data); do not implement or rely on a check() method
Files:
infrahub_sdk/ctl/graphql.py
infrahub_sdk/ctl/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
infrahub_sdk/ctl/**/*.py
: Build CLI commands with Typer
Organize and keep all CLI commands within infrahub_sdk/ctl/
Files:
infrahub_sdk/ctl/graphql.py
🧠 Learnings (1)
📓 Common learnings
Learnt from: CR
PR: opsmill/infrahub-sdk-python#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T13:35:12.998Z
Learning: Applies to infrahub_sdk/ctl/**/*.py : Organize and keep all CLI commands within infrahub_sdk/ctl/
🧬 Code graph analysis (1)
infrahub_sdk/ctl/graphql.py (4)
infrahub_sdk/async_typer.py (2)
AsyncTyper
(11-31)command
(29-31)infrahub_sdk/ctl/client.py (1)
initialize_client
(10-25)infrahub_sdk/ctl/utils.py (1)
catch_exception
(78-106)infrahub_sdk/graphql/utils.py (2)
insert_fragments_inline
(13-21)remove_fragment_import
(24-30)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: unit-tests (3.12)
- GitHub Check: unit-tests (3.13)
- GitHub Check: unit-tests (3.11)
- GitHub Check: unit-tests (3.9)
- GitHub Check: unit-tests (3.10)
- GitHub Check: integration-tests-latest-infrahub
- GitHub Check: documentation
- GitHub Check: Cloudflare Pages
🔇 Additional comments (4)
infrahub_sdk/ctl/graphql.py (4)
1-36
: LGTM!The imports are well-organized and all dependencies appear necessary for the CLI functionality. The plugin configuration in
ARIADNE_PLUGINS
is appropriate for controlling the code generation pipeline.
38-54
: LGTM!The function correctly handles both single files and directories, with appropriate validation and error handling. The recursive glob pattern ensures all
.gql
files are discovered.
57-74
: LGTM!The function properly validates the query file and performs schema validation. Excluding
NoUnusedFragmentsRule
is appropriate when fragments may be shared across files. Error messages are clear and helpful.
91-96
: LGTM!Standard Typer callback pattern for the CLI group.
Actionable comments posted: 0

♻️ Duplicate comments (2)

infrahub_sdk/ctl/graphql.py (2)

**107-110**: Unaddressed: validate HTTP response before writing schema file. This is a duplicate of a previous review comment that has not been addressed. Lines 107-109 still fetch the schema and write `response.text` directly to the file without checking the HTTP status code. If the server returns an error (e.g., 404, 500), the error response body will be written to the schema file, resulting in an invalid schema that will cause downstream failures in `generate_return_types`. Apply this diff to validate the response and ensure the parent directory exists:

```diff
     client = initialize_client()
     response = await client._get(url=f"{client.address}/schema.graphql")
+    if response.status_code != 200:
+        raise ValueError(f"Failed to fetch schema: HTTP {response.status_code} - {response.text}")
+    destination.parent.mkdir(parents=True, exist_ok=True)
     destination.write_text(response.text)
     console.print(f"[green]Schema exported to {destination}")
```

Note on private method access: Line 107 calls `client._get()`, which is a private method. Consider documenting this dependency or requesting a public method from the `InfrahubClient` maintainers.

**144-174**: Unaddressed: inconsistent logging and partial generation after parsing error. This is a duplicate of a previous review comment that has not been addressed. Two issues remain:

- Inconsistent logging (line 147): uses `print()` instead of `console.print()`, creating inconsistent output formatting compared to lines 168 and 174.
- Partial generation after parsing error (lines 167-169): the code catches `ParsingError` from `add_operation` and logs it, but then continues to lines 169-171 to generate fragments and write files. If some operations failed to parse, the generated types may be incomplete or incorrect, yet success messages are still printed (line 174). This creates a confusing user experience where errors are shown but the command reports success.

Apply this diff to fix both issues:

```diff
         except ValueError as exc:
-            print(f"Error generating result types for {gql_file}: {exc}")
+            console.print(f"[red]Error generating result types for {gql_file}: {exc}")
             continue

         queries = filter_operations_definitions(definitions)
         fragments = filter_fragments_definitions(definitions)
         package_generator = get_package_generator(
             schema=graphql_schema,
             fragments=fragments,
             settings=ClientSettings(
                 schema_path=str(schema),
                 target_package_name=directory.name,
                 queries_path=str(directory),
                 include_comments=CommentsStrategy.NONE,
             ),
             plugin_manager=plugin_manager,
         )
+        parsing_failed = False
         try:
             for query_operation in queries:
                 package_generator.add_operation(query_operation)
         except ParsingError as exc:
             console.print(f"[red]Unable to process {gql_file.name}: {exc}")
+            parsing_failed = True
+
+        if parsing_failed:
+            continue
+
         module_fragment = package_generator.fragments_generator.generate()
         generate_result_types(directory=directory, package=package_generator, fragment=module_fragment)
         for file_name in package_generator._result_types_files.keys():
             console.print(f"[green]Generated {file_name} in {directory}")
```

🧹 Nitpick comments (2)

infrahub_sdk/ctl/graphql.py (2)

**52-53**: Simplify the file check condition. The condition `not query_path.is_dir() and query_path.is_file()` is redundant. If a path is a file, it's inherently not a directory. Apply this diff:

```diff
-    if not query_path.is_dir() and query_path.is_file():
+    if query_path.is_file():
         return [query_path]
```

**78-90**: Document ariadne-codegen private API usage. This function relies on internal `PackageGenerator` attributes (`_result_types_files`, `_generated_files`) and method (`_add_comments_to_code`). Add a comment explaining this dependency and open an issue/PR against ariadne-codegen to expose public APIs for:

- Iterating result-type files
- Inserting file comments (use `include_comments` config or `get_file_comment` hook)
- Tracking generated files
Added some comments, but great addition!
> if you don't have your own Python module, it's possible to use relative path by having the `procotols.py` in the same directory as your script/transform/generator | ||
## Generating Pydantic models from GraphQL queries |
As you're working with this file, can you also fix the typo with `procotols.py` just above that CodeRabbit is complaining about?
Generated Pydantic models from GraphQL queries offer several important benefits:
- **Type Safety**: Catch type errors at development time instead of runtime |
Suggested change:
- Catch type errors at development time instead of runtime
+ Catch type errors during development time instead of at runtime
```python
        List of Path objects for all .gql files found
    """
    if not query_path.exists():
        raise FileNotFoundError(f"Directory not found: {query_path}")
```
This could look a bit weird if `query_path` was a file instead of a directory, i.e. saying "Directory not found" while pointing to a file could be confusing.
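A minimal sketch of one way to address this (the function name and glob pattern are assumed from context, not copied from the PR):

```python
from pathlib import Path


def find_graphql_files(query_path: Path) -> list[Path]:
    if not query_path.exists():
        # Stay generic: query_path may be a single file or a directory.
        raise FileNotFoundError(f"Path not found: {query_path}")
    if query_path.is_file():
        return [query_path]
    return sorted(query_path.glob("**/*.gql"))
```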
```python
    )
    if validation_errors:
        raise ValueError("\n\n".join(error.message for error in validation_errors))
    return queries_ast.definitions
```
While we don't support it yet within the SDK, this would be problematic with fragments spread across multiple files. Not an issue for this PR, just highlighting it in case we want to do something different with the API of the command because of this.
"""Export the GraphQL schema to a file.""" | ||
|
||
client = initialize_client() | ||
response = await client._get(url=f"{client.address}/schema.graphql") |
I'd suggest that we move this and avoid calling `client._get` from the CTL. Instead I'd suggest that we add something like `client.schema.get_graphql_schema()`. Do we need a branch parameter here?
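A rough sketch of what such a helper could look like (the class, attribute, and parameter names here are assumptions, not the SDK's actual API):

```python
from typing import Optional


class InfrahubSchemaManager:
    """Hypothetical home for the suggested client.schema.get_graphql_schema()."""

    def __init__(self, client) -> None:
        self.client = client

    async def get_graphql_schema(self, branch: Optional[str] = None) -> str:
        url = f"{self.client.address}/schema.graphql"
        if branch:
            url += f"?branch={branch}"
        response = await self.client._get(url=url)
        response.raise_for_status()
        return response.text
```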
```python
        try:
            definitions = get_graphql_query(queries_path=gql_file, schema=graphql_schema)
        except ValueError as exc:
            print(f"Error generating result types for {gql_file}: {exc}")
```
We should use the `console.print` method to show output to users.
```python
            for query_operation in queries:
                package_generator.add_operation(query_operation)
        except ParsingError as exc:
            console.print(f"[red]Unable to process {gql_file.name}: {exc}")
```
It looks like we ignore this error and the one above aside from displaying output. Do we want the command to fail here?
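One hedged sketch of the "fail at the end" option this raises (`process` stands in for the command's real parse/generate steps; none of the names below come from the PR):

```python
from pathlib import Path

import typer
from rich.console import Console

console = Console()


def process(gql_file: Path) -> None:
    """Stand-in for the real parse/generate steps."""


def generate_all(gql_files: list[Path]) -> None:
    failed = []
    for gql_file in gql_files:
        try:
            process(gql_file)
        except ValueError as exc:
            failed.append(gql_file)
            console.print(f"[red]Unable to process {gql_file.name}: {exc}")
    if failed:
        raise typer.Exit(code=1)  # non-zero exit so CI and scripts notice
```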
```python
    def replace_base_model_import(cls, module: ast.Module) -> ast.Module:
        base_model_index = cls.find_base_model_index(module)
        if base_model_index == -1:
            raise ValueError("BaseModel not found in module")
```
Will we call `.find_base_model_index()` in other locations too? I think we should avoid logic like this where `.replace_base_model_import()` has to know what a -1 index means. In other situations it could just as well be the last item in a sequence. I think it would be cleaner if it was `find_base_model_index` that raised the ValueError directly.
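A small sketch of that shape (how BaseModel is located here, and what the import is replaced with, are assumptions for illustration; only the control flow matters):

```python
import ast


class BaseModelImportRewriter:
    @classmethod
    def find_base_model_index(cls, module: ast.Module) -> int:
        for index, item in enumerate(module.body):
            if isinstance(item, ast.ImportFrom) and any(alias.name == "BaseModel" for alias in item.names):
                return index
        # Raise here so callers never have to interpret a -1 sentinel.
        raise ValueError("BaseModel not found in module")

    @classmethod
    def replace_base_model_import(cls, module: ast.Module) -> ast.Module:
        base_model_index = cls.find_base_model_index(module)  # raises if missing
        # Illustrative replacement target only.
        module.body[base_model_index] = ast.ImportFrom(module="pydantic", names=[ast.alias(name="BaseModel")], level=0)
        return module
```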
```python
class Mutation(BaseGraphQLQuery):
    query_type = "mutation"

    def __init__(self, *args: Any, mutation: str, input_data: dict, **kwargs: Any):
```
Realise this is just moved code, but I think we should remove `*args: Any` and also `**kwargs: Any` to make this class easier to use.
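A sketch of the tightened signature (the import path and the keyword the base class accepts are assumptions):

```python
from typing import Optional

from infrahub_sdk.graphql.query import BaseGraphQLQuery  # assumed import path for the PR's base class


class Mutation(BaseGraphQLQuery):
    query_type = "mutation"

    def __init__(self, mutation: str, input_data: dict, name: Optional[str] = None) -> None:
        self.mutation = mutation
        self.input_data = input_data
        super().__init__(name=name)  # assumes the base class takes an optional name
```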
```python
        fragment_classes: list[ast.ClassDef] = [item for item in fragment.body if isinstance(item, ast.ClassDef)]
        for idx, item in enumerate(fragment_classes):
            module.body.insert(module_class_def_index + idx, item)
```
What would this look like if `get_class_def_index()` returns -1? The first entry gets added to the end and then the other fragments get added starting from the beginning?
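The behaviour is easy to check with a plain list (the contents below are invented); it roughly confirms the suspicion:

```python
body = ["import pydantic", "class DeviceNode", "class InterfaceNode"]
module_class_def_index = -1  # simulating get_class_def_index() finding no class

body.insert(module_class_def_index + 0, "fragment A")  # insert(-1): lands before the last element
body.insert(module_class_def_index + 1, "fragment B")  # insert(0): lands at the very front
print(body)
# ['fragment B', 'import pydantic', 'class DeviceNode', 'fragment A', 'class InterfaceNode']
```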
Fixes #545
This PR adds a new command to generate Pydantic models that match the format of a given GraphQL query.
For each GraphQL query, a Python file with the same name will be created containing the corresponding Pydantic models.
To generate the return types, the GraphQL schema needs to exist locally, so this PR also adds a command to fetch the GraphQL schema from the Infrahub instance.
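As a hedged illustration of the intended workflow, the generated models might be consumed like this (the query file, module, and model names are invented; `execute_graphql` is the SDK's existing escape hatch for raw queries):

```python
from pathlib import Path

from infrahub_sdk import InfrahubClientSync

from queries.get_devices import GetDevices  # hypothetical module generated from queries/get_devices.gql

client = InfrahubClientSync(address="http://localhost:8000")
raw = client.execute_graphql(query=Path("queries/get_devices.gql").read_text())
devices = GetDevices(**raw)  # Pydantic validates that the payload matches the query's shape
```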
Implementation
Internally, the command leverages the library `ariadne-codegen` to generate the Pydantic models. `ariadne-codegen` provides a plugin system that makes it very easy to control the format of the generated files; currently we make some small adjustments, but the system is very powerful.
The main drawback of using `ariadne-codegen` is that by default it is designed to generate not just the return types but a full client, so a wrapper was required to generate only the types.
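For a sense of that plugin system, a minimal no-op plugin might look like this (the hook shown follows ariadne-codegen's documented plugin interface; the body is a placeholder, not what this PR actually does):

```python
import ast

from ariadne_codegen.plugins.base import Plugin


class TweakResultTypesPlugin(Plugin):
    def generate_result_types_module(self, module: ast.Module, operation_definition) -> ast.Module:
        # Rewrite the generated AST here (imports, class names, annotations)
        # before ariadne-codegen renders it to a file.
        return module
```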
Summary by CodeRabbit
New Features
Documentation
Tests
Chores