Completed node output destructuring implementation #3806
JustJ01 wants to merge 6 commits into GraphiteEditor:master from
Conversation
Summary of Changes

Hello @JustJ01, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request finalizes the implementation of node output destructuring, a feature that significantly enhances the granularity and flexibility of node interactions. By allowing complex struct outputs to be broken down into their constituent fields, it simplifies data flow and improves the usability of nodes within the graph, making it easier to connect specific data points without intermediate processing steps.
Please rebase on the latest upstream master.
Code Review
The pull request introduces functionality for node output destructuring, allowing for the extraction of individual fields from a node's output. This involves adding output_fields to NodeMetadata, implementing a Destruct trait, and generating extractor nodes via a new destruct.rs module. The changes appear well-structured and integrate smoothly with the existing macro system. The OnceLock usage for protonode_identifier has been updated to correctly handle &'static str by leaking the boxed string, which is a common pattern for static string storage in Rust. The new deconstruct_output attribute in NodeFnAttributes provides a clear way to enable this feature for specific nodes.
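The review above mentions leaking a boxed string so that a lazily built identifier can be handed out as `&'static str`. A minimal sketch of that pattern, outside the project's codebase (the module and node names below are illustrative, not the actual identifiers the macro generates):

```rust
use std::sync::OnceLock;

// Build the identifier string once, leak it, and return a &'static str on
// every call. Box::leak intentionally never frees the allocation, which is
// acceptable for a fixed set of identifiers that live for the whole program.
fn protonode_identifier() -> &'static str {
    static IDENT: OnceLock<&'static str> = OnceLock::new();
    *IDENT.get_or_init(|| {
        // Hypothetical path; the real macro assembles this from the node's module and name.
        let name = format!("{}::{}", "graphene_std", "split_channels");
        Box::leak(name.into_boxed_str())
    })
}

fn main() {
    let a = protonode_identifier();
    let b = protonode_identifier();
    // The same leaked allocation is returned on every call.
    assert!(std::ptr::eq(a, b));
    assert_eq!(a, "graphene_std::split_channels");
}
```

The `OnceLock` guarantees the `format!` allocation happens at most once even under concurrent first calls.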
Force-pushed e27f090 to e215927
editor/src/messages/portfolio/document/node_graph/document_node_definitions.rs
Also, please configure your AI / editor / Git to automatically run
@TrueDoctor I have made the changes you requested.
```rust
#[cfg(feature = "std")]
#[node_macro::node(name("Split Channels"), category("Raster: Channels"), deconstruct_output)]
fn split_channels(_: impl Ctx, input: Table<Raster<CPU>>) -> SplitChannelsOutput {
```
Each call to extract_channel iterates over the entire raster, so the image is traversed four times. Maybe do a single-pass implementation? Something like this:
```rust
fn split_channels(_: impl Ctx, input: Table<Raster<CPU>>) -> SplitChannelsOutput {
	let (red, green, blue, alpha) = input.into_four_channels(); // hypothetical single-pass function
	SplitChannelsOutput { red, green, blue, alpha }
}
```
I can commit this function and implementation if you'd like.
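The hypothetical `into_four_channels` above could be sketched as a single loop over the pixel data. The flat `[f32; 4]` RGBA layout and the free function below are illustrative assumptions, not the project's actual `Raster` API:

```rust
// Single-pass channel split: walk the RGBA buffer once and push each
// component into its own channel buffer, instead of traversing the image
// four times with separate extract_channel calls.
fn split_channels_single_pass(pixels: &[[f32; 4]]) -> (Vec<f32>, Vec<f32>, Vec<f32>, Vec<f32>) {
    let n = pixels.len();
    let (mut r, mut g, mut b, mut a) = (
        Vec::with_capacity(n),
        Vec::with_capacity(n),
        Vec::with_capacity(n),
        Vec::with_capacity(n),
    );
    for &[red, green, blue, alpha] in pixels {
        r.push(red);
        g.push(green);
        b.push(blue);
        a.push(alpha);
    }
    (r, g, b, a)
}

fn main() {
    let img: [[f32; 4]; 2] = [[1.0, 0.5, 0.25, 1.0], [0.0, 0.75, 0.5, 0.5]];
    let (r, g, b, a) = split_channels_single_pass(&img);
    assert_eq!(r, vec![1.0, 0.0]);
    assert_eq!(g, vec![0.5, 0.75]);
    assert_eq!(b, vec![0.25, 0.5]);
    assert_eq!(a, vec![1.0, 0.5]);
}
```

The single traversal trades four cache-friendly sequential reads for one pass with four output streams; whether it wins in practice depends on the raster size and layout.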
I think in this case we should not touch the split channels node at all, since it makes the code less efficient; this was done as per Keavon's request.
```rust
static FIELDS: std::sync::OnceLock<Vec<#core_types::registry::StructField>> = std::sync::OnceLock::new();
FIELDS.get_or_init(|| vec![
	#(#output_fields,)*
]).as_slice()
```
Why do you use heap memory + synchronization here instead of constructing a slice in place?
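The alternative the reviewer is asking about could look like the sketch below: when every field descriptor can be built in a const context, a plain `static` array needs neither the heap allocation of `Vec` nor the synchronization of `OnceLock`. `StructField` here is a stand-in for the registry type referenced in the generated code:

```rust
// Stand-in for the project's registry type; the real one lives in
// #core_types::registry and may have more fields.
struct StructField {
    name: &'static str,
    type_name: &'static str,
}

// Constructed in place at compile time: no Vec, no OnceLock, no first-call cost.
fn output_fields() -> &'static [StructField] {
    static FIELDS: [StructField; 2] = [
        StructField { name: "red", type_name: "Table<Raster<CPU>>" },
        StructField { name: "green", type_name: "Table<Raster<CPU>>" },
    ];
    &FIELDS
}

fn main() {
    let fields = output_fields();
    assert_eq!(fields.len(), 2);
    assert_eq!(fields[0].name, "red");
    assert_eq!(fields[1].type_name, "Table<Raster<CPU>>");
}
```

This only works if the `#output_fields` expansion is const-evaluable; if the descriptors require runtime construction, the `OnceLock` approach is the fallback.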
```rust
fn generate_extractor_node(core_types: &TokenStream2, fn_name: &syn::Ident, struct_name: &syn::Ident, field_name: &syn::Ident, ty: &Type, output_name: &LitStr) -> TokenStream2 {
	quote! {
		#[node_macro::node(category(""), name(#output_name))]
```
Why do we add the name() annotation here?
```rust
fn parse_output_name(attrs: &[syn::Attribute]) -> syn::Result<Option<String>> {
	let mut output_name = None;

	for attr in attrs {
		if !attr.path().is_ident("output") {
			continue;
		}

		let mut this_output_name = None;
		match &attr.meta {
			Meta::Path(_) => {
				return Err(Error::new_spanned(attr, "Expected output metadata like #[output(name = \"Result\")]"));
			}
			Meta::NameValue(_) => {
				return Err(Error::new_spanned(attr, "Expected output metadata like #[output(name = \"Result\")]"));
			}
			Meta::List(_) => {
				attr.parse_nested_meta(|meta| {
					if meta.path.is_ident("name") {
						if this_output_name.is_some() {
							return Err(meta.error("Multiple output names provided for one field"));
						}
						let value = meta.value()?;
						let lit: LitStr = value.parse()?;
						this_output_name = Some(lit.value());
						Ok(())
					} else {
						Err(meta.error("Unsupported output metadata. Supported syntax is #[output(name = \"...\")]"))
					}
				})?;
			}
		}

		let this_output_name = this_output_name.ok_or_else(|| Error::new_spanned(attr, "Missing output name. Use #[output(name = \"...\")]"))?;
		if output_name.is_some() {
			return Err(Error::new_spanned(attr, "Multiple #[output(...)] attributes are not allowed on one field"));
		}
		output_name = Some(this_output_name);
	}

	Ok(output_name)
}
```
Is this function only used for overriding the field name? I don't think this code is currently used anywhere, right? If it is indeed not used, it should be removed to reduce code complexity.
```rust
let use_memo = !available_output_fields.is_empty()
	&& node_registry.get(&memo_node).is_some_and(|memo_implementations| {
		memo_implementations
			.iter()
			.any(|(_, node_io)| node_io.call_argument == *input_type && node_io.return_value == first_node_io.return_value)
	});
```
use_memo will currently always be false since you did not add the new structs to the memo node implementations.
```rust
inputs: vec![NodeInput::node(source_node_id, 0)],
call_argument: input_type.clone(),
implementation: DocumentNodeImplementation::ProtoNode(memo_node.clone()),
visible: false,
```
There is no need to set visible to false since the entire network is hidden anyway.
```rust
DocumentNode {
	inputs: vec![NodeInput::node(source_node_id, 0)],
	call_argument: input_type.clone(),
	implementation: DocumentNodeImplementation::ProtoNode(memo_node.clone()),
```
Shouldn't we get a static string for the identifier? Why do we need heap allocations here?
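When the identifier is fully known at macro-expansion time, the generated code could produce a `&'static str` at compile time with `concat!`, avoiding any per-construction heap allocation. A sketch under that assumption (the path and type name below are illustrative, not the actual strings the macro emits):

```rust
// concat! joins string literals at compile time into one 'static literal,
// so no String is ever allocated at runtime.
fn memo_identifier() -> &'static str {
    concat!("graphene_core::memo::MemoNode", "<", "SplitChannelsOutput", ">")
}

fn main() {
    let id = memo_identifier();
    assert_eq!(id, "graphene_core::memo::MemoNode<SplitChannelsOutput>");
    // Every call yields the same static literal.
    assert!(std::ptr::eq(id, memo_identifier()));
}
```

If parts of the identifier are only available as runtime values, a one-time `Box::leak` (as discussed earlier in this thread) is the usual fallback.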
@TrueDoctor I have started the AI code review. It will take a few minutes to complete.
No issues found across 10 files
Force-pushed 5bb6104 to 52d2b38
From the Discord discussion: https://discord.com/channels/731730685944922173/1210129484255076354/1475229902788628573