[Variant] Define shredding schema for VariantArrayBuilder #7921

Draft: wants to merge 4 commits into main from friendlymatthew/shred

Conversation

@friendlymatthew (Contributor) commented Jul 13, 2025:

Which issue does this PR close?

My initial PR is getting too large, so I figured it would be better to split these up.

Rationale for this change

This PR updates the VariantArrayBuilder to accept the desired shredded output schema in its constructor. It also adds validation logic that defines which schemas are valid and which are not.

In other words, the schema you define ahead of time is checked for spec compliance.
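As a rough sketch of that flow (the constructor name and capacity argument below are assumptions for illustration, not necessarily this PR's final API):

use arrow_schema::{DataType, Field, Fields};

// A candidate shredded schema: values shred to Int64, following the spec's
// required metadata / optional value / optional typed_value layout.
fn shredded_schema() -> Fields {
    Fields::from(vec![
        Field::new("metadata", DataType::BinaryView, false), // required
        Field::new("value", DataType::BinaryView, true),     // optional
        Field::new("typed_value", DataType::Int64, true),    // optional
    ])
}

// Hypothetical constructor call; the point is that validation runs here, so
// a non-compliant schema errors before any rows are appended:
// let mut builder = VariantArrayBuilder::try_new_with_schema(1024, shredded_schema())?;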

@friendlymatthew (author):
cc @scovich @alamb @Samyak2

Comment on lines 41 to 45
// if !value_field.is_nullable() {
//     return Err(ArrowError::InvalidArgumentError(
//         "Expected value field to be nullable".to_string(),
//     ));
// }
@friendlymatthew (author):

I did not see anything in the shredding spec that explicitly states value cannot be nullable. Same with typed_value below.

Contributor:

Spec says that both can be nullable:

required group measurement (VARIANT) {
  required binary metadata;
  optional binary value;
  optional int64 typed_value;
}

Contributor:

What I don't know is what should happen with metadata in a variant that is shredded as a deeply nested struct, where one or more of those struct fields happen to be variant-typed (which in turn can be shredded further, and which can contain variant fields of their own)?

v: VARIANT {
   metadata: BINARY,
   value: BINARY,
   typed_value: {
       a: STRUCT {
           b: STRUCT {
               c: STRUCT {
                   w: VARIANT {
                       metadata: BINARY, << --- ???
                       value: BINARY,
                       typed_value: STRUCT {
                           x: STRUCT {
                               y: STRUCT {
                                   z: STRUCT {
                                       u: VARIANT {
                                           metadata: BINARY, <<--- ???
                                           value: BINARY,
                                       }
                                   }
                               }
                           }
                       }
                   }
               }
           }
       }
   }
}

The spec says that

Variant metadata is stored in the top-level Variant group in a binary metadata column regardless of whether the Variant value is shredded.

All value columns within the Variant must use the same metadata. All field names of a Variant, whether shredded or not, must be present in the metadata.

I'm pretty sure that means w and u must not have metadata columns -- because they are still "inside" v.

Even if one tried to store the path names of u inside all three metadata columns, the field ids would disagree unless we forced u.metadata and w.metadata to be copies of v.metadata. Easy enough to do that in arrow-rs (all array data are Arc'd anyway), but what about the actual parquet file?
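A minimal sketch of the arrow-rs half of that idea (BinaryArray here stands in for whatever binary layout the metadata column ends up using; the parquet-file half remains an open question):

use std::sync::Arc;
use arrow_array::{Array, ArrayRef, BinaryArray};

fn main() {
    // Because arrow-rs arrays are reference-counted, u.metadata and
    // w.metadata can be zero-copy clones of v.metadata -- same buffers,
    // nothing duplicated.
    let v_metadata: ArrayRef = Arc::new(BinaryArray::from(vec![b"\x01\x00\x00".as_slice()]));
    let w_metadata: ArrayRef = Arc::clone(&v_metadata);
    let u_metadata: ArrayRef = Arc::clone(&v_metadata);
    assert_eq!(w_metadata.len(), u_metadata.len());
}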

Contributor:

I'm not sure the spec is 100% clear on this one, unfortunately. Maybe we need to ask the parquet-variant folks for clarification and/or refinement of the spec's wording?

@cashmand:

There should only be one metadata field, at the top level, so a metadata field at w is not allowed.

Also, the intermediate a.b.c fields are not valid for a shredded variant. Every shredded field needs a new typed_value and/or value field directly under it. So the path in parquet for v:a.b.c.w.x.y.z.u should be a.typed_value.b.typed_value.c.typed_value.w.typed_value.x.typed_value.y.typed_value.z.typed_value.u.value.

@cashmand commented Jul 14, 2025:

The relevant paragraph from the spec for my second comment:

Each shredded field in the typed_value group is represented as a required group that contains optional value and typed_value fields.
The value field stores the value as Variant-encoded binary when the typed_value cannot represent the field. This layout enables readers to skip data based on the field statistics for value and typed_value.
The typed_value field may be omitted when not shredding fields as a specific type.
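A sketch of one level of that layout in Arrow terms (field names a and b follow the example above; the exact arrow-rs representation is still being worked out in this PR):

use arrow_schema::{DataType, Field, Fields};

fn one_level() -> Fields {
    // b's group: optional value / optional typed_value, per the spec text above
    let b = Fields::from(vec![
        Field::new("value", DataType::BinaryView, true),
        Field::new("typed_value", DataType::Int64, true),
    ]);
    // each shredded field is a *required* group inside typed_value, so the
    // paths are a.typed_value.b.value and a.typed_value.b.typed_value
    let a_typed_value = Fields::from(vec![Field::new("b", DataType::Struct(b), false)]);
    Fields::from(vec![
        Field::new("value", DataType::BinaryView, true),
        Field::new("typed_value", DataType::Struct(a_typed_value), true),
    ])
}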

@cashmand:

One more slightly pedantic clarification: the types of w and u in parquet are not VARIANT (i.e. only v is annotated with the Variant logical type). w and u are just shredded fields that happen to not have a typed_value field, only value.

Comment on lines 129 to 119
if metadata_field.is_nullable() {
    return Err(ArrowError::InvalidArgumentError(
        "Invalid VariantArray: metadata field can not be nullable".to_string(),
    ));
}
@friendlymatthew (author):

I make sure to check that metadata is not nullable, but I wonder if we should remove this. You could imagine a user wanting to reuse the same metadata throughout the entire building process?

Contributor:

I think for variant columns nested inside a shredded variant, we must not have a metadata column?
See https://github.com/apache/arrow-rs/pull/7921/files#r2204684924 above?

@friendlymatthew (author) commented Jul 19, 2025:

Yes, that is how I understood it as well.

validate_shredded_schema is called at the top level; if nested schemas exist, we recursively call validate_value_and_typed_value.

This way, we only validate the metadata column once, and only at the top level.
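A simplified sketch of that split (the function names follow the PR; the bodies here are illustrative only):

use arrow_schema::{ArrowError, Fields};

// Top-level entry point: the metadata column is checked here, exactly once.
fn validate_shredded_schema(fields: &Fields) -> Result<(), ArrowError> {
    match fields.iter().find(|f| f.name() == "metadata") {
        Some(f) if !f.is_nullable() => {}
        _ => {
            return Err(ArrowError::InvalidArgumentError(
                "expected a non-nullable top-level metadata field".to_string(),
            ))
        }
    }
    // Nested schemas go through the recursive helper, which neither looks
    // for nor allows a metadata column.
    validate_value_and_typed_value(fields, true)
}

// assumed: the recursive helper from this PR (its checks are elided here)
fn validate_value_and_typed_value(_fields: &Fields, _allow_both_null: bool) -> Result<(), ArrowError> {
    Ok(())
}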

@friendlymatthew friendlymatthew force-pushed the friendlymatthew/shred branch 2 times, most recently from 118ee3f to a2c8c72 Compare July 13, 2025 01:56
Comment on lines 62 to 94
// this is directly mapped from the spec's parquet physical types
// note, there are more data types we can support
// but for the sake of simplicity, I chose the smallest subset
match typed_value_field.data_type() {
    DataType::Boolean
    | DataType::Int32
    | DataType::Int64
    | DataType::Float32
    | DataType::Float64
    | DataType::BinaryView => {}
    DataType::Union(union_fields, _) => {
        union_fields
            .iter()
            .map(|(_, f)| f.clone())
            .try_for_each(|f| {
                let DataType::Struct(fields) = f.data_type().clone() else {
                    return Err(ArrowError::InvalidArgumentError(
                        "Expected struct".to_string(),
                    ));
                };

                validate_value_and_typed_value(&fields, false)
            })?;
    }

    foreign => {
        return Err(ArrowError::NotYetImplemented(format!(
            "Unsupported VariantArray 'typed_value' field, got {foreign}"
        )))
    }
}
}
@friendlymatthew (author):

I don't love this, but I treat the field DataTypes as the parquet physical types defined in the specification: https://github.com/apache/parquet-format/blob/master/VariantShredding.md#shredded-value-types.

I'm curious to get your thoughts; maybe we should stick with the Variant type mapping?

One reason the current logic isn't the best: when we go to reconstruct variants, certain variant types like int8 will get cast to a DataType::Int32. This means when we go to encode the values back to variant, we won't know their original types.
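A small illustration of the concern (plain arrow types, no variant machinery):

use arrow_array::Int32Array;

fn main() {
    // an int8 variant value shredded into an Int32Array...
    let original: i8 = 42;
    let shredded = Int32Array::from(vec![original as i32]);

    // ...comes back as an i32; whether it started life as an int8 is gone,
    // so re-encoding to variant cannot recover the original width
    let reconstructed: i32 = shredded.value(0);
    assert_eq!(reconstructed, original as i32);
}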

Contributor:

I don't think we need to store (logical) int32 in memory just because parquet physically encodes them that way? When reading an int8 column from normal parquet, doesn't it come back as an int8 PrimitiveArray?

Comment on lines 274 to 282
// Create union of different element types
let union_fields = UnionFields::new(
    vec![0, 1],
    vec![
        Field::new("string_element", string_element, true),
        Field::new("int_element", int_element, true),
    ],
);
@friendlymatthew (author):

I don't love this; the field names are weird. However, we need a way to support a heterogeneous list of Fields.

I'm curious if there is a nicer way to represent a group.

Comment on lines 331 to 345
let typed_value_field = Field::new(
    "typed_value",
    DataType::Union(
        UnionFields::new(
            vec![0, 1],
            vec![
                Field::new("event_type", DataType::Struct(element_group_1), true),
                Field::new("event_ts", DataType::Struct(element_group_2), true),
            ],
        ),
        UnionMode::Sparse,
    ),
    false,
);
@friendlymatthew (author):

Similar to https://github.com/apache/arrow-rs/pull/7921/files#r2203048613, but this is nicer, since we can treat field names as key names.

@scovich (Contributor) left a comment:

Didn't actually review yet, but wanted to at least respond to one comment.



}
}

if let Some(typed_value_field) = fields.iter().find(|f| f.name() == TYPED_VALUE) {
Contributor:

Suggested change
- if let Some(typed_value_field) = fields.iter().find(|f| f.name() == TYPED_VALUE) {
+ if let Some(typed_value_field) = typed_value_field_res {


| DataType::Float32
| DataType::Float64
| DataType::BinaryView => {}
DataType::Union(union_fields, _) => {
Contributor:

My initial reaction was that I don't think variant data can represent a union type?

I guess this is a way to slightly relax strongly typed data, as long as the union members themselves are all valid variant types? And whichever union member is active becomes the only field+value of a variant object? But how would a reader of that shredded data know to read it back as a union, instead of the (sparse) struct it appears to be? Would it be better to just require a struct from the start?

Contributor:

I recommend we start without support for union type and then add it as we implement additional functionality



@alamb (Contributor) left a comment:

Thank you @friendlymatthew -- this is looking quite cool

In order to make progress, I would suggest we try and write up a basic 'end to end' test.

Specifically, perhaps something like

  1. Create a perfectly shredded Arrow array for a uint64 value
  2. Implement the get_variant API from #7919 / @Samyak2 for that array and show how it will return Variant::Int64

And then we can expand those tests for the more exciting shredding variants afterwards

I think the tests in this PR for just the schemas are a bit too theoretical -- they would be much closer to the end user if they were used for actual StructArrays in tests


@alamb (Contributor) left a comment:

Thank you @friendlymatthew -- I think this is quite close.

Ok(())
}

/// Validates that the provided [`Fields`] conform to the Variant shredding specification.
Contributor:

One thing I thought of while reviewing this PR: maybe we could wrap this into its own structure, like

struct VariantSchema {
    inner: Fields,
}

And then all this validation logic could be part of the constructor:

impl VariantSchema {
    fn try_new(fields: Fields) -> Result<Self> { ... }
    ...
}

The benefits of this would be

  1. Now we could be sure that a validated schema was always passed to shred_variant
  2. We would then have a place to put methods on -- such as VariantSchema::type(path: VariantPath) for retrieving the type of a particular path, perhaps
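A sketch of how that newtype could look end to end (still illustrative; the validation body is stubbed out):

use arrow_schema::{ArrowError, Fields};

pub struct VariantSchema {
    inner: Fields,
}

impl VariantSchema {
    /// Validation runs once, at construction, so any VariantSchema that
    /// exists is known to be spec-compliant.
    pub fn try_new(inner: Fields) -> Result<Self, ArrowError> {
        validate_shredded_schema(&inner)?;
        Ok(Self { inner })
    }

    pub fn fields(&self) -> &Fields {
        &self.inner
    }
}

// assumed: the validation routine from this PR (elided here)
fn validate_shredded_schema(_fields: &Fields) -> Result<(), ArrowError> {
    Ok(())
}

Downstream APIs like shred_variant could then take a &VariantSchema and rely on the invariant instead of re-validating.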


@scovich (Contributor) left a comment:

Couple more comments, but mostly waiting for the PR to address existing comments


pub fn validate_value_and_typed_value(
    fields: &Fields,
    allow_both_null: bool,
Contributor:

When would we allow (or forbid) both fields being null?

@friendlymatthew (author):

My thinking was that value and typed_value can both be null for objects.

Per the spec:

A field's value and typed_value are set to null (missing) to indicate that the field does not exist in the variant. To encode a field that is present with a null value, the value must contain a Variant null: basic type 0 (primitive) and physical type 0 (null).
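A sketch of the distinction, with Options standing in for the two columns (the 0x00 header byte is my reading of "basic type 0 + physical type 0", i.e. a Variant null):

struct ShreddedField {
    value: Option<Vec<u8>>,   // Variant-encoded binary
    typed_value: Option<i64>, // shredded representation
}

fn main() {
    // field does not exist in the variant object: both columns are null
    let _missing = ShreddedField { value: None, typed_value: None };

    // field present with an explicit null: value holds a Variant null
    let _present_null = ShreddedField { value: Some(vec![0x00]), typed_value: None };
}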

@friendlymatthew friendlymatthew marked this pull request as draft July 17, 2025 20:47
@friendlymatthew (author):

Last day at the conference! Will get back on the variant machine tomorrow. @scovich

@friendlymatthew friendlymatthew force-pushed the friendlymatthew/shred branch from 9ba8609 to debe856 Compare July 19, 2025 09:45
@friendlymatthew friendlymatthew force-pushed the friendlymatthew/shred branch 4 times, most recently from 609960f to fb7ba41 Compare July 20, 2025 12:20
@scovich (Contributor) left a comment:

Liking the newtype for validated schema!

Comment on lines 150 to 152
todo!("what does a shredded value look like?");
// ideally here, I would unpack the shredded_field
// and recursively call validate_value_and_typed_value with inside_shredded_object set to true
Contributor:

You already did the validation for leaf values at L110 above; maybe just finish the job there, by recursing on the DataType::ListView and DataType::Struct match arms?
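A sketch of what folding the recursion into those match arms could look like (leaf arms abbreviated; not the PR's actual code):

use arrow_schema::{ArrowError, DataType, Fields};

fn validate_value_and_typed_value(fields: &Fields, _allow_both_null: bool) -> Result<(), ArrowError> {
    if let Some(typed_value) = fields.iter().find(|f| f.name() == "typed_value") {
        match typed_value.data_type() {
            DataType::Boolean | DataType::Int32 | DataType::Int64 => {} // leaves, as before
            DataType::Struct(nested) => {
                // shredded object: each field is a group with its own value/typed_value
                for field in nested.iter() {
                    if let DataType::Struct(inner) = field.data_type() {
                        validate_value_and_typed_value(inner, false)?;
                    }
                }
            }
            DataType::ListView(element) => {
                // shredded array: the element is itself a value/typed_value group
                if let DataType::Struct(inner) = element.data_type() {
                    validate_value_and_typed_value(inner, false)?;
                }
            }
            other => {
                return Err(ArrowError::NotYetImplemented(format!(
                    "Unsupported 'typed_value' field, got {other}"
                )))
            }
        }
    }
    Ok(())
}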

Comment on lines +181 to +189
match self.value_schema {
    ValueSchema::MissingValue => None,
    ValueSchema::ShreddedValue(_) => None,
    ValueSchema::Value(value_idx) => Some(value_idx),
    ValueSchema::PartiallyShredded { value_idx, .. } => Some(value_idx),
Contributor:

Could also consider:

Suggested change
- match self.value_schema {
-     ValueSchema::MissingValue => None,
-     ValueSchema::ShreddedValue(_) => None,
-     ValueSchema::Value(value_idx) => Some(value_idx),
-     ValueSchema::PartiallyShredded { value_idx, .. } => Some(value_idx),
+ use ValueSchema::*;
+ match self.value_schema {
+     MissingValue | ShreddedValue(_) => None,
+     Value(value_idx) | PartiallyShredded { value_idx, .. } => Some(value_idx),

Less redundancy... but I'm not sure it actually improves readability very much?

}

pub fn value(&self) -> Option<&FieldRef> {
    self.value_idx().map(|i| self.inner.get(i).unwrap())
Contributor:

I realize the unwrap should be safe, but is there any harm in using and_then instead to eliminate the possibility of panic?

Suggested change
- self.value_idx().map(|i| self.inner.get(i).unwrap())
+ self.value_idx().and_then(|i| self.inner.get(i))

Downside is, if the value index were ever incorrect, we would silently fail by returning None instead of panicking. But on the other hand, if the value index were ever incorrect, we're just as likely to silently return the wrong field rather than panic. I'm not sure panicking only some of the time actually helps?

(again below)

Comment on lines +194 to +204
match self.value_schema {
    ValueSchema::MissingValue => None,
    ValueSchema::Value(_) => None,
    ValueSchema::ShreddedValue(shredded_idx) => Some(shredded_idx),
    ValueSchema::PartiallyShredded {
        shredded_value_idx, ..
    } => Some(shredded_value_idx),
Contributor:

Similar to above, but I think this one is actually a readability improvement:

Suggested change
match self.value_schema {
ValueSchema::MissingValue => None,
ValueSchema::Value(_) => None,
ValueSchema::ShreddedValue(shredded_idx) => Some(shredded_idx),
ValueSchema::PartiallyShredded {
shredded_value_idx, ..
} => Some(shredded_value_idx),
use ValueSchema::*;
match self.value_schema {
MissingValue | Value(_) => None,
ShreddedValue(shredded_idx) | PartiallyShredded { shredded_value_idx, .. } => {
Some(shredded_value_idx)
}

Contributor:

Or, maybe it just needs the use ValueSchema::* part, to avoid the fmt line breaks on ShreddedValue?

Suggested change
- match self.value_schema {
-     ValueSchema::MissingValue => None,
-     ValueSchema::Value(_) => None,
-     ValueSchema::ShreddedValue(shredded_idx) => Some(shredded_idx),
-     ValueSchema::PartiallyShredded {
-         shredded_value_idx, ..
-     } => Some(shredded_value_idx),
+ use ValueSchema::*;
+ match self.value_schema {
+     MissingValue => None,
+     Value(_) => None,
+     ShreddedValue(shredded_idx) => Some(shredded_idx),
+     PartiallyShredded { shredded_value_idx, .. } => Some(shredded_value_idx),

Comment on lines -85 to -86
/// TODO: 1) Add extension type metadata
/// TODO: 2) Add support for shredding
Contributor:

Are these TODOs actually done? I didn't see anything related to 1/ in this PR, and I would think there's (a lot?) more to 2/ than just adding schema support?

@friendlymatthew (author):

Sorry, I plan on at least getting 2 done in this PR.

@friendlymatthew friendlymatthew force-pushed the friendlymatthew/shred branch 3 times, most recently from 0d7f90c to cce2453 Compare July 21, 2025 15:32
@friendlymatthew friendlymatthew force-pushed the friendlymatthew/shred branch from cce2453 to a71380a Compare July 21, 2025 15:32
@alamb (Contributor) commented Jul 24, 2025:

👋 I am just checking in to see how this PR is going

I am not sure if it is correct, but I am thinking this PR is the first part of the "writing shredded variants" story

In order to drive it forward, I wonder if it might be a good idea to try and pick some simple test cases -- for example, try to write a test that produces the output VariantArray that is manually constructed in:

So perhaps that would mean a test like

// create a shredded schema that specifies shredding as Int32
let shredded_schema = ...; // Not sure??

// make an array builder with that schema
let mut builder = VariantArrayBuilder::new()
    .with_schema(shredded_schema);

// first row is value 34
let mut row0 = builder.variant_builder();
row0.append_variant(Variant::Int32(34));
row0.finish();
// second row is null
builder.append_null();
// third row is "n/a" (a string)
let mut row2 = builder.variant_builder();
row2.append_variant(Variant::from("n/a"));
row2.finish();
// fourth row is value 100
let mut row3 = builder.variant_builder();
row3.append_variant(Variant::Int32(100));
row3.finish();

// complete the array
let array = builder.finish();

// verify that the resulting array is a StructArray with the
// structure specified in https://github.com/apache/arrow-rs/pull/7965
