
Commit 1a8c26c

ueshin authored and HyukjinKwon committed
[SPARK-52829][PYTHON][FOLLOWUP] Remove unnecessary special handling
### What changes were proposed in this pull request?

Removes unnecessary special handling for empty schema in UDTF with Arrow path.

### Why are the changes needed?

`LocalDataToArrowConversion.convert` handles the empty schema properly after #51523.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

The existing tests.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #51606 from ueshin/issues/SPARK-52829/udtf.

Authored-by: Takuya Ueshin <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
1 parent b14eb1e · commit 1a8c26c

File tree

1 file changed: +1 −4 lines


python/pyspark/worker.py

Lines changed: 1 addition & 4 deletions
```diff
@@ -1668,12 +1668,9 @@ def convert_to_arrow(data: Iterable):
                pa.RecordBatch.from_pylist(data, schema=pa.schema(list(arrow_return_type)))
            ]
        try:
-           ret = LocalDataToArrowConversion.convert(
+           return LocalDataToArrowConversion.convert(
                data, return_type, prefers_large_var_types
            ).to_batches()
-           if len(return_type.fields) == 0:
-               return [pa.RecordBatch.from_struct_array(pa.array([{}] * len(data)))]
-           return ret
        except Exception as e:
            raise PySparkRuntimeError(
                errorClass="UDTF_ARROW_TYPE_CONVERSION_ERROR",
```
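For context, a minimal sketch of the special case being removed: the old branch built a zero-column batch of empty structs by hand whenever the UDTF's return schema had no fields, instead of going through `LocalDataToArrowConversion.convert`. The snippet below only assumes `pyarrow` and a hypothetical row count `n`; it illustrates that removed workaround and is not Spark code.

```python
import pyarrow as pa

# What the removed branch produced for an empty return schema:
# n rows of an empty struct, turned into a RecordBatch with zero columns.
n = 3  # hypothetical number of returned rows
batch = pa.RecordBatch.from_struct_array(pa.array([{}] * n))

print(batch.num_rows)     # 3
print(batch.num_columns)  # 0
print(batch.schema)       # a schema with no fields
```

After #51523, `LocalDataToArrowConversion.convert` produces an equivalent empty-schema result on its own, so the unified `return ... .to_batches()` path in the diff covers this case as well.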
