[SPARK-52848][SQL] Avoid cast to Double in casting TIME/TIMESTAMP to DECIMAL #51539
Conversation
@uros-db Please review this PR.
```scala
case TimestampType => buildCast[Long](_, t => changePrecision(
  // 19 digits is enough to represent any TIMESTAMP value in Long.
  // 6 digits of scale is for microseconds precision of TIMESTAMP values.
  Decimal.apply(t, 19, 6), target))
case _: TimeType => buildCast[Long](_, t => changePrecision(
  // 14 digits is enough to cover the full range of TIME value [0, 24:00) which is
  // [0, 24 * 60 * 60 * 1000 * 1000 * 1000) = [0, 86400000000000).
  // 9 digits of scale is for nanoseconds precision of TIME values.
  Decimal.apply(t, precision = 14, scale = 9), target))
```
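As a rough illustration of what `Decimal.apply(t, precision, scale)` does here (a plain-Java sketch using `java.math.BigDecimal` in place of Spark's `Decimal`, which has analogous unscaled-value semantics):

```java
import java.math.BigDecimal;

public class TimeToDecimalSketch {
    public static void main(String[] args) {
        // A TIME value is stored as nanoseconds since midnight in a Long.
        // TIME'23:59:59.123456' -> 86399123456000 nanoseconds.
        long nanos = 86_399_123_456_000L;

        // Interpreting the Long as an unscaled decimal with scale 9 yields the
        // exact value in seconds, with no intermediate floating-point rounding.
        BigDecimal seconds = BigDecimal.valueOf(nanos, 9);
        System.out.println(seconds); // 86399.123456000
    }
}
```

Note that the result keeps the fixed scale of 9, which is also why the error-message value below gains trailing zeros.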
Are we really sure that we always want to use fixed precision and scale here?
We have to use a fixed scale, at least to get a correct decimal. And the precision should guarantee that we cover the full input range.
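The chosen precisions can be checked by counting digits (a plain-Java sketch, not part of the PR):

```java
public class PrecisionCoverageSketch {
    public static void main(String[] args) {
        // The maximum TIME value is just under 24 hours, in nanoseconds:
        long maxTimeNanos = 24L * 60 * 60 * 1_000_000_000L; // 86400000000000

        // 14 decimal digits are enough to hold any TIME value:
        System.out.println(String.valueOf(maxTimeNanos).length()); // 14

        // Any Long (so any TIMESTAMP in microseconds) fits in 19 digits:
        System.out.println(String.valueOf(Long.MAX_VALUE).length()); // 19
    }
}
```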
Yeah, that part is alright. However, as a consequence of this, we also get fixed precision/scale in error messages (e.g. #51539 (comment)). Let's continue the discussion over there.
```diff
@@ -810,7 +810,7 @@ class CastWithAnsiOnSuite extends CastSuiteBase with QueryErrorsBase {
       ),
       condition = "NUMERIC_VALUE_OUT_OF_RANGE.WITH_SUGGESTION",
       parameters = Map(
-        "value" -> "86399.123456",
+        "value" -> "86399.123456000",
```
Related to the previous comment, I think that these error messages become a bit counter-intuitive for users?
It seems it is counter-intuitive independently of the scale. No doubt there is room for improving the error message. We could print the original value in the source type, like TIME'23:59:59.123456', instead of 86399.123456, or maybe both together.
> It seems it is counter-intuitive independently from the scale.

This is a good argument. And yes, the source type is likely the best fit here, at least from the user perspective, I think.
Nice trick with using both changePrecision and Decimal without Double, although some error messages look a bit weird. If we're fine with this, then LGTM. Other than that, I have no concerns regarding this PR.
What changes were proposed in this pull request?
In the PR, I propose to simplify casting TIME/TIMESTAMP to DECIMAL and avoid the intermediate cast to Double.
Why are the changes needed?
To avoid unnecessary arithmetic operations and to improve code maintenance.
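Besides the extra arithmetic, going through Double can silently lose low-order digits for large Long values, since a double has only 53 bits (~15-16 decimal digits) of mantissa. A plain-Java sketch of the difference (using `java.math.BigDecimal` as a stand-in for Spark's `Decimal`; the timestamp value is illustrative):

```java
import java.math.BigDecimal;

public class DoubleLossSketch {
    public static void main(String[] args) {
        // A timestamp far in the future, in microseconds since the epoch.
        long micros = 1_234_567_890_123_456_789L;

        // Round-tripping through Double does not preserve the value:
        long roundTripped = (long) (double) micros;
        System.out.println(micros == roundTripped); // false

        // Interpreting the Long directly as an unscaled decimal is exact:
        BigDecimal exact = BigDecimal.valueOf(micros, 6);
        System.out.println(exact); // 1234567890123.456789
    }
}
```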
Does this PR introduce any user-facing change?
No.
How was this patch tested?
By running the affected test suites:
Was this patch authored or co-authored using generative AI tooling?
No.