WIP: Adding support for contrib/intarray style fast-allocated arrays #2086
base: develop
Conversation
Some initial comments, mostly style nits. Let's discuss the motivation more.

If memory serves, I believe an array is a varlena allocation that does not have to match the precise size of the array's length, yes? That is, the varlena can be allocated bigger than the array needs. Why not something more like the `spare_capacity_mut` API that `Vec` has, so we can construct things without first zeroing them? I believe the zeroing overhead is in many cases inconsequential and often optimized out anyway, but we can always make our code easier to optimize.
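For reference, this is the `Vec` pattern being suggested: a minimal, self-contained sketch using only the standard library. An `Array` builder could expose an uninitialized tail the same way.

```rust
use std::mem::MaybeUninit;

// Reserve capacity, write into the uninitialized tail via
// spare_capacity_mut, then declare the elements initialized.
// The buffer is never zeroed.
fn collect_without_zeroing(src: &[i32]) -> Vec<i32> {
    let mut v: Vec<i32> = Vec::with_capacity(src.len());
    let spare: &mut [MaybeUninit<i32>] = v.spare_capacity_mut();
    for (slot, &x) in spare.iter_mut().zip(src) {
        slot.write(x);
    }
    // SAFETY: the first `src.len()` elements were initialized just above.
    unsafe { v.set_len(src.len()) };
    v
}
```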
```rust
let elem_size = std::mem::size_of::<T>();
let nbytes: usize = port::ARR_OVERHEAD_NONULLS(1) + elem_size * len;

unsafe {
```
In new code, a safe function with an internal `unsafe` block should explain why the code is sound.
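For example, a minimal sketch of that convention (illustrative only, not code from this PR):

```rust
use pgrx::pg_sys;

/// Allocate `nbytes` zeroed bytes in the current PostgreSQL memory context.
fn alloc_zeroed(nbytes: usize) -> *mut u8 {
    // SAFETY: palloc0 is sound to call whenever this extension code runs
    // inside a live backend with a valid CurrentMemoryContext; it returns
    // a valid zeroed allocation of `nbytes`, or ereport()s (longjmps) on
    // failure rather than returning null.
    unsafe { pg_sys::palloc0(nbytes).cast() }
}
```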
@usamoi @workingjubilee instead of all the …
@mhov Probably so; I think that would be a better starting point anyway, as then it would be clearer what we're asking for.
Faster arrays!
Now that `Array<T>` is stable, this PR ports in performance enhancements for creating PostgreSQL `int[]` arrays, based on the optimizations found in PostgreSQL's `contrib/intarray` extension (specifically `new_intArrayType(int num)`). The goal is to significantly speed up the conversion from `&[i32]` to a PostgreSQL `ArrayType` Datum.

Background
Currently, to return a PostgreSQL `int[]` from a Rust function, we typically return a `Vec<i32>`, which then gets converted to a Datum via `array_datum_from_iter(..)` (implementation here).
This approach uses:
- `pg_sys::initArrayResult`
- `pg_sys::accumArrayResult`
- `pg_sys::makeArrayResult`
These APIs build a varlena array by accumulating values and repeatedly reallocating memory as the capacity grows. For larger arrays, this results in substantial performance overhead.
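For illustration, the per-element accumulation behind that path has roughly this shape (a sketch, not pgrx's actual implementation; the element OID is taken as a parameter, e.g. `INT4OID` for `int[]`):

```rust
use pgrx::{pg_sys, IntoDatum};

// Sketch of the accumulate-per-element pattern: every value funnels
// through accumArrayResult, which grows the build state's buffers
// (reallocating) as capacity is exceeded.
unsafe fn build_array_accum(values: &[i32], elem_oid: pg_sys::Oid) -> pg_sys::Datum {
    let ctx = pg_sys::CurrentMemoryContext;
    let mut state = pg_sys::initArrayResult(elem_oid, ctx, false);
    for &v in values {
        state = pg_sys::accumArrayResult(
            state,
            v.into_datum().unwrap(), // i32 -> Datum never fails
            false,                   // not null
            elem_oid,
            ctx,
        );
    }
    // Flatten the build state into a varlena array datum.
    pg_sys::makeArrayResult(state, ctx)
}
```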
Optimization
PostgreSQL's `contrib/intarray` takes a different approach for fixed-size datums: it pre-allocates the entire array up front and directly `memcpy`s the data into `ARR_DATA_PTR`. This avoids the need for reallocations and is much faster.

What I've added
- `BoxRet` for `Array<'a, T>` when `T` is a fixed-size numeric type (`i8`, `i16`, `i32`, `f32`, `f64`), allowing `Array<T>` to be used directly as a return type from `#[pg_extern]` functions.
- `ArrayType` `memcpy`-like behavior for fast construction from slices, building the array directly from `&[T]` (see the sketch after this list).
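As referenced above, here is a sketch of the `new_intArrayType`-style construction this PR is modeled on. It is illustrative rather than the PR's exact code, and it assumes `MAXIMUM_ALIGNOF == 8` and a little-endian 4-byte varlena header; `elem_oid` would be `INT4OID` for `int[]`.

```rust
use pgrx::pg_sys;

// MAXALIGN, assuming MAXIMUM_ALIGNOF == 8 (the usual configuration).
fn maxalign(n: usize) -> usize {
    (n + 7) & !7
}

// Rust mirror of the C macro ARR_OVERHEAD_NONULLS(ndims): header size of
// an array with no nulls bitmap (ArrayType + dims[] + lbound[]).
fn arr_overhead_nonulls(ndims: usize) -> usize {
    maxalign(std::mem::size_of::<pg_sys::ArrayType>() + 2 * std::mem::size_of::<i32>() * ndims)
}

/// Build a 1-D, no-nulls int4 array in a single allocation.
unsafe fn int4_array_from_slice(values: &[i32], elem_oid: pg_sys::Oid) -> *mut pg_sys::ArrayType {
    let nbytes = arr_overhead_nonulls(1) + std::mem::size_of::<i32>() * values.len();

    // One palloc0; the allocation is never grown or reallocated.
    let arr = pg_sys::palloc0(nbytes) as *mut pg_sys::ArrayType;

    // SET_VARSIZE equivalent: uncompressed 4-byte varlena header
    // (little-endian layout assumed; see postgres.h for the real macro).
    (arr as *mut u32).write((nbytes as u32) << 2);
    (*arr).ndim = 1;
    (*arr).dataoffset = 0; // 0 => no nulls bitmap
    (*arr).elemtype = elem_oid;

    // ARR_DIMS(arr)[0] and ARR_LBOUND(arr)[0] sit right after the struct.
    let dims = (arr as *mut u8).add(std::mem::size_of::<pg_sys::ArrayType>()) as *mut i32;
    dims.write(values.len() as i32); // dimension
    dims.add(1).write(1);            // lower bound

    // The fast part: a single copy into ARR_DATA_PTR instead of one
    // accumArrayResult call per element.
    let data = (arr as *mut u8).add(arr_overhead_nonulls(1)) as *mut i32;
    std::ptr::copy_nonoverlapping(values.as_ptr(), data, values.len());

    arr
}
```

With something like this underneath, a `#[pg_extern]` function can hand back an `Array<i32>` built from a `&[i32]` without any intermediate per-element accumulation.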
Benchmarks
I've benchmarked two approaches for turning `Vec<i32> -> Array<i32>`: the existing `array_datum_from_iter(..)` path and the new preallocate-and-copy path from this PR. Each test used 10,000 rows of randomly generated `int[]` in a temp table, with `work_mem = '1GB'`. I used `Instant::now()` to time only the microseconds needed to convert, and accumulated all of those timings.

It's been a while since I contributed, so I'm a little rusty on matching the team's style and conventions, and as usual the lifetime differences between PG-allocated and Rust-allocated memory still escape me. I'm sure the ergonomics of the new Array functions could be improved. Am I doing anything the wrong way here?
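For reference, the timing loop was shaped roughly like this (an illustrative sketch; the names are placeholders, not code from this PR):

```rust
use std::time::Instant;

// Time only the conversion for each row, accumulating the elapsed
// microseconds across all rows.
fn time_conversions<T>(rows: &[Vec<i32>], mut convert: impl FnMut(&[i32]) -> T) -> u128 {
    let mut total_us: u128 = 0;
    for row in rows {
        let start = Instant::now();
        let _out = convert(row); // conversion under test
        total_us += start.elapsed().as_micros();
    }
    total_us
}
```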