Encapsulate database reading/writing in a lazy container #188
RobinMcCorkell wants to merge 3 commits into Holzhaus:main
Conversation
Force-pushed from 03c8f30 to f28824c
I have no objections to the concept, though since rekordcrate is already quite fast and memory is cheap, this primarily seems like premature optimization. Did you encounter a concrete problem that this solves?
The current API with read_pages loads just the pages for a given table, so I assumed there was value in selectively loading the DB. If we don't care about that then we should eagerly load and persist everything, I agree.
Hey @RobinMcCorkell, thank you for working on this and sorry for the delay. I will give this a try this weekend.
Force-pushed from 5270bd9 to ffc3b08
@acrilique could you take a look at this proposed change? This idea predates your high-level API in #210, and should capture database writes as well as reads. I don't know the best way to integrate this idea with the high-level API though, WDYT?
I like this change and I think the high-level API could benefit from it. The only thing that bugs me is the naming. Could we call this lower-level one something like
Force-pushed from ffc3b08 to e1ab8b7
I have integrated this with the high-level API by adding methods to iterate over row variants and keeping a free function to post-process the playlists into the high-level structure. I chose to leave the naming (removing
Force-pushed from e1ab8b7 to ae11f0e
@acrilique would appreciate a fresh review on this with the latest commits |
Sorry Robin, I've been a bit busy during the holidays and now I'm back to college with exams and stuff. I'll try to take a look these days, if @Holzhaus or @Swiftb0y don't do it before me. IMO the next natural step would be the db-from-scratch generation, as reexports (both with and without the lazy) were already working for me on Pioneer CDJ players when I tested in mid-December.
The generation code I'm working on relies on this database API, which is why I'm keen to get this in first. There's a lot of extra complication when generating a database, like generating offsets and choosing how many rows can fit
acrilique left a comment:
I left a couple of comments/questions but it looks very good already, and the unsafe bit makes sense and is well documented.
acrilique left a comment:
Found the time to do a more thorough review. Left a few comments, PTAL.
Force-pushed from 85fa5b6 to e658221
Sorry @RobinMcCorkell, I still see the unsafe code blocks, I assume it's because you haven't pushed yet?
@acrilique indeed I missed the push, check now
acrilique left a comment:
Well, this LGTM. Hope the maintainers can take a look and merge at some point.
I'll start working on implementing a SerializedSize trait for row types to keep pushing on the serialization effort.
I already have something in the works for that too: #220
Hi @RobinMcCorkell @Holzhaus @Swiftb0y, can we rebase and merge this one? Or at least get some fresh criticism from the maintainers, as it was last reviewed a while ago. I have already written a program (which depends on rekordcrate with this PR and a few extra changes by me) that generates a fresh database from scratch that can then be read by a CDJ. So we've already come a long way.
Force-pushed from e2ea919 to 422964d
Force-pushed from 422964d to cf26bba
Force-pushed from cf26bba to d5e47b1
See the tests for example usage. This replaces `header.read_pages()` with a lazily-loaded `Database` that encapsulates the header and a vector of pages, some of which may be loaded into memory. A page can be loaded with `load_page()`, which takes a page index and returns a mutable reference to the page.

I added an iterator API to iterate over pages and rows, though this requires some unsafe code due to the lifetime requirements of the iterator trait (the standard library also uses unsafe in its iterators for the same reason). It's not ideal, but better than the other alternatives I tried, which either required higher-kinded types that proved difficult to use or severely limited the ergonomics.
Other changes of note: