forward-allocating bump heap allocator #460
base: 0.1.7-main
Conversation
We didn't really discuss this ahead of time. I'm not entirely sure this is the right solution here. It's quite a heavy change to make when we can already support 14 out of 16 positions.
There's a more comprehensive set of realistic tests of combinations of positions, mints, and other things that @IliaZyrin identified; imo we need to run those and get a realistic view first before doing something like this.
programs/marginfi/src/allocator.rs
Outdated
```
## Why Backward Allocation Breaks with Extended Heap
If we simply compile the program with an increased HEAP_LENGTH without changing the
allocation direction, and without using `requestHeapFrame` (for functions that fit in 32 KiB), we get this situation:
```
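A rough host-side sketch of the failure mode described in the doc (the addresses and sizes below are illustrative placeholders, not the actual Solana constants): with backward bump allocation, the first allocation lands near `HEAP_START + HEAP_LENGTH`, so if the program is compiled with a larger `HEAP_LENGTH` than the 32 KiB the runtime actually maps, that first write touches unmapped memory. A forward allocator starts at the bottom of the mapped region and is safe either way.

```rust
// Illustrative constants only; the real Solana heap base and sizes differ.
const HEAP_START: usize = 0x3000_0000_0000;
const MAPPED_HEAP: usize = 32 * 1024; // what the runtime actually maps without requestHeapFrame
const COMPILED_HEAP_LENGTH: usize = 256 * 1024; // what the program was compiled with

// Backward bump: the first allocation is placed at the top of the *compiled*
// heap length, far beyond the mapped 32 KiB region -> access violation on-chain.
fn backward_first_alloc(size: usize) -> usize {
    HEAP_START + COMPILED_HEAP_LENGTH - size
}

// Forward bump: the first allocation is placed at the bottom of the heap,
// always inside the mapped region regardless of the compiled length.
fn forward_first_alloc(_size: usize) -> usize {
    HEAP_START
}

fn main() {
    let b = backward_first_alloc(64);
    let f = forward_first_alloc(64);
    assert!(b >= HEAP_START + MAPPED_HEAP, "backward alloc points past mapped heap");
    assert!(f < HEAP_START + MAPPED_HEAP, "forward alloc stays inside mapped heap");
    println!("backward first-alloc offset: {} bytes", b - HEAP_START);
    println!("forward first-alloc offset: {} bytes", f - HEAP_START);
}
```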
This is interesting; I didn't know the forward allocation was incompatible with requestHeapFrame. Is there a demo or something to show this?
I've updated the doc to explain this better... but a similar implementation can be seen here:
Squads allocator.rs
```rust
let oracle_ais = &remaining_ais[oracle_ai_idx..end_idx];

// Create oracle adapter (heap allocation happens here)
let price_adapter_result = OraclePriceFeedAdapter::try_from_bank(&bank, oracle_ais, &clock);
```
Did we experiment at all with how much heap the oracle call here actually uses in the Kamino case (i.e., did we confirm it's ~3 KiB)?
Maybe there is a way to cut back the heap usage?
The ~3 KiB figure is just a worst-case overestimate. After testing with logs, the usage usually doesn't exceed ~128 bytes (and that's for the Switchboard oracles).
Yeah, it's fine if you're documenting the worst case; I just want to confirm the worst-case size for our napkin math of memory usage.
If you can print the heap pointer before/after, you should be able to precisely document the worst case.
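A hypothetical sketch of that before/after measurement. Here `heap_pos()` is modeled with a host-side cursor; on-chain it would read the forward allocator's stored bump pointer instead (the names `heap_pos` and `bump_alloc` are assumptions for illustration, not the PR's exact API).

```rust
use std::cell::Cell;

// Host-side stand-in for the allocator's bump cursor.
thread_local! {
    static CURSOR: Cell<usize> = Cell::new(0);
}

// On-chain, this would read the allocator's saved bump pointer.
fn heap_pos() -> usize {
    CURSOR.with(|c| c.get())
}

// Minimal forward bump allocation: align the cursor up, then advance it.
fn bump_alloc(size: usize, align: usize) -> usize {
    CURSOR.with(|c| {
        let aligned = (c.get() + align - 1) & !(align - 1);
        c.set(aligned + size);
        aligned
    })
}

fn main() {
    let before = heap_pos();
    bump_alloc(100, 8); // stand-in for the oracle adapter's allocations
    let after = heap_pos();
    // This delta is exactly the "print the heap pointer before/after" measurement.
    println!("oracle heap usage: {} bytes", after - before);
}
```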
Wasn't aware we had that solution already @Henry-E... @jgur-psyops mentioned it a while back and I just thought I'd give it a try.
It's not a solution as such. It's more that there are a lot more factors to consider than just memory issues when scaling to >8 integration assets, things that would be a better use of your time than this.
Great implementation overall, will give it another look after I land. Once we confirm that 16 positions works with the custom heap reset (it seems to), let's see if we can cut memory down to get the same pass condition without resetting the heap pointer (the answer might be no, but I suspect it's possible). Don't spend too much time on it though; if it's working with the custom heap, then 1-2 days at most to try getting it working without.
```rust
let checkpoint = heap_pos();
{
    let temp_vec: Vec<u8> = vec![1, 2, 3]; // Heap allocation
    let sum: u64 = temp_vec.iter().sum(); // Copy to stack!
```
This is a confusing comment: we're not copying anything here, but rather putting a new value (`sum`) on the stack.
```rust
// Note: in flashloans, risk_engine is None, and we skip the cache price update.
maybe_price = risk_engine.and_then(|e| e.get_unbiased_price_for_bank(&bank_pk).ok());
// Fetch unbiased price for cache update
let bank = bank_loader.load()?;
```
So the logic is now exactly the same as in the `else` below. I suggest we just move it out of the scope to avoid duplicating it.
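A hedged sketch of the suggested deduplication, with stand-in types (`RiskEngine` and `get_unbiased_price` are illustrative, not the actual marginfi API): since both branches compute the price the same way, the shared expression can be hoisted above the branch, and the flashloan case falls out naturally because `risk_engine` is `None` and `and_then` short-circuits.

```rust
struct RiskEngine {
    price: f64,
}

impl RiskEngine {
    fn get_unbiased_price(&self) -> Option<f64> {
        Some(self.price)
    }
}

// Before the refactor, both if/else arms computed `maybe_price` identically.
// After hoisting, one line handles both paths: in flashloans `risk_engine`
// is None, `and_then` returns None, and the cache price update is skipped.
fn cache_price(risk_engine: Option<&RiskEngine>) -> Option<f64> {
    risk_engine.and_then(|e| e.get_unbiased_price())
}

fn main() {
    assert_eq!(cache_price(Some(&RiskEngine { price: 1.5 })), Some(1.5));
    assert_eq!(cache_price(None), None); // flashloan path: no update
    println!("ok");
}
```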
```rust
{
    let heap_after_oracle = heap_pos();
    let heap_used = heap_after_oracle.saturating_sub(heap_checkpoint);
    msg!(
```
We should remove this before merging. Or, which is probably better, change it to `debug!()`.
```rust
    remaining_ais: &'info [AccountInfo<'info>],
    health_cache: &mut Option<&mut HealthCache>,
) -> MarginfiResult {
    let (equity_assets, equity_liabs) = get_health_components(
```
Missing check for flashloan here? Also, should we just put it directly inside get_health_components if it's only used by the callers of this function?
```sh
fi

if [ "$cluster" = "mainnet" ]; then
    features="--features mainnet-beta"
```
Not using custom-heap in mainnet?
```ts
// Only deposit into the first MAX_KAMINO_POSITIONS banks
for (let i = 0; i < MAX_KAMINO_POSITIONS; i += depositsPerTx) {
  // NOTE: We only deposit into 8 banks for this test suite since we don't use a LUT.
```
Let's just rename the const above to be KAMINO_POSITIONS and use it everywhere (with the value of 8, as before). Otherwise it creates confusion since MAX_KAMINO_POSITIONS is not even used.
```ts
});

// Add compute budget
// Add compute budget
```
Formatting: this comment is duplicated.
```ts
const MAX_KAMINO_DEPOSITS = 8; // Maximum Kamino positions per account
const NUM_KAMINO_BANKS_FOR_TESTING = 9; // Create 9 banks to test liquidator limit

const NUM_KAMINO_BANKS_FOR_TESTING = 17;
```
Do we really need 17 here? Is 15 not enough?
Please update/delete the outdated comments throughout this file, e.g. they mention 9 banks and expectations that no longer hold.
```ts
import { CONF_INTERVAL_MULTIPLE, ORACLE_CONF_INTERVAL } from "./utils/types";

/** Maximum Kamino positions per account - now 16 with custom allocator */
const MAX_KAMINO_POSITIONS = 16;
```
- This is unused. Let's delete it.
- I don't think we need a separate const for Kamino, given that the maximum number of Kamino positions now equals the general maximum. But up to you, if you want to keep it for documentation purposes.
```rust
/// with up to 16 positions.
///
/// Returns (account_health, assets, liabilities) if the account is liquidatable.
pub fn check_pre_liquidation_condition_and_get_account_health<'info>(
```
A general comment about all these new functions: should we remove their non-heap-optimized versions now? Or is there a reason to keep both variants?
Summary:
Introduces a custom forward-allocating bump heap allocator with memory recycling, increasing MAX_KAMINO_POSITIONS from 8 to 16.
Key Changes:
Why:
Previously, accounts with many Kamino positions risked becoming unliquidatable due to CU/heap memory exhaustion. The custom allocator recycles heap memory (~3 KiB worst case per position), so 16 positions fit within the 32 KiB heap without requiring requestHeapFrame.
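A host-side simulation of the recycling arithmetic behind that claim (the ~3 KiB per position and 32 KiB heap are the worst-case figures discussed in this thread; the checkpoint/reset logic is a sketch, not the actual allocator):

```rust
const HEAP_LEN: usize = 32 * 1024;       // heap available without requestHeapFrame
const PER_POSITION: usize = 3 * 1024;    // worst-case temporary allocation per position
const POSITIONS: usize = 16;

// Without recycling, every position's temporary oracle-adapter allocation
// stays live, so peak usage grows linearly with the position count.
fn peak_without_recycling() -> usize {
    POSITIONS * PER_POSITION
}

// With recycling, the allocator records a checkpoint before each position's
// allocations and resets its bump cursor afterwards, so peak usage is the
// single largest per-position allocation rather than the sum.
fn peak_with_recycling() -> usize {
    let mut cursor = 0usize;
    let mut peak = 0usize;
    for _ in 0..POSITIONS {
        let checkpoint = cursor; // heap_pos() before the allocation
        cursor += PER_POSITION;  // temporary oracle-adapter allocation
        peak = peak.max(cursor);
        cursor = checkpoint;     // reset: memory recycled for the next position
    }
    peak
}

fn main() {
    assert!(peak_without_recycling() > HEAP_LEN); // 48 KiB blows the 32 KiB heap
    assert!(peak_with_recycling() <= HEAP_LEN);   // ~3 KiB peak fits easily
    println!(
        "no recycling: {} bytes, recycling: {} bytes, heap: {} bytes",
        peak_without_recycling(),
        peak_with_recycling(),
        HEAP_LEN
    );
}
```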