
Conversation

YairVaknin-starkware
Collaborator

@YairVaknin-starkware commented Sep 4, 2025

Title

[BUGFIX] Fix temp segment chain bug

Description

Fixes relocation-rule chaining: when a temp segment pointed to another temp segment (a multi-hop temp chain), the reference was not resolved all the way to its final destination.
We now flatten the relocation rules so that each temp segment maps directly to a concrete address (or to an int under extensive_hints); a minimal sketch of the flattening pass follows the list below.
The flattening includes a cycle guard and rejects non-zero offsets when a chain ends at an int.

Factors out relocation-rule flattening into flatten_relocation_rules(), with cfg variants:

  • non-extensive_hints: HashMap<usize, Relocatable>
  • extensive_hints: HashMap<usize, MaybeRelocatable>

Keeps the shared relocation flow in relocate_memory(); only the preprocessing differs.
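
For illustration, here is a minimal, self-contained sketch of the flattening idea in its non-extensive_hints shape. The types, the temp-index-to-key mapping, and the error variants are simplified assumptions made for the sketch, not the actual cairo-vm code:

```rust
use std::collections::HashMap;

// Simplified stand-ins for the real types (assumed for this sketch only).
#[derive(Clone, Copy, Debug, PartialEq)]
struct Relocatable {
    segment_index: isize, // negative for temporary segments
    offset: usize,
}

#[derive(Debug, PartialEq)]
enum MemoryError {
    Relocation,         // cycle detected in the rule chain
    UnallocatedSegment, // dangling chain: no rule for the next temp segment
}

/// Resolve every rule all the way to a concrete (non-temporary) address,
/// composing offsets along the way and bailing out on cycles or dangling chains.
fn flatten_relocation_rules(
    rules: &mut HashMap<usize, Relocatable>,
) -> Result<(), MemoryError> {
    let keys: Vec<usize> = rules.keys().copied().collect();
    for key in keys {
        let mut current = rules[&key];
        let mut hops = 0usize;
        // Follow temp -> temp hops until a real segment is reached.
        while current.segment_index < 0 {
            hops += 1;
            if hops > rules.len() {
                // More hops than rules means some entry was revisited.
                return Err(MemoryError::Relocation);
            }
            // In this sketch, temp segment -i maps to rule key i - 1.
            let next_key = (-(current.segment_index + 1)) as usize;
            let next = *rules.get(&next_key).ok_or(MemoryError::UnallocatedSegment)?;
            // Compose offsets: the current offset is carried onto the next hop.
            current = Relocatable {
                segment_index: next.segment_index,
                offset: next.offset + current.offset,
            };
        }
        rules.insert(key, current);
    }
    Ok(())
}

fn main() {
    // temp -1 -> temp -2 (offset 1), temp -2 -> real segment 3 (offset 4)
    let mut rules = HashMap::from([
        (0, Relocatable { segment_index: -2, offset: 1 }),
        (1, Relocatable { segment_index: 3, offset: 4 }),
    ]);
    flatten_relocation_rules(&mut rules).unwrap();
    assert_eq!(rules[&0], Relocatable { segment_index: 3, offset: 5 });
}
```

The extensive_hints variant works the same way, except that a chain may also terminate at an int (see the tests below).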

Added unit tests:

Without extensive_hints

  • flatten_relocation_rules_chain_happy — temp→temp→real chain flattens correctly (offsets composed).
  • flatten_relocation_rules_cycle_err — detects cycle and returns MemoryError::Relocation.
  • flatten_relocation_rules_missing_next_err — dangling chain (no rule for next temp) returns MemoryError::UnallocatedSegment((next_key, temp_len)).

With extensive_hints

  • flatten_relocation_rules_chain_happy_extensive_reloc_and_int — mixed chain:
    • temp→real flattens with offset composition.
    • multi-hop temp→temp→…→int collapses to the final int when cumulative offset is zero.
  • flatten_relocation_rules_int_with_non_zero_offset_err — multi-hop ending in Int with non-zero offset returns MemoryError::NonZeroOffset.
  • flatten_relocation_rules_cycle_err_extensive — detects cycle and returns MemoryError::Relocation.
  • flatten_relocation_rules_missing_next_err_extensive — dangling chain (no rule for next temp) returns MemoryError::UnallocatedSegment((next_key, temp_len)).

Integration (relocate_memory)

  • relocate_memory_temp_chain_to_reloc_multi_hop — exercises a multi-hop temp→temp→real chain end-to-end: references into temp memory are updated to real addresses, offsets are composed correctly, and the temp segment data is moved into the target real segment with consistency checks.
  • relocate_memory_temp_chain_to_int_multi_hop (with extensive_hints) — verifies the “collapse to Int” semantics (sketched below): a chain like temp→temp→…→Int(99) causes all references to that temp to become Int(99), and, by design, the involved temp segments are dropped (their raw cells are not copied).
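
As a toy illustration of these extensive_hints semantics (simplified types and error strings assumed for the sketch, not the crate's actual API or the test code): a chain ending at an Int collapses to that Int only when the offset accumulated along the chain is zero.

```rust
use std::collections::HashMap;

// Simplified stand-ins, assumed for this sketch only.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Relocatable { segment_index: isize, offset: usize }

#[derive(Clone, Copy, Debug, PartialEq)]
enum MaybeRelocatable { Addr(Relocatable), Int(u64) }

// Resolve one chain starting at `start`, collapsing to an Int terminus only when
// no offset was accumulated on the way.
fn resolve_chain(
    rules: &HashMap<usize, MaybeRelocatable>,
    start: usize,
) -> Result<MaybeRelocatable, &'static str> {
    let mut carried_offset = 0usize;
    let mut current = *rules.get(&start).ok_or("unallocated segment")?;
    for _ in 0..=rules.len() {
        match current {
            MaybeRelocatable::Addr(r) if r.segment_index < 0 => {
                carried_offset += r.offset;
                let next_key = (-(r.segment_index + 1)) as usize; // temp -i -> key i - 1
                current = *rules.get(&next_key).ok_or("unallocated segment")?;
            }
            MaybeRelocatable::Addr(r) => {
                return Ok(MaybeRelocatable::Addr(Relocatable {
                    segment_index: r.segment_index,
                    offset: r.offset + carried_offset,
                }));
            }
            MaybeRelocatable::Int(i) => {
                return if carried_offset == 0 {
                    Ok(MaybeRelocatable::Int(i))
                } else {
                    Err("non-zero offset into an Int destination")
                };
            }
        }
    }
    Err("relocation cycle detected")
}

fn main() {
    // temp -1 -> temp -2 (offset 0) -> Int(99): collapses to Int(99).
    let rules = HashMap::from([
        (0, MaybeRelocatable::Addr(Relocatable { segment_index: -2, offset: 0 })),
        (1, MaybeRelocatable::Int(99)),
    ]);
    assert_eq!(resolve_chain(&rules, 0), Ok(MaybeRelocatable::Int(99)));
}
```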

Checklist

  • Linked to GitHub issue
  • Unit tests added
  • Integration tests added
  • This change requires new documentation.
    • Documentation has been added/updated.
    • CHANGELOG has been updated.


github-actions bot commented Sep 4, 2025

**Hyper Threading Benchmark results**




hyperfine -r 2 -n "hyper_threading_main threads: 1" 'RAYON_NUM_THREADS=1 ./hyper_threading_main' -n "hyper_threading_pr threads: 1" 'RAYON_NUM_THREADS=1 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 1
  Time (mean ± σ):     24.611 s ±  0.002 s    [User: 23.758 s, System: 0.849 s]
  Range (min … max):   24.610 s … 24.613 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 1
  Time (mean ± σ):     25.655 s ±  0.021 s    [User: 24.821 s, System: 0.832 s]
  Range (min … max):   25.641 s … 25.670 s    2 runs
 
Summary
  hyper_threading_main threads: 1 ran
    1.04 ± 0.00 times faster than hyper_threading_pr threads: 1




hyperfine -r 2 -n "hyper_threading_main threads: 2" 'RAYON_NUM_THREADS=2 ./hyper_threading_main' -n "hyper_threading_pr threads: 2" 'RAYON_NUM_THREADS=2 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 2
  Time (mean ± σ):     13.508 s ±  0.079 s    [User: 24.015 s, System: 0.875 s]
  Range (min … max):   13.452 s … 13.564 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 2
  Time (mean ± σ):     13.864 s ±  0.005 s    [User: 24.702 s, System: 0.903 s]
  Range (min … max):   13.861 s … 13.868 s    2 runs
 
Summary
  hyper_threading_main threads: 2 ran
    1.03 ± 0.01 times faster than hyper_threading_pr threads: 2




hyperfine -r 2 -n "hyper_threading_main threads: 4" 'RAYON_NUM_THREADS=4 ./hyper_threading_main' -n "hyper_threading_pr threads: 4" 'RAYON_NUM_THREADS=4 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 4
  Time (mean ± σ):     10.163 s ±  0.277 s    [User: 36.493 s, System: 1.103 s]
  Range (min … max):    9.968 s … 10.359 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 4
  Time (mean ± σ):     10.372 s ±  0.269 s    [User: 36.922 s, System: 1.130 s]
  Range (min … max):   10.182 s … 10.563 s    2 runs
 
Summary
  hyper_threading_main threads: 4 ran
    1.02 ± 0.04 times faster than hyper_threading_pr threads: 4




hyperfine -r 2 -n "hyper_threading_main threads: 6" 'RAYON_NUM_THREADS=6 ./hyper_threading_main' -n "hyper_threading_pr threads: 6" 'RAYON_NUM_THREADS=6 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 6
  Time (mean ± σ):     10.012 s ±  0.316 s    [User: 37.053 s, System: 1.120 s]
  Range (min … max):    9.788 s … 10.236 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 6
  Time (mean ± σ):     10.241 s ±  0.163 s    [User: 37.138 s, System: 1.122 s]
  Range (min … max):   10.126 s … 10.356 s    2 runs
 
Summary
  hyper_threading_main threads: 6 ran
    1.02 ± 0.04 times faster than hyper_threading_pr threads: 6




hyperfine -r 2 -n "hyper_threading_main threads: 8" 'RAYON_NUM_THREADS=8 ./hyper_threading_main' -n "hyper_threading_pr threads: 8" 'RAYON_NUM_THREADS=8 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 8
  Time (mean ± σ):     10.170 s ±  0.023 s    [User: 37.134 s, System: 1.110 s]
  Range (min … max):   10.154 s … 10.186 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 8
  Time (mean ± σ):     10.320 s ±  0.010 s    [User: 37.421 s, System: 1.170 s]
  Range (min … max):   10.313 s … 10.327 s    2 runs
 
Summary
  hyper_threading_main threads: 8 ran
    1.01 ± 0.00 times faster than hyper_threading_pr threads: 8




hyperfine -r 2 -n "hyper_threading_main threads: 16" 'RAYON_NUM_THREADS=16 ./hyper_threading_main' -n "hyper_threading_pr threads: 16" 'RAYON_NUM_THREADS=16 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 16
  Time (mean ± σ):      9.932 s ±  0.046 s    [User: 37.613 s, System: 1.224 s]
  Range (min … max):    9.899 s …  9.965 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 16
  Time (mean ± σ):     10.186 s ±  0.254 s    [User: 37.861 s, System: 1.228 s]
  Range (min … max):   10.007 s … 10.365 s    2 runs
 
Summary
  hyper_threading_main threads: 16 ran
    1.03 ± 0.03 times faster than hyper_threading_pr threads: 16



github-actions bot commented Sep 4, 2025

Benchmark Results for unmodified programs 🚀

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base big_factorial | 2.129 ± 0.017 | 2.114 | 2.164 | 1.00 |
| head big_factorial | 2.135 ± 0.027 | 2.106 | 2.193 | 1.00 ± 0.02 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base big_fibonacci | 2.075 ± 0.030 | 2.043 | 2.150 | 1.00 |
| head big_fibonacci | 2.087 ± 0.035 | 2.050 | 2.169 | 1.01 ± 0.02 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base blake2s_integration_benchmark | 7.562 ± 0.066 | 7.495 | 7.706 | 1.01 ± 0.01 |
| head blake2s_integration_benchmark | 7.517 ± 0.061 | 7.428 | 7.594 | 1.00 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base compare_arrays_200000 | 2.197 ± 0.024 | 2.177 | 2.247 | 1.00 ± 0.01 |
| head compare_arrays_200000 | 2.187 ± 0.020 | 2.164 | 2.215 | 1.00 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base dict_integration_benchmark | 1.425 ± 0.005 | 1.418 | 1.432 | 1.00 ± 0.01 |
| head dict_integration_benchmark | 1.425 ± 0.014 | 1.409 | 1.456 | 1.00 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base field_arithmetic_get_square_benchmark | 1.225 ± 0.006 | 1.218 | 1.234 | 1.00 ± 0.01 |
| head field_arithmetic_get_square_benchmark | 1.222 ± 0.008 | 1.214 | 1.239 | 1.00 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base integration_builtins | 7.666 ± 0.234 | 7.521 | 8.316 | 1.01 ± 0.03 |
| head integration_builtins | 7.560 ± 0.106 | 7.427 | 7.751 | 1.00 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base keccak_integration_benchmark | 7.756 ± 0.086 | 7.680 | 7.979 | 1.00 |
| head keccak_integration_benchmark | 7.756 ± 0.143 | 7.608 | 8.053 | 1.00 ± 0.02 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base linear_search | 2.182 ± 0.017 | 2.154 | 2.205 | 1.00 |
| head linear_search | 2.196 ± 0.025 | 2.167 | 2.244 | 1.01 ± 0.01 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base math_cmp_and_pow_integration_benchmark | 1.531 ± 0.013 | 1.516 | 1.559 | 1.00 ± 0.02 |
| head math_cmp_and_pow_integration_benchmark | 1.525 ± 0.027 | 1.500 | 1.584 | 1.00 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base math_integration_benchmark | 1.458 ± 0.006 | 1.448 | 1.468 | 1.00 ± 0.01 |
| head math_integration_benchmark | 1.456 ± 0.007 | 1.443 | 1.467 | 1.00 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base memory_integration_benchmark | 1.215 ± 0.007 | 1.208 | 1.231 | 1.00 ± 0.01 |
| head memory_integration_benchmark | 1.215 ± 0.015 | 1.200 | 1.254 | 1.00 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base operations_with_data_structures_benchmarks | 1.543 ± 0.005 | 1.538 | 1.554 | 1.00 ± 0.01 |
| head operations_with_data_structures_benchmarks | 1.538 ± 0.013 | 1.522 | 1.563 | 1.00 |

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|:---|---:|---:|---:|---:|
| base pedersen | 532.8 ± 1.1 | 531.2 | 534.4 | 1.00 |
| head pedersen | 535.7 ± 3.3 | 530.4 | 539.5 | 1.01 ± 0.01 |

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|:---|---:|---:|---:|---:|
| base poseidon_integration_benchmark | 637.5 ± 5.5 | 631.1 | 651.2 | 1.02 ± 0.01 |
| head poseidon_integration_benchmark | 626.0 ± 6.3 | 619.2 | 638.1 | 1.00 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base secp_integration_benchmark | 1.840 ± 0.018 | 1.825 | 1.885 | 1.00 ± 0.01 |
| head secp_integration_benchmark | 1.834 ± 0.014 | 1.817 | 1.866 | 1.00 |

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|:---|---:|---:|---:|---:|
| base set_integration_benchmark | 630.5 ± 1.7 | 627.8 | 633.1 | 1.00 ± 0.01 |
| head set_integration_benchmark | 629.5 ± 3.3 | 625.0 | 635.7 | 1.00 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| base uint256_integration_benchmark | 4.276 ± 0.038 | 4.240 | 4.352 | 1.00 ± 0.01 |
| head uint256_integration_benchmark | 4.270 ± 0.047 | 4.205 | 4.355 | 1.00 |

@YairVaknin-starkware force-pushed the yairv/fix_temp_segment_chain_bug branch from f8ad17b to 602c809 on September 4, 2025 10:32

codecov bot commented Sep 4, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 96.65%. Comparing base (e2c6c91) to head (8ab7764).

Additional details and impacted files
@@                    Coverage Diff                    @@
##           starkware-development    #2195      +/-   ##
=========================================================
+ Coverage                  96.63%   96.65%   +0.02%     
=========================================================
  Files                        103      103              
  Lines                      43867    44180     +313     
=========================================================
+ Hits                       42391    42704     +313     
  Misses                      1476     1476              


@YairVaknin-starkware
Collaborator Author

Will add tests next week for Codecov.

@gabrielbosio
Collaborator

Hi, @YairVaknin-starkware! It would be great to have a description of this PR, just to make it easier to keep track of the development in the base branch.

@YairVaknin-starkware
Collaborator Author

Hi, @YairVaknin-starkware! It would be great to have a description of this PR, just to make it easier to keep track of the development in the base branch.

Sure, done. PTAL.

@YairVaknin-starkware
Collaborator Author

Will add tests next week for Codecov.

Done.

@YairVaknin-starkware
Collaborator Author

YairVaknin-starkware commented Sep 7, 2025

Also, please note that this is a quick and simple fix, since we assume this table won't grow too large (and each separate chain won't be long). I could also implement it so that intermediate chain entries are not traversed again once the value for the chain's starting key has been set, but as noted, that doesn't seem worth it and would require recording the visited entries of each chain (a rough sketch of that variant follows).
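
For reference, that variant — recording the entries visited along each chain and rewriting all of them to the final destination, so later chains stop as soon as they hit an already-compressed entry — could look roughly like the following simplified, self-contained sketch (the names, types, and key mapping are assumptions, not this PR's code):

```rust
use std::collections::HashMap;

// Simplified stand-in type, assumed for this sketch only.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Relocatable { segment_index: isize, offset: usize }

// Flatten with "path compression": every temp entry visited while resolving a chain
// is rewritten to point directly at the chain's final destination.
fn flatten_with_path_compression(rules: &mut HashMap<usize, Relocatable>) -> Option<()> {
    let keys: Vec<usize> = rules.keys().copied().collect();
    for start in keys {
        // Keys visited on the way to a concrete target, with their own offsets.
        let mut visited: Vec<(usize, usize)> = Vec::new();
        let mut key = start;
        let mut rule = rules[&key];
        while rule.segment_index < 0 {
            if visited.len() > rules.len() {
                return None; // cycle guard (the real code would return MemoryError::Relocation)
            }
            visited.push((key, rule.offset));
            key = (-(rule.segment_index + 1)) as usize; // temp -i maps to key i - 1 here
            rule = *rules.get(&key)?; // dangling chain: no rule for the next temp
        }
        // `rule` is now concrete; rewrite every visited entry to point at it directly.
        let mut resolved = rule;
        for (visited_key, own_offset) in visited.into_iter().rev() {
            resolved.offset += own_offset;
            rules.insert(visited_key, resolved);
        }
    }
    Some(())
}

fn main() {
    // temp -1 -> temp -2 (+2) -> temp -3 (+3) -> real segment 7 (offset 1):
    // both intermediate temp entries get compressed in a single pass.
    let mut rules = HashMap::from([
        (0, Relocatable { segment_index: -2, offset: 2 }),
        (1, Relocatable { segment_index: -3, offset: 3 }),
        (2, Relocatable { segment_index: 7, offset: 1 }),
    ]);
    flatten_with_path_compression(&mut rules).unwrap();
    assert_eq!(rules[&0], Relocatable { segment_index: 7, offset: 6 });
    assert_eq!(rules[&1], Relocatable { segment_index: 7, offset: 4 });
}
```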

@gabrielbosio
Collaborator

Description looks good. I also like the detailed tests.

  • Is it possible to add a test that calls relocate_memory like this one?
  • Is this something strictly related to the work being done in the starkware-development branch, or might there be a case where Cairo VM 2 has to handle a chain of temp segments?

@YairVaknin-starkware force-pushed the yairv/fix_temp_segment_chain_bug branch from d23f7a6 to 8ab7764 on September 16, 2025 17:24
@YairVaknin-starkware
Collaborator Author

  • Is it possible to add a test that calls relocate_memory like this one?

Added. PTAL @FrancoGiachetta @gabrielbosio @Yael-Starkware.

  • Is this something strictly related to the work being done in the starkware-development branch, or might there be a case where Cairo VM 2 has to handle a chain of temp segments?

It's a bug that could occur in any Cairo 0 code, but the only use case I know of is a (future) one needed for Stwo's backend (so, for now, it is aligned with the changes in starkware-development).

Collaborator

@Yael-Starkware left a comment


Reviewable status: 0 of 2 files reviewed, 7 unresolved discussions (waiting on @FrancoGiachetta and @YairVaknin-starkware)


vm/src/vm/vm_memory/memory.rs line 290 at r2 (raw file):

    }
    #[cfg(not(feature = "extensive_hints"))]
    fn flatten_relocation_rules(&mut self) -> Result<(), MemoryError> {

This function and the next one have a lot of common logic; I'd merge them into one and scope the cfg attribute to the minimal extent needed.

Code quote:

fn flatten_relocation_rules(&mut self) -> Result<(), MemoryError> {
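
One hypothetical way to realize this (a sketch under assumed, simplified types, not the crate's actual code) is to confine the cfg to a type alias for the rule target, so a single flatten_relocation_rules body can serve both builds:

```rust
use std::collections::HashMap;

// Toy stand-ins, assumed for this sketch only.
#[allow(dead_code)]
#[derive(Clone, Copy)]
struct Relocatable { segment_index: isize, offset: usize }
#[allow(dead_code)]
#[derive(Clone, Copy)]
enum MaybeRelocatable { RelocatableValue(Relocatable), Int(u64) }

// Scope the cfg to a single alias so the chain-following body is written once.
#[cfg(feature = "extensive_hints")]
type RelocationTarget = MaybeRelocatable;
#[cfg(not(feature = "extensive_hints"))]
type RelocationTarget = Relocatable;

#[allow(dead_code)]
fn flatten_relocation_rules(rules: &mut HashMap<usize, RelocationTarget>) {
    // Shared chain-following logic would live here; at most the Int terminus
    // (extensive_hints only) needs a small cfg'd branch inside this body.
    let _ = rules;
}
```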

vm/src/vm/vm_memory/memory.rs line 330 at r2 (raw file):

            loop {
                match dst {
                    MaybeRelocatable::RelocatableValue(r) if r.segment_index < 0 => {

Suggestion:

relocatable

vm/src/vm/vm_memory/memory.rs line 344 at r2 (raw file):

                        match next {
                            MaybeRelocatable::RelocatableValue(nr) => {

Suggestion:

next_relocatable

vm/src/vm/vm_memory/memory.rs line 381 at r2 (raw file):

        for segment in self.data.iter_mut().chain(self.temp_data.iter_mut()) {
            for cell in segment.iter_mut() {
                let value = cell.get_value();

How does a value from the segment turn into a relocatable?

Code quote:

 let value = cell.get_value();

vm/src/vm/vm_memory/memory.rs line 387 at r2 (raw file):

                            addr,
                            &self.relocation_rules,
                        )?);

Isn't that a duplicate of what happens in flatten_relocation_rules?

Code quote:

                    Some(MaybeRelocatable::RelocatableValue(addr)) if addr.segment_index < 0 => {
                        let mut new_cell = MemoryCell::new(Memory::relocate_address(
                            addr,
                            &self.relocation_rules,
                        )?);
