forked from AztecProtocol/aztec-packages
3.0.0 devnet.2 #8
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Open

porco-rosso-j wants to merge 1,917 commits into `pr/additional-bb-methods` from `3.0.0-devnet.2`
Conversation
…tocol#17872) Attempts to fetch the parent block when computing an attestation even if past the reexecution deadline. This may fix flakes in tests where reexecution is not needed, in particular the inactivity slash test.
…ocol#17773) Cleanup + docs related to EllipticRelation
- Add docs to the `EllipticRelation`
- Improve/simplify the corresponding test `UltraRelationConsistency::EllipticRelation` so that the derivation of the constraints from the explicit formulas is more transparent
- Clean up / document the corresponding gate methods `create_ecc_<dbl/add>_gate`
…tocol#17774) Add an `assert_on_curve` bool to the primary constructor: `cycle_group(field_t _x, field_t _y, bool_t _is_infinity, bool assert_on_curve);`
- All internal uses continue to avoid on-curve checks where they did before
- Noir-exposed operations (`MultiScalarMul`, `EcAdd`) enforce on-curve constraints for all points
At one time we needed an SRS of size 2^k + 1 in IPA because we were extracting an additional generator from it. Now we use a separate generator scaled by a pseudorandom value.
Crude approach towards a granular `@aztec/aztec.js`. Makes it much more tree-shakable and generally tidier. It still needs some work to move some exports around, but at least the big barrel export is gone, which will make for a brighter future.
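As a hedged illustration of the granular-entry-point idea (the paths and symbols below are hypothetical, not the actual `@aztec/aztec.js` layout), a subpath `exports` map is what lets bundlers skip everything a consumer does not import:

```typescript
// Hypothetical sketch: subpath exports vs. a barrel (names are illustrative).
// With a barrel, `import { Contract } from "@aztec/aztec.js"` makes the bundler
// evaluate the whole package; a subpath entry point narrows it to one submodule.
const exportsMap: Record<string, string> = {
  ".": "./dest/index.js",                   // the big barrel export (now gone)
  "./contract": "./dest/contract/index.js", // hypothetical granular entry
  "./rpc": "./dest/rpc/index.js",           // hypothetical granular entry
};

// Consumers would then write:
//   import { Contract } from "@aztec/aztec.js/contract";
// so the bundler only pulls in ./dest/contract/index.js and its own imports.
```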
Please read [contributing guidelines](CONTRIBUTING.md) and remove this line. For audit-related pull requests, please use the [audit PR template](?expand=1&template=audit.md). Co-authored-by: thunkar <[email protected]>
…ol#17892) For next-net we were mixing the `NETWORK` env var with some custom settings. The chart did not allow that (it read either `network` or `customNetwork`), so this PR merges the two settings. Co-authored-by: Phil Windle <[email protected]>
This embeds redis-cli and parallel in the repo in `ci3/bin` for use in the GA runner context right at the start of a CI run. They are needed there, and installing them via apt can take minutes for some inexplicable reason. They are only used on the GA runner; once the EC2 instance launches, we use whatever is in the AMI / build container.
When checking how note filters are used, I noticed that the ExtendedNote type is no longer used, so I dropped it in this PR.
In discussing how to improve developer experience by getting rid of the `context` object as much as possible, I realized some of it is quite low-hanging fruit. We can take a similar approach to the `storage` object and have the aztec-nr intermediate structs keep references to the context, so the end user invokes methods that already reference the context instead of having to provide it manually. The diff speaks for itself. Closes https://linear.app/aztec-labs/issue/F-109/remove-context-when-delivering-messages. Co-authored-by: Nicolás Venturo <[email protected]>
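The pattern described above amounts to capturing the context once at construction instead of threading it through every call. The real change is in Noir (aztec-nr); this is a hedged TypeScript analogue with invented names:

```typescript
// Hypothetical TypeScript analogue of the aztec-nr refactor; the actual code
// is Noir and the names here are made up for illustration.
interface Context {
  emit(msg: string): void;
}

// Before: free functions took the context on every call, e.g.
//   deliverMessage(context, msg)
// After: the intermediate struct holds a context reference, like `storage` does.
class MessageDelivery {
  constructor(private readonly context: Context) {}
  deliver(msg: string): void {
    this.context.emit(msg); // context captured once, not passed per call
  }
}

const sent: string[] = [];
const ctx: Context = { emit: (m) => sent.push(m) };
const delivery = new MessageDelivery(ctx); // built with the context up front
delivery.deliver("hello"); // no `context` argument at the call site
```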
Addresses minor issues.
- Adds a check for chainid on the bloblib before allowing the test lookup
- Removes unnecessary `RewardDistributor` storage on the rollup config (it was not properly removed when the reward configuration was separated)
- Returns early in `RewardLib:_toShares` when the result would be the same as in full execution
- Removes an unnecessary cast to bytes16 of a value that is already bytes16
Co-authored-by: ludamad <[email protected]> Co-authored-by: Claude <[email protected]>
BEGIN_COMMIT_OVERRIDE
chore: Simplify default PairingPoints construction and update AcirProgram metadata (AztecProtocol#17912)
refactor: Remove unused domain_start template parameter from Univariate
fix(ci): set AVM_TRANSPILER="" in ci-barretenberg (AztecProtocol#17991)
feat!: Aggregate multiple pairing points at once (AztecProtocol#17664)
feat!: Databus consistency checks in CIVC verification (AztecProtocol#17559)
fix: restore BB_WASM_PATH handling in yarn-project (AztecProtocol#17990)
fix: Update Mega proof length (AztecProtocol#17995)
fix(cmake): backwards headers (AztecProtocol#17993)
chore: Disallow dangerous usage of add_variable and from_witness functions with non-field types (updated) (AztecProtocol#17994)
fix: Fix merge train failure (AztecProtocol#18001)
END_COMMIT_OVERRIDE
…ocol#18008)
- Adjusted the Docusaurus config to include the current version based on an environment variable.
- Added an ignoreFiles option to exclude specific protocol specs from versioned docs.
- Updated the Netlify configuration to define build commands for production and deploy-preview contexts.
- Modified package.json scripts for improved build and serve commands, including environment-specific options.
Co-authored-by: Josh Crites <[email protected]>
…l#17970)

## Summary
Implement a disk-based CI logging system that writes logs to both Redis and persistent disk storage on the bastion, solving the problem of 120GB+ of compressed logs overwhelming Redis (which has only 2-week retention).

## Changes

### Infrastructure
- 1TB EBS volume mounted at `/logs-disk` on the CI bastion in us-east-2

### Core Implementation
- **`ci3/source_cache`**: New module with a `cache_persistent()` function for dual Redis+disk writes
- **`ci3/cache_log`**, **`ci3/run_test_cmd`**, **`ci3/denoise`**: Updated to use `cache_persistent()` for final log writes
- **`ci3/bootstrap_ec2`**: SSH key passing via base64 encoding (following the GCP_SA_KEY pattern)
- **`ci3/aws/build_instance_ssh_config`**: SSH multiplexing config for efficient persistent connections
- **`rkapp/rk.py`**: In [iac](https://github.com/AztecProtocol/iac/pull/18). Disk fallback when the Redis key expires

### Architecture
- **Write path**: Live updates (every 5s) → Redis only. Final write → Redis + SSH to bastion disk
- **Read path**: Try Redis first, fall back to `/logs-disk` on the bastion if expired
- **SSH transfer**: Background SSH pipe using multiplexed connections (ControlMaster, ControlPersist 10m)
- **Efficiency**: Single long-lived SSH connection shared across all log writes (zero latency after the first connect)
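The write-through and read-fallback paths above can be sketched as follows. This is a minimal in-memory model with invented names; the real implementation is shell, and the disk write is an SSH pipe to the bastion rather than a second map:

```typescript
// Minimal sketch of dual Redis+disk logging with read fallback. `redis` and
// `disk` are in-memory stand-ins for the real stores.
const redis = new Map<string, string>(); // ~2-week retention in reality
const disk = new Map<string, string>();  // persistent EBS volume stand-in

function cachePersistent(key: string, log: string): void {
  redis.set(key, log); // final write goes to Redis for fast reads...
  disk.set(key, log);  // ...and to disk so it survives Redis expiry
}

function readLog(key: string): string | undefined {
  return redis.get(key) ?? disk.get(key); // try Redis first, then fall back
}

cachePersistent("run-123", "all tests passed");
redis.delete("run-123");               // simulate the Redis key expiring
const recovered = readLog("run-123");  // still served from the disk fallback
```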
Creates a new type for a CIVC proof with public inputs, so that cases like this are explicit in the future. The Tx now uses the type without public inputs, and the public inputs are attached to it as needed.
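A hedged sketch of the typing idea (type and field names below are invented, not the actual aztec-packages definitions): keep the bare proof as the Tx's type, and add a distinct wrapper that pairs it with its public inputs:

```typescript
// Illustrative types only; the real CIVC proof types live in aztec-packages
// and are named differently.
type CivcProof = { bytes: Uint8Array };
type CivcProofWithPublicInputs = { proof: CivcProof; publicInputs: bigint[] };

// Public inputs get attached only where they are needed:
function withPublicInputs(
  proof: CivcProof,
  publicInputs: bigint[],
): CivcProofWithPublicInputs {
  return { proof, publicInputs };
}

const txProof: CivcProof = { bytes: new Uint8Array([1, 2, 3]) }; // as held by the Tx
const forVerifier = withPublicInputs(txProof, [42n]);            // attached as needed
```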
Seems like we regressed ourselves into a state where we couldn't run all the tests on the mainframe without hitting resource limits; e.g. running `yarn test` in yarn-project hits ~22k threads at peak. This in itself needs investigating. But the container limit is 65k, so what was the issue? Systemd has its own limits - by default 15% of the total - which is why bumping container pids helped, but not as much as I expected. This now ensures everything is maxed out to the sysbox container limit. As I've rebuilt the AMIs, I'm also removing the ensure_zig stuff as it's in the container now (as well as ldid). I've had to fix a load of issues introduced by the foundry version bump. I'm unsure why the version was bumped when the existing version seemed to be working... edit: An investigation into the 22k threads found a combo of the cursed rayon and a newly discovered Tokio thread pool that for some inexplicable reason is in Kev's KZG lib. Controlled down with TOKIO_WORKER_THREADS=1. A full parallel run of yarn-project tests now seems to peak at only about 1k additional threads.
…eployments (AztecProtocol#18011) Fixes the slash inactivity network test on scenario and next deployments by setting the local ejection threshold to an amount that allows one small slash without ejection from the set.
- Wires the local ejection threshold to terraform
- localEjectionThreshold = ejectionThreshold - slashAmountSmall
lagInEpochs: 0 doesn't work Co-authored-by: Mitchell Tracy <[email protected]>
…ol#18009) Attempt at fixing the error from [this run](https://github.com/AztecProtocol/aztec-packages/actions/runs/18787851531/job/53610628682) where the verification failed due to a non-existent contract. In this run, CREATE_ROLLUP_CONTRACTS was set to false, so the `DEPLOY_ROLLUP_CONTRACTS_DIR` template did not run. My guess is that `tf output` loaded the output from a different run on a different version, which still had this contract that no longer exists. In particular, `RewardDeploymentExtLib` exists in 2.0 but not in 2.1, the deployment that failed.
…ecProtocol#18007) The deploy-testnet step would never trigger, because its dependency was skipped when the minor version did not match.
Should no longer apply per AztecProtocol#17802 and AztecProtocol#17226
v3.0.0-devnet.2