Dank Mids is an EVM RPC batching library that reduces the number of HTTP requests made to a node, saving time and resources. It automatically aggregates eth_call requests into multicalls and bundles all RPC calls together into JSON-RPC batch requests.
The goal of this tool is to reduce the workload on RPC nodes and allow users to make calls to their preferred node more efficiently. This optimization is especially useful for developers writing scripts that perform large-scale blockchain analysis, as it can save development time and resources.
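To make "JSON-RPC batch calls" concrete, here is a minimal sketch of the standard JSON-RPC 2.0 batch format such middleware relies on; the method names and parameters are illustrative, not Dank Mids' internal representation:

```python
# Several independent requests travel in one HTTP POST body as a JSON array.
batch_payload = [
    {"jsonrpc": "2.0", "id": 1, "method": "eth_getBlockByNumber", "params": ["0x10", False]},
    {"jsonrpc": "2.0", "id": 2, "method": "eth_getBalance", "params": ["0x0000000000000000000000000000000000000000", "latest"]},
    {"jsonrpc": "2.0", "id": 3, "method": "eth_chainId", "params": []},
]
# The node replies with a JSON array of responses, matched back to their
# callers by "id" rather than by ordering.
```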
There are a number of optimizations that went into making Dank the fastest way to pull RPC data into Python:
- Implemented (mostly) in C.
- Bypasses the default formatters in web3.py.
- JSON encoding and decoding is handled by msgspec. All responses are decoded to specialized msgspec.Struct objects defined in the evmspec library (see the sketch after this list).
- We use my C-compiled faster-eth-abi and faster-eth-utils instead of the original Python implementations, eth-abi and eth-utils.
- Responses are decoded on a JIT (just-in-time) basis, meaning individual task cancellation works as expected even when response data is received as part of a larger batch.
- more stuff I'll write down later...
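As a rough illustration of the msgspec decoding pattern mentioned in the list above (the real Struct definitions live in evmspec and are far richer than this), a JSON-RPC response can be decoded straight into a typed object with no intermediate dict:

```python
import msgspec

# Hypothetical minimal struct for illustration; evmspec defines the real ones.
class RPCResponse(msgspec.Struct):
    jsonrpc: str
    id: int
    result: str

raw = b'{"jsonrpc": "2.0", "id": 1, "result": "0x10d4f"}'

# msgspec parses, validates, and instantiates the Struct in one C-accelerated step.
resp = msgspec.json.decode(raw, type=RPCResponse)
print(resp.result)  # "0x10d4f"
```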
This diagram shows how requests move from user calls into Dank Mids queues, then through batch execution and response spoofing.
flowchart TD
A[User code<br/>await w3.eth.call / other RPC] --> B[DankMiddlewareController.__call__]
B -->|eth_call| C[eth_call request]
B -->|other RPC| D[RPCRequest]
C -->|multicall compatible| E[pending_eth_calls<br/>block to Multicall]
C -->|no multicall| D
D --> F[pending_rpc_calls<br/>JSONRPCBatch queue]
E --> G[RPCRequest.get_response<br/>triggers execute_batch when needed]
F --> G
G --> H[DankMiddlewareController.execute_batch]
H --> I[DankBatch<br/>multicalls + rpc_calls]
I --> J[DankBatch.coroutines]
J -->|large multicall| K[Multicall.get_response]
J -->|small multicall split| L[JSONRPCBatch]
J -->|rpc calls| L
K --> M[_requester.post<br/>eth_call to multicall contract]
M --> N[Multicall.spoof_response<br/>split results to eth_call futures]
L --> O[JSONRPCBatch.post<br/>build JSON-RPC batch payload]
O --> P[_requester.post batch<br/>+ decode responses]
P --> Q[JSONRPCBatch.spoof_response<br/>match by id, resolve futures]
N --> R[User awaiters resolve]
Q --> R
Notes:
- Batches can start early when the queue is full (`_Batch.append` -> `controller.early_start`).
- Otherwise, the first waiter to need results will trigger `execute_batch` from `RPCRequest.get_response`.
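None of this machinery is visible to the caller; you simply fire off concurrent requests and await them. A minimal sketch, assuming a dank_web3 object set up as shown in the usage section below and exposing the usual async web3.py eth API:

```python
import asyncio

async def fetch_balances(dank_web3, addresses):
    # Each get_balance call is queued by the middleware rather than sent
    # immediately. The queued calls go out together in JSON-RPC batches,
    # either when a queue fills up (early start) or when the first awaiter
    # needs its result (execute_batch).
    return await asyncio.gather(
        *(dank_web3.eth.get_balance(addr) for addr in addresses)
    )
```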
To install Dank Mids, use pip:
pip install dank-mids
We've included a benchmark script that compares how long it takes to fetch the pool tokens (token0 and token1) for every pool on Sushiswap on Ethereum mainnet using brownie (synchronous and threaded) versus dank_mids. To run it, first install the repo with `poetry install`, then run the benchmark with `brownie run examples/benchmark`.
Running 'examples/benchmark.py::main'...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4213/4213 [08:50<00:00, 7.95it/s]
brownie sync end: 2025-04-14 21:21:35.531099
brownie sync took: 0:08:50.212665
brownie 4 threads start: 2025-04-14 21:21:35.548373
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4213/4213 [08:31<00:00, 8.23it/s]
brownie 4 threads end: 2025-04-14 21:30:08.065397
brownie 4 threads took: 0:08:32.517024
brownie 16 threads start: 2025-04-14 21:30:08.086342
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4213/4213 [08:26<00:00, 8.32it/s]
brownie 16 threads end: 2025-04-14 21:38:38.141635
brownie 16 threads took: 0:08:30.055293
dank start: 2025-04-14 21:38:38.161024
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4213/4213 [00:55<00:00, 75.49it/s]
dank end: 2025-04-14 21:39:33.982835
dank took: 0:00:55.821811
As you can see, dank_mids allowed us to save 7 minutes and 34 seconds vs brownie with 16 threads. That's an 89% reduction in runtime, or about 9x as fast as brownie!
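The numbers above come from examples/benchmark.py. The sketch below is not that script, just a rough illustration of the access pattern it times (many small contract reads issued concurrently), assuming a dank_web3 object set up as in the next section, a Uniswap-V2-style pair ABI, and the usual async web3.py contract interface:

```python
import asyncio

async def fetch_pool_tokens(dank_web3, pool_addresses, pair_abi):
    """Fetch (token0, token1) for many pools concurrently.

    With Dank Mids in the stack, these eth_calls are aggregated into
    multicalls and JSON-RPC batches instead of one HTTP request per call.
    """
    async def tokens(address):
        pool = dank_web3.eth.contract(address=address, abi=pair_abi)
        return await asyncio.gather(
            pool.functions.token0().call(),
            pool.functions.token1().call(),
        )

    return await asyncio.gather(*(tokens(addr) for addr in pool_addresses))
```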
The primary function you need to use Dank Mids is setup_dank_w3_from_sync. This function takes a sync Web3 instance and wraps it for async use. If you're using dank_mids with eth-brownie, you can also just import the premade dank_web3 object.
Example usage of Dank Mids with web3.py:

from web3 import Web3
from dank_mids.helpers import setup_dank_w3_from_sync

# Wrap any sync Web3 instance for async, batched use.
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # your node's RPC endpoint
dank_web3 = setup_dank_w3_from_sync(w3)

# OR, with eth-brownie, import the premade object:
from dank_mids import dank_web3

# Then, from inside an async function:
random_block = await dank_web3.eth.get_block(123)

- COMING SOON: Dank Mids will also work with ape.
Yearn big brain Tonkers Kuma had this to say:
You can also set DANK_MIDS_DEMO_MODE=True to see a visual representation of the batching in real time on your console.
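For example, in Python you might set the flag before importing dank_mids (this assumes, as is typical for env-driven settings, that it is read at import time; you can equally export it in your shell):

```python
import os

# Enable demo mode before dank_mids is imported so the setting is picked up.
os.environ["DANK_MIDS_DEMO_MODE"] = "True"

import dank_mids  # noqa: E402
```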


