Conversation
…at have been submitted
…hich is really an OTOCO instead.
- Introduced a reentrant lock to serialize order update events, preventing race conditions when 'fill' and 'new' events arrive concurrently.
- Cleaned up order trackers to avoid duplicates and ensure proper order status handling.
- Updated logging to provide clearer diagnostics during order processing.
- Removed redundant orders cache to streamline order management.
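The reentrant-lock approach described above can be illustrated with a minimal standalone sketch. `OrderEventSerializer` and the event names are hypothetical stand-ins, not the actual lumibot classes:

```python
import threading

class OrderEventSerializer:
    """Serializes concurrent order update events with a reentrant lock."""

    def __init__(self):
        self._order_update_lock = threading.RLock()
        self.processed = []

    def on_event(self, event):
        # An RLock lets the same thread re-enter (e.g. a 'fill' handler
        # that dispatches a nested status change), while other threads
        # must wait until the lock is fully released.
        with self._order_update_lock:
            self.processed.append(event)

serializer = OrderEventSerializer()
threads = [threading.Thread(target=serializer.on_event, args=(e,)) for e in ("new", "fill")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(serializer.processed))  # -> ['fill', 'new']
```

Each event appends exactly once because the lock serializes the two handler threads.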
📝 Walkthrough

Replaces a backtesting strategy log call with a standard logger; adds broker order-tracker deduplication and reuse; introduces a ProjectX ordering refactor (identifier-first, RLock for order updates, bracket/meta adjustments, price rounding); augments Schwab order timestamps; extends Order.is_canceled; adds a strategy get_orders identifiers filter; adds a logging handler backtest flag; adds extensive concurrency tests for the ProjectX order-update lock.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant StrategyExecutor
    participant Broker
    participant ProjectX
    participant OrderTrackers
    StrategyExecutor->>Broker: sync broker orders / fetch updates
    Broker->>OrderTrackers: get_tracked_orders(order.identifier)
    alt duplicates found
        Broker->>Broker: _clean_order_trackers(broker_order)
        Broker->>OrderTrackers: remove duplicate tracked orders
    end
    ProjectX->>Broker: receive broker order update
    Broker->>Broker: _detect_and_dispatch_order_changes()
    Note over Broker: Acquire _order_update_lock (RLock)
    alt status == ERROR
        Broker->>Broker: process_error_order()
        Broker->>StrategyExecutor: _on_error_order(order, error)
    else status in (NEW/FILLED/CANCELED/UPDATED)
        Broker->>StrategyExecutor: emit lifecycle event(s)
    end
    Note over Broker: Release _order_update_lock
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
⚠️ Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 Pylint (4.0.5) reported issues in lumibot/entities/order.py, lumibot/brokers/schwab.py, and lumibot/brokers/broker.py (output truncated).
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lumibot/strategies/strategy_executor.py (1)
95-102: ⚠️ Potential issue | 🟠 Major

Avoid introducing new backtest env toggles in executor defaults.

These two new env-variable switches add runtime behavior via the process environment (BACKTESTING_LOG_ITERATION_HEARTBEAT, BACKTESTING_CAPTURE_LOCALS). Please route these through strategy/broker config (or explicit parameters) instead of new env flags.

Suggested refactor

```diff
-        if self.broker.IS_BACKTESTING_BROKER:
-            self._log_iteration_heartbeat = os.environ.get("BACKTESTING_LOG_ITERATION_HEARTBEAT", "").lower() == "true"
+        if self.broker.IS_BACKTESTING_BROKER:
+            self._log_iteration_heartbeat = bool(
+                getattr(self.strategy, "backtesting_log_iteration_heartbeat", False)
+            )

-        if self.broker.IS_BACKTESTING_BROKER:
-            self._capture_locals = os.environ.get("BACKTESTING_CAPTURE_LOCALS", "").lower() == "true"
+        if self.broker.IS_BACKTESTING_BROKER:
+            self._capture_locals = bool(
+                getattr(self.strategy, "backtesting_capture_locals", False)
+            )
```

As per coding guidelines: "Do not add new environment variables by default. Prefer explicit function parameters, config objects, or stable defaults. Do not add env vars just to toggle/skip tests".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lumibot/strategies/strategy_executor.py` around lines 95 - 102, The change introduces two new environment-driven toggles (BACKTESTING_LOG_ITERATION_HEARTBEAT and BACKTESTING_CAPTURE_LOCALS) inside StrategyExecutor by setting self._log_iteration_heartbeat and self._capture_locals based on os.environ; instead, expose these as explicit config parameters or attributes on the broker/strategy and remove direct os.environ usage: add optional constructor parameters or read from an existing broker/strategy config (e.g., broker.IS_BACKTESTING_BROKER) to set self._log_iteration_heartbeat and self._capture_locals, and update any callers/tests to pass the desired flags rather than relying on new env vars.
🧹 Nitpick comments (3)
lumibot/backtesting/backtesting_broker.py (1)
2450-2453: Avoid eager string interpolation in this hot-path debug log.

This path runs frequently; f-strings still allocate even when DEBUG is off. Prefer parameterized logging.

♻️ Proposed change

```diff
-        self.logger.debug(
-            f"[DIAG] Order remained open for {display_symbol} ({detail}) "
-            f"id={order_identifier} at {self.datetime}"
-        )
+        self.logger.debug(
+            "[DIAG] Order remained open for %s (%s) id=%s at %s",
+            display_symbol,
+            detail,
+            order_identifier,
+            self.datetime,
+        )
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@lumibot/backtesting/backtesting_broker.py` around lines 2450 - 2453, Replace the eager f-string interpolation in the hot-path debug call with parameterized logging so string formatting is deferred when DEBUG is off: change the self.logger.debug call that currently uses f"[DIAG] Order remained open for {display_symbol} ({detail}) id={order_identifier} at {self.datetime}" to use a format string and arguments (e.g., self.logger.debug("[DIAG] Order remained open for %s (%s) id=%s at %s", display_symbol, detail, order_identifier, self.datetime)), keeping the same message and the same variables (display_symbol, detail, order_identifier, self.datetime).

tests/test_projectx_lifecycle_unit.py (1)
746-807: Test intent and assertions are slightly misaligned in lock-hold test.

The docstring says it verifies second-thread blocking, but the current assertions only prove re-entrancy from the same thread. Please either add a true contender-thread blocking assertion or rename the test to reflect re-entrancy validation.
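A true cross-thread blocking assertion can be sketched in isolation. The snippet below uses a bare threading.RLock rather than the broker's `_order_update_lock`, so the names are illustrative only:

```python
import threading

lock = threading.RLock()
results = {}

def contender(ready, done):
    ready.wait()
    # A different thread cannot acquire while the owner holds the lock.
    results["blocked"] = not lock.acquire(blocking=False)
    done.set()

ready = threading.Event()
done = threading.Event()
t = threading.Thread(target=contender, args=(ready, done))
t.start()

with lock:
    # The same thread may re-enter an RLock freely.
    results["reentrant"] = lock.acquire(blocking=False)
    if results["reentrant"]:
        lock.release()
    ready.set()   # let the contender try while we still hold the lock
    done.wait()
t.join()

# After release, the contender-style acquire succeeds.
results["after_release"] = lock.acquire(blocking=False)
if results["after_release"]:
    lock.release()

print(results["reentrant"], results["blocked"], results["after_release"])  # -> True True True
```

The events make the interleaving deterministic: the contender only tries to acquire while the main thread provably holds the lock.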
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_projectx_lifecycle_unit.py` around lines 746 - 807, The test test_detect_and_dispatch_holds_lock_during_dispatch currently only demonstrates RLock re-entrancy via slow_dispatch and does not verify a true second-thread block; update the test to either (A) actually spawn a separate contender thread that waits until lock_was_held is set and then attempts to acquire projectx_broker._order_update_lock with blocking=False (assert it fails while the first thread holds the lock, then succeeds after release), using second_thread_blocked Event to signal the blocked state, or (B) if you prefer to keep the current logic, rename the test to reflect that it verifies RLock re-entrancy rather than cross-thread blocking; reference test_detect_and_dispatch_holds_lock_during_dispatch, _dispatch_status_change, _order_update_lock, and _detect_and_dispatch_order_changes when making the change.

lumibot/brokers/projectx.py (1)
756-758: Consider applying order_lookback_days in _pull_broker_all_orders too.

This path still scopes to "today" only; reusing self.order_lookback_days would align retrieval semantics with _get_orders_at_broker and reduce missed orders after restarts.

♻️ Suggested tweak

```diff
-        # Get all orders from today to catch any recent activity
+        # Get all orders from configured lookback window
         end_date = datetime.now(LUMIBOT_DEFAULT_PYTZ).replace(hour=23, minute=59, second=59)
-        start_date = end_date.replace(hour=0, minute=0, second=0)
+        start_date = end_date.replace(hour=0, minute=0, second=0) - timedelta(days=self.order_lookback_days)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lumibot/brokers/projectx.py` around lines 756 - 758, _pull_broker_all_orders currently limits fetched orders to "today" by computing start_date/end_date from now; change it to honor self.order_lookback_days by setting start_date to (now - timedelta(days=self.order_lookback_days)) at 00:00 in LUMIBOT_DEFAULT_PYTZ and end_date to now (or today 23:59:59) so the method uses the same lookback window as _get_orders_at_broker; update any uses of end_date/start_date accordingly and import/use datetime.timedelta where needed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lumibot/brokers/broker.py`:
- Around line 988-1004: _clean_order_trackers currently performs removals but
returns None; change it to return the surviving tracked order instance so
callers (e.g., StrategyExecutor.sync_broker) can branch correctly. After doing
the remove operations, lookup the tracker collections for
broker_order.identifier and return the object that remains (prefer checking
_partially_filled_orders, then _new_orders, then _unprocessed_orders) so the
method yields the surviving tracked order instance (or None if nothing remains).
- Around line 1010-1014: The current early returns when existing_order is found
in self._new_orders (or found in another tracker) leave stale entries in
self._unprocessed_orders; before returning, remove any matching tracker from
self._unprocessed_orders (e.g., call
self._unprocessed_orders.discard(existing_order) or equivalent) so duplicates
are cleaned up; update the branches that currently "return existing_order" to
first cleanup self._unprocessed_orders for the existing_order and then return,
referencing the symbols existing_order, self._new_orders and
self._unprocessed_orders to locate the logic to change.
In `@lumibot/brokers/projectx.py`:
- Line 1256: The bracket metadata lookup is using order.id when meta is stored
by order.identifier; update calls to restore_bracket_meta_if_needed and any
bracket-meta detection logic to key by getattr(cached_order, 'identifier', None)
/ getattr(new_order, 'identifier', None) (not 'id') and ensure _bracket_meta
lookups use the identifier key; specifically replace the dict key construction
{getattr(cached_order,'id',None): cached_order} and any direct uses of order.id
in bracket-meta code paths with the corresponding identifier-based key to
restore/detect bracket parents correctly.
- Around line 1437-1440: The check in the bracket child submission block uses
submitted.id (which was renamed) causing false failure logs; update the
conditional to check getattr(submitted, 'identifier', None) (or simply submitted
and getattr(submitted, 'identifier', None)) instead of 'id', and update the
success branch to reference submitted.identifier consistently (symbols:
submitted, identifier, parent.identifier, kind, price, self.logger) so the
error/success logging correctly reflects the new identifier field.
- Around line 1056-1058: The code currently treats an unmapped
broker_position["type"] as not "long" and thus negates size, inverting exposure;
change the logic around POSITION_TYPE_MAPPING/broker_position to handle unknown
types explicitly: retrieve direction =
self.POSITION_TYPE_MAPPING.get(broker_position.get("type")) and then set
quantity only for known values (e.g., if direction == "long": quantity =
broker_position.get("size", 0); elif direction == "short": quantity =
-broker_position.get("size", 0); else: quantity = 0 or raise/log an error).
Update any downstream code that assumes a nonzero quantity accordingly and add a
warning log mentioning broker_position and its type when mapping returns None.
- Around line 1574-1589: The trade-fill path sets cached_order fields then calls
_dispatch_status_change(cached_order, cached_order) which passes the same object
as both old/new and lets the equivalence guard drop the event; fix by importing
copy.copy and pass a shallow copy as the "old" state (e.g. old =
copy(cached_order); then call self._dispatch_status_change(old, cached_order))
so the dispatcher sees a changed object and emits the FILLED_ORDER event,
ensuring to add "from copy import copy" to the module imports.
In `@lumibot/strategies/strategy.py`:
- Around line 1419-1423: The get_orders filtering wrongly treats an empty list
as falsy and returns all orders; change the conditional in the method that uses
identifiers (the block using self.broker.get_tracked_orders(self.name) and the
variable identifiers) to check for None explicitly (e.g., if identifiers is not
None) so that identifiers=[] produces an empty filtered_orders result while
identifiers=None still returns all_orders; update the return path accordingly in
the method (get_orders / the block shown) to ensure empty iterable filters yield
no matches.
In `@lumibot/tools/projectx_helpers.py`:
- Line 719: The inline comment for self._cache_ttl is incorrect: it sets
self._cache_ttl = 5 but comments "30 seconds cache"; update the comment to match
the actual value (e.g., "5 seconds cache") or change the numeric value to 30 if
the intended TTL was 30 seconds—make the edit in the assignment where
self._cache_ttl is defined so the comment and value are consistent.
- Around line 528-551: The order_modify method currently sends a modify request
even when no modifiable fields are provided; in order_modify(account_id,
order_id, size=None, limit_price=None, stop_price=None, trail_price=None) add a
pre-check that if all of size, limit_price, stop_price, and trail_price are None
then return a clear failure dict (e.g., {"success": False, "error": "No
modification fields provided"}) or raise a ValueError, avoiding the HTTP call;
place this guard before building the payload and before the requests.post call
so the function fails fast with a descriptive message.
In `@lumibot/traders/trader.py`:
- Around line 323-325: The inline comment next to set_console_log_level("ERROR")
is inconsistent with the code: it states INFO logs are allowed but the call
forces ERROR; update the comment to accurately reflect behavior (i.e., that when
quiet_logs is True or BACKTESTING_QUIET_LOGS applies we force console to ERROR
while file logs remain at INFO), and if the intended behavior was to allow INFO
when quiet_logs is False, change the code to call set_console_log_level("INFO")
in that branch or adjust the branch condition; reference set_console_log_level,
quiet_logs, and BACKTESTING_QUIET_LOGS to locate the logic and ensure the
comment documents the invariant clearly.
---
Outside diff comments:
In `@lumibot/strategies/strategy_executor.py`:
- Around line 95-102: The change introduces two new environment-driven toggles
(BACKTESTING_LOG_ITERATION_HEARTBEAT and BACKTESTING_CAPTURE_LOCALS) inside
StrategyExecutor by setting self._log_iteration_heartbeat and
self._capture_locals based on os.environ; instead, expose these as explicit
config parameters or attributes on the broker/strategy and remove direct
os.environ usage: add optional constructor parameters or read from an existing
broker/strategy config (e.g., broker.IS_BACKTESTING_BROKER) to set
self._log_iteration_heartbeat and self._capture_locals, and update any
callers/tests to pass the desired flags rather than relying on new env vars.
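A minimal sketch of the suggested direction, using a simplified hypothetical constructor (the real StrategyExecutor takes different arguments); the point is that the toggles arrive as explicit parameters rather than being read from os.environ:

```python
class StrategyExecutor:
    """Sketch: backtest toggles passed explicitly instead of read from os.environ."""

    def __init__(self, strategy, log_iteration_heartbeat=False, capture_locals=False):
        self.strategy = strategy
        # Explicit parameters make the behavior visible at the call site
        # and trivially testable without mutating the process environment.
        self._log_iteration_heartbeat = bool(log_iteration_heartbeat)
        self._capture_locals = bool(capture_locals)

executor = StrategyExecutor(strategy=object(), log_iteration_heartbeat=True)
print(executor._log_iteration_heartbeat, executor._capture_locals)  # -> True False
```

Callers and tests then pass the flags directly instead of setting env vars before import.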
---
Nitpick comments:
In `@lumibot/backtesting/backtesting_broker.py`:
- Around line 2450-2453: Replace the eager f-string interpolation in the
hot-path debug call with parameterized logging so string formatting is deferred
when DEBUG is off: change the self.logger.debug call that currently uses
f"[DIAG] Order remained open for {display_symbol} ({detail})
id={order_identifier} at {self.datetime}" to use a format string and arguments
(e.g., self.logger.debug("[DIAG] Order remained open for %s (%s) id=%s at %s",
display_symbol, detail, order_identifier, self.datetime)), keeping the same
message and the same variables (display_symbol, detail, order_identifier,
self.datetime).
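The cost difference is easy to verify in isolation; this standalone snippet counts how often `__str__` runs under each logging style:

```python
import logging

logging.basicConfig(level=logging.INFO)  # DEBUG records are suppressed
logger = logging.getLogger("diag")

class Expensive:
    """Counts how many times it is rendered to a string."""
    def __init__(self):
        self.calls = 0
    def __str__(self):
        self.calls += 1
        return "expensive"

obj = Expensive()

# Eager: the f-string is built before logger.debug is even called.
logger.debug(f"order state: {obj}")
eager_calls = obj.calls

# Lazy: %-style arguments are only formatted if the record is emitted.
obj.calls = 0
logger.debug("order state: %s", obj)
lazy_calls = obj.calls

print(eager_calls, lazy_calls)  # -> 1 0
```

With parameterized logging, the formatting work is skipped entirely when the level filters the record out.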
In `@lumibot/brokers/projectx.py`:
- Around line 756-758: _pull_broker_all_orders currently limits fetched orders
to "today" by computing start_date/end_date from now; change it to honor
self.order_lookback_days by setting start_date to (now -
timedelta(days=self.order_lookback_days)) at 00:00 in LUMIBOT_DEFAULT_PYTZ and
end_date to now (or today 23:59:59) so the method uses the same lookback window
as _get_orders_at_broker; update any uses of end_date/start_date accordingly and
import/use datetime.timedelta where needed.
In `@tests/test_projectx_lifecycle_unit.py`:
- Around line 746-807: The test
test_detect_and_dispatch_holds_lock_during_dispatch currently only demonstrates
RLock re-entrancy via slow_dispatch and does not verify a true second-thread
block; update the test to either (A) actually spawn a separate contender thread
that waits until lock_was_held is set and then attempts to acquire
projectx_broker._order_update_lock with blocking=False (assert it fails while
the first thread holds the lock, then succeeds after release), using
second_thread_blocked Event to signal the blocked state, or (B) if you prefer to
keep the current logic, rename the test to reflect that it verifies RLock
re-entrancy rather than cross-thread blocking; reference
test_detect_and_dispatch_holds_lock_during_dispatch, _dispatch_status_change,
_order_update_lock, and _detect_and_dispatch_order_changes when making the
change.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- lumibot/backtesting/backtesting_broker.py
- lumibot/brokers/broker.py
- lumibot/brokers/projectx.py
- lumibot/brokers/schwab.py
- lumibot/entities/order.py
- lumibot/strategies/strategy.py
- lumibot/strategies/strategy_executor.py
- lumibot/tools/lumibot_logger.py
- lumibot/tools/projectx_helpers.py
- lumibot/traders/trader.py
- tests/test_projectx_lifecycle_unit.py
```python
def _clean_order_trackers(self, broker_order):
    """
    ProjectX has a race condition where an order is filled and added to new trackers at the same time and the
    'new' queue never gets cleared. This function is used to clean the order trackers of any duplicated orders.
    Keep orders that are completed (i.e. filled, canceled, error) and remove any duplicates from the 'new' and
    'unprocessed' trackers.
    """
    if not broker_order.is_active():
        self._new_orders.remove(broker_order.identifier, key="identifier")
        self._unprocessed_orders.remove(broker_order.identifier, key="identifier")
        self._partially_filled_orders.remove(broker_order.identifier, key="identifier")
    elif broker_order in self._partially_filled_orders:
        self._new_orders.remove(broker_order.identifier, key="identifier")
        self._unprocessed_orders.remove(broker_order.identifier, key="identifier")
    elif broker_order in self._new_orders:
        self._unprocessed_orders.remove(broker_order.identifier, key="identifier")
```
_clean_order_trackers should return the surviving tracked order (it currently returns None).
StrategyExecutor.sync_broker assigns this method result to order_lumi and immediately branches on it. Returning None here causes that branch to miss the cleaned tracked order path.
Suggested fix
```diff
 def _clean_order_trackers(self, broker_order):
 @@
-    elif broker_order in self._new_orders:
-        self._unprocessed_orders.remove(broker_order.identifier, key="identifier")
+    elif broker_order in self._new_orders:
+        self._unprocessed_orders.remove(broker_order.identifier, key="identifier")
+
+    return self.get_tracked_order(broker_order.identifier, use_placeholders=True)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lumibot/brokers/broker.py` around lines 988 - 1004, _clean_order_trackers
currently performs removals but returns None; change it to return the surviving
tracked order instance so callers (e.g., StrategyExecutor.sync_broker) can
branch correctly. After doing the remove operations, lookup the tracker
collections for broker_order.identifier and return the object that remains
(prefer checking _partially_filled_orders, then _new_orders, then
_unprocessed_orders) so the method yields the surviving tracked order instance
(or None if nothing remains).
```python
# Check if this order already exists in self._new_orders based on the identifier - Do nothing
if existing_order in self._new_orders:
    return existing_order
if existing_order not in self._unprocessed_orders:
    return existing_order  # Exists in another tracker, return it without adding to prevent duplicates
```
Early return leaves stale _unprocessed_orders entries in duplicate scenarios.
When existing_order is already in _new_orders, this branch returns immediately and can leave a duplicate/stale unprocessed tracker entry behind.
Suggested fix
```diff
-        if existing_order in self._new_orders:
-            return existing_order
+        if existing_order in self._new_orders:
+            self._unprocessed_orders.remove(existing_order.identifier, key="identifier")
+            return existing_order
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lumibot/brokers/broker.py` around lines 1010 - 1014, The current early
returns when existing_order is found in self._new_orders (or found in another
tracker) leave stale entries in self._unprocessed_orders; before returning,
remove any matching tracker from self._unprocessed_orders (e.g., call
self._unprocessed_orders.discard(existing_order) or equivalent) so duplicates
are cleaned up; update the branches that currently "return existing_order" to
first cleanup self._unprocessed_orders for the existing_order and then return,
referencing the symbols existing_order, self._new_orders and
self._unprocessed_orders to locate the logic to change.
```python
direction = self.POSITION_TYPE_MAPPING.get(broker_position.get("type"), "")
quantity = broker_position.get("size", 0) if direction == "long" else -broker_position.get("size", 0)
```
Unknown position type is silently treated as short.
If broker_position["type"] is missing/unmapped, Line 1057 makes quantity negative by default. That can invert exposure and PnL interpretation.
🧭 Proposed fix
```diff
-        direction = self.POSITION_TYPE_MAPPING.get(broker_position.get("type"), "")
-        quantity = broker_position.get("size", 0) if direction == "long" else -broker_position.get("size", 0)
+        position_type = broker_position.get("type")
+        direction = self.POSITION_TYPE_MAPPING.get(position_type)
+        if direction is None:
+            self.logger.warning(f"Unknown position type '{position_type}' for contract {broker_position.get('contractId')}")
+            return None
+        size = broker_position.get("size", 0)
+        quantity = size if direction == "long" else -size
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lumibot/brokers/projectx.py` around lines 1056 - 1058, The code currently
treats an unmapped broker_position["type"] as not "long" and thus negates size,
inverting exposure; change the logic around
POSITION_TYPE_MAPPING/broker_position to handle unknown types explicitly:
retrieve direction = self.POSITION_TYPE_MAPPING.get(broker_position.get("type"))
and then set quantity only for known values (e.g., if direction == "long":
quantity = broker_position.get("size", 0); elif direction == "short": quantity =
-broker_position.get("size", 0); else: quantity = 0 or raise/log an error).
Update any downstream code that assumes a nonzero quantity accordingly and add a
warning log mentioning broker_position and its type when mapping returns None.
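A standalone sketch of the explicit-direction handling, with hypothetical numeric type codes (the real POSITION_TYPE_MAPPING keys may differ):

```python
POSITION_TYPE_MAPPING = {1: "long", 2: "short"}  # hypothetical broker codes

def signed_quantity(broker_position):
    """Map a broker position to a signed quantity, refusing unknown types."""
    direction = POSITION_TYPE_MAPPING.get(broker_position.get("type"))
    size = broker_position.get("size", 0)
    if direction == "long":
        return size
    if direction == "short":
        return -size
    # Unknown or missing type: surface it instead of silently going short.
    return None

print(signed_quantity({"type": 1, "size": 3}))   # -> 3
print(signed_quantity({"type": 2, "size": 3}))   # -> -3
print(signed_quantity({"type": 99, "size": 3}))  # -> None
```

The fallthrough returns None (a caller would log a warning there), so an unmapped type can never be misread as a short position.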
```python
# Ensure bracket metadata is preserved
if getattr(cached_order, '_is_bracket_parent', False) and not getattr(new_order, '_is_bracket_parent', False):
    new_order._is_bracket_parent = True
restore_bracket_meta_if_needed(new_order, {getattr(cached_order,'id',None): cached_order} if cached_order else {}, getattr(self, '_bracket_meta', {}), self.logger)
```
Bracket metadata lookup is still keyed by id in critical paths.
Line 1256 and Line 1352 use order.id for bracket-meta restore/detection, while meta is stored by order.identifier. This can miss bracket parent detection and prevent child spawning in some fill flows.
🔧 Proposed fix
```diff
-    restore_bracket_meta_if_needed(new_order, {getattr(cached_order,'id',None): cached_order} if cached_order else {}, getattr(self, '_bracket_meta', {}), self.logger)
+    restore_bracket_meta_if_needed(new_order, {getattr(cached_order,'identifier',None): cached_order} if cached_order else {}, getattr(self, '_bracket_meta', {}), self.logger)

 def _is_bracket_parent(self, order: Order) -> bool:
     if getattr(order, '_is_bracket_parent', False):
         return True
-    return bool(getattr(self, '_bracket_meta', {}).get(getattr(order, 'id', None)))
+    return bool(getattr(self, '_bracket_meta', {}).get(getattr(order, 'identifier', None)))
```

Also applies to: 1349-1353
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lumibot/brokers/projectx.py` at line 1256, The bracket metadata lookup is
using order.id when meta is stored by order.identifier; update calls to
restore_bracket_meta_if_needed and any bracket-meta detection logic to key by
getattr(cached_order, 'identifier', None) / getattr(new_order, 'identifier',
None) (not 'id') and ensure _bracket_meta lookups use the identifier key;
specifically replace the dict key construction {getattr(cached_order,'id',None):
cached_order} and any direct uses of order.id in bracket-meta code paths with
the corresponding identifier-based key to restore/detect bracket parents
correctly.
```diff
 if not submitted or not getattr(submitted, 'id', None):
-    self.logger.error(f"Bracket child submission failed (kind={kind}) for parent {parent.id}")
+    self.logger.error(f"Bracket child submission failed (kind={kind}) for parent {parent.identifier}")
 else:
-    self.logger.debug(f"Bracket child submitted: parent={parent.id} kind={kind} id={submitted.id} price={price}")
+    self.logger.debug(f"Bracket child submitted: parent={parent.identifier} kind={kind} id={submitted.identifier} price={price}")
```
Bracket child submit check still uses id instead of identifier.
Line 1437 checks submitted.id, which can falsely log submission failure after the identifier refactor.
🛠️ Proposed fix
```diff
-    if not submitted or not getattr(submitted, 'id', None):
+    if not submitted or not getattr(submitted, 'identifier', None):
         self.logger.error(f"Bracket child submission failed (kind={kind}) for parent {parent.identifier}")
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
if not submitted or not getattr(submitted, 'identifier', None):
    self.logger.error(f"Bracket child submission failed (kind={kind}) for parent {parent.identifier}")
else:
    self.logger.debug(f"Bracket child submitted: parent={parent.identifier} kind={kind} id={submitted.identifier} price={price}")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lumibot/brokers/projectx.py` around lines 1437 - 1440, The check in the
bracket child submission block uses submitted.id (which was renamed) causing
false failure logs; update the conditional to check getattr(submitted,
'identifier', None) (or simply submitted and getattr(submitted, 'identifier',
None)) instead of 'id', and update the success branch to reference
submitted.identifier consistently (symbols: submitted, identifier,
parent.identifier, kind, price, self.logger) so the error/success logging
correctly reflects the new identifier field.
```python
if fill_price and fill_size:
    # Idempotency guard: skip if already marked filled to avoid
    # duplicate FILLED_ORDER events from concurrent callbacks.
    if cached_order.status.lower() in ("fill", "filled"):
        self.logger.debug(
            f"Trade fill skipped for order {order_id}: already filled"
        )
    else:
        # Mark order as filled based on trade data
        cached_order.status = "filled"
        cached_order.filled_quantity = fill_size
        cached_order.avg_fill_price = fill_price

        # Dispatch fill event - pass same order twice since it's the updated version
        self._dispatch_status_change(cached_order, cached_order)
```
Trade-fill path currently suppresses FILLED_ORDER dispatch.
On Line 1588, _dispatch_status_change(cached_order, cached_order) passes the same object as both old/new state, so the equivalence guard in _dispatch_status_change can short-circuit and drop trade-driven fill events.
🐛 Proposed fix
```diff
-                # Mark order as filled based on trade data
-                cached_order.status = "filled"
-                cached_order.filled_quantity = fill_size
-                cached_order.avg_fill_price = fill_price
-
-                # Dispatch fill event - pass same order twice since it's the updated version
-                self._dispatch_status_change(cached_order, cached_order)
+                previous_order = copy(cached_order)
+                cached_order.status = "filled"
+                cached_order.filled_quantity = fill_size
+                cached_order.avg_fill_price = fill_price
+                self._dispatch_status_change(previous_order, cached_order)
```

(Add `from copy import copy` to the module imports.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lumibot/brokers/projectx.py` around lines 1574 - 1589, The trade-fill path
sets cached_order fields then calls _dispatch_status_change(cached_order,
cached_order) which passes the same object as both old/new and lets the
equivalence guard drop the event; fix by importing copy.copy and pass a shallow
copy as the "old" state (e.g. old = copy(cached_order); then call
self._dispatch_status_change(old, cached_order)) so the dispatcher sees a
changed object and emits the FILLED_ORDER event, ensuring to add "from copy
import copy" to the module imports.
```python
all_orders = self.broker.get_tracked_orders(self.name)
if identifiers:
    filtered_orders = [order for order in all_orders if order.identifier in identifiers]
    return filtered_orders
return all_orders
```
Fix empty-filter behavior in get_orders.
At Line 1420, using if identifiers: makes identifiers=[] return all orders instead of []. An explicit empty filter should return no matches.
Proposed fix
```diff
     all_orders = self.broker.get_tracked_orders(self.name)
-    if identifiers:
-        filtered_orders = [order for order in all_orders if order.identifier in identifiers]
-        return filtered_orders
-    return all_orders
+    if identifiers is None:
+        return all_orders
+    identifier_set = set(identifiers)
+    return [order for order in all_orders if order.identifier in identifier_set]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lumibot/strategies/strategy.py` around lines 1419 - 1423, The get_orders
filtering wrongly treats an empty list as falsy and returns all orders; change
the conditional in the method that uses identifiers (the block using
self.broker.get_tracked_orders(self.name) and the variable identifiers) to check
for None explicitly (e.g., if identifiers is not None) so that identifiers=[]
produces an empty filtered_orders result while identifiers=None still returns
all_orders; update the return path accordingly in the method (get_orders / the
block shown) to ensure empty iterable filters yield no matches.
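The `None`-vs-empty-list distinction can be verified with a small standalone sketch (plain strings stand in for order identifiers; this is not the actual `Strategy.get_orders` implementation):

```python
def get_orders(all_orders, identifiers=None):
    """None means 'no filter'; an explicit empty list means 'match nothing'."""
    if identifiers is None:
        return all_orders
    identifier_set = set(identifiers)
    return [order_id for order_id in all_orders if order_id in identifier_set]

orders = ["a1", "b2", "c3"]
unfiltered = get_orders(orders)        # no filter -> everything
empty_filter = get_orders(orders, [])  # explicit empty filter -> nothing
one_match = get_orders(orders, ["b2"])  # normal filtering
```

The buggy `if identifiers:` form cannot distinguish the first two cases, which is exactly the behavior the review flags.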
```python
def order_modify(self, account_id: int, order_id: int, size: int = None, limit_price: float = None,
                 stop_price: float = None, trail_price: float = None) -> dict:
    """Modify an existing order"""
    url = f"{self.base_url}api/order/modify"
    payload = {
        "accountId": account_id,
        "orderId": order_id,
        "size": size,
        "limitPrice": limit_price,
        "stopPrice": stop_price,
        "trailPrice": trail_price,
    }
    # Remove None values to avoid API model binding issues
    payload = {k: v for k, v in payload.items() if v is not None}
    try:
        response = requests.post(url, headers=self.headers, json=payload, timeout=10)
        try:
            data = response.json()
        except ValueError:
            data = {"success": False, "error": f"Non-JSON response {response.status_code}"}
        return data
    except requests.exceptions.RequestException as e:
        self.logger.error(f"Error modifying order: {e}")
        return {"success": False, "error": str(e)}
```
Guard against no-op order modifications before calling the API.
If all of size, limit_price, stop_price, and trail_price are None, this sends an avoidable invalid modify request. Fail fast locally with a clear error.
🛡️ Proposed guard

```diff
 def order_modify(self, account_id: int, order_id: int, size: int = None, limit_price: float = None,
                  stop_price: float = None, trail_price: float = None) -> dict:
     """Modify an existing order"""
     url = f"{self.base_url}api/order/modify"
+    if all(v is None for v in (size, limit_price, stop_price, trail_price)):
+        return {
+            "success": False,
+            "error": "At least one of size, limit_price, stop_price, or trail_price must be provided",
+        }
     payload = {
         "accountId": account_id,
         "orderId": order_id,
         "size": size,
         "limitPrice": limit_price,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lumibot/tools/projectx_helpers.py` around lines 528 - 551, The order_modify
method currently sends a modify request even when no modifiable fields are
provided; in order_modify(account_id, order_id, size=None, limit_price=None,
stop_price=None, trail_price=None) add a pre-check that if all of size,
limit_price, stop_price, and trail_price are None then return a clear failure
dict (e.g., {"success": False, "error": "No modification fields provided"}) or
raise a ValueError, avoiding the HTTP call; place this guard before building the
payload and before the requests.post call so the function fails fast with a
descriptive message.
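A standalone version of the proposed guard, with the HTTP call stubbed out so only the fail-fast behavior is shown (a sketch, not the real `projectx_helpers` client):

```python
def order_modify(account_id, order_id, size=None, limit_price=None,
                 stop_price=None, trail_price=None):
    # Fail fast locally: a modify request with no modifiable fields is a
    # guaranteed no-op and would only produce an API-side binding error.
    if all(v is None for v in (size, limit_price, stop_price, trail_price)):
        return {
            "success": False,
            "error": "At least one of size, limit_price, stop_price, or trail_price must be provided",
        }
    payload = {
        "accountId": account_id,
        "orderId": order_id,
        "size": size,
        "limitPrice": limit_price,
        "stopPrice": stop_price,
        "trailPrice": trail_price,
    }
    # Drop None values to avoid API model binding issues; the real client
    # would POST this payload, here we simply return it for inspection.
    payload = {k: v for k, v in payload.items() if v is not None}
    return {"success": True, "payload": payload}

noop = order_modify(1, 42)
ok = order_modify(1, 42, limit_price=101.5)
```

The guard sits before payload construction, so no network round-trip (or request-building work) happens for a no-op call.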
```diff
@@ -691,7 +716,7 @@ def __init__(self, config: dict):
     self._positions_cache_time = 0
     self._orders_cache = None
     self._orders_cache_time = 0
-    self._cache_ttl = 30  # 30 seconds cache
+    self._cache_ttl = 5  # 30 seconds cache
```
Update the stale TTL comment to match the actual value.
The code uses 5, but the comment says 30 seconds.
🧹 Proposed fix
- self._cache_ttl = 5 # 30 seconds cache
+ self._cache_ttl = 5 # 5 seconds cache📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| self._cache_ttl = 5 # 30 seconds cache | |
| self._cache_ttl = 5 # 5 seconds cache |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lumibot/tools/projectx_helpers.py` at line 719, The inline comment for
self._cache_ttl is incorrect: it sets self._cache_ttl = 5 but comments "30
seconds cache"; update the comment to match the actual value (e.g., "5 seconds
cache") or change the numeric value to 30 if the intended TTL was 30
seconds—make the edit in the assignment where self._cache_ttl is defined so the
comment and value are consistent.
```python
    set_console_log_level("ERROR")  # Set console log level to ERROR while keeping file logs at INFO
    # When quiet_logs=False, allow INFO logs to console (respects BACKTESTING_QUIET_LOGS)
else:
```
Align the inline comment with the actual console log behavior.
Line 323 forces console level to ERROR, but Line 324 says INFO logs are allowed. Please make the comment consistent with the code path.
🛠️ Proposed comment fix

```diff
-set_console_log_level("ERROR")  # Set console log level to ERROR while keeping file logs at INFO
-# When quiet_logs=False, allow INFO logs to console (respects BACKTESTING_QUIET_LOGS)
+set_console_log_level("ERROR")  # Keep backtest console output at ERROR while preserving INFO in file logs
```

As per coding guidelines: "Document and test behavioral changes: update relevant docs in docs/ and add regression tests in the same commit; add comments explaining why/invariants for non-obvious logic".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lumibot/traders/trader.py` around lines 323 - 325, The inline comment next to
set_console_log_level("ERROR") is inconsistent with the code: it states INFO
logs are allowed but the call forces ERROR; update the comment to accurately
reflect behavior (i.e., that when quiet_logs is True or BACKTESTING_QUIET_LOGS
applies we force console to ERROR while file logs remain at INFO), and if the
intended behavior was to allow INFO when quiet_logs is False, change the code to
call set_console_log_level("INFO") in that branch or adjust the branch
condition; reference set_console_log_level, quiet_logs, and
BACKTESTING_QUIET_LOGS to locate the logic and ensure the comment documents the
invariant clearly.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lumibot/tools/lumibot_logger.py (1)
657-676: ⚠️ Potential issue | 🟠 Major

Backtest file logs are being filtered at the logger level.

When `is_backtest=True`, lines 661-672 can force the `lumibot` logger to `ERROR` (default quiet path). Then lines 967-969 set the file handler level to `INFO`, but those records are still dropped by the logger before reaching the file handler. This breaks the intended "console ERROR, file INFO" behavior.

💡 Suggested fix

```diff
-if quiet_logs_enabled:
-    # Quiet mode: only ERROR+ messages to console and file
-    console_level = logging.ERROR
-    effective_log_level = logging.ERROR
+if quiet_logs_enabled:
+    # Quiet mode should limit console verbosity only; keep root level at configured
+    # value so file handlers can still capture INFO/DEBUG when requested.
+    console_level = logging.ERROR
+    effective_log_level = log_level
```

Also applies to: 959-969
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lumibot/tools/lumibot_logger.py` around lines 657 - 676, The logger-level assignment during backtesting can be too high (effective_log_level set to ERROR) which drops INFO records before file handlers see them; update the logic for computing effective_log_level (used when configuring the "lumibot" logger) to be the minimum (most permissive) level required by any handler — e.g., compare console_level and the file handler's intended level (file_handler level, typically INFO) and set effective_log_level to the lower numeric value (or logging.NOTSET) so file INFO records are not filtered out; ensure this change touches the backtesting branch that sets console_level / effective_log_level and the code that creates/sets the file_handler level so the logger level does not prevent file logs from being emitted.
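The logger-vs-handler level interaction behind this finding can be demonstrated in isolation with the stdlib `logging` module (a standalone sketch, unrelated to lumibot's actual handler classes):

```python
import logging

records = []

class ListHandler(logging.Handler):
    """Collects formatted messages so the effect of each level is visible."""
    def emit(self, record):
        records.append(record.getMessage())

logger = logging.getLogger("quiet_logs_demo")
logger.propagate = False
console = ListHandler(level=logging.ERROR)   # stand-in for the console handler
file_like = ListHandler(level=logging.INFO)  # stand-in for the file handler
logger.addHandler(console)
logger.addHandler(file_like)

# If the logger itself is set to ERROR, INFO records are dropped *before*
# any handler sees them -- the file handler's INFO level never matters.
logger.setLevel(logging.ERROR)
logger.info("dropped before handlers")

# Setting the logger to the most permissive handler level lets the file
# handler capture INFO while the console handler still filters it out.
logger.setLevel(logging.INFO)
logger.info("captured by the file handler")
```

This is why the review suggests keeping `effective_log_level` at the configured level (or the minimum of all handler levels) and pushing the quiet-mode restriction into the console handler only.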
♻️ Duplicate comments (6)
lumibot/brokers/broker.py (1)
1233-1237: ⚠️ Potential issue | 🟠 Major

Early return still leaves stale `_unprocessed_orders` entries.

When an order already exists in `_new_orders`, this branch returns without removing a same-identifier stale entry from `_unprocessed_orders`, so duplicate active tracking can persist.

Suggested fix

```diff
 if existing_order:
     # Check if this order already exists in self._new_orders based on the identifier - Do nothing
     if existing_order in self._new_orders:
+        self._unprocessed_orders.remove(existing_order.identifier, key="identifier")
         return existing_order
     if existing_order not in self._unprocessed_orders:
         return existing_order  # Exists in another tracker, return it without adding to prevent duplicates
     else:
         order = existing_order  # Use the existing order object from unprocessed and update status
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@lumibot/brokers/broker.py` around lines 1233-1237, When the code returns early because an order is already present in self._new_orders, it must also remove any stale entry with the same identifier from self._unprocessed_orders to avoid duplicate tracking; update the branch that currently does "if existing_order in self._new_orders: return existing_order" to first check and remove the matching stale item from self._unprocessed_orders (e.g., by comparing identifiers or using the same membership test) before returning existing_order, ensuring you reference the existing_order, self._new_orders, and self._unprocessed_orders variables in the fix.

lumibot/brokers/projectx.py (4)
1442-1445: ⚠️ Potential issue | 🟡 Minor

Bracket child submit validation still checks `submitted.id`.

Line 1442 checks `submitted.id`, which can falsely log failures after the identifier-first refactor.

🩹 Suggested fix

```diff
-if not submitted or not getattr(submitted, 'id', None):
+if not submitted or not getattr(submitted, 'identifier', None):
     self.logger.error(f"Bracket child submission failed (kind={kind}) for parent {parent.identifier}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lumibot/brokers/projectx.py` around lines 1442 - 1445, The validation currently checks submitted.id which is outdated after the identifier-first refactor; change the check to use submitted.identifier (e.g., replace getattr(submitted, 'id', None) with getattr(submitted, 'identifier', None)) so failures aren't falsely logged, and adjust the logging calls around self.logger.error/self.logger.debug to reference submitted.identifier when present (fall back safely if missing).
1261: ⚠️ Potential issue | 🟠 Major

Bracket metadata lookup is still keyed by `id` in critical paths.

Line 1261 and line 1357 still use `order.id` keys while the refactor standardizes tracking on `order.identifier`. This can miss bracket-parent restoration/detection.

🔧 Suggested fix

```diff
-restore_bracket_meta_if_needed(new_order, {getattr(cached_order,'id',None): cached_order} if cached_order else {}, getattr(self, '_bracket_meta', {}), self.logger)
+restore_bracket_meta_if_needed(new_order, {getattr(cached_order,'identifier',None): cached_order} if cached_order else {}, getattr(self, '_bracket_meta', {}), self.logger)
```

```diff
-return bool(getattr(self, '_bracket_meta', {}).get(getattr(order, 'id', None)))
+return bool(getattr(self, '_bracket_meta', {}).get(getattr(order, 'identifier', None)))
```

Also applies to: 1355-1358
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lumibot/brokers/projectx.py` at line 1261, The bracket metadata lookup still uses order.id as the key causing missed matches after the refactor; update calls to restore_bracket_meta_if_needed (and any other places building lookup dicts) to key the cached_order by getattr(cached_order, 'identifier', None) instead of getattr(cached_order, 'id', None), and ensure the existing _bracket_meta reference uses the same identifier-based keys so restore_bracket_meta_if_needed(new_order, {getattr(cached_order,'identifier',None): cached_order} if cached_order else {}, getattr(self, '_bracket_meta', {}), self.logger) consistently across the code paths (also fix similar usage around where cached_order/new_order are passed).
1579-1593: ⚠️ Potential issue | 🔴 Critical

FILLED event dispatch can be skipped by passing the same object as old/new state.

At lines 1592-1593, `_dispatch_status_change(cached_order, cached_order)` can be short-circuited by status-equivalence checks, which drops trade-driven fill events.

🐛 Suggested fix

```diff
+from copy import copy
```

```diff
-# Mark order as filled based on trade data
+previous_order = copy(cached_order)
+# Mark order as filled based on trade data
 cached_order.status = "filled"
 cached_order.filled_quantity = fill_size
 cached_order.avg_fill_price = fill_price
-# Dispatch fill event - pass same order twice since it's the updated version
-self._dispatch_status_change(cached_order, cached_order)
+self._dispatch_status_change(previous_order, cached_order)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lumibot/brokers/projectx.py` around lines 1579 - 1593, The dispatch is being called with the same object for old/new state which can be short-circuited by equality/status checks in _dispatch_status_change; before mutating cached_order capture its previous state (e.g., prev_order = copy.copy(cached_order) or construct a shallow copy of the relevant fields), then update cached_order.filled_quantity/status/avg_fill_price and call self._dispatch_status_change(prev_order, cached_order) so the old and new objects are distinct and the FILLED event is dispatched.
1061-1062: ⚠️ Potential issue | 🟠 Major

Unknown position types are still treated as short exposure.

If `broker_position["type"]` is missing/unmapped, line 1062 negates size by default. That can invert position direction and downstream PnL.

🛠️ Suggested fix

```diff
-direction = self.POSITION_TYPE_MAPPING.get(broker_position.get("type"), "")
-quantity = broker_position.get("size", 0) if direction == "long" else -broker_position.get("size", 0)
+position_type = broker_position.get("type")
+direction = self.POSITION_TYPE_MAPPING.get(position_type)
+if direction is None:
+    self.logger.warning(
+        f"Unknown position type '{position_type}' for contract {broker_position.get('contractId')}"
+    )
+    return None
+size = broker_position.get("size", 0)
+quantity = size if direction == "long" else -size
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@lumibot/brokers/projectx.py` around lines 1061-1062, The code currently maps broker_position.get("type") via POSITION_TYPE_MAPPING and then treats any unmapped/empty direction as short by negating size; change this so unknown types do not flip sign: retrieve direction = self.POSITION_TYPE_MAPPING.get(broker_position.get("type"), "") and the quantity should be set with explicit branches (if direction == "long": quantity = size; elif direction == "short": quantity = -size; else: quantity = 0) and optionally log or raise a warning about the unknown broker_position["type"]; update the code around the variable names direction, quantity, and broker_position to implement this behavior.

lumibot/traders/trader.py (1)

333-334: ⚠️ Potential issue | 🟡 Minor

Inline comment contradicts actual console log level.

Line 333 forces console to `ERROR`, but line 334 says INFO logs are allowed to console. Please align the comment with the executed behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lumibot/traders/trader.py` around lines 333 - 334, The inline comment is misleading: the code calls set_console_log_level("ERROR") but the next sentence claims INFO logs will be allowed; update the comment near set_console_log_level to accurately state that the console is being forced to ERROR (and, if intended behavior is conditional, make that explicit by referencing the quiet_logs / BACKTESTING_QUIET_LOGS check or change the call). Locate set_console_log_level in trader.py and either correct the text to: "Set console log level to ERROR (file logs remain at INFO)" or, if the goal is to allow INFO when quiet_logs is False, change the set_console_log_level call to use INFO when quiet_logs/BACKTESTING_QUIET_LOGS indicates so and adjust the comment accordingly.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- lumibot/backtesting/backtesting_broker.py
- lumibot/brokers/broker.py
- lumibot/brokers/projectx.py
- lumibot/brokers/schwab.py
- lumibot/entities/order.py
- lumibot/strategies/strategy.py
- lumibot/strategies/strategy_executor.py
- lumibot/tools/lumibot_logger.py
- lumibot/traders/trader.py
🚧 Files skipped from review as they are similar to previous changes (3)
- lumibot/brokers/schwab.py
- lumibot/strategies/strategy.py
- lumibot/entities/order.py
This pull request improves the reliability and maintainability of order tracking and management in the ProjectX broker integration. The main focus is on eliminating order duplication, ensuring consistent usage of order identifiers, and addressing race conditions that can occur during order event processing. Additionally, the code is refactored to centralize order tracking and streamline order cache management.
Order Tracking and Race Condition Fixes
- Added a `_clean_order_trackers` method and improved `_process_new_order` in `Broker` to prevent duplicate orders and clean up order trackers, addressing race conditions when order events (like "fill" and "new") arrive concurrently.
- Introduced a reentrant lock (`_order_update_lock`) in `ProjectX` to serialize order-update event dispatch and prevent race conditions during concurrent order event handling.

Consistent Order Identifier Usage

- Switched to `order.identifier` (instead of `order.id`) for all order operations, logging, and cache lookups, including order submission, modification, cancellation, and bracket order handling. This ensures consistency and prevents mismatches between internal and broker order references. [1] [2] [3] [4] [5] [6] [7]

Order Cache and Lookup Refactoring

- Replaced the redundant `_orders_cache` with the centralized `get_tracked_order` method for all order lookups, and removed redundant cache population to avoid stale or duplicate order objects. [1] [2] [3]
- Updated `get_tracked_order` to safely handle `None` identifiers and to include placeholder orders if needed.

Other Improvements

- Introduced `POSITION_TYPE_MAPPING` for clarity and maintainability of position type handling in `ProjectX`.
- Added an `order_lookback_days` parameter. [1] [2] [3] [4]

Minor Logging and Diagnostic Updates

- Switched logging to `self.logger.debug` and improved log messages for clarity and consistency, including in the backtesting broker. [1] [2] [3]

These changes collectively strengthen order management, reduce bugs related to order duplication and race conditions, and enhance code clarity and maintainability.
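The reentrant-lock serialization described in this PR can be sketched as follows (a minimal illustration, not the actual `ProjectX` implementation; the class and event names are hypothetical). The key design choice is `threading.RLock` rather than `Lock`: a handler that re-dispatches on the same thread, e.g. a "fill" handler emitting a synthetic status change, would deadlock on a plain `Lock`.

```python
import threading

class OrderEventDispatcher:
    """Minimal sketch: serialize concurrent order-update events with a reentrant lock."""

    def __init__(self):
        # RLock so the same thread may re-acquire while already holding it.
        self._order_update_lock = threading.RLock()
        self.processed = []

    def on_order_update(self, event, followup=None):
        with self._order_update_lock:
            self.processed.append(event)
            if followup is not None:
                # Re-entrant call on the same thread: allowed with RLock,
                # would deadlock with a plain threading.Lock.
                self.on_order_update(followup)

dispatcher = OrderEventDispatcher()
threads = [
    threading.Thread(
        target=dispatcher.on_order_update,
        args=(f"new-{i}",),
        kwargs={"followup": f"fill-{i}"},
    )
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because the lock is held across both appends, each `new-i` is immediately followed by its `fill-i` even when threads interleave, which is the serialization guarantee the PR relies on for concurrent "new"/"fill" events.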
Summary by CodeRabbit
New Features
Bug Fixes
Performance
Chores / Tests