Adapt the TimeDriver for per-worker wheels #7414

Open
@ADD-SP

Description

Summary

Sub issue of #7384

This document describes the changes I want to make to TimeDriver for #7409; if you think something doesn't make sense, please let me know.

This document analyzes the role played by TimeDriver and discusses the changes we need to make to finish #7384.

Why change the TimeDriver?

In the current implementation, there is a global timing wheel and a global TimeDriver that drives the global wheel.

In #7384, we proposed using per-worker timing wheels, so we need to reconsider the role of TimeDriver to fit this design.

Current Implementation

Data

Data owned by TimeDriver

  • Global wheel: I believe there is no need to explain this too much.
  • start_time: The Instant captured when the Runtime is created.
  • is_shutdown: Indicates whether the driver is being shut down.
  • next_wake: The earliest time to wake up the io::Driver, in other words, the deadline of the first timer to be awakened.
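
Putting the list together, here is a simplified sketch of the data the current TimeDriver owns. The struct and field layout are illustrative only, not tokio's exact internals (in particular, the real wheel is reached through the driver lock discussed in the Control section below).

use std::sync::atomic::AtomicBool;
use std::time::Instant;

// Simplified sketch of the data the current global TimeDriver owns.
// Names and layout are illustrative, not tokio's exact internals.
struct Wheel; // placeholder for tokio's internal timing wheel

struct CurrentTimeDriverData {
    // The single global timing wheel.
    wheel: Wheel,
    // Captured when the Runtime is created; ticks are measured from it.
    start_time: Instant,
    // Whether the driver has been shut down.
    is_shutdown: AtomicBool,
    // Tick of the earliest pending timer, used to bound the io::Driver park.
    next_wake: Option<u64>,
}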

Logic

Logic owned by TimeDriver

  • deadline (Instant) to tick (u64): Calculate the tick of a timer with a given deadline.
  • tick (u64) to Duration: Just a simple Duration::from_millis(tick).
  • tick (u64) of now: the current tick, computed from the Clock and start_time.
    • Clock is freezable and advance-able, so it can be used in tests to advance time without sleeping (see the example after this list).
  • Shutdown: Fire all timers even if they have not expired yet, but without executing them. In other words, clear the timing wheel.
  • Maintain wheels
    • Fire all expired timers based on TimeSource::now(&clock).
    • Spin wheels.
  • Adjust park_timeout: Adjust the park_timeout based on the earliest timer.
  • Park io::Driver: Park the io::Driver after maintaining wheels.
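
As referenced in the Clock bullet above, the freezable and advance-able Clock is what backs tokio's public paused-time test API. A minimal example using that public API (it requires tokio's test-util feature):

use std::time::Duration;

// The clock starts frozen (`start_paused = true`), and `advance` moves it
// forward without any real sleeping.
#[tokio::test(start_paused = true)]
async fn timer_fires_without_real_sleep() {
    let sleep = tokio::time::sleep(Duration::from_secs(60));
    tokio::pin!(sleep);

    // Move the frozen clock past the deadline; no wall-clock time passes.
    tokio::time::advance(Duration::from_secs(61)).await;

    // The timer has already expired and completes immediately.
    sleep.as_mut().await;
}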

Control

Who controls the TimeDriver?

All worker threads periodically race for the Mutex, and the winner invokes TimeHandle::park_*().
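
Roughly, and with hypothetical names standing in for tokio's internals, the race looks like this:

use std::sync::Mutex;

struct TimeDriverState; // placeholder for the state behind the driver lock

fn try_drive_time(driver: &Mutex<TimeDriverState>) {
    // Every worker periodically attempts this; only one wins the lock.
    match driver.try_lock() {
        Ok(_state) => {
            // Winner: process the global wheel, then park the io::Driver
            // with a timeout capped by the earliest pending timer.
        }
        Err(_) => {
            // Losers: not blocked on the io::Driver, so they keep running
            // tasks or park on their own condition variable.
        }
    }
}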

Data: Shared vs. Per-worker

Now, let's discuss which data should be shared across workers and which should be per-worker.

Timing Wheel

Per-worker

This is the whole point of #7384.

start_time

Shared

This data never changes, and is always the same for all workers.

is_shutdown

Shared

This is pure global state and it is checked frequently, so we may want to use an AtomicBool.
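
A minimal sketch of the shared flag, assuming we do switch to an AtomicBool; the memory orderings shown are one reasonable choice, not a statement about what tokio actually requires:

use std::sync::atomic::{AtomicBool, Ordering};

struct TimeHandle {
    is_shutdown: AtomicBool,
}

impl TimeHandle {
    fn shutdown(&self) {
        self.is_shutdown.store(true, Ordering::Release);
    }

    fn is_shutdown(&self) -> bool {
        // Checked frequently on the hot path; an atomic load avoids a lock.
        self.is_shutdown.load(Ordering::Acquire)
    }
}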

next_wake

Per-worker

The answer is not obvious; let's start by discussing why we need it.

In general, the io::Driver should sleep until a new I/O event arrives. However, it is possible for timers to expire before the next I/O event occurs, so the io::Driver has to sleep with a timeout based on next_wake to ensure expired timers are fired in time.

Suppose the wheel is already per-worker. Is the earliest deadline among a worker's local timers enough for parking the io::Driver?

  • For the worker that successfully acquires the driver lock, the io::Driver can be woken up in time based on its own earliest deadline.
  • The other workers are not blocked on the io::Driver, so they can also process their expired local timers in time.

In summary, next_wake should be per-worker.

Logic: More stateless TimeDriver

Since some data is planned to be moved to per-worker storage, we should make the TimeDriver more stateless to keep the dataflow simpler.

Unit conversion

  • Instant to u64
  • u64 to Instant

This logic depends on the Clock and start_time, which are pure global state, so we can store this state in the runtime::Handle and let TimeDriver keep a runtime::Handle to perform the conversion.
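
A sketch of the conversion itself, assuming one tick per millisecond as the Duration::from_millis(tick) mapping above implies; start_time here stands in for the state reachable through runtime::Handle, and the rounding/saturation details of the real implementation are glossed over:

use std::time::{Duration, Instant};

struct TimeSource {
    // Stand-in for the global state reachable through runtime::Handle.
    start_time: Instant,
}

impl TimeSource {
    fn instant_to_tick(&self, t: Instant) -> u64 {
        // Ticks are milliseconds elapsed since the runtime was created.
        t.saturating_duration_since(self.start_time).as_millis() as u64
    }

    fn tick_to_instant(&self, tick: u64) -> Instant {
        self.start_time + Duration::from_millis(tick)
    }
}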

Shutdown

This logic depends on both global state (is_shutdown) and the per-worker wheel, so the shutdown interface should receive both the is_shutdown flag and the local wheel.
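
For example, a sketch of that shape; Wheel::clear is a hypothetical stand-in for "fire every remaining timer without running it":

use std::sync::atomic::{AtomicBool, Ordering};

struct Wheel; // placeholder for the per-worker timing wheel

impl Wheel {
    // Hypothetical stand-in for the actual clearing logic.
    fn clear(&mut self) {}
}

fn shutdown(is_shutdown: &AtomicBool, local_wheel: &mut Wheel) {
    // Setting the flag is idempotent; every worker still clears its own wheel.
    is_shutdown.store(true, Ordering::Release);
    local_wheel.clear();
}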

Wheel maintaining

This only involves a per-worker wheel, so the interface should only receive the local wheel.
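
A sketch of that interface, with hypothetical Wheel methods standing in for the real internal API:

use std::time::Duration;

struct Wheel; // placeholder for the per-worker timing wheel
struct TimerEntry; // placeholder for an expired timer entry

impl Wheel {
    // Hypothetical methods standing in for the real internal wheel API.
    fn poll_expired(&mut self, _now: u64) -> Option<TimerEntry> { None }
    fn next_expiration(&self) -> Option<u64> { None }
}

impl TimerEntry {
    fn fire(self) {} // wake the task waiting on this timer
}

// Fire everything that has expired at `now` in this worker's wheel, then
// report how long until the next local timer is due.
fn maintain_wheel(wheel: &mut Wheel, now: u64) -> Option<Duration> {
    while let Some(entry) = wheel.poll_expired(now) {
        entry.fire();
    }
    wheel
        .next_expiration()
        .map(|tick| Duration::from_millis(tick.saturating_sub(now)))
}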

Adjust park_timeout

This also only involves per-worker data, so the interface should receive the timeout to be adjusted and the local wheel.
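
The adjustment itself is just a minimum over two optional durations; in this sketch the second argument stands in for the earliest deadline read from the local wheel:

use std::time::Duration;

// Cap the requested park timeout by the time until this worker's earliest
// timer, so the io::Driver never sleeps past a local deadline.
fn adjust_park_timeout(
    requested: Option<Duration>,
    until_next_local_timer: Option<Duration>,
) -> Option<Duration> {
    match (requested, until_next_local_timer) {
        (Some(a), Some(b)) => Some(a.min(b)),
        (Some(a), None) => Some(a),
        (None, b) => b,
    }
}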

Park io::Driver

Theoretically, this logic doesn't belong in the TimeDriver, so we may park the io::Driver directly without delegating to the TimeDriver.

To recap, the new internal interface of TimeDriver might look like this:

struct TimeDriver {
    rt_handle: runtime::Handle,
}

struct TimeHandle {
    is_shutdown: AtomicBool,
}

impl TimeDriver {
    pub(crate) fn instant_to_tick(&self, instant: Instant) -> u64;
    pub(crate) fn tick_to_instant(&self, tick: u64) -> Instant;

    /// Fire expired timers, cascade the wheels, and return the earliest deadline.
    pub(crate) fn maintain_wheel(&self, hdl: &TimeHandle, wheel: &mut Wheel) -> Option<Duration>;

    /// Clear the wheel and set the `is_shutdown` flag.
    pub(crate) fn shutdown(&self, hdl: &TimeHandle, wheel: &mut Wheel);
}

Control: Worker drives the TimeDriver

In the current design, the TimeDriver drives itself. However, since the crucial data moves to per-worker storage, we may let the workers take direct control of the TimeDriver.
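
To make that concrete, here is a sketch of how a worker's park path could sequence the pieces. Only the TimeDriver and TimeHandle shapes come from the interface above; worker_park, IoDriver::park_timeout, and the condvar fallback are hypothetical glue for illustration.

use std::time::Duration;

struct Wheel;
struct TimeHandle;
struct TimeDriver;
struct IoDriver;

impl TimeDriver {
    fn maintain_wheel(&self, _hdl: &TimeHandle, _wheel: &mut Wheel) -> Option<Duration> {
        None
    }
}

impl IoDriver {
    fn park_timeout(&self, _timeout: Option<Duration>) {}
}

// The worker, not the TimeDriver, sequences "maintain my wheel, cap the
// timeout, then park the io::Driver".
fn worker_park(
    time: &TimeDriver,
    hdl: &TimeHandle,
    local_wheel: &mut Wheel,
    io_driver: Option<&IoDriver>, // Some(_) only for the worker holding the driver
    requested: Option<Duration>,
) {
    // 1. Fire this worker's expired timers and learn the next local deadline.
    let until_next_timer = time.maintain_wheel(hdl, local_wheel);

    // 2. Cap the requested park timeout by the local deadline.
    let timeout = match (requested, until_next_timer) {
        (Some(a), Some(b)) => Some(a.min(b)),
        (Some(a), None) => Some(a),
        (None, b) => b,
    };

    // 3. Park the io::Driver directly, without delegating to the TimeDriver,
    //    if this worker currently holds it; otherwise the worker would park
    //    on its own condition variable (omitted here).
    if let Some(io) = io_driver {
        io.park_timeout(timeout);
    }
}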

Metadata

Labels

A-tokio (Area: The main tokio crate), C-proposal (Category: a proposal and request for comments), M-time (Module: tokio/time)
