
Conversation

Olle-Lukowski (Contributor):

Objective

Fixes #20802.

Solution

Implemented the suggested approach (using a future that reschedules itself).
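
For context, the self-rescheduling trick looks roughly like this (a minimal sketch, not the exact PR code; the struct's fields are elided):

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    struct DiagnosticJob {
        // shared state: sysinfo handle, channel sender, ... (elided)
    }

    impl Future for DiagnosticJob {
        type Output = ();

        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
            // ... take the measurements and send them over the channel ...

            // Wake our own waker so the executor re-queues this task,
            // then return Pending so the future never completes.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }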

Testing

I tried to test it with the log_diagnostics example, but ran into #20844.

Olle-Lukowski added labels on Sep 3, 2025: A-Tasks (Tools for parallel and async work), A-Diagnostics (Logging, crash handling, error reporting and performance analysis), S-Needs-Review (Needs reviewer attention (from anyone!) to move forward).
NthTensor self-requested a review on September 3, 2025.
NthTensor (Contributor):

Man, I love that I can write an issue and have someone implement it literally three days later.

Thanks! I'll give this a look today.

    {
        let mut sys = self.sysinfo.lock().unwrap();
        let data = get_diagnostic_data(&mut sys);
        _ = self.data_rx.send(data);

Contributor:

I think this may have been a mistake in the sketch from my example. Since send returns a future, it needs to be awaited rather than dropped.

This should be self.data_rx.send(data).await.

Olle-Lukowski (Author):

send doesn't return a Future (we're using crossbeam-channel here). And also, how would you await a Future inside the impl Future...?

Contributor:

Ah, using crossbeam-channel, this will block the executor when the channel is full. You need to use try_send or an async channel.

We could also write this task as an async block to make using a channel easier.

    async move {
        loop {
            // all the polling and stuff
            data_rx.send(data).await;
            futures::yield_now().await;
        }
    }

Should be approximately the same.
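
Filled out a bit, that might look like the following (a hedged sketch, assuming the async-channel and futures-lite crates; get_diagnostic_data and the one-slot channel are taken from the diff above):

    let (tx, rx) = async_channel::bounded(1);
    let mut sys = sysinfo::System::new_all();

    AsyncComputeTaskPool::get()
        .spawn(async move {
            loop {
                let data = get_diagnostic_data(&mut sys);
                // Awaiting the send suspends this task when the channel
                // is full, instead of blocking an executor thread.
                if tx.send(data).await.is_err() {
                    break; // receiver dropped, shut the task down
                }
                // Let other tasks on the pool run between iterations.
                futures_lite::future::yield_now().await;
            }
        })
        .detach();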

    @@ -209,8 +226,7 @@ pub mod internal {
                &SystemInformationDiagnosticsPlugin::PROCESS_MEM_USAGE,
                || data.process_mem_usage,
            );
            false
        });
    }

Contributor:

Looking at this a bit closer, the API for Diagnostics vs Diagnostic strikes me as very odd.

There's no reason to be using a closure for these, and we should be using the Instant::now() from the task itself and passing it back over the channel. Currently add_measurement calls Instant::now() again.

Probably out of scope for this PR.
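
For illustration, the payload could carry its own timestamp (a sketch; taken_at and the helper are hypothetical, process_mem_usage is the field from the diff above):

    use std::time::Instant;

    // Hypothetical payload shape: stamp the measurement when it is
    // taken in the task, so add_measurement on the main thread doesn't
    // need to call Instant::now() a second time.
    struct DiagnosticData {
        taken_at: Instant,
        process_mem_usage: f64,
        // ... other readings ...
    }

    // In the task:
    let data = DiagnosticData {
        taken_at: Instant::now(),
        process_mem_usage: read_process_mem_usage(), // hypothetical helper
    };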

NthTensor (Contributor):

We got a new PR for this, whoops! I'll have to compare this with the other PR later today. You may want to take a look at #20852 as well.

dloukadakis (Contributor) left a comment:

Um... I happened to be working on the same issue, and opened a duplicate pull request: #20852.

I have a few comments for this one.

    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        cx.waker().wake_by_ref();

dloukadakis (Contributor), Sep 3, 2025:

I was working on the same issue and tried this too, but it ended up waking the task ~240k times per sysinfo::MINIMUM_CPU_UPDATE_INTERVAL, which caused about 4% more CPU load on my system on average.

In #20852, to prevent this and reduce the CPU load it causes, I took a different approach: wake the task from the First schedule, which runs at the start of every frame (sketched below).
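
Roughly, the shape of that approach (a sketch with illustrative names, not the actual #20852 code):

    use std::sync::{Arc, Mutex};
    use std::task::Waker;
    use bevy::prelude::*;

    // The future parks itself by storing its waker in a shared slot,
    // and a First-schedule system wakes it once per frame.
    #[derive(Resource, Clone, Default)]
    struct DiagnosticWaker(Arc<Mutex<Option<Waker>>>);

    fn wake_diagnostic_task(slot: Res<DiagnosticWaker>) {
        if let Some(waker) = slot.0.lock().unwrap().take() {
            waker.wake();
        }
    }

    // App setup:
    //     app.init_resource::<DiagnosticWaker>()
    //         .add_systems(First, wake_diagnostic_task);
    //
    // And in Future::poll, instead of cx.waker().wake_by_ref():
    //     *self.waker_slot.lock().unwrap() = Some(cx.waker().clone());
    //     Poll::Pending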

    @@ -93,9 +96,9 @@ pub mod internal {

    pub(super) fn setup_plugin(app: &mut App) {
        app.add_systems(Startup, setup_system)
    -        .add_systems(First, launch_diagnostic_tasks)
    +        .add_systems(First, launch_diagnostic_job)

Contributor:

The First schedule runs at the start of every frame; this needs to be moved to the Startup schedule so it runs only once.

    let (tx, rx) = crossbeam_channel::bounded(1);
    data.data_rx = Some(rx);
    AsyncComputeTaskPool::get()
        .spawn(DiagnosticJob {

dloukadakis (Contributor), Sep 3, 2025:

This will leak because launch_diagnostic_job runs on the First schedule on every frame, adding one DiagnosticJob per frame. Please see #20845 (comment)

Olle-Lukowski (Author):

Closed in favor of #20852

NthTensor (Contributor):

Thanks for working on this.
