
Commit 859b47a

rostedt authored and gregkh committed
tracing: Restructure trace_clock_global() to never block
commit aafe104aa9096827a429bc1358f8260ee565b7cc upstream.

It was reported that a fix to the ring buffer recursion detection would
cause a hung machine when performing suspend / resume testing. The
following backtrace was extracted from debugging that case:

Call Trace:
 trace_clock_global+0x91/0xa0
 __rb_reserve_next+0x237/0x460
 ring_buffer_lock_reserve+0x12a/0x3f0
 trace_buffer_lock_reserve+0x10/0x50
 __trace_graph_return+0x1f/0x80
 trace_graph_return+0xb7/0xf0
 ? trace_clock_global+0x91/0xa0
 ftrace_return_to_handler+0x8b/0xf0
 ? pv_hash+0xa0/0xa0
 return_to_handler+0x15/0x30
 ? ftrace_graph_caller+0xa0/0xa0
 ? trace_clock_global+0x91/0xa0
 ? __rb_reserve_next+0x237/0x460
 ? ring_buffer_lock_reserve+0x12a/0x3f0
 ? trace_event_buffer_lock_reserve+0x3c/0x120
 ? trace_event_buffer_reserve+0x6b/0xc0
 ? trace_event_raw_event_device_pm_callback_start+0x125/0x2d0
 ? dpm_run_callback+0x3b/0xc0
 ? pm_ops_is_empty+0x50/0x50
 ? platform_get_irq_byname_optional+0x90/0x90
 ? trace_device_pm_callback_start+0x82/0xd0
 ? dpm_run_callback+0x49/0xc0

With the following RIP:

RIP: 0010:native_queued_spin_lock_slowpath+0x69/0x200

Since the fix to the recursion detection would allow a single recursion
to happen while tracing, this led to trace_clock_global() taking a spin
lock and then trying to take it again:

ring_buffer_lock_reserve() {
  trace_clock_global() {
    arch_spin_lock() {
      queued_spin_lock_slowpath() {
        /* lock taken */
        (something else gets traced by function graph tracer)
          ring_buffer_lock_reserve() {
            trace_clock_global() {
              arch_spin_lock() {
                queued_spin_lock_slowpath() {
                  /* DEAD LOCK! */

Tracing should *never* block, as it can lead to strange lockups like the
above.

Restructure the trace_clock_global() code so that instead of taking a
lock just to update the recorded "prev_time", it simply uses it: for two
events happening on two different CPUs that call this at the same time,
it really doesn't matter which one goes first. Use a trylock to grab the
lock for updating prev_time, and if it fails, simply try again the next
time. If the lock could not be taken, that means something else is
already updating it.

Link: https://lkml.kernel.org/r/[email protected]

Cc: [email protected]
Tested-by: Konstantin Kharlamov <[email protected]>
Tested-by: Todd Brandt <[email protected]>
Fixes: b02414c8f045 ("ring-buffer: Fix recursion protection transitions between interrupt context") # started showing the problem
Fixes: 14131f2 ("tracing: implement trace_clock_*() APIs") # where the bug happened
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=212761
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
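As a quick illustration of the approach described above, here is a minimal userspace sketch of the same "trylock, or skip the update" pattern. It is not the kernel code: C11 atomics stand in for arch_spin_trylock()/arch_spin_unlock() and the smp_rmb()/unlock pairing, and the names clock_global(), clock_lock and prev_time are placeholders invented for the sketch.

#include <stdatomic.h>
#include <stdint.h>

/* Placeholder globals standing in for trace_clock_struct. */
static atomic_flag clock_lock = ATOMIC_FLAG_INIT;
static _Atomic uint64_t prev_time;

/* Return a time that is monotonic per caller without ever blocking. */
static uint64_t clock_global(uint64_t now)
{
	/* Acquire pairs with the release store of prev_time below. */
	uint64_t prev = atomic_load_explicit(&prev_time, memory_order_acquire);

	/* Make sure now never goes backwards relative to the last record. */
	if ((int64_t)(now - prev) < 0)
		now = prev + 1;

	/* Never spin: if the lock is busy, skip the update this time. */
	if (!atomic_flag_test_and_set_explicit(&clock_lock, memory_order_acquire)) {
		/* Re-read in case another thread updated prev_time meanwhile. */
		prev = atomic_load_explicit(&prev_time, memory_order_relaxed);
		if ((int64_t)(now - prev) < 0)
			now = prev + 1;

		/* Publish the new prev_time; release pairs with the acquire above. */
		atomic_store_explicit(&prev_time, now, memory_order_release);
		atomic_flag_clear_explicit(&clock_lock, memory_order_release);
	}
	return now;
}

The point of the trylock is that the update path can only skip work, never wait: a recursive call from a tracer that interrupts the update cannot deadlock on clock_lock, which is exactly the property the kernel change restores.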
1 parent e1bf31b commit 859b47a

File tree

1 file changed: +30 -14 lines changed


kernel/trace/trace_clock.c

Lines changed: 30 additions & 14 deletions
@@ -93,33 +93,49 @@ u64 notrace trace_clock_global(void)
 {
 	unsigned long flags;
 	int this_cpu;
-	u64 now;
+	u64 now, prev_time;
 
 	local_irq_save(flags);
 
 	this_cpu = raw_smp_processor_id();
-	now = sched_clock_cpu(this_cpu);
+
 	/*
-	 * If in an NMI context then dont risk lockups and return the
-	 * cpu_clock() time:
+	 * The global clock "guarantees" that the events are ordered
+	 * between CPUs. But if two events on two different CPUS call
+	 * trace_clock_global at roughly the same time, it really does
+	 * not matter which one gets the earlier time. Just make sure
+	 * that the same CPU will always show a monotonic clock.
+	 *
+	 * Use a read memory barrier to get the latest written
+	 * time that was recorded.
 	 */
-	if (unlikely(in_nmi()))
-		goto out;
+	smp_rmb();
+	prev_time = READ_ONCE(trace_clock_struct.prev_time);
+	now = sched_clock_cpu(this_cpu);
 
-	arch_spin_lock(&trace_clock_struct.lock);
+	/* Make sure that now is always greater than prev_time */
+	if ((s64)(now - prev_time) < 0)
+		now = prev_time + 1;
 
 	/*
-	 * TODO: if this happens often then maybe we should reset
-	 * my_scd->clock to prev_time+1, to make sure
-	 * we start ticking with the local clock from now on?
+	 * If in an NMI context then dont risk lockups and simply return
+	 * the current time.
 	 */
-	if ((s64)(now - trace_clock_struct.prev_time) < 0)
-		now = trace_clock_struct.prev_time + 1;
+	if (unlikely(in_nmi()))
+		goto out;
 
-	trace_clock_struct.prev_time = now;
+	/* Tracing can cause strange recursion, always use a try lock */
+	if (arch_spin_trylock(&trace_clock_struct.lock)) {
+		/* Reread prev_time in case it was already updated */
+		prev_time = READ_ONCE(trace_clock_struct.prev_time);
+		if ((s64)(now - prev_time) < 0)
+			now = prev_time + 1;
 
-	arch_spin_unlock(&trace_clock_struct.lock);
+		trace_clock_struct.prev_time = now;
 
+		/* The unlock acts as the wmb for the above rmb */
+		arch_spin_unlock(&trace_clock_struct.lock);
+	}
  out:
 	local_irq_restore(flags);
 
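For readability, this is roughly how trace_clock_global() reads with the hunk above applied. It is assembled from the diff itself; the function prologue and the return at the end sit outside the hunk and are assumed from context, so they may differ slightly in this stable tree.

u64 notrace trace_clock_global(void)
{
	unsigned long flags;
	int this_cpu;
	u64 now, prev_time;

	local_irq_save(flags);

	this_cpu = raw_smp_processor_id();

	/*
	 * The global clock "guarantees" that the events are ordered
	 * between CPUs. But if two events on two different CPUS call
	 * trace_clock_global at roughly the same time, it really does
	 * not matter which one gets the earlier time. Just make sure
	 * that the same CPU will always show a monotonic clock.
	 *
	 * Use a read memory barrier to get the latest written
	 * time that was recorded.
	 */
	smp_rmb();
	prev_time = READ_ONCE(trace_clock_struct.prev_time);
	now = sched_clock_cpu(this_cpu);

	/* Make sure that now is always greater than prev_time */
	if ((s64)(now - prev_time) < 0)
		now = prev_time + 1;

	/*
	 * If in an NMI context then dont risk lockups and simply return
	 * the current time.
	 */
	if (unlikely(in_nmi()))
		goto out;

	/* Tracing can cause strange recursion, always use a try lock */
	if (arch_spin_trylock(&trace_clock_struct.lock)) {
		/* Reread prev_time in case it was already updated */
		prev_time = READ_ONCE(trace_clock_struct.prev_time);
		if ((s64)(now - prev_time) < 0)
			now = prev_time + 1;

		trace_clock_struct.prev_time = now;

		/* The unlock acts as the wmb for the above rmb */
		arch_spin_unlock(&trace_clock_struct.lock);
	}
 out:
	local_irq_restore(flags);

	return now;
}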
