author	Linus Torvalds <torvalds@linux-foundation.org>	2014-08-27 09:12:36 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2014-08-27 09:12:36 -0700
commit	c0fe5dcb91f75204e34f63269ea72b913b0923d1 (patch)
tree	0e7cf0df48f172f99b65bb152b0cbfff3c5636a1 /kernel
parent	68e370289c29e3beac99d59c6d840d470af9dfcf (diff)
parent	4ce97dbf50245227add17c83d87dc838e7ca79d0 (diff)
download	linux-c0fe5dcb91f75204e34f63269ea72b913b0923d1.tar.bz2
Merge tag 'trace-fixes-v3.17-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull trace buffer epoll hang fix from Steven Rostedt:
 "Josef Bacik found a bug in ring_buffer_poll_wait() where the condition
  variable (waiters_pending) was set before being added to the poll queue
  via poll_wait().  This allowed for a small race window where an event
  could come in, check the condition variable, see it set to true, clear
  it, and then wake all the waiters.  But because the waiter set the
  variable before adding itself to the queue, the waker could have cleared
  the variable after it was set and then missed waking it up, as it wasn't
  added to the queue yet.

  Discussing this bug, we realized that a memory barrier needed to be added
  too, for the rare case where something polls for a single trace event to
  happen (just one, with no more to come in) and misses the wakeup due to
  memory ordering.

  Ideally, a memory barrier should be added on the writer side as well, but
  as that would kill tracing performance, and this is a situation tracing
  wasn't even designed for (to trace one instance of an event, use a printk
  instead!), it isn't worth adding the barrier.  But we can in the future
  add the barrier for when the buffer goes from empty to the first event,
  as that would cover this case"

* tag 'trace-fixes-v3.17-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  trace: Fix epoll hang when we race with new entries
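For readers tracing through the fix, below is a minimal user-space model of the waiter/waker handshake described above. It is not kernel code: buffer_has_event, waiters_pending, writer() and poller() are illustrative names, atomic_thread_fence() stands in for smp_mb(), and a pthread condition variable plays the role of the wait queue that poll_wait() registers on. The point it sketches is the one in the message: the poller must publish waiters_pending and only then re-check the buffer behind a full barrier, so a writer racing in between either sees the flag (and delivers the wakeup) or has its event seen by the re-check.

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

/* Illustrative stand-ins, not kernel objects. */
static atomic_bool buffer_has_event;          /* "ring buffer is non-empty" */
static atomic_bool waiters_pending;           /* "a poller is waiting"      */

/* The condvar plays the role of the wait queue poll_wait() registers on. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wake = PTHREAD_COND_INITIALIZER;
static bool woken;

/* Waker side: runs after an event has been written to the buffer. */
static void *writer(void *arg)
{
	atomic_store(&buffer_has_event, true);     /* publish the event   */
	if (atomic_load(&waiters_pending)) {       /* anyone polling?     */
		atomic_store(&waiters_pending, false);
		pthread_mutex_lock(&lock);
		woken = true;                      /* deliver the wakeup  */
		pthread_cond_signal(&wake);
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

/* Poller side: mirrors the ordering the patch establishes. */
static void *poller(void *arg)
{
	/*
	 * Announce waiters_pending, then a full fence (smp_mb() in the
	 * patch), and only then re-check the buffer.  Without the fence the
	 * store could be reordered after the load, and a writer running in
	 * the window could see waiters_pending == false and skip the wakeup
	 * while the poller still concludes the buffer is empty.
	 */
	atomic_store(&waiters_pending, true);
	atomic_thread_fence(memory_order_seq_cst);
	if (atomic_load(&buffer_has_event))
		return NULL;                       /* data already there  */

	pthread_mutex_lock(&lock);
	while (!woken)
		pthread_cond_wait(&wake, &lock);   /* sleep until woken   */
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t w, p;

	pthread_create(&p, NULL, poller, NULL);
	pthread_create(&w, NULL, writer, NULL);
	pthread_join(p, NULL);
	pthread_join(w, NULL);
	puts("poller either saw the event or was woken");
	return 0;
}

Compile with -pthread. In this model the condvar loop is what keeps the "register on the queue before sleeping" part safe; the bug the patch fixes was that the kernel code set the flag before the poll_wait() registration, so the waker could consume the flag and find nobody queued to wake.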
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/trace/ring_buffer.c	16
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index afb04b9b818a..b38fb2b9e237 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -626,8 +626,22 @@ int ring_buffer_poll_wait(struct ring_buffer *buffer, int cpu,
 		work = &cpu_buffer->irq_work;
 	}
 
-	work->waiters_pending = true;
 	poll_wait(filp, &work->waiters, poll_table);
+	work->waiters_pending = true;
+	/*
+	 * There's a tight race between setting the waiters_pending and
+	 * checking if the ring buffer is empty. Once the waiters_pending bit
+	 * is set, the next event will wake the task up, but we can get stuck
+	 * if there's only a single event in.
+	 *
+	 * FIXME: Ideally, we need a memory barrier on the writer side as well,
+	 * but adding a memory barrier to all events will cause too much of a
+	 * performance hit in the fast path. We only need a memory barrier when
+	 * the buffer goes from empty to having content. But as this race is
+	 * extremely small, and it's not a problem if another event comes in, we
+	 * will fix it later.
+	 */
+	smp_mb();
 
 	if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) ||
 	    (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu)))
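The FIXME in the new comment concerns the writer side of the same handshake: the event writer publishes the event and then checks waiters_pending, and without a barrier of its own those two steps can effectively be reordered, so a lone event can still slip past a poller even with the new smp_mb(). The fragment below is a hedged, user-space illustration of that residual store-buffering pattern, not kernel code: the names are illustrative, atomic_thread_fence() again stands in for smp_mb(), and whether a single run actually observes the reordering depends on the hardware and compiler; the point is only that the relaxed writer side permits it.

/*
 * Store-buffering pattern behind the FIXME: each side stores its flag and
 * then loads the other's.  With no fence on the writer side, both loads may
 * return 0 -- the writer skips the wakeup and the poller goes to sleep --
 * which is the single-event case the comment defers fixing.
 */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int event_published;   /* writer: "buffer is no longer empty" */
static atomic_int waiters_pending;   /* poller: "I am waiting"              */
static int writer_saw_waiter, poller_saw_event;

static void *writer_side(void *arg)
{
	atomic_store_explicit(&event_published, 1, memory_order_relaxed);
	/* no barrier here: this is the ordering the FIXME says is missing */
	writer_saw_waiter = atomic_load_explicit(&waiters_pending,
						 memory_order_relaxed);
	return NULL;
}

static void *poller_side(void *arg)
{
	atomic_store_explicit(&waiters_pending, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);   /* the patch's smp_mb() */
	poller_saw_event = atomic_load_explicit(&event_published,
						memory_order_relaxed);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, writer_side, NULL);
	pthread_create(&b, NULL, poller_side, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	/* writer_saw_waiter == 0 && poller_saw_event == 0 is the missed wakeup */
	printf("writer_saw_waiter=%d poller_saw_event=%d\n",
	       writer_saw_waiter, poller_saw_event);
	return 0;
}

Fencing (or using seq_cst accesses on) both sides closes the window, which is why the commit message notes that a writer-side barrier, taken only when the buffer goes from empty to non-empty, would cover this case without penalizing every event.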