| author    | Jens Axboe <axboe@kernel.dk>                           | 2017-09-28 11:31:55 -0600 |
|-----------|--------------------------------------------------------|---------------------------|
| committer | Jens Axboe <axboe@kernel.dk>                           | 2017-10-03 08:38:17 -0600 |
| commit    | aac8d41cd438f25bf3110fc6b98f1d16d7dbc169 (patch)       |                           |
| tree      | 0c507b3a603955686e9b2cd13c5fcbbcc638ed6d /drivers/uwb  |                           |
| parent    | e8e8a0c6c9bfc0b320671166dd795f413f636773 (diff)        |                           |
| download  | linux-aac8d41cd438f25bf3110fc6b98f1d16d7dbc169.tar.bz2 |                           |
writeback: only allow one inflight and pending full flush
When someone calls wakeup_flusher_threads() or
wakeup_flusher_threads_bdi(), they schedule writeback of all dirty
pages in the system (or on that bdi). If we are tight on memory, we
can get tons of these queued from kswapd/vmscan. This causes (at
least) two problems:
1) We consume a ton of memory just allocating writeback work items.
We've seen as much as 600 million of these writeback work items
pending. That's a lot of memory to pointlessly hold hostage,
while the box is under memory pressure.
2) We spend so much time processing these work items that we
introduce a softlockup in writeback processing. This is because
each of the writeback work items ends up doing no actual work (it's
hard when you have millions of identical ones coming into the
flush machinery), so we just sit in a tight loop pulling work
items and deleting/freeing them.
Fix this by adding a 'start_all' bit to the writeback structure, and
setting it when someone attempts to flush all dirty pages. The bit is
cleared when we start writeback on that work item. If the bit is
already set when we attempt to queue nr_pages == 0 ("flush
everything") writeback, we simply ignore the new request.
This gives us at most one full flush in flight, with one pending,
and makes for more efficient handling of this type of writeback.
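
To make the mechanism concrete, here is a minimal sketch of the queue-side
check and the flusher-side clear. It assumes a WB_start_all bit in wb->state
and a start_all flag on the queued work item, as described above; it is an
illustration of the idea, not the verbatim patch, and the surrounding helpers
(wb_has_dirty_io(), get_next_work_item(), wb_writeback()) appear only to place
the bit operations in context.

```c
/*
 * Sketch of the dedup logic described in the changelog; not the
 * verbatim patch. Assumes a WB_start_all bit in wb->state and a
 * start_all flag on the work item.
 */
static void wb_start_writeback(struct bdi_writeback *wb, enum wb_reason reason)
{
	if (!wb_has_dirty_io(wb))
		return;

	/*
	 * A "flush everything" request is already pending for this wb.
	 * Queueing another identical work item would only waste memory
	 * and flusher CPU time, so drop it.
	 */
	if (test_bit(WB_start_all, &wb->state) ||
	    test_and_set_bit(WB_start_all, &wb->state))
		return;

	/* ... allocate and queue the single full-flush work item ... */
}

static long wb_do_writeback(struct bdi_writeback *wb)
{
	struct wb_writeback_work *work;
	long wrote = 0;

	while ((work = get_next_work_item(wb)) != NULL) {
		/*
		 * Once writeback actually starts on a full-flush work
		 * item, clear the pending marker so the next
		 * wakeup_flusher_threads() call may queue one (and only
		 * one) new request while this one is still in flight.
		 */
		if (work->start_all)
			clear_bit(WB_start_all, &wb->state);
		wrote += wb_writeback(wb, work);
		/* ... complete or free the work item ... */
	}

	return wrote;
}
```

The plain test_bit() before test_and_set_bit() is the usual trick to skip the
dirtying atomic read-modify-write in the common case where the bit is already
set.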
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Tested-by: Chris Mason <clm@fb.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'drivers/uwb')
0 files changed, 0 insertions, 0 deletions