| author | Hugh Dickins <hughd@google.com> | 2014-02-06 15:56:01 -0800 |
|---|---|---|
| committer | Tejun Heo <tj@kernel.org> | 2014-02-07 10:21:12 -0500 |
| commit | ab3f5faa6255a0eb4f832675507d9e295ca7e9ba (patch) | |
| tree | 1f9a906214d8f20bf9f58ccb202d2eb8dade8db4 /kernel/cgroup.c | |
| parent | 0a6be6555302eebb14510fd6b35bb17e8dfa1386 (diff) | |
| download | linux-ab3f5faa6255a0eb4f832675507d9e295ca7e9ba.tar.bz2 | |
cgroup: use an ordered workqueue for cgroup destruction
Sometimes the cleanup after memcg hierarchy testing gets stuck in
mem_cgroup_reparent_charges(), unable to bring non-kmem usage down to 0.
There may turn out to be several causes, but a major cause is this: the
work item that offlines the parent can run before the work item that
offlines the child; the parent's mem_cgroup_reparent_charges() circles
around waiting for the child's pages to be reparented to its lrus, but it
is holding cgroup_mutex, which prevents the child from ever reaching its
own mem_cgroup_reparent_charges().
Just use an ordered workqueue for cgroup_destroy_wq.
tj: Committing as the temporary fix until the reverse dependency can
be removed from memcg. Comment updated accordingly.
Fixes: e5fca243abae ("cgroup: use a dedicated workqueue for cgroup destruction")
Suggested-by: Filipe Brandenburger <filbranden@google.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: stable@vger.kernel.org # 3.10+
Signed-off-by: Tejun Heo <tj@kernel.org>
Diffstat (limited to 'kernel/cgroup.c')
-rw-r--r-- | kernel/cgroup.c | 8 |
1 file changed, 6 insertions(+), 2 deletions(-)
```diff
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index e2f46ba37f72..aa95591c1430 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -4845,12 +4845,16 @@ static int __init cgroup_wq_init(void)
 	/*
 	 * There isn't much point in executing destruction path in
 	 * parallel. Good chunk is serialized with cgroup_mutex anyway.
-	 * Use 1 for @max_active.
+	 *
+	 * XXX: Must be ordered to make sure parent is offlined after
+	 * children. The ordering requirement is for memcg where a
+	 * parent's offline may wait for a child's leading to deadlock. In
+	 * the long term, this should be fixed from memcg side.
 	 *
 	 * We would prefer to do this in cgroup_init() above, but that
 	 * is called before init_workqueues(): so leave this until after.
 	 */
-	cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);
+	cgroup_destroy_wq = alloc_ordered_workqueue("cgroup_destroy", 0);
 	BUG_ON(!cgroup_destroy_wq);
 
 	/*
```
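For context, here is a minimal, hypothetical sketch (not part of this commit) of the guarantee the fix relies on. Only the workqueue API calls are real kernel interfaces; the module, work items, and function names (demo_destroy_wq, demo_offline_fn, child_offline, parent_offline) are invented for illustration.

```c
/*
 * Hypothetical sketch, not from this commit: why ordering matters for
 * cgroup_destroy_wq.  Only the workqueue APIs are real; the demo work
 * items and functions are invented for illustration.
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *demo_destroy_wq;

/*
 * Stand-in for the css offline work; for memcg, the parent's offline
 * may spin in mem_cgroup_reparent_charges() until the child's is done.
 */
static void demo_offline_fn(struct work_struct *work)
{
	pr_info("offlining one css\n");
}

static DECLARE_WORK(child_offline, demo_offline_fn);
static DECLARE_WORK(parent_offline, demo_offline_fn);

static int __init demo_init(void)
{
	/*
	 * Before the fix: a regular per-CPU workqueue with @max_active == 1.
	 * Items queued from different CPUs can still run concurrently, so
	 * parent_offline may start before child_offline has finished:
	 *
	 *	demo_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);
	 *
	 * After the fix: an ordered workqueue executes at most one item at
	 * a time, in queueing order.  A child css is queued for destruction
	 * before its parent, so the child's offline completes first.
	 */
	demo_destroy_wq = alloc_ordered_workqueue("cgroup_destroy", 0);
	if (!demo_destroy_wq)
		return -ENOMEM;

	queue_work(demo_destroy_wq, &child_offline);
	queue_work(demo_destroy_wq, &parent_offline);
	return 0;
}

static void __exit demo_exit(void)
{
	destroy_workqueue(demo_destroy_wq);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```

The design point is that alloc_ordered_workqueue() is roughly alloc_workqueue() with WQ_UNBOUND | __WQ_ORDERED and a max_active of 1, which is what gives the strict one-at-a-time, FIFO execution the commit message asks for.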