author | Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp> | 2009-01-07 18:08:29 -0800
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2009-01-08 08:31:09 -0800
commit | 7f4d454dee2e0bdd21bafd413d1c53e443a26540 (patch)
tree | abf54c2bd7c91fe09685e42b3a92d84679403058 /mm
parent | a5e924f5f8abf97944e625d74967cc9452cfbce8 (diff)
download | linux-7f4d454dee2e0bdd21bafd413d1c53e443a26540.tar.bz2
memcg: avoid deadlock caused by race between oom and cpuset_attach
mpol_rebind_mm(), which can be called from cpuset_attach(), does
down_write(mm->mmap_sem). This means down_write(mm->mmap_sem) can be
called under cgroup_mutex.
OTOH, the page fault path does down_read(mm->mmap_sem) and calls
mem_cgroup_try_charge_xxx(), which may eventually call
mem_cgroup_out_of_memory(). And mem_cgroup_out_of_memory() calls
cgroup_lock(). This means cgroup_lock() can be called under
down_read(mm->mmap_sem).
If those two paths race, a deadlock can happen.
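The two orderings form a classic AB-BA inversion. As a rough illustration only (not kernel code), here is a minimal userspace sketch using pthreads; cgroup_mutex and mmap_sem are stand-ins for the real kernel locks, and attach_path()/fault_path() are names invented for the example:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t cgroup_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_rwlock_t mmap_sem = PTHREAD_RWLOCK_INITIALIZER;

/* cpuset_attach() path: cgroup_mutex first, then mmap_sem for writing */
static void *attach_path(void *unused)
{
	pthread_mutex_lock(&cgroup_mutex);   /* held by the cgroup core */
	pthread_rwlock_wrlock(&mmap_sem);    /* mpol_rebind_mm() */
	pthread_rwlock_unlock(&mmap_sem);
	pthread_mutex_unlock(&cgroup_mutex);
	return NULL;
}

/* page fault path: mmap_sem for reading first, then cgroup_mutex */
static void *fault_path(void *unused)
{
	pthread_rwlock_rdlock(&mmap_sem);    /* fault handler */
	pthread_mutex_lock(&cgroup_mutex);   /* old mem_cgroup_out_of_memory() */
	pthread_mutex_unlock(&cgroup_mutex);
	pthread_rwlock_unlock(&mmap_sem);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	/* With an unlucky interleaving, each thread blocks on the lock
	 * the other already holds and neither makes progress. */
	pthread_create(&a, NULL, attach_path, NULL);
	pthread_create(&b, NULL, fault_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("no deadlock on this run\n");
	return 0;
}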
This patch avoids this deadlock by:
- removing cgroup_lock() from mem_cgroup_out_of_memory().
- defining a new mutex (memcg_tasklist) and serializing mem_cgroup_move_task()
(the ->attach handler of the memory cgroup) and mem_cgroup_out_of_memory().
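For contrast, the post-patch ordering in the same simplified userspace model (again only a sketch; memcg_tasklist stands in for the new mutex and the function names are illustrative): the OOM path no longer touches cgroup_mutex, and memcg_tasklist is always the innermost lock on both paths, so no AB-BA cycle remains.

#include <pthread.h>

static pthread_mutex_t cgroup_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t memcg_tasklist = PTHREAD_MUTEX_INITIALIZER;
static pthread_rwlock_t mmap_sem = PTHREAD_RWLOCK_INITIALIZER;

/* mem_cgroup_move_task(): entered with cgroup_mutex already held */
static void move_task_path(void)
{
	pthread_mutex_lock(&cgroup_mutex);
	pthread_mutex_lock(&memcg_tasklist);   /* innermost */
	pthread_mutex_unlock(&memcg_tasklist);
	pthread_mutex_unlock(&cgroup_mutex);
}

/* page fault OOM path: never takes cgroup_mutex any more */
static void oom_path(void)
{
	pthread_rwlock_rdlock(&mmap_sem);
	pthread_mutex_lock(&memcg_tasklist);   /* innermost */
	pthread_mutex_unlock(&memcg_tasklist);
	pthread_rwlock_unlock(&mmap_sem);
}

int main(void)
{
	move_task_path();
	oom_path();
	return 0;
}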
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/memcontrol.c | 5
-rw-r--r-- | mm/oom_kill.c | 2
2 files changed, 5 insertions, 2 deletions
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 435f08dac8bf..861037070f66 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -51,6 +51,7 @@ static int really_do_swap_account __initdata = 1; /* for remember boot option*/
 #define do_swap_account		(0)
 #endif
 
+static DEFINE_MUTEX(memcg_tasklist);	/* can be hold under cgroup_mutex */
 
 /*
  * Statistics for memory cgroup.
@@ -827,7 +828,9 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,
 
 		if (!nr_retries--) {
 			if (oom) {
+				mutex_lock(&memcg_tasklist);
 				mem_cgroup_out_of_memory(mem_over_limit, gfp_mask);
+				mutex_unlock(&memcg_tasklist);
 				mem_over_limit->last_oom_jiffies = jiffies;
 			}
 			goto nomem;
@@ -2211,10 +2214,12 @@ static void mem_cgroup_move_task(struct cgroup_subsys *ss,
 				struct cgroup *old_cont,
 				struct task_struct *p)
 {
+	mutex_lock(&memcg_tasklist);
 	/*
 	 * FIXME: It's better to move charges of this process from old
 	 * memcg to new memcg. But it's just on TODO-List now.
 	 */
+	mutex_unlock(&memcg_tasklist);
 }
 
 struct cgroup_subsys mem_cgroup_subsys = {
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index fd150e3a2567..40ba05061a4f 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -429,7 +429,6 @@ void mem_cgroup_out_of_memory(struct mem_cgroup *mem, gfp_t gfp_mask)
 	unsigned long points = 0;
 	struct task_struct *p;
 
-	cgroup_lock();
 	read_lock(&tasklist_lock);
 retry:
 	p = select_bad_process(&points, mem);
@@ -444,7 +443,6 @@ retry:
 		goto retry;
 out:
 	read_unlock(&tasklist_lock);
-	cgroup_unlock();
 }
 #endif