From c09ff089aa62380ad904ea785bd713c56720270e Mon Sep 17 00:00:00 2001
From: Hugh Dickins
Date: Mon, 5 Mar 2012 20:52:55 -0800
Subject: page_cgroup: fix horrid swap accounting regression

Why is memcg's swap accounting so broken? Insane counts, wrong ownership,
unfreeable structures, which later get freed and then accessed after free.

Turns out to be a tiny little 3.3-rc1 regression in 9fb4b7cc0724
"page_cgroup: add helper function to get swap_cgroup": the helper function
(actually named lookup_swap_cgroup()) returns an address using void*
arithmetic, but the structure in question is a short.

Signed-off-by: Hugh Dickins
Reviewed-by: Bob Liu
Cc: Michal Hocko
Cc: KAMEZAWA Hiroyuki
Cc: Johannes Weiner
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 mm/page_cgroup.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
index de1616aa9b1e..1ccbd714059c 100644
--- a/mm/page_cgroup.c
+++ b/mm/page_cgroup.c
@@ -379,13 +379,15 @@ static struct swap_cgroup *lookup_swap_cgroup(swp_entry_t ent,
 	pgoff_t offset = swp_offset(ent);
 	struct swap_cgroup_ctrl *ctrl;
 	struct page *mappage;
+	struct swap_cgroup *sc;
 
 	ctrl = &swap_cgroup_ctrl[swp_type(ent)];
 	if (ctrlp)
 		*ctrlp = ctrl;
 	mappage = ctrl->map[offset / SC_PER_PAGE];
-	return page_address(mappage) + offset % SC_PER_PAGE;
+	sc = page_address(mappage);
+	return sc + offset % SC_PER_PAGE;
 }
 
 /**
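
For readers unfamiliar with GNU C pointer arithmetic, here is a minimal
standalone userspace sketch of the bug class the one-line fix addresses.
The struct contents, array size, and offset value below are simplified
stand-ins invented for illustration, not the kernel's actual definitions:
adding an offset to a void* (a GCC extension) advances by single bytes,
while adding it to a properly typed pointer advances by whole elements.

/*
 * Minimal userspace sketch (not kernel code) of the bug class fixed above.
 * "struct swap_cgroup" here is a simplified stand-in: in the kernel it
 * holds a single unsigned short, so sizeof() is 2.
 */
#include <stdio.h>

struct swap_cgroup {
	unsigned short id;
};

int main(void)
{
	static struct swap_cgroup map[64];	/* stand-in for one mapped page */
	void *base = map;			/* what page_address() hands back */
	unsigned long offset = 3;		/* stand-in for offset % SC_PER_PAGE */

	/* Buggy form: void* arithmetic (a GCC extension) steps in bytes. */
	struct swap_cgroup *wrong = base + offset;

	/* Fixed form: typed arithmetic steps in units of sizeof(struct swap_cgroup). */
	struct swap_cgroup *sc = base;
	struct swap_cgroup *right = sc + offset;

	printf("base=%p wrong=%p right=%p\n", base, (void *)wrong, (void *)right);
	/* wrong == base + 3 bytes (misaligned, inside entry 1);
	   right == base + 3 * sizeof(struct swap_cgroup), i.e. entry 3. */
	return 0;
}

This is why the patch introduces the local "struct swap_cgroup *sc" before
the addition: once the pointer is typed, the compiler scales the offset by
the element size and the helper returns the intended entry.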