author      Ilya Dryomov <idryomov@gmail.com>    2017-02-08 18:57:48 +0100
committer   Ilya Dryomov <idryomov@gmail.com>    2017-02-20 12:16:11 +0100
commit      ef9324bb11357c02a4f0529b806341e5b768d872 (patch)
tree        b759205966ff836406bca39d89282334438d5007 /net/ceph
parent      743efcffffc6620ab44ea9ec67c7e4e28dfa7742 (diff)
download    linux-ef9324bb11357c02a4f0529b806341e5b768d872.tar.bz2
libceph: don't go through with the mapping if the PG is too wide
With EC overwrites maturing, the kernel client will be getting exposed
to potentially very wide EC pools. While "min(pi->size, X)" works fine
when the cluster is stable and happy, truncating OSD sets interferes
with resend logic (ceph_is_new_interval(), etc). Abort the mapping if
the pool is too wide, assigning the request to the homeless session.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Diffstat (limited to 'net/ceph')
-rw-r--r--    net/ceph/osdmap.c    10
1 file changed, 8 insertions, 2 deletions
diff --git a/net/ceph/osdmap.c b/net/ceph/osdmap.c
index 2374956c4d40..6824c0ec8373 100644
--- a/net/ceph/osdmap.c
+++ b/net/ceph/osdmap.c
@@ -2013,8 +2013,14 @@ static void pg_to_raw_osds(struct ceph_osdmap *osdmap,
 		return;
 	}
 
-	len = do_crush(osdmap, ruleno, pps, raw->osds,
-		       min_t(int, pi->size, ARRAY_SIZE(raw->osds)),
+	if (pi->size > ARRAY_SIZE(raw->osds)) {
+		pr_err_ratelimited("pool %lld ruleset %d type %d too wide: size %d > %zu\n",
+				   pi->id, pi->crush_ruleset, pi->type, pi->size,
+				   ARRAY_SIZE(raw->osds));
+		return;
+	}
+
+	len = do_crush(osdmap, ruleno, pps, raw->osds, pi->size,
 		       osdmap->osd_weight, osdmap->max_osd);
 	if (len < 0) {
 		pr_err("error %d from crush rule %d: pool %lld ruleset %d type %d size %d\n",
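The behavioural change is easiest to see outside the kernel. Below is a minimal userspace sketch, not the libceph code itself: RAW_OSDS_MAX stands in for ARRAY_SIZE(raw->osds) and the pool width of 20 is just an example of a wide EC profile; both values are assumptions for illustration.

/*
 * Standalone sketch only -- RAW_OSDS_MAX and the pool width are made up
 * here; the kernel checks pi->size against ARRAY_SIZE(raw->osds).
 */
#include <stdio.h>

#define RAW_OSDS_MAX 16		/* stand-in for ARRAY_SIZE(raw->osds) */

/*
 * Old behaviour: clamp the acting-set width to the array, silently
 * dropping the tail OSDs.  The truncated set can change membership
 * without the interval-change logic noticing.
 */
static int map_width_truncating(int pool_size)
{
	return pool_size < RAW_OSDS_MAX ? pool_size : RAW_OSDS_MAX;
}

/*
 * New behaviour: refuse to map a too-wide pool at all; the caller is
 * left with an empty OSD set and parks the request instead of sending
 * it with a shortened set.
 */
static int map_width_or_bail(int pool_size)
{
	if (pool_size > RAW_OSDS_MAX) {
		fprintf(stderr, "pool too wide: size %d > %d\n",
			pool_size, RAW_OSDS_MAX);
		return 0;	/* nothing mapped */
	}
	return pool_size;
}

int main(void)
{
	int wide_pool = 20;	/* e.g. a k+m EC profile wider than 16 */

	printf("old: %d of %d OSDs mapped (truncated)\n",
	       map_width_truncating(wide_pool), wide_pool);
	printf("new: %d OSDs mapped (mapping aborted)\n",
	       map_width_or_bail(wide_pool));
	return 0;
}

With the guard in place, a request against a too-wide pool ends up on the homeless session rather than being sent with a silently truncated OSD set, which is what confused ceph_is_new_interval() and the resend logic.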