path: root/fs/ceph/mds_client.h
author    Jeff Layton <jlayton@kernel.org>    2020-04-01 18:27:25 -0400
committer Ilya Dryomov <idryomov@gmail.com>   2020-06-01 13:22:52 +0200
commit    d67c72e6cce99eab5ab9d62c599e33e5141ff8b4 (patch)
tree      9f615fbcc18b0ce30ac10f6a6809691c38f5359b /fs/ceph/mds_client.h
parent    1cf03a68e791b1673bc4daaa88a0820f34f538f8 (diff)
download  linux-d67c72e6cce99eab5ab9d62c599e33e5141ff8b4.tar.bz2
ceph: request expedited service on session's last cap flush
When flushing a lot of caps to the MDS's at once (e.g. for syncfs), we can end up waiting a substantial amount of time for MDS replies, due to the fact that it may delay some of them so that it can batch them up together in a single journal transaction. This can lead to stalls when calling sync or syncfs.

What we'd really like to do is request expedited service on the _last_ cap we're flushing back to the server. If the CHECK_CAPS_FLUSH flag is set on the request and the current inode was the last one on the session->s_cap_dirty list, then mark the request with CEPH_CLIENT_CAPS_SYNC.

Note that this heuristic is not perfect. New inodes can race onto the list after we've started flushing, but it does seem to fix some common use cases.

URL: https://tracker.ceph.com/issues/44744
Reported-by: Jan Fajerski <jfajerski@suse.com>
Signed-off-by: Jeff Layton <jlayton@kernel.org>
Reviewed-by: "Yan, Zheng" <zyan@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
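
The heuristic above can be illustrated with a small, self-contained model. This is only a sketch of the idea, not the kernel patch itself: the dirty_inode list, the prep_flush() helper and the CAPS_SYNC value below are hypothetical stand-ins for the real session->s_cap_dirty handling and CEPH_CLIENT_CAPS_SYNC in fs/ceph/caps.c. The only thing taken from the commit message is the rule that the flush for the last inode on the dirty list is marked for expedited (sync) handling.

/*
 * Hypothetical userspace model of the heuristic described in the commit
 * message: while flushing dirty caps, the flush request for the *last*
 * inode on the session's dirty list is flagged "sync" so the server
 * expedites it instead of batching it. Names are illustrative only.
 */
#include <stdio.h>
#include <stdbool.h>

#define CAPS_SYNC 0x1   /* stands in for CEPH_CLIENT_CAPS_SYNC */

struct dirty_inode {
	const char *name;
	struct dirty_inode *next;   /* models session->s_cap_dirty */
};

struct flush_req {
	const char *name;
	unsigned int flags;
};

/*
 * Build a flush request; ask for expedited (sync) handling only when we
 * are flushing and this inode is the last entry left on the dirty list.
 */
static struct flush_req prep_flush(const struct dirty_inode *ino, bool flushing)
{
	struct flush_req req = { .name = ino->name, .flags = 0 };

	if (flushing && ino->next == NULL)
		req.flags |= CAPS_SYNC;
	return req;
}

int main(void)
{
	struct dirty_inode c = { "inode-c", NULL };
	struct dirty_inode b = { "inode-b", &c };
	struct dirty_inode a = { "inode-a", &b };

	/* Only the final flush ("inode-c") is marked SYNC. */
	for (struct dirty_inode *p = &a; p; p = p->next) {
		struct flush_req req = prep_flush(p, true);
		printf("%s: flags=%s\n", req.name,
		       (req.flags & CAPS_SYNC) ? "SYNC" : "0");
	}
	return 0;
}

As in the commit message's caveat, a model like this still has the race: if a new entry is appended to the dirty list after the walk has decided which inode is "last", that later flush will not carry the sync flag.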
Diffstat (limited to 'fs/ceph/mds_client.h')
0 files changed, 0 insertions, 0 deletions