path: root/fs/coredump.c
author	Bob Peterson <rpeterso@redhat.com>	2013-11-06 10:55:52 -0500
committer	Steven Whitehouse <swhiteho@redhat.com>	2014-01-03 09:57:02 +0000
commit	5ce13431dd3365d5dd4f3890394dac59b687c0ed (patch)
tree	075a5a73e1fd5e121ec68c75f323aeba53869633 /fs/coredump.c
parent	7a262d2ed9fa42fad8c4f243f8025580b58cf2f6 (diff)
download	linux-5ce13431dd3365d5dd4f3890394dac59b687c0ed.tar.bz2
GFS2: If requested is too large, use the largest extent in the rgrp
Here is a second try at a patch I posted earlier, which also implements suggestions Steve made:

Before this patch, GFS2 would keep searching through all the rgrps until it found one that had a chunk of free blocks big enough to satisfy the size hint, which is based on the file write size, regardless of whether the chunk was big enough to perform the write.

However, when doing big writes there may not be a large enough chunk of free blocks in any rgrp, due to file system fragmentation. The largest chunk may be big enough to satisfy the write request, but it may not meet the ideal reservation size from the "size hint". The writes would slow to a crawl because every write would search every rgrp, then finally give up and default to a single-block write. In my case, performance would drop from 425MB/s to 18KB/s, or 24000 times slower.

This patch basically makes it so that if we can't find a contiguous chunk of blocks big enough to satisfy the sizehint, we'll use the largest chunk of blocks we found that will still contain the write. It does so by keeping track of the largest run of blocks within the rgrp.

Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
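
For illustration only, here is a minimal sketch in plain C of the fallback idea described above: remember the largest contiguous free run seen while scanning a resource group, and use it when no run meets the size hint but the write itself still fits. The names (rgrp_reserve, free_run, free_map) are hypothetical and this is not the actual GFS2 code.

/*
 * Sketch, not the GFS2 implementation: scan a resource group's
 * free-block map, remembering the largest contiguous run of free
 * blocks.  If no run satisfies the reservation size hint, fall back
 * to the largest run that can still hold the write, instead of
 * defaulting to single-block allocations.
 */
#include <stdbool.h>
#include <stdint.h>

struct free_run {
	uint64_t start;	/* first free block of the run */
	uint64_t len;	/* number of contiguous free blocks */
};

bool rgrp_reserve(const bool *free_map, uint64_t rg_blocks,
		  uint64_t size_hint, uint64_t write_blocks,
		  struct free_run *out)
{
	struct free_run cur = { 0, 0 }, largest = { 0, 0 };
	uint64_t i;

	for (i = 0; i < rg_blocks; i++) {
		if (free_map[i]) {
			if (cur.len == 0)
				cur.start = i;
			cur.len++;
			if (cur.len >= size_hint) {
				*out = cur;	/* ideal: meets the size hint */
				return true;
			}
		} else {
			if (cur.len > largest.len)
				largest = cur;	/* remember the best run so far */
			cur.len = 0;
		}
	}
	if (cur.len > largest.len)
		largest = cur;

	/* No run met the hint; use the largest one if the write still fits. */
	if (largest.len >= write_blocks) {
		*out = largest;
		return true;
	}
	return false;
}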
Diffstat (limited to 'fs/coredump.c')
0 files changed, 0 insertions, 0 deletions