| author | Boaz Harrosh <bharrosh@panasas.com> | 2011-09-28 11:55:51 +0300 |
|---|---|---|
| committer | Boaz Harrosh <bharrosh@panasas.com> | 2011-10-14 18:52:50 +0200 |
| commit | b916c5cd4d895a27b47a652648958f73e4f23ac6 (patch) | |
| tree | 9fe6e59edd44119c79a18b9df0b02a0c4dacb6d1 /include | |
| parent | d866d875f68fdeae63df334d291fe138dc636d96 (diff) | |
| download | linux-b916c5cd4d895a27b47a652648958f73e4f23ac6.tar.bz2 | |
ore: Only IO one group at a time (API change)
Usually a single IO is confined to one group of devices
(group_width), but at the boundary of a raid group it can
spill into a second group. The current code would allocate a
full device_table-sized array at each io_state so it could
accommodate requests that span two groups. Needless to say,
that is very wasteful, especially when the device_table count
can get very large (hundreds, even thousands), while a
group_width is usually 8 or 10.
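For illustration only (not part of the patch): a minimal sketch of the
group-boundary arithmetic described above, assuming a layout where each
raid group spans group_width devices, group_depth stripes deep, with
stripe_unit bytes per device per stripe. The struct and function names
below are hypothetical, not the ORE ones.

```c
#include <stdint.h>

/*
 * Hypothetical layout description. The real ORE keeps similar values in
 * struct ore_layout; the exact fields used here are an assumption.
 */
struct layout_sketch {
	uint64_t stripe_unit;	/* bytes per device per stripe */
	uint64_t group_width;	/* data devices in one raid group */
	uint64_t group_depth;	/* stripes in a group before the next group starts */
};

/* Trim @length so an IO starting at @offset stays inside one raid group. */
static uint64_t trim_to_group(const struct layout_sketch *l,
			      uint64_t offset, uint64_t length)
{
	uint64_t group_bytes = l->stripe_unit * l->group_width * l->group_depth;
	uint64_t room = group_bytes - offset % group_bytes;

	return length < room ? length : room;
}
```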
* Change the ore API to trim an IO that spans two raid groups.
The user passes offset+length to ore_get_rw_state; the ore
might trim that length if it crosses a group boundary.
The user must check ios->length or ios->nrpages to see
how much IO will actually be performed. It is the
responsibility of the user to re-issue the remainder of
the IO (see the caller sketch after this list).
* Modify exofs to copy spilled pages onto the next IO.
This means one last kick is needed after all coalescing
of pages is done.
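A hedged caller-side sketch of the new contract: ore_get_rw_state() may
trim ios->length at a raid-group boundary, so the caller loops and
re-issues the remainder. ore_get_rw_state() and ios->length are named by
this commit; the exact argument order shown, and the use of ore_read()
and ore_put_io_state() with synchronous completion, are assumptions about
the surrounding ORE code, and error handling is abbreviated.

```c
/* Sketch only -- not the kernel code from this patch. */
static int read_whole_range(struct ore_layout *layout,
			    struct ore_components *comps,
			    u64 offset, u64 length)
{
	while (length) {
		struct ore_io_state *ios;
		int ret;

		/* ORE may trim the requested length at a group boundary. */
		ret = ore_get_rw_state(layout, comps, true /* reading */,
				       offset, length, &ios);
		if (ret)
			return ret;

		ret = ore_read(ios);	/* issue only what fits in this group */
		if (!ret) {
			/* ios->length is the (possibly trimmed) amount issued */
			offset += ios->length;
			length -= ios->length;
		}
		ore_put_io_state(ios);
		if (ret)
			return ret;
	}
	return 0;
}
```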
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Diffstat (limited to 'include')
0 files changed, 0 insertions, 0 deletions