| author | Chuck Lever <chuck.lever@oracle.com> | 2011-12-06 16:13:48 -0500 |
|---|---|---|
| committer | Trond Myklebust <Trond.Myklebust@netapp.com> | 2012-01-05 11:59:18 -0500 |
| commit | 0aaaf5c424c7ffd6b0c4253251356558b16ef3a2 | |
| tree | 8ef0eebc41a8e247d52280fd79d36934a71fcb00 /fs/nfs/client.c | |
| parent | 414adf14cd3b52e411f79d941a15d0fd4af427fc | |
NFS: Cache state owners after files are closed
Servers have a finite amount of memory to store NFSv4 open and lock
owners. Moreover, servers may have a difficult time determining when
they can reap their state owner table, thanks to gray areas in the
NFSv4 protocol specification. Thus clients should be careful to reuse
state owners when possible.
Currently Linux is not too careful. When a user has closed all her
files on one mount point, the state owner's reference count goes to
zero, and it is released. The next OPEN allocates a new one. A
workload that serially opens and closes files can run through a large
number of open owners this way.
When a state owner's reference count goes to zero, slap it onto a free
list for that nfs_server, with an expiry time. Garbage collect before
looking for a state owner. This makes state owners for active users
available for re-use.
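The mechanism is easiest to see outside the kernel. Below is a minimal user-space sketch of the same free-list-with-expiry pattern, under simplified assumptions: struct owner, struct owner_cache, owner_get(), owner_put(), and owner_gc() are all hypothetical names, there is no locking, and owners are not matched to credentials. The real code hangs the list off struct nfs_server and serializes access with clp->cl_lock.

```c
#include <stdlib.h>
#include <time.h>

/* Hypothetical, simplified stand-in for an NFSv4 state owner. */
struct owner {
	struct owner *next;	/* link in the free list */
	time_t expires;		/* when an unused owner may be reaped */
	int refcount;
};

struct owner_cache {
	struct owner *lru;	/* most recently released owner first */
	int lease_secs;		/* how long to cache unused owners */
};

/* Reap cached owners whose expiry time has passed. */
static void owner_gc(struct owner_cache *c)
{
	struct owner **p = &c->lru;
	time_t now = time(NULL);

	while (*p != NULL) {
		if ((*p)->expires <= now) {
			struct owner *dead = *p;

			*p = dead->next;
			free(dead);
		} else
			p = &(*p)->next;
	}
}

/* Last reference dropped: cache the owner instead of freeing it. */
static void owner_put(struct owner_cache *c, struct owner *o)
{
	if (--o->refcount > 0)
		return;
	o->expires = time(NULL) + c->lease_secs;
	o->next = c->lru;
	c->lru = o;
}

/* OPEN path: garbage-collect first, then prefer a cached owner. */
static struct owner *owner_get(struct owner_cache *c)
{
	struct owner *o;

	owner_gc(c);
	o = c->lru;
	if (o != NULL) {
		c->lru = o->next;	/* reuse: no new owner needed */
	} else {
		o = calloc(1, sizeof(*o));
		if (o == NULL)
			return NULL;
	}
	o->refcount = 1;
	return o;
}
```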
Now that there can be unused state owners remaining at umount time,
purge the state owner free list when a server is destroyed. Also be
sure not to reclaim unused state owners during state recovery.
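Continuing the hypothetical sketch above, the umount-time purge simply drains the whole list without checking expiry times; nfs4_purge_state_owners() plays that role in the patch below. The sketch omits the state recovery subtlety just mentioned.

```c
/* Umount-time purge: free every cached owner, expired or not. */
static void owner_purge(struct owner_cache *c)
{
	struct owner *o;

	while ((o = c->lru) != NULL) {
		c->lru = o->next;
		free(o);
	}
}
```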
This change has benefits for the client as well. For some workloads,
this approach drops the number of OPEN_CONFIRM calls from one per
OPEN call down to just one in total. This reduces wire traffic
and thus open(2) latency. Before this patch, untarring a kernel
source tarball shows the OPEN_CONFIRM call counter steadily increasing
through the test. With the patch, the OPEN_CONFIRM count remains at 1
throughout the entire untar.
As long as the expiry time is kept short, I don't think garbage
collection should be terribly expensive, although it does bounce the
clp->cl_lock around a bit.
[ At some point we should rationalize the use of the nfs_server
->destroy method. ]
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
[Trond: Fixed a garbage collection race and a few efficiency issues]
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Diffstat (limited to 'fs/nfs/client.c')
-rw-r--r-- | fs/nfs/client.c | 8 |
1 file changed, 8 insertions, 0 deletions
```diff
diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index 32ea37198e93..41bd67f80d31 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -250,6 +250,11 @@ static void pnfs_init_server(struct nfs_server *server)
 	rpc_init_wait_queue(&server->roc_rpcwaitq, "pNFS ROC");
 }
 
+static void nfs4_destroy_server(struct nfs_server *server)
+{
+	nfs4_purge_state_owners(server);
+}
+
 #else
 static void nfs4_shutdown_client(struct nfs_client *clp)
 {
@@ -1065,6 +1070,7 @@ static struct nfs_server *nfs_alloc_server(void)
 	INIT_LIST_HEAD(&server->master_link);
 	INIT_LIST_HEAD(&server->delegations);
 	INIT_LIST_HEAD(&server->layouts);
+	INIT_LIST_HEAD(&server->state_owners_lru);
 
 	atomic_set(&server->active, 0);
 
@@ -1538,6 +1544,7 @@ static int nfs4_server_common_setup(struct nfs_server *server,
 	nfs_server_insert_lists(server);
 	server->mount_time = jiffies;
+	server->destroy = nfs4_destroy_server;
 out:
 	nfs_free_fattr(fattr);
 	return error;
 }
@@ -1719,6 +1726,7 @@ struct nfs_server *nfs_clone_server(struct nfs_server *source,
 
 	/* Copy data from the source */
 	server->nfs_client = source->nfs_client;
+	server->destroy = source->destroy;
 	atomic_inc(&server->nfs_client->cl_count);
 	nfs_server_copy_userdata(server, source);
```