path: root/fs/dlm/lock.c
Commit history, newest first. Each entry shows: date, commit subject, author, files changed, and lines removed/added (-/+).
2012-07-16  dlm: fix missing dir remove  (David Teigland, 1 file, -2/+68)
I don't know exactly how, but in some cases, a dir record is not removed, or a new one is created when it shouldn't be. The result is that the dir node lookup returns a master node where the rsb does not exist. In this case, the master node will repeatedly return -EBADR for requests, and the lock requests will be stuck. Until all possible ways for this to happen can be eliminated, a simple and effective way to recover from this situation is for the supposed master node to send a standard remove message to the dir node when it receives a request for a resource it has no rsb for. Signed-off-by: David Teigland <teigland@redhat.com>
2012-07-16  dlm: fix conversion deadlock from recovery  (David Teigland, 1 file, -15/+40)
The process of rebuilding locks on a new master during recovery could re-order the locks on the convert queue, creating an "in place" conversion deadlock that would not be resolved. Fix this by not considering queue order when granting conversions after recovery. Signed-off-by: David Teigland <teigland@redhat.com>
2012-07-16  dlm: fix race between remove and lookup  (David Teigland, 1 file, -37/+144)
It was possible for a remove message on an old rsb to be sent after a lookup message on a new rsb, where the rsbs were for the same resource name. This could lead to a missing directory entry for the new rsb. It is fixed by keeping a copy of the resource name being removed until after the remove has been sent. A lookup checks if this in-progress remove matches the name it is looking up. Signed-off-by: David Teigland <teigland@redhat.com>
2012-07-16  dlm: use rsbtbl as resource directory  (David Teigland, 1 file, -203/+819)
Remove the dir hash table (dirtbl), and use the rsb hash table (rsbtbl) as the resource directory. It has always been an unnecessary duplication of information. This improves efficiency by using a single rsbtbl lookup in many cases where both rsbtbl and dirtbl lookups were needed previously. This eliminates the need to handle cases of rsbtbl and dirtbl being out of sync. In many cases there will be memory savings because the dir hash table no longer exists. Signed-off-by: David Teigland <teigland@redhat.com>
2012-05-02  dlm: fixes for nodir mode  (David Teigland, 1 file, -89/+197)
The "nodir" mode (statically assign master nodes instead of using the resource directory) has always been highly experimental, and never seriously used. This commit fixes a number of problems, making nodir much more usable. - Major change to recovery: recover all locks and restart all in-progress operations after recovery. In some cases it's not possible to know which in-progess locks to recover, so recover all. (Most require recovery in nodir mode anyway since rehashing changes most master nodes.) - Change the way nodir mode is enabled, from a command line mount arg passed through gfs2, into a sysfs file managed by dlm_controld, consistent with the other config settings. - Allow recovering MSTCPY locks on an rsb that has not yet been turned into a master copy. - Ignore RCOM_LOCK and RCOM_LOCK_REPLY recovery messages from a previous, aborted recovery cycle. Base this on the local recovery status not being in the state where any nodes should be sending LOCK messages for the current recovery cycle. - Hold rsb lock around dlm_purge_mstcpy_locks() because it may run concurrently with dlm_recover_master_copy(). - Maintain highbast on process-copy lkb's (in addition to the master as is usual), because the lkb can switch back and forth between being a master and being a process copy as the master node changes in recovery. - When recovering MSTCPY locks, flag rsb's that have non-empty convert or waiting queues for granting at the end of recovery. (Rename flag from LOCKS_PURGED to RECOVER_GRANT and similar for the recovery function, because it's not only resources with purged locks that need grant a grant attempt.) - Replace a couple of unnecessary assertion panics with error messages. Signed-off-by: David Teigland <teigland@redhat.com>
2012-04-26  dlm: improve error and debug messages  (David Teigland, 1 file, -85/+156)
Change some existing error/debug messages to collect more useful information, and add some new error/debug messages to address recently found problems. Signed-off-by: David Teigland <teigland@redhat.com>
2012-04-26  dlm: avoid unnecessary search in search_rsb  (David Teigland, 1 file, -0/+3)
If the rsb is found in the "keep" tree, but is not the right type (i.e. not MASTER), we can return immediately with the result. There's no point in going on to search the "toss" list as if we hadn't found it. Signed-off-by: David Teigland <teigland@redhat.com>
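For illustration, a minimal sketch of the early-return shape described above; the helper names (search_keep_tree, search_toss_tree, is_master) are placeholders, not the actual lock.c functions:

        /* found in the "keep" tree: report the result immediately, even
         * when the rsb is not the requested type; do not fall through
         * and search the "toss" tree as if nothing had been found */
        r = search_keep_tree(ls, name, len);
        if (r) {
                *r_ret = r;
                return is_master(r) ? 0 : -ENOTBLK;
        }

        /* only reached when the name was not in the "keep" tree */
        r = search_toss_tree(ls, name, len);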
2012-04-26  dlm: fix waiter recovery  (David Teigland, 1 file, -12/+31)
An outstanding remote operation (an lkb on the "waiter" list) could sometimes miss being resent during recovery. The decision was based on the lkb_nodeid field, which could have changed during an earlier aborted recovery, so it no longer represents the actual remote destination. The lkb_wait_nodeid is always the actual remote node, so it is the best value to use. Signed-off-by: David Teigland <teigland@redhat.com>
2012-04-23  dlm: fix QUECVT when convert queue is empty  (David Teigland, 1 file, -0/+12)
The QUECVT flag should not prevent conversions from being granted immediately when the convert queue is empty. Signed-off-by: David Teigland <teigland@redhat.com>
2012-03-08  dlm: fix slow rsb search in dir recovery  (David Teigland, 1 file, -4/+4)
The function used to find an rsb during directory recovery was searching the single linear list of rsb's. This wasted a lot of time compared to using the standard hash table to find the rsb. Signed-off-by: David Teigland <teigland@redhat.com>
2011-11-18  dlm: convert rsb list to rb_tree  (Bob Peterson, 1 file, -17/+70)
Change the linked lists to rb_tree's in the rsb hash table to speed up searches. Slow rsb searches were having a large impact on gfs2 performance due to the large number of dlm locks gfs2 uses. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: David Teigland <teigland@redhat.com>
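For illustration, a sketch of what an rb_tree keyed lookup looks like in kernel code; the struct and field names below are assumptions for the example, and the comparison is simplified to ignore differing name lengths:

#include <linux/rbtree.h>
#include <linux/string.h>

static struct dlm_rsb *rsb_search(struct rb_root *tree, const char *name,
                                  int len)
{
        struct rb_node *node = tree->rb_node;
        struct dlm_rsb *r;
        int rc;

        while (node) {
                r = rb_entry(node, struct dlm_rsb, res_hashnode);
                rc = memcmp(r->res_name, name, len);
                if (rc > 0)
                        node = node->rb_left;
                else if (rc < 0)
                        node = node->rb_right;
                else
                        return r;       /* O(log n) instead of a list walk */
        }
        return NULL;
}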
2011-07-15  dlm: use workqueue for callbacks  (David Teigland, 1 file, -12/+12)
Instead of creating our own kthread (dlm_astd) to deliver callbacks for all lockspaces, use a per-lockspace workqueue to deliver the callbacks. This eliminates complications and slowdowns from many lockspaces sharing the same thread. Signed-off-by: David Teigland <teigland@redhat.com>
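Roughly, the pattern looks like the sketch below; the ls_callback_wq and lkb_cb_work names are assumptions for illustration rather than guaranteed field names:

        /* lockspace creation: a private workqueue instead of the shared
         * dlm_astd kthread */
        ls->ls_callback_wq = alloc_workqueue("dlm_callback",
                                             WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
        if (!ls->ls_callback_wq)
                return -ENOMEM;

        /* queueing a cast/bast for delivery */
        queue_work(ls->ls_callback_wq, &lkb->lkb_cb_work);

        /* lockspace release */
        destroy_workqueue(ls->ls_callback_wq);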
2011-07-14  dlm: remove deadlock debug print  (David Teigland, 1 file, -3/+0)
gfs2 recently began using this feature heavily, creating more debug output than we want to see. Signed-off-by: David Teigland <teigland@redhat.com>
2011-07-12  dlm: improve rsb searches  (David Teigland, 1 file, -37/+82)
By pre-allocating rsb structs before searching the hash table, they can be inserted immediately. This avoids always having to repeat the search when adding the struct to the hash list. This also adds space to the rsb struct for a max resource name, so an rsb allocation can be used by any request. The constant size also allows us to finally use a slab for the rsb structs. Signed-off-by: David Teigland <teigland@redhat.com>
2011-07-11  dlm: keep lkbs in idr  (David Teigland, 1 file, -45/+24)
This is simpler and quicker than the hash table, and avoids needing to search the hash list for every new lkid to check if it's used. Signed-off-by: David Teigland <teigland@redhat.com>
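A sketch of the idea using the current idr API (the 2011 code used the older idr_pre_get()/idr_get_new() interface); ls_lkbidr, ls_lkbidr_spin and lkb_id are assumed names:

        int id;

        idr_preload(GFP_NOFS);
        spin_lock(&ls->ls_lkbidr_spin);
        /* the idr hands back an unused id directly; no hash-list walk is
         * needed to check whether a candidate lkid is already taken */
        id = idr_alloc(&ls->ls_lkbidr, lkb, 1, 0, GFP_NOWAIT);
        if (id >= 0)
                lkb->lkb_id = id;
        spin_unlock(&ls->ls_lkbidr_spin);
        idr_preload_end();

        if (id < 0)
                return id;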
2011-07-11  dlm: fix kmalloc args  (David Teigland, 1 file, -1/+1)
The gfp and size args were switched. Signed-off-by: David Teigland <teigland@redhat.com>
2011-07-11  dlm: don't do pointless NULL check, use kzalloc and fix order of arguments  (Jesper Juhl, 1 file, -6/+2)
In fs/dlm/lock.c in the dlm_scan_waiters() function there are 3 small issues:
1) There's no need to test the return value of the allocation and do a memset if it succeeds. Just use kzalloc() to obtain zeroed memory.
2) Since kfree() handles NULL pointers gracefully, the test of 'warned' against NULL before the kfree() after the loop is completely pointless. Remove it.
3) The arguments to kmalloc() (now kzalloc()) were swapped. Thanks to Dr. David Alan Gilbert for pointing this out.
Signed-off-by: Jesper Juhl <jj@chaosbits.net> Signed-off-by: David Teigland <teigland@redhat.com>
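A hedged before/after reconstruction of the three points (the surrounding dlm_scan_waiters() code is omitted and the exact allocation flags may differ):

        /* before: arguments swapped, manual zeroing, redundant NULL test */
        warned = kmalloc(GFP_KERNEL, num_nodes * sizeof(int));
        if (warned)
                memset(warned, 0, num_nodes * sizeof(int));
        /* loop that fills in warned[] goes here */
        if (warned)
                kfree(warned);

        /* after: kzalloc(size, flags) returns zeroed memory and
         * kfree(NULL) is a no-op, so both checks go away */
        warned = kzalloc(num_nodes * sizeof(int), GFP_KERNEL);
        /* loop that fills in warned[] goes here */
        kfree(warned);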
2011-05-24  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/teigland/dlm  (Linus Torvalds, 1 file, -40/+142)
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/teigland/dlm:
  dlm: make plock operation killable
  dlm: remove shared message stub for recovery
  dlm: delayed reply message warning
  dlm: Remove superfluous call to recalc_sigpending()
2011-04-05  dlm: remove shared message stub for recovery  (David Teigland, 1 file, -33/+49)
kmalloc a stub message struct during recovery instead of sharing the struct in the lockspace. This leaves the lockspace stub_ms only for faking downconvert replies, where it is never modified and sharing is not a problem. Also improve the debug messages in the same recovery function. Signed-off-by: David Teigland <teigland@redhat.com>
2011-04-01  dlm: delayed reply message warning  (David Teigland, 1 file, -7/+93)
Add an option (disabled by default) to print a warning message when a lock has been waiting a configurable amount of time for a reply message from another node. This is mainly for debugging. Signed-off-by: David Teigland <teigland@redhat.com>
2011-03-31  Fix common misspellings  (Lucas De Marchi, 1 file, -1/+1)
Fixes generated by 'codespell' and manually reviewed. Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
2011-03-10  dlm: record full callback state  (David Teigland, 1 file, -21/+17)
Change how callbacks are recorded for locks. Previously, information about multiple callbacks was combined into a couple of variables that indicated what the end result should be. In some situations, we could not tell from this combined state what the exact sequence of callbacks was, and would end up either delivering the callbacks in the wrong order or incorrectly suppressing callbacks as redundant. This new approach records all the data for each callback, leaving no uncertainty about what needs to be delivered. Signed-off-by: David Teigland <teigland@redhat.com>
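The shape of such a per-callback record might look like the sketch below; the field names are assumptions for illustration, not necessarily the exact dlm definitions:

#include <linux/types.h>

struct dlm_callback_sketch {
        u64     seq;            /* order in which the callback was queued */
        u32     flags;          /* cast or bast */
        int     mode;           /* cast: granted mode; bast: blocking mode */
        int     sb_status;      /* cast only: status copied to the lksb */
        u8      sb_flags;       /* cast only: flags copied to the lksb */
};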
2010-09-03  dlm: Don't send callback to node making lock request when "try 1cb" fails  (Steven Whitehouse, 1 file, -0/+3)
When converting a lock, an lkb is in the granted state and also being used to request a new state. In the case that the conversion was a "try 1cb" type which has failed, and if the new state was incompatible with the old state, a callback was being generated to the requesting node. This is incorrect as callbacks should only be sent to all the other nodes holding blocking locks. The requesting node should receive the normal (failed) response to its "try 1cb" conversion request only. This was discovered while debugging a performance problem on GFS2, however this fix speeds up GFS as well. In the GFS2 case the performance gain is over 10x for cases of write activity to an inode whose glock is cached on another, idle (wrt that glock) node. (comment added, dct) Signed-off-by: Steven Whitehouse <swhiteho@redhat.com> Tested-by: Abhijith Das <adas@redhat.com> Signed-off-by: David Teigland <teigland@redhat.com>
2010-04-30  dlm: cleanup remove unused code  (Dan Carpenter, 1 file, -4/+1)
Smatch complains because "lkb" is never NULL. Looking at it, the original code actually adds the new element to the end of the list fine, so we can just get rid of the if condition. This code is four years old and no one has complained so it must work. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: David Teigland <teigland@redhat.com>
2010-03-30  include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h  (Tejun Heo, 1 file, -0/+1)
percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion. http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:
* Scan files for gfp and slab usages and update includes such that only the necessary includes are there. ie. if only gfp is used, gfp.h, if slab is used, slab.h.
* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree or at the end if there doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.
2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.
3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed. e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64 which failed due to missing writeq).
   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org> Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-02-26  dlm: use bastmode in debugfs output  (David Teigland, 1 file, -2/+4)
The bast mode that appears in the debugfs output should be useful on both master and process nodes. lkb_highbast is currently printed, and is only useful on the master node. lkb_bastmode is only useful on the process node. This patch sets lkb_bastmode on the master node as well, and uses that value in the debugfs print. Signed-off-by: David Teigland <teigland@redhat.com>
2010-02-26  dlm: send reply before bast  (David Teigland, 1 file, -26/+84)
When the lock master processes a successful operation (request, convert, cancel, or unlock), it will process the effects of the change before sending the reply for the operation. The "effects" of the operation are: - blocking callbacks (basts) for any newly granted locks - waiting or converting locks that can now be granted The cast is queued on the local node when the reply from the lock master is received. This means that a lock holder can receive a bast for a lock mode that it doesn't yet know has been granted. Signed-off-by: David Teigland <teigland@redhat.com>
2010-02-24  dlm: fix ordering of bast and cast  (David Teigland, 1 file, -2/+2)
When both blocking and completion callbacks are queued for a lock, the dlm would always deliver the completion callback (cast) first. In some cases the blocking callback (bast) is queued before the cast, though, and should be delivered first. This patch keeps track of the order in which they were queued and delivers them in that order. This patch also keeps track of the granted mode in the last cast and eliminates the following bast if the bast mode is compatible with the preceding cast mode. This happens when a remotely mastered lock is demoted, e.g. EX->NL, in which case the local node queues a cast immediately after sending the demote message. In this way a cast can be queued for a mode, e.g. NL, that makes an in-transit bast extraneous. Signed-off-by: David Teigland <teigland@redhat.com>
2009-11-30  dlm: always use GFP_NOFS  (David Teigland, 1 file, -3/+3)
Replace all GFP_KERNEL and ls_allocation with GFP_NOFS. ls_allocation would be GFP_KERNEL for userland lockspaces and GFP_NOFS for file system lockspaces. It was discovered that any lockspaces on the system can affect all others by triggering memory reclaim in the file system which could in turn call back into the dlm to acquire locks, deadlocking dlm threads that were shared by all lockspaces, like dlm_recv. Signed-off-by: David Teigland <teigland@redhat.com>
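For example (illustrative allocation site; the dlm actually uses slab caches in several places):

        /* GFP_NOFS prevents this allocation from triggering filesystem
         * reclaim, which could call back into the dlm and deadlock the
         * threads shared by all lockspaces (e.g. dlm_recv) */
        lkb = kzalloc(sizeof(struct dlm_lkb), GFP_NOFS);
        if (!lkb)
                return -ENOMEM;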
2009-06-17  dlm: Fix uninitialised variable warning in lock.c  (Steven Whitehouse, 1 file, -1/+1)
  CC [M]  fs/dlm/lock.o
fs/dlm/lock.c: In function ‘find_rsb’:
fs/dlm/lock.c:438: warning: ‘r’ may be used uninitialized in this function

Since r is used on the error path to set r_ret, set it to NULL. Signed-off-by: Steven Whitehouse <swhiteho@redhat.com> Signed-off-by: David Teigland <teigland@redhat.com>
2009-03-11  dlm: ignore cancel on granted lock  (David Teigland, 1 file, -0/+7)
Return immediately from dlm_unlock(CANCEL) if the lock is granted and not being converted; there's nothing to cancel. Signed-off-by: David Teigland <teigland@redhat.com>
2009-03-11  dlm: clear defunct cancel state  (David Teigland, 1 file, -8/+45)
When a conversion completes successfully and finds that a cancel of the convert is still in progress (which is now a moot point), preemptively clear the state associated with outstanding cancel. That state could cause a subsequent conversion to be ignored. Also, improve the consistency and content of error and debug messages in this area. Signed-off-by: David Teigland <teigland@redhat.com>
2009-01-08  dlm: change rsbtbl rwlock to spinlock  (David Teigland, 1 file, -13/+13)
The rwlock is almost always used in write mode, so there's no reason to not use a spinlock instead. Signed-off-by: David Teigland <teigland@redhat.com>
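The usage pattern after the change is simply the following; the field and helper names are shown for illustration as recalled from that era of lock.c, not verified verbatim:

        spin_lock(&ls->ls_rsbtbl[bucket].lock);         /* was write_lock() */
        error = _search_rsb(ls, name, namelen, bucket, flags, &r);
        spin_unlock(&ls->ls_rsbtbl[bucket].lock);       /* was write_unlock() */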
2008-12-23  dlm: add time stamp of blocking callback  (David Teigland, 1 file, -0/+2)
Record the time the latest blocking callback was queued for a lock. This will be used for debugging in combination with lock queue timestamp changes in the previous patch. Signed-off-by: David Teigland <teigland@redhat.com>
2008-12-23  dlm: change lock time stamping  (David Teigland, 1 file, -10/+11)
Use ktime instead of jiffies for timestamping lkb's. Also stamp the time on every lkb whenever it's added to a resource queue, instead of just stamping locks subject to timeouts. This will allow us to use timestamps more widely for debugging all locks. Signed-off-by: David Teigland <teigland@redhat.com>
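In practice that means something like the following sketch (lkb_timestamp is the assumed field name):

        /* stamp whenever the lkb is added to a grant/convert/wait queue */
        lkb->lkb_timestamp = ktime_get();

        /* later, e.g. for a timeout check or a debug print */
        s64 waited_us = ktime_to_us(ktime_sub(ktime_get(),
                                              lkb->lkb_timestamp));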
2008-12-23  dlm: improve bast mode handling  (David Teigland, 1 file, -5/+3)
The lkb bastmode value is set in the context of processing the lock, and read by the dlm_astd thread. Because it's accessed in these two separate contexts, the writing/reading ought to be done under a lock. This is simple to do by setting it and reading it when the lkb is added to and removed from dlm_astd's callback list which is properly locked. Signed-off-by: David Teigland <teigland@redhat.com>
2008-07-14  dlm: fix uninitialized variable for search_rsb_list callers  (Benny Halevy, 1 file, -0/+1)
gcc 4.3.0 correctly emits the following warning. search_rsb_list does not set *r_ret if no dlm_rsb is found, and _search_rsb may pass the uninitialized value upstream on the error path when both calls to search_rsb_list return non-zero error. The fix sets *r_ret to NULL on search_rsb_list's not-found path. Signed-off-by: Benny Halevy <bhalevy@panasas.com> Signed-off-by: David Teigland <teigland@redhat.com>
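A simplified reconstruction of the fixed function (the R_MASTER handling on the found path is omitted for brevity):

static int search_rsb_list(struct list_head *head, const char *name, int len,
                           unsigned int flags, struct dlm_rsb **r_ret)
{
        struct dlm_rsb *r;

        list_for_each_entry(r, head, res_hashchain) {
                if (len == r->res_length && !memcmp(r->res_name, name, len))
                        goto found;
        }
        *r_ret = NULL;          /* the fix: never leave *r_ret unset */
        return -EBADR;
 found:
        *r_ret = r;
        return 0;
}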
2008-07-14  dlm: fix basts for granted CW waiting PR/CW  (David Teigland, 1 file, -1/+2)
The fix in commit 3650925893469ccb03dbcc6a440c5d363350f591 was addressing the case of a granted PR lock with waiting PR and CW locks. It's a special case that requires forcing a CW bast. However, that forced CW bast was incorrectly applying to a second condition where the granted lock was CW. So, the holder of a CW lock could receive an extraneous CW bast instead of a PR bast. This fix narrows the original special case to what was intended. Signed-off-by: David Teigland <teigland@redhat.com>
2008-04-21  dlm: save master info after failed no-queue request  (David Teigland, 1 file, -2/+1)
When a NOQUEUE request fails, the rsb res_master field is unnecessarily reset to -1, instead of leaving the valid master setting in place. We want to save the looked-up master values while the rsb is on the "toss list" so that another lookup can be avoided if the rsb is soon reused. The fix is to simply leave res_master value alone. Signed-off-by: David Teigland <teigland@redhat.com>
2008-04-21  dlm: make dlm_print_rsb() static  (Adrian Bunk, 1 file, -1/+1)
dlm_print_rsb() can now become static. Signed-off-by: Adrian Bunk <bunk@kernel.org> Signed-off-by: David Teigland <teigland@redhat.com>
2008-02-06  dlm: eliminate astparam type casting  (David Teigland, 1 file, -8/+6)
Put lkb_astparam in a union with a dlm_user_args pointer to eliminate a lot of type casting. Signed-off-by: David Teigland <teigland@redhat.com>
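Schematically, it looks like the sketch below (other lkb members omitted; dlm_user_args is only forward-declared here):

struct dlm_user_args;                   /* defined in the dlm user code */

struct dlm_lkb_sketch {
        /* other lkb fields go here */
        union {
                void                    *lkb_astparam; /* caller's opaque ast arg */
                struct dlm_user_args    *lkb_ua;       /* userland locks: no casts */
        };
};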
2008-02-06  dlm: proper types for asts and basts  (David Teigland, 1 file, -18/+32)
Use proper types for ast and bast functions, and use consistent type for ast param. Signed-off-by: David Teigland <teigland@redhat.com>
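The callback prototypes that dlm_lock() takes are essentially the following; the typedef names here are only for illustration:

typedef void (*dlm_astfn_t)(void *astarg);              /* completion ast */
typedef void (*dlm_bastfn_t)(void *astarg, int mode);   /* blocking ast   */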
2008-02-04  dlm: fix overflows when copying from ->m_extra to lvb  (Al Viro, 1 file, -0/+4)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: David Teigland <teigland@redhat.com>
2008-02-04  dlm: make find_rsb() fail gracefully when namelen is too large  (Al Viro, 1 file, -1/+5)
We *can* get there from receive_request() and dlm_recover_master_copy() with namelen too large if incoming request is invalid; BUG() from DLM_ASSERT() in allocate_rsb() is a bit excessive reaction to that and in case of dlm_recover_master_copy() we would actually oops before that while calculating hash of up to 64Kb worth of data - with data actually being 64 _bytes_ in kmalloc()'ed struct. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: David Teigland <teigland@redhat.com>
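The graceful check amounts to something like the following (illustrative; the exact error code may differ):

        if (namelen > DLM_RESNAME_MAXLEN)       /* 64-byte limit on the wire */
                return -EINVAL;                 /* instead of DLM_ASSERT/BUG() */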
2008-02-04  dlm: receive_rcom_lock_args() overflow check  (Al Viro, 1 file, -3/+4)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: David Teigland <teigland@redhat.com>
2008-02-04  dlm: verify that places expecting rcom_lock have packet long enough  (Al Viro, 1 file, -0/+3)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: David Teigland <teigland@redhat.com>
2008-02-04  dlm: do not byteswap rcom_lock  (Al Viro, 1 file, -15/+19)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: David Teigland <teigland@redhat.com>
2008-02-04  dlm: dlm_process_incoming_buffer() fixes  (Al Viro, 1 file, -10/+9)
* check that length is large enough to cover the non-variable part of message or rcom resp. (after checking that it's large enough to cover the header, of course).
* kill more pointless casts
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: David Teigland <teigland@redhat.com>
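A sketch of the two-stage length check (the dlm header/message/rcom struct names are real, but the surrounding code and error codes are illustrative):

        if (msglen < sizeof(struct dlm_header))
                return -EINVAL;         /* can't even read the header */

        /* now check the fixed, non-variable part of the specific type */
        if (hd->h_cmd == DLM_MSG && msglen < sizeof(struct dlm_message))
                return -EBADMSG;
        if (hd->h_cmd == DLM_RCOM && msglen < sizeof(struct dlm_rcom))
                return -EBADMSG;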
2008-02-04  dlm: use proper C for dlm/requestqueue stuff (and fix alignment bug)  (Al Viro, 1 file, -1/+1)
a) don't cast the pointer to dlm_header *; we use it as dlm_message * anyway.
b) we copy the message into a queue element, then pass the pointer to the copy to dlm_receive_message_saved(); declare it properly to make sure that we have the right alignment.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: David Teigland <teigland@redhat.com>
2008-01-30  dlm: keep cached master rsbs during recovery  (David Teigland, 1 file, -6/+0)
To prevent the master of an rsb from changing rapidly, an unused rsb is kept on the "toss list" for a period of time to be reused. The toss list was being cleared completely for each recovery, which is unnecessary. Much of the benefit of the toss list can be maintained if nodes keep rsb's in their toss list that they are the master of. These rsb's need to be included when the resource directory is rebuilt during recovery. Signed-off-by: David Teigland <teigland@redhat.com>