author     Waiman Long <longman@redhat.com>     2019-05-20 16:59:15 -0400
committer  Ingo Molnar <mingo@kernel.org>       2019-06-17 12:28:11 +0200
commit     a15ea1a35f1b2782befc8b958c123c5d6a7cab0a (patch)
tree       ed242d5c8675a56925113865edd691fa0eec5f7b /arch/x86/include/asm/percpu.h
parent     5cfd92e12e13432251981b9d0cd68dbd7aa8d690 (diff)
download   linux-a15ea1a35f1b2782befc8b958c123c5d6a7cab0a.tar.bz2
locking/rwsem: Guard against making count negative
The upper bits of the count field are used as the reader count. When a
sufficient number of active readers are present, the most significant
bit will be set and the count becomes negative. If the number of active
readers keeps piling up, we may eventually overflow the reader count.
This is not likely to happen unless the number of bits reserved for the
reader count is reduced because those bits are needed for other purposes.
To prevent this count overflow from happening, the most significant
bit is now treated as a guard bit (RWSEM_FLAG_READFAIL). Read-lock
attempts will now fail for both the fast and slow paths whenever this
bit is set, so all those extra readers will be put to sleep on the wait
list. Wakeup will not happen until the reader count reaches 0.
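[Editor's note: for illustration only, the following is a minimal user-space
sketch of the guard-bit idea using C11 atomics, not the code in this commit.
The RWSEM_FLAG_READFAIL name comes from the patch description; the bit
layout, the rwsem_sketch type and the rwsem_read_trylock_sketch() helper are
assumptions made for the example.]

#include <stdatomic.h>
#include <stdbool.h>

/* Assumed layout: MSB is the read-fail guard bit, upper bits count readers. */
#define RWSEM_FLAG_READFAIL	(1ULL << 63)
#define RWSEM_READER_SHIFT	8
#define RWSEM_READER_BIAS	(1ULL << RWSEM_READER_SHIFT)

struct rwsem_sketch {
	atomic_ullong count;
};

/*
 * Reader fast path sketch: add one reader bias, then fail if the guard
 * bit is set in the resulting count.  On failure the bias is backed out
 * and the caller is expected to fall into the slow path (wait list),
 * matching the behaviour described above.
 */
static bool rwsem_read_trylock_sketch(struct rwsem_sketch *sem)
{
	unsigned long long cnt;

	cnt = atomic_fetch_add_explicit(&sem->count, RWSEM_READER_BIAS,
					memory_order_acquire) + RWSEM_READER_BIAS;
	if (cnt & RWSEM_FLAG_READFAIL) {
		/* Too many readers: undo the bias and report failure. */
		atomic_fetch_sub_explicit(&sem->count, RWSEM_READER_BIAS,
					  memory_order_relaxed);
		return false;
	}
	return true;
}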
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: huang ying <huang.ying.caritas@gmail.com>
Link: https://lkml.kernel.org/r/20190520205918.22251-17-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>