path: root/kernel/locking
| Age | Commit message | Author | Files | Lines (-/+) |
|------------|----------------|--------|-------|-------------|
| 2016-06-03 | locking/mutex: Set and clear owner using WRITE_ONCE() | Jason Low | 2 | -4/+10 |
| 2016-06-03 | locking/rwsem: Optimize write lock by reducing operations in slowpath | Jason Low | 1 | -7/+18 |
| 2016-06-03 | locking/rwsem: Rework zeroing reader waiter->task | Davidlohr Bueso | 1 | -10/+7 |
| 2016-06-03 | locking/rwsem: Enable lockless waiter wakeup(s) | Davidlohr Bueso | 1 | -16/+42 |
| 2016-06-03 | locking/ww_mutex: Report recursive ww_mutex locking early | Chris Wilson | 1 | -3/+6 |
| 2016-05-26 | add down_write_killable_nested() | Al Viro | 1 | -0/+16 |
| 2016-05-24 | Merge tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/t... | Linus Torvalds | 1 | -0/+1 |
| 2016-05-16 | Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/ker... | Linus Torvalds | 1 | -9/+62 |
| 2016-05-16 | Merge branch 'locking-rwsem-for-linus' of git://git.kernel.org/pub/scm/linux/... | Linus Torvalds | 3 | -8/+68 |
| 2016-05-15 | locking/rwsem: Fix down_write_killable() | Peter Zijlstra | 1 | -6/+15 |
| 2016-05-12 | Merge branch 'sched/urgent' into sched/core to pick up fixes | Ingo Molnar | 2 | -3/+36 |
| 2016-05-05 | locking/pvqspinlock: Robustify init_qspinlock_stat() | Davidlohr Bueso | 1 | -8/+14 |
| 2016-05-05 | locking/pvqspinlock: Avoid double resetting of stats | Davidlohr Bueso | 1 | -2/+0 |
| 2016-05-05 | Merge tag 'v4.6-rc6' into locking/core, to pick up fixes | Ingo Molnar | 3 | -6/+41 |
| 2016-05-05 | locking/lockdep, sched/core: Implement a better lock pinning scheme | Peter Zijlstra | 1 | -9/+62 |
| 2016-04-28 | lcoking/locktorture: Simplify the torture_runnable computation | Paul E. McKenney | 1 | -6/+1 |
| 2016-04-25 | ext4: fix races between changing inode journal mode and ext4_writepages | Daeho Jeong | 1 | -0/+1 |
| 2016-04-23 | lockdep: Fix lock_chain::base size | Peter Zijlstra | 2 | -1/+25 |
| 2016-04-23 | locking/lockdep: Fix ->irq_context calculation | Boqun Feng | 1 | -2/+11 |
| 2016-04-22 | locking/rwsem: Provide down_write_killable() | Michal Hocko | 1 | -0/+19 |
| 2016-04-19 | locking/pvqspinlock: Fix division by zero in qstat_read() | Davidlohr Bueso | 1 | -3/+5 |
| 2016-04-13 | locking/rwsem: Introduce basis for down_write_killable() | Michal Hocko | 2 | -8/+45 |
| 2016-04-13 | locking/rwsem: Get rid of __down_write_nested() | Michal Hocko | 1 | -6/+1 |
| 2016-04-13 | locking/lockdep: Deinline register_lock_class(), save 2328 bytes | Denys Vlasenko | 1 | -1/+1 |
| 2016-04-13 | locking/locktorture: Fix NULL pointer dereference for cleanup paths | Davidlohr Bueso | 1 | -0/+12 |
| 2016-04-13 | locking/locktorture: Fix deboosting NULL pointer dereference | Davidlohr Bueso | 1 | -3/+3 |
| 2016-04-04 | locking/lockdep: Fix print_collision() unused warning | Borislav Petkov | 1 | -0/+2 |
| 2016-03-31 | locking/lockdep: Print chain_key collision information | Alfredo Alvarez Fernandez | 1 | -2/+77 |
| 2016-03-22 | kernel: add kcov code coverage | Dmitry Vyukov | 1 | -0/+3 |
| 2016-03-15 | tags: Fix DEFINE_PER_CPU expansions | Peter Zijlstra | 1 | -2/+1 |
| 2016-02-29 | locking/lockdep: Detect chain_key collisions | Ingo Molnar | 1 | -8/+51 |
| 2016-02-29 | locking/lockdep: Prevent chain_key collisions | Alfredo Alvarez Fernandez | 1 | -8/+6 |
| 2016-02-29 | locking/mutex: Allow next waiter lockless wakeup | Davidlohr Bueso | 1 | -2/+3 |
| 2016-02-29 | locking/pvqspinlock: Enable slowpath locking count tracking | Waiman Long | 2 | -0/+8 |
| 2016-02-29 | locking/qspinlock: Use smp_cond_acquire() in pending code | Waiman Long | 1 | -4/+3 |
| 2016-02-29 | locking/pvqspinlock: Move lock stealing count tracking code into pv_queued_sp... | Waiman Long | 2 | -20/+9 |
| 2016-02-29 | locking/mcs: Fix mcs_spin_lock() ordering | Peter Zijlstra | 1 | -1/+7 |
| 2016-02-09 | locking/lockdep: Eliminate lockdep_init() | Andrey Ryabinin | 1 | -59/+0 |
| 2016-02-09 | locking/lockdep: Convert hash tables to hlists | Andrew Morton | 1 | -23/+19 |
| 2016-02-09 | locking/lockdep: Fix stack trace caching logic | Dmitry Vyukov | 1 | -6/+10 |
| 2016-01-26 | rtmutex: Make wait_lock irq safe | Thomas Gleixner | 1 | -63/+72 |
| 2016-01-11 | Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/k... | Linus Torvalds | 3 | -58/+576 |
| 2015-12-17 | locking/osq: Fix ordering of node initialisation in osq_lock | Will Deacon | 1 | -3/+5 |
| 2015-12-04 | locking/pvqspinlock: Queue node adaptive spinning | Waiman Long | 3 | -4/+50 |
| 2015-12-04 | locking/pvqspinlock: Allow limited lock stealing | Waiman Long | 3 | -28/+155 |
| 2015-12-04 | locking/pvqspinlock: Collect slowpath lock statistics | Waiman Long | 2 | -5/+308 |
| 2015-12-04 | locking, sched: Introduce smp_cond_acquire() and use it | Peter Zijlstra | 1 | -2/+1 |
| 2015-11-23 | locking/pvqspinlock, x86: Optimize the PV unlock code path | Waiman Long | 1 | -16/+27 |
| 2015-11-23 | locking/qspinlock: Avoid redundant read of next pointer | Waiman Long | 1 | -3/+6 |
| 2015-11-23 | locking/qspinlock: Prefetch the next node cacheline | Waiman Long | 1 | -0/+10 |