| author | Linus Torvalds <torvalds@linux-foundation.org> | 2019-07-08 16:12:03 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2019-07-08 16:12:03 -0700 |
| commit | e1928328699a582a540b105e5f4c160832a7fdcb (patch) | |
| tree | f36bb303b8648189d7b5a7feb27e58fe9fe3b9f0 /security | |
| parent | 46f1ec23a46940846f86a91c46f7119d8a8b5de1 (diff) | |
| parent | 9156e545765e467e6268c4814cfa609ebb16237e (diff) | |
| download | linux-e1928328699a582a540b105e5f4c160832a7fdcb.tar.bz2 | |
Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking updates from Ingo Molnar:
 "The main changes in this cycle are:
   - rwsem scalability improvements, phase #2, by Waiman Long, which are
     rather impressive:
       "On a 2-socket 40-core 80-thread Skylake system with 40 reader
        and writer locking threads, the min/mean/max locking operations
        done in a 5-second testing window before the patchset were:
         40 readers, Iterations Min/Mean/Max = 1,807/1,808/1,810
         40 writers, Iterations Min/Mean/Max = 1,807/50,344/151,255
        After the patchset, they became:
         40 readers, Iterations Min/Mean/Max = 30,057/31,359/32,741
         40 writers, Iterations Min/Mean/Max = 94,466/95,845/97,098"
     There are a lot of changes to the locking implementation that
     make it similar to qrwlock, including owner handoff for fairer
     locking (a short rwsem usage sketch follows the quoted log).
     Another microbenchmark shows the improvements across the
     spectrum:
       "With a locking microbenchmark running on 5.1 based kernel, the
        total locking rates (in kops/s) on a 2-socket Skylake system
        with equal numbers of readers and writers (mixed) before and
        after this patchset were:
        # of Threads   Before Patch      After Patch
        ------------   ------------      -----------
             2            2,618             4,193
             4            1,202             3,726
             8              802             3,622
            16              729             3,359
            32              319             2,826
            64              102             2,744"
     The changes are extensive and the patch-set has been through
     several iterations addressing various locking workloads. There
     might be more regressions, but unless they are pathological I
     believe we want to use this new implementation as the baseline
     going forward.
   - jump-label optimizations by Daniel Bristot de Oliveira: the primary
     motivation was to remove IPI disturbance of isolated RT-workload
     CPUs, which resulted in the implementation of batched jump-label
     updates. Beyond improving the kernel's real-time characteristics,
     in one test this patchset cut static-key update overhead from
     57 msecs to just 1.4 msecs - a nice speedup as well (see the
     static-key sketch after the commit list).
   - atomic64_t cross-arch type cleanups by Mark Rutland: over the last
     ~10 years of atomic64_t existence the various types used by the
     APIs only had to be self-consistent within each architecture -
     which means they became wildly inconsistent across architectures.
     Mark puts an end to this by reworking all the atomic64
     implementations to use 's64' as the base type for atomic64_t, and
     to ensure that this type is consistently used for parameters and
     return values in the API, avoiding further problems in this area.
   - A large set of small improvements to lockdep by Yuyang Du: type
     cleanups, output cleanups, function return type and other cleanups
     all around the place.
   - A set of percpu ops cleanups and fixes by Peter Zijlstra.
   - Misc other changes - please see the Git log for more details"
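For readers unfamiliar with the API under test: the rwsem interface itself is unchanged by this series, only its implementation. Below is a minimal sketch of the reader/writer calls the benchmarks above exercise; `example_sem`, `example_value`, and both functions are hypothetical names for illustration, not part of the patchset.

```c
#include <linux/rwsem.h>

/* Hypothetical data protected by a hypothetical rwsem. */
static DECLARE_RWSEM(example_sem);
static int example_value;

/* Readers hold the lock shared, so they can run concurrently. */
static int example_read(void)
{
	int v;

	down_read(&example_sem);
	v = example_value;
	up_read(&example_sem);
	return v;
}

/* Writers hold the lock exclusive; the handoff logic added in this
 * series keeps a stream of readers from starving a waiting writer. */
static void example_write(int v)
{
	down_write(&example_sem);
	example_value = v;
	up_write(&example_sem);
}
```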
* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (82 commits)
  locking/lockdep: increase size of counters for lockdep statistics
  locking/atomics: Use sed(1) instead of non-standard head(1) option
  locking/lockdep: Move mark_lock() inside CONFIG_TRACE_IRQFLAGS && CONFIG_PROVE_LOCKING
  x86/jump_label: Make tp_vec_nr static
  x86/percpu: Optimize raw_cpu_xchg()
  x86/percpu, sched/fair: Avoid local_clock()
  x86/percpu, x86/irq: Relax {set,get}_irq_regs()
  x86/percpu: Relax smp_processor_id()
  x86/percpu: Differentiate this_cpu_{}() and __this_cpu_{}()
  locking/rwsem: Guard against making count negative
  locking/rwsem: Adaptive disabling of reader optimistic spinning
  locking/rwsem: Enable time-based spinning on reader-owned rwsem
  locking/rwsem: Make rwsem->owner an atomic_long_t
  locking/rwsem: Enable readers spinning on writer
  locking/rwsem: Clarify usage of owner's nonspinnable bit
  locking/rwsem: Wake up almost all readers in wait queue
  locking/rwsem: More optimal RT task handling of null owner
  locking/rwsem: Always release wait_lock before waking up tasks
  locking/rwsem: Implement lock handoff to prevent lock starvation
  locking/rwsem: Make rwsem_spin_on_owner() return owner state
  ...
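As background for the jump-label entries above: a static key turns an almost-always-false conditional into a patchable NOP in the instruction stream, and every enable/disable rewrites kernel text, which is where the batched-update and IPI-reduction work applies. A minimal sketch follows; `example_feature` and both functions are hypothetical names.

```c
#include <linux/jump_label.h>
#include <linux/printk.h>

/* Hypothetical static key, initially disabled. */
static DEFINE_STATIC_KEY_FALSE(example_feature);

void example_hot_path(void)
{
	/* Compiles to a NOP (or a jump, once enabled) rather than a
	 * load-and-test, so the disabled case costs almost nothing. */
	if (static_branch_unlikely(&example_feature))
		pr_info("example feature enabled\n");
}

void example_toggle(bool on)
{
	/* Each call patches kernel text; the batching in this series
	 * reduces the IPI traffic such patching generates. */
	if (on)
		static_branch_enable(&example_feature);
	else
		static_branch_disable(&example_feature);
}
```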
Diffstat (limited to 'security')
| -rw-r--r-- | security/apparmor/label.c | 8 | 
1 file changed, 4 insertions(+), 4 deletions(-)
```diff
diff --git a/security/apparmor/label.c b/security/apparmor/label.c
index 068e93c5d29c..59f1cc2557a7 100644
--- a/security/apparmor/label.c
+++ b/security/apparmor/label.c
@@ -76,7 +76,7 @@ void __aa_proxy_redirect(struct aa_label *orig, struct aa_label *new)
 	AA_BUG(!orig);
 	AA_BUG(!new);
 
-	lockdep_assert_held_exclusive(&labels_set(orig)->lock);
+	lockdep_assert_held_write(&labels_set(orig)->lock);
 
 	tmp = rcu_dereference_protected(orig->proxy->label,
 					&labels_ns(orig)->lock);
@@ -566,7 +566,7 @@ static bool __label_remove(struct aa_label *label, struct aa_label *new)
 	AA_BUG(!ls);
 	AA_BUG(!label);
 
-	lockdep_assert_held_exclusive(&ls->lock);
+	lockdep_assert_held_write(&ls->lock);
 
 	if (new)
 		__aa_proxy_redirect(label, new);
@@ -603,7 +603,7 @@ static bool __label_replace(struct aa_label *old, struct aa_label *new)
 	AA_BUG(!ls);
 	AA_BUG(!old);
 	AA_BUG(!new);
-	lockdep_assert_held_exclusive(&ls->lock);
+	lockdep_assert_held_write(&ls->lock);
 	AA_BUG(new->flags & FLAG_IN_TREE);
 
 	if (!label_is_stale(old))
@@ -640,7 +640,7 @@ static struct aa_label *__label_insert(struct aa_labelset *ls,
 	AA_BUG(!ls);
 	AA_BUG(!label);
 	AA_BUG(labels_set(label) != ls);
-	lockdep_assert_held_exclusive(&ls->lock);
+	lockdep_assert_held_write(&ls->lock);
 	AA_BUG(label->flags & FLAG_IN_TREE);
 
 	/* Figure out where to put new node */
```
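The AppArmor hunks above are a mechanical rename: the series renames lockdep_assert_held_exclusive() to lockdep_assert_held_write(). A minimal sketch of the pattern these call sites use; `example_lock` and `example_writer_only` are hypothetical names.

```c
#include <linux/lockdep.h>
#include <linux/rwsem.h>

/* Hypothetical lock guarding state that only writers may touch. */
static DECLARE_RWSEM(example_lock);

static void example_writer_only(void)
{
	/* With lockdep enabled this warns unless the caller holds
	 * example_lock for write; it compiles away otherwise. */
	lockdep_assert_held_write(&example_lock);

	/* ... modify state protected by example_lock ... */
}
```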