Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel...
author Linus Torvalds <torvalds@linux-foundation.org>
Mon, 22 Jun 2015 21:54:22 +0000 (14:54 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Mon, 22 Jun 2015 21:54:22 +0000 (14:54 -0700)
Pull locking updates from Ingo Molnar:
 "The main changes are:

   - 'qspinlock' support, enabled on x86: queued spinlocks - these are
     now the spinlock variant used by x86 as they outperform ticket
     spinlocks in every category.  (Waiman Long)

   - 'pvqspinlock' support on x86: paravirtualized variant of queued
     spinlocks.  (Waiman Long, Peter Zijlstra)

   - 'qrwlock' support, enabled on x86: queued rwlocks.  Similar to
     queued spinlocks, they are now the variant used by x86:

       CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y
       CONFIG_QUEUED_SPINLOCKS=y
       CONFIG_ARCH_USE_QUEUED_RWLOCKS=y
       CONFIG_QUEUED_RWLOCKS=y

   - various lockdep fixlets

   - various locking primitives cleanups, further WRITE_ONCE()
     propagation"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (24 commits)
  locking/lockdep: Remove hard coded array size dependency
  locking/qrwlock: Don't contend with readers when setting _QW_WAITING
  lockdep: Do not break user-visible string
  locking/arch: Rename set_mb() to smp_store_mb()
  locking/arch: Add WRITE_ONCE() to set_mb()
  rtmutex: Warn if trylock is called from hard/softirq context
  arch: Remove __ARCH_HAVE_CMPXCHG
  locking/rtmutex: Drop usage of __HAVE_ARCH_CMPXCHG
  locking/qrwlock: Rename QUEUE_RWLOCK to QUEUED_RWLOCKS
  locking/pvqspinlock: Rename QUEUED_SPINLOCK to QUEUED_SPINLOCKS
  locking/pvqspinlock: Replace xchg() by the more descriptive set_mb()
  locking/pvqspinlock, x86: Enable PV qspinlock for Xen
  locking/pvqspinlock, x86: Enable PV qspinlock for KVM
  locking/pvqspinlock, x86: Implement the paravirt qspinlock call patching
  locking/pvqspinlock: Implement simple paravirt support for the qspinlock
  locking/qspinlock: Revert to test-and-set on hypervisors
  locking/qspinlock: Use a simple write to grab the lock
  locking/qspinlock: Optimize for smaller NR_CPUS
  locking/qspinlock: Extract out code snippets for the next patch
  locking/qspinlock: Add pending bit
  ...

Documentation/memory-barriers.txt
arch/powerpc/include/asm/barrier.h
include/linux/compiler.h
include/linux/sched.h
kernel/locking/lockdep.c
kernel/locking/rtmutex.c

Simple merge
Simple merge
index 5d66777914dbae3485ec79d1b8140b70843fc780,03e227ba481c419ab468d697469a4418eec9f48f..05be2352fef889663fad482f57c4d8b9d5e18df4
@@@ -250,24 -250,8 +250,24 @@@ static __always_inline void __write_onc
        ({ union { typeof(x) __val; char __c[1]; } __u; __read_once_size(&(x), __u.__c, sizeof(x)); __u.__val; })
  
  #define WRITE_ONCE(x, val) \
-       ({ typeof(x) __val = (val); __write_once_size(&(x), &__val, sizeof(__val)); __val; })
+       ({ union { typeof(x) __val; char __c[1]; } __u = { .__val = (val) }; __write_once_size(&(x), __u.__c, sizeof(x)); __u.__val; })
  
 +/**
 + * READ_ONCE_CTRL - Read a value heading a control dependency
 + * @x: The value to be read, heading the control dependency
 + *
 + * Control dependencies are tricky.  See Documentation/memory-barriers.txt
 + * for important information on how to use them.  Note that in many cases,
 + * use of smp_load_acquire() will be much simpler.  Control dependencies
 + * should be avoided except on the hottest of hotpaths.
 + */
 +#define READ_ONCE_CTRL(x) \
 +({ \
 +      typeof(x) __val = READ_ONCE(x); \
 +      smp_read_barrier_depends(); /* Enforce control dependency. */ \
 +      __val; \
 +})
 +
  #endif /* __KERNEL__ */
  
  #endif /* __ASSEMBLY__ */
Simple merge
Simple merge
Simple merge