1. 05 August 2014, 1 commit
    • KVM: irqchip: Provide and use accessors for irq routing table · 8ba918d4
      Authored by Paul Mackerras
      This provides accessor functions for the KVM interrupt mappings, in
      order to reduce the amount of code that accesses the fields of the
      kvm_irq_routing_table struct, and restrict that code to one file,
      virt/kvm/irqchip.c.  The new functions are kvm_irq_map_gsi(), which
      maps from a global interrupt number to a set of IRQ routing entries,
      and kvm_irq_map_chip_pin(), which maps from IRQ chip and pin numbers to
      a global interrupt number.
      
      This also moves the update of kvm_irq_routing_table::chip[][]
      into irqchip.c, out of the various kvm_set_routing_entry
      implementations.  That means that none of the kvm_set_routing_entry
      implementations need the kvm_irq_routing_table argument anymore,
      so this removes it.
      
      This does not change any locking or data lifetime rules.
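      
      As a rough illustration of the accessor pattern described above, here is
      a stand-alone sketch (a toy model, not the kernel code; the table layout
      and most names are invented for illustration):
      
        /* Toy model: all table lookups go through two small helpers. */
        #include <stdio.h>
        #include <string.h>
        
        #define MAX_GSI     8
        #define MAX_ENTRIES 2            /* one GSI may have several entries */
        
        struct routing_entry { int chip; int pin; };
        
        struct routing_table {
                int nr_entries[MAX_GSI];
                struct routing_entry map[MAX_GSI][MAX_ENTRIES];
                int chip[2][8];          /* chip[irqchip][pin] -> gsi */
        };
        
        /* Copy out the routing entries for one GSI; returns how many. */
        static int map_gsi(const struct routing_table *t,
                           struct routing_entry *entries, int gsi)
        {
                int n = t->nr_entries[gsi];
        
                memcpy(entries, t->map[gsi], n * sizeof(*entries));
                return n;
        }
        
        /* Map an (irqchip, pin) pair back to a global interrupt number. */
        static int map_chip_pin(const struct routing_table *t,
                                unsigned irqchip, unsigned pin)
        {
                return t->chip[irqchip][pin];
        }
        
        int main(void)
        {
                struct routing_table t = { { 0 } };
                struct routing_entry e[MAX_ENTRIES];
        
                t.nr_entries[3] = 1;
                t.map[3][0] = (struct routing_entry){ .chip = 0, .pin = 5 };
                t.chip[0][5] = 3;
        
                printf("gsi 3 has %d routing entries\n", map_gsi(&t, e, 3));
                printf("chip 0 pin 5 -> gsi %d\n", map_chip_pin(&t, 0, 5));
                return 0;
        }
      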
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Tested-by: Eric Auger <eric.auger@linaro.org>
      Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      8ba918d4
  2. 01 August 2014, 1 commit
  3. 31 July 2014, 2 commits
  4. 30 July 2014, 5 commits
  5. 29 July 2014, 4 commits
    • clk: Add CLPS711X clk driver · 631c5347
      Authored by Alexander Shiyan
      This adds the clock driver for Cirrus Logic CLPS711X series SoCs
      using common clock infrastructure.
      It is designed primarily for migrating the CLPS711X subarch to
      multiplatform & DT; to support this, both "OF" and "non-OF"
      registration calls are implemented.
      Signed-off-by: Alexander Shiyan <shc_work@mail.ru>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Mike Turquette <mturquette@linaro.org>
      631c5347
    • ip: make IP identifiers less predictable · 04ca6973
      Authored by Eric Dumazet
      In "Counting Packets Sent Between Arbitrary Internet Hosts", Jeffrey and
      Jedidiah describe ways exploiting linux IP identifier generation to
      infer whether two machines are exchanging packets.
      
      With commit 73f156a6 ("inetpeer: get rid of ip_id_count"), we
      changed IP id generation, but this does not really prevent this
      side-channel technique.
      
      This patch adds a random amount of perturbation so that IP identifiers
      for a given destination [1] are no longer monotonically increasing after
      an idle period.
      
      Note that prandom_u32_max(1) returns 0, so if the generator is used at
      most once per jiffy, this patch inserts no hole in the ID sequence and
      does not increase collision probability.
      
      This is jiffies based, so in the worst case (HZ=1000), the id can
      rollover after ~65 seconds of idle time, which should be fine.
      
      We also change the hash used in __ip_select_ident() to not only hash
      on daddr, but also saddr and protocol, so that ICMP probes can not be
      used to infer information for other protocols.
      
      For IPv6, saddr is added to the hash as well, but not nexthdr.
      
      If I ping the patched target, we can see the IDs are now hard to predict.
      
      21:57:11.008086 IP (...)
          A > target: ICMP echo request, seq 1, length 64
      21:57:11.010752 IP (... id 2081 ...)
          target > A: ICMP echo reply, seq 1, length 64
      
      21:57:12.013133 IP (...)
          A > target: ICMP echo request, seq 2, length 64
      21:57:12.015737 IP (... id 3039 ...)
          target > A: ICMP echo reply, seq 2, length 64
      
      21:57:13.016580 IP (...)
          A > target: ICMP echo request, seq 3, length 64
      21:57:13.019251 IP (... id 3437 ...)
          target > A: ICMP echo reply, seq 3, length 64
      
      [1] TCP sessions use a per-flow ID generator that is not changed by this patch.
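      
      A stand-alone sketch of the reservation scheme described above (a toy
      model, not the kernel implementation; the bucket layout and helper names
      are invented here):
      
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        
        struct id_bucket { uint32_t id; uint32_t last_jiffy; };
        
        static uint32_t random_below(uint32_t n)  /* models prandom_u32_max() */
        {
                return n ? (uint32_t)rand() % n : 0;
        }
        
        static uint32_t reserve_id(struct id_bucket *b, uint32_t now_jiffies)
        {
                uint32_t delta = 0;
        
                if (b->last_jiffy != now_jiffies) {
                        delta = random_below(now_jiffies - b->last_jiffy);
                        b->last_jiffy = now_jiffies;
                }
                b->id += 1 + delta;   /* always +1, plus a bounded random jump */
                return b->id;
        }
        
        int main(void)
        {
                struct id_bucket b = { 0, 0 };
        
                printf("%u\n", reserve_id(&b, 1));    /* back-to-back use: +1 */
                printf("%u\n", reserve_id(&b, 1));
                printf("%u\n", reserve_id(&b, 1000)); /* after idling: +1..+999 */
                return 0;
        }
      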
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Jeffrey Knockel <jeffk@cs.unm.edu>
      Reported-by: Jedidiah R. Crandall <crandall@cs.unm.edu>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: Hannes Frederic Sowa <hannes@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      04ca6973
    • kthread_work: remove the unused wait_queue_head · 95847e1b
      Authored by Lai Jiangshan
      The wait_queue_head_t kthread_work->done is unused since
      flush_kthread_work() has been re-implemented.  Let's remove it
      including the initialization code.  This makes
      DEFINE_KTHREAD_WORK_ONSTACK() unnecessary, so it is removed as well.
      
      tj: Updated description.  Removed DEFINE_KTHREAD_WORK_ONSTACK().
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      95847e1b
    • KVM: PPC: Remove DCR handling · ce91ddc4
      Authored by Alexander Graf
      DCR handling was only needed for 440 KVM. Since we removed it, we can also
      remove handling of DCR accesses.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      ce91ddc4
  6. 28 July 2014, 3 commits
    • KVM: Allow KVM_CHECK_EXTENSION on the vm fd · 92b591a4
      Authored by Alexander Graf
      The KVM_CHECK_EXTENSION ioctl is only available on the kvm fd today.
      Unfortunately, on PPC some of the capabilities change depending on the
      way a VM was created.
      
      So instead we need a way to expose capabilities as a VM ioctl, so that
      we can see which VM type we're using (HV or PR). To enable this, add the
      KVM_CHECK_EXTENSION ioctl to our vm ioctl portfolio.
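      
      A hedged userspace sketch of the new usage (KVM_CAP_NR_VCPUS is only an
      example capability, and error handling is kept minimal):
      
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <linux/kvm.h>
        
        int main(void)
        {
                int kvm, vm, ret;
        
                kvm = open("/dev/kvm", O_RDWR);
                if (kvm < 0)
                        return 1;
                vm = ioctl(kvm, KVM_CREATE_VM, 0);
                if (vm < 0)
                        return 1;
        
                /* Previously the query had to go to 'kvm'; with this patch it
                 * can also go to 'vm' and reflect per-VM differences. */
                ret = ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS);
                printf("KVM_CAP_NR_VCPUS on this VM: %d\n", ret);
                return 0;
        }
      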
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      92b591a4
    • KVM: Rename and add argument to check_extension · 784aa3d7
      Authored by Alexander Graf
      In preparation for making the check_extension function available at VM
      scope, we add a struct kvm * argument to the function header and rename
      the function accordingly. It will still be called from the /dev/kvm fd,
      but with a NULL argument for struct kvm *.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      784aa3d7
    • KVM: PPC: Book3S: Controls for in-kernel sPAPR hypercall handling · 699a0ea0
      Authored by Paul Mackerras
      This provides a way for userspace to control which sPAPR hcalls get
      handled in the kernel.  Each hcall can be individually enabled or
      disabled for in-kernel handling, except for H_RTAS.  The exception
      for H_RTAS is because userspace can already control whether
      individual RTAS functions are handled in-kernel or not via the
      KVM_PPC_RTAS_DEFINE_TOKEN ioctl, and because the numeric value for
      H_RTAS is out of the normal sequence of hcall numbers.
      
      Hcalls are enabled or disabled using the KVM_ENABLE_CAP ioctl for the
      KVM_CAP_PPC_ENABLE_HCALL capability on the file descriptor for the VM.
      The args field of the struct kvm_enable_cap specifies the hcall number
      in args[0] and the enable/disable flag in args[1]; 0 means disable
      in-kernel handling (so that the hcall will always cause an exit to
      userspace) and 1 means enable.  Enabling or disabling in-kernel
      handling of an hcall is effective across the whole VM.
      
      The ability for KVM_ENABLE_CAP to be used on a VM file descriptor
      on PowerPC is new, added by this commit.  The KVM_CAP_ENABLE_CAP_VM
      capability advertises that this ability exists.
      
      When a VM is created, an initial set of hcalls are enabled for
      in-kernel handling.  The set that is enabled is the set that have
      an in-kernel implementation at this point.  Any new hcall
      implementations from this point onwards should not be added to the
      default set without a good reason.
      
      No distinction is made between real-mode and virtual-mode hcall
      implementations; the one setting controls them both.
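      
      A hedged userspace sketch of how one hcall could be switched with this
      interface (the helper name is invented; 0xE0/H_CEDE is only an example
      hcall number):
      
        #include <string.h>
        #include <sys/ioctl.h>
        #include <linux/kvm.h>
        
        /* Enable (1) or disable (0) in-kernel handling of one hcall. */
        static int set_hcall_in_kernel(int vm_fd, unsigned long hcall, int enable)
        {
                struct kvm_enable_cap cap;
        
                memset(&cap, 0, sizeof(cap));
                cap.cap = KVM_CAP_PPC_ENABLE_HCALL;
                cap.args[0] = hcall;   /* hcall number, e.g. 0xE0 for H_CEDE */
                cap.args[1] = enable;  /* 0: always exit to userspace, 1: in kernel */
                return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
        }
      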
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      699a0ea0
  7. 26 July 2014, 3 commits
  8. 25 July 2014, 1 commit
  9. 24 July 2014, 2 commits
  10. 23 July 2014, 2 commits
    • libata: introduce ata_host->n_tags to avoid oops on SAS controllers · 1a112d10
      Authored by Tejun Heo
      1871ee13 ("libata: support the ata host which implements a queue
      depth less than 32") directly used ata_port->scsi_host->can_queue from
      ata_qc_new() to determine the number of tags supported by the host;
      unfortunately, SAS controllers doing SATA don't initialize ->scsi_host
      leading to the following oops.
      
       BUG: unable to handle kernel NULL pointer dereference at 0000000000000058
       IP: [<ffffffff814e0618>] ata_qc_new_init+0x188/0x1b0
       PGD 0
       Oops: 0002 [#1] SMP
       Modules linked in: isci libsas scsi_transport_sas mgag200 drm_kms_helper ttm
       CPU: 1 PID: 518 Comm: udevd Not tainted 3.16.0-rc6+ #62
       Hardware name: Intel Corporation S2600CO/S2600CO, BIOS SE5C600.86B.02.02.0002.122320131210 12/23/2013
       task: ffff880c1a00b280 ti: ffff88061a000000 task.ti: ffff88061a000000
       RIP: 0010:[<ffffffff814e0618>]  [<ffffffff814e0618>] ata_qc_new_init+0x188/0x1b0
       RSP: 0018:ffff88061a003ae8  EFLAGS: 00010012
       RAX: 0000000000000001 RBX: ffff88000241ca80 RCX: 00000000000000fa
       RDX: 0000000000000020 RSI: 0000000000000020 RDI: ffff8806194aa298
       RBP: ffff88061a003ae8 R08: ffff8806194a8000 R09: 0000000000000000
       R10: 0000000000000000 R11: ffff88000241ca80 R12: ffff88061ad58200
       R13: ffff8806194aa298 R14: ffffffff814e67a0 R15: ffff8806194a8000
       FS:  00007f3ad7fe3840(0000) GS:ffff880627620000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
       CR2: 0000000000000058 CR3: 000000061a118000 CR4: 00000000001407e0
       Stack:
        ffff88061a003b20 ffffffff814e96e1 ffff88000241ca80 ffff88061ad58200
        ffff8800b6bf6000 ffff880c1c988000 ffff880619903850 ffff88061a003b68
        ffffffffa0056ce1 ffff88061a003b48 0000000013d6e6f8 ffff88000241ca80
       Call Trace:
        [<ffffffff814e96e1>] ata_sas_queuecmd+0xa1/0x430
        [<ffffffffa0056ce1>] sas_queuecommand+0x191/0x220 [libsas]
        [<ffffffff8149afee>] scsi_dispatch_cmd+0x10e/0x300
        [<ffffffff814a3bc5>] scsi_request_fn+0x2f5/0x550
        [<ffffffff81317613>] __blk_run_queue+0x33/0x40
        [<ffffffff8131781a>] queue_unplugged+0x2a/0x90
        [<ffffffff8131ceb4>] blk_flush_plug_list+0x1b4/0x210
        [<ffffffff8131d274>] blk_finish_plug+0x14/0x50
        [<ffffffff8117eaa8>] __do_page_cache_readahead+0x198/0x1f0
        [<ffffffff8117ee21>] force_page_cache_readahead+0x31/0x50
        [<ffffffff8117ee7e>] page_cache_sync_readahead+0x3e/0x50
        [<ffffffff81172ac6>] generic_file_read_iter+0x496/0x5a0
        [<ffffffff81219897>] blkdev_read_iter+0x37/0x40
        [<ffffffff811e307e>] new_sync_read+0x7e/0xb0
        [<ffffffff811e3734>] vfs_read+0x94/0x170
        [<ffffffff811e43c6>] SyS_read+0x46/0xb0
        [<ffffffff811e33d1>] ? SyS_lseek+0x91/0xb0
        [<ffffffff8171ee29>] system_call_fastpath+0x16/0x1b
       Code: 00 00 00 88 50 29 83 7f 08 01 19 d2 83 e2 f0 83 ea 50 88 50 34 c6 81 1d 02 00 00 40 c6 81 17 02 00 00 00 5d c3 66 0f 1f 44 00 00 <89> 14 25 58 00 00 00
      
      Fix it by introducing ata_host->n_tags which is initialized to
      ATA_MAX_QUEUE - 1 in ata_host_init() for SAS controllers and set to
      scsi_host_template->can_queue in ata_host_register() for !SAS ones.
      As SAS hosts are never registered, this will give them the same
      ATA_MAX_QUEUE - 1 as before.  Note that we can't use
      scsi_host->can_queue directly for SAS hosts anyway as they can go
      higher than the libata maximum.
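      
      A stand-alone sketch of the resulting tag-count logic (illustrative only,
      not the libata source; the clamp of can_queue is inferred from the
      description above):
      
        #include <stdio.h>
        
        #define ATA_MAX_QUEUE 32
        
        struct toy_host { int n_tags; };
        
        static int clamp_int(int v, int lo, int hi)
        {
                return v < lo ? lo : (v > hi ? hi : v);
        }
        
        static void host_init(struct toy_host *h)      /* runs for every host */
        {
                h->n_tags = ATA_MAX_QUEUE - 1;
        }
        
        static void host_register(struct toy_host *h, int can_queue) /* !SAS only */
        {
                h->n_tags = clamp_int(can_queue, 1, ATA_MAX_QUEUE - 1);
        }
        
        int main(void)
        {
                struct toy_host sas, ahci;
        
                host_init(&sas);           /* SAS host: never registered, keeps 31 */
                host_init(&ahci);
                host_register(&ahci, 1);   /* e.g. a host with queue depth 1 */
                printf("sas n_tags=%d, other n_tags=%d\n", sas.n_tags, ahci.n_tags);
                return 0;
        }
      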
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Mike Qiu <qiudayu@linux.vnet.ibm.com>
      Reported-by: Jesse Brandeburg <jesse.brandeburg@gmail.com>
      Reported-by: Peter Hurley <peter@hurleysoftware.com>
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Fixes: 1871ee13 ("libata: support the ata host which implements a queue depth less than 32")
      Cc: Kevin Hao <haokexin@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: stable@vger.kernel.org
      1a112d10
    • pinctrl: dra: dt-bindings: Fix pull enable/disable · 23d9cec0
      Authored by Nishanth Menon
      The DRA74/72 control module pins have a weak pull up and pull down.
      This is configured by bit offset 17. If BIT(17) is 1, a pull up is
      selected, else a pull down is selected.
      
      However, this pull resistor is applied based on BIT(16) -
      PULLUDENABLE - if BIT(16) is *0*, then the pull as defined in BIT(17)
      is applied, else no weak pulls are applied. We defined this in reverse.
      
      Reference: Table 18-5 (Description of the pad configuration register
      bits) in Technical Reference Manual Revision (DRA74x revision Q:
      SPRUHI2Q Revised June 2014 and DRA72x revision F: SPRUHP2F - Revised
      June 2014)
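      
      Illustrative macros for the corrected semantics (the names here are
      invented; the real definitions live in the dra dt-bindings header):
      
        /* Bit 16 is an active-low "pull enable": pulls apply only when it is 0. */
        #define DRA7_PULL_ENA   (0 << 16)  /* weak pull enabled                  */
        #define DRA7_PULL_DIS   (1 << 16)  /* weak pull disabled                 */
        #define DRA7_PULL_UP    (1 << 17)  /* 1 selects pull-up, 0 pull-down     */
        
        #define DRA7_PIN_PULLUP    (DRA7_PULL_ENA | DRA7_PULL_UP)
        #define DRA7_PIN_PULLDOWN  (DRA7_PULL_ENA)
        #define DRA7_PIN_NOPULL    (DRA7_PULL_DIS)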
      
      Fixes: 6e58b8f1 ("ARM: dts: DRA7: Add the dts files for dra7 SoC and dra7-evm board")
      Signed-off-by: Nishanth Menon <nm@ti.com>
      Tested-by: Felipe Balbi <balbi@ti.com>
      Acked-by: Felipe Balbi <balbi@ti.com>
      Signed-off-by: Tony Lindgren <tony@atomide.com>
      23d9cec0
  11. 22 July 2014, 1 commit
    • fuse: add FUSE_NO_OPEN_SUPPORT flag to INIT · d7afaec0
      Authored by Andrew Gallagher
      Here are some additional changes to set a capability flag so that clients can
      detect when it's appropriate to return -ENOSYS from open.
      
      This amends the following commit introduced in 3.14:
      
        7678ac50  fuse: support clients that don't implement 'open'
      
      However, we can only add the flag to 3.15 and later, since there was no
      protocol version update in 3.14.
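      
      A hedged sketch, at the raw protocol level, of how a filesystem daemon
      could use the flag (helper names are invented; assumes kernel headers
      that define FUSE_NO_OPEN_SUPPORT):
      
        #include <errno.h>
        #include <stdbool.h>
        #include <linux/fuse.h>
        
        static bool no_open_supported;
        
        /* During INIT, remember whether the kernel advertised the capability. */
        static void on_init(const struct fuse_init_in *in)
        {
                no_open_supported = (in->flags & FUSE_NO_OPEN_SUPPORT) != 0;
        }
        
        /* In the OPEN handler: -ENOSYS tells the kernel to treat all opens as
         * succeeding from now on, so the request is not sent again. */
        static int open_reply_error(void)
        {
                return no_open_supported ? -ENOSYS : 0;
        }
      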
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Cc: <stable@vger.kernel.org> # v3.15+
      d7afaec0
  12. 19 July 2014, 3 commits
  13. 18 July 2014, 1 commit
    • cpufreq: make table sentinel macros unsigned to match use · 2b1987a9
      Authored by Brian W Hart
      Commit 5eeaf1f1 (cpufreq: Fix build error on some platforms that
      use cpufreq_for_each_*) moved function cpufreq_next_valid() to a public
      header.  Warnings are now generated when objects including that header
      are built with -Wsign-compare (as an out-of-tree module might be):
      
      .../include/linux/cpufreq.h: In function ‘cpufreq_next_valid’:
      .../include/linux/cpufreq.h:519:27: warning: comparison between signed
      and unsigned integer expressions [-Wsign-compare]
        while ((*pos)->frequency != CPUFREQ_TABLE_END)
                                 ^
      .../include/linux/cpufreq.h:520:25: warning: comparison between signed
      and unsigned integer expressions [-Wsign-compare]
         if ((*pos)->frequency != CPUFREQ_ENTRY_INVALID)
                               ^
      
      Constants CPUFREQ_ENTRY_INVALID and CPUFREQ_TABLE_END are signed, but
      are used with unsigned member 'frequency' of cpufreq_frequency_table.
      Update the macro definitions to be explicitly unsigned to match their
      use.
      
      This also corrects potentially wrong behavior of clk_rate_table_iter()
      if unsigned long is wider than unsigned int.
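      
      A stand-alone illustration of the warning and of the fix (the macro
      values mirror the intent of the patch but this is not the cpufreq
      header):
      
        #include <stdio.h>
        
        #define ENTRY_INVALID_SIGNED   ~0    /* int: warns against unsigned use */
        #define TABLE_END_SIGNED       ~1
        #define ENTRY_INVALID_UNSIGNED ~0u   /* explicitly unsigned: no warning */
        #define TABLE_END_UNSIGNED     ~1u
        
        struct freq_entry { unsigned int frequency; };
        
        int main(void)
        {
                struct freq_entry e = { .frequency = TABLE_END_UNSIGNED };
        
                /* if (e.frequency != TABLE_END_SIGNED) ...
                 *         would trigger -Wsign-compare */
                if (e.frequency != TABLE_END_UNSIGNED)
                        printf("valid entry: %u kHz\n", e.frequency);
                else
                        printf("end of table\n");
                return 0;
        }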
      
      Fixes: 5eeaf1f1 (cpufreq: Fix build error on some platforms that use cpufreq_for_each_*)
      Signed-off-by: Brian W Hart <hartb@linux.vnet.ibm.com>
      Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      2b1987a9
  14. 17 July 2014, 5 commits
  15. 16 July 2014, 6 commits
    • ftrace: Allow archs to specify if they need a separate function graph trampoline · 646d7043
      Authored by Steven Rostedt (Red Hat)
      Currently if an arch supports function graph tracing, the core code will
      just assign the function graph trampoline to the function graph addr that
      gets called.
      
      But as the old method for function graph tracing always calls the function
      trampoline first and that calls the function graph trampoline, some
      archs may have the function graph trampoline dependent on operations that
      were done in the function trampoline. This causes the function graph
      tracer to break on those archs.
      
      Instead of having the default be to set the function graph ftrace_ops
      to the function graph trampoline, have it instead just set it to zero,
      which will keep it from jumping to a trampoline that is not set up
      to be jumped to directly.
      
      Link: http://lkml.kernel.org/r/53BED155.9040607@nvidia.com
      Reported-by: Tuomas Tynkkynen <ttynkkynen@nvidia.com>
      Tested-by: Tuomas Tynkkynen <ttynkkynen@nvidia.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      646d7043
    • locking/rwsem: Add CONFIG_RWSEM_SPIN_ON_OWNER · 5db6c6fe
      Authored by Davidlohr Bueso
      Just like with mutexes (CONFIG_MUTEX_SPIN_ON_OWNER),
      encapsulate the dependencies for rwsem optimistic spinning.
      No logical changes here as it continues to depend on both
      SMP and the XADD algorithm variant.
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Acked-by: Jason Low <jason.low2@hp.com>
      [ Also make it depend on ARCH_SUPPORTS_ATOMIC_RMW. ]
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1405112406-13052-2-git-send-email-davidlohr@hp.com
      Cc: aswin@hp.com
      Cc: Chris Mason <clm@fb.com>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: Josef Bacik <jbacik@fusionio.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5db6c6fe
    • locking/rwsem: Reduce the size of struct rw_semaphore · ce069fc9
      Authored by Jason Low
      Recent optimistic spinning additions to rwsem provide significant performance
      benefits on many workloads on large machines. The cost of it was increasing
      the size of the rwsem structure by up to 128 bits.
      
      However, now that the previous patches in this series bring the overhead of
      struct optimistic_spin_queue to 32 bits, this patch reorders some fields in
      struct rw_semaphore such that we can reduce the overhead of the rwsem structure
      by 64 bits (on 64 bit systems).
      
      The extra overhead required for rwsem optimistic spinning would now be up
      to 8 additional bytes instead of up to 16 bytes. Additionally, the size of
      rwsem would now be more in line with mutexes.
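      
      A stand-alone illustration of why reordering fields saves space on a
      typical 64-bit target (the field names are invented; this is not the
      rwsem definition):
      
        #include <stdio.h>
        
        struct padded {          /* 8 + (4 + 4 pad) + 8 + (4 + 4 pad) = 32 bytes */
                long count;
                int spin_tail;
                void *owner;
                int wait_lock;
        };
        
        struct reordered {       /* 8 + 4 + 4 + 8 = 24 bytes */
                long count;
                int spin_tail;
                int wait_lock;
                void *owner;
        };
        
        int main(void)
        {
                printf("padded: %zu bytes, reordered: %zu bytes\n",
                       sizeof(struct padded), sizeof(struct reordered));
                return 0;
        }
      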
      Signed-off-by: Jason Low <jason.low2@hp.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott Norton <scott.norton@hp.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Waiman Long <waiman.long@hp.com>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Aswin Chandramouleeswaran <aswin@hp.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <jbacik@fusionio.com>
      Link: http://lkml.kernel.org/r/1405358872-3732-6-git-send-email-jason.low2@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ce069fc9
    • locking/rwsem: Rename 'activity' to 'count' · 13b9a962
      Authored by Peter Zijlstra
      There are two definitions of struct rw_semaphore, one in linux/rwsem.h
      and one in linux/rwsem-spinlock.h.
      
      For some reason they have different names for the initial field. This
      makes it impossible to use C99 named initialization for
      __RWSEM_INITIALIZER() -- or we have to duplicate that entire thing
      along with the structure definitions.
      
      The simpler patch is renaming the rwsem-spinlock variant to match the
      regular rwsem.
      
      This allows us to switch to C99 named initialization.
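      
      A small illustration of the point about C99 designated initializers (the
      structs are simplified stand-ins, not the kernel definitions):
      
        #include <stdio.h>
        
        struct rwsem_full     { long count; int extra; };
        struct rwsem_spinlock { long count; };  /* was "activity", now "count" */
        
        /* One initializer works for both, because the member names match. */
        #define RWSEM_INIT { .count = 0 }
        
        int main(void)
        {
                struct rwsem_full     a = RWSEM_INIT;
                struct rwsem_spinlock b = RWSEM_INIT;
        
                printf("%ld %ld\n", a.count, b.count);
                return 0;
        }
      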
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/n/tip-bmrZolsbGmautmzrerog27io@git.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      13b9a962
    • locking/spinlocks/mcs: Introduce and use init macro and function for osq locks · 4d9d951e
      Authored by Jason Low
      Currently, we initialize the osq lock by directly setting the lock's
      values. It would be preferable to use an init macro to do the
      initialization, as we do with other locks.
      
      This patch introduces and uses a macro and function for initializing the osq lock.
      Signed-off-by: Jason Low <jason.low2@hp.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott Norton <scott.norton@hp.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Waiman Long <waiman.long@hp.com>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Aswin Chandramouleeswaran <aswin@hp.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <jbacik@fusionio.com>
      Link: http://lkml.kernel.org/r/1405358872-3732-4-git-send-email-jason.low2@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4d9d951e
    • locking/spinlocks/mcs: Convert osq lock to atomic_t to reduce overhead · 90631822
      Authored by Jason Low
      The cancellable MCS spinlock is currently used to queue threads that are
      doing optimistic spinning. It uses per-cpu nodes, where a thread obtaining
      the lock would access and queue the local node corresponding to the CPU that
      it's running on. Currently, the cancellable MCS lock is implemented by using
      pointers to these nodes.
      
      In this patch, instead of operating on pointers to the per-cpu nodes, we
      store the CPU numbers to which the per-cpu nodes correspond in an atomic_t.
      A similar concept is used with the qspinlock.
      
      By operating on the CPU # of the nodes using atomic_t instead of pointers
      to those nodes, this can reduce the overhead of the cancellable MCS spinlock
      by 32 bits (on 64 bit systems).
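      
      A stand-alone model of the encoding described above (not the kernel
      code; storing "cpu + 1" with 0 meaning "unlocked" follows the
      description, while the array size and names are invented):
      
        #include <stdio.h>
        
        #define NR_CPUS      4
        #define OSQ_UNLOCKED 0
        
        struct osq_node { int locked; };
        static struct osq_node nodes[NR_CPUS];   /* stands in for per-cpu data */
        
        static int encode_cpu(int cpu)              { return cpu + 1; }
        static struct osq_node *decode_cpu(int val) { return &nodes[val - 1]; }
        
        int main(void)
        {
                int tail = OSQ_UNLOCKED;    /* would be an atomic_t in the kernel */
        
                tail = encode_cpu(2);       /* CPU 2 queues itself */
                if (tail != OSQ_UNLOCKED)
                        printf("tail node %p belongs to cpu %d\n",
                               (void *)decode_cpu(tail), tail - 1);
                return 0;
        }
      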
      Signed-off-by: Jason Low <jason.low2@hp.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott Norton <scott.norton@hp.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Waiman Long <waiman.long@hp.com>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Aswin Chandramouleeswaran <aswin@hp.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Josef Bacik <jbacik@fusionio.com>
      Link: http://lkml.kernel.org/r/1405358872-3732-3-git-send-email-jason.low2@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      90631822