1. 07 Jul 2015, 2 commits
    • x86/irq: Use proper locking in check_irq_vectors_for_cpu_disable() · cbb24dc7
      Thomas Gleixner authored
      It's unsafe to examine fields in the irq descriptor w/o holding the
      descriptor lock. Add proper locking.
      
      While at it, add a comment explaining why the vector check can run lockless.
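      
      A minimal sketch of the pattern, assuming a hypothetical helper that
      inspects one descriptor's affinity (illustrative, not the upstream hunk):
      
      #include <linux/irq.h>
      #include <linux/irqdesc.h>
      #include <linux/cpumask.h>
      
      /*
       * Hypothetical helper: fields of the descriptor are only read under
       * desc->lock; the check itself then runs lockless on a private copy.
       */
      static bool irq_targets_other_cpu(struct irq_desc *desc, unsigned int cpu)
      {
              struct cpumask affinity;        /* on-stack copy, fine for a sketch */
              unsigned long flags;
      
              raw_spin_lock_irqsave(&desc->lock, flags);
              cpumask_copy(&affinity,
                           irq_data_get_affinity_mask(irq_desc_get_irq_data(desc)));
              raw_spin_unlock_irqrestore(&desc->lock, flags);
      
              return cpumask_any_but(&affinity, cpu) < nr_cpu_ids;
      }
      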
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: xiao jin <jin.xiao@intel.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
      Link: http://lkml.kernel.org/r/20150705171102.236544164@linutronix.de
      cbb24dc7
    • x86/irq: Plug irq vector hotplug race · 5a3f75e3
      Thomas Gleixner authored
      Jin debugged a nasty cpu hotplug race which results in leaking an irq
      vector on the newly hotplugged cpu.
      
      cpu N				cpu M
      native_cpu_up                   device_shutdown
        do_boot_cpu			  free_msi_irqs
        start_secondary                   arch_teardown_msi_irqs
          smp_callin                        default_teardown_msi_irqs
             setup_vector_irq                  arch_teardown_msi_irq
              __setup_vector_irq		   native_teardown_msi_irq
                lock(vector_lock)		     destroy_irq 
                install vectors
                unlock(vector_lock)
      					       lock(vector_lock)
      --->                                  	       __clear_irq_vector
                                          	       unlock(vector_lock)
          lock(vector_lock)
          set_cpu_online
          unlock(vector_lock)
      
      This leaves the irq vector(s) which are torn down on CPU M stale in
      the vector array of CPU N, because CPU M does not see CPU N online
      yet. There is a similar issue with concurrent newly setup interrupts.
      
      The alloc/free protection of irq descriptors does not prevent the
      above race, because it merely prevents interrupt descriptors from
      going away or changing concurrently.
      
      Prevent this by moving the call to setup_vector_irq() into the
      vector_lock held region which protects set_cpu_online():
      
      cpu N				cpu M
      native_cpu_up                   device_shutdown
        do_boot_cpu			  free_msi_irqs
        start_secondary                   arch_teardown_msi_irqs
          smp_callin                        default_teardown_msi_irqs
             lock(vector_lock)                arch_teardown_msi_irq
             setup_vector_irq()
              __setup_vector_irq		   native_teardown_msi_irq
                install vectors		     destroy_irq 
             set_cpu_online
             unlock(vector_lock)
      					       lock(vector_lock)
                                        	       __clear_irq_vector
                                          	       unlock(vector_lock)
      
      So cpu M either sees cpu N online before clearing the vector, or
      cpu N installs the vectors after cpu M has cleared them.
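      
      A minimal sketch of the ordering the fix establishes on the bringup path
      (vector_lock and __setup_vector_irq() mirror the x86 names; the fragment
      and its stub are illustrative only):
      
      #include <linux/spinlock.h>
      #include <linux/cpumask.h>
      
      static DEFINE_RAW_SPINLOCK(vector_lock);  /* stand-in for x86's vector_lock */
      
      /* Stub for illustration; the real helper lives in arch/x86. */
      static void __setup_vector_irq(unsigned int cpu) { /* install vectors */ }
      
      static void secondary_cpu_publish(unsigned int cpu)
      {
              /*
               * Vector installation and the online marking share one
               * vector_lock section, so a concurrent __clear_irq_vector()
               * either sees this CPU online or runs before the install.
               */
              raw_spin_lock(&vector_lock);
              __setup_vector_irq(cpu);
              set_cpu_online(cpu, true);
              raw_spin_unlock(&vector_lock);
      }
      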
      Reported-by: xiao jin <jin.xiao@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
      Link: http://lkml.kernel.org/r/20150705171102.141898931@linutronix.de
      5a3f75e3
  2. 06 Jul 2015, 10 commits
  3. 04 Jul 2015, 7 commits
  4. 03 Jul 2015, 3 commits
    • ARM64 / SMP: Switch pr_err() to pr_debug() for disabled GICC entry · f9058929
      Hanjun Guo authored
      It is normal for firmware to present GICC entries (processors) with the
      disabled flag set in the ACPI MADT. Taking a 16-cpu system as an example,
      the firmware may present 8 cpus as enabled and the other 8 as disabled in
      the MADT; the disabled cpus can be hot-added later.
      
      Firmware may also present more cpus than the hardware actually has, with
      the unused ones disabled, so that they can simply be enabled once the
      hardware provides them; this keeps the firmware code scalable.
      
      A disabled cpu in the MADT is therefore not an error, so switch pr_err()
      to pr_debug() to make the boot a little quieter by default.
      
      Since the hwid of a disabled cpu is often invalid, and the invalid-hwid
      check currently runs first, cpus that are meant to be hot-added later
      would be filtered out and not counted as possible cpus. Move the
      enabled-flag check before the hwid one to prepare the code for counting
      disabled cpus once cpu hot-plug is introduced.
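      
      A minimal sketch of the resulting check order, assuming the standard ACPI
      GICC structure and the arm64 MPIDR helpers (illustrative, not the exact
      hunk in arch/arm64/kernel/smp.c):
      
      #include <linux/acpi.h>
      #include <linux/init.h>
      #include <linux/printk.h>
      #include <asm/cputype.h>        /* MPIDR_HWID_BITMASK, INVALID_HWID */
      
      static bool __init gicc_entry_usable(struct acpi_madt_generic_interrupt *gicc)
      {
              u64 hwid = gicc->arm_mpidr;
      
              /* Disabled entries are normal (hot-add later): debug only. */
              if (!(gicc->flags & ACPI_MADT_ENABLED)) {
                      pr_debug("skipping disabled CPU entry with MPIDR 0x%llx\n", hwid);
                      return false;
              }
      
              /* A disabled entry often carries an invalid hwid, so this runs second. */
              if (hwid & ~MPIDR_HWID_BITMASK || hwid == INVALID_HWID) {
                      pr_err("skipping CPU entry with invalid MPIDR 0x%llx\n", hwid);
                      return false;
              }
      
              return true;
      }
      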
      Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
      Reviewed-by: Al Stone <ahs3@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f9058929
    • [IA64] Drop debug test/printk that some special pages are marked reserved · 43c518d1
      Tony Luck authored
      In commit 92923ca3 "mm: meminit: only set page reserved in the memblock region"
      we dropped setting the reserved bits for all pages. This results in some warnings
      on ia64:
      
      put_kernel_page: page at 0xe000000005588000 not in reserved memory
      put_kernel_page: page at 0xe000000005588000 not in reserved memory
      put_kernel_page: page at 0xe000000005580000 not in reserved memory
      put_kernel_page: page at 0xe000000005580000 not in reserved memory
      put_kernel_page: page at 0xe000000005580000 not in reserved memory
      put_kernel_page: page at 0xe000000005580000 not in reserved memory
      
      The two different pages match up with two objects from the loaded kernel
      that get mapped by arch/ia64/mm/init.c:setup_gate():
      
      a000000101588000 D __start_gate_section
      a000000101580000 D empty_zero_page
      
      In a discussion with Mel Gorman:
        http://lkml.kernel.org/r/20150526102219.GB13750%40suse.de
      he suggested that while the preferred approach might be to
      set the reserved bit for these pages, it would also be OK
      to just drop the test:
         "as it's a debugging check that is ia-64 specific"
      
      After hunting around a bit and failing to find a good place to mark these
      pages as reserved, I decided to just delete the test.
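      
      For reference, the dropped check was of roughly this shape (a sketch with
      a hypothetical wrapper, not the verbatim ia64 code):
      
      #include <linux/mm.h>
      #include <linux/printk.h>
      
      /* Debug-only sanity check that put_kernel_page() used to carry. */
      static void warn_if_not_reserved(struct page *page)
      {
              if (!PageReserved(page))
                      printk(KERN_ERR "put_kernel_page: page at 0x%p not in reserved memory\n",
                             page_address(page));
      }
      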
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      43c518d1
    • arm64: cpuidle: add __init section marker to arm_cpuidle_init · ea389daa
      Jisheng Zhang authored
      arm_cpuidle_init() is not needed after booting, so this patch moves the
      function to the __init section.
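      
      A minimal sketch of the annotation, with a placeholder body rather than
      the real arm_cpuidle_init():
      
      #include <linux/init.h>
      
      /* __init: only called during boot, so the text is discarded afterwards. */
      int __init arm_cpuidle_init(unsigned int cpu)
      {
              /* probe the idle states for @cpu from DT/ACPI here (placeholder) */
              return 0;
      }
      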
      Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
      Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ea389daa
  5. 02 Jul 2015, 4 commits
  6. 01 Jul 2015, 13 commits
  7. 30 Jun 2015, 1 commit
    • perf/x86: Fix 'active_events' imbalance · 93472aff
      Peter Zijlstra authored
      Commit 1b7b938f ("perf/x86/intel: Fix PMI handling for Intel PT") conditionally
      increments active_events in x86_add_exclusive() but unconditionally decrements
      it in x86_del_exclusive().
      
      These extra decrements can lead to the situation where
      active_events is zero and thus the PMI handler is 'disabled'
      while we have active events on the PMU generating PMIs.
      
      This leads to a truckload of:
      
        Uhhuh. NMI received for unknown reason 21 on CPU 28.
        Do you have a strange power saving mode enabled?
        Dazed and confused, but trying to continue
      
      messages and generally messes up perf.
      
      Remove the condition on the increment; a double increment balanced by
      a double decrement is perfectly fine.
      
      Restructure the code a little bit to make the unconditional inc
      a bit more natural.
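      
      A minimal sketch of the balance being restored; the bodies are simplified
      stand-ins for the arch/x86 functions:
      
      #include <linux/atomic.h>
      
      static atomic_t active_events = ATOMIC_INIT(0);
      
      static void add_exclusive_sketch(void)
      {
              atomic_inc(&active_events);     /* now unconditional ... */
              /* exclusivity bookkeeping elided */
      }
      
      static void del_exclusive_sketch(void)
      {
              /* exclusivity bookkeeping elided */
              atomic_dec(&active_events);     /* ... so this drop always pairs up */
      }
      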
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: alexander.shishkin@linux.intel.com
      Cc: brgerst@gmail.com
      Cc: dvlasenk@redhat.com
      Cc: luto@amacapital.net
      Cc: oleg@redhat.com
      Fixes: 1b7b938f ("perf/x86/intel: Fix PMI handling for Intel PT")
      Link: http://lkml.kernel.org/r/20150624144750.GJ18673@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      93472aff