1. 14 Oct 2010, 1 commit
  2. 23 Sep 2010, 2 commits
  3. 22 Aug 2010, 1 commit
  4. 20 Aug 2010, 1 commit
    • x86, hotplug: Serialize CPU hotplug to avoid bringup concurrency issues · d7c53c9e
      Borislav Petkov authored
      When testing CPU hotplug code on 32-bit, we kept hitting the "CPU%d:
      Stuck ??" message because multiple cores were concurrently accessing the
      cpu_callin_mask, among other things.
      
      These codepaths are not protected against concurrent access - there is
      no sane reason to make an already complex piece of code even more
      complex, and the issue only shows up when rapidly switching cores off-
      and online - so serialize core hotplug at the sysfs level and be done
      with it (a minimal sketch of the pattern follows this entry).
      
      [ v2.1: fix !HOTPLUG_CPU build ]
      
      Cc: <stable@kernel.org>
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      LKML-Reference: <20100819181029.GC17171@aftab>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      d7c53c9e
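      
      A minimal illustrative sketch of the serialization pattern described in
      the commit above; the lock name and the wrapper function are hypothetical,
      not the actual hook added by d7c53c9e:
      
      #include <linux/mutex.h>
      #include <linux/cpu.h>
      
      /* Hypothetical lock: only one core may go through bringup at a time. */
      static DEFINE_MUTEX(x86_cpu_hotplug_lock);
      
      /* Imagined sysfs-level wrapper around the bringup path. */
      static int serialized_cpu_up(unsigned int cpu)
      {
              int ret;
      
              mutex_lock(&x86_cpu_hotplug_lock);
              ret = cpu_up(cpu);      /* bringup touches cpu_callin_mask et al. */
              mutex_unlock(&x86_cpu_hotplug_lock);
      
              return ret;
      }
      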
  5. 27 Jul 2010, 1 commit
  6. 19 Jun 2010, 1 commit
    • x86, olpc: Add support for calling into OpenFirmware · fd699c76
      Andres Salomon authored
      Add support for saving OFW's cif, and later calling into it to run OFW
      commands.  OFW remains resident in memory, living within the virtual
      range 0xff800000 - 0xffc00000.  A single page directory entry points to
      the pgdir that OFW actually uses, so rather than saving the entire page
      table, we grab that one entry and install it permanently in the kernel's
      page table (see the sketch after this entry).
      
      This is currently only used by the OLPC XO.  Note that this particular
      calling convention breaks PAE and PAT, and so cannot be used on newer
      x86 hardware.
      Signed-off-by: Andres Salomon <dilinger@queued.net>
      LKML-Reference: <20100618174653.7755a39a@dev.queued.net>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      fd699c76
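      
      A rough kernel-style sketch of the page-table trick described in the
      commit above; the function name and the way OFW's pgd is obtained are
      assumptions for illustration, not the actual olpc code:
      
      #include <linux/init.h>
      #include <asm/pgtable.h>
      
      #define OFW_VIRT_BASE  0xff800000UL    /* start of OFW's resident window */
      
      /*
       * On non-PAE 32-bit, one page directory entry maps 4MB, which is exactly
       * the 0xff800000 - 0xffc00000 window OFW lives in.  Copy that single
       * entry from the page table OFW was using (ofw_pgd) into the kernel's,
       * so later cif calls find OFW mapped.
       */
      static void __init install_ofw_pde(pgd_t *ofw_pgd)
      {
              int idx = pgd_index(OFW_VIRT_BASE);
      
              set_pgd(&swapper_pg_dir[idx], ofw_pgd[idx]);
      }
      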
  7. 28 May 2010, 3 commits
  8. 22 May 2010, 1 commit
  9. 16 May 2010, 1 commit
    • lockup_detector: Adapt CONFIG_PERF_EVENT_NMI to other archs · c01d4323
      Frederic Weisbecker authored
      CONFIG_PERF_EVENT_NMI is something that needs to be enabled by the arch.
      This is fine on x86, as PERF_EVENTS is built in, but if other archs
      select it, they will need to handle the PERF_EVENTS dependency themselves.
      
      Instead, handle the dependency in the generic layer:
      
      - archs advertise what they support through HAVE_PERF_EVENTS_NMI
      - PERF_EVENTS_NMI is then enabled automatically when both PERF_EVENTS and
        HAVE_PERF_EVENTS_NMI are set.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      c01d4323
  10. 01 May 2010, 1 commit
    • hw-breakpoints: Separate constraint space for data and instruction breakpoints · 0102752e
      Frederic Weisbecker authored
      There are two prevailing approaches for archs to implement hardware
      breakpoints.
      
      The first keeps separate breakpoint address spaces for data and
      instruction breakpoints: there are typically distinct instruction
      address breakpoint registers and data address breakpoint registers,
      with separate control registers for each. This is the case for PowerPC
      and ARM, for example.
      
      The second merges the breakpoint address space: an address register can
      hold either an instruction or a data address, and the access mode of the
      breakpoint is defined in a control register. This is the case for x86
      and SuperH.
      
      This patch adds a new CONFIG_HAVE_MIXED_BREAKPOINTS_REGS option that
      archs can select if they belong to the second case. Those will have
      their slot allocation merged for instruction and data breakpoints.
      
      The others will keep separate slot tracking for data and instruction
      breakpoints (a schematic sketch of the two accounting schemes follows
      this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Cc: K. Prasad <prasad@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      0102752e
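      
      A purely schematic sketch of the two slot-accounting schemes contrasted
      in the commit above; the structures and functions are made up for
      illustration and are not the perf code:
      
      enum bp_kind { BP_INST, BP_DATA };
      
      /* Separate constraint spaces (e.g. PowerPC, ARM): one pool per kind. */
      struct separate_slots { int used_inst, used_data; };
      
      /* Mixed registers (x86, SuperH, HAVE_MIXED_BREAKPOINTS_REGS): one shared
       * pool; the breakpoint kind only affects control bits, not accounting. */
      struct mixed_slots { int used; };
      
      static int reserve_separate(struct separate_slots *s, enum bp_kind k, int max)
      {
              int *cnt = (k == BP_INST) ? &s->used_inst : &s->used_data;
      
              if (*cnt >= max)
                      return -1;      /* no free register of that kind */
              (*cnt)++;
              return 0;
      }
      
      static int reserve_mixed(struct mixed_slots *s, int max)
      {
              if (s->used >= max)
                      return -1;      /* shared pool exhausted */
              s->used++;
              return 0;
      }
      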
  11. 29 Apr 2010, 1 commit
  12. 07 Apr 2010, 1 commit
    • x86: Add optimized popcnt variants · d61931d8
      Borislav Petkov authored
      Add support for the hardware version of the Hamming weight function,
      popcnt, present in CPUs which advertise it under CPUID, Function
      0x0000_0001_ECX[23]. On CPUs which don't support it, we fall back to the
      default lib/hweight.c software versions.
      
      A synthetic benchmark comparing popcnt with __sw_hweight64 showed almost
      a 3x speedup on an F10h machine (an illustrative hw-vs-sw comparison
      follows this entry).
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      LKML-Reference: <20100318112015.GC11152@aftab>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      d61931d8
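      
      For illustration only, a userspace comparison of the hardware and
      software Hamming weight; the kernel itself patches call sites via
      alternatives rather than branching at runtime:
      
      #include <stdio.h>
      #include <stdint.h>
      
      /* Software fallback, in the same spirit as lib/hweight.c. */
      static unsigned int sw_hweight64(uint64_t w)
      {
              w -= (w >> 1) & 0x5555555555555555ULL;
              w  = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL);
              w  = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL;
              return (w * 0x0101010101010101ULL) >> 56;
      }
      
      /* With the target attribute the builtin compiles to the popcnt insn. */
      __attribute__((target("popcnt")))
      static unsigned int hw_hweight64(uint64_t w)
      {
              return (unsigned int)__builtin_popcountll(w);
      }
      
      int main(void)
      {
              uint64_t x = 0xdeadbeefcafef00dULL;
      
              if (__builtin_cpu_supports("popcnt"))   /* CPUID 0x1, ECX[23] */
                      printf("hw popcnt:    %u\n", hw_hweight64(x));
              printf("sw hweight64: %u\n", sw_hweight64(x));
              return 0;
      }
      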
  13. 03 Apr 2010, 1 commit
    • x86: Increase CONFIG_NODES_SHIFT max to 10 · 51591e31
      David Rientjes authored
      Some larger systems require more than 512 nodes, so increase the
      maximum CONFIG_NODES_SHIFT to 10, for a new maximum of 1024 nodes (see
      the note after this entry).
      
      This was tested with numa=fake=64M on systems with more than
      64GB of RAM. A total of 1022 nodes were initialized.
      
      Successfully builds with no additional warnings on x86_64
      allyesconfig.
      
      ( No effect on any existing config. Newly enabled CONFIG_MAXSMP=y
        will see the new default. )
      Signed-off-by: David Rientjes <rientjes@google.com>
      LKML-Reference: <alpine.DEB.2.00.1003251538060.8589@chino.kir.corp.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      51591e31
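      
      The limit above is simply two to the power of the shift; a trivial
      userspace check of the assumed relation MAX_NUMNODES = 1 << CONFIG_NODES_SHIFT:
      
      #include <stdio.h>
      
      int main(void)
      {
              /* Shift 9 was the old maximum (512 nodes); 10 gives 1024. */
              for (int shift = 9; shift <= 10; shift++)
                      printf("NODES_SHIFT=%d -> MAX_NUMNODES=%d\n", shift, 1 << shift);
              return 0;
      }
      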
  14. 13 Mar 2010, 2 commits
  15. 10 Mar 2010, 1 commit
    • perf, x86: Add INSTRUCTION_DECODER config flag · ba7e4d13
      Ingo Molnar authored
      The PEBS+LBR decoding magic needs the insn_get_length() infrastructure
      to be able to decode x86 instruction lengths.
      
      So split it out of the KPROBES dependency and enable it when either
      KPROBES or PERF_EVENTS is enabled.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ba7e4d13
  16. 03 Mar 2010, 1 commit
  17. 26 Feb 2010, 3 commits
    • x86, mrst: Add Kconfig dependencies for Moorestown · 4b2f3f7d
      Jacob Pan authored
      The Moorestown platform requires the IOAPIC for all interrupts from the
      south complex, since there is no legacy PIC.
      
      Furthermore, Moorestown I/O requires PCI.  Moorestown PCI depends on PCI
      MMCONFIG and the DIRECT method for device enumeration, as there is no
      PCI BIOS.
      
      [ hpa: rewrote commit message ]
      Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
      LKML-Reference: <1267120934-9505-1-git-send-email-jacob.jun.pan@linux.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      4b2f3f7d
    • x86, numaq: Make CONFIG_X86_NUMAQ depend on CONFIG_PCI · a92d152e
      Pan, Jacob jun authored
      The NUMAQ initialization sets x86_init.pci.init to pci_numaq_init,
      which obviously isn't defined if CONFIG_PCI isn't defined.  This
      dependency was implicit in the past, because pci_numaq_init was
      invoked from arch/x86/pci/legacy.c, which itself was conditioned on
      CONFIG_PCI.
      
      I suspect that no NUMA-Q machines without PCI were ever built, so
      instead of complicating the code by adding #ifdefs or stub functions,
      just disable this bit of the configuration space.
      
      [ hpa: rewrote the checkin comment ]
      Signed-off-by: Jacob Pan <jacob.jun.pan@intel.com>
      LKML-Reference: <43F901BD926A4E43B106BF17856F0755A321EE1F@orsmsx508.amr.corp.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      a92d152e
    • kprobes/x86: Support kprobes jump optimization on x86 · c0f7ac3a
      Masami Hiramatsu authored
      Introduce x86 arch-specific optimization code, which supports both
      x86-32 and x86-64.
      
      This code also performs safety checks: it decodes the whole function
      into which the probe is inserted and verifies the following conditions
      before optimizing:
       - The instructions that will be replaced by the jump instruction do not
         straddle the function boundary.
       - There is no indirect jump instruction, because it could jump into the
         address range that is replaced by the jump operand.
       - There is no jump/loop instruction that jumps into the address range
         that is replaced by the jump operand.
       - The probe is not in a function into which fixup code will jump.
      
      This uses text_poke_multibyte(), which doesn't support modifying code in
      NMI/MCE handlers. However, since kprobes itself doesn't support probing
      NMI/MCE code, this is not a problem (a simplified sketch of the
      indirect-jump check follows this entry).
      
      Changes in v9:
       - Use *_text_reserved() for checking whether the probe can be optimized.
       - Verify that the jump address range is within 2GB when preparing the slot.
       - Back up the original code when switching to the optimized buffer,
         instead of when preparing the buffer, because int3s from other probes
         can be present during the preparation phase.
       - Check kprobe is disabled in arch_check_optimized_kprobe().
       - Strictly check indirect jump opcodes (ff /4, ff /5).
      
      Changes in v6:
       - Split stop_machine-based jump patching code.
       - Update comments and coding style.
      
      Changes in v5:
       - Introduce stop_machine-based jump replacing.
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Anders Kaseorg <andersk@ksplice.com>
      Cc: Tim Abbott <tabbott@ksplice.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      LKML-Reference: <20100225133446.6725.78994.stgit@localhost6.localdomain6>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c0f7ac3a
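      
      A simplified userspace sketch of the indirect-jump check mentioned above
      (opcode 0xff with ModRM reg field 4 or 5); the real code walks
      instructions with the kernel's instruction decoder, this only inspects
      two raw bytes:
      
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>
      
      /*
       * Near indirect jumps are encoded as 0xff /4 and far indirect jumps as
       * 0xff /5, where /n is the "reg" field (bits 5:3) of the ModRM byte.
       * Their targets cannot be verified statically, so they veto optimization.
       */
      static bool is_indirect_jump(const uint8_t *insn)
      {
              uint8_t reg;
      
              if (insn[0] != 0xff)
                      return false;
              reg = (insn[1] >> 3) & 0x7;
              return reg == 4 || reg == 5;
      }
      
      int main(void)
      {
              const uint8_t jmp_rax[]  = { 0xff, 0xe0 };  /* jmp *%rax  -> ff /4 */
              const uint8_t call_rax[] = { 0xff, 0xd0 };  /* call *%rax -> ff /2 */
      
              printf("jmp *%%rax  indirect jump: %d\n", is_indirect_jump(jmp_rax));
              printf("call *%%rax indirect jump: %d\n", is_indirect_jump(call_rax));
              return 0;
      }
      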
  18. 25 Feb 2010, 1 commit
    • x86, apbt: Moorestown APB system timer driver · bb24c471
      Jacob Pan authored
      The Moorestown platform does not have PIT or HPET platform timers;
      instead it has a bank of eight APB timers.  The number of timers
      available to the OS is exposed via SFI mtmr tables.  All APB timer
      interrupts are routed via IOAPIC RTEs and delivered as MSIs.
      Currently, timers 0 and 1 are used as per-CPU clockevent devices and
      timer 2 as the clocksource.
      Signed-off-by: Jacob Pan <jacob.jun.pan@intel.com>
      LKML-Reference: <43F901BD926A4E43B106BF17856F0755A318D2D2@orsmsx508.amr.corp.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      bb24c471
  19. 17 Feb 2010, 2 commits
  20. 14 Feb 2010, 1 commit
  21. 13 Feb 2010, 2 commits
  22. 24 Jan 2010, 1 commit
    • x86: Remove "x86 CPU features in debugfs" (CONFIG_X86_CPU_DEBUG) · b1600918
      H. Peter Anvin authored
      CONFIG_X86_CPU_DEBUG, which provides some parsed versions of the x86
      CPU configuration via debugfs, has caused boot failures on real
      hardware.  The value of this feature has been marginal at best, as all
      this information is already available to userspace via generic
      interfaces.
      
      It causes crashes that have not been fixed and has minimal utility, so
      remove it.
      
      See the referenced LKML thread for more information.
      Reported-by: Ozan Çağlayan <ozan@pardus.org.tr>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      LKML-Reference: <alpine.LFD.2.00.1001221755320.13231@localhost.localdomain>
      Cc: Jaswinder Singh Rajput <jaswinder@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: <stable@kernel.org>
      b1600918
  23. 12 Jan 2010, 2 commits
  24. 18 Dec 2009, 1 commit
    • hw-breakpoints: Fix hardware breakpoints -> perf events dependency · 99e8c5a3
      Frederic Weisbecker authored
      Kbuild's select statement doesn't propagate through config dependencies.
      
      Hence the hardware breakpoint config's current rules can't ensure that
      perf is never disabled underneath us.
      
      We have:
      
      config X86
      	select HAVE_HW_BREAKPOINTS
      
      config HAVE_HW_BREAKPOINTS
      	select PERF_EVENTS
      
      config PERF_EVENTS
      	[...]
      
      x86 will select the breakpoint support, but that won't propagate to perf
      events: the user can still disable the latter, even though the
      breakpoints need it.
      
      What we need is:
      
       - x86 selects HAVE_HW_BREAKPOINTS and PERF_EVENTS
       - HAVE_HW_BREAKPOINTS depends on PERF_EVENTS
      
      so that we ensure PERF_EVENTS is enabled and frozen for x86.
      
      This fixes the following kind of build errors:
      
       In file included from arch/x86/kernel/hw_breakpoint.c:31:
       include/linux/hw_breakpoint.h: In function 'hw_breakpoint_addr':
       include/linux/hw_breakpoint.h:39: error: 'struct perf_event' has no member named 'attr'
      
      v2: Also select ANON_INODES from x86, which is required for perf
      Reported-by: Cyrill Gorcunov <gorcunov@gmail.com>
      Reported-by: Michal Marek <mmarek@suse.cz>
      Reported-by: Andrew Randrianasulu <randrik_a@yahoo.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: K.Prasad <prasad@linux.vnet.ibm.com>
      LKML-Reference: <1261010034-7786-1-git-send-regression-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      99e8c5a3
  25. 16 Dec 2009, 2 commits
  26. 11 Nov 2009, 1 commit
  27. 23 Oct 2009, 1 commit
  28. 12 Oct 2009, 1 commit
  29. 09 Oct 2009, 1 commit
  30. 02 Oct 2009, 1 commit
    • core, x86: Add user return notifiers · 7c68af6e
      Avi Kivity authored
      Add a general per-cpu notifier that is called whenever the kernel is
      about to return to userspace.  The notifier uses a thread_info flag
      and existing checks, so there is no impact on user return or context
      switch fast paths.
      
      This will be used initially to speed up KVM task switching by lazily
      updating MSRs (a userspace sketch of the pattern follows this entry).
      Signed-off-by: Avi Kivity <avi@redhat.com>
      LKML-Reference: <1253342422-13811-1-git-send-email-avi@redhat.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      7c68af6e
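      
      A userspace sketch of the pattern described above: a per-thread pending
      flag plus a callback list fired at the "about to return to user" point.
      The names mirror the kernel's but this is illustrative, not the kernel
      implementation:
      
      #include <stdbool.h>
      #include <stdio.h>
      
      struct user_return_notifier {
              void (*on_user_return)(struct user_return_notifier *urn);
              struct user_return_notifier *next;
      };
      
      /* Per-thread state; the kernel keeps a per-cpu list plus a thread_info flag. */
      static __thread struct user_return_notifier *urn_list;
      static __thread bool urn_pending;
      
      static void user_return_notifier_register(struct user_return_notifier *urn)
      {
              urn->next = urn_list;
              urn_list = urn;
              urn_pending = true;             /* stands in for setting the TIF_* flag */
      }
      
      /* Would be called from the return-to-user slow path when the flag is set. */
      static void fire_user_return_notifiers(void)
      {
              struct user_return_notifier *urn;
      
              if (!urn_pending)
                      return;                 /* fast path stays untouched */
              urn_pending = false;
      
              for (urn = urn_list; urn; urn = urn->next)
                      urn->on_user_return(urn);
              urn_list = NULL;                /* notifiers are one-shot in this sketch */
      }
      
      /* Example user: lazily restore host MSRs, as KVM intends to do. */
      static void sync_host_msrs(struct user_return_notifier *urn)
      {
              (void)urn;
              printf("restoring host MSRs before returning to userspace\n");
      }
      
      int main(void)
      {
              struct user_return_notifier kvm_urn = { .on_user_return = sync_host_msrs };
      
              user_return_notifier_register(&kvm_urn);
              fire_user_return_notifiers();   /* runs on the way back to user mode */
              return 0;
      }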