1. 14 Feb 2011 (3 commits)
    • x86: Scale up the number of TLB invalidate vectors with NR_CPUs, up to 32 · 70e4a369
      Committed by Shaohua Li
      Make the maximum number of TLB invalidate vectors depend on NR_CPUS linearly,
      with a cap of 32 vectors.
      
      We currently only have 8 vectors for TLB invalidation and that is clearly
      inadequate. If we have a lot of CPUs, the CPUs need to share the 8 vectors and
      tlbstate_lock is used to protect them. flush_tlb_page() is
      heavily used in page reclaim, which will cause a lot of lock
      contention for tlbstate_lock.
      
      Andi Kleen suggested increasing the number of vectors to 32, which should be
      good for current typical systems to reduce the tlbstate_lock contention.
      
      My test system has 4 sockets, 64G of memory and 64 CPUs. My
      workload creates 64 processes. Each process mmaps and reads a big
      empty sparse file. The total size of the files is 2*total_mem,
      so this will cause a lot of page reclaim.
      
      Below is the result I get from perf call-graph profiling:
      
       without the patch:
       ------------------
      
          24.25%           usemem  [kernel]                                   [k] _raw_spin_lock
                           |
                           --- _raw_spin_lock
                              |
                              |--42.15%-- native_flush_tlb_others
      
       with the patch:
       ------------------
      
          14.96%           usemem  [kernel]                                   [k] _raw_spin_lock
                           |
                           --- _raw_spin_lock
                              |--13.89%-- native_flush_tlb_others
      
      So this heavily reduces the tlbstate_lock contention.
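
      As a rough illustration of the scaling described above (a sketch only, not
      the exact kernel macro; it assumes NR_CPUS is provided by the build
      configuration), the vector count grows linearly with NR_CPUS between the
      old floor of 8 and the new ceiling of 32:

      	/* Sketch only: assumes NR_CPUS comes from the kernel configuration. */
      	#if NR_CPUS <= 8
      	# define NUM_INVALIDATE_TLB_VECTORS	8
      	#elif NR_CPUS >= 32
      	# define NUM_INVALIDATE_TLB_VECTORS	32
      	#else
      	# define NUM_INVALIDATE_TLB_VECTORS	NR_CPUS
      	#endif
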
      Suggested-by: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1295232727.1949.709.camel@sli10-conroe>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Allocate 32 tlb_invalidate_interrupt handler stubs · 3a09fb45
      Committed by Shaohua Li
      Add up to 32 invalidate_interrupt handlers. How many handlers are
      added depends on NUM_INVALIDATE_TLB_VECTORS. So if
      NUM_INVALIDATE_TLB_VECTORS is smaller than 32, we reduce code
      size.
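
      A simplified C sketch of the idea (the real stubs are generated in assembly;
      the names below are illustrative, not the actual entry code): one small
      handler is emitted per configured vector, so a smaller
      NUM_INVALIDATE_TLB_VECTORS directly means fewer stubs and less code.

      	/* Sketch only: illustrative names, assumed config value. */
      	#define NUM_INVALIDATE_TLB_VECTORS 8

      	static void smp_invalidate_interrupt(unsigned int sender)
      	{
      		/* per-vector TLB flush handling would go here */
      	}

      	#define BUILD_INVALIDATE_STUB(n)			\
      		static void invalidate_interrupt##n(void)	\
      		{						\
      			smp_invalidate_interrupt(n);		\
      		}

      	BUILD_INVALIDATE_STUB(0)
      	BUILD_INVALIDATE_STUB(1)
      	/* ... one stub for each vector below NUM_INVALIDATE_TLB_VECTORS */
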
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      LKML-Reference: <1295232725.1949.708.camel@sli10-conroe>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Cleanup vector usage · 60f6e65d
      Committed by Shaohua Li
      Clean up the vector usage and make the vectors contiguous where possible.
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      LKML-Reference: <1295232722.1949.707.camel@sli10-conroe>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  2. 10 Feb 2011 (1 commit)
    • KVM: SVM: Make sure KERNEL_GS_BASE is valid when loading gs_index · 893a5ab6
      Committed by Joerg Roedel
      The gs_index loading code uses the swapgs instruction to
      switch to the user gs_base temporarily. This is unsafe in a
      lightweight exit path in KVM on AMD, because the
      KERNEL_GS_BASE MSR is switched lazily. An NMI happening in
      the critical path of load_gs_index may then use the wrong GS_BASE
      value, leading to unpredictable behavior, e.g. a
      triple fault.
      
      This patch fixes the issue by making sure that load_gs_index
      is called only with a valid KERNEL_GS_BASE value loaded in
      KVM.
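
      A hedged sketch of the shape of such a fix (not the verbatim svm.c change;
      the host_* variables below are assumptions standing in for wherever KVM keeps
      the host values): restore a valid KERNEL_GS_BASE before calling
      load_gs_index(), so an NMI hitting the swapgs window inside it cannot run
      with a stale guest value.

      	#ifdef CONFIG_X86_64
      		/* assumed saved host value, restored before the unsafe window */
      		wrmsrl(MSR_KERNEL_GS_BASE, host_kernel_gs_base);
      	#endif
      		load_gs_index(host_gs_index);	/* assumed saved host selector */
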
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
  3. 07 Feb 2011 (1 commit)
  4. 05 Feb 2011 (1 commit)
  5. 04 Feb 2011 (1 commit)
    • x86, mm: avoid possible bogus tlb entries by clearing prev mm_cpumask after switching mm · 831d52bc
      Committed by Suresh Siddha
      Clearing the cpu from prev's mm_cpumask early means this cpu will not
      receive flush tlb IPIs while cr3 is still pointing at the prev mm. This
      window can lead to the possibility of bogus TLB fills resulting in strange
      failures. One such problematic scenario is described below.
      
       T1. CPU-1 is context switching from mm1 to mm2 context and gets an NMI
           etc. between the point of clearing the cpu from the mm_cpumask(mm1)
           and before reloading the cr3 with the new mm2.
      
       T2. CPU-2 is tearing down a specific vma for mm1 and will proceed with
           flushing the TLB for mm1.  It doesn't send the flush TLB to CPU-1
           as it doesn't see that cpu listed in the mm_cpumask(mm1).
      
       T3. After the TLB flush is complete, CPU-2 goes ahead and frees the
           page-table pages associated with the removed vma mapping.
      
       T4. CPU-2 now allocates those freed page-table pages for something
           else.
      
       T5. As the CR3 and TLB caches for mm1 are still active on CPU-1, CPU-1
           can potentially speculate and walk through the page-table caches
           and can insert new TLB entries.  As the page-table pages are
           already freed and being used on CPU-2, this page walk can
           potentially insert a bogus global TLB entry depending on the
           (random) contents of the page that is being used on CPU-2.
      
       T6. This bogus TLB entry being global will be active across future CR3
           changes and can result in weird memory corruption etc.
      
      To avoid this issue, for the prev mm that is handing over the cpu to
      another mm, clear the cpu from the mm_cpumask(prev) after the cr3 is
      changed.
      
      Marking it for -stable, though we haven't seen any reported failure that
      can be attributed to this.
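
      A minimal sketch of the resulting order in the context-switch path
      (simplified, not the verbatim switch_mm() code):

      	cpumask_set_cpu(cpu, mm_cpumask(next));
      	load_cr3(next->pgd);			/* point cr3 at the new mm first     */
      	cpumask_clear_cpu(cpu, mm_cpumask(prev));	/* ...only then leave prev's cpumask */
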
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: stable@kernel.org	[v2.6.32+]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 03 Feb 2011 (2 commits)
    • x86, mtrr: Avoid MTRR reprogramming on BP during boot on UP platforms · f7448548
      Committed by Suresh Siddha
      Markus Kohn ran into a hard hang regression on an acer aspire
      1310, when acpi is enabled. git bisect showed the following
      commit as the bad one that introduced the boot regression.
      
      	commit d0af9eed
      	Author: Suresh Siddha <suresh.b.siddha@intel.com>
      	Date:   Wed Aug 19 18:05:36 2009 -0700
      
      	    x86, pat/mtrr: Rendezvous all the cpus for MTRR/PAT init
      
      Because of the UP configuration of that platform,
      native_smp_prepare_cpus() bailed out (in smp_sanity_check())
      before calling set_mtrr_aps_delayed_init().
      
      Further down the boot path, native_smp_cpus_done() will call the
      delayed MTRR initialization for the AP's (mtrr_aps_init()) with
      mtrr_aps_delayed_init not set. This resulted in the boot
      processor reprogramming its MTRR's to the values seen during the
      start of the OS boot. While this is not needed ideally, this
      shouldn't have caused any side-effects. This is because the
      reprogramming of MTRR's (set_mtrr_state() that gets called via
      set_mtrr()) will check if the live register contents are
      different from what is being asked to write and will do the actual
      write only if they are different.
      
      BP's mtrr state is read during the start of the OS boot and
      typically nothing would have changed when we ask to reprogram it
      on BP again because of the above scenario on an UP platform. So
      on a normal UP platform no reprogramming of BP MTRR MSR's
      happens and all is well.
      
      However, on this platform, bios seems to be modifying the fixed
      mtrr range registers between the start of OS boot and when we
      double check the live registers for reprogramming BP MTRR
      registers. And as the live registers are modified, we end up
      reprogramming the MTRR's to the state seen during the start of
      the OS boot.
      
      During ACPI initialization, something in the bios (probably an smi
      handler?) doesn't like this fact and results in a hard lockup.
      
      We didn't see this boot hang issue on this platform before
      commit d0af9eed, because only
      the AP's (if any) will program their MTRRs to the value that the BP
      had at the start of the OS boot.
      
      Fix this issue by checking mtrr_aps_delayed_init before
      continuing further in mtrr_aps_init(). Now, only the AP's (if
      any) will program their MTRRs to the BP values during boot.
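
      A hedged sketch of the check (simplified; the real mtrr_aps_init() carries
      additional conditions):

      	void mtrr_aps_init(void)
      	{
      		if (!use_intel())
      			return;

      		/*
      		 * Bail out if the delayed AP init was never requested (the UP
      		 * boot path skipped set_mtrr_aps_delayed_init()), so the BP does
      		 * not reprogram its own MTRRs from the boot-time snapshot.
      		 */
      		if (!mtrr_aps_delayed_init)
      			return;

      		set_mtrr(~0U, 0, 0, 0);
      		mtrr_aps_delayed_init = false;
      	}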
      
      Addresses https://bugzilla.novell.com/show_bug.cgi?id=623393
      
        [ By the way, this behavior of the bios modifying MTRR's after the start
          of the OS boot is not common and the kernel is not prepared to
          handle this situation well. Irrespective of this issue, during
          suspend/resume the Linux kernel will try to reprogram the BP's MTRR values
          to the values seen during the start of the OS boot. So suspend/resume might
          already be broken on this platform for all Linux kernel versions. ]
      Reported-and-bisected-by: Markus Kohn <jabber@gmx.org>
      Tested-by: Markus Kohn <jabber@gmx.org>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Thomas Renninger <trenn@novell.com>
      Cc: Rafael Wysocki <rjw@novell.com>
      Cc: Venkatesh Pallipadi <venki@google.com>
      Cc: stable@kernel.org # [v2.6.32+]
      LKML-Reference: <1296694975.4418.402.camel@sbsiddha-MOBL3.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, nx: Don't force pages RW when setting NX bits · f12d3d04
      Committed by Matthieu CASTET
      Xen wants page table pages to be read-only.
      
      But the initial page tables (from head_*.S) live in .data or .bss.
      
      That was broken by 64edc8ed.  There is
      absolutely no reason to force these pages RW after they have already
      been marked RO.
      Signed-off-by: Matthieu CASTET <castet.matthieu@free.fr>
      Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  7. 28 Jan 2011 (2 commits)
  8. 27 Jan 2011 (2 commits)
  9. 26 Jan 2011 (3 commits)
    • percpu, x86: Fix percpu_xchg_op() · 889a7a6a
      Committed by Eric Dumazet
      These recent percpu commits:
      
        2485b646: x86,percpu: Move out of place 64 bit ops into X86_64 section
        8270137a: cpuops: Use cmpxchg for xchg to avoid lock semantics
      
      Caused this 'perf top' crash:
      
       Kernel panic - not syncing: Fatal exception in interrupt
       Pid: 0, comm: swapper Tainted: G     D
       2.6.38-rc2-00181-gef71723 #413 Call Trace: <IRQ> [<ffffffff810465b5>]
          ? panic
          ? kmsg_dump
          ? kmsg_dump
          ? oops_end
          ? no_context
          ? __bad_area_nosemaphore
          ? perf_output_begin
          ? bad_area_nosemaphore
          ? do_page_fault
          ? __task_pid_nr_ns
          ? perf_event_tid
          ? __perf_event_header__init_id
          ? validate_chain
          ? perf_output_sample
          ? trace_hardirqs_off
          ? page_fault
          ? irq_work_run
          ? update_process_times
          ? tick_sched_timer
          ? tick_sched_timer
          ? __run_hrtimer
          ? hrtimer_interrupt
          ? account_system_vtime
          ? smp_apic_timer_interrupt
          ? apic_timer_interrupt
       ...
      
      Looking at assembly code, I found:
      
      list = this_cpu_xchg(irq_work_list, NULL);
      
      gives this wrong code (gcc-4.1.2 cross compiler):
      
      ffffffff810bc45e:
      	mov    %gs:0xead0,%rax
      	cmpxchg %rax,%gs:0xead0
      	jne    ffffffff810bc45e <irq_work_run+0x3e>
      	test   %rax,%rax
      	je     ffffffff810bc4aa <irq_work_run+0x8a>
      
      Tell gcc that we dirty the eax/rax register in percpu_xchg_op(), so the
      compiler must use another register to store pxo_new__.
      
      We also don't need to reload the percpu value after the jump,
      since a 'failed' cmpxchg has already updated eax/rax.
      
      The wrong generated code was:
      	xor     %rax,%rax   /* load 0 into %rax */
      1:	mov     %gs:0xead0,%rax
      	cmpxchg %rax,%gs:0xead0
      	jne     1b
      	test    %rax,%rax
      
       After the patch:
      
      	xor     %rdx,%rdx   /* load 0 into %rdx */
      	mov     %gs:0xead0,%rax
      1:	cmpxchg %rdx,%gs:0xead0
      	jne     1b
      	test    %rax,%rax
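
      For illustration, here is a standalone user-space version of the corrected
      pattern (a sketch, not the kernel's percpu_xchg_op() macro): the "=&a"
      early-clobber output tells gcc that rax is written, so the new value must
      live in another register, and the old value is loaded only once before the
      loop.

      	#include <stdio.h>

      	/* x86-64 only: xchg emulated with a cmpxchg loop, mirroring the fixed pattern. */
      	static unsigned long xchg_via_cmpxchg(unsigned long *ptr, unsigned long newval)
      	{
      		unsigned long old;

      		asm volatile("mov %1, %0\n"
      			     "1:\tcmpxchg %2, %1\n\t"
      			     "jnz 1b"
      			     : "=&a" (old), "+m" (*ptr)	/* rax is declared written       */
      			     : "r" (newval)		/* new value kept in another reg */
      			     : "memory", "cc");
      		return old;
      	}

      	int main(void)
      	{
      		unsigned long v = 42;
      		unsigned long old = xchg_via_cmpxchg(&v, 7);

      		printf("old=%lu new=%lu\n", old, v);	/* old=42 new=7 */
      		return 0;
      	}
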
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Tejun Heo <tj@kernel.org>
      LKML-Reference: <1295973114.3588.312.camel@edumazet-laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Remove left over system_64.h · 9a57c3e4
      Committed by Yinghai Lu
      Left-over from the x86 merge ...
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <4D3E23D1.7010405@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • thp: fix PARAVIRT x86 32bit noPAE · cacf061c
      Committed by Andrea Arcangeli
      This fixes TRANSPARENT_HUGEPAGE=y with PARAVIRT=y and HIGHMEM64=n.
      
      The #ifdef that this patch removes was erratically introduced to fix a
      build error for noPAE (where pmd.pmd doesn't exist).  The kernel then
      built, but it failed at runtime because set_pmd_at was a no-op.  Correct
      this by enabling set_pmd_at for noPAE mode too.
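
      A minimal sketch of the intended effect (names simplified; the real code goes
      through the paravirt hooks): with the guard gone, set_pmd_at() performs an
      actual pmd write on noPAE kernels instead of compiling to a no-op.

      	static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
      				      pmd_t *pmdp, pmd_t pmd)
      	{
      		/* sketch: no PAE-only #ifdef around the write anymore */
      		native_set_pmd(pmdp, pmd);
      	}
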
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Reported-by: werner <w.landgraf@ru.ru>
      Reported-by: Minchan Kim <minchan.kim@gmail.com>
      Tested-by: Minchan Kim <minchan.kim@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 25 Jan 2011 (1 commit)
    • x86-64: Don't use pointer to out-of-scope variable in dump_trace() · 2e5aa682
      Committed by Jesper Juhl
      In arch/x86/kernel/dumpstack_64.c::dump_trace() we have this code:
      
      ...
        		if (!stack) {
        			unsigned long dummy;
        			stack = &dummy;
        			if (task && task != current)
        				stack = (unsigned long *)task->thread.sp;
        		}
      
        		bp = stack_frame(task, regs);
        		/*
        		 * Print function call entries in all stacks, starting at the
        		 * current stack address. If the stacks consist of nested
        		 * exceptions
        		 */
        		tinfo = task_thread_info(task);
      
        		for (;;) {
        			char *id;
        			unsigned long *estack_end;
        			estack_end = in_exception_stack(cpu, (unsigned long)stack,
        							&used, &id);
      ...
      
      You'll notice that we assign to 'stack' the address of the variable
      'dummy', which is only in scope inside the 'if (!stack)' block. So when we
      later access stack (at the end of the above, and assuming we did not take
      the 'if (task && task != current)' branch) we'll be using the address of a
      variable that is no longer in scope. I believe this patch is the proper
      fix, but I freely admit that I'm not 100% certain.
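
      A hedged sketch of the fix's shape (the actual patch may differ in detail):
      hoist the fallback variable so its lifetime covers every later use of
      'stack'.

        		unsigned long dummy;	/* sketch: now declared at function scope */

        		if (!stack) {
        			stack = &dummy;	/* &dummy stays valid for the code below */
        			if (task && task != current)
        				stack = (unsigned long *)task->thread.sp;
        		}

        		bp = stack_frame(task, regs);
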
      Signed-off-by: Jesper Juhl <jj@chaosbits.net>
      LKML-Reference: <alpine.LNX.2.00.1101242232590.10252@swampdragon.chaosbits.net>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  11. 23 Jan 2011 (1 commit)
    • x86: Fix jump label with RO/NX module protection crash · 89696913
      Committed by matthieu castet
      If we use a jump label in module init code, its entries are marked
      as removed in the __jump_table section after init is done.
      
      But we have already applied RO permissions to the module, so
      we can't modify a read-only section (crash in
      remove_jump_label_module_init).
      
      Make the __jump_table section rw.
      Signed-off-by: Matthieu CASTET <castet.matthieu@free.fr>
      Cc: Xiaotian Feng <xtfeng@gmail.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Siarhei Liakh <sliakh.lkml@gmail.com>
      Cc: Xuxian Jiang <jiang@cs.ncsu.edu>
      Cc: James Morris <jmorris@namei.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Kees Cook <kees.cook@canonical.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <4D3C3F20.7030203@free.fr>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12. 22 Jan 2011 (2 commits)
    • x86, hotplug: Fix powersavings with offlined cores on AMD · 93789b32
      Committed by Borislav Petkov
      ea530692 made a CPU use monitor/mwait
      when offline. This is not the optimal choice for AMD wrt powersavings,
      and we'd prefer our cores to halt (i.e. enter C1) instead. For this, the
      same monitor/mwait selection has to be made as when we
      select the idle routine for the machine.
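
      A hedged sketch of that selection (the predicate name is an assumption, not
      the actual smpboot.c code; mwait_play_dead()/hlt_play_dead() follow the
      kernel's naming): an offlined CPU uses mwait only when the idle-routine
      selection would, and plain hlt otherwise.

      	static void play_dead_sketch(void)
      	{
      		if (idle_uses_mwait())		/* hypothetical predicate shared   */
      			mwait_play_dead();	/* with the idle-routine selection */
      		else
      			hlt_play_dead();	/* AMD: halt, i.e. enter C1        */
      	}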
      
      With this patch, offlining cores 1-5 on an X6 machine allows core0 to
      boost again.
      
      [ hpa: putting this in urgent since it is a (power) regression fix ]
      Reported-by: Andreas Herrmann <andreas.herrmann3@amd.com>
      Cc: stable@kernel.org # 37.x
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Venkatesh Pallipadi <venki@google.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      LKML-Reference: <1295534572-10730-1-git-send-email-bp@amd64.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • xen: p2m: correctly initialize partial p2m leaf · 8e1b4cf2
      Committed by Stefan Bader
      After changing the p2m mapping to a tree by
      
        commit 58e05027
          xen: convert p2m to a 3 level tree
      
      and trying to boot a DomU with 615MB of memory, the following crash was
      observed in the dump:
      
      kernel direct mapping tables up to 26f00000 @ 1ec4000-1fff000
      BUG: unable to handle kernel NULL pointer dereference at (null)
      IP: [<c0107397>] xen_set_pte+0x27/0x60
      *pdpt = 0000000000000000 *pde = 0000000000000000
      
      Adding further debug statements showed that when trying to set up
      pfn=0x26700 the returned mapping was invalid.
      
      pfn=0x266ff calling set_pte(0xc1fe77f8, 0x6b3003)
      pfn=0x26700 calling set_pte(0xc1fe7800, 0x3)
      
      Although the last_pfn obtained from the startup info is 0x26700, which
      should in turn not be hit, the additional 8MB which are added as extra
      memory normally seem to be ok. This led to looking into the initial
      p2m tree construction, which uses the smaller value, assuming that
      there is other code handling the extra memory.
      
      When the p2m tree is set up, the leaves are pointed directly at the
      array which the domain builder set up. But if the mapping does not end on a
      boundary that fits into one p2m page, this will result in the last leaf
      being only partially valid. And as the invalid entries are not
      initialized in that case, things go badly wrong.
      
      I am trying to fix that by checking whether the current leaf is a
      complete map and, if not, allocating a completely new page and copying only
      the valid pointers there. This may not be the most efficient or elegant
      solution, but at least it seems to allow me to boot DomUs with memory
      assignments all over the range.
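
      A hedged sketch of that approach (identifier names such as alloc_p2m_page()
      are illustrative assumptions, not the actual xen p2m code): when the
      builder-provided leaf covers only part of a p2m page, copy the valid entries
      into a fresh page and mark the remainder invalid.

      	if (valid_entries < P2M_PER_PAGE) {		/* partial leaf */
      		unsigned long *copy = alloc_p2m_page();	/* assumed allocator */
      		unsigned long i;

      		for (i = 0; i < valid_entries; i++)
      			copy[i] = builder_leaf[i];	/* keep the valid mfns */
      		for (; i < P2M_PER_PAGE; i++)
      			copy[i] = INVALID_P2M_ENTRY;	/* rest explicitly invalid */

      		leaf = copy;
      	}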
      
      BugLink: http://bugs.launchpad.net/bugs/686692
      [v2: Redid a bit of commit wording and fixed a compile warning]
      Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  13. 21 Jan 2011 (4 commits)
  14. 20 Jan 2011 (5 commits)
  15. 19 Jan 2011 (2 commits)
  16. 18 Jan 2011 (2 commits)
    • x86: Clear irqstack thread_info · 7b698ea3
      Committed by Brian Gerst
      Matthias Merz reported that v2.6.37 failed to boot on his
      system.
      
      Make sure that the thread_info part of the irqstack is
      initialized to zeroes.
      Reported-and-Tested-by: Matthias Merz <linux@merz-ka.de>
      Signed-off-by: Brian Gerst <brgerst@gmail.com>
      Acked-by: Pekka Enberg <penberg@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <AANLkTimyKXfJ1x8tgwrr1hYnNLrPfgE1NTe4z7L6tUDm@mail.gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Make relocatable kernel work with new binutils · 86b1e8dd
      Committed by Shaohua Li
      The CONFIG_RELOCATABLE=y option is broken with new binutils, which will cause a
      boot panic.
      
      According to Lu Hongjiu, the affected binutils range from 2.20.51.0.12 to
      2.21.51.0.3, which were released since Oct 22 this year. At least Ubuntu 10.10 is
      using such binutils. See:
      
          http://sourceware.org/bugzilla/show_bug.cgi?id=12327
      
      The reason for the boot panic is that we have 'jiffies = jiffies_64;' in
      vmlinux.lds.S. The jiffies symbol isn't placed in any section. During the kernel
      build there is a warning saying that jiffies is an absolute address and can't be
      relocated. At runtime, jiffies then ends up with virtual address 0.
      
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Lu Hongjiu <hongjiu.lu@intel.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      LKML-Reference: <1295312269.1949.725.camel@sli10-conroe>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  17. 15 Jan 2011 (7 commits)