1. 05 Jul, 2010 - 1 commit
    • rbtree: Undo augmented trees performance damage and regression · b945d6b2
      Authored by Peter Zijlstra
      Reimplement augmented RB-trees without sprinkling extra branches
      all over the RB-tree code (which lives in the scheduler hot path).
      
      This approach is 'borrowed' from Fabio's BFQ implementation and
      relies on traversing the rebalance path after the RB-tree-op to
      correct the heap property for insertion/removal and make up for
      the damage done by the tree rotations.
      
      For insertion the rebalance path is trivially the one from the new
      node up to the root; for removal it starts from the deepest node on
      the path to the to-be-removed node that will still be present after
      the removal.
      
      [ This patch also fixes a video driver regression reported by
        Ali Gholami Rudi - the memtype->subtree_max_end was updated
        incorrectly. ]
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Acked-by: Venkatesh Pallipadi <venki@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Tested-by: Ali Gholami Rudi <ali@rudi.ir>
      Cc: Fabio Checconi <fabio@gandalf.sssup.it>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1275414172.27810.27961.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
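      To make the mechanism concrete, here is a minimal sketch of an
      augmented-value fixup walked along the rebalance path after an
      insert/erase. It is loosely modelled on the x86 PAT memtype tree
      mentioned in the bracketed note above; the struct layout and the
      helper names are illustrative, not the patch itself.

      #include <linux/types.h>
      #include <linux/rbtree.h>

      struct memtype {
              u64 start;
              u64 end;
              u64 subtree_max_end;            /* max 'end' over this node's subtree */
              struct rb_node rb;
      };

      /* Recompute the augmented value of one node from its children. */
      static u64 compute_subtree_max_end(struct memtype *data)
      {
              u64 max = data->end;
              struct memtype *child;

              if (data->rb.rb_left) {
                      child = rb_entry(data->rb.rb_left, struct memtype, rb);
                      if (child->subtree_max_end > max)
                              max = child->subtree_max_end;
              }
              if (data->rb.rb_right) {
                      child = rb_entry(data->rb.rb_right, struct memtype, rb);
                      if (child->subtree_max_end > max)
                              max = child->subtree_max_end;
              }
              return max;
      }

      /*
       * Walk the rebalance path upwards and repair the heap property,
       * making up for whatever the rotations did to the augmented values.
       */
      static void memtype_rb_augment_path(struct rb_node *node)
      {
              struct memtype *data;

              while (node) {
                      data = rb_entry(node, struct memtype, rb);
                      data->subtree_max_end = compute_subtree_max_end(data);
                      node = rb_parent(node);
              }
      }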
  2. 03 Jul, 2010 - 1 commit
  3. 01 Jul, 2010 - 1 commit
  4. 30 Jun, 2010 - 1 commit
    • x86: Send a SIGTRAP for user icebp traps · a1e80faf
      Authored by Frederic Weisbecker
      Before we had a generic breakpoint layer, x86 used to send a
      SIGTRAP for any debug event that happened in userspace, except
      if it was caused by lazy dr7 switches.

      Currently we only send such a signal for single-step or breakpoint
      events.
      
      However, there are three other kinds of debug exceptions:
      
      - debug register access detected: trigger an exception if the
        next instruction touches the debug registers. We don't use
        it.
      - task switch, but we don't use the TSS.
      - icebp/int01 trap. This instruction (0xf1) is undocumented and
        generates an int 1 exception. Unlike single-stepping through the
        TF flag, it doesn't set the single-step origin of the exception
        in dr6.
      
      icebp thus used to be reported to userspace using trap signals,
      but this was incidentally broken by the new breakpoint code.
      Re-enable it. Since this is the only debug event that doesn't set
      anything in dr6, that is all we have to check.
      
      This fixes a regression in Wine where World of Warcraft broke, as
      it uses this for software protection checks. Other apps probably
      do too.
      Reported-and-tested-by: Alexandre Julliard <julliard@winehq.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      Cc: 2.6.33.x 2.6.34.x <stable@kernel.org>
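      The resulting check is small. The sketch below shows its shape in a
      heavily abridged x86 #DB handler; the surrounding handler body is
      elided and simplified, so treat the function as illustrative rather
      than the literal patch.

      /* Abridged sketch of arch/x86/kernel/traps.c:do_debug() */
      dotraplinkage void do_debug(struct pt_regs *regs, long error_code)
      {
              struct task_struct *tsk = current;
              unsigned long dr6;
              int user_icebp = 0;

              get_debugreg(dr6, 6);

              /*
               * dr6 reports no reason for the trap: almost certainly an
               * icebp/int01 from userspace, which deserves a SIGTRAP.
               */
              if (!dr6 && user_mode(regs))
                      user_icebp = 1;

              /* ... hw-breakpoint and kprobes handling elided ... */

              if ((dr6 & (DR_STEP | DR_TRAP_BITS)) || user_icebp)
                      send_sigtrap(tsk, regs, error_code, get_si_code(dr6));
      }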
  5. 25 Jun, 2010 - 1 commit
  6. 20 Jun, 2010 - 1 commit
  7. 19 Jun, 2010 - 1 commit
  8. 12 Jun, 2010 - 2 commits
  9. 11 Jun, 2010 - 2 commits
  10. 10 Jun, 2010 - 3 commits
  11. 09 Jun, 2010 - 4 commits
  12. 08 Jun, 2010 - 2 commits
  13. 03 Jun, 2010 - 1 commit
    • xen: ensure timer tick is resumed even on CPU driving the resume · cd52e17e
      Authored by Ian Campbell
      The core suspend/resume code is run from stop_machine on CPU0 but
      parts of the suspend/resume machinery (including xen_arch_resume) are
      run on whichever CPU happened to schedule the xenwatch kernel thread.
      
      As part of the non-core resume code xen_arch_resume is called in order
      to restart the timer tick on non-boot processors. The boot processor
      itself is taken care of by core timekeeping code.
      
      xen_arch_resume uses smp_call_function, which does not call the
      given function on the current processor. This means that we can end
      up with one CPU not receiving timer ticks if the xenwatch thread
      happened to be scheduled on CPU > 0.
      
      Use on_each_cpu instead of smp_call_function to ensure the timer tick
      is resumed everywhere.
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Stable Kernel <stable@kernel.org> # .32.x
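      The shape of the fix is simply swapping the IPI helper. In the
      sketch below the per-CPU callback name and its argument are
      placeholders for whatever xen_arch_resume actually passes; only the
      smp_call_function to on_each_cpu change is the point.

      #include <linux/smp.h>

      /* Illustrative placeholder for the per-CPU timer resume callback. */
      static void xen_timer_resume_cb(void *unused)
      {
              /* restart the timer tick on the processor this runs on */
      }

      void xen_arch_resume(void)
      {
              /*
               * Before: smp_call_function() runs the callback on every
               * online CPU *except* the calling one, so the CPU running
               * the xenwatch thread kept a dead tick.
               *
               * smp_call_function(xen_timer_resume_cb, NULL, 1);
               */

              /* After: on_each_cpu() also runs it on the local CPU. */
              on_each_cpu(xen_timer_resume_cb, NULL, 1);
      }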
  14. 02 Jun, 2010 - 1 commit
    • x86, smpboot: Fix cores per node printing on boot · 4adc8b71
      Authored by Borislav Petkov
      Percpu initialization now happens after the cores on the machine
      have been booted, and this causes them all to be displayed as
      belonging to node 0:
      
      Jun  8 05:57:21 kepek kernel: [    0.106999] Booting Node   0,
      Processors  #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 #16 #17 #18 #19 #20 #21 #22 #23 Ok.
      
      Use early_cpu_to_node() to get the correct node of each core
      instead.
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Mike Travis <travis@sgi.com>
      LKML-Reference: <20100601190455.GA14237@aftab>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
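      A simplified sketch of the boot announcement logic shows where the
      lookup matters; the function below is abridged from the kernel's
      CPU-bringup printout and is illustrative rather than the exact
      patched code.

      /* Abridged sketch: groups the "Booting Node X, Processors ..." line. */
      static void announce_cpu(int cpu, int apicid)
      {
              static int current_node = -1;
              /*
               * Per-CPU areas are not initialized yet at this point, so
               * cpu_to_node() would report node 0 for everyone; use the
               * early mapping instead.
               */
              int node = early_cpu_to_node(cpu);      /* was: cpu_to_node(cpu) */

              if (node != current_node) {
                      if (current_node > -1)
                              pr_cont(" Ok.\n");
                      current_node = node;
                      pr_info("Booting Node %3d, Processors ", node);
              }
              pr_cont(" #%d%s", cpu,
                      cpu == (nr_cpu_ids - 1) ? " Ok.\n" : "");
      }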
  15. 01 Jun, 2010 - 2 commits
  16. 31 May, 2010 - 3 commits
    • x86/mm: Remove unused DBG() macro · e565813a
      Authored by Akinobu Mita
      The DBG() macro for CONFIG_DEBUG_PER_CPU_MAPS is unused; remove it.
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      LKML-Reference: <1274706291-13554-1-git-send-email-akinobu.mita@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events: Fix event scheduling issues introduced by transactional API · 90151c35
      Authored by Stephane Eranian
      The transactional API patch between the generic and model-specific
      code introduced several important bugs with event scheduling, at
      least on X86. If you had pinned events, e.g., watchdog, and were
      over-committing the PMU, you would get bogus counts. The bug was
      showing up on Intel CPUs because events would move around more
      often than on AMD. But the problem also existed on AMD, though it
      was harder to expose.
      
      The issues were:
      
       - group_sched_in() was missing a cancel_txn() in the error path
      
       - cpuc->n_added was not properly maintained, leading to missing
         actions in hw_perf_enable(), i.e., n_running being 0. You cannot
         update n_added until you know the transaction has succeeded; in
         case of a failed transaction, n_added was not adjusted back.

       - in case of failed transactions, event_sched_out() was called
         and eventually invoked x86_disable_event() to touch the HW reg.
         But with transactions, on X86, event_sched_in() does not touch
         HW registers; it simply collects events into a list. Thus, you
         could end up calling x86_disable_event() on a counter which
         did not correspond to the current event when idx != -1.
      
      The patch modifies the generic and X86 code to avoid all those problems.
      
      First, we keep track of the number of events added last. In case
      the transaction fails, we subtract them from n_added. This approach
      is necessary (as opposed to delaying updates to n_added) because
      not all event updates use the transaction API, e.g., single events.
      
      Second, we encapsulate the event_sched_in() and event_sched_out()
      calls in group_sched_in() inside the transaction. That makes the
      operations symmetrical, and you can also detect that you are inside
      a transaction and skip the HW register access by checking
      cpuc->group_flag.
      
      With this patch, you can now overcommit the PMU even with pinned
      system-wide events present and still get valid counts.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1274796225.5882.1389.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
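      For orientation, the sketch below shows the transaction shape that
      group_sched_in() needs so that every failure path cancels the
      transaction; it is a condensed, illustrative rendering of the
      generic perf core, not the patch itself.

      /* Condensed sketch of kernel/perf_event.c:group_sched_in() */
      static int group_sched_in(struct perf_event *group_event,
                                struct perf_cpu_context *cpuctx,
                                struct perf_event_context *ctx)
      {
              struct perf_event *event, *partial_group = NULL;
              const struct pmu *pmu = group_event->pmu;

              if (group_event->state == PERF_EVENT_STATE_OFF)
                      return 0;

              /* Open the transaction: collect events, don't touch HW yet. */
              pmu->start_txn(pmu);

              if (event_sched_in(group_event, cpuctx, ctx)) {
                      pmu->cancel_txn(pmu);   /* was missing in this error path */
                      return -EAGAIN;
              }

              list_for_each_entry(event, &group_event->sibling_list, group_entry) {
                      if (event_sched_in(event, cpuctx, ctx)) {
                              partial_group = event;
                              goto group_error;
                      }
              }

              /* Commit: schedule the collected events onto the HW in one go. */
              if (!pmu->commit_txn(pmu))
                      return 0;

      group_error:
              /* Unschedule whatever was collected; still inside the txn. */
              list_for_each_entry(event, &group_event->sibling_list, group_entry) {
                      if (event == partial_group)
                              break;
                      event_sched_out(event, cpuctx, ctx);
              }
              event_sched_out(group_event, cpuctx, ctx);

              /* Roll back, including the n_added bookkeeping on x86. */
              pmu->cancel_txn(pmu);
              return -EAGAIN;
      }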
    • Revert "cpusets: randomize node rotor used in cpuset_mem_spread_node()" · 35926ff5
      Authored by Linus Torvalds
      This reverts commit 0ac0c0d0, which
      caused cross-architecture build problems for all the wrong reasons.
      IA64 already added its own version of __node_random(), but the fact is,
      there is nothing architectural about the function, and the original
      commit was just badly done. Revert it, since no fix is forthcoming.
      Requested-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  17. 28 May, 2010 - 11 commits
  18. 27 May, 2010 - 2 commits