1. 20 Sep 2009, 1 commit
    • kbuild: use INSTALLKERNEL to select customized installkernel script · caa27b66
      Committed by Sam Ravnborg
      Replace the use of CROSS_COMPILE to select a customized
      installkernel script with the possibility to set INSTALLKERNEL
      to select a custom installkernel script when running make:
      
          make INSTALLKERNEL=arm-installkernel install
      
      With this patch we are now more consistent across
      different architectures - not all of them supported the
      use of CROSS_COMPILE.
      
      The use of CROSS_COMPILE was a hack, as that prefix really
      belongs to gcc/binutils, and the installkernel script does not
      change just because we change the toolchain.
      
      The use of CROSS_COMPILE caused trouble with an upcoming patch
      that saves CROSS_COMPILE when a kernel is built - the kernel would
      no longer be installable.
      [Thanks to Peter Z. for this hint]
      
      This patch undoes what Ian did in commit:
      
        0f8e2d62
        ("use ${CROSS_COMPILE}installkernel in arch/*/boot/install.sh")
      
      The patch has been lightly tested on x86 only, but all the changes
      look obvious.
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Acked-by: Mike Frysinger <vapier@gentoo.org> [blackfin]
      Acked-by: Russell King <linux@arm.linux.org.uk> [arm]
      Acked-by: Paul Mundt <lethal@linux-sh.org> [sh]
      Acked-by: "H. Peter Anvin" <hpa@zytor.com> [x86]
      Cc: Ian Campbell <icampbell@arcom.com>
      Cc: Tony Luck <tony.luck@intel.com> [ia64]
      Cc: Fenghua Yu <fenghua.yu@intel.com> [ia64]
      Cc: Hirokazu Takata <takata@linux-m32r.org> [m32r]
      Cc: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
      Cc: Kyle McMartin <kyle@mcmartin.ca> [parisc]
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> [powerpc]
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390]
      Cc: Thomas Gleixner <tglx@linutronix.de> [x86]
      Cc: Ingo Molnar <mingo@redhat.com> [x86]
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      caa27b66
  2. 18 Sep 2009, 1 commit
    • x86, pat: don't use rb-tree based lookup in reserve_memtype() · dcb73bf4
      Committed by Suresh Siddha
      A recent enhancement of the rb-tree based lookup exposed a bug in the
      lookup mechanism in reserve_memtype(), which ensures that there are no
      conflicting memtype requests for the memory range.
      
      memtype_rb_search() returns an entry whose start address is <= the new
      start address, and from there we traverse the linear linked list to
      check whether there are any conflicts with the existing mappings. As the
      rbtree is keyed on the start address of the memory range, it is quite
      possible that we have several overlapping mappings whose start address
      is much less than the new requested start but whose end is >= the new
      requested end. This results in conflicting memtype mappings.
      
      The same bug exists in the old code, which traverses the linear linked
      list starting from cached_entry, but the new rb-tree code exposes the
      bug fairly easily.
      
      For now, don't use memtype_rb_search(), and always start the search from
      the head of the linear linked list in reserve_memtype(). On most systems
      the linear linked list grows to only a few tens of entries (as we track
      the memory type of RAM pages using struct page), so we should be ok for
      now.
      
      We still retain the rbtree and use it to speed up free_memtype(), which
      doesn't have the same bug (as we know exactly what we are searching for
      in free_memtype()).
      
      Also use list_for_each_entry_from() in free_memtype() so that the search
      starts from the rb-tree lookup result.
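      
      As an illustration of the failure mode, here is a minimal standalone
      sketch (toy types and names, not the kernel's pat.c code) showing why the
      conflict check must walk the list from the head: an entry whose start
      address lies far below the new range can still overlap it, so a lookup
      keyed only on start addresses can skip right past it.
      
          #include <stdio.h>
          #include <stddef.h>
          
          /* Toy memtype entry: [start, end) plus a caching type tag. */
          struct memtype {
              unsigned long start, end;
              int type;
              struct memtype *next;
          };
          
          /* Walk the whole list; stopping at the last entry with
           * start <= new_start (what a start-keyed lookup gives you)
           * would miss earlier entries that still extend past new_start. */
          static struct memtype *find_conflict(struct memtype *head,
                                               unsigned long new_start,
                                               unsigned long new_end,
                                               int new_type)
          {
              struct memtype *e;
          
              for (e = head; e; e = e->next)
                  if (e->start < new_end && e->end > new_start &&
                      e->type != new_type)
                      return e;
              return NULL;
          }
          
          int main(void)
          {
              /* An entry starting at 0 but reaching 0x10000 overlaps a new
               * request for [0x8000, 0x9000) despite its much lower start. */
              struct memtype wb = { 0x0, 0x10000, 0 /* WB */, NULL };
          
              printf("conflict: %s\n",
                     find_conflict(&wb, 0x8000, 0x9000, 2 /* UC */) ? "yes" : "no");
              return 0;
          }
      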
      Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      LKML-Reference: <1253136483.4119.12.camel@sbs-t61.sc.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      dcb73bf4
  3. 16 Sep 2009, 6 commits
  4. 15 Sep 2009, 10 commits
    • x86: sched: Provide arch implementations using aperf/mperf · 47fe38fc
      Committed by Peter Zijlstra
      APERF/MPERF support for cpu_power.
      
      APERF/MPERF is architecturally defined as a relative scale of work
      capacity per logical cpu; this is assumed to include SMT and Turbo mode.
      
      APERF/MPERF are specified to both reset to 0 when either counter
      wraps, which is highly inconvenient, since that gives a blip
      when it happens. The manual specifies writing 0 to the counters
      after each read, but that is 1) too expensive, and 2) destroys the
      possibility of sharing these counters with other users, so we live
      with the blip - the other existing user does too.
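      
      For reference, the underlying idea can be demonstrated from userspace.
      The sketch below is an illustration under stated assumptions, not the
      kernel's implementation: it reads IA32_MPERF (0xe7) and IA32_APERF
      (0xe8) through /dev/cpu/0/msr and reports the delta ratio over one
      second. It assumes the msr module is loaded and root privileges.
      
          #include <fcntl.h>
          #include <inttypes.h>
          #include <stdio.h>
          #include <unistd.h>
          
          #define MSR_IA32_MPERF 0xe7   /* counts at a constant reference rate */
          #define MSR_IA32_APERF 0xe8   /* counts at the actual delivered rate */
          
          static uint64_t rdmsr(int fd, uint32_t reg)
          {
              uint64_t val = 0;
          
              /* /dev/cpu/N/msr is addressed by MSR number via the file offset. */
              pread(fd, &val, sizeof(val), reg);
              return val;
          }
          
          int main(void)
          {
              int fd = open("/dev/cpu/0/msr", O_RDONLY);
          
              if (fd < 0) {
                  perror("open /dev/cpu/0/msr");
                  return 1;
              }
          
              uint64_t a0 = rdmsr(fd, MSR_IA32_APERF);
              uint64_t m0 = rdmsr(fd, MSR_IA32_MPERF);
              sleep(1);
              uint64_t a1 = rdmsr(fd, MSR_IA32_APERF);
              uint64_t m1 = rdmsr(fd, MSR_IA32_MPERF);
          
              /* > 1.0 hints at turbo, < 1.0 at idling or a busy SMT sibling. */
              printf("aperf/mperf ratio over 1s: %.3f\n",
                     (double)(a1 - a0) / (double)(m1 - m0));
              close(fd);
              return 0;
          }
      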
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      47fe38fc
    • x86: Add generic aperf/mperf code · 5cbc19a9
      Committed by Peter Zijlstra
      Move some of the aperf/mperf code out from the cpufreq driver
      thingy so that other people can enjoy it too.
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Yinghai Lu <yhlu.kernel@gmail.com>
      Cc: cpufreq@vger.kernel.org
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5cbc19a9
    • x86: Move APERF/MPERF into a X86_FEATURE · a8303aaf
      Committed by Peter Zijlstra
      Move the APERFMPERF capability into an X86_FEATURE flag so that it
      can be used outside of the acpi cpufreq driver.
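      
      A minimal sketch of the feature-flag pattern this relies on (toy bitmap
      and bit number, not the kernel's cpufeature tables): detection code sets
      a bit once, and any subsystem can test it later without reaching into
      the cpufreq driver.
      
          #include <stdbool.h>
          #include <stdio.h>
          #include <string.h>
          
          #define FEATURE_APERFMPERF 7   /* illustrative bit number only */
          
          struct cpuinfo {
              unsigned long caps[2];     /* per-cpu capability bitmap */
          };
          
          static void set_cap(struct cpuinfo *c, int bit)
          {
              c->caps[bit / (8 * sizeof(long))] |= 1UL << (bit % (8 * sizeof(long)));
          }
          
          static bool has_cap(const struct cpuinfo *c, int bit)
          {
              return (c->caps[bit / (8 * sizeof(long))] >> (bit % (8 * sizeof(long)))) & 1;
          }
          
          int main(void)
          {
              struct cpuinfo c;
          
              memset(&c, 0, sizeof(c));
              set_cap(&c, FEATURE_APERFMPERF);   /* done once by CPU detection */
              printf("aperf/mperf usable: %s\n",
                     has_cap(&c, FEATURE_APERFMPERF) ? "yes" : "no");
              return 0;
          }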
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Yinghai Lu <yhlu.kernel@gmail.com>
      Cc: cpufreq@vger.kernel.org
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a8303aaf
    • sched: Reduce forkexec_idx · b8a543ea
      Committed by Peter Zijlstra
      If we're looking to place a new task, we might as well find the
      idlest position _now_, not 1 tick ago.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b8a543ea
    • sched: Improve latencies and throughput · 0ec9fab3
      Committed by Mike Galbraith
      Make the idle balancer more aggressive, to improve an
      x264 encoding workload provided by Jason Garrett-Glaser:
      
       NEXT_BUDDY NO_LB_BIAS
       encoded 600 frames, 252.82 fps, 22096.60 kb/s
       encoded 600 frames, 250.69 fps, 22096.60 kb/s
       encoded 600 frames, 245.76 fps, 22096.60 kb/s
      
       NO_NEXT_BUDDY LB_BIAS
       encoded 600 frames, 344.44 fps, 22096.60 kb/s
       encoded 600 frames, 346.66 fps, 22096.60 kb/s
       encoded 600 frames, 352.59 fps, 22096.60 kb/s
      
       NO_NEXT_BUDDY NO_LB_BIAS
       encoded 600 frames, 425.75 fps, 22096.60 kb/s
       encoded 600 frames, 425.45 fps, 22096.60 kb/s
       encoded 600 frames, 422.49 fps, 22096.60 kb/s
      
      Peter pointed out that this is better done via newidle_idx,
      not via LB_BIAS: newidle balancing should look for where
      there is load _now_, not where there was load 2 ticks ago.
      
      Worst-case latencies are improved as well, since no buddies
      means less vruntime spread (as per prior lkml discussions).
      
      This change improves kbuild-peak parallelism as well.
      Reported-by: Jason Garrett-Glaser <darkshikari@gmail.com>
      Signed-off-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1253011667.9128.16.camel@marge.simson.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0ec9fab3
    • sched: Tweak wake_idx · 78e7ed53
      Committed by Peter Zijlstra
      When merging select_task_rq_fair() and sched_balance_self() we lost
      the use of wake_idx; restore it and set the indices to 0 to make wake
      balancing more aggressive.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      78e7ed53
    • sched: Merge select_task_rq_fair() and sched_balance_self() · c88d5910
      Committed by Peter Zijlstra
      The problem with wake_idle() is that it doesn't respect things like
      cpu_power, which means it doesn't deal well with SMT or with the recent
      RT interaction.
      
      To cure this, it needs to do what sched_balance_self() does, which
      leads to the possibility of merging select_task_rq_fair() and
      sched_balance_self().
      
      Modify sched_balance_self() to:
      
        - call update_shares() when walking up the domain tree
          (it only called it for the top domain, but it should
          have done this anyway), which allows us to remove
          this ugly bit from try_to_wake_up().
      
        - do wake_affine() on the smallest domain that contains
          both this (the waking) and the prev (the wakee) cpu for
          WAKE invocations.
      
      Then use the top-down balance steps it had to replace wake_idle().
      
      This leads to the disappearance of SD_WAKE_BALANCE and
      SD_WAKE_IDLE_FAR, with SD_WAKE_IDLE replaced by SD_BALANCE_WAKE.
      
      SD_WAKE_AFFINE needs SD_BALANCE_WAKE to be effective.
      
      Touch all topology bits to replace the old SD flags with the new ones;
      platforms might need re-tuning. Enabling SD_BALANCE_WAKE conditionally
      on NUMA distance seems like a good additional feature: Magny-Cours and
      small Nehalem systems would want this enabled, while systems with slow
      interconnects would not.
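      
      A self-contained sketch of the "smallest domain containing both cpus"
      step mentioned above (toy spans and names, not the scheduler's real
      sched_domain layout): walk up the waking cpu's domain hierarchy until a
      level's span also covers the task's previous cpu.
      
          #include <stdio.h>
          
          /* Toy sched-domain level: a cpu bitmask plus a link to the wider parent. */
          struct sched_domain {
              unsigned long span;
              struct sched_domain *parent;
          };
          
          /* Smallest domain of the waking cpu whose span also contains prev_cpu. */
          static struct sched_domain *affine_domain(struct sched_domain *sd, int prev_cpu)
          {
              for (; sd; sd = sd->parent)
                  if (sd->span & (1UL << prev_cpu))
                      return sd;
              return NULL;
          }
          
          int main(void)
          {
              /* cpu 0's hierarchy: SMT pair {0,1} -> package {0-3} -> system {0-7}. */
              struct sched_domain sys = { 0xff, NULL };
              struct sched_domain pkg = { 0x0f, &sys };
              struct sched_domain smt = { 0x03, &pkg };
          
              struct sched_domain *sd = affine_domain(&smt, 5);
          
              printf("wake_affine would run at span 0x%lx\n", sd ? sd->span : 0UL);
              return 0;
          }
      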
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c88d5910
    • x86, mce: Fix compilation with !CONFIG_DEBUG_FS in mce-severity.c · e34e77ce
      Committed by Andi Kleen
      Fix compilation error in arch/x86/kernel/cpu/mcheck/mce-severity.c
      when CONFIG_DEBUG_FS is disabled, introduced in commit
      5be9ed25.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      e34e77ce
    • x86, mce: do not compile mcelog message on AMD · 22223c9b
      Committed by Borislav Petkov
      Now that decoding is done in-kernel, suppress the mcelog message part.
      
      CC: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      22223c9b
    • x86, mce: pass mce info to EDAC for decoding · 549d042d
      Committed by Borislav Petkov
      Move the NB decoder, along with the required defines, to the EDAC MCE
      core. Add registration routines for further decoding of the MCE info in
      the AMD64 EDAC module.
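      
      A hedged sketch of the registration pattern described here (illustrative
      names, not the actual EDAC/MCE interface): the MCE core exposes a decode
      hook, an EDAC module installs its decoder, and reporting falls back to a
      plain dump when nothing is registered.
      
          #include <stdio.h>
          
          /* Stand-in for the machine-check record handed to the decoder. */
          struct mce_info {
              unsigned long long status;
              int bank;
          };
          
          /* Decode hook installed by an EDAC module; NULL means none loaded. */
          static void (*mce_decode_hook)(const struct mce_info *);
          
          static void register_mce_decoder(void (*fn)(const struct mce_info *))
          {
              mce_decode_hook = fn;
          }
          
          static void report_mce(const struct mce_info *m)
          {
              if (mce_decode_hook)
                  mce_decode_hook(m);               /* decoded in-kernel */
              else
                  printf("bank %d status %#llx (see mcelog)\n", m->bank, m->status);
          }
          
          /* Roughly what an AMD64 EDAC decoder callback might look like. */
          static void amd64_decode(const struct mce_info *m)
          {
              printf("decoded: northbridge error in bank %d\n", m->bank);
          }
          
          int main(void)
          {
              struct mce_info m = { 0x9400000000000000ULL, 4 };
          
              report_mce(&m);                       /* before: raw dump */
              register_mce_decoder(amd64_decode);
              report_mce(&m);                       /* after: decoded */
              return 0;
          }
      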
      
      CC: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      549d042d
  5. 13 Sep 2009, 1 commit
  6. 11 Sep 2009, 4 commits
  7. 10 Sep 2009, 17 commits