1. 26 Nov 2008 (1 commit)
  2. 23 Nov 2008 (1 commit)
  3. 13 Nov 2008 (1 commit)
  4. 11 Nov 2008 (4 commits)
    • x86, voyager: fix smp generic helper voyager breakage · 6cd10f8d
      Committed by James Bottomley
      Impact: build/boot fix for x86/Voyager
      
      This change:
      
      | commit 3d442233
      | Author: Jens Axboe <jens.axboe@oracle.com>
      | Date:   Thu Jun 26 11:21:34 2008 +0200
      |
      |     Add generic helpers for arch IPI function calls
      
      didn't wire up the voyager smp call function correctly, so do that
      here.  Also make CONFIG_USE_GENERIC_SMP_HELPERS a def_bool y again,
      since we now use the generic helpers for every x86 architecture.
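      
      For reference, a rough sketch of the resulting Kconfig shape (the
      exact entry in arch/x86/Kconfig may differ in wording):
      
         config USE_GENERIC_SMP_HELPERS
             def_bool y
             depends on SMP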
      Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Jens Axboe <Jens.Axboe@oracle.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing, x86: clean up FUNCTION_RET_TRACER Kconfig · f1c4be5e
      Committed by Ingo Molnar
      Impact: cleanup
      
      move FUNCTION_RET_TRACER to the X86 select section, where we have all the
      other options.
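      
      As a sketch, the cleaned-up arrangement would read roughly as below in
      arch/x86/Kconfig; the symbol names are assumed from the commit title
      and the related ftrace commits, and the 32-bit qualifier reflects that
      only 32-bit support exists at this point:
      
         config X86
             def_bool y
             # ... other selects unchanged ...
             select HAVE_FUNCTION_TRACER
             select HAVE_FUNCTION_RET_TRACER if X86_32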
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing, x86: add low level support for ftrace return tracing · caf4b323
      Committed by Frederic Weisbecker
      Impact: add infrastructure for function-return tracing
      
      Add low level support for ftrace return tracing.
      
      This plug-in stores return addresses on the thread_info structure of
      the current task.
      
      The index of the current return address is initialized for the first
      task (init) and for the child when a process forks. It does not need
      to be reset on sys_execve, because after this syscall the task still
      has to return from the kernel functions it called.
      
      Note that the code of return_to_handler was suggested by Steven
      Rostedt, as were almost all of the improvement ideas in this V3.
      
      As a safety measure, arch/x86/kernel/process_32.c is not traced,
      because __switch_to() changes the current task during its execution.
      That could cause inconsistency in the stored return address of this
      function, even though I did not see any crash when testing with
      tracing enabled on it.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Make NUMA on 32-bit depend on BROKEN · 4694516d
      Committed by Rafael J. Wysocki
      While investigating the failure of hibernation on 32-bit x86 with
      CONFIG_NUMA set, as described in this message
      http://marc.info/?l=linux-kernel&m=122634118116226&w=4
      I asked some people for help and I was told that it wasn't really
      worth the effort, because CONFIG_NUMA was generally broken on 32-bit
      x86 systems and it shouldn't be used in such configs.  For this
      reason, make CONFIG_NUMA depend on BROKEN instead of EXPERIMENTAL on
      x86-32.
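      
      A simplified sketch of what the dependency change amounts to (the real
      32-bit expression also checks the NUMA-capable sub-architectures and
      ACPI):
      
         config NUMA
             bool "Numa Memory Allocation and Scheduler Support"
             depends on SMP
             depends on X86_64 || (X86_32 && BROKEN)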
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Pavel Machek <pavel@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 06 Nov 2008 (2 commits)
    • x86: mention ACPI in top-level Kconfig menu · da85f865
      Committed by Bjorn Helgaas
      Impact: clarify menuconfig text
      
      Mention ACPI in the top-level menu to give a clue as to where
      it lives. This matches what ia64 does.
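      
      In other words, the top-level menu entry becomes something along these
      lines (the source lines are illustrative, not quoted from the patch):
      
         menu "Power management and ACPI options"
         
         source "kernel/power/Kconfig"
         source "drivers/acpi/Kconfig"
         
         endmenu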
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: add quick function trace stop · 60a7ecf4
      Committed by Steven Rostedt
      Impact: quick start and stop of function tracer
      
      This patch adds a way to disable the function tracer quickly without
      the need to run kstop_machine. It adds a new variable called
      function_trace_stop which will stop the calls to functions from mcount
      when set.  This is just an on/off switch and does not handle recursion
      like preempt_disable().
      
      Its main purpose is to help other tracers/debuggers start and stop
      tracing functions without the need to call kstop_machine.
      
      The config option HAVE_FUNCTION_TRACE_MCOUNT_TEST is added for archs
      that implement the test of function_trace_stop in their arch-dependent
      mcount code. Otherwise, the test is done in the C code.
      
      x86 is the only arch at the moment that supports this.
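      
      A sketch of the Kconfig side of this; the x86 select line is assumed
      from the statement above that x86 is the only arch supporting it so
      far:
      
         # set by an arch whose mcount code tests function_trace_stop
         # itself; otherwise the check is done in the generic C code
         config HAVE_FUNCTION_TRACE_MCOUNT_TEST
             bool
         
         # in arch/x86/Kconfig
         config X86
             select HAVE_FUNCTION_TRACE_MCOUNT_TEST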
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  6. 04 Nov 2008 (1 commit)
  7. 31 Oct 2008 (1 commit)
  8. 22 Oct 2008 (1 commit)
  9. 21 Oct 2008 (1 commit)
  10. 20 Oct 2008 (1 commit)
    • container freezer: implement freezer cgroup subsystem · dc52ddc0
      Committed by Matt Helsley
      This patch implements a new freezer subsystem in the control groups
      framework.  It provides a way to stop and resume execution of all tasks in
      a cgroup by writing to the cgroup filesystem.
      
      The freezer subsystem in the container filesystem defines a file named
      freezer.state.  Writing "FROZEN" to the state file will freeze all tasks
      in the cgroup.  Subsequently writing "RUNNING" will unfreeze the tasks in
      the cgroup.  Reading will return the current state.
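      
      In Kconfig terms, enabling the subsystem would look roughly like the
      sketch below; the option name and help text are assumed from the
      description above, not quoted from the patch:
      
         config CGROUP_FREEZER
             bool "control group freezer subsystem"
             depends on CGROUPS
             help
               Provides a way to freeze and unfreeze all tasks in a
               cgroup, e.g. for batch job management.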
      
      * Examples of usage:
      
         # mkdir /containers/freezer
         # mount -t cgroup -ofreezer freezer  /containers
         # mkdir /containers/0
         # echo $some_pid > /containers/0/tasks
      
      to get the status of the freezer subsystem:
      
         # cat /containers/0/freezer.state
         RUNNING
      
      to freeze all tasks in the container:
      
         # echo FROZEN > /containers/0/freezer.state
         # cat /containers/0/freezer.state
         FREEZING
         # cat /containers/0/freezer.state
         FROZEN
      
      to unfreeze all tasks in the container:
      
         # echo RUNNING > /containers/0/freezer.state
         # cat /containers/0/freezer.state
         RUNNING
      
      This is the basic mechanism which should do the right thing for user
      space tasks in a simple scenario.
      
      It's important to note that freezing can be incomplete.  In that case we
      return EBUSY.  This means that some tasks in the cgroup are busy doing
      something that prevents us from completely freezing the cgroup at this
      time.  After EBUSY, the cgroup will remain partially frozen -- reflected
      by freezer.state reporting "FREEZING" when read.  The state will remain
      "FREEZING" until one of these things happens:
      
      	1) Userspace cancels the freezing operation by writing "RUNNING" to
      		the freezer.state file
      	2) Userspace retries the freezing operation by writing "FROZEN" to
      		the freezer.state file (writing "FREEZING" is not legal
      		and returns EIO)
      	3) The tasks that blocked the cgroup from entering the "FROZEN"
      		state disappear from the cgroup's set of tasks.
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: export thaw_process]
      Signed-off-by: Cedric Le Goater <clg@fr.ibm.com>
      Signed-off-by: Matt Helsley <matthltc@us.ibm.com>
      Acked-by: Serge E. Hallyn <serue@us.ibm.com>
      Tested-by: Matt Helsley <matthltc@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 17 Oct 2008 (2 commits)
  12. 16 Oct 2008 (7 commits)
  13. 14 Oct 2008 (1 commit)
  14. 13 Oct 2008 (2 commits)
    • x86: remove additional_cpus configurability · b8073050
      Committed by Ingo Molnar
      The additional_cpus=<x> parameter is dangerous and broken: for
      example, booting a stock dual-core system with additional_cpus=-2
      will crash the box on bootup.
      
      So reduce the maze of code a bit by removing the user-configurability
      angle.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: allow number of additional hotplug CPUs to be set at compile time, V2 · 7f2f49a5
      Committed by Chuck Ebbert
      The default number of additional CPU IDs for hotplugging is determined
      by asking ACPI or mptables how many "disabled" CPUs there are in the
      system, but many systems get this wrong so that e.g. a uniprocessor
      machine gets an extra CPU allocated and never switches to single CPU
      mode.
      
      And sometimes CPU hotplugging is enabled only for suspend/hibernate
      anyway, so the additional CPU IDs are not wanted. Allow the number
      to be set to zero at compile time.
      
      Also, force the number of extra CPUs to zero if hotplugging is
      disabled, which allows removing some conditional code.
      
      Tested on a uniprocessor x86_64 that ACPI claims has a disabled processor,
      with CPU hotplugging configured.
      
      ("After" has the number of additional CPUs set to 0)
      Before: NR_CPUS: 512, nr_cpu_ids: 2, nr_node_ids 1
      After: NR_CPUS: 512, nr_cpu_ids: 1, nr_node_ids 1
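      
      A hedged sketch of the shape of such an option; the symbol name below
      is purely illustrative, since the real name and prompt were changed on
      Ingo's suggestion and are not quoted in this message:
      
         # illustrative name only
         config ADDITIONAL_HOTPLUG_CPUS
             int "Additional CPU IDs reserved for CPU hotplug"
             depends on SMP && HOTPLUG_CPU
             help
               Number of spare CPU IDs to allocate beyond the CPUs found
               at boot.  Set to 0 if the firmware's "disabled CPU" count
               is unreliable, or if hotplug is only used for
               suspend/hibernate.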
      
      [Changed the name of the option and the prompt according to Ingo's
       suggestion.]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  15. 01 Oct 2008 (1 commit)
    • x86: change MTRR_SANITIZER to def_bool y · 2ffb3501
      Committed by Yinghai Lu
      This option was added in v2.6.26 as a default-disabled feature and
      has gone through several revisions since then.
      
      The feature fixes a wide range of MTRR setup problems that BIOSes
      leave us with: slow system, slow Xorg, slow system when adding lots
      of RAM, etc., so we want to enable it by default for v2.6.28.
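      
      The change itself is small; roughly (a sketch, the real entry carries
      a longer help text):
      
         config MTRR_SANITIZER
             def_bool y
             prompt "MTRR cleanup support"
             depends on MTRR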
      
      See:
      
        [Bug 10508] Upgrade to 4GB of RAM messes up MTRRs
        http://bugzilla.kernel.org/show_bug.cgi?id=10508
      
      and the test results in:
      
         http://lkml.org/lkml/2008/9/29/273
      
      1. hpa
      reg00: base=0xc0000000 (3072MB), size=1024MB: uncachable, count=1
      reg01: base=0x13c000000 (5056MB), size=  64MB: uncachable, count=1
      reg02: base=0x00000000 (   0MB), size=4096MB: write-back, count=1
      reg03: base=0x100000000 (4096MB), size=1024MB: write-back, count=1
      reg04: base=0xbf700000 (3063MB), size=   1MB: uncachable, count=1
      reg05: base=0xbf800000 (3064MB), size=   8MB: uncachable, count=1
      
      will get
      Found optimal setting for mtrr clean up
      gran_size: 1M   chunk_size: 128M        num_reg: 6      lose RAM: 0M
      range0: 0000000000000000 - 00000000c0000000
      Setting variable MTRR 0, base: 0MB, range: 2048MB, type WB
      Setting variable MTRR 1, base: 2048MB, range: 1024MB, type WB
      hole: 00000000bf700000 - 00000000c0000000
      Setting variable MTRR 2, base: 3063MB, range: 1MB, type UC
      Setting variable MTRR 3, base: 3064MB, range: 8MB, type UC
      range0: 0000000100000000 - 0000000140000000
      Setting variable MTRR 4, base: 4096MB, range: 1024MB, type WB
      hole: 000000013c000000 - 0000000140000000
      Setting variable MTRR 5, base: 5056MB, range: 64MB, type UC
      
      2. Dylan Taft
      reg00: base=0x00000000 (   0MB), size=4096MB: write-back, count=1
      reg01: base=0x100000000 (4096MB), size= 512MB: write-back, count=1
      reg02: base=0x120000000 (4608MB), size= 256MB: write-back, count=1
      reg03: base=0xd0000000 (3328MB), size= 256MB: uncachable, count=1
      reg04: base=0xe0000000 (3584MB), size= 512MB: uncachable, count=1
      reg05: base=0xc7e00000 (3198MB), size=   2MB: uncachable, count=1
      reg06: base=0xc8000000 (3200MB), size= 128MB: uncachable, count=1
      
      will get
      Found optimal setting for mtrr clean up
      gran_size: 1M   chunk_size: 4M  num_reg: 6      lose RAM: 0M
      range0: 0000000000000000 - 00000000c8000000
      Setting variable MTRR 0, base: 0MB, range: 2048MB, type WB
      Setting variable MTRR 1, base: 2048MB, range: 1024MB, type WB
      Setting variable MTRR 2, base: 3072MB, range: 128MB, type WB
      hole: 00000000c7e00000 - 00000000c8000000
      Setting variable MTRR 3, base: 3198MB, range: 2MB, type UC
      rangeX: 0000000100000000 - 0000000130000000
      Setting variable MTRR 4, base: 4096MB, range: 512MB, type WB
      Setting variable MTRR 5, base: 4608MB, range: 256MB, type WB
      
      3. Gabriel
      reg00: base=0xd0000000 (3328MB), size= 256MB: uncachable, count=1
      reg01: base=0xe0000000 (3584MB), size= 512MB: uncachable, count=1
      reg02: base=0x00000000 (   0MB), size=4096MB: write-back, count=1
      reg03: base=0x100000000 (4096MB), size= 512MB: write-back, count=1
      reg04: base=0x120000000 (4608MB), size= 128MB: write-back, count=1
      reg05: base=0x128000000 (4736MB), size=  64MB: write-back, count=1
      reg06: base=0xcf600000 (3318MB), size=   2MB: uncachable, count=1
      
      will get
      Found optimal setting for mtrr clean up
      gran_size: 1M   chunk_size: 16M         num_reg: 7      lose RAM: 0M
      range0: 0000000000000000 - 00000000d0000000
      Setting variable MTRR 0, base: 0MB, range: 2048MB, type WB
      Setting variable MTRR 1, base: 2048MB, range: 1024MB, type WB
      Setting variable MTRR 2, base: 3072MB, range: 256MB, type WB
      hole: 00000000cf600000 - 00000000cf800000
      Setting variable MTRR 3, base: 3318MB, range: 2MB, type UC
      rangeX: 0000000100000000 - 000000012c000000
      Setting variable MTRR 4, base: 4096MB, range: 512MB, type WB
      Setting variable MTRR 5, base: 4608MB, range: 128MB, type WB
      Setting variable MTRR 6, base: 4736MB, range: 64MB, type WB
      
      4. Mika Fischer
      reg00: base=0xc0000000 (3072MB), size=1024MB: uncachable, count=1
      reg01: base=0x00000000 ( 0MB), size=4096MB: write-back, count=1
      reg02: base=0x100000000 (4096MB), size=1024MB: write-back, count=1
      reg03: base=0xbf700000 (3063MB), size= 1MB: uncachable, count=1
      reg04: base=0xbf800000 (3064MB), size= 8MB: uncachable, count=1
      
      will get
      Found optimal setting for mtrr clean up
      gran_size: 1M   chunk_size: 16M         num_reg: 5      lose RAM: 0M
      range0: 0000000000000000 - 00000000c0000000
      Setting variable MTRR 0, base: 0MB, range: 2048MB, type WB
      Setting variable MTRR 1, base: 2048MB, range: 1024MB, type WB
      hole: 00000000bf700000 - 00000000c0000000
      Setting variable MTRR 2, base: 3063MB, range: 1MB, type UC
      Setting variable MTRR 3, base: 3064MB, range: 8MB, type UC
      rangeX: 0000000100000000 - 0000000140000000
      Setting variable MTRR 4, base: 4096MB, range: 1024MB, type WB
      Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  16. 23 Sep 2008 (1 commit)
  17. 19 Sep 2008 (1 commit)
  18. 16 Sep 2008 (1 commit)
    • x86: add X86_RESERVE_LOW_64K · fc381519
      Committed by Ingo Molnar
      This bugzilla:
      
        http://bugzilla.kernel.org/show_bug.cgi?id=11237
      
      documents a wide range of systems where the BIOS utilizes the first
      64K of physical memory during suspend/resume and other hardware events.
      
      Currently we reserve this memory on all AMI and Phoenix BIOS systems.
      Life is too short to hunt subtle memory corruption problems like this,
      so we try to be robust by default.
      
      Still, allow this to be overridden: users who want that first 64K of
      memory to be available to the kernel can disable the quirk via
      CONFIG_X86_RESERVE_LOW_64K=n.
      
      Also, allow the early reservation to overlap with other
      early reservations.
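      
      The new option is a plain bool, roughly as sketched below (the real
      help text explains the BIOS corruption issue at more length):
      
         config X86_RESERVE_LOW_64K
             bool "Reserve low 64K of RAM on AMI/Phoenix BIOSen"
             default y
             help
               Reserve the first 64K of physical RAM against BIOSes known
               to corrupt that range across suspend/resume and other
               hardware events.  Say N to make that memory available to
               the kernel again.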
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  19. 14 Sep 2008 (3 commits)
  20. 09 Sep 2008 (1 commit)
  21. 07 Sep 2008 (5 commits)
  22. 26 Aug 2008 (1 commit)
    • [x86] Clean up MAXSMP Kconfig, and limit NR_CPUS to 512 · d25e26b6
      Committed by Linus Torvalds
      This fixes a regression that was indirectly caused by commit
      1184dc2f ("x86: modify Kconfig to allow
      up to 4096 cpus").
      
      Allowing 4k CPUs is not practical at this time, because we still have
      a number of places that have several 'cpumask_t's on the stack, and a
      4k-bit cpumask is 512 bytes of stack space for each such variable.
      This literally caused functions like 'smp_call_function_mask' to have
      a 2.5kB stack frame, and several functions to have 2kB stack frames.
      
      With an 8kB stack total, smashing the stack was simply much too likely.
      At least bugzilla entry
      
      	http://bugzilla.kernel.org/show_bug.cgi?id=11342
      
      was due to this.
      
      The earlier commit to not inline load_module() into sys_init_module()
      fixed the particular symptoms of this that Alan Brunelle saw in that
      bugzilla entry, but the huge stack waste by cpumask_t's was the more
      direct cause.
      
      Some day we'll have allocation helpers that allocate large CPU masks
      dynamically, but in the meantime we simply cannot allow cpumasks this
      large.
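      
      A rough sketch of what the new limit amounts to; the exact defaults
      and MAXSMP dependencies in the actual hunk may differ:
      
         config MAXSMP
             bool "Configure for maximal SMP (makes sense only on huge machines)"
             depends on X86_64 && SMP
             default n
         
         config NR_CPUS
             int "Maximum number of CPUs" if SMP && !MAXSMP
             range 2 512 if SMP
             default "512" if MAXSMP
             default "8"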
      
      Cc: Alan D. Brunelle <Alan.Brunelle@hp.com>
      Cc: Mike Travis <travis@sgi.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>