1. 10 Jul, 2007 · 1 commit
  2. 31 May, 2007 · 1 commit
  3. 10 May, 2007 · 1 commit
    • Add suspend-related notifications for CPU hotplug · 8bb78442
      Committed by Rafael J. Wysocki
      Since nonboot CPUs are now disabled after tasks and devices have been
      frozen and the CPU hotplug infrastructure is used for this purpose, we need
      special CPU hotplug notifications that will help the CPU-hotplug-aware
      subsystems distinguish normal CPU hotplug events from CPU hotplug events
      related to a system-wide suspend or resume operation in progress.  This
      patch introduces such notifications and causes them to be used during
      suspend and resume transitions.  It also changes all of the
      CPU-hotplug-aware subsystems to take these notifications into consideration
      (for now they are handled in the same way as the corresponding "normal"
      ones).
      
      [oleg@tv-sign.ru: cleanups]
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Gautham R Shenoy <ego@in.ibm.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
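
      As a rough illustration of the idea (not code from the patch itself;
      example_cpu_callback is a hypothetical subsystem callback), a
      CPU-hotplug-aware subsystem can now match the new suspend-related
      _FROZEN variants alongside the normal events and, for now, treat them
      the same way:

      static int example_cpu_callback(struct notifier_block *nb,
                                      unsigned long action, void *hcpu)
      {
              unsigned int cpu = (unsigned long)hcpu;

              switch (action) {
              case CPU_ONLINE:
              case CPU_ONLINE_FROZEN:        /* CPU brought back up on resume */
                      /* set up per-cpu state for 'cpu' here */
                      break;
              case CPU_DEAD:
              case CPU_DEAD_FROZEN:          /* CPU taken down during suspend */
                      /* tear down per-cpu state for 'cpu' here */
                      break;
              }
              return NOTIFY_OK;
      }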
  4. 09 May, 2007 · 1 commit
  5. 27 Apr, 2007 · 3 commits
  6. 06 Mar, 2007 · 1 commit
  7. 21 Feb, 2007 · 2 commits
  8. 12 Feb, 2007 · 1 commit
  9. 06 Feb, 2007 · 4 commits
    • 4d284cac
    • [S390] ETR support. · d54853ef
      Committed by Martin Schwidefsky
      This patch adds support for clock synchronization to an external time
      reference (ETR). The external time reference sends an oscillator
      signal and a synchronization signal every 2^20 microseconds to keep
      the TOD clocks of all connected servers in sync. For availability
      two ETR units can be connected to a machine. If the clock deviates
      by more than the sync-check tolerance, all cpus get a machine check
      that indicates that the clock is out of sync. For the lovely details
      of how to get the clock back in sync, see the code below.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
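
      (For scale: 2^20 microseconds is 1,048,576 us, so the synchronization
      signal arrives roughly every 1.05 seconds.)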
    • [S390] noexec protection · c1821c2e
      Committed by Gerald Schaefer
      This provides a noexec protection on s390 hardware. Our hardware does
      not have any bits left in the pte for a hw noexec bit, so this is a
      different approach using shadow page tables and a special addressing
      mode that allows separate address spaces for code and data.
      
      As a special feature of our "secondary-space" addressing mode, separate
      page tables can be specified for the translation of data addresses
      (storage operands) and instruction addresses. The shadow page table is
      used for the instruction addresses and the standard page table for the
      data addresses.
      The shadow page table is linked to the standard page table by a pointer
      in page->lru.next of the struct page corresponding to the page that
      contains the standard page table (since page->private is not really
      private with the pte_lock and the page table pages are not in the LRU
      list).
      Depending on the software bits of a pte, it is either inserted into
      both page tables or just into the standard (data) page table. Pages of
      a vma that does not have the VM_EXEC bit set get mapped only in the
      data address space. Any attempt to execute code on such a page will cause a
      page translation exception. The standard reaction to this is a SIGSEGV
      with two exceptions: the two system call opcodes 0x0a77 (sys_sigreturn)
      and 0x0aad (sys_rt_sigreturn) are allowed. They are stored by the
      kernel to the signal stack frame. Unfortunately, the signal return
      mechanism cannot be modified to use an SA_RESTORER because the
      exception unwinding code depends on the system call opcode stored
      behind the signal stack frame.
      
      This feature requires that user space is executed in secondary-space
      mode and the kernel in home-space mode, which means that the addressing
      modes need to be switched and that the noexec protection only works
      for user space.
      After switching the addressing modes, we cannot use the mvcp/mvcs
      instructions anymore to copy between kernel and user space. A new
      mvcos instruction has been added to the z9 EC/BC hardware which allows
      copying between arbitrary address spaces, but on older hardware the
      page tables need to be walked manually.
      Signed-off-by: Gerald Schaefer <geraldsc@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
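
      A conceptual sketch of the scheme described above (the function and its
      shadow_ptep parameter are hypothetical, not the actual s390 code): every
      pte goes into the standard (data) page table, but only mappings of
      executable vmas are mirrored into the shadow page table that backs
      instruction fetches:

      static void example_set_pte(struct vm_area_struct *vma,
                                  pte_t *ptep, pte_t *shadow_ptep, pte_t entry)
      {
              *ptep = entry;                  /* data address space */

              if (vma->vm_flags & VM_EXEC)
                      *shadow_ptep = entry;   /* instruction address space too */
              /*
               * Otherwise the shadow entry stays invalid, so an instruction
               * fetch from this page raises a page translation exception and
               * the process normally gets a SIGSEGV.
               */
      }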
  10. 09 Jan, 2007 · 1 commit
  11. 04 Dec, 2006 · 2 commits
  12. 06 Oct, 2006 · 1 commit
  13. 28 Sep, 2006 · 1 commit
    • [S390] Inline assembly cleanup. · 94c12cc7
      Committed by Martin Schwidefsky
      Major cleanup of all s390 inline assemblies. They now have a common
      coding style. Quite a few have been shortened, mainly by using register
      asm variables. Use of the EX_TABLE macro helps as well. The atomic ops,
      bit ops and locking inlines now use the Q-constraint if a newer gcc
      is used. That results in slightly better code.
      
      Thanks to Christian Borntraeger for proof reading the changes.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
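
      A sketch of the style the cleanup converges on (illustrative only, not a
      line-for-line copy of the s390 sources): with a newer gcc the memory
      operand is described with the "Q" constraint, which hands the assembly a
      plain base-plus-displacement address for instructions such as
      compare-and-swap:

      static inline void example_atomic_add(int i, int *counter)
      {
              int old, new;

              asm volatile(
                      "       l       %0,%2\n"        /* old = *counter */
                      "0:     lr      %1,%0\n"        /* new = old */
                      "       ar      %1,%3\n"        /* new += i */
                      "       cs      %0,%1,%2\n"     /* swap if unchanged */
                      "       jl      0b"             /* otherwise retry */
                      : "=&d" (old), "=&d" (new), "+Q" (*counter)
                      : "d" (i)
                      : "cc", "memory");
      }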
  14. 20 Sep, 2006 · 1 commit
  15. 28 Jun, 2006 · 1 commit
    • [PATCH] node hotplug: register cpu: remove node struct · 76b67ed9
      Committed by KAMEZAWA Hiroyuki
      With Goto-san's patch, we can add a new pgdat/node at runtime.  I'm now
      considering node hot-add with cpu + memory on ACPI.

      I found that the ACPI container, which describes a node, could evaluate the
      cpu before the memory.  This means cpu hot-add occurs before memory hot-add.

      For the most part, cpu hot-add doesn't depend on node hot-add.  But
      register_cpu(), which creates the symbolic link from node to cpu, requires
      that the node be onlined before register_cpu() is called.  When a node is
      onlined, its pgdat should be there.

      This patch set holds off creating the symbolic link from node to cpu
      until the node is onlined.

      It removes the node argument from register_cpu().

      Until now, register_cpu() required a 'struct node' as its argument.  But the
      array of struct node is now unified in drivers/base/node.c (by Goto's node
      hotplug patch), so we can get the struct node in a generic way and this
      argument is no longer necessary.

      This patch also guarantees that a cpu is added under a node only when the
      node is onlined.  That is necessary for the node-hot-add vs. cpu-hot-add
      patch following this one.

      Moreover, register_cpu() calculates cpu->node_id via cpu_to_node() without
      regard to its 'struct node *root' argument, so this patch removes that
      argument.

      Also modify the callers of register_cpu()/unregister_cpu(), whose arguments
      are changed by the register-cpu-remove-node-struct patch.
      
      [Brice.Goglin@ens-lyon.org: fix it]
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Ashok Raj <ashok.raj@intel.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Brice Goglin <Brice.Goglin@ens-lyon.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
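
      The interface change in a nutshell (signatures paraphrased from the
      description above; the caller below is hypothetical):

      /* before: the caller had to supply the node the cpu sits under */
      /*   int register_cpu(struct cpu *cpu, int num, struct node *root); */

      /* after: the node linkage is handled by the driver core once the
       * node is online, so only the cpu and its number are passed */
      /*   int register_cpu(struct cpu *cpu, int num); */

      static struct cpu example_cpu_devices[NR_CPUS];  /* hypothetical storage */

      static int example_register_one_cpu(unsigned int num)
      {
              return register_cpu(&example_cpu_devices[num], num);
      }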
  16. 01 Apr, 2006 · 1 commit
  17. 24 Mar, 2006 · 1 commit
  18. 23 Mar, 2006 · 1 commit
    • [PATCH] more for_each_cpu() conversions · 394e3902
      Committed by Andrew Morton
      When we stop allocating percpu memory for not-possible CPUs we must not touch
      the percpu data for not-possible CPUs at all.  The correct way of doing this
      is to test cpu_possible() or to use for_each_cpu().
      
      This patch is a kernel-wide sweep of all instances of NR_CPUS.  I found very
      few instances of this bug, if any.  But the patch converts lots of open-coded
      tests to use the preferred helper macros.
      
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: David Howells <dhowells@redhat.com>
      Acked-by: Kyle McMartin <kyle@parisc-linux.org>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Andi Kleen <ak@muc.de>
      Cc: Christian Zankel <chris@zankel.net>
      Cc: Philippe Elie <phil.el@wanadoo.fr>
      Cc: Nathan Scott <nathans@sgi.com>
      Cc: Jens Axboe <axboe@suse.de>
      Cc: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
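
      The shape of the conversion is roughly the following (illustrative
      sketch; 'example_counter' is a hypothetical per-cpu variable).  Note that
      the macro was still called for_each_cpu() at this point; later kernels
      renamed it for_each_possible_cpu():

      static DEFINE_PER_CPU(long, example_counter);

      static long example_sum_counters(void)
      {
              long total = 0;
              int i;

              /* before: open-coded loop over all NR_CPUS slots */
              for (i = 0; i < NR_CPUS; i++) {
                      if (!cpu_possible(i))
                              continue;
                      total += per_cpu(example_counter, i);
              }

              /* after: the helper macro visits only the possible CPUs */
              total = 0;
              for_each_cpu(i)
                      total += per_cpu(example_counter, i);

              return total;
      }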
  19. 18 Feb, 2006 · 3 commits
  20. 12 Feb, 2006 · 1 commit
  21. 13 Jan, 2006 · 1 commit
  22. 07 Jan, 2006 · 2 commits
  23. 09 Nov, 2005 · 1 commit
    • [PATCH] sched: disable preempt in idle tasks · 5bfb5d69
      Committed by Nick Piggin
      Run idle threads with preempt disabled.
      
      Also corrected a bug in arm26's cpu_idle (make it actually call schedule()).
      How did it ever work before?

      Might fix the CPU hotplugging hang which Nigel Cunningham noted.

      We think the bug hits if the idle thread is preempted after checking
      need_resched() and before going to sleep, and the CPU is then offlined.

      After calling stop_machine_run(), the CPU eventually returns from preemption
      into the idle thread and goes to sleep.  The CPU will continue executing the
      previous idle loop and have no chance to call play_dead().
      
      By disabling preemption until we are ready to explicitly schedule, this bug is
      fixed and the idle threads generally become more robust.
      
      From: alexs <ashepard@u.washington.edu>
      
        PPC build fix
      
      From: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
      
        MIPS build fix
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
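
      The resulting idle-loop shape, heavily simplified (a sketch, not any one
      architecture's actual cpu_idle()): preemption now stays disabled while
      the idle thread checks need_resched() and waits, and is only re-enabled
      around the explicit call to schedule():

      void example_cpu_idle(void)
      {
              /* idle threads now enter this loop with preemption disabled */
              for (;;) {
                      while (!need_resched()) {
                              if (cpu_is_offline(smp_processor_id()))
                                      play_dead();   /* offlined CPUs park here */
                              /* architecture-specific low-power wait, e.g. halt */
                      }
                      preempt_enable_no_resched();
                      schedule();
                      preempt_disable();
              }
      }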
  24. 18 Sep, 2005 · 1 commit
  25. 02 Aug, 2005 · 1 commit
  26. 28 Jul, 2005 · 1 commit
  27. 26 Jun, 2005 · 3 commits
    • [PATCH] s390: add vmcp interface · 6b979de3
      Committed by Christian Borntraeger
      Add interface to issue VM control program commands.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] s390: improved machine check handling · 77fa2245
      Committed by Heiko Carstens
      Improved machine check handling.  The kernel is now able to receive machine
      checks while in kernel mode (system call, interrupt and program check
      handling).  Register validation is also performed now.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] i386 CPU hotplug · f3705136
      Committed by Zwane Mwaikambo
      (The i386 CPU hotplug patch provides infrastructure for some work which Pavel
      is doing as well as for ACPI S3 (suspend-to-RAM) work which Li Shaohua
      <shaohua.li@intel.com> is doing)
      
      The following provides i386 architecture support for safely unregistering and
      registering processors during runtime, updated for the current -mm tree.  In
      order to avoid dumping cpu hotplug code into kernel/irq/*, I dropped the
      cpu_online check in do_IRQ() by modifying fixup_irqs().  The difference is
      that on cpu offline, fixup_irqs() is called before we clear the cpu from
      cpu_online_map, followed by a long delay, in order to ensure that we never
      have any queued external interrupts on the APICs.  There are additional
      changes to s390 and ppc64 to account for this change.
      
      1) Add CONFIG_HOTPLUG_CPU.
      2) Disable local APIC timer on dead cpus.
      3) Disable preempt around irq balancing to prevent CPUs going down.
      4) Print irq stats for all possible cpus.
      5) Debugging check for interrupts on offline cpus.
      6) Hacky fixup_irqs() to redirect irqs when cpus go off/online.
      7) play_dead() for offline cpus to spin inside.
      8) Handle offline cpus set in flush_tlb_others().
      9) Grab lock earlier in smp_call_function() to prevent CPUs going down.
      10) Implement __cpu_disable() and __cpu_die().
      11) Enable local interrupts in cpu_enable() after fixup_irqs().
      12) Don't fiddle with NMI on dead cpu, but leave intact on other cpus.
      13) Program IRQ affinity whilst cpu is still in cpu_online_map on offline.
      Signed-off-by: Zwane Mwaikambo <zwane@linuxpower.ca>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
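
      Item 7 from the list above, as a very rough sketch (simplified; the real
      i386 code also flushes caches and acknowledges the offline state through
      a per-cpu flag before parking):

      static void example_play_dead(void)
      {
              local_irq_disable();
              for (;;)
                      halt();         /* spin in a low-power wait until the
                                         CPU is brought back online */
      }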
  28. 17 Apr, 2005 · 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!