1. 26 Jul 2008 (1 commit)
  2. 19 Jul 2008 (1 commit)
      cpumask: Replace cpumask_of_cpu with cpumask_of_cpu_ptr · 65c01184
      Authored by Mike Travis
        * This patch replaces the dangerous lvalue version of cpumask_of_cpu
          with new cpumask_of_cpu_ptr macros.  These are patterned after the
          node_to_cpumask_ptr macros.
      
          In general terms, if there is a cpumask_of_cpu_map[] then a pointer to
          the cpumask_of_cpu_map[cpu] entry is used.  The cpumask_of_cpu_map
          is provided when NR_CPUS is large, greatly reducing the amount of
          code generated and stack space used for cpumask_of_cpu().  The
          pointer to the cpumask_t value is needed for calling
          set_cpus_allowed_ptr(), to reduce the amount of stack space needed
          to pass the cpumask_t value.
      
          If there isn't a cpumask_of_cpu_map[], then a temporary variable is
          declared and filled in with the value from cpumask_of_cpu(cpu), as
          well as a pointer variable pointing to this temporary variable.
          Afterwards, the pointer is used to reference the cpumask value.  The
          compiler will optimize out the extra dereference through the pointer,
          as well as the stack space used for the pointer, resulting in
          identical code.
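
          In rough outline, the macro pair might look like this (a sketch
          only: the config guard and exact definitions here are assumptions,
          not the patch's literal code):

	/* with a precomputed map: just point into it */
	#ifdef CONFIG_HAVE_CPUMASK_OF_CPU_MAP
	#define cpumask_of_cpu_ptr(v, cpu)			\
		const cpumask_t *v = &cpumask_of_cpu_map[cpu]
	#else
	/* without a map: declare a temporary plus a pointer to it;
	 * the compiler optimizes the indirection away */
	#define cpumask_of_cpu_ptr(v, cpu)			\
		cpumask_t _##v = cpumask_of_cpu(cpu);		\
		const cpumask_t *v = &_##v
	#endif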
      
          A good example of the two usages side by side is in net/sunrpc/svc.c:
      
      	case SVC_POOL_PERCPU:
      	{
      		unsigned int cpu = m->pool_to[pidx];
      		cpumask_of_cpu_ptr(cpumask, cpu);
      
      		*oldmask = current->cpus_allowed;
      		set_cpus_allowed_ptr(current, cpumask);
      		return 1;
      	}
      	case SVC_POOL_PERNODE:
      	{
      		unsigned int node = m->pool_to[pidx];
      		node_to_cpumask_ptr(nodecpumask, node);
      
      		*oldmask = current->cpus_allowed;
      		set_cpus_allowed_ptr(current, nodecpumask);
      		return 1;
      	}
      Signed-off-by: Mike Travis <travis@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  3. 24 Jun 2008 (1 commit)
      sched: add new API sched_setscheduler_nocheck: add a flag to control access checks · 961ccddd
      Authored by Rusty Russell
      Hidehiro Kawai noticed that sched_setscheduler() can fail in
      stop_machine: it calls sched_setscheduler() from insmod, which can
      have CAP_SYS_MODULE without CAP_SYS_NICE.
      
      Two cases could have failed, so they are changed to
      sched_setscheduler_nocheck (a sketch of the split API follows the list):
        kernel/softirq.c:cpu_callback()
      	- CPU hotplug callback
        kernel/stop_machine.c:__stop_machine_run()
      	- Called from various places, including modprobe()
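
      The shape of the split API, sketched: both entry points are assumed
      to funnel into one helper whose extra flag gates the permission
      checks (the helper shown here is illustrative):

	/* 'check' gates the CAP_SYS_NICE/security checks */
	static int __sched_setscheduler(struct task_struct *p, int policy,
					struct sched_param *param, bool check);

	int sched_setscheduler(struct task_struct *p, int policy,
			       struct sched_param *param)
	{
		return __sched_setscheduler(p, policy, param, true);
	}

	/* kernel-internal callers (the two cases above) skip the checks */
	int sched_setscheduler_nocheck(struct task_struct *p, int policy,
				       struct sched_param *param)
	{
		return __sched_setscheduler(p, policy, param, false);
	}
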
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linux-mm@kvack.org
      Cc: sugita <yumiko.sugita.yf@hitachi.com>
      Cc: Satoshi OSHIMA <satoshi.oshima.fk@hitachi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 23 May 2008 (1 commit)
      stop_machine: make stop_machine_run more virtualization friendly · 3401a61e
      Authored by Christian Borntraeger
      On kvm I have seen some rare hangs in stop_machine when I used more guest
      cpus than host cpus, e.g. 32 guest cpus on 1 host cpu triggered the
      hang quite often. I could also reproduce the problem on a 4-way z/VM host
      with a 64-way guest.
      
      It turned out that the guest was consuming all available cpus, mostly
      spinning on scheduler locks like rq->lock. This is expected, as the
      threads are calling yield all the time.
      The problem is that the host scheduling decisions, together with the
      guest scheduling decisions and unfair spinlocks, managed to create an
      interesting scenario similar to a livelock. (Sometimes the hang resolved
      itself after some minutes.)
      
      Changing stop_machine to yield the cpu to the hypervisor when yielding
      inside the guest fixed the problem for me. While I am not completely happy
      with this patch, I think it causes no harm and it really improves the
      situation for me.

      I used cpu_relax() for yielding to the hypervisor; does that work on all
      architectures?
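
      Sketched, the change amounts to something like this (names here are
      illustrative, not the literal diff):

	/* stopmachine thread spinning for the next state transition */
	while (state != STOPMACHINE_EXIT) {
		if (!prepared && !irqs_disabled())
			yield();	/* let sibling threads onto their cpus */
		cpu_relax();		/* on virtualized cpus this can also
					 * hint/yield to the hypervisor */
	}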
      
      P.S.: If you want to reproduce the problem: cpu hotplug and kprobes both
      use stop_machine_run, and both triggered the problem after some retries.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  5. 22 Apr 2008 (1 commit)
  6. 20 Apr 2008 (1 commit)
      generic: use new set_cpus_allowed_ptr function · f70316da
      Authored by Mike Travis
        * Use the new set_cpus_allowed_ptr() function added by the previous
          patch, which, instead of passing the "newly allowed cpus" cpumask_t
          argument by value, passes it by pointer (a call-site sketch follows
          below):
      
          -int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask)
          +int set_cpus_allowed_ptr(struct task_struct *p, const cpumask_t *new_mask)
      
        * Modify CPU_MASK_ALL
      
      Depends on:
      	[sched-devel]: sched: add new set_cpus_allowed_ptr function
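
      A typical call-site conversion might then look like this sketch
      (the surrounding context is invented for illustration):

	cpumask_t saved_mask = current->cpus_allowed;
	cpumask_t new_mask = cpumask_of_cpu(cpu);

	/* pass a pointer: no cpumask_t copy on the callee's stack */
	set_cpus_allowed_ptr(current, &new_mask);
	/* ... do work pinned to 'cpu' ... */
	set_cpus_allowed_ptr(current, &saved_mask);	/* restore */
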
      Signed-off-by: Mike Travis <travis@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  7. 19 Apr 2008 (1 commit)
  8. 07 Feb 2008 (1 commit)
  9. 26 Jan 2008 (1 commit)
      cpu-hotplug: replace lock_cpu_hotplug() with get_online_cpus() · 86ef5c9a
      Authored by Gautham R Shenoy
      Replace all lock_cpu_hotplug/unlock_cpu_hotplug calls in the kernel with
      get_online_cpus and put_online_cpus, as the new names highlight the
      refcount semantics of these operations.

      The new API guarantees protection against the cpu-hotplug operation, but
      it doesn't guarantee serialized access to any of the local data
      structures. Hence the changes need to be reviewed.
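
      The converted pattern, sketched (do_something() is a stand-in for
      whatever per-cpu work the caller does):

	int cpu;

	get_online_cpus();		/* take a reference: hotplug blocked */
	for_each_online_cpu(cpu)
		do_something(cpu);	/* cpus cannot come or go here */
	put_online_cpus();		/* drop the reference */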
      
      In case of pseries_add_processor/pseries_remove_processor, use
      cpu_maps_update_begin()/cpu_maps_update_done() as we're modifying the
      cpu_present_map there.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  10. 17 Jul 2007 (1 commit)
  11. 11 May 2007 (1 commit)
  12. 09 May 2007 (1 commit)
      Use stop_machine_run in the Intel RNG driver · ee527cd3
      Authored by Prarit Bhargava
      Replace smp_call_function with stop_machine_run in the Intel RNG driver.
      
      CPU A has done read_lock(&lock).
      CPU B has done write_lock_irq(&lock) and is waiting for A to release the lock.

      A third CPU, C, calls smp_call_function and issues the IPI.  CPU A takes
      CPU C's IPI.  CPU B is waiting with interrupts disabled and does not see
      the IPI.  CPU C is stuck waiting for CPU B to respond to the IPI.

      Deadlock.

      The solution is to use stop_machine_run instead of smp_call_function
      (smp_call_function should not be called in situations where the CPUs may
      be suspended).
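
      Sketched usage (the callback shown is a stand-in, not the driver's
      exact code): stop_machine_run parks every other cpu in a known state
      before the callback runs, so no cpu can be left holding the lock
      with interrupts disabled.

	static int rng_reconfigure(void *data)
	{
		/* runs with all other cpus stopped: no IPI needed,
		 * no deadlock possible */
		return 0;
	}

	err = stop_machine_run(rng_reconfigure, NULL, NR_CPUS);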
      
      [haruo.tomita@toshiba.co.jp: fix a typo in mod_init()]
      [haruo.tomita@toshiba.co.jp: fix memory leak]
      Signed-off-by: Prarit Bhargava <prarit@redhat.com>
      Cc: Jan Beulich <jbeulich@novell.com>
      Cc: "Tomita, Haruo" <haruo.tomita@toshiba.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 30 Sep 2006 (1 commit)
  14. 28 Aug 2006 (1 commit)
  15. 04 Jul 2006 (1 commit)
  16. 26 Jun 2006 (1 commit)
  17. 11 Jan 2006 (1 commit)
  18. 14 Nov 2005 (1 commit)
      [PATCH] stop_machine() vs. synchronous IPI send deadlock · 4557398f
      Authored by Kirill Korotaev
      This fixes a deadlock of stop_machine() vs.  synchronous IPI send.  The
      problem is that stop_machine() disables interrupts before disabling
      preemption on other CPUs.  So if another CPU is preempted and then calls
      something like flush_tlb_all(), it will deadlock with the CPU doing
      stop_machine(), which can't process the IPI due to disabled IRQs.
      
      I changed stop_machine() to do exactly the same things as it does on other
      CPUs, i.e.  it should disable preemption first on _all_ CPUs, including
      itself, and only after that disable IRQs.
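
      The fixed ordering can be pictured as a state machine that every
      cpu, including the initiator, walks through (the state names below
      are illustrative):

	enum stopmachine_state {
		STOPMACHINE_WAIT,		/* preemption off, IRQs still on */
		STOPMACHINE_DISABLE_IRQ,	/* entered only once ALL cpus
						 * spin non-preemptibly */
		STOPMACHINE_RUN,		/* the target function runs */
		STOPMACHINE_EXIT,
	};
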
      Signed-off-by: Kirill Korotaev <dev@sw.ru>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "Andrey Savochkin" <saw@sawoct.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  19. 22 Jun 2005 (1 commit)
      [PATCH] smp_processor_id() cleanup · 39c715b7
      Authored by Ingo Molnar
      This patch implements a number of smp_processor_id() cleanup ideas that
      Arjan van de Ven and I came up with.
      
      The previous __smp_processor_id/_smp_processor_id/smp_processor_id API
      spaghetti was hard to follow, both on the implementation side and on the
      usage side.

      Some of the complexity arose from poorly chosen names, and some from the
      fact that not all architectures defined __smp_processor_id.
      
      In the new code, there are two externally visible symbols:
      
       - smp_processor_id(): debug variant.
      
       - raw_smp_processor_id(): nondebug variant. Replaces all existing
         uses of _smp_processor_id() and __smp_processor_id(). Defined
         by every SMP architecture in include/asm-*/smp.h.
      
      There is one new internal symbol, dependent on DEBUG_PREEMPT:
      
       - debug_smp_processor_id(): internal debug variant, mapped to
                                   smp_processor_id().
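
      The resulting mapping can be sketched as follows (assuming the
      CONFIG_DEBUG_PREEMPT guard described above):

	#ifdef CONFIG_DEBUG_PREEMPT
	# define smp_processor_id() debug_smp_processor_id()
	#else
	# define smp_processor_id() raw_smp_processor_id()
	#endif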
      
      Also, I moved debug_smp_processor_id() from lib/kernel_lock.c into a new
      lib/smp_processor_id.c file.  All related comments got updated and/or
      clarified.
      
      I have build/boot tested the following 8 .config combinations on x86:
      
       {SMP,UP} x {PREEMPT,!PREEMPT} x {DEBUG_PREEMPT,!DEBUG_PREEMPT}
      
      I have also build/boot tested x64 on UP/PREEMPT/DEBUG_PREEMPT.  (Other
      architectures are untested, but should work just fine.)
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjan@infradead.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  20. 01 May 2005 (1 commit)
  21. 17 Apr 2005 (1 commit)
      Linux-2.6.12-rc2 · 1da177e4
      Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!