1. 10 May 2015, 1 commit
  2. 23 March 2015, 1 commit
  3. 03 January 2015, 1 commit
  4. 11 March 2014, 1 commit
  5. 07 March 2014, 2 commits
    • x86: Keep thread_info on thread stack in x86_32 · 198d208d
      Committed by Steven Rostedt
      x86_64 uses a per_cpu variable kernel_stack to always point to
      the thread stack of current. This is where the thread_info is stored
      and is accessed from this location even when the irq or exception stack
      is in use. This removes the complexity of having to maintain the
      thread info on the stack when interrupts are running and having to
      copy the preempt_count and other fields to the interrupt stack.
      
      x86_32 uses the old method of copying the thread_info from the thread
      stack to the exception stack just before executing the exception.
      
      Having two different methods requires #ifdefs, and the x86_32 way
      is a bit of a pain to maintain. By converting x86_32 to the same
      method as x86_64, we can remove the #ifdefs, clean up the x86_32 code
      a little, and remove the overhead of the copy.
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/20110806012354.263834829@goodmis.org
      Link: http://lkml.kernel.org/r/20140206144321.852942014@goodmis.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86: Prepare removal of previous_esp from i386 thread_info structure · 0788aa6a
      Committed by Steven Rostedt
      The i386 thread_info contains a previous_esp field that is used
      to daisy chain the different stacks for dump_stack()
      (ie. irq, softirq, thread stacks).
      
      The goal is to eventually make i386 handling of thread_info the same
      as x86_64, which means that the thread_info will not be in the stack
      but as a per_cpu variable. We will no longer depend on thread_info
      being able to daisy chain different stacks as it will only exist
      in one location (the thread stack).
      
      By moving previous_esp to the end of thread_info and referencing
      it as an offset instead of using a thread_info field, this becomes
      a stepping stone to moving the thread_info.
      
      The offset to get to the previous stack is rather ugly in this
      patch, but this is only temporary and the prev_esp will be changed
      in the next commit. This commit is more for sanity checks of the
      change.
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Robert Richter <rric@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/20110806012353.891757693@goodmis.org
      Link: http://lkml.kernel.org/r/20140206144321.608754481@goodmis.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  6. 01 October 2013, 1 commit
    • irq: Consolidate do_softirq() arch overriden implementations · 7d65f4a6
      Committed by Frederic Weisbecker
      All arch-overridden implementations of do_softirq() share the following
      common code: disable irqs (to avoid races with the pending check),
      check if there are softirqs pending, then execute __do_softirq() on
      a specific stack.
      
      Consolidate the common parts such that archs only worry about the
      stack switch.

      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@au1.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: James E.J. Bottomley <jejb@parisc-linux.org>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Andrew Morton <akpm@linux-foundation.org>
  7. 25 September 2013, 1 commit
  8. 15 July 2013, 1 commit
    • x86: delete __cpuinit usage from all x86 files · 148f9bb8
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch-independent (kernel/cpu.c)
      and are flagged as __cpuinit -- so if we remove the __cpuinit from
      arch-specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/x86 uses of the __cpuinit macros from
      all C files.  x86 only had the one __CPUINIT used in assembly files,
      and it wasn't paired off with a .previous or a __FINIT, so we can
      delete it directly w/o any corresponding additional change there.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  9. 08 May 2012, 1 commit
  10. 20 February 2012, 1 commit
  11. 05 December 2011, 2 commits
  12. 18 March 2011, 1 commit
  13. 18 January 2011, 1 commit
  14. 30 December 2010, 1 commit
  15. 29 October 2010, 1 commit
  16. 27 October 2010, 1 commit
  17. 07 September 2010, 1 commit
  18. 28 July 2010, 1 commit
  19. 29 June 2010, 1 commit
    • x86: Always use irq stacks · 7974891d
      Committed by Christoph Hellwig
      IRQ stacks provide much better safety against unexpected stack use from
      interrupts, at the minimal cost of slightly higher memory usage.
      Enable IRQ stacks also for the default 8k stack on 32-bit kernels to
      minimize the problem of stack overflows through interrupt activity.
      
      This is what the 64-bit kernel and various other architectures already do.

      Signed-off-by: Christoph Hellwig <hch@lst.de>
      LKML-Reference: <20100628121554.GA6605@lst.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  20. 02 November 2009, 1 commit
  21. 10 August 2009, 1 commit
  22. 20 February 2009, 1 commit
  23. 18 February 2009, 2 commits
  24. 09 February 2009, 2 commits
  25. 29 January 2009, 1 commit
  26. 12 January 2009, 1 commit
    • cpumask: update irq_desc to use cpumask_var_t · 7f7ace0c
      Committed by Mike Travis
      Impact: reduce memory usage, use new cpumask API.
      
      Replace the affinity and pending_masks with cpumask_var_t's.  This adds
      to the significant size reduction done with the SPARSE_IRQS changes.
      
      The added functions (init_alloc_desc_masks & init_copy_desc_masks) are
      in the include file so they can be inlined (and optimized out for the
      !CONFIG_CPUMASK_OFFSTACK case).  [Naming chosen to be consistent with
      the other init*irq functions, as well as the backwards arg declaration
      of "from, to" instead of the more common "to, from" standard.]
      
      Includes a slight change to the declaration of struct irq_desc to embed
      the pending_mask within ifdef(CONFIG_SMP) to be consistent with other
      references, and some small changes to Xen.
      
      Tested: sparse/non-sparse/cpumask_offstack/non-cpumask_offstack/nonuma/nosmp on x86_64

      Signed-off-by: Mike Travis <travis@sgi.com>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Cc: virtualization@lists.osdl.org
      Cc: xen-devel@lists.xensource.com
      Cc: Yinghai Lu <yhlu.kernel@gmail.com>
  27. 05 January 2009, 1 commit
  28. 17 December 2008, 1 commit
  29. 13 December 2008, 1 commit
  30. 08 December 2008, 1 commit
    • sparse irq_desc[] array: core kernel and x86 changes · 0b8f1efa
      Committed by Yinghai Lu
      Impact: new feature
      
      Problem on distro kernels: irq_desc[NR_IRQS] takes megabytes of RAM with
      NR_CPUS set to large values. The goal is to be able to scale up to much
      larger NR_IRQS value without impacting the (important) common case.
      
      To solve this, we generalize irq_desc[NR_IRQS] to an (optional) array of
      irq_desc pointers.
      
      When CONFIG_SPARSE_IRQ=y is used, we use kzalloc_node to get the
      irq_desc; this also makes the IRQ descriptors NUMA-local (to the site
      that calls request_irq()).
      
      This gets rid of the static irq_cfg[] array on x86 as well: x86 now
      stores irq_cfg in desc->chip_data.

      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  31. 16 October 2008, 6 commits