1. 17 January 2009 (1 commit)
  2. 30 September 2008 (1 commit)
  3. 13 August 2008 (1 commit)
    • [IA64] Ensure cpu0 can access per-cpu variables in early boot code · 10617bbe
      Committed by Tony Luck
      ia64 handles per-cpu variables a little differently from other architectures
      in that it maps the physical memory allocated for each cpu at a constant
      virtual address (0xffffffffffff0000). This mapping is not enabled until
      the architecture specific cpu_init() function is run, which causes problems
      since some generic code is run before this point. In particular when
      CONFIG_PRINTK_TIME is enabled, the boot cpu will trap on the access to
      per-cpu memory at the first printk() call, so the boot will fail without
      the kernel printing anything to the console.
      
      Fix this by allocating percpu memory for cpu0 in the kernel data section
      and doing all initialization to enable percpu access in head.S before
      calling any generic code.
      
      Other cpus must take care not to access per-cpu variables too early, but
      their code path from start_secondary() to cpu_init() is all in arch/ia64.
      Signed-off-by: Tony Luck <tony.luck@intel.com>
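
      A minimal userspace sketch of the idea in this commit, with illustrative
      names only (the real work happens in head.S and the fixed ia64 per-cpu
      mapping): give the boot cpu a statically allocated per-cpu area in the
      data section and point the per-cpu base at it before any generic code
      dereferences per-cpu variables.

      #include <stdio.h>
      #include <stdint.h>

      /* cpu0's per-cpu area lives in static data, so it exists before any
         dynamic per-cpu setup has happened (hypothetical layout). */
      static char cpu0_percpu_area[4096];

      /* Stands in for the fixed per-cpu mapping that head.S establishes. */
      static char *percpu_base;

      /* A "per-cpu" variable is just an offset into the per-cpu area. */
      #define TIMESTAMP_OFFSET 0

      static uint64_t *this_cpu_timestamp(void)
      {
              return (uint64_t *)(percpu_base + TIMESTAMP_OFFSET);
      }

      int main(void)
      {
              /* Early boot: point the base at cpu0's static area first... */
              percpu_base = cpu0_percpu_area;

              /* ...so generic code (e.g. a timestamped printk) can already
                 use per-cpu data before cpu_init() runs. */
              *this_cpu_timestamp() = 12345;
              printf("boot cpu timestamp: %llu\n",
                     (unsigned long long)*this_cpu_timestamp());
              return 0;
      }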
  4. 28 May 2008 (2 commits)
    • [IA64] pvops: preparation: move the constants, LOAD_OFFSET, to a header file. · 8311d21c
      Committed by Isaku Yamahata
      Move the LOAD_OFFSET definition from vmlinux.lds.S into system.h.
      In paravirtualized environments, it is necessary to detect the
      execution environment. One of the solutions is a multi entry point:
      it allows a boot loader to start kernel execution from an entry
      point that is different from the ELF entry point. The non-standard
      entry point will be defined as a specialized ELF note which contains
      the LMA of the entry point symbol.
      The constant, LOAD_OFFSET, is necessary to calculate the symbol's LMA.
      Move the definition into the public header file to make it available
      to the multi entry point support.
      
      Cc: "He, Qing" <qing.he@intel.com>
      Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
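
      A small sketch of the calculation the commit message refers to, assuming
      a placeholder LOAD_OFFSET value (the real constant is defined in
      system.h after this change): an alternate entry point advertised via an
      ELF note must carry the entry symbol's LMA, computed as its link-time
      virtual address minus LOAD_OFFSET.

      #include <stdio.h>
      #include <stdint.h>

      /* Placeholder value for illustration; not the actual ia64 constant. */
      #define LOAD_OFFSET 0xe000000000000000ULL

      /* LMA = VMA - LOAD_OFFSET, mirroring the AT(ADDR(section) - LOAD_OFFSET)
         pattern used in the kernel linker script. */
      static uint64_t vma_to_lma(uint64_t vma)
      {
              return vma - LOAD_OFFSET;
      }

      int main(void)
      {
              uint64_t entry_vma = 0xe000000004000000ULL;  /* hypothetical entry symbol */

              printf("entry point LMA for the ELF note: 0x%llx\n",
                     (unsigned long long)vma_to_lma(entry_vma));
              return 0;
      }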
    • [IA64] Workaround for RSE issue · 4dcc29e1
      Committed by Tony Luck
      Problem: An application violating the architectural rules regarding
      operation dependencies and having specific Register Stack Engine (RSE)
      state at the time of the violation, may result in an illegal operation
      fault and invalid RSE state.  Such faults may initiate a cascade of
      repeated illegal operation faults within OS interruption handlers.
      The specific behavior is OS dependent.
      
      Implication: An application causing an illegal operation fault with
      specific RSE state may result in a series of illegal operation faults
      and an eventual OS stack overflow condition.
      
      Workaround: OS interruption handlers that switch to kernel backing
      store implement a check for invalid RSE state to avoid the series
      of illegal operation faults.
      
      The core of the workaround is the RSE_WORKAROUND code sequence
      inserted into each invocation of the SAVE_MIN_WITH_COVER and
      SAVE_MIN_WITH_COVER_R19 macros.  This sequence includes hard-coded
      constants that depend on the number of stacked physical registers
      being 96.  The rest of this patch consists of code to disable this
      workaround should this not be the case (with the presumption that
      if a future Itanium processor increases the number of registers, it
      would also remove the need for this patch).
      
      Move the start of the RBS up to a mod32 boundary to avoid some
      corner cases.
      
      The dispatch_illegal_op_fault code outgrew the spot it was
      squatting in when built with this patch and CONFIG_VIRT_CPU_ACCOUNTING=y.
      Move it out to the end of the ivt.
      Signed-off-by: Tony Luck <tony.luck@intel.com>
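
      A toy sketch of the guard described above; the helper names are made up
      and stand in for the real PAL query and instruction patching: the
      RSE_WORKAROUND sequence bakes in constants that are only valid for 96
      stacked physical registers, so a CPU reporting a different count has the
      workaround disabled.

      #include <stdio.h>

      static int query_stacked_phys_regs(void)   /* stands in for the PAL query */
      {
              return 96;
      }

      static void patch_out_rse_workaround(void) /* stands in for insn patching */
      {
              printf("RSE workaround disabled\n");
      }

      int main(void)
      {
              if (query_stacked_phys_regs() != 96)
                      patch_out_rse_workaround();
              else
                      printf("keeping RSE workaround (96 stacked registers)\n");
              return 0;
      }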
  5. 29 January 2008 (1 commit)
  6. 08 December 2007 (1 commit)
  7. 14 August 2007 (2 commits)
  8. 26 July 2007 (1 commit)
  9. 20 July 2007 (1 commit)
    • define new percpu interface for shared data · 5fb7dc37
      Committed by Fenghua Yu
      The per cpu data section contains two types of data: one set that is
      exclusively accessed by the local cpu, and another set that is per cpu
      but also shared with remote cpus.  In the current kernel, these two sets
      are not clearly separated out.  This can potentially cause the same
      cacheline to be shared between the two sets of data, which will result
      in unnecessary bouncing of the cacheline between cpus.
      
      One way to fix the problem is to cacheline align the remotely accessed per
      cpu data, both at the beginning and at the end.  Because of the padding at
      both ends, this will likely cause some memory wastage and also the
      interface to achieve this is not clean.
      
      This patch:
      
      Moves the remotely accessed per cpu data (which is currently marked
      as ____cacheline_aligned_in_smp) into a different section, where all the data
      elements are cacheline aligned.  This cleanly differentiates the
      local-only data from the remotely accessed data.
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: <linux-arch@vger.kernel.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
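
      A userspace sketch of the false-sharing problem this separation
      addresses; the struct names and the 128-byte line size are assumptions
      for illustration, not the kernel's per-cpu macros: keeping remotely
      accessed items in their own cacheline-aligned group stops them from
      sharing lines with purely local data, without padding every variable
      individually.

      #include <stdio.h>

      #define CACHELINE 128   /* assumed line size for the example */

      struct percpu_local {            /* touched only by the owning cpu */
              long local_counter;
      };

      struct percpu_shared {           /* read/written by remote cpus too */
              long remote_counter;
      } __attribute__((aligned(CACHELINE)));

      /* Grouping the shared items separately (analogous to a dedicated
         section) keeps them off the cache lines used by local-only data. */
      static struct percpu_local  local_data[4];
      static struct percpu_shared shared_data[4];

      int main(void)
      {
              printf("local_data[1]  at %p (stride %zu)\n",
                     (void *)&local_data[1], sizeof(local_data[0]));
              printf("shared_data[1] at %p (stride %zu, cacheline aligned)\n",
                     (void *)&shared_data[1], sizeof(shared_data[0]));
              return 0;
      }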
  10. 19 May 2007 (2 commits)
  11. 12 February 2007 (1 commit)
  12. 07 February 2007 (1 commit)
    • [IA64] remove per-cpu ia64_phys_stacked_size_p8 · a0776ec8
      Committed by Chen, Kenneth W
      It's not efficient to use a per-cpu variable just to store
      how many physical stack registers a cpu has.  From the first
      incarnation of ia64 up to the upcoming Montecito processor, that
      value has stayed "glued" to 96.  Having a variable in memory means
      that the kernel is burning an extra cacheline access on every
      syscall and kernel exit path.  Such a "static" value is better
      served by the instruction patching utility that exists today.
      Convert ia64_phys_stacked_size_p8 into dynamic insn patching.
      
      This also has a pleasant side effect of eliminating access to the
      per-cpu area while psr.ic=0 in the kernel exit path. (fixable
      for per-cpu DTC work, but why bother?)
      
      There are some concerns about the default value encoded in the
      instruction in the kernel image.  It shouldn't be a concern.
      The reasons are:
      
      (1) cpu_init() is called at CPU initialization.  In there, we
          find out physical stack register size from PAL and patch
          two instructions in kernel exit code.  The code in question
          can not be executed before the patching is done.
      
      (2) the current implementation stores zero in ia64_phys_stacked_size_p8,
          and that is the value the current kernel exit path loads.
          With the new code, it is as if we stored a register size of 96
          in ia64_phys_stacked_size_p8, thus creating a better safety net.
          Given that (1) above can never fail, having (2) is just a bonus.
      
      All in all, this patch allows one less memory reference in the kernel
      exit path, thus reducing syscall and interrupt return latency, and
      avoids polluting potentially useful data in the CPU cache.
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
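
      An illustrative sketch of the trade-off the commit describes; the
      function pointer below merely stands in for the patched instruction and
      is not how ia64 insn patching actually works: the old scheme loads the
      register count from (per-cpu) memory on every kernel exit, while the new
      scheme bakes the value discovered at cpu_init() time directly into the
      code path.

      #include <stdio.h>

      static int phys_stacked_regs_in_memory;      /* old scheme: defaults to 0 */

      static int read_from_memory(void)            /* costs a memory access */
      {
              return phys_stacked_regs_in_memory;
      }

      static int patched_constant(void)            /* "patched" fast path */
      {
              return 96;                           /* value reported by PAL */
      }

      static int (*get_phys_stacked_regs)(void) = read_from_memory;

      int main(void)
      {
              /* cpu_init(): discover the count once and "patch" the exit
                 path so later calls never touch memory. */
              get_phys_stacked_regs = patched_constant;

              printf("stacked physical registers: %d\n", get_phys_stacked_regs());
              return 0;
      }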
  13. 06 February 2007 (1 commit)
  14. 28 October 2006 (1 commit)
  15. 27 September 2006 (1 commit)
  16. 01 July 2006 (1 commit)
  17. 30 March 2006 (1 commit)
  18. 25 March 2006 (1 commit)
    • [IA64] MCA recovery: kernel context recovery table · d2a28ad9
      Committed by Russ Anderson
      Memory errors encountered by user applications may surface
      when the CPU is running in kernel context.  The current code
      will not attempt recovery if the MCA surfaces in kernel
      context (privilege mode 0).  This patch adds a check for cases
      where the user initiated the load that surfaces in kernel
      interrupt code.
      
      An example is a user process launching a load from memory
      where the data in memory has bad ECC.  Before the bad data
      gets to the CPU register, an interrupt comes in.  The
      code jumps to the IVT interrupt entry point and begins
      execution in kernel context.  The process of saving the
      user registers (SAVE_REST) causes the bad data to be loaded
      into a CPU register, triggering the MCA.  The MCA surfaces in
      kernel context, even though the load was initiated from
      user context.
      
      As suggested by David and Tony, this patch uses an exception-
      table-like approach, putting the tagged recovery addresses in
      a searchable table.  One difference from the exception table
      is that MCAs do not surface in precise places (such as with
      a TLB miss), so instead of tagging specific instructions,
      address ranges are registered.  A single macro is used to do
      the tagging, with the input parameter being the label
      of the starting address and the location of the macro itself
      being the ending address.  This limits clutter in the code.
      
      This patch only tags one spot, the interrupt ivt entry.
      Testing showed that spot to be a "heavy hitter" with
      MCAs surfacing while saving user registers.  Other spots
      can be added as needed by adding a single macro.
      
      Signed-off-by: Russ Anderson (rja@sgi.com)
      Signed-off-by: Tony Luck <tony.luck@intel.com>
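
      A self-contained sketch of the exception-table-like lookup the patch
      describes; the structure names and addresses are made up and a 64-bit
      host is assumed: because an MCA can surface anywhere inside a tagged
      region rather than at one precise instruction, the table stores
      (start, end) address ranges and the handler searches it with the
      interrupted instruction pointer.

      #include <stdio.h>
      #include <stddef.h>
      #include <stdint.h>
      #include <inttypes.h>

      struct mca_range {
              uintptr_t start;
              uintptr_t end;
      };

      /* One entry per tagged region, e.g. the ivt interrupt entry that
         saves user registers (made-up addresses). */
      static const struct mca_range mca_recovery_table[] = {
              { 0xa000000100001000UL, 0xa000000100001400UL },
      };

      static int ip_is_recoverable(uintptr_t ip)
      {
              size_t i;

              for (i = 0; i < sizeof(mca_recovery_table) / sizeof(mca_recovery_table[0]); i++)
                      if (ip >= mca_recovery_table[i].start && ip < mca_recovery_table[i].end)
                              return 1;
              return 0;
      }

      int main(void)
      {
              uintptr_t faulting_ip = 0xa000000100001200UL;  /* pretend MCA location */

              printf("MCA at 0x%" PRIxPTR " recoverable: %d\n",
                     faulting_ip, ip_is_recoverable(faulting_ip));
              return 0;
      }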
  19. 23 March 2006 (1 commit)
  20. 17 December 2005 (1 commit)
  21. 08 September 2005 (1 commit)
  22. 28 June 2005 (1 commit)
  23. 17 April 2005 (1 commit)
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!