1. 09 May 2007 (1 commit)
    • move die notifier handling to common code · 1eeb66a1
      Committed by Christoph Hellwig
      This patch moves the die notifier handling to common code.  Previously,
      various architectures had exactly the same code for it.  Note that the
      new code is compiled unconditionally; this should be understood as an
      appeal to the other architecture maintainers to implement support for
      it as well (i.e. sprinkling a notify_die() call or two in the proper
      places).
      
      arm had a notify_die() that did something totally different; I renamed
      it to arm_notify_die as part of the patch and made it static to the
      file in which it is declared and used.  avr32 used to pass slightly
      less information through this interface, and I brought it into line
      with the other architectures.
      
      [akpm@linux-foundation.org: build fix]
      [akpm@linux-foundation.org: fix vmalloc_sync_all bustage]
      [bryan.wu@analog.com: fix vmalloc_sync_all in nommu]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Cc: <linux-arch@vger.kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Bryan Wu <bryan.wu@analog.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
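      
      Registering for die notifications through the consolidated interface
      looks roughly like the sketch below: a minimal module skeleton,
      assuming the post-patch linux/kdebug.h header.  The handler and module
      names are illustrative, and DIE_OOPS comes from the per-arch enum
      die_val.  On the arch side, "support" amounts to calling
      notify_die(DIE_OOPS, str, regs, err, trap, sig) at the right spots.
      
          #include <linux/module.h>
          #include <linux/kdebug.h>    /* die_args, (un)register_die_notifier */
          #include <linux/notifier.h>
      
          /* Called on every die event; data carries the trap details. */
          static int sample_die_handler(struct notifier_block *nb,
                                        unsigned long val, void *data)
          {
                  struct die_args *args = data;
      
                  if (val == DIE_OOPS)
                          printk(KERN_INFO "die: %s, trap %d\n",
                                 args->str, args->trapnr);
                  return NOTIFY_DONE;    /* let other notifiers run too */
          }
      
          static struct notifier_block sample_die_nb = {
                  .notifier_call = sample_die_handler,
          };
      
          static int __init sample_init(void)
          {
                  return register_die_notifier(&sample_die_nb);
          }
      
          static void __exit sample_exit(void)
          {
                  unregister_die_notifier(&sample_die_nb);
          }
      
          module_init(sample_init);
          module_exit(sample_exit);
          MODULE_LICENSE("GPL");
      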
  2. 08 May 2007 (1 commit)
  3. 07 April 2007 (2 commits)
  4. 31 March 2007 (1 commit)
    • [IA64] fail mmaps that span areas with incompatible attributes · 6d40fc51
      Committed by Bjorn Helgaas
      Example memory map (from HP sx1000 with VGA enabled):
          0x00000 - 0x9FFFF supports only WB (cacheable) access
          0xA0000 - 0xBFFFF supports only UC (uncacheable) access
          0xC0000 - 0xFFFFF supports only WB (cacheable) access
      
      Some versions of X map the entire 0x00000-0xFFFFF area at once.  With the
      example above, this mmap must fail because there's no memory attribute that's
      safe for the entire area.
      
      Prior to this patch, we performed the mmap with a UC mapping.  When X
      accessed the WB memory at 0xC0000, it caused an MCA (machine check
      abort).  The crash can happen when mapping 0xC0000 from either /dev/mem
      or a /sys/.../legacy_mem file.
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
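      
      A quick way to see the new behaviour from user space is a hypothetical
      test program like the one below: it mmap()s the whole 1MB legacy range
      from /dev/mem and expects the call to be refused (run as root on an
      affected ia64 box; pre-patch kernels handed back an all-UC mapping
      instead).
      
          #include <fcntl.h>
          #include <stdio.h>
          #include <stdlib.h>
          #include <sys/mman.h>
          #include <unistd.h>
      
          int main(void)
          {
                  int fd = open("/dev/mem", O_RDONLY);
                  if (fd < 0) {
                          perror("open /dev/mem");
                          return EXIT_FAILURE;
                  }
      
                  /* 0x00000-0xFFFFF spans WB-only and UC-only regions */
                  void *p = mmap(NULL, 0x100000, PROT_READ, MAP_SHARED, fd, 0);
                  if (p == MAP_FAILED)
                          perror("mmap");      /* expected with this patch */
                  else
                          munmap(p, 0x100000);
      
                  close(fd);
                  return EXIT_SUCCESS;
          }
      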
  5. 30 March 2007 (2 commits)
  6. 21 March 2007 (2 commits)
  7. 09 March 2007 (5 commits)
  8. 08 March 2007 (4 commits)
  9. 07 March 2007 (3 commits)
    • [IA64] kexec: Use EFI_LOADER_DATA for ELF core header · cee87af2
      Committed by Magnus Damm
      The address where the ELF core header is stored is passed to the secondary
      kernel as a kernel command line option.  The memory area for this header is
      also marked as a separate EFI memory descriptor on ia64.
      
      At the moment, the separate EFI memory descriptor has the type
      EFI_UNUSABLE_MEMORY.  With such a type, the secondary kernel skips over
      the entire memory granule (a config option, 16M or 64M) when detecting
      memory.  If we are lucky we will just lose some memory, but if we
      happen to have data in the same granule (such as an initramfs image),
      then this data will never get mapped and the kernel bombs out when
      trying to access it.
      
      So this is an attempt to fix this by changing the EFI memory descriptor
      type into EFI_LOADER_DATA.  This type is the same type used for the kernel
      data and for initramfs.  In the secondary kernel we then handle the ELF
      core header data the same way as we handle the initramfs image.
      
      This patch contains the kernel changes to make this happen.  It is
      pretty straightforward: we reserve the area in reserve_memory().  The
      address for the area comes from the kernel command line and the size
      comes from the specialized EFI parsing function
      vmcore_find_descriptor_size().
      
      The kexec-tools-testing code for this can be found here:
      http://lists.osdl.org/pipermail/fastboot/2007-February/005983.html
      Signed-off-by: Magnus Damm <magnus@valinux.co.jp>
      Cc: Simon Horman <horms@verge.net.au>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
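      
      The command-line half of the handshake is small; below is a sketch of
      its shape, assuming the generic early_param()/memparse() helpers (the
      exact ia64 wiring lives in the arch setup code, and the variable name
      mirrors the common crash-dump code):
      
          #include <linux/errno.h>
          #include <linux/init.h>
          #include <linux/kernel.h>    /* memparse() */
      
          /* Physical address of the ELF core header, handed over by
           * kexec-tools via "elfcorehdr=<addr>" on the secondary kernel's
           * command line. */
          unsigned long long elfcorehdr_addr;
      
          static int __init setup_elfcorehdr(char *arg)
          {
                  char *end;
      
                  if (!arg)
                          return -EINVAL;
                  elfcorehdr_addr = memparse(arg, &end);
                  return end > arg ? 0 : -EINVAL;
          }
          early_param("elfcorehdr", setup_elfcorehdr);
      
      reserve_memory() then reserves [elfcorehdr_addr, elfcorehdr_addr + size)
      just like the initramfs region, with the size coming from
      vmcore_find_descriptor_size().
      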
    • [IA64] perfmon use-after-free fix · 41d5e5d7
      Committed by Nick Piggin
      Perfmon associates vmalloc()ed memory with a file descriptor, and installs
      a vma mapping that memory.  Unfortunately, the vm_file field is not filled
      in, so processes with mappings to that memory do not prevent the file from
      being closed and the memory freed.  This results in use-after-free bugs and
      multiple freeing of pages, etc.
      
      I saw this bug on an Altix running SLES9.  I haven't reproduced it
      upstream, but it looks like the same issue is there.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Stephane Eranian <eranian@hpl.hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
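      
      The shape of the fix is to make each mapping pin the file.  A sketch,
      with a made-up helper name (in perfmon the assignment happens in the
      buffer-mapping path, where the struct file is in scope):
      
          #include <linux/fs.h>
          #include <linux/mm.h>
      
          /* Record the backing file in the vma so the mapping holds a
           * reference; the core VM drops vm_file when the vma goes away,
           * so the vmalloc()ed buffer can no longer be freed underneath
           * live mappings. */
          static void pfm_bind_vma_to_file(struct vm_area_struct *vma,
                                           struct file *filp)
          {
                  get_file(filp);     /* reference for the mapping's lifetime */
                  vma->vm_file = filp;
          }
      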
    • [IA64] point saved_max_pfn to the max_pfn of the entire system · f4a57099
      Committed by Horms
      Make saved_max_pfn point to the max_pfn of the entire system.
      
      Without this patch, vmcore is zero length on ia64.  This is because
      saved_max_pfn was wrongly being set to the max_pfn of the crash
      kernel's address space, rather than the max_pfn of the physical memory
      of the machine - the whole purpose of vmcore is to access physical
      memory that is not part of the crash kernel's address space.
      Signed-off-by: Simon Horman <horms@verge.net.au>
      Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
      Sort-Of-Acked-By: Jay Lan <jlan@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
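      
      Conceptually the fix is a one-liner during the EFI memmap walk; an
      illustrative sketch follows (the callback name is made up, and the
      real code sits in ia64's EFI/memory setup):
      
          #include <linux/crash_dump.h>    /* saved_max_pfn */
          #include <linux/efi.h>
          #include <linux/mm.h>            /* PAGE_SHIFT */
      
          /* Raise saved_max_pfn to the end of every descriptor so that it
           * reflects the whole machine's physical memory, not just the
           * crash kernel's reserved slice. */
          static void __init note_descriptor(efi_memory_desc_t *md)
          {
                  unsigned long end_pfn = (md->phys_addr +
                          (md->num_pages << EFI_PAGE_SHIFT)) >> PAGE_SHIFT;
      
                  if (end_pfn > saved_max_pfn)
                          saved_max_pfn = end_pfn;
          }
      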
  10. 27 February 2007 (1 commit)
  11. 18 February 2007 (1 commit)
  12. 17 February 2007 (1 commit)
  13. 15 February 2007 (3 commits)
  14. 13 February 2007 (3 commits)
  15. 12 February 2007 (3 commits)
  16. 10 February 2007 (1 commit)
  17. 08 February 2007 (1 commit)
  18. 07 February 2007 (2 commits)
    • [IA64] relax per-cpu TLB requirement to DTC · 00b65985
      Committed by Chen, Kenneth W
      Instead of pinning the per-cpu TLB entry into a DTR, use a DTC.  This
      frees up one TLB entry for the application, or even for the kernel if
      the access pattern to the per-cpu data area has high temporal locality.
      
      Since the per-cpu area is mapped at the top of the region 7 address
      space, we just need to add a special case in alt_dtlb_miss.  The
      physical address of the per-cpu data is already conveniently stored in
      IA64_KR(PER_CPU_DATA).  Latency for alt_dtlb_miss is not affected, as
      all the extra work can be hidden: the alt_dtlb_miss handler was
      measured at 23 cycles of latency both before and after the patch.
      
      The performance effect is significant for applications that put a lot
      of TLB pressure on the CPU.  Workloads like database online transaction
      processing, or applications using terabytes of memory, benefit the
      most.  Measurement with an industry-standard database benchmark showed
      a gain of upwards of 1.6%, while smaller workloads (cpu, java) also
      showed a small improvement.
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
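      
      In C terms the special case amounts to the following, purely
      illustrative sketch; the real code is a few bundles of ia64 assembly
      in ivt.S, and the constant and accessor here are stand-ins for
      PERCPU_ADDR and IA64_KR(PER_CPU_DATA):
      
          #include <stdint.h>
          #include <stdio.h>
      
          #define PERCPU_ADDR 0xffffffffffff0000UL  /* stand-in: top of region 7 */
      
          /* Stand-in for reading IA64_KR(PER_CPU_DATA), which boot code
           * loads with the physical base of this cpu's per-cpu area. */
          static uint64_t kr_per_cpu_data(void)
          {
                  return 0x04000000UL;              /* fake physical base */
          }
      
          /* What the patched alt_dtlb_miss computes for a region-7 address:
           * the per-cpu window translates via the kernel register, while
           * everything else keeps the usual identity mapping. */
          static uint64_t alt_dtlb_phys(uint64_t vaddr)
          {
                  if (vaddr >= PERCPU_ADDR)
                          return kr_per_cpu_data() + (vaddr - PERCPU_ADDR);
                  return vaddr & ((1UL << 61) - 1); /* strip the region bits */
          }
      
          int main(void)
          {
                  printf("%#llx\n", (unsigned long long)
                         alt_dtlb_phys(PERCPU_ADDR + 0x40));
                  printf("%#llx\n", (unsigned long long)
                         alt_dtlb_phys(0xe000000000001000UL));
                  return 0;
          }
      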
    • [IA64] remove per-cpu ia64_phys_stacked_size_p8 · a0776ec8
      Committed by Chen, Kenneth W
      It is not efficient to use a per-cpu variable just to store how many
      physical stacked registers a CPU has.  Ever since the first incarnation
      of ia64, up through the upcoming Montecito processor, that value has
      been "glued" to 96.  Having a variable in memory means that the kernel
      burns an extra cacheline access on every syscall and kernel exit path.
      Such a "static" value is better served by the instruction-patching
      utility that exists today.  Convert ia64_phys_stacked_size_p8 into
      dynamic instruction patching.
      
      This also has the pleasant side effect of eliminating access to the
      per-cpu area while psr.ic=0 in the kernel exit path.  (This would be
      fixable for the per-cpu DTC work, but why bother?)
      
      There were some concerns about the default value of the instruction
      encoded in the kernel image.  It should not be a concern, for these
      reasons:
      
      (1) cpu_init() is called at CPU initialization.  There, we find out
          the physical stack register size from PAL and patch two
          instructions in the kernel exit code.  The code in question cannot
          be executed before the patching is done.
      
      (2) The current implementation stores zero in ia64_phys_stacked_size_p8,
          and that is the value the current kernel exit path loads.  With the
          new code, the default is equivalent to storing a register size of
          96 in ia64_phys_stacked_size_p8, thus creating a better safety net.
          Given that (1) above can never fail, having (2) is just a bonus.
      
      All in all, this patch allows one less memory reference in the kernel
      exit path, reducing syscall and interrupt return latency, and avoids
      evicting potentially useful data from the CPU cache.
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
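      
      The boot-time wiring reads roughly as below; a sketch following the
      description above, assuming PAL is queried via ia64_pal_rse_info() and
      that the patch helper is named along the lines of
      ia64_patch_phys_stack_reg() (the surrounding function name is made up;
      the real call sits inside cpu_init()):
      
          #include <linux/types.h>
          #include <asm/pal.h>      /* ia64_pal_rse_info() */
          #include <asm/patch.h>    /* ia64_patch_phys_stack_reg() */
      
          static void __cpuinit patch_stacked_reg_count(void)
          {
                  u64 num_phys_stacked;
      
                  /* Ask PAL how many physical stacked registers exist. */
                  if (ia64_pal_rse_info(&num_phys_stacked, NULL) != 0)
                          num_phys_stacked = 96;   /* conservative default */
      
                  /* Encode (regs * 8 + 8) into the loadrs immediates in the
                   * kernel exit path; that code cannot run before this. */
                  ia64_patch_phys_stack_reg(num_phys_stacked * 8 + 8);
          }
      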
  19. 06 February 2007 (3 commits)