1. 27 Mar 2006 (3 commits)
  2. 26 Mar 2006 (1 commit)
    • [PATCH] sys_alarm() unsigned signed conversion fixup · c08b8a49
      Committed by Thomas Gleixner
      alarm() calls the kernel with an unsigned int timeout in seconds.  The
      value is stored in the tv_sec field of a struct timeval to set up the
      itimer.  The tv_sec field of struct timeval is of type long, which causes
      the tv_sec value to be negative on 32-bit machines if seconds > INT_MAX.
      
      Before the hrtimer merge (pre 2.6.16) such a negative value was converted
      to the maximum jiffies timeout by the timeval_to_jiffies conversion.  It's
      not clear whether this was intended or just happened to be done by the
      timeval_to_jiffies code.
      
      hrtimers expect a timeval in canonical form and treat a negative timeout as
      already expired.  This breaks the legitimate usage of alarm() with a
      timeout value > INT_MAX seconds.
      
      For 32-bit machines it is therefore necessary to limit the internal seconds
      value to avoid API breakage.  Instead of doing this in every implementation
      of sys_alarm, the duplicated sys_alarm code is moved into a common function
      in itimer.c (a sketch of that helper follows after this entry).
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
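      A minimal sketch of the common helper, assuming it landed in
      kernel/itimer.c as alarm_setitimer(); reconstructed from the
      description above, not a verbatim copy of the patch:

        #include <linux/time.h>
        #include <linux/kernel.h>

        unsigned long alarm_setitimer(unsigned int seconds)
        {
                struct itimerval it_new, it_old;

        #if BITS_PER_LONG < 64
                /* Clamp so the value still fits in the signed long tv_sec
                 * on 32-bit; otherwise hrtimers would treat the negative
                 * timeout as already expired. */
                if (seconds > INT_MAX)
                        seconds = INT_MAX;
        #endif
                it_new.it_value.tv_sec = seconds;
                it_new.it_value.tv_usec = 0;
                it_new.it_interval.tv_sec = it_new.it_interval.tv_usec = 0;

                do_setitimer(ITIMER_REAL, &it_new, &it_old);

                /* Round any leftover microseconds up so a pending alarm
                 * never reads back as 0 seconds. */
                return it_old.it_value.tv_sec + (it_old.it_value.tv_usec ? 1 : 0);
        }

      Each arch's sys_alarm() then reduces to a one-line call to this helper.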
  3. 25 Mar 2006 (8 commits)
    • [IA64] New IA64 core/thread detection patch · 4129a953
      Committed by Fenghua Yu
      IPF SDM 2.2 changes the definition of PAL_LOGICAL_TO_PHYSICAL, adding
      proc_number=-1 to get core/thread mapping info for the running processor.
      
      Based on this change, the existing core/thread detection in the IA64
      kernel should be updated correspondingly. This patch implements that
      change (see the sketch after this entry). It simplifies the detection
      code and eliminates a potential race condition. It also runs a bit
      faster and scales better as the number of cores and threads per
      package grows.
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
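      A sketch of the simplified detection, loosely modeled on
      identify_siblings() in arch/ia64/kernel/smpboot.c; the struct field
      names are recalled from memory and may not match the patch verbatim:

        #include <asm/pal.h>

        void identify_siblings(struct cpuinfo_ia64 *c)
        {
                pal_logical_to_physical_t info;

                /* proc_number == -1 asks PAL about the *running* logical
                 * CPU, so no cross-CPU iteration (and no race) is needed. */
                if (ia64_pal_logical_to_phys(-1, &info) != PAL_STATUS_SUCCESS)
                        return;

                c->socket_id        = info.overview_ppid; /* physical package */
                c->cores_per_socket = info.overview_cpp;
                c->threads_per_core = info.overview_tpc;
                c->core_id          = info.log1_cid;
                c->thread_id        = info.log1_tid;
        }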
    • [IA64] Increase max node count on SN platforms · 4d357aca
      Committed by Jack Steiner
      Update configuration files with the new CONFIG option.
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    • [IA64] Increase max node count on SN platforms · a9de9835
      Committed by Jack Steiner
      Node numbers are kept in cpu_to_node_map, which is currently defined
      as u8. Change it to u16 to accommodate larger node numbers (see the
      sketch after this entry).
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
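      A two-line sketch of the widened map; the declaration site and
      qualifiers are illustrative, recalled from the ia64 NUMA code of
      this era:

        /* before: u8 capped the map at 256 nodes */
        u8  cpu_to_node_map[NR_CPUS] __cacheline_aligned;

        /* after: u16 raises the ceiling to 65536 nodes */
        u16 cpu_to_node_map[NR_CPUS] __cacheline_aligned;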
    • [IA64] Increase max node count on SN platforms · 3ad5ef8b
      Committed by Jack Steiner
      Add support in IA64 ACPI for platforms that support more than
      256 nodes. Currently, ACPI is limited to 256 nodes because the
      proximity domain number is 8 bits.
      
      Long term, we expect to use ACPI 3.0 to support more than 256 nodes.
      This patch is an interim solution that works with platforms
      that pass the high-order bits of the proximity domain in
      "reserved" fields of the ACPI tables (see the sketch after this
      entry). This code is enabled ONLY on SN platforms.
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
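      A sketch of the interim decoding described above; the helper name,
      the reserved-byte index, and the struct layout are assumptions, not
      a verbatim copy of the patch:

        #include <linux/acpi.h>

        static int
        get_processor_proximity_domain(struct acpi_table_processor_affinity *pa)
        {
                int pxm = pa->proximity_domain; /* 8-bit field in the SRAT */

                if (ia64_platform_is("sn2"))
                        /* SN firmware stashes the high-order bits of the
                         * proximity domain in a reserved byte. */
                        pxm += pa->reserved[0] << 8;

                return pxm;
        }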
    • [IA64] Increase max node count on SN platforms · b354a838
      Committed by Jack Steiner
      Add a configuration option that makes the maximum number of nodes
      configurable for GENERIC or SN kernels.
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    • [IA64] Tollhouse HP: IA64 arch changes · f90aa8c4
      Committed by Prarit Bhargava
      arch/ia64/sn and include/asm-ia64/sn changes required to support
      Tollhouse system PCI hotplug; this also fixes the
      ia64_sn_sysctl_ioboard_get call and introduces the
      PRF_HOTPLUG_SUPPORT feature bit (see the sketch after this entry).
      Signed-off-by: Prarit Bhargava <prarit@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
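      A sketch of how a PROM feature bit like this is typically consumed on
      SN platforms; the wrapper function is hypothetical, and only the
      feature-bit name comes from the description above:

        #include <asm/sn/sn_feature_sets.h>

        static int tollhouse_hotplug_capable(void)
        {
                /* Only claim PCI hotplug support when the firmware
                 * actually advertises it. */
                return sn_prom_feature_available(PRF_HOTPLUG_SUPPORT);
        }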
    • [IA64] cleanup dig_irq_init · b17ea91a
      Committed by Chen, Kenneth W
      dig_irq_init is equivalent to machvec_noop; there is no need to
      define another empty function (see the sketch after this entry).
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
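      A sketch of the cleanup, assuming the machine-vector hook is named
      platform_irq_init as in the usual ia64 machvec convention:

        /* before: a private empty stub */
        void __init dig_irq_init(void) { }
        #define platform_irq_init       dig_irq_init

        /* after: reuse the shared no-op */
        #define platform_irq_init       machvec_noop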
    • [IA64] MCA recovery: kernel context recovery table · d2a28ad9
      Committed by Russ Anderson
      Memory errors encountered by user applications may surface
      when the CPU is running in kernel context.  The current code
      will not attempt recovery if the MCA surfaces in kernel
      context (privilege mode 0).  This patch adds a check for cases
      where the user initiated the load that surfaces in kernel
      interrupt code.
      
      An example is a user process launching a load from memory
      where the data in memory has bad ECC.  Before the bad data
      gets to the CPU register, an interrupt comes in.  The
      code jumps to the IVT interrupt entry point and begins
      execution in kernel context.  The process of saving the
      user registers (SAVE_REST) causes the bad data to be loaded
      into a CPU register, triggering the MCA.  The MCA surfaces in
      kernel context, even though the load was initiated from
      user context.
      
      As suggested by David and Tony, this patch uses an exception-table-like
      approach, putting the tagged recovery addresses in
      a searchable table.  One difference from the exception table
      is that MCAs do not surface in precise places (such as with
      a TLB miss), so instead of tagging specific instructions,
      address ranges are registered.  A single macro is used to do
      the tagging, with the input parameter being the label
      of the starting address and the macro itself marking the ending
      address.  This limits clutter in the code.
      
      This patch only tags one spot, the interrupt ivt entry.
      Testing showed that spot to be a "heavy hitter", with
      MCAs surfacing while saving user registers.  Other spots
      can be added as needed by adding a single macro (a sketch of
      the table lookup follows after this entry).
      
      Signed-off-by: Russ Anderson <rja@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
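      A sketch of the range lookup described above, assuming the tagging
      macro emits location-relative (start, end) pairs into a dedicated
      linker section; the names are illustrative, not the exact patch:

        struct mca_table_entry {
                int start_addr; /* location-relative start of tagged range */
                int end_addr;   /* location-relative end of tagged range   */
        };

        extern struct mca_table_entry __start___mca_table[];
        extern struct mca_table_entry __stop___mca_table[];

        /* Return nonzero if the MCA instruction pointer falls inside a
         * tagged recoverable range (e.g. the ivt entry's SAVE_REST). */
        int search_mca_table(unsigned long addr)
        {
                const struct mca_table_entry *e;

                for (e = __start___mca_table; e < __stop___mca_table; e++) {
                        unsigned long start = (unsigned long)&e->start_addr + e->start_addr;
                        unsigned long end   = (unsigned long)&e->end_addr   + e->end_addr;

                        if (addr >= start && addr <= end)
                                return 1;
                }
                return 0;
        }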
  4. 24 Mar 2006 (3 commits)
  5. 23 Mar 2006 (10 commits)
  6. 22 Mar 2006 (2 commits)
    • [PATCH] hugepage: is_aligned_hugepage_range() cleanup · 42b88bef
      Committed by David Gibson
      Quite a long time back, prepare_hugepage_range() replaced
      is_aligned_hugepage_range() as the callback from mm/mmap.c to arch code to
      verify if an address range is suitable for a hugepage mapping.
      is_aligned_hugepage_range() stuck around, but only to implement
      prepare_hugepage_range() on archs which didn't implement their own.
      
      Most archs (everything except ia64 and powerpc) used the same
      implementation of is_aligned_hugepage_range().  On powerpc, which
      implements its own prepare_hugepage_range(), the custom version was never
      used.
      
      In addition, "is_aligned_hugepage_range()" was a bad name, because it
      suggests it returns true iff the given range is a good hugepage range,
      whereas in fact it returns 0-or-error (so the sense is reversed).
      
      This patch cleans up by abolishing is_aligned_hugepage_range().  Instead
      prepare_hugepage_range() is defined directly.  Most archs use the default
      version, which simply checks that the given region is aligned to the size
      of a hugepage (sketched after this entry).  ia64 and powerpc define
      custom versions.  The ia64 one simply checks that the range is in the
      correct address-space region in addition to being suitably aligned.  The
      powerpc version (just as previously) checks for suitable addresses, and
      if necessary performs low-level MMU frobbing to set up new areas for use
      by hugepages.
      
      No libhugetlbfs testsuite regressions on ppc64 (POWER5 LPAR).
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
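      A sketch of the default version described above: the simple alignment
      check most archs now pick up (reconstructed, not a verbatim copy):

        static inline int
        prepare_hugepage_range(unsigned long addr, unsigned long len)
        {
                if (len & ~HPAGE_MASK)
                        return -EINVAL; /* length not a hugepage multiple */
                if (addr & ~HPAGE_MASK)
                        return -EINVAL; /* start not hugepage aligned */
                return 0;               /* 0-or-error, as noted above */
        }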
    • [PATCH] remove set_page_count() outside mm/ · 7835e98b
      Committed by Nick Piggin
      set_page_count usage outside mm/ is limited to setting the refcount to 1.
      Remove set_page_count from outside mm/, and replace those users with
      init_page_count() and set_page_refcounted().
      
      This allows more debug checking, and tighter control over how code is
      allowed to play around with page->_count (minimal sketches of the two
      helpers follow after this entry).
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
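      Minimal sketches of the two helpers named above, following what mm
      code of this era did; recalled from memory, not verbatim:

        /* For fresh pages whose refcount is not yet live. */
        static inline void init_page_count(struct page *page)
        {
                atomic_set(&page->_count, 1);
        }

        /* mm-internal: hand a page out with its reference count set,
         * verifying nobody was already holding it. */
        static inline void set_page_refcounted(struct page *page)
        {
                VM_BUG_ON(page_count(page));
                set_page_count(page, 1);
        }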
  7. 21 Mar 2006 (1 commit)
  8. 09 Mar 2006 (1 commit)
    • [IA64] Fix race in the accessed/dirty bit handlers · d8117ce5
      Committed by Christoph Lameter
      A pte may be zapped by the swapper, an exiting process, unmapping, or
      page migration while the accessed or dirty bit handlers are about to
      run. In that case the accessed or dirty bit is set on a zeroed pte,
      which leads the VM to conclude that this is a swap pte. This may lead to
      
      - Messages from the VM like
      
      swap_free: Bad swap file entry 4000000000000000
      
      - Processes being aborted
      
      swap_dup: Bad swap file entry 4000000000000000
      VM: killing process ....
      
      Page migration is particularly suited to creating this race, since
      it needs to remove and restore page table entries.
      
      The fix here is to check for the present bit and simply not update
      the pte if the page is not present anymore (see the sketch after this
      entry). If the page is not present, then the fault handler should run
      next and will take care of the problem by bringing the page back and
      then marking the page dirty or moving it onto the active list.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
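      A C sketch of the check described above.  The real fix lives in the
      ia64 low-level handlers (assembly in ivt.S), so this shows only the
      logic, with a hypothetical helper name:

        static void mark_pte_dirty(pte_t *ptep)
        {
                pte_t pte = *ptep;

                /* The pte may have been zapped (e.g. by page migration)
                 * after the fault was raised.  Never set accessed/dirty
                 * bits on a non-present pte, or the VM would later
                 * mistake it for a swap entry. */
                if (!pte_present(pte))
                        return; /* the regular fault handler will redo the work */

                set_pte(ptep, pte_mkdirty(pte));
        }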
  9. 08 Mar 2006 (4 commits)
  10. 06 Mar 2006 (1 commit)
  11. 01 Mar 2006 (2 commits)
  12. 28 Feb 2006 (4 commits)