1. 26 Feb 2009, 3 commits
    • powerpc: Fix 64bit __copy_tofrom_user() regression · f72b728b
      Authored by Mark Nelson
      This fixes a regression introduced by commit
      a4e22f02 ("powerpc: Update 64bit
      __copy_tofrom_user() using CPU_FTR_UNALIGNED_LD_STD").
      
      The same bug that existed in the 64bit memcpy() also exists here, so fix
      it here too. The fix is the same as that applied to memcpy(), with the
      addition of fixes for the exception handling code required for
      __copy_tofrom_user().
      
      This stops us reading beyond the end of the source region we were told
      to copy.
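
      As a rough C illustration of the contract involved (the real routine is
      hand-written powerpc assembly with exception-table fixups, so this is a
      sketch, not the fix itself): the copy returns the number of bytes it
      could not copy, so every load and store needs a matching fixup that
      knows exactly how far the copy has progressed; that is why changing the
      copy strategy also means updating the exception handling code.

      #include <stddef.h>

      /* Illustrative analogue only: return 0 on success, or the number of
       * bytes left uncopied when a (simulated) fault stops the copy early. */
      static size_t copy_sketch(char *dst, const char *src, size_t n,
                                size_t fault_at /* simulate a fault here */)
      {
          size_t done;

          for (done = 0; done < n; done++) {
              if (done == fault_at)      /* stand-in for a page fault */
                  break;
              dst[done] = src[done];
          }
          return n - done;               /* bytes NOT copied */
      }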
      Signed-off-by: Mark Nelson <markn@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Fix 64bit memcpy() regression · e423b9ec
      Authored by Mark Nelson
      This fixes a regression introduced by commit
      25d6e2d7 ("powerpc: Update 64bit memcpy()
      using CPU_FTR_UNALIGNED_LD_STD").
      
      This commit allowed CPUs that have the CPU_FTR_UNALIGNED_LD_STD CPU
      feature bit present to do the memcpy() with unaligned load doubles. But,
      along with this came a bug where our final load double would read bytes
      beyond a page boundary and into the next (unmapped) page. This was caught
      by enabling CONFIG_DEBUG_PAGEALLOC.
      
      The fix was to read only the number of bytes that we need to store rather
      than reading a full 8-byte doubleword and storing only a portion of that.
      
      In order to minimise the amount of existing code touched we use the
      original do_tail for the src_unaligned case.
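
      As a rough C illustration of the difference (the real code is powerpc
      assembly using doubleword loads and stores; this sketch only shows the
      idea): fetching a full 8-byte doubleword to feed a final partial store
      can read past the end of the source buffer, and if the buffer ends at a
      page boundary that read lands in the next, possibly unmapped, page.

      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      /* Buggy pattern: the final store of 'tail' bytes is fed by a full
       * 8-byte load, which may read up to 8 - tail bytes past the source. */
      static void tail_overread(void *dst, const void *src, size_t tail)
      {
          uint64_t d;

          memcpy(&d, src, sizeof(d));    /* may cross into the next page */
          memcpy(dst, &d, tail);
      }

      /* Fixed pattern: read only the bytes that will actually be stored. */
      static void tail_exact(void *dst, const void *src, size_t tail)
      {
          memcpy(dst, src, tail);        /* never reads past src + tail */
      }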
      
      Below is an example of the regression, as reported by Sachin Sant:
      
      Unable to handle kernel paging request for data at address 0xc00000003f380000
      Faulting instruction address: 0xc000000000039574
      cpu 0x1: Vector: 300 (Data Access) at [c00000003baf3020]
          pc: c000000000039574: .memcpy+0x74/0x244
          lr: d00000000244916c: .ext3_xattr_get+0x288/0x2f4 [ext3]
          sp: c00000003baf32a0
         msr: 8000000000009032
         dar: c00000003f380000
       dsisr: 40000000
        current = 0xc00000003e54b010
        paca    = 0xc000000000a53680
          pid   = 1840, comm = readahead
      enter ? for help
      [link register   ] d00000000244916c .ext3_xattr_get+0x288/0x2f4 [ext3]
      [c00000003baf32a0] d000000002449104 .ext3_xattr_get+0x220/0x2f4 [ext3] (unreliable)
      [c00000003baf3390] d00000000244a6e8 .ext3_xattr_security_get+0x40/0x5c [ext3]
      [c00000003baf3400] c000000000148154 .generic_getxattr+0x74/0x9c
      [c00000003baf34a0] c000000000333400 .inode_doinit_with_dentry+0x1c4/0x678
      [c00000003baf3560] c00000000032c6b0 .security_d_instantiate+0x50/0x68
      [c00000003baf35e0] c00000000013c818 .d_instantiate+0x78/0x9c
      [c00000003baf3680] c00000000013ced0 .d_splice_alias+0xf0/0x120
      [c00000003baf3720] d00000000243e05c .ext3_lookup+0xec/0x134 [ext3]
      [c00000003baf37c0] c000000000131e74 .do_lookup+0x110/0x260
      [c00000003baf3880] c000000000134ed0 .__link_path_walk+0xa98/0x1010
      [c00000003baf3970] c0000000001354a0 .path_walk+0x58/0xc4
      [c00000003baf3a20] c000000000135720 .do_path_lookup+0x138/0x1e4
      [c00000003baf3ad0] c00000000013645c .path_lookup_open+0x6c/0xc8
      [c00000003baf3b70] c000000000136780 .do_filp_open+0xcc/0x874
      [c00000003baf3d10] c0000000001251e0 .do_sys_open+0x80/0x140
      [c00000003baf3dc0] c00000000016aaec .compat_sys_open+0x24/0x38
      [c00000003baf3e30] c00000000000855c syscall_exit+0x0/0x40
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Fix load/store float double alignment handler · 49f297f8
      Authored by Michael Neuling
      When we introduced VSX, we changed the way FPRs are stored in the
      thread_struct.  Unfortunately we missed the load/store float double
      alignment handler code when updating how we access FPRs in the
      thread_struct.
      
      The patch fixes this and merges the little/big endian cases.
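
      A hedged sketch of the layout issue (the field and macro layout here is
      illustrative, not the exact kernel definition): with VSX each FPR is
      saved in a wider per-register slot, so the alignment handler has to
      index the slot and then pick the doubleword that holds the FP value,
      instead of treating the save area as a flat array of doubles.

      #include <stdint.h>

      #define TS_FPRWIDTH   2    /* doublewords per saved register with VSX */
      #define TS_FPROFFSET  0    /* which doubleword holds the FP double    */

      struct fp_save_sketch {
          uint64_t fpr[32][TS_FPRWIDTH];
      };

      /* Address the alignment fixup should load from / store to for FPR 'reg'. */
      static uint64_t *fpr_slot(struct fp_save_sketch *t, unsigned int reg)
      {
          return &t->fpr[reg][TS_FPROFFSET];
      }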
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  2. 15 Feb 2009, 1 commit
  3. 13 Feb 2009, 4 commits
    • powerpc/vsx: Fix VSX alignment handler for regs 32-63 · 26456dcf
      Authored by Michael Neuling
      Fix the VSX alignment handler for VSX registers 32-63, which are stored
      in the VMX part of the thread_struct, not the FPR part.
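
      Roughly, the handler has to route the access by register number; a
      hypothetical sketch (layout and field names are illustrative, not the
      real thread_struct):

      #include <stdint.h>

      struct vsx_save_sketch {
          uint64_t fpr[32];       /* FP regs back VSX registers 0-31   */
          uint8_t  vr[32][16];    /* VMX regs back VSX registers 32-63 */
      };

      /* Pick the save area the alignment handler should use for VSX 'reg'. */
      static void *vsr_save_addr(struct vsx_save_sketch *t, unsigned int reg)
      {
          if (reg < 32)
              return &t->fpr[reg];      /* stored with the FPRs    */
          return &t->vr[reg - 32];      /* stored in the VMX area  */
      }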
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      CC: stable@kernel.org (2.6.27 & .28 please)
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/ps3: Move ps3_mm_add_memory to device_initcall · 0047656e
      Authored by Geoff Levand
      Change the PS3 hotplug memory routine ps3_mm_add_memory() from
      a core_initcall to a device_initcall.
      
      core_initcall routines run before the powerpc topology_init()
      startup routine, which is a subsys_initcall, resulting in
      failure of ps3_mm_add_memory() when CONFIG_NUMA=y.  When
      ps3_mm_add_memory() fails the system will boot with just the
      128 MiB of boot memory.
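
      A sketch of the ordering involved (the initcall macro names are the real
      kernel ones; the registration line is the gist of the change, not the
      verbatim patch):

      /*
       * Initcall levels run in a fixed order during boot:
       *
       *   core_initcall()    - runs early  (old level of ps3_mm_add_memory)
       *   subsys_initcall()  - topology_init() sets up the NUMA topology here
       *   device_initcall()  - runs later  (new level of ps3_mm_add_memory)
       *
       * Registering the PS3 hotplug-memory setup at device_initcall level
       * therefore guarantees the NUMA structures exist before it runs.
       */
      device_initcall(ps3_mm_add_memory);  /* was: core_initcall(ps3_mm_add_memory); */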
      Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/mm: Fix numa reserve bootmem page selection · 06eccea6
      Authored by Dave Hansen
      Fix the powerpc NUMA reserve bootmem page selection logic.
      
      commit 8f64e1f2 ("powerpc: Reserve
      in bootmem lmb reserved regions that cross NUMA nodes") changed
      the logic for how the powerpc LMB reserved regions were converted
      to bootmem reserved regions.  As the following discussion reports,
      the new logic was not correct.
      
      mark_reserved_regions_for_nid() goes through each LMB on the
      system that specifies a reserved area.  It searches for
      active regions that intersect with that LMB and are on the
      specified node.  It attempts to bootmem-reserve only the area
      where the active region and the reserved LMB intersect.  We
      can not reserve things on other nodes as they may not have
      bootmem structures allocated, yet.
      
      We base the size of the bootmem reservation on two possible
      things.  Normally, we just make the reservation start and
      stop exactly at the start and end of the LMB.
      
      However, the LMB reservations are not aware of NUMA nodes and
      on occasion a single LMB may cross into several adjacent
      active regions.  Those may even be on different NUMA nodes
      and will require separate calls to the bootmem reserve
      functions.  So, the bootmem reservation must be trimmed to
      fit inside the current active region.
      
      That's all fine and dandy, but we trim the reservation
      in a page-aligned fashion.  That's bad because we start the
      reservation at a non-page-aligned address: physbase.
      
      The reservation may only span 2 bytes, but those bytes
      may span two pfns and cause a reserve_size of 2*PAGE_SIZE.
      
      Take the case where you reserve 0x2 bytes at 0x0fff and
      where the active region ends at 0x1000.  You'll jump into
      that if() statement, but node_ar.end_pfn=0x1 and
      start_pfn=0x0.  You'll end up with a reserve_size=0x1000,
      and then call
      
        reserve_bootmem_node(node, physbase=0xfff, size=0x1000);
      
      0x1000 may not be on the same node as 0xfff.  Oops.
      
      In almost all the vm code, end_<anything> is not inclusive.
      If you have an end_pfn of 0x1234, page 0x1234 is not
      included in the range.  Using PFN_UP instead of the
      (>> PAGE_SHIFT) will make this consistent with the other VM
      code.
      
      We also need to do math for the reserved size with physbase
      instead of start_pfn.  node_ar.end_pfn << PAGE_SHIFT is
      *precisely* the end of the node.  However,
      (start_pfn << PAGE_SHIFT) is *NOT* precisely the beginning
      of the reserved area.  That is, of course, physbase.
      If we don't use physbase here, the reserve_size can be
      made too large.
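
      A standalone arithmetic sketch of the 0xfff example above (plain
      userspace C, not the kernel code), comparing the size computed from the
      page-aligned start_pfn with the size computed from physbase, and
      PFN_UP() against a plain shift for the exclusive end pfn:

      #include <stdio.h>

      #define PAGE_SHIFT 12
      #define PAGE_SIZE  (1UL << PAGE_SHIFT)
      #define PFN_UP(x)  (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)

      int main(void)
      {
          unsigned long physbase     = 0xfff; /* reservation starts mid-page  */
          unsigned long size         = 0x2;   /* two bytes straddling a pfn   */
          unsigned long node_end_pfn = 0x1;   /* active region ends at 0x1000 */
          unsigned long start_pfn    = physbase >> PAGE_SHIFT;       /* 0x0   */

          /* Buggy: measuring from the page-aligned start yields a full page. */
          unsigned long bad_size  = (node_end_pfn << PAGE_SHIFT)
                                    - (start_pfn << PAGE_SHIFT);   /* 0x1000 */

          /* Fixed: measure from physbase, the real start of the reservation. */
          unsigned long good_size = (node_end_pfn << PAGE_SHIFT) - physbase; /* 0x1 */

          printf("end pfn: >>PAGE_SHIFT=%#lx  PFN_UP()=%#lx\n",
                 (physbase + size) >> PAGE_SHIFT, PFN_UP(physbase + size));
          printf("reserve_size: buggy=%#lx  fixed=%#lx\n", bad_size, good_size);
          return 0;
      }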
      
      From: Dave Hansen <dave@linux.vnet.ibm.com>
      Tested-by: Geoff Levand <geoffrey.levand@am.sony.com>  Tested on PS3.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/mm: Fix _PAGE_CHG_MASK to protect _PAGE_SPECIAL · fbc78b07
      Authored by Philippe Gerum
      Fix _PAGE_CHG_MASK so that pte_modify() does not affect the _PAGE_SPECIAL bit.
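
      The idea, as a standalone C sketch (bit values are made up for
      illustration): pte_modify() keeps only the bits in _PAGE_CHG_MASK and
      overlays the new protection bits, so any flag left out of the mask is
      silently dropped.

      #include <stdio.h>

      #define _PAGE_PRESENT  0x001UL
      #define _PAGE_ACCESSED 0x002UL
      #define _PAGE_DIRTY    0x004UL
      #define _PAGE_SPECIAL  0x008UL   /* illustrative bit positions only */
      #define _PAGE_RW       0x100UL

      typedef unsigned long pte_t;

      /* Keep only the bits in chg_mask, then overlay the new protection. */
      static pte_t pte_modify_sketch(pte_t pte, unsigned long chg_mask,
                                     unsigned long newprot)
      {
          return (pte & chg_mask) | newprot;
      }

      int main(void)
      {
          pte_t pte = _PAGE_PRESENT | _PAGE_SPECIAL;
          unsigned long mask_old = _PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_DIRTY;
          unsigned long mask_new = mask_old | _PAGE_SPECIAL;

          printf("without _PAGE_SPECIAL in the mask, special kept? %s\n",
                 (pte_modify_sketch(pte, mask_old, _PAGE_RW) & _PAGE_SPECIAL)
                 ? "yes" : "no");
          printf("with    _PAGE_SPECIAL in the mask, special kept? %s\n",
                 (pte_modify_sketch(pte, mask_new, _PAGE_RW) & _PAGE_SPECIAL)
                 ? "yes" : "no");
          return 0;
      }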
      Signed-off-by: Philippe Gerum <rpm@xenomai.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  4. 11 Feb 2009, 1 commit
  5. 10 Feb 2009, 6 commits
  6. 07 Feb 2009, 5 commits
  7. 02 Feb 2009, 1 commit
  8. 30 Jan 2009, 2 commits
  9. 28 Jan 2009, 4 commits
  10. 27 Jan 2009, 7 commits
  11. 23 Jan 2009, 2 commits
  12. 21 Jan 2009, 1 commit
  13. 20 Jan 2009, 3 commits