1. 26 Aug 2009, 1 commit
      sparc64: Validate linear D-TLB misses. · d8ed1d43
Authored by David S. Miller
      When page alloc debugging is not enabled, we essentially accept any
      virtual address for linear kernel TLB misses.  But with kgdb, kernel
      address probing, and other facilities we can try to access arbitrary
      crap.
      
      So, make sure the address we miss on will translate to physical memory
      that actually exists.
      
      In order to make this work we have to embed the valid address bitmap
      into the kernel image.  And in order to make that less expensive we
      make an adjustment, in that the max physical memory address is
      decreased to "1 << 41", even on the chips that support a 42-bit
      physical address space.  We can do this because bit 41 indicates
      "I/O space" and thus covers non-memory ranges.
      
      The result of this is that:
      
      1) kpte_linear_bitmap shrinks from 2K to 1K in size
      
      2) we need 64K more for the valid address bitmap
      
      We can't let the valid address bitmap be dynamically allocated
      once we start using it to validate TLB misses, otherwise we have
      crazy issues to deal with wrt. recursive TLB misses and such.
      
      If we're in a TLB miss it could be the deepest trap level that's legal
      inside of the cpu.  So if we TLB miss referencing the bitmap, the cpu
      will be out of trap levels and enter RED state.
      
      To guard against out-of-range accesses to the bitmap, we have to check
      to make sure no bits in the physical address above bit 40 are set.  We
      could export and use last_valid_pfn for this check, but that's just an
      unnecessary extra memory reference.
      
      On the plus side of all this, since we load all of these translations
      into the special 4MB mapping TSB, and we check the TSB first for TLB
      misses, there should be absolutely no real cost for these new checks
      in the TLB miss path.
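
      The validation described above can be sketched in C as follows. This is
      an illustrative model, not the kernel's actual code; the names
      `valid_addr_bitmap`, `mark_valid`, and `paddr_is_valid` are made up.
      It assumes the numbers given in the message: a 41-bit physical address
      cap and one bitmap bit per 4MB granule, which yields the 64K bitmap
      (2^41 / 2^22 = 2^19 bits = 64KB).

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical sketch: physical memory capped at 1 << 41, one bit
       * per 4MB granule, so the bitmap is 2^19 bits = 64KB. */
      #define MAX_PHYS_BITS 41
      #define ILOG2_4MB     22

      static uint64_t valid_addr_bitmap[(1ull << (MAX_PHYS_BITS - ILOG2_4MB)) / 64];

      static void mark_valid(uint64_t paddr)
      {
          uint64_t bit = paddr >> ILOG2_4MB;
          valid_addr_bitmap[bit / 64] |= 1ull << (bit % 64);
      }

      static bool paddr_is_valid(uint64_t paddr)
      {
          /* Guard against out-of-range bitmap accesses: any bit set above
           * bit 40 means I/O space or garbage, never RAM. */
          if (paddr >> MAX_PHYS_BITS)
              return false;
          uint64_t bit = paddr >> ILOG2_4MB;
          return (valid_addr_bitmap[bit / 64] >> (bit % 64)) & 1;
      }

      int main(void)
      {
          mark_valid(0x12400000);              /* pretend this 4MB chunk is RAM */
          assert(paddr_is_valid(0x12400000));
          assert(paddr_is_valid(0x125fffff));  /* same 4MB granule */
          assert(!paddr_is_valid(0x12800000)); /* next granule, not marked */
          assert(!paddr_is_valid(1ull << 41)); /* I/O space bit set */
          puts("ok");
          return 0;
      }
      ```

      The early range check is what lets the bitmap stay a fixed 64KB: any
      address that would index past it is rejected before the array access.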
      
      Reported-by: heyongli@gmail.com
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 19 Aug 2009, 2 commits
      sparc32: Kill trap table freeing code. · a9919646
      Authored by David S. Miller
      Normally, srmmu uses different trap table register values to allow
      determination of the cpu we're on.  All of the trap tables have
      identical content, they just sit at different offsets from the first
      trap table, and the offset shifted down and masked out determines
      the cpu we are on.
      
      The code tries to free them up when they aren't actually used
      (we don't have all 4 cpus, we're on sun4d, etc.), but that causes
      problems.
      
      For one thing it triggers false positives in the DMA debugging
      code.  And fixing that up while preserving this relative offset
      thing isn't trivial.
      
      So just kill the freeing code, it costs us at most 3 pages, big
      deal...
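
      The offset scheme described above can be illustrated with a small
      C model. This is a hypothetical sketch, not the srmmu code itself;
      the names, the one-page table spacing, and the 4-cpu mask are all
      assumptions made for illustration.

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Hypothetical sketch: identical trap tables sit at fixed offsets
       * from the first one, so shifting the offset down and masking it
       * recovers the cpu number. */
      #define TABLE_SHIFT 12          /* assume one page-spaced table per cpu */
      #define MAX_CPUS    4

      static unsigned int cpu_from_trapbase(unsigned long tbr, unsigned long base0)
      {
          return (unsigned int)((tbr - base0) >> TABLE_SHIFT) & (MAX_CPUS - 1);
      }

      int main(void)
      {
          unsigned long base0 = 0xf0000000ul; /* made-up first-table address */
          assert(cpu_from_trapbase(base0, base0) == 0);
          assert(cpu_from_trapbase(base0 + 3 * 4096, base0) == 3);
          puts("ok");
          return 0;
      }
      ```

      Freeing the unused tables would break this relative-offset layout,
      which is why simply keeping all of them is the cheaper fix.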
      Signed-off-by: David S. Miller <davem@davemloft.net>
      sparc: sys32.S incorrect compat-layer splice() system call · e2c6cbd9
      Authored by Mathieu Desnoyers
      I think arch/sparc/kernel/sys32.S has an incorrect splice definition:
      
      SIGN2(sys32_splice, sys_splice, %o0, %o1)
      
      The splice() prototype looks like :
      
             long splice(int fd_in, loff_t *off_in, int fd_out,
                         loff_t *off_out, size_t len, unsigned int flags);
      
      So I think we should have :
      
      SIGN2(sys32_splice, sys_splice, %o0, %o2)
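
      The effect of SIGN2 can be modeled in C (the macro itself is sparc
      assembly; this is an illustration, not the kernel code). The point of
      the fix: the two int arguments to splice() are fd_in in %o0 and
      fd_out in %o2, so those registers need 32-to-64-bit sign extension,
      while %o1 holds the off_in pointer and must not be sign-extended.

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical model of what SIGN2 does to a named argument
       * register: sign-extend the low 32 bits to a full 64-bit value
       * before entering the 64-bit system call. */
      static int64_t sign_extend32(uint64_t reg)
      {
          return (int64_t)(int32_t)(uint32_t)reg;
      }

      int main(void)
      {
          /* A 32-bit task passing fd = -1 leaves 0xffffffff in the low
           * half of the register; without sign extension the 64-bit
           * kernel would see 4294967295 instead of -1. */
          assert(sign_extend32(0xfffffffful) == -1);
          assert(sign_extend32(42) == 42);
          puts("ok");
          return 0;
      }
      ```

      With the original %o0/%o1 pair, the off_in pointer would have been
      sign-extended and fd_out left alone, corrupting both arguments.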
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 28 Jul 2009, 1 commit
  4. 26 Jun 2009, 1 commit
  5. 18 Jun 2009, 1 commit
  6. 17 Jun 2009, 1 commit
  7. 16 Jun 2009, 19 commits
  8. 12 Jun 2009, 1 commit
  9. 28 Apr 2009, 2 commits
  10. 27 Apr 2009, 1 commit
  11. 22 Apr 2009, 2 commits
  12. 21 Apr 2009, 1 commit
  13. 15 Apr 2009, 2 commits
  14. 08 Apr 2009, 4 commits
  15. 03 Apr 2009, 1 commit