1. 09 Sep 2009, 2 commits
  2. 04 Sep 2009, 1 commit
  3. 02 Sep 2009, 1 commit
  4. 26 Aug 2009, 1 commit
    • sparc64: Validate linear D-TLB misses. · d8ed1d43
      David S. Miller committed
      When page alloc debugging is not enabled, we essentially accept any
      virtual address for linear kernel TLB misses.  But with kgdb, kernel
      address probing, and other facilities we can try to access arbitrary
      crap.
      
      So, make sure the address we miss on will translate to physical memory
      that actually exists.
      
      In order to make this work we have to embed the valid address bitmap
      into the kernel image.  And to make that less expensive we make one
      adjustment: the max physical memory address is decreased to "1 << 41",
      even on chips that support a 42-bit physical address space.  We can do
      this because bit 41 indicates "I/O space" and thus covers non-memory
      ranges.
      
      The result of this is that:
      
      1) kpte_linear_bitmap shrinks from 2K to 1K in size
      
      2) we need 64K more for the valid address bitmap
      
      We can't let the valid address bitmap be dynamically allocated once
      we start using it to validate TLB misses; otherwise we have crazy
      issues to deal with regarding recursive TLB misses and such.
      
      If we're in a TLB miss it could be the deepest trap level that's legal
      inside of the cpu.  So if we TLB miss referencing the bitmap, the cpu
      will be out of trap levels and enter RED state.
      
      To guard against out-of-range accesses to the bitmap, we have to check
      to make sure no bits in the physical address above bit 40 are set.  We
      could export and use last_valid_pfn for this check, but that's just an
      unnecessary extra memory reference.
      
      On the plus side of all this, since we load all of these translations
      into the special 4MB mapping TSB, and we check the TSB first for TLB
      misses, there should be absolutely no real cost for these new checks
      in the TLB miss path.
      
      Reported-by: heyongli@gmail.com
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d8ed1d43
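      A minimal C sketch of the kind of check this enables (identifiers such
      as linear_paddr_valid() are illustrative, not the literal patch; the
      constants match the numbers above, since 2^41 bytes / 4MB per bit works
      out to the 64K bitmap the commit mentions):

      #include <linux/bitops.h>
      #include <linux/types.h>

      #define MAX_PHYS_ADDRESS_BITS  41      /* bit 41 selects I/O space */
      #define ILOG2_4MB              22      /* one bitmap bit per 4MB chunk */

      extern unsigned long sparc64_valid_addr_bitmap[];

      static inline bool linear_paddr_valid(unsigned long paddr)
      {
              /* Reject anything with bits set above bit 40 before indexing,
               * so an out-of-range address cannot read past the bitmap. */
              if (paddr >> MAX_PHYS_ADDRESS_BITS)
                      return false;
              return test_bit(paddr >> ILOG2_4MB, sparc64_valid_addr_bitmap);
      }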
  5. 19 Aug 2009, 4 commits
  6. 18 Aug 2009, 5 commits
  7. 17 Aug 2009, 3 commits
  8. 14 Aug 2009, 1 commit
    • percpu, sparc64: fix sparse possible cpu map handling · 74d46d6b
      Tejun Heo committed
      percpu code has been assuming num_possible_cpus() == nr_cpu_ids which
      is incorrect if cpu_possible_map contains holes.  This causes percpu
      code to access beyond allocated memories and vmalloc areas.  On a
      sparc64 machine with cpus 0 and 2 (u60), this triggers the following
      warning or fails boot.
      
       WARNING: at /devel/tj/os/work/mm/vmalloc.c:106 vmap_page_range_noflush+0x1f0/0x240()
       Modules linked in:
       Call Trace:
        [00000000004b17d0] vmap_page_range_noflush+0x1f0/0x240
        [00000000004b1840] map_vm_area+0x20/0x60
        [00000000004b1950] __vmalloc_area_node+0xd0/0x160
        [0000000000593434] deflate_init+0x14/0xe0
        [0000000000583b94] __crypto_alloc_tfm+0xd4/0x1e0
        [00000000005844f0] crypto_alloc_base+0x50/0xa0
        [000000000058b898] alg_test_comp+0x18/0x80
        [000000000058dad4] alg_test+0x54/0x180
        [000000000058af00] cryptomgr_test+0x40/0x60
        [0000000000473098] kthread+0x58/0x80
        [000000000042b590] kernel_thread+0x30/0x60
        [0000000000472fd0] kthreadd+0xf0/0x160
       ---[ end trace 429b268a213317ba ]---
      
      This patch fixes generic percpu functions and sparc64
      setup_per_cpu_areas() so that they handle sparse cpu_possible_map
      properly.
      
      Please note that on x86, cpu_possible_map doesn't contain holes, so
      num_possible_cpus() == nr_cpu_ids and this patch doesn't cause any
      behavior difference there.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: David S. Miller <davem@davemloft.net>
      Cc: Ingo Molnar <mingo@elte.hu>
      74d46d6b
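      A minimal, hypothetical C sketch of the sizing mismatch (the real fix
      is in the pcpu_* setup code and sparc64's setup_per_cpu_areas(); the
      array and function names below are invented for illustration).  With
      possible cpus {0, 2}, num_possible_cpus() == 2 but valid cpu ids run
      up to nr_cpu_ids - 1 == 2:

      #include <linux/cpumask.h>
      #include <linux/errno.h>
      #include <linux/init.h>
      #include <linux/vmalloc.h>

      static unsigned long *unit_off_buggy, *unit_off_fixed;

      static int __init percpu_sizing_example(void)
      {
              unsigned int cpu;

              /* Buggy: one slot per possible-cpu *count*. */
              unit_off_buggy = vmalloc(num_possible_cpus() * sizeof(*unit_off_buggy));
              /* Fixed: one slot per cpu *id*, holes included. */
              unit_off_fixed = vmalloc(nr_cpu_ids * sizeof(*unit_off_fixed));
              if (!unit_off_buggy || !unit_off_fixed)
                      return -ENOMEM;

              for_each_possible_cpu(cpu) {
                      unit_off_fixed[cpu] = cpu;  /* always in bounds */
                      unit_off_buggy[cpu] = cpu;  /* cpu == 2 writes past the end */
              }
              return 0;
      }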
  9. 03 Aug 2009, 1 commit
  10. 28 Jul 2009, 2 commits
    • sparc64: Sign extend length arg to truncate syscalls when compat. · 9a926d86
      David S. Miller committed
      The first thing sys_truncate() and sys_ftruncate() do is sign extend
      the unsigned length arg to a signed type.
      
      Thanks to Benjamin Herrenschmidt for the tip.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9a926d86
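      A hypothetical C sketch of the idea (the actual fix lives in sparc64's
      32-bit compat syscall stubs, not in a C wrapper like this): the compat
      entry must sign-extend the 32-bit length so that a negative value from
      32-bit userspace stays negative when it reaches the 64-bit syscall.

      #include <linux/types.h>

      long sys_truncate(const char *path, long length);   /* native, signed */

      static long compat_truncate_sketch(const char *path, u32 length)
      {
              /* Cast through s32: 0xffffffff becomes -1, not 4294967295. */
              return sys_truncate(path, (s32)length);
      }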
    • mm: Pass virtual address to [__]p{te,ud,md}_free_tlb() · 9e1b32ca
      Benjamin Herrenschmidt committed
      mm: Pass virtual address to [__]p{te,ud,md}_free_tlb()
      
      Upcoming patches to support the new 64-bit "BookE" powerpc architecture
      will need the virtual address corresponding to the PTE page when
      freeing it, due to the way the HW table walker works.
      
      Basically, the TLB can be loaded with "large" pages that cover the whole
      virtual space (well, sort-of, half of it actually) represented by a PTE
      page, and which contain an "indirect" bit indicating that this TLB entry
      RPN points to an array of PTEs from which the TLB can then create direct
      entries. Thus, in order to invalidate those when PTE pages are deleted,
      we need the virtual address to pass to tlbilx or tlbivax instructions.
      
      The old trick of sticking it somewhere in the PTE page's struct page
      sucks too much; the address is almost readily available in all call
      sites and almost everybody implements these as macros, so we may as
      well add the argument everywhere.  I added it to the pmd and pud
      variants for consistency.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: David Howells <dhowells@redhat.com> [MN10300 & FRV]
      Acked-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9e1b32ca
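      A hedged sketch of the resulting interface on an arch that does not
      need the address (the macro body is modelled on the generic pattern,
      not copied from any particular architecture):

      /* before:  __pte_free_tlb(tlb, ptep)
       * after:   __pte_free_tlb(tlb, ptep, address)
       *
       * "address" is the virtual address covered by the page-table page;
       * powerpc BookE can feed it to its tlbilx/tlbivax invalidations,
       * other architectures simply ignore it.
       */
      #define __pte_free_tlb(tlb, ptep, address)              \
              do {                                            \
                      (void)(address);        /* unused here */ \
                      pte_free((tlb)->mm, ptep);              \
              } while (0)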
  11. 13 Jul 2009, 2 commits
  12. 11 Jul 2009, 1 commit
  13. 26 Jun 2009, 6 commits
  14. 22 Jun 2009, 1 commit
  15. 19 Jun 2009, 1 commit
  16. 18 Jun 2009, 1 commit
  17. 17 Jun 2009, 2 commits
    • kmap_types: make most arches use generic header file · e4c9dd0f
      Randy Dunlap committed
      Convert most arches to use asm-generic/kmap_types.h.
      
      Move the KM_FENCE_ macro additions into asm-generic/kmap_types.h,
      controlled by __WITH_KM_FENCE from each arch's kmap_types.h file.
      
      It would be nice to be able to add custom KM_* types per arch, but I
      don't yet see a nice, clean way to do that.
      
      Built on x86_64, i386, mips, sparc, alpha(tonyb), powerpc(tonyb), and
      68k(tonyb).
      
      Note: avr32 should be able to remove KM_PTE2 (since it's not used) and
      then just use the generic kmap_types.h file.  Get avr32 maintainer
      approval.
      Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: <linux-arch@vger.kernel.org>
      Acked-by: Mike Frysinger <vapier@gentoo.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Bryan Wu <cooloney@kernel.org>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Hirokazu Takata <takata@linux-m32r.org>
      Cc: "Luck Tony" <tony.luck@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e4c9dd0f
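      Roughly what an arch's asm/kmap_types.h reduces to after the
      conversion (a sketch of the pattern described above; which config
      option gates the fence entries varies by architecture):

      #ifndef _ASM_KMAP_TYPES_H
      #define _ASM_KMAP_TYPES_H

      #ifdef CONFIG_DEBUG_HIGHMEM
      #define __WITH_KM_FENCE         /* generic header inserts KM_FENCE_ entries */
      #endif

      #include <asm-generic/kmap_types.h>

      #undef __WITH_KM_FENCE

      #endif /* _ASM_KMAP_TYPES_H */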
    • mm: consolidate init_mm definition · bb1f17b0
      Alexey Dobriyan committed
      * create mm/init-mm.c, move init_mm there
      * remove INIT_MM, initialize init_mm with C99 initializer
      * unexport init_mm on all arches:
      
        init_mm is already unexported on x86.
      
        One strange place is some OMAP driver (drivers/video/omap/) which
        won't build modular, but it already wants a get_vm_area() export.
        Somebody should look there.
      
      [akpm@linux-foundation.org: add missing #includes]
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Mike Frysinger <vapier.adi@gmail.com>
      Cc: Americo Wang <xiyou.wangcong@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bb1f17b0
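      Roughly what the new mm/init-mm.c looks like (reconstructed as a
      sketch; the exact field list in the tree may differ slightly): a
      plain C99 designated initializer replaces the old INIT_MM macro.

      #include <linux/mm_types.h>
      #include <linux/rbtree.h>
      #include <linux/rwsem.h>
      #include <linux/spinlock.h>
      #include <linux/list.h>
      #include <linux/cpumask.h>

      #include <asm/atomic.h>
      #include <asm/pgtable.h>

      struct mm_struct init_mm = {
              .mm_rb          = RB_ROOT,
              .pgd            = swapper_pg_dir,
              .mm_users       = ATOMIC_INIT(2),
              .mm_count       = ATOMIC_INIT(1),
              .mmap_sem       = __RWSEM_INITIALIZER(init_mm.mmap_sem),
              .page_table_lock = __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
              .mmlist         = LIST_HEAD_INIT(init_mm.mmlist),
              .cpu_vm_mask    = CPU_MASK_ALL,
      };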
  18. 16 Jun 2009, 5 commits