1. 24 Apr 2008, 3 commits
  2. 26 Mar 2008, 2 commits
  3. 22 Mar 2008, 1 commit
    • [SPARC64]: Remove most limitations to kernel image size. · 64658743
      Committed by David S. Miller
      Currently kernel images are limited to 8MB in size, and this causes
      problems especially when enabling features that take up a lot of
      kernel image space such as lockdep.
      
      The code now will align the kernel image size up to 4MB and map that
      many locked TLB entries.  So, the only practical limitation is the
      number of available locked TLB entries which is 16 on Cheetah and 64
      on pre-Cheetah sparc64 cpus.  Niagara cpus don't actually have hw
      locked TLB entry support.  Rather, the hypervisor transparently
      provides support for "locked" TLB entries since it runs with physical
      addressing and does the initial TLB miss processing.
      
      Fully utilizing this change requires some help from SILO, a patch for
      which will be submitted to the maintainer.  Essentially, SILO currently
      maps only up to 8MB for the kernel image, and that needs to be
      increased.
      
      Note that neither this patch nor the SILO bits will help with network
      booting.  The openfirmware code will only map up to a certain amount
      of kernel image during a network boot, and there isn't much we can do
      about that other than to implement a layered network booting
      facility.  Solaris has this and calls it "wanboot"; we may
      implement something similar at some point.
      Signed-off-by: David S. Miller <davem@davemloft.net>
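
      The sizing rule described above amounts to simple arithmetic: round the
      image size up to a 4MB boundary and count how many locked 4MB entries
      that needs.  A standalone sketch of that arithmetic follows; the
      constants and helper names are illustrative, not the kernel's actual code.

        #include <stdio.h>

        #define TLB_ENTRY_SPAN     (4UL * 1024 * 1024)  /* one locked TLB entry maps 4MB */
        #define CHEETAH_LOCKED     16                   /* locked entries on Cheetah */
        #define PRE_CHEETAH_LOCKED 64                   /* locked entries on older sparc64 cpus */

        /* round size up to the next multiple of align (align must be a power of two) */
        static unsigned long align_up(unsigned long size, unsigned long align)
        {
                return (size + align - 1) & ~(align - 1);
        }

        int main(void)
        {
                unsigned long image_size = 23UL * 1024 * 1024;   /* e.g. a 23MB lockdep build */
                unsigned long mapped  = align_up(image_size, TLB_ENTRY_SPAN);
                unsigned long entries = mapped / TLB_ENTRY_SPAN;

                printf("image %lu MB -> map %lu MB with %lu locked 4MB entries\n",
                       image_size >> 20, mapped >> 20, entries);
                printf("fits on Cheetah: %s, pre-Cheetah: %s\n",
                       entries <= CHEETAH_LOCKED ? "yes" : "no",
                       entries <= PRE_CHEETAH_LOCKED ? "yes" : "no");
                return 0;
        }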
  4. 25 Feb 2008, 1 commit
    • [SPARC64]: Fix section mismatch from kernel_map_range · 896aef43
      Committed by Sam Ravnborg
      Fix the following warnings:
      WARNING: vmlinux.o(.text+0x4f980): Section mismatch in reference from the function kernel_map_range() to the function .init.text:__alloc_bootmem()
      WARNING: vmlinux.o(.text+0x4f9cc): Section mismatch in reference from the function kernel_map_range() to the function .init.text:__alloc_bootmem()
      
      alloc_bootmem() is only used during early init, and for any subsequent
      call to kernel_map_range() the program logic avoids the call.
      So annotate kernel_map_range() with __ref to tell modpost to
      ignore the reference to a __init function.
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
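
      As a kernel-style sketch of the pattern the commit relies on (the
      function and flag below are invented for illustration, assuming the
      usual __ref/__init semantics, and this is not the sparc64 source): a
      __ref function may reference __init code without a modpost warning, and
      that is safe as long as the call can only happen before the init
      sections are freed.

        #include <linux/init.h>
        #include <linux/bootmem.h>
        #include <linux/slab.h>
        #include <linux/cache.h>
        #include <linux/types.h>

        static bool early_boot = true;          /* illustrative: cleared once bootmem is retired */

        static void * __ref alloc_early_or_late(unsigned long size)
        {
                if (early_boot)
                        /* only reachable during early init, so referencing __init code is fine */
                        return __alloc_bootmem(size, SMP_CACHE_BYTES, 0);
                return kmalloc(size, GFP_KERNEL);       /* normal allocator after boot */
        }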
  5. 18 Feb 2008, 1 commit
  6. 13 Feb 2008, 1 commit
  7. 08 Feb 2008, 1 commit
    • Introduce flags for reserve_bootmem() · 72a7fe39
      Committed by Bernhard Walle
      This patchset adds a flags variable to reserve_bootmem() and uses the
      BOOTMEM_EXCLUSIVE flag in the crashkernel reservation code to detect collisions
      between the crashkernel area and already-used memory.
      
      This patch:
      
      Change the reserve_bootmem() function to accept a new flag BOOTMEM_EXCLUSIVE.
      If that flag is set, the function returns -EBUSY if the memory has already
      been reserved.  This is to avoid conflicts.
      
      Because that code runs before SMP initialisation, there's no race condition
      inside reserve_bootmem_core().
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix powerpc build]
      Signed-off-by: Bernhard Walle <bwalle@suse.de>
      Cc: <linux-arch@vger.kernel.org>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
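
      A minimal userspace sketch of the collision check the new flag enables
      (the helper name, flag values, and table below are illustrative
      stand-ins, not the kernel's bootmem code): with the exclusive flag, an
      overlapping reservation is refused with -EBUSY instead of being
      silently tolerated.

        #include <stdio.h>
        #include <errno.h>

        #define BOOTMEM_DEFAULT    0
        #define BOOTMEM_EXCLUSIVE  (1 << 0)

        struct range { unsigned long start, end; };

        static struct range reserved[16];       /* already-reserved regions */
        static int nr_reserved;

        static int reserve_range(unsigned long start, unsigned long size, int flags)
        {
                unsigned long end = start + size;

                for (int i = 0; i < nr_reserved; i++) {
                        int overlaps = start < reserved[i].end && end > reserved[i].start;

                        if (overlaps && (flags & BOOTMEM_EXCLUSIVE))
                                return -EBUSY;  /* collision: caller must pick another spot */
                        /* without the flag, an overlapping reservation is tolerated */
                }
                reserved[nr_reserved].start = start;
                reserved[nr_reserved].end   = end;
                nr_reserved++;
                return 0;
        }

        int main(void)
        {
                reserve_range(0x1000000, 0x100000, BOOTMEM_DEFAULT);
                /* a crashkernel-style caller asks for exclusivity over used memory */
                int ret = reserve_range(0x1080000, 0x100000, BOOTMEM_EXCLUSIVE);
                printf("exclusive reservation returned %d (%s)\n",
                       ret, ret == -EBUSY ? "-EBUSY" : "ok");
                return 0;
        }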
  8. 31 Jan 2008, 1 commit
    • SPARC64: use generic percpu · 3afc6202
      Committed by travis@sgi.com
      Sparc64 has a way of providing the base address for the per cpu area of the
      currently executing processor in a global register.
      
      Sparc64 also provides a way to calculate the address of a per cpu area
      from a base address instead of performing an array lookup.
      
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Mike Travis <travis@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
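
      A userspace sketch contrasting the two addressing schemes described
      above (all names here are invented for illustration): the generic code
      looks an offset up in a per-cpu table, while the sparc64 approach just
      adds a base offset it already holds in a global register.

        #include <stdio.h>
        #include <stdint.h>

        #define NR_CPUS 4

        static long counter;                    /* the variable's "template" copy in the image */
        static long cpu_copy[NR_CPUS];          /* each cpu's private copy */

        /* generic percpu: a table of per-cpu offsets, consulted on every access */
        static intptr_t __per_cpu_offset[NR_CPUS];

        static long *per_cpu_generic(long *var, int cpu)
        {
                return (long *)((intptr_t)var + __per_cpu_offset[cpu]);  /* array lookup */
        }

        /* sparc64-style: the running cpu's offset is already at hand (kept in a
         * global register on real hardware; passed as a parameter here) */
        static long *per_cpu_local(long *var, intptr_t local_base)
        {
                return (long *)((intptr_t)var + local_base);
        }

        int main(void)
        {
                for (int cpu = 0; cpu < NR_CPUS; cpu++)
                        __per_cpu_offset[cpu] = (intptr_t)&cpu_copy[cpu] - (intptr_t)&counter;

                intptr_t my_base = __per_cpu_offset[2];         /* pretend we run on cpu 2 */
                (*per_cpu_local(&counter, my_base))++;
                printf("cpu 2 counter = %ld\n", *per_cpu_generic(&counter, 2));
                return 0;
        }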
  9. 13 Dec 2007, 1 commit
    • [SPARC64]: Fix two kernel linear mapping setup bugs. · 8f361453
      Committed by David S. Miller
      This was caught and identified by Greg Onufer.
      
      Since we set up the 256M/4M bitmap table after taking over the trap
      table, it's possible for some 4M mappings to get loaded into the TLB
      beforehand which will later become 256M mappings.
      
      This can cause illegal TLB multiple-match conditions.  Fix this by
      setting up the bitmap before we take over the trap table.
      
      Next, __flush_tlb_all() was not doing anything on hypervisor
      platforms.  Fix by adding sun4v_mmu_demap_all() and calling it.
      Signed-off-by: David S. Miller <davem@davemloft.net>
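
      For the second fix, a rough standalone outline of the dispatch it adds
      (the enum and function names are illustrative, not the sparc64 source):
      on hypervisor (sun4v) platforms the flush-everything path must make an
      explicit demap-all call, where previously it silently did nothing.

        #include <stdio.h>

        enum tlb_kind { SPITFIRE, CHEETAH, HYPERVISOR };    /* illustrative stand-ins */

        static void sun4v_demap_all(void)  { puts("hypervisor call: demap all TLB entries"); }
        static void chip_flush_tlb(void)   { puts("direct MMU register flush"); }

        static void flush_tlb_all(enum tlb_kind kind)
        {
                if (kind == HYPERVISOR)
                        sun4v_demap_all();      /* previously this case was a no-op */
                else
                        chip_flush_tlb();
        }

        int main(void)
        {
                flush_tlb_all(HYPERVISOR);
                return 0;
        }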
  10. 27 Oct 2007, 1 commit
  11. 17 Oct 2007, 2 commits
  12. 14 Oct 2007, 1 commit
  13. 29 May 2007, 5 commits
  14. 12 May 2007, 1 commit
  15. 08 May 2007, 1 commit
  16. 07 May 2007, 1 commit
  17. 26 Apr 2007, 11 commits
  18. 17 Mar 2007, 1 commit
    • [SPARC64]: Get DEBUG_PAGEALLOC working again. · d1acb421
      Committed by David S. Miller
      We have to make sure to use base-pagesize TLB entries even during the
      early transition period where we need TLB miss handling but don't have
      the kernel page tables set up yet for the linear region.
      
      Also, it is therefore necessary not to use the 4MB TSB for these
      translations, and instead use the normal kernel TSB.  This also allows
      us to get rid of the 4MB TSB for debug builds, which shrinks the
      kernel a little bit.
      Signed-off-by: David S. Miller <davem@davemloft.net>
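
      A small standalone sketch of the decision rule the commit states (the
      names and struct below are invented for illustration): while the
      linear-region page tables are not ready, or when DEBUG_PAGEALLOC is on,
      kernel misses are satisfied with base-pagesize entries through the
      normal kernel TSB, never the 4MB TSB.

        #include <stdio.h>
        #include <stdbool.h>

        enum page_size { PAGE_8K, PAGE_4MB };
        enum which_tsb { KERNEL_TSB, KERNEL_4MB_TSB };

        struct miss_policy { enum page_size size; enum which_tsb tsb; };

        /* pick how a kernel linear-region TLB miss should be handled */
        static struct miss_policy linear_miss_policy(bool debug_pagealloc,
                                                     bool linear_ptes_ready)
        {
                struct miss_policy p;

                if (debug_pagealloc || !linear_ptes_ready) {
                        p.size = PAGE_8K;       /* base pagesize only */
                        p.tsb  = KERNEL_TSB;    /* the 4MB TSB is not used at all */
                } else {
                        p.size = PAGE_4MB;
                        p.tsb  = KERNEL_4MB_TSB;
                }
                return p;
        }

        int main(void)
        {
                struct miss_policy p = linear_miss_policy(true, false);
                printf("early DEBUG_PAGEALLOC miss: %s via %s\n",
                       p.size == PAGE_8K ? "8K pages" : "4MB pages",
                       p.tsb == KERNEL_TSB ? "kernel TSB" : "4MB TSB");
                return 0;
        }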
  19. 13 Feb 2007, 1 commit
  20. 12 Feb 2007, 1 commit
  21. 01 Jan 2007, 1 commit
  22. 08 Dec 2006, 1 commit