  1. 07 Jan, 2009 1 commit
    • sparc64: refactor code in init_64.c · ff9aefbf
      Sam Ravnborg committed
      The sparc64 allmodconfig build broke due to the enabling of the
      branch_tracer, which does some very clever things with all if
      conditions. This confused my gcc 3.4.5 enough that it emitted
      two warnings:
      
      arch/sparc/mm/init_64.c: In function `update_mmu_cache':
      arch/sparc/mm/init_64.c:271: warning: 'pg_flags' might be used uninitialized in this function
      arch/sparc/mm/init_64.c:272: warning: 'page' might be used uninitialized in this function
      
      And with -Werror this broke the build.
      
      Refactor the code so it:
      1) becomes more readable
      2) no longer emits a warning with the branch_tracer enabled
      
      The refactoring uses a small helper function (flush_dcache()).
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ff9aefbf
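      A minimal sketch of the shape of this refactoring, simplified from the actual init_64.c change (the SMP cross-call and TSB-insertion details are omitted): hoisting the dcache-dirty test into flush_dcache() keeps page and pg_flags live only inside the branch that initializes them, which is what silences gcc's bogus warnings.

        static void flush_dcache(unsigned long pfn)
        {
            struct page *page = pfn_valid(pfn) ? pfn_to_page(pfn) : NULL;

            if (page) {
                unsigned long pg_flags = page->flags;

                /* Only flush if the arch-private dcache-dirty bit is set. */
                if (pg_flags & (1UL << PG_dcache_dirty))
                    flush_dcache_page_impl(page);
            }
        }

        void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t pte)
        {
            if (tlb_type != hypervisor)
                flush_dcache(pte_pfn(pte));

            /* ... TSB insertion continues as before ... */
        }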
  2. 05 Dec, 2008 2 commits
  3. 01 Dec, 2008 1 commit
  4. 12 Sep, 2008 1 commit
  5. 02 Sep, 2008 1 commit
  6. 01 Sep, 2008 2 commits
  7. 30 Aug, 2008 1 commit
  8. 25 Aug, 2008 1 commit
  9. 14 Aug, 2008 2 commits
    • sparc64: Fix cmdline_memory_size handling bugs. · f2b60794
      David S. Miller committed
      First, lmb_enforce_memory_limit() interprets its argument
      (mostly, heh) as a size limit, not an address limit.  So pass
      the raw cmdline_memory_size value into it.  And we don't
      need to check it against zero; lmb_enforce_memory_limit() does
      that for us.
      
      Next, free_initmem() needs special handling when the kernel
      command line trims the available memory.  The problem case is
      if the trimmed out memory is where the kernel image itself
      resides.
      
      When that memory is trimmed out, we don't add those physical
      ram areas to the sparsemem active ranges, amongst other things,
      which means that this free_initmem() code will free up invalid
      page structs, resulting in either crashes or hangs.
      
      As a quick fix, don't free initmem at all if "mem=" was given
      on the boot command line.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f2b60794
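      A minimal sketch of the quick fix described above (simplified; the real free_initmem() goes on to free the init sections page by page): cmdline_memory_size is non-zero only when "mem=" was given, so the function simply bails out in that case.

        void free_initmem(void)
        {
            /* If "mem=" trimmed the memory map, the kernel image may sit
             * in a trimmed region whose page structs were never set up,
             * so don't try to free initmem at all. */
            if (cmdline_memory_size)
                return;

            /* ... normal freeing of the init sections ... */
        }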
    • sparc64: Fix overshoot in nid_range(). · c918dcce
      David S. Miller committed
      If 'start' does not begin on a page boundary, we can overshoot
      past 'end'.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c918dcce
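      A sketch of the overshoot and the clamp that addresses it (close in spirit to the fix, with find_node() standing for the node lookup the real code performs; treat it as an illustration rather than the literal patch): because the walk advances in PAGE_SIZE steps from a possibly unaligned 'start', the last step can land past 'end', so the result is clamped before returning.

        static unsigned long nid_range(unsigned long start, unsigned long end, int *nid)
        {
            *nid = find_node(start);
            start += PAGE_SIZE;

            while (start < end) {
                if (find_node(start) != *nid)
                    break;
                start += PAGE_SIZE;
            }

            /* The fix: never report a boundary past 'end'. */
            if (start > end)
                start = end;

            return start;
        }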
  10. 13 Aug, 2008 1 commit
  11. 27 Jul, 2008 1 commit
  12. 25 Jul, 2008 1 commit
  13. 20 May, 2008 1 commit
  14. 17 May, 2008 1 commit
  15. 12 May, 2008 1 commit
  16. 07 May, 2008 1 commit
  17. 06 May, 2008 1 commit
  18. 28 Apr, 2008 1 commit
    • pageflags: get rid of FLAGS_RESERVED · 9223b419
      Christoph Lameter committed
      NR_PAGEFLAGS specifies the number of page flags we are using.  From that we
      can calculate the number of leftover bits that can be used for the zone, node
      (and maybe the section id).  There is no longer any need for FLAGS_RESERVED
      if we use NR_PAGEFLAGS.
      
      Use the new methods to make NR_PAGEFLAGS available via the preprocessor.
      NR_PAGEFLAGS is used to calculate field boundaries in the page flags fields.
      These field widths have to be available to the preprocessor.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9223b419
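      A sketch of why NR_PAGEFLAGS has to be visible to the preprocessor: the zone/node/section field widths are checked against the leftover bits of page->flags at preprocessing time, which a fixed FLAGS_RESERVED budget could only approximate. The widths and values below are illustrative, not the kernel's actual numbers.

        #define BITS_PER_LONG   64      /* illustrative */
        #define NR_PAGEFLAGS    22      /* illustrative; the real value comes from the flag enum */

        #define SECTIONS_WIDTH  0       /* no sparsemem section id stored in page->flags */
        #define NODES_WIDTH     6
        #define ZONES_WIDTH     2

        /* Compile-time layout check, only possible because every operand
         * above is a plain preprocessor constant. */
        #if SECTIONS_WIDTH + NODES_WIDTH + ZONES_WIDTH + NR_PAGEFLAGS > BITS_PER_LONG
        #error "page->flags layout does not fit in an unsigned long"
        #endif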
  19. 24 Apr, 2008 8 commits
  20. 29 Apr, 2008 1 commit
  21. 26 Mar, 2008 2 commits
  22. 22 Mar, 2008 1 commit
    • [SPARC64]: Remove most limitations to kernel image size. · 64658743
      David S. Miller committed
      Currently kernel images are limited to 8MB in size, and this causes
      problems especially when enabling features that take up a lot of
      kernel image space such as lockdep.
      
      The code now will align the kernel image size up to 4MB and map that
      many locked TLB entries.  So, the only practical limitation is the
      number of available locked TLB entries which is 16 on Cheetah and 64
      on pre-Cheetah sparc64 cpus.  Niagara cpus don't actually have hw
      locked TLB entry support.  Rather, the hypervisor transparently
      provides support for "locked" TLB entries since it runs with physical
      addressing and does the initial TLB miss processing.
      
      Fully utilizing this change requires some help from SILO, a patch for
      which will be submitted to the maintainer.  Essentially, SILO will
      only currently map up to 8MB for the kernel image and that needs to be
      increased.
      
      Note that neither this patch nor the SILO bits will help with network
      booting.  The openfirmware code will only map up to a certain amount
      of kernel image during a network boot, and there isn't much we can do
      about that other than to implement a layered network booting
      facility.  Solaris has this, calling it "wanboot", and we may
      implement something similar at some point.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      64658743
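      A standalone sketch of the sizing arithmetic described above (the macro and function names are made up for illustration; the kernel does this with its own constants): round the image up to the 4MB locked-entry granularity and compare the entry count against what the cpu provides (16 on Cheetah, 64 on earlier sparc64 cpus).

        #include <stdio.h>

        #define LOCKED_TLB_GRAIN        (4UL << 20)     /* one locked TLB entry maps 4MB */

        static unsigned long locked_entries_needed(unsigned long image_bytes)
        {
            unsigned long aligned = (image_bytes + LOCKED_TLB_GRAIN - 1) &
                                    ~(LOCKED_TLB_GRAIN - 1);
            return aligned / LOCKED_TLB_GRAIN;
        }

        int main(void)
        {
            /* e.g. a 14MB lockdep-enabled image needs 4 locked entries,
             * comfortably below the 16 available on Cheetah. */
            printf("%lu locked entries\n", locked_entries_needed(14UL << 20));
            return 0;
        }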
  23. 25 Feb, 2008 1 commit
    • [SPARC64]: Fix section mismatch from kernel_map_range · 896aef43
      Sam Ravnborg committed
      Fix the following warnings:
      WARNING: vmlinux.o(.text+0x4f980): Section mismatch in reference from the function kernel_map_range() to the function .init.text:__alloc_bootmem()
      WARNING: vmlinux.o(.text+0x4f9cc): Section mismatch in reference from the function kernel_map_range() to the function .init.text:__alloc_bootmem()
      
      alloc_bootmem() is only used during early init, and for any subsequent
      call to kernel_map_range() the program logic avoids the call.
      So annotate kernel_map_range() with __ref to tell modpost to
      ignore the reference to a __init function.
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      896aef43
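      A sketch of the general __ref pattern being applied here (the helper name and the slab check are illustrative, not the literal patch): the annotation tells modpost that the call into .init.text is intentional and is only taken on the early-boot path, before init memory is freed.

        static void * __ref early_or_late_alloc(unsigned long size)
        {
            if (!slab_is_available())
                /* Early boot: bootmem is the only allocator available,
                 * and this branch is never reached after init. */
                return __alloc_bootmem(size, SMP_CACHE_BYTES, 0UL);

            return kmalloc(size, GFP_KERNEL);
        }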
  24. 18 Feb, 2008 1 commit
  25. 13 Feb, 2008 1 commit
  26. 08 Feb, 2008 1 commit
    • Introduce flags for reserve_bootmem() · 72a7fe39
      Bernhard Walle committed
      This patchset adds a flags variable to reserve_bootmem() and uses the
      BOOTMEM_EXCLUSIVE flag in the crashkernel reservation code to detect collisions
      between the crashkernel area and already used memory.
      
      This patch:
      
      Change the reserve_bootmem() function to accept a new flag, BOOTMEM_EXCLUSIVE.
      If that flag is set, the function returns -EBUSY if the memory has already
      been reserved in the past.  This is to avoid conflicts.
      
      Because that code runs before SMP initialisation, there's no race condition
      inside reserve_bootmem_core().
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix powerpc build]
      Signed-off-by: Bernhard Walle <bwalle@suse.de>
      Cc: <linux-arch@vger.kernel.org>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      72a7fe39
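      A sketch of the caller-side usage this enables, in the spirit of the crashkernel reservation described above (variable names and the message text are illustrative; surrounding setup and error handling are trimmed):

        ret = reserve_bootmem(crash_base, crash_size, BOOTMEM_EXCLUSIVE);
        if (ret < 0) {
            /* -EBUSY: part of the range was already reserved, so refuse
             * to place the crash kernel there instead of silently
             * overlapping used memory. */
            printk(KERN_WARNING "crashkernel reservation failed - memory is in use\n");
            crash_base = 0;
        }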
  27. 31 Jan, 2008 1 commit
    • SPARC64: use generic percpu · 3afc6202
      travis@sgi.com committed
      Sparc64 has a way of providing the base address for the per cpu area of the
      currently executing processor in a global register.
      
      Sparc64 also provides a way to calculate the address of a per cpu area
      from a base address instead of performing an array lookup.
      
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Mike Travis <travis@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3afc6202
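      A sketch of the two properties the commit text describes, roughly as the sparc64 headers of that era expressed them (the exact names should be treated as illustrative): the running cpu's per-cpu base lives in a global register, and any cpu's base can be computed from a base plus shift rather than an array lookup, so the generic per-cpu code can simply be pointed at these offsets.

        /* Per-cpu base of the currently executing processor, kept in %g5. */
        register unsigned long __local_per_cpu_offset asm("g5");

        extern unsigned long __per_cpu_base;
        extern unsigned long __per_cpu_shift;

        /* Compute any cpu's per-cpu offset without a per_cpu_offset[] array. */
        #define __per_cpu_offset(cpu) \
            (__per_cpu_base + ((unsigned long)(cpu) << __per_cpu_shift))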
  28. 13 Dec, 2007 1 commit
    • [SPARC64]: Fix two kernel linear mapping setup bugs. · 8f361453
      David S. Miller committed
      This was caught and identified by Greg Onufer.
      
      Since we set up the 256M/4M bitmap table after taking over the trap
      table, it's possible for some 4M mappings to get loaded into the TLB
      beforehand which will later become 256M mappings.
      
      This can cause illegal TLB multiple-match conditions.  Fix this by
      setting up the bitmap before we take over the trap table.
      
      Next, __flush_tlb_all() was not doing anything on hypervisor
      platforms.  Fix by adding sun4v_mmu_demap_all() and calling it.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8f361453
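      A sketch of the second fix (simplified; the pre-sun4v flush paths and the actual hypervisor call sequence are omitted, and the non-hypervisor helper name is hypothetical): on sun4v the TLB is owned by the hypervisor, so a full flush has to be a hypervisor demap-all request rather than diagnostic-register writes.

        void __flush_tlb_all(void)
        {
            if (tlb_type == hypervisor) {
                /* New: ask the hypervisor to demap every TLB entry. */
                sun4v_mmu_demap_all();
            } else {
                /* spitfire/cheetah: clear entries via diagnostic
                 * accesses, as before. */
                flush_tlb_all_chips();  /* hypothetical name for the existing path */
            }
        }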
  29. 27 Oct, 2007 1 commit