1. 16 Feb, 2010 · 1 commit
  2. 12 Jan, 2010 · 1 commit
  3. 30 Oct, 2009 · 1 commit
  4. 07 Oct, 2009 · 1 commit
  5. 22 Sep, 2009 · 1 commit
  6. 16 Sep, 2009 · 1 commit
  7. 12 Sep, 2009 · 1 commit
    • ARM: Fix pfn_valid() for sparse memory · b7cfda9f
      Committed by Russell King
      On OMAP platforms, some people want to segment the memory between the
      kernel and a separate application, such that there is a hole in the
      middle of the memory as far as Linux is concerned.  However, they still
      want to be able to mmap() the hole.
      
      This currently causes problems, because update_mmu_cache() thinks that
      there are valid struct pages for the "hole".  Fix this by making
      pfn_valid() slightly more expensive, by checking whether the PFN is
      contained within the meminfo array.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Tested-by: Khasim Syed Mohammed <khasim@ti.com>
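      The fix described above amounts to walking the registered memory banks
      instead of doing a simple range check.  A minimal standalone C sketch of
      that idea follows; the structure names and the bank layout are
      illustrative stand-ins for this sketch, not the kernel's actual
      definitions.

      #include <stdbool.h>
      #include <stdio.h>

      /* Simplified stand-in for the ARM meminfo/membank structures. */
      struct membank {
              unsigned long start_pfn;        /* first pfn of the bank        */
              unsigned long nr_pfns;          /* number of pages in the bank  */
      };

      static struct membank banks[] = {
              { 0x80000, 0x10000 },           /* 256MB starting at 2GB        */
              { 0xa0000, 0x10000 },           /* 256MB after a 256MB hole     */
      };

      /* A pfn is valid only if it falls inside one of the registered banks,
       * so pages in the hole between the banks are correctly rejected.      */
      static bool pfn_valid(unsigned long pfn)
      {
              for (unsigned int i = 0; i < sizeof(banks) / sizeof(banks[0]); i++) {
                      unsigned long start = banks[i].start_pfn;
                      unsigned long end = start + banks[i].nr_pfns;

                      if (pfn >= start && pfn < end)
                              return true;
              }
              return false;
      }

      int main(void)
      {
              printf("0x85000 -> %d\n", pfn_valid(0x85000));  /* inside bank 0: 1 */
              printf("0x95000 -> %d\n", pfn_valid(0x95000));  /* in the hole:   0 */
              return 0;
      }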
  8. 15 Aug, 2009 · 1 commit
    • ARM: Fix broken highmem support · dde5828f
      Committed by Russell King
      Currently, highmem is selectable, and you can request an increased
      vmalloc area.  However, none of this has any effect on the memory
      layout since a patch in the highmem series was accidentally dropped.
      Moreover, even if you did want highmem, all memory would still be
      registered as lowmem, possibly resulting in overflow of the available
      virtual mapping space.
      
      The highmem boundary is determined by the highest allowed beginning
      of the vmalloc area, which depends on its configurable minimum size
      (see commit 60296c71 for details on
      this).
      
      We should create mappings and initialize bootmem only for low memory,
      while the zone allocator must still be told about highmem.
      
      Currently, memory nodes which are completely located in high memory
      are not supported.  This is not a huge limitation since systems
      relying on highmem support are unlikely to have discontiguous memory
      with large holes.
      
      [ A similar patch was meant to be merged before commit 5f0fbf9e
        and be available in Linux v2.6.30; however, a git rebase screw-up
        of mine dropped the first commit of the series, and that goofage
        escaped testing somehow as well. -- Nico ]
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Reviewed-by: Nicolas Pitre <nico@marvell.com>
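      To make that boundary concrete, here is a rough standalone C sketch of
      the arithmetic: the highest allowed start of the vmalloc area (its
      configurable minimum size plus a guard gap) caps how much RAM the direct
      map, and therefore lowmem, can cover.  All constants below, including
      the 8MB guard gap, are example values assumed for this sketch, not the
      kernel's definitions.

      #include <stdio.h>

      #define PAGE_OFFSET     0xC0000000UL    /* virtual start of the direct map   */
      #define VMALLOC_END     0xFF000000UL    /* example top of the vmalloc area   */
      #define VMALLOC_OFFSET  0x00800000UL    /* assumed 8MB guard gap below it    */

      int main(void)
      {
              unsigned long vmalloc_reserve = 240UL << 20;    /* configurable minimum size */
              unsigned long vmalloc_start = VMALLOC_END - vmalloc_reserve;
              unsigned long lowmem_top = vmalloc_start - VMALLOC_OFFSET;

              printf("vmalloc may start no lower than 0x%08lx\n", vmalloc_start);
              printf("at most %lu MB of RAM fits in lowmem; the rest must be highmem\n",
                     (lowmem_top - PAGE_OFFSET) >> 20);
              return 0;
      }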
  9. 16 Mar, 2009 · 1 commit
  10. 13 Mar, 2009 · 1 commit
    • [ARM] Fix virtual to physical translation macro corner cases · 1522ac3e
      Committed by Russell King
      The current use of these macros works well when the conversion is
      entirely linear.  In this case, we can be assured that the following
      holds true:
      
      	__va(p + s) - s = __va(p)
      
      However, this is not always the case, especially when there is a
      non-linear conversion (eg, when there is a 3.5GB hole in memory.)
      In this case, if 's' is the size of the region (eg, PAGE_SIZE) and
      'p' is the final page, the above is most definitely not true.
      
      So, we must ensure that __va() and __pa() are only used with valid
      kernel direct mapped RAM addresses.  This patch tweaks the code
      to achieve this.
      Tested-by: Charles Moschel <fred99@carolina.rr.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
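      A small standalone C sketch of the identity and its limit follows.  The
      translation macros below use the usual linear form, with example
      PHYS_OFFSET/PAGE_OFFSET values assumed for this sketch; the point is
      that the arithmetic only means anything when applied to addresses that
      really are inside the kernel's direct-mapped RAM.

      #include <stdio.h>

      #define PHYS_OFFSET  0x80000000UL       /* example physical start of RAM   */
      #define PAGE_OFFSET  0xC0000000UL       /* example virtual start of lowmem */
      #define PAGE_SIZE    0x1000UL

      /* Linear translation, valid only for direct-mapped RAM addresses. */
      #define __pa(v) ((unsigned long)(v) - PAGE_OFFSET + PHYS_OFFSET)
      #define __va(p) ((unsigned long)(p) - PHYS_OFFSET + PAGE_OFFSET)

      int main(void)
      {
              unsigned long p = PHYS_OFFSET + 0x100000;       /* a direct-mapped page */
              unsigned long s = PAGE_SIZE;

              /* For a purely linear mapping the identity holds (prints 1)...        */
              printf("%d\n", __va(p + s) - s == __va(p));

              /* ...but if 'p' were the final page of RAM, 'p + s' would lie outside
               * the direct map (or inside a hole), and __va(p + s) would name a
               * virtual address that maps nothing; that is the corner case fixed.   */
              return 0;
      }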
  11. 01 Dec, 2008 · 1 commit
  12. 28 Nov, 2008 · 2 commits
  13. 02 Oct, 2008 · 2 commits
  14. 01 Oct, 2008 · 1 commit
  15. 06 Sep, 2008 · 2 commits
  16. 31 Jul, 2008 · 1 commit
  17. 25 Jul, 2008 · 1 commit
  18. 19 Apr, 2008 · 1 commit
  19. 08 Feb, 2008 · 1 commit
    • Introduce flags for reserve_bootmem() · 72a7fe39
      Committed by Bernhard Walle
      This patchset adds a flags argument to reserve_bootmem() and uses the
      BOOTMEM_EXCLUSIVE flag in the crashkernel reservation code to detect
      collisions between the crashkernel area and already-used memory.
      
      This patch:
      
      Change the reserve_bootmem() function to accept a new flag, BOOTMEM_EXCLUSIVE.
      If that flag is set, the function returns -EBUSY if the memory has already
      been reserved in the past.  This is to avoid conflicts.
      
      Because that code runs before SMP initialisation, there's no race condition
      inside reserve_bootmem_core().
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix powerpc build]
      Signed-off-by: Bernhard Walle <bwalle@suse.de>
      Cc: <linux-arch@vger.kernel.org>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
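      A hedged sketch of a caller using the new flag follows (kernel-style C;
      the function name and the warning text are illustrative, while the
      reserve_bootmem() signature and the BOOTMEM_EXCLUSIVE/-EBUSY semantics
      are the ones this patch introduces).

      #include <linux/bootmem.h>
      #include <linux/init.h>
      #include <linux/kernel.h>

      /* Reserve the crashkernel window; with BOOTMEM_EXCLUSIVE the call fails
       * with -EBUSY if any part of the range was already reserved, instead of
       * silently double-reserving it. */
      static int __init reserve_crashkernel_region(unsigned long base,
                                                   unsigned long size)
      {
              int ret = reserve_bootmem(base, size, BOOTMEM_EXCLUSIVE);

              if (ret < 0) {
                      printk(KERN_WARNING
                             "crashkernel reservation failed: memory is in use\n");
                      return ret;
              }

              /* ... record base/size for the crash kernel here ... */
              return 0;
      }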
  20. 22 Apr, 2007 · 1 commit
  21. 24 Jan, 2007 · 1 commit
  22. 08 Nov, 2006 · 1 commit
  23. 27 Sep, 2006 · 3 commits
  24. 20 Sep, 2006 · 1 commit
  25. 01 Jul, 2006 · 1 commit
  26. 29 Jun, 2006 · 1 commit
  27. 07 Apr, 2006 · 1 commit
  28. 22 Mar, 2006 · 2 commits
  29. 18 Nov, 2005 · 1 commit
    • [ARM] Fix some corner cases in new mm initialisation · 02b30839
      Committed by Russell King
      Document that the VMALLOC_END address must be aligned to 2MB since
      it must align with a PGD boundary.
      
      Allocate the vectors page early so that the later flush_cache_all()
      will safely write back any dirty cache lines in the direct mapping.
      
      Move the flush_cache_all() to the second local_flush_cache_tlb() and
      remove the now redundant first local_flush_cache_tlb().
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
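      The alignment requirement documented above is easy to check mechanically.
      A tiny standalone C sketch, with example values assumed here rather than
      the kernel's definitions:

      #include <stdio.h>

      #define PGDIR_SIZE   (2UL << 20)        /* 2MB spanned by one ARM PGD entry pair */
      #define VMALLOC_END  0xFF000000UL       /* example value                         */

      int main(void)
      {
              if (VMALLOC_END & (PGDIR_SIZE - 1))
                      printf("VMALLOC_END is not 2MB aligned: it misses a PGD boundary\n");
              else
                      printf("VMALLOC_END is aligned to a PGD boundary\n");
              return 0;
      }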
  30. 02 Nov, 2005 · 1 commit
  31. 29 Oct, 2005 · 1 commit
  32. 28 Oct, 2005 · 2 commits
  33. 28 Jun, 2005 · 1 commit