1. 09 Oct 2012, 2 commits
    • mm: fix-up zone present pages · 7f1290f2
      Authored by Jianguo Wu
      I think zone->present_pages indicates the pages that the buddy system
      can manage; it should be:
      
      	zone->present_pages = spanned pages - absent pages - bootmem pages,
      
      but is now:
      	zone->present_pages = spanned pages - absent pages - memmap pages.
      
      spanned pages: total size, including holes.
      absent pages: holes.
      bootmem pages: pages used in system boot, managed by bootmem allocator.
      memmap pages: pages used by page structs.
      
      This may cause zone->present_pages to be less than it should be.  For
      example, NUMA node 1 has ZONE_NORMAL and ZONE_MOVABLE; its memmap and
      other bootmem are allocated from ZONE_MOVABLE, so ZONE_NORMAL's
      present_pages should be spanned pages - absent pages, but currently it
      also subtracts memmap pages (free_area_init_core), which are actually
      allocated from ZONE_MOVABLE.  When offlining all memory of such a zone,
      zone->present_pages can drop below 0; because present_pages is an
      unsigned long, it actually becomes a very large integer.  That in turn
      makes zone->watermark[WMARK_MIN] a large integer
      (setup_per_zone_wmarks()), then makes totalreserve_pages a large
      integer (calculate_totalreserve_pages()), and finally causes memory
      allocation to fail when forking a process (__vm_enough_memory()).
      
      [root@localhost ~]# dmesg
      -bash: fork: Cannot allocate memory
      
      I think the bug described in
      
        http://marc.info/?l=linux-mm&m=134502182714186&w=2
      
      is also caused by wrong zone present pages.
      
      This patch intends to fix up zone->present_pages when memory is freed
      to the buddy system on the x86_64 and IA64 platforms.
      Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Reported-by: Petr Tesarik <ptesarik@suse.cz>
      Tested-by: Petr Tesarik <ptesarik@suse.cz>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
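
      To make the accounting above concrete, here is a minimal sketch of the
      bookkeeping the fix calls for, assuming a hypothetical helper invoked
      as bootmem-reserved pages are handed back to the buddy allocator (the
      helper name and call site are illustrative, not the patch's actual
      interface):

        /*
         * Illustrative sketch only.  When a page that the bootmem allocator
         * had reserved is finally released to the buddy system, the zone it
         * belongs to should count it in present_pages, so that
         *     present_pages = spanned pages - absent pages - bootmem pages
         * holds no matter which zone the bootmem/memmap was carved from.
         */
        static void account_freed_bootmem_page(struct page *page)
        {
                struct zone *zone = page_zone(page);

                zone->present_pages++;
        }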
    • mm: kill vma flag VM_RESERVED and mm->reserved_vm counter · 314e51b9
      Authored by Konstantin Khlebnikov
      A long time ago, in v2.4, VM_RESERVED kept the swapout process off a
      VMA; it has since lost its original meaning but still has some effects:
      
       | effect                 | alternative flags
      -+------------------------+---------------------------------------------
      1| account as reserved_vm | VM_IO
      2| skip in core dump      | VM_IO, VM_DONTDUMP
      3| do not merge or expand | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
      4| do not mlock           | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
      
      This patch removes the reserved_vm counter from mm_struct.  Nobody
      seems to care about it: it is not exported to userspace directly, it
      only reduces the total_vm shown in proc.
      
      Thus VM_RESERVED can be replaced with VM_IO or the pair VM_DONTEXPAND | VM_DONTDUMP.
      
      remap_pfn_range() and io_remap_pfn_range() set VM_IO|VM_DONTEXPAND|VM_DONTDUMP.
      remap_vmalloc_range() sets VM_DONTEXPAND | VM_DONTDUMP.
      
      [akpm@linux-foundation.org: drivers/vfio/pci/vfio_pci.c fixup]
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Carsten Otte <cotte@de.ibm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Eric Paris <eparis@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Morris <james.l.morris@oracle.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Kentaro Takeda <takedakn@nttdata.co.jp>
      Cc: Matt Helsley <matthltc@us.ibm.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Venkatesh Pallipadi <venki@google.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
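
      A hedged sketch of what the replacement looks like from a driver's
      point of view; the driver name and mmap hook below are invented for
      illustration:

        /* "exampledrv" is fictitious.  Where such a driver used to do
         *         vma->vm_flags |= VM_RESERVED;
         * it now asks for the specific behaviours it actually relied on. */
        static int exampledrv_mmap(struct file *file, struct vm_area_struct *vma)
        {
                /* no merging/expanding, and skip this mapping in core dumps */
                vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
                return 0;
        }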
  2. 29 Mar 2012, 1 commit
  3. 09 Dec 2011, 1 commit
    • ia64: Use HAVE_MEMBLOCK_NODE_MAP · 98e4ae8a
      Authored by Tejun Heo
      ia64 used early_node_map[] just to prime free_area_init_nodes().  Now
      memblock can be used for the same purpose and early_node_map[] is
      scheduled to be dropped.  Use memblock instead.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: linux-ia64@vger.kernel.org
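
      Roughly, the conversion swaps the early_node_map[] registration call
      for a memblock one.  A sketch under the assumption that the
      three-argument memblock_set_node() of that kernel generation is
      available (an approximation of the shape of the diff, not a quote):

        /* invented wrapper name; start_pfn/end_pfn/nid come from the EFI walk */
        static int __init note_memory_range(u64 start_pfn, u64 end_pfn, int nid)
        {
                /* before: add_active_range(nid, start_pfn, end_pfn); */
                return memblock_set_node(PFN_PHYS(start_pfn),
                                         PFN_PHYS(end_pfn - start_pfn), nid);
        }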
  4. 25 May 2011, 1 commit
  5. 07 Mar 2010, 1 commit
    • mm: change anon_vma linking to fix multi-process server scalability issue · 5beb4930
      Authored by Rik van Riel
      The old anon_vma code can lead to scalability issues with heavily forking
      workloads.  Specifically, each anon_vma will be shared between the parent
      process and all its child processes.
      
      In a workload with 1000 child processes and a VMA with 1000 anonymous
      pages per process that get COWed, this leads to a system with a million
      anonymous pages in the same anon_vma, each of which is mapped in just one
      of the 1000 processes.  However, the current rmap code needs to walk them
      all, leading to O(N) scanning complexity for each page.
      
      This can result in systems where one CPU is walking the page tables of
      1000 processes in page_referenced_one, while all other CPUs are stuck on
      the anon_vma lock.  This leads to catastrophic failure for a benchmark
      like AIM7, where the total number of processes can reach in the tens of
      thousands.  Real workloads are still a factor 10 less process intensive
      than AIM7, but they are catching up.
      
      This patch changes the way anon_vmas and VMAs are linked, which allows us
      to associate multiple anon_vmas with a VMA.  At fork time, each child
      process gets its own anon_vmas, in which its COWed pages will be
      instantiated.  The parents' anon_vma is also linked to the VMA, because
      non-COWed pages could be present in any of the children.
      
      This reduces rmap scanning complexity to O(1) for the pages of the 1000
      child processes, with O(N) complexity for at most 1/N pages in the system.
       This reduces the average scanning cost in heavily forking workloads from
      O(N) to 2.
      
      The only real complexity in this patch stems from the fact that linking a
      VMA to anon_vmas now involves memory allocations.  This means vma_adjust
      can fail, if it needs to attach a VMA to anon_vma structures.  This in
      turn means error handling needs to be added to the calling functions.
      
      A second source of complexity is that, because there can be multiple
      anon_vmas, the anon_vma linking in vma_adjust can no longer be done under
      "the" anon_vma lock.  To prevent the rmap code from walking up an
      incomplete VMA, this patch introduces the VM_LOCK_RMAP VMA flag.  This bit
      flag uses the same slot as the NOMMU VM_MAPPED_COPY, with an ifdef in mm.h
      to make sure it is impossible to compile a kernel that needs both symbolic
      values for the same bitflag.
      
      Some test results:
      
      Without the anon_vma changes, when AIM7 hits around 9.7k users (on a test
      box with 16GB RAM and not quite enough IO), the system ends up running
      >99% in system time, with every CPU on the same anon_vma lock in the
      pageout code.
      
      With these changes, AIM7 hits the cross-over point around 29.7k users.
      This happens with ~99% IO wait time; there never seems to be any spike in
      system time.  The anon_vma lock contention appears to be resolved.
      
      [akpm@linux-foundation.org: cleanups]
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Larry Woodman <lwoodman@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
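
      The linking object this change introduces looks roughly like the
      following (field names reproduced from memory, so treat them as
      approximate):

        /* One of these per VMA<->anon_vma association, so a VMA can hang off
         * several anon_vmas and an anon_vma can serve several VMAs. */
        struct anon_vma_chain {
                struct vm_area_struct *vma;
                struct anon_vma *anon_vma;
                struct list_head same_vma;      /* linked into vma->anon_vma_chain */
                struct list_head same_anon_vma; /* linked into the anon_vma's list */
        };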
  6. 09 Feb 2010, 1 commit
    • [IA64] Remove COMPAT_IA32 support · 32974ad4
      Authored by Tony Luck
      This has been broken since May 2008 when Al Viro killed altroot support.
      Since nobody has complained, it would appear that there are no users of
      this code (a plausible theory, since the main OSVs that support ia64
      prefer to use the IA32-EL software emulation).
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  7. 07 Jan 2010, 1 commit
  8. 02 Oct 2009, 1 commit
    • ia64: don't alias VMALLOC_END to vmalloc_end · 126b3fcd
      Authored by Tejun Heo
      If CONFIG_VIRTUAL_MEM_MAP is enabled, ia64 defines the macro
      VMALLOC_END as the unsigned long variable vmalloc_end, which is
      adjusted to make room for the vmemmap.  This becomes problematic if a
      local variable vmalloc_end is defined in some function (not very
      unlikely) and VMALLOC_END is used in that function - the function
      thinks it's referencing the global VMALLOC_END value but is actually
      referencing its own local vmalloc_end variable.
      
      There's no reason VMALLOC_END should be a macro.  Just define it as an
      unsigned long variable if CONFIG_VIRTUAL_MEM_MAP is set to avoid nasty
      surprises.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: linux-ia64 <linux-ia64@vger.kernel.org>
      Cc: Christoph Lameter <cl@linux-foundation.org>
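
      A self-contained illustration of the shadowing pitfall described above
      (the function is made up; the macro mirrors the old ia64 definition):

        unsigned long vmalloc_end = 0xa000000200000000UL; /* global, shrunk at boot */
        #define VMALLOC_END vmalloc_end                   /* the problematic alias  */

        static unsigned long buggy(void)
        {
                unsigned long vmalloc_end = 0;  /* innocent-looking local */

                /* VMALLOC_END expands to the *local* vmalloc_end here: */
                return VMALLOC_END;             /* returns 0, not the global limit */
        }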
  9. 23 Sep 2009, 4 commits
  10. 22 Sep 2009, 1 commit
  11. 18 Jun 2009, 1 commit
    • [IA64] Convert ia64 to use int-ll64.h · e088a4ad
      Authored by Matthew Wilcox
      It is generally agreed that it would be beneficial for u64 to be an
      unsigned long long on all architectures.  ia64 (in common with several
      other 64-bit architectures) currently uses unsigned long.  Migrating
      piecemeal is too painful; this giant patch fixes all compilation warnings
      and errors that come as a result of switching to use int-ll64.h.
      
      Note that userspace will still see __u64 defined as unsigned long.  This
      is important as it affects C++ name mangling.
      
      [Updated by Tony Luck to change efi.h:efi_freemem_callback_t to use
       u64 for start/end rather than unsigned long]
      Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
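
      A small, generic illustration (not taken from the patch) of why the
      definition of u64 matters in everyday code:

        u64 nbytes = 1234;

        /* With u64 == unsigned long (the old ia64 convention), "%llu" draws a
         * format warning; with u64 == unsigned long long (int-ll64.h) it is
         * exact.  A cast keeps one format string correct under either definition. */
        printk(KERN_INFO "size = %llu bytes\n", (unsigned long long)nbytes);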
  12. 02 Apr 2009, 1 commit
  13. 27 Mar 2009, 2 commits
  14. 07 Jan 2009, 1 commit
    • mm: show node to memory section relationship with symlinks in sysfs · c04fc586
      Authored by Gary Hade
      Show node to memory section relationship with symlinks in sysfs
      
      Add /sys/devices/system/node/nodeX/memoryY symlinks for all
      the memory sections located on nodeX.  For example:
      /sys/devices/system/node/node1/memory135 -> ../../memory/memory135
      indicates that memory section 135 resides on node1.
      
      It also revises the documentation to cover this change and updates
      Documentation/ABI/testing/sysfs-devices-memory to include descriptions
      of the memory hotremove files 'phys_device', 'phys_index', and 'state'
      that were previously not described there.
      
      In addition to it always being a good policy to provide users with
      the maximum possible amount of physical location information for
      resources that can be hot-added and/or hot-removed, the following
      are some (but likely not all) of the user benefits provided by
      this change.
      Immediate:
        - Provides information needed to determine the specific node
          on which a defective DIMM is located.  This will reduce system
          downtime when the node or defective DIMM is swapped out.
        - Prevents unintended onlining of a memory section that was
          previously offlined due to a defective DIMM.  This could happen
          during node hot-add when the user or node hot-add assist script
          onlines _all_ offlined sections due to user or script inability
          to identify the specific memory sections located on the hot-added
          node.  The consequences of reintroducing the defective memory
          could be ugly.
        - Provides information needed to vary the amount and distribution
          of memory on specific nodes for testing or debugging purposes.
      Future:
        - Will provide information needed to identify the memory
          sections that need to be offlined prior to physical removal
          of a specific node.
      
      Symlink creation during boot was tested on 2-node x86_64, 2-node
      ppc64, and 2-node ia64 systems.  Symlink creation during physical
      memory hot-add tested on a 2-node x86_64 system.
      Signed-off-by: Gary Hade <garyhade@us.ibm.com>
      Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
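
      A hedged sketch of the call that creates such a symlink; the kobject
      variables and the exact registration point are invented for
      illustration:

        char name[32];
        int ret;

        /* creates /sys/devices/system/node/nodeX/memoryY -> ../../memory/memoryY */
        snprintf(name, sizeof(name), "memory%d", section_nr);
        ret = sysfs_create_link(&node_kobj,         /* nodeX's kobject   */
                                &memory_block_kobj, /* memoryY's kobject */
                                name);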
  15. 20 Oct 2008, 1 commit
    • mm: cleanup to make remove_memory() arch-neutral · 71088785
      Authored by Badari Pulavarty
      There is nothing architecture specific about remove_memory().
      The remove_memory() function is common to all architectures that
      support hotplug memory removal.  Instead of duplicating it in every
      architecture, collapse it into one arch-neutral function.
      
      [akpm@linux-foundation.org: fix the export]
      Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Gary Hade <garyhade@us.ibm.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
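
      A sketch of roughly what the consolidated function amounts to in this
      era, assuming the then-current offline_pages(start_pfn, end_pfn,
      timeout) interface:

        int remove_memory(u64 start, u64 size)
        {
                unsigned long start_pfn = start >> PAGE_SHIFT;
                unsigned long end_pfn   = start_pfn + (size >> PAGE_SHIFT);

                /* every architecture was duplicating essentially this call */
                return offline_pages(start_pfn, end_pfn, 120 * HZ);
        }
        EXPORT_SYMBOL_GPL(remove_memory);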
  16. 07 Sep 2008, 1 commit
  17. 16 May 2008, 1 commit
    • [IA64] fix personality(PER_LINUX32) performance issue · 839052d2
      Authored by Huang, Xiaolan
      The patch aims to fix a performance issue for the syscall
      personality(PER_LINUX32).
      
      On an IA-64 box, the syscall personality(PER_LINUX32) has poor
      performance because it fails to find the Linux/x86 execution domain.
      It then tries to load a kernel module, which always fails, so it falls
      back to the default execution domain PER_LINUX.  Requesting kernel
      modules is very expensive, and that is what caused the performance
      issue (see the function lookup_exec_domain in kernel/exec_domain.c).
      
      To resolve the issue, the Linux/x86 execution domain is always
      registered at initialization time on the IA-64 architecture.
      Signed-off-by: Xiaolan Huang <xiaolan.huang@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
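
      A rough sketch of the registration the description implies, built on
      register_exec_domain(); the initializer is trimmed and should be read
      as approximate rather than as the actual diff:

        static struct exec_domain ia32_exec_domain;     /* "Linux/x86" */

        static int __init
        per_linux32_init(void)
        {
                ia32_exec_domain.name = "Linux/x86";
                ia32_exec_domain.pers_low = PER_LINUX32;
                ia32_exec_domain.pers_high = PER_LINUX32;
                register_exec_domain(&ia32_exec_domain);
                return 0;
        }
        __initcall(per_linux32_init);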
  18. 28 Apr 2008, 1 commit
  19. 12 Apr 2008, 1 commit
    • [IA64] Fix NUMA configuration issue · 98075d24
      Authored by Zoltan Menyhart
      There is a NUMA memory configuration issue in 2.6.24:
      
      A 2-node machine of ours has got the following memory layout:
      
      Node 0:	0 - 2 Gbytes
      Node 0:	4 - 8 Gbytes
      Node 1:	8 - 16 Gbytes
      Node 0:	16 - 18 Gbytes
      
      "efi_memmap_init()" merges the three last ranges into one.
      
      "register_active_ranges()" is called as follows:
      
      efi_memmap_walk(register_active_ranges, NULL);
      
      i.e. once for the 4 - 18 Gbytes range. It picks up the node
      number from the start address and registers all of the memory as
      belonging to node #0.
      
      "register_active_ranges()" should be called as follows to
      make sure there is no merged address range at its entry:
      
      efi_memmap_walk(filter_memory, register_active_ranges);
      
      "filter_memory()" is similar to "filter_rsvd_memory()",
      but the reserved memory ranges are not filtered out.
      Signed-off-by: Zoltan Menyhart <Zoltan.Menyhart@bull.net>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  20. 10 Apr 2008, 1 commit
  21. 07 Mar 2008, 1 commit
  22. 30 Oct 2007, 1 commit
    • [IA64] ia64/mm/init.c: fix section mismatches · 18b8befd
      Authored by Adrian Bunk
      This patch fixes the following section mismatches:
      
      <--  snip  -->
      
      ...
      WARNING: vmlinux.o(.text+0x5b5c2): Section mismatch: reference to .init.text:memmap_init_zone (between 'memmap_init' and 'virtual_memmap_init')
      WARNING: vmlinux.o(.text+0x5b842): Section mismatch: reference to .init.text:memmap_init_zone (between 'virtual_memmap_init' and 'ia64_mmu_init')
      ...
      
      <--  snip  -->
      Signed-off-by: Adrian Bunk <bunk@kernel.org>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
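
      The usual cure for this class of warning is to move the callers into a
      matching section; a best-guess sketch of the shape of the fix
      (annotations inferred from the warnings, not quoted from the diff):

        /* Callers of an .init.text function must themselves live in an
         * init/meminit section, or modpost flags the reference. */
        static int __meminit
        virtual_memmap_init(u64 start, u64 end, void *arg);

        void __meminit
        memmap_init(unsigned long size, int nid, unsigned long zone,
                    unsigned long start_pfn);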
  23. 20 Oct 2007, 1 commit
  24. 17 Oct 2007, 3 commits
  25. 12 May 2007, 1 commit
  26. 08 May 2007, 1 commit
    • Make page->private usable in compound pages · d85f3385
      Authored by Christoph Lameter
      If we add a new flag so that we can distinguish between the first page
      and the tail pages, then we can avoid using page->private in the first page.
      page->private == page for the first page, so there is no real information in
      there.
      
      Freeing up page->private makes the use of compound pages more transparent.
      They become more usable like real pages.  Right now we have to be careful,
      e.g. if we go beyond PAGE_SIZE allocations in the slab on i386, because we
      can then no longer use the private field.  This is one of the issues that
      causes us not to support debugging for page-size slabs in SLAB.
      
      Having page->private available for SLUB would allow more meta information in
      the page struct.  I can probably avoid the 16 bit ints that I have in there
      right now.
      
      Also if page->private is available then a compound page may be equipped with
      buffer heads.  This may free up the way for filesystems to support larger
      blocks than page size.
      
      We add PageTail as an alias of PageReclaim.  Compound pages cannot currently
      be reclaimed.  Because of the alias one needs to check PageCompound first.
      
      The RFC for this approach was discussed at
      http://marc.info/?t=117574302800001&r=1&w=2
      
      [nacc@us.ibm.com: fix hugetlbfs]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
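
      A minimal sketch of the aliasing constraint described above; the helper
      name is invented:

        /* PageTail reuses an existing flag bit (PG_reclaim), which is safe
         * only because compound pages never sit on the reclaim lists; the
         * compound test therefore has to come first. */
        static inline int page_really_is_tail(struct page *page)
        {
                return PageCompound(page) && PageTail(page);
        }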
  27. 30 Mar 2007, 1 commit
    • [IA64] bugfix stack layout upside-down · 83d2cd3d
      Authored by KAMEZAWA Hiroyuki
      ia64 expects following vm layout:
      
      == low memory
      [register-stack grows up]
      [memory-stack grows down]
      == high memory
      
      But the code assigns the base of the register stack at the
      maximum stack size offset from the fixed address where the
      stack *might* start.  Stack randomization will result in the
      memory stack starting at a lower address than this, and if the
      user has set a low stack limit with "ulimit -s", then you can
      end up with the register stack above the memory stack (or, if
      you were very unlucky, right on top of it!).
      
      Fix: Calculate the base address for the register stack starting
      from the actual address of the memory stack.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  28. 21 Mar 2007, 1 commit
    • [IA64] min_low_pfn and max_low_pfn calculation fix · a3f5c338
      Authored by Zou Nan hai
      We have seen bad_pte_print when testing crashdump on an SN machine with
      a recent 2.6.20 kernel.  There are tons of bad pte (pfn < max_low_pfn)
      reports when the crash kernel boots up, and all of the reported bad
      pages are inside the initmem range.  That happens when the crash kernel
      code and data land at the beginning of the 1st node: build_node_maps in
      discontig.c bypasses reserved regions with filter_rsvd_memory, and
      since min_low_pfn is calculated in build_node_maps, min_low_pfn ends up
      greater than the kernel code and data.

      Because pages inside initmem are freed and reused later, we saw the
      pfn_valid check fail on those pages.

      I think this could theoretically happen on a normal kernel as well.
      When I checked the min_low_pfn and max_low_pfn calculations in contig.c
      and discontig.c, I found more issues than this one.
      
      1. The min_low_pfn and max_low_pfn calculations are inconsistent
      between contig.c and discontig.c.
      min_low_pfn is calculated as the first page number of the boot memmap
      in contig.c (why?  Though this may work most of the time, I don't think
      it is the right logic), while in discontig.c it is calculated as the
      lowest physical memory page number, bypassing reserved regions.
      max_low_pfn is calculated including reserved regions in contig.c but
      excluding them in discontig.c.
      
      2. If the kernel code and data region happens to be at the beginning or
      the end of physical memory, and the min_low_pfn and max_low_pfn
      calculation bypasses the kernel code and data, pages in initmem will be
      reported as bad.

      3. The initrd is also in reserved regions; if it is at the beginning or
      the end of physical memory, the kernel will refuse to reuse that memory
      because of the virt_addr_valid check in free_initrd_mem.
      
      So it is better to fix and clean up those issues.
      Calculate min_low_pfn and max_low_pfn in a consistent way.
      Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
      Acked-by: Jay Lan <jlan@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
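
      One consistent way to derive both bounds is a single efi_memmap_walk()
      callback shared by contig.c and discontig.c; a hedged sketch (the
      callback follows efi_freemem_callback_t, and the rounding details are
      simplified):

        static int __init
        find_max_min_low_pfn(u64 start, u64 end, void *arg)
        {
                unsigned long pfn_start = __pa(start) >> PAGE_SHIFT;
                unsigned long pfn_end   = __pa(end - 1) >> PAGE_SHIFT;

                min_low_pfn = min(min_low_pfn, pfn_start);
                max_low_pfn = max(max_low_pfn, pfn_end);
                return 0;
        }

        /* called once, early:  efi_memmap_walk(find_max_min_low_pfn, NULL); */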
  29. 12 Feb 2007, 2 commits
  30. 07 Feb 2007, 1 commit
    • [IA64] relax per-cpu TLB requirement to DTC · 00b65985
      Authored by Chen, Kenneth W
      Instead of pinning the per-cpu TLB entry into a DTR, use the DTC.  This
      frees up one TLB entry for the application, or even for the kernel if
      the access pattern to the per-cpu data area has high temporal locality.
      
      Since the per-cpu area is mapped at the top of the region 7 address
      space, we just need to add a special case in alt_dtlb_miss.  The
      physical address of the per-cpu data is already conveniently stored in
      IA64_KR(PER_CPU_DATA).  Latency for alt_dtlb_miss is not affected, as
      we can hide all the extra work; the alt_dtlb_miss handler was measured
      at 23 cycles of latency both before and after the patch.
      
      The performance effect is massive for applications that put a lot of
      TLB pressure on the CPU.  Workload environments like database online
      transaction processing, or applications using terabytes of memory,
      would benefit the most.  Measurement with an industry-standard database
      benchmark showed an upward of 1.6% gain, while smaller workloads such
      as cpu and java also show a small improvement.
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  31. 06 Feb 2007, 2 commits
    • [IA64] swiotlb bug fixes · cde14bbf
      Authored by Jan Beulich
      This patch fixes:
      - marking the I-cache clean for pages DMAed to is now done only for IA64
      - broken multiple inclusion in include/asm-x86_64/swiotlb.h
      - a missing call to mark_clean in swiotlb_sync_sg()
      - a (perhaps only theoretical) issue in swiotlb_dma_supported() when
        io_tlb_end is exactly at the end of memory
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    • [IA64] register memory ranges in a consistent manner · 139b8304
      Authored by Bob Picco
      While pursuing an unrelated issue with 64MB granules I noticed a
      problem related to inconsistent use of add_active_range.  There doesn't
      appear to be any reason why FLATMEM versus DISCONTIG_MEM should
      register memory with add_active_range using different code, so I've
      changed the code into a common implementation.
      
      The other subtle issue fixed by this patch was calling add_active_range
      in count_node_pages before granule aligning is performed.  We were
      lucky with 16MB granules but not so with 64MB granules.
      count_node_pages has reserved regions filtered out, and as a
      consequence the linked kernel text and data aren't covered by calls to
      count_node_pages, so the linked kernel regions weren't reported to
      add_active_range.  This resulted in free_initmem causing numerous
      bad_page reports.  This won't occur with this patch because now all
      known memory regions are reported by register_active_ranges.
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Bob Picco <bob.picco@hp.com>
      Acked-by: Simon Horman <horms@verge.net.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Tony Luck <tony.luck@intel.com>