1. 09 Sep 2015, 8 commits
    • mm/page_alloc.c: fix a misleading comment · 013110a7
      Authored by Yaowei Bai
      The comment says that the per-cpu batch size and zone watermarks are
      determined by present_pages, which is wrong: they are both calculated
      from managed_pages.  Fix it.
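
      As an illustration only (heavily simplified from mm/page_alloc.c; the
      real code applies further rounding and clamping), both values derive
      from managed_pages:

          /* per-cpu pagelist batch size scales with the zone's managed pages */
          batch = zone->managed_pages / 1024;

          /* each zone gets a share of min_free_kbytes proportional to its
           * managed pages, not its present pages */
          zone->watermark[WMARK_MIN] = pages_min * zone->managed_pages /
                                       lowmem_pages;
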
      Signed-off-by: Yaowei Bai <bywxiaobai@163.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: rename alloc_pages_exact_node() to __alloc_pages_node() · 96db800f
      Authored by Vlastimil Babka
      alloc_pages_exact_node() was introduced in commit 6484eb3e ("page
      allocator: do not check NUMA node ID when the caller knows the node is
      valid") as an optimized variant of alloc_pages_node() that doesn't fall
      back to the current node for nid == NUMA_NO_NODE.  Unfortunately the
      name of the function can easily suggest that the allocation is
      restricted to the given node and fails otherwise.  In truth, the node is
      only preferred, unless __GFP_THISNODE is passed among the gfp flags.
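
      A minimal usage sketch of the distinction (gfp flags chosen purely for
      illustration):

          /* preferred node only: may fall back to other nodes under pressure */
          page = __alloc_pages_node(nid, GFP_KERNEL, order);

          /* actually restricted to nid: must be requested explicitly */
          page = __alloc_pages_node(nid, GFP_KERNEL | __GFP_THISNODE, order);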
      
      The misleading name has led to mistakes in the past; see for example
      commits 5265047a ("mm, thp: really limit transparent hugepage
      allocation to local node") and b360edb4 ("mm, mempolicy:
      migrate_to_node should only migrate to node").
      
      Another issue with the name is that there's a family of
      alloc_pages_exact*() functions where 'exact' means exact size (instead
      of page order), which leads to more confusion.
      
      To prevent further mistakes, this patch effectively renames
      alloc_pages_exact_node() to __alloc_pages_node() to better convey that
      it's an optimized variant of alloc_pages_node() not intended for general
      usage.  Both functions get described in comments.
      
      Providing a true convenience function for allocations restricted to a
      node was also considered, but the prevailing opinion seems to be that
      __GFP_THISNODE already provides that functionality and we shouldn't
      duplicate the API needlessly.  The number of users would be small
      anyway.
      
      Existing callers of alloc_pages_exact_node() are simply converted to
      call __alloc_pages_node(), with the exception of sba_alloc_coherent()
      which open-codes the check for NUMA_NO_NODE, so it is converted to use
      alloc_pages_node() instead.  This means it no longer performs some
      VM_BUG_ON checks, and since the current check for nid in
      alloc_pages_node() uses a 'nid < 0' comparison (which includes
      NUMA_NO_NODE), it may hide wrong values which would previously have
      been exposed.
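
      For reference, the pre-patch check in alloc_pages_node() looks roughly
      like this (a simplified sketch, not the verbatim header):

          static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
                                                      unsigned int order)
          {
                  /* 'nid < 0' catches NUMA_NO_NODE (-1), but also silently
                   * accepts any other negative garbage value */
                  if (nid < 0)
                          nid = numa_node_id();

                  return __alloc_pages(gfp_mask, order,
                                       node_zonelist(nid, gfp_mask));
          }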
      
      Both differences will be rectified by the next patch.
      
      To sum up, this patch makes no functional changes, except temporarily
      hiding potentially buggy callers.  Restricting the checks in
      alloc_pages_node() is left for the next patch which can in turn expose
      more existing buggy callers.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Robin Holt <robinmholt@gmail.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Gleb Natapov <gleb@kernel.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Cliff Whickman <cpw@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: rename and move get/set_freepage_migratetype · bb14c2c7
      Authored by Vlastimil Babka
      The pair of get/set_freepage_migratetype() functions are used to cache
      pageblock migratetype for a page put on a pcplist, so that it does not
      have to be retrieved again when the page is put on a free list (e.g.
      when pcplists become full).  Historically it was also assumed that the
      value is accurate for pages on freelists (as the functions' names
      unfortunately suggest), but that cannot be guaranteed without affecting
      various allocator fast paths.  It is in fact not needed and all such
      uses have been removed.
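
      The caching pattern, roughly (a sketch based on the description above,
      using the new names from this patch):

          /* page enters a pcplist: cache its pageblock's migratetype */
          migratetype = get_pfnblock_migratetype(page, pfn);
          set_pcppage_migratetype(page, migratetype);

          /* pcplist is later drained to the free lists: reuse the cached
           * value instead of looking up the pageblock bits again */
          __free_one_page(page, pfn, zone, 0, get_pcppage_migratetype(page));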
      
      The last remaining (but pointless) usage related to pages on freelists
      is in move_freepages(), which this patch removes.
      
      To prevent further confusion, rename the functions to
      get/set_pcppage_migratetype() and expand their description.  Since all
      the users are now in mm/page_alloc.c, move the functions there from the
      shared header.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: David Rientjes <rientjes@google.com>
      Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Seungho Park <seungho1.park@lge.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, page_isolation: remove bogus tests for isolated pages · aa016d14
      Authored by Vlastimil Babka
      __test_page_isolated_in_pageblock() is used to verify whether all
      pages in a pageblock were either successfully isolated or are
      hwpoisoned.  Two of the tested page states are, however, bogus and
      misleading.
      
      Both tests rely on get_freepage_migratetype(page), which however has no
      guarantees about pages on freelists.  Specifically, it doesn't guarantee
      that the migratetype returned by the function actually matches the
      migratetype of the freelist that the page is on.  Such a guarantee is
      not its purpose and would have a negative impact on allocator
      performance.
      
      The first test checks whether the freepage_migratetype equals
      MIGRATE_ISOLATE, supposedly to catch races between page isolation and
      allocator activity.  These races should be fixed nowadays with
      51bb1a40 ("mm/page_alloc: add freepage on isolate pageblock to correct
      buddy list") and related patches.  As explained above, the check
      wouldn't be able to catch them reliably anyway.  For the same reason
      false positives can happen, although they are harmless, as the
      move_freepages() call would just move the page to the same freelist it's
      already on.  So removing the test is not a bug fix, just cleanup.  After
      this patch, we assume that all PageBuddy pages are on the correct
      freelist and that the races were really fixed.  A truly reliable
      verification in the form of e.g.  VM_BUG_ON() would be complicated and
      is arguably not needed.
      
      The second test (page_count(page) == 0 && get_freepage_migratetype(page)
      == MIGRATE_ISOLATE) is probably supposed (the code comes from a big
      memory isolation patch from 2007) to catch pages on MIGRATE_ISOLATE
      pcplists.  However, pcplists don't contain MIGRATE_ISOLATE freepages
      nowadays, those are freed directly to free lists, so the check is
      obsolete.  Remove it as well.
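
      In simplified form, the two removed tests looked like this
      (reconstructed from the description above, not a verbatim diff):

          /* test 1 (removed): trusted the cached migratetype of a buddy page,
           * which is not guaranteed to match its freelist */
          isolated = PageBuddy(page) &&
                     get_freepage_migratetype(page) == MIGRATE_ISOLATE;

          /* test 2 (removed): meant to catch MIGRATE_ISOLATE pages still on
           * pcplists, which no longer happens */
          isolated |= page_count(page) == 0 &&
                      get_freepage_migratetype(page) == MIGRATE_ISOLATE;
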
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Seungho Park <seungho1.park@lge.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, oom: pass an oom order of -1 when triggered by sysrq · 54e9e291
      Authored by David Rientjes
      The force_kill member of struct oom_control isn't needed if an order of -1
      is used instead.  This mirrors order == -1 in struct compact_control,
      which denotes a request for full memory compaction.
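
      A sketch of what the sysrq handler can now do (the struct is described
      in the next entry; the zonelist and gfp values here are illustrative):

          struct oom_control oc = {
                  .zonelist = node_zonelist(first_memory_node, GFP_KERNEL),
                  .nodemask = NULL,
                  .gfp_mask = GFP_KERNEL,
                  .order    = -1,    /* -1: forced kill, replaces force_kill */
          };

          out_of_memory(&oc);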
      
      This patch introduces no functional change.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, oom: organize oom context into struct · 6e0fc46d
      Authored by David Rientjes
      There are essential elements to an oom context that are passed around to
      multiple functions.
      
      Organize these elements into a new struct, struct oom_control, that
      specifies the context for an oom condition.
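
      A sketch of the struct (member list reconstructed from context; the
      force_kill member is removed again by the previous entry in favor of
      order == -1):

          struct oom_control {
                  struct zonelist *zonelist;   /* allocation context */
                  nodemask_t      *nodemask;   /* allocation constraints */
                  gfp_t            gfp_mask;   /* gfp mask of the failed allocation */
                  int              order;      /* order of the failed allocation */
                  bool             force_kill; /* e.g. forced via sysrq */
          };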
      
      This patch introduces no functional change.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_alloc.c: remove unused variable in free_area_init_core() · 7f3eb55b
      Authored by Wei Yang
      Commit febd5949 ("mm/memory hotplug: init the zone's size when
      calculating node totalpages") refines the function
      free_area_init_core().
      
      After that change, two of its parameters are no longer used.

      This patch removes them.
      Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
      Cc: Gu Zheng <guz.fnst@cn.fujitsu.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_alloc.c: refine the calculation of highest possible node id · 904a9553
      Authored by Wei Yang
      nr_node_ids records the highest possible node id, which is calculated
      by scanning the bitmap node_states[N_POSSIBLE].  The current
      implementation scans from the beginning, traversing the whole bitmap.
      
      This patch reverses the order by scanning from the end with
      find_last_bit().
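
      A sketch of the refined calculation (simplified):

          /* the highest set bit in the possible-nodes bitmap bounds the ids */
          nr_node_ids = find_last_bit(node_possible_map.bits, MAX_NUMNODES) + 1;
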
      Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
      Cc: Tejun Heo <tj@kernel.org>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 22 Aug 2015, 1 commit
    • mm: make page pfmemalloc check more robust · 2f064f34
      Authored by Michal Hocko
      Commit c48a11c7 ("netvm: propagate page->pfmemalloc to skb") added
      checks for page->pfmemalloc to __skb_fill_page_desc():
      
              if (page->pfmemalloc && !page->mapping)
                      skb->pfmemalloc = true;
      
      It assumes page->mapping == NULL implies that page->pfmemalloc can be
      trusted.  However, __delete_from_page_cache() can set page->mapping to
      NULL while leaving the page->index value alone.  Because the two share
      a union, a non-zero page->index will be interpreted as page->pfmemalloc
      being true.
      
      So the assumption is invalid if the networking code can see such a
      page, and it seems it can.  We have encountered this with an NFS over
      loopback setup where such a page is attached to a new skbuff.  There is
      no copying going on in this case, so the page confuses
      __skb_fill_page_desc(), which interprets the index as the pfmemalloc
      flag, and the network stack drops packets that have been allocated
      using the reserves unless they are to be queued on sockets handling
      swapping, which is not the case here.  That leads to hangs, as the NFS
      client waits for a response from the server which has been dropped and
      thus never arrives.
      
      The struct page is already heavily packed, so rather than finding
      another hole to put the flag in, let's use a trick instead.  We can
      reuse the index field but set it to an impossible value (-1UL); this is
      a page index, so it should never legitimately take a value that large.
      Replace all direct users of page->pfmemalloc with page_is_pfmemalloc(),
      which will hide this nastiness from unspoiled eyes.
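
      A sketch of the trick (helper shapes assumed; page_is_pfmemalloc() is
      the accessor added by this patch):

          static inline void set_page_pfmemalloc(struct page *page)
          {
                  page->index = -1UL;     /* impossible value for a page index */
          }

          static inline bool page_is_pfmemalloc(struct page *page)
          {
                  /* page->index shares a union with page->pfmemalloc, so an
                   * impossible index unambiguously encodes the flag */
                  return page->index == -1UL;
          }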
      
      The information is obviously lost if somebody then wants to use
      page->index, but that was the case before as well, and the original
      code expected the information to be persisted somewhere else if it is
      really needed (which is what SLAB and SLUB do).
      
      [akpm@linux-foundation.org: fix blooper in slub]
      Fixes: c48a11c7 ("netvm: propagate page->pfmemalloc to skb")
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Debugged-by: Vlastimil Babka <vbabka@suse.com>
      Debugged-by: Jiri Bohac <jbohac@suse.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: <stable@vger.kernel.org>	[3.6+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 15 Aug 2015, 1 commit
  4. 07 Aug 2015, 4 commits
    • mm: check __PG_HWPOISON separately from PAGE_FLAGS_CHECK_AT_* · f4c18e6f
      Authored by Naoya Horiguchi
      The race condition addressed in commit add05cec ("mm: soft-offline:
      don't free target page in successful page migration") was not closed
      completely, because that can happen not only for soft-offline, but also
      for hard-offline.  Consider that a slab page is about to be freed into
      the buddy pool, and then an uncorrected memory error hits the page just
      after entering __free_one_page(), then VM_BUG_ON_PAGE(page->flags &
      PAGE_FLAGS_CHECK_AT_PREP) is triggered, despite the fact that it's not
      necessary because the data on the affected page is not consumed.
      
      To solve it, this patch drops __PG_HWPOISON from the page flag checks
      at allocation/free time.  I think this is justified because the
      __PG_HWPOISON flag is defined to prevent the page from being reused,
      and setting it outside the page's alloc-free cycle is designed
      behavior (not a bug).
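
      Conceptually (illustrative, not the exact diff), the assertion quoted
      above stops covering the poison bit:

          /* __PG_HWPOISON is no longer part of PAGE_FLAGS_CHECK_AT_PREP, so a
           * page that is hard-offlined while sitting in the free path no
           * longer trips this check */
          VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);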
      
      For recent months, I was annoyed by a BUG_ON firing when a
      soft-offlined page remained on an LRU cache list for a while; that is
      avoided by calling put_page() instead of putback_lru_page() in page
      migration's success path.  This means that this patch reverts a major
      change from commit add05cec regarding the new refcounting rule of
      soft-offlined pages, so the "reuse window" revives.  This will be
      closed by a subsequent patch.
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Dean Nelson <dnelson@redhat.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fs, file table: reinit files_stat.max_files after deferred memory initialisation · 4248b0da
      Authored by Mel Gorman
      Dave Hansen reported the following:
      
      	My laptop has been behaving strangely with 4.2-rc2.  Once I log
      	in to my X session, I start getting all kinds of strange errors
      	from applications and see this in my dmesg:
      
              	VFS: file-max limit 8192 reached
      
      The problem is that file-max is calculated before memory is fully
      initialised, so it miscalculates how much memory the kernel is using.
      This patch recalculates file-max after deferred memory initialisation.
      Note that using the memory hotplug infrastructure would not have
      avoided this problem, as the value is not recalculated after memory
      hot-add.
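
      The shape of the fix, as a sketch (files_maxfiles_init is the
      recalculation added here; hooking it into page_alloc_init_late() is
      assumed from the description):

          void __init page_alloc_init_late(void)
          {
                  /* ... deferred struct page initialisation completes ... */

                  /* free memory is finally known; recompute the cap */
                  files_maxfiles_init();
          }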
      
      4.1:             files_stat.max_files = 6582781
      4.2-rc2:         files_stat.max_files = 8192
      4.2-rc2 patched: files_stat.max_files = 6562467
      
      There are small differences between 4.1 and the patched kernel, but not
      enough to matter.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reported-by: Dave Hansen <dave.hansen@intel.com>
      Cc: Nicolai Stange <nicstange@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Alex Ng <alexng@microsoft.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, meminit: replace rwsem with completion · d3cd131d
      Authored by Nicolai Stange
      Commit 0e1cc95b ("mm: meminit: finish initialisation of struct pages
      before basic setup") introduced a rwsem to signal completion of the
      initialization workers.
      
      Lockdep complains about possible recursive locking:
        =============================================
        [ INFO: possible recursive locking detected ]
        4.1.0-12802-g1dc51b82 #3 Not tainted
        ---------------------------------------------
        swapper/0/1 is trying to acquire lock:
        (pgdat_init_rwsem){++++.+},
          at: [<ffffffff8424c7fb>] page_alloc_init_late+0xc7/0xe6
      
        but task is already holding lock:
        (pgdat_init_rwsem){++++.+},
          at: [<ffffffff8424c772>] page_alloc_init_late+0x3e/0xe6
      
      Replace the rwsem by a completion together with an atomic
      "outstanding work counter".
      
      [peterz@infradead.org: Barrier removal on the grounds of being pointless]
      [mgorman@suse.de: Applied review feedback]
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Alex Ng <alexng@microsoft.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, meminit: allow early_pfn_to_nid to be used during runtime · 7ace9917
      Authored by Mel Gorman
      early_pfn_to_nid() was historically not SMP-safe, but it was only used
      during boot, which is inherently single-threaded, or during hotplug,
      which is protected by a giant mutex.
      
      With deferred memory initialisation a thread-safe version was
      introduced, and early_pfn_to_nid() would trigger a BUG_ON() if used
      unsafely.  Memory hotplug hit that check.  This patch introduces a
      lock in early_pfn_to_nid() to make it safe to use during hotplug.
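
      A sketch of the result (lock and cache names assumed):

          static DEFINE_SPINLOCK(early_pfn_lock);

          int __meminit early_pfn_to_nid(unsigned long pfn)
          {
                  int nid;

                  spin_lock(&early_pfn_lock);
                  nid = __early_pfn_to_nid(pfn, &early_pfnnid_cache);
                  if (nid < 0)
                          nid = 0;
                  spin_unlock(&early_pfn_lock);

                  return nid;
          }
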
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reported-by: Alex Ng <alexng@microsoft.com>
      Tested-by: Alex Ng <alexng@microsoft.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Nicolai Stange <nicstange@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 18 Jul 2015, 3 commits
    • mm/page_owner: set correct gfp_mask on page_owner · e2cfc911
      Authored by Joonsoo Kim
      Currently, we set the wrong gfp_mask in the page_owner info for
      freepages isolated by compaction and for split pages.  This causes the
      mixed-pageblock report we get from '/proc/pagetypeinfo' to be
      incorrect.  This metric is really useful for measuring the
      fragmentation effect, so it should be accurate.  This patch fixes that
      by recording the correct information.
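
      The shape of the fix (simplified; the call follows the page_owner API,
      while the surrounding context is assumed):

          /* record the allocation's real gfp_mask in the owner info */
          set_page_owner(page, order, gfp_mask);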
      
      Without this patch, after a kernel build workload finishes, the number
      of mixed pageblocks is 112 out of roughly 210 movable pageblocks.

      But with this fix, the output shows just 57 mixed pageblocks.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_owner: fix possible access violation · f3a14ced
      Authored by Joonsoo Kim
      When I tested my new patches, I found that the page pointer used for
      setting page_owner information gets changed.  This is because the page
      pointer is reused to set a new migratetype in a loop, after which it
      can point out of bounds.  If this wrong pointer is used for
      page_owner, an access violation happens.  Below is the error message
      I got.
      
        BUG: unable to handle kernel paging request at 0000000000b00018
        IP: [<ffffffff81025f30>] save_stack_address+0x30/0x40
        PGD 1af2d067 PUD 166e0067 PMD 0
        Oops: 0002 [#1] SMP
        ...snip...
        Call Trace:
          print_context_stack+0xcf/0x100
          dump_trace+0x15f/0x320
          save_stack_trace+0x2f/0x50
          __set_page_owner+0x46/0x70
          __isolate_free_page+0x1f7/0x210
          split_free_page+0x21/0xb0
          isolate_freepages_block+0x1e2/0x410
          compaction_alloc+0x22d/0x2d0
          migrate_pages+0x289/0x8b0
          compact_zone+0x409/0x880
          compact_zone_order+0x6d/0x90
          try_to_compact_pages+0x110/0x210
          __alloc_pages_direct_compact+0x3d/0xe6
          __alloc_pages_nodemask+0x6cd/0x9a0
          alloc_pages_current+0x91/0x100
          runtest_store+0x296/0xa50
          simple_attr_write+0xbd/0xe0
          __vfs_write+0x28/0xf0
          vfs_write+0xa9/0x1b0
          SyS_write+0x46/0xb0
          system_call_fastpath+0x16/0x75
      
      This patch fixes the error by moving the set_page_owner() call up,
      before the pointer is advanced.
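
      The shape of the fix, as a sketch (loop simplified; the gfp mask shown
      is illustrative):

          /* snapshot the owner info while 'page' still points at the start
           * of the isolated block... */
          set_page_owner(page, order, __GFP_MOVABLE);

          /* ...because this loop advances the pointer, possibly past the
           * block's last page */
          for (; page < endpage; page += pageblock_nr_pages)
                  set_pageblock_migratetype(page, MIGRATE_MOVABLE);
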
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, meminit: suppress unused memory variable warning · ae026b2a
      Authored by Mel Gorman
      The kbuild test robot reported the following
      
        tree:   git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
        head:   14a6f198
        commit: 3b242c66 x86: mm: enable deferred struct page initialisation on x86-64
        date:   3 days ago
        config: x86_64-randconfig-x006-201527 (attached as .config)
        reproduce:
          git checkout 3b242c66
          # save the attached .config to linux build tree
          make ARCH=x86_64
      
        All warnings (new ones prefixed by >>):
      
           mm/page_alloc.c: In function 'early_page_uninitialised':
        >> mm/page_alloc.c:247:6: warning: unused variable 'nid' [-Wunused-variable]
             int nid = early_pfn_to_nid(pfn);
      
      It's due to the NODE_DATA() macro ignoring its nid parameter on !NUMA
      configurations.  This patch avoids the warning by not declaring nid.
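
      A sketch of the fix (reconstructed from the description;
      first_deferred_pfn is assumed to be the deferred meminit cursor):

          static inline bool early_page_uninitialised(unsigned long pfn)
          {
                  /* feed the lookup straight into NODE_DATA(): no 'nid' local
                   * is left unused when NODE_DATA() ignores its argument */
                  if (pfn >= NODE_DATA(early_pfn_to_nid(pfn))->first_deferred_pfn)
                          return true;

                  return false;
          }
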
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reported-by: Wu Fengguang <fengguang.wu@intel.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 01 Jul 2015, 12 commits
  7. 25 Jun 2015, 5 commits
  8. 12 May 2015, 1 commit
    • mm/net: Rename and move page fragment handling from net/ to mm/ · b63ae8ca
      Authored by Alexander Duyck
      This change moves the __alloc_page_frag functionality out of the
      networking stack and into the page allocation portion of mm.  The idea
      is to help make this maintainable by placing it with other page
      allocation functions.
      
      Since we are moving it from skbuff.c to page_alloc.c I have also renamed
      the basic defines and structure from netdev_alloc_cache to page_frag_cache
      to reflect that this is now part of a different kernel subsystem.
      
      I have also added a simple __free_page_frag function which can handle
      freeing the frags based on the skb->head pointer.  The model for this
      is based on __free_pages, since we don't actually need to deal with
      all of the cases that put_page handles.  I incorporated the
      virt_to_head_page call and compound_order into the function, as it
      actually allows for a significant size reduction by reducing code
      duplication.
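
      The resulting helper, roughly (reconstructed from the description;
      modeled on __free_pages as noted above):

          void __free_page_frag(void *addr)
          {
                  struct page *page = virt_to_head_page(addr);

                  if (unlikely(put_page_testzero(page)))
                          __free_pages_ok(page, compound_order(page));
          }
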
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 16 Apr 2015, 1 commit
  10. 15 Apr 2015, 4 commits