- 27 Oct 2010, 1 commit
-
-
By KAMEZAWA Hiroyuki

scan_lru_pages() returns a pfn, so its type should be "unsigned long", not "int".

Note: I guess this has worked until now only because memory-hotplug testers' machines have not had very big memory, i.e. physical address < 32bit << PAGE_SHIFT.

Reported-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
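
As a rough userspace illustration of the truncation (a sketch, not the kernel code; it assumes a 64-bit build and 4 KiB pages, PAGE_SHIFT = 12): once a physical address reaches 8 TiB, its pfn is 2^31 and no longer fits in a signed int.

    #include <stdio.h>

    int main(void)
    {
            /* pfn of an 8 TiB physical address: 2^43 >> 12 = 2^31 */
            unsigned long pfn = 0x80000000000UL >> 12;
            int as_int = (int)pfn;  /* wraps: typically -2147483648 */

            printf("pfn = %lu, as int = %d\n", pfn, as_int);
            return 0;
    }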
-
- 25 Oct 2010, 1 commit
-
-
By Martin Schwidefsky

Improve performance of the sske operation by using the nonquiescing variant if the affected page has no mappings established. On machines without support for the new sske variant the mask bit will be ignored.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 24 Oct 2010, 1 commit
-
-
By Xiao Guangrong

This function is used by KVM to pin a process's page in atomic context. Define it as a 'weak' function so that architectures that do not support it still get a safe fallback.

Acked-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
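
The "weak" pattern referred to here looks roughly like the sketch below (assuming, per the description, that the symbol is __get_user_pages_fast; the fallback body is minimal): the generic version pins nothing and returns 0, and an architecture with a real fast-GUP path overrides it simply by defining a strong symbol with the same signature.

    /*
     * Generic fallback: pins 0 pages, so callers notice and take the
     * non-atomic slow path on architectures without fast GUP.
     */
    int __attribute__((weak)) __get_user_pages_fast(unsigned long start,
                                    int nr_pages, int write,
                                    struct page **pages)
    {
            return 0;
    }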
-
- 19 Oct 2010, 1 commit
-
-
By Tejun Heo

lru_add_drain_all() uses schedule_on_each_cpu(), which is synchronous. There is no reason to call flush_scheduled_work() after lru_add_drain_all(), so drop the spurious calls. This prepares for the deprecation and removal of flush_scheduled_work().

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
-
- 12 Oct 2010, 2 commits
-
-
By Yinghai Lu

Stephen found:

    WARNING: mm/built-in.o(.text+0x25ab8): Section mismatch in reference from the function memblock_find_base() to the function .init.text:memblock_find_region()
    The function memblock_find_base() references the function __init memblock_find_region().
    This is often because memblock_find_base lacks a __init annotation or the annotation of memblock_find_region is wrong.

So let memblock_find_region() use __init_memblock instead of __init directly. Also fix one function that did not have __init* at all to be __init_memblock.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <4CB366B1.40405@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
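
The rule being enforced is sketched below (argument lists illustrative): code that can run after boot must not reference a symbol placed in the discarded .init.text section, so caller and callee need compatible section annotations.

    /* lives in .init.text, freed after boot */
    static unsigned long __init memblock_find_region(unsigned long start,
                    unsigned long end, unsigned long size, unsigned long align);

    /*
     * May also run post-boot (e.g. for memory hotplug), so it is not
     * plain __init; left unannotated, its call into the function above
     * is a section mismatch. Marking both __init_memblock keeps their
     * lifetimes consistent.
     */
    static unsigned long __init_memblock memblock_find_base(unsigned long size,
                    unsigned long align, unsigned long start, unsigned long end);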
-
By Jeremy Fitzhardinge

The Xen setup code needs to call memblock_x86_reserve_range() very early, so allow it to initialize the memblock subsystem before doing so. The second memblock_init() is then ignored.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
LKML-Reference: <4CACFDAD.3090900@goop.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
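
A minimal sketch of how the second call can be made a no-op (the guard variable is hypothetical):

    void __init memblock_init(void)
    {
            static int initialized;

            if (initialized)
                    return;         /* Xen already initialized memblock */
            initialized = 1;

            /* ... one-time setup of the memory/reserved region arrays ... */
    }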
-
- 11 Oct 2010, 1 commit
-
-
By Andi Kleen

31-bit s390 doesn't have huge pages and failed with:

    mm/migrate.c: In function 'remove_migration_pte':
    mm/migrate.c:143:3: error: implicit declaration of function 'pte_mkhuge'
    mm/migrate.c:143:7: error: incompatible types when assigning to type 'pte_t' from type 'int'

Put that code inside an #ifdef.

Reported-by: Heiko Carstens
Signed-off-by: Andi Kleen <ak@linux.intel.com>
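
The shape of the fix, as a reduced sketch of remove_migration_pte(): the hugepage branch only compiles when the architecture actually provides pte_mkhuge(), i.e. under CONFIG_HUGETLB_PAGE.

    pte = pte_mkold(mk_pte(new, vma->vm_page_prot));
    if (is_write_migration_entry(entry))
            pte = pte_mkwrite(pte);
    #ifdef CONFIG_HUGETLB_PAGE
    if (PageHuge(new))
            pte = pte_mkhuge(pte);  /* undefined on 31-bit s390 otherwise */
    #endif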
-
- 08 Oct 2010, 19 commits
-
-
By Andi Kleen

We don't retry in other temporary failure cases, and there were no reports of retries happening. I think the original reason it was added was also just an early bug, not an observation of the race. So remove the loop for now, but keep a warning message.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Andi Kleen

The addr_valid flag is the only flag in "to_kill", and it's slightly more efficient to have it as a char instead of a bitfield.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Andi Kleen

Now that only a few obscure messages are left as pr_debug, disable the output of pr_debug in memory-failure.c by default.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Andi Kleen

Convert a lot of pr_debugs in memory-failure.c that are generally useful to pr_info. It's reasonable to print at least one message saying why offlining succeeded or failed by default.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Andi Kleen

Clean up and improve the overview comment in memory-failure.c. Tidy some grammar issues in other comments.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Andi Kleen

This fixes a problem introduced with the hugetlb hwpoison handling: the user space SIGBUS signalling wants to know the size of the hugepage that caused a HWPOISON fault. Unfortunately the architecture page fault handlers do not have easy access to the struct page, so pass the information out in the fault error code instead.

I added a separate VM_FAULT_HWPOISON_LARGE bit for this case and encode the hpage index in some free upper bits of the fault code. The small-page hwpoison case stays with the VM_FAULT_HWPOISON name to minimize changes.

Also add code to hugetlb.h to convert that index into a page shift. It will be used in a later patch.

Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: fengguang.wu@intel.com
Signed-off-by: Andi Kleen <ak@linux.intel.com>
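
One way to encode this, sketched with illustrative bit positions (not necessarily the exact kernel values): the hstate index rides in free upper bits of the VM_FAULT_* code and is turned back into a page shift on the signalling side.

    #define VM_FAULT_HWPOISON_LARGE 0x0020          /* poisoned huge page */
    #define VM_FAULT_SET_HINDEX(x)  ((x) << 12)     /* stash hstate index */
    #define VM_FAULT_GET_HINDEX(x)  (((x) >> 12) & 0xf)

    /* fault side: idx is the hstate index of the poisoned hugepage */
    unsigned int fault = VM_FAULT_HWPOISON_LARGE | VM_FAULT_SET_HINDEX(idx);

    /* signal side: recover the granularity for siginfo's si_addr_lsb */
    unsigned int lsb = hstate_index_to_shift(VM_FAULT_GET_HINDEX(fault));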
-
By Andi Kleen

Fixes a warning reported by Stephen Rothwell:

    mm/hugetlb.c:2950: warning: 'is_hugepage_on_freelist' defined but not used

for the !CONFIG_MEMORY_FAILURE case.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Andi Kleen

Linus asked for a cleanup of __page_set_anon_rmap to make it look more like the cleaner huge pages version. Factor out the duplicated PageAnon check into a single check at the beginning of the function. Remove obsolete comments and rewrite them into standard English.

No functional changes.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Naoya Horiguchi

Currently unpoisoning hugepages doesn't work correctly, because the clearing of PG_HWPoison is done outside the if (TestClearPageHWPoison) block. This patch fixes it.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
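
The bug has roughly this shape (a simplified sketch; the counter name is illustrative): the flag clear must sit inside the test-and-clear branch, otherwise it runs even when the atomic test failed.

    /* broken: the clear runs unconditionally */
    if (TestClearPageHWPoison(hpage))
            atomic_long_sub(nr_pages, &mce_bad_pages);
    ClearPageHWPoison(page);

    /* fixed: only clear once the test-and-clear succeeded */
    if (TestClearPageHWPoison(hpage)) {
            atomic_long_sub(nr_pages, &mce_bad_pages);
            ClearPageHWPoison(page);
    }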
-
By Naoya Horiguchi

This patch extends the soft offlining framework to support hugepages. When corrected memory errors occur repeatedly on a hugepage, we can choose to stop using it by migrating the data onto another hugepage and disabling the original (possibly half-broken) one.

ChangeLog since v4:
- branch soft_offline_page() for hugepage

ChangeLog since v3:
- remove comment about "ToDo: hugepage soft-offline"

ChangeLog since v2:
- move refcount handling into isolate_lru_page()

ChangeLog since v1:
- add double check in isolating hwpoisoned hugepage
- define free/non-free checker for hugepage
- postpone calling put_page() for hugepage in soft_offline_page()

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Naoya Horiguchi

Currently error recovery for free hugepages works only for MF_COUNT_INCREASED. This patch enables the !MF_COUNT_INCREASED case.

Free hugepages can be handled directly by alloc_huge_page() and dequeue_hwpoisoned_huge_page(), and both of them are protected by hugetlb_lock, so there is no race between them.

Note that this patch defines the refcount of a HWPoisoned hugepage dequeued from the freelist to be 1, deviating from the present 0, so that we can avoid a race between unpoison and memory failure on a free hugepage. This is reasonable because, unlike free buddy pages, a free hugepage is governed by hugetlbfs even after error handling finishes. It also makes the unpoison code added in a later patch cleaner.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Naoya Horiguchi

Currently alloc_huge_page() raises the page refcount outside hugetlb_lock, but this causes a race when dequeue_hwpoisoned_huge_page() runs concurrently with alloc_huge_page(). To avoid it, this patch moves set_page_refcounted() inside hugetlb_lock.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
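
Conceptually the fix moves the refcount initialization under the lock, roughly as below (the dequeue helper is a hypothetical condensation):

    spin_lock(&hugetlb_lock);
    page = dequeue_huge_page();         /* hypothetical condensed helper */
    if (page)
            set_page_refcounted(page);  /* now atomic wrt the hwpoison dequeue */
    spin_unlock(&hugetlb_lock);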
-
By Naoya Horiguchi

This check is necessary to avoid a race between dequeue and allocation, which can cause a free hugepage to be dequeued twice and make the kernel unstable.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Naoya Horiguchi

This patch extends the page migration code to support hugepage migration. One of the potential users of this feature is soft offlining, which is triggered by corrected memory errors (added by the next patch).

Todo:
- there are other users of page migration, such as memory policy, memory hotplug and memory compaction. They are not ready for hugepage support for now.

ChangeLog since v4:
- define migrate_huge_pages()
- remove changes on isolation/putback_lru_page()

ChangeLog since v2:
- refactor isolate/putback_lru_page() to handle hugepage
- add comment about race on unmap_and_move_huge_page()

ChangeLog since v1:
- divide migration code path for hugepage
- define routine checking migration swap entry for hugetlb
- replace "goto" with "if/else" in remove_migration_pte()

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Naoya Horiguchi

This patch modifies the hugepage copy functions to take only the destination and source hugepages as arguments, for later use. The old ones are renamed from copy_{gigantic,huge}_page() to copy_user_{gigantic,huge}_page(). This naming convention is consistent with that between copy_highpage() and copy_user_highpage().

ChangeLog since v4:
- add blank line between local declaration and code
- remove unnecessary might_sleep()

ChangeLog since v2:
- change copy_huge_page() from a macro to an inline dummy function to avoid a compile warning when !CONFIG_HUGETLB_PAGE

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Naoya Horiguchi

We can't use the existing hugepage allocation functions to allocate a hugepage for page migration, because page migration can happen asynchronously with respect to the running processes, and page migration users should call the allocation function with physical addresses (not virtual addresses) as arguments.

ChangeLog since v3:
- unify alloc_buddy_huge_page() and alloc_buddy_huge_page_node()

ChangeLog since v2:
- remove unnecessary get/put_mems_allowed() (thanks to David Rientjes)

ChangeLog since v1:
- add comment on top of alloc_huge_page_no_vma()

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Reviewed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Naoya Horiguchi

Since the PageHWPoison() check is there to keep a hwpoisoned page left in the pagecache from being mapped into a process, it should be done in the "found in pagecache" branch, not in the common path. Otherwise, metadata corruption occurs if a memory failure happens between alloc_huge_page() and lock_page(), because the page fault fails while leaving metadata changes behind (such as refcount, mapcount, etc.). This patch moves the check to the "found in pagecache" branch and fixes the problem.

ChangeLog since v2:
- remove retry check in "new allocation" path
- make description more detailed
- change patch name from "HWPOISON, hugetlb: move PG_HWPoison bit check"

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
Reviewed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Kirill A. Shutemov

We need to check the parent's thresholds if the parent has use_hierarchy == 1, to be sure that the parent's threshold events will be triggered even if the parent itself is not active (no MEM_CGROUP_EVENTS).

Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
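
A sketch of the hierarchical walk this implies (helper names as in the memcg code of that era; treat them as illustrative): after checking the group itself, climb through the ancestors so their thresholds fire too.

    static void mem_cgroup_threshold(struct mem_cgroup *memcg)
    {
            /* check this group, then every hierarchical ancestor */
            for (; memcg; memcg = parent_mem_cgroup(memcg)) {
                    __mem_cgroup_threshold(memcg, false);        /* memory */
                    if (do_swap_account)
                            __mem_cgroup_threshold(memcg, true); /* memsw */
            }
    }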
-
By Robin Holt

During boot of a 16TB system, the following is printed:

    Dentry cache hash table entries: -2147483648 (order: 22, 17179869184 bytes)

Signed-off-by: Robin Holt <holt@sgi.com>
Reviewed-by: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
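
The negative count points at a signed 32-bit overflow in the printout, not in the table itself: order 22 of 4 KiB pages is 2^34 bytes, and at 8 bytes per entry that is exactly 2^31 entries. A userspace illustration, assuming a 64-bit build:

    #include <stdio.h>

    int main(void)
    {
            unsigned long entries = 1UL << 31;      /* 2^31 hash entries */

            printf("entries: %d\n", (int)entries);  /* -2147483648: the bug */
            printf("entries: %lu\n", entries);      /* 2147483648 */
            return 0;
    }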
-
- 07 Oct 2010, 3 commits
-
-
By Andi Kleen

When we call the slab shrinker to free a page, we need to stop at a page count of one, because the caller always holds a single reference, not zero. This avoids useless looping over the slab shrinkers and freeing too much memory.

Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Andi Kleen

The SIGBUS user space signalling is supposed to report the address granularity of a corruption. Pass this information correctly for huge pages by querying the hpage order.

Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
-
By Pekka Enberg

This patch fixes the following build breakage when memory hotplug is enabled on UMA configurations:

    /home/test/linux-2.6/mm/slub.c: In function 'kmem_cache_init':
    /home/test/linux-2.6/mm/slub.c:3031:2: error: 'slab_memory_callback' undeclared (first use in this function)
    /home/test/linux-2.6/mm/slub.c:3031:2: note: each undeclared identifier is reported only once for each function it appears in
    make[2]: *** [mm/slub.o] Error 1
    make[1]: *** [mm] Error 2
    make: *** [sub-make] Error 2

Reported-by: Zimny Lech <napohybelskurwysynom2010@gmail.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
- 06 Oct 2010, 4 commits
-
-
By Christoph Lameter

There are a lot of #ifdef/#endif pairs that can be avoided if functions are placed differently. Move them around and reduce the #ifdefs.

Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
By Christoph Lameter

Currently, disabling CONFIG_SLUB_DEBUG also disables SYSFS support, meaning that the slabs cannot be tuned without DEBUG. Make SYSFS support independent of CONFIG_SLUB_DEBUG.

Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
By Pekka Enberg

This patch optimizes the slab_free() debug check to use "c->node != NUMA_NO_NODE" instead of "c->node >= 0", because the former generates smaller code on x86-64.

Before:

    4736:  48 39 70 08             cmp    %rsi,0x8(%rax)
    473a:  75 26                   jne    4762 <kfree+0xa2>
    473c:  44 8b 48 10             mov    0x10(%rax),%r9d
    4740:  45 85 c9                test   %r9d,%r9d
    4743:  78 1d                   js     4762 <kfree+0xa2>

After:

    4736:  48 39 70 08             cmp    %rsi,0x8(%rax)
    473a:  75 23                   jne    475f <kfree+0x9f>
    473c:  83 78 10 ff             cmpl   $0xffffffffffffffff,0x10(%rax)
    4740:  74 1d                   je     475f <kfree+0x9f>

This patch also cleans up __slab_alloc() to use NUMA_NO_NODE instead of "-1" for enabling debugging for a per-CPU cache.

Acked-by: Christoph Lameter <cl@linux.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
By Yinghai Lu

When trying to find a huge range for the crashkernel, we get:

    [    0.000000] ------------[ cut here ]------------
    [    0.000000] WARNING: at arch/x86/mm/memblock.c:248 memblock_x86_reserve_range+0x40/0x7a()
    [    0.000000] Hardware name: Sun Fire x4800
    [    0.000000] memblock_x86_reserve_range: wrong range [0xffffffff37000000, 0x137000000)
    [    0.000000] Modules linked in:
    [    0.000000] Pid: 0, comm: swapper Not tainted 2.6.36-rc5-tip-yh-01876-g1cac214-dirty #59
    [    0.000000] Call Trace:
    [    0.000000]  [<ffffffff82816f7e>] ? memblock_x86_reserve_range+0x40/0x7a
    [    0.000000]  [<ffffffff81078c2d>] warn_slowpath_common+0x85/0x9e
    [    0.000000]  [<ffffffff81078d38>] warn_slowpath_fmt+0x6e/0x70
    [    0.000000]  [<ffffffff8281e77c>] ? memblock_find_region+0x40/0x78
    [    0.000000]  [<ffffffff8281eb1f>] ? memblock_find_base+0x9a/0xb9
    [    0.000000]  [<ffffffff82816f7e>] memblock_x86_reserve_range+0x40/0x7a
    [    0.000000]  [<ffffffff8280452c>] setup_arch+0x99d/0xb2a
    [    0.000000]  [<ffffffff810a3e02>] ? trace_hardirqs_off+0xd/0xf
    [    0.000000]  [<ffffffff81cec7d8>] ? _raw_spin_unlock_irqrestore+0x3d/0x4c
    [    0.000000]  [<ffffffff827ffcec>] start_kernel+0xde/0x3f1
    [    0.000000]  [<ffffffff827ff2d4>] x86_64_start_reservations+0xa0/0xa4
    [    0.000000]  [<ffffffff827ff3de>] x86_64_start_kernel+0x106/0x10d
    [    0.000000] ---[ end trace a7919e7f17c0a725 ]---
    [    0.000000] Reserving 8192MB of memory at 17592186041200MB for crashkernel (System RAM: 526336MB)

This is caused by a wraparound in the test due to size > end; explicitly check for this condition and fail.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <4CAA4DD3.1080401@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
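
The failure mode and the added check, in a reduced sketch (names are illustrative): subtracting an 8 GiB size from a ~5 GiB end wraps around in unsigned arithmetic, producing exactly the bogus 0xffffffff37000000 base seen in the warning above.

    #define FIND_BASE_FAILED (~0UL)   /* hypothetical error marker */

    static unsigned long find_base_sketch(unsigned long end, unsigned long size)
    {
            if (size > end)           /* would wrap below zero */
                    return FIND_BASE_FAILED;
            return end - size;        /* safe only after the check */
    }

    /* e.g. end = 0x137000000, size = 0x200000000 (8 GiB):
     * end - size would wrap to 0xffffffff37000000 */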
-
- 05 Oct 2010, 2 commits
-
-
By Hugh Dickins

Building under memory pressure, with KSM on 2.6.36-rc5, collapsed with an internal compiler error: typically indicating an error in swapping. Perhaps there's a timing issue which makes it more likely now, perhaps it's just a long time since I tried for so long: this bug goes back to KSM swapping in 2.6.33.

Notice how reuse_swap_page() allows an exclusive page to be reused, but only does SetPageDirty if it can delete it from swap cache right then; if it's currently under Writeback, it has to be left in cache and we don't SetPageDirty, but the page can still be reused. Fine, the dirty bit will get set in the pte; but notice how zap_pte_range() does not bother to transfer pte_dirty to page_dirty when unmapping a PageAnon.

If KSM chooses to share such a page, it will look like a clean copy of swapcache and not be written out to swap when its memory is needed; then stale data is read back from swap when it's needed again. We could fix this in reuse_swap_page() (or even refuse to reuse a page under writeback), but it's more honest to fix my oversight in KSM's write_protect_page().

Several days of testing on three machines confirms that this fixes the issue they showed.

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
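
The fix in write_protect_page() has roughly this shape (a condensed sketch of the sequence described above): move the pte's dirty bit to the struct page before the pte is cleaned and write-protected.

    entry = ptep_clear_flush(vma, addr, ptep);
    if (pte_dirty(entry))
            set_page_dirty(page);   /* the previously-missing transfer */
    entry = pte_mkclean(pte_wrprotect(entry));
    set_pte_at_notify(mm, addr, ptep, entry);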
-
By Hugh Dickins

2.6.36-rc1 commit 21d0d443 "rmap: resurrect page_address_in_vma anon_vma check" was right to resurrect that check; but now that it's comparing anon_vma->roots instead of just anon_vmas, there's a danger of oopsing on a NULL anon_vma.

In most cases no NULL anon_vma ever gets here; but it turns out that occasionally KSM, when enabled on a forked or forking process, will itself call page_address_in_vma() on a "half-KSM" page left over from an earlier failed attempt to merge, whose page_anon_vma() is NULL.

It's my bug that those should be getting here at all: I thought they were already dealt with, this oops proves me wrong, and I'll fix it in the next release; such pages are effectively pinned until their process exits, since rmap cannot find their ptes (though swapoff can). For now just work around it by making page_address_in_vma() safe (and add a comment on why that check is wanted anyway). A similar check in __page_check_anon_rmap() is safe because do_page_add_anon_rmap() already excluded KSM pages.

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
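
The workaround amounts to treating a NULL page_anon_vma() as "not found"; a sketch of the check in page_address_in_vma():

    if (PageAnon(page)) {
            struct anon_vma *page__anon_vma = page_anon_vma(page);
            /* NULL for a half-KSM page: bail out instead of oopsing */
            if (!vma->anon_vma || !page__anon_vma ||
                vma->anon_vma->root != page__anon_vma->root)
                    return -EFAULT;
    }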
-
- 02 Oct 2010, 5 commits
-
-
By Namhyung Kim

Make kmalloc_cache_alloc_node_notrace(), kmalloc_large_node() and __kmalloc_node_track_caller() compile only when CONFIG_NUMA is selected.

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-
By Namhyung Kim

unfreeze_slab() releases the page's PG_locked bit but was missing the proper annotation. deactivate_slab() needs to be marked as well, since it calls unfreeze_slab() without grabbing the lock.

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
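
The annotations look roughly like this (a sketch; the lock "name" handed to sparse is arbitrary): __releases() tells sparse the function exits with the slab's PG_locked bit dropped.

    static void unfreeze_slab(struct kmem_cache *s, struct page *page, int tail)
            __releases(bitlock)
    {
            /* ... ends with slab_unlock(page), releasing PG_locked ... */
    }

    static void deactivate_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
            __releases(bitlock)
    {
            /* ... calls unfreeze_slab(), so it too exits unlocked ... */
    }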
-
By Namhyung Kim

The bit-ops routines require their argument to be a pointer to unsigned long. This leads sparse to complain about a signedness mismatch:

    mm/slub.c:2425:49: warning: incorrect type in argument 2 (different signedness)
    mm/slub.c:2425:49:    expected unsigned long volatile *addr
    mm/slub.c:2425:49:    got long *map

Acked-by: Christoph Lameter <cl@linux.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
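
The fix is just a declaration change, sketched below (allocation details illustrative): the map handed to the bit-ops must be unsigned long based.

    /* before: sparse warns, set_bit() expects "volatile unsigned long *" */
    long *map = kmalloc(BITS_TO_LONGS(page->objects) * sizeof(long),
                        GFP_ATOMIC);

    /* after: matching signedness, sparse is quiet */
    unsigned long *map = kmalloc(BITS_TO_LONGS(page->objects) *
                                 sizeof(unsigned long), GFP_ATOMIC);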
-
By Christoph Lameter

There are a couple of places where we repeat the same statements when removing a page from the partial list. Consolidate that into __remove_partial().

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
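
The consolidated helper is small; a sketch matching the description:

    static inline void __remove_partial(struct kmem_cache_node *n,
                                        struct page *page)
    {
            list_del(&page->lru);   /* unlink from n->partial */
            n->nr_partial--;        /* keep the count in sync */
    }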
-
By Christoph Lameter

Pass the actual values used for inactive and active redzoning to the functions that check the objects. This avoids a lot of the "? :" expressions used to look up the values inside those functions.

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-