    mm, dax, pmem: introduce {get|put}_dev_pagemap() for dax-gup · 5c2c2587
    Committed by Dan Williams
    get_dev_pagemap() enables paths like get_user_pages() to pin a dynamically
    mapped pfn-range (devm_memremap_pages()) while the resulting struct page
    objects are in use.  Unlike get_page() it may fail if the device is, or
    is in the process of being, disabled.  While the initial lookup of the
    range may be an expensive list walk, the result is cached to speed up
    subsequent lookups which are likely to be in the same mapped range.
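
    A minimal sketch of how a caller might use this, assuming the
    get_dev_pagemap(pfn, pgmap) / put_dev_pagemap(pgmap) signatures
    described above; pin_device_pfns() is a hypothetical helper, not the
    actual get_user_pages() code:

        /*
         * Illustrative only: pin ZONE_DEVICE pages for a pfn range while
         * holding a reference on the hosting dev_pagemap.  Unwinding of
         * already-pinned pages on error is elided for brevity.
         */
        static int pin_device_pfns(unsigned long *pfns, int nr,
                                   struct page **pages)
        {
                struct dev_pagemap *pgmap = NULL;
                int i;

                for (i = 0; i < nr; i++) {
                        /*
                         * Passing the previous result back in lets the
                         * lookup hit the cached range instead of
                         * re-walking the list of mapped ranges.
                         */
                        pgmap = get_dev_pagemap(pfns[i], pgmap);
                        if (!pgmap)
                                return -EBUSY; /* device disabled, or going away */
                        pages[i] = pfn_to_page(pfns[i]);
                        get_page(pages[i]);
                }
                put_dev_pagemap(pgmap);
                return nr;
        }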
    
    devm_memremap_pages() now requires a reference counter to be specified
    at init time.  For pmem this means moving request_queue allocation into
    pmem_alloc() so the existing queue usage counter can track "device
    pages".
    
    ZONE_DEVICE pages always have an elevated count and will never be on an
    lru reclaim list.  That space in 'struct page' can be redirected for
    other uses, but for safety introduce a poison value that will always
    trip __list_add() to assert.  This allows half of the struct list_head
    storage to be reclaimed with some assurance to back up the assumption
    that the page count never goes to zero and a list_add() is never
    attempted.
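
    The poisoning itself could be pictured as below; the identifiers
    (force_poison, list_force_poison()) are illustrative of the idea
    rather than a claim about the exact helpers the patch adds to
    lib/list_debug.c, and the debug build's existing sanity checks in
    __list_add() are omitted:

        static struct list_head force_poison;

        /*
         * Mark a list_head that must never be linked again.  Only one of
         * the two pointer slots carries the sentinel, leaving the other
         * free to be reused (e.g. for a ZONE_DEVICE page's pgmap back
         * pointer); the actual 'struct page' layout may differ.
         */
        void list_force_poison(struct list_head *entry)
        {
                entry->prev = &force_poison;
        }

        void __list_add(struct list_head *new, struct list_head *prev,
                        struct list_head *next)
        {
                WARN(new->prev == &force_poison || new->next == &force_poison,
                     "list_add attempted on force-poisoned entry\n");
                next->prev = new;
                new->next = next;
                new->prev = prev;
                prev->next = new;
        }
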
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Tested-by: Logan Gunthorpe <logang@deltatee.com>
    Cc: Dave Hansen <dave@sr71.net>
    Cc: Matthew Wilcox <willy@linux.intel.com>
    Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
    Cc: Alexander Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>