    mm/thp: narrow lru locking · 65b561a3
    Committed by Alex Shi
    mainline inclusion
    from mainline-v5.11-rc1
    commit b6769834
    category: feature
    bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZF7C?from=project-issue
    CVE: NA
    
    --------------------------------------
    
    lru_lock and page cache xa_lock have no obvious reason to be taken one
    way round or the other: until now, lru_lock has been taken before page
    cache xa_lock, when splitting a THP; but nothing else takes them
    together.  Reverse that ordering: let's narrow the lru locking - but
    leave local_irq_disable to block interrupts throughout, like before.
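    
    To make the reversal concrete, here is a condensed sketch of the locking
    in __split_huge_page() before and after this patch (tail-page handling
    and statistics updates omitted; schematic, not the verbatim diff):
    
        /* Before: lru_lock outermost, irq state saved around everything */
        spin_lock_irqsave(&pgdat->lru_lock, flags);
        if (PageAnon(head) && PageSwapCache(head))
                xa_lock(&swap_cache->i_pages);  /* nested inside lru_lock */
        /* ... split out tail pages ... */
        xa_unlock(&swap_cache->i_pages);
        spin_unlock_irqrestore(&pgdat->lru_lock, flags);
    
        /* After: interrupts blocked throughout; lru lock narrowed and
         * nested inside the page cache (or swap cache) lock */
        local_irq_disable();
        if (PageAnon(head) && PageSwapCache(head))
                xa_lock(&swap_cache->i_pages);  /* taken first now */
        spin_lock(&pgdat->lru_lock);
        /* ... split out tail pages ... */
        spin_unlock(&pgdat->lru_lock);
        xa_unlock(&swap_cache->i_pages);
        local_irq_enable();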
    
    Hugh Dickins points out: split_huge_page_to_list() was already silly to
    be using the _irqsave variant: it has just been taking sleeping locks,
    so it would already be broken if entered with interrupts disabled.  So
    we can save passing the flags argument down to __split_huge_page().
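    
    The resulting interface change would look roughly like this (a sketch
    derived from the description above, not the exact diff):
    
        /* Before: irq flags threaded down from the caller */
        static void __split_huge_page(struct page *page, struct list_head *list,
                        pgoff_t end, unsigned long flags);
    
        /* After: interrupts are known to be enabled on entry, so the
         * caller simply disables them and the flags argument is dropped */
        static void __split_huge_page(struct page *page, struct list_head *list,
                        pgoff_t end);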
    
    Why change the lock ordering here? That was hard to decide.  One reason:
    when this series reaches per-memcg lru locking, it relies on the THP's
    memcg to be stable when taking the lru_lock: that is now done after the
    THP's refcount has been frozen, which ensures page memcg cannot change.
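    
    Schematically, in split_huge_page_to_list() (abbreviated, error paths
    omitted; a sketch rather than the exact diff), the freeze now precedes
    the lru locking:
    
        local_irq_disable();
        /* ... recheck the mapping under xa_lock ... */
        if (page_ref_freeze(head, 1 + extra_pins)) {
                /* refcount frozen: page memcg can no longer change */
                __split_huge_page(page, list, end); /* takes lru_lock inside */
        } else {
                /* ... unlock and fail the split ... */
                local_irq_enable();
        }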
    
    Another reason: previously, lock_page_memcg()'s move_lock was presumed
    to nest inside lru_lock; but now lru_lock must nest inside (page cache
    lock inside) move_lock, so it becomes possible to use lock_page_memcg()
    to stabilize page memcg before taking its lru_lock.  That is not the
    mechanism used in this series, but it is an option we want to keep open.
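    
    The resulting nesting order (outermost first) can be summarised as a
    comment; this is a schematic reading of the paragraph above, not text
    from the patch:
    
        /*
         * lock_page_memcg()                 memcg move_lock
         *   xa_lock(&mapping->i_pages)      page cache (or swap cache) lock
         *     spin_lock(&pgdat->lru_lock)   lru lock (per-memcg later on)
         *
         * Previously move_lock was presumed to nest inside lru_lock.
         */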
    
    [hughd@google.com: rewrite commit log]
    
    Link: https://lkml.kernel.org/r/1604566549-62481-5-git-send-email-alex.shi@linux.alibaba.com
    Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
    Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Acked-by: Hugh Dickins <hughd@google.com>
    Cc: Kirill A. Shutemov <kirill@shutemov.name>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: Alexander Duyck <alexander.duyck@gmail.com>
    Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
    Cc: "Chen, Rong A" <rong.a.chen@intel.com>
    Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
    Cc: "Huang, Ying" <ying.huang@intel.com>
    Cc: Jann Horn <jannh@google.com>
    Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
    Cc: Mel Gorman <mgorman@techsingularity.net>
    Cc: Michal Hocko <mhocko@kernel.org>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: Mika Penttilä <mika.penttila@nextfour.com>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Shakeel Butt <shakeelb@google.com>
    Cc: Tejun Heo <tj@kernel.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Cc: Wei Yang <richard.weiyang@gmail.com>
    Cc: Yang Shi <yang.shi@linux.alibaba.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    
    Conflict:
    	mm/huge_memory.c
    Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
    Reviewed-by: chenwandun <chenwandun@huawei.com>
    Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>