    slub: never fail to shrink cache · 832f37f5
    Committed by Vladimir Davydov
    SLUB's version of __kmem_cache_shrink() not only removes empty slabs, but
    also tries to rearrange the partial lists, placing the most heavily filled
    slabs at the head to cope with fragmentation.  To achieve that, it
    allocates a temporary array of lists used to sort slabs by the number of
    objects in use.  If the allocation fails, the whole procedure is aborted.
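
    For reference, the pre-patch shape can be condensed roughly as below.
    This is a sketch, not the verbatim kernel code: shrink_cache_old is an
    illustrative name, and the real function also flushes per-CPU slabs and
    walks every node.

        /*
         * Sketch of the old approach: the sort array is sized by
         * objects-per-slab and heap-allocated, so the whole shrink can
         * fail before a single empty slab is released.
         */
        static int shrink_cache_old(struct kmem_cache *s)
        {
                int objects = oo_objects(s->max);  /* max objects per slab */
                struct list_head *slabs_by_inuse =
                        kmalloc(sizeof(struct list_head) * objects, GFP_KERNEL);

                if (!slabs_by_inuse)
                        return -ENOMEM;  /* abort: empty slabs stay allocated */

                /* ... bucket partial slabs by objects in use, free empties ... */

                kfree(slabs_by_inuse);
                return 0;
        }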
    
    This is unacceptable for the kernel memory accounting extension of the
    memory cgroup, where we want to be sure that kmem_cache_shrink()
    successfully discards empty slabs.  Although the allocation failure is
    utterly unlikely with the current page allocator implementation, which
    retries GFP_KERNEL allocations of order <= 2 indefinitely, it is better
    not to rely on that.
    
    This patch therefore makes __kmem_cache_shrink() allocate the array on
    the stack instead of calling kmalloc, which may fail.  The array size is
    set to 32 because most SLUB caches store no more than 32 objects per
    slab page.  Slab pages with <= 32 free objects are sorted by the number
    of objects in use and promoted to the head of the partial list, while
    slab pages with > 32 free objects are left at the end of the list with
    no ordering imposed on them.
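
    The resulting logic can be sketched as below; locking, the barrier
    against concurrent frees, and the per-node loop are omitted, and
    shrink_node_partial is an illustrative name (the real code lives in
    __kmem_cache_shrink() in mm/slub.c):

        #define SHRINK_PROMOTE_MAX 32

        /*
         * Condensed sketch of the post-patch per-node pass: the bucket
         * array lives on the stack (32 list heads, 512 bytes on 64-bit),
         * so there is no allocation and hence no failure path.
         */
        static void shrink_node_partial(struct kmem_cache *s,
                                        struct kmem_cache_node *n)
        {
                struct list_head discard;
                struct list_head promote[SHRINK_PROMOTE_MAX];
                struct page *page, *t;
                int i;

                INIT_LIST_HEAD(&discard);
                for (i = 0; i < SHRINK_PROMOTE_MAX; i++)
                        INIT_LIST_HEAD(promote + i);

                list_for_each_entry_safe(page, t, &n->partial, lru) {
                        int free = page->objects - page->inuse;

                        if (free == page->objects) {
                                /* completely empty: always discarded */
                                list_move(&page->lru, &discard);
                                n->nr_partial--;
                        } else if (free <= SHRINK_PROMOTE_MAX) {
                                /* bucket-sort by number of free objects */
                                list_move(&page->lru, promote + free - 1);
                        }
                        /* slabs with > 32 free objects keep their place */
                }

                /* splice buckets back so the fullest slabs end up first */
                for (i = SHRINK_PROMOTE_MAX - 1; i >= 0; i--)
                        list_splice(promote + i, &n->partial);

                /* release the empty slabs */
                list_for_each_entry_safe(page, t, &discard, lru)
                        discard_slab(s, page);
        }

    Since empty slabs are moved to the discard list before any sorting
    happens, they are freed regardless of how the bucketing turns out, which
    is what removes the failure path.
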
    Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
    Acked-by: Christoph Lameter <cl@linux.com>
    Acked-by: Pekka Enberg <penberg@kernel.org>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Cc: Huang Ying <ying.huang@intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>