    zsmalloc: decouple handle and object · 2e40e163
    Committed by Minchan Kim
    Recently, we started to use zram heavily, and several issues have popped
    up.
    
    1) external fragmentation
    
    I got a report from Juneho Choi that fork failed even though there were
    plenty of free pages in the system.  His investigation revealed that zram
    is one of the culprits behind heavy fragmentation: no contiguous 16K page
    was left for the pgd, so fork failed on ARM.
    
    2) non-movable pages
    
    Another problem with zram is that users inherently want to use it as swap
    on small-memory systems, so they combine zram with CMA to use memory
    efficiently.  Unfortunately, this doesn't work well, because zram cannot
    use CMA's movable pages unless it supports compaction.  I got several
    reports of OOMs happening with zram even though there was plenty of swap
    space and free space in the CMA area.
    
    3) internal fragmentation
    
    zram has started to support a memory limitation feature to bound its memory
    usage, and I sent a patchset (https://lkml.org/lkml/2014/9/21/148) to make
    the VM cooperate with zram-swap and stop anonymous page reclaim once zram
    has consumed memory up to the limit, even though there is still free swap
    space.  One problem with that direction is that zram has no way to know
    about holes left by internal fragmentation in the memory zsmalloc has
    allocated, so zram would regard swap as full although there is free space
    in zsmalloc.  To solve this, zram wants to trigger zsmalloc compaction
    before deciding whether it is full.
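
    As a rough, purely illustrative sketch of that direction (not part of this
    patch; the zram fields and the zs_get_total_pages()/zs_compact() calls used
    here are assumptions for illustration), zram could re-check its limit only
    after asking zsmalloc to compact:

        /* sketch; would live in drivers/block/zram/zram_drv.c */
        static bool zram_full_after_compaction(struct zram *zram)
        {
                if (zs_get_total_pages(zram->mem_pool) < zram->limit_pages)
                        return false;

                /* squeeze out internal fragmentation, then re-check the limit */
                zs_compact(zram->mem_pool);
                return zs_get_total_pages(zram->mem_pool) >= zram->limit_pages;
        }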
    
    This patchset is the first step toward addressing the issues above.  It
    adds an indirection layer between the handle and the object location and
    supports manual compaction, solving the third problem first.
    
    Once this patchset is merged, the next step is to make the VM aware of
    zsmalloc compaction so that generic compaction can move zsmalloc-allocated
    pages automatically at runtime.
    
    In my artificial experiment (i.e., highly compressible data with heavy swap
    in/out on an 8G zram-swap), the data is as follows:
    
    Before =
    zram allocated object :      60212066 bytes
    zram total used:     140103680 bytes
    ratio:         42.98 percent
    MemFree:          840192 kB
    
    Compaction
    
    After =
    frag ratio after compaction
    zram allocated object :      60212066 bytes
    zram total used:      76185600 bytes
    ratio:         79.03 percent
    MemFree:          901932 kB
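
    (For reference, the ratio above is allocated object bytes divided by the
    total bytes zsmalloc holds: 60212066 / 140103680 ≈ 42.98% before and
    60212066 / 76185600 ≈ 79.03% after, i.e. compaction released roughly 61MB
    of zsmalloc memory, which matches the ~60MB rise in MemFree.)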
    
    Juneho reported the results below from his real platform after only light
    aging, so I think the benefit would be bigger on a real system aged for a
    long time.
    
    - frag_ratio increased by 3% (higher is better)
    - MemFree increased by about 6MB
    - In buddyinfo (free-block counts per order, 0 through 10, shown below),
      Normal 2^3 increased by 4 and 2^2, 2^1 also increased; HighMem 2^1
      increased by 21
    
    frag ratio after swap fragment
    used :        156677 kbytes
    total:        166092 kbytes
    frag_ratio :  94
    meminfo before compaction
    MemFree:           83724 kB
    Node 0, zone   Normal  13642   1364     57     10     61     17      9      5      4      0      0
    Node 0, zone  HighMem    425     29      1      0      0      0      0      0      0      0      0
    
    num_migrated :  23630
    compaction done
    
    frag ratio after compaction
    used :        156673 kbytes
    total:        160564 kbytes
    frag_ratio :  97
    meminfo after compaction
    MemFree:           89060 kB
    Node 0, zone   Normal  14076   1544     67     14     61     17      9      5      4      0      0
    Node 0, zone  HighMem    863     50      1      0      0      0      0      0      0      0      0
    
    This patchset adds more logic (about 480 lines) to zsmalloc, but when I
    tested a heavy swap-in/out program, the regression in swap-in/out speed was
    marginal, because most of the overhead is caused by compression/
    decompression and other MM reclaim work.
    
    This patch (of 7):
    
    Currently, a zsmalloc handle encodes the object's location directly, which
    makes supporting migration hard.
    
    This patch decouples the handle from the object by adding an indirection
    layer.  The handle is now allocated dynamically and returned to the user;
    it is an address obtained from the slab allocator, so it is unique, and the
    object's location is kept in the memory allocated for the handle.
    
    With this, we can change the object's position without changing the handle
    itself.
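
    A minimal sketch of the new indirection (simplified; the helper and cache
    names here follow the spirit of the patch but are written from scratch for
    illustration):

        #include <linux/slab.h>         /* kmem_cache_create/kmem_cache_alloc */

        /*
         * Old scheme: the handle value itself encoded <PFN, obj index>, so an
         * object could never move without invalidating every handle in use.
         * New scheme: the handle is a small slab object whose contents record
         * the object location; zsmalloc can rewrite that location when the
         * object moves while the handle value handed to the user stays stable.
         */
        static struct kmem_cache *handle_cachep;  /* one unsigned long per slot */

        static int create_handle_cache(void)
        {
                handle_cachep = kmem_cache_create("zs_handle",
                                        sizeof(unsigned long), 0, 0, NULL);
                return handle_cachep ? 0 : -ENOMEM;
        }

        static unsigned long alloc_handle(gfp_t gfp)
        {
                /* the slab address itself is the opaque handle given to users */
                return (unsigned long)kmem_cache_alloc(handle_cachep, gfp);
        }

        static void record_obj(unsigned long handle, unsigned long obj)
        {
                /* remember the current <PFN, index> encoding behind the handle */
                *(unsigned long *)handle = obj;
        }

        static unsigned long handle_to_obj(unsigned long handle)
        {
                return *(unsigned long *)handle;
        }

    Because callers only ever pass the handle back to zsmalloc, which looks up
    the location via handle_to_obj(), a later compaction step can migrate the
    object and simply call record_obj() with the new location.
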
    Signed-off-by: Minchan Kim <minchan@kernel.org>
    Cc: Juneho Choi <juno.choi@lge.com>
    Cc: Gunho Lee <gunho.lee@lge.com>
    Cc: Luigi Semenzato <semenzato@google.com>
    Cc: Dan Streetman <ddstreet@ieee.org>
    Cc: Seth Jennings <sjennings@variantweb.net>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Cc: Jerome Marchand <jmarchan@redhat.com>
    Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Cc: Mel Gorman <mel@csn.ul.ie>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>