1. 09 May 2017, 12 commits
    • treewide: use kv[mz]alloc* rather than opencoded variants · 752ade68
      Michal Hocko committed
      There are many code paths open-coding kvmalloc.  Let's use the helper
      instead.  The main difference from kvmalloc is that those users
      usually do not consider all the aspects of the memory allocator.
      E.g. allocation requests <= 32kB (with 4kB pages) basically never
      fail and instead invoke the OOM killer to satisfy the allocation.
      This sounds too disruptive for something that has a reasonable
      fallback - the vmalloc.  On the other hand, those requests might fall
      back to vmalloc even when the memory allocator would have succeeded
      after several more reclaim/compaction attempts.  There is no
      guarantee something like that happens though.
      
      This patch converts many of those places to kv[mz]alloc* helpers because
      they are more conservative.
      
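      As a rough sketch of the conversion pattern (the function names and
      the buffer here are illustrative, not taken from any one converted
      call site), an open-coded fallback becomes a single helper call:

        #include <linux/slab.h>     /* kmalloc */
        #include <linux/vmalloc.h>  /* vmalloc */
        #include <linux/mm.h>       /* kvmalloc, kvfree */

        /* Illustrative only: an open-coded fallback of the kind removed. */
        static void *example_alloc_old(size_t size)
        {
                void *buf = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);

                if (!buf)
                        buf = vmalloc(size);
                return buf;
        }

        /* After conversion: one conservative helper call. */
        static void *example_alloc_new(size_t size)
        {
                return kvmalloc(size, GFP_KERNEL);
        }

        /* Either result is released with kvfree(). */
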
      Link: http://lkml.kernel.org/r/20170306103327.2766-2-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> # Xen bits
      Acked-by: Kees Cook <keescook@chromium.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Andreas Dilger <andreas.dilger@intel.com> # Lustre
      Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> # KVM/s390
      Acked-by: Dan Williams <dan.j.williams@intel.com> # nvdimm
      Acked-by: David Sterba <dsterba@suse.com> # btrfs
      Acked-by: Ilya Dryomov <idryomov@gmail.com> # Ceph
      Acked-by: Tariq Toukan <tariqt@mellanox.com> # mlx4
      Acked-by: Leon Romanovsky <leonro@mellanox.com> # mlx5
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Anton Vorontsov <anton@enomsg.org>
      Cc: Colin Cross <ccross@android.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Ben Skeggs <bskeggs@redhat.com>
      Cc: Kent Overstreet <kent.overstreet@gmail.com>
      Cc: Santosh Raspatur <santosh@chelsio.com>
      Cc: Hariprasad S <hariprasad@chelsio.com>
      Cc: Yishai Hadas <yishaih@mellanox.com>
      Cc: Oleg Drokin <oleg.drokin@intel.com>
      Cc: "Yan, Zheng" <zyan@redhat.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      752ade68
    • mm, vmalloc: properly track vmalloc users · 1f5307b1
      Michal Hocko committed
      __vmalloc_node_flags used to be static inline, but this was changed
      by "mm: introduce kv[mz]alloc helpers" because kvmalloc_node needs to
      use it as well and the code is outside of the vmalloc proper.  I
      hadn't realized that changing this would lead to a subtle bug,
      though.  The function is responsible for tracking the caller as well.
      This caller is then printed by /proc/vmallocinfo.  If
      __vmalloc_node_flags is not inline, then we get only direct users of
      __vmalloc_node_flags as callers (e.g.  v[mz]alloc), which reduces the
      usefulness of this debugging feature considerably.  It simply doesn't
      help to see that the given range belongs to vmalloc as a caller:
      
        0xffffc90002c79000-0xffffc90002c7d000   16384 vmalloc+0x16/0x18 pages=3 vmalloc N0=3
        0xffffc90002c81000-0xffffc90002c85000   16384 vmalloc+0x16/0x18 pages=3 vmalloc N1=3
        0xffffc90002c8d000-0xffffc90002c91000   16384 vmalloc+0x16/0x18 pages=3 vmalloc N1=3
        0xffffc90002c95000-0xffffc90002c99000   16384 vmalloc+0x16/0x18 pages=3 vmalloc N1=3
      
      We really want to catch the _caller_ of the vmalloc function.  Fix this
      issue by making __vmalloc_node_flags static inline again.
      
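      A sketch of why the inlining matters: once the helper is inlined into
      v[mz]alloc and friends, __builtin_return_address(0) resolves in the
      frame of *their* caller, which is what /proc/vmallocinfo should show
      (parameter details here follow the vmalloc code of that era and may
      differ slightly):

        /* Sketch of the re-inlined helper. */
        static inline void *__vmalloc_node_flags(unsigned long size,
                                                 int node, gfp_t flags)
        {
                /* Inlined, so this records the caller of v[mz]alloc. */
                return __vmalloc_node(size, 1, flags, PAGE_KERNEL, node,
                                      __builtin_return_address(0));
        }
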
      Link: http://lkml.kernel.org/r/20170502134657.12381-1-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1f5307b1
    • mm: introduce kv[mz]alloc helpers · a7c3e901
      Michal Hocko committed
      Patch series "kvmalloc", v5.
      
      There are many open-coded kmalloc-with-vmalloc-fallback instances in
      the tree.  Most of them are not careful enough or simply do not care
      about the underlying semantics of the kmalloc/page allocator, which
      means that a) some vmalloc fallbacks are basically unreachable
      because the kmalloc part will keep retrying until it succeeds, and
      b) the page allocator can invoke really disruptive steps like the
      OOM killer to move forward, which doesn't sound appropriate when we
      consider that the vmalloc fallback is available.
      
      As can be seen, implementing kvmalloc requires quite intimate
      knowledge of the page allocator and the memory reclaim internals,
      which strongly suggests that a helper should be implemented in the
      memory subsystem proper.
      
      Most callers I could find have been converted to use the helper
      instead.  This is patch 6.  There are some more relying on
      __GFP_REPEAT in the networking stack which I have converted as well,
      and Eric Dumazet was not opposed [2] to converting them too.
      
      [1] http://lkml.kernel.org/r/20170130094940.13546-1-mhocko@kernel.org
      [2] http://lkml.kernel.org/r/1485273626.16328.301.camel@edumazet-glaptop3.roam.corp.google.com
      
      This patch (of 9):
      
      Using kmalloc with a vmalloc fallback for larger allocations is a
      common pattern in the kernel code.  Yet we do not have any common
      helper for that, so users have invented their own helpers.  Some of
      them are really creative when doing so.  Let's just add kv[mz]alloc
      and make sure it is implemented properly.  This implementation makes
      sure not to apply excessive memory pressure for > PAGE_SIZE requests
      (__GFP_NORETRY) and also not to warn about allocation failures.  This
      also rules out the OOM killer, as the vmalloc is a more appropriate
      fallback than a disruptive user-visible action.
      
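      A sketch of the helper these rules describe (modeled on the
      description above; the mainline function of that era is close to
      this shape, but details may differ):

        /* Sketch: try kmalloc conservatively, then fall back to vmalloc. */
        void *kvmalloc_node(size_t size, gfp_t flags, int node)
        {
                gfp_t kmalloc_flags = flags;
                void *ret;

                /* vmalloc allocates page tables with GFP_KERNEL, so the
                 * given flags must be a GFP_KERNEL superset. */
                WARN_ON_ONCE((flags & GFP_KERNEL) != GFP_KERNEL);

                /* Larger requests: no retries, no OOM killer, no failure
                 * warnings - we have a fallback. */
                if (size > PAGE_SIZE)
                        kmalloc_flags |= __GFP_NORETRY | __GFP_NOWARN;

                ret = kmalloc_node(size, kmalloc_flags, node);

                /* Sub-page requests never need the vmalloc fallback. */
                if (ret || size <= PAGE_SIZE)
                        return ret;

                return __vmalloc_node_flags(size, node, flags);
        }
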
      This patch also changes some existing users and removes helpers which
      are specific to them.  In some cases this is not possible (e.g.
      ext4_kvmalloc, libcfs_kvzalloc) because those seem to be broken and
      require a GFP_NO{FS,IO} context, which is not vmalloc compatible in
      general (note that the page table allocation is GFP_KERNEL).  Those
      need to be fixed separately.
      
      While we are at it, document the unsupported gfp mask in
      __vmalloc{_node}, because there seems to be a lot of confusion out
      there.  kvmalloc_node will warn about GFP_KERNEL incompatible flags
      (those which are not a superset) to catch new abusers.  Existing
      ones would have to die slowly.
      
      [sfr@canb.auug.org.au: f2fs fixup]
        Link: http://lkml.kernel.org/r/20170320163735.332e64b7@canb.auug.org.au
      Link: http://lkml.kernel.org/r/20170306103032.2540-2-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Reviewed-by: Andreas Dilger <adilger@dilger.ca>	[ext4 part]
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a7c3e901
    • sysv,ipc: cacheline align kern_ipc_perm · 60f3e00d
      Davidlohr Bueso committed
      Assign 'struct kern_ipc_perm' its own cacheline to avoid false sharing
      with sysv ipc calls.
      
      While the structure itself is rather read-mostly throughout the lifespan
      of ipc, the spinlock causes most of the invalidations.  One example is
      commit 31a7c474 ("ipc/sem.c: cacheline align the ipc spinlock for
      semaphores").  Therefore, extend this to all ipc.
      
      The effect of cacheline alignment can be seen with sembench, which
      deals mostly with semtimedop waits/wakeups: raw throughput (worker
      loops) improves by 8 to 12% on a 24-core x86 machine with more than
      4 threads.
      
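      The mechanism is the usual one; a sketch of the annotation (field
      list abbreviated, not the full structure):

        /* Sketch: each kern_ipc_perm instance gets its own cacheline,
         * so spinlock writes do not invalidate neighbouring data. */
        struct kern_ipc_perm {
                spinlock_t      lock;
                bool            deleted;
                int             id;
                /* ... remaining read-mostly fields ... */
        } ____cacheline_aligned_in_smp;
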
      Link: http://lkml.kernel.org/r/1486673582-6979-4-git-send-email-dave@stgolabs.net
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Cc: Manfred Spraul <manfred@colorfullife.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      60f3e00d
    • pidns: expose task pid_ns_for_children to userspace · eaa0d190
      Kirill Tkhai committed
      The pid_ns_for_children set by a task is known only to the task
      itself, and it's impossible to identify it from outside.

      This is a big problem for checkpoint/restore software like CRIU,
      because it can't correctly handle tasks that do setns(CLONE_NEWPID)
      in the course of their work.

      This patch solves the problem by exposing pid_ns_for_children in the
      ns directory in the standard way, under the name "pid_for_children":
      
        ~# ls /proc/5531/ns -l | grep pid
        lrwxrwxrwx 1 root root 0 Jan 14 16:38 pid -> pid:[4026531836]
        lrwxrwxrwx 1 root root 0 Jan 14 16:38 pid_for_children -> pid:[4026532286]
      
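      For a checkpoint/restore tool, usage is a simple readlink; a minimal
      userspace sketch (the helper below is hypothetical, not part of the
      patch):

        /* Hypothetical sketch: report where a task's children will live. */
        #include <limits.h>
        #include <stdio.h>
        #include <unistd.h>

        static void show_pid_for_children(pid_t pid)
        {
                char path[PATH_MAX], target[PATH_MAX];
                ssize_t n;

                snprintf(path, sizeof(path),
                         "/proc/%d/ns/pid_for_children", (int)pid);
                n = readlink(path, target, sizeof(target) - 1);
                if (n > 0) {
                        target[n] = '\0';
                        printf("pid_for_children -> %s\n", target);
                }
        }
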
      Link: http://lkml.kernel.org/r/149201123914.6007.2187327078064239572.stgit@localhost.localdomain
      Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Cc: Andrei Vagin <avagin@virtuozzo.com>
      Cc: Andreas Gruenbacher <agruenba@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Serge Hallyn <serge@hallyn.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      eaa0d190
    • ns: allow ns_entries to have custom symlink content · 25b14e92
      Kirill Tkhai committed
      Patch series "Expose task pid_ns_for_children to userspace".
      
      The pid_ns_for_children set by a task is known only to the task
      itself, and it's impossible to identify it from outside.

      This is a big problem for checkpoint/restore software like CRIU,
      because it can't correctly handle tasks that do setns(CLONE_NEWPID)
      in the course of their work.  If they have a custom
      pid_ns_for_children before dump, they must have the same ns after
      restore.  Otherwise, a restored task ends up in an environment it
      does not expect.

      This patchset solves the problem.  It exposes pid_ns_for_children in
      the ns directory in the standard way, under the name
      "pid_for_children":
      
        ~# ls /proc/5531/ns -l | grep pid
        lrwxrwxrwx 1 root root 0 Jan 14 16:38 pid -> pid:[4026531836]
        lrwxrwxrwx 1 root root 0 Jan 14 16:38 pid_for_children -> pid:[4026532286]
      
      This patch (of 2):
      
      Make it possible for the link content prefix yyy to differ from the
      link name xxx:
      
        $ readlink /proc/[pid]/ns/xxx
        yyy:[4026531838]
      
      This will be used in the next patch.
      
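      A sketch of one plausible shape for this (the second field name below
      is an assumption for illustration, not confirmed from the patch
      itself): each ns operations structure carries a separate string used
      for the symlink target.

        /* Sketch: link name and link content prefix decoupled. */
        struct proc_ns_operations {
                const char *name;          /* link name, e.g. "pid_for_children" */
                const char *real_ns_name;  /* link content, e.g. "pid" (assumed) */
                int type;
                /* ... get/put/install callbacks ... */
        };
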
      Link: http://lkml.kernel.org/r/149201120318.6007.7362655181033883000.stgit@localhost.localdomain
      Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Reviewed-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Acked-by: Andrei Vagin <avagin@virtuozzo.com>
      Cc: Andreas Gruenbacher <agruenba@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Serge Hallyn <serge@hallyn.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      25b14e92
    • ia64: reuse append_elf_note() and final_note() functions · 51dbd925
      Hari Bathini committed
      Get rid of the multiple definitions of the append_elf_note() and
      final_note() functions.  Reuse the ones compiled under
      CONFIG_CRASH_CORE.  Also, define Elf_Word and use it instead of the
      generic u32 or the more specific Elf64_Word.
      
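      A sketch of the shared helpers' shape (based on the crash_core code
      of that era; exact prototypes may differ slightly):

        /* Sketch: single definitions under CONFIG_CRASH_CORE, reused by
         * ia64 instead of its own copies. */
        Elf_Word *append_elf_note(Elf_Word *buf, char *name,
                                  unsigned int type, void *data,
                                  size_t data_len);
        void final_note(Elf_Word *buf);
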
      Link: http://lkml.kernel.org/r/149035342324.6881.11667840929850361402.stgit@hbathini.in.ibm.com
      Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Acked-by: Dave Young <dyoung@redhat.com>
      Acked-by: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      51dbd925
    • crash: move crashkernel parsing and vmcore related code under CONFIG_CRASH_CORE · 692f66f2
      Hari Bathini committed
      Patch series "kexec/fadump: remove dependency with CONFIG_KEXEC and
      reuse crashkernel parameter for fadump", v4.
      
      Traditionally, kdump is used to save the vmcore in case of a crash.
      Some architectures, like powerpc, can save the vmcore using
      architecture-specific support instead of the kexec/kdump mechanism.
      Such architecture-specific support also needs to reserve memory to
      be used by the dump capture kernel.  The crashkernel parameter can
      be reused for memory reservation by such architecture-specific
      infrastructure.

      This patchset removes the dependency on CONFIG_KEXEC for the
      crashkernel parameter and vmcoreinfo related code, as it can be
      reused without kexec support.  Also, the crashkernel parameter is
      reused instead of fadump_reserve_mem to reserve memory for fadump.
      
      The first patch moves crashkernel parameter parsing and vmcoreinfo
      related code under CONFIG_CRASH_CORE instead of CONFIG_KEXEC_CORE.
      The second patch reuses the definitions of the append_elf_note() &
      final_note() functions under CONFIG_CRASH_CORE in the IA64 arch
      code.  The third patch removes the dependency on CONFIG_KEXEC for
      firmware-assisted dump (fadump) in powerpc.  The next patch reuses
      the crashkernel parameter for reserving memory for fadump, instead
      of the fadump_reserve_mem parameter.  This has the advantage of
      supporting all the syntaxes the crashkernel parameter supports, for
      fadump as well.  The last patch updates the fadump kernel
      documentation about use of the crashkernel parameter.
      
      This patch (of 5):
      
      Traditionally, kdump is used to save the vmcore in case of a crash.
      Some architectures, like powerpc, can save the vmcore using
      architecture-specific support instead of the kexec/kdump mechanism.
      Such architecture-specific support also needs to reserve memory to
      be used by the dump capture kernel.  The crashkernel parameter can
      be reused for memory reservation by such architecture-specific
      infrastructure.

      But currently, the code related to vmcoreinfo and parsing of the
      crashkernel parameter is built under CONFIG_KEXEC_CORE.  This patch
      introduces CONFIG_CRASH_CORE and moves the above mentioned code
      under this config, allowing code reuse without a dependency on
      CONFIG_KEXEC.  There is no functional change with this patch.
      
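      A sketch of the resulting Kconfig wiring (simplified; option
      placement in the real tree may differ):

        # Sketch: KEXEC_CORE now pulls in the shared crash code.
        config CRASH_CORE
                bool

        config KEXEC_CORE
                select CRASH_CORE
                bool
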
      Link: http://lkml.kernel.org/r/149035338104.6881.4550894432615189948.stgit@hbathini.in.ibm.com
      Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Acked-by: Dave Young <dyoung@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      692f66f2
    • cpumask: make "nr_cpumask_bits" unsigned · c311c797
      Alexey Dobriyan committed
      Bit searching functions accept "unsigned long" indices, but
      "nr_cpumask_bits" is "int", which is signed, so inevitable sign
      extensions occur on x86_64.  Those MOVSX instructions are the #1
      source of MOVSX bloat by number of uses across the whole kernel.

      Change "nr_cpumask_bits" to unsigned; this number can't be negative
      after all.  It allows implicit zero-extension on x86_64 without
      MOVSX.

      Change signed comparisons into unsigned comparisons where necessary.

      Other uses look fine because they are either arguments passed to a
      function or comparisons that are already unsigned.
      
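      The effect, sketched at a typical call site (this mirrors the
      cpumask helpers of that era; exact bodies may differ):

        /* Sketch: the declaration change and why it helps. */
        extern unsigned int nr_cpumask_bits;   /* was: extern int ... */

        static inline unsigned int cpumask_first(const struct cpumask *srcp)
        {
                /* find_first_bit() takes an unsigned long size; an
                 * unsigned bound zero-extends for free (no MOVSX). */
                return find_first_bit(cpumask_bits(srcp), nr_cpumask_bits);
        }
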
      Net win on allyesconfig type of kernel: ~2.8 KB (!)
      
      	add/remove: 0/0 grow/shrink: 8/725 up/down: 93/-2926 (-2833)
      	function                                     old     new   delta
      	xen_exit_mmap                                691     735     +44
      	qstat_read                                   426     440     +14
      	__cpufreq_cooling_register                  1678    1687      +9
      	trace_rb_cpu_prepare                         447     455      +8
      	vermagic                                      54      60      +6
      	nfp_driver_version                            54      60      +6
      	rcu_torture_stats_print                     1147    1151      +4
      	find_next_push_cpu                           267     269      +2
      	xen_irq_resume                               961     960      -1
      				...
      	init_vp_index                                946     906     -40
      	od_set_powersave_bias                        328     281     -47
      	power_cpu_exit                               193     139     -54
      	arch_show_interrupts                        3538    3484     -54
      	select_idle_sibling                         1558    1471     -87
      	Total: Before=158358910, After=158356077, chg -0.00%
      
      The same arguments apply to "nr_cpu_ids", but I haven't yet found
      enough courage to delve into that issue (and a proper fix may
      require a new type "cpu_t", which is a whole separate story).
      
      Link: http://lkml.kernel.org/r/20170309205322.GA1728@avx2
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c311c797
    • jiffies.h: declare jiffies and jiffies_64 with ____cacheline_aligned_in_smp · 7c30f352
      Matthias Kaehlcke committed
      jiffies_64 is defined in kernel/time/timer.c with
      ____cacheline_aligned_in_smp; however, this macro is not part of the
      declaration of jiffies and jiffies_64 in jiffies.h.
      
      As a result clang generates the following warning:
      
        kernel/time/timer.c:57:26: error: section does not match previous declaration [-Werror,-Wsection]
        __visible u64 jiffies_64 __cacheline_aligned_in_smp = INITIAL_JIFFIES;
                                 ^
        include/linux/cache.h:39:36: note: expanded from macro '__cacheline_aligned_in_smp'
                                           ^
        include/linux/cache.h:34:4: note: expanded from macro '__cacheline_aligned'
                         __section__(".data..cacheline_aligned")))
                         ^
        include/linux/jiffies.h:77:12: note: previous attribute is here
        extern u64 __jiffy_data jiffies_64;
                   ^
        include/linux/jiffies.h:70:38: note: expanded from macro '__jiffy_data'
      
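      The fix, sketched: carry the same attribute in the declaration so
      both sides agree on the section (exact spelling follows
      include/linux/cache.h and may differ in detail):

        /* Sketch: declaration now matches the definition's section. */
        extern u64 __cacheline_aligned_in_smp jiffies_64;
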
      Link: http://lkml.kernel.org/r/20170403190200.70273-1-mka@chromium.org
      Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
      Cc: "Jason A . Donenfeld" <Jason@zx2c4.com>
      Cc: Grant Grundler <grundler@chromium.org>
      Cc: Michael Davidson <md@google.com>
      Cc: Greg Hackmann <ghackmann@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7c30f352
    • mm, compaction: change migrate_async_suitable() to suitable_migration_source() · b682debd
      Vlastimil Babka committed
      Preparation for making the decisions more complex and dependent on
      compact_control flags.  No functional change.
      
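      A sketch of the renamed helper's shape (hedged; the point of the
      rename is that the helper now takes the compact_control so that
      later patches can consult its flags):

        /* Sketch: formerly migrate_async_suitable(int migratetype). */
        static bool suitable_migration_source(struct compact_control *cc,
                                              struct page *page)
        {
                if (cc->mode != MIGRATE_ASYNC)
                        return true;

                return is_migrate_movable(get_pageblock_migratetype(page));
        }
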
      Link: http://lkml.kernel.org/r/20170307131545.28577-6-vbabka@suse.cz
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b682debd
    • mm, page_alloc: count movable pages when stealing from pageblock · 02aa0cdd
      Vlastimil Babka committed
      When stealing pages from a pageblock of a different migratetype, we
      count how many free pages were stolen and change the pageblock's
      migratetype if more than half of the pageblock was free.  This might
      be too conservative, as there might be other pages that are not free
      but were allocated with the same migratetype as our allocation
      requested.
      
      While we cannot determine the migratetype of allocated pages precisely
      (at least without the page_owner functionality enabled), we can count
      pages that compaction would try to isolate for migration - those are
      either on LRU or __PageMovable().  The rest can be assumed to be
      MIGRATE_RECLAIMABLE or MIGRATE_UNMOVABLE, which we cannot easily
      distinguish.  This counting can be done as part of free page stealing
      with little additional overhead.
      
      The page stealing code is changed so that it considers free pages
      plus pages of the "good" migratetype for the decision whether to
      change the pageblock's migratetype.
      
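      A sketch of the counting idea (the function and variable names here
      are illustrative, not the patch's; the real code lives in the
      fallback path of mm/page_alloc.c):

        /* Sketch: count pages that compaction could migrate, i.e. the
         * "good" pages for the steal decision. */
        static int count_movable_pages(struct page *start, int nr_pages,
                                       int *nr_free)
        {
                int i, nr_movable = 0;

                for (i = 0; i < nr_pages; i++) {
                        struct page *page = start + i;

                        if (PageBuddy(page)) {
                                (*nr_free)++;   /* free pages still count */
                                continue;
                        }
                        /* Allocated but migratable: on the LRU or movable. */
                        if (PageLRU(page) || __PageMovable(page))
                                nr_movable++;
                }
                /* Caller changes the migratetype if nr_free + nr_movable
                 * exceeds half of the pageblock. */
                return nr_movable;
        }
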
      The result should be a more accurate migratetype for pageblocks with
      respect to the actual pages in the pageblocks when stealing from
      semi-occupied pageblocks.  This should help the efficiency of page
      grouping by mobility.
      
      In testing based on a 4.9 kernel with stress-highalloc from mmtests,
      configured for order-4 GFP_KERNEL allocations, this patch has
      reduced the number of unmovable allocations falling back to movable
      pageblocks by 47%.  The number of movable allocations falling back
      to other pageblocks is increased by 55%, but these events don't
      cause permanent fragmentation, so the tradeoff should be positive.
      Later patches also offset the movable fallback increase to some
      extent.
      
      [akpm@linux-foundation.org: merge fix]
      Link: http://lkml.kernel.org/r/20170307131545.28577-5-vbabka@suse.cz
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      02aa0cdd
  2. 05 May 2017, 3 commits
  3. 04 May 2017, 25 commits