1. 03 Jun 2020: 12 commits
  2. 08 Apr 2020: 1 commit
    • proc: faster open/read/close with "permanent" files · d919b33d
      Alexey Dobriyan committed
      Now that "struct proc_ops" exists, we can start putting stuff there which
      could not fly with the VFS "struct file_operations"...
      
      Most of the fs/proc/inode.c file is dedicated to making open/read/.../close
      reliable in the event of disappearing /proc entries, which usually happens
      when a module is removed.  Files like /proc/cpuinfo which never
      disappear simply do not need such protection.
      
      Save 2 atomic ops, 1 allocation, 1 free per open/read/close sequence for such
      "permanent" files.
      
      Enable "permanent" flag for
      
      	/proc/cpuinfo
      	/proc/kmsg
      	/proc/modules
      	/proc/slabinfo
      	/proc/stat
      	/proc/sysvipc/*
      	/proc/swaps
      
      More will come once I figure out a foolproof way to prevent module
      authors from marking their stuff "permanent" for performance reasons
      when it is not.
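      
      As a rough illustration of what the "permanent" flag buys (a sketch, not
      the literal kernel diff; PROC_ENTRY_PERMANENT and pde_is_permanent()
      follow the patch's naming, the surrounding code is heavily simplified),
      the read path can bypass the removal protection for a permanent entry
      like this:
      
      static inline bool pde_is_permanent(const struct proc_dir_entry *pde)
      {
          return pde->flags & PROC_ENTRY_PERMANENT;
      }
      
      static ssize_t proc_reg_read(struct file *file, char __user *buf,
                                   size_t count, loff_t *ppos)
      {
          struct proc_dir_entry *pde = PDE(file_inode(file));
          ssize_t rv;
      
          /* Permanent entries can never be removed, so skip the use_pde()/
           * unuse_pde() bookkeeping that guards against concurrent removal. */
          if (pde_is_permanent(pde))
              return pde->proc_ops->proc_read(file, buf, count, ppos);
      
          if (!use_pde(pde))          /* slow path: entry may disappear */
              return -EIO;
          rv = pde->proc_ops->proc_read(file, buf, count, ppos);
          unuse_pde(pde);
          return rv;
      }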
      
      This should help with scalability: benchmark is "read /proc/cpuinfo R times
      by N threads scattered over the system".
      
      	N	R	t, s (before)	t, s (after)
      	-----------------------------------------------------
      	64	4096	1.582458	1.530502	-3.2%
      	256	4096	6.371926	6.125168	-3.9%
      	1024	4096	25.64888	24.47528	-4.6%
      
      Benchmark source:
      
      #include <chrono>
      #include <iostream>
      #include <thread>
      #include <vector>
      
      #include <sys/types.h>
      #include <sys/stat.h>
      #include <fcntl.h>
      #include <sched.h>	/* cpu_set_t, CPU_SET, sched_setaffinity */
      #include <stdlib.h>	/* atoi */
      #include <unistd.h>
      
      const int NR_CPUS = sysconf(_SC_NPROCESSORS_ONLN);
      int N;
      const char *filename;
      int R;
      
      int xxx = 0;	/* start flag: flipped to 1 to release all threads at once */
      
      int glue(int n)
      {
      	cpu_set_t m;
      	CPU_ZERO(&m);
      	CPU_SET(n, &m);
      	return sched_setaffinity(0, sizeof(cpu_set_t), &m);
      }
      
      void f(int n)
      {
      	glue(n % NR_CPUS);
      
      	while (*(volatile int *)&xxx == 0) {
      	}
      
      	for (int i = 0; i < R; i++) {
      		int fd = open(filename, O_RDONLY);
      		char buf[4096];
      		ssize_t rv = read(fd, buf, sizeof(buf));
      		asm volatile ("" :: "g" (rv));
      		close(fd);
      	}
      }
      
      int main(int argc, char *argv[])
      {
      	if (argc < 4) {
      		std::cerr << "usage: " << argv[0] << ' ' << "N /proc/filename R\n";
      		return 1;
      	}
      
      	N = atoi(argv[1]);
      	filename = argv[2];
      	R = atoi(argv[3]);
      
      	for (int i = 0; i < NR_CPUS; i++) {
      		if (glue(i) == 0)
      			break;
      	}
      
      	std::vector<std::thread> T;
      	T.reserve(N);
      	for (int i = 0; i < N; i++) {
      		T.emplace_back(f, i);
      	}
      
      	auto t0 = std::chrono::system_clock::now();
      	{
      		*(volatile int *)&xxx = 1;
      		for (auto& t: T) {
      			t.join();
      		}
      	}
      	auto t1 = std::chrono::system_clock::now();
      	std::chrono::duration<double> dt = t1 - t0;
      	std::cout << dt.count() << '\n';
      
      	return 0;
      }
      
      P.S.:
      An explicit randomization marker is added because adding a non-function
      pointer would otherwise silently disable structure layout randomization.
      
      [akpm@linux-foundation.org: coding style fixes]
      Reported-by: kbuild test robot <lkp@intel.com>
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Joe Perches <joe@perches.com>
      Link: http://lkml.kernel.org/r/20200222201539.GA22576@avx2
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d919b33d
  3. 03 Apr 2020: 2 commits
  4. 30 Mar 2020: 1 commit
    • mm/swapfile.c: move inode_lock out of claim_swapfile · d795a90e
      Naohiro Aota committed
      claim_swapfile() currently keeps the inode locked when it succeeds, or
      when the file is already a swapfile (returning -EBUSY).  On the other
      error cases it does not lock the inode at all.
      
      This inconsistency between lock state and return value is quite confusing
      and actually causes a bad unlock balance, shown below, in the "bad_swap"
      section of __do_sys_swapon().
      
      This commit fixes the issue by moving the inode_lock() and IS_SWAPFILE
      check out of claim_swapfile().  The inode is unlocked in the
      "bad_swap_unlock_inode" section, so it is guaranteed to be unlocked by the
      time "bad_swap" is reached.  Error handling code after the locking now
      jumps to "bad_swap_unlock_inode" instead of "bad_swap".
      
          =====================================
          WARNING: bad unlock balance detected!
          5.5.0-rc7+ #176 Not tainted
          -------------------------------------
          swapon/4294 is trying to release lock (&sb->s_type->i_mutex_key) at: __do_sys_swapon+0x94b/0x3550
          but there are no more locks to release!
      
          other info that might help us debug this:
          no locks held by swapon/4294.
      
          stack backtrace:
          CPU: 5 PID: 4294 Comm: swapon Not tainted 5.5.0-rc7-BTRFS-ZNS+ #176
          Hardware name: ASUS All Series/H87-PRO, BIOS 2102 07/29/2014
          Call Trace:
           dump_stack+0xa1/0xea
           print_unlock_imbalance_bug.cold+0x114/0x123
           lock_release+0x562/0xed0
           up_write+0x2d/0x490
           __do_sys_swapon+0x94b/0x3550
           __x64_sys_swapon+0x54/0x80
           do_syscall_64+0xa4/0x4b0
           entry_SYSCALL_64_after_hwframe+0x49/0xbe
          RIP: 0033:0x7f15da0a0dc7
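      
      To make the new control flow concrete, here is a sketch of the resulting
      error-handling shape in __do_sys_swapon() (labels as named above; the
      intervening setup code and the success path, which unlocks the inode
      itself later on, are elided, so this is illustrative only):
      
          error = claim_swapfile(p, inode);   /* no longer takes inode_lock */
          if (unlikely(error))
              goto bad_swap;
      
          inode_lock(inode);
          if (IS_SWAPFILE(inode)) {
              error = -EBUSY;
              goto bad_swap_unlock_inode;
          }
      
          /* the rest of swapon(): every failure from here on uses
           * "goto bad_swap_unlock_inode", never "goto bad_swap" */
      
      bad_swap_unlock_inode:
          inode_unlock(inode);
      bad_swap:
          /* common cleanup: the inode is guaranteed to be unlocked here */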
      
      Fixes: 1638045c ("mm: set S_SWAPFILE on blockdev swap devices")
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Tested-by: Qais Youef <qais.yousef@arm.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: <stable@vger.kernel.org>
      Link: http://lkml.kernel.org/r/20200206090132.154869-1-naohiro.aota@wdc.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d795a90e
  5. 22 Feb 2020: 1 commit
  6. 04 Feb 2020: 1 commit
  7. 01 Feb 2020: 1 commit
  8. 01 Dec 2019: 1 commit
  9. 20 Aug 2019: 2 commits
  10. 13 Jul 2019: 2 commits
    • mm, swap: use rbtree for swap_extent · 4efaceb1
      Aaron Lu committed
      swap_extent is used to map swap page offset to backing device's block
      offset.  For a continuous block range, one swap_extent is used and all
      these swap_extents are managed in a linked list.
      
      These swap_extents are used by map_swap_entry() during swap's read and
      write path.  To find out the backing device's block offset for a page
      offset, the swap_extent list will be traversed linearly, with
      curr_swap_extent being used as a cache to speed up the search.
      
      This works well as long as there are not many swap_extents or only a few
      processes access the swap device, but when the swap device has many
      extents and a number of processes access it concurrently, it can be a
      problem.  On one of our servers, the disk's remaining size is tight:
      
        $df -h
        Filesystem      Size  Used Avail Use% Mounted on
        ... ...
        /dev/nvme0n1p1  1.8T  1.3T  504G  72% /home/t4
      
      When creating an 80G swapfile there, there are as many as 84656 swap
      extents.  The end result is that the kernel spends about 30% of its time
      in map_swap_entry() and swap throughput is only 70MB/s.
      
      As a comparison, when I used a smaller swapfile, like 4G, whose
      swap_extent count dropped to 2000, swap throughput was back to 400-500MB/s
      and map_swap_entry() was about 3%.
      
      One downside of using an rbtree for swap_extent is that 'struct rb_node'
      takes 24 bytes while 'struct list_head' takes 16 bytes, i.e. 8 bytes more
      per swap_extent.  For a swapfile that has 80k swap_extents, that
      means 625KiB more memory consumed.
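      
      For illustration, here is a sketch of the rbtree lookup that replaces the
      linear list walk (simplified; swap_extent_root and the rb_node member are
      what the patch adds, start_page and nr_pages are existing swap_extent
      fields, and the kernel's <linux/rbtree.h> API is used):
      
      #include <linux/rbtree.h>
      
      /* Find the swap_extent covering @offset by descending an rbtree keyed
       * on each extent's starting page offset: O(log n) instead of O(n). */
      static struct swap_extent *
      offset_to_swap_extent(struct swap_info_struct *sis, unsigned long offset)
      {
          struct rb_node *rb = sis->swap_extent_root.rb_node;
          struct swap_extent *se;
      
          while (rb) {
              se = rb_entry(rb, struct swap_extent, rb_node);
              if (offset < se->start_page)
                  rb = rb->rb_left;
              else if (offset >= se->start_page + se->nr_pages)
                  rb = rb->rb_right;
              else
                  return se;
          }
          return NULL;    /* callers treat a miss as a bug */
      }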
      
      Test:
      
      Since it's not possible to reboot that server, I could not test this patch
      directly there.  Instead, I tested it on another server with an NVMe disk.
      
      I created a 20G swapfile on an NVMe backed XFS fs.  By default, the
      filesystem is quite clean and the created swapfile has only 2 extents.
      Testing vanilla and this patch shows no obvious performance difference
      when swapfile is not fragmented.
      
      To see the patch's effects, I used some tweaks to manually fragment the
      swapfile by breaking the extent at 1M boundary.  This made the swapfile
      have 20K extents.
      
        nr_task=4
        kernel   swapout(KB/s) map_swap_entry(perf)  swapin(KB/s) map_swap_entry(perf)
        vanilla  165191           90.77%             171798          90.21%
        patched  858993 +420%      2.16%             715827 +317%     0.77%
      
        nr_task=8
        kernel   swapout(KB/s) map_swap_entry(perf)  swapin(KB/s) map_swap_entry(perf)
        vanilla  306783           92.19%             318145          87.76%
        patched  954437 +211%      2.35%            1073741 +237%     1.57%
      
      swapout: swap-out throughput in KB/s, higher is better.
      1st map_swap_entry: CPU cycles percentage sampled by perf during swap-out.
      swapin: swap-in throughput in KB/s, higher is better.
      2nd map_swap_entry: CPU cycles percentage sampled by perf during swap-in.
      
      nr_task=1 doesn't show any difference; this is because curr_swap_extent
      can effectively cache the correct swap extent for a single-task workload.
      
      [akpm@linux-foundation.org: s/BUG_ON(1)/BUG()/]
      Link: http://lkml.kernel.org/r/20190523142404.GA181@aaronlu
      Signed-off-by: Aaron Lu <ziqian.lzq@antfin.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4efaceb1
    • mm, swap: fix race between swapoff and some swap operations · eb085574
      Huang Ying committed
      When swapin is performed, after getting the swap entry information from
      the page table, the system will swap in the entry without holding any lock
      that prevents the swap device from being swapped off.  This may cause a
      race like the one below:
      
      CPU 1				CPU 2
      -----				-----
      				do_swap_page
      				  swapin_readahead
      				    __read_swap_cache_async
      swapoff				      swapcache_prepare
        p->swap_map = NULL		        __swap_duplicate
      					  p->swap_map[?] /* !!! NULL pointer access */
      
      Because swapoff is usually only done at system shutdown, the race may not
      hit many people in practice.  But it is still a race that needs to be fixed.
      
      To fix the race, get_swap_device() is added to check whether the specified
      swap entry is valid in its swap device.  If so, it will keep the swap
      entry valid via preventing the swap device from being swapoff, until
      put_swap_device() is called.
      
      Because swapoff() is a very rare code path, to make the normal path run as
      fast as possible, rcu_read_lock/unlock() and synchronize_rcu() are used
      instead of a reference count to implement get/put_swap_device().  From
      get_swap_device() to put_swap_device(), the RCU reader side is locked, so
      synchronize_rcu() in swapoff() will wait until put_swap_device() is
      called.
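      
      Condensed to its essentials (a sketch, not the full patch; SWP_VALID is
      the validity flag the patch introduces, and error handling is trimmed),
      the pair looks roughly like this:
      
      /* An RCU read-side critical section pins the swap_info_struct: swapoff
       * clears the validity flag and then calls synchronize_rcu(), so it
       * cannot tear down the device while a reader is inside the pair. */
      struct swap_info_struct *get_swap_device(swp_entry_t entry)
      {
          struct swap_info_struct *si = swp_swap_info(entry);
      
          if (!si)
              return NULL;
      
          rcu_read_lock();
          if (!(si->flags & SWP_VALID) || swp_offset(entry) >= si->max) {
              rcu_read_unlock();
              return NULL;
          }
          return si;          /* caller must call put_swap_device() */
      }
      
      static inline void put_swap_device(struct swap_info_struct *si)
      {
          rcu_read_unlock();
      }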
      
      In addition to the swap_map, cluster_info, etc. data structures in struct
      swap_info_struct, the swap cache radix tree will be freed after swapoff,
      so this patch fixes the race between swap cache lookup and swapoff
      too.
      
      Races between some other swap cache usages and swapoff are fixed too via
      calling synchronize_rcu() between clearing PageSwapCache() and freeing
      swap cache data structure.
      
      Another possible method to fix this is to use preempt_off() +
      stop_machine() to prevent the swap device from being swapped off while its
      data structures are being accessed.  The hot-path overhead of both methods
      is similar.  The advantages of the RCU based method are:
      
      1. stop_machine() may disturb the normal execution code path on other
         CPUs.
      
      2. File cache uses RCU to protect its radix tree.  If the similar
         mechanism is used for swap cache too, it is easier to share code
         between them.
      
      3. RCU is used to protect swap cache in total_swapcache_pages() and
         exit_swap_address_space() already.  The two mechanisms can be
         merged to simplify the logic.
      
      Link: http://lkml.kernel.org/r/20190522015423.14418-1-ying.huang@intel.com
      Fixes: 235b6217 ("mm/swap: add cluster lock")
      Signed-off-by: N"Huang, Ying" <ying.huang@intel.com>
      Reviewed-by: NAndrea Parri <andrea.parri@amarulasolutions.com>
      Not-nacked-by: NHugh Dickins <hughd@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Jérôme Glisse <jglisse@redhat.com>
      Cc: Yang Shi <yang.shi@linux.alibaba.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Dave Jiang <dave.jiang@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      eb085574
  11. 21 May 2019: 1 commit
  12. 20 Apr 2019: 3 commits
    • mm: swapoff: shmem_unuse() stop eviction without igrab() · af53d3e9
      Hugh Dickins committed
      The igrab() in shmem_unuse() looks good, but we forgot that it gives no
      protection against concurrent unmounting: a point made by Konstantin
      Khlebnikov eight years ago, and then fixed in 2.6.39 by 778dd893
      ("tmpfs: fix race between umount and swapoff").  The current 5.1-rc
      swapoff is liable to hit "VFS: Busy inodes after unmount of tmpfs.
      Self-destruct in 5 seconds.  Have a nice day..." followed by GPF.
      
      Once again, give up on using igrab(); but don't go back to making such
      heavy-handed use of shmem_swaplist_mutex as last time: that would spoil
      the new design, and I expect could deadlock inside shmem_swapin_page().
      
      Instead, shmem_unuse() just raises a "stop_eviction" count in the shmem-
      specific inode, and shmem_evict_inode() waits for that to go down to 0.
      It is called "stop_eviction" rather than "swapoff_busy" because it can be
      put to use for others later (huge tmpfs patches expect to use it).
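      
      Sketched (not the literal diff; the field is assumed to be an atomic
      count in shmem_inode_info, and wait_var_event()/wake_up_var() are the
      generic kernel wait primitives), the pairing looks like:
      
          /* shmem_unuse(), before working on an inode's swap entries: */
          atomic_inc(&info->stop_eviction);
          /* ... swap the inode's pages back in ... */
          if (atomic_dec_and_test(&info->stop_eviction))
              wake_up_var(&info->stop_eviction);
      
          /* shmem_evict_inode(): wait until no swapoff is using this inode */
          wait_var_event(&info->stop_eviction,
                         !atomic_read(&info->stop_eviction));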
      
      That simplifies shmem_unuse(), protecting it from both unlink and
      unmount; and in practice lets it locate all the swap in its first try.
      But do not rely on that: there's still a theoretical case, when
      shmem_writepage() might have been preempted after its get_swap_page(),
      before making the swap entry visible to swapoff.
      
      [hughd@google.com: remove incorrect list_del()]
        Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1904091133570.1898@eggly.anvils
      Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1904081259400.1523@eggly.anvils
      Fixes: b56a2d8a ("mm: rid swapoff of quadratic complexity")
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Alex Xu (Hello71)" <alex_y_xu@yahoo.ca>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Kelley Nielsen <kelleynnn@gmail.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Vineeth Pillai <vpillai@digitalocean.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      af53d3e9
    • mm: swapoff: take notice of completion sooner · 64165b1a
      Hugh Dickins committed
      The old try_to_unuse() implementation was driven by find_next_to_unuse(),
      which terminated as soon as all the swap had been freed.
      
      Add inuse_pages checks now (alongside signal_pending()) to stop scanning
      mms and swap_map once finished.
      
      The same ought to be done in shmem_unuse() too, but never was before,
      and needs a different interface: so leave it as is for now.
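      
      Roughly (a sketch only: next_mm() stands in for the real mm-list
      iteration and the unuse_mm() arguments are simplified), the scanning loop
      in try_to_unuse() becomes:
      
          /* Stop as soon as the device is empty or a signal arrives, instead
           * of scanning every remaining mm and the whole swap_map. */
          while (READ_ONCE(si->inuse_pages) &&
                 !signal_pending(current) &&
                 (mm = next_mm()) != NULL) {
              unuse_mm(mm, type);
              mmput(mm);
          }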
      
      Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1904081258200.1523@eggly.anvils
      Fixes: b56a2d8a ("mm: rid swapoff of quadratic complexity")
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Alex Xu (Hello71)" <alex_y_xu@yahoo.ca>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Kelley Nielsen <kelleynnn@gmail.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Vineeth Pillai <vpillai@digitalocean.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      64165b1a
    • mm: swapoff: remove too limiting SWAP_UNUSE_MAX_TRIES · dd862deb
      Hugh Dickins committed
      SWAP_UNUSE_MAX_TRIES 3 appeared to work well in earlier testing, but
      further testing has proved it to be a source of unnecessary swapoff
      EBUSY failures (which can then be followed by unmount EBUSY failures).
      
      When mmget_not_zero() or shmem's igrab() fails, there is an mm exiting
      or inode being evicted, freeing up swap independent of try_to_unuse().
      Those typically completed much sooner than the old quadratic swapoff,
      but now it's more common that swapoff may need to wait for them.
      
      It's possible to move those cases from init_mm.mmlist and shmem_swaplist
      to separate "exiting" swaplists, and have try_to_unuse() wait for those
      lists to be emptied; but we've not bothered with that in the past, and
      don't want to risk missing some other forgotten case.  So just revert to
      cycling around until the swap is gone, without any retry limit.
      
      Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1904081256170.1523@eggly.anvils
      Fixes: b56a2d8a ("mm: rid swapoff of quadratic complexity")
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Alex Xu (Hello71)" <alex_y_xu@yahoo.ca>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Kelley Nielsen <kelleynnn@gmail.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Vineeth Pillai <vpillai@digitalocean.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dd862deb
  13. 06 Mar 2019: 4 commits
    • mm/swapfile.c: use struct_size() in kvzalloc() · 96008744
      Gustavo A. R. Silva committed
      One of the more common cases of allocation size calculations is finding
      the size of a structure that has a zero-sized array at the end, along
      with memory for some number of elements for that array.  For example:
      
        struct foo {
            int stuff;
            struct boo entry[];
        };
      
        size = sizeof(struct foo) + count * sizeof(struct boo);
        instance = kvzalloc(size, GFP_KERNEL);
      
      Instead of leaving these open-coded and prone to type mistakes, we can
      now use the new struct_size() helper:
      
        instance = kvzalloc(struct_size(instance, entry, count), GFP_KERNEL);
      
      Notice that, in this case, variable size is not necessary, hence it is
      removed.
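      
      For reference, struct_size(instance, entry, count) evaluates to roughly
      sizeof(*instance) + count * sizeof(*instance->entry), except that the
      multiplication and addition are overflow-checked (saturating to SIZE_MAX,
      so an overflowing request fails to allocate instead of being undersized).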
      
      This code was detected with the help of Coccinelle.
      
      Link: http://lkml.kernel.org/r/20190221154622.GA19599@embeddedor
      Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      96008744
    • numa: make "nr_node_ids" unsigned int · b9726c26
      Alexey Dobriyan committed
      Number of NUMA nodes can't be negative.
      
      This saves a few bytes on x86_64:
      
      	add/remove: 0/0 grow/shrink: 4/21 up/down: 27/-265 (-238)
      	Function                                     old     new   delta
      	hv_synic_alloc.cold                           88     110     +22
      	prealloc_shrinker                            260     262      +2
      	bootstrap                                    249     251      +2
      	sched_init_numa                             1566    1567      +1
      	show_slab_objects                            778     777      -1
      	s_show                                      1201    1200      -1
      	kmem_cache_init                              346     345      -1
      	__alloc_workqueue_key                       1146    1145      -1
      	mem_cgroup_css_alloc                        1614    1612      -2
      	__do_sys_swapon                             4702    4699      -3
      	__list_lru_init                              655     651      -4
      	nic_probe                                   2379    2374      -5
      	store_user_store                             118     111      -7
      	red_zone_store                               106      99      -7
      	poison_store                                 106      99      -7
      	wq_numa_init                                 348     338     -10
      	__kmem_cache_empty                            75      65     -10
      	task_numa_free                               186     173     -13
      	merge_across_nodes_store                     351     336     -15
      	irq_create_affinity_masks                   1261    1246     -15
      	do_numa_crng_init                            343     321     -22
      	task_numa_fault                             4760    4737     -23
      	swapfile_init                                179     156     -23
      	hv_synic_alloc                               536     492     -44
      	apply_wqattrs_prepare                        746     695     -51
      
      Link: http://lkml.kernel.org/r/20190201223029.GA15820@avx2
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b9726c26
    • mm, swap: bounds check swap_info array accesses to avoid NULL derefs · c10d38cc
      Daniel Jordan committed
      Dan Carpenter reports a potential NULL dereference in
      get_swap_page_of_type:
      
        Smatch complains that the NULL checks on "si" aren't consistent.  This
        seems like a real bug because we have not ensured that the type is
        valid and so "si" can be NULL.
      
      Add the missing check for NULL, taking care to use a read barrier to
      ensure CPU1 observes CPU0's updates in the correct order:
      
           CPU0                           CPU1
           alloc_swap_info()              if (type >= nr_swapfiles)
             swap_info[type] = p              /* handle invalid entry */
             smp_wmb()                    smp_rmb()
             ++nr_swapfiles               p = swap_info[type]
      
      Without smp_rmb, CPU1 might observe CPU0's write to nr_swapfiles before
      CPU0's write to swap_info[type] and read NULL from swap_info[type].
      
      Ying Huang noticed other places in swapfile.c don't order these reads
      properly.  Introduce swap_type_to_swap_info to encourage correct usage.
      
      Use READ_ONCE and WRITE_ONCE to follow the Linux Kernel Memory Model
      (see tools/memory-model/Documentation/explanation.txt).
      
      This ordering need not be enforced in places where swap_lock is held
      (e.g.  si_swapinfo) because swap_lock serializes updates to nr_swapfiles
      and the swap_info array.
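      
      A sketch of the resulting helper and its pairing with the writer
      (condensed from the description above; surrounding code is omitted):
      
      /* Reader: returns NULL for an out-of-range type.  The smp_rmb() pairs
       * with the writer's smp_wmb(), so a type below nr_swapfiles is only
       * seen after the corresponding swap_info[] pointer is visible. */
      static struct swap_info_struct *swap_type_to_swap_info(int type)
      {
          if (type >= READ_ONCE(nr_swapfiles))
              return NULL;
      
          smp_rmb();
          return READ_ONCE(swap_info[type]);
      }
      
          /* Writer side, in alloc_swap_info(), when a new type is added: */
          WRITE_ONCE(swap_info[type], p);
          smp_wmb();                  /* publish the pointer before the count */
          WRITE_ONCE(nr_swapfiles, nr_swapfiles + 1);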
      
      Link: http://lkml.kernel.org/r/20190131024410.29859-1-daniel.m.jordan@oracle.com
      Fixes: ec8acf20 ("swap: add per-partition lock for swapfile")
      Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Suggested-by: "Huang, Ying" <ying.huang@intel.com>
      Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Omar Sandoval <osandov@fb.com>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c10d38cc
    • mm: rid swapoff of quadratic complexity · b56a2d8a
      Vineeth Remanan Pillai committed
      This patch was initially posted by Kelley Nielsen.  Reposting the patch
      with all review comments addressed and with minor modifications and
      optimizations.  Also, folding in the fixes offered by Hugh Dickins and
      Huang Ying.  Tests were rerun and commit message updated with new
      results.
      
      try_to_unuse() is of quadratic complexity, with a lot of wasted effort.
      It unuses swap entries one by one, potentially iterating over all the
      page tables for all the processes in the system for each one.
      
      This new proposed implementation of try_to_unuse simplifies its
      complexity to linear.  It iterates over the system's mms once, unusing
      all the affected entries as it walks each set of page tables.  It also
      makes similar changes to shmem_unuse.
      
      Improvement
      
      swapoff was called on a swap partition containing about 6G of data, in a
      VM (8 CPUs, 16G RAM), and calls to unuse_pte_range() were counted.
      
      Present implementation....about 1200M calls (8 min, avg 80% cpu util).
      Prototype.................about  9.0K calls (3 min, avg  5% cpu util).
      
      Details
      
      In shmem_unuse(), iterate over the shmem_swaplist and, for each
      shmem_inode_info that contains a swap entry, pass it to
      shmem_unuse_inode(), along with the swap type.  In shmem_unuse_inode(),
      iterate over its associated xarray, and store the index and value of
      each swap entry in an array for passing to shmem_swapin_page() outside
      of the RCU critical section.
      
      In try_to_unuse(), instead of iterating over the entries in the type and
      unusing them one by one, perhaps walking all the page tables for all the
      processes for each one, iterate over the mmlist, making one pass.  Pass
      each mm to unuse_mm() to begin its page table walk, and during the walk,
      unuse all the ptes that have backing store in the swap type received by
      try_to_unuse().  After the walk, check the type for orphaned swap
      entries with find_next_to_unuse(), and remove them from the swap cache.
      If find_next_to_unuse() starts over at the beginning of the type, repeat
      the check of the shmem_swaplist and the walk a maximum of three times.
      
      Change unuse_mm() and the intervening walk functions down to
      unuse_pte_range() to take the type as a parameter, and to iterate over
      their entire range, calling the next function down on every iteration.
      In unuse_pte_range(), make a swap entry from each pte in the range using
      the passed in type.  If it has backing store in the type, call
      swapin_readahead() to retrieve the page and pass it to unuse_pte().
      
      Pass the count of pages_to_unuse down the page table walks in
      try_to_unuse(), and return from the walk when the desired number of
      pages has been swapped back in.
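      
      In outline (a sketch that compresses the description above: locking, mm
      reference counting, frontswap, the pages_to_unuse budget and the retry
      logic are all omitted, function arguments are simplified, and
      drop_from_swap_cache() is a hypothetical stand-in for the orphan-entry
      cleanup):
      
      static int try_to_unuse_sketch(struct swap_info_struct *si, unsigned int type)
      {
          struct mm_struct *mm;
          unsigned int offset = 0;
      
          shmem_unuse(type);              /* shmem swap entries first */
      
          /* One pass over the mm list; each page-table walk swaps in every
           * pte of this type that it encounters. */
          list_for_each_entry(mm, &init_mm.mmlist, mmlist)
              unuse_mm(mm, type);
      
          /* Mop up entries that remain only in the swap cache. */
          while ((offset = find_next_to_unuse(si, offset)) != 0)
              drop_from_swap_cache(si, offset);
      
          return 0;
      }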
      
      Link: http://lkml.kernel.org/r/20190114153129.4852-2-vpillai@digitalocean.com
      Signed-off-by: Vineeth Remanan Pillai <vpillai@digitalocean.com>
      Signed-off-by: Kelley Nielsen <kelleynnn@gmail.com>
      Signed-off-by: Huang Ying <ying.huang@intel.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Rik van Riel <riel@surriel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b56a2d8a
  14. 29 Dec 2018: 2 commits
    • mm, swap: fix swapoff with KSM pages · 7af7a8e1
      Huang Ying committed
      KSM pages may be mapped into multiple VMAs that cannot be reached from
      one anon_vma.  So during swapin, a new copy of the page needs to be
      generated if a different anon_vma is needed; please refer to the comments
      of ksm_might_need_to_copy() for details.
      
      During swapoff, unuse_vma() uses the anon_vma (if available) to locate the
      VMA and virtual address mapped to the page, so not all mappings of a
      swapped-out KSM page can be found.  So in try_to_unuse(), even if the swap
      count of a swap entry isn't zero, the page needs to be deleted from the
      swap cache, so that in the next round a new page can be allocated and
      swapped in for the other mappings of the swapped-out KSM page.
      
      But this contradicts the THP swap support, where the THP can be deleted
      from the swap cache only after the swap count of every swap entry in the
      huge swap cluster backing the THP has reached 0.  So try_to_unuse() was
      changed in commit e0709829 ("mm, THP, swap: support to reclaim swap
      space for THP swapped out") to check that before deleting a page from the
      swap cache, but this has broken KSM swapoff too.
      
      Fortunately, KSM is for normal pages only, so the original behavior for
      KSM pages can be restored easily by checking PageTransCompound().
      That is how this patch works.
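      
      Conceptually (a sketch of the condition, not the exact hunk; locking and
      the surrounding loop are omitted), the swap-cache deletion in
      try_to_unuse() becomes:
      
          /* A KSM page is never a THP, so for !PageTransCompound pages restore
           * the old rule: drop the page from the swap cache even while its
           * swap count is non-zero, so the next round can allocate a fresh
           * copy for the remaining mappings.  Only THPs keep the "every swap
           * count in the cluster must be zero" requirement. */
          if ((!swap_count(*swap_map) || !PageTransCompound(page)) &&
              PageSwapCache(page) && !PageWriteback(page))
              delete_from_swap_cache(compound_head(page));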
      
      The bug was introduced by e0709829 ("mm, THP, swap: support to reclaim
      swap space for THP swapped out"), which was merged in v4.14-rc1.  So I
      think we should backport the fix to stable kernels from 4.14 on.  But Hugh
      thinks it may be rare for KSM pages to be in the swap device at swapoff
      time, which is why nobody has reported the bug so far.
      
      Link: http://lkml.kernel.org/r/20181226051522.28442-1-ying.huang@intel.com
      Fixes: e0709829 ("mm, THP, swap: support to reclaim swap space for THP swapped out")
      Signed-off-by: N"Huang, Ying" <ying.huang@intel.com>
      Reported-by: NHugh Dickins <hughd@google.com>
      Tested-by: NHugh Dickins <hughd@google.com>
      Acked-by: NHugh Dickins <hughd@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7af7a8e1
    • mm/swap: use nr_node_ids for avail_lists in swap_info_struct · 66f71da9
      Aaron Lu committed
      Since a2468cc9 ("swap: choose swap device according to numa node"), the
      avail_lists field of swap_info_struct has been an array with MAX_NUMNODES
      elements.  This increased the size of swap_info_struct to 40KiB, which
      needs an order-4 page to hold it.
      
      This is not optimal in that:
      1. Most systems have far fewer than MAX_NUMNODES (1024) nodes, so it
         is a waste of memory;
      2. It could cause swapon failure if the swap device is swapped on
         after the system has been running for a while, because no order-4
         page may be available, as pointed out by Vasily Averin.
      
      Solve the above two issues by using nr_node_ids (the actual number of
      possible nodes on the running system) for avail_lists instead of
      MAX_NUMNODES.
      
      nr_node_ids is unknown at compile time, so it can't be used directly when
      declaring this array.  What I did here is to declare avail_lists as a
      zero-element array and allocate space for it when allocating
      swap_info_struct.  The reason for keeping an array rather than a pointer
      is that plist_for_each_entry needs the field to be part of the struct, so
      a pointer will not work.
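      
      A sketch of the shape of the change (the real swap_info_struct has many
      more fields; error handling is trimmed, and a flexible array member
      stands in for the "zero-element array" mentioned above):
      
      struct swap_info_struct {
          /* ... other fields ... */
          struct plist_node avail_lists[];    /* was: avail_lists[MAX_NUMNODES] */
      };
      
      static struct swap_info_struct *alloc_swap_info(void)
      {
          struct swap_info_struct *p;
          unsigned int size = sizeof(*p) +
                              nr_node_ids * sizeof(struct plist_node);
      
          p = kvzalloc(size, GFP_KERNEL);
          if (!p)
              return ERR_PTR(-ENOMEM);
          /* ... initialization ... */
          return p;
      }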
      
      This patch is on top of Vasily Averin's fix commit.  I think the use of
      kvzalloc for swap_info_struct is still needed in case nr_node_ids is
      really big on some systems.
      
      Link: http://lkml.kernel.org/r/20181115083847.GA11129@intel.com
      Signed-off-by: Aaron Lu <aaron.lu@intel.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Vasily Averin <vvs@virtuozzo.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      66f71da9
  15. 19 Nov 2018: 1 commit
  16. 27 Oct 2018: 5 commits