1. 08 Nov, 2016: 10 commits
  2. 08 Oct, 2016: 1 commit
    • thp: reduce usage of huge zero page's atomic counter · 6fcb52a5
      Committed by Aaron Lu
      The global zero page is used to satisfy an anonymous read fault.  If
      THP (Transparent Huge Page) is enabled, the global huge zero page is
      used instead.  The global huge zero page uses an atomic counter for
      reference counting and is allocated/freed dynamically according to its
      counter value.
      
      CPU time spent on that counter will greatly increase if there are a lot
      of processes doing anonymous read faults.  This patch proposes a way to
      reduce the access to the global counter so that the CPU load can be
      reduced accordingly.
      
      To do this, a new mm_struct flag is introduced:
      MMF_USED_HUGE_ZERO_PAGE.  With this flag, a process only needs to touch
      the global counter in two cases:
      
       1. The first time it uses the global huge zero page;
       2. When mm_users of its mm_struct reaches zero.
      
      Note that right now, the huge zero page is eligible to be freed as soon
      as its last use goes away.  With this patch, the page will not be
      eligible to be freed until the last process that ever used it exits.
      
      Also, with the use of mm_users, kernel threads are not eligible to use
      the huge zero page.  Since no kthread uses the huge zero page today,
      there is no difference after applying this patch.  But if that is not
      desired, I can change it to trigger when mm_count reaches zero instead.
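      
      The following is a minimal sketch of the idea, based on the description
      above and the function names visible in the perf output below; it is
      illustrative only, not the exact patch (field and helper details are
      simplified assumptions):
      
        /* Illustrative sketch, not the verbatim kernel code. */
        static struct page *mm_get_huge_zero_page(struct mm_struct *mm)
        {
                /* Fast path: this mm has already taken its reference. */
                if (test_bit(MMF_USED_HUGE_ZERO_PAGE, &mm->flags))
                        return READ_ONCE(huge_zero_page);
        
                /* Slow path: bump the global counter once for this mm. */
                if (!get_huge_zero_page())
                        return NULL;
        
                /* Another thread of the same mm may have raced with us. */
                if (test_and_set_bit(MMF_USED_HUGE_ZERO_PAGE, &mm->flags))
                        put_huge_zero_page();
        
                return READ_ONCE(huge_zero_page);
        }
        
        /* The matching put happens once, when mm_users drops to zero. */
        void mm_put_huge_zero_page(struct mm_struct *mm)
        {
                if (test_bit(MMF_USED_HUGE_ZERO_PAGE, &mm->flags))
                        put_huge_zero_page();
        }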
      
      Test case used on Haswell EP:
      
        usemem -n 72 --readonly -j 0x200000 100G
      
      This spawns 72 processes; each mmaps 100G of anonymous space and then
      reads that space sequentially, read-only, with a step of 2MB.
      
        CPU cycles from perf report for base commit:
            54.03%  usemem   [kernel.kallsyms]   [k] get_huge_zero_page
        CPU cycles from perf report for this commit:
             0.11%  usemem   [kernel.kallsyms]   [k] mm_get_huge_zero_page
      
      Performance(throughput) of the workload for base commit: 1784430792
      Performance(throughput) of the workload for this commit: 4726928591
      164% increase.
      
      Runtime of the workload for base commit: 707592 us
      Runtime of the workload for this commit: 303970 us
      57% drop.
      
      Link: http://lkml.kernel.org/r/fe51a88f-446a-4622-1363-ad1282d71385@intel.com
      Signed-off-by: Aaron Lu <aaron.lu@intel.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Ebru Akagunduz <ebru.akagunduz@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 19 Sep, 2016: 4 commits
  4. 27 Jul, 2016: 1 commit
    • dax: remove unused fault wrappers · 6b524995
      Committed by Ross Zwisler
      Remove the unused wrappers dax_fault() and dax_pmd_fault().  After this
      removal, rename __dax_fault() and __dax_pmd_fault() to dax_fault() and
      dax_pmd_fault() respectively, and update all callers.
      
      The dax_fault() and dax_pmd_fault() wrappers were initially intended to
      capture some filesystem independent functionality around page faults
      (calling sb_start_pagefault() & sb_end_pagefault(), updating file mtime
      and ctime).
      
      However, the following commits:
      
         5726b27b ("ext2: Add locking for DAX faults")
         ea3d7209 ("ext4: fix races between page faults and hole punching")
      
      added locking to the ext2 and ext4 filesystems after these common
      operations but before __dax_fault() and __dax_pmd_fault() were called.
      This means that these wrappers are no longer used, and are unlikely to
      be used in the future.
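      
      A hedged sketch of the resulting call pattern (the "example_" names and
      the dax_sem lock are made up for illustration, and the dax_fault()
      signature is simplified): the filesystem now does the pagefault
      accounting, the timestamp update and its own truncate locking, then
      calls dax_fault() (the former __dax_fault()) directly.
      
        static int example_fs_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
        {
                struct inode *inode = file_inode(vma->vm_file);
                int ret;
        
                sb_start_pagefault(inode->i_sb);        /* was done by the old wrapper */
                file_update_time(vma->vm_file);         /* mtime/ctime, as the wrapper did */
                down_read(&EXAMPLE_I(inode)->dax_sem);  /* fs lock vs. truncate/hole punch */
        
                ret = dax_fault(vma, vmf, example_get_block);   /* formerly __dax_fault() */
        
                up_read(&EXAMPLE_I(inode)->dax_sem);
                sb_end_pagefault(inode->i_sb);
                return ret;
        }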
      
      XFS has had locking analogous to what was recently added to ext2 and
      ext4 since DAX support was initially introduced by:
      
         6b698ede ("xfs: add DAX file operations support")
      
      Link: http://lkml.kernel.org/r/20160714214049.20075-2-ross.zwisler@linux.intel.com
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andreas Dilger <adilger.kernel@dilger.ca>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 13 Jul, 2016: 2 commits
  6. 28 Jun, 2016: 1 commit
    • dax: fix offset overflow in dax_io · 02395435
      Committed by Eric Sandeen
      This isn't functionally apparent for some reason, but when we test I/O
      at extreme offsets near the end of the loff_t range, such as in fstests
      xfs/071, the calculation of "max" in dax_io() can be wrong because
      pos + size overflows.
      
      For example,
      
      # xfs_io -c "pwrite 9223372036854771712 512" /mnt/test/file
      
      enters dax_io with:
      
      start 0x7ffffffffffff000
      end   0x7ffffffffffff200
      
      and the rounded up "size" variable is 0x1000.  This yields:
      
      pos + size 0x8000000000000000 (overflows loff_t)
             end 0x7ffffffffffff200
      
      Due to the overflow, the min() function picks the wrong
      value for the "max" variable, and when we pass (max - pos)
      into e.g. copy_from_iter_pmem() it is also the wrong value.
      
      This somehow(tm) gets magically absorbed without incident,
      probably because iter->count is correct.  But it seems best
      to fix it up properly by comparing the two values as
      unsigned.
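      
      The failure mode can be reproduced with a small standalone program (a
      demo of the arithmetic only, not the kernel fix itself): with the
      numbers above, the signed sum wraps negative, so a signed min() picks
      the wrong value, while comparing as unsigned yields the intended end.
      
        #include <stdio.h>
        
        int main(void)
        {
                long long pos  = 0x7ffffffffffff000LL;  /* faulting position */
                long long end  = 0x7ffffffffffff200LL;  /* pos + count */
                long long size = 0x1000;                /* rounded-up size */
        
                /* The kernel's signed addition wraps here; reproduce the wrapped
                 * value without signed-overflow UB (the conversion below is
                 * implementation-defined but wraps on common ABIs). */
                long long wrapped = (long long)((unsigned long long)pos + size);
        
                long long bad_max  = wrapped < end ? wrapped : end;  /* signed min(): wrong */
                long long good_max = (unsigned long long)pos + size > (unsigned long long)end
                                        ? end : wrapped;             /* unsigned compare: right */
        
                printf("bad_max  = %#llx\n", (unsigned long long)bad_max);
                printf("good_max = %#llx\n", (unsigned long long)good_max);
                return 0;
        }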
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  7. 21 May, 2016: 1 commit
  8. 20 May, 2016: 6 commits
    • dax: Remove i_mmap_lock protection · 4d9a2c87
      Committed by Jan Kara
      Currently faults are protected against truncate by the filesystem-specific
      i_mmap_sem and, in the case of a hole page, by the page lock. COW faults
      are protected by DAX radix tree entry locking. So there's no need for
      i_mmap_lock in the DAX code. Remove it.
      Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
    • dax: Use radix tree entry lock to protect cow faults · bc2466e4
      Committed by Jan Kara
      When handling COW faults, we cannot directly fill in the PTE as we do for
      other faults, because we rely on generic code to do proper accounting of
      the cowed page. We also have no page to lock to protect against races
      with truncate as other faults do, and we need the protection to extend
      until the moment the generic code inserts the cowed page into the PTE;
      at that point we are no longer covered by the fs-specific i_mmap_sem.
      So far we have relied on i_mmap_lock for this protection, but that is
      completely special to COW faults. To make fault locking more uniform,
      use the DAX entry lock instead.
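      
      A conceptual sketch of the scheme follows (names such as vmf->entry,
      VM_FAULT_DAX_LOCKED and the helpers are best-effort illustrations of the
      mechanism, not verified signatures): the DAX fault path keeps the radix
      tree entry locked, and generic fault code unlocks it only after the
      cowed page has been installed in the page tables.
      
        /* Conceptual sketch only; helper names follow the entry-locking scheme
         * from the "dax: New fault locking" entry below. */
        static int dax_cow_fault_sketch(struct address_space *mapping,
                                        struct vm_fault *vmf)
        {
                void *entry = grab_mapping_entry(mapping, vmf->pgoff); /* lock the entry */
        
                if (IS_ERR(entry))
                        return VM_FAULT_SIGBUS;
        
                vmf->entry = entry;          /* hand the locked entry to generic code */
                return VM_FAULT_DAX_LOCKED;  /* mm core unlocks after inserting the PTE */
        }
        
        /* ...and in the generic COW fault path, once the cowed page is mapped: */
        static void dax_cow_fault_finish_sketch(struct address_space *mapping,
                                                struct vm_fault *vmf, int ret)
        {
                if (ret & VM_FAULT_DAX_LOCKED)
                        dax_unlock_mapping_entry(mapping, vmf->pgoff);
        }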
      Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
    • dax: New fault locking · ac401cc7
      Committed by Jan Kara
      Currently DAX page fault locking is racy.
      
      CPU0 (write fault)		CPU1 (read fault)
      
      __dax_fault()			__dax_fault()
        get_block(inode, block, &bh, 0) -> not mapped
      				  get_block(inode, block, &bh, 0)
      				    -> not mapped
        if (!buffer_mapped(&bh))
          if (vmf->flags & FAULT_FLAG_WRITE)
            get_block(inode, block, &bh, 1) -> allocates blocks
        if (page) -> no
      				  if (!buffer_mapped(&bh))
      				    if (vmf->flags & FAULT_FLAG_WRITE) {
      				    } else {
      				      dax_load_hole();
      				    }
        dax_insert_mapping()
      
      And we are in a situation where we fail in dax_radix_entry() with -EIO.
      
      Another problem with the current DAX page fault locking is that there is
      no race-free way to clear the dirty tag in the radix tree. We can always
      end up with a clean radix tree and dirty data in the CPU cache.
      
      We fix the first problem by introducing locking of exceptional radix
      tree entries in DAX mappings, which acts very much like the page lock
      and thus properly serializes faults against the same mapping index. The
      same lock can later be used to avoid races when clearing the radix tree
      dirty tag.
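      
      A simplified sketch of the resulting per-index serialization (helper
      names and signatures illustrate the scheme described above; this is not
      the verbatim kernel code): both racing faults contend on the same locked
      radix tree entry, so the second fault only proceeds once the first one
      has finished and can see the blocks it allocated.
      
        static int dax_fault_sketch(struct address_space *mapping, struct inode *inode,
                                    struct vm_fault *vmf, get_block_t get_block)
        {
                sector_t block = (sector_t)vmf->pgoff << (PAGE_SHIFT - inode->i_blkbits);
                struct buffer_head bh = {};
                void *entry;
                int ret;
        
                /* Acts like lock_page(): a concurrent fault on the same index
                 * sleeps here until the entry is released below. */
                entry = grab_mapping_entry(mapping, vmf->pgoff);
                if (IS_ERR(entry))
                        return VM_FAULT_SIGBUS;
        
                ret = get_block(inode, block, &bh, vmf->flags & FAULT_FLAG_WRITE);
                /* ...map the block or load the hole page, update the entry... */
        
                put_locked_mapping_entry(mapping, vmf->pgoff, entry);  /* unlock + wake */
                return ret;
        }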
      Reviewed-by: NeilBrown <neilb@suse.com>
      Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
    • dax: Define DAX lock bit for radix tree exceptional entry · e804315d
      Committed by Jan Kara
      We will use the lowest available bit in the radix tree exceptional entry
      for locking the entry. Define it. Also clean up the definitions of the
      DAX entry type bits in DAX exceptional entries to use named constants
      instead of hardcoded numbers, and clean up the checking of these bits so
      that it does not rely on how the other bits in the entry are set.
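      
      An illustrative layout of such an entry (bit positions and constant
      names here are examples, not the exact kernel definitions): the type
      bits and the lock bit live in the low bits of the exceptional entry
      value, and each check looks only at the bit it cares about.
      
        /* Example layout only; the real constants and bit positions may differ. */
        #define RADIX_DAX_EXCEPTIONAL  (1UL << 0)  /* marks a non-page (exceptional) entry */
        #define RADIX_DAX_ENTRY_LOCK   (1UL << 1)  /* lowest free bit: entry is locked */
        #define RADIX_DAX_PTE          (1UL << 2)  /* entry describes a PTE-sized mapping */
        #define RADIX_DAX_PMD          (1UL << 3)  /* entry describes a PMD-sized mapping */
        
        static inline bool dax_entry_locked(void *entry)
        {
                return (unsigned long)entry & RADIX_DAX_ENTRY_LOCK;
        }
        
        static inline bool dax_entry_is_pmd(void *entry)
        {
                /* Check only the type bit; make no assumption about the other bits. */
                return (unsigned long)entry & RADIX_DAX_PMD;
        }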
      Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
    • dax: Make huge page handling depend on CONFIG_BROKEN · 348e967a
      Committed by Jan Kara
      Currently the handling of huge pages for DAX is racy. For example the
      following can happen:
      
      CPU0 (THP write fault)			CPU1 (normal read fault)
      
      __dax_pmd_fault()			__dax_fault()
        get_block(inode, block, &bh, 0) -> not mapped
      					get_block(inode, block, &bh, 0)
      					  -> not mapped
        if (!buffer_mapped(&bh) && write)
          get_block(inode, block, &bh, 1) -> allocates blocks
        truncate_pagecache_range(inode, lstart, lend);
      					dax_load_hole();
      
      This results in data corruption, since the process on CPU1 won't see the
      changes CPU0 made to the file.
      
      The race can happen even when two normal faults race; with THP, however,
      the situation is even worse because the two faults don't operate on the
      same radix tree entries, and we want to use those entries for
      serialization. So make THP support in the DAX code depend on
      CONFIG_BROKEN for now.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
    • dax: Fix condition for filling of PMD holes · b9953536
      Committed by Jan Kara
      Currently dax_pmd_fault() decides to fill a PMD-sized hole only if the
      returned buffer has BH_Uptodate set. However, that flag never gets set
      for any mapping buffer, so that branch is actually dead code. The
      BH_Uptodate check doesn't make any sense, so just remove it.
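      
      Roughly, the dead branch looked like the following (a simplified
      reconstruction from the description above, not the verbatim code):
      since get_block() never sets BH_Uptodate on a mapping buffer, the
      hole-filling path could never be reached.
      
        /* Simplified reconstruction of the condition being removed. */
        if (!write && !buffer_mapped(&bh) && buffer_uptodate(&bh)) {
                /* Fill the PMD-sized hole with the huge zero page; never
                 * reached, because buffer_uptodate() is never true here. */
        }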
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
  9. 19 May, 2016: 4 commits
  10. 17 May, 2016: 7 commits
  11. 13 May, 2016: 1 commit
    • dax: call get_blocks() with create == 1 for write faults to unwritten extents · aef39ab1
      Committed by Jan Kara
      Currently, __dax_fault() does not call the get_blocks() callback with the
      create argument set when the initial get_blocks() call during a write
      fault returned an unwritten extent. This is because filesystems were
      originally supposed to convert unwritten extents to written ones using
      the complete_unwritten() callback. That approach was later abandoned in
      favor of using pre-zeroed blocks, but the condition deciding whether
      get_blocks() needs to be called with create == 1 remained.
      
      Fix the condition so that filesystems are not forced to zero out and
      convert unwritten extents when get_blocks() is called with create == 0
      (which introduces unnecessary overhead for read faults and can be
      problematic as the filesystem may be read-only).
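      
      A hedged sketch of the corrected check (simplified from the description
      above, not the verbatim diff): a write fault now asks the filesystem to
      allocate or convert blocks when the initial lookup returned either a
      hole or an unwritten extent.
      
        /* Simplified sketch of the corrected write-fault condition. */
        if (!buffer_mapped(&bh) || buffer_unwritten(&bh)) {
                if (vmf->flags & FAULT_FLAG_WRITE)
                        error = get_block(inode, block, &bh, 1);  /* create == 1 */
        }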
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
  12. 02 May, 2016: 1 commit
  13. 05 Apr, 2016: 1 commit