1. 21 Jan 2016, 1 commit
    • V
      mm: memcontrol: charge swap to cgroup2 · 37e84351
      Authored by Vladimir Davydov
      This patchset introduces swap accounting to cgroup2.
      
      This patch (of 7):
      
      In the legacy hierarchy we charge memsw, which is dubious, because:
      
       - memsw.limit must be >= memory.limit, so it is impossible to limit
         swap usage less than memory usage. Taking into account the fact that
         the primary limiting mechanism in the unified hierarchy is
         memory.high while memory.limit is either left unset or set to a very
         large value, moving memsw.limit knob to the unified hierarchy would
         effectively make it impossible to limit swap usage according to the
         user preference.
      
       - memsw.usage != memory.usage + swap.usage, because a page occupying
         both swap entry and a swap cache page is charged only once to memsw
         counter. As a result, it is possible to effectively eat up to
         memory.limit of memory pages *and* memsw.limit of swap entries, which
         looks unexpected.
      
      That said, we should provide a different swap limiting mechanism for
      cgroup2.
      
      This patch adds mem_cgroup->swap counter, which charges the actual number
      of swap entries used by a cgroup.  It is only charged in the unified
      hierarchy, while the legacy hierarchy memsw logic is left intact.
      
      The swap usage can be monitored using new memory.swap.current file and
      limited using memory.swap.max.
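
      As an illustration, the new files can be driven from userspace along
      these lines (a minimal sketch; the cgroup2 mount point /sys/fs/cgroup
      and the group name "test" are assumptions, and writing "max" instead of
      a number removes the limit):

      #include <stdio.h>

      int main(void)
      {
              const char *max_path = "/sys/fs/cgroup/test/memory.swap.max";
              const char *cur_path = "/sys/fs/cgroup/test/memory.swap.current";
              unsigned long long cur;
              FILE *f;

              f = fopen(max_path, "w");
              if (!f) { perror(max_path); return 1; }
              fprintf(f, "%llu\n", 256ULL << 20);     /* cap swap at 256M */
              fclose(f);

              f = fopen(cur_path, "r");
              if (!f) { perror(cur_path); return 1; }
              if (fscanf(f, "%llu", &cur) == 1)
                      printf("swap usage: %llu bytes\n", cur);
              fclose(f);
              return 0;
      }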
      
      Note, to charge swap resource properly in the unified hierarchy, we have
      to make swap_entry_free uncharge swap only when ->usage reaches zero, not
      just ->count, i.e.  when all references to a swap entry, including the one
      taken by swap cache, are gone.  This is necessary, because otherwise
      swap-in could result in uncharging swap even if the page is still in swap
      cache and hence still occupies a swap entry.  At the same time, this
      shouldn't break memsw counter logic, where a page is never charged twice
      for using both memory and swap, because in case of legacy hierarchy we
      uncharge swap on commit (see mem_cgroup_commit_charge).
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      37e84351
  2. 16 Jan 2016, 2 commits
    • K
      memcg: adjust to support new THP refcounting · f627c2f5
      Authored by Kirill A. Shutemov
      As with rmap, with the new refcounting we cannot rely on PageTransHuge() to
      check whether we need to charge the size of a huge page to the cgroup.  We
      need to get the information from the caller to know whether it was mapped
      with a PMD or a PTE.
      
      We uncharge when the last reference to the page is gone.  At that point, if
      we see PageTransHuge() it means we need to uncharge the whole huge page.
      
      The tricky part is partial unmap -- when we try to unmap part of a huge
      page.  We don't handle this situation specially, meaning we don't
      uncharge the part of the huge page unless the last user is gone or
      split_huge_page() is triggered.  If cgroup memory pressure occurs, the
      partially unmapped page will be split through the shrinker.  This
      should be good enough.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Tested-by: Sasha Levin <sasha.levin@oracle.com>
      Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Jerome Marchand <jmarchan@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f627c2f5
    • K
      page-flags: define PG_locked behavior on compound pages · 48c935ad
      Authored by Kirill A. Shutemov
      lock_page() must operate on the whole compound page.  It doesn't make
      much sense to lock part of a compound page.  Change the code to use the
      head page's PG_locked if a tail page is passed.
      
      This patch also gets rid of custom helper functions --
      __set_page_locked() and __clear_page_locked().  They are replaced with
      helpers generated by __SETPAGEFLAG/__CLEARPAGEFLAG.  Passing tail pages
      to these helpers would trigger VM_BUG_ON().
      
      SLUB uses PG_locked as a bit spin lock.  IIUC, tail pages should never
      appear there.  A VM_BUG_ON() is added to make sure that this assumption
      is correct.
      
      [akpm@linux-foundation.org: fix fs/cifs/file.c]
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Jérôme Glisse <jglisse@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      48c935ad
  3. 15 Jan 2016, 4 commits
    • V
      mm, proc: reduce cost of /proc/pid/smaps for unpopulated shmem mappings · 48131e03
      Authored by Vlastimil Babka
      Following the previous patch, further reduction of /proc/pid/smaps cost
      is possible for private writable shmem mappings with unpopulated areas
      where the page walk invokes the .pte_hole function.  We can use the radix
      tree iterator for each such area instead of calling find_get_entry() in a
      loop.  This comes at the extra maintenance cost of introducing another
      shmem function, shmem_partial_swap_usage().
      
      To demonstrate the difference, I have measured this on a process that
      creates a private writable 2GB mapping of a partially swapped out
      /dev/shm/file (which cannot employ the optimizations from the previous
      patch) and doesn't populate it at all.  I time how long it takes to cat
      /proc/pid/smaps of this process 100 times.
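
      A sketch of such a target process (assumptions: /dev/shm/file already
      exists, is at least 2GB in size, and has been partially swapped out
      beforehand):

      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void)
      {
              size_t len = 2UL << 30;         /* 2GB */
              int fd = open("/dev/shm/file", O_RDWR);
              void *map;

              if (fd < 0) { perror("open"); return 1; }
              /* private writable mapping, never populated */
              map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
              if (map == MAP_FAILED) { perror("mmap"); return 1; }

              printf("pid %d: now time `cat /proc/%d/smaps`\n", getpid(), getpid());
              pause();                        /* keep the mapping alive */
              return 0;
      }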
      
      Before this patch:
      
      real    0m3.831s
      user    0m0.180s
      sys     0m3.212s
      
      After this patch:
      
      real    0m1.176s
      user    0m0.180s
      sys     0m0.684s
      
      The time is similar to the case where a radix tree iterator is employed
      on the whole mapping.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      48131e03
    • V
      mm, proc: reduce cost of /proc/pid/smaps for shmem mappings · 6a15a370
      Authored by Vlastimil Babka
      The previous patch improved swap accounting for shmem mappings, which,
      however, made /proc/pid/smaps more expensive for shmem mappings, as we
      consult the radix tree for each pte_none entry, so the overall complexity
      is O(n*log(n)).
      
      We can reduce this significantly for mappings that cannot contain COWed
      pages, because then we can either use the statistics that the shmem object
      itself tracks (if the mapping contains the whole object, or the swap
      usage of the whole object is zero), or use the radix tree iterator,
      which is much more effective than repeated find_get_entry() calls.
      
      This patch therefore introduces a function shmem_swap_usage(vma) and
      makes /proc/pid/smaps use it when possible.  Only for writable private
      mappings of shmem objects (i.e.  tmpfs files) where the shmem object
      itself is (partially) swapped out do we have to resort to the
      find_get_entry() approach.
      
      Hopefully such mappings are relatively uncommon.
      
      To demonstrate the difference, I have measured this on a process that
      creates a 2GB mapping and dirties single pages with a stride of 2MB, and
      timed how long it takes to cat /proc/pid/smaps of this process 100
      times.
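
      The measurement loop can be approximated with a small program along
      these lines (a sketch; it reads /proc/<pid>/smaps 100 times and prints
      the elapsed wall-clock time):

      #include <stdio.h>
      #include <time.h>

      int main(int argc, char **argv)
      {
              char path[64], buf[4096];
              struct timespec t0, t1;
              FILE *f;
              int i;

              if (argc != 2) {
                      fprintf(stderr, "usage: %s <pid>\n", argv[0]);
                      return 1;
              }
              snprintf(path, sizeof(path), "/proc/%s/smaps", argv[1]);

              clock_gettime(CLOCK_MONOTONIC, &t0);
              for (i = 0; i < 100; i++) {
                      f = fopen(path, "r");
                      if (!f) { perror(path); return 1; }
                      while (fread(buf, 1, sizeof(buf), f) > 0)
                              ;               /* just consume the data */
                      fclose(f);
              }
              clock_gettime(CLOCK_MONOTONIC, &t1);

              printf("real    %.3fs\n", (t1.tv_sec - t0.tv_sec) +
                     (t1.tv_nsec - t0.tv_nsec) / 1e9);
              return 0;
      }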
      
      Private writable mapping of a /dev/shm/file (the most complex case):
      
      real    0m3.831s
      user    0m0.180s
      sys     0m3.212s
      
      Shared mapping of an almost full mapping of a partially swapped /dev/shm/file
      (which needs to employ the radix tree iterator).
      
      real    0m1.351s
      user    0m0.096s
      sys     0m0.768s
      
      Same, but with /dev/shm/file not swapped (so no radix tree walk needed)
      
      real    0m0.935s
      user    0m0.128s
      sys     0m0.344s
      
      Private anonymous mapping:
      
      real    0m0.949s
      user    0m0.116s
      sys     0m0.348s
      
      The cost is now much closer to the private anonymous mapping case, unless
      the shmem mapping is private and writable.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6a15a370
    • V
      kmemcg: account certain kmem allocations to memcg · 5d097056
      Authored by Vladimir Davydov
      Mark those kmem allocations that are known to be easily triggered from
      userspace as __GFP_ACCOUNT/SLAB_ACCOUNT, which makes them accounted to
      memcg.  For the list, see below:
      
       - threadinfo
       - task_struct
       - task_delay_info
       - pid
       - cred
       - mm_struct
       - vm_area_struct and vm_region (nommu)
       - anon_vma and anon_vma_chain
       - signal_struct
       - sighand_struct
       - fs_struct
       - files_struct
       - fdtable and fdtable->full_fds_bits
       - dentry and external_name
       - inode for all filesystems. This is the most tedious part, because
         most filesystems override the alloc_inode method.
      
      The list is far from complete, so feel free to add more objects.
      Nevertheless, it should be close to the "account everything" approach and
      keep most workloads within bounds.  Malevolent users will be able to
      breach the limit, but this was possible even with the former "account
      everything" approach (simply because it did not account everything in
      fact).
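
      As an illustration, an allocation site opts in like this (kernel-style
      sketch; the example_* identifiers are hypothetical):

      #include <linux/errno.h>
      #include <linux/init.h>
      #include <linux/slab.h>

      struct example_obj {
              int data;
      };

      static struct kmem_cache *example_cachep;

      static int __init example_init(void)
      {
              void *buf;

              /* objects from this cache are charged to the allocating task's memcg */
              example_cachep = kmem_cache_create("example_cache",
                                                 sizeof(struct example_obj),
                                                 0, SLAB_ACCOUNT, NULL);
              if (!example_cachep)
                      return -ENOMEM;

              /* a one-off allocation is charged by passing __GFP_ACCOUNT */
              buf = kmalloc(1024, GFP_KERNEL | __GFP_ACCOUNT);
              if (!buf) {
                      kmem_cache_destroy(example_cachep);
                      return -ENOMEM;
              }
              kfree(buf);
              return 0;
      }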
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5d097056
    • A
      Make sure that highmem pages are not added to symlink page cache · e8ecde25
      Authored by Al Viro
      inode_nohighmem() is sufficient to make sure that page_get_link()
      won't try to allocate a highmem page.  Moreover, it is sufficient
      to make sure that page_symlink/__page_symlink won't do the same
      thing.  However, any filesystem that manually preseeds the symlink's
      page cache upon symlink(2) needs to make sure that the page it
      inserts there won't be a highmem one.
      
      Fortunately, only nfs and shmem have run afoul of that...
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      e8ecde25
  4. 31 Dec 2015, 1 commit
  5. 13 Dec 2015, 1 commit
    • H
      tmpfs: fix shmem_evict_inode() warnings on i_blocks · 267a4c76
      Authored by Hugh Dickins
      Dmitry Vyukov provides a little program, autogenerated by syzkaller,
      which races a fault on a mapping of a sparse memfd object, against
      truncation of that object below the fault address: run repeatedly for a
      few minutes, it reliably generates shmem_evict_inode()'s
      WARN_ON(inode->i_blocks).
      
      (But there's nothing specific to memfd here, nor to the fstat which it
      happened to use to generate the fault: though that looked suspicious,
      since a shmem_recalc_inode() had been added there recently.  The same
      problem can be reproduced with open+unlink in place of memfd_create, and
      with fstatfs in place of fstat.)
      
      v3.7 commit 0f3c42f5 ("tmpfs: change final i_blocks BUG to WARNING")
      explains one cause of such a warning (a race with shmem_writepage to
      swap), and possible solutions; but we never took it further, and this
      syzkaller incident turns out to have a different cause.
      
      shmem_getpage_gfp()'s error recovery, when a freshly allocated page is
      then found to be beyond eof, looks plausible - decrementing the alloced
      count that was incremented just before - but in fact can go wrong if a
      racing thread (the truncator, for example) gets its shmem_recalc_inode()
      in just after our delete_from_page_cache().  delete_from_page_cache()
      decrements nrpages, so that shmem_recalc_inode() will balance the books
      by decrementing alloced itself; our own decrement of alloced then takes
      it one too low, leading to the WARNING when the object is finally evicted.
      
      Once the new page has been exposed in the page cache,
      shmem_getpage_gfp() must leave it to shmem_recalc_inode() itself to get
      the accounting right in all cases (and not fall through from "trunc:" to
      "decused:").  Adjust that error recovery block; and the reinitialization
      of info and sbinfo can be removed too.
      
      While we're here, fix shmem_writepage() to avoid the original issue: it
      will be safe against a racing shmem_recalc_inode(), if it merely
      increments swapped before the shmem_delete_from_page_cache() which
      decrements nrpages (but it must then do its own shmem_recalc_inode()
      before that, while still in balance, instead of after).  (Aside: why do
      we shmem_recalc_inode() here in the swap path? Because its raison d'etre
      is to cope with clean sparse shmem pages being reclaimed behind our
      back: so here when swapping is a good place to look for that case.) But
      I've not now managed to reproduce this bug, even without the patch.
      
      I don't see why I didn't do that earlier: perhaps inhibited by the
      preference to eliminate shmem_recalc_inode() altogether.  Driven by this
      incident, I do now have a patch to do so at last; but still want to sit
      on it for a bit, there's a couple of questions yet to be resolved.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      267a4c76
  6. 09 Dec 2015, 3 commits
    • A
      teach shmem_get_link() to work in RCU mode · 6a6c9904
      Authored by Al Viro
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      6a6c9904
    • A
      replace ->follow_link() with new method that could stay in RCU mode · 6b255391
      Authored by Al Viro
      new method: ->get_link(); a replacement for ->follow_link().  The
      differences are:
      	* inode and dentry are passed separately;
      	* it might be called both in RCU and non-RCU mode; the former is
      	  indicated by passing it a NULL dentry;
      	* when called that way it isn't allowed to block and should return
      	  ERR_PTR(-ECHILD) if it needs to be called in non-RCU mode.
      
      It's a flagday change - the old method is gone, all in-tree instances
      converted.  Conversion isn't hard; that said, so far very few instances
      do not immediately bail out when called in RCU mode.  That'll change
      in the next commits.
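
      A minimal sketch of the convention (example_get_link() and the use of
      ->i_link are illustrative; the trailing cookie argument reflects the
      prototype at the time of this change, which later kernels reworked):

      #include <linux/err.h>
      #include <linux/fs.h>

      static const char *example_get_link(struct dentry *dentry,
                                          struct inode *inode, void **cookie)
      {
              if (!dentry)                    /* NULL dentry: RCU walk */
                      return ERR_PTR(-ECHILD); /* can't block, ask for a non-RCU retry */
              return inode->i_link;           /* assumption: body kept in ->i_link */
      }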
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      6b255391
    • A
      don't put symlink bodies in pagecache into highmem · 21fc61c7
      Authored by Al Viro
      kmap() in page_follow_link_light() needed to go - allowing an arbitrary
      number of kmaps to be held for a long time is a great way to deadlock
      the system.
      
      new helper (inode_nohighmem(inode)) needs to be used for pagecache
      symlink inodes; done for all in-tree cases.  page_follow_link_light()
      instrumented to yell about anything missed.
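
      A sketch of the filesystem-side pattern (hypothetical filesystem code):

      #include <linux/fs.h>

      static void example_init_symlink_inode(struct inode *inode)
      {
              inode->i_op = &page_symlink_inode_operations;
              inode_nohighmem(inode);         /* symlink body must stay out of highmem */
      }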
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      21fc61c7
  7. 07 Dec 2015, 2 commits
  8. 07 Nov 2015, 1 commit
    • M
      mm: page_alloc: hide some GFP internals and document the bits and flag combinations · dd56b046
      Authored by Mel Gorman
      Andrew stated the following
      
      	We have quite a history of remote parts of the kernel using
      	weird/wrong/inexplicable combinations of __GFP_ flags.	I tend
      	to think that this is because we didn't adequately explain the
      	interface.
      
      	And I don't think that gfp.h really improved much in this area as
      	a result of this patchset.  Could you go through it some time and
      	decide if we've adequately documented all this stuff?
      
      This patch first moves some GFP flag combinations that are part of the MM
      internals to mm/internal.h. The rest of the patch documents the __GFP_FOO
      bits under various headings and then documents the flag combinations. It
      will not help callers that are brain damaged but the clarity might motivate
      some fixes and avoid future mistakes.
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dd56b046
  9. 06 Nov 2015, 2 commits
    • H
      tmpfs: avoid a little creat and stat slowdown · d0424c42
      Authored by Hugh Dickins
      LKP reports that v4.2 commit afa2db2f ("tmpfs: truncate prealloc
      blocks past i_size") causes a 14.5% slowdown in the AIM9 creat-clo
      benchmark.
      
      creat-clo does just what you'd expect from the name, and creat's O_TRUNC
      on a 0-length file does indeed incur more overhead now that shmem_setattr()
      tests "0 <= 0" instead of "0 < 0".
      
      I'm not sure how much we care, but I think it would not be too VW-like to
      add in a check for whether any pages (or swap) are allocated: if none are
      allocated, there's none to remove from the radix_tree.  At first I thought
      that check would be good enough for the unmaps too, but no: we should not
      skip the unlikely case of unmapping pages beyond the new EOF, which were
      COWed from holes which have now been reclaimed, leaving none.
      
      This gives me an 8.5% speedup: on Haswell instead of LKP's Westmere, and
      running a debug config before and after: I hope those account for the
      lesser speedup.
      
      And probably someone has a benchmark where a thousand threads keep on
      stat'ing the same file repeatedly: forestall that report by adjusting v4.3
      commit 44a30220 ("shmem: recalculate file inode when fstat") not to
      take the spinlock in shmem_getattr() when there's no work to do.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Reported-by: Ying Huang <ying.huang@linux.intel.com>
      Tested-by: Ying Huang <ying.huang@linux.intel.com>
      Cc: Josef Bacik <jbacik@fb.com>
      Cc: Yu Zhao <yuzhao@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d0424c42
    • H
      mm: rename mem_cgroup_migrate to mem_cgroup_replace_page · 45637bab
      Authored by Hugh Dickins
      After v4.3's commit 0610c25d ("memcg: fix dirty page migration")
      mem_cgroup_migrate() doesn't have much to offer in page migration: convert
      migrate_misplaced_transhuge_page() to set_page_memcg() instead.
      
      Then rename mem_cgroup_migrate() to mem_cgroup_replace_page(), since its
      remaining callers are replace_page_cache_page() and shmem_replace_page():
      both of which passed lrucare true, so just eliminate that argument.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      45637bab
  10. 09 Sep 2015, 1 commit
  11. 07 Aug 2015, 1 commit
    • S
      ipc: use private shmem or hugetlbfs inodes for shm segments. · e1832f29
      Authored by Stephen Smalley
      The shm implementation internally uses shmem or hugetlbfs inodes for shm
      segments.  As these inodes are never directly exposed to userspace and
      only accessed through the shm operations which are already hooked by
      security modules, mark the inodes with the S_PRIVATE flag so that inode
      security initialization and permission checking is skipped.
      
      This was motivated by the following lockdep warning:
      
        ======================================================
         [ INFO: possible circular locking dependency detected ]
         4.2.0-0.rc3.git0.1.fc24.x86_64+debug #1 Tainted: G        W
        -------------------------------------------------------
         httpd/1597 is trying to acquire lock:
         (&ids->rwsem){+++++.}, at: shm_close+0x34/0x130
         but task is already holding lock:
         (&mm->mmap_sem){++++++}, at: SyS_shmdt+0x4b/0x180
         which lock already depends on the new lock.
         the existing dependency chain (in reverse order) is:
         -> #3 (&mm->mmap_sem){++++++}:
              lock_acquire+0xc7/0x270
              __might_fault+0x7a/0xa0
              filldir+0x9e/0x130
              xfs_dir2_block_getdents.isra.12+0x198/0x1c0 [xfs]
              xfs_readdir+0x1b4/0x330 [xfs]
              xfs_file_readdir+0x2b/0x30 [xfs]
              iterate_dir+0x97/0x130
              SyS_getdents+0x91/0x120
              entry_SYSCALL_64_fastpath+0x12/0x76
         -> #2 (&xfs_dir_ilock_class){++++.+}:
              lock_acquire+0xc7/0x270
              down_read_nested+0x57/0xa0
              xfs_ilock+0x167/0x350 [xfs]
              xfs_ilock_attr_map_shared+0x38/0x50 [xfs]
              xfs_attr_get+0xbd/0x190 [xfs]
              xfs_xattr_get+0x3d/0x70 [xfs]
              generic_getxattr+0x4f/0x70
              inode_doinit_with_dentry+0x162/0x670
              sb_finish_set_opts+0xd9/0x230
              selinux_set_mnt_opts+0x35c/0x660
              superblock_doinit+0x77/0xf0
              delayed_superblock_init+0x10/0x20
              iterate_supers+0xb3/0x110
              selinux_complete_init+0x2f/0x40
              security_load_policy+0x103/0x600
              sel_write_load+0xc1/0x750
              __vfs_write+0x37/0x100
              vfs_write+0xa9/0x1a0
              SyS_write+0x58/0xd0
              entry_SYSCALL_64_fastpath+0x12/0x76
        ...
      Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
      Reported-by: Morten Stevens <mstevens@fedoraproject.org>
      Acked-by: Hugh Dickins <hughd@google.com>
      Acked-by: Paul Moore <paul@paul-moore.com>
      Cc: Manfred Spraul <manfred@colorfullife.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Eric Paris <eparis@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e1832f29
  12. 25 Jun 2015, 1 commit
    • J
      tmpfs: truncate prealloc blocks past i_size · afa2db2f
      Authored by Josef Bacik
      One of the rocksdb people noticed that when you do something like this
      
          fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 10M)
          pwrite(fd, buf, 5M, 0)
          ftruncate(5M)
      
      on tmpfs, the file would still take up 10M: which led to super fun
      issues because we were getting ENOSPC before we thought we should be
      getting ENOSPC.  This patch fixes the problem, and mirrors what all the
      other fs'es do (and was agreed to be the correct behaviour at LSF).
      
      I tested it locally to make sure it worked properly with the following
      
          xfs_io -f -c "falloc -k 0 10M" -c "pwrite 0 5M" -c "truncate 5M" file
      
      Without the patch we have "Blocks: 20480", with the patch we have the
      correct value of "Blocks: 10240".
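
      The same check can also be done with a small C program (a sketch; the
      /dev/shm path is assumed to be tmpfs):

      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/stat.h>
      #include <unistd.h>

      int main(void)
      {
              const char *path = "/dev/shm/falloc-test";
              size_t five_mb = 5 << 20;
              char *buf = calloc(1, five_mb);
              struct stat st;
              int fd;

              fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
              if (fd < 0 || !buf) { perror("setup"); return 1; }

              if (fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 10 << 20) ||
                  pwrite(fd, buf, five_mb, 0) != (ssize_t)five_mb ||
                  ftruncate(fd, five_mb) ||
                  fstat(fd, &st)) {
                      perror("test");
                      return 1;
              }
              printf("Blocks: %lld\n", (long long)st.st_blocks);  /* 512-byte units */
              unlink(path);
              return 0;
      }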
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      afa2db2f
  13. 18 Jun 2015, 1 commit
  14. 11 May 2015, 4 commits
  15. 16 Apr 2015, 1 commit
  16. 12 Apr 2015, 1 commit
  17. 26 Mar 2015, 1 commit
  18. 24 Feb 2015, 1 commit
    • S
      mm: shmem: check for mapping owner before dereferencing · f0774d88
      Authored by Sasha Levin
      mapping->host can be NULL and shouldn't be dereferenced before being checked.
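
      A sketch of the guard (kernel-style; shmem_ops stands for mm/shmem.c's
      static super_operations and is shown only to illustrate the existing
      check):

      #include <linux/fs.h>

      bool shmem_mapping(struct address_space *mapping)
      {
              if (!mapping->host)             /* reject before dereferencing */
                      return false;

              return mapping->host->i_sb->s_op == &shmem_ops;
      }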
      
      [ 1295.741844] GPF could be caused by NULL-ptr deref or user memory accessgeneral protection fault: 0000 [#1] SMP KASAN
      [ 1295.746387] Dumping ftrace buffer:
      [ 1295.748217]    (ftrace buffer empty)
      [ 1295.749527] Modules linked in:
      [ 1295.750268] CPU: 62 PID: 23410 Comm: trinity-c70 Not tainted 3.19.0-next-20150219-sasha-00045-g9130270f #1939
      [ 1295.750268] task: ffff8803a49db000 ti: ffff8803a4dc8000 task.ti: ffff8803a4dc8000
      [ 1295.750268] RIP: shmem_mapping (mm/shmem.c:1458)
      [ 1295.750268] RSP: 0000:ffff8803a4dcfbf8  EFLAGS: 00010206
      [ 1295.750268] RAX: dffffc0000000000 RBX: 0000000000000000 RCX: 00000000000f2804
      [ 1295.750268] RDX: 0000000000000005 RSI: 0400000000000794 RDI: 0000000000000028
      [ 1295.750268] RBP: ffff8803a4dcfc08 R08: 0000000000000000 R09: 00000000031de000
      [ 1295.750268] R10: dffffc0000000000 R11: 00000000031c1000 R12: 0400000000000794
      [ 1295.750268] R13: 00000000031c2000 R14: 00000000031de000 R15: ffff880e3bdc1000
      [ 1295.750268] FS:  00007f8703c7e700(0000) GS:ffff881164800000(0000) knlGS:0000000000000000
      [ 1295.750268] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [ 1295.750268] CR2: 0000000004e58000 CR3: 00000003a9f3c000 CR4: 00000000000007a0
      [ 1295.750268] DR0: ffffffff81000000 DR1: 0000009494949494 DR2: 0000000000000000
      [ 1295.750268] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 00000000000d0602
      [ 1295.750268] Stack:
      [ 1295.750268]  ffff8803a4dcfec8 ffffffffbb1dc770 ffff8803a4dcfc38 ffffffffad6f230b
      [ 1295.750268]  ffffffffad6f2b0d 0000014100000000 ffff88001e17c08b ffff880d9453fe08
      [ 1295.750268]  ffff8803a4dcfd18 ffffffffad6f2ce2 ffff8803a49dbcd8 ffff8803a49dbce0
      [ 1295.750268] Call Trace:
      [ 1295.750268] mincore_page (mm/mincore.c:61)
      [ 1295.750268] ? mincore_pte_range (include/linux/spinlock.h:312 mm/mincore.c:131)
      [ 1295.750268] mincore_pte_range (mm/mincore.c:151)
      [ 1295.750268] ? mincore_unmapped_range (mm/mincore.c:113)
      [ 1295.750268] __walk_page_range (mm/pagewalk.c:51 mm/pagewalk.c:90 mm/pagewalk.c:116 mm/pagewalk.c:204)
      [ 1295.750268] walk_page_range (mm/pagewalk.c:275)
      [ 1295.750268] SyS_mincore (mm/mincore.c:191 mm/mincore.c:253 mm/mincore.c:220)
      [ 1295.750268] ? mincore_pte_range (mm/mincore.c:220)
      [ 1295.750268] ? mincore_unmapped_range (mm/mincore.c:113)
      [ 1295.750268] ? __mincore_unmapped_range (mm/mincore.c:105)
      [ 1295.750268] ? ptlock_free (mm/mincore.c:24)
      [ 1295.750268] ? syscall_trace_enter (arch/x86/kernel/ptrace.c:1610)
      [ 1295.750268] ia32_do_call (arch/x86/ia32/ia32entry.S:446)
      [ 1295.750268] Code: e5 48 c1 ea 03 53 48 89 fb 48 83 ec 08 80 3c 02 00 75 4f 48 b8 00 00 00 00 00 fc ff df 48 8b 1b 48 8d 7b 28 48 89 fa 48 c1 ea 03 <80> 3c 02 00 75 3f 48 b8 00 00 00 00 00 fc ff df 48 8b 5b 28 48
      
      All code
      ========
         0:	e5 48                	in     $0x48,%eax
         2:	c1 ea 03             	shr    $0x3,%edx
         5:	53                   	push   %rbx
         6:	48 89 fb             	mov    %rdi,%rbx
         9:	48 83 ec 08          	sub    $0x8,%rsp
         d:	80 3c 02 00          	cmpb   $0x0,(%rdx,%rax,1)
        11:	75 4f                	jne    0x62
        13:	48 b8 00 00 00 00 00 	movabs $0xdffffc0000000000,%rax
        1a:	fc ff df
        1d:	48 8b 1b             	mov    (%rbx),%rbx
        20:	48 8d 7b 28          	lea    0x28(%rbx),%rdi
        24:	48 89 fa             	mov    %rdi,%rdx
        27:	48 c1 ea 03          	shr    $0x3,%rdx
        2b:*	80 3c 02 00          	cmpb   $0x0,(%rdx,%rax,1)		<-- trapping instruction
        2f:	75 3f                	jne    0x70
        31:	48 b8 00 00 00 00 00 	movabs $0xdffffc0000000000,%rax
        38:	fc ff df
        3b:	48 8b 5b 28          	mov    0x28(%rbx),%rbx
        3f:	48                   	rex.W
      	...
      
      Code starting with the faulting instruction
      ===========================================
         0:	80 3c 02 00          	cmpb   $0x0,(%rdx,%rax,1)
         4:	75 3f                	jne    0x45
         6:	48 b8 00 00 00 00 00 	movabs $0xdffffc0000000000,%rax
         d:	fc ff df
        10:	48 8b 5b 28          	mov    0x28(%rbx),%rbx
        14:	48                   	rex.W
      	...
      [ 1295.750268] RIP shmem_mapping (mm/shmem.c:1458)
      [ 1295.750268]  RSP <ffff8803a4dcfbf8>
      
      Fixes: 97b713ba ("fs: kill BDI_CAP_SWAP_BACKED")
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      f0774d88
  19. 23 Feb 2015, 1 commit
    • D
      VFS: (Scripted) Convert S_ISLNK/DIR/REG(dentry->d_inode) to d_is_*(dentry) · e36cb0b8
      Authored by David Howells
      Convert the following where appropriate:
      
       (1) S_ISLNK(dentry->d_inode) to d_is_symlink(dentry).
      
       (2) S_ISREG(dentry->d_inode) to d_is_reg(dentry).
      
       (3) S_ISDIR(dentry->d_inode) to d_is_dir(dentry).  This is actually more
           complicated than it appears as some calls should be converted to
           d_can_lookup() instead.  The difference is whether the directory in
           question is a real dir with a ->lookup op or whether it's a fake dir with
           a ->d_automount op.
      
      In some circumstances, we can subsume checks for dentry->d_inode not being
      NULL into this, provided the code isn't in a filesystem that expects
      d_inode to be NULL if the dirent really *is* negative (ie. if we're going to
      use d_inode() rather than d_backing_inode() to get the inode pointer).
      
      Note that the dentry type field may be set to something other than
      DCACHE_MISS_TYPE when d_inode is NULL in the case of unionmount, where the VFS
      manages the fall-through from a negative dentry to a lower layer.  In such a
      case, the dentry type of the negative union dentry is set to the same as the
      type of the lower dentry.
      
      However, if you know d_inode is not NULL at the call site, then you can use
      the d_is_xxx() functions even in a filesystem.
      
      There is one further complication: a 0,0 chardev dentry may be labelled
      DCACHE_WHITEOUT_TYPE rather than DCACHE_SPECIAL_TYPE.  Strictly, this was
      intended for special directory entry types that don't have attached inodes.
      
      The following perl+coccinelle script was used:
      
      use strict;
      
      my @callers;
      open($fd, 'git grep -l \'S_IS[A-Z].*->d_inode\' |') ||
          die "Can't grep for S_ISDIR and co. callers";
      @callers = <$fd>;
      close($fd);
      unless (@callers) {
          print "No matches\n";
          exit(0);
      }
      
      my @cocci = (
          '@@',
          'expression E;',
          '@@',
          '',
          '- S_ISLNK(E->d_inode->i_mode)',
          '+ d_is_symlink(E)',
          '',
          '@@',
          'expression E;',
          '@@',
          '',
          '- S_ISDIR(E->d_inode->i_mode)',
          '+ d_is_dir(E)',
          '',
          '@@',
          'expression E;',
          '@@',
          '',
          '- S_ISREG(E->d_inode->i_mode)',
          '+ d_is_reg(E)' );
      
      my $coccifile = "tmp.sp.cocci";
      open($fd, ">$coccifile") || die $coccifile;
      print($fd "$_\n") || die $coccifile foreach (@cocci);
      close($fd);
      
      foreach my $file (@callers) {
          chomp $file;
          print "Processing ", $file, "\n";
          system("spatch", "--sp-file", $coccifile, $file, "--in-place", "--no-show-diff") == 0 ||
      	die "spatch failed";
      }
      
      [AV: overlayfs parts skipped]
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      e36cb0b8
  20. 12 Feb 2015, 1 commit
  21. 11 Feb 2015, 1 commit
  22. 06 Feb 2015, 1 commit
  23. 21 Jan 2015, 2 commits
  24. 17 Dec 2014, 1 commit
  25. 24 Oct 2014, 1 commit
    • M
      shmem: support RENAME_WHITEOUT · 46fdb794
      Authored by Miklos Szeredi
      Allocate a dentry, initialize it with a whiteout and hash it in the place
      of the old dentry.  Later the old dentry will be moved away and the
      whiteout will remain.
      
      i_mutex protects against concurrent readdir.
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      46fdb794
  26. 10 Oct 2014, 1 commit
  27. 27 Sep 2014, 1 commit
    • M
      shmem: fix nlink for rename overwrite directory · b928095b
      Authored by Miklos Szeredi
      When overwriting an empty directory with rename, we need to drop the
      extra nlink.
      
      Test prog:
      
      #include <stdio.h>
      #include <fcntl.h>
      #include <err.h>
      #include <sys/stat.h>
      
      int main(void)
      {
      	const char *test_dir1 = "test-dir1";
      	const char *test_dir2 = "test-dir2";
      	int res;
      	int fd;
      	struct stat statbuf;
      
      	res = mkdir(test_dir1, 0777);
      	if (res == -1)
      		err(1, "mkdir(\"%s\")", test_dir1);
      
      	res = mkdir(test_dir2, 0777);
      	if (res == -1)
      		err(1, "mkdir(\"%s\")", test_dir2);
      
      	fd = open(test_dir2, O_RDONLY);
      	if (fd == -1)
      		err(1, "open(\"%s\")", test_dir2);
      
      	res = rename(test_dir1, test_dir2);
      	if (res == -1)
      		err(1, "rename(\"%s\", \"%s\")", test_dir1, test_dir2);
      
      	res = fstat(fd, &statbuf);
      	if (res == -1)
      		err(1, "fstat(%i)", fd);
      
      	if (statbuf.st_nlink != 0) {
      		fprintf(stderr, "nlink is %lu, should be 0\n", statbuf.st_nlink);
      		return 1;
      	}
      
      	return 0;
      }
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Cc: stable@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      b928095b
  28. 08 Sep 2014, 1 commit
    • T
      percpu_counter: add @gfp to percpu_counter_init() · 908c7f19
      Authored by Tejun Heo
      Percpu allocator now supports allocation mask.  Add @gfp to
      percpu_counter_init() so that !GFP_KERNEL allocation masks can be used
      with percpu_counters too.
      
      We could have left percpu_counter_init() alone and added
      percpu_counter_init_gfp(); however, the number of users isn't that
      high and introducing _gfp variants to all percpu data structures would
      be quite ugly, so let's just do the conversion.  This is the one with
      the most users.  Other percpu data structures are a lot easier to
      convert.
      
      This patch doesn't make any functional difference.
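
      A typical call site after this change looks like this sketch
      (example_counter is hypothetical):

      #include <linux/gfp.h>
      #include <linux/percpu_counter.h>

      static struct percpu_counter example_counter;

      static int example_setup(void)
      {
              /* the third argument is the new @gfp for the percpu allocation */
              return percpu_counter_init(&example_counter, 0, GFP_KERNEL);
      }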
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Jan Kara <jack@suse.cz>
      Acked-by: "David S. Miller" <davem@davemloft.net>
      Cc: x86@kernel.org
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      908c7f19