1. 12 Jul 2018, 1 commit
  2. 15 Jun 2018, 1 commit
  3. 05 Jun 2018, 1 commit
  4. 31 May 2018, 2 commits
  5. 30 May 2018, 6 commits
  6. 29 May 2018, 1 commit
  7. 26 May 2018, 4 commits
    • aio: try to complete poll iocbs without context switch · 1962da0d
      Authored by Christoph Hellwig
      If we can acquire ctx_lock without spinning, we can simply remove our
      iocb from the active_reqs list and thus complete the iocb directly from
      the wakeup context.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
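      A hedged sketch of the pattern described above, assuming simplified
      aio_kiocb/kioctx layouts and a hypothetical wakeup callback (names and
      fields are illustrative, not the exact kernel source):

      ----
      /* Sketch: complete inline when ctx_lock is uncontended, else defer. */
      static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode,
                               int sync, void *key)
      {
              struct poll_iocb *req = container_of(wait, struct poll_iocb, wait);
              struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll);
              struct kioctx *ctx = iocb->ki_ctx;

              if (spin_trylock(&ctx->ctx_lock)) {
                      /* Lock was free: unlink from active_reqs and complete
                       * right here in the wakeup context. */
                      list_del_init(&iocb->ki_list);
                      spin_unlock(&ctx->ctx_lock);
                      aio_complete(iocb, mangle_poll(req->events), 0);
              } else {
                      /* Lock contended: punt completion to a workqueue. */
                      schedule_work(&req->work);
              }
              return 1;
      }
      ----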
    • aio: implement IOCB_CMD_POLL · 2c14fa83
      Authored by Christoph Hellwig
      Simple one-shot poll through the io_submit() interface.  To poll a file
      descriptor, the application submits an iocb of type IOCB_CMD_POLL.  The
      fd is polled for the events specified in the first 32 bits of the iocb's
      aio_buf field.

      Unlike poll or epoll without EPOLLONESHOT, this interface always works
      in one-shot mode: once the iocb has completed, it must be resubmitted.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
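      A hedged userspace sketch of how a one-shot poll submission through
      io_submit() might look, using the raw syscalls and the iocb layout from
      linux/aio_abi.h (requires a kernel and headers that provide
      IOCB_CMD_POLL; error handling omitted):

      ----
      /* Sketch: one-shot poll of stdin via IOCB_CMD_POLL. */
      #include <linux/aio_abi.h>
      #include <sys/syscall.h>
      #include <unistd.h>
      #include <string.h>
      #include <poll.h>
      #include <stdio.h>

      int main(void)
      {
              aio_context_t ctx = 0;
              struct iocb cb;
              struct iocb *cbs[1] = { &cb };
              struct io_event ev;

              syscall(SYS_io_setup, 1, &ctx);

              memset(&cb, 0, sizeof(cb));
              cb.aio_lio_opcode = IOCB_CMD_POLL;
              cb.aio_fildes = 0;            /* poll stdin */
              cb.aio_buf = POLLIN;          /* events go in the low 32 bits of aio_buf */

              syscall(SYS_io_submit, ctx, 1, cbs);
              syscall(SYS_io_getevents, ctx, 1, 1, &ev, NULL);
              printf("revents: 0x%llx\n", (unsigned long long)ev.res);

              /* One-shot: to keep watching the fd, resubmit the iocb. */
              syscall(SYS_io_destroy, ctx);
              return 0;
      }
      ----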
    • aio: simplify cancellation · 888933f8
      Authored by Christoph Hellwig
      With the current aio code there is no need for the magic KIOCB_CANCELLED
      value: a cancellation just kicks the driver to queue the completion
      ASAP, with all actual completion handling done in another thread. Given
      that both the completion path and cancellation take the context lock,
      there is no need for magic cmpxchg loops either.  If we remove iocbs from
      the active list after calling ->ki_cancel (but with ctx_lock still held),
      we can also rely on the invariant that anything found on the list has a
      ->ki_cancel callback and can be cancelled, further simplifying the code.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • aio: simplify KIOCB_KEY handling · f3a2752a
      Authored by Christoph Hellwig
      No need to pass the key field to lookup_iocb to compare it with
      KIOCB_KEY, as we can do that right after retrieving it from userspace.
      Also move the KIOCB_KEY definition to aio.c, as it is an internal value
      not used anywhere else in the kernel.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
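      A hedged fragment illustrating the resulting shape of the check in
      io_cancel(2): the key is validated immediately after it is read from
      userspace, so the lookup helper no longer needs it (variable names are
      indicative only):

      ----
      /* Sketch: KIOCB_KEY is now private to fs/aio.c. */
      #define KIOCB_KEY       0

              u32 key;

              if (unlikely(get_user(key, &user_iocb->aio_key)))
                      return -EFAULT;
              if (unlikely(key != KIOCB_KEY))
                      return -EINVAL;

              /* the lookup helper no longer takes or compares a key argument */
      ----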
  8. 24 May 2018, 1 commit
    • fix io_destroy()/aio_complete() race · 4faa9996
      Authored by Al Viro
      If io_destroy() gets as far as cancelling everything that can be
      cancelled and kiocb_cancel() calls the function the driver has left in
      ->ki_cancel, it becomes vulnerable to a race with IO completion.  At that
      point req has already been taken off the list and aio_complete() does
      *NOT* spin until we (in free_ioctx_users()) release ->ctx_lock.  As a
      result, it proceeds to kiocb_free(), freeing req just as it gets passed
      to ->ki_cancel().

      The fix is simple: remove the iocb from the list after the call to
      kiocb_cancel().  All instances of ->ki_cancel() already have to cope with
      being called with the iocb still on the list - that is what happens in
      io_cancel(2).

      Cc: stable@kernel.org
      Fixes: 0460fef2 ("aio: use cancellation list lazily")
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
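      A hedged before/after sketch of the reordering in the cancellation loop
      (simplified from free_ioctx_users(); types and helpers are approximate):

      ----
      /* Before (racy): req leaves active_reqs before ->ki_cancel() runs, so a
       * concurrent aio_complete() no longer spins on ctx_lock and may free
       * req while kiocb_cancel() is still using it. */
              req = list_first_entry(&ctx->active_reqs, struct aio_kiocb, ki_list);
              list_del_init(&req->ki_list);
              kiocb_cancel(req);                      /* may touch freed memory */

      /* After (fixed): cancel first, then unlink, all under ctx_lock; the
       * completion path keeps waiting on ctx_lock until we are done. */
              req = list_first_entry(&ctx->active_reqs, struct aio_kiocb, ki_list);
              kiocb_cancel(req);
              list_del_init(&req->ki_list);
      ----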
  9. 22 May 2018, 1 commit
    • aio: fix io_destroy(2) vs. lookup_ioctx() race · baf10564
      Authored by Al Viro
      kill_ioctx() used to have an explicit RCU delay between removing the
      reference from ->ioctx_table and percpu_ref_kill() dropping the refcount.
      At some point that delay had been removed, on the theory that
      percpu_ref_kill() itself contained an RCU delay.  Unfortunately, that was
      the wrong kind of RCU delay and it didn't care about rcu_read_lock() used
      by lookup_ioctx().  As a result, we could get ctx freed right under
      lookup_ioctx().  Tejun has fixed that in a6d7cff4 ("fs/aio: Add explicit
      RCU grace period when freeing kioctx"); however, that fix is not enough.
      
      Suppose io_destroy() from one thread races with e.g. io_setup() from another;
      CPU1 removes the reference from current->mm->ioctx_table[...] just as CPU2
      has picked it (under rcu_read_lock()).  Then CPU1 proceeds to drop the
      refcount, getting it to 0 and triggering a call of free_ioctx_users(),
      which proceeds to drop the secondary refcount and once that reaches zero
      calls free_ioctx_reqs().  That does
              INIT_RCU_WORK(&ctx->free_rwork, free_ioctx);
              queue_rcu_work(system_wq, &ctx->free_rwork);
      and schedules freeing the whole thing after an RCU delay.
      
      Meanwhile, CPU2 has gotten around to percpu_ref_get(), bumping the
      refcount from 0 to 1 and returned the reference to io_setup().
      
      Tejun's fix (that queue_rcu_work() in there) guarantees that ctx won't get
      freed until after percpu_ref_get().  Sure, we'd increment the counter before
      ctx can be freed.  Now we are out of rcu_read_lock() and there's nothing to
      stop freeing of the whole thing.  Unfortunately, CPU2 assumes that since it
      has grabbed the reference, ctx is *NOT* going away until it gets around to
      dropping that reference.
      
      The fix is obvious: use percpu_ref_tryget_live() and treat failure as a miss.
      It's no costlier than what we currently do in the normal case, it's safe to
      call since freeing *is* delayed and it closes the race window - either
      lookup_ioctx() comes before percpu_ref_kill() (in which case ctx->users
      won't reach 0 until the caller of lookup_ioctx() drops it) or lookup_ioctx()
      fails, ctx->users is unaffected, and the caller of lookup_ioctx() doesn't see
      the object in question at all.
      
      Cc: stable@kernel.org
      Fixes: a6d7cff4 ("fs/aio: Add explicit RCU grace period when freeing kioctx")
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
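      A hedged sketch of the resulting lookup (simplified from lookup_ioctx();
      the id calculation and field names are approximations):

      ----
      /* Sketch: fail the lookup rather than resurrect a dying kioctx. */
      static struct kioctx *lookup_ioctx(unsigned long ctx_id)
      {
              struct kioctx_table *table;
              struct kioctx *ctx, *ret = NULL;
              unsigned id = ctx_id_to_index(ctx_id);        /* illustrative helper */

              rcu_read_lock();
              table = rcu_dereference(current->mm->ioctx_table);
              if (table && id < table->nr) {
                      ctx = rcu_dereference(table->table[id]);
                      /* tryget_live() fails once percpu_ref_kill() has run,
                       * so ctx->users can never be bumped back up from zero. */
                      if (ctx && ctx->user_id == ctx_id &&
                          percpu_ref_tryget_live(&ctx->users))
                              ret = ctx;
              }
              rcu_read_unlock();
              return ret;
      }
      ----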
  10. 03 May 2018, 7 commits
  11. 20 Mar 2018, 1 commit
  12. 15 Mar 2018, 2 commits
    • fs/aio: Use RCU accessors for kioctx_table->table[] · d0264c01
      Authored by Tejun Heo
      While converting ioctx index from a list to a table, db446a08
      ("aio: convert the ioctx list to table lookup v3") missed tagging
      kioctx_table->table[] as an array of RCU pointers and using the
      appropriate RCU accessors.  This introduces a small window in the
      lookup path where init and access may race.
      
      Mark kioctx_table->table[] with __rcu and use the appropriate RCU
      accessors when using the field.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Jann Horn <jannh@google.com>
      Fixes: db446a08 ("aio: convert the ioctx list to table lookup v3")
      Cc: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: stable@vger.kernel.org # v3.12+
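      A hedged sketch of the annotation and accessor pattern (the struct layout
      follows the commit message; the surrounding code is simplified):

      ----
      /* Sketch: table entries become RCU-protected pointers. */
      struct kioctx_table {
              struct rcu_head         rcu;
              unsigned                nr;
              struct kioctx __rcu     *table[];   /* was: struct kioctx *table[]; */
      };

      /* Publish an entry (update side, under the appropriate lock): */
              rcu_assign_pointer(table->table[i], ctx);

      /* Read an entry (lookup side, under rcu_read_lock()): */
              ctx = rcu_dereference(table->table[id]);
      ----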
    • fs/aio: Add explicit RCU grace period when freeing kioctx · a6d7cff4
      Authored by Tejun Heo
      While fixing refcounting, e34ecee2 ("aio: Fix a trinity splat")
      incorrectly removed the explicit RCU grace period before freeing kioctx.
      The intention seems to have been to rely on the internal RCU grace
      periods of percpu_ref; however, percpu_ref uses a different flavor of
      RCU, sched-RCU.  This can lead to the kioctx being freed while
      RCU-read-protected dereferences are still in progress.
      
      Fix it by updating free_ioctx() to go through call_rcu() explicitly.
      
      v2: Comment added to explain double bouncing.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Jann Horn <jannh@google.com>
      Fixes: e34ecee2 ("aio: Fix a trinity splat")
      Cc: Kent Overstreet <kent.overstreet@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: stable@vger.kernel.org # v3.13+
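      A hedged sketch of the "double bounce" the v2 note refers to: the kioctx
      is first put through a normal RCU grace period, and only then handed to a
      workqueue because the actual teardown needs process context (helper names
      approximate the patch, not a verbatim copy):

      ----
      /* Sketch: the RCU callback runs in softirq context, so it only queues
       * the real work instead of freeing anything itself. */
      static void free_ioctx_rcufn(struct rcu_head *head)
      {
              struct kioctx *ctx = container_of(head, struct kioctx, free_rcu);

              INIT_WORK(&ctx->free_work, free_ioctx);
              schedule_work(&ctx->free_work);
      }

      static void free_ioctx_reqs(struct percpu_ref *ref)
      {
              struct kioctx *ctx = container_of(ref, struct kioctx, reqs);

              /* Wait a full (non-sched) RCU grace period so rcu_read_lock()
               * users of table->table[] cannot still see this ctx. */
              call_rcu(&ctx->free_rcu, free_ioctx_rcufn);
      }
      ----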
  13. 25 Oct 2017, 1 commit
    • locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE() · 6aa7de05
      Authored by Mark Rutland
      
      Please do not apply this to mainline directly; instead, please re-run the
      coccinelle script shown below and apply its output.
      
      For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
      preference to ACCESS_ONCE(), and new code is expected to use one of the
      former. So far, there's been no reason to change most existing uses of
      ACCESS_ONCE(), as these aren't harmful, and changing them results in
      churn.
      
      However, for some features, the read/write distinction is critical to
      correct operation. To distinguish these cases, separate read/write
      accessors must be used. This patch migrates (most) remaining
      ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following
      coccinelle script:
      
      ----
      // Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
      // WRITE_ONCE()
      
      // $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch
      
      virtual patch
      
      @ depends on patch @
      expression E1, E2;
      @@
      
      - ACCESS_ONCE(E1) = E2
      + WRITE_ONCE(E1, E2)
      
      @ depends on patch @
      expression E;
      @@
      
      - ACCESS_ONCE(E)
      + READ_ONCE(E)
      ----
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: davem@davemloft.net
      Cc: linux-arch@vger.kernel.org
      Cc: mpe@ellerman.id.au
      Cc: shuah@kernel.org
      Cc: snitzer@redhat.com
      Cc: thor.thayer@linux.intel.com
      Cc: tj@kernel.org
      Cc: viro@zeniv.linux.org.uk
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
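      For context, a hedged before/after example of the kind of trivial
      conversion the script performs (the variables here are made up for
      illustration):

      ----
      /* Before: ACCESS_ONCE() used for both a read and a write. */
              struct kioctx_table *table = ACCESS_ONCE(mm->ioctx_table);
              ACCESS_ONCE(ring->head) = head;

      /* After: the read/write distinction is explicit. */
              struct kioctx_table *table = READ_ONCE(mm->ioctx_table);
              WRITE_ONCE(ring->head, head);
      ----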
  14. 20 Sep 2017, 1 commit
  15. 09 Sep 2017, 1 commit
    • mm/migrate: new migrate mode MIGRATE_SYNC_NO_COPY · 2916ecc0
      Authored by Jérôme Glisse
      Introduce a new migration mode that allows offloading the copy to a
      device DMA engine.  This changes the migration workflow, and not every
      address_space migratepage callback can support it.

      This is intended to be used by migrate_vma(), which itself is used for
      things like HMM (see include/linux/hmm.h).

      No additional per-filesystem migratepage testing is needed.  This patch
      disables MIGRATE_SYNC_NO_COPY in all problematic migratepage() callbacks
      and adds a comment in each to explain why.  Any callback that wishes to
      support the new mode needs to be aware of how its migration flow differs
      from the other modes.

      Some of these callbacks take extra locks while copying (aio, zsmalloc,
      balloon, ...), and for DMA to be effective you want to copy multiple
      pages in one DMA operation.  But in the problematic cases you cannot
      easily hold the extra lock across multiple calls to this callback.

      The usual flow is:
      
      For each page {
       1 - lock page
       2 - call migratepage() callback
       3 - (extra locking in some migratepage() callback)
       4 - migrate page state (freeze refcount, update page cache, buffer
           head, ...)
       5 - copy page
       6 - (unlock any extra lock of migratepage() callback)
       7 - return from migratepage() callback
       8 - unlock page
      }
      
      The new mode MIGRATE_SYNC_NO_COPY:
       1 - lock multiple pages
      For each page {
       2 - call migratepage() callback
       3 - abort in all problematic migratepage() callbacks
       4 - migrate page state (freeze refcount, update page cache, buffer
           head, ...)
      } // finished all calls to migratepage() callback
       5 - DMA copy multiple pages
       6 - unlock all the pages
      
      To support MIGRATE_SYNC_NO_COPY in the problematic case we would need a
      new callback migratepages() (for instance) that deals with multiple
      pages in one transaction.
      
      Because the problematic cases are not important for current usage, I did
      not want to complicate this patchset even further for no good reason.
      
      Link: http://lkml.kernel.org/r/20170817000548.32038-14-jglisse@redhat.com
      Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
      Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: David Nellans <dnellans@nvidia.com>
      Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Mark Hairgrove <mhairgrove@nvidia.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Sherry Cheung <SCheung@nvidia.com>
      Cc: Subhash Gutti <sgutti@nvidia.com>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Cc: Bob Liu <liubo95@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
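      A hedged sketch of how a "problematic" callback such as aio's can opt
      out of the new mode (the exact error code and placement are assumptions
      based on the description above):

      ----
      /* Sketch: a migratepage() callback that takes an extra internal lock
       * around the copy cannot support MIGRATE_SYNC_NO_COPY, so reject it. */
      static int aio_migratepage(struct address_space *mapping,
                                 struct page *new, struct page *old,
                                 enum migrate_mode mode)
      {
              /* In this mode the copy happens later, outside this callback,
               * so the extra ctx->completion_lock could not be held across it. */
              if (mode == MIGRATE_SYNC_NO_COPY)
                      return -EINVAL;

              /* ... normal single-page migration path continues here ... */
              return MIGRATEPAGE_SUCCESS;
      }
      ----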
  16. 08 Sep 2017, 1 commit
  17. 05 Sep 2017, 1 commit
  18. 28 Jun 2017, 1 commit
  19. 20 Jun 2017, 2 commits
  20. 02 Mar 2017, 1 commit
  21. 25 Feb 2017, 1 commit
  22. 20 Feb 2017, 1 commit
  23. 15 Jan 2017, 1 commit
    • aio: fix lock dep warning · a12f1ae6
      Authored by Shaohua Li
      lockdep reports a warning.  file_start_write()/file_end_write() only
      acquire/release the freeze lock for regular files, so check for regular
      files on the aio side too.
      
      [  453.532141] ------------[ cut here ]------------
      [  453.533011] WARNING: CPU: 1 PID: 1298 at ../kernel/locking/lockdep.c:3514 lock_release+0x434/0x670
      [  453.533011] DEBUG_LOCKS_WARN_ON(depth <= 0)
      [  453.533011] Modules linked in:
      [  453.533011] CPU: 1 PID: 1298 Comm: fio Not tainted 4.9.0+ #964
      [  453.533011] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.9.0-1.fc24 04/01/2014
      [  453.533011]  ffff8803a24b7a70 ffffffff8196cffb ffff8803a24b7ae8 0000000000000000
      [  453.533011]  ffff8803a24b7ab8 ffffffff81091ee1 ffff8803a5dba700 00000dba00000008
      [  453.533011]  ffffed0074496f59 ffff8803a5dbaf54 ffff8803ae0f8488 fffffffffffffdef
      [  453.533011] Call Trace:
      [  453.533011]  [<ffffffff8196cffb>] dump_stack+0x67/0x9c
      [  453.533011]  [<ffffffff81091ee1>] __warn+0x111/0x130
      [  453.533011]  [<ffffffff81091f97>] warn_slowpath_fmt+0x97/0xb0
      [  453.533011]  [<ffffffff81091f00>] ? __warn+0x130/0x130
      [  453.533011]  [<ffffffff8191b789>] ? blk_finish_plug+0x29/0x60
      [  453.533011]  [<ffffffff811205d4>] lock_release+0x434/0x670
      [  453.533011]  [<ffffffff8198af94>] ? import_single_range+0xd4/0x110
      [  453.533011]  [<ffffffff81322195>] ? rw_verify_area+0x65/0x140
      [  453.533011]  [<ffffffff813aa696>] ? aio_write+0x1f6/0x280
      [  453.533011]  [<ffffffff813aa6c9>] aio_write+0x229/0x280
      [  453.533011]  [<ffffffff813aa4a0>] ? aio_complete+0x640/0x640
      [  453.533011]  [<ffffffff8111df20>] ? debug_check_no_locks_freed+0x1a0/0x1a0
      [  453.533011]  [<ffffffff8114793a>] ? debug_lockdep_rcu_enabled.part.2+0x1a/0x30
      [  453.533011]  [<ffffffff81147985>] ? debug_lockdep_rcu_enabled+0x35/0x40
      [  453.533011]  [<ffffffff812a92be>] ? __might_fault+0x7e/0xf0
      [  453.533011]  [<ffffffff813ac9bc>] do_io_submit+0x94c/0xb10
      [  453.533011]  [<ffffffff813ac2ae>] ? do_io_submit+0x23e/0xb10
      [  453.533011]  [<ffffffff813ac070>] ? SyS_io_destroy+0x270/0x270
      [  453.533011]  [<ffffffff8111d7b3>] ? mark_held_locks+0x23/0xc0
      [  453.533011]  [<ffffffff8100201a>] ? trace_hardirqs_on_thunk+0x1a/0x1c
      [  453.533011]  [<ffffffff813acb90>] SyS_io_submit+0x10/0x20
      [  453.533011]  [<ffffffff824f96aa>] entry_SYSCALL_64_fastpath+0x18/0xad
      [  453.533011]  [<ffffffff81119190>] ? trace_hardirqs_off_caller+0xc0/0x110
      [  453.533011] ---[ end trace b2fbe664d1cc0082 ]---
      
      Cc: Dmitry Monakhov <dmonakhov@openvz.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
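      A hedged sketch of the shape of the fix: freeze protection is taken at
      submission time but released at completion time (possibly by another
      thread), so the S_ISREG() check has to be applied on both sides and
      lockdep has to be told about the handoff (the __sb_* calls and comments
      here are approximations, not a verbatim copy of the patch):

      ----
      /* Submit side (aio write path): open-code file_start_write(), but only
       * for regular files, and tell lockdep the lock "left" this thread. */
              if (S_ISREG(file_inode(file)->i_mode)) {
                      __sb_start_write(file_inode(file)->i_sb, SB_FREEZE_WRITE, true);
                      __sb_writers_release(file_inode(file)->i_sb, SB_FREEZE_WRITE);
              }

      /* Completion side (aio_complete, possibly another thread): tell lockdep
       * we inherited the freeze protection, then release it. */
              if (S_ISREG(file_inode(file)->i_mode))
                      __sb_writers_acquired(file_inode(file)->i_sb, SB_FREEZE_WRITE);
              file_end_write(file);
      ----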