1. 07 Jul 2017, 1 commit
  2. 09 May 2017, 1 commit
  3. 04 May 2017, 5 commits
  4. 07 Mar 2017, 1 commit
  5. 25 Feb 2017, 2 commits
  6. 20 Feb 2017, 2 commits
  7. 14 Jan 2017, 1 commit
    • locking/atomic, kref: Add kref_read() · 2c935bc5
      Committed by Peter Zijlstra
      Since we need to change the implementation, stop exposing internals.
      
      Provide kref_read() to read the current reference count; typically
      used for debug messages.
      
      Kills two anti-patterns:
      
      	atomic_read(&kref->refcount)
      	kref->refcount.counter
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
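      The accessor pattern this commit introduces can be sketched in plain userspace C. The simplified struct kref below is an illustration, not the kernel implementation (the real kref wraps an atomic_t and lives in <linux/kref.h>):

      ```c
      #include <assert.h>

      /* Simplified stand-in for the kernel's struct kref. */
      struct kref { int refcount; };

      static void kref_init(struct kref *k) { k->refcount = 1; }
      static void kref_get(struct kref *k)  { k->refcount++; }

      /* The new accessor: callers read the count through one function
       * instead of poking at kref internals directly. */
      static int kref_read(const struct kref *k) { return k->refcount; }

      int main(void)
      {
          struct kref ref;
          kref_init(&ref);
          kref_get(&ref);

          /* Instead of atomic_read(&ref.refcount) or ref.refcount.counter: */
          assert(kref_read(&ref) == 2);
          return 0;
      }
      ```

      Hiding the counter behind kref_read() is what lets the later refcount_t conversion change the representation without touching every debug printout.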
  8. 15 Dec 2016, 2 commits
    • libceph: remove now unused finish_request() wrapper · 45ee2c1d
      Committed by Ilya Dryomov
      Kill the wrapper and rename __finish_request() to finish_request().
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • libceph: always signal completion when done · c297eb42
      Committed by Ilya Dryomov
      r_safe_completion is currently, and has always been, signaled only if
      on-disk ack was requested.  It's there for fsync and syncfs, which wait
      for in-flight writes to flush - all data write requests set ONDISK.
      
      However, the pool perm check code introduced in 4.2 sends a write
      request with only ACK set.  An unfortunately timed syncfs can then hang
      forever: r_safe_completion won't be signaled because only an unsafe
      reply was requested.
      
      We could patch ceph_osdc_sync() to skip !ONDISK write requests, but
      that is somewhat incomplete and yet another special case.  Instead,
      rename this completion to r_done_completion and always signal it when
      the OSD client is done with the request, whether unsafe, safe, or
      error.  This is a bit cleaner and helps with the cancellation code.
      Reported-by: Yan, Zheng <zyan@redhat.com>
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
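      The shape of the fix can be sketched in plain C. The struct and function names below are illustrative, not the libceph code; a bool stands in for the struct completion:

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Illustrative request; done_signaled stands in for r_done_completion. */
      struct osd_request {
          bool ondisk_requested;
          bool done_signaled;
      };

      /* Old behaviour: signal only when an on-disk ack was requested,
       * so an ACK-only request could leave syncfs waiters hanging. */
      static void finish_old(struct osd_request *req)
      {
          if (req->ondisk_requested)
              req->done_signaled = true;
      }

      /* New behaviour: always signal when the OSD client is done,
       * whether the reply was unsafe, safe, or an error. */
      static void finish_new(struct osd_request *req)
      {
          req->done_signaled = true;
      }

      int main(void)
      {
          struct osd_request ack_only = { .ondisk_requested = false };

          finish_old(&ack_only);
          assert(!ack_only.done_signaled);   /* this is where syncfs hung */

          finish_new(&ack_only);
          assert(ack_only.done_signaled);    /* waiter always woken */
          return 0;
      }
      ```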
  9. 13 Dec 2016, 1 commit
  10. 11 Nov 2016, 1 commit
  11. 25 Aug 2016, 3 commits
  12. 09 Aug 2016, 1 commit
  13. 28 Jul 2016, 3 commits
  14. 31 May 2016, 3 commits
    • libceph: use %s instead of %pE in dout()s · 4a3262b1
      Committed by Ilya Dryomov
      Commit d30291b9 ("libceph: variable-sized ceph_object_id") changed
      dout()s in what is now encode_request() and ceph_object_locator_to_pg()
      to use %pE, mostly to document that, although all rbd and cephfs object
      names are NULL-terminated strings, ceph_object_id will handle any RADOS
      object name, including ones containing NULs, just fine.
      
      However, it turns out that vbin_printf() can't handle anything but ints
      and %s - all %p suffixes are ignored.  The buffer a %p argument points
      to isn't recorded, resulting in trash in the messages if the buffer had
      been reused by the time bstr_printf() got to it.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
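      The failure mode is the classic one for any two-stage formatter that records arguments now and formats later. A hypothetical userspace sketch (record_fmt()/replay_fmt() are made up for illustration; they are not vbin_printf()/bstr_printf()):

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <string.h>

      /* A two-stage formatter that, like vbin_printf() for %p conversions,
       * records only the pointer, not the bytes it points to. */
      static const char *recorded;

      /* Record stage: saves the pointer only. */
      static void record_fmt(const char *s) { recorded = s; }

      /* Replay stage: dereferences the saved pointer much later. */
      static void replay_fmt(char *out, size_t n) { snprintf(out, n, "%s", recorded); }

      int main(void)
      {
          char buf[16];
          char msg[32];

          strcpy(buf, "object_a");
          record_fmt(buf);            /* dout()-time: pointer captured */
          strcpy(buf, "object_b");    /* buffer reused before replay */
          replay_fmt(msg, sizeof(msg));

          /* Replay sees the reused contents, not the recorded string. */
          assert(strcmp(msg, "object_b") == 0);
          return 0;
      }
      ```

      %s works in vbin_printf() precisely because the string bytes are copied into the binary record at record time; %p conversions only stash the pointer, hence the switch.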
    • libceph: put request only if it's done in handle_reply() · dc045a91
      Committed by Ilya Dryomov
      handle_reply() may be called twice on the same request: on ack and then
      on commit.  This occurs on btrfs-formatted OSDs or if cephfs sync write
      path is triggered - CEPH_OSD_FLAG_ACK | CEPH_OSD_FLAG_ONDISK.
      
      handle_reply() handles this with the help of done_request().
      
      Fixes: 5aea3dcd ("libceph: a major OSD client update")
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
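      The guard can be sketched like this; the names and the ref-counting shorthand are illustrative, not the actual libceph signatures:

      ```c
      #include <assert.h>
      #include <stdbool.h>

      struct req { int refs; bool want_ondisk; bool got_ondisk; };

      /* handle_reply() may run twice on the same request (ack, then
       * commit); the reference must be put exactly once, when done. */
      static bool done_request(const struct req *r)
      {
          return !r->want_ondisk || r->got_ondisk;
      }

      static void handle_reply(struct req *r, bool ondisk)
      {
          if (ondisk)
              r->got_ondisk = true;
          if (done_request(r))
              r->refs--;               /* put only if it's done */
      }

      int main(void)
      {
          struct req r = { .refs = 1, .want_ondisk = true };

          handle_reply(&r, false);     /* ack: not done yet, no put */
          assert(r.refs == 1);

          handle_reply(&r, true);      /* commit: done, put once */
          assert(r.refs == 0);
          return 0;
      }
      ```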
    • libceph: change ceph_osdmap_flag() to take osdc · b7ec35b3
      Committed by Ilya Dryomov
      For the benefit of every single caller, take osdc instead of map.
      Also, now that osdc->osdmap can't ever be NULL, drop the check.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
  15. 26 May 2016, 13 commits
    • libceph: make ceph_osdc_wait_request() uninterruptible · 0e76abf2
      Committed by Yan, Zheng
      ceph_osdc_wait_request() is used when cephfs issues sync IO.  In most
      cases, the sync IO should be uninterruptible.  Fix this by using the
      killable wait function in ceph_osdc_wait_request().
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
    • libceph: replace ceph_monc_request_next_osdmap() · 7cca78c9
      Committed by Ilya Dryomov
      ... with a wrapper around maybe_request_map() - no need for two
      osdmap-specific functions.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • libceph: pool deletion detection · 4609245e
      Committed by Ilya Dryomov
      This adds the "map check" infrastructure for sending osdmap version
      checks on CALC_TARGET_POOL_DNE and completing in-flight requests with
      -ENOENT if the target pool doesn't exist or has just been deleted.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • libceph: support for checking on status of watch · b07d3c4b
      Committed by Ilya Dryomov
      Implement ceph_osdc_watch_check() to be able to check on status of
      watch.  Note that the time it takes for a watch/notify event to get
      delivered through the notify_wq is taken into account.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • libceph: support for sending notifies · 19079203
      Committed by Ilya Dryomov
      Implement ceph_osdc_notify() for sending notifies.
      
      Due to the fact that the current messenger can't do read-in into
      pagelists (it can only do write-out from them), I had to go with a page
      vector for a NOTIFY_COMPLETE payload, for now.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • libceph, rbd: ceph_osd_linger_request, watch/notify v2 · 922dab61
      Committed by Ilya Dryomov
      This adds support and switches rbd to a new, more reliable version of
      watch/notify protocol.  As with the OSD client update, this is mostly
      about getting the right structures linked into the right places so that
      reconnects are properly sent when needed.  watch/notify v2 also
      requires sending regular pings to the OSDs - send_linger_ping().
      
      A major change from the old watch/notify implementation is the
      introduction of ceph_osd_linger_request - linger requests no longer
      piggy back on ceph_osd_request.  ceph_osd_event has been merged into
      ceph_osd_linger_request.
      
      All the details are now hidden within libceph, the interface consists
      of a simple pair of watch/unwatch functions and ceph_osdc_notify_ack().
      ceph_osdc_watch() does return ceph_osd_linger_request, but only to keep
      the lifetime management simple.
      
      ceph_osdc_notify_ack() accepts an optional data payload, which is
      relayed back to the notifier.
      
      Portions of this patch are loosely based on work by Douglas Fuller
      <dfuller@redhat.com> and Mike Christie <michaelc@cs.wisc.edu>.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • libceph: wait_request_timeout() · 42b06965
      Committed by Ilya Dryomov
      The unwatch timeout is currently implemented in rbd.  With
      watch/unwatch code moving into libceph, we are going to need
      a ceph_osdc_wait_request() variant with a timeout.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • libceph: request_init() and request_release_checks() · 3540bfdb
      Committed by Ilya Dryomov
      These are going to be used by request_reinit() code.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • libceph: a major OSD client update · 5aea3dcd
      Committed by Ilya Dryomov
      This is a major sync up, up to ~Jewel.  The highlights are:
      
      - per-session request trees (vs a global per-client tree)
      - per-session locking (vs a global per-client rwlock)
      - homeless OSD session
      - no ad-hoc global per-client lists
      - support for pool quotas
      - foundation for watch/notify v2 support
      - foundation for map check (pool deletion detection) support
      
      The switchover is incomplete: lingering requests can be set up and
      torn down but aren't ever reestablished.  This functionality is
      restored with the introduction of the new lingering infrastructure
      (ceph_osd_linger_request, linger_work, etc) in a later commit.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • libceph: protect osdc->osd_lru list with a spinlock · 9dd2845c
      Committed by Ilya Dryomov
      OSD client is getting moved from the big per-client lock to a set of
      per-session locks.  The big rwlock would only be held for read most of
      the time, so a global osdc->osd_lru needs additional protection.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • libceph: allocate ceph_osd with GFP_NOFAIL · 7a28f59b
      Committed by Ilya Dryomov
      create_osd() is called way too deep in the stack to be able to error
      out in a sane way; a failing create_osd() just messes everything up.
      The current req_notarget list solution is broken - the list is never
      traversed as it's not entirely clear when to do it, I guess.
      
      If we were to start traversing it at regular intervals and retrying
      each request, we wouldn't be far off from what __GFP_NOFAIL is doing,
      so allocate OSD sessions with __GFP_NOFAIL, at least until we come up
      with a better fix.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • libceph: osd_init() and osd_cleanup() · 0247a0cf
      Committed by Ilya Dryomov
      These are going to be used by homeless OSD sessions code.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • libceph: handle_one_map() · 42c1b124
      Committed by Ilya Dryomov
      Separate osdmap handling from decoding and iterating over a bag of maps
      in a fresh MOSDMap message.  This sets up the scene for the updated OSD
      client.
      
      Of particular importance here is the addition of pi->was_full, which
      can be used to answer "did this pool go full -> not-full in this map?".
      This is the key bit for supporting pool quotas.
      
      We won't be able to downgrade map_sem for much longer, so drop
      downgrade_write().
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
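      The was_full question reduces to a simple per-epoch predicate. A hedged sketch (struct and function names here are made up; in libceph the bit lives on the decoded pool info and is compared across map epochs):

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Hypothetical stand-in for per-pool osdmap state. */
      struct pool_info { bool was_full; };

      /* Answers "did this pool go full -> not-full in this map?",
       * the key bit for supporting pool quotas: a full -> not-full
       * transition means paused writes can be resent. */
      static void note_full_transition(struct pool_info *pi,
                                       bool old_full, bool new_full)
      {
          pi->was_full = old_full && !new_full;
      }

      int main(void)
      {
          struct pool_info pi;

          note_full_transition(&pi, true, false);
          assert(pi.was_full);     /* cleared: resume paused requests */

          note_full_transition(&pi, true, true);
          assert(!pi.was_full);    /* still full: keep waiting */
          return 0;
      }
      ```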