1. 17 Aug 2015 (10 commits)
  2. 22 Jul 2015 (1 commit)
  3. 21 Jul 2015 (2 commits)
  4. 17 Jul 2015 (1 commit)
  5. 16 Jul 2015 (1 commit)
  6. 02 Jul 2015 (1 commit)
  7. 01 Jul 2015 (1 commit)
    • rbd: use GFP_NOIO in rbd_obj_request_create() · 5a60e876
      Committed by Ilya Dryomov
      rbd_obj_request_create() is called on the main I/O path, so we need to
      use GFP_NOIO to make sure allocation doesn't blow back on us.  Not all
      callers need this, but I'm still hardcoding the flag inside rather than
      making it a parameter because a) this is going to stable, and b) those
      callers shouldn't really use rbd_obj_request_create() and will be fixed
      in the future.
      
      More memory allocation fixes will follow.
      
      Cc: stable@vger.kernel.org # 3.10+
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      Reviewed-by: Alex Elder <elder@linaro.org>
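      For context, a minimal sketch of the pattern this commit relies on
      (struct and function names below are illustrative, not the actual rbd
      code): an allocation made while servicing block I/O must use GFP_NOIO
      so that memory reclaim cannot recurse into the block layer and
      deadlock on the very device being written.

          #include <linux/slab.h>
          #include <linux/types.h>

          struct example_obj_request {
              u64 offset;
              u64 length;
          };

          /* Called on the I/O path, so the allocation must not trigger new I/O. */
          static struct example_obj_request *
          example_obj_request_create(u64 off, u64 len)
          {
              struct example_obj_request *req;

              req = kzalloc(sizeof(*req), GFP_NOIO);  /* not GFP_KERNEL */
              if (!req)
                  return NULL;

              req->offset = off;
              req->length = len;
              return req;
          }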
  8. 28 Jun 2015 (6 commits)
  9. 26 Jun 2015 (13 commits)
  10. 25 Jun 2015 (4 commits)
    • rbd: queue_depth map option · b5584180
      Committed by Ilya Dryomov
      nr_requests (/sys/block/rbd<id>/queue/nr_requests) is pretty much
      irrelevant in the blk-mq case because each driver sets its own max
      depth that it can handle, and that is the number of tags that gets
      preallocated on setup.  Users can't increase the queue depth beyond
      that value by writing to nr_requests.
      
      For rbd we are happy with the default BLKDEV_MAX_RQ (128) for most
      cases but we want to give users the opportunity to increase it.
      Introduce a new per-device queue_depth option to do just that:
      
          $ sudo rbd map -o queue_depth=1024 ...
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      Reviewed-by: Alex Elder <elder@linaro.org>
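      A minimal sketch of where such an option lands in a blk-mq driver
      (illustrative names, not the actual rbd code): the depth is fixed in
      the tag set before the queue is created, which is why writing to
      nr_requests later cannot raise it.

          #include <linux/blk-mq.h>
          #include <linux/numa.h>
          #include <linux/string.h>

          /* queue_depth would come from the parsed "queue_depth=N" map option. */
          static int example_init_tag_set(struct blk_mq_tag_set *set,
                                          const struct blk_mq_ops *ops,
                                          unsigned int queue_depth)
          {
              memset(set, 0, sizeof(*set));
              set->ops = ops;
              set->queue_depth = queue_depth;  /* tags are preallocated to this depth */
              set->numa_node = NUMA_NO_NODE;
              set->flags = BLK_MQ_F_SHOULD_MERGE;

              return blk_mq_alloc_tag_set(set);
          }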
    • rbd: store rbd_options in rbd_device · d147543d
      Committed by Ilya Dryomov
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      Reviewed-by: Alex Elder <elder@linaro.org>
    • rbd: terminate rbd_opts_tokens with Opt_err · 210c104c
      Committed by Ilya Dryomov
      Also nuke useless Opt_last_bool and don't break lines unnecessarily.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      Reviewed-by: Alex Elder <elder@linaro.org>
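      For reference, a minimal sketch of the token-table pattern being fixed
      (the entries here are illustrative; the real table is rbd_opts_tokens
      in drivers/block/rbd.c): match_token() walks the table until it hits
      an entry whose pattern is NULL, so the table must be terminated by a
      catch-all token such as Opt_err.

          #include <linux/parser.h>

          enum {
              Opt_queue_depth,
              Opt_read_only,
              Opt_err         /* catch-all */
          };

          static match_table_t example_opts_tokens = {
              {Opt_queue_depth, "queue_depth=%d"},
              {Opt_read_only, "ro"},
              {Opt_err, NULL}     /* terminator - keep this entry last */
          };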
    • rbd: bump queue_max_segments · d3834fef
      Committed by Ilya Dryomov
      The default queue_limits::max_segments value (BLK_MAX_SEGMENTS = 128)
      unnecessarily limits bio sizes to 512k (assuming 4k pages).  rbd, being
      a virtual block device, doesn't have any restrictions on the number of
      physical segments, so bump max_segments to max_hw_sectors, in theory
      allowing a sector per segment (although the only case this matters that
      I can think of is some readv/writev style thing).  In practice this is
      going to give us 1M bios - the number of segments in a bio is limited
      in bio_get_nr_vecs() by BIO_MAX_PAGES = 256.
      
      Note that this doesn't result in any improvement on a typical direct
      sequential test.  This is because on a box with not too badly
      fragmented memory the default BLK_MAX_SEGMENTS is enough to see
      nicely rbd-object-sized requests.  The only difference is the size
      of the bios being merged - 512k vs 1M for something like
      
          $ dd if=/dev/zero of=/dev/rbd0 oflag=direct bs=$RBD_OBJ_SIZE
          $ dd if=/dev/rbd0 iflag=direct of=/dev/null bs=$RBD_OBJ_SIZE
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      Reviewed-by: Alex Elder <elder@linaro.org>
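      A minimal sketch of the limit being raised (the helper name is
      illustrative; the real change is in rbd's queue setup): for a virtual
      device with no scatter-gather hardware limit, max_segments can simply
      track max_hw_sectors, in theory allowing one segment per 512-byte
      sector.

          #include <linux/blkdev.h>

          static void example_tune_queue_limits(struct request_queue *q,
                                                unsigned int max_hw_sectors)
          {
              /* largest request we accept, in 512-byte sectors */
              blk_queue_max_hw_sectors(q, max_hw_sectors);

              /*
               * The default BLK_MAX_SEGMENTS (128) would cap a bio at 512k
               * with 4k pages; a virtual device has no real segment limit,
               * so lift it to match the sector cap.
               */
              blk_queue_max_segments(q, max_hw_sectors);
          }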