1. 23 Sep 2014, 1 commit
  2. 09 Aug 2014, 1 commit
  3. 01 Aug 2014, 1 commit
  4. 30 Jul 2014, 1 commit
  5. 26 Jul 2014, 6 commits
  6. 25 Jul 2014, 3 commits
  7. 18 Jul 2014, 11 commits
  8. 04 Jul 2014, 1 commit
  9. 02 Jul 2014, 1 commit
    • block SG_IO: add SG_FLAG_Q_AT_HEAD flag · d1515613
      Authored by Douglas Gilbert
      After the SG_IO ioctl was copied into the block layer and
      later into the bsg driver, subtle differences emerged.
      
      One difference is the way injected commands are queued through
      the block layer (i.e. this is not SCSI device queueing nor SATA
      NCQ). Summarizing:
        - SG_IO on block layer device: blk_exec*(at_head=false)
        - sg device SG_IO: at_head=true
        - bsg device SG_IO: at_head=true
      
      Some time ago Boaz Harrosh introduced a sg v4 flag called
      BSG_FLAG_Q_AT_TAIL to override the bsg driver default. A
      recent patch titled: "sg: add SG_FLAG_Q_AT_TAIL flag"
      allowed the sg driver default to be overridden. This patch
      allows a SG_IO ioctl sent to a block layer device to have
      its default overridden.
      
      ChangeLog:
          - introduce SG_FLAG_Q_AT_HEAD flag in sg.h to cause
            commands that are injected via a block layer
            device SG_IO ioctl to set at_head=true
          - make comments clearer about queueing in sg.h since the
            header is used both by the sg device and block layer
            device implementations of the SG_IO ioctl.
          - introduce BSG_FLAG_Q_AT_HEAD in bsg.h for compatibility
            (it does nothing) and update comments.
      Signed-off-by: Douglas Gilbert <dgilbert@interlog.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Mike Christie <michaelc@cs.wisc.edu>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      d1515613
  10. 01 Jul 2014, 1 commit
  11. 12 Jun 2014, 1 commit
  12. 02 Jun 2014, 1 commit
  13. 20 May 2014, 1 commit
  14. 19 May 2014, 1 commit
  15. 10 Apr 2014, 1 commit
  16. 27 Mar 2014, 3 commits
  17. 25 Mar 2014, 1 commit
  18. 19 Mar 2014, 1 commit
    • libata, libsas: kill pm_result and related cleanup · bc6e7c4b
      Authored by Dan Williams
      Tejun says:
        "At least for libata, worrying about suspend/resume failures don't make
         whole lot of sense.  If suspend failed, just proceed with suspend.  If
         the device can't be woken up afterwards, that's that.  There isn't
         anything we could have done differently anyway.  The same for resume, if
         spinup fails, the device is dud and the following commands will invoke
         EH actions and will eventually fail.  Again, there really isn't any
         *choice* to make.  Just making sure the errors are handled gracefully
         (ie. don't crash) and the following commands are handled correctly
         should be enough."
      
      The only libata user that actually cares about the result from a suspend
      operation is libsas.  However, it only cares about whether queuing a new
      operation collides with an in-flight one.  All libsas does with the
      error is retry, but we can just let libata wait for the previous
      operation before continuing.
      
      Other cleanups include:
      1/ Unifying all ata port pm operations on an ata_port_pm_ prefix
      2/ Marking all ata port pm helper routines as returning void, only
         ata_port_pm_ entry points need to fake a 0 return value.
      3/ Killing ata_port_{suspend|resume}_common() in favor of calling
         ata_port_request_pm() directly
      4/ Killing the wrappers that just do a to_ata_port() conversion
      5/ Clearly marking the entry points that do async operations with an
         _async suffix.
      
      Reference: http://marc.info/?l=linux-scsi&m=138995409532286&w=2
      
      Cc: Phillip Susi <psusi@ubuntu.com>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Suggested-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Todd Brandt <todd.e.brandt@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      bc6e7c4b
  19. 18 Mar 2014, 1 commit
    • SCSI/libiscsi: Add check_protection callback for transports · 55e51eda
      Authored by Sagi Grimberg
      iSCSI needs to be at least aware that a task involves protection
      information.  In case it does, after the transaction completed libiscsi
      will ask the transport to check the protection status of the
      transaction.
      
      Unlike transport errors, DIF errors should not prevent successful
      completion of the transaction from the transport point of view, but
      should be escalated to the scsi mid-layer when constructing the scsi
      result and sense data.
      
      The check_protection routine returns the ascq corresponding to the
      DIF error that occurred (or 0 if no error happened).
      
      return ascq:
      - 0x1: GUARD_CHECK_FAILED
      - 0x2: APPTAG_CHECK_FAILED
      - 0x3: REFTAG_CHECK_FAILED
      Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
      Signed-off-by: Alex Tabachnik <alext@mellanox.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      55e51eda
  20. 16 Mar 2014, 2 commits
    • [SCSI] do not manipulate device reference counts in scsi_get/put_command · 04796336
      Authored by Christoph Hellwig
      Many callers won't need this and we can optimize them away.  In addition,
      the handling in the __-prefixed variants was inconsistent to start with.
      
      Based on an earlier patch from Bart Van Assche.
      
      [jejb: fix kerneldoc problem picked up by Fengguang Wu]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: James Bottomley <JBottomley@Parallels.com>
      04796336
    • [SCSI] libiscsi: Reduce locking contention in fast path · 659743b0
      Authored by Shlomo Pongratz
      Replace the session lock with two locks, a forward lock and
      a backwards lock named frwd_lock and back_lock respectively.
      
      The forward lock protects resources that change while sending a
      request to the target, such as cmdsn, queued_cmdsn, and allocating
      task from the commands' pool with kfifo_out.
      
      The backward lock protects resources that change while processing
      a response or in error path, such as cmdsn_exp, cmdsn_max, and
      returning tasks to the commands' pool with kfifo_in.
      
      Under a steady state fast-path situation, that is when one
      or more processes/threads submit IO to an iscsi device and
      a single kernel upcall (e.g softirq) is dealing with processing
      of responses without errors, this patch eliminates the contention
      between the queuecommand()/request response/scsi_done() flows
      associated with iscsi sessions.
      
      Between the forward and the backward locks exists a strict locking
      hierarchy. The mutual exclusion zone protected by the forward lock can
      enclose the mutual exclusion zone protected by the backward lock but not
      vice versa.
      
      For example, in iscsi_conn_teardown or in iscsi_xmit_data when there is
      a failure and __iscsi_put_task is called, the backward lock is taken while
      the forward lock is still held. On the other hand, if in the RX path a nop
      is to be sent, for example in iscsi_handle_reject or __iscsi_complete_pdu,
      then the backward lock is released and the forward lock is taken for the
      duration of iscsi_send_nopout; afterwards the forward lock is released and
      the backward lock is retaken.
      
      libiscsi_tcp uses two kernel fifos: the r2t pool and the r2t queue.
      
      The insertion into and removal from these fifos did not correspond to the
      assumptions made by the new forward/backward session locking paradigm.
      
      That is, in iscsi_tcp_cleanup_task, which belongs to the RX (backward)
      path, an r2t is taken out of the r2t queue and inserted into the r2t pool.
      In iscsi_tcp_get_curr_r2t, which belongs to the TX (forward) path, an r2t
      is also inserted into the r2t pool and another r2t is pulled from the r2t
      queue.
      
      Only in iscsi_tcp_r2t_rsp, which is called in the RX path but can requeue
      to the TX path, is an r2t taken from the r2t pool and inserted into the
      r2t queue.
      
      In order to cope with this situation, two spin locks were added,
      pool2queue and queue2pool. The former protects extracting from the
      r2t pool and inserting into the r2t queue, and the latter protects
      extracting from the r2t queue and inserting into the r2t pool.
      Signed-off-by: Shlomo Pongratz <shlomop@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      [minor fix up to apply cleanly and compile fix]
      Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
      Signed-off-by: James Bottomley <JBottomley@Parallels.com>
      659743b0