1. 09 Jun 2009, 2 commits
    • [SCSI] fcoe: removes fcoe_watchdog · 1047f221
      Committed by Vasu Dev
      Removes the periodic fcoe_watchdog timer that was shared across all
      fcoe interfaces maintained in fcoe_hostlist, and instead adds a new
      fcoe_queue_timer per fcoe interface.
      
      The added timer is armed only when pending skbs need to be flushed,
      as opposed to the periodic one-second fcoe_watchdog; since
      fcoe_queue_timer is used on demand, its delay is set to 2 jiffies.
      
      The fcoe_queue_timer is also much simpler than fcoe_watchdog, which
      had to take a lock to process all fcoe interfaces in fcoe_hostlist.
      
      I noticed a positive performance result with the 2-jiffy timer, as
      it helps flush fcoe_pending_queue quickly.
      Signed-off-by: Vasu Dev <vasu.dev@intel.com>
      Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
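
      A minimal sketch of the on-demand arming described above. The struct
      and function names here are illustrative assumptions, not the driver's
      exact layout; only fcoe_pending_queue, fcoe_queue_timer, and the
      2-jiffy delay come from the commit text:

              #include <linux/jiffies.h>
              #include <linux/skbuff.h>
              #include <linux/timer.h>

              struct fcoe_iface {             /* illustrative, not the real struct */
                      struct sk_buff_head fcoe_pending_queue;
                      struct timer_list   fcoe_queue_timer;
              };

              /* Queue an skb and arm the per-interface timer only if it is
               * not already pending; a 2-jiffy delay suffices because the
               * timer now runs only while something is waiting to be flushed. */
              static void fcoe_defer_skb(struct fcoe_iface *fi, struct sk_buff *skb)
              {
                      skb_queue_tail(&fi->fcoe_pending_queue, skb);
                      if (!timer_pending(&fi->fcoe_queue_timer))
                              mod_timer(&fi->fcoe_queue_timer, jiffies + 2);
              }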
    • [SCSI] fcoe: reduces lock cost when adding a new skb to fcoe_pending_queue · 4bb6b515
      Committed by Vasu Dev
      Currently fcoe_pending_queue.lock is taken twice for every new skb
      added to this queue whenever at least one packet is already pending,
      which is not uncommon once skbs start getting queued here upon
      fcoe_start_io => dev_queue_xmit failure.
      
      This patch moves most of the fcoe_pending_queue logic into the
      fcoe_check_wait_queue function; the new logic grabs
      fcoe_pending_queue.lock only once to add a new skb instead of twice
      as before.
      
      After this patch, the call flow around fcoe_check_wait_queue in
      fcoe_xmit is a bit simpler, with the modified fcoe_check_wait_queue
      taking care of both adding and removing pending skbs in one function.
      Signed-off-by: Vasu Dev <vasu.dev@intel.com>
      Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
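
      A rough sketch of the single-acquisition pattern described above,
      reusing the illustrative fcoe_iface from the previous sketch; the
      signature and the drain step are simplified assumptions, not the
      exact fcoe code:

              #include <linux/skbuff.h>
              #include <linux/spinlock.h>

              /* Take fcoe_pending_queue.lock once: enqueue the new skb, then
               * try to drain the backlog under that same acquisition, instead
               * of locking once to enqueue and a second time to inspect the
               * queue. */
              static void fcoe_check_wait_queue(struct fcoe_iface *fi, struct sk_buff *skb)
              {
                      spin_lock_bh(&fi->fcoe_pending_queue.lock);
                      if (skb)
                              __skb_queue_tail(&fi->fcoe_pending_queue, skb);
                      /* drain loop elided: __skb_dequeue() each skb and hand it
                       * to the netdev, requeueing and rearming the timer on
                       * failure */
                      spin_unlock_bh(&fi->fcoe_pending_queue.lock);
              }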
  2. 24 May 2009, 1 commit
  3. 21 May 2009, 1 commit
  4. 27 Apr 2009, 4 commits
  5. 03 Apr 2009, 12 commits
  6. 14 Mar 2009, 3 commits
  7. 10 Mar 2009, 8 commits
  8. 07 Mar 2009, 1 commit
    • [SCSI] libfc, fcoe: fixed locking issues with lport->lp_mutex around lport->link_status · bc0e17f6
      Committed by Vasu Dev
      fcoe_xmit could call fc_pause when the pending skb queue length was
      larger than FCOE_MAX_QUEUE_DEPTH; fc_pause then tried to grab
      lport->lp_mutex to change lport->link_status, and that had these
      issues:
      
      1. fcoe_xmit was getting called with bottom halves disabled, causing
      "BUG: scheduling while atomic" when grabbing lport->lp_mutex with bh
      disabled.
      
      2. fc_linkup and fc_linkdown call the lport_enter functions with
      lport->lp_mutex held, and these enter functions in turn call fcoe_xmit
      to send lport-related FC frames, e.g. fc_linkup => fc_lport_enter_flogi
      to send a FLOGI request. In this case, grabbing the same
      lport->lp_mutex again in fc_pause from fcoe_xmit would deadlock.
      
      The lport->lp_mutex was used for setting FC_PAUSE in the fcoe_xmit
      path, but the FC_PAUSE bit was not used anywhere besides being set and
      cleared in lport->link_status. A separate qfull field in fc_lport is
      used instead, eliminating the need for lport->lp_mutex to track the
      pending-queue-full condition and in turn avoiding the two locking
      issues described above.
      
      Also added a check of lp->qfull in fc_fcp_lport_queue_ready to trigger
      SCSI_MLQUEUE_HOST_BUSY while lp->qfull is set, preventing more scsi-ml
      commands from being queued.
      
      This patch eliminated FC_LINK_UP and FC_PAUSE and instead used
      dedicated fields in fc_lport, which simplified all the related
      conditional code.
      
      Also removed the fc_pause and fc_unpause functions; fcoe now uses the
      newly added lport->qfull directly.
      Signed-off-by: Vasu Dev <vasu.dev@intel.com>
      Signed-off-by: Robert Love <robert.w.love@intel.com>
      Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
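
      A simplified illustration of the qfull approach described above. Only
      the qfull field, FCOE_MAX_QUEUE_DEPTH, SCSI_MLQUEUE_HOST_BUSY, and
      fc_fcp_lport_queue_ready come from the commit text; the struct subset,
      the placeholder depth value, and fcoe_update_qfull are hypothetical:

              #define FCOE_MAX_QUEUE_DEPTH 256        /* placeholder value */

              struct fc_lport_slim {          /* illustrative subset of fc_lport */
                      int qfull;              /* pending queue full; no mutex needed */
                      int link_up;            /* replaces the FC_LINK_UP status bit */
              };

              /* fcoe_xmit path: update qfull without taking lport->lp_mutex,
               * so this is safe with bottom halves disabled and with lp_mutex
               * already held by a caller. */
              static void fcoe_update_qfull(struct fc_lport_slim *lp, int depth)
              {
                      lp->qfull = depth > FCOE_MAX_QUEUE_DEPTH;
              }

              /* scsi-ml side: while qfull is set, fc_fcp_lport_queue_ready
               * reports not-ready so the caller can return
               * SCSI_MLQUEUE_HOST_BUSY and hold off further commands. */
              static int fc_fcp_lport_queue_ready(struct fc_lport_slim *lp)
              {
                      return lp->link_up && !lp->qfull;
              }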
  9. 31 Dec 2008, 1 commit
  10. 30 Dec 2008, 1 commit