1. 24 October 2011, 32 commits
  2. 11 October 2011, 1 commit
  3. 16 September 2011, 1 commit
    • target: Fix race between multiple invocations of target_qf_do_work() · bcac364a
      Roland Dreier committed
      When work is scheduled with schedule_work(), the work can end up
      running on multiple CPUs at the same time -- this happens if
      the work is already running on one CPU and schedule_work() is called
      on another CPU.  This leads to list corruption with target_qf_do_work(),
      which is roughly doing:
      
      	spin_lock(...);
      	list_for_each_entry_safe(...) {
      		list_del(...);
      		spin_unlock(...);	// lock dropped: a second CPU can now enter the loop
      
      		// do stuff
      
      		spin_lock(...);		// reacquire before touching the list again
      	}
      
      With multiple CPUs running this code, one CPU can end up deleting the
      list entry that the other CPU is about to work on.
      
      Fix this by splicing the list entries onto a local list and then
      operating on that in the work function.  This way, each invocation of
      target_qf_do_work() operates on its own local list and so multiple
      invocations don't corrupt each other's list.  This also avoids dropping
      and reacquiring the lock for each list entry.
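      
      A minimal sketch of the splice-onto-a-local-list pattern described above
      (lock, field, and function names follow target core style but are
      illustrative rather than the exact upstream diff):
      
      	static void target_qf_do_work(struct work_struct *work)
      	{
      		struct se_device *dev = container_of(work, struct se_device,
      						     qf_work_queue);
      		LIST_HEAD(qf_cmd_list);		/* private to this invocation */
      		struct se_cmd *cmd, *cmd_tmp;
      
      		/* Move every pending entry across in one step, under the lock. */
      		spin_lock_irq(&dev->qf_cmd_lock);
      		list_splice_init(&dev->qf_cmd_list, &qf_cmd_list);
      		spin_unlock_irq(&dev->qf_cmd_lock);
      
      		/* No other CPU can see the local list, so no locking is
      		 * needed while walking it. */
      		list_for_each_entry_safe(cmd, cmd_tmp, &qf_cmd_list, se_qf_node) {
      			list_del(&cmd->se_qf_node);
      			/* ... process the queue-full retry for cmd ... */
      		}
      	}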
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
  4. 23 August 2011, 6 commits
    • target: Make locking in transport_deregister_session() IRQ safe · e63a8e19
      Roland Dreier committed
      At least the tcm_qla2xxx fabric driver calls into transport_deregister_session()
      while holding an IRQ-disabled spinlock, so the inner locking needs to
      use spin_lock_irqsave() instead of spin_lock_bh().
      
      This fixes warnings seen with tcm_qla2xxx like:
      
          WARNING: at kernel/softirq.c:159 local_bh_enable_ip+0x98/0xb0()
          Call Trace:
           [<ffffffff8104e65f>] warn_slowpath_common+0x7f/0xc0
           [<ffffffff8104e6ba>] warn_slowpath_null+0x1a/0x20
           [<ffffffff81055368>] local_bh_enable_ip+0x98/0xb0
           [<ffffffff814d5284>] _raw_spin_unlock_bh+0x14/0x20
           [<ffffffffa027b7f6>] transport_deregister_session+0x96/0x180 [target_core_mod]
           [<ffffffffa00f7731>] tcm_qla2xxx_free_session+0xd1/0x170 [tcm_qla2xxx]
           [<ffffffffa01b9173>] qla_tgt_sess_put+0xc3/0x140 [qla2xxx]
           [<ffffffffa01bf40f>] qla_tgt_stop_phase1+0x8f/0x2c0 [qla2xxx]
           [<ffffffffa00f735e>] tcm_qla2xxx_tpg_store_enable+0x6e/0xd0 [tcm_qla2xxx]
           [<ffffffffa026ca29>] target_fabric_tpg_attr_store+0x39/0x40 [target_core_mod]
           [<ffffffffa00a575d>] configfs_write_file+0xbd/0x120 [configfs]
           [<ffffffff811464a6>] vfs_write+0xc6/0x180
           [<ffffffff811467c1>] sys_write+0x51/0x90
           [<ffffffff814dd382>] system_call_fastpath+0x16/0x1b
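      
      For reference, a hedged sketch of the irqsave pattern the fix switches to
      (lock and list-member names are illustrative):
      
      	unsigned long flags;
      
      	/* Unlike spin_lock_bh()/spin_unlock_bh(), this is safe when the
      	 * caller already holds an IRQ-disabled spinlock. */
      	spin_lock_irqsave(&se_tpg->session_lock, flags);
      	list_del(&se_sess->sess_list);
      	spin_unlock_irqrestore(&se_tpg->session_lock, flags);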
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    • target: Fix task SGL chaining breakage with transport_allocate_data_tasks · c3c74c7a
      Nicholas Bellinger committed
      This patch fixes two bugs associated with transport_do_task_sg_chain()
      operation where transport_allocate_data_tasks() was incorrectly setting
      task_padded_sg for all tasks, and causing bogus task->task_sg_nents
      assignments + OOPsen with fabrics depending upon this code.  The first bit
      here adds a task_sg_nents_padded check in transport_allocate_data_tasks()
      to include an extra SGL vector when necessary for tasks that expect to
      be linked using sg_chain().
      
      The second change involves making transport_do_task_sg_chain() properly
      account for the extra SGL vector when task->task_padded_sg is set for
      the non trailing ->task_sg or single ->task_sg allocations.  Note this
      patch also removes the BUG_ON(!task->task_padded_sg) check within
      transport_do_task_sg_chain() as we expect this to happen normally
      with the updated logic in transport_allocate_data_tasks(), along with
      being bogus for CONTROL_SG_IO_CDB type payloads.
      
      So far this bugfix has been tested with tcm_qla2xxx and iblock backends
      in (task_count > 1) and (task_count == 1) operation.
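      
      A hedged sketch of the padding logic described above (the flags
      task_has_successor and fabric_uses_sg_chaining are hypothetical names
      standing in for the real checks):
      
      	/* Reserve one extra scatterlist entry when this task's SGL will
      	 * later be linked to the next task's via sg_chain(), since the
      	 * chain link itself consumes a slot. */
      	unsigned int sg_nents = DIV_ROUND_UP(task_size, PAGE_SIZE);
      	bool padded = task_has_successor && fabric_uses_sg_chaining;
      
      	task->task_sg = kcalloc(sg_nents + (padded ? 1 : 0),
      				sizeof(struct scatterlist), GFP_KERNEL);
      	sg_init_table(task->task_sg, sg_nents + (padded ? 1 : 0));
      	task->task_padded_sg = padded;
      
      	/* Later, when chaining, account for the padded slot so the chain
      	 * entry lands in the reserved position:
      	 *	sg_chain(prev->task_sg, prev_sg_nents + 1, task->task_sg);
      	 */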
      Reported-by: Kiran Patil <kiran.patil@intel.com>
      Cc: Kiran Patil <kiran.patil@intel.com>
      Cc: Andy Grover <agrover@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    • target: Fix task count > 1 handling breakage and use max_sector page alignment · 525a48a2
      Nicholas Bellinger committed
      This patch addresses recent breakage with multiple se_task (task_count > 1)
      operation following backend dev->se_sub_dev->se_dev_attrib.max_sectors in new
      transport_allocate_data_tasks() code.  The initial bug here was a bogus
      task->task_sg_nents assignment in transport_allocate_data_tasks() based on
      the passed parameter, which now uses DIV_ROUND_UP(task_size, PAGE_SIZE) to
      determine the proper number of per task SGL entries for the (task_count > 1)
      case.
      
      This also means we now need to enforce a PAGE_SIZE aligned max_sectors
      value for this to work as expected, without bringing back the pre-v3.1
      transport_map_mem_to_sg() logic to handle SGL offsets across multiple tasks.
      So this patch adds se_dev_align_max_sectors() to round down max_sectors as
      necessary to ensure this alignment via se_dev_set_default_attribs(), and
      keeps things simple for (task_count > 1) operation.
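      
      A minimal sketch of the rounding helper described above (the exact
      upstream implementation may differ in detail):
      
      	/* Round max_sectors down so (max_sectors * block_size) is a
      	 * multiple of PAGE_SIZE; fall back to one page's worth of
      	 * blocks if rounding down would reach zero. */
      	static u32 se_dev_align_max_sectors(u32 max_sectors, u32 block_size)
      	{
      		u32 aligned_bytes = rounddown(max_sectors * block_size,
      					      PAGE_SIZE);
      
      		if (!aligned_bytes)
      			return PAGE_SIZE / block_size;
      		return aligned_bytes / block_size;
      	}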
      
      So far this bugfix has been tested with (task_count > 1) operation
      using iscsi-target and iblock backends.
      Reported-by: Chris Boot <bootc@bootc.net>
      Cc: Kiran Patil <kiran.patil@intel.com>
      Cc: Andy Grover <agrover@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    • target: Add missing DATA_SG_IO transport_cmd_get_valid_sectors check · 01cde4d5
      Nicholas Bellinger committed
      This patch adds the missing transport_cmd_get_valid_sectors() check for
      SCF_SCSI_DATA_SG_IO_CDB type payloads to ensure that a received LBA + range
      does not extend past the end of the associated backend struct se_device.
      
      This patch also fixes a bug in the failure path of transport_new_cmd_obj()
      where this check can fail, so change to use a signed 'rc' and return '-EINVAL'
      to signal proper transport_generic_request_failure() handling.
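      
      A hedged sketch of the kind of bounds check being added (assumes the
      backend's get_blocks() method returns the last addressable LBA):
      
      	/* Reject a DATA_SG_IO command whose LBA + sector count runs
      	 * past the last addressable block of the backend device. */
      	if (cmd->t_task_lba + sectors >
      	    dev->transport->get_blocks(dev) + 1) {
      		pr_err("LBA: %llu + sectors: %u exceeds device end\n",
      		       cmd->t_task_lba, sectors);
      		return -EINVAL;
      	}
      	return 0;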
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    • target: Fix SYNCHRONIZE_CACHE zero LBA + range breakage · 7abbe7f3
      Nicholas Bellinger committed
      This patch fixes a SYNCHRONIZE_CACHE CDB handling bug with IBLOCK/FILEIO
      backends where transport_cmd_get_valid_sectors() was incorrectly rejecting
      a zero LBA + range CDB from being processed, and returning CHECK_CONDITION.
      
      This includes changing transport_cmd_get_valid_sectors() to return '0' on
      success and '-EINVAL' on failure (this makes more sense than sectors),
      and to only check transport_cmd_get_valid_sectors() when a non-zero LBA +
      range SYNCHRONIZE_CACHE operation has been received for the non-passthrough
      case.
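      
      A short sketch of the intended behaviour (simplified; 'sectors' stands
      for the range extracted from the CDB):
      
      	/* SYNCHRONIZE_CACHE with LBA 0 and NUMBER OF LOGICAL BLOCKS 0
      	 * means "flush the whole device" per SBC and must not be
      	 * bounds-checked; only validate an explicit LBA + range. */
      	if (cmd->t_task_lba || sectors) {
      		if (transport_cmd_get_valid_sectors(cmd) < 0)
      			return -EINVAL;	/* -> CHECK_CONDITION */
      	}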
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    • target: Fix WRITE_SAME usage with transport_get_size · 12850626
      Nicholas Bellinger committed
      For all flavours of WRITE_SAME, we only expect to handle a single block
      of data-out buffer payload, regardless of the number of logical blocks
      presented in the CDB.  This patch changes all flavours of WRITE_SAME in
      transport_generic_cmd_sequencer() to pass '1' into transport_get_size()
      instead of the extracted 'sectors' to properly handle the default usage
      of sg_write_same without the --xferlen parameter.
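      
      An illustrative fragment of the change in transport_generic_cmd_sequencer()
      (surrounding context simplified):
      
      	case WRITE_SAME_16:
      		/* The data-out payload carries exactly one logical block,
      		 * no matter how many sectors the CDB covers. */
      		size = transport_get_size(1, cdb, cmd);	/* was: sectors */
      		break;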
      Reported-by: Eric Seppanen <eric@purestorage.com>
      Signed-off-by: Nicholas Bellinger <nab@risingtidesystems.com>