1. 24 Oct 2011, 25 commits
  2. 11 Oct 2011, 1 commit
  3. 16 Sep 2011, 1 commit
    • target: Fix race between multiple invocations of target_qf_do_work() · bcac364a
      Roland Dreier authored
      When work is scheduled with schedule_work(), the work can end up
      running on multiple CPUs at the same time -- this happens if
      the work is already running on one CPU and schedule_work() is called
      on another CPU.  This leads to list corruption with target_qf_do_work(),
      which is roughly doing:
      
      	spin_lock(...);
      	list_for_each_entry_safe(...) {
      		list_del(...);
      		spin_unlock(...);
      
      		// do stuff
      
      		spin_lock(...);
      	}
      
      With multiple CPUs running this code, one CPU can end up deleting the
      list entry that the other CPU is about to work on.
      
      Fix this by splicing the list entries onto a local list and then
      operating on that in the work function.  This way, each invocation of
      target_qf_do_work() operates on its own local list and so multiple
      invocations don't corrupt each other's list.  This also avoids dropping
      and reacquiring the lock for each list entry.
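      For illustration, the splice-to-local-list pattern described above can be
      sketched like this (the qf_cmd_lock / qf_cmd_list / se_qf_node names follow
      the target_core naming of the time, but this is an assumption-laden sketch,
      not the upstream diff itself):

          #include <linux/list.h>
          #include <linux/spinlock.h>
          #include <target/target_core_base.h>  /* struct se_device, struct se_cmd */

          static void qf_do_work_sketch(struct se_device *dev)
          {
              struct se_cmd *cmd, *cmd_tmp;
              LIST_HEAD(qf_cmd_list);  /* local, per-invocation list */

              /* Claim every pending entry in one step while holding the lock */
              spin_lock(&dev->qf_cmd_lock);
              list_splice_init(&dev->qf_cmd_list, &qf_cmd_list);
              spin_unlock(&dev->qf_cmd_lock);

              /* Each invocation now walks only the entries it claimed above */
              list_for_each_entry_safe(cmd, cmd_tmp, &qf_cmd_list, se_qf_node) {
                  list_del(&cmd->se_qf_node);

                  /* do stuff: re-dispatch the queue-full command */
              }
          }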
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
  4. 23 Aug 2011, 8 commits
    • target: Make locking in transport_deregister_session() IRQ safe · e63a8e19
      Roland Dreier authored
      At least the tcm_qla2xxx fabric driver calls into transport_deregister_session()
      while holding an IRQ-disabled spinlock, so the inner locking needs to
      use spin_lock_irqsave() instead of spin_lock_bh().
      
      This fixes warnings seen with tcm_qla2xxx like:
      
          WARNING: at kernel/softirq.c:159 local_bh_enable_ip+0x98/0xb0()
          Call Trace:
           [<ffffffff8104e65f>] warn_slowpath_common+0x7f/0xc0
           [<ffffffff8104e6ba>] warn_slowpath_null+0x1a/0x20
           [<ffffffff81055368>] local_bh_enable_ip+0x98/0xb0
           [<ffffffff814d5284>] _raw_spin_unlock_bh+0x14/0x20
           [<ffffffffa027b7f6>] transport_deregister_session+0x96/0x180 [target_core_mod]
           [<ffffffffa00f7731>] tcm_qla2xxx_free_session+0xd1/0x170 [tcm_qla2xxx]
           [<ffffffffa01b9173>] qla_tgt_sess_put+0xc3/0x140 [qla2xxx]
           [<ffffffffa01bf40f>] qla_tgt_stop_phase1+0x8f/0x2c0 [qla2xxx]
           [<ffffffffa00f735e>] tcm_qla2xxx_tpg_store_enable+0x6e/0xd0 [tcm_qla2xxx]
           [<ffffffffa026ca29>] target_fabric_tpg_attr_store+0x39/0x40 [target_core_mod]
           [<ffffffffa00a575d>] configfs_write_file+0xbd/0x120 [configfs]
           [<ffffffff811464a6>] vfs_write+0xc6/0x180
           [<ffffffff811467c1>] sys_write+0x51/0x90
           [<ffffffff814dd382>] system_call_fastpath+0x16/0x1b
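      A minimal sketch of the locking change, assuming placeholder structures (the
      session list and lock fields below are illustrative, not the driver's own):

          #include <linux/list.h>
          #include <linux/spinlock.h>

          struct example_session {
              struct list_head sess_list;
          };

          struct example_tpg {
              spinlock_t session_lock;
              struct list_head sessions;
          };

          static void deregister_session_sketch(struct example_tpg *tpg,
                                                struct example_session *sess)
          {
              unsigned long flags;

              /*
               * Before: spin_lock_bh(&tpg->session_lock);
               * Re-enabling bottom halves from a caller that already holds an
               * IRQ-disabled spinlock triggers the local_bh_enable_ip warning
               * shown above, so the irqsave variant is used instead.
               */
              spin_lock_irqsave(&tpg->session_lock, flags);
              list_del(&sess->sess_list);
              spin_unlock_irqrestore(&tpg->session_lock, flags);
          }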
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    • target: Fix task SGL chaining breakage with transport_allocate_data_tasks · c3c74c7a
      Nicholas Bellinger authored
      This patch fixes two bugs associated with transport_do_task_sg_chain()
      operation where transport_allocate_data_tasks() was incorrectly setting
      task_padded_sg for all tasks, and causing bogus task->task_sg_nents
      assignments + OOPsen with fabrics depending upon this code.  The first bit
      here adds a task_sg_nents_padded check in transport_allocate_data_tasks()
      to include an extra SGL vector when necessary for tasks that expect to
      be linked using sg_chain().
      
      The second change involves making transport_do_task_sg_chain() properly
      account for the extra SGL vector when task->task_padded_sg is set for
      the non trailing ->task_sg or single ->task_sg allocations.  Note this
      patch also removes the BUG_ON(!task->task_padded_sg) check within
      transport_do_task_sg_chain() as we expect this to happen normally
      with the updated logic in transport_allocate_data_tasks(), along with
      being bogus for CONTROL_SG_IO_CDB type payloads.
      
      So far this bugfix has been tested with tcm_qla2xxx and iblock backends
      in (task_count > 1) and (task_count == 1) operation.
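      For illustration, the padded-SGL chaining can be sketched as below (the
      allocation helper and variable names are invented; sg_chain() itself is the
      standard scatterlist API):

          #include <linux/scatterlist.h>
          #include <linux/slab.h>

          /* Allocate a per-task SGL, reserving one extra slot when this table
           * will later be linked to the next task's table via sg_chain(). */
          static struct scatterlist *alloc_task_sg_sketch(unsigned int data_nents,
                                                          bool padded_for_chaining)
          {
              unsigned int nents = data_nents + (padded_for_chaining ? 1 : 0);
              struct scatterlist *sg;

              sg = kcalloc(nents, sizeof(*sg), GFP_KERNEL);
              if (!sg)
                  return NULL;
              sg_init_table(sg, nents);
              return sg;
          }

          /* The chain link consumes the padded slot, so the count passed to
           * sg_chain() must include it. */
          static void chain_task_sg_sketch(struct scatterlist *prev,
                                           unsigned int prev_data_nents,
                                           struct scatterlist *next)
          {
              sg_chain(prev, prev_data_nents + 1, next);
          }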
      Reported-by: Kiran Patil <kiran.patil@intel.com>
      Cc: Kiran Patil <kiran.patil@intel.com>
      Cc: Andy Grover <agrover@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    • target: Fix task count > 1 handling breakage and use max_sector page alignment · 525a48a2
      Nicholas Bellinger authored
      This patch addresses recent breakage with multiple se_task (task_count > 1)
      operation following backend dev->se_sub_dev->se_dev_attrib.max_sectors in new
      transport_allocate_data_tasks() code.  The initial bug here was a bogus
      task->task_sg_nents assignment in transport_allocate_data_tasks() based on
      the passed parameter, which now uses DIV_ROUND_UP(task_size, PAGE_SIZE) to
      determine the proper number of per task SGL entries for the (task_count > 1)
      case.
      
      This also means we now need to enforce a PAGE_SIZE aligned max_sector count
      value for this to work as expected without bringing back the pre v3.1
      transport_map_mem_to_sg() logic to handle SGL offsets across multiple tasks.
      So this patch adds se_dev_align_max_sectors() to round down max_sectors as
      necessary to ensure this alignment via se_dev_set_default_attribs() and
      se_dev_align_max_sectors() and keeps it simple for (task_count > 1)
      operation.
      
      So far this bugfix has been tested with (task_count > 1) operation
      using iscsi-target and iblock backends.
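      The PAGE_SIZE rounding itself can be sketched as follows (this mirrors the
      idea of se_dev_align_max_sectors(), not its exact code; rounddown() is the
      standard kernel helper):

          #include <linux/kernel.h>  /* rounddown() */
          #include <linux/mm.h>      /* PAGE_SIZE */

          static u32 align_max_sectors_sketch(u32 max_sectors, u32 block_size)
          {
              /* Round the per-command byte limit down to a PAGE_SIZE multiple,
               * then convert back into a sector count. */
              u32 aligned_bytes = rounddown(max_sectors * block_size, PAGE_SIZE);
              u32 aligned_max_sectors = aligned_bytes / block_size;

              /* Guard against rounding a tiny limit down to zero */
              return aligned_max_sectors ? aligned_max_sectors : max_sectors;
          }

      For example, with a 512-byte block size and 4096-byte pages, max_sectors =
      1023 rounds down to 1016 sectors (127 whole pages).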
      Reported-by: Chris Boot <bootc@bootc.net>
      Cc: Kiran Patil <kiran.patil@intel.com>
      Cc: Andy Grover <agrover@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    • target: Add missing DATA_SG_IO transport_cmd_get_valid_sectors check · 01cde4d5
      Nicholas Bellinger authored
      This patch adds the missing transport_cmd_get_valid_sectors() check for
      SCF_SCSI_DATA_SG_IO_CDB type payloads to ensure that a received LBA + range
      does not extend past the end of the associated backend struct se_device.
      
      This patch also fixes a bug in the failure path of transport_new_cmd_obj()
      where this check can fail; it now uses a signed 'rc' and returns '-EINVAL'
      to trigger proper transport_generic_request_failure() handling.
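      The added check amounts to a bounds test along these lines (the device block
      count parameter is a stand-in for the backend's exported capacity):

          #include <linux/errno.h>
          #include <linux/types.h>

          /* Return 0 when [lba, lba + sectors) fits inside the backend device,
           * or -EINVAL when the range runs past the last exported block. */
          static int valid_sectors_sketch(u64 lba, u32 sectors, u64 dev_total_blocks)
          {
              if (lba + sectors > dev_total_blocks)
                  return -EINVAL;
              return 0;
          }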
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    • target: Fix SYNCHRONIZE_CACHE zero LBA + range breakage · 7abbe7f3
      Nicholas Bellinger authored
      This patch fixes a SYNCHRONIZE_CACHE CDB handling bug with IBLOCK/FILEIO
      backends where transport_cmd_get_valid_sectors() was incorrectly rejecting
      a zero LBA + range CDB from being processed, and returning CHECK_CONDITION.
      
      This includes changing transport_cmd_get_valid_sectors() to return '0' on
      success and '-EINVAL' on failure (which makes more sense than returning a
      sector count), and only calling transport_cmd_get_valid_sectors() when a
      non-zero LBA + range SYNCHRONIZE_CACHE operation has been received for the
      non-passthrough case.
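      In sketch form, the non-passthrough SYNCHRONIZE_CACHE check now behaves
      roughly like this (names and the block-count parameter are illustrative):

          #include <linux/errno.h>
          #include <linux/types.h>

          static int sync_cache_check_sketch(u64 lba, u32 sectors, u64 dev_total_blocks)
          {
              /* Zero LBA + zero range means "flush the whole device", so there is
               * nothing to validate and no CHECK_CONDITION to return. */
              if (lba == 0 && sectors == 0)
                  return 0;

              /* Otherwise validate the requested range as usual */
              if (lba + sectors > dev_total_blocks)
                  return -EINVAL;
              return 0;
          }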
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    • target: Fix WRITE_SAME usage with transport_get_size · 12850626
      Nicholas Bellinger authored
      For all flavours of WRITE_SAME, we only expect to handle a single block
      of data-out buffer payload, regardless of the number of logical blocks
      presented in the CDB.  This patch changes all flavours of WRITE_SAME in
      transport_generic_cmd_sequencer() to pass '1' into transport_get_size()
      instead of the extracted 'sectors' to properly handle the default usage
      of sg_write_same without the --xferlen parameter.
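      As a concrete illustration of the sizing rule (the block size and sector
      count below are assumed values, not taken from the patch):

          u32 block_size = 512;             /* assumed backend logical block size */
          u32 sectors    = 2048;            /* NUMBER OF LOGICAL BLOCKS from the CDB */
          u32 size       = 1 * block_size;  /* expected data-out: 512 bytes, not sectors * block_size */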
      Reported-by: Eric Seppanen <eric@purestorage.com>
      Signed-off-by: Nicholas Bellinger <nab@risingtidesystems.com>
    • target: Add WRITE_SAME (10) parsing and refactor passthrough checks · 706d5860
      Nicholas Bellinger authored
      This patch adds initial WRITE_SAME (10) w/ UNMAP=1 support following updates
      in sbc3r26 that allow UNMAP=1 for the non 16 + 32 byte CDB case.  It also
      refactors the current pSCSI passthrough checks into target_check_write_same_discard()
      ahead of UNMAP=0 w/ write payload support in target_core_iblock.c.
      
      This includes the support for handling WRITE_SAME in transport_emulate_control_cdb(),
      and converts target_emulate_write_same to accept num_blocks directly for
      WRITE_SAME, WRITE_SAME_16 and WRITE_SAME_32.
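      For reference, the WRITE_SAME (10) CDB fields involved can be sketched as
      below, following the SBC-3 layout (the helper name is invented; the
      get_unaligned_be*() accessors are the standard kernel ones):

          #include <linux/types.h>
          #include <asm/unaligned.h>

          static void parse_write_same_10_sketch(const unsigned char *cdb,
                                                 u32 *lba, u32 *num_blocks, bool *unmap)
          {
              *lba        = get_unaligned_be32(&cdb[2]);  /* bytes 2-5: 32-bit LBA */
              *num_blocks = get_unaligned_be16(&cdb[7]);  /* bytes 7-8: 16-bit block count */
              *unmap      = !!(cdb[1] & 0x08);            /* byte 1, bit 3: UNMAP */
          }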
      Reported-by: Eric Seppanen <eric@purestorage.com>
      Cc: Roland Dreier <roland@purestorage.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Nicholas Bellinger <nab@risingtidesystems.com>
    • target: Fix write payload exception handling with ->new_cmd_map · 16ab8e60
      Nicholas Bellinger authored
      This patch fixes a bug for fabrics using tfo->new_cmd_map() that expect
      transport_generic_request_failure() to call
      transport_send_check_condition_and_sense() for both READ and WRITE
      exceptions, instead of only for READ exceptions.
      
      This was originally observed with a failed WRITE_SAME_16 w/ unmap=0
      using tcm_loop.
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
  5. 17 Aug 2011, 1 commit
  6. 30 Jul 2011, 1 commit
    • target: Fix bug for transport_generic_wait_for_tasks with direct operation · dd8ae59d
      Nicholas Bellinger authored
      This patch fixes a bug in transport_handle_cdb_direct() usage with target_core
      where transport_generic_wait_for_tasks() was bypassing active I/O + usage of
      cmd->t_transport_stop_comp because cmd->t_transport_active=1 was not being set
      before dispatching with transport_generic_new_cmd().  The fix follows existing
      usage in transport_generic_handle_cdb*() -> transport_add_cmd_to_queue() and
      sets this directly, and also handles transport_generic_new_cmd() exceptions
      for QUEUE_FULL and CHECK_CONDITION instead of propagating them up to RX context
      fabric code.
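      The resulting dispatch ordering can be sketched as follows (the struct se_cmd
      fields and the request-failure call follow the target_core code of that era,
      but treat this as an illustration rather than the exact patch):

          #include <target/target_core_base.h>  /* struct se_cmd */

          int handle_cdb_direct_sketch(struct se_cmd *cmd)
          {
              int ret;

              /* Mark the command as active I/O before dispatch, so that
               * transport_generic_wait_for_tasks() will wait on
               * cmd->t_transport_stop_comp during shutdown. */
              cmd->t_transport_active = 1;

              ret = transport_generic_new_cmd(cmd);
              if (ret == -EAGAIN)
                  return 0;   /* QUEUE_FULL: the command is requeued internally */
              if (ret < 0) {
                  /* CHECK_CONDITION-style failures are handled here instead of
                   * being propagated back to the fabric RX context. */
                  transport_generic_request_failure(cmd, NULL, 0,
                          (cmd->data_direction != DMA_TO_DEVICE));
              }
              return 0;
          }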
      
      The bug was manifesting itself with the following SLUB poison overwritten
      warnings with iscsi-target v4.1 LUNs using the new process context direct
      operation during session reinstatement with active I/O exception handling:
      
      [885410.498267] =============================================================================
      [885410.621622] BUG lio_cmd_cache: Poison overwritten
      [885410.621791] -----------------------------------------------------------------------------
      [885410.621792]
      [885410.623420] INFO: 0xffff880000cf3750-0xffff880000cf378d. First byte 0x6a instead of 0x6b
      [885410.626332] INFO: Allocated in iscsit_allocate_cmd+0x1c/0xd4 [iscsi_target_mod] age=345 cpu=1 pid=22554
      [885411.855189] INFO: Freed in iscsit_release_cmd+0x208/0x217 [iscsi_target_mod] age=1410 cpu=1 pid=22554
      [885411.856048] INFO: Slab 0xffffea000002d480 objects=22 used=0 fp=0xffff880000cf7300 flags=0x4080
      [885411.856368] INFO: Object 0xffff880000cf33c0 @offset=13248 fp=0xffff880000cf6780
      
      <SNIP>
      
      [885411.955678] Pid: 22554, comm: iscsi_trx Not tainted 3.0.0-rc7+ #30
      [885411.956040] Call Trace:
      [885411.957029]  [<ffffffff810e5cf9>] print_trailer+0x12e/0x137
      [885412.752879]  [<ffffffff810e61d9>] check_bytes_and_report+0xb9/0xfd
      [885412.754933]  [<ffffffff810e62d2>] check_object+0xb5/0x192
      [885412.755099]  [<ffffffff810e6445>] __free_slab+0x96/0x13a
      [885412.757008]  [<ffffffff810e652a>] discard_slab+0x41/0x43
      [885412.758171]  [<ffffffff810e7a4c>] __slab_free+0xf3/0xfe
      [885412.761027]  [<ffffffffa030a536>] ? iscsit_release_cmd+0x208/0x217 [iscsi_target_mod]
      [885412.761354]  [<ffffffff810e7e95>] kmem_cache_free+0x6f/0xac
      [885412.761536]  [<ffffffffa030a536>] iscsit_release_cmd+0x208/0x217 [iscsi_target_mod]
      [885412.762056]  [<ffffffffa020e467>] ? iblock_free_task+0x34/0x39 [target_core_iblock]
      [885412.762368]  [<ffffffffa0314131>] lio_release_cmd+0x10/0x12 [iscsi_target_mod]
      [885412.764129]  [<ffffffffa02c2254>] transport_release_cmd+0x2f/0x33 [target_core_mod]
      [885412.805024]  [<ffffffffa02c230e>] transport_generic_remove+0xb6/0xc3 [target_core_mod]
      [885412.806424]  [<ffffffff81035b5f>] ? try_to_wake_up+0x1bd/0x1bd
      [885412.809033]  [<ffffffffa02c241f>] transport_generic_free_cmd+0x75/0x7d [target_core_mod]
      [885412.810066]  [<ffffffffa02c2643>] transport_generic_wait_for_tasks+0x21c/0x22b [target_core_mod]
      [885412.811056]  [<ffffffff8139f0b1>] ? mutex_lock+0x11/0x32
      [885412.813059]  [<ffffffff8139f0b1>] ? mutex_lock+0x11/0x32
      [885412.813200]  [<ffffffffa030b81d>] iscsit_close_connection+0x1d5/0x63a [iscsi_target_mod]
      [885412.813517]  [<ffffffffa0300a82>] iscsit_take_action_for_connection_exit+0xdb/0xe0 [iscsi_target_mod]
      [885412.813851]  [<ffffffffa03111e9>] iscsi_target_rx_thread+0x11f6/0x1221 [iscsi_target_mod]
      [885412.829024]  [<ffffffff81033e8d>] ? pick_next_task_fair+0xbe/0x10e
      [885412.831010]  [<ffffffffa030fff3>] ? iscsit_handle_scsi_cmd+0x91d/0x91d [iscsi_target_mod]
      [885412.833011]  [<ffffffffa030fff3>] ? iscsit_handle_scsi_cmd+0x91d/0x91d [iscsi_target_mod]
      [885412.835010]  [<ffffffff8105388a>] kthread+0x7d/0x85
      [885412.837022]  [<ffffffff813a7124>] kernel_thread_helper+0x4/0x10
      [885412.838008]  [<ffffffff8105380d>] ? kthread_worker_fn+0x145/0x145
      [885412.840047]  [<ffffffff813a7120>] ? gs_change+0x13/0x13
      [885412.842007] FIX lio_cmd_cache: Restoring 0xffff880000cf3750-0xffff880000cf378d=0x6
      
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Andy Grover <agrover@redhat.com>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
  7. 28 Jul 2011, 1 commit
  8. 26 Jul 2011, 1 commit
  9. 22 Jul 2011, 1 commit