1. 29 May 2014, 1 commit
  2. 09 Jul 2013, 2 commits
  3. 01 Dec 2012, 1 commit
  4. 24 Aug 2012, 4 commits
  5. 20 Jul 2012, 1 commit
  6. 24 Apr 2012, 2 commits
    • N
      [SCSI] mpt2sas: Improvements were made to better protect the sas_device,... · 09da0b32
      nagalakshmi.nandigama@lsi.com authored
      [SCSI] mpt2sas: Improvements were made to better protect the sas_device, raid_device, and expander_device lists
      
      There were possible race conditions when one context was reading an
      object from a linked list while another context in the driver was
      removing it. This enhancement rearranges the locking so the linked
      lists are better protected.
      
      Change set:
      (1) Numerous routines were rearranged so spin locks are held for the
      entire time a linked-list object is being read from or written to.
      (2) New routines were added for deleting objects from the linked lists.
      This ensures the lock is held while the object is removed from the list,
      and the memory for the object is then freed outside the lock. The memory
      is freed outside the lock so the driver still has access to the device
      object's information, which is required for notifying the SCSI mid layer
      that a device is being deleted.
      (3) The ioc->blocking_handles parameter was added. This is a bitmask used
      to identify which devices need blocking when there is device loss. It was
      introduced so the lock can be held for the entire traversal of the
      linked-list objects while the bitmask is set to record which device
      handles need blocking. Outside the lock, the ioc->blocking_handles
      bitmask is traversed and the SCSI mid layer is called with each recorded
      device handle to move the device into the blocked state.
      Signed-off-by: Nagalakshmi Nandigama <nagalakshmi.nandigama@lsi.com>
      Signed-off-by: James Bottomley <JBottomley@Parallels.com>
      09da0b32
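      A minimal C sketch of the pattern items (2) and (3) describe: unlink a
      list object while the lock is held, free it afterwards, and record
      device handles to block in a bitmask that is walked outside the lock.
      The structure layout, field names, and helper names below are
      illustrative assumptions, not the actual mpt2sas code.

      #include <linux/list.h>
      #include <linux/spinlock.h>
      #include <linux/slab.h>
      #include <linux/bitops.h>
      #include <linux/types.h>

      struct sas_device {                     /* simplified stand-in */
              struct list_head list;
              u16 handle;
      };

      struct ioc {                            /* simplified stand-in */
              spinlock_t sas_device_lock;
              struct list_head sas_device_list;
              unsigned long *blocking_handles;   /* one bit per device handle */
              u16 max_handles;
      };

      /* (2) unlink under the lock, free outside it */
      static void sas_device_remove(struct ioc *ioc, struct sas_device *sas_device)
      {
              unsigned long flags;

              spin_lock_irqsave(&ioc->sas_device_lock, flags);
              list_del(&sas_device->list);
              spin_unlock_irqrestore(&ioc->sas_device_lock, flags);

              /* freed outside the lock so the caller can still use copied
               * device info when notifying the SCSI mid layer */
              kfree(sas_device);
      }

      /* (3) mark handles under the lock, block the devices outside it */
      static void block_lost_devices(struct ioc *ioc)
      {
              struct sas_device *sas_device;
              unsigned long flags;
              unsigned int handle;

              spin_lock_irqsave(&ioc->sas_device_lock, flags);
              list_for_each_entry(sas_device, &ioc->sas_device_list, list)
                      set_bit(sas_device->handle, ioc->blocking_handles);
              spin_unlock_irqrestore(&ioc->sas_device_lock, flags);

              for_each_set_bit(handle, ioc->blocking_handles, ioc->max_handles) {
                      /* call into the SCSI mid layer to block this handle,
                       * e.g. a hypothetical _scsih_block_io_device(ioc, handle) */
                      clear_bit(handle, ioc->blocking_handles);
              }
      }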
  7. 15 Dec 2011, 6 commits
  8. 30 Oct 2011, 2 commits
    • N
      [SCSI] mpt2sas: Bump driver version to 10.100.00.00 · 6e880200
      nagalakshmi.nandigama@lsi.com authored
      Bump driver version to 10.100.00.00
      Signed-off-by: Nagalakshmi Nandigama <nagalakshmi.nandigama@lsi.com>
      Signed-off-by: James Bottomley <JBottomley@Parallels.com>
      6e880200
    • N
      [SCSI] mpt2sas: New feature - Fast Load Support · 921cd802
      nagalakshmi.nandigama@lsi.com authored
      New feature Fast Load Support.
      
      (1) Asynchronous SCSI scanning: this allows the driver to scan for
      devices in parallel while other device drivers are loading at the same
      time, reducing the time it takes for the OS to boot.
      
      (2) Reporting devices while port enable is active: this allows devices
      to be reported to the OS immediately while port enable is still in
      progress. The previous implementation waited for port enable to
      complete and only then reported devices. This feature is enabled only
      on IT firmware configurations when no boot device is configured in the
      BIOS Configuration Utility; otherwise the driver waits until port
      enable completes before reporting devices. For IR firmware, this
      feature is turned off. It is intended for large SAS topologies
      (>100 drives) where the boot OS is using an onboard SATA device, in
      other words where the boot device is not connected to our controller.
      
      (3) Scanning for devices after a diagnostic reset completes: a new
      routine, _scsih_scan_start, is added. It scans the expander pages, IR
      pages, and SAS device pages, then reports new devices to the SCSI mid
      layer. The driver does not support adding devices while a diagnostic
      reset is active; this is due to the sanity checks on the
      ioc->shost_recovery flag throughout the kernel work thread FIFO and
      mpt2sas_fw_work.
      Signed-off-by: Nagalakshmi Nandigama <nagalakshmi.nandigama@lsi.com>
      Signed-off-by: James Bottomley <JBottomley@Parallels.com>
      921cd802
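      For reference, a minimal C sketch of how a low-level SCSI driver hooks
      into the kernel's asynchronous scan framework, which is what item (1)
      relies on. The scan_start/scan_finished hooks are the real
      scsi_host_template interface; everything else here is an illustrative
      assumption, not the actual mpt2sas code.

      #include <linux/module.h>
      #include <scsi/scsi_host.h>

      static void example_scan_start(struct Scsi_Host *shost)
      {
              /* kick off port enable / firmware discovery here and return
               * immediately; other drivers keep loading in parallel */
      }

      static int example_scan_finished(struct Scsi_Host *shost, unsigned long time)
      {
              /* return nonzero once discovery is done and devices have been
               * reported; returning 0 makes the midlayer poll again later */
              return 1;
      }

      static struct scsi_host_template example_template = {
              .module         = THIS_MODULE,
              .name           = "example",
              .scan_start     = example_scan_start,     /* enables async scanning */
              .scan_finished  = example_scan_finished,
      };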
  9. 22 Sep 2011, 2 commits
    • N
      [SCSI] mpt2sas: Added NUMA IO support in driver which uses multi-reply queue support of the HBA · 911ae943
      nagalakshmi.nandigama@lsi.com authored
      Support added for controllers capable of multiple reply queues.
      
      The following are the modifications to the driver to support NUMA.
      
      (1) Create the new structure adapter_reply_queue to contain the reply
          queue info for every MSI-X vector. This object contains a
          reply_post_host_index, a reply_post_free for each instance, the
          msix_index, and other parameters. All the reply queues are tracked
          on a linked list called ioc->reply_queue_list. Each reply queue is
          aligned with an IRQ and is passed to the interrupt handler via the
          bus_id parameter.
      
      (2) The driver determines msix_vector_count from the PCIe MSI-X
          capabilities register instead of IOC Facts->MaxMSIxVectors, because
          the firmware does not fill in this field until the driver has
          already registered for MSI-X support.
      
      (3) If ioc_facts reports in its capabilities that the controller is
          MSI-X compatible, the driver requests multiple IRQs. This count is
          the minimum of the number of online CPUs and
          ioc->msix_vector_count, and it is reported to the firmware in the
          ioc_init request.
      
      (4) New routines _base_free_irq and _base_request_irq were added so
          registering and freeing MSI-X vectors is done through a simple
          function API.
      
      (5) The new routine _base_assign_reply_queues was added to align the
          MSI-X indexes across CPUs. It initializes the array
          ioc->cpu_msix_table, which is looked up on every MPI request so
          the MSIxIndex is set appropriately.
      
      (6) A new shost sysfs attribute was added to report the reply_queue_count.
      
      (7) The user needs to set the CPU affinity mask so the interrupt
          occurs on the same CPU that sent the original request.
      Signed-off-by: Nagalakshmi Nandigama <nagalakshmi.nandigama@lsi.com>
      Signed-off-by: James Bottomley <JBottomley@Parallels.com>
      911ae943
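      A small C sketch of the per-vector reply queue bookkeeping and the
      cpu_msix_table lookup described in items (1) and (5). The structure
      layout and names are illustrative assumptions rather than the actual
      mpt2sas definitions.

      #include <linux/list.h>
      #include <linux/cpumask.h>
      #include <linux/smp.h>
      #include <linux/threads.h>
      #include <linux/types.h>

      struct ioc {                            /* simplified stand-in */
              struct list_head reply_queue_list;
              u8 reply_queue_count;
              unsigned int cpu_msix_table[NR_CPUS];
      };

      struct adapter_reply_queue {
              struct ioc *ioc;
              u8 msix_index;
              u32 reply_post_host_index;
              void *reply_post_free;
              struct list_head list;          /* linked on ioc->reply_queue_list */
      };

      /* (5) spread online CPUs across the granted MSI-X vectors */
      static void assign_reply_queues(struct ioc *ioc)
      {
              unsigned int cpu, index = 0;

              for_each_online_cpu(cpu) {
                      ioc->cpu_msix_table[cpu] = index;
                      index = (index + 1) % ioc->reply_queue_count;
              }
      }

      /* looked up on every MPI request so the MSIxIndex field is set */
      static u8 get_msix_index(struct ioc *ioc)
      {
              return ioc->cpu_msix_table[raw_smp_processor_id()];
      }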
  10. 30 Jun 2011, 4 commits
  11. 25 May 2011, 1 commit
  12. 01 May 2011, 1 commit
    • K
      [SCSI] mpt2sas: WarpDrive new product SSS6200 support added · 0bdccdb0
      Kashyap, Desai authored
      This patch adds support for the new solid state device product SSS6200
      from LSI and the relevant SSS6200 features.
      
      The major feature added in this driver is support for direct I/O to
      the SSS6200 storage. There are some additional changes to avoid
      exposing the RAID member disks to the OS and to hide/expose drives
      based on the OEM-specific flag in Manufacturing Page10 (this is
      required to handle specific changes in the SSS6200 firmware).
      
      Each change is listed below.
      1. Hiding IR related messages.
      For SSS6200, the driver is modified not to print IR related events.
      Even if debugging is enabled, the IR related messages will not be
      displayed. In places where a message related to IR needs to be
      displayed, the string "IR" is replaced with "DD" and the string
      "volume" is replaced with "direct drive". The function names are not
      changed, however, so there are still places where a reference to
      volume can be seen if the debug level is set.
      
      2. Removed RAID transport support.
      In Linux, the user can retrieve RAID volume information from the sysfs
      directory. This support is removed for SSS6200.
      
      3. Direct I/O support.
      The driver tries to enable direct I/O when a volume is reported to it
      by the firmware through IRCC events, and it does this just before
      reporting the volume to the OS, so all OS-issued I/O can go through
      the direct path when possible. The first validation is to check
      whether the Manufacturing Page10 flag is set to always expose all
      drives. If that flag is set, the driver will not enable direct I/O and
      displays the message "DDIO is disabled globally as drives are exposed".
      The driver then checks whether there is more than one volume in the
      controller; if so, direct I/O is disabled globally for all volumes in
      the controller and the message "DDIO is disabled globally as number of
      drives > 1" is displayed.
      If retrieving the number of PDs fails, the driver will not enable
      direct I/O and displays the message "Failure in computing number of
      drives, DDIO disabled". If memory allocation for RAIDVolumePage0
      fails, the driver will not enable direct I/O and displays the message
      "Memory allocation failure for RVPG0, DDIO disabled". If retrieving
      RAIDVolumePage0 fails, the driver will not enable direct I/O and
      displays the message "Failure in retrieving RVPG0, DDIO disabled".
      
      Direct I/O will also be disabled if the number of PDs in a volume is
      greater than 8, if retrieving the handle of any individual drive
      fails, if the volume is not RAID0 or the block size is not 512, if the
      volume size is greater than 2TB, or if the driver cannot find a valid
      stripe exponent from the configured stripe size.
      
      When direct I/O is enabled, the driver checks every I/O request issued
      to the storage; if the request is a READ6/WRITE6/READ10/WRITE10 and
      the complete transfer fits within a single stripe, the I/O is
      redirected directly to the member drive instead of to the volume.
      
      On completion of every I/O, if the completion indicates a failure
      (that is, the reply is an address reply with a reply frame associated
      with it), the type of I/O is checked; if the I/O was direct, it is
      retried to the volume once.
      Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
      Reviewed-by: Eric Moore <eric.moore@lsi.com>
      Reviewed-by: Sathya Prakash <sathya.prakash@lsi.com>
      Signed-off-by: James Bottomley <James.Bottomley@suse.de>
      0bdccdb0
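      A short C sketch of the kind of check item 3 describes: only simple
      READ/WRITE commands whose whole transfer fits inside one stripe
      qualify for the direct path. The structure, fields, and function name
      are illustrative assumptions, not the actual WarpDrive code.

      #include <linux/types.h>
      #include <scsi/scsi.h>
      #include <scsi/scsi_cmnd.h>

      struct raid_volume {                    /* simplified stand-in */
              bool direct_io_enabled;
              u8 stripe_exponent;             /* stripe size = 1 << exponent blocks */
      };

      static bool can_use_direct_io(struct raid_volume *vol,
                                    struct scsi_cmnd *scmd,
                                    u64 lba, u32 num_blocks)
      {
              u32 stripe_sz = 1U << vol->stripe_exponent;
              u32 offset_in_stripe = lba & (stripe_sz - 1);

              if (!vol->direct_io_enabled)
                      return false;

              switch (scmd->cmnd[0]) {
              case READ_6: case WRITE_6:
              case READ_10: case WRITE_10:
                      break;
              default:
                      return false;           /* only simple reads/writes qualify */
              }

              /* the whole transfer must stay inside a single stripe */
              return offset_in_stripe + num_blocks <= stripe_sz;
      }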
  13. 24 Mar 2011, 1 commit
  14. 24 Jan 2011, 3 commits
  15. 22 Dec 2010, 4 commits
    • K
      f0cebfb0
    • K
      [SCSI] mpt2sas: Modify code to support Expander switch · 7f6f794d
      Kashyap, Desai authored
      Issue: switch swap doesn't work when device missing delay is enabled.
      
      (1) Add support to individually add and remove phys to and from
      existing ports. This replaces the routine
      _transport_delete_duplicate_port.
      (2) _scsih_sas_host_refresh was modified to change the link rate from
      zero to the 1.5 Gb rate when the firmware reports an attached device
      with zero link.
      (3) Add the new function mpt2sas_device_remove, a wrapper function
      that removes some redundant code throughout the driver by combining it
      into one subroutine.
      (4) Two subroutines were modified so the sas_device, raid_device, and
      port lists are traversed only once when objects are deleted from the
      list. Previously the traversal restarted each time an object was
      deleted from the list.
      Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
      Signed-off-by: James Bottomley <James.Bottomley@suse.de>
      7f6f794d
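      A minimal C sketch of the single-pass deletion mentioned in (4): the
      _safe list iterator lets objects be unlinked and freed without
      restarting the walk. Structure and field names are illustrative
      assumptions.

      #include <linux/list.h>
      #include <linux/slab.h>
      #include <linux/types.h>

      struct sas_device {                     /* simplified stand-in */
              struct list_head list;
              bool responding;
      };

      static void prune_missing_devices(struct list_head *sas_device_list)
      {
              struct sas_device *sas_device, *next;

              list_for_each_entry_safe(sas_device, next, sas_device_list, list) {
                      if (!sas_device->responding) {
                              list_del(&sas_device->list);
                              kfree(sas_device);
                      }
              }
      }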
    • K
      [SCSI] mpt2sas: Create a pool of chain buffers instead of dedicated per-IO buffers · 35f805b5
      Kashyap, Desai authored
      Create a pool of chain buffers instead of dedicating them per IO:
      this enhancement addresses memory allocation failures when asking for
      more than 2300 IOs per host. There is just not enough contiguous DMA
      physical memory to make one single allocation holding both message
      frames and chain buffers when asking for more than 2300 requests. To
      address this problem, memory for each chain buffer is allocated in a
      separate individual allocation, and each 128-byte chain element is
      placed on a pool of available chains that can be shared among all
      requests.
      Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
      Signed-off-by: James Bottomley <James.Bottomley@suse.de>
      35f805b5
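      A C sketch of building such a shared pool from per-chain DMA
      allocations rather than one large contiguous block. The dma_pool calls
      are real kernel API; the ioc fields and function name are illustrative
      assumptions, and error unwinding is omitted for brevity.

      #include <linux/dmapool.h>
      #include <linux/errno.h>
      #include <linux/list.h>
      #include <linux/slab.h>

      struct chain_tracker {
              void *chain_buffer;
              dma_addr_t chain_buffer_dma;
              struct list_head tracker_list;
      };

      struct ioc {                            /* simplified stand-in */
              struct dma_pool *chain_dma_pool;
              struct list_head free_chain_list;
      };

      static int allocate_chain_pool(struct ioc *ioc, struct device *dev,
                                     unsigned int num_chains)
      {
              unsigned int i;

              /* 128-byte chain elements, allocated individually */
              ioc->chain_dma_pool = dma_pool_create("chain pool", dev, 128, 16, 0);
              if (!ioc->chain_dma_pool)
                      return -ENOMEM;

              INIT_LIST_HEAD(&ioc->free_chain_list);
              for (i = 0; i < num_chains; i++) {
                      struct chain_tracker *ct = kzalloc(sizeof(*ct), GFP_KERNEL);

                      if (!ct)
                              return -ENOMEM;
                      ct->chain_buffer = dma_pool_alloc(ioc->chain_dma_pool,
                                                        GFP_KERNEL,
                                                        &ct->chain_buffer_dma);
                      if (!ct->chain_buffer) {
                              kfree(ct);
                              return -ENOMEM;
                      }
                      /* each chain element goes onto a free list shared by
                       * all requests */
                      list_add_tail(&ct->tracker_list, &ioc->free_chain_list);
              }
              return 0;
      }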
    • K
      [SCSI] mpt2sas: Added sanity check for cb_idx and smid access. · dd3741d3
      Kashyap, Desai authored
      Sometimes the controller firmware returns an invalid system message id
      (smid).
      
      The oops occurs because the mpt_callbacks pointer is dereferenced with
      either a null or an invalid virtual address. This is due to cb_idx
      being set incorrectly in the routine _base_get_cb_idx; cb_idx was set
      incorrectly because there was no check that smid is less than the
      maximum anticipated smid. To fix this issue, a check is added in
      _base_get_cb_idx to make sure smid is not greater than
      ioc->hba_queue_depth. In addition, a similar check was added to make
      sure the reply address is less than the largest anticipated address.
      
      Newer firmware has solved this issue, but it is good to have this
      sanity check.
      Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
      Signed-off-by: James Bottomley <James.Bottomley@suse.de>
      dd3741d3
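      A minimal C sketch of the bounds check described above: reject an
      out-of-range smid instead of indexing past the callback lookup table.
      The ioc fields and the return convention are illustrative assumptions.

      #include <linux/bug.h>
      #include <linux/types.h>

      struct request_tracker {                /* simplified stand-in */
              u8 cb_idx;
      };

      struct ioc {                            /* simplified stand-in */
              u16 hba_queue_depth;
              struct request_tracker *scsi_lookup;
      };

      static u8 _base_get_cb_idx(struct ioc *ioc, u16 smid)
      {
              /* firmware occasionally hands back a bogus smid; never use it
               * to index the lookup table */
              if (WARN_ON_ONCE(smid == 0 || smid > ioc->hba_queue_depth))
                      return 0xFF;            /* caller treats this as "drop the reply" */

              return ioc->scsi_lookup[smid - 1].cb_idx;
      }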
  16. 28 Jul 2010, 5 commits
    • E
      [SCSI] mpt2sas: driver fails to recover from injected PCIe bus errors · 3cb5469a
      Eric Moore authored
      Fixes surrounding PCIe enhanced error handling:
      
      (1) We need to reject all requests generated internally inside the
      driver, as well as requests arriving from the SCSI mid layer, while
      PCIe EEH is active. The fix is to add a per-adapter flag called
      pci_error_recovery which is checked throughout the driver wherever
      requests are generated.
      
      (2) We don't need to call pci_driver->remove directly from the PCIe
      callbacks because it is already called from the PCIe EEH code. In its
      place we shut down the watchdog timer and flush back all pending IO.
      
      (3) We need to save and restore the PCI state across PCIe EEH handling.
      Signed-off-by: Eric Moore <eric.moore@lsi.com>
      Signed-off-by: James Bottomley <James.Bottomley@suse.de>
      3cb5469a
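      A C sketch of the shape of these fixes: a per-adapter
      pci_error_recovery gate plus PCIe error-handler callbacks. The
      pci_error_handlers interface is real kernel API; the remaining names
      and the handler bodies are illustrative assumptions.

      #include <linux/pci.h>
      #include <linux/errno.h>

      struct ioc {                            /* simplified stand-in */
              struct pci_dev *pdev;
              bool pci_error_recovery;
      };

      /* (1) every internally generated or midlayer request checks the flag */
      static int queue_request(struct ioc *ioc)
      {
              if (ioc->pci_error_recovery)
                      return -ENODEV;         /* reject while EEH is active */
              /* ... build and post the request ... */
              return 0;
      }

      static pci_ers_result_t example_error_detected(struct pci_dev *pdev,
                                                     pci_channel_state_t state)
      {
              struct ioc *ioc = pci_get_drvdata(pdev);

              ioc->pci_error_recovery = true;
              /* (2) stop the watchdog and flush outstanding I/O here instead
               * of calling the driver's remove routine */
              return state == pci_channel_io_perm_failure ?
                      PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_NEED_RESET;
      }

      static pci_ers_result_t example_slot_reset(struct pci_dev *pdev)
      {
              /* (3) restore the PCI state saved earlier, then reinitialize */
              pci_restore_state(pdev);
              return PCI_ERS_RESULT_RECOVERED;
      }

      static const struct pci_error_handlers example_err_handler = {
              .error_detected = example_error_detected,
              .slot_reset     = example_slot_reset,
      };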
    • K
      [SCSI] mpt2sas: Bump version 06.100.00.00 · d4572c3d
      Kashyap, Desai authored
      Version upgrade patch
      Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
      Signed-off-by: James Bottomley <James.Bottomley@suse.de>
      d4572c3d
    • K
      [SCSI] mpt2sas: Copy sense buffer instead of working on direct memory location · 769578ff
      Kashyap, Desai authored
      (1) The driver was not setting the sense data size prior to sending
      SCSI_IO, resulting in the 0x31190000 loginfo.
      (2) The driver needs to copy the sense data to a local buffer prior to
      releasing the request message frame. If it does not, the sense buffer
      gets overwritten by the next SCSI_IO request.
      Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
      Signed-off-by: James Bottomley <James.Bottomley@suse.de>
      769578ff
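      A small C sketch of point (2): the sense data is copied into the
      command's sense buffer before the message frame is handed back to the
      free pool. Function and parameter names are illustrative assumptions.

      #include <linux/kernel.h>
      #include <linux/string.h>
      #include <scsi/scsi_cmnd.h>

      static void copy_sense_then_free_frame(struct scsi_cmnd *scmd,
                                             const void *frame_sense,
                                             u32 sense_count)
      {
              u32 sz = min_t(u32, sense_count, SCSI_SENSE_BUFFERSIZE);

              /* copy first: once the message frame is released it can be
               * reused by the next SCSI_IO and its sense data overwritten */
              memcpy(scmd->sense_buffer, frame_sense, sz);

              /* only now release the request message frame, e.g. via the
               * driver's free-smid helper, and complete the command */
      }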
    • K
      [SCSI] mpt2sas: Redesign Raid devices event handling using pd_handles per HBA · f3eedd69
      Kashyap, Desai authored
      Actual problem:
      The driver may receive the top level expander removal event prior to
      all the individual PD removal events, so the driver breaks down all
      the PDs in advance of the actual PD UNHIDE event, and it sends
      multiple Target Resets to the same volume handle for each individual
      PD removal.
      
      FIX DESCRIPTION:
      To fix this issue, the entire PD device handshake protocol has to be
      moved to interrupt context so the breakdown occurs immediately after
      the actual UNHIDE event arrives. The driver will issue only one Target
      Reset to the volume handle, after the FAILED or MISSING volume status
      event arrives from interrupt context. For the PD UNHIDE event, the
      driver will issue target resets to the PD handles, followed by
      OP_REMOVE. The driver will set the "deleted" flag during interrupt
      context. A "pd_handle" bitmask was introduced so the driver has a list
      of known PDs during the entire life of the PD; this replaces the
      "hidden_raid_component" flag in the sas_device object. Each bit in the
      bitmask represents a device handle. The bit in the bitmask is toggled
      ON/OFF when the HIDE/UNHIDE events arrive, and the pd_handle bitmask
      is also refreshed across host resets.
      
      Here we kept the older behavior of sending a target reset to the
      volume when there is a single drive pull: wait for the reply, then
      send target resets to the PDs. We kept this behavior so the driver
      behaves the same with older versions of firmware.
      Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
      Signed-off-by: James Bottomley <James.Bottomley@suse.de>
      f3eedd69
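      A minimal C sketch of a one-bit-per-device-handle pd_handle bitmask of
      the kind described above, toggled from the event handlers. Field and
      helper names are illustrative assumptions, not the actual mpt2sas
      definitions.

      #include <linux/bitmap.h>
      #include <linux/bitops.h>
      #include <linux/errno.h>
      #include <linux/slab.h>
      #include <linux/types.h>

      struct ioc {                            /* simplified stand-in */
              unsigned long *pd_handles;      /* one bit per device handle */
              u16 pd_handles_sz;              /* number of handles tracked */
      };

      static int alloc_pd_handles(struct ioc *ioc, u16 max_handles)
      {
              ioc->pd_handles_sz = max_handles;
              ioc->pd_handles = bitmap_zalloc(max_handles, GFP_KERNEL);
              return ioc->pd_handles ? 0 : -ENOMEM;
      }

      /* called from the HIDE/UNHIDE event handlers in interrupt context */
      static void pd_hide(struct ioc *ioc, u16 handle)
      {
              set_bit(handle, ioc->pd_handles);
      }

      static void pd_unhide(struct ioc *ioc, u16 handle)
      {
              clear_bit(handle, ioc->pd_handles);
      }

      static bool handle_is_pd(struct ioc *ioc, u16 handle)
      {
              return test_bit(handle, ioc->pd_handles);
      }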
    • K
      [SCSI] mpt2sas: Tie a log info message to a specific PHY. · 7fbae67a
      Kashyap, Desai authored
      Add support to display additional debug info for SCSI_IO and
      RAID_SCSI_IO_PASSTHROUGH sent from the normal queued entry point, as
      well as for internally generated commands and IOCTLs. The additional
      debug info includes the phy number, as well as the SAS address,
      enclosure logical id, and slot number. This debug info has to be
      enabled through the logging_level command line option; by default it
      is not displayed.
      Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
      Signed-off-by: James Bottomley <James.Bottomley@suse.de>
      7fbae67a
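      A C sketch of gating such per-I/O detail behind a bit in the
      logging_level module parameter. The flag value, macro name, and fields
      are illustrative assumptions, not the driver's actual logging macros.

      #include <linux/module.h>
      #include <linux/printk.h>
      #include <linux/types.h>

      static uint logging_level;
      module_param(logging_level, uint, 0644);

      #define EXAMPLE_DEBUG_SCSI      0x0020  /* assumed bit for SCSI_IO tracing */

      struct ioc {                            /* simplified stand-in */
              const char *name;
      };

      #define debug_scsi_printk(ioc, fmt, ...)                        \
      do {                                                            \
              if (logging_level & EXAMPLE_DEBUG_SCSI)                 \
                      pr_info("%s: " fmt, (ioc)->name, ##__VA_ARGS__);\
      } while (0)

      static void log_scsi_io(struct ioc *ioc, u16 handle, u8 phy,
                              u64 sas_address, u64 enclosure_logical_id, u16 slot)
      {
              debug_scsi_printk(ioc,
                      "handle(0x%04x) phy(%u) sas_addr(0x%016llx) enclosure(0x%016llx) slot(%u)\n",
                      handle, phy, sas_address, enclosure_logical_id, slot);
      }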