Commit 424a6f6e authored by Linus Torvalds

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6

SCSI updates from James Bottomley:
 "The update includes the usual assortment of driver updates (lpfc,
  qla2xxx, qla4xxx, bfa, bnx2fc, bnx2i, isci, fcoe, hpsa) plus a huge
  amount of infrastructure work in the SAS library and transport class
  as well as an iSCSI update.  There's also a new SCSI based virtio
  driver."

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6: (177 commits)
  [SCSI] qla4xxx: Update driver version to 5.02.00-k15
  [SCSI] qla4xxx: trivial cleanup
  [SCSI] qla4xxx: Fix sparse warning
  [SCSI] qla4xxx: Add support for multiple session per host.
  [SCSI] qla4xxx: Export CHAP index as sysfs attribute
  [SCSI] scsi_transport: Export CHAP index as sysfs attribute
  [SCSI] qla4xxx: Add support to display CHAP list and delete CHAP entry
  [SCSI] iscsi_transport: Add support to display CHAP list and delete CHAP entry
  [SCSI] pm8001: fix endian issue with code optimization.
  [SCSI] pm8001: Fix possible racing condition.
  [SCSI] pm8001: Fix bogus interrupt state flag issue.
  [SCSI] ipr: update PCI ID definitions for new adapters
  [SCSI] qla2xxx: handle default case in qla2x00_request_firmware()
  [SCSI] isci: improvements in driver unloading routine
  [SCSI] isci: improve phy event warnings
  [SCSI] isci: debug, provide state-enum-to-string conversions
  [SCSI] scsi_transport_sas: 'enable' phys on reset
  [SCSI] libsas: don't recover end devices attached to disabled phys
  [SCSI] libsas: fixup target_port_protocols for expanders that don't report sata
  [SCSI] libsas: set attached device type and target protocols for local phys
  ...
Copyright (c) 2003-2011 QLogic Corporation
QLogic Linux FC-FCoE Driver
This program includes a device driver for Linux 3.x.
You may modify and redistribute the device driver code under the
GNU General Public License (a copy of which is attached hereto as
Exhibit A) published by the Free Software Foundation (version 2).
You may redistribute the hardware specific firmware binary file
under the following terms:
1. Redistribution of source code (only if applicable),
must retain the above copyright notice, this list of
conditions and the following disclaimer.
2. Redistribution in binary form must reproduce the above
copyright notice, this list of conditions and the
following disclaimer in the documentation and/or other
materials provided with the distribution.
3. The name of QLogic Corporation may not be used to
endorse or promote products derived from this software
without specific prior written permission.
REGARDLESS OF WHAT LICENSING MECHANISM IS USED OR APPLICABLE,
THIS PROGRAM IS PROVIDED BY QLOGIC CORPORATION "AS IS" AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
USER ACKNOWLEDGES AND AGREES THAT USE OF THIS PROGRAM WILL NOT
CREATE OR GIVE GROUNDS FOR A LICENSE BY IMPLICATION, ESTOPPEL, OR
OTHERWISE IN ANY INTELLECTUAL PROPERTY RIGHTS (PATENT, COPYRIGHT,
TRADE SECRET, MASK WORK, OR OTHER PROPRIETARY RIGHT) EMBODIED IN
ANY OTHER QLOGIC HARDWARE OR SOFTWARE EITHER SOLELY OR IN
COMBINATION WITH THIS PROGRAM.
EXHIBIT A
......
Linux driver for Brocade FC/FCOE adapters
Supported Hardware
------------------
bfa 3.0.2.2 driver supports all Brocade FC/FCOE adapters. Below is a list of
adapter models with corresponding PCIIDs.
PCIID Model
1657:0013:1657:0014 425 4Gbps dual port FC HBA
1657:0013:1657:0014 825 8Gbps PCIe dual port FC HBA
1657:0013:103c:1742 HP 82B 8Gbps PCIe dual port FC HBA
1657:0013:103c:1744 HP 42B 4Gbps dual port FC HBA
1657:0017:1657:0014 415 4Gbps single port FC HBA
1657:0017:1657:0014 815 8Gbps single port FC HBA
1657:0017:103c:1741 HP 41B 4Gbps single port FC HBA
1657:0017:103c:1743 HP 81B 8Gbps single port FC HBA
1657:0021:103c:1779 804 8Gbps FC HBA for HP Bladesystem c-class
1657:0014:1657:0014 1010 10Gbps single port CNA - FCOE
1657:0014:1657:0014 1020 10Gbps dual port CNA - FCOE
1657:0014:1657:0014 1007 10Gbps dual port CNA - FCOE
1657:0014:1657:0014 1741 10Gbps dual port CNA - FCOE
1657:0022:1657:0024 1860 16Gbps FC HBA
1657:0022:1657:0022 1860 10Gbps CNA - FCOE
Firmware download
-----------------
The latest Firmware package for 3.0.2.2 bfa driver can be found at:
http://www.brocade.com/services-support/drivers-downloads/adapters/Linux.page
and then click the corresponding util package link:
Version Link
v3.0.0.0 Linux Adapter Firmware package for RHEL 6.2, SLES 11SP2
Configuration & Management utility download
-------------------------------------------
The latest driver configuration & management utility for 3.0.2.2 bfa driver can
be found at:
http://www.brocade.com/services-support/drivers-downloads/adapters/Linux.page
and then click the corresponding util package link:
Version Link
v3.0.2.0 Linux Adapter Firmware package for RHEL 6.2, SLES 11SP2
Documentation
-------------
The latest Administrator's Guide, Installation and Reference Manual,
Troubleshooting Guide, and Release Notes for the corresponding out-of-box
driver can be found at:
http://www.brocade.com/services-support/drivers-downloads/adapters/Linux.page
and use the following inbox and out-of-box driver version mapping to find
the corresponding documentation:
Inbox Version Out-of-box Version
v3.0.2.2 v3.0.0.0
Support
-------
For general product and support info, go to the Brocade website at:
http://www.brocade.com/services-support/index.page
@@ -398,21 +398,6 @@ struct sas_task {
	task_done -- callback when the task has finished execution
};
When an external entity (any entity other than the LLDD or the
SAS layer) wants to work with a struct domain_device, it
_must_ call kobject_get() when getting a handle on the
device and kobject_put() when it is done with the device.
This does two things:
A) implements proper kfree() for the device;
B) increments/decrements the kref for all players:
domain_device
all domain_device's ... (if past an expander)
port
host adapter
pci device
and up the ladder, etc.
DISCOVERY
---------
......
@@ -5936,29 +5936,31 @@ void ata_host_init(struct ata_host *host, struct device *dev,
	host->ops = ops;
}

void __ata_port_probe(struct ata_port *ap)
{
	struct ata_eh_info *ehi = &ap->link.eh_info;
	unsigned long flags;

	/* kick EH for boot probing */
	spin_lock_irqsave(ap->lock, flags);

	ehi->probe_mask |= ATA_ALL_DEVICES;
	ehi->action |= ATA_EH_RESET;
	ehi->flags |= ATA_EHI_NO_AUTOPSY | ATA_EHI_QUIET;

	ap->pflags &= ~ATA_PFLAG_INITIALIZING;
	ap->pflags |= ATA_PFLAG_LOADING;
	ata_port_schedule_eh(ap);

	spin_unlock_irqrestore(ap->lock, flags);
}

int ata_port_probe(struct ata_port *ap)
{
	int rc = 0;

	if (ap->ops->error_handler) {
		__ata_port_probe(ap);
		/* wait for EH to finish */
		ata_port_wait_eh(ap);
	} else {
		DPRINTK("ata%u: bus probe begin\n", ap->print_id);
......
@@ -863,6 +863,7 @@ void ata_port_wait_eh(struct ata_port *ap)
		goto retry;
	}
}
EXPORT_SYMBOL_GPL(ata_port_wait_eh);
static int ata_eh_nr_in_flight(struct ata_port *ap)
{
......
@@ -3838,6 +3838,19 @@ void ata_sas_port_stop(struct ata_port *ap)
}
EXPORT_SYMBOL_GPL(ata_sas_port_stop);
int ata_sas_async_port_init(struct ata_port *ap)
{
int rc = ap->ops->port_start(ap);
if (!rc) {
ap->print_id = ata_print_id++;
__ata_port_probe(ap);
}
return rc;
}
EXPORT_SYMBOL_GPL(ata_sas_async_port_init);
/**
 * ata_sas_port_init - Initialize a SATA device
 * @ap: SATA port to initialize
......
@@ -105,6 +105,7 @@ extern int ata_cmd_ioctl(struct scsi_device *scsidev, void __user *arg);
extern struct ata_port *ata_port_alloc(struct ata_host *host);
extern const char *sata_spd_string(unsigned int spd);
extern int ata_port_probe(struct ata_port *ap);
extern void __ata_port_probe(struct ata_port *ap);
/* libata-acpi.c */
#ifdef CONFIG_ATA_ACPI
@@ -151,7 +152,6 @@ extern void ata_eh_acquire(struct ata_port *ap);
extern void ata_eh_release(struct ata_port *ap);
extern enum blk_eh_timer_return ata_scsi_timed_out(struct scsi_cmnd *cmd);
extern void ata_scsi_error(struct Scsi_Host *host);
extern void ata_port_wait_eh(struct ata_port *ap);
extern void ata_eh_fastdrain_timerfn(unsigned long arg);
extern void ata_qc_schedule_eh(struct ata_queued_cmd *qc);
extern void ata_dev_disable(struct ata_device *dev);
......
@@ -1903,6 +1903,14 @@ config SCSI_BFA_FC
	  To compile this driver as a module, choose M here. The module will
	  be called bfa.
config SCSI_VIRTIO
tristate "virtio-scsi support (EXPERIMENTAL)"
depends on EXPERIMENTAL && VIRTIO
help
This is the virtual HBA driver for virtio. If the kernel will
be used in a virtual machine, say Y or M.
endif # SCSI_LOWLEVEL

source "drivers/scsi/pcmcia/Kconfig"
......
@@ -141,6 +141,7 @@ obj-$(CONFIG_SCSI_CXGB4_ISCSI)	+= libiscsi.o libiscsi_tcp.o cxgbi/
obj-$(CONFIG_SCSI_BNX2_ISCSI)	+= libiscsi.o bnx2i/
obj-$(CONFIG_BE2ISCSI)		+= libiscsi.o be2iscsi/
obj-$(CONFIG_SCSI_PMCRAID)	+= pmcraid.o
obj-$(CONFIG_SCSI_VIRTIO) += virtio_scsi.o
obj-$(CONFIG_VMWARE_PVSCSI)	+= vmw_pvscsi.o
obj-$(CONFIG_HYPERV_STORAGE)	+= hv_storvsc.o
......
@@ -151,7 +151,11 @@ int aac_msi;
int aac_commit = -1;
int startup_timeout = 180;
int aif_timeout = 120;
int aac_sync_mode; /* Only Sync. transfer - disabled */
module_param(aac_sync_mode, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(aac_sync_mode, "Force sync. transfer mode"
" 0=off, 1=on");
module_param(nondasd, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(nondasd, "Control scanning of hba for nondasd devices."
	" 0=off, 1=on");
......
@@ -12,7 +12,7 @@
 *----------------------------------------------------------------------------*/

#ifndef AAC_DRIVER_BUILD
# define AAC_DRIVER_BUILD 28900
# define AAC_DRIVER_BRANCH "-ms"
#endif
#define MAXIMUM_NUM_CONTAINERS	32
@@ -756,8 +756,16 @@ struct src_mu_registers {

struct src_registers {
	struct src_mu_registers MUnit;		/* 00h - c7h */
	union {
		struct {
			__le32 reserved1[130790];	/* c8h - 7fc5fh */
			struct src_inbound IndexRegs;	/* 7fc60h */
		} tupelo;
		struct {
			__le32 reserved1[974];		/* c8h - fffh */
			struct src_inbound IndexRegs;	/* 1000h */
		} denali;
	} u;
};

#define src_readb(AEP, CSR)	readb(&((AEP)->regs.src.bar0->CSR))
@@ -999,6 +1007,10 @@ struct aac_bus_info_response {
#define AAC_OPT_NEW_COMM	cpu_to_le32(1<<17)
#define AAC_OPT_NEW_COMM_64	cpu_to_le32(1<<18)
#define AAC_OPT_NEW_COMM_TYPE1	cpu_to_le32(1<<28)
#define AAC_OPT_NEW_COMM_TYPE2 cpu_to_le32(1<<29)
#define AAC_OPT_NEW_COMM_TYPE3 cpu_to_le32(1<<30)
#define AAC_OPT_NEW_COMM_TYPE4 cpu_to_le32(1<<31)
struct aac_dev
{
@@ -1076,6 +1088,8 @@ struct aac_dev
# define AAC_MIN_FOOTPRINT_SIZE 8192
# define AAC_MIN_SRC_BAR0_SIZE 0x400000
# define AAC_MIN_SRC_BAR1_SIZE 0x800
# define AAC_MIN_SRCV_BAR0_SIZE 0x100000
# define AAC_MIN_SRCV_BAR1_SIZE 0x400
#endif
	union
	{
@@ -1116,7 +1130,10 @@ struct aac_dev
	u8		msi;
	int		management_fib_count;
	spinlock_t	manage_lock;
spinlock_t sync_lock;
int sync_mode;
struct fib *sync_fib;
struct list_head sync_fib_list;
};

#define aac_adapter_interrupt(dev) \
@@ -1163,6 +1180,7 @@ struct aac_dev
#define FIB_CONTEXT_FLAG_TIMED_OUT	(0x00000001)
#define FIB_CONTEXT_FLAG		(0x00000002)
#define FIB_CONTEXT_FLAG_WAIT (0x00000004)
/*
 * Define the command values
@@ -1970,6 +1988,7 @@ int aac_rkt_init(struct aac_dev *dev);
int aac_nark_init(struct aac_dev *dev);
int aac_sa_init(struct aac_dev *dev);
int aac_src_init(struct aac_dev *dev);
int aac_srcv_init(struct aac_dev *dev);
int aac_queue_get(struct aac_dev * dev, u32 * index, u32 qid, struct hw_fib * hw_fib, int wait, struct fib * fibptr, unsigned long *nonotify);
unsigned int aac_response_normal(struct aac_queue * q);
unsigned int aac_command_normal(struct aac_queue * q);
......
@@ -325,12 +325,14 @@ struct aac_dev *aac_init_adapter(struct aac_dev *dev)
{
	u32 status[5];
	struct Scsi_Host * host = dev->scsi_host_ptr;
extern int aac_sync_mode;
	/*
	 *	Check the preferred comm settings, defaults from template.
	 */
	dev->management_fib_count = 0;
	spin_lock_init(&dev->manage_lock);
spin_lock_init(&dev->sync_lock);
	dev->max_fib_size = sizeof(struct hw_fib);
	dev->sg_tablesize = host->sg_tablesize = (dev->max_fib_size
		- sizeof(struct aac_fibhdr)
@@ -344,13 +346,21 @@ struct aac_dev *aac_init_adapter(struct aac_dev *dev)
	    (status[0] == 0x00000001)) {
		if (status[1] & le32_to_cpu(AAC_OPT_NEW_COMM_64))
			dev->raw_io_64 = 1;
		dev->sync_mode = aac_sync_mode;
		if (dev->a_ops.adapter_comm &&
			(status[1] & le32_to_cpu(AAC_OPT_NEW_COMM))) {
			dev->comm_interface = AAC_COMM_MESSAGE;
			dev->raw_io_interface = 1;
			if ((status[1] & le32_to_cpu(AAC_OPT_NEW_COMM_TYPE1))) {
				/* driver supports TYPE1 (Tupelo) */
				dev->comm_interface = AAC_COMM_MESSAGE_TYPE1;
			} else if ((status[1] & le32_to_cpu(AAC_OPT_NEW_COMM_TYPE4)) ||
				  (status[1] & le32_to_cpu(AAC_OPT_NEW_COMM_TYPE3)) ||
				  (status[1] & le32_to_cpu(AAC_OPT_NEW_COMM_TYPE2))) {
				/* driver doesn't support TYPE2 (Series7), TYPE3 and TYPE4 */
				/* switch to sync. mode */
				dev->comm_interface = AAC_COMM_MESSAGE_TYPE1;
				dev->sync_mode = 1;
			}
		}
	if ((dev->comm_interface == AAC_COMM_MESSAGE) &&
@@ -455,6 +465,7 @@ struct aac_dev *aac_init_adapter(struct aac_dev *dev)
	}

	INIT_LIST_HEAD(&dev->fib_list);
INIT_LIST_HEAD(&dev->sync_fib_list);
	return dev;
}
......
@@ -416,6 +416,7 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
	unsigned long flags = 0;
	unsigned long qflags;
	unsigned long mflags = 0;
unsigned long sflags = 0;
	if (!(hw_fib->header.XferState & cpu_to_le32(HostOwned)))
@@ -512,6 +513,31 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
		spin_lock_irqsave(&fibptr->event_lock, flags);
	}
if (dev->sync_mode) {
if (wait)
spin_unlock_irqrestore(&fibptr->event_lock, flags);
spin_lock_irqsave(&dev->sync_lock, sflags);
if (dev->sync_fib) {
list_add_tail(&fibptr->fiblink, &dev->sync_fib_list);
spin_unlock_irqrestore(&dev->sync_lock, sflags);
} else {
dev->sync_fib = fibptr;
spin_unlock_irqrestore(&dev->sync_lock, sflags);
aac_adapter_sync_cmd(dev, SEND_SYNCHRONOUS_FIB,
(u32)fibptr->hw_fib_pa, 0, 0, 0, 0, 0,
NULL, NULL, NULL, NULL, NULL);
}
if (wait) {
fibptr->flags |= FIB_CONTEXT_FLAG_WAIT;
if (down_interruptible(&fibptr->event_wait)) {
fibptr->flags &= ~FIB_CONTEXT_FLAG_WAIT;
return -EFAULT;
}
return 0;
}
return -EINPROGRESS;
}
	if (aac_adapter_deliver(fibptr) != 0) {
		printk(KERN_ERR "aac_fib_send: returned -EBUSY\n");
		if (wait) {
......
@@ -56,7 +56,7 @@

#include "aacraid.h"

#define AAC_DRIVER_VERSION	"1.2-0"
#ifndef AAC_DRIVER_BRANCH
#define AAC_DRIVER_BRANCH	""
#endif
@@ -162,7 +162,10 @@ static const struct pci_device_id aac_pci_tbl[] __devinitdata = {
	{ 0x9005, 0x0285, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 59 }, /* Adaptec Catch All */
	{ 0x9005, 0x0286, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 60 }, /* Adaptec Rocket Catch All */
	{ 0x9005, 0x0288, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 61 }, /* Adaptec NEMER/ARK Catch All */
	{ 0x9005, 0x028b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 62 }, /* Adaptec PMC Series 6 (Tupelo) */
{ 0x9005, 0x028c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 63 }, /* Adaptec PMC Series 7 (Denali) */
{ 0x9005, 0x028d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 64 }, /* Adaptec PMC Series 8 */
{ 0x9005, 0x028f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 65 }, /* Adaptec PMC Series 9 */
	{ 0,}
};
MODULE_DEVICE_TABLE(pci, aac_pci_tbl);

@@ -238,7 +241,10 @@ static struct aac_driver_ident aac_drivers[] = {
	{ aac_rx_init, "aacraid", "ADAPTEC ", "RAID ", 2 }, /* Adaptec Catch All */
	{ aac_rkt_init, "aacraid", "ADAPTEC ", "RAID ", 2 }, /* Adaptec Rocket Catch All */
	{ aac_nark_init, "aacraid", "ADAPTEC ", "RAID ", 2 }, /* Adaptec NEMER/ARK Catch All */
	{ aac_src_init, "aacraid", "ADAPTEC ", "RAID ", 2 }, /* Adaptec PMC Series 6 (Tupelo) */
{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID ", 2 }, /* Adaptec PMC Series 7 (Denali) */
{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID ", 2 }, /* Adaptec PMC Series 8 */
{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID ", 2 } /* Adaptec PMC Series 9 */
};

/**
@@ -1102,6 +1108,7 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
	int error = -ENODEV;
	int unique_id = 0;
	u64 dmamask;
extern int aac_sync_mode;
	list_for_each_entry(aac, &aac_devices, entry) {
		if (aac->id > unique_id)
@@ -1162,6 +1169,21 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
	if ((*aac_drivers[index].init)(aac))
		goto out_unmap;
if (aac->sync_mode) {
if (aac_sync_mode)
printk(KERN_INFO "%s%d: Sync. mode enforced "
"by driver parameter. This will cause "
"a significant performance decrease!\n",
aac->name,
aac->id);
else
printk(KERN_INFO "%s%d: Async. mode not supported "
"by current driver, sync. mode enforced."
"\nPlease update driver to get full performance.\n",
aac->name,
aac->id);
}
	/*
	 * Start any kernel threads needed
	 */
......
@@ -643,6 +643,7 @@ int _aac_rx_init(struct aac_dev *dev)
	if (aac_init_adapter(dev) == NULL)
		goto error_iounmap;
	aac_adapter_comm(dev, dev->comm_interface);
dev->sync_mode = 0; /* sync. mode not supported */
	dev->msi = aac_msi && !pci_enable_msi(dev->pdev);
	if (request_irq(dev->pdev->irq, dev->a_ops.adapter_intr,
			IRQF_SHARED|IRQF_DISABLED, "aacraid", dev) < 0) {
......
@@ -385,6 +385,7 @@ int aac_sa_init(struct aac_dev *dev)
	if(aac_init_adapter(dev) == NULL)
		goto error_irq;
dev->sync_mode = 0; /* sync. mode not supported */
	if (request_irq(dev->pdev->irq, dev->a_ops.adapter_intr,
			IRQF_SHARED|IRQF_DISABLED,
			"aacraid", (void *)dev ) < 0) {
......
@@ -96,6 +96,38 @@ static irqreturn_t aac_src_intr_message(int irq, void *dev_id)
			our_interrupt = 1;
			/* handle AIF */
			aac_intr_normal(dev, 0, 2, 0, NULL);
} else if (bellbits_shifted & OUTBOUNDDOORBELL_0) {
unsigned long sflags;
struct list_head *entry;
int send_it = 0;
if (dev->sync_fib) {
our_interrupt = 1;
if (dev->sync_fib->callback)
dev->sync_fib->callback(dev->sync_fib->callback_data,
dev->sync_fib);
spin_lock_irqsave(&dev->sync_fib->event_lock, sflags);
if (dev->sync_fib->flags & FIB_CONTEXT_FLAG_WAIT) {
dev->management_fib_count--;
up(&dev->sync_fib->event_wait);
}
spin_unlock_irqrestore(&dev->sync_fib->event_lock, sflags);
spin_lock_irqsave(&dev->sync_lock, sflags);
if (!list_empty(&dev->sync_fib_list)) {
entry = dev->sync_fib_list.next;
dev->sync_fib = list_entry(entry, struct fib, fiblink);
list_del(entry);
send_it = 1;
} else {
dev->sync_fib = NULL;
}
spin_unlock_irqrestore(&dev->sync_lock, sflags);
if (send_it) {
aac_adapter_sync_cmd(dev, SEND_SYNCHRONOUS_FIB,
(u32)dev->sync_fib->hw_fib_pa, 0, 0, 0, 0, 0,
NULL, NULL, NULL, NULL, NULL);
}
}
		}
	}
@@ -177,56 +209,63 @@ static int src_sync_cmd(struct aac_dev *dev, u32 command,
	 */
	src_writel(dev, MUnit.IDR, INBOUNDDOORBELL_0 << SRC_IDR_SHIFT);
	if (!dev->sync_mode || command != SEND_SYNCHRONOUS_FIB) {
		ok = 0;
		start = jiffies;

		/*
		 * Wait up to 5 minutes
		 */
		while (time_before(jiffies, start+300*HZ)) {
			udelay(5);	/* Delay 5 microseconds to let Mon960 get info. */
			/*
			 * Mon960 will set doorbell0 bit when it has completed the command.
			 */
			if ((src_readl(dev, MUnit.ODR_R) >> SRC_ODR_SHIFT) & OUTBOUNDDOORBELL_0) {
				/*
				 * Clear the doorbell.
				 */
				src_writel(dev, MUnit.ODR_C, OUTBOUNDDOORBELL_0 << SRC_ODR_SHIFT);
				ok = 1;
				break;
			}
			/*
			 * Yield the processor in case we are slow
			 */
			msleep(1);
		}
		if (unlikely(ok != 1)) {
			/*
			 * Restore interrupt mask even though we timed out
			 */
			aac_adapter_enable_int(dev);
			return -ETIMEDOUT;
		}
		/*
		 * Pull the synch status from Mailbox 0.
		 */
		if (status)
			*status = readl(&dev->IndexRegs->Mailbox[0]);
		if (r1)
			*r1 = readl(&dev->IndexRegs->Mailbox[1]);
		if (r2)
			*r2 = readl(&dev->IndexRegs->Mailbox[2]);
		if (r3)
			*r3 = readl(&dev->IndexRegs->Mailbox[3]);
		if (r4)
			*r4 = readl(&dev->IndexRegs->Mailbox[4]);
		/*
		 * Clear the synch command doorbell.
		 */
		src_writel(dev, MUnit.ODR_C, OUTBOUNDDOORBELL_0 << SRC_ODR_SHIFT);
	}
	/*
	 * Restore interrupt mask
	 */
	aac_adapter_enable_int(dev);
	return 0;
}
/**
@@ -386,9 +425,7 @@ static int aac_src_ioremap(struct aac_dev *dev, u32 size)
{
	if (!size) {
		iounmap(dev->regs.src.bar0);
		dev->base = dev->regs.src.bar0 = NULL;
		return 0;
	}
	dev->regs.src.bar1 = ioremap(pci_resource_start(dev->pdev, 2),
@@ -404,7 +441,27 @@ static int aac_src_ioremap(struct aac_dev *dev, u32 size)
		return -1;
	}
	dev->IndexRegs = &((struct src_registers __iomem *)
		dev->base)->u.tupelo.IndexRegs;
return 0;
}
/**
* aac_srcv_ioremap
* @size: mapping resize request
*
*/
static int aac_srcv_ioremap(struct aac_dev *dev, u32 size)
{
if (!size) {
iounmap(dev->regs.src.bar0);
dev->base = dev->regs.src.bar0 = NULL;
return 0;
}
dev->base = dev->regs.src.bar0 = ioremap(dev->scsi_host_ptr->base, size);
if (dev->base == NULL)
return -1;
dev->IndexRegs = &((struct src_registers __iomem *)
dev->base)->u.denali.IndexRegs;
	return 0;
}
@@ -419,7 +476,7 @@ static int aac_src_restart_adapter(struct aac_dev *dev, int bled)
	bled = aac_adapter_sync_cmd(dev, IOP_RESET_ALWAYS,
		0, 0, 0, 0, 0, 0, &var, &reset_mask, NULL, NULL, NULL);
	if (bled || (var != 0x00000001))
		return -EINVAL;
	if (dev->supplement_adapter_info.SupportedOptions2 &
		AAC_OPTION_DOORBELL_RESET) {
		src_writel(dev, MUnit.IDR, reset_mask);
@@ -579,15 +636,149 @@ int aac_src_init(struct aac_dev *dev)
	dev->dbg_size = AAC_MIN_SRC_BAR1_SIZE;

	aac_adapter_enable_int(dev);
if (!dev->sync_mode) {
/*
* Tell the adapter that all is configured, and it can
* start accepting requests
*/
aac_src_start_adapter(dev);
}
return 0;
error_iounmap:
return -1;
}
/**
* aac_srcv_init - initialize an SRCv card
* @dev: device to configure
*
*/
int aac_srcv_init(struct aac_dev *dev)
{
unsigned long start;
unsigned long status;
int restart = 0;
int instance = dev->id;
const char *name = dev->name;
dev->a_ops.adapter_ioremap = aac_srcv_ioremap;
dev->a_ops.adapter_comm = aac_src_select_comm;
dev->base_size = AAC_MIN_SRCV_BAR0_SIZE;
if (aac_adapter_ioremap(dev, dev->base_size)) {
printk(KERN_WARNING "%s: unable to map adapter.\n", name);
goto error_iounmap;
}
/* Failure to reset here is an option ... */
dev->a_ops.adapter_sync_cmd = src_sync_cmd;
dev->a_ops.adapter_enable_int = aac_src_disable_interrupt;
if ((aac_reset_devices || reset_devices) &&
!aac_src_restart_adapter(dev, 0))
++restart;
/* /*
* Tell the adapter that all is configured, and it can * Check to see if the board panic'd while booting.
* start accepting requests
*/ */
aac_src_start_adapter(dev); status = src_readl(dev, MUnit.OMR);
if (status & KERNEL_PANIC) {
if (aac_src_restart_adapter(dev, aac_src_check_health(dev)))
goto error_iounmap;
++restart;
}
/*
* Check to see if the board failed any self tests.
*/
status = src_readl(dev, MUnit.OMR);
if (status & SELF_TEST_FAILED) {
printk(KERN_ERR "%s%d: adapter self-test failed.\n", dev->name, instance);
goto error_iounmap;
}
/*
* Check to see if the monitor panic'd while booting.
*/
if (status & MONITOR_PANIC) {
printk(KERN_ERR "%s%d: adapter monitor panic.\n", dev->name, instance);
goto error_iounmap;
}
start = jiffies;
/*
* Wait for the adapter to be up and running. Wait up to 3 minutes
*/
while (!((status = src_readl(dev, MUnit.OMR)) & KERNEL_UP_AND_RUNNING)) {
if ((restart &&
(status & (KERNEL_PANIC|SELF_TEST_FAILED|MONITOR_PANIC))) ||
time_after(jiffies, start+HZ*startup_timeout)) {
printk(KERN_ERR "%s%d: adapter kernel failed to start, init status = %lx.\n",
dev->name, instance, status);
goto error_iounmap;
}
if (!restart &&
((status & (KERNEL_PANIC|SELF_TEST_FAILED|MONITOR_PANIC)) ||
time_after(jiffies, start + HZ *
((startup_timeout > 60)
? (startup_timeout - 60)
: (startup_timeout / 2))))) {
if (likely(!aac_src_restart_adapter(dev, aac_src_check_health(dev))))
start = jiffies;
++restart;
}
msleep(1);
}
if (restart && aac_commit)
aac_commit = 1;
/*
* Fill in the common function dispatch table.
*/
dev->a_ops.adapter_interrupt = aac_src_interrupt_adapter;
dev->a_ops.adapter_disable_int = aac_src_disable_interrupt;
dev->a_ops.adapter_notify = aac_src_notify_adapter;
dev->a_ops.adapter_sync_cmd = src_sync_cmd;
dev->a_ops.adapter_check_health = aac_src_check_health;
dev->a_ops.adapter_restart = aac_src_restart_adapter;
/*
* First clear out all interrupts. Then enable the one's that we
* can handle.
*/
aac_adapter_comm(dev, AAC_COMM_MESSAGE);
aac_adapter_disable_int(dev);
src_writel(dev, MUnit.ODR_C, 0xffffffff);
aac_adapter_enable_int(dev);
if (aac_init_adapter(dev) == NULL)
goto error_iounmap;
if (dev->comm_interface != AAC_COMM_MESSAGE_TYPE1)
goto error_iounmap;
dev->msi = aac_msi && !pci_enable_msi(dev->pdev);
if (request_irq(dev->pdev->irq, dev->a_ops.adapter_intr,
IRQF_SHARED|IRQF_DISABLED, "aacraid", dev) < 0) {
if (dev->msi)
pci_disable_msi(dev->pdev);
printk(KERN_ERR "%s%d: Interrupt unavailable.\n",
name, instance);
goto error_iounmap;
}
dev->dbg_base = dev->scsi_host_ptr->base;
dev->dbg_base_mapped = dev->base;
dev->dbg_size = dev->base_size;
aac_adapter_enable_int(dev);
if (!dev->sync_mode) {
/*
* Tell the adapter that all is configured, and it can
* start accepting requests
*/
aac_src_start_adapter(dev);
}
return 0; return 0;
error_iounmap: error_iounmap:
return -1; return -1;
} }
@@ -80,6 +80,8 @@ void asd_invalidate_edb(struct asd_ascb *ascb, int edb_id);

 int  asd_execute_task(struct sas_task *, int num, gfp_t gfp_flags);

+void asd_set_dmamode(struct domain_device *dev);
+
 /* ---------- TMFs ---------- */
 int  asd_abort_task(struct sas_task *);
 int  asd_abort_task_set(struct domain_device *, u8 *lun);
......
@@ -109,26 +109,37 @@ static int asd_init_sata_tag_ddb(struct domain_device *dev)
 	return 0;
 }

-static int asd_init_sata(struct domain_device *dev)
+void asd_set_dmamode(struct domain_device *dev)
 {
 	struct asd_ha_struct *asd_ha = dev->port->ha->lldd_ha;
+	struct ata_device *ata_dev = sas_to_ata_dev(dev);
 	int ddb = (int) (unsigned long) dev->lldd_dev;
 	u32 qdepth = 0;
-	int res = 0;

-	asd_ddbsite_write_word(asd_ha, ddb, ATA_CMD_SCBPTR, 0xFFFF);
-	if ((dev->dev_type == SATA_DEV || dev->dev_type == SATA_PM_PORT) &&
-	    dev->sata_dev.identify_device &&
-	    dev->sata_dev.identify_device[10] != 0) {
-		u16 w75 = le16_to_cpu(dev->sata_dev.identify_device[75]);
-		u16 w76 = le16_to_cpu(dev->sata_dev.identify_device[76]);
-
-		if (w76 & 0x100) /* NCQ? */
-			qdepth = (w75 & 0x1F) + 1;
+	if (dev->dev_type == SATA_DEV || dev->dev_type == SATA_PM_PORT) {
+		if (ata_id_has_ncq(ata_dev->id))
+			qdepth = ata_id_queue_depth(ata_dev->id);
 		asd_ddbsite_write_dword(asd_ha, ddb, SATA_TAG_ALLOC_MASK,
 					(1ULL<<qdepth)-1);
 		asd_ddbsite_write_byte(asd_ha, ddb, NUM_SATA_TAGS, qdepth);
 	}
+
+	if (qdepth > 0)
+		if (asd_init_sata_tag_ddb(dev) != 0) {
+			unsigned long flags;
+
+			spin_lock_irqsave(dev->sata_dev.ap->lock, flags);
+			ata_dev->flags |= ATA_DFLAG_NCQ_OFF;
+			spin_unlock_irqrestore(dev->sata_dev.ap->lock, flags);
+		}
+}
+
+static int asd_init_sata(struct domain_device *dev)
+{
+	struct asd_ha_struct *asd_ha = dev->port->ha->lldd_ha;
+	int ddb = (int) (unsigned long) dev->lldd_dev;
+
+	asd_ddbsite_write_word(asd_ha, ddb, ATA_CMD_SCBPTR, 0xFFFF);
 	if (dev->dev_type == SATA_DEV || dev->dev_type == SATA_PM ||
 	    dev->dev_type == SATA_PM_PORT) {
 		struct dev_to_host_fis *fis = (struct dev_to_host_fis *)
@@ -136,9 +147,8 @@ static int asd_init_sata(struct domain_device *dev)
 		asd_ddbsite_write_byte(asd_ha, ddb, SATA_STATUS, fis->status);
 	}
 	asd_ddbsite_write_word(asd_ha, ddb, NCQ_DATA_SCB_PTR, 0xFFFF);
-	if (qdepth > 0)
-		res = asd_init_sata_tag_ddb(dev);
-	return res;
+
+	return 0;
 }
static int asd_init_target_ddb(struct domain_device *dev) static int asd_init_target_ddb(struct domain_device *dev)
......
@@ -68,7 +68,6 @@ static struct scsi_host_template aic94xx_sht = {
 	.queuecommand = sas_queuecommand,
 	.target_alloc = sas_target_alloc,
 	.slave_configure = sas_slave_configure,
-	.slave_destroy = sas_slave_destroy,
 	.scan_finished = asd_scan_finished,
 	.scan_start = asd_scan_start,
 	.change_queue_depth = sas_change_queue_depth,
@@ -82,7 +81,6 @@ static struct scsi_host_template aic94xx_sht = {
 	.use_clustering = ENABLE_CLUSTERING,
 	.eh_device_reset_handler = sas_eh_device_reset_handler,
 	.eh_bus_reset_handler = sas_eh_bus_reset_handler,
-	.slave_alloc = sas_slave_alloc,
 	.target_destroy = sas_target_destroy,
 	.ioctl = sas_ioctl,
 };
@@ -972,7 +970,7 @@ static int asd_scan_finished(struct Scsi_Host *shost, unsigned long time)
 	if (time < HZ)
 		return 0;
 	/* Wait for discovery to finish */
-	scsi_flush_work(shost);
+	sas_drain_work(SHOST_TO_SAS_HA(shost));
 	return 1;
 }
@@ -1010,6 +1008,8 @@ static struct sas_domain_function_template aic94xx_transport_functions = {
 	.lldd_clear_nexus_ha = asd_clear_nexus_ha,

 	.lldd_control_phy = asd_control_phy,
+
+	.lldd_ata_set_dmamode = asd_set_dmamode,
 };

 static const struct pci_device_id aic94xx_pci_table[] __devinitdata = {
......
@@ -181,7 +181,7 @@ static int asd_clear_nexus_I_T(struct domain_device *dev,
 int asd_I_T_nexus_reset(struct domain_device *dev)
 {
 	int res, tmp_res, i;
-	struct sas_phy *phy = sas_find_local_phy(dev);
+	struct sas_phy *phy = sas_get_local_phy(dev);
 	/* Standard mandates link reset for ATA (type 0) and
 	 * hard reset for SSP (type 1) */
 	int reset_type = (dev->dev_type == SATA_DEV ||
@@ -192,7 +192,7 @@ int asd_I_T_nexus_reset(struct domain_device *dev)
 	ASD_DPRINTK("sending %s reset to %s\n",
 		    reset_type ? "hard" : "soft", dev_name(&phy->dev));
 	res = sas_phy_reset(phy, reset_type);
-	if (res == TMF_RESP_FUNC_COMPLETE) {
+	if (res == TMF_RESP_FUNC_COMPLETE || res == -ENODEV) {
 		/* wait for the maximum settle time */
 		msleep(500);
 		/* clear all outstanding commands (keep nexus suspended) */
@@ -201,7 +201,7 @@ int asd_I_T_nexus_reset(struct domain_device *dev)
 	for (i = 0 ; i < 3; i++) {
 		tmp_res = asd_clear_nexus_I_T(dev, NEXUS_PHASE_RESUME);
 		if (tmp_res == TC_RESUME)
-			return res;
+			goto out;
 		msleep(500);
 	}
@@ -211,7 +211,10 @@ int asd_I_T_nexus_reset(struct domain_device *dev)
 	dev_printk(KERN_ERR, &phy->dev,
 		   "Failed to resume nexus after reset 0x%x\n", tmp_res);
-	return TMF_RESP_FUNC_FAILED;
+	res = TMF_RESP_FUNC_FAILED;
+ out:
+	sas_put_local_phy(phy);
+	return res;
 }

 static int asd_clear_nexus_I_T_L(struct domain_device *dev, u8 *lun)
......
@@ -3047,8 +3047,7 @@ bfad_im_bsg_els_ct_request(struct fc_bsg_job *job)
 	 * Allocate buffer for bsg_fcpt and do a copy_from_user op for payload
	 * buffer of size bsg_data->payload_len
 	 */
-	bsg_fcpt = (struct bfa_bsg_fcpt_s *)
-		   kzalloc(bsg_data->payload_len, GFP_KERNEL);
+	bsg_fcpt = kzalloc(bsg_data->payload_len, GFP_KERNEL);
 	if (!bsg_fcpt)
 		goto out;
@@ -3060,6 +3059,7 @@ bfad_im_bsg_els_ct_request(struct fc_bsg_job *job)
 	drv_fcxp = kzalloc(sizeof(struct bfad_fcxp), GFP_KERNEL);
 	if (drv_fcxp == NULL) {
+		kfree(bsg_fcpt);
 		rc = -ENOMEM;
 		goto out;
 	}
......
@@ -62,7 +62,7 @@
 #include "bnx2fc_constants.h"

 #define BNX2FC_NAME		"bnx2fc"
-#define BNX2FC_VERSION		"1.0.9"
+#define BNX2FC_VERSION		"1.0.10"

 #define PFX			"bnx2fc: "
@@ -114,6 +114,8 @@
 #define BNX2FC_HASH_TBL_CHUNK_SIZE	(16 * 1024)
 #define BNX2FC_MAX_SEQS			255
+#define BNX2FC_MAX_RETRY_CNT		3
+#define BNX2FC_MAX_RPORT_RETRY_CNT	255

 #define BNX2FC_READ			(1 << 1)
 #define BNX2FC_WRITE			(1 << 0)
@@ -121,8 +123,10 @@
 #define BNX2FC_MIN_XID		0
 #define BNX2FC_MAX_XID		\
 		(BNX2FC_MAX_OUTSTANDING_CMNDS + BNX2FC_ELSTM_XIDS - 1)
+#define FCOE_MAX_NUM_XIDS	0x2000
 #define FCOE_MIN_XID		(BNX2FC_MAX_XID + 1)
-#define FCOE_MAX_XID		(FCOE_MIN_XID + 4095)
+#define FCOE_MAX_XID		(FCOE_MIN_XID + FCOE_MAX_NUM_XIDS - 1)
+#define FCOE_XIDS_PER_CPU	(FCOE_MIN_XID + (512 * nr_cpu_ids) - 1)
 #define BNX2FC_MAX_LUN		0xFFFF
 #define BNX2FC_MAX_FCP_TGT	256
 #define BNX2FC_MAX_CMD_LEN	16
......
@@ -22,7 +22,7 @@ DEFINE_PER_CPU(struct bnx2fc_percpu_s, bnx2fc_percpu);

 #define DRV_MODULE_NAME		"bnx2fc"
 #define DRV_MODULE_VERSION	BNX2FC_VERSION
-#define DRV_MODULE_RELDATE	"Oct 21, 2011"
+#define DRV_MODULE_RELDATE	"Jan 22, 2011"

 static char version[] __devinitdata =
@@ -939,8 +939,14 @@ static int bnx2fc_libfc_config(struct fc_lport *lport)

 static int bnx2fc_em_config(struct fc_lport *lport)
 {
+	int max_xid;
+
+	if (nr_cpu_ids <= 2)
+		max_xid = FCOE_XIDS_PER_CPU;
+	else
+		max_xid = FCOE_MAX_XID;
+
 	if (!fc_exch_mgr_alloc(lport, FC_CLASS_3, FCOE_MIN_XID,
-				FCOE_MAX_XID, NULL)) {
+				max_xid, NULL)) {
 		printk(KERN_ERR PFX "em_config:fc_exch_mgr_alloc failed\n");
 		return -ENOMEM;
 	}
@@ -952,8 +958,8 @@ static int bnx2fc_lport_config(struct fc_lport *lport)
 {
 	lport->link_up = 0;
 	lport->qfull = 0;
-	lport->max_retry_count = 3;
-	lport->max_rport_retry_count = 3;
+	lport->max_retry_count = BNX2FC_MAX_RETRY_CNT;
+	lport->max_rport_retry_count = BNX2FC_MAX_RPORT_RETRY_CNT;
 	lport->e_d_tov = 2 * 1000;
 	lport->r_a_tov = 10 * 1000;
@@ -1536,6 +1542,7 @@ static void __bnx2fc_destroy(struct bnx2fc_interface *interface)
 static int bnx2fc_destroy(struct net_device *netdev)
 {
 	struct bnx2fc_interface *interface = NULL;
+	struct workqueue_struct *timer_work_queue;
 	int rc = 0;

 	rtnl_lock();
@@ -1548,9 +1555,9 @@ static int bnx2fc_destroy(struct net_device *netdev)
 		goto netdev_err;
 	}

-	destroy_workqueue(interface->timer_work_queue);
+	timer_work_queue = interface->timer_work_queue;
 	__bnx2fc_destroy(interface);
+	destroy_workqueue(timer_work_queue);

 netdev_err:
 	mutex_unlock(&bnx2fc_dev_lock);
@@ -2054,6 +2061,7 @@ static int bnx2fc_create(struct net_device *netdev, enum fip_state fip_mode)
 ifput_err:
 	bnx2fc_net_cleanup(interface);
 	bnx2fc_interface_put(interface);
+	goto mod_err;
 netdev_err:
 	module_put(THIS_MODULE);
 mod_err:
......
@@ -1312,14 +1312,18 @@ int bnx2i_send_fw_iscsi_init_msg(struct bnx2i_hba *hba)
 		ISCSI_KCQE_COMPLETION_STATUS_PROTOCOL_ERR_EXP_DATASN) |
 		/* EMC */
 		(1ULL << ISCSI_KCQE_COMPLETION_STATUS_PROTOCOL_ERR_LUN));

-	if (error_mask1)
+	if (error_mask1) {
 		iscsi_init2.error_bit_map[0] = error_mask1;
-	else
+		mask64 &= (u32)(~mask64);
+		mask64 |= error_mask1;
+	} else
 		iscsi_init2.error_bit_map[0] = (u32) mask64;
-	if (error_mask2)
+
+	if (error_mask2) {
 		iscsi_init2.error_bit_map[1] = error_mask2;
-	else
+		mask64 &= 0xffffffff;
+		mask64 |= ((u64)error_mask2 << 32);
+	} else
 		iscsi_init2.error_bit_map[1] = (u32) (mask64 >> 32);

 	iscsi_error_mask = mask64;
......
@@ -49,11 +49,11 @@ module_param(en_tcp_dack, int, 0664);
 MODULE_PARM_DESC(en_tcp_dack, "Enable TCP Delayed ACK");

 unsigned int error_mask1 = 0x00;
-module_param(error_mask1, int, 0664);
+module_param(error_mask1, uint, 0664);
 MODULE_PARM_DESC(error_mask1, "Config FW iSCSI Error Mask #1");

 unsigned int error_mask2 = 0x00;
-module_param(error_mask2, int, 0664);
+module_param(error_mask2, uint, 0664);
 MODULE_PARM_DESC(error_mask2, "Config FW iSCSI Error Mask #2");

 unsigned int sq_size;
@@ -393,8 +393,9 @@ static void bnx2i_percpu_thread_create(unsigned int cpu)
 	p = &per_cpu(bnx2i_percpu, cpu);

-	thread = kthread_create(bnx2i_percpu_io_thread, (void *)p,
-				"bnx2i_thread/%d", cpu);
+	thread = kthread_create_on_node(bnx2i_percpu_io_thread, (void *)p,
+					cpu_to_node(cpu),
+					"bnx2i_thread/%d", cpu);
 	/* bind thread to the cpu */
 	if (likely(!IS_ERR(thread))) {
 		kthread_bind(thread, cpu);
......
@@ -2147,11 +2147,10 @@ int cxgbi_set_conn_param(struct iscsi_cls_conn *cls_conn,
 			enum iscsi_param param, char *buf, int buflen)
 {
 	struct iscsi_conn *conn = cls_conn->dd_data;
-	struct iscsi_session *session = conn->session;
 	struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
 	struct cxgbi_conn *cconn = tcp_conn->dd_data;
 	struct cxgbi_sock *csk = cconn->cep->csk;
-	int value, err = 0;
+	int err;

 	log_debug(1 << CXGBI_DBG_ISCSI,
 		"cls_conn 0x%p, param %d, buf(%d) %s.\n",
@@ -2173,15 +2172,7 @@ int cxgbi_set_conn_param(struct iscsi_cls_conn *cls_conn,
 				conn->datadgst_en, 0);
 		break;
 	case ISCSI_PARAM_MAX_R2T:
-		sscanf(buf, "%d", &value);
-		if (value <= 0 || !is_power_of_2(value))
-			return -EINVAL;
-		if (session->max_r2t == value)
-			break;
-		iscsi_tcp_r2tpool_free(session);
-		err = iscsi_set_param(cls_conn, param, buf, buflen);
-		if (!err && iscsi_tcp_r2tpool_alloc(session))
-			return -ENOMEM;
+		return iscsi_tcp_set_max_r2t(conn, buf);
 	case ISCSI_PARAM_MAX_RECV_DLENGTH:
 		err = iscsi_set_param(cls_conn, param, buf, buflen);
 		if (!err)
......
@@ -168,6 +168,14 @@ static struct fc_function_template fcoe_nport_fc_functions = {
 	.show_host_supported_fc4s = 1,
 	.show_host_active_fc4s = 1,
 	.show_host_maxframe_size = 1,
+	.show_host_serial_number = 1,
+	.show_host_manufacturer = 1,
+	.show_host_model = 1,
+	.show_host_model_description = 1,
+	.show_host_hardware_version = 1,
+	.show_host_driver_version = 1,
+	.show_host_firmware_version = 1,
+	.show_host_optionrom_version = 1,

 	.show_host_port_id = 1,
 	.show_host_supported_speeds = 1,
@@ -208,6 +216,14 @@ static struct fc_function_template fcoe_vport_fc_functions = {
 	.show_host_supported_fc4s = 1,
 	.show_host_active_fc4s = 1,
 	.show_host_maxframe_size = 1,
+	.show_host_serial_number = 1,
+	.show_host_manufacturer = 1,
+	.show_host_model = 1,
+	.show_host_model_description = 1,
+	.show_host_hardware_version = 1,
+	.show_host_driver_version = 1,
+	.show_host_firmware_version = 1,
+	.show_host_optionrom_version = 1,

 	.show_host_port_id = 1,
 	.show_host_supported_speeds = 1,
@@ -364,11 +380,10 @@ static struct fcoe_interface *fcoe_interface_create(struct net_device *netdev,
 	if (!fcoe) {
 		FCOE_NETDEV_DBG(netdev, "Could not allocate fcoe structure\n");
 		fcoe = ERR_PTR(-ENOMEM);
-		goto out_nomod;
+		goto out_putmod;
 	}

 	dev_hold(netdev);
-	kref_init(&fcoe->kref);

 	/*
 	 * Initialize FIP.
@@ -384,53 +399,17 @@ static struct fcoe_interface *fcoe_interface_create(struct net_device *netdev,
 		kfree(fcoe);
 		dev_put(netdev);
 		fcoe = ERR_PTR(err);
-		goto out_nomod;
+		goto out_putmod;
 	}

 	goto out;

-out_nomod:
+out_putmod:
 	module_put(THIS_MODULE);
 out:
 	return fcoe;
 }
-/**
- * fcoe_interface_release() - fcoe_port kref release function
- * @kref: Embedded reference count in an fcoe_interface struct
- */
-static void fcoe_interface_release(struct kref *kref)
-{
-	struct fcoe_interface *fcoe;
-	struct net_device *netdev;
-
-	fcoe = container_of(kref, struct fcoe_interface, kref);
-	netdev = fcoe->netdev;
-	/* tear-down the FCoE controller */
-	fcoe_ctlr_destroy(&fcoe->ctlr);
-	kfree(fcoe);
-	dev_put(netdev);
-	module_put(THIS_MODULE);
-}
-
-/**
- * fcoe_interface_get() - Get a reference to a FCoE interface
- * @fcoe: The FCoE interface to be held
- */
-static inline void fcoe_interface_get(struct fcoe_interface *fcoe)
-{
-	kref_get(&fcoe->kref);
-}
-
-/**
- * fcoe_interface_put() - Put a reference to a FCoE interface
- * @fcoe: The FCoE interface to be released
- */
-static inline void fcoe_interface_put(struct fcoe_interface *fcoe)
-{
-	kref_put(&fcoe->kref, fcoe_interface_release);
-}
-
 /**
  * fcoe_interface_cleanup() - Clean up a FCoE interface
  * @fcoe: The FCoE interface to be cleaned up
@@ -478,7 +457,11 @@ static void fcoe_interface_cleanup(struct fcoe_interface *fcoe)
 	rtnl_unlock();

-	/* Release the self-reference taken during fcoe_interface_create() */
-	fcoe_interface_put(fcoe);
+	/* tear-down the FCoE controller */
+	fcoe_ctlr_destroy(fip);
+	kfree(fcoe);
+	dev_put(netdev);
+	module_put(THIS_MODULE);
 }
 /**
@@ -734,6 +717,85 @@ static int fcoe_shost_config(struct fc_lport *lport, struct device *dev)
 	return 0;
 }

+/**
+ * fcoe_fdmi_info() - Get FDMI related info from net devive for SW FCoE
+ * @lport:  The local port that is associated with the net device
+ * @netdev: The associated net device
+ *
+ * Must be called after fcoe_shost_config() as it will use local port mutex
+ *
+ */
+static void fcoe_fdmi_info(struct fc_lport *lport, struct net_device *netdev)
+{
+	struct fcoe_interface *fcoe;
+	struct fcoe_port *port;
+	struct net_device *realdev;
+	int rc;
+	struct netdev_fcoe_hbainfo fdmi;
+
+	port = lport_priv(lport);
+	fcoe = port->priv;
+	realdev = fcoe->realdev;
+
+	if (!realdev)
+		return;
+
+	/* No FDMI state m/c for NPIV ports */
+	if (lport->vport)
+		return;
+
+	if (realdev->netdev_ops->ndo_fcoe_get_hbainfo) {
+		memset(&fdmi, 0, sizeof(fdmi));
+		rc = realdev->netdev_ops->ndo_fcoe_get_hbainfo(realdev,
+							       &fdmi);
+		if (rc) {
+			printk(KERN_INFO "fcoe: Failed to retrieve FDMI "
+					"information from netdev.\n");
+			return;
+		}
+
+		snprintf(fc_host_serial_number(lport->host),
+			 FC_SERIAL_NUMBER_SIZE,
+			 "%s",
+			 fdmi.serial_number);
+		snprintf(fc_host_manufacturer(lport->host),
+			 FC_SERIAL_NUMBER_SIZE,
+			 "%s",
+			 fdmi.manufacturer);
+		snprintf(fc_host_model(lport->host),
+			 FC_SYMBOLIC_NAME_SIZE,
+			 "%s",
+			 fdmi.model);
+		snprintf(fc_host_model_description(lport->host),
+			 FC_SYMBOLIC_NAME_SIZE,
+			 "%s",
+			 fdmi.model_description);
+		snprintf(fc_host_hardware_version(lport->host),
+			 FC_VERSION_STRING_SIZE,
+			 "%s",
+			 fdmi.hardware_version);
+		snprintf(fc_host_driver_version(lport->host),
+			 FC_VERSION_STRING_SIZE,
+			 "%s",
+			 fdmi.driver_version);
+		snprintf(fc_host_optionrom_version(lport->host),
+			 FC_VERSION_STRING_SIZE,
+			 "%s",
+			 fdmi.optionrom_version);
+		snprintf(fc_host_firmware_version(lport->host),
+			 FC_VERSION_STRING_SIZE,
+			 "%s",
+			 fdmi.firmware_version);
+
+		/* Enable FDMI lport states */
+		lport->fdmi_enabled = 1;
+	} else {
+		lport->fdmi_enabled = 0;
+		printk(KERN_INFO "fcoe: No FDMI support.\n");
+	}
+}
+
 /**
  * fcoe_oem_match() - The match routine for the offloaded exchange manager
  * @fp: The I/O frame
@@ -881,9 +943,6 @@ static void fcoe_if_destroy(struct fc_lport *lport)
 	dev_uc_del(netdev, port->data_src_addr);
 	rtnl_unlock();

-	/* Release reference held in fcoe_if_create() */
-	fcoe_interface_put(fcoe);
-
 	/* Free queued packets for the per-CPU receive threads */
 	fcoe_percpu_clean(lport);
@@ -1047,6 +1106,9 @@ static struct fc_lport *fcoe_if_create(struct fcoe_interface *fcoe,
 		goto out_lp_destroy;
 	}

+	/* Initialized FDMI information */
+	fcoe_fdmi_info(lport, netdev);
+
 	/*
 	 * fcoe_em_alloc() and fcoe_hostlist_add() both
 	 * need to be atomic with respect to other changes to the
@@ -1070,7 +1132,6 @@ static struct fc_lport *fcoe_if_create(struct fcoe_interface *fcoe,
 		goto out_lp_destroy;
 	}

-	fcoe_interface_get(fcoe);
 	return lport;

 out_lp_destroy:
@@ -2009,20 +2070,13 @@ static void fcoe_destroy_work(struct work_struct *work)
 {
 	struct fcoe_port *port;
 	struct fcoe_interface *fcoe;
-	int npiv = 0;

 	port = container_of(work, struct fcoe_port, destroy_work);
 	mutex_lock(&fcoe_config_mutex);

-	/* set if this is an NPIV port */
-	npiv = port->lport->vport ? 1 : 0;
-
 	fcoe = port->priv;
 	fcoe_if_destroy(port->lport);
-
-	/* Do not tear down the fcoe interface for NPIV port */
-	if (!npiv)
-		fcoe_interface_cleanup(fcoe);
+	fcoe_interface_cleanup(fcoe);

 	mutex_unlock(&fcoe_config_mutex);
 }
@@ -2593,12 +2647,15 @@ static int fcoe_vport_destroy(struct fc_vport *vport)
 	struct Scsi_Host *shost = vport_to_shost(vport);
 	struct fc_lport *n_port = shost_priv(shost);
 	struct fc_lport *vn_port = vport->dd_data;
-	struct fcoe_port *port = lport_priv(vn_port);

 	mutex_lock(&n_port->lp_mutex);
 	list_del(&vn_port->list);
 	mutex_unlock(&n_port->lp_mutex);
-	queue_work(fcoe_wq, &port->destroy_work);
+
+	mutex_lock(&fcoe_config_mutex);
+	fcoe_if_destroy(vn_port);
+	mutex_unlock(&fcoe_config_mutex);
+
 	return 0;
 }
......
@@ -71,8 +71,6 @@ do {							\
  * @ctlr:	      The FCoE controller (for FIP)
  * @oem:	      The offload exchange manager for all local port
  *		      instances associated with this port
- * @kref:	      The kernel reference
- *
  * This structure is 1:1 with a net devive.
  */
 struct fcoe_interface {
@@ -83,7 +81,6 @@ struct fcoe_interface {
 	struct packet_type fip_packet_type;
 	struct fcoe_ctlr   ctlr;
 	struct fc_exch_mgr *oem;
-	struct kref	   kref;
 };

 #define fcoe_from_ctlr(fip) container_of(fip, struct fcoe_interface, ctlr)
......
@@ -619,8 +619,8 @@ static int libfcoe_device_notification(struct notifier_block *notifier,
 	switch (event) {
 	case NETDEV_UNREGISTER:
-		printk(KERN_ERR "libfcoe_device_notification: NETDEV_UNREGISTER %s\n",
-		       netdev->name);
+		LIBFCOE_TRANSPORT_DBG("NETDEV_UNREGISTER %s\n",
+				      netdev->name);
 		fcoe_del_netdev_mapping(netdev);
 		break;
 	}
......
(This diff has been collapsed.)
@@ -58,7 +58,6 @@ struct ctlr_info {
 	unsigned long paddr;
 	int	nr_cmds; /* Number of commands allowed on this controller */
 	struct CfgTable __iomem *cfgtable;
-	int	max_sg_entries;
 	int	interrupts_enabled;
 	int	major;
 	int	max_commands;
@@ -317,7 +316,7 @@ static unsigned long SA5_completed(struct ctlr_info *h)
 		dev_dbg(&h->pdev->dev, "Read %lx back from board\n",
 			register_value);
 	else
-		dev_dbg(&h->pdev->dev, "hpsa: FIFO Empty read\n");
+		dev_dbg(&h->pdev->dev, "FIFO Empty read\n");
 #endif
 	return register_value;
......
...@@ -23,7 +23,7 @@ ...@@ -23,7 +23,7 @@
/* general boundary definitions */ /* general boundary definitions */
#define SENSEINFOBYTES 32 /* may vary between hbas */ #define SENSEINFOBYTES 32 /* may vary between hbas */
#define MAXSGENTRIES 32 #define SG_ENTRIES_IN_CMD 32 /* Max SG entries excluding chain blocks */
#define HPSA_SG_CHAIN 0x80000000 #define HPSA_SG_CHAIN 0x80000000
#define MAXREPLYQS 256 #define MAXREPLYQS 256
...@@ -122,12 +122,11 @@ union u64bit { ...@@ -122,12 +122,11 @@ union u64bit {
}; };
/* FIXME this is a per controller value (barf!) */ /* FIXME this is a per controller value (barf!) */
#define HPSA_MAX_TARGETS_PER_CTLR 16
#define HPSA_MAX_LUN 1024 #define HPSA_MAX_LUN 1024
#define HPSA_MAX_PHYS_LUN 1024 #define HPSA_MAX_PHYS_LUN 1024
#define MAX_MSA2XXX_ENCLOSURES 32 #define MAX_EXT_TARGETS 32
#define HPSA_MAX_DEVICES (HPSA_MAX_PHYS_LUN + HPSA_MAX_LUN + \ #define HPSA_MAX_DEVICES (HPSA_MAX_PHYS_LUN + HPSA_MAX_LUN + \
MAX_MSA2XXX_ENCLOSURES + 1) /* + 1 is for the controller itself */ MAX_EXT_TARGETS + 1) /* + 1 is for the controller itself */
/* SCSI-3 Commands */ /* SCSI-3 Commands */
#pragma pack(1) #pragma pack(1)
...@@ -282,7 +281,7 @@ struct CommandList { ...@@ -282,7 +281,7 @@ struct CommandList {
struct CommandListHeader Header; struct CommandListHeader Header;
struct RequestBlock Request; struct RequestBlock Request;
struct ErrDescriptor ErrDesc; struct ErrDescriptor ErrDesc;
struct SGDescriptor SG[MAXSGENTRIES]; struct SGDescriptor SG[SG_ENTRIES_IN_CMD];
/* information associated with the command */ /* information associated with the command */
u32 busaddr; /* physical addr of this record */ u32 busaddr; /* physical addr of this record */
struct ErrorInfo *err_info; /* pointer to the allocated mem */ struct ErrorInfo *err_info; /* pointer to the allocated mem */
......
...@@ -183,7 +183,7 @@ static const struct ipr_chip_t ipr_chip[] = { ...@@ -183,7 +183,7 @@ static const struct ipr_chip_t ipr_chip[] = {
{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_SNIPE, IPR_USE_LSI, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[1] }, { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_SNIPE, IPR_USE_LSI, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[1] },
{ PCI_VENDOR_ID_ADAPTEC2, PCI_DEVICE_ID_ADAPTEC2_SCAMP, IPR_USE_LSI, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[1] }, { PCI_VENDOR_ID_ADAPTEC2, PCI_DEVICE_ID_ADAPTEC2_SCAMP, IPR_USE_LSI, IPR_SIS32, IPR_PCI_CFG, &ipr_chip_cfg[1] },
{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_FPGA_E2, IPR_USE_MSI, IPR_SIS64, IPR_MMIO, &ipr_chip_cfg[2] }, { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_FPGA_E2, IPR_USE_MSI, IPR_SIS64, IPR_MMIO, &ipr_chip_cfg[2] },
{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_ASIC_E2, IPR_USE_MSI, IPR_SIS64, IPR_MMIO, &ipr_chip_cfg[2] } { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROCODILE, IPR_USE_MSI, IPR_SIS64, IPR_MMIO, &ipr_chip_cfg[2] }
}; };
static int ipr_max_bus_speeds [] = { static int ipr_max_bus_speeds [] = {
...@@ -9191,15 +9191,15 @@ static struct pci_device_id ipr_pci_table[] __devinitdata = { ...@@ -9191,15 +9191,15 @@ static struct pci_device_id ipr_pci_table[] __devinitdata = {
PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57C3, 0, 0, 0 }, PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57C3, 0, 0, 0 },
{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_FPGA_E2, { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_FPGA_E2,
PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57C4, 0, 0, 0 }, PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57C4, 0, 0, 0 },
{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_ASIC_E2, { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROCODILE,
PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57B4, 0, 0, 0 }, PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57B4, 0, 0, 0 },
{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_ASIC_E2, { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROCODILE,
PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57B1, 0, 0, 0 }, PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57B1, 0, 0, 0 },
{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_ASIC_E2, { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROCODILE,
PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57C6, 0, 0, 0 }, PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57C6, 0, 0, 0 },
{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_ASIC_E2, { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROCODILE,
PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_575D, 0, 0, 0 }, PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57C8, 0, 0, 0 },
{ PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROC_ASIC_E2, { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CROCODILE,
PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57CE, 0, 0, 0 }, PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_57CE, 0, 0, 0 },
{ } { }
}; };
......
...@@ -58,7 +58,7 @@ ...@@ -58,7 +58,7 @@
#define PCI_DEVICE_ID_IBM_OBSIDIAN_E 0x0339 #define PCI_DEVICE_ID_IBM_OBSIDIAN_E 0x0339
#define PCI_DEVICE_ID_IBM_CROC_FPGA_E2 0x033D #define PCI_DEVICE_ID_IBM_CROC_FPGA_E2 0x033D
#define PCI_DEVICE_ID_IBM_CROC_ASIC_E2 0x034A #define PCI_DEVICE_ID_IBM_CROCODILE 0x034A
#define IPR_SUBS_DEV_ID_2780 0x0264 #define IPR_SUBS_DEV_ID_2780 0x0264
#define IPR_SUBS_DEV_ID_5702 0x0266 #define IPR_SUBS_DEV_ID_5702 0x0266
...@@ -92,7 +92,7 @@ ...@@ -92,7 +92,7 @@
#define IPR_SUBS_DEV_ID_57B1 0x0355 #define IPR_SUBS_DEV_ID_57B1 0x0355
#define IPR_SUBS_DEV_ID_574D 0x0356 #define IPR_SUBS_DEV_ID_574D 0x0356
#define IPR_SUBS_DEV_ID_575D 0x035D #define IPR_SUBS_DEV_ID_57C8 0x035D
#define IPR_NAME "ipr" #define IPR_NAME "ipr"
......
...@@ -649,15 +649,13 @@ static void isci_host_start_complete(struct isci_host *ihost, enum sci_status co ...@@ -649,15 +649,13 @@ static void isci_host_start_complete(struct isci_host *ihost, enum sci_status co
int isci_host_scan_finished(struct Scsi_Host *shost, unsigned long time) int isci_host_scan_finished(struct Scsi_Host *shost, unsigned long time)
{ {
struct isci_host *ihost = SHOST_TO_SAS_HA(shost)->lldd_ha; struct sas_ha_struct *ha = SHOST_TO_SAS_HA(shost);
struct isci_host *ihost = ha->lldd_ha;
if (test_bit(IHOST_START_PENDING, &ihost->flags)) if (test_bit(IHOST_START_PENDING, &ihost->flags))
return 0; return 0;
/* todo: use sas_flush_discovery once it is upstream */ sas_drain_work(ha);
scsi_flush_work(shost);
scsi_flush_work(shost);
dev_dbg(&ihost->pdev->dev, dev_dbg(&ihost->pdev->dev,
"%s: ihost->status = %d, time = %ld\n", "%s: ihost->status = %d, time = %ld\n",
...@@ -1490,6 +1488,15 @@ sci_controller_set_interrupt_coalescence(struct isci_host *ihost, ...@@ -1490,6 +1488,15 @@ sci_controller_set_interrupt_coalescence(struct isci_host *ihost,
static void sci_controller_ready_state_enter(struct sci_base_state_machine *sm) static void sci_controller_ready_state_enter(struct sci_base_state_machine *sm)
{ {
struct isci_host *ihost = container_of(sm, typeof(*ihost), sm); struct isci_host *ihost = container_of(sm, typeof(*ihost), sm);
u32 val;
/* enable clock gating for power control of the scu unit */
val = readl(&ihost->smu_registers->clock_gating_control);
val &= ~(SMU_CGUCR_GEN_BIT(REGCLK_ENABLE) |
SMU_CGUCR_GEN_BIT(TXCLK_ENABLE) |
SMU_CGUCR_GEN_BIT(XCLK_ENABLE));
val |= SMU_CGUCR_GEN_BIT(IDLE_ENABLE);
writel(val, &ihost->smu_registers->clock_gating_control);
/* set the default interrupt coalescence number and timeout value. */ /* set the default interrupt coalescence number and timeout value. */
sci_controller_set_interrupt_coalescence(ihost, 0, 0); sci_controller_set_interrupt_coalescence(ihost, 0, 0);
......
...@@ -187,6 +187,7 @@ struct isci_host { ...@@ -187,6 +187,7 @@ struct isci_host {
int id; /* unique within a given pci device */ int id; /* unique within a given pci device */
struct isci_phy phys[SCI_MAX_PHYS]; struct isci_phy phys[SCI_MAX_PHYS];
struct isci_port ports[SCI_MAX_PORTS + 1]; /* includes dummy port */ struct isci_port ports[SCI_MAX_PORTS + 1]; /* includes dummy port */
struct asd_sas_port sas_ports[SCI_MAX_PORTS];
struct sas_ha_struct sas_ha; struct sas_ha_struct sas_ha;
spinlock_t state_lock; spinlock_t state_lock;
...@@ -393,24 +394,6 @@ static inline int sci_remote_device_node_count(struct isci_remote_device *idev) ...@@ -393,24 +394,6 @@ static inline int sci_remote_device_node_count(struct isci_remote_device *idev)
#define sci_controller_clear_invalid_phy(controller, phy) \ #define sci_controller_clear_invalid_phy(controller, phy) \
((controller)->invalid_phy_mask &= ~(1 << (phy)->phy_index)) ((controller)->invalid_phy_mask &= ~(1 << (phy)->phy_index))
static inline struct device *sciphy_to_dev(struct isci_phy *iphy)
{
if (!iphy || !iphy->isci_port || !iphy->isci_port->isci_host)
return NULL;
return &iphy->isci_port->isci_host->pdev->dev;
}
static inline struct device *sciport_to_dev(struct isci_port *iport)
{
if (!iport || !iport->isci_host)
return NULL;
return &iport->isci_host->pdev->dev;
}
static inline struct device *scirdev_to_dev(struct isci_remote_device *idev) static inline struct device *scirdev_to_dev(struct isci_remote_device *idev)
{ {
if (!idev || !idev->isci_port || !idev->isci_port->isci_host) if (!idev || !idev->isci_port || !idev->isci_port->isci_host)
......
...@@ -60,6 +60,7 @@ ...@@ -60,6 +60,7 @@
#include <linux/efi.h> #include <linux/efi.h>
#include <asm/string.h> #include <asm/string.h>
#include <scsi/scsi_host.h> #include <scsi/scsi_host.h>
#include "host.h"
#include "isci.h" #include "isci.h"
#include "task.h" #include "task.h"
#include "probe_roms.h" #include "probe_roms.h"
...@@ -154,7 +155,6 @@ static struct scsi_host_template isci_sht = { ...@@ -154,7 +155,6 @@ static struct scsi_host_template isci_sht = {
.queuecommand = sas_queuecommand, .queuecommand = sas_queuecommand,
.target_alloc = sas_target_alloc, .target_alloc = sas_target_alloc,
.slave_configure = sas_slave_configure, .slave_configure = sas_slave_configure,
.slave_destroy = sas_slave_destroy,
.scan_finished = isci_host_scan_finished, .scan_finished = isci_host_scan_finished,
.scan_start = isci_host_scan_start, .scan_start = isci_host_scan_start,
.change_queue_depth = sas_change_queue_depth, .change_queue_depth = sas_change_queue_depth,
...@@ -166,9 +166,6 @@ static struct scsi_host_template isci_sht = { ...@@ -166,9 +166,6 @@ static struct scsi_host_template isci_sht = {
.sg_tablesize = SG_ALL, .sg_tablesize = SG_ALL,
.max_sectors = SCSI_DEFAULT_MAX_SECTORS, .max_sectors = SCSI_DEFAULT_MAX_SECTORS,
.use_clustering = ENABLE_CLUSTERING, .use_clustering = ENABLE_CLUSTERING,
.eh_device_reset_handler = sas_eh_device_reset_handler,
.eh_bus_reset_handler = isci_bus_reset_handler,
.slave_alloc = sas_slave_alloc,
.target_destroy = sas_target_destroy, .target_destroy = sas_target_destroy,
.ioctl = sas_ioctl, .ioctl = sas_ioctl,
.shost_attrs = isci_host_attrs, .shost_attrs = isci_host_attrs,
...@@ -194,6 +191,9 @@ static struct sas_domain_function_template isci_transport_ops = { ...@@ -194,6 +191,9 @@ static struct sas_domain_function_template isci_transport_ops = {
.lldd_lu_reset = isci_task_lu_reset, .lldd_lu_reset = isci_task_lu_reset,
.lldd_query_task = isci_task_query_task, .lldd_query_task = isci_task_query_task,
/* ata recovery called from ata-eh */
.lldd_ata_check_ready = isci_ata_check_ready,
/* Port and Adapter management */ /* Port and Adapter management */
.lldd_clear_nexus_port = isci_task_clear_nexus_port, .lldd_clear_nexus_port = isci_task_clear_nexus_port,
.lldd_clear_nexus_ha = isci_task_clear_nexus_ha, .lldd_clear_nexus_ha = isci_task_clear_nexus_ha,
...@@ -242,18 +242,13 @@ static int isci_register_sas_ha(struct isci_host *isci_host) ...@@ -242,18 +242,13 @@ static int isci_register_sas_ha(struct isci_host *isci_host)
if (!sas_ports) if (!sas_ports)
return -ENOMEM; return -ENOMEM;
/*----------------- Libsas Initialization Stuff----------------------
* Set various fields in the sas_ha struct:
*/
sas_ha->sas_ha_name = DRV_NAME; sas_ha->sas_ha_name = DRV_NAME;
sas_ha->lldd_module = THIS_MODULE; sas_ha->lldd_module = THIS_MODULE;
sas_ha->sas_addr = &isci_host->phys[0].sas_addr[0]; sas_ha->sas_addr = &isci_host->phys[0].sas_addr[0];
/* set the array of phy and port structs. */
for (i = 0; i < SCI_MAX_PHYS; i++) { for (i = 0; i < SCI_MAX_PHYS; i++) {
sas_phys[i] = &isci_host->phys[i].sas_phy; sas_phys[i] = &isci_host->phys[i].sas_phy;
sas_ports[i] = &isci_host->ports[i].sas_port; sas_ports[i] = &isci_host->sas_ports[i];
} }
sas_ha->sas_phy = sas_phys; sas_ha->sas_phy = sas_phys;
...@@ -528,6 +523,13 @@ static int __devinit isci_pci_probe(struct pci_dev *pdev, const struct pci_devic ...@@ -528,6 +523,13 @@ static int __devinit isci_pci_probe(struct pci_dev *pdev, const struct pci_devic
goto err_host_alloc; goto err_host_alloc;
} }
pci_info->hosts[i] = h; pci_info->hosts[i] = h;
/* turn on DIF support */
scsi_host_set_prot(h->shost,
SHOST_DIF_TYPE1_PROTECTION |
SHOST_DIF_TYPE2_PROTECTION |
SHOST_DIF_TYPE3_PROTECTION);
scsi_host_set_guard(h->shost, SHOST_DIX_GUARD_CRC);
} }
err = isci_setup_interrupts(pdev); err = isci_setup_interrupts(pdev);
...@@ -551,9 +553,9 @@ static void __devexit isci_pci_remove(struct pci_dev *pdev) ...@@ -551,9 +553,9 @@ static void __devexit isci_pci_remove(struct pci_dev *pdev)
int i; int i;
for_each_isci_host(i, ihost, pdev) { for_each_isci_host(i, ihost, pdev) {
wait_for_start(ihost);
isci_unregister(ihost); isci_unregister(ihost);
isci_host_deinit(ihost); isci_host_deinit(ihost);
sci_controller_disable_interrupts(ihost);
} }
} }
......
...@@ -59,6 +59,16 @@ ...@@ -59,6 +59,16 @@
#include "scu_event_codes.h" #include "scu_event_codes.h"
#include "probe_roms.h" #include "probe_roms.h"
#undef C
#define C(a) (#a)
static const char *phy_state_name(enum sci_phy_states state)
{
static const char * const strings[] = PHY_STATES;
return strings[state];
}
#undef C
/* Maximum arbitration wait time in micro-seconds */ /* Maximum arbitration wait time in micro-seconds */
#define SCIC_SDS_PHY_MAX_ARBITRATION_WAIT_TIME (700) #define SCIC_SDS_PHY_MAX_ARBITRATION_WAIT_TIME (700)
...@@ -67,6 +77,19 @@ enum sas_linkrate sci_phy_linkrate(struct isci_phy *iphy) ...@@ -67,6 +77,19 @@ enum sas_linkrate sci_phy_linkrate(struct isci_phy *iphy)
return iphy->max_negotiated_speed; return iphy->max_negotiated_speed;
} }
static struct isci_host *phy_to_host(struct isci_phy *iphy)
{
struct isci_phy *table = iphy - iphy->phy_index;
struct isci_host *ihost = container_of(table, typeof(*ihost), phys[0]);
return ihost;
}
static struct device *sciphy_to_dev(struct isci_phy *iphy)
{
return &phy_to_host(iphy)->pdev->dev;
}
static enum sci_status static enum sci_status
sci_phy_transport_layer_initialization(struct isci_phy *iphy, sci_phy_transport_layer_initialization(struct isci_phy *iphy,
struct scu_transport_layer_registers __iomem *reg) struct scu_transport_layer_registers __iomem *reg)
...@@ -446,8 +469,8 @@ enum sci_status sci_phy_start(struct isci_phy *iphy) ...@@ -446,8 +469,8 @@ enum sci_status sci_phy_start(struct isci_phy *iphy)
enum sci_phy_states state = iphy->sm.current_state_id; enum sci_phy_states state = iphy->sm.current_state_id;
if (state != SCI_PHY_STOPPED) { if (state != SCI_PHY_STOPPED) {
dev_dbg(sciphy_to_dev(iphy), dev_dbg(sciphy_to_dev(iphy), "%s: in wrong state: %s\n",
"%s: in wrong state: %d\n", __func__, state); __func__, phy_state_name(state));
return SCI_FAILURE_INVALID_STATE; return SCI_FAILURE_INVALID_STATE;
} }
...@@ -472,8 +495,8 @@ enum sci_status sci_phy_stop(struct isci_phy *iphy) ...@@ -472,8 +495,8 @@ enum sci_status sci_phy_stop(struct isci_phy *iphy)
case SCI_PHY_READY: case SCI_PHY_READY:
break; break;
default: default:
dev_dbg(sciphy_to_dev(iphy), dev_dbg(sciphy_to_dev(iphy), "%s: in wrong state: %s\n",
"%s: in wrong state: %d\n", __func__, state); __func__, phy_state_name(state));
return SCI_FAILURE_INVALID_STATE; return SCI_FAILURE_INVALID_STATE;
} }
...@@ -486,8 +509,8 @@ enum sci_status sci_phy_reset(struct isci_phy *iphy) ...@@ -486,8 +509,8 @@ enum sci_status sci_phy_reset(struct isci_phy *iphy)
enum sci_phy_states state = iphy->sm.current_state_id; enum sci_phy_states state = iphy->sm.current_state_id;
if (state != SCI_PHY_READY) { if (state != SCI_PHY_READY) {
dev_dbg(sciphy_to_dev(iphy), dev_dbg(sciphy_to_dev(iphy), "%s: in wrong state: %s\n",
"%s: in wrong state: %d\n", __func__, state); __func__, phy_state_name(state));
return SCI_FAILURE_INVALID_STATE; return SCI_FAILURE_INVALID_STATE;
} }
...@@ -536,8 +559,8 @@ enum sci_status sci_phy_consume_power_handler(struct isci_phy *iphy) ...@@ -536,8 +559,8 @@ enum sci_status sci_phy_consume_power_handler(struct isci_phy *iphy)
return SCI_SUCCESS; return SCI_SUCCESS;
} }
default: default:
dev_dbg(sciphy_to_dev(iphy), dev_dbg(sciphy_to_dev(iphy), "%s: in wrong state: %s\n",
"%s: in wrong state: %d\n", __func__, state); __func__, phy_state_name(state));
return SCI_FAILURE_INVALID_STATE; return SCI_FAILURE_INVALID_STATE;
} }
} }
...@@ -591,6 +614,60 @@ static void sci_phy_complete_link_training(struct isci_phy *iphy, ...@@ -591,6 +614,60 @@ static void sci_phy_complete_link_training(struct isci_phy *iphy,
sci_change_state(&iphy->sm, next_state); sci_change_state(&iphy->sm, next_state);
} }
static const char *phy_event_name(u32 event_code)
{
switch (scu_get_event_code(event_code)) {
case SCU_EVENT_PORT_SELECTOR_DETECTED:
return "port selector";
case SCU_EVENT_SENT_PORT_SELECTION:
return "port selection";
case SCU_EVENT_HARD_RESET_TRANSMITTED:
return "tx hard reset";
case SCU_EVENT_HARD_RESET_RECEIVED:
return "rx hard reset";
case SCU_EVENT_RECEIVED_IDENTIFY_TIMEOUT:
return "identify timeout";
case SCU_EVENT_LINK_FAILURE:
return "link fail";
case SCU_EVENT_SATA_SPINUP_HOLD:
return "sata spinup hold";
case SCU_EVENT_SAS_15_SSC:
case SCU_EVENT_SAS_15:
return "sas 1.5";
case SCU_EVENT_SAS_30_SSC:
case SCU_EVENT_SAS_30:
return "sas 3.0";
case SCU_EVENT_SAS_60_SSC:
case SCU_EVENT_SAS_60:
return "sas 6.0";
case SCU_EVENT_SATA_15_SSC:
case SCU_EVENT_SATA_15:
return "sata 1.5";
case SCU_EVENT_SATA_30_SSC:
case SCU_EVENT_SATA_30:
return "sata 3.0";
case SCU_EVENT_SATA_60_SSC:
case SCU_EVENT_SATA_60:
return "sata 6.0";
case SCU_EVENT_SAS_PHY_DETECTED:
return "sas detect";
case SCU_EVENT_SATA_PHY_DETECTED:
return "sata detect";
default:
return "unknown";
}
}
#define phy_event_dbg(iphy, state, code) \
dev_dbg(sciphy_to_dev(iphy), "phy-%d:%d: %s event: %s (%x)\n", \
phy_to_host(iphy)->id, iphy->phy_index, \
phy_state_name(state), phy_event_name(code), code)
#define phy_event_warn(iphy, state, code) \
dev_warn(sciphy_to_dev(iphy), "phy-%d:%d: %s event: %s (%x)\n", \
phy_to_host(iphy)->id, iphy->phy_index, \
phy_state_name(state), phy_event_name(code), code)
enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code) enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
{ {
enum sci_phy_states state = iphy->sm.current_state_id; enum sci_phy_states state = iphy->sm.current_state_id;
...@@ -607,11 +684,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code) ...@@ -607,11 +684,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
iphy->is_in_link_training = true; iphy->is_in_link_training = true;
break; break;
default: default:
dev_dbg(sciphy_to_dev(iphy), phy_event_dbg(iphy, state, event_code);
"%s: PHY starting substate machine received "
"unexpected event_code %x\n",
__func__,
event_code);
return SCI_FAILURE; return SCI_FAILURE;
} }
return SCI_SUCCESS; return SCI_SUCCESS;
...@@ -648,11 +721,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code) ...@@ -648,11 +721,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
sci_change_state(&iphy->sm, SCI_PHY_STARTING); sci_change_state(&iphy->sm, SCI_PHY_STARTING);
break; break;
default: default:
dev_warn(sciphy_to_dev(iphy), phy_event_warn(iphy, state, event_code);
"%s: PHY starting substate machine received "
"unexpected event_code %x\n",
__func__, event_code);
return SCI_FAILURE; return SCI_FAILURE;
break; break;
} }
...@@ -677,10 +746,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code) ...@@ -677,10 +746,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
sci_change_state(&iphy->sm, SCI_PHY_STARTING); sci_change_state(&iphy->sm, SCI_PHY_STARTING);
break; break;
default: default:
dev_warn(sciphy_to_dev(iphy), phy_event_warn(iphy, state, event_code);
"%s: PHY starting substate machine received "
"unexpected event_code %x\n",
__func__, event_code);
return SCI_FAILURE; return SCI_FAILURE;
} }
return SCI_SUCCESS; return SCI_SUCCESS;
...@@ -691,11 +757,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code) ...@@ -691,11 +757,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
sci_change_state(&iphy->sm, SCI_PHY_STARTING); sci_change_state(&iphy->sm, SCI_PHY_STARTING);
break; break;
default: default:
dev_warn(sciphy_to_dev(iphy), phy_event_warn(iphy, state, event_code);
"%s: PHY starting substate machine received unexpected "
"event_code %x\n",
__func__,
event_code);
return SCI_FAILURE; return SCI_FAILURE;
} }
return SCI_SUCCESS; return SCI_SUCCESS;
...@@ -719,11 +781,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code) ...@@ -719,11 +781,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
break; break;
default: default:
dev_warn(sciphy_to_dev(iphy), phy_event_warn(iphy, state, event_code);
"%s: PHY starting substate machine received "
"unexpected event_code %x\n",
__func__, event_code);
return SCI_FAILURE; return SCI_FAILURE;
} }
return SCI_SUCCESS; return SCI_SUCCESS;
...@@ -751,12 +809,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code) ...@@ -751,12 +809,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
sci_phy_start_sas_link_training(iphy); sci_phy_start_sas_link_training(iphy);
break; break;
default: default:
dev_warn(sciphy_to_dev(iphy), phy_event_warn(iphy, state, event_code);
"%s: PHY starting substate machine received "
"unexpected event_code %x\n",
__func__,
event_code);
return SCI_FAILURE; return SCI_FAILURE;
} }
return SCI_SUCCESS; return SCI_SUCCESS;
...@@ -793,11 +846,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code) ...@@ -793,11 +846,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
sci_phy_start_sas_link_training(iphy); sci_phy_start_sas_link_training(iphy);
break; break;
default: default:
dev_warn(sciphy_to_dev(iphy), phy_event_warn(iphy, state, event_code);
"%s: PHY starting substate machine received "
"unexpected event_code %x\n",
__func__, event_code);
return SCI_FAILURE; return SCI_FAILURE;
} }
...@@ -815,12 +864,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code) ...@@ -815,12 +864,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
break; break;
default: default:
dev_warn(sciphy_to_dev(iphy), phy_event_warn(iphy, state, event_code);
"%s: PHY starting substate machine received "
"unexpected event_code %x\n",
__func__,
event_code);
return SCI_FAILURE; return SCI_FAILURE;
} }
return SCI_SUCCESS; return SCI_SUCCESS;
...@@ -838,10 +882,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code) ...@@ -838,10 +882,7 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
iphy->bcn_received_while_port_unassigned = true; iphy->bcn_received_while_port_unassigned = true;
break; break;
default: default:
dev_warn(sciphy_to_dev(iphy), phy_event_warn(iphy, state, event_code);
"%sP SCIC PHY 0x%p ready state machine received "
"unexpected event_code %x\n",
__func__, iphy, event_code);
return SCI_FAILURE_INVALID_STATE; return SCI_FAILURE_INVALID_STATE;
} }
return SCI_SUCCESS; return SCI_SUCCESS;
...@@ -852,18 +893,14 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code) ...@@ -852,18 +893,14 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
sci_change_state(&iphy->sm, SCI_PHY_STARTING); sci_change_state(&iphy->sm, SCI_PHY_STARTING);
break; break;
default: default:
dev_warn(sciphy_to_dev(iphy), phy_event_warn(iphy, state, event_code);
"%s: SCIC PHY 0x%p resetting state machine received "
"unexpected event_code %x\n",
__func__, iphy, event_code);
return SCI_FAILURE_INVALID_STATE; return SCI_FAILURE_INVALID_STATE;
break; break;
} }
return SCI_SUCCESS; return SCI_SUCCESS;
default: default:
dev_dbg(sciphy_to_dev(iphy), dev_dbg(sciphy_to_dev(iphy), "%s: in wrong state: %s\n",
"%s: in wrong state: %d\n", __func__, state); __func__, phy_state_name(state));
return SCI_FAILURE_INVALID_STATE; return SCI_FAILURE_INVALID_STATE;
} }
} }
...@@ -956,8 +993,8 @@ enum sci_status sci_phy_frame_handler(struct isci_phy *iphy, u32 frame_index) ...@@ -956,8 +993,8 @@ enum sci_status sci_phy_frame_handler(struct isci_phy *iphy, u32 frame_index)
return result; return result;
} }
default: default:
dev_dbg(sciphy_to_dev(iphy), dev_dbg(sciphy_to_dev(iphy), "%s: in wrong state: %s\n",
"%s: in wrong state: %d\n", __func__, state); __func__, phy_state_name(state));
return SCI_FAILURE_INVALID_STATE; return SCI_FAILURE_INVALID_STATE;
} }
...@@ -1299,7 +1336,6 @@ void isci_phy_init(struct isci_phy *iphy, struct isci_host *ihost, int index) ...@@ -1299,7 +1336,6 @@ void isci_phy_init(struct isci_phy *iphy, struct isci_host *ihost, int index)
sas_addr = cpu_to_be64(sci_sas_addr); sas_addr = cpu_to_be64(sci_sas_addr);
memcpy(iphy->sas_addr, &sas_addr, sizeof(sas_addr)); memcpy(iphy->sas_addr, &sas_addr, sizeof(sas_addr));
iphy->isci_port = NULL;
iphy->sas_phy.enabled = 0; iphy->sas_phy.enabled = 0;
iphy->sas_phy.id = index; iphy->sas_phy.id = index;
iphy->sas_phy.sas_addr = &iphy->sas_addr[0]; iphy->sas_phy.sas_addr = &iphy->sas_addr[0];
...@@ -1333,13 +1369,13 @@ int isci_phy_control(struct asd_sas_phy *sas_phy, ...@@ -1333,13 +1369,13 @@ int isci_phy_control(struct asd_sas_phy *sas_phy,
{ {
int ret = 0; int ret = 0;
struct isci_phy *iphy = sas_phy->lldd_phy; struct isci_phy *iphy = sas_phy->lldd_phy;
struct isci_port *iport = iphy->isci_port; struct asd_sas_port *port = sas_phy->port;
struct isci_host *ihost = sas_phy->ha->lldd_ha; struct isci_host *ihost = sas_phy->ha->lldd_ha;
unsigned long flags; unsigned long flags;
dev_dbg(&ihost->pdev->dev, dev_dbg(&ihost->pdev->dev,
"%s: phy %p; func %d; buf %p; isci phy %p, port %p\n", "%s: phy %p; func %d; buf %p; isci phy %p, port %p\n",
__func__, sas_phy, func, buf, iphy, iport); __func__, sas_phy, func, buf, iphy, port);
switch (func) { switch (func) {
case PHY_FUNC_DISABLE: case PHY_FUNC_DISABLE:
...@@ -1356,11 +1392,10 @@ int isci_phy_control(struct asd_sas_phy *sas_phy, ...@@ -1356,11 +1392,10 @@ int isci_phy_control(struct asd_sas_phy *sas_phy,
break; break;
case PHY_FUNC_HARD_RESET: case PHY_FUNC_HARD_RESET:
if (!iport) if (!port)
return -ENODEV; return -ENODEV;
/* Perform the port reset. */ ret = isci_port_perform_hard_reset(ihost, port->lldd_port, iphy);
ret = isci_port_perform_hard_reset(ihost, iport, iphy);
break; break;
case PHY_FUNC_GET_EVENTS: { case PHY_FUNC_GET_EVENTS: {
......
...@@ -103,7 +103,6 @@ struct isci_phy { ...@@ -103,7 +103,6 @@ struct isci_phy {
struct scu_transport_layer_registers __iomem *transport_layer_registers; struct scu_transport_layer_registers __iomem *transport_layer_registers;
struct scu_link_layer_registers __iomem *link_layer_registers; struct scu_link_layer_registers __iomem *link_layer_registers;
struct asd_sas_phy sas_phy; struct asd_sas_phy sas_phy;
struct isci_port *isci_port;
u8 sas_addr[SAS_ADDR_SIZE]; u8 sas_addr[SAS_ADDR_SIZE];
union { union {
struct sas_identify_frame iaf; struct sas_identify_frame iaf;
...@@ -344,101 +343,65 @@ enum sci_phy_counter_id { ...@@ -344,101 +343,65 @@ enum sci_phy_counter_id {
SCIC_PHY_COUNTER_SN_DWORD_SYNC_ERROR SCIC_PHY_COUNTER_SN_DWORD_SYNC_ERROR
}; };
/**
 * enum sci_phy_states - phy state machine states
 * @SCI_PHY_INITIAL: Simply the initial state for the base domain state
 *	machine.
 * @SCI_PHY_STOPPED: phy has successfully been stopped. In this state no
 *	new IO operations are permitted on this phy.
 * @SCI_PHY_STARTING: the phy is in the process of becoming ready. In
 *	this state no new IO operations are permitted on this phy.
 * @SCI_PHY_SUB_INITIAL: Initial state
 * @SCI_PHY_SUB_AWAIT_OSSP_EN: Wait state for the hardware OSSP event
 *	type notification
 * @SCI_PHY_SUB_AWAIT_SAS_SPEED_EN: Wait state for the PHY speed
 *	notification
 * @SCI_PHY_SUB_AWAIT_IAF_UF: Wait state for the IAF Unsolicited frame
 *	notification
 * @SCI_PHY_SUB_AWAIT_SAS_POWER: Wait state for the request to consume
 *	power
 * @SCI_PHY_SUB_AWAIT_SATA_POWER: Wait state for request to consume
 *	power
 * @SCI_PHY_SUB_AWAIT_SATA_PHY_EN: Wait state for the SATA PHY
 *	notification
 * @SCI_PHY_SUB_AWAIT_SATA_SPEED_EN: Wait for the SATA PHY speed
 *	notification
 * @SCI_PHY_SUB_AWAIT_SIG_FIS_UF: Wait state for the SIGNATURE FIS
 *	unsolicited frame notification
 * @SCI_PHY_SUB_FINAL: Exit state for this state machine
 * @SCI_PHY_READY: phy is now ready. Thus, the user is able to perform
 *	IO operations utilizing this phy as long as it is currently part
 *	of a valid port. This state is entered from the STARTING state.
 * @SCI_PHY_RESETTING: phy is in the process of being reset. In this
 *	state no new IO operations are permitted on this phy. This state
 *	is entered from the READY state.
 * @SCI_PHY_FINAL: Simply the final state for the base phy state
 *	machine.
 */
#define PHY_STATES {\
	C(PHY_INITIAL),\
	C(PHY_STOPPED),\
	C(PHY_STARTING),\
	C(PHY_SUB_INITIAL),\
	C(PHY_SUB_AWAIT_OSSP_EN),\
	C(PHY_SUB_AWAIT_SAS_SPEED_EN),\
	C(PHY_SUB_AWAIT_IAF_UF),\
	C(PHY_SUB_AWAIT_SAS_POWER),\
	C(PHY_SUB_AWAIT_SATA_POWER),\
	C(PHY_SUB_AWAIT_SATA_PHY_EN),\
	C(PHY_SUB_AWAIT_SATA_SPEED_EN),\
	C(PHY_SUB_AWAIT_SIG_FIS_UF),\
	C(PHY_SUB_FINAL),\
	C(PHY_READY),\
	C(PHY_RESETTING),\
	C(PHY_FINAL),\
	}
#undef C
#define C(a) SCI_##a
enum sci_phy_states PHY_STATES;
#undef C
void sci_phy_construct( void sci_phy_construct(
struct isci_phy *iphy, struct isci_phy *iphy,
......
@@ -60,18 +60,29 @@
 #define SCIC_SDS_PORT_HARD_RESET_TIMEOUT	(1000)
 #define SCU_DUMMY_INDEX				(0xFFFF)

-static void isci_port_change_state(struct isci_port *iport, enum isci_status status)
+#undef C
+#define C(a) (#a)
+const char *port_state_name(enum sci_port_states state)
 {
-	unsigned long flags;
+	static const char * const strings[] = PORT_STATES;

-	dev_dbg(&iport->isci_host->pdev->dev,
-		"%s: iport = %p, state = 0x%x\n",
-		__func__, iport, status);
+	return strings[state];
+}
+#undef C

-	/* XXX pointless lock */
-	spin_lock_irqsave(&iport->state_lock, flags);
-	iport->status = status;
-	spin_unlock_irqrestore(&iport->state_lock, flags);
+static struct device *sciport_to_dev(struct isci_port *iport)
+{
+	int i = iport->physical_port_index;
+	struct isci_port *table;
+	struct isci_host *ihost;
+
+	if (i == SCIC_SDS_DUMMY_PORT)
+		i = SCI_MAX_PORTS+1;
+
+	table = iport - i;
+	ihost = container_of(table, typeof(*ihost), ports[0]);
+
+	return &ihost->pdev->dev;
 }

 static void sci_port_get_protocols(struct isci_port *iport, struct sci_phy_proto *proto)
@@ -165,18 +176,12 @@ static void isci_port_link_up(struct isci_host *isci_host,
 	struct sci_port_properties properties;
 	unsigned long success = true;

-	BUG_ON(iphy->isci_port != NULL);
-	iphy->isci_port = iport;
-
 	dev_dbg(&isci_host->pdev->dev,
 		"%s: isci_port = %p\n",
 		__func__, iport);

 	spin_lock_irqsave(&iphy->sas_phy.frame_rcvd_lock, flags);

-	isci_port_change_state(iphy->isci_port, isci_starting);
-
 	sci_port_get_properties(iport, &properties);

 	if (iphy->protocol == SCIC_SDS_PHY_PROTOCOL_SATA) {
@@ -258,7 +263,6 @@ static void isci_port_link_down(struct isci_host *isci_host,
 				 __func__, isci_device);
 			set_bit(IDEV_GONE, &isci_device->flags);
 		}
-		isci_port_change_state(isci_port, isci_stopping);
 	}
 }

@@ -269,52 +273,10 @@ static void isci_port_link_down(struct isci_host *isci_host,
 	isci_host->sas_ha.notify_phy_event(&isci_phy->sas_phy,
 					   PHYE_LOSS_OF_SIGNAL);

-	isci_phy->isci_port = NULL;
-
 	dev_dbg(&isci_host->pdev->dev,
 		"%s: isci_port = %p - Done\n", __func__, isci_port);
 }

-/**
- * isci_port_ready() - This function is called by the sci core when a link
- *    becomes ready.
- * @isci_host: This parameter specifies the isci host object.
- * @port: This parameter specifies the sci port with the active link.
- *
- */
-static void isci_port_ready(struct isci_host *isci_host, struct isci_port *isci_port)
-{
-	dev_dbg(&isci_host->pdev->dev,
-		"%s: isci_port = %p\n", __func__, isci_port);
-
-	complete_all(&isci_port->start_complete);
-	isci_port_change_state(isci_port, isci_ready);
-	return;
-}
-
-/**
- * isci_port_not_ready() - This function is called by the sci core when a link
- *    is not ready. All remote devices on this link will be removed if they are
- *    in the stopping state.
- * @isci_host: This parameter specifies the isci host object.
- * @port: This parameter specifies the sci port with the active link.
- *
- */
-static void isci_port_not_ready(struct isci_host *isci_host, struct isci_port *isci_port)
-{
-	dev_dbg(&isci_host->pdev->dev,
-		"%s: isci_port = %p\n", __func__, isci_port);
-}
-
-static void isci_port_stop_complete(struct isci_host *ihost,
-				    struct isci_port *iport,
-				    enum sci_status completion_status)
-{
-	dev_dbg(&ihost->pdev->dev, "Port stop complete\n");
-}
-
 static bool is_port_ready_state(enum sci_port_states state)
 {
 	switch (state) {
@@ -353,7 +315,9 @@ static void port_state_machine_change(struct isci_port *iport,
 static void isci_port_hard_reset_complete(struct isci_port *isci_port,
 					  enum sci_status completion_status)
 {
-	dev_dbg(&isci_port->isci_host->pdev->dev,
+	struct isci_host *ihost = isci_port->owning_controller;
+
+	dev_dbg(&ihost->pdev->dev,
 		"%s: isci_port = %p, completion_status=%x\n",
 		__func__, isci_port, completion_status);

@@ -364,23 +328,24 @@ static void isci_port_hard_reset_complete(struct isci_port *isci_port,
 		/* The reset failed.  The port state is now SCI_PORT_FAILED. */
 		if (isci_port->active_phy_mask == 0) {
+			int phy_idx = isci_port->last_active_phy;
+			struct isci_phy *iphy = &ihost->phys[phy_idx];
+
 			/* Generate the link down now to the host, since it
 			 * was intercepted by the hard reset state machine when
 			 * it really happened.
 			 */
-			isci_port_link_down(isci_port->isci_host,
-					    &isci_port->isci_host->phys[
-						   isci_port->last_active_phy],
-					    isci_port);
+			isci_port_link_down(ihost, iphy, isci_port);
 		}

 		/* Advance the port state so that link state changes will be
 		 * noticed.
 		 */
 		port_state_machine_change(isci_port, SCI_PORT_SUB_WAITING);
 	}
-	complete_all(&isci_port->hard_reset_complete);
+	clear_bit(IPORT_RESET_PENDING, &isci_port->state);
+	wake_up(&ihost->eventq);
 }

 /* This method will return a true value if the specified phy can be assigned to
@@ -835,10 +800,9 @@ static void port_timeout(unsigned long data)
 			__func__,
 			iport);
 	} else if (current_state == SCI_PORT_STOPPING) {
-		/* if the port is still stopping then the stop has not completed */
-		isci_port_stop_complete(iport->owning_controller,
-					iport,
-					SCI_FAILURE_TIMEOUT);
+		dev_dbg(sciport_to_dev(iport),
+			"%s: port%d: stop complete timeout\n",
+			__func__, iport->physical_port_index);
 	} else {
 		/* The port is in the ready state and we have a timer
 		 * reporting a timeout this should not happen.
@@ -1003,7 +967,8 @@ static void sci_port_ready_substate_operational_enter(struct sci_base_state_mach
 	struct isci_port *iport = container_of(sm, typeof(*iport), sm);
 	struct isci_host *ihost = iport->owning_controller;

-	isci_port_ready(ihost, iport);
+	dev_dbg(&ihost->pdev->dev, "%s: port%d ready\n",
+		__func__, iport->physical_port_index);

 	for (index = 0; index < SCI_MAX_PHYS; index++) {
 		if (iport->phy_table[index]) {
@@ -1069,7 +1034,8 @@ static void sci_port_ready_substate_operational_exit(struct sci_base_state_machi
 	 */
 	sci_port_abort_dummy_request(iport);

-	isci_port_not_ready(ihost, iport);
+	dev_dbg(&ihost->pdev->dev, "%s: port%d !ready\n",
+		__func__, iport->physical_port_index);

 	if (iport->ready_exit)
 		sci_port_invalidate_dummy_remote_node(iport);
@@ -1081,7 +1047,8 @@ static void sci_port_ready_substate_configuring_enter(struct sci_base_state_mach
 	struct isci_host *ihost = iport->owning_controller;

 	if (iport->active_phy_mask == 0) {
-		isci_port_not_ready(ihost, iport);
+		dev_dbg(&ihost->pdev->dev, "%s: port%d !ready\n",
+			__func__, iport->physical_port_index);

 		port_state_machine_change(iport, SCI_PORT_SUB_WAITING);
 	} else
@@ -1097,8 +1064,8 @@ enum sci_status sci_port_start(struct isci_port *iport)
 	state = iport->sm.current_state_id;
 	if (state != SCI_PORT_STOPPED) {
-		dev_warn(sciport_to_dev(iport),
-			 "%s: in wrong state: %d\n", __func__, state);
+		dev_warn(sciport_to_dev(iport), "%s: in wrong state: %s\n",
+			 __func__, port_state_name(state));
 		return SCI_FAILURE_INVALID_STATE;
 	}

@@ -1172,8 +1139,8 @@ enum sci_status sci_port_stop(struct isci_port *iport)
 				SCI_PORT_STOPPING);
 		return SCI_SUCCESS;
 	default:
-		dev_warn(sciport_to_dev(iport),
-			 "%s: in wrong state: %d\n", __func__, state);
+		dev_warn(sciport_to_dev(iport), "%s: in wrong state: %s\n",
+			 __func__, port_state_name(state));
 		return SCI_FAILURE_INVALID_STATE;
 	}
 }

@@ -1187,8 +1154,8 @@ static enum sci_status sci_port_hard_reset(struct isci_port *iport, u32 timeout)
 	state = iport->sm.current_state_id;
 	if (state != SCI_PORT_SUB_OPERATIONAL) {
-		dev_warn(sciport_to_dev(iport),
-			 "%s: in wrong state: %d\n", __func__, state);
+		dev_warn(sciport_to_dev(iport), "%s: in wrong state: %s\n",
+			 __func__, port_state_name(state));
 		return SCI_FAILURE_INVALID_STATE;
 	}

@@ -1282,8 +1249,8 @@ enum sci_status sci_port_add_phy(struct isci_port *iport,
 				SCI_PORT_SUB_CONFIGURING);
 		return SCI_SUCCESS;
 	default:
-		dev_warn(sciport_to_dev(iport),
-			 "%s: in wrong state: %d\n", __func__, state);
+		dev_warn(sciport_to_dev(iport), "%s: in wrong state: %s\n",
+			 __func__, port_state_name(state));
 		return SCI_FAILURE_INVALID_STATE;
 	}
 }

@@ -1332,8 +1299,8 @@ enum sci_status sci_port_remove_phy(struct isci_port *iport,
 				SCI_PORT_SUB_CONFIGURING);
 		return SCI_SUCCESS;
 	default:
-		dev_warn(sciport_to_dev(iport),
-			 "%s: in wrong state: %d\n", __func__, state);
+		dev_warn(sciport_to_dev(iport), "%s: in wrong state: %s\n",
+			 __func__, port_state_name(state));
 		return SCI_FAILURE_INVALID_STATE;
 	}
 }

@@ -1375,8 +1342,8 @@ enum sci_status sci_port_link_up(struct isci_port *iport,
 		sci_port_general_link_up_handler(iport, iphy, PF_RESUME);
 		return SCI_SUCCESS;
 	default:
-		dev_warn(sciport_to_dev(iport),
-			 "%s: in wrong state: %d\n", __func__, state);
+		dev_warn(sciport_to_dev(iport), "%s: in wrong state: %s\n",
+			 __func__, port_state_name(state));
 		return SCI_FAILURE_INVALID_STATE;
 	}
 }

@@ -1405,8 +1372,8 @@ enum sci_status sci_port_link_down(struct isci_port *iport,
 		sci_port_deactivate_phy(iport, iphy, false);
 		return SCI_SUCCESS;
 	default:
-		dev_warn(sciport_to_dev(iport),
-			 "%s: in wrong state: %d\n", __func__, state);
+		dev_warn(sciport_to_dev(iport), "%s: in wrong state: %s\n",
+			 __func__, port_state_name(state));
 		return SCI_FAILURE_INVALID_STATE;
 	}
 }

@@ -1425,8 +1392,8 @@ enum sci_status sci_port_start_io(struct isci_port *iport,
 		iport->started_request_count++;
 		return SCI_SUCCESS;
 	default:
-		dev_warn(sciport_to_dev(iport),
-			 "%s: in wrong state: %d\n", __func__, state);
+		dev_warn(sciport_to_dev(iport), "%s: in wrong state: %s\n",
+			 __func__, port_state_name(state));
 		return SCI_FAILURE_INVALID_STATE;
 	}
 }

@@ -1440,8 +1407,8 @@ enum sci_status sci_port_complete_io(struct isci_port *iport,
 	state = iport->sm.current_state_id;
 	switch (state) {
 	case SCI_PORT_STOPPED:
-		dev_warn(sciport_to_dev(iport),
-			 "%s: in wrong state: %d\n", __func__, state);
+		dev_warn(sciport_to_dev(iport), "%s: in wrong state: %s\n",
+			 __func__, port_state_name(state));
 		return SCI_FAILURE_INVALID_STATE;
 	case SCI_PORT_STOPPING:
 		sci_port_decrement_request_count(iport);
@@ -1547,7 +1514,8 @@ static void sci_port_ready_state_enter(struct sci_base_state_machine *sm)
 	if (prev_state == SCI_PORT_RESETTING)
 		isci_port_hard_reset_complete(iport, SCI_SUCCESS);
 	else
-		isci_port_not_ready(ihost, iport);
+		dev_dbg(&ihost->pdev->dev, "%s: port%d !ready\n",
+			__func__, iport->physical_port_index);

 	/* Post and suspend the dummy remote node context for this port. */
 	sci_port_post_dummy_remote_node(iport);
@@ -1644,22 +1612,7 @@ void isci_port_init(struct isci_port *iport, struct isci_host *ihost, int index)
 {
 	INIT_LIST_HEAD(&iport->remote_dev_list);
 	INIT_LIST_HEAD(&iport->domain_dev_list);
-	spin_lock_init(&iport->state_lock);
-	init_completion(&iport->start_complete);
 	iport->isci_host = ihost;
-	isci_port_change_state(iport, isci_freed);
-}
-
-/**
- * isci_port_get_state() - This function gets the status of the port object.
- * @isci_port: This parameter points to the isci_port object
- *
- * status of the object as a isci_status enum.
- */
-enum isci_status isci_port_get_state(
-	struct isci_port *isci_port)
-{
-	return isci_port->status;
 }

 void sci_port_broadcast_change_received(struct isci_port *iport, struct isci_phy *iphy)
@@ -1670,6 +1623,11 @@ void sci_port_broadcast_change_received(struct isci_port *iport, struct isci_phy
 	isci_port_bc_change_received(ihost, iport, iphy);
 }
+static void wait_port_reset(struct isci_host *ihost, struct isci_port *iport)
+{
+	wait_event(ihost->eventq, !test_bit(IPORT_RESET_PENDING, &iport->state));
+}
+
 int isci_port_perform_hard_reset(struct isci_host *ihost, struct isci_port *iport,
 				 struct isci_phy *iphy)
 {
@@ -1680,9 +1638,8 @@ int isci_port_perform_hard_reset(struct isci_host *ihost, struct isci_port *ipor
 	dev_dbg(&ihost->pdev->dev, "%s: iport = %p\n",
 		__func__, iport);

-	init_completion(&iport->hard_reset_complete);
-
 	spin_lock_irqsave(&ihost->scic_lock, flags);
+	set_bit(IPORT_RESET_PENDING, &iport->state);

 	#define ISCI_PORT_RESET_TIMEOUT SCIC_SDS_SIGNATURE_FIS_TIMEOUT
 	status = sci_port_hard_reset(iport, ISCI_PORT_RESET_TIMEOUT);
@@ -1690,7 +1647,7 @@ int isci_port_perform_hard_reset(struct isci_host *ihost, struct isci_port *ipor
 	spin_unlock_irqrestore(&ihost->scic_lock, flags);

 	if (status == SCI_SUCCESS) {
-		wait_for_completion(&iport->hard_reset_complete);
+		wait_port_reset(ihost, iport);

 		dev_dbg(&ihost->pdev->dev,
 			"%s: iport = %p; hard reset completion\n",
@@ -1704,6 +1661,8 @@ int isci_port_perform_hard_reset(struct isci_host *ihost, struct isci_port *ipor
 			__func__, iport, iport->hard_reset_status);
 		}
 	} else {
+		clear_bit(IPORT_RESET_PENDING, &iport->state);
+		wake_up(&ihost->eventq);
 		ret = TMF_RESP_FUNC_FAILED;

 		dev_err(&ihost->pdev->dev,
@@ -1726,24 +1685,80 @@ int isci_port_perform_hard_reset(struct isci_host *ihost, struct isci_port *ipor
 	return ret;
 }
+int isci_ata_check_ready(struct domain_device *dev)
+{
+	struct isci_port *iport = dev->port->lldd_port;
+	struct isci_host *ihost = dev_to_ihost(dev);
+	struct isci_remote_device *idev;
+	unsigned long flags;
+	int rc = 0;
+
+	spin_lock_irqsave(&ihost->scic_lock, flags);
+	idev = isci_lookup_device(dev);
+	spin_unlock_irqrestore(&ihost->scic_lock, flags);
+
+	if (!idev)
+		goto out;
+
+	if (test_bit(IPORT_RESET_PENDING, &iport->state))
+		goto out;
+
+	rc = !!iport->active_phy_mask;
+ out:
+	isci_put_device(idev);
+	return rc;
+}
+
-/**
- * isci_port_deformed() - This function is called by libsas when a port becomes
- *    inactive.
- * @phy: This parameter specifies the libsas phy with the inactive port.
- *
- */
 void isci_port_deformed(struct asd_sas_phy *phy)
 {
-	pr_debug("%s: sas_phy = %p\n", __func__, phy);
+	struct isci_host *ihost = phy->ha->lldd_ha;
+	struct isci_port *iport = phy->port->lldd_port;
+	unsigned long flags;
+	int i;
+
+	/* we got a port notification on a port that was subsequently
+	 * torn down and libsas is just now catching up
+	 */
+	if (!iport)
+		return;
+
+	spin_lock_irqsave(&ihost->scic_lock, flags);
+	for (i = 0; i < SCI_MAX_PHYS; i++) {
+		if (iport->active_phy_mask & 1 << i)
+			break;
+	}
+	spin_unlock_irqrestore(&ihost->scic_lock, flags);
+
+	if (i >= SCI_MAX_PHYS)
+		dev_dbg(&ihost->pdev->dev, "%s: port: %ld\n",
+			__func__, (long) (iport - &ihost->ports[0]));
 }

-/**
- * isci_port_formed() - This function is called by libsas when a port becomes
- *    active.
- * @phy: This parameter specifies the libsas phy with the active port.
- *
- */
 void isci_port_formed(struct asd_sas_phy *phy)
 {
-	pr_debug("%s: sas_phy = %p, sas_port = %p\n", __func__, phy, phy->port);
+	struct isci_host *ihost = phy->ha->lldd_ha;
+	struct isci_phy *iphy = to_iphy(phy);
+	struct asd_sas_port *port = phy->port;
+	struct isci_port *iport;
+	unsigned long flags;
+	int i;
+
+	/* initial ports are formed as the driver is still initializing,
+	 * wait for that process to complete
+	 */
+	wait_for_start(ihost);
+
+	spin_lock_irqsave(&ihost->scic_lock, flags);
+	for (i = 0; i < SCI_MAX_PORTS; i++) {
+		iport = &ihost->ports[i];
+		if (iport->active_phy_mask & 1 << iphy->phy_index)
+			break;
+	}
+	spin_unlock_irqrestore(&ihost->scic_lock, flags);
+
+	if (i >= SCI_MAX_PORTS)
+		iport = NULL;
+
+	port->lldd_port = iport;
 }
@@ -95,14 +95,11 @@ enum isci_status {
  * @timer: timeout start/stop operations
  */
 struct isci_port {
-	enum isci_status status;
 	struct isci_host *isci_host;
-	struct asd_sas_port sas_port;
 	struct list_head remote_dev_list;
-	spinlock_t state_lock;
 	struct list_head domain_dev_list;
-	struct completion start_complete;
-	struct completion hard_reset_complete;
+	#define IPORT_RESET_PENDING 0
+	unsigned long state;
 	enum sci_status hard_reset_status;
 	struct sci_base_state_machine sm;
 	bool ready_exit;
}; };
/** /**
* enum sci_port_states - This enumeration depicts all the states for the * enum sci_port_states - port state machine states
* common port state machine. * @SCI_PORT_STOPPED: port has successfully been stopped. In this state
* * no new IO operations are permitted. This state is
* * entered from the STOPPING state.
* @SCI_PORT_STOPPING: port is in the process of stopping. In this
* state no new IO operations are permitted, but
* existing IO operations are allowed to complete.
* This state is entered from the READY state.
* @SCI_PORT_READY: port is now ready. Thus, the user is able to
* perform IO operations on this port. This state is
* entered from the STARTING state.
* @SCI_PORT_SUB_WAITING: port is started and ready but has no active
* phys.
* @SCI_PORT_SUB_OPERATIONAL: port is started and ready and there is at
* least one phy operational.
* @SCI_PORT_SUB_CONFIGURING: port is started and there was an
* add/remove phy event. This state is only
* used in Automatic Port Configuration Mode
* (APC)
* @SCI_PORT_RESETTING: port is in the process of performing a hard
* reset. Thus, the user is unable to perform IO
* operations on this port. This state is entered
* from the READY state.
* @SCI_PORT_FAILED: port has failed a reset request. This state is
* entered when a port reset request times out. This
* state is entered from the RESETTING state.
*/ */
enum sci_port_states { #define PORT_STATES {\
/** C(PORT_STOPPED),\
* This state indicates that the port has successfully been stopped. C(PORT_STOPPING),\
* In this state no new IO operations are permitted. C(PORT_READY),\
* This state is entered from the STOPPING state. C(PORT_SUB_WAITING),\
*/ C(PORT_SUB_OPERATIONAL),\
SCI_PORT_STOPPED, C(PORT_SUB_CONFIGURING),\
C(PORT_RESETTING),\
/** C(PORT_FAILED),\
* This state indicates that the port is in the process of stopping. }
* In this state no new IO operations are permitted, but existing IO #undef C
* operations are allowed to complete. #define C(a) SCI_##a
* This state is entered from the READY state. enum sci_port_states PORT_STATES;
*/ #undef C
SCI_PORT_STOPPING,
/**
* This state indicates the port is now ready. Thus, the user is
* able to perform IO operations on this port.
* This state is entered from the STARTING state.
*/
SCI_PORT_READY,
/**
* The substate where the port is started and ready but has no
* active phys.
*/
SCI_PORT_SUB_WAITING,
/**
* The substate where the port is started and ready and there is
* at least one phy operational.
*/
SCI_PORT_SUB_OPERATIONAL,
/**
* The substate where the port is started and there was an
* add/remove phy event. This state is only used in Automatic
* Port Configuration Mode (APC)
*/
SCI_PORT_SUB_CONFIGURING,
/**
* This state indicates the port is in the process of performing a hard
* reset. Thus, the user is unable to perform IO operations on this
* port.
* This state is entered from the READY state.
*/
SCI_PORT_RESETTING,
/**
* This state indicates the port has failed a reset request. This state
* is entered when a port reset request times out.
* This state is entered from the RESETTING state.
*/
SCI_PORT_FAILED,
};
 static inline void sci_port_decrement_request_count(struct isci_port *iport)
 {
@@ -296,9 +270,6 @@ void sci_port_get_attached_sas_address(
 	struct isci_port *iport,
 	struct sci_sas_address *sas_address);

-enum isci_status isci_port_get_state(
-	struct isci_port *isci_port);
-
 void isci_port_formed(struct asd_sas_phy *);
 void isci_port_deformed(struct asd_sas_phy *);
@@ -309,4 +280,5 @@ void isci_port_init(
 int isci_port_perform_hard_reset(struct isci_host *ihost, struct isci_port *iport,
 				 struct isci_phy *iphy);
+int isci_ata_check_ready(struct domain_device *dev);
 #endif /* !defined(_ISCI_PORT_H_) */
@@ -370,6 +370,27 @@ struct scu_iit_entry {
 	 >> SMU_DEVICE_CONTEXT_CAPACITY_MAX_RNC_SHIFT	\
 	)

+/* ***************************************************************************** */
+#define SMU_CLOCK_GATING_CONTROL_IDLE_ENABLE_SHIFT	(0)
+#define SMU_CLOCK_GATING_CONTROL_IDLE_ENABLE_MASK	(0x00000001)
+#define SMU_CLOCK_GATING_CONTROL_XCLK_ENABLE_SHIFT	(1)
+#define SMU_CLOCK_GATING_CONTROL_XCLK_ENABLE_MASK	(0x00000002)
+#define SMU_CLOCK_GATING_CONTROL_TXCLK_ENABLE_SHIFT	(2)
+#define SMU_CLOCK_GATING_CONTROL_TXCLK_ENABLE_MASK	(0x00000004)
+#define SMU_CLOCK_GATING_CONTROL_REGCLK_ENABLE_SHIFT	(3)
+#define SMU_CLOCK_GATING_CONTROL_REGCLK_ENABLE_MASK	(0x00000008)
+#define SMU_CLOCK_GATING_CONTROL_IDLE_TIMEOUT_SHIFT	(16)
+#define SMU_CLOCK_GATING_CONTROL_IDLE_TIMEOUT_MASK	(0x000F0000)
+#define SMU_CLOCK_GATING_CONTROL_FORCE_IDLE_SHIFT	(31)
+#define SMU_CLOCK_GATING_CONTROL_FORCE_IDLE_MASK	(0x80000000)
+#define SMU_CLOCK_GATING_CONTROL_RESERVED_MASK		(0x7FF0FFF0)
+
+#define SMU_CGUCR_GEN_VAL(name, value) \
+	SCU_GEN_VALUE(SMU_CLOCK_GATING_CONTROL_##name, value)
+
+#define SMU_CGUCR_GEN_BIT(name) \
+	SCU_GEN_BIT(SMU_CLOCK_GATING_CONTROL_##name)
+
 /* -------------------------------------------------------------------------- */
 #define SMU_CONTROL_STATUS_TASK_CONTEXT_RANGE_ENABLE_SHIFT	(0)

@@ -992,8 +1013,10 @@ struct smu_registers {
 	u32 mmr_address_window;
 	/* 0x00A4 SMDW */
 	u32 mmr_data_window;
-	u32 reserved_A8;
-	u32 reserved_AC;
+	/* 0x00A8 CGUCR */
+	u32 clock_gating_control;
+	/* 0x00AC CGUPC */
+	u32 clock_gating_performance;
 	/* A whole bunch of reserved space */
 	u32 reserved_Bx[4];
 	u32 reserved_Cx[4];
......
...@@ -62,6 +62,16 @@ ...@@ -62,6 +62,16 @@
#include "scu_event_codes.h" #include "scu_event_codes.h"
#include "task.h" #include "task.h"
#undef C
#define C(a) (#a)
const char *dev_state_name(enum sci_remote_device_states state)
{
static const char * const strings[] = REMOTE_DEV_STATES;
return strings[state];
}
#undef C
/** /**
* isci_remote_device_not_ready() - This function is called by the ihost when * isci_remote_device_not_ready() - This function is called by the ihost when
* the remote device is not ready. We mark the isci device as ready (not * the remote device is not ready. We mark the isci device as ready (not
...@@ -167,8 +177,8 @@ enum sci_status sci_remote_device_stop(struct isci_remote_device *idev, ...@@ -167,8 +177,8 @@ enum sci_status sci_remote_device_stop(struct isci_remote_device *idev,
case SCI_DEV_FAILED: case SCI_DEV_FAILED:
case SCI_DEV_FINAL: case SCI_DEV_FINAL:
default: default:
dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %d\n", dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %s\n",
__func__, state); __func__, dev_state_name(state));
return SCI_FAILURE_INVALID_STATE; return SCI_FAILURE_INVALID_STATE;
case SCI_DEV_STOPPED: case SCI_DEV_STOPPED:
return SCI_SUCCESS; return SCI_SUCCESS;
...@@ -226,8 +236,8 @@ enum sci_status sci_remote_device_reset(struct isci_remote_device *idev) ...@@ -226,8 +236,8 @@ enum sci_status sci_remote_device_reset(struct isci_remote_device *idev)
case SCI_DEV_RESETTING: case SCI_DEV_RESETTING:
case SCI_DEV_FINAL: case SCI_DEV_FINAL:
default: default:
dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %d\n", dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %s\n",
__func__, state); __func__, dev_state_name(state));
return SCI_FAILURE_INVALID_STATE; return SCI_FAILURE_INVALID_STATE;
case SCI_DEV_READY: case SCI_DEV_READY:
case SCI_STP_DEV_IDLE: case SCI_STP_DEV_IDLE:
...@@ -246,8 +256,8 @@ enum sci_status sci_remote_device_reset_complete(struct isci_remote_device *idev ...@@ -246,8 +256,8 @@ enum sci_status sci_remote_device_reset_complete(struct isci_remote_device *idev
enum sci_remote_device_states state = sm->current_state_id; enum sci_remote_device_states state = sm->current_state_id;
if (state != SCI_DEV_RESETTING) { if (state != SCI_DEV_RESETTING) {
dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %d\n", dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %s\n",
__func__, state); __func__, dev_state_name(state));
return SCI_FAILURE_INVALID_STATE; return SCI_FAILURE_INVALID_STATE;
} }
...@@ -262,8 +272,8 @@ enum sci_status sci_remote_device_suspend(struct isci_remote_device *idev, ...@@ -262,8 +272,8 @@ enum sci_status sci_remote_device_suspend(struct isci_remote_device *idev,
enum sci_remote_device_states state = sm->current_state_id; enum sci_remote_device_states state = sm->current_state_id;
if (state != SCI_STP_DEV_CMD) { if (state != SCI_STP_DEV_CMD) {
dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %d\n", dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %s\n",
__func__, state); __func__, dev_state_name(state));
return SCI_FAILURE_INVALID_STATE; return SCI_FAILURE_INVALID_STATE;
} }
@@ -287,8 +297,8 @@ enum sci_status sci_remote_device_frame_handler(struct isci_remote_device *idev,
 	case SCI_SMP_DEV_IDLE:
 	case SCI_DEV_FINAL:
 	default:
-		dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %d\n",
-			 __func__, state);
+		dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %s\n",
+			 __func__, dev_state_name(state));
 		/* Return the frame back to the controller */
 		sci_controller_release_frame(ihost, frame_index);
 		return SCI_FAILURE_INVALID_STATE;
@@ -502,8 +512,8 @@ enum sci_status sci_remote_device_start_io(struct isci_host *ihost,
 	case SCI_DEV_RESETTING:
 	case SCI_DEV_FINAL:
 	default:
-		dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %d\n",
-			 __func__, state);
+		dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %s\n",
+			 __func__, dev_state_name(state));
 		return SCI_FAILURE_INVALID_STATE;
 	case SCI_DEV_READY:
 		/* attempt to start an io request for this device object. The remote
@@ -637,8 +647,8 @@ enum sci_status sci_remote_device_complete_io(struct isci_host *ihost,
 	case SCI_DEV_FAILED:
 	case SCI_DEV_FINAL:
 	default:
-		dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %d\n",
-			 __func__, state);
+		dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %s\n",
+			 __func__, dev_state_name(state));
 		return SCI_FAILURE_INVALID_STATE;
 	case SCI_DEV_READY:
 	case SCI_STP_DEV_AWAIT_RESET:
@@ -721,8 +731,8 @@ enum sci_status sci_remote_device_start_task(struct isci_host *ihost,
 	case SCI_DEV_RESETTING:
 	case SCI_DEV_FINAL:
 	default:
-		dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %d\n",
-			 __func__, state);
+		dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %s\n",
+			 __func__, dev_state_name(state));
 		return SCI_FAILURE_INVALID_STATE;
 	case SCI_STP_DEV_IDLE:
 	case SCI_STP_DEV_CMD:
@@ -853,8 +863,8 @@ static enum sci_status sci_remote_device_destruct(struct isci_remote_device *ide
 	struct isci_host *ihost;

 	if (state != SCI_DEV_STOPPED) {
-		dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %d\n",
-			 __func__, state);
+		dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %s\n",
+			 __func__, dev_state_name(state));
 		return SCI_FAILURE_INVALID_STATE;
 	}
@@ -1204,8 +1214,8 @@ static enum sci_status sci_remote_device_start(struct isci_remote_device *idev,
 	enum sci_status status;

 	if (state != SCI_DEV_STOPPED) {
-		dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %d\n",
-			 __func__, state);
+		dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %s\n",
+			 __func__, dev_state_name(state));
 		return SCI_FAILURE_INVALID_STATE;
 	}
@@ -1308,7 +1318,6 @@ void isci_remote_device_release(struct kref *kref)
 	clear_bit(IDEV_STOP_PENDING, &idev->flags);
 	clear_bit(IDEV_IO_READY, &idev->flags);
 	clear_bit(IDEV_GONE, &idev->flags);
-	clear_bit(IDEV_EH, &idev->flags);
 	smp_mb__before_clear_bit();
 	clear_bit(IDEV_ALLOCATED, &idev->flags);
 	wake_up(&ihost->eventq);
@@ -1381,34 +1390,17 @@ void isci_remote_device_gone(struct domain_device *dev)
  *
  * status, zero indicates success.
  */
-int isci_remote_device_found(struct domain_device *domain_dev)
+int isci_remote_device_found(struct domain_device *dev)
 {
-	struct isci_host *isci_host = dev_to_ihost(domain_dev);
-	struct isci_port *isci_port;
-	struct isci_phy *isci_phy;
-	struct asd_sas_port *sas_port;
-	struct asd_sas_phy *sas_phy;
+	struct isci_host *isci_host = dev_to_ihost(dev);
+	struct isci_port *isci_port = dev->port->lldd_port;
 	struct isci_remote_device *isci_device;
 	enum sci_status status;

 	dev_dbg(&isci_host->pdev->dev,
-		"%s: domain_device = %p\n", __func__, domain_dev);
-
-	wait_for_start(isci_host);
-
-	sas_port = domain_dev->port;
-	sas_phy = list_first_entry(&sas_port->phy_list, struct asd_sas_phy,
-				   port_phy_el);
-	isci_phy = to_iphy(sas_phy);
-	isci_port = isci_phy->isci_port;
-
-	/* we are being called for a device on this port,
-	 * so it has to come up eventually
-	 */
-	wait_for_completion(&isci_port->start_complete);
-
-	if ((isci_stopping == isci_port_get_state(isci_port)) ||
-	    (isci_stopped == isci_port_get_state(isci_port)))
+		"%s: domain_device = %p\n", __func__, dev);
+
+	if (!isci_port)
 		return -ENODEV;

 	isci_device = isci_remote_device_alloc(isci_host, isci_port);
@@ -1419,7 +1411,7 @@ int isci_remote_device_found(struct domain_device *domain_dev)
 	INIT_LIST_HEAD(&isci_device->node);

 	spin_lock_irq(&isci_host->scic_lock);
-	isci_device->domain_dev = domain_dev;
+	isci_device->domain_dev = dev;
 	isci_device->isci_port = isci_port;
 	list_add_tail(&isci_device->node, &isci_port->remote_dev_list);

@@ -1432,7 +1424,7 @@ int isci_remote_device_found(struct domain_device *domain_dev)
 	if (status == SCI_SUCCESS) {
 		/* device came up, advertise it to the world */
-		domain_dev->lldd_dev = isci_device;
+		dev->lldd_dev = isci_device;
 	} else
 		isci_put_device(isci_device);
 	spin_unlock_irq(&isci_host->scic_lock);
...
@@ -82,10 +82,9 @@ struct isci_remote_device {
 	#define IDEV_START_PENDING 0
 	#define IDEV_STOP_PENDING 1
 	#define IDEV_ALLOCATED 2
-	#define IDEV_EH 3
-	#define IDEV_GONE 4
-	#define IDEV_IO_READY 5
-	#define IDEV_IO_NCQERROR 6
+	#define IDEV_GONE 3
+	#define IDEV_IO_READY 4
+	#define IDEV_IO_NCQERROR 5
 	unsigned long flags;
 	struct kref kref;
 	struct isci_port *isci_port;
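The hunk above can drop IDEV_EH and renumber the remaining flags only because each IDEV_* constant is a bit *number* into the single atomic `flags` word (used with the kernel's set_bit()/test_bit()/clear_bit()), so the sequence just has to stay dense and unique. A minimal userspace sketch of that bit-number convention, with illustrative (non-atomic) helper names that are not the kernel's:

```c
#include <assert.h>

/* Each flag is a bit *number* into one unsigned long, mirroring how the
 * driver uses IDEV_* with set_bit()/test_bit()/clear_bit().  The helpers
 * below are plain userspace stand-ins, not the kernel's atomic ops. */
#define IDEV_START_PENDING	0
#define IDEV_STOP_PENDING	1
#define IDEV_ALLOCATED		2
#define IDEV_GONE		3

static void flag_set(int nr, unsigned long *word)
{
	*word |= 1UL << nr;
}

static int flag_test(int nr, const unsigned long *word)
{
	return (*word >> nr) & 1;
}

static void flag_clear(int nr, unsigned long *word)
{
	*word &= ~(1UL << nr);
}
```

Because the constants are shift counts rather than masks, removing one entry from the middle of the list forces the renumbering seen in the diff.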
@@ -180,122 +179,101 @@ enum sci_status sci_remote_device_reset_complete(
 /**
  * enum sci_remote_device_states - This enumeration depicts all the states
  *     for the common remote device state machine.
+ * @SCI_DEV_INITIAL: Simply the initial state for the base remote device
+ * state machine.
  *
+ * @SCI_DEV_STOPPED: This state indicates that the remote device has
+ * successfully been stopped.  In this state no new IO operations are
+ * permitted.  This state is entered from the INITIAL state.  This state
+ * is entered from the STOPPING state.
  *
+ * @SCI_DEV_STARTING: This state indicates the the remote device is in
+ * the process of becoming ready (i.e. starting).  In this state no new
+ * IO operations are permitted.  This state is entered from the STOPPED
+ * state.
+ *
+ * @SCI_DEV_READY: This state indicates the remote device is now ready.
+ * Thus, the user is able to perform IO operations on the remote device.
+ * This state is entered from the STARTING state.
+ *
+ * @SCI_STP_DEV_IDLE: This is the idle substate for the stp remote
+ * device.  When there are no active IO for the device it is is in this
+ * state.
+ *
+ * @SCI_STP_DEV_CMD: This is the command state for for the STP remote
+ * device.  This state is entered when the device is processing a
+ * non-NCQ command.  The device object will fail any new start IO
+ * requests until this command is complete.
+ *
+ * @SCI_STP_DEV_NCQ: This is the NCQ state for the STP remote device.
+ * This state is entered when the device is processing an NCQ reuqest.
+ * It will remain in this state so long as there is one or more NCQ
+ * requests being processed.
+ *
+ * @SCI_STP_DEV_NCQ_ERROR: This is the NCQ error state for the STP
+ * remote device.  This state is entered when an SDB error FIS is
+ * received by the device object while in the NCQ state.  The device
+ * object will only accept a READ LOG command while in this state.
+ *
+ * @SCI_STP_DEV_ATAPI_ERROR: This is the ATAPI error state for the STP
+ * ATAPI remote device.  This state is entered when ATAPI device sends
+ * error status FIS without data while the device object is in CMD
+ * state.  A suspension event is expected in this state.  The device
+ * object will resume right away.
+ *
+ * @SCI_STP_DEV_AWAIT_RESET: This is the READY substate indicates the
+ * device is waiting for the RESET task coming to be recovered from
+ * certain hardware specific error.
+ *
+ * @SCI_SMP_DEV_IDLE: This is the ready operational substate for the
+ * remote device.  This is the normal operational state for a remote
+ * device.
+ *
+ * @SCI_SMP_DEV_CMD: This is the suspended state for the remote device.
+ * This is the state that the device is placed in when a RNC suspend is
+ * received by the SCU hardware.
+ *
+ * @SCI_DEV_STOPPING: This state indicates that the remote device is in
+ * the process of stopping.  In this state no new IO operations are
+ * permitted, but existing IO operations are allowed to complete.  This
+ * state is entered from the READY state.  This state is entered from
+ * the FAILED state.
+ *
+ * @SCI_DEV_FAILED: This state indicates that the remote device has
+ * failed.  In this state no new IO operations are permitted.  This
+ * state is entered from the INITIALIZING state.  This state is entered
+ * from the READY state.
+ *
+ * @SCI_DEV_RESETTING: This state indicates the device is being reset.
+ * In this state no new IO operations are permitted.  This state is
+ * entered from the READY state.
+ *
+ * @SCI_DEV_FINAL: Simply the final state for the base remote device
+ * state machine.
  */
-enum sci_remote_device_states {
-	/**
-	 * Simply the initial state for the base remote device state machine.
-	 */
-	SCI_DEV_INITIAL,
-
-	/**
-	 * This state indicates that the remote device has successfully been
-	 * stopped.  In this state no new IO operations are permitted.
-	 * This state is entered from the INITIAL state.
-	 * This state is entered from the STOPPING state.
-	 */
-	SCI_DEV_STOPPED,
-
-	/**
-	 * This state indicates the the remote device is in the process of
-	 * becoming ready (i.e. starting).  In this state no new IO operations
-	 * are permitted.
-	 * This state is entered from the STOPPED state.
-	 */
-	SCI_DEV_STARTING,
-
-	/**
-	 * This state indicates the remote device is now ready.  Thus, the user
-	 * is able to perform IO operations on the remote device.
-	 * This state is entered from the STARTING state.
-	 */
-	SCI_DEV_READY,
-
-	/**
-	 * This is the idle substate for the stp remote device.  When there are no
-	 * active IO for the device it is is in this state.
-	 */
-	SCI_STP_DEV_IDLE,
-
-	/**
-	 * This is the command state for for the STP remote device.  This state is
-	 * entered when the device is processing a non-NCQ command.  The device object
-	 * will fail any new start IO requests until this command is complete.
-	 */
-	SCI_STP_DEV_CMD,
-
-	/**
-	 * This is the NCQ state for the STP remote device.  This state is entered
-	 * when the device is processing an NCQ reuqest.  It will remain in this state
-	 * so long as there is one or more NCQ requests being processed.
-	 */
-	SCI_STP_DEV_NCQ,
-
-	/**
-	 * This is the NCQ error state for the STP remote device.  This state is
-	 * entered when an SDB error FIS is received by the device object while in the
-	 * NCQ state.  The device object will only accept a READ LOG command while in
-	 * this state.
-	 */
-	SCI_STP_DEV_NCQ_ERROR,
-
-	/**
-	 * This is the ATAPI error state for the STP ATAPI remote device.
-	 * This state is entered when ATAPI device sends error status FIS
-	 * without data while the device object is in CMD state.
-	 * A suspension event is expected in this state.
-	 * The device object will resume right away.
-	 */
-	SCI_STP_DEV_ATAPI_ERROR,
-
-	/**
-	 * This is the READY substate indicates the device is waiting for the RESET task
-	 * coming to be recovered from certain hardware specific error.
-	 */
-	SCI_STP_DEV_AWAIT_RESET,
-
-	/**
-	 * This is the ready operational substate for the remote device.  This is the
-	 * normal operational state for a remote device.
-	 */
-	SCI_SMP_DEV_IDLE,
-
-	/**
-	 * This is the suspended state for the remote device.  This is the state that
-	 * the device is placed in when a RNC suspend is received by the SCU hardware.
-	 */
-	SCI_SMP_DEV_CMD,
-
-	/**
-	 * This state indicates that the remote device is in the process of
-	 * stopping.  In this state no new IO operations are permitted, but
-	 * existing IO operations are allowed to complete.
-	 * This state is entered from the READY state.
-	 * This state is entered from the FAILED state.
-	 */
-	SCI_DEV_STOPPING,
-
-	/**
-	 * This state indicates that the remote device has failed.
-	 * In this state no new IO operations are permitted.
-	 * This state is entered from the INITIALIZING state.
-	 * This state is entered from the READY state.
-	 */
-	SCI_DEV_FAILED,
-
-	/**
-	 * This state indicates the device is being reset.
-	 * In this state no new IO operations are permitted.
-	 * This state is entered from the READY state.
-	 */
-	SCI_DEV_RESETTING,
-
-	/**
-	 * Simply the final state for the base remote device state machine.
-	 */
-	SCI_DEV_FINAL,
-};
+#define REMOTE_DEV_STATES {\
+	C(DEV_INITIAL),\
+	C(DEV_STOPPED),\
+	C(DEV_STARTING),\
+	C(DEV_READY),\
+	C(STP_DEV_IDLE),\
+	C(STP_DEV_CMD),\
+	C(STP_DEV_NCQ),\
+	C(STP_DEV_NCQ_ERROR),\
+	C(STP_DEV_ATAPI_ERROR),\
+	C(STP_DEV_AWAIT_RESET),\
+	C(SMP_DEV_IDLE),\
+	C(SMP_DEV_CMD),\
+	C(DEV_STOPPING),\
+	C(DEV_FAILED),\
+	C(DEV_RESETTING),\
+	C(DEV_FINAL),\
+	}
+#undef C
+#define C(a) SCI_##a
+enum sci_remote_device_states REMOTE_DEV_STATES;
+#undef C
+
+const char *dev_state_name(enum sci_remote_device_states state);
static inline struct isci_remote_device *rnc_to_dev(struct sci_remote_node_context *rnc) static inline struct isci_remote_device *rnc_to_dev(struct sci_remote_node_context *rnc)
{ {
...
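The REMOTE_DEV_STATES/RNC_STATES/REQUEST_STATES pattern this series introduces is the classic C "X-macro": the state list is written once, then expanded twice with different definitions of C() — once to declare the enum, once to stringify the names — so the enum and the name table can never drift apart. A standalone sketch with a toy three-state list (names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <string.h>

/* The list is written once... */
#define TOY_STATES {\
	C(DEV_INITIAL),\
	C(DEV_STOPPED),\
	C(DEV_READY),\
	}

/* ...expanded once with C(a) pasting a SCI_ prefix to form enum values... */
#undef C
#define C(a) SCI_##a
enum toy_states TOY_STATES;
#undef C

/* ...and once with C(a) stringifying to build the matching name table. */
#define C(a) (#a)
static const char *toy_state_name(enum toy_states state)
{
	static const char * const strings[] = TOY_STATES;

	return strings[state];
}
#undef C
```

With this, `toy_state_name(SCI_DEV_READY)` yields `"DEV_READY"`, which is exactly the mechanism behind the `%d` → `%s` conversions in the dev_warn() hunks above.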
@@ -60,18 +60,15 @@
 #include "scu_event_codes.h"
 #include "scu_task_context.h"

+#undef C
+#define C(a) (#a)
+const char *rnc_state_name(enum scis_sds_remote_node_context_states state)
+{
+	static const char * const strings[] = RNC_STATES;
+
+	return strings[state];
+}
+#undef C
-/**
- *
- * @sci_rnc: The RNC for which the is posted request is being made.
- *
- * This method will return true if the RNC is not in the initial state.  In all
- * other states the RNC is considered active and this will return true.  The
- * destroy request of the state machine drives the RNC back to the initial
- * state.  If the state machine changes then this routine will also have to be
- * changed.  bool true if the state machine is not in the initial state false if
- * the state machine is in the initial state
- */

 /**
  *
...
@@ -85,61 +85,50 @@ struct sci_remote_node_context;
 typedef void (*scics_sds_remote_node_context_callback)(void *);

 /**
- * This is the enumeration of the remote node context states.
+ * enum sci_remote_node_context_states
+ * @SCI_RNC_INITIAL: initial state for a remote node context.  On a resume
+ * request the remote node context will transition to the posting state.
+ *
+ * @SCI_RNC_POSTING: transition state that posts the RNi to the hardware.  Once
+ * the RNC is posted the remote node context will be made ready.
+ *
+ * @SCI_RNC_INVALIDATING: transition state that will post an RNC invalidate to
+ * the hardware.  Once the invalidate is complete the remote node context will
+ * transition to the posting state.
+ *
+ * @SCI_RNC_RESUMING: transition state that will post an RNC resume to the
+ * hardare.  Once the event notification of resume complete is received the
+ * remote node context will transition to the ready state.
+ *
+ * @SCI_RNC_READY: state that the remote node context must be in to accept io
+ * request operations.
+ *
+ * @SCI_RNC_TX_SUSPENDED: state that the remote node context transitions to when
+ * it gets a TX suspend notification from the hardware.
+ *
+ * @SCI_RNC_TX_RX_SUSPENDED: state that the remote node context transitions to
+ * when it gets a TX RX suspend notification from the hardware.
+ *
+ * @SCI_RNC_AWAIT_SUSPENSION: wait state for the remote node context that waits
+ * for a suspend notification from the hardware.  This state is entered when
+ * either there is a request to supend the remote node context or when there is
+ * a TC completion where the remote node will be suspended by the hardware.
  */
-enum scis_sds_remote_node_context_states {
-	/**
-	 * This state is the initial state for a remote node context.  On a resume
-	 * request the remote node context will transition to the posting state.
-	 */
-	SCI_RNC_INITIAL,
-
-	/**
-	 * This is a transition state that posts the RNi to the hardware.  Once the RNC
-	 * is posted the remote node context will be made ready.
-	 */
-	SCI_RNC_POSTING,
-
-	/**
-	 * This is a transition state that will post an RNC invalidate to the
-	 * hardware.  Once the invalidate is complete the remote node context will
-	 * transition to the posting state.
-	 */
-	SCI_RNC_INVALIDATING,
-
-	/**
-	 * This is a transition state that will post an RNC resume to the hardare.
-	 * Once the event notification of resume complete is received the remote node
-	 * context will transition to the ready state.
-	 */
-	SCI_RNC_RESUMING,
-
-	/**
-	 * This is the state that the remote node context must be in to accept io
-	 * request operations.
-	 */
-	SCI_RNC_READY,
-
-	/**
-	 * This is the state that the remote node context transitions to when it gets
-	 * a TX suspend notification from the hardware.
-	 */
-	SCI_RNC_TX_SUSPENDED,
-
-	/**
-	 * This is the state that the remote node context transitions to when it gets
-	 * a TX RX suspend notification from the hardware.
-	 */
-	SCI_RNC_TX_RX_SUSPENDED,
-
-	/**
-	 * This state is a wait state for the remote node context that waits for a
-	 * suspend notification from the hardware.  This state is entered when either
-	 * there is a request to supend the remote node context or when there is a TC
-	 * completion where the remote node will be suspended by the hardware.
-	 */
-	SCI_RNC_AWAIT_SUSPENSION
-};
+#define RNC_STATES {\
+	C(RNC_INITIAL),\
+	C(RNC_POSTING),\
+	C(RNC_INVALIDATING),\
+	C(RNC_RESUMING),\
+	C(RNC_READY),\
+	C(RNC_TX_SUSPENDED),\
+	C(RNC_TX_RX_SUSPENDED),\
+	C(RNC_AWAIT_SUSPENSION),\
+	}
+#undef C
+#define C(a) SCI_##a
+enum scis_sds_remote_node_context_states RNC_STATES;
+#undef C
+
+const char *rnc_state_name(enum scis_sds_remote_node_context_states state);
/** /**
* *
...
@@ -53,6 +53,7 @@
  * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */

+#include <scsi/scsi_cmnd.h>
 #include "isci.h"
 #include "task.h"
 #include "request.h"
@@ -60,6 +61,16 @@
 #include "scu_event_codes.h"
 #include "sas.h"

+#undef C
+#define C(a) (#a)
+const char *req_state_name(enum sci_base_request_states state)
+{
+	static const char * const strings[] = REQUEST_STATES;
+
+	return strings[state];
+}
+#undef C
+
 static struct scu_sgl_element_pair *to_sgl_element_pair(struct isci_request *ireq,
 							int idx)
 {
@@ -264,6 +275,141 @@ static void scu_ssp_reqeust_construct_task_context(
 	task_context->response_iu_lower = lower_32_bits(dma_addr);
 }

+static u8 scu_bg_blk_size(struct scsi_device *sdp)
+{
+	switch (sdp->sector_size) {
+	case 512:
+		return 0;
+	case 1024:
+		return 1;
+	case 4096:
+		return 3;
+	default:
+		return 0xff;
+	}
+}
+
+static u32 scu_dif_bytes(u32 len, u32 sector_size)
+{
+	return (len >> ilog2(sector_size)) * 8;
+}
+
+static void scu_ssp_ireq_dif_insert(struct isci_request *ireq, u8 type, u8 op)
+{
+	struct scu_task_context *tc = ireq->tc;
+	struct scsi_cmnd *scmd = ireq->ttype_ptr.io_task_ptr->uldd_task;
+	u8 blk_sz = scu_bg_blk_size(scmd->device);
+
+	tc->block_guard_enable = 1;
+	tc->blk_prot_en = 1;
+	tc->blk_sz = blk_sz;
+	/* DIF write insert */
+	tc->blk_prot_func = 0x2;
+
+	tc->transfer_length_bytes += scu_dif_bytes(tc->transfer_length_bytes,
+						   scmd->device->sector_size);
+
+	/* always init to 0, used by hw */
+	tc->interm_crc_val = 0;
+
+	tc->init_crc_seed = 0;
+	tc->app_tag_verify = 0;
+	tc->app_tag_gen = 0;
+	tc->ref_tag_seed_verify = 0;
+
+	/* always init to same as bg_blk_sz */
+	tc->UD_bytes_immed_val = scmd->device->sector_size;
+
+	tc->reserved_DC_0 = 0;
+
+	/* always init to 8 */
+	tc->DIF_bytes_immed_val = 8;
+
+	tc->reserved_DC_1 = 0;
+	tc->bgc_blk_sz = scmd->device->sector_size;
+	tc->reserved_E0_0 = 0;
+	tc->app_tag_gen_mask = 0;
+
+	/** setup block guard control **/
+	tc->bgctl = 0;
+
+	/* DIF write insert */
+	tc->bgctl_f.op = 0x2;
+
+	tc->app_tag_verify_mask = 0;
+
+	/* must init to 0 for hw */
+	tc->blk_guard_err = 0;
+
+	tc->reserved_E8_0 = 0;
+
+	if ((type & SCSI_PROT_DIF_TYPE1) || (type & SCSI_PROT_DIF_TYPE2))
+		tc->ref_tag_seed_gen = scsi_get_lba(scmd) & 0xffffffff;
+	else if (type & SCSI_PROT_DIF_TYPE3)
+		tc->ref_tag_seed_gen = 0;
+}
+
+static void scu_ssp_ireq_dif_strip(struct isci_request *ireq, u8 type, u8 op)
+{
+	struct scu_task_context *tc = ireq->tc;
+	struct scsi_cmnd *scmd = ireq->ttype_ptr.io_task_ptr->uldd_task;
+	u8 blk_sz = scu_bg_blk_size(scmd->device);
+
+	tc->block_guard_enable = 1;
+	tc->blk_prot_en = 1;
+	tc->blk_sz = blk_sz;
+	/* DIF read strip */
+	tc->blk_prot_func = 0x1;
+
+	tc->transfer_length_bytes += scu_dif_bytes(tc->transfer_length_bytes,
+						   scmd->device->sector_size);
+
+	/* always init to 0, used by hw */
+	tc->interm_crc_val = 0;
+
+	tc->init_crc_seed = 0;
+	tc->app_tag_verify = 0;
+	tc->app_tag_gen = 0;
+
+	if ((type & SCSI_PROT_DIF_TYPE1) || (type & SCSI_PROT_DIF_TYPE2))
+		tc->ref_tag_seed_verify = scsi_get_lba(scmd) & 0xffffffff;
+	else if (type & SCSI_PROT_DIF_TYPE3)
+		tc->ref_tag_seed_verify = 0;
+
+	/* always init to same as bg_blk_sz */
+	tc->UD_bytes_immed_val = scmd->device->sector_size;
+
+	tc->reserved_DC_0 = 0;
+
+	/* always init to 8 */
+	tc->DIF_bytes_immed_val = 8;
+
+	tc->reserved_DC_1 = 0;
+	tc->bgc_blk_sz = scmd->device->sector_size;
+	tc->reserved_E0_0 = 0;
+	tc->app_tag_gen_mask = 0;
+
+	/** setup block guard control **/
+	tc->bgctl = 0;
+
+	/* DIF read strip */
+	tc->bgctl_f.crc_verify = 1;
+	tc->bgctl_f.op = 0x1;
+	if ((type & SCSI_PROT_DIF_TYPE1) || (type & SCSI_PROT_DIF_TYPE2)) {
+		tc->bgctl_f.ref_tag_chk = 1;
+		tc->bgctl_f.app_f_detect = 1;
+	} else if (type & SCSI_PROT_DIF_TYPE3)
+		tc->bgctl_f.app_ref_f_detect = 1;
+
+	tc->app_tag_verify_mask = 0;
+
+	/* must init to 0 for hw */
+	tc->blk_guard_err = 0;
+
+	tc->reserved_E8_0 = 0;
+	tc->ref_tag_seed_gen = 0;
+}
+
 /**
  * This method is will fill in the SCU Task Context for a SSP IO request.
  * @sci_req:
@@ -274,6 +420,10 @@ static void scu_ssp_io_request_construct_task_context(struct isci_request *ireq,
 	u32 len)
 {
 	struct scu_task_context *task_context = ireq->tc;
+	struct sas_task *sas_task = ireq->ttype_ptr.io_task_ptr;
+	struct scsi_cmnd *scmd = sas_task->uldd_task;
+	u8 prot_type = scsi_get_prot_type(scmd);
+	u8 prot_op = scsi_get_prot_op(scmd);

 	scu_ssp_reqeust_construct_task_context(ireq, task_context);
@@ -296,6 +446,13 @@ static void scu_ssp_io_request_construct_task_context(struct isci_request *ireq,
 	if (task_context->transfer_length_bytes > 0)
 		sci_request_build_sgl(ireq);
+
+	if (prot_type != SCSI_PROT_DIF_TYPE0) {
+		if (prot_op == SCSI_PROT_READ_STRIP)
+			scu_ssp_ireq_dif_strip(ireq, prot_type, prot_op);
+		else if (prot_op == SCSI_PROT_WRITE_INSERT)
+			scu_ssp_ireq_dif_insert(ireq, prot_type, prot_op);
+	}
 }
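The DIF helpers added above grow `transfer_length_bytes` by one 8-byte T10 protection tuple per logical sector, via `(len >> ilog2(sector_size)) * 8`. A standalone sketch of that arithmetic; `ilog2_u32` here is an assumed portable stand-in for the kernel's ilog2() and only valid for power-of-two sector sizes:

```c
#include <assert.h>
#include <stdint.h>

/* Portable stand-in for the kernel's ilog2() on power-of-two values. */
static unsigned int ilog2_u32(uint32_t v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

/* Extra bytes of T10 protection information for a transfer of `len`
 * data bytes: one 8-byte tuple (guard tag, application tag, reference
 * tag) per sector.  Mirrors the hunk's scu_dif_bytes(). */
static uint32_t dif_bytes(uint32_t len, uint32_t sector_size)
{
	return (len >> ilog2_u32(sector_size)) * 8;
}
```

For example, an 8-sector, 4096-byte transfer on 512-byte sectors carries 8 × 8 = 64 extra protection bytes, which is why the task context's transfer length must be widened before the hardware inserts or strips the tuples.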
/** /**
...@@ -519,18 +676,12 @@ sci_io_request_construct_sata(struct isci_request *ireq, ...@@ -519,18 +676,12 @@ sci_io_request_construct_sata(struct isci_request *ireq,
if (test_bit(IREQ_TMF, &ireq->flags)) { if (test_bit(IREQ_TMF, &ireq->flags)) {
struct isci_tmf *tmf = isci_request_access_tmf(ireq); struct isci_tmf *tmf = isci_request_access_tmf(ireq);
if (tmf->tmf_code == isci_tmf_sata_srst_high || dev_err(&ireq->owning_controller->pdev->dev,
tmf->tmf_code == isci_tmf_sata_srst_low) { "%s: Request 0x%p received un-handled SAT "
scu_stp_raw_request_construct_task_context(ireq); "management protocol 0x%x.\n",
return SCI_SUCCESS; __func__, ireq, tmf->tmf_code);
} else {
dev_err(&ireq->owning_controller->pdev->dev,
"%s: Request 0x%p received un-handled SAT "
"management protocol 0x%x.\n",
__func__, ireq, tmf->tmf_code);
return SCI_FAILURE; return SCI_FAILURE;
}
} }
if (!sas_protocol_ata(task->task_proto)) { if (!sas_protocol_ata(task->task_proto)) {
@@ -627,34 +778,6 @@ static enum sci_status sci_io_request_construct_basic_sata(struct isci_request *
 	return status;
 }

-enum sci_status sci_task_request_construct_sata(struct isci_request *ireq)
-{
-	enum sci_status status = SCI_SUCCESS;
-
-	/* check for management protocols */
-	if (test_bit(IREQ_TMF, &ireq->flags)) {
-		struct isci_tmf *tmf = isci_request_access_tmf(ireq);
-
-		if (tmf->tmf_code == isci_tmf_sata_srst_high ||
-		    tmf->tmf_code == isci_tmf_sata_srst_low) {
-			scu_stp_raw_request_construct_task_context(ireq);
-		} else {
-			dev_err(&ireq->owning_controller->pdev->dev,
-				"%s: Request 0x%p received un-handled SAT "
-				"Protocol 0x%x.\n",
-				__func__, ireq, tmf->tmf_code);
-
-			return SCI_FAILURE;
-		}
-	}
-
-	if (status != SCI_SUCCESS)
-		return status;
-	sci_change_state(&ireq->sm, SCI_REQ_CONSTRUCTED);
-
-	return status;
-}
-
 /**
  * sci_req_tx_bytes - bytes transferred when reply underruns request
  * @ireq: request that was terminated early
@@ -756,9 +879,6 @@ sci_io_request_terminate(struct isci_request *ireq)
 	case SCI_REQ_STP_PIO_WAIT_FRAME:
 	case SCI_REQ_STP_PIO_DATA_IN:
 	case SCI_REQ_STP_PIO_DATA_OUT:
-	case SCI_REQ_STP_SOFT_RESET_WAIT_H2D_ASSERTED:
-	case SCI_REQ_STP_SOFT_RESET_WAIT_H2D_DIAG:
-	case SCI_REQ_STP_SOFT_RESET_WAIT_D2H:
 	case SCI_REQ_ATAPI_WAIT_H2D:
 	case SCI_REQ_ATAPI_WAIT_PIO_SETUP:
 	case SCI_REQ_ATAPI_WAIT_D2H:
@@ -800,7 +920,8 @@ enum sci_status sci_request_complete(struct isci_request *ireq)
 	state = ireq->sm.current_state_id;

 	if (WARN_ONCE(state != SCI_REQ_COMPLETED,
-		      "isci: request completion from wrong state (%d)\n", state))
+		      "isci: request completion from wrong state (%s)\n",
+		      req_state_name(state)))
 		return SCI_FAILURE_INVALID_STATE;

 	if (ireq->saved_rx_frame_index != SCU_INVALID_FRAME_INDEX)
@@ -821,8 +942,8 @@ enum sci_status sci_io_request_event_handler(struct isci_request *ireq,
 	state = ireq->sm.current_state_id;

 	if (state != SCI_REQ_STP_PIO_DATA_IN) {
-		dev_warn(&ihost->pdev->dev, "%s: (%x) in wrong state %d\n",
-			 __func__, event_code, state);
+		dev_warn(&ihost->pdev->dev, "%s: (%x) in wrong state %s\n",
+			 __func__, event_code, req_state_name(state));

 		return SCI_FAILURE_INVALID_STATE;
 	}
@@ -1938,59 +2059,6 @@ sci_io_request_frame_handler(struct isci_request *ireq,
 		return status;
 	}

-	case SCI_REQ_STP_SOFT_RESET_WAIT_D2H: {
-		struct dev_to_host_fis *frame_header;
-		u32 *frame_buffer;
-
-		status = sci_unsolicited_frame_control_get_header(&ihost->uf_control,
-								  frame_index,
-								  (void **)&frame_header);
-		if (status != SCI_SUCCESS) {
-			dev_err(&ihost->pdev->dev,
-				"%s: SCIC IO Request 0x%p could not get frame "
-				"header for frame index %d, status %x\n",
-				__func__,
-				stp_req,
-				frame_index,
-				status);
-			return status;
-		}
-
-		switch (frame_header->fis_type) {
-		case FIS_REGD2H:
-			sci_unsolicited_frame_control_get_buffer(&ihost->uf_control,
-								 frame_index,
-								 (void **)&frame_buffer);
-
-			sci_controller_copy_sata_response(&ireq->stp.rsp,
-							  frame_header,
-							  frame_buffer);
-
-			/* The command has completed with error */
-			ireq->scu_status = SCU_TASK_DONE_CHECK_RESPONSE;
-			ireq->sci_status = SCI_FAILURE_IO_RESPONSE_VALID;
-			break;
-
-		default:
-			dev_warn(&ihost->pdev->dev,
-				 "%s: IO Request:0x%p Frame Id:%d protocol "
-				 "violation occurred\n",
-				 __func__,
-				 stp_req,
-				 frame_index);
-
-			ireq->scu_status = SCU_TASK_DONE_UNEXP_FIS;
-			ireq->sci_status = SCI_FAILURE_PROTOCOL_VIOLATION;
-			break;
-		}
-
-		sci_change_state(&ireq->sm, SCI_REQ_COMPLETED);
-
-		/* Frame has been decoded return it to the controller */
-		sci_controller_release_frame(ihost, frame_index);
-
-		return status;
-	}
-
 	case SCI_REQ_ATAPI_WAIT_PIO_SETUP: {
 		struct sas_task *task = isci_request_access_task(ireq);
...@@ -2088,57 +2156,6 @@ static enum sci_status stp_request_udma_await_tc_event(struct isci_request *ireq ...@@ -2088,57 +2156,6 @@ static enum sci_status stp_request_udma_await_tc_event(struct isci_request *ireq
return status; return status;
} }
static enum sci_status
stp_request_soft_reset_await_h2d_asserted_tc_event(struct isci_request *ireq,
u32 completion_code)
{
switch (SCU_GET_COMPLETION_TL_STATUS(completion_code)) {
case SCU_MAKE_COMPLETION_STATUS(SCU_TASK_DONE_GOOD):
ireq->scu_status = SCU_TASK_DONE_GOOD;
ireq->sci_status = SCI_SUCCESS;
sci_change_state(&ireq->sm, SCI_REQ_STP_SOFT_RESET_WAIT_H2D_DIAG);
break;
default:
/*
* All other completion status cause the IO to be complete.
* If a NAK was received, then it is up to the user to retry
* the request.
*/
ireq->scu_status = SCU_NORMALIZE_COMPLETION_STATUS(completion_code);
ireq->sci_status = SCI_FAILURE_CONTROLLER_SPECIFIC_IO_ERR;
sci_change_state(&ireq->sm, SCI_REQ_COMPLETED);
break;
}
return SCI_SUCCESS;
}
static enum sci_status
stp_request_soft_reset_await_h2d_diagnostic_tc_event(struct isci_request *ireq,
u32 completion_code)
{
switch (SCU_GET_COMPLETION_TL_STATUS(completion_code)) {
case SCU_MAKE_COMPLETION_STATUS(SCU_TASK_DONE_GOOD):
ireq->scu_status = SCU_TASK_DONE_GOOD;
ireq->sci_status = SCI_SUCCESS;
sci_change_state(&ireq->sm, SCI_REQ_STP_SOFT_RESET_WAIT_D2H);
break;
default:
/* All other completion status cause the IO to be complete. If
* a NAK was received, then it is up to the user to retry the
* request.
*/
ireq->scu_status = SCU_NORMALIZE_COMPLETION_STATUS(completion_code);
ireq->sci_status = SCI_FAILURE_CONTROLLER_SPECIFIC_IO_ERR;
sci_change_state(&ireq->sm, SCI_REQ_COMPLETED);
break;
}
return SCI_SUCCESS;
}
static enum sci_status atapi_raw_completion(struct isci_request *ireq, u32 completion_code,
					    enum sci_base_request_states next)
{
@@ -2284,14 +2301,6 @@ sci_io_request_tc_completion(struct isci_request *ireq,
	case SCI_REQ_STP_PIO_DATA_OUT:
		return pio_data_out_tx_done_tc_event(ireq, completion_code);

	case SCI_REQ_STP_SOFT_RESET_WAIT_H2D_ASSERTED:
		return stp_request_soft_reset_await_h2d_asserted_tc_event(ireq,
									  completion_code);

	case SCI_REQ_STP_SOFT_RESET_WAIT_H2D_DIAG:
		return stp_request_soft_reset_await_h2d_diagnostic_tc_event(ireq,
									    completion_code);

	case SCI_REQ_ABORTING:
		return request_aborting_state_tc_event(ireq,
						       completion_code);
@@ -2308,12 +2317,8 @@ sci_io_request_tc_completion(struct isci_request *ireq,
		return atapi_data_tc_completion_handler(ireq, completion_code);

	default:
		dev_warn(&ihost->pdev->dev, "%s: %x in wrong state %s\n",
			 __func__, completion_code, req_state_name(state));
		return SCI_FAILURE_INVALID_STATE;
	}
}
@@ -3065,10 +3070,6 @@ static void sci_request_started_state_enter(struct sci_base_state_machine *sm)
	 */
	if (!task && dev->dev_type == SAS_END_DEV) {
		state = SCI_REQ_TASK_WAIT_TC_COMP;
	} else if (!task &&
		   (isci_request_access_tmf(ireq)->tmf_code == isci_tmf_sata_srst_high ||
		    isci_request_access_tmf(ireq)->tmf_code == isci_tmf_sata_srst_low)) {
		state = SCI_REQ_STP_SOFT_RESET_WAIT_H2D_ASSERTED;
	} else if (task && task->task_proto == SAS_PROTOCOL_SMP) {
		state = SCI_REQ_SMP_WAIT_RESP;
	} else if (task && sas_protocol_ata(task->task_proto) &&
@@ -3125,31 +3126,6 @@ static void sci_stp_request_started_pio_await_h2d_completion_enter(struct sci_base_state_machine *sm)
	ireq->target_device->working_request = ireq;
}
static void sci_stp_request_started_soft_reset_await_h2d_asserted_completion_enter(struct sci_base_state_machine *sm)
{
struct isci_request *ireq = container_of(sm, typeof(*ireq), sm);
ireq->target_device->working_request = ireq;
}
static void sci_stp_request_started_soft_reset_await_h2d_diagnostic_completion_enter(struct sci_base_state_machine *sm)
{
struct isci_request *ireq = container_of(sm, typeof(*ireq), sm);
struct scu_task_context *tc = ireq->tc;
struct host_to_dev_fis *h2d_fis;
enum sci_status status;
/* Clear the SRST bit */
h2d_fis = &ireq->stp.cmd;
h2d_fis->control = 0;
/* Clear the TC control bit */
tc->control_frame = 0;
status = sci_controller_continue_io(ireq);
WARN_ONCE(status != SCI_SUCCESS, "isci: continue io failure\n");
}
static const struct sci_base_state sci_request_state_table[] = {
	[SCI_REQ_INIT] = { },
	[SCI_REQ_CONSTRUCTED] = { },
@@ -3168,13 +3144,6 @@ static const struct sci_base_state sci_request_state_table[] = {
	[SCI_REQ_STP_PIO_DATA_OUT] = { },
	[SCI_REQ_STP_UDMA_WAIT_TC_COMP] = { },
	[SCI_REQ_STP_UDMA_WAIT_D2H] = { },
	[SCI_REQ_STP_SOFT_RESET_WAIT_H2D_ASSERTED] = {
		.enter_state = sci_stp_request_started_soft_reset_await_h2d_asserted_completion_enter,
	},
	[SCI_REQ_STP_SOFT_RESET_WAIT_H2D_DIAG] = {
		.enter_state = sci_stp_request_started_soft_reset_await_h2d_diagnostic_completion_enter,
	},
	[SCI_REQ_STP_SOFT_RESET_WAIT_D2H] = { },
	[SCI_REQ_TASK_WAIT_TC_COMP] = { },
	[SCI_REQ_TASK_WAIT_TC_RESP] = { },
	[SCI_REQ_SMP_WAIT_RESP] = { },
@@ -3649,8 +3618,7 @@ int isci_request_execute(struct isci_host *ihost, struct isci_remote_device *idev
			/* Cause this task to be scheduled in the SCSI error
			 * handler thread.
			 */
			sas_task_abort(task);

			/* Change the status, since we are holding
			 * the I/O until it is managed by the SCSI
...
@@ -182,138 +182,103 @@ static inline struct isci_request *to_ireq(struct isci_stp_request *stp_req)
}
/**
 * enum sci_base_request_states - request state machine states
 *
 * @SCI_REQ_INIT: Simply the initial state for the base request state machine.
 *
 * @SCI_REQ_CONSTRUCTED: This state indicates that the request has been
 * constructed.  This state is entered from the INITIAL state.
 *
 * @SCI_REQ_STARTED: This state indicates that the request has been started.
 * This state is entered from the CONSTRUCTED state.
 *
 * @SCI_REQ_STP_UDMA_WAIT_TC_COMP:
 * @SCI_REQ_STP_UDMA_WAIT_D2H:
 * @SCI_REQ_STP_NON_DATA_WAIT_H2D:
 * @SCI_REQ_STP_NON_DATA_WAIT_D2H:
 *
 * @SCI_REQ_STP_PIO_WAIT_H2D: While in this state the IO request object is
 * waiting for the TC completion notification for the H2D Register FIS.
 *
 * @SCI_REQ_STP_PIO_WAIT_FRAME: While in this state the IO request object is
 * waiting for either a PIO Setup FIS or a D2H register FIS.  The type of frame
 * received is based on the result of the prior frame and line conditions.
 *
 * @SCI_REQ_STP_PIO_DATA_IN: While in this state the IO request object is
 * waiting for a DATA frame from the device.
 *
 * @SCI_REQ_STP_PIO_DATA_OUT: While in this state the IO request object is
 * waiting to transmit the next data frame to the device.
 *
 * @SCI_REQ_ATAPI_WAIT_H2D: While in this state the IO request object is
 * waiting for the TC completion notification for the H2D Register FIS.
 *
 * @SCI_REQ_ATAPI_WAIT_PIO_SETUP: While in this state the IO request object is
 * waiting for a PIO Setup.
 *
 * @SCI_REQ_ATAPI_WAIT_D2H: The non-data IO transits to this state after
 * receiving the TC completion.  While in this state the IO request object is
 * waiting for the D2H status frame as a UF.
 *
 * @SCI_REQ_ATAPI_WAIT_TC_COMP: When transmitting raw frames the hardware
 * reports task context completion after every frame submission, so in the
 * non-accelerated case we need to expect the completion for the "cdb" frame.
 *
 * @SCI_REQ_TASK_WAIT_TC_COMP: The AWAIT_TC_COMPLETION sub-state indicates that
 * the started raw task management request is waiting for the transmission of
 * the initial frame (i.e. command, task, etc.).
 *
 * @SCI_REQ_TASK_WAIT_TC_RESP: This sub-state indicates that the started task
 * management request is waiting for the reception of an unsolicited frame
 * (i.e. response IU).
 *
 * @SCI_REQ_SMP_WAIT_RESP: This sub-state indicates that the started SMP
 * request is waiting for the reception of an unsolicited frame
 * (i.e. response IU).
 *
 * @SCI_REQ_SMP_WAIT_TC_COMP: The AWAIT_TC_COMPLETION sub-state indicates that
 * the started SMP request is waiting for the transmission of the initial
 * frame (i.e. command, task, etc.).
 *
 * @SCI_REQ_COMPLETED: This state indicates that the request has completed.
 * This state is entered from the STARTED state and from the ABORTING state.
 *
 * @SCI_REQ_ABORTING: This state indicates that the request is in the process
 * of being terminated/aborted.  This state is entered from the CONSTRUCTED
 * state and from the STARTED state.
 *
 * @SCI_REQ_FINAL: Simply the final state for the base request state machine.
 */
#define REQUEST_STATES {\
	C(REQ_INIT),\
	C(REQ_CONSTRUCTED),\
	C(REQ_STARTED),\
	C(REQ_STP_UDMA_WAIT_TC_COMP),\
	C(REQ_STP_UDMA_WAIT_D2H),\
	C(REQ_STP_NON_DATA_WAIT_H2D),\
	C(REQ_STP_NON_DATA_WAIT_D2H),\
	C(REQ_STP_PIO_WAIT_H2D),\
	C(REQ_STP_PIO_WAIT_FRAME),\
	C(REQ_STP_PIO_DATA_IN),\
	C(REQ_STP_PIO_DATA_OUT),\
	C(REQ_ATAPI_WAIT_H2D),\
	C(REQ_ATAPI_WAIT_PIO_SETUP),\
	C(REQ_ATAPI_WAIT_D2H),\
	C(REQ_ATAPI_WAIT_TC_COMP),\
	C(REQ_TASK_WAIT_TC_COMP),\
	C(REQ_TASK_WAIT_TC_RESP),\
	C(REQ_SMP_WAIT_RESP),\
	C(REQ_SMP_WAIT_TC_COMP),\
	C(REQ_COMPLETED),\
	C(REQ_ABORTING),\
	C(REQ_FINAL),\
	}
#undef C
#define C(a) SCI_##a
enum sci_base_request_states REQUEST_STATES;
#undef C

const char *req_state_name(enum sci_base_request_states state);
enum sci_status sci_request_start(struct isci_request *ireq);
enum sci_status sci_io_request_terminate(struct isci_request *ireq);
@@ -446,10 +411,7 @@ sci_task_request_construct(struct isci_host *ihost,
			   struct isci_remote_device *idev,
			   u16 io_tag,
			   struct isci_request *ireq);
enum sci_status sci_task_request_construct_ssp(struct isci_request *ireq);
void sci_smp_request_copy_response(struct isci_request *ireq);

static inline int isci_task_is_ncq_recovery(struct sas_task *task)
...
@@ -866,9 +866,9 @@ struct scu_task_context {
	struct transport_snapshot snapshot; /* read only set to 0 */

	/* OFFSET 0x5C */
	u32 blk_prot_en:1;
	u32 blk_sz:2;
	u32 blk_prot_func:2;
	u32 reserved_5C_0:9;
	u32 active_sgl_element:2;	/* read only set to 0 */
	u32 sgl_exhausted:1;		/* read only set to 0 */
@@ -896,33 +896,56 @@ struct scu_task_context {
	u32 reserved_C4_CC[3];

	/* OFFSET 0xD0 */
	u32 interm_crc_val:16;
	u32 init_crc_seed:16;

	/* OFFSET 0xD4 */
	u32 app_tag_verify:16;
	u32 app_tag_gen:16;

	/* OFFSET 0xD8 */
	u32 ref_tag_seed_verify;

	/* OFFSET 0xDC */
	u32 UD_bytes_immed_val:13;
	u32 reserved_DC_0:3;
	u32 DIF_bytes_immed_val:4;
	u32 reserved_DC_1:12;

	/* OFFSET 0xE0 */
	u32 bgc_blk_sz:13;
	u32 reserved_E0_0:3;
	u32 app_tag_gen_mask:16;

	/* OFFSET 0xE4 */
	union {
		u16 bgctl;
		struct {
			u16 crc_verify:1;
			u16 app_tag_chk:1;
			u16 ref_tag_chk:1;
			u16 op:2;
			u16 legacy:1;
			u16 invert_crc_seed:1;
			u16 ref_tag_gen:1;
			u16 fixed_ref_tag:1;
			u16 invert_crc:1;
			u16 app_ref_f_detect:1;
			u16 uninit_dif_check_err:1;
			u16 uninit_dif_bypass:1;
			u16 app_f_detect:1;
			u16 reserved_0:2;
		} bgctl_f;
	};

	u16 app_tag_verify_mask;

	/* OFFSET 0xE8 */
	u32 blk_guard_err:8;
	u32 reserved_E8_0:24;

	/* OFFSET 0xEC */
	u32 ref_tag_seed_gen;

	/* OFFSET 0xF0 */
	u32 intermediate_crc_valid_snapshot:16;
@@ -937,6 +960,6 @@ struct scu_task_context {

	/* OFFSET 0xFC */
	u32 reference_tag_seed_for_generate_function_snapshot;

} __packed;

#endif /* _SCU_TASK_CONTEXT_H_ */
@@ -96,8 +96,7 @@ static void isci_task_refuse(struct isci_host *ihost, struct sas_task *task,
			__func__, task, response, status);

		task->lldd_task = NULL;
		task->task_done(task);
		break;

	case isci_perform_aborted_io_completion:
@@ -117,8 +116,7 @@ static void isci_task_refuse(struct isci_host *ihost, struct sas_task *task,
			"%s: Error - task = %p, response=%d, "
			"status=%d\n",
			__func__, task, response, status);

		sas_task_abort(task);
		break;

	default:
@@ -249,46 +247,6 @@ int isci_task_execute_task(struct sas_task *task, int num, gfp_t gfp_flags)
	return 0;
}
static enum sci_status isci_sata_management_task_request_build(struct isci_request *ireq)
{
struct isci_tmf *isci_tmf;
enum sci_status status;
if (!test_bit(IREQ_TMF, &ireq->flags))
return SCI_FAILURE;
isci_tmf = isci_request_access_tmf(ireq);
switch (isci_tmf->tmf_code) {
case isci_tmf_sata_srst_high:
case isci_tmf_sata_srst_low: {
struct host_to_dev_fis *fis = &ireq->stp.cmd;
memset(fis, 0, sizeof(*fis));
fis->fis_type = 0x27;
fis->flags &= ~0x80;
fis->flags &= 0xF0;
if (isci_tmf->tmf_code == isci_tmf_sata_srst_high)
fis->control |= ATA_SRST;
else
fis->control &= ~ATA_SRST;
break;
}
	/* other management commands go here... */
default:
return SCI_FAILURE;
}
/* core builds the protocol specific request
* based on the h2d fis.
*/
status = sci_task_request_construct_sata(ireq);
return status;
}
static struct isci_request *isci_task_request_build(struct isci_host *ihost,
						    struct isci_remote_device *idev,
						    u16 tag, struct isci_tmf *isci_tmf)
@@ -328,13 +286,6 @@ static struct isci_request *isci_task_request_build(struct isci_host *ihost,
		return NULL;
	}

	if (dev->dev_type == SATA_DEV || (dev->tproto & SAS_PROTOCOL_STP)) {
		isci_tmf->proto = SAS_PROTOCOL_SATA;
		status = isci_sata_management_task_request_build(ireq);

		if (status != SCI_SUCCESS)
			return NULL;
	}

	return ireq;
}
@@ -873,53 +824,20 @@ static int isci_task_send_lu_reset_sas(
	return ret;
}
static int isci_task_send_lu_reset_sata(struct isci_host *ihost,
struct isci_remote_device *idev, u8 *lun)
{
int ret = TMF_RESP_FUNC_FAILED;
struct isci_tmf tmf;
/* Send the soft reset to the target */
#define ISCI_SRST_TIMEOUT_MS 25000 /* 25 second timeout. */
isci_task_build_tmf(&tmf, isci_tmf_sata_srst_high, NULL, NULL);
ret = isci_task_execute_tmf(ihost, idev, &tmf, ISCI_SRST_TIMEOUT_MS);
if (ret != TMF_RESP_FUNC_COMPLETE) {
dev_dbg(&ihost->pdev->dev,
"%s: Assert SRST failed (%p) = %x",
__func__, idev, ret);
/* Return the failure so that the LUN reset is escalated
* to a target reset.
*/
}
return ret;
}
/**
* isci_task_lu_reset() - This function is one of the SAS Domain Template
 * functions.  This is one of the Task Management functions called by libsas,
* to reset the given lun. Note the assumption that while this call is
* executing, no I/O will be sent by the host to the device.
* @lun: This parameter specifies the lun to be reset.
*
* status, zero indicates success.
*/
int isci_task_lu_reset(struct domain_device *dev, u8 *lun)
{
	struct isci_host *isci_host = dev_to_ihost(dev);
	struct isci_remote_device *isci_device;
	unsigned long flags;
	int ret;

	spin_lock_irqsave(&isci_host->scic_lock, flags);
	isci_device = isci_lookup_device(dev);
	spin_unlock_irqrestore(&isci_host->scic_lock, flags);

	dev_dbg(&isci_host->pdev->dev,
		"%s: domain_device=%p, isci_host=%p; isci_device=%p\n",
		__func__, dev, isci_host, isci_device);

	if (!isci_device) {
		/* If the device is gone, stop the escalations. */
@@ -928,11 +846,11 @@ int isci_task_lu_reset(struct domain_device *domain_device, u8 *lun)
		ret = TMF_RESP_FUNC_COMPLETE;
		goto out;
	}

	set_bit(IDEV_EH, &isci_device->flags);

	/* Send the task management part of the reset. */
	if (dev_is_sata(dev)) {
		sas_ata_schedule_reset(dev);
		ret = TMF_RESP_FUNC_COMPLETE;
	} else
		ret = isci_task_send_lu_reset_sas(isci_host, isci_device, lun);
@@ -1062,9 +980,6 @@ int isci_task_abort_task(struct sas_task *task)
		"%s: dev = %p, task = %p, old_request == %p\n",
		__func__, isci_device, task, old_request);

	if (isci_device)
		set_bit(IDEV_EH, &isci_device->flags);

	/* Device reset conditions signalled in task_state_flags are the
	 * responsibility of libsas to observe at the start of the error
	 * handler thread.
@@ -1332,29 +1247,35 @@ isci_task_request_complete(struct isci_host *ihost,
}
static int isci_reset_device(struct isci_host *ihost,
			     struct domain_device *dev,
			     struct isci_remote_device *idev)
{
	int rc;
	unsigned long flags;
	enum sci_status status;
	struct sas_phy *phy = sas_get_local_phy(dev);
	struct isci_port *iport = dev->port->lldd_port;

	dev_dbg(&ihost->pdev->dev, "%s: idev %p\n", __func__, idev);

	spin_lock_irqsave(&ihost->scic_lock, flags);
	status = sci_remote_device_reset(idev);
	spin_unlock_irqrestore(&ihost->scic_lock, flags);

	if (status != SCI_SUCCESS) {
		dev_dbg(&ihost->pdev->dev,
			"%s: sci_remote_device_reset(%p) returned %d!\n",
			__func__, idev, status);
		rc = TMF_RESP_FUNC_FAILED;
		goto out;
	}

	if (scsi_is_sas_phy_local(phy)) {
		struct isci_phy *iphy = &ihost->phys[phy->number];

		rc = isci_port_perform_hard_reset(ihost, iport, iphy);
	} else
		rc = sas_phy_reset(phy, !dev_is_sata(dev));

	/* Terminate in-progress I/O now. */
	isci_remote_device_nuke_requests(ihost, idev);
@@ -1371,7 +1292,8 @@ static int isci_reset_device(struct isci_host *ihost,
	}

	dev_dbg(&ihost->pdev->dev, "%s: idev %p complete.\n", __func__, idev);
 out:
	sas_put_local_phy(phy);
	return rc;
}
@@ -1386,35 +1308,15 @@ int isci_task_I_T_nexus_reset(struct domain_device *dev)
	idev = isci_lookup_device(dev);
	spin_unlock_irqrestore(&ihost->scic_lock, flags);

	if (!idev || !test_bit(IDEV_EH, &idev->flags)) {
		ret = TMF_RESP_FUNC_COMPLETE;
		goto out;
	}

	ret = isci_reset_device(ihost, idev);
 out:
	isci_put_device(idev);
	return ret;
}

int isci_bus_reset_handler(struct scsi_cmnd *cmd)
{
	struct domain_device *dev = sdev_to_domain_dev(cmd->device);
	struct isci_host *ihost = dev_to_ihost(dev);
	struct isci_remote_device *idev;
	unsigned long flags;
	int ret;

	spin_lock_irqsave(&ihost->scic_lock, flags);
	idev = isci_lookup_device(dev);
	spin_unlock_irqrestore(&ihost->scic_lock, flags);

	if (!idev) {
		/* XXX: need to cleanup any ireqs targeting this
		 * domain_device
		 */
		ret = TMF_RESP_FUNC_COMPLETE;
		goto out;
	}

	ret = isci_reset_device(ihost, dev, idev);
 out:
	isci_put_device(idev);
	return ret;
...
@@ -462,3 +462,4 @@ int lpfc_issue_unreg_vfi(struct lpfc_vport *);
int lpfc_selective_reset(struct lpfc_hba *);
int lpfc_sli4_read_config(struct lpfc_hba *phba);
int lpfc_scsi_buf_update(struct lpfc_hba *phba);
void lpfc_sli4_node_prep(struct lpfc_hba *phba);