Commit a829a844 authored by Linus Torvalds

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "This update includes the usual round of major driver updates (ncr5380,
  lpfc, hisi_sas, megaraid_sas, ufs, ibmvscsis, mpt3sas).

  There's also an assortment of minor fixes, mostly in error legs or
  other not very user visible stuff. The major change is the
  pci_alloc_irq_vectors replacement for the old pci_msix_.. calls; this
  effectively makes IRQ mapping generic for the drivers and allows
  blk_mq to use the information"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (256 commits)
  scsi: qla4xxx: switch to pci_alloc_irq_vectors
  scsi: hisi_sas: support deferred probe for v2 hw
  scsi: megaraid_sas: switch to pci_alloc_irq_vectors
  scsi: scsi_devinfo: remove synchronous ALUA for NETAPP devices
  scsi: be2iscsi: set errno on error path
  scsi: be2iscsi: set errno on error path
  scsi: hpsa: fallback to use legacy REPORT PHYS command
  scsi: scsi_dh_alua: Fix RCU annotations
  scsi: hpsa: use %phN for short hex dumps
  scsi: hisi_sas: fix free'ing in probe and remove
  scsi: isci: switch to pci_alloc_irq_vectors
  scsi: ipr: Fix runaway IRQs when falling back from MSI to LSI
  scsi: dpt_i2o: double free on error path
  scsi: cxlflash: Migrate scsi command pointer to AFU command
  scsi: cxlflash: Migrate IOARRIN specific routines to function pointers
  scsi: cxlflash: Cleanup queuecommand()
  scsi: cxlflash: Cleanup send_tmf()
  scsi: cxlflash: Remove AFU command lock
  scsi: cxlflash: Wait for active AFU commands to timeout upon tear down
  scsi: cxlflash: Remove private command pool
  ...
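
For reference, a minimal sketch of the conversion the pull message describes, using a hypothetical mydrv structure (not from any driver in this series). The old interface made the caller own a struct msix_entry table and worked for MSI-X only; pci_alloc_irq_vectors() lets the PCI core choose MSI-X, MSI, or legacy INTx and optionally spread vectors across CPUs, information blk-mq can then reuse when building its queue map.

#include <linux/pci.h>

struct mydrv {
	struct pci_dev *pdev;
	int nvec;
};

/* Old style: driver-owned msix_entry table, MSI-X only. */
static int mydrv_setup_irqs_old(struct mydrv *d, struct msix_entry *entries,
				int want)
{
	int i, ret;

	for (i = 0; i < want; i++)
		entries[i].entry = i;
	ret = pci_enable_msix_range(d->pdev, entries, 1, want);
	if (ret < 0)
		return ret;
	d->nvec = ret;		/* IRQ numbers sit in entries[i].vector */
	return 0;
}

/* New style: one call, any interrupt type, optional CPU affinity. */
static int mydrv_setup_irqs_new(struct mydrv *d, int want)
{
	int ret;

	ret = pci_alloc_irq_vectors(d->pdev, 1, want,
				    PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);
	if (ret < 0)
		return ret;
	d->nvec = ret;		/* IRQ numbers via pci_irq_vector(pdev, i) */
	return 0;
}
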
......@@ -6,6 +6,7 @@ Main node required properties:
- compatible : value should be as follows:
(a) "hisilicon,hip05-sas-v1" for v1 hw in hip05 chipset
(b) "hisilicon,hip06-sas-v2" for v2 hw in hip06 chipset
(c) "hisilicon,hip07-sas-v2" for v2 hw in hip07 chipset
- sas-addr : array of 8 bytes for host SAS address
- reg : Address and length of the SAS register
- hisilicon,sas-syscon: phandle of syscon used for sas control
......
......@@ -7,8 +7,11 @@ To bind UFS PHY with UFS host controller, the controller node should
contain a phandle reference to UFS PHY node.
Required properties:
- compatible : compatible list, contains "qcom,ufs-phy-qmp-20nm"
or "qcom,ufs-phy-qmp-14nm" according to the relevant phy in use.
- compatible : compatible list, contains one of the following -
"qcom,ufs-phy-qmp-20nm" for 20nm ufs phy,
"qcom,ufs-phy-qmp-14nm" for legacy 14nm ufs phy,
"qcom,msm8996-ufs-phy-qmp-14nm" for 14nm ufs phy
present on MSM8996 chipset.
- reg : should contain PHY register address space (mandatory),
- reg-names : indicates various resources passed to driver (via reg property) by name.
Required "reg-names" is "phy_mem".
......
......@@ -3192,15 +3192,15 @@ S: Supported
F: drivers/clocksource
CISCO FCOE HBA DRIVER
M: Hiral Patel <hiralpat@cisco.com>
M: Suma Ramars <sramars@cisco.com>
M: Brian Uchino <buchino@cisco.com>
M: Satish Kharat <satishkh@cisco.com>
M: Sesidhar Baddela <sebaddel@cisco.com>
M: Karan Tilak Kumar <kartilak@cisco.com>
L: linux-scsi@vger.kernel.org
S: Supported
F: drivers/scsi/fnic/
CISCO SCSI HBA DRIVER
M: Narsimhulu Musini <nmusini@cisco.com>
M: Karan Tilak Kumar <kartilak@cisco.com>
M: Sesidhar Baddela <sebaddel@cisco.com>
L: linux-scsi@vger.kernel.org
S: Supported
......@@ -4787,11 +4787,11 @@ M: David Woodhouse <dwmw2@infradead.org>
L: linux-embedded@vger.kernel.org
S: Maintained
EMULEX/AVAGO LPFC FC/FCOE SCSI DRIVER
M: James Smart <james.smart@avagotech.com>
M: Dick Kennedy <dick.kennedy@avagotech.com>
EMULEX/BROADCOM LPFC FC/FCOE SCSI DRIVER
M: James Smart <james.smart@broadcom.com>
M: Dick Kennedy <dick.kennedy@broadcom.com>
L: linux-scsi@vger.kernel.org
W: http://www.avagotech.com
W: http://www.broadcom.com
S: Supported
F: drivers/scsi/lpfc/
......@@ -5717,7 +5717,6 @@ F: drivers/watchdog/hpwdt.c
HEWLETT-PACKARD SMART ARRAY RAID DRIVER (hpsa)
M: Don Brace <don.brace@microsemi.com>
L: iss_storagedev@hp.com
L: esc.storagedev@microsemi.com
L: linux-scsi@vger.kernel.org
S: Supported
......@@ -5728,7 +5727,6 @@ F: include/uapi/linux/cciss*.h
HEWLETT-PACKARD SMART CISS RAID DRIVER (cciss)
M: Don Brace <don.brace@microsemi.com>
L: iss_storagedev@hp.com
L: esc.storagedev@microsemi.com
L: linux-scsi@vger.kernel.org
S: Supported
......@@ -7968,12 +7966,12 @@ S: Maintained
F: drivers/net/wireless/mediatek/mt7601u/
MEGARAID SCSI/SAS DRIVERS
M: Kashyap Desai <kashyap.desai@avagotech.com>
M: Sumit Saxena <sumit.saxena@avagotech.com>
M: Uday Lingala <uday.lingala@avagotech.com>
L: megaraidlinux.pdl@avagotech.com
M: Kashyap Desai <kashyap.desai@broadcom.com>
M: Sumit Saxena <sumit.saxena@broadcom.com>
M: Shivasharan S <shivasharan.srikanteshwara@broadcom.com>
L: megaraidlinux.pdl@broadcom.com
L: linux-scsi@vger.kernel.org
W: http://www.lsi.com
W: http://www.avagotech.com/support/
S: Maintained
F: Documentation/scsi/megaraid.txt
F: drivers/scsi/megaraid.*
......@@ -8453,7 +8451,6 @@ F: drivers/scsi/arm/oak.c
F: drivers/scsi/atari_scsi.*
F: drivers/scsi/dmx3191d.c
F: drivers/scsi/g_NCR5380.*
F: drivers/scsi/g_NCR5380_mmio.c
F: drivers/scsi/mac_scsi.*
F: drivers/scsi/sun3_scsi.*
F: drivers/scsi/sun3_scsi_vme.c
......@@ -12547,7 +12544,8 @@ F: Documentation/scsi/ufs.txt
F: drivers/scsi/ufs/
UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER DWC HOOKS
M: Joao Pinto <Joao.Pinto@synopsys.com>
M: Manjunath M Bettegowda <manjumb@synopsys.com>
M: Prabu Thangamuthu <prabut@synopsys.com>
L: linux-scsi@vger.kernel.org
S: Supported
F: drivers/scsi/ufs/*dwc*
......
......@@ -87,6 +87,7 @@ int blk_mq_map_queues(struct blk_mq_tag_set *set)
free_cpumask_var(cpus);
return 0;
}
EXPORT_SYMBOL_GPL(blk_mq_map_queues);
/*
* We have no quick way of doing reverse lookups. This is only used at
......
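
A note on the export above: with blk_mq_map_queues() visible to modules, a driver's own queue-mapping callback can fall back to the generic spread. A hedged sketch (hypothetical mydrv_map_queues, not from this series):

#include <linux/blk-mq.h>

/* Fall back to the generic CPU-to-queue spread when the device has no
 * better topology information of its own. */
static int mydrv_map_queues(struct blk_mq_tag_set *set)
{
	return blk_mq_map_queues(set);
}
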
......@@ -42,7 +42,6 @@ void blk_mq_disable_hotplug(void);
/*
* CPU -> queue mappings
*/
int blk_mq_map_queues(struct blk_mq_tag_set *set);
extern int blk_mq_hw_queue_to_node(unsigned int *map, unsigned int);
static inline struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *q,
......
......@@ -32,8 +32,13 @@
* bsg_destroy_job - routine to teardown/delete a bsg job
* @job: bsg_job that is to be torn down
*/
static void bsg_destroy_job(struct bsg_job *job)
static void bsg_destroy_job(struct kref *kref)
{
struct bsg_job *job = container_of(kref, struct bsg_job, kref);
struct request *rq = job->req;
blk_end_request_all(rq, rq->errors);
put_device(job->dev); /* release reference for the request */
kfree(job->request_payload.sg_list);
......@@ -41,6 +46,18 @@ static void bsg_destroy_job(struct bsg_job *job)
kfree(job);
}
void bsg_job_put(struct bsg_job *job)
{
kref_put(&job->kref, bsg_destroy_job);
}
EXPORT_SYMBOL_GPL(bsg_job_put);
int bsg_job_get(struct bsg_job *job)
{
return kref_get_unless_zero(&job->kref);
}
EXPORT_SYMBOL_GPL(bsg_job_get);
/**
* bsg_job_done - completion routine for bsg requests
* @job: bsg_job that is complete
......@@ -83,8 +100,7 @@ static void bsg_softirq_done(struct request *rq)
{
struct bsg_job *job = rq->special;
blk_end_request_all(rq, rq->errors);
bsg_destroy_job(job);
bsg_job_put(job);
}
static int bsg_map_buffer(struct bsg_buffer *buf, struct request *req)
......@@ -142,6 +158,7 @@ static int bsg_create_job(struct device *dev, struct request *req)
job->dev = dev;
/* take a reference for the request */
get_device(job->dev);
kref_init(&job->kref);
return 0;
failjob_rls_rqst_payload:
......
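
A sketch of how an LLD might use the new reference counting; mydrv_complete is hypothetical, while bsg_job_get()/bsg_job_done()/bsg_job_put() are the interfaces added above. A completion path that can race with teardown pins the job first and drops its reference when done:

#include <linux/bsg-lib.h>

static void mydrv_complete(struct bsg_job *job)
{
	if (!bsg_job_get(job))	/* job is already being torn down */
		return;
	bsg_job_done(job, 0, job->reply_payload.payload_len);
	bsg_job_put(job);	/* drop the temporary reference */
}
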
......@@ -260,43 +260,6 @@ scsi_cmd_stack_free(ctlr_info_t *h)
}
#if 0
static int xmargin=8;
static int amargin=60;
static void
print_bytes (unsigned char *c, int len, int hex, int ascii)
{
int i;
unsigned char *x;
if (hex)
{
x = c;
for (i=0;i<len;i++)
{
if ((i % xmargin) == 0 && i>0) printk("\n");
if ((i % xmargin) == 0) printk("0x%04x:", i);
printk(" %02x", *x);
x++;
}
printk("\n");
}
if (ascii)
{
x = c;
for (i=0;i<len;i++)
{
if ((i % amargin) == 0 && i>0) printk("\n");
if ((i % amargin) == 0) printk("0x%04x:", i);
if (*x > 26 && *x < 128) printk("%c", *x);
else printk(".");
x++;
}
printk("\n");
}
}
static void
print_cmd(CommandList_struct *cp)
{
......@@ -305,30 +268,13 @@ print_cmd(CommandList_struct *cp)
printk("sgtot:%d\n", cp->Header.SGTotal);
printk("Tag:0x%08x/0x%08x\n", cp->Header.Tag.upper,
cp->Header.Tag.lower);
printk("LUN:0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
cp->Header.LUN.LunAddrBytes[0],
cp->Header.LUN.LunAddrBytes[1],
cp->Header.LUN.LunAddrBytes[2],
cp->Header.LUN.LunAddrBytes[3],
cp->Header.LUN.LunAddrBytes[4],
cp->Header.LUN.LunAddrBytes[5],
cp->Header.LUN.LunAddrBytes[6],
cp->Header.LUN.LunAddrBytes[7]);
printk("LUN:0x%8phN\n", cp->Header.LUN.LunAddrBytes);
printk("CDBLen:%d\n", cp->Request.CDBLen);
printk("Type:%d\n",cp->Request.Type.Type);
printk("Attr:%d\n",cp->Request.Type.Attribute);
printk(" Dir:%d\n",cp->Request.Type.Direction);
printk("Timeout:%d\n",cp->Request.Timeout);
printk( "CDB: %02x %02x %02x %02x %02x %02x %02x %02x"
" %02x %02x %02x %02x %02x %02x %02x %02x\n",
cp->Request.CDB[0], cp->Request.CDB[1],
cp->Request.CDB[2], cp->Request.CDB[3],
cp->Request.CDB[4], cp->Request.CDB[5],
cp->Request.CDB[6], cp->Request.CDB[7],
cp->Request.CDB[8], cp->Request.CDB[9],
cp->Request.CDB[10], cp->Request.CDB[11],
cp->Request.CDB[12], cp->Request.CDB[13],
cp->Request.CDB[14], cp->Request.CDB[15]),
printk("CDB: %16ph\n", cp->Request.CDB);
printk("edesc.Addr: 0x%08x/0%08x, Len = %d\n",
cp->ErrDesc.Addr.upper, cp->ErrDesc.Addr.lower,
cp->ErrDesc.Len);
......@@ -340,9 +286,7 @@ print_cmd(CommandList_struct *cp)
printk("offense size:%d\n", cp->err_info->MoreErrInfo.Invalid_Cmd.offense_size);
printk("offense byte:%d\n", cp->err_info->MoreErrInfo.Invalid_Cmd.offense_num);
printk("offense value:%d\n", cp->err_info->MoreErrInfo.Invalid_Cmd.offense_value);
}
#endif
static int
......@@ -782,8 +726,10 @@ static void complete_scsi_command(CommandList_struct *c, int timeout,
"reported\n", c);
break;
case CMD_INVALID: {
/* print_bytes(c, sizeof(*c), 1, 0);
print_cmd(c); */
/*
print_hex_dump(KERN_INFO, "", DUMP_PREFIX_OFFSET, 16, 1, c, sizeof(*c), false);
print_cmd(c);
*/
/* We get CMD_INVALID if you address a non-existent tape drive instead
of a selection timeout (no response). You will see this if you yank
out a tape drive, then try to access it. This is kind of a shame
......@@ -985,8 +931,10 @@ cciss_scsi_interpret_error(ctlr_info_t *h, CommandList_struct *c)
dev_warn(&h->pdev->dev,
"%p is reported invalid (probably means "
"target device no longer present)\n", c);
/* print_bytes((unsigned char *) c, sizeof(*c), 1, 0);
print_cmd(c); */
/*
print_hex_dump(KERN_INFO, "", DUMP_PREFIX_OFFSET, 16, 1, c, sizeof(*c), false);
print_cmd(c);
*/
}
break;
case CMD_PROTOCOL_ERR:
......
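
For reference, a small sketch of the extended printk specifiers adopted above (behavior assumed per the kernel's printk-formats documentation): %Nph prints up to 64 bytes as space-separated hex, and %NphN prints them with no separator.

#include <linux/printk.h>
#include <linux/types.h>

static void demo_hex_specifiers(void)
{
	u8 cdb[16] = { 0x12, 0x00, 0x00, 0x00, 0x24, 0x00 };
	u8 lun[8] = { 0 };

	printk(KERN_INFO "CDB: %16ph\n", cdb);	 /* "12 00 00 00 24 00 ..." */
	printk(KERN_INFO "LUN: 0x%8phN\n", lun); /* "0x0000000000000000" */
}
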
......@@ -2585,10 +2585,7 @@ mpt_do_ioc_recovery(MPT_ADAPTER *ioc, u32 reason, int sleepFlag)
(void) GetLanConfigPages(ioc);
a = (u8*)&ioc->lan_cnfg_page1.HardwareAddressLow;
dprintk(ioc, printk(MYIOC_s_DEBUG_FMT
"LanAddr = %02X:%02X:%02X"
":%02X:%02X:%02X\n",
ioc->name, a[5], a[4],
a[3], a[2], a[1], a[0]));
"LanAddr = %pMR\n", ioc->name, a));
}
break;
......@@ -2868,21 +2865,21 @@ MptDisplayIocCapabilities(MPT_ADAPTER *ioc)
printk(KERN_INFO "%s: ", ioc->name);
if (ioc->prod_name)
printk("%s: ", ioc->prod_name);
printk("Capabilities={");
pr_cont("%s: ", ioc->prod_name);
pr_cont("Capabilities={");
if (ioc->pfacts[0].ProtocolFlags & MPI_PORTFACTS_PROTOCOL_INITIATOR) {
printk("Initiator");
pr_cont("Initiator");
i++;
}
if (ioc->pfacts[0].ProtocolFlags & MPI_PORTFACTS_PROTOCOL_TARGET) {
printk("%sTarget", i ? "," : "");
pr_cont("%sTarget", i ? "," : "");
i++;
}
if (ioc->pfacts[0].ProtocolFlags & MPI_PORTFACTS_PROTOCOL_LAN) {
printk("%sLAN", i ? "," : "");
pr_cont("%sLAN", i ? "," : "");
i++;
}
......@@ -2891,12 +2888,12 @@ MptDisplayIocCapabilities(MPT_ADAPTER *ioc)
* This would probably evoke more questions than it's worth
*/
if (ioc->pfacts[0].ProtocolFlags & MPI_PORTFACTS_PROTOCOL_TARGET) {
printk("%sLogBusAddr", i ? "," : "");
pr_cont("%sLogBusAddr", i ? "," : "");
i++;
}
#endif
printk("}\n");
pr_cont("}\n");
}
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
......@@ -6783,8 +6780,7 @@ static int mpt_iocinfo_proc_show(struct seq_file *m, void *v)
if (ioc->bus_type == FC) {
if (ioc->pfacts[p].ProtocolFlags & MPI_PORTFACTS_PROTOCOL_LAN) {
u8 *a = (u8*)&ioc->lan_cnfg_page1.HardwareAddressLow;
seq_printf(m, " LanAddr = %02X:%02X:%02X:%02X:%02X:%02X\n",
a[5], a[4], a[3], a[2], a[1], a[0]);
seq_printf(m, " LanAddr = %pMR\n", a);
}
seq_printf(m, " WWN = %08X%08X:%08X%08X\n",
ioc->fc_port_page0[p].WWNN.High,
......@@ -6861,8 +6857,7 @@ mpt_print_ioc_summary(MPT_ADAPTER *ioc, char *buffer, int *size, int len, int sh
if (showlan && (ioc->pfacts[0].ProtocolFlags & MPI_PORTFACTS_PROTOCOL_LAN)) {
u8 *a = (u8*)&ioc->lan_cnfg_page1.HardwareAddressLow;
y += sprintf(buffer+len+y, ", LanAddr=%02X:%02X:%02X:%02X:%02X:%02X",
a[5], a[4], a[3], a[2], a[1], a[0]);
y += sprintf(buffer+len+y, ", LanAddr=%pMR", a);
}
y += sprintf(buffer+len+y, ", IRQ=%d", ioc->pci_irq);
......@@ -6896,8 +6891,7 @@ static void seq_mpt_print_ioc_summary(MPT_ADAPTER *ioc, struct seq_file *m, int
if (showlan && (ioc->pfacts[0].ProtocolFlags & MPI_PORTFACTS_PROTOCOL_LAN)) {
u8 *a = (u8*)&ioc->lan_cnfg_page1.HardwareAddressLow;
seq_printf(m, ", LanAddr=%02X:%02X:%02X:%02X:%02X:%02X",
a[5], a[4], a[3], a[2], a[1], a[0]);
seq_printf(m, ", LanAddr=%pMR", a);
}
seq_printf(m, ", IRQ=%d", ioc->pci_irq);
......
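
Two idioms recur in the mptbase changes above and are sketched here together: %pMR prints a 6-byte address in reversed byte order (a[5] down to a[0]), matching the open-coded format it replaces, and pr_cont() explicitly continues the current console line instead of relying on bare printk() fragments.

#include <linux/printk.h>
#include <linux/types.h>

static void demo_lanaddr(const u8 a[6], bool initiator, bool lan)
{
	int i = 0;

	pr_info("LanAddr = %pMR\n", a);		/* a[5]:a[4]:...:a[0] */

	pr_info("Capabilities={");		/* start the line... */
	if (initiator)
		pr_cont("%sInitiator", i++ ? "," : "");
	if (lan)
		pr_cont("%sLAN", i++ ? "," : "");
	pr_cont("}\n");				/* ...and finish it */
}
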
......@@ -1366,15 +1366,10 @@ mptscsih_qcmd(struct scsi_cmnd *SCpnt)
/* Default to untagged. Once a target structure has been allocated,
* use the Inquiry data to determine if device supports tagged.
*/
if ((vdevice->vtarget->tflags & MPT_TARGET_FLAGS_Q_YES)
&& (SCpnt->device->tagged_supported)) {
if ((vdevice->vtarget->tflags & MPT_TARGET_FLAGS_Q_YES) &&
SCpnt->device->tagged_supported)
scsictl = scsidir | MPI_SCSIIO_CONTROL_SIMPLEQ;
if (SCpnt->request && SCpnt->request->ioprio) {
if (((SCpnt->request->ioprio & 0x7) == 1) ||
!(SCpnt->request->ioprio & 0x7))
scsictl |= MPI_SCSIIO_CONTROL_HEADOFQ;
}
} else
else
scsictl = scsidir | MPI_SCSIIO_CONTROL_UNTAGGED;
......
......@@ -141,11 +141,8 @@ struct ufs_qcom_phy_specific_ops {
struct ufs_qcom_phy *get_ufs_qcom_phy(struct phy *generic_phy);
int ufs_qcom_phy_power_on(struct phy *generic_phy);
int ufs_qcom_phy_power_off(struct phy *generic_phy);
int ufs_qcom_phy_exit(struct phy *generic_phy);
int ufs_qcom_phy_init_clks(struct phy *generic_phy,
struct ufs_qcom_phy *phy_common);
int ufs_qcom_phy_init_vregulators(struct phy *generic_phy,
struct ufs_qcom_phy *phy_common);
int ufs_qcom_phy_init_clks(struct ufs_qcom_phy *phy_common);
int ufs_qcom_phy_init_vregulators(struct ufs_qcom_phy *phy_common);
int ufs_qcom_phy_remove(struct phy *generic_phy,
struct ufs_qcom_phy *ufs_qcom_phy);
struct phy *ufs_qcom_phy_generic_probe(struct platform_device *pdev,
......
......@@ -44,30 +44,12 @@ void ufs_qcom_phy_qmp_14nm_advertise_quirks(struct ufs_qcom_phy *phy_common)
static int ufs_qcom_phy_qmp_14nm_init(struct phy *generic_phy)
{
struct ufs_qcom_phy_qmp_14nm *phy = phy_get_drvdata(generic_phy);
struct ufs_qcom_phy *phy_common = &phy->common_cfg;
int err;
err = ufs_qcom_phy_init_clks(generic_phy, phy_common);
if (err) {
dev_err(phy_common->dev, "%s: ufs_qcom_phy_init_clks() failed %d\n",
__func__, err);
goto out;
}
err = ufs_qcom_phy_init_vregulators(generic_phy, phy_common);
if (err) {
dev_err(phy_common->dev, "%s: ufs_qcom_phy_init_vregulators() failed %d\n",
__func__, err);
goto out;
}
phy_common->vdda_phy.max_uV = UFS_PHY_VDDA_PHY_UV;
phy_common->vdda_phy.min_uV = UFS_PHY_VDDA_PHY_UV;
ufs_qcom_phy_qmp_14nm_advertise_quirks(phy_common);
return 0;
}
out:
return err;
static int ufs_qcom_phy_qmp_14nm_exit(struct phy *generic_phy)
{
return 0;
}
static
......@@ -117,7 +99,7 @@ static int ufs_qcom_phy_qmp_14nm_is_pcs_ready(struct ufs_qcom_phy *phy_common)
static const struct phy_ops ufs_qcom_phy_qmp_14nm_phy_ops = {
.init = ufs_qcom_phy_qmp_14nm_init,
.exit = ufs_qcom_phy_exit,
.exit = ufs_qcom_phy_qmp_14nm_exit,
.power_on = ufs_qcom_phy_power_on,
.power_off = ufs_qcom_phy_power_off,
.owner = THIS_MODULE,
......@@ -136,6 +118,7 @@ static int ufs_qcom_phy_qmp_14nm_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct phy *generic_phy;
struct ufs_qcom_phy_qmp_14nm *phy;
struct ufs_qcom_phy *phy_common;
int err = 0;
phy = devm_kzalloc(dev, sizeof(*phy), GFP_KERNEL);
......@@ -143,8 +126,9 @@ static int ufs_qcom_phy_qmp_14nm_probe(struct platform_device *pdev)
err = -ENOMEM;
goto out;
}
phy_common = &phy->common_cfg;
generic_phy = ufs_qcom_phy_generic_probe(pdev, &phy->common_cfg,
generic_phy = ufs_qcom_phy_generic_probe(pdev, phy_common,
&ufs_qcom_phy_qmp_14nm_phy_ops, &phy_14nm_ops);
if (!generic_phy) {
......@@ -154,39 +138,43 @@ static int ufs_qcom_phy_qmp_14nm_probe(struct platform_device *pdev)
goto out;
}
phy_set_drvdata(generic_phy, phy);
err = ufs_qcom_phy_init_clks(phy_common);
if (err) {
dev_err(phy_common->dev,
"%s: ufs_qcom_phy_init_clks() failed %d\n",
__func__, err);
goto out;
}
strlcpy(phy->common_cfg.name, UFS_PHY_NAME,
sizeof(phy->common_cfg.name));
err = ufs_qcom_phy_init_vregulators(phy_common);
if (err) {
dev_err(phy_common->dev,
"%s: ufs_qcom_phy_init_vregulators() failed %d\n",
__func__, err);
goto out;
}
phy_common->vdda_phy.max_uV = UFS_PHY_VDDA_PHY_UV;
phy_common->vdda_phy.min_uV = UFS_PHY_VDDA_PHY_UV;
out:
return err;
}
ufs_qcom_phy_qmp_14nm_advertise_quirks(phy_common);
static int ufs_qcom_phy_qmp_14nm_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct phy *generic_phy = to_phy(dev);
struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(generic_phy);
int err = 0;
phy_set_drvdata(generic_phy, phy);
err = ufs_qcom_phy_remove(generic_phy, ufs_qcom_phy);
if (err)
dev_err(dev, "%s: ufs_qcom_phy_remove failed = %d\n",
__func__, err);
strlcpy(phy_common->name, UFS_PHY_NAME, sizeof(phy_common->name));
out:
return err;
}
static const struct of_device_id ufs_qcom_phy_qmp_14nm_of_match[] = {
{.compatible = "qcom,ufs-phy-qmp-14nm"},
{.compatible = "qcom,msm8996-ufs-phy-qmp-14nm"},
{},
};
MODULE_DEVICE_TABLE(of, ufs_qcom_phy_qmp_14nm_of_match);
static struct platform_driver ufs_qcom_phy_qmp_14nm_driver = {
.probe = ufs_qcom_phy_qmp_14nm_probe,
.remove = ufs_qcom_phy_qmp_14nm_remove,
.driver = {
.of_match_table = ufs_qcom_phy_qmp_14nm_of_match,
.name = "ufs_qcom_phy_qmp_14nm",
......
......@@ -63,28 +63,12 @@ void ufs_qcom_phy_qmp_20nm_advertise_quirks(struct ufs_qcom_phy *phy_common)
static int ufs_qcom_phy_qmp_20nm_init(struct phy *generic_phy)
{
struct ufs_qcom_phy_qmp_20nm *phy = phy_get_drvdata(generic_phy);
struct ufs_qcom_phy *phy_common = &phy->common_cfg;
int err = 0;
err = ufs_qcom_phy_init_clks(generic_phy, phy_common);
if (err) {
dev_err(phy_common->dev, "%s: ufs_qcom_phy_init_clks() failed %d\n",
__func__, err);
goto out;
}
err = ufs_qcom_phy_init_vregulators(generic_phy, phy_common);
if (err) {
dev_err(phy_common->dev, "%s: ufs_qcom_phy_init_vregulators() failed %d\n",
__func__, err);
goto out;
}
ufs_qcom_phy_qmp_20nm_advertise_quirks(phy_common);
return 0;
}
out:
return err;
static int ufs_qcom_phy_qmp_20nm_exit(struct phy *generic_phy)
{
return 0;
}
static
......@@ -173,7 +157,7 @@ static int ufs_qcom_phy_qmp_20nm_is_pcs_ready(struct ufs_qcom_phy *phy_common)
static const struct phy_ops ufs_qcom_phy_qmp_20nm_phy_ops = {
.init = ufs_qcom_phy_qmp_20nm_init,
.exit = ufs_qcom_phy_exit,
.exit = ufs_qcom_phy_qmp_20nm_exit,
.power_on = ufs_qcom_phy_power_on,
.power_off = ufs_qcom_phy_power_off,
.owner = THIS_MODULE,
......@@ -192,6 +176,7 @@ static int ufs_qcom_phy_qmp_20nm_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct phy *generic_phy;
struct ufs_qcom_phy_qmp_20nm *phy;
struct ufs_qcom_phy *phy_common;
int err = 0;
phy = devm_kzalloc(dev, sizeof(*phy), GFP_KERNEL);
......@@ -199,8 +184,9 @@ static int ufs_qcom_phy_qmp_20nm_probe(struct platform_device *pdev)
err = -ENOMEM;
goto out;
}
phy_common = &phy->common_cfg;
generic_phy = ufs_qcom_phy_generic_probe(pdev, &phy->common_cfg,
generic_phy = ufs_qcom_phy_generic_probe(pdev, phy_common,
&ufs_qcom_phy_qmp_20nm_phy_ops, &phy_20nm_ops);
if (!generic_phy) {
......@@ -210,27 +196,27 @@ static int ufs_qcom_phy_qmp_20nm_probe(struct platform_device *pdev)
goto out;
}
phy_set_drvdata(generic_phy, phy);
err = ufs_qcom_phy_init_clks(phy_common);
if (err) {
dev_err(phy_common->dev, "%s: ufs_qcom_phy_init_clks() failed %d\n",
__func__, err);
goto out;
}
strlcpy(phy->common_cfg.name, UFS_PHY_NAME,
sizeof(phy->common_cfg.name));
err = ufs_qcom_phy_init_vregulators(phy_common);
if (err) {
dev_err(phy_common->dev, "%s: ufs_qcom_phy_init_vregulators() failed %d\n",
__func__, err);
goto out;
}
out:
return err;
}
ufs_qcom_phy_qmp_20nm_advertise_quirks(phy_common);
static int ufs_qcom_phy_qmp_20nm_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct phy *generic_phy = to_phy(dev);
struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(generic_phy);
int err = 0;
phy_set_drvdata(generic_phy, phy);
err = ufs_qcom_phy_remove(generic_phy, ufs_qcom_phy);
if (err)
dev_err(dev, "%s: ufs_qcom_phy_remove failed = %d\n",
__func__, err);
strlcpy(phy_common->name, UFS_PHY_NAME, sizeof(phy_common->name));
out:
return err;
}
......@@ -242,7 +228,6 @@ MODULE_DEVICE_TABLE(of, ufs_qcom_phy_qmp_20nm_of_match);
static struct platform_driver ufs_qcom_phy_qmp_20nm_driver = {
.probe = ufs_qcom_phy_qmp_20nm_probe,
.remove = ufs_qcom_phy_qmp_20nm_remove,
.driver = {
.of_match_table = ufs_qcom_phy_qmp_20nm_of_match,
.name = "ufs_qcom_phy_qmp_20nm",
......
......@@ -22,13 +22,6 @@
#define VDDP_REF_CLK_MIN_UV 1200000
#define VDDP_REF_CLK_MAX_UV 1200000
static int __ufs_qcom_phy_init_vreg(struct phy *, struct ufs_qcom_phy_vreg *,
const char *, bool);
static int ufs_qcom_phy_init_vreg(struct phy *, struct ufs_qcom_phy_vreg *,
const char *);
static int ufs_qcom_phy_base_init(struct platform_device *pdev,
struct ufs_qcom_phy *phy_common);
int ufs_qcom_phy_calibrate(struct ufs_qcom_phy *ufs_qcom_phy,
struct ufs_qcom_phy_calibration *tbl_A,
int tbl_size_A,
......@@ -75,45 +68,6 @@ int ufs_qcom_phy_calibrate(struct ufs_qcom_phy *ufs_qcom_phy,
}
EXPORT_SYMBOL_GPL(ufs_qcom_phy_calibrate);
struct phy *ufs_qcom_phy_generic_probe(struct platform_device *pdev,
struct ufs_qcom_phy *common_cfg,
const struct phy_ops *ufs_qcom_phy_gen_ops,
struct ufs_qcom_phy_specific_ops *phy_spec_ops)
{
int err;
struct device *dev = &pdev->dev;
struct phy *generic_phy = NULL;
struct phy_provider *phy_provider;
err = ufs_qcom_phy_base_init(pdev, common_cfg);
if (err) {
dev_err(dev, "%s: phy base init failed %d\n", __func__, err);
goto out;
}
phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
if (IS_ERR(phy_provider)) {
err = PTR_ERR(phy_provider);
dev_err(dev, "%s: failed to register phy %d\n", __func__, err);
goto out;
}
generic_phy = devm_phy_create(dev, NULL, ufs_qcom_phy_gen_ops);
if (IS_ERR(generic_phy)) {
err = PTR_ERR(generic_phy);
dev_err(dev, "%s: failed to create phy %d\n", __func__, err);
generic_phy = NULL;
goto out;
}
common_cfg->phy_spec_ops = phy_spec_ops;
common_cfg->dev = dev;
out:
return generic_phy;
}
EXPORT_SYMBOL_GPL(ufs_qcom_phy_generic_probe);
/*
* This assumes the embedded phy structure inside generic_phy is of type
* struct ufs_qcom_phy. In order to function properly it's crucial
......@@ -154,13 +108,50 @@ int ufs_qcom_phy_base_init(struct platform_device *pdev,
return 0;
}
static int __ufs_qcom_phy_clk_get(struct phy *phy,
struct phy *ufs_qcom_phy_generic_probe(struct platform_device *pdev,
struct ufs_qcom_phy *common_cfg,
const struct phy_ops *ufs_qcom_phy_gen_ops,
struct ufs_qcom_phy_specific_ops *phy_spec_ops)
{
int err;
struct device *dev = &pdev->dev;
struct phy *generic_phy = NULL;
struct phy_provider *phy_provider;
err = ufs_qcom_phy_base_init(pdev, common_cfg);
if (err) {
dev_err(dev, "%s: phy base init failed %d\n", __func__, err);
goto out;
}
phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
if (IS_ERR(phy_provider)) {
err = PTR_ERR(phy_provider);
dev_err(dev, "%s: failed to register phy %d\n", __func__, err);
goto out;
}
generic_phy = devm_phy_create(dev, NULL, ufs_qcom_phy_gen_ops);
if (IS_ERR(generic_phy)) {
err = PTR_ERR(generic_phy);
dev_err(dev, "%s: failed to create phy %d\n", __func__, err);
generic_phy = NULL;
goto out;
}
common_cfg->phy_spec_ops = phy_spec_ops;
common_cfg->dev = dev;
out:
return generic_phy;
}
EXPORT_SYMBOL_GPL(ufs_qcom_phy_generic_probe);
static int __ufs_qcom_phy_clk_get(struct device *dev,
const char *name, struct clk **clk_out, bool err_print)
{
struct clk *clk;
int err = 0;
struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(phy);
struct device *dev = ufs_qcom_phy->dev;
clk = devm_clk_get(dev, name);
if (IS_ERR(clk)) {
......@@ -174,42 +165,44 @@ static int __ufs_qcom_phy_clk_get(struct phy *phy,
return err;
}
static
int ufs_qcom_phy_clk_get(struct phy *phy,
static int ufs_qcom_phy_clk_get(struct device *dev,
const char *name, struct clk **clk_out)
{
return __ufs_qcom_phy_clk_get(phy, name, clk_out, true);
return __ufs_qcom_phy_clk_get(dev, name, clk_out, true);
}
int
ufs_qcom_phy_init_clks(struct phy *generic_phy,
struct ufs_qcom_phy *phy_common)
int ufs_qcom_phy_init_clks(struct ufs_qcom_phy *phy_common)
{
int err;
err = ufs_qcom_phy_clk_get(generic_phy, "tx_iface_clk",
if (of_device_is_compatible(phy_common->dev->of_node,
"qcom,msm8996-ufs-phy-qmp-14nm"))
goto skip_txrx_clk;
err = ufs_qcom_phy_clk_get(phy_common->dev, "tx_iface_clk",
&phy_common->tx_iface_clk);
if (err)
goto out;
err = ufs_qcom_phy_clk_get(generic_phy, "rx_iface_clk",
err = ufs_qcom_phy_clk_get(phy_common->dev, "rx_iface_clk",
&phy_common->rx_iface_clk);
if (err)
goto out;
err = ufs_qcom_phy_clk_get(generic_phy, "ref_clk_src",
err = ufs_qcom_phy_clk_get(phy_common->dev, "ref_clk_src",
&phy_common->ref_clk_src);
if (err)
goto out;
skip_txrx_clk:
/*
* "ref_clk_parent" is optional hence don't abort init if it's not
* found.
*/
__ufs_qcom_phy_clk_get(generic_phy, "ref_clk_parent",
__ufs_qcom_phy_clk_get(phy_common->dev, "ref_clk_parent",
&phy_common->ref_clk_parent, false);
err = ufs_qcom_phy_clk_get(generic_phy, "ref_clk",
err = ufs_qcom_phy_clk_get(phy_common->dev, "ref_clk",
&phy_common->ref_clk);
out:
......@@ -217,41 +210,14 @@ ufs_qcom_phy_init_clks(struct phy *generic_phy,
}
EXPORT_SYMBOL_GPL(ufs_qcom_phy_init_clks);
int
ufs_qcom_phy_init_vregulators(struct phy *generic_phy,
struct ufs_qcom_phy *phy_common)
{
int err;
err = ufs_qcom_phy_init_vreg(generic_phy, &phy_common->vdda_pll,
"vdda-pll");
if (err)
goto out;
err = ufs_qcom_phy_init_vreg(generic_phy, &phy_common->vdda_phy,
"vdda-phy");
if (err)
goto out;
/* vddp-ref-clk-* properties are optional */
__ufs_qcom_phy_init_vreg(generic_phy, &phy_common->vddp_ref_clk,
"vddp-ref-clk", true);
out:
return err;
}
EXPORT_SYMBOL_GPL(ufs_qcom_phy_init_vregulators);
static int __ufs_qcom_phy_init_vreg(struct phy *phy,
static int __ufs_qcom_phy_init_vreg(struct device *dev,
struct ufs_qcom_phy_vreg *vreg, const char *name, bool optional)
{
int err = 0;
struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(phy);
struct device *dev = ufs_qcom_phy->dev;
char prop_name[MAX_PROP_NAME];
vreg->name = kstrdup(name, GFP_KERNEL);
vreg->name = devm_kstrdup(dev, name, GFP_KERNEL);
if (!vreg->name) {
err = -ENOMEM;
goto out;
......@@ -304,14 +270,36 @@ static int __ufs_qcom_phy_init_vreg(struct phy *phy,
return err;
}
static int ufs_qcom_phy_init_vreg(struct phy *phy,
static int ufs_qcom_phy_init_vreg(struct device *dev,
struct ufs_qcom_phy_vreg *vreg, const char *name)
{
return __ufs_qcom_phy_init_vreg(phy, vreg, name, false);
return __ufs_qcom_phy_init_vreg(dev, vreg, name, false);
}
static
int ufs_qcom_phy_cfg_vreg(struct phy *phy,
int ufs_qcom_phy_init_vregulators(struct ufs_qcom_phy *phy_common)
{
int err;
err = ufs_qcom_phy_init_vreg(phy_common->dev, &phy_common->vdda_pll,
"vdda-pll");
if (err)
goto out;
err = ufs_qcom_phy_init_vreg(phy_common->dev, &phy_common->vdda_phy,
"vdda-phy");
if (err)
goto out;
/* vddp-ref-clk-* properties are optional */
__ufs_qcom_phy_init_vreg(phy_common->dev, &phy_common->vddp_ref_clk,
"vddp-ref-clk", true);
out:
return err;
}
EXPORT_SYMBOL_GPL(ufs_qcom_phy_init_vregulators);
static int ufs_qcom_phy_cfg_vreg(struct device *dev,
struct ufs_qcom_phy_vreg *vreg, bool on)
{
int ret = 0;
......@@ -319,10 +307,6 @@ int ufs_qcom_phy_cfg_vreg(struct phy *phy,
const char *name = vreg->name;
int min_uV;
int uA_load;
struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(phy);
struct device *dev = ufs_qcom_phy->dev;
BUG_ON(!vreg);
if (regulator_count_voltages(reg) > 0) {
min_uV = on ? vreg->min_uV : 0;
......@@ -350,18 +334,15 @@ int ufs_qcom_phy_cfg_vreg(struct phy *phy,
return ret;
}
static
int ufs_qcom_phy_enable_vreg(struct phy *phy,
static int ufs_qcom_phy_enable_vreg(struct device *dev,
struct ufs_qcom_phy_vreg *vreg)
{
struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(phy);
struct device *dev = ufs_qcom_phy->dev;
int ret = 0;
if (!vreg || vreg->enabled)
goto out;
ret = ufs_qcom_phy_cfg_vreg(phy, vreg, true);
ret = ufs_qcom_phy_cfg_vreg(dev, vreg, true);
if (ret) {
dev_err(dev, "%s: ufs_qcom_phy_cfg_vreg() failed, err=%d\n",
__func__, ret);
......@@ -380,10 +361,9 @@ int ufs_qcom_phy_enable_vreg(struct phy *phy,
return ret;
}
int ufs_qcom_phy_enable_ref_clk(struct phy *generic_phy)
static int ufs_qcom_phy_enable_ref_clk(struct ufs_qcom_phy *phy)
{
int ret = 0;
struct ufs_qcom_phy *phy = get_ufs_qcom_phy(generic_phy);
if (phy->is_ref_clk_enabled)
goto out;
......@@ -430,14 +410,10 @@ int ufs_qcom_phy_enable_ref_clk(struct phy *generic_phy)
out:
return ret;
}
EXPORT_SYMBOL_GPL(ufs_qcom_phy_enable_ref_clk);
static
int ufs_qcom_phy_disable_vreg(struct phy *phy,
static int ufs_qcom_phy_disable_vreg(struct device *dev,
struct ufs_qcom_phy_vreg *vreg)
{
struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(phy);
struct device *dev = ufs_qcom_phy->dev;
int ret = 0;
if (!vreg || !vreg->enabled || vreg->is_always_on)
......@@ -447,7 +423,7 @@ int ufs_qcom_phy_disable_vreg(struct phy *phy,
if (!ret) {
/* ignore errors on applying disable config */
ufs_qcom_phy_cfg_vreg(phy, vreg, false);
ufs_qcom_phy_cfg_vreg(dev, vreg, false);
vreg->enabled = false;
} else {
dev_err(dev, "%s: %s disable failed, err=%d\n",
......@@ -457,10 +433,8 @@ int ufs_qcom_phy_disable_vreg(struct phy *phy,
return ret;
}
void ufs_qcom_phy_disable_ref_clk(struct phy *generic_phy)
static void ufs_qcom_phy_disable_ref_clk(struct ufs_qcom_phy *phy)
{
struct ufs_qcom_phy *phy = get_ufs_qcom_phy(generic_phy);
if (phy->is_ref_clk_enabled) {
clk_disable_unprepare(phy->ref_clk);
/*
......@@ -473,7 +447,6 @@ void ufs_qcom_phy_disable_ref_clk(struct phy *generic_phy)
phy->is_ref_clk_enabled = false;
}
}
EXPORT_SYMBOL_GPL(ufs_qcom_phy_disable_ref_clk);
#define UFS_REF_CLK_EN (1 << 5)
......@@ -526,9 +499,8 @@ void ufs_qcom_phy_disable_dev_ref_clk(struct phy *generic_phy)
EXPORT_SYMBOL_GPL(ufs_qcom_phy_disable_dev_ref_clk);
/* Turn ON M-PHY RMMI interface clocks */
int ufs_qcom_phy_enable_iface_clk(struct phy *generic_phy)
static int ufs_qcom_phy_enable_iface_clk(struct ufs_qcom_phy *phy)
{
struct ufs_qcom_phy *phy = get_ufs_qcom_phy(generic_phy);
int ret = 0;
if (phy->is_iface_clk_enabled)
......@@ -552,20 +524,16 @@ int ufs_qcom_phy_enable_iface_clk(struct phy *generic_phy)
out:
return ret;
}
EXPORT_SYMBOL_GPL(ufs_qcom_phy_enable_iface_clk);
/* Turn OFF M-PHY RMMI interface clocks */
void ufs_qcom_phy_disable_iface_clk(struct phy *generic_phy)
void ufs_qcom_phy_disable_iface_clk(struct ufs_qcom_phy *phy)
{
struct ufs_qcom_phy *phy = get_ufs_qcom_phy(generic_phy);
if (phy->is_iface_clk_enabled) {
clk_disable_unprepare(phy->tx_iface_clk);
clk_disable_unprepare(phy->rx_iface_clk);
phy->is_iface_clk_enabled = false;
}
}
EXPORT_SYMBOL_GPL(ufs_qcom_phy_disable_iface_clk);
int ufs_qcom_phy_start_serdes(struct phy *generic_phy)
{
......@@ -634,29 +602,6 @@ int ufs_qcom_phy_calibrate_phy(struct phy *generic_phy, bool is_rate_B)
}
EXPORT_SYMBOL_GPL(ufs_qcom_phy_calibrate_phy);
int ufs_qcom_phy_remove(struct phy *generic_phy,
struct ufs_qcom_phy *ufs_qcom_phy)
{
phy_power_off(generic_phy);
kfree(ufs_qcom_phy->vdda_pll.name);
kfree(ufs_qcom_phy->vdda_phy.name);
return 0;
}
EXPORT_SYMBOL_GPL(ufs_qcom_phy_remove);
int ufs_qcom_phy_exit(struct phy *generic_phy)
{
struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(generic_phy);
if (ufs_qcom_phy->is_powered_on)
phy_power_off(generic_phy);
return 0;
}
EXPORT_SYMBOL_GPL(ufs_qcom_phy_exit);
int ufs_qcom_phy_is_pcs_ready(struct phy *generic_phy)
{
struct ufs_qcom_phy *ufs_qcom_phy = get_ufs_qcom_phy(generic_phy);
......@@ -678,7 +623,10 @@ int ufs_qcom_phy_power_on(struct phy *generic_phy)
struct device *dev = phy_common->dev;
int err;
err = ufs_qcom_phy_enable_vreg(generic_phy, &phy_common->vdda_phy);
if (phy_common->is_powered_on)
return 0;
err = ufs_qcom_phy_enable_vreg(dev, &phy_common->vdda_phy);
if (err) {
dev_err(dev, "%s enable vdda_phy failed, err=%d\n",
__func__, err);
......@@ -688,23 +636,30 @@ int ufs_qcom_phy_power_on(struct phy *generic_phy)
phy_common->phy_spec_ops->power_control(phy_common, true);
/* vdda_pll also enables ref clock LDOs so enable it first */
err = ufs_qcom_phy_enable_vreg(generic_phy, &phy_common->vdda_pll);
err = ufs_qcom_phy_enable_vreg(dev, &phy_common->vdda_pll);
if (err) {
dev_err(dev, "%s enable vdda_pll failed, err=%d\n",
__func__, err);
goto out_disable_phy;
}
err = ufs_qcom_phy_enable_ref_clk(generic_phy);
err = ufs_qcom_phy_enable_iface_clk(phy_common);
if (err) {
dev_err(dev, "%s enable phy ref clock failed, err=%d\n",
dev_err(dev, "%s enable phy iface clock failed, err=%d\n",
__func__, err);
goto out_disable_pll;
}
err = ufs_qcom_phy_enable_ref_clk(phy_common);
if (err) {
dev_err(dev, "%s enable phy ref clock failed, err=%d\n",
__func__, err);
goto out_disable_iface_clk;
}
/* enable device PHY ref_clk pad rail */
if (phy_common->vddp_ref_clk.reg) {
err = ufs_qcom_phy_enable_vreg(generic_phy,
err = ufs_qcom_phy_enable_vreg(dev,
&phy_common->vddp_ref_clk);
if (err) {
dev_err(dev, "%s enable vddp_ref_clk failed, err=%d\n",
......@@ -717,11 +672,13 @@ int ufs_qcom_phy_power_on(struct phy *generic_phy)
goto out;
out_disable_ref_clk:
ufs_qcom_phy_disable_ref_clk(generic_phy);
ufs_qcom_phy_disable_ref_clk(phy_common);
out_disable_iface_clk:
ufs_qcom_phy_disable_iface_clk(phy_common);
out_disable_pll:
ufs_qcom_phy_disable_vreg(generic_phy, &phy_common->vdda_pll);
ufs_qcom_phy_disable_vreg(dev, &phy_common->vdda_pll);
out_disable_phy:
ufs_qcom_phy_disable_vreg(generic_phy, &phy_common->vdda_phy);
ufs_qcom_phy_disable_vreg(dev, &phy_common->vdda_phy);
out:
return err;
}
......@@ -731,15 +688,19 @@ int ufs_qcom_phy_power_off(struct phy *generic_phy)
{
struct ufs_qcom_phy *phy_common = get_ufs_qcom_phy(generic_phy);
if (!phy_common->is_powered_on)
return 0;
phy_common->phy_spec_ops->power_control(phy_common, false);
if (phy_common->vddp_ref_clk.reg)
ufs_qcom_phy_disable_vreg(generic_phy,
ufs_qcom_phy_disable_vreg(phy_common->dev,
&phy_common->vddp_ref_clk);
ufs_qcom_phy_disable_ref_clk(generic_phy);
ufs_qcom_phy_disable_ref_clk(phy_common);
ufs_qcom_phy_disable_iface_clk(phy_common);
ufs_qcom_phy_disable_vreg(generic_phy, &phy_common->vdda_pll);
ufs_qcom_phy_disable_vreg(generic_phy, &phy_common->vdda_phy);
ufs_qcom_phy_disable_vreg(phy_common->dev, &phy_common->vdda_pll);
ufs_qcom_phy_disable_vreg(phy_common->dev, &phy_common->vdda_phy);
phy_common->is_powered_on = false;
return 0;
......
......@@ -84,8 +84,8 @@ extern void zfcp_fc_link_test_work(struct work_struct *);
extern void zfcp_fc_wka_ports_force_offline(struct zfcp_fc_wka_ports *);
extern int zfcp_fc_gs_setup(struct zfcp_adapter *);
extern void zfcp_fc_gs_destroy(struct zfcp_adapter *);
extern int zfcp_fc_exec_bsg_job(struct fc_bsg_job *);
extern int zfcp_fc_timeout_bsg_job(struct fc_bsg_job *);
extern int zfcp_fc_exec_bsg_job(struct bsg_job *);
extern int zfcp_fc_timeout_bsg_job(struct bsg_job *);
extern void zfcp_fc_sym_name_update(struct work_struct *);
extern unsigned int zfcp_fc_port_scan_backoff(void);
extern void zfcp_fc_conditional_port_scan(struct zfcp_adapter *);
......
......@@ -13,6 +13,7 @@
#include <linux/slab.h>
#include <linux/utsname.h>
#include <linux/random.h>
#include <linux/bsg-lib.h>
#include <scsi/fc/fc_els.h>
#include <scsi/libfc.h>
#include "zfcp_ext.h"
......@@ -885,26 +886,30 @@ void zfcp_fc_sym_name_update(struct work_struct *work)
static void zfcp_fc_ct_els_job_handler(void *data)
{
struct fc_bsg_job *job = data;
struct bsg_job *job = data;
struct zfcp_fsf_ct_els *zfcp_ct_els = job->dd_data;
struct fc_bsg_reply *jr = job->reply;
jr->reply_payload_rcv_len = job->reply_payload.payload_len;
jr->reply_data.ctels_reply.status = FC_CTELS_STATUS_OK;
jr->result = zfcp_ct_els->status ? -EIO : 0;
job->job_done(job);
bsg_job_done(job, jr->result, jr->reply_payload_rcv_len);
}
static struct zfcp_fc_wka_port *zfcp_fc_job_wka_port(struct fc_bsg_job *job)
static struct zfcp_fc_wka_port *zfcp_fc_job_wka_port(struct bsg_job *job)
{
u32 preamble_word1;
u8 gs_type;
struct zfcp_adapter *adapter;
struct fc_bsg_request *bsg_request = job->request;
struct fc_rport *rport = fc_bsg_to_rport(job);
struct Scsi_Host *shost;
preamble_word1 = job->request->rqst_data.r_ct.preamble_word1;
preamble_word1 = bsg_request->rqst_data.r_ct.preamble_word1;
gs_type = (preamble_word1 & 0xff000000) >> 24;
adapter = (struct zfcp_adapter *) job->shost->hostdata[0];
shost = rport ? rport_to_shost(rport) : fc_bsg_to_shost(job);
adapter = (struct zfcp_adapter *) shost->hostdata[0];
switch (gs_type) {
case FC_FST_ALIAS:
......@@ -924,7 +929,7 @@ static struct zfcp_fc_wka_port *zfcp_fc_job_wka_port(struct fc_bsg_job *job)
static void zfcp_fc_ct_job_handler(void *data)
{
struct fc_bsg_job *job = data;
struct bsg_job *job = data;
struct zfcp_fc_wka_port *wka_port;
wka_port = zfcp_fc_job_wka_port(job);
......@@ -933,11 +938,12 @@ static void zfcp_fc_ct_job_handler(void *data)
zfcp_fc_ct_els_job_handler(data);
}
static int zfcp_fc_exec_els_job(struct fc_bsg_job *job,
static int zfcp_fc_exec_els_job(struct bsg_job *job,
struct zfcp_adapter *adapter)
{
struct zfcp_fsf_ct_els *els = job->dd_data;
struct fc_rport *rport = job->rport;
struct fc_rport *rport = fc_bsg_to_rport(job);
struct fc_bsg_request *bsg_request = job->request;
struct zfcp_port *port;
u32 d_id;
......@@ -949,13 +955,13 @@ static int zfcp_fc_exec_els_job(struct fc_bsg_job *job,
d_id = port->d_id;
put_device(&port->dev);
} else
d_id = ntoh24(job->request->rqst_data.h_els.port_id);
d_id = ntoh24(bsg_request->rqst_data.h_els.port_id);
els->handler = zfcp_fc_ct_els_job_handler;
return zfcp_fsf_send_els(adapter, d_id, els, job->req->timeout / HZ);
}
static int zfcp_fc_exec_ct_job(struct fc_bsg_job *job,
static int zfcp_fc_exec_ct_job(struct bsg_job *job,
struct zfcp_adapter *adapter)
{
int ret;
......@@ -978,13 +984,15 @@ static int zfcp_fc_exec_ct_job(struct fc_bsg_job *job,
return ret;
}
int zfcp_fc_exec_bsg_job(struct fc_bsg_job *job)
int zfcp_fc_exec_bsg_job(struct bsg_job *job)
{
struct Scsi_Host *shost;
struct zfcp_adapter *adapter;
struct zfcp_fsf_ct_els *ct_els = job->dd_data;
struct fc_bsg_request *bsg_request = job->request;
struct fc_rport *rport = fc_bsg_to_rport(job);
shost = job->rport ? rport_to_shost(job->rport) : job->shost;
shost = rport ? rport_to_shost(rport) : fc_bsg_to_shost(job);
adapter = (struct zfcp_adapter *)shost->hostdata[0];
if (!(atomic_read(&adapter->status) & ZFCP_STATUS_COMMON_OPEN))
......@@ -994,7 +1002,7 @@ int zfcp_fc_exec_bsg_job(struct fc_bsg_job *job)
ct_els->resp = job->reply_payload.sg_list;
ct_els->handler_data = job;
switch (job->request->msgcode) {
switch (bsg_request->msgcode) {
case FC_BSG_RPT_ELS:
case FC_BSG_HST_ELS_NOLOGIN:
return zfcp_fc_exec_els_job(job, adapter);
......@@ -1006,7 +1014,7 @@ int zfcp_fc_exec_bsg_job(struct fc_bsg_job *job)
}
}
int zfcp_fc_timeout_bsg_job(struct fc_bsg_job *job)
int zfcp_fc_timeout_bsg_job(struct bsg_job *job)
{
/* hardware tracks timeout, reset bsg timeout to not interfere */
return -EAGAIN;
......
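
The job-to-host resolution above appears twice, so here it is in isolation (a sketch; fc_bsg_to_rport() and fc_bsg_to_shost() are the accessors this conversion uses in place of direct job->rport and job->shost dereferences):

#include <linux/bsg-lib.h>
#include <scsi/scsi_transport_fc.h>

static struct Scsi_Host *job_to_shost(struct bsg_job *job)
{
	struct fc_rport *rport = fc_bsg_to_rport(job);

	/* rport-directed jobs resolve through the rport; host-directed
	 * jobs resolve through the bsg queue's host. */
	return rport ? rport_to_shost(rport) : fc_bsg_to_shost(job);
}
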
......@@ -263,6 +263,7 @@ config SCSI_SPI_ATTRS
config SCSI_FC_ATTRS
tristate "FiberChannel Transport Attributes"
depends on SCSI && NET
select BLK_DEV_BSGLIB
select SCSI_NETLINK
help
If you wish to export transport-specific information about
......@@ -743,40 +744,18 @@ config SCSI_ISCI
control unit found in the Intel(R) C600 series chipset.
config SCSI_GENERIC_NCR5380
tristate "Generic NCR5380/53c400 SCSI PIO support"
depends on ISA && SCSI
tristate "Generic NCR5380/53c400 SCSI ISA card support"
depends on ISA && SCSI && HAS_IOPORT_MAP
select SCSI_SPI_ATTRS
---help---
This is a driver for the old NCR 53c80 series of SCSI controllers
on boards using PIO. Most boards such as the Trantor T130 fit this
category, along with a large number of ISA 8bit controllers shipped
for free with SCSI scanners. If you have a PAS16, T128 or DMX3191
you should select the specific driver for that card rather than
generic 5380 support.
It is explained in section 3.8 of the SCSI-HOWTO, available from
<http://www.tldp.org/docs.html#howto>. If it doesn't work out
of the box, you may have to change some settings in
<file:drivers/scsi/g_NCR5380.h>.
This is a driver for old ISA card SCSI controllers based on a
NCR 5380, 53C80, 53C400, 53C400A, or DTC 436 device.
Most boards such as the Trantor T130 fit this category, as do
various 8-bit and 16-bit ISA cards bundled with SCSI scanners.
To compile this driver as a module, choose M here: the
module will be called g_NCR5380.
config SCSI_GENERIC_NCR5380_MMIO
tristate "Generic NCR5380/53c400 SCSI MMIO support"
depends on ISA && SCSI
select SCSI_SPI_ATTRS
---help---
This is a driver for the old NCR 53c80 series of SCSI controllers
on boards using memory mapped I/O.
It is explained in section 3.8 of the SCSI-HOWTO, available from
<http://www.tldp.org/docs.html#howto>. If it doesn't work out
of the box, you may have to change some settings in
<file:drivers/scsi/g_NCR5380.h>.
To compile this driver as a module, choose M here: the
module will be called g_NCR5380_mmio.
config SCSI_IPS
tristate "IBM ServeRAID support"
depends on PCI && SCSI
......
......@@ -74,7 +74,6 @@ obj-$(CONFIG_SCSI_ISCI) += isci/
obj-$(CONFIG_SCSI_IPS) += ips.o
obj-$(CONFIG_SCSI_FUTURE_DOMAIN)+= fdomain.o
obj-$(CONFIG_SCSI_GENERIC_NCR5380) += g_NCR5380.o
obj-$(CONFIG_SCSI_GENERIC_NCR5380_MMIO) += g_NCR5380_mmio.o
obj-$(CONFIG_SCSI_NCR53C406A) += NCR53c406a.o
obj-$(CONFIG_SCSI_NCR_D700) += 53c700.o NCR_D700.o
obj-$(CONFIG_SCSI_NCR_Q720) += NCR_Q720_mod.o
......
......@@ -121,9 +121,10 @@
*
* Either real DMA *or* pseudo DMA may be implemented
*
* NCR5380_dma_write_setup(instance, src, count) - initialize
* NCR5380_dma_read_setup(instance, dst, count) - initialize
* NCR5380_dma_residual(instance); - residual count
* NCR5380_dma_xfer_len - determine size of DMA/PDMA transfer
* NCR5380_dma_send_setup - execute DMA/PDMA from memory to 5380
* NCR5380_dma_recv_setup - execute DMA/PDMA from 5380 to memory
* NCR5380_dma_residual - residual byte count
*
* The generic driver is initialized by calling NCR5380_init(instance),
* after setting the appropriate host specific fields and ID. If the
......@@ -178,7 +179,7 @@ static inline void initialize_SCp(struct scsi_cmnd *cmd)
/**
* NCR5380_poll_politely2 - wait for two chip register values
* @instance: controller to poll
* @hostdata: host private data
* @reg1: 5380 register to poll
* @bit1: Bitmask to check
* @val1: Expected value
......@@ -195,18 +196,14 @@ static inline void initialize_SCp(struct scsi_cmnd *cmd)
* Returns 0 if either or both event(s) occurred otherwise -ETIMEDOUT.
*/
static int NCR5380_poll_politely2(struct Scsi_Host *instance,
int reg1, int bit1, int val1,
int reg2, int bit2, int val2, int wait)
static int NCR5380_poll_politely2(struct NCR5380_hostdata *hostdata,
unsigned int reg1, u8 bit1, u8 val1,
unsigned int reg2, u8 bit2, u8 val2,
unsigned long wait)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
unsigned long n = hostdata->poll_loops;
unsigned long deadline = jiffies + wait;
unsigned long n;
/* Busy-wait for up to 10 ms */
n = min(10000U, jiffies_to_usecs(wait));
n *= hostdata->accesses_per_ms;
n /= 2000;
do {
if ((NCR5380_read(reg1) & bit1) == val1)
return 0;
......@@ -288,6 +285,7 @@ mrs[] = {
static void NCR5380_print(struct Scsi_Host *instance)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
unsigned char status, data, basr, mr, icr, i;
data = NCR5380_read(CURRENT_SCSI_DATA_REG);
......@@ -337,6 +335,7 @@ static struct {
static void NCR5380_print_phase(struct Scsi_Host *instance)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
unsigned char status;
int i;
......@@ -441,14 +440,14 @@ static void prepare_info(struct Scsi_Host *instance)
struct NCR5380_hostdata *hostdata = shost_priv(instance);
snprintf(hostdata->info, sizeof(hostdata->info),
"%s, io_port 0x%lx, n_io_port %d, "
"base 0x%lx, irq %d, "
"%s, irq %d, "
"io_port 0x%lx, base 0x%lx, "
"can_queue %d, cmd_per_lun %d, "
"sg_tablesize %d, this_id %d, "
"flags { %s%s%s}, "
"options { %s} ",
instance->hostt->name, instance->io_port, instance->n_io_port,
instance->base, instance->irq,
instance->hostt->name, instance->irq,
hostdata->io_port, hostdata->base,
instance->can_queue, instance->cmd_per_lun,
instance->sg_tablesize, instance->this_id,
hostdata->flags & FLAG_DMA_FIXUP ? "DMA_FIXUP " : "",
......@@ -482,6 +481,7 @@ static int NCR5380_init(struct Scsi_Host *instance, int flags)
struct NCR5380_hostdata *hostdata = shost_priv(instance);
int i;
unsigned long deadline;
unsigned long accesses_per_ms;
instance->max_lun = 7;
......@@ -530,7 +530,8 @@ static int NCR5380_init(struct Scsi_Host *instance, int flags)
++i;
cpu_relax();
} while (time_is_after_jiffies(deadline));
hostdata->accesses_per_ms = i / 256;
accesses_per_ms = i / 256;
hostdata->poll_loops = NCR5380_REG_POLL_TIME * accesses_per_ms / 2;
return 0;
}
......@@ -560,7 +561,7 @@ static int NCR5380_maybe_reset_bus(struct Scsi_Host *instance)
case 3:
case 5:
shost_printk(KERN_ERR, instance, "SCSI bus busy, waiting up to five seconds\n");
NCR5380_poll_politely(instance,
NCR5380_poll_politely(hostdata,
STATUS_REG, SR_BSY, 0, 5 * HZ);
break;
case 2:
......@@ -871,7 +872,7 @@ static void NCR5380_dma_complete(struct Scsi_Host *instance)
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
NCR5380_read(RESET_PARITY_INTERRUPT_REG);
transferred = hostdata->dma_len - NCR5380_dma_residual(instance);
transferred = hostdata->dma_len - NCR5380_dma_residual(hostdata);
hostdata->dma_len = 0;
data = (unsigned char **)&hostdata->connected->SCp.ptr;
......@@ -994,7 +995,7 @@ static irqreturn_t __maybe_unused NCR5380_intr(int irq, void *dev_id)
}
handled = 1;
} else {
shost_printk(KERN_NOTICE, instance, "interrupt without IRQ bit\n");
dsprintk(NDEBUG_INTR, instance, "interrupt without IRQ bit\n");
#ifdef SUN3_SCSI_VME
dregs->csr |= CSR_DMA_ENABLE;
#endif
......@@ -1075,7 +1076,7 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance,
*/
spin_unlock_irq(&hostdata->lock);
err = NCR5380_poll_politely2(instance, MODE_REG, MR_ARBITRATE, 0,
err = NCR5380_poll_politely2(hostdata, MODE_REG, MR_ARBITRATE, 0,
INITIATOR_COMMAND_REG, ICR_ARBITRATION_PROGRESS,
ICR_ARBITRATION_PROGRESS, HZ);
spin_lock_irq(&hostdata->lock);
......@@ -1201,7 +1202,7 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance,
* selection.
*/
err = NCR5380_poll_politely(instance, STATUS_REG, SR_BSY, SR_BSY,
err = NCR5380_poll_politely(hostdata, STATUS_REG, SR_BSY, SR_BSY,
msecs_to_jiffies(250));
if ((NCR5380_read(STATUS_REG) & (SR_SEL | SR_IO)) == (SR_SEL | SR_IO)) {
......@@ -1247,7 +1248,7 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance,
/* Wait for start of REQ/ACK handshake */
err = NCR5380_poll_politely(instance, STATUS_REG, SR_REQ, SR_REQ, HZ);
err = NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, SR_REQ, HZ);
spin_lock_irq(&hostdata->lock);
if (err < 0) {
shost_printk(KERN_ERR, instance, "select: REQ timeout\n");
......@@ -1318,6 +1319,7 @@ static int NCR5380_transfer_pio(struct Scsi_Host *instance,
unsigned char *phase, int *count,
unsigned char **data)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
unsigned char p = *phase, tmp;
int c = *count;
unsigned char *d = *data;
......@@ -1336,7 +1338,7 @@ static int NCR5380_transfer_pio(struct Scsi_Host *instance,
* valid
*/
if (NCR5380_poll_politely(instance, STATUS_REG, SR_REQ, SR_REQ, HZ) < 0)
if (NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, SR_REQ, HZ) < 0)
break;
dsprintk(NDEBUG_HANDSHAKE, instance, "REQ asserted\n");
......@@ -1381,7 +1383,7 @@ static int NCR5380_transfer_pio(struct Scsi_Host *instance,
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE | ICR_ASSERT_ACK);
}
if (NCR5380_poll_politely(instance,
if (NCR5380_poll_politely(hostdata,
STATUS_REG, SR_REQ, 0, 5 * HZ) < 0)
break;
......@@ -1440,6 +1442,7 @@ static int NCR5380_transfer_pio(struct Scsi_Host *instance,
static void do_reset(struct Scsi_Host *instance)
{
struct NCR5380_hostdata __maybe_unused *hostdata = shost_priv(instance);
unsigned long flags;
local_irq_save(flags);
......@@ -1462,6 +1465,7 @@ static void do_reset(struct Scsi_Host *instance)
static int do_abort(struct Scsi_Host *instance)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
unsigned char *msgptr, phase, tmp;
int len;
int rc;
......@@ -1479,7 +1483,7 @@ static int do_abort(struct Scsi_Host *instance)
* the target sees, so we just handshake.
*/
rc = NCR5380_poll_politely(instance, STATUS_REG, SR_REQ, SR_REQ, 10 * HZ);
rc = NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, SR_REQ, 10 * HZ);
if (rc < 0)
goto timeout;
......@@ -1490,7 +1494,7 @@ static int do_abort(struct Scsi_Host *instance)
if (tmp != PHASE_MSGOUT) {
NCR5380_write(INITIATOR_COMMAND_REG,
ICR_BASE | ICR_ASSERT_ATN | ICR_ASSERT_ACK);
rc = NCR5380_poll_politely(instance, STATUS_REG, SR_REQ, 0, 3 * HZ);
rc = NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, 0, 3 * HZ);
if (rc < 0)
goto timeout;
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE | ICR_ASSERT_ATN);
......@@ -1575,9 +1579,9 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
* starting the NCR. This is also the cleaner way for the TT.
*/
if (p & SR_IO)
result = NCR5380_dma_recv_setup(instance, d, c);
result = NCR5380_dma_recv_setup(hostdata, d, c);
else
result = NCR5380_dma_send_setup(instance, d, c);
result = NCR5380_dma_send_setup(hostdata, d, c);
}
/*
......@@ -1609,9 +1613,9 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
* NCR access, else the DMA setup gets trashed!
*/
if (p & SR_IO)
result = NCR5380_dma_recv_setup(instance, d, c);
result = NCR5380_dma_recv_setup(hostdata, d, c);
else
result = NCR5380_dma_send_setup(instance, d, c);
result = NCR5380_dma_send_setup(hostdata, d, c);
}
/* On failure, NCR5380_dma_xxxx_setup() returns a negative int. */
......@@ -1678,12 +1682,12 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
* byte.
*/
if (NCR5380_poll_politely(instance, BUS_AND_STATUS_REG,
if (NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
BASR_DRQ, BASR_DRQ, HZ) < 0) {
result = -1;
shost_printk(KERN_ERR, instance, "PDMA read: DRQ timeout\n");
}
if (NCR5380_poll_politely(instance, STATUS_REG,
if (NCR5380_poll_politely(hostdata, STATUS_REG,
SR_REQ, 0, HZ) < 0) {
result = -1;
shost_printk(KERN_ERR, instance, "PDMA read: !REQ timeout\n");
......@@ -1694,7 +1698,7 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
* Wait for the last byte to be sent. If REQ is being asserted for
* the byte we're interested, we'll ACK it and it will go false.
*/
if (NCR5380_poll_politely2(instance,
if (NCR5380_poll_politely2(hostdata,
BUS_AND_STATUS_REG, BASR_DRQ, BASR_DRQ,
BUS_AND_STATUS_REG, BASR_PHASE_MATCH, 0, HZ) < 0) {
result = -1;
......@@ -1751,22 +1755,26 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
NCR5380_dprint_phase(NDEBUG_INFORMATION, instance);
}
#ifdef CONFIG_SUN3
if (phase == PHASE_CMDOUT) {
void *d;
unsigned long count;
if (phase == PHASE_CMDOUT &&
sun3_dma_setup_done != cmd) {
int count;
if (!cmd->SCp.this_residual && cmd->SCp.buffers_residual) {
count = cmd->SCp.buffer->length;
d = sg_virt(cmd->SCp.buffer);
} else {
count = cmd->SCp.this_residual;
d = cmd->SCp.ptr;
++cmd->SCp.buffer;
--cmd->SCp.buffers_residual;
cmd->SCp.this_residual = cmd->SCp.buffer->length;
cmd->SCp.ptr = sg_virt(cmd->SCp.buffer);
}
if (sun3_dma_setup_done != cmd &&
sun3scsi_dma_xfer_len(count, cmd) > 0) {
sun3scsi_dma_setup(instance, d, count,
rq_data_dir(cmd->request));
count = sun3scsi_dma_xfer_len(hostdata, cmd);
if (count > 0) {
if (rq_data_dir(cmd->request))
sun3scsi_dma_send_setup(hostdata,
cmd->SCp.ptr, count);
else
sun3scsi_dma_recv_setup(hostdata,
cmd->SCp.ptr, count);
sun3_dma_setup_done = cmd;
}
#ifdef SUN3_SCSI_VME
......@@ -1827,7 +1835,7 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
transfersize = 0;
if (!cmd->device->borken)
transfersize = NCR5380_dma_xfer_len(instance, cmd, phase);
transfersize = NCR5380_dma_xfer_len(hostdata, cmd);
if (transfersize > 0) {
len = transfersize;
......@@ -2073,7 +2081,7 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
} /* switch(phase) */
} else {
spin_unlock_irq(&hostdata->lock);
NCR5380_poll_politely(instance, STATUS_REG, SR_REQ, SR_REQ, HZ);
NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, SR_REQ, HZ);
spin_lock_irq(&hostdata->lock);
}
}
......@@ -2119,7 +2127,7 @@ static void NCR5380_reselect(struct Scsi_Host *instance)
*/
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE | ICR_ASSERT_BSY);
if (NCR5380_poll_politely(instance,
if (NCR5380_poll_politely(hostdata,
STATUS_REG, SR_SEL, 0, 2 * HZ) < 0) {
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
return;
......@@ -2130,7 +2138,7 @@ static void NCR5380_reselect(struct Scsi_Host *instance)
* Wait for target to go into MSGIN.
*/
if (NCR5380_poll_politely(instance,
if (NCR5380_poll_politely(hostdata,
STATUS_REG, SR_REQ, SR_REQ, 2 * HZ) < 0) {
do_abort(instance);
return;
......@@ -2204,22 +2212,25 @@ static void NCR5380_reselect(struct Scsi_Host *instance)
}
#ifdef CONFIG_SUN3
{
void *d;
unsigned long count;
if (sun3_dma_setup_done != tmp) {
int count;
if (!tmp->SCp.this_residual && tmp->SCp.buffers_residual) {
count = tmp->SCp.buffer->length;
d = sg_virt(tmp->SCp.buffer);
} else {
count = tmp->SCp.this_residual;
d = tmp->SCp.ptr;
++tmp->SCp.buffer;
--tmp->SCp.buffers_residual;
tmp->SCp.this_residual = tmp->SCp.buffer->length;
tmp->SCp.ptr = sg_virt(tmp->SCp.buffer);
}
if (sun3_dma_setup_done != tmp &&
sun3scsi_dma_xfer_len(count, tmp) > 0) {
sun3scsi_dma_setup(instance, d, count,
rq_data_dir(tmp->request));
count = sun3scsi_dma_xfer_len(hostdata, tmp);
if (count > 0) {
if (rq_data_dir(tmp->request))
sun3scsi_dma_send_setup(hostdata,
tmp->SCp.ptr, count);
else
sun3scsi_dma_recv_setup(hostdata,
tmp->SCp.ptr, count);
sun3_dma_setup_done = tmp;
}
}
......
......@@ -219,27 +219,32 @@
#define FLAG_TOSHIBA_DELAY 128 /* Allow for borken CD-ROMs */
struct NCR5380_hostdata {
NCR5380_implementation_fields; /* implementation specific */
struct Scsi_Host *host; /* Host backpointer */
unsigned char id_mask, id_higher_mask; /* 1 << id, all bits greater */
unsigned char busy[8]; /* index = target, bit = lun */
int dma_len; /* requested length of DMA */
unsigned char last_message; /* last message OUT */
struct scsi_cmnd *connected; /* currently connected cmnd */
struct scsi_cmnd *selecting; /* cmnd to be connected */
struct list_head unissued; /* waiting to be issued */
struct list_head autosense; /* priority issue queue */
struct list_head disconnected; /* waiting for reconnect */
spinlock_t lock; /* protects this struct */
int flags;
struct scsi_eh_save ses;
struct scsi_cmnd *sensing;
NCR5380_implementation_fields; /* Board-specific data */
u8 __iomem *io; /* Remapped 5380 address */
u8 __iomem *pdma_io; /* Remapped PDMA address */
unsigned long poll_loops; /* Register polling limit */
spinlock_t lock; /* Protects this struct */
struct scsi_cmnd *connected; /* Currently connected cmnd */
struct list_head disconnected; /* Waiting for reconnect */
struct Scsi_Host *host; /* SCSI host backpointer */
struct workqueue_struct *work_q; /* SCSI host work queue */
struct work_struct main_task; /* Work item for main loop */
int flags; /* Board-specific quirks */
int dma_len; /* Requested length of DMA */
int read_overruns; /* Transfer size reduction for DMA erratum */
unsigned long io_port; /* Device IO port */
unsigned long base; /* Device base address */
struct list_head unissued; /* Waiting to be issued */
struct scsi_cmnd *selecting; /* Cmnd to be connected */
struct list_head autosense; /* Priority cmnd queue */
struct scsi_cmnd *sensing; /* Cmnd needing autosense */
struct scsi_eh_save ses; /* Cmnd state saved for EH */
unsigned char busy[8]; /* Index = target, bit = lun */
unsigned char id_mask; /* 1 << Host ID */
unsigned char id_higher_mask; /* All bits above id_mask */
unsigned char last_message; /* Last Message Out */
unsigned long region_size; /* Size of address/port range */
char info[256];
int read_overruns; /* number of bytes to cut from a
* transfer to handle chip overruns */
struct work_struct main_task;
struct workqueue_struct *work_q;
unsigned long accesses_per_ms; /* chip register accesses per ms */
};
#ifdef __KERNEL__
......@@ -252,6 +257,9 @@ struct NCR5380_cmd {
#define NCR5380_PIO_CHUNK_SIZE 256
/* Time limit (ms) to poll registers when IRQs are disabled, e.g. during PDMA */
#define NCR5380_REG_POLL_TIME 15
static inline struct scsi_cmnd *NCR5380_to_scmd(struct NCR5380_cmd *ncmd_ptr)
{
return ((struct scsi_cmnd *)ncmd_ptr) - 1;
......@@ -294,14 +302,45 @@ static void NCR5380_reselect(struct Scsi_Host *instance);
static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *, struct scsi_cmnd *);
static int NCR5380_transfer_dma(struct Scsi_Host *instance, unsigned char *phase, int *count, unsigned char **data);
static int NCR5380_transfer_pio(struct Scsi_Host *instance, unsigned char *phase, int *count, unsigned char **data);
static int NCR5380_poll_politely2(struct Scsi_Host *, int, int, int, int, int, int, int);
static int NCR5380_poll_politely2(struct NCR5380_hostdata *,
unsigned int, u8, u8,
unsigned int, u8, u8, unsigned long);
static inline int NCR5380_poll_politely(struct Scsi_Host *instance,
int reg, int bit, int val, int wait)
static inline int NCR5380_poll_politely(struct NCR5380_hostdata *hostdata,
unsigned int reg, u8 bit, u8 val,
unsigned long wait)
{
return NCR5380_poll_politely2(instance, reg, bit, val,
if ((NCR5380_read(reg) & bit) == val)
return 0;
return NCR5380_poll_politely2(hostdata, reg, bit, val,
reg, bit, val, wait);
}
static int NCR5380_dma_xfer_len(struct NCR5380_hostdata *,
struct scsi_cmnd *);
static int NCR5380_dma_send_setup(struct NCR5380_hostdata *,
unsigned char *, int);
static int NCR5380_dma_recv_setup(struct NCR5380_hostdata *,
unsigned char *, int);
static int NCR5380_dma_residual(struct NCR5380_hostdata *);
static inline int NCR5380_dma_xfer_none(struct NCR5380_hostdata *hostdata,
struct scsi_cmnd *cmd)
{
return 0;
}
static inline int NCR5380_dma_setup_none(struct NCR5380_hostdata *hostdata,
unsigned char *data, int count)
{
return 0;
}
static inline int NCR5380_dma_residual_none(struct NCR5380_hostdata *hostdata)
{
return 0;
}
#endif /* __KERNEL__ */
#endif /* NCR5380_H */
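A note on the polling rework above: NCR5380_poll_politely() now takes the hostdata pointer directly (so the board-supplied NCR5380_read() macro can use it) and returns immediately when the register already matches, keeping the common case out of NCR5380_poll_politely2(). A minimal sketch of the bounded-poll idea behind the slow path, assuming jiffies-based timing and a hostdata-based NCR5380_read() as in the new API; the real helper additionally yields the CPU between spins (hence "politely"):

static int poll_reg_sketch(struct NCR5380_hostdata *hostdata,
                           unsigned int reg, u8 bit, u8 val,
                           unsigned long wait)
{
	unsigned long deadline = jiffies + wait;	/* wait is in jiffies */

	do {
		if ((NCR5380_read(reg) & bit) == val)
			return 0;			/* condition met */
		cpu_relax();				/* brief busy spin */
	} while (time_is_after_jiffies(deadline));

	return -ETIMEDOUT;				/* deadline passed */
}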
......@@ -1246,7 +1246,6 @@ struct aac_dev
u32 max_msix; /* max. MSI-X vectors */
u32 vector_cap; /* MSI-X vector capab.*/
int msi_enabled; /* MSI/MSI-X enabled */
struct msix_entry msixentry[AAC_MAX_MSIX];
struct aac_msix_ctx aac_msix[AAC_MAX_MSIX]; /* context */
u8 adapter_shutdown;
u32 handle_pci_error;
......
......@@ -378,16 +378,12 @@ void aac_define_int_mode(struct aac_dev *dev)
if (msi_count > AAC_MAX_MSIX)
msi_count = AAC_MAX_MSIX;
for (i = 0; i < msi_count; i++)
dev->msixentry[i].entry = i;
if (msi_count > 1 &&
pci_find_capability(dev->pdev, PCI_CAP_ID_MSIX)) {
min_msix = 2;
i = pci_enable_msix_range(dev->pdev,
dev->msixentry,
min_msix,
msi_count);
i = pci_alloc_irq_vectors(dev->pdev,
min_msix, msi_count,
PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
if (i > 0) {
dev->msi_enabled = 1;
msi_count = i;
......
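The aacraid hunk above shows the canonical shape of the pci_alloc_irq_vectors() conversion mentioned in the pull summary: the msix_entry array disappears, and PCI_IRQ_AFFINITY asks the PCI core to spread vectors across CPUs, replacing the driver's manual irq_set_affinity_hint() loops (removed in the hunks further below). A condensed sketch of the pattern, with hypothetical names (demo_intr, ctx):

#include <linux/interrupt.h>
#include <linux/pci.h>

static irqreturn_t demo_intr(int irq, void *data)
{
	return IRQ_HANDLED;		/* hypothetical handler */
}

static int demo_setup_irqs(struct pci_dev *pdev, void *ctx[], int max_vecs)
{
	int nvec, i, err;

	/* the core picks the vector count and the CPU affinity spread */
	nvec = pci_alloc_irq_vectors(pdev, 2, max_vecs,
				     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
	if (nvec < 0)
		return nvec;

	for (i = 0; i < nvec; i++) {
		err = request_irq(pci_irq_vector(pdev, i), demo_intr, 0,
				  "demo", ctx[i]);
		if (err)
			goto unwind;
	}
	return nvec;

unwind:
	while (--i >= 0)
		free_irq(pci_irq_vector(pdev, i), ctx[i]);
	pci_free_irq_vectors(pdev);
	return err;
}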
......@@ -2043,30 +2043,22 @@ int aac_acquire_irq(struct aac_dev *dev)
int i;
int j;
int ret = 0;
int cpu;
cpu = cpumask_first(cpu_online_mask);
if (!dev->sync_mode && dev->msi_enabled && dev->max_msix > 1) {
for (i = 0; i < dev->max_msix; i++) {
dev->aac_msix[i].vector_no = i;
dev->aac_msix[i].dev = dev;
if (request_irq(dev->msixentry[i].vector,
if (request_irq(pci_irq_vector(dev->pdev, i),
dev->a_ops.adapter_intr,
0, "aacraid", &(dev->aac_msix[i]))) {
printk(KERN_ERR "%s%d: Failed to register IRQ for vector %d.\n",
dev->name, dev->id, i);
for (j = 0 ; j < i ; j++)
free_irq(dev->msixentry[j].vector,
free_irq(pci_irq_vector(dev->pdev, j),
&(dev->aac_msix[j]));
pci_disable_msix(dev->pdev);
ret = -1;
}
if (irq_set_affinity_hint(dev->msixentry[i].vector,
get_cpu_mask(cpu))) {
printk(KERN_ERR "%s%d: Failed to set IRQ affinity for cpu %d\n",
dev->name, dev->id, cpu);
}
cpu = cpumask_next(cpu, cpu_online_mask);
}
} else {
dev->aac_msix[0].vector_no = 0;
......@@ -2096,16 +2088,9 @@ void aac_free_irq(struct aac_dev *dev)
dev->pdev->device == PMC_DEVICE_S8 ||
dev->pdev->device == PMC_DEVICE_S9) {
if (dev->max_msix > 1) {
for (i = 0; i < dev->max_msix; i++) {
if (irq_set_affinity_hint(
dev->msixentry[i].vector, NULL)) {
printk(KERN_ERR "%s%d: Failed to reset IRQ affinity for cpu %d\n",
dev->name, dev->id, cpu);
}
cpu = cpumask_next(cpu, cpu_online_mask);
free_irq(dev->msixentry[i].vector,
&(dev->aac_msix[i]));
}
for (i = 0; i < dev->max_msix; i++)
free_irq(pci_irq_vector(dev->pdev, i),
&(dev->aac_msix[i]));
} else {
free_irq(dev->pdev->irq, &(dev->aac_msix[0]));
}
......
......@@ -1071,7 +1071,6 @@ static struct scsi_host_template aac_driver_template = {
static void __aac_shutdown(struct aac_dev * aac)
{
int i;
int cpu;
aac_send_shutdown(aac);
......@@ -1087,24 +1086,13 @@ static void __aac_shutdown(struct aac_dev * aac)
kthread_stop(aac->thread);
}
aac_adapter_disable_int(aac);
cpu = cpumask_first(cpu_online_mask);
if (aac->pdev->device == PMC_DEVICE_S6 ||
aac->pdev->device == PMC_DEVICE_S7 ||
aac->pdev->device == PMC_DEVICE_S8 ||
aac->pdev->device == PMC_DEVICE_S9) {
if (aac->max_msix > 1) {
for (i = 0; i < aac->max_msix; i++) {
if (irq_set_affinity_hint(
aac->msixentry[i].vector,
NULL)) {
printk(KERN_ERR "%s%d: Failed to reset IRQ affinity for cpu %d\n",
aac->name,
aac->id,
cpu);
}
cpu = cpumask_next(cpu,
cpu_online_mask);
free_irq(aac->msixentry[i].vector,
free_irq(pci_irq_vector(aac->pdev, i),
&(aac->aac_msix[i]));
}
} else {
......@@ -1350,7 +1338,7 @@ static void aac_release_resources(struct aac_dev *aac)
aac->pdev->device == PMC_DEVICE_S9) {
if (aac->max_msix > 1) {
for (i = 0; i < aac->max_msix; i++)
free_irq(aac->msixentry[i].vector,
free_irq(pci_irq_vector(aac->pdev, i),
&(aac->aac_msix[i]));
} else {
free_irq(aac->pdev->irq, &(aac->aac_msix[0]));
......@@ -1396,13 +1384,13 @@ static int aac_acquire_resources(struct aac_dev *dev)
dev->aac_msix[i].vector_no = i;
dev->aac_msix[i].dev = dev;
if (request_irq(dev->msixentry[i].vector,
if (request_irq(pci_irq_vector(dev->pdev, i),
dev->a_ops.adapter_intr,
0, "aacraid", &(dev->aac_msix[i]))) {
printk(KERN_ERR "%s%d: Failed to register IRQ for vector %d.\n",
name, instance, i);
for (j = 0 ; j < i ; j++)
free_irq(dev->msixentry[j].vector,
free_irq(pci_irq_vector(dev->pdev, j),
&(dev->aac_msix[j]));
pci_disable_msix(dev->pdev);
goto error_iounmap;
......
......@@ -11030,6 +11030,9 @@ static int advansys_board_found(struct Scsi_Host *shost, unsigned int iop,
ASC_DBG(2, "AdvInitGetConfig()\n");
ret = AdvInitGetConfig(pdev, shost) ? -ENODEV : 0;
#else
share_irq = 0;
ret = -ENODEV;
#endif /* CONFIG_PCI */
}
......
......@@ -228,8 +228,11 @@ static int asd_init_scbs(struct asd_ha_struct *asd_ha)
bitmap_bytes = (asd_ha->seq.tc_index_bitmap_bits+7)/8;
bitmap_bytes = BITS_TO_LONGS(bitmap_bytes*8)*sizeof(unsigned long);
asd_ha->seq.tc_index_bitmap = kzalloc(bitmap_bytes, GFP_KERNEL);
if (!asd_ha->seq.tc_index_bitmap)
if (!asd_ha->seq.tc_index_bitmap) {
kfree(asd_ha->seq.tc_index_array);
asd_ha->seq.tc_index_array = NULL;
return -ENOMEM;
}
spin_lock_init(&seq->tc_index_lock);
......
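The aic94xx fix above is a textbook partial-allocation unwind: when the second allocation fails, the first must be released and its pointer cleared before returning, or the caller leaks it. The same shape in isolation, with hypothetical names:

#include <linux/slab.h>

struct demo_seq {			/* hypothetical container */
	void **index_array;
	unsigned long *index_bitmap;
};

static int demo_alloc_pair(struct demo_seq *s, unsigned int n)
{
	s->index_array = kcalloc(n, sizeof(*s->index_array), GFP_KERNEL);
	if (!s->index_array)
		return -ENOMEM;

	s->index_bitmap = kzalloc(BITS_TO_LONGS(n) * sizeof(unsigned long),
				  GFP_KERNEL);
	if (!s->index_bitmap) {
		kfree(s->index_array);	/* undo the earlier allocation */
		s->index_array = NULL;	/* don't leave a dangling pointer */
		return -ENOMEM;
	}
	return 0;
}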
......@@ -629,7 +629,6 @@ struct AdapterControlBlock
struct pci_dev * pdev;
struct Scsi_Host * host;
unsigned long vir2phy_offset;
struct msix_entry entries[ARCMST_NUM_MSIX_VECTORS];
/* Offset is used in making arc cdb physical to virtual calculations */
uint32_t outbound_int_enable;
uint32_t cdb_phyaddr_hi32;
......@@ -671,8 +670,6 @@ struct AdapterControlBlock
/* iop init */
#define ACB_F_ABORT 0x0200
#define ACB_F_FIRMWARE_TRAP 0x0400
#define ACB_F_MSI_ENABLED 0x1000
#define ACB_F_MSIX_ENABLED 0x2000
struct CommandControlBlock * pccb_pool[ARCMSR_MAX_FREECCB_NUM];
/* used for memory free */
struct list_head ccb_free_list;
......@@ -725,7 +722,7 @@ struct AdapterControlBlock
atomic_t rq_map_token;
atomic_t ante_token_value;
uint32_t maxOutstanding;
int msix_vector_count;
int vector_count;
};/* HW_DEVICE_EXTENSION */
/*
*******************************************************************************
......
......@@ -720,51 +720,39 @@ static void arcmsr_message_isr_bh_fn(struct work_struct *work)
static int
arcmsr_request_irq(struct pci_dev *pdev, struct AdapterControlBlock *acb)
{
int i, j, r;
struct msix_entry entries[ARCMST_NUM_MSIX_VECTORS];
for (i = 0; i < ARCMST_NUM_MSIX_VECTORS; i++)
entries[i].entry = i;
r = pci_enable_msix_range(pdev, entries, 1, ARCMST_NUM_MSIX_VECTORS);
if (r < 0)
goto msi_int;
acb->msix_vector_count = r;
for (i = 0; i < r; i++) {
if (request_irq(entries[i].vector,
arcmsr_do_interrupt, 0, "arcmsr", acb)) {
unsigned long flags;
int nvec, i;
nvec = pci_alloc_irq_vectors(pdev, 1, ARCMST_NUM_MSIX_VECTORS,
PCI_IRQ_MSIX);
if (nvec > 0) {
pr_info("arcmsr%d: msi-x enabled\n", acb->host->host_no);
flags = 0;
} else {
nvec = pci_alloc_irq_vectors(pdev, 1, 1,
PCI_IRQ_MSI | PCI_IRQ_LEGACY);
if (nvec < 1)
return FAILED;
flags = IRQF_SHARED;
}
acb->vector_count = nvec;
for (i = 0; i < nvec; i++) {
if (request_irq(pci_irq_vector(pdev, i), arcmsr_do_interrupt,
flags, "arcmsr", acb)) {
pr_warn("arcmsr%d: request_irq =%d failed!\n",
acb->host->host_no, entries[i].vector);
for (j = 0 ; j < i ; j++)
free_irq(entries[j].vector, acb);
pci_disable_msix(pdev);
goto msi_int;
acb->host->host_no, pci_irq_vector(pdev, i));
goto out_free_irq;
}
acb->entries[i] = entries[i];
}
acb->acb_flags |= ACB_F_MSIX_ENABLED;
pr_info("arcmsr%d: msi-x enabled\n", acb->host->host_no);
return SUCCESS;
msi_int:
if (pci_enable_msi_exact(pdev, 1) < 0)
goto legacy_int;
if (request_irq(pdev->irq, arcmsr_do_interrupt,
IRQF_SHARED, "arcmsr", acb)) {
pr_warn("arcmsr%d: request_irq =%d failed!\n",
acb->host->host_no, pdev->irq);
pci_disable_msi(pdev);
goto legacy_int;
}
acb->acb_flags |= ACB_F_MSI_ENABLED;
pr_info("arcmsr%d: msi enabled\n", acb->host->host_no);
return SUCCESS;
legacy_int:
if (request_irq(pdev->irq, arcmsr_do_interrupt,
IRQF_SHARED, "arcmsr", acb)) {
pr_warn("arcmsr%d: request_irq = %d failed!\n",
acb->host->host_no, pdev->irq);
return FAILED;
}
return SUCCESS;
out_free_irq:
while (--i >= 0)
free_irq(pci_irq_vector(pdev, i), acb);
pci_free_irq_vectors(pdev);
return FAILED;
}
static int arcmsr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
......@@ -886,15 +874,9 @@ static void arcmsr_free_irq(struct pci_dev *pdev,
{
int i;
if (acb->acb_flags & ACB_F_MSI_ENABLED) {
free_irq(pdev->irq, acb);
pci_disable_msi(pdev);
} else if (acb->acb_flags & ACB_F_MSIX_ENABLED) {
for (i = 0; i < acb->msix_vector_count; i++)
free_irq(acb->entries[i].vector, acb);
pci_disable_msix(pdev);
} else
free_irq(pdev->irq, acb);
for (i = 0; i < acb->vector_count; i++)
free_irq(pci_irq_vector(pdev, i), acb);
pci_free_irq_vectors(pdev);
}
static int arcmsr_suspend(struct pci_dev *pdev, pm_message_t state)
......
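Worth noting about the arcmsr teardown above: pci_irq_vector(pdev, 0) returns pdev->irq when only a legacy INTx (or single MSI) interrupt was allocated, so one loop over vector_count replaces the old three-way MSI/MSI-X/legacy branches in arcmsr_free_irq(). The same idea in isolation:

/* one teardown path for MSI-X, MSI and INTx alike (sketch) */
static void demo_free_irqs(struct pci_dev *pdev, void *dev_id, int nvec)
{
	int i;

	for (i = 0; i < nvec; i++)
		free_irq(pci_irq_vector(pdev, i), dev_id);
	pci_free_irq_vectors(pdev);
}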
......@@ -14,49 +14,48 @@
#include <scsi/scsi_host.h>
#define priv(host) ((struct NCR5380_hostdata *)(host)->hostdata)
#define NCR5380_read(reg) cumanascsi_read(instance, reg)
#define NCR5380_write(reg, value) cumanascsi_write(instance, reg, value)
#define NCR5380_read(reg) cumanascsi_read(hostdata, reg)
#define NCR5380_write(reg, value) cumanascsi_write(hostdata, reg, value)
#define NCR5380_dma_xfer_len(instance, cmd, phase) (cmd->transfersize)
#define NCR5380_dma_xfer_len cumanascsi_dma_xfer_len
#define NCR5380_dma_recv_setup cumanascsi_pread
#define NCR5380_dma_send_setup cumanascsi_pwrite
#define NCR5380_dma_residual(instance) (0)
#define NCR5380_dma_residual NCR5380_dma_residual_none
#define NCR5380_intr cumanascsi_intr
#define NCR5380_queue_command cumanascsi_queue_command
#define NCR5380_info cumanascsi_info
#define NCR5380_implementation_fields \
unsigned ctrl; \
void __iomem *base; \
void __iomem *dma
unsigned ctrl
#include "../NCR5380.h"
struct NCR5380_hostdata;
static u8 cumanascsi_read(struct NCR5380_hostdata *, unsigned int);
static void cumanascsi_write(struct NCR5380_hostdata *, unsigned int, u8);
void cumanascsi_setup(char *str, int *ints)
{
}
#include "../NCR5380.h"
#define CTRL 0x16fc
#define STAT 0x2004
#define L(v) (((v)<<16)|((v) & 0x0000ffff))
#define H(v) (((v)>>16)|((v) & 0xffff0000))
static inline int cumanascsi_pwrite(struct Scsi_Host *host,
static inline int cumanascsi_pwrite(struct NCR5380_hostdata *hostdata,
unsigned char *addr, int len)
{
unsigned long *laddr;
void __iomem *dma = priv(host)->dma + 0x2000;
u8 __iomem *base = hostdata->io;
u8 __iomem *dma = hostdata->pdma_io + 0x2000;
if(!len) return 0;
writeb(0x02, priv(host)->base + CTRL);
writeb(0x02, base + CTRL);
laddr = (unsigned long *)addr;
while(len >= 32)
{
unsigned int status;
unsigned long v;
status = readb(priv(host)->base + STAT);
status = readb(base + STAT);
if(status & 0x80)
goto end;
if(!(status & 0x40))
......@@ -75,12 +74,12 @@ static inline int cumanascsi_pwrite(struct Scsi_Host *host,
}
addr = (unsigned char *)laddr;
writeb(0x12, priv(host)->base + CTRL);
writeb(0x12, base + CTRL);
while(len > 0)
{
unsigned int status;
status = readb(priv(host)->base + STAT);
status = readb(base + STAT);
if(status & 0x80)
goto end;
if(status & 0x40)
......@@ -90,7 +89,7 @@ static inline int cumanascsi_pwrite(struct Scsi_Host *host,
break;
}
status = readb(priv(host)->base + STAT);
status = readb(base + STAT);
if(status & 0x80)
goto end;
if(status & 0x40)
......@@ -101,27 +100,28 @@ static inline int cumanascsi_pwrite(struct Scsi_Host *host,
}
}
end:
writeb(priv(host)->ctrl | 0x40, priv(host)->base + CTRL);
writeb(hostdata->ctrl | 0x40, base + CTRL);
if (len)
return -1;
return 0;
}
static inline int cumanascsi_pread(struct Scsi_Host *host,
static inline int cumanascsi_pread(struct NCR5380_hostdata *hostdata,
unsigned char *addr, int len)
{
unsigned long *laddr;
void __iomem *dma = priv(host)->dma + 0x2000;
u8 __iomem *base = hostdata->io;
u8 __iomem *dma = hostdata->pdma_io + 0x2000;
if(!len) return 0;
writeb(0x00, priv(host)->base + CTRL);
writeb(0x00, base + CTRL);
laddr = (unsigned long *)addr;
while(len >= 32)
{
unsigned int status;
status = readb(priv(host)->base + STAT);
status = readb(base + STAT);
if(status & 0x80)
goto end;
if(!(status & 0x40))
......@@ -140,12 +140,12 @@ static inline int cumanascsi_pread(struct Scsi_Host *host,
}
addr = (unsigned char *)laddr;
writeb(0x10, priv(host)->base + CTRL);
writeb(0x10, base + CTRL);
while(len > 0)
{
unsigned int status;
status = readb(priv(host)->base + STAT);
status = readb(base + STAT);
if(status & 0x80)
goto end;
if(status & 0x40)
......@@ -155,7 +155,7 @@ static inline int cumanascsi_pread(struct Scsi_Host *host,
break;
}
status = readb(priv(host)->base + STAT);
status = readb(base + STAT);
if(status & 0x80)
goto end;
if(status & 0x40)
......@@ -166,37 +166,45 @@ static inline int cumanascsi_pread(struct Scsi_Host *host,
}
}
end:
writeb(priv(host)->ctrl | 0x40, priv(host)->base + CTRL);
writeb(hostdata->ctrl | 0x40, base + CTRL);
if (len)
return -1;
return 0;
}
static unsigned char cumanascsi_read(struct Scsi_Host *host, unsigned int reg)
static int cumanascsi_dma_xfer_len(struct NCR5380_hostdata *hostdata,
struct scsi_cmnd *cmd)
{
return cmd->transfersize;
}
static u8 cumanascsi_read(struct NCR5380_hostdata *hostdata,
unsigned int reg)
{
void __iomem *base = priv(host)->base;
unsigned char val;
u8 __iomem *base = hostdata->io;
u8 val;
writeb(0, base + CTRL);
val = readb(base + 0x2100 + (reg << 2));
priv(host)->ctrl = 0x40;
hostdata->ctrl = 0x40;
writeb(0x40, base + CTRL);
return val;
}
static void cumanascsi_write(struct Scsi_Host *host, unsigned int reg, unsigned int value)
static void cumanascsi_write(struct NCR5380_hostdata *hostdata,
unsigned int reg, u8 value)
{
void __iomem *base = priv(host)->base;
u8 __iomem *base = hostdata->io;
writeb(0, base + CTRL);
writeb(value, base + 0x2100 + (reg << 2));
priv(host)->ctrl = 0x40;
hostdata->ctrl = 0x40;
writeb(0x40, base + CTRL);
}
......@@ -235,11 +243,11 @@ static int cumanascsi1_probe(struct expansion_card *ec,
goto out_release;
}
priv(host)->base = ioremap(ecard_resource_start(ec, ECARD_RES_IOCSLOW),
ecard_resource_len(ec, ECARD_RES_IOCSLOW));
priv(host)->dma = ioremap(ecard_resource_start(ec, ECARD_RES_MEMC),
ecard_resource_len(ec, ECARD_RES_MEMC));
if (!priv(host)->base || !priv(host)->dma) {
priv(host)->io = ioremap(ecard_resource_start(ec, ECARD_RES_IOCSLOW),
ecard_resource_len(ec, ECARD_RES_IOCSLOW));
priv(host)->pdma_io = ioremap(ecard_resource_start(ec, ECARD_RES_MEMC),
ecard_resource_len(ec, ECARD_RES_MEMC));
if (!priv(host)->io || !priv(host)->pdma_io) {
ret = -ENOMEM;
goto out_unmap;
}
......@@ -253,7 +261,7 @@ static int cumanascsi1_probe(struct expansion_card *ec,
NCR5380_maybe_reset_bus(host);
priv(host)->ctrl = 0;
writeb(0, priv(host)->base + CTRL);
writeb(0, priv(host)->io + CTRL);
ret = request_irq(host->irq, cumanascsi_intr, 0,
"CumanaSCSI-1", host);
......@@ -275,8 +283,8 @@ static int cumanascsi1_probe(struct expansion_card *ec,
out_exit:
NCR5380_exit(host);
out_unmap:
iounmap(priv(host)->base);
iounmap(priv(host)->dma);
iounmap(priv(host)->io);
iounmap(priv(host)->pdma_io);
scsi_host_put(host);
out_release:
ecard_release_resources(ec);
......@@ -287,15 +295,17 @@ static int cumanascsi1_probe(struct expansion_card *ec,
static void cumanascsi1_remove(struct expansion_card *ec)
{
struct Scsi_Host *host = ecard_get_drvdata(ec);
void __iomem *base = priv(host)->io;
void __iomem *dma = priv(host)->pdma_io;
ecard_set_drvdata(ec, NULL);
scsi_remove_host(host);
free_irq(host->irq, host);
NCR5380_exit(host);
iounmap(priv(host)->base);
iounmap(priv(host)->dma);
scsi_host_put(host);
iounmap(base);
iounmap(dma);
ecard_release_resources(ec);
}
......
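A subtle point in the cumanascsi1_remove() change above: NCR5380_hostdata is embedded in the Scsi_Host allocation, so once scsi_host_put() drops the last reference the priv(host) pointers may be gone. The remove path therefore copies the iomapped addresses into locals first and unmaps through the copies. A condensed sketch of that ordering:

static void demo_remove(struct expansion_card *ec)
{
	struct Scsi_Host *host = ecard_get_drvdata(ec);
	void __iomem *io = priv(host)->io;	/* copy while hostdata lives */

	ecard_set_drvdata(ec, NULL);
	scsi_remove_host(host);
	NCR5380_exit(host);
	scsi_host_put(host);	/* hostdata may be freed past this point */
	iounmap(io);		/* safe: uses the saved copy */
	ecard_release_resources(ec);
}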
......@@ -16,21 +16,18 @@
#define priv(host) ((struct NCR5380_hostdata *)(host)->hostdata)
#define NCR5380_read(reg) \
readb(priv(instance)->base + ((reg) << 2))
#define NCR5380_write(reg, value) \
writeb(value, priv(instance)->base + ((reg) << 2))
#define NCR5380_read(reg) readb(hostdata->io + ((reg) << 2))
#define NCR5380_write(reg, value) writeb(value, hostdata->io + ((reg) << 2))
#define NCR5380_dma_xfer_len(instance, cmd, phase) (0)
#define NCR5380_dma_xfer_len NCR5380_dma_xfer_none
#define NCR5380_dma_recv_setup oakscsi_pread
#define NCR5380_dma_send_setup oakscsi_pwrite
#define NCR5380_dma_residual(instance) (0)
#define NCR5380_dma_residual NCR5380_dma_residual_none
#define NCR5380_queue_command oakscsi_queue_command
#define NCR5380_info oakscsi_info
#define NCR5380_implementation_fields \
void __iomem *base
#define NCR5380_implementation_fields /* none */
#include "../NCR5380.h"
......@@ -40,10 +37,10 @@
#define STAT ((128 + 16) << 2)
#define DATA ((128 + 8) << 2)
static inline int oakscsi_pwrite(struct Scsi_Host *instance,
static inline int oakscsi_pwrite(struct NCR5380_hostdata *hostdata,
unsigned char *addr, int len)
{
void __iomem *base = priv(instance)->base;
u8 __iomem *base = hostdata->io;
printk("writing %p len %d\n",addr, len);
......@@ -55,10 +52,11 @@ printk("writing %p len %d\n",addr, len);
return 0;
}
static inline int oakscsi_pread(struct Scsi_Host *instance,
static inline int oakscsi_pread(struct NCR5380_hostdata *hostdata,
unsigned char *addr, int len)
{
void __iomem *base = priv(instance)->base;
u8 __iomem *base = hostdata->io;
printk("reading %p len %d\n", addr, len);
while(len > 0)
{
......@@ -133,15 +131,14 @@ static int oakscsi_probe(struct expansion_card *ec, const struct ecard_id *id)
goto release;
}
priv(host)->base = ioremap(ecard_resource_start(ec, ECARD_RES_MEMC),
ecard_resource_len(ec, ECARD_RES_MEMC));
if (!priv(host)->base) {
priv(host)->io = ioremap(ecard_resource_start(ec, ECARD_RES_MEMC),
ecard_resource_len(ec, ECARD_RES_MEMC));
if (!priv(host)->io) {
ret = -ENOMEM;
goto unreg;
}
host->irq = NO_IRQ;
host->n_io_port = 255;
ret = NCR5380_init(host, FLAG_DMA_FIXUP | FLAG_LATE_DMA_SETUP);
if (ret)
......@@ -159,7 +156,7 @@ static int oakscsi_probe(struct expansion_card *ec, const struct ecard_id *id)
out_exit:
NCR5380_exit(host);
out_unmap:
iounmap(priv(host)->base);
iounmap(priv(host)->io);
unreg:
scsi_host_put(host);
release:
......@@ -171,13 +168,14 @@ static int oakscsi_probe(struct expansion_card *ec, const struct ecard_id *id)
static void oakscsi_remove(struct expansion_card *ec)
{
struct Scsi_Host *host = ecard_get_drvdata(ec);
void __iomem *base = priv(host)->io;
ecard_set_drvdata(ec, NULL);
scsi_remove_host(host);
NCR5380_exit(host);
iounmap(priv(host)->base);
scsi_host_put(host);
iounmap(base);
ecard_release_resources(ec);
}
......
......@@ -57,6 +57,9 @@
#define NCR5380_implementation_fields /* none */
static u8 (*atari_scsi_reg_read)(unsigned int);
static void (*atari_scsi_reg_write)(unsigned int, u8);
#define NCR5380_read(reg) atari_scsi_reg_read(reg)
#define NCR5380_write(reg, value) atari_scsi_reg_write(reg, value)
......@@ -64,14 +67,10 @@
#define NCR5380_abort atari_scsi_abort
#define NCR5380_info atari_scsi_info
#define NCR5380_dma_recv_setup(instance, data, count) \
atari_scsi_dma_setup(instance, data, count, 0)
#define NCR5380_dma_send_setup(instance, data, count) \
atari_scsi_dma_setup(instance, data, count, 1)
#define NCR5380_dma_residual(instance) \
atari_scsi_dma_residual(instance)
#define NCR5380_dma_xfer_len(instance, cmd, phase) \
atari_dma_xfer_len(cmd->SCp.this_residual, cmd, !((phase) & SR_IO))
#define NCR5380_dma_xfer_len atari_scsi_dma_xfer_len
#define NCR5380_dma_recv_setup atari_scsi_dma_recv_setup
#define NCR5380_dma_send_setup atari_scsi_dma_send_setup
#define NCR5380_dma_residual atari_scsi_dma_residual
#define NCR5380_acquire_dma_irq(instance) falcon_get_lock(instance)
#define NCR5380_release_dma_irq(instance) falcon_release_lock()
......@@ -126,9 +125,6 @@ static inline unsigned long SCSI_DMA_GETADR(void)
static void atari_scsi_fetch_restbytes(void);
static unsigned char (*atari_scsi_reg_read)(unsigned char reg);
static void (*atari_scsi_reg_write)(unsigned char reg, unsigned char value);
static unsigned long atari_dma_residual, atari_dma_startaddr;
static short atari_dma_active;
/* pointer to the dribble buffer */
......@@ -457,15 +453,14 @@ static int __init atari_scsi_setup(char *str)
__setup("atascsi=", atari_scsi_setup);
#endif /* !MODULE */
static unsigned long atari_scsi_dma_setup(struct Scsi_Host *instance,
static unsigned long atari_scsi_dma_setup(struct NCR5380_hostdata *hostdata,
void *data, unsigned long count,
int dir)
{
unsigned long addr = virt_to_phys(data);
dprintk(NDEBUG_DMA, "scsi%d: setting up dma, data = %p, phys = %lx, count = %ld, "
"dir = %d\n", instance->host_no, data, addr, count, dir);
dprintk(NDEBUG_DMA, "scsi%d: setting up dma, data = %p, phys = %lx, count = %ld, dir = %d\n",
hostdata->host->host_no, data, addr, count, dir);
if (!IS_A_TT() && !STRAM_ADDR(addr)) {
/* If we have a non-DMAable address on a Falcon, use the dribble
......@@ -522,8 +517,19 @@ static unsigned long atari_scsi_dma_setup(struct Scsi_Host *instance,
return count;
}
static inline int atari_scsi_dma_recv_setup(struct NCR5380_hostdata *hostdata,
unsigned char *data, int count)
{
return atari_scsi_dma_setup(hostdata, data, count, 0);
}
static inline int atari_scsi_dma_send_setup(struct NCR5380_hostdata *hostdata,
unsigned char *data, int count)
{
return atari_scsi_dma_setup(hostdata, data, count, 1);
}
static long atari_scsi_dma_residual(struct Scsi_Host *instance)
static int atari_scsi_dma_residual(struct NCR5380_hostdata *hostdata)
{
return atari_dma_residual;
}
......@@ -564,10 +570,11 @@ static int falcon_classify_cmd(struct scsi_cmnd *cmd)
* the overrun problem, so this question is academic :-)
*/
static unsigned long atari_dma_xfer_len(unsigned long wanted_len,
struct scsi_cmnd *cmd, int write_flag)
static int atari_scsi_dma_xfer_len(struct NCR5380_hostdata *hostdata,
struct scsi_cmnd *cmd)
{
unsigned long possible_len, limit;
int wanted_len = cmd->SCp.this_residual;
int possible_len, limit;
if (wanted_len < DMA_MIN_SIZE)
return 0;
......@@ -604,7 +611,7 @@ static unsigned long atari_dma_xfer_len(unsigned long wanted_len,
* use the dribble buffer and thus can do only STRAM_BUFFER_SIZE bytes.
*/
if (write_flag) {
if (cmd->sc_data_direction == DMA_TO_DEVICE) {
/* Write operation can always use the DMA, but the transfer size must
* be rounded up to the next multiple of 512 (atari_dma_setup() does
* this).
......@@ -644,8 +651,8 @@ static unsigned long atari_dma_xfer_len(unsigned long wanted_len,
possible_len = limit;
if (possible_len != wanted_len)
dprintk(NDEBUG_DMA, "Sorry, must cut DMA transfer size to %ld bytes "
"instead of %ld\n", possible_len, wanted_len);
dprintk(NDEBUG_DMA, "DMA transfer now %d bytes instead of %d\n",
possible_len, wanted_len);
return possible_len;
}
......@@ -658,26 +665,38 @@ static unsigned long atari_dma_xfer_len(unsigned long wanted_len,
* NCR5380_write call these functions via function pointers.
*/
static unsigned char atari_scsi_tt_reg_read(unsigned char reg)
static u8 atari_scsi_tt_reg_read(unsigned int reg)
{
return tt_scsi_regp[reg * 2];
}
static void atari_scsi_tt_reg_write(unsigned char reg, unsigned char value)
static void atari_scsi_tt_reg_write(unsigned int reg, u8 value)
{
tt_scsi_regp[reg * 2] = value;
}
static unsigned char atari_scsi_falcon_reg_read(unsigned char reg)
static u8 atari_scsi_falcon_reg_read(unsigned int reg)
{
dma_wd.dma_mode_status= (u_short)(0x88 + reg);
return (u_char)dma_wd.fdc_acces_seccount;
unsigned long flags;
u8 result;
reg += 0x88;
local_irq_save(flags);
dma_wd.dma_mode_status = (u_short)reg;
result = (u8)dma_wd.fdc_acces_seccount;
local_irq_restore(flags);
return result;
}
static void atari_scsi_falcon_reg_write(unsigned char reg, unsigned char value)
static void atari_scsi_falcon_reg_write(unsigned int reg, u8 value)
{
dma_wd.dma_mode_status = (u_short)(0x88 + reg);
unsigned long flags;
reg += 0x88;
local_irq_save(flags);
dma_wd.dma_mode_status = (u_short)reg;
dma_wd.fdc_acces_seccount = (u_short)value;
local_irq_restore(flags);
}
......
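The Falcon register accessors above close a real race: the 5380 is reached indirectly by writing a register index to dma_mode_status and then touching fdc_acces_seccount, and an interrupt landing between the two steps could redirect the access. Disabling local interrupts makes the pair atomic. The general shape of such an indexed-register access, as a sketch with generic pointer arguments:

static u8 indexed_reg_read(volatile u16 *index_reg, volatile u16 *data_reg,
			   unsigned int reg)
{
	unsigned long flags;
	u8 val;

	local_irq_save(flags);		/* no IRQ between select and access */
	*index_reg = 0x88 + reg;	/* select the 5380 register */
	val = (u8)*data_reg;		/* then read it */
	local_irq_restore(flags);

	return val;
}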
......@@ -3049,8 +3049,10 @@ static int beiscsi_create_eqs(struct beiscsi_hba *phba,
eq_vaddress = pci_alloc_consistent(phba->pcidev,
num_eq_pages * PAGE_SIZE,
&paddr);
if (!eq_vaddress)
if (!eq_vaddress) {
ret = -ENOMEM;
goto create_eq_error;
}
mem->va = eq_vaddress;
ret = be_fill_queue(eq, phba->params.num_eq_entries,
......@@ -3113,8 +3115,10 @@ static int beiscsi_create_cqs(struct beiscsi_hba *phba,
cq_vaddress = pci_alloc_consistent(phba->pcidev,
num_cq_pages * PAGE_SIZE,
&paddr);
if (!cq_vaddress)
if (!cq_vaddress) {
ret = -ENOMEM;
goto create_cq_error;
}
ret = be_fill_queue(cq, phba->params.num_cq_entries,
sizeof(struct sol_cqe), cq_vaddress);
......
......@@ -111,20 +111,24 @@ struct bfa_meminfo_s {
struct bfa_mem_kva_s kva_info;
};
/* BFA memory segment setup macros */
#define bfa_mem_dma_setup(_meminfo, _dm_ptr, _seg_sz) do { \
((bfa_mem_dma_t *)(_dm_ptr))->mem_len = (_seg_sz); \
if (_seg_sz) \
list_add_tail(&((bfa_mem_dma_t *)_dm_ptr)->qe, \
&(_meminfo)->dma_info.qe); \
} while (0)
/* BFA memory segment setup helpers */
static inline void bfa_mem_dma_setup(struct bfa_meminfo_s *meminfo,
struct bfa_mem_dma_s *dm_ptr,
size_t seg_sz)
{
dm_ptr->mem_len = seg_sz;
if (seg_sz)
list_add_tail(&dm_ptr->qe, &meminfo->dma_info.qe);
}
#define bfa_mem_kva_setup(_meminfo, _kva_ptr, _seg_sz) do { \
((bfa_mem_kva_t *)(_kva_ptr))->mem_len = (_seg_sz); \
if (_seg_sz) \
list_add_tail(&((bfa_mem_kva_t *)_kva_ptr)->qe, \
&(_meminfo)->kva_info.qe); \
} while (0)
static inline void bfa_mem_kva_setup(struct bfa_meminfo_s *meminfo,
struct bfa_mem_kva_s *kva_ptr,
size_t seg_sz)
{
kva_ptr->mem_len = seg_sz;
if (seg_sz)
list_add_tail(&kva_ptr->qe, &meminfo->kva_info.qe);
}
/* BFA dma memory segments iterator */
#define bfa_mem_dma_sptr(_mod, _i) (&(_mod)->dma_seg[(_i)])
......
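The bfa change above swaps do-while macros for static inline functions; behaviour is identical, but arguments are now evaluated once and type-checked instead of being force-cast. A hypothetical misuse the old macro would have silently accepted:

/* old macro: the internal (bfa_mem_dma_t *) cast hid the mistake */
bfa_mem_dma_setup(meminfo, kva_ptr, sz);	/* compiled; wrong type */

/* as an inline function, the same call now fails to compile with
 * "incompatible pointer type ... expected 'struct bfa_mem_dma_s *'"
 */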
......@@ -3130,11 +3130,12 @@ bfad_iocmd_handler(struct bfad_s *bfad, unsigned int cmd, void *iocmd,
}
static int
bfad_im_bsg_vendor_request(struct fc_bsg_job *job)
bfad_im_bsg_vendor_request(struct bsg_job *job)
{
uint32_t vendor_cmd = job->request->rqst_data.h_vendor.vendor_cmd[0];
struct bfad_im_port_s *im_port =
(struct bfad_im_port_s *) job->shost->hostdata[0];
struct fc_bsg_request *bsg_request = job->request;
struct fc_bsg_reply *bsg_reply = job->reply;
uint32_t vendor_cmd = bsg_request->rqst_data.h_vendor.vendor_cmd[0];
struct bfad_im_port_s *im_port = shost_priv(fc_bsg_to_shost(job));
struct bfad_s *bfad = im_port->bfad;
struct request_queue *request_q = job->req->q;
void *payload_kbuf;
......@@ -3175,18 +3176,19 @@ bfad_im_bsg_vendor_request(struct fc_bsg_job *job)
/* Fill the BSG job reply data */
job->reply_len = job->reply_payload.payload_len;
job->reply->reply_payload_rcv_len = job->reply_payload.payload_len;
job->reply->result = rc;
bsg_reply->reply_payload_rcv_len = job->reply_payload.payload_len;
bsg_reply->result = rc;
job->job_done(job);
bsg_job_done(job, bsg_reply->result,
bsg_reply->reply_payload_rcv_len);
return rc;
error:
/* free the command buffer */
kfree(payload_kbuf);
out:
job->reply->result = rc;
bsg_reply->result = rc;
job->reply_len = sizeof(uint32_t);
job->reply->reply_payload_rcv_len = 0;
bsg_reply->reply_payload_rcv_len = 0;
return rc;
}
......@@ -3312,7 +3314,7 @@ bfad_fcxp_free_mem(struct bfad_s *bfad, struct bfad_buf_info *buf_base,
}
int
bfad_fcxp_bsg_send(struct fc_bsg_job *job, struct bfad_fcxp *drv_fcxp,
bfad_fcxp_bsg_send(struct bsg_job *job, struct bfad_fcxp *drv_fcxp,
bfa_bsg_fcpt_t *bsg_fcpt)
{
struct bfa_fcxp_s *hal_fcxp;
......@@ -3352,28 +3354,29 @@ bfad_fcxp_bsg_send(struct fc_bsg_job *job, struct bfad_fcxp *drv_fcxp,
}
int
bfad_im_bsg_els_ct_request(struct fc_bsg_job *job)
bfad_im_bsg_els_ct_request(struct bsg_job *job)
{
struct bfa_bsg_data *bsg_data;
struct bfad_im_port_s *im_port =
(struct bfad_im_port_s *) job->shost->hostdata[0];
struct bfad_im_port_s *im_port = shost_priv(fc_bsg_to_shost(job));
struct bfad_s *bfad = im_port->bfad;
bfa_bsg_fcpt_t *bsg_fcpt;
struct bfad_fcxp *drv_fcxp;
struct bfa_fcs_lport_s *fcs_port;
struct bfa_fcs_rport_s *fcs_rport;
uint32_t command_type = job->request->msgcode;
struct fc_bsg_request *bsg_request = job->request;
struct fc_bsg_reply *bsg_reply = job->reply;
uint32_t command_type = bsg_request->msgcode;
unsigned long flags;
struct bfad_buf_info *rsp_buf_info;
void *req_kbuf = NULL, *rsp_kbuf = NULL;
int rc = -EINVAL;
job->reply_len = sizeof(uint32_t); /* At least a uint32_t reply_len */
job->reply->reply_payload_rcv_len = 0;
bsg_reply->reply_payload_rcv_len = 0;
/* Get the payload passed in from userspace */
bsg_data = (struct bfa_bsg_data *) (((char *)job->request) +
sizeof(struct fc_bsg_request));
bsg_data = (struct bfa_bsg_data *) (((char *)bsg_request) +
sizeof(struct fc_bsg_request));
if (bsg_data == NULL)
goto out;
......@@ -3517,13 +3520,13 @@ bfad_im_bsg_els_ct_request(struct fc_bsg_job *job)
/* fill the job->reply data */
if (drv_fcxp->req_status == BFA_STATUS_OK) {
job->reply_len = drv_fcxp->rsp_len;
job->reply->reply_payload_rcv_len = drv_fcxp->rsp_len;
job->reply->reply_data.ctels_reply.status = FC_CTELS_STATUS_OK;
bsg_reply->reply_payload_rcv_len = drv_fcxp->rsp_len;
bsg_reply->reply_data.ctels_reply.status = FC_CTELS_STATUS_OK;
} else {
job->reply->reply_payload_rcv_len =
bsg_reply->reply_payload_rcv_len =
sizeof(struct fc_bsg_ctels_reply);
job->reply_len = sizeof(uint32_t);
job->reply->reply_data.ctels_reply.status =
bsg_reply->reply_data.ctels_reply.status =
FC_CTELS_STATUS_REJECT;
}
......@@ -3549,20 +3552,23 @@ bfad_im_bsg_els_ct_request(struct fc_bsg_job *job)
kfree(bsg_fcpt);
kfree(drv_fcxp);
out:
job->reply->result = rc;
bsg_reply->result = rc;
if (rc == BFA_STATUS_OK)
job->job_done(job);
bsg_job_done(job, bsg_reply->result,
bsg_reply->reply_payload_rcv_len);
return rc;
}
int
bfad_im_bsg_request(struct fc_bsg_job *job)
bfad_im_bsg_request(struct bsg_job *job)
{
struct fc_bsg_request *bsg_request = job->request;
struct fc_bsg_reply *bsg_reply = job->reply;
uint32_t rc = BFA_STATUS_OK;
switch (job->request->msgcode) {
switch (bsg_request->msgcode) {
case FC_BSG_HST_VENDOR:
/* Process BSG HST Vendor requests */
rc = bfad_im_bsg_vendor_request(job);
......@@ -3575,8 +3581,8 @@ bfad_im_bsg_request(struct fc_bsg_job *job)
rc = bfad_im_bsg_els_ct_request(job);
break;
default:
job->reply->result = rc = -EINVAL;
job->reply->reply_payload_rcv_len = 0;
bsg_reply->result = rc = -EINVAL;
bsg_reply->reply_payload_rcv_len = 0;
break;
}
......@@ -3584,7 +3590,7 @@ bfad_im_bsg_request(struct fc_bsg_job *job)
}
int
bfad_im_bsg_timeout(struct fc_bsg_job *job)
bfad_im_bsg_timeout(struct bsg_job *job)
{
/* Don't complete the BSG job request - return -EAGAIN
* to reset bsg job timeout : for ELS/CT pass thru we
......
......@@ -166,8 +166,8 @@ extern struct device_attribute *bfad_im_vport_attrs[];
irqreturn_t bfad_intx(int irq, void *dev_id);
int bfad_im_bsg_request(struct fc_bsg_job *job);
int bfad_im_bsg_timeout(struct fc_bsg_job *job);
int bfad_im_bsg_request(struct bsg_job *job);
int bfad_im_bsg_timeout(struct bsg_job *job);
/*
* Macro to set the SCSI device sdev_bflags - sdev_bflags are used by the
......
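The bfad hunks above follow the tree-wide fc_bsg_job to bsg_job conversion: the request/reply structures are reached through job->request and job->reply, the Scsi_Host through fc_bsg_to_shost(), and completion goes through bsg_job_done() rather than the old job->job_done() callback. A minimal converted handler might look like this (hypothetical driver name):

#include <scsi/scsi_transport_fc.h>

static int demo_bsg_request(struct bsg_job *job)
{
	struct fc_bsg_request *bsg_request = job->request;
	struct fc_bsg_reply *bsg_reply = job->reply;
	struct Scsi_Host *shost = fc_bsg_to_shost(job);
	int rc;

	pr_debug("bsg msgcode %u on host %u\n",
		 bsg_request->msgcode, shost->host_no);

	switch (bsg_request->msgcode) {
	case FC_BSG_HST_VENDOR:
		rc = 0;			/* ... handle the request ... */
		break;
	default:
		rc = -EINVAL;
		break;
	}

	bsg_reply->result = rc;
	if (!rc)
		bsg_job_done(job, bsg_reply->result,
			     bsg_reply->reply_payload_rcv_len);
	return rc;
}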
......@@ -970,7 +970,6 @@ static int bnx2fc_libfc_config(struct fc_lport *lport)
sizeof(struct libfc_function_template));
fc_elsct_init(lport);
fc_exch_init(lport);
fc_rport_init(lport);
fc_disc_init(lport);
fc_disc_config(lport, lport);
return 0;
......
......@@ -80,7 +80,6 @@ static void bnx2fc_offload_session(struct fcoe_port *port,
struct bnx2fc_rport *tgt,
struct fc_rport_priv *rdata)
{
struct fc_lport *lport = rdata->local_port;
struct fc_rport *rport = rdata->rport;
struct bnx2fc_interface *interface = port->priv;
struct bnx2fc_hba *hba = interface->hba;
......@@ -160,7 +159,7 @@ static void bnx2fc_offload_session(struct fcoe_port *port,
tgt_init_err:
if (tgt->fcoe_conn_id != -1)
bnx2fc_free_conn_id(hba, tgt->fcoe_conn_id);
lport->tt.rport_logoff(rdata);
fc_rport_logoff(rdata);
}
void bnx2fc_flush_active_ios(struct bnx2fc_rport *tgt)
......
......@@ -1411,7 +1411,7 @@ static int init_act_open(struct cxgbi_sock *csk)
csk->atid = cxgb4_alloc_atid(lldi->tids, csk);
if (csk->atid < 0) {
pr_err("%s, NO atid available.\n", ndev->name);
return -EINVAL;
goto rel_resource_without_clip;
}
cxgbi_sock_set_flag(csk, CTPF_HAS_ATID);
cxgbi_sock_get(csk);
......
......@@ -19,6 +19,7 @@
#include <linux/rwsem.h>
#include <linux/types.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
extern const struct file_operations cxlflash_cxl_fops;
......@@ -62,11 +63,6 @@ static inline void check_sizes(void)
/* AFU defines a fixed size of 4K for command buffers (borrow 4K page define) */
#define CMD_BUFSIZE SIZE_4K
/* flags in IOA status area for host use */
#define B_DONE 0x01
#define B_ERROR 0x02 /* set with B_DONE */
#define B_TIMEOUT 0x04 /* set with B_DONE & B_ERROR */
enum cxlflash_lr_state {
LINK_RESET_INVALID,
LINK_RESET_REQUIRED,
......@@ -132,12 +128,9 @@ struct cxlflash_cfg {
struct afu_cmd {
struct sisl_ioarcb rcb; /* IOARCB (cache line aligned) */
struct sisl_ioasa sa; /* IOASA must follow IOARCB */
spinlock_t slock;
struct completion cevent;
char *buf; /* per command buffer */
struct afu *parent;
int slot;
atomic_t free;
struct scsi_cmnd *scp;
struct completion cevent;
u8 cmd_tmf:1;
......@@ -147,19 +140,31 @@ struct afu_cmd {
*/
} __aligned(cache_line_size());
static inline struct afu_cmd *sc_to_afuc(struct scsi_cmnd *sc)
{
return PTR_ALIGN(scsi_cmd_priv(sc), __alignof__(struct afu_cmd));
}
static inline struct afu_cmd *sc_to_afucz(struct scsi_cmnd *sc)
{
struct afu_cmd *afuc = sc_to_afuc(sc);
memset(afuc, 0, sizeof(*afuc));
return afuc;
}
struct afu {
/* Stuff requiring alignment go first. */
u64 rrq_entry[NUM_RRQ_ENTRY]; /* 2K RRQ */
/*
* Command & data for AFU commands.
*/
struct afu_cmd cmd[CXLFLASH_NUM_CMDS];
/* Beware of alignment till here. Preferably introduce new
* fields after this point
*/
int (*send_cmd)(struct afu *, struct afu_cmd *);
void (*context_reset)(struct afu_cmd *);
/* AFU HW */
struct cxl_ioctl_start_work work;
struct cxlflash_afu_map __iomem *afu_map; /* entire MMIO map */
......@@ -173,10 +178,10 @@ struct afu {
u64 *hrrq_end;
u64 *hrrq_curr;
bool toggle;
bool read_room;
atomic64_t room;
atomic_t cmds_active; /* Number of currently active AFU commands */
s64 room;
spinlock_t rrin_slock; /* Lock to rrin queuing and cmd_room updates */
u64 hb;
u32 cmd_couts; /* Number of command checkouts */
u32 internal_lun; /* User-desired LUN mode for this AFU */
char version[16];
......
......@@ -254,8 +254,14 @@ int cxlflash_manage_lun(struct scsi_device *sdev,
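The sc_to_afuc()/sc_to_afucz() helpers above are one half of a scheme completed by the scsi_host_template change near the end of this series: the midlayer reserves per-command private space with no particular alignment, the extra alignof-1 bytes guarantee a cache-line-aligned afu_cmd fits inside it, and PTR_ALIGN() locates it, which is what lets the driver's private command pool go away. The two halves side by side (both taken from the diff):

/* 1) reserve worst-case space in the host template ...            */
.cmd_size = sizeof(struct afu_cmd) + __alignof__(struct afu_cmd) - 1,

/* 2) ... then align within the private area on each lookup:
 *    PTR_ALIGN() advances at most alignof-1 bytes, so the aligned
 *    command always fits inside the reserved region.
 */
static inline struct afu_cmd *sc_to_afuc(struct scsi_cmnd *sc)
{
	return PTR_ALIGN(scsi_cmd_priv(sc), __alignof__(struct afu_cmd));
}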
if (lli->parent->mode != MODE_NONE)
rc = -EBUSY;
else {
/*
* Clean up local LUN for this port and reset table
* tracking when no more references exist.
*/
sdev->hostdata = NULL;
lli->port_sel &= ~CHAN2PORT(chan);
if (lli->port_sel == 0U)
lli->in_table = false;
}
}
......
......@@ -34,67 +34,6 @@ MODULE_AUTHOR("Manoj N. Kumar <manoj@linux.vnet.ibm.com>");
MODULE_AUTHOR("Matthew R. Ochs <mrochs@linux.vnet.ibm.com>");
MODULE_LICENSE("GPL");
/**
* cmd_checkout() - checks out an AFU command
* @afu: AFU to checkout from.
*
* Commands are checked out in a round-robin fashion. Note that since
* the command pool is larger than the hardware queue, the majority of
* times we will only loop once or twice before getting a command. The
* buffer and CDB within the command are initialized (zeroed) prior to
* returning.
*
* Return: The checked out command or NULL when command pool is empty.
*/
static struct afu_cmd *cmd_checkout(struct afu *afu)
{
int k, dec = CXLFLASH_NUM_CMDS;
struct afu_cmd *cmd;
while (dec--) {
k = (afu->cmd_couts++ & (CXLFLASH_NUM_CMDS - 1));
cmd = &afu->cmd[k];
if (!atomic_dec_if_positive(&cmd->free)) {
pr_devel("%s: returning found index=%d cmd=%p\n",
__func__, cmd->slot, cmd);
memset(cmd->buf, 0, CMD_BUFSIZE);
memset(cmd->rcb.cdb, 0, sizeof(cmd->rcb.cdb));
return cmd;
}
}
return NULL;
}
/**
* cmd_checkin() - checks in an AFU command
* @cmd: AFU command to checkin.
*
* Safe to pass commands that have already been checked in. Several
* internal tracking fields are reset as part of the checkin. Note
* that these are intentionally reset prior to toggling the free bit
* to avoid clobbering values in the event that the command is checked
* out right away.
*/
static void cmd_checkin(struct afu_cmd *cmd)
{
cmd->rcb.scp = NULL;
cmd->rcb.timeout = 0;
cmd->sa.ioasc = 0;
cmd->cmd_tmf = false;
cmd->sa.host_use[0] = 0; /* clears both completion and retry bytes */
if (unlikely(atomic_inc_return(&cmd->free) != 1)) {
pr_err("%s: Freeing cmd (%d) that is not in use!\n",
__func__, cmd->slot);
return;
}
pr_devel("%s: released cmd %p index=%d\n", __func__, cmd, cmd->slot);
}
/**
* process_cmd_err() - command error handler
* @cmd: AFU command that experienced the error.
......@@ -212,7 +151,7 @@ static void process_cmd_err(struct afu_cmd *cmd, struct scsi_cmnd *scp)
*
* Prepares and submits command that has either completed or timed out to
* the SCSI stack. Checks AFU command back into command pool for non-internal
* (rcb.scp populated) commands.
* (cmd->scp populated) commands.
*/
static void cmd_complete(struct afu_cmd *cmd)
{
......@@ -222,19 +161,14 @@ static void cmd_complete(struct afu_cmd *cmd)
struct cxlflash_cfg *cfg = afu->parent;
bool cmd_is_tmf;
spin_lock_irqsave(&cmd->slock, lock_flags);
cmd->sa.host_use_b[0] |= B_DONE;
spin_unlock_irqrestore(&cmd->slock, lock_flags);
if (cmd->rcb.scp) {
scp = cmd->rcb.scp;
if (cmd->scp) {
scp = cmd->scp;
if (unlikely(cmd->sa.ioasc))
process_cmd_err(cmd, scp);
else
scp->result = (DID_OK << 16);
cmd_is_tmf = cmd->cmd_tmf;
cmd_checkin(cmd); /* Don't use cmd after here */
pr_debug_ratelimited("%s: calling scsi_done scp=%p result=%X "
"ioasc=%d\n", __func__, scp, scp->result,
......@@ -254,49 +188,19 @@ static void cmd_complete(struct afu_cmd *cmd)
}
/**
* context_reset() - timeout handler for AFU commands
* context_reset_ioarrin() - reset command owner context via IOARRIN register
* @cmd: AFU command that timed out.
*
* Sends a reset to the AFU.
*/
static void context_reset(struct afu_cmd *cmd)
static void context_reset_ioarrin(struct afu_cmd *cmd)
{
int nretry = 0;
u64 rrin = 0x1;
u64 room = 0;
struct afu *afu = cmd->parent;
ulong lock_flags;
struct cxlflash_cfg *cfg = afu->parent;
struct device *dev = &cfg->dev->dev;
pr_debug("%s: cmd=%p\n", __func__, cmd);
spin_lock_irqsave(&cmd->slock, lock_flags);
/* Already completed? */
if (cmd->sa.host_use_b[0] & B_DONE) {
spin_unlock_irqrestore(&cmd->slock, lock_flags);
return;
}
cmd->sa.host_use_b[0] |= (B_DONE | B_ERROR | B_TIMEOUT);
spin_unlock_irqrestore(&cmd->slock, lock_flags);
/*
* We really want to send this reset at all costs, so spread
* out wait time on successive retries for available room.
*/
do {
room = readq_be(&afu->host_map->cmd_room);
atomic64_set(&afu->room, room);
if (room)
goto write_rrin;
udelay(1 << nretry);
} while (nretry++ < MC_ROOM_RETRY_CNT);
pr_err("%s: no cmd_room to send reset\n", __func__);
return;
write_rrin:
nretry = 0;
writeq_be(rrin, &afu->host_map->ioarrin);
do {
rrin = readq_be(&afu->host_map->ioarrin);
......@@ -305,93 +209,81 @@ static void context_reset(struct afu_cmd *cmd)
/* Double delay each time */
udelay(1 << nretry);
} while (nretry++ < MC_ROOM_RETRY_CNT);
dev_dbg(dev, "%s: returning rrin=0x%016llX nretry=%d\n",
__func__, rrin, nretry);
}
/**
* send_cmd() - sends an AFU command
* send_cmd_ioarrin() - sends an AFU command via IOARRIN register
* @afu: AFU associated with the host.
* @cmd: AFU command to send.
*
* Return:
* 0 on success, SCSI_MLQUEUE_HOST_BUSY on failure
*/
static int send_cmd(struct afu *afu, struct afu_cmd *cmd)
static int send_cmd_ioarrin(struct afu *afu, struct afu_cmd *cmd)
{
struct cxlflash_cfg *cfg = afu->parent;
struct device *dev = &cfg->dev->dev;
int nretry = 0;
int rc = 0;
u64 room;
long newval;
s64 room;
ulong lock_flags;
/*
* This routine is used by critical users such an AFU sync and to
* send a task management function (TMF). Thus we want to retry a
* bit before returning an error. To avoid the performance penalty
* of MMIO, we spread the update of 'room' over multiple commands.
* To avoid the performance penalty of MMIO, spread the update of
* 'room' over multiple commands.
*/
retry:
newval = atomic64_dec_if_positive(&afu->room);
if (!newval) {
do {
room = readq_be(&afu->host_map->cmd_room);
atomic64_set(&afu->room, room);
if (room)
goto write_ioarrin;
udelay(1 << nretry);
} while (nretry++ < MC_ROOM_RETRY_CNT);
dev_err(dev, "%s: no cmd_room to send 0x%X\n",
__func__, cmd->rcb.cdb[0]);
goto no_room;
} else if (unlikely(newval < 0)) {
/* This should be rare. i.e. Only if two threads race and
* decrement before the MMIO read is done. In this case
* just benefit from the other thread having updated
* afu->room.
*/
if (nretry++ < MC_ROOM_RETRY_CNT) {
udelay(1 << nretry);
goto retry;
spin_lock_irqsave(&afu->rrin_slock, lock_flags);
if (--afu->room < 0) {
room = readq_be(&afu->host_map->cmd_room);
if (room <= 0) {
dev_dbg_ratelimited(dev, "%s: no cmd_room to send "
"0x%02X, room=0x%016llX\n",
__func__, cmd->rcb.cdb[0], room);
afu->room = 0;
rc = SCSI_MLQUEUE_HOST_BUSY;
goto out;
}
goto no_room;
afu->room = room - 1;
}
write_ioarrin:
writeq_be((u64)&cmd->rcb, &afu->host_map->ioarrin);
out:
spin_unlock_irqrestore(&afu->rrin_slock, lock_flags);
pr_devel("%s: cmd=%p len=%d ea=%p rc=%d\n", __func__, cmd,
cmd->rcb.data_len, (void *)cmd->rcb.data_ea, rc);
return rc;
no_room:
afu->read_room = true;
kref_get(&cfg->afu->mapcount);
schedule_work(&cfg->work_q);
rc = SCSI_MLQUEUE_HOST_BUSY;
goto out;
}
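send_cmd_ioarrin() above amortizes the cost of the cmd_room MMIO read: 'room' is a locally cached credit count, decremented under rrin_slock, and refreshed from hardware only when the cache runs out, so roughly one MMIO read is paid per 'room' submissions. The pattern in isolation, with hypothetical names (credit_cache, read_hw_credits):

#include <linux/spinlock.h>

struct credit_cache {			/* hypothetical */
	spinlock_t lock;
	long cached;
};

long read_hw_credits(struct credit_cache *c);	/* the expensive MMIO read */

static bool try_consume_credit(struct credit_cache *c)
{
	unsigned long flags;
	bool ok = true;

	spin_lock_irqsave(&c->lock, flags);
	if (--c->cached < 0) {			/* cache exhausted */
		long hw = read_hw_credits(c);	/* rare slow path */
		if (hw <= 0) {
			c->cached = 0;		/* still full; back off */
			ok = false;
		} else {
			c->cached = hw - 1;	/* refill minus this one */
		}
	}
	spin_unlock_irqrestore(&c->lock, flags);

	return ok;
}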
/**
* wait_resp() - polls for a response or timeout to a sent AFU command
* @afu: AFU associated with the host.
* @cmd: AFU command that was sent.
*
* Return:
* 0 on success, -1 on timeout/error
*/
static void wait_resp(struct afu *afu, struct afu_cmd *cmd)
static int wait_resp(struct afu *afu, struct afu_cmd *cmd)
{
int rc = 0;
ulong timeout = msecs_to_jiffies(cmd->rcb.timeout * 2 * 1000);
timeout = wait_for_completion_timeout(&cmd->cevent, timeout);
if (!timeout)
context_reset(cmd);
if (!timeout) {
afu->context_reset(cmd);
rc = -1;
}
if (unlikely(cmd->sa.ioasc != 0))
if (unlikely(cmd->sa.ioasc != 0)) {
pr_err("%s: CMD 0x%X failed, IOASC: flags 0x%X, afu_rc 0x%X, "
"scsi_rc 0x%X, fc_rc 0x%X\n", __func__, cmd->rcb.cdb[0],
cmd->sa.rc.flags, cmd->sa.rc.afu_rc, cmd->sa.rc.scsi_rc,
cmd->sa.rc.fc_rc);
rc = -1;
}
return rc;
}
/**
......@@ -405,24 +297,15 @@ static void wait_resp(struct afu *afu, struct afu_cmd *cmd)
*/
static int send_tmf(struct afu *afu, struct scsi_cmnd *scp, u64 tmfcmd)
{
struct afu_cmd *cmd;
u32 port_sel = scp->device->channel + 1;
short lflag = 0;
struct Scsi_Host *host = scp->device->host;
struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)host->hostdata;
struct afu_cmd *cmd = sc_to_afucz(scp);
struct device *dev = &cfg->dev->dev;
ulong lock_flags;
int rc = 0;
ulong to;
cmd = cmd_checkout(afu);
if (unlikely(!cmd)) {
dev_err(dev, "%s: could not get a free command\n", __func__);
rc = SCSI_MLQUEUE_HOST_BUSY;
goto out;
}
/* When Task Management Function is active do not send another */
spin_lock_irqsave(&cfg->tmf_slock, lock_flags);
if (cfg->tmf_active)
......@@ -430,28 +313,23 @@ static int send_tmf(struct afu *afu, struct scsi_cmnd *scp, u64 tmfcmd)
!cfg->tmf_active,
cfg->tmf_slock);
cfg->tmf_active = true;
cmd->cmd_tmf = true;
spin_unlock_irqrestore(&cfg->tmf_slock, lock_flags);
cmd->scp = scp;
cmd->parent = afu;
cmd->cmd_tmf = true;
cmd->rcb.ctx_id = afu->ctx_hndl;
cmd->rcb.msi = SISL_MSI_RRQ_UPDATED;
cmd->rcb.port_sel = port_sel;
cmd->rcb.lun_id = lun_to_lunid(scp->device->lun);
lflag = SISL_REQ_FLAGS_TMF_CMD;
cmd->rcb.req_flags = (SISL_REQ_FLAGS_PORT_LUN_ID |
SISL_REQ_FLAGS_SUP_UNDERRUN | lflag);
/* Stash the scp in the reserved field, for reuse during interrupt */
cmd->rcb.scp = scp;
/* Copy the CDB from the cmd passed in */
SISL_REQ_FLAGS_SUP_UNDERRUN |
SISL_REQ_FLAGS_TMF_CMD);
memcpy(cmd->rcb.cdb, &tmfcmd, sizeof(tmfcmd));
/* Send the command */
rc = send_cmd(afu, cmd);
rc = afu->send_cmd(afu, cmd);
if (unlikely(rc)) {
cmd_checkin(cmd);
spin_lock_irqsave(&cfg->tmf_slock, lock_flags);
cfg->tmf_active = false;
spin_unlock_irqrestore(&cfg->tmf_slock, lock_flags);
......@@ -507,12 +385,12 @@ static int cxlflash_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scp)
struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)host->hostdata;
struct afu *afu = cfg->afu;
struct device *dev = &cfg->dev->dev;
struct afu_cmd *cmd;
struct afu_cmd *cmd = sc_to_afucz(scp);
struct scatterlist *sg = scsi_sglist(scp);
u32 port_sel = scp->device->channel + 1;
int nseg, i, ncount;
struct scatterlist *sg;
u16 req_flags = SISL_REQ_FLAGS_SUP_UNDERRUN;
ulong lock_flags;
short lflag = 0;
int nseg = 0;
int rc = 0;
int kref_got = 0;
......@@ -552,55 +430,38 @@ static int cxlflash_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scp)
break;
}
cmd = cmd_checkout(afu);
if (unlikely(!cmd)) {
dev_err(dev, "%s: could not get a free command\n", __func__);
rc = SCSI_MLQUEUE_HOST_BUSY;
goto out;
}
kref_get(&cfg->afu->mapcount);
kref_got = 1;
if (likely(sg)) {
nseg = scsi_dma_map(scp);
if (unlikely(nseg < 0)) {
dev_err(dev, "%s: Fail DMA map!\n", __func__);
rc = SCSI_MLQUEUE_HOST_BUSY;
goto out;
}
cmd->rcb.data_len = sg_dma_len(sg);
cmd->rcb.data_ea = sg_dma_address(sg);
}
cmd->scp = scp;
cmd->parent = afu;
cmd->rcb.ctx_id = afu->ctx_hndl;
cmd->rcb.msi = SISL_MSI_RRQ_UPDATED;
cmd->rcb.port_sel = port_sel;
cmd->rcb.lun_id = lun_to_lunid(scp->device->lun);
if (scp->sc_data_direction == DMA_TO_DEVICE)
lflag = SISL_REQ_FLAGS_HOST_WRITE;
else
lflag = SISL_REQ_FLAGS_HOST_READ;
cmd->rcb.req_flags = (SISL_REQ_FLAGS_PORT_LUN_ID |
SISL_REQ_FLAGS_SUP_UNDERRUN | lflag);
/* Stash the scp in the reserved field, for reuse during interrupt */
cmd->rcb.scp = scp;
nseg = scsi_dma_map(scp);
if (unlikely(nseg < 0)) {
dev_err(dev, "%s: Fail DMA map! nseg=%d\n",
__func__, nseg);
rc = SCSI_MLQUEUE_HOST_BUSY;
goto out;
}
req_flags |= SISL_REQ_FLAGS_HOST_WRITE;
ncount = scsi_sg_count(scp);
scsi_for_each_sg(scp, sg, ncount, i) {
cmd->rcb.data_len = sg_dma_len(sg);
cmd->rcb.data_ea = sg_dma_address(sg);
}
/* Copy the CDB from the scsi_cmnd passed in */
cmd->rcb.req_flags = req_flags;
memcpy(cmd->rcb.cdb, scp->cmnd, sizeof(cmd->rcb.cdb));
/* Send the command */
rc = send_cmd(afu, cmd);
if (unlikely(rc)) {
cmd_checkin(cmd);
rc = afu->send_cmd(afu, cmd);
if (unlikely(rc))
scsi_dma_unmap(scp);
}
out:
if (kref_got)
kref_put(&afu->mapcount, afu_unmap);
......@@ -628,17 +489,9 @@ static void cxlflash_wait_for_pci_err_recovery(struct cxlflash_cfg *cfg)
*/
static void free_mem(struct cxlflash_cfg *cfg)
{
int i;
char *buf = NULL;
struct afu *afu = cfg->afu;
if (cfg->afu) {
for (i = 0; i < CXLFLASH_NUM_CMDS; i++) {
buf = afu->cmd[i].buf;
if (!((u64)buf & (PAGE_SIZE - 1)))
free_page((ulong)buf);
}
free_pages((ulong)afu, get_order(sizeof(struct afu)));
cfg->afu = NULL;
}
......@@ -650,30 +503,16 @@ static void free_mem(struct cxlflash_cfg *cfg)
*
* Safe to call with AFU in a partially allocated/initialized state.
*
* Cleans up all state associated with the command queue, and unmaps
* Waits for any active internal AFU commands to timeout and then unmaps
* the MMIO space.
*
* - complete() will take care of commands we initiated (they'll be checked
* in as part of the cleanup that occurs after the completion)
*
* - cmd_checkin() will take care of entries that we did not initiate and that
* have not (and will not) complete because they are sitting on a [now stale]
* hardware queue
*/
static void stop_afu(struct cxlflash_cfg *cfg)
{
int i;
struct afu *afu = cfg->afu;
struct afu_cmd *cmd;
if (likely(afu)) {
for (i = 0; i < CXLFLASH_NUM_CMDS; i++) {
cmd = &afu->cmd[i];
complete(&cmd->cevent);
if (!atomic_read(&cmd->free))
cmd_checkin(cmd);
}
while (atomic_read(&afu->cmds_active))
ssleep(1);
if (likely(afu->afu_map)) {
cxl_psa_unmap((void __iomem *)afu->afu_map);
afu->afu_map = NULL;
......@@ -886,8 +725,6 @@ static void cxlflash_remove(struct pci_dev *pdev)
static int alloc_mem(struct cxlflash_cfg *cfg)
{
int rc = 0;
int i;
char *buf = NULL;
struct device *dev = &cfg->dev->dev;
/* AFU is ~12k, i.e. only one 64k page or up to four 4k pages */
......@@ -901,25 +738,6 @@ static int alloc_mem(struct cxlflash_cfg *cfg)
}
cfg->afu->parent = cfg;
cfg->afu->afu_map = NULL;
for (i = 0; i < CXLFLASH_NUM_CMDS; buf += CMD_BUFSIZE, i++) {
if (!((u64)buf & (PAGE_SIZE - 1))) {
buf = (void *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
if (unlikely(!buf)) {
dev_err(dev,
"%s: Allocate command buffers fail!\n",
__func__);
rc = -ENOMEM;
free_mem(cfg);
goto out;
}
}
cfg->afu->cmd[i].buf = buf;
atomic_set(&cfg->afu->cmd[i].free, 1);
cfg->afu->cmd[i].slot = i;
}
out:
return rc;
}
......@@ -1549,13 +1367,6 @@ static void init_pcr(struct cxlflash_cfg *cfg)
/* Program the Endian Control for the master context */
writeq_be(SISL_ENDIAN_CTRL, &afu->host_map->endian_ctrl);
/* Initialize cmd fields that never change */
for (i = 0; i < CXLFLASH_NUM_CMDS; i++) {
afu->cmd[i].rcb.ctx_id = afu->ctx_hndl;
afu->cmd[i].rcb.msi = SISL_MSI_RRQ_UPDATED;
afu->cmd[i].rcb.rrq = 0x0;
}
}
/**
......@@ -1644,19 +1455,8 @@ static int init_global(struct cxlflash_cfg *cfg)
static int start_afu(struct cxlflash_cfg *cfg)
{
struct afu *afu = cfg->afu;
struct afu_cmd *cmd;
int i = 0;
int rc = 0;
for (i = 0; i < CXLFLASH_NUM_CMDS; i++) {
cmd = &afu->cmd[i];
init_completion(&cmd->cevent);
spin_lock_init(&cmd->slock);
cmd->parent = afu;
}
init_pcr(cfg);
/* After an AFU reset, RRQ entries are stale, clear them */
......@@ -1829,6 +1629,9 @@ static int init_afu(struct cxlflash_cfg *cfg)
goto err2;
}
afu->send_cmd = send_cmd_ioarrin;
afu->context_reset = context_reset_ioarrin;
pr_debug("%s: afu version %s, interface version 0x%llX\n", __func__,
afu->version, afu->interface_version);
......@@ -1840,7 +1643,8 @@ static int init_afu(struct cxlflash_cfg *cfg)
}
afu_err_intr_init(cfg->afu);
atomic64_set(&afu->room, readq_be(&afu->host_map->cmd_room));
spin_lock_init(&afu->rrin_slock);
afu->room = readq_be(&afu->host_map->cmd_room);
/* Restore the LUN mappings */
cxlflash_restore_luntable(cfg);
......@@ -1884,8 +1688,8 @@ int cxlflash_afu_sync(struct afu *afu, ctx_hndl_t ctx_hndl_u,
struct cxlflash_cfg *cfg = afu->parent;
struct device *dev = &cfg->dev->dev;
struct afu_cmd *cmd = NULL;
char *buf = NULL;
int rc = 0;
int retry_cnt = 0;
static DEFINE_MUTEX(sync_active);
if (cfg->state != STATE_NORMAL) {
......@@ -1894,27 +1698,23 @@ int cxlflash_afu_sync(struct afu *afu, ctx_hndl_t ctx_hndl_u,
}
mutex_lock(&sync_active);
retry:
cmd = cmd_checkout(afu);
if (unlikely(!cmd)) {
retry_cnt++;
udelay(1000 * retry_cnt);
if (retry_cnt < MC_RETRY_CNT)
goto retry;
dev_err(dev, "%s: could not get a free command\n", __func__);
atomic_inc(&afu->cmds_active);
buf = kzalloc(sizeof(*cmd) + __alignof__(*cmd) - 1, GFP_KERNEL);
if (unlikely(!buf)) {
dev_err(dev, "%s: no memory for command\n", __func__);
rc = -1;
goto out;
}
pr_debug("%s: afu=%p cmd=%p %d\n", __func__, afu, cmd, ctx_hndl_u);
cmd = (struct afu_cmd *)PTR_ALIGN(buf, __alignof__(*cmd));
init_completion(&cmd->cevent);
cmd->parent = afu;
memset(cmd->rcb.cdb, 0, sizeof(cmd->rcb.cdb));
pr_debug("%s: afu=%p cmd=%p %d\n", __func__, afu, cmd, ctx_hndl_u);
cmd->rcb.req_flags = SISL_REQ_FLAGS_AFU_CMD;
cmd->rcb.port_sel = 0x0; /* NA */
cmd->rcb.lun_id = 0x0; /* NA */
cmd->rcb.data_len = 0x0;
cmd->rcb.data_ea = 0x0;
cmd->rcb.ctx_id = afu->ctx_hndl;
cmd->rcb.msi = SISL_MSI_RRQ_UPDATED;
cmd->rcb.timeout = MC_AFU_SYNC_TIMEOUT;
cmd->rcb.cdb[0] = 0xC0; /* AFU Sync */
......@@ -1924,20 +1724,17 @@ int cxlflash_afu_sync(struct afu *afu, ctx_hndl_t ctx_hndl_u,
*((__be16 *)&cmd->rcb.cdb[2]) = cpu_to_be16(ctx_hndl_u);
*((__be32 *)&cmd->rcb.cdb[4]) = cpu_to_be32(res_hndl_u);
rc = send_cmd(afu, cmd);
rc = afu->send_cmd(afu, cmd);
if (unlikely(rc))
goto out;
wait_resp(afu, cmd);
/* Set on timeout */
if (unlikely((cmd->sa.ioasc != 0) ||
(cmd->sa.host_use_b[0] & B_ERROR)))
rc = wait_resp(afu, cmd);
if (unlikely(rc))
rc = -1;
out:
atomic_dec(&afu->cmds_active);
mutex_unlock(&sync_active);
if (cmd)
cmd_checkin(cmd);
kfree(buf);
pr_debug("%s: returning rc=%d\n", __func__, rc);
return rc;
}
......@@ -2376,8 +2173,9 @@ static struct scsi_host_template driver_template = {
.change_queue_depth = cxlflash_change_queue_depth,
.cmd_per_lun = CXLFLASH_MAX_CMDS_PER_LUN,
.can_queue = CXLFLASH_MAX_CMDS,
.cmd_size = sizeof(struct afu_cmd) + __alignof__(struct afu_cmd) - 1,
.this_id = -1,
.sg_tablesize = SG_NONE, /* No scatter gather support */
.sg_tablesize = 1, /* No scatter gather support */
.max_sectors = CXLFLASH_MAX_SECTORS,
.use_clustering = ENABLE_CLUSTERING,
.shost_attrs = cxlflash_host_attrs,
......@@ -2412,7 +2210,6 @@ MODULE_DEVICE_TABLE(pci, cxlflash_pci_table);
* Handles the following events:
* - Link reset which cannot be performed on interrupt context due to
* blocking up to a few seconds
* - Read AFU command room
* - Rescan the host
*/
static void cxlflash_worker_thread(struct work_struct *work)
......@@ -2449,11 +2246,6 @@ static void cxlflash_worker_thread(struct work_struct *work)
cfg->lr_state = LINK_RESET_COMPLETE;
}
if (afu->read_room) {
atomic64_set(&afu->room, readq_be(&afu->host_map->cmd_room));
afu->read_room = false;
}
spin_unlock_irqrestore(cfg->host->host_lock, lock_flags);
if (atomic_dec_if_positive(&cfg->scan_host_needed) >= 0)
......
......@@ -72,7 +72,7 @@ struct sisl_ioarcb {
u16 timeout; /* in units specified by req_flags */
u32 rsvd1;
u8 cdb[16]; /* must be in big endian */
struct scsi_cmnd *scp;
u64 reserved; /* Reserved area */
} __packed;
struct sisl_rc {
......
......@@ -95,7 +95,7 @@ struct alua_port_group {
struct alua_dh_data {
struct list_head node;
struct alua_port_group *pg;
struct alua_port_group __rcu *pg;
int group_id;
spinlock_t pg_lock;
struct scsi_device *sdev;
......@@ -371,7 +371,7 @@ static int alua_check_vpd(struct scsi_device *sdev, struct alua_dh_data *h,
/* Check for existing port group references */
spin_lock(&h->pg_lock);
old_pg = h->pg;
old_pg = rcu_dereference_protected(h->pg, lockdep_is_held(&h->pg_lock));
if (old_pg != pg) {
/* port group has changed. Update to new port group */
if (h->pg) {
......@@ -390,7 +390,9 @@ static int alua_check_vpd(struct scsi_device *sdev, struct alua_dh_data *h,
list_add_rcu(&h->node, &pg->dh_list);
spin_unlock_irqrestore(&pg->lock, flags);
alua_rtpg_queue(h->pg, sdev, NULL, true);
alua_rtpg_queue(rcu_dereference_protected(h->pg,
lockdep_is_held(&h->pg_lock)),
sdev, NULL, true);
spin_unlock(&h->pg_lock);
if (old_pg)
......@@ -942,7 +944,7 @@ static int alua_initialize(struct scsi_device *sdev, struct alua_dh_data *h)
static int alua_set_params(struct scsi_device *sdev, const char *params)
{
struct alua_dh_data *h = sdev->handler_data;
struct alua_port_group __rcu *pg = NULL;
struct alua_port_group *pg = NULL;
unsigned int optimize = 0, argc;
const char *p = params;
int result = SCSI_DH_OK;
......@@ -989,7 +991,7 @@ static int alua_activate(struct scsi_device *sdev,
struct alua_dh_data *h = sdev->handler_data;
int err = SCSI_DH_OK;
struct alua_queue_data *qdata;
struct alua_port_group __rcu *pg;
struct alua_port_group *pg;
qdata = kzalloc(sizeof(*qdata), GFP_KERNEL);
if (!qdata) {
......@@ -1053,7 +1055,7 @@ static void alua_check(struct scsi_device *sdev, bool force)
static int alua_prep_fn(struct scsi_device *sdev, struct request *req)
{
struct alua_dh_data *h = sdev->handler_data;
struct alua_port_group __rcu *pg;
struct alua_port_group *pg;
unsigned char state = SCSI_ACCESS_STATE_OPTIMAL;
int ret = BLKPREP_OK;
......@@ -1123,7 +1125,7 @@ static void alua_bus_detach(struct scsi_device *sdev)
struct alua_port_group *pg;
spin_lock(&h->pg_lock);
pg = h->pg;
pg = rcu_dereference_protected(h->pg, lockdep_is_held(&h->pg_lock));
rcu_assign_pointer(h->pg, NULL);
h->sdev = NULL;
spin_unlock(&h->pg_lock);
......
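The pattern these ALUA annotations enforce, in a minimal sketch: the shared pointer carries __rcu, updaters publish with rcu_assign_pointer() under the lock, and lock-holding readers document their exclusion via rcu_dereference_protected()/lockdep_is_held() so both sparse and lockdep can check it.

#include <linux/rcupdate.h>
#include <linux/spinlock.h>

struct pg { int id; };

struct holder {
	struct pg __rcu *pg;
	spinlock_t lock;
};

static struct pg *pg_get_locked(struct holder *h)
{
	/* caller holds h->lock, so no RCU read-side section is needed */
	return rcu_dereference_protected(h->pg, lockdep_is_held(&h->lock));
}

static void pg_replace(struct holder *h, struct pg *new_pg)
{
	spin_lock(&h->lock);
	rcu_assign_pointer(h->pg, new_pg);
	spin_unlock(&h->lock);
}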
......@@ -34,13 +34,13 @@
* Definitions for the generic 5380 driver.
*/
#define NCR5380_read(reg) inb(instance->io_port + reg)
#define NCR5380_write(reg, value) outb(value, instance->io_port + reg)
#define NCR5380_read(reg) inb(hostdata->base + (reg))
#define NCR5380_write(reg, value) outb(value, hostdata->base + (reg))
#define NCR5380_dma_xfer_len(instance, cmd, phase) (0)
#define NCR5380_dma_recv_setup(instance, dst, len) (0)
#define NCR5380_dma_send_setup(instance, src, len) (0)
#define NCR5380_dma_residual(instance) (0)
#define NCR5380_dma_xfer_len NCR5380_dma_xfer_none
#define NCR5380_dma_recv_setup NCR5380_dma_setup_none
#define NCR5380_dma_send_setup NCR5380_dma_setup_none
#define NCR5380_dma_residual NCR5380_dma_residual_none
#define NCR5380_implementation_fields /* none */
......@@ -71,6 +71,7 @@ static int dmx3191d_probe_one(struct pci_dev *pdev,
const struct pci_device_id *id)
{
struct Scsi_Host *shost;
struct NCR5380_hostdata *hostdata;
unsigned long io;
int error = -ENODEV;
......@@ -88,7 +89,9 @@ static int dmx3191d_probe_one(struct pci_dev *pdev,
sizeof(struct NCR5380_hostdata));
if (!shost)
goto out_release_region;
shost->io_port = io;
hostdata = shost_priv(shost);
hostdata->base = io;
/* This card does not seem to raise an interrupt on pdev->irq.
* Steam-powered SCSI controllers run without an IRQ anyway.
......@@ -125,7 +128,8 @@ static int dmx3191d_probe_one(struct pci_dev *pdev,
static void dmx3191d_remove_one(struct pci_dev *pdev)
{
struct Scsi_Host *shost = pci_get_drvdata(pdev);
unsigned long io = shost->io_port;
struct NCR5380_hostdata *hostdata = shost_priv(shost);
unsigned long io = hostdata->base;
scsi_remove_host(shost);
......@@ -149,18 +153,7 @@ static struct pci_driver dmx3191d_pci_driver = {
.remove = dmx3191d_remove_one,
};
static int __init dmx3191d_init(void)
{
return pci_register_driver(&dmx3191d_pci_driver);
}
static void __exit dmx3191d_exit(void)
{
pci_unregister_driver(&dmx3191d_pci_driver);
}
module_init(dmx3191d_init);
module_exit(dmx3191d_exit);
module_pci_driver(dmx3191d_pci_driver);
MODULE_AUTHOR("Massimo Piccioni <dafastidio@libero.it>");
MODULE_DESCRIPTION("Domex DMX3191D SCSI driver");
......
......@@ -651,7 +651,6 @@ static u32 adpt_ioctl_to_context(adpt_hba * pHba, void *reply)
}
spin_unlock_irqrestore(pHba->host->host_lock, flags);
if (i >= nr) {
kfree (reply);
printk(KERN_WARNING"%s: Too many outstanding "
"ioctl commands\n", pHba->name);
return (u32)-1;
......@@ -1754,8 +1753,10 @@ static int adpt_i2o_passthru(adpt_hba* pHba, u32 __user *arg)
sg_offset = (msg[0]>>4)&0xf;
msg[2] = 0x40000000; // IOCTL context
msg[3] = adpt_ioctl_to_context(pHba, reply);
if (msg[3] == (u32)-1)
if (msg[3] == (u32)-1) {
kfree(reply);
return -EBUSY;
}
memset(sg_list,0, sizeof(sg_list[0])*pHba->sg_tablesize);
if(sg_offset) {
......@@ -3350,7 +3351,7 @@ static int adpt_i2o_query_scalar(adpt_hba* pHba, int tid,
if (opblk_va == NULL) {
dma_free_coherent(&pHba->pDev->dev, sizeof(u8) * (8+buflen),
resblk_va, resblk_pa);
printk(KERN_CRIT "%s: query operatio failed; Out of memory.\n",
printk(KERN_CRIT "%s: query operation failed; Out of memory.\n",
pHba->name);
return -ENOMEM;
}
......
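The dpt_i2o fix is an ownership rule: adpt_ioctl_to_context() no longer frees the caller's reply buffer on failure, so the caller frees it exactly once. A compact sketch of that rule (map_to_context() is a stand-in for the real helper):

#include <linux/errno.h>
#include <linux/slab.h>

static u32 map_to_context(void *reply)
{
	/* stand-in: returns (u32)-1 when no slot is free, and must
	 * never free the caller's buffer itself */
	return (u32)-1;
}

static int do_ioctl(void)
{
	void *reply = kzalloc(64, GFP_KERNEL);

	if (!reply)
		return -ENOMEM;
	if (map_to_context(reply) == (u32)-1) {
		kfree(reply);		/* the only free on this path */
		return -EBUSY;
	}
	/* ... reply is used and freed later by the completion path ... */
	return 0;
}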
......@@ -63,6 +63,14 @@ unsigned int fcoe_debug_logging;
module_param_named(debug_logging, fcoe_debug_logging, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(debug_logging, "a bit mask of logging levels");
unsigned int fcoe_e_d_tov = 2 * 1000;
module_param_named(e_d_tov, fcoe_e_d_tov, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(e_d_tov, "E_D_TOV in ms, default 2000");
unsigned int fcoe_r_a_tov = 2 * 2 * 1000;
module_param_named(r_a_tov, fcoe_r_a_tov, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(r_a_tov, "R_A_TOV in ms, default 4000");
static DEFINE_MUTEX(fcoe_config_mutex);
static struct workqueue_struct *fcoe_wq;
......@@ -582,7 +590,8 @@ static void fcoe_fip_send(struct fcoe_ctlr *fip, struct sk_buff *skb)
* Use default VLAN for FIP VLAN discovery protocol
*/
frame = (struct fip_frame *)skb->data;
if (frame->fip.fip_op == ntohs(FIP_OP_VLAN) &&
if (ntohs(frame->eth.h_proto) == ETH_P_FIP &&
ntohs(frame->fip.fip_op) == FIP_OP_VLAN &&
fcoe->realdev != fcoe->netdev)
skb->dev = fcoe->realdev;
else
......@@ -633,8 +642,8 @@ static int fcoe_lport_config(struct fc_lport *lport)
lport->qfull = 0;
lport->max_retry_count = 3;
lport->max_rport_retry_count = 3;
lport->e_d_tov = 2 * 1000; /* FC-FS default */
lport->r_a_tov = 2 * 2 * 1000;
lport->e_d_tov = fcoe_e_d_tov;
lport->r_a_tov = fcoe_r_a_tov;
lport->service_params = (FCP_SPPF_INIT_FCN | FCP_SPPF_RD_XRDY_DIS |
FCP_SPPF_RETRY | FCP_SPPF_CONF_COMPL);
lport->does_npiv = 1;
......@@ -2160,11 +2169,13 @@ static bool fcoe_match(struct net_device *netdev)
*/
static void fcoe_dcb_create(struct fcoe_interface *fcoe)
{
int ctlr_prio = TC_PRIO_BESTEFFORT;
int fcoe_prio = TC_PRIO_INTERACTIVE;
struct fcoe_ctlr *ctlr = fcoe_to_ctlr(fcoe);
#ifdef CONFIG_DCB
int dcbx;
u8 fup, up;
struct net_device *netdev = fcoe->realdev;
struct fcoe_ctlr *ctlr = fcoe_to_ctlr(fcoe);
struct dcb_app app = {
.priority = 0,
.protocol = ETH_P_FCOE
......@@ -2186,10 +2197,12 @@ static void fcoe_dcb_create(struct fcoe_interface *fcoe)
fup = dcb_getapp(netdev, &app);
}
fcoe->priority = ffs(up) ? ffs(up) - 1 : 0;
ctlr->priority = ffs(fup) ? ffs(fup) - 1 : fcoe->priority;
fcoe_prio = ffs(up) ? ffs(up) - 1 : 0;
ctlr_prio = ffs(fup) ? ffs(fup) - 1 : fcoe_prio;
}
#endif
fcoe->priority = fcoe_prio;
ctlr->priority = ctlr_prio;
}
enum fcoe_create_link_state {
......
......@@ -801,6 +801,8 @@ int fcoe_ctlr_els_send(struct fcoe_ctlr *fip, struct fc_lport *lport,
return -EINPROGRESS;
drop:
kfree_skb(skb);
LIBFCOE_FIP_DBG(fip, "drop els_send op %u d_id %x\n",
op, ntoh24(fh->fh_d_id));
return -EINVAL;
}
EXPORT_SYMBOL(fcoe_ctlr_els_send);
......@@ -1316,7 +1318,7 @@ static void fcoe_ctlr_recv_els(struct fcoe_ctlr *fip, struct sk_buff *skb)
* The overall length has already been checked.
*/
static void fcoe_ctlr_recv_clr_vlink(struct fcoe_ctlr *fip,
struct fip_header *fh)
struct sk_buff *skb)
{
struct fip_desc *desc;
struct fip_mac_desc *mp;
......@@ -1331,20 +1333,49 @@ static void fcoe_ctlr_recv_clr_vlink(struct fcoe_ctlr *fip,
int num_vlink_desc;
int reset_phys_port = 0;
struct fip_vn_desc **vlink_desc_arr = NULL;
struct fip_header *fh = (struct fip_header *)skb->data;
struct ethhdr *eh = eth_hdr(skb);
LIBFCOE_FIP_DBG(fip, "Clear Virtual Link received\n");
if (!fcf || !lport->port_id) {
if (!fcf) {
/*
* We are yet to select best FCF, but we got CVL in the
* meantime. reset the ctlr and let it rediscover the FCF
*/
LIBFCOE_FIP_DBG(fip, "Resetting fcoe_ctlr as FCF has not been "
"selected yet\n");
mutex_lock(&fip->ctlr_mutex);
fcoe_ctlr_reset(fip);
mutex_unlock(&fip->ctlr_mutex);
return;
}
/*
* If we've selected an FCF check that the CVL is from there to avoid
* processing CVLs from an unexpected source. If it is from an
* unexpected source drop it on the floor.
*/
if (!ether_addr_equal(eh->h_source, fcf->fcf_mac)) {
LIBFCOE_FIP_DBG(fip, "Dropping CVL due to source address "
"mismatch with FCF src=%pM\n", eh->h_source);
return;
}
/*
* If we haven't logged into the fabric but receive a CVL we should
* reset everything and go back to solicitation.
*/
if (!lport->port_id) {
LIBFCOE_FIP_DBG(fip, "lport not logged in, resoliciting\n");
mutex_lock(&fip->ctlr_mutex);
fcoe_ctlr_reset(fip);
mutex_unlock(&fip->ctlr_mutex);
fc_lport_reset(fip->lp);
fcoe_ctlr_solicit(fip, NULL);
return;
}
/*
* mask of required descriptors. Validating each one clears its bit.
*/
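The new CVL guard above reduces to a single comparison; a sketch, assuming only the Ethernet header and the selected FCF's MAC:

#include <linux/etherdevice.h>
#include <linux/if_ether.h>

static bool cvl_from_selected_fcf(const struct ethhdr *eh, const u8 *fcf_mac)
{
	/* drop CVLs whose source is not the FCF we selected */
	return ether_addr_equal(eh->h_source, fcf_mac);
}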
......@@ -1576,7 +1607,7 @@ static int fcoe_ctlr_recv_handler(struct fcoe_ctlr *fip, struct sk_buff *skb)
if (op == FIP_OP_DISC && sub == FIP_SC_ADV)
fcoe_ctlr_recv_adv(fip, skb);
else if (op == FIP_OP_CTRL && sub == FIP_SC_CLR_VLINK)
fcoe_ctlr_recv_clr_vlink(fip, fiph);
fcoe_ctlr_recv_clr_vlink(fip, skb);
kfree_skb(skb);
return 0;
drop:
......@@ -2122,7 +2153,7 @@ static void fcoe_ctlr_vn_rport_callback(struct fc_lport *lport,
LIBFCOE_FIP_DBG(fip,
"rport FLOGI limited port_id %6.6x\n",
rdata->ids.port_id);
lport->tt.rport_logoff(rdata);
fc_rport_logoff(rdata);
}
break;
default:
......@@ -2145,9 +2176,15 @@ static void fcoe_ctlr_disc_stop_locked(struct fc_lport *lport)
{
struct fc_rport_priv *rdata;
rcu_read_lock();
list_for_each_entry_rcu(rdata, &lport->disc.rports, peers) {
if (kref_get_unless_zero(&rdata->kref)) {
fc_rport_logoff(rdata);
kref_put(&rdata->kref, fc_rport_destroy);
}
}
rcu_read_unlock();
mutex_lock(&lport->disc.disc_mutex);
list_for_each_entry_rcu(rdata, &lport->disc.rports, peers)
lport->tt.rport_logoff(rdata);
lport->disc.disc_callback = NULL;
mutex_unlock(&lport->disc.disc_mutex);
}
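This hunk swaps the mutex-protected walk for the lockless pattern used throughout the series: traverse the rports list under rcu_read_lock() and pin each entry with kref_get_unless_zero() before acting on it. A self-contained sketch of that pattern:

#include <linux/kref.h>
#include <linux/rculist.h>

struct rport {
	struct list_head peers;
	struct kref kref;
};

static void rport_release(struct kref *kref)
{
	/* final teardown of the rport (elided) */
}

static void logoff_all(struct list_head *rports)
{
	struct rport *r;

	rcu_read_lock();
	list_for_each_entry_rcu(r, rports, peers) {
		if (!kref_get_unless_zero(&r->kref))
			continue;	/* entry already being torn down */
		/* ... log off r ... */
		kref_put(&r->kref, rport_release);
	}
	rcu_read_unlock();
}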
......@@ -2178,7 +2215,7 @@ static void fcoe_ctlr_disc_stop(struct fc_lport *lport)
static void fcoe_ctlr_disc_stop_final(struct fc_lport *lport)
{
fcoe_ctlr_disc_stop(lport);
lport->tt.rport_flush_queue();
fc_rport_flush_queue();
synchronize_rcu();
}
......@@ -2393,6 +2430,8 @@ static void fcoe_ctlr_vn_probe_req(struct fcoe_ctlr *fip,
switch (fip->state) {
case FIP_ST_VNMP_CLAIM:
case FIP_ST_VNMP_UP:
LIBFCOE_FIP_DBG(fip, "vn_probe_req: send reply, state %x\n",
fip->state);
fcoe_ctlr_vn_send(fip, FIP_SC_VN_PROBE_REP,
frport->enode_mac, 0);
break;
......@@ -2407,15 +2446,21 @@ static void fcoe_ctlr_vn_probe_req(struct fcoe_ctlr *fip,
*/
if (fip->lp->wwpn > rdata->ids.port_name &&
!(frport->flags & FIP_FL_REC_OR_P2P)) {
LIBFCOE_FIP_DBG(fip, "vn_probe_req: "
"port_id collision\n");
fcoe_ctlr_vn_send(fip, FIP_SC_VN_PROBE_REP,
frport->enode_mac, 0);
break;
}
/* fall through */
case FIP_ST_VNMP_START:
LIBFCOE_FIP_DBG(fip, "vn_probe_req: "
"restart VN2VN negotiation\n");
fcoe_ctlr_vn_restart(fip);
break;
default:
LIBFCOE_FIP_DBG(fip, "vn_probe_req: ignore state %x\n",
fip->state);
break;
}
}
......@@ -2437,9 +2482,12 @@ static void fcoe_ctlr_vn_probe_reply(struct fcoe_ctlr *fip,
case FIP_ST_VNMP_PROBE1:
case FIP_ST_VNMP_PROBE2:
case FIP_ST_VNMP_CLAIM:
LIBFCOE_FIP_DBG(fip, "vn_probe_reply: restart state %x\n",
fip->state);
fcoe_ctlr_vn_restart(fip);
break;
case FIP_ST_VNMP_UP:
LIBFCOE_FIP_DBG(fip, "vn_probe_reply: send claim notify\n");
fcoe_ctlr_vn_send_claim(fip);
break;
default:
......@@ -2467,26 +2515,33 @@ static void fcoe_ctlr_vn_add(struct fcoe_ctlr *fip, struct fc_rport_priv *new)
return;
mutex_lock(&lport->disc.disc_mutex);
rdata = lport->tt.rport_create(lport, port_id);
rdata = fc_rport_create(lport, port_id);
if (!rdata) {
mutex_unlock(&lport->disc.disc_mutex);
return;
}
mutex_lock(&rdata->rp_mutex);
mutex_unlock(&lport->disc.disc_mutex);
rdata->ops = &fcoe_ctlr_vn_rport_ops;
rdata->disc_id = lport->disc.disc_id;
ids = &rdata->ids;
if ((ids->port_name != -1 && ids->port_name != new->ids.port_name) ||
(ids->node_name != -1 && ids->node_name != new->ids.node_name))
lport->tt.rport_logoff(rdata);
(ids->node_name != -1 && ids->node_name != new->ids.node_name)) {
mutex_unlock(&rdata->rp_mutex);
LIBFCOE_FIP_DBG(fip, "vn_add rport logoff %6.6x\n", port_id);
fc_rport_logoff(rdata);
mutex_lock(&rdata->rp_mutex);
}
ids->port_name = new->ids.port_name;
ids->node_name = new->ids.node_name;
mutex_unlock(&lport->disc.disc_mutex);
mutex_unlock(&rdata->rp_mutex);
frport = fcoe_ctlr_rport(rdata);
LIBFCOE_FIP_DBG(fip, "vn_add rport %6.6x %s\n",
port_id, frport->fcoe_len ? "old" : "new");
LIBFCOE_FIP_DBG(fip, "vn_add rport %6.6x %s state %d\n",
port_id, frport->fcoe_len ? "old" : "new",
rdata->rp_state);
*frport = *fcoe_ctlr_rport(new);
frport->time = 0;
}
......@@ -2506,12 +2561,12 @@ static int fcoe_ctlr_vn_lookup(struct fcoe_ctlr *fip, u32 port_id, u8 *mac)
struct fcoe_rport *frport;
int ret = -1;
rdata = lport->tt.rport_lookup(lport, port_id);
rdata = fc_rport_lookup(lport, port_id);
if (rdata) {
frport = fcoe_ctlr_rport(rdata);
memcpy(mac, frport->enode_mac, ETH_ALEN);
ret = 0;
kref_put(&rdata->kref, lport->tt.rport_destroy);
kref_put(&rdata->kref, fc_rport_destroy);
}
return ret;
}
......@@ -2529,6 +2584,7 @@ static void fcoe_ctlr_vn_claim_notify(struct fcoe_ctlr *fip,
struct fcoe_rport *frport = fcoe_ctlr_rport(new);
if (frport->flags & FIP_FL_REC_OR_P2P) {
LIBFCOE_FIP_DBG(fip, "send probe req for P2P/REC\n");
fcoe_ctlr_vn_send(fip, FIP_SC_VN_PROBE_REQ, fcoe_all_vn2vn, 0);
return;
}
......@@ -2536,25 +2592,37 @@ static void fcoe_ctlr_vn_claim_notify(struct fcoe_ctlr *fip,
case FIP_ST_VNMP_START:
case FIP_ST_VNMP_PROBE1:
case FIP_ST_VNMP_PROBE2:
if (new->ids.port_id == fip->port_id)
if (new->ids.port_id == fip->port_id) {
LIBFCOE_FIP_DBG(fip, "vn_claim_notify: "
"restart, state %d\n",
fip->state);
fcoe_ctlr_vn_restart(fip);
}
break;
case FIP_ST_VNMP_CLAIM:
case FIP_ST_VNMP_UP:
if (new->ids.port_id == fip->port_id) {
if (new->ids.port_name > fip->lp->wwpn) {
LIBFCOE_FIP_DBG(fip, "vn_claim_notify: "
"restart, port_id collision\n");
fcoe_ctlr_vn_restart(fip);
break;
}
LIBFCOE_FIP_DBG(fip, "vn_claim_notify: "
"send claim notify\n");
fcoe_ctlr_vn_send_claim(fip);
break;
}
LIBFCOE_FIP_DBG(fip, "vn_claim_notify: send reply to %x\n",
new->ids.port_id);
fcoe_ctlr_vn_send(fip, FIP_SC_VN_CLAIM_REP, frport->enode_mac,
min((u32)frport->fcoe_len,
fcoe_ctlr_fcoe_size(fip)));
fcoe_ctlr_vn_add(fip, new);
break;
default:
LIBFCOE_FIP_DBG(fip, "vn_claim_notify: "
"ignoring claim from %x\n", new->ids.port_id);
break;
}
}
......@@ -2591,19 +2659,26 @@ static void fcoe_ctlr_vn_beacon(struct fcoe_ctlr *fip,
frport = fcoe_ctlr_rport(new);
if (frport->flags & FIP_FL_REC_OR_P2P) {
LIBFCOE_FIP_DBG(fip, "p2p beacon while in vn2vn mode\n");
fcoe_ctlr_vn_send(fip, FIP_SC_VN_PROBE_REQ, fcoe_all_vn2vn, 0);
return;
}
rdata = lport->tt.rport_lookup(lport, new->ids.port_id);
rdata = fc_rport_lookup(lport, new->ids.port_id);
if (rdata) {
if (rdata->ids.node_name == new->ids.node_name &&
rdata->ids.port_name == new->ids.port_name) {
frport = fcoe_ctlr_rport(rdata);
if (!frport->time && fip->state == FIP_ST_VNMP_UP)
lport->tt.rport_login(rdata);
LIBFCOE_FIP_DBG(fip, "beacon from rport %x\n",
rdata->ids.port_id);
if (!frport->time && fip->state == FIP_ST_VNMP_UP) {
LIBFCOE_FIP_DBG(fip, "beacon expired "
"for rport %x\n",
rdata->ids.port_id);
fc_rport_login(rdata);
}
frport->time = jiffies;
}
kref_put(&rdata->kref, lport->tt.rport_destroy);
kref_put(&rdata->kref, fc_rport_destroy);
return;
}
if (fip->state != FIP_ST_VNMP_UP)
......@@ -2638,11 +2713,15 @@ static unsigned long fcoe_ctlr_vn_age(struct fcoe_ctlr *fip)
unsigned long deadline;
next_time = jiffies + msecs_to_jiffies(FIP_VN_BEACON_INT * 10);
mutex_lock(&lport->disc.disc_mutex);
rcu_read_lock();
list_for_each_entry_rcu(rdata, &lport->disc.rports, peers) {
if (!kref_get_unless_zero(&rdata->kref))
continue;
frport = fcoe_ctlr_rport(rdata);
if (!frport->time)
if (!frport->time) {
kref_put(&rdata->kref, fc_rport_destroy);
continue;
}
deadline = frport->time +
msecs_to_jiffies(FIP_VN_BEACON_INT * 25 / 10);
if (time_after_eq(jiffies, deadline)) {
......@@ -2650,11 +2729,12 @@ static unsigned long fcoe_ctlr_vn_age(struct fcoe_ctlr *fip)
LIBFCOE_FIP_DBG(fip,
"port %16.16llx fc_id %6.6x beacon expired\n",
rdata->ids.port_name, rdata->ids.port_id);
lport->tt.rport_logoff(rdata);
fc_rport_logoff(rdata);
} else if (time_before(deadline, next_time))
next_time = deadline;
kref_put(&rdata->kref, fc_rport_destroy);
}
mutex_unlock(&lport->disc.disc_mutex);
rcu_read_unlock();
return next_time;
}
......@@ -2674,11 +2754,21 @@ static int fcoe_ctlr_vn_recv(struct fcoe_ctlr *fip, struct sk_buff *skb)
struct fc_rport_priv rdata;
struct fcoe_rport frport;
} buf;
int rc;
int rc, vlan_id = 0;
fiph = (struct fip_header *)skb->data;
sub = fiph->fip_subcode;
if (fip->lp->vlan)
vlan_id = skb_vlan_tag_get_id(skb);
if (vlan_id && vlan_id != fip->lp->vlan) {
LIBFCOE_FIP_DBG(fip, "vn_recv drop frame sub %x vlan %d\n",
sub, vlan_id);
rc = -EAGAIN;
goto drop;
}
rc = fcoe_ctlr_vn_parse(fip, skb, &buf.rdata);
if (rc) {
LIBFCOE_FIP_DBG(fip, "vn_recv vn_parse error %d\n", rc);
......@@ -2941,7 +3031,7 @@ static void fcoe_ctlr_disc_recv(struct fc_lport *lport, struct fc_frame *fp)
rjt_data.reason = ELS_RJT_UNSUP;
rjt_data.explan = ELS_EXPL_NONE;
lport->tt.seq_els_rsp_send(fp, ELS_LS_RJT, &rjt_data);
fc_seq_els_rsp_send(fp, ELS_LS_RJT, &rjt_data);
fc_frame_free(fp);
}
......@@ -2991,12 +3081,17 @@ static void fcoe_ctlr_vn_disc(struct fcoe_ctlr *fip)
mutex_lock(&disc->disc_mutex);
callback = disc->pending ? disc->disc_callback : NULL;
disc->pending = 0;
mutex_unlock(&disc->disc_mutex);
rcu_read_lock();
list_for_each_entry_rcu(rdata, &disc->rports, peers) {
if (!kref_get_unless_zero(&rdata->kref))
continue;
frport = fcoe_ctlr_rport(rdata);
if (frport->time)
lport->tt.rport_login(rdata);
fc_rport_login(rdata);
kref_put(&rdata->kref, fc_rport_destroy);
}
mutex_unlock(&disc->disc_mutex);
rcu_read_unlock();
if (callback)
callback(lport, DISC_EV_SUCCESS);
}
......@@ -3015,11 +3110,13 @@ static void fcoe_ctlr_vn_timeout(struct fcoe_ctlr *fip)
switch (fip->state) {
case FIP_ST_VNMP_START:
fcoe_ctlr_set_state(fip, FIP_ST_VNMP_PROBE1);
LIBFCOE_FIP_DBG(fip, "vn_timeout: send 1st probe request\n");
fcoe_ctlr_vn_send(fip, FIP_SC_VN_PROBE_REQ, fcoe_all_vn2vn, 0);
next_time = jiffies + msecs_to_jiffies(FIP_VN_PROBE_WAIT);
break;
case FIP_ST_VNMP_PROBE1:
fcoe_ctlr_set_state(fip, FIP_ST_VNMP_PROBE2);
LIBFCOE_FIP_DBG(fip, "vn_timeout: send 2nd probe request\n");
fcoe_ctlr_vn_send(fip, FIP_SC_VN_PROBE_REQ, fcoe_all_vn2vn, 0);
next_time = jiffies + msecs_to_jiffies(FIP_VN_ANN_WAIT);
break;
......@@ -3030,6 +3127,7 @@ static void fcoe_ctlr_vn_timeout(struct fcoe_ctlr *fip)
hton24(mac + 3, new_port_id);
fcoe_ctlr_map_dest(fip);
fip->update_mac(fip->lp, mac);
LIBFCOE_FIP_DBG(fip, "vn_timeout: send claim notify\n");
fcoe_ctlr_vn_send_claim(fip);
next_time = jiffies + msecs_to_jiffies(FIP_VN_ANN_WAIT);
break;
......@@ -3041,6 +3139,7 @@ static void fcoe_ctlr_vn_timeout(struct fcoe_ctlr *fip)
next_time = fip->sol_time + msecs_to_jiffies(FIP_VN_ANN_WAIT);
if (time_after_eq(jiffies, next_time)) {
fcoe_ctlr_set_state(fip, FIP_ST_VNMP_UP);
LIBFCOE_FIP_DBG(fip, "vn_timeout: send vn2vn beacon\n");
fcoe_ctlr_vn_send(fip, FIP_SC_VN_BEACON,
fcoe_all_vn2vn, 0);
next_time = jiffies + msecs_to_jiffies(FIP_VN_ANN_WAIT);
......@@ -3051,6 +3150,7 @@ static void fcoe_ctlr_vn_timeout(struct fcoe_ctlr *fip)
case FIP_ST_VNMP_UP:
next_time = fcoe_ctlr_vn_age(fip);
if (time_after_eq(jiffies, fip->port_ka_time)) {
LIBFCOE_FIP_DBG(fip, "vn_timeout: send vn2vn beacon\n");
fcoe_ctlr_vn_send(fip, FIP_SC_VN_BEACON,
fcoe_all_vn2vn, 0);
fip->port_ka_time = jiffies +
......@@ -3135,7 +3235,6 @@ int fcoe_libfc_config(struct fc_lport *lport, struct fcoe_ctlr *fip,
fc_exch_init(lport);
fc_elsct_init(lport);
fc_lport_init(lport);
fc_rport_init(lport);
fc_disc_init(lport);
fcoe_ctlr_mode_set(lport, fip, fip->mode);
return 0;
......
......@@ -335,16 +335,24 @@ static ssize_t store_ctlr_enabled(struct device *dev,
const char *buf, size_t count)
{
struct fcoe_ctlr_device *ctlr = dev_to_ctlr(dev);
bool enabled;
int rc;
if (*buf == '1')
enabled = true;
else if (*buf == '0')
enabled = false;
else
return -EINVAL;
switch (ctlr->enabled) {
case FCOE_CTLR_ENABLED:
if (*buf == '1')
if (enabled)
return count;
ctlr->enabled = FCOE_CTLR_DISABLED;
break;
case FCOE_CTLR_DISABLED:
if (*buf == '0')
if (!enabled)
return count;
ctlr->enabled = FCOE_CTLR_ENABLED;
break;
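The hunk above parses "0"/"1" by hand before entering the switch; as a sketch only, the same parse could use kstrtobool(), which additionally accepts y/n and on/off spellings (an alternative, not what the patch does):

#include <linux/string.h>

static int parse_enabled(const char *buf, bool *enabled)
{
	return kstrtobool(buf, enabled);	/* 0 on success, else -EINVAL */
}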
......@@ -423,6 +431,75 @@ static FCOE_DEVICE_ATTR(ctlr, fip_vlan_responder, S_IRUGO | S_IWUSR,
show_ctlr_fip_resp,
store_ctlr_fip_resp);
static ssize_t
fcoe_ctlr_var_store(u32 *var, const char *buf, size_t count)
{
int err;
unsigned long v;
err = kstrtoul(buf, 10, &v);
if (err || v > UINT_MAX)
return -EINVAL;
*var = v;
return count;
}
static ssize_t store_ctlr_r_a_tov(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct fcoe_ctlr_device *ctlr_dev = dev_to_ctlr(dev);
struct fcoe_ctlr *ctlr = fcoe_ctlr_device_priv(ctlr_dev);
if (ctlr_dev->enabled == FCOE_CTLR_ENABLED)
return -EBUSY;
if (ctlr_dev->enabled == FCOE_CTLR_DISABLED)
return fcoe_ctlr_var_store(&ctlr->lp->r_a_tov, buf, count);
return -ENOTSUPP;
}
static ssize_t show_ctlr_r_a_tov(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct fcoe_ctlr_device *ctlr_dev = dev_to_ctlr(dev);
struct fcoe_ctlr *ctlr = fcoe_ctlr_device_priv(ctlr_dev);
return sprintf(buf, "%d\n", ctlr->lp->r_a_tov);
}
static FCOE_DEVICE_ATTR(ctlr, r_a_tov, S_IRUGO | S_IWUSR,
show_ctlr_r_a_tov, store_ctlr_r_a_tov);
static ssize_t store_ctlr_e_d_tov(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct fcoe_ctlr_device *ctlr_dev = dev_to_ctlr(dev);
struct fcoe_ctlr *ctlr = fcoe_ctlr_device_priv(ctlr_dev);
if (ctlr_dev->enabled == FCOE_CTLR_ENABLED)
return -EBUSY;
if (ctlr_dev->enabled == FCOE_CTLR_DISABLED)
return fcoe_ctlr_var_store(&ctlr->lp->e_d_tov, buf, count);
return -ENOTSUPP;
}
static ssize_t show_ctlr_e_d_tov(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct fcoe_ctlr_device *ctlr_dev = dev_to_ctlr(dev);
struct fcoe_ctlr *ctlr = fcoe_ctlr_device_priv(ctlr_dev);
return sprintf(buf, "%d\n", ctlr->lp->e_d_tov);
}
static FCOE_DEVICE_ATTR(ctlr, e_d_tov, S_IRUGO | S_IWUSR,
show_ctlr_e_d_tov, store_ctlr_e_d_tov);
static ssize_t
store_private_fcoe_ctlr_fcf_dev_loss_tmo(struct device *dev,
struct device_attribute *attr,
......@@ -507,6 +584,8 @@ static struct attribute_group fcoe_ctlr_lesb_attr_group = {
static struct attribute *fcoe_ctlr_attrs[] = {
&device_attr_fcoe_ctlr_fip_vlan_responder.attr,
&device_attr_fcoe_ctlr_fcf_dev_loss_tmo.attr,
&device_attr_fcoe_ctlr_r_a_tov.attr,
&device_attr_fcoe_ctlr_e_d_tov.attr,
&device_attr_fcoe_ctlr_enabled.attr,
&device_attr_fcoe_ctlr_mode.attr,
NULL,
......
......@@ -441,30 +441,38 @@ static int fnic_queuecommand_lck(struct scsi_cmnd *sc, void (*done)(struct scsi_
unsigned long ptr;
spinlock_t *io_lock = NULL;
int io_lock_acquired = 0;
struct fc_rport_libfc_priv *rp;
if (unlikely(fnic_chk_state_flags_locked(fnic, FNIC_FLAGS_IO_BLOCKED)))
return SCSI_MLQUEUE_HOST_BUSY;
rport = starget_to_rport(scsi_target(sc->device));
if (!rport) {
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"returning DID_NO_CONNECT for IO as rport is NULL\n");
sc->result = DID_NO_CONNECT << 16;
done(sc);
return 0;
}
ret = fc_remote_port_chkready(rport);
if (ret) {
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"rport is not ready\n");
atomic64_inc(&fnic_stats->misc_stats.rport_not_ready);
sc->result = ret;
done(sc);
return 0;
}
if (rport) {
struct fc_rport_libfc_priv *rp = rport->dd_data;
if (!rp || rp->rp_state != RPORT_ST_READY) {
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
rp = rport->dd_data;
if (!rp || rp->rp_state != RPORT_ST_READY) {
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"returning DID_NO_CONNECT for IO as rport is removed\n");
atomic64_inc(&fnic_stats->misc_stats.rport_not_ready);
sc->result = DID_NO_CONNECT<<16;
done(sc);
return 0;
}
atomic64_inc(&fnic_stats->misc_stats.rport_not_ready);
sc->result = DID_NO_CONNECT<<16;
done(sc);
return 0;
}
if (lp->state != LPORT_ST_READY || !(lp->link_up))
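Reordered, the fnic queuecommand entry checks now fail fast in sequence: NULL rport, transport readiness via fc_remote_port_chkready(), then the libfc rport state. A sketch of the first two checks (validate_rport() is illustrative; returning 0 means the command was completed rather than queued):

#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_transport_fc.h>

static int validate_rport(struct scsi_cmnd *sc, struct fc_rport *rport)
{
	int ret;

	if (!rport) {
		sc->result = DID_NO_CONNECT << 16;
		return 0;
	}
	ret = fc_remote_port_chkready(rport);
	if (ret) {
		sc->result = ret;	/* already a SCSI result value */
		return 0;
	}
	return 1;	/* rport usable; go on to queue the command */
}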
......@@ -2543,7 +2551,7 @@ int fnic_reset(struct Scsi_Host *shost)
* Reset local port, this will clean up libFC exchanges,
* reset remote port sessions, and if link is up, begin flogi
*/
ret = lp->tt.lport_reset(lp);
ret = fc_lport_reset(lp);
FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
"Returning from fnic reset %s\n",
......
......@@ -613,7 +613,7 @@ int fnic_fc_trace_set_data(u32 host_no, u8 frame_type,
fc_trace_entries.rd_idx = 0;
}
fc_buf->time_stamp = CURRENT_TIME;
ktime_get_real_ts64(&fc_buf->time_stamp);
fc_buf->host_no = host_no;
fc_buf->frame_type = frame_type;
......@@ -740,7 +740,7 @@ void copy_and_format_trace_data(struct fc_trace_hdr *tdata,
len = *orig_len;
time_to_tm(tdata->time_stamp.tv_sec, 0, &tm);
time64_to_tm(tdata->time_stamp.tv_sec, 0, &tm);
fmt = "%02d:%02d:%04ld %02d:%02d:%02d.%09lu ns%8x %c%8x\t";
len += snprintf(fnic_dbgfs_prt->buffer + len,
......
......@@ -72,7 +72,7 @@ struct fnic_trace_data {
typedef struct fnic_trace_data fnic_trace_data_t;
struct fc_trace_hdr {
struct timespec time_stamp;
struct timespec64 time_stamp;
u32 host_no;
u8 frame_type;
u8 frame_len;
......
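These fnic trace hunks are a y2038 conversion: struct timespec becomes timespec64, CURRENT_TIME becomes ktime_get_real_ts64(), and time_to_tm() becomes time64_to_tm(). A minimal sketch of the converted calls:

#include <linux/time.h>
#include <linux/timekeeping.h>

static void stamp_and_split(void)
{
	struct timespec64 ts;
	struct tm tm;

	ktime_get_real_ts64(&ts);		/* was: ts = CURRENT_TIME */
	time64_to_tm(ts.tv_sec, 0, &tm);	/* was: time_to_tm() */
}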
......@@ -499,10 +499,7 @@ void vnic_dev_add_addr(struct vnic_dev *vdev, u8 *addr)
err = vnic_dev_cmd(vdev, CMD_ADDR_ADD, &a0, &a1, wait);
if (err)
printk(KERN_ERR
"Can't add addr [%02x:%02x:%02x:%02x:%02x:%02x], %d\n",
addr[0], addr[1], addr[2], addr[3], addr[4], addr[5],
err);
pr_err("Can't add addr [%pM], %d\n", addr, err);
}
void vnic_dev_del_addr(struct vnic_dev *vdev, u8 *addr)
......@@ -517,10 +514,7 @@ void vnic_dev_del_addr(struct vnic_dev *vdev, u8 *addr)
err = vnic_dev_cmd(vdev, CMD_ADDR_DEL, &a0, &a1, wait);
if (err)
printk(KERN_ERR
"Can't del addr [%02x:%02x:%02x:%02x:%02x:%02x], %d\n",
addr[0], addr[1], addr[2], addr[3], addr[4], addr[5],
err);
pr_err("Can't del addr [%pM], %d\n", addr, err);
}
int vnic_dev_notify_set(struct vnic_dev *vdev, u16 intr)
......
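The vnic_dev cleanup relies on the kernel's %pM printk extension, which formats a 6-byte MAC address and replaces six hand-written %02x conversions:

#include <linux/printk.h>
#include <linux/types.h>

static void report_addr_failure(const u8 *addr, int err)
{
	pr_err("Can't add addr [%pM], %d\n", addr, err);
}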
......@@ -64,9 +64,9 @@ static int card[] = { -1, -1, -1, -1, -1, -1, -1, -1 };
module_param_array(card, int, NULL, 0);
MODULE_PARM_DESC(card, "card type (0=NCR5380, 1=NCR53C400, 2=NCR53C400A, 3=DTC3181E, 4=HP C2502)");
MODULE_ALIAS("g_NCR5380_mmio");
MODULE_LICENSE("GPL");
#ifndef SCSI_G_NCR5380_MEM
/*
* Configure I/O address of 53C400A or DTC436 by writing magic numbers
* to ports 0x779 and 0x379.
......@@ -88,40 +88,35 @@ static void magic_configure(int idx, u8 irq, u8 magic[])
cfg = 0x80 | idx | (irq << 4);
outb(cfg, 0x379);
}
#endif
static unsigned int ncr_53c400a_ports[] = {
0x280, 0x290, 0x300, 0x310, 0x330, 0x340, 0x348, 0x350, 0
};
static unsigned int dtc_3181e_ports[] = {
0x220, 0x240, 0x280, 0x2a0, 0x2c0, 0x300, 0x320, 0x340, 0
};
static u8 ncr_53c400a_magic[] = { /* 53C400A & DTC436 */
0x59, 0xb9, 0xc5, 0xae, 0xa6
};
static u8 hp_c2502_magic[] = { /* HP C2502 */
0x0f, 0x22, 0xf0, 0x20, 0x80
};
static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
struct device *pdev, int base, int irq, int board)
{
unsigned int *ports;
bool is_pmio = base <= 0xffff;
int ret;
int flags = 0;
unsigned int *ports = NULL;
u8 *magic = NULL;
#ifndef SCSI_G_NCR5380_MEM
int i;
int port_idx = -1;
unsigned long region_size;
#endif
static unsigned int ncr_53c400a_ports[] = {
0x280, 0x290, 0x300, 0x310, 0x330, 0x340, 0x348, 0x350, 0
};
static unsigned int dtc_3181e_ports[] = {
0x220, 0x240, 0x280, 0x2a0, 0x2c0, 0x300, 0x320, 0x340, 0
};
static u8 ncr_53c400a_magic[] = { /* 53C400A & DTC436 */
0x59, 0xb9, 0xc5, 0xae, 0xa6
};
static u8 hp_c2502_magic[] = { /* HP C2502 */
0x0f, 0x22, 0xf0, 0x20, 0x80
};
int flags, ret;
struct Scsi_Host *instance;
struct NCR5380_hostdata *hostdata;
#ifdef SCSI_G_NCR5380_MEM
void __iomem *iomem;
resource_size_t iomem_size;
#endif
u8 __iomem *iomem;
ports = NULL;
flags = 0;
switch (board) {
case BOARD_NCR5380:
flags = FLAG_NO_PSEUDO_DMA | FLAG_DMA_FIXUP;
......@@ -140,8 +135,7 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
break;
}
#ifndef SCSI_G_NCR5380_MEM
if (ports && magic) {
if (is_pmio && ports && magic) {
/* wakeup sequence for the NCR53C400A and DTC3181E */
/* Disable the adapter and look for a free io port */
......@@ -170,84 +164,89 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
if (ports[i]) {
/* At this point we have our region reserved */
magic_configure(i, 0, magic); /* no IRQ yet */
outb(0xc0, ports[i] + 9);
if (inb(ports[i] + 9) != 0x80) {
base = ports[i];
outb(0xc0, base + 9);
if (inb(base + 9) != 0x80) {
ret = -ENODEV;
goto out_release;
}
base = ports[i];
port_idx = i;
} else
return -EINVAL;
}
else
{
} else if (is_pmio) {
/* NCR5380 - no configuration, just grab */
region_size = 8;
if (!base || !request_region(base, region_size, "ncr5380"))
return -EBUSY;
} else { /* MMIO */
region_size = NCR53C400_region_size;
if (!request_mem_region(base, region_size, "ncr5380"))
return -EBUSY;
}
#else
iomem_size = NCR53C400_region_size;
if (!request_mem_region(base, iomem_size, "ncr5380"))
return -EBUSY;
iomem = ioremap(base, iomem_size);
if (is_pmio)
iomem = ioport_map(base, region_size);
else
iomem = ioremap(base, region_size);
if (!iomem) {
release_mem_region(base, iomem_size);
return -ENOMEM;
ret = -ENOMEM;
goto out_release;
}
#endif
instance = scsi_host_alloc(tpnt, sizeof(struct NCR5380_hostdata));
if (instance == NULL) {
ret = -ENOMEM;
goto out_release;
goto out_unmap;
}
hostdata = shost_priv(instance);
#ifndef SCSI_G_NCR5380_MEM
instance->io_port = base;
instance->n_io_port = region_size;
hostdata->io_width = 1; /* 8-bit PDMA by default */
/*
* On NCR53C400 boards, NCR5380 registers are mapped 8 past
* the base address.
*/
switch (board) {
case BOARD_NCR53C400:
instance->io_port += 8;
hostdata->c400_ctl_status = 0;
hostdata->c400_blk_cnt = 1;
hostdata->c400_host_buf = 4;
break;
case BOARD_DTC3181E:
hostdata->io_width = 2; /* 16-bit PDMA */
/* fall through */
case BOARD_NCR53C400A:
case BOARD_HP_C2502:
hostdata->c400_ctl_status = 9;
hostdata->c400_blk_cnt = 10;
hostdata->c400_host_buf = 8;
break;
}
#else
instance->base = base;
hostdata->iomem = iomem;
hostdata->iomem_size = iomem_size;
switch (board) {
case BOARD_NCR53C400:
hostdata->c400_ctl_status = 0x100;
hostdata->c400_blk_cnt = 0x101;
hostdata->c400_host_buf = 0x104;
break;
case BOARD_DTC3181E:
case BOARD_NCR53C400A:
case BOARD_HP_C2502:
pr_err(DRV_MODULE_NAME ": unknown register offsets\n");
ret = -EINVAL;
goto out_unregister;
hostdata->io = iomem;
hostdata->region_size = region_size;
if (is_pmio) {
hostdata->io_port = base;
hostdata->io_width = 1; /* 8-bit PDMA by default */
hostdata->offset = 0;
/*
* On NCR53C400 boards, NCR5380 registers are mapped 8 past
* the base address.
*/
switch (board) {
case BOARD_NCR53C400:
hostdata->io_port += 8;
hostdata->c400_ctl_status = 0;
hostdata->c400_blk_cnt = 1;
hostdata->c400_host_buf = 4;
break;
case BOARD_DTC3181E:
hostdata->io_width = 2; /* 16-bit PDMA */
/* fall through */
case BOARD_NCR53C400A:
case BOARD_HP_C2502:
hostdata->c400_ctl_status = 9;
hostdata->c400_blk_cnt = 10;
hostdata->c400_host_buf = 8;
break;
}
} else {
hostdata->base = base;
hostdata->offset = NCR53C400_mem_base;
switch (board) {
case BOARD_NCR53C400:
hostdata->c400_ctl_status = 0x100;
hostdata->c400_blk_cnt = 0x101;
hostdata->c400_host_buf = 0x104;
break;
case BOARD_DTC3181E:
case BOARD_NCR53C400A:
case BOARD_HP_C2502:
pr_err(DRV_MODULE_NAME ": unknown register offsets\n");
ret = -EINVAL;
goto out_unregister;
}
}
#endif
ret = NCR5380_init(instance, flags | FLAG_LATE_DMA_SETUP);
if (ret)
......@@ -273,11 +272,9 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
instance->irq = NO_IRQ;
if (instance->irq != NO_IRQ) {
#ifndef SCSI_G_NCR5380_MEM
/* set IRQ for HP C2502 */
if (board == BOARD_HP_C2502)
magic_configure(port_idx, instance->irq, magic);
#endif
if (request_irq(instance->irq, generic_NCR5380_intr,
0, "NCR5380", instance)) {
printk(KERN_WARNING "scsi%d : IRQ%d not free, interrupts disabled\n", instance->host_no, instance->irq);
......@@ -303,38 +300,39 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
NCR5380_exit(instance);
out_unregister:
scsi_host_put(instance);
out_release:
#ifndef SCSI_G_NCR5380_MEM
release_region(base, region_size);
#else
out_unmap:
iounmap(iomem);
release_mem_region(base, iomem_size);
#endif
out_release:
if (is_pmio)
release_region(base, region_size);
else
release_mem_region(base, region_size);
return ret;
}
static void generic_NCR5380_release_resources(struct Scsi_Host *instance)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
void __iomem *iomem = hostdata->io;
unsigned long io_port = hostdata->io_port;
unsigned long base = hostdata->base;
unsigned long region_size = hostdata->region_size;
scsi_remove_host(instance);
if (instance->irq != NO_IRQ)
free_irq(instance->irq, instance);
NCR5380_exit(instance);
#ifndef SCSI_G_NCR5380_MEM
release_region(instance->io_port, instance->n_io_port);
#else
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
iounmap(hostdata->iomem);
release_mem_region(instance->base, hostdata->iomem_size);
}
#endif
scsi_host_put(instance);
iounmap(iomem);
if (io_port)
release_region(io_port, region_size);
else
release_mem_region(base, region_size);
}
/**
* generic_NCR5380_pread - pseudo DMA read
* @instance: adapter to read from
* @hostdata: scsi host private data
* @dst: buffer to read into
* @len: buffer length
*
......@@ -342,10 +340,9 @@ static void generic_NCR5380_release_resources(struct Scsi_Host *instance)
* controller
*/
static inline int generic_NCR5380_pread(struct Scsi_Host *instance,
static inline int generic_NCR5380_pread(struct NCR5380_hostdata *hostdata,
unsigned char *dst, int len)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
int blocks = len / 128;
int start = 0;
......@@ -361,18 +358,16 @@ static inline int generic_NCR5380_pread(struct Scsi_Host *instance,
while (NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY)
; /* FIXME - no timeout */
#ifndef SCSI_G_NCR5380_MEM
if (hostdata->io_width == 2)
insw(instance->io_port + hostdata->c400_host_buf,
if (hostdata->io_port && hostdata->io_width == 2)
insw(hostdata->io_port + hostdata->c400_host_buf,
dst + start, 64);
else
insb(instance->io_port + hostdata->c400_host_buf,
else if (hostdata->io_port)
insb(hostdata->io_port + hostdata->c400_host_buf,
dst + start, 128);
#else
/* implies SCSI_G_NCR5380_MEM */
memcpy_fromio(dst + start,
hostdata->iomem + NCR53C400_host_buffer, 128);
#endif
else
memcpy_fromio(dst + start,
hostdata->io + NCR53C400_host_buffer, 128);
start += 128;
blocks--;
}
......@@ -381,18 +376,16 @@ static inline int generic_NCR5380_pread(struct Scsi_Host *instance,
while (NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY)
; /* FIXME - no timeout */
#ifndef SCSI_G_NCR5380_MEM
if (hostdata->io_width == 2)
insw(instance->io_port + hostdata->c400_host_buf,
if (hostdata->io_port && hostdata->io_width == 2)
insw(hostdata->io_port + hostdata->c400_host_buf,
dst + start, 64);
else
insb(instance->io_port + hostdata->c400_host_buf,
else if (hostdata->io_port)
insb(hostdata->io_port + hostdata->c400_host_buf,
dst + start, 128);
#else
/* implies SCSI_G_NCR5380_MEM */
memcpy_fromio(dst + start,
hostdata->iomem + NCR53C400_host_buffer, 128);
#endif
else
memcpy_fromio(dst + start,
hostdata->io + NCR53C400_host_buffer, 128);
start += 128;
blocks--;
}
......@@ -412,7 +405,7 @@ static inline int generic_NCR5380_pread(struct Scsi_Host *instance,
/**
* generic_NCR5380_pwrite - pseudo DMA write
* @instance: adapter to read from
* @hostdata: scsi host private data
* @dst: buffer to read into
* @len: buffer length
*
......@@ -420,10 +413,9 @@ static inline int generic_NCR5380_pread(struct Scsi_Host *instance,
* controller
*/
static inline int generic_NCR5380_pwrite(struct Scsi_Host *instance,
static inline int generic_NCR5380_pwrite(struct NCR5380_hostdata *hostdata,
unsigned char *src, int len)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
int blocks = len / 128;
int start = 0;
......@@ -439,18 +431,17 @@ static inline int generic_NCR5380_pwrite(struct Scsi_Host *instance,
break;
while (NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY)
; // FIXME - timeout
#ifndef SCSI_G_NCR5380_MEM
if (hostdata->io_width == 2)
outsw(instance->io_port + hostdata->c400_host_buf,
if (hostdata->io_port && hostdata->io_width == 2)
outsw(hostdata->io_port + hostdata->c400_host_buf,
src + start, 64);
else
outsb(instance->io_port + hostdata->c400_host_buf,
else if (hostdata->io_port)
outsb(hostdata->io_port + hostdata->c400_host_buf,
src + start, 128);
#else
/* implies SCSI_G_NCR5380_MEM */
memcpy_toio(hostdata->iomem + NCR53C400_host_buffer,
src + start, 128);
#endif
else
memcpy_toio(hostdata->io + NCR53C400_host_buffer,
src + start, 128);
start += 128;
blocks--;
}
......@@ -458,18 +449,16 @@ static inline int generic_NCR5380_pwrite(struct Scsi_Host *instance,
while (NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY)
; // FIXME - no timeout
#ifndef SCSI_G_NCR5380_MEM
if (hostdata->io_width == 2)
outsw(instance->io_port + hostdata->c400_host_buf,
if (hostdata->io_port && hostdata->io_width == 2)
outsw(hostdata->io_port + hostdata->c400_host_buf,
src + start, 64);
else
outsb(instance->io_port + hostdata->c400_host_buf,
else if (hostdata->io_port)
outsb(hostdata->io_port + hostdata->c400_host_buf,
src + start, 128);
#else
/* implies SCSI_G_NCR5380_MEM */
memcpy_toio(hostdata->iomem + NCR53C400_host_buffer,
src + start, 128);
#endif
else
memcpy_toio(hostdata->io + NCR53C400_host_buffer,
src + start, 128);
start += 128;
blocks--;
}
......@@ -489,10 +478,9 @@ static inline int generic_NCR5380_pwrite(struct Scsi_Host *instance,
return 0;
}
static int generic_NCR5380_dma_xfer_len(struct Scsi_Host *instance,
static int generic_NCR5380_dma_xfer_len(struct NCR5380_hostdata *hostdata,
struct scsi_cmnd *cmd)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
int transfersize = cmd->transfersize;
if (hostdata->flags & FLAG_NO_PSEUDO_DMA)
......@@ -566,7 +554,7 @@ static struct isa_driver generic_NCR5380_isa_driver = {
},
};
#if !defined(SCSI_G_NCR5380_MEM) && defined(CONFIG_PNP)
#ifdef CONFIG_PNP
static struct pnp_device_id generic_NCR5380_pnp_ids[] = {
{ .id = "DTC436e", .driver_data = BOARD_DTC3181E },
{ .id = "" }
......@@ -600,7 +588,7 @@ static struct pnp_driver generic_NCR5380_pnp_driver = {
.probe = generic_NCR5380_pnp_probe,
.remove = generic_NCR5380_pnp_remove,
};
#endif /* !defined(SCSI_G_NCR5380_MEM) && defined(CONFIG_PNP) */
#endif /* defined(CONFIG_PNP) */
static int pnp_registered, isa_registered;
......@@ -624,7 +612,7 @@ static int __init generic_NCR5380_init(void)
card[0] = BOARD_HP_C2502;
}
#if !defined(SCSI_G_NCR5380_MEM) && defined(CONFIG_PNP)
#ifdef CONFIG_PNP
if (!pnp_register_driver(&generic_NCR5380_pnp_driver))
pnp_registered = 1;
#endif
......@@ -637,7 +625,7 @@ static int __init generic_NCR5380_init(void)
static void __exit generic_NCR5380_exit(void)
{
#if !defined(SCSI_G_NCR5380_MEM) && defined(CONFIG_PNP)
#ifdef CONFIG_PNP
if (pnp_registered)
pnp_unregister_driver(&generic_NCR5380_pnp_driver);
#endif
......
......@@ -14,49 +14,28 @@
#ifndef GENERIC_NCR5380_H
#define GENERIC_NCR5380_H
#ifndef SCSI_G_NCR5380_MEM
#define DRV_MODULE_NAME "g_NCR5380"
#define NCR5380_read(reg) \
inb(instance->io_port + (reg))
ioread8(hostdata->io + hostdata->offset + (reg))
#define NCR5380_write(reg, value) \
outb(value, instance->io_port + (reg))
iowrite8(value, hostdata->io + hostdata->offset + (reg))
#define NCR5380_implementation_fields \
int offset; \
int c400_ctl_status; \
int c400_blk_cnt; \
int c400_host_buf; \
int io_width;
#else
/* therefore SCSI_G_NCR5380_MEM */
#define DRV_MODULE_NAME "g_NCR5380_mmio"
#define NCR53C400_mem_base 0x3880
#define NCR53C400_host_buffer 0x3900
#define NCR53C400_region_size 0x3a00
#define NCR5380_read(reg) \
readb(((struct NCR5380_hostdata *)shost_priv(instance))->iomem + \
NCR53C400_mem_base + (reg))
#define NCR5380_write(reg, value) \
writeb(value, ((struct NCR5380_hostdata *)shost_priv(instance))->iomem + \
NCR53C400_mem_base + (reg))
#define NCR5380_implementation_fields \
void __iomem *iomem; \
resource_size_t iomem_size; \
int c400_ctl_status; \
int c400_blk_cnt; \
int c400_host_buf;
#endif
#define NCR5380_dma_xfer_len(instance, cmd, phase) \
generic_NCR5380_dma_xfer_len(instance, cmd)
#define NCR5380_dma_xfer_len generic_NCR5380_dma_xfer_len
#define NCR5380_dma_recv_setup generic_NCR5380_pread
#define NCR5380_dma_send_setup generic_NCR5380_pwrite
#define NCR5380_dma_residual(instance) (0)
#define NCR5380_dma_residual NCR5380_dma_residual_none
#define NCR5380_intr generic_NCR5380_intr
#define NCR5380_queue_command generic_NCR5380_queue_command
......@@ -73,4 +52,3 @@
#define BOARD_HP_C2502 4
#endif /* GENERIC_NCR5380_H */
/*
* There is probably a nicer way to do this but this one makes
* pretty obvious what is happening. We rebuild the same file with
* different options for mmio versus pio.
*/
#define SCSI_G_NCR5380_MEM
#include "g_NCR5380.c"
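With the rebuilt-with-#define mmio variant removed above, one build now serves both bus types: the probe maps port I/O through ioport_map(), and a single accessor pair works on either cookie via ioread8()/iowrite8(). A sketch mirroring the new NCR5380_read/NCR5380_write macros:

#include <linux/io.h>
#include <linux/types.h>

struct hostdata_like {
	void __iomem *io;	/* from ioport_map() or ioremap() */
	int offset;		/* 0 for PIO, NCR53C400_mem_base for MMIO */
};

static u8 reg_read(struct hostdata_like *h, unsigned int reg)
{
	return ioread8(h->io + h->offset + reg);
}

static void reg_write(struct hostdata_like *h, unsigned int reg, u8 val)
{
	iowrite8(val, h->io + h->offset + reg);
}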
......@@ -13,6 +13,7 @@
#define _HISI_SAS_H_
#include <linux/acpi.h>
#include <linux/clk.h>
#include <linux/dmapool.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
......@@ -110,7 +111,7 @@ struct hisi_sas_device {
struct domain_device *sas_device;
u64 attached_phy;
u64 device_id;
u64 running_req;
atomic64_t running_req;
u8 dev_status;
};
......@@ -149,7 +150,8 @@ struct hisi_sas_hw {
struct domain_device *device);
struct hisi_sas_device *(*alloc_dev)(struct domain_device *device);
void (*sl_notify)(struct hisi_hba *hisi_hba, int phy_no);
int (*get_free_slot)(struct hisi_hba *hisi_hba, int *q, int *s);
int (*get_free_slot)(struct hisi_hba *hisi_hba, u32 dev_id,
int *q, int *s);
void (*start_delivery)(struct hisi_hba *hisi_hba);
int (*prep_ssp)(struct hisi_hba *hisi_hba,
struct hisi_sas_slot *slot, int is_tmf,
......@@ -166,6 +168,9 @@ struct hisi_sas_hw {
void (*phy_enable)(struct hisi_hba *hisi_hba, int phy_no);
void (*phy_disable)(struct hisi_hba *hisi_hba, int phy_no);
void (*phy_hard_reset)(struct hisi_hba *hisi_hba, int phy_no);
void (*phy_set_linkrate)(struct hisi_hba *hisi_hba, int phy_no,
struct sas_phy_linkrates *linkrates);
enum sas_linkrate (*phy_get_max_linkrate)(void);
void (*free_device)(struct hisi_hba *hisi_hba,
struct hisi_sas_device *dev);
int (*get_wideport_bitmap)(struct hisi_hba *hisi_hba, int port_id);
......@@ -183,6 +188,7 @@ struct hisi_hba {
u32 ctrl_reset_reg;
u32 ctrl_reset_sts_reg;
u32 ctrl_clock_ena_reg;
u32 refclk_frequency_mhz;
u8 sas_addr[SAS_ADDR_SIZE];
int n_phy;
......@@ -205,7 +211,6 @@ struct hisi_hba {
struct hisi_sas_port port[HISI_SAS_MAX_PHYS];
int queue_count;
int queue;
struct hisi_sas_slot *slot_prep;
struct dma_pool *sge_page_pool;
......
......@@ -843,6 +843,49 @@ static void sl_notify_v1_hw(struct hisi_hba *hisi_hba, int phy_no)
hisi_sas_phy_write32(hisi_hba, phy_no, SL_CONTROL, sl_control);
}
static enum sas_linkrate phy_get_max_linkrate_v1_hw(void)
{
return SAS_LINK_RATE_6_0_GBPS;
}
static void phy_set_linkrate_v1_hw(struct hisi_hba *hisi_hba, int phy_no,
struct sas_phy_linkrates *r)
{
u32 prog_phy_link_rate =
hisi_sas_phy_read32(hisi_hba, phy_no, PROG_PHY_LINK_RATE);
struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
struct asd_sas_phy *sas_phy = &phy->sas_phy;
int i;
enum sas_linkrate min, max;
u32 rate_mask = 0;
if (r->maximum_linkrate == SAS_LINK_RATE_UNKNOWN) {
max = sas_phy->phy->maximum_linkrate;
min = r->minimum_linkrate;
} else if (r->minimum_linkrate == SAS_LINK_RATE_UNKNOWN) {
max = r->maximum_linkrate;
min = sas_phy->phy->minimum_linkrate;
} else
return;
sas_phy->phy->maximum_linkrate = max;
sas_phy->phy->minimum_linkrate = min;
min -= SAS_LINK_RATE_1_5_GBPS;
max -= SAS_LINK_RATE_1_5_GBPS;
for (i = 0; i <= max; i++)
rate_mask |= 1 << (i * 2);
prog_phy_link_rate &= ~0xff;
prog_phy_link_rate |= rate_mask;
hisi_sas_phy_write32(hisi_hba, phy_no, PROG_PHY_LINK_RATE,
prog_phy_link_rate);
phy_hard_reset_v1_hw(hisi_hba, phy_no);
}
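The PROG_PHY_LINK_RATE mask built above allocates two bits per supported rate, lowest rate first. A worked sketch of the construction (max_norm is the maximum rate minus SAS_LINK_RATE_1_5_GBPS):

#include <linux/types.h>

static u32 build_rate_mask(int max_norm)
{
	u32 mask = 0;
	int i;

	/* e.g. 6.0 Gbps: max_norm = 2, so bits 0, 2, 4 -> mask = 0x15 */
	for (i = 0; i <= max_norm; i++)
		mask |= 1 << (i * 2);
	return mask;
}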
static int get_wideport_bitmap_v1_hw(struct hisi_hba *hisi_hba, int port_id)
{
int i, bitmap = 0;
......@@ -862,29 +905,23 @@ static int get_wideport_bitmap_v1_hw(struct hisi_hba *hisi_hba, int port_id)
* The callpath to this function and upto writing the write
* queue pointer should be safe from interruption.
*/
static int get_free_slot_v1_hw(struct hisi_hba *hisi_hba, int *q, int *s)
static int get_free_slot_v1_hw(struct hisi_hba *hisi_hba, u32 dev_id,
int *q, int *s)
{
struct device *dev = &hisi_hba->pdev->dev;
struct hisi_sas_dq *dq;
u32 r, w;
int queue = hisi_hba->queue;
while (1) {
dq = &hisi_hba->dq[queue];
w = dq->wr_point;
r = hisi_sas_read32_relaxed(hisi_hba,
DLVRY_Q_0_RD_PTR + (queue * 0x14));
if (r == (w+1) % HISI_SAS_QUEUE_SLOTS) {
queue = (queue + 1) % hisi_hba->queue_count;
if (queue == hisi_hba->queue) {
dev_warn(dev, "could not find free slot\n");
return -EAGAIN;
}
continue;
}
break;
int queue = dev_id % hisi_hba->queue_count;
dq = &hisi_hba->dq[queue];
w = dq->wr_point;
r = hisi_sas_read32_relaxed(hisi_hba,
DLVRY_Q_0_RD_PTR + (queue * 0x14));
if (r == (w+1) % HISI_SAS_QUEUE_SLOTS) {
dev_warn(dev, "could not find free slot\n");
return -EAGAIN;
}
hisi_hba->queue = (queue + 1) % hisi_hba->queue_count;
*q = queue;
*s = w;
return 0;
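The rewritten get_free_slot no longer scans every delivery queue round-robin; each device is pinned to one queue by hashing its id, and submission fails fast with -EAGAIN when that queue is full. A condensed sketch of the selection (QUEUE_SLOTS stands in for HISI_SAS_QUEUE_SLOTS):

#include <linux/errno.h>
#include <linux/types.h>

#define QUEUE_SLOTS 512	/* stand-in for HISI_SAS_QUEUE_SLOTS */

static int pick_slot(u32 dev_id, int queue_count,
		     const u32 *rd_ptr, const u32 *wr_ptr, int *q, int *s)
{
	int queue = dev_id % queue_count;	/* device-affine queue */
	u32 w = wr_ptr[queue];

	if (rd_ptr[queue] == (w + 1) % QUEUE_SLOTS)
		return -EAGAIN;			/* this queue is full */
	*q = queue;
	*s = w;
	return 0;
}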
......@@ -1372,8 +1409,8 @@ static int slot_complete_v1_hw(struct hisi_hba *hisi_hba,
}
out:
if (sas_dev && sas_dev->running_req)
sas_dev->running_req--;
if (sas_dev)
atomic64_dec(&sas_dev->running_req);
hisi_sas_slot_task_free(hisi_hba, task, slot);
sts = ts->stat;
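running_req also changes type here, from a plain u64 to atomic64_t, so the completion path can decrement it without holding a device lock; the old "only decrement if nonzero" guard becomes unnecessary once every submit is paired with an increment. A minimal sketch:

#include <linux/atomic.h>

struct dev_ctr { atomic64_t running_req; };

static void on_submit(struct dev_ctr *d)   { atomic64_inc(&d->running_req); }
static void on_complete(struct dev_ctr *d) { atomic64_dec(&d->running_req); }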
......@@ -1824,6 +1861,8 @@ static const struct hisi_sas_hw hisi_sas_v1_hw = {
.phy_enable = enable_phy_v1_hw,
.phy_disable = disable_phy_v1_hw,
.phy_hard_reset = phy_hard_reset_v1_hw,
.phy_set_linkrate = phy_set_linkrate_v1_hw,
.phy_get_max_linkrate = phy_get_max_linkrate_v1_hw,
.get_wideport_bitmap = get_wideport_bitmap_v1_hw,
.max_command_entries = HISI_SAS_COMMAND_ENTRIES_V1_HW,
.complete_hdr_size = sizeof(struct hisi_sas_complete_v1_hdr),
......
...... (72 additional file diffs collapsed by the diff viewer and not shown)