Commit f290cbac authored by Linus Torvalds

Merge tag 'scsi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull late SCSI updates from James Bottomley:
 "This is mostly stuff which missed the initial pull.

  There's a new driver: qedi, and some ufs, ibmvscsis and ncr5380
  updates plus some assorted driver fixes and also a fix for the bug
  where if a device goes into a blocked state between configuration and
  sysfs device add (which can be a long time under async probing) it
  would become permanently blocked"

* tag 'scsi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (30 commits)
  scsi: avoid a permanent stop of the scsi device's request queue
  scsi: mpt3sas: Recognize and act on iopriority info
  scsi: qla2xxx: Fix Target mode handling with Multiqueue changes.
  scsi: qla2xxx: Add Block Multi Queue functionality.
  scsi: qla2xxx: Add multiple queue pair functionality.
  scsi: qla2xxx: Utilize pci_alloc_irq_vectors/pci_free_irq_vectors calls.
  scsi: qla2xxx: Only allow operational MBX to proceed during RESET.
  scsi: hpsa: remove memory allocate failure message
  scsi: Update 3ware driver email addresses
  scsi: zfcp: fix rport unblock race with LUN recovery
  scsi: zfcp: do not trace pure benign residual HBA responses at default level
  scsi: zfcp: fix use-after-"free" in FC ingress path after TMF
  scsi: libcxgbi: return error if interface is not up
  scsi: cxgb4i: libcxgbi: add missing module_put()
  scsi: cxgb4i: libcxgbi: cxgb4: add T6 iSCSI completion feature
  scsi: cxgb4i: libcxgbi: add active open cmd for T6 adapters
  scsi: cxgb4i: use cxgb4_tp_smt_idx() to get smt_idx
  scsi: qedi: Add QLogic FastLinQ offload iSCSI driver framework.
  scsi: aacraid: remove wildcard for series 9 controllers
  scsi: ibmvscsi: add write memory barrier to CRQ processing
  ...
......@@ -6,17 +6,15 @@ NCR53c400 extensions (c) 1994,1995,1996 Kevin Lentin
This file documents the NCR53c400 extensions by Kevin Lentin and some
enhancements to the NCR5380 core.
This driver supports both NCR5380 and NCR53c400 cards in port or memory
mapped modes. Currently this driver can only support one of those mapping
modes at a time but it does support both of these chips at the same time.
The next release of this driver will support port & memory mapped cards at
the same time. It should be able to handle multiple different cards in the
same machine.
This driver supports NCR5380 and NCR53c400 and compatible cards in port or
memory mapped modes.
The drivers/scsi/Makefile has an override in it for the most common
NCR53c400 card, the Trantor T130B in its default configuration:
Port: 0x350
IRQ : 5
Use of an interrupt is recommended, if supported by the board, as this will
allow targets to disconnect and thereby improve SCSI bus utilization.
If the irq parameter is 254 or is omitted entirely, the driver will probe
for the correct IRQ line automatically. If the irq parameter is 0 or 255
then no IRQ will be used.
The NCR53c400 does not support DMA but it does have Pseudo-DMA which is
supported by the driver.
......@@ -47,22 +45,24 @@ These old-style parameters can support only one card:
dtc_3181e=1 to set up for a Domex Technology Corp 3181E board
hp_c2502=1 to set up for a Hewlett Packard C2502 board
e.g.
OLD: modprobe g_NCR5380 ncr_irq=5 ncr_addr=0x350 ncr_5380=1
NEW: modprobe g_NCR5380 irq=5 base=0x350 card=0
for a port mapped NCR5380 board or
OLD: modprobe g_NCR5380 ncr_irq=255 ncr_addr=0xc8000 ncr_53c400=1
NEW: modprobe g_NCR5380 irq=255 base=0xc8000 card=1
for a memory mapped NCR53C400 board with interrupts disabled or
E.g. Trantor T130B in its default configuration:
modprobe g_NCR5380 irq=5 base=0x350 card=1
or alternatively, using the old syntax,
modprobe g_NCR5380 ncr_irq=5 ncr_addr=0x350 ncr_53c400=1
NEW: modprobe g_NCR5380 irq=0,7 base=0x240,0x300 card=3,4
for two cards: DTC3181 (in non-PnP mode) at 0x240 with no IRQ
and HP C2502 at 0x300 with IRQ 7
E.g. a port mapped NCR5380 board, driver to probe for IRQ:
modprobe g_NCR5380 base=0x350 card=0
or alternatively,
modprobe g_NCR5380 ncr_addr=0x350 ncr_5380=1
(255 should be specified for no or DMA interrupt, 254 to autoprobe for an
IRQ line if overridden on the command line.)
E.g. a memory mapped NCR53C400 board with no IRQ:
modprobe g_NCR5380 irq=255 base=0xc8000 card=1
or alternatively,
modprobe g_NCR5380 ncr_irq=255 ncr_addr=0xc8000 ncr_53c400=1
E.g. two cards, DTC3181 (in non-PnP mode) at 0x240 with no IRQ
and HP C2502 at 0x300 with IRQ 7:
modprobe g_NCR5380 irq=0,7 base=0x240,0x300 card=3,4
Kevin Lentin
K.Lentin@cs.monash.edu.au
......@@ -143,7 +143,7 @@ S: Maintained
F: drivers/net/ethernet/3com/typhoon*
3WARE SAS/SATA-RAID SCSI DRIVERS (3W-XXXX, 3W-9XXX, 3W-SAS)
M: Adam Radford <linuxraid@lsi.com>
M: Adam Radford <aradford@gmail.com>
L: linux-scsi@vger.kernel.org
W: http://www.lsi.com
S: Supported
......@@ -10136,6 +10136,12 @@ F: drivers/net/ethernet/qlogic/qed/
F: include/linux/qed/
F: drivers/net/ethernet/qlogic/qede/
QLOGIC QL41xxx ISCSI DRIVER
M: QLogic-Storage-Upstream@cavium.com
L: linux-scsi@vger.kernel.org
S: Supported
F: drivers/scsi/qedi/
QNX4 FILESYSTEM
M: Anders Larsen <al@alarsen.net>
W: http://www.alarsen.net/linux/qnx4fs/
......
......@@ -76,6 +76,7 @@ enum {
CPL_PASS_ESTABLISH = 0x41,
CPL_RX_DATA_DDP = 0x42,
CPL_PASS_ACCEPT_REQ = 0x44,
CPL_RX_ISCSI_CMP = 0x45,
CPL_TRACE_PKT_T5 = 0x48,
CPL_RX_ISCSI_DDP = 0x49,
......@@ -934,6 +935,18 @@ struct cpl_iscsi_data {
__u8 status;
};
struct cpl_rx_iscsi_cmp {
union opcode_tid ot;
__be16 pdu_len_ddp;
__be16 len;
__be32 seq;
__be16 urg;
__u8 rsvd;
__u8 status;
__be32 ulp_crc;
__be32 ddpvld;
};
struct cpl_tx_data_iso {
__be32 op_to_scsi;
__u8 reserved1;
......
......@@ -289,11 +289,12 @@ void zfcp_dbf_rec_trig(char *tag, struct zfcp_adapter *adapter,
/**
* zfcp_dbf_rec_run - trace event related to running recovery
* zfcp_dbf_rec_run_lvl - trace event related to running recovery
* @level: trace level to be used for event
* @tag: identifier for event
* @erp: erp_action running
*/
void zfcp_dbf_rec_run(char *tag, struct zfcp_erp_action *erp)
void zfcp_dbf_rec_run_lvl(int level, char *tag, struct zfcp_erp_action *erp)
{
struct zfcp_dbf *dbf = erp->adapter->dbf;
struct zfcp_dbf_rec *rec = &dbf->rec_buf;
......@@ -319,10 +320,20 @@ void zfcp_dbf_rec_run(char *tag, struct zfcp_erp_action *erp)
else
rec->u.run.rec_count = atomic_read(&erp->adapter->erp_counter);
debug_event(dbf->rec, 1, rec, sizeof(*rec));
debug_event(dbf->rec, level, rec, sizeof(*rec));
spin_unlock_irqrestore(&dbf->rec_lock, flags);
}
/**
* zfcp_dbf_rec_run - trace event related to running recovery
* @tag: identifier for event
* @erp: erp_action running
*/
void zfcp_dbf_rec_run(char *tag, struct zfcp_erp_action *erp)
{
zfcp_dbf_rec_run_lvl(1, tag, erp);
}
/**
* zfcp_dbf_rec_run_wka - trace wka port event with info like running recovery
* @tag: identifier for event
......
......@@ -2,7 +2,7 @@
* zfcp device driver
* debug feature declarations
*
* Copyright IBM Corp. 2008, 2015
* Copyright IBM Corp. 2008, 2016
*/
#ifndef ZFCP_DBF_H
......@@ -283,6 +283,30 @@ struct zfcp_dbf {
struct zfcp_dbf_scsi scsi_buf;
};
/**
* zfcp_dbf_hba_fsf_resp_suppress - true if we should not trace by default
* @req: request that has been completed
*
* Returns true if FCP response with only benign residual under count.
*/
static inline
bool zfcp_dbf_hba_fsf_resp_suppress(struct zfcp_fsf_req *req)
{
struct fsf_qtcb *qtcb = req->qtcb;
u32 fsf_stat = qtcb->header.fsf_status;
struct fcp_resp *fcp_rsp;
u8 rsp_flags, fr_status;
if (qtcb->prefix.qtcb_type != FSF_IO_COMMAND)
return false; /* not an FCP response */
fcp_rsp = (struct fcp_resp *)&qtcb->bottom.io.fcp_rsp;
rsp_flags = fcp_rsp->fr_flags;
fr_status = fcp_rsp->fr_status;
return (fsf_stat == FSF_FCP_RSP_AVAILABLE) &&
(rsp_flags == FCP_RESID_UNDER) &&
(fr_status == SAM_STAT_GOOD);
}
static inline
void zfcp_dbf_hba_fsf_resp(char *tag, int level, struct zfcp_fsf_req *req)
{
......@@ -304,7 +328,9 @@ void zfcp_dbf_hba_fsf_response(struct zfcp_fsf_req *req)
zfcp_dbf_hba_fsf_resp("fs_perr", 1, req);
} else if (qtcb->header.fsf_status != FSF_GOOD) {
zfcp_dbf_hba_fsf_resp("fs_ferr", 1, req);
zfcp_dbf_hba_fsf_resp("fs_ferr",
zfcp_dbf_hba_fsf_resp_suppress(req)
? 5 : 1, req);
} else if ((req->fsf_command == FSF_QTCB_OPEN_PORT_WITH_DID) ||
(req->fsf_command == FSF_QTCB_OPEN_LUN)) {
......@@ -388,4 +414,15 @@ void zfcp_dbf_scsi_devreset(char *tag, struct scsi_cmnd *scmnd, u8 flag)
_zfcp_dbf_scsi(tmp_tag, 1, scmnd, NULL);
}
/**
* zfcp_dbf_scsi_nullcmnd() - trace NULLify of SCSI command in dev/tgt-reset.
* @scmnd: SCSI command that was NULLified.
* @fsf_req: request that owned @scmnd.
*/
static inline void zfcp_dbf_scsi_nullcmnd(struct scsi_cmnd *scmnd,
struct zfcp_fsf_req *fsf_req)
{
_zfcp_dbf_scsi("scfc__1", 3, scmnd, fsf_req);
}
#endif /* ZFCP_DBF_H */
......@@ -3,7 +3,7 @@
*
* Error Recovery Procedures (ERP).
*
* Copyright IBM Corp. 2002, 2015
* Copyright IBM Corp. 2002, 2016
*/
#define KMSG_COMPONENT "zfcp"
......@@ -1204,6 +1204,62 @@ static void zfcp_erp_action_dequeue(struct zfcp_erp_action *erp_action)
}
}
/**
* zfcp_erp_try_rport_unblock - unblock rport if no more/new recovery
* @port: zfcp_port whose fc_rport we should try to unblock
*/
static void zfcp_erp_try_rport_unblock(struct zfcp_port *port)
{
unsigned long flags;
struct zfcp_adapter *adapter = port->adapter;
int port_status;
struct Scsi_Host *shost = adapter->scsi_host;
struct scsi_device *sdev;
write_lock_irqsave(&adapter->erp_lock, flags);
port_status = atomic_read(&port->status);
if ((port_status & ZFCP_STATUS_COMMON_UNBLOCKED) == 0 ||
(port_status & (ZFCP_STATUS_COMMON_ERP_INUSE |
ZFCP_STATUS_COMMON_ERP_FAILED)) != 0) {
/* new ERP of severity >= port triggered elsewhere meanwhile or
* local link down (adapter erp_failed but not clear unblock)
*/
zfcp_dbf_rec_run_lvl(4, "ertru_p", &port->erp_action);
write_unlock_irqrestore(&adapter->erp_lock, flags);
return;
}
spin_lock(shost->host_lock);
__shost_for_each_device(sdev, shost) {
struct zfcp_scsi_dev *zsdev = sdev_to_zfcp(sdev);
int lun_status;
if (zsdev->port != port)
continue;
/* LUN under port of interest */
lun_status = atomic_read(&zsdev->status);
if ((lun_status & ZFCP_STATUS_COMMON_ERP_FAILED) != 0)
continue; /* unblock rport despite failed LUNs */
/* LUN recovery not given up yet [maybe follow-up pending] */
if ((lun_status & ZFCP_STATUS_COMMON_UNBLOCKED) == 0 ||
(lun_status & ZFCP_STATUS_COMMON_ERP_INUSE) != 0) {
/* LUN blocked:
* not yet unblocked [LUN recovery pending]
* or meanwhile blocked [new LUN recovery triggered]
*/
zfcp_dbf_rec_run_lvl(4, "ertru_l", &zsdev->erp_action);
spin_unlock(shost->host_lock);
write_unlock_irqrestore(&adapter->erp_lock, flags);
return;
}
}
/* now port has no child or all children have completed recovery,
* and no ERP of severity >= port was meanwhile triggered elsewhere
*/
zfcp_scsi_schedule_rport_register(port);
spin_unlock(shost->host_lock);
write_unlock_irqrestore(&adapter->erp_lock, flags);
}
static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, int result)
{
struct zfcp_adapter *adapter = act->adapter;
......@@ -1214,6 +1270,7 @@ static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, int result)
case ZFCP_ERP_ACTION_REOPEN_LUN:
if (!(act->status & ZFCP_STATUS_ERP_NO_REF))
scsi_device_put(sdev);
zfcp_erp_try_rport_unblock(port);
break;
case ZFCP_ERP_ACTION_REOPEN_PORT:
......@@ -1224,7 +1281,7 @@ static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, int result)
*/
if (act->step != ZFCP_ERP_STEP_UNINITIALIZED)
if (result == ZFCP_ERP_SUCCEEDED)
zfcp_scsi_schedule_rport_register(port);
zfcp_erp_try_rport_unblock(port);
/* fall through */
case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
put_device(&port->dev);
......
......@@ -3,7 +3,7 @@
*
* External function declarations.
*
* Copyright IBM Corp. 2002, 2015
* Copyright IBM Corp. 2002, 2016
*/
#ifndef ZFCP_EXT_H
......@@ -35,6 +35,8 @@ extern void zfcp_dbf_adapter_unregister(struct zfcp_adapter *);
extern void zfcp_dbf_rec_trig(char *, struct zfcp_adapter *,
struct zfcp_port *, struct scsi_device *, u8, u8);
extern void zfcp_dbf_rec_run(char *, struct zfcp_erp_action *);
extern void zfcp_dbf_rec_run_lvl(int level, char *tag,
struct zfcp_erp_action *erp);
extern void zfcp_dbf_rec_run_wka(char *, struct zfcp_fc_wka_port *, u64);
extern void zfcp_dbf_hba_fsf_uss(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_fsf_res(char *, int, struct zfcp_fsf_req *);
......
......@@ -3,7 +3,7 @@
*
* Interface to the FSF support functions.
*
* Copyright IBM Corp. 2002, 2015
* Copyright IBM Corp. 2002, 2016
*/
#ifndef FSF_H
......@@ -78,6 +78,7 @@
#define FSF_APP_TAG_CHECK_FAILURE 0x00000082
#define FSF_REF_TAG_CHECK_FAILURE 0x00000083
#define FSF_ADAPTER_STATUS_AVAILABLE 0x000000AD
#define FSF_FCP_RSP_AVAILABLE 0x000000AF
#define FSF_UNKNOWN_COMMAND 0x000000E2
#define FSF_UNKNOWN_OP_SUBTYPE 0x000000E3
#define FSF_INVALID_COMMAND_OPTION 0x000000E5
......
......@@ -4,7 +4,7 @@
* Data structure and helper functions for tracking pending FSF
* requests.
*
* Copyright IBM Corp. 2009
* Copyright IBM Corp. 2009, 2016
*/
#ifndef ZFCP_REQLIST_H
......@@ -180,4 +180,32 @@ static inline void zfcp_reqlist_move(struct zfcp_reqlist *rl,
spin_unlock_irqrestore(&rl->lock, flags);
}
/**
* zfcp_reqlist_apply_for_all() - apply a function to every request.
* @rl: the requestlist that contains the target requests.
* @f: the function to apply to each request; the first parameter of the
* function will be the target-request; the second parameter is the same
* pointer as given with the argument @data.
* @data: freely chosen argument; passed through to @f as second parameter.
*
* Uses :c:macro:`list_for_each_entry` to iterate over the lists in the hash-
* table (not a 'safe' variant, so don't modify the list).
*
* Holds @rl->lock over the entire request-iteration.
*/
static inline void
zfcp_reqlist_apply_for_all(struct zfcp_reqlist *rl,
void (*f)(struct zfcp_fsf_req *, void *), void *data)
{
struct zfcp_fsf_req *req;
unsigned long flags;
unsigned int i;
spin_lock_irqsave(&rl->lock, flags);
for (i = 0; i < ZFCP_REQ_LIST_BUCKETS; i++)
list_for_each_entry(req, &rl->buckets[i], list)
f(req, data);
spin_unlock_irqrestore(&rl->lock, flags);
}
#endif /* ZFCP_REQLIST_H */
......@@ -3,7 +3,7 @@
*
* Interface to Linux SCSI midlayer.
*
* Copyright IBM Corp. 2002, 2015
* Copyright IBM Corp. 2002, 2016
*/
#define KMSG_COMPONENT "zfcp"
......@@ -88,9 +88,7 @@ int zfcp_scsi_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scpnt)
}
if (unlikely(!(status & ZFCP_STATUS_COMMON_UNBLOCKED))) {
/* This could be either
* open LUN pending: this is temporary, will result in
* open LUN or ERP_FAILED, so retry command
/* This could be
* call to rport_delete pending: mimic retry from
* fc_remote_port_chkready until rport is BLOCKED
*/
......@@ -209,6 +207,57 @@ static int zfcp_scsi_eh_abort_handler(struct scsi_cmnd *scpnt)
return retval;
}
struct zfcp_scsi_req_filter {
u8 tmf_scope;
u32 lun_handle;
u32 port_handle;
};
static void zfcp_scsi_forget_cmnd(struct zfcp_fsf_req *old_req, void *data)
{
struct zfcp_scsi_req_filter *filter =
(struct zfcp_scsi_req_filter *)data;
/* already aborted - prevent side-effects - or not a SCSI command */
if (old_req->data == NULL || old_req->fsf_command != FSF_QTCB_FCP_CMND)
return;
/* (tmf_scope == FCP_TMF_TGT_RESET || tmf_scope == FCP_TMF_LUN_RESET) */
if (old_req->qtcb->header.port_handle != filter->port_handle)
return;
if (filter->tmf_scope == FCP_TMF_LUN_RESET &&
old_req->qtcb->header.lun_handle != filter->lun_handle)
return;
zfcp_dbf_scsi_nullcmnd((struct scsi_cmnd *)old_req->data, old_req);
old_req->data = NULL;
}
static void zfcp_scsi_forget_cmnds(struct zfcp_scsi_dev *zsdev, u8 tm_flags)
{
struct zfcp_adapter *adapter = zsdev->port->adapter;
struct zfcp_scsi_req_filter filter = {
.tmf_scope = FCP_TMF_TGT_RESET,
.port_handle = zsdev->port->handle,
};
unsigned long flags;
if (tm_flags == FCP_TMF_LUN_RESET) {
filter.tmf_scope = FCP_TMF_LUN_RESET;
filter.lun_handle = zsdev->lun_handle;
}
/*
* abort_lock secures against other processings - in the abort-function
* and normal cmnd-handler - of (struct zfcp_fsf_req *)->data
*/
write_lock_irqsave(&adapter->abort_lock, flags);
zfcp_reqlist_apply_for_all(adapter->req_list, zfcp_scsi_forget_cmnd,
&filter);
write_unlock_irqrestore(&adapter->abort_lock, flags);
}
static int zfcp_task_mgmt_function(struct scsi_cmnd *scpnt, u8 tm_flags)
{
struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(scpnt->device);
......@@ -241,8 +290,10 @@ static int zfcp_task_mgmt_function(struct scsi_cmnd *scpnt, u8 tm_flags)
if (fsf_req->status & ZFCP_STATUS_FSFREQ_TMFUNCFAILED) {
zfcp_dbf_scsi_devreset("fail", scpnt, tm_flags);
retval = FAILED;
} else
} else {
zfcp_dbf_scsi_devreset("okay", scpnt, tm_flags);
zfcp_scsi_forget_cmnds(zfcp_sdev, tm_flags);
}
zfcp_fsf_req_free(fsf_req);
return retval;
......
/*
3w-9xxx.c -- 3ware 9000 Storage Controller device driver for Linux.
Written By: Adam Radford <linuxraid@lsi.com>
Modifications By: Tom Couch <linuxraid@lsi.com>
Written By: Adam Radford <aradford@gmail.com>
Modifications By: Tom Couch
Copyright (C) 2004-2009 Applied Micro Circuits Corporation.
Copyright (C) 2010 LSI Corporation.
......@@ -41,10 +41,7 @@
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Bugs/Comments/Suggestions should be mailed to:
linuxraid@lsi.com
For more information, goto:
http://www.lsi.com
aradford@gmail.com
Note: This version of the driver does not contain a bundled firmware
image.
......
/*
3w-9xxx.h -- 3ware 9000 Storage Controller device driver for Linux.
Written By: Adam Radford <linuxraid@lsi.com>
Modifications By: Tom Couch <linuxraid@lsi.com>
Written By: Adam Radford <aradford@gmail.com>
Modifications By: Tom Couch
Copyright (C) 2004-2009 Applied Micro Circuits Corporation.
Copyright (C) 2010 LSI Corporation.
......@@ -41,10 +41,7 @@
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Bugs/Comments/Suggestions should be mailed to:
linuxraid@lsi.com
For more information, goto:
http://www.lsi.com
aradford@gmail.com
*/
#ifndef _3W_9XXX_H
......
/*
3w-sas.c -- LSI 3ware SAS/SATA-RAID Controller device driver for Linux.
Written By: Adam Radford <linuxraid@lsi.com>
Written By: Adam Radford <aradford@gmail.com>
Copyright (C) 2009 LSI Corporation.
......@@ -43,10 +43,7 @@
LSI 3ware 9750 6Gb/s SAS/SATA-RAID
Bugs/Comments/Suggestions should be mailed to:
linuxraid@lsi.com
For more information, goto:
http://www.lsi.com
aradford@gmail.com
History
-------
......
/*
3w-sas.h -- LSI 3ware SAS/SATA-RAID Controller device driver for Linux.
Written By: Adam Radford <linuxraid@lsi.com>
Written By: Adam Radford <aradford@gmail.com>
Copyright (C) 2009 LSI Corporation.
......@@ -39,10 +39,7 @@
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Bugs/Comments/Suggestions should be mailed to:
linuxraid@lsi.com
For more information, goto:
http://www.lsi.com
aradford@gmail.com
*/
#ifndef _3W_SAS_H
......
/*
3w-xxxx.c -- 3ware Storage Controller device driver for Linux.
Written By: Adam Radford <linuxraid@lsi.com>
Written By: Adam Radford <aradford@gmail.com>
Modifications By: Joel Jacobson <linux@3ware.com>
Arnaldo Carvalho de Melo <acme@conectiva.com.br>
Brad Strand <linux@3ware.com>
......@@ -47,10 +47,9 @@
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Bugs/Comments/Suggestions should be mailed to:
linuxraid@lsi.com
For more information, goto:
http://www.lsi.com
aradford@gmail.com
History
-------
......
/*
3w-xxxx.h -- 3ware Storage Controller device driver for Linux.
Written By: Adam Radford <linuxraid@lsi.com>
Written By: Adam Radford <aradford@gmail.com>
Modifications By: Joel Jacobson <linux@3ware.com>
Arnaldo Carvalho de Melo <acme@conectiva.com.br>
Brad Strand <linux@3ware.com>
......@@ -45,7 +45,8 @@
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Bugs/Comments/Suggestions should be mailed to:
linuxraid@lsi.com
aradford@gmail.com
For more information, goto:
http://www.lsi.com
......
......@@ -1233,6 +1233,7 @@ config SCSI_QLOGICPTI
source "drivers/scsi/qla2xxx/Kconfig"
source "drivers/scsi/qla4xxx/Kconfig"
source "drivers/scsi/qedi/Kconfig"
config SCSI_LPFC
tristate "Emulex LightPulse Fibre Channel Support"
......
......@@ -131,6 +131,7 @@ obj-$(CONFIG_PS3_ROM) += ps3rom.o
obj-$(CONFIG_SCSI_CXGB3_ISCSI) += libiscsi.o libiscsi_tcp.o cxgbi/
obj-$(CONFIG_SCSI_CXGB4_ISCSI) += libiscsi.o libiscsi_tcp.o cxgbi/
obj-$(CONFIG_SCSI_BNX2_ISCSI) += libiscsi.o bnx2i/
obj-$(CONFIG_QEDI) += libiscsi.o qedi/
obj-$(CONFIG_BE2ISCSI) += libiscsi.o be2iscsi/
obj-$(CONFIG_SCSI_ESAS2R) += esas2r/
obj-$(CONFIG_SCSI_PMCRAID) += pmcraid.o
......
......@@ -97,9 +97,6 @@
* and macros and include this file in your driver.
*
* These macros control options :
* AUTOPROBE_IRQ - if defined, the NCR5380_probe_irq() function will be
* defined.
*
* AUTOSENSE - if defined, REQUEST SENSE will be performed automatically
* for commands that return with a CHECK CONDITION status.
*
......@@ -127,9 +124,7 @@
* NCR5380_dma_residual - residual byte count
*
* The generic driver is initialized by calling NCR5380_init(instance),
* after setting the appropriate host specific fields and ID. If the
* driver wishes to autoprobe for an IRQ line, the NCR5380_probe_irq(instance,
* possible) function may be used.
* after setting the appropriate host specific fields and ID.
*/
#ifndef NCR5380_io_delay
......@@ -351,76 +346,6 @@ static void NCR5380_print_phase(struct Scsi_Host *instance)
}
#endif
static int probe_irq;
/**
* probe_intr - helper for IRQ autoprobe
* @irq: interrupt number
* @dev_id: unused
* @regs: unused
*
* Set a flag to indicate the IRQ in question was received. This is
* used by the IRQ probe code.
*/
static irqreturn_t probe_intr(int irq, void *dev_id)
{
probe_irq = irq;
return IRQ_HANDLED;
}
/**
* NCR5380_probe_irq - find the IRQ of an NCR5380
* @instance: NCR5380 controller
* @possible: bitmask of ISA IRQ lines
*
* Autoprobe for the IRQ line used by the NCR5380 by triggering an IRQ
* and then looking to see what interrupt actually turned up.
*/
static int __maybe_unused NCR5380_probe_irq(struct Scsi_Host *instance,
int possible)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
unsigned long timeout;
int trying_irqs, i, mask;
for (trying_irqs = 0, i = 1, mask = 2; i < 16; ++i, mask <<= 1)
if ((mask & possible) && (request_irq(i, &probe_intr, 0, "NCR-probe", NULL) == 0))
trying_irqs |= mask;
timeout = jiffies + msecs_to_jiffies(250);
probe_irq = NO_IRQ;
/*
* A interrupt is triggered whenever BSY = false, SEL = true
* and a bit set in the SELECT_ENABLE_REG is asserted on the
* SCSI bus.
*
* Note that the bus is only driven when the phase control signals
* (I/O, C/D, and MSG) match those in the TCR, so we must reset that
* to zero.
*/
NCR5380_write(TARGET_COMMAND_REG, 0);
NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
NCR5380_write(OUTPUT_DATA_REG, hostdata->id_mask);
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE | ICR_ASSERT_DATA | ICR_ASSERT_SEL);
while (probe_irq == NO_IRQ && time_before(jiffies, timeout))
schedule_timeout_uninterruptible(1);
NCR5380_write(SELECT_ENABLE_REG, 0);
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
for (i = 1, mask = 2; i < 16; ++i, mask <<= 1)
if (trying_irqs & mask)
free_irq(i, NULL);
return probe_irq;
}
/**
* NCR58380_info - report driver and host information
* @instance: relevant scsi host instance
......
......@@ -199,16 +199,6 @@
#define PHASE_SR_TO_TCR(phase) ((phase) >> 2)
/*
* These are "special" values for the irq and dma_channel fields of the
* Scsi_Host structure
*/
#define DMA_NONE 255
#define IRQ_AUTO 254
#define DMA_AUTO 254
#define PORT_AUTO 0xffff /* autoprobe io port for 53c400a */
#ifndef NO_IRQ
#define NO_IRQ 0
#endif
......@@ -290,7 +280,6 @@ static void NCR5380_print(struct Scsi_Host *instance);
#define NCR5380_dprint_phase(flg, arg) do {} while (0)
#endif
static int NCR5380_probe_irq(struct Scsi_Host *instance, int possible);
static int NCR5380_init(struct Scsi_Host *instance, int flags);
static int NCR5380_maybe_reset_bus(struct Scsi_Host *);
static void NCR5380_exit(struct Scsi_Host *instance);
......
......@@ -160,7 +160,6 @@ static const struct pci_device_id aac_pci_tbl[] = {
{ 0x9005, 0x028b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 62 }, /* Adaptec PMC Series 6 (Tupelo) */
{ 0x9005, 0x028c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 63 }, /* Adaptec PMC Series 7 (Denali) */
{ 0x9005, 0x028d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 64 }, /* Adaptec PMC Series 8 */
{ 0x9005, 0x028f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 65 }, /* Adaptec PMC Series 9 */
{ 0,}
};
MODULE_DEVICE_TABLE(pci, aac_pci_tbl);
......@@ -239,7 +238,6 @@ static struct aac_driver_ident aac_drivers[] = {
{ aac_src_init, "aacraid", "ADAPTEC ", "RAID ", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 6 (Tupelo) */
{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID ", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 7 (Denali) */
{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID ", 2, AAC_QUIRK_SRC }, /* Adaptec PMC Series 8 */
{ aac_srcv_init, "aacraid", "ADAPTEC ", "RAID ", 2, AAC_QUIRK_SRC } /* Adaptec PMC Series 9 */
};
/**
......
......@@ -189,7 +189,6 @@ static void send_act_open_req(struct cxgbi_sock *csk, struct sk_buff *skb,
struct l2t_entry *e)
{
struct cxgb4_lld_info *lldi = cxgbi_cdev_priv(csk->cdev);
int t4 = is_t4(lldi->adapter_type);
int wscale = cxgbi_sock_compute_wscale(csk->mss_idx);
unsigned long long opt0;
unsigned int opt2;
......@@ -232,7 +231,7 @@ static void send_act_open_req(struct cxgbi_sock *csk, struct sk_buff *skb,
csk, &req->local_ip, ntohs(req->local_port),
&req->peer_ip, ntohs(req->peer_port),
csk->atid, csk->rss_qid);
} else {
} else if (is_t5(lldi->adapter_type)) {
struct cpl_t5_act_open_req *req =
(struct cpl_t5_act_open_req *)skb->head;
u32 isn = (prandom_u32() & ~7UL) - 1;
......@@ -260,12 +259,45 @@ static void send_act_open_req(struct cxgbi_sock *csk, struct sk_buff *skb,
csk, &req->local_ip, ntohs(req->local_port),
&req->peer_ip, ntohs(req->peer_port),
csk->atid, csk->rss_qid);
} else {
struct cpl_t6_act_open_req *req =
(struct cpl_t6_act_open_req *)skb->head;
u32 isn = (prandom_u32() & ~7UL) - 1;
INIT_TP_WR(req, 0);
OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_ACT_OPEN_REQ,
qid_atid));
req->local_port = csk->saddr.sin_port;
req->peer_port = csk->daddr.sin_port;
req->local_ip = csk->saddr.sin_addr.s_addr;
req->peer_ip = csk->daddr.sin_addr.s_addr;
req->opt0 = cpu_to_be64(opt0);
req->params = cpu_to_be64(FILTER_TUPLE_V(
cxgb4_select_ntuple(
csk->cdev->ports[csk->port_id],
csk->l2t)));
req->rsvd = cpu_to_be32(isn);
opt2 |= T5_ISS_VALID;
opt2 |= RX_FC_DISABLE_F;
opt2 |= T5_OPT_2_VALID_F;
req->opt2 = cpu_to_be32(opt2);
req->rsvd2 = cpu_to_be32(0);
req->opt3 = cpu_to_be32(0);
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_SOCK,
"csk t6 0x%p, %pI4:%u-%pI4:%u, atid %d, qid %u.\n",
csk, &req->local_ip, ntohs(req->local_port),
&req->peer_ip, ntohs(req->peer_port),
csk->atid, csk->rss_qid);
}
set_wr_txq(skb, CPL_PRIORITY_SETUP, csk->port_id);
pr_info_ipaddr("t%d csk 0x%p,%u,0x%lx,%u, rss_qid %u.\n",
(&csk->saddr), (&csk->daddr), t4 ? 4 : 5, csk,
(&csk->saddr), (&csk->daddr),
CHELSIO_CHIP_VERSION(lldi->adapter_type), csk,
csk->state, csk->flags, csk->atid, csk->rss_qid);
cxgb4_l2t_send(csk->cdev->ports[csk->port_id], skb, csk->l2t);
......@@ -276,7 +308,6 @@ static void send_act_open_req6(struct cxgbi_sock *csk, struct sk_buff *skb,
struct l2t_entry *e)
{
struct cxgb4_lld_info *lldi = cxgbi_cdev_priv(csk->cdev);
int t4 = is_t4(lldi->adapter_type);
int wscale = cxgbi_sock_compute_wscale(csk->mss_idx);
unsigned long long opt0;
unsigned int opt2;
......@@ -294,10 +325,9 @@ static void send_act_open_req6(struct cxgbi_sock *csk, struct sk_buff *skb,
opt2 = RX_CHANNEL_V(0) |
RSS_QUEUE_VALID_F |
RX_FC_DISABLE_F |
RSS_QUEUE_V(csk->rss_qid);
if (t4) {
if (is_t4(lldi->adapter_type)) {
struct cpl_act_open_req6 *req =
(struct cpl_act_open_req6 *)skb->head;
......@@ -322,7 +352,7 @@ static void send_act_open_req6(struct cxgbi_sock *csk, struct sk_buff *skb,
req->params = cpu_to_be32(cxgb4_select_ntuple(
csk->cdev->ports[csk->port_id],
csk->l2t));
} else {
} else if (is_t5(lldi->adapter_type)) {
struct cpl_t5_act_open_req6 *req =
(struct cpl_t5_act_open_req6 *)skb->head;
......@@ -345,12 +375,41 @@ static void send_act_open_req6(struct cxgbi_sock *csk, struct sk_buff *skb,
req->params = cpu_to_be64(FILTER_TUPLE_V(cxgb4_select_ntuple(
csk->cdev->ports[csk->port_id],
csk->l2t)));
} else {
struct cpl_t6_act_open_req6 *req =
(struct cpl_t6_act_open_req6 *)skb->head;
INIT_TP_WR(req, 0);
OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_ACT_OPEN_REQ6,
qid_atid));
req->local_port = csk->saddr6.sin6_port;
req->peer_port = csk->daddr6.sin6_port;
req->local_ip_hi = *(__be64 *)(csk->saddr6.sin6_addr.s6_addr);
req->local_ip_lo = *(__be64 *)(csk->saddr6.sin6_addr.s6_addr +
8);
req->peer_ip_hi = *(__be64 *)(csk->daddr6.sin6_addr.s6_addr);
req->peer_ip_lo = *(__be64 *)(csk->daddr6.sin6_addr.s6_addr +
8);
req->opt0 = cpu_to_be64(opt0);
opt2 |= RX_FC_DISABLE_F;
opt2 |= T5_OPT_2_VALID_F;
req->opt2 = cpu_to_be32(opt2);
req->params = cpu_to_be64(FILTER_TUPLE_V(cxgb4_select_ntuple(
csk->cdev->ports[csk->port_id],
csk->l2t)));
req->rsvd2 = cpu_to_be32(0);
req->opt3 = cpu_to_be32(0);
}
set_wr_txq(skb, CPL_PRIORITY_SETUP, csk->port_id);
pr_info("t%d csk 0x%p,%u,0x%lx,%u, [%pI6]:%u-[%pI6]:%u, rss_qid %u.\n",
t4 ? 4 : 5, csk, csk->state, csk->flags, csk->atid,
CHELSIO_CHIP_VERSION(lldi->adapter_type), csk, csk->state,
csk->flags, csk->atid,
&csk->saddr6.sin6_addr, ntohs(csk->saddr.sin_port),
&csk->daddr6.sin6_addr, ntohs(csk->daddr.sin_port),
csk->rss_qid);
......@@ -742,7 +801,7 @@ static void do_act_establish(struct cxgbi_device *cdev, struct sk_buff *skb)
(&csk->saddr), (&csk->daddr),
atid, tid, csk, csk->state, csk->flags, rcv_isn);
module_put(THIS_MODULE);
module_put(cdev->owner);
cxgbi_sock_get(csk);
csk->tid = tid;
......@@ -891,7 +950,7 @@ static void do_act_open_rpl(struct cxgbi_device *cdev, struct sk_buff *skb)
if (is_neg_adv(status))
goto rel_skb;
module_put(THIS_MODULE);
module_put(cdev->owner);
if (status && status != CPL_ERR_TCAM_FULL &&
status != CPL_ERR_CONN_EXIST &&
......@@ -1173,6 +1232,101 @@ static void do_rx_iscsi_hdr(struct cxgbi_device *cdev, struct sk_buff *skb)
__kfree_skb(skb);
}
static void do_rx_iscsi_data(struct cxgbi_device *cdev, struct sk_buff *skb)
{
struct cxgbi_sock *csk;
struct cpl_iscsi_hdr *cpl = (struct cpl_iscsi_hdr *)skb->data;
struct cxgb4_lld_info *lldi = cxgbi_cdev_priv(cdev);
struct tid_info *t = lldi->tids;
struct sk_buff *lskb;
u32 tid = GET_TID(cpl);
u16 pdu_len_ddp = be16_to_cpu(cpl->pdu_len_ddp);
csk = lookup_tid(t, tid);
if (unlikely(!csk)) {
pr_err("can't find conn. for tid %u.\n", tid);
goto rel_skb;
}
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_PDU_RX,
"csk 0x%p,%u,0x%lx, tid %u, skb 0x%p,%u, 0x%x.\n",
csk, csk->state, csk->flags, csk->tid, skb,
skb->len, pdu_len_ddp);
spin_lock_bh(&csk->lock);
if (unlikely(csk->state >= CTP_PASSIVE_CLOSE)) {
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_SOCK,
"csk 0x%p,%u,0x%lx,%u, bad state.\n",
csk, csk->state, csk->flags, csk->tid);
if (csk->state != CTP_ABORTING)
goto abort_conn;
else
goto discard;
}
cxgbi_skcb_tcp_seq(skb) = be32_to_cpu(cpl->seq);
cxgbi_skcb_flags(skb) = 0;
skb_reset_transport_header(skb);
__skb_pull(skb, sizeof(*cpl));
__pskb_trim(skb, ntohs(cpl->len));
if (!csk->skb_ulp_lhdr)
csk->skb_ulp_lhdr = skb;
lskb = csk->skb_ulp_lhdr;
cxgbi_skcb_set_flag(lskb, SKCBF_RX_DATA);
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_PDU_RX,
"csk 0x%p,%u,0x%lx, skb 0x%p data, 0x%p.\n",
csk, csk->state, csk->flags, skb, lskb);
__skb_queue_tail(&csk->receive_queue, skb);
spin_unlock_bh(&csk->lock);
return;
abort_conn:
send_abort_req(csk);
discard:
spin_unlock_bh(&csk->lock);
rel_skb:
__kfree_skb(skb);
}
static void
cxgb4i_process_ddpvld(struct cxgbi_sock *csk,
struct sk_buff *skb, u32 ddpvld)
{
if (ddpvld & (1 << CPL_RX_DDP_STATUS_HCRC_SHIFT)) {
pr_info("csk 0x%p, lhdr 0x%p, status 0x%x, hcrc bad 0x%lx.\n",
csk, skb, ddpvld, cxgbi_skcb_flags(skb));
cxgbi_skcb_set_flag(skb, SKCBF_RX_HCRC_ERR);
}
if (ddpvld & (1 << CPL_RX_DDP_STATUS_DCRC_SHIFT)) {
pr_info("csk 0x%p, lhdr 0x%p, status 0x%x, dcrc bad 0x%lx.\n",
csk, skb, ddpvld, cxgbi_skcb_flags(skb));
cxgbi_skcb_set_flag(skb, SKCBF_RX_DCRC_ERR);
}
if (ddpvld & (1 << CPL_RX_DDP_STATUS_PAD_SHIFT)) {
log_debug(1 << CXGBI_DBG_PDU_RX,
"csk 0x%p, lhdr 0x%p, status 0x%x, pad bad.\n",
csk, skb, ddpvld);
cxgbi_skcb_set_flag(skb, SKCBF_RX_PAD_ERR);
}
if ((ddpvld & (1 << CPL_RX_DDP_STATUS_DDP_SHIFT)) &&
!cxgbi_skcb_test_flag(skb, SKCBF_RX_DATA)) {
log_debug(1 << CXGBI_DBG_PDU_RX,
"csk 0x%p, lhdr 0x%p, 0x%x, data ddp'ed.\n",
csk, skb, ddpvld);
cxgbi_skcb_set_flag(skb, SKCBF_RX_DATA_DDPD);
}
}
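The handler above turns individual bits of the `RX_DDP_STATUS` word into per-skb flags. A minimal userspace sketch of the same bit-decoding pattern, using hypothetical shift values (the real `CPL_RX_DDP_STATUS_*_SHIFT` constants live in the cxgb4 hardware headers):

```c
#include <stdint.h>

/* Hypothetical bit positions standing in for CPL_RX_DDP_STATUS_*_SHIFT. */
enum { HCRC_SHIFT = 0, DCRC_SHIFT = 1, PAD_SHIFT = 2, DDP_SHIFT = 3 };

/* Return a small flag mask describing which conditions are set, mirroring
 * how cxgb4i_process_ddpvld() turns ddpvld bits into skb control flags. */
unsigned int decode_ddpvld(uint32_t ddpvld)
{
	unsigned int flags = 0;

	if (ddpvld & (1u << HCRC_SHIFT))
		flags |= 0x1;	/* header digest error */
	if (ddpvld & (1u << DCRC_SHIFT))
		flags |= 0x2;	/* data digest error */
	if (ddpvld & (1u << PAD_SHIFT))
		flags |= 0x4;	/* pad error */
	if (ddpvld & (1u << DDP_SHIFT))
		flags |= 0x8;	/* payload was directly placed */
	return flags;
}
```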
static void do_rx_data_ddp(struct cxgbi_device *cdev,
struct sk_buff *skb)
{
@@ -1182,7 +1336,7 @@ static void do_rx_data_ddp(struct cxgbi_device *cdev,
unsigned int tid = GET_TID(rpl);
struct cxgb4_lld_info *lldi = cxgbi_cdev_priv(cdev);
struct tid_info *t = lldi->tids;
unsigned int status = ntohl(rpl->ddpvld);
u32 ddpvld = be32_to_cpu(rpl->ddpvld);
csk = lookup_tid(t, tid);
if (unlikely(!csk)) {
@@ -1192,7 +1346,7 @@ static void do_rx_data_ddp(struct cxgbi_device *cdev,
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_PDU_RX,
"csk 0x%p,%u,0x%lx, skb 0x%p,0x%x, lhdr 0x%p.\n",
csk, csk->state, csk->flags, skb, status, csk->skb_ulp_lhdr);
csk, csk->state, csk->flags, skb, ddpvld, csk->skb_ulp_lhdr);
spin_lock_bh(&csk->lock);
@@ -1220,29 +1374,8 @@ static void do_rx_data_ddp(struct cxgbi_device *cdev,
pr_info("tid 0x%x, RX_DATA_DDP pdulen %u != %u.\n",
csk->tid, ntohs(rpl->len), cxgbi_skcb_rx_pdulen(lskb));
if (status & (1 << CPL_RX_DDP_STATUS_HCRC_SHIFT)) {
pr_info("csk 0x%p, lhdr 0x%p, status 0x%x, hcrc bad 0x%lx.\n",
csk, lskb, status, cxgbi_skcb_flags(lskb));
cxgbi_skcb_set_flag(lskb, SKCBF_RX_HCRC_ERR);
}
if (status & (1 << CPL_RX_DDP_STATUS_DCRC_SHIFT)) {
pr_info("csk 0x%p, lhdr 0x%p, status 0x%x, dcrc bad 0x%lx.\n",
csk, lskb, status, cxgbi_skcb_flags(lskb));
cxgbi_skcb_set_flag(lskb, SKCBF_RX_DCRC_ERR);
}
if (status & (1 << CPL_RX_DDP_STATUS_PAD_SHIFT)) {
log_debug(1 << CXGBI_DBG_PDU_RX,
"csk 0x%p, lhdr 0x%p, status 0x%x, pad bad.\n",
csk, lskb, status);
cxgbi_skcb_set_flag(lskb, SKCBF_RX_PAD_ERR);
}
if ((status & (1 << CPL_RX_DDP_STATUS_DDP_SHIFT)) &&
!cxgbi_skcb_test_flag(lskb, SKCBF_RX_DATA)) {
log_debug(1 << CXGBI_DBG_PDU_RX,
"csk 0x%p, lhdr 0x%p, 0x%x, data ddp'ed.\n",
csk, lskb, status);
cxgbi_skcb_set_flag(lskb, SKCBF_RX_DATA_DDPD);
}
cxgb4i_process_ddpvld(csk, lskb, ddpvld);
log_debug(1 << CXGBI_DBG_PDU_RX,
"csk 0x%p, lskb 0x%p, f 0x%lx.\n",
csk, lskb, cxgbi_skcb_flags(lskb));
@@ -1260,6 +1393,98 @@ static void do_rx_data_ddp(struct cxgbi_device *cdev,
__kfree_skb(skb);
}
static void
do_rx_iscsi_cmp(struct cxgbi_device *cdev, struct sk_buff *skb)
{
struct cxgbi_sock *csk;
struct cpl_rx_iscsi_cmp *rpl = (struct cpl_rx_iscsi_cmp *)skb->data;
struct cxgb4_lld_info *lldi = cxgbi_cdev_priv(cdev);
struct tid_info *t = lldi->tids;
struct sk_buff *data_skb = NULL;
u32 tid = GET_TID(rpl);
u32 ddpvld = be32_to_cpu(rpl->ddpvld);
u32 seq = be32_to_cpu(rpl->seq);
u16 pdu_len_ddp = be16_to_cpu(rpl->pdu_len_ddp);
csk = lookup_tid(t, tid);
if (unlikely(!csk)) {
pr_err("can't find connection for tid %u.\n", tid);
goto rel_skb;
}
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_PDU_RX,
"csk 0x%p,%u,0x%lx, skb 0x%p,0x%x, lhdr 0x%p, len %u, "
"pdu_len_ddp %u, status %u.\n",
csk, csk->state, csk->flags, skb, ddpvld, csk->skb_ulp_lhdr,
ntohs(rpl->len), pdu_len_ddp, rpl->status);
spin_lock_bh(&csk->lock);
if (unlikely(csk->state >= CTP_PASSIVE_CLOSE)) {
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_SOCK,
"csk 0x%p,%u,0x%lx,%u, bad state.\n",
csk, csk->state, csk->flags, csk->tid);
if (csk->state != CTP_ABORTING)
goto abort_conn;
else
goto discard;
}
cxgbi_skcb_tcp_seq(skb) = seq;
cxgbi_skcb_flags(skb) = 0;
cxgbi_skcb_rx_pdulen(skb) = 0;
skb_reset_transport_header(skb);
__skb_pull(skb, sizeof(*rpl));
__pskb_trim(skb, be16_to_cpu(rpl->len));
csk->rcv_nxt = seq + pdu_len_ddp;
if (csk->skb_ulp_lhdr) {
data_skb = skb_peek(&csk->receive_queue);
if (!data_skb ||
!cxgbi_skcb_test_flag(data_skb, SKCBF_RX_DATA)) {
pr_err("Error! freelist data not found 0x%p, tid %u\n",
data_skb, tid);
goto abort_conn;
}
__skb_unlink(data_skb, &csk->receive_queue);
cxgbi_skcb_set_flag(skb, SKCBF_RX_DATA);
__skb_queue_tail(&csk->receive_queue, skb);
__skb_queue_tail(&csk->receive_queue, data_skb);
} else {
__skb_queue_tail(&csk->receive_queue, skb);
}
csk->skb_ulp_lhdr = NULL;
cxgbi_skcb_set_flag(skb, SKCBF_RX_HDR);
cxgbi_skcb_set_flag(skb, SKCBF_RX_STATUS);
cxgbi_skcb_set_flag(skb, SKCBF_RX_ISCSI_COMPL);
cxgbi_skcb_rx_ddigest(skb) = be32_to_cpu(rpl->ulp_crc);
cxgb4i_process_ddpvld(csk, skb, ddpvld);
log_debug(1 << CXGBI_DBG_PDU_RX, "csk 0x%p, skb 0x%p, f 0x%lx.\n",
csk, skb, cxgbi_skcb_flags(skb));
cxgbi_conn_pdu_ready(csk);
spin_unlock_bh(&csk->lock);
return;
abort_conn:
send_abort_req(csk);
discard:
spin_unlock_bh(&csk->lock);
rel_skb:
__kfree_skb(skb);
}
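When a T6 completion arrives after a DDP'd payload, `do_rx_iscsi_cmp()` unlinks the payload skb that is already at the head of the receive queue and requeues it behind the completion header, so consumers always see header-then-data. A toy sketch of that reordering, with a fixed array of integer tags standing in for the skb queue:

```c
#include <string.h>

#define QMAX 8

/* Toy receive queue: a fixed array of tags. */
struct rxq {
	int items[QMAX];
	int len;
};

void rxq_push(struct rxq *q, int tag)
{
	q->items[q->len++] = tag;
}

/* Queue the completion header 'hdr'. If a payload is already queued,
 * move it behind the header (skb_peek + __skb_unlink + requeue). */
void queue_completion(struct rxq *q, int hdr, int have_payload)
{
	if (have_payload && q->len > 0) {
		int data = q->items[0];
		memmove(q->items, q->items + 1, (q->len - 1) * sizeof(int));
		q->len--;
		rxq_push(q, hdr);	/* header first */
		rxq_push(q, data);	/* then the DDP'd payload */
	} else {
		rxq_push(q, hdr);
	}
}
```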
static void do_fw4_ack(struct cxgbi_device *cdev, struct sk_buff *skb)
{
struct cxgbi_sock *csk;
@@ -1382,7 +1607,6 @@ static int init_act_open(struct cxgbi_sock *csk)
void *daddr;
unsigned int step;
unsigned int size, size6;
int t4 = is_t4(lldi->adapter_type);
unsigned int linkspeed;
unsigned int rcv_winf, snd_winf;
@@ -1428,12 +1652,15 @@ static int init_act_open(struct cxgbi_sock *csk)
cxgb4_clip_get(ndev, (const u32 *)&csk->saddr6.sin6_addr, 1);
#endif
if (t4) {
if (is_t4(lldi->adapter_type)) {
size = sizeof(struct cpl_act_open_req);
size6 = sizeof(struct cpl_act_open_req6);
} else {
} else if (is_t5(lldi->adapter_type)) {
size = sizeof(struct cpl_t5_act_open_req);
size6 = sizeof(struct cpl_t5_act_open_req6);
} else {
size = sizeof(struct cpl_t6_act_open_req);
size6 = sizeof(struct cpl_t6_act_open_req6);
}
if (csk->csk_family == AF_INET)
@@ -1452,8 +1679,8 @@ static int init_act_open(struct cxgbi_sock *csk)
csk->mtu = dst_mtu(csk->dst);
cxgb4_best_mtu(lldi->mtus, csk->mtu, &csk->mss_idx);
csk->tx_chan = cxgb4_port_chan(ndev);
/* SMT two entries per row */
csk->smac_idx = ((cxgb4_port_viid(ndev) & 0x7F)) << 1;
csk->smac_idx = cxgb4_tp_smt_idx(lldi->adapter_type,
cxgb4_port_viid(ndev));
step = lldi->ntxq / lldi->nchan;
csk->txq_idx = cxgb4_port_idx(ndev) * step;
step = lldi->nrxq / lldi->nchan;
@@ -1486,7 +1713,11 @@ static int init_act_open(struct cxgbi_sock *csk)
csk->mtu, csk->mss_idx, csk->smac_idx);
/* must wait for either a act_open_rpl or act_open_establish */
try_module_get(THIS_MODULE);
if (!try_module_get(cdev->owner)) {
pr_err("%s, try_module_get failed.\n", ndev->name);
goto rel_resource;
}
cxgbi_sock_set_state(csk, CTP_ACTIVE_OPEN);
if (csk->csk_family == AF_INET)
send_act_open_req(csk, skb, csk->l2t);
@@ -1521,10 +1752,11 @@ static cxgb4i_cplhandler_func cxgb4i_cplhandlers[NUM_CPL_CMDS] = {
[CPL_CLOSE_CON_RPL] = do_close_con_rpl,
[CPL_FW4_ACK] = do_fw4_ack,
[CPL_ISCSI_HDR] = do_rx_iscsi_hdr,
[CPL_ISCSI_DATA] = do_rx_iscsi_hdr,
[CPL_ISCSI_DATA] = do_rx_iscsi_data,
[CPL_SET_TCB_RPL] = do_set_tcb_rpl,
[CPL_RX_DATA_DDP] = do_rx_data_ddp,
[CPL_RX_ISCSI_DDP] = do_rx_data_ddp,
[CPL_RX_ISCSI_CMP] = do_rx_iscsi_cmp,
[CPL_RX_DATA] = do_rx_data,
};
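The `cxgb4i_cplhandlers[]` table above dispatches each CPL opcode to its handler, leaving unhandled opcodes NULL. A generic sketch of that function-pointer dispatch pattern, with made-up opcodes and trivial handlers in place of the real CPL messages:

```c
#include <stddef.h>

/* Hypothetical opcode space; the driver indexes by real CPL_* opcodes. */
enum { OP_HDR = 0, OP_DATA = 1, OP_MAX = 4 };

typedef int (*cpl_handler)(int payload);

int handle_hdr(int payload)  { return payload + 1; }
int handle_data(int payload) { return payload * 2; }

/* Sparse table: unregistered opcodes stay NULL, as in cxgb4i_cplhandlers[]. */
static cpl_handler handlers[OP_MAX] = {
	[OP_HDR]  = handle_hdr,
	[OP_DATA] = handle_data,
};

int dispatch(int opcode, int payload)
{
	if (opcode < 0 || opcode >= OP_MAX || !handlers[opcode])
		return -1;	/* no handler registered */
	return handlers[opcode](payload);
}
```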
@@ -1794,10 +2026,12 @@ static void *t4_uld_add(const struct cxgb4_lld_info *lldi)
cdev->nports = lldi->nports;
cdev->mtus = lldi->mtus;
cdev->nmtus = NMTUS;
cdev->rx_credit_thres = cxgb4i_rx_credit_thres;
cdev->rx_credit_thres = (CHELSIO_CHIP_VERSION(lldi->adapter_type) <=
CHELSIO_T5) ? cxgb4i_rx_credit_thres : 0;
cdev->skb_tx_rsvd = CXGB4I_TX_HEADER_LEN;
cdev->skb_rx_extra = sizeof(struct cpl_iscsi_hdr);
cdev->itp = &cxgb4i_iscsi_transport;
cdev->owner = THIS_MODULE;
cdev->pfvf = FW_VIID_PFN_G(cxgb4_port_viid(lldi->ports[0]))
<< FW_VIID_PFN_S;
......
@@ -642,6 +642,12 @@ static struct cxgbi_sock *cxgbi_check_route(struct sockaddr *dst_addr)
n->dev->name, ndev->name, mtu);
}
if (!(ndev->flags & IFF_UP) || !netif_carrier_ok(ndev)) {
pr_info("%s interface not up.\n", ndev->name);
err = -ENETDOWN;
goto rel_neigh;
}
cdev = cxgbi_device_find_by_netdev(ndev, &port);
if (!cdev) {
pr_info("dst %pI4, %s, NOT cxgbi device.\n",
@@ -736,6 +742,12 @@ static struct cxgbi_sock *cxgbi_check_route6(struct sockaddr *dst_addr)
}
ndev = n->dev;
if (!(ndev->flags & IFF_UP) || !netif_carrier_ok(ndev)) {
pr_info("%s interface not up.\n", ndev->name);
err = -ENETDOWN;
goto rel_rt;
}
if (ipv6_addr_is_multicast(&daddr6->sin6_addr)) {
pr_info("multi-cast route %pI6 port %u, dev %s.\n",
daddr6->sin6_addr.s6_addr,
@@ -896,6 +908,7 @@ EXPORT_SYMBOL_GPL(cxgbi_sock_fail_act_open);
void cxgbi_sock_act_open_req_arp_failure(void *handle, struct sk_buff *skb)
{
struct cxgbi_sock *csk = (struct cxgbi_sock *)skb->sk;
struct module *owner = csk->cdev->owner;
log_debug(1 << CXGBI_DBG_SOCK, "csk 0x%p,%u,0x%lx,%u.\n",
csk, (csk)->state, (csk)->flags, (csk)->tid);
@@ -906,6 +919,8 @@ void cxgbi_sock_act_open_req_arp_failure(void *handle, struct sk_buff *skb)
spin_unlock_bh(&csk->lock);
cxgbi_sock_put(csk);
__kfree_skb(skb);
module_put(owner);
}
EXPORT_SYMBOL_GPL(cxgbi_sock_act_open_req_arp_failure);
@@ -1574,6 +1589,25 @@ static int skb_read_pdu_bhs(struct iscsi_conn *conn, struct sk_buff *skb)
return -EIO;
}
if (cxgbi_skcb_test_flag(skb, SKCBF_RX_ISCSI_COMPL) &&
cxgbi_skcb_test_flag(skb, SKCBF_RX_DATA_DDPD)) {
/* If completion flag is set and data is directly
* placed in to the host memory then update
* task->exp_datasn to the datasn in completion
* iSCSI hdr as T6 adapter generates completion only
* for the last pdu of a sequence.
*/
itt_t itt = ((struct iscsi_data *)skb->data)->itt;
struct iscsi_task *task = iscsi_itt_to_ctask(conn, itt);
u32 data_sn = be32_to_cpu(((struct iscsi_data *)
skb->data)->datasn);
if (task && task->sc) {
struct iscsi_tcp_task *tcp_task = task->dd_data;
tcp_task->exp_datasn = data_sn;
}
}
return read_pdu_skb(conn, skb, 0, 0);
}
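The completion path above reads the DataSN out of the big-endian iSCSI header with `be32_to_cpu()`. A self-contained userspace equivalent of that conversion, assembling a host-order value from four network-order bytes:

```c
#include <stdint.h>

/* Userspace stand-in for be32_to_cpu(): build a host-order value from
 * four big-endian bytes, as done when extracting the DataSN field from
 * an iSCSI Data-In header. */
uint32_t be32_to_host(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}
```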
@@ -1627,15 +1661,15 @@ static void csk_return_rx_credits(struct cxgbi_sock *csk, int copied)
csk->rcv_wup, cdev->rx_credit_thres,
csk->rcv_win);
if (!cdev->rx_credit_thres)
return;
if (csk->state != CTP_ESTABLISHED)
return;
credits = csk->copied_seq - csk->rcv_wup;
if (unlikely(!credits))
return;
if (unlikely(cdev->rx_credit_thres == 0))
return;
must_send = credits + 16384 >= csk->rcv_win;
if (must_send || credits >= cdev->rx_credit_thres)
csk->rcv_wup += cdev->csk_send_rx_credits(csk, credits);
......
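The credit-return logic in `csk_return_rx_credits()` sends credits back to the adapter when the per-device threshold is reached or the receive window is nearly exhausted, and does nothing when the threshold is zero (as T4/T5-vs-T6 setup above arranges). A minimal sketch of that decision, with the 16384-byte slack taken from the driver constant:

```c
#include <stdint.h>

/* Decide whether to return RX credits, mirroring csk_return_rx_credits():
 * send nothing when the threshold is zero or no new bytes were copied;
 * otherwise send when the threshold is met or the window is nearly gone. */
int should_return_credits(uint32_t copied_seq, uint32_t rcv_wup,
			  uint32_t rx_credit_thres, uint32_t rcv_win)
{
	uint32_t credits = copied_seq - rcv_wup;

	if (!rx_credit_thres || !credits)
		return 0;
	if (credits + 16384 >= rcv_win)		/* must send */
		return 1;
	return credits >= rx_credit_thres;
}
```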
@@ -207,6 +207,7 @@ enum cxgbi_skcb_flags {
SKCBF_RX_HDR, /* received pdu header */
SKCBF_RX_DATA, /* received pdu payload */
SKCBF_RX_STATUS, /* received ddp status */
SKCBF_RX_ISCSI_COMPL, /* received iscsi completion */
SKCBF_RX_DATA_DDPD, /* pdu payload ddp'd */
SKCBF_RX_HCRC_ERR, /* header digest error */
SKCBF_RX_DCRC_ERR, /* data digest error */
@@ -467,6 +468,7 @@ struct cxgbi_device {
struct pci_dev *pdev;
struct dentry *debugfs_root;
struct iscsi_transport *itp;
struct module *owner;
unsigned int pfvf;
unsigned int rx_credit_thres;
......
@@ -37,7 +37,7 @@
#define MAX_CARDS 8
/* old-style parameters for compatibility */
static int ncr_irq;
static int ncr_irq = -1;
static int ncr_addr;
static int ncr_5380;
static int ncr_53c400;
@@ -52,9 +52,9 @@ module_param(ncr_53c400a, int, 0);
module_param(dtc_3181e, int, 0);
module_param(hp_c2502, int, 0);
static int irq[] = { 0, 0, 0, 0, 0, 0, 0, 0 };
static int irq[] = { -1, -1, -1, -1, -1, -1, -1, -1 };
module_param_array(irq, int, NULL, 0);
MODULE_PARM_DESC(irq, "IRQ number(s)");
MODULE_PARM_DESC(irq, "IRQ number(s) (0=none, 254=auto [default])");
static int base[] = { 0, 0, 0, 0, 0, 0, 0, 0 };
module_param_array(base, int, NULL, 0);
@@ -67,6 +67,56 @@ MODULE_PARM_DESC(card, "card type (0=NCR5380, 1=NCR53C400, 2=NCR53C400A, 3=DTC31
MODULE_ALIAS("g_NCR5380_mmio");
MODULE_LICENSE("GPL");
static void g_NCR5380_trigger_irq(struct Scsi_Host *instance)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
/*
* An interrupt is triggered whenever BSY = false, SEL = true
* and a bit set in the SELECT_ENABLE_REG is asserted on the
* SCSI bus.
*
* Note that the bus is only driven when the phase control signals
* (I/O, C/D, and MSG) match those in the TCR.
*/
NCR5380_write(TARGET_COMMAND_REG,
PHASE_SR_TO_TCR(NCR5380_read(STATUS_REG) & PHASE_MASK));
NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
NCR5380_write(OUTPUT_DATA_REG, hostdata->id_mask);
NCR5380_write(INITIATOR_COMMAND_REG,
ICR_BASE | ICR_ASSERT_DATA | ICR_ASSERT_SEL);
msleep(1);
NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
NCR5380_write(SELECT_ENABLE_REG, 0);
NCR5380_write(TARGET_COMMAND_REG, 0);
}
/**
* g_NCR5380_probe_irq - find the IRQ of a NCR5380 or equivalent
* @instance: SCSI host instance
*
* Autoprobe for the IRQ line used by the card by triggering an IRQ
* and then looking to see what interrupt actually turned up.
*/
static int g_NCR5380_probe_irq(struct Scsi_Host *instance)
{
struct NCR5380_hostdata *hostdata = shost_priv(instance);
int irq_mask, irq;
NCR5380_read(RESET_PARITY_INTERRUPT_REG);
irq_mask = probe_irq_on();
g_NCR5380_trigger_irq(instance);
irq = probe_irq_off(irq_mask);
NCR5380_read(RESET_PARITY_INTERRUPT_REG);
if (irq <= 0)
return NO_IRQ;
return irq;
}
/*
* Configure I/O address of 53C400A or DTC436 by writing magic numbers
* to ports 0x779 and 0x379.
@@ -81,14 +131,33 @@ static void magic_configure(int idx, u8 irq, u8 magic[])
outb(magic[3], 0x379);
outb(magic[4], 0x379);
/* allowed IRQs for HP C2502 */
if (irq != 2 && irq != 3 && irq != 4 && irq != 5 && irq != 7)
irq = 0;
if (irq == 9)
irq = 2;
if (idx >= 0 && idx <= 7)
cfg = 0x80 | idx | (irq << 4);
outb(cfg, 0x379);
}
static irqreturn_t legacy_empty_irq_handler(int irq, void *dev_id)
{
return IRQ_HANDLED;
}
static int legacy_find_free_irq(int *irq_table)
{
while (*irq_table != -1) {
if (!request_irq(*irq_table, legacy_empty_irq_handler,
IRQF_PROBE_SHARED, "Test IRQ",
(void *)irq_table)) {
free_irq(*irq_table, (void *) irq_table);
return *irq_table;
}
irq_table++;
}
return -1;
}
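`legacy_find_free_irq()` walks a `-1`-terminated candidate table and claims the first IRQ it can temporarily request. A sketch of the same scan, with the `request_irq()`/`free_irq()` probe replaced by a caller-supplied predicate (`demo_is_free` is a made-up example, not part of the driver):

```c
/* Walk a -1 terminated candidate table and return the first entry the
 * probe callback accepts, or -1 when none is free. */
int find_first_free(const int *table, int (*is_free)(int))
{
	while (*table != -1) {
		if (is_free(*table))
			return *table;
		table++;
	}
	return -1;
}

/* Example predicate: pretend only IRQs 5 and 7 are available. */
int demo_is_free(int irq)
{
	return irq == 5 || irq == 7;
}
```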
static unsigned int ncr_53c400a_ports[] = {
0x280, 0x290, 0x300, 0x310, 0x330, 0x340, 0x348, 0x350, 0
};
......@@ -101,6 +170,9 @@ static u8 ncr_53c400a_magic[] = { /* 53C400A & DTC436 */
static u8 hp_c2502_magic[] = { /* HP C2502 */
0x0f, 0x22, 0xf0, 0x20, 0x80
};
static int hp_c2502_irqs[] = {
9, 5, 7, 3, 4, -1
};
static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
struct device *pdev, int base, int irq, int board)
@@ -248,6 +320,13 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
}
}
/* Check for vacant slot */
NCR5380_write(MODE_REG, 0);
if (NCR5380_read(MODE_REG) != 0) {
ret = -ENODEV;
goto out_unregister;
}
ret = NCR5380_init(instance, flags | FLAG_LATE_DMA_SETUP);
if (ret)
goto out_unregister;
@@ -262,31 +341,59 @@ static int generic_NCR5380_init_one(struct scsi_host_template *tpnt,
NCR5380_maybe_reset_bus(instance);
if (irq != IRQ_AUTO)
instance->irq = irq;
else
instance->irq = NCR5380_probe_irq(instance, 0xffff);
/* Compatibility with documented NCR5380 kernel parameters */
if (instance->irq == 255)
instance->irq = NO_IRQ;
if (irq == 255 || irq == 0)
irq = NO_IRQ;
else if (irq == -1)
irq = IRQ_AUTO;
if (board == BOARD_HP_C2502) {
int *irq_table = hp_c2502_irqs;
int board_irq = -1;
switch (irq) {
case NO_IRQ:
board_irq = 0;
break;
case IRQ_AUTO:
board_irq = legacy_find_free_irq(irq_table);
break;
default:
while (*irq_table != -1)
if (*irq_table++ == irq)
board_irq = irq;
}
if (board_irq <= 0) {
board_irq = 0;
irq = NO_IRQ;
}
magic_configure(port_idx, board_irq, magic);
}
if (irq == IRQ_AUTO) {
instance->irq = g_NCR5380_probe_irq(instance);
if (instance->irq == NO_IRQ)
shost_printk(KERN_INFO, instance, "no irq detected\n");
} else {
instance->irq = irq;
if (instance->irq == NO_IRQ)
shost_printk(KERN_INFO, instance, "no irq provided\n");
}
if (instance->irq != NO_IRQ) {
/* set IRQ for HP C2502 */
if (board == BOARD_HP_C2502)
magic_configure(port_idx, instance->irq, magic);
if (request_irq(instance->irq, generic_NCR5380_intr,
0, "NCR5380", instance)) {
printk(KERN_WARNING "scsi%d : IRQ%d not free, interrupts disabled\n", instance->host_no, instance->irq);
instance->irq = NO_IRQ;
shost_printk(KERN_INFO, instance,
"irq %d denied\n", instance->irq);
} else {
shost_printk(KERN_INFO, instance,
"irq %d acquired\n", instance->irq);
}
}
if (instance->irq == NO_IRQ) {
printk(KERN_INFO "scsi%d : interrupts not enabled. for better interactive performance,\n", instance->host_no);
printk(KERN_INFO "scsi%d : please jumper the board for a free IRQ.\n", instance->host_no);
}
ret = scsi_add_host(instance, pdev);
if (ret)
goto out_free_irq;
@@ -597,7 +704,7 @@ static int __init generic_NCR5380_init(void)
int ret = 0;
/* compatibility with old-style parameters */
if (irq[0] == 0 && base[0] == 0 && card[0] == -1) {
if (irq[0] == -1 && base[0] == 0 && card[0] == -1) {
irq[0] = ncr_irq;
base[0] = ncr_addr;
if (ncr_5380)
......
@@ -51,4 +51,6 @@
#define BOARD_DTC3181E 3
#define BOARD_HP_C2502 4
#define IRQ_AUTO 254
#endif /* GENERIC_NCR5380_H */
@@ -1557,10 +1557,9 @@ static void hpsa_monitor_offline_device(struct ctlr_info *h,
/* Device is not on the list, add it. */
device = kmalloc(sizeof(*device), GFP_KERNEL);
if (!device) {
dev_warn(&h->pdev->dev, "out of memory in %s\n", __func__);
if (!device)
return;
}
memcpy(device->scsi3addr, scsi3addr, sizeof(device->scsi3addr));
spin_lock_irqsave(&h->offline_device_lock, flags);
list_add_tail(&device->offline_list, &h->offline_device_list);
@@ -2142,17 +2141,15 @@ static int hpsa_alloc_sg_chain_blocks(struct ctlr_info *h)
h->cmd_sg_list = kzalloc(sizeof(*h->cmd_sg_list) * h->nr_cmds,
GFP_KERNEL);
if (!h->cmd_sg_list) {
dev_err(&h->pdev->dev, "Failed to allocate SG list\n");
if (!h->cmd_sg_list)
return -ENOMEM;
}
for (i = 0; i < h->nr_cmds; i++) {
h->cmd_sg_list[i] = kmalloc(sizeof(*h->cmd_sg_list[i]) *
h->chainsize, GFP_KERNEL);
if (!h->cmd_sg_list[i]) {
dev_err(&h->pdev->dev, "Failed to allocate cmd SG\n");
if (!h->cmd_sg_list[i])
goto clean;
}
}
return 0;
@@ -3454,11 +3451,8 @@ static void hpsa_get_sas_address(struct ctlr_info *h, unsigned char *scsi3addr,
struct bmic_sense_subsystem_info *ssi;
ssi = kzalloc(sizeof(*ssi), GFP_KERNEL);
if (ssi == NULL) {
dev_warn(&h->pdev->dev,
"%s: out of memory\n", __func__);
if (!ssi)
return;
}
rc = hpsa_bmic_sense_subsystem_information(h,
scsi3addr, 0, ssi, sizeof(*ssi));
@@ -4335,8 +4329,6 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h)
currentsd[i] = kzalloc(sizeof(*currentsd[i]), GFP_KERNEL);
if (!currentsd[i]) {
dev_warn(&h->pdev->dev, "out of memory at %s:%d\n",
__FILE__, __LINE__);
h->drv_req_rescan = 1;
goto out;
}
@@ -8597,14 +8589,12 @@ static int hpsa_luns_changed(struct ctlr_info *h)
*/
if (!h->lastlogicals)
goto out;
return rc;
logdev = kzalloc(sizeof(*logdev), GFP_KERNEL);
if (!logdev) {
dev_warn(&h->pdev->dev,
"Out of memory, can't track lun changes.\n");
goto out;
}
if (!logdev)
return rc;
if (hpsa_scsi_do_report_luns(h, 1, logdev, sizeof(*logdev), 0)) {
dev_warn(&h->pdev->dev,
"report luns failed, can't track lun changes.\n");
@@ -8998,11 +8988,8 @@ static void hpsa_disable_rld_caching(struct ctlr_info *h)
return;
options = kzalloc(sizeof(*options), GFP_KERNEL);
if (!options) {
dev_err(&h->pdev->dev,
"Error: failed to disable rld caching, during alloc.\n");
if (!options)
return;
}
c = cmd_alloc(h);
......
@@ -95,6 +95,7 @@ static int fast_fail = 1;
static int client_reserve = 1;
static char partition_name[97] = "UNKNOWN";
static unsigned int partition_number = -1;
static LIST_HEAD(ibmvscsi_head);
static struct scsi_transport_template *ibmvscsi_transport_template;
@@ -232,6 +233,7 @@ static void ibmvscsi_task(void *data)
while ((crq = crq_queue_next_crq(&hostdata->queue)) != NULL) {
ibmvscsi_handle_crq(crq, hostdata);
crq->valid = VIOSRP_CRQ_FREE;
wmb();
}
vio_enable_interrupts(vdev);
@@ -240,6 +242,7 @@ static void ibmvscsi_task(void *data)
vio_disable_interrupts(vdev);
ibmvscsi_handle_crq(crq, hostdata);
crq->valid = VIOSRP_CRQ_FREE;
wmb();
} else {
done = 1;
}
@@ -992,7 +995,7 @@ static void handle_cmd_rsp(struct srp_event_struct *evt_struct)
if (unlikely(rsp->opcode != SRP_RSP)) {
if (printk_ratelimit())
dev_warn(evt_struct->hostdata->dev,
"bad SRP RSP type %d\n", rsp->opcode);
"bad SRP RSP type %#02x\n", rsp->opcode);
}
if (cmnd) {
@@ -2270,6 +2273,7 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
}
dev_set_drvdata(&vdev->dev, hostdata);
list_add_tail(&hostdata->host_list, &ibmvscsi_head);
return 0;
add_srp_port_failed:
@@ -2291,6 +2295,7 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
static int ibmvscsi_remove(struct vio_dev *vdev)
{
struct ibmvscsi_host_data *hostdata = dev_get_drvdata(&vdev->dev);
list_del(&hostdata->host_list);
unmap_persist_bufs(hostdata);
release_event_pool(&hostdata->pool, hostdata);
ibmvscsi_release_crq_queue(&hostdata->queue, hostdata,
......
@@ -90,6 +90,7 @@ struct event_pool {
/* all driver data associated with a host adapter */
struct ibmvscsi_host_data {
struct list_head host_list;
atomic_t request_limit;
int client_migrated;
int reset_crq;
......
@@ -402,6 +402,9 @@ struct MPT3SAS_DEVICE {
u8 block;
u8 tlr_snoop_check;
u8 ignore_delay_remove;
/* Iopriority Command Handling */
u8 ncq_prio_enable;
};
#define MPT3_CMD_NOT_USED 0x8000 /* free */
@@ -1458,4 +1461,7 @@ mpt3sas_setup_direct_io(struct MPT3SAS_ADAPTER *ioc, struct scsi_cmnd *scmd,
struct _raid_device *raid_device, Mpi2SCSIIORequest_t *mpi_request,
u16 smid);
/* NCQ Prio Handling Check */
bool scsih_ncq_prio_supp(struct scsi_device *sdev);
#endif /* MPT3SAS_BASE_H_INCLUDED */
@@ -3325,8 +3325,6 @@ static DEVICE_ATTR(diag_trigger_mpi, S_IRUGO | S_IWUSR,
/*********** diagnostic trigger suppport *** END ****************************/
/*****************************************/
struct device_attribute *mpt3sas_host_attrs[] = {
@@ -3402,9 +3400,50 @@ _ctl_device_handle_show(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR(sas_device_handle, S_IRUGO, _ctl_device_handle_show, NULL);
/**
 * _ctl_device_ncq_prio_enable_show - send prioritized io commands to device
 * @dev: pointer to embedded device
 * @attr: device attribute descriptor
 * @buf: the buffer returned
 *
 * A sysfs 'read/write' sdev attribute, only works with SATA
 */
static ssize_t
_ctl_device_ncq_prio_enable_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct scsi_device *sdev = to_scsi_device(dev);
struct MPT3SAS_DEVICE *sas_device_priv_data = sdev->hostdata;
return snprintf(buf, PAGE_SIZE, "%d\n",
sas_device_priv_data->ncq_prio_enable);
}
static ssize_t
_ctl_device_ncq_prio_enable_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct scsi_device *sdev = to_scsi_device(dev);
struct MPT3SAS_DEVICE *sas_device_priv_data = sdev->hostdata;
bool ncq_prio_enable = 0;
if (kstrtobool(buf, &ncq_prio_enable))
return -EINVAL;
if (!scsih_ncq_prio_supp(sdev))
return -EINVAL;
sas_device_priv_data->ncq_prio_enable = ncq_prio_enable;
return strlen(buf);
}
static DEVICE_ATTR(sas_ncq_prio_enable, S_IRUGO | S_IWUSR,
_ctl_device_ncq_prio_enable_show,
_ctl_device_ncq_prio_enable_store);
struct device_attribute *mpt3sas_dev_attrs[] = {
&dev_attr_sas_address,
&dev_attr_sas_device_handle,
&dev_attr_sas_ncq_prio_enable,
NULL,
};
......
@@ -4053,6 +4053,8 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
struct MPT3SAS_DEVICE *sas_device_priv_data;
struct MPT3SAS_TARGET *sas_target_priv_data;
struct _raid_device *raid_device;
struct request *rq = scmd->request;
int class;
Mpi2SCSIIORequest_t *mpi_request;
u32 mpi_control;
u16 smid;
@@ -4115,7 +4117,12 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
/* set tags */
mpi_control |= MPI2_SCSIIO_CONTROL_SIMPLEQ;
/* NCQ Prio supported, make sure control indicated high priority */
if (sas_device_priv_data->ncq_prio_enable) {
class = IOPRIO_PRIO_CLASS(req_get_ioprio(rq));
if (class == IOPRIO_CLASS_RT)
mpi_control |= 1 << MPI2_SCSIIO_CONTROL_CMDPRI_SHIFT;
}
/* Make sure Device is not raid volume.
* We do not expose raid functionality to upper layer for warpdrive.
*/
@@ -9099,6 +9106,31 @@ scsih_pci_mmio_enabled(struct pci_dev *pdev)
return PCI_ERS_RESULT_RECOVERED;
}
/**
* scsih_ncq_prio_supp - Check for NCQ command priority support
* @sdev: scsi device struct
*
* This is called when a user indicates they would like to enable
* ncq command priorities. This works only on SATA devices.
*/
bool scsih_ncq_prio_supp(struct scsi_device *sdev)
{
unsigned char *buf;
bool ncq_prio_supp = false;
if (!scsi_device_supports_vpd(sdev))
return ncq_prio_supp;
buf = kmalloc(SCSI_VPD_PG_LEN, GFP_KERNEL);
if (!buf)
return ncq_prio_supp;
if (!scsi_get_vpd_page(sdev, 0x89, buf, SCSI_VPD_PG_LEN))
ncq_prio_supp = (buf[213] >> 4) & 1;
kfree(buf);
return ncq_prio_supp;
}
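`scsih_ncq_prio_supp()` above learns NCQ priority support by reading bit 4 of byte 213 of the ATA Information VPD page (0x89). A small userspace sketch of the same byte-offset bit extraction, with a bounds check and a demo wrapper (`demo_ncq_bit` is illustrative only):

```c
#include <stdint.h>
#include <stddef.h>

/* Pull a single feature bit out of a VPD-style byte buffer.
 * Returns 0 if the buffer is too short to contain the byte. */
int vpd_bit(const uint8_t *buf, size_t len, size_t byte, unsigned int bit)
{
	if (byte >= len)
		return 0;
	return (buf[byte] >> bit) & 1;
}

/* Demo: place a value at byte 213 of a 256-byte page and test bit 4,
 * as the driver does for the ATA Information VPD page. */
int demo_ncq_bit(uint8_t b213)
{
	uint8_t page[256] = {0};

	page[213] = b213;
	return vpd_bit(page, sizeof(page), 213, 4);
}
```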
/*
* The pci device ids are defined in mpi/mpi2_cnfg.h.
*/
......
config QEDI
tristate "QLogic QEDI 25/40/100Gb iSCSI Initiator Driver Support"
depends on PCI && SCSI
depends on QED
select SCSI_ISCSI_ATTRS
select QED_LL2
select QED_ISCSI
---help---
This driver supports iSCSI offload for the QLogic FastLinQ
41000 Series Converged Network Adapters.
obj-$(CONFIG_QEDI) := qedi.o
qedi-y := qedi_main.o qedi_iscsi.o qedi_fw.o qedi_sysfs.o \
qedi_dbg.o
qedi-$(CONFIG_DEBUG_FS) += qedi_debugfs.o
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#ifndef _QEDI_H_
#define _QEDI_H_
#define __PREVENT_QED_HSI__
#include <scsi/scsi_transport_iscsi.h>
#include <scsi/libiscsi.h>
#include <scsi/scsi_host.h>
#include <linux/uio_driver.h>
#include "qedi_hsi.h"
#include <linux/qed/qed_if.h>
#include "qedi_dbg.h"
#include <linux/qed/qed_iscsi_if.h>
#include <linux/qed/qed_ll2_if.h>
#include "qedi_version.h"
#define QEDI_MODULE_NAME "qedi"
struct qedi_endpoint;
/*
* PCI function probe defines
*/
#define QEDI_MODE_NORMAL 0
#define QEDI_MODE_RECOVERY 1
#define ISCSI_WQE_SET_PTU_INVALIDATE 1
#define QEDI_MAX_ISCSI_TASK 4096
#define QEDI_MAX_TASK_NUM 0x0FFF
#define QEDI_MAX_ISCSI_CONNS_PER_HBA 1024
#define QEDI_ISCSI_MAX_BDS_PER_CMD 256 /* Firmware max BDs is 256 */
#define MAX_OUSTANDING_TASKS_PER_CON 1024
#define QEDI_MAX_BD_LEN 0xffff
#define QEDI_BD_SPLIT_SZ 0x1000
#define QEDI_PAGE_SIZE 4096
#define QEDI_FAST_SGE_COUNT 4
/* MAX Length for cached SGL */
#define MAX_SGLEN_FOR_CACHESGL ((1U << 16) - 1)
#define MAX_NUM_MSIX_PF 8
#define MIN_NUM_CPUS_MSIX(x) min((x)->msix_count, num_online_cpus())
#define QEDI_LOCAL_PORT_MIN 60000
#define QEDI_LOCAL_PORT_MAX 61024
#define QEDI_LOCAL_PORT_RANGE (QEDI_LOCAL_PORT_MAX - QEDI_LOCAL_PORT_MIN)
#define QEDI_LOCAL_PORT_INVALID 0xffff
#define TX_RX_RING 16
#define RX_RING (TX_RX_RING - 1)
#define LL2_SINGLE_BUF_SIZE 0x400
#define QEDI_PAGE_SIZE 4096
#define QEDI_PAGE_ALIGN(addr) ALIGN(addr, QEDI_PAGE_SIZE)
#define QEDI_PAGE_MASK (~((QEDI_PAGE_SIZE) - 1))
#define QEDI_PAGE_SIZE 4096
#define QEDI_PATH_HANDLE 0xFE0000000UL
struct qedi_uio_ctrl {
/* meta data */
u32 uio_hsi_version;
/* user writes */
u32 host_tx_prod;
u32 host_rx_cons;
u32 host_rx_bd_cons;
u32 host_tx_pkt_len;
u32 host_rx_cons_cnt;
/* driver writes */
u32 hw_tx_cons;
u32 hw_rx_prod;
u32 hw_rx_bd_prod;
u32 hw_rx_prod_cnt;
/* other */
u8 mac_addr[6];
u8 reserve[2];
};
struct qedi_rx_bd {
u32 rx_pkt_index;
u32 rx_pkt_len;
u16 vlan_id;
};
#define QEDI_RX_DESC_CNT (QEDI_PAGE_SIZE / sizeof(struct qedi_rx_bd))
#define QEDI_MAX_RX_DESC_CNT (QEDI_RX_DESC_CNT - 1)
#define QEDI_NUM_RX_BD (QEDI_RX_DESC_CNT * 1)
#define QEDI_MAX_RX_BD (QEDI_NUM_RX_BD - 1)
#define QEDI_NEXT_RX_IDX(x) ((((x) & (QEDI_MAX_RX_DESC_CNT)) == \
(QEDI_MAX_RX_DESC_CNT - 1)) ? \
(x) + 2 : (x) + 1)
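`QEDI_NEXT_RX_IDX()` advances the RX ring index while skipping the last descriptor slot of each page. A sketch of the same wrap logic; `DESC_CNT` here is a hypothetical power of two so the mask works as a clean modulo (the driver derives its count from the page and descriptor sizes):

```c
/* Hypothetical descriptor count per page; last slot is reserved. */
#define DESC_CNT	4
#define MAX_DESC_CNT	(DESC_CNT - 1)

/* Advance a ring index, jumping over the reserved final slot of each
 * page, in the style of QEDI_NEXT_RX_IDX(). */
unsigned int next_rx_idx(unsigned int x)
{
	if ((x & MAX_DESC_CNT) == MAX_DESC_CNT - 1)
		return x + 2;	/* skip the reserved descriptor */
	return x + 1;
}
```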
struct qedi_uio_dev {
struct uio_info qedi_uinfo;
u32 uio_dev;
struct list_head list;
u32 ll2_ring_size;
void *ll2_ring;
u32 ll2_buf_size;
void *ll2_buf;
void *rx_pkt;
void *tx_pkt;
struct qedi_ctx *qedi;
struct pci_dev *pdev;
void *uctrl;
};
/* List to maintain the skb pointers */
struct skb_work_list {
struct list_head list;
struct sk_buff *skb;
u16 vlan_id;
};
/* Queue sizes in number of elements */
#define QEDI_SQ_SIZE MAX_OUSTANDING_TASKS_PER_CON
#define QEDI_CQ_SIZE 2048
#define QEDI_CMDQ_SIZE QEDI_MAX_ISCSI_TASK
#define QEDI_PROTO_CQ_PROD_IDX 0
struct qedi_glbl_q_params {
u64 hw_p_cq; /* Completion queue PBL */
u64 hw_p_rq; /* Request queue PBL */
u64 hw_p_cmdq; /* Command queue PBL */
};
struct global_queue {
union iscsi_cqe *cq;
dma_addr_t cq_dma;
u32 cq_mem_size;
u32 cq_cons_idx; /* Completion queue consumer index */
void *cq_pbl;
dma_addr_t cq_pbl_dma;
u32 cq_pbl_size;
};
struct qedi_fastpath {
struct qed_sb_info *sb_info;
u16 sb_id;
#define QEDI_NAME_SIZE 16
char name[QEDI_NAME_SIZE];
struct qedi_ctx *qedi;
};
/* Used to pass fastpath information needed to process CQEs */
struct qedi_io_work {
struct list_head list;
struct iscsi_cqe_solicited cqe;
u16 que_idx;
};
/**
* struct iscsi_cid_queue - Per adapter iscsi cid queue
*
* @cid_que_base: queue base memory
* @cid_que: queue memory pointer
* @cid_q_prod_idx: produce index
* @cid_q_cons_idx: consumer index
* @cid_q_max_idx: max index. used to detect wrap around condition
* @cid_free_cnt: queue size
* @conn_cid_tbl: iscsi cid to conn structure mapping table
*
* Per adapter iSCSI CID Queue
*/
struct iscsi_cid_queue {
void *cid_que_base;
u32 *cid_que;
u32 cid_q_prod_idx;
u32 cid_q_cons_idx;
u32 cid_q_max_idx;
u32 cid_free_cnt;
struct qedi_conn **conn_cid_tbl;
};
struct qedi_portid_tbl {
spinlock_t lock; /* Port id lock */
u16 start;
u16 max;
u16 next;
unsigned long *table;
};
struct qedi_itt_map {
__le32 itt;
struct qedi_cmd *p_cmd;
};
/* I/O tracing entry */
#define QEDI_IO_TRACE_SIZE 2048
struct qedi_io_log {
#define QEDI_IO_TRACE_REQ 0
#define QEDI_IO_TRACE_RSP 1
u8 direction;
u16 task_id;
u32 cid;
u32 port_id; /* Remote port fabric ID */
int lun;
u8 op; /* SCSI CDB */
u8 lba[4];
unsigned int bufflen; /* SCSI buffer length */
unsigned int sg_count; /* Number of SG elements */
u8 fast_sgs; /* number of fast sgls */
u8 slow_sgs; /* number of slow sgls */
u8 cached_sgs; /* number of cached sgls */
int result; /* Result passed back to mid-layer */
unsigned long jiffies; /* Time stamp when I/O logged */
int refcount; /* Reference count for task id */
unsigned int blk_req_cpu; /* CPU that the task is queued on by
* blk layer
*/
unsigned int req_cpu; /* CPU that the task is queued on */
unsigned int intr_cpu; /* Interrupt CPU that the task is received on */
unsigned int blk_rsp_cpu;/* CPU that task is actually processed and
* returned to blk layer
*/
bool cached_sge;
bool slow_sge;
bool fast_sge;
};
/* Number of entries in BDQ */
#define QEDI_BDQ_NUM 256
#define QEDI_BDQ_BUF_SIZE 256
/* DMA coherent buffers for BDQ */
struct qedi_bdq_buf {
void *buf_addr;
dma_addr_t buf_dma;
};
/* Main port level struct */
struct qedi_ctx {
struct qedi_dbg_ctx dbg_ctx;
struct Scsi_Host *shost;
struct pci_dev *pdev;
struct qed_dev *cdev;
struct qed_dev_iscsi_info dev_info;
struct qed_int_info int_info;
struct qedi_glbl_q_params *p_cpuq;
struct global_queue **global_queues;
/* uio declaration */
struct qedi_uio_dev *udev;
struct list_head ll2_skb_list;
spinlock_t ll2_lock; /* Light L2 lock */
spinlock_t hba_lock; /* per port lock */
struct task_struct *ll2_recv_thread;
unsigned long flags;
#define UIO_DEV_OPENED 1
#define QEDI_IOTHREAD_WAKE 2
#define QEDI_IN_RECOVERY 5
#define QEDI_IN_OFFLINE 6
u8 mac[ETH_ALEN];
u32 src_ip[4];
u8 ip_type;
/* Physical address of above array */
dma_addr_t hw_p_cpuq;
struct qedi_bdq_buf bdq[QEDI_BDQ_NUM];
void *bdq_pbl;
dma_addr_t bdq_pbl_dma;
size_t bdq_pbl_mem_size;
void *bdq_pbl_list;
dma_addr_t bdq_pbl_list_dma;
u8 bdq_pbl_list_num_entries;
void __iomem *bdq_primary_prod;
void __iomem *bdq_secondary_prod;
u16 bdq_prod_idx;
u16 rq_num_entries;
u32 msix_count;
u32 max_sqes;
u8 num_queues;
u32 max_active_conns;
struct iscsi_cid_queue cid_que;
struct qedi_endpoint **ep_tbl;
struct qedi_portid_tbl lcl_port_tbl;
/* Rx fast path intr context */
struct qed_sb_info *sb_array;
struct qedi_fastpath *fp_array;
struct qed_iscsi_tid tasks;
#define QEDI_LINK_DOWN 0
#define QEDI_LINK_UP 1
atomic_t link_state;
#define QEDI_RESERVE_TASK_ID 0
#define MAX_ISCSI_TASK_ENTRIES 4096
#define QEDI_INVALID_TASK_ID (MAX_ISCSI_TASK_ENTRIES + 1)
unsigned long task_idx_map[MAX_ISCSI_TASK_ENTRIES / BITS_PER_LONG];
struct qedi_itt_map *itt_map;
u16 tid_reuse_count[QEDI_MAX_ISCSI_TASK];
struct qed_pf_params pf_params;
struct workqueue_struct *tmf_thread;
struct workqueue_struct *offload_thread;
u16 ll2_mtu;
struct workqueue_struct *dpc_wq;
spinlock_t task_idx_lock; /* To protect gbl context */
s32 last_tidx_alloc;
s32 last_tidx_clear;
struct qedi_io_log io_trace_buf[QEDI_IO_TRACE_SIZE];
spinlock_t io_trace_lock; /* protect trace log buf */
u16 io_trace_idx;
unsigned int intr_cpu;
u32 cached_sgls;
bool use_cached_sge;
u32 slow_sgls;
bool use_slow_sge;
u32 fast_sgls;
bool use_fast_sge;
atomic_t num_offloads;
};
struct qedi_work {
struct list_head list;
struct qedi_ctx *qedi;
union iscsi_cqe cqe;
u16 que_idx;
bool is_solicited;
};
struct qedi_percpu_s {
struct task_struct *iothread;
struct list_head work_list;
spinlock_t p_work_lock; /* Per cpu worker lock */
};
static inline void *qedi_get_task_mem(struct qed_iscsi_tid *info, u32 tid)
{
return (info->blocks[tid / info->num_tids_per_block] +
(tid % info->num_tids_per_block) * info->size);
}
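The inline helper above resolves a task id to its context memory with a two-level block/offset calculation. A minimal userspace sketch of the same arithmetic (field names mirror struct qed_iscsi_tid, but this is an illustration, not the driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace sketch of the qedi_get_task_mem() math: task contexts live
 * in an array of equally sized blocks; a tid selects first the block,
 * then the slot inside it. */
struct tid_layout {
	uint8_t **blocks;		/* base address of each block */
	uint32_t num_tids_per_block;
	uint32_t size;			/* bytes per task context */
};

static void *get_task_mem(const struct tid_layout *info, uint32_t tid)
{
	return info->blocks[tid / info->num_tids_per_block] +
	       (tid % info->num_tids_per_block) * info->size;
}
```

The division picks the block and the remainder scales to a byte offset, so task contexts never straddle a block boundary.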
#define QEDI_U64_HI(val) ((u32)(((u64)(val)) >> 32))
#define QEDI_U64_LO(val) ((u32)(((u64)(val)) & 0xffffffff))
#endif /* _QEDI_H_ */
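The QEDI_U64_HI/QEDI_U64_LO macros above split a 64-bit value (typically a DMA address) into two 32-bit halves for hardware interfaces. A standalone sketch of the same split, using hypothetical U64_HI/U64_LO names:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the QEDI_U64_HI/QEDI_U64_LO split: a 64-bit DMA address is
 * handed to the device as two 32-bit register writes. */
#define U64_HI(val) ((uint32_t)(((uint64_t)(val)) >> 32))
#define U64_LO(val) ((uint32_t)(((uint64_t)(val)) & 0xffffffff))
```

Recombining the two halves with a shift and OR reproduces the original 64-bit value exactly.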
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#include "qedi_dbg.h"
#include <linux/vmalloc.h>
void
qedi_dbg_err(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
const char *fmt, ...)
{
va_list va;
struct va_format vaf;
char nfunc[32];
strlcpy(nfunc, func, sizeof(nfunc));
va_start(va, fmt);
vaf.fmt = fmt;
vaf.va = &va;
if (likely(qedi) && likely(qedi->pdev))
pr_err("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
nfunc, line, qedi->host_no, &vaf);
else
pr_err("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
va_end(va);
}
void
qedi_dbg_warn(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
const char *fmt, ...)
{
va_list va;
struct va_format vaf;
char nfunc[32];
strlcpy(nfunc, func, sizeof(nfunc));
if (!(qedi_dbg_log & QEDI_LOG_WARN))
return;
va_start(va, fmt);
vaf.fmt = fmt;
vaf.va = &va;
if (likely(qedi) && likely(qedi->pdev))
pr_warn("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
nfunc, line, qedi->host_no, &vaf);
else
pr_warn("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
va_end(va);
}
void
qedi_dbg_notice(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
const char *fmt, ...)
{
va_list va;
struct va_format vaf;
char nfunc[32];
strlcpy(nfunc, func, sizeof(nfunc));
if (!(qedi_dbg_log & QEDI_LOG_NOTICE))
return;
va_start(va, fmt);
vaf.fmt = fmt;
vaf.va = &va;
if (likely(qedi) && likely(qedi->pdev))
pr_notice("[%s]:[%s:%d]:%d: %pV",
dev_name(&qedi->pdev->dev), nfunc, line,
qedi->host_no, &vaf);
else
pr_notice("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
va_end(va);
}
void
qedi_dbg_info(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
u32 level, const char *fmt, ...)
{
va_list va;
struct va_format vaf;
char nfunc[32];
strlcpy(nfunc, func, sizeof(nfunc));
if (!(qedi_dbg_log & level))
return;
va_start(va, fmt);
vaf.fmt = fmt;
vaf.va = &va;
if (likely(qedi) && likely(qedi->pdev))
pr_info("[%s]:[%s:%d]:%d: %pV", dev_name(&qedi->pdev->dev),
nfunc, line, qedi->host_no, &vaf);
else
pr_info("[0000:00:00.0]:[%s:%d]: %pV", nfunc, line, &vaf);
va_end(va);
}
int
qedi_create_sysfs_attr(struct Scsi_Host *shost, struct sysfs_bin_attrs *iter)
{
int ret = 0;
for (; iter->name; iter++) {
ret = sysfs_create_bin_file(&shost->shost_gendev.kobj,
iter->attr);
if (ret)
pr_err("Unable to create sysfs %s attr, err(%d).\n",
iter->name, ret);
}
return ret;
}
void
qedi_remove_sysfs_attr(struct Scsi_Host *shost, struct sysfs_bin_attrs *iter)
{
for (; iter->name; iter++)
sysfs_remove_bin_file(&shost->shost_gendev.kobj, iter->attr);
}
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#ifndef _QEDI_DBG_H_
#define _QEDI_DBG_H_
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/compiler.h>
#include <linux/string.h>
#include <linux/version.h>
#include <linux/pci.h>
#include <linux/delay.h>
#include <scsi/scsi_transport.h>
#include <scsi/scsi_transport_iscsi.h>
#include <linux/fs.h>
#define __PREVENT_QED_HSI__
#include <linux/qed/common_hsi.h>
#include <linux/qed/qed_if.h>
extern uint qedi_dbg_log;
/* Debug print level definitions */
#define QEDI_LOG_DEFAULT 0x1 /* Set default logging mask */
#define QEDI_LOG_INFO 0x2 /* Informational logs,
* MAC address, WWPN, WWNN
*/
#define QEDI_LOG_DISC 0x4 /* Init, discovery, rport */
#define QEDI_LOG_LL2 0x8 /* LL2, VLAN logs */
#define QEDI_LOG_CONN 0x10 /* Connection setup, cleanup */
#define QEDI_LOG_EVT 0x20 /* Events, link, mtu */
#define QEDI_LOG_TIMER 0x40 /* Timer events */
#define QEDI_LOG_MP_REQ 0x80 /* Middle Path (MP) logs */
#define QEDI_LOG_SCSI_TM 0x100 /* SCSI Aborts, Task Mgmt */
#define QEDI_LOG_UNSOL 0x200 /* unsolicited event logs */
#define QEDI_LOG_IO 0x400 /* scsi cmd, completion */
#define QEDI_LOG_MQ 0x800 /* Multi Queue logs */
#define QEDI_LOG_BSG 0x1000 /* BSG logs */
#define QEDI_LOG_DEBUGFS 0x2000 /* debugFS logs */
#define QEDI_LOG_LPORT 0x4000 /* lport logs */
#define QEDI_LOG_ELS 0x8000 /* ELS logs */
#define QEDI_LOG_NPIV 0x10000 /* NPIV logs */
#define QEDI_LOG_SESS 0x20000 /* Connection setup, cleanup */
#define QEDI_LOG_UIO 0x40000 /* iSCSI UIO logs */
#define QEDI_LOG_TID 0x80000 /* FW TID context acquire,
* free
*/
#define QEDI_TRACK_TID 0x100000 /* Track TID state. To be
* enabled only at module load
* and not run-time.
*/
#define QEDI_TRACK_CMD_LIST 0x300000 /* Track active cmd list nodes,
* done with reference to TID,
* hence TRACK_TID also enabled.
*/
#define QEDI_LOG_NOTICE 0x40000000 /* Notice logs */
#define QEDI_LOG_WARN 0x80000000 /* Warning logs */
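The debug macros below test these values against the qedi_dbg_log mask: each QEDI_LOG_* value is a single bit, so message classes can be OR'd together at module load and a message is emitted only when its class bit is set. A minimal userspace sketch of that mask check (the LOG_* values copy three of the definitions above for illustration only):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the bitmask gating used by the qedi_dbg_* helpers. */
#define LOG_INFO 0x2
#define LOG_CONN 0x10
#define LOG_IO   0x400

static bool log_enabled(uint32_t mask, uint32_t class_bit)
{
	return (mask & class_bit) != 0;
}
```

Because the classes are independent bits, enabling I/O tracing does not imply connection logging, and a single module parameter controls the whole set.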
/* Debug context structure */
struct qedi_dbg_ctx {
unsigned int host_no;
struct pci_dev *pdev;
#ifdef CONFIG_DEBUG_FS
struct dentry *bdf_dentry;
#endif
};
#define QEDI_ERR(pdev, fmt, ...) \
qedi_dbg_err(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
#define QEDI_WARN(pdev, fmt, ...) \
qedi_dbg_warn(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
#define QEDI_NOTICE(pdev, fmt, ...) \
qedi_dbg_notice(pdev, __func__, __LINE__, fmt, ## __VA_ARGS__)
#define QEDI_INFO(pdev, level, fmt, ...) \
qedi_dbg_info(pdev, __func__, __LINE__, level, fmt, \
## __VA_ARGS__)
void qedi_dbg_err(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
const char *fmt, ...);
void qedi_dbg_warn(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
const char *fmt, ...);
void qedi_dbg_notice(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
const char *fmt, ...);
void qedi_dbg_info(struct qedi_dbg_ctx *qedi, const char *func, u32 line,
u32 info, const char *fmt, ...);
struct Scsi_Host;
struct sysfs_bin_attrs {
char *name;
struct bin_attribute *attr;
};
int qedi_create_sysfs_attr(struct Scsi_Host *shost,
struct sysfs_bin_attrs *iter);
void qedi_remove_sysfs_attr(struct Scsi_Host *shost,
struct sysfs_bin_attrs *iter);
#ifdef CONFIG_DEBUG_FS
/* DebugFS related code */
struct qedi_list_of_funcs {
char *oper_str;
ssize_t (*oper_func)(struct qedi_dbg_ctx *qedi);
};
struct qedi_debugfs_ops {
char *name;
struct qedi_list_of_funcs *qedi_funcs;
};
#define qedi_dbg_fileops(drv, ops) \
{ \
.owner = THIS_MODULE, \
.open = simple_open, \
.read = drv##_dbg_##ops##_cmd_read, \
.write = drv##_dbg_##ops##_cmd_write \
}
/* Used for debugfs sequential files */
#define qedi_dbg_fileops_seq(drv, ops) \
{ \
.owner = THIS_MODULE, \
.open = drv##_dbg_##ops##_open, \
.read = seq_read, \
.llseek = seq_lseek, \
.release = single_release, \
}
void qedi_dbg_host_init(struct qedi_dbg_ctx *qedi,
struct qedi_debugfs_ops *dops,
const struct file_operations *fops);
void qedi_dbg_host_exit(struct qedi_dbg_ctx *qedi);
void qedi_dbg_init(char *drv_name);
void qedi_dbg_exit(void);
#endif /* CONFIG_DEBUG_FS */
#endif /* _QEDI_DBG_H_ */
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#include "qedi.h"
#include "qedi_dbg.h"
#include <linux/uaccess.h>
#include <linux/debugfs.h>
#include <linux/module.h>
int do_not_recover;
static struct dentry *qedi_dbg_root;
void
qedi_dbg_host_init(struct qedi_dbg_ctx *qedi,
struct qedi_debugfs_ops *dops,
const struct file_operations *fops)
{
char host_dirname[32];
struct dentry *file_dentry = NULL;
sprintf(host_dirname, "host%u", qedi->host_no);
qedi->bdf_dentry = debugfs_create_dir(host_dirname, qedi_dbg_root);
if (!qedi->bdf_dentry)
return;
while (dops) {
if (!(dops->name))
break;
file_dentry = debugfs_create_file(dops->name, 0600,
qedi->bdf_dentry, qedi,
fops);
if (!file_dentry) {
QEDI_INFO(qedi, QEDI_LOG_DEBUGFS,
"Debugfs entry %s creation failed\n",
dops->name);
debugfs_remove_recursive(qedi->bdf_dentry);
return;
}
dops++;
fops++;
}
}
void
qedi_dbg_host_exit(struct qedi_dbg_ctx *qedi)
{
debugfs_remove_recursive(qedi->bdf_dentry);
qedi->bdf_dentry = NULL;
}
void
qedi_dbg_init(char *drv_name)
{
qedi_dbg_root = debugfs_create_dir(drv_name, NULL);
if (!qedi_dbg_root)
QEDI_INFO(NULL, QEDI_LOG_DEBUGFS, "Init of debugfs failed\n");
}
void
qedi_dbg_exit(void)
{
debugfs_remove_recursive(qedi_dbg_root);
qedi_dbg_root = NULL;
}
static ssize_t
qedi_dbg_do_not_recover_enable(struct qedi_dbg_ctx *qedi_dbg)
{
if (!do_not_recover)
do_not_recover = 1;
QEDI_INFO(qedi_dbg, QEDI_LOG_DEBUGFS, "do_not_recover=%d\n",
do_not_recover);
return 0;
}
static ssize_t
qedi_dbg_do_not_recover_disable(struct qedi_dbg_ctx *qedi_dbg)
{
if (do_not_recover)
do_not_recover = 0;
QEDI_INFO(qedi_dbg, QEDI_LOG_DEBUGFS, "do_not_recover=%d\n",
do_not_recover);
return 0;
}
static struct qedi_list_of_funcs qedi_dbg_do_not_recover_ops[] = {
{ "enable", qedi_dbg_do_not_recover_enable },
{ "disable", qedi_dbg_do_not_recover_disable },
{ NULL, NULL }
};
struct qedi_debugfs_ops qedi_debugfs_ops[] = {
{ "gbl_ctx", NULL },
{ "do_not_recover", qedi_dbg_do_not_recover_ops},
{ "io_trace", NULL },
{ NULL, NULL }
};
static ssize_t
qedi_dbg_do_not_recover_cmd_write(struct file *filp, const char __user *buffer,
size_t count, loff_t *ppos)
{
size_t cnt = 0;
char lbuf[32];
struct qedi_dbg_ctx *qedi_dbg =
(struct qedi_dbg_ctx *)filp->private_data;
struct qedi_list_of_funcs *lof = qedi_dbg_do_not_recover_ops;
if (*ppos)
return 0;
/* buffer is a __user pointer; copy it in before parsing */
if (count > sizeof(lbuf) - 1)
return -EINVAL;
if (copy_from_user(lbuf, buffer, count))
return -EFAULT;
lbuf[count] = '\0';
while (lof) {
if (!(lof->oper_str))
break;
if (!strncmp(lof->oper_str, lbuf, strlen(lof->oper_str))) {
cnt = lof->oper_func(qedi_dbg);
break;
}
lof++;
}
return (count - cnt);
}
static ssize_t
qedi_dbg_do_not_recover_cmd_read(struct file *filp, char __user *buffer,
size_t count, loff_t *ppos)
{
size_t cnt;
char lbuf[32];
/* format into a kernel buffer; sprintf() directly into the __user
 * pointer would be a bug
 */
cnt = sprintf(lbuf, "do_not_recover=%d\n", do_not_recover);
return simple_read_from_buffer(buffer, count, ppos, lbuf, cnt);
}
static int
qedi_gbl_ctx_show(struct seq_file *s, void *unused)
{
struct qedi_fastpath *fp = NULL;
struct qed_sb_info *sb_info = NULL;
struct status_block *sb = NULL;
struct global_queue *que = NULL;
int id;
u16 prod_idx;
struct qedi_ctx *qedi = s->private;
unsigned long flags;
seq_puts(s, " DUMP CQ CONTEXT:\n");
for (id = 0; id < MIN_NUM_CPUS_MSIX(qedi); id++) {
spin_lock_irqsave(&qedi->hba_lock, flags);
seq_printf(s, "=========FAST CQ PATH [%d] ==========\n", id);
fp = &qedi->fp_array[id];
sb_info = fp->sb_info;
sb = sb_info->sb_virt;
prod_idx = (sb->pi_array[QEDI_PROTO_CQ_PROD_IDX] &
STATUS_BLOCK_PROD_INDEX_MASK);
seq_printf(s, "SB PROD IDX: %d\n", prod_idx);
que = qedi->global_queues[fp->sb_id];
seq_printf(s, "DRV CONS IDX: %d\n", que->cq_cons_idx);
seq_printf(s, "CQ complete host memory: %d\n", fp->sb_id);
seq_puts(s, "=========== END ==================\n\n\n");
spin_unlock_irqrestore(&qedi->hba_lock, flags);
}
return 0;
}
static int
qedi_dbg_gbl_ctx_open(struct inode *inode, struct file *file)
{
struct qedi_dbg_ctx *qedi_dbg = inode->i_private;
struct qedi_ctx *qedi = container_of(qedi_dbg, struct qedi_ctx,
dbg_ctx);
return single_open(file, qedi_gbl_ctx_show, qedi);
}
static int
qedi_io_trace_show(struct seq_file *s, void *unused)
{
int id, idx = 0;
struct qedi_ctx *qedi = s->private;
struct qedi_io_log *io_log;
unsigned long flags;
seq_puts(s, " DUMP IO LOGS:\n");
spin_lock_irqsave(&qedi->io_trace_lock, flags);
idx = qedi->io_trace_idx;
for (id = 0; id < QEDI_IO_TRACE_SIZE; id++) {
io_log = &qedi->io_trace_buf[idx];
seq_printf(s, "iodir-%d:", io_log->direction);
seq_printf(s, "tid-0x%x:", io_log->task_id);
seq_printf(s, "cid-0x%x:", io_log->cid);
seq_printf(s, "lun-%d:", io_log->lun);
seq_printf(s, "op-0x%02x:", io_log->op);
seq_printf(s, "0x%02x%02x%02x%02x:", io_log->lba[0],
io_log->lba[1], io_log->lba[2], io_log->lba[3]);
seq_printf(s, "buflen-%d:", io_log->bufflen);
seq_printf(s, "sgcnt-%d:", io_log->sg_count);
seq_printf(s, "res-0x%08x:", io_log->result);
seq_printf(s, "jif-%lu:", io_log->jiffies);
seq_printf(s, "blk_req_cpu-%d:", io_log->blk_req_cpu);
seq_printf(s, "req_cpu-%d:", io_log->req_cpu);
seq_printf(s, "intr_cpu-%d:", io_log->intr_cpu);
seq_printf(s, "blk_rsp_cpu-%d\n", io_log->blk_rsp_cpu);
idx++;
if (idx == QEDI_IO_TRACE_SIZE)
idx = 0;
}
spin_unlock_irqrestore(&qedi->io_trace_lock, flags);
return 0;
}
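The dump above walks the trace ring starting at the current write index, so entries print oldest-first once the buffer has wrapped. A standalone sketch of that wrap-around walk (TRACE_SIZE is shrunk here purely for illustration):

```c
#include <assert.h>

/* Sketch of the oldest-first walk in qedi_io_trace_show(): start at the
 * current write index and step through every slot, wrapping at the end. */
#define TRACE_SIZE 4

static void trace_walk(const int *ring, int write_idx, int *out)
{
	int idx = write_idx;
	int i;

	for (i = 0; i < TRACE_SIZE; i++) {
		out[i] = ring[idx];
		idx++;
		if (idx == TRACE_SIZE)
			idx = 0;
	}
}
```

Starting at the write index works because, in a full ring, the next slot to be overwritten is always the oldest entry.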
static int
qedi_dbg_io_trace_open(struct inode *inode, struct file *file)
{
struct qedi_dbg_ctx *qedi_dbg = inode->i_private;
struct qedi_ctx *qedi = container_of(qedi_dbg, struct qedi_ctx,
dbg_ctx);
return single_open(file, qedi_io_trace_show, qedi);
}
const struct file_operations qedi_dbg_fops[] = {
qedi_dbg_fileops_seq(qedi, gbl_ctx),
qedi_dbg_fileops(qedi, do_not_recover),
qedi_dbg_fileops_seq(qedi, io_trace),
{ NULL, NULL },
};
This diff has been collapsed.
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#ifndef _QEDI_GBL_H_
#define _QEDI_GBL_H_
#include "qedi_iscsi.h"
extern uint qedi_io_tracing;
extern int do_not_recover;
extern struct scsi_host_template qedi_host_template;
extern struct iscsi_transport qedi_iscsi_transport;
extern const struct qed_iscsi_ops *qedi_ops;
extern struct qedi_debugfs_ops qedi_debugfs_ops;
extern const struct file_operations qedi_dbg_fops;
extern struct device_attribute *qedi_shost_attrs[];
int qedi_alloc_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep);
void qedi_free_sq(struct qedi_ctx *qedi, struct qedi_endpoint *ep);
int qedi_send_iscsi_login(struct qedi_conn *qedi_conn,
struct iscsi_task *task);
int qedi_send_iscsi_logout(struct qedi_conn *qedi_conn,
struct iscsi_task *task);
int qedi_iscsi_abort_work(struct qedi_conn *qedi_conn,
struct iscsi_task *mtask);
int qedi_send_iscsi_text(struct qedi_conn *qedi_conn,
struct iscsi_task *task);
int qedi_send_iscsi_nopout(struct qedi_conn *qedi_conn,
struct iscsi_task *task,
char *datap, int data_len, int unsol);
int qedi_iscsi_send_ioreq(struct iscsi_task *task);
int qedi_get_task_idx(struct qedi_ctx *qedi);
void qedi_clear_task_idx(struct qedi_ctx *qedi, int idx);
int qedi_iscsi_cleanup_task(struct iscsi_task *task,
bool mark_cmd_node_deleted);
void qedi_iscsi_unmap_sg_list(struct qedi_cmd *cmd);
void qedi_update_itt_map(struct qedi_ctx *qedi, u32 tid, u32 proto_itt,
struct qedi_cmd *qedi_cmd);
void qedi_get_proto_itt(struct qedi_ctx *qedi, u32 tid, u32 *proto_itt);
void qedi_get_task_tid(struct qedi_ctx *qedi, u32 itt, int16_t *tid);
void qedi_process_iscsi_error(struct qedi_endpoint *ep,
struct async_data *data);
void qedi_start_conn_recovery(struct qedi_ctx *qedi,
struct qedi_conn *qedi_conn);
struct qedi_conn *qedi_get_conn_from_id(struct qedi_ctx *qedi, u32 iscsi_cid);
void qedi_process_tcp_error(struct qedi_endpoint *ep, struct async_data *data);
void qedi_mark_device_missing(struct iscsi_cls_session *cls_session);
void qedi_mark_device_available(struct iscsi_cls_session *cls_session);
void qedi_reset_host_mtu(struct qedi_ctx *qedi, u16 mtu);
int qedi_recover_all_conns(struct qedi_ctx *qedi);
void qedi_fp_process_cqes(struct qedi_work *work);
int qedi_cleanup_all_io(struct qedi_ctx *qedi,
struct qedi_conn *qedi_conn,
struct iscsi_task *task, bool in_recovery);
void qedi_trace_io(struct qedi_ctx *qedi, struct iscsi_task *task,
u16 tid, int8_t direction);
int qedi_alloc_id(struct qedi_portid_tbl *id_tbl, u16 id);
u16 qedi_alloc_new_id(struct qedi_portid_tbl *id_tbl);
void qedi_free_id(struct qedi_portid_tbl *id_tbl, u16 id);
int qedi_create_sysfs_ctx_attr(struct qedi_ctx *qedi);
void qedi_remove_sysfs_ctx_attr(struct qedi_ctx *qedi);
void qedi_clearsq(struct qedi_ctx *qedi,
struct qedi_conn *qedi_conn,
struct iscsi_task *task);
#endif
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#ifndef __QEDI_HSI__
#define __QEDI_HSI__
/*
* Add include to common target
*/
#include <linux/qed/common_hsi.h>
/*
* Add include to common storage target
*/
#include <linux/qed/storage_common.h>
/*
* Add include to common TCP target
*/
#include <linux/qed/tcp_common.h>
/*
* Add include to common iSCSI target for both eCore and protocol driver
*/
#include <linux/qed/iscsi_common.h>
/*
* iSCSI CMDQ element
*/
struct iscsi_cmdqe {
__le16 conn_id;
u8 invalid_command;
u8 cmd_hdr_type;
__le32 reserved1[2];
__le32 cmd_payload[13];
};
/*
* iSCSI CMD header type
*/
enum iscsi_cmd_hdr_type {
ISCSI_CMD_HDR_TYPE_BHS_ONLY /* iSCSI BHS with no expected AHS */,
ISCSI_CMD_HDR_TYPE_BHS_W_AHS /* iSCSI BHS with expected AHS */,
ISCSI_CMD_HDR_TYPE_AHS /* iSCSI AHS */,
MAX_ISCSI_CMD_HDR_TYPE
};
#endif /* __QEDI_HSI__ */
This diff has been collapsed.
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#ifndef _QEDI_ISCSI_H_
#define _QEDI_ISCSI_H_
#include <linux/socket.h>
#include <linux/completion.h>
#include "qedi.h"
#define ISCSI_MAX_SESS_PER_HBA 4096
#define DEF_KA_TIMEOUT 7200000
#define DEF_KA_INTERVAL 10000
#define DEF_KA_MAX_PROBE_COUNT 10
#define DEF_TOS 0
#define DEF_TTL 0xfe
#define DEF_SND_SEQ_SCALE 0
#define DEF_RCV_BUF 0xffff
#define DEF_SND_BUF 0xffff
#define DEF_SEED 0
#define DEF_MAX_RT_TIME 8000
#define DEF_MAX_DA_COUNT 2
#define DEF_SWS_TIMER 1000
#define DEF_MAX_CWND 2
#define DEF_PATH_MTU 1500
#define DEF_MSS 1460
#define DEF_LL2_MTU 1560
#define JUMBO_MTU 9000
#define MIN_MTU 576 /* rfc 793 */
#define IPV4_HDR_LEN 20
#define IPV6_HDR_LEN 40
#define TCP_HDR_LEN 20
#define TCP_OPTION_LEN 12
#define VLAN_LEN 4
enum {
EP_STATE_IDLE = 0x0,
EP_STATE_ACQRCONN_START = 0x1,
EP_STATE_ACQRCONN_COMPL = 0x2,
EP_STATE_OFLDCONN_START = 0x4,
EP_STATE_OFLDCONN_COMPL = 0x8,
EP_STATE_DISCONN_START = 0x10,
EP_STATE_DISCONN_COMPL = 0x20,
EP_STATE_CLEANUP_START = 0x40,
EP_STATE_CLEANUP_CMPL = 0x80,
EP_STATE_TCP_FIN_RCVD = 0x100,
EP_STATE_TCP_RST_RCVD = 0x200,
EP_STATE_LOGOUT_SENT = 0x400,
EP_STATE_LOGOUT_RESP_RCVD = 0x800,
EP_STATE_CLEANUP_FAILED = 0x1000,
EP_STATE_OFLDCONN_FAILED = 0x2000,
EP_STATE_CONNECT_FAILED = 0x4000,
EP_STATE_DISCONN_TIMEDOUT = 0x8000,
};
struct qedi_conn;
struct qedi_endpoint {
struct qedi_ctx *qedi;
u32 dst_addr[4];
u32 src_addr[4];
u16 src_port;
u16 dst_port;
u16 vlan_id;
u16 pmtu;
u8 src_mac[ETH_ALEN];
u8 dst_mac[ETH_ALEN];
u8 ip_type;
int state;
wait_queue_head_t ofld_wait;
wait_queue_head_t tcp_ofld_wait;
u32 iscsi_cid;
/* identifier of the connection from qed */
u32 handle;
u32 fw_cid;
void __iomem *p_doorbell;
/* Send queue management */
struct iscsi_wqe *sq;
dma_addr_t sq_dma;
u16 sq_prod_idx;
u16 fw_sq_prod_idx;
u16 sq_con_idx;
u32 sq_mem_size;
void *sq_pbl;
dma_addr_t sq_pbl_dma;
u32 sq_pbl_size;
struct qedi_conn *conn;
struct work_struct offload_work;
};
#define QEDI_SQ_WQES_MIN 16
struct qedi_io_bdt {
struct iscsi_sge *sge_tbl;
dma_addr_t sge_tbl_dma;
u16 sge_valid;
};
/**
* struct generic_pdu_resc - login pdu resource structure
*
* @req_buf: driver buffer used to stage payload associated with
* the login request
* @req_dma_addr: dma address for iscsi login request payload buffer
* @req_buf_size: actual login request payload length
* @req_wr_ptr: pointer into login request buffer when next data is
* to be written
* @resp_hdr: iscsi header where iscsi login response header is to
* be recreated
* @resp_buf: buffer to stage login response payload
* @resp_dma_addr: login response payload buffer dma address
* @resp_buf_size: login response payload length
* @resp_wr_ptr: pointer into login response buffer when next data is
* to be written
* @req_bd_tbl: iscsi login request payload BD table
* @req_bd_dma: login request BD table dma address
* @resp_bd_tbl: iscsi login response payload BD table
* @resp_bd_dma: login response BD table dma address
*
* The following structure defines buffer info for generic PDUs such as iSCSI
* Login, Logout and NOP.
*/
struct generic_pdu_resc {
char *req_buf;
dma_addr_t req_dma_addr;
u32 req_buf_size;
char *req_wr_ptr;
struct iscsi_hdr resp_hdr;
char *resp_buf;
dma_addr_t resp_dma_addr;
u32 resp_buf_size;
char *resp_wr_ptr;
char *req_bd_tbl;
dma_addr_t req_bd_dma;
char *resp_bd_tbl;
dma_addr_t resp_bd_dma;
};
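The kernel-doc above describes staging login payload through a write pointer: data arrives in chunks, and req_wr_ptr records where the next chunk lands inside a preallocated request buffer. A minimal userspace sketch of that pattern (pdu_append is a hypothetical helper, not a driver function):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of the write-pointer staging used by generic_pdu_resc. */
struct pdu_buf {
	char *buf;
	size_t size;
	char *wr_ptr;		/* next write position inside buf */
};

static size_t pdu_append(struct pdu_buf *p, const char *data, size_t len)
{
	size_t room = p->size - (size_t)(p->wr_ptr - p->buf);

	if (len > room)
		len = room;	/* truncate rather than overflow */
	memcpy(p->wr_ptr, data, len);
	p->wr_ptr += len;
	return len;
}
```

Keeping the write pointer inside the structure means each caller only hands over its chunk; the staging buffer tracks its own fill level.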
struct qedi_conn {
struct iscsi_cls_conn *cls_conn;
struct qedi_ctx *qedi;
struct qedi_endpoint *ep;
struct list_head active_cmd_list;
spinlock_t list_lock; /* internal conn lock */
u32 active_cmd_count;
u32 cmd_cleanup_req;
u32 cmd_cleanup_cmpl;
u32 iscsi_conn_id;
int itt;
int abrt_conn;
#define QEDI_CID_RESERVED 0x5AFF
u32 fw_cid;
/*
* Buffer for login negotiation process
*/
struct generic_pdu_resc gen_pdu;
struct list_head tmf_work_list;
wait_queue_head_t wait_queue;
spinlock_t tmf_work_lock; /* tmf work lock */
unsigned long flags;
#define QEDI_CONN_FW_CLEANUP 1
};
struct qedi_cmd {
struct list_head io_cmd;
bool io_cmd_in_list;
struct iscsi_hdr hdr;
struct qedi_conn *conn;
struct scsi_cmnd *scsi_cmd;
struct scatterlist *sg;
struct qedi_io_bdt io_tbl;
struct iscsi_task_context request;
unsigned char *sense_buffer;
dma_addr_t sense_buffer_dma;
u16 task_id;
/* field populated for tmf work queue */
struct iscsi_task *task;
struct work_struct tmf_work;
int state;
#define CLEANUP_WAIT 1
#define CLEANUP_RECV 2
#define CLEANUP_WAIT_FAILED 3
#define CLEANUP_NOT_REQUIRED 4
#define LUN_RESET_RESPONSE_RECEIVED 5
#define RESPONSE_RECEIVED 6
int type;
#define TYPEIO 1
#define TYPERESET 2
struct qedi_work_map *list_tmf_work;
/* slowpath management */
bool use_slowpath;
struct iscsi_tm_rsp *tmf_resp_buf;
struct qedi_work cqe_work;
};
struct qedi_work_map {
struct list_head list;
struct qedi_cmd *qedi_cmd;
int rtid;
int state;
#define QEDI_WORK_QUEUED 1
#define QEDI_WORK_SCHEDULED 2
#define QEDI_WORK_EXIT 3
struct work_struct *ptr_tmf_work;
};
#define qedi_set_itt(task_id, itt) ((u32)(((task_id) & 0xffff) | ((itt) << 16)))
#define qedi_get_itt(cqe) (cqe.iscsi_hdr.cmd.itt >> 16)
#define QEDI_OFLD_WAIT_STATE(q) ((q)->state == EP_STATE_OFLDCONN_FAILED || \
(q)->state == EP_STATE_OFLDCONN_COMPL)
#endif /* _QEDI_ISCSI_H_ */
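The qedi_set_itt()/qedi_get_itt() macros above pack a firmware task id and the iSCSI ITT into one 32-bit word: task id in the low 16 bits, ITT in the high 16. A standalone sketch of the same packing, using hypothetical function names:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the ITT packing behind qedi_set_itt()/qedi_get_itt(). */
static uint32_t set_itt(uint16_t task_id, uint16_t itt)
{
	return (uint32_t)task_id | ((uint32_t)itt << 16);
}

static uint16_t unpack_itt(uint32_t packed)
{
	return (uint16_t)(packed >> 16);
}

static uint16_t unpack_tid(uint32_t packed)
{
	return (uint16_t)(packed & 0xffff);
}
```

Packing both identifiers into one word lets a single CQE field map a firmware completion back to the mid-layer task.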
This diff has been collapsed.
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#include "qedi.h"
#include "qedi_gbl.h"
#include "qedi_iscsi.h"
#include "qedi_dbg.h"
static inline struct qedi_ctx *qedi_dev_to_hba(struct device *dev)
{
struct Scsi_Host *shost = class_to_shost(dev);
return iscsi_host_priv(shost);
}
static ssize_t qedi_show_port_state(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct qedi_ctx *qedi = qedi_dev_to_hba(dev);
if (atomic_read(&qedi->link_state) == QEDI_LINK_UP)
return sprintf(buf, "Online\n");
else
return sprintf(buf, "Linkdown\n");
}
static ssize_t qedi_show_speed(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct qedi_ctx *qedi = qedi_dev_to_hba(dev);
struct qed_link_output if_link;
qedi_ops->common->get_link(qedi->cdev, &if_link);
return sprintf(buf, "%d Gbit\n", if_link.speed / 1000);
}
static DEVICE_ATTR(port_state, 0444, qedi_show_port_state, NULL);
static DEVICE_ATTR(speed, 0444, qedi_show_speed, NULL);
struct device_attribute *qedi_shost_attrs[] = {
&dev_attr_port_state,
&dev_attr_speed,
NULL
};
/*
* QLogic iSCSI Offload Driver
* Copyright (c) 2016 Cavium Inc.
*
* This software is available under the terms of the GNU General Public License
* (GPL) Version 2, available from the file COPYING in the main directory of
* this source tree.
*/
#define QEDI_MODULE_VERSION "8.10.3.0"
#define QEDI_DRIVER_MAJOR_VER 8
#define QEDI_DRIVER_MINOR_VER 10
#define QEDI_DRIVER_REV_VER 3
#define QEDI_DRIVER_ENG_VER 0
@@ -1988,9 +1988,9 @@ qla24xx_vport_create(struct fc_vport *fc_vport, bool disable)
scsi_qla_host_t *base_vha = shost_priv(fc_vport->shost);
scsi_qla_host_t *vha = NULL;
struct qla_hw_data *ha = base_vha->hw;
uint16_t options = 0;
int cnt;
struct req_que *req = ha->req_q_map[0];
struct qla_qpair *qpair;
ret = qla24xx_vport_create_req_sanity_check(fc_vport);
if (ret) {
@@ -2075,15 +2075,9 @@ qla24xx_vport_create(struct fc_vport *fc_vport, bool disable)
qlt_vport_create(vha, ha);
qla24xx_vport_disable(fc_vport, disable);
if (ha->flags.cpu_affinity_enabled) {
req = ha->req_q_map[1];
ql_dbg(ql_dbg_multiq, vha, 0xc000,
"Request queue %p attached with "
"VP[%d], cpu affinity =%d\n",
req, vha->vp_idx, ha->flags.cpu_affinity_enabled);
goto vport_queue;
} else if (ql2xmaxqueues == 1 || !ha->npiv_info)
if (!ql2xmqsupport || !ha->npiv_info)
goto vport_queue;
/* Create a request queue in QoS mode for the vport */
for (cnt = 0; cnt < ha->nvram_npiv_size; cnt++) {
if (memcmp(ha->npiv_info[cnt].port_name, vha->port_name, 8) == 0
@@ -2095,20 +2089,20 @@ qla24xx_vport_create(struct fc_vport *fc_vport, bool disable)
}
if (qos) {
ret = qla25xx_create_req_que(ha, options, vha->vp_idx, 0, 0,
qos);
if (!ret)
qpair = qla2xxx_create_qpair(vha, qos, vha->vp_idx);
if (!qpair)
ql_log(ql_log_warn, vha, 0x7084,
"Can't create request queue for VP[%d]\n",
"Can't create qpair for VP[%d]\n",
vha->vp_idx);
else {
ql_dbg(ql_dbg_multiq, vha, 0xc001,
"Request Que:%d Q0s: %d) created for VP[%d]\n",
ret, qos, vha->vp_idx);
"Queue pair: %d Qos: %d) created for VP[%d]\n",
qpair->id, qos, vha->vp_idx);
ql_dbg(ql_dbg_user, vha, 0x7085,
"Request Que:%d Q0s: %d) created for VP[%d]\n",
ret, qos, vha->vp_idx);
req = ha->req_q_map[ret];
"Queue Pair: %d Qos: %d) created for VP[%d]\n",
qpair->id, qos, vha->vp_idx);
req = qpair->req;
vha->qpair = qpair;
}
}
@@ -2162,10 +2156,10 @@ qla24xx_vport_delete(struct fc_vport *fc_vport)
clear_bit(vha->vp_idx, ha->vp_idx_map);
mutex_unlock(&ha->vport_lock);
if (vha->req->id && !ha->flags.cpu_affinity_enabled) {
if (qla25xx_delete_req_que(vha, vha->req) != QLA_SUCCESS)
if (vha->qpair->vp_idx == vha->vp_idx) {
if (qla2xxx_delete_qpair(vha, vha->qpair) != QLA_SUCCESS)
ql_log(ql_log_warn, vha, 0x7087,
"Queue delete failed.\n");
"Queue Pair delete failed.\n");
}
ql_log(ql_log_info, vha, 0x7088, "VP[%d] deleted.\n", id);
......
@@ -11,7 +11,7 @@
* ----------------------------------------------------------------------
* | Level | Last Value Used | Holes |
* ----------------------------------------------------------------------
* | Module Init and Probe | 0x0191 | 0x0146 |
* | Module Init and Probe | 0x0193 | 0x0146 |
* | | | 0x015b-0x0160 |
* | | | 0x016e |
* | Mailbox commands | 0x1199 | 0x1193 |
@@ -58,7 +58,7 @@
* | | | 0xb13a,0xb142 |
* | | | 0xb13c-0xb140 |
* | | | 0xb149 |
* | MultiQ | 0xc00c | |
* | MultiQ | 0xc010 | |
* | Misc | 0xd301 | 0xd031-0xd0ff |
* | | | 0xd101-0xd1fe |
* | | | 0xd214-0xd2fe |
......
@@ -401,6 +401,7 @@ typedef struct srb {
uint16_t type;
char *name;
int iocbs;
struct qla_qpair *qpair;
union {
struct srb_iocb iocb_cmd;
struct bsg_job *bsg_job;
@@ -2719,6 +2720,7 @@ struct isp_operations {
int (*get_flash_version) (struct scsi_qla_host *, void *);
int (*start_scsi) (srb_t *);
int (*start_scsi_mq) (srb_t *);
int (*abort_isp) (struct scsi_qla_host *);
int (*iospace_config)(struct qla_hw_data*);
int (*initialize_adapter)(struct scsi_qla_host *);
@@ -2730,8 +2732,10 @@ struct isp_operations {
#define QLA_MSIX_FW_MODE(m) (((m) & (BIT_7|BIT_8|BIT_9)) >> 7)
#define QLA_MSIX_FW_MODE_1(m) (QLA_MSIX_FW_MODE(m) == 1)
#define QLA_MSIX_DEFAULT 0x00
#define QLA_MSIX_RSP_Q 0x01
#define QLA_MSIX_DEFAULT 0x00
#define QLA_MSIX_RSP_Q 0x01
#define QLA_ATIO_VECTOR 0x02
#define QLA_MSIX_QPAIR_MULTIQ_RSP_Q 0x03
#define QLA_MIDX_DEFAULT 0
#define QLA_MIDX_RSP_Q 1
@@ -2745,9 +2749,11 @@ struct scsi_qla_host;
struct qla_msix_entry {
int have_irq;
int in_use;
uint32_t vector;
uint16_t entry;
struct rsp_que *rsp;
char name[30];
void *handle;
struct irq_affinity_notify irq_notify;
int cpuid;
};
@@ -2872,7 +2878,6 @@ struct rsp_que {
struct qla_msix_entry *msix;
struct req_que *req;
srb_t *status_srb; /* status continuation entry */
struct work_struct q_work;
dma_addr_t dma_fx00;
response_t *ring_fx00;
@@ -2909,6 +2914,37 @@ struct req_que {
uint8_t req_pkt[REQUEST_ENTRY_SIZE];
};
/*Queue pair data structure */
struct qla_qpair {
spinlock_t qp_lock;
atomic_t ref_count;
/* distill these fields down to 'online=0/1'
* ha->flags.eeh_busy
* ha->flags.pci_channel_io_perm_failure
* base_vha->loop_state
*/
uint32_t online:1;
/* move vha->flags.difdix_supported here */
uint32_t difdix_supported:1;
uint32_t delete_in_progress:1;
uint16_t id; /* qp number used with FW */
uint16_t num_active_cmd; /* cmds down at firmware */
cpumask_t cpu_mask; /* CPU mask for cpu affinity operation */
uint16_t vp_idx; /* vport ID */
mempool_t *srb_mempool;
/* to do: New driver: move queues to here instead of pointers */
struct req_que *req;
struct rsp_que *rsp;
struct atio_que *atio;
struct qla_msix_entry *msix; /* point to &ha->msix_entries[x] */
struct qla_hw_data *hw;
struct work_struct q_work;
struct list_head qp_list_elem; /* vha->qp_list */
};
/* Place holder for FW buffer parameters */
struct qlfc_fw {
void *fw_buf;
@@ -3004,7 +3040,6 @@ struct qla_hw_data {
uint32_t chip_reset_done :1;
uint32_t running_gold_fw :1;
uint32_t eeh_busy :1;
uint32_t cpu_affinity_enabled :1;
uint32_t disable_msix_handshake :1;
uint32_t fcp_prio_enabled :1;
uint32_t isp82xx_fw_hung:1;
@@ -3061,10 +3096,15 @@ struct qla_hw_data {
uint8_t mqenable;
struct req_que **req_q_map;
struct rsp_que **rsp_q_map;
struct qla_qpair **queue_pair_map;
unsigned long req_qid_map[(QLA_MAX_QUEUES / 8) / sizeof(unsigned long)];
unsigned long rsp_qid_map[(QLA_MAX_QUEUES / 8) / sizeof(unsigned long)];
unsigned long qpair_qid_map[(QLA_MAX_QUEUES / 8)
/ sizeof(unsigned long)];
uint8_t max_req_queues;
uint8_t max_rsp_queues;
uint8_t max_qpairs;
struct qla_qpair *base_qpair;
struct qla_npiv_entry *npiv_info;
uint16_t nvram_npiv_size;
@@ -3328,6 +3368,7 @@ struct qla_hw_data {
struct mutex vport_lock; /* Virtual port synchronization */
spinlock_t vport_slock; /* order is hardware_lock, then vport_slock */
struct mutex mq_lock; /* multi-queue synchronization */
struct completion mbx_cmd_comp; /* Serialize mbx access */
struct completion mbx_intr_comp; /* Used for completion notification */
struct completion dcbx_comp; /* For set port config notification */
@@ -3608,6 +3649,7 @@ typedef struct scsi_qla_host {
uint32_t fw_tgt_reported:1;
uint32_t bbcr_enable:1;
uint32_t qpairs_available:1;
} flags;
atomic_t loop_state;
@@ -3646,6 +3688,7 @@ typedef struct scsi_qla_host {
#define FX00_TARGET_SCAN 24
#define FX00_CRITEMP_RECOVERY 25
#define FX00_HOST_INFO_RESEND 26
#define QPAIR_ONLINE_CHECK_NEEDED 27
unsigned long pci_flags;
#define PFLG_DISCONNECTED 0 /* PCI device removed */
@@ -3704,10 +3747,13 @@ typedef struct scsi_qla_host {
/* List of pending PLOGI acks, protected by hw lock */
struct list_head plogi_ack_list;
struct list_head qp_list;
uint32_t vp_abort_cnt;
struct fc_vport *fc_vport; /* holds fc_vport * for each vport */
uint16_t vp_idx; /* vport ID */
struct qla_qpair *qpair; /* base qpair */
unsigned long vp_flags;
#define VP_IDX_ACQUIRED 0 /* bit no 0 */
@@ -3763,6 +3809,23 @@ struct qla_tgt_vp_map {
scsi_qla_host_t *vha;
};
struct qla2_sgx {
dma_addr_t dma_addr; /* OUT */
uint32_t dma_len; /* OUT */
uint32_t tot_bytes; /* IN */
struct scatterlist *cur_sg; /* IN */
/* for bookkeeping, bzero on initial invocation */
uint32_t bytes_consumed;
uint32_t num_bytes;
uint32_t tot_partial;
/* for debugging */
uint32_t num_sg;
srb_t *sp;
};
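`struct qla2_sgx` reads as a resumable cursor over a scatter list: `cur_sg` and `bytes_consumed` persist between calls, so a caller can carve the list's segments into bounded DMA chunks across multiple invocations. A userspace sketch of that cursor idea — the scatterlist is reduced to an array of segment lengths, and all names here are hypothetical rather than the driver's:

```c
/* Resumable scatter-list cursor. Field roles loosely mirror qla2_sgx
 * (cur segment + running byte count), but this is an illustration,
 * not the driver's walk code. */
struct sgx {
	const unsigned *sg_len;		/* segment lengths (stand-in for scatterlist) */
	unsigned num_sg;		/* number of segments */
	unsigned cur;			/* current segment index */
	unsigned off;			/* bytes consumed inside current segment */
	unsigned bytes_consumed;	/* running total across calls */
};

/* Return the next chunk length (at most 'max_len'), or 0 when exhausted. */
static unsigned sgx_next(struct sgx *s, unsigned max_len)
{
	if (s->cur >= s->num_sg)
		return 0;
	unsigned left = s->sg_len[s->cur] - s->off;
	unsigned take = left < max_len ? left : max_len;
	s->off += take;
	s->bytes_consumed += take;
	if (s->off == s->sg_len[s->cur]) {	/* segment fully consumed */
		s->cur++;
		s->off = 0;
	}
	return take;
}

/* Walk segments {5, 3} in chunks of at most 4 bytes: 4, 1, 3. */
static unsigned sgx_demo(void)
{
	static const unsigned lens[] = { 5, 3 };
	struct sgx s = { lens, 2, 0, 0, 0 };
	unsigned n, chunks = 0;
	while ((n = sgx_next(&s, 4)) != 0)
		chunks++;
	return chunks * 100 + s.bytes_consumed;	/* 3 chunks, 8 bytes */
}
```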
/*
* Macros to help code, maintain, etc.
*/
@@ -3775,21 +3838,34 @@ struct qla_tgt_vp_map {
(test_bit(ISP_ABORT_NEEDED, &ha->dpc_flags) || \
test_bit(LOOP_RESYNC_NEEDED, &ha->dpc_flags))
#define QLA_VHA_MARK_BUSY(__vha, __bail) do {	\
	atomic_inc(&__vha->vref_count);		\
	mb();					\
	if (__vha->flags.delete_progress) {	\
		atomic_dec(&__vha->vref_count);	\
		__bail = 1;			\
	} else {				\
		__bail = 0;			\
	}					\
} while (0)

#define QLA_VHA_MARK_NOT_BUSY(__vha)		\
	atomic_dec(&__vha->vref_count);		\
#define QLA_QPAIR_MARK_BUSY(__qpair, __bail) do {	\
	atomic_inc(&__qpair->ref_count);		\
	mb();						\
	if (__qpair->delete_in_progress) {		\
		atomic_dec(&__qpair->ref_count);	\
		__bail = 1;				\
	} else {					\
		__bail = 0;				\
	}						\
} while (0)

#define QLA_QPAIR_MARK_NOT_BUSY(__qpair)		\
	atomic_dec(&__qpair->ref_count);		\
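The `MARK_BUSY`/`MARK_NOT_BUSY` pairs implement the increment-then-recheck idiom: bump the reference count, issue a memory barrier, then re-test the delete flag and back out (bail) if teardown has already begun, so a dying vha or qpair can never gain new users once `delete_in_progress` is observed. A userspace sketch of the same protocol, with C11 atomics standing in for the kernel's `atomic_t` and `mb()` — a simplified illustration, not the kernel code:

```c
#include <stdatomic.h>
#include <stdbool.h>

struct qpair_sketch {
	atomic_int ref_count;
	atomic_bool delete_in_progress;
};

/* Returns true if the qpair was safely marked busy, false if the caller
 * must bail because teardown started. seq_cst atomics order the
 * increment before the flag check, playing the role of mb() above. */
static bool qpair_mark_busy(struct qpair_sketch *qp)
{
	atomic_fetch_add(&qp->ref_count, 1);
	if (atomic_load(&qp->delete_in_progress)) {
		atomic_fetch_sub(&qp->ref_count, 1);	/* back out */
		return false;
	}
	return true;
}

static void qpair_mark_not_busy(struct qpair_sketch *qp)
{
	atomic_fetch_sub(&qp->ref_count, 1);
}

/* Mark busy while live, then flip the delete flag and confirm we bail. */
static int qpair_demo(void)
{
	struct qpair_sketch qp;
	atomic_init(&qp.ref_count, 0);
	atomic_init(&qp.delete_in_progress, false);
	if (!qpair_mark_busy(&qp))
		return -1;
	int live = atomic_load(&qp.ref_count);		/* 1 while busy */
	qpair_mark_not_busy(&qp);
	atomic_store(&qp.delete_in_progress, true);
	int bailed = qpair_mark_busy(&qp) ? 0 : 1;	/* must bail now */
	return live * 10 + bailed;
}
```

The deleter's side (not shown in this hunk) is the mirror image: set the flag first, then wait for the reference count to drain before freeing.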
/*
* qla2x00 local function return status codes
*/