Commit df910390 authored by Linus Torvalds

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull first round of SCSI updates from James Bottomley:
 "This includes one new driver: cxlflash plus the usual grab bag of
  updates for the major drivers: qla2xxx, ipr, storvsc, pm80xx, hptiop,
  plus a few assorted fixes.

  There's another tranche coming, but I want to incubate it another few
  days in the checkers, plus it includes an mpt2sas separated lifetime
  fix, which Avago won't get done testing until Friday"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (85 commits)
  aic94xx: set an error code on failure
  storvsc: Set the error code correctly in failure conditions
  storvsc: Allow write_same when host is windows 10
  storvsc: use storage protocol version to determine storage capabilities
  storvsc: use correct defaults for values determined by protocol negotiation
  storvsc: Untangle the storage protocol negotiation from the vmbus protocol negotiation.
  storvsc: Use a single value to track protocol versions
  storvsc: Rather than look for sets of specific protocol versions, make decisions based on ranges.
  cxlflash: Remove unused variable from queuecommand
  cxlflash: shift wrapping bug in afu_link_reset()
  cxlflash: off by one bug in cxlflash_show_port_status()
  cxlflash: Virtual LUN support
  cxlflash: Superpipe support
  cxlflash: Base error recovery support
  qla2xxx: Update driver version to 8.07.00.26-k
  qla2xxx: Add pci device id 0x2261.
  qla2xxx: Fix missing device login retries.
  qla2xxx: do not clear slot in outstanding cmd array
  qla2xxx: Remove decrement of sp reference count in abort handler.
  qla2xxx: Add support to show MPI and PEP FW version for ISP27xx.
  ...
...@@ -316,6 +316,7 @@ Code Seq#(hex) Include File Comments
0xB3  00     linux/mmc/ioctl.h
0xC0  00-0F  linux/usb/iowarrior.h
0xCA  00-0F  uapi/misc/cxl.h
0xCA  80-8F  uapi/scsi/cxlflash_ioctl.h
0xCB  00-1F  CBM serial IEC bus  in development:
                                 <mailto:michael.klein@puffin.lb.shuttle.de>
0xCD  01     linux/reiserfs_fs.h
......
Introduction
============
The IBM Power architecture provides support for CAPI (Coherent
Accelerator Power Interface), which is available to certain PCIe slots
on Power 8 systems. CAPI can be thought of as a special tunneling
protocol through PCIe that allows PCIe adapters to look like special
purpose co-processors which can read or write an application's
memory and generate page faults. As a result, the host interface to
an adapter running in CAPI mode does not require the data buffers to
be mapped to the device's memory (IOMMU bypass) nor does it require
memory to be pinned.
On Linux, Coherent Accelerator (CXL) kernel services present CAPI
devices as a PCI device by implementing a virtual PCI host bridge.
This abstraction simplifies the infrastructure and programming
model, allowing for drivers to look similar to other native PCI
device drivers.
CXL provides a mechanism by which user space applications can
directly talk to a device (network or storage) bypassing the typical
kernel/device driver stack. The CXL Flash Adapter Driver enables a
user space application direct access to Flash storage.
The CXL Flash Adapter Driver is a kernel module that sits in the
SCSI stack as a low level device driver (below the SCSI disk and
protocol drivers) for the IBM CXL Flash Adapter. This driver is
responsible for the initialization of the adapter, setting up the
special path for user space access, and performing error recovery. It
communicates directly with the Flash Accelerator Functional Unit (AFU)
as described in Documentation/powerpc/cxl.txt.
The cxlflash driver supports two, mutually exclusive, modes of
operation at the device (LUN) level:
- Any flash device (LUN) can be configured to be accessed as a
regular disk device (i.e.: /dev/sdc). This is the default mode.
- Any flash device (LUN) can be configured to be accessed from
user space with a special block library. This mode further
specifies the means of accessing the device and provides for
either raw access to the entire LUN (referred to as direct
or physical LUN access) or access to a kernel/AFU-mediated
partition of the LUN (referred to as virtual LUN access). The
segmentation of a disk device into virtual LUNs is assisted
by special translation services provided by the Flash AFU.
Overview
========
The Coherent Accelerator Interface Architecture (CAIA) introduces a
concept of a master context. A master typically has special privileges
granted to it by the kernel or hypervisor allowing it to perform AFU
wide management and control. The master may or may not be involved
directly in each user I/O, but at the minimum is involved in the
initial setup before the user application is allowed to send requests
directly to the AFU.
The CXL Flash Adapter Driver establishes a master context with the
AFU. It uses memory mapped I/O (MMIO) for this control and setup. The
Adapter Problem Space Memory Map looks like this:
+-------------------------------+
| 512 * 64 KB User MMIO |
| (per context) |
| User Accessible |
+-------------------------------+
| 512 * 128 B per context |
| Provisioning and Control |
| Trusted Process accessible |
+-------------------------------+
| 64 KB Global |
| Trusted Process accessible |
+-------------------------------+
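The region sizes in the map above multiply out as follows; a quick sketch in C (the constant names are ours for illustration, not taken from the driver sources):

```c
#include <stdint.h>

/* Sizes of the Adapter Problem Space Memory Map regions shown above.
 * Constant names are illustrative, not the driver's. */
#define NUM_CONTEXTS      512
#define USER_MMIO_BYTES   (64 * 1024)   /* per-context user MMIO */
#define CTRL_BYTES        128           /* per-context provisioning/control */
#define GLOBAL_BYTES      (64 * 1024)   /* trusted-process global area */

/* Total bytes of user-accessible MMIO: 512 * 64 KB = 32 MB. */
static uint64_t user_region_size(void)
{
	return (uint64_t)NUM_CONTEXTS * USER_MMIO_BYTES;
}

/* Total bytes of provisioning/control state: 512 * 128 B = 64 KB. */
static uint64_t ctrl_region_size(void)
{
	return (uint64_t)NUM_CONTEXTS * CTRL_BYTES;
}
```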
This driver configures itself into the SCSI software stack as an
adapter driver. The driver is the only entity that is considered a
Trusted Process to program the Provisioning and Control and Global
areas in the MMIO Space shown above. The master context driver
discovers all LUNs attached to the CXL Flash adapter and instantiates
scsi block devices (/dev/sdb, /dev/sdc etc.) for each unique LUN
seen from each path.
Once these scsi block devices are instantiated, an application
written to a specification provided by the block library may get
access to the Flash from user space (without requiring a system call).
This master context driver also provides a series of ioctls for this
block library to enable this user space access. The driver supports
two modes for accessing the block device.
The first mode is called virtual mode. In this mode a single scsi
block device (/dev/sdb) may be carved up into any number of distinct
virtual LUNs. The virtual LUNs may be resized as long as the sum of
the sizes of all the virtual LUNs, along with their associated
meta-data, does not exceed the physical capacity.
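The capacity constraint above can be sketched as a simple check; the bookkeeping structure and names below are hypothetical, not the driver's actual accounting:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative model of the virtual-mode constraint: the sum of all
 * virtual LUN sizes plus their meta-data must not exceed the physical
 * capacity. Units are blocks; all names are ours. */
struct vlun {
	uint64_t size;     /* provisioned size */
	uint64_t metadata; /* translation-table overhead */
};

/* Return nonzero when resizing vluns[idx] to new_size still fits
 * within the physical capacity. */
static int resize_fits(const struct vlun *vluns, size_t n, size_t idx,
		       uint64_t new_size, uint64_t capacity)
{
	uint64_t used = 0;
	size_t i;

	for (i = 0; i < n; i++)
		used += (i == idx ? new_size : vluns[i].size)
			+ vluns[i].metadata;
	return used <= capacity;
}
```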
The second mode is called the physical mode. In this mode a single
block device (/dev/sdb) may be opened directly by the block library
and the entire space for the LUN is available to the application.
Only the physical mode provides persistence of the data, i.e. the
data written to the block device will survive application exit and
restart and also reboot. The virtual LUNs do not persist (i.e. do
not survive after the application terminates or the system reboots).
Block library API
=================
Applications intending to get access to the CXL Flash from user
space should use the block library, as it abstracts the details of
interfacing directly with the cxlflash driver that are necessary for
performing administrative actions (i.e.: setup, tear down, resize).
The block library can be thought of as a 'user' of services,
implemented as IOCTLs, that are provided by the cxlflash driver
specifically for devices (LUNs) operating in user space access
mode. While it is not a requirement that applications understand
the interface between the block library and the cxlflash driver,
a high-level overview of each supported service (IOCTL) is provided
below.
The block library can be found on GitHub:
http://www.github.com/mikehollinger/ibmcapikv
CXL Flash Driver IOCTLs
=======================
Users, such as the block library, that wish to interface with a flash
device (LUN) via user space access need to use the services provided
by the cxlflash driver. As these services are implemented as ioctls,
a file descriptor handle must first be obtained in order to establish
the communication channel between a user and the kernel. This file
descriptor is obtained by opening the device special file associated
with the scsi disk device (/dev/sdb) that was created during LUN
discovery. Because of the cxlflash driver's position within the
SCSI protocol stack, this open is not actually seen by the cxlflash
driver. Upon successful open, the user receives a file descriptor
(herein referred to as fd1) that should be used for issuing the
subsequent ioctls listed below.
The structure definitions for these IOCTLs are available in:
uapi/scsi/cxlflash_ioctl.h
DK_CXLFLASH_ATTACH
------------------
This ioctl obtains, initializes, and starts a context using the CXL
kernel services. These services specify a context id (u16) by which
to uniquely identify the context and its allocated resources. The
services additionally provide a second file descriptor (herein
referred to as fd2) that is used by the block library to initiate
memory mapped I/O (via mmap()) to the CXL flash device and poll for
completion events. This file descriptor is intentionally installed by
this driver and not the CXL kernel services to allow for intermediary
notification and access in the event of a non-user-initiated close(),
such as a killed process. This design point is described in further
detail in the description for the DK_CXLFLASH_DETACH ioctl.
There are a few important aspects regarding the "tokens" (context id
and fd2) that are provided back to the user:
- These tokens are only valid for the process under which they
were created. The child of a forked process cannot continue
to use the context id or file descriptor created by its parent
(see DK_CXLFLASH_VLUN_CLONE for further details).
- These tokens are only valid for the lifetime of the context and
the process under which they were created. Once either is
destroyed, the tokens are to be considered stale and subsequent
usage will result in errors.
- When a context is no longer needed, the user shall detach from
the context via the DK_CXLFLASH_DETACH ioctl.
- A close on fd2 will invalidate the tokens. This operation is not
required by the user.
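The attach handshake described above might look like the following sketch. The request layout and ioctl number mirror our reading of uapi/scsi/cxlflash_ioctl.h at the time of this merge; treat every field name and the 0xCA/0x80 encoding as an assumption and build against the real header in practice:

```c
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

/* ASSUMED layout, mirroring uapi/scsi/cxlflash_ioctl.h; illustrative only. */
struct dk_cxlflash_hdr {
	uint16_t version;
	uint16_t rsvd[3];
	uint64_t flags;
	uint64_t return_flags;
};

struct dk_cxlflash_attach {
	struct dk_cxlflash_hdr hdr;
	uint64_t num_interrupts;
	uint64_t context_id;   /* token: identifies context and resources */
	uint64_t mmio_size;
	uint64_t block_size;
	uint64_t adap_fd;      /* token: fd2, installed by the driver */
	uint64_t last_lba;
	uint64_t max_xfer;
	uint64_t reserved[8];
};

/* Assumed encoding: cxlflash owns ioctl range 0xCA/0x80-0x8F. */
#define DK_CXLFLASH_ATTACH _IOWR(0xCA, 0x80, struct dk_cxlflash_attach)

/* Open the sd device (fd1), attach a context, and mmap via fd2.
 * Returns 0 on success, -1 on any failure. */
static int attach_context(const char *sd_path, void **mmio, uint64_t *ctxid)
{
	struct dk_cxlflash_attach attach;
	int fd1, fd2;

	fd1 = open(sd_path, O_RDWR);	/* fd1: for the ioctls below */
	if (fd1 < 0)
		return -1;

	memset(&attach, 0, sizeof(attach));
	if (ioctl(fd1, DK_CXLFLASH_ATTACH, &attach) < 0) {
		close(fd1);
		return -1;
	}

	fd2 = (int)attach.adap_fd;	/* fd2: MMIO + completion events */
	*ctxid = attach.context_id;
	*mmio = mmap(NULL, attach.mmio_size, PROT_READ | PROT_WRITE,
		     MAP_SHARED, fd2, 0);
	return *mmio == MAP_FAILED ? -1 : 0;
}
```

Per the token rules above, neither fd2 nor the context id may be reused by a forked child; the child must attach its own context and clone (see DK_CXLFLASH_VLUN_CLONE).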
DK_CXLFLASH_USER_DIRECT
-----------------------
This ioctl is responsible for transitioning the LUN to direct
(physical) mode access and configuring the AFU for direct access from
user space on a per-context basis. Additionally, the block size and
last logical block address (LBA) are returned to the user.
As mentioned previously, when operating in user space access mode,
LUNs may be accessed in whole or in part. Only one mode is allowed
at a time and if one mode is active (outstanding references exist),
requests to use the LUN in a different mode are denied.
The AFU is configured for direct access from user space by adding an
entry to the AFU's resource handle table. The index of the entry is
treated as a resource handle that is returned to the user. The user
is then able to use the handle to reference the LUN during I/O.
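The "table index as handle" scheme can be modeled in a few lines; this is our own illustrative model, not the AFU's actual table format:

```c
#include <stddef.h>

#define RHT_ENTRIES 16	/* hypothetical table size */

/* Illustrative resource handle table entry: the index of a claimed
 * entry is the handle the user passes with each subsequent I/O. */
struct rht_entry {
	int in_use;
	unsigned long lun_ref;	/* stand-in for per-entry LUN state */
};

/* Claim a free entry and return its index (the resource handle),
 * or -1 when the table is full. */
static int rht_claim(struct rht_entry *tbl, unsigned long lun_ref)
{
	size_t i;

	for (i = 0; i < RHT_ENTRIES; i++) {
		if (!tbl[i].in_use) {
			tbl[i].in_use = 1;
			tbl[i].lun_ref = lun_ref;
			return (int)i;
		}
	}
	return -1;
}

/* Inverse operation (cf. DK_CXLFLASH_RELEASE): the entry becomes
 * available for reuse and the old handle goes stale. */
static void rht_release(struct rht_entry *tbl, int handle)
{
	tbl[handle].in_use = 0;
}
```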
DK_CXLFLASH_USER_VIRTUAL
------------------------
This ioctl is responsible for transitioning the LUN to virtual mode
of access and configuring the AFU for virtual access from user space
on a per-context basis. Additionally, the block size and last logical
block address (LBA) are returned to the user.
As mentioned previously, when operating in user space access mode,
LUNs may be accessed in whole or in part. Only one mode is allowed
at a time and if one mode is active (outstanding references exist),
requests to use the LUN in a different mode are denied.
The AFU is configured for virtual access from user space by adding
an entry to the AFU's resource handle table. The index of the entry
is treated as a resource handle that is returned to the user. The
user is then able to use the handle to reference the LUN during I/O.
By default, the virtual LUN is created with a size of 0. The user
would need to use the DK_CXLFLASH_VLUN_RESIZE ioctl to grow
the virtual LUN to a desired size. To avoid having to perform this
resize for the initial creation of the virtual LUN, the user has the
option of specifying a size as part of the DK_CXLFLASH_USER_VIRTUAL
ioctl, such that when success is returned to the user, the
resource handle that is provided is already referencing provisioned
storage. This is reflected by the last LBA being a non-zero value.
DK_CXLFLASH_VLUN_RESIZE
-----------------------
This ioctl is responsible for resizing a previously created virtual
LUN and will fail if invoked upon a LUN that is not in virtual
mode. Upon success, an updated last LBA is returned to the user
indicating the new size of the virtual LUN associated with the
resource handle.
The partitioning of virtual LUNs is jointly mediated by the cxlflash
driver and the AFU. An allocation table is kept for each LUN that is
operating in the virtual mode and used to program a LUN translation
table that the AFU references when provided with a resource handle.
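A translation of a virtual LBA through such a table might be modeled as below; the chunk granularity and names are hypothetical, not the AFU's real format:

```c
#include <stdint.h>

#define CHUNK_BLOCKS 256	/* hypothetical chunk granularity, in blocks */

/* Illustrative virtual-to-physical translation: each virtual chunk
 * maps to a physical chunk recorded in a per-handle table that the
 * AFU consults when handed a resource handle. Returns (uint64_t)-1
 * when the virtual LBA lies beyond the provisioned size. */
static uint64_t vlba_to_plba(const uint64_t *chunk_tbl, uint64_t nchunks,
			     uint64_t vlba)
{
	uint64_t chunk = vlba / CHUNK_BLOCKS;

	if (chunk >= nchunks)
		return (uint64_t)-1;
	return chunk_tbl[chunk] * CHUNK_BLOCKS + vlba % CHUNK_BLOCKS;
}
```

Resizing a virtual LUN then amounts to growing or shrinking the chunk table (and resizing to 0, as DK_CXLFLASH_RELEASE does, frees it entirely).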
DK_CXLFLASH_RELEASE
-------------------
This ioctl is responsible for releasing a previously obtained
reference to either a physical or virtual LUN. This can be
thought of as the inverse of the DK_CXLFLASH_USER_DIRECT or
DK_CXLFLASH_USER_VIRTUAL ioctls. Upon success, the resource handle
is no longer valid and the entry in the resource handle table is
made available to be used again.
As part of the release process for virtual LUNs, the virtual LUN
is first resized to 0 to clear out and free the translation tables
associated with the virtual LUN reference.
DK_CXLFLASH_DETACH
------------------
This ioctl is responsible for unregistering a context with the
cxlflash driver and releasing outstanding resources that were
not explicitly released via the DK_CXLFLASH_RELEASE ioctl. Upon
success, all "tokens" which had been provided to the user from the
DK_CXLFLASH_ATTACH onward are no longer valid.
DK_CXLFLASH_VLUN_CLONE
----------------------
This ioctl is responsible for cloning a previously created
context to a more recently created context. It exists solely to
support maintaining user space access to storage after a process
forks. Upon success, the child process (which invoked the ioctl)
will have access to the same LUNs via the same resource handle(s)
and fd2 as the parent, but under a different context.
Context sharing across processes is not supported with CXL and
therefore each fork must be met with establishing a new context
for the child process. This ioctl simplifies the state management
and playback required by a user in such a scenario. When a process
forks, the child process can clone the parent's context by first creating
a context (via DK_CXLFLASH_ATTACH) and then using this ioctl to
perform the clone from the parent to the child.
The clone itself is fairly simple. The resource handle and LUN
translation tables are copied from the parent context to the child's
and then synced with the AFU.
DK_CXLFLASH_VERIFY
------------------
This ioctl is used to detect various changes such as the capacity of
the disk changing, the number of LUNs visible changing, etc. In cases
where the changes affect the application (such as a LUN resize), the
cxlflash driver will report the changed state to the application.
The user calls in when they want to validate that a LUN hasn't been
changed in response to a check condition. As the user is operating out
of band from the kernel, they will see these types of events without
the kernel's knowledge. When encountered, the user's architected
behavior is to call in to this ioctl, indicating what they want to
verify and passing along any appropriate information. For now, only
verifying a LUN change (i.e.: a size change) with sense data is
supported.
DK_CXLFLASH_RECOVER_AFU
-----------------------
This ioctl is used to drive recovery (if such an action is warranted)
of a specified user context. Any state associated with the user context
is re-established upon successful recovery.
User contexts are put into an error condition when the device needs to
be reset or is terminating. Users are notified of this error condition
by seeing all 0xF's on an MMIO read. Upon encountering this, the
architected behavior for a user is to call into this ioctl to recover
their context. A user may also call into this ioctl at any time to
check if the device is operating normally. If a failure is returned
from this ioctl, the user is expected to gracefully clean up their
context via release/detach ioctls. Until they do, the context they
hold is not relinquished. The user may also optionally exit the process
at which time the context/resources they held will be freed as part of
the release fop.
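A user space poller might gate its recovery path on the all-0xF sentinel described above; a minimal sketch (a real reader would use a volatile MMIO accessor):

```c
#include <stdint.h>

/* Returns 1 when the value read from MMIO indicates the context is in
 * error (device reset or terminating) and DK_CXLFLASH_RECOVER_AFU
 * should be attempted; 0 for a normal register read. */
static int needs_recovery(uint64_t mmio_val)
{
	return mmio_val == ~0ULL;	/* all 0xF's */
}
```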
DK_CXLFLASH_MANAGE_LUN
----------------------
This ioctl is used to switch a LUN from a mode where it is available
for file-system access (legacy), to a mode where it is set aside for
exclusive user space access (superpipe). In case a LUN is visible
across multiple ports and adapters, this ioctl is used to uniquely
identify each LUN by its World Wide Node Name (WWNN).
...@@ -8098,7 +8098,7 @@ S: Supported
F:	drivers/scsi/pmcraid.*

PMC SIERRA PM8001 DRIVER
M:	Jack Wang <jinpu.wang@profitbricks.com>
M:	lindar_liu@usish.com
L:	pmchba@pmcs.com
L:	linux-scsi@vger.kernel.org
......
...@@ -1859,6 +1859,15 @@ mptctl_do_mpt_command (struct mpt_ioctl_command karg, void __user *mfPtr)
	}
	spin_unlock_irqrestore(&ioc->taskmgmt_lock, flags);

	/* Basic sanity checks to prevent underflows or integer overflows */
	if (karg.maxReplyBytes < 0 ||
	    karg.dataInSize < 0 ||
	    karg.dataOutSize < 0 ||
	    karg.dataSgeOffset < 0 ||
	    karg.maxSenseBytes < 0 ||
	    karg.dataSgeOffset > ioc->req_sz / 4)
		return -EINVAL;

	/* Verify that the final request frame will not be too large.
	 */
	sz = karg.dataSgeOffset * 4;
......
...@@ -345,6 +345,7 @@ source "drivers/scsi/cxgbi/Kconfig"
source "drivers/scsi/bnx2i/Kconfig"
source "drivers/scsi/bnx2fc/Kconfig"
source "drivers/scsi/be2iscsi/Kconfig"
source "drivers/scsi/cxlflash/Kconfig"

config SGIWD93_SCSI
	tristate "SGI WD93C93 SCSI Driver"
......
...@@ -102,6 +102,7 @@ obj-$(CONFIG_SCSI_7000FASST)	+= wd7000.o
obj-$(CONFIG_SCSI_EATA)		+= eata.o
obj-$(CONFIG_SCSI_DC395x)	+= dc395x.o
obj-$(CONFIG_SCSI_AM53C974)	+= esp_scsi.o	am53c974.o
obj-$(CONFIG_CXLFLASH)		+= cxlflash/
obj-$(CONFIG_MEGARAID_LEGACY)	+= megaraid.o
obj-$(CONFIG_MEGARAID_NEWGEN)	+= megaraid/
obj-$(CONFIG_MEGARAID_SAS)	+= megaraid/
......
...@@ -109,6 +109,7 @@ static int asd_map_memio(struct asd_ha_struct *asd_ha)
		if (!io_handle->addr) {
			asd_printk("couldn't map MBAR%d of %s\n", i==0?0:1,
				   pci_name(asd_ha->pcidev));
			err = -ENOMEM;
			goto Err_unreq;
		}
	}
......
...@@ -851,6 +851,8 @@ bfad_im_module_exit(void)
	if (bfad_im_scsi_vport_transport_template)
		fc_release_transport(bfad_im_scsi_vport_transport_template);

	idr_destroy(&bfad_im_port_index);
}

void
......
#
# IBM CXL-attached Flash Accelerator SCSI Driver
#
config CXLFLASH
tristate "Support for IBM CAPI Flash"
depends on PCI && SCSI && CXL && EEH
default m
help
Allows CAPI Accelerated IO to Flash
If unsure, say N.
obj-$(CONFIG_CXLFLASH) += cxlflash.o
cxlflash-y += main.o superpipe.o lunmgt.o vlun.o
/*
* CXL Flash Device Driver
*
* Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
* Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
*
* Copyright (C) 2015 IBM Corporation
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#ifndef _CXLFLASH_COMMON_H
#define _CXLFLASH_COMMON_H
#include <linux/list.h>
#include <linux/types.h>
#include <scsi/scsi.h>
#include <scsi/scsi_device.h>
#define MAX_CONTEXT CXLFLASH_MAX_CONTEXT /* num contexts per afu */
#define CXLFLASH_BLOCK_SIZE 4096 /* 4K blocks */
#define CXLFLASH_MAX_XFER_SIZE 16777216 /* 16MB transfer */
#define CXLFLASH_MAX_SECTORS (CXLFLASH_MAX_XFER_SIZE/512) /* SCSI wants
max_sectors
in units of
512 byte
sectors
*/
#define NUM_RRQ_ENTRY 16 /* for master issued cmds */
#define MAX_RHT_PER_CONTEXT (PAGE_SIZE / sizeof(struct sisl_rht_entry))
/* AFU command retry limit */
#define MC_RETRY_CNT 5 /* sufficient for SCSI check and
certain AFU errors */
/* Command management definitions */
#define CXLFLASH_NUM_CMDS (2 * CXLFLASH_MAX_CMDS) /* Must be a pow2 for
alignment and more
efficient array
index derivation
*/
#define CXLFLASH_MAX_CMDS 16
#define CXLFLASH_MAX_CMDS_PER_LUN CXLFLASH_MAX_CMDS
static inline void check_sizes(void)
{
BUILD_BUG_ON_NOT_POWER_OF_2(CXLFLASH_NUM_CMDS);
}
/* AFU defines a fixed size of 4K for command buffers (borrow 4K page define) */
#define CMD_BUFSIZE SIZE_4K
/* flags in IOA status area for host use */
#define B_DONE 0x01
#define B_ERROR 0x02 /* set with B_DONE */
#define B_TIMEOUT 0x04 /* set with B_DONE & B_ERROR */
enum cxlflash_lr_state {
LINK_RESET_INVALID,
LINK_RESET_REQUIRED,
LINK_RESET_COMPLETE
};
enum cxlflash_init_state {
INIT_STATE_NONE,
INIT_STATE_PCI,
INIT_STATE_AFU,
INIT_STATE_SCSI
};
enum cxlflash_state {
STATE_NORMAL, /* Normal running state, everything good */
STATE_LIMBO, /* Limbo running state, trying to reset/recover */
STATE_FAILTERM /* Failed/terminating state, error out users/threads */
};
/*
* Each context has its own set of resource handles that is visible
* only from that context.
*/
struct cxlflash_cfg {
struct afu *afu;
struct cxl_context *mcctx;
struct pci_dev *dev;
struct pci_device_id *dev_id;
struct Scsi_Host *host;
ulong cxlflash_regs_pci;
struct work_struct work_q;
enum cxlflash_init_state init_state;
enum cxlflash_lr_state lr_state;
int lr_port;
struct cxl_afu *cxl_afu;
struct pci_pool *cxlflash_cmd_pool;
struct pci_dev *parent_dev;
atomic_t recovery_threads;
struct mutex ctx_recovery_mutex;
struct mutex ctx_tbl_list_mutex;
struct ctx_info *ctx_tbl[MAX_CONTEXT];
struct list_head ctx_err_recovery; /* contexts w/ recovery pending */
struct file_operations cxl_fops;
atomic_t num_user_contexts;
/* Parameters that are LUN table related */
int last_lun_index[CXLFLASH_NUM_FC_PORTS];
int promote_lun_index;
struct list_head lluns; /* list of llun_info structs */
wait_queue_head_t tmf_waitq;
bool tmf_active;
wait_queue_head_t limbo_waitq;
enum cxlflash_state state;
};
struct afu_cmd {
struct sisl_ioarcb rcb; /* IOARCB (cache line aligned) */
struct sisl_ioasa sa; /* IOASA must follow IOARCB */
spinlock_t slock;
struct completion cevent;
char *buf; /* per command buffer */
struct afu *parent;
int slot;
atomic_t free;
u8 cmd_tmf:1;
/* As per the SISLITE spec the IOARCB EA has to be 16-byte aligned.
* However for performance reasons the IOARCB/IOASA should be
* cache line aligned.
*/
} __aligned(cache_line_size());
struct afu {
/* Stuff requiring alignment go first. */
u64 rrq_entry[NUM_RRQ_ENTRY]; /* 128B RRQ */
/*
* Command & data for AFU commands.
*/
struct afu_cmd cmd[CXLFLASH_NUM_CMDS];
/* Beware of alignment till here. Preferably introduce new
* fields after this point
*/
/* AFU HW */
struct cxl_ioctl_start_work work;
struct cxlflash_afu_map *afu_map; /* entire MMIO map */
struct sisl_host_map *host_map; /* MC host map */
struct sisl_ctrl_map *ctrl_map; /* MC control map */
ctx_hndl_t ctx_hndl; /* master's context handle */
u64 *hrrq_start;
u64 *hrrq_end;
u64 *hrrq_curr;
bool toggle;
bool read_room;
atomic64_t room;
u64 hb;
u32 cmd_couts; /* Number of command checkouts */
u32 internal_lun; /* User-desired LUN mode for this AFU */
char version[8];
u64 interface_version;
struct cxlflash_cfg *parent; /* Pointer back to parent cxlflash_cfg */
};
static inline u64 lun_to_lunid(u64 lun)
{
u64 lun_id;
int_to_scsilun(lun, (struct scsi_lun *)&lun_id);
return swab64(lun_id);
}
int cxlflash_send_cmd(struct afu *, struct afu_cmd *);
void cxlflash_wait_resp(struct afu *, struct afu_cmd *);
int cxlflash_afu_reset(struct cxlflash_cfg *);
struct afu_cmd *cxlflash_cmd_checkout(struct afu *);
void cxlflash_cmd_checkin(struct afu_cmd *);
int cxlflash_afu_sync(struct afu *, ctx_hndl_t, res_hndl_t, u8);
void cxlflash_list_init(void);
void cxlflash_term_global_luns(void);
void cxlflash_free_errpage(void);
int cxlflash_ioctl(struct scsi_device *, int, void __user *);
void cxlflash_stop_term_user_contexts(struct cxlflash_cfg *);
int cxlflash_mark_contexts_error(struct cxlflash_cfg *);
void cxlflash_term_local_luns(struct cxlflash_cfg *);
void cxlflash_restore_luntable(struct cxlflash_cfg *);
#endif /* ifndef _CXLFLASH_COMMON_H */
/*
* CXL Flash Device Driver
*
* Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
* Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
*
* Copyright (C) 2015 IBM Corporation
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <misc/cxl.h>
#include <asm/unaligned.h>
#include <scsi/scsi_host.h>
#include <uapi/scsi/cxlflash_ioctl.h>
#include "sislite.h"
#include "common.h"
#include "vlun.h"
#include "superpipe.h"
/**
* create_local() - allocate and initialize a local LUN information structure
* @sdev: SCSI device associated with LUN.
* @wwid: World Wide Node Name for LUN.
*
* Return: Allocated local llun_info structure on success, NULL on failure
*/
static struct llun_info *create_local(struct scsi_device *sdev, u8 *wwid)
{
struct llun_info *lli = NULL;
lli = kzalloc(sizeof(*lli), GFP_KERNEL);
if (unlikely(!lli)) {
pr_err("%s: could not allocate lli\n", __func__);
goto out;
}
lli->sdev = sdev;
lli->newly_created = true;
lli->host_no = sdev->host->host_no;
lli->in_table = false;
memcpy(lli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN);
out:
return lli;
}
/**
* create_global() - allocate and initialize a global LUN information structure
* @sdev: SCSI device associated with LUN.
* @wwid: World Wide Node Name for LUN.
*
* Return: Allocated global glun_info structure on success, NULL on failure
*/
static struct glun_info *create_global(struct scsi_device *sdev, u8 *wwid)
{
struct glun_info *gli = NULL;
gli = kzalloc(sizeof(*gli), GFP_KERNEL);
if (unlikely(!gli)) {
pr_err("%s: could not allocate gli\n", __func__);
goto out;
}
mutex_init(&gli->mutex);
memcpy(gli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN);
out:
return gli;
}
/**
* refresh_local() - find and update local LUN information structure by WWID
* @cfg: Internal structure associated with the host.
* @wwid: WWID associated with LUN.
*
* When the LUN is found, mark it by updating its newly_created field.
*
* Return: Found local lun_info structure on success, NULL on failure
* If a LUN with the WWID is found in the list, refresh its state.
*/
static struct llun_info *refresh_local(struct cxlflash_cfg *cfg, u8 *wwid)
{
struct llun_info *lli, *temp;
list_for_each_entry_safe(lli, temp, &cfg->lluns, list)
if (!memcmp(lli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN)) {
lli->newly_created = false;
return lli;
}
return NULL;
}
/**
* lookup_global() - find a global LUN information structure by WWID
* @wwid: WWID associated with LUN.
*
* Return: Found global lun_info structure on success, NULL on failure
*/
static struct glun_info *lookup_global(u8 *wwid)
{
struct glun_info *gli, *temp;
list_for_each_entry_safe(gli, temp, &global.gluns, list)
if (!memcmp(gli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN))
return gli;
return NULL;
}
/**
* find_and_create_lun() - find or create a local LUN information structure
* @sdev: SCSI device associated with LUN.
* @wwid: WWID associated with LUN.
*
* The LUN is kept both in a local list (per adapter) and in a global list
* (across all adapters). Certain attributes of the LUN are local to the
* adapter (such as index, port selection mask etc.).
* The block allocation map is shared across all adapters (i.e. associated
* with the global list). Since different attributes are associated with
* the per adapter and global entries, allocate two separate structures for each
* LUN (one local, one global).
*
* Keep a pointer back from the local to the global entry.
*
* Return: Found/Allocated local lun_info structure on success, NULL on failure
*/
static struct llun_info *find_and_create_lun(struct scsi_device *sdev, u8 *wwid)
{
struct llun_info *lli = NULL;
struct glun_info *gli = NULL;
struct Scsi_Host *shost = sdev->host;
struct cxlflash_cfg *cfg = shost_priv(shost);
mutex_lock(&global.mutex);
if (unlikely(!wwid))
goto out;
lli = refresh_local(cfg, wwid);
if (lli)
goto out;
lli = create_local(sdev, wwid);
if (unlikely(!lli))
goto out;
gli = lookup_global(wwid);
if (gli) {
lli->parent = gli;
list_add(&lli->list, &cfg->lluns);
goto out;
}
gli = create_global(sdev, wwid);
if (unlikely(!gli)) {
kfree(lli);
lli = NULL;
goto out;
}
lli->parent = gli;
list_add(&lli->list, &cfg->lluns);
list_add(&gli->list, &global.gluns);
out:
mutex_unlock(&global.mutex);
pr_debug("%s: returning %p\n", __func__, lli);
return lli;
}
/**
* cxlflash_term_local_luns() - Delete all entries from local LUN list, free.
* @cfg: Internal structure associated with the host.
*/
void cxlflash_term_local_luns(struct cxlflash_cfg *cfg)
{
struct llun_info *lli, *temp;
mutex_lock(&global.mutex);
list_for_each_entry_safe(lli, temp, &cfg->lluns, list) {
list_del(&lli->list);
kfree(lli);
}
mutex_unlock(&global.mutex);
}
/**
* cxlflash_list_init() - initializes the global LUN list
*/
void cxlflash_list_init(void)
{
INIT_LIST_HEAD(&global.gluns);
mutex_init(&global.mutex);
global.err_page = NULL;
}
/**
* cxlflash_term_global_luns() - frees resources associated with global LUN list
*/
void cxlflash_term_global_luns(void)
{
struct glun_info *gli, *temp;
mutex_lock(&global.mutex);
list_for_each_entry_safe(gli, temp, &global.gluns, list) {
list_del(&gli->list);
cxlflash_ba_terminate(&gli->blka.ba_lun);
kfree(gli);
}
mutex_unlock(&global.mutex);
}
/**
* cxlflash_manage_lun() - handles LUN management activities
* @sdev: SCSI device associated with LUN.
* @manage: Manage ioctl data structure.
*
* This routine is used to notify the driver about a LUN's WWID and associate
* SCSI devices (sdev) with a global LUN instance. Additionally it serves to
* change a LUN's operating mode: legacy or superpipe.
*
* Return: 0 on success, -errno on failure
*/
int cxlflash_manage_lun(struct scsi_device *sdev,
struct dk_cxlflash_manage_lun *manage)
{
int rc = 0;
struct llun_info *lli = NULL;
u64 flags = manage->hdr.flags;
u32 chan = sdev->channel;
lli = find_and_create_lun(sdev, manage->wwid);
	pr_debug("%s: ENTER: WWID = %016llX%016llX, flags = %016llX lli = %p\n",
		 __func__, get_unaligned_le64(&manage->wwid[0]),
		 get_unaligned_le64(&manage->wwid[8]),
		 manage->hdr.flags, lli);
if (unlikely(!lli)) {
rc = -ENOMEM;
goto out;
}
if (flags & DK_CXLFLASH_MANAGE_LUN_ENABLE_SUPERPIPE) {
if (lli->newly_created)
lli->port_sel = CHAN2PORT(chan);
else
lli->port_sel = BOTH_PORTS;
/* Store off lun in unpacked, AFU-friendly format */
lli->lun_id[chan] = lun_to_lunid(sdev->lun);
sdev->hostdata = lli;
} else if (flags & DK_CXLFLASH_MANAGE_LUN_DISABLE_SUPERPIPE) {
if (lli->parent->mode != MODE_NONE)
rc = -EBUSY;
else
sdev->hostdata = NULL;
}
out:
pr_debug("%s: returning rc=%d\n", __func__, rc);
return rc;
}
This diff has been collapsed.
/*
* CXL Flash Device Driver
*
* Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
* Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
*
* Copyright (C) 2015 IBM Corporation
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#ifndef _CXLFLASH_MAIN_H
#define _CXLFLASH_MAIN_H
#include <linux/list.h>
#include <linux/types.h>
#include <scsi/scsi.h>
#include <scsi/scsi_device.h>
#define CXLFLASH_NAME "cxlflash"
#define CXLFLASH_ADAPTER_NAME "IBM POWER CXL Flash Adapter"
#define CXLFLASH_DRIVER_DATE "(August 13, 2015)"
#define PCI_DEVICE_ID_IBM_CORSA 0x04F0
#define CXLFLASH_SUBS_DEV_ID 0x04F0
/* Since there is only one target, make it 0 */
#define CXLFLASH_TARGET 0
#define CXLFLASH_MAX_CDB_LEN 16
/* Really only one target per bus since the Texan is directly attached */
#define CXLFLASH_MAX_NUM_TARGETS_PER_BUS 1
#define CXLFLASH_MAX_NUM_LUNS_PER_TARGET 65536
#define CXLFLASH_PCI_ERROR_RECOVERY_TIMEOUT (120 * HZ)
#define NUM_FC_PORTS CXLFLASH_NUM_FC_PORTS /* ports per AFU */
/* FC defines */
#define FC_MTIP_CMDCONFIG 0x010
#define FC_MTIP_STATUS 0x018
#define FC_PNAME 0x300
#define FC_CONFIG 0x320
#define FC_CONFIG2 0x328
#define FC_STATUS 0x330
#define FC_ERROR 0x380
#define FC_ERRCAP 0x388
#define FC_ERRMSK 0x390
#define FC_CNT_CRCERR 0x538
#define FC_CRC_THRESH 0x580
#define FC_MTIP_CMDCONFIG_ONLINE 0x20ULL
#define FC_MTIP_CMDCONFIG_OFFLINE 0x40ULL
#define FC_MTIP_STATUS_MASK 0x30ULL
#define FC_MTIP_STATUS_ONLINE 0x20ULL
#define FC_MTIP_STATUS_OFFLINE 0x10ULL
/* TIMEOUT and RETRY definitions */
/* AFU command timeout values */
#define MC_AFU_SYNC_TIMEOUT 5 /* 5 secs */
/* AFU command room retry limit */
#define MC_ROOM_RETRY_CNT 10
/* FC CRC clear periodic timer */
#define MC_CRC_THRESH 100 /* threshold in 5 mins */
#define FC_PORT_STATUS_RETRY_CNT 100 /* 100 100ms retries = 10 seconds */
#define FC_PORT_STATUS_RETRY_INTERVAL_US 100000 /* microseconds */
/* VPD defines */
#define CXLFLASH_VPD_LEN 256
#define WWPN_LEN 16
#define WWPN_BUF_LEN (WWPN_LEN + 1)
enum undo_level {
RELEASE_CONTEXT = 0,
FREE_IRQ,
UNMAP_ONE,
UNMAP_TWO,
UNMAP_THREE,
UNDO_START
};
struct dev_dependent_vals {
u64 max_sectors;
};
struct asyc_intr_info {
u64 status;
char *desc;
u8 port;
u8 action;
#define CLR_FC_ERROR 0x01
#define LINK_RESET 0x02
};
#ifndef CONFIG_CXL_EEH
#define cxl_perst_reloads_same_image(_a, _b) do { } while (0)
#endif
#endif /* _CXLFLASH_MAIN_H */
/*
* CXL Flash Device Driver
*
* Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
* Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
*
* Copyright (C) 2015 IBM Corporation
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#ifndef _SISLITE_H
#define _SISLITE_H
#include <linux/types.h>
typedef u16 ctx_hndl_t;
typedef u32 res_hndl_t;
#define SIZE_4K 4096
#define SIZE_64K 65536
/*
* IOARCB: 64 bytes, min 16 byte alignment required, host native endianness
* except for SCSI CDB which remains big endian per SCSI standards.
*/
struct sisl_ioarcb {
u16 ctx_id; /* ctx_hndl_t */
u16 req_flags;
#define SISL_REQ_FLAGS_RES_HNDL 0x8000U /* bit 0 (MSB) */
#define SISL_REQ_FLAGS_PORT_LUN_ID 0x0000U
#define SISL_REQ_FLAGS_SUP_UNDERRUN 0x4000U /* bit 1 */
#define SISL_REQ_FLAGS_TIMEOUT_SECS 0x0000U /* bits 8,9 */
#define SISL_REQ_FLAGS_TIMEOUT_MSECS 0x0040U
#define SISL_REQ_FLAGS_TIMEOUT_USECS 0x0080U
#define SISL_REQ_FLAGS_TIMEOUT_CYCLES 0x00C0U
#define SISL_REQ_FLAGS_TMF_CMD 0x0004u /* bit 13 */
#define SISL_REQ_FLAGS_AFU_CMD 0x0002U /* bit 14 */
#define SISL_REQ_FLAGS_HOST_WRITE 0x0001U /* bit 15 (LSB) */
#define SISL_REQ_FLAGS_HOST_READ 0x0000U
union {
u32 res_hndl; /* res_hndl_t */
u32 port_sel; /* this is a selection mask:
* 0x1 -> port#0 can be selected,
* 0x2 -> port#1 can be selected.
* Can be bitwise ORed.
*/
};
u64 lun_id;
u32 data_len; /* 4K for read/write */
u32 ioadl_len;
union {
u64 data_ea; /* min 16 byte aligned */
u64 ioadl_ea;
};
u8 msi; /* LISN to send on RRQ write */
#define SISL_MSI_CXL_PFAULT 0 /* reserved for CXL page faults */
#define SISL_MSI_SYNC_ERROR 1 /* recommended for AFU sync error */
#define SISL_MSI_RRQ_UPDATED 2 /* recommended for IO completion */
#define SISL_MSI_ASYNC_ERROR 3 /* master only - for AFU async error */
u8 rrq; /* 0 for a single RRQ */
u16 timeout; /* in units specified by req_flags */
u32 rsvd1;
u8 cdb[16]; /* must be in big endian */
struct scsi_cmnd *scp;
} __packed;
struct sisl_rc {
u8 flags;
#define SISL_RC_FLAGS_SENSE_VALID 0x80U
#define SISL_RC_FLAGS_FCP_RSP_CODE_VALID 0x40U
#define SISL_RC_FLAGS_OVERRUN 0x20U
#define SISL_RC_FLAGS_UNDERRUN 0x10U
u8 afu_rc;
#define SISL_AFU_RC_RHT_INVALID 0x01U /* user error */
#define SISL_AFU_RC_RHT_UNALIGNED 0x02U /* should never happen */
#define SISL_AFU_RC_RHT_OUT_OF_BOUNDS 0x03u /* user error */
#define SISL_AFU_RC_RHT_DMA_ERR 0x04u /* see afu_extra
may retry if afu_retry is off
possible on master exit
*/
#define SISL_AFU_RC_RHT_RW_PERM 0x05u /* no RW perms, user error */
#define SISL_AFU_RC_LXT_UNALIGNED 0x12U /* should never happen */
#define SISL_AFU_RC_LXT_OUT_OF_BOUNDS 0x13u /* user error */
#define SISL_AFU_RC_LXT_DMA_ERR 0x14u /* see afu_extra
may retry if afu_retry is off
possible on master exit
*/
#define SISL_AFU_RC_LXT_RW_PERM 0x15u /* no RW perms, user error */
#define SISL_AFU_RC_NOT_XLATE_HOST 0x1au /* possible if master exited */
	/* NO_CHANNELS means the FC ports selected by dest_port in
	 * IOARCB or in the LXT entry are down when the AFU tried to select
	 * a FC port. If the port went down on an active IO, it will set
	 * fc_rc to 0x54 (NOLOGI) or 0x57 (LINKDOWN) instead.
	 */
#define SISL_AFU_RC_NO_CHANNELS 0x20U /* see afu_extra, may retry */
#define SISL_AFU_RC_CAP_VIOLATION 0x21U /* either user error or
afu reset/master restart
*/
#define SISL_AFU_RC_OUT_OF_DATA_BUFS 0x30U /* always retry */
#define SISL_AFU_RC_DATA_DMA_ERR 0x31U /* see afu_extra
may retry if afu_retry is off
*/
u8 scsi_rc; /* SCSI status byte, retry as appropriate */
#define SISL_SCSI_RC_CHECK 0x02U
#define SISL_SCSI_RC_BUSY 0x08u
u8 fc_rc; /* retry */
/*
* We should only see fc_rc=0x57 (LINKDOWN) or 0x54(NOLOGI) for
* commands that are in flight when a link goes down or is logged out.
* If the link is down or logged out before AFU selects the port, either
* it will choose the other port or we will get afu_rc=0x20 (no_channel)
* if there is no valid port to use.
*
* ABORTPEND/ABORTOK/ABORTFAIL/TGTABORT can be retried, typically these
* would happen if a frame is dropped and something times out.
* NOLOGI or LINKDOWN can be retried if the other port is up.
* RESIDERR can be retried as well.
*
	 * ABORTFAIL might indicate that lots of frames are getting CRC errors.
	 * So it may be retried once; reset the link if it happens again.
	 * The link can also be reset on the CRC error threshold interrupt.
*/
#define SISL_FC_RC_ABORTPEND 0x52 /* exchange timeout or abort request */
#define SISL_FC_RC_WRABORTPEND 0x53 /* due to write XFER_RDY invalid */
#define SISL_FC_RC_NOLOGI 0x54 /* port not logged in, in-flight cmds */
#define SISL_FC_RC_NOEXP 0x55 /* FC protocol error or HW bug */
#define SISL_FC_RC_INUSE 0x56 /* tag already in use, HW bug */
#define SISL_FC_RC_LINKDOWN 0x57 /* link down, in-flight cmds */
#define SISL_FC_RC_ABORTOK 0x58 /* pending abort completed w/success */
#define SISL_FC_RC_ABORTFAIL 0x59 /* pending abort completed w/fail */
#define SISL_FC_RC_RESID 0x5A /* ioasa underrun/overrun flags set */
#define SISL_FC_RC_RESIDERR 0x5B /* actual data len does not match SCSI
				    reported len, possibly due to dropped
				    frames */
#define SISL_FC_RC_TGTABORT 0x5C /* command aborted by target */
};
#define SISL_SENSE_DATA_LEN 20 /* Sense data length */
/*
* IOASA: 64 bytes & must follow IOARCB, min 16 byte alignment required,
* host native endianness
*/
struct sisl_ioasa {
union {
struct sisl_rc rc;
u32 ioasc;
#define SISL_IOASC_GOOD_COMPLETION 0x00000000U
};
u32 resid;
u8 port;
u8 afu_extra;
	/* when afu_rc=0x04, 0x14, 0x31 (_xxx_DMA_ERR):
	 * afu_extra contains PSL response code. Useful codes are:
	 */
#define SISL_AFU_DMA_ERR_PAGE_IN 0x0A /* AFU_retry_on_pagein Action
* Enabled N/A
* Disabled retry
*/
#define SISL_AFU_DMA_ERR_INVALID_EA 0x0B /* this is a hard error
* afu_rc Implies
* 0x04, 0x14 master exit.
* 0x31 user error.
*/
	/* when afu_rc=0x20 (no channels):
	 * afu_extra bits [4:5]: available portmask, [6:7]: requested portmask.
	 */
#define SISL_AFU_NO_CLANNELS_AMASK(afu_extra) (((afu_extra) & 0x0C) >> 2)
#define SISL_AFU_NO_CLANNELS_RMASK(afu_extra) ((afu_extra) & 0x03)
u8 scsi_extra;
u8 fc_extra;
u8 sense_data[SISL_SENSE_DATA_LEN];
/* These fields are defined by the SISlite architecture for the
* host to use as they see fit for their implementation.
*/
union {
u64 host_use[4];
u8 host_use_b[32];
};
} __packed;
#define SISL_RESP_HANDLE_T_BIT 0x1ULL /* Toggle bit */
/* MMIO space is required to support only 64-bit access */
/*
* This AFU has two mechanisms to deal with endian-ness.
* One is a global configuration (in the afu_config) register
* below that specifies the endian-ness of the host.
* The other is a per context (i.e. application) specification
* controlled by the endian_ctrl field here. Since the master
* context is one such application the master context's
* endian-ness is set to be the same as the host.
*
* As per the SISlite spec, the MMIO registers are always
* big endian.
*/
#define SISL_ENDIAN_CTRL_BE 0x8000000000000080ULL
#define SISL_ENDIAN_CTRL_LE 0x0000000000000000ULL
#ifdef __BIG_ENDIAN
#define SISL_ENDIAN_CTRL SISL_ENDIAN_CTRL_BE
#else
#define SISL_ENDIAN_CTRL SISL_ENDIAN_CTRL_LE
#endif
/* per context host transport MMIO */
struct sisl_host_map {
__be64 endian_ctrl; /* Per context Endian Control. The AFU will
* operate on whatever the context is of the
* host application.
*/
	__be64 intr_status;	/* this sends LISN# programmed in ctx_ctrl.
				 * The only recovery for a PERM_ERR is a
				 * context exit since there is no way to tell
				 * which command caused the error.
				 */
#define SISL_ISTATUS_PERM_ERR_CMDROOM 0x0010ULL /* b59, user error */
#define SISL_ISTATUS_PERM_ERR_RCB_READ 0x0008ULL /* b60, user error */
#define SISL_ISTATUS_PERM_ERR_SA_WRITE 0x0004ULL /* b61, user error */
#define SISL_ISTATUS_PERM_ERR_RRQ_WRITE 0x0002ULL /* b62, user error */
/* Page in wait accessing RCB/IOASA/RRQ is reported in b63.
* Same error in data/LXT/RHT access is reported via IOASA.
*/
#define SISL_ISTATUS_TEMP_ERR_PAGEIN 0x0001ULL /* b63, can be generated
* only when AFU auto
* retry is disabled.
* If user can determine
* the command that
* caused the error, it
* can be retried.
*/
#define SISL_ISTATUS_UNMASK (0x001FULL) /* 1 means unmasked */
#define SISL_ISTATUS_MASK ~(SISL_ISTATUS_UNMASK) /* 1 means masked */
__be64 intr_clear;
__be64 intr_mask;
__be64 ioarrin; /* only write what cmd_room permits */
__be64 rrq_start; /* start & end are both inclusive */
__be64 rrq_end; /* write sequence: start followed by end */
__be64 cmd_room;
	__be64 ctx_ctrl;	/* least significant byte, or b56:63, is LISN# */
__be64 mbox_w; /* restricted use */
};
/* per context provisioning & control MMIO */
struct sisl_ctrl_map {
__be64 rht_start;
__be64 rht_cnt_id;
/* both cnt & ctx_id args must be ULL */
#define SISL_RHT_CNT_ID(cnt, ctx_id) (((cnt) << 48) | ((ctx_id) << 32))
__be64 ctx_cap; /* afu_rc below is when the capability is violated */
#define SISL_CTX_CAP_PROXY_ISSUE 0x8000000000000000ULL /* afu_rc 0x21 */
#define SISL_CTX_CAP_REAL_MODE 0x4000000000000000ULL /* afu_rc 0x21 */
#define SISL_CTX_CAP_HOST_XLATE 0x2000000000000000ULL /* afu_rc 0x1a */
#define SISL_CTX_CAP_PROXY_TARGET 0x1000000000000000ULL /* afu_rc 0x21 */
#define SISL_CTX_CAP_AFU_CMD 0x0000000000000008ULL /* afu_rc 0x21 */
#define SISL_CTX_CAP_GSCSI_CMD 0x0000000000000004ULL /* afu_rc 0x21 */
#define SISL_CTX_CAP_WRITE_CMD 0x0000000000000002ULL /* afu_rc 0x21 */
#define SISL_CTX_CAP_READ_CMD 0x0000000000000001ULL /* afu_rc 0x21 */
__be64 mbox_r;
};
/* single copy global regs */
struct sisl_global_regs {
__be64 aintr_status;
/* In cxlflash, each FC port/link gets a byte of status */
#define SISL_ASTATUS_FC0_OTHER 0x8000ULL /* b48, other err,
FC_ERRCAP[31:20] */
#define SISL_ASTATUS_FC0_LOGO 0x4000ULL /* b49, target sent FLOGI/PLOGI/LOGO
while logged in */
#define SISL_ASTATUS_FC0_CRC_T 0x2000ULL /* b50, CRC threshold exceeded */
#define SISL_ASTATUS_FC0_LOGI_R	 0x1000ULL /* b51, login state machine timed out
					      and retrying */
#define SISL_ASTATUS_FC0_LOGI_F 0x0800ULL /* b52, login failed,
FC_ERROR[19:0] */
#define SISL_ASTATUS_FC0_LOGI_S 0x0400ULL /* b53, login succeeded */
#define SISL_ASTATUS_FC0_LINK_DN 0x0200ULL /* b54, link online to offline */
#define SISL_ASTATUS_FC0_LINK_UP 0x0100ULL /* b55, link offline to online */
#define SISL_ASTATUS_FC1_OTHER 0x0080ULL /* b56 */
#define SISL_ASTATUS_FC1_LOGO 0x0040ULL /* b57 */
#define SISL_ASTATUS_FC1_CRC_T 0x0020ULL /* b58 */
#define SISL_ASTATUS_FC1_LOGI_R 0x0010ULL /* b59 */
#define SISL_ASTATUS_FC1_LOGI_F 0x0008ULL /* b60 */
#define SISL_ASTATUS_FC1_LOGI_S 0x0004ULL /* b61 */
#define SISL_ASTATUS_FC1_LINK_DN 0x0002ULL /* b62 */
#define SISL_ASTATUS_FC1_LINK_UP 0x0001ULL /* b63 */
#define SISL_FC_INTERNAL_UNMASK 0x0000000300000000ULL /* 1 means unmasked */
#define SISL_FC_INTERNAL_MASK ~(SISL_FC_INTERNAL_UNMASK)
#define SISL_FC_INTERNAL_SHIFT 32
#define SISL_ASTATUS_UNMASK 0xFFFFULL /* 1 means unmasked */
#define SISL_ASTATUS_MASK ~(SISL_ASTATUS_UNMASK) /* 1 means masked */
__be64 aintr_clear;
__be64 aintr_mask;
__be64 afu_ctrl;
__be64 afu_hb;
__be64 afu_scratch_pad;
__be64 afu_port_sel;
#define SISL_AFUCONF_AR_IOARCB 0x4000ULL
#define SISL_AFUCONF_AR_LXT 0x2000ULL
#define SISL_AFUCONF_AR_RHT 0x1000ULL
#define SISL_AFUCONF_AR_DATA 0x0800ULL
#define SISL_AFUCONF_AR_RSRC 0x0400ULL
#define SISL_AFUCONF_AR_IOASA 0x0200ULL
#define SISL_AFUCONF_AR_RRQ 0x0100ULL
/* Aggregate all Auto Retry Bits */
#define SISL_AFUCONF_AR_ALL (SISL_AFUCONF_AR_IOARCB|SISL_AFUCONF_AR_LXT| \
SISL_AFUCONF_AR_RHT|SISL_AFUCONF_AR_DATA| \
SISL_AFUCONF_AR_RSRC|SISL_AFUCONF_AR_IOASA| \
SISL_AFUCONF_AR_RRQ)
#ifdef __BIG_ENDIAN
#define SISL_AFUCONF_ENDIAN 0x0000ULL
#else
#define SISL_AFUCONF_ENDIAN 0x0020ULL
#endif
#define SISL_AFUCONF_MBOX_CLR_READ 0x0010ULL
__be64 afu_config;
__be64 rsvd[0xf8];
__be64 afu_version;
__be64 interface_version;
};
#define CXLFLASH_NUM_FC_PORTS 2
#define CXLFLASH_MAX_CONTEXT 512 /* how many contexts per afu */
#define CXLFLASH_NUM_VLUNS 512
struct sisl_global_map {
union {
struct sisl_global_regs regs;
char page0[SIZE_4K]; /* page 0 */
};
char page1[SIZE_4K]; /* page 1 */
/* pages 2 & 3 */
__be64 fc_regs[CXLFLASH_NUM_FC_PORTS][CXLFLASH_NUM_VLUNS];
/* pages 4 & 5 (lun tbl) */
__be64 fc_port[CXLFLASH_NUM_FC_PORTS][CXLFLASH_NUM_VLUNS];
};
/*
* CXL Flash Memory Map
*
* +-------------------------------+
* | 512 * 64 KB User MMIO |
* | (per context) |
* | User Accessible |
* +-------------------------------+
* | 512 * 128 B per context |
* | Provisioning and Control |
* | Trusted Process accessible |
* +-------------------------------+
* | 64 KB Global |
* | Trusted Process accessible |
* +-------------------------------+
*/
struct cxlflash_afu_map {
union {
struct sisl_host_map host;
char harea[SIZE_64K]; /* 64KB each */
} hosts[CXLFLASH_MAX_CONTEXT];
union {
struct sisl_ctrl_map ctrl;
char carea[cache_line_size()]; /* 128B each */
} ctrls[CXLFLASH_MAX_CONTEXT];
union {
struct sisl_global_map global;
char garea[SIZE_64K]; /* 64KB single block */
};
};
/*
* LXT - LBA Translation Table
* LXT control blocks
*/
struct sisl_lxt_entry {
u64 rlba_base; /* bits 0:47 is base
* b48:55 is lun index
* b58:59 is write & read perms
* (if no perm, afu_rc=0x15)
* b60:63 is port_sel mask
*/
};
/*
* RHT - Resource Handle Table
* Per the SISlite spec, RHT entries are to be 16-byte aligned
*/
struct sisl_rht_entry {
struct sisl_lxt_entry *lxt_start;
u32 lxt_cnt;
u16 rsvd;
u8 fp; /* format & perm nibbles.
* (if no perm, afu_rc=0x05)
*/
u8 nmask;
} __packed __aligned(16);
struct sisl_rht_entry_f1 {
u64 lun_id;
union {
struct {
u8 valid;
u8 rsvd[5];
u8 fp;
u8 port_sel;
};
u64 dw;
};
} __packed __aligned(16);
/* make the fp byte */
#define SISL_RHT_FP(fmt, perm) (((fmt) << 4) | (perm))
/* make the fp byte for a clone from a source fp and clone flags
* flags must be only 2 LSB bits.
*/
#define SISL_RHT_FP_CLONE(src_fp, cln_flags) ((src_fp) & (0xFC | (cln_flags)))
#define RHT_PERM_READ 0x01U
#define RHT_PERM_WRITE 0x02U
#define RHT_PERM_RW (RHT_PERM_READ | RHT_PERM_WRITE)
/* extract the perm bits from a fp */
#define SISL_RHT_PERM(fp) ((fp) & RHT_PERM_RW)
#define PORT0 0x01U
#define PORT1 0x02U
#define BOTH_PORTS (PORT0 | PORT1)
/* AFU Sync Mode byte */
#define AFU_LW_SYNC 0x0U
#define AFU_HW_SYNC 0x1U
#define AFU_GSYNC 0x2U
/* Special Task Management Function CDB */
#define TMF_LUN_RESET 0x1U
#define TMF_CLEAR_ACA 0x2U
#define SISLITE_MAX_WS_BLOCKS 512
#endif /* _SISLITE_H */
This diff has been collapsed.
/*
* CXL Flash Device Driver
*
* Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
* Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
*
* Copyright (C) 2015 IBM Corporation
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#ifndef _CXLFLASH_SUPERPIPE_H
#define _CXLFLASH_SUPERPIPE_H
extern struct cxlflash_global global;
/*
* Terminology: use afu (and not adapter) to refer to the HW.
* Adapter is the entire slot and includes PSL out of which
* only the AFU is visible to user space.
*/
/* Chunk size parms: note sislite minimum chunk size is
 * 0x10000 LBAs, corresponding to an NMASK of 16.
 */
#define MC_CHUNK_SIZE (1 << MC_RHT_NMASK) /* in LBAs */
#define MC_DISCOVERY_TIMEOUT 5 /* 5 secs */
#define CHAN2PORT(_x) ((_x) + 1)
#define PORT2CHAN(_x) ((_x) - 1)
enum lun_mode {
MODE_NONE = 0,
MODE_VIRTUAL,
MODE_PHYSICAL
};
/* Global (entire driver, spans adapters) lun_info structure */
struct glun_info {
u64 max_lba; /* from read cap(16) */
u32 blk_len; /* from read cap(16) */
enum lun_mode mode; /* NONE, VIRTUAL, PHYSICAL */
int users; /* Number of users w/ references to LUN */
u8 wwid[16];
struct mutex mutex;
struct blka blka;
struct list_head list;
};
/* Local (per-adapter) lun_info structure */
struct llun_info {
u64 lun_id[CXLFLASH_NUM_FC_PORTS]; /* from REPORT_LUNS */
u32 lun_index; /* Index in the LUN table */
u32 host_no; /* host_no from Scsi_host */
u32 port_sel; /* What port to use for this LUN */
bool newly_created; /* Whether the LUN was just discovered */
bool in_table; /* Whether a LUN table entry was created */
u8 wwid[16]; /* Keep a duplicate copy here? */
struct glun_info *parent; /* Pointer to entry in global LUN structure */
struct scsi_device *sdev;
struct list_head list;
};
struct lun_access {
struct llun_info *lli;
struct scsi_device *sdev;
struct list_head list;
};
enum ctx_ctrl {
CTX_CTRL_CLONE = (1 << 1),
CTX_CTRL_ERR = (1 << 2),
CTX_CTRL_ERR_FALLBACK = (1 << 3),
CTX_CTRL_NOPID = (1 << 4),
CTX_CTRL_FILE = (1 << 5)
};
#define ENCODE_CTXID(_ctx, _id) (((((u64)_ctx) & 0xFFFFFFFF0) << 28) | _id)
#define DECODE_CTXID(_val) (_val & 0xFFFFFFFF)
struct ctx_info {
struct sisl_ctrl_map *ctrl_map; /* initialized at startup */
struct sisl_rht_entry *rht_start; /* 1 page (req'd for alignment),
alloc/free on attach/detach */
u32 rht_out; /* Number of checked out RHT entries */
u32 rht_perms; /* User-defined permissions for RHT entries */
struct llun_info **rht_lun; /* Mapping of RHT entries to LUNs */
bool *rht_needs_ws; /* User-desired write-same function per RHTE */
struct cxl_ioctl_start_work work;
u64 ctxid;
int lfd;
pid_t pid;
bool unavail;
bool err_recovery_active;
struct mutex mutex; /* Context protection */
struct cxl_context *ctx;
struct list_head luns; /* LUNs attached to this context */
const struct vm_operations_struct *cxl_mmap_vmops;
struct file *file;
struct list_head list; /* Link contexts in error recovery */
};
struct cxlflash_global {
struct mutex mutex;
struct list_head gluns;/* list of glun_info structs */
struct page *err_page; /* One page of all 0xF for error notification */
};
int cxlflash_vlun_resize(struct scsi_device *, struct dk_cxlflash_resize *);
int _cxlflash_vlun_resize(struct scsi_device *, struct ctx_info *,
struct dk_cxlflash_resize *);
int cxlflash_disk_release(struct scsi_device *, struct dk_cxlflash_release *);
int _cxlflash_disk_release(struct scsi_device *, struct ctx_info *,
struct dk_cxlflash_release *);
int cxlflash_disk_clone(struct scsi_device *, struct dk_cxlflash_clone *);
int cxlflash_disk_virtual_open(struct scsi_device *, void *);
int cxlflash_lun_attach(struct glun_info *, enum lun_mode, bool);
void cxlflash_lun_detach(struct glun_info *);
struct ctx_info *get_context(struct cxlflash_cfg *, u64, void *, enum ctx_ctrl);
void put_context(struct ctx_info *);
struct sisl_rht_entry *get_rhte(struct ctx_info *, res_hndl_t,
struct llun_info *);
struct sisl_rht_entry *rhte_checkout(struct ctx_info *, struct llun_info *);
void rhte_checkin(struct ctx_info *, struct sisl_rht_entry *);
void cxlflash_ba_terminate(struct ba_lun *);
int cxlflash_manage_lun(struct scsi_device *, struct dk_cxlflash_manage_lun *);
#endif /* ifndef _CXLFLASH_SUPERPIPE_H */
This diff has been collapsed.
/*
* CXL Flash Device Driver
*
* Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
* Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
*
* Copyright (C) 2015 IBM Corporation
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#ifndef _CXLFLASH_VLUN_H
#define _CXLFLASH_VLUN_H
/* RHT - Resource Handle Table */
#define MC_RHT_NMASK 16 /* in bits */
#define MC_CHUNK_SHIFT MC_RHT_NMASK /* shift to go from LBA to chunk# */
#define HIBIT (BITS_PER_LONG - 1)
#define MAX_AUN_CLONE_CNT 0xFF
/*
* LXT - LBA Translation Table
*
* +-------+-------+-------+-------+-------+-------+-------+---+---+
* | RLBA_BASE |LUN_IDX| P |SEL|
* +-------+-------+-------+-------+-------+-------+-------+---+---+
*
* The LXT Entry contains the physical LBA where the chunk starts (RLBA_BASE).
* AFU ORes the low order bits from the virtual LBA (offset into the chunk)
* with RLBA_BASE. The result is the physical LBA to be sent to storage.
 * The LXT Entry also contains an index to a LUN TBL and a bitmask of which
 * outgoing (FC) ports can be selected. The port select bit-mask is ANDed
 * with a global port select bit-mask maintained by the driver.
* In addition, it has permission bits that are ANDed with the
* RHT permissions to arrive at the final permissions for the chunk.
*
* LXT tables are allocated dynamically in groups. This is done to avoid
* a malloc/free overhead each time the LXT has to grow or shrink.
*
* Based on the current lxt_cnt (used), it is always possible to know
* how many are allocated (used+free). The number of allocated entries is
* not stored anywhere.
*
* The LXT table is re-allocated whenever it needs to cross into another group.
*/
#define LXT_GROUP_SIZE 8
#define LXT_NUM_GROUPS(lxt_cnt) (((lxt_cnt) + 7)/8) /* alloc'ed groups */
#define LXT_LUNIDX_SHIFT 8 /* LXT entry, shift for LUN index */
#define LXT_PERM_SHIFT 4 /* LXT entry, shift for permission bits */
struct ba_lun_info {
u64 *lun_alloc_map;
u32 lun_bmap_size;
u32 total_aus;
u64 free_aun_cnt;
/* indices to be used for elevator lookup of free map */
u32 free_low_idx;
u32 free_curr_idx;
u32 free_high_idx;
u8 *aun_clone_map;
};
struct ba_lun {
u64 lun_id;
u64 wwpn;
size_t lsize; /* LUN size in number of LBAs */
size_t lba_size; /* LBA size in number of bytes */
size_t au_size; /* Allocation Unit size in number of LBAs */
struct ba_lun_info *ba_lun_handle;
};
/* Block Allocator */
struct blka {
struct ba_lun ba_lun;
u64 nchunk; /* number of chunks */
struct mutex mutex;
};
#endif /* ifndef _CXLFLASH_VLUN_H */
 /*
  * Disk Array driver for HP Smart Array SAS controllers
- * Copyright 2000, 2014 Hewlett-Packard Development Company, L.P.
+ * Copyright 2014-2015 PMC-Sierra, Inc.
+ * Copyright 2000,2009-2015 Hewlett-Packard Development Company, L.P.
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
@@ -11,11 +12,7 @@
  * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
  * NON INFRINGEMENT. See the GNU General Public License for more details.
  *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
- * Questions/Comments/Bugfixes to iss_storagedev@hp.com
+ * Questions/Comments/Bugfixes to storagedev@pmcs.com
  *
  */
@@ -132,6 +129,11 @@ static const struct pci_device_id hpsa_pci_device_id[] = {
 	{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSI, 0x103C, 0x21CD},
 	{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSI, 0x103C, 0x21CE},
 	{PCI_VENDOR_ID_ADAPTEC2, 0x0290, 0x9005, 0x0580},
+	{PCI_VENDOR_ID_ADAPTEC2, 0x0290, 0x9005, 0x0581},
+	{PCI_VENDOR_ID_ADAPTEC2, 0x0290, 0x9005, 0x0582},
+	{PCI_VENDOR_ID_ADAPTEC2, 0x0290, 0x9005, 0x0583},
+	{PCI_VENDOR_ID_ADAPTEC2, 0x0290, 0x9005, 0x0584},
+	{PCI_VENDOR_ID_ADAPTEC2, 0x0290, 0x9005, 0x0585},
 	{PCI_VENDOR_ID_HP_3PAR, 0x0075, 0x1590, 0x0076},
 	{PCI_VENDOR_ID_HP_3PAR, 0x0075, 0x1590, 0x0087},
 	{PCI_VENDOR_ID_HP_3PAR, 0x0075, 0x1590, 0x007D},
@@ -190,6 +192,11 @@ static struct board_type products[] = {
 	{0x21CD103C, "Smart Array", &SA5_access},
 	{0x21CE103C, "Smart HBA", &SA5_access},
 	{0x05809005, "SmartHBA-SA", &SA5_access},
+	{0x05819005, "SmartHBA-SA 8i", &SA5_access},
+	{0x05829005, "SmartHBA-SA 8i8e", &SA5_access},
+	{0x05839005, "SmartHBA-SA 8e", &SA5_access},
+	{0x05849005, "SmartHBA-SA 16i", &SA5_access},
+	{0x05859005, "SmartHBA-SA 4i4e", &SA5_access},
 	{0x00761590, "HP Storage P1224 Array Controller", &SA5_access},
 	{0x00871590, "HP Storage P1224e Array Controller", &SA5_access},
 	{0x007D1590, "HP Storage P1228 Array Controller", &SA5_access},
@@ -267,6 +274,7 @@ static int hpsa_scsi_ioaccel_queue_command(struct ctlr_info *h,
 static void hpsa_command_resubmit_worker(struct work_struct *work);
 static u32 lockup_detected(struct ctlr_info *h);
 static int detect_controller_lockup(struct ctlr_info *h);
+static int is_ext_target(struct ctlr_info *h, struct hpsa_scsi_dev_t *device);
 
 static inline struct ctlr_info *sdev_to_hba(struct scsi_device *sdev)
 {
@@ -325,7 +333,7 @@ static int check_for_unit_attention(struct ctlr_info *h,
 	decode_sense_data(c->err_info->SenseInfo, sense_len,
 				&sense_key, &asc, &ascq);
-	if (sense_key != UNIT_ATTENTION || asc == -1)
+	if (sense_key != UNIT_ATTENTION || asc == 0xff)
 		return 0;
 
 	switch (asc) {
...@@ -717,12 +725,107 @@ static ssize_t host_show_hp_ssd_smart_path_enabled(struct device *dev, ...@@ -717,12 +725,107 @@ static ssize_t host_show_hp_ssd_smart_path_enabled(struct device *dev,
return snprintf(buf, 20, "%d\n", offload_enabled); return snprintf(buf, 20, "%d\n", offload_enabled);
} }
#define MAX_PATHS 8
#define PATH_STRING_LEN 50
static ssize_t path_info_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct ctlr_info *h;
struct scsi_device *sdev;
struct hpsa_scsi_dev_t *hdev;
unsigned long flags;
int i;
int output_len = 0;
u8 box;
u8 bay;
u8 path_map_index = 0;
char *active;
unsigned char phys_connector[2];
unsigned char path[MAX_PATHS][PATH_STRING_LEN];
memset(path, 0, MAX_PATHS * PATH_STRING_LEN);
sdev = to_scsi_device(dev);
h = sdev_to_hba(sdev);
spin_lock_irqsave(&h->devlock, flags);
hdev = sdev->hostdata;
if (!hdev) {
spin_unlock_irqrestore(&h->devlock, flags);
return -ENODEV;
}
bay = hdev->bay;
for (i = 0; i < MAX_PATHS; i++) {
path_map_index = 1<<i;
if (i == hdev->active_path_index)
active = "Active";
else if (hdev->path_map & path_map_index)
active = "Inactive";
else
continue;
output_len = snprintf(path[i],
PATH_STRING_LEN, "[%d:%d:%d:%d] %20.20s ",
h->scsi_host->host_no,
hdev->bus, hdev->target, hdev->lun,
scsi_device_type(hdev->devtype));
if (is_ext_target(h, hdev) ||
(hdev->devtype == TYPE_RAID) ||
is_logical_dev_addr_mode(hdev->scsi3addr)) {
output_len += snprintf(path[i] + output_len,
PATH_STRING_LEN, "%s\n",
active);
continue;
}
box = hdev->box[i];
memcpy(&phys_connector, &hdev->phys_connector[i],
sizeof(phys_connector));
if (phys_connector[0] < '0')
phys_connector[0] = '0';
if (phys_connector[1] < '0')
phys_connector[1] = '0';
if (hdev->phys_connector[i] > 0)
output_len += snprintf(path[i] + output_len,
PATH_STRING_LEN,
"PORT: %.2s ",
phys_connector);
if (hdev->devtype == TYPE_DISK &&
hdev->expose_state != HPSA_DO_NOT_EXPOSE) {
if (box == 0 || box == 0xFF) {
output_len += snprintf(path[i] + output_len,
PATH_STRING_LEN,
"BAY: %hhu %s\n",
bay, active);
} else {
output_len += snprintf(path[i] + output_len,
PATH_STRING_LEN,
"BOX: %hhu BAY: %hhu %s\n",
box, bay, active);
}
} else if (box != 0 && box != 0xFF) {
output_len += snprintf(path[i] + output_len,
PATH_STRING_LEN, "BOX: %hhu %s\n",
box, active);
} else
output_len += snprintf(path[i] + output_len,
PATH_STRING_LEN, "%s\n", active);
}
spin_unlock_irqrestore(&h->devlock, flags);
return snprintf(buf, output_len+1, "%s%s%s%s%s%s%s%s",
path[0], path[1], path[2], path[3],
path[4], path[5], path[6], path[7]);
}
static DEVICE_ATTR(raid_level, S_IRUGO, raid_level_show, NULL);
static DEVICE_ATTR(lunid, S_IRUGO, lunid_show, NULL);
static DEVICE_ATTR(unique_id, S_IRUGO, unique_id_show, NULL);
static DEVICE_ATTR(rescan, S_IWUSR, NULL, host_store_rescan);
static DEVICE_ATTR(hp_ssd_smart_path_enabled, S_IRUGO,
	host_show_hp_ssd_smart_path_enabled, NULL);
static DEVICE_ATTR(path_info, S_IRUGO, path_info_show, NULL);
static DEVICE_ATTR(hp_ssd_smart_path_status, S_IWUSR|S_IRUGO|S_IROTH,
	host_show_hp_ssd_smart_path_status,
	host_store_hp_ssd_smart_path_status);
@@ -744,6 +847,7 @@ static struct device_attribute *hpsa_sdev_attrs[] = {
	&dev_attr_lunid,
	&dev_attr_unique_id,
	&dev_attr_hp_ssd_smart_path_enabled,
&dev_attr_path_info,
	&dev_attr_lockup_detected,
	NULL,
};
@@ -1083,17 +1187,19 @@ static int hpsa_scsi_add_entry(struct ctlr_info *h, int hostno,
		/* This is a non-zero lun of a multi-lun device.
		 * Search through our list and find the device which
-		 * has the same 8 byte LUN address, excepting byte 4.
+		 * has the same 8 byte LUN address, excepting byte 4 and 5.
		 * Assign the same bus and target for this new LUN.
		 * Use the logical unit number from the firmware.
		 */
		memcpy(addr1, device->scsi3addr, 8);
		addr1[4] = 0;
+		addr1[5] = 0;
		for (i = 0; i < n; i++) {
			sd = h->dev[i];
			memcpy(addr2, sd->scsi3addr, 8);
			addr2[4] = 0;
+			addr2[5] = 0;
-			/* differ only in byte 4? */
+			/* differ only in byte 4 and 5? */
			if (memcmp(addr1, addr2, 8) == 0) {
				device->bus = sd->bus;
				device->target = sd->target;
@@ -1286,8 +1392,9 @@ static inline int device_updated(struct hpsa_scsi_dev_t *dev1,
		return 1;
	if (dev1->offload_enabled != dev2->offload_enabled)
		return 1;
-	if (dev1->queue_depth != dev2->queue_depth)
-		return 1;
+	if (!is_logical_dev_addr_mode(dev1->scsi3addr))
+		if (dev1->queue_depth != dev2->queue_depth)
+			return 1;
	return 0;
}
@@ -1376,17 +1483,23 @@ static void hpsa_show_volume_status(struct ctlr_info *h,
			h->scsi_host->host_no,
			sd->bus, sd->target, sd->lun);
		break;
case HPSA_LV_NOT_AVAILABLE:
dev_info(&h->pdev->dev,
"C%d:B%d:T%d:L%d Volume is waiting for transforming volume.\n",
h->scsi_host->host_no,
sd->bus, sd->target, sd->lun);
break;
	case HPSA_LV_UNDERGOING_RPI:
		dev_info(&h->pdev->dev,
-			"C%d:B%d:T%d:L%d Volume is undergoing rapid parity initialization process.\n",
+			"C%d:B%d:T%d:L%d Volume is undergoing rapid parity init.\n",
			h->scsi_host->host_no,
			sd->bus, sd->target, sd->lun);
		break;
	case HPSA_LV_PENDING_RPI:
		dev_info(&h->pdev->dev,
			"C%d:B%d:T%d:L%d Volume is queued for rapid parity initialization process.\n",
			h->scsi_host->host_no,
			sd->bus, sd->target, sd->lun);
		break;
	case HPSA_LV_ENCRYPTED_NO_KEY:
		dev_info(&h->pdev->dev,
@@ -2585,34 +2698,6 @@ static int hpsa_scsi_do_inquiry(struct ctlr_info *h, unsigned char *scsi3addr,
	return rc;
}
static int hpsa_bmic_ctrl_mode_sense(struct ctlr_info *h,
unsigned char *scsi3addr, unsigned char page,
struct bmic_controller_parameters *buf, size_t bufsize)
{
int rc = IO_OK;
struct CommandList *c;
struct ErrorInfo *ei;
c = cmd_alloc(h);
if (fill_cmd(c, BMIC_SENSE_CONTROLLER_PARAMETERS, h, buf, bufsize,
page, scsi3addr, TYPE_CMD)) {
rc = -1;
goto out;
}
rc = hpsa_scsi_do_simple_cmd_with_retry(h, c,
PCI_DMA_FROMDEVICE, NO_TIMEOUT);
if (rc)
goto out;
ei = c->err_info;
if (ei->CommandStatus != 0 && ei->CommandStatus != CMD_DATA_UNDERRUN) {
hpsa_scsi_interpret_error(h, c);
rc = -1;
}
out:
cmd_free(h, c);
return rc;
}
static int hpsa_send_reset(struct ctlr_info *h, unsigned char *scsi3addr,
	u8 reset_type, int reply_queue)
{
@@ -2749,11 +2834,10 @@ static int hpsa_do_reset(struct ctlr_info *h, struct hpsa_scsi_dev_t *dev,
			lockup_detected(h));
	if (unlikely(lockup_detected(h))) {
		dev_warn(&h->pdev->dev,
			"Controller lockup detected during reset wait\n");
-		mutex_unlock(&h->reset_mutex);
		rc = -ENODEV;
	}
	if (unlikely(rc))
		atomic_set(&dev->reset_cmds_out, 0);
@@ -3186,6 +3270,7 @@ static int hpsa_volume_offline(struct ctlr_info *h,
	/* Keep volume offline in certain cases: */
	switch (ldstat) {
	case HPSA_LV_UNDERGOING_ERASE:
case HPSA_LV_NOT_AVAILABLE:
	case HPSA_LV_UNDERGOING_RPI:
	case HPSA_LV_PENDING_RPI:
	case HPSA_LV_ENCRYPTED_NO_KEY:
@@ -3562,29 +3647,6 @@ static u8 *figure_lunaddrbytes(struct ctlr_info *h, int raid_ctlr_position,
	return NULL;
}
static int hpsa_hba_mode_enabled(struct ctlr_info *h)
{
int rc;
int hba_mode_enabled;
struct bmic_controller_parameters *ctlr_params;
ctlr_params = kzalloc(sizeof(struct bmic_controller_parameters),
GFP_KERNEL);
if (!ctlr_params)
return -ENOMEM;
rc = hpsa_bmic_ctrl_mode_sense(h, RAID_CTLR_LUNID, 0, ctlr_params,
sizeof(struct bmic_controller_parameters));
if (rc) {
kfree(ctlr_params);
return rc;
}
hba_mode_enabled =
((ctlr_params->nvram_flags & HBA_MODE_ENABLED_FLAG) != 0);
kfree(ctlr_params);
return hba_mode_enabled;
}
/* get physical drive ioaccel handle and queue depth */
static void hpsa_get_ioaccel_drive_info(struct ctlr_info *h,
	struct hpsa_scsi_dev_t *dev,
@@ -3615,6 +3677,31 @@ static void hpsa_get_ioaccel_drive_info(struct ctlr_info *h,
	atomic_set(&dev->reset_cmds_out, 0);
}
static void hpsa_get_path_info(struct hpsa_scsi_dev_t *this_device,
u8 *lunaddrbytes,
struct bmic_identify_physical_device *id_phys)
{
if (PHYS_IOACCEL(lunaddrbytes)
&& this_device->ioaccel_handle)
this_device->hba_ioaccel_enabled = 1;
memcpy(&this_device->active_path_index,
&id_phys->active_path_number,
sizeof(this_device->active_path_index));
memcpy(&this_device->path_map,
&id_phys->redundant_path_present_map,
sizeof(this_device->path_map));
memcpy(&this_device->box,
&id_phys->alternate_paths_phys_box_on_port,
sizeof(this_device->box));
memcpy(&this_device->phys_connector,
&id_phys->alternate_paths_phys_connector,
sizeof(this_device->phys_connector));
memcpy(&this_device->bay,
&id_phys->phys_bay_in_box,
sizeof(this_device->bay));
}
static void hpsa_update_scsi_devices(struct ctlr_info *h, int hostno)
{
	/* the idea here is we could get notified
@@ -3637,7 +3724,6 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h, int hostno)
	int ncurrent = 0;
	int i, n_ext_target_devs, ndevs_to_allocate;
	int raid_ctlr_position;
-	int rescan_hba_mode;
	DECLARE_BITMAP(lunzerobits, MAX_EXT_TARGETS);
	currentsd = kzalloc(sizeof(*currentsd) * HPSA_MAX_DEVICES, GFP_KERNEL);
@@ -3653,17 +3739,6 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h, int hostno)
	}
	memset(lunzerobits, 0, sizeof(lunzerobits));
-	rescan_hba_mode = hpsa_hba_mode_enabled(h);
-	if (rescan_hba_mode < 0)
-		goto out;
-	if (!h->hba_mode_enabled && rescan_hba_mode)
-		dev_warn(&h->pdev->dev, "HBA mode enabled\n");
-	else if (h->hba_mode_enabled && !rescan_hba_mode)
-		dev_warn(&h->pdev->dev, "HBA mode disabled\n");
-	h->hba_mode_enabled = rescan_hba_mode;
	if (hpsa_gather_lun_info(h, physdev_list, &nphysicals,
			logdev_list, &nlogicals))
		goto out;
@@ -3739,9 +3814,6 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h, int hostno)
		/* do not expose masked devices */
		if (MASKED_DEVICE(lunaddrbytes) &&
			i < nphysicals + (raid_ctlr_position == 0)) {
-			if (h->hba_mode_enabled)
-				dev_warn(&h->pdev->dev,
-					"Masked physical device detected\n");
			this_device->expose_state = HPSA_DO_NOT_EXPOSE;
		} else {
			this_device->expose_state =
@@ -3761,30 +3833,21 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h, int hostno)
			ncurrent++;
			break;
		case TYPE_DISK:
-			if (i >= nphysicals) {
-				ncurrent++;
-				break;
-			}
-			if (h->hba_mode_enabled)
-				/* never use raid mapper in HBA mode */
-				this_device->offload_enabled = 0;
-			else if (!(h->transMethod & CFGTBL_Trans_io_accel1 ||
-				h->transMethod & CFGTBL_Trans_io_accel2))
-				break;
-			hpsa_get_ioaccel_drive_info(h, this_device,
-						lunaddrbytes, id_phys);
-			atomic_set(&this_device->ioaccel_cmds_out, 0);
+			if (i < nphysicals + (raid_ctlr_position == 0)) {
+				/* The disk is in HBA mode. */
+				/* Never use RAID mapper in HBA mode. */
+				this_device->offload_enabled = 0;
+				hpsa_get_ioaccel_drive_info(h, this_device,
+					lunaddrbytes, id_phys);
+				hpsa_get_path_info(this_device, lunaddrbytes,
+							id_phys);
+			}
			ncurrent++;
			break;
		case TYPE_TAPE:
		case TYPE_MEDIUM_CHANGER:
-			ncurrent++;
-			break;
		case TYPE_ENCLOSURE:
-			if (h->hba_mode_enabled)
-				ncurrent++;
+			ncurrent++;
			break;
		case TYPE_RAID:
			/* Only present the Smartarray HBA as a RAID controller.
@@ -5104,7 +5167,7 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
	int rc;
	struct ctlr_info *h;
	struct hpsa_scsi_dev_t *dev;
-	char msg[40];
+	char msg[48];
	/* find the controller to which the command to be aborted was sent */
	h = sdev_to_hba(scsicmd->device);
@@ -5122,16 +5185,18 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
	/* if controller locked up, we can guarantee command won't complete */
	if (lockup_detected(h)) {
-		sprintf(msg, "cmd %d RESET FAILED, lockup detected",
-			hpsa_get_cmd_index(scsicmd));
+		snprintf(msg, sizeof(msg),
+			"cmd %d RESET FAILED, lockup detected",
+			hpsa_get_cmd_index(scsicmd));
		hpsa_show_dev_msg(KERN_WARNING, h, dev, msg);
		return FAILED;
	}
	/* this reset request might be the result of a lockup; check */
	if (detect_controller_lockup(h)) {
-		sprintf(msg, "cmd %d RESET FAILED, new lockup detected",
-			hpsa_get_cmd_index(scsicmd));
+		snprintf(msg, sizeof(msg),
+			"cmd %d RESET FAILED, new lockup detected",
+			hpsa_get_cmd_index(scsicmd));
		hpsa_show_dev_msg(KERN_WARNING, h, dev, msg);
		return FAILED;
	}
@@ -5145,7 +5210,8 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
	/* send a reset to the SCSI LUN which the command was sent to */
	rc = hpsa_do_reset(h, dev, dev->scsi3addr, HPSA_RESET_TYPE_LUN,
			   DEFAULT_REPLY_QUEUE);
-	sprintf(msg, "reset %s", rc == 0 ? "completed successfully" : "failed");
+	snprintf(msg, sizeof(msg), "reset %s",
+		 rc == 0 ? "completed successfully" : "failed");
	hpsa_show_dev_msg(KERN_WARNING, h, dev, msg);
	return rc == 0 ? SUCCESS : FAILED;
}
@@ -7989,7 +8055,6 @@ static int hpsa_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
	pci_set_drvdata(pdev, h);
	h->ndevices = 0;
-	h->hba_mode_enabled = 0;
	spin_lock_init(&h->devlock);
	rc = hpsa_put_ctlr_into_performant_mode(h);
@@ -8054,7 +8119,7 @@ static int hpsa_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
		rc = hpsa_kdump_soft_reset(h);
		if (rc)
			/* Neither hard nor soft reset worked, we're hosed. */
-			goto clean9;
+			goto clean7;
	dev_info(&h->pdev->dev, "Board READY.\n");
	dev_info(&h->pdev->dev,
@@ -8100,8 +8165,6 @@ static int hpsa_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
			h->heartbeat_sample_interval);
	return 0;
-clean9: /* wq, sh, perf, sg, cmd, irq, shost, pci, lu, aer/h */
-	kfree(h->hba_inquiry_data);
clean7: /* perf, sg, cmd, irq, shost, pci, lu, aer/h */
	hpsa_free_performant_mode(h);
	h->access.set_intr_mask(h, HPSA_INTR_OFF);
@@ -8209,6 +8272,14 @@ static void hpsa_remove_one(struct pci_dev *pdev)
	destroy_workqueue(h->rescan_ctlr_wq);
	destroy_workqueue(h->resubmit_wq);
/*
* Call before disabling interrupts.
* scsi_remove_host can trigger I/O operations especially
* when multipath is enabled. There can be SYNCHRONIZE CACHE
* operations which cannot complete and will hang the system.
*/
if (h->scsi_host)
scsi_remove_host(h->scsi_host); /* init_one 8 */
	/* includes hpsa_free_irqs - init_one 4 */
	/* includes hpsa_disable_interrupt_mode - pci_init 2 */
	hpsa_shutdown(pdev);
@@ -8217,8 +8288,6 @@ static void hpsa_remove_one(struct pci_dev *pdev)
	kfree(h->hba_inquiry_data);		/* init_one 10 */
	h->hba_inquiry_data = NULL;		/* init_one 10 */
-	if (h->scsi_host)
-		scsi_remove_host(h->scsi_host);	/* init_one 8 */
	hpsa_free_ioaccel2_sg_chain_blocks(h);
	hpsa_free_performant_mode(h);		/* init_one 7 */
	hpsa_free_sg_chain_blocks(h);		/* init_one 6 */
...
/*
 *    Disk Array driver for HP Smart Array SAS controllers
-*    Copyright 2000, 2014 Hewlett-Packard Development Company, L.P.
+*    Copyright 2014-2015 PMC-Sierra, Inc.
+*    Copyright 2000,2009-2015 Hewlett-Packard Development Company, L.P.
 *
 *    This program is free software; you can redistribute it and/or modify
 *    it under the terms of the GNU General Public License as published by
@@ -11,11 +12,7 @@
 *    MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
 *    NON INFRINGEMENT.  See the GNU General Public License for more details.
 *
-*    You should have received a copy of the GNU General Public License
-*    along with this program; if not, write to the Free Software
-*    Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-*
-*    Questions/Comments/Bugfixes to iss_storagedev@hp.com
+*    Questions/Comments/Bugfixes to storagedev@pmcs.com
 *
 */
#ifndef HPSA_H
@@ -53,6 +50,11 @@ struct hpsa_scsi_dev_t {
				 * device via "ioaccel" path.
				 */
	u32 ioaccel_handle;
u8 active_path_index;
u8 path_map;
u8 bay;
u8 box[8];
u16 phys_connector[8];
	int offload_config;		/* I/O accel RAID offload configured */
	int offload_enabled;		/* I/O accel RAID offload enabled */
	int offload_to_be_enabled;
@@ -114,7 +116,6 @@ struct bmic_controller_parameters {
	u8   automatic_drive_slamming;
	u8   reserved1;
	u8   nvram_flags;
-#define HBA_MODE_ENABLED_FLAG (1 << 3)
	u8   cache_nvram_flags;
	u8   drive_config_flags;
	u16  reserved2;
@@ -153,7 +154,6 @@ struct ctlr_info {
	unsigned int msi_vector;
	int intr_mode; /* either PERF_MODE_INT or SIMPLE_MODE_INT */
	struct access_method access;
-	char hba_mode_enabled;

	/* queue and queue Info */
	unsigned int Qdepth;
...
/*
 *    Disk Array driver for HP Smart Array SAS controllers
-*    Copyright 2000, 2014 Hewlett-Packard Development Company, L.P.
+*    Copyright 2014-2015 PMC-Sierra, Inc.
+*    Copyright 2000,2009-2015 Hewlett-Packard Development Company, L.P.
 *
 *    This program is free software; you can redistribute it and/or modify
 *    it under the terms of the GNU General Public License as published by
@@ -11,11 +12,7 @@
 *    MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
 *    NON INFRINGEMENT.  See the GNU General Public License for more details.
 *
-*    You should have received a copy of the GNU General Public License
-*    along with this program; if not, write to the Free Software
-*    Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-*
-*    Questions/Comments/Bugfixes to iss_storagedev@hp.com
+*    Questions/Comments/Bugfixes to storagedev@pmcs.com
 *
 */
#ifndef HPSA_CMD_H
@@ -167,6 +164,7 @@
/* Logical volume states */
#define HPSA_VPD_LV_STATUS_UNSUPPORTED		0xff
#define HPSA_LV_OK				0x0
#define HPSA_LV_NOT_AVAILABLE 0x0b
#define HPSA_LV_UNDERGOING_ERASE		0x0F
#define HPSA_LV_UNDERGOING_RPI			0x12
#define HPSA_LV_PENDING_RPI			0x13
...
/*
 * HighPoint RR3xxx/4xxx controller driver for Linux
- * Copyright (C) 2006-2012 HighPoint Technologies, Inc. All Rights Reserved.
+ * Copyright (C) 2006-2015 HighPoint Technologies, Inc. All Rights Reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
@@ -42,7 +42,7 @@ MODULE_DESCRIPTION("HighPoint RocketRAID 3xxx/4xxx Controller Driver");
static char driver_name[] = "hptiop";
static const char driver_name_long[] = "RocketRAID 3xxx/4xxx Controller driver";
-static const char driver_ver[] = "v1.8";
+static const char driver_ver[] = "v1.10.0";
static int iop_send_sync_msg(struct hptiop_hba *hba, u32 msg, u32 millisec);
static void hptiop_finish_scsi_req(struct hptiop_hba *hba, u32 tag,
@@ -764,9 +764,7 @@ static void hptiop_finish_scsi_req(struct hptiop_hba *hba, u32 tag,
		scsi_set_resid(scp,
			scsi_bufflen(scp) - le32_to_cpu(req->dataxfer_length));
		scp->result = SAM_STAT_CHECK_CONDITION;
-		memcpy(scp->sense_buffer, &req->sg_list,
-			min_t(size_t, SCSI_SENSE_BUFFERSIZE,
-				le32_to_cpu(req->dataxfer_length)));
+		memcpy(scp->sense_buffer, &req->sg_list, SCSI_SENSE_BUFFERSIZE);
		goto skip_resid;
		break;
@@ -1037,8 +1035,9 @@ static int hptiop_queuecommand_lck(struct scsi_cmnd *scp,
	scp->result = 0;
-	if (scp->device->channel || scp->device->lun ||
-			scp->device->id > hba->max_devices) {
+	if (scp->device->channel ||
+			(scp->device->id > hba->max_devices) ||
+			((scp->device->id == (hba->max_devices-1)) && scp->device->lun)) {
		scp->result = DID_BAD_TARGET << 16;
		free_req(hba, _req);
		goto cmd_done;
@@ -1168,6 +1167,14 @@ static struct device_attribute *hptiop_attrs[] = {
	NULL
};
static int hptiop_slave_config(struct scsi_device *sdev)
{
if (sdev->type == TYPE_TAPE)
blk_queue_max_hw_sectors(sdev->request_queue, 8192);
return 0;
}
static struct scsi_host_template driver_template = {
	.module			= THIS_MODULE,
	.name			= driver_name,
@@ -1179,6 +1186,7 @@ static struct scsi_host_template driver_template = {
	.use_clustering		= ENABLE_CLUSTERING,
	.proc_name		= driver_name,
	.shost_attrs		= hptiop_attrs,
+	.slave_configure	= hptiop_slave_config,
	.this_id		= -1,
	.change_queue_depth	= hptiop_adjust_disk_queue_depth,
};
@@ -1323,6 +1331,7 @@ static int hptiop_probe(struct pci_dev *pcidev, const struct pci_device_id *id)
	}
	hba = (struct hptiop_hba *)host->hostdata;
+	memset(hba, 0, sizeof(struct hptiop_hba));
	hba->ops = iop_ops;
	hba->pcidev = pcidev;
@@ -1336,7 +1345,7 @@ static int hptiop_probe(struct pci_dev *pcidev, const struct pci_device_id *id)
	init_waitqueue_head(&hba->reset_wq);
	init_waitqueue_head(&hba->ioctl_wq);
-	host->max_lun = 1;
+	host->max_lun = 128;
	host->max_channel = 0;
	host->io_port = 0;
	host->n_io_port = 0;
@@ -1428,34 +1437,33 @@ static int hptiop_probe(struct pci_dev *pcidev, const struct pci_device_id *id)
	dprintk("req_size=%d, max_requests=%d\n", req_size, hba->max_requests);
	hba->req_size = req_size;
-	start_virt = dma_alloc_coherent(&pcidev->dev,
-				hba->req_size*hba->max_requests + 0x20,
-				&start_phy, GFP_KERNEL);
-	if (!start_virt) {
-		printk(KERN_ERR "scsi%d: fail to alloc request mem\n",
-					hba->host->host_no);
-		goto free_request_irq;
-	}
-	hba->dma_coherent = start_virt;
-	hba->dma_coherent_handle = start_phy;
-	if ((start_phy & 0x1f) != 0) {
-		offset = ((start_phy + 0x1f) & ~0x1f) - start_phy;
-		start_phy += offset;
-		start_virt += offset;
-	}
-	hba->req_list = NULL;
+	hba->req_list = NULL;
	for (i = 0; i < hba->max_requests; i++) {
+		start_virt = dma_alloc_coherent(&pcidev->dev,
+					hba->req_size + 0x20,
+					&start_phy, GFP_KERNEL);
+		if (!start_virt) {
+			printk(KERN_ERR "scsi%d: fail to alloc request mem\n",
+						hba->host->host_no);
+			goto free_request_mem;
+		}
+		hba->dma_coherent[i] = start_virt;
+		hba->dma_coherent_handle[i] = start_phy;
+		if ((start_phy & 0x1f) != 0) {
+			offset = ((start_phy + 0x1f) & ~0x1f) - start_phy;
+			start_phy += offset;
+			start_virt += offset;
+		}
		hba->reqs[i].next = NULL;
		hba->reqs[i].req_virt = start_virt;
		hba->reqs[i].req_shifted_phy = start_phy >> 5;
		hba->reqs[i].index = i;
		free_req(hba, &hba->reqs[i]);
-		start_virt = (char *)start_virt + hba->req_size;
-		start_phy = start_phy + hba->req_size;
	}
	/* Enable Interrupt and start background task */
@@ -1474,11 +1482,16 @@ static int hptiop_probe(struct pci_dev *pcidev, const struct pci_device_id *id)
	return 0;
free_request_mem:
-	dma_free_coherent(&hba->pcidev->dev,
-			hba->req_size * hba->max_requests + 0x20,
-			hba->dma_coherent, hba->dma_coherent_handle);
+	for (i = 0; i < hba->max_requests; i++) {
+		if (hba->dma_coherent[i] && hba->dma_coherent_handle[i])
+			dma_free_coherent(&hba->pcidev->dev,
+					hba->req_size + 0x20,
+					hba->dma_coherent[i],
+					hba->dma_coherent_handle[i]);
+		else
+			break;
+	}
-free_request_irq:
	free_irq(hba->pcidev->irq, hba);
unmap_pci_bar:
@@ -1546,6 +1559,7 @@ static void hptiop_remove(struct pci_dev *pcidev)
{
	struct Scsi_Host *host = pci_get_drvdata(pcidev);
	struct hptiop_hba *hba = (struct hptiop_hba *)host->hostdata;
+	u32 i;
	dprintk("scsi%d: hptiop_remove\n", hba->host->host_no);
@@ -1555,10 +1569,15 @@ static void hptiop_remove(struct pci_dev *pcidev)
	free_irq(hba->pcidev->irq, hba);
-	dma_free_coherent(&hba->pcidev->dev,
-			hba->req_size * hba->max_requests + 0x20,
-			hba->dma_coherent,
-			hba->dma_coherent_handle);
+	for (i = 0; i < hba->max_requests; i++) {
+		if (hba->dma_coherent[i] && hba->dma_coherent_handle[i])
+			dma_free_coherent(&hba->pcidev->dev,
+					hba->req_size + 0x20,
+					hba->dma_coherent[i],
+					hba->dma_coherent_handle[i]);
+		else
+			break;
+	}
	hba->ops->internal_memfree(hba);
@@ -1653,6 +1672,14 @@ static struct pci_device_id hptiop_id_table[] = {
	{ PCI_VDEVICE(TTI, 0x3020), (kernel_ulong_t)&hptiop_mv_ops },
	{ PCI_VDEVICE(TTI, 0x4520), (kernel_ulong_t)&hptiop_mvfrey_ops },
	{ PCI_VDEVICE(TTI, 0x4522), (kernel_ulong_t)&hptiop_mvfrey_ops },
{ PCI_VDEVICE(TTI, 0x3610), (kernel_ulong_t)&hptiop_mvfrey_ops },
{ PCI_VDEVICE(TTI, 0x3611), (kernel_ulong_t)&hptiop_mvfrey_ops },
{ PCI_VDEVICE(TTI, 0x3620), (kernel_ulong_t)&hptiop_mvfrey_ops },
{ PCI_VDEVICE(TTI, 0x3622), (kernel_ulong_t)&hptiop_mvfrey_ops },
{ PCI_VDEVICE(TTI, 0x3640), (kernel_ulong_t)&hptiop_mvfrey_ops },
{ PCI_VDEVICE(TTI, 0x3660), (kernel_ulong_t)&hptiop_mvfrey_ops },
{ PCI_VDEVICE(TTI, 0x3680), (kernel_ulong_t)&hptiop_mvfrey_ops },
{ PCI_VDEVICE(TTI, 0x3690), (kernel_ulong_t)&hptiop_mvfrey_ops },
	{},
};
...
/*
 * HighPoint RR3xxx/4xxx controller driver for Linux
- * Copyright (C) 2006-2012 HighPoint Technologies, Inc. All Rights Reserved.
+ * Copyright (C) 2006-2015 HighPoint Technologies, Inc. All Rights Reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
@@ -327,8 +327,8 @@ struct hptiop_hba {
	struct hptiop_request reqs[HPTIOP_MAX_REQUESTS];
	/* used to free allocated dma area */
-	void        *dma_coherent;
-	dma_addr_t  dma_coherent_handle;
+	void        *dma_coherent[HPTIOP_MAX_REQUESTS];
+	dma_addr_t  dma_coherent_handle[HPTIOP_MAX_REQUESTS];
	atomic_t    reset_count;
	atomic_t    resetting;
......
@@ -1165,7 +1165,8 @@ static void ipr_init_res_entry(struct ipr_resource_entry *res,
 	if (ioa_cfg->sis64) {
 		proto = cfgtew->u.cfgte64->proto;
-		res->res_flags = cfgtew->u.cfgte64->res_flags;
+		res->flags = be16_to_cpu(cfgtew->u.cfgte64->flags);
+		res->res_flags = be16_to_cpu(cfgtew->u.cfgte64->res_flags);
 		res->qmodel = IPR_QUEUEING_MODEL64(res);
 		res->type = cfgtew->u.cfgte64->res_type;
@@ -1313,8 +1314,8 @@ static void ipr_update_res_entry(struct ipr_resource_entry *res,
 	int new_path = 0;

 	if (res->ioa_cfg->sis64) {
-		res->flags = cfgtew->u.cfgte64->flags;
-		res->res_flags = cfgtew->u.cfgte64->res_flags;
+		res->flags = be16_to_cpu(cfgtew->u.cfgte64->flags);
+		res->res_flags = be16_to_cpu(cfgtew->u.cfgte64->res_flags);
 		res->type = cfgtew->u.cfgte64->res_type;
 		memcpy(&res->std_inq_data, &cfgtew->u.cfgte64->std_inq_data,
@@ -1900,7 +1901,7 @@ static void ipr_log_array_error(struct ipr_ioa_cfg *ioa_cfg,
  * Return value:
  * 	none
  **/
-static void ipr_log_hex_data(struct ipr_ioa_cfg *ioa_cfg, u32 *data, int len)
+static void ipr_log_hex_data(struct ipr_ioa_cfg *ioa_cfg, __be32 *data, int len)
 {
 	int i;
@@ -2270,7 +2271,7 @@ static void ipr_log_fabric_error(struct ipr_ioa_cfg *ioa_cfg,
 			((unsigned long)fabric + be16_to_cpu(fabric->length));
 	}

-	ipr_log_hex_data(ioa_cfg, (u32 *)fabric, add_len);
+	ipr_log_hex_data(ioa_cfg, (__be32 *)fabric, add_len);
 }

 /**
@@ -2364,7 +2365,7 @@ static void ipr_log_sis64_fabric_error(struct ipr_ioa_cfg *ioa_cfg,
 			((unsigned long)fabric + be16_to_cpu(fabric->length));
 	}

-	ipr_log_hex_data(ioa_cfg, (u32 *)fabric, add_len);
+	ipr_log_hex_data(ioa_cfg, (__be32 *)fabric, add_len);
 }

 /**
@@ -4455,7 +4456,7 @@ static ssize_t ipr_show_device_id(struct device *dev, struct device_attribute *a
 	spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
 	res = (struct ipr_resource_entry *)sdev->hostdata;
 	if (res && ioa_cfg->sis64)
-		len = snprintf(buf, PAGE_SIZE, "0x%llx\n", res->dev_id);
+		len = snprintf(buf, PAGE_SIZE, "0x%llx\n", be64_to_cpu(res->dev_id));
 	else if (res)
 		len = snprintf(buf, PAGE_SIZE, "0x%llx\n", res->lun_wwn);
...
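The ipr hunks above consistently wrap big-endian on-wire fields (`__be16`, `__be64`) in be16_to_cpu()/be64_to_cpu() before using them, instead of reading them raw. Outside the kernel, the same conversion can be sketched portably by reassembling the value byte by byte, which yields the correct result on both little- and big-endian hosts (function names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Portable analogue of the kernel's be16_to_cpu(): interpret two
 * bytes as a big-endian 16-bit value regardless of host endianness. */
static uint16_t be16_to_cpu_portable(const uint8_t b[2])
{
	return (uint16_t)((b[0] << 8) | b[1]);
}

/* Portable analogue of be64_to_cpu(): fold eight big-endian bytes,
 * most significant first, into a 64-bit value. */
static uint64_t be64_to_cpu_portable(const uint8_t b[8])
{
	uint64_t v = 0;

	for (int i = 0; i < 8; i++)
		v = (v << 8) | b[i];
	return v;
}
```

Reading such a field without the conversion happens to work on big-endian hosts and silently byte-swaps on little-endian ones, which is exactly the class of bug these hunks (and the `u32 *` to `__be32 *` prototype change, which lets sparse flag raw accesses) are closing.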
(Diffs for the remaining files in this merge are collapsed.)