- 26 April 2017, 1 commit
-
-
Committed by Christoph Hellwig
Found by sparse. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Matias Bjørling <matias@cnexlabs.com>
-
- 21 April 2017, 2 commits
-
-
Committed by Christoph Hellwig
Currently it's used by the lightnvm passthrough ioctl, but we'd like to make it private in preparation for block layer specific error codes. Lightnvm already returns the real NVMe status anyway, so I think we can just limit it to returning -EIO for any status set. This will need a careful audit from the lightnvm folks, though. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Christoph Hellwig
We want our own clearly defined error field for NVMe passthrough commands, and the request errors field is going away in its current form. Just store the status and result field in struct nvme_request from hardirq completion context (using a new helper) and then generate a Linux errno for the block layer only when we actually need it. Because we can't overload the status value with a negative error code for cancelled commands, we now have a flags field in struct nvme_request that contains a bit for this condition. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Jens Axboe <axboe@fb.com>
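A minimal sketch of the per-request state this describes (field widths and the helper name are assumptions for illustration, not the exact tree contents):

    /* sketch: raw NVMe completion state kept alongside the SQE pointer */
    struct nvme_request {
    	struct nvme_command	*cmd;
    	union nvme_result	result;
    	u16			status;	/* raw NVMe status from the CQE */
    	u16			flags;	/* e.g. a "cancelled" bit, since status
    					   cannot carry a negative errno */
    };

    /* hypothetical helper called from hardirq completion context */
    static inline void nvme_record_completion(struct nvme_request *req,
    					      union nvme_result result, u16 status)
    {
    	req->result = result;
    	req->status = status;
    }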
-
- 20 April 2017, 1 commit
-
-
Committed by Bart Van Assche
This patch changes the behavior of the lightnvm driver as follows:
* REQ_FAILFAST_MASK is set for read-ahead requests.
* If no I/O priority has been set in the bio, the I/O priority is copied from the I/O context.
* The rq_disk member is initialized if bio->bi_bdev != NULL.
* The bio sector offset is copied into req->__sector instead of retaining the value -1 set by blk_mq_alloc_request().
* req->errors is initialized to zero.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Matias Bjørling <m@bjorling.me> Cc: Adam Manzanares <adam.manzanares@wdc.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 17 April 2017, 4 commits
-
-
Committed by Javier González
The NVMe I/O command control bits are 16 bits wide, but are interpreted as 32 bits in the lightnvm user I/O data path. Signed-off-by: Javier González <javier@cnexlabs.com> Signed-off-by: Matias Bjørling <matias@cnexlabs.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Matias Bjørling
The asserts in _nvme_nvm_check_size are not compiled in, due to the function not being called. Make sure that it is called, and also fix the wrong sizes in the asserts for nvme_nvm_addr_format and nvme_nvm_bb_tbl, which checked for the number of bits instead of bytes. Reported-by: Scott Bauer <scott.bauer@intel.com> Signed-off-by: Matias Bjørling <matias@cnexlabs.com> Signed-off-by: Jens Axboe <axboe@fb.com>
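The pattern at issue, sketched (sizes in bytes are illustrative; BUILD_BUG_ON in a static inline only fires if the function is actually referenced somewhere, e.g. from an init path):

    static inline void _nvme_nvm_check_size(void)
    {
    	/* sizes must be byte counts, not bit counts */
    	BUILD_BUG_ON(sizeof(struct nvme_nvm_addr_format) != 16);
    	BUILD_BUG_ON(sizeof(struct nvme_nvm_bb_tbl) != 64);
    }

    static int __init nvme_nvm_init(void)	/* hypothetical caller */
    {
    	_nvme_nvm_check_size();	/* without a call site, gcc never emits
    				   the inline and the asserts are dead */
    	return 0;
    }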
-
Committed by Javier González
Until now, erases have been submitted as synchronous commands through a dedicated erase function. In order to enable targets implementing asynchronous erases, refactor the erase path so that it uses the normal async I/O submission functions. If a target requires sync I/O, it can implement it internally. Also, adapt rrpc to use the new erase path. Signed-off-by: Javier González <javier@cnexlabs.com> Fixed spelling error. Signed-off-by: Matias Bjørling <matias@cnexlabs.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Scott Bauer
There are two closely named structs in lightnvm: struct nvme_nvm_addr_format and struct nvm_addr_format. The first struct has 4 reserved bytes at the end, the second does not:
    (gdb) p sizeof(struct nvme_nvm_addr_format)
    $1 = 16
    (gdb) p sizeof(struct nvm_addr_format)
    $2 = 12
In the nvme_nvm_identify function we memcpy from the larger struct to the smaller struct. We incorrectly pass the length of the larger struct and overflow by 4 bytes; let's not do that. Signed-off-by: Scott Bauer <scott.bauer@intel.com> Signed-off-by: Matias Bjørling <matias@cnexlabs.com> Signed-off-by: Jens Axboe <axboe@fb.com>
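A minimal sketch of the bug class (layouts abbreviated, only the sizes matter; the fix is simply to size the copy by the destination):

    struct nvme_nvm_addr_format { __u8 data[12]; __u8 resv[4]; };	/* 16 bytes */
    struct nvm_addr_format      { __u8 data[12]; };			/* 12 bytes */

    static void copy_format(struct nvm_addr_format *dst,
    			    const struct nvme_nvm_addr_format *src)
    {
    	/* buggy: sized by the source -- overflows dst by 4 bytes */
    	/* memcpy(dst, src, sizeof(*src)); */

    	/* fixed: bound the copy by the destination's size */
    	memcpy(dst, src, sizeof(*dst));
    }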
-
- 31 January 2017, 4 commits
-
-
Committed by Matias Bjørling
When the lightnvm core had the "gennvm" layer between the device and the target, the core needed to be able to figure out which target it should send an end_io callback to, leading to a "double" end_io: first for the media manager instance, and then for the target instance. Now that core and gennvm are merged, there is no longer a need for this, and a single end_io callback will do. Signed-off-by: Matias Bjørling <matias@cnexlabs.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Matias Bjørling
Enable user-space to issue vector I/O commands through ioctls. To issue a vector I/O, the ppa list with addresses is also required and must be mapped for the controller to access. For each ioctl, the result and status bits are returned as well, so that user-space can retrieve the open-channel SSD completion bits. The implementation covers the traditional use-cases of bad block management, and vectored read/write/erase. Signed-off-by: Matias Bjørling <matias@cnexlabs.com> Metadata implementation, test, and fixes. Signed-off-by: Simon A.F. Lund <slund@cnexlabs.com> Signed-off-by: Matias Bjørling <matias@cnexlabs.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Matias Bjørling
The number of configuration groups has been limited to one in the current code, even though there is support for up to four. With the introduction of the open-channel SSD 1.3 specification, only a single group is exposed going forward. Reflect this in the nvm_id structure. Signed-off-by: Matias Bjørling <matias@cnexlabs.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Matias Bjørling
For the first iteration of Open-Channel SSDs, it was anticipated that there could be various media managers on top of an open-channel SSD, so as to allow vendors to plug in their own host-side FTLs, without the media manager in between. Now that an Open-Channel SSD is exposed as a traditional block device, there is no longer a need for this. Therefore, let's merge the gennvm code with core and simplify the stack. Signed-off-by: Matias Bjørling <matias@cnexlabs.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 30 November 2016, 5 commits
-
-
Committed by Javier González
In order to naturally support multi-target instances on an Open-Channel SSD, targets should own the LUNs they get blocks from and manage provisioning internally. This is done in several steps. Since targets own the LUNs they are instantiated on top of and manage the free block list internally, there is no need for a LUN abstraction in the media manager. LUNs are intrinsically managed as in the physical layout (ch:0,lun:0, ..., ch:0,lun:n, ch:1,lun:0, ch:1,lun:n, ..., ch:m,lun:0, ch:m,lun:n) and given to the targets based on the target creation ioctl. This simplifies LUN management and clears the path for a partition manager to sit directly underneath LightNVM targets. Signed-off-by: Javier González <javier@cnexlabs.com> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Javier González
In order to naturally support multi-target instances on an Open-Channel SSD, targets should own the LUNs they get blocks from and manage provisioning internally. This is done in several steps. This patch moves block provisioning inside of the target and removes the get/put block interface from the media manager. Signed-off-by: Javier González <javier@cnexlabs.com> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Javier González
Erases might be subject to host hints. An example is multi-plane programming to erase blocks in parallel. Enable targets to specify this hint. Signed-off-by: Javier González <javier@cnexlabs.com> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Matias Bjørling
Previously, LBA read and write were not supported in the lightnvm specification. Now that the specification supports them, let's use the traditional NVMe gendisk, and attach the lightnvm sysfs geometry export. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Matias Bjørling
When struct nvme_request was introduced, nvme_nvm_submit_io was converted to the new interface. The interface moves the nvme_nvm_command data structure into the struct request pdu. On I/O completion, rq->cmd was freed, when it should have been the dereferenced pdu's nvme_request->cmd. Fixes: d49187e9 "nvme: introduce struct nvme_request" Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 12 November 2016, 1 commit
-
-
Committed by Matias Bjørling
The ns->lba_shift assumes its value to be the logarithm of the LBA size. A previous patch duplicated the lba_shift calculation into lightnvm. It prematurely also subtracted a 512-byte shift, which commonly is applied per command. The 512-byte shift being subtracted twice led to data loss when restoring the logical to physical mapping table from the device and when issuing I/O commands using rrpc. Fix the offset by removing the 512-byte shift subtraction when calculating lba_shift. Fixes: b0b4e09c "lightnvm: control life of nvm_dev in driver" Reported-by: Javier González <javier@cnexlabs.com> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 11 November 2016, 1 commit
-
-
Committed by Christoph Hellwig
This adds a shared per-request structure for all NVMe I/O. This structure is embedded as the first member in all NVMe transport drivers' request private data and allows us to implement common functionality between the drivers. The first use is to replace the current abuse of the SCSI command passthrough fields in struct request for the NVMe command passthrough, but it will grow more fields to allow implementing things like common abort handlers in the future. The passthrough commands are handled by having a pointer to the SQE (struct nvme_command) in struct nvme_request, and the union of the possible result fields, which had to be turned from an anonymous into a named union for that purpose. This avoids having to pass a reference to a full CQE around and thus makes checking the result a lot more lightweight. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@fb.com>
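A sketch of the embedding pattern described here; the transport pdu name is hypothetical, and the trick is that with nvme_request as the first member, the pdu pointer is also a valid nvme_request pointer:

    struct nvme_request {
    	struct nvme_command	*cmd;	/* pointer to the SQE */
    	union nvme_result	result;	/* named union of result fields */
    };

    struct my_transport_iod {		/* hypothetical per-transport pdu */
    	struct nvme_request	req;	/* must be the first member */
    	/* transport-specific fields follow */
    };

    static inline struct nvme_request *nvme_req(struct request *req)
    {
    	return blk_mq_rq_to_pdu(req);	/* works for every transport */
    }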
-
- 21 September 2016, 3 commits
-
-
Committed by Simon A. F. Lund
For a host to access an Open-Channel SSD, it has to know its geometry, so that it writes and reads at the appropriate device bounds. Currently, the geometry information is kept within the kernel, and not exported to user-space for consumption. This patch exposes the configuration through sysfs and enables user-space libraries, such as liblightnvm, to use the sysfs implementation to get the geometry of an Open-Channel SSD. The sysfs entries are stored within the device hierarchy, and can be found using the "lightnvm" device type. An example configuration looks like this:
    /sys/class/nvme/
    └── nvme0n1
        ├── capabilities: 3
        ├── device_mode: 1
        ├── erase_max: 1000000
        ├── erase_typ: 1000000
        ├── flash_media_type: 0
        ├── media_capabilities: 0x00000001
        ├── media_type: 0
        ├── multiplane: 0x00010101
        ├── num_blocks: 1022
        ├── num_channels: 1
        ├── num_luns: 4
        ├── num_pages: 64
        ├── num_planes: 1
        ├── page_size: 4096
        ├── prog_max: 100000
        ├── prog_typ: 100000
        ├── read_max: 10000
        ├── read_typ: 10000
        ├── sector_oob_size: 0
        ├── sector_size: 4096
        ├── media_manager: gennvm
        ├── ppa_format: 0x380830082808001010102008
        ├── vendor_opcode: 0
        ├── max_phys_secs: 64
        └── version: 1
Signed-off-by: Simon A. F. Lund <slund@cnexlabs.com> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Matias Bjørling
LightNVM compatible device drivers do not have a method to expose LightNVM specific sysfs entries. To enable LightNVM sysfs entries to be exposed, lightnvm device drivers require a struct device to attach them to. To allow both the actual device driver and the lightnvm sysfs entries to coexist, the device driver tracks the lifetime of the nvm_dev structure. This patch refactors NVMe and null_blk to handle the lifetime of struct nvm_dev, which eliminates the need for struct gendisk when a lightnvm compatible device is provided. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Matias Bjørling
With LightNVM enabled namespaces, the gendisk structure is not exposed to the user. This prevents LightNVM users from accessing the NVMe device driver specific sysfs entries, and the LightNVM namespace geometry. Refactor the revalidation process, so that a namespace, instead of a gendisk, is revalidated. This later allows patches to wire the sysfs entries up to a non-gendisk namespace. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 21 July 2016, 1 commit
-
-
Committed by Christoph Hellwig
These two are confusing leftovers of the old world order, combining values of the REQ_OP_ and REQ_ namespaces. For callers that don't special-case, we mostly just replace bi_rw with bio_data_dir or op_is_write, except for the few cases where a switch over the REQ_OP_ values makes more sense. Any check for READA is replaced with an explicit check for REQ_RAHEAD. Also remove the READA alias for REQ_RAHEAD. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Mike Christie <mchristi@redhat.com> Signed-off-by: Jens Axboe <axboe@fb.com>
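A before/after sketch of the conversions the commit describes (caller code illustrative; the bi_rw field was later renamed bi_opf):

    /* before: opaque flag checks against the combined namespace */
    /* if (bio->bi_rw & READA) ... */
    /* if (bio->bi_rw & WRITE) ... */

    /* after: explicit flags and helpers */
    if (bio->bi_rw & REQ_RAHEAD)		/* read-ahead check */
    	;
    if (op_is_write(bio_op(bio)))		/* direction check */
    	;
    switch (bio_op(bio)) {			/* where a full op switch is clearer */
    case REQ_OP_READ:
    case REQ_OP_WRITE:
    default:
    	break;
    }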
-
- 07 July 2016, 1 commit
-
-
Committed by Johannes Thumshirn
According to the OpenChannel SSD interface specification, the NAND flash MLC page pairing information's "number of page pairings" field is the first two bytes in the MLC Page Pairing data structure. The hardware's data structure itself is little endian, so annotate it as such, like the rest of lightnvm's data structures. Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 07 May 2016, 8 commits
-
-
Committed by Javier González
The number of ppas contained in a request is not necessarily the number of pages that it maps to, either on the target or on the device side. In order to avoid confusion, rename nr_pages to nr_ppas, since it is what the variable actually contains. Signed-off-by: Javier González <javier@cnexlabs.com> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Arnd Bergmann
A recent change to lightnvm added code to pass a kernel pointer to the hardware, which gcc complained about:
    drivers/nvme/host/lightnvm.c: In function 'nvme_nvm_rqtocmd':
    drivers/nvme/host/lightnvm.c:472:32: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
      c->ph_rw.metadata = cpu_to_le64(rqd->meta_list);
It looks like this has no way of working anyway, so this changes the code to pass the dma_address instead. This was most likely what was intended here. Neither of the two is currently ever written to, so the effect is the same for now. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: a34b1eb78e21 ("lightnvm: enable metadata to be sent to device") Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Sagi Grimberg
Align with the rest of the nvme subsystem. Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Javier González
Until now, the dma pool has been used exclusively to allocate the ppa list being sent to the device. In pblk (upcoming), we use these pools to allocate metadata too. Thus, generalize the names of some variables in the dma helper functions to make the code more readable. Signed-off-by: Javier González <javier@cnexlabs.com> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Javier González
Enable a metadata buffer to be sent to the device through the metadata field on the physical rw nvme command. The size of the metadata buffer must follow dev->oob_size * # of PPAs. Signed-off-by: Javier González <javier@cnexlabs.com> Updated description. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Matias Bjørling
The set_bb_tbl function takes a struct nvm_rq and only uses its ppa_list and nr_pages internally. Instead, make these two variables explicit arguments. This allows a user to call it without initializing a struct nvm_rq first. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Matias Bjørling
The device ops->get_bb_tbl() takes a callback, which allows the caller to use its own callback function to update its data structures in the returning function. This makes it difficult to send parameters to the callback, and is usually circumvented with small private structures that carry both the caller's state and any flags needed to fulfill the update. Refactor ops->get_bb_tbl() to fill a data buffer with the status of the blocks returned, and let the user call the callback function manually. That will provide the necessary flags and data structures and simplify the logic around ops->get_bb_tbl(). Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Matias Bjørling
The get block table command returns a list of blocks and planes with their associated state. Users, such as gennvm and sysblk, manage all planes as a single virtual block. It was therefore natural to fold the bad block list before it is returned. However, to allow users which manage on a per-plane block level to also use the interface, the get_bb_tbl interface is changed to not fold by default, and instead lets the caller fold if necessary. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 19 March 2016, 1 commit
-
-
Committed by Matias Bjorling
PPAs sent to the device are separately acknowledged in a 64-bit status variable. The status is stored in DW0 and DW1 of the completion queue entry. Store this status inside the nvm_rq for further processing. This can later be used to implement retry techniques for failed writes and reads. Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 04 March 2016, 1 commit
-
-
Committed by Matias Bjørling
When the media manager runs in dual or quad plane mode, lightnvm abstracts away plane-specific commands. This poses a problem for get bad block table, as it reports bad blocks per plane, making the table either two or four times bigger than expected. Fold the bad block list before returning it. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 05 February 2016, 1 commit
-
-
Committed by Matias Bjørling
The specification currently limits the number of MLC pairs to 886. Make sure that a device cannot be instantiated if more are configured. Also, the patch previously had the wrong math for copying MLC pairs, as it only copied half of the actual entries. Fixes: ca5927e7 "lightnvm: introduce mlc lower page table mappings" Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 12 January 2016, 3 commits
-
-
Committed by Matias Bjørling
NAND MLC memories have both lower and upper pages. When programming, both of these must be written before data can be read. However, the lower and upper pages might not be placed at even and odd flash pages, but can be skipped. Therefore each flash memory has its lower pages defined, which can then be used when programming and to know when padding is necessary. This patch implements the lower page definition in the specification, and exposes it through a simple lookup table at dev->lptbl. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Matias Bjørling
During get_bb_tbl, a callback is used to allow a user-specific scan function to be called. The callback may return an error, and in that case the return value is overridden. However, the callback error is needed when the fault is a user error and not a kernel error, for example when a user tries to initialize the same device twice. The get_bb_tbl callback should be able to communicate this. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Matias Bjørling
To implement sync I/O support within the LightNVM core, the end_io functions are refactored to take an end_io function pointer, instead of testing for an initialized media manager followed by calling its end_io function. Sync I/O can then be implemented using a callback that signals I/O completion. This is similar to the logic found in blk_execute_rq(). By implementing it this way, the underlying device I/O submission logic is abstracted away from core, targets, and media managers. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 08 December 2015, 2 commits
-
-
Committed by Matias Bjørling
In the case where a request queue is passed to the low level lightnvm device driver integration, the device driver might pass its admin commands through another queue. Instead, pass nvm_dev, and let the low level driver pick the appropriate queue. Reported-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Matias Bjørling
The core may issue I/Os before a media manager is registered with the lightnvm subsystem. Make sure that we don't call the media manager's ->end_io prematurely with a null pointer. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-