- 05 Nov 2011, 40 commits
-
Committed by Matthew Wilcox
If any of the memory allocations in nvme_setup_prps fail, handle it by modifying the passed-in data length to reflect the number of bytes we are actually able to send. Also allow the caller to specify the GFP flags they need; for user-initiated commands, we can use GFP_KERNEL allocations. The various callers are updated to handle this possibility; the main I/O path is already prepared for it (it may happen anyway when nvme_map_bio is unable to map all the segments of the I/O). The other callers return -ENOMEM instead of doing partial I/Os. Reported-by: Andi Kleen <andi@firstfloor.org> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
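A minimal sketch of the failure handling described above, under assumed names (nvme_prp_page, total_len, covered); the real nvme_setup_prps is considerably more involved:

```c
#include <linux/dmapool.h>
#include <linux/types.h>

/*
 * Hypothetical helper: try to allocate another PRP-list page; on
 * failure, shrink *total_len to the bytes already covered so the
 * caller submits a partial I/O instead of failing outright. The
 * caller chooses the gfp flags (GFP_KERNEL for user-initiated
 * commands, GFP_ATOMIC in the I/O path).
 */
static __le64 *nvme_prp_page(struct dma_pool *prp_pool, gfp_t gfp,
			     dma_addr_t *dma, int *total_len, int covered)
{
	__le64 *prp_list = dma_pool_alloc(prp_pool, gfp, dma);

	if (!prp_list)
		*total_len = covered;	/* send what we managed to map */
	return prp_list;
}
```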
-
Committed by Matthew Wilcox
The current approach of using the namespace ID as the minor number doesn't work when there are multiple adapters in the machine. Rather than statically partitioning the number of namespaces between adapters, dynamically allocate minor numbers to namespaces as they are detected. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
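Such an allocator might look roughly like this bitmap-based sketch; NVME_MINORS, the bitmap, and the lock name are assumptions here, not the driver's actual identifiers:

```c
#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/spinlock.h>

#define NVME_MINORS 64				/* assumed pool size */

static DEFINE_SPINLOCK(nvme_minor_lock);
static unsigned long used_minors[BITS_TO_LONGS(NVME_MINORS)];

/* Hand out the lowest free minor, regardless of which adapter asks. */
static int nvme_alloc_ns_minor(void)
{
	int minor;

	spin_lock(&nvme_minor_lock);
	minor = find_first_zero_bit(used_minors, NVME_MINORS);
	if (minor < NVME_MINORS)
		set_bit(minor, used_minors);
	spin_unlock(&nvme_minor_lock);

	return minor < NVME_MINORS ? minor : -ENODEV;
}
```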
-
Committed by Matthew Wilcox
Previously it was implicitly included through some other header file. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
In the kthread, walk the list of outstanding I/Os and check they've not hit the timeout. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
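In outline, such a scan might look like the following; struct nvme_cmd_info, its timeout field, and nvme_timeout_cmd() are illustrative assumptions:

```c
#include <linux/bitops.h>
#include <linux/jiffies.h>

/* Illustrative per-queue scan, called from the kthread with the
 * queue lock held: time out any command past its deadline. */
static void nvme_timeout_ios(struct nvme_queue *nvmeq)
{
	int cmdid;

	for_each_set_bit(cmdid, nvmeq->cmdid_data, nvmeq->q_depth) {
		struct nvme_cmd_info *info = &nvmeq->cmd_info[cmdid];

		if (time_after(jiffies, info->timeout))
			nvme_timeout_cmd(nvmeq, cmdid);	/* hypothetical */
	}
}
```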
-
Committed by Matthew Wilcox
The trailing '_data' was annoying and inconsistent. Also, make it actually return the data, since this is needed for timing out commands. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
When an I/O completed with an error, we would call bio_endio twice (once with -EIO and once with 0). Found by inspection. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
The device reports (in its capability register) how long it will take to initialise. If that time elapses before the ready bit becomes set, conclude the device is broken and refuse to initialise it. Log a nice error message so the user knows why we did nothing. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
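A sketch of that wait loop, following the NVMe 1.0 register layout (CAP.TO at bits 31:24, in 500ms units; CSTS.RDY at offset 0x1c, bit 0); readq() is assumed available and error handling is simplified:

```c
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/jiffies.h>

#define NVME_CAP_TIMEOUT(cap)	(((cap) >> 24) & 0xff)	/* 500ms units */

static int nvme_wait_ready(void __iomem *bar)
{
	u64 cap = readq(bar);			/* CAP register, offset 0 */
	unsigned long timeout = jiffies +
		msecs_to_jiffies(NVME_CAP_TIMEOUT(cap) * 500);

	while (!(readl(bar + 0x1c) & 1)) {	/* CSTS.RDY not yet set */
		msleep(100);
		if (time_after(jiffies, timeout)) {
			pr_err("nvme: device not ready; aborting init\n");
			return -ENODEV;
		}
	}
	return 0;
}
```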
-
Committed by Matthew Wilcox
We need to clear the affinity mask before calling free_irq(). Reported-by: Shane Michael Matthews <shane.matthews@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
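The fix in miniature; the wrapper name and queue pointer are illustrative:

```c
#include <linux/interrupt.h>

static void nvme_free_queue_irq(int vector, struct nvme_queue *nvmeq)
{
	irq_set_affinity_hint(vector, NULL);	/* clear before freeing */
	free_irq(vector, nvmeq);
}
```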
-
Committed by Matthew Wilcox
The arbitration field was extended by one bit, shifting the shutdown notification bits by one. Also, the SQ/CQ entry size was made configurable for future extensions. Reported-by: Paul Luse <paul.e.luse@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
The read and write commands don't define a 'result', so there's no need to copy it back to userspace. Remove the ability of the ioctl to submit commands to a different namespace; it's just asking for trouble, and the use case I have in mind will be addressed through a different ioctl in the future. That removes the need for both the block_shift and nsid arguments. Check that the opcode is one of 'read' or 'write'. More opcodes may be added in the future, but they will need a different structure definition. The nblocks field is redefined to be 0-based. This allows the user to request the full 65536 blocks. Don't byteswap the reftag, apptag and appmask. Martin Petersen tells me these are calculated in big-endian and are transmitted to the device in big-endian. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
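For reference, the structure as it ended up in the mainline header looks approximately like this (reproduced from memory, so treat the exact layout as an assumption and check linux/nvme.h):

```c
#include <linux/types.h>

struct nvme_user_io {
	__u8	opcode;		/* must be a read or write opcode */
	__u8	flags;
	__u16	control;
	__u16	nblocks;	/* 0-based: 0 means one block */
	__u16	rsvd;
	__u64	metadata;
	__u64	addr;
	__u64	slba;
	__u32	dsmgmt;
	__u32	reftag;		/* passed through unswapped (big-endian) */
	__u16	apptag;
	__u16	appmask;
};
```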
-
Committed by Matthew Wilcox
NVME_IOCTL_SUBMIT_IO has a struct nvme_user_io, not a struct nvme_rw_command as a parameter, and NVME_IOCTL_DOWNLOAD_FW is a Write, not a Read. Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Make ioctls work for 32-bit applications on 64-bit kernels. The structures are defined to be the same for both 32- and 64-bit applications, so we can use the same handler for both. Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Fill in all the num_possible_cpus() entries with duplicate pointers. This reduces the complexity of the frequently-called get_nvmeq(), as well as avoiding a bug in it when there are fewer queues than CPUs. Reported-by: Shane Michael Matthews <shane.matthews@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
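The idea in sketch form; the modulo mapping and the +1 offset (slot 0 being the admin queue) reflect the mainline driver, but the names here are approximate:

```c
/* Map every possible CPU onto one of nr_io_queues I/O queues. */
static void nvme_assign_io_queues(struct nvme_dev *dev, int nr_io_queues)
{
	int cpu;

	for_each_possible_cpu(cpu)
		dev->queues[cpu + 1] = dev->queues[(cpu % nr_io_queues) + 1];
}

/* get_nvmeq() collapses to a plain lookup with no bounds logic. */
static struct nvme_queue *get_nvmeq(struct nvme_dev *dev)
{
	return dev->queues[get_cpu() + 1];	/* pair with put_cpu() */
}
```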
-
Committed by Matthew Wilcox
Once there are no more bios on the congestion list, we can stop waking up the nvme kthread every time a completion happens. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
If the last element of the PRP list fits at the end of the current page, there's no need to allocate an extra page just to hold that single element. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
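The resulting page-count arithmetic, as an illustrative helper: with 4K pages and 8-byte PRP entries a page holds 512 entries, and one entry is spent on the chain pointer only when chaining is actually needed:

```c
#include <linux/kernel.h>	/* DIV_ROUND_UP */

/* How many PRP-list pages does an I/O of 'nprps' entries need? */
static int nvme_npages(unsigned nprps)
{
	if (nprps <= PAGE_SIZE / 8)		/* fits on a single page */
		return 1;
	/* each chained page holds 511 entries plus the chain pointer */
	return DIV_ROUND_UP(nprps, PAGE_SIZE / 8 - 1);
}
```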
-
Committed by Matthew Wilcox
The spec says this is a 0's-based value. We don't need to handle the maximal value because it's reserved to mean "every namespace". Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
The head can never overrun the tail since we won't allocate enough command IDs to let that happen. The status codes are in sync with the spec. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Reported-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Reported-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Krzysztof Wierzbicki
Signed-off-by: Krzysztof Wierzbicki <krzysztof.wierzbicki@intel.com> Signed-off-by: Matthew Wilcox <willy@linux.intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
The spec says we're not allowed to completely fill the submission queue. Solve this by reducing the number of allocatable cmdids by 1. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
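In outline (field names assumed, reconstructed from memory of the era's cmdid allocator):

```c
#include <linux/bitops.h>
#include <linux/errno.h>

/* Leave one submission-queue slot unused so the queue can never be
 * completely filled: depth - 1 is the cmdid budget. */
static int nvme_alloc_cmdid(struct nvme_queue *nvmeq)
{
	int depth = nvmeq->q_depth - 1;
	int cmdid;

	do {
		cmdid = find_first_zero_bit(nvmeq->cmdid_data, depth);
		if (cmdid >= depth)
			return -EBUSY;
	} while (test_and_set_bit(cmdid, nvmeq->cmdid_data));

	return cmdid;
}
```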
-
Committed by Matthew Wilcox
When we submit subsequent portions of the I/O, we need to access the updated block, not start reading again from the original position. This was showing up as miscompares in the XFS randholes testcase. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
NVMe scatterlists must be virtually contiguous, like almost all I/Os. However, when the filesystem lays out files with a hole, it can be that adjacent LBAs map to non-adjacent virtual addresses. Handle this by submitting one NVMe command at a time for each virtually discontiguous range. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
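A sketch of the adjacency test that decides where to split; the helper name is hypothetical, the old immutable-biovec API of this era is assumed, and highmem pages (where page_address() can return NULL) are ignored:

```c
#include <linux/bio.h>
#include <linux/mm.h>

/* Does 'cur' continue 'prev' in the kernel's virtual address space? */
static bool nvme_bvec_virt_adjacent(struct bio_vec *prev,
				    struct bio_vec *cur)
{
	return page_address(prev->bv_page) + prev->bv_offset + prev->bv_len ==
	       page_address(cur->bv_page) + cur->bv_offset;
}
```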
-
Committed by Matthew Wilcox
Linux implements Flush as a bit in the bio. That means there may also be data associated with the flush; if so the flush should be sent before the data. To avoid completing the bio twice, I add CMD_CTX_FLUSH to indicate the completion routine should do nothing. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
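The mechanism in sketch form; the sentinel value here is illustrative (the driver derives its CMD_CTX_* constants from POISON_POINTER_DELTA):

```c
#define CMD_CTX_FLUSH	((void *)0x318)	/* illustrative sentinel */

static void bio_completion(struct nvme_queue *nvmeq, void *ctx,
			   struct nvme_completion *cqe)
{
	if (ctx == CMD_CTX_FLUSH)
		return;	/* the data half of the bio will complete it */

	/* ... otherwise unmap the data and call bio_endio() once ... */
}
```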
-
Committed by Matthew Wilcox
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
The value written to the doorbell needs to be the first free index in the queue, not the most recently used index in the queue. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
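The corrected ordering, roughly as it appears in the driver: advance the tail first, then write that (now first free) index to the doorbell. The wrapper name is illustrative:

```c
#include <linux/io.h>

static void nvme_write_sq_doorbell(struct nvme_queue *nvmeq)
{
	if (++nvmeq->sq_tail == nvmeq->q_depth)
		nvmeq->sq_tail = 0;
	writel(nvmeq->sq_tail, nvmeq->q_db);	/* first free slot */
}
```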
-
Committed by Matthew Wilcox
If interrupts are misconfigured, the kthread will be needed to process admin queue completions. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
I got confused about whether this included the admin queue or not, and had to resort to reading the spec. It doesn't include the admin queue, so make that clear in the name. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
This was the data transfer bit until spec rev 0.92. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Instead of trying to resubmit I/Os in the I/O completion path (in interrupt context), wake up a kthread which will resubmit I/O from user context. This allows mke2fs to run to completion. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
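The shape of such a kthread; the body that walks queues and resubmits congested bios is elided and names are illustrative:

```c
#include <linux/kthread.h>
#include <linux/sched.h>

static int nvme_kthread(void *data)
{
	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		schedule_timeout(HZ);
		/* walk each device's queues from process context:
		 * process completions and resubmit bios parked on
		 * the congestion list */
	}
	return 0;
}
```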
-
Committed by Matthew Wilcox
Return -EBUSY if the queue is full or -ENOMEM if we failed to allocate memory (or map a scatterlist). Also use GFP_ATOMIC to allocate the nvme_bio and move the locking to the callers of nvme_submit_bio_queue(). In nvme_make_request(), don't permit an I/O to jump the queue -- if the congestion list already has an entry, just add to the tail, rather than trying to submit. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
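The no-queue-jumping logic in sketch form, assuming a bio_list named sq_cong and the error conventions above; the -ENOMEM path (failing the bio) is elided:

```c
#include <linux/bio.h>
#include <linux/errno.h>

static int nvme_make_request_sketch(struct nvme_queue *nvmeq,
				    struct nvme_ns *ns, struct bio *bio)
{
	int result = -EBUSY;

	spin_lock_irq(&nvmeq->q_lock);
	if (bio_list_empty(&nvmeq->sq_cong))	/* nobody waiting: try it */
		result = nvme_submit_bio_queue(nvmeq, ns, bio);
	if (result == -EBUSY)			/* full, or others waiting */
		bio_list_add(&nvmeq->sq_cong, bio);
	spin_unlock_irq(&nvmeq->q_lock);

	return 0;
}
```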
-
Committed by Matthew Wilcox
Add two reserved registers in the middle of the BAR to match the 1.0 spec plus ECN 0002. Also rename IMC and ISC to INTMC and INTSC to conform with the spec. We still don't need to use them :-) Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
To avoid overrunning the sg array, we have to merge physically contiguous pages into a single sg entry. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
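In sketch form, using BIOVEC_PHYS_MERGEABLE() and the bio_for_each_segment() API of this era (the function name is illustrative):

```c
#include <linux/bio.h>
#include <linux/scatterlist.h>

static int nvme_map_bio_sketch(struct scatterlist *sgl, struct bio *bio)
{
	struct bio_vec *bvec, *prev = NULL;
	struct scatterlist *sg = NULL;
	int i, nsegs = 0;

	bio_for_each_segment(bvec, bio, i) {
		if (prev && BIOVEC_PHYS_MERGEABLE(prev, bvec)) {
			sg->length += bvec->bv_len;	/* extend entry */
		} else {
			sg = sg ? sg + 1 : sgl;
			sg_set_page(sg, bvec->bv_page, bvec->bv_len,
				    bvec->bv_offset);
			nsegs++;
		}
		prev = bvec;
	}
	return nsegs;
}
```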
-
Committed by Matthew Wilcox
If dma_map_sg returns 0 (failure), we need to fail the I/O. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
We were passing the nvme_queue to access the q_dmadev for the dma_alloc_coherent calls, but since we moved to the dma pool API, we really only need the nvme_dev. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Add a second memory pool for smaller I/Os. We can pack 16 of these on a single page instead of using an entire page for each one. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
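The two-pool setup, close to what landed in the driver; the pool names match mainline, while the surrounding struct fields are assumed:

```c
#include <linux/dmapool.h>
#include <linux/errno.h>

static int nvme_setup_prp_pools(struct nvme_dev *dev)
{
	struct device *dmadev = &dev->pci_dev->dev;

	/* full pages for large PRP lists */
	dev->prp_page_pool = dma_pool_create("prp list page", dmadev,
					     PAGE_SIZE, PAGE_SIZE, 0);
	if (!dev->prp_page_pool)
		return -ENOMEM;

	/* at 256 bytes, 16 of these fit on a single 4K page */
	dev->prp_small_pool = dma_pool_create("prp list 256", dmadev,
					      256, 256, 0);
	if (!dev->prp_small_pool) {
		dma_pool_destroy(dev->prp_page_pool);
		return -ENOMEM;
	}
	return 0;
}
```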
-
Committed by Matthew Wilcox
Calling dma_free_coherent from interrupt context causes warnings. Using the DMA pools delays freeing until pool destruction, so avoids the problem. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-