- 05 Nov 2011, 40 commits
-
Committed by Matthew Wilcox
The head can never overrun the tail since we won't allocate enough command IDs to let that happen. The status codes are in sync with the spec.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Reported-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Reported-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Krzysztof Wierzbicki
Signed-off-by: Krzysztof Wierzbicki <krzysztof.wierzbicki@intel.com>
Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
The spec says we're not allowed to completely fill the submission queue. Solve this by reducing the number of allocatable cmdids by 1.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
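An aside on why depth - 1: with a ring, head == tail must mean "empty", so a completely full queue would be indistinguishable from an empty one. A minimal sketch of the resulting occupancy test (helper names are mine, not the driver's):

```c
/* Sketch: one-slot-reserved ring occupancy test.  A queue of depth
 * N holds at most N - 1 commands, so cmdid allocation is capped at
 * depth - 1 and head == tail unambiguously means "empty".
 */
static inline int nvme_sq_full(u16 head, u16 tail, u16 depth)
{
	return ((tail + 1) % depth) == head;
}

static inline int nvme_sq_empty(u16 head, u16 tail)
{
	return head == tail;
}
```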
-
Committed by Matthew Wilcox
When we submit subsequent portions of the I/O, we need to access the updated block, not start reading again from the original position. This was showing up as miscompares in the XFS randholes testcase.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
NVMe scatterlists must be virtually contiguous, like almost all I/Os. However, when the filesystem lays out files with a hole, it can be that adjacent LBAs map to non-adjacent virtual addresses. Handle this by submitting one NVMe command at a time for each virtually discontiguous range.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
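The mergeability test boils down to "does the previous segment end exactly where the next one begins in kernel virtual address space". A hedged sketch (helper name is hypothetical, and it assumes lowmem pages where page_address() is valid):

```c
/* Sketch: two bio segments can share one NVMe command only if they
 * are virtually contiguous; otherwise the bio is split at this
 * boundary and submitted as separate commands.
 */
static int nvme_bvec_virt_mergeable(struct bio_vec *prev, struct bio_vec *cur)
{
	void *prev_end = page_address(prev->bv_page) + prev->bv_offset +
			 prev->bv_len;
	void *cur_start = page_address(cur->bv_page) + cur->bv_offset;

	return prev_end == cur_start;
}
```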
-
Committed by Matthew Wilcox
Linux implements Flush as a bit in the bio. That means there may also be data associated with the flush; if so, the flush should be sent before the data. To avoid completing the bio twice, I add CMD_CTX_FLUSH to indicate the completion routine should do nothing.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
The value written to the doorbell needs to be the first free index in the queue, not the most recently used index in the queue.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
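In other words, the doorbell gets the post-increment value of the tail. A sketch of the submit path (field names follow the commit's vocabulary but are assumptions here):

```c
/* Sketch: copy the command into the submission queue, advance the
 * tail with wraparound, then write the new tail (the first free
 * index) to the doorbell register.
 */
static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd)
{
	u16 tail = nvmeq->sq_tail;

	memcpy(&nvmeq->sq_cmds[tail], cmd, sizeof(*cmd));
	if (++tail == nvmeq->q_depth)
		tail = 0;
	writel(tail, nvmeq->q_db);	/* first free slot, not the one just used */
	nvmeq->sq_tail = tail;
}
```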
-
Committed by Matthew Wilcox
If interrupts are misconfigured, the kthread will be needed to process admin queue completions.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
I got confused about whether this included the admin queue or not, and had to resort to reading the spec. It doesn't include the admin queue, so make that clear in the name.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
This was the data transfer bit until spec rev 0.92.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Instead of trying to resubmit I/Os in the I/O completion path (in interrupt context), wake up a kthread which will resubmit I/O from user context. This allows mke2fs to run to completion.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Return -EBUSY if the queue is full or -ENOMEM if we failed to allocate memory (or map a scatterlist). Also use GFP_ATOMIC to allocate the nvme_bio and move the locking to the callers of nvme_submit_bio_queue(). In nvme_make_request(), don't permit an I/O to jump the queue -- if the congestion list already has an entry, just add to the tail, rather than trying to submit.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
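The shape of the resulting error contract, as a condensed sketch (queue_is_full(), alloc_nbio(), free_nbio() and the nvme_bio fields are placeholders, not driver code):

```c
/* Sketch: caller holds the queue lock.  -EBUSY means "queue full,
 * requeue on the congestion list"; -ENOMEM means an allocation or
 * DMA mapping failed.
 */
static int nvme_submit_bio_queue(struct nvme_queue *nvmeq,
				 struct nvme_ns *ns, struct bio *bio)
{
	struct nvme_bio *nbio;

	if (queue_is_full(nvmeq))
		return -EBUSY;

	nbio = alloc_nbio(GFP_ATOMIC);		/* safe in atomic context */
	if (!nbio)
		return -ENOMEM;

	if (!dma_map_sg(nvmeq->q_dmadev, nbio->sg, nbio->nents,
			bio_data_dir(bio) ? DMA_TO_DEVICE : DMA_FROM_DEVICE)) {
		free_nbio(nbio);
		return -ENOMEM;			/* mapping failure fails the I/O */
	}

	/* ... build the command and ring the doorbell ... */
	return 0;
}
```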
-
Committed by Matthew Wilcox
Add two reserved registers in the middle of the BAR to match the 1.0 spec plus ECN 0002. Also rename IMC and ISC to INTMC and INTSC to conform with the spec. We still don't need to use them :-)
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
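For orientation, the 1.0 register map being matched looks roughly like this (reconstructed from the public spec; treat it as a sketch rather than the driver's actual struct):

```c
/* Sketch of the NVMe 1.0 controller registers, offsets per the spec. */
struct nvme_bar_sketch {
	__u64	cap;	/* 0x00: Controller Capabilities */
	__u32	vs;	/* 0x08: Version */
	__u32	intms;	/* 0x0c: Interrupt Mask Set */
	__u32	intmc;	/* 0x10: Interrupt Mask Clear */
	__u32	cc;	/* 0x14: Controller Configuration */
	__u32	rsvd1;	/* 0x18: Reserved */
	__u32	csts;	/* 0x1c: Controller Status */
	__u32	rsvd2;	/* 0x20: Reserved */
	__u32	aqa;	/* 0x24: Admin Queue Attributes */
	__u64	asq;	/* 0x28: Admin Submission Queue Base Address */
	__u64	acq;	/* 0x30: Admin Completion Queue Base Address */
};
```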
-
Committed by Matthew Wilcox
In order to not overrun the sg array, we have to merge physically contiguous pages into a single sg entry.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
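The merge test is the physical-address analogue of the virtual-contiguity check above (helper name hypothetical):

```c
/* Sketch: if the previous segment ends at exactly the physical
 * address where the next begins, extend the current sg entry's
 * length instead of consuming another slot in the fixed-size
 * sg array.
 */
static int nvme_bvec_phys_mergeable(struct bio_vec *prev, struct bio_vec *cur)
{
	phys_addr_t prev_end = page_to_phys(prev->bv_page) +
			       prev->bv_offset + prev->bv_len;
	phys_addr_t cur_start = page_to_phys(cur->bv_page) + cur->bv_offset;

	return prev_end == cur_start;
}
```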
-
Committed by Matthew Wilcox
If dma_map_sg returns 0 (failure), we need to fail the I/O.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
We were passing the nvme_queue to access the q_dmadev for the dma_alloc_coherent calls, but since we moved to the dma pool API, we really only need the nvme_dev.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Add a second memory pool for smaller I/Os. We can pack 16 of these on a single page instead of using an entire page for each one.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
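A sketch of the two-pool setup (assuming 4K pages and 8-byte PRP entries; the function shape and pool names are illustrative):

```c
/* Sketch: a full-page pool for big PRP lists plus a 256-byte pool
 * that packs 16 small lists into each 4K page.
 */
static int nvme_setup_prp_pools(struct nvme_dev *dev, struct device *dmadev)
{
	dev->prp_page_pool = dma_pool_create("prp list page", dmadev,
					     PAGE_SIZE, PAGE_SIZE, 0);
	dev->prp_small_pool = dma_pool_create("prp list 256", dmadev,
					      256, 256, 0);
	if (!dev->prp_page_pool || !dev->prp_small_pool)
		return -ENOMEM;	/* sketch omits cleanup of a partial success */
	return 0;
}
```

At allocation time the pool would then be picked by list size, e.g. `pool = nprps <= (256 / 8) ? dev->prp_small_pool : dev->prp_page_pool;`.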
-
Committed by Matthew Wilcox
Calling dma_free_coherent from interrupt context causes warnings. Using the DMA pools delays freeing until pool destruction, so avoids the problem.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
There are too many things called 'info' in this driver. This data structure is auxiliary information for a struct bio, so call it nvme_bio, or nbio when used as a variable.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Shane Michael Matthews
Add a pointer to the nvme_req_info to hold a new data structure (nvme_prps) which contains a list of the pages allocated to this particular request for holding PRP list entries. nvme_setup_prps() now returns this pointer. To allocate and free the memory used for PRP lists, we need a struct device, so we need to pass the nvme_queue pointer to many functions which previously didn't need it.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
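A sketch of what such a bookkeeping structure can look like (field names are guesses based on the description):

```c
/* Sketch: everything needed to return a request's PRP-list pages
 * to their pool at completion time.
 */
struct nvme_prps {
	int npages;		/* how many PRP-list pages were allocated */
	dma_addr_t first_dma;	/* bus address of the first list page */
	__le64 *list[0];	/* CPU address of each list page */
};
```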
-
Committed by Matthew Wilcox
For multipage BIOs, we were always using sg[0] instead of advancing through the list. Oops :-)
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
If POISON_POINTER_DELTA isn't defined, ensure the special context values are in page 0, which should never be mapped.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
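The idea, sketched with illustrative constants: build the sentinels on top of the kernel's pointer-poison base so that dereferencing one faults, and fall back to page 0 (never mapped) when no poison offset is configured:

```c
#include <linux/poison.h>	/* POISON_POINTER_DELTA, 0 if unconfigured */

/* Sketch: sentinel "context" values that can never collide with a
 * real pointer.  Offsets are illustrative, not the driver's.
 */
#define CMD_CTX_BASE		(POISON_POINTER_DELTA)
#define CMD_CTX_CANCELLED	(0x30C + CMD_CTX_BASE)
#define CMD_CTX_COMPLETED	(0x310 + CMD_CTX_BASE)
#define CMD_CTX_INVALID		(0x314 + CMD_CTX_BASE)
#define CMD_CTX_FLUSH		(0x318 + CMD_CTX_BASE)
```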
-
Committed by Matthew Wilcox
In the bio completion handler, check for bios on the congestion list for this NVM queue. Also, lock the congestion list in the make_request function as the queue may end up being shared between multiple CPUs.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
In addition to recording the completion data for each command, record the anticipated completion time. Choose a timeout of 5 seconds for normal I/Os and 60 seconds for admin I/Os.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
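Sketched, the per-command bookkeeping gains a deadline in jiffies (all names illustrative):

```c
#define NVME_IO_TIMEOUT		(5 * HZ)	/* normal I/O */
#define NVME_ADMIN_TIMEOUT	(60 * HZ)	/* admin commands */

/* Sketch: completion context plus the time by which the command
 * should have completed.
 */
struct nvme_cmd_info {
	unsigned long ctx;
	unsigned long timeout;
};

/* At submission time, stamp the anticipated completion deadline. */
static void nvme_set_deadline(struct nvme_cmd_info *info, int cmdid, bool admin)
{
	info[cmdid].timeout = jiffies +
		(admin ? NVME_ADMIN_TIMEOUT : NVME_IO_TIMEOUT);
}
```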
-
Committed by Matthew Wilcox
If we're sharing a queue between multiple CPUs and we cancel a sync I/O, we must have the queue locked to avoid corrupting the stack of the thread that submitted the I/O. It turns out this is the same locking that's needed for the threaded irq handler, so share that code.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
If the adapter completes a command ID that is outside the bounds of the array, return CMD_CTX_INVALID instead of random data, and print a message in the sync_completion handler (which is rapidly becoming the misc completion handler :-)
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Set the context value to CMD_CTX_COMPLETED, and print a message in the sync_completion handler if we see it.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
I have plans for other special values in sync_completion. Plus, this is more self-documenting, and lets us detect bogus usages.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
We're currently calling bio_endio from hard interrupt context. This is not a good idea for preemptible kernels as it will cause longer latencies. Using a threaded interrupt will run the entire queue processing mechanism (including bio_endio) in a thread, which can be preempted. Unfortunately, it also adds about 7us of latency to the single-I/O case, so make it a module parameter for the moment.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
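The opt-in mechanism, sketched (parameter and handler names are assumptions, with nvme_irq_check() and nvme_irq() presumed to exist; the primary handler only checks for work and returns IRQ_WAKE_THREAD, and the irq thread does the actual queue processing):

```c
static int use_threaded_interrupts;
module_param(use_threaded_interrupts, int, 0);

/* Sketch: either a classic hard-irq handler, or a lightweight check
 * handler that defers all queue processing (including bio_endio) to
 * a preemptible irq thread.
 */
static int nvme_request_irq(struct nvme_queue *nvmeq, int vector,
			    const char *name)
{
	if (use_threaded_interrupts)
		return request_threaded_irq(vector, nvme_irq_check, nvme_irq,
					    IRQF_SHARED, name, nvmeq);
	return request_irq(vector, nvme_irq, IRQF_SHARED, name, nvmeq);
}
```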
-
Committed by Matthew Wilcox
We can't have preemption disabled when we call schedule(). Accept the possibility that we'll get preempted, and it'll cost us some cacheline bounces.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
If the user sends a fatal signal, sleeping in the TASK_KILLABLE state permits the task to be aborted. The only wrinkle is making sure that if/when the command completes later that it doesn't upset anything. Handle this by setting the data pointer to 0, and checking the value isn't NULL in the sync completion path. Eventually, bios can be cancelled through this path too. Note that the cmdid isn't freed to prevent reuse. We should also abort the command in the future, but this is a good start.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
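Condensed, the wait path becomes something like the fragment below (helper names assumed; the real function wires the wakeup through the sync-completion context):

```c
/* Sketch: sleep killably until the completion handler wakes us.
 * On a fatal signal, clear the per-cmdid context so that a late
 * completion finds nothing to wake, and deliberately do not free
 * the cmdid so it cannot be reused.
 */
set_current_state(TASK_KILLABLE);
nvme_submit_cmd(nvmeq, &cmd);		/* assumed helper */
schedule();				/* completion handler wakes us */

if (fatal_signal_pending(current)) {
	cancel_cmdid_data(nvmeq, cmdid);	/* assumed helper: ctx = 0 */
	return -EINTR;				/* cmdid stays allocated */
}
```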
-
Committed by Matthew Wilcox
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Committed by Matthew Wilcox
Because I wasn't setting driverfs_dev, the devices were showing up under /sys/devices/virtual/block. Now they appear underneath the PCI device which they belong to.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
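The fix is a one-line parenting of the gendisk before add_disk() (variable names assumed; driverfs_dev was the pre-device_add_disk() way of setting a disk's parent):

```c
/* Sketch: parent the disk to its PCI function so it shows up under
 * the PCI device in sysfs rather than /sys/devices/virtual/block.
 */
disk->driverfs_dev = &pdev->dev;
add_disk(disk);
```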
-