1. 26 Nov, 2015 (28 commits)
  2. 25 Nov, 2015 (2 commits)
    • nvme: add missing unmaps in nvme_queue_rq · bf508e91
      Authored by Christoph Hellwig
      When we fail various metadata-related operations in nvme_queue_rq, we
      need to unmap the data SGL (a sketch of the error-path pattern follows
      this entry).
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      bf508e91
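      A minimal sketch of the error-path pattern described above, not the
      actual nvme_queue_rq() code; the function name, the setup_metadata()
      helper and the argument layout are illustrative only:

      #include <linux/blkdev.h>
      #include <linux/dma-mapping.h>
      #include <linux/scatterlist.h>

      static int setup_metadata(struct request *req);  /* hypothetical metadata step */

      /*
       * Once dma_map_sg() has succeeded, every later failure must undo the
       * mapping with dma_unmap_sg() on the same SGL before the request is
       * failed, otherwise the DMA mapping (and any IOMMU entries) leak.
       */
      static int queue_rq_sketch(struct device *dmadev, struct request *req,
                                 struct scatterlist *sg, int nents)
      {
              enum dma_data_direction dir = rq_data_dir(req) ?
                              DMA_TO_DEVICE : DMA_FROM_DEVICE;

              if (!dma_map_sg(dmadev, sg, nents, dir))
                      return -ENOMEM;         /* nothing mapped yet, nothing to undo */

              if (setup_metadata(req) < 0) {
                      /* this is the unmap the fix adds on the failure paths */
                      dma_unmap_sg(dmadev, sg, nents, dir);
                      return -EIO;
              }

              return 0;
      }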
    • NVMe: default to 4k device page size · c5c9f25b
      Authored by Nishanth Aravamudan
      We recently received a bug report for the case where DDW (64-bit direct
      DMA on Power) is not enabled for NVMe devices. In that case, we fall
      back to 32-bit DMA via the IOMMU, which is always done via 4K TCEs
      (Translation Control Entries).
      
      The NVMe device driver, though, assumes that the DMA alignment for the
      PRP entries will match the device's page size, and that the DMA
      alignment matches the kernel's page alignment. On Power, the IOMMU page
      size, as mentioned above, can be 4K, while the device has a page size
      of 8K and the kernel a page size of 64K. This eventually trips the
      BUG_ON in nvme_setup_prps(), as we end up with a 'dma_len' that is a
      multiple of 4K but not of 8K (e.g., 0xF000).
      
      In this particular combination of page sizes, we clearly want to use
      the IOMMU's page size in the driver. And generally, the NVMe driver in
      this function should be using the IOMMU's page size for the default
      device page size, rather than the kernel's page size. There is
      currently no API to obtain the IOMMU's page size across all
      architectures, so as a stop-gap fix for this functional issue, default
      the NVMe device page size to 4K, with the intent of adding such an API
      and implementation across all architectures in the next merge window
      (a sketch of the approach follows this entry).
      
      With the functionally equivalent v3 of this patch, our hardware test
      exerciser survives when using 32-bit DMA; without the patch, the kernel
      will BUG within a few minutes.
      
      Signed-off-by: Nishanth Aravamudan <nacc at linux.vnet.ibm.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      c5c9f25b
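      A minimal sketch of the stop-gap described above, not the exact driver
      change; the function name is illustrative, and NVME_CAP_MPSMIN() is the
      macro the driver uses to read the controller's minimum page size from
      the CAP register:

      #include <linux/nvme.h>          /* NVME_CAP_MPSMIN() */

      /*
       * Default the device page shift to 12 (4K) instead of the kernel's
       * PAGE_SHIFT (64K on the Power configuration above), so PRP entries
       * stay aligned to what a 4K-TCE IOMMU actually maps; only raise it if
       * the controller's CAP.MPSMIN requires a larger minimum page.
       */
      static unsigned int choose_page_shift(u64 cap)
      {
              unsigned int dev_page_min = NVME_CAP_MPSMIN(cap) + 12;  /* MPSMIN is in units of 2^(12+n) */
              unsigned int page_shift = 12;                           /* 4K default, not PAGE_SHIFT */

              if (page_shift < dev_page_min)
                      page_shift = dev_page_min;

              return page_shift;
      }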
  3. 24 Nov, 2015 (1 commit)
    • dm thin: fix regression in advertised discard limits · 0fcb04d5
      Authored by Mike Snitzer
      When establishing a thin device's discard limits we cannot rely on the
      underlying thin-pool device's discard capabilities (which are inherited
      from the thin-pool's underlying data device) given that DM thin devices
      must provide discard support even when the thin-pool's underlying data
      device doesn't support discards.
      
      Users were exposed to this thin device discard limits regression if
      their thin-pool's underlying data device does _not_ support discards.
      The regression left all upper layers that call the
      blkdev_issue_discard() interface unable to issue discards to thin
      devices (because discard_granularity was 0). It wasn't caught earlier
      because the device-mapper-test-suite's extensive 'thin-provisioning'
      discard tests are only ever performed against thin-pools whose data
      devices support discards.
      
      The fix is to have thin_io_hints() test the pool's 'discard_enabled'
      feature rather than inferring whether or not a thin device's discard
      support should be enabled by looking at the thin-pool's
      discard_granularity (a sketch of the corrected logic follows this
      entry).
      
      Fixes: 21607670 ("dm thin: disable discard support for thin devices if pool's is disabled")
      Reported-by: Mike Gerber <mike@sprachgewalt.de>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org # 4.1+
      0fcb04d5
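      A sketch of the corrected thin_io_hints() logic; the structure and
      field names mirror dm-thin internals of that era and should be read as
      illustrative rather than a verbatim copy of the patch:

      /*
       * Gate the advertised discard limits on the pool feature
       * 'discard_enabled', not on whether the pool's underlying data device
       * happens to report a non-zero discard_granularity.
       */
      static void thin_io_hints_sketch(struct dm_target *ti,
                                       struct queue_limits *limits)
      {
              struct thin_c *tc = ti->private;
              struct pool *pool = tc->pool;

              if (!pool->pf.discard_enabled)  /* pool feature, not data-device capability */
                      return;

              /* advertise limits derived from the pool's block size */
              limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT;
              limits->max_discard_sectors = 2048 * 1024 * 16;  /* 16 GiB worth of 512-byte sectors */
      }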
  4. 21 Nov, 2015 (9 commits)