1. 08 Oct 2016 (6 commits)
  2. 06 Oct 2016 (2 commits)
  3. 01 Oct 2016 (5 commits)
    • libnvdimm, label: convert label tracking to a linked list · ae8219f1
      Committed by Dan Williams
      In preparation for enabling multiple namespaces per pmem region, convert
      the label tracking to use a linked list.  In particular this will allow
      select_pmem_id() to move labels from the unvalidated state to the
      validated state.  Currently we only track one validated set per region;
      a minimal sketch of the list-based tracking follows this entry.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      ae8219f1
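
      A minimal sketch of the list-based tracking, assuming the kernel's
      standard list primitives; the wrapper struct and field names are
      illustrative, not necessarily the ones the commit uses:

        #include <linux/list.h>

        /* one tracked label; entries live on per-mapping lists */
        struct label_ent {
                struct list_head list;              /* list linkage */
                struct nd_namespace_label *label;   /* on-media label */
        };

        /* move a label from the unvalidated list onto the validated one,
         * e.g. from select_pmem_id() */
        static void validate_label(struct label_ent *ent,
                                   struct list_head *validated)
        {
                list_move_tail(&ent->list, validated);
        }
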
    • libnvdimm, region: move region-mapping input-parameters to nd_mapping_desc · 44c462eb
      Committed by Dan Williams
      Before we add more libnvdimm-private fields to nd_mapping, make it clear
      which parameters are input vs libnvdimm internals.  Use struct
      nd_mapping_desc instead of struct nd_mapping in nd_region_desc, and make
      struct nd_mapping private to libnvdimm (a sketch of the split follows
      this entry).
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      44c462eb
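
      A minimal sketch of the public/private split; the three descriptor
      fields are the obvious inputs, anything beyond them is illustrative:

        #include <linux/list.h>
        #include <linux/types.h>

        /* public: what a bus provider fills in per mapping */
        struct nd_mapping_desc {
                struct nvdimm *nvdimm;  /* which DIMM backs this mapping */
                u64 start;              /* start address of the mapping */
                u64 size;               /* mapping size in bytes */
        };

        /* private to libnvdimm: seeded from the descriptor, free to grow
         * internal state without changing the public interface */
        struct nd_mapping {
                struct nvdimm *nvdimm;
                u64 start;
                u64 size;
                struct list_head labels;    /* e.g. internal label tracking */
        };
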
    • libnvdimm: clear the internal poison_list when clearing badblocks · e046114a
      Committed by Vishal Verma
      nvdimm_clear_poison cleared the user-visible badblocks, and sent
      commands to the NVDIMM to clear the areas marked as 'poison', but it
      neglected to clear the same areas from the internal poison_list, which is
      used to marshal ARS results before sorting them by namespace.  As a
      result, once on-demand ARS functionality was added:
      
      37b137ff nfit, libnvdimm: allow an ARS scrub to be triggered on demand
      
      A scrub triggered from either sysfs or an MCE was found to be adding
      stale entries that had been cleared from gendisk->badblocks but were
      still present in nvdimm_bus->poison_list.  Additionally, the stale
      entries could reappear in disk->badblocks simply by disabling and
      re-enabling the namespace or region.
      
      This adds the missing step of clearing poison_list entries when clearing
      poison, so that it is always in sync with badblocks; the shape of that
      step is sketched after this entry.
      
      Fixes: 37b137ff ("nfit, libnvdimm: allow an ARS scrub to be triggered on demand")
      Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      e046114a
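
      A minimal sketch of the missing step, assuming the poison list holds
      (start, length) entries; the entry layout and function name are
      illustrative:

        #include <linux/list.h>
        #include <linux/slab.h>
        #include <linux/types.h>

        struct poison_entry {
                u64 start;              /* start of the poisoned range */
                u64 length;             /* range length in bytes */
                struct list_head list;
        };

        /* after the NVDIMM clears [start, start + len), forget covered
         * entries so poison_list stays in sync with badblocks */
        static void forget_poison(struct list_head *poison_list,
                                  u64 start, u64 len)
        {
                struct poison_entry *pl, *next;

                list_for_each_entry_safe(pl, next, poison_list, list)
                        if (pl->start >= start &&
                            pl->start + pl->length <= start + len) {
                                list_del(&pl->list);
                                kfree(pl);
                        }
                /* a partially covered entry would need trimming instead */
        }
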
    • pmem: reduce kmap_atomic sections to the memcpys only · bd697a80
      Committed by Vishal Verma
      pmem_do_bvec used to kmap_atomic at the beginning and only unmap at the
      end.  Things like nvdimm_clear_poison may want to do nvdimm subsystem
      bookkeeping operations that may involve taking locks or doing memory
      allocations, and we can't do that from atomic context.  Reduce the
      atomic section to just what needs it: the memcpy to/from pmem (see the
      sketch after this entry).
      
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      bd697a80
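
      A minimal sketch of the narrowed atomic section; kmap_atomic() and
      kunmap_atomic() are the real kernel API, the helper name and argument
      list are illustrative:

        #include <linux/highmem.h>
        #include <linux/string.h>

        /* only the copy itself runs with the page atomically mapped */
        static void write_pmem(void *pmem_addr, struct page *page,
                               unsigned int off, unsigned int len)
        {
                void *mem = kmap_atomic(page);

                memcpy(pmem_addr, mem + off, len);
                kunmap_atomic(mem);
                /* callers may now sleep, take locks, or allocate, e.g. for
                 * nvdimm_clear_poison() bookkeeping */
        }
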
    • nfit: don't start a full scrub by default for an MCE · 9ffd6350
      Committed by Vishal Verma
      Starting a full Address Range Scrub (ARS) on hitting a memory error
      machine check exception may not always be desirable. Provide a way
      through sysfs to toggle the behavior between just adding the address
      (cache line) where the MCE happened to the poison list and doing a full
      scrub. The former (selective insertion of the address) is done
      unconditionally, as sketched after this entry.
      
      Cc: linux-acpi@vger.kernel.org
      Cc: Linda Knippers <linda.knippers@hpe.com>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      9ffd6350
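
      A minimal sketch of the gated MCE path; scrub_on_mce, add_poison() and
      start_ars_scrub() stand in for the sysfs-backed flag, the poison-list
      insertion, and the scrub kick-off, and are illustrative names:

        #include <linux/types.h>

        static bool scrub_on_mce;       /* toggled through sysfs */

        static void on_nvdimm_mce(struct nvdimm_bus *bus, u64 address)
        {
                /* unconditional: record just the failed 64-byte cache line */
                add_poison(bus, address & ~63ULL, 64);

                /* optional: kick off a full Address Range Scrub */
                if (scrub_on_mce)
                        start_ars_scrub(bus);
        }
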
  4. 22 Sep 2016 (2 commits)
  5. 02 Sep 2016 (3 commits)
  6. 30 Aug 2016 (1 commit)
  7. 23 Aug 2016 (2 commits)
  8. 11 Aug 2016 (3 commits)
    • nvme: Suspend all queues before deletion · c21377f8
      Committed by Gabriel Krisman Bertazi
      When nvme_delete_queue fails in the first pass of the
      nvme_disable_io_queues() loop, we return early, failing to suspend all
      of the IO queues.  Later, on the nvme_pci_disable path, this causes us
      to disable MSI without actually having freed all the IRQs, which
      triggers the BUG_ON in free_msi_irqs(), as shown below.
      
      This patch refactors nvme_disable_io_queues to suspend all queues before
      it starts submitting delete queue commands.  This way, we ensure that we
      have returned every IRQ before continuing with the removal path (a
      sketch of the refactor follows this entry).
      
      [  487.529200] kernel BUG at ../drivers/pci/msi.c:368!
      cpu 0x46: Vector: 700 (Program Check) at [c0000078c5b83650]
          pc: c000000000627a50: free_msi_irqs+0x90/0x200
          lr: c000000000627a40: free_msi_irqs+0x80/0x200
          sp: c0000078c5b838d0
         msr: 9000000100029033
        current = 0xc0000078c5b40000
        paca    = 0xc000000002bd7600   softe: 0        irq_happened: 0x01
          pid   = 1376, comm = kworker/70:1H
      kernel BUG at ../drivers/pci/msi.c:368!
      Linux version 4.7.0.mainline+ (root@iod76) (gcc version 5.3.1 20160413
      (Ubuntu/IBM 5.3.1-14ubuntu2.1) ) #104 SMP Fri Jul 29 09:20:17 CDT 2016
      enter ? for help
      [c0000078c5b83920] d0000000363b0cd8 nvme_dev_disable+0x208/0x4f0 [nvme]
      [c0000078c5b83a10] d0000000363b12a4 nvme_timeout+0xe4/0x250 [nvme]
      [c0000078c5b83ad0] c0000000005690e4 blk_mq_rq_timed_out+0x64/0x110
      [c0000078c5b83b40] c00000000056c930 bt_for_each+0x160/0x170
      [c0000078c5b83bb0] c00000000056d928 blk_mq_queue_tag_busy_iter+0x78/0x110
      [c0000078c5b83c00] c0000000005675d8 blk_mq_timeout_work+0xd8/0x1b0
      [c0000078c5b83c50] c0000000000e8cf0 process_one_work+0x1e0/0x590
      [c0000078c5b83ce0] c0000000000e9148 worker_thread+0xa8/0x660
      [c0000078c5b83d80] c0000000000f2090 kthread+0x110/0x130
      [c0000078c5b83e30] c0000000000095f0 ret_from_kernel_thread+0x5c/0x6c
      Signed-off-by: Gabriel Krisman Bertazi <krisman@linux.vnet.ibm.com>
      Cc: Brian King <brking@linux.vnet.ibm.com>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: linux-nvme@lists.infradead.org
      Signed-off-by: Jens Axboe <axboe@fb.com>
      c21377f8
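
      A minimal sketch of the reordered loop; the helper names follow the
      commit text, but the exact structure layout is illustrative:

        static void nvme_disable_io_queues(struct nvme_dev *dev)
        {
                int i, queues = dev->queue_count - 1;

                /* pass 1: suspend every IO queue, returning its IRQ */
                for (i = queues; i > 0; i--)
                        nvme_suspend_queue(dev->queues[i]);

                /* pass 2: a failed delete can now return early without
                 * leaving live IRQs for free_msi_irqs() to BUG_ON over */
                for (i = queues; i > 0; i--)
                        if (nvme_delete_queue(dev->queues[i]) < 0)
                                return;
        }
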
    • efi/capsule: Allocate whole capsule into virtual memory · 6862e6ad
      Committed by Austin Christ
      According to UEFI 2.6 section 7.5.3, the capsule should be in contiguous
      virtual memory and firmware may consume the capsule immediately. To
      correctly implement this functionality, the kernel driver needs to vmap
      the entire capsule at the time it is made available to firmware.
      
      The virtual mapping of the capsule update has been changed from kmap,
      which was only mapping the first page of the update, to vmap, which maps
      the entire data payload (a sketch follows this entry).
      Signed-off-by: Austin Christ <austinwc@codeaurora.org>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
      Reviewed-by: Lee, Chun-Yi <jlee@suse.com>
      Cc: <stable@vger.kernel.org> # v4.7
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Bryan O'Donoghue <pure.logic@nexus-software.ie>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Kweh Hock Leong <hock.leong.kweh@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/1470912120-22831-3-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6862e6ad
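
      A minimal sketch of the change; vmap() is the real kernel API, and the
      surrounding variable names are illustrative:

        #include <linux/vmalloc.h>

        /* map every capsule page into one contiguous virtual range, since
         * firmware may consume the whole capsule immediately (UEFI 2.6,
         * section 7.5.3); kmap() covered only the first page */
        cap_hdr = vmap(cap_info->pages, cap_info->index, VM_MAP, PAGE_KERNEL);
        if (!cap_hdr)
                return -ENOMEM;

        /* ... hand cap_hdr to the firmware capsule-update service ... */

        vunmap(cap_hdr);
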
    • rapidio: dereferencing an error pointer · 73984137
      Committed by Dan Carpenter
      Original patch: https://lkml.org/lkml/2016/8/4/32
      
      If riocm_ch_alloc() fails then we end up dereferencing the error
      pointer.
      
      The problem is that we're not unwinding in the reverse order from how we
      allocate things, so it gets confusing.  I've changed this around so that
      "ch" is now NULL once we are done with it, after we call
      riocm_put_channel().  That way we can check whether it's NULL and avoid
      calling riocm_put_channel() on it twice.
      
      I renamed err_nodev to err_put_new_ch so that it better reflects what
      the goto does.
      
      Then, because we flipped things around, we no longer need to initialize
      the pointers to NULL, and we can remove an if statement and pull the
      code in one indent level (the resulting shape is sketched after this
      entry).
      
      Link: http://lkml.kernel.org/r/20160805152406.20713-1-alexandre.bounine@idt.com
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Alexandre Bounine <alexandre.bounine@idt.com>
      Cc: Matt Porter <mporter@kernel.crashing.org>
      Cc: Andre van Herk <andre.van.herk@prodrive-technologies.com>
      Cc: Barry Wood <barry.wood@idt.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      73984137
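
      A minimal sketch of the resulting error-handling shape; riocm_ch_alloc()
      and riocm_put_channel() come from the commit text, the setup step is
      illustrative:

        ch = riocm_ch_alloc(ch_num);
        if (IS_ERR(ch))
                return PTR_ERR(ch);     /* nothing allocated, nothing to put */

        ret = illustrative_setup(ch);
        if (ret)
                goto err_put_new_ch;    /* unwind the fresh channel */

        riocm_put_channel(ch);          /* drop our reference ... */
        ch = NULL;                      /* ... and record that we did */

        /* success (ret == 0) also flows through here with ch == NULL */
        err_put_new_ch:
        if (ch)                         /* NULL check prevents a double put */
                riocm_put_channel(ch);
        return ret;
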
  9. 10 Aug 2016 (4 commits)
  10. 09 Aug 2016 (12 commits)