1. 23 Apr 2016: 3 commits
  2. 08 Apr 2016: 1 commit
    • libnvdimm, pfn: fix nvdimm_namespace_add_poison() vs section alignment · a3901802
      Committed by Dan Williams
      When section alignment padding is in effect we need to shift / truncate
      the range that is queried for poison by the 'start_pad' or 'end_trunc'
      reservations.
      
      It's easiest if we just pass in an adjusted resource range rather than
      deriving it from the passed-in namespace.  With the resource range
      resolution pushed out to the caller we can also push the
      namespace-to-region lookup to the caller and drop the implicit pmem-type
      assumption about the passed-in namespace object.
      
      Cc: Vishal Verma <vishal.l.verma@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
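      A minimal user-space sketch of the adjustment a3901802 describes,
      assuming the reservations are simple byte counts; the real code operates
      on the kernel's struct resource:

          #include <stdint.h>
          #include <stdio.h>

          /* stand-in for the kernel's struct resource */
          struct range {
                  uint64_t start;
                  uint64_t end;   /* inclusive, as in struct resource */
          };

          /* shift / truncate the range that is queried for poison */
          static void adjust_for_alignment(struct range *r,
                                           uint64_t start_pad, uint64_t end_trunc)
          {
                  r->start += start_pad;
                  r->end -= end_trunc;
          }

          int main(void)
          {
                  struct range r = { 0x100000000ULL, 0x17fffffffULL };

                  adjust_for_alignment(&r, 2 << 20, 1 << 20); /* example pads */
                  printf("query poison over [%#llx-%#llx]\n",
                         (unsigned long long)r.start, (unsigned long long)r.end);
                  return 0;
          }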
  3. 10 Mar 2016: 1 commit
  4. 06 Mar 2016: 1 commit
  5. 10 Jan 2016: 3 commits
  6. 13 Dec 2015: 1 commit
  7. 11 Dec 2015: 1 commit
  8. 29 Aug 2015: 3 commits
    • libnvdimm, pmem: direct map legacy pmem by default · 004f1afb
      Committed by Dan Williams
      The expectation is that the legacy / non-standard pmem discovery method
      (e820 type-12) will only ever be used to describe small quantities of
      persistent memory.  Larger capacities will be described via the ACPI
      NFIT.  When "allocate struct page from pmem" support is added this default
      policy can be overridden by assigning a legacy pmem namespace to a pfn
      device; however, this would only be necessary if a platform used the
      legacy mechanism to define a very large range.
      
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • libnvdimm, pmem: 'struct page' for pmem · 32ab0a3f
      Committed by Dan Williams
      Enable the pmem driver to handle PFN device instances.  Attaching a pmem
      namespace to a pfn device triggers the driver to allocate and initialize
      struct page entries for pmem.  Memory capacity for this allocation comes
      exclusively from RAM for now, which is suitable for low PMEM-to-RAM
      ratios.  This mechanism will be expanded later to support setting an
      "allocate from PMEM" policy.
      
      Cc: Boaz Harrosh <boaz@plexistor.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
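      To see why a RAM-backed map suits low PMEM-to-RAM ratios, here is the
      arithmetic as a small program; the 64-byte struct page and 4 KiB page
      size are typical x86_64 values, assumed for illustration:

          #include <stdint.h>
          #include <stdio.h>

          int main(void)
          {
                  const uint64_t page_size = 4096;          /* 4 KiB pages */
                  const uint64_t sizeof_page = 64;          /* typical struct page */
                  const uint64_t pmem_bytes = 512ULL << 30; /* 512 GiB of pmem */

                  uint64_t npages = pmem_bytes / page_size;
                  uint64_t map_bytes = npages * sizeof_page;

                  /* prints: 8192 MiB of RAM (1.56% of pmem) */
                  printf("struct page array: %llu MiB of RAM (%.2f%% of pmem)\n",
                         (unsigned long long)(map_bytes >> 20),
                         100.0 * (double)map_bytes / (double)pmem_bytes);
                  return 0;
          }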
    • libnvdimm, pfn: 'struct page' provider infrastructure · e1455744
      Committed by Dan Williams
      Implement the base infrastructure for libnvdimm PFN devices. Similar to
      BTT devices they take a namespace as a backing device and layer
      functionality on top. In this case the functionality is reserving space
      for an array of 'struct page' entries to be handed out through
      pfn_to_page(). For now this is just the basic libnvdimm-device-model for
      configuring the base PFN device.
      
      As the namespace claiming mechanism for PFN devices is mostly identical
      to that of BTT devices, drivers/nvdimm/claim.c is created to house the
      common bits.
      
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  9. 15 Aug 2015: 1 commit
    • libnvdimm, btt: write and validate parent_uuid · 6ec68954
      Committed by Vishal Verma
      When a BTT is instantiated on a namespace it must validate that the
      namespace UUID matches the 'parent_uuid' stored in the BTT superblock.  This
      property enforces that changing the namespace UUID invalidates all
      former BTT instances on that storage. For "IO namespaces" that don't
      have a label or UUID, the parent_uuid is set to zero, and this
      validation is skipped. For such cases, old BTTs have to be invalidated
      by forcing the namespace to raw mode, and overwriting the BTT info
      blocks.
      
      Based on a patch by Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
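      A sketch of the validation rule, assuming 16-byte raw UUIDs; the
      zero-filled parent_uuid case models the label-less "IO namespaces"
      mentioned above:

          #include <stdbool.h>
          #include <stdint.h>
          #include <string.h>

          #define UUID_LEN 16

          static bool parent_uuid_ok(const uint8_t *ns_uuid,
                                     const uint8_t *parent_uuid)
          {
                  static const uint8_t null_uuid[UUID_LEN];

                  /* no label / UUID: validation is skipped */
                  if (memcmp(parent_uuid, null_uuid, UUID_LEN) == 0)
                          return true;
                  /* otherwise the superblock must name this namespace */
                  return memcmp(parent_uuid, ns_uuid, UUID_LEN) == 0;
          }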
  10. 01 Aug 2015: 1 commit
  11. 26 Jun 2015: 7 commits
    • libnvdimm: Set numa_node to NVDIMM devices · 41d7a6d6
      Committed by Toshi Kani
      The ACPI NFIT table has System Physical Address Range Structure entries
      that describe a proximity ID for each range when ACPI_NFIT_PROXIMITY_VALID
      is set in the flags.
      
      Change acpi_nfit_register_region() to map a proximity ID to its node ID,
      and store it in a new numa_node field of nd_region_desc, which is then
      conveyed to the nd_region device.
      
      The device core arranges for btt and namespace devices to inherit their
      node from their parent region.
      Signed-off-by: Toshi Kani <toshi.kani@hp.com>
      [djbw: move set_dev_node() from region.c to bus.c]
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
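      In kernel terms the mapping could look like the sketch below;
      acpi_map_pxm_to_node() is the existing ACPI pxm-to-node helper, while
      the struct field names are paraphrased from the description:

          /* kernel-style illustration, not a standalone program */
          #include <linux/acpi.h>
          #include <linux/libnvdimm.h>

          static void set_region_node(struct nd_region_desc *ndr_desc,
                                      struct acpi_nfit_system_address *spa)
          {
                  ndr_desc->numa_node = NUMA_NO_NODE;
                  if (spa->flags & ACPI_NFIT_PROXIMITY_VALID)
                          ndr_desc->numa_node =
                                  acpi_map_pxm_to_node(spa->proximity_domain);
          }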
    • libnvdimm, nfit: handle unarmed dimms, mark namespaces read-only · 58138820
      Committed by Dan Williams
      Upon detection of an unarmed dimm in a region, arrange for descendant
      BTT, PMEM, or BLK instances to be read-only.  A dimm is primarily marked
      "unarmed" via flags passed by platform firmware (NFIT).
      
      The flags in the NFIT memory device sub-structure indicate the state of
      the data on the nvdimm relative to its energy source or last "flush to
      persistence".  For the most part there is nothing the driver can do but
      advertise the state of these flags in sysfs and emit a message if
      firmware indicates that the contents of the device may be corrupted.
      However, for the case of ACPI_NFIT_MEM_ARMED, the driver can arrange for
      the block devices incorporating that nvdimm to be marked read-only.
      This is a safe default as the data is still available and new writes are
      held off until the administrator either forces read-write mode, or the
      energy source becomes armed.
      
      A 'read_only' attribute is added to REGION devices to allow for
      overriding the default read-only policy of all descendant block devices.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
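      The enforcement itself can be as small as the sketch below;
      set_disk_ro() is the stock block-layer call, and region_readonly is a
      hypothetical stand-in for the "any dimm in the region unarmed" check:

          /* kernel-style illustration */
          #include <linux/genhd.h>

          static void apply_ro_policy(struct gendisk *disk, bool region_readonly)
          {
                  /* data stays readable; writes wait for the admin override */
                  if (region_readonly)
                          set_disk_ro(disk, 1);
          }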
    • libnvdimm: enable iostat · f0dc089c
      Committed by Dan Williams
      This is disabled by default as the overhead is prohibitive, but if the
      user takes the action to turn it on we'll oblige.
      Reviewed-by: Vishal Verma <vishal.l.verma@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • libnvdimm, blk: add support for blk integrity · fcae6957
      Committed by Vishal Verma
      Support multiple block sizes (sector + metadata) for nd_blk in the
      same way as done for the BTT. Add the idea of an 'internal' lbasize,
      which is properly aligned and padded, and store metadata in this space.
      Signed-off-by: Vishal Verma <vishal.l.verma@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
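      A runnable sketch of the 'internal' lbasize idea; the 64-byte alignment
      is an assumed constant for illustration:

          #include <stdint.h>
          #include <stdio.h>

          #define INT_LBASIZE_ALIGNMENT 64   /* assumed alignment */

          static uint32_t internal_lbasize(uint32_t sector_size, uint32_t meta_size)
          {
                  uint32_t lbasize = sector_size + meta_size;

                  /* pad each internal block out to the alignment */
                  return (lbasize + INT_LBASIZE_ALIGNMENT - 1) &
                          ~(uint32_t)(INT_LBASIZE_ALIGNMENT - 1);
          }

          int main(void)
          {
                  /* 512-byte sectors + 8 bytes of metadata -> 520 -> 576 */
                  printf("%u\n", internal_lbasize(512, 8));
                  return 0;
          }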
    • libnvdimm, btt: add support for blk integrity · 41cd8b70
      Committed by Vishal Verma
      Support multiple block sizes (sector + metadata) using the blk integrity
      framework. This registers a new integrity template that defines the
      protection information tuple size based on the configured metadata size,
      and simply acts as a passthrough for protection information generated by
      another layer. The metadata is written to the storage as-is, and read back
      with each sector.
      Signed-off-by: Vishal Verma <vishal.l.verma@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
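      A sketch of a passthrough template against the blk integrity API of
      that era; the callback names and struct fields are paraphrased, so
      treat this as illustrative rather than the exact driver code:

          /* kernel-style illustration */
          #include <linux/blkdev.h>

          /* metadata is stored and returned as-is, so both hooks are no-ops */
          static int nd_pi_nop(struct blk_integrity_iter *iter)
          {
                  return 0;
          }

          static int register_nop_integrity(struct gendisk *disk,
                                            unsigned short meta_size)
          {
                  struct blk_integrity bi = {
                          .name = "ND-PI-NOP",
                          .generate_fn = nd_pi_nop,
                          .verify_fn = nd_pi_nop,
                          .tuple_size = meta_size,
                          .tag_size = meta_size,
                  };

                  return blk_integrity_register(disk, &bi);
          }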
    • libnvdimm, nfit, nd_blk: driver for BLK-mode access persistent memory · 047fc8a1
      Committed by Ross Zwisler
      The libnvdimm implementation handles allocating dimm address space (DPA)
      between PMEM and BLK mode interfaces.  After DPA has been allocated from
      a BLK-region to a BLK-namespace the nd_blk driver attaches to handle I/O
      as a struct bio based block device. Unlike PMEM, BLK is required to
      handle platform-specific details like mmio register formats and memory
      controller interleave.  For this reason the libnvdimm generic nd_blk
      driver calls back into the bus provider to carry out the I/O.
      
      This initial implementation handles the BLK interface defined by the
      ACPI 6 NFIT [1] and the NVDIMM DSM Interface Example [2], composed from
      the DCR (dimm control region), BDW (block data window), and IDT
      (interleave descriptor) NFIT structures and the hardware register format.
      [1]: http://www.uefi.org/sites/default/files/resources/ACPI_6.0.pdf
      [2]: http://pmem.io/documents/NVDIMM_DSM_Interface_Example.pdf
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Boaz Harrosh <boaz@plexistor.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
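      The provider-callback pattern, sketched with a paraphrased signature
      (the real structure and arguments live in the libnvdimm headers):

          /* kernel-style illustration */
          #include <linux/types.h>

          struct nd_blk_region;   /* opaque to the generic driver */

          /* supplied by the bus provider, which knows the mmio/interleave
           * details the generic nd_blk driver cannot */
          typedef int (*nd_blk_do_io_fn)(struct nd_blk_region *ndbr,
                                         resource_size_t dpa, void *iobuf,
                                         u64 len, int write);

          /* nd_blk resolves a bio into (dpa, len) segments, then defers */
          static int nd_blk_rw_segment(struct nd_blk_region *ndbr,
                                       nd_blk_do_io_fn do_io,
                                       resource_size_t dpa, void *buf,
                                       u64 len, int write)
          {
                  return do_io(ndbr, dpa, buf, len, write);
          }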
    • nd_btt: atomic sector updates · 5212e11f
      Committed by Vishal Verma
      BTT stands for Block Translation Table, and is a way to provide power
      fail sector atomicity semantics for block devices that have the ability
      to perform byte granularity IO. It relies on the capability of libnvdimm
      namespace devices to do byte aligned IO.
      
      The BTT works as a stacked block device, and reserves a chunk of space
      from the backing device for its accounting metadata.  It is a bio-based
      driver because all IO is done synchronously, and there is no queuing or
      asynchronous completion at either the device or the driver level.
      
      The BTT uses 'lanes' to index into various 'on-disk' data structures,
      and lanes also act as a synchronization mechanism when there are more
      CPUs than available lanes.  We compared two lane-lock strategies.  In
      the first, an atomic counter tracked the last lane used, and 'our' lane
      was determined by atomically incrementing it; that way, for the
      nr_cpus > nr_lanes case, theoretically no CPU would be blocked waiting
      for a lane.  The other strategy was to take the cpu number we're
      scheduled on and hash it to a lane number (see the sketch after this
      entry).  Theoretically, this could block an IO that could otherwise
      have run using a different, free lane.  But some fio workloads showed
      that the direct cpu -> lane hash performed faster than tracking the
      'last lane' - my reasoning is that the cache thrash caused by moving
      the atomic variable made that approach slower than simply waiting out
      the in-progress IO.  This supports the conclusion that the driver can
      be a very simple bio-based one that does synchronous IOs instead of
      queuing.
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Boaz Harrosh <boaz@plexistor.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Jeff Moyer <jmoyer@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      [jmoyer: fix nmi watchdog timeout in btt_map_init]
      [jmoyer: move btt initialization to module load path]
      [jmoyer: fix memory leak in the btt initialization path]
      [jmoyer: Don't overwrite corrupted arenas]
      Signed-off-by: Vishal Verma <vishal.l.verma@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
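      The winning lane strategy reduces to a modulo, as in this runnable
      sketch (lock handling elided; a per-lane lock is only needed when
      nr_cpus exceeds nr_lanes):

          #include <stdio.h>

          static unsigned int lane_of(unsigned int cpu, unsigned int nr_lanes)
          {
                  /* direct cpu -> lane hash; no shared atomic to thrash */
                  return cpu % nr_lanes;
          }

          int main(void)
          {
                  unsigned int nr_cpus = 8, nr_lanes = 4;

                  for (unsigned int cpu = 0; cpu < nr_cpus; cpu++)
                          printf("cpu%u -> lane %u%s\n", cpu,
                                 lane_of(cpu, nr_lanes),
                                 nr_cpus > nr_lanes ? " (shared, lock it)" : "");
                  return 0;
          }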
  12. 25 Jun 2015: 9 commits
    • libnvdimm: infrastructure for btt devices · 8c2f7e86
      Committed by Dan Williams
      NVDIMM namespaces, in addition to accepting "struct bio" based requests,
      also have the capability to perform byte-aligned accesses.  By default
      only the bio/block interface is used.  However, if another driver can
      make effective use of the byte-aligned capability it can claim the
      namespace and use the byte-aligned ->rw_bytes() interface.
      
      The BTT driver is the first consumer of this mechanism to allow
      adding atomic sector update semantics to a pmem or blk namespace.  This
      patch is the sysfs infrastructure to allow configuring a BTT instance
      for a namespace.  Enabling that BTT and performing i/o is in a
      subsequent patch.
      
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Neil Brown <neilb@suse.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
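      The claimed-namespace idea in miniature; the structure below is a
      paraphrase of the byte-aligned interface, not the exact kernel
      definition:

          /* self-contained paraphrase of the ->rw_bytes() contract */
          struct nd_namespace_common {
                  int (*rw_bytes)(struct nd_namespace_common *ndns,
                                  unsigned long long offset, void *buf,
                                  unsigned long size, int write);
          };

          /* a claiming driver (e.g. BTT) reads its metadata at byte
           * granularity instead of issuing a struct bio */
          static int claim_read(struct nd_namespace_common *ndns,
                                unsigned long long offset, void *buf,
                                unsigned long size)
          {
                  return ndns->rw_bytes(ndns, offset, buf, size, 0 /* read */);
          }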
    • libnvdimm: write pmem label set · f524bf27
      Committed by Dan Williams
      After 'uuid', 'size', and optionally 'alt_name' have been set to valid
      values the labels on the dimms can be updated.
      
      The write procedure is:
      1/ Allocate and write new labels in the "next" index
      2/ Free the old labels in the working copy
      3/ Write the bitmap and the label space on the dimm
      4/ Write the index to make the update valid
      
      Label ranges directly mirror the dpa resource values for the given
      label_id of the namespace.
      
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Neil Brown <neilb@suse.de>
      Acked-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
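      Step 4/ is what makes the update atomic: the two index blocks carry a
      2-bit sequence number (0 is invalid) and the newer one wins, so flipping
      validity is a single index write.  A runnable sketch of the cyclic
      increment, assuming the sequence scheme from the namespace-label spec:

          #include <stdio.h>

          /* 1 -> 2 -> 3 -> 1; 0 stays invalid */
          static unsigned int inc_seq(unsigned int seq)
          {
                  static const unsigned int next[] = { 0, 2, 3, 1 };

                  return next[seq & 3];
          }

          int main(void)
          {
                  for (unsigned int seq = 1; seq <= 3; seq++)
                          printf("seq %u -> %u\n", seq, inc_seq(seq));
                  return 0;
          }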
    • libnvdimm: blk labels and namespace instantiation · 1b40e09a
      Committed by Dan Williams
      A blk label set describes a namespace composed of one or more
      discontiguous dpa ranges on a single dimm.  These ranges may alias with
      one or more pmem interleave sets that include the given dimm.
      
      This is the runtime/volatile configuration infrastructure for sysfs
      manipulation of 'alt_name', 'uuid', 'size', and 'sector_size'.  A later
      patch will make these settings persistent by writing back the label(s).
      
      Unlike pmem namespaces, multiple blk namespaces can be created per
      region.  Once a blk namespace has been created a new seed device
      (unconfigured child of a parent blk region) is instantiated.  As long as
      a region has 'available_size' != 0 new child namespaces may be created.
      
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Neil Brown <neilb@suse.de>
      Acked-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • libnvdimm: pmem label sets and namespace instantiation. · bf9bccc1
      Committed by Dan Williams
      A complete label set is a PMEM-label per-dimm per-interleave-set where
      all the UUIDs match and the interleave set cookie matches the hosting
      interleave set.
      
      Present sysfs attributes for manipulation of a PMEM-namespace's
      'alt_name', 'uuid', and 'size' attributes.  A later patch will make
      these settings persistent by writing back the label.
      
      Note that PMEM allocations grow forwards from the start of an interleave
      set (lowest dimm-physical-address (DPA)).  BLK-namespaces that alias
      with a PMEM interleave set will grow allocations backward from the
      highest DPA.
      
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Neil Brown <neilb@suse.de>
      Acked-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
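      The growth rule in miniature, with illustrative sizes; PMEM claims DPA
      from the bottom of the interleave set while aliased BLK capacity is
      claimed from the top:

          #include <stdint.h>
          #include <stdio.h>

          int main(void)
          {
                  uint64_t dpa_start = 0, dpa_end = 1ULL << 30; /* 1 GiB DPA */
                  uint64_t pmem = 256ULL << 20;   /* grows forward */
                  uint64_t blk = 128ULL << 20;    /* grows backward */

                  printf("pmem: [%#llx-%#llx)\n",
                         (unsigned long long)dpa_start,
                         (unsigned long long)(dpa_start + pmem));
                  printf("blk:  [%#llx-%#llx)\n",
                         (unsigned long long)(dpa_end - blk),
                         (unsigned long long)dpa_end);
                  return 0;
          }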
    • libnvdimm: namespace indices: read and validate · 4a826c83
      Committed by Dan Williams
      This on-media label format [1] consists of two index blocks followed by
      an array of labels.  None of these structures are ever updated in place.
      A sequence number tracks the current active index and the next one to
      write, while labels are written to free slots.
      
          +------------+
          |            |
          |  nsindex0  |
          |            |
          +------------+
          |            |
          |  nsindex1  |
          |            |
          +------------+
          |   label0   |
          +------------+
          |   label1   |
          +------------+
          |            |
           ....nslot...
          |            |
          +------------+
          |   labelN   |
          +------------+
      
      After reading valid labels, store the dpa ranges they claim into
      per-dimm resource trees.
      
      [1]: http://pmem.io/documents/NVDIMM_Namespace_Spec.pdf
      
      Cc: Neil Brown <neilb@suse.de>
      Acked-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
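      Choosing the live index block from the nsindex0/nsindex1 pair shown in
      the diagram can be sketched as below; sequence numbers are 2-bit and
      cyclic (so 1 beats 3), 0 marks an invalid block, and the equal-sequence
      tie-break is omitted:

          #include <stdio.h>

          /* returns 0 or 1: which nsindex is current */
          static int live_index(unsigned int seq0, unsigned int seq1)
          {
                  if (seq0 == 0)
                          return 1;
                  if (seq1 == 0)
                          return 0;
                  if (seq0 == 1 && seq1 == 3)
                          return 0;       /* wraparound: 1 is newer than 3 */
                  if (seq1 == 1 && seq0 == 3)
                          return 1;
                  return seq1 > seq0 ? 1 : 0;
          }

          int main(void)
          {
                  printf("live index: nsindex%d\n", live_index(1, 3));
                  return 0;
          }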
    • libnvdimm, nfit: add interleave-set state-tracking infrastructure · eaf96153
      Committed by Dan Williams
      On platforms that have firmware support for reading/writing per-dimm
      label space, a portion of the dimm may be accessible via an interleave
      set PMEM mapping in addition to the dimm's BLK (block-data-window
      aperture(s)) interface.  A label, stored in a "configuration data
      region" on the dimm, disambiguates which dimm addresses are accessed
      through which exclusive interface.
      
      Add infrastructure that allows the kernel to block modifications to a
      label in the set while any member dimm is active.  Note that this is
      meant only for enforcing "no modifications of active labels" via the
      coarse ioctl command.  Adding/deleting namespaces from an active
      interleave set is always possible via sysfs.
      
      Another aspect of tracking interleave sets is tracking their integrity
      when DIMMs in a set are physically re-ordered.  For this purpose we
      generate an "interleave-set cookie" that can be recorded in a label and
      validated against the current configuration.  It is the bus provider
      implementation's responsibility to calculate the interleave set cookie
      and attach it to a given region.
      
      Cc: Neil Brown <neilb@suse.de>
      Cc: <linux-acpi@vger.kernel.org>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Robert Moore <robert.moore@intel.com>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
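      One plausible shape for a cookie, as a runnable illustration only (the
      actual bus-provider algorithm and its inputs are its own business):
      fold each member dimm's identity, in interleave order, into a checksum
      so any re-ordering changes the value:

          #include <stdint.h>
          #include <stdio.h>

          struct dimm_id {
                  uint32_t serial;
                  uint16_t position;      /* position in the interleave set */
          };

          static uint64_t set_cookie(const struct dimm_id *ids, int n)
          {
                  uint64_t lo = 0, hi = 0;

                  for (int i = 0; i < n; i++) {   /* fletcher-style sums */
                          lo += ((uint64_t)ids[i].serial << 16) | ids[i].position;
                          hi += lo;
                  }
                  return (hi << 32) ^ lo;
          }

          int main(void)
          {
                  struct dimm_id set[] = { { 0x1234, 0 }, { 0x5678, 1 } };

                  printf("cookie: %#llx\n",
                         (unsigned long long)set_cookie(set, 2));
                  return 0;
          }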
    • libnvdimm: support for legacy (non-aliasing) nvdimms · 3d88002e
      Committed by Dan Williams
      The libnvdimm region driver is an intermediary driver that translates
      non-volatile "region"s into "namespace" sub-devices that are surfaced by
      persistent memory block-device drivers (PMEM and BLK).
      
      ACPI 6 introduces the concept that a given nvdimm may simultaneously
      offer multiple access modes to its media through direct PMEM load/store
      access, or windowed BLK mode.  Existing nvdimms mostly implement a PMEM
      interface, some offer a BLK-like mode, but never both as ACPI 6 defines.
      If an nvdimm is single-interfaced, then there is no need for dimm
      metadata labels.  For these devices we can take the region boundaries
      directly to create a child namespace device (nd_namespace_io).
      Acked-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Toshi Kani <toshi.kani@hp.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • libnvdimm, nfit: regions (block-data-window, persistent memory, volatile memory) · 1f7df6f8
      Committed by Dan Williams
      A "region" device represents the maximum capacity of a BLK range (mmio
      block-data-window(s)), or a PMEM range (DAX-capable persistent memory or
      volatile memory), without regard for aliasing.  Aliasing, in the
      dimm-local address space (DPA), is resolved by metadata on a dimm to
      designate which exclusive interface will access the aliased DPA ranges.
      Support for the per-dimm metadata/label arrives in a subsequent
      patch.
      
      The name format of "region" devices is "regionN" where, like dimms, N is
      a global ida index assigned at discovery time.  This id is not reliable
      across reboots nor in the presence of hotplug.  Look to attributes of
      the region or static id-data of the sub-namespace to generate a
      persistent name.  However, if the platform configuration does not change
      it is reasonable to expect the same region id to be assigned at the next
      boot.
      
      "region"s have 2 generic attributes "size", and "mapping"s where:
      - size: the BLK accessible capacity or the span of the
        system physical address range in the case of PMEM.
      
      - mappingN: a tuple describing a dimm's contribution to the region's
        capacity in the format (<nmemX>,<dpa>,<size>).  For a PMEM-region
        there will be at least one mapping per dimm in the interleave set.  For
        a BLK-region there is only "mapping0" listing the starting DPA of the
        BLK-region and the available DPA capacity of that space (matches "size"
        above).
      
      The maximum number of mappings per "region" is hard-coded per the
      constraints of sysfs attribute groups.  That said, the number of
      mappings per region should never exceed the maximum number of possible
      dimms in the system.  If the current number turns out not to be enough
      then the "mappings" attribute clarifies how many there are supposed to
      be.  "32 should be enough for anybody...".
      
      Cc: Neil Brown <neilb@suse.de>
      Cc: <linux-acpi@vger.kernel.org>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Robert Moore <robert.moore@intel.com>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Tested-by: Toshi Kani <toshi.kani@hp.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
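      A sketch of what a mappingN attribute body could look like in sysfs
      terms; the structure is a hypothetical stand-in for the real
      per-mapping data:

          /* kernel-style illustration */
          #include <linux/device.h>

          struct mapping_sketch {
                  const char *dimm;               /* e.g. "nmem0" */
                  unsigned long long dpa;         /* starting DPA */
                  unsigned long long size;
          };

          static ssize_t mapping_show(struct device *dev,
                                      struct device_attribute *attr, char *buf)
          {
                  struct mapping_sketch *m = dev_get_drvdata(dev);

                  /* the (<nmemX>,<dpa>,<size>) tuple described above */
                  return sprintf(buf, "%s,%llu,%llu\n", m->dimm, m->dpa, m->size);
          }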
    • libnvdimm, nvdimm: dimm driver and base libnvdimm device-driver infrastructure · 4d88a97a
      Committed by Dan Williams
      * Implement the device-model infrastructure for loading modules and
        attaching drivers to nvdimm devices.  This is a simple association of an
        nd-device-type number with a driver that has a bitmask of supported
        device types.  To facilitate userspace bind/unbind operations, 'modalias'
        and 'devtype', which also appear in the uevent, are added as generic
        sysfs attributes for all nvdimm devices.  The reason for the device-type
        number is to support sub-types within a given parent devtype, be it a
        vendor-specific sub-type or otherwise.
      
      * The first consumer of this infrastructure is the driver
        for dimm devices.  It simply uses control messages to retrieve and
        store the configuration-data image (label set) from each dimm.
      
      Note: nd_device_register() arranges for asynchronous registration of
            nvdimm bus devices by default.
      
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Neil Brown <neilb@suse.de>
      Acked-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Toshi Kani <toshi.kani@hp.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
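      The type-number/bitmask association reduces to a one-line match, as in
      this runnable sketch; the enum names and values here are illustrative,
      not the kernel's actual nd-device-type numbers:

          #include <stdio.h>

          enum nd_type { ND_DIMM = 1, ND_REGION_PMEM = 2, ND_REGION_BLK = 3 };

          /* a driver declares a bitmask of the device types it can bind */
          static int driver_matches(unsigned long type_mask, enum nd_type t)
          {
                  return (type_mask & (1UL << t)) != 0;
          }

          int main(void)
          {
                  unsigned long dimm_driver_mask = 1UL << ND_DIMM;

                  printf("binds dimm device: %d\n",
                         driver_matches(dimm_driver_mask, ND_DIMM));
                  printf("binds pmem region: %d\n",
                         driver_matches(dimm_driver_mask, ND_REGION_PMEM));
                  return 0;
          }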