1. 15 Sep 2015, 4 commits
  2. 12 Sep 2015, 2 commits
  3. 09 Sep 2015, 4 commits
  4. 05 Sep 2015, 2 commits
  5. 02 Sep 2015, 2 commits
  6. 29 Aug 2015, 2 commits
    • libnvdimm, pmem: 'struct page' for pmem · 32ab0a3f
      Committed by Dan Williams
      Enable the pmem driver to handle PFN device instances.  Attaching a pmem
      namespace to a pfn device triggers the driver to allocate and initialize
      struct page entries for pmem.  Memory capacity for this allocation
      comes exclusively from RAM for now, which is suitable for low
      PMEM-to-RAM ratios.  This mechanism will later be expanded to support
      an "allocate from PMEM" policy.
      
      Cc: Boaz Harrosh <boaz@plexistor.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
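      
      A minimal sketch of the idea, assuming the v4.3-era
      devm_memremap_pages() interface; the wrapper name is illustrative,
      not the driver's actual code:
      
      	#include <linux/device.h>
      	#include <linux/memremap.h>
      
      	/*
      	 * Map a pmem range so every pfn in it has a valid 'struct page'.
      	 * The page metadata itself is allocated from regular RAM by
      	 * devm_memremap_pages(), unlike plain devm_memremap(), which
      	 * maps the range without any page structs.
      	 */
      	static void *pmem_map_with_pages(struct device *dev,
      			struct resource *res)
      	{
      		return devm_memremap_pages(dev, res);
      	}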
    • libnvdimm, pfn: 'struct page' provider infrastructure · e1455744
      Committed by Dan Williams
      Implement the base infrastructure for libnvdimm PFN devices.  Similar
      to BTT devices, they take a namespace as a backing device and layer
      functionality on top.  In this case the functionality is reserving
      space for an array of 'struct page' entries to be handed out through
      pfn_to_page().  For now this is just the basic libnvdimm device model
      for configuring the base PFN device.
      
      As the namespace-claiming mechanism for PFN devices is mostly identical
      to that of BTT devices, drivers/nvdimm/claim.c is created to house the
      common bits.
      
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
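      
      A hedged sketch of the layout this reserves, with illustrative field
      names (the real info block lives in drivers/nvdimm/pfn.h):
      
      	#include <linux/types.h>
      
      	/* illustrative only, not the driver's actual structure */
      	struct pfn_info_sketch {
      		u8  signature[16]; /* marks the namespace as a PFN device */
      		u8  uuid[16];      /* identity of this pfn instance */
      		u64 dataoff;       /* user data begins past the reserved area */
      		u64 npfns;         /* pages covered by the reserved metadata */
      		u32 mode;          /* page structs kept in RAM vs in PMEM */
      	};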
  7. 28 Aug 2015, 4 commits
    • nd_blk: change aperture mapping from WC to WB · 67a3e8fe
      Committed by Ross Zwisler
      This should result in a sizeable performance gain for reads.  For a
      rough comparison I did some simple read testing using PMEM, comparing
      reads through a write-combining (WC) mapping vs. a write-back (WB)
      mapping.  This was done on a random lab machine.
      
      PMEM reads from a write-combining mapping:
      	# dd of=/dev/null if=/dev/pmem0 bs=4096 count=100000
      	100000+0 records in
      	100000+0 records out
      	409600000 bytes (410 MB) copied, 9.2855 s, 44.1 MB/s
      
      PMEM reads from a write-back mapping:
      	# dd of=/dev/null if=/dev/pmem0 bs=4096 count=1000000
      	1000000+0 records in
      	1000000+0 records out
      	4096000000 bytes (4.1 GB) copied, 3.44034 s, 1.2 GB/s
      
      To be able to safely support a write-back aperture I needed to add
      support for the "read flush" _DSM flag, as outlined in the DSM spec:
      
      http://pmem.io/documents/NVDIMM_DSM_Interface_Example.pdf
      
      This flag tells the ND BLK driver that it needs to flush the cache lines
      associated with the aperture after the aperture is moved but before any
      new data is read.  This ensures that any stale cache lines from the
      previous contents of the aperture will be discarded from the processor
      cache, and the new data will be read properly from the DIMM.  We know
      that the cache lines are clean and will be discarded without any
      writeback because either a) the previous aperture operation was a read,
      and we never modified the contents of the aperture, or b) the previous
      aperture operation was a write and we must have written back the dirtied
      contents of the aperture to the DIMM before the I/O was completed.
      
      In order to add support for the "read flush" flag I needed to add a
      generic routine to invalidate cache lines, mmio_flush_range().  This is
      protected by the ARCH_HAS_MMIO_FLUSH Kconfig variable, and is currently
      only supported on x86.
      
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
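      
      A hedged sketch of that sequence; the helper name and flag parameter
      are illustrative stand-ins, and only mmio_flush_range() is the
      interface the patch actually adds:
      
      	#include <asm/cacheflush.h>
      	#include <linux/io.h>
      
      	static void aperture_read_sketch(void *dst, void __iomem *aperture,
      			size_t len, bool read_flush)
      	{
      		/*
      		 * Stale lines from the aperture's previous contents are
      		 * clean (reads never dirtied them; writes were flushed at
      		 * I/O completion), so invalidating them discards nothing.
      		 */
      		if (read_flush)
      			mmio_flush_range((void __force *)aperture, len);
      		memcpy_fromio(dst, aperture, len);
      	}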
    • selftests: check before install · a7d0f078
      Committed by Bamvor Jian Zhang
      When the test cases are not supported by the current architecture,
      the lists of files to install (TEST_PROGS, TEST_PROGS_EXTENDED and
      TEST_FILES) will be empty.  Check them before installation to avoid
      a failure being reported by the install program.
      
      Signed-off-by: Bamvor Jian Zhang <bamvor.zhangjian@linaro.org>
      Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
    • selftests/zram: Adding zram tests · f21fb798
      Committed by Naresh Kamboju
      zram: Compressed RAM based block devices
      ----------------------------------------
      The zram module creates RAM based block devices named /dev/zram<id>
      (<id> = 0, 1, ...). Pages written to these disks are compressed and stored
      in memory itself. These disks allow very fast I/O and compression provides
      good amounts of memory savings.  Some of the use cases include /tmp
      storage, use as swap disks, various caches under /var, and maybe many
      more :)
      
      Statistics for individual zram devices are exported through sysfs nodes at
      /sys/block/zram<id>/
      
      This patch validates the zram functionality.  The test interacts with
      the block device /dev/zram<id> and the sysfs nodes under
      /sys/block/zram<id>/.
      
      zram.sh: sanity-checks CONFIG_ZRAM and runs the zram01 and zram02 tests
      zram01.sh: creates general-purpose ram disks with different filesystems
      zram02.sh: creates a block device for swap
      zram_lib.sh: library of initialization/cleanup functions
      README: zram introduction and the required Kconfig options
      Makefile: runs the zram tests
      
      zram test output
      -----------------
      ./zram.sh
      --------------------
      running zram tests
      --------------------
      /dev/zram0 device file found: OK
      set max_comp_streams to zram device(s)
      /sys/block/zram0/max_comp_streams = '2' (1/1)
      zram max streams: OK
      test that we can set compression algorithm
      supported algs: [lzo] lz4
      /sys/block/zram0/comp_algorithm = 'lzo' (1/1)
      zram set compression algorithm: OK
      set disk size to zram device(s)
      /sys/block/zram0/disksize = '2097152' (1/1)
      zram set disksizes: OK
      set memory limit to zram device(s)
      /sys/block/zram0/mem_limit = '2M' (1/1)
      zram set memory limit: OK
      make ext4 filesystem on /dev/zram0
      zram mkfs.ext4: OK
      mount /dev/zram0
      zram mount of zram device(s): OK
      fill zram0...
      zram0 can be filled with '1932' KB
      zram used 3M, zram disk sizes 2097152M
      zram compression ratio: 699050.66:1: OK
      zram cleanup
      zram01 : [PASS]
      
      /dev/zram0 device file found: OK
      set max_comp_streams to zram device(s)
      /sys/block/zram0/max_comp_streams = '2' (1/1)
      zram max streams: OK
      set disk size to zram device(s)
      /sys/block/zram0/disksize = '1048576' (1/1)
      zram set disksizes: OK
      set memory limit to zram device(s)
      /sys/block/zram0/mem_limit = '1M' (1/1)
      zram set memory limit: OK
      make swap with zram device(s)
      done with /dev/zram0
      zram making zram mkswap and swapon: OK
      zram swapoff: OK
      zram cleanup
      zram02 : [PASS]
      
      CC: Shuah Khan <shuahkh@osg.samsung.com>
      CC: Tyler Baker <tyler.baker@linaro.org>
      CC: Milosz Wasilewski <milosz.wasilewski@linaro.org>
      CC: Alexey Kodanev <alexey.kodanev@oracle.com>
      Signed-off-by: Naresh Kamboju <naresh.kamboju@linaro.org>
      Signed-off-by: Alexey Kodanev <alexey.kodanev@oracle.com>
      Reviewed-by: Tyler Baker <tyler.baker@linaro.org>
      Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
  8. 19 Aug 2015, 1 commit
    • libnvdimm, e820: make CONFIG_X86_PMEM_LEGACY a tristate option · 7a67832c
      Committed by Dan Williams
      We currently register a platform device for e820 type-12 memory and
      register a nvdimm bus beneath it.  Registering the platform device
      triggers the device-core machinery to probe for a driver, but that
      search currently comes up empty.  Building the nvdimm-bus registration
      into the e820_pmem platform device registration in this way forces
      libnvdimm to be built-in.  Instead, convert the built-in portion of
      CONFIG_X86_PMEM_LEGACY to simply register a platform device and move the
      rest of the logic to the driver for e820_pmem, for the following
      reasons:
      
      1/ Letting e820_pmem support be a module allows building and testing
         libnvdimm.ko changes without rebooting
      
      2/ All the normal policy around modules can be applied to e820_pmem
         (unbind to disable and/or blacklisting the module from loading by
         default)
      
      3/ Moving the driver to a generic location and converting it to scan
         "iomem_resource" rather than "e820.map" means any other architecture can
         take advantage of this simple nvdimm resource discovery mechanism by
         registering a resource named "Persistent Memory (legacy)"
      
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
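      
      A hedged sketch of the resulting split; the resource walk shows the
      architecture-neutral discovery by name, but the details of the real
      drivers/nvdimm/e820.c will differ:
      
      	#include <linux/module.h>
      	#include <linux/platform_device.h>
      	#include <linux/ioport.h>
      	#include <linux/string.h>
      
      	static int e820_pmem_probe(struct platform_device *pdev)
      	{
      		struct resource *res;
      
      		/* find regions by name instead of consulting e820.map */
      		for (res = iomem_resource.child; res; res = res->sibling) {
      			if (strcmp(res->name, "Persistent Memory (legacy)"))
      				continue;
      			/* hand this range to the nvdimm bus here */
      		}
      		return 0;
      	}
      
      	static struct platform_driver e820_pmem_driver = {
      		.probe = e820_pmem_probe,
      		.driver = { .name = "e820_pmem" },
      	};
      	module_platform_driver(e820_pmem_driver);
      	MODULE_LICENSE("GPL v2");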
  9. 18 Aug 2015, 4 commits
  10. 17 Aug 2015, 1 commit
  11. 15 Aug 2015, 2 commits
  12. 06 Aug 2015, 1 commit
  13. 03 Aug 2015, 2 commits
  14. 31 Jul 2015, 2 commits
  15. 30 Jul 2015, 2 commits
  16. 28 Jul 2015, 1 commit
  17. 21 Jul 2015, 2 commits
  18. 18 Jul 2015, 1 commit
  19. 16 Jul 2015, 1 commit