1. 01 May 2014 (10 commits)
  2. 23 April 2014 (3 commits)
  3. 22 April 2014 (2 commits)
  4. 17 April 2014 (1 commit)
  5. 16 April 2014 (4 commits)
    • blk-mq: split out tag initialization, support shared tags · 24d2f903
      Committed by Christoph Hellwig
      Add a new blk_mq_tag_set structure that gets set up before we initialize
      the queue.  A single blk_mq_tag_set structure can be shared by multiple
      queues.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      
      Modular export of blk_mq_{alloc,free}_tagset added by me.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      24d2f903
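      For context, a minimal driver-side sketch of the new interface follows.  The
      "my_*" names and the values are illustrative only and not part of this commit;
      the function names blk_mq_alloc_tag_set(), blk_mq_free_tag_set() and
      blk_mq_init_queue() are the ones used in mainline:
      
      	#include <linux/blk-mq.h>
      
      	static struct blk_mq_tag_set my_tag_set;	/* one set, shareable by queues */
      
      	static int my_driver_init_queue(struct request_queue **qp)
      	{
      		struct request_queue *q;
      		int ret;
      
      		memset(&my_tag_set, 0, sizeof(my_tag_set));
      		my_tag_set.ops          = &my_mq_ops;	/* the driver's blk_mq_ops */
      		my_tag_set.nr_hw_queues = 1;
      		my_tag_set.queue_depth  = 64;
      		my_tag_set.numa_node    = NUMA_NO_NODE;
      		my_tag_set.cmd_size     = sizeof(struct my_cmd); /* per-request pdu */
      		my_tag_set.flags        = BLK_MQ_F_SHOULD_MERGE;
      
      		ret = blk_mq_alloc_tag_set(&my_tag_set);
      		if (ret)
      			return ret;
      
      		/* the same tag set may back several queues */
      		q = blk_mq_init_queue(&my_tag_set);
      		if (IS_ERR(q)) {
      			blk_mq_free_tag_set(&my_tag_set);
      			return PTR_ERR(q);
      		}
      		*qp = q;
      		return 0;
      	}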
    • blk-mq: add ->init_request and ->exit_request methods · e9b267d9
      Committed by Christoph Hellwig
      The current blk_mq_init_commands/blk_mq_free_commands interface has
      two problems:
      
       1) Because only the constructor is passed to blk_mq_init_commands there
          is no easy way to clean up when a command initialization failed.  The
          current code simply leaks the allocations done in the constructor.
      
       2) There is no good place to call blk_mq_free_commands: before
          blk_cleanup_queue there is no guarantee that all outstanding
          commands have completed, so we can't free them yet.  After
          blk_cleanup_queue the queue has usually been freed.  This can be
          worked around by grabbing an unconditional reference before calling
          blk_cleanup_queue and dropping it after blk_mq_free_commands is
          done, although that's not exactly pretty and driver writers are
          guaranteed to get it wrong sooner or later.
      
      Both issues are easily fixed by making the request constructor and
      destructor normal blk_mq_ops methods.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      e9b267d9
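      A hedged sketch of how a driver might use the new methods; the exact
      prototypes have varied between kernel releases, so treat the argument lists
      as an assumption, and struct my_cmd is purely illustrative:
      
      	struct my_cmd {
      		void *sense;			/* per-request driver data */
      	};
      
      	static int my_init_request(struct blk_mq_tag_set *set, struct request *rq,
      				   unsigned int hctx_idx, unsigned int numa_node)
      	{
      		struct my_cmd *cmd = blk_mq_rq_to_pdu(rq);
      
      		/* allocated once per request at tag-set setup, not per I/O */
      		cmd->sense = kzalloc(96, GFP_KERNEL);
      		return cmd->sense ? 0 : -ENOMEM;
      	}
      
      	static void my_exit_request(struct blk_mq_tag_set *set, struct request *rq,
      				    unsigned int hctx_idx)
      	{
      		struct my_cmd *cmd = blk_mq_rq_to_pdu(rq);
      
      		/* guaranteed cleanup path for problems 1) and 2) above */
      		kfree(cmd->sense);
      	}
      
      	static const struct blk_mq_ops my_mq_ops = {
      		/* .queue_rq and other mandatory callbacks omitted */
      		.init_request	= my_init_request,
      		.exit_request	= my_exit_request,
      	};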
    • blk-mq: do not initialize req->special · 9d74e257
      Committed by Christoph Hellwig
      Drivers can reach their private data easily using the blk_mq_rq_to_pdu
      helper and don't need req->special.  By not initializing it, the code can
      be simplified nicely, and we also shave off a few more instructions from
      the I/O path.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      9d74e257
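      An illustrative sketch of the helper-based pattern (struct my_cmd and
      my_prep() are hypothetical driver code, not part of this commit):
      
      	struct my_cmd {
      		u32 flags;
      		/* ... driver-private fields, sized via tag_set.cmd_size ... */
      	};
      
      	static void my_prep(struct request *rq)
      	{
      		/* the pdu lives directly behind the request; no req->special needed */
      		struct my_cmd *cmd = blk_mq_rq_to_pdu(rq);
      
      		cmd->flags = 0;
      	}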
    • block: remove struct request buffer member · b4f42e28
      Committed by Jens Axboe
      This was used in the olden days, back when onions were proper
      yellow. Basically it mapped to the current buffer to be
      transferred. With highmem being added more than a decade ago,
      most drivers map pages out of a bio, and rq->buffer isn't
      pointing at anything valid.
      
      Convert old style drivers to just use bio_data().
      
      For the discard payload use case, just reference the page
      in the bio.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      b4f42e28
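      A minimal sketch of the conversion pattern for an old-style driver (the
      function name is illustrative):
      
      	static void my_handle_request(struct request *rq)
      	{
      		/* formerly: char *buffer = rq->buffer; */
      		char *buffer = bio_data(rq->bio);
      
      		/* ... transfer from/to 'buffer' as before ... */
      	}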
  6. 11 April 2014 (6 commits)
  7. 09 April 2014 (1 commit)
    • drivers/block/loop.c: ratelimit error messages · 44bd70c3
      Committed by Mike Galbraith
      Metric tons of high-speed spew is not helpful when things go pear shaped.
      systemd lost its mind and forgot how to stop services it insists on being
      the sole manager of; a massive printk() flood ensued, and the box eventually died.
      
      [16206.684000] loop: Write error at byte offset 11412291584, length 4096.
      [16206.684000] systemd-journald[1758]: /dev/kmsg buffer overrun, some messages lost.
      [16206.684000] loop: Write error at byte offset 13155434496, length 4096.
      [16206.684000] loop: Write error at byte offset 13155438592, length 4096.
      [16206.684000] loop: Write error at byte offset 13155442688, length 4096.
      [16206.684000] loop: Write error at byte offset 13960736768, length 4096.
      [16206.684000] loop: Write error at byte offset 14229172224, length 4096.
      [16206.684000] systemd-journald[1758]: /dev/kmsg buffer overrun, some messages lost.
      [16206.684000] loop: Write error at byte offset 14766043136, length 4096.
      [16206.684000] loop: Write error at byte offset 15034478592, length 4096.
      [16206.684000] systemd-journald[1758]: /dev/kmsg buffer overrun, some messages lost.
      Signed-off-by: Mike Galbraith <bitbucket@online.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      44bd70c3
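      The fix boils down to switching the error report to a ratelimited printk,
      roughly as below (a sketch; the variable names are assumptions about the
      loop driver's write path):
      
      	printk_ratelimited(KERN_ERR
      			   "loop: Write error at byte offset %llu, length %i.\n",
      			   (unsigned long long)pos, bvec->bv_len);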
  8. 08 April 2014 (13 commits)
    • zram: support REQ_DISCARD · f4659d8e
      Committed by Joonsoo Kim
      zram is a RAM-based block device and can be used as the backend of a
      filesystem.  When a filesystem deletes a file, it normally doesn't touch the
      file's data blocks; it just updates the file's metadata.  That behavior is
      fine on a disk-based block device, but is a problem on a RAM-based one,
      since we can't free the memory used for the data blocks.  To overcome this
      disadvantage, there is the REQ_DISCARD functionality.  If the block device
      supports REQ_DISCARD and the filesystem is mounted with the discard option,
      the filesystem sends REQ_DISCARD to the block device whenever data blocks
      are discarded.  All we have to do is handle this request.
      
      This patch sets QUEUE_FLAG_DISCARD and handles REQ_DISCARD requests.  With
      it, we can free memory used by zram once it is no longer needed.
      
      [akpm@linux-foundation.org: tweak comments]
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f4659d8e
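      Roughly, the change has two halves, sketched below for kernels of that era;
      the helper name zram_bio_discard() is hypothetical here and the exact code
      differs:
      
      	/* 1) advertise discard support when the disk queue is set up */
      	zram->disk->queue->limits.discard_granularity = PAGE_SIZE;
      	blk_queue_max_discard_sectors(zram->disk->queue, UINT_MAX);
      	queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, zram->disk->queue);
      
      	/* 2) honour discard requests in the bio path */
      	if (unlikely(bio->bi_rw & REQ_DISCARD)) {
      		zram_bio_discard(zram, index, offset, bio);	/* free the slots */
      		bio_endio(bio, 0);
      		return;
      	}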
    • zram: use scnprintf() in attrs show() methods · 56b4e8cb
      Committed by Sergey Senozhatsky
      sysfs.txt documentation lists the following requirements:
      
       - The buffer will always be PAGE_SIZE bytes in length. On i386, this
         is 4096.
      
       - show() methods should return the number of bytes printed into the
         buffer. This is the return value of scnprintf().
      
       - show() should always use scnprintf().
      
      Use scnprintf() in show() functions.
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      56b4e8cb
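      A sketch of the pattern (dev_to_zram() and the disksize field are
      assumptions about the surrounding zram code):
      
      	static ssize_t disksize_show(struct device *dev,
      				     struct device_attribute *attr, char *buf)
      	{
      		struct zram *zram = dev_to_zram(dev);
      
      		/* scnprintf() returns the number of characters actually written,
      		 * never more than PAGE_SIZE - 1, which is what show() must return */
      		return scnprintf(buf, PAGE_SIZE, "%llu\n", zram->disksize);
      	}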
    • zram: propagate error to user · 60a726e3
      Committed by Minchan Kim
      When zcomp is initialized as a single-stream backend, max_comp_streams
      cannot be changed without a zram reset, but the current interface does not
      report any error to the user; it even changes the max_comp_streams value
      without any effect, which is very confusing for the user.
      
      This patch prevents changes to max_comp_streams when zcomp was initialized
      as a single-stream zcomp, and reports the error to the user (e.g. to echo).
      
      [akpm@linux-foundation.org: don't return with the lock held, per Sergey]
      [fengguang.wu@intel.com: fix coccinelle warnings]
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Acked-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      60a726e3
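      A sketch of what the store() path looks like with the error propagated;
      helper names such as dev_to_zram(), init_done() and zcomp_set_max_streams()
      are assumptions made for illustration:
      
      	static ssize_t max_comp_streams_store(struct device *dev,
      					      struct device_attribute *attr,
      					      const char *buf, size_t len)
      	{
      		struct zram *zram = dev_to_zram(dev);
      		int num, ret;
      
      		ret = kstrtoint(buf, 0, &num);
      		if (ret)
      			return ret;
      		if (num < 1)
      			return -EINVAL;
      
      		down_write(&zram->init_lock);
      		if (init_done(zram) && !zcomp_set_max_streams(zram->comp, num)) {
      			pr_info("Cannot change max compression streams\n");
      			ret = -EINVAL;		/* echo now sees a write error */
      			goto out;
      		}
      		zram->max_comp_streams = num;
      		ret = len;
      	out:
      		/* the lock is dropped on every path, per the akpm note above */
      		up_write(&zram->init_lock);
      		return ret;
      	}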
    • zram: return error-valued pointer from zcomp_create() · fcfa8d95
      Committed by Sergey Senozhatsky
      Instead of returning just NULL, return an ERR_PTR from zcomp_create() if
      compressing backend creation has failed: ERR_PTR(-EINVAL) for an unsupported
      compression algorithm request, ERR_PTR(-ENOMEM) for an allocation (zcomp or
      compression stream) error.
      
      Perform an IS_ERR() check on the value returned from zcomp_create() in
      disksize_store() and set the return code to PTR_ERR().
      
      Change suggested by Jerome Marchand.
      
      [akpm@linux-foundation.org: clean up error recovery flow]
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Reported-by: Jerome Marchand <jmarchan@redhat.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fcfa8d95
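      A sketch of the resulting zcomp_create() error contract and its caller;
      find_backend(), zcomp_strm_create() and the out_free_meta label are
      hypothetical helpers used only for illustration:
      
      	struct zcomp *zcomp_create(const char *compress, int max_strm)
      	{
      		struct zcomp_backend *backend;
      		struct zcomp *comp;
      
      		backend = find_backend(compress);
      		if (!backend)
      			return ERR_PTR(-EINVAL);	/* unsupported algorithm */
      
      		comp = kzalloc(sizeof(*comp), GFP_KERNEL);
      		if (!comp)
      			return ERR_PTR(-ENOMEM);
      
      		comp->backend = backend;
      		if (zcomp_strm_create(comp, max_strm)) {	/* stream allocation */
      			kfree(comp);
      			return ERR_PTR(-ENOMEM);
      		}
      		return comp;
      	}
      
      	/* caller, e.g. in disksize_store(): */
      	comp = zcomp_create(zram->compressor, zram->max_comp_streams);
      	if (IS_ERR(comp)) {
      		err = PTR_ERR(comp);		/* -EINVAL or -ENOMEM */
      		goto out_free_meta;
      	}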
    • zram: move comp allocation out of init_lock · d61f98c7
      Committed by Sergey Senozhatsky
      While fixing lockdep spew of ->init_lock reported by Sasha Levin [1],
      Minchan Kim noted [2] that it's better to move compression backend
      allocation (using GFP_KERNEL) out of the ->init_lock lock, the same way as
      with zram_meta_alloc(), in order to prevent the same lockdep spew.
      
      [1] https://lkml.org/lkml/2014/2/27/337
      [2] https://lkml.org/lkml/2014/3/3/32
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Reported-by: Minchan Kim <minchan@kernel.org>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Acked-by: Jerome Marchand <jmarchan@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d61f98c7
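      The resulting ordering, roughly (a simplified sketch of the disksize_store()
      path; init_done() is an assumed helper and error handling is trimmed):
      
      	/* do the GFP_KERNEL allocation first, outside ->init_lock ... */
      	comp = zcomp_create(zram->compressor, zram->max_comp_streams);
      	if (IS_ERR(comp))
      		return PTR_ERR(comp);
      
      	/* ... then take the lock only to publish the result */
      	down_write(&zram->init_lock);
      	if (init_done(zram)) {
      		up_write(&zram->init_lock);
      		zcomp_destroy(comp);
      		return -EBUSY;
      	}
      	zram->comp = comp;
      	up_write(&zram->init_lock);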
    • zram: add lz4 algorithm backend · 6e76668e
      Committed by Sergey Senozhatsky
      Introduce an LZ4 compression backend and make it available for selection.
      LZ4 support is optional and requires the user to set the ZRAM_LZ4_COMPRESS
      config option.  The default compression backend is LZO.
      
      TEST
      
      (x86_64, core i5, 2 cores + 2 hyperthreading, zram disk size 1G,
      ext4 file system, 3 compression streams)
      
      iozone -t 3 -R -r 16K -s 60M -I +Z
      
             Test           LZO           LZ4
      ----------------------------------------------
        Initial write   1642744.62    1317005.09
              Rewrite   2498980.88    1800645.16
                 Read   3957026.38    5877043.75
              Re-read   3950997.38    5861847.00
         Reverse Read   2937114.56    5047384.00
          Stride read   2948163.19    4929587.38
          Random read   3292692.69    4880793.62
       Mixed workload   1545602.62    3502940.38
         Random write   2448039.75    1758786.25
               Pwrite   1670051.03    1338329.69
                Pread   2530682.00    5097177.62
               Fwrite   3232085.62    3275942.56
                Fread   6306880.25    6645271.12
      
      So on my system LZ4 is slower in write-only tests, while it performs
      better in read-only and mixed (reads + writes) tests.
      
      Official LZ4 benchmarks are available at http://code.google.com/p/lz4/
      (the Linux kernel uses revision r90).
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6e76668e
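      For orientation, a backend is plugged in as one more ops table.  The sketch
      below is illustrative only: the zcomp_backend field names and prototypes are
      assumptions, and the zcomp_lz4_* functions stand in for wrappers around the
      kernel's LZ4 library:
      
      	static struct zcomp_backend zcomp_lz4 = {
      		.compress	= zcomp_lz4_compress,	/* wraps the LZ4 encoder */
      		.decompress	= zcomp_lz4_decompress,	/* wraps the LZ4 decoder */
      		.create		= zcomp_lz4_create,	/* allocate per-stream workmem */
      		.destroy	= zcomp_lz4_destroy,
      		.name		= "lz4",
      	};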
    • zram: make compression algorithm selection possible · e46b8a03
      Committed by Sergey Senozhatsky
      Add and document the `comp_algorithm' device attribute.  This attribute
      shows the supported compression algorithms and the currently selected
      algorithm:
      
      	cat /sys/block/zram0/comp_algorithm
      	[lzo] lz4
      
      and allows the selected compression algorithm to be changed:
      	echo lzo > /sys/block/zram0/comp_algorithm
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e46b8a03
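      A sketch of a show() handler in that style; the backend list, dev_to_zram()
      and the compressor field are assumptions made for illustration:
      
      	static const char * const backends[] = { "lzo", "lz4" };
      
      	static ssize_t comp_algorithm_show(struct device *dev,
      					   struct device_attribute *attr, char *buf)
      	{
      		struct zram *zram = dev_to_zram(dev);
      		ssize_t sz = 0;
      		int i;
      
      		/* print every known backend, bracketing the active one */
      		for (i = 0; i < ARRAY_SIZE(backends); i++)
      			sz += scnprintf(buf + sz, PAGE_SIZE - sz - 2,
      					!strcmp(zram->compressor, backends[i]) ?
      					"[%s] " : "%s ", backends[i]);
      		sz += scnprintf(buf + sz, PAGE_SIZE - sz, "\n");
      		return sz;
      	}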
    • zram: add set_max_streams knob · fe8eb122
      Committed by Sergey Senozhatsky
      This patch makes it possible to change max_comp_streams on an already
      initialised zcomp.
      
      Introduce a zcomp set_max_streams() knob, with zcomp_strm_multi_set_max_streams()
      and zcomp_strm_single_set_max_streams() callbacks to change the streams limit
      for zcomp_strm_multi and zcomp_strm_single, respectively.  set_max_streams
      for a single stream zcomp does nothing.
      
      If the user has lowered the limit, then zcomp_strm_multi_set_max_streams()
      attempts to immediately free extra streams (as many as it can, depending
      on idle stream availability).
      
      Note, this patch does not allow changing the stream 'policy' from single to
      multi stream (or vice versa) on an already initialised compression backend.
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fe8eb122
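      A sketch of the multi-stream callback when the limit is lowered; the struct
      and field names mirror the description of the next entry but are assumptions,
      and zcomp_strm_free() is a hypothetical helper:
      
      	static bool zcomp_strm_multi_set_max_streams(struct zcomp *comp, int num_strm)
      	{
      		struct zcomp_strm_multi *zs = comp->stream;
      		struct zcomp_strm *zstrm;
      
      		spin_lock(&zs->strm_lock);
      		zs->max_strm = num_strm;
      		/* limit lowered: free as many idle streams as are available */
      		while (zs->avail_strm > num_strm && !list_empty(&zs->idle_strm)) {
      			zstrm = list_first_entry(&zs->idle_strm,
      						 struct zcomp_strm, list);
      			list_del(&zstrm->list);
      			zcomp_strm_free(comp, zstrm);
      			zs->avail_strm--;
      		}
      		spin_unlock(&zs->strm_lock);
      		return true;
      	}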
    • zram: add multi stream functionality · beca3ec7
      Committed by Sergey Senozhatsky
      The existing zram (zcomp) implementation has only one compression stream
      (buffer and algorithm private part), so in order to prevent data
      corruption only one write (compress operation) can use this compression
      stream, forcing all concurrent write operations to wait for the stream lock
      to be released.  This patch changes zcomp to keep a list of compression
      streams of user-defined size (set via a sysfs device attr).  Each write
      operation still exclusively holds a compression stream; the difference is
      that we can have N write operations (depending on the size of the streams
      list) executing in parallel.  See the TEST section later in the commit
      message for performance data.
      
      Introduce struct zcomp_strm_multi and a set of functions to manage
      zcomp_strm stream access.  zcomp_strm_multi has a list of idle
      zcomp_strm structs, a spinlock to protect the idle list, and a wait queue,
      making it possible to perform parallel compressions.
      
      The following functions are added:
      - zcomp_strm_multi_find()/zcomp_strm_multi_release()
        find and release a compression stream, implement required locking
      - zcomp_strm_multi_create()/zcomp_strm_multi_destroy()
        create and destroy zcomp_strm_multi
      
      zcomp ->strm_find() and ->strm_release() callbacks are set during
      initialisation to zcomp_strm_multi_find()/zcomp_strm_multi_release()
      correspondingly.
      
      Each time zcomp issues a zcomp_strm_multi_find() call, the following
      operations are performed:
      
      - spin lock strm_lock
      - if the idle list is not empty, remove a zcomp_strm from the idle list,
        spin unlock and return the zcomp stream pointer to the caller
      - if the idle list is empty, the current task adds itself to the wait
        queue; it will be woken up by a zcomp_strm_multi_release() caller
      
      zcomp_strm_multi_release():
      - spin lock strm_lock
      - add zcomp stream to idle list
      - spin unlock, wake up sleeper
      
      Minchan Kim reported that the spinlock-based locking scheme demonstrated
      a severe performance regression for the single compression stream case,
      compared to the mutex-based one (see https://lkml.org/lkml/2014/2/18/16).
      
      base                      spinlock                    mutex
      
      ==Initial write           ==Initial write             ==Initial  write
      records:  5               records:  5                 records:   5
      avg:      1642424.35      avg:      699610.40         avg:       1655583.71
      std:      39890.95(2.43%) std:      232014.19(33.16%) std:       52293.96
      max:      1690170.94      max:      1163473.45        max:       1697164.75
      min:      1568669.52      min:      573429.88         min:       1553410.23
      ==Rewrite                 ==Rewrite                   ==Rewrite
      records:  5               records:  5                 records:   5
      avg:      1611775.39      avg:      501406.64         avg:       1684419.11
      std:      17144.58(1.06%) std:      15354.41(3.06%)   std:       18367.42
      max:      1641800.95      max:      531356.78         max:       1706445.84
      min:      1593515.27      min:      488817.78         min:       1655335.73
      
      When only one compression stream is available, a mutex with spin-on-owner
      tends to perform much better than frequent wait_event()/wake_up().  This is
      why the single stream case is implemented as a special case with mutex locking.
      
      Introduce and document the zram device attribute max_comp_streams.  This
      attribute shows and stores the current zcomp's maximum number of zcomp
      streams (max_strm).  Extend zcomp's zcomp_create() with a `max_strm'
      parameter.  `max_strm' limits the number of zcomp_strm structs on the
      compression backend's idle list (max_comp_streams).
      
      max_comp_streams is used during initialisation as follows:
      -- passing max_strm equal to 1 to zcomp_create() initialises zcomp
      with the single compression stream zcomp_strm_single (mutex-based locking).
      -- passing max_strm greater than 1 to zcomp_create() initialises zcomp
      with the multi compression stream zcomp_strm_multi (spinlock-based locking).
      
      The default max_comp_streams value is 1, meaning that zram is initialised
      with a single stream.
      
      A later patch will introduce a configuration knob to change max_comp_streams
      on an already initialised and in-use zcomp.
      
      TEST
      iozone -t 3 -R -r 16K -s 60M -I +Z
      
             test           base       1 strm (mutex)     3 strm (spinlock)
      -----------------------------------------------------------------------
       Initial write      589286.78       583518.39          718011.05
             Rewrite      604837.97       596776.38         1515125.72
        Random write      584120.11       595714.58         1388850.25
              Pwrite      535731.17       541117.38          739295.27
              Fwrite     1418083.88      1478612.72         1484927.06
      
      Usage example:
      set max_comp_streams to 4
              echo 4 > /sys/block/zram0/max_comp_streams
      
      show current max_comp_streams (default value is 1).
              cat /sys/block/zram0/max_comp_streams
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      beca3ec7
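      Putting the find/release steps above into code, a simplified sketch follows;
      the struct layout and field names are assumptions, and on-demand stream
      allocation plus error handling are omitted:
      
      	struct zcomp_strm_multi {
      		spinlock_t strm_lock;		/* protects idle_strm */
      		int avail_strm;			/* streams created so far */
      		int max_strm;			/* max_comp_streams limit */
      		struct list_head idle_strm;	/* streams not currently in use */
      		wait_queue_head_t strm_wait;	/* writers waiting for a stream */
      	};
      
      	static struct zcomp_strm *zcomp_strm_multi_find(struct zcomp *comp)
      	{
      		struct zcomp_strm_multi *zs = comp->stream;
      		struct zcomp_strm *zstrm;
      
      		while (1) {
      			spin_lock(&zs->strm_lock);
      			if (!list_empty(&zs->idle_strm)) {
      				zstrm = list_first_entry(&zs->idle_strm,
      							 struct zcomp_strm, list);
      				list_del(&zstrm->list);
      				spin_unlock(&zs->strm_lock);
      				return zstrm;
      			}
      			spin_unlock(&zs->strm_lock);
      			/* no idle stream: sleep until a writer releases one */
      			wait_event(zs->strm_wait, !list_empty(&zs->idle_strm));
      		}
      	}
      
      	static void zcomp_strm_multi_release(struct zcomp *comp,
      					     struct zcomp_strm *zstrm)
      	{
      		struct zcomp_strm_multi *zs = comp->stream;
      
      		spin_lock(&zs->strm_lock);
      		list_add(&zstrm->list, &zs->idle_strm);
      		spin_unlock(&zs->strm_lock);
      		wake_up(&zs->strm_wait);
      	}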
    • zram: factor out single stream compression · 9cc97529
      Committed by Sergey Senozhatsky
      This is a preparation patch to add multi stream support to zcomp.
      
      Introduce struct zcomp_strm_single and a set of functions to manage
      zcomp_strm stream access.  zcomp_strm_single implements a single compression
      stream, the same way as the current zcomp implementation.  This moves
      zcomp_strm stream control and locking out of zcomp, so the compressing
      backend is not aware of the required locking.
      
      Single and multi streams require different locking schemes.  Minchan Kim
      reported that the spinlock-based locking scheme (which is used in the multi
      stream implementation) demonstrated a severe performance regression for the
      single compression stream case, compared to the mutex-based one.  See
      https://lkml.org/lkml/2014/2/18/16
      
      The following functions are added:
      - zcomp_strm_single_find()/zcomp_strm_single_release()
        find and release a compression stream, implement required locking
      - zcomp_strm_single_create()/zcomp_strm_single_destroy()
        create and destroy zcomp_strm_single
      
      New ->strm_find() and ->strm_release() callbacks are added to zcomp; they are
      set to zcomp_strm_single_find() and zcomp_strm_single_release() during
      initialisation.  Instead of direct locking and zcomp_strm access from
      zcomp_strm_find() and zcomp_strm_release(), zcomp now calls ->strm_find()
      and ->strm_release() respectively.
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9cc97529
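      The single stream case then reduces to a mutex, roughly as below (a sketch;
      struct and field names are assumptions):
      
      	struct zcomp_strm_single {
      		struct mutex strm_lock;
      		struct zcomp_strm *zstrm;
      	};
      
      	static struct zcomp_strm *zcomp_strm_single_find(struct zcomp *comp)
      	{
      		struct zcomp_strm_single *zs = comp->stream;
      
      		mutex_lock(&zs->strm_lock);	/* one writer at a time */
      		return zs->zstrm;
      	}
      
      	static void zcomp_strm_single_release(struct zcomp *comp,
      					      struct zcomp_strm *zstrm)
      	{
      		struct zcomp_strm_single *zs = comp->stream;
      
      		mutex_unlock(&zs->strm_lock);
      	}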
    • zram: use zcomp compressing backends · b7ca232e
      Committed by Sergey Senozhatsky
      Do not perform direct LZO compress/decompress calls; initialise
      and use the zcomp LZO backend (single compression stream) instead.
      
      [akpm@linux-foundation.org: resolve conflicts with zram-delete-zram_init_device-fix.patch]
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b7ca232e
    • zram: introduce compressing backend abstraction · e7e1ef43
      Committed by Sergey Senozhatsky
      ZRAM performs direct LZO compression algorithm calls, making it the one
      and only option.  While LZO generally performs well, the LZ4 algorithm
      tends to have faster decompression (see http://code.google.com/p/lz4/
      for the full report):
      
      	Name            Ratio  C.speed D.speed
      	                        MB/s    MB/s
      	LZ4 (r101)      2.084    422    1820
      	LZO 2.06        2.106    414     600
      
      Thus, users who have mostly-read (decompress) usage scenarios, or a mixed
      workflow (writes with a relatively high number of read ops), will benefit
      from using the LZ4 compression backend.
      
      Introduce a compressing backend abstraction, zcomp, in order to support
      multiple compression algorithms with the following set of operations:
      
              .create
              .destroy
              .compress
              .decompress
      
      Schematically, a zram write() usually contains the following steps:
      0) preparation (decompression of partial IO, etc.)
      1) lock buffer_lock mutex (protects meta compress buffers)
      2) compress (using meta compress buffers)
      3) alloc and map zs_pool object
      4) copy compressed data (from meta compress buffers) to object allocated by 3)
      5) free previous pool page, assign a new one
      6) unlock buffer_lock mutex
      
      As we can see, the compression buffers must remain untouched from 1) to 4),
      because otherwise a concurrent write() could overwrite the data.  At the
      same time, zram_meta must be aware of a) the specific compression
      algorithm's memory requirements and b) the locking necessary to protect the
      compression buffers.  To remove requirement a), a new struct zcomp_strm is
      introduced, which contains a compress/decompress `buffer' and a compression
      algorithm `private' part, while struct zcomp implements zcomp_strm stream
      handling and locking and so removes requirement b) from zram_meta.  zcomp
      ->create() and ->destroy(), respectively, allocate and deallocate the
      algorithm-specific zcomp_strm `private' part.
      
      Every zcomp has a zcomp stream and a mutex to protect its compression
      stream.  The stream usage semantics remain the same -- only one write can
      hold the stream lock and use its buffers.  zcomp_strm_find() turns the
      caller into the exclusive user of a stream (holding the stream mutex until
      zram releases the stream), and zcomp_strm_release() makes the zcomp stream
      available again (unlocking the stream mutex).  Hence no concurrent write
      (compression) operations are possible at the moment.
      
      iozone -t 3 -R -r 16K -s 60M -I +Z
      
             test            base           patched
      --------------------------------------------------
        Initial write      597992.91       591660.58
              Rewrite      609674.34       616054.97
                 Read     2404771.75      2452909.12
              Re-read     2459216.81      2470074.44
         Reverse Read     1652769.66      1589128.66
          Stride read     2202441.81      2202173.31
          Random read     2236311.47      2276565.31
       Mixed workload     1423760.41      1709760.06
         Random write      579584.08       615933.86
               Pwrite      597550.02       594933.70
                Pread     1703672.53      1718126.72
               Fwrite     1330497.06      1461054.00
                Fread     3922851.00      3957242.62
      
      Usage examples:
      
      	comp = zcomp_create(NAME) /* NAME e.g. "lzo" */
      
      which initialises the compressing backend if the requested algorithm is supported.
      
      Compress:
      	zstrm = zcomp_strm_find(comp)
      	zcomp_compress(comp, zstrm, src, &dst_len)
      	[..] /* copy compressed data */
      	zcomp_strm_release(comp, zstrm)
      
      Decompress:
      	zcomp_decompress(comp, src, src_len, dst);
      
      Free the compressing backend and its zcomp stream:
      	zcomp_destroy(comp)
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e7e1ef43
    • zram: delete zram_init_device() · b67d1ec1
      Committed by Sergey Senozhatsky
      Allocate a new `zram_meta' in disksize_store() only for an uninitialised
      zram device, saving a number of allocations and deallocations in case
      disksize_store() is called on a currently used device.  At the same time the
      zram_meta stack variable is not necessary, because we can set ->meta
      directly.  There is also no need to set the QUEUE_FLAG_NONROT queue flag on
      every disksize_store(); set it once during device creation.
      
      [minchan@kernel.org: handle zram->meta alloc fail case]
      [minchan@kernel.org: prevent lockdep spew of init_lock]
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Acked-by: Jerome Marchand <jmarchan@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b67d1ec1