1. 06 May 2015 (1 commit)
    • Revert "dm crypt: fix deadlock when async crypto algorithm returns -EBUSY" · c0403ec0
      Committed by Rabin Vincent
      This reverts Linux 4.1-rc1 commit 0618764c.
      
      The problem which that commit attempts to fix actually lies in the
      Freescale CAAM crypto driver not dm-crypt.
      
      dm-crypt uses CRYPTO_TFM_REQ_MAY_BACKLOG.  This means that the crypto
      driver should internally backlog requests which arrive when the queue is
      full and process them later.  Until the crypto hw's queue becomes full,
      the driver returns -EINPROGRESS.  When the crypto hw's queue is full,
      the driver returns -EBUSY, and if CRYPTO_TFM_REQ_MAY_BACKLOG is set, is
      expected to backlog the request and process it when the hardware has
      queue space.  At the point when the driver takes the request from the
      backlog and starts processing it, it calls the completion function with
      a status of -EINPROGRESS.  The completion function is called (for a
      second time, in the case of backlogged requests) with a status/err of 0
      when a request is done.
      
      Crypto drivers for hardware without hardware queueing use the
      crypto_init_queue(), crypto_enqueue_request(), crypto_dequeue_request()
      and crypto_get_backlog() helpers to implement this behaviour correctly,
      while others implement this behaviour without these helpers (ccp, for
      example).
      
      dm-crypt (before the patch that needs reverting) uses this API
      correctly.  It queues up as many requests as the hw queues will allow
      (i.e. as long as it gets back -EINPROGRESS from the request function).
      Then, when it sees at least one backlogged request (gets -EBUSY), it
      waits till that backlogged request is handled (completion gets called
      with -EINPROGRESS), and then continues.  The references to
      af_alg_wait_for_completion() and af_alg_complete() in that commit's
      commit message are irrelevant because those functions only handle one
      request at a time, unlike dm-crypt.
      
      The problem is that the Freescale CAAM driver, which that commit
      describes as having been tested with, fails to implement the
      backlogging behaviour correctly.  In caam_jr_enqueue(), if the hardware
      queue is full, it simply returns -EBUSY without backlogging the request.
      The observed deadlock is not described in the commit message, but it is
      evidently the wait_for_completion() in crypt_convert(), where dm-crypt
      would wait for the completion to be called with -EINPROGRESS in the
      case of backlogged requests.  This completion will never be completed
      due to the bug in the CAAM driver.
      
      Commit 0618764c incorrectly made dm-crypt wait for every request,
      even when the driver/hardware queues are not full, which means that
      dm-crypt will never see -EBUSY.  This means that that commit will cause
      a performance regression on all crypto drivers which implement the API
      correctly.
      
      Revert it.  Correct backlog handling should be implemented in the CAAM
      driver instead.
      
      Cc'ing stable purely because commit 0618764c did.  If for some reason
      a stable@ kernel did pick up commit 0618764c it should get reverted.
      Signed-off-by: Rabin Vincent <rabin.vincent@axis.com>
      Reviewed-by: Horia Geanta <horia.geanta@freescale.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      c0403ec0
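
      For illustration, the submit/complete pattern the revert restores can be
      sketched as follows, assuming a driver that honours
      CRYPTO_TFM_REQ_MAY_BACKLOG as described above; the demo_ctx structure and
      the more_work()/next_request()/finish_one_request() helpers are invented
      placeholders, not dm-crypt code:

        #include <linux/completion.h>
        #include <linux/crypto.h>
        #include <linux/types.h>

        struct demo_ctx {
            struct completion restart;  /* init_completion()'d by the caller;
                                           signalled when a backlogged request
                                           is taken off the backlog */
            /* ... per-request bookkeeping would live here ... */
        };

        /* Invented helpers standing in for the caller's own bookkeeping. */
        bool more_work(struct demo_ctx *ctx);
        struct ablkcipher_request *next_request(struct demo_ctx *ctx);
        void finish_one_request(struct demo_ctx *ctx, int error);

        /* Callback registered via ablkcipher_request_set_callback(). */
        static void demo_async_done(struct crypto_async_request *async_req, int error)
        {
            struct demo_ctx *ctx = async_req->data;

            if (error == -EINPROGRESS) {
                /* A backlogged request has just entered the hw queue:
                 * wake the submitter so it can keep queueing. */
                complete(&ctx->restart);
                return;
            }

            /* 0 (or a real error): this request is finished. */
            finish_one_request(ctx, error);
        }

        static void demo_submit_all(struct demo_ctx *ctx)
        {
            while (more_work(ctx)) {
                int r = crypto_ablkcipher_encrypt(next_request(ctx));

                switch (r) {
                case -EINPROGRESS:
                    continue;                   /* accepted; keep queueing */
                case -EBUSY:
                    /* hw queue full, request backlogged: wait for the
                     * -EINPROGRESS notification, then carry on */
                    wait_for_completion(&ctx->restart);
                    reinit_completion(&ctx->restart);
                    continue;
                case 0:
                    finish_one_request(ctx, 0); /* completed synchronously */
                    continue;
                default:
                    finish_one_request(ctx, r); /* hard error */
                    return;
                }
            }
        }
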
  2. 17 Apr 2015 (1 commit)
  3. 16 Apr 2015 (3 commits)
    • dm crypt: fix deadlock when async crypto algorithm returns -EBUSY · 0618764c
      Committed by Ben Collins
      I suspect this doesn't show up for most anyone because software
      algorithms typically don't have a sense of being too busy.  However,
      when working with the Freescale CAAM driver it will return -EBUSY on
      occasion under heavy load -- which resulted in a dm-crypt deadlock.
      
      After checking the logic in some other drivers, the scheme for
      crypt_convert() and its callback, kcryptd_async_done(), was not
      correctly laid out to properly handle -EBUSY or -EINPROGRESS.
      
      Fix this by using the completion for both -EBUSY and -EINPROGRESS.  Now
      crypt_convert()'s use of completion is comparable to
      af_alg_wait_for_completion().  Similarly, kcryptd_async_done() follows
      the pattern used in af_alg_complete().
      
      Before this fix dm-crypt would lockup within 1-2 minutes running with
      the CAAM driver.  Fix was regression tested against software algorithms
      on PPC32 and x86_64, and things seem perfectly happy there as well.
      Signed-off-by: Ben Collins <ben.c@servergy.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
      0618764c
    • dm crypt: leverage immutable biovecs when decrypting on read · 59779079
      Committed by Mike Snitzer
      Commit 003b5c57 ("block: Convert drivers to immutable biovecs")
      stopped short of changing dm-crypt to leverage the fact that the biovec
      array of a bio will no longer be modified.
      
      Switch to using bio_clone_fast() when cloning bios for decryption after
      read.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      59779079
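
      A rough sketch of what such a clone path can look like, with invented
      names (clone_for_decrypt() and the bs argument); the point is that
      bio_clone_fast() lets the clone share the original's biovec array
      instead of copying it:

        #include <linux/bio.h>
        #include <linux/gfp.h>

        static struct bio *clone_for_decrypt(struct bio *base_bio, struct bio_set *bs)
        {
            struct bio *clone;

            /* Shares base_bio's (now immutable) bvec array instead of
             * copying every segment the way a full clone would. */
            clone = bio_clone_fast(base_bio, GFP_NOIO, bs);
            if (!clone)
                return NULL;

            /* The caller still sets its own bi_end_io / bi_private
             * before submitting the clone. */
            return clone;
        }
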
    • dm crypt: update URLs to new cryptsetup project page · e44f23b3
      Committed by Milan Broz
      Cryptsetup home page moved to GitLab.
      Also remove the link to the abandoned Truecrypt page.
      Signed-off-by: Milan Broz <gmazyland@gmail.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      e44f23b3
  4. 17 Feb 2015 (7 commits)
    • dm crypt: sort writes · b3c5fd30
      Committed by Mikulas Patocka
      Write requests are sorted in a red-black tree structure and are
      submitted in the sorted order.
      
      In theory the sorting should be performed by the underlying disk
      scheduler, however, in practice the disk scheduler only accepts and
      sorts a finite number of requests.  To allow the sorting of all
      requests, dm-crypt needs to implement its own sorting.
      
      The overhead associated with rbtree-based sorting is considered
      negligible so it is not used conditionally.  Even on SSDs sorting can be
      beneficial since in-order request dispatch promotes lower-latency IO
      completion to the upper layers.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      b3c5fd30
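
      A hedged sketch of sector-ordered rbtree insertion and draining of the
      kind described above; struct demo_io and the function names are
      illustrative, not the actual dm-crypt structures:

        #include <linux/rbtree.h>
        #include <linux/bio.h>
        #include <linux/blkdev.h>

        struct demo_io {
            struct rb_node rb_node;
            sector_t sector;
            struct bio *bio;
        };

        static void demo_insert_sorted(struct rb_root *root, struct demo_io *io)
        {
            struct rb_node **link = &root->rb_node, *parent = NULL;

            while (*link) {
                struct demo_io *node = rb_entry(*link, struct demo_io, rb_node);

                parent = *link;
                link = io->sector < node->sector ?
                        &(*link)->rb_left : &(*link)->rb_right;
            }

            rb_link_node(&io->rb_node, parent, link);
            rb_insert_color(&io->rb_node, root);
        }

        /* The dedicated write thread then drains the tree in sector order. */
        static void demo_submit_sorted(struct rb_root *root)
        {
            struct rb_node *n;

            while ((n = rb_first(root)) != NULL) {
                struct demo_io *io = rb_entry(n, struct demo_io, rb_node);

                rb_erase(n, root);
                generic_make_request(io->bio);
            }
        }
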
    • dm crypt: add 'submit_from_crypt_cpus' option · 0f5d8e6e
      Committed by Mikulas Patocka
      Make it possible to disable offloading writes by setting the optional
      'submit_from_crypt_cpus' table argument.
      
      There are some situations where offloading write bios from the
      encryption threads to a single thread degrades performance
      significantly.
      
      The default is to offload write bios to the same thread because it
      benefits CFQ to have writes submitted using the same IO context.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      0f5d8e6e
    • dm crypt: offload writes to thread · dc267621
      Committed by Mikulas Patocka
      Submitting write bios directly in the encryption thread caused serious
      performance degradation.  On a multiprocessor machine, encryption requests
      finish in a different order than they were submitted in.  Consequently,
      write requests would be submitted in a different order, which could
      severely degrade performance.
      
      Move the submission of write requests to a separate thread so that the
      requests can be sorted before submitting.  But this commit improves
      dm-crypt performance even without having dm-crypt perform request
      sorting (in particular it enables IO schedulers like CFQ to sort more
      effectively).
      
      Note: it is required that a previous commit ("dm crypt: don't allocate
      pages for a partial request") be applied before applying this patch.
      Otherwise, this commit could introduce a crash.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      dc267621
    • dm crypt: remove unused io_pool and _crypt_io_pool · 94f5e024
      Committed by Mikulas Patocka
      The previous commit ("dm crypt: don't allocate pages for a partial
      request") stopped using the io_pool slab mempool and backing
      _crypt_io_pool kmem cache.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      94f5e024
    • dm crypt: avoid deadlock in mempools · 7145c241
      Committed by Mikulas Patocka
      Fix a theoretical deadlock introduced in the previous commit ("dm crypt:
      don't allocate pages for a partial request").
      
      The function crypt_alloc_buffer may be called concurrently.  If we allocate
      from the mempool concurrently, there is a possibility of deadlock.  For
      example, if we have a mempool of 256 pages and two processes, each wanting
      256 pages, allocate from the mempool concurrently, it may deadlock in a
      situation where both processes have allocated 128 pages and the mempool
      is exhausted.
      
      To avoid such a scenario we allocate the pages under a mutex.  In order
      not to degrade performance with excessive locking, we first try
      non-blocking allocations without the mutex and, if that fails, we fall
      back to blocking allocations with the mutex held.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      7145c241
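
      A hedged sketch of this scheme, with invented names: the whole
      multi-page allocation is first attempted without blocking, and on
      failure everything is released and the allocation is retried with a
      blocking gfp mask while holding a mutex, so at most one request at a
      time can sleep on the mempool:

        #include <linux/mempool.h>
        #include <linux/mutex.h>
        #include <linux/gfp.h>
        #include <linux/types.h>

        static int demo_alloc_pages(mempool_t *pool, struct mutex *lock,
                                    struct page **pages, unsigned int nr_pages)
        {
            gfp_t gfp = GFP_NOWAIT | __GFP_NOWARN;
            bool locked = false;
            unsigned int i;

        retry:
            for (i = 0; i < nr_pages; i++) {
                pages[i] = mempool_alloc(pool, gfp);
                if (!pages[i]) {
                    /* Give back partial progress so other requests can
                     * finish and refill the pool ... */
                    while (i--)
                        mempool_free(pages[i], pool);
                    /* ... then retry as the only caller that is allowed
                     * to sleep waiting on the mempool. */
                    if (!locked) {
                        mutex_lock(lock);
                        locked = true;
                    }
                    gfp = GFP_NOIO;
                    goto retry;
                }
            }

            if (locked)
                mutex_unlock(lock);
            return 0;
        }
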
    • dm crypt: don't allocate pages for a partial request · cf2f1abf
      Committed by Mikulas Patocka
      Change crypt_alloc_buffer so that it only ever allocates pages for a
      full request.  This is a prerequisite for the commit "dm crypt: offload
      writes to thread".
      
      This change simplifies the dm-crypt code at the expense of reduced
      throughput in low memory conditions (where allocation for a partial
      request is most useful).
      
      Note: the next commit ("dm crypt: avoid deadlock in mempools") is needed
      to fix a theoretical deadlock.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      cf2f1abf
    • dm crypt: use unbound workqueue for request processing · f3396c58
      Committed by Mikulas Patocka
      Use an unbound workqueue by default so that work is automatically balanced
      between the available CPUs.  The original behavior of encrypting on the
      same cpu that the IO was submitted on can still be enabled by setting the
      optional 'same_cpu_crypt' table argument.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      f3396c58
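
      A minimal sketch of the two workqueue flavours, assuming a
      'same_cpu_crypt'-style flag; the function name and the exact flag
      combination are illustrative and may differ from the real dm-crypt code:

        #include <linux/workqueue.h>
        #include <linux/cpumask.h>
        #include <linux/types.h>

        static struct workqueue_struct *demo_create_crypt_wq(bool same_cpu)
        {
            if (same_cpu)
                /* one worker, staying on the submitting CPU */
                return alloc_workqueue("kcryptd-demo",
                                       WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 1);

            /* unbound: the scheduler spreads the work across CPUs */
            return alloc_workqueue("kcryptd-demo",
                                   WQ_UNBOUND | WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM,
                                   num_online_cpus());
        }
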
  5. 02 Dec 2014 (1 commit)
  6. 14 Oct 2014 (1 commit)
  7. 29 Aug 2014 (1 commit)
    • dm crypt: fix access beyond the end of allocated space · d49ec52f
      Committed by Mikulas Patocka
      The DM crypt target accesses memory beyond allocated space resulting in
      a crash on 32 bit x86 systems.
      
      This bug is very old (it dates back to 2.6.25 commit 3a7f6c99 "dm
      crypt: use async crypto").  However, this bug was masked by the fact
      that kmalloc rounds the size up to the next power of two.  This bug
      wasn't exposed until 3.17-rc1 commit 298a9fa0 ("dm crypt: use per-bio
      data").  By switching to using per-bio data there was no longer any
      padding beyond the end of a dm-crypt allocated memory block.
      
      To minimize allocation overhead dm-crypt puts several structures into one
      block allocated with kmalloc.  The block holds struct ablkcipher_request,
      cipher-specific scratch pad (crypto_ablkcipher_reqsize(any_tfm(cc))),
      struct dm_crypt_request and an initialization vector.
      
      The variable dmreq_start is set to the offset of struct dm_crypt_request
      within this memory block.  dm-crypt allocates the block with this size:
      cc->dmreq_start + sizeof(struct dm_crypt_request) + cc->iv_size.
      
      When accessing the initialization vector, dm-crypt uses the function
      iv_of_dmreq, which performs this calculation: ALIGN((unsigned long)(dmreq
      + 1), crypto_ablkcipher_alignmask(any_tfm(cc)) + 1).
      
      dm-crypt allocates "cc->iv_size" bytes beyond the end of the dm_crypt_request
      structure.  However, when dm-crypt accesses the initialization vector, it
      takes a pointer to the end of dm_crypt_request, aligns it, and then uses
      it as the initialization vector.  If the end of dm_crypt_request is not
      aligned on a crypto_ablkcipher_alignmask(any_tfm(cc)) boundary the
      alignment causes the initialization vector to point beyond the allocated
      space.
      
      Fix this bug by calculating the variable iv_size_padding and adding it
      to the allocated size.
      
      Also correct the alignment of dm_crypt_request.  struct dm_crypt_request
      is specific to dm-crypt (it isn't used by the crypto subsystem at all),
      so it is aligned on __alignof__(struct dm_crypt_request).
      
      Also align per_bio_data_size on ARCH_KMALLOC_MINALIGN, so that it is
      aligned as if the block was allocated with kmalloc.
      Reported-by: Krzysztof Kolasa <kkolasa@winsoft.pl>
      Tested-by: Milan Broz <gmazyland@gmail.com>
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      d49ec52f
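
      To make the arithmetic concrete, here is a hedged sketch of the corrected
      allocation size, reusing the identifiers quoted above and rounding the
      padding up to the worst case (this is not the verbatim patch):

        /* request block layout in one kmalloc'ed buffer:
         * | ablkcipher_request | cipher reqsize | dm_crypt_request | pad | IV | */
        unsigned int iv_size_padding;
        size_t alloc_size;

        cc->dmreq_start = sizeof(struct ablkcipher_request) +
                          crypto_ablkcipher_reqsize(any_tfm(cc));
        cc->dmreq_start = ALIGN(cc->dmreq_start,
                                __alignof__(struct dm_crypt_request));

        /* Aligning the pointer just past struct dm_crypt_request can move it
         * forward by at most the cipher's alignmask, so reserve that much. */
        iv_size_padding = crypto_ablkcipher_alignmask(any_tfm(cc));

        alloc_size = cc->dmreq_start + sizeof(struct dm_crypt_request) +
                     iv_size_padding + cc->iv_size;
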
  8. 02 Aug 2014 (1 commit)
    • dm crypt: use per-bio data · 298a9fa0
      Committed by Mikulas Patocka
      Change dm-crypt so that it uses auxiliary data allocated with the bio.
      
      Dm-crypt requires two allocations per request - struct dm_crypt_io and
      struct ablkcipher_request (with other data appended to it).  It
      previously only used mempool allocations.
      
      Some requests may require more dm_crypt_ios and ablkcipher_requests;
      however, most requests need just one of each of these two structures to
      complete.
      
      This patch changes it so that the first dm_crypt_io and ablkcipher_request
      are allocated with the bio (using target per_bio_data_size option).  If
      the request needs additional values, they are allocated from the mempool.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      298a9fa0
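
      A minimal sketch of the per-bio-data pattern this commit adopts;
      struct demo_io and the demo_* functions are invented names, while
      per_bio_data_size and dm_per_bio_data() are the device-mapper facilities
      the message refers to:

        #include <linux/device-mapper.h>

        struct demo_io {
            struct dm_target *ti;
            struct bio *base_bio;
            /* the first ablkcipher_request would be appended here */
        };

        static int demo_ctr(struct dm_target *ti, unsigned int argc, char **argv)
        {
            /* Ask device-mapper to reserve this much space with every bio. */
            ti->per_bio_data_size = sizeof(struct demo_io);
            return 0;
        }

        static int demo_map(struct dm_target *ti, struct bio *bio)
        {
            /* No mempool allocation for the common case: the space already
             * travels with the bio. */
            struct demo_io *io = dm_per_bio_data(bio, ti->per_bio_data_size);

            io->ti = ti;
            io->base_bio = bio;
            /* ... kick off encryption/decryption using io ... */
            return DM_MAPIO_SUBMITTED;
        }
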
  9. 11 Jul 2014 (1 commit)
  10. 15 May 2014 (1 commit)
    • dm crypt: fix cpu hotplug crash by removing per-cpu structure · 610f2de3
      Committed by Mikulas Patocka
      The DM crypt target used per-cpu structures to hold pointers to an
      ablkcipher_request structure.  The code assumed that the work item keeps
      executing on a single CPU, so it didn't use synchronization when
      accessing this structure.
      
      If a CPU is disabled by writing 0 to /sys/devices/system/cpu/cpu*/online,
      the work item could be moved to another CPU.  This causes dm-crypt
      crashes, like the following, because the code starts using an incorrect
      ablkcipher_request:
      
       smpboot: CPU 7 is now offline
       BUG: unable to handle kernel NULL pointer dereference at 0000000000000130
       IP: [<ffffffffa1862b3d>] crypt_convert+0x12d/0x3c0 [dm_crypt]
       ...
       Call Trace:
        [<ffffffffa1864415>] ? kcryptd_crypt+0x305/0x470 [dm_crypt]
        [<ffffffff81062060>] ? finish_task_switch+0x40/0xc0
        [<ffffffff81052a28>] ? process_one_work+0x168/0x470
        [<ffffffff8105366b>] ? worker_thread+0x10b/0x390
        [<ffffffff81053560>] ? manage_workers.isra.26+0x290/0x290
        [<ffffffff81058d9f>] ? kthread+0xaf/0xc0
        [<ffffffff81058cf0>] ? kthread_create_on_node+0x120/0x120
        [<ffffffff813464ac>] ? ret_from_fork+0x7c/0xb0
        [<ffffffff81058cf0>] ? kthread_create_on_node+0x120/0x120
      
      Fix this bug by removing the per-cpu definition.  The structure
      ablkcipher_request is accessed via a pointer from convert_context.
      Consequently, if the work item is rescheduled to a different CPU, the
      thread still uses the same ablkcipher_request.
      
      This change may undermine performance improvements intended by commit
      c0297721 ("dm crypt: scale to multiple cpus") on select hardware.  In
      practice no performance difference was observed on recent hardware.  But
      regardless, correctness is more important than performance.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
      610f2de3
  11. 24 Nov 2013 (2 commits)
    • block: Convert drivers to immutable biovecs · 003b5c57
      Committed by Kent Overstreet
      Now that we've got a mechanism for immutable biovecs -
      bi_iter.bi_bvec_done - we need to convert drivers to use primitives that
      respect it instead of using the bvec array directly.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: NeilBrown <neilb@suse.de>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: dm-devel@redhat.com
      003b5c57
    • block: Abstract out bvec iterator · 4f024f37
      Committed by Kent Overstreet
      Immutable biovecs are going to require an explicit iterator. To
      implement immutable bvecs, a later patch is going to add a bi_bvec_done
      member to this struct; for now, this patch effectively just renames
      things.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: "Ed L. Cashin" <ecashin@coraid.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Lars Ellenberg <drbd-dev@lists.linbit.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Matthew Wilcox <willy@linux.intel.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Yehuda Sadeh <yehuda@inktank.com>
      Cc: Sage Weil <sage@inktank.com>
      Cc: Alex Elder <elder@inktank.com>
      Cc: ceph-devel@vger.kernel.org
      Cc: Joshua Morris <josh.h.morris@us.ibm.com>
      Cc: Philip Kelleher <pjk1939@linux.vnet.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: dm-devel@redhat.com
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: linux390@de.ibm.com
      Cc: Boaz Harrosh <bharrosh@panasas.com>
      Cc: Benny Halevy <bhalevy@tonian.com>
      Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Chris Mason <chris.mason@fusionio.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Andreas Dilger <adilger.kernel@dilger.ca>
      Cc: Jaegeuk Kim <jaegeuk.kim@samsung.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Dave Kleikamp <shaggy@kernel.org>
      Cc: Joern Engel <joern@logfs.org>
      Cc: Prasad Joshi <prasadjoshi.linux@gmail.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: KONISHI Ryusuke <konishi.ryusuke@lab.ntt.co.jp>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Ben Myers <bpm@sgi.com>
      Cc: xfs@oss.sgi.com
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Herton Ronaldo Krzesinski <herton.krzesinski@canonical.com>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Guo Chao <yan@linux.vnet.ibm.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Asai Thambi S P <asamymuthupa@micron.com>
      Cc: Selvan Mani <smani@micron.com>
      Cc: Sam Bradshaw <sbradshaw@micron.com>
      Cc: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
      Cc: "Roger Pau Monné" <roger.pau@citrix.com>
      Cc: Jan Beulich <jbeulich@suse.com>
      Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Cc: Ian Campbell <Ian.Campbell@citrix.com>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Jiang Liu <jiang.liu@huawei.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchand@redhat.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Peng Tao <tao.peng@emc.com>
      Cc: Andy Adamson <andros@netapp.com>
      Cc: fanchaoting <fanchaoting@cn.fujitsu.com>
      Cc: Jie Liu <jeff.liu@oracle.com>
      Cc: Sunil Mushran <sunil.mushran@gmail.com>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Namjae Jeon <namjae.jeon@samsung.com>
      Cc: Pankaj Kumar <pankaj.km@samsung.com>
      Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Mel Gorman <mgorman@suse.de>
      4f024f37
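
      A small illustration of the rename, using an invented helper:

        #include <linux/bio.h>

        static sector_t demo_start_sector(struct bio *bio)
        {
            /* before this series: bio->bi_sector (likewise bi_size, bi_idx) */
            /* after: the iterator state lives in bio->bi_iter */
            return bio->bi_iter.bi_sector;
        }
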
  12. 15 Nov 2013 (1 commit)
  13. 10 Nov 2013 (2 commits)
    • dm crypt: add TCW IV mode for old CBC TCRYPT containers · ed04d981
      Committed by Milan Broz
      dm-crypt can already activate TCRYPT (TrueCrypt compatible) containers
      in LRW or XTS block encryption mode.
      
      TCRYPT containers prior to version 4.1 use CBC mode with some additional
      tweaks; this patch adds support for these containers.
      
      This new mode is implemented using a special IV generator named TCW
      (TrueCrypt IV with whitening).  TCW IV only supports containers that are
      encrypted with one cipher (tested with AES, Twofish, Serpent, CAST5 and
      TripleDES).
      
      While this mode is legacy and is known to be vulnerable to some
      watermarking attacks (e.g. revealing of hidden disk existence) it can
      still be useful to activate old containers without using 3rd party
      software or for independent forensic analysis of such containers.
      
      (Both the userspace and kernel code are independent implementations
      based on the format documentation; they completely avoid use of the
      original source code.)
      
      The TCW IV generator uses two additional keys: Kw (whitening seed, size
      is always 16 bytes - TCW_WHITENING_SIZE) and Kiv (IV seed, size is
      always the IV size of the selected cipher).  These keys are concatenated
      at the end of the main encryption key provided in the mapping table.
      
      While the whitening is completely independent of the IV, it is
      implemented inside the IV generator for simplicity.
      
      The whitening value is always 16 bytes long and is calculated per sector
      from the provided Kw as an initial seed, xored with the sector number and
      mixed with the CRC32 algorithm.  The resulting value is xored with the
      ciphertext sector content.
      
      The IV is calculated from the provided Kiv as the initial IV seed and
      xored with the sector number.
      
      A detailed calculation can be found in the Truecrypt documentation for
      version < 4.1 and will also be described on the dm-crypt site, see:
      http://code.google.com/p/cryptsetup/wiki/DMCrypt
      
      The experimental support for activation of these containers is already
      present in the git devel branch of cryptsetup.
      Signed-off-by: Milan Broz <gmazyland@gmail.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      ed04d981
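
      A loose sketch of the whitening step, following the prose description
      above only; the names are invented and the real crypt_iv_tcw_whitening()
      differs in detail, so this must not be used for interoperating with
      actual TCRYPT containers:

        #include <linux/crc32.h>
        #include <linux/string.h>
        #include <linux/types.h>

        #define DEMO_WHITENING_SIZE 16

        static void demo_tcw_whitening(const u8 *kw, u64 sector,
                                       u8 *data, unsigned int sector_size)
        {
            u8 buf[DEMO_WHITENING_SIZE];
            unsigned int i;

            /* seed with Kw, then xor in the sector number */
            memcpy(buf, kw, DEMO_WHITENING_SIZE);
            for (i = 0; i < DEMO_WHITENING_SIZE; i++)
                buf[i] ^= ((u8 *)&sector)[i % sizeof(sector)];

            /* mix with CRC32, 4 bytes at a time */
            for (i = 0; i < DEMO_WHITENING_SIZE; i += 4) {
                u32 crc = crc32_le(0, buf + i, 4);

                memcpy(buf + i, &crc, 4);
            }

            /* xor the whitening value into the ciphertext of the sector */
            for (i = 0; i < sector_size; i++)
                data[i] ^= buf[i % DEMO_WHITENING_SIZE];
        }
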
    • dm crypt: properly handle extra key string in initialization · da31a078
      Committed by Milan Broz
      Some encryption modes use extra keys (e.g. loopAES has IV seed) which
      are not used in block cipher initialization but are part of key string
      in table constructor.
      
      This patch adds an additional field which describes the length of the
      extra key(s) and subtracts it before the real encryption key is set.
      
      The key_size always includes the size, in bytes, of the key provided
      in mapping table.
      
      The key_parts field describes how many parts (usually keys) are contained
      in the whole key buffer.  And key_extra_size contains the size, in bytes,
      of the additional key(s) part (this number of bytes must be subtracted
      because it is processed by the IV generator).
      
      | K1 | K2 | .... | K64 |      Kiv       |
      |----------- key_size ----------------- |
      |                      |-key_extra_size-|
      |     [64 keys]        |  [1 key]       | => key_parts = 65
      
      Example where key string contains main key K, whitening key
      Kw and IV seed Kiv:
      
      |     K       |   Kiv   |       Kw      |
      |--------------- key_size --------------|
      |             |-----key_extra_size------|
      |  [1 key]    | [1 key] |     [1 key]   | => key_parts = 3
      
      Because key_extra_size is calculated during IV mode setting, key
      initialization is moved after this step.
      
      For now, this change has no effect on supported modes (thanks to ilog2
      rounding) but it is required by the following patch.
      
      Also, fix a sparse warning in crypt_iv_lmk_one().
      Signed-off-by: Milan Broz <gmazyland@gmail.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      da31a078
  14. 23 Aug 2013 (1 commit)
  15. 24 Mar 2013 (1 commit)
    • block: Convert some code to bio_for_each_segment_all() · cb34e057
      Committed by Kent Overstreet
      More prep work for immutable bvecs:
      
      A few places in the code were either open coding or using the wrong
      version - fix.
      
      After we introduce the bvec iter, it'll no longer be possible to modify
      the biovec through bio_for_each_segment_all() - it doesn't increment a
      pointer to the current bvec, you pass in a struct bio_vec (not a
      pointer) which is updated with what the current biovec would be (taking
      into account bi_bvec_done and bi_size).
      
      So because of that it's more worthwhile to be consistent about
      bio_for_each_segment()/bio_for_each_segment_all() usage.
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      CC: Jens Axboe <axboe@kernel.dk>
      CC: NeilBrown <neilb@suse.de>
      CC: Alasdair Kergon <agk@redhat.com>
      CC: dm-devel@redhat.com
      CC: Alexander Viro <viro@zeniv.linux.org.uk>
      cb34e057
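
      A minimal usage sketch for the era of this commit, with an invented
      function name; driver-owned bios are walked (and may be modified) with
      bio_for_each_segment_all(), while received bios should be walked
      read-only with bio_for_each_segment():

        #include <linux/bio.h>
        #include <linux/gfp.h>

        static void demo_free_own_pages(struct bio *clone)
        {
            struct bio_vec *bv;
            int i;

            /* _all walks the driver-owned bvec array directly. */
            bio_for_each_segment_all(bv, clone, i) {
                __free_page(bv->bv_page);
                bv->bv_page = NULL;
            }
        }
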
  16. 02 Mar 2013 (2 commits)
    • dm: rename request variables to bios · 55a62eef
      Committed by Alasdair G Kergon
      Use 'bio' in the name of variables and functions that deal with
      bios rather than 'request' to avoid confusion with the normal
      block layer use of 'request'.
      
      No functional changes.
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      55a62eef
    • dm: fix truncated status strings · fd7c092e
      Committed by Mikulas Patocka
      Avoid returning a truncated table or status string instead of setting
      the DM_BUFFER_FULL_FLAG when the last target of a table fills the
      buffer.
      
      When processing a table or status request, the function retrieve_status
      calls ti->type->status. If ti->type->status returns non-zero,
      retrieve_status assumes that the buffer overflowed and sets
      DM_BUFFER_FULL_FLAG.
      
      However, targets don't return non-zero values from their status method
      on overflow. Most targets always return zero.
      
      If a buffer overflow happens in a target that is not the last in the
      table, it gets noticed during the next iteration of the loop in
      retrieve_status; but if a buffer overflow happens in the last target, it
      goes unnoticed and erroneously truncated data is returned.
      
      In the current code, the targets behave in the following way:
      * dm-crypt returns -ENOMEM if there is not enough space to store the
        key, but it returns 0 on all other overflows.
      * dm-thin returns errors from the status method if a disk error happened.
        This is incorrect because retrieve_status doesn't check the error
        code, it assumes that all non-zero values mean buffer overflow.
      * all the other targets always return 0.
      
      This patch changes the ti->type->status function to return void (because
      most targets don't use the return code). Overflow is detected in
      retrieve_status: if the status method fills up the remaining space
      completely, it is assumed that buffer overflow happened.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      fd7c092e
  17. 22 Dec 2012 (1 commit)
  18. 09 Sep 2012 (2 commits)
    • block: Add bio_clone_bioset(), bio_clone_kmalloc() · bf800ef1
      Committed by Kent Overstreet
      Previously, there was bio_clone() but it only allocated from the fs bio
      set; as a result various users were open coding it and using
      __bio_clone().
      
      This changes bio_clone() to become bio_clone_bioset(), and then we add
      bio_clone() and bio_clone_kmalloc() as wrappers around it, making use of
      the functionality the last patch added.
      
      This will also help in a later patch changing how bio cloning works.
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      CC: Jens Axboe <axboe@kernel.dk>
      CC: NeilBrown <neilb@suse.de>
      CC: Alasdair Kergon <agk@redhat.com>
      CC: Boaz Harrosh <bharrosh@panasas.com>
      CC: Jeff Garzik <jeff@garzik.org>
      Acked-by: Jeff Garzik <jgarzik@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      bf800ef1
    • block: Generalized bio pool freeing · 395c72a7
      Committed by Kent Overstreet
      With the old code, when you allocate a bio from a bio pool you have to
      implement your own destructor that knows how to find the bio pool the
      bio was originally allocated from.
      
      This adds a new field to struct bio (bi_pool) and changes
      bio_alloc_bioset() to use it. This makes various bio destructors
      unnecessary, so they're then deleted.
      
      v6: Explain the temporary if statement in bio_put
      Signed-off-by: Kent Overstreet <koverstreet@google.com>
      CC: Jens Axboe <axboe@kernel.dk>
      CC: NeilBrown <neilb@suse.de>
      CC: Alasdair Kergon <agk@redhat.com>
      CC: Nicholas Bellinger <nab@linux-iscsi.org>
      CC: Lars Ellenberg <lars.ellenberg@linbit.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Acked-by: Nicholas Bellinger <nab@linux-iscsi.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      395c72a7
  19. 27 Jul 2012 (7 commits)
  20. 29 Mar 2012 (3 commits)
    • dm: reject trailing characters in sccanf input · 31998ef1
      Committed by Mikulas Patocka
      Device mapper uses sscanf to convert arguments to numbers. The problem is that
      the way we use it ignores additional unmatched characters in the scanned string.
      
      For example, `if (sscanf(string, "%d", &number) == 1)' will match a number,
      but it will also match a number with some garbage appended, like "123abc".
      
      As a result, device mapper accepts garbage after some numbers. For example
      the command `dmsetup create vg1-new --table "0 16384 linear 254:1bla 34816bla"'
      will pass without an error.
      
      This patch fixes all sscanf uses in device mapper. It appends "%c" with
      a pointer to a dummy character variable to every sscanf statement.
      
      The construct `if (sscanf(string, "%d%c", &number, &dummy) == 1)' succeeds
      only if string is a null-terminated number (optionally preceded by some
      whitespace characters). If there is some character appended after the number,
      sscanf matches "%c", writes the character to the dummy variable and returns 2.
      We check the return value for 1 and consequently reject numbers with some
      garbage appended.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      31998ef1
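
      A minimal sketch of the "%c" guard applied to a hypothetical
      table-argument parser (demo_parse_offset() is an invented name):

        #include <linux/kernel.h>
        #include <linux/errno.h>

        static int demo_parse_offset(const char *arg, unsigned long long *offset)
        {
            char dummy;

            /* "34816" matches 1; "34816bla" matches 2; "" matches 0. */
            if (sscanf(arg, "%llu%c", offset, &dummy) != 1)
                return -EINVAL;

            return 0;
        }
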
    • dm crypt: add missing error handling · 72c6e7af
      Committed by Mikulas Patocka
      Always set io->error to -EIO when an error is detected in dm-crypt.
      
      There were cases where an error code would be set only if we finish
      processing the last sector. If there were other encryption operations in
      flight, the error would be ignored and the bio would be returned with
      success as if no error happened.
      
      This bug is present in kcryptd_crypt_write_convert, kcryptd_crypt_read_convert
      and kcryptd_async_done.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@kernel.org
      Reviewed-by: Milan Broz <mbroz@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      72c6e7af
    • dm crypt: fix mempool deadlock · aeb2deae
      Committed by Mikulas Patocka
      This patch fixes a possible deadlock in dm-crypt's mempool use.
      
      Currently, dm-crypt reserves a mempool of MIN_BIO_PAGES pages.
      It allocates the first MIN_BIO_PAGES with a non-failing allocation (the
      allocation cannot fail and waits until the mempool is refilled). Further pages are
      allocated with different gfp flags that allow failing.
      
      Because allocations may be done in parallel, this code can deadlock. Example:
      there are two processes, each of which tries to allocate MIN_BIO_PAGES, and
      the processes run simultaneously.
      It may end up in a situation where each process allocates (MIN_BIO_PAGES / 2)
      pages. The mempool is exhausted. Each process waits for more pages to be freed
      to the mempool, which never happens.
      
      To avoid this deadlock scenario, this patch changes the code so that only
      the first page is allocated with a non-failing gfp mask. Allocation of further
      pages may fail.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@kernel.org
      Signed-off-by: Milan Broz <mbroz@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      aeb2deae