1. 27 May 2020 (12 commits)
  2. 22 May 2020 (2 commits)
  3. 19 May 2020 (13 commits)
  4. 17 May 2020 (2 commits)
  5. 14 May 2020 (7 commits)
    • block: blk-crypto-fallback for Inline Encryption · 488f6682
      Authored by Satya Tangirala
      Blk-crypto delegates crypto operations to inline encryption hardware
      when available. The separately configurable blk-crypto-fallback contains
      a software fallback to the kernel crypto API - when enabled, blk-crypto
      will use this fallback for en/decryption when inline encryption hardware
      is not available.
      
      This frees upper layers from having to check whether the underlying
      device supports inline encryption before deciding to specify an
      encryption context for a bio. It also allows for testing
      without actual inline encryption hardware - in particular, it makes it
      possible to test the inline encryption code in ext4 and f2fs simply by
      running xfstests with the inlinecrypt mount option, which in turn allows
      for things like the regular upstream regression testing of ext4 to cover
      the inline encryption code paths.
      
      For more details, refer to Documentation/block/inline-encryption.rst.
      Signed-off-by: Satya Tangirala <satyat@google.com>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
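The decision described above (use inline encryption hardware when present, otherwise fall back to the kernel crypto API, otherwise fail the bio) can be sketched as a simplified model. The names below are illustrative stand-ins, not the kernel's actual API:

```python
from enum import Enum

class CryptoPath(Enum):
    HW = "hardware"          # inline encryption hardware handles en/decryption
    FALLBACK = "fallback"    # blk-crypto-fallback uses the kernel crypto API
    UNSUPPORTED = "error"    # the bio would fail with an error

def pick_crypto_path(has_keyslot_manager: bool, fallback_enabled: bool) -> CryptoPath:
    """Decide how a bio carrying an encryption context is serviced.

    has_keyslot_manager models whether the device advertises inline
    encryption hardware; fallback_enabled models the separately
    configurable blk-crypto-fallback.
    """
    if has_keyslot_manager:
        return CryptoPath.HW
    if fallback_enabled:
        return CryptoPath.FALLBACK
    return CryptoPath.UNSUPPORTED
```

With the fallback enabled, upper layers such as ext4 and f2fs can attach encryption contexts unconditionally, which is what makes xfstests runs with the inlinecrypt mount option possible on ordinary hardware.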
    • block: Make blk-integrity preclude hardware inline encryption · d145dc23
      Authored by Satya Tangirala
      Whenever a device supports blk-integrity, make the kernel pretend that
      the device doesn't support inline encryption (essentially by setting the
      keyslot manager in the request queue to NULL).
      
      There's no hardware currently that supports both integrity and inline
      encryption. However, it seems possible that there will be such hardware
      in the near future (like the NVMe key per I/O support that might support
      both inline encryption and PI).
      
      But properly integrating both features is not trivial, and without
      real hardware that implements both, it is difficult to tell whether
      the majority of hardware supporting both will do it correctly.
      So it seems best not to support both features together right now, and
      to decide what to do at probe time.
      Signed-off-by: Satya Tangirala <satyat@google.com>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
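The probe-time behaviour described above can be modelled with a small sketch. The class and function names are hypothetical; in the kernel the effect is achieved by setting the request queue's keyslot manager pointer to NULL:

```python
class RequestQueue:
    """Toy model of a request queue: it either has a keyslot manager
    (i.e. inline encryption hardware) or it doesn't."""
    def __init__(self, keyslot_manager, supports_integrity: bool):
        self.ksm = keyslot_manager
        self.supports_integrity = supports_integrity

def finalize_probe(q: RequestQueue) -> RequestQueue:
    """At probe time, if the device supports blk-integrity, pretend it
    has no inline encryption hardware by dropping the keyslot manager."""
    if q.supports_integrity:
        q.ksm = None
    return q
```

Upper layers then see the device as "no inline encryption support" and either use the fallback or fail, so the untested integrity-plus-crypto combination is never exercised.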
    • block: Inline encryption support for blk-mq · a892c8d5
      Authored by Satya Tangirala
      We must have some way of letting a storage device driver know what
      encryption context it should use for en/decrypting a request. However,
      it's the upper layers (like the filesystem/fscrypt) that know about and
      manage encryption contexts. As such, when the upper layer submits a bio
      to the block layer, and this bio eventually reaches a device driver with
      support for inline encryption, the device driver will need to have been
      told the encryption context for that bio.
      
      We want to communicate the encryption context from the upper layer to the
      storage device along with the bio, when the bio is submitted to the block
      layer. To do this, we add a struct bio_crypt_ctx to struct bio, which can
      represent an encryption context (note that we can't use the bi_private
      field in struct bio for this, since that field is not meant to pass
      information across layers in the storage stack). We also introduce various
      functions to manipulate the bio_crypt_ctx and make the bio/request merging
      logic aware of the bio_crypt_ctx.
      
      We also make changes to blk-mq to make it handle bios with encryption
      contexts. blk-mq can merge many bios into the same request. These bios need
      to have contiguous data unit numbers (the necessary changes to blk-merge
      are also made to ensure this) - as such, it suffices to keep the data unit
      number of just the first bio, since that's all a storage driver needs to
      infer the data unit number to use for each data block in each bio in a
      request. blk-mq keeps track of the encryption context to be used for all
      the bios in a request with the request's rq_crypt_ctx. When the first bio
      is added to an empty request, blk-mq will program the encryption context
      of that bio into the request_queue's keyslot manager, and store the
      returned keyslot in the request's rq_crypt_ctx. All the functions to
      operate on encryption contexts are in blk-crypto.c.
      
      Upper layers only need to call bio_crypt_set_ctx with the encryption key,
      algorithm and data_unit_num; they don't have to worry about getting a
      keyslot for each encryption context, as blk-mq/blk-crypto handles that.
      Blk-crypto also makes it possible for request-based layered devices like
      dm-rq to make use of inline encryption hardware by cloning the
      rq_crypt_ctx and programming a keyslot in the new request_queue when
      necessary.
      
      Note that any user of the block layer can submit bios with an
      encryption context, such as filesystems, device-mapper targets, etc.
      Signed-off-by: Satya Tangirala <satyat@google.com>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
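The merge constraint described above (same key, contiguous data unit numbers, so the driver only needs the first bio's DUN) can be sketched as follows. `BioCryptCtx` and `can_merge` are illustrative stand-ins, not the kernel's actual bio_crypt_ctx helpers:

```python
from dataclasses import dataclass

@dataclass
class BioCryptCtx:
    key: str         # stand-in for a blk_crypto_key
    dun: int         # data unit number of the bio's first data unit
    data_units: int  # number of data units the bio covers

def can_merge(front: BioCryptCtx, back: BioCryptCtx) -> bool:
    """Allow a back-merge only when both bios use the same key and their
    data unit numbers are contiguous, so a driver can derive every data
    block's DUN from the request's first bio alone."""
    return (front.key == back.key and
            back.dun == front.dun + front.data_units)
```

If either condition fails, blk-mq must keep the bios in separate requests, since a single keyslot/DUN pair could not describe the whole request.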
    • block: Keyslot Manager for Inline Encryption · 1b262839
      Authored by Satya Tangirala
      Inline Encryption hardware allows software to specify an encryption context
      (an encryption key, crypto algorithm, data unit num, data unit size) along
      with a data transfer request to a storage device, and the inline encryption
      hardware will use that context to en/decrypt the data. The inline
      encryption hardware is part of the storage device, and it conceptually sits
      on the data path between system memory and the storage device.
      
      Inline encryption hardware implementations are often built around the
      concept of "keyslots": such hardware has a limited number of keyslots,
      each of which can hold a key (we say that a key can be "programmed"
      into a keyslot). Requests made to the storage device may have
      a keyslot and a data unit number associated with them, and the inline
      encryption hardware will en/decrypt the data in the requests using the key
      programmed into that associated keyslot and the data unit number specified
      with the request.
      
      Since keyslots are limited, programming keys can be expensive in many
      implementations, and multiple requests may use exactly the same
      encryption context, we introduce a Keyslot Manager to efficiently
      manage keyslots.
      
      We also introduce a blk_crypto_key, which will represent the key that's
      programmed into keyslots managed by keyslot managers. The keyslot manager
      also functions as the interface that upper layers will use to program keys
      into inline encryption hardware. For more information on the Keyslot
      Manager, refer to documentation found in block/keyslot-manager.c and
      linux/keyslot-manager.h.
      Co-developed-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Satya Tangirala <satyat@google.com>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
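A toy model of the keyslot reuse/eviction idea follows. This is purely illustrative; the real keyslot manager is considerably more involved (reference counting, per-device programming callbacks, locking):

```python
class KeyslotManager:
    """Fixed number of slots; reuse a slot that already holds the key,
    otherwise program the key into a free slot, otherwise evict the
    least recently used slot."""
    def __init__(self, num_slots: int):
        self.slots = [None] * num_slots  # slot index -> programmed key
        self.order = []                  # slot usage order, oldest first

    def get_slot(self, key) -> int:
        if key in self.slots:
            idx = self.slots.index(key)   # key already programmed: reuse
        elif None in self.slots:
            idx = self.slots.index(None)  # free slot: program the key
            self.slots[idx] = key
        else:
            idx = self.order[0]           # all full: evict the LRU slot
            self.slots[idx] = key
        self.order = [i for i in self.order if i != idx] + [idx]
        return idx
```

Reusing an already-programmed slot is what makes repeated requests with the same encryption context cheap, which is the point of managing keyslots centrally.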
    • Documentation: Document the blk-crypto framework · 54b259f6
      Authored by Satya Tangirala
      The blk-crypto framework adds support for inline encryption. There are
      numerous changes throughout the storage stack. This patch documents the
      main design choices in the block layer, the API presented to users of
      the block layer (like fscrypt or layered devices) and the API presented
      to drivers for adding support for inline encryption.
      Signed-off-by: Satya Tangirala <satyat@google.com>
      Reviewed-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • iocost: don't let vrate run wild while there's no saturation signal · 81ca627a
      Authored by Tejun Heo
      When the QoS targets are met and nothing is being throttled, there's
      no way to tell how saturated the underlying device is - it could be
      almost entirely idle, at the cusp of saturation, or anywhere in between.
      Given that there's no information, it's best to keep vrate as-is in
      this state.  Before 7cd806a9 ("iocost: improve nr_lagging
      handling"), this was the case - if the device isn't missing QoS
      targets and nothing is being throttled, busy_level was reset to zero.
      
      While fixing nr_lagging handling, 7cd806a9 ("iocost: improve
      nr_lagging handling") broke this.  Now, while the device is hitting
      QoS targets and nothing is being throttled, vrate keeps getting
      adjusted according to the existing busy_level.
      
      This led to vrate climbing until it hit the maximum whenever there was
      an IO issuer with limited request concurrency and the vrate started
      low: vrate gets adjusted upwards until the issuer can issue IOs
      without being throttled. From then on, QoS targets keep getting met,
      nothing on the system needs throttling, and vrate keeps getting
      increased due to the existing busy_level.
      
      This patch makes the following changes to the busy_level logic.
      
      * Reset busy_level if nr_shortages is zero to avoid the above
        scenario.
      
      * Make non-zero nr_lagging block lowering busy_level, but still clear
        positive busy_level if there's a clear non-saturation signal - QoS
        targets are met and nr_shortages is zero.  nr_lagging's role is to
        prevent adjusting vrate upwards while there are long-running
        commands; it shouldn't keep busy_level positive while there's a
        clear non-saturation signal.
      
      * Restructure code for clarity and add comments.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Andy Newell <newella@fb.com>
      Fixes: 7cd806a9 ("iocost: improve nr_lagging handling")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
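The busy_level rules in the bullets above can be condensed into a sketch. This is a simplified model of the described policy, not the kernel's actual iocost code, and it ignores the QoS-miss/lowering side entirely:

```python
def next_busy_level(busy_level: int, nr_shortages: int, nr_lagging: int) -> int:
    """Simplified model of the patched busy_level logic:
      - no shortages -> no saturation signal -> reset busy_level to 0
      - shortages with laggers -> hold (don't raise vrate while
        long-running commands are in flight)
      - shortages without laggers -> the device looks saturated -> raise
    """
    if nr_shortages == 0:
        return 0                 # clear non-saturation signal: reset
    if nr_lagging > 0:
        return busy_level        # laggers block upward adjustment
    return busy_level + 1        # genuine shortage: crank busy_level up
```

The key change relative to the pre-patch behaviour is the first branch: with no shortages there is no information about saturation, so busy_level no longer keeps pushing vrate upward.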
    • block: move blk_io_schedule() out of header file · 71ac860a
      Authored by Ming Lei
      blk_io_schedule() isn't called from a performance-sensitive code path,
      and it is easier to maintain when exported as a symbol.
      
      Also, blk_io_schedule() is only called by CONFIG_BLOCK code, so it is
      safe to do it this way. Meanwhile, this fixes the build failure when
      CONFIG_BLOCK is off.
      
      Cc: Christoph Hellwig <hch@infradead.org>
      Fixes: e6249cdd ("block: add blk_io_schedule() for avoiding task hung in sync dio")
      Reported-by: Satya Tangirala <satyat@google.com>
      Tested-by: Satya Tangirala <satyat@google.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  6. 13 May 2020 (4 commits)