Commit 3e414b5b authored by Linus Torvalds


Merge tag 'for-5.4/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm

Pull device mapper updates from Mike Snitzer:

 - crypto and DM crypt advances that allow the crypto API to reclaim
   implementation details that do not belong in DM crypt. The wrapper
   template for ESSIV generation that was factored out will also be used
   by fscrypt in the future.

 - Add root hash pkcs#7 signature verification to the DM verity target.

 - Add a new "clone" DM target that allows for efficient remote
   replication of a device.

 - Enhance DM bufio's cache to be tailored to each client based on use.
   Clients that make heavy use of the cache get more of it, and those
   that use less have reduced cache usage.

 - Add a new DM_GET_TARGET_VERSION ioctl to allow userspace to query the
   version number of a DM target (even if the associated module isn't
   yet loaded).

 - Fix invalid memory access in DM zoned target.

 - Fix the max_discard_sectors limit advertised by the DM raid target;
   it was mistakenly storing the limit in bytes rather than sectors.

 - Small optimizations and cleanups in DM writecache target.

 - Various fixes and cleanups in DM core, DM raid1 and space map portion
   of DM persistent data library.

* tag 'for-5.4/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (22 commits)
  dm: introduce DM_GET_TARGET_VERSION
  dm bufio: introduce a global cache replacement
  dm bufio: remove old-style buffer cleanup
  dm bufio: introduce a global queue
  dm bufio: refactor adjust_total_allocated
  dm bufio: call adjust_total_allocated from __link_buffer and __unlink_buffer
  dm: add clone target
  dm raid: fix updating of max_discard_sectors limit
  dm writecache: skip writecache_wait for pmem mode
  dm stats: use struct_size() helper
  dm crypt: omit parsing of the encapsulated cipher
  dm crypt: switch to ESSIV crypto API template
  crypto: essiv - create wrapper template for ESSIV generation
  dm space map common: remove check for impossible sm_find_free() return value
  dm raid1: use struct_size() with kzalloc()
  dm writecache: optimize performance by sorting the blocks for writeback_all
  dm writecache: add unlikely for getting two block with same LBA
  dm writecache: remove unused member pointer in writeback_struct
  dm zoned: fix invalid memory access
  dm verity: add root hash pkcs#7 signature verification
  ...
.. SPDX-License-Identifier: GPL-2.0-only
========
dm-clone
========
Introduction
============
dm-clone is a device mapper target which produces a one-to-one copy of an
existing, read-only source device into a writable destination device: It
presents a virtual block device which makes all data appear immediately, and
redirects reads and writes accordingly.
The main use case of dm-clone is to clone a potentially remote, high-latency,
read-only, archival-type block device into a writable, fast, primary-type device
for fast, low-latency I/O. The cloned device is visible/mountable immediately
and the copy of the source device to the destination device happens in the
background, in parallel with user I/O.
For example, one could restore an application backup from a read-only copy,
accessible through a network storage protocol (NBD, Fibre Channel, iSCSI, AoE,
etc.), into a local SSD or NVMe device, and start using the device immediately,
without waiting for the restore to complete.
When the cloning completes, the dm-clone table can be removed altogether and be
replaced, e.g., by a linear table, mapping directly to the destination device.
The dm-clone target reuses the metadata library used by the thin-provisioning
target.
Glossary
========
Hydration
The process of filling a region of the destination device with data from
the same region of the source device, i.e., copying the region from the
source to the destination device.
Once a region gets hydrated we redirect all I/O regarding it to the destination
device.
Design
======
Sub-devices
-----------
The target is constructed by passing three devices to it (along with other
parameters detailed later):
1. A source device - the read-only device that gets cloned and is the source of
   the hydration.

2. A destination device - the destination of the hydration, which will become a
   clone of the source device.

3. A small metadata device - it records which regions are already valid in the
   destination device, i.e., which regions have already been hydrated, or have
   been written to directly, via user I/O.
The size of the destination device must be at least equal to the size of the
source device.
Regions
-------
dm-clone divides the source and destination devices into fixed-sized regions.
Regions are the unit of hydration, i.e., the minimum amount of data copied from
the source to the destination device.
The region size is configurable when you first create the dm-clone device. The
recommended region size is the same as the file system block size, which usually
is 4KB. The region size must be between 8 sectors (4KB) and 2097152 sectors
(1GB) and a power of two.
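For instance, with 512-byte sectors the recommended 4KB region corresponds to
the value 8 passed as <region size> in the examples below, and the length of
the dm-clone table is the size of the source device in sectors. A minimal
sketch (using the same `$source_dev` shell variable as the examples below)::

    # Size of the read-only source device in 512-byte sectors; this is the
    # length used in the dm-clone table line.
    blockdev --getsz $source_dev

    # Number of regions the device is split into with an 8-sector (4KB)
    # region size.
    echo $(( $(blockdev --getsz $source_dev) / 8 ))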
Reads and writes from/to hydrated regions are serviced from the destination
device.
A read to a not yet hydrated region is serviced directly from the source device.
A write to a not yet hydrated region starts the hydration of that region
immediately, and the write is delayed until the hydration completes.
Note that a write request with size equal to region size will skip copying of
the corresponding region from the source device and overwrite the region of the
destination device directly.
Discards
--------
dm-clone interprets a discard request to a range that hasn't been hydrated yet
as a hint to skip hydration of the regions covered by the request, i.e., it
skips copying the region's data from the source to the destination device, and
only updates its metadata.
If the destination device supports discards, then by default dm-clone will pass
down discard requests to it.
Background Hydration
--------------------
dm-clone copies continuously from the source to the destination device, until
all of the device has been copied.
Copying data from the source to the destination device uses bandwidth. The user
can set a throttle to prevent more than a certain amount of copying occurring at
any one time. Moreover, dm-clone takes into account user I/O traffic going to
the devices and pauses the background hydration when there is I/O in-flight.
A message `hydration_threshold <#regions>` can be used to set the maximum number
of regions being copied, the default being 1 region.
dm-clone employs dm-kcopyd for copying portions of the source device to the
destination device. By default, we issue copy requests of size equal to the
region size. A message `hydration_batch_size <#regions>` can be used to tune the
size of these copy requests. Increasing the hydration batch size results in
dm-clone trying to batch together contiguous regions, so we copy the data in
batches of this many regions.
When the hydration of the destination device finishes, a dm event will be sent
to user space.
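Both the hydration threshold and the batch size can be changed at runtime with
`dmsetup message`; the device name below matches the examples later in this
document and the numeric values are arbitrary::

    # Allow at most 4 regions to be copied to the destination device at any
    # one time during background hydration.
    dmsetup message clone 0 hydration_threshold 4

    # Try to batch together up to 32 contiguous regions per copy request.
    dmsetup message clone 0 hydration_batch_size 32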
Updating on-disk metadata
-------------------------
On-disk metadata is committed every time a FLUSH or FUA bio is written. If no
such requests are made then commits will occur every second. This means the
dm-clone device behaves like a physical disk that has a volatile write cache. If
power is lost you may lose some recent writes. The metadata should always be
consistent in spite of any crash.
Target Interface
================
Constructor
-----------
::
   clone <metadata dev> <destination dev> <source dev> <region size>
         [<#feature args> [<feature arg>]* [<#core args> [<core arg>]*]]

================ ==============================================================
metadata dev     Fast device holding the persistent metadata
destination dev  The destination device, where the source will be cloned
source dev       Read only device containing the data that gets cloned
region size      The size of a region in sectors
#feature args    Number of feature arguments passed
feature args     no_hydration or no_discard_passdown
#core args       An even number of arguments corresponding to key/value pairs
                 passed to dm-clone
core args        Key/value pairs passed to dm-clone, e.g. `hydration_threshold
                 256`
================ ==============================================================
Optional feature arguments are:
==================== =========================================================
no_hydration         Create a dm-clone instance with background hydration
                     disabled
no_discard_passdown  Disable passing down discards to the destination device
==================== =========================================================
Optional core arguments are:
================================ ==============================================
hydration_threshold <#regions>   Maximum number of regions being copied from
                                 the source to the destination device at any
                                 one time, during background hydration.
hydration_batch_size <#regions>  During background hydration, try to batch
                                 together contiguous regions, so we copy data
                                 from the source to the destination device in
                                 batches of this many regions.
================================ ==============================================
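To make the argument counts concrete, a hypothetical table line that disables
discard passdown and sets both core arguments (the values are arbitrary) could
look like this, using the same shell variables as the examples below::

    dmsetup create clone --table "0 1048576000 clone $metadata_dev $dest_dev \
      $source_dev 8 1 no_discard_passdown 4 hydration_threshold 2 \
      hydration_batch_size 16"

Here `1 no_discard_passdown` is <#feature args> followed by one feature
argument, and `4 hydration_threshold 2 hydration_batch_size 16` is
<#core args> (keys and values counted together) followed by two key/value
pairs.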
Status
------
::
    <metadata block size> <#used metadata blocks>/<#total metadata blocks>
    <region size> <#hydrated regions>/<#total regions> <#hydrating regions>
    <#feature args> <feature args>* <#core args> <core args>*
    <clone metadata mode>

======================= =======================================================
metadata block size     Fixed block size for each metadata block in sectors
#used metadata blocks   Number of metadata blocks used
#total metadata blocks  Total number of metadata blocks
region size             Configurable region size for the device in sectors
#hydrated regions       Number of regions that have finished hydrating
#total regions          Total number of regions to hydrate
#hydrating regions      Number of regions currently hydrating
#feature args           Number of feature arguments to follow
feature args            Feature arguments, e.g. `no_hydration`
#core args              Even number of core arguments to follow
core args               Key/value pairs for tuning the core, e.g.
                        `hydration_threshold 256`
clone metadata mode     ro if read-only, rw if read-write
                        In serious cases where even a read-only mode is deemed
                        unsafe no further I/O will be permitted and the status
                        will just contain the string 'Fail'. If the metadata
                        mode changes, a dm event will be sent to user space.
======================= =======================================================
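As an illustration of the field order, a made-up status line for a partially
hydrated device (the numbers are fabricated and merely follow the format
described above) could look like::

    $ dmsetup status clone
    0 1048576000 clone 8 23/4096 8 65536000/131072000 1 0 0 rw

i.e., 8-sector (4KB) metadata blocks with 23 of 4096 used, an 8-sector region
size, 65536000 of 131072000 regions hydrated, 1 region currently hydrating, no
feature or core arguments reported, and read-write metadata.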
Messages
--------
`disable_hydration`
    Disable the background hydration of the destination device.

`enable_hydration`
    Enable the background hydration of the destination device.

`hydration_threshold <#regions>`
    Set background hydration threshold.

`hydration_batch_size <#regions>`
    Set background hydration batch size.
Examples
========
Clone a device containing a file system
---------------------------------------
1. Create the dm-clone device.
::
dmsetup create clone --table "0 1048576000 clone $metadata_dev $dest_dev \
$source_dev 8 1 no_hydration"
2. Mount the device and trim the file system. dm-clone interprets the discards
sent by the file system and it will not hydrate the unused space.
::
mount /dev/mapper/clone /mnt/cloned-fs
fstrim /mnt/cloned-fs
3. Enable background hydration of the destination device.
::
dmsetup message clone 0 enable_hydration
4. When the hydration finishes, we can replace the dm-clone table with a linear
table.
::
dmsetup suspend clone
dmsetup load clone --table "0 1048576000 linear $dest_dev 0"
dmsetup resume clone
The metadata device is no longer needed and can be safely discarded or reused
for other purposes.
Known issues
============
1. Reads to not-yet-hydrated regions are redirected to the source device. If
reading the source device has high latency and the user repeatedly reads from
the same regions, this behaviour could degrade performance. We should use
these reads as hints to hydrate the relevant regions sooner. Currently, we
rely on the page cache to cache these regions, so we hopefully don't end up
reading them multiple times from the source device.
2. Release in-core resources, i.e., the bitmaps tracking which regions are
hydrated, after the hydration has finished.
3. During background hydration, if we fail to read the source or write to the
destination device, we print an error message, but the hydration process
continues indefinitely, until it succeeds. We should stop the background
hydration after a number of failures and emit a dm event for user space to
notice.
Why not...?
===========
We explored the following alternatives before implementing dm-clone:
1. Use dm-cache with cache size equal to the source device and implement a new
cloning policy:
* The resulting cache device is not a one-to-one mirror of the source device
and thus we cannot remove the cache device once cloning completes.
* dm-cache writes to the source device, which violates our requirement that
the source device must be treated as read-only.
* Caching is semantically different from cloning.
2. Use dm-snapshot with a COW device equal to the source device:
* dm-snapshot stores its metadata in the COW device, so the resulting device
is not a one-to-one mirror of the source device.
* No background copying mechanism.
* dm-snapshot needs to commit its metadata whenever a pending exception
completes, to ensure snapshot consistency. In the case of cloning, we don't
need to be so strict and can rely on committing metadata every time a FLUSH
or FUA bio is written, or periodically, like dm-thin and dm-cache do. This
improves the performance significantly.
3. Use dm-mirror: The mirror target has a background copying/mirroring
mechanism, but it writes to all mirrors, thus violating our requirement that
the source device must be treated as read-only.
4. Use dm-thin's external snapshot functionality. This approach is the most
promising among all alternatives, as the thinly-provisioned volume is a
one-to-one mirror of the source device and handles reads and writes to
un-provisioned/not-yet-cloned areas the same way as dm-clone does.
Still:
* There is no background copying mechanism, though one could be implemented.
* Most importantly, we want to support arbitrary block devices as the
destination of the cloning process and not restrict ourselves to
thinly-provisioned volumes. Thin-provisioning has an inherent metadata
overhead, for maintaining the thin volume mappings, which significantly
degrades performance.
Moreover, cloning a device shouldn't force the use of thin-provisioning. On
the other hand, if we wish to use thin provisioning, we can just use a thin
LV as dm-clone's destination device.
......@@ -125,6 +125,13 @@ check_at_most_once
blocks, and a hash block will not be verified any more after all the data
blocks it covers have been verified anyway.
root_hash_sig_key_desc <key_description>
    This is the description of the USER_KEY that the kernel will look up to
    get the pkcs#7 signature of the root hash. The pkcs#7 signature is used to
    validate the root hash during the creation of the device mapper block
    device. Verification of the root hash depends on the kernel config
    DM_VERITY_VERIFY_ROOTHASH_SIG being set.
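As a hedged illustration (the devices, sizes, root hash, salt and key
description below are placeholders, and the table otherwise follows the verity
constructor format described earlier in this document), the detached pkcs#7
signature is first loaded into the keyring as a user key and its description
is then passed through this option::

    # Load the pkcs#7 signature of the root hash as a USER_KEY.
    keyctl padd user verity_roothash_sig @u < roothash.p7s

    # Reference that key description from the verity table.
    dmsetup create vroot --readonly --table \
      "0 2097152 verity 1 /dev/sdX1 /dev/sdX2 4096 4096 262144 1 sha256 \
      <root_hash> <salt> 2 root_hash_sig_key_desc verity_roothash_sig"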
Theory of operation
===================
......
......@@ -487,6 +487,34 @@ config CRYPTO_ADIANTUM
If unsure, say N.
config CRYPTO_ESSIV
tristate "ESSIV support for block encryption"
select CRYPTO_AUTHENC
help
Encrypted salt-sector initialization vector (ESSIV) is an IV
generation method that is used in some cases by fscrypt and/or
dm-crypt. It uses the hash of the block encryption key as the
symmetric key for a block encryption pass applied to the input
IV, making low entropy IV sources more suitable for block
encryption.
This driver implements a crypto API template that can be
instantiated either as a skcipher or as an aead (depending on the
type of the first template argument), and which defers encryption
and decryption requests to the encapsulated cipher after applying
ESSIV to the input IV. Note that in the aead case, it is assumed
that the keys are presented in the same format used by the authenc
template, and that the IV appears at the end of the authenticated
associated data (AAD) region (which is how dm-crypt uses it).
Note that the use of ESSIV is not recommended for new deployments,
and so this only needs to be enabled when interoperability with
existing encrypted volumes or filesystems is required, or when
building for a particular system that requires it (e.g., when
the SoC in question has accelerated CBC but not XTS, making CBC
combined with ESSIV the only feasible mode for h/w accelerated
block encryption)
comment "Hash modes"
config CRYPTO_CMAC
......
......@@ -165,6 +165,7 @@ obj-$(CONFIG_CRYPTO_USER_API_AEAD) += algif_aead.o
obj-$(CONFIG_CRYPTO_ZSTD) += zstd.o
obj-$(CONFIG_CRYPTO_OFB) += ofb.o
obj-$(CONFIG_CRYPTO_ECC) += ecc.o
obj-$(CONFIG_CRYPTO_ESSIV) += essiv.o
ecdh_generic-y += ecdh.o
ecdh_generic-y += ecdh_helper.o
......
// SPDX-License-Identifier: GPL-2.0
/*
* ESSIV skcipher and aead template for block encryption
*
* This template encapsulates the ESSIV IV generation algorithm used by
* dm-crypt and fscrypt, which converts the initial vector for the skcipher
* used for block encryption, by encrypting it using the hash of the
* skcipher key as encryption key. Usually, the input IV is a 64-bit sector
* number in LE representation zero-padded to the size of the IV, but this
* is not assumed by this driver.
*
* The typical use of this template is to instantiate the skcipher
* 'essiv(cbc(aes),sha256)', which is the only instantiation used by
* fscrypt, and the most relevant one for dm-crypt. However, dm-crypt
* also permits ESSIV to be used in combination with the authenc template,
* e.g., 'essiv(authenc(hmac(sha256),cbc(aes)),sha256)', in which case
* we need to instantiate an aead that accepts the same special key format
* as the authenc template, and deals with the way the encrypted IV is
* embedded into the AAD area of the aead request. This means the AEAD
* flavor produced by this template is tightly coupled to the way dm-crypt
* happens to use it.
*
* Copyright (c) 2019 Linaro, Ltd. <ard.biesheuvel@linaro.org>
*
* Heavily based on:
* adiantum length-preserving encryption mode
*
* Copyright 2018 Google LLC
*/
#include <crypto/authenc.h>
#include <crypto/internal/aead.h>
#include <crypto/internal/hash.h>
#include <crypto/internal/skcipher.h>
#include <crypto/scatterwalk.h>
#include <linux/module.h>
#include "internal.h"
struct essiv_instance_ctx {
union {
struct crypto_skcipher_spawn skcipher_spawn;
struct crypto_aead_spawn aead_spawn;
} u;
char essiv_cipher_name[CRYPTO_MAX_ALG_NAME];
char shash_driver_name[CRYPTO_MAX_ALG_NAME];
};
struct essiv_tfm_ctx {
union {
struct crypto_skcipher *skcipher;
struct crypto_aead *aead;
} u;
struct crypto_cipher *essiv_cipher;
struct crypto_shash *hash;
int ivoffset;
};
struct essiv_aead_request_ctx {
struct scatterlist sg[4];
u8 *assoc;
struct aead_request aead_req;
};
static int essiv_skcipher_setkey(struct crypto_skcipher *tfm,
const u8 *key, unsigned int keylen)
{
struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
SHASH_DESC_ON_STACK(desc, tctx->hash);
u8 salt[HASH_MAX_DIGESTSIZE];
int err;
crypto_skcipher_clear_flags(tctx->u.skcipher, CRYPTO_TFM_REQ_MASK);
crypto_skcipher_set_flags(tctx->u.skcipher,
crypto_skcipher_get_flags(tfm) &
CRYPTO_TFM_REQ_MASK);
err = crypto_skcipher_setkey(tctx->u.skcipher, key, keylen);
crypto_skcipher_set_flags(tfm,
crypto_skcipher_get_flags(tctx->u.skcipher) &
CRYPTO_TFM_RES_MASK);
if (err)
return err;
desc->tfm = tctx->hash;
err = crypto_shash_digest(desc, key, keylen, salt);
if (err)
return err;
crypto_cipher_clear_flags(tctx->essiv_cipher, CRYPTO_TFM_REQ_MASK);
crypto_cipher_set_flags(tctx->essiv_cipher,
crypto_skcipher_get_flags(tfm) &
CRYPTO_TFM_REQ_MASK);
err = crypto_cipher_setkey(tctx->essiv_cipher, salt,
crypto_shash_digestsize(tctx->hash));
crypto_skcipher_set_flags(tfm,
crypto_cipher_get_flags(tctx->essiv_cipher) &
CRYPTO_TFM_RES_MASK);
return err;
}
static int essiv_aead_setkey(struct crypto_aead *tfm, const u8 *key,
unsigned int keylen)
{
struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
SHASH_DESC_ON_STACK(desc, tctx->hash);
struct crypto_authenc_keys keys;
u8 salt[HASH_MAX_DIGESTSIZE];
int err;
crypto_aead_clear_flags(tctx->u.aead, CRYPTO_TFM_REQ_MASK);
crypto_aead_set_flags(tctx->u.aead, crypto_aead_get_flags(tfm) &
CRYPTO_TFM_REQ_MASK);
err = crypto_aead_setkey(tctx->u.aead, key, keylen);
crypto_aead_set_flags(tfm, crypto_aead_get_flags(tctx->u.aead) &
CRYPTO_TFM_RES_MASK);
if (err)
return err;
if (crypto_authenc_extractkeys(&keys, key, keylen) != 0) {
crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
return -EINVAL;
}
desc->tfm = tctx->hash;
err = crypto_shash_init(desc) ?:
crypto_shash_update(desc, keys.enckey, keys.enckeylen) ?:
crypto_shash_finup(desc, keys.authkey, keys.authkeylen, salt);
if (err)
return err;
crypto_cipher_clear_flags(tctx->essiv_cipher, CRYPTO_TFM_REQ_MASK);
crypto_cipher_set_flags(tctx->essiv_cipher, crypto_aead_get_flags(tfm) &
CRYPTO_TFM_REQ_MASK);
err = crypto_cipher_setkey(tctx->essiv_cipher, salt,
crypto_shash_digestsize(tctx->hash));
crypto_aead_set_flags(tfm, crypto_cipher_get_flags(tctx->essiv_cipher) &
CRYPTO_TFM_RES_MASK);
return err;
}
static int essiv_aead_setauthsize(struct crypto_aead *tfm,
unsigned int authsize)
{
struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
return crypto_aead_setauthsize(tctx->u.aead, authsize);
}
static void essiv_skcipher_done(struct crypto_async_request *areq, int err)
{
struct skcipher_request *req = areq->data;
skcipher_request_complete(req, err);
}
static int essiv_skcipher_crypt(struct skcipher_request *req, bool enc)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
const struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
struct skcipher_request *subreq = skcipher_request_ctx(req);
crypto_cipher_encrypt_one(tctx->essiv_cipher, req->iv, req->iv);
skcipher_request_set_tfm(subreq, tctx->u.skcipher);
skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
req->iv);
skcipher_request_set_callback(subreq, skcipher_request_flags(req),
essiv_skcipher_done, req);
return enc ? crypto_skcipher_encrypt(subreq) :
crypto_skcipher_decrypt(subreq);
}
static int essiv_skcipher_encrypt(struct skcipher_request *req)
{
return essiv_skcipher_crypt(req, true);
}
static int essiv_skcipher_decrypt(struct skcipher_request *req)
{
return essiv_skcipher_crypt(req, false);
}
static void essiv_aead_done(struct crypto_async_request *areq, int err)
{
struct aead_request *req = areq->data;
struct essiv_aead_request_ctx *rctx = aead_request_ctx(req);
if (rctx->assoc)
kfree(rctx->assoc);
aead_request_complete(req, err);
}
static int essiv_aead_crypt(struct aead_request *req, bool enc)
{
struct crypto_aead *tfm = crypto_aead_reqtfm(req);
const struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
struct essiv_aead_request_ctx *rctx = aead_request_ctx(req);
struct aead_request *subreq = &rctx->aead_req;
struct scatterlist *src = req->src;
int err;
crypto_cipher_encrypt_one(tctx->essiv_cipher, req->iv, req->iv);
/*
* dm-crypt embeds the sector number and the IV in the AAD region, so
* we have to copy the converted IV into the right scatterlist before
* we pass it on.
*/
rctx->assoc = NULL;
if (req->src == req->dst || !enc) {
scatterwalk_map_and_copy(req->iv, req->dst,
req->assoclen - crypto_aead_ivsize(tfm),
crypto_aead_ivsize(tfm), 1);
} else {
u8 *iv = (u8 *)aead_request_ctx(req) + tctx->ivoffset;
int ivsize = crypto_aead_ivsize(tfm);
int ssize = req->assoclen - ivsize;
struct scatterlist *sg;
int nents;
if (ssize < 0)
return -EINVAL;
nents = sg_nents_for_len(req->src, ssize);
if (nents < 0)
return -EINVAL;
memcpy(iv, req->iv, ivsize);
sg_init_table(rctx->sg, 4);
if (unlikely(nents > 1)) {
/*
* This is a case that rarely occurs in practice, but
* for correctness, we have to deal with it nonetheless.
*/
rctx->assoc = kmalloc(ssize, GFP_ATOMIC);
if (!rctx->assoc)
return -ENOMEM;
scatterwalk_map_and_copy(rctx->assoc, req->src, 0,
ssize, 0);
sg_set_buf(rctx->sg, rctx->assoc, ssize);
} else {
sg_set_page(rctx->sg, sg_page(req->src), ssize,
req->src->offset);
}
sg_set_buf(rctx->sg + 1, iv, ivsize);
sg = scatterwalk_ffwd(rctx->sg + 2, req->src, req->assoclen);
if (sg != rctx->sg + 2)
sg_chain(rctx->sg, 3, sg);
src = rctx->sg;
}
aead_request_set_tfm(subreq, tctx->u.aead);
aead_request_set_ad(subreq, req->assoclen);
aead_request_set_callback(subreq, aead_request_flags(req),
essiv_aead_done, req);
aead_request_set_crypt(subreq, src, req->dst, req->cryptlen, req->iv);
err = enc ? crypto_aead_encrypt(subreq) :
crypto_aead_decrypt(subreq);
if (rctx->assoc && err != -EINPROGRESS)
kfree(rctx->assoc);
return err;
}
static int essiv_aead_encrypt(struct aead_request *req)
{
return essiv_aead_crypt(req, true);
}
static int essiv_aead_decrypt(struct aead_request *req)
{
return essiv_aead_crypt(req, false);
}
static int essiv_init_tfm(struct essiv_instance_ctx *ictx,
struct essiv_tfm_ctx *tctx)
{
struct crypto_cipher *essiv_cipher;
struct crypto_shash *hash;
int err;
essiv_cipher = crypto_alloc_cipher(ictx->essiv_cipher_name, 0, 0);
if (IS_ERR(essiv_cipher))
return PTR_ERR(essiv_cipher);
hash = crypto_alloc_shash(ictx->shash_driver_name, 0, 0);
if (IS_ERR(hash)) {
err = PTR_ERR(hash);
goto err_free_essiv_cipher;
}
tctx->essiv_cipher = essiv_cipher;
tctx->hash = hash;
return 0;
err_free_essiv_cipher:
crypto_free_cipher(essiv_cipher);
return err;
}
static int essiv_skcipher_init_tfm(struct crypto_skcipher *tfm)
{
struct skcipher_instance *inst = skcipher_alg_instance(tfm);
struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
struct crypto_skcipher *skcipher;
int err;
skcipher = crypto_spawn_skcipher(&ictx->u.skcipher_spawn);
if (IS_ERR(skcipher))
return PTR_ERR(skcipher);
crypto_skcipher_set_reqsize(tfm, sizeof(struct skcipher_request) +
crypto_skcipher_reqsize(skcipher));
err = essiv_init_tfm(ictx, tctx);
if (err) {
crypto_free_skcipher(skcipher);
return err;
}
tctx->u.skcipher = skcipher;
return 0;
}
static int essiv_aead_init_tfm(struct crypto_aead *tfm)
{
struct aead_instance *inst = aead_alg_instance(tfm);
struct essiv_instance_ctx *ictx = aead_instance_ctx(inst);
struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
struct crypto_aead *aead;
unsigned int subreq_size;
int err;
BUILD_BUG_ON(offsetofend(struct essiv_aead_request_ctx, aead_req) !=
sizeof(struct essiv_aead_request_ctx));
aead = crypto_spawn_aead(&ictx->u.aead_spawn);
if (IS_ERR(aead))
return PTR_ERR(aead);
subreq_size = FIELD_SIZEOF(struct essiv_aead_request_ctx, aead_req) +
crypto_aead_reqsize(aead);
tctx->ivoffset = offsetof(struct essiv_aead_request_ctx, aead_req) +
subreq_size;
crypto_aead_set_reqsize(tfm, tctx->ivoffset + crypto_aead_ivsize(aead));
err = essiv_init_tfm(ictx, tctx);
if (err) {
crypto_free_aead(aead);
return err;
}
tctx->u.aead = aead;
return 0;
}
static void essiv_skcipher_exit_tfm(struct crypto_skcipher *tfm)
{
struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
crypto_free_skcipher(tctx->u.skcipher);
crypto_free_cipher(tctx->essiv_cipher);
crypto_free_shash(tctx->hash);
}
static void essiv_aead_exit_tfm(struct crypto_aead *tfm)
{
struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
crypto_free_aead(tctx->u.aead);
crypto_free_cipher(tctx->essiv_cipher);
crypto_free_shash(tctx->hash);
}
static void essiv_skcipher_free_instance(struct skcipher_instance *inst)
{
struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
crypto_drop_skcipher(&ictx->u.skcipher_spawn);
kfree(inst);
}
static void essiv_aead_free_instance(struct aead_instance *inst)
{
struct essiv_instance_ctx *ictx = aead_instance_ctx(inst);
crypto_drop_aead(&ictx->u.aead_spawn);
kfree(inst);
}
static bool parse_cipher_name(char *essiv_cipher_name, const char *cra_name)
{
const char *p, *q;
int len;
/* find the last opening parens */
p = strrchr(cra_name, '(');
if (!p++)
return false;
/* find the first closing parens in the tail of the string */
q = strchr(p, ')');
if (!q)
return false;
len = q - p;
if (len >= CRYPTO_MAX_ALG_NAME)
return false;
memcpy(essiv_cipher_name, p, len);
essiv_cipher_name[len] = '\0';
return true;
}
static bool essiv_supported_algorithms(const char *essiv_cipher_name,
struct shash_alg *hash_alg,
int ivsize)
{
struct crypto_alg *alg;
bool ret = false;
alg = crypto_alg_mod_lookup(essiv_cipher_name,
CRYPTO_ALG_TYPE_CIPHER,
CRYPTO_ALG_TYPE_MASK);
if (IS_ERR(alg))
return false;
if (hash_alg->digestsize < alg->cra_cipher.cia_min_keysize ||
hash_alg->digestsize > alg->cra_cipher.cia_max_keysize)
goto out;
if (ivsize != alg->cra_blocksize)
goto out;
if (crypto_shash_alg_has_setkey(hash_alg))
goto out;
ret = true;
out:
crypto_mod_put(alg);
return ret;
}
static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
{
struct crypto_attr_type *algt;
const char *inner_cipher_name;
const char *shash_name;
struct skcipher_instance *skcipher_inst = NULL;
struct aead_instance *aead_inst = NULL;
struct crypto_instance *inst;
struct crypto_alg *base, *block_base;
struct essiv_instance_ctx *ictx;
struct skcipher_alg *skcipher_alg = NULL;
struct aead_alg *aead_alg = NULL;
struct crypto_alg *_hash_alg;
struct shash_alg *hash_alg;
int ivsize;
u32 type;
int err;
algt = crypto_get_attr_type(tb);
if (IS_ERR(algt))
return PTR_ERR(algt);
inner_cipher_name = crypto_attr_alg_name(tb[1]);
if (IS_ERR(inner_cipher_name))
return PTR_ERR(inner_cipher_name);
shash_name = crypto_attr_alg_name(tb[2]);
if (IS_ERR(shash_name))
return PTR_ERR(shash_name);
type = algt->type & algt->mask;
switch (type) {
case CRYPTO_ALG_TYPE_BLKCIPHER:
skcipher_inst = kzalloc(sizeof(*skcipher_inst) +
sizeof(*ictx), GFP_KERNEL);
if (!skcipher_inst)
return -ENOMEM;
inst = skcipher_crypto_instance(skcipher_inst);
base = &skcipher_inst->alg.base;
ictx = crypto_instance_ctx(inst);
/* Symmetric cipher, e.g., "cbc(aes)" */
crypto_set_skcipher_spawn(&ictx->u.skcipher_spawn, inst);
err = crypto_grab_skcipher(&ictx->u.skcipher_spawn,
inner_cipher_name, 0,
crypto_requires_sync(algt->type,
algt->mask));
if (err)
goto out_free_inst;
skcipher_alg = crypto_spawn_skcipher_alg(&ictx->u.skcipher_spawn);
block_base = &skcipher_alg->base;
ivsize = crypto_skcipher_alg_ivsize(skcipher_alg);
break;
case CRYPTO_ALG_TYPE_AEAD:
aead_inst = kzalloc(sizeof(*aead_inst) +
sizeof(*ictx), GFP_KERNEL);
if (!aead_inst)
return -ENOMEM;
inst = aead_crypto_instance(aead_inst);
base = &aead_inst->alg.base;
ictx = crypto_instance_ctx(inst);
/* AEAD cipher, e.g., "authenc(hmac(sha256),cbc(aes))" */
crypto_set_aead_spawn(&ictx->u.aead_spawn, inst);
err = crypto_grab_aead(&ictx->u.aead_spawn,
inner_cipher_name, 0,
crypto_requires_sync(algt->type,
algt->mask));
if (err)
goto out_free_inst;
aead_alg = crypto_spawn_aead_alg(&ictx->u.aead_spawn);
block_base = &aead_alg->base;
if (!strstarts(block_base->cra_name, "authenc(")) {
pr_warn("Only authenc() type AEADs are supported by ESSIV\n");
err = -EINVAL;
goto out_drop_skcipher;
}
ivsize = aead_alg->ivsize;
break;
default:
return -EINVAL;
}
if (!parse_cipher_name(ictx->essiv_cipher_name, block_base->cra_name)) {
pr_warn("Failed to parse ESSIV cipher name from skcipher cra_name\n");
err = -EINVAL;
goto out_drop_skcipher;
}
/* Synchronous hash, e.g., "sha256" */
_hash_alg = crypto_alg_mod_lookup(shash_name,
CRYPTO_ALG_TYPE_SHASH,
CRYPTO_ALG_TYPE_MASK);
if (IS_ERR(_hash_alg)) {
err = PTR_ERR(_hash_alg);
goto out_drop_skcipher;
}
hash_alg = __crypto_shash_alg(_hash_alg);
/* Check the set of algorithms */
if (!essiv_supported_algorithms(ictx->essiv_cipher_name, hash_alg,
ivsize)) {
pr_warn("Unsupported essiv instantiation: essiv(%s,%s)\n",
block_base->cra_name, hash_alg->base.cra_name);
err = -EINVAL;
goto out_free_hash;
}
/* record the driver name so we can instantiate this exact algo later */
strlcpy(ictx->shash_driver_name, hash_alg->base.cra_driver_name,
CRYPTO_MAX_ALG_NAME);
/* Instance fields */
err = -ENAMETOOLONG;
if (snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME,
"essiv(%s,%s)", block_base->cra_name,
hash_alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME)
goto out_free_hash;
if (snprintf(base->cra_driver_name, CRYPTO_MAX_ALG_NAME,
"essiv(%s,%s)", block_base->cra_driver_name,
hash_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
goto out_free_hash;
base->cra_flags = block_base->cra_flags & CRYPTO_ALG_ASYNC;
base->cra_blocksize = block_base->cra_blocksize;
base->cra_ctxsize = sizeof(struct essiv_tfm_ctx);
base->cra_alignmask = block_base->cra_alignmask;
base->cra_priority = block_base->cra_priority;
if (type == CRYPTO_ALG_TYPE_BLKCIPHER) {
skcipher_inst->alg.setkey = essiv_skcipher_setkey;
skcipher_inst->alg.encrypt = essiv_skcipher_encrypt;
skcipher_inst->alg.decrypt = essiv_skcipher_decrypt;
skcipher_inst->alg.init = essiv_skcipher_init_tfm;
skcipher_inst->alg.exit = essiv_skcipher_exit_tfm;
skcipher_inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(skcipher_alg);
skcipher_inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(skcipher_alg);
skcipher_inst->alg.ivsize = ivsize;
skcipher_inst->alg.chunksize = crypto_skcipher_alg_chunksize(skcipher_alg);
skcipher_inst->alg.walksize = crypto_skcipher_alg_walksize(skcipher_alg);
skcipher_inst->free = essiv_skcipher_free_instance;
err = skcipher_register_instance(tmpl, skcipher_inst);
} else {
aead_inst->alg.setkey = essiv_aead_setkey;
aead_inst->alg.setauthsize = essiv_aead_setauthsize;
aead_inst->alg.encrypt = essiv_aead_encrypt;
aead_inst->alg.decrypt = essiv_aead_decrypt;
aead_inst->alg.init = essiv_aead_init_tfm;
aead_inst->alg.exit = essiv_aead_exit_tfm;
aead_inst->alg.ivsize = ivsize;
aead_inst->alg.maxauthsize = crypto_aead_alg_maxauthsize(aead_alg);
aead_inst->alg.chunksize = crypto_aead_alg_chunksize(aead_alg);
aead_inst->free = essiv_aead_free_instance;
err = aead_register_instance(tmpl, aead_inst);
}
if (err)
goto out_free_hash;
crypto_mod_put(_hash_alg);
return 0;
out_free_hash:
crypto_mod_put(_hash_alg);
out_drop_skcipher:
if (type == CRYPTO_ALG_TYPE_BLKCIPHER)
crypto_drop_skcipher(&ictx->u.skcipher_spawn);
else
crypto_drop_aead(&ictx->u.aead_spawn);
out_free_inst:
kfree(skcipher_inst);
kfree(aead_inst);
return err;
}
/* essiv(cipher_name, shash_name) */
static struct crypto_template essiv_tmpl = {
.name = "essiv",
.create = essiv_create,
.module = THIS_MODULE,
};
static int __init essiv_module_init(void)
{
return crypto_register_template(&essiv_tmpl);
}
static void __exit essiv_module_exit(void)
{
crypto_unregister_template(&essiv_tmpl);
}
subsys_initcall(essiv_module_init);
module_exit(essiv_module_exit);
MODULE_DESCRIPTION("ESSIV skcipher/aead wrapper for block encryption");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS_CRYPTO("essiv");
......@@ -271,6 +271,7 @@ config DM_CRYPT
depends on BLK_DEV_DM
select CRYPTO
select CRYPTO_CBC
select CRYPTO_ESSIV
---help---
This device-mapper target allows you to create a device that
transparently encrypts the data on it. You'll need to activate
......@@ -346,6 +347,20 @@ config DM_ERA
over time. Useful for maintaining cache coherency when using
vendor snapshots.
config DM_CLONE
tristate "Clone target (EXPERIMENTAL)"
depends on BLK_DEV_DM
default n
select DM_PERSISTENT_DATA
---help---
dm-clone produces a one-to-one copy of an existing, read-only source
device into a writable destination device. The cloned device is
visible/mountable immediately and the copy of the source device to the
destination device happens in the background, in parallel with user
I/O.
If unsure, say N.
config DM_MIRROR
tristate "Mirror target"
depends on BLK_DEV_DM
......@@ -490,6 +505,18 @@ config DM_VERITY
If unsure, say N.
config DM_VERITY_VERIFY_ROOTHASH_SIG
def_bool n
bool "Verity data device root hash signature verification support"
depends on DM_VERITY
select SYSTEM_DATA_VERIFICATION
help
Add ability for dm-verity device to be validated if the
pre-generated tree of cryptographic checksums passed has a pkcs#7
signature file that can validate the roothash of the tree.
If unsure, say N.
config DM_VERITY_FEC
bool "Verity forward error correction support"
depends on DM_VERITY
......
......@@ -18,6 +18,7 @@ dm-cache-y += dm-cache-target.o dm-cache-metadata.o dm-cache-policy.o \
dm-cache-background-tracker.o
dm-cache-smq-y += dm-cache-policy-smq.o
dm-era-y += dm-era-target.o
dm-clone-y += dm-clone-target.o dm-clone-metadata.o
dm-verity-y += dm-verity-target.o
md-mod-y += md.o md-bitmap.o
raid456-y += raid5.o raid5-cache.o raid5-ppl.o
......@@ -65,6 +66,7 @@ obj-$(CONFIG_DM_VERITY) += dm-verity.o
obj-$(CONFIG_DM_CACHE) += dm-cache.o
obj-$(CONFIG_DM_CACHE_SMQ) += dm-cache-smq.o
obj-$(CONFIG_DM_ERA) += dm-era.o
obj-$(CONFIG_DM_CLONE) += dm-clone.o
obj-$(CONFIG_DM_LOG_WRITES) += dm-log-writes.o
obj-$(CONFIG_DM_INTEGRITY) += dm-integrity.o
obj-$(CONFIG_DM_ZONED) += dm-zoned.o
......@@ -81,3 +83,7 @@ endif
ifeq ($(CONFIG_DM_VERITY_FEC),y)
dm-verity-objs += dm-verity-fec.o
endif
ifeq ($(CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG),y)
dm-verity-objs += dm-verity-verify-sig.o
endif
......@@ -33,7 +33,8 @@
#define DM_BUFIO_MEMORY_PERCENT 2
#define DM_BUFIO_VMALLOC_PERCENT 25
#define DM_BUFIO_WRITEBACK_PERCENT 75
#define DM_BUFIO_WRITEBACK_RATIO 3
#define DM_BUFIO_LOW_WATERMARK_RATIO 16
/*
* Check buffer ages in this interval (seconds)
......@@ -132,12 +133,14 @@ enum data_mode {
struct dm_buffer {
struct rb_node node;
struct list_head lru_list;
struct list_head global_list;
sector_t block;
void *data;
unsigned char data_mode; /* DATA_MODE_* */
unsigned char list_mode; /* LIST_* */
blk_status_t read_error;
blk_status_t write_error;
unsigned accessed;
unsigned hold_count;
unsigned long state;
unsigned long last_accessed;
......@@ -192,7 +195,11 @@ static unsigned long dm_bufio_cache_size;
*/
static unsigned long dm_bufio_cache_size_latch;
static DEFINE_SPINLOCK(param_spinlock);
static DEFINE_SPINLOCK(global_spinlock);
static LIST_HEAD(global_queue);
static unsigned long global_num = 0;
/*
* Buffers are freed after this timeout
......@@ -208,11 +215,6 @@ static unsigned long dm_bufio_current_allocated;
/*----------------------------------------------------------------*/
/*
* Per-client cache: dm_bufio_cache_size / dm_bufio_client_count
*/
static unsigned long dm_bufio_cache_size_per_client;
/*
* The current number of clients.
*/
......@@ -224,11 +226,15 @@ static int dm_bufio_client_count;
static LIST_HEAD(dm_bufio_all_clients);
/*
* This mutex protects dm_bufio_cache_size_latch,
* dm_bufio_cache_size_per_client and dm_bufio_client_count
* This mutex protects dm_bufio_cache_size_latch and dm_bufio_client_count
*/
static DEFINE_MUTEX(dm_bufio_clients_lock);
static struct workqueue_struct *dm_bufio_wq;
static struct delayed_work dm_bufio_cleanup_old_work;
static struct work_struct dm_bufio_replacement_work;
#ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
static void buffer_record_stack(struct dm_buffer *b)
{
......@@ -285,15 +291,23 @@ static void __remove(struct dm_bufio_client *c, struct dm_buffer *b)
/*----------------------------------------------------------------*/
static void adjust_total_allocated(unsigned char data_mode, long diff)
static void adjust_total_allocated(struct dm_buffer *b, bool unlink)
{
unsigned char data_mode;
long diff;
static unsigned long * const class_ptr[DATA_MODE_LIMIT] = {
&dm_bufio_allocated_kmem_cache,
&dm_bufio_allocated_get_free_pages,
&dm_bufio_allocated_vmalloc,
};
spin_lock(&param_spinlock);
data_mode = b->data_mode;
diff = (long)b->c->block_size;
if (unlink)
diff = -diff;
spin_lock(&global_spinlock);
*class_ptr[data_mode] += diff;
......@@ -302,7 +316,19 @@ static void adjust_total_allocated(unsigned char data_mode, long diff)
if (dm_bufio_current_allocated > dm_bufio_peak_allocated)
dm_bufio_peak_allocated = dm_bufio_current_allocated;
spin_unlock(&param_spinlock);
b->accessed = 1;
if (!unlink) {
list_add(&b->global_list, &global_queue);
global_num++;
if (dm_bufio_current_allocated > dm_bufio_cache_size)
queue_work(dm_bufio_wq, &dm_bufio_replacement_work);
} else {
list_del(&b->global_list);
global_num--;
}
spin_unlock(&global_spinlock);
}
/*
......@@ -323,9 +349,6 @@ static void __cache_size_refresh(void)
dm_bufio_default_cache_size);
dm_bufio_cache_size_latch = dm_bufio_default_cache_size;
}
dm_bufio_cache_size_per_client = dm_bufio_cache_size_latch /
(dm_bufio_client_count ? : 1);
}
/*
......@@ -431,8 +454,6 @@ static struct dm_buffer *alloc_buffer(struct dm_bufio_client *c, gfp_t gfp_mask)
return NULL;
}
adjust_total_allocated(b->data_mode, (long)c->block_size);
#ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
b->stack_len = 0;
#endif
......@@ -446,8 +467,6 @@ static void free_buffer(struct dm_buffer *b)
{
struct dm_bufio_client *c = b->c;
adjust_total_allocated(b->data_mode, -(long)c->block_size);
free_buffer_data(c, b->data, b->data_mode);
kmem_cache_free(c->slab_buffer, b);
}
......@@ -465,6 +484,8 @@ static void __link_buffer(struct dm_buffer *b, sector_t block, int dirty)
list_add(&b->lru_list, &c->lru[dirty]);
__insert(b->c, b);
b->last_accessed = jiffies;
adjust_total_allocated(b, false);
}
/*
......@@ -479,6 +500,8 @@ static void __unlink_buffer(struct dm_buffer *b)
c->n_buffers[b->list_mode]--;
__remove(b->c, b);
list_del(&b->lru_list);
adjust_total_allocated(b, true);
}
/*
......@@ -488,6 +511,8 @@ static void __relink_lru(struct dm_buffer *b, int dirty)
{
struct dm_bufio_client *c = b->c;
b->accessed = 1;
BUG_ON(!c->n_buffers[b->list_mode]);
c->n_buffers[b->list_mode]--;
......@@ -906,36 +931,6 @@ static void __write_dirty_buffers_async(struct dm_bufio_client *c, int no_wait,
}
}
/*
* Get writeback threshold and buffer limit for a given client.
*/
static void __get_memory_limit(struct dm_bufio_client *c,
unsigned long *threshold_buffers,
unsigned long *limit_buffers)
{
unsigned long buffers;
if (unlikely(READ_ONCE(dm_bufio_cache_size) != dm_bufio_cache_size_latch)) {
if (mutex_trylock(&dm_bufio_clients_lock)) {
__cache_size_refresh();
mutex_unlock(&dm_bufio_clients_lock);
}
}
buffers = dm_bufio_cache_size_per_client;
if (likely(c->sectors_per_block_bits >= 0))
buffers >>= c->sectors_per_block_bits + SECTOR_SHIFT;
else
buffers /= c->block_size;
if (buffers < c->minimum_buffers)
buffers = c->minimum_buffers;
*limit_buffers = buffers;
*threshold_buffers = mult_frac(buffers,
DM_BUFIO_WRITEBACK_PERCENT, 100);
}
/*
* Check if we're over watermark.
* If we are over threshold_buffers, start freeing buffers.
......@@ -944,23 +939,7 @@ static void __get_memory_limit(struct dm_bufio_client *c,
static void __check_watermark(struct dm_bufio_client *c,
struct list_head *write_list)
{
unsigned long threshold_buffers, limit_buffers;
__get_memory_limit(c, &threshold_buffers, &limit_buffers);
while (c->n_buffers[LIST_CLEAN] + c->n_buffers[LIST_DIRTY] >
limit_buffers) {
struct dm_buffer *b = __get_unclaimed_buffer(c);
if (!b)
return;
__free_buffer_wake(b);
cond_resched();
}
if (c->n_buffers[LIST_DIRTY] > threshold_buffers)
if (c->n_buffers[LIST_DIRTY] > c->n_buffers[LIST_CLEAN] * DM_BUFIO_WRITEBACK_RATIO)
__write_dirty_buffers_async(c, 1, write_list);
}
......@@ -1841,6 +1820,74 @@ static void __evict_old_buffers(struct dm_bufio_client *c, unsigned long age_hz)
dm_bufio_unlock(c);
}
static void do_global_cleanup(struct work_struct *w)
{
struct dm_bufio_client *locked_client = NULL;
struct dm_bufio_client *current_client;
struct dm_buffer *b;
unsigned spinlock_hold_count;
unsigned long threshold = dm_bufio_cache_size -
dm_bufio_cache_size / DM_BUFIO_LOW_WATERMARK_RATIO;
unsigned long loops = global_num * 2;
mutex_lock(&dm_bufio_clients_lock);
while (1) {
cond_resched();
spin_lock(&global_spinlock);
if (unlikely(dm_bufio_current_allocated <= threshold))
break;
spinlock_hold_count = 0;
get_next:
if (!loops--)
break;
if (unlikely(list_empty(&global_queue)))
break;
b = list_entry(global_queue.prev, struct dm_buffer, global_list);
if (b->accessed) {
b->accessed = 0;
list_move(&b->global_list, &global_queue);
if (likely(++spinlock_hold_count < 16))
goto get_next;
spin_unlock(&global_spinlock);
continue;
}
current_client = b->c;
if (unlikely(current_client != locked_client)) {
if (locked_client)
dm_bufio_unlock(locked_client);
if (!dm_bufio_trylock(current_client)) {
spin_unlock(&global_spinlock);
dm_bufio_lock(current_client);
locked_client = current_client;
continue;
}
locked_client = current_client;
}
spin_unlock(&global_spinlock);
if (unlikely(!__try_evict_buffer(b, GFP_KERNEL))) {
spin_lock(&global_spinlock);
list_move(&b->global_list, &global_queue);
spin_unlock(&global_spinlock);
}
}
spin_unlock(&global_spinlock);
if (locked_client)
dm_bufio_unlock(locked_client);
mutex_unlock(&dm_bufio_clients_lock);
}
static void cleanup_old_buffers(void)
{
unsigned long max_age_hz = get_max_age_hz();
......@@ -1856,14 +1903,11 @@ static void cleanup_old_buffers(void)
mutex_unlock(&dm_bufio_clients_lock);
}
static struct workqueue_struct *dm_bufio_wq;
static struct delayed_work dm_bufio_work;
static void work_fn(struct work_struct *w)
{
cleanup_old_buffers();
queue_delayed_work(dm_bufio_wq, &dm_bufio_work,
queue_delayed_work(dm_bufio_wq, &dm_bufio_cleanup_old_work,
DM_BUFIO_WORK_TIMER_SECS * HZ);
}
......@@ -1905,8 +1949,9 @@ static int __init dm_bufio_init(void)
if (!dm_bufio_wq)
return -ENOMEM;
INIT_DELAYED_WORK(&dm_bufio_work, work_fn);
queue_delayed_work(dm_bufio_wq, &dm_bufio_work,
INIT_DELAYED_WORK(&dm_bufio_cleanup_old_work, work_fn);
INIT_WORK(&dm_bufio_replacement_work, do_global_cleanup);
queue_delayed_work(dm_bufio_wq, &dm_bufio_cleanup_old_work,
DM_BUFIO_WORK_TIMER_SECS * HZ);
return 0;
......@@ -1919,7 +1964,8 @@ static void __exit dm_bufio_exit(void)
{
int bug = 0;
cancel_delayed_work_sync(&dm_bufio_work);
cancel_delayed_work_sync(&dm_bufio_cleanup_old_work);
flush_workqueue(dm_bufio_wq);
destroy_workqueue(dm_bufio_wq);
if (dm_bufio_client_count) {
......
This diff has been collapsed.
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2019 Arrikto, Inc. All Rights Reserved.
*/
#ifndef DM_CLONE_METADATA_H
#define DM_CLONE_METADATA_H
#include "persistent-data/dm-block-manager.h"
#include "persistent-data/dm-space-map-metadata.h"
#define DM_CLONE_METADATA_BLOCK_SIZE DM_SM_METADATA_BLOCK_SIZE
/*
* The metadata device is currently limited in size.
*/
#define DM_CLONE_METADATA_MAX_SECTORS DM_SM_METADATA_MAX_SECTORS
/*
* A metadata device larger than 16GB triggers a warning.
*/
#define DM_CLONE_METADATA_MAX_SECTORS_WARNING (16 * (1024 * 1024 * 1024 >> SECTOR_SHIFT))
#define SPACE_MAP_ROOT_SIZE 128
/* dm-clone metadata */
struct dm_clone_metadata;
/*
* Set region status to hydrated.
*
* @cmd: The dm-clone metadata
* @region_nr: The region number
*
* This function doesn't block, so it's safe to call it from interrupt context.
*/
int dm_clone_set_region_hydrated(struct dm_clone_metadata *cmd, unsigned long region_nr);
/*
* Set status of all regions in the provided range to hydrated, if not already
* hydrated.
*
* @cmd: The dm-clone metadata
* @start: Starting region number
* @nr_regions: Number of regions in the range
*
* This function doesn't block, so it's safe to call it from interrupt context.
*/
int dm_clone_cond_set_range(struct dm_clone_metadata *cmd, unsigned long start,
unsigned long nr_regions);
/*
* Read existing or create fresh metadata.
*
* @bdev: The device storing the metadata
* @target_size: The target size
* @region_size: The region size
*
* @returns: The dm-clone metadata
*
* This function reads the superblock of @bdev and checks if it's all zeroes.
* If it is, it formats @bdev and creates fresh metadata. If it isn't, it
* validates the metadata stored in @bdev.
*/
struct dm_clone_metadata *dm_clone_metadata_open(struct block_device *bdev,
sector_t target_size,
sector_t region_size);
/*
* Free the resources related to metadata management.
*/
void dm_clone_metadata_close(struct dm_clone_metadata *cmd);
/*
* Commit dm-clone metadata to disk.
*/
int dm_clone_metadata_commit(struct dm_clone_metadata *cmd);
/*
* Reload the in core copy of the on-disk bitmap.
*
* This should be used after aborting a metadata transaction and setting the
* metadata to read-only, to invalidate the in-core cache and make it match the
* on-disk metadata.
*
* WARNING: It must not be called concurrently with either
* dm_clone_set_region_hydrated() or dm_clone_cond_set_range(), as it updates
* the region bitmap without taking the relevant spinlock. We don't take the
* spinlock because dm_clone_reload_in_core_bitset() does I/O, so it may block.
*
* But, it's safe to use it after calling dm_clone_metadata_set_read_only(),
* because the latter sets the metadata to read-only mode. Both
* dm_clone_set_region_hydrated() and dm_clone_cond_set_range() refuse to touch
* the region bitmap, after calling dm_clone_metadata_set_read_only().
*/
int dm_clone_reload_in_core_bitset(struct dm_clone_metadata *cmd);
/*
* Check whether dm-clone's metadata changed this transaction.
*/
bool dm_clone_changed_this_transaction(struct dm_clone_metadata *cmd);
/*
* Abort current metadata transaction and rollback metadata to the last
* committed transaction.
*/
int dm_clone_metadata_abort(struct dm_clone_metadata *cmd);
/*
* Switches metadata to a read only mode. Once read-only mode has been entered
* the following functions will return -EPERM:
*
* dm_clone_metadata_commit()
* dm_clone_set_region_hydrated()
* dm_clone_cond_set_range()
* dm_clone_metadata_abort()
*/
void dm_clone_metadata_set_read_only(struct dm_clone_metadata *cmd);
void dm_clone_metadata_set_read_write(struct dm_clone_metadata *cmd);
/*
* Returns true if the hydration of the destination device is finished.
*/
bool dm_clone_is_hydration_done(struct dm_clone_metadata *cmd);
/*
* Returns true if region @region_nr is hydrated.
*/
bool dm_clone_is_region_hydrated(struct dm_clone_metadata *cmd, unsigned long region_nr);
/*
* Returns true if all the regions in the range are hydrated.
*/
bool dm_clone_is_range_hydrated(struct dm_clone_metadata *cmd,
unsigned long start, unsigned long nr_regions);
/*
* Returns the number of hydrated regions.
*/
unsigned long dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *cmd);
/*
* Returns the first unhydrated region with region_nr >= @start
*/
unsigned long dm_clone_find_next_unhydrated_region(struct dm_clone_metadata *cmd,
unsigned long start);
/*
* Get the number of free metadata blocks.
*/
int dm_clone_get_free_metadata_block_count(struct dm_clone_metadata *cmd, dm_block_t *result);
/*
* Get the total number of metadata blocks.
*/
int dm_clone_get_metadata_dev_size(struct dm_clone_metadata *cmd, dm_block_t *result);
#endif /* DM_CLONE_METADATA_H */
This diff has been collapsed.
This diff has been collapsed.
......@@ -601,17 +601,27 @@ static void list_version_get_info(struct target_type *tt, void *param)
info->vers = align_ptr(((void *) ++info->vers) + strlen(tt->name) + 1);
}
static int list_versions(struct file *filp, struct dm_ioctl *param, size_t param_size)
static int __list_versions(struct dm_ioctl *param, size_t param_size, const char *name)
{
size_t len, needed = 0;
struct dm_target_versions *vers;
struct vers_iter iter_info;
struct target_type *tt = NULL;
if (name) {
tt = dm_get_target_type(name);
if (!tt)
return -EINVAL;
}
/*
* Loop through all the devices working out how much
* space we need.
*/
dm_target_iterate(list_version_get_needed, &needed);
if (!tt)
dm_target_iterate(list_version_get_needed, &needed);
else
list_version_get_needed(tt, &needed);
/*
* Grab our output buffer.
......@@ -632,13 +642,28 @@ static int list_versions(struct file *filp, struct dm_ioctl *param, size_t param
/*
* Now loop through filling out the names & versions.
*/
dm_target_iterate(list_version_get_info, &iter_info);
if (!tt)
dm_target_iterate(list_version_get_info, &iter_info);
else
list_version_get_info(tt, &iter_info);
param->flags |= iter_info.flags;
out:
if (tt)
dm_put_target_type(tt);
return 0;
}
static int list_versions(struct file *filp, struct dm_ioctl *param, size_t param_size)
{
return __list_versions(param, param_size, NULL);
}
static int get_target_version(struct file *filp, struct dm_ioctl *param, size_t param_size)
{
return __list_versions(param, param_size, param->name);
}
static int check_name(const char *name)
{
if (strchr(name, '/')) {
......@@ -1592,7 +1617,7 @@ static int target_message(struct file *filp, struct dm_ioctl *param, size_t para
}
ti = dm_table_find_target(table, tmsg->sector);
if (!dm_target_is_valid(ti)) {
if (!ti) {
DMWARN("Target message sector outside device.");
r = -EINVAL;
} else if (ti->type->message)
......@@ -1664,6 +1689,7 @@ static ioctl_fn lookup_ioctl(unsigned int cmd, int *ioctl_flags)
{DM_TARGET_MSG_CMD, 0, target_message},
{DM_DEV_SET_GEOMETRY_CMD, 0, dev_set_geometry},
{DM_DEV_ARM_POLL, IOCTL_FLAGS_NO_PARAMS, dev_arm_poll},
{DM_GET_TARGET_VERSION, 0, get_target_version},
};
if (unlikely(cmd >= ARRAY_SIZE(_ioctls)))
......
......@@ -3738,18 +3738,18 @@ static int raid_iterate_devices(struct dm_target *ti,
static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
{
struct raid_set *rs = ti->private;
unsigned int chunk_size = to_bytes(rs->md.chunk_sectors);
unsigned int chunk_size_bytes = to_bytes(rs->md.chunk_sectors);
blk_limits_io_min(limits, chunk_size);
blk_limits_io_opt(limits, chunk_size * mddev_data_stripes(rs));
blk_limits_io_min(limits, chunk_size_bytes);
blk_limits_io_opt(limits, chunk_size_bytes * mddev_data_stripes(rs));
/*
* RAID1 and RAID10 personalities require bio splitting,
* RAID0/4/5/6 don't and process large discard bios properly.
*/
if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
limits->discard_granularity = chunk_size;
limits->max_discard_sectors = chunk_size;
limits->discard_granularity = chunk_size_bytes;
limits->max_discard_sectors = rs->md.chunk_sectors;
}
}
......
......@@ -878,12 +878,9 @@ static struct mirror_set *alloc_context(unsigned int nr_mirrors,
struct dm_target *ti,
struct dm_dirty_log *dl)
{
size_t len;
struct mirror_set *ms = NULL;
len = sizeof(*ms) + (sizeof(ms->mirror[0]) * nr_mirrors);
struct mirror_set *ms =
kzalloc(struct_size(ms, mirror, nr_mirrors), GFP_KERNEL);
ms = kzalloc(len, GFP_KERNEL);
if (!ms) {
ti->error = "Cannot allocate mirror context";
return NULL;
......
......@@ -262,7 +262,7 @@ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end,
if (n_entries != (size_t)n_entries || !(size_t)(n_entries + 1))
return -EOVERFLOW;
shared_alloc_size = sizeof(struct dm_stat) + (size_t)n_entries * sizeof(struct dm_stat_shared);
shared_alloc_size = struct_size(s, stat_shared, n_entries);
if ((shared_alloc_size - sizeof(struct dm_stat)) / sizeof(struct dm_stat_shared) != n_entries)
return -EOVERFLOW;
......
......@@ -163,10 +163,8 @@ static int alloc_targets(struct dm_table *t, unsigned int num)
/*
* Allocate both the target array and offset array at once.
* Append an empty entry to catch sectors beyond the end of
* the device.
*/
n_highs = (sector_t *) dm_vcalloc(num + 1, sizeof(struct dm_target) +
n_highs = (sector_t *) dm_vcalloc(num, sizeof(struct dm_target) +
sizeof(sector_t));
if (!n_highs)
return -ENOMEM;
......@@ -1359,7 +1357,7 @@ struct dm_target *dm_table_get_target(struct dm_table *t, unsigned int index)
/*
* Search the btree for the correct target.
*
* Caller should check returned pointer with dm_target_is_valid()
* Caller should check returned pointer for NULL
* to trap I/O beyond end of device.
*/
struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
......@@ -1368,7 +1366,7 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
sector_t *node;
if (unlikely(sector >= dm_table_get_size(t)))
return &t->targets[t->num_targets];
return NULL;
for (l = 0; l < t->depth; l++) {
n = get_child(n, k);
......
This diff has been collapsed.
This diff has been collapsed.
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2019 Microsoft Corporation.
*
* Author: Jaskaran Singh Khurana <jaskarankhurana@linux.microsoft.com>
*
*/
#ifndef DM_VERITY_SIG_VERIFICATION_H
#define DM_VERITY_SIG_VERIFICATION_H
#define DM_VERITY_ROOT_HASH_VERIFICATION "DM Verity Sig Verification"
#define DM_VERITY_ROOT_HASH_VERIFICATION_OPT_SIG_KEY "root_hash_sig_key_desc"
struct dm_verity_sig_opts {
unsigned int sig_size;
u8 *sig;
};
#ifdef CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG
#define DM_VERITY_ROOT_HASH_VERIFICATION_OPTS 2
int verity_verify_root_hash(const void *data, size_t data_len,
const void *sig_data, size_t sig_len);
bool verity_verify_is_sig_opt_arg(const char *arg_name);
int verity_verify_sig_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v,
struct dm_verity_sig_opts *sig_opts,
unsigned int *argc, const char *arg_name);
void verity_verify_sig_opts_cleanup(struct dm_verity_sig_opts *sig_opts);
#else
#define DM_VERITY_ROOT_HASH_VERIFICATION_OPTS 0
static inline int verity_verify_root_hash(const void *data, size_t data_len,
const void *sig_data, size_t sig_len)
{
return 0;
}
static inline bool verity_verify_is_sig_opt_arg(const char *arg_name)
{
return false;
}
static inline int verity_verify_sig_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v,
struct dm_verity_sig_opts *sig_opts,
unsigned int *argc, const char *arg_name)
{
return -EINVAL;
}
static inline void verity_verify_sig_opts_cleanup(struct dm_verity_sig_opts *sig_opts)
{
}
#endif /* CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG */
#endif /* DM_VERITY_SIG_VERIFICATION_H */
......@@ -63,6 +63,8 @@ struct dm_verity {
struct dm_verity_fec *fec; /* forward error correction */
unsigned long *validated_blocks; /* bitset blocks validated */
char *signature_key_desc; /* signature keyring reference */
};
struct dm_verity_io {
......
This diff has been collapsed.
......@@ -134,8 +134,6 @@ static int dmz_submit_bio(struct dmz_target *dmz, struct dm_zone *zone,
refcount_inc(&bioctx->ref);
generic_make_request(clone);
if (clone->bi_status == BLK_STS_IOERR)
return -EIO;
if (bio_op(bio) == REQ_OP_WRITE && dmz_is_seq(zone))
zone->wp_block += nr_blocks;
......
This diff has been collapsed.
......@@ -85,11 +85,6 @@ struct target_type *dm_get_immutable_target_type(struct mapped_device *md);
int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t);
/*
* To check the return value from dm_table_find_target().
*/
#define dm_target_is_valid(t) ((t)->table)
/*
* To check whether the target type is bio-based or not (request-based).
*/
......
......@@ -369,10 +369,6 @@ int sm_ll_find_free_block(struct ll_disk *ll, dm_block_t begin,
*/
dm_tm_unlock(ll->tm, blk);
continue;
} else if (r < 0) {
dm_tm_unlock(ll->tm, blk);
return r;
}
dm_tm_unlock(ll->tm, blk);
......
This diff has been collapsed.