Unverified commit 9c8467e9, authored by openeuler-ci-bot, committed by Gitee

!1802 zram: Support multiple compression streams

Merge Pull Request from: @ci-robot 
 
PR sync from: Jinjiang Tu <tujinjiang@huawei.com>
https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/UAQZCBQFUA5LYCLXXVHHJGAK2GNRA6LC/ 
Support multiple compression streams for zram.

Alexey Romanov (1):
  zram: add size class equals check into recompression

Andy Shevchenko (1):
  lib/cmdline: Export next_arg() for being used in modules

Ming Lei (1):
  zram: fix race between zram_reset_device() and disksize_store()

Sergey Senozhatsky (11):
  zram: preparation for multi-zcomp support
  zram: add recompression algorithm sysfs knob
  zram: factor out WB and non-WB zram read functions
  zram: introduce recompress sysfs knob
  zram: add recompress flag to read_block_state()
  zram: clarify writeback_store() comment
  zram: remove redundant checks from zram_recompress()
  zram: add algo parameter support to zram_recompress()
  documentation: add zram recompression documentation
  zram: add incompressible writeback
  zram: add incompressible flag to read_block_state()


-- 
2.25.1
 
https://gitee.com/openeuler/kernel/issues/I7TWVA 
 
Link: https://gitee.com/openeuler/kernel/pulls/1802

Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com> 
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com> 
@@ -334,6 +334,11 @@ Admin can request writeback of those idle pages at right timing via::

With the command, zram writeback idle pages from memory to the storage.

If a user chooses to writeback only incompressible pages (pages that none of
the algorithms can compress) this can be accomplished with::

	echo incompressible > /sys/block/zramX/writeback

Lots of write I/O to a flash device can potentially cause flash wearout, so
the admin needs to design a write limitation to guarantee storage health
for the entire product life.
@@ -382,6 +387,87 @@ budget in next setting is user's job.

If the admin wants to measure the writeback count in a certain period, they
could know it via the 3rd column of /sys/block/zram0/bd_stat.
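
For example, a quick way to sample that counter (a sketch; assumes the device
is zram0 and writeback has been configured)::

	#bd_stat columns are bd_count, bd_reads and bd_writes;
	#the 3rd column is the cumulative writeback count
	awk '{ print $3 }' /sys/block/zram0/bd_stat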

recompression
-------------

With CONFIG_ZRAM_MULTI_COMP, zram can recompress pages using alternative
(secondary) compression algorithms. The basic idea is that an alternative
compression algorithm can provide a better compression ratio at the price
of (potentially) slower compression/decompression speeds. An alternative
compression algorithm can, for example, be more successful at compressing
huge pages (those that the default algorithm failed to compress). Another
application is idle-page recompression: pages that are cold and sit in
memory can be recompressed using a more effective algorithm and thus
reduce zsmalloc memory usage.

With CONFIG_ZRAM_MULTI_COMP, zram supports up to 4 compression algorithms:
one primary and up to 3 secondary ones. The primary zram compressor is
explained in "3) Select compression algorithm"; secondary algorithms are
configured using the recomp_algorithm device attribute.

Example::

	#show supported recompression algorithms
	cat /sys/block/zramX/recomp_algorithm
	#1: lzo lzo-rle lz4 lz4hc [zstd]
	#2: lzo lzo-rle lz4 [lz4hc] zstd

Alternative compression algorithms are sorted by priority. In the example
above, zstd is used as the first alternative algorithm, which has priority
of 1, while lz4hc is configured as a compression algorithm with priority 2.
The alternative compression algorithm's priority is provided during
algorithm configuration::

	#select zstd recompression algorithm, priority 1
	echo "algo=zstd priority=1" > /sys/block/zramX/recomp_algorithm

	#select deflate recompression algorithm, priority 2
	echo "algo=deflate priority=2" > /sys/block/zramX/recomp_algorithm

Another device attribute that CONFIG_ZRAM_MULTI_COMP enables is recompress,
which controls recompression.

Examples::

	#IDLE pages recompression is activated by `idle` mode
	echo "type=idle" > /sys/block/zramX/recompress

	#HUGE pages recompression is activated by `huge` mode
	echo "type=huge" > /sys/block/zramX/recompress

	#HUGE_IDLE pages recompression is activated by `huge_idle` mode
	echo "type=huge_idle" > /sys/block/zramX/recompress

The number of idle pages can be significant, so user-space can pass a size
threshold (in bytes) to the recompress knob: zram will recompress only pages
of equal or greater size::

	#recompress all pages larger than 3000 bytes
	echo "threshold=3000" > /sys/block/zramX/recompress

	#recompress idle pages larger than 2000 bytes
	echo "type=idle threshold=2000" > /sys/block/zramX/recompress

Recompression of idle pages requires memory tracking.
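
A possible sequence, assuming CONFIG_ZRAM_MEMORY_TRACKING is enabled, is to
mark pages idle first and then recompress whatever stayed idle (a sketch)::

	#mark all allocated pages as idle (pages lose the flag when accessed)
	echo all > /sys/block/zramX/idle
	#...wait for a workload-specific period of time...
	#recompress the pages that are still idle
	echo "type=idle" > /sys/block/zramX/recompress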

During re-compression, for every page that matches the re-compression
criteria, ZRAM iterates the list of registered alternative compression
algorithms in order of their priorities. ZRAM stops either when the
re-compression was successful (the re-compressed object is smaller than
the original one and matches the re-compression criteria, e.g. the size
threshold) or when there are no secondary algorithms left to try. If none
of the secondary algorithms could successfully re-compress the page, such
a page is marked as incompressible, so ZRAM will not attempt to re-compress
it in the future.
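
The outcome of a recompression pass can be inspected through the block_state
file described in the "memory tracking" section below. For instance (a
sketch, assuming CONFIG_ZRAM_MEMORY_TRACKING is enabled and debugfs is
mounted)::

	#count slots recompressed by a secondary algorithm ('r' flag set)
	awk '$3 ~ /r/' /sys/kernel/debug/zram/zram0/block_state | wc -l
	#count slots that no algorithm could compress ('n' flag set)
	awk '$3 ~ /n/' /sys/kernel/debug/zram/zram0/block_state | wc -l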

This re-compression behaviour, when it iterates through the list of
registered compression algorithms, increases our chances of finding an
algorithm that successfully compresses a particular page. Sometimes, however,
it is convenient (and sometimes even necessary) to limit recompression to
a single particular algorithm so that it will not try any other algorithms.
This can be achieved by providing an algo=NAME parameter::

	#use zstd algorithm only (if registered)
	echo "type=huge algo=zstd" > /sys/block/zramX/recompress

memory tracking
===============
@@ -392,9 +478,11 @@ pages of the process with *pagemap.
If you enable the feature, you could see block state via
/sys/kernel/debug/zram/zram0/block_state. The output is as follows::
300 75.033841 .wh.
301 63.806904 s...
302 63.806919 ..hi
300 75.033841 .wh...
301 63.806904 s.....
302 63.806919 ..hi..
303 62.801919 ....r.
304 146.781902 ..hi.n
First column
	zram's block index.
@@ -411,6 +499,10 @@ Third column
	huge page
i:
	idle page
r:
	recompressed page (secondary compression algorithm)
n:
	none of the algorithms (including secondary ones) could compress it
First line of above example says 300th block is accessed at 75.033841sec
and the block's state is huge so it is written back to the backing
......
@@ -37,3 +37,12 @@ config ZRAM_MEMORY_TRACKING
	  /sys/kernel/debug/zram/zramX/block_state.

	  See Documentation/admin-guide/blockdev/zram.rst for more information.

config ZRAM_MULTI_COMP
	bool "Enable multiple compression streams"
	depends on ZRAM
	help
	  This will enable multi-compression streams, so that ZRAM can
	  re-compress pages using a potentially slower but more effective
	  compression algorithm. Note, that IDLE page recompression
	  requires ZRAM_MEMORY_TRACKING.
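
For reference, a kernel configuration fragment that enables recompression
together with idle-page tracking might look like this (a sketch)::

	CONFIG_ZRAM=y
	CONFIG_ZRAM_MEMORY_TRACKING=y
	CONFIG_ZRAM_MULTI_COMP=y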
@@ -204,7 +204,7 @@ void zcomp_destroy(struct zcomp *comp)
* case of allocation error, or any other error potentially
* returned by zcomp_init().
*/
struct zcomp *zcomp_create(const char *compress)
struct zcomp *zcomp_create(const char *alg)
{
struct zcomp *comp;
int error;
@@ -214,14 +214,14 @@ struct zcomp *zcomp_create(const char *compress)
* is not loaded yet. We must do it here, otherwise we are about to
* call /sbin/modprobe under CPU hot-plug lock.
*/
if (!zcomp_available_algorithm(compress))
if (!zcomp_available_algorithm(alg))
return ERR_PTR(-EINVAL);
comp = kzalloc(sizeof(struct zcomp), GFP_KERNEL);
if (!comp)
return ERR_PTR(-ENOMEM);
comp->name = compress;
comp->name = alg;
error = zcomp_init(comp);
if (error) {
kfree(comp);
......
@@ -27,7 +27,7 @@ int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node);
ssize_t zcomp_available_show(const char *comp, char *buf);
bool zcomp_available_algorithm(const char *comp);
struct zcomp *zcomp_create(const char *comp);
struct zcomp *zcomp_create(const char *alg);
void zcomp_destroy(struct zcomp *comp);
struct zcomp_strm *zcomp_stream_get(struct zcomp *comp);
......
@@ -157,6 +157,25 @@ static inline bool is_partial_io(struct bio_vec *bvec)
}
#endif
static inline void zram_set_priority(struct zram *zram, u32 index, u32 prio)
{
prio &= ZRAM_COMP_PRIORITY_MASK;
/*
* Clear previous priority value first, in case if we recompress
* further an already recompressed page
*/
zram->table[index].flags &= ~(ZRAM_COMP_PRIORITY_MASK <<
ZRAM_COMP_PRIORITY_BIT1);
zram->table[index].flags |= (prio << ZRAM_COMP_PRIORITY_BIT1);
}
static inline u32 zram_get_priority(struct zram *zram, u32 index)
{
u32 prio = zram->table[index].flags >> ZRAM_COMP_PRIORITY_BIT1;
return prio & ZRAM_COMP_PRIORITY_MASK;
}
/*
* Check if request is within bounds and aligned on zram logical blocks.
*/
@@ -620,8 +639,9 @@ static int read_from_bdev_async(struct zram *zram, struct bio_vec *bvec,
return 1;
}
#define HUGE_WRITEBACK 1
#define IDLE_WRITEBACK 2
#define HUGE_WRITEBACK (1<<0)
#define IDLE_WRITEBACK (1<<1)
#define INCOMPRESSIBLE_WRITEBACK (1<<2)
static ssize_t writeback_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
@@ -640,6 +660,8 @@ static ssize_t writeback_store(struct device *dev,
mode = IDLE_WRITEBACK;
else if (sysfs_streq(buf, "huge"))
mode = HUGE_WRITEBACK;
else if (sysfs_streq(buf, "incompressible"))
mode = INCOMPRESSIBLE_WRITEBACK;
else
return -EINVAL;
@@ -693,11 +715,15 @@ static ssize_t writeback_store(struct device *dev,
goto next;
if (mode == IDLE_WRITEBACK &&
!zram_test_flag(zram, index, ZRAM_IDLE))
!zram_test_flag(zram, index, ZRAM_IDLE))
goto next;
if (mode == HUGE_WRITEBACK &&
!zram_test_flag(zram, index, ZRAM_HUGE))
!zram_test_flag(zram, index, ZRAM_HUGE))
goto next;
if (mode & INCOMPRESSIBLE_WRITEBACK &&
!zram_test_flag(zram, index, ZRAM_INCOMPRESSIBLE))
goto next;
/*
* Clearing ZRAM_UNDER_WB is duty of caller.
* IOW, zram_free_page never clear it.
@@ -732,8 +758,12 @@ static ssize_t writeback_store(struct device *dev,
zram_clear_flag(zram, index, ZRAM_IDLE);
zram_slot_unlock(zram, index);
/*
* Return last IO error unless every IO were
* not suceeded.
* BIO errors are not fatal, we continue and simply
* attempt to writeback the remaining objects (pages).
* At the same time we need to signal user-space that
* some writes (at least one, but also could be all of
* them) were not successful and we do so by returning
* the most recent BIO error.
*/
ret = err;
continue;
@@ -899,13 +929,16 @@ static ssize_t read_block_state(struct file *file, char __user *buf,
ts = ktime_to_timespec64(zram->table[index].ac_time);
copied = snprintf(kbuf + written, count,
"%12zd %12lld.%06lu %c%c%c%c\n",
"%12zd %12lld.%06lu %c%c%c%c%c%c\n",
index, (s64)ts.tv_sec,
ts.tv_nsec / NSEC_PER_USEC,
zram_test_flag(zram, index, ZRAM_SAME) ? 's' : '.',
zram_test_flag(zram, index, ZRAM_WB) ? 'w' : '.',
zram_test_flag(zram, index, ZRAM_HUGE) ? 'h' : '.',
zram_test_flag(zram, index, ZRAM_IDLE) ? 'i' : '.');
zram_test_flag(zram, index, ZRAM_IDLE) ? 'i' : '.',
zram_get_priority(zram, index) ? 'r' : '.',
zram_test_flag(zram, index,
ZRAM_INCOMPRESSIBLE) ? 'n' : '.');
if (count <= copied) {
zram_slot_unlock(zram, index);
@@ -979,47 +1012,144 @@ static ssize_t max_comp_streams_store(struct device *dev,
return len;
}
static ssize_t comp_algorithm_show(struct device *dev,
struct device_attribute *attr, char *buf)
static void comp_algorithm_set(struct zram *zram, u32 prio, const char *alg)
{
size_t sz;
struct zram *zram = dev_to_zram(dev);
/* Do not free statically defined compression algorithms */
if (zram->comp_algs[prio] != default_compressor)
kfree(zram->comp_algs[prio]);
zram->comp_algs[prio] = alg;
}
static ssize_t __comp_algorithm_show(struct zram *zram, u32 prio, char *buf)
{
ssize_t sz;
down_read(&zram->init_lock);
sz = zcomp_available_show(zram->compressor, buf);
sz = zcomp_available_show(zram->comp_algs[prio], buf);
up_read(&zram->init_lock);
return sz;
}
static ssize_t comp_algorithm_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
static int __comp_algorithm_store(struct zram *zram, u32 prio, const char *buf)
{
struct zram *zram = dev_to_zram(dev);
char compressor[ARRAY_SIZE(zram->compressor)];
char *compressor;
size_t sz;
strlcpy(compressor, buf, sizeof(compressor));
sz = strlen(buf);
if (sz >= CRYPTO_MAX_ALG_NAME)
return -E2BIG;
compressor = kstrdup(buf, GFP_KERNEL);
if (!compressor)
return -ENOMEM;
/* ignore trailing newline */
sz = strlen(compressor);
if (sz > 0 && compressor[sz - 1] == '\n')
compressor[sz - 1] = 0x00;
if (!zcomp_available_algorithm(compressor))
if (!zcomp_available_algorithm(compressor)) {
kfree(compressor);
return -EINVAL;
}
down_write(&zram->init_lock);
if (init_done(zram)) {
up_write(&zram->init_lock);
kfree(compressor);
pr_info("Can't change algorithm for initialized device\n");
return -EBUSY;
}
strcpy(zram->compressor, compressor);
comp_algorithm_set(zram, prio, compressor);
up_write(&zram->init_lock);
return len;
return 0;
}
static ssize_t comp_algorithm_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct zram *zram = dev_to_zram(dev);
return __comp_algorithm_show(zram, ZRAM_PRIMARY_COMP, buf);
}
static ssize_t comp_algorithm_store(struct device *dev,
struct device_attribute *attr,
const char *buf,
size_t len)
{
struct zram *zram = dev_to_zram(dev);
int ret;
ret = __comp_algorithm_store(zram, ZRAM_PRIMARY_COMP, buf);
return ret ? ret : len;
}
#ifdef CONFIG_ZRAM_MULTI_COMP
static ssize_t recomp_algorithm_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct zram *zram = dev_to_zram(dev);
ssize_t sz = 0;
u32 prio;
for (prio = ZRAM_SECONDARY_COMP; prio < ZRAM_MAX_COMPS; prio++) {
if (!zram->comp_algs[prio])
continue;
sz += scnprintf(buf + sz, PAGE_SIZE - sz - 2, "#%d: ", prio);
sz += __comp_algorithm_show(zram, prio, buf + sz);
}
return sz;
}
static ssize_t recomp_algorithm_store(struct device *dev,
struct device_attribute *attr,
const char *buf,
size_t len)
{
struct zram *zram = dev_to_zram(dev);
int prio = ZRAM_SECONDARY_COMP;
char *args, *param, *val;
char *alg = NULL;
int ret;
args = skip_spaces(buf);
while (*args) {
args = next_arg(args, &param, &val);
if (!*val)
return -EINVAL;
if (!strcmp(param, "algo")) {
alg = val;
continue;
}
if (!strcmp(param, "priority")) {
ret = kstrtoint(val, 10, &prio);
if (ret)
return ret;
continue;
}
}
if (!alg)
return -EINVAL;
if (prio < ZRAM_SECONDARY_COMP || prio >= ZRAM_MAX_COMPS)
return -EINVAL;
ret = __comp_algorithm_store(zram, prio, alg);
return ret ? ret : len;
}
#endif
static ssize_t compact_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
@@ -1188,6 +1318,11 @@ static void zram_free_page(struct zram *zram, size_t index)
atomic64_dec(&zram->stats.huge_pages);
}
if (zram_test_flag(zram, index, ZRAM_INCOMPRESSIBLE))
zram_clear_flag(zram, index, ZRAM_INCOMPRESSIBLE);
zram_set_priority(zram, index, 0);
if (zram_test_flag(zram, index, ZRAM_WB)) {
zram_clear_flag(zram, index, ZRAM_WB);
free_block_bdev(zram, zram_get_element(zram, index));
@@ -1220,29 +1355,37 @@ static void zram_free_page(struct zram *zram, size_t index)
~(1UL << ZRAM_LOCK | 1UL << ZRAM_UNDER_WB));
}
static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
struct bio *bio, bool partial_io)
/*
* Reads a page from the writeback devices. Corresponding ZRAM slot
* should be unlocked.
*/
static int zram_bvec_read_from_bdev(struct zram *zram, struct page *page,
u32 index, struct bio *bio, bool partial_io)
{
struct bio_vec bvec = {
.bv_page = page,
.bv_len = PAGE_SIZE,
.bv_offset = 0,
};
return read_from_bdev(zram, &bvec, zram_get_element(zram, index), bio,
partial_io);
}
/*
* Reads (decompresses if needed) a page from zspool (zsmalloc).
* Corresponding ZRAM slot should be locked.
*/
static int zram_read_from_zspool(struct zram *zram, struct page *page,
u32 index)
{
struct zcomp_strm *zstrm;
unsigned long handle;
unsigned int size;
void *src, *dst;
u32 prio;
int ret;
zram_slot_lock(zram, index);
if (zram_test_flag(zram, index, ZRAM_WB)) {
struct bio_vec bvec;
zram_slot_unlock(zram, index);
bvec.bv_page = page;
bvec.bv_len = PAGE_SIZE;
bvec.bv_offset = 0;
return read_from_bdev(zram, &bvec,
zram_get_element(zram, index),
bio, partial_io);
}
handle = zram_get_handle(zram, index);
if (!handle || zram_test_flag(zram, index, ZRAM_SAME)) {
unsigned long value;
@@ -1252,14 +1395,15 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
mem = kmap_atomic(page);
zram_fill_page(mem, PAGE_SIZE, value);
kunmap_atomic(mem);
zram_slot_unlock(zram, index);
return 0;
}
size = zram_get_obj_size(zram, index);
if (size != PAGE_SIZE)
zstrm = zcomp_stream_get(zram->comp);
if (size != PAGE_SIZE) {
prio = zram_get_priority(zram, index);
zstrm = zcomp_stream_get(zram->comps[prio]);
}
src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
if (size == PAGE_SIZE) {
@@ -1271,20 +1415,43 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
dst = kmap_atomic(page);
ret = zcomp_decompress(zstrm, src, size, dst);
kunmap_atomic(dst);
zcomp_stream_put(zram->comp);
zcomp_stream_put(zram->comps[prio]);
}
zs_unmap_object(zram->mem_pool, handle);
zram_slot_unlock(zram, index);
return ret;
}
static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
struct bio *bio, bool partial_io)
{
int ret;
zram_slot_lock(zram, index);
if (!zram_test_flag(zram, index, ZRAM_WB)) {
/* Slot should be locked through out the function call */
ret = zram_read_from_zspool(zram, page, index);
zram_slot_unlock(zram, index);
} else {
/* Slot should be unlocked before the function call */
zram_slot_unlock(zram, index);
/* A null bio means rw_page was used, we must fallback to bio */
if (!bio)
return -EOPNOTSUPP;
ret = zram_bvec_read_from_bdev(zram, page, index, bio,
partial_io);
}
/* Should NEVER happen. Return bio error if it does. */
if (WARN_ON(ret))
if (WARN_ON(ret < 0))
pr_err("Decompression failed! err=%d, page=%u\n", ret, index);
return ret;
}
static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
u32 index, int offset, struct bio *bio)
u32 index, int offset, struct bio *bio)
{
int ret;
struct page *page;
@@ -1340,13 +1507,13 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
kunmap_atomic(mem);
compress_again:
zstrm = zcomp_stream_get(zram->comp);
zstrm = zcomp_stream_get(zram->comps[ZRAM_PRIMARY_COMP]);
src = kmap_atomic(page);
ret = zcomp_compress(zstrm, src, &comp_len);
kunmap_atomic(src);
if (unlikely(ret)) {
zcomp_stream_put(zram->comp);
zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
pr_err("Compression failed! err=%d\n", ret);
zs_free(zram->mem_pool, handle);
return ret;
@@ -1374,7 +1541,7 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
__GFP_HIGHMEM |
__GFP_MOVABLE);
if (!handle) {
zcomp_stream_put(zram->comp);
zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
atomic64_inc(&zram->stats.writestall);
handle = zs_malloc(zram->mem_pool, comp_len,
GFP_NOIO | __GFP_HIGHMEM |
@@ -1388,7 +1555,7 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
update_used_max(zram, alloced_pages);
if (zram->limit_pages && alloced_pages > zram->limit_pages) {
zcomp_stream_put(zram->comp);
zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
zs_free(zram->mem_pool, handle);
return -ENOMEM;
}
@@ -1402,7 +1569,7 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
if (comp_len == PAGE_SIZE)
kunmap_atomic(src);
zcomp_stream_put(zram->comp);
zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
zs_unmap_object(zram->mem_pool, handle);
atomic64_add(comp_len, &zram->stats.compr_data_size);
out:
@@ -1473,6 +1640,274 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
return ret;
}
#ifdef CONFIG_ZRAM_MULTI_COMP
/*
* This function will decompress (unless it's ZRAM_HUGE) the page and then
* attempt to compress it using provided compression algorithm priority
* (which is potentially more effective).
*
* Corresponding ZRAM slot should be locked.
*/
static int zram_recompress(struct zram *zram, u32 index, struct page *page,
u32 threshold, u32 prio, u32 prio_max)
{
struct zcomp_strm *zstrm = NULL;
unsigned long handle_old;
unsigned long handle_new;
unsigned int comp_len_old;
unsigned int comp_len_new;
unsigned int class_index_old;
unsigned int class_index_new;
u32 num_recomps = 0;
void *src, *dst;
int ret;
handle_old = zram_get_handle(zram, index);
if (!handle_old)
return -EINVAL;
comp_len_old = zram_get_obj_size(zram, index);
/*
* Do not recompress objects that are already "small enough".
*/
if (comp_len_old < threshold)
return 0;
ret = zram_read_from_zspool(zram, page, index);
if (ret)
return ret;
class_index_old = zs_lookup_class_index(zram->mem_pool, comp_len_old);
/*
* Iterate the secondary comp algorithms list (in order of priority)
* and try to recompress the page.
*/
for (; prio < prio_max; prio++) {
if (!zram->comps[prio])
continue;
/*
* Skip if the object is already re-compressed with a higher
* priority algorithm (or same algorithm).
*/
if (prio <= zram_get_priority(zram, index))
continue;
num_recomps++;
zstrm = zcomp_stream_get(zram->comps[prio]);
src = kmap_atomic(page);
ret = zcomp_compress(zstrm, src, &comp_len_new);
kunmap_atomic(src);
if (ret) {
zcomp_stream_put(zram->comps[prio]);
return ret;
}
class_index_new = zs_lookup_class_index(zram->mem_pool,
comp_len_new);
/* Continue until we make progress */
if (class_index_new >= class_index_old ||
(threshold && comp_len_new >= threshold)) {
zcomp_stream_put(zram->comps[prio]);
continue;
}
/* Recompression was successful so break out */
break;
}
/*
* We did not try to recompress, e.g. when we have only one
* secondary algorithm and the page is already recompressed
* using that algorithm
*/
if (!zstrm)
return 0;
if (class_index_new >= class_index_old) {
/*
* Secondary algorithms failed to re-compress the page
* in a way that would save memory, mark the object as
* incompressible so that we will not try to compress
* it again.
*
* We need to make sure that all secondary algorithms have
* failed, so we test if the number of recompressions matches
* the number of active secondary algorithms.
*/
if (num_recomps == zram->num_active_comps - 1)
zram_set_flag(zram, index, ZRAM_INCOMPRESSIBLE);
return 0;
}
/* Successful recompression but above threshold */
if (threshold && comp_len_new >= threshold)
return 0;
/*
* No direct reclaim (slow path) for handle allocation and no
* re-compression attempt (unlike in __zram_bvec_write()) since
* we already have stored that object in zsmalloc. If we cannot
* alloc memory for recompressed object then we bail out and
* simply keep the old (existing) object in zsmalloc.
*/
handle_new = zs_malloc(zram->mem_pool, comp_len_new,
__GFP_KSWAPD_RECLAIM |
__GFP_NOWARN |
__GFP_HIGHMEM |
__GFP_MOVABLE);
if (IS_ERR_VALUE(handle_new)) {
zcomp_stream_put(zram->comps[prio]);
return PTR_ERR((void *)handle_new);
}
dst = zs_map_object(zram->mem_pool, handle_new, ZS_MM_WO);
memcpy(dst, zstrm->buffer, comp_len_new);
zcomp_stream_put(zram->comps[prio]);
zs_unmap_object(zram->mem_pool, handle_new);
zram_free_page(zram, index);
zram_set_handle(zram, index, handle_new);
zram_set_obj_size(zram, index, comp_len_new);
zram_set_priority(zram, index, prio);
atomic64_add(comp_len_new, &zram->stats.compr_data_size);
atomic64_inc(&zram->stats.pages_stored);
return 0;
}
#define RECOMPRESS_IDLE (1 << 0)
#define RECOMPRESS_HUGE (1 << 1)
static ssize_t recompress_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t len)
{
u32 prio = ZRAM_SECONDARY_COMP, prio_max = ZRAM_MAX_COMPS;
struct zram *zram = dev_to_zram(dev);
unsigned long nr_pages = zram->disksize >> PAGE_SHIFT;
char *args, *param, *val, *algo = NULL;
u32 mode = 0, threshold = 0;
unsigned long index;
struct page *page;
ssize_t ret;
args = skip_spaces(buf);
while (*args) {
args = next_arg(args, &param, &val);
if (!*val)
return -EINVAL;
if (!strcmp(param, "type")) {
if (!strcmp(val, "idle"))
mode = RECOMPRESS_IDLE;
if (!strcmp(val, "huge"))
mode = RECOMPRESS_HUGE;
if (!strcmp(val, "huge_idle"))
mode = RECOMPRESS_IDLE | RECOMPRESS_HUGE;
continue;
}
if (!strcmp(param, "threshold")) {
/*
* We will re-compress only idle objects equal or
* greater in size than watermark.
*/
ret = kstrtouint(val, 10, &threshold);
if (ret)
return ret;
continue;
}
if (!strcmp(param, "algo")) {
algo = val;
continue;
}
}
if (threshold >= PAGE_SIZE)
return -EINVAL;
down_read(&zram->init_lock);
if (!init_done(zram)) {
ret = -EINVAL;
goto release_init_lock;
}
if (algo) {
bool found = false;
for (; prio < ZRAM_MAX_COMPS; prio++) {
if (!zram->comp_algs[prio])
continue;
if (!strcmp(zram->comp_algs[prio], algo)) {
prio_max = min(prio + 1, ZRAM_MAX_COMPS);
found = true;
break;
}
}
if (!found) {
ret = -EINVAL;
goto release_init_lock;
}
}
page = alloc_page(GFP_KERNEL);
if (!page) {
ret = -ENOMEM;
goto release_init_lock;
}
ret = len;
for (index = 0; index < nr_pages; index++) {
int err = 0;
zram_slot_lock(zram, index);
if (!zram_allocated(zram, index))
goto next;
if (mode & RECOMPRESS_IDLE &&
!zram_test_flag(zram, index, ZRAM_IDLE))
goto next;
if (mode & RECOMPRESS_HUGE &&
!zram_test_flag(zram, index, ZRAM_HUGE))
goto next;
if (zram_test_flag(zram, index, ZRAM_WB) ||
zram_test_flag(zram, index, ZRAM_UNDER_WB) ||
zram_test_flag(zram, index, ZRAM_SAME) ||
zram_test_flag(zram, index, ZRAM_INCOMPRESSIBLE))
goto next;
err = zram_recompress(zram, index, page, threshold,
prio, prio_max);
next:
zram_slot_unlock(zram, index);
if (err) {
ret = err;
break;
}
cond_resched();
}
__free_page(page);
release_init_lock:
up_read(&zram->init_lock);
return ret;
}
#endif
/*
* zram_bio_discard - handler on discard request
* @index: physical block index in PAGE_SIZE units
@@ -1682,9 +2117,23 @@ static int zram_rw_page(struct block_device *bdev, sector_t sector,
return ret;
}
static void zram_destroy_comps(struct zram *zram)
{
u32 prio;
for (prio = 0; prio < ZRAM_MAX_COMPS; prio++) {
struct zcomp *comp = zram->comps[prio];
zram->comps[prio] = NULL;
if (!comp)
continue;
zcomp_destroy(comp);
zram->num_active_comps--;
}
}
static void zram_reset_device(struct zram *zram)
{
struct zcomp *comp;
u64 disksize;
down_write(&zram->init_lock);
@@ -1696,19 +2145,20 @@ static void zram_reset_device(struct zram *zram)
return;
}
comp = zram->comp;
disksize = zram->disksize;
zram->disksize = 0;
set_capacity(zram->disk, 0);
part_stat_set_all(&zram->disk->part0, 0);
up_write(&zram->init_lock);
/* I/O operation under all of CPU are done so let's free */
zram_meta_free(zram, disksize);
zram_destroy_comps(zram);
memset(&zram->stats, 0, sizeof(zram->stats));
zcomp_destroy(comp);
reset_bdev(zram);
comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor);
up_write(&zram->init_lock);
}
static ssize_t disksize_store(struct device *dev,
@@ -1718,6 +2168,7 @@ static ssize_t disksize_store(struct device *dev,
struct zcomp *comp;
struct zram *zram = dev_to_zram(dev);
int err;
u32 prio;
disksize = memparse(buf, NULL);
if (!disksize)
@@ -1736,15 +2187,21 @@
goto out_unlock;
}
comp = zcomp_create(zram->compressor);
if (IS_ERR(comp)) {
pr_err("Cannot initialise %s compressing backend\n",
zram->compressor);
err = PTR_ERR(comp);
goto out_free_meta;
}
for (prio = 0; prio < ZRAM_MAX_COMPS; prio++) {
if (!zram->comp_algs[prio])
continue;
comp = zcomp_create(zram->comp_algs[prio]);
if (IS_ERR(comp)) {
pr_err("Cannot initialise %s compressing backend\n",
zram->comp_algs[prio]);
err = PTR_ERR(comp);
goto out_free_comps;
}
zram->comp = comp;
zram->comps[prio] = comp;
zram->num_active_comps++;
}
zram->disksize = disksize;
set_capacity(zram->disk, zram->disksize >> SECTOR_SHIFT);
@@ -1753,7 +2210,8 @@
return len;
out_free_meta:
out_free_comps:
zram_destroy_comps(zram);
zram_meta_free(zram, disksize);
out_unlock:
up_write(&zram->init_lock);
@@ -1850,6 +2308,10 @@ static DEVICE_ATTR_WO(writeback);
static DEVICE_ATTR_RW(writeback_limit);
static DEVICE_ATTR_RW(writeback_limit_enable);
#endif
#ifdef CONFIG_ZRAM_MULTI_COMP
static DEVICE_ATTR_RW(recomp_algorithm);
static DEVICE_ATTR_WO(recompress);
#endif
static struct attribute *zram_disk_attrs[] = {
&dev_attr_disksize.attr,
@@ -1873,6 +2335,10 @@ static struct attribute *zram_disk_attrs[] = {
&dev_attr_bd_stat.attr,
#endif
&dev_attr_debug_stat.attr,
#ifdef CONFIG_ZRAM_MULTI_COMP
&dev_attr_recomp_algorithm.attr,
&dev_attr_recompress.attr,
#endif
NULL,
};
@@ -1965,7 +2431,7 @@ static int zram_add(void)
blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, zram->disk->queue);
device_add_disk(NULL, zram->disk, zram_disk_attr_groups);
strlcpy(zram->compressor, default_compressor, sizeof(zram->compressor));
comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor);
zram_debugfs_register(zram);
pr_info("Added device: %s\n", zram->disk->disk_name);
......
@@ -41,6 +41,9 @@
*/
#define ZRAM_FLAG_SHIFT 24
/* Only 2 bits are allowed for comp priority index */
#define ZRAM_COMP_PRIORITY_MASK 0x3
/* Flags for zram pages (table[page_no].flags) */
enum zram_pageflags {
/* zram slot is locked */
@@ -50,6 +53,10 @@ enum zram_pageflags {
ZRAM_UNDER_WB, /* page is under writeback */
ZRAM_HUGE, /* Incompressible page */
ZRAM_IDLE, /* not accessed page since last idle marking */
ZRAM_INCOMPRESSIBLE, /* none of the algorithms could compress it */
ZRAM_COMP_PRIORITY_BIT1, /* First bit of comp priority index */
ZRAM_COMP_PRIORITY_BIT2, /* Second bit of comp priority index */
__NR_ZRAM_PAGEFLAGS,
};
@@ -89,10 +96,20 @@ struct zram_stats {
#endif
};
#ifdef CONFIG_ZRAM_MULTI_COMP
#define ZRAM_PRIMARY_COMP 0U
#define ZRAM_SECONDARY_COMP 1U
#define ZRAM_MAX_COMPS 4U
#else
#define ZRAM_PRIMARY_COMP 0U
#define ZRAM_SECONDARY_COMP 0U
#define ZRAM_MAX_COMPS 1U
#endif
struct zram {
struct zram_table_entry *table;
struct zs_pool *mem_pool;
struct zcomp *comp;
struct zcomp *comps[ZRAM_MAX_COMPS];
struct gendisk *disk;
/* Prevent concurrent execution of device init */
struct rw_semaphore init_lock;
@@ -107,7 +124,8 @@ struct zram {
* we can store in a disk.
*/
u64 disksize; /* bytes */
char compressor[CRYPTO_MAX_ALG_NAME];
const char *comp_algs[ZRAM_MAX_COMPS];
s8 num_active_comps;
/*
* zram is claimed so open request will be failed
*/
......
@@ -55,5 +55,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
unsigned long zs_get_total_pages(struct zs_pool *pool);
unsigned long zs_compact(struct zs_pool *pool);
unsigned int zs_lookup_class_index(struct zs_pool *pool, unsigned int size);
void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats);
#endif
@@ -253,3 +253,4 @@ char *next_arg(char *args, char **param, char **val)
/* Chew up trailing spaces. */
return skip_spaces(next);
}
EXPORT_SYMBOL(next_arg);
@@ -1221,6 +1221,27 @@ static bool zspage_full(struct size_class *class, struct zspage *zspage)
return get_zspage_inuse(zspage) == class->objs_per_zspage;
}
/**
* zs_lookup_class_index() - Returns index of the zsmalloc &size_class
* that hold objects of the provided size.
* @pool: zsmalloc pool to use
* @size: object size
*
* Context: Any context.
*
* Return: the index of the zsmalloc &size_class that hold objects of the
* provided size.
*/
unsigned int zs_lookup_class_index(struct zs_pool *pool, unsigned int size)
{
struct size_class *class;
class = pool->size_class[get_size_class_index(size)];
return class->index;
}
EXPORT_SYMBOL_GPL(zs_lookup_class_index);
unsigned long zs_get_total_pages(struct zs_pool *pool)
{
return atomic_long_read(&pool->pages_allocated);
......