Commit 75a8f1b4 authored by Bodo Stroesser, committed by Yongqiang Liu

scsi: target: tcmu: Optimize use of flush_dcache_page

mainline inclusion
from mainline-v5.9-rc1
commit 3c58f737
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5SXLB
CVE: NA

--------------------------------

(scatter|gather)_data_area() need to flush the dcache after writing data to or
before reading data from a page in the uio data area. The two routines may
transfer data to/from such a page in fragments, and they flush the cache after
each fragment is copied by calling the wrapper tcmu_flush_dcache_range().

That means:

1) flush_dcache_page() can be called multiple times for the same page.

2) Calling flush_dcache_page() indirectly through the wrapper makes no sense,
   because each call of the wrapper covers one single page only and the
   calling routine already has the correct page pointer (see the sketch after
   this list).
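
To make point 2 concrete, here is a minimal sketch of the wrapper, modeled on
the driver's helper at the time (not verbatim; the exact body may differ
between trees):

  /* Sketch modeled on the driver's helper; not verbatim.
   * Flushes every page touched by the range [vaddr, vaddr + size).
   * The callers in (scatter|gather)_data_area() pass at most one page
   * per call and already hold the corresponding struct page pointer,
   * so re-deriving the page here is redundant work.
   */
  static inline void tcmu_flush_dcache_range(void *vaddr, size_t size)
  {
  	unsigned long offset = offset_in_page(vaddr);

  	size = round_up(size + offset, PAGE_SIZE);
  	vaddr -= offset;

  	while (size) {
  		flush_dcache_page(virt_to_page(vaddr));
  		size -= PAGE_SIZE;
  		vaddr += PAGE_SIZE;
  	}
  }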

Change (scatter|gather)_data_area() such that, instead of calling
tcmu_flush_dcache_range() before/after each memcpy, they now call
flush_dcache_page() before unmapping a page (when writing to that page is
complete) or after mapping a page (when starting to read from it).

After this change, the only remaining calls to tcmu_flush_dcache_range() are
for addresses in the vmalloc'ed command ring.
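
In outline, the resulting pattern in both routines is (a condensed sketch
using the variable names from the diff below, not the literal driver code):

  /* scatter_data_area(): write fragments, flush once per finished page */
  to = kmap_atomic(page);
  memcpy(to + offset, from + sg->length - sg_remaining,
         copy_bytes);                 /* possibly several fragments */
  flush_dcache_page(page);           /* once, before unmapping */
  kunmap_atomic(to);

  /* gather_data_area(): flush once right after mapping, then read */
  from = kmap_atomic(page);
  flush_dcache_page(page);           /* once, after mapping */
  memcpy(to + sg->length - sg_remaining, from + offset,
         copy_bytes);                 /* possibly several fragments */
  kunmap_atomic(from);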

The patch was tested on ARM with kernels 4.19.118 and 5.7.2.

Link: https://lore.kernel.org/r/20200618131632.32748-2-bstroesser@ts.fujitsu.com
Tested-by: JiangYu <lnsyyj@hotmail.com>
Tested-by: Daniel Meyerholt <dxm523@gmail.com>
Acked-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Wenchao Hao <haowenchao@huawei.com>
Reviewed-by: lijinlin <lijinlin3@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
Parent 1981211d
drivers/target/target_core_user.c

@@ -687,8 +687,10 @@ static void scatter_data_area(struct tcmu_dev *udev,
 		from = kmap_atomic(sg_page(sg)) + sg->offset;
 		while (sg_remaining > 0) {
 			if (block_remaining == 0) {
-				if (to)
+				if (to) {
+					flush_dcache_page(page);
 					kunmap_atomic(to);
+				}
 
 				block_remaining = DATA_BLOCK_SIZE;
 				dbi = tcmu_cmd_get_dbi(tcmu_cmd);
@@ -733,7 +735,6 @@ static void scatter_data_area(struct tcmu_dev *udev,
 				memcpy(to + offset,
 				       from + sg->length - sg_remaining,
 				       copy_bytes);
-				tcmu_flush_dcache_range(to, copy_bytes);
 			}
 
 			sg_remaining -= copy_bytes;
@@ -742,8 +743,10 @@ static void scatter_data_area(struct tcmu_dev *udev,
 		kunmap_atomic(from - sg->offset);
 	}
 
-	if (to)
+	if (to) {
+		flush_dcache_page(page);
 		kunmap_atomic(to);
+	}
 }
 
 static void gather_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
@@ -789,13 +792,13 @@ static void gather_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
 			dbi = tcmu_cmd_get_dbi(cmd);
 			page = tcmu_get_block_page(udev, dbi);
 			from = kmap_atomic(page);
+			flush_dcache_page(page);
 		}
 		copy_bytes = min_t(size_t, sg_remaining,
 				block_remaining);
 		if (read_len < copy_bytes)
 			copy_bytes = read_len;
 		offset = DATA_BLOCK_SIZE - block_remaining;
-		tcmu_flush_dcache_range(from, copy_bytes);
 		memcpy(to + sg->length - sg_remaining, from + offset,
 		       copy_bytes);