Commit 03161a93 authored by Haitao Shi, committed by Zheng Zengkai

mm: fix some spelling mistakes in comments

mainline inclusion
from mainline-master
commit 60f7c503
category: bugfix
bugzilla: 175270
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=60f7c503d971a731ee3c4f884a9f2e80d476730d

------------------------------------------------------------------------

Fix some spelling mistakes in comments:
	udpate ==> update
	succesful ==> successful
	exmaple ==> example
	unneccessary ==> unnecessary
	stoping ==> stopping
	uknown ==> unknown

Link: https://lkml.kernel.org/r/20201127011747.86005-1-shihaitao1@huawei.com
Signed-off-by: Haitao Shi <shihaitao1@huawei.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Souptick Joarder <jrdr.linux@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Ouyangdelong <ouyangdelong@huawei.com>
Signed-off-by: Nifujia <nifujia1@hisilicon.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Parent 9ef151f1
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1347,7 +1347,7 @@ static int __wait_on_page_locked_async(struct page *page,
 	else
 		ret = PageLocked(page);
 	/*
-	 * If we were succesful now, we know we're still on the
+	 * If we were successful now, we know we're still on the
 	 * waitqueue as we're still under the lock. This means it's
 	 * safe to remove and return success, we know the callback
 	 * isn't going to trigger.
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2387,7 +2387,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 	 * Clone page flags before unfreezing refcount.
 	 *
 	 * After successful get_page_unless_zero() might follow flags change,
-	 * for exmaple lock_page() which set PG_waiters.
+	 * for example lock_page() which set PG_waiters.
 	 */
 	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 	page_tail->flags |= (head->flags &
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1283,7 +1283,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 				 * PTEs are armed with uffd write protection.
 				 * Here we can also mark the new huge pmd as
 				 * write protected if any of the small ones is
-				 * marked but that could bring uknown
+				 * marked but that could bring unknown
 				 * userfault messages that falls outside of
 				 * the registered range. So, just be simple.
 				 */
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -834,7 +834,7 @@ int __init_memblock memblock_physmem_add(phys_addr_t base, phys_addr_t size)
  * @base: base address of the region
  * @size: size of the region
  * @set: set or clear the flag
- * @flag: the flag to udpate
+ * @flag: the flag to update
  *
  * This function isolates region [@base, @base + @size), and sets/clears flag
  *
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2550,7 +2550,7 @@ static bool migrate_vma_check_page(struct page *page)
 		 * will bump the page reference count. Sadly there is no way to
 		 * differentiate a regular pin from migration wait. Hence to
 		 * avoid 2 racing thread trying to migrate back to CPU to enter
-		 * infinite loop (one stoping migration because the other is
+		 * infinite loop (one stopping migration because the other is
 		 * waiting on pte migration entry). We always return true here.
 		 *
 		 * FIXME proper solution is to rework migration_entry_wait() so
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -34,7 +34,7 @@
  *
  * The need callback is used to decide whether extended memory allocation is
  * needed or not. Sometimes users want to deactivate some features in this
- * boot and extra memory would be unneccessary. In this case, to avoid
+ * boot and extra memory would be unnecessary. In this case, to avoid
  * allocating huge chunk of memory, each clients represent their need of
  * extra memory through the need callback. If one of the need callbacks
  * returns true, it means that someone needs extra memory so that