- 27 December 2019, 40 commits
-
Submitted by Jeremy Linton
mainline inclusion
from mainline-5.3-rc1
commit d24a0c70
category: feature
bugzilla: 16072
CVE: NA

---------------------------

ACPI 6.3 adds additional fields to the MADT GICC structure to describe SPE PPI's. We pick these out of the cached reference to the madt_gicc structure similarly to the core PMU code. We then create a platform device referring to the IRQ and let the user/module loader decide whether to load the SPE driver.

Tested-by: Hanjun Gou <gouhanjun@huawei.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Hongbo Yao <yaohongbo@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
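A rough sketch of the idea, assuming the ACPI 6.3 spe_interrupt field in struct acpi_madt_generic_interrupt and the acpi_cpu_get_madt_gicc() accessor; the registration plumbing below is illustrative, not the verbatim upstream hunk (the GICC length check for pre-6.3 tables is omitted):

```
static struct resource spe_resources[] = {
	{ .flags = IORESOURCE_IRQ },	/* filled in from the MADT GICC PPI */
};

static struct platform_device spe_dev = {
	.name		= "arm,spe-v1",
	.id		= -1,
	.resource	= spe_resources,
	.num_resources	= ARRAY_SIZE(spe_resources),
};

static void arm_spe_acpi_register_device(void)
{
	int cpu, irq = -1;

	/* Every cached GICC entry must report the same SPE PPI. */
	for_each_possible_cpu(cpu) {
		struct acpi_madt_generic_interrupt *gicc =
			acpi_cpu_get_madt_gicc(cpu);
		int new_irq;

		if (!gicc || !gicc->spe_interrupt)
			return;			/* no SPE described for this CPU */

		new_irq = acpi_register_gsi(NULL, gicc->spe_interrupt,
					    ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_HIGH);
		if (irq == -1)
			irq = new_irq;
		else if (irq != new_irq)
			return;			/* mismatched PPIs: give up */
	}

	spe_resources[0].start = irq;
	spe_resources[0].end = irq;
	platform_device_register(&spe_dev);	/* SPE driver may bind later */
}
```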
-
Submitted by Jeremy Linton
mainline inclusion
from mainline-5.3-rc1
commit 56855a99
category: feature
bugzilla: 16072
CVE: NA

---------------------------

ACPI 6.3 adds a flag to indicate that child nodes are all identical cores. This is useful to authoritatively determine if a set of (possibly offline) cores are identical or not. Since the flag doesn't give us a unique id we can generate one and use it to create bitmaps of sibling nodes, or simply in a loop to determine if a subset of cores are identical.

Tested-by: Hanjun Gou <gouhanjun@huawei.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Hongbo Yao <yaohongbo@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
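Assuming the generated id is exposed through the find_acpi_cpu_topology_hetero_id() helper that landed upstream with this work, the "loop to determine if a subset of cores are identical" could look like this sketch (the caller below is made up for illustration):

```
#include <linux/acpi.h>
#include <linux/cpumask.h>

/* Illustrative only: true if every CPU in 'cpus' reports the same
 * ACPI 6.3 "identical implementation" token as the first CPU. */
static bool cpus_are_identical(const struct cpumask *cpus)
{
	int cpu, token;

	token = find_acpi_cpu_topology_hetero_id(cpumask_first(cpus));

	for_each_cpu(cpu, cpus)
		if (find_acpi_cpu_topology_hetero_id(cpu) != token)
			return false;

	return true;
}
```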
-
Submitted by Jeremy Linton
mainline inclusion
from mainline-5.3-rc1
commit ed2b664f
category: feature
bugzilla: 16072
CVE: NA

---------------------------

The ACPI specification implies that the IDENTICAL flag should be set on all non leaf nodes where the children are identical. This means that we need to be searching for the last node with the identical flag set rather than the first one. Since this flag is also dependent on the table revision, we need to add a bit of extra code to verify the table revision, and the next node's state in the traversal. Since we want to avoid function pointers here, lets just special case the IDENTICAL flag.

Tested-by: Hanjun Gou <gouhanjun@huawei.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Hongbo Yao <yaohongbo@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Erik Schmauss
mainline inclusion
from mainline-5.1-rc1
commit b5eab512
category: feature
bugzilla: 16072
CVE: NA

---------------------------

ACPICA commit c736ea34add19a3a07e0e398711847cd6b95affd

Link: https://github.com/acpica/acpica/commit/c736ea34
Signed-off-by: Erik Schmauss <erik.schmauss@intel.com>
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Hongbo Yao <yaohongbo@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Erik Schmauss
mainline inclusion
from mainline-5.1-rc1
commit e646e0a5
category: feature
bugzilla: 16072
CVE: NA

---------------------------

ACPICA commit 31b184052a986dc8d80c878edeca574d4ffa1cf5

Link: https://github.com/acpica/acpica/commit/31b18405
Signed-off-by: Erik Schmauss <erik.schmauss@intel.com>
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Hongbo Yao <yaohongbo@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Erik Schmauss
mainline inclusion
from mainline-5.1-rc1
commit f00175d0
category: feature
bugzilla: 16072
CVE: NA

---------------------------

ACPICA commit b11446d8b47805c2637a2286aca34b717ec6b5be

Link: https://github.com/acpica/acpica/commit/b11446d8
Signed-off-by: Erik Schmauss <erik.schmauss@intel.com>
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Hongbo Yao <yaohongbo@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Hongbo Yao
hulk inclusion
category: feature
bugzilla: 16072
CVE: NA

---------------------------

This reverts commit 1e15594e6cd587350e006db646be55f747d2654c.

In Linux 5.3.0, SPE ACPI enablement has been upstreamed. SPE patches in hulk-4.19 are the old version, and they need to be reverted to the mainline version.

Signed-off-by: Hongbo Yao <yaohongbo@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Hongbo Yao
hulk inclusion
category: feature
bugzilla: 16072
CVE: NA

---------------------------

This reverts commit 18f4cb589de0851b04a7632ee72bdbb8f1a7463e.

In Linux 5.3.0, SPE ACPI enablement has been upstreamed. SPE patches in hulk-4.19-next are the old version, and they need to be reverted to the mainline version.

Signed-off-by: Hongbo Yao <yaohongbo@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Hongbo Yao
hulk inclusion
category: feature
bugzilla: 16072
CVE: NA

---------------------------

This reverts commit 556b16f5ad7e910c3784bb02b33c2af6ca9c9a4b.

In Linux 5.3.0, SPE ACPI enablement has been upstreamed. SPE patches in hulk-4.19 are the old version, and they need to be reverted to the mainline version.

Signed-off-by: Hongbo Yao <yaohongbo@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Hongbo Yao
hulk inclusion
category: feature
bugzilla: 16072
CVE: NA

---------------------------

This reverts commit af0a9606fb0823524e0f834d15f38c0223747087.

In Linux 5.3.0, SPE ACPI enablement has been upstreamed. SPE patches in hulk-4.19 are the old version, and they need to be reverted to the mainline version.

Signed-off-by: Hongbo Yao <yaohongbo@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Will Deacon
mainline inclusion
from mainline-5.2
commit: 01d57485
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

Since commit 3d65b6bb ("arm64: tlbi: Set MAX_TLBI_OPS to PTRS_PER_PTE"), we resort to per-ASID invalidation when attempting to perform more than PTRS_PER_PTE invalidation instructions in a single call to __flush_tlb_range(). Whilst this is beneficial, the mmu_gather code does not ensure that the end address of the range is rounded-up to the stride when freeing intermediate page tables in pXX_free_tlb(), which defeats our range checking.

Align the bounds passed into __flush_tlb_range().

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reported-by: Hanjun Guo <guohanjun@huawei.com>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
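A minimal sketch of the alignment being described, assuming the stride-aware __flush_tlb_range() from earlier in this series; the exact placement of the rounding is illustrative:

```
	/*
	 * If the caller's end address is not stride-aligned (e.g. because
	 * pXX_free_tlb() only grew the range by PAGE_SIZE), a partly
	 * covered entry must still count as one whole invalidation, so
	 * align both bounds before comparing against MAX_TLBI_OPS.
	 */
	start = round_down(start, stride);
	end = round_up(end, stride);

	if ((end - start) >= (MAX_TLBI_OPS * stride)) {
		flush_tlb_mm(vma->vm_mm);	/* per-ASID invalidation */
		return;
	}
```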
-
Submitted by Will Deacon
mainline inclusion
from mainline-4.21
commit: 3d65b6bb
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

In order to reduce the possibility of soft lock-ups, we bound the maximum number of TLBI operations performed by a single call to flush_tlb_range() to an arbitrary constant of 1024.

Whilst this does the job of avoiding lock-ups, we can actually be a bit smarter by defining this as PTRS_PER_PTE. Due to the structure of our page tables, using PTRS_PER_PTE means that an outer loop calling flush_tlb_range() for entire table entries will end up performing just a single TLBI operation for each entry. As an example, mremap()ing a 1GB range mapped using 4k pages now requires only 512 TLBI operations when moving the page tables as opposed to 262144 operations (512*512) when using the current threshold of 1024.

Cc: Joel Fernandes <joel@joelfernandes.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
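In code terms the change is essentially the definition of the threshold; a simplified sketch of how it is applied in the arm64 range-flush path:

```
/* Was an arbitrary 1024; tie it to the page-table geometry instead. */
#define MAX_TLBI_OPS	PTRS_PER_PTE

	/*
	 * With 4k pages PTRS_PER_PTE is 512, so any range spanning more
	 * than one full PTE table's worth of mappings (2MB) falls back to
	 * invalidating the whole ASID rather than looping per page.
	 */
	if ((end - start) >= (MAX_TLBI_OPS * stride)) {
		flush_tlb_mm(vma->vm_mm);
		return;
	}
```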
-
Submitted by Alex Van Brunt
mainline inclusion
from mainline-4.21
commit: 3403e56b
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

When transitioning a PTE from young to old as part of page aging, we can avoid waiting for the TLB invalidation to complete and therefore drop the subsequent DSB instruction. Whilst this opens up a race with page reclaim, where a PTE in active use via a stale, young TLB entry does not update the underlying descriptor, the worst thing that happens is that the page is reclaimed and then immediately faulted back in.

Given that we have a DSB in our context-switch path, the window for a spurious reclaim is fairly limited and eliding the barrier claims to boost NVMe/SSD accesses by over 10% on some platforms.

A similar optimisation was made for x86 in commit b13b1d2d ("x86/mm: In the PTE swapout page reclaim case clear the accessed bit instead of flushing the TLB").

Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Signed-off-by: Ashish Mhetre <amhetre@nvidia.com>
[will: rewrote patch]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
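The shape of the resulting arm64 hook, as a sketch; flush_tlb_page_nosync() stands for a variant of flush_tlb_page() that issues the TLBI but skips the trailing DSB:

```
#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
					 unsigned long address, pte_t *ptep)
{
	int young = ptep_test_and_clear_young(vma, address, ptep);

	if (young) {
		/*
		 * No DSB: don't wait for the invalidation to complete.
		 * Worst case a stale "young" TLB entry keeps being used,
		 * the page is reclaimed and then immediately faulted back in.
		 */
		flush_tlb_page_nosync(vma, address);
	}

	return young;
}
```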
-
Submitted by Will Deacon
mainline inclusion
from mainline-4.20-rc1
commit: 7f088727
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

Peter Z asked me to justify the barrier usage in asm/tlbflush.h, but actually that whole block comment needs to be rewritten.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Will Deacon
mainline inclusion
from mainline-4.20-rc1
commit: ace8cb75
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

By selecting HAVE_RCU_TABLE_INVALIDATE, we can rely on tlb_flush() being called if we fail to batch table pages for freeing. This in turn allows us to postpone walk-cache invalidation until tlb_finish_mmu(), which avoids lots of unnecessary DSBs and means we can shoot down the ASID if the range is large enough.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Will Deacon
mainline inclusion
from mainline-4.20-rc1
commit: f270ab88
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

Now that the core mmu_gather code keeps track of both the levels of page table cleared and also whether or not these entries correspond to intermediate entries, we can use this in our tlb_flush() callback to reduce the number of invalidations we issue as well as their scope.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
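Putting the series together, the arm64 tlb_flush() callback can derive both the stride and the invalidation type from the gather state; a simplified sketch (the real code also special-cases fullmm, and the helpers used here come from the mmu_gather patches further down in this series):

```
static inline void tlb_flush(struct mmu_gather *tlb)
{
	struct vm_area_struct vma = { .vm_mm = tlb->mm, };

	/*
	 * Leaf-only ("last level") invalidation is enough unless we also
	 * freed intermediate page-table pages, in which case the walk
	 * cache has to be hit as well.
	 */
	bool last_level = !tlb->freed_tables;
	unsigned long stride = 1UL << tlb_get_unmap_shift(tlb);

	__flush_tlb_range(&vma, tlb->start, tlb->end, stride, last_level);
}
```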
-
Submitted by Will Deacon
mainline inclusion
from mainline-4.20-rc1
commit: 07212cd4
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

If there's one thing the RCU-based table freeing doesn't need, it's more ifdeffery.

Remove the redundant !CONFIG_HAVE_RCU_TABLE_FREE code, since this option is unconditionally selected in our Kconfig.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Will Deacon
mainline inclusion
from mainline-4.20-rc1
commit: 67a902ac
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

When we are unmapping intermediate page-table entries or huge pages, we don't need to issue a TLBI instruction for every PAGE_SIZE chunk in the VA range being unmapped.

Allow the invalidation stride to be passed to __flush_tlb_range(), and adjust our "just nuke the ASID" heuristic to take this into account.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
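Conceptually the range loop becomes the following sketch. It is simplified: the real loop works on addresses pre-shifted via __TLBI_VADDR and also issues the _user variants, and the hard-coded 1024 cap shown here is the arbitrary threshold later replaced by MAX_TLBI_OPS:

```
	/* Scale the "just nuke the ASID" heuristic by the stride. */
	if ((end - start) >= (1024 * stride)) {
		flush_tlb_mm(vma->vm_mm);
		return;
	}

	dsb(ishst);
	for (addr = start; addr < end; addr += stride) {
		if (last_level)
			__tlbi(vale1is, __TLBI_VADDR(addr, asid));
		else
			__tlbi(vae1is, __TLBI_VADDR(addr, asid));
	}
	dsb(ish);
```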
-
Submitted by Will Deacon
mainline inclusion
from mainline-4.20-rc1
commit: d8289d3a
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

Add a comment to explain why we can't get away with last-level invalidation in flush_tlb_range().

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Will Deacon
mainline inclusion
from mainline-4.20-rc1
commit: 0795edaf
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

Now that our walk-cache invalidation routines imply a DSB before the invalidation, we no longer need one when we are clearing an entry during unmap.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Will Deacon
mainline inclusion
from mainline-4.20-rc1
commit: 45a284bc
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

__flush_tlb_[kernel_]pgtable() rely on set_pXd() having a DSB after writing the new table entry and therefore avoid the barrier prior to the TLBI instruction.

In preparation for delaying our walk-cache invalidation on the unmap() path, move the DSB into the TLB invalidation routines.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
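For instance, the kernel-pgtable helper ends up carrying its own DSB instead of relying on the one in set_pXd(); a sketch of the resulting routine:

```
static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
{
	unsigned long addr = __TLBI_VADDR(kaddr, 0);

	/* Make the new table entry visible before invalidating the walk cache. */
	dsb(ishst);
	__tlbi(vaae1is, addr);
	dsb(ish);
}
```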
-
Submitted by Will Deacon
mainline inclusion
from mainline-4.20-rc1
commit: 6899a4c8
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

flush_tlb_kernel_range() is only ever used to invalidate last-level entries, so we can restrict the scope of the TLB invalidation instruction.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Will Deacon
mainline inclusion
from mainline-4.20-rc1
commit: 7526aa54
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

We recently had to debug a TLB invalidation problem on the munmap() path, which was made more difficult than necessary because:

  (a) The MMU gather code had changed without people realising

  (b) Many people subtly misunderstood the operation of the MMU gather code and its interactions with RCU and arch-specific TLB invalidation

  (c) Untangling the intended behaviour involved educated guesswork and plenty of discussion

Hopefully, we can avoid getting into this mess again by designating a cross-arch group of people to look after this code. It is not intended that they will have a separate tree, but they at least provide a point of contact for anybody working in this area and can co-ordinate any proposed future changes to the internal API.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Peter Zijlstra
mainline inclusion
from mainline-4.20-rc1
commit: 196d9d8b
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

In preparation for maintaining the mmu_gather code as its own entity, move the implementation out of memory.c and into its own file.

Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Will Deacon
mainline inclusion
from mainline-4.20-rc1
commit: a6d60245
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

It is common for architectures with hugepage support to require only a single TLB invalidation operation per hugepage during unmap(), rather than iterating through the mapping at a PAGE_SIZE increment. Currently, however, the level in the page table where the unmap() operation occurs is not stored in the mmu_gather structure, therefore forcing architectures to issue additional TLB invalidation operations or to give up and over-invalidate by e.g. invalidating the entire TLB.

Ideally, we could add an interval rbtree to the mmu_gather structure, which would allow us to associate the correct mapping granule with the various sub-mappings within the range being invalidated. However, this is costly in terms of book-keeping and memory management, so instead we approximate by keeping track of the page table levels that are cleared and provide a means to query the smallest granule required for invalidation.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
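A sketch of the asm-generic side of this bookkeeping: a few extra bits in struct mmu_gather plus a query helper (field and helper names as this change introduces them; surrounding members trimmed):

```
struct mmu_gather {
	/* ... existing fields ... */

	/* which levels of the page tables have been cleared */
	unsigned int	cleared_ptes : 1;
	unsigned int	cleared_pmds : 1;
	unsigned int	cleared_puds : 1;
	unsigned int	cleared_p4ds : 1;
};

static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
{
	if (tlb->cleared_ptes)
		return PAGE_SHIFT;
	if (tlb->cleared_pmds)
		return PMD_SHIFT;
	if (tlb->cleared_puds)
		return PUD_SHIFT;
	if (tlb->cleared_p4ds)
		return P4D_SHIFT;
	return PAGE_SHIFT;	/* nothing recorded: be conservative */
}
```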
-
Submitted by Peter Zijlstra
mainline inclusion
from mainline-4.20-rc1
commit: 22a61c3c
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

Some architectures require different TLB invalidation instructions depending on whether it is only the last-level of page table being changed, or whether there are also changes to the intermediate (directory) entries higher up the tree.

Add a new bit to the flags bitfield in struct mmu_gather so that the architecture code can operate accordingly if it's the intermediate levels being invalidated.

Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Will Deacon
mainline inclusion
from mainline-4.20-rc1
commit: faaadaf3
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

--------------------------------------------------

The inner workings of the mmu_gather-based TLB invalidation mechanism are not relevant to nommu configurations, so guard them with an #ifdef. This allows us to implement future functions using static inlines without breaking the build.

Acked-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Yonglong Liu
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

Some comments in kcompat.h are in Chinese; change them to English.

Feature or Bugfix: Bugfix

Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: linyunsheng <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Vandana BN
mainline inclusion
from mainline-v5.3-rc5
commit 5d2e73a5
category: bugfix
bugzilla: 13690
CVE: CVE-2019-15217

-------------------------------------------------

SyzKaller hit the null pointer deref while reading from uninitialized udev->product in zr364xx_vidioc_querycap().

==================================================================
BUG: KASAN: null-ptr-deref in read_word_at_a_time+0xe/0x20 include/linux/compiler.h:274
Read of size 1 at addr 0000000000000000 by task v4l_id/5287

CPU: 1 PID: 5287 Comm: v4l_id Not tainted 5.1.0-rc3-319004-g43151d6 #6
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0xe8/0x16e lib/dump_stack.c:113
 kasan_report.cold+0x5/0x3c mm/kasan/report.c:321
 read_word_at_a_time+0xe/0x20 include/linux/compiler.h:274
 strscpy+0x8a/0x280 lib/string.c:207
 zr364xx_vidioc_querycap+0xb5/0x210 drivers/media/usb/zr364xx/zr364xx.c:706
 v4l_querycap+0x12b/0x340 drivers/media/v4l2-core/v4l2-ioctl.c:1062
 __video_do_ioctl+0x5bb/0xb40 drivers/media/v4l2-core/v4l2-ioctl.c:2874
 video_usercopy+0x44e/0xf00 drivers/media/v4l2-core/v4l2-ioctl.c:3056
 v4l2_ioctl+0x14e/0x1a0 drivers/media/v4l2-core/v4l2-dev.c:364
 vfs_ioctl fs/ioctl.c:46 [inline]
 file_ioctl fs/ioctl.c:509 [inline]
 do_vfs_ioctl+0xced/0x12f0 fs/ioctl.c:696
 ksys_ioctl+0xa0/0xc0 fs/ioctl.c:713
 __do_sys_ioctl fs/ioctl.c:720 [inline]
 __se_sys_ioctl fs/ioctl.c:718 [inline]
 __x64_sys_ioctl+0x74/0xb0 fs/ioctl.c:718
 do_syscall_64+0xcf/0x4f0 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7f3b56d8b347
Code: 90 90 90 48 8b 05 f1 fa 2a 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 90 90 90 90 90 90 90 90 90 90 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d c1 fa 2a 00 31 d2 48 29 c2 64
RSP: 002b:00007ffe005d5d68 EFLAGS: 00000202 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f3b56d8b347
RDX: 00007ffe005d5d70 RSI: 0000000080685600 RDI: 0000000000000003
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000400884
R13: 00007ffe005d5ec0 R14: 0000000000000000 R15: 0000000000000000
==================================================================

For this device udev->product is not initialized and accessing it causes a NULL pointer deref.

The fix is to check for NULL before strscpy() and copy an empty string if product is NULL.

Reported-by: syzbot+66010012fd4c531a1a96@syzkaller.appspotmail.com
Signed-off-by: Vandana BN <bnvandana@gmail.com>
Signed-off-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
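The fix boils down to guarding the product-string copy; a sketch along these lines (field names per the zr364xx driver, not the verbatim diff):

```
static int zr364xx_vidioc_querycap(struct file *file, void *priv,
				   struct v4l2_capability *cap)
{
	struct zr364xx_camera *cam = video_drvdata(file);

	strscpy(cap->driver, DRIVER_DESC, sizeof(cap->driver));
	/* udev->product may be NULL for this device; leave card empty then. */
	if (cam->udev->product)
		strscpy(cap->card, cam->udev->product, sizeof(cap->card));
	strscpy(cap->bus_info, dev_name(&cam->udev->dev), sizeof(cap->bus_info));

	return 0;
}
```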
-
Submitted by Hui Peng
mainline inclusion
from mainline-v5.3-rc5
commit 19bce474c45be69a284ecee660aa12d8f1e88f18
category: bugfix
bugzilla: 13690
CVE: CVE-2019-15118

-------------------------------------------------

`check_input_term` recursively calls itself with input from device side (e.g., uac_input_terminal_descriptor.bCSourceID) as argument (id). In `check_input_term`, if `check_input_term` is called with the same `id` argument as the caller, it triggers endless recursive call, resulting kernel space stack overflow.

This patch fixes the bug by adding a bitmap to `struct mixer_build` to keep track of the checked ids and stop the execution if some id has been checked (similar to how parse_audio_unit handles unitid argument).

Reported-by: Hui Peng <benquike@gmail.com>
Reported-by: Mathias Payer <mathias.payer@nebelwelt.net>
Signed-off-by: Hui Peng <benquike@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
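The guard can be pictured like this sketch; the `termbitmap` field name is illustrative, and the real patch threads the bitmap through a wrapper around the recursion:

```
static int check_input_term(struct mixer_build *state, int id,
			    struct usb_audio_term *term)
{
	/*
	 * A bogus descriptor chain can point back at an id that is already
	 * being resolved; refuse to visit any id twice so the recursion is
	 * bounded by the number of units in the topology.
	 */
	if (test_and_set_bit(id, state->termbitmap))
		return -EINVAL;

	/* ... resolve the terminal, possibly recursing into bCSourceID ... */
	return 0;
}
```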
-
Submitted by Hui Peng
mainline inclusion
from mainline-v5.3-rc5
commit daac07156b330b18eb5071aec4b3ddca1c377f2c
category: bugfix
bugzilla: 13690
CVE: CVE-2019-15117

-------------------------------------------------

The `uac_mixer_unit_descriptor` shown as below is read from the device side. In `parse_audio_mixer_unit`, `baSourceID` field is accessed from index 0 to `bNrInPins` - 1, the current implementation assumes that descriptor is always valid (the length of descriptor is no shorter than 5 + `bNrInPins`). If a descriptor read from the device side is invalid, it may trigger out-of-bound memory access.

```
struct uac_mixer_unit_descriptor {
	__u8 bLength;
	__u8 bDescriptorType;
	__u8 bDescriptorSubtype;
	__u8 bUnitID;
	__u8 bNrInPins;
	__u8 baSourceID[];
}
```

This patch fixes the bug by adding a sanity check on the length of the descriptor.

Reported-by: Hui Peng <benquike@gmail.com>
Reported-by: Mathias Payer <mathias.payer@nebelwelt.net>
Cc: <stable@vger.kernel.org>
Signed-off-by: Hui Peng <benquike@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
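The added check is essentially a length validation before baSourceID[] is walked; a sketch of its shape (the helper name is illustrative):

```
static int uac_mixer_unit_check_length(struct uac_mixer_unit_descriptor *desc)
{
	/*
	 * bLength must cover the fixed 5-byte header plus one baSourceID
	 * entry per input pin, otherwise baSourceID[bNrInPins - 1] would
	 * read past the descriptor the device actually supplied.
	 */
	if (!desc->bNrInPins ||
	    desc->bLength < sizeof(*desc) + desc->bNrInPins)
		return -EINVAL;

	return 0;
}
```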
-
Submitted by Peter Zijlstra
mainline inclusion
from mainline-5.3-rc2
commit 952041a8
category: bugfix
bugzilla: 20006
CVE: NA

-------------------------------------------------

While reviewing rwsem down_slowpath, Will noticed ldsem had a copy of a bug we just found for rwsem.

	X = 0;

	CPU0				CPU1

	rwsem_down_read()
	  for (;;) {
	    set_current_state(TASK_UNINTERRUPTIBLE);

					X = 1;
					rwsem_up_write();
					  rwsem_mark_wake()
					    atomic_long_add(adjustment, &sem->count);
					    smp_store_release(&waiter->task, NULL);

	    if (!waiter.task)
	      break;
	    ...
	  }

	r = X;

Allows 'r == 0'.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 4898e640 ("tty: Add timed, writer-prioritized rw semaphore")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
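The cure, as in the rwsem entry below, is to make the load that terminates the sleep loop an ACQUIRE so that reads after the loop cannot be satisfied before the waker's writes are visible; in ldsem's read-failed path that is roughly this sketch:

```
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		/*
		 * Pairs with the smp_store_release() in the writer's wakeup
		 * path; without ACQUIRE semantics the later read of shared
		 * data (r = X above) could be ordered before this load and
		 * observe the pre-critical-section value.
		 */
		if (!smp_load_acquire(&waiter.task))
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);
```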
-
Submitted by Peter Zijlstra
mainline inclusion
from mainline-5.3-rc2
commit 99143f82a255e7f054bead8443462fae76dd829e
category: bugfix
bugzilla: 20006
CVE: NA

-------------------------------------------------

While reviewing another read_slowpath patch, both Will and I noticed another missing ACQUIRE, namely:

	X = 0;

	CPU0				CPU1

	rwsem_down_read()
	  for (;;) {
	    set_current_state(TASK_UNINTERRUPTIBLE);

					X = 1;
					rwsem_up_write();
					  rwsem_mark_wake()
					    atomic_long_add(adjustment, &sem->count);
					    smp_store_release(&waiter->task, NULL);

	    if (!waiter.task)
	      break;
	    ...
	  }

	r = X;

Allows 'r == 0'.

Reported-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 1da177e4 ("Linux-2.6.12-rc2")
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Conflicts:
	kernel/locking/rwsem.c
[yyl: mainline's rwsem_down_read_slowpath() corresponds to __rwsem_down_read_failed_common() here]

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Brian Foster
mainline inclusion
from mainline-5.0-rc6
commit 465fa17f
category: bugfix
bugzilla: 18932
CVE: NA

---------------------------

As of commit e339dd8d ("xfs: use sync buffer I/O for sync delwri queue submission"), the delwri submission code uses sync buffer I/O for sync delwri I/O. Instead of waiting on async I/O to unlock the buffer, it uses the underlying sync I/O completion mechanism.

If delwri buffer submission fails due to a shutdown scenario, an error is set on the buffer and buffer completion never occurs. This can cause xfs_buf_delwri_submit() to deadlock waiting on a completion event.

We could check the error state before waiting on such buffers, but that doesn't serialize against the case of an error set via a racing I/O completion. Instead, invoke I/O completion in the shutdown case regardless of buffer I/O type.

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: yu kuai <yukuai3@huawei.com>
Reviewed-by: zhengbin <zhengbin13@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Darrick J. Wong
mainline inclusion
from mainline-5.2-rc1
commit 3de5eab3
category: bugfix
bugzilla: 19045
CVE: NA

---------------------------

We passed an inode into xfs_ioctl_setattr_get_trans with join_flags indicating which locks are held on that inode. If we can't allocate a transaction then we need to unlock the inode before we bail out, like all the other error paths do.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: zhengbin <zhengbin13@huawei.com>
Reviewed-by: yu kuai <yukuai3@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
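Schematically, the transaction-allocation failure path needs the same unlock treatment as the other error paths; a sketch under that assumption (not the exact upstream diff):

```
	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_ichange, 0, 0, 0, &tp);
	if (error) {
		/* Drop whatever locks the caller told us it is holding. */
		if (join_flags)
			xfs_iunlock(ip, join_flags);
		return ERR_PTR(error);
	}
```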
-
Submitted by Brian Foster
mainline inclusion
from mainline-5.1-rc3
commit 6958d11f
category: bugfix
bugzilla: 19011
CVE: NA

---------------------------

We've had rather rare reports of bmap btree block corruption where the bmap root block has a level count of zero. The root cause of the corruption is so far unknown. We do have verifier checks to detect this form of on-disk corruption, but this doesn't cover a memory corruption variant of the problem. The latter is a reasonable possibility because the root block is part of the inode fork and can reside in-core for some time before inode extents are read.

If this occurs, it leads to a system crash such as the following:

 BUG: unable to handle kernel paging request at ffffffff00000221
 PF error: [normal kernel read fault]
 ...
 RIP: 0010:xfs_trans_brelse+0xf/0x200 [xfs]
 ...
 Call Trace:
  xfs_iread_extents+0x379/0x540 [xfs]
  xfs_file_iomap_begin_delay+0x11a/0xb40 [xfs]
  ? xfs_attr_get+0xd1/0x120 [xfs]
  ? iomap_write_begin.constprop.40+0x2d0/0x2d0
  xfs_file_iomap_begin+0x4c4/0x6d0 [xfs]
  ? __vfs_getxattr+0x53/0x70
  ? iomap_write_begin.constprop.40+0x2d0/0x2d0
  iomap_apply+0x63/0x130
  ? iomap_write_begin.constprop.40+0x2d0/0x2d0
  iomap_file_buffered_write+0x62/0x90
  ? iomap_write_begin.constprop.40+0x2d0/0x2d0
  xfs_file_buffered_aio_write+0xe4/0x3b0 [xfs]
  __vfs_write+0x150/0x1b0
  vfs_write+0xba/0x1c0
  ksys_pwrite64+0x64/0xa0
  do_syscall_64+0x5a/0x1d0
  entry_SYSCALL_64_after_hwframe+0x49/0xbe

The crash occurs because xfs_iread_extents() attempts to release an uninitialized buffer pointer as the level == 0 value prevented the buffer from ever being allocated or read. Change the level > 0 assert to an explicit error check in xfs_iread_extents() to avoid crashing the kernel in the event of localized, in-core inode corruption.

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: zhengbin <zhengbin13@huawei.com>
Reviewed-by: yu kuai <yukuai3@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
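The defensive check that replaces the assertion looks roughly like this sketch of the xfs_iread_extents() path:

```
	level = be16_to_cpu(block->bb_level);
	/*
	 * A bmap btree root claiming zero levels is corrupt (on disk or
	 * in core). Bail out with an error instead of asserting, since
	 * the walk below assumes at least one level of interior nodes.
	 */
	if (unlikely(level == 0)) {
		XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, mp);
		return -EFSCORRUPTED;
	}
```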
-
Submitted by Darrick J. Wong
mainline inclusion
from mainline-5.1-rc1
commit 3d129e1b
category: bugfix
bugzilla: 19005
CVE: NA

---------------------------

Fix a backwards endian conversion of a constant.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
Signed-off-by: zhengbin <zhengbin13@huawei.com>
Reviewed-by: yu kuai <yukuai3@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Nishka Dasgupta
mainline inclusion
from mainline-5.3-rc3
commit 321275f0
category: bugfix
bugzilla: 20161
CVE: NA

-------------------------------------------------

Each iteration of for_each_child_of_node puts the previous node, but in the case of a return from the middle of the loop, there is no put, thus causing a memory leak. Add an of_node_put before the return in three places. Issue found with Coccinelle.

Signed-off-by: Nishka Dasgupta <nishkadg.linux@gmail.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
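The leak pattern and its fix look like this in general (sketch; setup_one_child() is an illustrative callee, and the three call sites in the patch follow the same shape):

```
	struct device_node *child;
	int ret;

	for_each_child_of_node(np, child) {
		ret = setup_one_child(child);
		if (ret) {
			/*
			 * for_each_child_of_node() holds a reference on
			 * 'child' for the current iteration and only drops
			 * it when the loop advances, so an early return
			 * must drop it explicitly.
			 */
			of_node_put(child);
			return ret;
		}
	}
```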
-
Submitted by Xue
driver inclusion
category: bugfix
bugzilla: 4472
CVE: NA

-----------------------------------------------------------------------

This patch deletes the job number in the comment and adds a new file comment.

Signed-off-by: Xue <xuechaojing@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by zhangyi (F)
hulk inclusion
category: bugfix
bugzilla: 18685
CVE: NA

---------------------------

Remount process will release the system zone which was allocated before if "noblock_validity" is specified. If we mount an ext4 file system to two mountpoints with default mount options, and then remount one of them with "noblock_validity", it may trigger a use after free problem when someone is accessing the other one.

 # mount /dev/sda foo
 # mount /dev/sda bar

 User access mountpoint "foo"    |  Remount mountpoint "bar"
                                 |
 ext4_map_blocks()               |  ext4_remount()
  check_block_validity()         |   ext4_setup_system_zone()
   ext4_data_block_valid()       |    ext4_release_system_zone()
                                 |     free system_blks rb nodes
   access system_blks rb nodes   |
   trigger use after free        |

This problem can also be reproduced with a single mountpoint: add_system_zone() can get called during remount as well, so a racing ext4_data_block_valid() may read the rbtree at the same time.

This patch adds RCU to protect the system zone from being released or rebuilt while a remount inverts the current "noblock_validity" mount option. It assigns the rbtree only after the whole tree is complete, and does the actual freeing after an RCU grace period, avoiding any intermediate state.

Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
Reviewed-by: yangerkun <yangerkun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
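A sketch of the RCU pattern being described, assuming sbi->system_blks becomes an RCU-protected pointer to the rb-tree root; the two helper names below are illustrative, not the patch's actual functions:

```
	/* Reader side, e.g. in ext4_data_block_valid(): */
	rcu_read_lock();
	system_blks = rcu_dereference(sbi->system_blks);
	ret = ext4_data_block_valid_rcu(sbi, system_blks, start_blk, count);
	rcu_read_unlock();

	/* Remount side, e.g. in ext4_release_system_zone(): */
	system_blks = sbi->system_blks;
	rcu_assign_pointer(sbi->system_blks, NULL);
	/* Wait for in-flight readers before freeing the rb nodes. */
	synchronize_rcu();
	release_system_zone_tree(system_blks);
```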
-