1. 18 Jul 2022, 2 commits
  2. 14 Jan 2022, 1 commit
      powerpc: Fix virt_addr_valid() check · 44634062
Kefeng Wang authored
      hulk inclusion
      category: bugfix
      bugzilla: 186017 https://gitee.com/openeuler/kernel/issues/I4DDEL
      
      --------------------------------
      
When running ethtool eth0, the following BUG occurred:
      
        usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
        kernel BUG at mm/usercopy.c:99
        ...
        usercopy_abort+0x64/0xa0 (unreliable)
        __check_heap_object+0x168/0x190
        __check_object_size+0x1a0/0x200
        dev_ethtool+0x2494/0x2b20
        dev_ioctl+0x5d0/0x770
        sock_do_ioctl+0xf0/0x1d0
        sock_ioctl+0x3ec/0x5a0
        __se_sys_ioctl+0xf0/0x160
        system_call_exception+0xfc/0x1f0
        system_call_common+0xf8/0x200
      
The code in question is shown below:
      
        data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
        copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN))
      
The data is allocated by vmalloc(), but virt_addr_valid(ptr) returns true
for it on PowerPC64, which leads to the panic.
      
As commit 4dd7554a ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va
and __pa addresses") does, make sure the virtual address is above PAGE_OFFSET
in virt_addr_valid(), as sketched below.
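
A minimal sketch of the strengthened check, assuming the powerpc definition
lives in arch/powerpc/include/asm/page.h (illustrative, not the verbatim patch):

  /* Reject addresses below PAGE_OFFSET (e.g. vmalloc space) before
   * consulting pfn_valid(), so vmalloc'ed buffers are no longer
   * misreported as linear-map memory.
   */
  #define virt_addr_valid(vaddr)	((unsigned long)(vaddr) >= PAGE_OFFSET && \
					 pfn_valid(virt_to_pfn(vaddr)))
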
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yuanzheng Song <songyuanzheng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
3. 26 Jul 2020, 1 commit
  4. 11 May 2020, 1 commit
  5. 11 Apr 2020, 1 commit
      mm/vma: define a default value for VM_DATA_DEFAULT_FLAGS · c62da0c3
Anshuman Khandual authored
There are many platforms with the exact same value for VM_DATA_DEFAULT_FLAGS.
This creates a default value for VM_DATA_DEFAULT_FLAGS in line with the
existing VM_STACK_DEFAULT_FLAGS.  While here, also define some more
macros with standard VMA access flag combinations that are used
frequently across many platforms.  Apart from simplification, this
reduces code duplication as well; the resulting defaults are sketched below.
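
A hedged sketch of the shape of the change in include/linux/mm.h (the macro
names follow the commit text; treat the exact flag combinations as illustrative):

  /* Common VMA access flag combinations shared across platforms. */
  #define VM_ACCESS_FLAGS	(VM_READ | VM_WRITE | VM_EXEC)

  #define VM_DATA_FLAGS_EXEC	(VM_READ | VM_WRITE | VM_EXEC | \
				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)

  /* Default, in line with VM_STACK_DEFAULT_FLAGS; an arch can override. */
  #ifndef VM_DATA_DEFAULT_FLAGS
  #define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_EXEC
  #endif
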
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paulburton@kernel.org>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Rich Felker <dalias@libc.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Chris Zankel <chris@zankel.net>
Link: http://lkml.kernel.org/r/1583391014-8170-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
6. 17 Feb 2020, 1 commit
  7. 07 Jan 2020, 1 commit
  8. 13 Nov 2019, 2 commits
  9. 01 Nov 2019, 1 commit
  10. 20 Aug 2019, 1 commit
  11. 19 Jun 2019, 1 commit
  12. 31 May 2019, 1 commit
  13. 02 May 2019, 2 commits
  14. 21 Apr 2019, 2 commits
  15. 23 Feb 2019, 1 commit
  16. 04 Feb 2019, 1 commit
  17. 20 Dec 2018, 1 commit
      powerpc: use mm zones more sensibly · 25078dc1
Christoph Hellwig authored
Powerpc has somewhat odd usage where ZONE_DMA is used for all memory on
common 64-bit configurations, and ZONE_DMA32 is used for 31-bit schemes.
      
      Move to a scheme closer to what other architectures use (and I dare to
      say the intent of the system):
      
       - ZONE_DMA: optionally for memory < 31-bit (64-bit embedded only)
       - ZONE_NORMAL: everything addressable by the kernel
       - ZONE_HIGHMEM: memory > 32-bit for 32-bit kernels
      
      Also provide information on how ZONE_DMA is used by defining
      ARCH_ZONE_DMA_BITS.
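
For the 31-bit ZONE_DMA described above, that plausibly amounts to a single
definition (a sketch; the constant name comes from the text, the value from
the 31-bit limit):

  /* Tell the allocator how much of the address space ZONE_DMA covers. */
  #define ARCH_ZONE_DMA_BITS 31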
      
      Contains various fixes from Benjamin Herrenschmidt.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
18. 04 Dec 2018, 1 commit
  19. 25 Nov 2018, 1 commit
      powerpc: mark 64-bit PD_HUGE constant as unsigned long · d456f352
Daniel Axtens authored
      When compiled for 64-bit, the PD_HUGE constant is a 64-bit integer.
      Mark it as an unsigned long.
      
      This squashes over a thousand sparse warnings on my minimal T4240RDB
      (e6500, ppc64be) config, of the following 2 forms:
      
      arch/powerpc/include/asm/hugetlb.h:52:49: warning: constant 0x8000000000000000 is so big it is unsigned long
      arch/powerpc/include/asm/nohash/pgtable.h:269:49: warning: constant 0x8000000000000000 is so big it is unsigned long
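
The fix is essentially a UL suffix on the 64-bit definition; a sketch using
the constant sparse complains about:

  /* Before: 0x8000000000000000 is implicitly unsigned long, which sparse
   * flags at every use site.
   */
  #define PD_HUGE 0x8000000000000000

  /* After: state the type explicitly. */
  #define PD_HUGE 0x8000000000000000UL
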
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
20. 30 Jul 2018, 2 commits
  21. 03 May 2018, 1 commit
      powerpc/fadump: Do not use hugepages when fadump is active · 85975387
Hari Bathini authored
The FADump capture kernel boots in a restricted memory environment, preserving
the context of the previous kernel in order to save the vmcore. Supporting
hugepages in such an environment makes things unnecessarily complicated, as
hugepages need memory set aside for them; this means most of the capture
kernel's memory would go to supporting hugepages. In most cases, this results
in out-of-memory issues while booting the FADump capture kernel. But hugepages
are not of much use in a capture kernel whose only job is to save the vmcore.
So, disabling hugepage support when fadump is active is a reliable solution
for the out-of-memory issues. Introduce a flag variable to disable HugeTLB
support when fadump is active, as sketched below.
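
A minimal sketch of that flag, assuming it is set during early fadump setup
(the exact placement is an assumption):

  /* Sketch: global flag consulted by the HugeTLB setup path. */
  bool hugetlb_disabled;

  /* During early boot: a capture kernel must not set memory aside for
   * hugepages, so flip the flag when fadump is active.
   */
  if (is_fadump_active())
	hugetlb_disabled = true;
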
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Reviewed-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
22. 13 Mar 2018, 1 commit
  23. 06 Mar 2018, 1 commit
  24. 19 May 2017, 1 commit
      powerpc/mm: Fix virt_addr_valid() etc. on 64-bit hash · e41e53cd
Michael Ellerman authored
      virt_addr_valid() is supposed to tell you if it's OK to call virt_to_page() on
      an address. What this means in practice is that it should only return true for
      addresses in the linear mapping which are backed by a valid PFN.
      
      We are failing to properly check that the address is in the linear mapping,
because virt_to_pfn() will return a valid-looking PFN for more or less any
      address. That bug is actually caused by __pa(), used in virt_to_pfn().
      
      eg: __pa(0xc000000000010000) = 0x10000  # Good
          __pa(0xd000000000010000) = 0x10000  # Bad!
          __pa(0x0000000000010000) = 0x10000  # Bad!
      
      This started happening after commit bdbc29c1 ("powerpc: Work around gcc
      miscompilation of __pa() on 64-bit") (Aug 2013), where we changed the definition
      of __pa() to work around a GCC bug. Prior to that we subtracted PAGE_OFFSET from
      the value passed to __pa(), meaning __pa() of a 0xd or 0x0 address would give
      you something bogus back.
      
Until we can verify if that GCC bug is no longer an issue, or come up with
another solution, this commit does the minimal fix to make virt_addr_valid()
work, by explicitly checking that the address is in the linear mapping region,
as sketched below.
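
On 64-bit hash the linear mapping is the kernel (0xc) region, so the minimal
fix plausibly reads as follows (a sketch; REGION_ID()/KERNEL_REGION_ID are the
hash-MMU region helpers of that era):

  /* Only linear-mapping addresses may be handed to virt_to_page(). */
  #define virt_addr_valid(kaddr)	(REGION_ID(kaddr) == KERNEL_REGION_ID && \
					 pfn_valid(virt_to_pfn(kaddr)))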
      
      Fixes: bdbc29c1 ("powerpc: Work around gcc miscompilation of __pa() on 64-bit")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
Tested-by: Breno Leitao <breno.leitao@gmail.com>
25. 23 Feb 2017, 1 commit
      powerpc: do not make the entire heap executable · 16e72e9b
Denys Vlasenko authored
      On 32-bit powerpc the ELF PLT sections of binaries (built with
      --bss-plt, or with a toolchain which defaults to it) look like this:
      
        [17] .sbss             NOBITS          0002aff8 01aff8 000014 00  WA  0   0  4
        [18] .plt              NOBITS          0002b00c 01aff8 000084 00 WAX  0   0  4
        [19] .bss              NOBITS          0002b090 01aff8 0000a4 00  WA  0   0  4
      
      Which results in an ELF load header:
      
        Type           Offset   VirtAddr   PhysAddr   FileSiz MemSiz  Flg Align
        LOAD           0x019c70 0x00029c70 0x00029c70 0x01388 0x014c4 RWE 0x10000
      
      This is all correct, the load region containing the PLT is marked as
      executable.  Note that the PLT starts at 0002b00c but the file mapping
      ends at 0002aff8, so the PLT falls in the 0 fill section described by
      the load header, and after a page boundary.
      
      Unfortunately the generic ELF loader ignores the X bit in the load
      headers when it creates the 0 filled non-file backed mappings.  It
      assumes all of these mappings are RW BSS sections, which is not the case
      for PPC.
      
gcc/ld has an option (--secure-plt) to avoid this, but it is said to incur a
small performance penalty.
      
Currently, to support 32-bit binaries with the PLT in BSS, the kernel maps the
*entire brk area* with executable rights for all binaries, even
--secure-plt ones.
      
      Stop doing that.
      
Teach the ELF loader to check the X bit in the relevant load header and
create 0-filled anonymous mappings that are executable if the load
header requests it, as sketched below.
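
A hedged sketch of the loader-side change in fs/binfmt_elf.c (signatures
approximate; the point is that set_brk() now receives a protection derived
from the load header instead of assuming RW):

  /* Sketch: propagate PROT_EXEC from the load header into the anonymous
   * zero-fill mapping rather than hardcoding a non-executable BSS.
   */
  static int set_brk(unsigned long start, unsigned long end, int prot)
  {
	start = ELF_PAGEALIGN(start);
	end = ELF_PAGEALIGN(end);
	if (end > start) {
		int error = vm_brk_flags(start, end - start,
					 prot & PROT_EXEC ? VM_EXEC : 0);
		if (error)
			return error;
	}
	current->mm->start_brk = current->mm->brk = end;
	return 0;
  }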
      
      Test program showing the difference in /proc/$PID/maps:
      
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
	char buf[16*1024];
	char *p = malloc(123); /* make "[heap]" mapping appear */
	int fd = open("/proc/self/maps", O_RDONLY);
	int len = read(fd, buf, sizeof(buf));
	write(1, buf, len);
	printf("%p\n", p);
	return 0;
}
      
      Compiled using: gcc -mbss-plt -m32 -Os test.c -otest
      
      Unpatched ppc64 kernel:
      00100000-00120000 r-xp 00000000 00:00 0                                  [vdso]
      0fe10000-0ffd0000 r-xp 00000000 fd:00 67898094                           /usr/lib/libc-2.17.so
      0ffd0000-0ffe0000 r--p 001b0000 fd:00 67898094                           /usr/lib/libc-2.17.so
      0ffe0000-0fff0000 rw-p 001c0000 fd:00 67898094                           /usr/lib/libc-2.17.so
      10000000-10010000 r-xp 00000000 fd:00 100674505                          /home/user/test
      10010000-10020000 r--p 00000000 fd:00 100674505                          /home/user/test
      10020000-10030000 rw-p 00010000 fd:00 100674505                          /home/user/test
      10690000-106c0000 rwxp 00000000 00:00 0                                  [heap]
      f7f70000-f7fa0000 r-xp 00000000 fd:00 67898089                           /usr/lib/ld-2.17.so
      f7fa0000-f7fb0000 r--p 00020000 fd:00 67898089                           /usr/lib/ld-2.17.so
      f7fb0000-f7fc0000 rw-p 00030000 fd:00 67898089                           /usr/lib/ld-2.17.so
      ffa90000-ffac0000 rw-p 00000000 00:00 0                                  [stack]
      0x10690008
      
      Patched ppc64 kernel:
      00100000-00120000 r-xp 00000000 00:00 0                                  [vdso]
      0fe10000-0ffd0000 r-xp 00000000 fd:00 67898094                           /usr/lib/libc-2.17.so
      0ffd0000-0ffe0000 r--p 001b0000 fd:00 67898094                           /usr/lib/libc-2.17.so
      0ffe0000-0fff0000 rw-p 001c0000 fd:00 67898094                           /usr/lib/libc-2.17.so
      10000000-10010000 r-xp 00000000 fd:00 100674505                          /home/user/test
      10010000-10020000 r--p 00000000 fd:00 100674505                          /home/user/test
      10020000-10030000 rw-p 00010000 fd:00 100674505                          /home/user/test
      10180000-101b0000 rw-p 00000000 00:00 0                                  [heap]
                        ^^^^ this has changed
      f7c60000-f7c90000 r-xp 00000000 fd:00 67898089                           /usr/lib/ld-2.17.so
      f7c90000-f7ca0000 r--p 00020000 fd:00 67898089                           /usr/lib/ld-2.17.so
      f7ca0000-f7cb0000 rw-p 00030000 fd:00 67898089                           /usr/lib/ld-2.17.so
      ff860000-ff890000 rw-p 00000000 00:00 0                                  [stack]
      0x10180008
      
      The patch was originally posted in 2012 by Jason Gunthorpe
      and apparently ignored:
      
      https://lkml.org/lkml/2012/9/30/138
      
      Lightly run-tested.
      
Link: http://lkml.kernel.org/r/20161215131950.23054-1-dvlasenk@redhat.com
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Acked-by: Kees Cook <keescook@chromium.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Florian Weimer <fweimer@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
26. 18 Jan 2017, 1 commit
      powerpc/mm: Fix little-endian 4K hugetlb · 20717e1f
Aneesh Kumar K.V authored
When we switched to big endian page tables, we never updated the hugepd
format such that it could work for both big endian and little endian
configs. This patch series updates the hugepd format such that it is
treated as a __be64 value in big endian page table configs.

This patch also switches hugepd_t.pd from signed long to unsigned long.
I did update the FSL hugepd_ok check to check for the top bit instead
of checking > 0; both changes are sketched below.
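
Both points as a hedged sketch (exact structure and helper names may differ
from the patch):

  typedef struct { unsigned long pd; } hugepd_t;	/* was: signed long pd */

  /* FSL sketch: test the top (PD_HUGE) bit explicitly instead of relying
   * on the value looking negative when interpreted as signed.
   */
  static inline int hugepd_ok(hugepd_t hpd)
  {
	return !!(hpd.pd & PD_HUGE);
  }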
      
      Fixes: 5dc1ef85 ("powerpc/mm: Use big endian Linux page tables for book3s 64")
      Cc: stable@vger.kernel.org # v4.7+
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
27. 19 Jul 2016, 1 commit
  28. 11 May 2016, 1 commit
      powerpc/mm: Make 4K and 64K use pte_t for pgtable_t · 934828ed
Aneesh Kumar K.V authored
This patch switches the 4K Linux page size config to use the pte_t * type
instead of struct page * for pgtable_t. This simplifies the code a lot
and helps in consolidating both the 64K and 4K page allocator routines. The
changes should not have any impact, because we already store the physical
address in the upper-level page table tree, which implies we already do the
struct page * to physical address conversion.

One change to note here is that we move the pgtable_page_dtor() call for
nohash to pte_fragment_free_mm(). The nohash-related change is due to
the related changes in pgtable_64.c. The type switch is sketched below.
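
The essence of the type switch, sketched (assuming the 4K config previously
carried a struct page pointer through pgtable_t):

  /* Before (4K pages): pgtable_t was a struct page pointer. */
  typedef struct page *pgtable_t;

  /* After: both 4K and 64K configs pass the PTE fragment around directly,
   * letting the two page allocator paths share code.
   */
  typedef pte_t *pgtable_t;
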
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
29. 01 May 2016, 1 commit
      powerpc/mm: Use big endian Linux page tables for book3s 64 · 5dc1ef85
Aneesh Kumar K.V authored
      Traditionally Power server machines have used the Hashed Page Table MMU
      mode. In this mode Linux manages its own tree of nested page tables,
      aka. "the Linux page tables", which are not used by the hardware
      directly, and software loads translations into the hash page table for
      use by the hardware.
      
      Power ISA 3.0 defines a new MMU mode, known as Radix Tree Translation,
      where the hardware can directly operate on the Linux page tables.
      However the hardware requires that the page tables be in big endian
      format.
      
      To accommodate this, switch the pgtable types to __be64 and add
      appropriate endian conversions.
      
      Because we will be supporting a single kernel binary that boots using
      either radix or hash mode, we always store the Linux page tables big
      endian, even in hash mode where they are not actually used by the
      hardware.
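
A minimal sketch of "store big endian, convert in the accessors"
(illustrative, not the full patch):

  /* The in-memory PTE is big endian; accessors convert to and from CPU
   * endianness so the rest of the code keeps working on native values.
   */
  typedef struct { __be64 pte; } pte_t;

  #define pte_val(x)	be64_to_cpu((x).pte)
  #define __pte(x)	((pte_t) { cpu_to_be64(x) })
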
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Fix sparse errors, flesh out change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
30. 03 Mar 2016, 1 commit
  31. 29 Feb 2016, 1 commit
  32. 14 Dec 2015, 3 commits
  33. 28 Oct 2015, 1 commit
      powerpc/booke: Only use VIRT_PHYS_OFFSET on booke32 · ffda09a9
Scott Wood authored
The way VIRT_PHYS_OFFSET is calculated is not correct on book3e-64, because
it does not account for CONFIG_RELOCATABLE other than via the
32-bit-only virt_phys_offset.
      
      book3e-64 can (and if the comment about a GCC miscompilation is still
      relevant, should) use the normal ppc64 __va/__pa.
      
At this point, only booke-32 will use VIRT_PHYS_OFFSET, so given the
issues with its calculation, restrict its definition to booke-32, as
sketched below.
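
A hedged sketch of the resulting guard in arch/powerpc/include/asm/page.h
(the config symbols are an assumption based on the commit title):

  /* VIRT_PHYS_OFFSET only makes sense on 32-bit Book E, where the
   * CONFIG_RELOCATABLE case is covered by the runtime virt_phys_offset.
   */
  #if defined(CONFIG_BOOKE) && !defined(CONFIG_PPC64)
  #ifdef CONFIG_RELOCATABLE
  #define VIRT_PHYS_OFFSET	virt_phys_offset
  #else
  #define VIRT_PHYS_OFFSET	(KERNELBASE - PHYSICAL_START)
  #endif
  #endif
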
Signed-off-by: Scott Wood <scottwood@freescale.com>