1. 14 Oct 2020 (1 commit)
    • arch, drivers: replace for_each_memblock() with for_each_mem_range() · b10d6bca
      Authored by Mike Rapoport
      There are several occurrences of the following pattern:
      
      	for_each_memblock(memory, reg) {
      		start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
      		end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));
      
      		/* do something with start and end */
      	}
      
      Using the for_each_mem_range() iterator is more appropriate in such cases
      and allows simpler, cleaner code.
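      
      For comparison, a minimal sketch of the post-conversion loop (the iterator
      hands back physical addresses directly, so the __pfn_to_phys() round-trips
      disappear; variable names here are illustrative):
      
      	phys_addr_t start, end;
      	u64 i;
      
      	for_each_mem_range(i, &start, &end) {
      		/* do something with start and end */
      	}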
      
      [akpm@linux-foundation.org: fix arch/arm/mm/pmsa-v7.c build]
      [rppt@linux.ibm.com: mips: fix cavium-octeon build caused by memblock refactoring]
        Link: http://lkml.kernel.org/r/20200827124549.GD167163@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: https://lkml.kernel.org/r/20200818151634.14343-13-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 10 Jun 2020 (2 commits)
    • mm: pgtable: add shortcuts for accessing kernel PMD and PTE · e05c7b1f
      Authored by Mike Rapoport
      The powerpc 32-bit implementation of pgtable has nice shortcuts for
      accessing kernel PMD and PTE for a given virtual address.  Make these
      helpers available for all architectures.
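      
      A sketch of what such shortcuts look like when expressed through the
      generic accessors (pmd_off_k() is the series' name, per the fix-up note
      below; virt_to_kpte() is shown as reconstructed from the generic
      five-level layout, so treat the details as approximate):
      
      	static inline pmd_t *pmd_off_k(unsigned long va)
      	{
      		return pmd_offset(pud_offset(p4d_offset(pgd_offset_k(va), va), va), va);
      	}
      
      	static inline pte_t *virt_to_kpte(unsigned long vaddr)
      	{
      		pmd_t *pmd = pmd_off_k(vaddr);
      
      		return pmd_none(*pmd) ? NULL : pte_offset_kernel(pmd, vaddr);
      	}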
      
      [rppt@linux.ibm.com: microblaze: fix page table traversal in setup_rt_frame()]
        Link: http://lkml.kernel.org/r/20200518191511.GD1118872@kernel.org
      [akpm@linux-foundation.org: s/pmd_ptr_k/pmd_off_k/ in various powerpc places]
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-9-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: don't include asm/pgtable.h if linux/mm.h is already included · e31cf2f4
      Authored by Mike Rapoport
      Patch series "mm: consolidate definitions of page table accessors", v2.
      
      The low-level page table accessors (pXY_index(), pXY_offset()) are
      duplicated across all architectures, and sometimes more than once.  For
      instance, we have 31 definitions of pgd_offset() for 25 supported
      architectures.
      
      Most of these definitions are actually identical and typically boil
      down to, e.g.
      
      static inline unsigned long pmd_index(unsigned long address)
      {
              return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
      }
      
      static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
      {
              return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
      }
      
      These definitions can be shared among 90% of the arches provided that
      XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined.
      
      For architectures that really need a custom version, there is always
      the possibility of overriding the generic version with the usual ifdef magic.
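      
      The override convention is the standard kernel one: the architecture
      defines the symbol before the generic header is parsed, and the generic
      header only supplies its version under an #ifndef guard (a hedged
      sketch, not the literal patch text):
      
      	/* arch/<foo>/include/asm/pgtable.h -- hypothetical override */
      	#define pmd_offset pmd_offset
      	static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
      	{
      		return (pmd_t *)NULL;	/* arch-specific lookup would go here */
      	}
      
      	/* include/linux/pgtable.h -- generic fallback */
      	#ifndef pmd_offset
      	static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
      	{
      		return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
      	}
      	#endif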
      
      These patches introduce include/linux/pgtable.h, which replaces
      include/asm-generic/pgtable.h, and add the definitions of the page table
      accessors to the new header.
      
      This patch (of 12):
      
      The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the
      functions involving page table manipulations, e.g.  pte_alloc() and
      pmd_alloc().  So there is no point in explicitly including <asm/pgtable.h>
      in files that already include <linux/mm.h>.
      
      The include statements in such cases were removed with a simple loop:
      
      	for f in $(git grep -l "include <linux/mm.h>") ; do
      		sed -i -e '/include <asm\/pgtable.h>/ d' $f
      	done
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Mike Rapoport <rppt@kernel.org>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-1-rppt@kernel.org
      Link: http://lkml.kernel.org/r/20200514170327.31389-2-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 26 May 2020 (3 commits)
  4. 26 Feb 2020 (2 commits)
  5. 23 Jan 2020 (1 commit)
  6. 18 Nov 2019 (1 commit)
  7. 27 Aug 2019 (5 commits)
  8. 20 Aug 2019 (1 commit)
  9. 19 Jun 2019 (1 commit)
  10. 31 May 2019 (1 commit)
  11. 03 May 2019 (1 commit)
    • powerpc/mm: Warn if W+X pages found on boot · 453d87f6
      Authored by Russell Currey
      Implement code to walk all pages and warn if any are found to be both
      writable and executable.  This depends on STRICT_KERNEL_RWX being
      enabled, and is behind the DEBUG_WX config option.
      
      This only runs on boot and has no runtime performance implications.
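      
      The heart of the walk is a per-region permission check; here is a hedged
      sketch modeled on the arm64/ptdump approach mentioned below (struct
      fields and names are approximate, not the literal patch):
      
      	static void note_prot_wx(struct pg_state *st, unsigned long addr)
      	{
      		if (!st->check_wx)
      			return;
      
      		/* Flag only regions carrying the full kernel RWX permission set. */
      		if ((st->current_flags & pgprot_val(PAGE_KERNEL_X)) !=
      		    pgprot_val(PAGE_KERNEL_X))
      			return;
      
      		WARN_ONCE(1, "powerpc/mm: Found insecure W+X mapping at address %p\n",
      			  (void *)st->start_address);
      
      		st->wx_pages += (addr - st->start_address) / PAGE_SIZE;
      	}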
      
      Very heavily influenced by (and in some cases copied verbatim from) the
      arm64 code written by Laura Abbott (thanks!), since our ptdump
      infrastructure is similar.
      Signed-off-by: Russell Currey <ruscur@russell.cc>
      [mpe: Fixup build error when disabled]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  12. 02 May 2019 (3 commits)
  13. 23 Feb 2019 (4 commits)
    • powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX · 63b2bc61
      Authored by Christophe Leroy
      Today, STRICT_KERNEL_RWX is based on the use of regular pages
      to map kernel memory.
      
      On Book3S 32, this has three consequences:
      - Using pages instead of BATs to map the kernel linear memory severely
      impacts performance.
      - Exec protection is not effective, because no-execute cannot be set at
      page level (except on the 603, which doesn't have hash tables).
      - Write protection is not effective, because the PP bits do not provide
      an RO mode for kernel-only pages (except on the 603, which handles it in
      software via PAGE_DIRTY).
      
      On the 603+, we have:
      - Independent IBATs and DBATs, allowing the executable parts to be limited.
      - An NX bit that can be set in segment registers to forbid execution of
      memory mapped by pages.
      - RO mode on DBATs, even for kernel-only blocks.
      
      On the 601, there is not much we can do other than warn the user,
      because:
      - BATs are common to instructions and data.
      - BATs do not provide an RO mode for kernel-only blocks.
      - Segment registers don't have the NX bit.
      
      In order to use IBATs for exec protection, this patch:
      - Aligns _etext to the BAT block size (128KB).
      - Sets the NX bit in the kernel segment registers (except on the vmalloc
      area when CONFIG_MODULES is selected).
      - Maps kernel text with IBATs.
      
      In order to use DBATs for write protection, this patch:
      - Aligns RW data to the BAT block size (4MB).
      - Maps the kernel RO area with write-prohibited DBATs.
      - Maps the remaining memory with the remaining DBATs.
      
      Here is what we get with this patch on a 832x when activating
      STRICT_KERNEL_RWX:
      
      Symbols:
      c0000000 T _stext
      c0680000 R __start_rodata
      c0680000 R _etext
      c0800000 T __init_begin
      c0800000 T _sinittext
      
      ~# cat /sys/kernel/debug/block_address_translation
      ---[ Instruction Block Address Translation ]---
      0: 0xc0000000-0xc03fffff 0x00000000 Kernel EXEC coherent
      1: 0xc0400000-0xc05fffff 0x00400000 Kernel EXEC coherent
      2: 0xc0600000-0xc067ffff 0x00600000 Kernel EXEC coherent
      3:         -
      4:         -
      5:         -
      6:         -
      7:         -
      
      ---[ Data Block Address Translation ]---
      0: 0xc0000000-0xc07fffff 0x00000000 Kernel RO coherent
      1: 0xc0800000-0xc0ffffff 0x00800000 Kernel RW coherent
      2: 0xc1000000-0xc1ffffff 0x01000000 Kernel RW coherent
      3: 0xc2000000-0xc3ffffff 0x02000000 Kernel RW coherent
      4: 0xc4000000-0xc7ffffff 0x04000000 Kernel RW coherent
      5: 0xc8000000-0xcfffffff 0x08000000 Kernel RW coherent
      6: 0xd0000000-0xdfffffff 0x10000000 Kernel RW coherent
      7:         -
      
      ~# cat /sys/kernel/debug/segment_registers
      ---[ User Segments ]---
      0x00000000-0x0fffffff Kern key 1 User key 1 VSID 0xa085d0
      0x10000000-0x1fffffff Kern key 1 User key 1 VSID 0xa086e1
      0x20000000-0x2fffffff Kern key 1 User key 1 VSID 0xa087f2
      0x30000000-0x3fffffff Kern key 1 User key 1 VSID 0xa08903
      0x40000000-0x4fffffff Kern key 1 User key 1 VSID 0xa08a14
      0x50000000-0x5fffffff Kern key 1 User key 1 VSID 0xa08b25
      0x60000000-0x6fffffff Kern key 1 User key 1 VSID 0xa08c36
      0x70000000-0x7fffffff Kern key 1 User key 1 VSID 0xa08d47
      0x80000000-0x8fffffff Kern key 1 User key 1 VSID 0xa08e58
      0x90000000-0x9fffffff Kern key 1 User key 1 VSID 0xa08f69
      0xa0000000-0xafffffff Kern key 1 User key 1 VSID 0xa0907a
      0xb0000000-0xbfffffff Kern key 1 User key 1 VSID 0xa0918b
      
      ---[ Kernel Segments ]---
      0xc0000000-0xcfffffff Kern key 0 User key 1 No Exec VSID 0x000ccc
      0xd0000000-0xdfffffff Kern key 0 User key 1 No Exec VSID 0x000ddd
      0xe0000000-0xefffffff Kern key 0 User key 1 No Exec VSID 0x000eee
      0xf0000000-0xffffffff Kern key 0 User key 1 No Exec VSID 0x000fff
      
      Aligning _etext to 128KB allows mapping up to 32MB of text with 8 IBATs:
      16MB + 8MB + 4MB + 2MB + 1MB + 512KB + 256KB + 128KB (+ 128KB) = 32MB
      (a 9th IBAT is unneeded, as 32MB would need only a single 32MB block).
      
      Aligning data to 4MB allows mapping up to 512MB of data with 8 DBATs:
      16MB + 8MB + 4MB + 4MB + 32MB + 64MB + 128MB + 256MB = 512MB
      
      Because some processors only have 4 BATs, and because some targets need
      DBATs for mapping other areas, the following patch will allow modifying
      the _etext and data alignment.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/32: always populate page tables for Abatron BDI. · d2f15e09
      Authored by Christophe Leroy
      When CONFIG_BDI_SWITCH is set, the page tables have to be populated
      although large TLBs are used, because the BDI switch knows nothing
      about those large TLBs, which are handled directly in the TLB miss logic.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/32s: use generic mmu_mapin_ram() for all blocks. · 9e849f23
      Authored by Christophe Leroy
      Now that mmu_mapin_ram() is able to handle blocks other than the one
      starting at 0, the Wii can use it for all its blocks.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/32: add base address to mmu_mapin_ram() · 14e609d6
      Authored by Christophe Leroy
      At present, mmu_mapin_ram() always maps RAM from the beginning.
      But some platforms, like the Wii, have to map a second block of RAM.
      
      This patch adds the base address of the block to mmu_mapin_ram().
      For the moment, only base address 0 is supported.
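      
      A hedged sketch of the resulting interface change (parameter names
      approximate; the return value is the amount of memory mapped):
      
      	/* Before: always maps from physical address 0 */
      	unsigned long mmu_mapin_ram(unsigned long top);
      
      	/* After: the caller passes the base of the block */
      	unsigned long mmu_mapin_ram(unsigned long base, unsigned long top);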
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  14. 05 Jan 2019 (1 commit)
    • mm: treewide: remove unused address argument from pte_alloc functions · 4cf58924
      Authored by Joel Fernandes (Google)
      Patch series "Add support for fast mremap".
      
      This series speeds up the mremap(2) syscall by copying page tables at
      the PMD level even for non-THP systems.  There is concern that the extra
      'address' argument that mremap passes to pte_alloc may do something
      subtle and architecture-related in the future that would make the scheme
      not work.  Also, we find that there is no point in passing the 'address'
      to pte_alloc since it is unused.  This patch therefore removes this
      argument tree-wide, resulting in a nice negative diff as well.  Along
      the way, we ensure that the enabled architectures do not do anything
      funky with the 'address' argument that would go unnoticed by the
      optimization.
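      
      The net effect on the interfaces, shown for one representative pair of
      prototypes (a sketch; the Coccinelle script below handles all five
      function-name variants):
      
      	/* Before */
      	int __pte_alloc(struct mm_struct *mm, pmd_t *pmd, unsigned long address);
      	pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address);
      
      	/* After */
      	int __pte_alloc(struct mm_struct *mm, pmd_t *pmd);
      	pte_t *pte_alloc_one_kernel(struct mm_struct *mm);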
      
      Build and boot tested on x86-64.  Build tested on arm64.  The config
      enablement patch for arm64 will be posted in the future after more
      testing.
      
      The changes were obtained by applying the following Coccinelle script
      (thanks, Julia, for answering all the Coccinelle questions!).
      The following fix-ups were done manually:
      * Removal of the address argument from pte_fragment_alloc
      * Removal of the pte_alloc_one_fast definitions from m68k and microblaze.
      
      // Options: --include-headers --no-includes
      // Note: I split the 'identifier fn' line, so if you are manually
      // running it, please unsplit it so it runs for you.
      
      virtual patch
      
      @pte_alloc_func_def depends on patch exists@
      identifier E2;
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      type T2;
      @@
      
       fn(...
      - , T2 E2
       )
       { ... }
      
      @pte_alloc_func_proto_noarg depends on patch exists@
      type T1, T2, T3, T4;
      identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      @@
      
      (
      - T3 fn(T1, T2);
      + T3 fn(T1);
      |
      - T3 fn(T1, T2, T4);
      + T3 fn(T1, T2);
      )
      
      @pte_alloc_func_proto depends on patch exists@
      identifier E1, E2, E4;
      type T1, T2, T3, T4;
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      @@
      
      (
      - T3 fn(T1 E1, T2 E2);
      + T3 fn(T1 E1);
      |
      - T3 fn(T1 E1, T2 E2, T4 E4);
      + T3 fn(T1 E1, T2 E2);
      )
      
      @pte_alloc_func_call depends on patch exists@
      expression E2;
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      @@
      
       fn(...
      -,  E2
       )
      
      @pte_alloc_macro depends on patch exists@
      identifier fn =~
      "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
      identifier a, b, c;
      expression e;
      position p;
      @@
      
      (
      - #define fn(a, b, c) e
      + #define fn(a, b) e
      |
      - #define fn(a, b) e
      + #define fn(a) e
      )
      
      Link: http://lkml.kernel.org/r/20181108181201.88826-2-joelaf@google.com
      Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
      Suggested-by: Kirill A. Shutemov <kirill@shutemov.name>
      Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Julia Lawall <Julia.Lawall@lip6.fr>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: William Kucharski <william.kucharski@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  15. 19 Dec 2018 (1 commit)
    • powerpc: implement CONFIG_DEBUG_VIRTUAL · 6bf752da
      Authored by Christophe Leroy
      This patch implements CONFIG_DEBUG_VIRTUAL to warn about incorrect
      uses of virt_to_phys() and page_to_phys().
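      
      The mechanism is small: virt_to_phys() gains a sanity check on its
      argument before the usual translation.  A hedged sketch of the idea,
      modeled on the equivalent checks on other architectures:
      
      	static inline phys_addr_t virt_to_phys(volatile void *address)
      	{
      		WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) &&
      			!virt_addr_valid(address));
      
      		return __pa((unsigned long)address);
      	}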
      
      Below is the result of test_debug_virtual:
      
      [    1.438746] WARNING: CPU: 0 PID: 1 at ./arch/powerpc/include/asm/io.h:808 test_debug_virtual_init+0x3c/0xd4
      [    1.448156] CPU: 0 PID: 1 Comm: swapper Not tainted 4.20.0-rc5-00560-g6bfb52e23a00-dirty #532
      [    1.457259] NIP:  c066c550 LR: c0650ccc CTR: c066c514
      [    1.462257] REGS: c900bdb0 TRAP: 0700   Not tainted  (4.20.0-rc5-00560-g6bfb52e23a00-dirty)
      [    1.471184] MSR:  00029032 <EE,ME,IR,DR,RI>  CR: 48000422  XER: 20000000
      [    1.477811]
      [    1.477811] GPR00: c0650ccc c900be60 c60d0000 00000000 006000c0 c9000000 00009032 c7fa0020
      [    1.477811] GPR08: 00002400 00000001 09000000 00000000 c07b5d04 00000000 c00037d8 00000000
      [    1.477811] GPR16: 00000000 00000000 00000000 00000000 c0760000 c0740000 00000092 c0685bb0
      [    1.477811] GPR24: c065042c c068a734 c0685b8c 00000006 00000000 c0760000 c075c3c0 ffffffff
      [    1.512711] NIP [c066c550] test_debug_virtual_init+0x3c/0xd4
      [    1.518315] LR [c0650ccc] do_one_initcall+0x8c/0x1cc
      [    1.523163] Call Trace:
      [    1.525595] [c900be60] [c0567340] 0xc0567340 (unreliable)
      [    1.530954] [c900be90] [c0650ccc] do_one_initcall+0x8c/0x1cc
      [    1.536551] [c900bef0] [c0651000] kernel_init_freeable+0x1f4/0x2cc
      [    1.542658] [c900bf30] [c00037ec] kernel_init+0x14/0x110
      [    1.547913] [c900bf40] [c000e1d0] ret_from_kernel_thread+0x14/0x1c
      [    1.553971] Instruction dump:
      [    1.556909] 3ca50100 bfa10024 54a5000e 3fa0c076 7c0802a6 3d454000 813dc204 554893be
      [    1.564566] 7d294010 7d294910 90010034 39290001 <0f090000> 7c3e0b78 955e0008 3fe0c062
      [    1.572425] ---[ end trace 6f6984225b280ad6 ]---
      [    1.577467] PA: 0x09000000 for VA: 0xc9000000
      [    1.581799] PA: 0x061e8f50 for VA: 0xc61e8f50
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  16. 04 Dec 2018 (1 commit)
  17. 26 Nov 2018 (1 commit)
  18. 31 Oct 2018 (1 commit)
    • memblock: rename memblock_alloc{_nid,_try_nid} to memblock_phys_alloc* · 9a8dd708
      Authored by Mike Rapoport
      Make it explicit that the caller gets a physical address rather than a
      virtual one.
      
      This will also allow using the memblock_alloc prefix for memblock
      allocations returning a virtual address, which is done in the following
      patches.
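      
      After the rename, a call site that wants a virtual address has to
      convert explicitly, which is exactly what the new name signals
      (an illustrative sketch, not taken from the patch):
      
      	phys_addr_t pa = memblock_phys_alloc(PAGE_SIZE, SMP_CACHE_BYTES);
      	void *va = pa ? __va(pa) : NULL;	/* explicit phys-to-virt step */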
      
      The conversion is done using the following semantic patch:
      
      @@
      expression e1, e2, e3;
      @@
      (
      - memblock_alloc(e1, e2)
      + memblock_phys_alloc(e1, e2)
      |
      - memblock_alloc_nid(e1, e2, e3)
      + memblock_phys_alloc_nid(e1, e2, e3)
      |
      - memblock_alloc_try_nid(e1, e2, e3)
      + memblock_phys_alloc_try_nid(e1, e2, e3)
      )
      
      Link: http://lkml.kernel.org/r/1536927045-23536-7-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Serge Semin <fancer.lancer@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  19. 14 Oct 2018 (4 commits)
  20. 13 Oct 2018 (1 commit)
  21. 31 Mar 2018 (2 commits)
  22. 16 Jan 2018 (1 commit)
    • powerpc/mm: extend _PAGE_PRIVILEGED to all CPUs · 812fadcb
      Authored by Christophe Leroy
      commit ac29c640 ("powerpc/mm: Replace _PAGE_USER with
      _PAGE_PRIVILEGED") introduced _PAGE_PRIVILEGED for Book3S/64.
      
      This patch generalises _PAGE_PRIVILEGED to all CPUs, allowing each to
      have either _PAGE_PRIVILEGED or _PAGE_USER or both.
      
      PPC_8xx has a _PAGE_SHARED flag which is set for, and only for, all
      non-user pages.  Let's rename it _PAGE_PRIVILEGED to remove confusion,
      as it has nothing to do with Linux shared pages.
      
      On BookE, there is a _PAGE_BAP_SR bit which has to be set for kernel
      pages: defining _PAGE_PRIVILEGED as _PAGE_BAP_SR will make this
      generic.
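      
      In header terms, the BookE case amounts to something like this (a
      hedged sketch of the mapping the text describes, not the literal patch):
      
      	/* BookE: the kernel-only "supervisor read" bit doubles as the
      	 * generic privileged bit. */
      	#define _PAGE_PRIVILEGED	_PAGE_BAP_SR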
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  23. 04 Oct 2017 (1 commit)
    • powerpc/mm: Call flush_tlb_kernel_range with interrupts enabled · 7c6a4f3b
      Authored by Guenter Roeck
      flush_tlb_kernel_range() may call smp_call_function_many(), which
      expects interrupts to be enabled.  This results in a traceback:
      
      WARNING: CPU: 0 PID: 1 at kernel/smp.c:416 smp_call_function_many+0xcc/0x2fc
      CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.14.0-rc1-00009-g0666f560 #1
      task: cf830000 task.stack: cf82e000
      NIP:  c00a93c8 LR: c00a9634 CTR: 00000001
      REGS: cf82fde0 TRAP: 0700   Not tainted  (4.14.0-rc1-00009-g0666f560)
      MSR:  00021000 <CE,ME>  CR: 24000082  XER: 00000000
      
      GPR00: c00a9634 cf82fe90 cf830000 c050ad3c c0015a54 00000000 00000001 00000001
      GPR08: 00000001 00000000 00000000 cf82e000 24000084 00000000 c0003150 00000000
      GPR16: 00000000 00000000 00000000 00000000 00000000 00000001 00000000 c0510000
      GPR24: 00000000 c0015a54 00000000 c050ad3c c051823c c050ad3c 00000025 00000000
      NIP [c00a93c8] smp_call_function_many+0xcc/0x2fc
      LR [c00a9634] smp_call_function+0x3c/0x50
      Call Trace:
      [cf82fe90] [00000010] 0x10 (unreliable)
      [cf82fed0] [c00a9634] smp_call_function+0x3c/0x50
      [cf82fee0] [c0015d2c] flush_tlb_kernel_range+0x20/0x38
      [cf82fef0] [c001524c] mark_initmem_nx+0x154/0x16c
      [cf82ff20] [c001484c] free_initmem+0x20/0x4c
      [cf82ff30] [c000316c] kernel_init+0x1c/0x108
      [cf82ff40] [c000f3a8] ret_from_kernel_thread+0x5c/0x64
      Instruction dump:
      7c0803a6 7d808120 38210040 4e800020 3d20c052 812981a0 2f890000 40beffac
      3d20c051 8929ac64 2f890000 40beff9c <0fe00000> 4bffff94 7fc3f378 7f64db78
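      
      The shape of the fix, as a hedged sketch (the helper name follows the
      patch's rename to a no-flush variant; details approximate): do the PTE
      updates with interrupts disabled, re-enable them, and only then flush,
      since the flush may send IPIs.
      
      	local_irq_save(flags);
      	for (i = 0; i < numpages; i++, page++) {
      		err = __change_page_attr_noflush(page, prot);
      		if (err)
      			break;
      	}
      	wmb();
      	local_irq_restore(flags);
      	/* Safe now: smp_call_function_many() requires interrupts on. */
      	flush_tlb_kernel_range((unsigned long)page_address(start),
      			       (unsigned long)page_address(page));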
      
      Fixes: 3184cc4b ("powerpc/mm: Fix kernel RAM protection after freeing ...")
      Fixes: e611939f ("powerpc/mm: Ensure change_page_attr() doesn't ...")
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>