1. 07 Nov 2021 (1 commit)
  2. 29 Mar 2021 (2 commits)
  3. 20 Mar 2021 (1 commit)
  4. 23 Dec 2020 (4 commits)
  5. 14 Oct 2020 (1 commit)
    • arch, drivers: replace for_each_memblock() with for_each_mem_range() · b10d6bca
      Committed by Mike Rapoport
      There are several occurrences of the following pattern:
      
      	for_each_memblock(memory, reg) {
      		start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
      		end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));
      
      		/* do something with start and end */
      	}
      
      Using the for_each_mem_range() iterator is more appropriate in such
      cases and allows simpler and cleaner code.
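
      With the iterator the same loop becomes (a minimal sketch; the
      iterator yields physical range bounds directly, so the pfn conversions
      go away):

      	u64 i;
      	phys_addr_t start, end;

      	for_each_mem_range(i, &start, &end) {
      		/* do something with start and end */
      	}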
      
      [akpm@linux-foundation.org: fix arch/arm/mm/pmsa-v7.c build]
      [rppt@linux.ibm.com: mips: fix cavium-octeon build caused by memblock refactoring]
        Link: http://lkml.kernel.org/r/20200827124549.GD167163@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Emil Renner Berthing <kernel@esmil.dk>
      Cc: Hari Bathini <hbathini@linux.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: https://lkml.kernel.org/r/20200818151634.14343-13-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b10d6bca
  6. 10 Jun 2020 (2 commits)
    • mm: consolidate pte_index() and pte_offset_*() definitions · 974b9b2c
      Committed by Mike Rapoport
      All architectures define pte_index() as
      
      	(address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)
      
      and all architectures define pte_offset_kernel() as an entry in the array
      of PTEs indexed by the pte_index().
      
      For most architectures the pte_offset_kernel() implementation relies on
      the availability of pmd_page_vaddr() that converts a PMD entry value to
      the virtual address of the page containing the PTEs array.
      
      Let's move x86 definitions of the PTE accessors to the generic place in
      <linux/pgtable.h> and then simply drop the respective definitions from the
      other architectures.
      
      The architectures that didn't provide pmd_page_vaddr() are updated to have
      that defined.
      
      The generic implementation of pte_offset_kernel() can be overridden by an
      architecture and alpha makes use of this because it has special ordering
      requirements for its version of pte_offset_kernel().
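
      Put together, the generic definitions that move into <linux/pgtable.h>
      boil down to something like this (a sketch of the consolidated
      accessors):

      	static inline unsigned long pte_index(unsigned long address)
      	{
      		return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
      	}

      	static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
      	{
      		return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(address);
      	}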
      
      [rppt@linux.ibm.com: v2]
        Link: http://lkml.kernel.org/r/20200514170327.31389-11-rppt@kernel.org
      [rppt@linux.ibm.com: update]
        Link: http://lkml.kernel.org/r/20200514170327.31389-12-rppt@kernel.org
      [rppt@linux.ibm.com: update]
        Link: http://lkml.kernel.org/r/20200514170327.31389-13-rppt@kernel.org
      [akpm@linux-foundation.org: fix x86 warning]
      [sfr@canb.auug.org.au: fix powerpc build]
        Link: http://lkml.kernel.org/r/20200607153443.GB738695@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-10-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      974b9b2c
    • mm: don't include asm/pgtable.h if linux/mm.h is already included · e31cf2f4
      Committed by Mike Rapoport
      Patch series "mm: consolidate definitions of page table accessors", v2.
      
      The low-level page table accessors (pXY_index(), pXY_offset()) are
      duplicated across all architectures, and sometimes more than once. For
      instance, there are 31 definitions of pgd_offset() across the 25
      supported architectures.
      
      Most of these definitions are actually identical and typically it boils
      down to, e.g.
      
      static inline unsigned long pmd_index(unsigned long address)
      {
              return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
      }
      
      static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
      {
              return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
      }
      
      These definitions can be shared among 90% of the arches provided that
      XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined.

      For architectures that really need a custom version, there is always
      the possibility of overriding the generic version with the usual ifdef
      magic, as sketched below.
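
      The override hook is the usual one (a sketch; an architecture that
      defines the macro suppresses the generic inline):

      	#ifndef pmd_offset
      	static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
      	{
      		return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
      	}
      	#define pmd_offset pmd_offset
      	#endif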
      
      These patches introduce include/linux/pgtable.h that replaces
      include/asm-generic/pgtable.h and add the definitions of the page table
      accessors to the new header.
      
      This patch (of 12):
      
      The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the
      functions involving page table manipulations, e.g. pte_alloc() and
      pmd_alloc(). So there is no point in explicitly including <asm/pgtable.h>
      in files that already include <linux/mm.h>.

      The include statements in such cases were removed with a simple loop:
      
      	for f in $(git grep -l "include <linux/mm.h>") ; do
      		sed -i -e '/include <asm\/pgtable.h>/ d' $f
      	done
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Mike Rapoport <rppt@kernel.org>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-1-rppt@kernel.org
      Link: http://lkml.kernel.org/r/20200514170327.31389-2-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e31cf2f4
  7. 05 Jun 2020 (1 commit)
    • arm64: add support for folded p4d page tables · e9f63768
      Committed by Mike Rapoport
      Implement primitives necessary for the 4th level folding, add walks of p4d
      level where appropriate, replace 5level-fixup.h with pgtable-nop4d.h and
      remove __ARCH_USE_5LEVEL_HACK.
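
      A typical page table walk then gains one step between pgd and pud (a
      minimal sketch; with the p4d level folded, p4d_offset() simply passes
      the pgd entry through):

      	pgd_t *pgdp = pgd_offset_k(addr);
      	p4d_t *p4dp = p4d_offset(pgdp, addr);
      	pud_t *pudp = pud_offset(p4dp, addr);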
      
      [arnd@arndb.de: fix gcc-10 shift warning]
        Link: http://lkml.kernel.org/r/20200429185657.4085975-1-arnd@arndb.de
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert+renesas@glider.be>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: James Morse <james.morse@arm.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200414153455.21744-4-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e9f63768
  8. 15 Aug 2019 (1 commit)
    • arm64: memory: rename VA_START to PAGE_END · 77ad4ce6
      Committed by Mark Rutland
      Prior to commit:
      
        14c127c9 ("arm64: mm: Flip kernel VA space")
      
      ... VA_START described the start of the TTBR1 address space for a given
      VA size described by VA_BITS, where all kernel mappings began.
      
      Since that commit, VA_START described a portion midway through the
      address space, where the linear map ends and other kernel mappings
      begin.
      
      To avoid confusion, let's rename VA_START to PAGE_END, making it clear
      that it's not the start of the TTBR1 address space and implying that
      it's related to PAGE_OFFSET. Comments and other mnemonics are updated
      accordingly, along with a typo fix in the description of VMEMMAP_SIZE.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      77ad4ce6
  9. 09 Aug 2019 (2 commits)
    • arm64: mm: Introduce VA_BITS_MIN · 90ec95cd
      Committed by Steve Capper
      In order to support 52-bit kernel addresses detectable at boot time, the
      kernel needs to know the most conservative VA_BITS possible should it
      need to fall back to this quantity due to lack of hardware support.
      
      A new compile time constant VA_BITS_MIN is introduced in this patch and
      it is employed in the KASAN end address, KASLR, and EFI stub.
      
      For Arm, if 52-bit VA support is unavailable, the fallback is to 48
      bits; in other words, VA_BITS_MIN = min(48, VA_BITS).
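
      The constant ends up as a compile-time choice (a sketch of the arm64
      definition, assuming the CONFIG_ARM64_VA_BITS_52 option):

      	#ifdef CONFIG_ARM64_VA_BITS_52
      	#define VA_BITS_MIN	(48)
      	#else
      	#define VA_BITS_MIN	(VA_BITS)
      	#endif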
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      90ec95cd
    • arm64: mm: Flip kernel VA space · 14c127c9
      Committed by Steve Capper
      In order to allow for a KASAN shadow that changes size at boot time, one
      must fix KASAN_SHADOW_END for both 48 and 52-bit VAs and "grow" the
      start address. Also, it is highly desirable to maintain the same
      function addresses in the kernel .text between VA sizes. Both of these
      requirements necessitate flipping the kernel address space halves such
      that the direct linear map occupies the lower addresses.
      
      This patch puts the direct linear map in the lower addresses of the
      kernel VA range and everything else in the higher ranges.
      
      We need to adjust:
       *) KASAN shadow region placement logic,
       *) KASAN_SHADOW_OFFSET computation logic,
       *) virt_to_phys, phys_to_virt checks,
       *) page table dumper.
      
      These are all small changes, that need to take place atomically, so they
      are bundled into this commit.
      
      As part of the re-arrangement, a guard region of 2MB (to preserve
      alignment for the fixed map) is added after the vmemmap. Otherwise the
      vmemmap could intersect with IS_ERR pointers.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      14c127c9
  10. 19 Jun 2019 (1 commit)
  11. 13 Mar 2019 (1 commit)
    • treewide: add checks for the return value of memblock_alloc*() · 8a7f97b9
      Committed by Mike Rapoport
      Add checks for the return value of the memblock_alloc*() functions and
      call panic() in case of error. The panic message repeats the one used
      by the panicking memblock allocators, with the parameters adjusted to
      include only the relevant ones.
      
      The replacement was mostly automated with semantic patches like the one
      below with manual massaging of format strings.
      
        @@
        expression ptr, size, align;
        @@
        ptr = memblock_alloc(size, align);
        + if (!ptr)
        + 	panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__, size, align);
      
      [anders.roxell@linaro.org: use '%pa' with 'phys_addr_t' type]
        Link: http://lkml.kernel.org/r/20190131161046.21886-1-anders.roxell@linaro.org
      [rppt@linux.ibm.com: fix format strings for panics after memblock_alloc]
        Link: http://lkml.kernel.org/r/1548950940-15145-1-git-send-email-rppt@linux.ibm.com
      [rppt@linux.ibm.com: don't panic if the allocation in sparse_buffer_init fails]
        Link: http://lkml.kernel.org/r/20190131074018.GD28876@rapoport-lnx
      [akpm@linux-foundation.org: fix xtensa printk warning]
      Link: http://lkml.kernel.org/r/1548057848-15136-20-git-send-email-rppt@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
      Reviewed-by: Guo Ren <ren_guo@c-sky.com>		[c-sky]
      Acked-by: Paul Burton <paul.burton@mips.com>		[MIPS]
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>	[s390]
      Reviewed-by: Juergen Gross <jgross@suse.com>		[Xen]
      Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>	[m68k]
      Acked-by: Max Filippov <jcmvbkbc@gmail.com>		[xtensa]
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Rob Herring <robh+dt@kernel.org>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8a7f97b9
  12. 22 Feb 2019 (1 commit)
  13. 29 Dec 2018 (4 commits)
  14. 31 Oct 2018 (2 commits)
    • mm: remove include/linux/bootmem.h · 57c8a661
      Committed by Mike Rapoport
      Move remaining definitions and declarations from include/linux/bootmem.h
      into include/linux/memblock.h and remove the redundant header.
      
      The includes were replaced with the semantic patch below, followed by
      semi-automated removal of duplicated '#include <linux/memblock.h>'
      lines.
      
      @@
      @@
      - #include <linux/bootmem.h>
      + #include <linux/memblock.h>
      
      [sfr@canb.auug.org.au: dma-direct: fix up for the removal of linux/bootmem.h]
        Link: http://lkml.kernel.org/r/20181002185342.133d1680@canb.auug.org.au
      [sfr@canb.auug.org.au: powerpc: fix up for removal of linux/bootmem.h]
        Link: http://lkml.kernel.org/r/20181005161406.73ef8727@canb.auug.org.au
      [sfr@canb.auug.org.au: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal]
        Link: http://lkml.kernel.org/r/20181008190341.5e396491@canb.auug.org.au
      Link: http://lkml.kernel.org/r/1536927045-23536-30-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Serge Semin <fancer.lancer@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      57c8a661
    • memblock: remove _virt from APIs returning virtual address · eb31d559
      Committed by Mike Rapoport
      The conversion is done using
      
      sed -i 's@memblock_virt_alloc@memblock_alloc@g' \
      	$(git grep -l memblock_virt_alloc)
      
      Link: http://lkml.kernel.org/r/1536927045-23536-8-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Serge Semin <fancer.lancer@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      eb31d559
  15. 05 Oct 2018 (1 commit)
  16. 17 Apr 2018 (1 commit)
    • arm64: kasan: avoid pfn_to_nid() before page array is initialized · 800cb2e5
      Committed by Mark Rutland
      In arm64's kasan_init(), we use pfn_to_nid() to find the NUMA node a
      span of memory is in, hoping to allocate shadow from the same NUMA node.
      However, at this point, the page array has not been initialized, and
      thus this is bogus.
      
      Since commit:
      
        f165b378 ("mm: uninitialized struct page poisoning sanity")
      
      ... accessing fields of the page array results in a boot time Oops(),
      highlighting this problem:
      
      [    0.000000] Unable to handle kernel paging request at virtual address dfff200000000000
      [    0.000000] Mem abort info:
      [    0.000000]   ESR = 0x96000004
      [    0.000000]   Exception class = DABT (current EL), IL = 32 bits
      [    0.000000]   SET = 0, FnV = 0
      [    0.000000]   EA = 0, S1PTW = 0
      [    0.000000] Data abort info:
      [    0.000000]   ISV = 0, ISS = 0x00000004
      [    0.000000]   CM = 0, WnR = 0
      [    0.000000] [dfff200000000000] address between user and kernel address ranges
      [    0.000000] Internal error: Oops: 96000004 [#1] PREEMPT SMP
      [    0.000000] Modules linked in:
      [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.16.0-07317-gf165b378 #42
      [    0.000000] Hardware name: ARM Juno development board (r1) (DT)
      [    0.000000] pstate: 80000085 (Nzcv daIf -PAN -UAO)
      [    0.000000] pc : __asan_load8+0x8c/0xa8
      [    0.000000] lr : __dump_page+0x3c/0x3b8
      [    0.000000] sp : ffff2000099b7ca0
      [    0.000000] x29: ffff2000099b7ca0 x28: ffff20000a1762c0
      [    0.000000] x27: ffff7e0000000000 x26: ffff2000099dd000
      [    0.000000] x25: ffff200009a3f960 x24: ffff200008f9c38c
      [    0.000000] x23: ffff20000a9d3000 x22: ffff200009735430
      [    0.000000] x21: fffffffffffffffe x20: ffff7e0001e50420
      [    0.000000] x19: ffff7e0001e50400 x18: 0000000000001840
      [    0.000000] x17: ffffffffffff8270 x16: 0000000000001840
      [    0.000000] x15: 0000000000001920 x14: 0000000000000004
      [    0.000000] x13: 0000000000000000 x12: 0000000000000800
      [    0.000000] x11: 1ffff0012d0f89ff x10: ffff10012d0f89ff
      [    0.000000] x9 : 0000000000000000 x8 : ffff8009687c5000
      [    0.000000] x7 : 0000000000000000 x6 : ffff10000f282000
      [    0.000000] x5 : 0000000000000040 x4 : fffffffffffffffe
      [    0.000000] x3 : 0000000000000000 x2 : dfff200000000000
      [    0.000000] x1 : 0000000000000005 x0 : 0000000000000000
      [    0.000000] Process swapper (pid: 0, stack limit = 0x        (ptrval))
      [    0.000000] Call trace:
      [    0.000000]  __asan_load8+0x8c/0xa8
      [    0.000000]  __dump_page+0x3c/0x3b8
      [    0.000000]  dump_page+0xc/0x18
      [    0.000000]  kasan_init+0x2e8/0x5a8
      [    0.000000]  setup_arch+0x294/0x71c
      [    0.000000]  start_kernel+0xdc/0x500
      [    0.000000] Code: aa0403e0 9400063c 17ffffee d343fc00 (38e26800)
      [    0.000000] ---[ end trace 67064f0e9c0cc338 ]---
      [    0.000000] Kernel panic - not syncing: Attempted to kill the idle task!
      [    0.000000] ---[ end Kernel panic - not syncing: Attempted to kill the idle task! ]---
      
      Let's fix this by using early_pfn_to_nid(), as other architectures do in
      their kasan init code. Note that early_pfn_to_nid acquires the nid from
      the memblock array, which we iterate over in kasan_init(), so this
      should be fine.
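
      The change itself is small (a sketch; 'start' is a stand-in for the
      base of the memory span whose shadow is being populated):

      	/* before: page array not yet initialized, so this is bogus */
      	nid = pfn_to_nid(virt_to_pfn(start));

      	/* after: the nid comes from the memblock array instead */
      	nid = early_pfn_to_nid(virt_to_pfn(start));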
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Fixes: 39d114dd ("arm64: add KASAN support")
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      800cb2e5
  17. 17 Feb 2018 (1 commit)
    • arm64: mm: Use READ_ONCE/WRITE_ONCE when accessing page tables · 20a004e7
      Committed by Will Deacon
      In many cases, page tables can be accessed concurrently by either another
      CPU (due to things like fast gup) or by the hardware page table walker
      itself, which may set access/dirty bits. In such cases, it is important
      to use READ_ONCE/WRITE_ONCE when accessing page table entries so that
      entries cannot be torn, merged or subject to apparent loss of coherence
      due to compiler transformations.
      
      Whilst there are some scenarios where this cannot happen (e.g. pinned
      kernel mappings for the linear region), the overhead of using
      READ_ONCE/WRITE_ONCE everywhere is minimal and makes the code an awful
      lot easier to reason about. This patch consistently uses these macros in
      the arch code, as well as explicitly namespacing pointers to page table
      entries from the entries themselves by adopting a 'p' suffix for the
      former (as is sometimes used elsewhere in the kernel source).
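
      The resulting idiom looks like this (a minimal sketch; pudp, addr and
      new_pmd are stand-ins):

      	pmd_t *pmdp = pmd_offset(pudp, addr);
      	pmd_t pmd = READ_ONCE(*pmdp);	/* single, untearable load */

      	if (pmd_none(pmd))
      		WRITE_ONCE(*pmdp, new_pmd);	/* single, untearable store */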
      Tested-by: Yury Norov <ynorov@caviumnetworks.com>
      Tested-by: Richard Ruigrok <rruigrok@codeaurora.org>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      20a004e7
  18. 07 Feb 2018 (1 commit)
  19. 16 Nov 2017 (1 commit)
    • arm64/mm/kasan: don't use vmemmap_populate() to initialize shadow · e17d8025
      Committed by Will Deacon
      The kasan shadow is currently mapped using vmemmap_populate() since that
      provides a semi-convenient way to map pages into init_top_pgt.  However,
      since that no longer zeroes the mapped pages, it is not suitable for
      kasan, which requires zeroed shadow memory.
      
      Add kasan_populate_shadow() interface and use it instead of
      vmemmap_populate().  Besides, this allows us to take advantage of
      gigantic pages and use them to populate the shadow, which should save us
      some memory wasted on page tables and reduce TLB pressure.
      
      Link: http://lkml.kernel.org/r/20171103185147.2688-3-pasha.tatashin@oracle.com
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Steven Sistare <steven.sistare@oracle.com>
      Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
      Cc: Bob Picco <bob.picco@oracle.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e17d8025
  20. 11 Jul 2017 (1 commit)
  21. 11 Mar 2017 (1 commit)
    • arm64: kasan: avoid bad virt_to_pfn() · b0de0ccc
      Committed by Mark Rutland
      Booting a v4.11-rc1 kernel with DEBUG_VIRTUAL and KASAN enabled produces
      the following splat (trimmed for brevity):
      
      [    0.000000] virt_to_phys used for non-linear address: ffff200008080000 (0xffff200008080000)
      [    0.000000] WARNING: CPU: 0 PID: 0 at arch/arm64/mm/physaddr.c:14 __virt_to_phys+0x48/0x70
      [    0.000000] PC is at __virt_to_phys+0x48/0x70
      [    0.000000] LR is at __virt_to_phys+0x48/0x70
      [    0.000000] Call trace:
      [    0.000000] [<ffff2000080b1ac0>] __virt_to_phys+0x48/0x70
      [    0.000000] [<ffff20000a03b86c>] kasan_init+0x1c0/0x498
      [    0.000000] [<ffff20000a034018>] setup_arch+0x2fc/0x948
      [    0.000000] [<ffff20000a030c68>] start_kernel+0xb8/0x570
      [    0.000000] [<ffff20000a0301e8>] __primary_switched+0x6c/0x74
      
      This is because we use virt_to_pfn() on a kernel image address when
      trying to figure out its nid, so that we can allocate its shadow from
      the same node.
      
      As with other recent changes, this patch uses lm_alias() to solve this.
      
      We could instead use NUMA_NO_NODE, as x86 does for all shadow
      allocations, though we'll likely want the "real" memory shadow to be
      backed from its corresponding nid anyway, so we may as well be
      consistent and find the nid for the image shadow.
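
      The fix follows the same pattern as those changes (a sketch; lm_alias()
      converts a kernel image address to its linear map alias, which
      virt_to_pfn() handles correctly):

      	/* before: warns under DEBUG_VIRTUAL */
      	nid = pfn_to_nid(virt_to_pfn(_text));

      	/* after */
      	nid = pfn_to_nid(virt_to_pfn(lm_alias(_text)));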
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Acked-by: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      b0de0ccc
  22. 02 Mar 2017 (1 commit)
  23. 12 Jan 2017 (1 commit)
  24. 11 Mar 2016 (2 commits)
    • arm64: kasan: Fix zero shadow mapping overriding kernel image shadow · 2776e0e8
      Committed by Catalin Marinas
      With the 16KB and 64KB page size configurations, SWAPPER_BLOCK_SIZE is
      PAGE_SIZE and ARM64_SWAPPER_USES_SECTION_MAPS is 0. Since
      kimg_shadow_end is not page aligned (_end shifted by
      KASAN_SHADOW_SCALE_SHIFT), the edges of the kernel image shadow
      previously mapped via vmemmap_populate() may be overridden by
      subsequent calls to kasan_populate_zero_shadow(), leading to kernel
      panics like the one below:
      
      ------------------------------------------------------------------------------
      Unable to handle kernel paging request at virtual address fffffc100135068c
      pgd = fffffc8009ac0000
      [fffffc100135068c] *pgd=00000009ffee0003, *pud=00000009ffee0003, *pmd=00000009ffee0003, *pte=00e0000081a00793
      Internal error: Oops: 9600004f [#1] PREEMPT SMP
      Modules linked in:
      CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.5.0-rc4+ #1984
      Hardware name: Juno (DT)
      task: fffffe09001a0000 ti: fffffe0900200000 task.ti: fffffe0900200000
      PC is at __memset+0x4c/0x200
      LR is at kasan_unpoison_shadow+0x34/0x50
      pc : [<fffffc800846f1cc>] lr : [<fffffc800821ff54>] pstate: 00000245
      sp : fffffe0900203db0
      x29: fffffe0900203db0 x28: 0000000000000000
      x27: 0000000000000000 x26: 0000000000000000
      x25: fffffc80099b69d0 x24: 0000000000000001
      x23: 0000000000000000 x22: 0000000000002000
      x21: dffffc8000000000 x20: 1fffff9001350a8c
      x19: 0000000000002000 x18: 0000000000000008
      x17: 0000000000000147 x16: ffffffffffffffff
      x15: 79746972100e041d x14: ffffff0000000000
      x13: ffff000000000000 x12: 0000000000000000
      x11: 0101010101010101 x10: 1fffffc11c000000
      x9 : 0000000000000000 x8 : fffffc100135068c
      x7 : 0000000000000000 x6 : 000000000000003f
      x5 : 0000000000000040 x4 : 0000000000000004
      x3 : fffffc100134f651 x2 : 0000000000000400
      x1 : 0000000000000000 x0 : fffffc100135068c
      
      Process swapper/0 (pid: 1, stack limit = 0xfffffe0900200020)
      Call trace:
      [<fffffc800846f1cc>] __memset+0x4c/0x200
      [<fffffc8008220044>] __asan_register_globals+0x5c/0xb0
      [<fffffc8008a09d34>] _GLOBAL__sub_I_65535_1_sunrpc_cache_lookup+0x1c/0x28
      [<fffffc8008f20d28>] kernel_init_freeable+0x104/0x274
      [<fffffc80089e1948>] kernel_init+0x10/0xf8
      [<fffffc8008093a00>] ret_from_fork+0x10/0x50
      ------------------------------------------------------------------------------
      
      This patch aligns kimg_shadow_start and kimg_shadow_end to
      SWAPPER_BLOCK_SIZE in all configurations.
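
      The alignment amounts to (a sketch of the fix in kasan_init()):

      	kimg_shadow_start = round_down(kimg_shadow_start, SWAPPER_BLOCK_SIZE);
      	kimg_shadow_end = round_up(kimg_shadow_end, SWAPPER_BLOCK_SIZE);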
      
      Fixes: f9040773 ("arm64: move kernel image to base of vmalloc area")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      2776e0e8
    • arm64: kasan: Use actual memory node when populating the kernel image shadow · 2f76969f
      Committed by Catalin Marinas
      With the 16KB or 64KB page configurations, the generic
      vmemmap_populate() implementation warns on potential offnode
      page_structs via vmemmap_verify() because the arm64 kasan_init() passes
      NUMA_NO_NODE instead of the actual node for the kernel image memory.
      
      Fixes: f9040773 ("arm64: move kernel image to base of vmalloc area")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: James Morse <james.morse@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      2f76969f
  25. 24 Feb 2016 (1 commit)
    • arm64: add support for kernel ASLR · f80fb3a3
      Committed by Ard Biesheuvel
      This adds support for KASLR, implemented based on entropy provided by
      the bootloader in the /chosen/kaslr-seed DT property. Depending on the
      size of the address space (VA_BITS) and the page size, the entropy in
      the virtual displacement is up to 13 bits (16k/2 levels) and up to 25
      bits (all 4 levels), with the sidenote that displacements that result in
      the kernel image straddling a 1GB/32MB/512MB alignment boundary (for
      4KB/16KB/64KB granule kernels, respectively) are not allowed, and will
      be rounded up to an acceptable value.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
      randomized independently from the core kernel. This makes it less likely
      that the location of core kernel data structures can be determined by an
      adversary, but causes all function calls from modules into the core kernel
      to be resolved via entries in the module PLTs.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
      randomized by choosing a page aligned 128 MB region inside the interval
      [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
      entropy (depending on page size), independently of the kernel randomization,
      but still guarantees that modules are within the range of relative branch
      and jump instructions (with the caveat that, since the module region is
      shared with other uses of the vmalloc area, modules may need to be loaded
      further away if the module region is exhausted).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f80fb3a3
  26. 19 Feb 2016 (1 commit)
    • arm64: move kernel image to base of vmalloc area · f9040773
      Committed by Ard Biesheuvel
      This moves the module area to right before the vmalloc area, and moves
      the kernel image to the base of the vmalloc area. This is an intermediate
      step towards implementing KASLR, which allows the kernel image to be
      located anywhere in the vmalloc area.
      
      Since other subsystems such as hibernate may still need to refer to the
      kernel text or data segments via their linear addresses, both are mapped
      in the linear region as well. The linear alias of the text region is
      mapped read-only/non-executable to prevent inadvertent modification or
      execution.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f9040773
  27. 16 Feb 2016 (2 commits)
    • arm64: mm: create new fine-grained mappings at boot · 068a17a5
      Committed by Mark Rutland
      At boot we may change the granularity of the tables mapping the kernel
      (by splitting or making sections). This may happen when we create the
      linear mapping (in __map_memblock), or at any point we try to apply
      fine-grained permissions to the kernel (e.g. fixup_executable,
      mark_rodata_ro, fixup_init).
      
      Changing the active page tables in this manner may result in multiple
      entries for the same address being allocated into TLBs, risking problems
      such as TLB conflict aborts or issues derived from the amalgamation of
      TLB entries. Generally, a break-before-make (BBM) approach is necessary
      to avoid conflicts, but we cannot do this for the kernel tables as it
      risks unmapping text or data being used to do so.
      
      Instead, we can create a new set of tables from scratch in the safety of
      the existing mappings, and subsequently migrate over to these using the
      new cpu_replace_ttbr1 helper, which avoids the two sets of tables being
      active simultaneously.
      
      To avoid issues when we later modify permissions of the page tables
      (e.g. in fixup_init), we must create the page tables at a granularity
      such that later modification does not result in splitting of tables.
      
      This patch applies this strategy, creating a new set of fine-grained
      page tables from scratch, and safely migrating to them. The existing
      fixmap and kasan shadow page tables are reused in the new fine-grained
      tables.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      068a17a5
    • arm64: kasan: avoid TLB conflicts · c1a88e91
      Committed by Mark Rutland
      The page table modification performed during the KASAN init risks the
      allocation of conflicting TLB entries, as it swaps a set of valid global
      entries for another without suitable TLB maintenance.
      
      The presence of conflicting TLB entries can result in the delivery of
      synchronous TLB conflict aborts, or may result in the use of erroneous
      data being returned in response to a TLB lookup. This can affect
      explicit data accesses from software as well as translations performed
      asynchronously (e.g. as part of page table walks or speculative I-cache
      fetches), and can therefore result in a wide variety of problems.
      
      To avoid this, use cpu_replace_ttbr1 to swap the page tables. This
      ensures that when the new tables are installed there are no stale
      entries from the old tables which may conflict. As all updates are made
      to the tables while they are not active, the updates themselves are
      safe.
      
      At the same time, add the missing barrier to ensure that the tmp_pg_dir
      entries updated via memcpy are visible to the page table walkers at the
      point the tmp_pg_dir is installed. All other page table updates made as
      part of KASAN initialisation have the requisite barriers due to the use
      of the standard page table accessors.
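
      The resulting sequence is roughly (a sketch; dsb(ishst) is the barrier
      mentioned above):

      	memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir));
      	dsb(ishst);
      	cpu_replace_ttbr1(tmp_pg_dir);

      	/* ... populate the kasan shadow in the real tables ... */

      	cpu_replace_ttbr1(swapper_pg_dir);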
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c1a88e91
  28. 25 Jan 2016 (1 commit)