1. 21 Jun 2021, 1 commit
  2. 16 Jun 2021, 4 commits
  3. 21 Apr 2021, 1 commit
  4. 26 Mar 2021, 1 commit
  5. 15 Sep 2020, 1 commit
  6. 26 Jul 2020, 1 commit
  7. 10 Jun 2020, 1 commit
    • mm: don't include asm/pgtable.h if linux/mm.h is already included · e31cf2f4
      By Mike Rapoport
      Patch series "mm: consolidate definitions of page table accessors", v2.
      
      The low-level page table accessors (pXY_index(), pXY_offset()) are
      duplicated across all architectures, and sometimes more than once.  For
      instance, we have 31 definitions of pgd_offset() for 25 supported
      architectures.
      
      Most of these definitions are actually identical and typically it boils
      down to, e.g.
      
      static inline unsigned long pmd_index(unsigned long address)
      {
              return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
      }
      
      static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
      {
              return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
      }
      
      These definitions can be shared among 90% of the arches provided
      XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined.
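
      As the paragraph above notes, the index calculation is pure shift-and-mask
      arithmetic on two per-architecture constants, which is why it is shareable.
      A standalone sketch follows; the constant values are assumptions for
      illustration (borrowed from a typical x86-64 configuration), not part of
      the patch itself:

```c
#include <assert.h>

/* Standalone model of the generic pmd_index() shown above.  The
 * constants are per-architecture values, assumed here for
 * illustration (they match a typical x86-64 configuration). */
#define PMD_SHIFT    21
#define PTRS_PER_PMD 512

static unsigned long pmd_index(unsigned long address)
{
        return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
}
```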
      
      For architectures that really need a custom version, there is always the
      possibility to override the generic version with the usual ifdef magic.
      
      These patches introduce include/linux/pgtable.h, which replaces
      include/asm-generic/pgtable.h, and add the definitions of the page table
      accessors to the new header.
      
      This patch (of 12):
      
      The linux/mm.h header includes <asm/pgtable.h> to allow inlining of
      functions involving page table manipulations, e.g.  pte_alloc() and
      pmd_alloc().  So there is no point in explicitly including <asm/pgtable.h>
      in files that already include <linux/mm.h>.
      
      The include statements in such cases are removed with a simple loop:
      
      	for f in $(git grep -l "include <linux/mm.h>") ; do
      		sed -i -e '/include <asm\/pgtable.h>/ d' $f
      	done
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Mike Rapoport <rppt@kernel.org>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-1-rppt@kernel.org
      Link: http://lkml.kernel.org/r/20200514170327.31389-2-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 05 Jun 2020, 1 commit
  9. 26 May 2020, 1 commit
  10. 18 May 2020, 10 commits
  11. 31 May 2019, 1 commit
  12. 21 Apr 2019, 1 commit
    • powerpc/lib: Refactor __patch_instruction() to use __put_user_asm() · ef296729
      By Russell Currey
      __patch_instruction() is called in early boot, and uses
      __put_user_size(), which includes the allow/prevent calls to enforce
      KUAP. These could either be called too early, or, in the Radix case,
      force the use of "early_" versions of functions just to safely handle
      this one case.
      
      __put_user_asm() does not do this, and is thus safe to use both in
      early boot and later on, since in this case it should only ever be
      touching kernel memory.
      
      __patch_instruction() was previously refactored to use
      __put_user_size() in order to be able to return -EFAULT, which would
      allow the kernel to patch instructions in userspace; that should
      never happen. This change has the functional effect of causing faults
      on userspace addresses if KUAP is turned on, which should never occur
      in practice.
      
      A future enhancement could be to double-check that the patch address is
      definitely allowed to be tampered with by the kernel.
      Signed-off-by: Russell Currey <ruscur@russell.cc>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  13. 20 Apr 2019, 1 commit
  14. 19 Dec 2018, 1 commit
  15. 14 Oct 2018, 1 commit
  16. 02 Oct 2018, 1 commit
  17. 18 Sep 2018, 1 commit
    • powerpc: Avoid code patching freed init sections · 51c3c62b
      By Michael Neuling
      This stops us from doing code patching in init sections after they've
      been freed.
      
      In this chain:
        kvm_guest_init() ->
          kvm_use_magic_page() ->
            fault_in_pages_readable() ->
              __get_user() ->
                __get_user_nocheck() ->
                  barrier_nospec();
      
      We have a code patching location at barrier_nospec() and
      kvm_guest_init() is an init function. This whole chain gets inlined,
      so when we free the init section (hence kvm_guest_init()), this code
      goes away and hence should no longer be patched.
      
      We have seen this as userspace memory corruption when using a memory
      checker while doing partition migration testing on PowerVM (this
      starts the code patching post-migration via
      /sys/kernel/mobility/migration). In theory, it could also happen when
      using /sys/kernel/debug/powerpc/barrier_nospec.
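
      The guard this patch adds can be modelled in ordinary C; the section
      bounds and freed flag below are hypothetical stand-ins for the kernel's
      __init_begin/__init_end linker symbols and init-memory state:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical init-section bounds; in the kernel these come from the
 * __init_begin/__init_end linker symbols. */
static uintptr_t init_begin = 0x1000;
static uintptr_t init_end   = 0x2000;
static bool init_freed;

/* Refuse to patch an address inside an already-freed init section. */
static bool patch_allowed(uintptr_t addr)
{
        bool in_init = addr >= init_begin && addr < init_end;

        return !(in_init && init_freed);
}
```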
      
      Cc: stable@vger.kernel.org # 4.13+
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  18. 07 Aug 2018, 1 commit
    • powerpc/asm: Add a patch_site macro & helpers for patching instructions · 06d0bbc6
      By Michael Ellerman
      Add a macro and some helper C functions for patching single asm
      instructions.
      
      The gas macro means we can do something like:
      
        1:	nop
        	patch_site 1b, patch__foo
      
      Which is less visually distracting than defining a GLOBAL symbol at 1,
      and also doesn't pollute the symbol table, which can confuse e.g. perf.
      
      These are obviously similar to our existing feature sections, but are
      not automatically patched based on CPU/MMU features, rather they are
      designed to be manually patched by C code at some arbitrary point.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  19. 21 Jan 2018, 2 commits
  20. 11 Dec 2017, 1 commit
  21. 22 Nov 2017, 1 commit
  22. 03 Jul 2017, 1 commit
    • powerpc/lib/code-patching: Use alternate map for patch_instruction() · 37bc3e5f
      By Balbir Singh
      This patch creates the patching window using text_poke_area, allocated
      via get_vm_area(). text_poke_area is per-CPU to avoid locking. The
      text_poke_area for each CPU is set up using a late_initcall; prior to
      the setup of these alternate mapping areas, we continue to use direct
      writes to change/modify kernel text. The ability to use alternate
      mappings to write to kernel text gives us the freedom to then turn
      text read-only and implement CONFIG_STRICT_KERNEL_RWX.
      
      This code is CPU-hotplug aware, ensuring that we have mappings for any
      new CPUs as they come online, and tearing down mappings for any CPUs
      that go offline.
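
      A userspace analogue of the same map/patch/unmap dance can be sketched
      with a memfd double mapping standing in for the kernel's get_vm_area()
      text_poke_area (all names below are illustrative, not kernel API): the
      primary mapping stays read-only, and writes go through a short-lived
      writable alias of the same pages.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Write one byte of an otherwise read-only memfd mapping through a
 * short-lived writable alias of the same pages, then tear the alias
 * down again. */
static int patch_byte(int fd, size_t off, unsigned char val)
{
        long ps = sysconf(_SC_PAGESIZE);
        unsigned char *rw = mmap(NULL, ps, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);

        if (rw == MAP_FAILED)
                return -1;
        rw[off] = val;          /* write through the temporary window */
        munmap(rw, ps);         /* close the window again */
        return 0;
}
```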
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  23. 23 Apr 2017, 1 commit
    • powerpc/kprobes: Convert __kprobes to NOKPROBE_SYMBOL() · 71f6e58e
      By Naveen N. Rao
      Along similar lines as commit 9326638c ("kprobes, x86: Use NOKPROBE_SYMBOL()
      instead of __kprobes annotation"), convert __kprobes annotation to either
      NOKPROBE_SYMBOL() or nokprobe_inline. The latter forces inlining, in which case
      the caller needs to be added to NOKPROBE_SYMBOL().
      
      Also:
       - blacklist arch_deref_entry_point(), and
       - convert a few regular inlines to nokprobe_inline in lib/sstep.c
      
      A key benefit is the ability to detect such symbols as being
      blacklisted. Before this patch:
      
        $ cat /sys/kernel/debug/kprobes/blacklist | grep read_mem
        $ perf probe read_mem
        Failed to write event: Invalid argument
          Error: Failed to add events.
        $ dmesg | tail -1
        [ 3736.112815] Could not insert probe at _text+10014968: -22
      
      After patch:
        $ cat /sys/kernel/debug/kprobes/blacklist | grep read_mem
        0xc000000000072b50-0xc000000000072d20	read_mem
        $ perf probe read_mem
        read_mem is blacklisted function, skip it.
        Added new events:
          (null):(null)        (on read_mem)
          probe:read_mem       (on read_mem)
      
        You can now use it in all perf tools, such as:
      
      	  perf record -e probe:read_mem -aR sleep 1
      
        $ grep " read_mem" /proc/kallsyms
        c000000000072b50 t read_mem
        c0000000005f3b40 t read_mem
        $ cat /sys/kernel/debug/kprobes/list
        c0000000005f3b48  k  read_mem+0x8    [DISABLED]
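
      The mechanism behind that blacklist can be modelled in plain C; this is
      a simplified sketch, not the real macro from include/linux/kprobes.h,
      and read_mem_model is a made-up stand-in for a blacklisted function:

```c
#include <assert.h>

/* Simplified model of the annotation: instead of tagging the function
 * itself, record its address in a dedicated section (the real macro in
 * include/linux/kprobes.h uses the _kprobe_blacklist section that the
 * debugfs listing shown above is generated from). */
#define NOKPROBE_SYMBOL(fname)                                          \
        static unsigned long                                            \
        __attribute__((used, section("_kprobe_blacklist")))             \
        _kbl_addr_##fname = (unsigned long)fname

static int read_mem_model(int x)
{
        return x + 1;
}
NOKPROBE_SYMBOL(read_mem_model);
```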
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      [mpe: Minor change log formatting, fix up some conflicts]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  24. 28 Feb 2017, 1 commit
    • kprobes: move kprobe declarations to asm-generic/kprobes.h · 7d134b2c
      By Luis R. Rodriguez
      Often all that is needed is these small helpers, instead of compiler.h
      or a full kprobes.h.  This is important for asm helpers; in fact, even
      some asm/kprobes.h files make use of these helpers.  Instead, just keep
      a generic asm file with helpers useful for asm code, with the least
      amount of clutter possible.
      
      Likewise, we now need to address what to do about this file both when
      architectures have CONFIG_HAVE_KPROBES and when they do not, and also
      for when architectures have CONFIG_HAVE_KPROBES but have disabled
      CONFIG_KPROBES.
      
      Right now most asm/kprobes.h do not have guards against CONFIG_KPROBES,
      this means most architecture code cannot include asm/kprobes.h safely.
      Correct this and add guards for architectures missing them.
      Additionally, provide architectures that do not have kprobes support with
      the default asm-generic solution.  This lets us force asm/kprobes.h on
      the header include/linux/kprobes.h always, but most importantly we can
      now safely include just asm/kprobes.h on architecture code without
      bringing the full kitchen sink of header files.
      
      Two architectures already provided a guard against CONFIG_KPROBES in
      their kprobes.h: sh, arch.  The rest of the architectures needed
      guards added.  We avoid including any not-needed headers in
      asm/kprobes.h unless kprobes have been enabled.
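
      A minimal compile-time model of the guard pattern described above (the
      CONFIG_KPROBES name is real; the stub bodies and function name are
      simplified assumptions, not the real header contents):

```c
#include <assert.h>

/* When CONFIG_KPROBES is not defined -- as for an architecture without
 * kprobes support -- the annotations expand to harmless no-ops, so any
 * code can include the header safely. */
#ifdef CONFIG_KPROBES
# define __kprobes      __attribute__((section(".kprobes.text")))
#else
# define __kprobes
# define NOKPROBE_SYMBOL(fname) extern char _nokprobe_unused_##fname
#endif

static int __kprobes probed_fn(int x)
{
        return x * 2;
}
NOKPROBE_SYMBOL(probed_fn);
```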
      
      In a subsequent atomic change we can try now to remove compiler.h from
      include/linux/kprobes.h.
      
      During this sweep I've also identified a few architectures defining a
      common macro needed for both kprobes and ftrace: the definition of the
      breakpoint instruction.  Some refer to this as
      BREAKPOINT_INSTRUCTION.  This must be kept outside of the #ifdef
      CONFIG_KPROBES guard.
      
      [mcgrof@kernel.org: fix arm64 build]
        Link: http://lkml.kernel.org/r/CAB=NE6X1WMByuARS4mZ1g9+W=LuVBnMDnh_5zyN0CLADaVh=Jw@mail.gmail.com
      [sfr@canb.auug.org.au: fixup for kprobes declarations moving]
        Link: http://lkml.kernel.org/r/20170214165933.13ebd4f4@canb.auug.org.au
      Link: http://lkml.kernel.org/r/20170203233139.32682-1-mcgrof@kernel.org
      Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  25. 10 Feb 2017, 2 commits
    • powerpc/kprobes: Implement Optprobes · 51c9c084
      By Anju T
      The current kprobes infrastructure uses an unconditional trap instruction
      to probe a running kernel. Optprobes allow kprobes to replace the trap
      with a branch instruction to a detour buffer. The detour buffer contains
      instructions to create an in-memory pt_regs. The detour buffer also has a
      call to optimized_callback(), which in turn calls the pre_handler(). After
      the execution of the pre-handler, a call is made for instruction
      emulation. The NIP is determined in advance through dummy instruction
      emulation, and a branch instruction to that NIP is created at the end of
      the trampoline.
      
      To address the range limitation of branch instructions in the POWER
      architecture, the detour buffer slot is allocated from a reserved
      area. For the time being, 64KB is reserved in memory for this purpose.
      
      Instructions which can be emulated using analyse_instr() are the
      candidates for optimization. Before optimizing, ensure that the address
      range between the allocated detour buffer and the instruction being
      probed is within +/- 32MB.
      Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Add helper to check if offset is within relative branch range · ebfa50df
      By Anju T
      To permit the use of relative branch instructions on powerpc, the
      target address has to be relatively nearby, since the address is
      specified in an immediate field (a 24-bit field) in the instruction
      opcode itself. Here, nearby refers to 32MB on either side of the
      current instruction.
      
      This patch verifies whether the target address is within the +/- 32MB
      range.
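
      Such a check can be sketched in plain C. A powerpc relative branch
      stores a 24-bit word offset in the instruction, i.e. a signed 26-bit
      byte offset, so the target must be word-aligned and within 32MB either
      way (the helper name and written-out bounds below are for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A 24-bit word offset gives a signed 26-bit byte offset: the target
 * must lie within 32MB either side and be word-aligned. */
static bool offset_in_branch_range(int64_t offset)
{
        return offset >= -0x2000000 && offset <= 0x1fffffc &&
               !(offset & 0x3);
}
```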
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  26. 25 Dec 2016, 1 commit