1. 01 May, 2016 (3 commits)
  2. 27 Apr, 2016 (6 commits)
    • powerpc/perf: Replace raw event hex values with #defines · 5bcca743
      Authored by Madhavan Srinivasan
      Minor cleanup patch to replace the raw event hex values in
      power8-pmu.c with #defines.
      Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • ftrace: Match dot symbols when searching functions on ppc64 · 7132e2d6
      Authored by Thiago Jung Bauermann
      In the ppc64 big endian ABI, function symbols point to function
      descriptors. The symbols which point to the function entry points
      have a dot in front of the function name. Consequently, when the
      ftrace filter mechanism searches for the symbol corresponding to
      an entry point address, it gets the dot symbol.
      
      As a result, ftrace filter users have to be aware of this ABI detail on
      ppc64 and prepend a dot to the function name when setting the filter.
      
      The perf probe command insulates the user from this by ignoring the dot
      in front of the symbol name when matching function names to symbols,
      but the sysfs interface does not. This patch makes the ftrace filter
      mechanism do the same when searching symbols.
      
      Fixes the following failure in ftracetest's kprobe_ftrace.tc:
      
        .../kprobe_ftrace.tc: line 9: echo: write error: Invalid argument
      
      That failure is on this line of kprobe_ftrace.tc:
      
        echo _do_fork > set_ftrace_filter
      
      This is because there's no _do_fork entry in the functions list:
      
        # cat available_filter_functions | grep _do_fork
        ._do_fork
      
      This change introduces no regressions in the perf and ftracetest
      testsuite results.
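
      The fix amounts to tolerating a leading dot during the comparison. A
      minimal sketch of that rule (illustrative only, not the actual kernel
      diff; the helper name is made up):

        #include <linux/string.h>
        #include <linux/types.h>

        /* Skip a leading dot on the symbol so that a filter string like
         * "_do_fork" also matches the ppc64 dot symbol "._do_fork". */
        static bool filter_symbol_match(const char *sym, const char *name)
        {
                if (sym[0] == '.')
                        sym++;
                return strcmp(sym, name) == 0;
        }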
      
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: linuxppc-dev@lists.ozlabs.org
      Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: rework sparse for lib/xor_vmx.c · 8fe08885
      Authored by Daniel Axtens
      Sparse doesn't seem to be passing -maltivec around properly, leading
      to lots of errors:
      
      .../include/altivec.h:34:2: error: Use the "-maltivec" flag to enable PowerPC AltiVec support
      arch/powerpc/lib/xor_vmx.c:27:16: error: Expected ; at end of declaration
      arch/powerpc/lib/xor_vmx.c:27:16: error: got signed
      arch/powerpc/lib/xor_vmx.c:60:9: error: No right hand side of '*'-expression
      arch/powerpc/lib/xor_vmx.c:60:9: error: Expected ; at end of statement
      arch/powerpc/lib/xor_vmx.c:60:9: error: got v1_in
      ...
      arch/powerpc/lib/xor_vmx.c:87:9: error: too many errors
      
      Only include the altivec.h header for non-__CHECKER__ builds.
      For builds with __CHECKER__, make up some stubs instead, as
      suggested by Balbir. (The vector size of 16 is arbitrary.)
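
      A sketch of that shape (assuming it sits near the top of
      lib/xor_vmx.c; the macro trick shown here is one plausible way to
      stub the type, not necessarily the exact hunk):

        #ifdef __CHECKER__
        /* Stub for sparse: model "vector" as a generic GCC vector
         * attribute; the size of 16 bytes is arbitrary, as noted above. */
        #define vector __attribute__((vector_size(16)))
        #else
        #include <altivec.h>
        #endif

        typedef vector signed char unative_t;   /* typedef name assumed */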
      Suggested-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Tested-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Add support for userspace P9 copy paste · 8a649045
      Authored by Chris Smart
      The copy paste facility introduced in POWER9 provides an optimised
      mechanism for a userspace application to copy a cacheline. This is
      provided by a pair of instructions, copy and paste, while a third,
      cp_abort (copy paste abort), cleans up the state in case of a
      failure.
      
      The copy instruction will read a 128 byte cacheline and store it in an
      internal buffer. The subsequent paste instruction will store this
      internal buffer to memory and set a CR field if the paste succeeds.
      
      Since the state of the copy paste buffer is internal (and not
      architecturally visible), in the unlikely event of a context switch, the
      state cannot be stored and the paste should therefore fail.
      
      The cp_abort instruction exists to fail and clean up any such
      interrupted copy paste sequence and is to be called by the kernel as
      part of the context switch. Doing so prevents data from a preceding copy
      in one process leaking into the paste of another.
      
      This code enables use of the cp_abort instruction if a supported
      processor is detected.
      
      NOTE: this is for userspace only, not for use within the kernel, and
      does not deal with KVM guests.
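
      For illustration, a userspace pair might look like the sketch below.
      The mnemonics and CR convention follow the description above; whether
      a given assembler accepts these spellings is an assumption, and the
      helper name is made up:

        /* Copy the 128B cacheline at src and paste it at dst; returns the
         * CR so the caller can test whether the paste succeeded. */
        static inline unsigned long copy_paste_line(void *dst, const void *src)
        {
                unsigned long cr;

                asm volatile(
                        "copy   0,%1\n\t"       /* fill the copy buffer */
                        "paste. 0,%2\n\t"       /* drain it; sets CR0 */
                        "mfcr   %0"
                        : "=r" (cr)
                        : "r" (src), "r" (dst)
                        : "memory", "cr0");
                return cr;
        }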
      
      Patch created with much assistance from Michael Neuling
      <mikey@neuling.org>
      Signed-off-by: Chris Smart <chris@distroguy.com>
      Reviewed-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mpic: handle subsys_system_register() failure · 4ad5e883
      Authored by Andrew Donnellan
      mpic_init_sys() currently doesn't check whether
      subsys_system_register() succeeded or not. Check the return code of
      subsys_system_register() and clean up if there's an error.
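
      The shape of such a fix, as a hedged sketch (the bus_type contents
      and the follow-on step are assumptions, not the actual patch):

        #include <linux/device.h>

        static struct bus_type mpic_subsys = {
                .name     = "mpic",
                .dev_name = "mpic",
        };

        static int __init mpic_init_sys(void)
        {
                int rc;

                rc = subsys_system_register(&mpic_subsys, NULL);
                if (rc)
                        return rc;      /* propagate instead of ignoring */

                /* ... per-MPIC device registration continues here ... */
                return 0;
        }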
      Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/eeh: fix misleading indentation · 2d521784
      Authored by Andrew Donnellan
      Found by smatch.
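
      A generic illustration of the warning class (not the actual EEH
      code; cleanup() and the message are made up):

        #include <linux/printk.h>

        static void cleanup(void) { /* stub for illustration */ }

        static void example(int ret)
        {
                if (ret)
                        pr_err("recovery failed\n");
                        cleanup();  /* indented as if guarded, but always runs */
        }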
      Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
      Acked-by: Russell Currey <ruscur@russell.cc>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  3. 21 Apr, 2016 (4 commits)
    • powerpc/perf: Add support for sampling interrupt register state · ed4a4ef8
      Authored by Anju T
      The perf infrastructure uses a bit mask to find out which registers
      are valid to display. Define a register mask for the supported
      registers defined in uapi/asm/perf_regs.h. The bit positions also
      correspond to the register IDs that the perf infrastructure uses to
      fetch register values.
      CONFIG_HAVE_PERF_REGS enables sampling of the interrupted machine state.
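
      A hedged sketch of the idea, assuming PERF_REG_POWERPC_MAX comes
      from uapi/asm/perf_regs.h (the macro and helper names here are made
      up):

        #include <linux/types.h>
        #include <asm/perf_regs.h>

        /* Every register id below the max is valid, so one mask covers
         * them all; bit i corresponds to register id i. */
        #define PT_REGS_MASK    ((1ULL << PERF_REG_POWERPC_MAX) - 1)

        static int sample_regs_valid(u64 mask)
        {
                return mask && !(mask & ~PT_REGS_MASK);
        }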
      Signed-off-by: Anju T <anju@linux.vnet.ibm.com>
      [mpe: Add license, use CONFIG_PPC64, fix 32-bit build]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/perf: Assign an id to each powerpc register · 1bfadabf
      Authored by Anju T
      The enum definition assigns an 'id' to each register in arch/powerpc's
      "struct pt_regs". The order of the values in the enum definition is
      based on the order of members in pt_regs.
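
      An illustrative excerpt of what such an enum looks like (most ids
      elided; the exact list lives in uapi/asm/perf_regs.h):

        enum perf_event_powerpc_regs {
                PERF_REG_POWERPC_R0,
                PERF_REG_POWERPC_R1,
                /* ... r2 through r31, following pt_regs order ... */
                PERF_REG_POWERPC_NIP,
                PERF_REG_POWERPC_MSR,
                PERF_REG_POWERPC_LINK,
                PERF_REG_POWERPC_MAX,
        };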
      Signed-off-by: Anju T <anju@linux.vnet.ibm.com>
      [mpe: Rename LNK to LINK, use _UAPI_ASM for include guards]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/book3s64: Remove __end_handlers marker · 057b6d7e
      Authored by Hari Bathini
      The __end_handlers marker was intended to mark the end of the code
      that gets called from exception prologs. But it hasn't kept pace with
      code changes. Case in point: slb_miss_realmode is called from
      exception prolog code but isn't below the __end_handlers marker. So
      the marker is as good as a comment, and can be misleading when it is
      out of sync with the code, as it is now. Avoid the confusion by using
      a better comment and removing the __end_handlers marker altogether.
      Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/book3s64: Fix branching to OOL handlers in relocatable kernel · 8ed8ab40
      Authored by Hari Bathini
      Some of the interrupt vectors on 64-bit POWER server processors are only
      32 bytes long (8 instructions), which is not enough for the full
      first-level interrupt handler. For these we need to branch to an
      out-of-line (OOL) handler. But when we are running a relocatable kernel,
      interrupt vectors up to the __end_interrupts marker are copied down
      to real address 0x100. So branching to labels (i.e. OOL handlers)
      outside this section must be handled differently (see
      LOAD_HANDLER()) to account for relocation, which needs at least 4
      instructions.
      
      However, branching from an interrupt vector means that we corrupt
      the CFAR (come-from address register) on POWER7 and later processors, as
      mentioned in commit 1707dd16. So, EXCEPTION_PROLOG_0 (6 instructions)
      that contains the part up to the point where the CFAR is saved in the
      PACA should be part of the short interrupt vectors before we branch out
      to OOL handlers.
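
      In pseudo-assembly, the pattern described above looks roughly like
      this (the register choices and the handler label are assumptions):

        /* inside a short interrupt vector */
        EXCEPTION_PROLOG_0(PACA_EXGEN)  /* 6 insns, saves CFAR early */
        LOAD_HANDLER(r10, ool_handler)  /* absolute addr, relocation-safe */
        mtctr   r10
        bctr                            /* branch out of the copied region */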
      
      But as mentioned already, there are interrupt vectors on 64-bit POWER
      server processors that are only 32 bytes long (like vectors 0x4f00,
      0x4f20, etc.), which cannot accommodate the above two cases at the
      same time owing to space constraints. Currently, in these interrupt vectors,
      we simply branch out to OOL handlers, without using LOAD_HANDLER(),
      which leaves us vulnerable when running a relocatable kernel (e.g.
      the kdump case). While this has been the case for some time now and
      kdump is used widely, we were fortunate not to see any problems so far, for three
      reasons:
      
        1. In almost all cases, the production (relocatable) kernel is used
           for kdump as well, which means that the crashed kernel's OOL
           handler would be at the same place where we end up branching to
           from the short interrupt vector of the kdump kernel.
        2. Also, the OOL handler was unlikely to be the reason for the
           crash in almost all kdump scenarios, which meant we had a sane
           OOL handler from the crashed kernel that we branched to.
        3. On most 64-bit POWER server processors, the page size is large
           enough that marking interrupt vector code as executable (see
           commit 429d2e83) marks the OOL handler code from the crashed
           kernel, which sits right below the interrupt vector code from
           the kdump kernel, as executable as well.
      
      Let us fix this by moving the __end_interrupts marker down past OOL
      handlers to make sure that we also copy OOL handlers to real address
      0x100 when running a relocatable kernel.
      
      This fix has been tested successfully in the kdump scenario, on an
      LPAR with 4K page size, using different default/production and kdump
      kernels.
      
      Also tested by manually corrupting the OOL handlers in the first kernel
      and then kdump'ing, and then causing the OOL handlers to fire - mpe.
      
      Fixes: c1fb6816 ("powerpc: Add relocation on exception vector handlers")
      Cc: stable@vger.kernel.org
      Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  4. 14 Apr, 2016 (3 commits)
    • powerpc/livepatch: Add live patching support on ppc64le · 85baa095
      Authored by Michael Ellerman
      Add the kconfig logic & assembly support for handling live patched
      functions. This depends on DYNAMIC_FTRACE_WITH_REGS, which in turn
      depends on the new -mprofile-kernel ftrace ABI, which is only supported
      currently on ppc64le.
      
      Live patching is handled by a special ftrace handler. This means it runs
      from ftrace_caller(). The live patch handler modifies the NIP so as to
      redirect the return from ftrace_caller() to the new patched function.
      
      However there is one particularly tricky case we need to handle.
      
      If a function A calls another function B, and it is known at link time
      that they share the same TOC, then A will not save or restore its TOC,
      and will call the local entry point of B.
      
      When we live patch B, we replace it with a new function C, which may
      not have the same TOC as A. At live patch time it's too late to modify A
      to do the TOC save/restore, so the live patching code must interpose
      itself between A and C, and do the TOC save/restore that A omitted.
      
      An additional complication is that the livepatch code cannot create
      a stack frame in order to save the TOC. That is because if C takes > 8
      arguments, or is varargs, A will have written the arguments for C in
      A's stack frame.
      
      To solve this, we introduce a "livepatch stack" which grows upward from
      the base of the regular stack, and is used to store the TOC & LR when
      calling a live patched function.
      
      When the patched function returns, we retrieve the real LR & TOC from
      the livepatch stack, restore them, and pop the livepatch "stack frame".
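
      Conceptually, the bookkeeping looks like the sketch below. The real
      code is assembly in the ftrace_caller path; the struct layout and
      helper names here are assumptions:

        /* assumed layout; the real struct thread_info has more fields */
        struct thread_info {
                unsigned long *livepatch_sp;
        };

        /* Push the caller's TOC and LR onto the livepatch stack, which
         * grows upward from the base of the regular stack. */
        static void livepatch_push(struct thread_info *ti,
                                   unsigned long toc, unsigned long lr)
        {
                *ti->livepatch_sp++ = toc;
                *ti->livepatch_sp++ = lr;
        }

        /* On return from the patched function, pop in reverse order. */
        static void livepatch_pop(struct thread_info *ti,
                                  unsigned long *toc, unsigned long *lr)
        {
                *lr  = *--ti->livepatch_sp;
                *toc = *--ti->livepatch_sp;
        }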
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Reviewed-by: Balbir Singh <bsingharora@gmail.com>
    • powerpc/livepatch: Add livepatch stack to struct thread_info · 5d31a96e
      Authored by Michael Ellerman
      In order to support live patching we need to maintain an alternate
      stack of TOC & LR values. We use the base of the stack for this, and
      store the "live patch stack pointer" in struct thread_info.
      
      Unlike the other fields of thread_info, we cannot statically
      initialise that value, so it must be done at run time.
      
      This patch just adds the code to support that, it is not enabled until
      the next patch which actually adds live patch support.
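
      A sketch of that run-time initialisation, using the livepatch_sp
      field this patch adds to thread_info (the exact placement arithmetic
      is an assumption): point the livepatch stack pointer just past
      thread_info at the stack base.

        static inline void klp_init_thread_info(struct thread_info *ti)
        {
                /* + 1 to leave room for the stack-end marker */
                ti->livepatch_sp = (unsigned long *)(ti + 1) + 1;
        }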
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Acked-by: Balbir Singh <bsingharora@gmail.com>
    • powerpc/livepatch: Add livepatch header · f63e6d89
      Authored by Michael Ellerman
      Add the powerpc specific livepatch definitions. In particular we provide
      a non-default implementation of klp_get_ftrace_location().
      
      This is required because the location of the mcount call is not constant
      when using -mprofile-kernel (which we always do for live patching).
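
      A sketch of that non-default hook, assuming ftrace_location_range()
      and a 16-byte search window (the window size is an assumption):

        static inline unsigned long klp_get_ftrace_location(unsigned long faddr)
        {
                /*
                 * With -mprofile-kernel the mcount site sits a few
                 * instructions past the entry point rather than at
                 * faddr itself, so probe a small range.
                 */
                return ftrace_location_range(faddr, faddr + 16);
        }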
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  5. 12 Apr, 2016 (2 commits)
  6. 11 Apr, 2016 (13 commits)
  7. 05 Apr, 2016 (1 commit)
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Authored by Kirill A. Shutemov
      PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long*
      time ago with the promise that one day it would be possible to
      implement the page cache with bigger chunks than PAGE_SIZE.
      
      That promise never materialized, and it is unlikely it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it's a constant source of confusion whether
      PAGE_CACHE_* or PAGE_* constants should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too
      much breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straightforward (an illustrative hunk follows
      the list):
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
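
       An illustrative before/after hunk (made up, not from any specific
       file):

       -	page_cache_get(page);
       -	offset = pos & (PAGE_CACHE_SIZE - 1);
       +	get_page(page);
       +	offset = pos & (PAGE_SIZE - 1);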
      
      This patch contains automated changes generated with coccinelle
      using the script below.  For some reason, coccinelle doesn't patch
      header files; I've called spatch for them manually.
      
      The only adjustment after coccinelle is a revert of the changes to
      the PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.
      I'll fix them manually in a separate patch.  Comments and
      documentation will also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 29 Mar, 2016 (3 commits)
  9. 26 Mar, 2016 (1 commit)
  10. 23 Mar, 2016 (2 commits)
  11. 22 Mar, 2016 (2 commits)