1. 12 April 2016, 1 commit
  2. 11 April 2016, 4 commits
  3. 29 March 2016, 1 commit
    • powerpc/process: Fix altivec SPR not being saved · 01d7c2a2
      Committed by Oliver O'Halloran
      save_sprs() in process.c contains the following test:
      
      	if (cpu_has_feature(cpu_has_feature(CPU_FTR_ALTIVEC)))
      		t->vrsave = mfspr(SPRN_VRSAVE);
      
      The CPU feature with the mask 0x1 is CPU_FTR_COHERENT_ICACHE, so the test
      is equivalent to:
      
      	if (cpu_has_feature(CPU_FTR_ALTIVEC) &&
      		cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
      
      On CPUs without support for both (i.e. the G5) this results in vrsave not
      being saved between context switches. The vector register save/restore
      code doesn't use VRSAVE to determine which registers to save/restore,
      but the value of VRSAVE is used to determine if altivec is being used
      in several code paths.
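
      The fix is presumably just to drop the nested call so that the Altivec
      feature bit itself is tested:

        if (cpu_has_feature(CPU_FTR_ALTIVEC))
                t->vrsave = mfspr(SPRN_VRSAVE);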
      
      Fixes: 152d523e ("powerpc: Create context switch helpers save_sprs() and restore_sprs()")
      Cc: stable@vger.kernel.org
      Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  4. 26 March 2016, 1 commit
  5. 18 March 2016, 2 commits
  6. 16 March 2016, 2 commits
    • powerpc: Fix unrecoverable SLB miss during restore_math() · 6e669f08
      Committed by Cyril Bur
      Commit 70fe3d98 "powerpc: Restore FPU/VEC/VSX if previously used" introduces a
      call to restore_math() late in the syscall return path, after MSR_RI has been
      cleared. The MSR_RI flag is used to indicate whether the kernel can take
      another exception or not. A cleared MSR_RI flag indicates that the kernel
      cannot.
      
      Unfortunately when a machine is under SLB pressure an SLB miss can occur
      in restore_math() which (with MSR_RI cleared) leads to an unrecoverable
      exception.
      
        Unrecoverable exception 4100 at c0000000000088d8
        cpu 0x0: Vector: 4100  at [c0000003fa473b20]
            pc: c0000000000088d8: .load_vr_state+0x70/0x110
            lr: c00000000000f710: .restore_math+0x130/0x188
            sp: c0000003fa473da0
           msr: 9000000002003030
          current = 0xc0000007f876f180
          paca    = 0xc00000000fff0000	 softe: 0	 irq_happened: 0x01
            pid   = 1944, comm = K08umountfs
        [link register   ] c00000000000f710 .restore_math+0x130/0x188
        [c0000003fa473da0] c0000003fa473e30 (unreliable)
        [c0000003fa473e30] c000000000007b6c system_call+0x84/0xfc
      
      The clearing of MSR_RI is actually an optimisation to avoid multiple MSR
      writes; what must actually be disabled are interrupts. See the comment in
      entry_64.S:
      
        /*
         * For performance reasons we clear RI the same time that we
         * clear EE. We only need to clear RI just before we restore r13
         * below, but batching it with EE saves us one expensive mtmsrd call.
         * We have to be careful to restore RI if we branch anywhere from
         * here (eg syscall_exit_work).
         */
      
      At the point where restore_math() is called, r13 has not yet been restored,
      so the quick fix of turning MSR_RI back on for the call to restore_math()
      is safe and eliminates the unrecoverable exception.
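
      A minimal C-flavoured sketch of that quick fix (the real change is a few
      instructions in the assembly syscall exit path in entry_64.S; set_msr_ri()
      and restore_math_with_ri() are hypothetical names used only for
      illustration):

        /* hypothetical helper: mtmsrd with L=1 updates only the EE and RI bits */
        static inline void set_msr_ri(unsigned long val)
        {
                asm volatile("mtmsrd %0,1" : : "r" (val) : "memory");
        }

        /* hypothetical wrapper; MSR_RI comes from asm/reg.h */
        static void restore_math_with_ri(struct pt_regs *regs)
        {
                set_msr_ri(MSR_RI);     /* exceptions are recoverable again */
                restore_math(regs);
                set_msr_ri(0);          /* clear RI again before r13 is restored */
        }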
      
      We'd like to do a better fix in future.
      
      Fixes: 70fe3d98 ("powerpc: Restore FPU/VEC/VSX if previously used")
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/book3e-64: Use hardcoded mttmr opcode · 7a25d912
      Committed by Scott Wood
      This preserves the ability to build using older binutils (reportedly <=
      2.22).
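
      The general technique, sketched below, is to emit the raw instruction word
      with .long so that assemblers which predate the e6500 mttmr mnemonic can
      still build the file (powerpc already keeps such hardcoded opcodes in
      ppc-opcode.h; the names and field helpers here are simplified for
      illustration):

        /* XFX-form: primary opcode 31, extended opcode 494 (mttmr) */
        #define PPC_INST_MTTMR  0x7c0003dc
        #define ___PPC_RS(r)    (((r) & 0x1f) << 21)    /* source GPR field */
        #define TMRN(x)         ((((x) & 0x1f) << 16) | (((x) & 0x3e0) << 6))
        #define MTTMR(tmr, r)   stringify_in_c(.long PPC_INST_MTTMR | \
                                        TMRN(tmr) | ___PPC_RS(r))

      Assembly code then invokes the MTTMR() macro rather than writing the
      mnemonic directly.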
      
      Fixes: 6becef7e ("powerpc/mpc85xx: Add CPU hotplug support for E6500")
      Signed-off-by: Scott Wood <oss@buserror.net>
      Cc: chenhui.zhao@freescale.com
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  7. 12 March 2016, 9 commits
  8. 10 March 2016, 1 commit
  9. 09 March 2016, 12 commits
  10. 07 March 2016, 7 commits
    • powerpc/ftrace: Add support for -mprofile-kernel ftrace ABI · 15308664
      Committed by Torsten Duwe
      The gcc switch -mprofile-kernel defines a new ABI for calling _mcount()
      very early in the function with minimal overhead.
      
      Although mprofile-kernel has been available since GCC 3.4, there were
      bugs which were only fixed recently. Currently it is known to work in
      GCC 4.9, 5 and 6.
      
      Additionally there are two possible code sequences generated by the flag:
      the first uses mflr/std/bl and the second is optimised to omit the std.
      Currently only GCC 6 has the optimised sequence. This patch supports both
      sequences.
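
      For illustration, recognising those two entry sequences by instruction word
      could look roughly like the sketch below (a hypothetical helper, not the
      kernel's actual __ftrace_make_nop() logic):

        /* 0x7c0802a6 is "mflr r0" and 0xf8010010 is "std r0,16(r1)" */
        static bool is_mprofile_mcount_sequence(unsigned int prev2,
                                                unsigned int prev1)
        {
                /*
                 * GCC 6:     mflr r0 ; bl _mcount
                 * older GCC: mflr r0 ; std r0,16(r1) ; bl _mcount
                 * so the word just before the bl is either the mflr or the
                 * std, with the mflr one slot further back.
                 */
                return prev1 == 0x7c0802a6 ||
                       (prev1 == 0xf8010010 && prev2 == 0x7c0802a6);
        }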
      
      Initial work started by Vojtech Pavlik, used with permission.
      
      Key changes:
       - rework _mcount() to work for both the old and new ABIs.
       - implement new versions of ftrace_caller() and ftrace_graph_caller()
         which deal with the new ABI.
       - updates to __ftrace_make_nop() to recognise the new mcount calling
         sequence.
       - updates to __ftrace_make_call() to recognise the nop'ed sequence.
       - implement ftrace_modify_call().
       - updates to the module loader to suppress the toc save in the module
         stub when calling mcount with the new ABI.
      Reviewed-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/ftrace: Use $(CC_FLAGS_FTRACE) when disabling ftrace · 9a7841ae
      Committed by Torsten Duwe
      Rather than open-coding -pg wherever we want to disable ftrace, use the
      existing $(CC_FLAGS_FTRACE) variable.
      
      This has the advantage that it will work in future when we use a
      different set of flags to enable ftrace.
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/ftrace: Use generic ftrace_modify_all_code() · c96f8385
      Committed by Torsten Duwe
      Convert powerpc's arch_ftrace_update_code() from its own version to use
      the generic default functionality (without stop_machine -- our
      instructions are properly aligned and the replacements atomic).
      
      With this we gain error checking and the much-needed function_trace_op
      handling.
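
      The conversion presumably boils down to something like this sketch:

        void arch_ftrace_update_code(int command)
        {
                /*
                 * The generic helper handles the error checking and
                 * function_trace_op handling for us; no stop_machine() is
                 * needed because our instruction patching is atomic.
                 */
                ftrace_modify_all_code(command);
        }
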
      Reviewed-by: Balbir Singh <bsingharora@gmail.com>
      Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/module: Create a special stub for ftrace_caller() · 336a7b5d
      Committed by Michael Ellerman
      In order to support the new -mprofile-kernel ABI, we need to be able to
      call from the module back to ftrace_caller() (in the kernel) without
      using the module's r2. That is because the function in this module which
      is calling ftrace_caller() may not have set up r2, if it doesn't
      otherwise need it (i.e. it accesses no globals).
      
      To make that work we add a new stub which is used for calling
      ftrace_caller(), which uses the kernel toc instead of the module toc.
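
      A sketch of the sort of stub this implies (the array name, exact layout and
      offsets are illustrative rather than copied from module_64.c): load the
      kernel TOC from the paca, then branch to ftrace_caller() via CTR:

        /* illustrative stub template; offsets are filled in at stub creation */
        static u32 ftrace_stub_insns[] = {
                0xe98d0000,     /* ld    r12,PACATOC(r13) -- kernel TOC       */
                0x3d8c0000,     /* addis r12,r12,<hi>     -- ftrace_caller hi */
                0x398c0000,     /* addi  r12,r12,<lo>     -- ftrace_caller lo */
                0x7d8903a6,     /* mtctr r12                                  */
                0x4e800420,     /* bctr                                       */
        };
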
      Reviewed-by: Balbir Singh <bsingharora@gmail.com>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/module: Mark module stubs with a magic value · f17c4e01
      Committed by Michael Ellerman
      When a module is loaded, calls out to the kernel go via a stub which is
      generated at runtime. One of these stubs is used to call _mcount(),
      which is the default target of tracing calls generated by the compiler
      with -pg.
      
      If dynamic ftrace is enabled (which it typically is), another stub is
      used to call ftrace_caller(), which is the target of tracing calls when
      ftrace is actually active.
      
      ftrace then wants to disable the calls to _mcount() at module startup,
      and enable/disable the calls to ftrace_caller() when enabling/disabling
      tracing - all of these it does by patching the code.
      
      As part of that code patching, the ftrace code wants to confirm that the
      branch it is about to modify is in fact a call to a module stub which
      calls _mcount() or ftrace_caller().
      
      Currently it does that by inspecting the instructions and confirming
      they are what it expects. Although that works, the code to do it is
      pretty intricate because it requires lots of knowledge about the exact
      format of the stub.
      
      We can make that process easier by marking the generated stubs with a
      magic value, and then looking for that magic value. Although this is not
      as rigorous as the current method, I believe it is sufficient in
      practice.
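
      A minimal sketch of the idea (names, sizes and the magic value are
      illustrative, not copied from module_64.c): the stub carries a magic word
      written when it is generated, so the check becomes a simple compare:

        #define STUB_MAGIC      0x73747562      /* "stub" in ASCII */

        struct ppc64_stub_entry {
                u32 jump[7];            /* trampoline instruction sequence */
                u32 magic;              /* written when the stub is created */
                func_desc_t funcdata;
        };

        /* illustrative helper */
        static bool is_module_stub(const struct ppc64_stub_entry *entry)
        {
                return entry->magic == STUB_MAGIC;
        }
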
      Reviewed-by: Balbir Singh <bsingharora@gmail.com>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/module: Only try to generate the ftrace_caller() stub once · 136cd345
      Committed by Michael Ellerman
      Currently we generate the module stub for ftrace_caller() at the bottom
      of apply_relocate_add(). However apply_relocate_add() is potentially
      called more than once per module, which means we will try to generate
      the ftrace_caller() stub multiple times.
      
      Although the current code deals with that correctly, i.e. it only
      generates a stub the first time, it would be clearer to only try to
      generate the stub once.
      
      Note also on first reading it may appear that we generate a different
      stub for each section that requires relocation, but that is not the
      case. The code in stub_for_addr() that searches for an existing stub
      uses sechdrs[me->arch.stubs_section], i.e. the single stub section for
      this module.
      
      A cleaner approach is to only generate the ftrace_caller() stub once,
      from module_finalize(). Although the original code didn't check to see
      if the stub was actually generated correctly, it seems prudent to add a
      check, so do that. An additional benefit is that we can clean up the
      ifdefs a little.
      
      Finally we must propagate the const'ness of some of the pointers passed
      to module_finalize(), but that is also an improvement.
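
      Only the stub-related part is sketched below, with an illustrative helper
      name; the point is that the stub is generated once, at finalize time, and a
      failure to generate it fails the module load:

        int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
                            struct module *me)
        {
        #ifdef CONFIG_DYNAMIC_FTRACE
                /* create_ftrace_caller_stub() is an illustrative name */
                me->arch.tramp = create_ftrace_caller_stub(sechdrs, me);
                if (!me->arch.tramp)
                        return -ENOENT;
        #endif
                return 0;
        }
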
      Reviewed-by: Balbir Singh <bsingharora@gmail.com>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Create a helper for getting the kernel toc value · a5cab83c
      Committed by Michael Ellerman
      Move the logic to work out the kernel toc pointer into a header. This is
      a good cleanup, and also means we can use it elsewhere in future.
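
      The helper presumably looks something like this sketch (name assumed here
      to be kernel_toc_addr()), relying on the fact that kernel C code always
      runs with r2 pointing at the kernel TOC:

        static inline unsigned long kernel_toc_addr(void)
        {
                unsigned long toc_ptr;

                /* in kernel C code r2 always holds the kernel TOC pointer */
                asm volatile("mr %0, 2" : "=r" (toc_ptr));
                return toc_ptr;
        }
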
      Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Reviewed-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Tested-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>