1. 24 Apr 2017, 1 commit
  2. 10 Feb 2017, 1 commit
    • powerpc/kprobes: Implement Optprobes · 51c9c084
      Authored by Anju T
      The current kprobe infrastructure uses an unconditional trap instruction
      to probe a running kernel. Optprobes allow kprobes to replace the trap
      with a branch instruction to a detour buffer. The detour buffer contains
      instructions to create an in-memory pt_regs, and also a call to
      optimized_callback(), which in turn calls the pre_handler(). After the
      pre-handler has executed, a call is made for instruction emulation. The
      NIP is determined in advance through dummy instruction emulation, and a
      branch instruction to that NIP is placed at the end of the trampoline.
      
      To address the limited reach of the branch instruction on the POWER
      architecture, the detour buffer slot is allocated from a reserved area.
      For the time being, 64KB of memory is reserved for this purpose.
      
      Instructions which can be emulated using analyse_instr() are the
      candidates for optimization. Before optimizing, ensure that the distance
      between the allocated detour buffer and the instruction being probed is
      within +/- 32MB (a sketch of this check follows the entry).
      Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      51c9c084
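
      A minimal illustrative sketch of the +/- 32MB reach check described
      above (hand-written C, not the kernel's implementation; the helper name
      and constants are assumptions based on the branch instruction's signed
      26-bit displacement):

          #include <stdbool.h>
          #include <stdint.h>

          /*
           * A PowerPC relative branch encodes a signed 26-bit displacement,
           * giving a reach of +/- 32MB.  Before the probed instruction is
           * patched with a branch into the detour buffer, the distance
           * between the two addresses must fit in that range.
           */
          static bool detour_branch_in_range(uint64_t detour_buf, uint64_t probe_addr)
          {
              int64_t offset = (int64_t)detour_buf - (int64_t)probe_addr;

              /* +/- 32MB reach of an unconditional relative branch */
              return offset >= -0x2000000 && offset < 0x2000000;
          }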
  3. 03 Feb 2017, 1 commit
  4. 24 Jan 2017, 1 commit
    • powerpc: Revert the initial stack protector support · f2574030
      Authored by Michael Ellerman
      Unfortunately the stack protector support we merged recently only works
      on some toolchains. If the toolchain is built without glibc support
      everything works fine, but if glibc is built then it leads to a panic
      at boot.
      
      The solution is not rc5 material, so revert the support for now. This
      reverts commits:
      
      6533b7c1 ("powerpc: Initial stack protector (-fstack-protector) support")
      902e06eb ("powerpc/32: Change the stack protector canary value per task")
      
      Fixes: 6533b7c1 ("powerpc: Initial stack protector (-fstack-protector) support")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f2574030
  5. 21 Dec 2016, 1 commit
    • powerpc: ima: get the kexec buffer passed by the previous kernel · 467d2782
      Authored by Thiago Jung Bauermann
      Patch series "ima: carry the measurement list across kexec", v8.
      
      The TPM PCRs are only reset on a hard reboot.  In order to validate a
      TPM's quote after a soft reboot (eg.  kexec -e), the IMA measurement
      list of the running kernel must be saved and then restored on the
      subsequent boot, possibly of a different architecture.
      
      The existing securityfs binary_runtime_measurements file conveniently
      provides a serialized format of the IMA measurement list.  This patch
      set serializes the measurement list in this format and restores it.
      
      Up to now, binary_runtime_measurements was defined in the architecture's
      native format, the assumption being that userspace could and would
      handle any architecture conversions.  With the ability to carry the
      measurement list across kexec, possibly from one architecture to a
      different one, the per-boot architecture information is lost and with it
      the ability to recalculate the template digest hash.  To resolve this
      problem without breaking the existing ABI, this patch set introduces
      the boot command line option "ima_canonical_fmt", which is arbitrarily
      defined as little endian.
      
      The need for this boot command line option will be limited to the
      existing version 1 format of the binary_runtime_measurements.
      Subsequent formats will be defined as canonical format (eg.  TPM 2.0
      support for larger digests).
      
      A simplified method of Thiago Bauermann's "kexec buffer handover" patch
      series for carrying the IMA measurement list across kexec is included in
      this patch set.  The simplified method requires all file measurements be
      taken prior to executing the kexec load, as subsequent measurements will
      not be carried across the kexec and restored.
      
      This patch (of 10):
      
      The IMA kexec buffer allows the currently running kernel to pass the
      measurement list via a kexec segment to the kernel that will be kexec'd.
      The second kernel can check whether the previous kernel sent the buffer
      and retrieve it.
      
      This is the architecture-specific part which enables IMA to receive the
      measurement list passed by the previous kernel.  It will be used in the
      next patch.
      
      The change in machine_kexec_64.c is to factor out the logic of removing
      an FDT memory reservation so that it can be used by remove_ima_buffer.
      
      Link: http://lkml.kernel.org/r/1480554346-29071-2-git-send-email-zohar@linux.vnet.ibm.com
      Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
      Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
      Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Andreas Steffen <andreas.steffen@strongswan.org>
      Cc: Dmitry Kasatkin <dmitry.kasatkin@gmail.com>
      Cc: Josh Sklar <sklar@linux.vnet.ibm.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Stewart Smith <stewart@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      467d2782
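
      A hedged sketch of the retrieval side described above (libfdt-style C;
      the /chosen property name and two-u64 layout are assumptions for
      illustration, not necessarily what the kernel uses):

          #include <libfdt.h>
          #include <stdint.h>

          /*
           * The previous kernel records the buffer's address and size in the
           * device tree; the new kernel reads the property and hands the
           * region to IMA so the measurement list can be restored.
           */
          static int get_ima_kexec_buffer(const void *fdt, uint64_t *addr,
                                          uint64_t *size)
          {
              const uint64_t *prop;
              int node, len;

              node = fdt_path_offset(fdt, "/chosen");
              if (node < 0)
                  return -1;

              /* assumed layout: two big-endian u64 cells, address then size */
              prop = fdt_getprop(fdt, node, "linux,ima-kexec-buffer", &len);
              if (!prop || len < 2 * (int)sizeof(uint64_t))
                  return -1;

              *addr = fdt64_to_cpu(prop[0]);
              *size = fdt64_to_cpu(prop[1]);
              return 0;
          }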
  6. 30 Nov 2016, 2 commits
  7. 23 Nov 2016, 1 commit
  8. 18 Nov 2016, 2 commits
  9. 11 Oct 2016, 1 commit
    • gcc-plugins: Add latent_entropy plugin · 38addce8
      Authored by Emese Revfy
      This adds a new gcc plugin named "latent_entropy". It is designed to
      extract as much uncertainty as possible from a running system at boot
      time, hoping to capitalize on any possible variation in CPU operation
      (due to runtime data differences, hardware differences, SMP ordering,
      thermal timing variation, cache behavior, etc.).
      
      At the very least, this plugin is a much more comprehensive example for
      how to manipulate kernel code using the gcc plugin internals.
      
      The need for very-early boot entropy tends to be very architecture or
      system design specific, so this plugin is more suited for those sorts
      of special cases. The existing kernel RNG already attempts to extract
      entropy from reliable runtime variation, but this plugin takes the idea to
      a logical extreme by permuting a global variable based on any variation
      in code execution (e.g. a different value (and permutation function)
      is used to permute the global based on loop count, case statement,
      if/then/else branching, etc).
      
      To do this, the plugin starts by inserting a local variable in every
      marked function. The plugin then adds logic so that the value of this
      variable is modified by randomly chosen operations (add, xor and rol) and
      random values (gcc generates separate static values for each location at
      compile time and also injects the stack pointer at runtime). The resulting
      value depends on the control flow path (e.g., loops and branches taken).
      
      Before the function returns, the plugin mixes this local variable into
      the latent_entropy global variable. The value of this global variable
      is added to the kernel entropy pool in do_one_initcall() and _do_fork(),
      though it does not credit any bytes of entropy to the pool; the contents
      of the global are just used to mix the pool.
      
      Additionally, the plugin can pre-initialize arrays with build-time
      random contents, so that two different kernel builds running on identical
      hardware will not have the same starting values.
      Signed-off-by: Emese Revfy <re.emese@gmail.com>
      [kees: expanded commit message and code comments]
      Signed-off-by: Kees Cook <keescook@chromium.org>
      38addce8
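
      A hand-written approximation, in plain C, of what the plugin's
      instrumentation conceptually turns a marked function into (the constants
      stand in for the per-location random values gcc generates at compile
      time; this is not actual plugin output):

          #include <stdint.h>

          uint64_t latent_entropy;  /* global later mixed into the entropy pool */

          static uint64_t rol64(uint64_t v, unsigned int n)
          {
              return (v << n) | (v >> (64 - n));
          }

          void instrumented_example(const int *data, int n)
          {
              /* injected local variable, seeded with a build-time random value */
              uint64_t local_entropy = 0x8f3af7c215d6b9e1ULL;

              for (int i = 0; i < n; i++) {
                  if (data[i] & 1)
                      local_entropy ^= 0x4d1e93a6c07b52f8ULL;  /* branch taken */
                  else
                      local_entropy += 0xa92c6e15f38d0b47ULL;  /* branch not taken */
                  local_entropy = rol64(local_entropy, 7);     /* per-iteration permute */
              }

              /* before returning, mix the local value into the global */
              latent_entropy ^= local_entropy;
          }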
  10. 13 Sep 2016, 1 commit
    • powerpc/Makefile: Drop CONFIG_WORD_SIZE for BITS · 68201fbb
      Authored by Michael Ellerman
      Commit 2578bfae ("[POWERPC] Create and use CONFIG_WORD_SIZE") added
      CONFIG_WORD_SIZE, and suggests that other arches were going to do
      likewise.
      
      But that never happened, powerpc is the only architecture which uses it.
      
      So switch to using a simple make variable, BITS, like x86, sh, sparc and
      tile do. It is also simpler and easier to spell, and avoids any confusion
      about whether it's defined due to the ordering of make vs. kconfig.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      68201fbb
  11. 09 Sep 2016, 1 commit
    • powerpc: move hmi.c to arch/powerpc/kvm/ · 3f257774
      Authored by Paolo Bonzini
      hmi.c functions are unused unless sibling_subcore_state is nonzero, and
      that in turn happens only if KVM is in use.  So move the code to
      arch/powerpc/kvm/, putting it under CONFIG_KVM_BOOK3S_HV_POSSIBLE
      rather than CONFIG_PPC_BOOK3S_64.  The sibling_subcore_state is also
      included in struct paca_struct only if KVM is supported by the kernel.
      
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: kvm-ppc@vger.kernel.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      3f257774
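
      A simplified sketch of the shape of the conditional-inclusion change
      described above (not the real paca_struct definition, just the relevant
      fragment):

          /* the pointer is only compiled in when the kernel can host HV KVM guests */
          struct sibling_subcore_state;

          struct paca_struct_fragment {
              /* ... other per-CPU fields ... */
          #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
              struct sibling_subcore_state *sibling_subcore_state;
          #endif
          };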
  12. 22 Aug 2016, 1 commit
    • powerpc: move hmi.c to arch/powerpc/kvm/ · 7c379526
      Authored by Paolo Bonzini
      hmi.c functions are unused unless sibling_subcore_state is nonzero, and
      that in turn happens only if KVM is in use.  So move the code to
      arch/powerpc/kvm/, putting it under CONFIG_KVM_BOOK3S_HV_POSSIBLE
      rather than CONFIG_PPC_BOOK3S_64.  The sibling_subcore_state is also
      included in struct paca_struct only if KVM is supported by the kernel.
      
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: kvm-ppc@vger.kernel.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      7c379526
  13. 08 Aug 2016, 1 commit
  14. 19 Jul 2016, 1 commit
  15. 15 Jul 2016, 1 commit
  16. 20 Jun 2016, 1 commit
    • KVM: PPC: Book3S HV: Fix TB corruption in guest exit path on HMI interrupt · fd7bacbc
      Authored by Mahesh Salgaonkar
      When a guest is assigned to a core, the host Timebase (TB) is converted
      into the guest TB by adding the guest timebase offset before entering
      the guest, and on guest exit the guest TB is restored back to the host
      TB. This means that under certain conditions (guest migration) the host
      TB and guest TB can differ.
      
      When we get an HMI for TB-related issues, the opal HMI handler tries to
      fix the errors and restore the correct host TB value. With no guest
      running we don't have any issues, but with a guest running on the core
      we run into TB corruption issues.
      
      If we get an HMI while in the guest, the current HMI handler invokes the
      opal hmi handler before forcing the guest to exit. The guest exit path
      then subtracts the guest TB offset from the current TB value, which may
      already have been restored to the host value by the opal hmi handler.
      This leads to incorrect host and guest TB values.
      
      With split-core, things become more complex: the TB also gets split and
      each subcore gets its own TB register. When the hmi handler fixes a TB
      error and restores the TB value, it affects the TB values of all sibling
      subcores on the same core. On a TB error, all the threads in the core
      get an HMI. With the existing code, the individual threads call the opal
      hmi handler independently, which can easily throw the TB out of sync if
      we have guests running on subcores. Hence we need to coordinate with all
      the threads before making the opal hmi handler call, followed by a TB
      resync.
      
      This patch introduces a sibling subcore state structure (shared by all
      threads in the core) in the paca, which holds information about whether
      the sibling subcores are in guest mode or host mode. An array in_guest[]
      of size MAX_SUBCORE_PER_CORE=4 is used to maintain the state of each
      subcore, with the subcore id used as the index into the in_guest[]
      array. Only the primary thread entering/exiting the guest is responsible
      for setting/clearing its designated array element (see the sketch after
      this entry).
      
      On a TB error, we get an HMI interrupt on every thread of the core. Upon
      HMI, this patch now forces the guest to vacate the core/subcore. The
      primary thread of each subcore then clears its respective bit in the
      above bitmap during the guest exit path, just after the guest->host
      partition switch is complete.
      
      All other threads that have just exited the guest OR were already in the
      host wait until all subcores have cleared their respective bits. Once
      all the subcores have done so, all threads make the call to the opal hmi
      handler.
      
      The opal hmi handler does not necessarily resync the TB value for every
      HMI interrupt; it does so only for HMIs caused by TB errors and does not
      touch the TB value otherwise. Hence, to make things simpler, the primary
      thread calls TB resync explicitly once for each core immediately after
      the opal hmi handler, instead of subtracting the guest offset from the
      TB. The TB resync call restores the TB to the host value, so we can be
      sure about the TB state.
      
      One of the primary threads exiting the guest takes up the responsibility
      of calling TB resync. It uses one of the top bits (bit 63) of the
      subcore state flags bitmap to make the decision: the first primary
      thread (among the subcores) that manages to set the bit performs the TB
      resync, and all other threads wait until the TB resync is complete.
      Once the TB resync is complete, all threads proceed.
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      fd7bacbc
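
      A conceptual sketch of the coordination described above (standalone C11
      with atomics; the field and helper names approximate the description,
      not the kernel source, and opal calls are only indicated in comments):

          #include <stdatomic.h>
          #include <stdbool.h>
          #include <stdint.h>

          #define MAX_SUBCORE_PER_CORE  4
          #define SUBCORE_IN_GUEST      1
          #define SUBCORE_IN_HOST       0
          #define TB_RESYNC_CLAIMED     (1ULL << 63)   /* "top bit" of the flags */

          /* shared by all sibling threads of a core (lives in the paca in reality) */
          struct sibling_subcore_state {
              _Atomic uint8_t  in_guest[MAX_SUBCORE_PER_CORE]; /* indexed by subcore id */
              _Atomic uint64_t flags;
          };

          /* called on each thread after an HMI, once its subcore has left the guest */
          static void hmi_coordinate(struct sibling_subcore_state *s,
                                     int my_subcore, bool am_primary)
          {
              if (am_primary)
                  atomic_store(&s->in_guest[my_subcore], SUBCORE_IN_HOST);

              /* wait until every subcore has cleared its in_guest entry */
              for (int i = 0; i < MAX_SUBCORE_PER_CORE; i++)
                  while (atomic_load(&s->in_guest[i]) != SUBCORE_IN_HOST)
                      ;  /* spin */

              /* every thread now calls the opal hmi handler (omitted here);
               * the first primary to claim bit 63 also performs the TB resync */
              if (am_primary &&
                  !(atomic_fetch_or(&s->flags, TB_RESYNC_CLAIMED) & TB_RESYNC_CLAIMED)) {
                  /* opal_resync_timebase();  -- conceptual */
              }
          }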
  17. 07 Mar 2016, 1 commit
  18. 21 Jan 2016, 1 commit
  19. 06 Aug 2015, 1 commit
  20. 05 Jun 2015, 1 commit
  21. 11 May 2015, 1 commit
  22. 17 Mar 2015, 1 commit
  23. 25 Sep 2014, 1 commit
  24. 11 Jun 2014, 1 commit
  25. 30 Apr 2014, 1 commit
  26. 13 Jan 2014, 1 commit
    • clk: mpc5xxx: switch to COMMON_CLK, retire PPC_CLOCK · 7d71d5b2
      Authored by Gerhard Sittig
      the setup before the change was
      - arch/powerpc/Kconfig had the PPC_CLOCK option, off by default
      - depending on the PPC_CLOCK option the arch/powerpc/kernel/clock.c file
        was built, which implements the clk.h API but always returns -ENOSYS
        unless a platform registers specific callbacks
      - the MPC52xx platform selected PPC_CLOCK but did not register any
        callbacks, thus all clk.h API calls keep resulting in -ENOSYS errors
        (which is OK, all peripheral drivers deal with the situation)
      - the MPC512x platform selected PPC_CLOCK and registered specific
        callbacks implemented in arch/powerpc/platforms/512x/clock.c, thus
        provided real support for the clock API
      - no other powerpc platform did select PPC_CLOCK
      
      the situation after the change is
      - the MPC512x platform implements the COMMON_CLK interface, and thus the
        PPC_CLOCK approach in arch/powerpc/platforms/512x/clock.c has become
        obsolete
      - the MPC52xx platform still lacks genuine support for the clk.h API
        while this is not a change against the previous situation (the error
        code returned from COMMON_CLK stubs differs but every call still
        results in an error)
      - with all references gone, the arch/powerpc/kernel/clock.c wrapper and
        the PPC_CLOCK option have become obsolete, as did the clk_interface.h
        header file
      
      the switch from PPC_CLOCK to COMMON_CLK is done for all platforms within
      the same commit such that multiplatform kernels (the combination of 512x
      and 52xx within one executable) keep working
      
      Cc: Mike Turquette <mturquette@linaro.org>
      Cc: Anatolij Gustschin <agust@denx.de>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Signed-off-by: Gerhard Sittig <gsi@denx.de>
      Signed-off-by: Anatolij Gustschin <agust@denx.de>
      7d71d5b2
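
      For context, a hedged sketch of generic clk.h consumer usage of the kind
      peripheral drivers on these platforms go through (the "ipg" clock name
      and the helper are illustrative, not taken from a specific driver):

          #include <linux/clk.h>
          #include <linux/device.h>
          #include <linux/err.h>

          /* with COMMON_CLK the 512x lookups succeed; on 52xx the calls still
           * just return errors, which the drivers already tolerate */
          static int example_enable_ipg_clock(struct device *dev, unsigned long *rate)
          {
              struct clk *clk;
              int ret;

              clk = devm_clk_get(dev, "ipg");
              if (IS_ERR(clk))
                  return PTR_ERR(clk);

              ret = clk_prepare_enable(clk);
              if (ret)
                  return ret;

              *rate = clk_get_rate(clk);
              return 0;
          }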
  27. 05 Dec 2013, 2 commits
    • powerpc/book3s: Decode and save machine check event. · 36df96f8
      Authored by Mahesh Salgaonkar
      Now that we handle machine checks in Linux, the MCE decoding should also
      take place in the Linux host. This info is crucial to log before we go
      down, in case we cannot handle the machine check errors. This patch
      decodes and populates a machine check event which contains high-level,
      meaningful MCE information.
      
      We do this in real mode C code with the ME bit on. The MCE information
      is still available on the emergency stack (in pt_regs structure format).
      Even if we take another exception at this point, the MCE early handler
      will allocate a new stack frame on top of the current one, so when we
      return back here we still have our MCE information safe on the current
      stack.
      
      We use a per-cpu buffer to save the high-level MCE information. Each
      per-cpu buffer is an array of machine check event structures indexed by
      the per-cpu counter mce_nest_count. The mce_nest_count is incremented
      every time we enter the machine check early handler in real mode to get
      the current free slot (index = mce_nest_count - 1), and decremented once
      the MCE info is consumed by the virtual mode machine check exception
      handler (see the sketch after this entry).
      
      This patch provides the generic routines save_mce_event(),
      get_mce_event() and release_mce_event() that machine check handlers can
      use to populate and retrieve the event. The routine release_mce_event()
      frees the event slot so that it can be reused. The caller can invoke
      get_mce_event() with a release flag either to release the event slot
      immediately OR to keep it so that it can be fetched again. The event
      slot can also be released at any time by invoking release_mce_event().
      
      This patch also updates the kvm code to invoke get_mce_event() to
      retrieve the generic mce event rather than paca->opal_mce_evt.
      
      The KVM code always calls get_mce_event() with the release flag set to
      false, so that the event remains available for the Linux host.
      
      If a machine check occurs while we are in the guest, KVM tries to handle
      the error. If KVM is able to handle the MC error successfully, it enters
      the guest and delivers the machine check to the guest. If KVM is not
      able to handle the MC error, it exits the guest and passes control to
      the Linux host machine check handler, which then logs the MC event and
      decides how to handle it in the Linux host. In the failure case, KVM
      needs to make sure that the MC event remains available for the Linux
      host to consume. Hence KVM always calls get_mce_event() with the release
      flag set to false, and later invokes release_mce_event() only if it
      succeeds in handling the error.
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      36df96f8
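
      A simplified sketch of the nest-counter indexing scheme described above
      (plain C, single-CPU for brevity; the real routines are per-cpu and live
      in the powerpc MCE code, so names and details here are approximate):

          #include <stdbool.h>

          #define MAX_MC_EVT  10   /* illustrative queue depth */

          struct machine_check_event {
              int error_type;      /* ... decoded fields ... */
          };

          static struct machine_check_event mce_event[MAX_MC_EVT]; /* per-cpu in reality */
          static int mce_nest_count;

          /* real-mode early handler: claim the next free slot and fill it in */
          static void save_mce_event(const struct machine_check_event *evt)
          {
              int index = mce_nest_count++;   /* current free slot */

              if (index < MAX_MC_EVT)
                  mce_event[index] = *evt;
          }

          /* virtual-mode handler: read the most recent event, optionally freeing it */
          static bool get_mce_event(struct machine_check_event *out, bool release)
          {
              int index = mce_nest_count - 1;
              bool found = false;

              if (index >= 0 && index < MAX_MC_EVT) {
                  if (out)
                      *out = mce_event[index];
                  found = true;
              }
              if (release && mce_nest_count > 0)
                  mce_nest_count--;           /* release_mce_event() does just this */
              return found;
          }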
    • powerpc/book3s: Flush SLB/TLBs if we get SLB/TLB machine check errors on power7. · e22a2274
      Authored by Mahesh Salgaonkar
      If we get a machine check exception due to SLB or TLB errors, flush the
      SLBs/TLBs and reload the SLBs to recover. We do this in real mode before
      turning on the MMU; otherwise we would run into nested machine checks.
      
      If we get a machine check while we are in the guest, then just flush the
      SLBs and continue. This patch handles errors for power7; the next patch
      will handle errors for power8.
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      e22a2274
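
      A very small sketch of the recovery idea, assuming the host case (the
      reload of bolted SLB entries is only indicated by a comment, and the
      exact instruction sequence the kernel uses may differ):

          /* invalidate the SLB in real mode, then re-create the bolted entries */
          static inline void flush_slb_sketch(void)
          {
              asm volatile("slbia" ::: "memory");   /* invalidate SLB entries */

              /* ... reload bolted kernel SLB entries with slbmte here ... */
          }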
  28. 28 Aug 2013, 1 commit
  29. 14 Aug 2013, 2 commits
  30. 20 Jun 2013, 1 commit
  31. 15 Feb 2013, 1 commit
  32. 10 Jan 2013, 3 commits
    • powerpc: Build kernel with -mcmodel=medium · 1fbe9cf2
      Authored by Anton Blanchard
      Finally remove the two level TOC and build with -mcmodel=medium.
      
      Unfortunately we can't build modules with -mcmodel=medium due to
      the tricks the kernel module loader plays with percpu data:
      
      # -mcmodel=medium breaks modules because it uses 32bit offsets from
      # the TOC pointer to create pointers where possible. Pointers into the
      # percpu data area are created by this method.
      #
      # The kernel module loader relocates the percpu data section from the
      # original location (starting with 0xd...) to somewhere in the base
      # kernel percpu data space (starting with 0xc...). We need a full
      # 64bit relocation for this to work, hence -mcmodel=large.
      
      On older toolchains we fall back to the two level TOC (-mminimal-toc).
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      1fbe9cf2
    • powerpc: Relocate prom_init.c on 64bit · 5ac47f7a
      Authored by Anton Blanchard
      The ppc64 kernel can get loaded at any address which means
      our very early init code in prom_init.c must be relocatable. We do
      this with a pretty nasty RELOC() macro that we wrap accesses of
      variables with. It is very fragile and sometimes we forget to add a
      RELOC() to an uncommon path or sometimes a compiler change breaks it.
      
      32bit has a much more elegant solution where we build prom_init.c
      with -mrelocatable and then process the relocations manually.
      Unfortunately we can't do the equivalent on 64bit and we would
      have to build the entire kernel relocatable (-pie), resulting in a
      large increase in kernel footprint (megabytes of relocation data).
      The relocation data will be marked __initdata but it still creates
      more pressure on our already tight memory layout at boot.
      
      Alan Modra pointed out that the 64bit ABI is relocatable even
      if we don't build with -pie, we just need to relocate the TOC.
      This patch implements that idea and relocates the TOC entries of
      prom_init.c. An added bonus is there are very few relocations to
      process which helps keep boot times on simulators down.
      
      gcc does not put 64bit integer constants into the TOC but to be
      safe we may want a build time script which passes through the
      prom_init.c TOC entries to make sure everything looks reasonable.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      5ac47f7a
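
      A rough sketch of the fragile RELOC() pattern this patch removes (the
      macro shape is approximate; reloc_offset() stands for the difference
      between the kernel's link and load addresses):

          /* every access to a global in prom_init.c had to be wrapped so the
           * run-time load offset is applied before relocation is sorted out */
          extern unsigned long reloc_offset(void);

          #define PTRRELOC(x)  ((typeof(x))((unsigned long)(x) + reloc_offset()))
          #define RELOC(x)     (*PTRRELOC(&(x)))

          static int prom_debug;

          static void example_use(void)
          {
              /* forgetting the RELOC() wrapper on any such access broke early boot */
              RELOC(prom_debug) = 1;
          }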
    • powerpc: Update Kconfig + Makefile to prepare for server doorbells · 440bc685
      Authored by Ian Munsie
      Move the rule to build doorbell support out of the Makefile and into a
      new Kconfig boolean that platforms can select.
      
      We will add doorbell support to pseries as well in the next patch.
      Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
      Tested-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      440bc685
  33. 15 Nov 2012, 1 commit
  34. 05 Sep 2012, 1 commit
    • powerpc: Uprobes port to powerpc · 8b7b80b9
      Authored by Ananth N Mavinakayanahalli
      This is the port of uprobes to powerpc. Usage is similar to x86.
      
      [root@xxxx ~]# ./bin/perf probe -x /lib64/libc.so.6 malloc
      Added new event:
        probe_libc:malloc    (on 0xb4860)
      
      You can now use it in all perf tools, such as:
      
      	perf record -e probe_libc:malloc -aR sleep 1
      
      [root@xxxx ~]# ./bin/perf record -e probe_libc:malloc -aR sleep 20
      [ perf record: Woken up 22 times to write data ]
      [ perf record: Captured and wrote 5.843 MB perf.data (~255302 samples) ]
      [root@xxxx ~]# ./bin/perf report --stdio
      ...
      
          69.05%           tar  libc-2.12.so   [.] malloc
          28.57%            rm  libc-2.12.so   [.] malloc
           1.32%  avahi-daemon  libc-2.12.so   [.] malloc
           0.58%          bash  libc-2.12.so   [.] malloc
           0.28%          sshd  libc-2.12.so   [.] malloc
           0.08%    irqbalance  libc-2.12.so   [.] malloc
           0.05%         bzip2  libc-2.12.so   [.] malloc
           0.04%         sleep  libc-2.12.so   [.] malloc
           0.03%    multipathd  libc-2.12.so   [.] malloc
           0.01%      sendmail  libc-2.12.so   [.] malloc
           0.01%     automount  libc-2.12.so   [.] malloc
      
      The trap_nr addition patch is a prereq.
      Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      8b7b80b9