1. 15 Dec 2014, 3 commits
    • powernv/cpuidle: Redesign idle states management · 7cba160a
      Authored by Shreyas B. Prabhu
      Deep idle states like sleep and winkle are per-core idle states. A core
      enters these states only when all of its threads enter either that
      particular idle state or a deeper one. Some tasks, such as the
      fastsleep hardware bug workaround and the hypervisor core state save,
      must be done only by the last thread of the core entering a deep idle
      state, and similarly tasks like the timebase resync and hypervisor core
      register restore must be done only by the first thread waking up from
      these states.
      
      The current idle state management has no way to distinguish the
      first/last thread of the core entering/waking from idle states, so
      tasks like the timebase resync are done by every thread. This is not
      only suboptimal, but can cause functional issues when subcores and KVM
      are involved.
      
      This patch adds the necessary infrastructure to track idle states of
      threads in a per-core structure. It uses this info to perform tasks like
      fastsleep workaround and timebase resync only once per core.
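
      As a minimal sketch of the idea (all names here are invented, a
      spinlock stands in for the lock-free paca bookkeeping the patch
      actually uses, and the real code runs with the MMU off, where
      spinlocks are unavailable):

      	/* Hypothetical per-core bookkeeping; one instance per core. */
      	struct core_idle_state {
      		unsigned long idle_mask;	/* bit n set => thread n is idle */
      		spinlock_t lock;
      	};

      	static void sketch_enter_deep_idle(struct core_idle_state *cs, int thread)
      	{
      		spin_lock(&cs->lock);
      		cs->idle_mask |= 1UL << thread;
      		if (cs->idle_mask == ALL_THREADS_MASK)	/* assumed constant */
      			apply_fastsleep_workaround();	/* last thread in: once per core */
      		spin_unlock(&cs->lock);
      	}

      	static void sketch_exit_deep_idle(struct core_idle_state *cs, int thread)
      	{
      		spin_lock(&cs->lock);
      		if (cs->idle_mask == ALL_THREADS_MASK)
      			resync_timebase();		/* first thread out: once per core */
      		cs->idle_mask &= ~(1UL << thread);
      		spin_unlock(&cs->lock);
      	}
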
      Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
      Originally-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: linux-pm@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/powernv: Enable Offline CPUs to enter deep idle states · 8eb8ac89
      Authored by Shreyas B. Prabhu
      The secondary threads should enter deep idle states so as to gain
      maximum power savings when the entire core is offline. To do so, the
      offline path must be made aware of the deepest available idle state.
      Hence probe the device tree for the possible idle states in the
      powernv core code and expose the deepest idle state through flags.
      
      Since the device tree is probed by the cpuidle driver as well, move
      the parameters required to discover the idle states into a place
      common to both the driver and the powernv core code.
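
      The probing itself follows the usual OF pattern; a hedged sketch,
      where the node path and property name match the PowerNV device tree
      but the function and variable names are illustrative:

      	#include <linux/of.h>
      	#include <linux/slab.h>

      	static u32 pnv_supported_idle_flags;	/* illustrative: OR of all state flags */

      	static int __init sketch_probe_idle_states(void)
      	{
      		struct device_node *np;
      		u32 *flags;
      		int i, count;

      		np = of_find_node_by_path("/ibm,opal/power-mgt");
      		if (!np)
      			return -ENODEV;

      		count = of_property_count_u32_elems(np, "ibm,cpu-idle-state-flags");
      		if (count > 0) {
      			flags = kcalloc(count, sizeof(*flags), GFP_KERNEL);
      			if (flags && !of_property_read_u32_array(np,
      					"ibm,cpu-idle-state-flags", flags, count)) {
      				for (i = 0; i < count; i++)
      					pnv_supported_idle_flags |= flags[i];
      			}
      			kfree(flags);
      		}
      		of_node_put(np);
      		return 0;
      	}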
      
      Another point is that the fastsleep idle state may require a workaround
      in the kernel to function properly. That workaround is introduced in
      subsequent patches. However, neither the cpuidle driver nor the hotplug
      path needs to be concerned with it; it is handled entirely by the core
      powernv code.
      Originally-by: Srivatsa S. Bhat <srivatsa@mit.edu>
      Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
      Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
      Reviewed-by: Paul Mackerras <paulus@samba.org>
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: linux-pm@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/powernv: Switch off MMU before entering nap/sleep/rvwinkle mode · 8117ac6a
      Authored by Paul Mackerras
      Currently, when going idle, we set the flag indicating that we are in
      nap mode (paca->kvm_hstate.hwthread_state) and then execute the nap
      (or sleep or rvwinkle) instruction, all with the MMU on.  This is bad
      for two reasons: (a) the architecture specifies that those instructions
      must be executed with the MMU off, and in fact with only the SF, HV, ME
      and possibly RI bits set, and (b) this introduces a race, because as
      soon as we set the flag, another thread can switch the MMU to a guest
      context.  If the race is lost, this thread will typically start looping
      on relocation-on ISIs at 0xc...4400.
      
      This fixes it by setting the MSR as required by the architecture before
      setting the flag or executing the nap/sleep/rvwinkle instruction.
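
      A rough C-level rendering of the required ordering; the real patch
      does this in assembly via mtmsrd/rfid, and set_msr_for_idle() and
      enter_nap() are hypothetical stand-ins:

      	static void sketch_power7_idle(void)
      	{
      		/* 1. First switch to the architected idle MSR value:
      		 *    MMU off, only SF, HV and ME set. */
      		set_msr_for_idle(MSR_SF | MSR_HV | MSR_ME);	/* hypothetical */

      		/* 2. Only now advertise that this thread is napping; KVM
      		 *    can no longer switch the MMU to a guest context
      		 *    underneath a thread that still has relocation on. */
      		local_paca->kvm_hstate.hwthread_state = KVM_HWTHREAD_IN_NAP;

      		/* 3. Execute the idle instruction (nap/sleep/rvwinkle). */
      		enter_nap();					/* hypothetical */
      	}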
      
      Cc: stable@vger.kernel.org
      [ shreyas@linux.vnet.ibm.com: Edited to handle LE ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: linuxppc-dev@lists.ozlabs.org
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  2. 14 Dec 2014, 1 commit
  3. 12 Dec 2014, 1 commit
  4. 08 Dec 2014, 1 commit
    • powerpc/powernv: Return to cpu offline loop when finished in KVM guest · 56548fc0
      Authored by Paul Mackerras
      When a secondary hardware thread has finished running a KVM guest, we
      currently put that thread into nap mode using a nap instruction in
      the KVM code.  This changes the code so that, rather than executing a
      nap instruction directly, we cause the call to power7_nap() that
      originally put the thread into nap mode to return.  This avoids the
      KVM code having to know what low-power mode to put the thread into.
      
      In the case of a secondary thread used to run a KVM guest, the thread
      will be offline from the point of view of the host kernel, and the
      relevant power7_nap() call is the one in pnv_smp_cpu_disable().
      In this case we don't want to clear pending IPIs in the offline loop
      in that function, since that might cause us to miss the wakeup for
      the next time the thread needs to run a guest.  To tell whether or
      not to clear the interrupt, we use the SRR1 value returned from
      power7_nap(), and check if it indicates an external interrupt.  We
      arrange that the return from power7_nap() when we have finished running
      a guest returns 0, so pending interrupts don't get flushed in that
      case.
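
      A hedged sketch of the resulting offline-loop check; SRR1_WAKEMASK and
      SRR1_WAKEEE are the wakeup-reason definitions from asm/reg.h, and the
      surrounding loop is paraphrased:

      	/* One iteration of the offline idle loop, paraphrased: */
      	unsigned long srr1 = power7_nap(1);

      	/* srr1 == 0 is the "finished running a guest" return; leave any
      	 * pending IPI alone, as it may be the next guest-run request. */
      	if ((srr1 & SRR1_WAKEMASK) == SRR1_WAKEEE)
      		icp_native_flush_interrupt();	/* genuine offline-loop IPI: ack it */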
      
      Note that it is important that a secondary thread that has finished
      executing in the guest, or that didn't have a guest to run, should
      not return to power7_nap's caller while the kvm_hstate.hwthread_req
      flag in the PACA is non-zero, because the return from power7_nap
      will re-enable the MMU, and the MMU might still be in guest context.
      In this situation we spin at low priority in real mode waiting for
      hwthread_req to become zero.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  5. 05 Dec 2014, 1 commit
    • powerpc/mm: don't do tlbie for updatepp request with NO HPTE fault · aefa5688
      Authored by Aneesh Kumar K.V
      updatepp can get called for a nohpte fault when we find from the Linux
      page table that the translation was hashed before. In that case
      we are sure that there is no existing translation, hence we can
      avoid doing the tlbie.
      
      We could possibly race with a parallel fault filling the TLB. But
      that should be ok because updatepp is only ever relaxing permissions.
      We also look at the Linux pte permission bits when filling hash pte
      permission bits. We also hold the Linux pte busy bits while
      inserting/updating a hashpte entry, hence a parallel update of the
      Linux pte is not possible. On the other hand, mprotect involves
      ptep_modify_prot_start, which causes an hpte invalidate rather than
      an updatepp.
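
      In sketch form, assuming a flags argument plumbed down to the hash
      backend (treat HPTE_NOHPTE_UPDATE and the exact tlbie call shape as
      illustrative):

      	/* In the hpte_updatepp() path: */
      	if (!(flags & HPTE_NOHPTE_UPDATE)) {
      		/* An older translation may be cached in the TLB;
      		 * invalidate it. */
      		tlbie(vpn, bpsize, apsize, ssize, local);
      	}
      	/* Otherwise the page was never hashed before, so no stale TLB
      	 * entry can exist and the costly tlbie is skipped. */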
      
      Performance numbers:
      We use random_access_bench, written by Anton.
      
      Without the fix (kernel with THP disabled and a smaller hash page table size):
      
          86.60%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_updatepp
           2.10%  random_access_b  random_access_bench              [.] doit
           1.99%  random_access_b  [kernel.kallsyms]                [k] .do_raw_spin_lock
           1.85%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_insert
           1.26%  random_access_b  [kernel.kallsyms]                [k] .native_flush_hash_range
           1.18%  random_access_b  [kernel.kallsyms]                [k] .__delay
           0.69%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_remove
           0.37%  random_access_b  [kernel.kallsyms]                [k] .clear_user_page
           0.34%  random_access_b  [kernel.kallsyms]                [k] .__hash_page_64K
           0.32%  random_access_b  [kernel.kallsyms]                [k] fast_exception_return
           0.30%  random_access_b  [kernel.kallsyms]                [k] .hash_page_mm
      
      With the fix:
      
          27.54%  random_access_b  random_access_bench              [.] doit
          22.90%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_insert
           5.76%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_remove
           5.20%  random_access_b  [kernel.kallsyms]                [k] fast_exception_return
           5.12%  random_access_b  [kernel.kallsyms]                [k] .__hash_page_64K
           4.80%  random_access_b  [kernel.kallsyms]                [k] .hash_page_mm
           3.31%  random_access_b  [kernel.kallsyms]                [k] data_access_common
           1.84%  random_access_b  [kernel.kallsyms]                [k] .trace_hardirqs_on_caller
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  6. 02 Dec 2014, 5 commits
  7. 17 Nov 2014, 2 commits
    • rtc/tpo: Driver to support rtc and wakeup on PowerNV platform · 16b1d26e
      Authored by Neelesh Gupta
      The patch implements the OPAL rtc driver that binds with the rtc
      driver subsystem. The driver uses the platform device infrastructure
      to probe the rtc device and registers it with the rtc class framework.
      Wakeup is supported depending on the presence of the 'has-tpo'
      property in the OF node. It provides a way to load the generic rtc
      driver in the absence of an OPAL driver.
      
      The patch also moves the existing OPAL rtc get/set time interfaces to the
      new driver and exposes the necessary OPAL calls using EXPORT_SYMBOL_GPL.
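
      A hedged sketch of the probe flow; opal_rtc_ops is an assumed name for
      the rtc_class_ops backed by the moved OPAL get/set time calls:

      	static int sketch_opal_rtc_probe(struct platform_device *pdev)
      	{
      		struct rtc_device *rtc;

      		/* 'has-tpo' in the OF node => the FSP can wake the machine. */
      		if (pdev->dev.of_node &&
      		    of_get_property(pdev->dev.of_node, "has-tpo", NULL))
      			device_set_wakeup_capable(&pdev->dev, true);

      		rtc = devm_rtc_device_register(&pdev->dev, "opal-rtc",
      					       &opal_rtc_ops, THIS_MODULE);
      		return PTR_ERR_OR_ZERO(rtc);
      	}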
      
      Test results:
      -------------
      Host:
      [root@tul169p1 ~]# ls -l /sys/class/rtc/
      total 0
      lrwxrwxrwx 1 root root 0 Oct 14 03:07 rtc0 -> ../../devices/opal-rtc/rtc/rtc0
      [root@tul169p1 ~]# cat /sys/devices/opal-rtc/rtc/rtc0/time
      08:10:07
      [root@tul169p1 ~]# echo `date '+%s' -d '+ 2 minutes'` > /sys/class/rtc/rtc0/wakealarm
      [root@tul169p1 ~]# cat /sys/class/rtc/rtc0/wakealarm
      1413274345
      [root@tul169p1 ~]#
      
      FSP:
      $ smgr mfgState
      standby
      $ rtim timeofday
      
      System time is valid: 2014/10/14 08:12:04.225115
      
      $ smgr mfgState
      ipling
      $
      
      CC: devicetree@vger.kernel.org
      CC: tglx@linutronix.de
      CC: rtc-linux@googlegroups.com
      CC: a.zummo@towertech.it
      Signed-off-by: Neelesh Gupta <neelegup@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Use generic PIE randomization · 59994fb0
      Authored by Vineeth Vijayan
      Back in 2009 we merged 501cb16d "Randomise PIEs", which added support for
      randomizing PIE (Position Independent Executable) binaries.
      
      That commit added randomize_et_dyn(), which correctly randomized the addresses,
      but failed to honor PF_RANDOMIZE. That means it was not possible to disable PIE
      randomization via the personality flag, or /proc/sys/kernel/randomize_va_space.
      
      Since then there has been generic support for PIE randomization added to
      binfmt_elf.c, selectable via ARCH_BINFMT_ELF_RANDOMIZE_PIE.
      
      Enabling that allows us to drop randomize_et_dyn(), which means we start
      honoring PF_RANDOMIZE correctly.
      
      It also causes a fairly major change to how we layout PIE binaries.
      
      Currently we will place the binary at 512MB-520MB for 32 bit binaries, or
      512MB-1.5GB for 64 bit binaries, eg:
      
          $ cat /proc/$$/maps
          4e550000-4e580000 r-xp 00000000 08:02 129813       /bin/dash
          4e580000-4e590000 rw-p 00020000 08:02 129813       /bin/dash
          10014110000-10014140000 rw-p 00000000 00:00 0      [heap]
          3fffaa3f0000-3fffaa5a0000 r-xp 00000000 08:02 921  /lib/powerpc64le-linux-gnu/libc-2.19.so
          3fffaa5a0000-3fffaa5b0000 rw-p 001a0000 08:02 921  /lib/powerpc64le-linux-gnu/libc-2.19.so
          3fffaa5c0000-3fffaa5d0000 rw-p 00000000 00:00 0
          3fffaa5d0000-3fffaa5f0000 r-xp 00000000 00:00 0    [vdso]
          3fffaa5f0000-3fffaa620000 r-xp 00000000 08:02 1246 /lib/powerpc64le-linux-gnu/ld-2.19.so
          3fffaa620000-3fffaa630000 rw-p 00020000 08:02 1246 /lib/powerpc64le-linux-gnu/ld-2.19.so
          3ffffc340000-3ffffc370000 rw-p 00000000 00:00 0    [stack]
      
      With this commit applied we don't do any special randomisation for the binary,
      and instead rely on mmap randomisation. This means the binary ends up at high
      addresses, eg:
      
          $ cat /proc/$$/maps
          3fff99820000-3fff999d0000 r-xp 00000000 08:02 921    /lib/powerpc64le-linux-gnu/libc-2.19.so
          3fff999d0000-3fff999e0000 rw-p 001a0000 08:02 921    /lib/powerpc64le-linux-gnu/libc-2.19.so
          3fff999f0000-3fff99a00000 rw-p 00000000 00:00 0
          3fff99a00000-3fff99a20000 r-xp 00000000 00:00 0      [vdso]
          3fff99a20000-3fff99a50000 r-xp 00000000 08:02 1246   /lib/powerpc64le-linux-gnu/ld-2.19.so
          3fff99a50000-3fff99a60000 rw-p 00020000 08:02 1246   /lib/powerpc64le-linux-gnu/ld-2.19.so
          3fff99a60000-3fff99a90000 r-xp 00000000 08:02 129813 /bin/dash
          3fff99a90000-3fff99aa0000 rw-p 00020000 08:02 129813 /bin/dash
          3fffc3de0000-3fffc3e10000 rw-p 00000000 00:00 0      [stack]
          3fffc55e0000-3fffc5610000 rw-p 00000000 00:00 0      [heap]
      
      Although this should be OK, it might break badly written binaries
      that make assumptions about the address space layout.
      Signed-off-by: Vineeth Vijayan <vvijayan@mvista.com>
      [mpe: Rewrite changelog]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  8. 14 Nov 2014, 3 commits
  9. 12 Nov 2014, 2 commits
  10. 10 Nov 2014, 7 commits
  11. 08 Nov 2014, 3 commits
  12. 05 Nov 2014, 2 commits
  13. 03 Nov 2014, 2 commits
    • powerpc: Convert power off logic to pm_power_off · 9178ba29
      Authored by Alexander Graf
      The generic Linux framework to power off the machine is a function pointer
      called pm_power_off. The trick about this pointer is that device drivers can
      potentially implement it rather than board files.
      
      Today on powerpc we set pm_power_off to invoke our generic full machine power
      off logic which then calls ppc_md.power_off to invoke machine specific power
      off.
      
      However, when we want to add a power off GPIO via the "gpio-poweroff"
      driver, this house of cards falls apart. That driver only registers itself
      if pm_power_off is NULL, to ensure it doesn't override board specific
      logic. However, since we always set pm_power_off to the generic power off
      logic (which will just not power off the machine if no ppc_md.power_off
      call is implemented), we can't implement power off via the generic GPIO
      power off driver.
      
      To fix this up, let's get rid of the ppc_md.power_off logic and just always use
      pm_power_off as was intended. Then individual drivers such as the GPIO power off
      driver can implement power off logic via that function pointer.
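
      A driver-side sketch of what this enables; the GPIO handler below is
      hypothetical, and the real gpio-poweroff driver is more careful:

      	static int poweroff_gpio;		/* hypothetical, set up at probe */

      	static void sketch_gpio_power_off(void)
      	{
      		gpio_set_value(poweroff_gpio, 1);	/* assumed active-high */
      	}

      	static int sketch_poweroff_probe(struct platform_device *pdev)
      	{
      		/* Register only if no board-specific handler claimed it. */
      		if (pm_power_off)
      			return -EBUSY;
      		pm_power_off = sketch_gpio_power_off;
      		return 0;
      	}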
      
      With this patch set applied and a few patches on top of QEMU that implement a
      power off GPIO on the virt e500 machine, I can successfully turn off my virtual
      machine after halt.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      [mpe: Squash into one patch and update changelog based on cover letter]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Replace __get_cpu_var uses · 69111bac
      Authored by Christoph Lameter
      This still has not been merged and now powerpc is the only arch that does
      not have this change. Sorry about missing linuxppc-dev before.
      
      V2->V2
        - Fix up to work against 3.18-rc1
      
      __get_cpu_var() is used for multiple purposes in the kernel source. One of
      them is address calculation via the form &__get_cpu_var(x).  This calculates
      the address for the instance of the percpu variable of the current processor
      based on an offset.
      
      Other use cases are for storing and retrieving data from the current
      processor's percpu area.  __get_cpu_var() can be used as an lvalue when
      writing data or on the right side of an assignment.
      
      __get_cpu_var() is defined as:
      
      	#define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
      
      __get_cpu_var() always only does an address determination. However, store
      and retrieve operations could use a segment prefix (or global register on
      other platforms) to avoid the address calculation.
      
      this_cpu_write() and this_cpu_read() can directly take an offset into a
      percpu area and use optimized assembly code to read and write per cpu
      variables.
      
      This patch converts __get_cpu_var into either an explicit address
      calculation using this_cpu_ptr() or into a use of this_cpu operations that
      use the offset.  Thereby address calculations are avoided and fewer
      registers are used when code is generated.
      
      At the end of the patch set all uses of __get_cpu_var have been removed so
      the macro is removed too.
      
      The patch set includes passes over all arches as well. Once these operations
      are used throughout, specialized macros can be defined in non-x86
      arches as well in order to optimize per cpu access, e.g. by using a global
      register that may be set to the per cpu base.
      
      Transformations done to __get_cpu_var()
      
      1. Determine the address of the percpu instance of the current processor.
      
      	DEFINE_PER_CPU(int, y);
      	int *x = &__get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(&y);
      
      2. Same as #1 but this time an array structure is involved.
      
      	DEFINE_PER_CPU(int, y[20]);
      	int *x = __get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(y);
      
      3. Retrieve the content of the current processor's instance of a per cpu
      variable.
      
      	DEFINE_PER_CPU(int, y);
      	int x = __get_cpu_var(y)
      
         Converts to
      
      	int x = __this_cpu_read(y);
      
      4. Retrieve the content of a percpu struct
      
      	DEFINE_PER_CPU(struct mystruct, y);
      	struct mystruct x = __get_cpu_var(y);
      
         Converts to
      
      	memcpy(&x, this_cpu_ptr(&y), sizeof(x));
      
      5. Assignment to a per cpu variable
      
      	DEFINE_PER_CPU(int, y)
      	__get_cpu_var(y) = x;
      
         Converts to
      
      	__this_cpu_write(y, x);
      
      6. Increment/Decrement etc of a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y)++
      
         Converts to
      
      	__this_cpu_inc(y)
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      CC: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Christoph Lameter <cl@linux.com>
      [mpe: Fix build errors caused by set/or_softirq_pending(), and rework
            assignment in __set_breakpoint() to use memcpy().]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  14. 30 Oct 2014, 1 commit
    • powerpc/fadump: Fix endianess issues in firmware assisted dump handling · 408cddd9
      Authored by Hari Bathini
      Firmware-assisted dump (fadump) kernel code is not endian safe. This
      patch fixes the issue; it has been tested with an upstream kernel. The
      output below shows the crash tool successfully opening an LE fadump
      vmcore.
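
      The fix amounts to the usual explicit-endian discipline when touching
      firmware-visible structures; a representative sketch, with the field
      names assumed:

      	/* fadump structures are big endian as seen by firmware, so
      	 * convert explicitly on an LE kernel (field names assumed): */
      	fdm->cpu_state_data.destination_address = cpu_to_be64(addr);
      	len = be64_to_cpu(fdm->cpu_state_data.source_len);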
      
          # crash vmlinux vmcore
          GNU gdb (GDB) 7.6
          This GDB was configured as "powerpc64le-unknown-linux-gnu"...
      
                          KERNEL: vmlinux
                        DUMPFILE: vmcore
                            CPUS: 16
                            DATE: Wed Dec 31 19:00:00 1969
                          UPTIME: 00:03:28
                    LOAD AVERAGE: 0.46, 0.86, 0.41
                           TASKS: 268
                        NODENAME: linux-dhr2
                         RELEASE: 3.17.0-rc5-7-default
                         VERSION: #6 SMP Tue Sep 30 01:06:34 EDT 2014
                         MACHINE: ppc64le  (4116 Mhz)
                          MEMORY: 40 GB
                           PANIC: "Oops: Kernel access of bad area, sig: 11 [#1]" (check log for details)
                             PID: 6223
                         COMMAND: "bash"
                            TASK: c0000009661b2500  [THREAD_INFO: c000000967ac0000]
                             CPU: 2
                           STATE: TASK_RUNNING (PANIC)
      Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
      [mpe: Make the comment in pSeries_lpar_hptab_clear() clearer]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  15. 22 Oct 2014, 2 commits
    • powerpc: Wire up sys_bpf() syscall · fcbb539f
      Authored by Pranith Kumar
      This patch wires up the new syscall sys_bpf() on powerpc.
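
      Wiring up a syscall on powerpc at the time touched three places; in
      sketch form (361 is the number powerpc assigned to bpf):

      	/* arch/powerpc/include/uapi/asm/unistd.h */
      	#define __NR_bpf		361

      	/* arch/powerpc/include/asm/unistd.h */
      	#define __NR_syscalls		362

      	/* arch/powerpc/include/asm/systbl.h */
      	SYSCALL_SPU(bpf)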
      
      Passes the tests in samples/bpf:
      
          #0 add+sub+mul OK
          #1 unreachable OK
          #2 unreachable2 OK
          #3 out of range jump OK
          #4 out of range jump2 OK
          #5 test1 ld_imm64 OK
          #6 test2 ld_imm64 OK
          #7 test3 ld_imm64 OK
          #8 test4 ld_imm64 OK
          #9 test5 ld_imm64 OK
          #10 no bpf_exit OK
          #11 loop (back-edge) OK
          #12 loop2 (back-edge) OK
          #13 conditional loop OK
          #14 read uninitialized register OK
          #15 read invalid register OK
          #16 program doesn't init R0 before exit OK
          #17 stack out of bounds OK
          #18 invalid call insn1 OK
          #19 invalid call insn2 OK
          #20 invalid function call OK
          #21 uninitialized stack1 OK
          #22 uninitialized stack2 OK
          #23 check valid spill/fill OK
          #24 check corrupted spill/fill OK
          #25 invalid src register in STX OK
          #26 invalid dst register in STX OK
          #27 invalid dst register in ST OK
          #28 invalid src register in LDX OK
          #29 invalid dst register in LDX OK
          #30 junk insn OK
          #31 junk insn2 OK
          #32 junk insn3 OK
          #33 junk insn4 OK
          #34 junk insn5 OK
          #35 misaligned read from stack OK
          #36 invalid map_fd for function call OK
          #37 don't check return value before access OK
          #38 access memory with incorrect alignment OK
          #39 sometimes access memory with incorrect alignment OK
          #40 jump test 1 OK
          #41 jump test 2 OK
          #42 jump test 3 OK
          #43 jump test 4 OK
      Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
      [mpe: test using samples/bpf]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm: Remove redundant #if case · ca5f1d16
      Authored by Aneesh Kumar K.V
      Remove the check of CONFIG_PPC_SUBPAGE_PROT when deciding if
      is_hugepage_only_range() is extern or inline. The extern version is in
      slice.c and is built if CONFIG_PPC_MM_SLICES=y.
      
      There was no build break possible because CONFIG_PPC_SUBPAGE_PROT is
      only selectable under conditions which also mean CONFIG_PPC_MM_SLICES
      will be selected.
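
      In effect, as a paraphrased before/after rather than the literal hunk:

      	/* Before: */
      	#if defined(CONFIG_PPC_MM_SLICES) || defined(CONFIG_PPC_SUBPAGE_PROT)
      	extern int is_hugepage_only_range(struct mm_struct *mm,
      					  unsigned long addr, unsigned long len);
      	#endif

      	/* After: CONFIG_PPC_SUBPAGE_PROT already implies CONFIG_PPC_MM_SLICES,
      	 * so testing the latter alone is enough: */
      	#ifdef CONFIG_PPC_MM_SLICES
      	extern int is_hugepage_only_range(struct mm_struct *mm,
      					  unsigned long addr, unsigned long len);
      	#endif
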
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  16. 15 Oct 2014, 4 commits
    • powerpc/eeh: Block PCI config access upon frozen PE · b6541db1
      Authored by Gavin Shan
      The problem was found when I tried to inject PCI config errors via the
      PHB3 PAPR error injection registers into a Broadcom Austin 4-port
      NIC adapter. The frozen PE was reported successfully and the EEH core
      started to recover it. However, I ran into a fenced PHB when dumping
      PCI config space for the EEH logs. I was told that PCI config requests
      should not be propagated to the adapter until the PE reset is done
      successfully. Otherwise, we would run out of PHB internal credits
      and trigger a PCT (PCIE Completion Timeout), which leads to the
      fenced PHB.
      
      The patch introduces another PE flag, EEH_PE_CFG_RESTRICTED, which
      is set at PE initialization time if the PE includes specific
      PCI devices that need PCI config access blocked until the PE reset is
      done. When the PE becomes frozen for the first time, EEH_PE_CFG_BLOCKED
      is set if the PE has the flag EEH_PE_CFG_RESTRICTED. PCI config
      accesses to the PE are then dropped by the platform PCI accessors until
      the PE reset completes successfully. The mechanism is shared by PowerNV
      platform owned PEs and userland owned ones. It's not used on the
      pSeries platform yet.
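
      The accessor-side effect, in sketch form (the real check lives in the
      platform PCI config accessors):

      	/* In the platform PCI config read/write path: */
      	if (pe && (pe->state & EEH_PE_CFG_BLOCKED))
      		return PCIBIOS_SET_FAILED;	/* drop the access until the
      						 * PE reset has completed */
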
      Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/eeh: Rename flag EEH_PE_RESET to EEH_PE_CFG_BLOCKED · 8a6b3710
      Authored by Gavin Shan
      The flag EEH_PE_RESET indicates that the PE's config space is blocked
      during reset time. We potentially need to block the PE's config space
      at times other than reset, so it's reasonable to rename the flag to
      EEH_PE_CFG_BLOCKED to reflect its usage.
      
      There are no substantial code or logic changes in this patch.
      Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Rename __get_SP() to current_stack_pointer() · acf620ec
      Authored by Anton Blanchard
      Michael points out that __get_SP() is a pretty horrible
      function name. Let's give it a better name.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Reimplement __get_SP() as a function not a define · bfe9a2cf
      Authored by Anton Blanchard
      Li Zhong points out an issue with our current __get_SP()
      implementation. If ftrace function tracing is enabled (i.e. -pg
      profiling using _mcount) we spill a stack frame on 64-bit all the
      time.
      
      If a function calls __get_SP() and later calls a function that is
      tail-call optimised, we will pop the stack frame and the value
      returned by __get_SP() is no longer valid. An example from Li can
      be found in save_stack_trace -> save_context_stack:
      
      c0000000000432c0 <.save_stack_trace>:
      c0000000000432c0:       mflr    r0
      c0000000000432c4:       std     r0,16(r1)
      c0000000000432c8:       stdu    r1,-128(r1) <-- stack frame for _mcount
      c0000000000432cc:       std     r3,112(r1)
      c0000000000432d0:       bl      <._mcount>
      c0000000000432d4:       nop
      
      c0000000000432d8:       mr      r4,r1 <-- __get_SP()
      
      c0000000000432dc:       ld      r5,632(r13)
      c0000000000432e0:       ld      r3,112(r1)
      c0000000000432e4:       li      r6,1
      
      c0000000000432e8:       addi    r1,r1,128 <-- pop stack frame
      
      c0000000000432ec:       ld      r0,16(r1)
      c0000000000432f0:       mtlr    r0
      c0000000000432f4:       b       <.save_context_stack> <-- tail call optimized
      
      save_context_stack ends up with a stack pointer below the current
      one, and it is likely to be scribbled over.
      
      Fix this by making __get_SP() a function which returns the
      callers stack frame. Also replace inline assembly which grabs
      the stack pointer in save_stack_trace and show_stack with
      __get_SP().
      
      This also fixes an issue with perf_arch_fetch_caller_regs().
      It currently unwinds the stack once, which will skip a
      valid stack frame on a leaf function. With the __get_SP() fixes
      in this patch, we never need to unwind the stack frame to get
      to the first interesting frame.
      
      We have to export __get_SP() because perf_arch_fetch_caller_regs()
      (which is used in modules) calls it from a header file.
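
      For illustration, a hedged C rendering of why a real function works
      where the old define did not; the kernel's actual implementation is a
      two-instruction assembly routine, and __builtin_frame_address is used
      here only to convey the idea:

      	notrace unsigned long sketch_get_SP(void)
      	{
      		/* As a genuine function call, the value derived here refers
      		 * to the caller's frame and stays valid after we return,
      		 * unlike an inline asm snippet hoisted into a frame that a
      		 * tail call may pop. */
      		return (unsigned long)__builtin_frame_address(0);
      	}
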
      Reported-by: Li Zhong <zhong@linux.vnet.ibm.com>
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>