1. 01 May, 2016 (2 commits)
    • powerpc/mm/hash: Add support for Power9 Hash · 50de596d
      Committed by Aneesh Kumar K.V
      PowerISA 3.0 adds a partition table indexed by LPID. The partition
      table allows us to specify the MMU model that will be used for guest
      and host translation.
      
      This patch adds support for the SLB-based hash model (UPRT = 0).
      This model requires supporting the new hash page table entry format
      and setting up the partition table so that the hash table is used
      for address translation.
      
      We don't have segment table support yet.
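      
      A minimal sketch of the partition table setup described above,
      following the names used in the upstream commit (partition_tb,
      PATB_SIZE_SHIFT, SPRN_PTCR); the allocation details are illustrative:
      
      	static void __init hash_init_partition_table(phys_addr_t hash_table,
      						     unsigned long htab_size)
      	{
      		unsigned long patb_size = 1UL << PATB_SIZE_SHIFT;
      
      		/* Allocate and clear the partition table */
      		partition_tb = __va(memblock_alloc_base(patb_size, patb_size,
      							MEMBLOCK_ALLOC_ANYWHERE));
      		memset(partition_tb, 0, patb_size);
      
      		/* Entry 0: hash table base and size; UPRT = 0 (no segment tables) */
      		partition_tb->patb0 = cpu_to_be64(hash_table | htab_size);
      		partition_tb->patb1 = 0;
      
      		/* Point the hardware at the partition table */
      		mtspr(SPRN_PTCR, __pa(partition_tb) | (PATB_SIZE_SHIFT - 12));
      	}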
      
      In order to make sure we don't load the KVM module on Power9 (since
      we don't have KVM support yet), this patch also disables KVM on Power9.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm: Drop WIMG in favour of new constants · 30bda41a
      Committed by Aneesh Kumar K.V
      PowerISA 3.0 introduces two PTE bits with the following meaning for
      radix:
        00 -> Normal memory
        01 -> Strong Access Order (SAO)
        10 -> Non-idempotent I/O (cache inhibited and guarded)
        11 -> Tolerant I/O (cache inhibited)
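      
      In Linux PTE terms, the encoding above becomes a small set of
      constants; a sketch along the lines of the upstream commit (values
      shown for illustration):
      
      	/* Two PTE bits select the memory type */
      	#define _PAGE_SAO		0x00010000 /* 01: strong access order */
      	#define _PAGE_NON_IDEMPOTENT	0x00020000 /* 10: non-idempotent I/O */
      	#define _PAGE_TOLERANT		0x00030000 /* 11: tolerant I/O */
      	/* 00 (normal memory) is the absence of both bits */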
      
      We drop the existing WIMG bits in the Linux page table in favour of
      the above constants. We lose _PAGE_WRITETHRU with this conversion;
      write-through is only used via pgprot_cached_wthru(), which is used by
      fbdev/controlfb.c (the Apple "control" display driver) and by PPC32.
      
      With respect to _PAGE_COHERENCE, we have been marking HPTEs as always
      coherent for some time now; htab_convert_pte_flags() always added
      HPTE_R_M.
      
      NOTE: KVM changes need closer review.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  2. 01 Mar, 2016 (1 commit)
    • powerpc/mm: Handle removing maybe-present bolted HPTEs · 27828f98
      Committed by David Gibson
      At the moment the hpte_removebolted callback in ppc_md returns void and
      will BUG_ON() if the HPTE it's asked to remove doesn't exist in the
      first place.  This is awkward for the case of cleaning up a mapping
      which was partially made before failing.
      
      So, we add a return value to hpte_removebolted, and have it return
      -ENOENT in the case that the HPTE to remove didn't exist in the first
      place.
      
      In the (sole) caller, we propagate errors from hpte_removebolted to
      its caller to handle.  However, we handle -ENOENT specially, continuing
      to complete the unmapping over the specified range before returning
      the error to the caller.
      
      This means that htab_remove_mapping() will work sanely on a partially
      present mapping, removing any HPTEs which are present, while also
      returning -ENOENT to its caller in case it's important there.
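      
      A sketch of the resulting loop in htab_remove_mapping(), simplified
      from the commit (ppc_md is the machine-descriptor ops table):
      
      	for (vaddr = vstart; vaddr < vend; vaddr += step) {
      		rc = ppc_md.hpte_removebolted(vaddr, psize, ssize);
      		if (rc == -ENOENT) {
      			ret = -ENOENT;	/* remember it, keep unmapping */
      			continue;
      		}
      		if (rc < 0)
      			return rc;	/* any other error is fatal */
      	}
      	return ret;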
      
      There are two callers of htab_remove_mapping():
         - In remove_section_mapping() we already WARN_ON() any error return,
           which is reasonable - in this case the mapping should be fully
           present
         - In vmemmap_remove_mapping() we BUG_ON() any error.  We change that to
           just a WARN_ON() in the case of -ENOENT, since failing to remove a
           mapping that wasn't there in the first place probably shouldn't be
           fatal.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  3. 14 Dec, 2015 (2 commits)
  4. 09 Apr, 2015 (1 commit)
  5. 29 Dec, 2014 (1 commit)
    • powerpc/kdump: Ignore failure in enabling big endian exception during crash · c1caae3d
      Committed by Hari Bathini
      In the LE kernel, we currently have a hack for kexec that resets the
      exception endian before starting a new kernel, as the kernel being
      loaded could be either big endian or little endian. In the kdump case,
      resetting the exception endian fails when one or more CPUs are
      disabled. But we can ignore the failure and still go ahead, as in most
      cases the crash kernel will be of the same endianness as the primary
      kernel, and resetting the endianness is not even needed in those cases.
      This patch adds a new inline function to indicate whether this is the
      kdump path; it is used at places where such a check is needed.
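      
      The helper itself is tiny; a sketch matching the upstream change,
      where crashing_cpu is set once the crash shutdown path has run:
      
      	#ifdef CONFIG_KEXEC
      	extern int crashing_cpu;
      
      	static inline bool kdump_in_progress(void)
      	{
      		/* crashing_cpu stays -1 until a crash dump is triggered */
      		return crashing_cpu >= 0;
      	}
      	#else
      	static inline bool kdump_in_progress(void)
      	{
      		return false;
      	}
      	#endif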
      Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
      [mpe: Rename to kdump_in_progress(), use bool, and edit comment]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  6. 05 Dec, 2014 (1 commit)
    • powerpc/mm: don't do tlbie for updatepp request with NO HPTE fault · aefa5688
      Committed by Aneesh Kumar K.V
      updatepp can get called for a no-HPTE fault when we find from the
      Linux page table that the translation was hashed before. In that case
      we are sure that there is no existing translation, hence we can avoid
      doing the tlbie.
      
      We could possibly race with a parallel fault filling the TLB, but
      that should be OK because updatepp is only ever relaxing permissions.
      We also look at the Linux PTE permission bits when filling the hash
      PTE permission bits, and we hold the Linux PTE busy bits while
      inserting/updating a hash PTE entry, hence a parallel update of the
      Linux PTE is not possible. On the other hand, mprotect involves
      ptep_modify_prot_start, which causes an HPTE invalidate rather than
      an updatepp.
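      
      The mechanism is a flag threaded from the fault path down to the HPTE
      update; a sketch with the flag names from the commit (call site
      simplified):
      
      	#define HPTE_LOCAL_UPDATE	0x1	/* tlbie may be done locally */
      	#define HPTE_NOHPTE_UPDATE	0x2	/* no prior HPTE: skip tlbie */
      
      	/* in native_hpte_updatepp(): only invalidate if a stale
      	 * translation could actually have been cached */
      	if (!(flags & HPTE_NOHPTE_UPDATE))
      		tlbie(vpn, bpsize, apsize, ssize, local);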
      
      Performance numbers:
      We use random_access_bench written by Anton.
      
      Kernel with THP disabled and a smaller hash page table size.
      
          86.60%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_updatepp
           2.10%  random_access_b  random_access_bench              [.] doit
           1.99%  random_access_b  [kernel.kallsyms]                [k] .do_raw_spin_lock
           1.85%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_insert
           1.26%  random_access_b  [kernel.kallsyms]                [k] .native_flush_hash_range
           1.18%  random_access_b  [kernel.kallsyms]                [k] .__delay
           0.69%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_remove
           0.37%  random_access_b  [kernel.kallsyms]                [k] .clear_user_page
           0.34%  random_access_b  [kernel.kallsyms]                [k] .__hash_page_64K
           0.32%  random_access_b  [kernel.kallsyms]                [k] fast_exception_return
           0.30%  random_access_b  [kernel.kallsyms]                [k] .hash_page_mm
      
      With the fix:
      
          27.54%  random_access_b  random_access_bench              [.] doit
          22.90%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_insert
           5.76%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_remove
           5.20%  random_access_b  [kernel.kallsyms]                [k] fast_exception_return
           5.12%  random_access_b  [kernel.kallsyms]                [k] .__hash_page_64K
           4.80%  random_access_b  [kernel.kallsyms]                [k] .hash_page_mm
           3.31%  random_access_b  [kernel.kallsyms]                [k] data_access_common
           1.84%  random_access_b  [kernel.kallsyms]                [k] .trace_hardirqs_on_caller
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  7. 02 Dec, 2014 (1 commit)
  8. 03 Nov, 2014 (1 commit)
    • powerpc: Replace __get_cpu_var uses · 69111bac
      Committed by Christoph Lameter
      This still has not been merged and now powerpc is the only arch that does
      not have this change. Sorry about missing linuxppc-dev before.
      
      V2->V2
        - Fix up to work against 3.18-rc1
      
      __get_cpu_var() is used for multiple purposes in the kernel source. One of
      them is address calculation via the form &__get_cpu_var(x).  This calculates
      the address for the instance of the percpu variable of the current processor
      based on an offset.
      
      Other use cases are for storing and retrieving data from the current
      processor's percpu area.  __get_cpu_var() can be used as an lvalue when
      writing data or on the right side of an assignment.
      
      __get_cpu_var() is defined as:
      
      #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
      
      __get_cpu_var() only ever does an address determination. However, store
      and retrieve operations could use a segment prefix (or global register on
      other platforms) to avoid the address calculation.
      
      this_cpu_write() and this_cpu_read() can directly take an offset into a
      percpu area and use optimized assembly code to read and write per cpu
      variables.
      
      This patch converts __get_cpu_var into either an explicit address
      calculation using this_cpu_ptr() or into a use of this_cpu operations
      that use the offset.  Thereby address calculations are avoided and
      fewer registers are used when code is generated.
      
      At the end of the patch set all uses of __get_cpu_var have been removed so
      the macro is removed too.
      
      The patch set includes passes over all arches as well. Once these
      operations are used throughout, specialized macros can be defined in
      non-x86 arches as well in order to optimize per cpu access by, e.g.,
      using a global register that may be set to the per cpu base.
      
      Transformations done to __get_cpu_var()
      
      1. Determine the address of the percpu instance of the current processor.
      
      	DEFINE_PER_CPU(int, y);
      	int *x = &__get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(&y);
      
      2. Same as #1 but this time an array structure is involved.
      
      	DEFINE_PER_CPU(int, y[20]);
      	int *x = __get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(y);
      
      3. Retrieve the content of the current processor's instance of a per
      cpu variable.
      
      	DEFINE_PER_CPU(int, y);
      	int x = __get_cpu_var(y);
      
         Converts to
      
      	int x = __this_cpu_read(y);
      
      4. Retrieve the content of a percpu struct
      
      	DEFINE_PER_CPU(struct mystruct, y);
      	struct mystruct x = __get_cpu_var(y);
      
         Converts to
      
      	memcpy(&x, this_cpu_ptr(&y), sizeof(x));
      
      5. Assignment to a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y) = x;
      
         Converts to
      
      	__this_cpu_write(y, x);
      
      6. Increment/decrement etc. of a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y)++
      
         Converts to
      
      	__this_cpu_inc(y)
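      
      For example, the __set_breakpoint() rework mentioned in the sign-off
      note below replaces a struct assignment through __get_cpu_var() with a
      memcpy() into this_cpu_ptr() (sketch, surrounding code elided):
      
      	void __set_breakpoint(struct arch_hw_breakpoint *brk)
      	{
      		/* was: __get_cpu_var(current_brk) = *brk; */
      		memcpy(this_cpu_ptr(&current_brk), brk, sizeof(*brk));
      	}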
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Christoph Lameter <cl@linux.com>
      [mpe: Fix build errors caused by set/or_softirq_pending(), and rework
            assignment in __set_breakpoint() to use memcpy().]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  9. 30 Oct, 2014 (1 commit)
    • powerpc/fadump: Fix endianess issues in firmware assisted dump handling · 408cddd9
      Committed by Hari Bathini
      Firmware-assisted dump (fadump) kernel code is not endian safe; this
      patch fixes that. Tested with an upstream kernel. The output below
      shows the crash tool successfully opening an LE fadump vmcore.
      
          # crash vmlinux vmcore
          GNU gdb (GDB) 7.6
          This GDB was configured as "powerpc64le-unknown-linux-gnu"...
      
                KERNEL: vmlinux
              DUMPFILE: vmcore
                  CPUS: 16
                  DATE: Wed Dec 31 19:00:00 1969
                UPTIME: 00:03:28
          LOAD AVERAGE: 0.46, 0.86, 0.41
                 TASKS: 268
              NODENAME: linux-dhr2
               RELEASE: 3.17.0-rc5-7-default
               VERSION: #6 SMP Tue Sep 30 01:06:34 EDT 2014
               MACHINE: ppc64le  (4116 Mhz)
                MEMORY: 40 GB
                 PANIC: "Oops: Kernel access of bad area, sig: 11 [#1]" (check log for details)
                   PID: 6223
               COMMAND: "bash"
                  TASK: c0000009661b2500  [THREAD_INFO: c000000967ac0000]
                   CPU: 2
                 STATE: TASK_RUNNING (PANIC)
      Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
      [mpe: Make the comment in pSeries_lpar_hptab_clear() clearer]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  10. 25 Sep, 2014 (2 commits)
  11. 27 Aug, 2014 (2 commits)
    • Revert "powerpc: Replace __get_cpu_var uses" · 23f66e2d
      Committed by Tejun Heo
      This reverts commit 5828f666 due to a
      build failure after merging with pending powerpc changes.
      
      Link: http://lkml.kernel.org/g/20140827142243.6277eaff@canb.auug.org.au
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Replace __get_cpu_var uses · 5828f666
      Committed by Christoph Lameter
      __get_cpu_var() is used for multiple purposes in the kernel source. One of
      them is address calculation via the form &__get_cpu_var(x).  This calculates
      the address for the instance of the percpu variable of the current processor
      based on an offset.
      
      Other use cases are for storing and retrieving data from the current
      processor's percpu area.  __get_cpu_var() can be used as an lvalue when
      writing data or on the right side of an assignment.
      
      __get_cpu_var() is defined as :
      
      #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
      
      __get_cpu_var() only ever does an address determination. However, store
      and retrieve operations could use a segment prefix (or global register on
      other platforms) to avoid the address calculation.
      
      this_cpu_write() and this_cpu_read() can directly take an offset into a
      percpu area and use optimized assembly code to read and write per cpu
      variables.
      
      This patch converts __get_cpu_var into either an explicit address
      calculation using this_cpu_ptr() or into a use of this_cpu operations
      that use the offset.  Thereby address calculations are avoided and
      fewer registers are used when code is generated.
      
      At the end of the patch set all uses of __get_cpu_var have been removed so
      the macro is removed too.
      
      The patch set includes passes over all arches as well. Once these
      operations are used throughout, specialized macros can be defined in
      non-x86 arches as well in order to optimize per cpu access by, e.g.,
      using a global register that may be set to the per cpu base.
      
      Transformations done to __get_cpu_var()
      
      1. Determine the address of the percpu instance of the current processor.
      
      	DEFINE_PER_CPU(int, y);
      	int *x = &__get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(&y);
      
      2. Same as #1 but this time an array structure is involved.
      
      	DEFINE_PER_CPU(int, y[20]);
      	int *x = __get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(y);
      
      3. Retrieve the content of the current processor's instance of a per
      cpu variable.
      
      	DEFINE_PER_CPU(int, y);
      	int x = __get_cpu_var(y);
      
         Converts to
      
      	int x = __this_cpu_read(y);
      
      4. Retrieve the content of a percpu struct
      
      	DEFINE_PER_CPU(struct mystruct, y);
      	struct mystruct x = __get_cpu_var(y);
      
         Converts to
      
      	memcpy(&x, this_cpu_ptr(&y), sizeof(x));
      
      5. Assignment to a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y) = x;
      
         Converts to
      
      	__this_cpu_write(y, x);
      
      6. Increment/decrement etc. of a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y)++
      
         Converts to
      
      	__this_cpu_inc(y)
      
      tj: Folded a fix patch.
          http://lkml.kernel.org/g/alpine.DEB.2.11.1408172143020.9652@gentwo.org
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  12. 13 Aug, 2014 (1 commit)
  13. 11 Jul, 2014 (1 commit)
    • powerpc/pseries: Use jump labels for hcall tracepoints · cc1adb5f
      Committed by Anton Blanchard
      hcall tracepoints add quite a few instructions to our hcall path:
      
      plpar_hcall:
      	mr      r2,r2
      	mfcr    r0
      	stw     r0,8(r1)
      	b       164		<---- start
      	ld      r12,0(r2)
      	std     r12,32(r1)
      	cmpdi   r12,0
      	beq     164		<---- end
      ...
      
      We have an unconditional branch that gets nopped out during boot, plus
      a load/compare/branch. We also store the tracepoint value to the stack
      for the hcall_exit path to use.
      
      By using jump labels we can simplify this to just a single nop that
      gets replaced with a branch when the tracepoint is enabled:
      
      plpar_hcall:
      	mr      r2,r2
      	mfcr    r0
      	stw     r0,8(r1)
      	nop			<----
      ...
      
      If jump labels are not enabled, we fall back to the old method.
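      
      On the C side this amounts to a static key flipped by the tracepoint
      registration hooks; a sketch following the commit:
      
      	struct static_key hcall_tracepoint_key = STATIC_KEY_INIT;
      
      	void hcall_tracepoint_regfunc(void)
      	{
      		static_key_slow_inc(&hcall_tracepoint_key); /* nop -> branch */
      	}
      
      	void hcall_tracepoint_unregfunc(void)
      	{
      		static_key_slow_dec(&hcall_tracepoint_key); /* branch -> nop */
      	}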
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  14. 09 Dec, 2013 (2 commits)
  15. 21 Nov, 2013 (1 commit)
  16. 27 Aug, 2013 (2 commits)
  17. 14 Aug, 2013 (2 commits)
  18. 24 Jul, 2013 (1 commit)
  19. 01 Jul, 2013 (1 commit)
  20. 21 Jun, 2013 (2 commits)
  21. 30 Apr, 2013 (2 commits)
  22. 08 Apr, 2013 (1 commit)
  23. 17 Sep, 2012 (1 commit)
  24. 05 Sep, 2012 (1 commit)
  25. 21 Mar, 2012 (1 commit)
  26. 11 Jan, 2012 (1 commit)
    • powerpc: Fix RCU idle and hcall tracing · a5ccfee0
      Committed by Anton Blanchard
      Tracepoints should not be called inside an rcu_idle_enter/rcu_idle_exit
      region. Since pSeries calls H_CEDE in the idle loop, we were violating
      this rule.
      
      commit a7b152d5 (powerpc: Tell RCU about idle after hcall tracing)
      tried to work around it by delaying the rcu_idle_enter until after we
      called the hcall tracepoint, but there are a number of issues with it.
      
      The hcall tracepoint trampoline code is called conditionally when the
      tracepoint is enabled. If the tracepoint is not enabled we never call
      rcu_idle_enter. The idle_uses_rcu check was also done at compile time,
      which breaks multiplatform builds.
      
      The simple fix is to avoid tracing H_CEDE and rely on other tracepoints
      and the hypervisor dispatch trace log to work out if we called H_CEDE.
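      
      A sketch of the resulting check in the hcall trace glue (simplified;
      the exit side gets the same treatment):
      
      	void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
      	{
      		/* H_CEDE is issued inside the RCU idle region, and
      		 * tracepoints use RCU, so never trace it */
      		if (opcode == H_CEDE)
      			return;
      
      		trace_hcall_entry(opcode, args);
      	}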
      
      This fixes a hang during boot on pSeries.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  27. 03 Jan, 2012 (1 commit)
  28. 12 Dec, 2011 (1 commit)
    • powerpc: Tell RCU about idle after hcall tracing · a7b152d5
      Committed by Paul E. McKenney
      The PowerPC pSeries platform (CONFIG_PPC_PSERIES=y) enables
      hypervisor-call tracing for CONFIG_TRACEPOINTS=y kernels.  One of the
      hypervisor calls that is traced is the H_CEDE call in the idle loop
      that tells the hypervisor that this OS instance no longer needs the
      current CPU.  However, tracing uses RCU, so this combination of kernel
      configuration variables needs to avoid telling RCU about the current CPU's
      idleness until after the H_CEDE-entry tracing completes on the one hand,
      and must tell RCU that the current CPU is no longer idle before the
      H_CEDE-exit tracing starts.
      
      In all other cases, it suffices to inform RCU of CPU idleness upon
      idle-loop entry and exit.
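      
      The required ordering, sketched with illustrative names (the actual
      change threads this through the pSeries idle loop):
      
      	static void cede_idle(void)
      	{
      		trace_hcall_entry(H_CEDE, NULL); /* RCU still watching */
      		rcu_idle_enter();                /* RCU may now ignore this CPU */
      		cede_processor();                /* H_CEDE: yield to hypervisor */
      		rcu_idle_exit();                 /* RCU is watching again */
      		trace_hcall_exit(H_CEDE, 0, NULL);
      	}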
      
      This commit makes the required adjustments.
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
  29. 01 Nov, 2011 (1 commit)
  30. 05 Aug, 2011 (2 commits)