1. 17 Sep 2018, 1 commit
  2. 30 Jul 2018, 1 commit
  3. 24 Jul 2018, 2 commits
  4. 10 May 2018, 1 commit
  5. 30 Mar 2018, 1 commit
  6. 27 Mar 2018, 2 commits
  7. 06 Nov 2017, 1 commit
• powerpc/64s: Replace CONFIG_PPC_STD_MMU_64 with CONFIG_PPC_BOOK3S_64 · 4e003747
  Committed by Michael Ellerman
      CONFIG_PPC_STD_MMU_64 indicates support for the "standard" powerpc MMU
      on 64-bit CPUs. The "standard" MMU refers to the hash page table MMU
      found in "server" processors, from IBM mainly.
      
      Currently CONFIG_PPC_STD_MMU_64 is == CONFIG_PPC_BOOK3S_64. While it's
      annoying to have two symbols that always have the same value, it's not
      quite annoying enough to bother removing one.
      
      However with the arrival of Power9, we now have the situation where
      CONFIG_PPC_STD_MMU_64 is enabled, but the kernel is running using the
      Radix MMU - *not* the "standard" MMU. So it is now actively confusing
      to use it, because it implies that code is disabled or inactive when
      the Radix MMU is in use, however that is not necessarily true.
      
      So s/CONFIG_PPC_STD_MMU_64/CONFIG_PPC_BOOK3S_64/, and do some minor
      formatting updates of some of the affected lines.
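Illustratively, the resulting hunks have this shape (the guarded declaration is invented for the example):

	/* before */
	#ifdef CONFIG_PPC_STD_MMU_64
	extern void hash__setup_example(void);
	#endif

	/* after */
	#ifdef CONFIG_PPC_BOOK3S_64
	extern void hash__setup_example(void);
	#endif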
      
      This will be a pain for backports, but c'est la vie.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      4e003747
8. 03 Jul 2017, 1 commit
• powerpc/pseries: Fix passing of pp0 in updatepp() and updateboltedpp() · e71ff982
  Committed by Balbir Singh
      Once upon a time there were only two PP (page protection) bits. In ISA
      2.03 an additional PP bit was added, but because of the layout of the
      HPTE it could not be made contiguous with the existing PP bits.
      
      The result is that we now have three PP bits, named pp0, pp1, pp2,
      where pp0 occupies bit 63 of dword 1 of the HPTE and pp1 and pp2
      occupy bits 1 and 0 respectively. Until recently Linux hasn't used
      pp0, however with the addition of _PAGE_KERNEL_RO we started using it.
      
      The problem arises in the LPAR code, where we need to translate the PP
      bits into the argument for the H_PROTECT hypercall. Currently the code
      only passes bits 0-2 of newpp, which covers pp1, pp2 and N (no
      execute), meaning pp0 is not passed to the hypervisor at all.
      
      We can't simply pass it through in bit 63, as that would collide with a
      different field in the flags argument, as defined in PAPR. Instead we
      have to shift it down to bit 8 (IBM bit 55).
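A minimal sketch of the fix described, assuming the kernel's HPTE_R_PP0 constant names bit 63 (illustrative, not the exact diff):

	/* pp1, pp2 and N already sit in the low three bits of newpp and are
	 * passed through unchanged; pp0 lives in bit 63 and must be moved
	 * down to bit 8 (IBM bit 55) of the H_PROTECT flags argument. */
	flags = newpp & 7;
	flags |= (newpp & HPTE_R_PP0) >> 55;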
      
      Fixes: e58e87ad ("powerpc/mm: Update _PAGE_KERNEL_RO")
      Cc: stable@vger.kernel.org # v4.7+
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
[mpe: Simplify the test, rework change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      e71ff982
9. 01 Apr 2017, 1 commit
• powerpc/pseries: Skip using reserved virtual address range · 82228e36
  Committed by Aneesh Kumar K.V
Now that we use all of the available virtual address range, we need to
make sure we don't generate VSIDs that overlap with the reserved VSID
range. The reserved VSID range includes the virtual address range used
by the adjunct partition and also the VRMA virtual segment. We find the
context value that can result in generating such a VSID and reserve it
early in boot.
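As a rough sketch, reserving a context id at boot could look like the following; the helper name comes from the sign-off note below, while the IDA-based body is an assumption:

	/* Claim 'id' permanently so the context allocator never hands it
	 * out, keeping the overlapping VSIDs out of normal use. */
	int hash__reserve_context_id(int id)
	{
		int rc = ida_alloc_range(&mmu_context_ida, id, id, GFP_KERNEL);

		return (rc == id) ? 0 : rc;
	}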
      
We don't look at the adjunct range, because for now we disable adjunct
usage in a Linux LPAR via the CAS interface.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Rewrite hash__reserve_context_id(), move the rest into pseries]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      82228e36
10. 17 Mar 2017, 1 commit
  11. 10 Feb 2017, 1 commit
• powerpc/pseries: Add support for hash table resizing · dbcf929c
  Committed by David Gibson
      This adds support for using two hypercalls to change the size of the
      main hash page table while running as a PAPR guest. For now these
      hypercalls are only in experimental qemu versions.
      
      The interface is two part: first H_RESIZE_HPT_PREPARE is used to
      allocate and prepare the new hash table. This may be slow, but can be
      done asynchronously. Then, H_RESIZE_HPT_COMMIT is used to switch to the
      new hash table. This requires that no CPUs be concurrently updating the
      HPT, and so must be run under stop_machine().
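A condensed sketch of that two-phase flow (wrapper names and retry handling assumed, error paths trimmed):

	/* Phase 1: ask the hypervisor to allocate and prepare the new HPT.
	 * This may be slow, so keep retrying while it reports long-busy. */
	rc = plpar_resize_hpt_prepare(0, shift);
	while (H_IS_LONG_BUSY(rc)) {
		mdelay(100);
		rc = plpar_resize_hpt_prepare(0, shift);
	}
	if (rc != H_SUCCESS)
		return -EIO;

	/* Phase 2: switch to the new HPT with every other CPU quiesced. */
	rc = stop_machine(pseries_lpar_resize_hpt_commit, &shift, NULL);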
      
This also adds a debugfs file which can be used to manually control HPT
resizing, for testing purposes.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Paul Mackerras <paulus@samba.org>
[mpe: Rename the debugfs file to "hpt_order"]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      dbcf929c
12. 31 Jan 2017, 1 commit
• powerpc/64: Enable use of radix MMU under hypervisor on POWER9 · cc3d2940
  Committed by Paul Mackerras
To use radix as a guest, we first need to tell the hypervisor via the
ibm,client-architecture call that we support POWER9 and architecture
v3.00, and that we can do either radix or hash and that we would like
to choose later using an hcall (the H_REGISTER_PROC_TBL hcall).
      
      Then we need to check whether the hypervisor agreed to us using
      radix.  We need to do this very early on in the kernel boot process
      before any of the MMU initialization is done.  If the hypervisor
      doesn't agree, we can't use radix and therefore clear the radix
      MMU feature bit.
      
      Later, when we have set up our process table, which points to the
      radix tree for each process, we need to install that using the
      H_REGISTER_PROC_TBL hcall.
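A hedged sketch of that final registration step (flag values and wrapper usage assumed):

	/* Point the hypervisor at our process table and declare that its
	 * entries reference radix trees. */
	flags = PROC_TABLE_NEW | PROC_TABLE_RADIX | PROC_TABLE_GTSE;
	rc = plpar_hcall_norets(H_REGISTER_PROC_TBL, flags,
				__pa(process_tb), 0, table_size);
	if (rc != H_SUCCESS)
		pr_err("Failed to register process table (rc=%ld)\n", rc);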
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      cc3d2940
13. 09 Dec 2016, 1 commit
• tracing: Have the reg function allow to fail · 8cf868af
  Committed by Steven Rostedt (Red Hat)
Some tracepoints have a registration function that gets called when the
tracepoint is enabled. There may be cases where the registration function
must fail (for example, it can't allocate enough memory). In this case, the
tracepoint should also fail to register, otherwise the user would not know
why the tracepoint is not working.
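For example, a registration callback can now report an allocation failure so that enabling the tracepoint fails as well (names invented for the sketch):

	static void *trace_buf;

	static int my_trace_reg(void)
	{
		trace_buf = kzalloc(TRACE_BUF_SIZE, GFP_KERNEL);
		if (!trace_buf)
			return -ENOMEM;	/* registration, and thus the tracepoint, fails */
		return 0;
	}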
      
      Cc: David Howells <dhowells@redhat.com>
      Cc: Seiji Aguchi <seiji.aguchi@hds.com>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8cf868af
14. 16 Nov 2016, 1 commit
• powerpc/64: Simplify adaptation to new ISA v3.00 HPTE format · 6b243fcf
  Committed by Paul Mackerras
      This changes the way that we support the new ISA v3.00 HPTE format.
      Instead of adapting everything that uses HPTE values to handle either
      the old format or the new format, depending on which CPU we are on,
      we now convert explicitly between old and new formats if necessary
      in the low-level routines that actually access HPTEs in memory.
      This limits the amount of code that needs to know about the new
      format and makes the conversions explicit.  This is OK because the
      old format contains all the information that is in the new format.
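In practice the low-level accessors convert right at the memory boundary, along these lines (helper and feature-bit names assumed):

	/* HPTE values are still built in the old layout everywhere else;
	 * convert only when writing the entry out on an ISA v3.00 CPU. */
	if (cpu_has_feature(CPU_FTR_ARCH_300)) {
		hpte_r = hpte_old_to_new_r(hpte_v, hpte_r);
		hpte_v = hpte_old_to_new_v(hpte_v);
	}
	hptep->r = cpu_to_be64(hpte_r);
	hptep->v = cpu_to_be64(hpte_v);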
      
      This also fixes operation under a hypervisor, because the H_ENTER
      hypercall (and other hypercalls that deal with HPTEs) will continue
      to require the HPTE value to be supplied in the old format.  At
      present the kernel will not boot in HPT mode on POWER9 under a
      hypervisor.
      
      This fixes and partially reverts commit 50de596d
      ("powerpc/mm/hash: Add support for Power9 Hash", 2016-04-29).
      
      Fixes: 50de596d ("powerpc/mm/hash: Add support for Power9 Hash")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      6b243fcf
15. 14 Nov 2016, 1 commit
  16. 11 Oct 2016, 1 commit
• powerpc/pseries: Fix stack corruption in htpe code · 05af40e8
  Committed by Laurent Dufour
      This commit fixes a stack corruption in the pseries specific code dealing
      with the huge pages.
      
      In __pSeries_lpar_hugepage_invalidate() the buffer used to pass arguments
      to the hypervisor is not large enough. This leads to a stack corruption
      where a previously saved register could be corrupted leading to unexpected
      result in the caller, like the following panic:
      
        Oops: Kernel access of bad area, sig: 11 [#1]
        SMP NR_CPUS=2048 NUMA pSeries
        Modules linked in: virtio_balloon ip_tables x_tables autofs4
        virtio_blk 8139too virtio_pci virtio_ring 8139cp virtio
        CPU: 11 PID: 1916 Comm: mmstress Not tainted 4.8.0 #76
        task: c000000005394880 task.stack: c000000005570000
        NIP: c00000000027bf6c LR: c00000000027bf64 CTR: 0000000000000000
        REGS: c000000005573820 TRAP: 0300   Not tainted  (4.8.0)
        MSR: 8000000000009033 <SF,EE,ME,IR,DR,RI,LE>  CR: 84822884  XER: 20000000
        CFAR: c00000000010a924 DAR: 420000000014e5e0 DSISR: 40000000 SOFTE: 1
        GPR00: c00000000027bf64 c000000005573aa0 c000000000e02800 c000000004447964
        GPR04: c00000000404de18 c000000004d38810 00000000042100f5 00000000f5002104
        GPR08: e0000000f5002104 0000000000000001 042100f5000000e0 00000000042100f5
        GPR12: 0000000000002200 c00000000fe02c00 c00000000404de18 0000000000000000
        GPR16: c1ffffffffffe7ff 00003fff62000000 420000000014e5e0 00003fff63000000
        GPR20: 0008000000000000 c0000000f7014800 0405e600000000e0 0000000000010000
        GPR24: c000000004d38810 c000000004447c10 c00000000404de18 c000000004447964
        GPR28: c000000005573b10 c000000004d38810 00003fff62000000 420000000014e5e0
        NIP [c00000000027bf6c] zap_huge_pmd+0x4c/0x470
        LR [c00000000027bf64] zap_huge_pmd+0x44/0x470
        Call Trace:
        [c000000005573aa0] [c00000000027bf64] zap_huge_pmd+0x44/0x470 (unreliable)
        [c000000005573af0] [c00000000022bbd8] unmap_page_range+0xcf8/0xed0
        [c000000005573c30] [c00000000022c2d4] unmap_vmas+0x84/0x120
        [c000000005573c80] [c000000000235448] unmap_region+0xd8/0x1b0
        [c000000005573d80] [c0000000002378f0] do_munmap+0x2d0/0x4c0
        [c000000005573df0] [c000000000237be4] SyS_munmap+0x64/0xb0
        [c000000005573e30] [c000000000009560] system_call+0x38/0x108
        Instruction dump:
        fbe1fff8 fb81ffe0 7c7f1b78 7ca32b78 7cbd2b78 f8010010 7c9a2378 f821ffb1
        7cde3378 4bfffea9 7c7b1b79 41820298 <e87f0000> 48000130 7fa5eb78 7fc4f378
      
Most of the time, the bug surfaces in a caller further up the stack from
__pSeries_lpar_hugepage_invalidate(), which is quite confusing.

This bug has been present since v3.11 but was hidden if a caller of the
caller of __pSeries_lpar_hugepage_invalidate() pushed the corrupted
register (r18 in this case) onto the stack and did not use it until
restoring it. GCC 6.2.0 seems to trigger it more frequently.

This commit also changes the definition of the parameter buffer in
pSeries_lpar_flush_hash_range() to rely on the global define
PLPAR_HCALL9_BUFSIZE (no functional change there).
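Illustratively, the buffer declarations end up sized by the shared constant rather than a hand-counted literal:

	/* Argument buffer handed to plpar_hcall9(); using the global define
	 * guarantees it is never smaller than the hcall path expects. */
	unsigned long param[PLPAR_HCALL9_BUFSIZE];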
      
      Fixes: 1a527286 ("powerpc: Optimize hugepage invalidate")
      Cc: stable@vger.kernel.org # v3.11+
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      05af40e8
17. 26 Jul 2016, 1 commit
  18. 21 Jul 2016, 2 commits
  19. 16 Jun 2016, 1 commit
  20. 11 May 2016, 1 commit
  21. 01 May 2016, 2 commits
• powerpc/mm/hash: Add support for Power9 Hash · 50de596d
  Committed by Aneesh Kumar K.V
PowerISA 3.0 adds a partition table indexed by LPID. The partition table
allows us to specify the MMU model that will be used for guest and host
translation.

This patch adds support for the SLB-based hash model (UPRT = 0). What is
required with this model is to support the new hash page table entry
format and also to set up the partition table such that we use the hash
table for address translation.
      
      We don't have segment table support yet.
      
      In order to make sure we don't load KVM module on Power9 (since we don't
      have kvm support yet) this patch also disables KVM on Power9.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      50de596d
• powerpc/mm: Drop WIMG in favour of new constants · 30bda41a
  Committed by Aneesh Kumar K.V
      PowerISA 3.0 introduces two pte bits with the below meaning for radix:
        00 -> Normal Memory
        01 -> Strong Access Order (SAO)
        10 -> Non idempotent I/O (Cache inhibited and guarded)
        11 -> Tolerant I/O (Cache inhibited)
      
We drop the existing WIMG bits in the Linux page table in favour of the
above constants. We lose _PAGE_WRITETHRU with this conversion. We only
use write-through via pgprot_cached_wthru(), which is used by
fbdev/controlfb.c (the Apple control display driver) and also on PPC32.
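A sketch of the resulting constants; the names follow the scheme described, but the exact bit values shown are assumptions:

	#define _PAGE_SAO		0x00010 /* 01: strong access order */
	#define _PAGE_NON_IDEMPOTENT	0x00020 /* 10: non idempotent I/O */
	#define _PAGE_TOLERANT		0x00030 /* 11: tolerant I/O */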
      
      With respect to _PAGE_COHERENCE, we have been marking hpte always
      coherent for some time now. htab_convert_pte_flags() always added
      HPTE_R_M.
      
      NOTE: KVM changes need closer review.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      30bda41a
22. 01 Mar 2016, 1 commit
• powerpc/mm: Handle removing maybe-present bolted HPTEs · 27828f98
  Committed by David Gibson
      At the moment the hpte_removebolted callback in ppc_md returns void and
      will BUG_ON() if the hpte it's asked to remove doesn't exist in the first
      place.  This is awkward for the case of cleaning up a mapping which was
      partially made before failing.
      
      So, we add a return value to hpte_removebolted, and have it return ENOENT
      in the case that the HPTE to remove didn't exist in the first place.
      
      In the (sole) caller, we propagate errors in hpte_removebolted to its
      caller to handle.  However, we handle ENOENT specially, continuing to
      complete the unmapping over the specified range before returning the error
      to the caller.
      
      This means that htab_remove_mapping() will work sanely on a partially
      present mapping, removing any HPTEs which are present, while also returning
      ENOENT to its caller in case it's important there.
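In caller terms this behaves roughly as follows (sketch of the loop in htab_remove_mapping(), details condensed):

	rc = ppc_md.hpte_removebolted(vaddr, psize, ssize);
	if (rc == -ENOENT) {
		ret = -ENOENT;	/* remember it, but keep unmapping the range */
		continue;
	}
	if (rc < 0)
		return rc;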
      
      There are two callers of htab_remove_mapping():
         - In remove_section_mapping() we already WARN_ON() any error return,
           which is reasonable - in this case the mapping should be fully
           present
         - In vmemmap_remove_mapping() we BUG_ON() any error.  We change that to
           just a WARN_ON() in the case of ENOENT, since failing to remove a
           mapping that wasn't there in the first place probably shouldn't be
           fatal.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      27828f98
23. 14 Dec 2015, 2 commits
  24. 09 Apr 2015, 1 commit
  25. 29 Dec 2014, 1 commit
• powerpc/kdump: Ignore failure in enabling big endian exception during crash · c1caae3d
  Committed by Hari Bathini
In the LE kernel, we currently have a hack for kexec that resets the
exception endian before starting a new kernel, as the kernel being loaded
could be either big endian or little endian. In the kdump case, resetting
the exception endian fails when one or more CPUs are disabled. But we can
ignore the failure and still go ahead, as in most cases the crashkernel
will be of the same endianness as the primary kernel, and resetting the
endianness is not even needed in those cases. This patch adds a new inline
function to say whether this is the kdump path; it is used at the places
where such a check is needed.
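A minimal sketch of such a helper, with the name taken from the note below and the condition assumed:

	/* True when we are entering the crash kernel (kdump) rather than a
	 * regular kexec. */
	static inline bool kdump_in_progress(void)
	{
		return crashing_cpu >= 0;
	}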
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
[mpe: Rename to kdump_in_progress(), use bool, and edit comment]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      c1caae3d
26. 05 Dec 2014, 1 commit
• powerpc/mm: don't do tlbie for updatepp request with NO HPTE fault · aefa5688
  Committed by Aneesh Kumar K.V
updatepp can get called for a no-HPTE fault when we find from the Linux
page table that the translation was hashed before. In that case
we are sure that there is no existing translation, hence we can
avoid doing the tlbie.

We could possibly race with a parallel fault filling the TLB. But
that should be ok, because updatepp is only ever relaxing permissions.
We also look at the Linux pte permission bits when filling hash pte
permission bits. We also hold the Linux pte busy bits while
inserting/updating a hashpte entry, hence a parallel update of the
Linux pte is not possible. On the other hand, mprotect involves
ptep_modify_prot_start, which causes an hpte invalidate and not an
updatepp.
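Conceptually the updatepp path then looks something like this (flag name assumed for the sketch):

	/* The fault handler tells us there was no earlier HPTE for this
	 * address, so no stale translation can exist and the tlbie can be
	 * skipped. */
	if (!(flags & HPTE_NOHPTE_UPDATE))
		tlbie(vpn, bpsize, apsize, ssize, local);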
      
Performance numbers:
We use random_access_bench written by Anton.

Kernel with THP disabled and a smaller hash page table size.
      
          86.60%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_updatepp
           2.10%  random_access_b  random_access_bench              [.] doit
           1.99%  random_access_b  [kernel.kallsyms]                [k] .do_raw_spin_lock
           1.85%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_insert
           1.26%  random_access_b  [kernel.kallsyms]                [k] .native_flush_hash_range
           1.18%  random_access_b  [kernel.kallsyms]                [k] .__delay
           0.69%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_remove
           0.37%  random_access_b  [kernel.kallsyms]                [k] .clear_user_page
           0.34%  random_access_b  [kernel.kallsyms]                [k] .__hash_page_64K
           0.32%  random_access_b  [kernel.kallsyms]                [k] fast_exception_return
           0.30%  random_access_b  [kernel.kallsyms]                [k] .hash_page_mm
      
      With Fix:
      
          27.54%  random_access_b  random_access_bench              [.] doit
          22.90%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_insert
           5.76%  random_access_b  [kernel.kallsyms]                [k] .native_hpte_remove
           5.20%  random_access_b  [kernel.kallsyms]                [k] fast_exception_return
           5.12%  random_access_b  [kernel.kallsyms]                [k] .__hash_page_64K
           4.80%  random_access_b  [kernel.kallsyms]                [k] .hash_page_mm
           3.31%  random_access_b  [kernel.kallsyms]                [k] data_access_common
           1.84%  random_access_b  [kernel.kallsyms]                [k] .trace_hardirqs_on_caller
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      aefa5688
27. 02 Dec 2014, 1 commit
  28. 03 Nov 2014, 1 commit
• powerpc: Replace __get_cpu_var uses · 69111bac
  Committed by Christoph Lameter
      This still has not been merged and now powerpc is the only arch that does
      not have this change. Sorry about missing linuxppc-dev before.
      
      V2->V2
        - Fix up to work against 3.18-rc1
      
      __get_cpu_var() is used for multiple purposes in the kernel source. One of
      them is address calculation via the form &__get_cpu_var(x).  This calculates
      the address for the instance of the percpu variable of the current processor
      based on an offset.
      
      Other use cases are for storing and retrieving data from the current
      processors percpu area.  __get_cpu_var() can be used as an lvalue when
      writing data or on the right side of an assignment.
      
__get_cpu_var() is defined as :

#define __get_cpu_var(var) (*this_cpu_ptr(&(var)))

      __get_cpu_var() always only does an address determination. However, store
      and retrieve operations could use a segment prefix (or global register on
      other platforms) to avoid the address calculation.
      
      this_cpu_write() and this_cpu_read() can directly take an offset into a
      percpu area and use optimized assembly code to read and write per cpu
      variables.
      
      This patch converts __get_cpu_var into either an explicit address
      calculation using this_cpu_ptr() or into a use of this_cpu operations that
use the offset.  Thereby address calculations are avoided and fewer registers
      are used when code is generated.
      
      At the end of the patch set all uses of __get_cpu_var have been removed so
      the macro is removed too.
      
      The patch set includes passes over all arches as well. Once these operations
are used throughout, then specialized macros can be defined in non-x86
arches as well in order to optimize per cpu access by e.g. using a global
      register that may be set to the per cpu base.
      
      Transformations done to __get_cpu_var()
      
      1. Determine the address of the percpu instance of the current processor.
      
      	DEFINE_PER_CPU(int, y);
      	int *x = &__get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(&y);
      
      2. Same as #1 but this time an array structure is involved.
      
      	DEFINE_PER_CPU(int, y[20]);
      	int *x = __get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(y);
      
      3. Retrieve the content of the current processors instance of a per cpu
      variable.
      
      	DEFINE_PER_CPU(int, y);
      	int x = __get_cpu_var(y)
      
         Converts to
      
      	int x = __this_cpu_read(y);
      
      4. Retrieve the content of a percpu struct
      
      	DEFINE_PER_CPU(struct mystruct, y);
      	struct mystruct x = __get_cpu_var(y);
      
         Converts to
      
      	memcpy(&x, this_cpu_ptr(&y), sizeof(x));
      
      5. Assignment to a per cpu variable
      
      	DEFINE_PER_CPU(int, y)
      	__get_cpu_var(y) = x;
      
         Converts to
      
      	__this_cpu_write(y, x);
      
      6. Increment/Decrement etc of a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y)++
      
         Converts to
      
      	__this_cpu_inc(y)
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      CC: Paul Mackerras <paulus@samba.org>
Signed-off-by: Christoph Lameter <cl@linux.com>
[mpe: Fix build errors caused by set/or_softirq_pending(), and rework
      assignment in __set_breakpoint() to use memcpy().]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      69111bac
29. 30 Oct 2014, 1 commit
• powerpc/fadump: Fix endianess issues in firmware assisted dump handling · 408cddd9
  Committed by Hari Bathini
      Firmware-assisted dump (fadump) kernel code is not endian safe. The
      below patch fixes this issue. Tested this patch with upstream kernel.
      Below output shows crash tool successfully opening LE fadump vmcore.
      
          # crash vmlinux vmcore
          GNU gdb (GDB) 7.6
          This GDB was configured as "powerpc64le-unknown-linux-gnu"...
      
          KERNEL: vmlinux
        DUMPFILE: vmcore
            CPUS: 16
            DATE: Wed Dec 31 19:00:00 1969
          UPTIME: 00:03:28
    LOAD AVERAGE: 0.46, 0.86, 0.41
           TASKS: 268
        NODENAME: linux-dhr2
         RELEASE: 3.17.0-rc5-7-default
         VERSION: #6 SMP Tue Sep 30 01:06:34 EDT 2014
         MACHINE: ppc64le  (4116 Mhz)
          MEMORY: 40 GB
           PANIC: "Oops: Kernel access of bad area, sig: 11 [#1]" (check log for details)
             PID: 6223
         COMMAND: "bash"
            TASK: c0000009661b2500  [THREAD_INFO: c000000967ac0000]
             CPU: 2
           STATE: TASK_RUNNING (PANIC)
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
[mpe: Make the comment in pSeries_lpar_hptab_clear() clearer]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      408cddd9
30. 25 Sep 2014, 2 commits
  31. 27 Aug 2014, 2 commits
• Revert "powerpc: Replace __get_cpu_var uses" · 23f66e2d
  Committed by Tejun Heo
      This reverts commit 5828f666 due to
      build failure after merging with pending powerpc changes.
      
Link: http://lkml.kernel.org/g/20140827142243.6277eaff@canb.auug.org.au
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      23f66e2d
• powerpc: Replace __get_cpu_var uses · 5828f666
  Committed by Christoph Lameter
      __get_cpu_var() is used for multiple purposes in the kernel source. One of
      them is address calculation via the form &__get_cpu_var(x).  This calculates
      the address for the instance of the percpu variable of the current processor
      based on an offset.
      
      Other use cases are for storing and retrieving data from the current
      processors percpu area.  __get_cpu_var() can be used as an lvalue when
      writing data or on the right side of an assignment.
      
      __get_cpu_var() is defined as :
      
      #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
      
      __get_cpu_var() always only does an address determination. However, store
      and retrieve operations could use a segment prefix (or global register on
      other platforms) to avoid the address calculation.
      
      this_cpu_write() and this_cpu_read() can directly take an offset into a
      percpu area and use optimized assembly code to read and write per cpu
      variables.
      
      This patch converts __get_cpu_var into either an explicit address
      calculation using this_cpu_ptr() or into a use of this_cpu operations that
use the offset.  Thereby address calculations are avoided and fewer registers
      are used when code is generated.
      
      At the end of the patch set all uses of __get_cpu_var have been removed so
      the macro is removed too.
      
      The patch set includes passes over all arches as well. Once these operations
are used throughout, then specialized macros can be defined in non-x86
arches as well in order to optimize per cpu access by e.g. using a global
      register that may be set to the per cpu base.
      
      Transformations done to __get_cpu_var()
      
      1. Determine the address of the percpu instance of the current processor.
      
      	DEFINE_PER_CPU(int, y);
      	int *x = &__get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(&y);
      
      2. Same as #1 but this time an array structure is involved.
      
      	DEFINE_PER_CPU(int, y[20]);
      	int *x = __get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(y);
      
      3. Retrieve the content of the current processors instance of a per cpu
      variable.
      
      	DEFINE_PER_CPU(int, y);
      	int x = __get_cpu_var(y)
      
         Converts to
      
      	int x = __this_cpu_read(y);
      
      4. Retrieve the content of a percpu struct
      
      	DEFINE_PER_CPU(struct mystruct, y);
      	struct mystruct x = __get_cpu_var(y);
      
         Converts to
      
      	memcpy(&x, this_cpu_ptr(&y), sizeof(x));
      
      5. Assignment to a per cpu variable
      
      	DEFINE_PER_CPU(int, y)
      	__get_cpu_var(y) = x;
      
         Converts to
      
      	__this_cpu_write(y, x);
      
      6. Increment/Decrement etc of a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y)++
      
         Converts to
      
      	__this_cpu_inc(y)
      
      tj: Folded a fix patch.
          http://lkml.kernel.org/g/alpine.DEB.2.11.1408172143020.9652@gentwo.org
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      CC: Paul Mackerras <paulus@samba.org>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
      5828f666
32. 13 Aug 2014, 1 commit
  33. 11 Jul 2014, 1 commit
• powerpc/pseries: Use jump labels for hcall tracepoints · cc1adb5f
  Committed by Anton Blanchard
      hcall tracepoints add quite a few instructions to our hcall path:
      
      plpar_hcall:
      	mr      r2,r2
      	mfcr    r0
      	stw     r0,8(r1)
      	b       164		<---- start
      	ld      r12,0(r2)
      	std     r12,32(r1)
      	cmpdi   r12,0
      	beq     164		<---- end
      ...
      
We have an unconditional branch that gets nop'ed out during boot, and
a load/compare/branch. We also store the tracepoint value to the
      stack for the hcall_exit path to use.
      
      By using jump labels we can simplify this to just a single nop that
      gets replaced with a branch when the tracepoint is enabled:
      
      plpar_hcall:
      	mr      r2,r2
      	mfcr    r0
      	stw     r0,8(r1)
      	nop			<----
      ...
      
      If jump labels are not enabled, we fall back to the old method.
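A sketch of the C side of the scheme, assuming a static key that the tracepoint's reg/unreg callbacks toggle (the patched nop itself lives in the assembly hcall wrappers):

	struct static_key hcall_tracepoint_key = STATIC_KEY_INIT;

	/* Enabling/disabling the hcall tracepoints flips the key, which
	 * patches the nop in plpar_hcall into a branch and back. */
	void hcall_tracepoint_regfunc(void)
	{
		static_key_slow_inc(&hcall_tracepoint_key);
	}

	void hcall_tracepoint_unregfunc(void)
	{
		static_key_slow_dec(&hcall_tracepoint_key);
	}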
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      cc1adb5f