1. 09 Oct 2018, 12 commits
    • KVM: PPC: Book3S HV: Streamlined guest entry/exit path on P9 for radix guests · 95a6432c
      Authored by Paul Mackerras
      This creates an alternative guest entry/exit path which is used for
      radix guests on POWER9 systems when we have indep_threads_mode=Y.  In
      these circumstances there is exactly one vcpu per vcore and there is
      no coordination required between vcpus or vcores; the vcpu can enter
      the guest without needing to synchronize with anything else.
      
      The new fast path is implemented almost entirely in C in book3s_hv.c
      and runs with the MMU on until the guest is entered.  On guest exit
      we use the existing path until the point where we are committed to
      exiting the guest (as distinct from handling an interrupt in the
      low-level code and returning to the guest) and we have pulled the
      guest context from the XIVE.  At that point we check a flag in the
      stack frame to see whether we came in via the old path or the new
      path; if we came in via the new path then we go back to C code to do
      the rest of the process of saving the guest context and restoring the
      host context.
      
      The C code is split into separate functions for handling the
      OS-accessible state and the hypervisor state, with the idea that the
      latter can be replaced by a hypercall when we implement nested
      virtualization.
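
      In rough outline, the new path has this shape (a sketch only; the
      helper names here are hypothetical placeholders for the split
      described above):

        /* Sketch: C entry path, MMU on until the guest is entered. */
        static int p9_radix_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit)
        {
                int trap;

                save_host_os_state(vcpu);        /* OS-accessible state, in C */
                trap = switch_hv_state_and_go(vcpu, time_limit);
                                                 /* hypervisor state; intended
                                                  * to become a hypercall for
                                                  * nested virtualization */
                restore_host_os_state(vcpu);
                return trap;
        }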
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      [mpe: Fix CONFIG_ALTIVEC=n build]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • KVM: PPC: Book3S HV: Call kvmppc_handle_exit_hv() with vcore unlocked · 53655ddd
      Authored by Paul Mackerras
      Currently kvmppc_handle_exit_hv() is called with the vcore lock held
      because it is called within a for_each_runnable_thread loop.
      However, we already unlock the vcore within kvmppc_handle_exit_hv()
      under certain circumstances, and this is safe because (a) any vcpus
      that become runnable and are added to the runnable set by
      kvmppc_run_vcpu() have their vcpu->arch.trap == 0 and can't actually
      run in the guest (because the vcore state is VCORE_EXITING), and
      (b) for_each_runnable_thread is safe against addition or removal
      of vcpus from the runnable set.
      
      Therefore, in order to simplify things for following patches, let's
      drop the vcore lock in the for_each_runnable_thread loop, so
      kvmppc_handle_exit_hv() gets called without the vcore lock held.
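
      The resulting calling convention is roughly the following (a sketch;
      the lock field and handler signature are assumed from context):

        /* Sketch: drop the vcore lock around the exit handler. */
        spin_unlock(&vc->lock);
        r = kvmppc_handle_exit_hv(vcpu->arch.kvm_run, vcpu, current);
        spin_lock(&vc->lock);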
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • KVM: PPC: Book3S: Rework TM save/restore code and make it C-callable · 7854f754
      Authored by Paul Mackerras
      This adds a parameter to __kvmppc_save_tm and __kvmppc_restore_tm
      which allows the caller to indicate whether it wants the nonvolatile
      register state to be preserved across the call, as required by the C
      calling conventions.  This parameter being non-zero also causes the
      MSR bits that enable TM, FP, VMX and VSX to be preserved.  The
      condition register and DSCR are now always preserved.
      
      With this, kvmppc_save_tm_hv and kvmppc_restore_tm_hv can be called
      from C code provided the 3rd parameter is non-zero.  So that these
      functions can be called from modules, they now include code to set
      the TOC pointer (r2) on entry, as they can call other built-in C
      functions which will assume the TOC to have been set.
      
      Also, the fake suspend code in kvmppc_save_tm_hv is modified here to
      assume that treclaim in fake-suspend state does not modify any registers,
      which is the case on POWER9.  This enables the code to be simplified
      quite a bit.
      
      _kvmppc_save_tm_pr and _kvmppc_restore_tm_pr become much simpler with
      this change, since they now only need to save and restore TAR and pass
      1 for the 3rd argument to __kvmppc_{save,restore}_tm.
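
      From C, a call then looks roughly like this (a sketch; the prototypes
      are assumed, not quoted from the patch):

        /* Assumed prototypes for the now C-callable TM helpers. */
        void kvmppc_save_tm_hv(struct kvm_vcpu *vcpu, u64 guest_msr,
                               bool preserve_nv);
        void kvmppc_restore_tm_hv(struct kvm_vcpu *vcpu, u64 guest_msr,
                                  bool preserve_nv);

        /* Non-zero 3rd argument: keep the nonvolatile registers and the
         * TM/FP/VMX/VSX MSR bits intact across the call. */
        kvmppc_save_tm_hv(vcpu, vcpu->arch.shregs.msr, true);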
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • KVM: PPC: Book3S HV: Simplify real-mode interrupt handling · df709a29
      Authored by Paul Mackerras
      This streamlines the first part of the code that handles a hypervisor
      interrupt that occurred in the guest.  With this, all of the real-mode
      handling that occurs is done before the "guest_exit_cont" label; once
      we get to that label we are committed to exiting to host virtual mode.
      Thus the machine check and HMI real-mode handling is moved before that
      label.
      
      Also, the code to handle external interrupts is moved out of line, as
      is the code that calls kvmppc_realmode_hmi_handler().
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • KVM: PPC: Book3S HV: Extract PMU save/restore operations as C-callable functions · 41f4e631
      Authored by Paul Mackerras
      This pulls out the assembler code that is responsible for saving and
      restoring the PMU state for the host and guest into separate functions
      so they can be used from an alternate entry path.  The calling
      convention is made compatible with C.
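
      Conceptually the extracted helpers pair up like this (a sketch; exact
      names and arguments are assumed):

        kvmhv_save_host_pmu();           /* stash host PMU state          */
        kvmhv_load_guest_pmu(vcpu);      /* program the guest's counters  */
        /* ... guest runs ... */
        kvmhv_save_guest_pmu(vcpu, pmu_in_use);  /* capture guest counters;
                                                  * flag name assumed */
        kvmhv_load_host_pmu();           /* restore host PMU state        */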
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • KVM: PPC: Book3S HV: Move interrupt delivery on guest entry to C code · f7035ce9
      Authored by Paul Mackerras
      This is based on a patch by Suraj Jitindar Singh.
      
      This moves the code in book3s_hv_rmhandlers.S that generates an
      external, decrementer or privileged doorbell interrupt just before
      entering the guest to C code in book3s_hv_builtin.c.  This is to
      make future maintenance and modification easier.  The algorithm
      expressed in the C code is almost identical to the previous
      algorithm.
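
      A condensed sketch of the C-side logic (field and helper names are
      assumed; the real code also handles the decrementer and interrupt
      priorities):

        /* Deliver the highest-priority pending interrupt just before
         * entering the guest, now from C rather than from
         * book3s_hv_rmhandlers.S. */
        if (vcpu->arch.pending_exceptions & (1UL << BOOK3S_IRQPRIO_EXTERNAL))
                kvmppc_inject_interrupt(vcpu, BOOK3S_INTERRUPT_EXTERNAL, 0);
        else if (vcpu->arch.doorbell_request)
                kvmppc_inject_interrupt(vcpu, BOOK3S_INTERRUPT_DOORBELL, 0);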
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • KVM: PPC: Book3S HV: Remove left-over code in XICS-on-XIVE emulation · 966eba93
      Authored by Paul Mackerras
      This removes code that clears the external interrupt pending bit in
      the pending_exceptions bitmap.  This is left over from an earlier
      iteration of the code where this bit was set when an escalation
      interrupt arrived in order to wake the vcpu from cede.  Currently
      we set the vcpu->arch.irq_pending flag instead for this purpose.
      Therefore there is no need to do anything with the pending_exceptions
      bitmap.
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • KVM: PPC: Book3S: Simplify external interrupt handling · d24ea8a7
      Authored by Paul Mackerras
      Currently we use two bits in the vcpu pending_exceptions bitmap to
      indicate that an external interrupt is pending for the guest, one
      for "one-shot" interrupts that are cleared when delivered, and one
      for interrupts that persist until cleared by an explicit action of
      the OS (e.g. an acknowledge to an interrupt controller).  The
      BOOK3S_IRQPRIO_EXTERNAL bit is used for one-shot interrupt requests
      and BOOK3S_IRQPRIO_EXTERNAL_LEVEL is used for persisting interrupts.
      
      In practice BOOK3S_IRQPRIO_EXTERNAL never gets used, because our
      Book3S platforms generally, and pseries in particular, expect
      external interrupt requests to persist until they are acknowledged
      at the interrupt controller.  That combined with the confusion
      introduced by having two bits for what is essentially the same thing
      makes it attractive to simplify things by only using one bit.  This
      patch does that.
      
      With this patch there is only BOOK3S_IRQPRIO_EXTERNAL, and by default
      it has the semantics of a persisting interrupt.  In order to avoid
      breaking the ABI, we introduce a new "external_oneshot" flag which
      preserves the behaviour of the KVM_INTERRUPT ioctl with the
      KVM_INTERRUPT_SET argument.
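
      A sketch of the resulting queueing logic (reconstructed from the
      description; treat the surrounding names as approximate):

        void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
                                        struct kvm_interrupt *irq)
        {
                /* Every external interrupt is a persisting (level)
                 * interrupt by default; the one-shot ABI case is
                 * remembered in a separate flag so that delivery can
                 * clear the pending bit afterwards. */
                if (irq->irq == KVM_INTERRUPT_SET)
                        vcpu->arch.external_oneshot = 1;
                kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_EXTERNAL);
        }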
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Turn off CPU_FTR_P9_TM_HV_ASSIST in non-hypervisor mode · e7b17d50
      Authored by Paul Mackerras
      When doing nested virtualization, it is only necessary to do the
      transactional memory hypervisor assist at level 0, that is, when
      we are in hypervisor mode.  Nested hypervisors can just use the TM
      facilities as architected.  Therefore we should clear the
      CPU_FTR_P9_TM_HV_ASSIST bit when we are not in hypervisor mode,
      along with the CPU_FTR_HVMODE bit.
      
      Doing this will not change anything at this stage because the only
      code that tests CPU_FTR_P9_TM_HV_ASSIST is in HV KVM, which currently
      can only be used when CPU_FTR_HVMODE is set.
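
      The idea, in sketch form (the exact location of this code in the CPU
      feature setup is assumed):

        /* MSR[HV] clear means we run under a hypervisor: use architected
         * TM, so drop the HV-assist bit along with HV mode. */
        if (!(mfmsr() & MSR_HV))
                cur_cpu_spec->cpu_features &=
                        ~(CPU_FTR_HVMODE | CPU_FTR_P9_TM_HV_ASSIST);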
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • KVM: PPC: Remove redundand permission bits removal · a3ac077b
      Authored by Alexey Kardashevskiy
      The kvmppc_gpa_to_ua() helper itself takes care of the permission
      bits in the TCE and yet every single caller removes them.
      
      This changes semantics of kvmppc_gpa_to_ua() so it takes TCEs
      (which are GPAs + TCE permission bits) to make the callers simpler.
      
      This should cause no behavioural change.
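
      The simplification at a typical call site looks roughly like this
      (a sketch; the surrounding arguments are abbreviated from memory):

        /* Before: every caller masked the permission bits itself. */
        ret = kvmppc_gpa_to_ua(kvm, tce & ~(TCE_PCI_READ | TCE_PCI_WRITE),
                               &ua, NULL);

        /* After: pass the raw TCE; the helper masks the bits internally. */
        ret = kvmppc_gpa_to_ua(kvm, tce, &ua, NULL);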
      Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • KVM: PPC: Propagate errors to the guest when failed instead of ignoring · 2691f0ff
      Authored by Alexey Kardashevskiy
      At the moment, if the PUT_TCE{_INDIRECT} handlers fail to update
      the hardware tables, we print a warning once, clear the entry and
      continue. This was done because, at the time, the assumption was
      that if a VFIO device is hotplugged into the guest and userspace
      replays virtual DMA mappings (i.e. TCEs) into the hardware tables,
      a failure there leaves us with nothing useful to do about it.
      
      However the assumption is not valid: these handlers are not called
      for TCE replay (the VFIO ioctl interface is used for that), they
      only handle new TCEs.
      
      This returns an error to the guest if there is a request which
      cannot be processed. Currently the only possible failure is
      H_TOO_HARD.
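
      In sketch form (handler shape assumed), the change is from
      warn-and-clear to propagate:

        ret = kvmppc_tce_iommu_map(vcpu->kvm, tbl, entry, ua, dir);
        if (ret != H_SUCCESS)
                return ret;     /* today this can only be H_TOO_HARD */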
      Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • KVM: PPC: Validate TCEs against preregistered memory page sizes · 42de7b9e
      Authored by Alexey Kardashevskiy
      Userspace can request an arbitrary supported page size for a DMA
      window, and this works fine as long as the mapped memory is backed
      by pages of the same or a bigger size; if this is not the case,
      mm_iommu_ua_to_hpa{_rm}() fails and the tables do not get populated
      with dangerously incorrect TCEs.
      
      However, since it is quite easy to misconfigure KVM, and we do not
      revert changes already made to the TCE tables if an error happens
      midway, we had better validate the acceptable page size before we
      even touch the tables.
      
      This enhances kvmppc_tce_validate() to check the hardware IOMMU page sizes
      against the preregistered memory page sizes.
      
      Since the new check uses real/virtual mode helpers, this renames
      kvmppc_tce_validate() to kvmppc_rm_tce_validate() to handle the real
      mode case and mirrors it for the virtual mode under the old name.
      The real mode handler is not used for the virtual mode because:
      1. it uses _lockless() list traversing primitives instead of RCU;
      2. real mode's mm_iommu_ua_to_hpa_rm() uses vmalloc_to_phys(), which
      virtual mode does not need; and since on POWER9 with radix only the
      virtual mode handlers actually work, we do not want to slow that
      path down even a bit.
      
      This removes EXPORT_SYMBOL_GPL(kvmppc_tce_validate) as the validators
      are static now.
      
      From now on, attempts to map IOMMU pages bigger than allowed will
      result in a KVM exit.
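
      The virtual-mode check amounts to something like this (a sketch;
      the helper usage is reconstructed):

        /* Fail validation if the memory backing 'ua' is preregistered
         * with pages smaller than the IOMMU page size (1UL << shift). */
        mem = mm_iommu_lookup(vcpu->kvm->mm, ua, 1UL << shift);
        if (!mem || mm_iommu_ua_to_hpa(mem, ua, shift, &hpa))
                return H_TOO_HARD;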
      Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      [mpe: Fix KVM_HV=n build]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  2. 02 Oct 2018, 2 commits
  3. 12 Sep 2018, 2 commits
    • KVM: PPC: Book3S HV: Don't use compound_order to determine host mapping size · 71d29f43
      Authored by Nicholas Piggin
      THP paths can defer splitting compound pages until after the actual
      remap and TLB flushes to split a huge PMD/PUD. This causes radix
      partition scope page table mappings to get out of sync with the host
      qemu page table mappings.
      
      This results in random memory corruption in the guest when running
      with THP. The easiest way to reproduce it is to use the KVM balloon
      to free up a lot of memory in the guest and then shrink the balloon
      to give the memory back, while some work is being done in the guest.
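
      The direction of the fix is to size the mapping from the host PTE
      rather than from the struct page (a sketch; the usage is
      reconstructed):

        /* Read the host mapping size from the Linux PTE itself, which
         * is stable across deferred THP splitting. */
        ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
        if (!shift)
                shift = PAGE_SHIFT;     /* base page size mapping */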
      
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
      Cc: kvm-ppc@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • KVM: PPC: Avoid marking DMA-mapped pages dirty in real mode · 425333bf
      Authored by Alexey Kardashevskiy
      At the moment the real mode handler of H_PUT_TCE calls iommu_tce_xchg_rm()
      which in turn reads the old TCE and if it was a valid entry, marks
      the physical page dirty if it was mapped for writing. Since it is in
      real mode, realmode_pfn_to_page() is used instead of pfn_to_page()
      to get the page struct. However SetPageDirty() itself reads the
      compound page head and returns a virtual address for the head page
      struct, and setting the dirty bit on that kills the system.
      
      This adds additional dirty bit tracking into the MM/IOMMU API for
      use in real mode. Note that this does not change how VFIO and
      KVM (in virtual mode) set this bit. The KVM (real mode) changes include:
      - use the lowest bit of the cached host phys address to carry
      the dirty bit;
      - mark pages dirty when they are unpinned which happens when
      the preregistered memory is released which always happens in virtual
      mode;
      - add mm_iommu_ua_mark_dirty_rm() helper to set delayed dirty bit;
      - change iommu_tce_xchg_rm() to take the kvm struct for the mm to use
      in the new mm_iommu_ua_mark_dirty_rm() helper;
      - move iommu_tce_xchg_rm() to book3s_64_vio_hv.c (which is the only
      caller anyway) to reduce the real mode KVM and IOMMU knowledge
      across different subsystems.
      
      This removes realmode_pfn_to_page() as it is not used anymore.
      
      While we are at it, remove some EXPORT_SYMBOL_GPL() as that code is
      for real mode only and modules cannot call it anyway.
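
      A sketch of the delayed dirty tracking (the pointer-lookup helper
      here is hypothetical; only the low-bit trick is from the text):

        #define MM_IOMMU_TABLE_GROUP_PAGE_DIRTY 0x1UL

        void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua)
        {
                /* The cached hpa is page aligned, so bit 0 is free to
                 * carry "dirty"; the bit is acted upon later, at unpin
                 * time, in virtual mode. */
                unsigned long *hpa = lookup_cached_hpa(mm, ua); /* hypothetical */

                if (hpa)
                        *hpa |= MM_IOMMU_TABLE_GROUP_PAGE_DIRTY;
        }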
      Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
  4. 30 Aug 2018, 1 commit
    • powerpc: disable support for relative ksymtab references · ff69279a
      Authored by Ard Biesheuvel
      The newly added code that emits ksymtab entries as pairs of 32-bit
      relative references interacts poorly with the way powerpc lays out
      its address space: when a module exports a per-CPU variable, the
      primary module region covering the ksymtab entry (and thus the
      32-bit relative reference) is too far away from the actual per-CPU
      variable's base address (to which the per-CPU offsets are applied
      to obtain the respective address of each CPU's copy), resulting in
      corruption when the module loader attempts to resolve symbol
      references of modules that are loaded on top and link to the
      exported per-CPU symbol.
      
      So let's disable this feature on powerpc.  Even though it implements
      CONFIG_RELOCATABLE, it does not implement CONFIG_RANDOMIZE_BASE and so
      KASLR kernels (which are the main target of the feature) do not exist on
      powerpc anyway.
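
      The change itself is essentially removing one select from
      arch/powerpc/Kconfig (reconstructed):

        config PPC
        	# removed by this commit:
        	#   select HAVE_ARCH_PREL32_RELOCATIONS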
      Reported-by: Andreas Schwab <schwab@linux-m68k.org>
      Suggested-by: Nicholas Piggin <nicholas.piggin@gmail.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 24 Aug 2018, 2 commits
  6. 23 Aug 2018, 6 commits
    • powerpc/mce: Fix SLB rebolting during MCE recovery path. · 0f52b3a0
      Authored by Mahesh Salgaonkar
      The commit e7e81847 ("powerpc/64s: move machine check SLB flushing
      to mm/slb.c") introduced a bug in reloading bolted SLB entries. Unused
      bolted entries are stored with .esid=0 in the slb_shadow area, and
      that value is now used directly as the RB input to slbmte, which means
      the RB[52:63] index field is set to 0, which causes SLB entry 0 to be
      cleared.
      
      Fix this by storing the index bits in the unused bolted entries, which
      directs the slbmte to the right place.
      
      The SLB shadow area is also used by the hypervisor, but PAPR is okay
      with that, from LoPAPR v1.1, 14.11.1.3 SLB Shadow Buffer:
      
        Note: SLB is filled sequentially starting at index 0
        from the shadow buffer ignoring the contents of
        RB field bits 52-63
      
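      The fix, in sketch form (reconstructed; the shadow accessor names
      follow the usual slb.c conventions):

        /* Store the index in unused bolted slots so a later slbmte of
         * .esid lands on the right SLB entry instead of entry 0. */
        static inline void slb_shadow_clear(enum slb_index index)
        {
                WRITE_ONCE(get_slb_shadow()->save_area[index].esid,
                           cpu_to_be64(index));
        }
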
      Fixes: e7e81847 ("powerpc/64s: move machine check SLB flushing to mm/slb.c")
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • KVM: PPC: Book3S: Fix guest DMA when guest partially backed by THP pages · 8cfbdbdc
      Authored by Paul Mackerras
      Commit 76fa4975 ("KVM: PPC: Check if IOMMU page is contained in
      the pinned physical page", 2018-07-17) added some checks to ensure
      that guest DMA mappings don't attempt to map more than the guest is
      entitled to access. However, errors in the logic mean that legitimate
      guest requests to map pages for DMA are being denied in some
      situations. Specifically, if the first page of the range passed to
      mm_iommu_get() is mapped with a normal page, and subsequent pages are
      mapped with transparent huge pages, we end up with mem->pageshift ==
      0. That means that the page size checks in mm_iommu_ua_to_hpa() and
      mm_iommu_ua_to_hpa_rm() will always fail for every page in that
      region, and thus the guest can never map any memory in that region for
      DMA, typically leading to a flood of error messages like this:
      
        qemu-system-ppc64: VFIO_MAP_DMA: -22
        qemu-system-ppc64: vfio_dma_map(0x10005f47780, 0x800000000000000, 0x10000, 0x7fff63ff0000) = -22 (Invalid argument)
      
      The logic errors in mm_iommu_get() are:
      
        (a) use of 'ua' not 'ua + (i << PAGE_SHIFT)' in the find_linux_pte()
            call (meaning that find_linux_pte() returns the pte for the
            first address in the range, not the address we are currently up
            to);
        (b) use of 'pageshift' as the variable to receive the hugepage shift
            returned by find_linux_pte() - for a normal page this gets set
            to 0, leading to us setting mem->pageshift to 0 when we conclude
            that the pte returned by find_linux_pte() didn't match the page
            we were looking at;
        (c) comparing 'compshift', which is a page order, i.e. log base 2 of
            the number of pages, with 'pageshift', which is a log base 2 of
            the number of bytes.
      
      To fix these problems, this patch introduces 'cur_ua' to hold the
      current user address and uses that in the find_linux_pte() call;
      introduces 'pteshift' to hold the hugepage shift found by
      find_linux_pte(); and compares 'pteshift' with 'compshift +
      PAGE_SHIFT' rather than 'compshift'.
      
      The patch also moves the local_irq_restore to the point after the PTE
      pointer returned by find_linux_pte() has been dereferenced because
      otherwise the PTE could change underneath us, and adds a check to
      avoid doing the find_linux_pte() call once mem->pageshift has been
      reduced to PAGE_SHIFT, as an optimization.
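
      Pulling fixes (a)-(c) together, the corrected loop body looks
      roughly like this (a condensed sketch; surrounding code omitted):

        unsigned long cur_ua = ua + (i << PAGE_SHIFT);            /* fix (a) */
        unsigned int pteshift, pageshift = PAGE_SHIFT;

        pte = find_linux_pte(mm->pgd, cur_ua, NULL, &pteshift);   /* fix (b) */
        /* fix (c): compare like units, log2 of bytes on both sides */
        if (pte && pteshift == compshift + PAGE_SHIFT)
                pageshift = pteshift;
        mem->pageshift = min(mem->pageshift, pageshift);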
      
      Fixes: 76fa4975 ("KVM: PPC: Check if IOMMU page is contained in the pinned physical page")
      Cc: stable@vger.kernel.org # v4.12+
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/radix: Only need the Nest MMU workaround for R -> RW transition · f08d08f3
      Authored by Aneesh Kumar K.V
      The Nest MMU workaround is only needed for RW upgrades. Avoid doing
      that for other PTE updates.
      
      We also avoid clearing the PTE while marking it invalid, because
      other page table walkers would then find the PTE none, which can
      result in unexpected behaviour. Instead we clear
      _PAGE_PRESENT and set the software PTE bit _PAGE_INVALID.
      pte_present() is already updated to check for both bits. This makes
      sure page table walkers will find the PTE present and things like
      pte_pfn(pte) returns the right value.
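
      In PTE-bit terms the invalidation step looks like this (a sketch):

        /* Keep pte_present() true for software walkers while the
         * hardware sees the entry as invalid. */
        pte_t invalid = __pte((pte_val(pte) | _PAGE_INVALID) & ~_PAGE_PRESENT);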
      
      Based on an original patch from Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/books3s: Add new pte bit to mark pte temporarily invalid. · bd0dbb73
      Authored by Aneesh Kumar K.V
      When splitting a huge pmd pte, we need to mark the pmd entry
      invalid. We can do that by clearing the _PAGE_PRESENT bit. But then
      it will be taken as a swap pte. In order to differentiate between
      the two, use a software pte bit when invalidating.
      
      For regular ptes, due to bd5050e3 ("powerpc/mm/radix: Change pte
      relax sequence to handle nest MMU hang") we need to mark the pte
      entry invalid when relaxing access permissions. Instead of marking
      it pte_none, which can cause page table walk routines to skip this
      pte entry, invalidate it but keep it marked present.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/nohash: fix pte_access_permitted() · 810e9f86
      Authored by Christophe Leroy
      Commit 5769beaf ("powerpc/mm: Add proper pte access check helper
      for other platforms") replaced generic pte_access_permitted() by an
      arch specific one.
      
      The generic one is defined as
      (pte_present(pte) && (!(write) || pte_write(pte)))
      
      The arch specific one is open coded, checking that the _PAGE_USER
      and _PAGE_WRITE (_PAGE_RW) flags are set, but failing to check that
      _PAGE_RO and _PAGE_PRIVILEGED are unset, which leads to a useless
      test on targets like the 8xx which define _PAGE_RW and _PAGE_USER as 0.
      
      Commit 5fa5b16b ("powerpc/mm/hugetlb: Use pte_access_permitted
      for hugetlb access check") replaced some tests performed with
      pte helpers by a call to pte_access_permitted(), leading to the same
      issue.
      
      This patch rewrites powerpc/nohash pte_access_permitted()
      using pte helpers.
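
      The rewrite in terms of the generic helpers looks roughly like this
      (a sketch; the exact helper set available on nohash is assumed):

        static inline bool pte_access_permitted(pte_t pte, bool write)
        {
                /* _PAGE_RO/_PAGE_PRIVILEGED and friends are folded into
                 * the helpers, so this works on all nohash variants. */
                if (!pte_present(pte) || !pte_user(pte) || !pte_read(pte))
                        return false;
                if (write && !pte_write(pte))
                        return false;
                return true;
        }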
      
      Fixes: 5769beaf ("powerpc/mm: Add proper pte access check helper for other platforms")
      Fixes: 5fa5b16b ("powerpc/mm/hugetlb: Use pte_access_permitted for hugetlb access check")
      Cc: stable@vger.kernel.org # v4.15+
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • arch: enable relative relocations for arm64, power and x86 · 271ca788
      Authored by Ard Biesheuvel
      Patch series "add support for relative references in special sections", v10.
      
      This adds support for emitting special sections such as initcall arrays,
      PCI fixups and tracepoints as relative references rather than absolute
      references.  This reduces the size by 50% on 64-bit architectures, but
      more importantly, it removes the need for carrying relocation metadata for
      these sections in relocatable kernels (e.g., for KASLR) that needs to be
      fixed up at boot time.  On arm64, this reduces the vmlinux footprint of
      such a reference by 8x (8 byte absolute reference + 24 byte RELA entry vs
      4 byte relative reference).
      
      Patch #3 was sent out before as a single patch.  This series supersedes
      the previous submission.  This version makes relative ksymtab entries
      dependent on the new Kconfig symbol HAVE_ARCH_PREL32_RELOCATIONS rather
      than trying to infer from kbuild test robot replies for which
      architectures it should be blacklisted.
      
      Patch #1 introduces the new Kconfig symbol HAVE_ARCH_PREL32_RELOCATIONS,
      and sets it for the main architectures that are expected to benefit the
      most from this feature, i.e., 64-bit architectures or ones that use
      runtime relocations.
      
      Patch #2 adds support for #define'ing __DISABLE_EXPORTS to get rid of
      ksymtab/kcrctab sections in decompressor and EFI stub objects when
      rebuilding existing C files to run in a different context.
      
      Patches #4 - #6 implement relative references for initcalls, PCI fixups
      and tracepoints, respectively, all of which produce sections with order
      ~1000 entries on an arm64 defconfig kernel with tracing enabled.  This
      means we save about 28 KB of vmlinux space for each of these patches.
      
      [From the v7 series blurb, which included the jump_label patches as well]:
      
        For the arm64 kernel, all patches combined reduce the memory footprint
        of vmlinux by about 1.3 MB (using a config copied from Ubuntu that has
        KASLR enabled), of which ~1 MB is the size reduction of the RELA section
        in .init, and the remaining 300 KB is reduction of .text/.data.
      
      This patch (of 6):
      
      Before updating certain subsystems to use place-relative 32-bit
      relocations in special sections, to save space and reduce the number of
      absolute relocations that need to be processed at runtime by relocatable
      kernels, introduce the Kconfig symbol and define it for some architectures
      that should be able to support and benefit from it.
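
      The symbol itself is a bare Kconfig bool in arch/Kconfig
      (reconstructed; help text omitted):

        config HAVE_ARCH_PREL32_RELOCATIONS
        	bool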
      
      Link: http://lkml.kernel.org/r/20180704083651.24360-2-ard.biesheuvel@linaro.org
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Thomas Garnier <thgarnie@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "Serge E. Hallyn" <serge@hallyn.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: James Morris <jmorris@namei.org>
      Cc: Nicolas Pitre <nico@linaro.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: James Morris <james.morris@microsoft.com>
      Cc: Jessica Yu <jeyu@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 22 Aug 2018, 2 commits
  8. 21 Aug 2018, 2 commits
    • powerpc/topology: Get topology for shared processors at boot · 2ea62630
      Authored by Srikar Dronamraju
      On a shared LPAR, Phyp will not update the CPU associativity at boot
      time. Just after boot, the system recognizes itself as a shared LPAR
      and triggers a request for the correct CPU associativity. But by
      then the scheduler has already created/destroyed its sched domains.
      
      This causes:
        - Broken load balancing across nodes, causing islands of cores.
        - Performance degradation, especially if the system is lightly loaded.
        - dmesg to wrongly report all CPUs to be in Node 0.
        - Messages in dmesg reporting broken ("borken") topology.
        - With commit 051f3ca0 ("sched/topology: Introduce NUMA identity
          node sched domain"), rcu stalls at boot up.
      
      The sched_domains_numa_masks table, which is used to generate
      cpumasks, is only created at boot time just before creating sched
      domains, and is never updated. Hence, it's better to get the
      topology correct before the sched domains are created.
      
      For example, on a 64-core POWER8 shared LPAR, dmesg reports:
      
        Brought up 512 CPUs
        Node 0 CPUs: 0-511
        Node 1 CPUs:
        Node 2 CPUs:
        Node 3 CPUs:
        Node 4 CPUs:
        Node 5 CPUs:
        Node 6 CPUs:
        Node 7 CPUs:
        Node 8 CPUs:
        Node 9 CPUs:
        Node 10 CPUs:
        Node 11 CPUs:
        ...
        BUG: arch topology borken
             the DIE domain not a subset of the NUMA domain
        BUG: arch topology borken
             the DIE domain not a subset of the NUMA domain
      
      numactl/lscpu output will still be correct with cores spreading across
      all nodes:
      
        Socket(s):             64
        NUMA node(s):          12
        Model:                 2.0 (pvr 004d 0200)
        Model name:            POWER8 (architected), altivec supported
        Hypervisor vendor:     pHyp
        Virtualization type:   para
        L1d cache:             64K
        L1i cache:             32K
        NUMA node0 CPU(s): 0-7,32-39,64-71,96-103,176-183,272-279,368-375,464-471
        NUMA node1 CPU(s): 8-15,40-47,72-79,104-111,184-191,280-287,376-383,472-479
        NUMA node2 CPU(s): 16-23,48-55,80-87,112-119,192-199,288-295,384-391,480-487
        NUMA node3 CPU(s): 24-31,56-63,88-95,120-127,200-207,296-303,392-399,488-495
        NUMA node4 CPU(s):     208-215,304-311,400-407,496-503
        NUMA node5 CPU(s):     168-175,264-271,360-367,456-463
        NUMA node6 CPU(s):     128-135,224-231,320-327,416-423
        NUMA node7 CPU(s):     136-143,232-239,328-335,424-431
        NUMA node8 CPU(s):     216-223,312-319,408-415,504-511
        NUMA node9 CPU(s):     144-151,240-247,336-343,432-439
        NUMA node10 CPU(s):    152-159,248-255,344-351,440-447
        NUMA node11 CPU(s):    160-167,256-263,352-359,448-455
      
      Currently on this LPAR, the scheduler detects 2 levels of NUMA and
      creates NUMA sched domains for all CPUs, but it finds a single DIE
      domain consisting of all CPUs. Hence it deletes all NUMA sched
      domains.
      
      To address this, detect shared processors and update the topology
      soon after CPUs are set up, so that the correct topology is in place
      just before the scheduler creates its sched domains.
      
      With the fix, dmesg reports:
      
        numa: Node 0 CPUs: 0-7 32-39 64-71 96-103 176-183 272-279 368-375 464-471
        numa: Node 1 CPUs: 8-15 40-47 72-79 104-111 184-191 280-287 376-383 472-479
        numa: Node 2 CPUs: 16-23 48-55 80-87 112-119 192-199 288-295 384-391 480-487
        numa: Node 3 CPUs: 24-31 56-63 88-95 120-127 200-207 296-303 392-399 488-495
        numa: Node 4 CPUs: 208-215 304-311 400-407 496-503
        numa: Node 5 CPUs: 168-175 264-271 360-367 456-463
        numa: Node 6 CPUs: 128-135 224-231 320-327 416-423
        numa: Node 7 CPUs: 136-143 232-239 328-335 424-431
        numa: Node 8 CPUs: 216-223 312-319 408-415 504-511
        numa: Node 9 CPUs: 144-151 240-247 336-343 432-439
        numa: Node 10 CPUs: 152-159 248-255 344-351 440-447
        numa: Node 11 CPUs: 160-167 256-263 352-359 448-455
      
      and lscpu also reports:
      
        Socket(s):             64
        NUMA node(s):          12
        Model:                 2.0 (pvr 004d 0200)
        Model name:            POWER8 (architected), altivec supported
        Hypervisor vendor:     pHyp
        Virtualization type:   para
        L1d cache:             64K
        L1i cache:             32K
        NUMA node0 CPU(s): 0-7,32-39,64-71,96-103,176-183,272-279,368-375,464-471
        NUMA node1 CPU(s): 8-15,40-47,72-79,104-111,184-191,280-287,376-383,472-479
        NUMA node2 CPU(s): 16-23,48-55,80-87,112-119,192-199,288-295,384-391,480-487
        NUMA node3 CPU(s): 24-31,56-63,88-95,120-127,200-207,296-303,392-399,488-495
        NUMA node4 CPU(s):     208-215,304-311,400-407,496-503
        NUMA node5 CPU(s):     168-175,264-271,360-367,456-463
        NUMA node6 CPU(s):     128-135,224-231,320-327,416-423
        NUMA node7 CPU(s):     136-143,232-239,328-335,424-431
        NUMA node8 CPU(s):     216-223,312-319,408-415,504-511
        NUMA node9 CPU(s):     144-151,240-247,336-343,432-439
        NUMA node10 CPU(s):    152-159,248-255,344-351,440-447
        NUMA node11 CPU(s):    160-167,256-263,352-359,448-455
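
      The shape of the fix (the init function name is taken from the
      description; the call site is assumed):

        /* Early in smp_cpus_done(): fetch the correct associativity on
         * shared-processor LPARs before sched domains are built. */
        if (lppaca_shared_proc(get_lppaca()))
                shared_proc_topology_init();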
      Reported-by: Manjunatha H R <manjuhr1@in.ibm.com>
      Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      [mpe: Trim / format change log]
      Tested-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc64/ftrace: Include ftrace.h needed for enable/disable calls · d6ee76d3
      Authored by Luke Dashjr
      this_cpu_disable_ftrace and this_cpu_enable_ftrace are inlines in
      ftrace.h. Without it included, the build fails.
      
      Fixes: a4bc64d3 ("powerpc64/ftrace: Disable ftrace during kvm entry/exit")
      Cc: stable@vger.kernel.org # v4.18+
      Signed-off-by: Luke Dashjr <luke-jr+git@utopios.org>
      Acked-by: Naveen N. Rao <naveen.n.rao at linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  9. 20 Aug 2018, 5 commits
    • powerpc/powernv/pci: Work around races in PCI bridge enabling · db217319
      Authored by Benjamin Herrenschmidt
      The generic code is racy when multiple children of a PCI bridge try to
      enable it simultaneously.
      
      This leads to drivers trying to access a device through a
      not-yet-enabled bridge, and this causes EEH errors under various
      circumstances when using parallel driver probing.
      
      There is work going on to fix that properly in the PCI core but it
      will take some time.
      
      x86 gets away with it because, outside of hotplug, the BIOS enables
      all the bridges at boot time.
      
      This patch does the same thing on powernv by enabling all bridges that
      have child devices at boot time, thus avoiding subsequent races. It's
      suitable for backporting to stable and distros, while the proper PCI
      fix will probably be significantly more invasive.
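
      A sketch of the boot-time walk (helper names assumed):

        /* Enable bridges parent-first; children then find their path to
         * the root already enabled and the race disappears. */
        static void pnv_pci_enable_bridge(struct pci_bus *bus)
        {
                struct pci_dev *dev = bus->self;
                struct pci_bus *child;

                if (dev && !list_empty(&bus->devices))
                        pci_enable_device(dev);

                list_for_each_entry(child, &bus->children, node)
                        pnv_pci_enable_bridge(child);
        }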
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/fadump: cleanup crash memory ranges support · a5818313
      Authored by Hari Bathini
      Commit 1bd6a1c4 ("powerpc/fadump: handle crash memory ranges array
      index overflow") changed crash memory ranges to a dynamic array that
      is reallocated on-demand with krealloc(). The relevant header for this
      call was not included. The kernel compiles though. But be cautious and
      add the header anyway.
      
      Also, memory allocation logic in fadump_add_crash_memory() takes care
      of memory allocation for crash memory ranges in all scenarios. Drop
      unnecessary memory allocation in fadump_setup_crash_memory_ranges().
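
      The missing include is the one that declares krealloc():

        #include <linux/slab.h>	/* krealloc() */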
      
      Fixes: 1bd6a1c4 ("powerpc/fadump: handle crash memory ranges array index overflow")
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/powernv: provide a console flush operation for opal hvc driver · 95b861a7
      Authored by Nicholas Piggin
      Provide the flush hv_op for the opal hvc driver. This will flush the
      firmware console buffers without spinning with interrupts disabled.
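
      Wiring it up amounts to filling in one more op (a sketch; the flush
      callback name here is hypothetical):

        static const struct hv_ops hvc_opal_raw_ops = {
                .get_chars = opal_get_chars,
                .put_chars = opal_put_chars,
                .flush     = opal_flush_chars,  /* hypothetical name */
        };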
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: linuxppc-dev@lists.ozlabs.org
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/traps: Avoid rate limit messages from show unhandled signals · 997dd26c
      Authored by Michael Ellerman
      In the recent commit to add an explicit ratelimit state when showing
      unhandled signals, commit 35a52a10 ("powerpc/traps: Use an
      explicit ratelimit state for show_signal_msg()"), I put the check of
      show_unhandled_signals and the ratelimit state before the call to
      unhandled_signal() so as to avoid unnecessarily calling the latter
      when show_unhandled_signals is false.
      
      However that causes us to check the ratelimit state on every call, so
      if we take a lot of *handled* signals that has the effect of making
      the ratelimit code print warnings that callbacks have been suppressed
      when they haven't.
      
      So rearrange the code so that we check show_unhandled_signals first,
      then call unhandled_signal() and finally check the ratelimit state.
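
      The resulting order of checks (a sketch; the wrapper and the static
      ratelimit state 'rs' are assumed from the earlier commit):

        static bool allow_signal_msg(int signr)  /* hypothetical wrapper */
        {
                if (!show_unhandled_signals)
                        return false;           /* cheap check first */
                if (!unhandled_signal(current, signr))
                        return false;           /* handled: don't touch ratelimit */
                return __ratelimit(&rs);        /* consulted only if we may print */
        }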
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
    • KVM: PPC: Book3S HV: Don't truncate HPTE index in xlate function · 46dec40f
      Authored by Paul Mackerras
      This fixes a bug which causes guest virtual addresses to get translated
      to guest real addresses incorrectly when the guest is using the HPT MMU
      and has more than 256GB of RAM, or more specifically has an HPT
      larger than 2GB.  This has shown up in testing as a failure of the
      host to emulate doorbell instructions correctly on POWER9 for HPT
      guests with more than 256GB of RAM.
      
      The bug is that the HPTE index in kvmppc_mmu_book3s_64_hv_xlate()
      is stored as an int, and in forming the HPTE address, the index gets
      shifted left 4 bits as an int before being signed-extended to 64 bits.
      The simple fix is to make the variable a long int, matching the
      return type of kvmppc_hv_find_lock_hpte(), which is what calculates
      the index.
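
      The fix in miniature (the surrounding expression is reconstructed):

        long int index;		/* was: int index; now matches the return
        			 * type of kvmppc_hv_find_lock_hpte() */

        /* index << 4 is now done in 64-bit arithmetic: */
        hptep = (__be64 *)(kvm->arch.hpt.virt + (index << 4));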
      
      Fixes: 697d3899 ("KVM: PPC: Implement MMIO emulation support for Book3S HV guests")
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
  10. 18 Aug 2018, 2 commits
  11. 15 Aug 2018, 1 commit
    • KVM: PPC: Book3S HV: Use correct pagesize in kvm_unmap_radix() · c066fafc
      Authored by Paul Mackerras
      Since commit e641a317 ("KVM: PPC: Book3S HV: Unify dirty page map
      between HPT and radix", 2017-10-26), kvm_unmap_radix() computes the
      number of PAGE_SIZEd pages being unmapped and passes it to
      kvmppc_update_dirty_map(), which expects to be passed the page size
      instead.  Consequently it will only mark one system page dirty even
      when a large page (for example a THP page) is being unmapped.  The
      consequence of this is that part of the THP page might not get copied
      during live migration, resulting in memory corruption for the guest.
      
      This fixes it by computing and passing the page size in kvm_unmap_radix().
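
      In sketch form:

        /* Pass the mapping size, not the number of PAGE_SIZEd pages. */
        unsigned long page_size = 1UL << shift;   /* shift of the unmapped PTE */

        kvmppc_update_dirty_map(memslot, gfn, page_size);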
      
      Cc: stable@vger.kernel.org # v4.15+
      Fixes: e641a317 (KVM: PPC: Book3S HV: Unify dirty page map between HPT and radix)
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
  12. 14 Aug 2018, 1 commit
  13. 13 Aug 2018, 1 commit
  14. 10 Aug 2018, 1 commit
    • powerpc/uaccess: Enable get_user(u64, *p) on 32-bit · f7a6947c
      Authored by Michael Ellerman
      Currently if you build a 32-bit powerpc kernel and use get_user() to
      load a u64 value, it will fail to build, e.g. with:
      
        kernel/rseq.o: In function `rseq_get_rseq_cs':
        kernel/rseq.c:123: undefined reference to `__get_user_bad'
      
      This is hitting the check in __get_user_size() that makes sure the
      size we're copying doesn't exceed the size of the destination:
      
        #define __get_user_size(x, ptr, size, retval)		\
        do {							\
        	retval = 0;					\
        	__chk_user_ptr(ptr);				\
        	if (size > sizeof(x))				\
        		(x) = __get_user_bad();
      
      Which doesn't immediately make sense because the size of the
      destination is u64, but it's not really, because __get_user_check()
      etc. internally create an unsigned long and copy into that:
      
        #define __get_user_check(x, ptr, size)			\
        ({							\
        	long __gu_err = -EFAULT;			\
        	unsigned long  __gu_val = 0;
      
      The problem being that on 32-bit unsigned long is not big enough to
      hold a u64. We can fix this with a trick from hpa in the x86 code, we
      statically check the type of x and set the type of __gu_val to either
      unsigned long or unsigned long long.
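
      The trick looks roughly like this (reconstructed from the
      description):

        /* Statically pick a temporary wide enough for the destination. */
        #define __long_type(x) \
        	__typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))

        /* ... so that __get_user_check() can declare: */
        	__long_type(*(ptr)) __gu_val = 0;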
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>