1. 03 Feb, 2017 · 1 commit
    • MIPS: Export pgd/pmd symbols for KVM · ccf01516
      James Hogan authored
      Export pmd_init(), invalid_pmd_table and tlbmiss_handler_setup_pgd to
      GPL kernel modules so that MIPS KVM can use the inline page table
      management functions and switch between page tables:
      
      - pmd_init() will be used directly by KVM to initialise newly allocated
        pmd tables with invalid lower level table pointers.
      
      - invalid_pmd_table is used by pud_present(), pud_none(), and
        pud_clear(), which KVM will use to test and clear pud entries.
      
      - tlbmiss_handler_setup_pgd() will be called by KVM entry code to switch
        to the appropriate GVA page tables.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      ccf01516
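      A minimal sketch of the exports described above (file placement assumed,
      not verbatim kernel code); the GPL-only variants are used since the
      consumers are GPL kernel modules:

          #include <linux/export.h>

          /* e.g. in arch/mips/mm/pgtable-64.c (sketch) */
          EXPORT_SYMBOL_GPL(pmd_init);
          EXPORT_SYMBOL_GPL(invalid_pmd_table);

          /* e.g. in arch/mips/mm/tlbex.c (sketch) */
          EXPORT_SYMBOL_GPL(tlbmiss_handler_setup_pgd);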
  2. 02 Feb, 2017 · 1 commit
    • MIPS: Move pgd_alloc() out of header · 814f91bf
      James Hogan authored
      pgd_alloc() references init_mm which is not exported to modules. In
      order for KVM to be able to use pgd_alloc() to allocate GVA page tables,
      move pgd_alloc() into a new pgtable.c file and export it to modules.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      814f91bf
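      A simplified sketch of what the new arch/mips/mm/pgtable.c might contain
      (not verbatim kernel code); the point is that init_mm is referenced from
      built-in code and the function is then exported:

          #include <linux/export.h>
          #include <linux/mm.h>
          #include <asm/pgalloc.h>

          pgd_t *pgd_alloc(struct mm_struct *mm)
          {
                  pgd_t *ret, *init;

                  ret = (pgd_t *)__get_free_pages(GFP_KERNEL, PGD_ORDER);
                  if (ret) {
                          init = pgd_offset(&init_mm, 0UL);
                          pgd_init((unsigned long)ret);
                          memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
                                 (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
                  }

                  return ret;
          }
          EXPORT_SYMBOL_GPL(pgd_alloc);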
  3. 25 Dec, 2016 · 1 commit
  4. 15 Dec, 2016 · 1 commit
  5. 25 Nov, 2016 · 1 commit
  6. 24 Nov, 2016 · 1 commit
    • MIPS: Mask out limit field when calculating wired entry count · 10313980
      Paul Burton authored
      Since MIPSr6 the Wired register is split into 2 fields, with the upper
      16 bits of the register indicating a limit on the value that the wired
      entry count in the bottom 16 bits of the register can take. This means
      that simply reading the wired register doesn't get us a valid TLB entry
      index any longer, and we instead need to retrieve only the lower 16 bits
      of the register. Introduce a new num_wired_entries() function which does
      this on MIPSr6 or higher and simply returns the value of the wired
      register on older architecture revisions, and make use of it when
      reading the number of wired entries.
      
      Since commit e710d666 ("MIPS: tlb-r4k: If there are wired entries,
      don't use TLBINVF") we have been using a non-zero number of wired
      entries to determine whether we should avoid use of the tlbinvf
      instruction (which would invalidate wired entries) and instead loop over
      TLB entries in local_flush_tlb_all(). This loop begins with the number
      of wired entries, or before this patch some large bogus TLB index on
      MIPSr6 systems. Thus since the aforementioned commit some MIPSr6 systems
      with FTLBs have been prone to leaving stale address translations in the
      FTLB & crashing in various weird & wonderful ways when we later observe
      the wrong memory.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: Matt Redfearn <matt.redfearn@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/14557/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      10313980
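      A sketch of the helper described above (register field mask name assumed,
      not verbatim kernel code):

          static inline unsigned int num_wired_entries(void)
          {
                  unsigned int wired = read_c0_wired();

                  /* MIPSr6: the upper 16 bits hold a limit, not wired entries */
                  if (cpu_has_mips_r6)
                          wired &= MIPSR6_WIRED_WIRED;

                  return wired;
          }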
  7. 19 Oct, 2016 · 1 commit
  8. 07 Oct, 2016 · 3 commits
    • MIPS: Support per-device DMA coherence · 20d33064
      Paul Burton authored
      On some MIPS systems, a subset of devices may have DMA that is coherent
      with the CPU caches. For example, in systems including a MIPS I/O Coherence Unit
      (IOCU), some devices may be connected to that IOCU whilst others are
      not.
      
      Prior to this patch, we have a plat_device_is_coherent() function but no
      implementation which does anything besides return a global true or
      false, optionally chosen at runtime. For devices such as those described
      above this is insufficient.
      
      Fix this by tracking DMA coherence on a per-device basis with a
      dma_coherent field in struct dev_archdata. Setting this from
      arch_setup_dma_ops() takes care of devices which set the dma-coherent
      property via device tree, and any PCI devices beneath a bridge described
      in DT, automatically.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/14349/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      20d33064
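      A simplified sketch of the per-device tracking described above (not
      verbatim kernel code):

          /* struct dev_archdata gains a coherence flag */
          struct dev_archdata {
                  unsigned int dma_coherent : 1;
          };

          static inline int plat_device_is_coherent(struct device *dev)
          {
                  return dev->archdata.dma_coherent;
          }

          /* recorded from DT (dma-coherent property) during DMA setup */
          static inline void arch_setup_dma_ops(struct device *dev, u64 dma_base,
                                                u64 size, const struct iommu_ops *iommu,
                                                bool coherent)
          {
                  dev->archdata.dma_coherent = coherent;
          }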
    • MIPS: dma-default: Don't check hw_coherentio if device is non-coherent · cfa93fb9
      Paul Burton authored
      There are no cases where plat_device_is_coherent() will return zero
      whilst hw_coherentio is non-zero, and acting any differently in such a
      case doesn't make much sense - if a device is non-coherent with the CPU
      caches then access to memory "coherent" with DMA must be uncached. Clean
      up the nonsensical case.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/14348/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      cfa93fb9
    • MIPS: Sanitise coherentio semantics · f2302023
      Paul Burton authored
      The coherentio variable has previously been used as a boolean value,
      indicating whether the user specified that coherent I/O should be
      enabled or disabled. It failed to take into account the case where the
      user does not specify any preference, in which case it makes sense that
      we should default to coherent I/O if the hardware supports it
      (hw_coherentio is non-zero).
      
      Introduce an enum to clarify the 3 different values of coherentio & use
      it throughout the code, modifying plat_device_is_coherent() &
      r4k_cache_init() to take into account the default case.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: Paul Burton <paul.burton@imgtec.com>
      Patchwork: https://patchwork.linux-mips.org/patch/14347/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      f2302023
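      A sketch of the three-way setting described above (identifier names
      assumed, not verbatim kernel code):

          enum coherent_io_user_state {
                  IO_COHERENCE_DEFAULT,   /* user expressed no preference */
                  IO_COHERENCE_ENABLED,   /* coherent I/O forced on */
                  IO_COHERENCE_DISABLED,  /* coherent I/O forced off */
          };

          extern enum coherent_io_user_state coherentio;
          extern int hw_coherentio;

          static inline int plat_device_is_coherent(struct device *dev)
          {
                  switch (coherentio) {
                  case IO_COHERENCE_ENABLED:
                          return 1;
                  case IO_COHERENCE_DISABLED:
                          return 0;
                  case IO_COHERENCE_DEFAULT:
                  default:
                          /* no preference: follow what the hardware supports */
                          return hw_coherentio;
                  }
          }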
  9. 05 Oct, 2016 · 3 commits
    • MIPS: mm: Audit and remove any unnecessary uses of module.h · d9ba5778
      Paul Gortmaker authored
      Historically a lot of these existed because we did not have
      a distinction between what was modular code and what was providing
      support to modules via EXPORT_SYMBOL and friends.  That changed
      when we forked out support for the latter into the export.h file.
      
      This means we should be able to reduce the usage of module.h
      in code that is obj-y Makefile or bool Kconfig.  The advantage
      in doing so is that module.h itself sources about 15 other headers;
      adding significantly to what we feed cpp, and it can obscure what
      headers we are effectively using.
      
      Since module.h was the source for init.h (for __init) and for
      export.h (for EXPORT_SYMBOL) we consider each obj-y/bool instance
      for the presence of either and replace as needed.
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/14033/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      d9ba5778
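      An illustrative example of the substitution described above (the specific
      file is hypothetical):

          /* before: pulls in roughly 15 other headers for always-built-in code */
          #include <linux/module.h>

          /* after: include only what is actually used */
          #include <linux/export.h>       /* for EXPORT_SYMBOL() */
          #include <linux/init.h>         /* for __init */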
    • MIPS: tlb-r4k: If there are wired entries, don't use TLBINVF · e710d666
      Matt Redfearn authored
      When adding a wired entry to the TLB via add_wired_entry, the tlb is
      flushed with local_flush_tlb_all, which on CPUs with TLBINV results in
      the new wired entry being flushed again.
      
      Behavior of the TLBINV instruction applies to all applicable TLB entries
      and is unaffected by the setting of the Wired register. Therefore if
      the TLB has any wired entries, fall back to iterating over the entries
      rather than blasting them all using TLBINVF.
      Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
      Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
      Cc: Ohad Ben-Cohen <ohad@wizery.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: lisa.parratt@imgtec.com
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linux-mips@linux-mips.org
      Cc: linux-remoteproc@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/14283/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      e710d666
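      A simplified sketch of the resulting logic in local_flush_tlb_all() (the
      tlbinvf helper name is hypothetical, not verbatim kernel code):

          int entry = read_c0_wired();

          /* tlbinvf would invalidate wired entries too, so only use it
           * when there are none */
          if (cpu_has_tlbinv && !entry) {
                  tlbinvf_flush_all_sets();       /* hypothetical helper */
          } else {
                  /* otherwise flush only the non-wired entries by index */
                  while (entry < current_cpu_data.tlbsize) {
                          write_c0_index(entry);
                          write_c0_entryhi(UNIQUE_ENTRYHI(entry));
                          mtc0_tlbw_hazard();
                          tlb_write_indexed();
                          entry++;
                  }
          }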
    • MIPS: c-r4k: Fix flush_icache_range() for EVA · b2ff7171
      James Hogan authored
      flush_icache_range() flushes icache lines in a protected fashion for
      kernel addresses, however this isn't correct with EVA where protected
      cache ops only operate on user addresses, making flush_icache_range()
      ineffective.
      
      Split the implementations of __flush_icache_user_range() from
      flush_icache_range(), changing the normal flush_icache_range() to use
      unprotected normal cache ops.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/14156/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      b2ff7171
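      A simplified sketch of the split described above (not verbatim kernel
      code; dcache handling and the shared helper are omitted):

          /* user addresses (including EVA): keep protected cache ops */
          void __flush_icache_user_range(unsigned long start, unsigned long end)
          {
                  protected_blast_icache_range(start, end);
          }

          /* kernel addresses: use normal, unprotected cache ops */
          void flush_icache_range(unsigned long start, unsigned long end)
          {
                  blast_icache_range(start, end);
          }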
  10. 04 Oct, 2016 · 4 commits
  11. 30 Sep, 2016 · 1 commit
    • MIPS: Fix detection of unsupported highmem with cache aliases · 058effe7
      Paul Burton authored
      The paging_init() function contains code which detects that highmem is
      in use but unsupported due to dcache aliasing. However this code was
      ineffective because it was being run before the caches are probed,
      meaning that cpu_has_dc_aliases would always evaluate to false (unless a
      platform overrides it to a compile-time constant) and the detection of
      the unsupported case is never triggered. The kernel would then go on to
      attempt to use highmem & either hit coherency issues or trigger the
      BUG_ON in flush_kernel_dcache_page().
      
      Fix this by running paging_init() later than cpu_cache_init(), such that
      the cpu_has_dc_aliases macro will evaluate correctly & the unsupported
      highmem case will be detected successfully.
      
      This then leads to a formerly hidden issue in that
      mem_init_free_highmem() will attempt to free all highmem pages, even
      though we're avoiding use of them & don't have valid page structs for
      them. This leads to an invalid pointer dereference & a TLB exception.
      Avoid this by skipping the loop in mem_init_free_highmem() if
      cpu_has_dc_aliases evaluates true.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: Rabin Vincent <rabinv@axis.com>
      Cc: Matt Redfearn <matt.redfearn@imgtec.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Alexander Sverdlin <alexander.sverdlin@gmail.com>
      Cc: Aurelien Jarno <aurelien@aurel32.net>
      Cc: Jaedon Shin <jaedon.shin@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Sergey Ryazanov <ryazanov.s.a@gmail.com>
      Cc: Jonas Gorski <jogo@openwrt.org>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/14184/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      058effe7
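      A simplified sketch of the second fix described above, skipping highmem
      pages whose page structs were never set up (not verbatim kernel code):

          static void __init mem_init_free_highmem(void)
          {
          #ifdef CONFIG_HIGHMEM
                  unsigned long tmp;

                  /* highmem is unusable with dcache aliases; its pages were
                   * never initialised, so don't try to free them */
                  if (cpu_has_dc_aliases)
                          return;

                  for (tmp = highstart_pfn; tmp < highend_pfn; tmp++) {
                          struct page *page = pfn_to_page(tmp);

                          if (!page_is_ram(tmp))
                                  SetPageReserved(page);
                          else
                                  free_highmem_page(page);
                  }
          #endif
          }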
  12. 13 Sep, 2016 · 2 commits
  13. 04 Aug, 2016 · 2 commits
    • dma-mapping: use unsigned long for dma_attrs · 00085f1e
      Krzysztof Kozlowski authored
      The dma-mapping core and the implementations do not change the DMA
      attributes passed by pointer.  Thus the pointer can point to const data.
      However the attributes do not have to be a bitfield.  Instead unsigned
      long will do fine:
      
      1. This is just simpler.  Both in terms of reading the code and setting
         attributes.  Instead of initializing local attributes on the stack
         and passing pointer to it to dma_set_attr(), just set the bits.
      
       2. It brings safety and effectively checks const correctness, because
          the attributes are passed by value.
      
      Semantic patches for this change (at least most of them):
      
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
      
          @@
          f(...,
          - struct dma_attrs *attrs
          + unsigned long attrs
          , ...)
          {
          ...
          }
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
      
      and
      
          // Options: --all-includes
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
          type t;
      
          @@
          t f(..., struct dma_attrs *attrs);
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
      
      Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
      Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
      Acked-by: Mark Salter <msalter@redhat.com> [c6x]
      Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
      Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
      Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
      Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
      Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
      Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
      Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
      Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
      Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
      Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
      Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
      Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
      Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      00085f1e
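      An illustrative caller before and after the conversion (the device, size
      and buffer variables are hypothetical):

          /* before: attributes built on the stack via struct dma_attrs */
          DEFINE_DMA_ATTRS(attrs);
          dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
          buf = dma_alloc_attrs(dev, size, &handle, GFP_KERNEL, &attrs);

          /* after: attributes are a plain unsigned long bitmask */
          buf = dma_alloc_attrs(dev, size, &handle, GFP_KERNEL,
                                DMA_ATTR_WRITE_COMBINE);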
    • tree-wide: replace config_enabled() with IS_ENABLED() · 97f2645f
      Masahiro Yamada authored
      The use of config_enabled() against config options is ambiguous.  In
      practical terms, config_enabled() is equivalent to IS_BUILTIN(), but the
      author might have used it for the meaning of IS_ENABLED().  Using
      IS_ENABLED(), IS_BUILTIN(), IS_MODULE() etc.  makes the intention
      clearer.
      
      This commit replaces config_enabled() with IS_ENABLED() where possible.
      This commit is only touching bool config options.
      
      I noticed two cases where config_enabled() is used against a tristate
      option:
      
       - config_enabled(CONFIG_HWMON)
        [ drivers/net/wireless/ath/ath10k/thermal.c ]
      
       - config_enabled(CONFIG_BACKLIGHT_CLASS_DEVICE)
        [ drivers/gpu/drm/gma500/opregion.c ]
      
      I did not touch them because they should be converted to IS_BUILTIN()
      in order to keep the logic, but I was not sure it was the authors'
      intention.
      
      Link: http://lkml.kernel.org/r/1465215656-20569-1-git-send-email-yamada.masahiro@socionext.com
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Stas Sergeev <stsp@list.ru>
      Cc: Matt Redfearn <matt.redfearn@imgtec.com>
      Cc: Joshua Kinard <kumba@gentoo.org>
      Cc: Jiri Slaby <jslaby@suse.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Cc: "Dmitry V. Levin" <ldv@altlinux.org>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Johannes Berg <johannes@sipsolutions.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Will Drewry <wad@chromium.org>
      Cc: Nikolay Martynov <mar.kolya@gmail.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
      Cc: Rafal Milecki <zajec5@gmail.com>
      Cc: James Cowgill <James.Cowgill@imgtec.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Alex Smith <alex.smith@imgtec.com>
      Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
      Cc: Qais Yousef <qais.yousef@imgtec.com>
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Mikko Rapeli <mikko.rapeli@iki.fi>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Brian Norris <computersforpeace@gmail.com>
      Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Cc: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Roland McGrath <roland@hack.frob.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Kalle Valo <kvalo@qca.qualcomm.com>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: Tony Wu <tung7970@gmail.com>
      Cc: Huaitong Han <huaitong.han@intel.com>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andrea Gelmini <andrea.gelmini@gelma.net>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Rabin Vincent <rabin@rab.in>
      Cc: "Maciej W. Rozycki" <macro@imgtec.com>
      Cc: David Daney <david.daney@cavium.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      97f2645f
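      An illustrative before/after (the config option and helper shown are
      hypothetical examples using a bool option):

          /* before: reads as IS_BUILTIN(); the intention is ambiguous */
          if (config_enabled(CONFIG_EVA))
                  setup_eva_segments();

          /* after: the intention is explicit (true for =y, and =m for tristate) */
          if (IS_ENABLED(CONFIG_EVA))
                  setup_eva_segments();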
  14. 03 Aug, 2016 · 2 commits
  15. 02 Aug, 2016 · 1 commit
  16. 29 Jul, 2016 · 10 commits
    • MIPS: c-r4k: Use SMP calls for CM indexed cache ops · 11f76903
      James Hogan authored
      The MIPS Coherence Manager (CM) can propagate address-based ("hit")
      cache operations to other cores in the coherent system, alleviating
      software of the need to use SMP calls, however indexed cache operations
      are not propagated by hardware since doing so makes no sense for
      separate caches.
      
      Update r4k_op_needs_ipi() to report that only hit cache operations are
      globalized by the CM, requiring indexed cache operations to be
      globalized by software via an SMP call.
      
      r4k_on_each_cpu() previously had a special case for CONFIG_MIPS_MT_SMP,
      intended to avoid the SMP calls when the only other CPUs in the system
      were other VPEs in the same core, and hence sharing the same caches.
      This was changed by commit cccf34e9 ("MIPS: c-r4k: Fix cache
      flushing for MT cores") to apparently handle multi-core multi-VPE
      systems, but it focussed mainly on hit cache ops, so the SMP calls were
      still disabled entirely for CM systems.
      
      This doesn't normally cause problems, but tests can be written to hit
      these corner cases by using multiple threads, or changing task
      affinities to force the process to migrate cores. For example the
      failure of mprotect RW->RX to globally sync icaches (via
      flush_cache_range) can be detected by modifying and mprotecting a code
      page on one core, and migrating to a different core to execute from it.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13807/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      11f76903
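      A simplified sketch of the updated check (not verbatim kernel code):

          static bool r4k_op_needs_ipi(unsigned int type)
          {
                  /* the Coherence Manager globalizes hit (address-based) ops */
                  if (type == R4K_HIT && mips_cm_present())
                          return false;

                  /* index ops, or no CM: IPI if any foreign CPUs exist */
                  return !cpumask_empty(&cpu_foreign_map[smp_processor_id()]);
          }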
    • MIPS: c-r4k: Avoid small flush_icache_range SMP calls · f70ddc07
      James Hogan authored
      Avoid SMP calls for flushing small icache ranges. On non-CM platforms,
      and CM platforms too after we make r4k_on_each_cpu() take the cache op
      type into account, it will be called on multiple CPUs due to the
      possibility that local_r4k_flush_icache_range_ipi() could do
      non-globalized indexed cache ops. This roughly copies the range size
      check out into r4k_flush_icache_range(), which can disallow indexed
      cache ops and allow r4k_on_each_cpu() to skip the SMP call.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13805/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      f70ddc07
    • MIPS: c-r4k: Local flush_icache_range cache op override · 27b93d9c
      James Hogan authored
      Allow the permitted cache op types used by
      local_r4k_flush_icache_range_ipi() to be overridden by the SMP caller.
      This will allow SMP calls to be avoided under certain circumstances,
      falling back to a single CPU performing globalized hit cache ops only.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13803/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      27b93d9c
    • MIPS: c-r4k: Split r4k_flush_kernel_vmap_range() · a9341ae2
      James Hogan authored
      Split the operation of r4k_flush_kernel_vmap_range() into separate
      SMP callbacks for the indexed cache flush and hit cache flush cases,
      since the logic to determine which to use can be determined by the
      initiating CPU prior to doing any SMP calls.
      
      This will help when we change r4k_on_each_cpu() to distinguish indexed
      and hit cache ops in a later patch, preventing globalized hit cache ops
      being performed redundantly on multiple CPUs.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13806/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      a9341ae2
    • MIPS: c-r4k: Exclude sibling CPUs in SMP calls · 640511ae
      James Hogan authored
      When performing SMP calls to foreign cores, exclude sibling CPUs from
      the provided map, as we already handle the local core on the current
      CPU. This prevents, for example, an SMP call from core 0, VPE 1 to VPE 0
      on the same core.
      
      In the process the cpu_foreign_map cpumask is turned into an array of
      cpumasks, so that each CPU has its own version of it which excludes
      sibling CPUs. r4k_op_needs_ipi() is also updated to reflect that cache
      management SMP calls are not needed when all CPUs are siblings (i.e.
      there are no foreign CPUs according to the new cpu_foreign_map[]
      semantics which exclude siblings).
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: Felix Fietkau <nbd@nbd.name>
      Cc: Jayachandran C. <jchandra@broadcom.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13801/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      640511ae
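      A simplified sketch that captures only the sibling exclusion described
      above (not verbatim kernel code):

          void calculate_cpu_foreign_map(void)
          {
                  int i, k;

                  for_each_online_cpu(k) {
                          cpumask_clear(&cpu_foreign_map[k]);
                          for_each_online_cpu(i) {
                                  /* CPUs sharing k's core are not foreign to k */
                                  if (!cpumask_test_cpu(i, &cpu_sibling_map[k]))
                                          cpumask_set_cpu(i, &cpu_foreign_map[k]);
                          }
                  }
          }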
    • MIPS: c-r4k: Fix valid ASID optimisation · 6d758bfc
      James Hogan authored
      Several cache operations are optimised to return early from the SMP call
      handler if the memory map in question has no valid ASID on the current
      CPU, or any online CPU in the case of MIPS_MT_SMP. The idea is that if a
      memory map has never been used on a CPU it shouldn't have cache lines in
      need of flushing.
      
      However this doesn't cover all cases when ASIDs for other CPUs need to
      be checked:
      - Offline VPEs may have recently been online and brought lines into the
        (shared) cache, so they should also be checked, rather than only
        online CPUs.
      - SMP systems with a Coherence Manager (CM), but with MT disabled still
        have globalized hit cache ops, but don't use SMP calls, so all present
        CPUs should be taken into account.
      - R6 systems have a different multithreading implementation, so
        MIPS_MT_SMP won't be set, but as above may still have a CM which
        globalizes hit cache ops.
      
      Additionally for non-globalized cache operations where an SMP call to a
      single VPE in each foreign core is used, it is not necessary to check
      every CPU in the system, only sibling CPUs sharing the same first level
      cache.
      
      Fix this by making has_valid_asid() take a cache op type argument like
      r4k_on_each_cpu(), so it can determine whether r4k_on_each_cpu() will
      have done SMP calls to other cores. It can then determine which set of
      CPUs to check the ASIDs of based on that, excluding foreign CPUs if an
      SMP call will have been performed.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13804/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      6d758bfc
    • MIPS: c-r4k: Add r4k_on_each_cpu cache op type arg · d374d937
      James Hogan authored
      The r4k_on_each_cpu() function calls the specified cache flush helper on
      other CPUs if deemed necessary due to the cache ops not being
      globalized by hardware. However this really depends on the cache op
      addressing type, as the MIPS Coherence Manager (CM) if present will
      globalize "hit" cache ops (addressed by virtual address), but not
      "index" cache ops (addressed by cache index). This results in index
      cache ops only being performed on a single CPU when CM is present.
      
      Most (but not all) of the functions called by r4k_on_each_cpu() perform
      cache operations exclusively with a single cache op type, so add a type
      argument and modify the callers to pass in some combination of R4K_HIT
      (global kernel virtual addressing or user virtual addressing
      conditional upon matching active_mm) and R4K_INDEX (index into cache).
      
      This will allow r4k_on_each_cpu() to later distinguish these cases and
      decide whether to perform an SMP call based on it.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13798/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      d374d937
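      A sketch of the cache op type flags and the resulting dispatch, close to
      the description above but not verbatim kernel code:

          #define R4K_HIT         BIT(0)  /* virtual-address ("hit") cache ops */
          #define R4K_INDEX       BIT(1)  /* cache-index ("index") cache ops */

          static inline void r4k_on_each_cpu(unsigned int type,
                                             void (*func)(void *info), void *info)
          {
                  preempt_disable();
                  /* IPI other cores only when hardware won't globalize the ops */
                  if (r4k_op_needs_ipi(type))
                          smp_call_function_many(&cpu_foreign_map[smp_processor_id()],
                                                 func, info, 1);
                  func(info);
                  preempt_enable();
          }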
    • MIPS: c-r4k: Avoid dcache flush for sigtramps · 8bd646e9
      James Hogan authored
      Avoid the dcache and scache flush in local_r4k_flush_cache_sigtramp() if
      the icache fills straight from the dcache.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13802/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      8bd646e9
    • MIPS: c-r4k: Fix sigtramp SMP call to use kmap · e523f289
      James Hogan authored
      Fix r4k_flush_cache_sigtramp() and local_r4k_flush_cache_sigtramp() to
      flush the delay slot emulation trampoline cacheline through a kmap
      rather than directly when the active_mm doesn't match that of the task
      initiating the flush, a bit like local_r4k_flush_cache_page() does.
      
      This would fix a corner case on SMP systems without hardware globalized
      hit cache ops, where a migration to another CPU after the flush, where
      that CPU did not have the same mm active at the time of the flush, could
      result in stale icache content being executed instead of the trampoline,
      e.g. from a previous delay slot emulation with a similar stack pointer.
      
      This case was artificially triggered by replacing the icache flush with
      a full indexed flush (not globalized on CM systems) and forcing the SMP
      call to take place, with a test program that alternated two FPU delay
      slots with a parent process repeatedly changing scheduler affinity.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13797/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      e523f289
    • MIPS: SMP: Clear ASID without confusing has_valid_asid() · a05c3920
      James Hogan authored
      The SMP flush_tlb_*() functions may clear the memory map's ASIDs for
      other CPUs if the mm has only a single user (the current CPU) in order
      to avoid SMP calls. However this makes it appear to has_valid_asid(),
      which is used by various cache flush functions, as if the CPUs have
      never run in the mm, and therefore can't have cached any of its memory.
      
      For flush_tlb_mm() this doesn't sound unreasonable.
      
      flush_tlb_range() corresponds to flush_cache_range() which does do full
      indexed cache flushes, but only on the icache if the specified mapping
      is executable, otherwise it doesn't guarantee that there are no cache
      contents left for the mm.
      
      flush_tlb_page() corresponds to flush_cache_page(), which will perform
      address based cache ops on the specified page only, and also only
      touches the icache if the page is executable. It does not guarantee that
      there are no cache contents left for the mm.
      
      For example, this affects flush_cache_range() which uses the
      has_valid_asid() optimisation. It is required to flush the icache when
      mappings are made executable (e.g. using mprotect) so they are
      immediately usable. If some code is changed to non executable in order
      to be modified then it will not be flushed from the icache during that
      time, but the ASID on other CPUs may still be cleared for TLB flushing.
      When the code is changed back to executable, flush_cache_range() will
      assume the code hasn't run on those other CPUs due to the zero ASID, and
      won't invalidate the icache on them.
      
      This is fixed by clearing the other CPUs' ASIDs to 1 instead of 0 for the
      above two flush_tlb_*() functions when the corresponding cache flushes
      are likely to be incomplete (non executable range flush, or any page
      flush). This ASID appears valid to has_valid_asid(), but still triggers
      ASID regeneration due to the upper ASID version bits being 0, which is
      less than the minimum ASID version of 1 and so always treated as stale.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13795/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      a05c3920
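      A simplified sketch of the fix as it might appear in the SMP
      flush_tlb_page() path (not verbatim kernel code):

          for_each_online_cpu(cpu) {
                  /*
                   * Assign ASID 1 rather than 0: has_valid_asid() still sees
                   * the mm as having run here, while the zero version bits
                   * force a fresh ASID before the mm is next used on this CPU.
                   */
                  if (cpu != smp_processor_id() && cpu_context(cpu, mm))
                          cpu_context(cpu, mm) = 1;
          }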
  17. 27 Jul, 2016 · 1 commit
  18. 24 Jul, 2016 · 2 commits
    • MIPS: tlbex: Avoid duplicated single_insn_swpd · 2f8f8c04
      James Hogan authored
      The expression "uasm_in_compat_space_p(swpd) && !uasm_rel_lo(swpd)" is
      used twice in build_get_pgd_vmalloc64(), one of which is assigned to the
      local variable single_insn_swpd. Update the other use to just use
      single_insn_swpd instead to remove the duplication.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: David Daney <ddaney@caviumnetworks.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13779/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      2f8f8c04
    • MIPS: uasm: Handle low values in uasm_in_compat_space_p() · f7d9afea
      James Hogan authored
      uasm_in_compat_space_p() determines whether the given value is in the
      32-bit compatibility part of the 64-bit address space, i.e. is in 32-bit
      sign-extended form, however it only handles the top half of the value
      space (corresponding to the kernel compatibility segments in the upper
      half of the address space). Since values < 2^31 (corresponding to the
      low 2GiB of the address space) can also be handled using 32-bit
      instructions (e.g. a LUI and ADDIU) rather than convoluted 64-bit
      immediate generation, rewrite it with a cast to check whether the
      address matches its 32-bit sign extended form.
      
      This allows UASM_i_LA to be used to generate arbitrary 32-bit immediates
      more efficiently on 64-bit CPUs, i.e. more like the li (load immediate)
      pseudo-instruction.
      
      For example this code to load the immediate (ST0_EXL | KSU_USER |
      ST0_BEV | ST0_KX) into k0 with UASM_i_LA():
      
       lui        k0,0x0
       dsll       k0,k0,0x10
       daddiu     k0,k0,64
       dsll       k0,k0,0x10
       daddiu     k0,k0,146
      
      Changes to this more efficient version:
      
       lui        k0,0x40
       addiu      k0,k0,146
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13778/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      f7d9afea
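      A sketch of the rewritten check described above (not verbatim kernel
      code):

          static int uasm_in_compat_space_p(long addr)
          {
                  /* true when the value equals its own 32-bit sign extension,
                   * i.e. it can be built with a 32-bit LUI/ADDIU pair */
                  return addr == (int)addr;
          }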
  19. 21 Jul, 2016 · 1 commit
  20. 06 Jul, 2016 · 1 commit
    • MIPS: Remove cpu_has_safe_index_cacheops · c00ab489
      Ralf Baechle authored
      Very early versions of the 1004K had a hardware issue that made index
      cache ops unsafe, so they had to be avoided and hit ops used instead.
      This may significantly slow down cache maintenance operations.  Only
      very early FPGA versions of the 1004K were affected so let's get rid
      of the workaround which was only implemented for the DMA cache
      maintenance operations anyway.
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      c00ab489