1. 22 Mar 2017, 1 commit
    • MIPS: Flush wrong invalid FTLB entry for huge page · 0115f6cb
      Authored by Huacai Chen
      On VTLB+FTLB platforms (such as Loongson-3A R2), FTLB's pagesize is
      usually configured the same as PAGE_SIZE. In such a case, a huge page
      entry is not suitable for writing into the FTLB.
      
      Unfortunately, when a huge page is created, its page table entries
      have not yet been created. The TLB refill handler will then fetch an
      invalid page table entry which has no "HUGE" bit, and this entry may be
      written to FTLB. Since it is invalid, TLB load/store handler will then
      use tlbwi to write the valid entry at the same place. However, the
      valid entry is a huge page entry which isn't suitable for FTLB.
      
      Our solution is to modify build_huge_handler_tail. Flush the invalid
      old entry (whether it is in FTLB or VTLB, this is in order to reduce
      branches) and use tlbwr to write the valid new entry.
      Signed-off-by: Rui Wang <wangr@lemote.com>
      Signed-off-by: Huacai Chen <chenhc@lemote.com>
      Cc: John Crispin <john@phrozen.org>
      Cc: Steven J. Hill <Steven.Hill@caviumnetworks.com>
      Cc: Fuxin Zhang <zhangfx@lemote.com>
      Cc: Zhangjin Wu <wuzhangjin@gmail.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: linux-mips@linux-mips.org
      Cc: stable@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/15754/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      0115f6cb
  2. 03 Feb 2017, 2 commits
    • MIPS: Export some tlbex internals for KVM to use · 722b4544
      Authored by James Hogan
      Export the TLB exception code generating functions so that KVM can
      construct a fast TLB refill handler for guest context without
      reinventing the wheel quite so much.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      722b4544
    • MIPS: Export pgd/pmd symbols for KVM · ccf01516
      Authored by James Hogan
      Export pmd_init(), invalid_pmd_table and tlbmiss_handler_setup_pgd to
      GPL kernel modules so that MIPS KVM can use the inline page table
      management functions and switch between page tables:
      
      - pmd_init() will be used directly by KVM to initialise newly allocated
        pmd tables with invalid lower level table pointers.
      
      - invalid_pmd_table is used by pud_present(), pud_none(), and
        pud_clear(), which KVM will use to test and clear pud entries.
      
      - tlbmiss_handler_setup_pgd() will be called by KVM entry code to switch
        to the appropriate GVA page tables.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      ccf01516
  3. 03 Jan 2017, 1 commit
  4. 04 Aug 2016, 1 commit
    • tree-wide: replace config_enabled() with IS_ENABLED() · 97f2645f
      Authored by Masahiro Yamada
      The use of config_enabled() against config options is ambiguous.  In
      practical terms, config_enabled() is equivalent to IS_BUILTIN(), but the
      author might have used it for the meaning of IS_ENABLED().  Using
      IS_ENABLED(), IS_BUILTIN(), IS_MODULE() etc.  makes the intention
      clearer.
      
      This commit replaces config_enabled() with IS_ENABLED() where possible.
      This commit is only touching bool config options.
      
      I noticed two cases where config_enabled() is used against a tristate
      option:
      
       - config_enabled(CONFIG_HWMON)
        [ drivers/net/wireless/ath/ath10k/thermal.c ]
      
       - config_enabled(CONFIG_BACKLIGHT_CLASS_DEVICE)
        [ drivers/gpu/drm/gma500/opregion.c ]
      
      I did not touch them because they should be converted to IS_BUILTIN()
      in order to keep the logic, but I was not sure it was the authors'
      intention.
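
      The defined/undefined test behind these macros folds to a compile-time
      1 or 0. The following is a minimal re-implementation of the
      preprocessor trick from the kernel's kconfig.h, shown here only as an
      illustration; CONFIG_FOO and CONFIG_BAR are stand-ins for real
      options:

      ```c
      #include <assert.h>

      /* Sketch of the kconfig.h machinery: an option set to y is a macro
       * defined to 1; an unset option is simply not defined at all. */
      #define __ARG_PLACEHOLDER_1 0,
      #define __take_second_arg(__ignored, val, ...) val
      #define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
      #define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
      #define __is_defined(x) ___is_defined(x)
      #define IS_BUILTIN(option) __is_defined(option)  /* true only for =y */

      #define CONFIG_FOO 1   /* stand-in for a built-in (=y) option */
      /* CONFIG_BAR deliberately left undefined (=n) */

      int main(void)
      {
          /* Defined-to-1 expands through the placeholder to 1 ... */
          assert(IS_BUILTIN(CONFIG_FOO) == 1);
          /* ... while an undefined option collapses to 0, not an error. */
          assert(IS_BUILTIN(CONFIG_BAR) == 0);
          return 0;
      }
      ```

      The real IS_ENABLED() additionally tests CONFIG_FOO_MODULE to cover
      =m, which config_enabled() never did; that difference is exactly the
      ambiguity this commit removes for bool options.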
      
      Link: http://lkml.kernel.org/r/1465215656-20569-1-git-send-email-yamada.masahiro@socionext.com
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Stas Sergeev <stsp@list.ru>
      Cc: Matt Redfearn <matt.redfearn@imgtec.com>
      Cc: Joshua Kinard <kumba@gentoo.org>
      Cc: Jiri Slaby <jslaby@suse.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Cc: "Dmitry V. Levin" <ldv@altlinux.org>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Johannes Berg <johannes@sipsolutions.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Will Drewry <wad@chromium.org>
      Cc: Nikolay Martynov <mar.kolya@gmail.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
      Cc: Rafal Milecki <zajec5@gmail.com>
      Cc: James Cowgill <James.Cowgill@imgtec.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Alex Smith <alex.smith@imgtec.com>
      Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
      Cc: Qais Yousef <qais.yousef@imgtec.com>
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Mikko Rapeli <mikko.rapeli@iki.fi>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Brian Norris <computersforpeace@gmail.com>
      Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Cc: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Roland McGrath <roland@hack.frob.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Kalle Valo <kvalo@qca.qualcomm.com>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: Tony Wu <tung7970@gmail.com>
      Cc: Huaitong Han <huaitong.han@intel.com>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andrea Gelmini <andrea.gelmini@gelma.net>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Rabin Vincent <rabin@rab.in>
      Cc: "Maciej W. Rozycki" <macro@imgtec.com>
      Cc: David Daney <david.daney@cavium.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      97f2645f
  5. 24 Jul 2016, 1 commit
  6. 28 May 2016, 2 commits
    • MIPS: Fix 64-bit HTW configuration · aa76042a
      Authored by James Hogan
      The Hardware page Table Walker (HTW) is being misconfigured on 64-bit
      kernels. The PWSize.PS (pointer size) bit determines whether pointers
      within directories are loaded as 32-bit or 64-bit addresses, but was
      never being set to 1 for 64-bit kernels where the unsigned long in pgd_t
      is 64-bits wide.
      
      This actually reduces rather than improves performance when the HTW is
      enabled on P6600, since the HTW is initiated frequently but the walks
      are all aborted, presumably due to bad intermediate pointers.
      
      Since we were already taking the width of the PTEs into account by
      setting PWSize.PTEW, which is the left shift applied to the page table
      index *in addition to* the native pointer size, we also need to reduce
      PTEW by 1 when PS=1. This is done by calculating PTEW based on the
      relative size of pte_t compared to pgd_t.
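
      As a back-of-the-envelope check (illustrative C, not the kernel's
      code; htw_ptew is a made-up helper), deriving PTEW from the size of a
      PTE relative to a directory pointer yields the extra shift of 1 for
      32-bit pointers and 0 once PS=1 makes pointers 8 bytes wide:

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Integer log2 for small powers of two. */
      static unsigned ilog2u(size_t v)
      {
          unsigned r = 0;
          while (v >>= 1)
              r++;
          return r;
      }

      /* PTEW is the left shift applied to the page table index on top of
       * the native pointer size, so it falls out of the ratio of the two. */
      static unsigned htw_ptew(size_t pte_size, size_t dir_ptr_size)
      {
          return ilog2u(pte_size) - ilog2u(dir_ptr_size);
      }

      int main(void)
      {
          /* 32-bit XPA-style kernel: 8-byte PTEs, 4-byte pointers. */
          assert(htw_ptew(8, 4) == 1);
          /* 64-bit kernel with PS=1: 8-byte PTEs, 8-byte pointers. */
          assert(htw_ptew(8, 8) == 0);
          return 0;
      }
      ```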
      
      Finally in order for the HTW to be used when PS=1, the appropriate
      XK/XS/XU bits corresponding to the different 64-bit segments need to be
      set in PWCtl. We enable only XU for now to enable walking for XUSeg.
      
      Supporting walking for XKSeg would be a bit more involved so is left for
      a future patch. It would either require the use of a per-CPU top level
      base directory if supported by the HTW (a bit like pgd_current but with
      a second entry pointing at swapper_pg_dir), or the HTW would prepend bit
      63 of the address to the global directory index which doesn't really
      match how we split user and kernel page directories.
      
      Fixes: cab25bc7 ("MIPS: Extend hardware table walking support to MIPS64")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13364/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      aa76042a
    • MIPS: Add 64-bit HTW fields · 6446e6cf
      Authored by James Hogan
      Add field definitions for some of the 64-bit specific Hardware page
      Table Walker (HTW) register fields in PWSize and PWCtl, in preparation
      for fixing the 64-bit HTW configuration.
      
      Also print these fields out along with the others in print_htw_config().
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13363/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      6446e6cf
  7. 13 May 2016, 10 commits
    • MIPS: mm: Panic if an XPA kernel is run without RIXI · e56c7e18
      Authored by Paul Burton
      XPA kernels hardcode for the presence of RIXI - the PTE format & its
      handling presume the RI & XI bits. Make this dependence explicit by
      panicking if we run on a system that violates it.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13125/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      e56c7e18
    • MIPS: mm: Don't do MTHC0 if XPA not present · 4b6f99d3
      Authored by James Hogan
      Performing an MTHC0 instruction without XPA being present will trigger a
      reserved instruction exception, therefore conditionalise the use of this
      instruction when building TLB handlers (build_update_entries()), and in
      __update_tlb().
      
      This allows an XPA kernel to run on non XPA hardware without that
      instruction implemented, just like it can run on XPA capable hardware
      without XPA in use (with the noxpa kernel argument) or with XPA not
      configured in hardware.
      
      [paul.burton@imgtec.com:
        - Rebase atop other TLB work.
        - Add "mm" to subject.
        - Handle the __kmap_pgprot case.]
      
      Fixes: c5b36783 ("MIPS: Add support for XPA.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13124/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      4b6f99d3
    • MIPS: mm: Simplify build_update_entries · 2caa89b4
      Authored by Paul Burton
      We can simplify build_update_entries by unifying the 36-bit physical
      addressing on MIPS32 case with the general case: use pte_off_
      variables in all cases & handle the trivial
      _PAGE_GLOBAL_SHIFT == 0 case in build_convert_pte_to_entrylo. This
      leaves XPA as the only special case.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13123/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      2caa89b4
    • MIPS: mm: Be more explicit about PTE mode bit handling · b4ebbb87
      Authored by Paul Burton
      The XPA case in iPTE_SW ORs software mode bits into the pte_low value
      (which is what actually ends up in the high 32 bits of EntryLo...). It
      does this presuming that only bits in the upper 16 bits of the 32 bit
      pte_low value will be set. Make this assumption explicit with a BUG_ON.
      
      A similar assumption is made for the hardware mode bits, which are
      ORed in with a single ori instruction. Make that assumption explicit
      BUG_ON too.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13122/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      b4ebbb87
    • MIPS: mm: Pass scratch register through to iPTE_SW · bbeeffec
      Authored by Paul Burton
      Rather than hardcode a scratch register for the XPA case in iPTE_SW,
      pass one through from the work registers allocated by the caller. This
      allows for the XPA path to function correctly regardless of the work
      registers in use.
      
      Without doing this there are cases (where KScratch registers are
      unavailable) in which iPTE_SW will incorrectly clobber $1 despite it
      already being in use for the PTE or PTE pointer.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13121/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      bbeeffec
    • MIPS: mm: Don't clobber $1 on XPA TLB refill · f3832196
      Authored by James Hogan
      For XPA kernels build_update_entries() uses $1 (at) as a scratch
      register, but doesn't arrange for it to be preserved, so it will always
      be clobbered by the TLB refill exception. Although this register
      normally has a very short lifetime that doesn't cross memory accesses,
      TLB refills due to instruction fetches (either on a page boundary or
      after preemption) could clobber live data, and it's easy to reproduce
      the clobber with a little bit of assembler code.
      
      Note that the use of a hardware page table walker will partly mask the
      problem, as the TLB refill handler will not always be invoked.
      
      This is fixed by avoiding the use of the extra scratch register. The
      pte_high parts (going into the lower half of the EntryLo registers) are
      loaded and manipulated separately so as to keep the PTE pointer around
      for the other halves (instead of storing in the scratch register), and
      the pte_low parts (going into the high half of the EntryLo registers)
      are masked with 0x00ffffff using an ext instruction (instead of loading
      0x00ffffff into the scratch register and AND'ing).
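
      The ext instruction extracts a contiguous bit field, so the
      0x00ffffff mask becomes a 24-bit extract from bit 0 with no scratch
      register needed. A small C model of that equivalence (mips_ext is a
      hypothetical model of the instruction's semantics, not a kernel
      function):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Model of MIPS "ext rt, rs, pos, size": extract `size` bits of rs
       * starting at bit `pos` into the low bits of the result. */
      static uint32_t mips_ext(uint32_t rs, unsigned pos, unsigned size)
      {
          uint32_t mask = (size < 32) ? ((1u << size) - 1u) : 0xffffffffu;
          return (rs >> pos) & mask;
      }

      int main(void)
      {
          uint32_t pte_low = 0xdeadbeefu;
          /* "ext rt, rs, 0, 24" replaces: li $at, 0x00ffffff; and rt, rs, $at */
          assert(mips_ext(pte_low, 0, 24) == (pte_low & 0x00ffffffu));
          return 0;
      }
      ```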
      
      [paul.burton@imgtec.com:
        - Rebase atop other TLB work.
        - Use ext instead of an sll, srl sequence.
        - Use cpu_has_xpa instead of #ifdefs.
        - Modify commit subject to include "mm".]
      
      Fixes: c5b36783 ("MIPS: Add support for XPA.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13120/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      f3832196
    • MIPS: mm: Fix MIPS32 36b physical addressing (alchemy, netlogic) · 7b2cb64f
      Authored by Paul Burton
      There are 2 distinct cases in which a kernel for a MIPS32 CPU
      (CONFIG_CPU_MIPS32=y) may use 64 bit physical addresses
      (CONFIG_PHYS_ADDR_T_64BIT=y):
      
        - 36 bit physical addressing as used by RMI Alchemy & Netlogic XLP/XLR
          CPUs.
      
        - MIPS32r5 eXtended Physical Addressing (XPA).
      
      These 2 cases are distinct in that they require different behaviour from
      the kernel - the EntryLo registers have different formats. Until Linux
      v4.1 we only supported the first case, with code conditional upon the 2
      aforementioned Kconfig variables being set. Commit c5b36783 ("MIPS:
      Add support for XPA.") added support for the second case, but did so by
      modifying the code that existed for the first case rather than treating
      the 2 cases as distinct. Since the EntryLo registers have different
      formats this breaks the 36 bit Alchemy/XLP/XLR case. Fix this by
      splitting the 2 cases, with XPA cases now being conditional upon
      CONFIG_XPA and the non-XPA case matching the code as it existed prior to
      commit c5b36783 ("MIPS: Add support for XPA.").
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reported-by: Manuel Lauss <manuel.lauss@gmail.com>
      Tested-by: Manuel Lauss <manuel.lauss@gmail.com>
      Fixes: c5b36783 ("MIPS: Add support for XPA.")
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: Maciej W. Rozycki <macro@linux-mips.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Alex Smith <alex.smith@imgtec.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: stable@vger.kernel.org # v4.1+
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13119/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      7b2cb64f
    • MIPS: mm: Standardise on _PAGE_NO_READ, drop _PAGE_READ · 780602d7
      Authored by Paul Burton
      Ever since support for RI/XI was implemented by commit 6dd9344c
      ("MIPS: Implement Read Inhibit/eXecute Inhibit") we've had a mixture of
      _PAGE_READ & _PAGE_NO_READ bits. Rather than keep both around, switch
      away from using _PAGE_READ to determine page presence & instead invert
      the use to _PAGE_NO_READ. Wherever we formerly had no definition for
      _PAGE_NO_READ, change what was _PAGE_READ to _PAGE_NO_READ. The end
      result is that we consistently use _PAGE_NO_READ to determine whether a
      page is readable, regardless of whether RI/XI is implemented.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: Maciej W. Rozycki <macro@linux-mips.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Alex Smith <alex.smith@imgtec.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13116/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      780602d7
    • MIPS: Fix HTW config on XPA kernel without LPA enabled · 14bc2414
      Authored by James Hogan
      The hardware page table walker (HTW) configuration is broken on XPA
      kernels where XPA couldn't be enabled (either nohtw or the hardware
      doesn't support it). This is because the PWSize.PTEW field (PTE width)
      was only set to 8 bytes (an extra shift of 1) in config_htw_params() if
      PageGrain.ELPA (enable large physical addressing) is set. On an XPA
      kernel though the size of PTEs is fixed at 8 bytes regardless of whether
      XPA could actually be enabled.
      
      Fix the initialisation of this field based on sizeof(pte_t) instead.
      
      Fixes: c5b36783 ("MIPS: Add support for XPA.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Steven J. Hill <sjhill@realitydiluted.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13113/
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      14bc2414
    • MIPS: Loongson-3: Fast TLB refill handler · 380cd582
      Authored by Huacai Chen
      Loongson-3A R2 has pwbase/pwfield/pwsize/pwctl registers in CP0 (this
      is very similar to HTW) and lwdir/lwpte/lddir/ldpte instructions which
      can be used for fast TLB refill.
      
      [ralf@linux-mips.org: Resolve conflict.]
      Signed-off-by: Huacai Chen <chenhc@lemote.com>
      Cc: Aurelien Jarno <aurelien@aurel32.net>
      Cc: Steven J. Hill <sjhill@realitydiluted.com>
      Cc: Fuxin Zhang <zhangfx@lemote.com>
      Cc: Zhangjin Wu <wuzhangjin@gmail.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/12754/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      380cd582
  8. 03 Apr 2016, 1 commit
  9. 24 Jan 2016, 1 commit
  10. 16 Jan 2016, 1 commit
    • mips, thp: remove infrastructure for handling splitting PMDs · b2787370
      Authored by Kirill A. Shutemov
      With new refcounting we don't need to mark PMDs splitting.  Let's drop
      code to handle this.
      
      pmdp_splitting_flush() is not needed too: on splitting PMD we will do
      pmdp_clear_flush() + set_pte_at().  pmdp_clear_flush() will do IPI as
      needed for fast_gup.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b2787370
  11. 11 Nov 2015, 5 commits
  12. 22 Jun 2015, 3 commits
  13. 17 Jun 2015, 1 commit
  14. 10 Apr 2015, 1 commit
  15. 01 Apr 2015, 1 commit
  16. 20 Mar 2015, 1 commit
  17. 18 Mar 2015, 1 commit
    • MIPS: Rearrange PTE bits into fixed positions. · be0c37c9
      Authored by Steven J. Hill
      This patch rearranges the PTE bits into fixed positions for R2
      and later cores. In the past, the TLB handling code did runtime
      checking of RI/XI and adjusted the shifts and rotates in order
      to fit the largest PFN value into the PTE. The checking now
      occurs when building the TLB handler, thus eliminating those
      checks. These new arrangements also define the largest possible
      PFN value that can fit in the PTE. HUGE page support is only
      available for 64-bit cores. Layouts of the PTE bits are now:
      
         64-bit, R1 or earlier:     CCC D V G [S H] M A W R P
         32-bit, R1 or earlier:     CCC D V G M A W R P
         64-bit, R2 or later:       CCC D V G RI/R XI [S H] M A W P
         32-bit, R2 or later:       CCC D V G RI/R XI M A W P
      
      [ralf@linux-mips.org: Fix another build error *rant* *rant*]
      Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/9353/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      be0c37c9
  18. 17 Feb 2015, 1 commit
  19. 16 Feb 2015, 1 commit
  20. 28 Nov 2014, 1 commit
  21. 25 Nov 2014, 1 commit
  22. 23 Oct 2014, 1 commit
    • MIPS: tlbex: Properly fix HUGE TLB Refill exception handler · 9e0f162a
      Authored by David Daney
      In commit 8393c524 (MIPS: tlbex: Fix a missing statement for
      HUGETLB), the TLB Refill handler was fixed so that non-OCTEON targets
      would work properly with huge pages.  The change was incorrect in that
      it broke the OCTEON case.
      
      The problem is shown here:
      
          xxx0:	df7a0000 	ld	k0,0(k1)
          .
          .
          .
          xxxc0:	df610000 	ld	at,0(k1)
          xxxc4:	335a0ff0 	andi	k0,k0,0xff0
          xxxc8:	e825ffcd 	bbit1	at,0x5,0x0
          xxxcc:	003ad82d 	daddu	k1,at,k0
          .
          .
          .
      
      In the non-octeon case there is a destructive test for the huge PTE
      bit, and then at 0, $k0 is reloaded (that is what the 8393c524
      patch added).
      
      In the octeon case, we modify k1 in the branch delay slot, but we
      never need k0 again, so the new load is not needed, but since k1 is
      modified, if we do the load, we load from a garbage location and then
      get a nested TLB Refill, which is seen in userspace as either SIGBUS
      or SIGSEGV (depending on the garbage).
      
      The real fix is to only do this reloading if it is needed, and never
      where it is harmful.
      Signed-off-by: David Daney <david.daney@cavium.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: Fuxin Zhang <zhangfx@lemote.com>
      Cc: Zhangjin Wu <wuzhangjin@gmail.com>
      Cc: stable@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/8151/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      9e0f162a
  23. 02 Aug 2014, 1 commit