1. 13 September 2016, 2 commits
  2. 27 August 2016, 1 commit
  3. 19 August 2016, 1 commit
    • MIPS: KVM: Check for pfn noslot case · ba913e4f
      James Hogan committed
      When mapping a page into the guest we error check using is_error_pfn(),
      however this doesn't detect a value of KVM_PFN_NOSLOT, indicating an
      error HVA for the page. This can only happen on MIPS right now due to
      unusual memslot management (e.g. being moved / removed / resized), or
      with an Enhanced Virtual Memory (EVA) configuration where the default
      KVM_HVA_ERR_* and kvm_is_error_hva() definitions are unsuitable (fixed
      in a later patch). This case will be treated as a pfn of zero, mapping
      the first page of physical memory into the guest.
      
      It would appear the MIPS KVM port wasn't updated prior to being merged
      (in v3.10) to take commit 81c52c56 ("KVM: do not treat noslot pfn as
      a error pfn") into account (merged v3.8), which converted a bunch of
      is_error_pfn() calls to is_error_noslot_pfn(). Switch to using
      is_error_noslot_pfn() instead to catch this case properly.
      
      Fixes: 858dd5d4 ("KVM/MIPS32: MMU/TLB operations for the Guest.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Cc: <stable@vger.kernel.org> # 3.10.y-
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      ba913e4f
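
      A minimal sketch of the corrected check described above, for illustration
      only (map_guest_gfn() is a hypothetical wrapper; the actual change sits in
      the MIPS KVM TLB fault path):

          #include <linux/errno.h>
          #include <linux/kvm_host.h>

          static int map_guest_gfn(struct kvm *kvm, gfn_t gfn, kvm_pfn_t *out_pfn)
          {
                  kvm_pfn_t pfn = gfn_to_pfn(kvm, gfn);

                  /* is_error_pfn() misses KVM_PFN_NOSLOT; is_error_noslot_pfn()
                   * catches both the error and the no-slot cases. */
                  if (is_error_noslot_pfn(pfn))
                          return -EFAULT;

                  *out_pfn = pfn;
                  return 0;
          }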
  4. 12 August 2016, 4 commits
    • MIPS: KVM: Propagate kseg0/mapped tlb fault errors · 9b731bcf
      James Hogan committed
      Propagate errors from kvm_mips_handle_kseg0_tlb_fault() and
      kvm_mips_handle_mapped_seg_tlb_fault(), usually triggering an internal
      error since they normally indicate the guest accessed bad physical
      memory or the commpage in an unexpected way.
      
      Fixes: 858dd5d4 ("KVM/MIPS32: MMU/TLB operations for the Guest.")
      Fixes: e685c689 ("KVM/MIPS32: Privileged instruction/target branch emulation.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Cc: <stable@vger.kernel.org> # 3.10.x-
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      9b731bcf
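
      A hedged sketch of the error propagation described above (refill_kseg0()
      is a hypothetical wrapper around the real helper; the surrounding
      emulation code is omitted):

          #include <linux/kvm_host.h>

          static enum emulation_result refill_kseg0(struct kvm_vcpu *vcpu,
                                                    struct kvm_run *run,
                                                    unsigned long badvaddr)
          {
                  /* A failure now surfaces as an internal error instead of
                   * being silently ignored. */
                  if (kvm_mips_handle_kseg0_tlb_fault(badvaddr, vcpu) < 0) {
                          run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
                          return EMULATE_FAIL;
                  }
                  return EMULATE_DONE;
          }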
    • MIPS: KVM: Fix gfn range check in kseg0 tlb faults · 0741f52d
      James Hogan committed
      Two consecutive gfns are loaded into host TLB, so ensure the range check
      isn't off by one if guest_pmap_npages is odd.
      
      Fixes: 858dd5d4 ("KVM/MIPS32: MMU/TLB operations for the Guest.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Cc: <stable@vger.kernel.org> # 3.10.x-
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      0741f52d
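
      A sketch of the off-by-one fix: each refill covers an aligned even/odd
      pair of guest pages, so the bound must also cover the odd member of the
      pair (check_kseg0_gfn() is a hypothetical helper built on names used by
      the MIPS KVM code):

          #include <linux/errno.h>
          #include <linux/kvm_host.h>

          static int check_kseg0_gfn(struct kvm *kvm, unsigned long badvaddr)
          {
                  gfn_t gfn = KVM_GUEST_CPHYSADDR(badvaddr) >> PAGE_SHIFT;

                  /* gfn | 1 covers the second page of the pair, so an odd
                   * guest_pmap_npages no longer slips past the check. */
                  if ((gfn | 1) >= kvm->arch.guest_pmap_npages)
                          return -EINVAL;
                  return 0;
          }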
    • MIPS: KVM: Add missing gfn range check · 8985d503
      James Hogan committed
      kvm_mips_handle_mapped_seg_tlb_fault() calculates the guest frame number
      based on the guest TLB EntryLo values, however it is not range checked
      to ensure it lies within the guest_pmap. If the physical memory the
      guest refers to is out of range then dump the guest TLB and emit an
      internal error.
      
      Fixes: 858dd5d4 ("KVM/MIPS32: MMU/TLB operations for the Guest.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Cc: <stable@vger.kernel.org> # 3.10.x-
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      8985d503
    • MIPS: KVM: Fix mapped fault broken commpage handling · c604cffa
      James Hogan committed
      kvm_mips_handle_mapped_seg_tlb_fault() appears to map the guest page at
      virtual address 0 to PFN 0 if the guest has created its own mapping
      there. The intention is unclear, but it may have been an attempt to
      protect the zero page from being mapped to anything but the comm page in
      code paths you wouldn't expect from genuine commpage accesses (guest
      kernel mode cache instructions on that address, hitting trapping
      instructions when executing from that address with a coincidental TLB
      eviction during the KVM handling, and guest user mode accesses to that
      address).
      
      Fix this to check for mappings exactly at KVM_GUEST_COMMPAGE_ADDR (it
      may not be at address 0 since commit 42aa12e7 ("MIPS: KVM: Move
      commpage so 0x0 is unmapped")), and set the corresponding EntryLo to be
      interpreted as 0 (invalid).
      
      Fixes: 858dd5d4 ("KVM/MIPS32: MMU/TLB operations for the Guest.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Cc: <stable@vger.kernel.org> # 3.10.x-
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      c604cffa
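
      A hedged sketch of the behaviour described above (guest_entrylo() is a
      hypothetical helper; the real logic sits in
      kvm_mips_handle_mapped_seg_tlb_fault()):

          #include <linux/kvm_host.h>

          static unsigned long guest_entrylo(unsigned long guest_va,
                                             unsigned long entrylo)
          {
                  /* A guest mapping of the commpage address itself is forced
                   * invalid rather than being pointed at guest PFN 0. */
                  if (guest_va == KVM_GUEST_COMMPAGE_ADDR)
                          return 0;
                  return entrylo;
          }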
  5. 04 August 2016, 2 commits
    • dma-mapping: use unsigned long for dma_attrs · 00085f1e
      Krzysztof Kozlowski committed
      The dma-mapping core and the implementations do not change the DMA
      attributes passed by pointer.  Thus the pointer can point to const data.
      However the attributes do not have to be a bitfield.  Instead unsigned
      long will do fine:
      
      1. This is just simpler.  Both in terms of reading the code and setting
         attributes.  Instead of initializing local attributes on the stack
         and passing pointer to it to dma_set_attr(), just set the bits.
      
      2. It brings safeness and checking for const correctness because the
         attributes are passed by value.
      
      Semantic patches for this change (at least most of them):
      
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
      
          @@
          f(...,
          - struct dma_attrs *attrs
          + unsigned long attrs
          , ...)
          {
          ...
          }
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
      
      and
      
          // Options: --all-includes
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
          type t;
      
          @@
          t f(..., struct dma_attrs *attrs);
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
      
      Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
      Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
      Acked-by: Mark Salter <msalter@redhat.com> [c6x]
      Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
      Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
      Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
      Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
      Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
      Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
      Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
      Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
      Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
      Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
      Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
      Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
      Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      00085f1e
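
      A brief before/after illustration of the API change (alloc_wc_buffer()
      and the choice of DMA_ATTR_WRITE_COMBINE are only examples):

          #include <linux/dma-mapping.h>

          /* Before: a struct dma_attrs had to be initialised on the stack:
           *
           *      struct dma_attrs attrs;
           *      init_dma_attrs(&attrs);
           *      dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
           *      buf = dma_alloc_attrs(dev, size, &dma, GFP_KERNEL, &attrs);
           */

          /* After: attributes are just bits in an unsigned long. */
          static void *alloc_wc_buffer(struct device *dev, size_t size,
                                       dma_addr_t *dma)
          {
                  return dma_alloc_attrs(dev, size, dma, GFP_KERNEL,
                                         DMA_ATTR_WRITE_COMBINE);
          }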
    • tree-wide: replace config_enabled() with IS_ENABLED() · 97f2645f
      Masahiro Yamada committed
      The use of config_enabled() against config options is ambiguous.  In
      practical terms, config_enabled() is equivalent to IS_BUILTIN(), but the
      author might have used it for the meaning of IS_ENABLED().  Using
      IS_ENABLED(), IS_BUILTIN(), IS_MODULE() etc.  makes the intention
      clearer.
      
      This commit replaces config_enabled() with IS_ENABLED() where possible.
      This commit is only touching bool config options.
      
      I noticed two cases where config_enabled() is used against a tristate
      option:
      
       - config_enabled(CONFIG_HWMON)
        [ drivers/net/wireless/ath/ath10k/thermal.c ]
      
       - config_enabled(CONFIG_BACKLIGHT_CLASS_DEVICE)
        [ drivers/gpu/drm/gma500/opregion.c ]
      
      I did not touch them because they should be converted to IS_BUILTIN()
      in order to keep the logic, but I was not sure it was the authors'
      intention.
      
      Link: http://lkml.kernel.org/r/1465215656-20569-1-git-send-email-yamada.masahiro@socionext.com
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Stas Sergeev <stsp@list.ru>
      Cc: Matt Redfearn <matt.redfearn@imgtec.com>
      Cc: Joshua Kinard <kumba@gentoo.org>
      Cc: Jiri Slaby <jslaby@suse.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Cc: "Dmitry V. Levin" <ldv@altlinux.org>
      Cc: yu-cheng yu <yu-cheng.yu@intel.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Johannes Berg <johannes@sipsolutions.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Will Drewry <wad@chromium.org>
      Cc: Nikolay Martynov <mar.kolya@gmail.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
      Cc: Rafal Milecki <zajec5@gmail.com>
      Cc: James Cowgill <James.Cowgill@imgtec.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Alex Smith <alex.smith@imgtec.com>
      Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
      Cc: Qais Yousef <qais.yousef@imgtec.com>
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Mikko Rapeli <mikko.rapeli@iki.fi>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Brian Norris <computersforpeace@gmail.com>
      Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Cc: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Roland McGrath <roland@hack.frob.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Kalle Valo <kvalo@qca.qualcomm.com>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: Tony Wu <tung7970@gmail.com>
      Cc: Huaitong Han <huaitong.han@intel.com>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andrea Gelmini <andrea.gelmini@gelma.net>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Rabin Vincent <rabin@rab.in>
      Cc: "Maciej W. Rozycki" <macro@imgtec.com>
      Cc: David Daney <david.daney@cavium.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      97f2645f
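
      A short illustration of the difference, assuming a hypothetical tristate
      option CONFIG_FOO built as a module (CONFIG_FOO=m):

          #include <linux/kconfig.h>
          #include <linux/types.h>

          static void kconfig_helpers_example(void)
          {
                  bool old_way = config_enabled(CONFIG_FOO); /* false: only true for =y  */
                  bool builtin = IS_BUILTIN(CONFIG_FOO);     /* false: same test, clearer */
                  bool module  = IS_MODULE(CONFIG_FOO);      /* true                      */
                  bool enabled = IS_ENABLED(CONFIG_FOO);     /* true: matches =y or =m    */
          }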
  6. 03 August 2016, 6 commits
  7. 02 August 2016, 23 commits
    • MIPS: Use CPHYSADDR to implement mips32 __pa · 0d8d83d0
      Paul Burton committed
      Use CPHYSADDR to implement the __pa macro converting from a virtual to a
      physical address for MIPS32, much as is already done for MIPS64 (though
      without the complication of having both compatibility & XKPHYS
      segments).
      
      This allows for __pa to work regardless of whether the address being
      translated is in kseg0 or kseg1, unlike the previous subtraction based
      approach which only worked for addresses in kseg0. Working for kseg1
      addresses is important if __pa is used on addresses allocated by
      dma_alloc_coherent, where on systems with non-coherent I/O we provide
      addresses in kseg1. If this address is then used with
      dma_map_single_attrs then it is provided to virt_to_page, which in turn
      calls virt_to_phys which is a wrapper around __pa. The result is that we
      end up with a physical address 0x20000000 bytes (ie. the size of kseg0)
      too high.
      
      In addition to providing consistency with MIPS64 & fixing the kseg1 case
      above this has the added bonus of generating smaller code for systems
      implementing MIPS32r2 & beyond, where a single ext instruction can
      extract the physical address rather than needing to load an immediate
      into a temp register & subtract it. This results in ~1.3KB savings for a
      boston_defconfig kernel adjusted to set CONFIG_32BIT=y.
      
      This patch does not change the EVA case, which may or may not have
      similar issues around handling both cached & uncached addresses but is
      beyond the scope of this patch.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13836/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      0d8d83d0
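
      A hypothetical worked example on MIPS32, where PAGE_OFFSET is 0x80000000
      and CPHYSADDR() simply masks off the segment bits:

          #include <asm/addrspace.h>

          /* Both aliases of the same physical page now give the same result: */
          unsigned long cached   = CPHYSADDR(0x80001000UL); /* kseg0 -> 0x00001000 */
          unsigned long uncached = CPHYSADDR(0xa0001000UL); /* kseg1 -> 0x00001000 */

          /* The old subtraction-based __pa() would have returned 0x20001000
           * for the kseg1 alias, i.e. 512MB (the size of kseg0) too high. */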
    • MIPS: Octeon: Dlink_dsr-1000n.dts: add more leds. · 5c315e39
      Aaro Koskinen committed
      Add more leds discovered by reverse engineering. Labels are according
      to markings in the mechanics.
      Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
      Cc: linux-mips@linux-mips.org
      Cc: devicetree@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13466/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      5c315e39
    • MIPS: Octeon: Clean up GPIO definitions in dlink_dsr-1000n.dts. · e1b7d0e2
      Aaro Koskinen committed
      Clean up GPIO definitions in dlink_dsr-1000n.dts.
      Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
      Cc: linux-mips@linux-mips.org
      Cc: devicetree@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13465/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      e1b7d0e2
    • MIPS: Octeon: Delete built-in DTB pruning code for D-Link DSR-1000N. · 86bee12f
      Aaro Koskinen committed
      Users will get more complete functionality by using the appended DTB,
      so delete the legacy booting support for this board.
      Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
      Cc: linux-mips@linux-mips.org
      Cc: devicetree@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13464/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      86bee12f
    • MIPS: store the appended dtb address in a variable · 15f37e15
      Jonas Gorski committed
      Instead of rewriting the arguments to match the UHI spec, store the
      address of an appended or UHI-supplied dtb in fw_supplied_dtb.
      
      That way the original bootloader arguments are kept intact while still
      making the use of an appended dtb invisible for mach code.
      
      Mach code can still find out if it is an appended dtb by comparing
      fw_arg1 with fw_supplied_dtb.
      Signed-off-by: Jonas Gorski <jogo@openwrt.org>
      Cc: Kevin Cernekee <cernekee@gmail.com>
      Cc: Florian Fainelli <f.fainelli@gmail.com>
      Cc: John Crispin <john@phrozen.org>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Alban Bedel <albeu@free.fr>
      Cc: Daniel Gimpelevich <daniel@gimpelevich.san-francisco.ca.us>
      Cc: Antony Pavlov <antonynpavlov@gmail.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13699/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      15f37e15
    • MIPS: ZBOOT: copy appended dtb to the end of the kernel · b8f54f2c
      Jonas Gorski committed
      Instead of rewriting the arguments, just move the appended dtb to where
      the decompressed kernel expects it. This eliminates the need for special
      casing vmlinuz.bin appended dtb files.
      Signed-off-by: Jonas Gorski <jogo@openwrt.org>
      Cc: Kevin Cernekee <cernekee@gmail.com>
      Cc: Florian Fainelli <f.fainelli@gmail.com>
      Cc: John Crispin <john@phrozen.org>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Alban Bedel <albeu@free.fr>
      Cc: Daniel Gimpelevich <daniel@gimpelevich.san-francisco.ca.us>
      Cc: Antony Pavlov <antonynpavlov@gmail.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13698/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      b8f54f2c
    • MIPS: ralink: fix spis group pinmux · 79977894
      Álvaro Fernández Rojas committed
      The pwm function for spis conflicts with uart2 and uart1. Fix this by
      changing it to pwm_uart2, which reflects the real use of these pins with
      this pinmux (2 for pwm and 2 for uart).
      Signed-off-by: Álvaro Fernández Rojas <noltari@gmail.com>
      Cc: john@phrozen.org
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13369/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      79977894
    • MIPS: Factor o32 specific code into signal_o32.c · d1e63c94
      Harvey Hunt committed
      The commit ebb5e78c ("MIPS: Initial implementation of a VDSO")
      caused building a 64 bit kernel with support for n32 and not o32
      to produce a build error:
      
      arch/mips/kernel/signal32.c:415:11: error: ‘vdso_image_o32’ undeclared here (not in a function)
        .vdso  = &vdso_image_o32,
      
      Fix this by moving the o32 specific code into signal_o32.c and
      updating the Makefile accordingly.
      Signed-off-by: Harvey Hunt <harvey.hunt@imgtec.com>
      Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
      Cc: Alex Smith <alex@alex-smith.me.uk>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13690/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      d1e63c94
    • MIPS: non-exec stack & heap when non-exec PT_GNU_STACK is present · 1a770b85
      Paul Burton committed
      The stack and heap have both been executable by default on MIPS until
      now. This patch changes the default to be non-executable, but only for
      ELF binaries with a non-executable PT_GNU_STACK header present. This
      does apply to both the heap & the stack, despite the name PT_GNU_STACK,
      and this matches the behaviour of other architectures like ARM & x86.
      
      Current MIPS toolchains do not produce the PT_GNU_STACK header, which
      means that we can rely upon this patch not changing the behaviour of
      existing binaries. The new default will only take effect for newly
      compiled binaries once toolchains are updated to support PT_GNU_STACK,
      and since those binaries are newly compiled they can be compiled
      expecting the change in default behaviour. Again this matches the way in
      which the ARM & x86 architectures handled their implementations of
      non-executable memory.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: Maciej Rozycki <maciej.rozycki@imgtec.com>
      Cc: Faraz Shahbazker <faraz.shahbazker@imgtec.com>
      Cc: Raghu Gandham <raghu.gandham@imgtec.com>
      Cc: Matthew Fortune <matthew.fortune@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13765/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      1a770b85
    • MIPS: Use per-mm page to execute branch delay slot instructions · 432c6bac
      Paul Burton committed
      In some cases the kernel needs to execute an instruction from the delay
      slot of an emulated branch instruction. These cases include:
      
        - Emulated floating point branch instructions (bc1[ft]l?) for systems
          which don't include an FPU, or upon which the kernel is run with the
          "nofpu" parameter.
      
        - MIPSr6 systems running binaries targeting older revisions of the
          architecture, which may include branch instructions whose encodings
          are no longer valid in MIPSr6.
      
      Executing instructions from such delay slots is done by writing the
      instruction to memory followed by a trap, as part of an "emuframe", and
      executing it. This avoids the requirement of an emulator for the entire
      MIPS instruction set. Prior to this patch such emuframes are written to
      the user stack and executed from there.
      
      This patch moves FP branch delay emuframes off of the user stack and
      into a per-mm page. Allocating a page per-mm leaves userland with access
      to only what it had access to previously, and compared to other
      solutions is relatively simple.
      
      When a thread requires a delay slot emulation, it is allocated a frame.
      A thread may only have one frame allocated at any one time, since it may
      only ever be executing one instruction at any one time. In order to
      ensure that we can free the allocated frame later, its index is recorded
      in struct thread_struct. In the typical case, after executing the delay
      slot instruction we'll execute a break instruction with the BRK_MEMU
      code. This traps back to the kernel & leads to a call to do_dsemulret
      which frees the allocated frame & moves the user PC back to the
      instruction that would have executed following the emulated branch.
      In some cases the delay slot instruction may be invalid, such as a
      branch, or may trigger an exception. In these cases the BRK_MEMU break
      instruction will not be hit. In order to ensure that frames are freed
      this patch introduces dsemul_thread_cleanup() and calls it to free any
      allocated frame upon thread exit. If the instruction generated an
      exception & leads to a signal being delivered to the thread, or indeed
      if a signal simply happens to be delivered to the thread whilst it is
      executing from the struct emuframe, then we need to take care to exit
      the frame appropriately. This is done by either rolling back the user PC
      to the branch or advancing it to the continuation PC prior to signal
      delivery, using dsemul_thread_rollback(). If this were not done then a
      sigreturn would return to the struct emuframe, and if that frame had
      meanwhile been used in response to an emulated branch instruction within
      the signal handler then we would execute the wrong user code.
      
      Whilst a user could theoretically place something like a compact branch
      to self in a delay slot and cause their thread to become stuck in an
      infinite loop with the frame never being deallocated, this would:
      
        - Only affect the user's single process.
      
        - Be architecturally invalid since there would be a branch in the
          delay slot, which is forbidden.
      
        - Be extremely unlikely to happen by mistake, and provide a program
          with no more ability to harm the system than a simple infinite loop
          would.
      
      If a thread requires a delay slot emulation & no frame is available to
      it (ie. the process has enough other threads that all frames are
      currently in use) then the thread joins a waitqueue. It will sleep until
      a frame is freed by another thread in the process.
      
      Since we now know whether a thread has an allocated frame due to our
      tracking of its index, the cookie field of struct emuframe is removed as
      we can be more certain whether we have a valid frame. Since a thread may
      only ever have a single frame at any given time, the epc field of struct
      emuframe is also removed & the PC to continue from is instead stored in
      struct thread_struct. Together these changes simplify & shrink struct
      emuframe somewhat, allowing twice as many frames to fit into the page
      allocated for them.
      
      The primary benefit of this patch is that we are now free to mark the
      user stack non-executable where that is possible.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: Maciej Rozycki <maciej.rozycki@imgtec.com>
      Cc: Faraz Shahbazker <faraz.shahbazker@imgtec.com>
      Cc: Raghu Gandham <raghu.gandham@imgtec.com>
      Cc: Matthew Fortune <matthew.fortune@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13764/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      432c6bac
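
      A hedged sketch of an emulation frame after the cleanup described above;
      the field layout is illustrative, not a verbatim copy of the kernel's
      structure:

          #include <asm/inst.h>

          struct emuframe {
                  mips_instruction  emul;     /* instruction from the branch delay slot  */
                  mips_instruction  badinst;  /* break BRK_MEMU: traps to do_dsemulret() */
          };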
    • MIPS: Modify error handling · 33799a6d
      Amitoj Kaur Chawla committed
      debugfs_create_file returns NULL on error so an IS_ERR test is
      incorrect here and a NULL check is required.
      
      The Coccinelle semantic patch used to make this change is as follows:
      @@
      expression e;
      @@
      
        e = debugfs_create_file(...);
      if(
      -    IS_ERR(e)
      +    !e
          )
          {
        <+...
        return
      - PTR_ERR(e)
      + -ENOMEM
        ;
        ...+>
        }
      Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com>
      Cc: julia.lawall@lip6.fr
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13834/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      33799a6d
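
      A small before/after sketch of the corrected pattern (create_state_entry()
      and state_fops are hypothetical):

          #include <linux/debugfs.h>
          #include <linux/errno.h>

          static const struct file_operations state_fops;

          static int create_state_entry(struct dentry *dir)
          {
                  struct dentry *d;

                  d = debugfs_create_file("state", 0444, dir, NULL, &state_fops);
                  if (!d)      /* was: if (IS_ERR(d)) return PTR_ERR(d); */
                          return -ENOMEM;

                  return 0;
          }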
    • MIPS: Select HAVE_KVM for MIPS64_R{2,6} · 40a2df49
      James Hogan committed
      We are now able to support KVM T&E with MIPS32 guests on some MIPS64r2
      and MIPS64r6 hosts, so select HAVE_KVM so it can be enabled.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      40a2df49
    • MIPS: KVM: Reset CP0_PageMask during host TLB flush · a700434d
      James Hogan committed
      KVM sometimes flushes host TLB entries, reading each one to check if it
      corresponds to a guest KSeg0 address. In the absence of EntryHi.EHInv
      bits to invalidate the whole entry, the entries will be set to unique
      virtual addresses in KSeg0 (which is not TLB mapped), spaced 2*PAGE_SIZE
      apart.
      
      The TLB read however will clobber the CP0_PageMask register with
      whatever page size that TLB entry had, and that same page size will be
      written back into the TLB entry along with the unique address.
      
      This would cause breakage when transparent huge pages are enabled on
      64-bit host kernels, since huge page entries will overlap other nearby
      entries when separated by only 2*PAGE_SIZE, causing a machine check
      exception.
      
      Fix this by restoring the old CP0_PageMask value (which should be set to
      the normal page size) after reading the TLB entry if we're going to go
      ahead and invalidate it.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      a700434d
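
      A hedged sketch of the sequence described above (the real loop lives in
      the MIPS KVM host TLB flush code; index selection and hazard handling are
      abbreviated):

          #include <asm/mipsregs.h>
          #include <asm/hazards.h>

          static void invalidate_tlb_index(int idx, unsigned long unique_kseg0_va)
          {
                  unsigned long old_pagemask = read_c0_pagemask();

                  write_c0_index(idx);
                  tlb_read();                        /* clobbers CP0_PageMask */

                  /* ... if EntryHi matches a guest KSeg0 address ... */
                  write_c0_entryhi(unique_kseg0_va); /* unique, non-TLB-mapped */
                  write_c0_entrylo0(0);
                  write_c0_entrylo1(0);
                  write_c0_pagemask(old_pagemask);   /* the fix: restore page size */
                  mtc0_tlbw_hazard();
                  tlb_write_indexed();
                  tlbw_use_hazard();
          }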
    • MIPS: KVM: Fix ptr->int cast via KVM_GUEST_KSEGX() · 8296963e
      James Hogan committed
      kvm_mips_trans_replace() passes a pointer to KVM_GUEST_KSEGX(). This
      breaks on 64-bit builds due to the cast of that 64-bit pointer to a
      different sized 32-bit int. Cast the pointer argument to an unsigned
      long to work around the warning.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      8296963e
    • MIPS: KVM: Sign extend MFC0/RDHWR results · 172e02d1
      James Hogan committed
      When emulating MFC0 instructions to load 32-bit values from guest COP0
      registers and the RDHWR instruction to read the CC (Count) register,
      sign extend the result to comply with the MIPS64 architecture. The
      result must be in canonical 32-bit form or the guest may malfunction.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      172e02d1
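
      A hypothetical helper illustrating the fix: the 32-bit value is
      sign-extended before landing in the guest GPR, as MIPS64 requires for
      canonical 32-bit values:

          #include <linux/kvm_host.h>

          static void set_guest_gpr32(struct kvm_vcpu *vcpu, unsigned int rt, u32 val)
          {
                  /* e.g. 0x80000000 becomes 0xffffffff80000000 on a 64-bit host */
                  vcpu->arch.gprs[rt] = (s32)val;
          }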
    • MIPS: KVM: Fix 64-bit big endian dynamic translation · 5808844f
      James Hogan committed
      The MFC0 and MTC0 instructions in the guest which cause traps can be
      replaced with 32-bit loads and stores to the commpage, however on big
      endian 64-bit builds the offset needs to have 4 added so as to
      load/store the least significant half of the long instead of the most
      significant half.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      5808844f
    • MIPS: KVM: Fail if ebase doesn't fit in CP0_EBase · 2a06dab8
      James Hogan committed
      Fail if the address of the allocated exception base doesn't fit into the
      CP0_EBase register. This can happen on MIPS64 if CP0_EBase.WG isn't
      implemented but RAM is available outside of the range of KSeg0.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      2a06dab8
    • MIPS: KVM: Use 64-bit CP0_EBase when appropriate · 0d17aea5
      James Hogan committed
      Update the KVM entry point to write CP0_EBase as a 64-bit register when
      it is 64-bits wide, and to set the WG (write gate) bit if it exists in
      order to write bits 63:30 (or 31:30 on MIPS32).
      
      Prior to MIPS64r6 it was UNDEFINED to perform a 64-bit read or write of
      a 32-bit COP0 register. Since this is dynamically generated code,
      generate the right type of access depending on whether the kernel is
      64-bit and cpu_has_ebase_wg.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      0d17aea5
    • MIPS: KVM: Set CP0_Status.KX on MIPS64 · 1d756942
      James Hogan committed
      Update the KVM entry code to set the CP0_Entry.KX bit on 64-bit kernels.
      This is important to allow the entry code, running in kernel mode, to
      access the full 64-bit address space right up to the point of entering
      the guest, and immediately after exiting the guest, so it can safely
      restore & save the guest context from 64-bit segments.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      1d756942
    • MIPS: KVM: Make entry code MIPS64 friendly · e41637d8
      James Hogan committed
      The MIPS KVM entry code (originally kvm_locore.S, later locore.S, and
      now entry.c) has never quite been right when built for 64-bit, using
      32-bit instructions when 64-bit instructions were needed for handling
      64-bit registers and pointers. Fix several cases of this now.
      
      The changes roughly fall into the following categories.
      
      - COP0 scratch registers contain guest register values and the VCPU
        pointer, and are themselves full width. Similarly CP0_EPC and
        CP0_BadVAddr registers are full width (even though technically we
        don't support 64-bit guest address spaces with trap & emulate KVM).
        Use MFC0/MTC0 for accessing them.
      
      - Handling of stack pointers and the VCPU pointer must match the pointer
        size of the kernel ABI (always o32 or n64), so use ADDIU.
      
      - The CPU number in thread_info, and the guest_{user,kernel}_asid arrays
        in kvm_vcpu_arch are all 32 bit integers, so use lw (instead of LW) to
        load them.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e41637d8
    • MIPS: KVM: Use kmap instead of CKSEG0ADDR() · 28cc5bd5
      James Hogan committed
      There are several unportable uses of CKSEG0ADDR() in MIPS KVM, which
      implicitly assume that a host physical address will be in the low 512MB
      of the physical address space (accessible in KSeg0). These assumptions
      don't hold for highmem or on 64-bit kernels.
      
      When interpreting the guest physical address when reading or overwriting
      a trapping instruction, use kmap_atomic() to get a usable virtual
      address to access guest memory, which is portable to 64-bit and highmem
      kernels.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      28cc5bd5
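
      A hedged sketch of the portable access pattern (read_guest_inst() is a
      hypothetical helper; kmap_atomic() is the mechanism the commit switches
      to in place of CKSEG0ADDR()):

          #include <linux/highmem.h>
          #include <linux/mm.h>
          #include <linux/kvm_host.h>

          static u32 read_guest_inst(kvm_pfn_t pfn, unsigned long gpa)
          {
                  void *vaddr = kmap_atomic(pfn_to_page(pfn));
                  u32 inst = *(u32 *)(vaddr + offset_in_page(gpa));

                  kunmap_atomic(vaddr);
                  return inst;
          }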
    • MIPS: KVM: Use virt_to_phys() to get commpage PFN · cfacaced
      James Hogan committed
      Calculate the PFN of the commpage using virt_to_phys() instead of
      CPHYSADDR(). This is more portable as kzalloc() may allocate from XKPhys
      instead of KSeg0 on 64-bit kernels, which CPHYSADDR() doesn't handle.
      This is sufficient for highmem kernels too since kzalloc() will allocate
      from lowmem in KSeg0.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      cfacaced
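
      A hedged sketch of the PFN calculation (commpage_pfn() is a hypothetical
      helper around the two calls named in the commit message):

          #include <linux/pfn.h>
          #include <asm/io.h>

          static unsigned long commpage_pfn(void *commpage)
          {
                  /* Works for XKPhys (64-bit) and lowmem addresses alike,
                   * unlike CPHYSADDR() which assumes KSeg0. */
                  return PFN_DOWN(virt_to_phys(commpage));
          }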
    • MIPS: Fix definition of KSEGX() for 64-bit · 6002bdd3
      James Hogan committed
      The KSEGX() macro is defined to 32-bit sign extend the address argument
      and logically AND the result with 0xe0000000, with the final result
      usually compared against one of the CKSEG macros. However the literal
      0xe0000000 is unsigned as the high bit is set, and is therefore
      zero-extended on 64-bit kernels, resulting in the sign extension bits of
      the argument being masked to zero. This results in the odd situation
      where:
      
        KSEGX(CKSEG0) != CKSEG0
        (0xffffffff80000000 & 0x00000000e0000000) != 0xffffffff80000000
      
      Fix this by 32-bit sign extending the 0xe0000000 literal using
      _ACAST32_.
      
      This will help some MIPS KVM code handling 32-bit guest addresses to
      work on 64-bit host kernels, but will also affect KSEGX in
      dec_kn01_be_backend() on a 64-bit DECstation kernel, and the SiByte DMA
      page ops KSEGX check in clear_page() and copy_page() on 64-bit SB1
      kernels, neither of which appear to be designed with 64-bit segments in
      mind anyway.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Maciej W. Rozycki <macro@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      6002bdd3
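
      A sketch of the macro change in <asm/addrspace.h> (the *_OLD/*_NEW names
      exist only for this illustration):

          #include <asm/addrspace.h>

          #define KSEGX_OLD(a)  ((_ACAST32_(a)) & 0xe0000000)
          #define KSEGX_NEW(a)  ((_ACAST32_(a)) & _ACAST32_(0xe0000000))

          /* On a 64-bit kernel:
           *   KSEGX_OLD(CKSEG0) == 0x0000000080000000   (sign bits masked away)
           *   KSEGX_NEW(CKSEG0) == 0xffffffff80000000   (== CKSEG0, as expected)
           */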
  8. 29 July 2016, 1 commit
    • MIPS: c-r4k: Use SMP calls for CM indexed cache ops · 11f76903
      James Hogan committed
      The MIPS Coherence Manager (CM) can propagate address-based ("hit")
      cache operations to other cores in the coherent system, alleviating
      software of the need to use SMP calls, however indexed cache operations
      are not propagated by hardware since doing so makes no sense for
      separate caches.
      
      Update r4k_op_needs_ipi() to report that only hit cache operations are
      globalized by the CM, requiring indexed cache operations to be
      globalized by software via an SMP call.
      
      r4k_on_each_cpu() previously had a special case for CONFIG_MIPS_MT_SMP,
      intended to avoid the SMP calls when the only other CPUs in the system
      were other VPEs in the same core, and hence sharing the same caches.
      This was changed by commit cccf34e9 ("MIPS: c-r4k: Fix cache
      flushing for MT cores") to apparently handle multi-core multi-VPE
      systems, but it focussed mainly on hit cache ops, so the SMP calls were
      still disabled entirely for CM systems.
      
      This doesn't normally cause problems, but tests can be written to hit
      these corner cases by using multiple threads, or changing task
      affinities to force the process to migrate cores. For example the
      failure of mprotect RW->RX to globally sync icaches (via
      flush_cache_range) can be detected by modifying and mprotecting a code
      page on one core, and migrating to a different core to execute from it.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/13807/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      11f76903