1. 03 Feb 2017, 21 commits
    • KVM: MIPS/MMU: Invalidate stale GVA PTEs on TLBW · aba85929
      James Hogan authored
      Implement invalidation of specific pairs of GVA page table entries in
      one or both of the GVA page tables. This is used when existing mappings
      are replaced in the guest TLB by emulated TLBWI/TLBWR instructions. Due
      to the sharing of page tables in the host kernel range, we should be
      careful not to allow host pages to be invalidated.
      
      Add a helper kvm_mips_walk_pgd() which can be used to walk either the
      GPA (future patches) or GVA page tables, optionally allocating page
      tables along the way when they don't exist.
      
      GPA page table walking will need to be protected by the kvm->mmu_lock,
      so we also add a small MMU page cache in each KVM VCPU, like that found
      for other architectures but smaller. This allows enough pages to be
      pre-allocated to handle a single fault without holding the lock,
      allowing the helper to run with the lock held without having to handle
      allocation failures.
      
      Using the same mechanism for GVA allows the same code to be used, and
      allows it to use the same cache of allocated pages if the GPA walk
      didn't need to allocate any new tables.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      aba85929
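      A minimal sketch of the walker described above, assuming the shape of the
      eventual kvm_mips_walk_pgd() (the memory-cache allocator name and the
      folded-pud page table layout are assumptions rather than verbatim code):

      static pte_t *walk_pgd_sketch(pgd_t *pgd, struct kvm_mmu_memory_cache *cache,
                                    unsigned long addr)
      {
          pud_t *pud;
          pmd_t *pmd;

          pgd += pgd_index(addr);
          if (pgd_none(*pgd))
              return NULL;                    /* top level is pre-initialised */

          pud = pud_offset(pgd, addr);
          if (pud_none(*pud)) {
              pmd_t *new_pmd;

              if (!cache)                     /* caller didn't want allocation */
                  return NULL;
              new_pmd = mmu_memory_cache_alloc(cache);   /* assumed cache helper */
              pmd_init((unsigned long)new_pmd,
                       (unsigned long)invalid_pte_table);
              pud_populate(NULL, pud, new_pmd);
          }

          pmd = pmd_offset(pud, addr);
          if (pmd_none(*pmd)) {
              pte_t *new_pte;

              if (!cache)
                  return NULL;
              new_pte = mmu_memory_cache_alloc(cache);   /* assumed cache helper */
              clear_page(new_pte);
              pmd_populate_kernel(NULL, pmd, new_pte);
          }

          return pte_offset_kernel(pmd, addr);
      }

      With the cache topped up before kvm->mmu_lock is taken, the walk can then
      run with the lock held without having to handle allocation failure.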
    • KVM: MIPS/MMU: Invalidate GVA PTs on ASID changes · a31b50d7
      James Hogan authored
      Implement invalidation of large ranges of virtual addresses from GVA
      page tables in response to a guest ASID change (immediately for guest
      kernel page table, lazily for guest user page table).
      
      We iterate through a range of page tables invalidating entries and
      freeing fully invalidated tables. To minimise overhead, the exact ranges
      invalidated depend on the flags argument to kvm_mips_flush_gva_pt(),
      which also allows it to be used in future KVM_CAP_SYNC_MMU patches in
      response to GPA changes, which unlike guest TLB mapping changes also
      affect guest KSeg0 mappings.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      a31b50d7
    • KVM: MIPS/TLB: Generalise host TLB invalidate to kernel ASID · 57e3869c
      James Hogan authored
      Refactor kvm_mips_host_tlb_inv() to also be able to invalidate any
      matching TLB entry in the kernel ASID rather than assuming only the TLB
      entries in the user ASID can change. Two new bool user/kernel arguments
      allow the caller to indicate whether the mapping should affect each of
      the ASIDs for guest user/kernel mode.
      
      - kvm_mips_invalidate_guest_tlb() (used by TLBWI/TLBWR emulation) can
        now invalidate any corresponding TLB entry in both the kernel ASID
        (guest kernel may have accessed any guest mapping), and the user ASID
        if the entry being replaced is in guest USeg (where guest user may
        also have accessed it).
      
      - The tlbmod fault handler (and the KSeg0 / TLB mapped / commpage fault
        handlers in later patches) can now invalidate the corresponding TLB
        entry in whichever ASID is currently active, since only a single page
        table will have been updated anyway.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      57e3869c
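      A hedged sketch of the resulting interface and its call sites; the exact
      prototype and the guest_va_in_useg() check are assumptions based on the
      description above:

      /* Assumed prototype after the change: the two bools select which guest
       * ASID(s), user and/or kernel, the host TLB invalidation should cover. */
      int kvm_mips_host_tlb_inv(struct kvm_vcpu *vcpu, unsigned long va,
                                bool user, bool kern);

      /* TLBWI/TLBWR emulation: guest kernel may have accessed any mapping,
       * guest user only mappings in USeg. */
      kvm_mips_host_tlb_inv(vcpu, va, guest_va_in_useg(va), true);

      /* Fault handlers: only the currently active page table was updated. */
      kvm_mips_host_tlb_inv(vcpu, va, !kernel_mode, kernel_mode);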
    • KVM: MIPS/TLB: Fix off-by-one in TLB invalidate · f3a8603f
      James Hogan authored
      kvm_mips_host_tlb_inv() uses the TLBP instruction to probe the host TLB
      for an entry matching the given guest virtual address, and determines
      whether a match was found based on whether CP0_Index > 0. This is
      technically incorrect as an index of 0 (with the high bit clear) is a
      perfectly valid TLB index.
      
      This is harmless at the moment due to the use of at least 1 wired TLB
      entry for the KVM commpage; however, we will soon be ridding ourselves
      of that particular wired entry, so let's fix the condition in case the
      entry needing invalidation does land at TLB index 0.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      f3a8603f
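      The condition is easiest to see in a sketch of the probe sequence
      (illustrative only; va_to_entryhi() is an assumed helper combining the
      VPN and ASID to look up). After TLBP, CP0_Index has its top bit set on a
      miss, so a signed read is negative on a miss and >= 0 on a hit,
      including a perfectly valid hit at index 0:

      static void host_tlb_inv_sketch(unsigned long va, unsigned long asid)
      {
          int idx;

          write_c0_entryhi(va_to_entryhi(va, asid));
          mtc0_tlbw_hazard();
          tlb_probe();
          tlb_probe_hazard();
          idx = read_c0_index();

          if (idx >= 0) {       /* previously "idx > 0", which missed index 0 */
              /* Overwrite the matching entry with a unique, invalid one. */
              write_c0_entryhi(UNIQUE_ENTRYHI(idx));
              write_c0_entrylo0(0);
              write_c0_entrylo1(0);
              mtc0_tlbw_hazard();
              tlb_write_indexed();
              tlbw_use_hazard();
          }
      }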
    • KVM: MIPS: Add fast path TLB refill handler · a7cfa7ac
      James Hogan authored
      Use functions from the general MIPS TLB exception vector generation code
      (tlbex.c) to construct a fast path TLB refill handler similar to the
      general one, but cut down and capable of preserving K0 and K1.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      a7cfa7ac
    • KVM: MIPS: Support NetLogic KScratch registers · 29b500b5
      James Hogan authored
      tlbex.c uses the implementation dependent $22 CP0 register group on
      NetLogic cores, with the help of the c0_kscratch() helper. Allow these
      registers to be allocated by the KVM entry code too instead of assuming
      KScratch registers are all $31, which will also allow pgd_reg to be
      handled since it is allocated that way.
      
      We also drop the masking of kscratch_mask with 0xfc, as it is redundant
      for the standard KScratch registers (Config4.KScrExist won't have the
      low 2 bits set anyway), and apparently not necessary for NetLogic.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      29b500b5
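      A sketch of the helper referred to above (the specific CPU cases are an
      assumption; the point is that NetLogic keeps its scratch registers in the
      implementation-dependent CP0 $22 group while other cores use $31):

      static int c0_kscratch(void)
      {
          switch (current_cpu_type()) {
          case CPU_XLP:
          case CPU_XLR:
              return 22;        /* NetLogic implementation-dependent group */
          default:
              return 31;        /* architectural KScratch registers */
          }
      }

      The KVM entry code can then emit, with register and select illustrative:
          UASM_i_MTC0(&p, scratch, c0_kscratch(), scratch_sel);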
    • KVM: MIPS/T&E: Activate GVA page tables in guest context · 7faa6eec
      James Hogan authored
      Activate the GVA page tables when in guest context. This will allow the
      normal Linux TLB refill handler to fill from it when guest memory is
      read, as well as preventing accidental reading from user memory.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      7faa6eec
    • KVM: MIPS/T&E: Allocate GVA -> HPA page tables · f7f1427d
      James Hogan authored
      Allocate GVA -> HPA page tables for guest kernel and guest user mode on
      each VCPU, to allow for fast path TLB refill handling to be added later.
      
      In the process kvm_arch_vcpu_init() needs updating to pass on any error
      from the vcpu_init() callback.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      f7f1427d
    • KVM: MIPS: Wire up vcpu uninit · 630766b3
      James Hogan authored
      Wire up a vcpu uninit implementation callback. This will be used for the
      clean up of GVA->HPA page tables.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      630766b3
    • KVM: MIPS/T&E: active_mm = init_mm in guest context · a7ebb2e4
      James Hogan authored
      Set init_mm as the active_mm and update mm_cpumask(current->mm) to
      reflect that it isn't active when in guest context. This prevents cache
      management code from attempting cache flushes on host virtual addresses
      while in guest context, for example due to cache management IPIs or
      later when a write of dynamically translated code hits copy-on-write.
      
      We do this using helpers in static kernel code to avoid having to export
      init_mm to modules.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      a7ebb2e4
    • KVM: MIPS/T&E: Restore host asid on return to host · 91cdee57
      James Hogan authored
      We only need the guest ASID loaded while in guest context, i.e. while
      running guest code and while handling guest exits. We load the guest
      ASID when entering the guest, however we restore the host ASID later
      than necessary, when the VCPU state is saved i.e. vcpu_put() or slightly
      earlier if preempted after returning to the host.
      
      This mismatch is both unpleasant and causes redundant host ASID restores
      in kvm_trap_emul_vcpu_put(). Let's explicitly restore the host ASID when
      returning to the host, and not bother restoring it when the VCPU is
      scheduled back in unless we're already in guest context.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      91cdee57
    • KVM: MIPS: Add vcpu_run() & vcpu_reenter() callbacks · a2c046e4
      James Hogan authored
      Add implementation callbacks for entering the guest (vcpu_run()) and
      reentering the guest (vcpu_reenter()), allowing implementation specific
      operations to be performed before entering the guest or after returning
      to the host without cluttering kvm_arch_vcpu_ioctl_run().
      
      This allows the T&E specific lazy user GVA flush to be moved into
      trap_emul.c, along with disabling of the HTW. We also move
      kvm_mips_deliver_interrupts() as VZ will need to restore the guest timer
      state prior to delivering interrupts.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      a2c046e4
    • KVM: MIPS: Remove duplicated ASIDs from vcpu · c550d539
      James Hogan authored
      The kvm_vcpu_arch structure contains both mm_structs for allocating MMU
      contexts (primarily the ASID) but it also copies the resulting ASIDs
      into guest_{user,kernel}_asid[] arrays which are referenced from uasm
      generated code.
      
      This duplication doesn't seem to serve any purpose, and it gets in the
      way of generalising the ASID handling across guest kernel/user modes, so
      let's just extract the ASID straight out of the mm_struct on demand, and
      in fact there are convenient cpu_context() and cpu_asid() macros for
      doing so.
      
      To reduce the verbosity of this code we also add kern_mm and user_mm
      local variables where the kernel and user mm_structs are used.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      c550d539
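      In code terms the change amounts to something like the sketch below
      (guest_kernel_mm/guest_user_mm are the existing mm_struct members, and
      cpu_context()/cpu_asid() come from asm/mmu_context.h):

      static unsigned long guest_asid_sketch(struct kvm_vcpu *vcpu, int cpu,
                                             bool kernel)
      {
          struct mm_struct *kern_mm = &vcpu->arch.guest_kernel_mm;
          struct mm_struct *user_mm = &vcpu->arch.guest_user_mm;

          /* No shadow guest_{user,kernel}_asid[] copies any more: read the
           * ASID straight out of the mm_struct's context on demand. */
          return kernel ? cpu_asid(cpu, kern_mm) : cpu_asid(cpu, user_mm);
      }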
    • KVM: MIPS/MMU: Move preempt/ASID handling to implementation · 1581ff3d
      James Hogan authored
      The MIPS KVM host and guest GVA ASIDs may need regenerating when
      scheduling a process in guest context, which is done from the
      kvm_arch_vcpu_load() / kvm_arch_vcpu_put() functions in mmu.c.
      
      However this is a fairly implementation specific detail. VZ for example
      may use GuestIDs instead of normal ASIDs to distinguish mappings
      belonging to different guests, and even on VZ without GuestID the root
      TLB will be used differently to trap & emulate.
      
      Trap & emulate GVA ASIDs only relate to the user part of the full
      address space, so can be left active during guest exit handling (guest
      context) to allow guest instructions to be easily read and translated.
      
      VZ root ASIDs however are for GPA mappings so can't be left active
      during normal kernel code. They also aren't useful for accessing guest
      virtual memory, and we should have CP0_BadInstr[P] registers available
      to provide encodings of trapping guest instructions anyway.
      
      Therefore move the ASID preemption handling into the implementation
      callback.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      1581ff3d
    • KVM: MIPS: Convert get/set_regs -> vcpu_load/put · a60b8438
      James Hogan authored
      Convert the get_regs() and set_regs() callbacks to vcpu_load() and
      vcpu_put(), which provide a cpu argument and more closely match the
      kvm_arch_vcpu_load() / kvm_arch_vcpu_put() that they are called by.
      
      This is in preparation for moving ASID management into the
      implementations.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      a60b8438
    • KVM: MIPS/MMU: Simplify ASID restoration · 1534b396
      James Hogan authored
      KVM T&E uses an ASID for guest kernel mode and an ASID for guest user
      mode. The current ASID is saved when the guest is scheduled out, and
      restored when scheduling back in, with checks for whether the ASID needs
      to be regenerated.
      
      This isn't really necessary as the ASID can be easily determined by the
      current guest mode, so let's simplify it to just read the required ASID
      from guest_kernel_asid or guest_user_asid even if the ASID hasn't been
      regenerated.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      1534b396
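      A hedged sketch of the simplified restore path described above (field and
      macro names as they existed at the time; hazard handling abbreviated):

      /* On vcpu_load, derive the ASID to run with from the current guest mode
       * instead of saving and restoring a copy of CP0_EntryHi. */
      if (KVM_GUEST_KERNEL_MODE(vcpu))
          write_c0_entryhi(vcpu->arch.guest_kernel_asid[cpu]);
      else
          write_c0_entryhi(vcpu->arch.guest_user_asid[cpu]);
      mtc0_tlbw_hazard();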
    • KVM: MIPS: Drop partial KVM_NMI implementation · 00104b41
      James Hogan authored
      MIPS incompletely implements the KVM_NMI ioctl to supposedly perform a
      CPU reset, but all it actually does is invalidate the ASIDs. It doesn't
      expose the KVM_CAP_USER_NMI capability which is supposed to indicate the
      presence of the KVM_NMI ioctl, and no user software actually uses it on
      MIPS.
      
      Since this is dead code that would technically need updating for GVA
      page table handling in upcoming patches, remove it now. If we wanted to
      implement NMI injection later it can always be done properly along with
      the KVM_CAP_USER_NMI capability, and if we wanted to implement a proper
      CPU reset it would be better done with a separate ioctl.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      00104b41
    • MIPS: Add return errors to protected cache ops · 7170bdc7
      James Hogan authored
      The protected cache ops contain no out-of-line fixup code to return an
      error code in the event of a fault, with the cache op being skipped in
      that case. For KVM, however, we'd like to detect this case: page
      faulting will be disabled, so it could happen during normal operation if
      the GVA page tables were flushed, and it needs to be handled by the
      caller.
      
      Add the out-of-line fixup code to load the error value -EFAULT into the
      return variable, and adapt the protected cache line functions to pass
      the error back to the caller.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      7170bdc7
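      A simplified sketch of the fixup pattern this adds, modelled on the
      protected_cache_op() macro in asm/r4kcache.h (exact directives and
      operand constraints may differ from the real macro):

      #define protected_cache_op_sketch(op, addr)                       \
      ({                                                                \
          int __err = 0;                                                \
          __asm__ __volatile__(                                         \
          "   .set    push                \n"                           \
          "   .set    noreorder           \n"                           \
          "1: cache   %1, (%2)            \n"                           \
          "2: .set    pop                 \n"                           \
          "   .section .fixup,\"ax\"      \n"                           \
          "3: li      %0, %3              \n" /* load -EFAULT */        \
          "   j       2b                  \n" /* skip the op */         \
          "    nop                        \n"                           \
          "   .previous                   \n"                           \
          "   .section __ex_table,\"a\"   \n"                           \
          "   "STR(PTR)" 1b, 3b           \n"                           \
          "   .previous"                                                \
          : "+r" (__err)                                                \
          : "i" (op), "r" (addr), "i" (-EFAULT));                       \
          __err;                                                        \
      })

      The protected cache line helpers then return this value so the caller
      (KVM) can handle the fault.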
    • MIPS: Export some tlbex internals for KVM to use · 722b4544
      James Hogan authored
      Export the TLB exception code generating functions so that KVM can
      construct a fast TLB refill handler for guest context without
      reinventing the wheel quite so much.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      722b4544
    • MIPS: uasm: Add include guards in asm/uasm.h · 93a93c24
      James Hogan authored
      Add include guards in asm/uasm.h to allow it to be safely used by a new
      header asm/tlbex.h in the next patch to expose TLB exception building
      functions for KVM to use.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      93a93c24
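      The change itself is just the standard guard pattern (the macro name
      shown is illustrative):

      #ifndef __ASM_UASM_H
      #define __ASM_UASM_H

      /* ... existing uasm declarations ... */

      #endif /* __ASM_UASM_H */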
    • MIPS: Export pgd/pmd symbols for KVM · ccf01516
      James Hogan authored
      Export pmd_init(), invalid_pmd_table and tlbmiss_handler_setup_pgd to
      GPL kernel modules so that MIPS KVM can use the inline page table
      management functions and switch between page tables:
      
      - pmd_init() will be used directly by KVM to initialise newly allocated
        pmd tables with invalid lower level table pointers.
      
      - invalid_pmd_table is used by pud_present(), pud_none(), and
        pud_clear(), which KVM will use to test and clear pud entries.
      
      - tlbmiss_handler_setup_pgd() will be called by KVM entry code to switch
        to the appropriate GVA page tables.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      ccf01516
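      In practice the commit boils down to exports along these lines (file
      placement assumed; the PMD-level symbols only exist when the PMD level is
      not folded):

      #ifndef __PAGETABLE_PMD_FOLDED
      EXPORT_SYMBOL_GPL(invalid_pmd_table);
      EXPORT_SYMBOL_GPL(pmd_init);
      #endif
      EXPORT_SYMBOL_GPL(tlbmiss_handler_setup_pgd);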
  2. 02 Feb 2017, 2 commits
  3. 05 Jan 2017, 2 commits
    • KVM: MIPS: Flush KVM entry code from icache globally · 32eb12a6
      James Hogan authored
      Flush the KVM entry code from the icache on all CPUs, not just the one
      that built the entry code.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Cc: <stable@vger.kernel.org> # 3.16.x-
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      32eb12a6
    • KVM: MIPS: Don't clobber CP0_Status.UX · 4c881451
      James Hogan authored
      On 64-bit kernels, MIPS KVM will clear CP0_Status.UX to prevent the
      guest (running in user mode) from accessing the 64-bit memory segments.
      However the previous value of CP0_Status.UX is never restored when
      exiting from the guest.
      
      If the user process uses 64-bit addressing (the n64 ABI) this can result
      in address error exceptions from the kernel if it needs to deliver a
      signal before returning to user mode, as the kernel will need to write a
      sigframe to high user addresses on the user stack which are disallowed
      by CP0_Status.UX=0.
      
      This is fixed by explicitly setting SX and UX again when exiting from
      the guest, and explicitly clearing those bits when returning to the
      guest. Having the SX and UX bits set when handling guest exits (rather
      than only when exiting to userland) will be helpful when we support VZ,
      since we shouldn't need to directly read or write guest memory, so it
      will be valid for cache management IPIs to access host user addresses.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Cc: <stable@vger.kernel.org> # 4.8.x-
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      4c881451
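      A C-level illustration of the intent (the real change is emitted by the
      uasm-built entry/exit code, so this is only a sketch):

      /* Entering the guest: guest user code must not be able to access the
       * 64-bit memory segments. */
      clear_c0_status(ST0_UX | ST0_SX);

      /* On every guest exit, not just when returning to userland: restore the
       * bits so the kernel can touch 64-bit user addresses again, e.g. to
       * write a sigframe high on an n64 user stack. */
      set_c0_status(ST0_UX | ST0_SX);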
  4. 25 Dec 2016, 3 commits
  5. 15 Dec 2016, 1 commit
  6. 14 Dec 2016, 1 commit
  7. 11 Dec 2016, 2 commits
  8. 30 Nov 2016, 1 commit
    • tcp: SOF_TIMESTAMPING_OPT_STATS option for SO_TIMESTAMPING · 1c885808
      Francis Yan authored
      This patch exports the sender chronograph stats via the socket
      SO_TIMESTAMPING channel. Currently we can instrument how long a
      particular application unit of data was queued in TCP by tracking
      SOF_TIMESTAMPING_TX_SOFTWARE and SOF_TIMESTAMPING_TX_SCHED. Having
      these sender chronograph stats exported simultaneously along with
      these timestamps allows further breaking down of the various sender
      limitations. For example, a video server can tell if a particular
      chunk of video on a connection takes a long time to deliver because
      TCP was experiencing a small receive window; before this patch that
      was not possible to tell without packet traces.
      
      To prepare these stats, the user needs to set
      SOF_TIMESTAMPING_OPT_STATS and SOF_TIMESTAMPING_OPT_TSONLY flags
      while requesting other SOF_TIMESTAMPING TX timestamps. When the
      timestamps are available in the error queue, the stats are returned
      in a separate control message of type SCM_TIMESTAMPING_OPT_STATS,
      in a list of TLVs (struct nlattr) of types: TCP_NLA_BUSY_TIME,
      TCP_NLA_RWND_LIMITED, TCP_NLA_SNDBUF_LIMITED. Unit is microsecond.
      Signed-off-by: Francis Yan <francisyyan@gmail.com>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1c885808
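      A hedged userspace sketch of how the new option is consumed; the parsing
      follows the description above, and the fallback define is an assumption
      for older libc headers:

      #include <stdio.h>
      #include <string.h>
      #include <sys/socket.h>
      #include <linux/net_tstamp.h>
      #include <linux/netlink.h>

      #ifndef SCM_TIMESTAMPING_OPT_STATS
      #define SCM_TIMESTAMPING_OPT_STATS 54   /* assumed, from asm-generic/socket.h */
      #endif

      /* Request TX timestamps plus the per-write TCP sender stats. */
      static int enable_tx_stats(int fd)
      {
          unsigned int val = SOF_TIMESTAMPING_TX_SCHED |
                             SOF_TIMESTAMPING_TX_SOFTWARE |
                             SOF_TIMESTAMPING_SOFTWARE |
                             SOF_TIMESTAMPING_OPT_TSONLY |
                             SOF_TIMESTAMPING_OPT_STATS;

          return setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &val, sizeof(val));
      }

      /* Walk the SCM_TIMESTAMPING_OPT_STATS control message (a list of struct
       * nlattr TLVs, values in microseconds) returned by
       * recvmsg(fd, msg, MSG_ERRQUEUE) alongside the timestamps. */
      static void print_tcp_stats(struct msghdr *msg)
      {
          struct cmsghdr *cm;

          for (cm = CMSG_FIRSTHDR(msg); cm; cm = CMSG_NXTHDR(msg, cm)) {
              struct nlattr *nla;
              long len;

              if (cm->cmsg_level != SOL_SOCKET ||
                  cm->cmsg_type != SCM_TIMESTAMPING_OPT_STATS)
                  continue;

              nla = (struct nlattr *)CMSG_DATA(cm);
              len = cm->cmsg_len - CMSG_LEN(0);
              while (len >= (long)sizeof(*nla) && nla->nla_len >= sizeof(*nla) &&
                     nla->nla_len <= len) {
                  if (nla->nla_len >= NLA_HDRLEN + sizeof(unsigned long long)) {
                      unsigned long long us;

                      memcpy(&us, (char *)nla + NLA_HDRLEN, sizeof(us));
                      printf("tcp stat type %u: %llu us\n",
                             (unsigned int)nla->nla_type, us);
                  }
                  len -= NLA_ALIGN(nla->nla_len);
                  nla = (struct nlattr *)((char *)nla + NLA_ALIGN(nla->nla_len));
              }
          }
      }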
  9. 25 Nov 2016, 1 commit
  10. 24 Nov 2016, 1 commit
    • MIPS: Mask out limit field when calculating wired entry count · 10313980
      Paul Burton authored
      Since MIPSr6 the Wired register is split into 2 fields, with the upper
      16 bits of the register indicating a limit on the value that the wired
      entry count in the bottom 16 bits of the register can take. This means
      that simply reading the wired register doesn't get us a valid TLB entry
      index any longer, and we instead need to retrieve only the lower 16 bits
      of the register. Introduce a new num_wired_entries() function which does
      this on MIPSr6 or higher and simply returns the value of the wired
      register on older architecture revisions, and make use of it when
      reading the number of wired entries.
      
      Since commit e710d666 ("MIPS: tlb-r4k: If there are wired entries,
      don't use TLBINVF") we have been using a non-zero number of wired
      entries to determine whether we should avoid use of the tlbinvf
      instruction (which would invalidate wired entries) and instead loop over
      TLB entries in local_flush_tlb_all(). This loop begins with the number
      of wired entries, or before this patch some large bogus TLB index on
      MIPSr6 systems. Thus since the aforementioned commit some MIPSr6 systems
      with FTLBs have been prone to leaving stale address translations in the
      FTLB & crashing in various weird & wonderful ways when we later observe
      the wrong memory.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: Matt Redfearn <matt.redfearn@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/14557/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      10313980
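      The helper described above is essentially the following; the 0xffff mask
      stands in for the named limit-field constant:

      static inline unsigned int num_wired_entries(void)
      {
          unsigned int wired = read_c0_wired();

          /* MIPSr6: low 16 bits are the wired count, upper bits the limit. */
          if (cpu_has_mips_r6)
              wired &= 0xffff;

          return wired;
      }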
  11. 23 Nov 2016, 1 commit
    • ptrace: Don't allow accessing an undumpable mm · 84d77d3f
      Eric W. Biederman authored
      It is the reasonable expectation that if an executable file is not
      readable there will be no way for a user without special privileges to
      read the file.  This is enforced in ptrace_attach but if ptrace
      is already attached before exec there is no enforcement for read-only
      executables.
      
      As the only way to read such an mm is through access_process_vm,
      spin a variant called ptrace_access_vm that will fail if the
      target process is not being ptraced by the current process, or if
      the current process did not have sufficient privileges to read the
      target process's mm when ptracing began.
      
      In the ptrace implementations replace access_process_vm by
      ptrace_access_vm.  There remain several ptrace sites that still use
      access_process_vm as they are reading the target executable's
      instructions (for kernel consumption) or register stacks.  As such it
      does not appear necessary to add a permission check to those calls.
      
      This bug has always existed in Linux.
      
      Fixes: v1.0
      Cc: stable@vger.kernel.org
      Reported-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      84d77d3f
  12. 17 Nov 2016, 1 commit
  13. 16 Nov 2016, 2 commits
    • locking/core, arch: Remove cpu_relax_lowlatency() · 5bd0b85b
      Christian Borntraeger authored
      As there are no users left, we can remove cpu_relax_lowlatency()
      implementations from every architecture.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Noam Camus <noamc@ezchip.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: xen-devel@lists.xenproject.org
      Cc: <linux-arch@vger.kernel.org>
      Link: http://lkml.kernel.org/r/1477386195-32736-6-git-send-email-borntraeger@de.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5bd0b85b
    • locking/core: Introduce cpu_relax_yield() · 79ab11cd
      Christian Borntraeger authored
      For spinning loops people often use barrier() or cpu_relax().
      For most architectures cpu_relax and barrier are the same, but on
      some architectures cpu_relax can add some latency.
      For example on power, sparc64 and arc, cpu_relax can shift the CPU
      towards other hardware threads in an SMT environment.
      On s390 cpu_relax does even more: it uses a hypercall to the
      hypervisor to give up the timeslice.
      In contrast to the SMT yielding this can result in larger latencies.
      In some places this latency is unwanted, so another variant
      "cpu_relax_lowlatency" was introduced. Before this is used in more
      and more places, let's revert the logic and provide a cpu_relax_yield
      that can be called in places where yielding is more important than
      latency. By default this is the same as cpu_relax on all architectures.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Noam Camus <noamc@ezchip.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/1477386195-32736-2-git-send-email-borntraeger@de.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      79ab11cd
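      Condensed into one sketch, the default introduced here is simply the
      following (each architecture's <asm/processor.h> gets its own definition;
      a generic fallback like this is only an illustration):

      #ifndef cpu_relax_yield
      #define cpu_relax_yield() cpu_relax()   /* default: no extra yielding */
      #endif

      s390 can then keep its hypercall-based yield behind cpu_relax_yield()
      while cpu_relax() itself stays cheap.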
  14. 05 Nov 2016, 1 commit