1. 26 November 2014 (2 commits)
  2. 23 November 2014 (4 commits)
    • PCI/MSI: Rename mask/unmask_msi_irq treewide · 280510f1
      Committed by Thomas Gleixner
      The PCI/MSI irq chip callbacks mask/unmask_msi_irq have been renamed
      to pci_msi_mask/unmask_irq to mark them PCI specific. Rename all usage
      sites. The conversion helper functions are kept around to avoid
      conflicts in next and will be removed after merging into mainline.
      
      Coccinelle-assisted conversion. No functional change.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: x86@kernel.org
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Cc: Murali Karicheri <m-karicheri2@ti.com>
      Cc: Thierry Reding <thierry.reding@gmail.com>
      Cc: Mohit Kumar <mohit.kumar@st.com>
      Cc: Simon Horman <horms@verge.net.au>
      Cc: Michal Simek <michal.simek@xilinx.com>
      Cc: Yijing Wang <wangyijing@huawei.com>
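      At a typical usage site the conversion is a one-line callback swap in
      an MSI irq_chip. A hedged sketch, not taken from any one driver (the
      struct name and other fields are illustrative):

        static struct irq_chip example_msi_chip = {
                .name           = "example-MSI",
                /* was: .irq_mask = mask_msi_irq */
                .irq_mask       = pci_msi_mask_irq,
                /* was: .irq_unmask = unmask_msi_irq */
                .irq_unmask     = pci_msi_unmask_irq,
        };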
    • PCI/MSI: Rename mask/unmask_msi_irq et al · 23ed8d57
      Committed by Thomas Gleixner
      mask/unmask_msi_irq and __mask_msi/msix_irq are PCI/MSI-specific
      functions and should be named accordingly. This is a preparatory patch
      to support MSI on non-PCI devices.
      
      Rename mask/unmask_msi_irq to pci_msi_mask/unmask_irq and document the
      functions. Provide conversion helpers.
      
      Rename __mask_msi/msix_irq to __pci_msi/msix_desc_mask so it's clear
      that they operate on msi_desc. Fix up the only user outside of
      pci/msi.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Grant Likely <grant.likely@linaro.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Yijing Wang <wangyijing@huawei.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
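      The conversion helpers mentioned above plausibly reduce to static
      inline wrappers that forward the old names to the new ones, so code
      still in flight in -next keeps compiling. A sketch under that
      assumption (signatures inferred from the changelog):

        void pci_msi_mask_irq(struct irq_data *data);
        void pci_msi_unmask_irq(struct irq_data *data);

        /* Temporary conversion helpers, removed after the merge window. */
        static inline void mask_msi_irq(struct irq_data *data)
        {
                pci_msi_mask_irq(data);
        }

        static inline void unmask_msi_irq(struct irq_data *data)
        {
                pci_msi_unmask_irq(data);
        }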
    • PCI/MSI: Rename write_msi_msg() to pci_write_msi_msg() · 83a18912
      Committed by Jiang Liu
      Rename write_msi_msg() to pci_write_msi_msg() to mark it as PCI
      specific.
      Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Grant Likely <grant.likely@linaro.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Yingjoe Chen <yingjoe.chen@mediatek.com>
      Cc: Yijing Wang <wangyijing@huawei.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • PCI/MSI: Rename __read_msi_msg() to __pci_read_msi_msg() · 891d4a48
      Committed by Jiang Liu
      Rename __read_msi_msg() to __pci_read_msi_msg() and kill the unused
      read_msi_msg(). This is preparation for separating generic MSI code
      from the PCI core.
      Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Grant Likely <grant.likely@linaro.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Yingjoe Chen <yingjoe.chen@mediatek.com>
      Cc: Yijing Wang <wangyijing@huawei.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  3. 22 November 2014 (2 commits)
  4. 12 November 2014 (2 commits)
  5. 07 November 2014 (1 commit)
    • PCI/MSI: Add pci_msi_ignore_mask to prevent writes to MSI/MSI-X Mask Bits · 38737d82
      Committed by Yijing Wang
      MSI-X vector Mask Bits are in MSI-X Tables in PCI memory space.  Xen PV
      guests can't write to those tables.  MSI vector Mask Bits are in PCI
      configuration space.  Xen PV guests can write to config space, but those
      writes are ignored.
      
      Commit 0e4ccb15 ("PCI: Add x86_msi.msi_mask_irq() and
      msix_mask_irq()") added a way to override default_mask_msi_irqs() and
      default_mask_msix_irqs() so they can be no-ops in Xen guests, but this is
      more complicated than necessary.
      
      Add "pci_msi_ignore_mask" in the core PCI MSI code.  If set,
      default_mask_msi_irqs() and default_mask_msix_irqs() return without doing
      anything.  This is less flexible, but much simpler.
      
      [bhelgaas: changelog]
      Signed-off-by: Yijing Wang <wangyijing@huawei.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: David Vrabel <david.vrabel@citrix.com>
      CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      CC: xen-devel@lists.xenproject.org
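      The whole mechanism reduces to one global flag and an early return.
      A minimal sketch using the names from the changelog (the function
      signature is an assumption):

        int pci_msi_ignore_mask;        /* set to 1 by Xen PV guest setup */

        static void default_mask_msi_irqs(struct msi_desc *desc, u32 flag)
        {
                if (pci_msi_ignore_mask)
                        return;         /* Xen PV: mask writes are futile */
                /* ... normal MSI mask bit manipulation ... */
        }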
  6. 19 October 2014 (3 commits)
    • sparc64: Do not define thread fpregs save area as zero-length array. · e2653143
      Committed by David S. Miller
      This breaks the stack end corruption detection facility.
      
      What that facility does is write a magic value to "end_of_stack()"
      and check to see if it gets overwritten.
      
      "end_of_stack()" is "task_thread_info(p) + 1", which for sparc64 is
      the beginning of the FPU register save area.
      
      So once the user uses the FPU, the magic value is overwritten and the
      debug checks trigger.
      
      Fix this by making the size explicit.
      
      The sizes we use for the fpsaved[], gsr[], and xfsr[] arrays limit
      us to 7 levels of FPU state saves.  Each FPU register set is 256
      bytes, so allocate 256 * 7 for the fpregs area.
      Reported-by: Meelis Roos <mroos@linux.ee>
      Signed-off-by: David S. Miller <davem@davemloft.net>
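      In terms of the thread_info layout, the fix replaces the zero-length
      array with an explicitly sized one. A before/after sketch (alignment
      attribute assumed):

        /* before: end_of_stack() pointed at &fpregs[0], so the stack-end
         * magic value lived inside the FPU save area */
        unsigned long fpregs[0] __attribute__((aligned(64)));

        /* after: 7 save levels x 256 bytes per FPU register set */
        unsigned long fpregs[(7 * 256) / sizeof(unsigned long)]
                        __attribute__((aligned(64)));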
    • sparc64: Fix corrupted thread fault code. · 84bd6d8b
      Committed by David S. Miller
      Every path that ends up at do_sparc64_fault() must install a valid
      FAULT_CODE_* bitmask in the per-thread fault code byte.
      
      Two paths leading to the label winfix_trampoline (which expects the
      FAULT_CODE_* mask in register %g4) were not doing so:
      
      1) For pre-hypervisor TLB protection violation traps, if we took
         the 'winfix_trampoline' path we wouldn't have %g4 initialized
         with the FAULT_CODE_* value yet, resulting in the TLB_TAG_ACCESS
         register address value being used instead.
      
      2) In the TSB miss path, when we notice that we are going to use a
         hugepage mapping but haven't allocated the hugepage TSB yet, we
         still have to take the window fixup case into consideration, and
         in that particular path we leave %g4 not set up properly.
      
      Errors of this sort were largely invisible previously, but after
      commit 4ccb9272 ("sparc64: sun4v TLB error power off events") we
      now have a fault_code mask bit (FAULT_CODE_BAD_RA) that triggers
      due to this bug.
      
      FAULT_CODE_BAD_RA triggers because this bit is set in TLB_TAG_ACCESS
      (see #1 above) and thus we get seemingly random bus errors triggered
      for user processes.
      
      Fixes: 4ccb9272 ("sparc64: sun4v TLB error power off events")
      Reported-by: Meelis Roos <mroos@linux.ee>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • x86,kvm,vmx: Preserve CR4 across VM entry · d974baa3
      Committed by Andy Lutomirski
      CR4 isn't constant; at least the TSD and PCE bits can vary.
      
      TBH, treating CR0 and CR3 as constant scares me a bit, too, but it looks
      like it's correct.
      
      This adds a branch and a read from cr4 to each vm entry.  Because it is
      extremely likely that consecutive entries into the same vcpu will have
      the same host cr4 value, this fixes up the vmcs instead of restoring cr4
      after the fact.  A subsequent patch will add a kernel-wide cr4 shadow,
      reducing the overhead in the common case to just two memory reads and a
      branch.
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: stable@vger.kernel.org
      Cc: Petr Matousek <pmatouse@redhat.com>
      Cc: Gleb Natapov <gleb@kernel.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
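      A sketch of the entry-path fixup described above (the cache field and
      helper names are assumptions):

        unsigned long cr4 = read_cr4();

        /* Consecutive entries on the same vcpu almost always see the same
         * host CR4, so only rewrite the VMCS field when the cache is stale. */
        if (unlikely(cr4 != vmx->host_state.vmcs_host_cr4)) {
                vmcs_writel(HOST_CR4, cr4);
                vmx->host_state.vmcs_host_cr4 = cr4;
        }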
  7. 17 October 2014 (3 commits)
  8. 16 October 2014 (5 commits)
  9. 15 October 2014 (5 commits)
    • arm: kvm: STRICT_MM_TYPECHECKS fix for user_mem_abort · 3d08c629
      Committed by Steve Capper
      Commit b8865767 ("ARM: KVM: user_mem_abort: support stage 2 MMIO
      page mapping") introduced some code in user_mem_abort that failed
      to compile if STRICT_MM_TYPECHECKS was enabled.
      
      This patch fixes up the failing comparison.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Reviewed-by: Kim Phillips <kim.phillips@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
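      For context, STRICT_MM_TYPECHECKS turns page-protection values into
      single-member structs, so direct ==/!= comparisons stop compiling and
      the unwrapped values must be compared instead. A generic sketch (the
      exact expressions in user_mem_abort may differ):

        /* with STRICT_MM_TYPECHECKS enabled: */
        typedef struct { unsigned long pgprot; } pgprot_t;
        #define pgprot_val(x)   ((x).pgprot)

        static bool is_device_mapping(pgprot_t mem_type)
        {
                /* mem_type == PAGE_S2_DEVICE would fail to compile */
                return pgprot_val(mem_type) == pgprot_val(PAGE_S2_DEVICE);
        }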
    • ARM: sunxi_defconfig: enable CONFIG_REGULATOR · 9a2ad529
      Committed by Olof Johansson
      Commit 97a13e52 ('net: phy: mdio-sun4i: don't select REGULATOR') removed
      the select of REGULATOR, which means that it now has to be explicitly
      enabled in the defconfig or things won't work very well.
      
      In particular, this fixes a problem with SD/MMC not probing on my A31-based
      board.
      
      Cc: Beniamino Galvani <b.galvani@gmail.com>
      Signed-off-by: Olof Johansson <olof@lixom.net>
    • sparc64: Fix FPU register corruption with AES crypto offload. · f4da3628
      Committed by David S. Miller
      The AES loops in arch/sparc/crypto/aes_glue.c use a scheme where the
      key material is preloaded into the FPU registers, and then we loop
      over and over doing the crypt operation, reusing those pre-cooked key
      registers.
      
      There are intervening blkcipher*() calls between the crypt operation
      calls.  And those might perform memcpy() and thus also try to use the
      FPU.
      
      The sparc64 kernel FPU usage mechanism is designed to allow such
      recursive uses, but with a catch.
      
      There has to be a trap between the two FPU-using threads of control.
      
      The mechanism works by allocating a slot for FPU saving at trap time
      when the FPU is already in use by the kernel.  Then if, within the
      trap handler, we try to use the FPU registers, the pre-trap FPU
      register state is saved into the slot.  Then at trap return time we
      notice this and restore the pre-trap FPU state.
      
      Over the long term there are various more involved ways we can make
      this work, but for a quick fix let's take advantage of the fact that
      the situation where this happens is very limited.
      
      All sparc64 chips that support the crypto instructions also use
      the Niagara4 memcpy routine, and that routine only uses the FPU
      for large copies where we can't get the source aligned properly
      to a multiple of 8 bytes.
      
      We look to see if the FPU is already in use in this context, and if so
      we use the non-large copy path which only uses integer registers.
      
      Furthermore, we also limit this special logic to when we are doing
      a kernel copy, rather than a user copy.
      Signed-off-by: David S. Miller <davem@davemloft.net>
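      A rough sketch of the resulting dispatch; the real check lives in the
      Niagara4 memcpy assembly, so every name below is illustrative:

        void *niagara4_memcpy(void *dst, const void *src, size_t len)
        {
                /* The FPU path only triggers for large copies that can't
                 * be source-aligned to 8 bytes; skip it when a kernel copy
                 * would nest inside live kernel FPU state. */
                if (is_large_unalignable(src, len) &&
                    !(is_kernel_copy() && kernel_fpu_in_use()))
                        return fpu_block_copy(dst, src, len);

                return integer_copy(dst, src, len);
        }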
    • x86: bpf_jit: fix two bugs in eBPF JIT compiler · e0ee9c12
      Committed by Alexei Starovoitov
      1.
      The JIT compiler uses a multi-pass approach to converge on the final
      image size, since x86 instructions are variable length. It starts
      with large gaps between instructions (so some jumps may use imm32
      instead of imm8) and iterates until the total program size is the
      same as in the previous pass. This algorithm only works if the
      program size is strictly decreasing. Programs that use the LD_ABS
      insn need additional code in the prologue, but it was not emitted
      during the 1st pass, so there was a chance that the 2nd pass would
      shrink imm32->imm8 jump offsets by the same number of bytes as the
      increase in the prologue, which could cause the algorithm to
      erroneously decide that the size had converged.
      Fix it by always emitting the largest prologue in the first pass,
      which is detected by the oldproglen == 0 check.
      Also change the error check condition 'proglen != oldproglen' to
      fail gracefully.
      
      2.
      While staring at the code I realized that the 64-byte buffer may not
      be enough when the 1st insn is large, so increase it to 128 to avoid
      buffer overflow (the theoretical maximum size of prologue+div is 109)
      and add a runtime check.
      
      Fixes: 62258278 ("net: filter: x86: internal BPF JIT")
      Reported-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Tested-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
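      A sketch of the convergence loop with both fixes folded in (the loop
      structure follows the changelog; do_jit, jit_alloc, and MAX_PASSES
      are assumed helper names):

        int oldproglen = 0, proglen, pass;      /* 0 forces the worst-case
                                                 * prologue in pass 1 */
        u8 *image = NULL;

        for (pass = 0; pass < MAX_PASSES; pass++) {
                proglen = do_jit(prog, oldproglen, image);
                if (image) {
                        if (proglen != oldproglen)
                                goto out_free;  /* diverged after alloc:
                                                 * fail gracefully */
                        break;
                }
                if (proglen == oldproglen)
                        /* converged: final pass emits into the image */
                        image = jit_alloc(proglen);
                oldproglen = proglen;
        }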
  10. 14 October 2014 (13 commits)