1. 12 May 2016 (2 commits)
    • KVM: PPC: Book3S HV: Re-enable XICS fast path for irqfd-generated interrupts · b1a4286b
      Authored by Paul Mackerras
      Commit c9a5ecca ("kvm/eventfd: add arch-specific set_irq",
      2015-10-16) added the possibility for architecture-specific code
      to handle the generation of virtual interrupts in atomic context
      where possible, without having to schedule a work function.
      
      Since we can easily generate virtual interrupts on XICS without
      having to do anything worse than take a spinlock, we define a
      kvm_arch_set_irq_inatomic() for XICS.  We also remove kvm_set_msi()
      since it is not used any more.
      
      The one slightly tricky thing is that with the new interface we
      are not told whether the interrupt is an MSI (or other
      edge-sensitive interrupt) or level-sensitive.  The difference, as
      far as interrupt generation is concerned, is that for LSIs we
      have to set the asserted flag so the interrupt continues to fire
      until it is explicitly cleared.
      
      In fact the XICS code gets told which interrupts are LSIs by userspace
      when it configures the interrupt via the KVM_DEV_XICS_GRP_SOURCES
      attribute group on the XICS device.  To store this information, we add
      a new "lsi" field to struct ics_irq_state.  With that we can also do a
      better job of returning accurate values when reading the attribute
      group.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
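
      To make the LSI semantics concrete, here is a minimal userspace
      model of the latched "asserted" behaviour described above; the
      names are illustrative and this is not the actual ics code:

        #include <stdbool.h>
        #include <stdio.h>

        struct irq_state {
                bool lsi;       /* level-sensitive, as configured by userspace */
                bool asserted;  /* latched line level, meaningful for LSIs only */
        };

        /* For an LSI the asserted flag follows the line level and keeps
         * the interrupt pending until it is explicitly lowered; an MSI
         * is a one-shot edge with nothing to latch. */
        static void set_irq(struct irq_state *s, int level)
        {
                if (s->lsi)
                        s->asserted = (level != 0);
                if (level || s->asserted)
                        printf("interrupt pending (asserted=%d)\n", s->asserted);
                else
                        printf("interrupt cleared\n");
        }

        int main(void)
        {
                struct irq_state lsi = { .lsi = true };
                set_irq(&lsi, 1);       /* fires and stays asserted */
                set_irq(&lsi, 0);       /* explicitly cleared */
                return 0;
        }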
    • kvm: introduce KVM_MAX_VCPU_ID · 0b1b1dfd
      Authored by Greg Kurz
      The KVM_MAX_VCPUS define provides the maximum number of vCPUs per guest, and
      also the upper limit for vCPU ids. This is okay for all archs except PowerPC,
      which can have higher ids depending on the cpu/core/thread topology. In the
      worst case (single-threaded guest, host with 8 threads per core), it limits
      the maximum number of vCPUs to KVM_MAX_VCPUS / 8.
      
      This patch separates the vCPU numbering from the total number of vCPUs by
      introducing KVM_MAX_VCPU_ID, defined as the maximal valid vCPU id plus one.
      
      The corresponding KVM_CAP_MAX_VCPU_ID allows userspace to validate vCPU ids
      before passing them to KVM_CREATE_VCPU.
      
      This patch only implements KVM_MAX_VCPU_ID with a PowerPC-specific value;
      other archs continue to return KVM_MAX_VCPUS instead.
      Suggested-by: Radim Krcmar <rkrcmar@redhat.com>
      Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
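
      A hedged sketch of how userspace can use the new capability to
      validate an id before KVM_CREATE_VCPU (real KVM ioctls, but error
      handling is trimmed, the chosen id is purely illustrative, and the
      headers must be recent enough to define KVM_CAP_MAX_VCPU_ID):

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        int main(void)
        {
                int kvm = open("/dev/kvm", O_RDWR);
                int vm = ioctl(kvm, KVM_CREATE_VM, 0);

                /* Kernels without the capability return 0 here, in which
                 * case KVM_MAX_VCPUS remains the effective id limit. */
                int max_id = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPU_ID);
                if (max_id <= 0)
                        max_id = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS);

                int id = 8;     /* hypothetical: first thread of core 1 */
                if (id >= max_id) {
                        fprintf(stderr, "vcpu id %d out of range (max id %d)\n",
                                id, max_id - 1);
                        return 1;
                }
                return ioctl(vm, KVM_CREATE_VCPU, id) < 0;
        }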
  2. 11 May 2016 (4 commits)
  3. 05 April 2016 (1 commit)
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Authored by Kirill A. Shutemov
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
      ago with the promise that one day it would be possible to implement the
      page cache with bigger chunks than PAGE_SIZE.
      
      That promise never materialized, and it is unlikely it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it's a constant source of confusion whether the
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Switching globally to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using the
      script below.  For some reason, coccinelle doesn't patch header files;
      I've called spatch for them manually.
      
      The only adjustment after coccinelle is reverting the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
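
      As a worked example of what the semantic patch does to a typical
      helper (a hypothetical kernel-style function, not taken from any
      specific filesystem):

        /* Before the conversion: */
        static size_t bytes_left_in_page(loff_t pos, size_t count)
        {
                size_t offset = pos & (PAGE_CACHE_SIZE - 1);

                return min_t(size_t, count, PAGE_CACHE_SIZE - offset);
        }

        /* After the conversion: identical logic, page-cache spelling gone. */
        static size_t bytes_left_in_page(loff_t pos, size_t count)
        {
                size_t offset = pos & (PAGE_SIZE - 1);

                return min_t(size_t, count, PAGE_SIZE - offset);
        }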
  4. 29 March 2016 (3 commits)
  5. 26 March 2016 (1 commit)
  6. 23 March 2016 (2 commits)
  7. 22 March 2016 (3 commits)
  8. 18 March 2016 (4 commits)
  9. 16 March 2016 (4 commits)
    • powerpc: Fix unrecoverable SLB miss during restore_math() · 6e669f08
      Authored by Cyril Bur
      Commit 70fe3d98 ("powerpc: Restore FPU/VEC/VSX if previously used")
      introduced a call to restore_math() late in the syscall return path, after
      MSR_RI has been cleared. The MSR_RI flag is used to indicate whether the
      kernel can take another exception or not; a cleared MSR_RI flag indicates
      that the kernel cannot.
      
      Unfortunately, when a machine is under SLB pressure, an SLB miss can
      occur in restore_math(), which (with MSR_RI cleared) leads to an
      unrecoverable exception.
      
        Unrecoverable exception 4100 at c0000000000088d8
        cpu 0x0: Vector: 4100  at [c0000003fa473b20]
            pc: c0000000000088d8: .load_vr_state+0x70/0x110
            lr: c00000000000f710: .restore_math+0x130/0x188
            sp: c0000003fa473da0
           msr: 9000000002003030
          current = 0xc0000007f876f180
          paca    = 0xc00000000fff0000	 softe: 0	 irq_happened: 0x01
            pid   = 1944, comm = K08umountfs
        [link register   ] c00000000000f710 .restore_math+0x130/0x188
        [c0000003fa473da0] c0000003fa473e30 (unreliable)
        [c0000003fa473e30] c000000000007b6c system_call+0x84/0xfc
      
      The clearing of MSR_RI is actually an optimisation to avoid multiple MSR
      writes; what must really be disabled are interrupts. See the comment in
      entry_64.S:
      
        /*
         * For performance reasons we clear RI the same time that we
         * clear EE. We only need to clear RI just before we restore r13
         * below, but batching it with EE saves us one expensive mtmsrd call.
         * We have to be careful to restore RI if we branch anywhere from
         * here (eg syscall_exit_work).
         */
      
      At the point of calling restore_math(), r13 has not yet been restored; as
      such, the quick fix of turning MSR_RI back on for the call to
      restore_math() eliminates the unrecoverable exception.
      
      We'd like to do a better fix in the future.
      
      Fixes: 70fe3d98 ("powerpc: Restore FPU/VEC/VSX if previously used")
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
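
      A conceptual C rendering of the quick fix; the real change lives in
      the entry_64.S assembly at the point where restore_math() is called,
      with mfmsr()/mtmsr() standing in here for the actual mtmsrd sequence:

        static void restore_math_recoverable(struct pt_regs *regs)
        {
                unsigned long msr = mfmsr();

                mtmsr(msr | MSR_RI);    /* exceptions recoverable again */
                restore_math(regs);     /* may now take an SLB miss safely */
                mtmsr(msr);             /* back to the optimised RI=0 state */
        }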
    • powerpc/8xx: Fix do_mtspr_cpu6() build on older compilers · 2e098dce
      Authored by Christophe Leroy
      GCC < 4.9 is unable to build this, saying:
      
        arch/powerpc/mm/8xx_mmu.c:139:2: error: memory input 1 is not directly addressable
      
      Change the one-element array into a simple variable to avoid this.
      
      Fixes: 1458dd95 ("powerpc/8xx: Handle CPU6 ERRATA directly in mtspr() macro")
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Scott Wood <oss@buserror.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
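
      A generic illustration of the pattern (not the actual 8xx code):

        /* GCC < 4.9 can reject an aggregate used as an asm memory
         * operand with "memory input is not directly addressable";
         * a plain scalar is always a valid "m" operand. */
        static inline void publish(unsigned long val)
        {
                unsigned long scratch = val;    /* was: unsigned long scratch[1]; */

                asm volatile("" : : "m" (scratch));
        }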
    • powerpc/rcpm: Fix build break when SMP=n · b081251e
      Authored by Michael Ellerman
      Add an include of asm/smp.h to fix a build break when SMP=n:
      
        arch/powerpc/sysdev/fsl_rcpm.c:32:2: error: implicit declaration of
        function 'get_hard_smp_processor_id'
      
      Fixes: d17799f9 ("powerpc/rcpm: add RCPM driver")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
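
      The fix itself is just the explicit include; a hedged sketch of the
      top of fsl_rcpm.c, assuming the usual arrangement where linux/smp.h
      only pulls in asm/smp.h for SMP=y builds:

        #include <linux/smp.h>
        #include <asm/smp.h>    /* get_hard_smp_processor_id() */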
    • powerpc/book3e-64: Use hardcoded mttmr opcode · 7a25d912
      Authored by Scott Wood
      This preserves the ability to build using older binutils (reportedly <=
      2.22).
      
      Fixes: 6becef7e ("powerpc/mpc85xx: Add CPU hotplug support for E6500")
      Signed-off-by: Scott Wood <oss@buserror.net>
      Cc: chenhui.zhao@freescale.com
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
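
      A hedged sketch of the hardcoded-opcode technique; the field layout
      follows the Power ISA split-field convention, but the macro names,
      values and TMR number here are assumptions, not necessarily what
      the patch added:

        #include <stdio.h>

        #define PPC_INST_MTTMR  0x7c0003dcu     /* major op 31, xo 494 (assumed) */
        #define TMRN(t)         ((((t) & 0x1fu) << 16) | (((t) & 0x3e0u) << 6))
        #define PPC_RS(s)       (((s) & 0x1fu) << 21)
        #define MTTMR(tmr, rs)  (PPC_INST_MTTMR | TMRN(tmr) | PPC_RS(rs))

        int main(void)
        {
                /* Old binutils assembles ".long <word>" happily even
                 * though it does not know the mttmr mnemonic. */
                printf(".long 0x%08x\t/* mttmr 16,r3 */\n", MTTMR(16, 3));
                return 0;
        }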
  10. 12 March 2016 (16 commits)