1. 06 Jan 2016, 1 commit
  2. 20 Dec 2015, 2 commits
    • x86/irq: Export functions to allow MSI domains in modules · c8f3e518
      Committed by Jake Oshins
      The Linux kernel already has the concept of IRQ domain, wherein a
      component can expose a set of IRQs which are managed by a particular
      interrupt controller chip or other subsystem. The PCI driver exposes
      the notion of an IRQ domain for Message-Signaled Interrupts (MSI) from
      PCI Express devices. This patch exposes the functions which are
      necessary for creating an MSI IRQ domain within a module.
      
      [ tglx: Split it into x86 and core irq parts ]
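      
      A rough, illustrative sketch of how a module might build such a domain
      on top of the generic PCI/MSI helpers (the chip, ops and function names
      below are placeholders, not taken from this patch):
      
        #include <linux/irqdomain.h>
        #include <linux/msi.h>
        
        /* Placeholder irq_chip; a real driver supplies its own callbacks. */
        static struct irq_chip example_msi_chip = {
                .name           = "example-MSI",
                .irq_mask       = irq_chip_mask_parent,
                .irq_unmask     = irq_chip_unmask_parent,
        };
        
        static struct msi_domain_ops example_msi_ops;   /* driver-specific ops */
        
        static struct msi_domain_info example_msi_info = {
                .flags  = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS,
                .ops    = &example_msi_ops,
                .chip   = &example_msi_chip,
        };
        
        /* pci_msi_create_irq_domain() is one of the symbols a module needs. */
        static int example_create_msi_domain(struct fwnode_handle *fwnode,
                                             struct irq_domain *parent)
        {
                struct irq_domain *d;
        
                d = pci_msi_create_irq_domain(fwnode, &example_msi_info, parent);
                return d ? 0 : -ENOMEM;
        }
      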
      Signed-off-by: Jake Oshins <jakeo@microsoft.com>
      Cc: gregkh@linuxfoundation.org
      Cc: kys@microsoft.com
      Cc: devel@linuxdriverproject.org
      Cc: olaf@aepfle.de
      Cc: apw@canonical.com
      Cc: vkuznets@redhat.com
      Cc: haiyangz@microsoft.com
      Cc: marc.zyngier@arm.com
      Cc: bhelgaas@google.com
      Link: http://lkml.kernel.org/r/1449769983-12948-4-git-send-email-jakeo@microsoft.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/paravirt: Prevent rtc_cmos platform device init on PV guests · d8c98a1d
      Committed by David Vrabel
      Adding the rtc platform device in non-privileged Xen PV guests causes
      an IRQ conflict because these guests do not have a legacy PIC and may
      allocate IRQs in the legacy range.
      
      In a single-VCPU Xen PV guest we should have:
      
      /proc/interrupts:
                 CPU0
        0:       4934  xen-percpu-virq      timer0
        1:          0  xen-percpu-ipi       spinlock0
        2:          0  xen-percpu-ipi       resched0
        3:          0  xen-percpu-ipi       callfunc0
        4:          0  xen-percpu-virq      debug0
        5:          0  xen-percpu-ipi       callfuncsingle0
        6:          0  xen-percpu-ipi       irqwork0
        7:        321   xen-dyn-event     xenbus
        8:         90   xen-dyn-event     hvc_console
        ...
      
      But hvc_console cannot get its interrupt because IRQ 8 is already in
      use by rtc0, so the console does not work:
      
        genirq: Flags mismatch irq 8. 00000000 (hvc_console) vs. 00000000 (rtc0)
      
      We can avoid this problem by recognizing that unprivileged PV guests
      (both Xen and lguest) are not supposed to have an rtc_cmos device, so
      adding it is not necessary.
      
      Privileged guests (i.e. Xen's dom0) do use it, but they should not have
      IRQ conflicts since they allocate IRQs above the legacy range (above
      gsi_top, in fact).
      
      Instead of explicitly testing whether the guest is privileged, we can
      extend the pv_info structure to include information about the guest's
      RTC support, as sketched below.
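      
      A minimal sketch of that idea (the feature flag, macro and hook names
      are illustrative, reconstructed from the description above rather than
      quoted from the patch):
      
        /* Advertise RTC support as a paravirt feature bit (illustrative). */
        #define PV_SUPPORTED_RTC        (1 << 0)
        
        static inline bool paravirt_has_feature(unsigned int feature)
        {
                return pv_info.features & feature;
        }
        #define paravirt_has(x) paravirt_has_feature(PV_SUPPORTED_##x)
        
        /* arch/x86/kernel/rtc.c: skip the device on guests without an RTC. */
        static __init int add_rtc_cmos(void)
        {
                if (!paravirt_has(RTC))
                        return -ENODEV;
        
                /* ... register the rtc_cmos platform device as before ... */
                return 0;
        }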
      Reported-and-tested-by: Sander Eikelenboom <linux@eikelenboom.it>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: vkuznets@redhat.com
      Cc: xen-devel@lists.xenproject.org
      Cc: konrad.wilk@oracle.com
      Cc: stable@vger.kernel.org # 4.2+
      Link: http://lkml.kernel.org/r/1449842873-2613-1-git-send-email-boris.ostrovsky@oracle.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  3. 19 Dec 2015, 8 commits
  4. 11 Dec 2015, 2 commits
  5. 06 Dec 2015, 2 commits
    • x86, tracing, perf: Add trace point for MSR accesses · 7f47d8cc
      Committed by Andi Kleen
      For debugging low level code interacting with the CPU it is often
      useful to trace the MSR read/writes. This gives a concise summary of
      PMU and other operations.
      
      perf has an ad-hoc way to do this using trace_printk, but it's
      somewhat limited (and also now spews ugly boot messages when enabled).
      
      Instead define real trace points for all MSR accesses.
      
      This adds three new trace points: read_msr, write_msr and rdpmc.
      
      They also report whether the access faulted (if the *_safe variants are used).
      
      This allows filtering and triggering on specific MSR values, which
      allows various more advanced debugging techniques.
      
      All the values are well defined in the CPU documentation.
      
      The trace can be post processed with
      Documentation/trace/postprocess/decode_msr.py to add symbolic MSR
      names to the trace.
      
      I only added it to the native MSR accesses in C, not to paravirtualized
      accesses or to entry*.S (which is not too interesting).
      
      Originally the patch kit moved the MSR accessors out of line. This
      version uses an alternative approach recommended by Steven Rostedt:
      only the trace calls are moved out of line, while the jump-label check
      is open-coded in the accessors (a sketch follows below).
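      
      A simplified sketch of that pattern (the helper and macro names are
      approximate, and the real accessors use the 32/64-bit EAX/EDX helper
      macros rather than the "=A" constraint shown here):
      
        extern struct tracepoint __tracepoint_read_msr;
        #define msr_tracepoint_active(t)        static_key_false(&(t).key)
        extern void do_trace_read_msr(unsigned int msr, u64 val, int failed);
        
        static inline unsigned long long native_read_msr(unsigned int msr)
        {
                unsigned long long val;
        
                asm volatile("rdmsr" : "=A" (val) : "c" (msr));
        
                /* Open-coded jump-label test: no overhead unless the
                 * read_msr trace point is enabled. */
                if (msr_tracepoint_active(__tracepoint_read_msr))
                        do_trace_read_msr(msr, val, 0);
        
                return val;
        }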
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/1449018060-1742-3-git-send-email-andi@firstfloor.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/headers: Don't include asm/processor.h in asm/atomic.h · 153a4334
      Committed by Andi Kleen
      asm/atomic.h doesn't really need asm/processor.h anymore. Everything
      it uses has moved to other header files. So remove that include.
      
      processor.h is a nasty header that includes lots of other headers
      and is prone to include loops. Removing the include here makes
      asm/atomic.h a "leaf" header that can be safely included in most
      other headers.
      
      The only fallout is in the lib/atomic tester, which relied on this
      implicit include; give it an explicit include instead. (The include is
      inside an #ifdef because its user is also inside an #ifdef; see the
      sketch below.)
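      
      Roughly what that explicit include looks like (the header chosen here
      is an assumption based on the boot_cpu_has() user in the tester):
      
        /* lib/atomic64_test.c */
        #include <linux/atomic.h>
        
        #ifdef CONFIG_X86
        #include <asm/cpufeature.h>     /* for boot_cpu_has(), same ifdef as its user */
        #endif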
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: rostedt@goodmis.org
      Link: http://lkml.kernel.org/r/1449018060-1742-1-git-send-email-andi@firstfloor.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  6. 04 Dec 2015, 1 commit
    • x86/mm: Fix regression with huge pages on PAE · 70f15287
      Committed by Kirill A. Shutemov
      The recent PAT patchset has caused an issue on 32-bit PAE machines:
      
        page:eea45000 count:0 mapcount:-128 mapping:  (null) index:0x0 flags: 0x40000000()
        page dumped because: VM_BUG_ON_PAGE(page_mapcount(page) < 0)
        ------------[ cut here ]------------
        kernel BUG at /home/build/linux-boris/mm/huge_memory.c:1485!
        invalid opcode: 0000 [#1] SMP
        [...]
        Call Trace:
         unmap_single_vma
         ? __wake_up
         unmap_vmas
         unmap_region
         do_munmap
         vm_munmap
         SyS_munmap
         do_fast_syscall_32
         ? __do_page_fault
         sysenter_past_esp
        Code: ...
        EIP: [<c11bde80>] zap_huge_pmd+0x240/0x260 SS:ESP 0068:f6459d98
      
      The problem is in pmd_pfn_mask() and pmd_flags_mask(). These
      helpers use PMD_PAGE_MASK to calculate the resulting mask.
      PMD_PAGE_MASK is 'unsigned long', not 'unsigned long long' as
      phys_addr_t is on 32-bit PAE (ARCH_PHYS_ADDR_T_64BIT). As a
      result, the upper bits of the resulting mask get truncated.
      
      pud_pfn_mask() and pud_flags_mask() aren't problematic since we
      don't have a PUD page table level on 32-bit systems, but it's
      reasonable to keep them consistent with their PMD counterparts.
      
      Introduce PHYSICAL_PMD_PAGE_MASK and PHYSICAL_PUD_PAGE_MASK in
      addition to the existing PHYSICAL_PAGE_MASK, and rework the helpers
      to use them, as sketched below.
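      
      A sketch of the new masks and of a reworked helper (reconstructed from
      the description above, so treat the exact definitions as illustrative;
      the signed cast is what preserves the high bits when the mask is
      widened to the 64-bit phys_addr_t on PAE):
      
        /* arch/x86/include/asm/page_types.h (sketch) */
        #define PHYSICAL_PMD_PAGE_MASK  (((signed long)PMD_PAGE_MASK) & __PHYSICAL_MASK)
        #define PHYSICAL_PUD_PAGE_MASK  (((signed long)PUD_PAGE_MASK) & __PHYSICAL_MASK)
        
        /* arch/x86/include/asm/pgtable_types.h (sketch) */
        static inline pmdval_t pmd_pfn_mask(pmd_t pmd)
        {
                if (native_pmd_val(pmd) & _PAGE_PSE)
                        return PHYSICAL_PMD_PAGE_MASK;  /* huge page: full-width mask */
                else
                        return PTE_PFN_MASK;
        }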
      Reported-and-Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      [ Fix -Woverflow warnings from the realmode code. ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Toshi Kani <toshi.kani@hpe.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jürgen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: elliott@hpe.com
      Cc: konrad.wilk@oracle.com
      Cc: linux-mm <linux-mm@kvack.org>
      Fixes: f70abb0f ("x86/asm: Fix pud/pmd interfaces to handle large PAT bit")
      Link: http://lkml.kernel.org/r/1448878233-11390-2-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  7. 26 Nov 2015, 1 commit
  8. 24 Nov 2015, 5 commits
  9. 23 Nov 2015, 6 commits
  10. 19 Nov 2015, 2 commits
  11. 17 Nov 2015, 1 commit
  12. 14 Nov 2015, 1 commit
  13. 10 Nov 2015, 8 commits