1. 06 December 2012, 1 commit
  2. 05 December 2012, 2 commits
  3. 02 December 2012, 1 commit
  4. 01 December 2012, 2 commits
    • KVM: x86: Emulate IA32_TSC_ADJUST MSR · ba904635
      Committed by Will Auld
      CPUID.7.0.EBX[1]=1 indicates IA32_TSC_ADJUST MSR 0x3b is supported
      
      Basic design is to emulate the MSR by allowing reads and writes to a
      guest-vcpu-specific location that stores the value of the emulated MSR,
      while also adding that value to the vmcs tsc_offset. In this way the
      IA32_TSC_ADJUST value will be included in all reads of the TSC MSR,
      whether through rdmsr or rdtsc, as long as the "use TSC counter
      offsetting" VM-execution control is enabled along with the
      IA32_TSC_ADJUST control.
      
      However, because hardware will only return TSC + IA32_TSC_ADJUST +
      vmcs tsc_offset for a guest process when it does an rdtsc (with the
      correct settings), the value of our virtualized IA32_TSC_ADJUST must be
      stored in one of these three locations. The argument against storing it
      in the actual MSR is performance: the value is likely to be seldom used,
      while the save/restore would be required on every transition.
      IA32_TSC_ADJUST was created as a way to solve some issues with writing
      the TSC itself, so that is not an option either.
      
      The remaining option, described above as our solution, has the problem
      of returning incorrect vmcs tsc_offset values (unless we intercept and
      fix them, which is not done here) as mentioned above. More problematic,
      however, is that storing the data in vmcs tsc_offset has a different
      semantic effect on the system than using the actual MSR. This is
      illustrated in the following example:
      
      The hypervisor sets IA32_TSC_ADJUST, then the guest sets it, and a guest
      process performs an rdtsc. In this case the guest process will get
      TSC + IA32_TSC_ADJUST_hypervisor + vmcs tsc_offset, which includes
      IA32_TSC_ADJUST_guest. While the total system semantics change, the
      semantics as seen by the guest do not, and hence this will not cause a
      problem.
      Signed-off-by: Will Auld <will.auld@intel.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
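
      A minimal sketch of the write side of this scheme (the WRMSR case,
      illustrative only; helper names such as guest_cpuid_has_tsc_adjust()
      and kvm_x86_ops->adjust_tsc_offset() are how the KVM code of this era
      spells them, to the best of my reading):

        case MSR_IA32_TSC_ADJUST:
                if (guest_cpuid_has_tsc_adjust(vcpu)) {
                        if (!msr_info->host_initiated) {
                                /* guest write: fold the delta into the vmcs tsc_offset */
                                u64 adj = data - vcpu->arch.ia32_tsc_adjust_msr;
                                kvm_x86_ops->adjust_tsc_offset(vcpu, adj, true);
                        }
                        /* keep a per-vcpu copy so a later guest rdmsr sees the value */
                        vcpu->arch.ia32_tsc_adjust_msr = data;
                }
                break;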
    • KVM: x86: Add code to track call origin for msr assignment · 8fe8ab46
      Committed by Will Auld
      In order to track who initiated the call (host or guest) to modify an
      MSR value, I have changed the function call parameters along the call
      path. The specific change is to add a struct pointer parameter that
      points to (index, data, caller) information rather than passing this
      information as individual parameters.

      The initial use of this capability is updating the IA32_TSC_ADJUST MSR
      while setting the TSC value. It is anticipated that the capability will
      be useful for other tasks as well.
      Signed-off-by: Will Auld <will.auld@intel.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
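
      The struct described above takes roughly this shape (a sketch based on
      the commit message; in the KVM sources it is spelled struct msr_data):

        /* one parameter instead of (index, data, caller) passed separately */
        struct msr_data {
                bool host_initiated;    /* caller: host ioctl vs. guest WRMSR */
                u32  index;             /* MSR number */
                u64  data;              /* value being written */
        };

        int kvm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr);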
  5. 30 November 2012, 1 commit
  6. 29 November 2012, 2 commits
  7. 28 November 2012, 16 commits
  8. 14 November 2012, 3 commits
  9. 30 October 2012, 1 commit
  10. 26 October 2012, 2 commits
  11. 25 October 2012, 3 commits
  12. 24 October 2012, 6 commits
    • x86/irq/ioapic: Check for valid irq_cfg pointer in smp_irq_move_cleanup_interrupt · 94777fc5
      Committed by Dimitri Sivanich
      Posting this patch to fix an issue concerning sparse irqs that I raised
      a while back. There was discussion about adding refcounting to sparse
      irqs (to fix other potential race conditions), but that does not appear
      to have been addressed yet. This covers the only issue of this type
      that I have encountered in this area.
      
      A NULL pointer dereference can occur in
      smp_irq_move_cleanup_interrupt() if we haven't yet setup the
      irq_cfg pointer in the irq_desc.irq_data.chip_data.
      
      In create_irq_nr() there is a window where we have set
      vector_irq in __assign_irq_vector(), but not yet called
      irq_set_chip_data() to set the irq_cfg pointer.
      
      Should an IRQ_MOVE_CLEANUP_VECTOR interrupt hit the CPU in question
      during this window, smp_irq_move_cleanup_interrupt() will attempt to
      process the aforementioned irq, but will panic when accessing irq_cfg.
      
      Only continue processing the irq if irq_cfg is non-NULL.
      Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Joerg Roedel <joerg.roedel@amd.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Alexander Gordeev <agordeev@redhat.com>
      Link: http://lkml.kernel.org/r/20121016125021.GA22935@sgi.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
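
      The guard amounts to roughly the following inside the per-vector loop
      of smp_irq_move_cleanup_interrupt() (a sketch of the idea, not the
      literal diff):

        cfg = irq_cfg(irq);
        if (!cfg)
                continue;       /* irq_set_chip_data() has not run yet for this irq */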
    • perf/x86: Remove unused variable in nhmex_rbox_alter_er() · 64dfab8e
      Committed by Wei Yongjun
      The variable 'port' is initialized but never used, so remove it.

      The dpatch engine was used to auto-generate this patch.
      (https://github.com/weiyj/dpatch)
      Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
      Cc: Yan, Zheng <zheng.z.yan@intel.com>
      Cc: a.p.zijlstra@chello.nl
      Cc: paulus@samba.org
      Cc: acme@ghostprotocols.net
      Link: http://lkml.kernel.org/r/CAPgLHd8NZkYSkZm22FpZxiEh6HcA0q-V%3D29vdnheiDhgrJZ%2Byw@mail.gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/efi: Fix oops caused by incorrect set_memory_uc() usage · 3e8fa263
      Committed by Matt Fleming
      Calling __pa() with an ioremap'd address is invalid. If we encounter an
      efi_memory_desc_t without EFI_MEMORY_WB set in ->attribute, we currently
      call set_memory_uc(), which in turn calls __pa() on a potentially
      ioremap'd address.
      
      On CONFIG_X86_32 this results in the following oops:
      
        BUG: unable to handle kernel paging request at f7f22280
        IP: [<c10257b9>] reserve_ram_pages_type+0x89/0x210
        *pdpt = 0000000001978001 *pde = 0000000001ffb067 *pte = 0000000000000000
        Oops: 0000 [#1] PREEMPT SMP
        Modules linked in:
      
        Pid: 0, comm: swapper Not tainted 3.0.0-acpi-efi-0805 #3
         EIP: 0060:[<c10257b9>] EFLAGS: 00010202 CPU: 0
         EIP is at reserve_ram_pages_type+0x89/0x210
         EAX: 0070e280 EBX: 38714000 ECX: f7814000 EDX: 00000000
         ESI: 00000000 EDI: 38715000 EBP: c189fef0 ESP: c189fea8
         DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
        Process swapper (pid: 0, ti=c189e000 task=c18bbe60 task.ti=c189e000)
        Stack:
         80000200 ff108000 00000000 c189ff00 00038714 00000000 00000000 c189fed0
         c104f8ca 00038714 00000000 00038715 00000000 00000000 00038715 00000000
         00000010 38715000 c189ff48 c1025aff 38715000 00000000 00000010 00000000
        Call Trace:
         [<c104f8ca>] ? page_is_ram+0x1a/0x40
         [<c1025aff>] reserve_memtype+0xdf/0x2f0
         [<c1024dc9>] set_memory_uc+0x49/0xa0
         [<c19334d0>] efi_enter_virtual_mode+0x1c2/0x3aa
         [<c19216d4>] start_kernel+0x291/0x2f2
         [<c19211c7>] ? loglevel+0x1b/0x1b
         [<c19210bf>] i386_start_kernel+0xbf/0xc8
      
      The only time we can call set_memory_uc() for a memory region is
      when it is part of the direct kernel mapping. For the case where
      we ioremap a memory region we must leave it alone.
      
      This patch reimplements the fix from e8c71062 ("x86, efi:
      Calling __pa() with an ioremap()ed address is invalid") which
      was reverted in e1ad783b because it caused a regression on
      some MacBooks (they hung at boot). The regression occurred because
      the commit only marked EFI_RUNTIME_SERVICES_DATA as
      E820_RESERVED_EFI, when it should have marked all regions that
      have the EFI_MEMORY_RUNTIME attribute.
      
      Despite first impressions, it's not possible to use
      ioremap_cache() to map all cached memory regions on
      CONFIG_X86_64 because of the way that the memory map might be
      configured as detailed in the following bug report,
      
      	https://bugzilla.redhat.com/show_bug.cgi?id=748516
      
      e.g. some of the EFI memory regions *need* to be mapped as part
      of the direct kernel mapping.
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      Cc: Matthew Garrett <mjg@redhat.com>
      Cc: Zhang Rui <rui.zhang@intel.com>
      Cc: Huang Ying <huang.ying.caritas@gmail.com>
      Cc: Keith Packard <keithp@keithp.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1350649546-23541-1-git-send-email-matt@console-pimps.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
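
      The rule being enforced is roughly the following (an illustrative
      sketch, not the actual diff; efi_md_is_direct_mapped() is a hypothetical
      helper standing in for the real bookkeeping). set_memory_uc() uses
      __pa() internally, so it may only be applied to direct-mapped addresses;
      regions we ioremap must be mapped uncached at ioremap time instead:

        if (md->attribute & EFI_MEMORY_WB) {
                va = (void __iomem *)__va(md->phys_addr);        /* direct mapping, cached */
        } else if (efi_md_is_direct_mapped(md)) {                /* hypothetical check */
                va = (void __iomem *)__va(md->phys_addr);
                set_memory_uc((unsigned long)va, md->num_pages); /* __pa() is valid here */
        } else {
                /* ioremap'd region: never call set_memory_uc() on this */
                va = ioremap_nocache(md->phys_addr,
                                     md->num_pages << EFI_PAGE_SHIFT);
        }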
    • perf/x86: Enable overflow on Intel KNC with a custom knc_pmu_handle_irq() · e4074b30
      Committed by Vince Weaver
      Although based on the Intel P6 design, the interrupt mechanism
      for KNC more closely resembles the Intel architectural
      perfmon one.
      
      We can't just re-use that code though, because KNC has different
      MSR numbers for the status and ack registers.
      
      In this case we just cut-and-paste from perf_event_intel.c
      with some minor changes, as it looks like it would not be
      worth the trouble to change that code to be MSR-configurable.
      Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: eranian@gmail.com
      Cc: Meadows Lawrence F <lawrence.f.meadows@intel.com>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1210171304410.23243@vincent-weaver-1.um.maine.edu
      [ Small stylistic edits. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
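
      The KNC-specific part boils down to reading and acking overflow status
      through KNC's own MSRs rather than the architectural ones, roughly as
      below (a sketch; the MSR constant names are my reading of
      perf_event_knc.c, not quoted from the patch):

        static inline u64 knc_pmu_get_status(void)
        {
                u64 status;

                /* KNC's own global status MSR, not the architectural one */
                rdmsrl(MSR_KNC_IA32_PERF_GLOBAL_STATUS, status);
                return status;
        }

        static inline void knc_pmu_ack_status(u64 ack)
        {
                /* ...and its own ack/overflow-control MSR */
                wrmsrl(MSR_KNC_IA32_PERF_GLOBAL_OVF_CONTROL, ack);
        }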
    • perf/x86: Remove cpuc->enabled check on Intel KNC event enable/disable · 7d011962
      Committed by Vince Weaver
      x86_pmu.enable() is called from x86_pmu_enable() with
      cpuc->enabled set to 0. This means we weren't re-enabling the
      counters after a context switch.

      This patch just removes the check, as it shouldn't be necessary
      (and the equivalent x86_ generic code does not have the check).

      The origin of this problem is the KNC driver being based on the
      P6 one. The P6 driver also has this issue, but works anyway
      due to various lucky accidents.
      Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: eranian@gmail.com
      Cc: Meadows Lawrence F <lawrence.f.meadows@intel.com>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1210171303290.23243@vincent-weaver-1.um.maine.edu
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
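
      The effect on the enable path is roughly the following (a sketch, not
      the literal diff; the enable bit name comes from the architectural
      perfmon definitions the P6-derived drivers use):

        static void knc_pmu_enable_event(struct perf_event *event)
        {
                struct hw_perf_event *hwc = &event->hw;
                u64 val = hwc->config;

                /*
                 * Previously gated on cpuc->enabled, which is 0 at this
                 * point; set the enable bit unconditionally instead.
                 */
                val |= ARCH_PERFMON_EVENTSEL_ENABLE;

                wrmsrl(hwc->config_base + hwc->idx, val);
        }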
    • perf/x86: Make Intel KNC use full 40-bit width of counters · ae5ba47a
      Committed by Vince Weaver
      Early versions of Intel KNC chips have a bug where bits above 32
      were not properly set.  We worked around this by only using the
      bottom 32 bits (out of 40 that should be available).
      
      It turns out this workaround breaks overflow handling.
      
      The buggy silicon will in theory never be used in production
      systems, so remove the workaround to get proper overflow
      support.
      Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: eranian@gmail.com
      Cc: Meadows Lawrence F <lawrence.f.meadows@intel.com>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1210171302140.23243@vincent-weaver-1.um.maine.edu
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
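
      In terms of the PMU description this is essentially widening the
      counter-value fields (a sketch of the effect; field names are those of
      struct x86_pmu):

        static const struct x86_pmu knc_pmu __initconst = {
                .name           = "knc",
                /* use the full hardware width instead of the 32-bit workaround */
                .cntval_bits    = 40,
                .cntval_mask    = (1ULL << 40) - 1,
        };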