1. 07 Jan 2014 (1 commit)
  2. 15 Nov 2013 (1 commit)
  3. 15 Oct 2013 (1 commit)
  4. 24 Sep 2013 (3 commits)
    • x86/UV: Update UV support for external NMI signals · 0d12ef0c
      Committed by Mike Travis
      The current UV NMI handler has not been updated for the changes
      in the system NMI handler and the perf operations.  The UV NMI
      handler reads an MMR in the UV Hub to check whether the NMI
      event was caused by the external 'system NMI' that the operator
      can initiate on the System Management Controller.
      
      The problem arises when the perf tools are running, causing
      millions of perf events per second on very large CPU count
      systems.  Previously this was okay because the perf NMI handler
      ran at a higher priority on the NMI call chain and if the NMI
      was a perf event, it would stop calling other NMI handlers
      remaining on the NMI call chain.
      
      Now the system NMI handler calls all the handlers on the NMI
      call chain including the UV NMI handler.  This causes the UV NMI
      handler to read the MMRs at the same millions per second rate.
      This can lead to significant performance loss and possible
      system failures.  It also can cause thousands of 'Dazed and
      Confused' messages being sent to the system console.  This
      effectively makes perf tools unusable on UV systems.
      
      To avoid this excessive overhead when perf tools are running,
      this code has been optimized to minimize reading of the MMRs as
      much as possible, by moving to the NMI_UNKNOWN notifier chain.
      This chain is called only when all the users on the standard
      NMI_LOCAL call chain have been called and none of them have
      claimed this NMI.
      
      There is an exception where the NMI_LOCAL notifier chain is
      used.  When the perf tools are in use, it's possible that the UV
      NMI was captured by some other NMI handler and then either
      ignored or mistakenly processed as a perf event.  We set a
      per_cpu ('ping') flag for those CPUs that ignored the initial
      NMI, and then send them an IPI NMI signal.  The NMI_LOCAL
      handler on each CPU then does not need to read the MMR; instead
      it checks the in-memory flag indicating that it was pinged (see
      the sketch below).  Two module variables track this:
      'ping_count', counting how many requested NMI events occurred,
      and 'ping_misses', counting how many stray NMI events occurred.
      The strays are most likely perf events, so together they show
      the overhead of the perf NMI interrupts and how many MMR reads
      were avoided.
      
      This patch also minimizes the reads of the MMRs by having the
      first CPU entering the NMI handler on each node set a per-HUB
      in-memory atomic value.  (Having a per-HUB value avoids sending
      lock traffic over NumaLink.)  Both types of UV NMIs from the SMI
      layer are supported.
      Signed-off-by: Mike Travis <travis@sgi.com>
      Reviewed-by: Dimitri Sivanich <sivanich@sgi.com>
      Reviewed-by: Hedi Berriche <hedi@sgi.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Link: http://lkml.kernel.org/r/20130923212500.353547733@asylum.americas.sgi.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0d12ef0c
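      A minimal sketch of the per-CPU ping scheme described above; the
      names (pcpu_nmi_pinged, uv_handle_nmi_ping) are illustrative,
      not the actual uv_nmi.c symbols:

          /* Cheap in-memory check instead of an MMR read over NumaLink. */
          static DEFINE_PER_CPU(int, pcpu_nmi_pinged);
          static atomic_t ping_count  = ATOMIC_INIT(0); /* requested NMIs seen */
          static atomic_t ping_misses = ATOMIC_INIT(0); /* stray NMIs, likely perf */

          static int uv_handle_nmi_ping(unsigned int cmd, struct pt_regs *regs)
          {
                  if (!this_cpu_read(pcpu_nmi_pinged)) {
                          atomic_inc(&ping_misses);
                          return NMI_DONE;        /* not ours; let others claim it */
                  }
                  this_cpu_write(pcpu_nmi_pinged, 0);
                  atomic_inc(&ping_count);
                  /* ... dump this CPU's state ... */
                  return NMI_HANDLED;
          }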
    • x86/UV: Move NMI support · 1e019421
      Committed by Mike Travis
      This patch moves the UV NMI support from the x2apic file to a
      new separate uv_nmi.c file in preparation for the next sequence
      of patches. It prevents upcoming bloat of the x2apic file, and
      has the added benefit of putting the upcoming /sys/module
      parameters under the name 'uv_nmi' instead of 'x2apic_uv_x',
      which was obscure.
      Signed-off-by: Mike Travis <travis@sgi.com>
      Reviewed-by: Dimitri Sivanich <sivanich@sgi.com>
      Reviewed-by: Hedi Berriche <hedi@sgi.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Link: http://lkml.kernel.org/r/20130923212500.183295611@asylum.americas.sgi.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1e019421
    • x86 / ACPI: simplify _acpi_map_lsapic() · 7e1f85f9
      Committed by Jiang Liu
      acpi_register_lapic() generates a new logical CPU number and
      maps it to the local APIC id; returning this logical CPU number
      simplifies the _acpi_map_lsapic() implementation (see the sketch
      below).
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      7e1f85f9
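      A sketch of the simplification described in the changelog; the
      exact signatures are illustrative:

          /* acpi_register_lapic() now returns the logical CPU number it
           * assigned (or a negative error), so the caller no longer has
           * to rediscover which bit of cpu_present_map was newly set. */
          static int _acpi_map_lsapic(acpi_handle handle, int physid, int *pcpu)
          {
                  int cpu = acpi_register_lapic(physid, ACPI_MADT_ENABLED);

                  if (cpu < 0) {
                          pr_info("Unable to map lsapic to logical CPU number\n");
                          return cpu;
                  }
                  *pcpu = cpu;
                  return 0;
          }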
  5. 26 Aug 2013 (1 commit)
  6. 20 Aug 2013 (1 commit)
    • x86/ioapic/kcrash: Prevent crash_kexec() from deadlocking on ioapic_lock · 17405453
      Committed by Yoshihiro YUNOMAE
      Prevent crash_kexec() from deadlocking on ioapic_lock.  When
      crash_kexec() is executed on a CPU, the CPU will take ioapic_lock
      in disable_IO_APIC().  So if the CPU gets the NMI while it is
      already holding ioapic_lock, a deadlock will happen.
      
      In this patch, ioapic_lock is zapped/re-initialized before
      disable_IO_APIC(); see the sketch below.
      
      You can reproduce this deadlock the following way:
      
      1. Add mdelay(1000) after raw_spin_lock_irqsave() in
         native_ioapic_set_affinity()@arch/x86/kernel/apic/io_apic.c
      
         The deadlock can occur without this modification, but the added
         delay widens the race window and makes it easier to reproduce.
      
      2. Build and install the kernel
      
      3. Set up the OS which will run panic() and kexec when NMI is injected
          # echo "kernel.unknown_nmi_panic=1" >> /etc/sysctl.conf
          # vim /etc/default/grub
            add "nmi_watchdog=0 crashkernel=256M" in GRUB_CMDLINE_LINUX line
          # grub2-mkconfig
      
      4. Reboot the OS
      
      5. Run the following command for each vcpu on the guest
          # while true; do echo <CPU num> > /proc/irq/<IO-APIC-edge or IO-APIC-fasteoi>/smp_affinity; done;
         By running this command, CPUs will repeatedly take ioapic_lock to set affinity.
      
      6. Inject NMI (push a dump button or execute 'virsh inject-nmi <domain>' if you
         use VM). After injecting NMI, panic() is called in an nmi-handler context.
         Then, kexec will normally run in panic(), but the operation will be stopped
         by deadlock on ioapic_lock in crash_kexec()->machine_crash_shutdown()->
         native_machine_crash_shutdown()->disable_IO_APIC()->clear_IO_APIC()->
         clear_IO_APIC_pin()->ioapic_read_entry().
      Signed-off-by: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: yrl.pp-manager.tt@hitachi.com
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/20130820070107.28245.83806.stgit@yunodevel
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      17405453
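      A minimal sketch of the fix described above; the helper name and
      call site follow the changelog's description and should be read
      as illustrative:

          /* The CPU that held ioapic_lock may have been stopped by the
           * crash NMI and will never release it, so forcibly
           * re-initialize the lock before the crash path takes it. */
          void ioapic_zap_locks(void)
          {
                  raw_spin_lock_init(&ioapic_lock);
          }

          void native_machine_crash_shutdown(struct pt_regs *regs)
          {
                  /* ... stop other CPUs, disable the local APIC ... */
                  ioapic_zap_locks();     /* must precede disable_IO_APIC() */
                  disable_IO_APIC();
                  /* ... */
          }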
  7. 07 Aug 2013 (1 commit)
  8. 15 Jul 2013 (1 commit)
    • x86: delete __cpuinit usage from all x86 files · 148f9bb8
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result,
      since notify_cpu_starting() and cpu_up() are arch-independent
      (kernel/cpu.c) and are still flagged as __cpuinit -- so if we
      remove the __cpuinit from arch-specific callers, we will also
      get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/x86 uses of the __cpuinit macros from
      all C files.  x86 only had the one __CPUINIT used in assembly files,
      and it wasn't paired off with a .previous or a __FINIT, so we can
      delete it directly w/o any corresponding additional change there.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      148f9bb8
  9. 10 Jul 2013 (1 commit)
  10. 21 Jun 2013 (3 commits)
    • trace,x86: Move creation of irq tracepoints from apic.c to irq.c · 83ab8514
      Committed by Steven Rostedt (Red Hat)
      When CONFIG_X86_LOCAL_APIC is not set, apic.c is not compiled,
      so the irq tracepoints are not created via the
      CREATE_TRACE_POINTS macro, and we get the following build error:
      
        LD      init/built-in.o
      arch/x86/built-in.o: In function `trace_x86_platform_ipi_entry':
      linux-test.git/arch/x86/include/asm/trace/irq_vectors.h:66: undefined reference to `__tracepoint_x86_platform_ipi_entry'
      arch/x86/built-in.o: In function `trace_x86_platform_ipi_exit':
      linux-test.git/arch/x86/include/asm/trace/irq_vectors.h:66: undefined reference to `__tracepoint_x86_platform_ipi_exit'
      arch/x86/built-in.o: In function `trace_irq_work_entry':
      linux-test.git/arch/x86/include/asm/trace/irq_vectors.h:72: undefined reference to `__tracepoint_irq_work_entry'
      arch/x86/built-in.o: In function `trace_irq_work_exit':
      linux-test.git/arch/x86/include/asm/trace/irq_vectors.h:72: undefined reference to `__tracepoint_irq_work_exit'
      arch/x86/built-in.o:(__jump_table+0x8): undefined reference to `__tracepoint_x86_platform_ipi_entry'
      arch/x86/built-in.o:(__jump_table+0x14): undefined reference to `__tracepoint_x86_platform_ipi_exit'
      arch/x86/built-in.o:(__jump_table+0x20): undefined reference to `__tracepoint_irq_work_entry'
      arch/x86/built-in.o:(__jump_table+0x2c): undefined reference to `__tracepoint_irq_work_exit'
      make[1]: *** [vmlinux] Error 1
      make: *** [sub-make] Error 2
      
      As irq.c is always compiled for x86, it is a more appropriate
      location in which to create the irq tracepoints (see the sketch
      below).
      
      Cc: Seiji Aguchi <seiji.aguchi@hds.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      83ab8514
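      The fix follows the usual kernel idiom for instantiating
      tracepoints: define CREATE_TRACE_POINTS in exactly one
      translation unit that is always built, before including the
      trace header, so the __tracepoint_* symbols are always emitted.
      A sketch of the relevant lines in arch/x86/kernel/irq.c:

          #define CREATE_TRACE_POINTS
          #include <asm/trace/irq_vectors.h>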
    • x86, trace: Add irq vector tracepoints · cf910e83
      Committed by Seiji Aguchi
      [Purpose of this patch]
      
      As Vaibhav explained in the thread below, tracepoints for irq vectors
      are useful.
      
      http://www.spinics.net/lists/mm-commits/msg85707.html
      
      <snip>
      The current interrupt traces from irq_handler_entry and irq_handler_exit
      provide when an interrupt is handled.  They provide good data about when
      the system has switched to kernel space and how it affects the currently
      running processes.
      
      There are some IRQ vectors which trigger the system into kernel space,
      which are not handled in generic IRQ handlers.  Tracing such events gives
      us the information about IRQ interaction with other system events.
      
      The trace also tells where the system is spending its time.  We want to
      know which cores are handling interrupts and how they are affecting other
      processes in the system.  Also, the trace provides information about when
      the cores are idle and which interrupts are changing that state.
      <snip>
      
      On the other hand, my use case is tracing just the local timer
      event and getting the value of the instruction pointer.

      I previously suggested adding an argument to the local timer
      event to get the instruction pointer, but it can also be obtained
      with an external module such as systemtap, so no argument needs
      to be added to the irq vector tracepoints now.
      
      [Patch Description]
      
      Vaibhav's patch shared one tracepoint pair,
      irq_vector_entry/irq_vector_exit, across all events.  But the use
      case above traces a specific irq vector rather than all events,
      and in that case a shared tracepoint raises concerns about
      overhead from unwanted events.

      So, add the following tracepoints instead of introducing
      irq_vector_entry/exit, so that each can be enabled independently:
         - local_timer_vector
         - reschedule_vector
         - call_function_vector
         - call_function_single_vector
         - irq_work_entry_vector
         - error_apic_vector
         - thermal_apic_vector
         - threshold_apic_vector
         - spurious_apic_vector
         - x86_platform_ipi_vector
      
      Also, introduce logic that switches the IDT at enabling/disabling
      time, so that the time penalty becomes zero when the tracepoints
      are disabled.  Detailed explanations are as follows.
       - Create trace irq handlers with entering_irq()/exiting_irq().
       - Create a new IDT, trace_idt_table, at boot time by adding
         logic to _set_gate().  It is just a copy of the original IDT.
       - Register the new handlers for the tracepoints in the new IDT
         by introducing macros into alloc_intr_gate(), called when the
         irq_vector handlers are registered.
       - Add a check of whether irq vector tracing is on/off to
         load_current_idt(); see the sketch below.  This has to be done
         below the debug check, for these reasons:
         - Switching to the debug IDT may be kicked while tracing is
           enabled.
         - On the other hand, switching to the trace IDT is kicked only
           when debugging is disabled.
      
      In addition, the new IDT is created only when CONFIG_TRACING is enabled to avoid being
      used for other purposes.
      Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/51C323ED.5050708@hds.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      cf910e83
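      A sketch of the load_current_idt() ordering described above;
      treat the helper names as illustrative:

          /* The debug-IDT check must come first: the debug IDT can be
           * switched in while tracing is enabled, whereas the trace IDT
           * is used only when debugging is disabled. */
          static inline void load_current_idt(void)
          {
                  if (is_debug_idt_enabled())
                          load_debug_idt();
                  else if (is_trace_idt_enabled())
                          load_trace_idt();
                  else
                          load_idt((const struct desc_ptr *)&idt_descr);
          }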
    • x86, trace: Introduce entering/exiting_irq() · eddc0e92
      Committed by Seiji Aguchi
      When implementing tracepoints in interrupt handlers, simply
      adding them on the performance-sensitive path of the handlers
      may cause performance problems due to the time penalty.
      
      To solve the problem, the idea is to prepare non-trace and trace
      irq handlers and switch between their IDTs at enabling/disabling
      time.
      
      So, let's introduce entering_irq()/exiting_irq() for pre/post-
      processing of each irq handler.
      
      A way to use them is as follows.
      
      Non-trace irq handler:
      smp_irq_handler()
      {
      	entering_irq();		/* pre-processing of this handler */
      	__smp_irq_handler();	/*
      				 * common logic between non-trace and trace handlers
      				 * in a vector.
      				 */
      	exiting_irq();		/* post-processing of this handler */
      
      }
      
      Trace irq_handler:
      smp_trace_irq_handler()
      {
      	entering_irq();		/* pre-processing of this handler */
      	trace_irq_entry();	/* tracepoint for irq entry */
      	__smp_irq_handler();	/*
      				 * common logic between non-trace and trace handlers
      				 * in a vector.
      				 */
      	trace_irq_exit();	/* tracepoint for irq exit */
      	exiting_irq();		/* post-processing of this handler */
      
      }
      
      If the tracepoints could be placed outside
      entering_irq()/exiting_irq() as follows, it would look cleaner.
      
      smp_trace_irq_handler()
      {
      	trace_irq_entry();
      	smp_irq_handler();
      	trace_irq_exit();
      }
      
      But it doesn't work.
      The problem is the irq_enter/exit() calls: they must be made
      before trace_irq_entry/exit(), because rcu_irq_enter() must be
      called before any tracepoints are used, as tracepoints use RCU
      to synchronize.
      
      As a possible alternative, we may be able to call irq_enter() first as follows
      if irq_enter() can nest.
      
      smp_trace_irq_handler()
      {
      	irq_enter();
      	trace_irq_entry();
      	smp_irq_handler();
      	trace_irq_exit();
      	irq_exit();
      }
      
      But it doesn't work either.
      If irq_enter() is nested, it incurs a time penalty because it has
      to check whether it was already called.  That penalty is not
      desired in performance-sensitive paths, even if it is tiny.
      Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
      Link: http://lkml.kernel.org/r/51C3238D.9040706@hds.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      eddc0e92
  11. 20 Jun 2013 (1 commit)
    • x86: Fix trigger_all_cpu_backtrace() implementation · b52e0a7c
      Committed by Michel Lespinasse
      The following change fixes the x86 implementation of
      trigger_all_cpu_backtrace(), which was previously (accidentally,
      as far as I can tell) disabled so that it always returned false,
      as on architectures that do not implement this function.
      
      trigger_all_cpu_backtrace(), as defined in include/linux/nmi.h,
      should call arch_trigger_all_cpu_backtrace() if available, or
      return false if the underlying arch doesn't implement this
      function.
      
      x86 did provide a suitable arch_trigger_all_cpu_backtrace()
      implementation, but it wasn't actually being used because it was
      declared in asm/nmi.h, which linux/nmi.h doesn't include. Also,
      linux/nmi.h couldn't easily be fixed by including asm/nmi.h,
      because that file is not available on all architectures.
      
      I am proposing to fix this by moving the x86 definition of
      arch_trigger_all_cpu_backtrace() to asm/irq.h (see the sketch
      below).
      
      Tested via: echo l > /proc/sysrq-trigger
      
      Before the change, this used a fallback implementation which
      shows backtraces only on active CPUs (using
      smp_call_function_interrupt()).

      After the change, this shows NMI backtraces on all CPUs.
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1370518875-1346-1-git-send-email-walken@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b52e0a7c
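      For context, a sketch of the include/linux/nmi.h pattern at
      issue: the fallback is chosen purely on whether the macro is
      visible, which is why the x86 definition had to live in a header
      that linux/nmi.h actually includes:

          #ifdef arch_trigger_all_cpu_backtrace
          static inline bool trigger_all_cpu_backtrace(void)
          {
                  arch_trigger_all_cpu_backtrace();
                  return true;
          }
          #else
          static inline bool trigger_all_cpu_backtrace(void)
          {
                  return false;   /* arch provided no implementation */
          }
          #endif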
  12. 31 May 2013 (1 commit)
  13. 30 May 2013 (1 commit)
  14. 20 Feb 2013 (1 commit)
  15. 12 Feb 2013 (1 commit)
  16. 11 Feb 2013 (1 commit)
    • x86/apic: Work around boot failure on HP ProLiant DL980 G7 Server systems · cb214ede
      Committed by Stoney Wang
      When an HP ProLiant DL980 G7 Server boots a regular kernel,
      there will be intermittent lost interrupts, which can result in
      a hang or (in extreme cases) data loss.
      
      The reason is that this system only supports x2apic physical
      mode, while the kernel boots with a logical-cluster default
      setting.
      
      This bug can be worked around by specifying the "x2apic_phys" or
      "nox2apic" boot option, but we want to handle this system
      without requiring manual workarounds.
      
      The BIOS sets ACPI_FADT_APIC_PHYSICAL in the FADT table.
      As all apicids are smaller than 255, the BIOS needs to hand
      control to the OS in xapic mode, according to the x2apic spec,
      chapter 2.9.
      
      The current code handles x2apic as follows when the BIOS hands
      over with xapic mode enabled:

      When the user specifies x2apic_phys, or the FADT indicates
      PHYSICAL:

      1. During the MADT OEM check, the apic driver is first set to the
         xapic logical or xapic phys driver.

      2. enable_IR_x2apic() will enable x2apic_mode.

      3. If the user specified x2apic_phys on the boot line,
         x2apic_phys_probe() will install the correct x2apic phys
         driver and use x2apic phys mode.  Otherwise it skips the
         driver and lets x2apic_cluster_probe() take over and install
         the x2apic cluster driver (the wrong one), even though the
         FADT indicates PHYSICAL, because x2apic_phys_probe() does not
         check the FADT PHYSICAL flag.
      
      Add a check of x2apic_fadt_phys in x2apic_phys_probe() to fix
      the problem; see the sketch below.
      Signed-off-by: Stoney Wang <song-bo.wang@hp.com>
      [ updated the changelog and simplified the code ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: stable@kernel.org
      Link: http://lkml.kernel.org/r/1360263182-16226-1-git-send-email-yinghai@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      cb214ede
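      A sketch of the fix described above, based on the changelog;
      treat the helper and exact conditions as illustrative:

          /* Accept x2apic physical mode either when x2apic_phys was
           * given on the command line or when the FADT asked for
           * physical destination mode. */
          static int x2apic_phys_probe(void)
          {
                  if (x2apic_mode && (x2apic_phys || x2apic_fadt_phys()))
                          return 1;

                  return apic == &apic_x2apic_phys;
          }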
  17. 28 Jan 2013 (16 commits)
  18. 25 Jan 2013 (1 commit)
    • x86/MSI: Support multiple MSIs in presence of IRQ remapping · 51906e77
      Committed by Alexander Gordeev
      The MSI specification has several constraints in comparison with
      MSI-X, most notable of them is the inability to configure MSIs
      independently. As a result, it is impossible to dispatch
      interrupts from different queues to different CPUs. This is
      largely devalues the support of multiple MSIs in SMP systems.
      
      Also, a necessity to allocate a contiguous block of vector
      numbers for devices capable of multiple MSIs might cause a
      considerable pressure on x86 interrupt vector allocator and
      could lead to fragmentation of the interrupt vectors space.
      
      This patch overcomes both drawbacks in the presence of IRQ
      remapping and lets devices take advantage of multiple queues and
      per-IRQ affinity assignments (see the sketch below).
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Matthew Wilcox <willy@linux.intel.com>
      Cc: Jeff Garzik <jgarzik@pobox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/c8bd86ff56b5fc118257436768aaa04489ac0a4c.1353324359.git.agordeev@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      51906e77
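      A hypothetical driver-side sketch of what this enables, using the
      multiple-MSI API of this era (pci_enable_msi_block()); the device
      and handler names are made up:

          static int my_dev_setup_msi(struct pci_dev *pdev, int nvec)
          {
                  int ret, i;

                  /* MSI grants a contiguous, power-of-two block of
                   * vectors; IRQ remapping makes that cheap to satisfy. */
                  ret = pci_enable_msi_block(pdev, nvec);
                  if (ret)
                          return ret < 0 ? ret : -ENOSPC; /* partial grant: bail */

                  /* The allocated IRQs are consecutive from pdev->irq, so
                   * each queue can get its own IRQ and its own affinity. */
                  for (i = 0; i < nvec; i++)
                          request_irq(pdev->irq + i, my_queue_isr, 0,
                                      "my_dev", pdev);
                  return 0;
          }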
  19. 24 Jan 2013 (1 commit)
  20. 08 Dec 2012 (1 commit)
  21. 27 Nov 2012 (1 commit)
    • x86, apic: Cleanup cfg->domain setup for legacy interrupts · 29c574c0
      Committed by Suresh Siddha
      Issues that need to be handled:
      * Handle PIC interrupts on any CPU irrespective of the apic mode
      * In the apic lowest priority logical flat delivery mode, be prepared to
        handle the interrupt on any CPU irrespective of what the IO-APIC RTE says.
      * Because of the above, when the IO-APIC starts handling the
        legacy PIC interrupt, use the same vector that is being used by
        the PIC while programming the corresponding IO-APIC RTE.
      
      Start with all the CPUs in the legacy PIC interrupts' cfg->domain
      (initialization sketched below).

      By the time the IO-APIC starts taking over the PIC interrupts,
      the apic driver model is finalized, so depend on
      assign_irq_vector() to update the cfg->domain and retain the same
      vector that was used by the PIC before.
      
      For the logical apic flat mode, cfg->domain is updated (during
      the first call to assign_irq_vector()) to contain all the
      possible online CPUs (0xff).  The vector used for the legacy PIC
      interrupt doesn't change when the IO-APIC starts handling the
      interrupt, and any interrupt migration after that doesn't change
      the cfg->domain or the vector used.

      For other apic modes like physical mode, cfg->domain is updated
      (during the first call to assign_irq_vector()) to the boot cpu
      (cpu-0), with the same vector that is being used by the PIC.
      When that interrupt is migrated to a different cpu, cfg->domain
      and the assigned vector will change accordingly.
      Tested-by: Borislav Petkov <bp@alien8.de>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1353970176.21070.51.camel@sbsiddha-desk.sc.intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      29c574c0
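      A sketch of the starting state described above; the loop is
      illustrative, with the key points being cpumask_setall() on each
      legacy IRQ's domain and the PIC's vector numbering:

          /* Legacy PIC IRQs begin life routable on every CPU, using the
           * vector the PIC already programmed; assign_irq_vector()
           * later narrows cfg->domain to fit the chosen apic driver. */
          for (i = 0; i < nr_legacy_irqs; i++) {
                  struct irq_cfg *cfg = alloc_irq_and_cfg_at(i, node);

                  cfg->vector = IRQ0_VECTOR + i;
                  cpumask_setall(cfg->domain);
          }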