1. 10 September 2009, 3 commits
    • xen: use stronger barrier after unlocking lock · 2496afbf
      Yang Xiaowei committed
      We need to have a stronger barrier between releasing the lock and
      checking for any waiting spinners.  A compiler barrier is not sufficient
      because the CPU's ordering rules do not prevent the read of xl->spinners
      from happening before the unlock assignment, as they are different
      memory locations.
      
      We need an explicit memory barrier to enforce this write-read ordering
      between different memory locations (a simplified sketch of the unlock
      path follows this entry).
      
      Without this barrier, I can't bring up more than 4 HVM guests on one
      SMP machine.
      
      [ Code and commit comments expanded -J ]
      
      [ Impact: avoid deadlock when using Xen PV spinlocks ]
      Signed-off-by: Yang Xiaowei <xiaowei.yang@intel.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
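      A minimal sketch of the unlock path described above, assuming the
      simplified xen_spinlock layout (a lock byte plus a spinners count) used
      by the Xen PV spinlock code; names are illustrative, not the literal
      patch:

          struct xen_spinlock {
              unsigned char lock;          /* 0 -> free; 1 -> locked */
              unsigned short spinners;     /* count of waiting, blocked cpus */
          };

          static void xen_spin_unlock_sketch(struct xen_spinlock *xl)
          {
              wmb();               /* keep earlier writes before the unlock */
              xl->lock = 0;        /* release the lock */

              /*
               * Full barrier: the store above and the load below touch
               * different memory locations, so a compiler barrier() alone
               * does not stop the CPU from reading xl->spinners before the
               * unlock store becomes visible.
               */
              mb();

              if (unlikely(xl->spinners))
                  xen_spin_unlock_slow(xl);   /* kick any blocked waiters */
          }

      xen_spin_unlock_slow() stands for the existing slow path that sends the
      wakeup event; the substance of the change is the mb() between the store
      and the load.
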
    • xen: only enable interrupts while actually blocking for spinlock · 4d576b57
      Jeremy Fitzhardinge committed
      Where possible we enable interrupts while waiting for a spinlock to
      become free, in order to reduce big latency spikes in interrupt handling.
      
      However, at present if we manage to pick up the spinlock just before
      blocking, we'll end up holding the lock with interrupts enabled for a
      while.  This will cause a deadlock if we receive an interrupt in that
      window, and the interrupt handler tries to take the lock too.
      
      Solve this by shrinking the interrupt-enabled region to just around the
      blocking call (a sketch of the narrowed window follows this entry).
      
      [ Impact: avoid race/deadlock when using Xen PV spinlocks ]
      Reported-by: "Yang, Xiaowei" <xiaowei.yang@intel.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
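      A sketch of the narrowed window, assuming the slow path blocks on a
      per-cpu event channel via xen_poll_irq(); the trylock helper and the
      spinner bookkeeping are stand-ins, not the literal patch:

          /* Sketch only: stands in for the real trylock helper in
           * arch/x86/xen/spinlock.c. */
          static bool xen_spin_trylock_once(struct xen_spinlock *xl);

          static void xen_spin_lock_slow_sketch(struct xen_spinlock *xl,
                                                int irq, bool irq_enable)
          {
              unsigned long flags;

              do {
                  /* ...register ourselves as a spinner, re-check the lock... */

                  local_save_flags(flags);
                  if (irq_enable)
                      local_irq_enable();     /* enable only around the block */

                  /* An interrupt arriving here leaves the wakeup IRQ pending,
                   * so the poll below just returns immediately. */
                  xen_poll_irq(irq);          /* block until kicked */

                  local_irq_restore(flags);   /* interrupts off again before we
                                                 can possibly take the lock */
              } while (!xen_spin_trylock_once(xl));
          }

      Because interrupts are disabled again before the lock can be acquired,
      the lock is never held with interrupts enabled, which closes the window
      described above.
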
    • xen: make -fstack-protector work under Xen · 577eebea
      Jeremy Fitzhardinge committed
      -fstack-protector uses a special per-cpu "stack canary" value.
      gcc generates code in each function that checks the canary to make
      sure the function's stack hasn't been overrun (a conceptual sketch of
      that check follows this entry).
      
      On x86-64, the canary is simply at an offset from %gs, which is the
      usual per-cpu base segment register, so setting it up just requires
      loading %gs's base as normal.
      
      On i386, the stack protector segment is %gs (rather than the usual kernel
      percpu %fs segment register).  This requires setting up the full kernel
      GDT and then loading %gs accordingly.  We also need to make sure %gs
      is initialized when bringing up secondary CPUs.
      
      To keep things consistent, we do the full GDT/segment register setup on
      both architectures (sketched at the end of this entry).
      
      Because we need to avoid running stack-protected code before the GDT is
      set up, and because there is no way to disable the protector on a
      per-function basis, several files need to have -fstack-protector
      inhibited.
      
      [ Impact: allow Xen booting with stack-protector enabled ]
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
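      A conceptual sketch of the per-function check that -fstack-protector
      makes gcc emit; in the kernel the canary is not a global but lives at a
      fixed offset in the %gs-based segment (typically %gs:0x28 on x86-64 and
      %gs:0x14 on i386), which is what the GDT/segment setup below provides:

          /* Conceptual only -- the real check is emitted by the compiler. */
          extern unsigned long __stack_chk_guard;   /* the canary value */
          extern void __stack_chk_fail(void);       /* called on corruption */

          void some_function(void)
          {
              unsigned long canary = __stack_chk_guard;   /* stored on entry */
              char buf[64];

              /* ... normal function body that writes into buf ... */

              if (canary != __stack_chk_guard)    /* compared before return */
                  __stack_chk_fail();             /* stack has been overrun */
          }

      And a sketch of the i386 segment setup the message describes, modeled on
      the kernel's stack-protector helpers (per_cpu(stack_canary, ...),
      GDT_ENTRY_STACK_CANARY, __KERNEL_STACK_CANARY); treat the exact calls as
      assumptions rather than the literal patch:

          /* Kernel context: asm/desc.h, asm/segment.h, linux/percpu.h */
          static void setup_canary_segment_sketch(int cpu)
          {
              unsigned long base = (unsigned long)&per_cpu(stack_canary, cpu);
              struct desc_struct *gdt = get_cpu_gdt_table(cpu);
              struct desc_struct desc = gdt[GDT_ENTRY_STACK_CANARY];

              set_desc_base(&desc, base);      /* segment base = canary area */
              write_gdt_entry(gdt, GDT_ENTRY_STACK_CANARY, &desc, DESCTYPE_S);

              /* Once the GDT is live, %gs must be (re)loaded with the canary
               * selector -- on each secondary CPU as well -- before any
               * stack-protected code runs there. */
              loadsegment(gs, __KERNEL_STACK_CANARY);
          }
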
  2. 06 September 2009, 27 commits
  3. 05 September 2009, 10 commits