  1. 12 January 2006, 1 commit
    • [PATCH] i386/x86-64: Update AMD CPUID flags · 3f98bc49
      Andi Kleen authored
      Print the feature bits for RDTSCP, SVM, and CR8-LEGACY.
      
      Also print the power flags on i386, as x86-64 has always done.
      This adds a new line to the i386 cpuinfo output, but that should
      not be an issue; lines have been added there in the past without
      any reported breakage.
      
      I shrank some of the fields in the i386 cpuinfo_x86 to chars to
      make up for the new int "x86_power" field, so overall the
      structure is smaller than before.
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      3f98bc49
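      The extended AMD feature bits mentioned in the entry above live in CPUID
      leaf 0x80000001.  A minimal user-space sketch (not part of the patch; the
      bit positions are the commonly documented ones and should be checked
      against the AMD manuals) could query them like this:

      /* Hedged illustration: read the extended AMD feature leaf and report the
       * RDTSCP, SVM and CR8-legacy bits that the patch teaches cpuinfo to print.
       * Assumed bit positions: EDX bit 27 (RDTSCP), ECX bit 2 (SVM), ECX bit 4
       * (CR8-legacy / AltMovCr8) of leaf 0x80000001. */
      #include <stdio.h>
      #include <cpuid.h>

      int main(void)
      {
      	unsigned int eax, ebx, ecx, edx;

      	if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
      		fprintf(stderr, "extended CPUID leaf not available\n");
      		return 1;
      	}
      	printf("rdtscp     : %s\n", (edx >> 27) & 1 ? "yes" : "no");
      	printf("svm        : %s\n", (ecx >> 2) & 1 ? "yes" : "no");
      	printf("cr8_legacy : %s\n", (ecx >> 4) & 1 ? "yes" : "no");
      	return 0;
      }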
  2. 15 November 2005, 1 commit
  3. 07 November 2005, 1 commit
  4. 11 September 2005, 1 commit
  5. 05 September 2005, 4 commits
    • [PATCH] x86: make IOPL explicit · a5201129
      Zachary Amsden authored
      The pushf/popf in switch_to are ONLY used to switch IOPL.  Making this
      explicit in C code is clearer.  The pushf/popf pair was added as a
      bugfix for leaking IOPL to unprivileged processes when using
      sysenter/sysexit-based system calls (sysexit does not restore flags).
      
      When requesting an IOPL change in sys_iopl(), it is just as easy to change
      the current flags and the flags in the stack image (in case an IRET is
      required), but there is no reason to force an IRET if we came in from the
      SYSENTER path.
      
      This change is the minimal solution for supporting a paravirtualized Linux
      kernel that allows user processes to run with I/O privilege.  Other
      solutions require radical rewrites of part of the low level fault / system
      call handling code, or do not fully support sysenter based system calls.
      
      Unfortunately, this added one field to the thread_struct.  But as a bonus,
      on P4, the fastest time measured for switch_to() went from 312 to 260
      cycles, a win of about 17% in the fast case through this
      performance-critical path.
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a5201129
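      For context on the privilege being discussed above: IOPL occupies bits
      12-13 of EFLAGS, and user space requests a change through sys_iopl().  A
      small illustration using the standard glibc wrapper (not code from the
      patch) looks like this:

      /* Hedged illustration: iopl(3) asks the kernel, via sys_iopl(), to raise
       * the I/O privilege level stored in EFLAGS bits 12-13.  Requires root or
       * CAP_SYS_RAWIO; x86 Linux only. */
      #include <stdio.h>
      #include <sys/io.h>

      int main(void)
      {
      	if (iopl(3) != 0) {
      		perror("iopl");
      		return 1;
      	}
      	/* With IOPL == 3 this process may execute in/out (and cli/sti)
      	 * directly, which is exactly the privilege the patch has to
      	 * preserve correctly across switch_to(). */
      	printf("I/O privilege level raised to 3\n");
      	return 0;
      }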
    • [PATCH] i386: fix incorrect TSS entry for LDT · 4f0cb8d9
      Ingo Molnar authored
      Noticed by Chuck Ebbert: the .ldt entry of the TSS was set up incorrectly.
      It never mattered, since it was a leftover from old times, so remove it.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4f0cb8d9
    • [PATCH] i386: cleanup serialize msr · 245067d1
      Zachary Amsden authored
      i386 arch cleanup.  Introduce the serialize macro to serialize processor
      state.  Why the microcode update needs it I am not quite sure, since wrmsr()
      is already a serializing instruction, but since it is a microcode update I
      will keep the semantics the same, as this could be a timing workaround.  As
      far as I can tell, it has been there since the original microcode update
      source.
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      245067d1
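      For reference, one plausible shape for such a serializing helper (the name
      and exact constraints below are illustrative, not taken from the patch) is
      an inline-asm wrapper around CPUID, which the architecture defines as a
      serializing instruction:

      /* Hedged sketch of a CPU-serializing helper built on CPUID; the name and
       * operand constraints are illustrative. */
      static inline void serialize_cpu(void)
      {
      	unsigned int eax = 1, ebx, ecx, edx;

      	__asm__ __volatile__("cpuid"
      			     : "+a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
      			     :
      			     : "memory");
      }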
    • [PATCH] i386: inline asm cleanup · 4bb0d3ec
      Zachary Amsden authored
      i386 inline asm cleanup.  Use cr/dr accessor functions.
      
      Also a potential bugfix: some CR accessors really should be volatile.
      Reads from CR0 (numeric state may change in an exception handler), writes to
      CR4 (flipping CR4.TSD), and reads from CR2 (page fault) must not be subject
      to instruction re-ordering.  I did not add a memory clobber to the CR3 /
      CR4 / CR0 updates, as it was not there to begin with, and in no case should
      kernel memory be clobbered, except when doing a TLB flush, which already
      has a memory clobber.
      
      I noticed that page invalidation does not have a memory clobber.  I can't find
      a bug as a result, but there is definitely a potential for a bug here:
      
      #define __flush_tlb_single(addr) \
      	__asm__ __volatile__("invlpg %0": :"m" (*(char *) addr))
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4bb0d3ec
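      To make the cleanup above concrete, the accessor functions it introduces
      have roughly this shape (names and constraints here are an illustrative
      sketch, not the exact kernel code); the volatile qualifier is what keeps
      the compiler from caching or re-ordering the CR reads and writes discussed
      above:

      /* Hedged sketch of cr accessor helpers in the style of the patch. */
      static inline unsigned long read_cr0(void)
      {
      	unsigned long val;

      	__asm__ __volatile__("movl %%cr0, %0" : "=r" (val));
      	return val;
      }

      static inline void write_cr0(unsigned long val)
      {
      	__asm__ __volatile__("movl %0, %%cr0" : : "r" (val));
      }

      static inline unsigned long read_cr2(void)
      {
      	unsigned long val;

      	__asm__ __volatile__("movl %%cr2, %0" : "=r" (val));
      	return val;
      }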
  6. 17 August 2005, 1 commit
  7. 08 July 2005, 1 commit
  8. 26 June 2005, 1 commit
  9. 24 June 2005, 1 commit
  10. 17 April 2005, 3 commits