1. 25 Jan 2013 (2 commits)
  2. 03 Oct 2012 (1 commit)
  3. 19 Sep 2012 (1 commit)
      x86, fpu: always use kernel_fpu_begin/end() for in-kernel FPU usage · 841e3604
      Committed by Suresh Siddha
      Use kernel_fpu_begin/end() instead of unconditionally accessing cr0 and
      saving/restoring just the few used xmm/ymm registers.
      
      This has some advantages like:
      * If the task's FPU state is already active, then kernel_fpu_begin()
        will just save the user state, avoiding the read/write of cr0.
        In general, cr0 accesses are much slower.
      
      * Manual save/restore of xmm/ymm registers will affect the 'modified' and
        the 'init' optimizations brought in by the xsaveopt/xrstor
        infrastructure.
      
      * Forward compatibility with future vector register extensions will be a
        problem if the xmm/ymm registers are manually saved and restored
        (corrupting the extended state of those vector registers).
      
      With this patch, there was no significant difference in the xor throughput
      using AVX, measured during boot.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/1345842782-24175-5-git-send-email-suresh.b.siddha@intel.com
      Cc: Jim Kukunas <james.t.kukunas@linux.intel.com>
      Cc: NeilBrown <neilb@suse.de>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      841e3604
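      The pattern this commit describes can be sketched as kernel-style C. This is a
      non-runnable illustration, not code from the patch: xor_blocks_avx() is a
      hypothetical helper, and the header declaring kernel_fpu_begin()/kernel_fpu_end()
      has moved between kernel versions (in this era it was <asm/i387.h>).

      ```c
      /* Hedged sketch: bracket all in-kernel FPU/SIMD use with
       * kernel_fpu_begin()/kernel_fpu_end() rather than touching cr0
       * and hand-saving individual xmm/ymm registers. */
      static void xor_blocks_avx(unsigned long bytes,
                                 unsigned long *p1, unsigned long *p2)
      {
              kernel_fpu_begin();     /* saves live user FPU state; no cr0
                                       * read/write when state is active */

              /* ... AVX xor loop clobbering ymm registers ... */

              kernel_fpu_end();       /* keeps the xsaveopt/xrstor 'init'
                                       * and 'modified' optimizations and
                                       * future extended state intact */
      }
      ```

      Saving the whole state through the regular FPU paths, rather than a
      hand-picked register subset, is what preserves forward compatibility
      with later vector extensions.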
  4. 22 May 2012 (1 commit)
  5. 23 Oct 2008 (2 commits)
  6. 18 Jun 2008 (1 commit)
  7. 17 Apr 2008 (1 commit)
  8. 30 Jan 2008 (1 commit)
  9. 11 Oct 2007 (1 commit)
  10. 05 Sep 2005 (1 commit)
      [PATCH] i386: inline asm cleanup · 4bb0d3ec
      Committed by Zachary Amsden
      i386 Inline asm cleanup.  Use cr/dr accessor functions.
      
      Also, a potential bugfix: some CR accessors really should be volatile.
      Reads from CR0 (numeric state may change in an exception handler), writes to
      CR4 (flipping CR4.TSD) and reads from CR2 (page fault) prevent instruction
      re-ordering.  I did not add memory clobber to CR3 / CR4 / CR0 updates, as it
      was not there to begin with, and in no case should kernel memory be clobbered,
      except when doing a TLB flush, which already has memory clobber.
      
      I noticed that page invalidation does not have a memory clobber.  I can't find
      a bug as a result, but there is definitely a potential for a bug here:
      
      #define __flush_tlb_single(addr) \
      	__asm__ __volatile__("invlpg %0": :"m" (*(char *) addr))
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4bb0d3ec
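      The effect of the "memory" clobber the commit discusses can be illustrated
      in user space (a hedged sketch: the kernel's invlpg asm itself only runs in
      ring 0, so this demo uses an empty gcc-style asm as a pure compiler barrier):

      ```c
      #include <stdio.h>

      /* A compiler barrier: the "memory" clobber tells the compiler the asm
       * may read or write any memory, so it must not cache values in
       * registers across the asm or reorder memory accesses around it.
       * The kernel's full TLB-flush asm carries this clobber; the quoted
       * __flush_tlb_single() macro does not, which is the potential bug
       * the commit describes. */
      #define barrier() __asm__ __volatile__("" : : : "memory")

      int main(void)
      {
          int data = 41;
          volatile int flag = 0;

          data = 42;      /* without the barrier, this store could sink... */
          barrier();      /* ...but it cannot be reordered past this point */
          flag = 1;

          printf("%d %d\n", data, flag);
          return 0;
      }
      ```

      The barrier changes only what the compiler may reorder, not what the CPU
      executes; the kernel pairs such clobbers with the actual flush instruction.
      
      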
  11. 17 Apr 2005 (1 commit)
      Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4