1. 05 September 2005, 3 commits
    • [PATCH] x86: remove redundant TSS clearing · e9f86e35
      Authored by Zachary Amsden
      When reviewing GDT updates, I found the code:
      
      	set_tss_desc(cpu,t);	/* This just modifies memory; ... */
              per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS].b &= 0xfffffdff;
      
      This second line is unnecessary, since set_tss_desc() has already cleared
      the busy bit.
      
      Commented disassembly, line 1:
      
      c028b8bd:       8b 0c 86                mov    (%esi,%eax,4),%ecx
      c028b8c0:       01 cb                   add    %ecx,%ebx
      c028b8c2:       8d 0c 39                lea    (%ecx,%edi,1),%ecx
      
        => %ecx = per_cpu(cpu_gdt_table, cpu)
      
      c028b8c5:       8d 91 80 00 00 00       lea    0x80(%ecx),%edx
      
        => %edx = &per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS]
      
      c028b8cb:       66 c7 42 00 73 20       movw   $0x2073,0x0(%edx)
      c028b8d1:       66 89 5a 02             mov    %bx,0x2(%edx)
      c028b8d5:       c1 cb 10                ror    $0x10,%ebx
      c028b8d8:       88 5a 04                mov    %bl,0x4(%edx)
      c028b8db:       c6 42 05 89             movb   $0x89,0x5(%edx)
      
        => ((char *)%edx)[5] = 0x89
        (equivalent) ((char *)&per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS])[5] = 0x89
      
      c028b8df:       c6 42 06 00             movb   $0x0,0x6(%edx)
      c028b8e3:       88 7a 07                mov    %bh,0x7(%edx)
      c028b8e6:       c1 cb 10                ror    $0x10,%ebx
      
        => other bits
      
      Commented disassembly, line 2:
      
      c028b8e9:       8b 14 86                mov    (%esi,%eax,4),%edx
      c028b8ec:       8d 04 3a                lea    (%edx,%edi,1),%eax
      
        => %eax = per_cpu(cpu_gdt_table, cpu)
      
      c028b8ef:       81 a0 84 00 00 00 ff    andl   $0xfffffdff,0x84(%eax)
      
        => per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS].b &= 0xfffffdff;
        (equivalent) ((char *)&per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS])[5] &= 0xfd
      
      Note that (0x89 & ~0xfd) == 0; i.e., set_tss_desc(cpu,t) has already stored
      the type field in the GDT with the busy bit clear.
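
      The arithmetic can be checked in isolation.  A quick standalone check
      (illustration only, not code from the patch; the constants mirror the
      ones above):

      	/* Byte 5 of a descriptor is the access byte; for a TSS the busy bit
      	 * is bit 1 of the type field, i.e. bit 9 of the high dword, which is
      	 * exactly the bit the 0xfffffdff mask clears. */
      	#include <assert.h>

      	int main(void)
      	{
      		unsigned int b = 0x00008900;		/* high dword: access byte 0x89 in bits 8..15 */

      		assert((0x89 & ~0xfd) == 0);		/* busy bit already clear in the access byte */
      		assert((b & ~0xfffffdffu) == 0u);	/* so the andl $0xfffffdff is a no-op */
      		return 0;
      	}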
      
      Eliminating redundant and obscure code is always a good thing; in fact, I
      pointed out this same optimization many moons ago in arch/i386/setup.c,
      back when it used to be called that.

      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] i386: inline assembler: cleanup and encapsulate descriptor and task register management · 4d37e7e3
      Authored by Zachary Amsden
      i386 inline assembler cleanup.
      
      This change encapsulates descriptor and task register management.  Also,
      it is possible to improve assembler generation in two cases: savesegment
      may store the value in a register instead of a memory location, which
      allows GCC to optimize stack variables into registers, and MOV MEM, SEG
      is always a 16-bit write to memory, making the casting in math-emu
      unnecessary.
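
      A rough sketch of the savesegment idea (the macro name and constraint
      below are illustrative, not necessarily the exact code in this patch):

      	/* An "=rm" output constraint lets GCC keep the saved selector in a
      	 * register or spill it to memory as it sees fit; the store of a
      	 * segment register is a plain 16-bit write, so no cast is needed. */
      	#define savesegment_sketch(seg, value) \
      		asm volatile("mov %%" #seg ",%0" : "=rm" (value))

      	static inline unsigned short current_fs_sketch(void)
      	{
      		unsigned short sel;
      		savesegment_sketch(fs, sel);	/* sel may stay in a register */
      		return sel;
      	}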

      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] i386: inline asm cleanup · 4bb0d3ec
      Authored by Zachary Amsden
      i386 inline asm cleanup.  Use CR/DR accessor functions.
      
      Also, a potential bugfix: some CR accessors really should be volatile.
      Reads from CR0 (numeric state may change in an exception handler), writes
      to CR4 (flipping CR4.TSD), and reads from CR2 (page fault) must not be
      reordered, so volatile is needed there.  I did not add a memory clobber
      to the CR3 / CR4 / CR0 updates, as none was there to begin with, and in
      no case should kernel memory be clobbered, except when doing a TLB flush,
      which already has a memory clobber.
      
      I noticed that page invalidation does not have a memory clobber.  I can't find
      a bug as a result, but there is definitely a potential for a bug here:
      
      #define __flush_tlb_single(addr) \
      	__asm__ __volatile__("invlpg %0": :"m" (*(char *) addr))
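
      For illustration, a variant with the clobber added might look like the
      following (an assumption about a possible fix, not a line from the patch):

      	/* The "memory" clobber forces GCC to assume memory may change, so it
      	 * will not carry cached values across the invalidation. */
      	#define __flush_tlb_single_with_clobber(addr) \
      		__asm__ __volatile__("invlpg %0" : : "m" (*(char *)(addr)) : "memory")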

      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  2. 08 July 2005, 1 commit
  3. 26 June 2005, 2 commits
  4. 24 June 2005, 1 commit
  5. 17 April 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!