1. 11 Oct, 2007 1 commit
  2. 03 May, 2007 1 commit
    • B
      [PATCH] x86: Save and restore the fixed-range MTRRs of the BSP when suspending · 3ebad590
      Committed by Bernhard Kaindl
      Note: This patch didn't need an update since its initial post.
      
      Some BIOSes may modify fixed-range MTRRs in SMM, e.g. when they
      transition the system into ACPI mode, which is entered through an
      SMI triggered by Linux in acpi_enable().
      
      Such SMIs, which interrupt Linux and execute BIOS code in SMM
      (code which may change e.g. the fixed-range MTRRs), may also be
      raised on other occasions by an embedded system controller, which
      is often found in notebooks.
      
      If we did not update our copy of the fixed-range MTRRs before
      suspending to RAM or to disk, restore_processor_state() would
      set the fixed-range MTRRs of the BSP from stale backup values,
      which could cause the system to fail later during resume.
      
      This patch ensures that our copy of the fixed-range MTRRs
      is updated when saving the boot processor state on suspend
      to disk and suspend to RAM.
      
      In combination with other patches, this fixes s2ram and s2disk
      on the Acer Ferrari 1000 notebook and at least s2disk on the
      Acer Ferrari 5000 notebook.
      Signed-off-by: Bernhard Kaindl <bk@suse.de>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      3ebad590
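
      The failure mode this commit describes can be sketched as a small
      userspace model (all names and the simulated-register array are
      illustrative, not the kernel's actual MTRR code): the kernel keeps a
      backup copy of the fixed-range MTRRs, firmware changes the live
      registers behind its back in SMM, and restoring from a stale backup
      on resume silently undoes the firmware's changes. Refreshing the
      backup just before suspend is the fix.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical model of the fix.  There are 11 fixed-range MTRRs; we
 * simulate them with an array instead of rdmsr/wrmsr so this runs in
 * userspace.  Names are illustrative only. */
#define NUM_FIXED_RANGES 11

static unsigned long long live_mtrr[NUM_FIXED_RANGES];  /* "hardware"  */
static unsigned long long saved_mtrr[NUM_FIXED_RANGES]; /* kernel copy */

static void save_fixed_ranges(void)    /* refresh backup from "hw" */
{
    memcpy(saved_mtrr, live_mtrr, sizeof(saved_mtrr));
}

static void restore_fixed_ranges(void) /* resume path */
{
    memcpy(live_mtrr, saved_mtrr, sizeof(live_mtrr));
}

/* Returns the live value of range 0 after a suspend/resume cycle. */
unsigned long long suspend_resume_cycle(int refresh_before_suspend)
{
    live_mtrr[0] = 0x0606060606060606ULL;   /* boot-time setting      */
    save_fixed_ranges();                    /* boot-time backup       */

    live_mtrr[0] = 0x0505050505050505ULL;   /* SMM/BIOS modifies it   */

    if (refresh_before_suspend)
        save_fixed_ranges();                /* the fix in this patch  */

    restore_fixed_ranges();                 /* restore_processor_state() */
    return live_mtrr[0];
}
```

      Without the refresh, the resume path writes back the boot-time
      value and reverts what the BIOS set up for ACPI mode.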
  3. 07 Dec, 2006 1 commit
    • R
      [PATCH] paravirt: header and stubs for paravirtualisation · d3561b7f
      Committed by Rusty Russell
      Create a paravirt.h header for all the critical operations which need to be
      replaced with hypervisor calls, and include that instead of defining native
      operations, when CONFIG_PARAVIRT is enabled.
      
      This patch does the dumbest possible replacement of paravirtualized
      instructions: calls through a "paravirt_ops" structure.  Currently these are
      function implementations of native hardware: hypervisors will override the ops
      structure with their own variants.
      
      All the pv-ops functions are declared "fastcall" so that a specific
      register-based ABI is used, to make inlining assembler easier.
      
      And:
      
      From: Andy Whitcroft <apw@shadowen.org>
      
      The paravirt ops introduce a 'weak' attribute onto memory_setup().
      Code ordering leads to the following warnings on x86:
      
          arch/i386/kernel/setup.c:651: warning: weak declaration of
                      `memory_setup' after first use results in unspecified behavior
      
      Move memory_setup() to avoid this.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Chris Wright <chrisw@sous-sol.org>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      d3561b7f
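
      The "calls through a paravirt_ops structure" pattern the commit
      describes can be sketched in a few lines (this is an illustrative
      stand-alone sketch, not the kernel's actual paravirt_ops layout or
      member names): the structure holds function pointers initialized to
      native implementations, and a hypervisor overrides individual
      entries with its own variants at boot.

```c
#include <assert.h>

/* Illustrative ops table: defaults call native code; a hypervisor
 * replaces entries with hypercall-based variants. */
struct pv_ops {
    unsigned long (*read_cr3)(void);
    void (*cpu_halt)(void);
};

/* Native stand-ins (a real kernel would use inline asm here). */
static unsigned long native_read_cr3(void) { return 0x1000; }
static void native_halt(void) { /* would execute "hlt" on real hw */ }

/* Hypervisor stand-in (a real guest would issue a hypercall). */
static unsigned long hv_read_cr3(void) { return 0x2000; }

static struct pv_ops pv = { native_read_cr3, native_halt };

/* Call sites go through the table instead of native instructions. */
unsigned long guest_read_cr3(void)
{
    return pv.read_cr3();
}

/* What a hypervisor's boot code would do: patch the table. */
void install_hypervisor_ops(void)
{
    pv.read_cr3 = hv_read_cr3;
}
```

      The indirection costs a function call per operation on native
      hardware, which is why the commit text calls it "the dumbest
      possible replacement"; later patches binary-patched the call sites.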
  4. 01 Jul, 2006 1 commit
  5. 23 Jun, 2006 2 commits
  6. 24 May, 2006 1 commit
  7. 07 Nov, 2005 1 commit
  8. 31 Oct, 2005 1 commit
    • S
      [PATCH] FPU context corrupted after resume · 08967f94
      Committed by Shaohua Li
      mxcsr_feature_mask_init isn't needed at suspend/resume time (we can use
      the boot-time mask).  It is actually harmful, as it clears a task's saved
      fxsave state on resume.  This bug was widely seen by users running zsh.
      
      (akpm: my eyes.  Fixed some surrounding whitespace mess)
      
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      08967f94
  9. 27 Sep, 2005 1 commit
  10. 05 Sep, 2005 3 commits
    • Z
      [PATCH] x86: remove redundant TSS clearing · e9f86e35
      Committed by Zachary Amsden
      When reviewing GDT updates, I found the code:
      
      	set_tss_desc(cpu,t);	/* This just modifies memory; ... */
              per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS].b &= 0xfffffdff;
      
      This second line is unnecessary, since set_tss_desc() has already cleared
      the busy bit.
      
      Commented disassembly, line 1:
      
      c028b8bd:       8b 0c 86                mov    (%esi,%eax,4),%ecx
      c028b8c0:       01 cb                   add    %ecx,%ebx
      c028b8c2:       8d 0c 39                lea    (%ecx,%edi,1),%ecx
      
        => %ecx = per_cpu(cpu_gdt_table, cpu)
      
      c028b8c5:       8d 91 80 00 00 00       lea    0x80(%ecx),%edx
      
        => %edx = &per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS]
      
      c028b8cb:       66 c7 42 00 73 20       movw   $0x2073,0x0(%edx)
      c028b8d1:       66 89 5a 02             mov    %bx,0x2(%edx)
      c028b8d5:       c1 cb 10                ror    $0x10,%ebx
      c028b8d8:       88 5a 04                mov    %bl,0x4(%edx)
      c028b8db:       c6 42 05 89             movb   $0x89,0x5(%edx)
      
        => ((char *)%edx)[5] = 0x89
        (equivalent) ((char *)per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS])[5] = 0x89
      
      c028b8df:       c6 42 06 00             movb   $0x0,0x6(%edx)
      c028b8e3:       88 7a 07                mov    %bh,0x7(%edx)
      c028b8e6:       c1 cb 10                ror    $0x10,%ebx
      
        => other bits
      
      Commented disassembly, line 2:
      
      c028b8e9:       8b 14 86                mov    (%esi,%eax,4),%edx
      c028b8ec:       8d 04 3a                lea    (%edx,%edi,1),%eax
      
        => %eax = per_cpu(cpu_gdt_table, cpu)
      
      c028b8ef:       81 a0 84 00 00 00 ff    andl   $0xfffffdff,0x84(%eax)
      
        => per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS].b &= 0xfffffdff;
        (equivalent) ((char *)per_cpu(cpu_gdt_table, cpu)[GDT_ENTRY_TSS])[5] &= 0xfd
      
      Note that (0x89 & ~0xfd) == 0; i.e, set_tss_desc(cpu,t) has already stored
      the type field in the GDT with the busy bit clear.
      
      Eliminating redundant and obscure code is always a good thing; in fact, I
      pointed out this same optimization many moons ago in arch/i386/setup.c,
      back when it used to be called that.
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      e9f86e35
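
      The bit arithmetic the disassembly establishes can be checked
      directly. The TSS descriptor type byte written by set_tss_desc() is
      0x89 (32-bit available TSS); the busy bit is bit 1 of that byte
      (bit 9 of the descriptor's high word .b, hence the 0xfffffdff
      mask), and 0x89 already has that bit clear, so the AND is a no-op:

```c
#include <assert.h>

/* Returns 1 if clearing the busy bit of the type byte written by
 * set_tss_desc() changes nothing, i.e. the second line of the quoted
 * code is redundant. */
int busy_clear_already(void)
{
    unsigned int type_byte = 0x89;             /* 32-bit available TSS */
    unsigned int after_and = type_byte & 0xfd; /* clear busy bit (bit 1) */
    return after_and == type_byte;             /* 1: the AND was a no-op */
}
```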
    • Z
      [PATCH] i386: inline assembler: cleanup and encapsulate descriptor and task register management · 4d37e7e3
      Committed by Zachary Amsden
      i386 inline assembler cleanup.
      
      This change encapsulates descriptor and task register management.  Also,
      it is possible to improve assembler generation in two cases; savesegment
      may store the value in a register instead of a memory location, which
      allows GCC to optimize stack variables into registers, and MOV MEM, SEG
      is always a 16-bit write to memory, making the casting in math-emu
      unnecessary.
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4d37e7e3
    • Z
      [PATCH] i386: inline asm cleanup · 4bb0d3ec
      Committed by Zachary Amsden
      i386 inline asm cleanup.  Use cr/dr accessor functions.
      
      This also contains a potential bugfix: some CR accessors really should be
      volatile.  Reads from CR0 (numeric state may change in an exception handler),
      writes to CR4 (flipping CR4.TSD) and reads from CR2 (page fault) prevent
      instruction re-ordering.  I did not add a memory clobber to CR3 / CR4 / CR0
      updates, as it was not there to begin with, and in no case should kernel
      memory be clobbered, except when doing a TLB flush, which already has a
      memory clobber.
      
      I noticed that page invalidation does not have a memory clobber.  I can't find
      a bug as a result, but there is definitely a potential for a bug here:
      
      #define __flush_tlb_single(addr) \
      	__asm__ __volatile__("invlpg %0": :"m" (*(char *) addr))
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4bb0d3ec
  11. 08 Jul, 2005 1 commit
  12. 26 Jun, 2005 2 commits
  13. 24 Jun, 2005 1 commit
  14. 17 Apr, 2005 1 commit
    • L
      Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4