1. 25 May 2008, 14 commits
  2. 20 May 2008, 1 commit
  3. 18 May 2008, 6 commits
  4. 17 May 2008, 1 commit
  5. 14 May 2008, 7 commits
    • x86: user_regset_view table fix for ia32 on 64-bit · 1f465f4e
      Authored by Roland McGrath
      The user_regset_view table for the 32-bit regsets on the 64-bit build had
      the wrong sizes for the FP regsets.  The bug had no user-visible effect;
      it only affected kernel modules using the user_regset interfaces and the
      like.  The fix is trivial and risk-free (an illustrative table entry is
      sketched below).
      Signed-off-by: Roland McGrath <roland@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
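
      An illustrative sketch of such a table entry, with the element size and
      count expressed in 32-bit units; the struct and field names come from
      include/linux/regset.h, linux/elf.h and asm/user32.h, but the entry
      itself is made up for illustration, not taken from the patch:

        static const struct user_regset ia32_fp_regset_example = {
                .core_note_type = NT_PRFPREG,
                .n     = sizeof(struct user_i387_ia32_struct) / sizeof(u32),
                .size  = sizeof(u32),   /* 32-bit units, not sizeof(long) */
                .align = sizeof(u32),
        };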
    • x86: arch/x86/mm/pat.c - fix warning · afc85343
      Authored by Pranith Kumar
      Fix this warning (see the sketch below):
      
       arch/x86/mm/pat.c: In function `phys_mem_access_prot_allowed':
       arch/x86/mm/pat.c:558: warning: long long unsigned int format, long
       unsigned int arg (arg 6)
       arch/x86/mm/pat.c: In function `map_devmem':
       arch/x86/mm/pat.c:580: warning: long long unsigned int format, long
       unsigned int arg (arg 6)
      Signed-off-by: D Pranith Kumar <bobby.prani@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
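
      The usual fix for this class of warning is to cast the argument so it
      matches the "%Lx"/"%llx" format on both 32-bit and 64-bit builds; the
      variables below are made up for illustration, not the exact lines from
      pat.c:

        printk(KERN_INFO "mapping 0x%Lx-0x%Lx\n",
               (unsigned long long)start,
               (unsigned long long)(start + size));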
    • x86: fix csum_partial() export · 89804c02
      Authored by Ingo Molnar
      Fix this symbol export problem:
      
          Building modules, stage 2.
          MODPOST 193 modules
          ERROR: "csum_partial" [fs/reiserfs/reiserfs.ko] undefined!
          make[1]: *** [__modpost] Error 1
          make: *** [modules] Error 2
      
      This is due to a known weakness of symbol exports: if a symbol's
      only in-core user is an EXPORT_SYMBOL from a lib-y section, the
      symbol is not linked in.
      
      The solution is to move the export to x8664_ksyms_64.c (sketched below),
      but the real solution would be to fix kbuild.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
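
      A sketch of what the moved export looks like; the declaration comes from
      asm/checksum.h, and placing EXPORT_SYMBOL() in an obj-y file such as
      arch/x86/kernel/x8664_ksyms_64.c guarantees the symbol is linked into
      the image even when its only in-tree users are modules:

        #include <linux/module.h>
        #include <asm/checksum.h>

        EXPORT_SYMBOL(csum_partial);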
    • x86: early_init_centaur(): use set_cpu_cap() · 8c45a4e4
      Authored by Andrew Morton
      Fix this warning by using set_cpu_cap() instead of an open-coded set_bit()
      (see the sketch below):
      
       arch/x86/kernel/setup_64.c:954: warning: passing argument 2 of 'set_bit' from incompatible pointer type
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
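
      A minimal illustration of the pattern (the feature bit shown is assumed
      for illustration): set_cpu_cap() takes the cpuinfo pointer and hides the
      cast that passing the u32 x86_capability array to set_bit()'s unsigned
      long * parameter would otherwise need, which is what triggered the
      warning:

        /* before: argument 2 of set_bit() has an incompatible pointer type */
        set_bit(X86_FEATURE_CONSTANT_TSC, &c->x86_capability);

        /* after: */
        set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);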
    • x86: fix app crashes after SMP resume · 61165d7a
      Authored by Hugh Dickins
      After resume on a 2-CPU laptop, kernel builds collapse with a sed hang,
      an sh or make segfault (often on 20295564), a real-time signal to cc1, etc.
      
      Several hurdles to jump, but a manually-assisted bisect led to -rc1's
      d2bcbad5 x86: do not zap_low_mappings
      in __smp_prepare_cpus.  Though the low mappings were removed at bootup,
      they were left behind (with Global flags helping to keep them in TLB)
      after resume or cpu online, causing the crashes seen.
      
      Reinstate zap_low_mappings (with local __flush_tlb_all) for each cpu_up
      on x86_32.  This used to be serialized by smp_commenced_mask: that's now
      gone, but a low_mappings flag will do.  No need for native_smp_cpus_done
      to repeat the zap: let mem_init zap BSP's low mappings just like on UP.
      
      (In passing, fix error code from native_cpu_up: do_boot_cpu returns a
      variety of diagnostic values, Dprintk what it says but convert to -EIO.
      And save_pg_dir separately before zap_low_mappings: doesn't matter now,
      but zapping twice in succession wiped out resume's swsusp_pg_dir.)
      
      That worked well on the duo and one quad, but wouldn't boot 3rd or 4th
      cpu on P4 Xeon, oopsing just after unlock_ipi_call_lock.  The TLB flush
      IPI now being sent reveals a long-standing bug: the booting cpu has its
      APIC readied in smp_callin at the top of start_secondary, but isn't put
      into the cpu_online_map until just before that unlock_ipi_call_lock.
      
      So native_smp_call_function_mask to online cpus would send_IPI_allbutself,
      including the cpu just coming up, though it has been excluded from the
      count to wait for: by the time it handles the IPI, the call data on
      native_smp_call_function_mask's stack may well have been overwritten.
      
      So fall back to send_IPI_mask while cpu_online_map does not match
      cpu_callout_map (a sketch of this fallback follows below): perhaps there's
      a better APICological fix to be made at the start_secondary end, but I
      wouldn't know that.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
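
      A rough sketch of that fallback, using the cpumask and APIC helper names
      of that era; this is an illustration of the idea, not the exact hunk:

        /* While some cpu has been called out but is not yet online, the
         * allbutself shortcut could hit a half-booted cpu that was never
         * counted, so send the IPI only to the cpus actually targeted. */
        if (cpus_equal(cpu_online_map, cpu_callout_map))
                send_IPI_allbutself(CALL_FUNCTION_VECTOR);
        else
                send_IPI_mask(mask, CALL_FUNCTION_VECTOR);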
    • x86/PCI: X86_PAT & mprotect · 77db9885
      Authored by Venki Pallipadi
      Some versions of X used the mprotect workaround to change the caching type
      from UC to WB, so that they could then use MTRRs to program WC for that
      region [1].  Change the mmap of PCI space through the /sys or /proc
      interfaces from UC to UC_MINUS (see the sketch below).  With this change,
      X will not need the mprotect workaround to get the WC type, since the
      MTRR mapping type will be honored.
      
      The bug in mprotect that clobbers PAT bits is fixed in a follow-on patch,
      so this X workaround will stop working as well.
      Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
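
      An illustrative sketch of mapping as UC_MINUS instead of strong UC when
      setting up the vma protection for such an mmap; it uses the _PAGE_CACHE_*
      flag names from the x86 PAT headers of that era and is not the exact
      hunk:

        unsigned long prot = pgprot_val(vma->vm_page_prot);

        /* UC_MINUS still defaults to uncached, but unlike strong UC it lets
         * an MTRR that marks the region WC take effect. */
        prot &= ~_PAGE_CACHE_MASK;
        prot |= _PAGE_CACHE_UC_MINUS;
        vma->vm_page_prot = __pgprot(prot);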
    • x86/PCI: fix broken ISA DMA · 4a367f3a
      Authored by Takashi Iwai
      Rene Herman reported:
      
      > commit 8779f2fc
      >
      > "x86: don't try to allocate from DMA zone at first"
      >
      > breaks all of ISA DMA. Or all of ALSA ISA DMA at least. All
      > ISA soundcards are silent following that commit -- no error
      > messages, everything appears fine, just silence.
      
      That patch is buggy.  We had an implicit assumption that dev == NULL
      means an ISA device that requires 24-bit DMA.
      
      The recent work on x86 dma_alloc_coherent() breaks that ISA DMA buffer
      allocation, which is represented by "dev == NULL" and implicitly requires
      24-bit DMA (see the sketch below).
      Bisected-by: Rene Herman <rene.herman@keyaccess.nl>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
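
      A sketch of the convention being restored in dma_alloc_coherent(), not
      the exact hunk; the fallback device name follows the pci-dma code of
      that era:

        /* dev == NULL means an ISA-style request: fall back to a device
         * whose coherent mask only covers the first 16MB. */
        static struct device fallback_dev = {
                .coherent_dma_mask = DMA_24BIT_MASK,
                .dma_mask          = &fallback_dev.coherent_dma_mask,
        };

        if (!dev)
                dev = &fallback_dev;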
  6. 13 May 2008, 3 commits
  7. 11 May 2008, 5 commits
  8. 09 May 2008, 1 commit
  9. 08 May 2008, 2 commits
    • x86: cleanup PAT cpu validation · 8d4a4300
      Authored by Thomas Gleixner
      Move the scattered checks for PAT support into a single function (a rough
      illustration follows below).  It is moved to addon_cpuid_features.c, as
      this file is shared between 32 and 64 bit.
      
      Remove the manipulation of the PAT feature bit and just disable PAT in
      the PAT layer, based on the PAT bit provided by the CPU and the
      current CPU version/model white list.
      
      Change the boot CPU check so it works on Voyager somewhere in the
      future as well :)  Also panic when a secondary has PAT disabled while
      the primary one has already switched to PAT; we have no way to undo
      that.
      
      The white list is kept for now to ensure that we can rely on
      known-to-work CPU types and concentrate on the software-induced problems
      instead of fighting CPU errata and subtle wreckage caused by
      not-yet-verified CPUs.  Once the PAT code has stabilized enough, we can
      remove the white list and open the can of worms.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
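
      A rough illustration of the consolidated check described above; the
      helper pat_disable() and the exact white-list logic are assumed, so this
      is not the actual function body:

        void validate_pat_support(struct cpuinfo_x86 *c)
        {
                if (!cpu_has_pat) {
                        pat_disable("PAT not supported by CPU.");
                        return;
                }
                /* vendor/family white list: only known-good CPUs keep PAT */
                if (c->x86_vendor != X86_VENDOR_INTEL &&
                    c->x86_vendor != X86_VENDOR_AMD)
                        pat_disable("PAT disabled. Not yet verified on this CPU type.");
        }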
    • x86: GEODE: cache results from geode_has_vsa2() and uninline · 547acec7
      Authored by Andres Salomon
      This moves geode_has_vsa2 into a .c file, caches the result read from
      the VSA virtual registers, and makes the function no longer inline (see
      the sketch below).
      
      [akpm@linux-foundation.org: cleanup]
      Signed-off-by: Andres Salomon <dilinger@debian.org>
      Cc: Jordan Crouse <jordan.crouse@amd.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
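
      A sketch of the cache-and-uninline pattern; the VSA virtual-register
      probe is simplified from the old inline and the constants are the
      VSA_* values from asm/geode.h:

        int geode_has_vsa2(void)
        {
                static int has_vsa2 = -1;

                if (has_vsa2 == -1) {
                        /* query the VSA virtual registers for the signature */
                        outw(VSA_VR_UNLOCK, VSA_VRC_INDEX);
                        outw(VSA_VR_SIGNATURE, VSA_VRC_INDEX);
                        has_vsa2 = (inw(VSA_VRC_DATA) == VSA_SIG);
                }
                return has_vsa2;
        }
        EXPORT_SYMBOL_GPL(geode_has_vsa2);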