1. 06 December 2009, 1 commit
  2. 27 November 2009, 12 commits
  3. 17 November 2009, 1 commit
    • x86: Kill bad_dma_address variable · 8fd524b3
      Committed by FUJITA Tomonori
      This kills the bad_dma_address variable, the old mechanism that let
      IOMMU drivers make dma_mapping_error() work in an IOMMU-specific
      way.
      
      The bad_dma_address variable was introduced so that IOMMU drivers
      could make dma_mapping_error() work in an IOMMU-specific way.
      However, it can't handle systems that use both swiotlb and a HW
      IOMMU, so we introduced dma_map_ops->mapping_error to solve that
      case.
      
      Intel VT-d, GART, and swiotlb already use
      dma_map_ops->mapping_error. Calgary, AMD IOMMU, and nommu use
      zero as the error DMA address. This adds DMA_ERROR_CODE and
      converts them to use it (as SPARC and POWER do).
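      
      As a rough illustration of the pattern these drivers are converted to
      (a minimal user-space sketch, not the actual kernel code: dma_addr_t
      is mocked here, and hw_iommu_map() is a hypothetical stand-in for the
      hardware-specific mapping routine):
      
      #include <stddef.h>
      #include <stdio.h>
      
      typedef unsigned long dma_addr_t;              /* mocked kernel type */
      #define DMA_ERROR_CODE ((dma_addr_t)0)         /* zero marks "mapping failed" */
      
      /* Hypothetical hardware mapping call; returns 0 on failure. */
      static dma_addr_t hw_iommu_map(void *vaddr, size_t size)
      {
              (void)size;
              return (unsigned long)vaddr;           /* pretend it worked */
      }
      
      static dma_addr_t example_map_page(void *vaddr, size_t size)
      {
              dma_addr_t dma = hw_iommu_map(vaddr, size);
      
              return dma ? dma : DMA_ERROR_CODE;
      }
      
      /* Counterpart of dma_map_ops->mapping_error for this backend. */
      static int example_mapping_error(dma_addr_t dma_addr)
      {
              return dma_addr == DMA_ERROR_CODE;
      }
      
      int main(void)
      {
              char buf[64];
              dma_addr_t dma = example_map_page(buf, sizeof(buf));
      
              printf("mapping %s\n", example_mapping_error(dma) ? "failed" : "ok");
              return 0;
      }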
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      Cc: muli@il.ibm.com
      Cc: joerg.roedel@amd.com
      LKML-Reference: <1258287594-8777-3-git-send-email-fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 15 November 2009, 2 commits
  5. 11 November 2009, 1 commit
  6. 10 November 2009, 5 commits
    • x86: Handle HW IOMMU initialization failure gracefully · 75f1cdf1
      Committed by FUJITA Tomonori
      If HW IOMMU initialization fails (Intel VT-d often does this,
      typically due to BIOS bugs), we fall back to nommu. That doesn't
      work for most systems, since machines nowadays commonly have more
      than 4GB of memory, so we must use swiotlb instead of nommu.
      
      The problem is that it's too late to initialize swiotlb when HW
      IOMMU initialization fails. We need to allocate the swiotlb memory
      earlier, from the bootmem allocator. Chris explained the issue in
      detail:
      
        http://marc.info/?l=linux-kernel&m=125657444317079&w=2
      
      The current x86 IOMMU initialization sequence is too complicated,
      and handling the above issue makes it even more hacky.
      
      This patch changes the x86 IOMMU initialization sequence to handle
      the above issue cleanly.
      
      The new x86 IOMMU initialization sequence is as follows (a rough
      sketch in code appears after the list):
      
      1. We initialize swiotlb (and set the swiotlb flag to 1) in the case
         of (max_pfn > MAX_DMA32_PFN && !no_iommu). dma_ops is set to
         swiotlb_dma_ops or nommu_dma_ops. If swiotlb usage is forced by
         a boot option, we finish here.
      
      2. We call the detection functions of all the IOMMUs.
      
      3. The detection function sets x86_init.iommu.iommu_init to the
         IOMMU initialization function (so we can avoid needlessly
         calling the initialization functions of all the IOMMUs).
      
      4. If the IOMMU initialization function doesn't need swiotlb, it
         sets the swiotlb flag back to zero (e.g. the initialization was
         successful).
      
      5. If we find that the swiotlb flag is zero, we free the swiotlb
         resources.
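      
      A compressed mock of the five steps above (plain user-space C, not
      the kernel code; only the x86_init.iommu.iommu_init hook name comes
      from the commit, every other identifier here is an illustrative
      stand-in):
      
      #include <stdbool.h>
      #include <stdio.h>
      
      static bool use_swiotlb;                   /* the "swiotlb" flag (step 1) */
      static void (*iommu_init_hook)(void);      /* stands in for x86_init.iommu.iommu_init */
      
      static void hw_iommu_init(void)            /* step 4 */
      {
              puts("HW IOMMU up; bounce buffers no longer needed");
              use_swiotlb = false;
      }
      
      static void detect_all_iommus(void)        /* steps 2 and 3 */
      {
              bool found = true;                 /* pretend detection succeeded */
      
              if (found)
                      iommu_init_hook = hw_iommu_init;
      }
      
      int main(void)
      {
              bool lots_of_memory = true, no_iommu_opt = false;
      
              if (lots_of_memory && !no_iommu_opt) {        /* step 1 */
                      puts("swiotlb_init(): early bounce-buffer allocation");
                      use_swiotlb = true;
              }
      
              detect_all_iommus();
      
              if (iommu_init_hook)
                      iommu_init_hook();                    /* step 4 */
      
              if (!use_swiotlb)                             /* step 5 */
                      puts("swiotlb_free(): release the early allocation");
      
              return 0;
      }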
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: chrisw@sous-sol.org
      Cc: dwmw2@infradead.org
      Cc: joerg.roedel@amd.com
      Cc: muli@il.ibm.com
      LKML-Reference: <1257849980-22640-10-git-send-email-fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: amd_iommu: Convert amd_iommu_detect() to use iommu_init hook · ea1b0d39
      Committed by FUJITA Tomonori
      This changes amd_iommu_detect() to install amd_iommu_init() as the
      iommu_init hook when amd_iommu_detect() finds the AMD IOMMU.
      
      We can kill the code in amd_iommu_init() that checks whether the
      IOMMU was found, since amd_iommu_detect() installs amd_iommu_init()
      only when it has found the IOMMU.
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: chrisw@sous-sol.org
      Cc: dwmw2@infradead.org
      Cc: joerg.roedel@amd.com
      Cc: muli@il.ibm.com
      LKML-Reference: <1257849980-22640-5-git-send-email-fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: GART: Convert gart_iommu_hole_init() to use iommu_init hook · de957628
      Committed by FUJITA Tomonori
      This changes gart_iommu_hole_init() to install gart_iommu_init() as
      the iommu_init hook when gart_iommu_hole_init() finds the GART
      IOMMU.
      
      We can kill the code in gart_iommu_init() that checks whether the
      IOMMU was found, since gart_iommu_hole_init() installs
      gart_iommu_init() only when it has found the IOMMU.
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: chrisw@sous-sol.org
      Cc: dwmw2@infradead.org
      Cc: joerg.roedel@amd.com
      Cc: muli@il.ibm.com
      LKML-Reference: <1257849980-22640-4-git-send-email-fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Calgary: Convert detect_calgary() to use iommu_init hook · d7b9f7be
      Committed by FUJITA Tomonori
      This changes detect_calgary() to install init_calgary() as the
      iommu_init hook when detect_calgary() finds the Calgary IOMMU.
      
      We can kill the code in init_calgary() that checks whether the
      IOMMU was found, since detect_calgary() installs init_calgary()
      only when it has found the IOMMU.
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>
      Cc: chrisw@sous-sol.org
      Cc: dwmw2@infradead.org
      Cc: joerg.roedel@amd.com
      LKML-Reference: <1257849980-22640-3-git-send-email-fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Add iommu_init to x86_init_ops · d07c1be0
      Committed by FUJITA Tomonori
      We call the detection functions of all the IOMMUs and then all of
      their initialization functions. The latter is pointless, since we
      don't detect multiple different IOMMUs. What we need to do is call
      the initialization function of the detected IOMMU only.
      
      This adds an iommu_init hook to x86_init_ops so that an IOMMU
      detection function can set its initialization function to the
      hook (see the sketch below).
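      
      A minimal illustration of the hook pattern this commit introduces
      (plain C, not the real kernel definitions; the struct below is a
      simplified stand-in for the iommu part of x86_init_ops and the
      function names are hypothetical):
      
      #include <stdio.h>
      
      struct iommu_init_ops {
              int (*iommu_init)(void);      /* set by whichever detect routine fires */
      };
      
      static struct iommu_init_ops x86_init_iommu;  /* hook starts out empty */
      
      static int detected_iommu_init(void)
      {
              puts("initializing the one IOMMU that was detected");
              return 0;
      }
      
      static void example_iommu_detect(void)
      {
              /* Only the detection routine that actually finds hardware
               * installs its initializer, so no other init function runs. */
              x86_init_iommu.iommu_init = detected_iommu_init;
      }
      
      int main(void)
      {
              example_iommu_detect();
      
              if (x86_init_iommu.iommu_init)
                      x86_init_iommu.iommu_init();
      
              return 0;
      }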
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: chrisw@sous-sol.org
      Cc: dwmw2@infradead.org
      Cc: joerg.roedel@amd.com
      Cc: muli@il.ibm.com
      LKML-Reference: <1257849980-22640-2-git-send-email-fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  7. 08 November 2009, 2 commits
  8. 06 November 2009, 1 commit
    • x86: Make sure get_user_desc() doesn't sign extend. · 2c75910f
      Committed by Chris Lalancette
      The current implementation of get_user_desc() sign extends the return
      value because of integer promotion rules.  For the most part, this
      doesn't matter, because the top bit of base2 is usually 0.  If, however,
      that bit is 1, then the entire value will be 0xffff...  which is
      probably not what the caller intended.
      
      This patch casts the entire thing to unsigned before returning, which
      generates almost the same assembly as the current code but replaces the
      final "cltq" (sign extend) with a "mov %eax %eax" (zero-extend).  This
      fixes booting certain guests under KVM.
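      
      A stand-alone demonstration of the promotion issue (generic C, not
      the kernel code; the struct is a simplified stand-in for the real
      segment descriptor layout):
      
      #include <stdio.h>
      
      struct fake_desc {
              unsigned short base0;
              unsigned char  base1, base2;
      };
      
      static unsigned long buggy_base(const struct fake_desc *d)
      {
              /* All three fields are promoted to (signed) int; if bit 7 of
               * base2 is set, the combined value is negative and gets
               * sign-extended when widened to unsigned long. */
              return d->base0 | (d->base1 << 16) | (d->base2 << 24);
      }
      
      static unsigned long fixed_base(const struct fake_desc *d)
      {
              /* Casting the whole expression to unsigned makes the widening
               * to unsigned long a zero-extension. */
              return (unsigned)(d->base0 | (d->base1 << 16) | (d->base2 << 24));
      }
      
      int main(void)
      {
              struct fake_desc d = { 0x0000, 0x00, 0x80 };
      
              printf("buggy: 0x%lx\n", buggy_base(&d));   /* 0xffffffff80000000 on x86-64 */
              printf("fixed: 0x%lx\n", fixed_base(&d));   /* 0x80000000 */
              return 0;
      }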
      Signed-off-by: Chris Lalancette <clalance@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 04 November 2009, 1 commit
    • x86, fs: Fix x86 procfs stack information for threads on 64-bit · 89240ba0
      Committed by Stefani Seibold
      This patch fixes two issues in the procfs stack information on
      x86-64 Linux.
      
      The 32-bit loader compat_do_execve did not store the stack
      start (this was figured out by Alexey Dobriyan).
      
      The stack information on an x86_64 kernel always shows 0 kbyte
      stack usage, because of a missing implementation of the KSTK_ESP
      macro, which always returned -1.
      
      The new implementation now returns the right value.
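      
      A toy illustration of why a KSTK_ESP-style macro that returns -1
      makes the reported figure meaningless (user-space C, purely
      illustrative; this is not the procfs code and the numbers are made
      up):
      
      #include <stdio.h>
      
      struct fake_task {
              unsigned long start_stack;   /* stack start recorded at exec time */
              unsigned long saved_sp;      /* user stack pointer saved on kernel entry */
      };
      
      #define KSTK_ESP_BROKEN(t)  ((unsigned long)-1)   /* old behaviour: constant */
      #define KSTK_ESP_FIXED(t)   ((t)->saved_sp)       /* report the real pointer */
      
      static void report(const struct fake_task *t, unsigned long esp)
      {
              if (esp > t->start_stack)
                      puts("stack usage: 0 kB (bogus esp)");
              else
                      printf("stack usage: %lu kB\n",
                             (t->start_stack - esp) / 1024);
      }
      
      int main(void)
      {
              struct fake_task t = {
                      .start_stack = 0x7fff0000f000UL,
                      .saved_sp    = 0x7fff0000c000UL,
              };
      
              report(&t, KSTK_ESP_BROKEN(&t));   /* -1 is never below start_stack */
              report(&t, KSTK_ESP_FIXED(&t));    /* prints the real 12 kB */
              return 0;
      }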
      Signed-off-by: Stefani Seibold <stefani@seibold.net>
      Cc: Americo Wang <xiyou.wangcong@gmail.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <1257240160.4889.24.camel@wall-e>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  10. 03 November 2009, 1 commit
  11. 21 October 2009, 1 commit
  12. 16 October 2009, 1 commit
  13. 14 October 2009, 1 commit
  14. 13 October 2009, 1 commit
    • x86/paravirt: Use normal calling sequences for irq enable/disable · 71999d98
      Committed by Jeremy Fitzhardinge
      Bastian Blank reported a boot crash with stackprotector enabled,
      and debugged it back to edx register corruption.
      
      For historical reasons, irq enable/disable/save/restore had special
      calling sequences to make them more efficient.  With the more
      recent introduction of higher-level and more general optimisations
      this is no longer necessary, so we can just use the normal PVOP_
      macros.
      
      This fixes some residual bugs in the old implementations which left
      edx liable to inadvertent clobbering. It also fixes some bugs in
      __PVOP_VCALLEESAVE which were revealed by actual use.
      Reported-by: Bastian Blank <bastian@waldi.eu.org>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Stable Kernel <stable@kernel.org>
      Cc: Xen-devel <xen-devel@lists.xensource.com>
      LKML-Reference: <4AD3BC9B.7040501@goop.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  15. 10 October 2009, 1 commit
    • x86/amd-iommu: Workaround for erratum 63 · c5cca146
      Committed by Joerg Roedel
      There is an erratum for IOMMU hardware which documents
      undefined behavior when forwarding SMI requests from
      peripherals while the DTE of that peripheral has a sysmgt
      value of 01b. This problem caused weird IO_PAGE_FAULTS in my
      case.
      
      This patch implements the suggested workaround for that
      erratum in the AMD IOMMU driver.  The erratum is documented
      as erratum 63.
      
      Cc: stable@kernel.org
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  16. 04 October 2009, 1 commit
  17. 02 October 2009, 2 commits
    • x86: EDAC: MCE: Fix MCE decoding callback logic · f436f8bb
      Committed by Ingo Molnar
      Make decoding of MCEs happen only on AMD hardware by registering a
      non-default callback only on CPU families which support it.
      
      While looking at the interaction of decode_mce() with the other MCE
      code, I also noticed a few other things and made the following
      cleanups/fixes:
      
       - Fixed the mce_decode() weak alias - a weak alias is really not
         good here, it should be a proper callback. A weak alias will be
         overridden only if the overriding code is built into the kernel -
         not good, obviously.
      
       - The patch initializes the callback on AMD family 10h and 11h.
      
       - Added the more correct fallback printk of:
      
      	No support for human readable MCE decoding on this CPU type.
      	Transcribe the message and run it through 'mcelog --ascii' to decode.
      
         On CPUs that don't have a decoder.
      
       - Made the surrounding code more readable.
      
      Note that the callback allows us to have a default fallback -
      without having to check the CPU versions during the printout
      itself. When an EDAC module registers itself, it can install the
      decode-print function.
      
      (there's no unregister needed as this is core code.)
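      
      A generic illustration of that register-a-decoder-callback pattern
      (plain C; the names and the struct are illustrative stand-ins, not
      the kernel's actual MCE/EDAC interfaces):
      
      #include <stdio.h>
      
      struct fake_mce { unsigned long status; };
      
      /* Core-side default: the fallback printout used when no decoder is
       * registered for this CPU type. */
      static void default_decode(struct fake_mce *m)
      {
              (void)m;
              puts("No support for human readable MCE decoding on this CPU type.");
              puts("Transcribe the message and run it through 'mcelog --ascii' to decode.");
      }
      
      /* The callback the core calls; starts out pointing at the fallback. */
      static void (*mce_decode_callback)(struct fake_mce *) = default_decode;
      
      /* What an EDAC-style module would install at init time on supported CPUs. */
      static void amd_decode(struct fake_mce *m)
      {
              printf("decoded MCE, status=0x%lx\n", m->status);
      }
      
      int main(void)
      {
              struct fake_mce m = { .status = 0xdeadbeef };
      
              mce_decode_callback(&m);            /* fallback path */
              mce_decode_callback = amd_decode;   /* module registers itself */
              mce_decode_callback(&m);            /* decoded path */
              return 0;
      }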
      
      version -v2 by Borislav Petkov:
      
       - add K8 to the set of supported CPUs
      
       - always build in edac_mce_amd since we use an early_initcall now
      
       - fix checkpatch warnings
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      LKML-Reference: <20091001141432.GA11410@aftab>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: fix csum_ipv6_magic asm memory clobber · 392d814d
      Committed by Samuel Thibault
      Just like ip_fast_csum, the assembly snippet in csum_ipv6_magic needs a
      memory clobber, as it is only passed the address of the buffer, not a
      memory reference to the buffer itself.
      
      This caused failures in Hurd's pfinetv4 when we tried to compile it with
      gcc-4.3 (bogus checksums).
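      
      A generic illustration of the constraint involved (GNU C inline asm
      for x86-64, not the kernel's csum code): the asm only receives the
      buffer's address in a register, so without a "memory" clobber (or an
      "m" operand) GCC may assume the pointed-to bytes are never read and
      reorder or drop the stores that fill the buffer:
      
      #include <stddef.h>
      #include <stdio.h>
      
      static unsigned long sum_words(const unsigned long *buf, size_t n)
      {
              unsigned long sum = 0;
              size_t i = n - 1;               /* index of the last element; n must be >= 1 */
      
              asm("1: addq (%[p],%[i],8), %[s]\n\t"
                  "   subq $1, %[i]\n\t"
                  "   jns  1b"
                  : [s] "+r" (sum), [i] "+r" (i)
                  : [p] "r" (buf)
                  : "cc", "memory");          /* tells GCC the asm reads memory via p */
      
              return sum;
      }
      
      int main(void)
      {
              unsigned long buf[4] = { 1, 2, 3, 4 };
      
              printf("%lu\n", sum_words(buf, 4));   /* prints 10 */
              return 0;
      }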
      Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Acked-by: "David S. Miller" <davem@davemloft.net>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 01 October 2009, 2 commits
  19. 24 September 2009, 3 commits