1. 09 Aug 2013, 1 commit
  2. 28 Jun 2013, 1 commit
    • xen: Convert printks to pr_<level> · 283c0972
      Joe Perches authored
      Convert printks to pr_<level> (excludes printk(KERN_DEBUG...))
      to be more consistent throughout the xen subsystem.
      
      Add pr_fmt with KBUILD_MODNAME or "xen:" KBUILD_MODNAME
      Coalesce formats and add missing word spaces
      Add missing newlines
      Align arguments and reflow to 80 columns
      Remove DRV_NAME from formats as pr_fmt adds the same content
      
      This changes some of the prefixes of these messages, but it also
      makes them more consistent.
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
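
      A minimal sketch of the pr_fmt pattern this commit applies; the
      "demo" module name and messages are hypothetical, not taken from
      the patch. pr_fmt must be defined before the printk headers are
      included so that the pr_<level> macros pick it up:

      /* Hypothetical module showing the prefix pattern described above. */
      #define pr_fmt(fmt) "xen:" KBUILD_MODNAME ": " fmt

      #include <linux/init.h>
      #include <linux/module.h>
      #include <linux/printk.h>

      static int __init demo_init(void)
      {
          /* Emits "xen:demo: mapping 16 grant pages" at KERN_INFO. */
          pr_info("mapping %d grant pages\n", 16);
          /* printk(KERN_DEBUG ...) calls were deliberately left alone. */
          return 0;
      }

      static void __exit demo_exit(void)
      {
          pr_info("unloading\n");
      }

      module_init(demo_init);
      module_exit(demo_exit);
      MODULE_LICENSE("GPL");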
  3. 30 Jan 2013, 1 commit
    • x86: Don't panic if can not alloc buffer for swiotlb · ac2cbab2
      Yinghai Lu authored
      On the normal boot path of a system with IOMMU support, the
      swiotlb buffer is allocated early; the kernel then tries to
      initialize the IOMMU, and if the Intel or AMD IOMMU can be set up
      properly, the swiotlb buffer is freed.
      
      The early allocation is done with bootmem and can panic when we
      try to use kdump with the buffer above 4G only, or with memmap to
      limit memory under 4G, for example memmap=4095M$1M to remove the
      memory under 4G.
      
      Following Eric's suggestion, add a _nopanic version and a
      no_iotlb_memory flag so that map_single can fail later if swiotlb
      is still needed.
      
      -v2: don't pass nopanic, and use an -ENOMEM return value, per Eric;
           panic early instead of using swiotlb_full to panic, per Eric/Konrad.
      -v3: make swiotlb_init non-panicking; this affects
           arm64, ia64, powerpc, tile, unicore32 and x86.
      -v4: clean up swiotlb_init by removing swiotlb_init_with_default_size.
      Suggested-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Link: http://lkml.kernel.org/r/1359058816-7615-36-git-send-email-yinghai@kernel.org
      Reviewed-and-tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Kyungmin Park <kyungmin.park@samsung.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Andrzej Pietrasiewicz <andrzej.p@samsung.com>
      Cc: linux-mips@linux-mips.org
      Cc: xen-devel@lists.xensource.com
      Cc: virtualization@lists.linux-foundation.org
      Cc: Shuah Khan <shuahkhan@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
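
      A toy userspace model of the no_iotlb_memory idea above: record a
      failed early allocation instead of panicking, and let the later
      mapping path return -ENOMEM. The names mirror the commit, but the
      logic is illustrative, not the kernel implementation:

      #include <errno.h>
      #include <stdbool.h>
      #include <stdio.h>
      #include <stdlib.h>

      static bool no_iotlb_memory;
      static void *io_tlb_start;

      static void swiotlb_init_nopanic(size_t bytes)
      {
          io_tlb_start = malloc(bytes); /* stands in for the bootmem allocation */
          if (!io_tlb_start) {
              no_iotlb_memory = true;   /* remember the failure; don't panic */
              fprintf(stderr, "swiotlb: cannot allocate %zu bytes\n", bytes);
          }
      }

      static int swiotlb_map_single(void)
      {
          if (no_iotlb_memory)
              return -ENOMEM;           /* fail the mapping later, per -v2 */
          return 0;                     /* success path */
      }

      int main(void)
      {
          swiotlb_init_nopanic(64UL << 20); /* 64MB, the usual default size */
          printf("map_single: %d\n", swiotlb_map_single());
          return 0;
      }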
  4. 30 Oct 2012, 3 commits
    • swiotlb: Use physical addresses instead of virtual in swiotlb_tbl_sync_single · fbfda893
      Alexander Duyck authored
      This change makes it so that the sync functionality also uses physical
      addresses.  This helps to further reduce the use of virt_to_phys and
      phys_to_virt functions.
      
      In order to clarify things since we now have 2 physical addresses in use
      inside of swiotlb_tbl_sync_single I am renaming phys to orig_addr, and
      dma_addr to tlb_addr.  This way it should be clear that orig_addr is
      contained within io_orig_addr and tlb_addr is an address within the
      io_tlb_addr buffer.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
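
      To illustrate the orig_addr/tlb_addr naming and the idea of
      carrying physical addresses until the final copy, a toy model;
      the flat "memory" array and this phys_to_virt() are fabricated
      stand-ins, not kernel code:

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      typedef uint64_t phys_addr_t;   /* models the kernel's phys_addr_t */

      static char memory[4096];       /* toy "physical" address space */

      static void *phys_to_virt(phys_addr_t pa)
      {
          return memory + pa;         /* single translation, at the copy site */
      }

      /* DMA_FROM_DEVICE-style sync: copy the bounce buffer back to the
       * original buffer. Both arguments stay physical until this point. */
      static void sync_single(phys_addr_t orig_addr, phys_addr_t tlb_addr,
                              size_t size)
      {
          memcpy(phys_to_virt(orig_addr), phys_to_virt(tlb_addr), size);
      }

      int main(void)
      {
          phys_addr_t orig_addr = 0, tlb_addr = 2048;

          memcpy(phys_to_virt(tlb_addr), "device data", 12);
          sync_single(orig_addr, tlb_addr, 12);
          printf("%s\n", (char *)phys_to_virt(orig_addr));
          return 0;
      }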
    • swiotlb: Use physical addresses for swiotlb_tbl_unmap_single · 61ca08c3
      Alexander Duyck authored
      This change makes it so that the unmap functionality also uses physical
      addresses.  This helps to further reduce the use of virt_to_phys and
      phys_to_virt functions.
      
      In order to clarify things since we now have 2 physical addresses in use
      inside of swiotlb_tbl_unmap_single I am renaming phys to orig_addr, and
      dma_addr to tlb_addr.  This way it should be clear that orig_addr is
      contained within io_orig_addr and tlb_addr is an address within the
      io_tlb_addr buffer.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    • swiotlb: Return physical addresses when calling swiotlb_tbl_map_single · e05ed4d1
      Alexander Duyck authored
      This change makes it so that swiotlb_tbl_map_single will return a physical
      address instead of a virtual address when called.  The advantage to this once
      again is that we are avoiding a number of virt_to_phys and phys_to_virt
      translations by working with everything as a physical address.
      
      One change I had to make in order to support using physical addresses is that
      I could no longer trust 0 to be an invalid physical address on all platforms.
      So instead I made it so that ~0 is returned on error.  This should never be a
      valid return value as it implies that only one byte would be available for
      use.
      
      In order to clarify things since we now have 2 physical addresses in use
      inside of swiotlb_tbl_map_single I am renaming phys to orig_addr, and
      dma_addr to tlb_addr.  This way it should be clear that orig_addr is
      contained within io_orig_addr and tlb_addr is an address within the
      io_tlb_addr buffer.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
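
      A small sketch of the ~0 error sentinel: a mapping starting at the
      last representable physical byte could never be valid, so ~0 is a
      safe failure value. The MAP_ERROR name and the trivial logic here
      are illustrative assumptions, not the kernel's code:

      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      typedef uint64_t phys_addr_t;
      #define MAP_ERROR ((phys_addr_t)~0) /* hypothetical sentinel name */

      static phys_addr_t map_single(phys_addr_t orig_addr, size_t size)
      {
          if (size == 0)              /* stand-in for "no free slot found" */
              return MAP_ERROR;
          /* ... slot search and bookkeeping would go here ... */
          return orig_addr;           /* placeholder tlb_addr */
      }

      int main(void)
      {
          phys_addr_t tlb_addr = map_single(0x1000, 0);

          if (tlb_addr == MAP_ERROR)
              puts("mapping failed");
          return 0;
      }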
  5. 18 Sep 2012, 3 commits
  6. 17 Sep 2012, 2 commits
  7. 05 Sep 2012, 1 commit
  8. 23 Aug 2012, 1 commit
  9. 22 Aug 2012, 1 commit
  10. 28 Mar 2012, 1 commit
  11. 16 Dec 2011, 1 commit
    • xen/swiotlb: Use page alignment for early buffer allocation. · 63a74175
      Konrad Rzeszutek Wilk authored
      This fixes an odd bug found on a Dell PowerEdge 1850/0RC130
      (BIOS A05 01/09/2006) where all of the modules doing pci_set_dma_mask
      would fail with:
      
      ata_piix 0000:00:1f.1: enabling device (0005 -> 0007)
      ata_piix 0000:00:1f.1: can't derive routing for PCI INT A
      ata_piix 0000:00:1f.1: BMDMA: failed to set dma mask, falling back to PIO
      
      The issue was that the Xen-SWIOTLB was allocated such that the end
      of the buffer was straddling a page (and was also above 4GB). The
      fix, spotted by Leonid Kalev, was to piggyback on git commit
      e79f86b2 "swiotlb: Use page alignment for early buffer allocation",
      which says:
      
      	We could call free_bootmem_late() if swiotlb is not used, and
      	it will shrink to page alignment.

      	So alloc them with page alignment at first, to avoid losing two pages.
      
      And doing that fixes the outstanding issue.
      
      CC: stable@kernel.org
      Suggested-by: N"Kalev, Leonid" <Leonid.Kalev@ca.com>
      Reported-and-Tested-by: N"Taylor, Neal E" <Neal.Taylor@ca.com>
      Signed-off-by: NKonrad Rzeszutek Wilk <konrad.wilk@oracle.com>
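
      A userspace model of allocating the early buffer page-aligned so
      that the unused tail can later be released in whole pages; the
      sizes are examples, not the kernel's allocator:

      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>

      #define PAGE_SIZE 4096UL

      int main(void)
      {
          size_t bytes = 64UL << 20;  /* e.g. a 64MB swiotlb buffer */
          size_t aligned = (bytes + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
          void *buf = aligned_alloc(PAGE_SIZE, aligned); /* page-aligned start */

          if (!buf)
              return 1;
          /* Start and end both fall on page boundaries, so neither end
           * of the buffer can straddle a page. */
          printf("buffer %p..%p\n", buf, (void *)((uintptr_t)buf + aligned));
          free(buf);
          return 0;
      }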
  12. 06 Dec 2011, 1 commit
  13. 01 Nov 2011, 1 commit
  14. 27 Aug 2011, 4 commits
  15. 07 Jun 2011, 1 commit
    • swiotlb: Export swiotlb_nr_tbl and utilize it as appropriate. · 5f98ecdb
      FUJITA Tomonori authored
      By default io_tlb_nslabs is set to zero, and gets set to
      whatever value is passed in via the swiotlb_init_with_tbl function.
      The default value passed in is 64MB. However, if the user provides
      'swiotlb=<nslabs>', the default value is ignored and the value
      provided by the user is used... except when the SWIOTLB is used
      under Xen: there the default value of 64MB is used, and the
      Xen-SWIOTLB has no mechanism to get 'io_tlb_nslabs' filled in by
      the setup_io_tlb_npages function. This patch provides a function
      for the Xen-SWIOTLB to call to see if io_tlb_nslabs is set and,
      if so, to use that value.
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
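
      A sketch of the accessor pattern this patch adds: expose the slab
      count through a function so a second consumer (Xen-SWIOTLB) can
      honor a user-supplied swiotlb=<nslabs> instead of its own 64MB
      default. The values below are toys; in the kernel each slab is
      2KB (1 << IO_TLB_SHIFT):

      #include <stdio.h>

      static unsigned long io_tlb_nslabs; /* set when 'swiotlb=<n>' is parsed */

      static unsigned long swiotlb_nr_tbl(void)
      {
          return io_tlb_nslabs;           /* zero means "user set nothing" */
      }

      int main(void)
      {
          unsigned long nslabs = swiotlb_nr_tbl();

          if (!nslabs)                    /* fall back to the 64MB default */
              nslabs = (64UL << 20) >> 11;
          printf("using %lu slabs\n", nslabs);
          return 0;
      }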
  16. 10 Apr 2011, 1 commit
  17. 27 Jul 2010, 1 commit
    • swiotlb-xen: SWIOTLB library for Xen PV guest with PCI passthrough. · b097186f
      Konrad Rzeszutek Wilk authored
      This patchset:
      
      PV guests under Xen are running in a non-contiguous memory architecture.
      
      When PCI pass-through is utilized, this necessitates an IOMMU for
      translating bus (DMA) to virtual and vice-versa and also providing a
      mechanism to have contiguous pages for device driver operations (say,
      DMA operations).
      
      Specifically, under Xen the Linux idea of pages is an illusion. It
      assumes that pages start at zero and go up to the available memory. To
      help with that, the Linux Xen MMU provides a lookup mechanism to
      translate the page frame numbers (PFN) to machine frame numbers (MFN)
      and vice-versa. The MFNs are the "real" frame numbers. Furthermore,
      memory is not contiguous: the Xen hypervisor stitches memory for
      guests from different pools, which means there is no guarantee that
      PFN==MFN and PFN+1==MFN+1. Lastly, with Xen 4.0, pages (in debug
      mode) are allocated in descending order (high to low), meaning the
      guest might never get any MFNs under the 4GB mark.
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Albert Herranz <albert_herranz@yahoo.es>
      Cc: Ian Campbell <Ian.Campbell@citrix.com>
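
      A toy model of the PFN -> MFN lookup the message describes; the
      table below is fabricated to show descending, non-contiguous MFNs
      and is not taken from any real p2m:

      #include <stdio.h>

      static const unsigned long p2m[] = { 0x9fffd, 0x9fffb, 0x12345 };

      static unsigned long pfn_to_mfn(unsigned long pfn)
      {
          return p2m[pfn];    /* the Linux Xen MMU keeps such a lookup */
      }

      int main(void)
      {
          for (unsigned long pfn = 0; pfn < 3; pfn++)
              printf("PFN %lu -> MFN 0x%lx\n", pfn, pfn_to_mfn(pfn));
          /* Contiguous PFNs map to non-contiguous MFNs, so PFN==MFN and
           * PFN+1==MFN+1 cannot be assumed. */
          return 0;
      }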