- 03 October 2013, 1 commit
-
-
Committed by Nishanth Aravamudan
Under heavy (DLPAR?) stress, we tripped this panic() in arch/powerpc/kernel/iommu.c::iommu_init_table():

    page = alloc_pages_node(nid, GFP_ATOMIC, get_order(sz));
    if (!page)
        panic("iommu_init_table: Can't allocate %ld bytes\n", sz);

Before the panic() we got a page allocation failure for an order-2 allocation. There appears to be memory free, but perhaps not in the ATOMIC context. I looked through all the call-sites of iommu_init_table() and didn't see any obvious reason to need an ATOMIC allocation. Most call-sites in fact have an explicit GFP_KERNEL allocation shortly before the call to iommu_init_table(), indicating we are not in an atomic context. There is some indirection for some paths, but I didn't see any locks indicating that GFP_KERNEL is inappropriate.

With this change under the same conditions, we have not been able to reproduce the panic.

Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: <stable@vger.kernel.org>
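
A minimal sketch of the fix this describes, written as a diff against the allocation site quoted above (the exact upstream hunk may differ):

    -	page = alloc_pages_node(nid, GFP_ATOMIC, get_order(sz));
    +	page = alloc_pages_node(nid, GFP_KERNEL, get_order(sz));
     	if (!page)
     		panic("iommu_init_table: Can't allocate %ld bytes\n", sz);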
-
- 15 July 2013, 1 commit
-
-
Committed by Rusty Russell
Sweep of the simple cases.

Cc: netdev@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 20 June 2013, 1 commit
-
-
Committed by Alexey Kardashevskiy
This initializes IOMMU groups based on the IOMMU configuration discovered during the PCI scan on the POWERNV (POWER non-virtualized) platform. The IOMMU groups are to be used later by the VFIO driver, which is used for PCI passthrough.

It also implements an API for mapping/unmapping pages for guest PCI drivers and providing DMA window properties. This API is going to be used later by QEMU-VFIO to handle h_put_tce hypercalls from the KVM guest. iommu_put_tce_user_mode() only maps a single page; an API for adding many mappings at once will be added later.

Although this driver has been tested only on the POWERNV platform, it should work on any platform which supports TCE tables. As the h_put_tce hypercall is received by the host kernel and processed by QEMU (which involves calling the host kernel again), performance is not the best - circa 220MB/s on a 10Gb ethernet network.

To enable VFIO on POWER, enable the SPAPR_TCE_IOMMU config option and configure VFIO as required.

Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
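
Since iommu_put_tce_user_mode() maps one page per call, a caller wanting a multi-page range has to loop. A hedged sketch of such a caller - the prototype of iommu_put_tce_user_mode() used here is an assumption, not quoted from the text above:

    #include <asm/iommu.h>

    /* Hypothetical helper: map npages one TCE at a time, since no
     * batch API exists yet at this point in the history. */
    static long map_user_range(struct iommu_table *tbl, unsigned long entry,
    			   unsigned long tce, unsigned long npages)
    {
    	unsigned long i;
    	long ret;

    	for (i = 0; i < npages; i++) {
    		/* Assumed signature: one page mapping per call. */
    		ret = iommu_put_tce_user_mode(tbl, entry + i,
    					      tce + (i << IOMMU_PAGE_SHIFT));
    		if (ret)
    			return ret;	/* caller unwinds earlier mappings */
    	}
    	return 0;
    }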
-
- 18 April 2013, 1 commit
-
-
Committed by Adrian-Leonard Radu
Signed-off-by: Adrian-Leonard Radu <ady8radu@gmail.com>
Acked-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
-
- 10 January 2013, 1 commit
-
-
When a device DMA window includes the address 0, it's reserved in the TCE bitmap to avoid returning that address to drivers. When the device is removed, the bitmap is checked for any mappings not removed by the driver, indicating a possible DMA mapping leak. Since the reserved address is not cleared, a message is printed, warning of such a leak.

Check for the reservation, and clear it before checking for any other standing mappings.

Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
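
A sketch of the shape of this fix in the table-teardown path, assuming the reservation is bit 0 of tbl->it_map when the window starts at DMA address 0 (field names follow the iommu_table usage elsewhere in this log; the exact condition is an assumption):

    #include <linux/bitmap.h>
    #include <linux/bitops.h>
    #include <linux/printk.h>
    #include <asm/iommu.h>

    static void check_for_leaks(struct iommu_table *tbl)
    {
    	/* If DMA address 0 was reserved at init time, drop that
    	 * reservation first so it is not mistaken for a leak. */
    	if (tbl->it_offset == 0)
    		clear_bit(0, tbl->it_map);

    	/* Anything still set is a mapping the driver never freed. */
    	if (!bitmap_empty(tbl->it_map, tbl->it_size))
    		pr_warn("%s: Unexpected TCEs\n", __func__);
    }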
-
- 15 November 2012, 1 commit
-
-
Committed by Akinobu Mita
- Calculate the bitmap size with BITS_TO_LONGS()
- Use bitmap_empty() to verify that all bits are cleared

This also includes a printk to pr_warn() conversion.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
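
A sketch of the two idioms named above (illustrative only; the surrounding variable names are assumptions):

    #include <linux/bitmap.h>
    #include <linux/errno.h>
    #include <linux/printk.h>
    #include <linux/slab.h>

    static int example_bitmap_usage(unsigned long it_size)
    {
    	/* BITS_TO_LONGS() rounds up, so e.g. 100 bits still get a
    	 * whole unsigned long; multiply by sizeof() for bytes. */
    	size_t sz = BITS_TO_LONGS(it_size) * sizeof(unsigned long);
    	unsigned long *it_map = kzalloc(sz, GFP_KERNEL);

    	if (!it_map)
    		return -ENOMEM;

    	/* ... bits set/cleared as TCEs are mapped and unmapped ... */

    	/* bitmap_empty() replaces an open-coded "all zero?" scan. */
    	if (!bitmap_empty(it_map, it_size))
    		pr_warn("iommu: TCE table still has live entries\n");

    	kfree(it_map);
    	return 0;
    }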
-
- 04 October 2012, 1 commit
-
-
Committed by Anton Blanchard
There are a number of issues in the recent IOMMU pools code:

- On a preempt kernel we might switch CPUs in the middle of building a scatter gather list. When this happens the handle hint passed in no longer falls within the local CPU's pool. Check for this and fall back to the pool hint.

- We were missing a spin_unlock/spin_lock in one spot where we switch pools.

- We need to provide locking around dart_tlb_invalidate_all and dart_tlb_invalidate_one now that the global lock is gone.

Reported-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: <stable@kernel.org> [v3.6]
-
- 13 July 2012, 1 commit
-
-
Committed by Benjamin Herrenschmidt
The iommu pool patch has a bug where it would cause a crash when using only one pool (based on the size of the DMA window).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 10 July 2012, 1 commit
-
-
Committed by Anton Blanchard
Add the ability to inject IOMMU faults. We enable this per device via a fail_iommu sysfs property, similar to fault injection on other subsystems. An example:

    ...
    0003:01:00.1 Ethernet controller: Emulex Corporation OneConnect 10Gb NIC (be3) (rev 02)

To inject one error to this device:

    echo 1 > /sys/bus/pci/devices/0003:01:00.1/fail_iommu
    echo 1 > /sys/kernel/debug/fail_iommu/probability
    echo 1 > /sys/kernel/debug/fail_iommu/times

As feared, the first failure injected on the be3 results in an unrecoverable error, taking down both functions of the card permanently:

    be2net 0003:01:00.1: Unrecoverable error in the card

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 03 July 2012, 4 commits
-
-
Committed by Anton Blanchard
At the moment all queues in a multiqueue adapter will serialise against the IOMMU table lock. This is proving to be a big issue, especially with 10Gbit ethernet.

This patch creates 4 pools and tries to spread the load across them. If the table is under 1GB in size we revert back to the original behaviour of 1 pool and 1 largealloc pool.

We create a hash to map CPUs to pools. Since we prefer interrupts to be affinitised to primary CPUs, without some form of hashing we are very likely to end up using the same pool. As an example, POWER7 has 4 way SMT and with 4 pools all primary threads will map to the same pool.

The largealloc pool is reduced from 1/2 to 1/4 of the space to partially offset the overhead of breaking the table up into pools.

Some performance numbers were obtained with a Chelsio T3 adapter on two POWER7 boxes, running a 100 session TCP round robin test. Performance improved 69% with this patch applied.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
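
One plausible shape for the CPU-to-pool hash described above - on 4-way SMT, the primary threads are CPUs 0, 4, 8, ..., so a plain "cpu % 4" would put them all in pool 0; scrambling the CPU id avoids that (a sketch; the constants and the actual hash used upstream are assumptions):

    #include <linux/cpumask.h>
    #include <linux/hash.h>
    #include <linux/init.h>
    #include <linux/percpu.h>

    #define IOMMU_NR_POOLS		4
    #define IOMMU_POOL_HASHBITS	2	/* log2(IOMMU_NR_POOLS) */

    static DEFINE_PER_CPU(unsigned int, iommu_pool_hash);

    /* hash_32() spreads consecutive CPU ids across pools, so the
     * primary SMT threads (0, 4, 8, ...) do not all collide. */
    static int __init setup_iommu_pool_hash(void)
    {
    	unsigned int i;

    	for_each_possible_cpu(i)
    		per_cpu(iommu_pool_hash, i) = hash_32(i, IOMMU_POOL_HASHBITS);
    	return 0;
    }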
-
Committed by Anton Blanchard
In preparation for IOMMU pools, push the spinlock into iommu_range_alloc and __iommu_free.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
This patch moves tce_free outside of the lock in iommu_free.

Some performance numbers were obtained with a Chelsio T3 adapter on two POWER7 boxes, running a 100 session TCP round robin test. Performance improved 25% with this patch applied.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
We currently hold the IOMMU spinlock around tce_build and tce_flush. This causes our spinlock hold times to be much higher than required and can impact multiqueue adapters.

This patch moves tce_build and tce_flush outside of the lock in iommu_alloc, and tce_flush outside of the lock in iommu_free.

Some performance numbers were obtained with a Chelsio T3 adapter on two POWER7 boxes, running a 100 session TCP round robin test. Performance improved 32% with this patch applied.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
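
A before/after sketch of the lock-scope change this series builds toward - only the allocation bitmap needs the lock, while the slow hardware TCE writes and flush run outside it (structure and field names such as it_lock are assumptions; error handling elided):

    #include <linux/bitmap.h>
    #include <linux/dma-mapping.h>
    #include <linux/spinlock.h>
    #include <asm/iommu.h>
    #include <asm/machdep.h>

    static dma_addr_t iommu_alloc_sketch(struct iommu_table *tbl,
    				     void *vaddr, unsigned int npages)
    {
    	unsigned long entry, flags;

    	/* The lock now covers only the allocation bitmap... */
    	spin_lock_irqsave(&tbl->it_lock, flags);
    	entry = bitmap_find_next_zero_area(tbl->it_map, tbl->it_size,
    					   0, npages, 0);
    	if (entry < tbl->it_size)
    		bitmap_set(tbl->it_map, entry, npages);
    	spin_unlock_irqrestore(&tbl->it_lock, flags);

    	if (entry >= tbl->it_size)
    		return DMA_MAPPING_ERROR;	/* table full */

    	/* ...while the slow hardware-table update and flush happen
    	 * outside it, shrinking hold times for multiqueue adapters. */
    	ppc_md.tce_build(tbl, entry, npages, (unsigned long)vaddr,
    			 DMA_BIDIRECTIONAL, NULL);
    	if (ppc_md.tce_flush)
    		ppc_md.tce_flush(tbl);

    	return (dma_addr_t)entry << IOMMU_PAGE_SHIFT;
    }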
-
- 23 February 2012, 1 commit
-
-
Committed by Mahesh Salgaonkar
On 2012-02-20 11:02:51 Mon, Paul Mackerras wrote:
> On Thu, Feb 16, 2012 at 04:44:30PM +0530, Mahesh J Salgaonkar wrote:
>
> If I have read the code correctly, we are going to get this printk on
> non-pSeries machines or on older pSeries machines, even if the user
> has not put the fadump=on option on the kernel command line. The
> printk will be annoying since there is no actual error condition. It
> seems to me that the condition for the printk should include
> fw_dump.fadump_enabled. In other words you should probably add
>
>     if (!fw_dump.fadump_enabled)
>         return 0;
>
> at the beginning of the function.

Hi Paul,

Thanks for pointing it out. Please find the updated patch below. The existing patches above this (4/10 through 10/10) apply cleanly on top of this update.

Thanks,
-Mahesh.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 23 September 2011, 1 commit
-
-
Some devices have a dma-window that starts at the address 0. This allows DMA addresses to be mapped to this address and returned to drivers as a valid DMA address. Some drivers may not behave well in this case, since the address 0 is considered an error or a not-allocated address.

The solution to avoid this kind of error from happening is to reserve the page addressed as 0 so it cannot be allocated for a DMA mapping.

Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
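
The reservation can be as small as marking bit 0 used at table-init time when the window starts at DMA address 0. A sketch (field names taken from the iommu_table usage elsewhere in this log; the placement inside iommu_init_table() is an assumption):

    #include <linux/bitops.h>
    #include <asm/iommu.h>

    static void reserve_dma_addr_zero(struct iommu_table *tbl)
    {
    	/* If the DMA window begins at address 0, permanently mark the
    	 * first TCE entry as in use so the allocator can never hand a
    	 * driver a DMA address of 0. */
    	if (tbl->it_offset == 0)
    		set_bit(0, tbl->it_map);
    }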
-
- 09 December 2010, 1 commit
-
-
Committed by Anton Blanchard
Right now it's difficult to see which device is running out of iommu space:

    iommu_alloc failed, tbl c00000076e096660 vaddr c000000768806600 npages 1

Use dev_info() so we get the device name and location:

    ipr 0000:00:01.0: iommu_alloc failed, tbl c00000076e096660 vaddr c000000768806600 npages 1

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
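
The conversion is mechanical - dev_info() prefixes the message with the driver name and device location taken from the struct device. A sketch (the argument list mirrors the log line above; the exact call site is assumed):

    #include <linux/device.h>
    #include <linux/printk.h>

    static void report_alloc_failure(struct device *dev, void *tbl,
    				 void *vaddr, int npages)
    {
    	/* Before: anonymous - no clue which device ran out of space. */
    	printk(KERN_INFO "iommu_alloc failed, tbl %p vaddr %p npages %d\n",
    	       tbl, vaddr, npages);

    	/* After: dev_info() prepends driver name and location, e.g.
    	 * "ipr 0000:00:01.0:", straight from the struct device. */
    	dev_info(dev, "iommu_alloc failed, tbl %p vaddr %p npages %d\n",
    		 tbl, vaddr, npages);
    }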
-
- 21 May 2010, 1 commit
-
-
Committed by FUJITA Tomonori
The 'protect4gb' boot parameter was introduced in 2007 to avoid allocating dma space crossing a 4GB boundary (commit 56997559). In 2008, the IOMMU was fixed to use the boundary_mask parameter per device properly, so the 'protect4gb' workaround was removed (commit 383af952). But somehow I messed up the 'protect4gb' boot parameter that was used to enable the workaround.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 19 March 2010, 1 commit
-
-
Committed by FUJITA Tomonori
The description says:

    Cause IO segments sent to a device for DMA to be merged virtually
    by the IOMMU when they happen to have been allocated contiguously.
    This doesn't add pressure to the IOMMU allocator. However, some
    drivers don't support getting large merged segments coming back
    from *_map_sg(). Most drivers don't have this problem; it is safe
    to say Y here.

It's out of date. Long ago, drivers didn't have a way to tell IOMMUs about their segment length limit (that is, the maximum segment length that they can handle). So IOMMUs merged as many segments as possible and gave too-large segments to drivers.

dma_get_max_seg_size() was introduced to solve the above problem. Device drivers can use the API to tell the IOMMU about the maximum segment length that they can handle. In addition, the default limit (64K) should be safe for everyone. So this config option seems to be unnecessary.

Note that this config option just enables users to disable the virtual merging by default. Users can still disable the virtual merging via the boot parameter.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
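
The mechanism referred to: a driver publishes its limit with dma_set_max_seg_size(), and the IOMMU's sg-merging code reads it back with dma_get_max_seg_size() before deciding whether two segments may be fused. A sketch of both sides (illustrative; the merge check is simplified):

    #include <linux/dma-mapping.h>

    /* Driver side: declare that this device cannot take segments
     * larger than 64K (the default mentioned above). */
    static void driver_setup(struct device *dev)
    {
    	dma_set_max_seg_size(dev, 65536);
    }

    /* IOMMU side: consult the limit before virtually merging two
     * contiguous segments into one. */
    static bool may_merge(struct device *dev, unsigned int merged_len)
    {
    	return merged_len <= dma_get_max_seg_size(dev);
    }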
-
- 16 December 2009, 1 commit
-
-
Committed by Akinobu Mita
Use the bitmap library and kill some unused iommu helper functions.

1. s/iommu_area_free/bitmap_clear/
2. s/iommu_area_reserve/bitmap_set/
3. Use bitmap_find_next_zero_area instead of find_next_zero_area.
   This cannot be a simple substitution because find_next_zero_area
   doesn't check the last bit of the limit in the bitmap.
4. Remove iommu_area_free, iommu_area_reserve, and find_next_zero_area.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
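
The substitutions map one-to-one onto the generic bitmap API; a sketch of the replacement calls (offsets and sizes are placeholders):

    #include <linux/bitmap.h>

    static void bitmap_api_examples(unsigned long *map, unsigned long size)
    {
    	unsigned long pos;

    	/* 1: was iommu_area_free(map, 16, 4) */
    	bitmap_clear(map, 16, 4);

    	/* 2: was iommu_area_reserve(map, 16, 4) */
    	bitmap_set(map, 16, 4);

    	/* 3: was find_next_zero_area(); unlike its predecessor this
    	 * checks the final bit against the limit. */
    	pos = bitmap_find_next_zero_area(map, size, 0 /* start */,
    					 4 /* nr */, 0 /* align_mask */);
    	if (pos < size)
    		bitmap_set(map, pos, 4);
    }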
-
- 13 January 2009, 1 commit
-
-
Committed by Ingo Molnar
Convert arch/powerpc/ over to long long based u64:

    -#ifdef __powerpc64__
    -# include <asm-generic/int-l64.h>
    -#else
    -# include <asm-generic/int-ll64.h>
    -#endif
    +#include <asm-generic/int-ll64.h>

This will avoid recurring spurious warnings in core kernel code that come when people test on their own hardware (i.e. x86 in ~98% of the cases). This is what x86 uses and it generally helps keep 64-bit code 32-bit clean too.

[Adjusted to not impact user mode (from paulus) - sfr]

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
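
The class of warning this kills: with u64 typed as unsigned long (the old 64-bit powerpc definition), a printk format written for the long-long-based u64 of other architectures triggers a format mismatch. A sketch, assuming a generic driver shared across architectures:

    #include <linux/printk.h>
    #include <linux/types.h>

    static void print_counter(u64 bytes)
    {
    	/* With int-ll64.h, u64 is unsigned long long everywhere, so
    	 * "%llu" matches with no cast - the same line previously
    	 * warned on powerpc64, where u64 was unsigned long. */
    	pr_info("transferred %llu bytes\n", bytes);
    }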
-
- 31 October 2008, 2 commits
-
-
Committed by Mark Nelson
After the merge of the 32 and 64bit DMA code, dma_direct_ops lost their map/unmap_single() functions but gained map/unmap_page(). This caused a problem for Cell because Cell's dma_iommu_fixed_ops called the dma_direct_ops if the fixed linear mapping was to be used or the iommu ops if the dynamic window was to be used.

So in order to fix this problem we need to update the 64bit DMA code to use map/unmap_page. First, we update the generic IOMMU code so that iommu_map_single() becomes iommu_map_page() and iommu_unmap_single() becomes iommu_unmap_page(). Then we propagate these changes up through all the callers of these two functions and in the process update all the dma_mapping_ops so that they have map/unmap_page rather than map/unmap_single. We can do this because on 64bit there is no HIGHMEM memory so map/unmap_page ends up performing exactly the same function as map/unmap_single, just taking different arguments.

This has no effect on drivers because dma_map_single_attrs() just ends up calling the map_page() function of the appropriate dma_mapping_ops, and similarly dma_unmap_single_attrs() calls unmap_page().

This fixes an oops on Cell blades, which oops on boot without this because they call dma_direct_ops.map_single, which is NULL.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
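
The equivalence the text relies on: with no HIGHMEM on 64-bit, any kernel virtual address decomposes into a page plus an in-page offset, so a single-buffer mapping can always be expressed as a page mapping. A simplified sketch of the forwarding (the ops-struct shape and attrs argument here are assumptions, not the literal wrapper of that era):

    #include <linux/dma-mapping.h>
    #include <linux/mm.h>

    static dma_addr_t map_single_via_map_page(struct device *dev,
    					  const struct dma_map_ops *ops,
    					  void *ptr, size_t size,
    					  enum dma_data_direction dir)
    {
    	/* No HIGHMEM on 64-bit: the (page, offset) pair carries
    	 * exactly the same information as the virtual address. */
    	struct page *page = virt_to_page(ptr);
    	unsigned long offset = offset_in_page(ptr);

    	return ops->map_page(dev, page, offset, size, dir, 0);
    }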
-
Committed by Milton Miller
linux/crash_dump.h defines is_kdump_kernel() to be used by code that needs to know if the previous kernel crashed instead of a (clean) boot or reboot. This updates the just-added powerpc code to use it. This is needed for the next commit, which will remove __kdump_flag.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
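
The shape of the conversion, sketched - an open-coded flag test becomes the shared predicate from linux/crash_dump.h (the flag name comes from the surrounding entries; the call site is illustrative):

    #include <linux/crash_dump.h>

    static void iommu_table_setup_example(void)
    {
    	/* Was: if (__kdump_flag) ...
    	 * Now: the generic helper, true only in a kernel booted to
    	 * capture a crash dump of its predecessor. */
    	if (is_kdump_kernel()) {
    		/* e.g. recycle TCE entries left by the crashed kernel */
    	}
    }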
-
- 22 October 2008, 1 commit
-
-
Committed by Mohan Kumar M
This adds relocatable kernel support for kdump. With this one can use the same regular kernel to capture the kdump.

A signature (0xfeed1234) is passed in r6 from panic code to the next kernel through kexec_sequence and purgatory code. The signature is used to differentiate between kdump kernels and non-kdump kernels. The purgatory code compares the signature and sets the __kdump_flag in head_64.S. During boot up, kernel code checks __kdump_flag and, if it is set, the kernel will behave as a relocatable kdump kernel. This kernel will boot at the address where it was loaded by kexec-tools, i.e. at the address reserved through the crashkernel boot parameter.

CONFIG_CRASH_DUMP depends on the CONFIG_RELOCATABLE option to build the kdump kernel as relocatable, so the same kernel can be used as production and kdump kernel.

This patch incorporates the changes suggested by Paul Mackerras to avoid GOT use and to avoid two copies of the code.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 17 October 2008, 2 commits
-
-
Committed by Joerg Roedel
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Joerg Roedel
This is a preparation patch for introducing a generic iommu_num_pages function.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 25 July 2008, 1 commit
-
-
Committed by Robert Jennings
To support Cooperative Memory Overcommitment (CMO), we need to check for failure from some of the tce hcalls. These changes for the pseries platform affect the powerpc architecture; patches for the other affected platforms are included in this patch.

pSeries platform IOMMU code changes:
* platform TCE functions must handle H_NOT_ENOUGH_RESOURCES errors and return an error.

Architecture IOMMU code changes:
* Calls to ppc_md.tce_build need to check return values and return DMA_MAPPING_ERROR for transient errors.

Architecture changes:
* struct machdep_calls for tce_build*_pSeriesLP functions need to change to indicate failure.
* all other platforms will need updates to iommu functions to match the new calling semantics; they will return 0 on success. The other platforms' default configs have been built, but no further testing was performed.

Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
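
A sketch of the new calling convention on the architecture side - tce_build() now returns a status, and a transient hypervisor failure is surfaced to the DMA API as a mapping error (the error constant and the unwinding step are taken from the description above; details are assumptions):

    #include <linux/dma-mapping.h>
    #include <asm/iommu.h>
    #include <asm/machdep.h>

    static dma_addr_t build_tces_checked(struct iommu_table *tbl,
    				     unsigned long entry,
    				     unsigned long npages,
    				     unsigned long uaddr,
    				     enum dma_data_direction dir)
    {
    	/* tce_build() used to return void; under CMO the hypervisor
    	 * can fail H_PUT_TCE with H_NOT_ENOUGH_RESOURCES, so 0 now
    	 * means success and non-zero a (possibly transient) failure. */
    	if (ppc_md.tce_build(tbl, entry, npages, uaddr, dir, NULL)) {
    		/* ... free the just-reserved bitmap range here ... */
    		return DMA_MAPPING_ERROR;
    	}
    	return (dma_addr_t)entry << IOMMU_PAGE_SHIFT;
    }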
-
- 22 July 2008, 1 commit
-
-
Committed by Mark Nelson
Update iommu_alloc() to take the struct dma_attrs and pass them on to tce_build(). This change propagates down to the tce_build functions of all the platforms.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 09 July 2008, 2 commits
-
-
Committed by Mark Nelson
Update powerpc to use the new dma_*map*_attrs() interfaces. In doing so update struct dma_mapping_ops to accept a struct dma_attrs and propagate these changes through to all users of the code (generic IOMMU and the 64bit DMA code, and the iseries and ps3 platform code).

The old dma_*map_*() interfaces are reimplemented as calls to the corresponding new interfaces.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Mark Nelson
Make iommu_map_sg take a struct iommu_table. It did so before commit 740c3ce6 (iommu sg merging: ppc: make iommu respect the segment size limits).

This stops the function looking in archdata.dma_data for the iommu table, because in the future it will be called with a device that has no table there. This also has the nice side effect of making iommu_map_sg() match the other map functions.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 01 April 2008, 1 commit
-
-
Committed by Harvey Harrison
__FUNCTION__ is gcc-specific; use __func__.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
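
For illustration, the one-line nature of this class of change (__func__ is a standard C99 predefined identifier; __FUNCTION__ is a gcc extension):

    #include <linux/printk.h>

    static void example(void)
    {
    	/* Before (gcc-only):
    	 *   printk("%s: table is full\n", __FUNCTION__); */

    	/* After (standard C99): */
    	printk("%s: table is full\n", __func__);
    }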
-
- 06 February 2008, 3 commits
-
-
Committed by FUJITA Tomonori
Previously, during initialization of the IOMMU tables, the last entry at each 4GB boundary was marked as used, since there are many adapters which cannot handle DMAing across any 4GB boundary.

The IOMMU doesn't allocate a memory area spanning an LLD's segment boundary anymore. The segment boundary of devices is set to 4GB by default. So we can remove the 4GB boundary protection now.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by FUJITA Tomonori
This patch converts PPC's IOMMU to use the IOMMU helper functions. The IOMMU doesn't allocate a memory area spanning an LLD's segment boundary anymore.

iseries_hv_alloc and iseries_hv_map don't have a proper device struct, so a 4GB boundary is used for them.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by FUJITA Tomonori
This patch makes the iommu respect segment size limits when merging sg lists.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Acked-by: Jens Axboe <jens.axboe@oracle.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
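
The merge-time check this implies, sketched - two DMA-contiguous entries may only be fused while the combined length stays within the device's advertised limit (condition simplified from the real sg-walk):

    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>

    /* Decide whether the next sg entry can be virtually merged into
     * the current output segment without exceeding the device limit. */
    static bool can_merge_sg(struct device *dev, struct scatterlist *out,
    			 struct scatterlist *next, dma_addr_t next_addr)
    {
    	unsigned int max_seg = dma_get_max_seg_size(dev);

    	return sg_dma_address(out) + sg_dma_len(out) == next_addr &&
    	       sg_dma_len(out) + next->length <= max_seg;
    }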
-
- 15 January 2008, 1 commit
-
-
Committed by Benjamin Herrenschmidt
Commit 5d2efba6 changed our iommu code so that it always uses an iommu page size of 4kB. That means that with our current code, drivers may do a dma_map_sg() of a 64kB page and obtain a dma_addr_t that is only 4k aligned.

This works fine in most cases except for some infiniband HW it seems, where they tell the HW about the page size and it ignores the low bits of the DMA address.

This works around it by making our IOMMU code enforce a PAGE_SIZE alignment for mappings of objects that are page aligned in the first place and whose size is larger than or equal to a page.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
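
The alignment rule, sketched as the extra-alignment computation handed to the allocator (structure assumed; the constants follow the 4kB-IOMMU-page / 64kB-kernel-page split described above):

    #include <linux/types.h>
    #include <asm/iommu.h>
    #include <asm/page.h>

    /* Extra alignment, in IOMMU-page orders, to request from the
     * allocator: a page-aligned object of at least PAGE_SIZE keeps
     * PAGE_SIZE alignment in DMA space, so a 64kB page maps
     * 64kB-aligned even though the IOMMU works in 4kB units. */
    static unsigned int dma_align_order(void *vaddr, size_t size)
    {
    	if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && size >= PAGE_SIZE &&
    	    ((unsigned long)vaddr & ~PAGE_MASK) == 0)
    		return PAGE_SHIFT - IOMMU_PAGE_SHIFT;

    	return 0;
    }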
-
- 11 December 2007, 1 commit
-
-
Committed by Stephen Rothwell
It only needs the iommu_table address. It also makes use of the node name to print error messages. So just pass it the things it needs. This reduces the places that know about the pci_dn by one.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 23 October 2007, 1 commit
-
-
Committed by Jens Axboe
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 16 October 2007, 1 commit
-
-
Committed by Jens Axboe
This updates the ppc iommu/pci dma mappers to sg chaining. Includes further fixes from FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 17 August 2007, 1 commit
-
-
Committed by Jesper Juhl
This removes several duplicate includes from arch/powerpc/.

Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 26 April 2007, 1 commit
-
-
Committed by Paul Mackerras
This reverts commit 618d3adc, because it is superseded by 56997559.
-
- 13 April 2007, 1 commit
-
-
Committed by Jake Moilanen
There are many adapters which cannot handle DMAing across any 4GB boundary - for instance, the latest Emulex adapters. This normally is not an issue, as firmware gives dma-windows under 4GB. However, some of the new System-P boxes have dma-windows above 4GB, and this presents a problem.

During initialization of the IOMMU tables, the last entry at each 4GB boundary is marked as used. Thus no mappings can cross the boundary. If a table ends at a 4GB boundary, the entry is not marked as used.

A boot option to remove this 4GB protection is given with protect4gb=off. This exposes the potential issue for driver and hardware development purposes.

Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
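
A sketch of the boundary-marking loop during table init (index arithmetic illustrative; it assumes 4kB IOMMU pages and a window starting at a 4GB-aligned DMA address):

    #include <linux/bitops.h>
    #include <asm/iommu.h>

    /* With 4kB IOMMU pages, 4GB of DMA space is 1 << (32 - 12) entries. */
    #define ENTRIES_PER_4G	(1UL << (32 - IOMMU_PAGE_SHIFT))

    /* Mark the last TCE entry before each 4GB line as used, so no
     * allocation can straddle the line. The "< it_size - 1" bound
     * skips the marker when the table ends exactly on a boundary,
     * matching the behaviour described above. */
    static void protect_4gb_boundaries(struct iommu_table *tbl)
    {
    	unsigned long entry;

    	for (entry = ENTRIES_PER_4G - 1; entry < tbl->it_size - 1;
    	     entry += ENTRIES_PER_4G)
    		set_bit(entry, tbl->it_map);
    }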
-