- 28 Sep 2014, 1 commit
-
-
Submitted by Dan Williams
Per commit "77873803 net_dma: mark broken", net_dma is no longer used and there is no plan to fix it. This is the mechanical removal of bits in CONFIG_NET_DMA ifdef guards. Reverting the remainder of the net_dma induced changes is deferred to subsequent patches. Marked for stable due to Roman's report of a memory leak in dma_pin_iovec_pages(): https://lkml.org/lkml/2014/9/3/177 Cc: Dave Jiang <dave.jiang@intel.com> Cc: Vinod Koul <vinod.koul@intel.com> Cc: David Whipple <whipple@securedatainnovations.ch> Cc: Alexander Duyck <alexander.h.duyck@intel.com> Cc: <stable@vger.kernel.org> Reported-by: Roman Gushchin <klamm@yandex-team.ru> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 11 Apr 2014, 1 commit
-
-
Submitted by Rashika
Mark the functions ioat3_prep_xor_val(), ioat3_prep_pq_val() and ioat3_prep_pqxor_val() as static in dma_v3.c because they are not used outside this file. This eliminates the following warnings in dma_v3.c:
    drivers/dma/ioat/dma_v3.c:741:1: warning: no previous prototype for ‘ioat3_prep_xor_val’ [-Wmissing-prototypes]
    drivers/dma/ioat/dma_v3.c:1092:1: warning: no previous prototype for ‘ioat3_prep_pq_val’ [-Wmissing-prototypes]
    drivers/dma/ioat/dma_v3.c:1134:1: warning: no previous prototype for ‘ioat3_prep_pqxor_val’ [-Wmissing-prototypes]
Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org> Acked-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
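A minimal sketch of the resulting declarations; the prototype below follows the dmaengine ->device_prep_dma_xor_val signature and is shown from memory, the bodies are unchanged:

    /* before: implicitly external, tripping -Wmissing-prototypes */
    struct dma_async_tx_descriptor *
    ioat3_prep_xor_val(struct dma_chan *chan, dma_addr_t *src,
                       unsigned int src_cnt, size_t len,
                       enum sum_check_flags *result, unsigned long flags);

    /* after: internal linkage, reachable only via the dma_device ops table */
    static struct dma_async_tx_descriptor *
    ioat3_prep_xor_val(struct dma_chan *chan, dma_addr_t *src,
                       unsigned int src_cnt, size_t len,
                       enum sum_check_flags *result, unsigned long flags);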
-
- 26 Feb 2014, 1 commit
-
-
Submitted by Dan Williams
Since commit 77873803 "net_dma: mark broken" we no longer pin dma engines active for the network-receive-offload use case. As a result the ->free_chan_resources() that occurs after the driver self test no longer has a NET_DMA induced ->alloc_chan_resources() to back it up. A late firing irq can lead to ksoftirqd spinning indefinitely due to the tasklet_disable() performed by ->free_chan_resources(). Only ->alloc_chan_resources() can clear this condition in affected kernels. This problem has been present since commit 3e037454 "I/OAT: Add support for MSI and MSI-X" in 2.6.24, but is only now exposed. Given the NET_DMA use case is deprecated we can revisit moving the driver to use threaded irqs. For now, just tear down the irq and tasklet properly (sketched below):
1/ Disable the irq from triggering the tasklet
2/ Disable the irq from re-arming
3/ Flush inflight interrupts
4/ Flush the timer
5/ Flush inflight tasklets
References: https://lkml.org/lkml/2014/1/27/282 https://lkml.org/lkml/2014/2/19/672 Cc: Ingo Molnar <mingo@elte.hu> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: <stable@vger.kernel.org> Reported-by: Mike Galbraith <bitbucket@online.de> Reported-by: Stanislav Fomichev <stfomichev@yandex-team.ru> Tested-by: Mike Galbraith <bitbucket@online.de> Tested-by: Stanislav Fomichev <stfomichev@yandex-team.ru> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
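A hedged sketch of that tear-down order; the channel field names are the driver's, but the register write and helper name are illustrative rather than the literal patch:

    /* sketch: quiesce a channel's irq/timer/tasklet before freeing resources */
    static void ioat_quiesce_irq_sketch(struct ioat_chan_common *chan, int irq)
    {
            /* 1+2: stop the hardware from raising/re-arming the interrupt
             * that schedules the cleanup tasklet (register value illustrative) */
            writew(0, chan->reg_base + IOAT_CHANCTRL_OFFSET);

            /* 3: wait for any in-flight interrupt handler to finish */
            synchronize_irq(irq);

            /* 4: make sure the watchdog timer cannot re-arm the tasklet */
            del_timer_sync(&chan->timer);

            /* 5: wait for, and forbid rescheduling of, the cleanup tasklet */
            tasklet_kill(&chan->cleanup_task);
    }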
-
- 15 Nov 2013, 8 commits
-
-
Submitted by Dan Williams
The implementation of ioat3_irq_reinit has two bugs: 1/ the mode is incorrectly set to MSI-X for the MSI case; 2/ the 'dev_id' parameter to free_irq is the ioatdma_device, not the channel, in the msi and intx cases. Also include a small cleanup to clarify that ioat3_irq_reinit is only for BWD hardware. Cc: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Dan Williams
Once we have determined that we will not have all of our desired MSI-X vectors, there is no point in attempting a single MSI-X allocation. The driver will already need to read registers to determine the source of the interrupt, so the fact that it is MSI-X is moot. Fall back directly to MSI. Reported-by: Alexander Gordeev <agordeev@redhat.com> Cc: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
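Roughly, with the era-appropriate pci_enable_msix() API; this is a sketch of the simplified fallback, with driver field names as recalled and error handling trimmed:

    static int ioat_enable_irq_sketch(struct ioatdma_device *device, int msixcnt)
    {
            struct pci_dev *pdev = device->pdev;
            int i, err;

            for (i = 0; i < msixcnt; i++)
                    device->msix_entries[i].entry = i;

            err = pci_enable_msix(pdev, device->msix_entries, msixcnt);
            if (!err)
                    return 0;       /* got all desired vectors */

            /* no point in a single-vector MSI-X retry: go straight to MSI */
            err = pci_enable_msi(pdev);
            if (!err)
                    return 0;

            return -ENOSPC;         /* caller falls back to legacy INTx */
    }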
-
Submitted by Dan Williams
Use a single cache for all sed allocations; there is no need to make it per channel. This also avoids the slub_debug warnings for multiple caches with the same name. Switching to dmam_pool_create() fixes leaking the dma pools on initialization failure and lets us kill ioat3_dma_remove(). Cc: Dave Jiang <dave.jiang@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
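Sketched from the probe path; the global cache name and the MAX_SED_POOLS/SED_SIZE constants are the driver's as recalled, so treat this as illustrative:

    /* one global kmem cache for the software sed bookkeeping structs */
    ioat3_sed_cache = KMEM_CACHE(ioat_sed_ent, 0);

    /* managed dma pools for the hardware descriptors: released automatically
     * on probe failure or device removal, so ioat3_dma_remove() can go away */
    for (i = 0; i < MAX_SED_POOLS; i++) {
            snprintf(pool_name, sizeof(pool_name), "ioat_hw%d_sed", i);
            device->sed_hw_pool[i] = dmam_pool_create(pool_name, &pdev->dev,
                                                      SED_SIZE * (i + 1), 64, 0);
            if (!device->sed_hw_pool[i])
                    return -ENOMEM;
    }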
-
Submitted by Dan Williams
When performing continuations there are implied sources that need to be added to the source count. Quoting dma_set_maxpq:
    /* dma_maxpq - reduce maxpq in the face of continued operations
     * @dma - dma device with PQ capability
     * @flags - to check if DMA_PREP_CONTINUE and DMA_PREP_PQ_DISABLE_P are set
     *
     * When an engine does not support native continuation we need 3 extra
     * source slots to reuse P and Q with the following coefficients:
     * 1/ {00} * P : remove P from Q', but use it as a source for P'
     * 2/ {01} * Q : use Q to continue Q' calculation
     * 3/ {00} * Q : subtract Q from P' to cancel (2)
     *
     * In the case where P is disabled we only need 1 extra source:
     * 1/ {01} * Q : use Q to continue Q' calculation
     */
...fix the selection of the 16 source path to take these implied sources into account. Note this also kills the BUG_ON(src_cnt < 9) check in __ioat3_prep_pq16_lock(). Besides not accounting for implied sources, the check is redundant given we already made the path selection. Cc: <stable@vger.kernel.org> Cc: Dave Jiang <dave.jiang@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
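One way to read the fix, as a sketch with a hypothetical helper name (the DMA_PREP_* flags are the real dmaengine ones):

    /* effective source count once the implied P/Q re-use sources from the
     * dma_maxpq() comment are added back in; used to pick the 8- vs 16-source path */
    static unsigned int pq_src_cnt_sketch(unsigned int src_cnt, unsigned long flags)
    {
            if (!(flags & DMA_PREP_CONTINUE))
                    return src_cnt;
            if (flags & DMA_PREP_PQ_DISABLE_P)
                    return src_cnt + 1;     /* only the {01}*Q continuation source */
            return src_cnt + 3;             /* {00}*P, {01}*Q and {00}*Q */
    }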
-
Submitted by Dan Williams
The array used to look up the sed pool based on the number of sources (pq16_idx_to_sedi) has 16 entries and expects a max source index. However, we pass the total source count, which runs off the end of the array when src_cnt == 16. The minimal fix is to just pass src_cnt-1, but given we know the source count is > 8 we can simply calculate the sed pool as (src_cnt - 2) >> 3. Cc: Dave Jiang <dave.jiang@intel.com> Cc: <stable@vger.kernel.org> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
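The resulting allocation, sketched; ioat3_alloc_sed() takes a pool index, and the surrounding error handling is abbreviated:

    /* the 16-source path is only taken when src_cnt > 8, so the pool index
     * can be computed directly instead of via the 16-entry lookup table */
    desc->sed = ioat3_alloc_sed(device, (src_cnt - 2) >> 3);
    if (!desc->sed)
            return NULL;    /* no free sed entries */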
-
Submitted by Dave Jiang
Commit 48a9db46 (3.11) removed the memset op in the xor selftest for ioatdma. The issue is that the removed op was never replaced with a CPU memset, so the memory being operated on, which is expected to be zeroed, was not. This is causing the xor selftest to fail. Cc: <stable@vger.kernel.org> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
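The fix amounts to a one-liner in the self test before the xor validation; a sketch of the idea:

    /* the validation step assumes the destination page starts out zeroed;
     * with the DMA memset op gone, zero it with the CPU instead */
    memset(page_address(dest), 0, PAGE_SIZE);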
-
Remove no longer needed DMA unmap flags:
- DMA_COMPL_SKIP_SRC_UNMAP
- DMA_COMPL_SKIP_DEST_UNMAP
- DMA_COMPL_SRC_UNMAP_SINGLE
- DMA_COMPL_DEST_UNMAP_SINGLE
Cc: Vinod Koul <vinod.koul@intel.com> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Jon Mason <jon.mason@intel.com> Acked-by: Mark Brown <broonie@linaro.org> [djbw: clean up straggling skip unmap flags in ntb] Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Remove support for DMA unmapping from drivers as it is no longer needed (the DMA core code now handles it). Cc: Vinod Koul <vinod.koul@intel.com> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> [djbw: fix up chan2parent() unused warning in drivers/dma/dw/core.c] Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 14 Nov 2013, 1 commit
-
-
Submitted by Dan Williams
Add a hook for a common dma unmap implementation to enable removal of the per-driver custom unmap code. (A reworked version of Bartlomiej Zolnierkiewicz's patches to remove the custom callbacks and the size increase of dma_async_tx_descriptor for drivers that don't care about raid). Cc: Vinod Koul <vinod.koul@intel.com> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Dave Jiang <dave.jiang@intel.com> [bzolnier: prepare pl330 driver for adding missing unmap while at it] Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
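A hedged sketch of what a client of the new hook looks like; the dmaengine_unmap_* calls are the API this series introduces, while the page/length variables are placeholders and error handling is omitted:

    struct dma_device *dma = chan->device;
    struct device *dev = dma->dev;
    struct dmaengine_unmap_data *unmap;
    struct dma_async_tx_descriptor *tx;

    unmap = dmaengine_get_unmap_data(dev, 2, GFP_NOWAIT);
    unmap->addr[0] = dma_map_page(dev, src_page, 0, len, DMA_TO_DEVICE);
    unmap->to_cnt = 1;
    unmap->addr[1] = dma_map_page(dev, dst_page, 0, len, DMA_FROM_DEVICE);
    unmap->from_cnt = 1;
    unmap->len = len;

    tx = dma->device_prep_dma_memcpy(chan, unmap->addr[1], unmap->addr[0],
                                     len, DMA_PREP_INTERRUPT);
    dma_set_unmap(tx, unmap);       /* the descriptor now holds a reference */
    dmaengine_unmap_put(unmap);     /* drop ours; the core unmaps at completion */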
-
- 25 Oct 2013, 1 commit
-
-
Submitted by Vinod Koul
Acked-by: Dan Williams <dan.j.williams@intel.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 23 Aug 2013, 2 commits
-
-
Submitted by Paul Bolle
Building dma_v3.o triggers a GCC warning:
    drivers/dma/ioat/dma_v3.c: In function ‘__ioat3_prep_pq16_lock’:
    drivers/dma/ioat/dma_v3.c:264:11: warning: array subscript is below array bounds [-Warray-bounds]
    drivers/dma/ioat/dma_v3.c:264:11: warning: array subscript is below array bounds [-Warray-bounds]
This warning is caused by pq16_set_src(), which uses "int idx" as an index into an eight element array. Changing "idx" to "unsigned" silences this warning; apparently GCC can then determine that "idx" will never be negative. Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
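The change, sketched; the parameter list is as recalled from dma_v3.c and only the index type matters here:

    static void pq16_set_src(struct ioat_raw_descriptor *desc[3],
                             dma_addr_t addr, u32 offset, u8 coef,
                             unsigned idx)   /* was: int idx */
    {
            /* body unchanged: the eight-entry lookup arrays used here can
             * now be proven in-bounds by the compiler */
    }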
-
Submitted by Brice Goglin
Disable RAID on non-Atom platforms and remove related fixups such as the 64-byte alignment restriction on legacy DMA operations (introduced in commit f26df1a1 as a workaround for silicon errata). Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr> Acked-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Jon Mason <jon.mason@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
-
- 04 Jul 2013, 1 commit
-
-
There have never been any real users of MEMSET operations since they were introduced in January 2007 by commit 7405f74b ("dmaengine: refactor dmaengine around dma_async_tx_descriptor"). Therefore remove support for them for now; it can always be brought back when needed. [sebastian.hesselbarth@gmail.com: fix drivers/dma/mv_xor] Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Cc: Vinod Koul <vinod.koul@intel.com> Acked-by: Dan Williams <djbw@fb.com> Cc: Tomasz Figa <t.figa@samsung.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Olof Johansson <olof@lixom.net> Cc: Kevin Hilman <khilman@linaro.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 16 Apr 2013, 5 commits
-
-
Submitted by Fengguang Wu
Reported-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Dave Jiang
v3.3 provides support for write-back descriptor error status. This allows reporting of errors in a descriptor field. In supporting this, certain errors such as P/Q validation errors no longer halt the channel. The DMA engine can continue to execute until the end of the chain and allow software to report the "errors" up the stack. We also mask those error interrupts and handle them when the "chain" has completed at the end. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Dave Jiang
This workaround checks for channels 2 & 3 and removes the RAID capability. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Dave Jiang
v3.3 introduced 16-source PQ operations, along with super extended descriptors (SEDs) to support them. This patch adds support for the 16-source ops and in turn adds the super extended descriptors for those ops. 5 SED pools are created depending on the descriptor sizes. An SED can be a 64-byte descriptor or larger and must be physically contiguous. A kmem cache pool is created for allocating the software descriptor that manages the hardware descriptor. The super extended descriptor takes the place of the extended descriptor under certain operations and is "attached" to the op descriptor during operation. This is a new feature for ioatdma v3.3. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Dave Jiang
CB3.2 and earlier hardware has silicon bugs whose workarounds are no longer needed on the new hardware; for example, we no longer have to use a NULL op to signal an interrupt for RAID ops. This code makes sure the legacy workarounds only happen on legacy hardware. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 15 Apr 2013, 7 commits
-
-
Submitted by Dave Jiang
The alignment workaround is only necessary for cb3.2 or earlier platforms. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Dave Jiang
The PQ Val ops work on the newer hardware, so we should actually provide support for them and remove the disabling bits. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Dave Jiang
Make it so that only 3.2 and earlier platforms need the PCI config register clearing, since the newer implementation does not have those registers. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Dave Jiang
The Intel Atom S1200 family ioatdma changed the channel reset behavior: it does a reset similar to PCI FLR by resetting all the MSI-X registers. We have to re-init MSI-X interrupts because of this. This workaround is specific to this platform and is not expected to carry over to later generations. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Dave Jiang
Add Haswell PCI device IDs for ioatdma and simplify the detection of certain Xeon CPUs that have alignment bugs, so that future modifications can be made in a single place. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
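The consolidation is, roughly, one predicate used by every workaround site; a sketch, where the per-generation helpers (is_jf_ioat() and friends) test the PCI device IDs:

    static inline bool is_xeon_cb32(struct pci_dev *pdev)
    {
            /* Xeon parts carrying the legacy-op alignment erratum */
            return is_jf_ioat(pdev) || is_snb_ioat(pdev) ||
                   is_ivb_ioat(pdev) || is_hsw_ioat(pdev);
    }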
-
Submitted by Dave Jiang
In the existing code only the RAID channels are allowed to have irq coalescing support; fix that. The ioat3 cleanup code can handle memcpy ops anyway. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
Submitted by Dave Jiang
Print the OP field as hex instead of a decimal integer to make it more readable, and also dump the NEXT field. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 14 Feb 2013, 1 commit
-
-
Submitted by Fengguang Wu
>> drivers/dma/ioat/dma_v3.c:371:6: sparse: symbol 'ioat3_timer_event' was not declared. Reported-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
-
- 13 Feb 2013, 1 commit
-
-
Submitted by Dave Jiang
There is a race that can hit during __cleanup() when the ioat->head pointer is incremented during descriptor submission. __cleanup() can clear the PENDING flag when it does not see any active descriptors, which causes newly submitted descriptors to be ignored because the COMPLETION_PENDING flag is cleared. This was introduced when code was adapted from ioatdma v1 to ioatdma v2. For v2 and v3, the IOAT_COMPLETION_PENDING flag will be abandoned and a new flag, IOAT_CHAN_ACTIVE, will be used instead. This flag will also be protected by the prep_lock when being modified in order to avoid the race. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
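The submit-side half of that, sketched; the flag and field names are the driver's as recalled, and the cleanup side takes the same prep_lock before it may treat the channel as idle:

    /* in tx_submit, already holding ioat->prep_lock (chan == &ioat->base):
     * mark the channel active and arm the completion watchdog exactly once */
    if (!test_and_set_bit(IOAT_CHAN_ACTIVE, &chan->state))
            mod_timer(&chan->timer, jiffies + COMPLETION_TIMEOUT);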
-
- 08 Jan 2013, 3 commits
-
-
Submitted by Dave Jiang
The existing code sets a value in the PCI_CHANERRMSK_INT register as a workaround for a pre-silicon bug on the Intel 5520 IO hub that had been fixed by the time the hardware was released. There is no need for this code. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
-
Submitted by Dave Jiang
The PCI IDs for IvyBridge IOAT DMA need to go into a header file since dma_v3.c looks them up for certain hardware workarounds. They also need to be added to the alignment workaround for IOAT 3.2, since it wasn't fixed in IVB. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <djbw@fb.com>
-
Make ioat_xor_val_self_test() do DMA unmapping itself and fix handling of failure cases. Cc: Dan Williams <djbw@fb.com> Cc: Tomasz Figa <t.figa@samsung.com> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Signed-off-by: Dan Williams <djbw@fb.com>
-
- 07 Jan 2013, 1 commit
-
-
Submitted by Shuah Khan
ioat does DMA memory sync with DMA_TO_DEVICE direction on a buffer allocated for DMA_FROM_DEVICE dma, resulting in the following warning from dma debug. Fixed the dma_sync_single_for_device() call to use the correct direction.
[ 226.288947] WARNING: at lib/dma-debug.c:990 check_sync+0x132/0x550()
[ 226.288948] Hardware name: ProLiant DL380p Gen8
[ 226.288951] ioatdma 0000:00:04.0: DMA-API: device driver syncs DMA memory with different direction [device address=0x00000000ffff7000] [size=4096 bytes] [mapped with DMA_FROM_DEVICE] [synced with DMA_TO_DEVICE]
[ 226.288953] Modules linked in: iTCO_wdt(+) sb_edac(+) ioatdma(+) microcode serio_raw pcspkr edac_core hpwdt(+) iTCO_vendor_support hpilo(+) dca acpi_power_meter ata_generic pata_acpi sd_mod crc_t10dif ata_piix libata hpsa tg3 netxen_nic(+) sunrpc dm_mirror dm_region_hash dm_log dm_mod
[ 226.288967] Pid: 1055, comm: work_for_cpu Tainted: G W 3.3.0-0.20.el7.x86_64 #1
[ 226.288968] Call Trace:
[ 226.288974] [<ffffffff810644cf>] warn_slowpath_common+0x7f/0xc0
[ 226.288977] [<ffffffff810645c6>] warn_slowpath_fmt+0x46/0x50
[ 226.288980] [<ffffffff81345502>] check_sync+0x132/0x550
[ 226.288983] [<ffffffff81345c9f>] debug_dma_sync_single_for_device+0x3f/0x50
[ 226.288988] [<ffffffff81661002>] ? wait_for_common+0x72/0x180
[ 226.288995] [<ffffffffa019590f>] ioat_xor_val_self_test+0x3e5/0x832 [ioatdma]
[ 226.288999] [<ffffffff811a5739>] ? kfree+0x259/0x270
[ 226.289004] [<ffffffffa0195d77>] ioat3_dma_self_test+0x1b/0x20 [ioatdma]
[ 226.289008] [<ffffffffa01952c3>] ioat_probe+0x2f8/0x348 [ioatdma]
[ 226.289011] [<ffffffffa0195f51>] ioat3_dma_probe+0x1d5/0x2aa [ioatdma]
[ 226.289016] [<ffffffffa0194d12>] ioat_pci_probe+0x139/0x17c [ioatdma]
[ 226.289020] [<ffffffff81354b8c>] local_pci_probe+0x5c/0xd0
[ 226.289023] [<ffffffff81083e50>] ? destroy_work_on_stack+0x20/0x20
[ 226.289025] [<ffffffff81083e68>] do_work_for_cpu+0x18/0x30
[ 226.289029] [<ffffffff8108d997>] kthread+0xb7/0xc0
[ 226.289033] [<ffffffff8166cef4>] kernel_thread_helper+0x4/0x10
[ 226.289036] [<ffffffff81662d20>] ? _raw_spin_unlock_irq+0x30/0x50
[ 226.289038] [<ffffffff81663234>] ? retint_restore_args+0x13/0x13
[ 226.289041] [<ffffffff8108d8e0>] ? kthread_worker_fn+0x1a0/0x1a0
[ 226.289044] [<ffffffff8166cef0>] ? gs_change+0x13/0x13
[ 226.289045] ---[ end trace e1618afc7a606089 ]---
[ 226.289047] Mapped at:
[ 226.289048] [<ffffffff81345307>] debug_dma_map_page+0x87/0x150
[ 226.289050] [<ffffffffa019653c>] dma_map_page.constprop.18+0x70/0xb34 [ioatdma]
[ 226.289054] [<ffffffffa0195702>] ioat_xor_val_self_test+0x1d8/0x832 [ioatdma]
[ 226.289058] [<ffffffffa0195d77>] ioat3_dma_self_test+0x1b/0x20 [ioatdma]
[ 226.289061] [<ffffffffa01952c3>] ioat_probe+0x2f8/0x348 [ioatdma]
Signed-off-by: Shuah Khan <shuah.khan@hp.com> CC: <stable@vger.kernel.org> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
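The fix itself is a one-line direction change in the self test; sketched, with the variable names as used in that function:

    /* dest_dma was mapped with DMA_FROM_DEVICE, so the pre-operation sync
     * must use the same direction (was DMA_TO_DEVICE) */
    dma_sync_single_for_device(dev, dest_dma, PAGE_SIZE, DMA_FROM_DEVICE);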
-
- 04 Jan 2013, 1 commit
-
-
Submitted by Greg Kroah-Hartman
CONFIG_HOTPLUG is going away as an option. As a result, the __dev* markings need to be removed. This change removes the use of __devinit, __devexit_p, __devinitconst, and __devexit from these drivers. Based on patches originally written by Bill Pemberton, but redone by me in order to handle some of the coding style issues better, by hand. Cc: Bill Pemberton <wfp5p@virginia.edu> Cc: Viresh Kumar <viresh.linux@gmail.com> Cc: Dan Williams <djbw@fb.com> Cc: Vinod Koul <vinod.koul@intel.com> Cc: Barry Song <baohua.song@csr.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: Alexander Duyck <alexander.h.duyck@intel.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Jassi Brar <jassisinghbrar@gmail.com> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Bill Pemberton <wfp5p@virginia.edu> Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 06 Apr 2012, 1 commit
-
-
Submitted by Dave Jiang
Silicon errata: when RAID and legacy descriptors are mixed, the legacy (memcpy and friends) operations must have 64-byte alignment to avoid hanging. This affects Intel Xeon C55xx, C35xx, E5-2600. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
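In dma_device terms the restriction is expressed as a log2 alignment, so 64 bytes becomes 6; a sketch of the kind of change, with dma being the driver's dma_device:

    /* legacy ops must be 64-byte aligned when mixed with RAID descriptors */
    dma->copy_align = 6;
    dma->xor_align = 6;
    dma->pq_align = 6;
    dma->fill_align = 6;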
-
- 24 Mar 2012, 1 commit
-
-
Submitted by Dan Williams
Starting with v3.2, Jonathan reports that Xen crashes loading the ioatdma driver. A debug run shows: ioatdma 0000:00:16.4: desc[0]: (0x300cc7000->0x300cc7040) cookie: 0 flags: 0x2 ctl: 0x29 (op: 0 int_en: 1 compl: 1) ... ioatdma 0000:00:16.4: ioat_get_current_completion: phys_complete: 0xcc7000 ...which shows that in this environment GFP_KERNEL memory may be backed by a 64-bit dma address. This breaks the driver's assumption that an unsigned long should be able to contain the physical address for descriptor memory. Switch to dma_addr_t which, beyond being the right size, is the true type for the data, i.e. an io-virtual address indicating the engine's last processed descriptor. [stable: 3.2+] Cc: <stable@vger.kernel.org> Reported-by: Jonathan Nieder <jrnieder@gmail.com> Reported-by: William Dauchy <wdauchy@gmail.com> Tested-by: William Dauchy <wdauchy@gmail.com> Tested-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
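The type change, sketched on the helper that reads back the completion address; the prototype is as recalled from the driver:

    /* was: unsigned long *phys_complete -- too narrow when GFP_KERNEL memory
     * sits above 4GB, as in the Xen report above */
    static bool ioat_cleanup_preamble(struct ioat_chan_common *chan,
                                      dma_addr_t *phys_complete);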
-
- 13 Mar 2012, 3 commits
-
-
Submitted by Vinod Koul
Fixed trivial issues in drivers: drivers/dma/imx-sdma.c, drivers/dma/intel_mid_dma.c, drivers/dma/ioat/dma_v3.c, drivers/dma/iop-adma.c, drivers/dma/sirf-dma.c, drivers/dma/timb_dma.c. Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
-
Submitted by Russell King - ARM Linux
Now that we have the completed cookie in the dma_chan structure, we can consolidate the tx_status functions by providing a function that sets the txstate structure and returns the DMA status. We also provide a separate helper to set the residue for cookies which are still in progress. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Jassi Brar <jassisinghbrar@gmail.com> [imx-sdma.c & mxs-dma.c] Tested-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
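With the helper, a driver's tx_status boils down to the pattern below; a sketch mirroring the ioat3 shape of that era, when the enum value was still spelled DMA_SUCCESS:

    static enum dma_status
    ioat3_tx_status(struct dma_chan *c, dma_cookie_t cookie,
                    struct dma_tx_state *txstate)
    {
            struct ioat2_dma_chan *ioat = to_ioat2_chan(c);
            enum dma_status ret;

            ret = dma_cookie_status(c, cookie, txstate);
            if (ret == DMA_SUCCESS)
                    return ret;

            ioat3_cleanup(ioat);            /* reap finished descriptors */

            return dma_cookie_status(c, cookie, txstate);
    }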
-
Submitted by Russell King - ARM Linux
Provide a common function to do the cookie mechanics for completing a DMA descriptor. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Jassi Brar <jassisinghbrar@gmail.com> [imx-sdma.c & mxs-dma.c] Tested-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
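On the completion side a driver then calls the helper instead of open-coding the bookkeeping; a sketch, with desc being the driver's ring entry:

    /* marks desc->txd.cookie completed and records it in the channel's
     * completed_cookie, replacing per-driver open-coded updates */
    dma_cookie_complete(&desc->txd);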
-