============================================
Dynamic DMA mapping using the generic device
============================================

:Author: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API.  For a more gentle introduction
to the API (and actual examples), see :doc:`/core-api/dma-api-howto`.

This API is split into two pieces.  Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines.  Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in part I.

Part I - dma_API
----------------

To get the dma_API, you must #include <linux/dma-mapping.h>.  This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform.  It can be
given to a device to use as a DMA source or target.  A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

::

	void *
	dma_alloc_coherent(struct device *dev, size_t size,
			   dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

::

	void
	dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
			  dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated.  dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent().  cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
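
For illustration, here is how you might allocate and free such a buffer
(dev is assumed to be your struct device pointer and buf_size a
driver-chosen size; neither comes from this document)::

	#include <linux/dma-mapping.h>

	void *cpu_addr;
	dma_addr_t dma_handle;
	size_t buf_size = 4096;	/* hypothetical buffer size */

	/* GFP_KERNEL: this allocation may sleep */
	cpu_addr = dma_alloc_coherent(dev, buf_size, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* give dma_handle to the device, access the buffer via cpu_addr */

	dma_free_coherent(dev, buf_size, cpu_addr, dma_handle);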

Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_API, you must #include <linux/dmapool.h>.

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers.  Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools.  These work
much like a struct kmem_cache, except that they use the DMA-coherent allocator,
not __get_free_pages().  Also, they understand common hardware constraints
for alignment, like queue heads needing to be aligned on N-byte boundaries.


::

	struct dma_pool *
	dma_pool_create(const char *name, struct device *dev,
			size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device.  It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent().  The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two).  If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.

::

	void *
	dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
		        dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.


::

	void *
	dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
		       dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time.  Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking.  Like dma_alloc_coherent(), this returns two values:  an
address usable by the CPU, and the DMA address usable by the pool's
device.

::

	void
	dma_pool_free(struct dma_pool *pool, void *vaddr,
		      dma_addr_t addr);

This puts memory back into the pool.  The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.

::

	void
	dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool.  It must be
called in a context which can sleep.  Make sure you've freed all allocated
memory back to the pool before you destroy it.
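
For illustration, a pool of 64-byte, 64-byte-aligned descriptors might
be used like this (the pool name and sizes here are only examples)::

	struct dma_pool *pool;
	void *vaddr;
	dma_addr_t dma;

	/* 64-byte blocks, 64-byte aligned, no boundary restriction */
	pool = dma_pool_create("my_descs", dev, 64, 64, 0);
	if (!pool)
		return -ENOMEM;

	vaddr = dma_pool_zalloc(pool, GFP_KERNEL, &dma);
	if (!vaddr) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* ... give dma to the device, use vaddr from the CPU ... */

	dma_pool_free(pool, vaddr, dma);
	dma_pool_destroy(pool);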


Part Ic - DMA addressing limitations
------------------------------------

::

	int
	dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.
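
For illustration, a common probe-time pattern is to try a wide mask
first and fall back to a narrower one (the fallback policy shown here
is only an example, not a requirement)::

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) &&
	    dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "no suitable DMA addressing available\n");
		return -ENODEV;
	}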

::

	int
	dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

	int
	dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

	u64
	dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently.  Usually this means the returned mask
is the minimum required to cover all of memory.  Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
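
For illustration, a driver with two descriptor formats might do
something like this (priv->use_small_descs is a hypothetical driver
flag, not part of the API)::

	if (dma_get_required_mask(dev) <= DMA_BIT_MASK(32) &&
	    !dma_set_mask(dev, DMA_BIT_MASK(32)))
		priv->use_small_descs = true;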

::

	size_t
	dma_max_mapping_size(struct device *dev);

Returns the maximum size of a mapping for the device. The size parameter
of the mapping functions like dma_map_single(), dma_map_page() and
others should not be larger than the returned value.
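
For example, a driver might clamp its transfer length to this value
(len is a hypothetical transfer length)::

	size_t max = dma_max_mapping_size(dev);

	if (len > max)
		len = max;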

::

	bool
	dma_need_sync(struct device *dev, dma_addr_t dma_addr);

Returns %true if dma_sync_single_for_{device,cpu} calls are required to
transfer memory ownership.  Returns %false if those calls can be skipped.

::

	unsigned long
	dma_get_merge_boundary(struct device *dev);

Returns the DMA merge boundary. If the device cannot merge any of the DMA
address segments, the function returns 0.

Part Id - Streaming DMA mappings
--------------------------------

::

	dma_addr_t
	dma_map_single(struct device *dev, void *cpu_addr, size_t size,
		       enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The direction for both APIs may be converted freely by casting.
However the dma_API uses a strongly typed enumerator for its
direction:

======================= =============================================
DMA_NONE		no direction (used for debugging)
DMA_TO_DEVICE		data is going from the memory to the device
DMA_FROM_DEVICE		data is coming from the device to the memory
DMA_BIDIRECTIONAL	direction isn't known
======================= =============================================

.. note::

	Not all memory regions in a machine can be mapped by this API.
	Further, contiguous kernel virtual space may not be contiguous as
	physical memory.  Since this API does not provide any scatter/gather
	capability, it will fail if the user tries to map a non-physically
	contiguous piece of memory.  For this reason, memory to be mapped by
	this API should be obtained from sources which guarantee it to be
	physically contiguous (like kmalloc).

	Further, the DMA address of the memory must be within the
	dma_mask of the device (the dma_mask is a bit mask of the
	addressable region for the device, i.e., if the DMA address of
	the memory ANDed with the dma_mask is still equal to the DMA
	address, then the device can perform DMA to the memory).  To
	ensure that the memory allocated by kmalloc is within the dma_mask,
	the driver may specify various platform-dependent flags to restrict
	the DMA address range of the allocation (e.g., on x86, GFP_DMA
	guarantees to be within the first 16MB of available DMA addresses,
	as required by ISA devices).

	Note also that the above constraints on physical contiguity and
	dma_mask may not apply if the platform has an IOMMU (a device which
	maps an I/O DMA address to a physical memory address).  However, to be
	portable, device driver writers may *not* assume that such an IOMMU
	exists.

.. warning::

	Memory coherency operates at a granularity called the cache
	line width.  In order for memory mapped by this API to operate
	correctly, the mapped region must begin exactly on a cache line
	boundary and end exactly on one (to prevent two separately mapped
	regions from sharing a single cache line).  Since the cache line size
	may not be known at compile time, the API will not enforce this
	requirement.  Therefore, it is recommended that driver writers who
	don't take special care to determine the cache line size at run time
	only map virtual regions that begin and end on page boundaries (which
	are guaranteed also to be cache line boundaries).

	DMA_TO_DEVICE synchronisation must be done after the last modification
	of the memory region by the software and before it is handed off to
	the device.  Once this primitive is used, memory covered by this
	primitive should be treated as read-only by the device.  If the device
	may write to it at any point, it should be DMA_BIDIRECTIONAL (see
	below).

	DMA_FROM_DEVICE synchronisation must be done before the driver
	accesses data that may be changed by the device.  This memory should
	be treated as read-only by the driver.  If the driver needs to write
	to it at any point, it should be DMA_BIDIRECTIONAL (see below).

	DMA_BIDIRECTIONAL requires special handling: it means that the driver
	isn't sure if the memory was modified before being handed off to the
	device and also isn't sure if the device will also modify it.  Thus,
	you must always sync bidirectional memory twice: once before the
	memory is handed off to the device (to make sure all memory changes
	are flushed from the processor) and once before the data may be
	accessed after being used by the device (to make sure any processor
	cache lines are updated with data that the device may have changed).

::

	void
	dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
			 enum dma_data_direction direction)

Unmaps the region previously mapped.  All the parameters must be
identical to those passed in to (and returned by) the mapping API.

::

	dma_addr_t
	dma_map_page(struct device *dev, struct page *page,
		     unsigned long offset, size_t size,
		     enum dma_data_direction direction)

	void
	dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
		       enum dma_data_direction direction)

API for mapping and unmapping for pages.  All the notes and warnings
for the other mapping APIs apply here.  Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

::

	dma_addr_t
	dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
			 enum dma_data_direction dir, unsigned long attrs)

	void
	dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
			   enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping for MMIO resources. All the notes and
warnings for the other mapping APIs apply here. The API should only be
used to map device MMIO resources; mapping of RAM is not permitted.
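
For illustration, mapping an MMIO region for use by a DMA engine might
look like this (phys_addr and len are assumed to come from the device's
resource description)::

	dma_addr_t dma;

	dma = dma_map_resource(dev, phys_addr, len, DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(dev, dma))
		return -EIO;

	/* ... program the DMA engine with dma ... */

	dma_unmap_resource(dev, dma, len, DMA_BIDIRECTIONAL, 0);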

::

	int
	dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
will fail to create a mapping. A driver can check for these errors by testing
the returned DMA address with dma_mapping_error(). A non-zero return value
means the mapping could not be created and the driver should take appropriate
action (e.g. reduce current DMA mapping usage or delay and try again later).
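
For illustration, a streaming mapping with the required error check
might look like this (buf and len are assumed to be set up by the
driver)::

	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_handle)) {
		/* reduce usage, or defer and try again later */
		return -ENOMEM;
	}

	/* ... hand dma_handle to the device ... */

	dma_unmap_single(dev, dma_handle, len, DMA_TO_DEVICE);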

::

	int
	dma_map_sg(struct device *dev, struct scatterlist *sg,
		   int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this::

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
D
David Brownell 已提交
385 386 387 388 389 390 391 392 393 394 395 396 397 398 399
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

::

	void
	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
		     int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed into the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.

::

	void
	dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
				size_t size,
				enum dma_data_direction direction)

	void
	dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
				   size_t size,
				   enum dma_data_direction direction)

	void
	dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
			    int nents,
			    enum dma_data_direction direction)

	void
	dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
			       int nents,
			       enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device. With the sync_sg API, all the parameters must be the same
as those passed into the single mapping API. With the sync_single API,
you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.


.. note::

   You must do this:

   - Before reading values that have been written by DMA from the device
     (use the DMA_FROM_DEVICE direction)
   - After writing values that will be written to the device using DMA
     (use the DMA_TO_DEVICE direction)
   - Before *and* after handing memory to the device if the memory is
     DMA_BIDIRECTIONAL

See also dma_map_single().
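
For illustration, a receive buffer that is mapped once with
DMA_FROM_DEVICE and reused across transfers (buf, len and dma_handle
assumed set up earlier with dma_map_single()) might be handled like
this::

	/* transfer complete: make the device's writes visible to the CPU */
	dma_sync_single_for_cpu(dev, dma_handle, len, DMA_FROM_DEVICE);

	/* ... the CPU reads buf ... */

	/* hand the buffer back to the device for the next transfer */
	dma_sync_single_for_device(dev, dma_handle, len, DMA_FROM_DEVICE);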

::

	dma_addr_t
	dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
			     enum dma_data_direction dir,
			     unsigned long attrs)

	void
	dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
			       size_t size, enum dma_data_direction dir,
			       unsigned long attrs)

	int
	dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
			 int nents, enum dma_data_direction dir,
			 unsigned long attrs)

	void
	dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
			   int nents, enum dma_data_direction dir,
			   unsigned long attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
dma_attrs.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in :doc:`/core-api/dma-attributes`.

If dma_attrs are 0, the semantics of each of these functions
is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the ``*_attrs`` functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA::

	#include <linux/dma-mapping.h>
	/* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
	 * documented in Documentation/core-api/dma-attributes.rst */
	...

		unsigned long attr = 0;
		attr |= DMA_ATTR_FOO;
		....
		n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attr);
		....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.::

	void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
				     size_t size, enum dma_data_direction dir,
				     unsigned long attrs)
	{
		....
		if (attrs & DMA_ATTR_FOO)
			/* twizzle the frobnozzle */
		....
	}


Part II - Non-coherent DMA allocations
--------------------------------------

These APIs allow you to allocate pages in the kernel direct mapping that are
guaranteed to be DMA addressable.  This means that unlike dma_alloc_coherent,
virt_to_page can be called on the resulting address, and the resulting
struct page can be used for everything a struct page is suitable for.

If you don't understand how cache line coherency works between a processor and
an I/O device, you should not be using this part of the API.

::

	void *
	dma_alloc_noncoherent(struct device *dev, size_t size,
			dma_addr_t *dma_handle, enum dma_data_direction dir,
			gfp_t gfp)

This routine allocates a region of <size> bytes of non-coherent memory.  It
returns a pointer to the allocated region (in the processor's virtual address
space) or NULL if the allocation failed.  The returned memory may or may not
be in the kernel's direct mapping.  Drivers must not call virt_to_page on
the returned memory region.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

The dir parameter specifies if data is read and/or written by the device,
see dma_map_single() for details.

The gfp parameter allows the caller to specify the ``GFP_`` flags (see
kmalloc()) for the allocation, but rejects flags used to specify a memory
zone such as GFP_DMA or GFP_HIGHMEM.

Before giving the memory to the device, dma_sync_single_for_device() needs
to be called, and before reading memory written by the device,
dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
reused.

::

	void
	dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
			dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_noncoherent().
dev, size, dma_handle and dir must all be the same as those passed into
dma_alloc_noncoherent().  cpu_addr must be the virtual address returned by
dma_alloc_noncoherent().
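
For illustration, assuming a device that writes into the buffer (size
is a hypothetical allocation size)::

	void *vaddr;
	dma_addr_t dma;

	vaddr = dma_alloc_noncoherent(dev, size, &dma, DMA_FROM_DEVICE,
				      GFP_KERNEL);
	if (!vaddr)
		return -ENOMEM;

	/* give the buffer to the device before it writes */
	dma_sync_single_for_device(dev, dma, size, DMA_FROM_DEVICE);

	/* ... the device writes via DMA ... */

	/* reclaim the buffer for the CPU and read it */
	dma_sync_single_for_cpu(dev, dma, size, DMA_FROM_DEVICE);

	dma_free_noncoherent(dev, size, vaddr, dma, DMA_FROM_DEVICE);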

::

	int
	dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

.. note::

	This API may return a number *larger* than the actual cache
	line, but it will guarantee that one or more cache lines fit exactly
	into the width returned by this call.  It will also always be a power
	of two for easy alignment.

L
J
-------------------------------------------

590
The DMA-API as described above has some constraints. DMA addresses must be
J
Joerg Roedel 已提交
591 592 593 594 595 596 597 598 599 600 601 602 603 604
released with the corresponding function with the same size for example. With
the advent of hardware IOMMUs it becomes more and more important that drivers
do not violate those constraints. In the worst case such a violation can
result in data corruption up to destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code can
be compiled into the kernel which will tell the developer about those
violations. If your architecture supports it, you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration. Enabling this
option has a performance impact. Do not enable it in production kernels.

If you boot the resulting kernel it will contain code which does some
bookkeeping about what DMA memory was allocated for which device. If this
code detects an error it prints a warning message with some details into
your kernel log. An example warning message may look like this::

	WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
		check_unmap+0x203/0x490()
	Hardware name:
	forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
		function [device address=0x00000000640444be] [size=66 bytes] [mapped as
	single] [unmapped as page]
	Modules linked in: nfsd exportfs bridge stp llc r8169
	Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
	Call Trace:
	<IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
	[<ffffffff80647b70>] _spin_unlock+0x10/0x30
	[<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
	[<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
	[<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
	[<ffffffff80252f96>] queue_work+0x56/0x60
	[<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
	[<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
	[<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
	[<ffffffff80235177>] find_busiest_group+0x207/0x8a0
	[<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
	[<ffffffff803c7ea3>] check_unmap+0x203/0x490
	[<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
	[<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
	[<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
	[<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
	[<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
	[<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
	[<ffffffff8020c093>] ret_from_intr+0x0/0xa
	<EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device in this warning,
together with a stack trace of the DMA-API call which caused it.

By default only the first error will result in a warning message. All other
errors will only be counted silently. This limitation exists to prevent the
code from flooding your kernel log. To support debugging a device driver this
can be disabled via debugfs. See the debugfs interface documentation below
for details.

The debugfs directory for the DMA-API debugging code is called dma-api/. In
this directory the following files can currently be found:

=============================== ===============================================
dma-api/all_errors		This file contains a numeric value. If this
				value is not equal to zero the debugging code
				will print a warning for every error it finds
				into the kernel log. Be careful with this
				option, as it can easily flood your logs.

dma-api/disabled		This read-only file contains the character 'Y'
				if the debugging code is disabled. This can
				happen when it runs out of memory or if it was
				disabled at boot time.

dma-api/dump			This read-only file contains current DMA
				mappings.

dma-api/error_count		This file is read-only and shows the total
				number of errors found.

dma-api/num_errors		The number in this file shows how many
				warnings will be printed to the kernel log
				before it stops. This number is initialized to
				one at system boot and can be set by writing
				into this file.

dma-api/min_free_entries	This read-only file can be read to get the
				minimum number of free dma_debug_entries the
				allocator has ever seen. If this value goes
				down to zero the code will attempt to increase
				nr_total_entries to compensate.

dma-api/num_free_entries	The current number of free dma_debug_entries
				in the allocator.

dma-api/nr_total_entries	The total number of dma_debug_entries in the
				allocator, both free and used.

dma-api/driver_filter		You can write a name of a driver into this file
				to limit the debug output to requests from that
				particular driver. Write an empty string to
				that file to disable the filter and see
				all errors again.
=============================== ===============================================

If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter. This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime. You have to reboot to do
so.

If you want to see debug messages only for a specific device driver you can
specify the dma_debug_driver=<drivername> parameter. This will enable the
driver filter at boot time. The debug code will only print errors for that
driver afterwards. This filter can be disabled or changed later using debugfs.
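
For example, booting with::

	dma_debug_driver=e1000e

would restrict the debug output to the (arbitrarily chosen here) e1000e
driver from the start.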

When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries and was unable to allocate more on-demand. 65536
entries are preallocated at boot - if this is too low for you, boot with
'dma_debug_entries=<your_desired_number>' to override the default. Note
that the code allocates entries in batches, so the exact number of
preallocated entries may be greater than the actual number requested. The
code will print to the kernel log each time it has dynamically allocated
as many entries as were initially preallocated. This is to indicate that a
larger preallocation size may be appropriate, or if it happens continually
that a driver may be leaking mappings.

::

	void
	debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

The dma-debug interface debug_dma_mapping_error() helps to debug drivers that
fail to check DMA mapping errors on addresses returned by dma_map_single() and
dma_map_page() interfaces. This interface clears a flag set by
debug_dma_map_page() to indicate that dma_mapping_error() has been called by
the driver. When the driver does the unmap, debug_dma_unmap() checks the flag
and, if this flag is still set, prints a warning message that includes the
call trace that leads up to the unmap. This interface can be called from
dma_mapping_error() routines to enable DMA mapping error check debugging.