Commit ceb5ac32 authored by Becky Bruce, committed by Ingo Molnar

swiotlb: comment corrections

Impact: cleanup

swiotlb_map/unmap_single are now swiotlb_map/unmap_page;
trivially change all the comments to reference the new names.

Also, there were some comments that should have been
referring to just plain old map_single, not swiotlb_map_single;
fix those as well.

Also change a use of the word "pointer" where what is actually
being referred to is a dma/physical address.
Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Acked-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Cc: jeremy@goop.org
Cc: ian.campbell@citrix.com
LKML-Reference: <1239199761-22886-2-git-send-email-galak@kernel.crashing.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent 577c9c45
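For reference, a minimal sketch (not part of this commit) of how the renamed entry points pair up, using the swiotlb_map_page() signature visible in the diff below. "dev" and "page" are hypothetical, the trailing attrs argument is simply NULL here, and most drivers reach these functions through the generic dma_map_page()/dma_unmap_page() wrappers rather than calling swiotlb directly:

/*
 * Minimal usage sketch, not part of this commit.  "dev" and "page"
 * are hypothetical placeholders.
 */
#include <linux/dma-mapping.h>
#include <linux/swiotlb.h>

static int example_map(struct device *dev, struct page *page)
{
	dma_addr_t dma_addr;

	/* Was swiotlb_map_single(dev, ptr, size, dir); now page + offset. */
	dma_addr = swiotlb_map_page(dev, page, 0, PAGE_SIZE,
				    DMA_TO_DEVICE, NULL);
	if (dma_mapping_error(dev, dma_addr))
		return -ENOMEM;

	/* ... the device now owns the buffer; start the DMA and wait ... */

	/* Was swiotlb_unmap_single(); args must match the map call. */
	swiotlb_unmap_page(dev, dma_addr, PAGE_SIZE, DMA_TO_DEVICE, NULL);
	return 0;
}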
@@ -60,8 +60,8 @@ enum dma_sync_target {
 int swiotlb_force;
 
 /*
- * Used to do a quick range check in swiotlb_unmap_single and
- * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
+ * Used to do a quick range check in unmap_single and
+ * sync_single_*, to see if the memory was in fact allocated by this
  * API.
  */
 static char *io_tlb_start, *io_tlb_end;
@@ -560,7 +560,6 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 				size)) {
 		/*
 		 * The allocated memory isn't reachable by the device.
-		 * Fall back on swiotlb_map_single().
 		 */
 		free_pages((unsigned long) ret, order);
 		ret = NULL;
@@ -568,9 +567,8 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	if (!ret) {
 		/*
 		 * We are either out of memory or the device can't DMA
-		 * to GFP_DMA memory; fall back on
-		 * swiotlb_map_single(), which will grab memory from
-		 * the lowest available address range.
+		 * to GFP_DMA memory; fall back on map_single(), which
+		 * will grab memory from the lowest available address range.
 		 */
 		ret = map_single(hwdev, 0, size, DMA_FROM_DEVICE);
 		if (!ret)
@@ -634,7 +632,7 @@ swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
  * physical address to use is returned.
  *
  * Once the device is given the dma address, the device owns this memory until
- * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
+ * either swiotlb_unmap_page or swiotlb_dma_sync_single is performed.
  */
 dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 			    unsigned long offset, size_t size,
@@ -648,7 +646,7 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 	BUG_ON(dir == DMA_NONE);
 
 	/*
-	 * If the pointer passed in happens to be in the device's DMA window,
+	 * If the address happens to be in the device's DMA window,
 	 * we can safely return the device addr and not worry about bounce
 	 * buffering it.
 	 */
@@ -679,7 +677,7 @@ EXPORT_SYMBOL_GPL(swiotlb_map_page);
 
 /*
  * Unmap a single streaming mode DMA translation.  The dma_addr and size must
- * match what was provided for in a previous swiotlb_map_single call.  All
+ * match what was provided for in a previous swiotlb_map_page call.  All
  * other usages are undefined.
  *
  * After this call, reads by the cpu to the buffer are guaranteed to see
@@ -703,7 +701,7 @@ EXPORT_SYMBOL_GPL(swiotlb_unmap_page);
  * Make physical memory consistent for a single streaming mode DMA translation
  * after a transfer.
  *
- * If you perform a swiotlb_map_single() but wish to interrogate the buffer
+ * If you perform a swiotlb_map_page() but wish to interrogate the buffer
  * using the cpu, yet do not wish to teardown the dma mapping, you must
  * call this function before doing so.  At the next point you give the dma
  * address back to the card, you must first perform a
@@ -777,7 +775,7 @@ EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_device);
 
 /*
  * Map a set of buffers described by scatterlist in streaming mode for DMA.
- * This is the scatter-gather version of the above swiotlb_map_single
+ * This is the scatter-gather version of the above swiotlb_map_page
  * interface.  Here the scatter gather list elements are each tagged with the
  * appropriate dma address and length.  They are obtained via
  * sg_dma_{address,length}(SG).
@@ -788,7 +786,7 @@ EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_device);
  * The routine returns the number of addr/length pairs actually
  * used, at most nents.
  *
- * Device ownership issues as mentioned above for swiotlb_map_single are the
+ * Device ownership issues as mentioned above for swiotlb_map_page are the
  * same here.
  */
 int
@@ -836,7 +834,7 @@ EXPORT_SYMBOL(swiotlb_map_sg);
 
 /*
  * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
- * concerning calls here are the same as for swiotlb_unmap_single() above.
+ * concerning calls here are the same as for swiotlb_unmap_page() above.
  */
 void
 swiotlb_unmap_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
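The sync_single comment updated above describes an ownership handshake: after a swiotlb_map_page(), the cpu may only interrogate the buffer between a sync-for-cpu and a sync-for-device call. A minimal sketch (not part of this commit) of that pattern, written against the generic DMA API that swiotlb backs; "dev", "buf", "len", and "dma_addr" are hypothetical:

/*
 * Minimal sketch, not part of this commit: the cpu/device ownership
 * handshake described in the swiotlb_sync_single_* comment, via the
 * generic DMA API.
 */
#include <linux/dma-mapping.h>

static void example_peek(struct device *dev, void *buf, size_t len,
			 dma_addr_t dma_addr)
{
	/* Hand the buffer back to the cpu before reading it. */
	dma_sync_single_for_cpu(dev, dma_addr, len, DMA_FROM_DEVICE);

	/* ... interrogate buf with the cpu here ... */

	/* Return ownership to the device before it DMAs again. */
	dma_sync_single_for_device(dev, dma_addr, len, DMA_FROM_DEVICE);
}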