diff --git a/Documentation/DocBook/drm.tmpl b/Documentation/DocBook/drm.tmpl
index be35bc328b775948cd10d1c2cf3f8df76256aeb2..d21b1f84b838e0a277b160d8c82329766041fad0 100644
--- a/Documentation/DocBook/drm.tmpl
+++ b/Documentation/DocBook/drm.tmpl
@@ -492,10 +492,10 @@ char *date;
The Translation Table Manager (TTM)
- TTM design background and information belongs here.
+ TTM design background and information belong here.
- TTM initialization
+ TTM initialization
This section is outdated.
Drivers wishing to support TTM must fill out a drm_bo_driver
@@ -503,42 +503,42 @@ char *date;
pointers for initializing the TTM, allocating and freeing memory,
waiting for command completion and fence synchronization, and memory
migration. See the radeon_ttm.c file for an example of usage.
-
-
- The ttm_global_reference structure is made up of several fields:
-
-
- struct ttm_global_reference {
- enum ttm_global_types global_type;
- size_t size;
- void *object;
- int (*init) (struct ttm_global_reference *);
- void (*release) (struct ttm_global_reference *);
- };
-
-
- There should be one global reference structure for your memory
- manager as a whole, and there will be others for each object
- created by the memory manager at runtime. Your global TTM should
- have a type of TTM_GLOBAL_TTM_MEM. The size field for the global
- object should be sizeof(struct ttm_mem_global), and the init and
- release hooks should point at your driver-specific init and
- release routines, which probably eventually call
- ttm_mem_global_init and ttm_mem_global_release, respectively.
-
-
- Once your global TTM accounting structure is set up and initialized
- by calling ttm_global_item_ref() on it,
- you need to create a buffer object TTM to
- provide a pool for buffer object allocation by clients and the
- kernel itself. The type of this object should be TTM_GLOBAL_TTM_BO,
- and its size should be sizeof(struct ttm_bo_global). Again,
- driver-specific init and release functions may be provided,
- likely eventually calling ttm_bo_global_init() and
- ttm_bo_global_release(), respectively. Also, like the previous
- object, ttm_global_item_ref() is used to create an initial reference
- count for the TTM, which will call your initialization function.
-
+
+
+ The ttm_global_reference structure is made up of several fields:
+
+
+ struct ttm_global_reference {
+ enum ttm_global_types global_type;
+ size_t size;
+ void *object;
+ int (*init) (struct ttm_global_reference *);
+ void (*release) (struct ttm_global_reference *);
+ };
+
+
+ There should be one global reference structure for your memory
+ manager as a whole, and there will be others for each object
+ created by the memory manager at runtime. Your global TTM should
+ have a type of TTM_GLOBAL_TTM_MEM. The size field for the global
+ object should be sizeof(struct ttm_mem_global), and the init and
+ release hooks should point at your driver-specific init and
+ release routines, which will usually end up calling
+ ttm_mem_global_init and ttm_mem_global_release, respectively.
+
+
+ Once your global TTM accounting structure is set up and initialized
+ by calling ttm_global_item_ref() on it,
+ you need to create a buffer object TTM to
+ provide a pool for buffer object allocation by clients and the
+ kernel itself. The type of this object should be TTM_GLOBAL_TTM_BO,
+ and its size should be sizeof(struct ttm_bo_global). Again,
+ driver-specific init and release functions may be provided,
+ likely eventually calling ttm_bo_global_init() and
+ ttm_bo_global_release(), respectively. Also, as with the previous
+ object, ttm_global_item_ref() is used to take the initial reference
+ on the TTM, and taking that first reference calls your
+ initialization function.
+
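+
+ As a rough sketch modeled on radeon_ttm.c (the my_driver names and
+ the my_driver_device structure are illustrative, not part of the TTM
+ API), the two-step setup described above looks like this:
+
+ static int my_driver_mem_global_init(struct ttm_global_reference *ref)
+ {
+         return ttm_mem_global_init(ref->object);
+ }
+
+ static void my_driver_mem_global_release(struct ttm_global_reference *ref)
+ {
+         ttm_mem_global_release(ref->object);
+ }
+
+ static int my_driver_global_init(struct my_driver_device *dev)
+ {
+         struct ttm_global_reference *ref = &dev->mem_global_ref;
+         int r;
+
+         /* Global memory accounting object, referenced first. */
+         ref->global_type = TTM_GLOBAL_TTM_MEM;
+         ref->size = sizeof(struct ttm_mem_global);
+         ref->init = &my_driver_mem_global_init;
+         ref->release = &my_driver_mem_global_release;
+         r = ttm_global_item_ref(ref);
+         if (r)
+                 return r;
+
+         /* Buffer object pool; the stock init/release entry points
+          * can be used directly as the hooks here. */
+         ref = &dev->bo_global_ref;
+         ref->global_type = TTM_GLOBAL_TTM_BO;
+         ref->size = sizeof(struct ttm_bo_global);
+         ref->init = &ttm_bo_global_init;
+         ref->release = &ttm_bo_global_release;
+         r = ttm_global_item_ref(ref);
+         if (r)
+                 ttm_global_item_unref(&dev->mem_global_ref);
+         return r;
+ }
+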
@@ -566,19 +566,19 @@ char *date;
using driver-specific ioctls.
- On a fundamental level, GEM involves several operations:
-
- Memory allocation and freeing
- Command execution
- Aperture management at command execution time
-
- Buffer object allocation is relatively straightforward and largely
+ On a fundamental level, GEM involves several operations:
+
+ Memory allocation and freeing
+ Command execution
+ Aperture management at command execution time
+
+ Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.
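+
+ A minimal allocation sketch (the my_obj wrapper is illustrative; the
+ shmem backing file is set up by drm_gem_object_init()):
+
+ struct my_obj {
+         struct drm_gem_object base;
+         /* driver-private state follows */
+ };
+
+ static struct my_obj *my_obj_create(struct drm_device *dev, size_t size)
+ {
+         struct my_obj *obj;
+
+         obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+         if (!obj)
+                 return NULL;
+
+         /* Allocates the shmem file that provides the object's
+          * backing pages. */
+         if (drm_gem_object_init(dev, &obj->base, size)) {
+                 kfree(obj);
+                 return NULL;
+         }
+         return obj;
+ }
+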
Device-specific operations, such as command execution, pinning, buffer
- read & write, mapping, and domain ownership transfers are left to
+ read & write, mapping, and domain ownership transfers are left to
driver-specific ioctls.
@@ -738,16 +738,16 @@ char *date;
respectively. The conversion is handled by the DRM core without any
driver-specific support.
-
- GEM also supports buffer sharing with dma-buf file descriptors through
- PRIME. GEM-based drivers must use the provided helpers functions to
- implement the exporting and importing correctly. See .
- Since sharing file descriptors is inherently more secure than the
- easily guessable and global GEM names it is the preferred buffer
- sharing mechanism. Sharing buffers through GEM names is only supported
- for legacy userspace. Furthermore PRIME also allows cross-device
- buffer sharing since it is based on dma-bufs.
-
+
+ GEM also supports buffer sharing with dma-buf file descriptors through
+ PRIME. GEM-based drivers must use the provided helper functions to
+ implement the exporting and importing correctly. See the PRIME Buffer
+ Sharing section below.
+ Since sharing file descriptors is inherently more secure than the
+ easily guessable and global GEM names, it is the preferred buffer
+ sharing mechanism. Sharing buffers through GEM names is only supported
+ for legacy userspace. Furthermore, PRIME also allows cross-device
+ buffer sharing since it is based on dma-bufs.
+
GEM Objects Mapping
@@ -852,7 +852,7 @@ char *date;
Command Execution
- Perhaps the most important GEM function for GPU devices is providing a
+ Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
@@ -874,95 +874,95 @@ char *date;
GEM Function Reference
!Edrivers/gpu/drm/drm_gem.c
-
-
- VMA Offset Manager
+
+
+ VMA Offset Manager
!Pdrivers/gpu/drm/drm_vma_manager.c vma offset manager
!Edrivers/gpu/drm/drm_vma_manager.c
!Iinclude/drm/drm_vma_manager.h
-
-
- PRIME Buffer Sharing
-
- PRIME is the cross device buffer sharing framework in drm, originally
- created for the OPTIMUS range of multi-gpu platforms. To userspace
- PRIME buffers are dma-buf based file descriptors.
-
-
- Overview and Driver Interface
-
- Similar to GEM global names, PRIME file descriptors are
- also used to share buffer objects across processes. They offer
- additional security: as file descriptors must be explicitly sent over
- UNIX domain sockets to be shared between applications, they can't be
- guessed like the globally unique GEM names.
-
-
- Drivers that support the PRIME
- API must set the DRIVER_PRIME bit in the struct
- drm_driver
- driver_features field, and implement the
- prime_handle_to_fd and
- prime_fd_to_handle operations.
-
-
- int (*prime_handle_to_fd)(struct drm_device *dev,
- struct drm_file *file_priv, uint32_t handle,
- uint32_t flags, int *prime_fd);
+
+
+ PRIME Buffer Sharing
+
+ PRIME is the cross-device buffer sharing framework in DRM, originally
+ created for the OPTIMUS range of multi-GPU platforms. To userspace,
+ PRIME buffers are dma-buf based file descriptors.
+
+
+ Overview and Driver Interface
+
+ Similar to GEM global names, PRIME file descriptors are
+ also used to share buffer objects across processes. They offer
+ additional security: as file descriptors must be explicitly sent over
+ UNIX domain sockets to be shared between applications, they can't be
+ guessed like the globally unique GEM names.
+
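+ For illustration, passing a dma-buf file descriptor between
+ processes uses plain SCM_RIGHTS ancillary data over an
+ already-connected UNIX domain socket; this is standard POSIX
+ userspace code (needs <sys/socket.h> and <string.h>), nothing
+ DRM-specific:
+
+ static int send_dmabuf_fd(int sock, int dmabuf_fd)
+ {
+         char data = 0;
+         struct iovec iov = { .iov_base = &data, .iov_len = 1 };
+         union {
+                 char buf[CMSG_SPACE(sizeof(int))];
+                 struct cmsghdr align;
+         } u;
+         struct msghdr msg = {
+                 .msg_iov = &iov,
+                 .msg_iovlen = 1,
+                 .msg_control = u.buf,
+                 .msg_controllen = sizeof(u.buf),
+         };
+         struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
+
+         /* The fd travels as SCM_RIGHTS control data alongside one
+          * byte of ordinary payload. */
+         cmsg->cmsg_level = SOL_SOCKET;
+         cmsg->cmsg_type = SCM_RIGHTS;
+         cmsg->cmsg_len = CMSG_LEN(sizeof(int));
+         memcpy(CMSG_DATA(cmsg), &dmabuf_fd, sizeof(int));
+
+         return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
+ }
+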
+
+ Drivers that support the PRIME API must set the DRIVER_PRIME bit in
+ the struct drm_driver driver_features field, and implement the
+ prime_handle_to_fd and prime_fd_to_handle operations.
+
+
+ int (*prime_handle_to_fd)(struct drm_device *dev,
+ struct drm_file *file_priv, uint32_t handle,
+ uint32_t flags, int *prime_fd);
int (*prime_fd_to_handle)(struct drm_device *dev,
- struct drm_file *file_priv, int prime_fd,
- uint32_t *handle);
- Those two operations convert a handle to a PRIME file descriptor and
- vice versa. Drivers must use the kernel dma-buf buffer sharing framework
- to manage the PRIME file descriptors. Similar to the mode setting
- API PRIME is agnostic to the underlying buffer object manager, as
- long as handles are 32bit unsigned integers.
-
-
- While non-GEM drivers must implement the operations themselves, GEM
- drivers must use the drm_gem_prime_handle_to_fd
- and drm_gem_prime_fd_to_handle helper functions.
- Those helpers rely on the driver
- gem_prime_export and
- gem_prime_import operations to create a dma-buf
- instance from a GEM object (dma-buf exporter role) and to create a GEM
- object from a dma-buf instance (dma-buf importer role).
-
-
- struct dma_buf * (*gem_prime_export)(struct drm_device *dev,
- struct drm_gem_object *obj,
- int flags);
+ struct drm_file *file_priv, int prime_fd,
+ uint32_t *handle);
+ Those two operations convert a handle to a PRIME file descriptor and
+ vice versa. Drivers must use the kernel dma-buf buffer sharing framework
+ to manage the PRIME file descriptors. Like the mode setting API,
+ PRIME is agnostic to the underlying buffer object manager, as long
+ as handles are 32-bit unsigned integers.
+
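+ From the userspace side, the conversion is reached through the
+ DRM_IOCTL_PRIME_HANDLE_TO_FD and DRM_IOCTL_PRIME_FD_TO_HANDLE ioctls
+ and struct drm_prime_handle; a sketch of the export direction:
+
+ /* Userspace; uses <sys/ioctl.h> and <drm/drm.h>. */
+ static int export_handle(int drm_fd, uint32_t handle)
+ {
+         struct drm_prime_handle args = {
+                 .handle = handle,
+                 .flags = DRM_CLOEXEC,
+         };
+
+         /* On success the dma-buf file descriptor is returned in
+          * args.fd and can be passed to another process. */
+         if (ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &args) < 0)
+                 return -1;
+         return args.fd;
+ }
+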
+
+ While non-GEM drivers must implement the operations themselves, GEM
+ drivers must use the drm_gem_prime_handle_to_fd and
+ drm_gem_prime_fd_to_handle helper functions. Those helpers rely on
+ the driver gem_prime_export and gem_prime_import operations to create
+ a dma-buf instance from a GEM object (dma-buf exporter role) and to
+ create a GEM object from a dma-buf instance (dma-buf importer role).
+
+
+ struct dma_buf * (*gem_prime_export)(struct drm_device *dev,
+ struct drm_gem_object *obj,
+ int flags);
struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev,
- struct dma_buf *dma_buf);
- These two operations are mandatory for GEM drivers that support
- PRIME.
-
-
-
- PRIME Helper Functions
-!Pdrivers/gpu/drm/drm_prime.c PRIME Helpers
+ struct dma_buf *dma_buf);
+ These two operations are mandatory for GEM drivers that support
+ PRIME.
+
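+ A sketch of the resulting wiring in a GEM driver (my_driver is
+ illustrative; drivers commonly plug in the generic
+ drm_gem_prime_export and drm_gem_prime_import helpers, which in turn
+ rely on further driver callbacks):
+
+ static struct drm_driver my_driver = {
+         .driver_features = DRIVER_GEM | DRIVER_PRIME,
+         /* Mandatory handle<->fd conversion helpers for GEM drivers. */
+         .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+         .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+         /* Exporter and importer roles. */
+         .gem_prime_export = drm_gem_prime_export,
+         .gem_prime_import = drm_gem_prime_import,
+         /* ... remaining operations elided ... */
+ };
+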
-
-
- PRIME Function References
+
+ PRIME Helper Functions
+!Pdrivers/gpu/drm/drm_prime.c PRIME Helpers
+
+
+
+ PRIME Function References
!Edrivers/gpu/drm/drm_prime.c
-
-
- DRM MM Range Allocator
-
- Overview
+
+
+ DRM MM Range Allocator
+
+ Overview
!Pdrivers/gpu/drm/drm_mm.c Overview
-
-
- LRU Scan/Eviction Support
+
+
+ LRU Scan/Eviction Support
!Pdrivers/gpu/drm/drm_mm.c lru scan roaster
-
+
-
- DRM MM Range Allocator Function References
+
+ DRM MM Range Allocator Function References
!Edrivers/gpu/drm/drm_mm.c
!Iinclude/drm/drm_mm.h
-
+