Commit 940eba2d authored by Daniel Vetter

drm/gem|prime|mm: Use recommended kerneldoc for struct member refs

I just learned that &struct_name.member_name works and looks pretty
even. It doesn't (yet) link to the member directly though, which would
be really good for big structures or vfunc tables (where the
per-member kerneldoc tends to be long).

Also some minor drive-by polish where it makes sense, I read a lot
of docs ...

Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170125062657.19270-5-daniel.vetter@ffwll.ch
Parent: 6806cdf9
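
For illustration, the cross-reference style this patch switches to looks like the following. This is a made-up kerneldoc comment assembled from phrases in the diff below, not a hunk from the patch itself:

    /**
     * (hypothetical example)
     *
     * Old, plain-text style that the documentation tooling cannot resolve:
     *     Must be called holding dev->struct_mutex.
     *
     * Recommended &struct_name.member_name style:
     *     Must be called holding &drm_device.struct_mutex.
     */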
@@ -316,8 +316,8 @@ EXPORT_SYMBOL(drm_gem_handle_delete);
  * @dev: corresponding drm_device
  * @handle: the dumb handle to remove
  *
- * This implements the ->dumb_destroy kms driver callback for drivers which use
- * gem to manage their backing storage.
+ * This implements the &drm_driver.dumb_destroy kms driver callback for drivers
+ * which use gem to manage their backing storage.
  */
 int drm_gem_dumb_destroy(struct drm_file *file,
                          struct drm_device *dev,
@@ -333,9 +333,9 @@ EXPORT_SYMBOL(drm_gem_dumb_destroy);
  * @obj: object to register
  * @handlep: pointer to return the created handle to the caller
  *
- * This expects the dev->object_name_lock to be held already and will drop it
- * before returning. Used to avoid races in establishing new handles when
- * importing an object from either an flink name or a dma-buf.
+ * This expects the &drm_device.object_name_lock to be held already and will
+ * drop it before returning. Used to avoid races in establishing new handles
+ * when importing an object from either an flink name or a dma-buf.
  *
  * Handles must be release again through drm_gem_handle_delete(). This is done
  * when userspace closes @file_priv for all attached handles, or through the
@@ -447,8 +447,8 @@ EXPORT_SYMBOL(drm_gem_free_mmap_offset);
  * structures.
  *
  * This routine allocates and attaches a fake offset for @obj, in cases where
- * the virtual size differs from the physical size (ie. obj->size). Otherwise
- * just use drm_gem_create_mmap_offset().
+ * the virtual size differs from the physical size (ie. &drm_gem_object.size).
+ * Otherwise just use drm_gem_create_mmap_offset().
  *
  * This function is idempotent and handles an already allocated mmap offset
  * transparently. Drivers do not need to check for this case.
@@ -787,7 +787,7 @@ EXPORT_SYMBOL(drm_gem_object_release);
  * @kref: kref of the object to free
  *
  * Called after the last reference to the object has been lost.
- * Must be called holding &drm_device->struct_mutex.
+ * Must be called holding &drm_device.struct_mutex.
  *
  * Frees the object
  */
@@ -813,7 +813,7 @@ EXPORT_SYMBOL(drm_gem_object_free);
  * @obj: GEM buffer object
  *
  * This releases a reference to @obj. Callers must not hold the
- * dev->struct_mutex lock when calling this function.
+ * &drm_device.struct_mutex lock when calling this function.
  *
  * See also __drm_gem_object_unreference().
  */
@@ -840,9 +840,9 @@ EXPORT_SYMBOL(drm_gem_object_unreference_unlocked);
  * drm_gem_object_unreference - release a GEM BO reference
  * @obj: GEM buffer object
  *
- * This releases a reference to @obj. Callers must hold the dev->struct_mutex
- * lock when calling this function, even when the driver doesn't use
- * dev->struct_mutex for anything.
+ * This releases a reference to @obj. Callers must hold the
+ * &drm_device.struct_mutex lock when calling this function, even when the
+ * driver doesn't use &drm_device.struct_mutex for anything.
  *
  * For drivers not encumbered with legacy locking use
  * drm_gem_object_unreference_unlocked() instead.
@@ -552,8 +552,8 @@ EXPORT_SYMBOL(drm_mm_replace_node);
  * objects to the roster, probably by walking an LRU list, but this can be
  * freely implemented. Eviction candiates are added using
  * drm_mm_scan_add_block() until a suitable hole is found or there are no
- * further evictable objects. Eviction roster metadata is tracked in struct
- * &drm_mm_scan.
+ * further evictable objects. Eviction roster metadata is tracked in &struct
+ * drm_mm_scan.
  *
  * The driver must walk through all objects again in exactly the reverse
  * order to restore the allocator state. Note that while the allocator is used
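
The scan flow described in that comment, sketched roughly below. The driver object, its LRU list, and the eviction step are assumptions made up for illustration; only drm_mm_scan_add_block(), its companion drm_mm_scan_remove_block() and &struct drm_mm_scan come from the drm_mm API, and the scan object is assumed to have been initialised elsewhere:

    #include <drm/drm_mm.h>
    #include <linux/list.h>

    /* Hypothetical driver-side object: one drm_mm allocation plus LRU linkage. */
    struct driver_obj {
            struct drm_mm_node node;
            struct list_head lru_link;
    };

    /* Roughly the pattern described above: add LRU objects to the eviction
     * roster until a suitable hole is found, then walk the roster in exactly
     * the reverse order to restore the allocator state, evicting the blocks
     * the scan marks for removal. */
    static void example_evict_scan(struct drm_mm_scan *scan, struct list_head *lru)
    {
            struct driver_obj *obj, *tmp;
            LIST_HEAD(scanned);

            list_for_each_entry_safe(obj, tmp, lru, lru_link) {
                    list_move(&obj->lru_link, &scanned);
                    if (drm_mm_scan_add_block(scan, &obj->node))
                            break;  /* a suitable hole has been found */
            }

            list_for_each_entry_safe_reverse(obj, tmp, &scanned, lru_link) {
                    bool evict = drm_mm_scan_remove_block(scan, &obj->node);

                    list_move(&obj->lru_link, lru);
                    if (evict) {
                            /* driver-specific: unbind obj, then drm_mm_remove_node() */
                    }
            }
    }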
@@ -291,7 +291,7 @@ static void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
  * This wraps dma_buf_export() for use by generic GEM drivers that are using
  * drm_gem_dmabuf_release(). In addition to calling dma_buf_export(), we take
  * a reference to the &drm_device and the exported &drm_gem_object (stored in
- * exp_info->priv) which is released by drm_gem_dmabuf_release().
+ * &dma_buf_export_info.priv) which is released by drm_gem_dmabuf_release().
  *
  * Returns the new dmabuf.
  */
@@ -63,7 +63,7 @@ struct drm_gem_object {
  * drops to 0 any global names (e.g. the id in the flink namespace) will
  * be cleared.
  *
- * Protected by dev->object_name_lock.
+ * Protected by &drm_device.object_name_lock.
  */
 unsigned handle_count;
@@ -106,8 +106,8 @@ struct drm_gem_object {
  * @name:
  *
  * Global name for this object, starts at 1. 0 means unnamed.
- * Access is covered by dev->object_name_lock. This is used by the GEM_FLINK
- * and GEM_OPEN ioctls.
+ * Access is covered by &drm_device.object_name_lock. This is used by
+ * the GEM_FLINK and GEM_OPEN ioctls.
  */
 int name;
@@ -150,7 +150,7 @@ struct drm_gem_object {
  * through importing or exporting). We break the resulting reference
  * loop when the last gem handle for this object is released.
  *
- * Protected by obj->object_name_lock.
+ * Protected by &drm_device.object_name_lock.
  */
 struct dma_buf *dma_buf;
@@ -163,7 +163,7 @@ struct drm_gem_object {
  * attachment point for the device. This is invariant over the lifetime
  * of a gem object.
  *
- * The driver's ->gem_free_object callback is responsible for cleaning
+ * The &drm_driver.gem_free_object callback is responsible for cleaning
  * up the dma_buf attachment and references acquired at import time.
  *
  * Note that the drm gem/prime core does not depend upon drivers setting
@@ -204,7 +204,7 @@ drm_gem_object_reference(struct drm_gem_object *obj)
  * @obj: GEM buffer object
  *
  * This function is meant to be used by drivers which are not encumbered with
- * dev->struct_mutex legacy locking and which are using the
+ * &drm_device.struct_mutex legacy locking and which are using the
  * gem_free_object_unlocked callback. It avoids all the locking checks and
  * locking overhead of drm_gem_object_unreference() and
  * drm_gem_object_unreference_unlocked().
@@ -212,8 +212,8 @@ drm_gem_object_reference(struct drm_gem_object *obj)
  * Drivers should never call this directly in their code. Instead they should
  * wrap it up into a ``driver_gem_object_unreference(struct driver_gem_object
  * *obj)`` wrapper function, and use that. Shared code should never call this, to
- * avoid breaking drivers by accident which still depend upon dev->struct_mutex
- * locking.
+ * avoid breaking drivers by accident which still depend upon
+ * &drm_device.struct_mutex locking.
  */
 static inline void
 __drm_gem_object_unreference(struct drm_gem_object *obj)
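
A sketch of the wrapper that comment recommends; struct driver_gem_object and its base member are illustrative assumptions, only the __drm_gem_object_unreference() call comes from the header being patched:

    /* Hypothetical driver-local wrapper, per the recommendation above.
     * Assumes the driver embeds struct drm_gem_object as a member named "base". */
    struct driver_gem_object {
            struct drm_gem_object base;
            /* driver-private state ... */
    };

    static inline void
    driver_gem_object_unreference(struct driver_gem_object *obj)
    {
            __drm_gem_object_unreference(&obj->base);
    }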