Commit 282d0a35 authored by Dave Airlie

Merge tag 'drm-misc-next-2017-01-09' of git://anongit.freedesktop.org/git/drm-misc into drm-next

Back to regular -misc pulls with reasonable sizes:
- dma_fence error clarification (Chris)
- drm_crtc_from_index helper (Shawn), pile more patches on the m-l to roll
  this out to drivers
- mmu-less support for fbdev helpers from Benjamin
- piles of kerneldoc work
- some polish for crc support from Tomeu and Benjamin
- odd misc stuff all over

* tag 'drm-misc-next-2017-01-09' of git://anongit.freedesktop.org/git/drm-misc: (48 commits)
  dma-fence: Introduce drm_fence_set_error() helper
  dma-fence: Wrap querying the fence->status
  dma-fence: Clear fence->status during dma_fence_init()
  drm: fix compilations issues introduced by "drm: allow to use mmuless SoC"
  drm: Change the return type of the unload hook to void
  drm: add more document for drm_crtc_from_index()
  drm: remove useless parameters from drm_pick_cmdline_mode function
  drm: crc: Call wake_up_interruptible() each time there is a new CRC entry
  drm: allow to use mmuless SoC
  drm: compile drm_vm.c only when needed
  fbmem: add a default get_fb_unmapped_area function
  drm: crc: Wait for a frame before returning from open()
  drm: Move locking into drm_debugfs_crtc_crc_add
  drm/imx: imx-tve: Remove unused variable
  Revert "drm: nouveau: fix build when LEDS_CLASS=m"
  drm: Add kernel-doc for drm_crtc_commit_get/put
  drm/atomic: Fix outdated comment.
  drm: reference count event->completion
  gpu: drm: mgag200: mgag200_main:- Handle error from pci_iomap
  drm: Document deprecated load/unload hook
  ...
@@ -34,25 +34,26 @@ TTM initialization
 ------------------
 **Warning**
 This section is outdated.

-Drivers wishing to support TTM must fill out a drm_bo_driver
-structure. The structure contains several fields with function pointers
-for initializing the TTM, allocating and freeing memory, waiting for
-command completion and fence synchronization, and memory migration. See
-the radeon_ttm.c file for an example of usage.
+Drivers wishing to support TTM must pass a filled :c:type:`ttm_bo_driver
+<ttm_bo_driver>` structure to ttm_bo_device_init, together with an
+initialized global reference to the memory manager. The ttm_bo_driver
+structure contains several fields with function pointers for
+initializing the TTM, allocating and freeing memory, waiting for command
+completion and fence synchronization, and memory migration.

-The ttm_global_reference structure is made up of several fields:
+The :c:type:`struct drm_global_reference <drm_global_reference>` is made
+up of several fields:

 .. code-block:: c

-   struct ttm_global_reference {
+   struct drm_global_reference {
           enum ttm_global_types global_type;
           size_t size;
           void *object;
-          int (*init) (struct ttm_global_reference *);
-          void (*release) (struct ttm_global_reference *);
+          int (*init) (struct drm_global_reference *);
+          void (*release) (struct drm_global_reference *);
   };

@@ -76,6 +77,12 @@ ttm_bo_global_release(), respectively. Also, like the previous
 object, ttm_global_item_ref() is used to create an initial reference
 count for the TTM, which will call your initialization function.

+See the radeon_ttm.c file for an example of usage.
+
+.. kernel-doc:: drivers/gpu/drm/drm_global.c
+   :export:
 The Graphics Execution Manager (GEM)
 ====================================
@@ -303,6 +310,17 @@ created.
 Drivers that want to map the GEM object upfront instead of handling page
 faults can implement their own mmap file operation handler.

+For platforms without MMU the GEM core provides a helper method
+:c:func:`drm_gem_cma_get_unmapped_area`. The mmap() routines will call
+this to get a proposed address for the mapping.
+
+To use :c:func:`drm_gem_cma_get_unmapped_area`, drivers must fill the
+struct :c:type:`struct file_operations <file_operations>` get_unmapped_area
+field with a pointer on :c:func:`drm_gem_cma_get_unmapped_area`.
+
+More detailed information about get_unmapped_area can be found in
+Documentation/nommu-mmap.txt
 Memory Coherency
 ----------------
@@ -442,7 +460,7 @@ LRU Scan/Eviction Support
 -------------------------

 .. kernel-doc:: drivers/gpu/drm/drm_mm.c
-   :doc: lru scan roaster
+   :doc: lru scan roster

 DRM MM Range Allocator Function References
 ------------------------------------------
......
@@ -156,8 +156,12 @@ other hand, a driver requires shared state between clients which is
 visible to user-space and accessible beyond open-file boundaries, they
 cannot support render nodes.

+Testing and validation
+======================
+
 Validating changes with IGT
-===========================
+---------------------------

 There's a collection of tests that aims to cover the whole functionality of
 DRM drivers and that can be used to check that changes to DRM drivers or the
@@ -193,6 +197,12 @@ run-tests.sh is a wrapper around piglit that will execute the tests matching
 the -t options. A report in HTML format will be available in
 ./results/html/index.html. Results can be compared with piglit.

+Display CRC Support
+-------------------
+
+.. kernel-doc:: drivers/gpu/drm/drm_debugfs_crc.c
+   :doc: CRC ABI
 VBlank event handling
 =====================
@@ -209,16 +219,3 @@ DRM_IOCTL_MODESET_CTL
 mode setting, since on many devices the vertical blank counter is
 reset to 0 at some point during modeset. Modern drivers should not
 call this any more since with kernel mode setting it is a no-op.
-
-This second part of the GPU Driver Developer's Guide documents driver
-code, implementation details and also all the driver-specific userspace
-interfaces. Especially since all hardware-acceleration interfaces to
-userspace are driver specific for efficiency and other reasons these
-interfaces can be rather substantial. Hence every driver has its own
-chapter.
-
-Testing and validation
-======================
-
-.. kernel-doc:: drivers/gpu/drm/drm_debugfs_crc.c
-   :doc: CRC ABI
@@ -23,13 +23,12 @@ For consistency this documentation uses American English. Abbreviations
 are written as all-uppercase, for example: DRM, KMS, IOCTL, CRTC, and so
 on. To aid in reading, documentations make full use of the markup
 characters kerneldoc provides: @parameter for function parameters,
-@member for structure members, &structure to reference structures and
-function() for functions. These all get automatically hyperlinked if
-kerneldoc for the referenced objects exists. When referencing entries in
-function vtables please use ->vfunc(). Note that kerneldoc does not
-support referencing struct members directly, so please add a reference
-to the vtable struct somewhere in the same paragraph or at least
-section.
+@member for structure members (within the same structure), &struct structure to
+reference structures and function() for functions. These all get automatically
+hyperlinked if kerneldoc for the referenced objects exists. When referencing
+entries in function vtables (and structure members in general) please use
+&vtable_name.vfunc. Unfortunately this does not yet yield a direct link to the
+member, only the structure.

 Except in special situations (to separate locked from unlocked variants)
 locking requirements for functions aren't documented in the kerneldoc.
@@ -49,3 +48,5 @@ section name should be all upper-case or not, and whether it should end
 in a colon or not. Go with the file-local style. Other common section
 names are "Notes" with information for dangerous or tricky corner cases,
 and "FIXME" where the interface could be cleaned up.
+
+Also read the :ref:`guidelines for the kernel documentation at large <doc_guide>`.
#include <asm-generic/vga.h>
@@ -128,7 +128,7 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
  * DOC: fence polling
  *
  * To support cross-device and cross-driver synchronization of buffer access
- * implicit fences (represented internally in the kernel with struct &fence) can
+ * implicit fences (represented internally in the kernel with &struct fence) can
  * be attached to a &dma_buf. The glue for that and a few related things are
  * provided in the &reservation_object structure.
  *
@@ -373,7 +373,7 @@ static inline int is_dma_buf_file(struct file *file)
  * Additionally, provide a name string for exporter; useful in debugging.
  *
  * @exp_info:	[in]	holds all the export related information provided
- *			by the exporter. see struct &dma_buf_export_info
+ *			by the exporter. see &struct dma_buf_export_info
  *			for further details.
  *
  * Returns, on success, a newly created dma_buf object, which wraps the
@@ -516,9 +516,8 @@ EXPORT_SYMBOL_GPL(dma_buf_get);
  * Uses file's refcounting done implicitly by fput().
  *
  * If, as a result of this call, the refcount becomes 0, the 'release' file
- * operation related to this fd is called. It calls the release operation of
- * struct &dma_buf_ops in turn, and frees the memory allocated for dmabuf when
- * exported.
+ * operation related to this fd is called. It calls &dma_buf_ops.release vfunc
+ * in turn, and frees the memory allocated for dmabuf when exported.
  */
 void dma_buf_put(struct dma_buf *dmabuf)
 {
......
@@ -281,6 +281,31 @@ int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
 }
 EXPORT_SYMBOL(dma_fence_add_callback);

+/**
+ * dma_fence_get_status - returns the status upon completion
+ * @fence: [in]	the dma_fence to query
+ *
+ * This wraps dma_fence_get_status_locked() to return the error status
+ * condition on a signaled fence. See dma_fence_get_status_locked() for more
+ * details.
+ *
+ * Returns 0 if the fence has not yet been signaled, 1 if the fence has
+ * been signaled without an error condition, or a negative error code
+ * if the fence has been completed in err.
+ */
+int dma_fence_get_status(struct dma_fence *fence)
+{
+	unsigned long flags;
+	int status;
+
+	spin_lock_irqsave(fence->lock, flags);
+	status = dma_fence_get_status_locked(fence);
+	spin_unlock_irqrestore(fence->lock, flags);
+
+	return status;
+}
+EXPORT_SYMBOL(dma_fence_get_status);
+
 /**
  * dma_fence_remove_callback - remove a callback from the signaling list
  * @fence:	[in]	the fence to wait on
@@ -541,6 +566,7 @@ dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
 	fence->context = context;
 	fence->seqno = seqno;
 	fence->flags = 0UL;
+	fence->error = 0;
 	trace_dma_fence_init(fence);
 }
......
@@ -62,30 +62,29 @@ void sync_file_debug_remove(struct sync_file *sync_file)
 static const char *sync_status_str(int status)
 {
-	if (status == 0)
-		return "signaled";
+	if (status < 0)
+		return "error";

 	if (status > 0)
-		return "active";
+		return "signaled";

-	return "error";
+	return "active";
 }
 static void sync_print_fence(struct seq_file *s,
 			     struct dma_fence *fence, bool show)
 {
-	int status = 1;
 	struct sync_timeline *parent = dma_fence_parent(fence);
+	int status;

-	if (dma_fence_is_signaled_locked(fence))
-		status = fence->status;
+	status = dma_fence_get_status_locked(fence);

 	seq_printf(s, "  %s%sfence %s",
 		   show ? parent->name : "",
 		   show ? "_" : "",
 		   sync_status_str(status));

-	if (status <= 0) {
+	if (status) {
 		struct timespec64 ts64 =
 			ktime_to_timespec64(fence->timestamp);
@@ -136,7 +135,7 @@ static void sync_print_sync_file(struct seq_file *s,
 	int i;

 	seq_printf(s, "[%p] %s: %s\n", sync_file, sync_file->name,
-		   sync_status_str(!dma_fence_is_signaled(sync_file->fence)));
+		   sync_status_str(dma_fence_get_status(sync_file->fence)));

 	if (dma_fence_is_array(sync_file->fence)) {
 		struct dma_fence_array *array = to_dma_fence_array(sync_file->fence);
......
@@ -373,10 +373,8 @@ static void sync_fill_fence_info(struct dma_fence *fence,
 		sizeof(info->obj_name));
 	strlcpy(info->driver_name, fence->ops->get_driver_name(fence),
 		sizeof(info->driver_name));
-	if (dma_fence_is_signaled(fence))
-		info->status = fence->status >= 0 ? 1 : fence->status;
-	else
-		info->status = 0;
+	info->status = dma_fence_get_status(fence);
 	info->timestamp_ns = ktime_to_ns(fence->timestamp);
 }
......
@@ -6,7 +6,7 @@
 #
 menuconfig DRM
 	tristate "Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)"
-	depends on (AGP || AGP=n) && !EMULATED_CMPXCHG && MMU && HAS_DMA
+	depends on (AGP || AGP=n) && !EMULATED_CMPXCHG && HAS_DMA
 	select HDMI
 	select FB_CMDLINE
 	select I2C
@@ -113,7 +113,7 @@ config DRM_LOAD_EDID_FIRMWARE
 config DRM_TTM
 	tristate
-	depends on DRM
+	depends on DRM && MMU
 	help
 	  GPU memory management subsystem for devices with multiple
 	  GPU memory types. Will be enabled automatically if a device driver
@@ -136,13 +136,17 @@ config DRM_KMS_CMA_HELPER
 	help
 	  Choose this if you need the KMS CMA helper functions

+config DRM_VM
+	bool
+	depends on DRM
+
 source "drivers/gpu/drm/i2c/Kconfig"

 source "drivers/gpu/drm/arm/Kconfig"

 config DRM_RADEON
 	tristate "ATI Radeon"
-	depends on DRM && PCI
+	depends on DRM && PCI && MMU
 	select FW_LOADER
 	select DRM_KMS_HELPER
 	select DRM_TTM
@@ -162,7 +166,7 @@ source "drivers/gpu/drm/radeon/Kconfig"
 config DRM_AMDGPU
 	tristate "AMD GPU"
-	depends on DRM && PCI
+	depends on DRM && PCI && MMU
 	select FW_LOADER
 	select DRM_KMS_HELPER
 	select DRM_TTM
@@ -264,6 +268,7 @@ source "drivers/gpu/drm/meson/Kconfig"
 menuconfig DRM_LEGACY
 	bool "Enable legacy drivers (DANGEROUS)"
 	depends on DRM
+	select DRM_VM
 	help
 	  Enable legacy DRI1 drivers. Those drivers expose unsafe and dangerous
 	  APIs to user-space, which can be used to circumvent access
......
@@ -5,7 +5,7 @@
 drm-y := drm_auth.o drm_bufs.o drm_cache.o \
 		drm_context.o drm_dma.o \
 		drm_fops.o drm_gem.o drm_ioctl.o drm_irq.o \
-		drm_lock.o drm_memory.o drm_drv.o drm_vm.o \
+		drm_lock.o drm_memory.o drm_drv.o \
 		drm_scatter.o drm_pci.o \
 		drm_platform.o drm_sysfs.o drm_hashtab.o drm_mm.o \
 		drm_crtc.o drm_fourcc.o drm_modes.o drm_edid.o \
@@ -19,6 +19,7 @@ drm-y := drm_auth.o drm_bufs.o drm_cache.o \
 		drm_dumb_buffers.o drm_mode_config.o

 drm-$(CONFIG_DRM_LIB_RANDOM) += lib/drm_random.o
+drm-$(CONFIG_DRM_VM) += drm_vm.o
 drm-$(CONFIG_COMPAT) += drm_ioc32.o
 drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o
 drm-$(CONFIG_PCI) += ati_pcigart.o
......
@@ -1711,7 +1711,7 @@ extern const struct drm_ioctl_desc amdgpu_ioctls_kms[];
 extern const int amdgpu_max_kms_ioctl;

 int amdgpu_driver_load_kms(struct drm_device *dev, unsigned long flags);
-int amdgpu_driver_unload_kms(struct drm_device *dev);
+void amdgpu_driver_unload_kms(struct drm_device *dev);
 void amdgpu_driver_lastclose_kms(struct drm_device *dev);
 int amdgpu_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv);
 void amdgpu_driver_postclose_kms(struct drm_device *dev,
......
@@ -235,9 +235,10 @@ static void amdgpu_gtt_mgr_debug(struct ttm_mem_type_manager *man,
 			  const char *prefix)
 {
 	struct amdgpu_gtt_mgr *mgr = man->priv;
+	struct drm_printer p = drm_debug_printer(prefix);

 	spin_lock(&mgr->lock);
-	drm_mm_debug_table(&mgr->mm, prefix);
+	drm_mm_print(&mgr->mm, &p);
 	spin_unlock(&mgr->lock);
 }
......
@@ -50,12 +50,12 @@ static inline bool amdgpu_has_atpx(void) { return false; }
  * This is the main unload function for KMS (all asics).
  * Returns 0 on success.
  */
-int amdgpu_driver_unload_kms(struct drm_device *dev)
+void amdgpu_driver_unload_kms(struct drm_device *dev)
 {
 	struct amdgpu_device *adev = dev->dev_private;

 	if (adev == NULL)
-		return 0;
+		return;

 	if (adev->rmmio == NULL)
 		goto done_free;
@@ -74,7 +74,6 @@ int amdgpu_driver_unload_kms(struct drm_device *dev)
 done_free:
 	kfree(adev);
 	dev->dev_private = NULL;
-	return 0;
 }

 /**
......
@@ -1482,18 +1482,18 @@ static int amdgpu_mm_dump_table(struct seq_file *m, void *data)
 	struct drm_device *dev = node->minor->dev;
 	struct amdgpu_device *adev = dev->dev_private;
 	struct drm_mm *mm = (struct drm_mm *)adev->mman.bdev.man[ttm_pl].priv;
-	int ret;
 	struct ttm_bo_global *glob = adev->mman.bdev.glob;
+	struct drm_printer p = drm_seq_file_printer(m);

 	spin_lock(&glob->lru_lock);
-	ret = drm_mm_dump_table(m, mm);
+	drm_mm_print(mm, &p);
 	spin_unlock(&glob->lru_lock);
 	if (ttm_pl == TTM_PL_VRAM)
 		seq_printf(m, "man size:%llu pages, ram usage:%lluMB, vis usage:%lluMB\n",
 			   adev->mman.bdev.man[ttm_pl].size,
 			   (u64)atomic64_read(&adev->vram_usage) >> 20,
 			   (u64)atomic64_read(&adev->vram_vis_usage) >> 20);
-	return ret;
+	return 0;
 }
......
@@ -207,9 +207,10 @@ static void amdgpu_vram_mgr_debug(struct ttm_mem_type_manager *man,
 				  const char *prefix)
 {
 	struct amdgpu_vram_mgr *mgr = man->priv;
+	struct drm_printer p = drm_debug_printer(prefix);

 	spin_lock(&mgr->lock);
-	drm_mm_debug_table(&mgr->mm, prefix);
+	drm_mm_print(&mgr->mm, &p);
 	spin_unlock(&mgr->lock);
 }
......
@@ -4,3 +4,5 @@ armada-y += armada_510.o
 armada-$(CONFIG_DEBUG_FS) += armada_debugfs.o

 obj-$(CONFIG_DRM_ARMADA) := armada.o
+
+CFLAGS_armada_trace.o := -I$(src)
@@ -19,13 +19,13 @@ static int armada_debugfs_gem_linear_show(struct seq_file *m, void *data)
 	struct drm_info_node *node = m->private;
 	struct drm_device *dev = node->minor->dev;
 	struct armada_private *priv = dev->dev_private;
-	int ret;
+	struct drm_printer p = drm_seq_file_printer(m);

 	mutex_lock(&priv->linear_lock);
-	ret = drm_mm_dump_table(m, &priv->linear);
+	drm_mm_print(&priv->linear, &p);
 	mutex_unlock(&priv->linear_lock);

-	return ret;
+	return 0;
 }

 static int armada_debugfs_reg_show(struct seq_file *m, void *data)
......
@@ -203,12 +203,6 @@ static int armada_drm_bind(struct device *dev)
 	armada_drm_debugfs_init(priv->drm.primary);
 #endif

-	DRM_INFO("Initialized %s %d.%d.%d %s for %s on minor %d\n",
-		 armada_drm_driver.name, armada_drm_driver.major,
-		 armada_drm_driver.minor, armada_drm_driver.patchlevel,
-		 armada_drm_driver.date, dev_name(dev),
-		 priv->drm.primary->index);
-
 	return 0;

 err_poll:
......
 config DRM_AST
 	tristate "AST server chips"
-	depends on DRM && PCI
+	depends on DRM && PCI && MMU
 	select DRM_TTM
 	select DRM_KMS_HELPER
 	select DRM_TTM
......
@@ -122,7 +122,7 @@ struct ast_private {
 };

 int ast_driver_load(struct drm_device *dev, unsigned long flags);
-int ast_driver_unload(struct drm_device *dev);
+void ast_driver_unload(struct drm_device *dev);

 struct ast_gem_object;
......
@@ -479,7 +479,7 @@ int ast_driver_load(struct drm_device *dev, unsigned long flags)
 	return ret;
 }

-int ast_driver_unload(struct drm_device *dev)
+void ast_driver_unload(struct drm_device *dev)
 {
 	struct ast_private *ast = dev->dev_private;

@@ -492,7 +492,6 @@ int ast_driver_unload(struct drm_device *dev)
 	pci_iounmap(dev->pdev, ast->ioregs);
 	pci_iounmap(dev->pdev, ast->regs);
 	kfree(ast);
-	return 0;
 }

 int ast_gem_create(struct drm_device *dev,
......
 config DRM_BOCHS
 	tristate "DRM Support for bochs dispi vga interface (qemu stdvga)"
-	depends on DRM && PCI
+	depends on DRM && PCI && MMU
 	select DRM_KMS_HELPER
 	select DRM_TTM
 	help
......
@@ -19,7 +19,7 @@ MODULE_PARM_DESC(fbdev, "register fbdev device");
 /* ---------------------------------------------------------------------- */
 /* drm interface */

-static int bochs_unload(struct drm_device *dev)
+static void bochs_unload(struct drm_device *dev)
 {
 	struct bochs_device *bochs = dev->dev_private;

@@ -29,7 +29,6 @@ static int bochs_unload(struct drm_device *dev)
 	bochs_hw_fini(dev);
 	kfree(bochs);
 	dev->dev_private = NULL;
-	return 0;
 }

 static int bochs_load(struct drm_device *dev, unsigned long flags)
......
 config DRM_CIRRUS_QEMU
 	tristate "Cirrus driver for QEMU emulated device"
-	depends on DRM && PCI
+	depends on DRM && PCI && MMU
 	select DRM_KMS_HELPER
 	select DRM_TTM
 	help
......
@@ -231,7 +231,7 @@ irqreturn_t cirrus_driver_irq_handler(int irq, void *arg);
 /* cirrus_kms.c */
 int cirrus_driver_load(struct drm_device *dev, unsigned long flags);
-int cirrus_driver_unload(struct drm_device *dev);
+void cirrus_driver_unload(struct drm_device *dev);

 extern struct drm_ioctl_desc cirrus_ioctls[];
 extern int cirrus_max_ioctl;
......
@@ -208,18 +208,17 @@ int cirrus_driver_load(struct drm_device *dev, unsigned long flags)
 	return r;
 }

-int cirrus_driver_unload(struct drm_device *dev)
+void cirrus_driver_unload(struct drm_device *dev)
 {
 	struct cirrus_device *cdev = dev->dev_private;

 	if (cdev == NULL)
-		return 0;
+		return;
 	cirrus_modeset_fini(cdev);
 	cirrus_mm_fini(cdev);
 	cirrus_device_fini(cdev);
 	kfree(cdev);
 	dev->dev_private = NULL;
-	return 0;
 }

 int cirrus_gem_create(struct drm_device *dev,
......
@@ -35,19 +35,14 @@
 #include "drm_crtc_internal.h"

-static void crtc_commit_free(struct kref *kref)
+void __drm_crtc_commit_free(struct kref *kref)
 {
 	struct drm_crtc_commit *commit =
 		container_of(kref, struct drm_crtc_commit, ref);

 	kfree(commit);
 }
-
-void drm_crtc_commit_put(struct drm_crtc_commit *commit)
-{
-	kref_put(&commit->ref, crtc_commit_free);
-}
-EXPORT_SYMBOL(drm_crtc_commit_put);
+EXPORT_SYMBOL(__drm_crtc_commit_free);

 /**
  * drm_atomic_state_default_release -
@@ -1599,10 +1594,8 @@ EXPORT_SYMBOL(drm_atomic_check_only);
  * more locks but encountered a deadlock. The caller must then do the usual w/w
  * backoff dance and restart. All other errors are fatal.
  *
- * Also note that on successful execution ownership of @state is transferred
- * from the caller of this function to the function itself. The caller must not
- * free or in any other way access @state. If the function fails then the caller
- * must clean up @state itself.
+ * This function will take its own reference on @state.
+ * Callers should always release their reference with drm_atomic_state_put().
  *
  * Returns:
  * 0 on success, negative error code on failure.
@@ -1630,10 +1623,8 @@ EXPORT_SYMBOL(drm_atomic_commit);
  * more locks but encountered a deadlock. The caller must then do the usual w/w
  * backoff dance and restart. All other errors are fatal.
  *
- * Also note that on successful execution ownership of @state is transferred
- * from the caller of this function to the function itself. The caller must not
- * free or in any other way access @state. If the function fails then the caller
- * must clean up @state itself.
+ * This function will take its own reference on @state.
+ * Callers should always release their reference with drm_atomic_state_put().
  *
  * Returns:
  * 0 on success, negative error code on failure.
@@ -1882,7 +1873,7 @@ EXPORT_SYMBOL(drm_atomic_clean_old_fb);
  * As a contrast, with implicit fencing the kernel keeps track of any
  * ongoing rendering, and automatically ensures that the atomic update waits
  * for any pending rendering to complete. For shared buffers represented with
- * a struct &dma_buf this is tracked in &reservation_object structures.
+ * a &struct dma_buf this is tracked in &reservation_object structures.
  * Implicit syncing is how Linux traditionally worked (e.g. DRI2/3 on X.org),
  * whereas explicit fencing is what Android wants.
  *
@@ -1898,7 +1889,7 @@ EXPORT_SYMBOL(drm_atomic_clean_old_fb);
* it will only check if the Sync File is a valid one. * it will only check if the Sync File is a valid one.
* *
* On the driver side the fence is stored on the @fence parameter of * On the driver side the fence is stored on the @fence parameter of
* struct &drm_plane_state. Drivers which also support implicit fencing * &struct drm_plane_state. Drivers which also support implicit fencing
* should set the implicit fence using drm_atomic_set_fence_for_plane(), * should set the implicit fence using drm_atomic_set_fence_for_plane(),
* to make sure there's consistent behaviour between drivers in precedence * to make sure there's consistent behaviour between drivers in precedence
* of implicit vs. explicit fencing. * of implicit vs. explicit fencing.
...@@ -1917,7 +1908,7 @@ EXPORT_SYMBOL(drm_atomic_clean_old_fb); ...@@ -1917,7 +1908,7 @@ EXPORT_SYMBOL(drm_atomic_clean_old_fb);
* DRM_MODE_ATOMIC_TEST_ONLY flag the out fence will also be set to -1. * DRM_MODE_ATOMIC_TEST_ONLY flag the out fence will also be set to -1.
* *
* Note that out-fences don't have a special interface to drivers and are * Note that out-fences don't have a special interface to drivers and are
* internally represented by a struct &drm_pending_vblank_event in struct * internally represented by a &struct drm_pending_vblank_event in struct
* &drm_crtc_state, which is also used by the nonblocking atomic commit * &drm_crtc_state, which is also used by the nonblocking atomic commit
* helpers and for the DRM event handling for existing userspace. * helpers and for the DRM event handling for existing userspace.
*/ */
......
...@@ -56,9 +56,9 @@ ...@@ -56,9 +56,9 @@
* implement these functions themselves but must use the provided helpers. * implement these functions themselves but must use the provided helpers.
* *
* The atomic helper uses the same function table structures as all other * The atomic helper uses the same function table structures as all other
* modesetting helpers. See the documentation for struct &drm_crtc_helper_funcs, * modesetting helpers. See the documentation for &struct drm_crtc_helper_funcs,
* struct &drm_encoder_helper_funcs and struct &drm_connector_helper_funcs. It * struct &drm_encoder_helper_funcs and &struct drm_connector_helper_funcs. It
* also shares the struct &drm_plane_helper_funcs function table with the plane * also shares the &struct drm_plane_helper_funcs function table with the plane
* helpers. * helpers.
*/ */
static void static void
...@@ -1355,6 +1355,15 @@ static int stall_checks(struct drm_crtc *crtc, bool nonblock) ...@@ -1355,6 +1355,15 @@ static int stall_checks(struct drm_crtc *crtc, bool nonblock)
return ret < 0 ? ret : 0; return ret < 0 ? ret : 0;
} }
void release_crtc_commit(struct completion *completion)
{
struct drm_crtc_commit *commit = container_of(completion,
typeof(*commit),
flip_done);
drm_crtc_commit_put(commit);
}
/** /**
* drm_atomic_helper_setup_commit - setup possibly nonblocking commit * drm_atomic_helper_setup_commit - setup possibly nonblocking commit
* @state: new modeset state to be committed * @state: new modeset state to be committed
...@@ -1369,7 +1378,7 @@ static int stall_checks(struct drm_crtc *crtc, bool nonblock) ...@@ -1369,7 +1378,7 @@ static int stall_checks(struct drm_crtc *crtc, bool nonblock)
* actually committing the hardware state, and for nonblocking commits this call * actually committing the hardware state, and for nonblocking commits this call
* must be placed in the async worker. See also drm_atomic_helper_swap_state() * must be placed in the async worker. See also drm_atomic_helper_swap_state()
 * and its stall parameter, for when a driver's commit hooks look at the * and its stall parameter, for when a driver's commit hooks look at the
* ->state pointers of struct &drm_crtc, &drm_plane or &drm_connector directly. * ->state pointers of &struct drm_crtc, &drm_plane or &drm_connector directly.
* *
* Completion of the hardware commit step must be signalled using * Completion of the hardware commit step must be signalled using
* drm_atomic_helper_commit_hw_done(). After this step the driver is not allowed * drm_atomic_helper_commit_hw_done(). After this step the driver is not allowed
...@@ -1447,6 +1456,8 @@ int drm_atomic_helper_setup_commit(struct drm_atomic_state *state, ...@@ -1447,6 +1456,8 @@ int drm_atomic_helper_setup_commit(struct drm_atomic_state *state,
} }
crtc_state->event->base.completion = &commit->flip_done; crtc_state->event->base.completion = &commit->flip_done;
crtc_state->event->base.completion_release = release_crtc_commit;
drm_crtc_commit_get(commit);
} }
return 0; return 0;
...@@ -3286,11 +3297,6 @@ EXPORT_SYMBOL(drm_atomic_helper_duplicate_state); ...@@ -3286,11 +3297,6 @@ EXPORT_SYMBOL(drm_atomic_helper_duplicate_state);
void void
__drm_atomic_helper_connector_destroy_state(struct drm_connector_state *state) __drm_atomic_helper_connector_destroy_state(struct drm_connector_state *state)
{ {
/*
* This is currently a placeholder so that drivers that subclass the
* state will automatically do the right thing if code is ever added
* to this function.
*/
if (state->crtc) if (state->crtc)
drm_connector_unreference(state->connector); drm_connector_unreference(state->connector);
} }
......
...@@ -35,8 +35,8 @@ ...@@ -35,8 +35,8 @@
/** /**
* DOC: master and authentication * DOC: master and authentication
* *
* struct &drm_master is used to track groups of clients with open * &struct drm_master is used to track groups of clients with open
 * primary/legacy device nodes. For every struct &drm_file which has at * primary/legacy device nodes. For every &struct drm_file which has at
 * least once successfully become the device master (either through the * least once successfully become the device master (either through the
 * SET_MASTER IOCTL, or implicitly through opening the primary device node when * SET_MASTER IOCTL, or implicitly through opening the primary device node when
 * no one else is the current master at that time) there exists one &drm_master. * no one else is the current master at that time) there exists one &drm_master.
...@@ -294,7 +294,7 @@ EXPORT_SYMBOL(drm_is_current_master); ...@@ -294,7 +294,7 @@ EXPORT_SYMBOL(drm_is_current_master);
/** /**
* drm_master_get - reference a master pointer * drm_master_get - reference a master pointer
* @master: struct &drm_master * @master: &struct drm_master
* *
* Increments the reference count of @master and returns a pointer to @master. * Increments the reference count of @master and returns a pointer to @master.
*/ */
...@@ -322,7 +322,7 @@ static void drm_master_destroy(struct kref *kref) ...@@ -322,7 +322,7 @@ static void drm_master_destroy(struct kref *kref)
/** /**
* drm_master_put - unreference and clear a master pointer * drm_master_put - unreference and clear a master pointer
* @master: pointer to a pointer of struct &drm_master * @master: pointer to a pointer of &struct drm_master
* *
* This decrements the &drm_master behind @master and sets it to NULL. * This decrements the &drm_master behind @master and sets it to NULL.
*/ */
......
...@@ -33,7 +33,7 @@ ...@@ -33,7 +33,7 @@
/** /**
* DOC: overview * DOC: overview
* *
* struct &drm_bridge represents a device that hangs on to an encoder. These are * &struct drm_bridge represents a device that hangs on to an encoder. These are
* handy when a regular &drm_encoder entity isn't enough to represent the entire * handy when a regular &drm_encoder entity isn't enough to represent the entire
* encoder chain. * encoder chain.
* *
...@@ -55,7 +55,7 @@ ...@@ -55,7 +55,7 @@
* just provide additional hooks to get the desired output at the end of the * just provide additional hooks to get the desired output at the end of the
* encoder chain. * encoder chain.
* *
* Bridges can also be chained up using the next pointer in struct &drm_bridge. * Bridges can also be chained up using the &drm_bridge.next pointer.
* *
* Both legacy CRTC helpers and the new atomic modeset helpers support bridges. * Both legacy CRTC helpers and the new atomic modeset helpers support bridges.
*/ */
...@@ -179,7 +179,7 @@ void drm_bridge_detach(struct drm_bridge *bridge) ...@@ -179,7 +179,7 @@ void drm_bridge_detach(struct drm_bridge *bridge)
* @mode: desired mode to be set for the bridge * @mode: desired mode to be set for the bridge
* @adjusted_mode: updated mode that works for this bridge * @adjusted_mode: updated mode that works for this bridge
* *
* Calls ->mode_fixup() &drm_bridge_funcs op for all the bridges in the * Calls &drm_bridge_funcs.mode_fixup for all the bridges in the
* encoder chain, starting from the first bridge to the last. * encoder chain, starting from the first bridge to the last.
* *
* Note: the bridge passed should be the one closest to the encoder * Note: the bridge passed should be the one closest to the encoder
...@@ -206,11 +206,10 @@ bool drm_bridge_mode_fixup(struct drm_bridge *bridge, ...@@ -206,11 +206,10 @@ bool drm_bridge_mode_fixup(struct drm_bridge *bridge,
EXPORT_SYMBOL(drm_bridge_mode_fixup); EXPORT_SYMBOL(drm_bridge_mode_fixup);
/** /**
* drm_bridge_disable - calls ->disable() &drm_bridge_funcs op for all * drm_bridge_disable - disables all bridges in the encoder chain
* bridges in the encoder chain.
* @bridge: bridge control structure * @bridge: bridge control structure
* *
* Calls ->disable() &drm_bridge_funcs op for all the bridges in the encoder * Calls &drm_bridge_funcs.disable op for all the bridges in the encoder
* chain, starting from the last bridge to the first. These are called before * chain, starting from the last bridge to the first. These are called before
* calling the encoder's prepare op. * calling the encoder's prepare op.
* *
...@@ -229,11 +228,10 @@ void drm_bridge_disable(struct drm_bridge *bridge) ...@@ -229,11 +228,10 @@ void drm_bridge_disable(struct drm_bridge *bridge)
EXPORT_SYMBOL(drm_bridge_disable); EXPORT_SYMBOL(drm_bridge_disable);
/** /**
* drm_bridge_post_disable - calls ->post_disable() &drm_bridge_funcs op for * drm_bridge_post_disable - cleans up after disabling all bridges in the encoder chain
* all bridges in the encoder chain.
* @bridge: bridge control structure * @bridge: bridge control structure
* *
* Calls ->post_disable() &drm_bridge_funcs op for all the bridges in the * Calls &drm_bridge_funcs.post_disable op for all the bridges in the
* encoder chain, starting from the first bridge to the last. These are called * encoder chain, starting from the first bridge to the last. These are called
* after completing the encoder's prepare op. * after completing the encoder's prepare op.
* *
...@@ -258,7 +256,7 @@ EXPORT_SYMBOL(drm_bridge_post_disable); ...@@ -258,7 +256,7 @@ EXPORT_SYMBOL(drm_bridge_post_disable);
* @mode: desired mode to be set for the bridge * @mode: desired mode to be set for the bridge
* @adjusted_mode: updated mode that works for this bridge * @adjusted_mode: updated mode that works for this bridge
* *
* Calls ->mode_set() &drm_bridge_funcs op for all the bridges in the * Calls &drm_bridge_funcs.mode_set op for all the bridges in the
* encoder chain, starting from the first bridge to the last. * encoder chain, starting from the first bridge to the last.
* *
* Note: the bridge passed should be the one closest to the encoder * Note: the bridge passed should be the one closest to the encoder
...@@ -278,11 +276,11 @@ void drm_bridge_mode_set(struct drm_bridge *bridge, ...@@ -278,11 +276,11 @@ void drm_bridge_mode_set(struct drm_bridge *bridge,
EXPORT_SYMBOL(drm_bridge_mode_set); EXPORT_SYMBOL(drm_bridge_mode_set);
/** /**
* drm_bridge_pre_enable - calls ->pre_enable() &drm_bridge_funcs op for all * drm_bridge_pre_enable - prepares for enabling all
* bridges in the encoder chain. * bridges in the encoder chain
* @bridge: bridge control structure * @bridge: bridge control structure
* *
* Calls ->pre_enable() &drm_bridge_funcs op for all the bridges in the encoder * Calls &drm_bridge_funcs.pre_enable op for all the bridges in the encoder
* chain, starting from the last bridge to the first. These are called * chain, starting from the last bridge to the first. These are called
* before calling the encoder's commit op. * before calling the encoder's commit op.
* *
...@@ -301,11 +299,10 @@ void drm_bridge_pre_enable(struct drm_bridge *bridge) ...@@ -301,11 +299,10 @@ void drm_bridge_pre_enable(struct drm_bridge *bridge)
EXPORT_SYMBOL(drm_bridge_pre_enable); EXPORT_SYMBOL(drm_bridge_pre_enable);
/** /**
* drm_bridge_enable - calls ->enable() &drm_bridge_funcs op for all bridges * drm_bridge_enable - enables all bridges in the encoder chain
* in the encoder chain.
* @bridge: bridge control structure * @bridge: bridge control structure
* *
* Calls ->enable() &drm_bridge_funcs op for all the bridges in the encoder * Calls &drm_bridge_funcs.enable op for all the bridges in the encoder
* chain, starting from the first bridge to the last. These are called * chain, starting from the first bridge to the last. These are called
* after completing the encoder's commit op. * after completing the encoder's commit op.
* *
......
...@@ -36,7 +36,7 @@ ...@@ -36,7 +36,7 @@
 * “DEGAMMA_LUT”: * “DEGAMMA_LUT”:
* Blob property to set the degamma lookup table (LUT) mapping pixel data * Blob property to set the degamma lookup table (LUT) mapping pixel data
* from the framebuffer before it is given to the transformation matrix. * from the framebuffer before it is given to the transformation matrix.
* The data is interpreted as an array of struct &drm_color_lut elements. * The data is interpreted as an array of &struct drm_color_lut elements.
* Hardware might choose not to use the full precision of the LUT elements * Hardware might choose not to use the full precision of the LUT elements
* nor use all the elements of the LUT (for example the hardware might * nor use all the elements of the LUT (for example the hardware might
* choose to interpolate between LUT[0] and LUT[4]). * choose to interpolate between LUT[0] and LUT[4]).
...@@ -65,7 +65,7 @@ ...@@ -65,7 +65,7 @@
* “GAMMA_LUT”: * “GAMMA_LUT”:
* Blob property to set the gamma lookup table (LUT) mapping pixel data * Blob property to set the gamma lookup table (LUT) mapping pixel data
* after the transformation matrix to data sent to the connector. The * after the transformation matrix to data sent to the connector. The
* data is interpreted as an array of struct &drm_color_lut elements. * data is interpreted as an array of &struct drm_color_lut elements.
* Hardware might choose not to use the full precision of the LUT elements * Hardware might choose not to use the full precision of the LUT elements
* nor use all the elements of the LUT (for example the hardware might * nor use all the elements of the LUT (for example the hardware might
* choose to interpolate between LUT[0] and LUT[4]). * choose to interpolate between LUT[0] and LUT[4]).
......
...@@ -49,7 +49,7 @@ ...@@ -49,7 +49,7 @@
* Connectors must be attached to an encoder to be used. For devices that map * Connectors must be attached to an encoder to be used. For devices that map
* connectors to encoders 1:1, the connector should be attached at * connectors to encoders 1:1, the connector should be attached at
* initialization time with a call to drm_mode_connector_attach_encoder(). The * initialization time with a call to drm_mode_connector_attach_encoder(). The
* driver must also set the struct &drm_connector encoder field to point to the * driver must also set the &struct drm_connector encoder field to point to the
* attached encoder. * attached encoder.
* *
* For connectors which are not fixed (like built-in panels) the driver needs to * For connectors which are not fixed (like built-in panels) the driver needs to
......
...@@ -46,6 +46,29 @@ ...@@ -46,6 +46,29 @@
#include "drm_crtc_internal.h" #include "drm_crtc_internal.h"
#include "drm_internal.h" #include "drm_internal.h"
/**
* drm_crtc_from_index - find the registered CRTC at an index
* @dev: DRM device
 * @idx: index of the registered CRTC to find
 *
 * Given a CRTC index, return the registered CRTC from the DRM device's
 * list of CRTCs with the matching index. This is the inverse of drm_crtc_index().
 * It's useful in vblank callbacks (like &drm_driver.enable_vblank or
 * &drm_driver.disable_vblank), since those still deal with indices instead
 * of pointers to &struct drm_crtc.
*/
struct drm_crtc *drm_crtc_from_index(struct drm_device *dev, int idx)
{
struct drm_crtc *crtc;
drm_for_each_crtc(crtc, dev)
if (idx == crtc->index)
return crtc;
return NULL;
}
EXPORT_SYMBOL(drm_crtc_from_index);
/** /**
* drm_crtc_force_disable - Forcibly turn off a CRTC * drm_crtc_force_disable - Forcibly turn off a CRTC
* @crtc: CRTC to turn off * @crtc: CRTC to turn off
......
...@@ -71,7 +71,7 @@ ...@@ -71,7 +71,7 @@
* *
* These legacy modeset helpers use the same function table structures as * These legacy modeset helpers use the same function table structures as
* all other modesetting helpers. See the documentation for struct * all other modesetting helpers. See the documentation for struct
* &drm_crtc_helper_funcs, struct &drm_encoder_helper_funcs and struct * &drm_crtc_helper_funcs, &struct drm_encoder_helper_funcs and struct
* &drm_connector_helper_funcs. * &drm_connector_helper_funcs.
*/ */
...@@ -478,10 +478,10 @@ drm_crtc_helper_disable(struct drm_crtc *crtc) ...@@ -478,10 +478,10 @@ drm_crtc_helper_disable(struct drm_crtc *crtc)
* @set: mode set configuration * @set: mode set configuration
* *
* The drm_crtc_helper_set_config() helper function implements the set_config * The drm_crtc_helper_set_config() helper function implements the set_config
* callback of struct &drm_crtc_funcs for drivers using the legacy CRTC helpers. * callback of &struct drm_crtc_funcs for drivers using the legacy CRTC helpers.
* *
* It first tries to locate the best encoder for each connector by calling the * It first tries to locate the best encoder for each connector by calling the
* connector ->best_encoder() (struct &drm_connector_helper_funcs) helper * connector ->best_encoder() (&struct drm_connector_helper_funcs) helper
* operation. * operation.
* *
* After locating the appropriate encoders, the helper function will call the * After locating the appropriate encoders, the helper function will call the
...@@ -493,7 +493,7 @@ drm_crtc_helper_disable(struct drm_crtc *crtc) ...@@ -493,7 +493,7 @@ drm_crtc_helper_disable(struct drm_crtc *crtc)
* *
* If the adjusted mode is identical to the current mode but changes to the * If the adjusted mode is identical to the current mode but changes to the
* frame buffer need to be applied, the drm_crtc_helper_set_config() function * frame buffer need to be applied, the drm_crtc_helper_set_config() function
* will call the CRTC ->mode_set_base() (struct &drm_crtc_helper_funcs) helper * will call the CRTC ->mode_set_base() (&struct drm_crtc_helper_funcs) helper
* operation. * operation.
* *
* If the adjusted mode differs from the current mode, or if the * If the adjusted mode differs from the current mode, or if the
...@@ -501,7 +501,7 @@ drm_crtc_helper_disable(struct drm_crtc *crtc) ...@@ -501,7 +501,7 @@ drm_crtc_helper_disable(struct drm_crtc *crtc)
* performs a full mode set sequence by calling the ->prepare(), ->mode_set() * performs a full mode set sequence by calling the ->prepare(), ->mode_set()
* and ->commit() CRTC and encoder helper operations, in that order. * and ->commit() CRTC and encoder helper operations, in that order.
* Alternatively it can also use the dpms and disable helper operations. For * Alternatively it can also use the dpms and disable helper operations. For
* details see struct &drm_crtc_helper_funcs and struct * details see &struct drm_crtc_helper_funcs and struct
* &drm_encoder_helper_funcs. * &drm_encoder_helper_funcs.
* *
* This function is deprecated. New drivers must implement atomic modeset * This function is deprecated. New drivers must implement atomic modeset
...@@ -852,12 +852,12 @@ static int drm_helper_choose_crtc_dpms(struct drm_crtc *crtc) ...@@ -852,12 +852,12 @@ static int drm_helper_choose_crtc_dpms(struct drm_crtc *crtc)
* @mode: DPMS mode * @mode: DPMS mode
* *
* The drm_helper_connector_dpms() helper function implements the ->dpms() * The drm_helper_connector_dpms() helper function implements the ->dpms()
* callback of struct &drm_connector_funcs for drivers using the legacy CRTC helpers. * callback of &struct drm_connector_funcs for drivers using the legacy CRTC helpers.
* *
* This is the main helper function provided by the CRTC helper framework for * This is the main helper function provided by the CRTC helper framework for
* implementing the DPMS connector attribute. It computes the new desired DPMS * implementing the DPMS connector attribute. It computes the new desired DPMS
* state for all encoders and CRTCs in the output mesh and calls the ->dpms() * state for all encoders and CRTCs in the output mesh and calls the ->dpms()
* callbacks provided by the driver in struct &drm_crtc_helper_funcs and struct * callbacks provided by the driver in &struct drm_crtc_helper_funcs and struct
* &drm_encoder_helper_funcs appropriately. * &drm_encoder_helper_funcs appropriately.
* *
* This function is deprecated. New drivers must implement atomic modeset * This function is deprecated. New drivers must implement atomic modeset
......
...@@ -125,6 +125,12 @@ static const struct file_operations drm_crtc_crc_control_fops = { ...@@ -125,6 +125,12 @@ static const struct file_operations drm_crtc_crc_control_fops = {
.write = crc_control_write .write = crc_control_write
}; };
static int crtc_crc_data_count(struct drm_crtc_crc *crc)
{
assert_spin_locked(&crc->lock);
return CIRC_CNT(crc->head, crc->tail, DRM_CRC_ENTRIES_NR);
}
static int crtc_crc_open(struct inode *inode, struct file *filep) static int crtc_crc_open(struct inode *inode, struct file *filep)
{ {
struct drm_crtc *crtc = inode->i_private; struct drm_crtc *crtc = inode->i_private;
...@@ -160,8 +166,19 @@ static int crtc_crc_open(struct inode *inode, struct file *filep) ...@@ -160,8 +166,19 @@ static int crtc_crc_open(struct inode *inode, struct file *filep)
crc->entries = entries; crc->entries = entries;
crc->values_cnt = values_cnt; crc->values_cnt = values_cnt;
crc->opened = true; crc->opened = true;
/*
* Only return once we got a first frame, so userspace doesn't have to
* guess when this particular piece of HW will be ready to start
* generating CRCs.
*/
ret = wait_event_interruptible_lock_irq(crc->wq,
crtc_crc_data_count(crc),
crc->lock);
spin_unlock_irq(&crc->lock); spin_unlock_irq(&crc->lock);
WARN_ON(ret);
return 0; return 0;
err_disable: err_disable:
...@@ -189,12 +206,6 @@ static int crtc_crc_release(struct inode *inode, struct file *filep) ...@@ -189,12 +206,6 @@ static int crtc_crc_release(struct inode *inode, struct file *filep)
return 0; return 0;
} }
static int crtc_crc_data_count(struct drm_crtc_crc *crc)
{
assert_spin_locked(&crc->lock);
return CIRC_CNT(crc->head, crc->tail, DRM_CRC_ENTRIES_NR);
}
/* /*
* 1 frame field of 10 chars plus a number of CRC fields of 10 chars each, space * 1 frame field of 10 chars plus a number of CRC fields of 10 chars each, space
* separated, with a newline at the end and null-terminated. * separated, with a newline at the end and null-terminated.
...@@ -325,16 +336,19 @@ int drm_crtc_add_crc_entry(struct drm_crtc *crtc, bool has_frame, ...@@ -325,16 +336,19 @@ int drm_crtc_add_crc_entry(struct drm_crtc *crtc, bool has_frame,
struct drm_crtc_crc_entry *entry; struct drm_crtc_crc_entry *entry;
int head, tail; int head, tail;
assert_spin_locked(&crc->lock); spin_lock(&crc->lock);
/* Caller may not have noticed yet that userspace has stopped reading */ /* Caller may not have noticed yet that userspace has stopped reading */
if (!crc->opened) if (!crc->opened) {
spin_unlock(&crc->lock);
return -EINVAL; return -EINVAL;
}
head = crc->head; head = crc->head;
tail = crc->tail; tail = crc->tail;
if (CIRC_SPACE(head, tail, DRM_CRC_ENTRIES_NR) < 1) { if (CIRC_SPACE(head, tail, DRM_CRC_ENTRIES_NR) < 1) {
spin_unlock(&crc->lock);
DRM_ERROR("Overflow of CRC buffer, userspace reads too slow.\n"); DRM_ERROR("Overflow of CRC buffer, userspace reads too slow.\n");
return -ENOBUFS; return -ENOBUFS;
} }
...@@ -347,6 +361,10 @@ int drm_crtc_add_crc_entry(struct drm_crtc *crtc, bool has_frame, ...@@ -347,6 +361,10 @@ int drm_crtc_add_crc_entry(struct drm_crtc *crtc, bool has_frame,
head = (head + 1) & (DRM_CRC_ENTRIES_NR - 1); head = (head + 1) & (DRM_CRC_ENTRIES_NR - 1);
crc->head = head; crc->head = head;
spin_unlock(&crc->lock);
wake_up_interruptible(&crc->wq);
return 0; return 0;
} }
EXPORT_SYMBOL_GPL(drm_crtc_add_crc_entry); EXPORT_SYMBOL_GPL(drm_crtc_add_crc_entry);
...@@ -298,7 +298,7 @@ void drm_minor_release(struct drm_minor *minor) ...@@ -298,7 +298,7 @@ void drm_minor_release(struct drm_minor *minor)
/** /**
* DOC: driver instance overview * DOC: driver instance overview
* *
* A device instance for a drm driver is represented by struct &drm_device. This * A device instance for a drm driver is represented by &struct drm_device. This
* is allocated with drm_dev_alloc(), usually from bus-specific ->probe() * is allocated with drm_dev_alloc(), usually from bus-specific ->probe()
* callbacks implemented by the driver. The driver then needs to initialize all * callbacks implemented by the driver. The driver then needs to initialize all
* the various subsystems for the drm device like memory management, vblank * the various subsystems for the drm device like memory management, vblank
...@@ -323,7 +323,7 @@ void drm_minor_release(struct drm_minor *minor) ...@@ -323,7 +323,7 @@ void drm_minor_release(struct drm_minor *minor)
* historical baggage. Hence use the reference counting provided by * historical baggage. Hence use the reference counting provided by
* drm_dev_ref() and drm_dev_unref() only carefully. * drm_dev_ref() and drm_dev_unref() only carefully.
* *
* It is recommended that drivers embed struct &drm_device into their own device * It is recommended that drivers embed &struct drm_device into their own device
* structure, which is supported through drm_dev_init(). * structure, which is supported through drm_dev_init().
*/ */
...@@ -461,8 +461,8 @@ static void drm_fs_inode_free(struct inode *inode) ...@@ -461,8 +461,8 @@ static void drm_fs_inode_free(struct inode *inode)
* Note that for purely virtual devices @parent can be NULL. * Note that for purely virtual devices @parent can be NULL.
* *
* Drivers that do not want to allocate their own device struct * Drivers that do not want to allocate their own device struct
* embedding struct &drm_device can call drm_dev_alloc() instead. For drivers * embedding &struct drm_device can call drm_dev_alloc() instead. For drivers
* that do embed struct &drm_device it must be placed first in the overall * that do embed &struct drm_device it must be placed first in the overall
* structure, and the overall structure must be allocated using kmalloc(): The * structure, and the overall structure must be allocated using kmalloc(): The
* drm core's release function unconditionally calls kfree() on the @dev pointer * drm core's release function unconditionally calls kfree() on the @dev pointer
* when the final reference is released. * when the final reference is released.
...@@ -568,7 +568,7 @@ EXPORT_SYMBOL(drm_dev_init); ...@@ -568,7 +568,7 @@ EXPORT_SYMBOL(drm_dev_init);
* *
* Note that for purely virtual devices @parent can be NULL. * Note that for purely virtual devices @parent can be NULL.
* *
* Drivers that wish to subclass or embed struct &drm_device into their * Drivers that wish to subclass or embed &struct drm_device into their
* own struct should look at using drm_dev_init() instead. * own struct should look at using drm_dev_init() instead.
* *
* RETURNS: * RETURNS:
...@@ -728,6 +728,7 @@ static void remove_compat_control_link(struct drm_device *dev) ...@@ -728,6 +728,7 @@ static void remove_compat_control_link(struct drm_device *dev)
*/ */
int drm_dev_register(struct drm_device *dev, unsigned long flags) int drm_dev_register(struct drm_device *dev, unsigned long flags)
{ {
struct drm_driver *driver = dev->driver;
int ret; int ret;
mutex_lock(&drm_global_mutex); mutex_lock(&drm_global_mutex);
...@@ -758,6 +759,13 @@ int drm_dev_register(struct drm_device *dev, unsigned long flags) ...@@ -758,6 +759,13 @@ int drm_dev_register(struct drm_device *dev, unsigned long flags)
drm_modeset_register_all(dev); drm_modeset_register_all(dev);
ret = 0; ret = 0;
DRM_INFO("Initialized %s %d.%d.%d %s for %s on minor %d\n",
driver->name, driver->major, driver->minor,
driver->patchlevel, driver->date,
dev->dev ? dev_name(dev->dev) : "virtual device",
dev->primary->index);
goto out_unlock; goto out_unlock;
err_minors: err_minors:
...@@ -924,7 +932,7 @@ static int __init drm_core_init(void) ...@@ -924,7 +932,7 @@ static int __init drm_core_init(void)
if (ret < 0) if (ret < 0)
goto error; goto error;
DRM_INFO("Initialized\n"); DRM_DEBUG("Initialized\n");
return 0; return 0;
error: error:
......
...@@ -43,7 +43,7 @@ ...@@ -43,7 +43,7 @@
* KMS frame buffers. * KMS frame buffers.
* *
* To support dumb objects drivers must implement the dumb_create, * To support dumb objects drivers must implement the dumb_create,
* dumb_destroy and dumb_map_offset operations from struct &drm_driver. See * dumb_destroy and dumb_map_offset operations from &struct drm_driver. See
* there for further details. * there for further details.
* *
* Note that dumb objects may not be used for gpu acceleration, as has been * Note that dumb objects may not be used for gpu acceleration, as has been
......
...@@ -91,7 +91,7 @@ struct detailed_mode_closure { ...@@ -91,7 +91,7 @@ struct detailed_mode_closure {
#define LEVEL_GTF2 2 #define LEVEL_GTF2 2
#define LEVEL_CVT 3 #define LEVEL_CVT 3
static struct edid_quirk { static const struct edid_quirk {
char vendor[4]; char vendor[4];
int product_id; int product_id;
u32 quirks; u32 quirks;
...@@ -1478,7 +1478,7 @@ EXPORT_SYMBOL(drm_edid_duplicate); ...@@ -1478,7 +1478,7 @@ EXPORT_SYMBOL(drm_edid_duplicate);
* *
* Returns true if @vendor is in @edid, false otherwise * Returns true if @vendor is in @edid, false otherwise
*/ */
static bool edid_vendor(struct edid *edid, char *vendor) static bool edid_vendor(struct edid *edid, const char *vendor)
{ {
char edid_vendor[3]; char edid_vendor[3];
...@@ -1498,7 +1498,7 @@ static bool edid_vendor(struct edid *edid, char *vendor) ...@@ -1498,7 +1498,7 @@ static bool edid_vendor(struct edid *edid, char *vendor)
*/ */
static u32 edid_get_quirks(struct edid *edid) static u32 edid_get_quirks(struct edid *edid)
{ {
struct edid_quirk *quirk; const struct edid_quirk *quirk;
int i; int i;
for (i = 0; i < ARRAY_SIZE(edid_quirk_list); i++) { for (i = 0; i < ARRAY_SIZE(edid_quirk_list); i++) {
......
...@@ -30,8 +30,8 @@ ...@@ -30,8 +30,8 @@
* DOC: overview * DOC: overview
* *
* Encoders represent the connecting element between the CRTC (as the overall * Encoders represent the connecting element between the CRTC (as the overall
* pixel pipeline, represented by struct &drm_crtc) and the connectors (as the * pixel pipeline, represented by &struct drm_crtc) and the connectors (as the
* generic sink entity, represented by struct &drm_connector). An encoder takes * generic sink entity, represented by &struct drm_connector). An encoder takes
* pixel data from a CRTC and converts it to a format suitable for any attached * pixel data from a CRTC and converts it to a format suitable for any attached
* connector. Encoders are objects exposed to userspace, originally to allow * connector. Encoders are objects exposed to userspace, originally to allow
* userspace to infer cloning and connector/CRTC restrictions. Unfortunately * userspace to infer cloning and connector/CRTC restrictions. Unfortunately
......
...@@ -39,6 +39,7 @@ struct drm_fb_cma { ...@@ -39,6 +39,7 @@ struct drm_fb_cma {
struct drm_fbdev_cma { struct drm_fbdev_cma {
struct drm_fb_helper fb_helper; struct drm_fb_helper fb_helper;
struct drm_fb_cma *fb; struct drm_fb_cma *fb;
const struct drm_framebuffer_funcs *fb_funcs;
}; };
/** /**
...@@ -47,50 +48,40 @@ struct drm_fbdev_cma { ...@@ -47,50 +48,40 @@ struct drm_fbdev_cma {
* Provides helper functions for creating a cma (contiguous memory allocator) * Provides helper functions for creating a cma (contiguous memory allocator)
* backed framebuffer. * backed framebuffer.
* *
* drm_fb_cma_create() is used in the &drm_mode_config_funcs ->fb_create * drm_fb_cma_create() is used in the &drm_mode_config_funcs.fb_create
* callback function to create a cma backed framebuffer. * callback function to create a cma backed framebuffer.
* *
* An fbdev framebuffer backed by cma is also available by calling * An fbdev framebuffer backed by cma is also available by calling
* drm_fbdev_cma_init(). drm_fbdev_cma_fini() tears it down. * drm_fbdev_cma_init(). drm_fbdev_cma_fini() tears it down.
* If the &drm_framebuffer_funcs ->dirty callback is set, fb_deferred_io * If the &drm_framebuffer_funcs.dirty callback is set, fb_deferred_io will be
* will be set up automatically. dirty() is called by * set up automatically. &drm_framebuffer_funcs.dirty is called by
* drm_fb_helper_deferred_io() in process context (struct delayed_work). * drm_fb_helper_deferred_io() in process context (&struct delayed_work).
* *
* Example fbdev deferred io code:: * Example fbdev deferred io code::
* *
* static int driver_fbdev_fb_dirty(struct drm_framebuffer *fb, * static int driver_fb_dirty(struct drm_framebuffer *fb,
* struct drm_file *file_priv, * struct drm_file *file_priv,
* unsigned flags, unsigned color, * unsigned flags, unsigned color,
* struct drm_clip_rect *clips, * struct drm_clip_rect *clips,
* unsigned num_clips) * unsigned num_clips)
* { * {
* struct drm_gem_cma_object *cma = drm_fb_cma_get_gem_obj(fb, 0); * struct drm_gem_cma_object *cma = drm_fb_cma_get_gem_obj(fb, 0);
* ... push changes ... * ... push changes ...
* return 0; * return 0;
* } * }
* *
* static struct drm_framebuffer_funcs driver_fbdev_fb_funcs = { * static struct drm_framebuffer_funcs driver_fb_funcs = {
* .destroy = drm_fb_cma_destroy, * .destroy = drm_fb_cma_destroy,
* .create_handle = drm_fb_cma_create_handle, * .create_handle = drm_fb_cma_create_handle,
* .dirty = driver_fbdev_fb_dirty, * .dirty = driver_fb_dirty,
* }; * };
* *
* static int driver_fbdev_create(struct drm_fb_helper *helper, * Initialize::
* struct drm_fb_helper_surface_size *sizes)
* {
* return drm_fbdev_cma_create_with_funcs(helper, sizes,
* &driver_fbdev_fb_funcs);
* }
*
* static const struct drm_fb_helper_funcs driver_fb_helper_funcs = {
* .fb_probe = driver_fbdev_create,
* };
* *
* Initialize:
* fbdev = drm_fbdev_cma_init_with_funcs(dev, 16, * fbdev = drm_fbdev_cma_init_with_funcs(dev, 16,
* dev->mode_config.num_crtc, * dev->mode_config.num_crtc,
* dev->mode_config.num_connector, * dev->mode_config.num_connector,
* &driver_fb_helper_funcs); * &driver_fb_funcs);
* *
*/ */
...@@ -164,16 +155,16 @@ static struct drm_fb_cma *drm_fb_cma_alloc(struct drm_device *dev, ...@@ -164,16 +155,16 @@ static struct drm_fb_cma *drm_fb_cma_alloc(struct drm_device *dev,
/** /**
* drm_fb_cma_create_with_funcs() - helper function for the * drm_fb_cma_create_with_funcs() - helper function for the
* &drm_mode_config_funcs ->fb_create * &drm_mode_config_funcs.fb_create
* callback function * callback
* @dev: DRM device * @dev: DRM device
* @file_priv: drm file for the ioctl call * @file_priv: drm file for the ioctl call
* @mode_cmd: metadata from the userspace fb creation request * @mode_cmd: metadata from the userspace fb creation request
* @funcs: vtable to be used for the new framebuffer object * @funcs: vtable to be used for the new framebuffer object
* *
* This can be used to set &drm_framebuffer_funcs for drivers that need the * This can be used to set &drm_framebuffer_funcs for drivers that need the
* dirty() callback. Use drm_fb_cma_create() if you don't need to change * &drm_framebuffer_funcs.dirty callback. Use drm_fb_cma_create() if you don't
* &drm_framebuffer_funcs. * need to change &drm_framebuffer_funcs.
*/ */
struct drm_framebuffer *drm_fb_cma_create_with_funcs(struct drm_device *dev, struct drm_framebuffer *drm_fb_cma_create_with_funcs(struct drm_device *dev,
struct drm_file *file_priv, const struct drm_mode_fb_cmd2 *mode_cmd, struct drm_file *file_priv, const struct drm_mode_fb_cmd2 *mode_cmd,
...@@ -230,14 +221,14 @@ struct drm_framebuffer *drm_fb_cma_create_with_funcs(struct drm_device *dev, ...@@ -230,14 +221,14 @@ struct drm_framebuffer *drm_fb_cma_create_with_funcs(struct drm_device *dev,
EXPORT_SYMBOL_GPL(drm_fb_cma_create_with_funcs); EXPORT_SYMBOL_GPL(drm_fb_cma_create_with_funcs);
/** /**
* drm_fb_cma_create() - &drm_mode_config_funcs ->fb_create callback function * drm_fb_cma_create() - &drm_mode_config_funcs.fb_create callback function
* @dev: DRM device * @dev: DRM device
* @file_priv: drm file for the ioctl call * @file_priv: drm file for the ioctl call
* @mode_cmd: metadata from the userspace fb creation request * @mode_cmd: metadata from the userspace fb creation request
* *
* If your hardware has special alignment or pitch requirements these should be * If your hardware has special alignment or pitch requirements these should be
* checked before calling this function. Use drm_fb_cma_create_with_funcs() if * checked before calling this function. Use drm_fb_cma_create_with_funcs() if
* you need to set &drm_framebuffer_funcs ->dirty. * you need to set &drm_framebuffer_funcs.dirty.
*/ */
struct drm_framebuffer *drm_fb_cma_create(struct drm_device *dev, struct drm_framebuffer *drm_fb_cma_create(struct drm_device *dev,
struct drm_file *file_priv, const struct drm_mode_fb_cmd2 *mode_cmd) struct drm_file *file_priv, const struct drm_mode_fb_cmd2 *mode_cmd)
...@@ -273,7 +264,7 @@ EXPORT_SYMBOL_GPL(drm_fb_cma_get_gem_obj); ...@@ -273,7 +264,7 @@ EXPORT_SYMBOL_GPL(drm_fb_cma_get_gem_obj);
* @plane: Which plane * @plane: Which plane
* @state: Plane state attach fence to * @state: Plane state attach fence to
* *
* This should be put into prepare_fb hook of struct &drm_plane_helper_funcs . * This should be set as the &struct drm_plane_helper_funcs.prepare_fb hook.
* *
* This function checks if the plane FB has a dma-buf attached, extracts * This function checks if the plane FB has a dma-buf attached, extracts
* the exclusive fence and attaches it to plane state for the atomic helper * the exclusive fence and attaches it to plane state for the atomic helper
...@@ -408,13 +399,9 @@ static void drm_fbdev_cma_defio_fini(struct fb_info *fbi) ...@@ -408,13 +399,9 @@ static void drm_fbdev_cma_defio_fini(struct fb_info *fbi)
kfree(fbi->fbops); kfree(fbi->fbops);
} }
/* static int
* For use in a (struct drm_fb_helper_funcs *)->fb_probe callback function that drm_fbdev_cma_create(struct drm_fb_helper *helper,
* needs custom struct drm_framebuffer_funcs, like dirty() for deferred_io use. struct drm_fb_helper_surface_size *sizes)
*/
int drm_fbdev_cma_create_with_funcs(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes,
const struct drm_framebuffer_funcs *funcs)
{ {
struct drm_fbdev_cma *fbdev_cma = to_fbdev_cma(helper); struct drm_fbdev_cma *fbdev_cma = to_fbdev_cma(helper);
struct drm_mode_fb_cmd2 mode_cmd = { 0 }; struct drm_mode_fb_cmd2 mode_cmd = { 0 };
...@@ -450,7 +437,8 @@ int drm_fbdev_cma_create_with_funcs(struct drm_fb_helper *helper, ...@@ -450,7 +437,8 @@ int drm_fbdev_cma_create_with_funcs(struct drm_fb_helper *helper,
goto err_gem_free_object; goto err_gem_free_object;
} }
fbdev_cma->fb = drm_fb_cma_alloc(dev, &mode_cmd, &obj, 1, funcs); fbdev_cma->fb = drm_fb_cma_alloc(dev, &mode_cmd, &obj, 1,
fbdev_cma->fb_funcs);
if (IS_ERR(fbdev_cma->fb)) { if (IS_ERR(fbdev_cma->fb)) {
dev_err(dev->dev, "Failed to allocate DRM framebuffer.\n"); dev_err(dev->dev, "Failed to allocate DRM framebuffer.\n");
ret = PTR_ERR(fbdev_cma->fb); ret = PTR_ERR(fbdev_cma->fb);
...@@ -476,7 +464,7 @@ int drm_fbdev_cma_create_with_funcs(struct drm_fb_helper *helper, ...@@ -476,7 +464,7 @@ int drm_fbdev_cma_create_with_funcs(struct drm_fb_helper *helper,
fbi->screen_size = size; fbi->screen_size = size;
fbi->fix.smem_len = size; fbi->fix.smem_len = size;
if (funcs->dirty) { if (fbdev_cma->fb_funcs->dirty) {
ret = drm_fbdev_cma_defio_init(fbi, obj); ret = drm_fbdev_cma_defio_init(fbi, obj);
if (ret) if (ret)
goto err_cma_destroy; goto err_cma_destroy;
...@@ -493,13 +481,6 @@ int drm_fbdev_cma_create_with_funcs(struct drm_fb_helper *helper, ...@@ -493,13 +481,6 @@ int drm_fbdev_cma_create_with_funcs(struct drm_fb_helper *helper,
drm_gem_object_unreference_unlocked(&obj->base); drm_gem_object_unreference_unlocked(&obj->base);
return ret; return ret;
} }
EXPORT_SYMBOL(drm_fbdev_cma_create_with_funcs);
static int drm_fbdev_cma_create(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
return drm_fbdev_cma_create_with_funcs(helper, sizes, &drm_fb_cma_funcs);
}
static const struct drm_fb_helper_funcs drm_fb_cma_helper_funcs = { static const struct drm_fb_helper_funcs drm_fb_cma_helper_funcs = {
.fb_probe = drm_fbdev_cma_create, .fb_probe = drm_fbdev_cma_create,
...@@ -511,13 +492,13 @@ static const struct drm_fb_helper_funcs drm_fb_cma_helper_funcs = { ...@@ -511,13 +492,13 @@ static const struct drm_fb_helper_funcs drm_fb_cma_helper_funcs = {
* @preferred_bpp: Preferred bits per pixel for the device * @preferred_bpp: Preferred bits per pixel for the device
* @num_crtc: Number of CRTCs * @num_crtc: Number of CRTCs
* @max_conn_count: Maximum number of connectors * @max_conn_count: Maximum number of connectors
* @funcs: fb helper functions, in particular fb_probe() * @funcs: fb helper functions, in particular a custom dirty() callback
* *
* Returns a newly allocated drm_fbdev_cma struct or an ERR_PTR. * Returns a newly allocated drm_fbdev_cma struct or an ERR_PTR.
*/ */
struct drm_fbdev_cma *drm_fbdev_cma_init_with_funcs(struct drm_device *dev, struct drm_fbdev_cma *drm_fbdev_cma_init_with_funcs(struct drm_device *dev,
unsigned int preferred_bpp, unsigned int num_crtc, unsigned int preferred_bpp, unsigned int num_crtc,
unsigned int max_conn_count, const struct drm_fb_helper_funcs *funcs) unsigned int max_conn_count, const struct drm_framebuffer_funcs *funcs)
{ {
struct drm_fbdev_cma *fbdev_cma; struct drm_fbdev_cma *fbdev_cma;
struct drm_fb_helper *helper; struct drm_fb_helper *helper;
...@@ -528,10 +509,11 @@ struct drm_fbdev_cma *drm_fbdev_cma_init_with_funcs(struct drm_device *dev, ...@@ -528,10 +509,11 @@ struct drm_fbdev_cma *drm_fbdev_cma_init_with_funcs(struct drm_device *dev,
dev_err(dev->dev, "Failed to allocate drm fbdev.\n"); dev_err(dev->dev, "Failed to allocate drm fbdev.\n");
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
} }
fbdev_cma->fb_funcs = funcs;
helper = &fbdev_cma->fb_helper; helper = &fbdev_cma->fb_helper;
drm_fb_helper_prepare(dev, helper, funcs); drm_fb_helper_prepare(dev, helper, &drm_fb_cma_helper_funcs);
ret = drm_fb_helper_init(dev, helper, num_crtc, max_conn_count); ret = drm_fb_helper_init(dev, helper, num_crtc, max_conn_count);
if (ret < 0) { if (ret < 0) {
...@@ -577,7 +559,7 @@ struct drm_fbdev_cma *drm_fbdev_cma_init(struct drm_device *dev, ...@@ -577,7 +559,7 @@ struct drm_fbdev_cma *drm_fbdev_cma_init(struct drm_device *dev,
unsigned int max_conn_count) unsigned int max_conn_count)
{ {
return drm_fbdev_cma_init_with_funcs(dev, preferred_bpp, num_crtc, return drm_fbdev_cma_init_with_funcs(dev, preferred_bpp, num_crtc,
max_conn_count, &drm_fb_cma_helper_funcs); max_conn_count, &drm_fb_cma_funcs);
} }
EXPORT_SYMBOL_GPL(drm_fbdev_cma_init); EXPORT_SYMBOL_GPL(drm_fbdev_cma_init);
...@@ -606,7 +588,7 @@ EXPORT_SYMBOL_GPL(drm_fbdev_cma_fini); ...@@ -606,7 +588,7 @@ EXPORT_SYMBOL_GPL(drm_fbdev_cma_fini);
* drm_fbdev_cma_restore_mode() - Restores initial framebuffer mode * drm_fbdev_cma_restore_mode() - Restores initial framebuffer mode
* @fbdev_cma: The drm_fbdev_cma struct, may be NULL * @fbdev_cma: The drm_fbdev_cma struct, may be NULL
* *
* This function is usually called from the DRM drivers lastclose callback. * This function is usually called from the &drm_driver.lastclose callback.
*/ */
void drm_fbdev_cma_restore_mode(struct drm_fbdev_cma *fbdev_cma) void drm_fbdev_cma_restore_mode(struct drm_fbdev_cma *fbdev_cma)
{ {
...@@ -619,7 +601,7 @@ EXPORT_SYMBOL_GPL(drm_fbdev_cma_restore_mode); ...@@ -619,7 +601,7 @@ EXPORT_SYMBOL_GPL(drm_fbdev_cma_restore_mode);
* drm_fbdev_cma_hotplug_event() - Poll for hotplug events * drm_fbdev_cma_hotplug_event() - Poll for hotplug events
* @fbdev_cma: The drm_fbdev_cma struct, may be NULL * @fbdev_cma: The drm_fbdev_cma struct, may be NULL
* *
* This function is usually called from the DRM drivers output_poll_changed * This function is usually called from the &drm_mode_config.output_poll_changed
* callback. * callback.
*/ */
void drm_fbdev_cma_hotplug_event(struct drm_fbdev_cma *fbdev_cma) void drm_fbdev_cma_hotplug_event(struct drm_fbdev_cma *fbdev_cma)
......
...@@ -1752,8 +1752,7 @@ static bool drm_has_cmdline_mode(struct drm_fb_helper_connector *fb_connector) ...@@ -1752,8 +1752,7 @@ static bool drm_has_cmdline_mode(struct drm_fb_helper_connector *fb_connector)
return fb_connector->connector->cmdline_mode.specified; return fb_connector->connector->cmdline_mode.specified;
} }
struct drm_display_mode *drm_pick_cmdline_mode(struct drm_fb_helper_connector *fb_helper_conn, struct drm_display_mode *drm_pick_cmdline_mode(struct drm_fb_helper_connector *fb_helper_conn)
int width, int height)
{ {
struct drm_cmdline_mode *cmdline_mode; struct drm_cmdline_mode *cmdline_mode;
struct drm_display_mode *mode; struct drm_display_mode *mode;
...@@ -1871,7 +1870,7 @@ static bool drm_target_cloned(struct drm_fb_helper *fb_helper, ...@@ -1871,7 +1870,7 @@ static bool drm_target_cloned(struct drm_fb_helper *fb_helper,
if (!enabled[i]) if (!enabled[i])
continue; continue;
fb_helper_conn = fb_helper->connector_info[i]; fb_helper_conn = fb_helper->connector_info[i];
modes[i] = drm_pick_cmdline_mode(fb_helper_conn, width, height); modes[i] = drm_pick_cmdline_mode(fb_helper_conn);
if (!modes[i]) { if (!modes[i]) {
can_clone = false; can_clone = false;
break; break;
...@@ -1993,7 +1992,7 @@ static bool drm_target_preferred(struct drm_fb_helper *fb_helper, ...@@ -1993,7 +1992,7 @@ static bool drm_target_preferred(struct drm_fb_helper *fb_helper,
fb_helper_conn->connector->base.id); fb_helper_conn->connector->base.id);
/* go for command line mode first */ /* go for command line mode first */
modes[i] = drm_pick_cmdline_mode(fb_helper_conn, width, height); modes[i] = drm_pick_cmdline_mode(fb_helper_conn);
if (!modes[i]) { if (!modes[i]) {
DRM_DEBUG_KMS("looking for preferred mode on connector %d %d\n", DRM_DEBUG_KMS("looking for preferred mode on connector %d %d\n",
fb_helper_conn->connector->base.id, fb_helper_conn->connector->tile_group ? fb_helper_conn->connector->tile_group->id : 0); fb_helper_conn->connector->base.id, fb_helper_conn->connector->tile_group ? fb_helper_conn->connector->tile_group->id : 0);
......
...@@ -689,8 +689,8 @@ void drm_send_event_locked(struct drm_device *dev, struct drm_pending_event *e) ...@@ -689,8 +689,8 @@ void drm_send_event_locked(struct drm_device *dev, struct drm_pending_event *e)
assert_spin_locked(&dev->event_lock); assert_spin_locked(&dev->event_lock);
if (e->completion) { if (e->completion) {
/* ->completion might disappear as soon as it is signalled. */
complete_all(e->completion); complete_all(e->completion);
e->completion_release(e->completion);
e->completion = NULL; e->completion = NULL;
} }
......
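The two lines added in the hunk above exist because the waiter may free the completion the instant it is signalled, so the event path must hold its own reference and drop it via a release callback. A single-threaded userspace sketch of that pattern (all names illustrative; a plain integer stands in for the kernel's kref):

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace model of the reference-counted completion: the pending
 * event holds one reference, dropped right after signalling, so a
 * waiter freeing its reference early cannot leave the signaller with
 * a dangling pointer. Not kernel API, just the shape of the fix. */
struct completion {
	int refcount;
	int done;
};

static void completion_put(struct completion *c)
{
	if (--c->refcount == 0)
		free(c);
}

static void send_event(struct completion *c)
{
	c->done = 1;		/* stands in for complete_all() */
	completion_put(c);	/* stands in for e->completion_release() */
}
```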
...@@ -39,13 +39,13 @@ ...@@ -39,13 +39,13 @@
* Frame buffers rely on the underlying memory manager for allocating backing * Frame buffers rely on the underlying memory manager for allocating backing
* storage. When creating a frame buffer applications pass a memory handle * storage. When creating a frame buffer applications pass a memory handle
* (or a list of memory handles for multi-planar formats) through the * (or a list of memory handles for multi-planar formats) through the
* struct &drm_mode_fb_cmd2 argument. For drivers using GEM as their userspace * &struct drm_mode_fb_cmd2 argument. For drivers using GEM as their userspace
* buffer management interface this would be a GEM handle. Drivers are however * buffer management interface this would be a GEM handle. Drivers are however
* free to use their own backing storage object handles, e.g. vmwgfx directly * free to use their own backing storage object handles, e.g. vmwgfx directly
* exposes special TTM handles to userspace and so expects TTM handles in the * exposes special TTM handles to userspace and so expects TTM handles in the
* create ioctl and not GEM handles. * create ioctl and not GEM handles.
* *
* Framebuffers are tracked with struct &drm_framebuffer. They are published * Framebuffers are tracked with &struct drm_framebuffer. They are published
* using drm_framebuffer_init() - after calling that function userspace can use * using drm_framebuffer_init() - after calling that function userspace can use
* and access the framebuffer object. The helper function * and access the framebuffer object. The helper function
* drm_helper_mode_fill_fb_struct() can be used to pre-fill the required * drm_helper_mode_fill_fb_struct() can be used to pre-fill the required
...@@ -55,7 +55,7 @@ ...@@ -55,7 +55,7 @@
* drivers can grab additional references with drm_framebuffer_reference() and * drivers can grab additional references with drm_framebuffer_reference() and
* drop them again with drm_framebuffer_unreference(). For driver-private * drop them again with drm_framebuffer_unreference(). For driver-private
* framebuffers for which the last reference is never dropped (e.g. for the * framebuffers for which the last reference is never dropped (e.g. for the
* fbdev framebuffer when the struct struct &drm_framebuffer is embedded into * fbdev framebuffer when the struct &struct drm_framebuffer is embedded into
* the fbdev helper struct) drivers can manually clean up a framebuffer at * the fbdev helper struct) drivers can manually clean up a framebuffer at
* module unload time with drm_framebuffer_unregister_private(). But doing this * module unload time with drm_framebuffer_unregister_private(). But doing this
* is not recommended, and it's better to have a normal free-standing struct * is not recommended, and it's better to have a normal free-standing struct
......
...@@ -176,8 +176,8 @@ drm_gem_cma_create_with_handle(struct drm_file *file_priv, ...@@ -176,8 +176,8 @@ drm_gem_cma_create_with_handle(struct drm_file *file_priv,
* *
* This function frees the backing memory of the CMA GEM object, cleans up the * This function frees the backing memory of the CMA GEM object, cleans up the
* GEM object state and frees the memory used to store the object itself. * GEM object state and frees the memory used to store the object itself.
* Drivers using the CMA helpers should set this as their DRM driver's * Drivers using the CMA helpers should set this as their
* ->gem_free_object() callback. * &drm_driver.gem_free_object callback.
*/ */
void drm_gem_cma_free_object(struct drm_gem_object *gem_obj) void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
{ {
...@@ -207,7 +207,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_free_object); ...@@ -207,7 +207,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_free_object);
* This aligns the pitch and size arguments to the minimum required. This is * This aligns the pitch and size arguments to the minimum required. This is
* an internal helper that can be wrapped by a driver to account for hardware * an internal helper that can be wrapped by a driver to account for hardware
* with more specific alignment requirements. It should not be used directly * with more specific alignment requirements. It should not be used directly
* as the ->dumb_create() callback in a DRM driver. * as their &drm_driver.dumb_create callback.
* *
* Returns: * Returns:
* 0 on success or a negative error code on failure. * 0 on success or a negative error code on failure.
...@@ -240,7 +240,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_dumb_create_internal); ...@@ -240,7 +240,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_dumb_create_internal);
* This function computes the pitch of the dumb buffer and rounds it up to an * This function computes the pitch of the dumb buffer and rounds it up to an
* integer number of bytes per pixel. Drivers for hardware that doesn't have * integer number of bytes per pixel. Drivers for hardware that doesn't have
* any additional restrictions on the pitch can directly use this function as * any additional restrictions on the pitch can directly use this function as
* their ->dumb_create() callback. * their &drm_driver.dumb_create callback.
* *
* For hardware with additional restrictions, drivers can adjust the fields * For hardware with additional restrictions, drivers can adjust the fields
* set up by userspace and pass the IOCTL data along to the * set up by userspace and pass the IOCTL data along to the
...@@ -274,7 +274,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_dumb_create); ...@@ -274,7 +274,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_dumb_create);
* *
* This function looks up an object by its handle and returns the fake mmap * This function looks up an object by its handle and returns the fake mmap
* offset associated with it. Drivers using the CMA helpers should set this * offset associated with it. Drivers using the CMA helpers should set this
* as their DRM driver's ->dumb_map_offset() callback. * as their &drm_driver.dumb_map_offset callback.
* *
* Returns: * Returns:
* 0 on success or a negative error code on failure. * 0 on success or a negative error code on failure.
...@@ -358,6 +358,77 @@ int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma) ...@@ -358,6 +358,77 @@ int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
} }
EXPORT_SYMBOL_GPL(drm_gem_cma_mmap); EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
#ifndef CONFIG_MMU
/**
* drm_gem_cma_get_unmapped_area - propose address for mapping in noMMU cases
* @filp: file object
* @addr: memory address
* @len: buffer size
* @pgoff: page offset
* @flags: memory flags
*
* This function is used on noMMU platforms to propose an address mapping
* for a given buffer.
* It's intended to be used as a direct handler for the
* &struct file_operations.get_unmapped_area operation.
*
* Returns:
* mapping address on success or a negative error code on failure.
*/
unsigned long drm_gem_cma_get_unmapped_area(struct file *filp,
unsigned long addr,
unsigned long len,
unsigned long pgoff,
unsigned long flags)
{
struct drm_gem_cma_object *cma_obj;
struct drm_gem_object *obj = NULL;
struct drm_file *priv = filp->private_data;
struct drm_device *dev = priv->minor->dev;
struct drm_vma_offset_node *node;
if (drm_device_is_unplugged(dev))
return -ENODEV;
drm_vma_offset_lock_lookup(dev->vma_offset_manager);
node = drm_vma_offset_exact_lookup_locked(dev->vma_offset_manager,
pgoff,
len >> PAGE_SHIFT);
if (likely(node)) {
obj = container_of(node, struct drm_gem_object, vma_node);
/*
* When the object is being freed, after it hits 0-refcnt it
* proceeds to tear down the object. In the process it will
* attempt to remove the VMA offset and so acquire this
* mgr->vm_lock. Therefore if we find an object with a 0-refcnt
* that matches our range, we know it is in the process of being
* destroyed and will be freed as soon as we release the lock -
* so we have to check for the 0-refcnted object and treat it as
* invalid.
*/
if (!kref_get_unless_zero(&obj->refcount))
obj = NULL;
}
drm_vma_offset_unlock_lookup(dev->vma_offset_manager);
if (!obj)
return -EINVAL;
if (!drm_vma_node_is_allowed(node, priv)) {
drm_gem_object_unreference_unlocked(obj);
return -EACCES;
}
cma_obj = to_drm_gem_cma_obj(obj);
drm_gem_object_unreference_unlocked(obj);
return cma_obj->vaddr ? (unsigned long)cma_obj->vaddr : -EINVAL;
}
EXPORT_SYMBOL_GPL(drm_gem_cma_get_unmapped_area);
#endif
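The lookup rule spelled out in the comment above — an object already at refcount zero is mid-destruction and must not be revived — is what `kref_get_unless_zero()` enforces. A self-contained userspace sketch of that primitive using C11 atomics (names illustrative):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Increment a refcount only if it is currently non-zero. A zero
 * count means the owner is tearing the object down, so the lookup
 * must treat it as already gone instead of taking a new reference. */
static bool get_unless_zero(atomic_uint *refcount)
{
	unsigned int old = atomic_load(refcount);

	do {
		if (old == 0)
			return false;	/* object is being destroyed */
	} while (!atomic_compare_exchange_weak(refcount, &old, old + 1));

	return true;
}
```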
#ifdef CONFIG_DEBUG_FS #ifdef CONFIG_DEBUG_FS
/** /**
* drm_gem_cma_describe - describe a CMA GEM object for debugfs * drm_gem_cma_describe - describe a CMA GEM object for debugfs
...@@ -391,7 +462,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_describe); ...@@ -391,7 +462,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_describe);
* *
* This function exports a scatter/gather table suitable for PRIME usage by * This function exports a scatter/gather table suitable for PRIME usage by
* calling the standard DMA mapping API. Drivers using the CMA helpers should * calling the standard DMA mapping API. Drivers using the CMA helpers should
* set this as their DRM driver's ->gem_prime_get_sg_table() callback. * set this as their &drm_driver.gem_prime_get_sg_table callback.
* *
* Returns: * Returns:
* A pointer to the scatter/gather table of pinned pages or NULL on failure. * A pointer to the scatter/gather table of pinned pages or NULL on failure.
...@@ -429,8 +500,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_get_sg_table); ...@@ -429,8 +500,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_get_sg_table);
* This function imports a scatter/gather table exported via DMA-BUF by * This function imports a scatter/gather table exported via DMA-BUF by
* another driver. Imported buffers must be physically contiguous in memory * another driver. Imported buffers must be physically contiguous in memory
* (i.e. the scatter/gather table must contain a single entry). Drivers that * (i.e. the scatter/gather table must contain a single entry). Drivers that
* use the CMA helpers should set this as their DRM driver's * use the CMA helpers should set this as their
* ->gem_prime_import_sg_table() callback. * &drm_driver.gem_prime_import_sg_table callback.
* *
* Returns: * Returns:
* A pointer to a newly created GEM object or an ERR_PTR-encoded negative * A pointer to a newly created GEM object or an ERR_PTR-encoded negative
...@@ -467,7 +538,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_import_sg_table); ...@@ -467,7 +538,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_import_sg_table);
* *
* This function maps a buffer imported via DRM PRIME into a userspace * This function maps a buffer imported via DRM PRIME into a userspace
* process's address space. Drivers that use the CMA helpers should set this * process's address space. Drivers that use the CMA helpers should set this
* as their DRM driver's ->gem_prime_mmap() callback. * as their &drm_driver.gem_prime_mmap callback.
* *
* Returns: * Returns:
* 0 on success or a negative error code on failure. * 0 on success or a negative error code on failure.
...@@ -496,7 +567,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap); ...@@ -496,7 +567,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
* virtual address space. Since the CMA buffers are already mapped into the * virtual address space. Since the CMA buffers are already mapped into the
* kernel virtual address space this simply returns the cached virtual * kernel virtual address space this simply returns the cached virtual
* address. Drivers using the CMA helpers should set this as their DRM * address. Drivers using the CMA helpers should set this as their DRM
* driver's ->gem_prime_vmap() callback. * driver's &drm_driver.gem_prime_vmap callback.
* *
* Returns: * Returns:
* The kernel virtual address of the CMA GEM object's backing store. * The kernel virtual address of the CMA GEM object's backing store.
...@@ -518,7 +589,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap); ...@@ -518,7 +589,7 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
* This function removes a buffer exported via DRM PRIME from the kernel's * This function removes a buffer exported via DRM PRIME from the kernel's
* virtual address space. This is a no-op because CMA buffers cannot be * virtual address space. This is a no-op because CMA buffers cannot be
* unmapped from kernel space. Drivers using the CMA helpers should set this * unmapped from kernel space. Drivers using the CMA helpers should set this
* as their DRM driver's ->gem_prime_vunmap() callback. * as their &drm_driver.gem_prime_vunmap callback.
*/ */
void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr) void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
{ {
......
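The kerneldoc fixes in this file all point at `&drm_driver` members. Collected in one place, the wiring they describe looks roughly like this sketch for a hypothetical CMA-based driver (driver name invented; field set matches this kernel era and is not buildable outside the kernel tree):

```c
static struct drm_driver foo_driver = {
	/* ... */
	.gem_free_object	= drm_gem_cma_free_object,
	.dumb_create		= drm_gem_cma_dumb_create,
	.dumb_map_offset	= drm_gem_cma_dumb_map_offset,
	.gem_prime_get_sg_table	= drm_gem_cma_prime_get_sg_table,
	.gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table,
	.gem_prime_mmap		= drm_gem_cma_prime_mmap,
	.gem_prime_vmap		= drm_gem_cma_prime_vmap,
	.gem_prime_vunmap	= drm_gem_cma_prime_vunmap,
};
```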
...@@ -63,6 +63,18 @@ void drm_global_release(void) ...@@ -63,6 +63,18 @@ void drm_global_release(void)
} }
} }
/**
* drm_global_item_ref - Initialize and acquire reference to memory object
* @ref: Object for initialization
*
* This initializes a memory object, allocating memory and calling the
* .init() hook. Further calls will increase the reference count for
* that item.
*
* Returns:
* Zero on success, non-zero otherwise.
*/
int drm_global_item_ref(struct drm_global_reference *ref) int drm_global_item_ref(struct drm_global_reference *ref)
{ {
int ret = 0; int ret = 0;
...@@ -97,6 +109,17 @@ int drm_global_item_ref(struct drm_global_reference *ref) ...@@ -97,6 +109,17 @@ int drm_global_item_ref(struct drm_global_reference *ref)
} }
EXPORT_SYMBOL(drm_global_item_ref); EXPORT_SYMBOL(drm_global_item_ref);
/**
* drm_global_item_unref - Drop reference to memory object
* @ref: Object being removed
*
* Drop a reference to the memory object and eventually call the
* release() hook. The allocated object should be dropped in the
* release() hook or before calling this function.
*/
void drm_global_item_unref(struct drm_global_reference *ref) void drm_global_item_unref(struct drm_global_reference *ref)
{ {
struct drm_global_item *item = &glob[ref->global_type]; struct drm_global_item *item = &glob[ref->global_type];
......
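The contract the two new kerneldoc comments describe — first reference runs `init()`, later references only bump the count, last unref runs `release()` — can be sketched in a single-threaded userspace model (all names illustrative; the kernel version also allocates the object and holds a mutex):

```c
#include <stddef.h>

/* Minimal model of the drm_global item lifecycle. */
struct global_ref {
	void *object;
	int (*init)(struct global_ref *);
	void (*release)(struct global_ref *);
};

struct global_item {
	int refcount;
	void *object;
};

static int item_ref(struct global_item *item, struct global_ref *ref)
{
	if (item->refcount == 0) {
		int ret = ref->init(ref);	/* first user initializes */
		if (ret)
			return ret;
		item->object = ref->object;
	}
	item->refcount++;
	ref->object = item->object;		/* later users share the object */
	return 0;
}

static void item_unref(struct global_item *item, struct global_ref *ref)
{
	if (--item->refcount == 0) {
		ref->release(ref);		/* last user tears down */
		item->object = NULL;
	}
}

/* Sample hooks with counters, so the lifecycle is observable. */
static int sample_inits, sample_releases;

static int sample_init(struct global_ref *ref)
{
	static int storage;

	sample_inits++;
	ref->object = &storage;
	return 0;
}

static void sample_release(struct global_ref *ref)
{
	(void)ref;
	sample_releases++;
}
```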
...@@ -95,9 +95,6 @@ ...@@ -95,9 +95,6 @@
* broken. * broken.
*/ */
static int drm_version(struct drm_device *dev, void *data,
struct drm_file *file_priv);
/* /*
* Get the bus id. * Get the bus id.
* *
...@@ -481,15 +478,17 @@ static int drm_version(struct drm_device *dev, void *data, ...@@ -481,15 +478,17 @@ static int drm_version(struct drm_device *dev, void *data,
return err; return err;
} }
/* /**
* drm_ioctl_permit - Check ioctl permissions against caller * drm_ioctl_permit - Check ioctl permissions against caller
* *
* @flags: ioctl permission flags. * @flags: ioctl permission flags.
* @file_priv: Pointer to struct drm_file identifying the caller. * @file_priv: Pointer to struct drm_file identifying the caller.
* *
* Checks whether the caller is allowed to run an ioctl with the * Checks whether the caller is allowed to run an ioctl with the
* indicated permissions. If so, returns zero. Otherwise returns an * indicated permissions.
* error code suitable for ioctl return. *
* Returns:
* Zero if allowed, -EACCES otherwise.
*/ */
int drm_ioctl_permit(u32 flags, struct drm_file *file_priv) int drm_ioctl_permit(u32 flags, struct drm_file *file_priv)
{ {
......
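The clarified drm_ioctl_permit() kerneldoc above pins down the contract: the caller's state is checked against per-ioctl permission flags, returning zero if allowed and -EACCES otherwise, so the result can be returned from the ioctl handler directly. A hedged sketch of that shape (the flag values and caller struct are invented for illustration):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define F_ROOT_ONLY 0x1   /* ioctl restricted to privileged callers */
#define F_AUTH      0x2   /* ioctl requires an authenticated client */

struct caller {
    bool is_root;
    bool authenticated;
};

/* Returns zero if allowed, -EACCES otherwise -- suitable for
 * direct return from an ioctl handler. */
int ioctl_permit(unsigned int flags, const struct caller *c)
{
    if ((flags & F_ROOT_ONLY) && !c->is_root)
        return -EACCES;
    if ((flags & F_AUTH) && !c->authenticated)
        return -EACCES;
    return 0;
}
```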
...@@ -982,7 +982,7 @@ static void send_vblank_event(struct drm_device *dev, ...@@ -982,7 +982,7 @@ static void send_vblank_event(struct drm_device *dev,
* period. This helper function implements exactly the required vblank arming * period. This helper function implements exactly the required vblank arming
* behaviour. * behaviour.
* *
* NOTE: Drivers using this to send out the event in struct &drm_crtc_state * NOTE: Drivers using this to send out the event in &struct drm_crtc_state
* as part of an atomic commit must ensure that the next vblank happens at * as part of an atomic commit must ensure that the next vblank happens at
* exactly the same time as the atomic commit is committed to the hardware. This * exactly the same time as the atomic commit is committed to the hardware. This
 * function itself does **not** protect against the next vblank interrupt racing  * function itself does **not** protect against the next vblank interrupt racing
......
...@@ -74,7 +74,14 @@ int drm_legacy_freebufs(struct drm_device *d, void *v, struct drm_file *f); ...@@ -74,7 +74,14 @@ int drm_legacy_freebufs(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_mapbufs(struct drm_device *d, void *v, struct drm_file *f); int drm_legacy_mapbufs(struct drm_device *d, void *v, struct drm_file *f);
int drm_legacy_dma_ioctl(struct drm_device *d, void *v, struct drm_file *f); int drm_legacy_dma_ioctl(struct drm_device *d, void *v, struct drm_file *f);
#ifdef CONFIG_DRM_VM
void drm_legacy_vma_flush(struct drm_device *d); void drm_legacy_vma_flush(struct drm_device *d);
#else
static inline void drm_legacy_vma_flush(struct drm_device *d)
{
/* do nothing */
}
#endif
/* /*
* AGP Support * AGP Support
......
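The drm_legacy_vma_flush() hunk above uses a standard kernel idiom: when a feature (here CONFIG_DRM_VM) is compiled out, a `static inline` no-op with the identical signature is provided, so call sites stay free of #ifdefs and the stub compiles away entirely. A self-contained illustration of the pattern, with CONFIG_FEATURE standing in for the real Kconfig symbol:

```c
#include <assert.h>

struct device { int flush_count; };

#ifdef CONFIG_FEATURE
void vma_flush(struct device *d)
{
    d->flush_count++;        /* real work when the feature is built in */
}
#else
static inline void vma_flush(struct device *d)
{
    (void)d;                 /* do nothing; optimizes away completely */
}
#endif

int call_site(struct device *d)
{
    vma_flush(d);            /* no #ifdef needed at any call site */
    return d->flush_count;
}
```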
...@@ -59,8 +59,8 @@ ...@@ -59,8 +59,8 @@
* *
* The main data struct is &drm_mm, allocations are tracked in &drm_mm_node. * The main data struct is &drm_mm, allocations are tracked in &drm_mm_node.
* Drivers are free to embed either of them into their own suitable * Drivers are free to embed either of them into their own suitable
* datastructures. drm_mm itself will not do any allocations of its own, so if * datastructures. drm_mm itself will not do any memory allocations of its own,
* drivers choose not to embed nodes they need to still allocate them * so if drivers choose not to embed nodes they need to still allocate them
* themselves. * themselves.
* *
* The range allocator also supports reservation of preallocated blocks. This is * The range allocator also supports reservation of preallocated blocks. This is
...@@ -78,7 +78,7 @@ ...@@ -78,7 +78,7 @@
* steep cliff not a real concern. Removing a node again is O(1). * steep cliff not a real concern. Removing a node again is O(1).
* *
* drm_mm supports a few features: Alignment and range restrictions can be * drm_mm supports a few features: Alignment and range restrictions can be
* supplied. Further more every &drm_mm_node has a color value (which is just an * supplied. Furthermore every &drm_mm_node has a color value (which is just an
* opaque unsigned long) which in conjunction with a driver callback can be used * opaque unsigned long) which in conjunction with a driver callback can be used
* to implement sophisticated placement restrictions. The i915 DRM driver uses * to implement sophisticated placement restrictions. The i915 DRM driver uses
* this to implement guard pages between incompatible caching domains in the * this to implement guard pages between incompatible caching domains in the
...@@ -296,11 +296,11 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node, ...@@ -296,11 +296,11 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
* @mm: drm_mm allocator to insert @node into * @mm: drm_mm allocator to insert @node into
* @node: drm_mm_node to insert * @node: drm_mm_node to insert
* *
 * This function inserts an already set-up drm_mm_node into the allocator,  * This function inserts an already set-up &drm_mm_node into the allocator,
* meaning that start, size and color must be set by the caller. This is useful * meaning that start, size and color must be set by the caller. All other
* to initialize the allocator with preallocated objects which must be set-up * fields must be cleared to 0. This is useful to initialize the allocator with
* before the range allocator can be set-up, e.g. when taking over a firmware * preallocated objects which must be set-up before the range allocator can be
* framebuffer. * set-up, e.g. when taking over a firmware framebuffer.
* *
* Returns: * Returns:
* 0 on success, -ENOSPC if there's no hole where @node is. * 0 on success, -ENOSPC if there's no hole where @node is.
...@@ -375,7 +375,7 @@ EXPORT_SYMBOL(drm_mm_reserve_node); ...@@ -375,7 +375,7 @@ EXPORT_SYMBOL(drm_mm_reserve_node);
* @sflags: flags to fine-tune the allocation search * @sflags: flags to fine-tune the allocation search
* @aflags: flags to fine-tune the allocation behavior * @aflags: flags to fine-tune the allocation behavior
* *
* The preallocated node must be cleared to 0. * The preallocated @node must be cleared to 0.
* *
* Returns: * Returns:
* 0 on success, -ENOSPC if there's no suitable hole. * 0 on success, -ENOSPC if there's no suitable hole.
...@@ -537,7 +537,7 @@ void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new) ...@@ -537,7 +537,7 @@ void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new)
EXPORT_SYMBOL(drm_mm_replace_node); EXPORT_SYMBOL(drm_mm_replace_node);
/** /**
* DOC: lru scan roaster * DOC: lru scan roster
* *
* Very often GPUs need to have continuous allocations for a given object. When * Very often GPUs need to have continuous allocations for a given object. When
* evicting objects to make space for a new one it is therefore not most * evicting objects to make space for a new one it is therefore not most
...@@ -549,9 +549,11 @@ EXPORT_SYMBOL(drm_mm_replace_node); ...@@ -549,9 +549,11 @@ EXPORT_SYMBOL(drm_mm_replace_node);
* The DRM range allocator supports this use-case through the scanning * The DRM range allocator supports this use-case through the scanning
* interfaces. First a scan operation needs to be initialized with * interfaces. First a scan operation needs to be initialized with
* drm_mm_scan_init() or drm_mm_scan_init_with_range(). The driver adds * drm_mm_scan_init() or drm_mm_scan_init_with_range(). The driver adds
* objects to the roster (probably by walking an LRU list, but this can be * objects to the roster, probably by walking an LRU list, but this can be
* freely implemented) (using drm_mm_scan_add_block()) until a suitable hole * freely implemented. Eviction candiates are added using
* is found or there are no further evictable objects. * drm_mm_scan_add_block() until a suitable hole is found or there are no
* further evictable objects. Eviction roster metadata is tracked in struct
* &drm_mm_scan.
* *
* The driver must walk through all objects again in exactly the reverse * The driver must walk through all objects again in exactly the reverse
* order to restore the allocator state. Note that while the allocator is used * order to restore the allocator state. Note that while the allocator is used
...@@ -559,7 +561,7 @@ EXPORT_SYMBOL(drm_mm_replace_node); ...@@ -559,7 +561,7 @@ EXPORT_SYMBOL(drm_mm_replace_node);
* *
* Finally the driver evicts all objects selected (drm_mm_scan_remove_block() * Finally the driver evicts all objects selected (drm_mm_scan_remove_block()
* reported true) in the scan, and any overlapping nodes after color adjustment * reported true) in the scan, and any overlapping nodes after color adjustment
* (drm_mm_scan_evict_color()). Adding and removing an object is O(1), and * (drm_mm_scan_color_evict()). Adding and removing an object is O(1), and
* since freeing a node is also O(1) the overall complexity is * since freeing a node is also O(1) the overall complexity is
* O(scanned_objects). So like the free stack which needs to be walked before a * O(scanned_objects). So like the free stack which needs to be walked before a
* scan operation even begins this is linear in the number of objects. It * scan operation even begins this is linear in the number of objects. It
...@@ -705,14 +707,15 @@ EXPORT_SYMBOL(drm_mm_scan_add_block); ...@@ -705,14 +707,15 @@ EXPORT_SYMBOL(drm_mm_scan_add_block);
* @scan: the active drm_mm scanner * @scan: the active drm_mm scanner
* @node: drm_mm_node to remove * @node: drm_mm_node to remove
* *
* Nodes _must_ be removed in exactly the reverse order from the scan list as * Nodes **must** be removed in exactly the reverse order from the scan list as
* they have been added (e.g. using list_add as they are added and then * they have been added (e.g. using list_add() as they are added and then
* list_for_each over that eviction list to remove), otherwise the internal * list_for_each() over that eviction list to remove), otherwise the internal
* state of the memory manager will be corrupted. * state of the memory manager will be corrupted.
* *
* When the scan list is empty, the selected memory nodes can be freed. An * When the scan list is empty, the selected memory nodes can be freed. An
* immediately following drm_mm_search_free with !DRM_MM_SEARCH_BEST will then * immediately following drm_mm_insert_node_in_range_generic() or one of the
* return the just freed block (because its at the top of the free_stack list). * simpler versions of that function with !DRM_MM_SEARCH_BEST will then return
* the just freed block (because its at the top of the free_stack list).
* *
* Returns: * Returns:
* True if this block should be evicted, false otherwise. Will always * True if this block should be evicted, false otherwise. Will always
...@@ -832,8 +835,7 @@ void drm_mm_takedown(struct drm_mm *mm) ...@@ -832,8 +835,7 @@ void drm_mm_takedown(struct drm_mm *mm)
} }
EXPORT_SYMBOL(drm_mm_takedown); EXPORT_SYMBOL(drm_mm_takedown);
static u64 drm_mm_debug_hole(const struct drm_mm_node *entry, static u64 drm_mm_dump_hole(struct drm_printer *p, const struct drm_mm_node *entry)
const char *prefix)
{ {
u64 hole_start, hole_end, hole_size; u64 hole_start, hole_end, hole_size;
...@@ -841,49 +843,7 @@ static u64 drm_mm_debug_hole(const struct drm_mm_node *entry, ...@@ -841,49 +843,7 @@ static u64 drm_mm_debug_hole(const struct drm_mm_node *entry,
hole_start = drm_mm_hole_node_start(entry); hole_start = drm_mm_hole_node_start(entry);
hole_end = drm_mm_hole_node_end(entry); hole_end = drm_mm_hole_node_end(entry);
hole_size = hole_end - hole_start; hole_size = hole_end - hole_start;
pr_debug("%s %#llx-%#llx: %llu: free\n", prefix, hole_start, drm_printf(p, "%#018llx-%#018llx: %llu: free\n", hole_start,
hole_end, hole_size);
return hole_size;
}
return 0;
}
/**
* drm_mm_debug_table - dump allocator state to dmesg
* @mm: drm_mm allocator to dump
* @prefix: prefix to use for dumping to dmesg
*/
void drm_mm_debug_table(const struct drm_mm *mm, const char *prefix)
{
const struct drm_mm_node *entry;
u64 total_used = 0, total_free = 0, total = 0;
total_free += drm_mm_debug_hole(&mm->head_node, prefix);
drm_mm_for_each_node(entry, mm) {
pr_debug("%s %#llx-%#llx: %llu: used\n", prefix, entry->start,
entry->start + entry->size, entry->size);
total_used += entry->size;
total_free += drm_mm_debug_hole(entry, prefix);
}
total = total_free + total_used;
pr_debug("%s total: %llu, used %llu free %llu\n", prefix, total,
total_used, total_free);
}
EXPORT_SYMBOL(drm_mm_debug_table);
#if defined(CONFIG_DEBUG_FS)
static u64 drm_mm_dump_hole(struct seq_file *m, const struct drm_mm_node *entry)
{
u64 hole_start, hole_end, hole_size;
if (entry->hole_follows) {
hole_start = drm_mm_hole_node_start(entry);
hole_end = drm_mm_hole_node_end(entry);
hole_size = hole_end - hole_start;
seq_printf(m, "%#018llx-%#018llx: %llu: free\n", hole_start,
hole_end, hole_size); hole_end, hole_size);
return hole_size; return hole_size;
} }
...@@ -892,28 +852,26 @@ static u64 drm_mm_dump_hole(struct seq_file *m, const struct drm_mm_node *entry) ...@@ -892,28 +852,26 @@ static u64 drm_mm_dump_hole(struct seq_file *m, const struct drm_mm_node *entry)
} }
/** /**
* drm_mm_dump_table - dump allocator state to a seq_file * drm_mm_print - print allocator state
* @m: seq_file to dump to * @mm: drm_mm allocator to print
* @mm: drm_mm allocator to dump * @p: DRM printer to use
*/ */
int drm_mm_dump_table(struct seq_file *m, const struct drm_mm *mm) void drm_mm_print(const struct drm_mm *mm, struct drm_printer *p)
{ {
const struct drm_mm_node *entry; const struct drm_mm_node *entry;
u64 total_used = 0, total_free = 0, total = 0; u64 total_used = 0, total_free = 0, total = 0;
total_free += drm_mm_dump_hole(m, &mm->head_node); total_free += drm_mm_dump_hole(p, &mm->head_node);
drm_mm_for_each_node(entry, mm) { drm_mm_for_each_node(entry, mm) {
seq_printf(m, "%#018llx-%#018llx: %llu: used\n", entry->start, drm_printf(p, "%#018llx-%#018llx: %llu: used\n", entry->start,
entry->start + entry->size, entry->size); entry->start + entry->size, entry->size);
total_used += entry->size; total_used += entry->size;
total_free += drm_mm_dump_hole(m, entry); total_free += drm_mm_dump_hole(p, entry);
} }
total = total_free + total_used; total = total_free + total_used;
seq_printf(m, "total: %llu, used %llu free %llu\n", total, drm_printf(p, "total: %llu, used %llu free %llu\n", total,
total_used, total_free); total_used, total_free);
return 0;
} }
EXPORT_SYMBOL(drm_mm_dump_table); EXPORT_SYMBOL(drm_mm_print);
#endif
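The hunk above collapses drm_mm_debug_table() (pr_debug to dmesg) and drm_mm_dump_table() (seq_printf to debugfs) into one drm_mm_print() that writes through a &drm_printer: the dump code calls a single printf-like helper and the destination is chosen by a function pointer carried in the printer. A simplified userspace sketch of that design, with a buffer back end standing in for seq_file:

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

struct printer {
    void (*printfn)(struct printer *p, const char *fmt, va_list ap);
    char *buf;               /* used by the buffer back end */
    size_t pos, size;
};

static void printfn_buffer(struct printer *p, const char *fmt, va_list ap)
{
    int n = vsnprintf(p->buf + p->pos, p->size - p->pos, fmt, ap);
    if (n > 0)
        p->pos += (size_t)n;
}

static void pr_printf(struct printer *p, const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    p->printfn(p, fmt, ap);  /* back-end-agnostic, like drm_printf() */
    va_end(ap);
}

/* The dump code itself no longer cares where its output goes. */
static void dump_stats(struct printer *p, unsigned long used,
                       unsigned long free)
{
    pr_printf(p, "total: %lu, used %lu free %lu\n",
              used + free, used, free);
}
```

Adding a dmesg-style back end is then just another printfn, which is exactly why the seq_file-only `#if defined(CONFIG_DEBUG_FS)` guard could be dropped.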
...@@ -257,10 +257,6 @@ int drm_get_pci_dev(struct pci_dev *pdev, const struct pci_device_id *ent, ...@@ -257,10 +257,6 @@ int drm_get_pci_dev(struct pci_dev *pdev, const struct pci_device_id *ent,
if (ret) if (ret)
goto err_agp; goto err_agp;
DRM_INFO("Initialized %s %d.%d.%d %s for %s on minor %d\n",
driver->name, driver->major, driver->minor, driver->patchlevel,
driver->date, pci_name(pdev), dev->primary->index);
 /* No locking needed since shadow-attach is single-threaded, as it may  /* No locking needed since shadow-attach is single-threaded, as it may
* only be called from the per-driver module init hook. */ * only be called from the per-driver module init hook. */
if (drm_core_check_feature(dev, DRIVER_LEGACY)) if (drm_core_check_feature(dev, DRIVER_LEGACY))
......
...@@ -37,7 +37,7 @@ ...@@ -37,7 +37,7 @@
* rotation or Z-position. All these properties are stored in &drm_plane_state. * rotation or Z-position. All these properties are stored in &drm_plane_state.
* *
 * To create a plane, a KMS driver allocates and zeroes an instance of  * To create a plane, a KMS driver allocates and zeroes an instance of
* struct &drm_plane (possibly as part of a larger structure) and registers it * &struct drm_plane (possibly as part of a larger structure) and registers it
* with a call to drm_universal_plane_init(). * with a call to drm_universal_plane_init().
* *
* Cursor and overlay planes are optional. All drivers should provide one * Cursor and overlay planes are optional. All drivers should provide one
...@@ -254,7 +254,7 @@ EXPORT_SYMBOL(drm_plane_cleanup); ...@@ -254,7 +254,7 @@ EXPORT_SYMBOL(drm_plane_cleanup);
* @idx: index of registered plane to find for * @idx: index of registered plane to find for
* *
* Given a plane index, return the registered plane from DRM device's * Given a plane index, return the registered plane from DRM device's
* list of planes with matching index. * list of planes with matching index. This is the inverse of drm_plane_index().
*/ */
struct drm_plane * struct drm_plane *
drm_plane_from_index(struct drm_device *dev, int idx) drm_plane_from_index(struct drm_device *dev, int idx)
......
...@@ -60,7 +60,7 @@ ...@@ -60,7 +60,7 @@
* Again drivers are strongly urged to switch to the new interfaces. * Again drivers are strongly urged to switch to the new interfaces.
* *
* The plane helpers share the function table structures with other helpers, * The plane helpers share the function table structures with other helpers,
* specifically also the atomic helpers. See struct &drm_plane_helper_funcs for * specifically also the atomic helpers. See &struct drm_plane_helper_funcs for
* the details. * the details.
*/ */
......
...@@ -57,10 +57,6 @@ static int drm_get_platform_dev(struct platform_device *platdev, ...@@ -57,10 +57,6 @@ static int drm_get_platform_dev(struct platform_device *platdev,
if (ret) if (ret)
goto err_free; goto err_free;
DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
driver->name, driver->major, driver->minor, driver->patchlevel,
driver->date, dev->primary->index);
return 0; return 0;
err_free: err_free:
......
...@@ -40,6 +40,12 @@ void __drm_printfn_info(struct drm_printer *p, struct va_format *vaf) ...@@ -40,6 +40,12 @@ void __drm_printfn_info(struct drm_printer *p, struct va_format *vaf)
} }
EXPORT_SYMBOL(__drm_printfn_info); EXPORT_SYMBOL(__drm_printfn_info);
void __drm_printfn_debug(struct drm_printer *p, struct va_format *vaf)
{
pr_debug("%s %pV", p->prefix, vaf);
}
EXPORT_SYMBOL(__drm_printfn_debug);
/** /**
* drm_printf - print to a &drm_printer stream * drm_printf - print to a &drm_printer stream
* @p: the &drm_printer * @p: the &drm_printer
......
...@@ -55,7 +55,7 @@ ...@@ -55,7 +55,7 @@
* handling code to avoid probing unrelated outputs. * handling code to avoid probing unrelated outputs.
* *
* The probe helpers share the function table structures with other display * The probe helpers share the function table structures with other display
* helper libraries. See struct &drm_connector_helper_funcs for the details. * helper libraries. See &struct drm_connector_helper_funcs for the details.
*/ */
static bool drm_kms_helper_poll = true; static bool drm_kms_helper_poll = true;
......
...@@ -34,7 +34,7 @@ ...@@ -34,7 +34,7 @@
* even the only way to transport metadata about the desired new modeset * even the only way to transport metadata about the desired new modeset
* configuration from userspace to the kernel. Properties have a well-defined * configuration from userspace to the kernel. Properties have a well-defined
* value range, which is enforced by the drm core. See the documentation of the * value range, which is enforced by the drm core. See the documentation of the
* flags member of struct &drm_property for an overview of the different * flags member of &struct drm_property for an overview of the different
* property types and ranges. * property types and ranges.
* *
* Properties don't store the current value directly, but need to be * Properties don't store the current value directly, but need to be
......
...@@ -371,10 +371,10 @@ EXPORT_SYMBOL(drm_rect_rotate); ...@@ -371,10 +371,10 @@ EXPORT_SYMBOL(drm_rect_rotate);
* to the vertical axis of the original untransformed * to the vertical axis of the original untransformed
* coordinate space, so that you never have to flip * coordinate space, so that you never have to flip
 * them when doing a rotation and its inverse.  * them when doing a rotation and its inverse.
* That is, if you do: * That is, if you do ::
* *
* drm_rotate(&r, width, height, rotation); * drm_rotate(&r, width, height, rotation);
* drm_rotate_inv(&r, width, height, rotation); * drm_rotate_inv(&r, width, height, rotation);
* *
* you will always get back the original rectangle. * you will always get back the original rectangle.
*/ */
......
...@@ -23,7 +23,7 @@ ...@@ -23,7 +23,7 @@
* *
* drm_simple_display_pipe_init() initializes a simple display pipeline * drm_simple_display_pipe_init() initializes a simple display pipeline
* which has only one full-screen scanout buffer feeding one output. The * which has only one full-screen scanout buffer feeding one output. The
* pipeline is represented by struct &drm_simple_display_pipe and binds * pipeline is represented by &struct drm_simple_display_pipe and binds
* together &drm_plane, &drm_crtc and &drm_encoder structures into one fixed * together &drm_plane, &drm_crtc and &drm_encoder structures into one fixed
* entity. Some flexibility for code reuse is provided through a separately * entity. Some flexibility for code reuse is provided through a separately
* allocated &drm_connector object and supporting optional &drm_bridge * allocated &drm_connector object and supporting optional &drm_bridge
......
...@@ -147,21 +147,23 @@ static int etnaviv_gem_show(struct drm_device *dev, struct seq_file *m) ...@@ -147,21 +147,23 @@ static int etnaviv_gem_show(struct drm_device *dev, struct seq_file *m)
static int etnaviv_mm_show(struct drm_device *dev, struct seq_file *m) static int etnaviv_mm_show(struct drm_device *dev, struct seq_file *m)
{ {
int ret; struct drm_printer p = drm_seq_file_printer(m);
read_lock(&dev->vma_offset_manager->vm_lock); read_lock(&dev->vma_offset_manager->vm_lock);
ret = drm_mm_dump_table(m, &dev->vma_offset_manager->vm_addr_space_mm); drm_mm_print(&dev->vma_offset_manager->vm_addr_space_mm, &p);
read_unlock(&dev->vma_offset_manager->vm_lock); read_unlock(&dev->vma_offset_manager->vm_lock);
return ret; return 0;
} }
static int etnaviv_mmu_show(struct etnaviv_gpu *gpu, struct seq_file *m) static int etnaviv_mmu_show(struct etnaviv_gpu *gpu, struct seq_file *m)
{ {
struct drm_printer p = drm_seq_file_printer(m);
seq_printf(m, "Active Objects (%s):\n", dev_name(gpu->dev)); seq_printf(m, "Active Objects (%s):\n", dev_name(gpu->dev));
mutex_lock(&gpu->mmu->lock); mutex_lock(&gpu->mmu->lock);
drm_mm_dump_table(m, &gpu->mmu->mm); drm_mm_print(&gpu->mmu->mm, &p);
mutex_unlock(&gpu->mmu->lock); mutex_unlock(&gpu->mmu->lock);
return 0; return 0;
......
...@@ -186,7 +186,7 @@ static int exynos_drm_load(struct drm_device *dev, unsigned long flags) ...@@ -186,7 +186,7 @@ static int exynos_drm_load(struct drm_device *dev, unsigned long flags)
return ret; return ret;
} }
static int exynos_drm_unload(struct drm_device *dev) static void exynos_drm_unload(struct drm_device *dev)
{ {
exynos_drm_device_subdrv_remove(dev); exynos_drm_device_subdrv_remove(dev);
...@@ -200,8 +200,6 @@ static int exynos_drm_unload(struct drm_device *dev) ...@@ -200,8 +200,6 @@ static int exynos_drm_unload(struct drm_device *dev)
kfree(dev->dev_private); kfree(dev->dev_private);
dev->dev_private = NULL; dev->dev_private = NULL;
return 0;
} }
static int commit_is_pending(struct exynos_drm_private *priv, u32 crtcs) static int commit_is_pending(struct exynos_drm_private *priv, u32 crtcs)
......
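The exynos hunk above is the first of several in this pull converting the driver ->unload() hook from `int` to `void`: teardown runs on a path that cannot act on failure, so a return code only invites dead error handling. A sketch of the before/after shape (driver struct and names invented):

```c
#include <assert.h>

struct drv { int loaded; };

/* old style: returned an int that no caller could meaningfully consume
 *   int drv_unload(struct drv *d) { ...; return 0; }
 * new style: void, with an early-out instead of "return 0" */
void drv_unload(struct drv *d)
{
    if (!d->loaded)
        return;              /* nothing to tear down */
    d->loaded = 0;           /* release resources unconditionally */
}
```

The same transformation shows up below for fsl-dcu, gma500, mga, mgag200, and nouveau.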
...@@ -116,7 +116,7 @@ static int fsl_dcu_load(struct drm_device *dev, unsigned long flags) ...@@ -116,7 +116,7 @@ static int fsl_dcu_load(struct drm_device *dev, unsigned long flags)
return ret; return ret;
} }
static int fsl_dcu_unload(struct drm_device *dev) static void fsl_dcu_unload(struct drm_device *dev)
{ {
struct fsl_dcu_drm_device *fsl_dev = dev->dev_private; struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
...@@ -131,8 +131,6 @@ static int fsl_dcu_unload(struct drm_device *dev) ...@@ -131,8 +131,6 @@ static int fsl_dcu_unload(struct drm_device *dev)
drm_irq_uninstall(dev); drm_irq_uninstall(dev);
dev->dev_private = NULL; dev->dev_private = NULL;
return 0;
} }
static irqreturn_t fsl_dcu_drm_irq(int irq, void *arg) static irqreturn_t fsl_dcu_drm_irq(int irq, void *arg)
...@@ -415,10 +413,6 @@ static int fsl_dcu_drm_probe(struct platform_device *pdev) ...@@ -415,10 +413,6 @@ static int fsl_dcu_drm_probe(struct platform_device *pdev)
if (ret < 0) if (ret < 0)
goto unref; goto unref;
DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n", driver->name,
driver->major, driver->minor, driver->patchlevel,
driver->date, drm->primary->index);
return 0; return 0;
unref: unref:
......
config DRM_GMA500 config DRM_GMA500
tristate "Intel GMA5/600 KMS Framebuffer" tristate "Intel GMA5/600 KMS Framebuffer"
depends on DRM && PCI && X86 depends on DRM && PCI && X86 && MMU
select DRM_KMS_HELPER select DRM_KMS_HELPER
select DRM_TTM select DRM_TTM
# GMA500 depends on ACPI_VIDEO when ACPI is enabled, just like i915 # GMA500 depends on ACPI_VIDEO when ACPI is enabled, just like i915
......
...@@ -159,7 +159,7 @@ static int psb_do_init(struct drm_device *dev) ...@@ -159,7 +159,7 @@ static int psb_do_init(struct drm_device *dev)
return 0; return 0;
} }
static int psb_driver_unload(struct drm_device *dev) static void psb_driver_unload(struct drm_device *dev)
{ {
struct drm_psb_private *dev_priv = dev->dev_private; struct drm_psb_private *dev_priv = dev->dev_private;
...@@ -220,7 +220,6 @@ static int psb_driver_unload(struct drm_device *dev) ...@@ -220,7 +220,6 @@ static int psb_driver_unload(struct drm_device *dev)
dev->dev_private = NULL; dev->dev_private = NULL;
} }
gma_power_uninit(dev); gma_power_uninit(dev);
return 0;
} }
static int psb_driver_load(struct drm_device *dev, unsigned long flags) static int psb_driver_load(struct drm_device *dev, unsigned long flags)
......
config DRM_HISI_HIBMC config DRM_HISI_HIBMC
tristate "DRM Support for Hisilicon Hibmc" tristate "DRM Support for Hisilicon Hibmc"
depends on DRM && PCI depends on DRM && PCI && MMU
select DRM_KMS_HELPER select DRM_KMS_HELPER
select DRM_TTM select DRM_TTM
......
...@@ -217,10 +217,6 @@ static int kirin_drm_bind(struct device *dev) ...@@ -217,10 +217,6 @@ static int kirin_drm_bind(struct device *dev)
if (ret) if (ret)
goto err_kms_cleanup; goto err_kms_cleanup;
DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
driver->name, driver->major, driver->minor, driver->patchlevel,
driver->date, drm_dev->primary->index);
return 0; return 0;
err_kms_cleanup: err_kms_cleanup:
......
...@@ -447,7 +447,7 @@ static bool intel_fb_initial_config(struct drm_fb_helper *fb_helper, ...@@ -447,7 +447,7 @@ static bool intel_fb_initial_config(struct drm_fb_helper *fb_helper,
connector->name); connector->name);
/* go for command line mode first */ /* go for command line mode first */
modes[i] = drm_pick_cmdline_mode(fb_conn, width, height); modes[i] = drm_pick_cmdline_mode(fb_conn);
/* try for preferred next */ /* try for preferred next */
if (!modes[i]) { if (!modes[i]) {
......
...@@ -150,13 +150,11 @@ __releases(&tve->lock) ...@@ -150,13 +150,11 @@ __releases(&tve->lock)
static void tve_enable(struct imx_tve *tve) static void tve_enable(struct imx_tve *tve)
{ {
int ret;
if (!tve->enabled) { if (!tve->enabled) {
tve->enabled = true; tve->enabled = true;
clk_prepare_enable(tve->clk); clk_prepare_enable(tve->clk);
ret = regmap_update_bits(tve->regmap, TVE_COM_CONF_REG, regmap_update_bits(tve->regmap, TVE_COM_CONF_REG,
TVE_EN, TVE_EN); TVE_EN, TVE_EN);
} }
/* clear interrupt status register */ /* clear interrupt status register */
...@@ -174,12 +172,9 @@ static void tve_enable(struct imx_tve *tve) ...@@ -174,12 +172,9 @@ static void tve_enable(struct imx_tve *tve)
static void tve_disable(struct imx_tve *tve) static void tve_disable(struct imx_tve *tve)
{ {
int ret;
if (tve->enabled) { if (tve->enabled) {
tve->enabled = false; tve->enabled = false;
ret = regmap_update_bits(tve->regmap, TVE_COM_CONF_REG, regmap_update_bits(tve->regmap, TVE_COM_CONF_REG, TVE_EN, 0);
TVE_EN, 0);
clk_disable_unprepare(tve->clk); clk_disable_unprepare(tve->clk);
} }
} }
......
...@@ -1127,12 +1127,10 @@ int mga_dma_buffers(struct drm_device *dev, void *data, ...@@ -1127,12 +1127,10 @@ int mga_dma_buffers(struct drm_device *dev, void *data,
/** /**
* Called just before the module is unloaded. * Called just before the module is unloaded.
*/ */
int mga_driver_unload(struct drm_device *dev) void mga_driver_unload(struct drm_device *dev)
{ {
kfree(dev->dev_private); kfree(dev->dev_private);
dev->dev_private = NULL; dev->dev_private = NULL;
return 0;
} }
/** /**
......
...@@ -166,7 +166,7 @@ extern int mga_dma_reset(struct drm_device *dev, void *data, ...@@ -166,7 +166,7 @@ extern int mga_dma_reset(struct drm_device *dev, void *data,
extern int mga_dma_buffers(struct drm_device *dev, void *data, extern int mga_dma_buffers(struct drm_device *dev, void *data,
struct drm_file *file_priv); struct drm_file *file_priv);
extern int mga_driver_load(struct drm_device *dev, unsigned long flags); extern int mga_driver_load(struct drm_device *dev, unsigned long flags);
extern int mga_driver_unload(struct drm_device *dev); extern void mga_driver_unload(struct drm_device *dev);
extern void mga_driver_lastclose(struct drm_device *dev); extern void mga_driver_lastclose(struct drm_device *dev);
extern int mga_driver_dma_quiescent(struct drm_device *dev); extern int mga_driver_dma_quiescent(struct drm_device *dev);
......
config DRM_MGAG200 config DRM_MGAG200
tristate "Kernel modesetting driver for MGA G200 server engines" tristate "Kernel modesetting driver for MGA G200 server engines"
depends on DRM && PCI depends on DRM && PCI && MMU
select DRM_KMS_HELPER select DRM_KMS_HELPER
select DRM_TTM select DRM_TTM
help help
......
...@@ -258,7 +258,7 @@ int mgag200_framebuffer_init(struct drm_device *dev, ...@@ -258,7 +258,7 @@ int mgag200_framebuffer_init(struct drm_device *dev,
int mgag200_driver_load(struct drm_device *dev, unsigned long flags); int mgag200_driver_load(struct drm_device *dev, unsigned long flags);
int mgag200_driver_unload(struct drm_device *dev); void mgag200_driver_unload(struct drm_device *dev);
int mgag200_gem_create(struct drm_device *dev, int mgag200_gem_create(struct drm_device *dev,
u32 size, bool iskernel, u32 size, bool iskernel,
struct drm_gem_object **obj); struct drm_gem_object **obj);
......
...@@ -145,6 +145,8 @@ static int mga_vram_init(struct mga_device *mdev) ...@@ -145,6 +145,8 @@ static int mga_vram_init(struct mga_device *mdev)
} }
mem = pci_iomap(mdev->dev->pdev, 0, 0); mem = pci_iomap(mdev->dev->pdev, 0, 0);
if (!mem)
return -ENOMEM;
mdev->mc.vram_size = mga_probe_vram(mdev, mem); mdev->mc.vram_size = mga_probe_vram(mdev, mem);
...@@ -262,18 +264,17 @@ int mgag200_driver_load(struct drm_device *dev, unsigned long flags) ...@@ -262,18 +264,17 @@ int mgag200_driver_load(struct drm_device *dev, unsigned long flags)
return r; return r;
} }
int mgag200_driver_unload(struct drm_device *dev) void mgag200_driver_unload(struct drm_device *dev)
{ {
struct mga_device *mdev = dev->dev_private; struct mga_device *mdev = dev->dev_private;
if (mdev == NULL) if (mdev == NULL)
return 0; return;
mgag200_modeset_fini(mdev); mgag200_modeset_fini(mdev);
mgag200_fbdev_fini(mdev); mgag200_fbdev_fini(mdev);
drm_mode_config_cleanup(dev); drm_mode_config_cleanup(dev);
mgag200_mm_fini(mdev); mgag200_mm_fini(mdev);
dev->dev_private = NULL; dev->dev_private = NULL;
return 0;
} }
int mgag200_gem_create(struct drm_device *dev, int mgag200_gem_create(struct drm_device *dev,
......
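The mgag200 hunk above adds a previously missing NULL check after pci_iomap(), which can fail and return NULL, before the mapping is dereferenced to probe VRAM. A userspace analogue of the fix, with fake_iomap() standing in for the real mapping primitive:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

static char backing[16];

static void *fake_iomap(int fail)
{
    return fail ? NULL : backing;   /* mapping calls can fail */
}

int vram_init(int fail, size_t *vram_size)
{
    char *mem = fake_iomap(fail);
    if (!mem)                       /* the added check: bail out cleanly */
        return -ENOMEM;
    *vram_size = sizeof(backing);   /* only now safe to probe through mem */
    return 0;
}
```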
...@@ -52,7 +52,11 @@ static int msm_gem_show(struct drm_device *dev, struct seq_file *m) ...@@ -52,7 +52,11 @@ static int msm_gem_show(struct drm_device *dev, struct seq_file *m)
static int msm_mm_show(struct drm_device *dev, struct seq_file *m) static int msm_mm_show(struct drm_device *dev, struct seq_file *m)
{ {
return drm_mm_dump_table(m, &dev->vma_offset_manager->vm_addr_space_mm); struct drm_printer p = drm_seq_file_printer(m);
drm_mm_print(&dev->vma_offset_manager->vm_addr_space_mm, &p);
return 0;
} }
static int msm_fb_show(struct drm_device *dev, struct seq_file *m) static int msm_fb_show(struct drm_device *dev, struct seq_file *m)
......
config DRM_NOUVEAU config DRM_NOUVEAU
tristate "Nouveau (NVIDIA) cards" tristate "Nouveau (NVIDIA) cards"
depends on DRM && PCI depends on DRM && PCI && MMU
select FW_LOADER select FW_LOADER
select DRM_KMS_HELPER select DRM_KMS_HELPER
select DRM_TTM select DRM_TTM
...@@ -16,6 +16,7 @@ config DRM_NOUVEAU ...@@ -16,6 +16,7 @@ config DRM_NOUVEAU
select INPUT if ACPI && X86 select INPUT if ACPI && X86
select THERMAL if ACPI && X86 select THERMAL if ACPI && X86
select ACPI_VIDEO if ACPI && X86 select ACPI_VIDEO if ACPI && X86
select DRM_VM
help help
Choose this option for open-source NVIDIA support. Choose this option for open-source NVIDIA support.
......
...@@ -502,7 +502,7 @@ nouveau_drm_load(struct drm_device *dev, unsigned long flags) ...@@ -502,7 +502,7 @@ nouveau_drm_load(struct drm_device *dev, unsigned long flags)
return ret; return ret;
} }
static int static void
nouveau_drm_unload(struct drm_device *dev) nouveau_drm_unload(struct drm_device *dev)
{ {
struct nouveau_drm *drm = nouveau_drm(dev); struct nouveau_drm *drm = nouveau_drm(dev);
...@@ -531,7 +531,6 @@ nouveau_drm_unload(struct drm_device *dev) ...@@ -531,7 +531,6 @@ nouveau_drm_unload(struct drm_device *dev)
if (drm->hdmi_device) if (drm->hdmi_device)
pci_dev_put(drm->hdmi_device); pci_dev_put(drm->hdmi_device);
nouveau_cli_destroy(&drm->client); nouveau_cli_destroy(&drm->client);
return 0;
} }
void void
......
...@@ -50,7 +50,11 @@ static int mm_show(struct seq_file *m, void *arg) ...@@ -50,7 +50,11 @@ static int mm_show(struct seq_file *m, void *arg)
{ {
struct drm_info_node *node = (struct drm_info_node *) m->private; struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev; struct drm_device *dev = node->minor->dev;
return drm_mm_dump_table(m, &dev->vma_offset_manager->vm_addr_space_mm); struct drm_printer p = drm_seq_file_printer(m);
drm_mm_print(&dev->vma_offset_manager->vm_addr_space_mm, &p);
return 0;
} }
#ifdef CONFIG_DRM_FBDEV_EMULATION #ifdef CONFIG_DRM_FBDEV_EMULATION
......
...@@ -694,7 +694,7 @@ static int dev_load(struct drm_device *dev, unsigned long flags) ...@@ -694,7 +694,7 @@ static int dev_load(struct drm_device *dev, unsigned long flags)
return 0; return 0;
} }
static int dev_unload(struct drm_device *dev) static void dev_unload(struct drm_device *dev)
{ {
struct omap_drm_private *priv = dev->dev_private; struct omap_drm_private *priv = dev->dev_private;
...@@ -717,8 +717,6 @@ static int dev_unload(struct drm_device *dev) ...@@ -717,8 +717,6 @@ static int dev_unload(struct drm_device *dev)
dev->dev_private = NULL; dev->dev_private = NULL;
dev_set_drvdata(dev->dev, NULL); dev_set_drvdata(dev->dev, NULL);
return 0;
} }
static int dev_open(struct drm_device *dev, struct drm_file *file) static int dev_open(struct drm_device *dev, struct drm_file *file)
......
config DRM_QXL config DRM_QXL
tristate "QXL virtual GPU" tristate "QXL virtual GPU"
depends on DRM && PCI depends on DRM && PCI && MMU
select DRM_KMS_HELPER select DRM_KMS_HELPER
select DRM_TTM select DRM_TTM
select CRC32 select CRC32
......
...@@ -337,7 +337,7 @@ extern const struct drm_ioctl_desc qxl_ioctls[]; ...@@ -337,7 +337,7 @@ extern const struct drm_ioctl_desc qxl_ioctls[];
extern int qxl_max_ioctl; extern int qxl_max_ioctl;
int qxl_driver_load(struct drm_device *dev, unsigned long flags); int qxl_driver_load(struct drm_device *dev, unsigned long flags);
int qxl_driver_unload(struct drm_device *dev); void qxl_driver_unload(struct drm_device *dev);
int qxl_modeset_init(struct qxl_device *qdev); int qxl_modeset_init(struct qxl_device *qdev);
void qxl_modeset_fini(struct qxl_device *qdev); void qxl_modeset_fini(struct qxl_device *qdev);
......
...@@ -285,12 +285,12 @@ static void qxl_device_fini(struct qxl_device *qdev) ...@@ -285,12 +285,12 @@ static void qxl_device_fini(struct qxl_device *qdev)
qxl_debugfs_remove_files(qdev); qxl_debugfs_remove_files(qdev);
} }
int qxl_driver_unload(struct drm_device *dev) void qxl_driver_unload(struct drm_device *dev)
{ {
struct qxl_device *qdev = dev->dev_private; struct qxl_device *qdev = dev->dev_private;
if (qdev == NULL) if (qdev == NULL)
return 0; return;
drm_vblank_cleanup(dev); drm_vblank_cleanup(dev);
...@@ -299,7 +299,6 @@ int qxl_driver_unload(struct drm_device *dev) ...@@ -299,7 +299,6 @@ int qxl_driver_unload(struct drm_device *dev)
kfree(qdev); kfree(qdev);
dev->dev_private = NULL; dev->dev_private = NULL;
return 0;
} }
int qxl_driver_load(struct drm_device *dev, unsigned long flags) int qxl_driver_load(struct drm_device *dev, unsigned long flags)
......
...@@ -463,13 +463,13 @@ static int qxl_mm_dump_table(struct seq_file *m, void *data) ...@@ -463,13 +463,13 @@ static int qxl_mm_dump_table(struct seq_file *m, void *data)
struct drm_mm *mm = (struct drm_mm *)node->info_ent->data; struct drm_mm *mm = (struct drm_mm *)node->info_ent->data;
struct drm_device *dev = node->minor->dev; struct drm_device *dev = node->minor->dev;
struct qxl_device *rdev = dev->dev_private; struct qxl_device *rdev = dev->dev_private;
int ret;
struct ttm_bo_global *glob = rdev->mman.bdev.glob; struct ttm_bo_global *glob = rdev->mman.bdev.glob;
struct drm_printer p = drm_seq_file_printer(m);
spin_lock(&glob->lru_lock); spin_lock(&glob->lru_lock);
ret = drm_mm_dump_table(m, mm); drm_mm_print(mm, &p);
spin_unlock(&glob->lru_lock); spin_unlock(&glob->lru_lock);
return ret; return 0;
} }
#endif #endif
......
...@@ -102,7 +102,7 @@ ...@@ -102,7 +102,7 @@
#define KMS_DRIVER_MINOR 48 #define KMS_DRIVER_MINOR 48
#define KMS_DRIVER_PATCHLEVEL 0 #define KMS_DRIVER_PATCHLEVEL 0
int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags); int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags);
int radeon_driver_unload_kms(struct drm_device *dev); void radeon_driver_unload_kms(struct drm_device *dev);
void radeon_driver_lastclose_kms(struct drm_device *dev); void radeon_driver_lastclose_kms(struct drm_device *dev);
int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv); int radeon_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv);
void radeon_driver_postclose_kms(struct drm_device *dev, void radeon_driver_postclose_kms(struct drm_device *dev,
......
...@@ -53,12 +53,12 @@ static inline bool radeon_has_atpx(void) { return false; } ...@@ -53,12 +53,12 @@ static inline bool radeon_has_atpx(void) { return false; }
* the rest of the device (CP, writeback, etc.). * the rest of the device (CP, writeback, etc.).
* Returns 0 on success. * Returns 0 on success.
*/ */
int radeon_driver_unload_kms(struct drm_device *dev) void radeon_driver_unload_kms(struct drm_device *dev)
{ {
struct radeon_device *rdev = dev->dev_private; struct radeon_device *rdev = dev->dev_private;
if (rdev == NULL) if (rdev == NULL)
return 0; return;
if (rdev->rmmio == NULL) if (rdev->rmmio == NULL)
goto done_free; goto done_free;
...@@ -78,7 +78,6 @@ int radeon_driver_unload_kms(struct drm_device *dev) ...@@ -78,7 +78,6 @@ int radeon_driver_unload_kms(struct drm_device *dev)
done_free: done_free:
kfree(rdev); kfree(rdev);
dev->dev_private = NULL; dev->dev_private = NULL;
return 0;
} }
/** /**
......
...@@ -1033,13 +1033,13 @@ static int radeon_mm_dump_table(struct seq_file *m, void *data) ...@@ -1033,13 +1033,13 @@ static int radeon_mm_dump_table(struct seq_file *m, void *data)
struct drm_device *dev = node->minor->dev; struct drm_device *dev = node->minor->dev;
struct radeon_device *rdev = dev->dev_private; struct radeon_device *rdev = dev->dev_private;
struct drm_mm *mm = (struct drm_mm *)rdev->mman.bdev.man[ttm_pl].priv; struct drm_mm *mm = (struct drm_mm *)rdev->mman.bdev.man[ttm_pl].priv;
int ret;
struct ttm_bo_global *glob = rdev->mman.bdev.glob; struct ttm_bo_global *glob = rdev->mman.bdev.glob;
struct drm_printer p = drm_seq_file_printer(m);
spin_lock(&glob->lru_lock); spin_lock(&glob->lru_lock);
ret = drm_mm_dump_table(m, mm); drm_mm_print(mm, &p);
spin_unlock(&glob->lru_lock); spin_unlock(&glob->lru_lock);
return ret; return 0;
} }
static int ttm_pl_vram = TTM_PL_VRAM; static int ttm_pl_vram = TTM_PL_VRAM;
......
...@@ -99,24 +99,11 @@ void rockchip_unregister_crtc_funcs(struct drm_crtc *crtc) ...@@ -99,24 +99,11 @@ void rockchip_unregister_crtc_funcs(struct drm_crtc *crtc)
priv->crtc_funcs[pipe] = NULL; priv->crtc_funcs[pipe] = NULL;
} }
static struct drm_crtc *rockchip_crtc_from_pipe(struct drm_device *drm,
int pipe)
{
struct drm_crtc *crtc;
int i = 0;
list_for_each_entry(crtc, &drm->mode_config.crtc_list, head)
if (i++ == pipe)
return crtc;
return NULL;
}
static int rockchip_drm_crtc_enable_vblank(struct drm_device *dev, static int rockchip_drm_crtc_enable_vblank(struct drm_device *dev,
unsigned int pipe) unsigned int pipe)
{ {
struct rockchip_drm_private *priv = dev->dev_private; struct rockchip_drm_private *priv = dev->dev_private;
struct drm_crtc *crtc = rockchip_crtc_from_pipe(dev, pipe); struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);
if (crtc && priv->crtc_funcs[pipe] && if (crtc && priv->crtc_funcs[pipe] &&
priv->crtc_funcs[pipe]->enable_vblank) priv->crtc_funcs[pipe]->enable_vblank)
...@@ -129,7 +116,7 @@ static void rockchip_drm_crtc_disable_vblank(struct drm_device *dev, ...@@ -129,7 +116,7 @@ static void rockchip_drm_crtc_disable_vblank(struct drm_device *dev,
unsigned int pipe) unsigned int pipe)
{ {
struct rockchip_drm_private *priv = dev->dev_private; struct rockchip_drm_private *priv = dev->dev_private;
struct drm_crtc *crtc = rockchip_crtc_from_pipe(dev, pipe); struct drm_crtc *crtc = drm_crtc_from_index(dev, pipe);
if (crtc && priv->crtc_funcs[pipe] && if (crtc && priv->crtc_funcs[pipe] &&
priv->crtc_funcs[pipe]->enable_vblank) priv->crtc_funcs[pipe]->enable_vblank)
......
...@@ -655,13 +655,11 @@ void savage_driver_lastclose(struct drm_device *dev) ...@@ -655,13 +655,11 @@ void savage_driver_lastclose(struct drm_device *dev)
} }
} }
int savage_driver_unload(struct drm_device *dev) void savage_driver_unload(struct drm_device *dev)
{ {
drm_savage_private_t *dev_priv = dev->dev_private; drm_savage_private_t *dev_priv = dev->dev_private;
kfree(dev_priv); kfree(dev_priv);
return 0;
} }
static int savage_do_init_bci(struct drm_device * dev, drm_savage_init_t * init) static int savage_do_init_bci(struct drm_device * dev, drm_savage_init_t * init)
......
...@@ -210,7 +210,7 @@ extern uint32_t *savage_dma_alloc(drm_savage_private_t * dev_priv, ...@@ -210,7 +210,7 @@ extern uint32_t *savage_dma_alloc(drm_savage_private_t * dev_priv,
extern int savage_driver_load(struct drm_device *dev, unsigned long chipset); extern int savage_driver_load(struct drm_device *dev, unsigned long chipset);
extern int savage_driver_firstopen(struct drm_device *dev); extern int savage_driver_firstopen(struct drm_device *dev);
extern void savage_driver_lastclose(struct drm_device *dev); extern void savage_driver_lastclose(struct drm_device *dev);
extern int savage_driver_unload(struct drm_device *dev); extern void savage_driver_unload(struct drm_device *dev);
extern void savage_reclaim_buffers(struct drm_device *dev, extern void savage_reclaim_buffers(struct drm_device *dev,
struct drm_file *file_priv); struct drm_file *file_priv);
......
...@@ -194,6 +194,10 @@ static bool assert_node(struct drm_mm_node *node, struct drm_mm *mm, ...@@ -194,6 +194,10 @@ static bool assert_node(struct drm_mm_node *node, struct drm_mm *mm,
return ok; return ok;
} }
#define show_mm(mm) do { \
struct drm_printer __p = drm_debug_printer(__func__); \
drm_mm_print((mm), &__p); } while (0)
static int igt_init(void *ignored) static int igt_init(void *ignored)
{ {
const unsigned int size = 4096; const unsigned int size = 4096;
...@@ -250,7 +254,7 @@ static int igt_init(void *ignored) ...@@ -250,7 +254,7 @@ static int igt_init(void *ignored)
out: out:
if (ret) if (ret)
drm_mm_debug_table(&mm, __func__); show_mm(&mm);
drm_mm_takedown(&mm); drm_mm_takedown(&mm);
return ret; return ret;
} }
...@@ -286,7 +290,7 @@ static int igt_debug(void *ignored) ...@@ -286,7 +290,7 @@ static int igt_debug(void *ignored)
return ret; return ret;
} }
drm_mm_debug_table(&mm, __func__); show_mm(&mm);
return 0; return 0;
} }
...@@ -2031,7 +2035,7 @@ static int igt_color_evict(void *ignored) ...@@ -2031,7 +2035,7 @@ static int igt_color_evict(void *ignored)
ret = 0; ret = 0;
out: out:
if (ret) if (ret)
drm_mm_debug_table(&mm, __func__); show_mm(&mm);
drm_mm_for_each_node_safe(node, next, &mm) drm_mm_for_each_node_safe(node, next, &mm)
drm_mm_remove_node(node); drm_mm_remove_node(node);
drm_mm_takedown(&mm); drm_mm_takedown(&mm);
...@@ -2130,7 +2134,7 @@ static int igt_color_evict_range(void *ignored) ...@@ -2130,7 +2134,7 @@ static int igt_color_evict_range(void *ignored)
ret = 0; ret = 0;
out: out:
if (ret) if (ret)
drm_mm_debug_table(&mm, __func__); show_mm(&mm);
drm_mm_for_each_node_safe(node, next, &mm) drm_mm_for_each_node_safe(node, next, &mm)
drm_mm_remove_node(node); drm_mm_remove_node(node);
drm_mm_takedown(&mm); drm_mm_takedown(&mm);
......
...@@ -104,7 +104,7 @@ static int shmob_drm_setup_clocks(struct shmob_drm_device *sdev, ...@@ -104,7 +104,7 @@ static int shmob_drm_setup_clocks(struct shmob_drm_device *sdev,
* DRM operations * DRM operations
*/ */
static int shmob_drm_unload(struct drm_device *dev) static void shmob_drm_unload(struct drm_device *dev)
{ {
drm_kms_helper_poll_fini(dev); drm_kms_helper_poll_fini(dev);
drm_mode_config_cleanup(dev); drm_mode_config_cleanup(dev);
...@@ -112,8 +112,6 @@ static int shmob_drm_unload(struct drm_device *dev) ...@@ -112,8 +112,6 @@ static int shmob_drm_unload(struct drm_device *dev)
drm_irq_uninstall(dev); drm_irq_uninstall(dev);
dev->dev_private = NULL; dev->dev_private = NULL;
return 0;
} }
static int shmob_drm_load(struct drm_device *dev, unsigned long flags) static int shmob_drm_load(struct drm_device *dev, unsigned long flags)
......
...@@ -54,15 +54,13 @@ static int sis_driver_load(struct drm_device *dev, unsigned long chipset) ...@@ -54,15 +54,13 @@ static int sis_driver_load(struct drm_device *dev, unsigned long chipset)
return 0; return 0;
} }
static int sis_driver_unload(struct drm_device *dev) static void sis_driver_unload(struct drm_device *dev)
{ {
drm_sis_private_t *dev_priv = dev->dev_private; drm_sis_private_t *dev_priv = dev->dev_private;
idr_destroy(&dev_priv->object_idr); idr_destroy(&dev_priv->object_idr);
kfree(dev_priv); kfree(dev_priv);
return 0;
} }
static const struct file_operations sis_driver_fops = { static const struct file_operations sis_driver_fops = {
......
...@@ -214,7 +214,7 @@ static int tegra_drm_load(struct drm_device *drm, unsigned long flags) ...@@ -214,7 +214,7 @@ static int tegra_drm_load(struct drm_device *drm, unsigned long flags)
return err; return err;
} }
static int tegra_drm_unload(struct drm_device *drm) static void tegra_drm_unload(struct drm_device *drm)
{ {
struct host1x_device *device = to_host1x_device(drm->dev); struct host1x_device *device = to_host1x_device(drm->dev);
struct tegra_drm *tegra = drm->dev_private; struct tegra_drm *tegra = drm->dev_private;
...@@ -227,7 +227,7 @@ static int tegra_drm_unload(struct drm_device *drm) ...@@ -227,7 +227,7 @@ static int tegra_drm_unload(struct drm_device *drm)
err = host1x_device_exit(device); err = host1x_device_exit(device);
if (err < 0) if (err < 0)
return err; return;
if (tegra->domain) { if (tegra->domain) {
iommu_domain_free(tegra->domain); iommu_domain_free(tegra->domain);
...@@ -235,8 +235,6 @@ static int tegra_drm_unload(struct drm_device *drm) ...@@ -235,8 +235,6 @@ static int tegra_drm_unload(struct drm_device *drm)
} }
kfree(tegra); kfree(tegra);
return 0;
} }
static int tegra_drm_open(struct drm_device *drm, struct drm_file *filp) static int tegra_drm_open(struct drm_device *drm, struct drm_file *filp)
...@@ -891,8 +889,11 @@ static int tegra_debugfs_iova(struct seq_file *s, void *data) ...@@ -891,8 +889,11 @@ static int tegra_debugfs_iova(struct seq_file *s, void *data)
struct drm_info_node *node = (struct drm_info_node *)s->private; struct drm_info_node *node = (struct drm_info_node *)s->private;
struct drm_device *drm = node->minor->dev; struct drm_device *drm = node->minor->dev;
struct tegra_drm *tegra = drm->dev_private; struct tegra_drm *tegra = drm->dev_private;
struct drm_printer p = drm_seq_file_printer(s);
return drm_mm_dump_table(s, &tegra->mm); drm_mm_print(&tegra->mm, &p);
return 0;
} }
static struct drm_info_list tegra_debugfs_list[] = { static struct drm_info_list tegra_debugfs_list[] = {
...@@ -992,10 +993,6 @@ static int host1x_drm_probe(struct host1x_device *dev) ...@@ -992,10 +993,6 @@ static int host1x_drm_probe(struct host1x_device *dev)
if (err < 0) if (err < 0)
goto unref; goto unref;
DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n", driver->name,
driver->major, driver->minor, driver->patchlevel,
driver->date, drm->primary->index);
return 0; return 0;
unref: unref:
......
...@@ -507,7 +507,9 @@ static int tilcdc_mm_show(struct seq_file *m, void *arg) ...@@ -507,7 +507,9 @@ static int tilcdc_mm_show(struct seq_file *m, void *arg)
{ {
struct drm_info_node *node = (struct drm_info_node *) m->private; struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev; struct drm_device *dev = node->minor->dev;
return drm_mm_dump_table(m, &dev->vma_offset_manager->vm_addr_space_mm); struct drm_printer p = drm_seq_file_printer(m);
drm_mm_print(&dev->vma_offset_manager->vm_addr_space_mm, &p);
return 0;
} }
static struct drm_info_list tilcdc_debugfs_list[] = { static struct drm_info_list tilcdc_debugfs_list[] = {
......
...@@ -141,9 +141,10 @@ static void ttm_bo_man_debug(struct ttm_mem_type_manager *man, ...@@ -141,9 +141,10 @@ static void ttm_bo_man_debug(struct ttm_mem_type_manager *man,
const char *prefix) const char *prefix)
{ {
struct ttm_range_manager *rman = (struct ttm_range_manager *) man->priv; struct ttm_range_manager *rman = (struct ttm_range_manager *) man->priv;
struct drm_printer p = drm_debug_printer(prefix);
spin_lock(&rman->lock); spin_lock(&rman->lock);
drm_mm_debug_table(&rman->mm, prefix); drm_mm_print(&rman->mm, &p);
spin_unlock(&rman->lock); spin_unlock(&rman->lock);
} }
......
...@@ -100,7 +100,7 @@ int udl_submit_urb(struct drm_device *dev, struct urb *urb, size_t len); ...@@ -100,7 +100,7 @@ int udl_submit_urb(struct drm_device *dev, struct urb *urb, size_t len);
void udl_urb_completion(struct urb *urb); void udl_urb_completion(struct urb *urb);
int udl_driver_load(struct drm_device *dev, unsigned long flags); int udl_driver_load(struct drm_device *dev, unsigned long flags);
int udl_driver_unload(struct drm_device *dev); void udl_driver_unload(struct drm_device *dev);
int udl_fbdev_init(struct drm_device *dev); int udl_fbdev_init(struct drm_device *dev);
void udl_fbdev_cleanup(struct drm_device *dev); void udl_fbdev_cleanup(struct drm_device *dev);
......
...@@ -367,7 +367,7 @@ int udl_drop_usb(struct drm_device *dev) ...@@ -367,7 +367,7 @@ int udl_drop_usb(struct drm_device *dev)
return 0; return 0;
} }
int udl_driver_unload(struct drm_device *dev) void udl_driver_unload(struct drm_device *dev)
{ {
struct udl_device *udl = dev->dev_private; struct udl_device *udl = dev->dev_private;
...@@ -379,5 +379,4 @@ int udl_driver_unload(struct drm_device *dev) ...@@ -379,5 +379,4 @@ int udl_driver_unload(struct drm_device *dev)
udl_fbdev_cleanup(dev); udl_fbdev_cleanup(dev);
udl_modeset_cleanup(dev); udl_modeset_cleanup(dev);
kfree(udl); kfree(udl);
return 0;
} }
...@@ -134,7 +134,7 @@ extern int via_dma_blit_sync(struct drm_device *dev, void *data, struct drm_file ...@@ -134,7 +134,7 @@ extern int via_dma_blit_sync(struct drm_device *dev, void *data, struct drm_file
extern int via_dma_blit(struct drm_device *dev, void *data, struct drm_file *file_priv); extern int via_dma_blit(struct drm_device *dev, void *data, struct drm_file *file_priv);
extern int via_driver_load(struct drm_device *dev, unsigned long chipset); extern int via_driver_load(struct drm_device *dev, unsigned long chipset);
extern int via_driver_unload(struct drm_device *dev); extern void via_driver_unload(struct drm_device *dev);
extern int via_init_context(struct drm_device *dev, int context); extern int via_init_context(struct drm_device *dev, int context);
extern int via_final_context(struct drm_device *dev, int context); extern int via_final_context(struct drm_device *dev, int context);
......
...@@ -116,13 +116,11 @@ int via_driver_load(struct drm_device *dev, unsigned long chipset) ...@@ -116,13 +116,11 @@ int via_driver_load(struct drm_device *dev, unsigned long chipset)
return 0; return 0;
} }
int via_driver_unload(struct drm_device *dev) void via_driver_unload(struct drm_device *dev)
{ {
drm_via_private_t *dev_priv = dev->dev_private; drm_via_private_t *dev_priv = dev->dev_private;
idr_destroy(&dev_priv->object_idr); idr_destroy(&dev_priv->object_idr);
kfree(dev_priv); kfree(dev_priv);
return 0;
} }
config DRM_VIRTIO_GPU config DRM_VIRTIO_GPU
tristate "Virtio GPU driver" tristate "Virtio GPU driver"
depends on DRM && VIRTIO depends on DRM && VIRTIO && MMU
select DRM_KMS_HELPER select DRM_KMS_HELPER
select DRM_TTM select DRM_TTM
help help
......
...@@ -83,10 +83,6 @@ int drm_virtio_init(struct drm_driver *driver, struct virtio_device *vdev) ...@@ -83,10 +83,6 @@ int drm_virtio_init(struct drm_driver *driver, struct virtio_device *vdev)
if (ret) if (ret)
goto err_free; goto err_free;
DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n", driver->name,
driver->major, driver->minor, driver->patchlevel,
driver->date, dev->primary->index);
return 0; return 0;
err_free: err_free:
......
...@@ -215,7 +215,7 @@ extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS]; ...@@ -215,7 +215,7 @@ extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
/* virtio_kms.c */ /* virtio_kms.c */
int virtio_gpu_driver_load(struct drm_device *dev, unsigned long flags); int virtio_gpu_driver_load(struct drm_device *dev, unsigned long flags);
int virtio_gpu_driver_unload(struct drm_device *dev); void virtio_gpu_driver_unload(struct drm_device *dev);
int virtio_gpu_driver_open(struct drm_device *dev, struct drm_file *file); int virtio_gpu_driver_open(struct drm_device *dev, struct drm_file *file);
void virtio_gpu_driver_postclose(struct drm_device *dev, struct drm_file *file); void virtio_gpu_driver_postclose(struct drm_device *dev, struct drm_file *file);
......
This diff has been collapsed.