Commit 542aefb5 authored by Dave Airlie

Merge tag 'drm-misc-next-2017-07-26' of git://anongit.freedesktop.org/git/drm-misc into drm-next

drm-misc-next-2017-07-26:
Core Changes:
- A couple fixes to only opening crc when needed (Maarten)
- Change atomic helper swap_state to be interruptible (Maarten)
- fb_helper: Support waiting for an output before setting up (Daniel)
- Allow drivers supporting runtime_pm to use helper_commit_tail (Maxime)

Driver Changes:
- misc: Use %pOF to print device node names (Rob)
- Miscellaneous fixes

drm-misc-next-2017-07-18:
UAPI Changes:
- Fail commits which request an event without including a crtc (Andrey)

Core Changes:
- Add YCBCR 4:2:0 support (Shashank)
- s/drm_atomic_replace_property_blob/drm_property_replace_blob/ (Peter)
- Add proper base class for private objs instead of using void* (Ville)
- Remove pending_read/write_domains from drm_gem_object (Chris)
- Add async plane update support (ie: cursor) to atomic helpers (Gustavo)
- Add old state to .enable and rename to .atomic_enable (Laurent)
- Add drm_atomic_helper_wait_for_flip_done() (Boris)
- Remove drm_driver->set_busid hook (Daniel)
- Migrate vblank documentation into the source files (Daniel)
- Add fb_helper->lock instead of abusing modeset lock (Thierry/Daniel)

Driver Changes:
- stm: Add STM32 DSI controller driver (Philippe)
- amdgpu: Numerous small/misc fixes
- bridge: Add Synopsys Designware MIPI DSI host bridge driver (Philippe)
- tinydrm: Add support for Pervasive Displays RePaper displays (Noralf)
- misc: Replace for_each_[obj]_in_state to prep for removal (Maarten)
- misc: Use .atomic_disable for atomic drivers (Laurent)
- vgem: Pin pages when mapped/exported (Chris)
- dw_hdmi: Add support for Rockchip RK3399 (Mark)
- atmel-hlcdc: Add 8-bit color look-up table format (Peter)
- vc4: Send vblank event when disabling a crtc (Boris)
- vc4: Use atomic helpers for fence waits (Eric)
- misc: drop drm_vblank_cleanup cargo-cult (Daniel)

Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Boris Brezillon <boris.brezillon@free-electrons.com>
Cc: Eric Anholt <eric@anholt.net>
Cc: Peter Rosin <peda@axentia.se>
Cc: Mark Yao <mark.yao@rock-chips.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Cc: Gustavo Padovan <gustavo.padovan@collabora.com>
Cc: Thierry Reding <treding@nvidia.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Peter Rosin <peda@axentia.se>
Cc: Shashank Sharma <shashank.sharma@intel.com>
Cc: Philippe CORNU <philippe.cornu@st.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>

* tag 'drm-misc-next-2017-07-26' of git://anongit.freedesktop.org/git/drm-misc: (171 commits)
  drm/hisilicon: fix build error without fbdev emulation
  drm/atomic: implement drm_atomic_helper_commit_tail for runtime_pm users
  drm: Improve kerneldoc for drm_modeset_lock
  drm/hisilicon: Remove custom FB helper deferred setup
  drm/exynos: Remove custom FB helper deferred setup
  drm/fb-helper: Support deferred setup
  dma-fence: Don't BUG_ON when not absolutely needed
  drm: Convert to using %pOF instead of full_name
  drm/syncobj: Fix kerneldoc
  drm/atomic: Allow drm_atomic_helper_swap_state to fail
  drm/atomic: Add __must_check to drm_atomic_helper_swap_state.
  drm/vc4: Handle drm_atomic_helper_swap_state failure
  drm/tilcdc: Handle drm_atomic_helper_swap_state failure
  drm/tegra: Handle drm_atomic_helper_swap_state failure
  drm/msm: Handle drm_atomic_helper_swap_state failure
  drm/mediatek: Handle drm_atomic_helper_swap_state failure
  drm/i915: Handle drm_atomic_helper_swap_state failure
  drm/atmel-hlcdc: Handle drm_atomic_helper_swap_state failure
  drm/nouveau: Handle drm_atomic_helper_swap_state failure
  drm/atomic: Change drm_atomic_helper_swap_state to return an error.
  ...
Synopsys DesignWare MIPI DSI host controller
============================================
This document defines device tree properties for the Synopsys DesignWare MIPI
DSI host controller. It doesn't constitute a device tree binding specification
by itself but is meant to be referenced by platform-specific device tree
bindings.
When referenced from platform device tree bindings the properties defined in
this document are defined as follows. The platform device tree bindings are
responsible for defining whether each optional property is used or not.
- reg: Memory mapped base address and length of the DesignWare MIPI DSI
host controller registers. (mandatory)
- clocks: References to all the clocks specified in the clock-names property
as specified in [1]. (mandatory)
- clock-names:
- "pclk" is the peripheral clock for either AHB and APB. (mandatory)
- "px_clk" is the pixel clock for the DPI/RGB input. (optional)
- resets: References to all the resets specified in the reset-names property
as specified in [2]. (optional)
- reset-names: string reset name, must be "apb" if used. (optional)
- panel or bridge node: see [3]. (mandatory)
[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
[2] Documentation/devicetree/bindings/reset/reset.txt
[3] Documentation/devicetree/bindings/display/mipi-dsi-bus.txt
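As a purely hypothetical illustration (the node name, compatible string, clock
and reset phandles, and the panel child below are placeholders; the real values
are defined by the platform-specific binding that references this document), a
controller node using these properties could look like:

	dsi: dsi@5a000000 {
		compatible = "vendor,soc-mipi-dsi";	/* placeholder */
		reg = <0x5a000000 0x800>;
		clocks = <&clk_pclk>, <&clk_px>;
		clock-names = "pclk", "px_clk";
		resets = <&rcc 1>;
		reset-names = "apb";

		panel@0 {
			compatible = "vendor,panel";	/* placeholder */
			reg = <0>;
		};
	};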
Pervasive Displays RePaper branded e-ink displays
Required properties:
- compatible: "pervasive,e1144cs021" for 1.44" display
"pervasive,e1190cs021" for 1.9" display
"pervasive,e2200cs021" for 2.0" display
"pervasive,e2271cs021" for 2.7" display
- panel-on-gpios: Timing controller power control
- discharge-gpios: Discharge control
- reset-gpios: RESET pin
- busy-gpios: BUSY pin
Required property for e2271cs021:
- border-gpios: Border control
The node for this driver must be a child node of a SPI controller, hence
all mandatory properties described in ../spi/spi-bus.txt must be specified.
Optional property:
- pervasive,thermal-zone: name of thermometer's thermal zone
Example:
	display_temp: lm75@48 {
		compatible = "lm75b";
		reg = <0x48>;
		#thermal-sensor-cells = <0>;
	};

	thermal-zones {
		display {
			polling-delay-passive = <0>;
			polling-delay = <0>;
			thermal-sensors = <&display_temp>;
		};
	};

	papirus27@0 {
		compatible = "pervasive,e2271cs021";
		reg = <0>;
		spi-max-frequency = <8000000>;
		panel-on-gpios = <&gpio 23 0>;
		border-gpios = <&gpio 14 0>;
		discharge-gpios = <&gpio 15 0>;
		reset-gpios = <&gpio 24 0>;
		busy-gpios = <&gpio 25 0>;
		pervasive,thermal-zone = "display";
	};
......@@ -11,7 +11,9 @@ following device-specific properties.
Required properties:
- compatible: Shall contain "rockchip,rk3288-dw-hdmi".
- compatible: should be one of the following:
"rockchip,rk3288-dw-hdmi"
"rockchip,rk3399-dw-hdmi"
- reg: See dw_hdmi.txt.
- reg-io-width: See dw_hdmi.txt. Shall be 4.
- interrupts: HDMI interrupt number
......@@ -30,7 +32,8 @@ Optional properties
I2C master controller.
- clock-names: See dw_hdmi.txt. The "cec" clock is optional.
- clock-names: May contain "cec" as defined in dw_hdmi.txt.
- clock-names: May contain "grf", power for grf io.
- clock-names: May contain "vpll", external clock for some hdmi phy.
Example:
......
......@@ -249,6 +249,7 @@ oxsemi Oxford Semiconductor, Ltd.
panasonic Panasonic Corporation
parade Parade Technologies Inc.
pericom Pericom Technology Inc.
pervasive Pervasive Displays, Inc.
phytec PHYTEC Messtechnik GmbH
picochip Picochip Ltd
pine64 Pine64
......
......@@ -201,6 +201,8 @@ drivers.
Open/Close, File Operations and IOCTLs
======================================
.. _drm_driver_fops:
File Operations
---------------
......
......@@ -523,9 +523,6 @@ Color Management Properties
.. kernel-doc:: drivers/gpu/drm/drm_color_mgmt.c
:doc: overview
.. kernel-doc:: include/drm/drm_color_mgmt.h
:internal:
.. kernel-doc:: drivers/gpu/drm/drm_color_mgmt.c
:export:
......@@ -554,60 +551,8 @@ various modules/drivers.
Vertical Blanking
=================
Vertical blanking plays a major role in graphics rendering. To achieve
tear-free display, users must synchronize page flips and/or rendering to
vertical blanking. The DRM API offers ioctls to perform page flips
synchronized to vertical blanking and wait for vertical blanking.
The DRM core handles most of the vertical blanking management logic,
which involves filtering out spurious interrupts, keeping race-free
blanking counters, coping with counter wrap-around and resets and
keeping use counts. It relies on the driver to generate vertical
blanking interrupts and optionally provide a hardware vertical blanking
counter. Drivers must implement the following operations.
- int (\*enable_vblank) (struct drm_device \*dev, int crtc); void
(\*disable_vblank) (struct drm_device \*dev, int crtc);
Enable or disable vertical blanking interrupts for the given CRTC.
- u32 (\*get_vblank_counter) (struct drm_device \*dev, int crtc);
Retrieve the value of the vertical blanking counter for the given
CRTC. If the hardware maintains a vertical blanking counter its value
should be returned. Otherwise drivers can use the
:c:func:`drm_vblank_count()` helper function to handle this
operation.
Drivers must initialize the vertical blanking handling core with a call
to :c:func:`drm_vblank_init()` in their load operation.
Vertical blanking interrupts can be enabled by the DRM core or by
drivers themselves (for instance to handle page flipping operations).
The DRM core maintains a vertical blanking use count to ensure that the
interrupts are not disabled while a user still needs them. To increment
the use count, drivers call :c:func:`drm_vblank_get()`. Upon
return vertical blanking interrupts are guaranteed to be enabled.
To decrement the use count drivers call
:c:func:`drm_vblank_put()`. Only when the use count drops to zero
will the DRM core disable the vertical blanking interrupts after a delay
by scheduling a timer. The delay is accessible through the
vblankoffdelay module parameter or the ``drm_vblank_offdelay`` global
variable and expressed in milliseconds. Its default value is 5000 ms.
Zero means never disable, and a negative value means disable
immediately. Drivers may override the behaviour by setting the
:c:type:`struct drm_device <drm_device>`
vblank_disable_immediate flag, which when set causes vblank interrupts
to be disabled immediately regardless of the drm_vblank_offdelay
value. The flag should only be set if there's a properly working
hardware vblank counter present.
When a vertical blanking interrupt occurs drivers only need to call the
:c:func:`drm_handle_vblank()` function to account for the
interrupt.
Resources allocated by :c:func:`drm_vblank_init()` must be freed
with a call to :c:func:`drm_vblank_cleanup()` in the driver unload
operation handler.
.. kernel-doc:: drivers/gpu/drm/drm_vblank.c
:doc: vblank handling
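As a rough driver-side sketch of the flow described above (all foo_* names are
hypothetical; only the DRM core helpers, in their CRTC-based variants, are
real)::

	#include <drm/drmP.h>
	#include <linux/interrupt.h>

	static int foo_load(struct drm_device *dev)
	{
		/* Set up vblank bookkeeping for every CRTC before enabling IRQs. */
		return drm_vblank_init(dev, dev->mode_config.num_crtc);
	}

	static irqreturn_t foo_irq_handler(int irq, void *arg)
	{
		struct drm_device *dev = arg;
		/* foo_find_vblanking_crtc() is a made-up helper for this sketch. */
		struct drm_crtc *crtc = foo_find_vblanking_crtc(dev);

		/* Let the core account for the interrupt (counters, wakeups). */
		drm_crtc_handle_vblank(crtc);
		return IRQ_HANDLED;
	}

A page-flip path would additionally bracket its work with drm_crtc_vblank_get()
and drm_crtc_vblank_put() so the interrupt stays enabled while the flip is
pending.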
Vertical Blanking and Interrupt Handling Functions Reference
------------------------------------------------------------
......
......@@ -191,7 +191,7 @@ acquired and release by :c:func:`calling drm_gem_object_get()` and
holding the lock.
When the last reference to a GEM object is released the GEM core calls
the :c:type:`struct drm_driver <drm_driver>` gem_free_object
the :c:type:`struct drm_driver <drm_driver>` gem_free_object_unlocked
operation. That operation is mandatory for GEM-enabled drivers and must
free the GEM object and all associated resources.
......@@ -492,7 +492,7 @@ DRM Sync Objects
:doc: Overview
.. kernel-doc:: include/drm/drm_syncobj.h
:export:
:internal:
.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
:export:
......@@ -160,6 +160,8 @@ other hand, a driver requires shared state between clients which is
visible to user-space and accessible beyond open-file boundaries, they
cannot support render nodes.
.. _drm_driver_ioctl:
IOCTL Support on Device Nodes
=============================
......
......@@ -4541,6 +4541,12 @@ M: Dave Airlie <airlied@redhat.com>
S: Odd Fixes
F: drivers/gpu/drm/mgag200/
DRM DRIVER FOR PERVASIVE DISPLAYS REPAPER PANELS
M: Noralf Trønnes <noralf@tronnes.org>
S: Maintained
F: drivers/gpu/drm/tinydrm/repaper.c
F: Documentation/devicetree/bindings/display/repaper.txt
DRM DRIVER FOR RAGE 128 VIDEO CARDS
S: Orphan / Obsolete
F: drivers/gpu/drm/r128/
......
......@@ -48,7 +48,7 @@ static atomic64_t dma_fence_context_counter = ATOMIC64_INIT(0);
*/
u64 dma_fence_context_alloc(unsigned num)
{
BUG_ON(!num);
WARN_ON(!num);
return atomic64_add_return(num, &dma_fence_context_counter) - num;
}
EXPORT_SYMBOL(dma_fence_context_alloc);
......@@ -177,7 +177,7 @@ void dma_fence_release(struct kref *kref)
trace_dma_fence_destroy(fence);
BUG_ON(!list_empty(&fence->cb_list));
WARN_ON(!list_empty(&fence->cb_list));
if (fence->ops->release)
fence->ops->release(fence);
......
......@@ -96,9 +96,9 @@ static struct sync_timeline *sync_timeline_create(const char *name)
obj->context = dma_fence_context_alloc(1);
strlcpy(obj->name, name, sizeof(obj->name));
INIT_LIST_HEAD(&obj->child_list_head);
INIT_LIST_HEAD(&obj->active_list_head);
spin_lock_init(&obj->child_list_lock);
obj->pt_tree = RB_ROOT;
INIT_LIST_HEAD(&obj->pt_list);
spin_lock_init(&obj->lock);
sync_timeline_debug_add(obj);
......@@ -135,28 +135,28 @@ static void sync_timeline_put(struct sync_timeline *obj)
*/
static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
{
unsigned long flags;
struct sync_pt *pt, *next;
trace_sync_timeline(obj);
spin_lock_irqsave(&obj->child_list_lock, flags);
spin_lock_irq(&obj->lock);
obj->value += inc;
list_for_each_entry_safe(pt, next, &obj->active_list_head,
active_list) {
if (dma_fence_is_signaled_locked(&pt->base))
list_del_init(&pt->active_list);
list_for_each_entry_safe(pt, next, &obj->pt_list, link) {
if (!dma_fence_is_signaled_locked(&pt->base))
break;
list_del_init(&pt->link);
rb_erase(&pt->node, &obj->pt_tree);
}
spin_unlock_irqrestore(&obj->child_list_lock, flags);
spin_unlock_irq(&obj->lock);
}
/**
* sync_pt_create() - creates a sync pt
* @parent: fence's parent sync_timeline
* @size: size to allocate for this pt
* @inc: value of the fence
*
* Creates a new sync_pt as a child of @parent. @size bytes will be
......@@ -164,26 +164,55 @@ static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
* the generic sync_timeline struct. Returns the sync_pt object or
* NULL in case of error.
*/
static struct sync_pt *sync_pt_create(struct sync_timeline *obj, int size,
unsigned int value)
static struct sync_pt *sync_pt_create(struct sync_timeline *obj,
unsigned int value)
{
unsigned long flags;
struct sync_pt *pt;
if (size < sizeof(*pt))
return NULL;
pt = kzalloc(size, GFP_KERNEL);
pt = kzalloc(sizeof(*pt), GFP_KERNEL);
if (!pt)
return NULL;
spin_lock_irqsave(&obj->child_list_lock, flags);
sync_timeline_get(obj);
dma_fence_init(&pt->base, &timeline_fence_ops, &obj->child_list_lock,
dma_fence_init(&pt->base, &timeline_fence_ops, &obj->lock,
obj->context, value);
list_add_tail(&pt->child_list, &obj->child_list_head);
INIT_LIST_HEAD(&pt->active_list);
spin_unlock_irqrestore(&obj->child_list_lock, flags);
INIT_LIST_HEAD(&pt->link);
spin_lock_irq(&obj->lock);
if (!dma_fence_is_signaled_locked(&pt->base)) {
struct rb_node **p = &obj->pt_tree.rb_node;
struct rb_node *parent = NULL;
while (*p) {
struct sync_pt *other;
int cmp;
parent = *p;
other = rb_entry(parent, typeof(*pt), node);
cmp = value - other->base.seqno;
if (cmp > 0) {
p = &parent->rb_right;
} else if (cmp < 0) {
p = &parent->rb_left;
} else {
if (dma_fence_get_rcu(&other->base)) {
dma_fence_put(&pt->base);
pt = other;
goto unlock;
}
p = &parent->rb_left;
}
}
rb_link_node(&pt->node, parent, p);
rb_insert_color(&pt->node, &obj->pt_tree);
parent = rb_next(&pt->node);
list_add_tail(&pt->link,
parent ? &rb_entry(parent, typeof(*pt), node)->link : &obj->pt_list);
}
unlock:
spin_unlock_irq(&obj->lock);
return pt;
}
......@@ -203,13 +232,17 @@ static void timeline_fence_release(struct dma_fence *fence)
{
struct sync_pt *pt = dma_fence_to_sync_pt(fence);
struct sync_timeline *parent = dma_fence_parent(fence);
unsigned long flags;
spin_lock_irqsave(fence->lock, flags);
list_del(&pt->child_list);
if (!list_empty(&pt->active_list))
list_del(&pt->active_list);
spin_unlock_irqrestore(fence->lock, flags);
if (!list_empty(&pt->link)) {
unsigned long flags;
spin_lock_irqsave(fence->lock, flags);
if (!list_empty(&pt->link)) {
list_del(&pt->link);
rb_erase(&pt->node, &parent->pt_tree);
}
spin_unlock_irqrestore(fence->lock, flags);
}
sync_timeline_put(parent);
dma_fence_free(fence);
......@@ -219,18 +252,11 @@ static bool timeline_fence_signaled(struct dma_fence *fence)
{
struct sync_timeline *parent = dma_fence_parent(fence);
return (fence->seqno > parent->value) ? false : true;
return !__dma_fence_is_later(fence->seqno, parent->value);
}
static bool timeline_fence_enable_signaling(struct dma_fence *fence)
{
struct sync_pt *pt = dma_fence_to_sync_pt(fence);
struct sync_timeline *parent = dma_fence_parent(fence);
if (timeline_fence_signaled(fence))
return false;
list_add_tail(&pt->active_list, &parent->active_list_head);
return true;
}
......@@ -309,7 +335,7 @@ static long sw_sync_ioctl_create_fence(struct sync_timeline *obj,
goto err;
}
pt = sync_pt_create(obj, sizeof(*pt), data.value);
pt = sync_pt_create(obj, data.value);
if (!pt) {
err = -ENOMEM;
goto err;
......@@ -345,6 +371,11 @@ static long sw_sync_ioctl_inc(struct sync_timeline *obj, unsigned long arg)
if (copy_from_user(&value, (void __user *)arg, sizeof(value)))
return -EFAULT;
while (value > INT_MAX) {
sync_timeline_signal(obj, INT_MAX);
value -= INT_MAX;
}
sync_timeline_signal(obj, value);
return 0;
......
......@@ -116,17 +116,15 @@ static void sync_print_fence(struct seq_file *s,
static void sync_print_obj(struct seq_file *s, struct sync_timeline *obj)
{
struct list_head *pos;
unsigned long flags;
seq_printf(s, "%s: %d\n", obj->name, obj->value);
spin_lock_irqsave(&obj->child_list_lock, flags);
list_for_each(pos, &obj->child_list_head) {
struct sync_pt *pt =
container_of(pos, struct sync_pt, child_list);
spin_lock_irq(&obj->lock);
list_for_each(pos, &obj->pt_list) {
struct sync_pt *pt = container_of(pos, struct sync_pt, link);
sync_print_fence(s, &pt->base, false);
}
spin_unlock_irqrestore(&obj->child_list_lock, flags);
spin_unlock_irq(&obj->lock);
}
static void sync_print_sync_file(struct seq_file *s,
......@@ -151,12 +149,11 @@ static void sync_print_sync_file(struct seq_file *s,
static int sync_debugfs_show(struct seq_file *s, void *unused)
{
unsigned long flags;
struct list_head *pos;
seq_puts(s, "objs:\n--------------\n");
spin_lock_irqsave(&sync_timeline_list_lock, flags);
spin_lock_irq(&sync_timeline_list_lock);
list_for_each(pos, &sync_timeline_list_head) {
struct sync_timeline *obj =
container_of(pos, struct sync_timeline,
......@@ -165,11 +162,11 @@ static int sync_debugfs_show(struct seq_file *s, void *unused)
sync_print_obj(s, obj);
seq_putc(s, '\n');
}
spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
spin_unlock_irq(&sync_timeline_list_lock);
seq_puts(s, "fences:\n--------------\n");
spin_lock_irqsave(&sync_file_list_lock, flags);
spin_lock_irq(&sync_file_list_lock);
list_for_each(pos, &sync_file_list_head) {
struct sync_file *sync_file =
container_of(pos, struct sync_file, sync_file_list);
......@@ -177,7 +174,7 @@ static int sync_debugfs_show(struct seq_file *s, void *unused)
sync_print_sync_file(s, sync_file);
seq_putc(s, '\n');
}
spin_unlock_irqrestore(&sync_file_list_lock, flags);
spin_unlock_irq(&sync_file_list_lock);
return 0;
}
......
......@@ -14,6 +14,7 @@
#define _LINUX_SYNC_H
#include <linux/list.h>
#include <linux/rbtree.h>
#include <linux/spinlock.h>
#include <linux/dma-fence.h>
......@@ -24,42 +25,41 @@
* struct sync_timeline - sync object
* @kref: reference count on fence.
* @name: name of the sync_timeline. Useful for debugging
* @child_list_head: list of children sync_pts for this sync_timeline
* @child_list_lock: lock protecting @child_list_head and fence.status
* @active_list_head: list of active (unsignaled/errored) sync_pts
* @lock: lock protecting @pt_list and @value
* @pt_tree: rbtree of active (unsignaled/errored) sync_pts
* @pt_list: list of active (unsignaled/errored) sync_pts
* @sync_timeline_list: membership in global sync_timeline_list
*/
struct sync_timeline {
struct kref kref;
char name[32];
/* protected by child_list_lock */
/* protected by lock */
u64 context;
int value;
struct list_head child_list_head;
spinlock_t child_list_lock;
struct list_head active_list_head;
struct rb_root pt_tree;
struct list_head pt_list;
spinlock_t lock;
struct list_head sync_timeline_list;
};
static inline struct sync_timeline *dma_fence_parent(struct dma_fence *fence)
{
return container_of(fence->lock, struct sync_timeline, child_list_lock);
return container_of(fence->lock, struct sync_timeline, lock);
}
/**
* struct sync_pt - sync_pt object
* @base: base fence object
* @child_list: sync timeline child's list
* @active_list: sync timeline active child's list
* @link: link on the sync timeline's list
* @node: node in the sync timeline's tree
*/
struct sync_pt {
struct dma_fence base;
struct list_head child_list;
struct list_head active_list;
struct list_head link;
struct rb_node node;
};
#ifdef CONFIG_SW_SYNC
......
......@@ -803,7 +803,6 @@ static struct drm_driver kms_driver = {
.open = amdgpu_driver_open_kms,
.postclose = amdgpu_driver_postclose_kms,
.lastclose = amdgpu_driver_lastclose_kms,
.set_busid = drm_pci_set_busid,
.unload = amdgpu_driver_unload_kms,
.get_vblank_counter = amdgpu_get_vblank_counter_kms,
.enable_vblank = amdgpu_enable_vblank_kms,
......
......@@ -245,7 +245,6 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
info->flags = FBINFO_DEFAULT | FBINFO_CAN_FORCE_OUTPUT;
info->fbops = &amdgpufb_ops;
tmp = amdgpu_bo_gpu_offset(abo) - adev->mc.vram_start;
......
......@@ -263,7 +263,6 @@ void amdgpu_irq_fini(struct amdgpu_device *adev)
{
unsigned i, j;
drm_vblank_cleanup(adev->ddev);
if (adev->irq.installed) {
drm_irq_uninstall(adev->ddev);
adev->irq.installed = false;
......
......@@ -1867,7 +1867,7 @@ static void dce_v10_0_afmt_setmode(struct drm_encoder *encoder,
dce_v10_0_audio_write_sad_regs(encoder);
dce_v10_0_audio_write_latency_fields(encoder, mode);
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
if (err < 0) {
DRM_ERROR("failed to setup AVI infoframe: %zd\n", err);
return;
......
......@@ -1851,7 +1851,7 @@ static void dce_v11_0_afmt_setmode(struct drm_encoder *encoder,
dce_v11_0_audio_write_sad_regs(encoder);
dce_v11_0_audio_write_latency_fields(encoder, mode);
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
if (err < 0) {
DRM_ERROR("failed to setup AVI infoframe: %zd\n", err);
return;
......
......@@ -1597,7 +1597,7 @@ static void dce_v6_0_audio_set_avi_infoframe(struct drm_encoder *encoder,
ssize_t err;
u32 tmp;
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
if (err < 0) {
DRM_ERROR("failed to setup AVI infoframe: %zd\n", err);
return;
......
......@@ -1750,7 +1750,7 @@ static void dce_v8_0_afmt_setmode(struct drm_encoder *encoder,
dce_v8_0_audio_write_sad_regs(encoder);
dce_v8_0_audio_write_latency_fields(encoder, mode);
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
if (err < 0) {
DRM_ERROR("failed to setup AVI infoframe: %zd\n", err);
return;
......
......@@ -64,6 +64,19 @@ static const struct drm_crtc_funcs arc_pgu_crtc_funcs = {
.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
};
static enum drm_mode_status arc_pgu_crtc_mode_valid(struct drm_crtc *crtc,
const struct drm_display_mode *mode)
{
struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc);
long rate, clk_rate = mode->clock * 1000;
rate = clk_round_rate(arcpgu->clk, clk_rate);
if (rate != clk_rate)
return MODE_NOCLOCK;
return MODE_OK;
}
static void arc_pgu_crtc_mode_set_nofb(struct drm_crtc *crtc)
{
struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc);
......@@ -106,7 +119,8 @@ static void arc_pgu_crtc_mode_set_nofb(struct drm_crtc *crtc)
clk_set_rate(arcpgu->clk, m->crtc_clock * 1000);
}
static void arc_pgu_crtc_enable(struct drm_crtc *crtc)
static void arc_pgu_crtc_atomic_enable(struct drm_crtc *crtc,
struct drm_crtc_state *old_state)
{
struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc);
......@@ -116,7 +130,8 @@ static void arc_pgu_crtc_enable(struct drm_crtc *crtc)
ARCPGU_CTRL_ENABLE_MASK);
}
static void arc_pgu_crtc_disable(struct drm_crtc *crtc)
static void arc_pgu_crtc_atomic_disable(struct drm_crtc *crtc,
struct drm_crtc_state *old_state)
{
struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc);
......@@ -129,20 +144,6 @@ static void arc_pgu_crtc_disable(struct drm_crtc *crtc)
~ARCPGU_CTRL_ENABLE_MASK);
}
static int arc_pgu_crtc_atomic_check(struct drm_crtc *crtc,
struct drm_crtc_state *state)
{
struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc);
struct drm_display_mode *mode = &state->adjusted_mode;
long rate, clk_rate = mode->clock * 1000;
rate = clk_round_rate(arcpgu->clk, clk_rate);
if (rate != clk_rate)
return -EINVAL;
return 0;
}
static void arc_pgu_crtc_atomic_begin(struct drm_crtc *crtc,
struct drm_crtc_state *state)
{
......@@ -158,15 +159,13 @@ static void arc_pgu_crtc_atomic_begin(struct drm_crtc *crtc,
}
static const struct drm_crtc_helper_funcs arc_pgu_crtc_helper_funcs = {
.mode_valid = arc_pgu_crtc_mode_valid,
.mode_set = drm_helper_crtc_mode_set,
.mode_set_base = drm_helper_crtc_mode_set_base,
.mode_set_nofb = arc_pgu_crtc_mode_set_nofb,
.enable = arc_pgu_crtc_enable,
.disable = arc_pgu_crtc_disable,
.prepare = arc_pgu_crtc_disable,
.commit = arc_pgu_crtc_enable,
.atomic_check = arc_pgu_crtc_atomic_check,
.atomic_begin = arc_pgu_crtc_atomic_begin,
.atomic_enable = arc_pgu_crtc_atomic_enable,
.atomic_disable = arc_pgu_crtc_atomic_disable,
};
static void arc_pgu_plane_atomic_update(struct drm_plane *plane,
......
......@@ -165,7 +165,8 @@ static void hdlcd_crtc_mode_set_nofb(struct drm_crtc *crtc)
clk_set_rate(hdlcd->clk, m->crtc_clock * 1000);
}
static void hdlcd_crtc_enable(struct drm_crtc *crtc)
static void hdlcd_crtc_atomic_enable(struct drm_crtc *crtc,
struct drm_crtc_state *old_state)
{
struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc);
......@@ -175,7 +176,8 @@ static void hdlcd_crtc_enable(struct drm_crtc *crtc)
drm_crtc_vblank_on(crtc);
}
static void hdlcd_crtc_disable(struct drm_crtc *crtc)
static void hdlcd_crtc_atomic_disable(struct drm_crtc *crtc,
struct drm_crtc_state *old_state)
{
struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc);
......@@ -218,10 +220,10 @@ static void hdlcd_crtc_atomic_begin(struct drm_crtc *crtc,
}
static const struct drm_crtc_helper_funcs hdlcd_crtc_helper_funcs = {
.enable = hdlcd_crtc_enable,
.disable = hdlcd_crtc_disable,
.atomic_check = hdlcd_crtc_atomic_check,
.atomic_begin = hdlcd_crtc_atomic_begin,
.atomic_enable = hdlcd_crtc_atomic_enable,
.atomic_disable = hdlcd_crtc_atomic_disable,
};
static int hdlcd_plane_atomic_check(struct drm_plane *plane,
......
......@@ -343,7 +343,6 @@ static int hdlcd_drm_bind(struct device *dev)
}
err_fbdev:
drm_kms_helper_poll_fini(drm);
drm_vblank_cleanup(drm);
err_vblank:
pm_runtime_disable(drm->dev);
err_pm_active:
......@@ -375,7 +374,6 @@ static void hdlcd_drm_unbind(struct device *dev)
component_unbind_all(dev, drm);
of_node_put(hdlcd->crtc.port);
hdlcd->crtc.port = NULL;
drm_vblank_cleanup(drm);
pm_runtime_get_sync(drm->dev);
drm_irq_uninstall(drm);
pm_runtime_put_sync(drm->dev);
......
......@@ -46,7 +46,8 @@ static enum drm_mode_status malidp_crtc_mode_valid(struct drm_crtc *crtc,
return MODE_OK;
}
static void malidp_crtc_enable(struct drm_crtc *crtc)
static void malidp_crtc_atomic_enable(struct drm_crtc *crtc,
struct drm_crtc_state *old_state)
{
struct malidp_drm *malidp = crtc_to_malidp_device(crtc);
struct malidp_hw_device *hwdev = malidp->dev;
......@@ -69,7 +70,8 @@ static void malidp_crtc_enable(struct drm_crtc *crtc)
drm_crtc_vblank_on(crtc);
}
static void malidp_crtc_disable(struct drm_crtc *crtc)
static void malidp_crtc_atomic_disable(struct drm_crtc *crtc,
struct drm_crtc_state *old_state)
{
struct malidp_drm *malidp = crtc_to_malidp_device(crtc);
struct malidp_hw_device *hwdev = malidp->dev;
......@@ -408,9 +410,9 @@ static int malidp_crtc_atomic_check(struct drm_crtc *crtc,
static const struct drm_crtc_helper_funcs malidp_crtc_helper_funcs = {
.mode_valid = malidp_crtc_mode_valid,
.enable = malidp_crtc_enable,
.disable = malidp_crtc_disable,
.atomic_check = malidp_crtc_atomic_check,
.atomic_enable = malidp_crtc_atomic_enable,
.atomic_disable = malidp_crtc_atomic_disable,
};
static struct drm_crtc_state *malidp_crtc_duplicate_state(struct drm_crtc *crtc)
......
......@@ -225,7 +225,7 @@ static void malidp_atomic_commit_tail(struct drm_atomic_state *state)
drm_atomic_helper_commit_modeset_disables(drm, state);
for_each_crtc_in_state(state, crtc, old_crtc_state, i) {
for_each_old_crtc_in_state(state, crtc, old_crtc_state, i) {
malidp_atomic_commit_update_gamma(crtc, old_crtc_state);
malidp_atomic_commit_update_coloradj(crtc, old_crtc_state);
malidp_atomic_commit_se_config(crtc, old_crtc_state);
......
......@@ -1150,13 +1150,13 @@ int armada_drm_plane_init(struct armada_plane *plane)
return 0;
}
static struct drm_prop_enum_list armada_drm_csc_yuv_enum_list[] = {
static const struct drm_prop_enum_list armada_drm_csc_yuv_enum_list[] = {
{ CSC_AUTO, "Auto" },
{ CSC_YUV_CCIR601, "CCIR601" },
{ CSC_YUV_CCIR709, "CCIR709" },
};
static struct drm_prop_enum_list armada_drm_csc_rgb_enum_list[] = {
static const struct drm_prop_enum_list armada_drm_csc_rgb_enum_list[] = {
{ CSC_AUTO, "Auto" },
{ CSC_RGB_COMPUTER, "Computer system" },
{ CSC_RGB_STUDIO, "Studio" },
......@@ -1329,8 +1329,7 @@ armada_lcd_bind(struct device *dev, struct device *master, void *data)
port = of_get_child_by_name(parent, "port");
of_node_put(np);
if (!port) {
dev_err(dev, "no port node found in %s\n",
parent->full_name);
dev_err(dev, "no port node found in %pOF\n", parent);
return -ENXIO;
}
......@@ -1364,7 +1363,7 @@ static int armada_lcd_remove(struct platform_device *pdev)
return 0;
}
static struct of_device_id armada_lcd_of_match[] = {
static const struct of_device_id armada_lcd_of_match[] = {
{
.compatible = "marvell,dove-lcd",
.data = &armada510_ops,
......
......@@ -232,8 +232,8 @@ static void armada_add_endpoints(struct device *dev,
of_node_put(remote);
continue;
} else if (!of_device_is_available(remote->parent)) {
dev_warn(dev, "parent device of %s is not available\n",
remote->full_name);
dev_warn(dev, "parent device of %pOF is not available\n",
remote);
of_node_put(remote);
continue;
}
......
......@@ -81,7 +81,6 @@ static int armada_fb_create(struct drm_fb_helper *fbh,
strlcpy(info->fix.id, "armada-drmfb", sizeof(info->fix.id));
info->par = fbh;
info->flags = FBINFO_DEFAULT | FBINFO_CAN_FORCE_OUTPUT;
info->fbops = &armada_fb_ops;
info->fix.smem_start = obj->phys_addr;
info->fix.smem_len = obj->obj.size;
......
......@@ -388,7 +388,7 @@ static const uint32_t armada_ovl_formats[] = {
DRM_FORMAT_BGR565,
};
static struct drm_prop_enum_list armada_drm_colorkey_enum_list[] = {
static const struct drm_prop_enum_list armada_drm_colorkey_enum_list[] = {
{ CKMODE_DISABLE, "disabled" },
{ CKMODE_Y, "Y component" },
{ CKMODE_U, "U component" },
......
......@@ -197,7 +197,6 @@ static struct drm_driver driver = {
.load = ast_driver_load,
.unload = ast_driver_unload,
.set_busid = drm_pci_set_busid,
.fops = &ast_fops,
.name = DRIVER_NAME,
......@@ -221,11 +220,11 @@ static int __init ast_init(void)
if (ast_modeset == 0)
return -EINVAL;
return drm_pci_init(&driver, &ast_pci_driver);
return pci_register_driver(&ast_pci_driver);
}
static void __exit ast_exit(void)
{
drm_pci_exit(&driver, &ast_pci_driver);
pci_unregister_driver(&ast_pci_driver);
}
module_init(ast_init);
......
......@@ -231,7 +231,6 @@ static int astfb_create(struct drm_fb_helper *helper,
strcpy(info->fix.id, "astdrmfb");
info->flags = FBINFO_DEFAULT | FBINFO_CAN_FORCE_OUTPUT;
info->fbops = &astfb_ops;
info->apertures->ranges[0].base = pci_resource_start(dev->pdev, 0);
......
......@@ -149,7 +149,8 @@ atmel_hlcdc_crtc_mode_valid(struct drm_crtc *c,
return atmel_hlcdc_dc_mode_valid(crtc->dc, mode);
}
static void atmel_hlcdc_crtc_disable(struct drm_crtc *c)
static void atmel_hlcdc_crtc_atomic_disable(struct drm_crtc *c,
struct drm_crtc_state *old_state)
{
struct drm_device *dev = c->dev;
struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
......@@ -183,7 +184,8 @@ static void atmel_hlcdc_crtc_disable(struct drm_crtc *c)
pm_runtime_put_sync(dev->dev);
}
static void atmel_hlcdc_crtc_enable(struct drm_crtc *c)
static void atmel_hlcdc_crtc_atomic_enable(struct drm_crtc *c,
struct drm_crtc_state *old_state)
{
struct drm_device *dev = c->dev;
struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
......@@ -235,7 +237,7 @@ static int atmel_hlcdc_crtc_select_output_mode(struct drm_crtc_state *state)
crtc = drm_crtc_to_atmel_hlcdc_crtc(state->crtc);
for_each_connector_in_state(state->state, connector, cstate, i) {
for_each_new_connector_in_state(state->state, connector, cstate, i) {
struct drm_display_info *info = &connector->display_info;
unsigned int supported_fmts = 0;
int j;
......@@ -319,11 +321,11 @@ static const struct drm_crtc_helper_funcs lcdc_crtc_helper_funcs = {
.mode_set = drm_helper_crtc_mode_set,
.mode_set_nofb = atmel_hlcdc_crtc_mode_set_nofb,
.mode_set_base = drm_helper_crtc_mode_set_base,
.disable = atmel_hlcdc_crtc_disable,
.enable = atmel_hlcdc_crtc_enable,
.atomic_check = atmel_hlcdc_crtc_atomic_check,
.atomic_begin = atmel_hlcdc_crtc_atomic_begin,
.atomic_flush = atmel_hlcdc_crtc_atomic_flush,
.atomic_enable = atmel_hlcdc_crtc_atomic_enable,
.atomic_disable = atmel_hlcdc_crtc_atomic_disable,
};
static void atmel_hlcdc_crtc_destroy(struct drm_crtc *c)
......@@ -429,6 +431,8 @@ static const struct drm_crtc_funcs atmel_hlcdc_crtc_funcs = {
.atomic_destroy_state = atmel_hlcdc_crtc_destroy_state,
.enable_vblank = atmel_hlcdc_crtc_enable_vblank,
.disable_vblank = atmel_hlcdc_crtc_disable_vblank,
.set_property = drm_atomic_helper_crtc_set_property,
.gamma_set = drm_atomic_helper_legacy_gamma_set,
};
int atmel_hlcdc_crtc_create(struct drm_device *dev)
......@@ -484,6 +488,10 @@ int atmel_hlcdc_crtc_create(struct drm_device *dev)
drm_crtc_helper_add(&crtc->base, &lcdc_crtc_helper_funcs);
drm_crtc_vblank_reset(&crtc->base);
drm_mode_crtc_set_gamma_size(&crtc->base, ATMEL_HLCDC_CLUT_SIZE);
drm_crtc_enable_color_mgmt(&crtc->base, 0, false,
ATMEL_HLCDC_CLUT_SIZE);
dc->crtc = &crtc->base;
return 0;
......
......@@ -42,6 +42,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9n12_layers[] = {
.default_color = 3,
.general_config = 4,
},
.clut_offset = 0x400,
},
};
......@@ -73,6 +74,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9x5_layers[] = {
.disc_pos = 5,
.disc_size = 6,
},
.clut_offset = 0x400,
},
{
.name = "overlay1",
......@@ -91,6 +93,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9x5_layers[] = {
.chroma_key_mask = 8,
.general_config = 9,
},
.clut_offset = 0x800,
},
{
.name = "high-end-overlay",
......@@ -112,6 +115,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9x5_layers[] = {
.scaler_config = 13,
.csc = 14,
},
.clut_offset = 0x1000,
},
{
.name = "cursor",
......@@ -131,6 +135,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9x5_layers[] = {
.chroma_key_mask = 8,
.general_config = 9,
},
.clut_offset = 0x1400,
},
};
......@@ -162,6 +167,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
.disc_pos = 5,
.disc_size = 6,
},
.clut_offset = 0x600,
},
{
.name = "overlay1",
......@@ -180,6 +186,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
.chroma_key_mask = 8,
.general_config = 9,
},
.clut_offset = 0xa00,
},
{
.name = "overlay2",
......@@ -198,6 +205,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
.chroma_key_mask = 8,
.general_config = 9,
},
.clut_offset = 0xe00,
},
{
.name = "high-end-overlay",
......@@ -223,6 +231,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
},
.csc = 14,
},
.clut_offset = 0x1200,
},
{
.name = "cursor",
......@@ -244,6 +253,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
.general_config = 9,
.scaler_config = 13,
},
.clut_offset = 0x1600,
},
};
......@@ -275,6 +285,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d4_layers[] = {
.disc_pos = 5,
.disc_size = 6,
},
.clut_offset = 0x600,
},
{
.name = "overlay1",
......@@ -293,6 +304,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d4_layers[] = {
.chroma_key_mask = 8,
.general_config = 9,
},
.clut_offset = 0xa00,
},
{
.name = "overlay2",
......@@ -311,6 +323,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d4_layers[] = {
.chroma_key_mask = 8,
.general_config = 9,
},
.clut_offset = 0xe00,
},
{
.name = "high-end-overlay",
......@@ -336,6 +349,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d4_layers[] = {
},
.csc = 14,
},
.clut_offset = 0x1200,
},
};
......@@ -451,8 +465,7 @@ static void atmel_hlcdc_fb_output_poll_changed(struct drm_device *dev)
{
struct atmel_hlcdc_dc *dc = dev->dev_private;
if (dc->fbdev)
drm_fbdev_cma_hotplug_event(dc->fbdev);
drm_fbdev_cma_hotplug_event(dc->fbdev);
}
struct atmel_hlcdc_dc_commit {
......@@ -526,14 +539,13 @@ static int atmel_hlcdc_dc_atomic_commit(struct drm_device *dev,
dc->commit.pending = true;
spin_unlock(&dc->commit.wait.lock);
if (ret) {
kfree(commit);
goto error;
}
if (ret)
goto err_free;
/* Swap the state, this is the point of no return. */
drm_atomic_helper_swap_state(state, true);
/* We have our own synchronization through the commit lock. */
BUG_ON(drm_atomic_helper_swap_state(state, false) < 0);
/* Swap state succeeded, this is the point of no return. */
drm_atomic_state_get(state);
if (async)
queue_work(dc->wq, &commit->work);
......@@ -542,6 +554,8 @@ static int atmel_hlcdc_dc_atomic_commit(struct drm_device *dev,
return 0;
err_free:
kfree(commit);
error:
drm_atomic_helper_cleanup_planes(dev, state);
return ret;
......
......@@ -88,6 +88,11 @@
#define ATMEL_HLCDC_YUV422SWP BIT(17)
#define ATMEL_HLCDC_DSCALEOPT BIT(20)
#define ATMEL_HLCDC_C1_MODE ATMEL_HLCDC_CLUT_MODE(0)
#define ATMEL_HLCDC_C2_MODE ATMEL_HLCDC_CLUT_MODE(1)
#define ATMEL_HLCDC_C4_MODE ATMEL_HLCDC_CLUT_MODE(2)
#define ATMEL_HLCDC_C8_MODE ATMEL_HLCDC_CLUT_MODE(3)
#define ATMEL_HLCDC_XRGB4444_MODE ATMEL_HLCDC_RGB_MODE(0)
#define ATMEL_HLCDC_ARGB4444_MODE ATMEL_HLCDC_RGB_MODE(1)
#define ATMEL_HLCDC_RGBA4444_MODE ATMEL_HLCDC_RGB_MODE(2)
......@@ -142,6 +147,8 @@
#define ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE BIT(2)
#define ATMEL_HLCDC_DMA_CHANNEL_DSCR_OVERRUN BIT(3)
#define ATMEL_HLCDC_CLUT_SIZE 256
#define ATMEL_HLCDC_MAX_LAYERS 6
/**
......@@ -259,6 +266,7 @@ struct atmel_hlcdc_layer_desc {
int id;
int regs_offset;
int cfgs_offset;
int clut_offset;
struct atmel_hlcdc_formats *formats;
struct atmel_hlcdc_layer_cfg_layout layout;
int max_width;
......@@ -414,6 +422,14 @@ static inline u32 atmel_hlcdc_layer_read_cfg(struct atmel_hlcdc_layer *layer,
(cfgid * sizeof(u32)));
}
static inline void atmel_hlcdc_layer_write_clut(struct atmel_hlcdc_layer *layer,
unsigned int c, u32 val)
{
regmap_write(layer->regmap,
layer->desc->clut_offset + c * sizeof(u32),
val);
}
static inline void atmel_hlcdc_layer_init(struct atmel_hlcdc_layer *layer,
const struct atmel_hlcdc_layer_desc *desc,
struct regmap *regmap)
......
......@@ -83,6 +83,7 @@ drm_plane_state_to_atmel_hlcdc_plane_state(struct drm_plane_state *s)
#define SUBPIXEL_MASK 0xffff
static uint32_t rgb_formats[] = {
DRM_FORMAT_C8,
DRM_FORMAT_XRGB4444,
DRM_FORMAT_ARGB4444,
DRM_FORMAT_RGBA4444,
......@@ -100,6 +101,7 @@ struct atmel_hlcdc_formats atmel_hlcdc_plane_rgb_formats = {
};
static uint32_t rgb_and_yuv_formats[] = {
DRM_FORMAT_C8,
DRM_FORMAT_XRGB4444,
DRM_FORMAT_ARGB4444,
DRM_FORMAT_RGBA4444,
......@@ -128,6 +130,9 @@ struct atmel_hlcdc_formats atmel_hlcdc_plane_rgb_and_yuv_formats = {
static int atmel_hlcdc_format_to_plane_mode(u32 format, u32 *mode)
{
switch (format) {
case DRM_FORMAT_C8:
*mode = ATMEL_HLCDC_C8_MODE;
break;
case DRM_FORMAT_XRGB4444:
*mode = ATMEL_HLCDC_XRGB4444_MODE;
break;
......@@ -424,6 +429,29 @@ static void atmel_hlcdc_plane_update_format(struct atmel_hlcdc_plane *plane,
ATMEL_HLCDC_LAYER_FORMAT_CFG, cfg);
}
static void atmel_hlcdc_plane_update_clut(struct atmel_hlcdc_plane *plane)
{
struct drm_crtc *crtc = plane->base.crtc;
struct drm_color_lut *lut;
int idx;
if (!crtc || !crtc->state)
return;
if (!crtc->state->color_mgmt_changed || !crtc->state->gamma_lut)
return;
lut = (struct drm_color_lut *)crtc->state->gamma_lut->data;
for (idx = 0; idx < ATMEL_HLCDC_CLUT_SIZE; idx++, lut++) {
u32 val = ((lut->red << 8) & 0xff0000) |
(lut->green & 0xff00) |
(lut->blue >> 8);
atmel_hlcdc_layer_write_clut(&plane->layer, idx, val);
}
}
static void atmel_hlcdc_plane_update_buffers(struct atmel_hlcdc_plane *plane,
struct atmel_hlcdc_plane_state *state)
{
......@@ -768,6 +796,7 @@ static void atmel_hlcdc_plane_atomic_update(struct drm_plane *p,
atmel_hlcdc_plane_update_pos_and_size(plane, state);
atmel_hlcdc_plane_update_general_settings(plane, state);
atmel_hlcdc_plane_update_format(plane, state);
atmel_hlcdc_plane_update_clut(plane);
atmel_hlcdc_plane_update_buffers(plane, state);
atmel_hlcdc_plane_update_disc_area(plane, state);
......
......@@ -84,7 +84,6 @@ static struct drm_driver bochs_driver = {
.driver_features = DRIVER_GEM | DRIVER_MODESET,
.load = bochs_load,
.unload = bochs_unload,
.set_busid = drm_pci_set_busid,
.fops = &bochs_fops,
.name = "bochs-drm",
.desc = "bochs dispi vga interface (qemu stdvga)",
......@@ -224,12 +223,12 @@ static int __init bochs_init(void)
if (bochs_modeset == 0)
return -EINVAL;
return drm_pci_init(&bochs_driver, &bochs_pci_driver);
return pci_register_driver(&bochs_pci_driver);
}
static void __exit bochs_exit(void)
{
drm_pci_exit(&bochs_driver, &bochs_pci_driver);
pci_unregister_driver(&bochs_pci_driver);
}
module_init(bochs_init);
......
......@@ -23,9 +23,9 @@ static int bochsfb_mmap(struct fb_info *info,
static struct fb_ops bochsfb_ops = {
.owner = THIS_MODULE,
DRM_FB_HELPER_DEFAULT_OPS,
.fb_fillrect = drm_fb_helper_sys_fillrect,
.fb_copyarea = drm_fb_helper_sys_copyarea,
.fb_imageblit = drm_fb_helper_sys_imageblit,
.fb_fillrect = drm_fb_helper_cfb_fillrect,
.fb_copyarea = drm_fb_helper_cfb_copyarea,
.fb_imageblit = drm_fb_helper_cfb_imageblit,
.fb_mmap = bochsfb_mmap,
};
......@@ -118,7 +118,6 @@ static int bochsfb_create(struct drm_fb_helper *helper,
strcpy(info->fix.id, "bochsdrmfb");
info->flags = FBINFO_DEFAULT;
info->fbops = &bochsfb_ops;
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
......
......@@ -1126,11 +1126,7 @@ static int adv7511_probe(struct i2c_client *i2c, const struct i2c_device_id *id)
adv7511->bridge.funcs = &adv7511_bridge_funcs;
adv7511->bridge.of_node = dev->of_node;
ret = drm_bridge_add(&adv7511->bridge);
if (ret) {
dev_err(dev, "failed to add adv7511 bridge\n");
goto err_unregister_cec;
}
drm_bridge_add(&adv7511->bridge);
adv7511_audio_init(dev, adv7511);
......
......@@ -1097,7 +1097,8 @@ static void anx78xx_bridge_mode_set(struct drm_bridge *bridge,
mutex_lock(&anx78xx->lock);
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, adjusted_mode);
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, adjusted_mode,
false);
if (err) {
DRM_ERROR("Failed to setup AVI infoframe: %d\n", err);
goto unlock;
......@@ -1438,11 +1439,7 @@ static int anx78xx_i2c_probe(struct i2c_client *client,
anx78xx->bridge.funcs = &anx78xx_bridge_funcs;
err = drm_bridge_add(&anx78xx->bridge);
if (err < 0) {
DRM_ERROR("Failed to add drm bridge: %d\n", err);
goto err_poweroff;
}
drm_bridge_add(&anx78xx->bridge);
/* If cable is pulled out, just poweroff and wait for HPD event */
if (!gpiod_get_value(anx78xx->pdata.gpiod_hpd))
......
......@@ -177,7 +177,6 @@ static struct i2c_adapter *dumb_vga_retrieve_ddc(struct device *dev)
static int dumb_vga_probe(struct platform_device *pdev)
{
struct dumb_vga *vga;
int ret;
vga = devm_kzalloc(&pdev->dev, sizeof(*vga), GFP_KERNEL);
if (!vga)
......@@ -186,7 +185,7 @@ static int dumb_vga_probe(struct platform_device *pdev)
vga->vdd = devm_regulator_get_optional(&pdev->dev, "vdd");
if (IS_ERR(vga->vdd)) {
ret = PTR_ERR(vga->vdd);
int ret = PTR_ERR(vga->vdd);
if (ret == -EPROBE_DEFER)
return -EPROBE_DEFER;
vga->vdd = NULL;
......@@ -207,11 +206,9 @@ static int dumb_vga_probe(struct platform_device *pdev)
vga->bridge.funcs = &dumb_vga_bridge_funcs;
vga->bridge.of_node = pdev->dev.of_node;
ret = drm_bridge_add(&vga->bridge);
if (ret && !IS_ERR(vga->ddc))
i2c_put_adapter(vga->ddc);
drm_bridge_add(&vga->bridge);
return ret;
return 0;
}
static int dumb_vga_remove(struct platform_device *pdev)
......
......@@ -332,11 +332,7 @@ static int ptn3460_probe(struct i2c_client *client,
ptn_bridge->bridge.funcs = &ptn3460_bridge_funcs;
ptn_bridge->bridge.of_node = dev->of_node;
ret = drm_bridge_add(&ptn_bridge->bridge);
if (ret) {
DRM_ERROR("Failed to add bridge\n");
return ret;
}
drm_bridge_add(&ptn_bridge->bridge);
i2c_set_clientdata(client, ptn_bridge);
......
......@@ -158,7 +158,6 @@ struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel,
u32 connector_type)
{
struct panel_bridge *panel_bridge;
int ret;
if (!panel)
return ERR_PTR(-EINVAL);
......@@ -176,9 +175,7 @@ struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel,
panel_bridge->bridge.of_node = panel->dev->of_node;
#endif
ret = drm_bridge_add(&panel_bridge->bridge);
if (ret)
return ERR_PTR(ret);
drm_bridge_add(&panel_bridge->bridge);
return &panel_bridge->bridge;
}
......
......@@ -598,11 +598,7 @@ static int ps8622_probe(struct i2c_client *client,
ps8622->bridge.funcs = &ps8622_bridge_funcs;
ps8622->bridge.of_node = dev->of_node;
ret = drm_bridge_add(&ps8622->bridge);
if (ret) {
DRM_ERROR("Failed to add bridge\n");
return ret;
}
drm_bridge_add(&ps8622->bridge);
i2c_set_clientdata(client, ps8622);
......
......@@ -269,7 +269,7 @@ static void sii902x_bridge_mode_set(struct drm_bridge *bridge,
if (ret)
return;
ret = drm_hdmi_avi_infoframe_from_display_mode(&frame, adj);
ret = drm_hdmi_avi_infoframe_from_display_mode(&frame, adj, false);
if (ret < 0) {
DRM_ERROR("couldn't fill AVI infoframe\n");
return;
......@@ -418,11 +418,7 @@ static int sii902x_probe(struct i2c_client *client,
sii902x->bridge.funcs = &sii902x_bridge_funcs;
sii902x->bridge.of_node = dev->of_node;
ret = drm_bridge_add(&sii902x->bridge);
if (ret) {
dev_err(dev, "Failed to add drm_bridge\n");
return ret;
}
drm_bridge_add(&sii902x->bridge);
i2c_set_clientdata(client, sii902x);
......
......@@ -22,3 +22,9 @@ config DRM_DW_HDMI_I2S_AUDIO
help
Support the I2S Audio interface which is part of the Synopsys
Designware HDMI block.
config DRM_DW_MIPI_DSI
tristate
select DRM_KMS_HELPER
select DRM_MIPI_DSI
select DRM_PANEL_BRIDGE
......@@ -3,3 +3,5 @@
obj-$(CONFIG_DRM_DW_HDMI) += dw-hdmi.o
obj-$(CONFIG_DRM_DW_HDMI_AHB_AUDIO) += dw-hdmi-ahb-audio.o
obj-$(CONFIG_DRM_DW_HDMI_I2S_AUDIO) += dw-hdmi-i2s-audio.o
obj-$(CONFIG_DRM_DW_MIPI_DSI) += dw-mipi-dsi.o
......@@ -1317,7 +1317,7 @@ static void hdmi_config_AVI(struct dw_hdmi *hdmi, struct drm_display_mode *mode)
u8 val;
/* Initialise info frame from DRM mode */
drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
if (hdmi_bus_fmt_is_yuv444(hdmi->hdmi_data.enc_out_bus_format))
frame.colorspace = HDMI_COLORSPACE_YUV444;
......@@ -2485,17 +2485,12 @@ int dw_hdmi_probe(struct platform_device *pdev,
const struct dw_hdmi_plat_data *plat_data)
{
struct dw_hdmi *hdmi;
int ret;
hdmi = __dw_hdmi_probe(pdev, plat_data);
if (IS_ERR(hdmi))
return PTR_ERR(hdmi);
ret = drm_bridge_add(&hdmi->bridge);
if (ret < 0) {
__dw_hdmi_remove(hdmi);
return ret;
}
drm_bridge_add(&hdmi->bridge);
return 0;
}
......
(This diff has been collapsed.)
......@@ -1325,11 +1325,7 @@ static int tc_probe(struct i2c_client *client, const struct i2c_device_id *id)
tc->bridge.funcs = &tc_bridge_funcs;
tc->bridge.of_node = dev->of_node;
ret = drm_bridge_add(&tc->bridge);
if (ret) {
dev_err(dev, "Failed to add drm_bridge: %d\n", ret);
goto err_unregister_aux;
}
drm_bridge_add(&tc->bridge);
i2c_set_clientdata(client, tc);
......
......@@ -237,11 +237,7 @@ static int tfp410_init(struct device *dev)
}
}
ret = drm_bridge_add(&dvi->bridge);
if (ret) {
dev_err(dev, "drm_bridge_add() failed: %d\n", ret);
goto fail;
}
drm_bridge_add(&dvi->bridge);
return 0;
fail:
......
......@@ -132,7 +132,6 @@ static struct drm_driver driver = {
.driver_features = DRIVER_MODESET | DRIVER_GEM,
.load = cirrus_driver_load,
.unload = cirrus_driver_unload,
.set_busid = drm_pci_set_busid,
.fops = &cirrus_driver_fops,
.name = DRIVER_NAME,
.desc = DRIVER_DESC,
......@@ -166,12 +165,12 @@ static int __init cirrus_init(void)
if (cirrus_modeset == 0)
return -EINVAL;
return drm_pci_init(&driver, &cirrus_pci_driver);
return pci_register_driver(&cirrus_pci_driver);
}
static void __exit cirrus_exit(void)
{
drm_pci_exit(&driver, &cirrus_pci_driver);
pci_unregister_driver(&cirrus_pci_driver);
}
module_init(cirrus_init);
......
......@@ -215,7 +215,6 @@ static int cirrusfb_create(struct drm_fb_helper *helper,
strcpy(info->fix.id, "cirrusdrmfb");
info->flags = FBINFO_DEFAULT;
info->fbops = &cirrusfb_ops;
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
......
......@@ -29,7 +29,6 @@
#include <drm/drmP.h>
#include <drm/drm_atomic.h>
#include <drm/drm_mode.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_print.h>
#include <linux/sync_file.h>
......@@ -188,12 +187,15 @@ void drm_atomic_state_default_clear(struct drm_atomic_state *state)
}
for (i = 0; i < state->num_private_objs; i++) {
void *obj_state = state->private_objs[i].obj_state;
struct drm_private_obj *obj = state->private_objs[i].ptr;
state->private_objs[i].funcs->destroy_state(obj_state);
state->private_objs[i].obj = NULL;
state->private_objs[i].obj_state = NULL;
state->private_objs[i].funcs = NULL;
if (!obj)
continue;
obj->funcs->atomic_destroy_state(obj,
state->private_objs[i].state);
state->private_objs[i].ptr = NULL;
state->private_objs[i].state = NULL;
}
state->num_private_objs = 0;
......@@ -409,34 +411,6 @@ int drm_atomic_set_mode_prop_for_crtc(struct drm_crtc_state *state,
}
EXPORT_SYMBOL(drm_atomic_set_mode_prop_for_crtc);
/**
* drm_atomic_replace_property_blob - replace a blob property
* @blob: a pointer to the member blob to be replaced
* @new_blob: the new blob to replace with
* @replaced: whether the blob has been replaced
*
* RETURNS:
* Zero on success, error code on failure
*/
static void
drm_atomic_replace_property_blob(struct drm_property_blob **blob,
struct drm_property_blob *new_blob,
bool *replaced)
{
struct drm_property_blob *old_blob = *blob;
if (old_blob == new_blob)
return;
drm_property_blob_put(old_blob);
if (new_blob)
drm_property_blob_get(new_blob);
*blob = new_blob;
*replaced = true;
return;
}
static int
drm_atomic_replace_property_blob_from_id(struct drm_device *dev,
struct drm_property_blob **blob,
......@@ -457,7 +431,7 @@ drm_atomic_replace_property_blob_from_id(struct drm_device *dev,
}
}
drm_atomic_replace_property_blob(blob, new_blob, replaced);
*replaced |= drm_property_replace_blob(blob, new_blob);
drm_property_blob_put(new_blob);
return 0;
......@@ -990,12 +964,45 @@ static void drm_atomic_plane_print_state(struct drm_printer *p,
plane->funcs->atomic_print_state(p, state);
}
/**
* drm_atomic_private_obj_init - initialize private object
* @obj: private object
* @state: initial private object state
* @funcs: pointer to the struct of function pointers that identify the object
* type
*
* Initialize the private object, which can be embedded into any
* driver private object that needs its own atomic state.
*/
void
drm_atomic_private_obj_init(struct drm_private_obj *obj,
struct drm_private_state *state,
const struct drm_private_state_funcs *funcs)
{
memset(obj, 0, sizeof(*obj));
obj->state = state;
obj->funcs = funcs;
}
EXPORT_SYMBOL(drm_atomic_private_obj_init);
/**
* drm_atomic_private_obj_fini - finalize private object
* @obj: private object
*
* Finalize the private object.
*/
void
drm_atomic_private_obj_fini(struct drm_private_obj *obj)
{
obj->funcs->atomic_destroy_state(obj, obj->state);
}
EXPORT_SYMBOL(drm_atomic_private_obj_fini);
/**
* drm_atomic_get_private_obj_state - get private object state
* @state: global atomic state
* @obj: private object to get the state for
* @funcs: pointer to the struct of function pointers that identify the object
* type
*
* This function returns the private object state for the given private object,
* allocating the state if needed. It does not grab any locks as the caller is
......@@ -1005,18 +1012,18 @@ static void drm_atomic_plane_print_state(struct drm_printer *p,
*
* Either the allocated state or the error code encoded into a pointer.
*/
void *
drm_atomic_get_private_obj_state(struct drm_atomic_state *state, void *obj,
const struct drm_private_state_funcs *funcs)
struct drm_private_state *
drm_atomic_get_private_obj_state(struct drm_atomic_state *state,
struct drm_private_obj *obj)
{
int index, num_objs, i;
size_t size;
struct __drm_private_objs_state *arr;
struct drm_private_state *obj_state;
for (i = 0; i < state->num_private_objs; i++)
if (obj == state->private_objs[i].obj &&
state->private_objs[i].obj_state)
return state->private_objs[i].obj_state;
if (obj == state->private_objs[i].ptr)
return state->private_objs[i].state;
num_objs = state->num_private_objs + 1;
size = sizeof(*state->private_objs) * num_objs;
......@@ -1028,18 +1035,21 @@ drm_atomic_get_private_obj_state(struct drm_atomic_state *state, void *obj,
index = state->num_private_objs;
memset(&state->private_objs[index], 0, sizeof(*state->private_objs));
state->private_objs[index].obj_state = funcs->duplicate_state(state, obj);
if (!state->private_objs[index].obj_state)
obj_state = obj->funcs->atomic_duplicate_state(obj);
if (!obj_state)
return ERR_PTR(-ENOMEM);
state->private_objs[index].obj = obj;
state->private_objs[index].funcs = funcs;
state->private_objs[index].state = obj_state;
state->private_objs[index].old_state = obj->state;
state->private_objs[index].new_state = obj_state;
state->private_objs[index].ptr = obj;
state->num_private_objs = num_objs;
DRM_DEBUG_ATOMIC("Added new private object state %p to %p\n",
state->private_objs[index].obj_state, state);
DRM_DEBUG_ATOMIC("Added new private object %p state %p to %p\n",
obj, obj_state, state);
return state->private_objs[index].obj_state;
return obj_state;
}
EXPORT_SYMBOL(drm_atomic_get_private_obj_state);
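/*
 * For context, a hedged sketch of how a driver would use the new base class
 * (every foo_* name below is made up for illustration; this is not code from
 * this series): embed struct drm_private_state as the first member of the
 * driver-private state, provide duplicate/destroy hooks, register the object
 * with drm_atomic_private_obj_init(), and look its state up in atomic_check
 * via drm_atomic_get_private_obj_state().
 */
struct foo_obj_state {
	struct drm_private_state base;	/* must be the first member here */
	unsigned int usage_count;
};

static struct drm_private_state *
foo_duplicate_state(struct drm_private_obj *obj)
{
	/* base is the first member, so obj->state also points at foo_obj_state. */
	struct foo_obj_state *state = kmemdup(obj->state, sizeof(*state),
					      GFP_KERNEL);

	return state ? &state->base : NULL;
}

static void foo_destroy_state(struct drm_private_obj *obj,
			      struct drm_private_state *state)
{
	kfree(container_of(state, struct foo_obj_state, base));
}

static const struct drm_private_state_funcs foo_state_funcs = {
	.atomic_duplicate_state	= foo_duplicate_state,
	.atomic_destroy_state	= foo_destroy_state,
};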
......@@ -2039,7 +2049,7 @@ static int prepare_crtc_signaling(struct drm_device *dev,
{
struct drm_crtc *crtc;
struct drm_crtc_state *crtc_state;
int i, ret;
int i, c = 0, ret;
if (arg->flags & DRM_MODE_ATOMIC_TEST_ONLY)
return 0;
......@@ -2100,8 +2110,17 @@ static int prepare_crtc_signaling(struct drm_device *dev,
crtc_state->event->base.fence = fence;
}
c++;
}
/*
* Having this flag set means userspace is waiting for an event that will
* never be delivered, because no CRTC was included that could signal it
*/
if (c == 0 && (arg->flags & DRM_MODE_PAGE_FLIP_EVENT))
return -EINVAL;
return 0;
}
......
......@@ -795,6 +795,9 @@ int drm_atomic_helper_check(struct drm_device *dev,
if (ret)
return ret;
if (state->legacy_cursor_update)
state->async_update = !drm_atomic_helper_async_check(dev, state);
return ret;
}
EXPORT_SYMBOL(drm_atomic_helper_check);
......@@ -1069,12 +1072,13 @@ void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
struct drm_atomic_state *old_state)
{
struct drm_crtc *crtc;
struct drm_crtc_state *old_crtc_state;
struct drm_crtc_state *new_crtc_state;
struct drm_connector *connector;
struct drm_connector_state *new_conn_state;
int i;
for_each_new_crtc_in_state(old_state, crtc, new_crtc_state, i) {
for_each_oldnew_crtc_in_state(old_state, crtc, old_crtc_state, new_crtc_state, i) {
const struct drm_crtc_helper_funcs *funcs;
/* Need to filter out CRTCs where only planes change. */
......@@ -1090,8 +1094,8 @@ void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
DRM_DEBUG_ATOMIC("enabling [CRTC:%d:%s]\n",
crtc->base.id, crtc->name);
if (funcs->enable)
funcs->enable(crtc);
if (funcs->atomic_enable)
funcs->atomic_enable(crtc, old_crtc_state);
else
funcs->commit(crtc);
}
......@@ -1191,9 +1195,13 @@ EXPORT_SYMBOL(drm_atomic_helper_wait_for_fences);
*
* Helper to, after atomic commit, wait for vblanks on all affected
* crtcs (ie. before cleaning up old framebuffers using
* drm_atomic_helper_cleanup_planes()). It will only wait on crtcs where the
* drm_atomic_helper_cleanup_planes()). It will only wait on CRTCs where the
* framebuffers have actually changed to optimize for the legacy cursor and
* plane update use-case.
*
* Drivers using the nonblocking commit tracking support initialized by calling
* drm_atomic_helper_setup_commit() should look at
* drm_atomic_helper_wait_for_flip_done() as an alternative.
*/
void
drm_atomic_helper_wait_for_vblanks(struct drm_device *dev,
......@@ -1240,28 +1248,55 @@ drm_atomic_helper_wait_for_vblanks(struct drm_device *dev,
}
EXPORT_SYMBOL(drm_atomic_helper_wait_for_vblanks);
/**
* drm_atomic_helper_wait_for_flip_done - wait for all page flips to be done
* @dev: DRM device
* @old_state: atomic state object with old state structures
*
Helper to, after atomic commit, wait for page flips on all affected
CRTCs (ie. before cleaning up old framebuffers using
drm_atomic_helper_cleanup_planes()). Compared to
drm_atomic_helper_wait_for_vblanks() this waits for the completion of flips on
all CRTCs, assuming that cursor-only updates are signalling their completion
* immediately (or using a different path).
*
* This requires that drivers use the nonblocking commit tracking support
* initialized using drm_atomic_helper_setup_commit().
*/
void drm_atomic_helper_wait_for_flip_done(struct drm_device *dev,
struct drm_atomic_state *old_state)
{
struct drm_crtc_state *unused;
struct drm_crtc *crtc;
int i;
for_each_crtc_in_state(old_state, crtc, unused, i) {
struct drm_crtc_commit *commit = old_state->crtcs[i].commit;
int ret;
if (!commit)
continue;
ret = wait_for_completion_timeout(&commit->flip_done, 10 * HZ);
if (ret == 0)
DRM_ERROR("[CRTC:%d:%s] flip_done timed out\n",
crtc->base.id, crtc->name);
}
}
EXPORT_SYMBOL(drm_atomic_helper_wait_for_flip_done);
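For illustration only, a minimal sketch of a custom commit tail using the new helper. It assumes the driver goes through drm_atomic_helper_commit(), so drm_atomic_helper_setup_commit() has initialized the per-CRTC commit tracking; the foo_* names are hypothetical.

#include <drm/drm_atomic_helper.h>
#include <drm/drm_modeset_helper_vtables.h>

static void foo_atomic_commit_tail(struct drm_atomic_state *old_state)
{
	struct drm_device *dev = old_state->dev;

	drm_atomic_helper_commit_modeset_disables(dev, old_state);
	drm_atomic_helper_commit_planes(dev, old_state, 0);
	drm_atomic_helper_commit_modeset_enables(dev, old_state);

	drm_atomic_helper_commit_hw_done(old_state);

	/*
	 * Wait on the per-CRTC flip_done completions instead of polling
	 * vblank counters as drm_atomic_helper_wait_for_vblanks() does.
	 */
	drm_atomic_helper_wait_for_flip_done(dev, old_state);

	drm_atomic_helper_cleanup_planes(dev, old_state);
}

static const struct drm_mode_config_helper_funcs foo_mode_config_helpers = {
	.atomic_commit_tail = foo_atomic_commit_tail,
};

The struct would then be hooked up via dev->mode_config.helper_private during modeset init.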
/**
* drm_atomic_helper_commit_tail - commit atomic update to hardware
* @old_state: atomic state object with old state structures
*
* This is the default implementation for the
* &drm_mode_config_helper_funcs.atomic_commit_tail hook.
* &drm_mode_config_helper_funcs.atomic_commit_tail hook, for drivers
* that do not support runtime_pm or do not need the CRTC to be
* enabled to perform a commit. Otherwise, see
* drm_atomic_helper_commit_tail_rpm().
*
* Note that the default ordering of how the various stages are called is to
* match the legacy modeset helper library closest. One peculiarity of that is
* that it doesn't mesh well with runtime PM at all.
*
* For drivers supporting runtime PM the recommended sequence is instead ::
*
* drm_atomic_helper_commit_modeset_disables(dev, old_state);
*
* drm_atomic_helper_commit_modeset_enables(dev, old_state);
*
* drm_atomic_helper_commit_planes(dev, old_state,
* DRM_PLANE_COMMIT_ACTIVE_ONLY);
*
* for committing the atomic update to hardware. See the kerneldoc entries for
* these three functions for more details.
* match the legacy modeset helper library closest.
*/
void drm_atomic_helper_commit_tail(struct drm_atomic_state *old_state)
{
......@@ -1281,6 +1316,35 @@ void drm_atomic_helper_commit_tail(struct drm_atomic_state *old_state)
}
EXPORT_SYMBOL(drm_atomic_helper_commit_tail);
/**
* drm_atomic_helper_commit_tail_rpm - commit atomic update to hardware
* @old_state: atomic state object with old state structures
*
* This is an alternative implementation for the
* &drm_mode_config_helper_funcs.atomic_commit_tail hook, for drivers
* that support runtime_pm or need the CRTC to be enabled to perform a
* commit. Otherwise, one should use the default implementation
* drm_atomic_helper_commit_tail().
*/
void drm_atomic_helper_commit_tail_rpm(struct drm_atomic_state *old_state)
{
struct drm_device *dev = old_state->dev;
drm_atomic_helper_commit_modeset_disables(dev, old_state);
drm_atomic_helper_commit_modeset_enables(dev, old_state);
drm_atomic_helper_commit_planes(dev, old_state,
DRM_PLANE_COMMIT_ACTIVE_ONLY);
drm_atomic_helper_commit_hw_done(old_state);
drm_atomic_helper_wait_for_vblanks(dev, old_state);
drm_atomic_helper_cleanup_planes(dev, old_state);
}
EXPORT_SYMBOL(drm_atomic_helper_commit_tail_rpm);
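A hypothetical wiring sketch: a driver that needs its CRTCs powered up before planes can be programmed simply points the commit-tail hook at the new helper (the foo_* names are placeholders).

#include <drm/drm_atomic_helper.h>
#include <drm/drm_modeset_helper_vtables.h>

static const struct drm_mode_config_helper_funcs foo_rpm_mode_config_helpers = {
	.atomic_commit_tail = drm_atomic_helper_commit_tail_rpm,
};

static void foo_mode_config_init(struct drm_device *dev)
{
	drm_mode_config_init(dev);

	/* ... min/max sizes, mode_config.funcs, etc. ... */

	dev->mode_config.helper_private = &foo_rpm_mode_config_helpers;
}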
static void commit_tail(struct drm_atomic_state *old_state)
{
struct drm_device *dev = old_state->dev;
......@@ -1310,6 +1374,114 @@ static void commit_work(struct work_struct *work)
commit_tail(state);
}
/**
* drm_atomic_helper_async_check - check if state can be committed asynchronously
* @dev: DRM device
* @state: the driver state object
*
* This helper will check if it is possible to commit the state asynchronously.
* Async commits are not supposed to swap the states like normal sync commits
* but just do in-place changes on the current state.
*
* It will return 0 if the commit can happen in an asynchronous fashion or an
* error code if not. Note that an error just means the update can't be
* committed asynchronously; if it fails, the commit should be treated like a
* normal synchronous commit.
*/
int drm_atomic_helper_async_check(struct drm_device *dev,
struct drm_atomic_state *state)
{
struct drm_crtc *crtc;
struct drm_crtc_state *crtc_state;
struct drm_crtc_commit *commit;
struct drm_plane *__plane, *plane = NULL;
struct drm_plane_state *__plane_state, *plane_state = NULL;
const struct drm_plane_helper_funcs *funcs;
int i, j, n_planes = 0;
for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
if (drm_atomic_crtc_needs_modeset(crtc_state))
return -EINVAL;
}
for_each_new_plane_in_state(state, __plane, __plane_state, i) {
n_planes++;
plane = __plane;
plane_state = __plane_state;
}
/* FIXME: we support only single plane updates for now */
if (!plane || n_planes != 1)
return -EINVAL;
if (!plane_state->crtc)
return -EINVAL;
funcs = plane->helper_private;
if (!funcs->atomic_async_update)
return -EINVAL;
if (plane_state->fence)
return -EINVAL;
/*
* Don't do an async update if there is an outstanding commit modifying
* the plane. This prevents our async update's changes from getting
* overridden by a previous synchronous update's state.
*/
for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
if (plane->crtc != crtc)
continue;
spin_lock(&crtc->commit_lock);
commit = list_first_entry_or_null(&crtc->commit_list,
struct drm_crtc_commit,
commit_entry);
if (!commit) {
spin_unlock(&crtc->commit_lock);
continue;
}
spin_unlock(&crtc->commit_lock);
if (!crtc->state->state)
continue;
for_each_plane_in_state(crtc->state->state, __plane,
__plane_state, j) {
if (__plane == plane)
return -EINVAL;
}
}
return funcs->atomic_async_check(plane, plane_state);
}
EXPORT_SYMBOL(drm_atomic_helper_async_check);
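By way of example (hypothetical driver code, not from this series): a cursor plane opts into the async path by implementing the two new plane helper hooks; the hardware programming is stubbed out.

#include <drm/drm_atomic.h>
#include <drm/drm_modeset_helper_vtables.h>

/* Hypothetical stand-in for the driver's cursor-position register writes. */
static void foo_hw_move_cursor(struct drm_plane *plane, int x, int y)
{
	/* write x/y to the cursor registers here */
}

static int foo_cursor_atomic_async_check(struct drm_plane *plane,
					 struct drm_plane_state *new_state)
{
	/* Only allow position-only updates of an already enabled cursor. */
	if (!plane->state || !plane->state->crtc)
		return -EINVAL;
	if (plane->state->fb != new_state->fb)
		return -EINVAL;

	return 0;
}

static void foo_cursor_atomic_async_update(struct drm_plane *plane,
					   struct drm_plane_state *new_state)
{
	struct drm_plane_state *state = plane->state;

	/* Async commits don't swap states; patch the current state in place. */
	state->crtc_x = new_state->crtc_x;
	state->crtc_y = new_state->crtc_y;
	state->src_x = new_state->src_x;
	state->src_y = new_state->src_y;

	foo_hw_move_cursor(plane, state->crtc_x, state->crtc_y);
}

static const struct drm_plane_helper_funcs foo_cursor_helper_funcs = {
	.atomic_async_check = foo_cursor_atomic_async_check,
	.atomic_async_update = foo_cursor_atomic_async_update,
};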
/**
* drm_atomic_helper_async_commit - commit state asynchronously
* @dev: DRM device
* @state: the driver state object
*
* This function commits a state asynchronously, i.e., not vblank
* synchronized. It should be used on a state only when
* drm_atomic_helper_async_check() succeeds. Async commits are not supposed to swap
* the states like normal sync commits, but just do in-place changes on the
* current state.
*/
void drm_atomic_helper_async_commit(struct drm_device *dev,
struct drm_atomic_state *state)
{
struct drm_plane *plane;
struct drm_plane_state *plane_state;
const struct drm_plane_helper_funcs *funcs;
int i;
for_each_new_plane_in_state(state, plane, plane_state, i) {
funcs = plane->helper_private;
funcs->atomic_async_update(plane, plane_state);
}
}
EXPORT_SYMBOL(drm_atomic_helper_async_commit);
/**
* drm_atomic_helper_commit - commit validated state object
* @dev: DRM device
......@@ -1334,6 +1506,17 @@ int drm_atomic_helper_commit(struct drm_device *dev,
{
int ret;
if (state->async_update) {
ret = drm_atomic_helper_prepare_planes(dev, state);
if (ret)
return ret;
drm_atomic_helper_async_commit(dev, state);
drm_atomic_helper_cleanup_planes(dev, state);
return 0;
}
ret = drm_atomic_helper_setup_commit(state, nonblock);
if (ret)
return ret;
......@@ -1346,10 +1529,8 @@ int drm_atomic_helper_commit(struct drm_device *dev,
if (!nonblock) {
ret = drm_atomic_helper_wait_for_fences(dev, state, true);
if (ret) {
drm_atomic_helper_cleanup_planes(dev, state);
return ret;
}
if (ret)
goto err;
}
/*
......@@ -1358,7 +1539,9 @@ int drm_atomic_helper_commit(struct drm_device *dev,
* the software side now.
*/
drm_atomic_helper_swap_state(state, true);
ret = drm_atomic_helper_swap_state(state, true);
if (ret)
goto err;
/*
* Everything below can be run asynchronously without the need to grab
......@@ -1387,6 +1570,10 @@ int drm_atomic_helper_commit(struct drm_device *dev,
commit_tail(state);
return 0;
err:
drm_atomic_helper_cleanup_planes(dev, state);
return ret;
}
EXPORT_SYMBOL(drm_atomic_helper_commit);
......@@ -1680,9 +1867,7 @@ void drm_atomic_helper_commit_hw_done(struct drm_atomic_state *old_state)
/* backend must have consumed any event by now */
WARN_ON(new_crtc_state->event);
spin_lock(&crtc->commit_lock);
complete_all(&commit->hw_done);
spin_unlock(&crtc->commit_lock);
}
}
EXPORT_SYMBOL(drm_atomic_helper_commit_hw_done);
......@@ -1711,7 +1896,6 @@ void drm_atomic_helper_commit_cleanup_done(struct drm_atomic_state *old_state)
if (WARN_ON(!commit))
continue;
spin_lock(&crtc->commit_lock);
complete_all(&commit->cleanup_done);
WARN_ON(!try_wait_for_completion(&commit->hw_done));
......@@ -1721,8 +1905,6 @@ void drm_atomic_helper_commit_cleanup_done(struct drm_atomic_state *old_state)
if (try_wait_for_completion(&commit->flip_done))
goto del_commit;
spin_unlock(&crtc->commit_lock);
/* We must wait for the vblank event to signal our completion
* before releasing our reference, since the vblank work does
* not hold a reference of its own. */
......@@ -1732,8 +1914,8 @@ void drm_atomic_helper_commit_cleanup_done(struct drm_atomic_state *old_state)
DRM_ERROR("[CRTC:%d:%s] flip_done timed out\n",
crtc->base.id, crtc->name);
spin_lock(&crtc->commit_lock);
del_commit:
spin_lock(&crtc->commit_lock);
list_del(&commit->commit_entry);
spin_unlock(&crtc->commit_lock);
}
......@@ -2069,14 +2251,14 @@ EXPORT_SYMBOL(drm_atomic_helper_cleanup_planes);
/**
* drm_atomic_helper_swap_state - store atomic state into current sw state
* @state: atomic state
* @stall: stall for proceeding commits
* @stall: stall for preceding commits
*
* This function stores the atomic state into the current state pointers in all
* driver objects. It should be called after all steps that could fail have
* been done and succeeded, but before the actual hardware state is committed.
*
* For cleanup and error recovery the current state for all changed objects will
* be swaped into @state.
* be swapped into @state.
*
* With that sequence it fits perfectly into the plane prepare/cleanup sequence:
*
......@@ -2095,12 +2277,16 @@ EXPORT_SYMBOL(drm_atomic_helper_cleanup_planes);
* the &drm_plane.state, &drm_crtc.state or &drm_connector.state pointer. With
* the current atomic helpers this is almost always the case, since the helpers
* don't pass the right state structures to the callbacks.
*
* Returns:
*
* Returns 0 on success. Can return -ERESTARTSYS when @stall is true and
* waiting for the preceding commits has been interrupted.
*/
void drm_atomic_helper_swap_state(struct drm_atomic_state *state,
int drm_atomic_helper_swap_state(struct drm_atomic_state *state,
bool stall)
{
int i;
long ret;
int i, ret;
struct drm_connector *connector;
struct drm_connector_state *old_conn_state, *new_conn_state;
struct drm_crtc *crtc;
......@@ -2108,8 +2294,8 @@ void drm_atomic_helper_swap_state(struct drm_atomic_state *state,
struct drm_plane *plane;
struct drm_plane_state *old_plane_state, *new_plane_state;
struct drm_crtc_commit *commit;
void *obj, *obj_state;
const struct drm_private_state_funcs *funcs;
struct drm_private_obj *obj;
struct drm_private_state *old_obj_state, *new_obj_state;
if (stall) {
for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
......@@ -2123,12 +2309,11 @@ void drm_atomic_helper_swap_state(struct drm_atomic_state *state,
if (!commit)
continue;
ret = wait_for_completion_timeout(&commit->hw_done,
10*HZ);
if (ret == 0)
DRM_ERROR("[CRTC:%d:%s] hw_done timed out\n",
crtc->base.id, crtc->name);
ret = wait_for_completion_interruptible(&commit->hw_done);
drm_crtc_commit_put(commit);
if (ret)
return ret;
}
}
......@@ -2171,8 +2356,17 @@ void drm_atomic_helper_swap_state(struct drm_atomic_state *state,
plane->state = new_plane_state;
}
__for_each_private_obj(state, obj, obj_state, i, funcs)
funcs->swap_state(obj, &state->private_objs[i].obj_state);
for_each_oldnew_private_obj_in_state(state, obj, old_obj_state, new_obj_state, i) {
WARN_ON(obj->state != old_obj_state);
old_obj_state->state = state;
new_obj_state->state = NULL;
state->private_objs[i].state = old_obj_state;
obj->state = new_obj_state;
}
return 0;
}
EXPORT_SYMBOL(drm_atomic_helper_swap_state);
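For drivers that implement their own &drm_mode_config_funcs.atomic_commit (several are converted later in this series), the call site now has to handle the error. A rough, hypothetical sketch of the blocking-only case; all foo_* names are assumptions.

#include <drm/drmP.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>

static void foo_commit_tail(struct drm_atomic_state *state)
{
	struct drm_device *dev = state->dev;

	drm_atomic_helper_commit_modeset_disables(dev, state);
	drm_atomic_helper_commit_planes(dev, state, 0);
	drm_atomic_helper_commit_modeset_enables(dev, state);
	drm_atomic_helper_wait_for_vblanks(dev, state);
	drm_atomic_helper_cleanup_planes(dev, state);
}

static int foo_atomic_commit(struct drm_device *dev,
			     struct drm_atomic_state *state,
			     bool nonblock)
{
	int ret;

	ret = drm_atomic_helper_prepare_planes(dev, state);
	if (ret)
		return ret;

	/*
	 * Swapping can now fail (e.g. -ERESTARTSYS while stalling for a
	 * preceding commit), so the prepared planes must be cleaned up.
	 */
	ret = drm_atomic_helper_swap_state(state, true);
	if (ret) {
		drm_atomic_helper_cleanup_planes(dev, state);
		return ret;
	}

	/*
	 * A nonblocking implementation would take a reference with
	 * drm_atomic_state_get() and run the tail from a worker instead.
	 */
	foo_commit_tail(state);

	return 0;
}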
......@@ -2556,13 +2750,13 @@ int drm_atomic_helper_disable_all(struct drm_device *dev,
goto free;
}
for_each_connector_in_state(state, conn, conn_state, i) {
for_each_new_connector_in_state(state, conn, conn_state, i) {
ret = drm_atomic_set_crtc_for_connector(conn_state, NULL);
if (ret < 0)
goto free;
}
for_each_plane_in_state(state, plane, plane_state, i) {
for_each_new_plane_in_state(state, plane, plane_state, i) {
ret = drm_atomic_set_crtc_for_plane(plane_state, NULL);
if (ret < 0)
goto free;
......@@ -2928,12 +3122,11 @@ drm_atomic_helper_connector_set_property(struct drm_connector *connector,
}
EXPORT_SYMBOL(drm_atomic_helper_connector_set_property);
static int page_flip_common(
struct drm_atomic_state *state,
struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_pending_vblank_event *event,
uint32_t flags)
static int page_flip_common(struct drm_atomic_state *state,
struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_pending_vblank_event *event,
uint32_t flags)
{
struct drm_plane *plane = crtc->primary;
struct drm_plane_state *plane_state;
......@@ -3027,13 +3220,12 @@ EXPORT_SYMBOL(drm_atomic_helper_page_flip);
* Returns:
* Returns 0 on success, negative errno numbers on failure.
*/
int drm_atomic_helper_page_flip_target(
struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_pending_vblank_event *event,
uint32_t flags,
uint32_t target,
struct drm_modeset_acquire_ctx *ctx)
int drm_atomic_helper_page_flip_target(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_pending_vblank_event *event,
uint32_t flags,
uint32_t target,
struct drm_modeset_acquire_ctx *ctx)
{
struct drm_plane *plane = crtc->primary;
struct drm_atomic_state *state;
......@@ -3612,12 +3804,12 @@ int drm_atomic_helper_legacy_gamma_set(struct drm_crtc *crtc,
struct drm_modeset_acquire_ctx *ctx)
{
struct drm_device *dev = crtc->dev;
struct drm_mode_config *config = &dev->mode_config;
struct drm_atomic_state *state;
struct drm_crtc_state *crtc_state;
struct drm_property_blob *blob = NULL;
struct drm_color_lut *blob_data;
int i, ret = 0;
bool replaced;
state = drm_atomic_state_alloc(crtc->dev);
if (!state)
......@@ -3648,20 +3840,10 @@ int drm_atomic_helper_legacy_gamma_set(struct drm_crtc *crtc,
}
/* Reset DEGAMMA_LUT and CTM properties. */
ret = drm_atomic_crtc_set_property(crtc, crtc_state,
config->degamma_lut_property, 0);
if (ret)
goto fail;
ret = drm_atomic_crtc_set_property(crtc, crtc_state,
config->ctm_property, 0);
if (ret)
goto fail;
ret = drm_atomic_crtc_set_property(crtc, crtc_state,
config->gamma_lut_property, blob->base.id);
if (ret)
goto fail;
replaced = drm_property_replace_blob(&crtc_state->degamma_lut, NULL);
replaced |= drm_property_replace_blob(&crtc_state->ctm, NULL);
replaced |= drm_property_replace_blob(&crtc_state->gamma_lut, blob);
crtc_state->color_mgmt_changed |= replaced;
ret = drm_atomic_commit(state);
......@@ -3671,3 +3853,18 @@ int drm_atomic_helper_legacy_gamma_set(struct drm_crtc *crtc,
return ret;
}
EXPORT_SYMBOL(drm_atomic_helper_legacy_gamma_set);
/**
* __drm_atomic_helper_private_obj_duplicate_state - copy atomic private state
* @obj: private object
* @state: new private object state
*
* Copies atomic state from a private object's current state and resets inferred values.
* This is useful for drivers that subclass the private state.
*/
void __drm_atomic_helper_private_obj_duplicate_state(struct drm_private_obj *obj,
struct drm_private_state *state)
{
memcpy(state, obj->state, sizeof(*state));
}
EXPORT_SYMBOL(__drm_atomic_helper_private_obj_duplicate_state);
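A hypothetical subclassing sketch (it mirrors the DP MST conversion further down in this diff): the driver duplicates its larger state struct and lets the helper copy the embedded base; the foo_* names are placeholders.

#include <linux/slab.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>

struct foo_private_state {
	struct drm_private_state base;
	int foo_value;	/* hypothetical driver data */
};

#define to_foo_private_state(s) container_of(s, struct foo_private_state, base)

static struct drm_private_state *
foo_duplicate_state(struct drm_private_obj *obj)
{
	struct foo_private_state *state;

	state = kmemdup(obj->state, sizeof(*state), GFP_KERNEL);
	if (!state)
		return NULL;

	/* Copies the embedded struct drm_private_state base fields. */
	__drm_atomic_helper_private_obj_duplicate_state(obj, &state->base);

	return &state->base;
}

static void foo_destroy_state(struct drm_private_obj *obj,
			      struct drm_private_state *state)
{
	kfree(to_foo_private_state(state));
}

static const struct drm_private_state_funcs foo_private_state_funcs = {
	.atomic_duplicate_state = foo_duplicate_state,
	.atomic_destroy_state = foo_destroy_state,
};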
......@@ -128,6 +128,9 @@ EXPORT_SYMBOL(drm_color_lut_extract);
* optional. The gamma and degamma properties are only attached if
* their size is not 0 and ctm_property is only attached if has_ctm is
* true.
*
* Drivers should use drm_atomic_helper_legacy_gamma_set() to implement the
* legacy &drm_crtc_funcs.gamma_set callback.
*/
void drm_crtc_enable_color_mgmt(struct drm_crtc *crtc,
uint degamma_lut_size,
......
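An illustrative (hypothetical) CRTC setup showing both halves of that advice: the legacy gamma ioctl is routed through the atomic helper, and the color-management properties are attached. The foo_* names and the 256-entry LUT sizes are assumptions.

#include <drm/drm_atomic_helper.h>
#include <drm/drm_color_mgmt.h>
#include <drm/drm_crtc.h>

static const struct drm_crtc_funcs foo_crtc_funcs = {
	.set_config = drm_atomic_helper_set_config,
	.page_flip = drm_atomic_helper_page_flip,
	.destroy = drm_crtc_cleanup,
	.reset = drm_atomic_helper_crtc_reset,
	.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
	.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
	/* Legacy gamma ioctl implemented on top of the GAMMA_LUT property. */
	.gamma_set = drm_atomic_helper_legacy_gamma_set,
};

static void foo_crtc_setup_color_mgmt(struct drm_crtc *crtc)
{
	/* LUT sizes are hardware specific; 256 entries and a CTM here. */
	drm_mode_crtc_set_gamma_size(crtc, 256);
	drm_crtc_enable_color_mgmt(crtc, 256, true, 256);
}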
......@@ -136,21 +136,51 @@ static int crtc_crc_data_count(struct drm_crtc_crc *crc)
return CIRC_CNT(crc->head, crc->tail, DRM_CRC_ENTRIES_NR);
}
static void crtc_crc_cleanup(struct drm_crtc_crc *crc)
{
kfree(crc->entries);
crc->entries = NULL;
crc->head = 0;
crc->tail = 0;
crc->values_cnt = 0;
crc->opened = false;
}
static int crtc_crc_open(struct inode *inode, struct file *filep)
{
struct drm_crtc *crtc = inode->i_private;
struct drm_crtc_crc *crc = &crtc->crc;
struct drm_crtc_crc_entry *entries = NULL;
size_t values_cnt;
int ret;
int ret = 0;
if (crc->opened)
return -EBUSY;
if (drm_drv_uses_atomic_modeset(crtc->dev)) {
ret = drm_modeset_lock_interruptible(&crtc->mutex, NULL);
if (ret)
return ret;
if (!crtc->state->active)
ret = -EIO;
drm_modeset_unlock(&crtc->mutex);
if (ret)
return ret;
}
spin_lock_irq(&crc->lock);
if (!crc->opened)
crc->opened = true;
else
ret = -EBUSY;
spin_unlock_irq(&crc->lock);
ret = crtc->funcs->set_crc_source(crtc, crc->source, &values_cnt);
if (ret)
return ret;
ret = crtc->funcs->set_crc_source(crtc, crc->source, &values_cnt);
if (ret)
goto err;
if (WARN_ON(values_cnt > DRM_MAX_CRC_NR)) {
ret = -EINVAL;
goto err_disable;
......@@ -170,7 +200,6 @@ static int crtc_crc_open(struct inode *inode, struct file *filep)
spin_lock_irq(&crc->lock);
crc->entries = entries;
crc->values_cnt = values_cnt;
crc->opened = true;
/*
* Only return once we got a first frame, so userspace doesn't have to
......@@ -182,12 +211,17 @@ static int crtc_crc_open(struct inode *inode, struct file *filep)
crc->lock);
spin_unlock_irq(&crc->lock);
WARN_ON(ret);
if (ret)
goto err_disable;
return 0;
err_disable:
crtc->funcs->set_crc_source(crtc, NULL, &values_cnt);
err:
spin_lock_irq(&crc->lock);
crtc_crc_cleanup(crc);
spin_unlock_irq(&crc->lock);
return ret;
}
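For context, the driver side of this debugfs interface is the &drm_crtc_funcs.set_crc_source hook. A rough, hypothetical sketch; the "auto" source name, the single CRC value per frame, and the hardware stub are assumptions.

#include <linux/string.h>
#include <drm/drm_crtc.h>

/* Hypothetical stand-in for enabling/disabling the CRC engine in hardware. */
static void foo_hw_crc_enable(struct drm_crtc *crtc, bool enable)
{
	/* program the CRC source/enable registers here */
}

static int foo_crtc_set_crc_source(struct drm_crtc *crtc, const char *source,
				   size_t *values_cnt)
{
	if (!source) {
		foo_hw_crc_enable(crtc, false);
		return 0;
	}

	if (strcmp(source, "auto") != 0)
		return -EINVAL;

	*values_cnt = 1;
	foo_hw_crc_enable(crtc, true);

	return 0;
}

The driver's vblank or frame-done handler would then feed each generated CRC to drm_crtc_add_crc_entry().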
......@@ -197,17 +231,12 @@ static int crtc_crc_release(struct inode *inode, struct file *filep)
struct drm_crtc_crc *crc = &crtc->crc;
size_t values_cnt;
crtc->funcs->set_crc_source(crtc, NULL, &values_cnt);
spin_lock_irq(&crc->lock);
kfree(crc->entries);
crc->entries = NULL;
crc->head = 0;
crc->tail = 0;
crc->values_cnt = 0;
crc->opened = false;
crtc_crc_cleanup(crc);
spin_unlock_irq(&crc->lock);
crtc->funcs->set_crc_source(crtc, NULL, &values_cnt);
return 0;
}
......@@ -334,7 +363,7 @@ int drm_crtc_add_crc_entry(struct drm_crtc *crtc, bool has_frame,
spin_lock(&crc->lock);
/* Caller may not have noticed yet that userspace has stopped reading */
if (!crc->opened) {
if (!crc->entries) {
spin_unlock(&crc->lock);
return -EINVAL;
}
......
......@@ -31,6 +31,8 @@
#include <drm/drmP.h>
#include <drm/drm_fixed.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
/**
* DOC: dp mst helper
......@@ -1335,15 +1337,17 @@ static void drm_dp_mst_link_probe_work(struct work_struct *work)
static bool drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr,
u8 *guid)
{
static u8 zero_guid[16];
u64 salt;
if (!memcmp(guid, zero_guid, 16)) {
u64 salt = get_jiffies_64();
memcpy(&guid[0], &salt, sizeof(u64));
memcpy(&guid[8], &salt, sizeof(u64));
return false;
}
return true;
if (memchr_inv(guid, 0, 16))
return true;
salt = get_jiffies_64();
memcpy(&guid[0], &salt, sizeof(u64));
memcpy(&guid[8], &salt, sizeof(u64));
return false;
}
#if 0
......@@ -2515,8 +2519,8 @@ int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state,
int req_slots;
topology_state = drm_atomic_get_mst_topology_state(state, mgr);
if (topology_state == NULL)
return -ENOMEM;
if (IS_ERR(topology_state))
return PTR_ERR(topology_state);
port = drm_dp_get_validated_port_ref(mgr, port);
if (port == NULL)
......@@ -2555,8 +2559,8 @@ int drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state,
struct drm_dp_mst_topology_state *topology_state;
topology_state = drm_atomic_get_mst_topology_state(state, mgr);
if (topology_state == NULL)
return -ENOMEM;
if (IS_ERR(topology_state))
return PTR_ERR(topology_state);
/* We cannot rely on port->vcpi.num_slots to update
* topology_state->avail_slots as the port may not exist if the parent
......@@ -2992,41 +2996,32 @@ static void drm_dp_destroy_connector_work(struct work_struct *work)
(*mgr->cbs->hotplug)(mgr);
}
void *drm_dp_mst_duplicate_state(struct drm_atomic_state *state, void *obj)
static struct drm_private_state *
drm_dp_mst_duplicate_state(struct drm_private_obj *obj)
{
struct drm_dp_mst_topology_mgr *mgr = obj;
struct drm_dp_mst_topology_state *new_mst_state;
struct drm_dp_mst_topology_state *state;
if (WARN_ON(!mgr->state))
state = kmemdup(obj->state, sizeof(*state), GFP_KERNEL);
if (!state)
return NULL;
new_mst_state = kmemdup(mgr->state, sizeof(*new_mst_state), GFP_KERNEL);
if (new_mst_state)
new_mst_state->state = state;
return new_mst_state;
}
void drm_dp_mst_swap_state(void *obj, void **obj_state_ptr)
{
struct drm_dp_mst_topology_mgr *mgr = obj;
struct drm_dp_mst_topology_state **topology_state_ptr;
topology_state_ptr = (struct drm_dp_mst_topology_state **)obj_state_ptr;
__drm_atomic_helper_private_obj_duplicate_state(obj, &state->base);
mgr->state->state = (*topology_state_ptr)->state;
swap(*topology_state_ptr, mgr->state);
mgr->state->state = NULL;
return &state->base;
}
void drm_dp_mst_destroy_state(void *obj_state)
static void drm_dp_mst_destroy_state(struct drm_private_obj *obj,
struct drm_private_state *state)
{
kfree(obj_state);
struct drm_dp_mst_topology_state *mst_state =
to_dp_mst_topology_state(state);
kfree(mst_state);
}
static const struct drm_private_state_funcs mst_state_funcs = {
.duplicate_state = drm_dp_mst_duplicate_state,
.swap_state = drm_dp_mst_swap_state,
.destroy_state = drm_dp_mst_destroy_state,
.atomic_duplicate_state = drm_dp_mst_duplicate_state,
.atomic_destroy_state = drm_dp_mst_destroy_state,
};
/**
......@@ -3050,8 +3045,7 @@ struct drm_dp_mst_topology_state *drm_atomic_get_mst_topology_state(struct drm_a
struct drm_device *dev = mgr->dev;
WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));
return drm_atomic_get_private_obj_state(state, mgr,
&mst_state_funcs);
return to_dp_mst_topology_state(drm_atomic_get_private_obj_state(state, &mgr->base));
}
EXPORT_SYMBOL(drm_atomic_get_mst_topology_state);
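A hypothetical consumer-side sketch: an MST encoder's atomic_check allocates VCPI slots through the topology state, which, with this private-object conversion, can now propagate errors from drm_atomic_get_private_obj_state(). The connector wrapper, its layout, and the 24 bpp figure are assumptions.

#include <linux/kernel.h>
#include <drm/drm_atomic.h>
#include <drm/drm_connector.h>
#include <drm/drm_encoder.h>
#include <drm/drm_dp_mst_helper.h>

/* Hypothetical connector wrapper; names and layout are assumptions. */
struct foo_mst_connector {
	struct drm_connector base;
	struct drm_dp_mst_topology_mgr *mgr;
	struct drm_dp_mst_port *port;
};

#define to_foo_mst_connector(c) container_of(c, struct foo_mst_connector, base)

static int foo_mst_encoder_atomic_check(struct drm_encoder *encoder,
					struct drm_crtc_state *crtc_state,
					struct drm_connector_state *conn_state)
{
	struct foo_mst_connector *conn =
		to_foo_mst_connector(conn_state->connector);
	int pbn, slots;

	/* PBN from the adjusted pixel clock (kHz), assuming 24 bpp. */
	pbn = drm_dp_calc_pbn_mode(crtc_state->adjusted_mode.clock, 24);

	slots = drm_dp_atomic_find_vcpi_slots(crtc_state->state, conn->mgr,
					      conn->port, pbn);
	if (slots < 0)
		return slots;

	return 0;
}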
......@@ -3071,6 +3065,8 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
int max_dpcd_transaction_bytes,
int max_payloads, int conn_base_id)
{
struct drm_dp_mst_topology_state *mst_state;
mutex_init(&mgr->lock);
mutex_init(&mgr->qlock);
mutex_init(&mgr->payload_lock);
......@@ -3099,14 +3095,18 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
if (test_calc_pbn_mode() < 0)
DRM_ERROR("MST PBN self-test failed\n");
mgr->state = kzalloc(sizeof(*mgr->state), GFP_KERNEL);
if (mgr->state == NULL)
mst_state = kzalloc(sizeof(*mst_state), GFP_KERNEL);
if (mst_state == NULL)
return -ENOMEM;
mgr->state->mgr = mgr;
mst_state->mgr = mgr;
/* max. time slots - one slot for MTP header */
mgr->state->avail_slots = 63;
mgr->funcs = &mst_state_funcs;
mst_state->avail_slots = 63;
drm_atomic_private_obj_init(&mgr->base,
&mst_state->base,
&mst_state_funcs);
return 0;
}
......@@ -3128,8 +3128,7 @@ void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr)
mutex_unlock(&mgr->payload_lock);
mgr->dev = NULL;
mgr->aux = NULL;
kfree(mgr->state);
mgr->state = NULL;
drm_atomic_private_obj_fini(&mgr->base);
mgr->funcs = NULL;
}
EXPORT_SYMBOL(drm_dp_mst_topology_mgr_destroy);
......
......@@ -63,6 +63,15 @@ module_param_named(debug, drm_debug, int, 0600);
static DEFINE_SPINLOCK(drm_minor_lock);
static struct idr drm_minors_idr;
/*
* If the drm core fails to init for whatever reason,
* we should prevent any drivers from registering with it.
* It's best to check this at drm_dev_init(), as some drivers
* prefer to embed struct drm_device into their own device
* structure and call drm_dev_init() themselves.
*/
static bool drm_core_init_complete = false;
static struct dentry *drm_debugfs_root;
#define DRM_PRINTK_FMT "[" DRM_NAME ":%s]%s %pV"
......@@ -484,6 +493,11 @@ int drm_dev_init(struct drm_device *dev,
{
int ret;
if (!drm_core_init_complete) {
DRM_ERROR("DRM core is not initialized\n");
return -ENODEV;
}
kref_init(&dev->ref);
dev->dev = parent;
dev->driver = driver;
......@@ -966,6 +980,8 @@ static int __init drm_core_init(void)
if (ret < 0)
goto error;
drm_core_init_complete = true;
DRM_DEBUG("Initialized\n");
return 0;
......
......@@ -640,7 +640,7 @@ EXPORT_SYMBOL_GPL(drm_fbdev_cma_hotplug_event);
* Calls drm_fb_helper_set_suspend, which is a wrapper around
* fb_set_suspend implemented by fbdev core.
*/
void drm_fbdev_cma_set_suspend(struct drm_fbdev_cma *fbdev_cma, int state)
void drm_fbdev_cma_set_suspend(struct drm_fbdev_cma *fbdev_cma, bool state)
{
if (fbdev_cma)
drm_fb_helper_set_suspend(&fbdev_cma->fb_helper, state);
......@@ -657,7 +657,7 @@ EXPORT_SYMBOL(drm_fbdev_cma_set_suspend);
* fb_set_suspend implemented by fbdev core.
*/
void drm_fbdev_cma_set_suspend_unlocked(struct drm_fbdev_cma *fbdev_cma,
int state)
bool state)
{
if (fbdev_cma)
drm_fb_helper_set_suspend_unlocked(&fbdev_cma->fb_helper,
......
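As an illustration of the new bool parameter (hypothetical driver code; the foo_* structure and suspend flow are assumptions):

#include <linux/err.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_fb_cma_helper.h>

struct foo_drm_private {
	struct drm_device *drm;
	struct drm_fbdev_cma *fbdev;
	struct drm_atomic_state *saved_state;
};

static int foo_drm_suspend(struct foo_drm_private *priv)
{
	drm_fbdev_cma_set_suspend_unlocked(priv->fbdev, true);

	priv->saved_state = drm_atomic_helper_suspend(priv->drm);
	if (IS_ERR(priv->saved_state)) {
		drm_fbdev_cma_set_suspend_unlocked(priv->fbdev, false);
		return PTR_ERR(priv->saved_state);
	}

	return 0;
}

static int foo_drm_resume(struct foo_drm_private *priv)
{
	int ret;

	ret = drm_atomic_helper_resume(priv->drm, priv->saved_state);
	drm_fbdev_cma_set_suspend_unlocked(priv->fbdev, false);

	return ret;
}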
......@@ -817,7 +817,7 @@ static int atomic_remove_fb(struct drm_framebuffer *fb)
plane->old_fb = plane->fb;
}
for_each_connector_in_state(state, conn, conn_state, i) {
for_each_new_connector_in_state(state, conn, conn_state, i) {
ret = drm_atomic_set_crtc_for_connector(conn_state, NULL);
if (ret)
......