Commit 3806a271 authored by Dave Airlie

Merge tag 'drm-misc-next-2016-12-30' of git://anongit.freedesktop.org/git/drm-misc into drm-next

First -misc pull for 4.11:
- drm_mm rework + lots of selftests (Chris Wilson)
- new connector_list locking+iterators
- plenty of kerneldoc updates
- format handling rework from Ville
- atomic helper changes from Maarten for better plane corner-case handling
  in drivers, plus the i915 legacy cursor patch that needs this
- bridge cleanup from Laurent
- plus plenty of small stuff all over
- also contains a merge of the 4.10 docs tree so that we could apply the
  dma-buf kerneldoc patches

It's a lot more than usual, but due to the merge window blackout it also
covers about 4 weeks, so all in line again on a per-week basis. The more
annoying part with no pull request for 4 weeks is managing cross-tree
work. The -intel pull request I'll follow up with does conflict quite a
bit with -misc here. Longer-term (if drm-misc keeps growing) a
drm-next-queued to accept pull requests for the next merge window during
this time might be useful.

I'd also like to backmerge -rc2+this into drm-intel next week; we have
quite a pile of patches waiting for the stuff in here.

* tag 'drm-misc-next-2016-12-30' of git://anongit.freedesktop.org/git/drm-misc: (126 commits)
  drm: Add kerneldoc markup for new @scan parameters in drm_mm
  drm/mm: Document locking rules
  drm: Use drm_mm_insert_node_in_range_generic() for everyone
  drm: Apply range restriction after color adjustment when allocation
  drm: Wrap drm_mm_node.hole_follows
  drm: Apply tight eviction scanning to color_adjust
  drm: Simplify drm_mm scan-list manipulation
  drm: Optimise power-of-two alignments in drm_mm_scan_add_block()
  drm: Compute tight evictions for drm_mm_scan
  drm: Fix application of color vs range restriction when scanning drm_mm
  drm: Unconditionally do the range check in drm_mm_scan_add_block()
  drm: Rename prev_node to hole in drm_mm_scan_add_block()
  drm: Fix O= out-of-tree builds for selftests
  drm: Extract struct drm_mm_scan from struct drm_mm
  drm: Add asserts to catch overflow in drm_mm_init() and drm_mm_init_scan()
  drm: Simplify drm_mm_clean()
  drm: Detect overflow in drm_mm_reserve_node()
  drm: Fix kerneldoc for drm_mm_scan_remove_block()
  drm: Promote drm_mm alignment to u64
  drm: kselftest for drm_mm and restricted color eviction
  ...
THS8135 Video DAC
-----------------

This is the binding for Texas Instruments THS8135 Video DAC bridge.

Required properties:

- compatible: Must be "ti,ths8135"

Required nodes:

This device has two video ports. Their connections are modelled using the OF
graph bindings specified in Documentation/devicetree/bindings/graph.txt.

- Video port 0 for RGB input
- Video port 1 for VGA output

Example
-------

vga-bridge {
	compatible = "ti,ths8135";
	#address-cells = <1>;
	#size-cells = <0>;

	ports {
		#address-cells = <1>;
		#size-cells = <0>;

		port@0 {
			reg = <0>;

			vga_bridge_in: endpoint {
				remote-endpoint = <&lcdc_out_vga>;
			};
		};

		port@1 {
			reg = <1>;

			vga_bridge_out: endpoint {
				remote-endpoint = <&vga_con_in>;
			};
		};
	};
};
......@@ -16,7 +16,7 @@ Required properties:
"clk_ade_core" for the ADE core clock.
"clk_codec_jpeg" for the media NOC QoS clock, which use the same clock with
jpeg codec.
"clk_ade_pix" for the ADE pixel clok.
"clk_ade_pix" for the ADE pixel clock.
- assigned-clocks: Should contain "clk_ade_core" and "clk_codec_jpeg" clocks'
phandle + clock-specifier pairs.
- assigned-clock-rates: clock rates, one for each entry in assigned-clocks.
......
......@@ -17,6 +17,98 @@ shared or exclusive fence(s) associated with the buffer.
Shared DMA Buffers
------------------
This document serves as a guide to device-driver writers on what the dma-buf
buffer sharing API is and how to use it for exporting and using shared buffers.
Any device driver which wishes to be a part of DMA buffer sharing can do so as
either the 'exporter' of buffers or the 'user'/'importer' of buffers.
Say a driver A wants to use buffers created by driver B; then we call B the
exporter, and A the buffer-user/importer.
The exporter
- implements and manages operations in :c:type:`struct dma_buf_ops
<dma_buf_ops>` for the buffer,
- allows other users to share the buffer by using dma_buf sharing APIs,
- manages the details of buffer allocation, wrapped into a :c:type:`struct
dma_buf <dma_buf>`,
- decides about the actual backing storage where this allocation happens,
- and takes care of any migration of scatterlist - for all (shared) users of
this buffer.
The buffer-user
- is one of (many) sharing users of the buffer.
- doesn't need to worry about how the buffer is allocated, or where.
- and needs a mechanism to get access to the scatterlist that makes up this
buffer in memory, mapped into its own address space, so it can access the
same area of memory. This interface is provided by :c:type:`struct
dma_buf_attachment <dma_buf_attachment>`.
Any exporters or users of the dma-buf buffer sharing framework must have a
'select DMA_SHARED_BUFFER' in their respective Kconfigs.
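As a minimal sketch of the exporter side (the ``my_*`` names and the private
buffer structure are hypothetical, and most of the required dma_buf_ops
callbacks are elided), wrapping a buffer and handing it to userspace roughly
looks like this::

    #include <linux/dma-buf.h>

    struct my_buffer {
            size_t size;
            /* backing storage details ... */
    };

    static const struct dma_buf_ops my_dmabuf_ops = {
            /* .map_dma_buf, .unmap_dma_buf, .release, ... */
    };

    static int my_export_buffer(struct my_buffer *buf)
    {
            DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
            struct dma_buf *dmabuf;

            exp_info.ops = &my_dmabuf_ops;
            exp_info.size = buf->size;
            exp_info.flags = O_RDWR;
            exp_info.priv = buf;

            dmabuf = dma_buf_export(&exp_info);
            if (IS_ERR(dmabuf))
                    return PTR_ERR(dmabuf);

            /* hand the buffer to userspace as a file descriptor */
            return dma_buf_fd(dmabuf, O_CLOEXEC);
    }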
Userspace Interface Notes
~~~~~~~~~~~~~~~~~~~~~~~~~
Mostly a DMA buffer file descriptor is simply an opaque object for userspace,
and hence the generic interface exposed is very minimal. There are a few things to
consider though:
- Since kernel 3.12 the dma-buf FD supports the llseek system call, but only
with offset=0 and whence=SEEK_END|SEEK_SET. SEEK_SET is supported to allow
the usual size discovery pattern size = SEEK_END(0); SEEK_SET(0). Every other
llseek operation will report -EINVAL.
If llseek on dma-buf FDs isn't supported the kernel will report -ESPIPE for all
cases. Userspace can use this to detect support for discovering the dma-buf
size using llseek; see the sketch after this list.
- In order to avoid fd leaks on exec, the FD_CLOEXEC flag must be set
on the file descriptor. This is not just a resource leak, but a
potential security hole. It could give the newly exec'd application
access to buffers, via the leaked fd, to which it should otherwise
not be permitted access.
The problem with doing this via a separate fcntl() call, versus doing it
atomically when the fd is created, is that this is inherently racy in a
multi-threaded app[3]. The issue is made worse when it is library code
opening/creating the file descriptor, as the application may not even be
aware of the fds.
To avoid this problem, userspace must have a way to request O_CLOEXEC
flag be set when the dma-buf fd is created. So any API provided by
the exporting driver to create a dmabuf fd must provide a way to let
userspace control setting of O_CLOEXEC flag passed in to dma_buf_fd().
- Memory mapping the contents of the DMA buffer is also supported. See the
discussion below on `CPU Access to DMA Buffer Objects`_ for the full details.
- The DMA buffer FD is also pollable, see `Fence Poll Support`_ below for
details.
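A hedged userspace sketch of the llseek-based size discovery mentioned in the
first point above (plain C, with the -ESPIPE probe doubling as the feature
check)::

    #include <unistd.h>
    #include <errno.h>
    #include <sys/types.h>

    /* Returns the dma-buf size, or -1 if the kernel lacks llseek support. */
    static off_t dma_buf_size(int fd)
    {
            off_t size = lseek(fd, 0, SEEK_END);

            if (size < 0)
                    return errno == ESPIPE ? -1 : size;

            lseek(fd, 0, SEEK_SET);  /* rewind to a well-defined offset */
            return size;
    }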
Basic Operation and Device DMA Access
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/dma-buf.c
:doc: dma buf device access
CPU Access to DMA Buffer Objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/dma-buf.c
:doc: cpu access
Fence Poll Support
~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/dma-buf.c
:doc: fence polling
Kernel Functions and Structures Reference
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/dma-buf.c
:export:
......
......@@ -3966,7 +3966,7 @@ F: drivers/dma-buf/
F: include/linux/dma-buf*
F: include/linux/reservation.h
F: include/linux/*fence.h
F: Documentation/dma-buf-sharing.txt
F: Documentation/driver-api/dma-buf.rst
T: git git://anongit.freedesktop.org/drm/drm-misc
SYNC FILE FRAMEWORK
......
......@@ -124,6 +124,28 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
return base + offset;
}
/**
* DOC: fence polling
*
* To support cross-device and cross-driver synchronization of buffer access
* implicit fences (represented internally in the kernel with struct &fence) can
* be attached to a &dma_buf. The glue for that and a few related things are
* provided in the &reservation_object structure.
*
* Userspace can query the state of these implicitly tracked fences using poll()
* and related system calls:
*
 * - Checking for POLLIN, i.e. read access, can be used to query the state of the
* most recent write or exclusive fence.
*
* - Checking for POLLOUT, i.e. write access, can be used to query the state of
* all attached fences, shared and exclusive ones.
*
* Note that this only signals the completion of the respective fences, i.e. the
* DMA transfers are complete. Cache flushing and any other necessary
* preparations before CPU access can begin still need to happen.
*/
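A minimal userspace sketch of the poll() usage described above (assuming a
valid dma-buf file descriptor; error handling omitted):

	#include <poll.h>

	/* Wait up to 100ms for the most recent write/exclusive fence to signal. */
	static int wait_for_pending_writes(int dmabuf_fd)
	{
		struct pollfd pfd = { .fd = dmabuf_fd, .events = POLLIN };

		return poll(&pfd, 1, 100);
	}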
static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
{
struct dma_buf_poll_cb_t *dcb = (struct dma_buf_poll_cb_t *)cb;
......@@ -313,6 +335,37 @@ static inline int is_dma_buf_file(struct file *file)
return file->f_op == &dma_buf_fops;
}
/**
* DOC: dma buf device access
*
* For device DMA access to a shared DMA buffer the usual sequence of operations
* is fairly simple:
*
 * 1. The exporter defines its exporter instance using
* DEFINE_DMA_BUF_EXPORT_INFO() and calls dma_buf_export() to wrap a private
* buffer object into a &dma_buf. It then exports that &dma_buf to userspace
* as a file descriptor by calling dma_buf_fd().
*
 * 2. Userspace passes this file descriptor to all drivers it wants this buffer
 * to share with: First the file descriptor is converted to a &dma_buf using
 * dma_buf_get(). Then the buffer is attached to the device using
* dma_buf_attach().
*
* Up to this stage the exporter is still free to migrate or reallocate the
* backing storage.
*
 * 3. Once the buffer is attached to all devices userspace can initiate DMA
* access to the shared buffer. In the kernel this is done by calling
* dma_buf_map_attachment() and dma_buf_unmap_attachment().
*
* 4. Once a driver is done with a shared buffer it needs to call
* dma_buf_detach() (after cleaning up any mappings) and then release the
* reference acquired with dma_buf_get by calling dma_buf_put().
*
* For the detailed semantics exporters are expected to implement see
* &dma_buf_ops.
*/
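A rough importer-side sketch of steps 2-4 above (the my_import_buffer() name is
made up; the device pointer is assumed to come from the importing driver):

	#include <linux/dma-buf.h>

	static int my_import_buffer(struct device *dev, int fd)
	{
		struct dma_buf *dmabuf;
		struct dma_buf_attachment *attach;
		struct sg_table *sgt;
		int ret = 0;

		dmabuf = dma_buf_get(fd);		/* step 2: fd -> &dma_buf */
		if (IS_ERR(dmabuf))
			return PTR_ERR(dmabuf);

		attach = dma_buf_attach(dmabuf, dev);	/* step 2: attach to device */
		if (IS_ERR(attach)) {
			ret = PTR_ERR(attach);
			goto out_put;
		}

		sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);  /* step 3 */
		if (IS_ERR(sgt)) {
			ret = PTR_ERR(sgt);
			goto out_detach;
		}

		/* ... program the device using the scatterlist in sgt ... */

		dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
	out_detach:
		dma_buf_detach(dmabuf, attach);		/* step 4 */
	out_put:
		dma_buf_put(dmabuf);
		return ret;
	}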
/**
* dma_buf_export - Creates a new dma_buf, and associates an anon file
* with this buffer, so it can be exported.
......@@ -320,13 +373,15 @@ static inline int is_dma_buf_file(struct file *file)
* Additionally, provide a name string for exporter; useful in debugging.
*
* @exp_info: [in] holds all the export related information provided
* by the exporter. see struct dma_buf_export_info
* by the exporter. see struct &dma_buf_export_info
* for further details.
*
* Returns, on success, a newly created dma_buf object, which wraps the
* supplied private data and operations for dma_buf_ops. On either missing
* ops, or error in allocating struct dma_buf, will return negative error.
*
* For most cases the easiest way to create @exp_info is through the
* %DEFINE_DMA_BUF_EXPORT_INFO macro.
*/
struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
{
......@@ -458,7 +513,12 @@ EXPORT_SYMBOL_GPL(dma_buf_get);
* dma_buf_put - decreases refcount of the buffer
* @dmabuf: [in] buffer to reduce refcount of
*
* Uses file's refcounting done implicitly by fput()
* Uses file's refcounting done implicitly by fput().
*
* If, as a result of this call, the refcount becomes 0, the 'release' file
* operation related to this fd is called. It calls the release operation of
* struct &dma_buf_ops in turn, and frees the memory allocated for dmabuf when
* exported.
*/
void dma_buf_put(struct dma_buf *dmabuf)
{
......@@ -475,8 +535,17 @@ EXPORT_SYMBOL_GPL(dma_buf_put);
* @dmabuf: [in] buffer to attach device to.
* @dev: [in] device to be attached.
*
* Returns struct dma_buf_attachment * for this attachment; returns ERR_PTR on
* error.
* Returns struct dma_buf_attachment pointer for this attachment. Attachments
* must be cleaned up by calling dma_buf_detach().
*
* Returns:
*
* A pointer to newly created &dma_buf_attachment on success, or a negative
* error code wrapped into a pointer on failure.
*
* Note that this can fail if the backing storage of @dmabuf is in a place not
* accessible to @dev, and cannot be moved to a more suitable place. This is
* indicated with the error code -EBUSY.
*/
struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
struct device *dev)
......@@ -519,6 +588,7 @@ EXPORT_SYMBOL_GPL(dma_buf_attach);
* @dmabuf: [in] buffer to detach from.
* @attach: [in] attachment to be detached; is free'd after this call.
*
* Clean up a device attachment obtained by calling dma_buf_attach().
*/
void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
{
......@@ -543,7 +613,12 @@ EXPORT_SYMBOL_GPL(dma_buf_detach);
* @direction: [in] direction of DMA transfer
*
* Returns sg_table containing the scatterlist to be returned; returns ERR_PTR
* on error.
* on error. May return -EINTR if it is interrupted by a signal.
*
 * A mapping must be unmapped again using dma_buf_unmap_attachment(). Note that
* the underlying backing storage is pinned for as long as a mapping exists,
* therefore users/importers should not hold onto a mapping for undue amounts of
* time.
*/
struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
enum dma_data_direction direction)
......@@ -571,6 +646,7 @@ EXPORT_SYMBOL_GPL(dma_buf_map_attachment);
* @sg_table: [in] scatterlist info of the buffer to unmap
* @direction: [in] direction of DMA transfer
*
 * This unmaps a DMA mapping for @attach obtained by dma_buf_map_attachment().
*/
void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
struct sg_table *sg_table,
......@@ -586,6 +662,122 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
}
EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
/**
* DOC: cpu access
*
 * There are multiple reasons for supporting CPU access to a dma buffer object:
*
* - Fallback operations in the kernel, for example when a device is connected
* over USB and the kernel needs to shuffle the data around first before
 * sending it away. Cache coherency is handled by bracketing any transactions
 * with calls to dma_buf_begin_cpu_access() and dma_buf_end_cpu_access().
*
* To support dma_buf objects residing in highmem cpu access is page-based
* using an api similar to kmap. Accessing a dma_buf is done in aligned chunks
* of PAGE_SIZE size. Before accessing a chunk it needs to be mapped, which
* returns a pointer in kernel virtual address space. Afterwards the chunk
* needs to be unmapped again. There is no limit on how often a given chunk
* can be mapped and unmapped, i.e. the importer does not need to call
* begin_cpu_access again before mapping the same chunk again.
*
* Interfaces::
* void \*dma_buf_kmap(struct dma_buf \*, unsigned long);
* void dma_buf_kunmap(struct dma_buf \*, unsigned long, void \*);
*
* There are also atomic variants of these interfaces. Like for kmap they
* facilitate non-blocking fast-paths. Neither the importer nor the exporter
* (in the callback) is allowed to block when using these.
*
* Interfaces::
* void \*dma_buf_kmap_atomic(struct dma_buf \*, unsigned long);
* void dma_buf_kunmap_atomic(struct dma_buf \*, unsigned long, void \*);
*
* For importers all the restrictions of using kmap apply, like the limited
* supply of kmap_atomic slots. Hence an importer shall only hold onto at
* max 2 atomic dma_buf kmaps at the same time (in any given process context).
*
* dma_buf kmap calls outside of the range specified in begin_cpu_access are
* undefined. If the range is not PAGE_SIZE aligned, kmap needs to succeed on
* the partial chunks at the beginning and end but may return stale or bogus
* data outside of the range (in these partial chunks).
*
* Note that these calls need to always succeed. The exporter needs to
* complete any preparations that might fail in begin_cpu_access.
*
* For some cases the overhead of kmap can be too high, a vmap interface
* is introduced. This interface should be used very carefully, as vmalloc
 * space is a limited resource on many architectures.
*
* Interfaces::
* void \*dma_buf_vmap(struct dma_buf \*dmabuf)
* void dma_buf_vunmap(struct dma_buf \*dmabuf, void \*vaddr)
*
* The vmap call can fail if there is no vmap support in the exporter, or if
* it runs out of vmalloc space. Fallback to kmap should be implemented. Note
* that the dma-buf layer keeps a reference count for all vmap access and
* calls down into the exporter's vmap function only when no vmapping exists,
* and only unmaps it once. Protection against concurrent vmap/vunmap calls is
* provided by taking the dma_buf->lock mutex.
*
* - For full compatibility on the importer side with existing userspace
* interfaces, which might already support mmap'ing buffers. This is needed in
* many processing pipelines (e.g. feeding a software rendered image into a
* hardware pipeline, thumbnail creation, snapshots, ...). Also, Android's ION
* framework already supported this and for DMA buffer file descriptors to
* replace ION buffers mmap support was needed.
*
 * There are no special interfaces; userspace simply calls mmap on the dma-buf
 * fd. But like for CPU access there's a need to bracket the actual access,
* which is handled by the ioctl (DMA_BUF_IOCTL_SYNC). Note that
* DMA_BUF_IOCTL_SYNC can fail with -EAGAIN or -EINTR, in which case it must
* be restarted.
*
* Some systems might need some sort of cache coherency management e.g. when
* CPU and GPU domains are being accessed through dma-buf at the same time.
* To circumvent this problem there are begin/end coherency markers, that
* forward directly to existing dma-buf device drivers vfunc hooks. Userspace
* can make use of those markers through the DMA_BUF_IOCTL_SYNC ioctl. The
 * sequence would be used like the following:
*
* - mmap dma-buf fd
* - for each drawing/upload cycle in CPU 1. SYNC_START ioctl, 2. read/write
* to mmap area 3. SYNC_END ioctl. This can be repeated as often as you
* want (with the new data being consumed by say the GPU or the scanout
* device)
* - munmap once you don't need the buffer any more
*
* For correctness and optimal performance, it is always required to use
* SYNC_START and SYNC_END before and after, respectively, when accessing the
* mapped address. Userspace cannot rely on coherent access, even when there
* are systems where it just works without calling these ioctls.
*
* - And as a CPU fallback in userspace processing pipelines.
*
* Similar to the motivation for kernel cpu access it is again important that
* the userspace code of a given importing subsystem can use the same
 * interfaces with an imported dma-buf buffer object as with a native buffer
* object. This is especially important for drm where the userspace part of
* contemporary OpenGL, X, and other drivers is huge, and reworking them to
 * use a different way to mmap a buffer is rather invasive.
*
* The assumption in the current dma-buf interfaces is that redirecting the
* initial mmap is all that's needed. A survey of some of the existing
* subsystems shows that no driver seems to do any nefarious thing like
* syncing up with outstanding asynchronous processing on the device or
* allocating special resources at fault time. So hopefully this is good
* enough, since adding interfaces to intercept pagefaults and allow pte
* shootdowns would increase the complexity quite a bit.
*
* Interface::
* int dma_buf_mmap(struct dma_buf \*, struct vm_area_struct \*,
* unsigned long);
*
* If the importing subsystem simply provides a special-purpose mmap call to
* set up a mapping in userspace, calling do_mmap with dma_buf->file will
* equally achieve that for a dma-buf object.
*/
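A hedged userspace sketch of the mmap + SYNC_START/SYNC_END cycle described
above (the buffer size is assumed to be known, e.g. via llseek; the
-EAGAIN/-EINTR restart and full error handling are omitted):

	#include <string.h>
	#include <sys/mman.h>
	#include <sys/ioctl.h>
	#include <linux/dma-buf.h>

	static int cpu_write_cycle(int fd, size_t size)
	{
		struct dma_buf_sync sync = { 0 };
		void *map = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

		if (map == MAP_FAILED)
			return -1;

		sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE;
		ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);

		memset(map, 0, size);		/* CPU access to the mapped buffer */

		sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
		ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);

		return munmap(map, size);
	}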
static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
......@@ -611,6 +803,10 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
* @dmabuf: [in] buffer to prepare cpu access for.
* @direction: [in] length of range for cpu access.
*
* After the cpu access is complete the caller should call
 * dma_buf_end_cpu_access(). Only when cpu access is bracketed by both calls is
* it guaranteed to be coherent with other DMA access.
*
* Can return negative error values, returns 0 on success.
*/
int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
......@@ -643,6 +839,8 @@ EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
* @dmabuf: [in] buffer to complete cpu access for.
* @direction: [in] length of range for cpu access.
*
* This terminates CPU access started with dma_buf_begin_cpu_access().
*
* Can return negative error values, returns 0 on success.
*/
int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
......
......@@ -67,9 +67,10 @@ static void fence_check_cb_func(struct dma_fence *f, struct dma_fence_cb *cb)
* sync_file_create() - creates a sync file
* @fence: fence to add to the sync_fence
*
* Creates a sync_file containg @fence. Once this is called, the sync_file
* takes ownership of @fence. The sync_file can be released with
* fput(sync_file->file). Returns the sync_file or NULL in case of error.
 * Creates a sync_file containing @fence. This function acquires an additional
 * reference of @fence for the newly-created &sync_file, if it succeeds. The
* sync_file can be released with fput(sync_file->file). Returns the
* sync_file or NULL in case of error.
*/
struct sync_file *sync_file_create(struct dma_fence *fence)
{
......@@ -90,13 +91,6 @@ struct sync_file *sync_file_create(struct dma_fence *fence)
}
EXPORT_SYMBOL(sync_file_create);
/**
* sync_file_fdget() - get a sync_file from an fd
* @fd: fd referencing a fence
*
* Ensures @fd references a valid sync_file, increments the refcount of the
* backing file. Returns the sync_file or NULL in case of error.
*/
static struct sync_file *sync_file_fdget(int fd)
{
struct file *file = fget(fd);
......@@ -468,4 +462,3 @@ static const struct file_operations sync_file_fops = {
.unlocked_ioctl = sync_file_ioctl,
.compat_ioctl = sync_file_ioctl,
};
......@@ -48,6 +48,21 @@ config DRM_DEBUG_MM
If in doubt, say "N".
config DRM_DEBUG_MM_SELFTEST
tristate "kselftests for DRM range manager (struct drm_mm)"
depends on DRM
depends on DEBUG_KERNEL
select PRIME_NUMBERS
select DRM_LIB_RANDOM
default n
help
This option provides a kernel module that can be used to test
the DRM range manager (drm_mm) and its API. This option is not
useful for distributions or general kernels, but only for kernel
developers working on DRM and associated drivers.
If in doubt, say "N".
config DRM_KMS_HELPER
tristate
depends on DRM
......@@ -321,3 +336,7 @@ config DRM_SAVAGE
chipset. If M is selected the module will be called savage.
endif # DRM_LEGACY
config DRM_LIB_RANDOM
bool
default n
......@@ -18,6 +18,7 @@ drm-y := drm_auth.o drm_bufs.o drm_cache.o \
drm_plane.o drm_color_mgmt.o drm_print.o \
drm_dumb_buffers.o drm_mode_config.o
drm-$(CONFIG_DRM_LIB_RANDOM) += lib/drm_random.o
drm-$(CONFIG_COMPAT) += drm_ioc32.o
drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o
drm-$(CONFIG_PCI) += ati_pcigart.o
......@@ -37,6 +38,7 @@ drm_kms_helper-$(CONFIG_DRM_KMS_CMA_HELPER) += drm_fb_cma_helper.o
drm_kms_helper-$(CONFIG_DRM_DP_AUX_CHARDEV) += drm_dp_aux_dev.o
obj-$(CONFIG_DRM_KMS_HELPER) += drm_kms_helper.o
obj-$(CONFIG_DRM_DEBUG_MM_SELFTEST) += selftests/
CFLAGS_drm_trace_points.o := -I$(src)
......
......@@ -508,7 +508,7 @@ amdgpu_framebuffer_init(struct drm_device *dev,
{
int ret;
rfb->obj = obj;
drm_helper_mode_fill_fb_struct(&rfb->base, mode_cmd);
drm_helper_mode_fill_fb_struct(dev, &rfb->base, mode_cmd);
ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs);
if (ret) {
rfb->obj = NULL;
......
......@@ -245,7 +245,7 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
strcpy(info->fix.id, "amdgpudrmfb");
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
info->flags = FBINFO_DEFAULT | FBINFO_CAN_FORCE_OUTPUT;
info->fbops = &amdgpufb_ops;
......@@ -272,7 +272,7 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
DRM_INFO("fb mappable at 0x%lX\n", info->fix.smem_start);
DRM_INFO("vram apper at 0x%lX\n", (unsigned long)adev->mc.aper_base);
DRM_INFO("size %lu\n", (unsigned long)amdgpu_bo_size(abo));
DRM_INFO("fb depth is %d\n", fb->depth);
DRM_INFO("fb depth is %d\n", fb->format->depth);
DRM_INFO(" pitch is %d\n", fb->pitches[0]);
vga_switcheroo_client_fb_set(adev->ddev->pdev, info);
......
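The driver conversions in the hunks above and below all follow the same
mechanical substitution from the format rework; a sketch of the equivalence
(the helper names are ours, not part of the API):

	#include <drm/drm_framebuffer.h>
	#include <drm/drm_fourcc.h>

	/* was: fb->pixel_format */
	static inline u32 fb_fourcc(const struct drm_framebuffer *fb)
	{
		return fb->format->format;
	}

	/* was: fb->bits_per_pixel */
	static inline unsigned int fb_bpp(const struct drm_framebuffer *fb)
	{
		return fb->format->cpp[0] * 8;
	}

	/* was: fb->depth */
	static inline unsigned int fb_depth(const struct drm_framebuffer *fb)
	{
		return fb->format->depth;
	}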
......@@ -61,10 +61,8 @@ static void amdgpu_hotplug_work_func(struct work_struct *work)
struct drm_connector *connector;
mutex_lock(&mode_config->mutex);
if (mode_config->num_connector) {
list_for_each_entry(connector, &mode_config->connector_list, head)
amdgpu_connector_hotplug(connector);
}
list_for_each_entry(connector, &mode_config->connector_list, head)
amdgpu_connector_hotplug(connector);
mutex_unlock(&mode_config->mutex);
/* Just fire off a uevent and let userspace tell us what to do */
drm_helper_hpd_irq_event(dev);
......
......@@ -32,6 +32,7 @@
#include <drm/drm_crtc.h>
#include <drm/drm_edid.h>
#include <drm/drm_encoder.h>
#include <drm/drm_dp_helper.h>
#include <drm/drm_fixed.h>
#include <drm/drm_crtc_helper.h>
......
......@@ -2072,7 +2072,7 @@ static int dce_v10_0_crtc_do_set_base(struct drm_crtc *crtc,
pipe_config = AMDGPU_TILING_GET(tiling_flags, PIPE_CONFIG);
switch (target_fb->pixel_format) {
switch (target_fb->format->format) {
case DRM_FORMAT_C8:
fb_format = REG_SET_FIELD(0, GRPH_CONTROL, GRPH_DEPTH, 0);
fb_format = REG_SET_FIELD(fb_format, GRPH_CONTROL, GRPH_FORMAT, 0);
......@@ -2145,7 +2145,7 @@ static int dce_v10_0_crtc_do_set_base(struct drm_crtc *crtc,
break;
default:
DRM_ERROR("Unsupported screen format %s\n",
drm_get_format_name(target_fb->pixel_format, &format_name));
drm_get_format_name(target_fb->format->format, &format_name));
return -EINVAL;
}
......@@ -2220,7 +2220,7 @@ static int dce_v10_0_crtc_do_set_base(struct drm_crtc *crtc,
WREG32(mmGRPH_X_END + amdgpu_crtc->crtc_offset, target_fb->width);
WREG32(mmGRPH_Y_END + amdgpu_crtc->crtc_offset, target_fb->height);
fb_pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8);
fb_pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0];
WREG32(mmGRPH_PITCH + amdgpu_crtc->crtc_offset, fb_pitch_pixels);
dce_v10_0_grph_enable(crtc, true);
......
......@@ -2053,7 +2053,7 @@ static int dce_v11_0_crtc_do_set_base(struct drm_crtc *crtc,
pipe_config = AMDGPU_TILING_GET(tiling_flags, PIPE_CONFIG);
switch (target_fb->pixel_format) {
switch (target_fb->format->format) {
case DRM_FORMAT_C8:
fb_format = REG_SET_FIELD(0, GRPH_CONTROL, GRPH_DEPTH, 0);
fb_format = REG_SET_FIELD(fb_format, GRPH_CONTROL, GRPH_FORMAT, 0);
......@@ -2126,7 +2126,7 @@ static int dce_v11_0_crtc_do_set_base(struct drm_crtc *crtc,
break;
default:
DRM_ERROR("Unsupported screen format %s\n",
drm_get_format_name(target_fb->pixel_format, &format_name));
drm_get_format_name(target_fb->format->format, &format_name));
return -EINVAL;
}
......@@ -2201,7 +2201,7 @@ static int dce_v11_0_crtc_do_set_base(struct drm_crtc *crtc,
WREG32(mmGRPH_X_END + amdgpu_crtc->crtc_offset, target_fb->width);
WREG32(mmGRPH_Y_END + amdgpu_crtc->crtc_offset, target_fb->height);
fb_pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8);
fb_pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0];
WREG32(mmGRPH_PITCH + amdgpu_crtc->crtc_offset, fb_pitch_pixels);
dce_v11_0_grph_enable(crtc, true);
......
......@@ -1501,7 +1501,7 @@ static int dce_v6_0_crtc_do_set_base(struct drm_crtc *crtc,
amdgpu_bo_get_tiling_flags(abo, &tiling_flags);
amdgpu_bo_unreserve(abo);
switch (target_fb->pixel_format) {
switch (target_fb->format->format) {
case DRM_FORMAT_C8:
fb_format = (GRPH_DEPTH(GRPH_DEPTH_8BPP) |
GRPH_FORMAT(GRPH_FORMAT_INDEXED));
......@@ -1567,7 +1567,7 @@ static int dce_v6_0_crtc_do_set_base(struct drm_crtc *crtc,
break;
default:
DRM_ERROR("Unsupported screen format %s\n",
drm_get_format_name(target_fb->pixel_format, &format_name));
drm_get_format_name(target_fb->format->format, &format_name));
return -EINVAL;
}
......@@ -1630,7 +1630,7 @@ static int dce_v6_0_crtc_do_set_base(struct drm_crtc *crtc,
WREG32(mmGRPH_X_END + amdgpu_crtc->crtc_offset, target_fb->width);
WREG32(mmGRPH_Y_END + amdgpu_crtc->crtc_offset, target_fb->height);
fb_pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8);
fb_pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0];
WREG32(mmGRPH_PITCH + amdgpu_crtc->crtc_offset, fb_pitch_pixels);
dce_v6_0_grph_enable(crtc, true);
......
......@@ -1950,7 +1950,7 @@ static int dce_v8_0_crtc_do_set_base(struct drm_crtc *crtc,
pipe_config = AMDGPU_TILING_GET(tiling_flags, PIPE_CONFIG);
switch (target_fb->pixel_format) {
switch (target_fb->format->format) {
case DRM_FORMAT_C8:
fb_format = ((GRPH_DEPTH_8BPP << GRPH_CONTROL__GRPH_DEPTH__SHIFT) |
(GRPH_FORMAT_INDEXED << GRPH_CONTROL__GRPH_FORMAT__SHIFT));
......@@ -2016,7 +2016,7 @@ static int dce_v8_0_crtc_do_set_base(struct drm_crtc *crtc,
break;
default:
DRM_ERROR("Unsupported screen format %s\n",
drm_get_format_name(target_fb->pixel_format, &format_name));
drm_get_format_name(target_fb->format->format, &format_name));
return -EINVAL;
}
......@@ -2079,7 +2079,7 @@ static int dce_v8_0_crtc_do_set_base(struct drm_crtc *crtc,
WREG32(mmGRPH_X_END + amdgpu_crtc->crtc_offset, target_fb->width);
WREG32(mmGRPH_Y_END + amdgpu_crtc->crtc_offset, target_fb->height);
fb_pitch_pixels = target_fb->pitches[0] / (target_fb->bits_per_pixel / 8);
fb_pitch_pixels = target_fb->pitches[0] / target_fb->format->cpp[0];
WREG32(mmGRPH_PITCH + amdgpu_crtc->crtc_offset, fb_pitch_pixels);
dce_v8_0_grph_enable(crtc, true);
......
......@@ -35,7 +35,8 @@ static struct simplefb_format supported_formats[] = {
static void arc_pgu_set_pxl_fmt(struct drm_crtc *crtc)
{
struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc);
uint32_t pixel_format = crtc->primary->state->fb->pixel_format;
const struct drm_framebuffer *fb = crtc->primary->state->fb;
uint32_t pixel_format = fb->format->format;
struct simplefb_format *format = NULL;
int i;
......
......@@ -47,10 +47,7 @@ int arcpgu_drm_hdmi_init(struct drm_device *drm, struct device_node *np)
return ret;
/* Link drm_bridge to encoder */
bridge->encoder = encoder;
encoder->bridge = bridge;
ret = drm_bridge_attach(drm, bridge);
ret = drm_bridge_attach(encoder, bridge, NULL);
if (ret)
drm_encoder_cleanup(encoder);
......
......@@ -60,11 +60,12 @@ static int hdlcd_set_pxl_fmt(struct drm_crtc *crtc)
{
unsigned int btpp;
struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc);
const struct drm_framebuffer *fb = crtc->primary->state->fb;
uint32_t pixel_format;
struct simplefb_format *format = NULL;
int i;
pixel_format = crtc->primary->state->fb->pixel_format;
pixel_format = fb->format->format;
for (i = 0; i < ARRAY_SIZE(supported_formats); i++) {
if (supported_formats[i].fourcc == pixel_format)
......@@ -220,27 +221,28 @@ static int hdlcd_plane_atomic_check(struct drm_plane *plane,
static void hdlcd_plane_atomic_update(struct drm_plane *plane,
struct drm_plane_state *state)
{
struct drm_framebuffer *fb = plane->state->fb;
struct hdlcd_drm_private *hdlcd;
struct drm_gem_cma_object *gem;
u32 src_w, src_h, dest_w, dest_h;
dma_addr_t scanout_start;
if (!plane->state->fb)
if (!fb)
return;
src_w = plane->state->src_w >> 16;
src_h = plane->state->src_h >> 16;
dest_w = plane->state->crtc_w;
dest_h = plane->state->crtc_h;
gem = drm_fb_cma_get_gem_obj(plane->state->fb, 0);
scanout_start = gem->paddr + plane->state->fb->offsets[0] +
plane->state->crtc_y * plane->state->fb->pitches[0] +
gem = drm_fb_cma_get_gem_obj(fb, 0);
scanout_start = gem->paddr + fb->offsets[0] +
plane->state->crtc_y * fb->pitches[0] +
plane->state->crtc_x *
drm_format_plane_cpp(plane->state->fb->pixel_format, 0);
fb->format->cpp[0];
hdlcd = plane->dev->dev_private;
hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_LENGTH, plane->state->fb->pitches[0]);
hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_PITCH, plane->state->fb->pitches[0]);
hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_LENGTH, fb->pitches[0]);
hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_PITCH, fb->pitches[0]);
hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_COUNT, dest_h - 1);
hdlcd_write(hdlcd, HDLCD_REG_FB_BASE, scanout_start);
}
......
......@@ -112,11 +112,11 @@ static int malidp_de_plane_check(struct drm_plane *plane,
fb = state->fb;
ms->format = malidp_hw_get_format_id(&mp->hwdev->map, mp->layer->id,
fb->pixel_format);
fb->format->format);
if (ms->format == MALIDP_INVALID_FORMAT_ID)
return -EINVAL;
ms->n_planes = drm_format_num_planes(fb->pixel_format);
ms->n_planes = fb->format->num_planes;
for (i = 0; i < ms->n_planes; i++) {
if (!malidp_hw_pitch_valid(mp->hwdev, fb->pitches[i])) {
DRM_DEBUG_KMS("Invalid pitch %u for plane %d\n",
......@@ -137,8 +137,8 @@ static int malidp_de_plane_check(struct drm_plane *plane,
/* packed RGB888 / BGR888 can't be rotated or flipped */
if (state->rotation != DRM_ROTATE_0 &&
(state->fb->pixel_format == DRM_FORMAT_RGB888 ||
state->fb->pixel_format == DRM_FORMAT_BGR888))
(fb->format->format == DRM_FORMAT_RGB888 ||
fb->format->format == DRM_FORMAT_BGR888))
return -EINVAL;
ms->rotmem_size = 0;
......@@ -147,7 +147,7 @@ static int malidp_de_plane_check(struct drm_plane *plane,
val = mp->hwdev->rotmem_required(mp->hwdev, state->crtc_h,
state->crtc_w,
state->fb->pixel_format);
fb->format->format);
if (val < 0)
return val;
......
......@@ -169,8 +169,7 @@ void armada_drm_plane_calc_addrs(u32 *addrs, struct drm_framebuffer *fb,
int x, int y)
{
u32 addr = drm_fb_obj(fb)->dev_addr;
u32 pixel_format = fb->pixel_format;
int num_planes = drm_format_num_planes(pixel_format);
int num_planes = fb->format->num_planes;
int i;
if (num_planes > 3)
......@@ -178,7 +177,7 @@ void armada_drm_plane_calc_addrs(u32 *addrs, struct drm_framebuffer *fb,
for (i = 0; i < num_planes; i++)
addrs[i] = addr + fb->offsets[i] + y * fb->pitches[i] +
x * drm_format_plane_cpp(pixel_format, i);
x * fb->format->cpp[i];
for (; i < 3; i++)
addrs[i] = 0;
}
......@@ -191,7 +190,7 @@ static unsigned armada_drm_crtc_calc_fb(struct drm_framebuffer *fb,
unsigned i = 0;
DRM_DEBUG_DRIVER("pitch %u x %d y %d bpp %d\n",
pitch, x, y, fb->bits_per_pixel);
pitch, x, y, fb->format->cpp[0] * 8);
armada_drm_plane_calc_addrs(addrs, fb, x, y);
......@@ -1036,7 +1035,7 @@ static int armada_drm_crtc_page_flip(struct drm_crtc *crtc,
int ret;
/* We don't support changing the pixel format */
if (fb->pixel_format != crtc->primary->fb->pixel_format)
if (fb->format != crtc->primary->fb->format)
return -EINVAL;
work = kmalloc(sizeof(*work), GFP_KERNEL);
......
......@@ -81,7 +81,7 @@ struct armada_framebuffer *armada_framebuffer_create(struct drm_device *dev,
dfb->mod = config;
dfb->obj = obj;
drm_helper_mode_fill_fb_struct(&dfb->fb, mode);
drm_helper_mode_fill_fb_struct(dev, &dfb->fb, mode);
ret = drm_framebuffer_init(dev, &dfb->fb, &armada_fb_funcs);
if (ret) {
......
......@@ -89,11 +89,12 @@ static int armada_fb_create(struct drm_fb_helper *fbh,
info->screen_base = ptr;
fbh->fb = &dfb->fb;
drm_fb_helper_fill_fix(info, dfb->fb.pitches[0], dfb->fb.depth);
drm_fb_helper_fill_fix(info, dfb->fb.pitches[0],
dfb->fb.format->depth);
drm_fb_helper_fill_var(info, fbh, sizes->fb_width, sizes->fb_height);
DRM_DEBUG_KMS("allocated %dx%d %dbpp fb: 0x%08llx\n",
dfb->fb.width, dfb->fb.height, dfb->fb.bits_per_pixel,
dfb->fb.width, dfb->fb.height, dfb->fb.format->cpp[0] * 8,
(unsigned long long)obj->phys_addr);
return 0;
......
......@@ -186,9 +186,9 @@ armada_ovl_plane_update(struct drm_plane *plane, struct drm_crtc *crtc,
armada_drm_plane_calc_addrs(addrs, fb, src_x, src_y);
pixel_format = fb->pixel_format;
pixel_format = fb->format->format;
hsub = drm_format_horz_chroma_subsampling(pixel_format);
num_planes = drm_format_num_planes(pixel_format);
num_planes = fb->format->num_planes;
/*
* Annoyingly, shifting a YUYV-format image by one pixel
......
......@@ -28,6 +28,7 @@
#ifndef __AST_DRV_H__
#define __AST_DRV_H__
#include <drm/drm_encoder.h>
#include <drm/drm_fb_helper.h>
#include <drm/ttm/ttm_bo_api.h>
......
......@@ -49,7 +49,7 @@ static void ast_dirty_update(struct ast_fbdev *afbdev,
struct drm_gem_object *obj;
struct ast_bo *bo;
int src_offset, dst_offset;
int bpp = (afbdev->afb.base.bits_per_pixel + 7)/8;
int bpp = afbdev->afb.base.format->cpp[0];
int ret = -EBUSY;
bool unmap = false;
bool store_for_later = false;
......@@ -237,7 +237,7 @@ static int astfb_create(struct drm_fb_helper *helper,
info->apertures->ranges[0].base = pci_resource_start(dev->pdev, 0);
info->apertures->ranges[0].size = pci_resource_len(dev->pdev, 0);
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
drm_fb_helper_fill_var(info, &afbdev->helper, sizes->fb_width, sizes->fb_height);
info->screen_base = sysram;
......
......@@ -314,7 +314,7 @@ int ast_framebuffer_init(struct drm_device *dev,
{
int ret;
drm_helper_mode_fill_fb_struct(&ast_fb->base, mode_cmd);
drm_helper_mode_fill_fb_struct(dev, &ast_fb->base, mode_cmd);
ast_fb->obj = obj;
ret = drm_framebuffer_init(dev, &ast_fb->base, &ast_fb_funcs);
if (ret) {
......
......@@ -79,12 +79,13 @@ static bool ast_get_vbios_mode_info(struct drm_crtc *crtc, struct drm_display_mo
struct ast_vbios_mode_info *vbios_mode)
{
struct ast_private *ast = crtc->dev->dev_private;
const struct drm_framebuffer *fb = crtc->primary->fb;
u32 refresh_rate_index = 0, mode_id, color_index, refresh_rate;
u32 hborder, vborder;
bool check_sync;
struct ast_vbios_enhtable *best = NULL;
switch (crtc->primary->fb->bits_per_pixel) {
switch (fb->format->cpp[0] * 8) {
case 8:
vbios_mode->std_table = &vbios_stdtable[VGAModeIndex];
color_index = VGAModeIndex - 1;
......@@ -207,7 +208,8 @@ static bool ast_get_vbios_mode_info(struct drm_crtc *crtc, struct drm_display_mo
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x91, 0x00);
if (vbios_mode->enh_table->flags & NewModeInfo) {
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x91, 0xa8);
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x92, crtc->primary->fb->bits_per_pixel);
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x92,
fb->format->cpp[0] * 8);
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x93, adjusted_mode->clock / 1000);
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x94, adjusted_mode->crtc_hdisplay);
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x95, adjusted_mode->crtc_hdisplay >> 8);
......@@ -369,10 +371,11 @@ static void ast_set_crtc_reg(struct drm_crtc *crtc, struct drm_display_mode *mod
static void ast_set_offset_reg(struct drm_crtc *crtc)
{
struct ast_private *ast = crtc->dev->dev_private;
const struct drm_framebuffer *fb = crtc->primary->fb;
u16 offset;
offset = crtc->primary->fb->pitches[0] >> 3;
offset = fb->pitches[0] >> 3;
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x13, (offset & 0xff));
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xb0, (offset >> 8) & 0x3f);
}
......@@ -395,9 +398,10 @@ static void ast_set_ext_reg(struct drm_crtc *crtc, struct drm_display_mode *mode
struct ast_vbios_mode_info *vbios_mode)
{
struct ast_private *ast = crtc->dev->dev_private;
const struct drm_framebuffer *fb = crtc->primary->fb;
u8 jregA0 = 0, jregA3 = 0, jregA8 = 0;
switch (crtc->primary->fb->bits_per_pixel) {
switch (fb->format->cpp[0] * 8) {
case 8:
jregA0 = 0x70;
jregA3 = 0x01;
......@@ -452,7 +456,9 @@ static void ast_set_sync_reg(struct drm_device *dev, struct drm_display_mode *mo
static bool ast_set_dac_reg(struct drm_crtc *crtc, struct drm_display_mode *mode,
struct ast_vbios_mode_info *vbios_mode)
{
switch (crtc->primary->fb->bits_per_pixel) {
const struct drm_framebuffer *fb = crtc->primary->fb;
switch (fb->format->cpp[0] * 8) {
case 8:
break;
default:
......
......@@ -446,7 +446,7 @@ void atmel_hlcdc_layer_update_set_fb(struct atmel_hlcdc_layer *layer,
return;
if (fb)
nplanes = drm_format_num_planes(fb->pixel_format);
nplanes = fb->format->num_planes;
if (nplanes > layer->max_planes)
return;
......
......@@ -230,9 +230,7 @@ static int atmel_hlcdc_attach_endpoint(struct drm_device *dev,
of_node_put(np);
if (bridge) {
output->encoder.bridge = bridge;
bridge->encoder = &output->encoder;
ret = drm_bridge_attach(dev, bridge);
ret = drm_bridge_attach(&output->encoder, bridge, NULL);
if (!ret)
return 0;
}
......
......@@ -356,7 +356,7 @@ atmel_hlcdc_plane_update_general_settings(struct atmel_hlcdc_plane *plane,
cfg |= ATMEL_HLCDC_LAYER_OVR | ATMEL_HLCDC_LAYER_ITER2BL |
ATMEL_HLCDC_LAYER_ITER;
if (atmel_hlcdc_format_embeds_alpha(state->base.fb->pixel_format))
if (atmel_hlcdc_format_embeds_alpha(state->base.fb->format->format))
cfg |= ATMEL_HLCDC_LAYER_LAEN;
else
cfg |= ATMEL_HLCDC_LAYER_GAEN |
......@@ -386,13 +386,13 @@ static void atmel_hlcdc_plane_update_format(struct atmel_hlcdc_plane *plane,
u32 cfg;
int ret;
ret = atmel_hlcdc_format_to_plane_mode(state->base.fb->pixel_format,
ret = atmel_hlcdc_format_to_plane_mode(state->base.fb->format->format,
&cfg);
if (ret)
return;
if ((state->base.fb->pixel_format == DRM_FORMAT_YUV422 ||
state->base.fb->pixel_format == DRM_FORMAT_NV61) &&
if ((state->base.fb->format->format == DRM_FORMAT_YUV422 ||
state->base.fb->format->format == DRM_FORMAT_NV61) &&
drm_rotation_90_or_270(state->base.rotation))
cfg |= ATMEL_HLCDC_YUV422ROT;
......@@ -405,7 +405,7 @@ static void atmel_hlcdc_plane_update_format(struct atmel_hlcdc_plane *plane,
* Rotation optimization is not working on RGB888 (rotation is still
* working but without any optimization).
*/
if (state->base.fb->pixel_format == DRM_FORMAT_RGB888)
if (state->base.fb->format->format == DRM_FORMAT_RGB888)
cfg = ATMEL_HLCDC_LAYER_DMA_ROTDIS;
else
cfg = 0;
......@@ -514,7 +514,7 @@ atmel_hlcdc_plane_prepare_disc_area(struct drm_crtc_state *c_state)
ovl_state = drm_plane_state_to_atmel_hlcdc_plane_state(ovl_s);
if (!ovl_s->fb ||
atmel_hlcdc_format_embeds_alpha(ovl_s->fb->pixel_format) ||
atmel_hlcdc_format_embeds_alpha(ovl_s->fb->format->format) ||
ovl_state->alpha != 255)
continue;
......@@ -621,7 +621,7 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
state->src_w >>= 16;
state->src_h >>= 16;
state->nplanes = drm_format_num_planes(fb->pixel_format);
state->nplanes = fb->format->num_planes;
if (state->nplanes > ATMEL_HLCDC_MAX_PLANES)
return -EINVAL;
......@@ -664,15 +664,15 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
patched_src_h = DIV_ROUND_CLOSEST(patched_crtc_h * state->src_h,
state->crtc_h);
hsub = drm_format_horz_chroma_subsampling(fb->pixel_format);
vsub = drm_format_vert_chroma_subsampling(fb->pixel_format);
hsub = drm_format_horz_chroma_subsampling(fb->format->format);
vsub = drm_format_vert_chroma_subsampling(fb->format->format);
for (i = 0; i < state->nplanes; i++) {
unsigned int offset = 0;
int xdiv = i ? hsub : 1;
int ydiv = i ? vsub : 1;
state->bpp[i] = drm_format_plane_cpp(fb->pixel_format, i);
state->bpp[i] = fb->format->cpp[i];
if (!state->bpp[i])
return -EINVAL;
......@@ -741,7 +741,7 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
if ((state->crtc_h != state->src_h || state->crtc_w != state->src_w) &&
(!layout->memsize ||
atmel_hlcdc_format_embeds_alpha(state->base.fb->pixel_format)))
atmel_hlcdc_format_embeds_alpha(state->base.fb->format->format)))
return -EINVAL;
if (state->crtc_x < 0 || state->crtc_y < 0)
......
......@@ -4,6 +4,7 @@
#include <drm/drmP.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_encoder.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_gem.h>
......
......@@ -123,7 +123,7 @@ static int bochsfb_create(struct drm_fb_helper *helper,
info->flags = FBINFO_DEFAULT;
info->fbops = &bochsfb_ops;
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
drm_fb_helper_fill_var(info, &bochs->fb.helper, sizes->fb_width,
sizes->fb_height);
......
......@@ -484,7 +484,7 @@ int bochs_framebuffer_init(struct drm_device *dev,
{
int ret;
drm_helper_mode_fill_fb_struct(&gfb->base, mode_cmd);
drm_helper_mode_fill_fb_struct(dev, &gfb->base, mode_cmd);
gfb->obj = obj;
ret = drm_framebuffer_init(dev, &gfb->base, &bochs_fb_funcs);
if (ret) {
......
......@@ -133,6 +133,7 @@ int analogix_dp_disable_psr(struct device *dev)
{
struct analogix_dp_device *dp = dev_get_drvdata(dev);
struct edp_vsc_psr psr_vsc;
int ret;
if (!dp->psr_support)
return 0;
......@@ -147,6 +148,10 @@ int analogix_dp_disable_psr(struct device *dev)
psr_vsc.DB0 = 0;
psr_vsc.DB1 = 0;
ret = drm_dp_dpcd_writeb(&dp->aux, DP_SET_POWER, DP_SET_POWER_D0);
if (ret != 1)
dev_err(dp->dev, "Failed to set DP Power0 %d\n", ret);
analogix_dp_send_psr_spd(dp, &psr_vsc);
return 0;
}
......@@ -1227,12 +1232,10 @@ static int analogix_dp_create_bridge(struct drm_device *drm_dev,
dp->bridge = bridge;
dp->encoder->bridge = bridge;
bridge->driver_private = dp;
bridge->encoder = dp->encoder;
bridge->funcs = &analogix_dp_bridge_funcs;
ret = drm_bridge_attach(drm_dev, bridge);
ret = drm_bridge_attach(dp->encoder, bridge, NULL);
if (ret) {
DRM_ERROR("failed to attach drm bridge\n");
return -EINVAL;
......
......@@ -237,6 +237,7 @@ static int dumb_vga_remove(struct platform_device *pdev)
static const struct of_device_id dumb_vga_match[] = {
{ .compatible = "dumb-vga-dac" },
{ .compatible = "ti,ths8135" },
{},
};
MODULE_DEVICE_TABLE(of, dumb_vga_match);
......
......@@ -1841,13 +1841,12 @@ static int dw_hdmi_register(struct drm_device *drm, struct dw_hdmi *hdmi)
hdmi->bridge = bridge;
bridge->driver_private = hdmi;
bridge->funcs = &dw_hdmi_bridge_funcs;
ret = drm_bridge_attach(drm, bridge);
ret = drm_bridge_attach(encoder, bridge, NULL);
if (ret) {
DRM_ERROR("Failed to initialize bridge with drm\n");
return -EINVAL;
}
encoder->bridge = bridge;
hdmi->connector.polled = DRM_CONNECTOR_POLL_HPD;
drm_connector_helper_add(&hdmi->connector,
......
......@@ -13,6 +13,7 @@
#include <video/vga.h>
#include <drm/drm_encoder.h>
#include <drm/drm_fb_helper.h>
#include <drm/ttm/ttm_bo_api.h>
......
......@@ -22,7 +22,7 @@ static void cirrus_dirty_update(struct cirrus_fbdev *afbdev,
struct drm_gem_object *obj;
struct cirrus_bo *bo;
int src_offset, dst_offset;
int bpp = (afbdev->gfb.base.bits_per_pixel + 7)/8;
int bpp = afbdev->gfb.base.format->cpp[0];
int ret = -EBUSY;
bool unmap = false;
bool store_for_later = false;
......@@ -218,7 +218,7 @@ static int cirrusfb_create(struct drm_fb_helper *helper,
info->flags = FBINFO_DEFAULT;
info->fbops = &cirrusfb_ops;
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
drm_fb_helper_fill_var(info, &gfbdev->helper, sizes->fb_width,
sizes->fb_height);
......@@ -238,7 +238,7 @@ static int cirrusfb_create(struct drm_fb_helper *helper,
DRM_INFO("fb mappable at 0x%lX\n", info->fix.smem_start);
DRM_INFO("vram aper at 0x%lX\n", (unsigned long)info->fix.smem_start);
DRM_INFO("size %lu\n", (unsigned long)info->fix.smem_len);
DRM_INFO("fb depth is %d\n", fb->depth);
DRM_INFO("fb depth is %d\n", fb->format->depth);
DRM_INFO(" pitch is %d\n", fb->pitches[0]);
return 0;
......
......@@ -34,7 +34,7 @@ int cirrus_framebuffer_init(struct drm_device *dev,
{
int ret;
drm_helper_mode_fill_fb_struct(&gfb->base, mode_cmd);
drm_helper_mode_fill_fb_struct(dev, &gfb->base, mode_cmd);
gfb->obj = obj;
ret = drm_framebuffer_init(dev, &gfb->base, &cirrus_fb_funcs);
if (ret) {
......
......@@ -185,6 +185,7 @@ static int cirrus_crtc_mode_set(struct drm_crtc *crtc,
{
struct drm_device *dev = crtc->dev;
struct cirrus_device *cdev = dev->dev_private;
const struct drm_framebuffer *fb = crtc->primary->fb;
int hsyncstart, hsyncend, htotal, hdispend;
int vtotal, vdispend;
int tmp;
......@@ -257,7 +258,7 @@ static int cirrus_crtc_mode_set(struct drm_crtc *crtc,
sr07 = RREG8(SEQ_DATA);
sr07 &= 0xe0;
hdr = 0;
switch (crtc->primary->fb->bits_per_pixel) {
switch (fb->format->cpp[0] * 8) {
case 8:
sr07 |= 0x11;
break;
......@@ -280,13 +281,13 @@ static int cirrus_crtc_mode_set(struct drm_crtc *crtc,
WREG_SEQ(0x7, sr07);
/* Program the pitch */
tmp = crtc->primary->fb->pitches[0] / 8;
tmp = fb->pitches[0] / 8;
WREG_CRT(VGA_CRTC_OFFSET, tmp);
/* Enable extended blanking and pitch bits, and enable full memory */
tmp = 0x22;
tmp |= (crtc->primary->fb->pitches[0] >> 7) & 0x10;
tmp |= (crtc->primary->fb->pitches[0] >> 6) & 0x40;
tmp |= (fb->pitches[0] >> 7) & 0x10;
tmp |= (fb->pitches[0] >> 6) & 0x40;
WREG_CRT(0x1b, tmp);
/* Enable high-colour modes */
......
......@@ -902,11 +902,11 @@ static int drm_atomic_plane_check(struct drm_plane *plane,
}
/* Check whether this plane supports the fb pixel format. */
ret = drm_plane_check_pixel_format(plane, state->fb->pixel_format);
ret = drm_plane_check_pixel_format(plane, state->fb->format->format);
if (ret) {
struct drm_format_name_buf format_name;
DRM_DEBUG_ATOMIC("Invalid pixel format %s\n",
drm_get_format_name(state->fb->pixel_format,
drm_get_format_name(state->fb->format->format,
&format_name));
return ret;
}
......@@ -960,11 +960,11 @@ static void drm_atomic_plane_print_state(struct drm_printer *p,
drm_printf(p, "\tfb=%u\n", state->fb ? state->fb->base.id : 0);
if (state->fb) {
struct drm_framebuffer *fb = state->fb;
int i, n = drm_format_num_planes(fb->pixel_format);
int i, n = fb->format->num_planes;
struct drm_format_name_buf format_name;
drm_printf(p, "\t\tformat=%s\n",
drm_get_format_name(fb->pixel_format, &format_name));
drm_get_format_name(fb->format->format, &format_name));
drm_printf(p, "\t\t\tmodifier=0x%llx\n", fb->modifier);
drm_printf(p, "\t\tsize=%dx%d\n", fb->width, fb->height);
drm_printf(p, "\t\tlayers:\n");
......@@ -1417,6 +1417,7 @@ drm_atomic_add_affected_connectors(struct drm_atomic_state *state,
struct drm_mode_config *config = &state->dev->mode_config;
struct drm_connector *connector;
struct drm_connector_state *conn_state;
struct drm_connector_list_iter conn_iter;
int ret;
ret = drm_modeset_lock(&config->connection_mutex, state->acquire_ctx);
......@@ -1430,14 +1431,18 @@ drm_atomic_add_affected_connectors(struct drm_atomic_state *state,
* Changed connectors are already in @state, so only need to look at the
* current configuration.
*/
drm_for_each_connector(connector, state->dev) {
drm_connector_list_iter_get(state->dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (connector->state->crtc != crtc)
continue;
conn_state = drm_atomic_get_connector_state(state, connector);
if (IS_ERR(conn_state))
if (IS_ERR(conn_state)) {
drm_connector_list_iter_put(&conn_iter);
return PTR_ERR(conn_state);
}
}
drm_connector_list_iter_put(&conn_iter);
return 0;
}
......@@ -1692,6 +1697,7 @@ void drm_state_dump(struct drm_device *dev, struct drm_printer *p)
struct drm_plane *plane;
struct drm_crtc *crtc;
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
if (!drm_core_check_feature(dev, DRIVER_ATOMIC))
return;
......@@ -1702,8 +1708,10 @@ void drm_state_dump(struct drm_device *dev, struct drm_printer *p)
list_for_each_entry(crtc, &config->crtc_list, head)
drm_atomic_crtc_print_state(p, crtc->state);
list_for_each_entry(connector, &config->connector_list, head)
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter)
drm_atomic_connector_print_state(p, connector->state);
drm_connector_list_iter_put(&conn_iter);
}
EXPORT_SYMBOL(drm_state_dump);
......@@ -2195,10 +2203,6 @@ int drm_mode_atomic_ioctl(struct drm_device *dev,
goto out;
if (arg->flags & DRM_MODE_ATOMIC_TEST_ONLY) {
/*
* Unlike commit, check_only does not clean up state.
* Below we call drm_atomic_state_put for it.
*/
ret = drm_atomic_check_only(state);
} else if (arg->flags & DRM_MODE_ATOMIC_NONBLOCK) {
ret = drm_atomic_nonblocking_commit(state);
......
......@@ -94,9 +94,10 @@ static int handle_conflicting_encoders(struct drm_atomic_state *state,
{
struct drm_connector_state *conn_state;
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
struct drm_encoder *encoder;
unsigned encoder_mask = 0;
int i, ret;
int i, ret = 0;
/*
* First loop, find all newly assigned encoders from the connectors
......@@ -144,7 +145,8 @@ static int handle_conflicting_encoders(struct drm_atomic_state *state,
* and the crtc is disabled if no encoder is left. This preserves
* compatibility with the legacy set_config behavior.
*/
drm_for_each_connector(connector, state->dev) {
drm_connector_list_iter_get(state->dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
struct drm_crtc_state *crtc_state;
if (drm_atomic_get_existing_connector_state(state, connector))
......@@ -160,12 +162,15 @@ static int handle_conflicting_encoders(struct drm_atomic_state *state,
connector->state->crtc->base.id,
connector->state->crtc->name,
connector->base.id, connector->name);
return -EINVAL;
ret = -EINVAL;
goto out;
}
conn_state = drm_atomic_get_connector_state(state, connector);
if (IS_ERR(conn_state))
return PTR_ERR(conn_state);
if (IS_ERR(conn_state)) {
ret = PTR_ERR(conn_state);
goto out;
}
DRM_DEBUG_ATOMIC("[ENCODER:%d:%s] in use on [CRTC:%d:%s], disabling [CONNECTOR:%d:%s]\n",
encoder->base.id, encoder->name,
......@@ -176,19 +181,21 @@ static int handle_conflicting_encoders(struct drm_atomic_state *state,
ret = drm_atomic_set_crtc_for_connector(conn_state, NULL);
if (ret)
return ret;
goto out;
if (!crtc_state->connector_mask) {
ret = drm_atomic_set_mode_prop_for_crtc(crtc_state,
NULL);
if (ret < 0)
return ret;
goto out;
crtc_state->active = false;
}
}
out:
drm_connector_list_iter_put(&conn_iter);
return 0;
return ret;
}
static void
......@@ -1057,41 +1064,6 @@ int drm_atomic_helper_wait_for_fences(struct drm_device *dev,
}
EXPORT_SYMBOL(drm_atomic_helper_wait_for_fences);
/**
* drm_atomic_helper_framebuffer_changed - check if framebuffer has changed
* @dev: DRM device
* @old_state: atomic state object with old state structures
* @crtc: DRM crtc
*
* Checks whether the framebuffer used for this CRTC changes as a result of
* the atomic update. This is useful for drivers which cannot use
* drm_atomic_helper_wait_for_vblanks() and need to reimplement its
* functionality.
*
* Returns:
* true if the framebuffer changed.
*/
bool drm_atomic_helper_framebuffer_changed(struct drm_device *dev,
struct drm_atomic_state *old_state,
struct drm_crtc *crtc)
{
struct drm_plane *plane;
struct drm_plane_state *old_plane_state;
int i;
for_each_plane_in_state(old_state, plane, old_plane_state, i) {
if (plane->state->crtc != crtc &&
old_plane_state->crtc != crtc)
continue;
if (plane->state->fb != old_plane_state->fb)
return true;
}
return false;
}
EXPORT_SYMBOL(drm_atomic_helper_framebuffer_changed);
/**
* drm_atomic_helper_wait_for_vblanks - wait for vblank on crtcs
* @dev: DRM device
......@@ -1110,39 +1082,35 @@ drm_atomic_helper_wait_for_vblanks(struct drm_device *dev,
struct drm_crtc *crtc;
struct drm_crtc_state *old_crtc_state;
int i, ret;
unsigned crtc_mask = 0;
for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
/* No one cares about the old state, so abuse it for tracking
* and store whether we hold a vblank reference (and should do a
* vblank wait) in the ->enable boolean. */
old_crtc_state->enable = false;
if (!crtc->state->enable)
continue;
/*
* Legacy cursor ioctls are completely unsynced, and userspace
* relies on that (by doing tons of cursor updates).
*/
if (old_state->legacy_cursor_update)
return;
/* Legacy cursor ioctls are completely unsynced, and userspace
* relies on that (by doing tons of cursor updates). */
if (old_state->legacy_cursor_update)
continue;
for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
struct drm_crtc_state *new_crtc_state = crtc->state;
if (!drm_atomic_helper_framebuffer_changed(dev,
old_state, crtc))
if (!new_crtc_state->active || !new_crtc_state->planes_changed)
continue;
ret = drm_crtc_vblank_get(crtc);
if (ret != 0)
continue;
old_crtc_state->enable = true;
old_crtc_state->last_vblank_count = drm_crtc_vblank_count(crtc);
crtc_mask |= drm_crtc_mask(crtc);
old_state->crtcs[i].last_vblank_count = drm_crtc_vblank_count(crtc);
}
for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
if (!old_crtc_state->enable)
if (!(crtc_mask & drm_crtc_mask(crtc)))
continue;
ret = wait_event_timeout(dev->vblank[i].queue,
old_crtc_state->last_vblank_count !=
old_state->crtcs[i].last_vblank_count !=
drm_crtc_vblank_count(crtc),
msecs_to_jiffies(50));
......@@ -1664,9 +1632,6 @@ int drm_atomic_helper_prepare_planes(struct drm_device *dev,
funcs = plane->helper_private;
if (!drm_atomic_helper_framebuffer_changed(dev, state, plane_state->crtc))
continue;
if (funcs->prepare_fb) {
ret = funcs->prepare_fb(plane, plane_state);
if (ret)
......@@ -1683,9 +1648,6 @@ int drm_atomic_helper_prepare_planes(struct drm_device *dev,
if (j >= i)
continue;
if (!drm_atomic_helper_framebuffer_changed(dev, state, plane_state->crtc))
continue;
funcs = plane->helper_private;
if (funcs->cleanup_fb)
......@@ -1952,9 +1914,6 @@ void drm_atomic_helper_cleanup_planes(struct drm_device *dev,
for_each_plane_in_state(old_state, plane, plane_state, i) {
const struct drm_plane_helper_funcs *funcs;
if (!drm_atomic_helper_framebuffer_changed(dev, old_state, plane_state->crtc))
continue;
funcs = plane->helper_private;
if (funcs->cleanup_fb)
......@@ -2442,6 +2401,7 @@ int drm_atomic_helper_disable_all(struct drm_device *dev,
{
struct drm_atomic_state *state;
struct drm_connector *conn;
struct drm_connector_list_iter conn_iter;
int err;
state = drm_atomic_state_alloc(dev);
......@@ -2450,7 +2410,8 @@ int drm_atomic_helper_disable_all(struct drm_device *dev,
state->acquire_ctx = ctx;
drm_for_each_connector(conn, dev) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(conn, &conn_iter) {
struct drm_crtc *crtc = conn->state->crtc;
struct drm_crtc_state *crtc_state;
......@@ -2468,6 +2429,7 @@ int drm_atomic_helper_disable_all(struct drm_device *dev,
err = drm_atomic_commit(state);
free:
drm_connector_list_iter_put(&conn_iter);
drm_atomic_state_put(state);
return err;
}
......@@ -2840,6 +2802,7 @@ int drm_atomic_helper_connector_dpms(struct drm_connector *connector,
struct drm_crtc_state *crtc_state;
struct drm_crtc *crtc;
struct drm_connector *tmp_connector;
struct drm_connector_list_iter conn_iter;
int ret;
bool active = false;
int old_mode = connector->dpms;
......@@ -2867,7 +2830,8 @@ int drm_atomic_helper_connector_dpms(struct drm_connector *connector,
WARN_ON(!drm_modeset_is_locked(&config->connection_mutex));
drm_for_each_connector(tmp_connector, connector->dev) {
drm_connector_list_iter_get(connector->dev, &conn_iter);
drm_for_each_connector_iter(tmp_connector, &conn_iter) {
if (tmp_connector->state->crtc != crtc)
continue;
......@@ -2876,6 +2840,7 @@ int drm_atomic_helper_connector_dpms(struct drm_connector *connector,
break;
}
}
drm_connector_list_iter_put(&conn_iter);
crtc_state->active = active;
ret = drm_atomic_commit(state);
......@@ -3253,6 +3218,7 @@ drm_atomic_helper_duplicate_state(struct drm_device *dev,
{
struct drm_atomic_state *state;
struct drm_connector *conn;
struct drm_connector_list_iter conn_iter;
struct drm_plane *plane;
struct drm_crtc *crtc;
int err = 0;
......@@ -3283,15 +3249,18 @@ drm_atomic_helper_duplicate_state(struct drm_device *dev,
}
}
drm_for_each_connector(conn, dev) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(conn, &conn_iter) {
struct drm_connector_state *conn_state;
conn_state = drm_atomic_get_connector_state(state, conn);
if (IS_ERR(conn_state)) {
err = PTR_ERR(conn_state);
drm_connector_list_iter_put(&conn_iter);
goto free;
}
}
drm_connector_list_iter_put(&conn_iter);
/* clear the acquire context so that it isn't accidentally reused */
state->acquire_ctx = NULL;
......
......@@ -26,6 +26,9 @@
#include <linux/mutex.h>
#include <drm/drm_bridge.h>
#include <drm/drm_encoder.h>
#include "drm_crtc_internal.h"
/**
* DOC: overview
......@@ -92,47 +95,58 @@ void drm_bridge_remove(struct drm_bridge *bridge)
EXPORT_SYMBOL(drm_bridge_remove);
/**
* drm_bridge_attach - associate given bridge to our DRM device
* drm_bridge_attach - attach the bridge to an encoder's chain
*
* @dev: DRM device
* @bridge: bridge control structure
* @encoder: DRM encoder
* @bridge: bridge to attach
* @previous: previous bridge in the chain (optional)
*
* Called by a kms driver to link one of our encoder/bridge to the given
* bridge.
* Called by a kms driver to link the bridge to an encoder's chain. The previous
* argument specifies the previous bridge in the chain. If NULL, the bridge is
* linked directly at the encoder's output. Otherwise it is linked at the
* previous bridge's output.
*
* Note that setting up links between the bridge and our encoder/bridge
* objects needs to be handled by the kms driver itself.
* If non-NULL, the previous bridge must already have been attached by a call
* to this function.
*
* RETURNS:
* Zero on success, error code on failure
*/
int drm_bridge_attach(struct drm_device *dev, struct drm_bridge *bridge)
int drm_bridge_attach(struct drm_encoder *encoder, struct drm_bridge *bridge,
struct drm_bridge *previous)
{
if (!dev || !bridge)
int ret;
if (!encoder || !bridge)
return -EINVAL;
if (previous && (!previous->dev || previous->encoder != encoder))
return -EINVAL;
if (bridge->dev)
return -EBUSY;
bridge->dev = dev;
bridge->dev = encoder->dev;
bridge->encoder = encoder;
if (bridge->funcs->attach) {
ret = bridge->funcs->attach(bridge);
if (ret < 0) {
bridge->dev = NULL;
bridge->encoder = NULL;
return ret;
}
}
if (bridge->funcs->attach)
return bridge->funcs->attach(bridge);
if (previous)
previous->next = bridge;
else
encoder->bridge = bridge;
return 0;
}
EXPORT_SYMBOL(drm_bridge_attach);
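As a hedged illustration of the new signature (the bridge pointers and error handling here are hypothetical), a KMS driver chaining two bridges behind an encoder might do:

	int ret;

	/* Attach the first bridge directly at the encoder's output. */
	ret = drm_bridge_attach(encoder, first_bridge, NULL);
	if (ret)
		return ret;

	/* Attach the second bridge at the first bridge's output. */
	ret = drm_bridge_attach(encoder, second_bridge, first_bridge);
	if (ret)
		return ret;

Note that the helper now records the links in encoder->bridge (or previous->next) itself, so drivers no longer set those pointers by hand, as the exynos and simple-pipe conversions further down show.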
/**
* drm_bridge_detach - deassociate given bridge from its DRM device
*
* @bridge: bridge control structure
*
* Called by a kms driver to unlink the given bridge from its DRM device.
*
* Note that tearing down links between the bridge and our encoder/bridge
* objects needs to be handled by the kms driver itself.
*/
void drm_bridge_detach(struct drm_bridge *bridge)
{
if (WARN_ON(!bridge))
......@@ -146,7 +160,6 @@ void drm_bridge_detach(struct drm_bridge *bridge)
bridge->dev = NULL;
}
EXPORT_SYMBOL(drm_bridge_detach);
/**
* DOC: bridge callbacks
......
......@@ -23,6 +23,7 @@
#include <drm/drmP.h>
#include <drm/drm_connector.h>
#include <drm/drm_edid.h>
#include <drm/drm_encoder.h>
#include "drm_crtc_internal.h"
#include "drm_internal.h"
......@@ -189,13 +190,11 @@ int drm_connector_init(struct drm_device *dev,
struct ida *connector_ida =
&drm_connector_enum_list[connector_type].ida;
drm_modeset_lock_all(dev);
ret = drm_mode_object_get_reg(dev, &connector->base,
DRM_MODE_OBJECT_CONNECTOR,
false, drm_connector_free);
if (ret)
goto out_unlock;
return ret;
connector->base.properties = &connector->properties;
connector->dev = dev;
......@@ -225,6 +224,7 @@ int drm_connector_init(struct drm_device *dev,
INIT_LIST_HEAD(&connector->probed_modes);
INIT_LIST_HEAD(&connector->modes);
mutex_init(&connector->mutex);
connector->edid_blob_ptr = NULL;
connector->status = connector_status_unknown;
......@@ -232,8 +232,10 @@ int drm_connector_init(struct drm_device *dev,
/* We should add connectors at the end to avoid upsetting the connector
* index too much. */
spin_lock_irq(&config->connector_list_lock);
list_add_tail(&connector->head, &config->connector_list);
config->num_connector++;
spin_unlock_irq(&config->connector_list_lock);
if (connector_type != DRM_MODE_CONNECTOR_VIRTUAL)
drm_object_attach_property(&connector->base,
......@@ -258,9 +260,6 @@ int drm_connector_init(struct drm_device *dev,
if (ret)
drm_mode_object_unregister(dev, &connector->base);
out_unlock:
drm_modeset_unlock_all(dev);
return ret;
}
EXPORT_SYMBOL(drm_connector_init);
......@@ -351,14 +350,18 @@ void drm_connector_cleanup(struct drm_connector *connector)
drm_mode_object_unregister(dev, &connector->base);
kfree(connector->name);
connector->name = NULL;
spin_lock_irq(&dev->mode_config.connector_list_lock);
list_del(&connector->head);
dev->mode_config.num_connector--;
spin_unlock_irq(&dev->mode_config.connector_list_lock);
WARN_ON(connector->state && !connector->funcs->atomic_destroy_state);
if (connector->state && connector->funcs->atomic_destroy_state)
connector->funcs->atomic_destroy_state(connector,
connector->state);
mutex_destroy(&connector->mutex);
memset(connector, 0, sizeof(*connector));
}
EXPORT_SYMBOL(drm_connector_cleanup);
......@@ -374,14 +377,15 @@ EXPORT_SYMBOL(drm_connector_cleanup);
*/
int drm_connector_register(struct drm_connector *connector)
{
int ret;
int ret = 0;
mutex_lock(&connector->mutex);
if (connector->registered)
return 0;
goto unlock;
ret = drm_sysfs_connector_add(connector);
if (ret)
return ret;
goto unlock;
ret = drm_debugfs_connector_add(connector);
if (ret) {
......@@ -397,12 +401,14 @@ int drm_connector_register(struct drm_connector *connector)
drm_mode_object_register(connector->dev, &connector->base);
connector->registered = true;
return 0;
goto unlock;
err_debugfs:
drm_debugfs_connector_remove(connector);
err_sysfs:
drm_sysfs_connector_remove(connector);
unlock:
mutex_unlock(&connector->mutex);
return ret;
}
EXPORT_SYMBOL(drm_connector_register);
......@@ -415,8 +421,11 @@ EXPORT_SYMBOL(drm_connector_register);
*/
void drm_connector_unregister(struct drm_connector *connector)
{
if (!connector->registered)
mutex_lock(&connector->mutex);
if (!connector->registered) {
mutex_unlock(&connector->mutex);
return;
}
if (connector->funcs->early_unregister)
connector->funcs->early_unregister(connector);
......@@ -425,36 +434,37 @@ void drm_connector_unregister(struct drm_connector *connector)
drm_debugfs_connector_remove(connector);
connector->registered = false;
mutex_unlock(&connector->mutex);
}
EXPORT_SYMBOL(drm_connector_unregister);
void drm_connector_unregister_all(struct drm_device *dev)
{
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
/* FIXME: taking the mode config mutex ends up in a clash with sysfs */
list_for_each_entry(connector, &dev->mode_config.connector_list, head)
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter)
drm_connector_unregister(connector);
drm_connector_list_iter_put(&conn_iter);
}
int drm_connector_register_all(struct drm_device *dev)
{
struct drm_connector *connector;
int ret;
struct drm_connector_list_iter conn_iter;
int ret = 0;
/* FIXME: taking the mode config mutex ends up in a clash with
* fbcon/backlight registration */
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
ret = drm_connector_register(connector);
if (ret)
goto err;
break;
}
drm_connector_list_iter_put(&conn_iter);
return 0;
err:
mutex_unlock(&dev->mode_config.mutex);
drm_connector_unregister_all(dev);
if (ret)
drm_connector_unregister_all(dev);
return ret;
}
......@@ -476,6 +486,87 @@ const char *drm_get_connector_status_name(enum drm_connector_status status)
}
EXPORT_SYMBOL(drm_get_connector_status_name);
#ifdef CONFIG_LOCKDEP
static struct lockdep_map connector_list_iter_dep_map = {
.name = "drm_connector_list_iter"
};
#endif
/**
* drm_connector_list_iter_get - initialize a connector_list iterator
* @dev: DRM device
* @iter: connector_list iterator
*
* Sets @iter up to walk the connector list in &drm_mode_config of @dev. @iter
* must always be cleaned up again by calling drm_connector_list_iter_put().
* Iteration itself happens using drm_connector_list_iter_next() or
* drm_for_each_connector_iter().
*/
void drm_connector_list_iter_get(struct drm_device *dev,
struct drm_connector_list_iter *iter)
{
iter->dev = dev;
iter->conn = NULL;
lock_acquire_shared_recursive(&connector_list_iter_dep_map, 0, 1, NULL, _RET_IP_);
}
EXPORT_SYMBOL(drm_connector_list_iter_get);
/**
* drm_connector_list_iter_next - return next connector
* @iter: connector_list iterator
*
* Returns the next connector for @iter, or NULL when the list walk has
* completed.
*/
struct drm_connector *
drm_connector_list_iter_next(struct drm_connector_list_iter *iter)
{
struct drm_connector *old_conn = iter->conn;
struct drm_mode_config *config = &iter->dev->mode_config;
struct list_head *lhead;
unsigned long flags;
spin_lock_irqsave(&config->connector_list_lock, flags);
lhead = old_conn ? &old_conn->head : &config->connector_list;
do {
if (lhead->next == &config->connector_list) {
iter->conn = NULL;
break;
}
lhead = lhead->next;
iter->conn = list_entry(lhead, struct drm_connector, head);
/* loop until it's not a zombie connector */
} while (!kref_get_unless_zero(&iter->conn->base.refcount));
spin_unlock_irqrestore(&config->connector_list_lock, flags);
if (old_conn)
drm_connector_unreference(old_conn);
return iter->conn;
}
EXPORT_SYMBOL(drm_connector_list_iter_next);
/**
* drm_connector_list_iter_put - tear down a connector_list iterator
* @iter: connector_list iterator
*
* Tears down @iter and releases any resources (like &drm_connector references)
* acquired while walking the list. This must always be called, whether the
* iteration completed fully or was aborted without walking the entire
* list.
*/
void drm_connector_list_iter_put(struct drm_connector_list_iter *iter)
{
iter->dev = NULL;
if (iter->conn)
drm_connector_unreference(iter->conn);
lock_release(&connector_list_iter_dep_map, 0, _RET_IP_);
}
EXPORT_SYMBOL(drm_connector_list_iter_put);
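A minimal usage sketch of the new iterator API (the connected-status test is only for illustration):

	struct drm_connector_list_iter conn_iter;
	struct drm_connector *connector;

	drm_connector_list_iter_get(dev, &conn_iter);
	drm_for_each_connector_iter(connector, &conn_iter) {
		/* Each connector is protected by a reference for the
		 * duration of the loop body; breaking out early is fine. */
		if (connector->status == connector_status_connected)
			break;
	}
	/* Must always be called, even after an early break. */
	drm_connector_list_iter_put(&conn_iter);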
static const struct drm_prop_enum_list drm_subpixel_enum_list[] = {
{ SubPixelUnknown, "Unknown" },
{ SubPixelHorizontalRGB, "Horizontal RGB" },
......@@ -1072,43 +1163,65 @@ int drm_mode_getconnector(struct drm_device *dev, void *data,
memset(&u_mode, 0, sizeof(struct drm_mode_modeinfo));
mutex_lock(&dev->mode_config.mutex);
connector = drm_connector_lookup(dev, out_resp->connector_id);
if (!connector) {
ret = -ENOENT;
goto out_unlock;
}
if (!connector)
return -ENOENT;
drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
encoder = drm_connector_get_encoder(connector);
if (encoder)
out_resp->encoder_id = encoder->base.id;
else
out_resp->encoder_id = 0;
ret = drm_mode_object_get_properties(&connector->base, file_priv->atomic,
(uint32_t __user *)(unsigned long)(out_resp->props_ptr),
(uint64_t __user *)(unsigned long)(out_resp->prop_values_ptr),
&out_resp->count_props);
drm_modeset_unlock(&dev->mode_config.connection_mutex);
if (ret)
goto out_unref;
for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++)
if (connector->encoder_ids[i] != 0)
encoders_count++;
if ((out_resp->count_encoders >= encoders_count) && encoders_count) {
copied = 0;
encoder_ptr = (uint32_t __user *)(unsigned long)(out_resp->encoders_ptr);
for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++) {
if (connector->encoder_ids[i] != 0) {
if (put_user(connector->encoder_ids[i],
encoder_ptr + copied)) {
ret = -EFAULT;
goto out_unref;
}
copied++;
}
}
}
out_resp->count_encoders = encoders_count;
out_resp->connector_id = connector->base.id;
out_resp->connector_type = connector->connector_type;
out_resp->connector_type_id = connector->connector_type_id;
mutex_lock(&dev->mode_config.mutex);
if (out_resp->count_modes == 0) {
connector->funcs->fill_modes(connector,
dev->mode_config.max_width,
dev->mode_config.max_height);
}
/* delayed so we get modes regardless of pre-fill_modes state */
list_for_each_entry(mode, &connector->modes, head)
if (drm_mode_expose_to_userspace(mode, file_priv))
mode_count++;
out_resp->connector_id = connector->base.id;
out_resp->connector_type = connector->connector_type;
out_resp->connector_type_id = connector->connector_type_id;
out_resp->mm_width = connector->display_info.width_mm;
out_resp->mm_height = connector->display_info.height_mm;
out_resp->subpixel = connector->display_info.subpixel_order;
out_resp->connection = connector->status;
drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
encoder = drm_connector_get_encoder(connector);
if (encoder)
out_resp->encoder_id = encoder->base.id;
else
out_resp->encoder_id = 0;
/* delayed so we get modes regardless of pre-fill_modes state */
list_for_each_entry(mode, &connector->modes, head)
if (drm_mode_expose_to_userspace(mode, file_priv))
mode_count++;
/*
* This ioctl is called twice, once to determine how much space is
......@@ -1131,36 +1244,10 @@ int drm_mode_getconnector(struct drm_device *dev, void *data,
}
}
out_resp->count_modes = mode_count;
ret = drm_mode_object_get_properties(&connector->base, file_priv->atomic,
(uint32_t __user *)(unsigned long)(out_resp->props_ptr),
(uint64_t __user *)(unsigned long)(out_resp->prop_values_ptr),
&out_resp->count_props);
if (ret)
goto out;
if ((out_resp->count_encoders >= encoders_count) && encoders_count) {
copied = 0;
encoder_ptr = (uint32_t __user *)(unsigned long)(out_resp->encoders_ptr);
for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++) {
if (connector->encoder_ids[i] != 0) {
if (put_user(connector->encoder_ids[i],
encoder_ptr + copied)) {
ret = -EFAULT;
goto out;
}
copied++;
}
}
}
out_resp->count_encoders = encoders_count;
out:
drm_modeset_unlock(&dev->mode_config.connection_mutex);
drm_connector_unreference(connector);
out_unlock:
mutex_unlock(&dev->mode_config.mutex);
out_unref:
drm_connector_unreference(connector);
return ret;
}
......
......@@ -357,7 +357,10 @@ int drm_mode_getcrtc(struct drm_device *dev,
drm_modeset_lock_crtc(crtc, crtc->primary);
crtc_resp->gamma_size = crtc->gamma_size;
if (crtc->primary->fb)
if (crtc->primary->state && crtc->primary->state->fb)
crtc_resp->fb_id = crtc->primary->state->fb->base.id;
else if (!crtc->primary->state && crtc->primary->fb)
crtc_resp->fb_id = crtc->primary->fb->base.id;
else
crtc_resp->fb_id = 0;
......@@ -572,11 +575,11 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
*/
if (!crtc->primary->format_default) {
ret = drm_plane_check_pixel_format(crtc->primary,
fb->pixel_format);
fb->format->format);
if (ret) {
struct drm_format_name_buf format_name;
DRM_DEBUG_KMS("Invalid pixel format %s\n",
drm_get_format_name(fb->pixel_format,
drm_get_format_name(fb->format->format,
&format_name));
goto out;
}
......
......@@ -36,6 +36,7 @@
#include <drm/drmP.h>
#include <drm/drm_atomic.h>
#include <drm/drm_crtc.h>
#include <drm/drm_encoder.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_helper.h>
......@@ -88,6 +89,7 @@
bool drm_helper_encoder_in_use(struct drm_encoder *encoder)
{
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
struct drm_device *dev = encoder->dev;
/*
......@@ -99,9 +101,15 @@ bool drm_helper_encoder_in_use(struct drm_encoder *encoder)
WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));
}
drm_for_each_connector(connector, dev)
if (connector->encoder == encoder)
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (connector->encoder == encoder) {
drm_connector_list_iter_put(&conn_iter);
return true;
}
}
drm_connector_list_iter_put(&conn_iter);
return false;
}
EXPORT_SYMBOL(drm_helper_encoder_in_use);
......@@ -436,10 +444,13 @@ drm_crtc_helper_disable(struct drm_crtc *crtc)
/* Decouple all encoders and their attached connectors from this crtc */
drm_for_each_encoder(encoder, dev) {
struct drm_connector_list_iter conn_iter;
if (encoder->crtc != crtc)
continue;
drm_for_each_connector(connector, dev) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (connector->encoder != encoder)
continue;
......@@ -456,6 +467,7 @@ drm_crtc_helper_disable(struct drm_crtc *crtc)
/* we keep a reference while the encoder is bound */
drm_connector_unreference(connector);
}
drm_connector_list_iter_put(&conn_iter);
}
__drm_helper_disable_unused_functions(dev);
......@@ -507,6 +519,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
bool mode_changed = false; /* if true do a full mode set */
bool fb_changed = false; /* if true and !mode_changed just do a flip */
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
int count = 0, ro, fail = 0;
const struct drm_crtc_helper_funcs *crtc_funcs;
struct drm_mode_set save_set;
......@@ -571,9 +584,10 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
}
count = 0;
drm_for_each_connector(connector, dev) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter)
save_connector_encoders[count++] = connector->encoder;
}
drm_connector_list_iter_put(&conn_iter);
save_set.crtc = set->crtc;
save_set.mode = &set->crtc->mode;
......@@ -588,8 +602,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
if (set->crtc->primary->fb == NULL) {
DRM_DEBUG_KMS("crtc has no fb, full mode set\n");
mode_changed = true;
} else if (set->fb->pixel_format !=
set->crtc->primary->fb->pixel_format) {
} else if (set->fb->format != set->crtc->primary->fb->format) {
mode_changed = true;
} else
fb_changed = true;
......@@ -616,7 +629,8 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
/* a) traverse passed in connector list and get encoders for them */
count = 0;
drm_for_each_connector(connector, dev) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
const struct drm_connector_helper_funcs *connector_funcs =
connector->helper_private;
new_encoder = connector->encoder;
......@@ -649,6 +663,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
connector->encoder = new_encoder;
}
}
drm_connector_list_iter_put(&conn_iter);
if (fail) {
ret = -EINVAL;
......@@ -656,7 +671,8 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
}
count = 0;
drm_for_each_connector(connector, dev) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (!connector->encoder)
continue;
......@@ -674,6 +690,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
if (new_crtc &&
!drm_encoder_crtc_ok(connector->encoder, new_crtc)) {
ret = -EINVAL;
drm_connector_list_iter_put(&conn_iter);
goto fail;
}
if (new_crtc != connector->encoder->crtc) {
......@@ -690,6 +707,7 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
connector->base.id, connector->name);
}
}
drm_connector_list_iter_put(&conn_iter);
/* mode_set_base is not a required function */
if (fb_changed && !crtc_funcs->mode_set_base)
......@@ -744,9 +762,10 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
}
count = 0;
drm_for_each_connector(connector, dev) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter)
connector->encoder = save_connector_encoders[count++];
}
drm_connector_list_iter_put(&conn_iter);
/* after fail drop reference on all unbound connectors in set, let
* bound connectors keep their reference
......@@ -773,12 +792,16 @@ static int drm_helper_choose_encoder_dpms(struct drm_encoder *encoder)
{
int dpms = DRM_MODE_DPMS_OFF;
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
struct drm_device *dev = encoder->dev;
drm_for_each_connector(connector, dev)
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter)
if (connector->encoder == encoder)
if (connector->dpms < dpms)
dpms = connector->dpms;
drm_connector_list_iter_put(&conn_iter);
return dpms;
}
......@@ -810,12 +833,16 @@ static int drm_helper_choose_crtc_dpms(struct drm_crtc *crtc)
{
int dpms = DRM_MODE_DPMS_OFF;
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
struct drm_device *dev = crtc->dev;
drm_for_each_connector(connector, dev)
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter)
if (connector->encoder && connector->encoder->crtc == crtc)
if (connector->dpms < dpms)
dpms = connector->dpms;
drm_connector_list_iter_put(&conn_iter);
return dpms;
}
......
......@@ -174,6 +174,12 @@ int drm_mode_dirtyfb_ioctl(struct drm_device *dev,
void *data, struct drm_file *file_priv);
/* drm_atomic.c */
#ifdef CONFIG_DEBUG_FS
struct drm_minor;
int drm_atomic_debugfs_init(struct drm_minor *minor);
int drm_atomic_debugfs_cleanup(struct drm_minor *minor);
#endif
int drm_atomic_get_property(struct drm_mode_object *obj,
struct drm_property *property, uint64_t *val);
int drm_mode_atomic_ioctl(struct drm_device *dev,
......@@ -186,6 +192,9 @@ void drm_plane_unregister_all(struct drm_device *dev);
int drm_plane_check_pixel_format(const struct drm_plane *plane,
u32 format);
/* drm_bridge.c */
void drm_bridge_detach(struct drm_bridge *bridge);
/* IOCTL */
int drm_mode_getplane_res(struct drm_device *dev, void *data,
struct drm_file *file_priv);
......
......@@ -38,6 +38,7 @@
#include <drm/drm_edid.h>
#include <drm/drm_atomic.h>
#include "drm_internal.h"
#include "drm_crtc_internal.h"
#if defined(CONFIG_DEBUG_FS)
......
......@@ -323,9 +323,8 @@ void drm_minor_release(struct drm_minor *minor)
* historical baggage. Hence use the reference counting provided by
* drm_dev_ref() and drm_dev_unref() only carefully.
*
* Also note that embedding of &drm_device is currently not (yet) supported (but
* it would be easy to add). Drivers can store driver-private data in the
* dev_priv field of &drm_device.
* It is recommended that drivers embed struct &drm_device into their own device
* structure, which is supported through drm_dev_init().
*/
/**
......@@ -462,7 +461,11 @@ static void drm_fs_inode_free(struct inode *inode)
* Note that for purely virtual devices @parent can be NULL.
*
* Drivers that do not want to allocate their own device struct
* embedding struct &drm_device can call drm_dev_alloc() instead.
* embedding struct &drm_device can call drm_dev_alloc() instead. For drivers
* that do embed struct &drm_device it must be placed first in the overall
* structure, and the overall structure must be allocated using kmalloc(): The
* drm core's release function unconditionally calls kfree() on the @dev pointer
* when the final reference is released.
*
* RETURNS:
* 0 on success, or error code on failure.
......
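A sketch of what that embedding looks like for a hypothetical driver (foo_device, foo_driver and parent are assumptions, not code from this series):

	struct foo_device {
		struct drm_device drm;	/* must be the first member */
		void __iomem *mmio;
	};

	struct foo_device *foo;
	int ret;

	foo = kzalloc(sizeof(*foo), GFP_KERNEL);	/* kmalloc-backed */
	if (!foo)
		return -ENOMEM;

	ret = drm_dev_init(&foo->drm, &foo_driver, parent);
	if (ret) {
		kfree(foo);
		return ret;
	}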
......@@ -35,6 +35,7 @@
#include <linux/vga_switcheroo.h>
#include <drm/drmP.h>
#include <drm/drm_edid.h>
#include <drm/drm_encoder.h>
#include <drm/drm_displayid.h>
#define version_greater(edid, maj, min) \
......
......@@ -159,6 +159,17 @@ void drm_encoder_cleanup(struct drm_encoder *encoder)
* the indices on the drm_encoder after us in the encoder_list.
*/
if (encoder->bridge) {
struct drm_bridge *bridge = encoder->bridge;
struct drm_bridge *next;
while (bridge) {
next = bridge->next;
drm_bridge_detach(bridge);
bridge = next;
}
}
drm_mode_object_unregister(dev, &encoder->base);
kfree(encoder->name);
list_del(&encoder->head);
......@@ -173,10 +184,12 @@ static struct drm_crtc *drm_encoder_get_crtc(struct drm_encoder *encoder)
struct drm_connector *connector;
struct drm_device *dev = encoder->dev;
bool uses_atomic = false;
struct drm_connector_list_iter conn_iter;
/* For atomic drivers only state objects are synchronously updated and
* protected by modeset locks, so check those first. */
drm_for_each_connector(connector, dev) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (!connector->state)
continue;
......@@ -185,8 +198,10 @@ static struct drm_crtc *drm_encoder_get_crtc(struct drm_encoder *encoder)
if (connector->state->best_encoder != encoder)
continue;
drm_connector_list_iter_put(&conn_iter);
return connector->state->crtc;
}
drm_connector_list_iter_put(&conn_iter);
/* Don't return stale data (e.g. pending async disable). */
if (uses_atomic)
......
......@@ -147,7 +147,7 @@ static struct drm_fb_cma *drm_fb_cma_alloc(struct drm_device *dev,
if (!fb_cma)
return ERR_PTR(-ENOMEM);
drm_helper_mode_fill_fb_struct(&fb_cma->fb, mode_cmd);
drm_helper_mode_fill_fb_struct(dev, &fb_cma->fb, mode_cmd);
for (i = 0; i < num_planes; i++)
fb_cma->obj[i] = obj[i];
......@@ -304,15 +304,12 @@ EXPORT_SYMBOL_GPL(drm_fb_cma_prepare_fb);
static void drm_fb_cma_describe(struct drm_framebuffer *fb, struct seq_file *m)
{
struct drm_fb_cma *fb_cma = to_fb_cma(fb);
const struct drm_format_info *info;
int i;
seq_printf(m, "fb: %dx%d@%4.4s\n", fb->width, fb->height,
(char *)&fb->pixel_format);
info = drm_format_info(fb->pixel_format);
(char *)&fb->format->format);
for (i = 0; i < info->num_planes; i++) {
for (i = 0; i < fb->format->num_planes; i++) {
seq_printf(m, " %d: offset=%d pitch=%d, obj: ",
i, fb->offsets[i], fb->pitches[i]);
drm_gem_cma_describe(fb_cma->obj[i], m);
......@@ -467,7 +464,7 @@ int drm_fbdev_cma_create_with_funcs(struct drm_fb_helper *helper,
fbi->flags = FBINFO_FLAG_DEFAULT;
fbi->fbops = &drm_fbdev_cma_ops;
drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->depth);
drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->format->depth);
drm_fb_helper_fill_var(fbi, helper, sizes->fb_width, sizes->fb_height);
offset = fbi->var.xoffset * bytes_per_pixel;
......
......@@ -120,20 +120,22 @@ int drm_fb_helper_single_add_all_connectors(struct drm_fb_helper *fb_helper)
{
struct drm_device *dev = fb_helper->dev;
struct drm_connector *connector;
int i, ret;
struct drm_connector_list_iter conn_iter;
int i, ret = 0;
if (!drm_fbdev_emulation)
return 0;
mutex_lock(&dev->mode_config.mutex);
drm_for_each_connector(connector, dev) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
ret = drm_fb_helper_add_one_connector(fb_helper, connector);
if (ret)
goto fail;
}
mutex_unlock(&dev->mode_config.mutex);
return 0;
goto out;
fail:
drm_fb_helper_for_each_connector(fb_helper, i) {
struct drm_fb_helper_connector *fb_helper_connector =
......@@ -145,6 +147,8 @@ int drm_fb_helper_single_add_all_connectors(struct drm_fb_helper *fb_helper)
fb_helper->connector_info[i] = NULL;
}
fb_helper->connector_count = 0;
out:
drm_connector_list_iter_put(&conn_iter);
mutex_unlock(&dev->mode_config.mutex);
return ret;
......@@ -401,7 +405,7 @@ static int restore_fbdev_mode(struct drm_fb_helper *fb_helper)
drm_warn_on_modeset_not_all_locked(dev);
if (dev->mode_config.funcs->atomic_commit)
if (drm_drv_uses_atomic_modeset(dev))
return restore_fbdev_mode_atomic(fb_helper);
drm_for_each_plane(plane, dev) {
......@@ -1169,7 +1173,7 @@ static int setcolreg(struct drm_crtc *crtc, u16 red, u16 green,
!fb_helper->funcs->gamma_get))
return -EINVAL;
WARN_ON(fb->bits_per_pixel != 8);
WARN_ON(fb->format->cpp[0] != 1);
fb_helper->funcs->gamma_set(crtc, red, green, blue, regno);
......@@ -1252,14 +1256,14 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
* Changes struct fb_var_screeninfo are currently not pushed back
* to KMS, hence fail if different settings are requested.
*/
if (var->bits_per_pixel != fb->bits_per_pixel ||
if (var->bits_per_pixel != fb->format->cpp[0] * 8 ||
var->xres != fb->width || var->yres != fb->height ||
var->xres_virtual != fb->width || var->yres_virtual != fb->height) {
DRM_DEBUG("fb userspace requested width/height/bpp different than current fb "
"request %dx%d-%d (virtual %dx%d) > %dx%d-%d\n",
var->xres, var->yres, var->bits_per_pixel,
var->xres_virtual, var->yres_virtual,
fb->width, fb->height, fb->bits_per_pixel);
fb->width, fb->height, fb->format->cpp[0] * 8);
return -EINVAL;
}
......@@ -1440,7 +1444,7 @@ int drm_fb_helper_pan_display(struct fb_var_screeninfo *var,
return -EBUSY;
}
if (dev->mode_config.funcs->atomic_commit) {
if (drm_drv_uses_atomic_modeset(dev)) {
ret = pan_display_atomic(var, info);
goto unlock;
}
......@@ -1645,7 +1649,7 @@ void drm_fb_helper_fill_var(struct fb_info *info, struct drm_fb_helper *fb_helpe
info->pseudo_palette = fb_helper->pseudo_palette;
info->var.xres_virtual = fb->width;
info->var.yres_virtual = fb->height;
info->var.bits_per_pixel = fb->bits_per_pixel;
info->var.bits_per_pixel = fb->format->cpp[0] * 8;
info->var.accel_flags = FB_ACCELF_TEXT;
info->var.xoffset = 0;
info->var.yoffset = 0;
......@@ -1653,7 +1657,7 @@ void drm_fb_helper_fill_var(struct fb_info *info, struct drm_fb_helper *fb_helpe
info->var.height = -1;
info->var.width = -1;
switch (fb->depth) {
switch (fb->format->depth) {
case 8:
info->var.red.offset = 0;
info->var.green.offset = 0;
......@@ -2056,7 +2060,7 @@ static int drm_pick_crtcs(struct drm_fb_helper *fb_helper,
* NULL we fallback to the default drm_atomic_helper_best_encoder()
* helper.
*/
if (fb_helper->dev->mode_config.funcs->atomic_commit &&
if (drm_drv_uses_atomic_modeset(fb_helper->dev) &&
!connector_funcs->best_encoder)
encoder = drm_atomic_helper_best_encoder(connector);
else
......
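The fbdev-helper hunks above all apply the same substitution; as a rough sketch of the equivalences introduced by the fb->format conversion:

	const struct drm_format_info *info = fb->format;
	u32 fourcc = info->format;		/* was fb->pixel_format */
	unsigned int bpp = info->cpp[0] * 8;	/* was fb->bits_per_pixel */
	unsigned int depth = info->depth;	/* was fb->depth */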
......@@ -622,7 +622,7 @@ EXPORT_SYMBOL(drm_event_reserve_init_locked);
* kmalloc and @p must be the first member element.
*
* Callers which already hold dev->event_lock should use
* drm_event_reserve_init() instead.
* drm_event_reserve_init_locked() instead.
*
* RETURNS:
*
......
......@@ -432,8 +432,8 @@ int drm_mode_getfb(struct drm_device *dev,
r->height = fb->height;
r->width = fb->width;
r->depth = fb->depth;
r->bpp = fb->bits_per_pixel;
r->depth = fb->format->depth;
r->bpp = fb->format->cpp[0] * 8;
r->pitch = fb->pitches[0];
if (fb->funcs->create_handle) {
if (drm_is_current_master(file_priv) || capable(CAP_SYS_ADMIN) ||
......@@ -631,8 +631,11 @@ int drm_framebuffer_init(struct drm_device *dev, struct drm_framebuffer *fb,
{
int ret;
if (WARN_ON_ONCE(fb->dev != dev || !fb->format))
return -EINVAL;
INIT_LIST_HEAD(&fb->filp_head);
fb->dev = dev;
fb->funcs = funcs;
ret = drm_mode_object_get_reg(dev, &fb->base, DRM_MODE_OBJECT_FB,
......@@ -790,3 +793,47 @@ void drm_framebuffer_remove(struct drm_framebuffer *fb)
drm_framebuffer_unreference(fb);
}
EXPORT_SYMBOL(drm_framebuffer_remove);
/**
* drm_framebuffer_plane_width - width of the plane given the first plane
* @width: width of the first plane
* @fb: the framebuffer
* @plane: plane index
*
* Returns:
* The width of @plane, given that the width of the first plane is @width.
*/
int drm_framebuffer_plane_width(int width,
const struct drm_framebuffer *fb, int plane)
{
if (plane >= fb->format->num_planes)
return 0;
if (plane == 0)
return width;
return width / fb->format->hsub;
}
EXPORT_SYMBOL(drm_framebuffer_plane_width);
/**
* drm_framebuffer_plane_height - height of the plane given the first plane
* @height: height of the first plane
* @fb: the framebuffer
* @plane: plane index
*
* Returns:
* The height of @plane, given that the height of the first plane is @height.
*/
int drm_framebuffer_plane_height(int height,
const struct drm_framebuffer *fb, int plane)
{
if (plane >= fb->format->num_planes)
return 0;
if (plane == 0)
return height;
return height / fb->format->vsub;
}
EXPORT_SYMBOL(drm_framebuffer_plane_height);
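A short usage sketch, iterating over whatever planes the framebuffer's format defines:

	int i;

	for (i = 0; i < fb->format->num_planes; i++) {
		int w = drm_framebuffer_plane_width(fb->width, fb, i);
		int h = drm_framebuffer_plane_height(fb->height, fb, i);

		DRM_DEBUG_KMS("plane %d: %dx%d, pitch %u\n",
			      i, w, h, fb->pitches[i]);
	}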
......@@ -58,10 +58,10 @@ extern unsigned int drm_timestamp_monotonic;
/* IOCTLS */
int drm_wait_vblank(struct drm_device *dev, void *data,
struct drm_file *filp);
int drm_control(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_modeset_ctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_irq_control(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int drm_legacy_modeset_ctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
/* drm_auth.c */
int drm_getmagic(struct drm_device *dev, void *data,
......
......@@ -115,11 +115,15 @@ static int drm_getunique(struct drm_device *dev, void *data,
struct drm_unique *u = data;
struct drm_master *master = file_priv->master;
mutex_lock(&master->dev->master_mutex);
if (u->unique_len >= master->unique_len) {
if (copy_to_user(u->unique, master->unique, master->unique_len))
if (copy_to_user(u->unique, master->unique, master->unique_len)) {
mutex_unlock(&master->dev->master_mutex);
return -EFAULT;
}
}
u->unique_len = master->unique_len;
mutex_unlock(&master->dev->master_mutex);
return 0;
}
......@@ -340,6 +344,7 @@ static int drm_setversion(struct drm_device *dev, void *data, struct drm_file *f
struct drm_set_version *sv = data;
int if_version, retcode = 0;
mutex_lock(&dev->master_mutex);
if (sv->drm_di_major != -1) {
if (sv->drm_di_major != DRM_IF_MAJOR ||
sv->drm_di_minor < 0 || sv->drm_di_minor > DRM_IF_MINOR) {
......@@ -374,6 +379,7 @@ static int drm_setversion(struct drm_device *dev, void *data, struct drm_file *f
sv->drm_di_minor = DRM_IF_MINOR;
sv->drm_dd_major = dev->driver->major;
sv->drm_dd_minor = dev->driver->minor;
mutex_unlock(&dev->master_mutex);
return retcode;
}
......@@ -528,15 +534,15 @@ EXPORT_SYMBOL(drm_ioctl_permit);
static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_VERSION, drm_version,
DRM_UNLOCKED|DRM_RENDER_ALLOW|DRM_CONTROL_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_GET_UNIQUE, drm_getunique, 0),
DRM_IOCTL_DEF(DRM_IOCTL_GET_UNIQUE, drm_getunique, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_GET_MAGIC, drm_getmagic, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_IRQ_BUSID, drm_irq_by_busid, DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_GET_MAP, drm_legacy_getmap_ioctl, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_GET_CLIENT, drm_getclient, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_GET_STATS, drm_getstats, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_GET_CAP, drm_getcap, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_SET_CLIENT_CAP, drm_setclientcap, 0),
DRM_IOCTL_DEF(DRM_IOCTL_SET_VERSION, drm_setversion, DRM_MASTER),
DRM_IOCTL_DEF(DRM_IOCTL_SET_CLIENT_CAP, drm_setclientcap, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_SET_VERSION, drm_setversion, DRM_UNLOCKED | DRM_MASTER),
DRM_IOCTL_DEF(DRM_IOCTL_SET_UNIQUE, drm_invalid_op, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_BLOCK, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
......@@ -575,7 +581,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_FREE_BUFS, drm_legacy_freebufs, DRM_AUTH),
DRM_IOCTL_DEF(DRM_IOCTL_DMA, drm_legacy_dma_ioctl, DRM_AUTH),
DRM_IOCTL_DEF(DRM_IOCTL_CONTROL, drm_control, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_CONTROL, drm_legacy_irq_control, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
#if IS_ENABLED(CONFIG_AGP)
DRM_IOCTL_DEF(DRM_IOCTL_AGP_ACQUIRE, drm_agp_acquire_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
......@@ -593,7 +599,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODESET_CTL, drm_modeset_ctl, 0),
DRM_IOCTL_DEF(DRM_IOCTL_MODESET_CTL, drm_legacy_modeset_ctl, 0),
DRM_IOCTL_DEF(DRM_IOCTL_UPDATE_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
......@@ -729,9 +735,8 @@ long drm_ioctl(struct file *filp,
if (ksize > in_size)
memset(kdata + in_size, 0, ksize - in_size);
/* Enforce sane locking for modern driver ioctls. Core ioctls are
* too messy still. */
if ((!drm_core_check_feature(dev, DRIVER_LEGACY) && is_driver_ioctl) ||
/* Enforce sane locking for modern driver ioctls. */
if (!drm_core_check_feature(dev, DRIVER_LEGACY) ||
(ioctl->flags & DRM_UNLOCKED))
retcode = func(dev, kdata, file_priv);
else {
......
......@@ -579,19 +579,8 @@ int drm_irq_uninstall(struct drm_device *dev)
}
EXPORT_SYMBOL(drm_irq_uninstall);
/*
* IRQ control ioctl.
*
* \param inode device inode.
* \param file_priv DRM file private.
* \param cmd command.
* \param arg user argument, pointing to a drm_control structure.
* \return zero on success or a negative number on failure.
*
* Calls irq_install() or irq_uninstall() according to \p arg.
*/
int drm_control(struct drm_device *dev, void *data,
struct drm_file *file_priv)
int drm_legacy_irq_control(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_control *ctl = data;
int ret = 0, irq;
......@@ -1442,19 +1431,8 @@ static void drm_legacy_vblank_post_modeset(struct drm_device *dev,
}
}
/*
* drm_modeset_ctl - handle vblank event counter changes across mode switch
* @DRM_IOCTL_ARGS: standard ioctl arguments
*
* Applications should call the %_DRM_PRE_MODESET and %_DRM_POST_MODESET
* ioctls around modesetting so that any lost vblank events are accounted for.
*
* Generally the counter will reset across mode sets. If interrupts are
* enabled around this call, we don't have to do anything since the counter
* will have already been incremented.
*/
int drm_modeset_ctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
int drm_legacy_modeset_ctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_modeset_ctl *modeset = data;
unsigned int pipe;
......
This diff has been collapsed.
......@@ -20,6 +20,7 @@
* OF THIS SOFTWARE.
*/
#include <drm/drm_encoder.h>
#include <drm/drm_mode_config.h>
#include <drm/drmP.h>
......@@ -84,113 +85,74 @@ int drm_mode_getresources(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct drm_mode_card_res *card_res = data;
struct list_head *lh;
struct drm_framebuffer *fb;
struct drm_connector *connector;
struct drm_crtc *crtc;
struct drm_encoder *encoder;
int ret = 0;
int connector_count = 0;
int crtc_count = 0;
int fb_count = 0;
int encoder_count = 0;
int copied = 0;
int count, ret = 0;
uint32_t __user *fb_id;
uint32_t __user *crtc_id;
uint32_t __user *connector_id;
uint32_t __user *encoder_id;
struct drm_connector_list_iter conn_iter;
if (!drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
mutex_lock(&file_priv->fbs_lock);
/*
* For the non-control nodes we need to limit the list of resources
* by IDs in the group list for this node
*/
list_for_each(lh, &file_priv->fbs)
fb_count++;
/* handle this in 4 parts */
/* FBs */
if (card_res->count_fbs >= fb_count) {
copied = 0;
fb_id = (uint32_t __user *)(unsigned long)card_res->fb_id_ptr;
list_for_each_entry(fb, &file_priv->fbs, filp_head) {
if (put_user(fb->base.id, fb_id + copied)) {
mutex_unlock(&file_priv->fbs_lock);
return -EFAULT;
}
copied++;
count = 0;
fb_id = u64_to_user_ptr(card_res->fb_id_ptr);
list_for_each_entry(fb, &file_priv->fbs, filp_head) {
if (count < card_res->count_fbs &&
put_user(fb->base.id, fb_id + count)) {
mutex_unlock(&file_priv->fbs_lock);
return -EFAULT;
}
count++;
}
card_res->count_fbs = fb_count;
card_res->count_fbs = count;
mutex_unlock(&file_priv->fbs_lock);
/* mode_config.mutex protects the connector list against e.g. DP MST
* connector hot-adding. CRTC/Plane lists are invariant. */
mutex_lock(&dev->mode_config.mutex);
drm_for_each_crtc(crtc, dev)
crtc_count++;
drm_for_each_connector(connector, dev)
connector_count++;
drm_for_each_encoder(encoder, dev)
encoder_count++;
card_res->max_height = dev->mode_config.max_height;
card_res->min_height = dev->mode_config.min_height;
card_res->max_width = dev->mode_config.max_width;
card_res->min_width = dev->mode_config.min_width;
/* CRTCs */
if (card_res->count_crtcs >= crtc_count) {
copied = 0;
crtc_id = (uint32_t __user *)(unsigned long)card_res->crtc_id_ptr;
drm_for_each_crtc(crtc, dev) {
if (put_user(crtc->base.id, crtc_id + copied)) {
ret = -EFAULT;
goto out;
}
copied++;
}
count = 0;
crtc_id = u64_to_user_ptr(card_res->crtc_id_ptr);
drm_for_each_crtc(crtc, dev) {
if (count < card_res->count_crtcs &&
put_user(crtc->base.id, crtc_id + count))
return -EFAULT;
count++;
}
card_res->count_crtcs = crtc_count;
/* Encoders */
if (card_res->count_encoders >= encoder_count) {
copied = 0;
encoder_id = (uint32_t __user *)(unsigned long)card_res->encoder_id_ptr;
drm_for_each_encoder(encoder, dev) {
if (put_user(encoder->base.id, encoder_id +
copied)) {
ret = -EFAULT;
goto out;
}
copied++;
}
card_res->count_crtcs = count;
count = 0;
encoder_id = u64_to_user_ptr(card_res->encoder_id_ptr);
drm_for_each_encoder(encoder, dev) {
if (count < card_res->count_encoders &&
put_user(encoder->base.id, encoder_id + count))
return -EFAULT;
count++;
}
card_res->count_encoders = encoder_count;
/* Connectors */
if (card_res->count_connectors >= connector_count) {
copied = 0;
connector_id = (uint32_t __user *)(unsigned long)card_res->connector_id_ptr;
drm_for_each_connector(connector, dev) {
if (put_user(connector->base.id,
connector_id + copied)) {
ret = -EFAULT;
goto out;
}
copied++;
card_res->count_encoders = count;
drm_connector_list_iter_get(dev, &conn_iter);
count = 0;
connector_id = u64_to_user_ptr(card_res->connector_id_ptr);
drm_for_each_connector_iter(connector, &conn_iter) {
if (count < card_res->count_connectors &&
put_user(connector->base.id, connector_id + count)) {
drm_connector_list_iter_put(&conn_iter);
return -EFAULT;
}
count++;
}
card_res->count_connectors = connector_count;
card_res->count_connectors = count;
drm_connector_list_iter_put(&conn_iter);
out:
mutex_unlock(&dev->mode_config.mutex);
return ret;
}
......@@ -208,6 +170,7 @@ void drm_mode_config_reset(struct drm_device *dev)
struct drm_plane *plane;
struct drm_encoder *encoder;
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
drm_for_each_plane(plane, dev)
if (plane->funcs->reset)
......@@ -221,11 +184,11 @@ void drm_mode_config_reset(struct drm_device *dev)
if (encoder->funcs->reset)
encoder->funcs->reset(encoder);
mutex_lock(&dev->mode_config.mutex);
drm_for_each_connector(connector, dev)
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter)
if (connector->funcs->reset)
connector->funcs->reset(connector);
mutex_unlock(&dev->mode_config.mutex);
drm_connector_list_iter_put(&conn_iter);
}
EXPORT_SYMBOL(drm_mode_config_reset);
......@@ -406,10 +369,9 @@ void drm_mode_config_init(struct drm_device *dev)
idr_init(&dev->mode_config.crtc_idr);
idr_init(&dev->mode_config.tile_idr);
ida_init(&dev->mode_config.connector_ida);
spin_lock_init(&dev->mode_config.connector_list_lock);
drm_modeset_lock_all(dev);
drm_mode_create_standard_properties(dev);
drm_modeset_unlock_all(dev);
/* Just to be sure */
dev->mode_config.num_fb = 0;
......@@ -436,7 +398,8 @@ EXPORT_SYMBOL(drm_mode_config_init);
*/
void drm_mode_config_cleanup(struct drm_device *dev)
{
struct drm_connector *connector, *ot;
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
struct drm_crtc *crtc, *ct;
struct drm_encoder *encoder, *enct;
struct drm_framebuffer *fb, *fbt;
......@@ -449,10 +412,16 @@ void drm_mode_config_cleanup(struct drm_device *dev)
encoder->funcs->destroy(encoder);
}
list_for_each_entry_safe(connector, ot,
&dev->mode_config.connector_list, head) {
connector->funcs->destroy(connector);
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
/* drm_connector_list_iter holds a full reference to the
* current connector itself, which means it is inherently safe
* against unreferencing the current connector - but not against
* deleting it right away. */
drm_connector_unreference(connector);
}
drm_connector_list_iter_put(&conn_iter);
WARN_ON(!list_empty(&dev->mode_config.connector_list));
list_for_each_entry_safe(property, pt, &dev->mode_config.property_list,
head) {
......
......@@ -23,6 +23,7 @@
#include <linux/export.h>
#include <drm/drmP.h>
#include <drm/drm_mode_object.h>
#include <drm/drm_atomic.h>
#include "drm_crtc_internal.h"
......@@ -273,7 +274,7 @@ int drm_object_property_get_value(struct drm_mode_object *obj,
* their value in obj->properties->values[].. mostly to avoid
* having to deal w/ EDID and similar props in atomic paths:
*/
if (drm_core_check_feature(property->dev, DRIVER_ATOMIC) &&
if (drm_drv_uses_atomic_modeset(property->dev) &&
!(property->flags & DRM_MODE_PROP_IMMUTABLE))
return drm_atomic_get_property(obj, property, val);
......
......@@ -48,6 +48,7 @@ void drm_helper_move_panel_connectors_to_head(struct drm_device *dev)
INIT_LIST_HEAD(&panel_list);
spin_lock_irq(&dev->mode_config.connector_list_lock);
list_for_each_entry_safe(connector, tmp,
&dev->mode_config.connector_list, head) {
if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS ||
......@@ -57,38 +58,27 @@ void drm_helper_move_panel_connectors_to_head(struct drm_device *dev)
}
list_splice(&panel_list, &dev->mode_config.connector_list);
spin_unlock_irq(&dev->mode_config.connector_list_lock);
}
EXPORT_SYMBOL(drm_helper_move_panel_connectors_to_head);
/**
* drm_helper_mode_fill_fb_struct - fill out framebuffer metadata
* @dev: DRM device
* @fb: drm_framebuffer object to fill out
* @mode_cmd: metadata from the userspace fb creation request
*
* This helper can be used in a driver's fb_create callback to pre-fill the fb's
* metadata fields.
*/
void drm_helper_mode_fill_fb_struct(struct drm_framebuffer *fb,
void drm_helper_mode_fill_fb_struct(struct drm_device *dev,
struct drm_framebuffer *fb,
const struct drm_mode_fb_cmd2 *mode_cmd)
{
const struct drm_format_info *info;
int i;
info = drm_format_info(mode_cmd->pixel_format);
if (!info || !info->depth) {
struct drm_format_name_buf format_name;
DRM_DEBUG_KMS("non-RGB pixel format %s\n",
drm_get_format_name(mode_cmd->pixel_format,
&format_name));
fb->depth = 0;
fb->bits_per_pixel = 0;
} else {
fb->depth = info->depth;
fb->bits_per_pixel = info->cpp[0] * 8;
}
fb->dev = dev;
fb->format = drm_format_info(mode_cmd->pixel_format);
fb->width = mode_cmd->width;
fb->height = mode_cmd->height;
for (i = 0; i < 4; i++) {
......@@ -96,7 +86,6 @@ void drm_helper_mode_fill_fb_struct(struct drm_framebuffer *fb,
fb->offsets[i] = mode_cmd->offsets[i];
}
fb->modifier = mode_cmd->modifier[0];
fb->pixel_format = mode_cmd->pixel_format;
fb->flags = mode_cmd->flags;
}
EXPORT_SYMBOL(drm_helper_mode_fill_fb_struct);
......
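A sketch of a driver's fb_create-style path with the updated helper (foo_fb_funcs is a hypothetical struct drm_framebuffer_funcs; error handling abbreviated):

	struct drm_framebuffer *fb;
	int ret;

	fb = kzalloc(sizeof(*fb), GFP_KERNEL);
	if (!fb)
		return ERR_PTR(-ENOMEM);

	/* Fills fb->dev and fb->format, which drm_framebuffer_init()
	 * now warns about if left unset. */
	drm_helper_mode_fill_fb_struct(dev, fb, mode_cmd);

	ret = drm_framebuffer_init(dev, fb, &foo_fb_funcs);
	if (ret) {
		kfree(fb);
		return ERR_PTR(ret);
	}

	return fb;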
......@@ -4,6 +4,7 @@
#include <linux/of_graph.h>
#include <drm/drmP.h>
#include <drm/drm_crtc.h>
#include <drm/drm_encoder.h>
#include <drm/drm_of.h>
static void drm_release_of(struct device *dev, void *data)
......
......@@ -392,12 +392,16 @@ int drm_mode_getplane(struct drm_device *dev, void *data,
return -ENOENT;
drm_modeset_lock(&plane->mutex, NULL);
if (plane->crtc)
if (plane->state && plane->state->crtc)
plane_resp->crtc_id = plane->state->crtc->base.id;
else if (!plane->state && plane->crtc)
plane_resp->crtc_id = plane->crtc->base.id;
else
plane_resp->crtc_id = 0;
if (plane->fb)
if (plane->state && plane->state->fb)
plane_resp->fb_id = plane->state->fb->base.id;
else if (!plane->state && plane->fb)
plane_resp->fb_id = plane->fb->base.id;
else
plane_resp->fb_id = 0;
......@@ -478,11 +482,11 @@ static int __setplane_internal(struct drm_plane *plane,
}
/* Check whether this plane supports the fb pixel format. */
ret = drm_plane_check_pixel_format(plane, fb->pixel_format);
ret = drm_plane_check_pixel_format(plane, fb->format->format);
if (ret) {
struct drm_format_name_buf format_name;
DRM_DEBUG_KMS("Invalid pixel format %s\n",
drm_get_format_name(fb->pixel_format,
drm_get_format_name(fb->format->format,
&format_name));
goto out;
}
......@@ -854,7 +858,7 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev,
if (ret)
goto out;
if (crtc->primary->fb->pixel_format != fb->pixel_format) {
if (crtc->primary->fb->format != fb->format) {
DRM_DEBUG_KMS("Page flip is not allowed to change frame buffer format.\n");
ret = -EINVAL;
goto out;
......
......@@ -29,6 +29,7 @@
#include <drm/drm_rect.h>
#include <drm/drm_atomic.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_encoder.h>
#include <drm/drm_atomic_helper.h>
#define SUBPIXEL_MASK 0xffff
......@@ -74,6 +75,7 @@ static int get_connectors_for_crtc(struct drm_crtc *crtc,
{
struct drm_device *dev = crtc->dev;
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
int count = 0;
/*
......@@ -83,7 +85,8 @@ static int get_connectors_for_crtc(struct drm_crtc *crtc,
*/
WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));
drm_for_each_connector(connector, dev) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (connector->encoder && connector->encoder->crtc == crtc) {
if (connector_list != NULL && count < num_connectors)
*(connector_list++) = connector;
......@@ -91,6 +94,7 @@ static int get_connectors_for_crtc(struct drm_crtc *crtc,
count++;
}
}
drm_connector_list_iter_put(&conn_iter);
return count;
}
......
......@@ -129,6 +129,7 @@ void drm_kms_helper_poll_enable_locked(struct drm_device *dev)
{
bool poll = false;
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
unsigned long delay = DRM_OUTPUT_POLL_PERIOD;
WARN_ON(!mutex_is_locked(&dev->mode_config.mutex));
......@@ -136,11 +137,13 @@ void drm_kms_helper_poll_enable_locked(struct drm_device *dev)
if (!dev->mode_config.poll_enabled || !drm_kms_helper_poll)
return;
drm_for_each_connector(connector, dev) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
if (connector->polled & (DRM_CONNECTOR_POLL_CONNECT |
DRM_CONNECTOR_POLL_DISCONNECT))
poll = true;
}
drm_connector_list_iter_put(&conn_iter);
if (dev->mode_config.delayed_event) {
poll = true;
......@@ -382,6 +385,7 @@ static void output_poll_execute(struct work_struct *work)
struct delayed_work *delayed_work = to_delayed_work(work);
struct drm_device *dev = container_of(delayed_work, struct drm_device, mode_config.output_poll_work);
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
enum drm_connector_status old_status;
bool repoll = false, changed;
......@@ -397,8 +401,8 @@ static void output_poll_execute(struct work_struct *work)
goto out;
}
drm_for_each_connector(connector, dev) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
/* Ignore forced connectors. */
if (connector->force)
continue;
......@@ -451,6 +455,7 @@ static void output_poll_execute(struct work_struct *work)
changed = true;
}
}
drm_connector_list_iter_put(&conn_iter);
mutex_unlock(&dev->mode_config.mutex);
......@@ -562,6 +567,7 @@ EXPORT_SYMBOL(drm_kms_helper_poll_fini);
bool drm_helper_hpd_irq_event(struct drm_device *dev)
{
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
enum drm_connector_status old_status;
bool changed = false;
......@@ -569,8 +575,8 @@ bool drm_helper_hpd_irq_event(struct drm_device *dev)
return false;
mutex_lock(&dev->mode_config.mutex);
drm_for_each_connector(connector, dev) {
drm_connector_list_iter_get(dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
/* Only handle HPD capable connectors. */
if (!(connector->polled & DRM_CONNECTOR_POLL_HPD))
continue;
......@@ -586,7 +592,7 @@ bool drm_helper_hpd_irq_event(struct drm_device *dev)
if (old_status != connector->status)
changed = true;
}
drm_connector_list_iter_put(&conn_iter);
mutex_unlock(&dev->mode_config.mutex);
if (changed)
......
......@@ -182,29 +182,10 @@ static const struct drm_plane_funcs drm_simple_kms_plane_funcs = {
int drm_simple_display_pipe_attach_bridge(struct drm_simple_display_pipe *pipe,
struct drm_bridge *bridge)
{
bridge->encoder = &pipe->encoder;
pipe->encoder.bridge = bridge;
return drm_bridge_attach(pipe->encoder.dev, bridge);
return drm_bridge_attach(&pipe->encoder, bridge, NULL);
}
EXPORT_SYMBOL(drm_simple_display_pipe_attach_bridge);
/**
* drm_simple_display_pipe_detach_bridge - Detach the bridge from the display pipe
* @pipe: simple display pipe object
*
* Detaches the drm bridge previously attached with
* drm_simple_display_pipe_attach_bridge()
*/
void drm_simple_display_pipe_detach_bridge(struct drm_simple_display_pipe *pipe)
{
if (WARN_ON(!pipe->encoder.bridge))
return;
drm_bridge_detach(pipe->encoder.bridge);
pipe->encoder.bridge = NULL;
}
EXPORT_SYMBOL(drm_simple_display_pipe_detach_bridge);
/**
* drm_simple_display_pipe_init - Initialize a simple display pipeline
* @dev: DRM device
......
......@@ -592,7 +592,7 @@ static void etnaviv_unbind(struct device *dev)
drm->dev_private = NULL;
kfree(priv);
drm_put_dev(drm);
drm_dev_unref(drm);
}
static const struct component_master_ops etnaviv_master_ops = {
......
......@@ -113,6 +113,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
while (1) {
struct etnaviv_vram_mapping *m, *n;
struct drm_mm_scan scan;
struct list_head list;
bool found;
......@@ -134,7 +135,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
}
/* Try to retire some entries */
drm_mm_init_scan(&mmu->mm, size, 0, 0);
drm_mm_scan_init(&scan, &mmu->mm, size, 0, 0, 0);
found = 0;
INIT_LIST_HEAD(&list);
......@@ -151,7 +152,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
continue;
list_add(&free->scan_node, &list);
if (drm_mm_scan_add_block(&free->vram_node)) {
if (drm_mm_scan_add_block(&scan, &free->vram_node)) {
found = true;
break;
}
......@@ -160,7 +161,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
if (!found) {
/* Nothing found, clean up and fail */
list_for_each_entry_safe(m, n, &list, scan_node)
BUG_ON(drm_mm_scan_remove_block(&m->vram_node));
BUG_ON(drm_mm_scan_remove_block(&scan, &m->vram_node));
break;
}
......@@ -171,7 +172,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu *mmu,
* can leave the block pinned.
*/
list_for_each_entry_safe(m, n, &list, scan_node)
if (!drm_mm_scan_remove_block(&m->vram_node))
if (!drm_mm_scan_remove_block(&scan, &m->vram_node))
list_del_init(&m->scan_node);
/*
......
......@@ -200,7 +200,7 @@ static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
val = readl(ctx->addr + DECON_WINCONx(win));
val &= ~WINCONx_BPPMODE_MASK;
switch (fb->pixel_format) {
switch (fb->format->format) {
case DRM_FORMAT_XRGB1555:
val |= WINCONx_BPPMODE_16BPP_I1555;
val |= WINCONx_HAWSWP_F;
......@@ -226,7 +226,7 @@ static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
return;
}
DRM_DEBUG_KMS("bpp = %u\n", fb->bits_per_pixel);
DRM_DEBUG_KMS("bpp = %u\n", fb->format->cpp[0] * 8);
/*
* In case of exynos, setting dma-burst to 16Word causes permanent
......@@ -275,7 +275,7 @@ static void decon_update_plane(struct exynos_drm_crtc *crtc,
struct decon_context *ctx = crtc->ctx;
struct drm_framebuffer *fb = state->base.fb;
unsigned int win = plane->index;
unsigned int bpp = fb->bits_per_pixel >> 3;
unsigned int bpp = fb->format->cpp[0];
unsigned int pitch = fb->pitches[0];
dma_addr_t dma_addr = exynos_drm_fb_dma_addr(fb, 0);
u32 val;
......
......@@ -281,7 +281,7 @@ static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
val = readl(ctx->regs + WINCON(win));
val &= ~WINCONx_BPPMODE_MASK;
switch (fb->pixel_format) {
switch (fb->format->format) {
case DRM_FORMAT_RGB565:
val |= WINCONx_BPPMODE_16BPP_565;
val |= WINCONx_BURSTLEN_16WORD;
......@@ -330,7 +330,7 @@ static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
break;
}
DRM_DEBUG_KMS("bpp = %d\n", fb->bits_per_pixel);
DRM_DEBUG_KMS("bpp = %d\n", fb->format->cpp[0] * 8);
/*
* In case of exynos, setting dma-burst to 16Word causes permanent
......@@ -340,7 +340,7 @@ static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
* movement causes unstable DMA which results into iommu crash/tear.
*/
padding = (fb->pitches[0] / (fb->bits_per_pixel >> 3)) - fb->width;
padding = (fb->pitches[0] / fb->format->cpp[0]) - fb->width;
if (fb->width + padding < MIN_FB_WIDTH_FOR_16WORD_BURST) {
val &= ~WINCONx_BURSTLEN_MASK;
val |= WINCONx_BURSTLEN_8WORD;
......@@ -407,7 +407,7 @@ static void decon_update_plane(struct exynos_drm_crtc *crtc,
unsigned int last_x;
unsigned int last_y;
unsigned int win = plane->index;
unsigned int bpp = fb->bits_per_pixel >> 3;
unsigned int bpp = fb->format->cpp[0];
unsigned int pitch = fb->pitches[0];
if (ctx->suspended)
......
......@@ -99,7 +99,6 @@ static int exynos_dp_bridge_attach(struct analogix_dp_plat_data *plat_data,
struct drm_connector *connector)
{
struct exynos_dp_device *dp = to_dp(plat_data);
struct drm_encoder *encoder = &dp->encoder;
int ret;
drm_connector_register(connector);
......@@ -107,9 +106,7 @@ static int exynos_dp_bridge_attach(struct analogix_dp_plat_data *plat_data,
/* Pre-empt DP connector creation if there's a bridge */
if (dp->ptn_bridge) {
bridge->next = dp->ptn_bridge;
dp->ptn_bridge->encoder = encoder;
ret = drm_bridge_attach(encoder->dev, dp->ptn_bridge);
ret = drm_bridge_attach(&dp->encoder, dp->ptn_bridge, bridge);
if (ret) {
DRM_ERROR("Failed to attach bridge to drm\n");
bridge->next = NULL;
......
......@@ -1718,10 +1718,8 @@ static int exynos_dsi_bind(struct device *dev, struct device *master,
}
bridge = of_drm_find_bridge(dsi->bridge_node);
if (bridge) {
encoder->bridge = bridge;
drm_bridge_attach(drm_dev, bridge);
}
if (bridge)
drm_bridge_attach(encoder, bridge, NULL);
return mipi_dsi_host_register(&dsi->dsi_host);
}
......
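The two exynos hunks above illustrate both forms of the new drm_bridge_attach() calling convention: the encoder now comes first, and an optional previous bridge chains a second bridge behind the first. A hedged sketch of the same pattern (example_attach_bridges() and its arguments are hypothetical):

#include <drm/drm_bridge.h>
#include <drm/drm_encoder.h>

static int example_attach_bridges(struct drm_encoder *encoder,
				  struct drm_bridge *first,
				  struct drm_bridge *second)
{
	int ret;

	/* The first bridge hangs directly off the encoder. */
	ret = drm_bridge_attach(encoder, first, NULL);
	if (ret)
		return ret;

	if (!second)
		return 0;

	/* A second bridge is chained behind the first one, as the
	 * exynos_dp case above does with its ptn_bridge. */
	return drm_bridge_attach(encoder, second, first);
}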
......@@ -126,7 +126,7 @@ exynos_drm_framebuffer_init(struct drm_device *dev,
+ mode_cmd->offsets[i];
}
drm_helper_mode_fill_fb_struct(&exynos_fb->fb, mode_cmd);
drm_helper_mode_fill_fb_struct(dev, &exynos_fb->fb, mode_cmd);
ret = drm_framebuffer_init(dev, &exynos_fb->fb, &exynos_drm_fb_funcs);
if (ret < 0) {
......
......@@ -76,7 +76,7 @@ static int exynos_drm_fbdev_update(struct drm_fb_helper *helper,
{
struct fb_info *fbi;
struct drm_framebuffer *fb = helper->fb;
unsigned int size = fb->width * fb->height * (fb->bits_per_pixel >> 3);
unsigned int size = fb->width * fb->height * fb->format->cpp[0];
unsigned int nr_pages;
unsigned long offset;
......@@ -90,7 +90,7 @@ static int exynos_drm_fbdev_update(struct drm_fb_helper *helper,
fbi->flags = FBINFO_FLAG_DEFAULT;
fbi->fbops = &exynos_drm_fb_ops;
drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->depth);
drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->format->depth);
drm_fb_helper_fill_var(fbi, helper, sizes->fb_width, sizes->fb_height);
nr_pages = exynos_gem->size >> PAGE_SHIFT;
......@@ -103,7 +103,7 @@ static int exynos_drm_fbdev_update(struct drm_fb_helper *helper,
return -EIO;
}
offset = fbi->var.xoffset * (fb->bits_per_pixel >> 3);
offset = fbi->var.xoffset * fb->format->cpp[0];
offset += fbi->var.yoffset * fb->pitches[0];
fbi->screen_base = exynos_gem->kvaddr + offset;
......
......@@ -738,7 +738,7 @@ static void fimd_update_plane(struct exynos_drm_crtc *crtc,
unsigned long val, size, offset;
unsigned int last_x, last_y, buf_offsize, line_size;
unsigned int win = plane->index;
unsigned int bpp = fb->bits_per_pixel >> 3;
unsigned int bpp = fb->format->cpp[0];
unsigned int pitch = fb->pitches[0];
if (ctx->suspended)
......@@ -804,7 +804,7 @@ static void fimd_update_plane(struct exynos_drm_crtc *crtc,
DRM_DEBUG_KMS("osd size = 0x%x\n", (unsigned int)val);
}
fimd_win_set_pixfmt(ctx, win, fb->pixel_format, state->src.w);
fimd_win_set_pixfmt(ctx, win, fb->format->format, state->src.w);
/* hardware window 0 doesn't support color key. */
if (win != 0)
......
......@@ -485,7 +485,7 @@ static void vp_video_buffer(struct mixer_context *ctx,
bool crcb_mode = false;
u32 val;
switch (fb->pixel_format) {
switch (fb->format->format) {
case DRM_FORMAT_NV12:
crcb_mode = false;
break;
......@@ -494,7 +494,7 @@ static void vp_video_buffer(struct mixer_context *ctx,
break;
default:
DRM_ERROR("pixel format for vp is wrong [%d].\n",
fb->pixel_format);
fb->format->format);
return;
}
......@@ -597,7 +597,7 @@ static void mixer_graph_buffer(struct mixer_context *ctx,
unsigned int fmt;
u32 val;
switch (fb->pixel_format) {
switch (fb->format->format) {
case DRM_FORMAT_XRGB4444:
case DRM_FORMAT_ARGB4444:
fmt = MXR_FORMAT_ARGB4444;
......@@ -631,7 +631,7 @@ static void mixer_graph_buffer(struct mixer_context *ctx,
/* converting dma address base and source offset */
dma_addr = exynos_drm_fb_dma_addr(fb, 0)
+ (state->src.x * fb->bits_per_pixel >> 3)
+ (state->src.x * fb->format->cpp[0])
+ (state->src.y * fb->pitches[0]);
src_x_offset = 0;
src_y_offset = 0;
......@@ -649,7 +649,7 @@ static void mixer_graph_buffer(struct mixer_context *ctx,
/* setup geometry */
mixer_reg_write(res, MXR_GRAPHIC_SPAN(win),
fb->pitches[0] / (fb->bits_per_pixel >> 3));
fb->pitches[0] / fb->format->cpp[0]);
/* setup display size */
if (ctx->mxr_ver == MXR_VER_128_0_0_184 &&
......@@ -681,7 +681,7 @@ static void mixer_graph_buffer(struct mixer_context *ctx,
mixer_cfg_scan(ctx, mode->vdisplay);
mixer_cfg_rgb_fmt(ctx, mode->vdisplay);
mixer_cfg_layer(ctx, win, priority, true);
mixer_cfg_gfx_blend(ctx, win, is_alpha_format(fb->pixel_format));
mixer_cfg_gfx_blend(ctx, win, is_alpha_format(fb->format->format));
/* layer update mandatory for mixer 16.0.33.0 */
if (ctx->mxr_ver == MXR_VER_16_0_33_0 ||
......
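All of the fb->pixel_format / fb->bits_per_pixel / fb->depth conversions above funnel into the same accessor: fb->format, a pointer to a shared const struct drm_format_info. A small hedged sketch (example_dump_fb() is hypothetical) showing the new lookups next to the fields they replace:

#include <drm/drmP.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_framebuffer.h>

static void example_dump_fb(const struct drm_framebuffer *fb)
{
	struct drm_format_name_buf name;

	DRM_DEBUG_KMS("format %s, cpp %u, depth %u, pitch %u\n",
		      drm_get_format_name(fb->format->format, &name), /* was fb->pixel_format */
		      fb->format->cpp[0],	/* was fb->bits_per_pixel / 8 */
		      fb->format->depth,	/* was fb->depth */
		      fb->pitches[0]);
}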
......@@ -434,7 +434,8 @@ static int fsl_dcu_drm_remove(struct platform_device *pdev)
{
struct fsl_dcu_drm_device *fsl_dev = platform_get_drvdata(pdev);
drm_put_dev(fsl_dev->drm);
drm_dev_unregister(fsl_dev->drm);
drm_dev_unref(fsl_dev->drm);
clk_disable_unprepare(fsl_dev->clk);
clk_unregister(fsl_dev->pix_clk);
......
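The fsl-dcu (and etnaviv) teardown paths above drop drm_put_dev() in favour of an explicit unregister-then-unref sequence. A hedged sketch of the same remove path (example_remove() and the drvdata layout are assumptions, not code from this pull):

#include <linux/platform_device.h>
#include <drm/drmP.h>

static int example_remove(struct platform_device *pdev)
{
	struct drm_device *drm = platform_get_drvdata(pdev);

	drm_dev_unregister(drm);	/* make the device invisible to userspace */
	drm_dev_unref(drm);		/* then drop the final reference */

	return 0;
}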
......@@ -12,6 +12,8 @@
#ifndef __FSL_DCU_DRM_DRV_H__
#define __FSL_DCU_DRM_DRV_H__
#include <drm/drm_encoder.h>
#include "fsl_dcu_drm_crtc.h"
#include "fsl_dcu_drm_output.h"
#include "fsl_dcu_drm_plane.h"
......
......@@ -44,7 +44,7 @@ static int fsl_dcu_drm_plane_atomic_check(struct drm_plane *plane,
if (!state->fb || !state->crtc)
return 0;
switch (fb->pixel_format) {
switch (fb->format->format) {
case DRM_FORMAT_RGB565:
case DRM_FORMAT_RGB888:
case DRM_FORMAT_XRGB8888:
......@@ -96,7 +96,7 @@ static void fsl_dcu_drm_plane_atomic_update(struct drm_plane *plane,
gem = drm_fb_cma_get_gem_obj(fb, 0);
switch (fb->pixel_format) {
switch (fb->format->format) {
case DRM_FORMAT_RGB565:
bpp = FSL_DCU_RGB565;
break;
......
......@@ -160,10 +160,7 @@ static int fsl_dcu_attach_endpoint(struct fsl_dcu_drm_device *fsl_dev,
if (!bridge)
return -ENODEV;
fsl_dev->encoder.bridge = bridge;
bridge->encoder = &fsl_dev->encoder;
return drm_bridge_attach(fsl_dev->drm, bridge);
return drm_bridge_attach(&fsl_dev->encoder, bridge, NULL);
}
int fsl_dcu_create_outputs(struct fsl_dcu_drm_device *fsl_dev)
......
......@@ -254,7 +254,7 @@ static void psbfb_copyarea_accel(struct fb_info *info,
offset = psbfb->gtt->offset;
stride = fb->pitches[0];
switch (fb->depth) {
switch (fb->format->depth) {
case 8:
src_format = PSB_2D_SRC_332RGB;
dst_format = PSB_2D_DST_332RGB;
......
......@@ -77,7 +77,7 @@ static int psbfb_setcolreg(unsigned regno, unsigned red, unsigned green,
(transp << info->var.transp.offset);
if (regno < 16) {
switch (fb->bits_per_pixel) {
switch (fb->format->cpp[0] * 8) {
case 16:
((uint32_t *) info->pseudo_palette)[regno] = v;
break;
......@@ -244,7 +244,7 @@ static int psb_framebuffer_init(struct drm_device *dev,
if (mode_cmd->pitches[0] & 63)
return -EINVAL;
drm_helper_mode_fill_fb_struct(&fb->base, mode_cmd);
drm_helper_mode_fill_fb_struct(dev, &fb->base, mode_cmd);
fb->gtt = gt;
ret = drm_framebuffer_init(dev, &fb->base, &psb_fb_funcs);
if (ret) {
......@@ -407,7 +407,7 @@ static int psbfb_create(struct psb_fbdev *fbdev,
fbdev->psb_fb_helper.fb = fb;
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
strcpy(info->fix.id, "psbdrmfb");
info->flags = FBINFO_DEFAULT;
......
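drm_helper_mode_fill_fb_struct() now takes the drm_device as its first argument so the helper can resolve the format info while filling in the framebuffer, as the exynos, gma500 and hibmc hunks show. A minimal hedged sketch of a driver init path using the new signature (the example_fb_* names are hypothetical, and a real destroy hook would also free the driver's wrapper and drop its GEM references):

#include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_framebuffer.h>

static void example_fb_destroy(struct drm_framebuffer *fb)
{
	drm_framebuffer_cleanup(fb);
}

static const struct drm_framebuffer_funcs example_fb_funcs = {
	.destroy = example_fb_destroy,
};

static int example_fb_init(struct drm_device *dev,
			   struct drm_framebuffer *fb,
			   const struct drm_mode_fb_cmd2 *mode_cmd)
{
	/* The device is needed so the helper can look up fb->format. */
	drm_helper_mode_fill_fb_struct(dev, fb, mode_cmd);

	return drm_framebuffer_init(dev, fb, &example_fb_funcs);
}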
......@@ -59,7 +59,8 @@ int gma_pipe_set_base(struct drm_crtc *crtc, int x, int y,
struct drm_device *dev = crtc->dev;
struct drm_psb_private *dev_priv = dev->dev_private;
struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
struct psb_framebuffer *psbfb = to_psb_fb(crtc->primary->fb);
struct drm_framebuffer *fb = crtc->primary->fb;
struct psb_framebuffer *psbfb = to_psb_fb(fb);
int pipe = gma_crtc->pipe;
const struct psb_offset *map = &dev_priv->regmap[pipe];
unsigned long start, offset;
......@@ -70,7 +71,7 @@ int gma_pipe_set_base(struct drm_crtc *crtc, int x, int y,
return 0;
/* no fb bound */
if (!crtc->primary->fb) {
if (!fb) {
dev_err(dev->dev, "No FB bound\n");
goto gma_pipe_cleaner;
}
......@@ -81,19 +82,19 @@ int gma_pipe_set_base(struct drm_crtc *crtc, int x, int y,
if (ret < 0)
goto gma_pipe_set_base_exit;
start = psbfb->gtt->offset;
offset = y * crtc->primary->fb->pitches[0] + x * (crtc->primary->fb->bits_per_pixel / 8);
offset = y * fb->pitches[0] + x * fb->format->cpp[0];
REG_WRITE(map->stride, crtc->primary->fb->pitches[0]);
REG_WRITE(map->stride, fb->pitches[0]);
dspcntr = REG_READ(map->cntr);
dspcntr &= ~DISPPLANE_PIXFORMAT_MASK;
switch (crtc->primary->fb->bits_per_pixel) {
switch (fb->format->cpp[0] * 8) {
case 8:
dspcntr |= DISPPLANE_8BPP;
break;
case 16:
if (crtc->primary->fb->depth == 15)
if (fb->format->depth == 15)
dspcntr |= DISPPLANE_15_16BPP;
else
dspcntr |= DISPPLANE_16BPP;
......
......@@ -148,7 +148,7 @@ static int check_fb(struct drm_framebuffer *fb)
if (!fb)
return 0;
switch (fb->bits_per_pixel) {
switch (fb->format->cpp[0] * 8) {
case 8:
case 16:
case 24:
......@@ -165,8 +165,9 @@ static int mdfld__intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
{
struct drm_device *dev = crtc->dev;
struct drm_psb_private *dev_priv = dev->dev_private;
struct drm_framebuffer *fb = crtc->primary->fb;
struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
struct psb_framebuffer *psbfb = to_psb_fb(crtc->primary->fb);
struct psb_framebuffer *psbfb = to_psb_fb(fb);
int pipe = gma_crtc->pipe;
const struct psb_offset *map = &dev_priv->regmap[pipe];
unsigned long start, offset;
......@@ -178,12 +179,12 @@ static int mdfld__intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
dev_dbg(dev->dev, "pipe = 0x%x.\n", pipe);
/* no fb bound */
if (!crtc->primary->fb) {
if (!fb) {
dev_dbg(dev->dev, "No FB bound\n");
return 0;
}
ret = check_fb(crtc->primary->fb);
ret = check_fb(fb);
if (ret)
return ret;
......@@ -196,18 +197,18 @@ static int mdfld__intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
return 0;
start = psbfb->gtt->offset;
offset = y * crtc->primary->fb->pitches[0] + x * (crtc->primary->fb->bits_per_pixel / 8);
offset = y * fb->pitches[0] + x * fb->format->cpp[0];
REG_WRITE(map->stride, crtc->primary->fb->pitches[0]);
REG_WRITE(map->stride, fb->pitches[0]);
dspcntr = REG_READ(map->cntr);
dspcntr &= ~DISPPLANE_PIXFORMAT_MASK;
switch (crtc->primary->fb->bits_per_pixel) {
switch (fb->format->cpp[0] * 8) {
case 8:
dspcntr |= DISPPLANE_8BPP;
break;
case 16:
if (crtc->primary->fb->depth == 15)
if (fb->format->depth == 15)
dspcntr |= DISPPLANE_15_16BPP;
else
dspcntr |= DISPPLANE_16BPP;
......
......@@ -599,7 +599,8 @@ static int oaktrail_pipe_set_base(struct drm_crtc *crtc,
struct drm_device *dev = crtc->dev;
struct drm_psb_private *dev_priv = dev->dev_private;
struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
struct psb_framebuffer *psbfb = to_psb_fb(crtc->primary->fb);
struct drm_framebuffer *fb = crtc->primary->fb;
struct psb_framebuffer *psbfb = to_psb_fb(fb);
int pipe = gma_crtc->pipe;
const struct psb_offset *map = &dev_priv->regmap[pipe];
unsigned long start, offset;
......@@ -608,7 +609,7 @@ static int oaktrail_pipe_set_base(struct drm_crtc *crtc,
int ret = 0;
/* no fb bound */
if (!crtc->primary->fb) {
if (!fb) {
dev_dbg(dev->dev, "No FB bound\n");
return 0;
}
......@@ -617,19 +618,19 @@ static int oaktrail_pipe_set_base(struct drm_crtc *crtc,
return 0;
start = psbfb->gtt->offset;
offset = y * crtc->primary->fb->pitches[0] + x * (crtc->primary->fb->bits_per_pixel / 8);
offset = y * fb->pitches[0] + x * fb->format->cpp[0];
REG_WRITE(map->stride, crtc->primary->fb->pitches[0]);
REG_WRITE(map->stride, fb->pitches[0]);
dspcntr = REG_READ(map->cntr);
dspcntr &= ~DISPPLANE_PIXFORMAT_MASK;
switch (crtc->primary->fb->bits_per_pixel) {
switch (fb->format->cpp[0] * 8) {
case 8:
dspcntr |= DISPPLANE_8BPP;
break;
case 16:
if (crtc->primary->fb->depth == 15)
if (fb->format->depth == 15)
dspcntr |= DISPPLANE_15_16BPP;
else
dspcntr |= DISPPLANE_16BPP;
......
......@@ -23,6 +23,7 @@
#include <linux/i2c-algo-bit.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_encoder.h>
#include <linux/gpio.h>
#include "gma_display.h"
......
......@@ -122,11 +122,11 @@ static void hibmc_plane_atomic_update(struct drm_plane *plane,
writel(gpu_addr, priv->mmio + HIBMC_CRT_FB_ADDRESS);
reg = state->fb->width * (state->fb->bits_per_pixel / 8);
reg = state->fb->width * (state->fb->format->cpp[0]);
/* now line_pad is 16 */
reg = PADDING(16, reg);
line_l = state->fb->width * state->fb->bits_per_pixel / 8;
line_l = state->fb->width * state->fb->format->cpp[0];
line_l = PADDING(16, line_l);
writel(HIBMC_FIELD(HIBMC_CRT_FB_WIDTH_WIDTH, reg) |
HIBMC_FIELD(HIBMC_CRT_FB_WIDTH_OFFS, line_l),
......@@ -136,7 +136,7 @@ static void hibmc_plane_atomic_update(struct drm_plane *plane,
reg = readl(priv->mmio + HIBMC_CRT_DISP_CTL);
reg &= ~HIBMC_CRT_DISP_CTL_FORMAT_MASK;
reg |= HIBMC_FIELD(HIBMC_CRT_DISP_CTL_FORMAT,
state->fb->bits_per_pixel / 16);
state->fb->format->cpp[0] * 8 / 16);
writel(reg, priv->mmio + HIBMC_CRT_DISP_CTL);
}
......
......@@ -135,7 +135,7 @@ static int hibmc_drm_fb_create(struct drm_fb_helper *helper,
info->fbops = &hibmc_drm_fb_ops;
drm_fb_helper_fill_fix(info, hi_fbdev->fb->fb.pitches[0],
hi_fbdev->fb->fb.depth);
hi_fbdev->fb->fb.format->depth);
drm_fb_helper_fill_var(info, &priv->fbdev->helper, sizes->fb_width,
sizes->fb_height);
......
......@@ -512,7 +512,7 @@ hibmc_framebuffer_init(struct drm_device *dev,
return ERR_PTR(-ENOMEM);
}
drm_helper_mode_fill_fb_struct(&hibmc_fb->fb, mode_cmd);
drm_helper_mode_fill_fb_struct(dev, &hibmc_fb->fb, mode_cmd);
hibmc_fb->obj = obj;
ret = drm_framebuffer_init(dev, &hibmc_fb->fb, &hibmc_fb_funcs);
if (ret) {
......
......@@ -709,10 +709,7 @@ static int dsi_bridge_init(struct drm_device *dev, struct dw_dsi *dsi)
int ret;
/* associate the bridge to dsi encoder */
encoder->bridge = bridge;
bridge->encoder = encoder;
ret = drm_bridge_attach(dev, bridge);
ret = drm_bridge_attach(encoder, bridge, NULL);
if (ret) {
DRM_ERROR("failed to attach external bridge\n");
return ret;
......
......@@ -617,7 +617,7 @@ static void ade_rdma_set(void __iomem *base, struct drm_framebuffer *fb,
ch + 1, y, in_h, stride, (u32)obj->paddr);
DRM_DEBUG_DRIVER("addr=0x%x, fb:%dx%d, pixel_format=%d(%s)\n",
addr, fb->width, fb->height, fmt,
drm_get_format_name(fb->pixel_format, &format_name));
drm_get_format_name(fb->format->format, &format_name));
/* get reg offset */
reg_ctrl = RD_CH_CTRL(ch);
......@@ -773,7 +773,7 @@ static void ade_update_channel(struct ade_plane *aplane,
{
struct ade_hw_ctx *ctx = aplane->ctx;
void __iomem *base = ctx->base;
u32 fmt = ade_get_format(fb->pixel_format);
u32 fmt = ade_get_format(fb->format->format);
u32 ch = aplane->ch;
u32 in_w;
u32 in_h;
......@@ -835,7 +835,7 @@ static int ade_plane_atomic_check(struct drm_plane *plane,
if (!crtc || !fb)
return 0;
fmt = ade_get_format(fb->pixel_format);
fmt = ade_get_format(fb->format->format);
if (fmt == ADE_FORMAT_UNSUPPORT)
return -EINVAL;
......@@ -973,9 +973,9 @@ static int ade_dts_parse(struct platform_device *pdev, struct ade_hw_ctx *ctx)
return 0;
}
static int ade_drm_init(struct drm_device *dev)
static int ade_drm_init(struct platform_device *pdev)
{
struct platform_device *pdev = dev->platformdev;
struct drm_device *dev = platform_get_drvdata(pdev);
struct ade_data *ade;
struct ade_hw_ctx *ctx;
struct ade_crtc *acrtc;
......@@ -1034,13 +1034,8 @@ static int ade_drm_init(struct drm_device *dev)
return 0;
}
static void ade_drm_cleanup(struct drm_device *dev)
static void ade_drm_cleanup(struct platform_device *pdev)
{
struct platform_device *pdev = dev->platformdev;
struct ade_data *ade = platform_get_drvdata(pdev);
struct drm_crtc *crtc = &ade->acrtc.base;
drm_crtc_cleanup(crtc);
}
const struct kirin_dc_ops ade_dc_ops = {
......
......@@ -42,7 +42,7 @@ static int kirin_drm_kms_cleanup(struct drm_device *dev)
#endif
drm_kms_helper_poll_fini(dev);
drm_vblank_cleanup(dev);
dc_ops->cleanup(dev);
dc_ops->cleanup(to_platform_device(dev->dev));
drm_mode_config_cleanup(dev);
devm_kfree(dev->dev, priv);
dev->dev_private = NULL;
......@@ -104,7 +104,7 @@ static int kirin_drm_kms_init(struct drm_device *dev)
kirin_drm_mode_config_init(dev);
/* display controller init */
ret = dc_ops->init(dev);
ret = dc_ops->init(to_platform_device(dev->dev));
if (ret)
goto err_mode_config_cleanup;
......@@ -138,7 +138,7 @@ static int kirin_drm_kms_init(struct drm_device *dev)
err_unbind_all:
component_unbind_all(dev->dev, dev);
err_dc_cleanup:
dc_ops->cleanup(dev);
dc_ops->cleanup(to_platform_device(dev->dev));
err_mode_config_cleanup:
drm_mode_config_cleanup(dev);
devm_kfree(dev->dev, priv);
......@@ -209,8 +209,6 @@ static int kirin_drm_bind(struct device *dev)
if (IS_ERR(drm_dev))
return PTR_ERR(drm_dev);
drm_dev->platformdev = to_platform_device(dev);
ret = kirin_drm_kms_init(drm_dev);
if (ret)
goto err_drm_dev_unref;
......
......@@ -15,8 +15,8 @@
/* display controller init/cleanup ops */
struct kirin_dc_ops {
int (*init)(struct drm_device *dev);
void (*cleanup)(struct drm_device *dev);
int (*init)(struct platform_device *pdev);
void (*cleanup)(struct platform_device *pdev);
};
struct kirin_drm_private {
......
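With the kirin driver no longer setting drm_device->platformdev, the reworked kirin_dc_ops above take the platform device directly and the KMS code recovers it from the struct device embedded in the drm_device. A hedged sketch of that call pattern (the example_kms_* wrappers are hypothetical; kirin_dc_ops is the structure defined just above):

#include <linux/platform_device.h>
#include <drm/drmP.h>

static int example_kms_init(struct drm_device *dev,
			    const struct kirin_dc_ops *dc_ops)
{
	/* platformdev is no longer set; derive the pdev from dev->dev. */
	return dc_ops->init(to_platform_device(dev->dev));
}

static void example_kms_cleanup(struct drm_device *dev,
				const struct kirin_dc_ops *dc_ops)
{
	dc_ops->cleanup(to_platform_device(dev->dev));
}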
......@@ -1026,7 +1026,7 @@ struct intel_fbc {
struct {
u64 ilk_ggtt_offset;
uint32_t pixel_format;
const struct drm_format_info *format;
unsigned int stride;
int fence_reg;
unsigned int tiling_mode;
......@@ -1042,7 +1042,7 @@ struct intel_fbc {
struct {
u64 ggtt_offset;
uint32_t pixel_format;
const struct drm_format_info *format;
unsigned int stride;
int fence_reg;
} fb;
......