Commit 6c56e8ad authored by Daniel Vetter

Merge tag 'drm-misc-next-2019-12-16' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v5.6:

UAPI Changes:
- Add support for DMA-BUF HEAPS.

Cross-subsystem Changes:
- MIPI DSI definition updates, pulled into drm-intel as well.
- Add lockdep annotations for dma_resv vs mmap_sem and fs_reclaim.
- Remove support for dma-buf kmap/kunmap.
- Constify fb_ops in all fbdev drivers, including drm drivers and drm-core, and media as well.

Core Changes:
- Small cleanups to ttm.
- Fix SCDC definition.
- Assorted cleanups to core.
- Add todo to remove load/unload hooks, and use generic fbdev emulation.
- Assorted documentation updates.
- Use blocking ww lock in ttm fault handler.
- Remove drm_fb_helper_fbdev_setup/teardown.
- Warning fixes with W=1 for atomic.
- Use drm_debug_enabled() instead of drm_debug flag testing in various drivers.
- Fall back to non-tiled mode in fbdev emulation when not all tiles are present. (Later reverted.)
- Various kconfig indentation fixes in core and drivers.
- Fix freeing transactions in dp-mst correctly.
- Sean Paul is stepping down as core maintainer. :-(
- Add lockdep annotations for atomic locks vs dma-resv.
- Prevent use-after-free for a bad job in drm_scheduler.
- Fill out all block sizes in the P01x and P210 definitions.
- Avoid division by zero in drm/rect, and fix bounds.
- Add drm/rect selftests.
- Add aspect ratio and alternate clocks for HDMI 4k modes.
- Add todo for drm_framebuffer_funcs and fb_create cleanup.
- Drop DRM_AUTH for prime import/export ioctls.
- Clear DP-MST payload id tables downstream when initializing.
- Fix for DSC throughput definition.
- Add extra FEC definitions.
- Fix fake offset in drm_gem_object_funcs.mmap.
- Stop using encoder->bridge in core directly.
- Handle bridge chaining slightly better.
- Add backlight support to drm/panel, and use it in many panel drivers.
- Increase max number of y420 modes from 128 to 256, as preparation to add the new modes.

Driver Changes:
- Small fixes all over.
- Fix documentation in vkms.
- Fix mmap_sem vs dma_resv in nouveau.
- Small cleanup in komeda.
- Add page flip support in gma500 for psb/cdv.
- Add ddc symlink in the connector sysfs directory for many drivers.
- Add support for analogix anx6345, and fix small bugs in it.
- Add atomic modesetting support to ast.
- Fix radeon fault handler VMA race.
- Switch udl to use generic shmem helpers.
- Unconditional vblank handling for mcde.
- Miscellaneous fixes to mcde.
- Tweak debug output from komeda using debugfs.
- Add gamma and color transform support to komeda for DOU-IPS.
- Add support for sony acx424AKP panel.
- Various small cleanups to gma500.
- Use generic fbdev emulation in udl, and replace udl_framebuffer with generic implementation.
- Add support for Logic PD Type 28 panel.
- Use drm_panel_* wrapper functions in exynos/tegra/msm.
- Add devicetree bindings for generic DSI panels.
- Don't include drm_pci.h directly in many drivers.
- Add support for begin/end_cpu_access in udmabuf.
- Stop using drm_get_pci_dev in gma500 and mgag200.
- Fixes to UDL damage handling, and use dma_buf_begin/end_cpu_access.
- Add devfreq thermal support to panfrost.
- Fix hotplug with daisy chained monitors by removing VCPI when disabling topology manager.
- meson: Add support for OSD1 plane AFBC commit.
- Stop displaying garbage when toggling ast primary plane on/off.
- More cleanups and fixes to UDL.
- Add D32 support to komeda.
- Remove global copy of drm_dev in gma500.
- Add support for Boe Himax8279d MIPI-DSI LCD panel.
- Add support for ingenic JZ4770 panel.
- Small null pointer dereference fix in ingenic.
- Remove support for the special tfp410 driver, as there is a generic way to do it.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/ba73535a-9334-5302-2e1f-5208bd7390bd@linux.intel.com
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/dsi-controller.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Common Properties for DSI Display Panels

maintainers:
  - Linus Walleij <linus.walleij@linaro.org>

description: |
  This document defines device tree properties common to DSI, Display
  Serial Interface controllers and attached panels. It doesn't constitute
  a device tree binding specification by itself but is meant to be referenced
  by device tree bindings.

  When referenced from panel device tree bindings the properties defined in
  this document are defined as follows. The panel device tree bindings are
  responsible for defining whether each property is required or optional.

  Notice: this binding concerns DSI panels connected directly to a master
  without any intermediate port graph to the panel. Each DSI master
  can control one to four virtual channels to one panel. Each virtual
  channel should have a node "panel" for their virtual channel with their
  reg-property set to the virtual channel number, usually there is just
  one virtual channel, number 0.

properties:
  $nodename:
    pattern: "^dsi-controller(@.*)?$"

  "#address-cells":
    const: 1

  "#size-cells":
    const: 0

patternProperties:
  "^panel@[0-3]$":
    description: Panels connected to the DSI link
    type: object

    properties:
      reg:
        minimum: 0
        maximum: 3
        description:
          The virtual channel number of a DSI peripheral. Must be in the range
          from 0 to 3, as DSI uses a 2-bit addressing scheme. Some DSI
          peripherals respond to more than a single virtual channel. In that
          case the reg property can take multiple entries, one for each virtual
          channel that the peripheral responds to.

      clock-master:
        type: boolean
        description:
          Should be enabled if the host is being used in conjunction with
          another DSI host to drive the same peripheral. Hardware supporting
          such a configuration generally requires the data on both the busses
          to be driven by the same clock. Only the DSI host instance
          controlling this clock should contain this property.

      enforce-video-mode:
        type: boolean
        description:
          The best option is usually to run a panel in command mode, as this
          gives better control over the panel hardware. However for different
          reasons like broken hardware, missing features or testing, it may be
          useful to be able to force a command mode-capable panel into video
          mode.

    required:
      - reg

examples:
  - |
    #include <dt-bindings/gpio/gpio.h>
    dsi-controller@a0351000 {
        reg = <0xa0351000 0x1000>;
        #address-cells = <1>;
        #size-cells = <0>;
        panel@0 {
            compatible = "sony,acx424akp";
            reg = <0>;
            vddi-supply = <&ab8500_ldo_aux1_reg>;
            reset-gpios = <&gpio2 1 GPIO_ACTIVE_LOW>;
        };
    };
...
@@ -4,6 +4,7 @@ Required properties:
 - compatible: one of:
   * ingenic,jz4740-lcd
   * ingenic,jz4725b-lcd
+  * ingenic,jz4770-lcd
 - reg: LCD registers location and length
 - clocks: LCD pixclock and device clock specifiers.
   The device clock is only required on the JZ4740.
...
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/logicpd,type28.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Logic PD Type 28 4.3" WQVGA TFT LCD panel

maintainers:
  - Adam Ford <aford173@gmail.com>

allOf:
  - $ref: panel-common.yaml#

properties:
  compatible:
    const: logicpd,type28

  power-supply: true
  enable-gpios: true
  backlight: true
  port: true

required:
  - compatible

additionalProperties: false

examples:
  - |
    lcd0: display {
        compatible = "logicpd,type28";
        enable-gpios = <&gpio5 27 0>;
        backlight = <&backlight>;
        port {
            lcd_in: endpoint {
                remote-endpoint = <&dpi_out>;
            };
        };
    };
...
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/sony,acx424akp.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Sony ACX424AKP 4" 480x864 AMOLED panel

maintainers:
  - Linus Walleij <linus.walleij@linaro.org>

allOf:
  - $ref: panel-common.yaml#

properties:
  compatible:
    const: sony,acx424akp
  reg: true
  reset-gpios: true
  vddi-supply:
    description: regulator that supplies the vddi voltage
  enforce-video-mode: true

required:
  - compatible
  - reg
  - reset-gpios

additionalProperties: false

examples:
  - |
    #include <dt-bindings/gpio/gpio.h>
    dsi-controller@a0351000 {
        compatible = "ste,mcde-dsi";
        reg = <0xa0351000 0x1000>;
        #address-cells = <1>;
        #size-cells = <0>;
        panel@0 {
            compatible = "sony,acx424akp";
            reg = <0>;
            vddi-supply = <&foo>;
            reset-gpios = <&foo_gpio 0 GPIO_ACTIVE_LOW>;
        };
    };
...
Device-Tree bindings for tilcdc DRM TFP410 output driver

Required properties:
 - compatible: value should be "ti,tilcdc,tfp410".
 - i2c: the phandle for the i2c device to use for DDC

Recommended properties:
 - pinctrl-names, pinctrl-0: the pincontrol settings to configure
   muxing properly for pins that connect to TFP410 device
 - powerdn-gpio: the powerdown GPIO, pulled low to power down the
   TFP410 device (for DPMS_OFF)

Example:

	dvicape {
		compatible = "ti,tilcdc,tfp410";
		i2c = <&i2c2>;
		pinctrl-names = "default";
		pinctrl-0 = <&bone_dvi_cape_dvi_00A1_pins>;
		powerdn-gpio = <&gpio2 31 0>;
	};
@@ -24,9 +24,9 @@ Driver Initialization
 At the core of every DRM driver is a :c:type:`struct drm_driver
 <drm_driver>` structure. Drivers typically statically initialize
 a drm_driver structure, and then pass it to
-:c:func:`drm_dev_alloc()` to allocate a device instance. After the
+drm_dev_alloc() to allocate a device instance. After the
 device instance is fully initialized it can be registered (which makes
-it accessible from userspace) using :c:func:`drm_dev_register()`.
+it accessible from userspace) using drm_dev_register().

 The :c:type:`struct drm_driver <drm_driver>` structure
 contains static information that describes the driver and features it
...
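The allocate-initialize-register flow described in this hunk can be sketched as follows. This is a minimal kernel-code sketch under stated assumptions, not a compilable driver: `my_driver`, `my_probe` and the elided setup are hypothetical, while `drm_dev_alloc()`, `drm_dev_register()` and `drm_dev_put()` are the functions the text names.

```c
/* Hypothetical driver skeleton: allocate a device instance, finish all
 * initialization, and only then register it with userspace. */
static struct drm_driver my_driver = {
	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
	/* ... fops, name, desc, etc. ... */
};

static int my_probe(struct platform_device *pdev)
{
	struct drm_device *drm;
	int ret;

	drm = drm_dev_alloc(&my_driver, &pdev->dev);	/* returns ERR_PTR on failure */
	if (IS_ERR(drm))
		return PTR_ERR(drm);

	/* ... device-specific and modeset initialization goes here ... */

	ret = drm_dev_register(drm, 0);	/* makes the device visible to userspace */
	if (ret)
		drm_dev_put(drm);	/* drop the allocation reference on failure */
	return ret;
}
```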
@@ -3,7 +3,7 @@ Kernel Mode Setting (KMS)
 =========================

 Drivers must initialize the mode setting core by calling
-:c:func:`drm_mode_config_init()` on the DRM device. The function
+drm_mode_config_init() on the DRM device. The function
 initializes the :c:type:`struct drm_device <drm_device>`
 mode_config field and never fails. Once done, mode configuration must
 be setup by initializing the following fields.
@@ -181,8 +181,7 @@ Setting`_). The somewhat surprising part here is that properties are not
 directly instantiated on each object, but free-standing mode objects themselves,
 represented by :c:type:`struct drm_property <drm_property>`, which only specify
 the type and value range of a property. Any given property can be attached
-multiple times to different objects using :c:func:`drm_object_attach_property()
-<drm_object_attach_property>`.
+multiple times to different objects using drm_object_attach_property().

 .. kernel-doc:: include/drm/drm_mode_object.h
    :internal:
@@ -260,7 +259,8 @@ Taken all together there's two consequences for the atomic design:
   drm_connector_state <drm_connector_state>` for connectors. These are the only
   objects with userspace-visible and settable state. For internal state drivers
   can subclass these structures through embedding, or add entirely new state
-  structures for their globally shared hardware functions.
+  structures for their globally shared hardware functions, see :c:type:`struct
+  drm_private_state <drm_private_state>`.

 - An atomic update is assembled and validated as an entirely free-standing pile
   of structures within the :c:type:`drm_atomic_state <drm_atomic_state>`
@@ -269,6 +269,14 @@ Taken all together there's two consequences for the atomic design:
   to the driver and modeset objects. This way rolling back an update boils down
   to releasing memory and unreferencing objects like framebuffers.

+Locking of atomic state structures is internally done using :c:type:`struct
+drm_modeset_lock <drm_modeset_lock>`. As a general rule the locking shouldn't be
+exposed to drivers, instead the right locks should be automatically acquired by
+any function that duplicates or peeks into a state, like e.g.
+drm_atomic_get_crtc_state(). Locking only protects the software data
+structure, ordering of committing state changes to hardware is sequenced using
+:c:type:`struct drm_crtc_commit <drm_crtc_commit>`.
+
 Read on in this chapter, and also in :ref:`drm_atomic_helper` for more detailed
 coverage of specific topics.
@@ -479,6 +487,9 @@ Color Management Properties
 .. kernel-doc:: drivers/gpu/drm/drm_color_mgmt.c
    :export:

+.. kernel-doc:: include/drm/drm_color_mgmt.h
+   :internal:
+
 Tile Group Property
 -------------------
...
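The implicit-locking rule added in this hunk — locks are acquired by the functions that peek into a state, not by drivers — can be sketched with the function the text names, drm_atomic_get_crtc_state(). A minimal sketch, not compilable outside the kernel; `my_plane_atomic_check` and the validation step are hypothetical.

```c
/* Hypothetical plane check callback: the driver never touches a
 * drm_modeset_lock directly; fetching the CRTC state acquires it. */
static int my_plane_atomic_check(struct drm_plane *plane,
				 struct drm_plane_state *state)
{
	struct drm_crtc_state *crtc_state;

	if (!state->crtc)
		return 0;	/* plane is being disabled, nothing to check */

	/* Acquires the CRTC's modeset lock under the hood if it isn't
	 * already held by this atomic update. */
	crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc);
	if (IS_ERR(crtc_state))
		return PTR_ERR(crtc_state);	/* e.g. -EDEADLK for lock backoff */

	/* ... validate the plane against crtc_state here ... */
	return 0;
}
```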
@@ -149,19 +149,19 @@ struct :c:type:`struct drm_gem_object <drm_gem_object>`.
 To create a GEM object, a driver allocates memory for an instance of its
 specific GEM object type and initializes the embedded struct
 :c:type:`struct drm_gem_object <drm_gem_object>` with a call
-to :c:func:`drm_gem_object_init()`. The function takes a pointer
+to drm_gem_object_init(). The function takes a pointer
 to the DRM device, a pointer to the GEM object and the buffer object
 size in bytes.

 GEM uses shmem to allocate anonymous pageable memory.
-:c:func:`drm_gem_object_init()` will create an shmfs file of the
+drm_gem_object_init() will create an shmfs file of the
 requested size and store it into the struct :c:type:`struct
 drm_gem_object <drm_gem_object>` filp field. The memory is
 used as either main storage for the object when the graphics hardware
 uses system memory directly or as a backing store otherwise.

 Drivers are responsible for the actual physical pages allocation by
-calling :c:func:`shmem_read_mapping_page_gfp()` for each page.
+calling shmem_read_mapping_page_gfp() for each page.
 Note that they can decide to allocate pages when initializing the GEM
 object, or to delay allocation until the memory is needed (for instance
 when a page fault occurs as a result of a userspace memory access or
@@ -170,20 +170,18 @@ when the driver needs to start a DMA transfer involving the memory).
 Anonymous pageable memory allocation is not always desired, for instance
 when the hardware requires physically contiguous system memory as is
 often the case in embedded devices. Drivers can create GEM objects with
-no shmfs backing (called private GEM objects) by initializing them with
-a call to :c:func:`drm_gem_private_object_init()` instead of
-:c:func:`drm_gem_object_init()`. Storage for private GEM objects
-must be managed by drivers.
+no shmfs backing (called private GEM objects) by initializing them with a call
+to drm_gem_private_object_init() instead of drm_gem_object_init(). Storage for
+private GEM objects must be managed by drivers.

 GEM Objects Lifetime
 --------------------

 All GEM objects are reference-counted by the GEM core. References can be
-acquired and release by :c:func:`calling drm_gem_object_get()` and
-:c:func:`drm_gem_object_put()` respectively. The caller must hold the
-:c:type:`struct drm_device <drm_device>` struct_mutex lock when calling
-:c:func:`drm_gem_object_get()`. As a convenience, GEM provides
-:c:func:`drm_gem_object_put_unlocked()` functions that can be called without
+acquired and released by calling drm_gem_object_get() and drm_gem_object_put()
+respectively. The caller must hold the :c:type:`struct drm_device <drm_device>`
+struct_mutex lock when calling drm_gem_object_get(). As a convenience, GEM
+provides drm_gem_object_put_unlocked() functions that can be called without
 holding the lock.

 When the last reference to a GEM object is released the GEM core calls
@@ -194,7 +192,7 @@ free the GEM object and all associated resources.
 void (\*gem_free_object) (struct drm_gem_object \*obj); Drivers are
 responsible for freeing all GEM object resources. This includes the
 resources created by the GEM core, which need to be released with
-:c:func:`drm_gem_object_release()`.
+drm_gem_object_release().

 GEM Objects Naming
 ------------------
@@ -210,13 +208,11 @@ to the GEM object in other standard or driver-specific ioctls. Closing a
 DRM file handle frees all its GEM handles and dereferences the
 associated GEM objects.

-To create a handle for a GEM object drivers call
-:c:func:`drm_gem_handle_create()`. The function takes a pointer
-to the DRM file and the GEM object and returns a locally unique handle.
-When the handle is no longer needed drivers delete it with a call to
-:c:func:`drm_gem_handle_delete()`. Finally the GEM object
-associated with a handle can be retrieved by a call to
-:c:func:`drm_gem_object_lookup()`.
+To create a handle for a GEM object drivers call drm_gem_handle_create(). The
+function takes a pointer to the DRM file and the GEM object and returns a
+locally unique handle. When the handle is no longer needed drivers delete it
+with a call to drm_gem_handle_delete(). Finally the GEM object associated with a
+handle can be retrieved by a call to drm_gem_object_lookup().

 Handles don't take ownership of GEM objects, they only take a reference
 to the object that will be dropped when the handle is destroyed. To
@@ -258,7 +254,7 @@ The mmap system call can't be used directly to map GEM objects, as they
 don't have their own file handle. Two alternative methods currently
 co-exist to map GEM objects to userspace. The first method uses a
 driver-specific ioctl to perform the mapping operation, calling
-:c:func:`do_mmap()` under the hood. This is often considered
+do_mmap() under the hood. This is often considered
 dubious, seems to be discouraged for new GEM-enabled drivers, and will
 thus not be described here.
@@ -267,23 +263,22 @@ The second method uses the mmap system call on the DRM file handle. void
 offset); DRM identifies the GEM object to be mapped by a fake offset
 passed through the mmap offset argument. Prior to being mapped, a GEM
 object must thus be associated with a fake offset. To do so, drivers
-must call :c:func:`drm_gem_create_mmap_offset()` on the object.
+must call drm_gem_create_mmap_offset() on the object.

 Once allocated, the fake offset value must be passed to the application
 in a driver-specific way and can then be used as the mmap offset
 argument.

-The GEM core provides a helper method :c:func:`drm_gem_mmap()` to
+The GEM core provides a helper method drm_gem_mmap() to
 handle object mapping. The method can be set directly as the mmap file
 operation handler. It will look up the GEM object based on the offset
 value and set the VMA operations to the :c:type:`struct drm_driver
-<drm_driver>` gem_vm_ops field. Note that
-:c:func:`drm_gem_mmap()` doesn't map memory to userspace, but
-relies on the driver-provided fault handler to map pages individually.
+<drm_driver>` gem_vm_ops field. Note that drm_gem_mmap() doesn't map memory to
+userspace, but relies on the driver-provided fault handler to map pages
+individually.

-To use :c:func:`drm_gem_mmap()`, drivers must fill the struct
-:c:type:`struct drm_driver <drm_driver>` gem_vm_ops field
-with a pointer to VM operations.
+To use drm_gem_mmap(), drivers must fill the struct :c:type:`struct drm_driver
+<drm_driver>` gem_vm_ops field with a pointer to VM operations.

 The VM operations is a :c:type:`struct vm_operations_struct <vm_operations_struct>`
 made up of several fields, the more interesting ones being:
@@ -298,9 +293,8 @@ made up of several fields, the more interesting ones being:
 The open and close operations must update the GEM object reference
-count. Drivers can use the :c:func:`drm_gem_vm_open()` and
-:c:func:`drm_gem_vm_close()` helper functions directly as open
-and close handlers.
+count. Drivers can use the drm_gem_vm_open() and drm_gem_vm_close() helper
+functions directly as open and close handlers.

 The fault operation handler is responsible for mapping individual pages
 to userspace when a page fault occurs. Depending on the memory
@@ -312,12 +306,12 @@ Drivers that want to map the GEM object upfront instead of handling page
 faults can implement their own mmap file operation handler.

 For platforms without MMU the GEM core provides a helper method
-:c:func:`drm_gem_cma_get_unmapped_area`. The mmap() routines will call
-this to get a proposed address for the mapping.
+drm_gem_cma_get_unmapped_area(). The mmap() routines will call this to get a
+proposed address for the mapping.

-To use :c:func:`drm_gem_cma_get_unmapped_area`, drivers must fill the
-struct :c:type:`struct file_operations <file_operations>` get_unmapped_area
-field with a pointer on :c:func:`drm_gem_cma_get_unmapped_area`.
+To use drm_gem_cma_get_unmapped_area(), drivers must fill the struct
+:c:type:`struct file_operations <file_operations>` get_unmapped_area field with
+a pointer on drm_gem_cma_get_unmapped_area().

 More detailed information about get_unmapped_area can be found in
 Documentation/nommu-mmap.txt
...
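The GEM creation and handle flow this section describes can be sketched in one function. A hedged kernel-code sketch, not compilable on its own: `my_obj` and `my_create` are hypothetical, while drm_gem_object_init(), drm_gem_handle_create() and drm_gem_object_put_unlocked() are the helpers the text names.

```c
/* Hypothetical create path: embed drm_gem_object in a driver object,
 * give it shmem backing, then expose it to userspace via a handle. */
struct my_obj {
	struct drm_gem_object base;
	/* ... driver-private fields ... */
};

static int my_create(struct drm_device *dev, struct drm_file *file,
		     size_t size, u32 *handle)
{
	struct my_obj *obj;
	int ret;

	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
	if (!obj)
		return -ENOMEM;

	/* Creates the shmfs backing file of the requested size. */
	ret = drm_gem_object_init(dev, &obj->base, size);
	if (ret) {
		kfree(obj);
		return ret;
	}

	/* The handle takes its own reference to the object... */
	ret = drm_gem_handle_create(file, &obj->base, handle);

	/* ...so drop the creation reference in either case; on handle
	 * failure this frees the object. */
	drm_gem_object_put_unlocked(&obj->base);
	return ret;
}
```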
@@ -254,36 +254,45 @@ Validating changes with IGT
 There's a collection of tests that aims to cover the whole functionality of
 DRM drivers and that can be used to check that changes to DRM drivers or the
 core don't regress existing functionality. This test suite is called IGT and
-its code can be found in https://cgit.freedesktop.org/drm/igt-gpu-tools/.
+its code and instructions to build and run can be found in
+https://gitlab.freedesktop.org/drm/igt-gpu-tools/.

-To build IGT, start by installing its build dependencies. In Debian-based
-systems::
-
-  # apt-get build-dep intel-gpu-tools
-
-And in Fedora-based systems::
-
-  # dnf builddep intel-gpu-tools
-
-Then clone the repository::
-
-  $ git clone git://anongit.freedesktop.org/drm/igt-gpu-tools
-
-Configure the build system and start the build::
-
-  $ cd igt-gpu-tools && ./autogen.sh && make -j6
-
-Download the piglit dependency::
-
-  $ ./scripts/run-tests.sh -d
-
-And run the tests::
-
-  $ ./scripts/run-tests.sh -t kms -t core -s
-
-run-tests.sh is a wrapper around piglit that will execute the tests matching
-the -t options. A report in HTML format will be available in
-./results/html/index.html. Results can be compared with piglit.
+Using VKMS to test DRM API
+--------------------------
+
+VKMS is a software-only model of a KMS driver that is useful for testing
+and for running compositors. VKMS aims to enable a virtual display without
+the need for a hardware display capability. These characteristics make VKMS
+a good tool for validating DRM core behavior and also for supporting
+compositor developers. VKMS makes it possible to test DRM functions in a
+virtual machine without a display, simplifying the validation of some of the
+core changes.
+
+To validate changes to the DRM API with VKMS, start by setting up the kernel:
+make sure to enable the VKMS module, compile the kernel with VKMS enabled and
+install it in the target machine. VKMS can be run in a virtual machine
+(QEMU, virtme or similar). KVM with a minimum of 1GB of RAM and four cores
+is recommended.
+
+It's possible to run the IGT tests in a VM in two ways:
+
+1. Use IGT inside the VM.
+2. Use IGT from the host machine and write the results in a shared directory.
+
+The following example runs the IGT tests from a VM with a directory shared
+with the host machine, using virtme::
+
+  $ virtme-run --rwdir /path/for/shared_dir --kdir=path/for/kernel/directory --mods=auto
+
+Run the IGT tests in the guest machine, for example the 'kms_flip' tests::
+
+  $ /path/for/igt-gpu-tools/scripts/run-tests.sh -p -s -t "kms_flip.*" -v
+
+In this example, instead of building igt_runner, Piglit is used (-p option);
+an HTML summary of the test results is created and saved in the folder
+"igt-gpu-tools/results"; only the IGT tests matching the -t option are
+executed.

 Display CRC Support
 -------------------
...
...@@ -171,23 +171,40 @@ Contact: Maintainer of the driver you plan to convert ...@@ -171,23 +171,40 @@ Contact: Maintainer of the driver you plan to convert
Level: Intermediate

Convert drivers to use drm_fbdev_generic_setup()
------------------------------------------------

Most drivers can use drm_fbdev_generic_setup(). Drivers have to implement
atomic modesetting and GEM vmap support. The current generic fbdev emulation
expects the framebuffer in system memory (or system-like memory).

Contact: Maintainer of the driver you plan to convert

Level: Intermediate

drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
-----------------------------------------------------------------

A lot more drivers could be switched over to the drm_gem_framebuffer helpers.
Various hold-ups:

- Need to switch over to the generic dirty tracking code using
  drm_atomic_helper_dirtyfb first (e.g. qxl).

- Need to switch to drm_fbdev_generic_setup(), otherwise a lot of the custom fb
  setup code can't be deleted.

- Many drivers wrap drm_gem_fb_create() only to check for valid formats. For
  atomic drivers we could check for valid formats by calling
  drm_plane_check_pixel_format() against all planes, and pass if any plane
  supports the format. For non-atomic drivers that's not possible, since the
  format list for the primary plane is fake and we'd therefore reject valid
  formats.

- Many drivers subclass drm_framebuffer, so we'd need an embedding-compatible
  version of the various drm_gem_fb_create functions. Maybe called
  drm_gem_fb_create/_with_dirty/_with_funcs as needed.

Contact: Daniel Vetter

Level: Intermediate
...@@ -328,8 +345,8 @@ drm_fb_helper tasks
these igt tests need to be fixed: kms_fbcon_fbt@psr and
kms_fbcon_fbt@psr-suspend.
- The max connector argument for drm_fb_helper_init() isn't used anymore and
  can be removed.
- The helper doesn't keep an array of connectors anymore so these can be
  removed: drm_fb_helper_single_add_all_connectors(),
...@@ -351,6 +368,23 @@ connector register/unregister fixes
Level: Intermediate
Remove load/unload callbacks from all non-DRIVER_LEGACY drivers
---------------------------------------------------------------
The load/unload callbacks in struct &drm_driver are very much midlayers, plus
for historical reasons they get the ordering wrong (and we can't fix that)
between setting up the &drm_driver structure and calling drm_dev_register().
- Rework drivers to no longer use the load/unload callbacks, directly coding the
load/unload sequence into the driver's probe function.
- Once all non-DRIVER_LEGACY drivers are converted, disallow the load/unload
callbacks for all modern drivers.
Contact: Daniel Vetter
Level: Intermediate
Core refactorings
=================
......
...@@ -4973,6 +4973,24 @@ F: Documentation/driver-api/dma-buf.rst
K: dma_(buf|fence|resv)
T: git git://anongit.freedesktop.org/drm/drm-misc
DMA-BUF HEAPS FRAMEWORK
M: Sumit Semwal <sumit.semwal@linaro.org>
R: Andrew F. Davis <afd@ti.com>
R: Benjamin Gaignard <benjamin.gaignard@linaro.org>
R: Liam Mark <lmark@codeaurora.org>
R: Laura Abbott <labbott@redhat.com>
R: Brian Starkey <Brian.Starkey@arm.com>
R: John Stultz <john.stultz@linaro.org>
S: Maintained
L: linux-media@vger.kernel.org
L: dri-devel@lists.freedesktop.org
L: linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
F: include/uapi/linux/dma-heap.h
F: include/linux/dma-heap.h
F: drivers/dma-buf/dma-heap.c
F: drivers/dma-buf/heaps/*
T: git git://anongit.freedesktop.org/drm/drm-misc
DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
M: Vinod Koul <vkoul@kernel.org>
L: dmaengine@vger.kernel.org
...@@ -5178,6 +5196,12 @@ T: git git://anongit.freedesktop.org/drm/drm-misc
S: Maintained
F: drivers/gpu/drm/bochs/
DRM DRIVER FOR BOE HIMAX8279D PANELS
M: Jerry Han <hanxu5@huaqin.corp-partner.google.com>
S: Maintained
F: drivers/gpu/drm/panel/panel-boe-himax8279d.c
F: Documentation/devicetree/bindings/display/panel/boe,himax8279d.txt
DRM DRIVER FOR FARADAY TVE200 TV ENCODER
M: Linus Walleij <linus.walleij@linaro.org>
T: git git://anongit.freedesktop.org/drm/drm-misc
...@@ -5405,7 +5429,6 @@ F: include/linux/vga*
DRM DRIVERS AND MISC GPU PATCHES
M: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
M: Maxime Ripard <mripard@kernel.org>
M: Sean Paul <sean@poorly.run>
W: https://01.org/linuxgraphics/gfx-docs/maintainer-tools/drm-misc.html
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
......
...@@ -6,21 +6,31 @@
* Author: Sathyanarayanan Kuppuswamy <sathyanarayanan.kuppuswamy@intel.com>
*/
#include <linux/gpio/machine.h>
#include <asm/intel-mid.h>

static struct gpiod_lookup_table tc35876x_gpio_table = {
	.dev_id = "i2c_disp_brig",
	.table = {
		GPIO_LOOKUP("0000:00:0c.0", -1, "bridge-reset", GPIO_ACTIVE_HIGH),
		GPIO_LOOKUP("0000:00:0c.0", -1, "bl-en", GPIO_ACTIVE_HIGH),
		GPIO_LOOKUP("0000:00:0c.0", -1, "vadd", GPIO_ACTIVE_HIGH),
		{ },
	},
};

/* tc35876x DSI_LVDS bridge chip and panel platform data */
static void *tc35876x_platform_data(void *data)
{
	struct gpiod_lookup_table *table = &tc35876x_gpio_table;
	struct gpiod_lookup *lookup = table->table;

	lookup[0].chip_hwnum = get_gpio_by_name("LCMB_RXEN");
	lookup[1].chip_hwnum = get_gpio_by_name("6S6P_BL_EN");
	lookup[2].chip_hwnum = get_gpio_by_name("EN_VREG_LCD_V3P3");
	gpiod_add_lookup_table(table);

	return NULL;
}

static const struct devs_id tc35876x_dev_id __initconst = {
......
...@@ -57,7 +57,7 @@ static int cfag12864bfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
return vm_map_pages_zero(vma, &pages, 1);
}
static const struct fb_ops cfag12864bfb_ops = {
.owner = THIS_MODULE,
.fb_read = fb_sys_read,
.fb_write = fb_sys_write,
......
...@@ -228,7 +228,7 @@ static int ht16k33_mmap(struct fb_info *info, struct vm_area_struct *vma)
return vm_map_pages_zero(vma, &pages, 1);
}
static const struct fb_ops ht16k33_fb_ops = {
.owner = THIS_MODULE,
.fb_read = fb_sys_read,
.fb_write = fb_sys_write,
......
...@@ -44,4 +44,15 @@ config DMABUF_SELFTESTS
default n
depends on DMA_SHARED_BUFFER
menuconfig DMABUF_HEAPS
bool "DMA-BUF Userland Memory Heaps"
select DMA_SHARED_BUFFER
help
Choose this option to enable the DMA-BUF userland memory heaps.
This option creates per-heap chardevs in /dev/dma_heap/ which
allow userspace to allocate dma-bufs that can be shared
between drivers.
source "drivers/dma-buf/heaps/Kconfig"
endmenu
# SPDX-License-Identifier: GPL-2.0-only
obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
	 dma-resv.o seqno-fence.o
obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
obj-$(CONFIG_DMABUF_HEAPS) += heaps/
obj-$(CONFIG_SYNC_FILE) += sync_file.o
obj-$(CONFIG_SW_SYNC) += sw_sync.o sync_debug.o
obj-$(CONFIG_UDMABUF) += udmabuf.o
......
...@@ -878,29 +878,9 @@ EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
* with calls to dma_buf_begin_cpu_access() and dma_buf_end_cpu_access()
* access.
*
* Since most kernel-internal dma-buf accesses need the entire buffer, a
* vmap interface is introduced. Note that on very old 32-bit architectures
* vmalloc space might be limited and result in vmap calls failing.
*
* Interfaces::
* void \*dma_buf_vmap(struct dma_buf \*dmabuf)
...@@ -1050,43 +1030,6 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
}
EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
/**
* dma_buf_kmap - Map a page of the buffer object into kernel address space. The
* same restrictions as for kmap and friends apply.
* @dmabuf: [in] buffer to map page from.
* @page_num: [in] page in PAGE_SIZE units to map.
*
* This call must always succeed, any necessary preparations that might fail
* need to be done in begin_cpu_access.
*/
void *dma_buf_kmap(struct dma_buf *dmabuf, unsigned long page_num)
{
WARN_ON(!dmabuf);
if (!dmabuf->ops->map)
return NULL;
return dmabuf->ops->map(dmabuf, page_num);
}
EXPORT_SYMBOL_GPL(dma_buf_kmap);
/**
* dma_buf_kunmap - Unmap a page obtained by dma_buf_kmap.
* @dmabuf: [in] buffer to unmap page from.
* @page_num: [in] page in PAGE_SIZE units to unmap.
* @vaddr: [in] kernel space pointer obtained from dma_buf_kmap.
*
* This call must always succeed.
*/
void dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long page_num,
void *vaddr)
{
WARN_ON(!dmabuf);
if (dmabuf->ops->unmap)
dmabuf->ops->unmap(dmabuf, page_num, vaddr);
}
EXPORT_SYMBOL_GPL(dma_buf_kunmap);
/**
* dma_buf_mmap - Set up a userspace mmap with the given vma
......
// SPDX-License-Identifier: GPL-2.0
/*
* Framework for userspace DMA-BUF allocations
*
* Copyright (C) 2011 Google, Inc.
* Copyright (C) 2019 Linaro Ltd.
*/
#include <linux/cdev.h>
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/xarray.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/syscalls.h>
#include <linux/dma-heap.h>
#include <uapi/linux/dma-heap.h>
#define DEVNAME "dma_heap"
#define NUM_HEAP_MINORS 128
/**
* struct dma_heap - represents a dmabuf heap in the system
* @name: used for debugging/device-node name
* @ops: ops struct for this heap
* @priv: private data, retrieved with dma_heap_get_drvdata()
* @heap_devt: heap device node
* @list: list head connecting to list of heaps
* @heap_cdev: heap char device
*
* Represents a heap of memory from which buffers can be made.
*/
struct dma_heap {
const char *name;
const struct dma_heap_ops *ops;
void *priv;
dev_t heap_devt;
struct list_head list;
struct cdev heap_cdev;
};
static LIST_HEAD(heap_list);
static DEFINE_MUTEX(heap_list_lock);
static dev_t dma_heap_devt;
static struct class *dma_heap_class;
static DEFINE_XARRAY_ALLOC(dma_heap_minors);
static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
unsigned int fd_flags,
unsigned int heap_flags)
{
/*
* Allocations from all heaps have to begin
* and end on page boundaries.
*/
len = PAGE_ALIGN(len);
if (!len)
return -EINVAL;
return heap->ops->allocate(heap, len, fd_flags, heap_flags);
}
static int dma_heap_open(struct inode *inode, struct file *file)
{
struct dma_heap *heap;
heap = xa_load(&dma_heap_minors, iminor(inode));
if (!heap) {
pr_err("dma_heap: minor %d unknown.\n", iminor(inode));
return -ENODEV;
}
/* instance data as context */
file->private_data = heap;
nonseekable_open(inode, file);
return 0;
}
static long dma_heap_ioctl_allocate(struct file *file, void *data)
{
struct dma_heap_allocation_data *heap_allocation = data;
struct dma_heap *heap = file->private_data;
int fd;
if (heap_allocation->fd)
return -EINVAL;
if (heap_allocation->fd_flags & ~DMA_HEAP_VALID_FD_FLAGS)
return -EINVAL;
if (heap_allocation->heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS)
return -EINVAL;
fd = dma_heap_buffer_alloc(heap, heap_allocation->len,
heap_allocation->fd_flags,
heap_allocation->heap_flags);
if (fd < 0)
return fd;
heap_allocation->fd = fd;
return 0;
}
static unsigned int dma_heap_ioctl_cmds[] = {
DMA_HEAP_IOC_ALLOC,
};
static long dma_heap_ioctl(struct file *file, unsigned int ucmd,
unsigned long arg)
{
char stack_kdata[128];
char *kdata = stack_kdata;
unsigned int kcmd;
unsigned int in_size, out_size, drv_size, ksize;
int nr = _IOC_NR(ucmd);
int ret = 0;
if (nr >= ARRAY_SIZE(dma_heap_ioctl_cmds))
return -EINVAL;
/* Get the kernel ioctl cmd that matches */
kcmd = dma_heap_ioctl_cmds[nr];
/* Figure out the delta between user cmd size and kernel cmd size */
drv_size = _IOC_SIZE(kcmd);
out_size = _IOC_SIZE(ucmd);
in_size = out_size;
if ((ucmd & kcmd & IOC_IN) == 0)
in_size = 0;
if ((ucmd & kcmd & IOC_OUT) == 0)
out_size = 0;
ksize = max(max(in_size, out_size), drv_size);
/* If necessary, allocate buffer for ioctl argument */
if (ksize > sizeof(stack_kdata)) {
kdata = kmalloc(ksize, GFP_KERNEL);
if (!kdata)
return -ENOMEM;
}
if (copy_from_user(kdata, (void __user *)arg, in_size) != 0) {
ret = -EFAULT;
goto err;
}
/* zero out any difference between the kernel/user structure size */
if (ksize > in_size)
memset(kdata + in_size, 0, ksize - in_size);
switch (kcmd) {
case DMA_HEAP_IOC_ALLOC:
ret = dma_heap_ioctl_allocate(file, kdata);
break;
default:
ret = -ENOTTY;
goto err;
}
if (copy_to_user((void __user *)arg, kdata, out_size) != 0)
ret = -EFAULT;
err:
if (kdata != stack_kdata)
kfree(kdata);
return ret;
}
static const struct file_operations dma_heap_fops = {
.owner = THIS_MODULE,
.open = dma_heap_open,
.unlocked_ioctl = dma_heap_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = dma_heap_ioctl,
#endif
};
/**
* dma_heap_get_drvdata() - get per-subdriver data for the heap
* @heap: DMA-Heap to retrieve private data for
*
* Returns:
* The per-subdriver data for the heap.
*/
void *dma_heap_get_drvdata(struct dma_heap *heap)
{
return heap->priv;
}
struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
{
struct dma_heap *heap, *h, *err_ret;
struct device *dev_ret;
unsigned int minor;
int ret;
if (!exp_info->name || !strcmp(exp_info->name, "")) {
pr_err("dma_heap: Cannot add heap without a name\n");
return ERR_PTR(-EINVAL);
}
if (!exp_info->ops || !exp_info->ops->allocate) {
pr_err("dma_heap: Cannot add heap with invalid ops struct\n");
return ERR_PTR(-EINVAL);
}
/* check the name is unique */
mutex_lock(&heap_list_lock);
list_for_each_entry(h, &heap_list, list) {
if (!strcmp(h->name, exp_info->name)) {
mutex_unlock(&heap_list_lock);
pr_err("dma_heap: Already registered heap named %s\n",
exp_info->name);
return ERR_PTR(-EINVAL);
}
}
mutex_unlock(&heap_list_lock);
heap = kzalloc(sizeof(*heap), GFP_KERNEL);
if (!heap)
return ERR_PTR(-ENOMEM);
heap->name = exp_info->name;
heap->ops = exp_info->ops;
heap->priv = exp_info->priv;
/* Find unused minor number */
ret = xa_alloc(&dma_heap_minors, &minor, heap,
XA_LIMIT(0, NUM_HEAP_MINORS - 1), GFP_KERNEL);
if (ret < 0) {
pr_err("dma_heap: Unable to get minor number for heap\n");
err_ret = ERR_PTR(ret);
goto err0;
}
/* Create device */
heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), minor);
cdev_init(&heap->heap_cdev, &dma_heap_fops);
ret = cdev_add(&heap->heap_cdev, heap->heap_devt, 1);
if (ret < 0) {
pr_err("dma_heap: Unable to add char device\n");
err_ret = ERR_PTR(ret);
goto err1;
}
dev_ret = device_create(dma_heap_class,
NULL,
heap->heap_devt,
NULL,
heap->name);
if (IS_ERR(dev_ret)) {
pr_err("dma_heap: Unable to create device\n");
err_ret = ERR_CAST(dev_ret);
goto err2;
}
/* Add heap to the list */
mutex_lock(&heap_list_lock);
list_add(&heap->list, &heap_list);
mutex_unlock(&heap_list_lock);
return heap;
err2:
cdev_del(&heap->heap_cdev);
err1:
xa_erase(&dma_heap_minors, minor);
err0:
kfree(heap);
return err_ret;
}
static char *dma_heap_devnode(struct device *dev, umode_t *mode)
{
return kasprintf(GFP_KERNEL, "dma_heap/%s", dev_name(dev));
}
static int dma_heap_init(void)
{
int ret;
ret = alloc_chrdev_region(&dma_heap_devt, 0, NUM_HEAP_MINORS, DEVNAME);
if (ret)
return ret;
dma_heap_class = class_create(THIS_MODULE, DEVNAME);
if (IS_ERR(dma_heap_class)) {
unregister_chrdev_region(dma_heap_devt, NUM_HEAP_MINORS);
return PTR_ERR(dma_heap_class);
}
dma_heap_class->devnode = dma_heap_devnode;
return 0;
}
subsys_initcall(dma_heap_init);
...@@ -34,6 +34,7 @@
#include <linux/dma-resv.h>
#include <linux/export.h>
#include <linux/sched/mm.h>
/**
* DOC: Reservation Object Overview
...@@ -95,6 +96,37 @@ static void dma_resv_list_free(struct dma_resv_list *list)
kfree_rcu(list, rcu);
}
#if IS_ENABLED(CONFIG_LOCKDEP)
static int __init dma_resv_lockdep(void)
{
struct mm_struct *mm = mm_alloc();
struct ww_acquire_ctx ctx;
struct dma_resv obj;
int ret;
if (!mm)
return -ENOMEM;
dma_resv_init(&obj);
down_read(&mm->mmap_sem);
ww_acquire_init(&ctx, &reservation_ww_class);
ret = dma_resv_lock(&obj, &ctx);
if (ret == -EDEADLK)
dma_resv_lock_slow(&obj, &ctx);
fs_reclaim_acquire(GFP_KERNEL);
fs_reclaim_release(GFP_KERNEL);
ww_mutex_unlock(&obj.lock);
ww_acquire_fini(&ctx);
up_read(&mm->mmap_sem);
mmput(mm);
return 0;
}
subsys_initcall(dma_resv_lockdep);
#endif
/**
* dma_resv_init - initialize a reservation object
* @obj: the reservation object
......
config DMABUF_HEAPS_SYSTEM
bool "DMA-BUF System Heap"
depends on DMABUF_HEAPS
help
Choose this option to enable the system dmabuf heap. The system heap
is backed by pages from the buddy allocator. If in doubt, say Y.
config DMABUF_HEAPS_CMA
bool "DMA-BUF CMA Heap"
depends on DMABUF_HEAPS && DMA_CMA
help
Choose this option to enable dma-buf CMA heap. This heap is backed
by the Contiguous Memory Allocator (CMA). If your system has these
regions, you should say Y here.
# SPDX-License-Identifier: GPL-2.0
obj-y += heap-helpers.o
obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
// SPDX-License-Identifier: GPL-2.0
/*
* DMABUF CMA heap exporter
*
* Copyright (C) 2012, 2019 Linaro Ltd.
* Author: <benjamin.gaignard@linaro.org> for ST-Ericsson.
*/
#include <linux/cma.h>
#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/dma-heap.h>
#include <linux/dma-contiguous.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/highmem.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/scatterlist.h>
#include <linux/sched/signal.h>
#include "heap-helpers.h"
struct cma_heap {
struct dma_heap *heap;
struct cma *cma;
};
static void cma_heap_free(struct heap_helper_buffer *buffer)
{
struct cma_heap *cma_heap = dma_heap_get_drvdata(buffer->heap);
unsigned long nr_pages = buffer->pagecount;
struct page *cma_pages = buffer->priv_virt;
/* free page list */
kfree(buffer->pages);
/* release memory */
cma_release(cma_heap->cma, cma_pages, nr_pages);
kfree(buffer);
}
/* dmabuf heap CMA operations functions */
static int cma_heap_allocate(struct dma_heap *heap,
unsigned long len,
unsigned long fd_flags,
unsigned long heap_flags)
{
struct cma_heap *cma_heap = dma_heap_get_drvdata(heap);
struct heap_helper_buffer *helper_buffer;
struct page *cma_pages;
size_t size = PAGE_ALIGN(len);
unsigned long nr_pages = size >> PAGE_SHIFT;
unsigned long align = get_order(size);
struct dma_buf *dmabuf;
int ret = -ENOMEM;
pgoff_t pg;
if (align > CONFIG_CMA_ALIGNMENT)
align = CONFIG_CMA_ALIGNMENT;
helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
if (!helper_buffer)
return -ENOMEM;
init_heap_helper_buffer(helper_buffer, cma_heap_free);
helper_buffer->heap = heap;
helper_buffer->size = len;
cma_pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
if (!cma_pages)
goto free_buf;
if (PageHighMem(cma_pages)) {
unsigned long nr_clear_pages = nr_pages;
struct page *page = cma_pages;
while (nr_clear_pages > 0) {
void *vaddr = kmap_atomic(page);
memset(vaddr, 0, PAGE_SIZE);
kunmap_atomic(vaddr);
/*
* Avoid wasting time zeroing memory if the process
* has been killed by SIGKILL.
*/
if (fatal_signal_pending(current))
goto free_cma;
page++;
nr_clear_pages--;
}
} else {
memset(page_address(cma_pages), 0, size);
}
helper_buffer->pagecount = nr_pages;
helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
sizeof(*helper_buffer->pages),
GFP_KERNEL);
if (!helper_buffer->pages) {
ret = -ENOMEM;
goto free_cma;
}
for (pg = 0; pg < helper_buffer->pagecount; pg++)
helper_buffer->pages[pg] = &cma_pages[pg];
/* create the dmabuf */
dmabuf = heap_helper_export_dmabuf(helper_buffer, fd_flags);
if (IS_ERR(dmabuf)) {
ret = PTR_ERR(dmabuf);
goto free_pages;
}
helper_buffer->dmabuf = dmabuf;
helper_buffer->priv_virt = cma_pages;
ret = dma_buf_fd(dmabuf, fd_flags);
if (ret < 0) {
dma_buf_put(dmabuf);
/* just return, as put will call release and that will free */
return ret;
}
return ret;
free_pages:
kfree(helper_buffer->pages);
free_cma:
cma_release(cma_heap->cma, cma_pages, nr_pages);
free_buf:
kfree(helper_buffer);
return ret;
}
static const struct dma_heap_ops cma_heap_ops = {
.allocate = cma_heap_allocate,
};
static int __add_cma_heap(struct cma *cma, void *data)
{
struct cma_heap *cma_heap;
struct dma_heap_export_info exp_info;
cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
if (!cma_heap)
return -ENOMEM;
cma_heap->cma = cma;
exp_info.name = cma_get_name(cma);
exp_info.ops = &cma_heap_ops;
exp_info.priv = cma_heap;
cma_heap->heap = dma_heap_add(&exp_info);
if (IS_ERR(cma_heap->heap)) {
int ret = PTR_ERR(cma_heap->heap);
kfree(cma_heap);
return ret;
}
return 0;
}
static int add_default_cma_heap(void)
{
struct cma *default_cma = dev_get_cma_area(NULL);
int ret = 0;
if (default_cma)
ret = __add_cma_heap(default_cma, NULL);
return ret;
}
module_init(add_default_cma_heap);
MODULE_DESCRIPTION("DMA-BUF CMA Heap");
MODULE_LICENSE("GPL v2");
// SPDX-License-Identifier: GPL-2.0
#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/highmem.h>
#include <linux/idr.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/vmalloc.h>
#include <uapi/linux/dma-heap.h>
#include "heap-helpers.h"
void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
void (*free)(struct heap_helper_buffer *))
{
buffer->priv_virt = NULL;
mutex_init(&buffer->lock);
buffer->vmap_cnt = 0;
buffer->vaddr = NULL;
buffer->pagecount = 0;
buffer->pages = NULL;
INIT_LIST_HEAD(&buffer->attachments);
buffer->free = free;
}
struct dma_buf *heap_helper_export_dmabuf(struct heap_helper_buffer *buffer,
int fd_flags)
{
DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
exp_info.ops = &heap_helper_ops;
exp_info.size = buffer->size;
exp_info.flags = fd_flags;
exp_info.priv = buffer;
return dma_buf_export(&exp_info);
}
static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
{
void *vaddr;
vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
if (!vaddr)
return ERR_PTR(-ENOMEM);
return vaddr;
}
static void dma_heap_buffer_destroy(struct heap_helper_buffer *buffer)
{
if (buffer->vmap_cnt > 0) {
WARN(1, "%s: buffer still mapped in the kernel\n", __func__);
vunmap(buffer->vaddr);
}
buffer->free(buffer);
}
static void *dma_heap_buffer_vmap_get(struct heap_helper_buffer *buffer)
{
void *vaddr;
if (buffer->vmap_cnt) {
buffer->vmap_cnt++;
return buffer->vaddr;
}
vaddr = dma_heap_map_kernel(buffer);
if (IS_ERR(vaddr))
return vaddr;
buffer->vaddr = vaddr;
buffer->vmap_cnt++;
return vaddr;
}
static void dma_heap_buffer_vmap_put(struct heap_helper_buffer *buffer)
{
if (!--buffer->vmap_cnt) {
vunmap(buffer->vaddr);
buffer->vaddr = NULL;
}
}
struct dma_heaps_attachment {
struct device *dev;
struct sg_table table;
struct list_head list;
};
static int dma_heap_attach(struct dma_buf *dmabuf,
struct dma_buf_attachment *attachment)
{
struct dma_heaps_attachment *a;
struct heap_helper_buffer *buffer = dmabuf->priv;
int ret;
a = kzalloc(sizeof(*a), GFP_KERNEL);
if (!a)
return -ENOMEM;
ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
buffer->pagecount, 0,
buffer->pagecount << PAGE_SHIFT,
GFP_KERNEL);
if (ret) {
kfree(a);
return ret;
}
a->dev = attachment->dev;
INIT_LIST_HEAD(&a->list);
attachment->priv = a;
mutex_lock(&buffer->lock);
list_add(&a->list, &buffer->attachments);
mutex_unlock(&buffer->lock);
return 0;
}
static void dma_heap_detach(struct dma_buf *dmabuf,
struct dma_buf_attachment *attachment)
{
struct dma_heaps_attachment *a = attachment->priv;
struct heap_helper_buffer *buffer = dmabuf->priv;
mutex_lock(&buffer->lock);
list_del(&a->list);
mutex_unlock(&buffer->lock);
sg_free_table(&a->table);
kfree(a);
}
static
struct sg_table *dma_heap_map_dma_buf(struct dma_buf_attachment *attachment,
enum dma_data_direction direction)
{
struct dma_heaps_attachment *a = attachment->priv;
struct sg_table *table;
table = &a->table;
if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
direction))
table = ERR_PTR(-ENOMEM);
return table;
}
static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
struct sg_table *table,
enum dma_data_direction direction)
{
dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
}
static vm_fault_t dma_heap_vm_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct heap_helper_buffer *buffer = vma->vm_private_data;
if (vmf->pgoff >= buffer->pagecount)
return VM_FAULT_SIGBUS;
vmf->page = buffer->pages[vmf->pgoff];
get_page(vmf->page);
return 0;
}
static const struct vm_operations_struct dma_heap_vm_ops = {
.fault = dma_heap_vm_fault,
};
static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
struct heap_helper_buffer *buffer = dmabuf->priv;
if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
return -EINVAL;
vma->vm_ops = &dma_heap_vm_ops;
vma->vm_private_data = buffer;
return 0;
}
static void dma_heap_dma_buf_release(struct dma_buf *dmabuf)
{
struct heap_helper_buffer *buffer = dmabuf->priv;
dma_heap_buffer_destroy(buffer);
}
static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
struct heap_helper_buffer *buffer = dmabuf->priv;
struct dma_heaps_attachment *a;
int ret = 0;
mutex_lock(&buffer->lock);
if (buffer->vmap_cnt)
invalidate_kernel_vmap_range(buffer->vaddr, buffer->size);
list_for_each_entry(a, &buffer->attachments, list) {
dma_sync_sg_for_cpu(a->dev, a->table.sgl, a->table.nents,
direction);
}
mutex_unlock(&buffer->lock);
return ret;
}
static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
struct heap_helper_buffer *buffer = dmabuf->priv;
struct dma_heaps_attachment *a;
mutex_lock(&buffer->lock);
if (buffer->vmap_cnt)
flush_kernel_vmap_range(buffer->vaddr, buffer->size);
list_for_each_entry(a, &buffer->attachments, list) {
dma_sync_sg_for_device(a->dev, a->table.sgl, a->table.nents,
direction);
}
mutex_unlock(&buffer->lock);
return 0;
}
static void *dma_heap_dma_buf_vmap(struct dma_buf *dmabuf)
{
struct heap_helper_buffer *buffer = dmabuf->priv;
void *vaddr;
mutex_lock(&buffer->lock);
vaddr = dma_heap_buffer_vmap_get(buffer);
mutex_unlock(&buffer->lock);
return vaddr;
}
static void dma_heap_dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
{
struct heap_helper_buffer *buffer = dmabuf->priv;
mutex_lock(&buffer->lock);
dma_heap_buffer_vmap_put(buffer);
mutex_unlock(&buffer->lock);
}
const struct dma_buf_ops heap_helper_ops = {
.map_dma_buf = dma_heap_map_dma_buf,
.unmap_dma_buf = dma_heap_unmap_dma_buf,
.mmap = dma_heap_mmap,
.release = dma_heap_dma_buf_release,
.attach = dma_heap_attach,
.detach = dma_heap_detach,
.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
.vmap = dma_heap_dma_buf_vmap,
.vunmap = dma_heap_dma_buf_vunmap,
};
/* SPDX-License-Identifier: GPL-2.0 */
/*
* DMABUF Heaps helper code
*
* Copyright (C) 2011 Google, Inc.
* Copyright (C) 2019 Linaro Ltd.
*/
#ifndef _HEAP_HELPERS_H
#define _HEAP_HELPERS_H
#include <linux/dma-heap.h>
#include <linux/list.h>
/**
* struct heap_helper_buffer - helper buffer metadata
* @heap: back pointer to the heap the buffer came from
* @dmabuf: backing dma-buf for this buffer
* @size: size of the buffer
* @priv_virt: pointer to heap specific private value
* @lock: mutex to protect the data in this structure
* @vmap_cnt: count of vmap references on the buffer
* @vaddr: vmap'ed virtual address
* @pagecount: number of pages in the buffer
* @pages: list of page pointers
* @attachments: list of device attachments
* @free: heap callback to free the buffer
*/
struct heap_helper_buffer {
struct dma_heap *heap;
struct dma_buf *dmabuf;
size_t size;
void *priv_virt;
struct mutex lock;
int vmap_cnt;
void *vaddr;
pgoff_t pagecount;
struct page **pages;
struct list_head attachments;
void (*free)(struct heap_helper_buffer *buffer);
};
void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
void (*free)(struct heap_helper_buffer *));
struct dma_buf *heap_helper_export_dmabuf(struct heap_helper_buffer *buffer,
int fd_flags);
extern const struct dma_buf_ops heap_helper_ops;
#endif /* _HEAP_HELPERS_H */
// SPDX-License-Identifier: GPL-2.0
/*
* DMABUF System heap exporter
*
* Copyright (C) 2011 Google, Inc.
* Copyright (C) 2019 Linaro Ltd.
*/
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/dma-heap.h>
#include <linux/err.h>
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/sched/signal.h>
#include <asm/page.h>
#include "heap-helpers.h"
static struct dma_heap *sys_heap;
static void system_heap_free(struct heap_helper_buffer *buffer)
{
pgoff_t pg;
for (pg = 0; pg < buffer->pagecount; pg++)
__free_page(buffer->pages[pg]);
kfree(buffer->pages);
kfree(buffer);
}
static int system_heap_allocate(struct dma_heap *heap,
unsigned long len,
unsigned long fd_flags,
unsigned long heap_flags)
{
struct heap_helper_buffer *helper_buffer;
struct dma_buf *dmabuf;
int ret = -ENOMEM;
pgoff_t pg;
helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
if (!helper_buffer)
return -ENOMEM;
init_heap_helper_buffer(helper_buffer, system_heap_free);
helper_buffer->heap = heap;
helper_buffer->size = len;
helper_buffer->pagecount = len / PAGE_SIZE;
helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
sizeof(*helper_buffer->pages),
GFP_KERNEL);
if (!helper_buffer->pages) {
ret = -ENOMEM;
goto err0;
}
	for (pg = 0; pg < helper_buffer->pagecount; pg++) {
		/*
		 * Avoid trying to allocate memory if the process
		 * has been killed by SIGKILL
		 */
		if (fatal_signal_pending(current))
			goto err1;

		helper_buffer->pages[pg] = alloc_page(GFP_KERNEL | __GFP_ZERO);
		if (!helper_buffer->pages[pg])
			goto err1;
	}
	/* create the dmabuf */
	dmabuf = heap_helper_export_dmabuf(helper_buffer, fd_flags);
	if (IS_ERR(dmabuf)) {
		ret = PTR_ERR(dmabuf);
		goto err1;
	}

	helper_buffer->dmabuf = dmabuf;

	ret = dma_buf_fd(dmabuf, fd_flags);
	if (ret < 0) {
		dma_buf_put(dmabuf);
		/* just return, as put will call release and that will free */
		return ret;
	}

	return ret;

err1:
	while (pg > 0)
		__free_page(helper_buffer->pages[--pg]);
	kfree(helper_buffer->pages);
err0:
	kfree(helper_buffer);

	return ret;
}
static const struct dma_heap_ops system_heap_ops = {
	.allocate = system_heap_allocate,
};

static int system_heap_create(void)
{
	struct dma_heap_export_info exp_info;
	int ret = 0;

	exp_info.name = "system_heap";
	exp_info.ops = &system_heap_ops;
	exp_info.priv = NULL;

	sys_heap = dma_heap_add(&exp_info);
	if (IS_ERR(sys_heap))
		ret = PTR_ERR(sys_heap);

	return ret;
}
module_init(system_heap_create);
MODULE_LICENSE("GPL v2");
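The error path in `system_heap_allocate()` above unwinds only what was actually allocated: `err1` walks `pg` back down and frees just the pages obtained before the failure, and `err0` then drops the metadata. A small userspace sketch of the same partial-unwind pattern (toy names, not kernel code):

```c
#include <stdlib.h>

/* Toy model of system_heap_allocate()'s unwind: if the page loop fails
 * midway, free exactly the pages allocated so far. Hypothetical helper
 * for illustration, not kernel code. */
#define TOY_PAGES 8

static int toy_allocated, toy_freed;

static int toy_alloc_pages(int fail_at)
{
	void *pages[TOY_PAGES];
	int pg;

	toy_allocated = 0;
	toy_freed = 0;

	for (pg = 0; pg < TOY_PAGES; pg++) {
		if (pg == fail_at)	/* stands in for alloc_page() failing */
			goto err;
		pages[pg] = malloc(64);
		toy_allocated++;
	}

	/* success: the real code hands the pages to the dma-buf; the toy
	 * just releases them again */
	for (pg = 0; pg < TOY_PAGES; pg++)
		free(pages[pg]);
	return 0;

err:
	while (pg > 0) {	/* mirror of the err1: label above */
		free(pages[--pg]);
		toy_freed++;
	}
	return -1;
}
```

Failing at page 5 frees exactly the 5 pages already obtained, nothing more.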
@@ -18,6 +18,8 @@ static const size_t size_limit_mb = 64; /* total dmabuf size, in megabytes */
 struct udmabuf {
 	pgoff_t pagecount;
 	struct page **pages;
+	struct sg_table *sg;
+	struct miscdevice *device;
 };
 
 static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
@@ -46,10 +48,10 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
 	return 0;
 }
 
-static struct sg_table *map_udmabuf(struct dma_buf_attachment *at,
-				    enum dma_data_direction direction)
+static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
+				     enum dma_data_direction direction)
 {
-	struct udmabuf *ubuf = at->dmabuf->priv;
+	struct udmabuf *ubuf = buf->priv;
 	struct sg_table *sg;
 	int ret;
 
@@ -61,7 +63,7 @@ static struct sg_table *map_udmabuf(struct dma_buf_attachment *at,
 					GFP_KERNEL);
 	if (ret < 0)
 		goto err;
-	if (!dma_map_sg(at->dev, sg->sgl, sg->nents, direction)) {
+	if (!dma_map_sg(dev, sg->sgl, sg->nents, direction)) {
 		ret = -EINVAL;
 		goto err;
 	}
@@ -73,54 +75,90 @@ static struct sg_table *map_udmabuf(struct dma_buf_attachment *at,
 	return ERR_PTR(ret);
 }
 
+static void put_sg_table(struct device *dev, struct sg_table *sg,
+			 enum dma_data_direction direction)
+{
+	dma_unmap_sg(dev, sg->sgl, sg->nents, direction);
+	sg_free_table(sg);
+	kfree(sg);
+}
+
+static struct sg_table *map_udmabuf(struct dma_buf_attachment *at,
+				    enum dma_data_direction direction)
+{
+	return get_sg_table(at->dev, at->dmabuf, direction);
+}
+
 static void unmap_udmabuf(struct dma_buf_attachment *at,
 			  struct sg_table *sg,
 			  enum dma_data_direction direction)
 {
-	dma_unmap_sg(at->dev, sg->sgl, sg->nents, direction);
-	sg_free_table(sg);
-	kfree(sg);
+	return put_sg_table(at->dev, sg, direction);
 }
 
 static void release_udmabuf(struct dma_buf *buf)
 {
 	struct udmabuf *ubuf = buf->priv;
+	struct device *dev = ubuf->device->this_device;
 	pgoff_t pg;
 
+	if (ubuf->sg)
+		put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
+
 	for (pg = 0; pg < ubuf->pagecount; pg++)
 		put_page(ubuf->pages[pg]);
 	kfree(ubuf->pages);
 	kfree(ubuf);
 }
 
-static void *kmap_udmabuf(struct dma_buf *buf, unsigned long page_num)
+static int begin_cpu_udmabuf(struct dma_buf *buf,
+			     enum dma_data_direction direction)
 {
 	struct udmabuf *ubuf = buf->priv;
-	struct page *page = ubuf->pages[page_num];
+	struct device *dev = ubuf->device->this_device;
+
+	if (!ubuf->sg) {
+		ubuf->sg = get_sg_table(dev, buf, direction);
+		if (IS_ERR(ubuf->sg))
+			return PTR_ERR(ubuf->sg);
+	} else {
+		dma_sync_sg_for_device(dev, ubuf->sg->sgl,
+				       ubuf->sg->nents,
+				       direction);
+	}
 
-	return kmap(page);
+	return 0;
 }
 
-static void kunmap_udmabuf(struct dma_buf *buf, unsigned long page_num,
-			   void *vaddr)
+static int end_cpu_udmabuf(struct dma_buf *buf,
			   enum dma_data_direction direction)
 {
-	kunmap(vaddr);
+	struct udmabuf *ubuf = buf->priv;
+	struct device *dev = ubuf->device->this_device;
+
+	if (!ubuf->sg)
+		return -EINVAL;
+
+	dma_sync_sg_for_cpu(dev, ubuf->sg->sgl, ubuf->sg->nents, direction);
+	return 0;
 }
 
 static const struct dma_buf_ops udmabuf_ops = {
-	.map_dma_buf = map_udmabuf,
-	.unmap_dma_buf = unmap_udmabuf,
-	.release = release_udmabuf,
-	.map = kmap_udmabuf,
-	.unmap = kunmap_udmabuf,
-	.mmap = mmap_udmabuf,
+	.cache_sgt_mapping = true,
+	.map_dma_buf	   = map_udmabuf,
+	.unmap_dma_buf	   = unmap_udmabuf,
+	.release	   = release_udmabuf,
+	.mmap		   = mmap_udmabuf,
+	.begin_cpu_access  = begin_cpu_udmabuf,
+	.end_cpu_access	   = end_cpu_udmabuf,
 };
 
 #define SEALS_WANTED (F_SEAL_SHRINK)
 #define SEALS_DENIED (F_SEAL_WRITE)
 
-static long udmabuf_create(const struct udmabuf_create_list *head,
-			   const struct udmabuf_create_item *list)
+static long udmabuf_create(struct miscdevice *device,
+			   struct udmabuf_create_list *head,
+			   struct udmabuf_create_item *list)
 {
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 	struct file *memfd = NULL;
@@ -187,6 +225,7 @@ static long udmabuf_create(const struct udmabuf_create_list *head,
 	exp_info.priv = ubuf;
 	exp_info.flags = O_RDWR;
 
+	ubuf->device = device;
 	buf = dma_buf_export(&exp_info);
 	if (IS_ERR(buf)) {
 		ret = PTR_ERR(buf);
@@ -224,7 +263,7 @@ static long udmabuf_ioctl_create(struct file *filp, unsigned long arg)
 	list.offset = create.offset;
 	list.size   = create.size;
 
-	return udmabuf_create(&head, &list);
+	return udmabuf_create(filp->private_data, &head, &list);
 }
 
 static long udmabuf_ioctl_create_list(struct file *filp, unsigned long arg)
@@ -243,7 +282,7 @@ static long udmabuf_ioctl_create_list(struct file *filp, unsigned long arg)
 	if (IS_ERR(list))
 		return PTR_ERR(list);
 
-	ret = udmabuf_create(&head, list);
+	ret = udmabuf_create(filp->private_data, &head, list);
 	kfree(list);
 	return ret;
 }
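The new `begin_cpu_udmabuf()` maps the buffer once and caches the sg table in `ubuf->sg`; every later CPU-access cycle only syncs the cached mapping instead of remapping. A toy model of that first-use caching (hypothetical names, plain C, not kernel code):

```c
#include <stddef.h>

/* Toy model of begin_cpu_udmabuf(): map on first use and cache the
 * result; later calls only sync the cached mapping. */
struct toy_udmabuf {
	void *sg;		/* cached "mapping" */
	int maps, syncs;
};

static int toy_begin_cpu(struct toy_udmabuf *b)
{
	if (!b->sg) {
		b->maps++;
		b->sg = &b->maps;	/* stand-in for get_sg_table() */
	} else {
		b->syncs++;		/* stand-in for dma_sync_sg_for_device() */
	}
	return 0;
}

/* Run three CPU-access cycles and report maps * 10 + syncs. */
static int toy_demo(void)
{
	struct toy_udmabuf b = { 0 };
	int i;

	for (i = 0; i < 3; i++)
		toy_begin_cpu(&b);
	return b.maps * 10 + b.syncs;
}
```

Three cycles produce one map and two syncs, which is exactly the saving `cache_sgt_mapping` style reuse is after.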
......
@@ -294,9 +294,6 @@ config DRM_VKMS
 	  If M is selected the module will be called vkms.
 
-config DRM_ATI_PCIGART
-	bool
-
 source "drivers/gpu/drm/exynos/Kconfig"
 
 source "drivers/gpu/drm/rockchip/Kconfig"
@@ -393,7 +390,6 @@ menuconfig DRM_LEGACY
 	bool "Enable legacy drivers (DANGEROUS)"
 	depends on DRM && MMU
 	select DRM_VM
-	select DRM_ATI_PCIGART if PCI
 	help
 	  Enable legacy DRI1 drivers. Those drivers expose unsafe and dangerous
 	  APIs to user-space, which can be used to circumvent access
......
@@ -5,7 +5,7 @@
 drm-y := drm_auth.o drm_cache.o \
 	drm_file.o drm_gem.o drm_ioctl.o drm_irq.o \
-	drm_memory.o drm_drv.o drm_pci.o \
+	drm_memory.o drm_drv.o \
 	drm_sysfs.o drm_hashtab.o drm_mm.o \
 	drm_crtc.o drm_fourcc.o drm_modes.o drm_edid.o \
 	drm_encoder_slave.o \
@@ -25,10 +25,10 @@ drm-$(CONFIG_DRM_VM) += drm_vm.o
 drm-$(CONFIG_COMPAT) += drm_ioc32.o
 drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o
 drm-$(CONFIG_DRM_GEM_SHMEM_HELPER) += drm_gem_shmem_helper.o
-drm-$(CONFIG_DRM_ATI_PCIGART) += ati_pcigart.o
 drm-$(CONFIG_DRM_PANEL) += drm_panel.o
 drm-$(CONFIG_OF) += drm_of.o
 drm-$(CONFIG_AGP) += drm_agpsupport.o
+drm-$(CONFIG_PCI) += drm_pci.o
 drm-$(CONFIG_DEBUG_FS) += drm_debugfs.o drm_debugfs_crc.o
 drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o
......
@@ -360,10 +360,8 @@ struct dma_buf *amdgpu_gem_prime_export(struct drm_gem_object *gobj,
 		return ERR_PTR(-EPERM);
 
 	buf = drm_gem_prime_export(gobj, flags);
-	if (!IS_ERR(buf)) {
-		buf->file->f_mapping = gobj->dev->anon_inode->i_mapping;
+	if (!IS_ERR(buf))
 		buf->ops = &amdgpu_dmabuf_ops;
-	}
 
 	return buf;
 }
......
@@ -69,7 +69,7 @@ amdgpufb_release(struct fb_info *info, int user)
 	return 0;
 }
 
-static struct fb_ops amdgpufb_ops = {
+static const struct fb_ops amdgpufb_ops = {
 	.owner = THIS_MODULE,
 	DRM_FB_HELPER_DEFAULT_OPS,
 	.fb_open = amdgpufb_open,
......
@@ -234,7 +234,7 @@ static uint32_t smu_v11_0_i2c_transmit(struct i2c_adapter *control,
 	DRM_DEBUG_DRIVER("I2C_Transmit(), address = %x, bytes = %d , data: ",
 			 (uint16_t)address, numbytes);
 
-	if (drm_debug & DRM_UT_DRIVER) {
+	if (drm_debug_enabled(DRM_UT_DRIVER)) {
 		print_hex_dump(KERN_INFO, "data: ", DUMP_PREFIX_NONE,
 			       16, 1, data, numbytes, false);
 	}
@@ -388,7 +388,7 @@ static uint32_t smu_v11_0_i2c_receive(struct i2c_adapter *control,
 	DRM_DEBUG_DRIVER("I2C_Receive(), address = %x, bytes = %d, data :",
 			 (uint16_t)address, bytes_received);
 
-	if (drm_debug & DRM_UT_DRIVER) {
+	if (drm_debug_enabled(DRM_UT_DRIVER)) {
 		print_hex_dump(KERN_INFO, "data: ", DUMP_PREFIX_NONE,
 			       16, 1, data, bytes_received, false);
 	}
......
@@ -5324,11 +5324,12 @@ static int amdgpu_dm_connector_init(struct amdgpu_display_manager *dm,
 	connector_type = to_drm_connector_type(link->connector_signal);
 
-	res = drm_connector_init(
+	res = drm_connector_init_with_ddc(
 			dm->ddev,
 			&aconnector->base,
 			&amdgpu_dm_connector_funcs,
-			connector_type);
+			connector_type,
+			&i2c->base);
 
 	if (res) {
 		DRM_ERROR("connector_init failed\n");
......
@@ -12,9 +12,3 @@ config DRM_KOMEDA
 	  Processor driver. It supports the D71 variants of the hardware.
 
 	  If compiled as a module it will be called komeda.
-
-config DRM_KOMEDA_ERROR_PRINT
-	bool "Enable komeda error print"
-	depends on DRM_KOMEDA
-	help
-	  Choose this option to enable error printing.
@@ -18,7 +18,8 @@
 #define MALIDP_CORE_ID_STATUS(__core_id)	(((__u32)(__core_id)) & 0xFF)
 
 /* Mali-display product IDs */
 #define MALIDP_D71_PRODUCT_ID	0x0071
+#define MALIDP_D32_PRODUCT_ID	0x0032
 
 union komeda_config_id {
 	struct {
......
@@ -16,12 +16,11 @@ komeda-y := \
 	komeda_crtc.o \
 	komeda_plane.o \
 	komeda_wb_connector.o \
-	komeda_private_obj.o
+	komeda_private_obj.o \
+	komeda_event.o
 
 komeda-y += \
 	d71/d71_dev.o \
 	d71/d71_component.o
 
-komeda-$(CONFIG_DRM_KOMEDA_ERROR_PRINT) += komeda_event.o
-
 obj-$(CONFIG_DRM_KOMEDA) += komeda.o
@@ -1044,7 +1044,9 @@ static int d71_merger_init(struct d71_dev *d71,
 static void d71_improc_update(struct komeda_component *c,
 			      struct komeda_component_state *state)
 {
+	struct drm_crtc_state *crtc_st = state->crtc->state;
 	struct komeda_improc_state *st = to_improc_st(state);
+	struct d71_pipeline *pipe = to_d71_pipeline(c->pipeline);
 	u32 __iomem *reg = c->reg;
 	u32 index, mask = 0, ctrl = 0;
 
@@ -1055,6 +1057,24 @@ static void d71_improc_update(struct komeda_component *c,
 	malidp_write32(reg, BLK_SIZE, HV_SIZE(st->hsize, st->vsize));
 	malidp_write32(reg, IPS_DEPTH, st->color_depth);
 
+	if (crtc_st->color_mgmt_changed) {
+		mask |= IPS_CTRL_FT | IPS_CTRL_RGB;
+
+		if (crtc_st->gamma_lut) {
+			malidp_write_group(pipe->dou_ft_coeff_addr, FT_COEFF0,
+					   KOMEDA_N_GAMMA_COEFFS,
+					   st->fgamma_coeffs);
+			ctrl |= IPS_CTRL_FT; /* enable gamma */
+		}
+
+		if (crtc_st->ctm) {
+			malidp_write_group(reg, IPS_RGB_RGB_COEFF0,
+					   KOMEDA_N_CTM_COEFFS,
+					   st->ctm_coeffs);
+			ctrl |= IPS_CTRL_RGB; /* enable gamut */
+		}
+	}
+
 	mask |= IPS_CTRL_YUV | IPS_CTRL_CHD422 | IPS_CTRL_CHD420;
 
 	/* config color format */
@@ -1250,7 +1270,7 @@ static int d71_timing_ctrlr_init(struct d71_dev *d71,
 	ctrlr = to_ctrlr(c);
 
-	ctrlr->supports_dual_link = true;
+	ctrlr->supports_dual_link = d71->supports_dual_link;
 
 	return 0;
 }
......
@@ -371,23 +371,33 @@ static int d71_enum_resources(struct komeda_dev *mdev)
 		goto err_cleanup;
 	}
 
-	/* probe PERIPH */
+	/* Only the legacy HW has the periph block, the newer merges the periph
+	 * into GCU
+	 */
 	value = malidp_read32(d71->periph_addr, BLK_BLOCK_INFO);
-	if (BLOCK_INFO_BLK_TYPE(value) != D71_BLK_TYPE_PERIPH) {
-		DRM_ERROR("access blk periph but got blk: %d.\n",
-			  BLOCK_INFO_BLK_TYPE(value));
-		err = -EINVAL;
-		goto err_cleanup;
+	if (BLOCK_INFO_BLK_TYPE(value) != D71_BLK_TYPE_PERIPH)
+		d71->periph_addr = NULL;
+
+	if (d71->periph_addr) {
+		/* probe PERIPHERAL in legacy HW */
+		value = malidp_read32(d71->periph_addr, PERIPH_CONFIGURATION_ID);
+
+		d71->max_line_size	= value & PERIPH_MAX_LINE_SIZE ? 4096 : 2048;
+		d71->max_vsize		= 4096;
+		d71->num_rich_layers	= value & PERIPH_NUM_RICH_LAYERS ? 2 : 1;
+		d71->supports_dual_link	= !!(value & PERIPH_SPLIT_EN);
+		d71->integrates_tbu	= !!(value & PERIPH_TBU_EN);
+	} else {
+		value = malidp_read32(d71->gcu_addr, GCU_CONFIGURATION_ID0);
+		d71->max_line_size	= GCU_MAX_LINE_SIZE(value);
+		d71->max_vsize		= GCU_MAX_NUM_LINES(value);
+
+		value = malidp_read32(d71->gcu_addr, GCU_CONFIGURATION_ID1);
+		d71->num_rich_layers	= GCU_NUM_RICH_LAYERS(value);
+		d71->supports_dual_link	= GCU_DISPLAY_SPLIT_EN(value);
+		d71->integrates_tbu	= GCU_DISPLAY_TBU_EN(value);
 	}
 
-	value = malidp_read32(d71->periph_addr, PERIPH_CONFIGURATION_ID);
-	d71->max_line_size	= value & PERIPH_MAX_LINE_SIZE ? 4096 : 2048;
-	d71->max_vsize		= 4096;
-	d71->num_rich_layers	= value & PERIPH_NUM_RICH_LAYERS ? 2 : 1;
-	d71->supports_dual_link	= value & PERIPH_SPLIT_EN ? true : false;
-	d71->integrates_tbu	= value & PERIPH_TBU_EN ? true : false;
-
 	for (i = 0; i < d71->num_pipelines; i++) {
 		pipe = komeda_pipeline_add(mdev, sizeof(struct d71_pipeline),
 					   &d71_pipeline_funcs);
@@ -414,8 +424,11 @@ static int d71_enum_resources(struct komeda_dev *mdev)
 		d71->pipes[i] = to_d71_pipeline(pipe);
 	}
 
-	/* loop the register blks and probe */
-	i = 2; /* exclude GCU and PERIPH */
+	/* loop the register blks and probe.
+	 * NOTE: d71->num_blocks includes reserved blocks.
+	 * d71->num_blocks = GCU + valid blocks + reserved blocks
+	 */
+	i = 1; /* exclude GCU */
 	offset = D71_BLOCK_SIZE; /* skip GCU */
 	while (i < d71->num_blocks) {
 		blk_base = mdev->reg_base + (offset >> 2);
@@ -425,9 +438,9 @@ static int d71_enum_resources(struct komeda_dev *mdev)
 			err = d71_probe_block(d71, &blk, blk_base);
 			if (err)
 				goto err_cleanup;
+			i++;
 		}
 
-		i++;
 		offset += D71_BLOCK_SIZE;
 	}
 
@@ -594,10 +607,26 @@ static const struct komeda_dev_funcs d71_chip_funcs = {
 const struct komeda_dev_funcs *
 d71_identify(u32 __iomem *reg_base, struct komeda_chip_info *chip)
 {
+	const struct komeda_dev_funcs *funcs;
+	u32 product_id;
+
+	chip->core_id = malidp_read32(reg_base, GLB_CORE_ID);
+
+	product_id = MALIDP_CORE_ID_PRODUCT_ID(chip->core_id);
+
+	switch (product_id) {
+	case MALIDP_D71_PRODUCT_ID:
+	case MALIDP_D32_PRODUCT_ID:
+		funcs = &d71_chip_funcs;
+		break;
+	default:
+		DRM_ERROR("Unsupported product: 0x%x\n", product_id);
+		return NULL;
+	}
+
 	chip->arch_id	= malidp_read32(reg_base, GLB_ARCH_ID);
-	chip->core_id	= malidp_read32(reg_base, GLB_CORE_ID);
 	chip->core_info	= malidp_read32(reg_base, GLB_CORE_INFO);
 	chip->bus_width	= D71_BUS_WIDTH_16_BYTES;
 
-	return &d71_chip_funcs;
+	return funcs;
 }
@@ -72,6 +72,19 @@
 #define GCU_CONTROL_MODE(x)	((x) & 0x7)
 #define GCU_CONTROL_SRST	BIT(16)
 
+/* GCU_CONFIGURATION registers */
+#define GCU_CONFIGURATION_ID0	0x100
+#define GCU_CONFIGURATION_ID1	0x104
+
+/* GCU configuration */
+#define GCU_MAX_LINE_SIZE(x)	((x) & 0xFFFF)
+#define GCU_MAX_NUM_LINES(x)	((x) >> 16)
+#define GCU_NUM_RICH_LAYERS(x)	((x) & 0x7)
+#define GCU_NUM_PIPELINES(x)	(((x) >> 3) & 0x7)
+#define GCU_NUM_SCALERS(x)	(((x) >> 6) & 0x7)
+#define GCU_DISPLAY_SPLIT_EN(x)	(((x) >> 16) & 0x1)
+#define GCU_DISPLAY_TBU_EN(x)	(((x) >> 17) & 0x1)
+
 /* GCU opmode */
 #define INACTIVE_MODE		0
 #define TBU_CONNECT_MODE	1
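The new `GCU_CONFIGURATION_ID0`/`ID1` extractors split ID0 into the maximum line size (low 16 bits) and maximum line count (high 16 bits), and pull the rich-layer count, display-split and TBU bits out of ID1. A quick host-side check of the same bit layout (the sample register words below are invented for illustration):

```c
#include <stdint.h>

/* Host-side mirrors of the GCU_CONFIGURATION_ID* field extractors
 * above; not kernel code, sample values are made up. */
static uint32_t gcu_max_line_size(uint32_t v)    { return v & 0xFFFF; }
static uint32_t gcu_max_num_lines(uint32_t v)    { return v >> 16; }
static uint32_t gcu_num_rich_layers(uint32_t v)  { return v & 0x7; }
static uint32_t gcu_display_split_en(uint32_t v) { return (v >> 16) & 0x1; }
static uint32_t gcu_display_tbu_en(uint32_t v)   { return (v >> 17) & 0x1; }
```

With ID0 = 0x10001000 both fields decode to 4096, matching the values the legacy PERIPH path hardcodes; with ID1 = 0x00030002 the device reports two rich layers with split display and TBU enabled.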
......
@@ -65,3 +65,69 @@ const s32 *komeda_select_yuv2rgb_coeffs(u32 color_encoding, u32 color_range)
 
 	return coeffs;
 }
+
+struct gamma_curve_sector {
+	u32 boundary_start;
+	u32 num_of_segments;
+	u32 segment_width;
+};
+
+struct gamma_curve_segment {
+	u32 start;
+	u32 end;
+};
+
+static struct gamma_curve_sector sector_tbl[] = {
+	{ 0,    4,  4   },
+	{ 16,   4,  4   },
+	{ 32,   4,  8   },
+	{ 64,   4,  16  },
+	{ 128,  4,  32  },
+	{ 256,  4,  64  },
+	{ 512,  16, 32  },
+	{ 1024, 24, 128 },
+};
+
+static void
+drm_lut_to_coeffs(struct drm_property_blob *lut_blob, u32 *coeffs,
+		  struct gamma_curve_sector *sector_tbl, u32 num_sectors)
+{
+	struct drm_color_lut *lut;
+	u32 i, j, in, num = 0;
+
+	if (!lut_blob)
+		return;
+
+	lut = lut_blob->data;
+
+	for (i = 0; i < num_sectors; i++) {
+		for (j = 0; j < sector_tbl[i].num_of_segments; j++) {
+			in = sector_tbl[i].boundary_start +
+			     j * sector_tbl[i].segment_width;
+
+			coeffs[num++] = drm_color_lut_extract(lut[in].red,
+						KOMEDA_COLOR_PRECISION);
+		}
+	}
+
+	coeffs[num] = BIT(KOMEDA_COLOR_PRECISION);
+}
+
+void drm_lut_to_fgamma_coeffs(struct drm_property_blob *lut_blob, u32 *coeffs)
+{
+	drm_lut_to_coeffs(lut_blob, coeffs, sector_tbl, ARRAY_SIZE(sector_tbl));
+}
+
+void drm_ctm_to_coeffs(struct drm_property_blob *ctm_blob, u32 *coeffs)
+{
+	struct drm_color_ctm *ctm;
+	u32 i;
+
+	if (!ctm_blob)
+		return;
+
+	ctm = ctm_blob->data;
+
+	for (i = 0; i < KOMEDA_N_CTM_COEFFS; i++)
+		coeffs[i] = drm_color_ctm_s31_32_to_qm_n(ctm->matrix[i], 3, 12);
+}
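The sector table above samples the gamma LUT non-uniformly: fine steps near zero, where the eye is most sensitive, and coarser steps toward the top. A quick host-side check (copy of the table, not kernel code) that the sectors tile the whole 4096-entry LUT with no gaps and yield exactly the 65 coefficients `KOMEDA_N_GAMMA_COEFFS` expects, counting the closing end point the loop writes last:

```c
#include <stdint.h>

/* Host-side copy of sector_tbl above, for a tiling/count sanity check. */
struct sector { uint32_t start, segments, width; };

static const struct sector tbl[] = {
	{ 0, 4, 4 }, { 16, 4, 4 }, { 32, 4, 8 }, { 64, 4, 16 },
	{ 128, 4, 32 }, { 256, 4, 64 }, { 512, 16, 32 }, { 1024, 24, 128 },
};

static uint32_t gamma_coeff_count(void)
{
	uint32_t i, n = 0;

	for (i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++)
		n += tbl[i].segments;
	return n + 1;	/* plus the final end-point coefficient */
}

/* Returns the end of the covered range, or 0 on a gap/overlap. */
static uint32_t gamma_covered_end(void)
{
	uint32_t i, next = 0;

	for (i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++) {
		if (tbl[i].start != next)
			return 0;
		next = tbl[i].start + tbl[i].segments * tbl[i].width;
	}
	return next;
}
```

The 64 segment starts plus the end point give 65 samples, and the sectors run contiguously from 0 to 4096 (= `KOMEDA_COLOR_LUT_SIZE`).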
@@ -11,7 +11,15 @@
 #include <drm/drm_color_mgmt.h>
 
 #define KOMEDA_N_YUV2RGB_COEFFS		12
+#define KOMEDA_N_RGB2YUV_COEFFS		12
+#define KOMEDA_COLOR_PRECISION		12
+#define KOMEDA_N_GAMMA_COEFFS		65
+#define KOMEDA_COLOR_LUT_SIZE		BIT(KOMEDA_COLOR_PRECISION)
+#define KOMEDA_N_CTM_COEFFS		9
+
+void drm_lut_to_fgamma_coeffs(struct drm_property_blob *lut_blob, u32 *coeffs);
+void drm_ctm_to_coeffs(struct drm_property_blob *ctm_blob, u32 *coeffs);
 
 const s32 *komeda_select_yuv2rgb_coeffs(u32 color_encoding, u32 color_range);
 
-#endif
+#endif /*_KOMEDA_COLOR_MGMT_H_*/
@@ -617,6 +617,8 @@ static int komeda_crtc_add(struct komeda_kms_dev *kms,
 	crtc->port = kcrtc->master->of_output_port;
 
+	drm_crtc_enable_color_mgmt(crtc, 0, true, KOMEDA_COLOR_LUT_SIZE);
+
 	return err;
 }
......
@@ -58,6 +58,8 @@ static void komeda_debugfs_init(struct komeda_dev *mdev)
 	mdev->debugfs_root = debugfs_create_dir("komeda", NULL);
 	debugfs_create_file("register", 0444, mdev->debugfs_root,
 			    mdev, &komeda_register_fops);
+	debugfs_create_x16("err_verbosity", 0664, mdev->debugfs_root,
+			   &mdev->err_verbosity);
 }
 #endif
 
@@ -113,22 +115,14 @@ static struct attribute_group komeda_sysfs_attr_group = {
 	.attrs = komeda_sysfs_entries,
 };
 
-static int komeda_parse_pipe_dt(struct komeda_dev *mdev, struct device_node *np)
+static int komeda_parse_pipe_dt(struct komeda_pipeline *pipe)
 {
-	struct komeda_pipeline *pipe;
+	struct device_node *np = pipe->of_node;
 	struct clk *clk;
-	u32 pipe_id;
-	int ret = 0;
-
-	ret = of_property_read_u32(np, "reg", &pipe_id);
-	if (ret != 0 || pipe_id >= mdev->n_pipelines)
-		return -EINVAL;
-
-	pipe = mdev->pipelines[pipe_id];
 
 	clk = of_clk_get_by_name(np, "pxclk");
 	if (IS_ERR(clk)) {
-		DRM_ERROR("get pxclk for pipeline %d failed!\n", pipe_id);
+		DRM_ERROR("get pxclk for pipeline %d failed!\n", pipe->id);
 		return PTR_ERR(clk);
 	}
 	pipe->pxlclk = clk;
@@ -142,7 +136,6 @@ static int komeda_parse_pipe_dt(struct komeda_dev *mdev, struct device_node *np)
 		of_graph_get_port_by_id(np, KOMEDA_OF_PORT_OUTPUT);
 
 	pipe->dual_link = pipe->of_output_links[0] && pipe->of_output_links[1];
-	pipe->of_node = of_node_get(np);
 
 	return 0;
 }
@@ -151,7 +144,9 @@ static int komeda_parse_dt(struct device *dev, struct komeda_dev *mdev)
 {
 	struct platform_device *pdev = to_platform_device(dev);
 	struct device_node *child, *np = dev->of_node;
-	int ret;
+	struct komeda_pipeline *pipe;
+	u32 pipe_id = U32_MAX;
+	int ret = -1;
 
 	mdev->irq = platform_get_irq(pdev, 0);
 	if (mdev->irq < 0) {
@@ -166,37 +161,44 @@ static int komeda_parse_dt(struct device *dev, struct komeda_dev *mdev)
 	ret = 0;
 
 	for_each_available_child_of_node(np, child) {
-		if (of_node_cmp(child->name, "pipeline") == 0) {
-			ret = komeda_parse_pipe_dt(mdev, child);
-			if (ret) {
-				DRM_ERROR("parse pipeline dt error!\n");
-				of_node_put(child);
-				break;
+		if (of_node_name_eq(child, "pipeline")) {
+			of_property_read_u32(child, "reg", &pipe_id);
+			if (pipe_id >= mdev->n_pipelines) {
+				DRM_WARN("Skip the redundant DT node: pipeline-%u.\n",
+					 pipe_id);
+				continue;
 			}
+			mdev->pipelines[pipe_id]->of_node = of_node_get(child);
 		}
 	}
 
-	return ret;
+	for (pipe_id = 0; pipe_id < mdev->n_pipelines; pipe_id++) {
+		pipe = mdev->pipelines[pipe_id];
+
+		if (!pipe->of_node) {
+			DRM_ERROR("Pipeline-%d doesn't have a DT node.\n",
+				  pipe->id);
+			return -EINVAL;
+		}
+
+		ret = komeda_parse_pipe_dt(pipe);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
 }
 
 struct komeda_dev *komeda_dev_create(struct device *dev)
 {
 	struct platform_device *pdev = to_platform_device(dev);
-	const struct komeda_product_data *product;
+	komeda_identify_func komeda_identify;
 	struct komeda_dev *mdev;
-	struct resource *io_res;
 	int err = 0;
 
-	product = of_device_get_match_data(dev);
-	if (!product)
+	komeda_identify = of_device_get_match_data(dev);
+	if (!komeda_identify)
 		return ERR_PTR(-ENODEV);
 
-	io_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (!io_res) {
-		DRM_ERROR("No registers defined.\n");
-		return ERR_PTR(-ENODEV);
-	}
-
 	mdev = devm_kzalloc(dev, sizeof(*mdev), GFP_KERNEL);
 	if (!mdev)
 		return ERR_PTR(-ENOMEM);
@@ -204,7 +206,7 @@ struct komeda_dev *komeda_dev_create(struct device *dev)
 	mutex_init(&mdev->lock);
 	mdev->dev = dev;
 
-	mdev->reg_base = devm_ioremap_resource(dev, io_res);
+	mdev->reg_base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(mdev->reg_base)) {
 		DRM_ERROR("Map register space failed.\n");
 		err = PTR_ERR(mdev->reg_base);
@@ -222,11 +224,9 @@ struct komeda_dev *komeda_dev_create(struct device *dev)
 	clk_prepare_enable(mdev->aclk);
 
-	mdev->funcs = product->identify(mdev->reg_base, &mdev->chip);
-	if (!komeda_product_match(mdev, product->product_id)) {
-		DRM_ERROR("DT configured %x mismatch with real HW %x.\n",
-			  product->product_id,
-			  MALIDP_CORE_ID_PRODUCT_ID(mdev->chip.core_id));
+	mdev->funcs = komeda_identify(mdev->reg_base, &mdev->chip);
+	if (!mdev->funcs) {
+		DRM_ERROR("Failed to identify the HW.\n");
 		err = -ENODEV;
 		goto disable_clk;
 	}
@@ -280,6 +280,8 @@ struct komeda_dev *komeda_dev_create(struct device *dev)
 		goto err_cleanup;
 	}
 
+	mdev->err_verbosity = KOMEDA_DEV_PRINT_ERR_EVENTS;
+
 #ifdef CONFIG_DEBUG_FS
 	komeda_debugfs_init(mdev);
 #endif
......
...@@ -51,10 +51,12 @@ ...@@ -51,10 +51,12 @@
#define KOMEDA_WARN_EVENTS KOMEDA_ERR_CSCE #define KOMEDA_WARN_EVENTS KOMEDA_ERR_CSCE
/* malidp device id */ #define KOMEDA_INFO_EVENTS (0 \
enum { | KOMEDA_EVENT_VSYNC \
MALI_D71 = 0, | KOMEDA_EVENT_FLIP \
}; | KOMEDA_EVENT_EOW \
| KOMEDA_EVENT_MODE \
)
/* pipeline DT ports */ /* pipeline DT ports */
enum { enum {
...@@ -69,12 +71,6 @@ struct komeda_chip_info { ...@@ -69,12 +71,6 @@ struct komeda_chip_info {
u32 bus_width; u32 bus_width;
}; };
struct komeda_product_data {
u32 product_id;
const struct komeda_dev_funcs *(*identify)(u32 __iomem *reg,
struct komeda_chip_info *info);
};
struct komeda_dev; struct komeda_dev;
struct komeda_events { struct komeda_events {
...@@ -202,6 +198,23 @@ struct komeda_dev { ...@@ -202,6 +198,23 @@ struct komeda_dev {
/** @debugfs_root: root directory of komeda debugfs */ /** @debugfs_root: root directory of komeda debugfs */
struct dentry *debugfs_root; struct dentry *debugfs_root;
/**
* @err_verbosity: bitmask for how much extra info to print on error
*
* See KOMEDA_DEV_* macros for details. Low byte contains the debug
* level categories, the high byte contains extra debug options.
*/
u16 err_verbosity;
/* Print a single line per error per frame with error events. */
#define KOMEDA_DEV_PRINT_ERR_EVENTS BIT(0)
/* Print a single line per warning per frame with error events. */
#define KOMEDA_DEV_PRINT_WARN_EVENTS BIT(1)
/* Print a single line per info event per frame with error events. */
#define KOMEDA_DEV_PRINT_INFO_EVENTS BIT(2)
/* Dump DRM state on an error or warning event. */
#define KOMEDA_DEV_PRINT_DUMP_STATE_ON_EVENT BIT(8)
/* Disable rate limiting of event prints (normally one per commit) */
#define KOMEDA_DEV_PRINT_DISABLE_RATELIMIT BIT(12)
}; };
static inline bool static inline bool
...@@ -210,6 +223,9 @@ komeda_product_match(struct komeda_dev *mdev, u32 target) ...@@ -210,6 +223,9 @@ komeda_product_match(struct komeda_dev *mdev, u32 target)
return MALIDP_CORE_ID_PRODUCT_ID(mdev->chip.core_id) == target; return MALIDP_CORE_ID_PRODUCT_ID(mdev->chip.core_id) == target;
} }
typedef const struct komeda_dev_funcs *
(*komeda_identify_func)(u32 __iomem *reg, struct komeda_chip_info *chip);
const struct komeda_dev_funcs * const struct komeda_dev_funcs *
d71_identify(u32 __iomem *reg, struct komeda_chip_info *chip); d71_identify(u32 __iomem *reg, struct komeda_chip_info *chip);
...@@ -218,11 +234,7 @@ void komeda_dev_destroy(struct komeda_dev *mdev);
struct komeda_dev *dev_to_mdev(struct device *dev);
-#ifdef CONFIG_DRM_KOMEDA_ERROR_PRINT
-void komeda_print_events(struct komeda_events *evts);
-#else
-static inline void komeda_print_events(struct komeda_events *evts) {}
-#endif
+void komeda_print_events(struct komeda_events *evts, struct drm_device *dev);

int komeda_dev_resume(struct komeda_dev *mdev);
int komeda_dev_suspend(struct komeda_dev *mdev);
......
...@@ -123,15 +123,9 @@ static int komeda_platform_remove(struct platform_device *pdev)
	return 0;
}
-static const struct komeda_product_data komeda_products[] = {
-	[MALI_D71] = {
-		.product_id = MALIDP_D71_PRODUCT_ID,
-		.identify = d71_identify,
-	},
-};

static const struct of_device_id komeda_of_match[] = {
-	{ .compatible = "arm,mali-d71", .data = &komeda_products[MALI_D71], },
+	{ .compatible = "arm,mali-d71", .data = d71_identify, },
+	{ .compatible = "arm,mali-d32", .data = d71_identify, },
	{},
};
......
...@@ -4,6 +4,7 @@
 * Author: James.Qian.Wang <james.qian.wang@arm.com>
 *
 */
+#include <drm/drm_atomic.h>
#include <drm/drm_print.h>
#include "komeda_dev.h"
...@@ -16,6 +17,7 @@ struct komeda_str {
/* return 0 on success, < 0 on no space.
 */
+__printf(2, 3)
static int komeda_sprintf(struct komeda_str *str, const char *fmt, ...)
{
	va_list args;
...@@ -107,20 +109,31 @@ static bool is_new_frame(struct komeda_events *a)
		(KOMEDA_EVENT_FLIP | KOMEDA_EVENT_EOW);
}
-void komeda_print_events(struct komeda_events *evts)
+void komeda_print_events(struct komeda_events *evts, struct drm_device *dev)
{
-	u64 print_evts = KOMEDA_ERR_EVENTS;
+	u64 print_evts = 0;
	static bool en_print = true;
+	struct komeda_dev *mdev = dev->dev_private;
+	u16 const err_verbosity = mdev->err_verbosity;
+	u64 evts_mask = evts->global | evts->pipes[0] | evts->pipes[1];

	/* reduce the same msg print, only print the first evt for one frame */
	if (evts->global || is_new_frame(evts))
		en_print = true;
-	if (!en_print)
+	if (!(err_verbosity & KOMEDA_DEV_PRINT_DISABLE_RATELIMIT) && !en_print)
		return;

-	if ((evts->global | evts->pipes[0] | evts->pipes[1]) & print_evts) {
+	if (err_verbosity & KOMEDA_DEV_PRINT_ERR_EVENTS)
+		print_evts |= KOMEDA_ERR_EVENTS;
+	if (err_verbosity & KOMEDA_DEV_PRINT_WARN_EVENTS)
+		print_evts |= KOMEDA_WARN_EVENTS;
+	if (err_verbosity & KOMEDA_DEV_PRINT_INFO_EVENTS)
+		print_evts |= KOMEDA_INFO_EVENTS;
+
+	if (evts_mask & print_evts) {
		char msg[256];
		struct komeda_str str;
+		struct drm_printer p = drm_info_printer(dev->dev);

		str.str = msg;
		str.sz = sizeof(msg);
...@@ -134,6 +147,9 @@ void komeda_print_events(struct komeda_events *evts)
		evt_str(&str, evts->pipes[1]);

		DRM_ERROR("err detect: %s\n", msg);
+		if ((err_verbosity & KOMEDA_DEV_PRINT_DUMP_STATE_ON_EVENT) &&
+		    (evts_mask & (KOMEDA_ERR_EVENTS | KOMEDA_WARN_EVENTS)))
+			drm_state_dump(dev, &p);

		en_print = false;
	}
......
...@@ -48,7 +48,7 @@ static irqreturn_t komeda_kms_irq_handler(int irq, void *data)
	memset(&evts, 0, sizeof(evts));
	status = mdev->funcs->irq_handler(mdev, &evts);

-	komeda_print_events(&evts);
+	komeda_print_events(&evts, drm);

	/* Notify the crtc to handle the events */
	for (i = 0; i < kms->n_crtcs; i++)
......
...@@ -11,6 +11,7 @@
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include "malidp_utils.h"
+#include "komeda_color_mgmt.h"

#define KOMEDA_MAX_PIPELINES		2
#define KOMEDA_PIPELINE_MAX_LAYERS	4
...@@ -327,6 +328,8 @@ struct komeda_improc_state {
	struct komeda_component_state base;
	u8 color_format, color_depth;
	u16 hsize, vsize;
+	u32 fgamma_coeffs[KOMEDA_N_GAMMA_COEFFS];
+	u32 ctm_coeffs[KOMEDA_N_CTM_COEFFS];
};

/* display timing controller */
......
...@@ -802,6 +802,12 @@ komeda_improc_validate(struct komeda_improc *improc,
		st->color_format = BIT(__ffs(avail_formats));
	}

+	if (kcrtc_st->base.color_mgmt_changed) {
+		drm_lut_to_fgamma_coeffs(kcrtc_st->base.gamma_lut,
+					 st->fgamma_coeffs);
+		drm_ctm_to_coeffs(kcrtc_st->base.ctm, st->ctm_coeffs);
+	}
+
	komeda_component_add_input(&st->base, &dflow->input, 0);
	komeda_component_set_output(&dflow->input, &improc->base, 0);
......
...@@ -16,7 +16,7 @@
#include "armada_fb.h"
#include "armada_gem.h"

-static /*const*/ struct fb_ops armada_fb_ops = {
+static const struct fb_ops armada_fb_ops = {
	.owner		= THIS_MODULE,
	DRM_FB_HELPER_DEFAULT_OPS,
	.fb_fillrect	= drm_fb_helper_cfb_fillrect,
......
...@@ -461,16 +461,6 @@ static void armada_gem_prime_unmap_dma_buf(struct dma_buf_attachment *attach,
	kfree(sgt);
}

-static void *armada_gem_dmabuf_no_kmap(struct dma_buf *buf, unsigned long n)
-{
-	return NULL;
-}
-
-static void
-armada_gem_dmabuf_no_kunmap(struct dma_buf *buf, unsigned long n, void *addr)
-{
-}
-
static int
armada_gem_dmabuf_mmap(struct dma_buf *buf, struct vm_area_struct *vma)
{
...@@ -481,8 +471,6 @@ static const struct dma_buf_ops armada_gem_prime_dmabuf_ops = {
	.map_dma_buf	= armada_gem_prime_map_dma_buf,
	.unmap_dma_buf	= armada_gem_prime_unmap_dma_buf,
	.release	= drm_gem_dmabuf_release,
-	.map		= armada_gem_dmabuf_no_kmap,
-	.unmap		= armada_gem_dmabuf_no_kunmap,
	.mmap		= armada_gem_dmabuf_mmap,
};
......
...@@ -33,7 +33,6 @@
#include <drm/drm_crtc_helper.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem_vram_helper.h>
-#include <drm/drm_pci.h>
#include <drm/drm_probe_helper.h>

#include "ast_drv.h"
...@@ -86,9 +85,42 @@ static void ast_kick_out_firmware_fb(struct pci_dev *pdev)
static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
+	struct drm_device *dev;
+	int ret;
+
	ast_kick_out_firmware_fb(pdev);

-	return drm_get_pci_dev(pdev, ent, &driver);
+	ret = pci_enable_device(pdev);
+	if (ret)
+		return ret;
+
+	dev = drm_dev_alloc(&driver, &pdev->dev);
+	if (IS_ERR(dev)) {
+		ret = PTR_ERR(dev);
+		goto err_pci_disable_device;
+	}
+
+	dev->pdev = pdev;
+	pci_set_drvdata(pdev, dev);
+
+	ret = ast_driver_load(dev, ent->driver_data);
+	if (ret)
+		goto err_drm_dev_put;
+
+	ret = drm_dev_register(dev, ent->driver_data);
+	if (ret)
+		goto err_ast_driver_unload;
+
+	return 0;
+
+err_ast_driver_unload:
+	ast_driver_unload(dev);
+err_drm_dev_put:
+	drm_dev_put(dev);
+err_pci_disable_device:
+	pci_disable_device(pdev);
+	return ret;
}
static void
...@@ -96,17 +128,19 @@ ast_pci_remove(struct pci_dev *pdev)
{
	struct drm_device *dev = pci_get_drvdata(pdev);

-	drm_put_dev(dev);
+	drm_dev_unregister(dev);
+	ast_driver_unload(dev);
+	drm_dev_put(dev);
}
static int ast_drm_freeze(struct drm_device *dev)
{
-	drm_kms_helper_poll_disable(dev);
-	pci_save_state(dev->pdev);
-	drm_fb_helper_set_suspend_unlocked(dev->fb_helper, true);
+	int error;
+
+	error = drm_mode_config_helper_suspend(dev);
+	if (error)
+		return error;
+	pci_save_state(dev->pdev);

	return 0;
}
...@@ -114,11 +148,7 @@ static int ast_drm_thaw(struct drm_device *dev)
{
	ast_post_gpu(dev);

-	drm_mode_config_reset(dev);
-	drm_helper_resume_force_mode(dev);
-	drm_fb_helper_set_suspend_unlocked(dev->fb_helper, false);
-
-	return 0;
+	return drm_mode_config_helper_resume(dev);
}
static int ast_drm_resume(struct drm_device *dev)
...@@ -131,8 +161,6 @@ static int ast_drm_resume(struct drm_device *dev)
	ret = ast_drm_thaw(dev);
	if (ret)
		return ret;

-	drm_kms_helper_poll_enable(dev);
-
	return 0;
}
...@@ -150,6 +178,7 @@ static int ast_pm_suspend(struct device *dev)
	pci_set_power_state(pdev, PCI_D3hot);
	return 0;
}

static int ast_pm_resume(struct device *dev)
{
	struct pci_dev *pdev = to_pci_dev(dev);
...@@ -165,7 +194,6 @@ static int ast_pm_freeze(struct device *dev)
	if (!ddev || !ddev->dev_private)
		return -ENODEV;
	return ast_drm_freeze(ddev);
}

static int ast_pm_thaw(struct device *dev)
...@@ -203,10 +231,9 @@ static struct pci_driver ast_pci_driver = {
DEFINE_DRM_GEM_FOPS(ast_fops);

static struct drm_driver driver = {
-	.driver_features = DRIVER_MODESET | DRIVER_GEM,
-
-	.load = ast_driver_load,
-	.unload = ast_driver_unload,
+	.driver_features = DRIVER_ATOMIC |
+			   DRIVER_GEM |
+			   DRIVER_MODESET,

	.fops = &ast_fops,
	.name = DRIVER_NAME,
......
...@@ -121,6 +121,9 @@ struct ast_private {
		unsigned int next_index;
	} cursor;

+	struct drm_plane primary_plane;
+	struct drm_plane cursor_plane;
+
	bool support_wide_screen;
	enum {
		ast_use_p2a,
...@@ -137,8 +140,6 @@ struct ast_private {
int ast_driver_load(struct drm_device *dev, unsigned long flags);
void ast_driver_unload(struct drm_device *dev);

-struct ast_gem_object;
-
#define AST_IO_AR_PORT_WRITE		(0x40)
#define AST_IO_MISC_PORT_WRITE		(0x42)
#define AST_IO_VGA_ENABLE_PORT		(0x43)
...@@ -280,6 +281,17 @@ struct ast_vbios_mode_info {
	const struct ast_vbios_enhtable *enh_table;
};
struct ast_crtc_state {
struct drm_crtc_state base;
/* Last known format of primary plane */
const struct drm_format_info *format;
struct ast_vbios_mode_info vbios_mode_info;
};
#define to_ast_crtc_state(state) container_of(state, struct ast_crtc_state, base)
extern int ast_mode_init(struct drm_device *dev);
extern void ast_mode_fini(struct drm_device *dev);
...@@ -289,10 +301,6 @@ extern void ast_mode_fini(struct drm_device *dev);
int ast_mm_init(struct ast_private *ast);
void ast_mm_fini(struct ast_private *ast);

-int ast_gem_create(struct drm_device *dev,
-		   u32 size, bool iskernel,
-		   struct drm_gem_object **obj);
-
/* ast post */
void ast_enable_vga(struct drm_device *dev);
void ast_enable_mmio(struct drm_device *dev);
......
...@@ -28,6 +28,7 @@
#include <linux/pci.h>

+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_gem.h>
...@@ -387,8 +388,33 @@ static int ast_get_dram_info(struct drm_device *dev)
	return 0;
}
enum drm_mode_status ast_mode_config_mode_valid(struct drm_device *dev,
const struct drm_display_mode *mode)
{
	static const unsigned long max_bpp = 4; /* DRM_FORMAT_XRGB8888 */
struct ast_private *ast = dev->dev_private;
unsigned long fbsize, fbpages, max_fbpages;
/* To support double buffering, a framebuffer may not
* consume more than half of the available VRAM.
*/
max_fbpages = (ast->vram_size / 2) >> PAGE_SHIFT;
fbsize = mode->hdisplay * mode->vdisplay * max_bpp;
fbpages = DIV_ROUND_UP(fbsize, PAGE_SIZE);
if (fbpages > max_fbpages)
return MODE_MEM;
return MODE_OK;
}
static const struct drm_mode_config_funcs ast_mode_funcs = {
-	.fb_create = drm_gem_fb_create
+	.fb_create = drm_gem_fb_create,
+	.mode_valid = ast_mode_config_mode_valid,
+	.atomic_check = drm_atomic_helper_check,
+	.atomic_commit = drm_atomic_helper_commit,
};
static u32 ast_get_vram_info(struct drm_device *dev)
...@@ -506,6 +532,8 @@ int ast_driver_load(struct drm_device *dev, unsigned long flags)
	if (ret)
		goto out_free;
drm_mode_config_reset(dev);
	ret = drm_fbdev_generic_setup(dev, 32);
	if (ret)
		goto out_free;
...@@ -535,27 +563,3 @@ void ast_driver_unload(struct drm_device *dev)
	pci_iounmap(dev->pdev, ast->regs);
	kfree(ast);
}
int ast_gem_create(struct drm_device *dev,
u32 size, bool iskernel,
struct drm_gem_object **obj)
{
struct drm_gem_vram_object *gbo;
int ret;
*obj = NULL;
size = roundup(size, PAGE_SIZE);
if (size == 0)
return -EINVAL;
gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev, size, 0, false);
if (IS_ERR(gbo)) {
ret = PTR_ERR(gbo);
if (ret != -ERESTARTSYS)
DRM_ERROR("failed to allocate GEM object\n");
return ret;
}
*obj = &gbo->bo.base;
return 0;
}
...@@ -557,12 +557,6 @@ static irqreturn_t atmel_hlcdc_dc_irq_handler(int irq, void *data)
	return IRQ_HANDLED;
}
-static struct drm_framebuffer *atmel_hlcdc_fb_create(struct drm_device *dev,
-		struct drm_file *file_priv, const struct drm_mode_fb_cmd2 *mode_cmd)
-{
-	return drm_gem_fb_create(dev, file_priv, mode_cmd);
-}
-
struct atmel_hlcdc_dc_commit {
	struct work_struct work;
	struct drm_device *dev;
...@@ -657,7 +651,7 @@ static int atmel_hlcdc_dc_atomic_commit(struct drm_device *dev,
}

static const struct drm_mode_config_funcs mode_config_funcs = {
-	.fb_create = atmel_hlcdc_fb_create,
+	.fb_create = drm_gem_fb_create,
	.atomic_check = drm_atomic_helper_check,
	.atomic_commit = atmel_hlcdc_dc_atomic_commit,
};
......
...@@ -604,7 +604,7 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
	int ret;
	int i;

-	if (!state->base.crtc || !fb)
+	if (!state->base.crtc || WARN_ON(!fb))
		return 0;

	crtc_state = drm_atomic_get_existing_crtc_state(s->state, s->crtc);
......
...@@ -16,16 +16,6 @@ config DRM_PANEL_BRIDGE
menu "Display Interface Bridges"
	depends on DRM && DRM_BRIDGE
config DRM_ANALOGIX_ANX78XX
tristate "Analogix ANX78XX bridge"
select DRM_KMS_HELPER
select REGMAP_I2C
---help---
ANX78XX is an ultra-low power Full-HD SlimPort transmitter
designed for portable devices. The ANX78XX transforms
the HDMI output of an application processor to MyDP
or DisplayPort.
config DRM_CDNS_DSI
	tristate "Cadence DPI/DSI bridge"
	select DRM_KMS_HELPER
...@@ -60,10 +50,10 @@ config DRM_MEGACHIPS_STDPXXXX_GE_B850V3_FW
	select DRM_KMS_HELPER
	select DRM_PANEL
	---help---
	  This is a driver for the display bridges of
	  GE B850v3 that convert dual channel LVDS
	  to DP++. This is used with the i.MX6 imx-ldb
	  driver. You are likely to say N here.
config DRM_NXP_PTN3460
	tristate "NXP PTN3460 DP/LVDS bridge"
......
# SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_DRM_ANALOGIX_ANX78XX) += analogix-anx78xx.o
obj-$(CONFIG_DRM_CDNS_DSI) += cdns-dsi.o
obj-$(CONFIG_DRM_DUMB_VGA_DAC) += dumb-vga-dac.o
obj-$(CONFIG_DRM_LVDS_ENCODER) += lvds-encoder.o
...@@ -12,8 +11,9 @@ obj-$(CONFIG_DRM_SII9234) += sii9234.o
obj-$(CONFIG_DRM_THINE_THC63LVD1024) += thc63lvd1024.o
obj-$(CONFIG_DRM_TOSHIBA_TC358764) += tc358764.o
obj-$(CONFIG_DRM_TOSHIBA_TC358767) += tc358767.o
-obj-$(CONFIG_DRM_ANALOGIX_DP) += analogix/
obj-$(CONFIG_DRM_I2C_ADV7511) += adv7511/
obj-$(CONFIG_DRM_TI_SN65DSI86) += ti-sn65dsi86.o
obj-$(CONFIG_DRM_TI_TFP410) += ti-tfp410.o
+obj-y += analogix/
obj-y += synopsys/
# SPDX-License-Identifier: GPL-2.0-only
config DRM_ANALOGIX_ANX6345
tristate "Analogix ANX6345 bridge"
depends on OF
select DRM_ANALOGIX_DP
select DRM_KMS_HELPER
select REGMAP_I2C
help
ANX6345 is an ultra-low Full-HD DisplayPort/eDP
transmitter designed for portable devices. The
ANX6345 transforms the LVTTL RGB output of an
application processor to eDP or DisplayPort.
config DRM_ANALOGIX_ANX78XX
tristate "Analogix ANX78XX bridge"
select DRM_ANALOGIX_DP
select DRM_KMS_HELPER
select REGMAP_I2C
help
ANX78XX is an ultra-low power Full-HD SlimPort transmitter
designed for portable devices. The ANX78XX transforms
the HDMI output of an application processor to MyDP
or DisplayPort.
config DRM_ANALOGIX_DP
	tristate
	depends on DRM
# SPDX-License-Identifier: GPL-2.0-only
-analogix_dp-objs := analogix_dp_core.o analogix_dp_reg.o
+analogix_dp-objs := analogix_dp_core.o analogix_dp_reg.o analogix-i2c-dptx.o
+obj-$(CONFIG_DRM_ANALOGIX_ANX6345) += analogix-anx6345.o
+obj-$(CONFIG_DRM_ANALOGIX_ANX78XX) += analogix-anx78xx.o
obj-$(CONFIG_DRM_ANALOGIX_DP) += analogix_dp.o
...@@ -1111,7 +1111,7 @@ static int analogix_dp_get_modes(struct drm_connector *connector)
	int ret, num_modes = 0;

	if (dp->plat_data->panel) {
-		num_modes += drm_panel_get_modes(dp->plat_data->panel);
+		num_modes += drm_panel_get_modes(dp->plat_data->panel, connector);
	} else {
		ret = analogix_dp_prepare_panel(dp, true, false);
		if (ret) {
......
...@@ -37,7 +37,7 @@ static int panel_bridge_connector_get_modes(struct drm_connector *connector)
	struct panel_bridge *panel_bridge =
			drm_connector_to_panel_bridge(connector);

-	return drm_panel_get_modes(panel_bridge->panel);
+	return drm_panel_get_modes(panel_bridge->panel, connector);
}
static const struct drm_connector_helper_funcs
...@@ -289,3 +289,21 @@ struct drm_bridge *devm_drm_panel_bridge_add_typed(struct device *dev,
	return bridge;
}
EXPORT_SYMBOL(devm_drm_panel_bridge_add_typed);
/**
* drm_panel_bridge_connector - return the connector for the panel bridge
*
* drm_panel_bridge creates the connector.
* This function gives external access to the connector.
*
* Returns: Pointer to drm_connector
*/
struct drm_connector *drm_panel_bridge_connector(struct drm_bridge *bridge)
{
struct panel_bridge *panel_bridge;
panel_bridge = drm_bridge_to_panel_bridge(bridge);
return &panel_bridge->connector;
}
EXPORT_SYMBOL(drm_panel_bridge_connector);
...@@ -461,7 +461,7 @@ static int ps8622_get_modes(struct drm_connector *connector)
	ps8622 = connector_to_ps8622(connector);

-	return drm_panel_get_modes(ps8622->panel);
+	return drm_panel_get_modes(ps8622->panel, connector);
}

static const struct drm_connector_helper_funcs ps8622_connector_helper_funcs = {
......
...@@ -282,7 +282,7 @@ static int tc358764_get_modes(struct drm_connector *connector)
{
	struct tc358764 *ctx = connector_to_tc358764(connector);

-	return drm_panel_get_modes(ctx->panel);
+	return drm_panel_get_modes(ctx->panel, connector);
}

static const
......
...@@ -1346,7 +1346,7 @@ static int tc_connector_get_modes(struct drm_connector *connector)
		return 0;
	}

-	count = drm_panel_get_modes(tc->panel);
+	count = drm_panel_get_modes(tc->panel, connector);
	if (count > 0)
		return count;
......
...@@ -206,7 +206,7 @@ static int ti_sn_bridge_connector_get_modes(struct drm_connector *connector)
{
	struct ti_sn_bridge *pdata = connector_to_ti_sn_bridge(connector);

-	return drm_panel_get_modes(pdata->panel);
+	return drm_panel_get_modes(pdata->panel, connector);
}

static enum drm_mode_status
......
...@@ -212,7 +212,7 @@ int drm_agp_alloc(struct drm_device *dev, struct drm_agp_buffer *request)
	if (!entry)
		return -ENOMEM;

-	pages = (request->size + PAGE_SIZE - 1) / PAGE_SIZE;
+	pages = DIV_ROUND_UP(request->size, PAGE_SIZE);
	type = (u32) request->type;
	memory = agp_allocate_memory(dev->agp->bridge, pages, type);
	if (!memory) {
...@@ -325,7 +325,7 @@ int drm_agp_bind(struct drm_device *dev, struct drm_agp_binding *request)
	entry = drm_agp_lookup_entry(dev, request->handle);
	if (!entry || entry->bound)
		return -EINVAL;

-	page = (request->offset + PAGE_SIZE - 1) / PAGE_SIZE;
+	page = DIV_ROUND_UP(request->offset, PAGE_SIZE);
	retcode = drm_bind_agp(entry->memory, page);
	if (retcode)
		return retcode;
......
...@@ -688,10 +688,12 @@ static void drm_atomic_plane_print_state(struct drm_printer *p,
 * associated state struct &drm_private_state.
 *
 * Similar to userspace-exposed objects, private state structures can be
- * acquired by calling drm_atomic_get_private_obj_state(). Since this function
- * does not take care of locking, drivers should wrap it for each type of
- * private state object they have with the required call to drm_modeset_lock()
- * for the corresponding &drm_modeset_lock.
+ * acquired by calling drm_atomic_get_private_obj_state(). This also takes care
+ * of locking, hence drivers should not have a need to call drm_modeset_lock()
+ * directly. Sequence of the actual hardware state commit is not handled,
+ * drivers might need to keep track of struct drm_crtc_commit within subclassed
+ * structure of &drm_private_state as necessary, e.g. similar to
+ * &drm_plane_state.commit. See also &drm_atomic_state.fake_commit.
 *
 * All private state structures contained in a &drm_atomic_state update can be
 * iterated using for_each_oldnew_private_obj_in_state(),
......
...@@ -48,6 +48,8 @@
#include <drm/drm_print.h>
#include <drm/drm_vblank.h>

+#include "drm_crtc_helper_internal.h"
+
/**
 * DOC: overview
 *
......
...@@ -285,7 +285,7 @@ static int drm_cpu_valid(void)
}

/*
- * Called whenever a process opens /dev/drm.
+ * Called whenever a process opens a drm node
 *
 * \param filp file pointer.
 * \param minor acquired minor-object.
......