Commit a09e9a7a authored by Linus Torvalds

Merge branch 'drm-next' of git://people.freedesktop.org/~airlied/linux

Pull drm tree changes from Dave Airlie:
 "This is the main drm pull request, I have some overlap with sound and
  arm-soc, the sound patch is acked and may conflict based on -next
  reports but should be a trivial fixup, which I'll leave to you!

  Highlights:

   - new drivers:

     MSM driver from Rob Clark

   - non-drm:

     switcheroo and hdmi audio driver support for secondary GPU
     poweroff, so drivers can use runtime PM to power off the GPUs.  This
     can save 5 or 6W on some optimus laptops.

   - drm core:

     combined GEM and TTM VMA manager
     per-filp mmap permission tracking
     initial render-node support (via a runtime enable for now, until the API is stable),
     remove old proc support,
     lots of cleanups of legacy code
     hdmi vendor infoframes and 4k modes
     lots of gem/prime locking and races fixes
     async pageflip scaffolding
     drm bridge objects

   - i915:

     Haswell PC8+ support and eLLC support, HDMI 4K support, initial
     per-process VMA pieces, watermark reworks, convert to generic hdmi
     infoframes, encoder reworking, fastboot support,

   - radeon:

     CIK PM support, remove 3d blit code in favour of DMA engines,
     Berlin GPU support, HDMI audio fixes

   - nouveau:

     secondary GPU power down support for optimus laptops, lots of
     fixes, use MSI, VP3 engine support

   - exynos:

     runtime pm support for g2d, DT support, remove non-DT,

   - tda998x i2c driver:

     lots of fixes for sync issues

   - gma500:

     lots of cleanups

   - rcar:

     add LVDS support, fbdev emulation,

   - tegra:

     just minor fixes"

* 'drm-next' of git://people.freedesktop.org/~airlied/linux: (684 commits)
  drm/exynos: Fix build error with exynos_drm_connector.c
  drm/exynos: Remove non-DT support in exynos_drm_fimd
  drm/exynos: Remove non-DT support in exynos_hdmi
  drm/exynos: Remove non-DT support in exynos_drm_g2d
  drm/exynos: Remove non-DT support in exynos_hdmiphy
  drm/exynos: Remove non-DT support in exynos_ddc
  drm/exynos: Make Exynos DRM drivers depend on OF
  drm/exynos: Consider fallback option to allocation fail
  drm/exynos: fimd: move platform data parsing to separate function
  drm/exynos: fimd: get signal polarities from device tree
  drm/exynos: fimd: replace struct fb_videomode with videomode
  drm/exynos: check a pixel format to a particular window layer
  drm/exynos: fix fimd pixel format setting
  drm/exynos: Add NULL pointer check
  drm/exynos: Remove redundant error messages
  drm/exynos: Add missing of.h header include
  drm/exynos: Remove redundant NULL check in exynos_drm_buf
  drm/exynos: add device tree support for rotator
  drm/exynos: Add missing includes
  drm/exynos: add runtime pm interfaces to g2d driver
  ...
@@ -155,13 +155,6 @@
will become a fatal error.
</para></listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_USE_MTRR</term>
<listitem><para>
Driver uses MTRR interface for mapping memory, the DRM core will
manage MTRR resources. Deprecated.
</para></listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_PCI_DMA</term>
<listitem><para>
@@ -194,28 +187,6 @@
support shared IRQs (note that this is required of PCI drivers).
</para></listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_IRQ_VBL</term>
<listitem><para>Unused. Deprecated.</para></listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_DMA_QUEUE</term>
<listitem><para>
Should be set if the driver queues DMA requests and completes them
asynchronously. Deprecated.
</para></listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_FB_DMA</term>
<listitem><para>
Driver supports DMA to/from the framebuffer, mapping of framebuffer
DMA buffers to userspace will be supported. Deprecated.
</para></listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_IRQ_VBL2</term>
<listitem><para>Unused. Deprecated.</para></listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_GEM</term>
<listitem><para>
@@ -234,6 +205,12 @@
Driver implements DRM PRIME buffer sharing.
</para></listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_RENDER</term>
<listitem><para>
Driver supports dedicated render nodes.
</para></listitem>
</varlistentry>
</variablelist>
</sect3>
<sect3>
@@ -2212,6 +2189,18 @@ void intel_crt_init(struct drm_device *dev)
!Iinclude/drm/drm_rect.h
!Edrivers/gpu/drm/drm_rect.c
</sect2>
<sect2>
<title>Flip-work Helper Reference</title>
!Pinclude/drm/drm_flip_work.h flip utils
!Iinclude/drm/drm_flip_work.h
!Edrivers/gpu/drm/drm_flip_work.c
</sect2>
<sect2>
<title>VMA Offset Manager</title>
!Pdrivers/gpu/drm/drm_vma_manager.c vma offset manager
!Edrivers/gpu/drm/drm_vma_manager.c
!Iinclude/drm/drm_vma_manager.h
</sect2>
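Editor's note: a condensed sketch of what the new VMA offset manager means for a TTM-based driver. The "foo" names are hypothetical; the ast and cirrus hunks later in this commit show the real conversions.

	#include <drm/drm_vma_manager.h>

	struct foo_bo {
		struct ttm_buffer_object bo;
		/* ... driver-private fields ... */
	};

	/* The BO's embedded vma_node now supplies the fake mmap offset
	 * that used to live in ttm_buffer_object::addr_space_offset. */
	static inline u64 foo_bo_mmap_offset(struct foo_bo *bo)
	{
		return drm_vma_node_offset_addr(&bo->bo.vma_node);
	}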
</sect1>
<!-- Internals: kms properties -->
@@ -2422,18 +2411,18 @@ void (*postclose) (struct drm_device *, struct drm_file *);</synopsis>
</abstract>
<para>
The <methodname>firstopen</methodname> method is called by the DRM core
for legacy UMS (User Mode Setting) drivers only when an application
opens a device that has no other opened file handle. UMS drivers can
implement it to acquire device resources. KMS drivers can't use the
method and must acquire resources in the <methodname>load</methodname>
method instead.
</para>
<para>
Similarly the <methodname>lastclose</methodname> method is called when
the last application holding a file handle opened on the device closes
it, for both UMS and KMS drivers. Additionally, the method is also
called at module unload time or, for hot-pluggable devices, when the
device is unplugged. The <methodname>firstopen</methodname> and
<methodname>lastclose</methodname> calls can thus be unbalanced.
</para>
<para>
@@ -2462,7 +2451,12 @@ void (*postclose) (struct drm_device *, struct drm_file *);</synopsis>
The <methodname>lastclose</methodname> method should restore CRTC and
plane properties to default value, so that a subsequent open of the
device will not inherit state from the previous user. It can also be
device will not inherit state from the previous user. device will not inherit state from the previous user. It can also be
used to execute delayed power switching state changes, e.g. in
conjunction with the vga-switcheroo infrastructure. Beyond that KMS
drivers should not do any further cleanup. Only legacy UMS drivers might
need to clean up device state so that the vga console or an independent
fbdev driver could take over.
</para>
</sect2>
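Editor's note: a minimal sketch of the lastclose pattern described above, using hypothetical "foo" names. Restoring the fbdev configuration covers both the fbcon/VT case and the vga-switcheroo case.

	struct foo_device {
		struct drm_fb_helper fb_helper;
		/* ... */
	};

	static void foo_lastclose(struct drm_device *dev)
	{
		struct foo_device *foo = dev->dev_private;

		/* Hand the display back to the kernel fbdev console; errors
		 * are ignored, nobody is left to report them to. */
		drm_modeset_lock_all(dev);
		drm_fb_helper_restore_fbdev_mode(&foo->fb_helper);
		drm_modeset_unlock_all(dev);
	}

	static struct drm_driver foo_driver = {
		.driver_features = DRIVER_MODESET | DRIVER_GEM,
		.lastclose = foo_lastclose,
		/* ... */
	};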
<sect2>
@@ -2498,7 +2492,6 @@ void (*postclose) (struct drm_device *, struct drm_file *);</synopsis>
<programlisting>
.poll = drm_poll,
.read = drm_read,
.fasync = drm_fasync,
.llseek = no_llseek,
</programlisting>
</para>
@@ -2657,6 +2650,69 @@ int (*resume) (struct drm_device *);</synopsis>
info, since man pages should cover the rest.
</para>
<!-- External: render nodes -->
<sect1>
<title>Render nodes</title>
<para>
DRM core provides multiple character-devices for user-space to use.
Depending on which device is opened, user-space can perform a different
set of operations (mainly ioctls). The primary node is always created
and called <term>card&lt;num&gt;</term>. Additionally, a currently
unused control node, called <term>controlD&lt;num&gt;</term>, is also
created. The primary node provides all legacy operations and
historically was the only interface used by userspace. With KMS, the
control node was introduced. However, the planned KMS control interface
has never been written and so the control node stays unused to date.
</para>
<para>
With the increased use of offscreen renderers and GPGPU applications,
clients no longer require running compositors or graphics servers to
make use of a GPU. But the DRM API required unprivileged clients to
authenticate to a DRM-Master prior to getting GPU access. To avoid this
step and to grant clients GPU access without authenticating, render
nodes were introduced. Render nodes solely serve render clients, that
is, no modesetting or privileged ioctls can be issued on render nodes.
Only non-global rendering commands are allowed. If a driver supports
render nodes, it must advertise it via the <term>DRIVER_RENDER</term>
DRM driver capability. If not supported, the primary node must be used
for render clients together with the legacy drmAuth authentication
procedure.
</para>
<para>
If a driver advertises render node support, DRM core will create a
separate render node called <term>renderD&lt;num&gt;</term>. There will
be one render node per device. No ioctls except PRIME-related ioctls
will be allowed on this node. In particular, <term>GEM_OPEN</term> will be
explicitly prohibited. Render nodes are designed to avoid the
buffer leaks that occur if clients guess the flink names or mmap
offsets on the legacy interface. In addition to this basic interface,
drivers must mark their driver-dependent render-only ioctls as
<term>DRM_RENDER_ALLOW</term> so render clients can use them. Driver
authors must be careful not to allow any privileged ioctls on render
nodes.
</para>
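Editor's note: a sketch of how a driver might mark an ioctl render-safe. The FOO_* ioctls and handlers are hypothetical; DRM_IOCTL_DEF_DRV is the standard driver ioctl-table macro.

	static const struct drm_ioctl_desc foo_ioctls[] = {
		/* Render-safe: submits non-global rendering commands, so it
		 * is reachable from renderD<num> via DRM_RENDER_ALLOW. */
		DRM_IOCTL_DEF_DRV(FOO_SUBMIT, foo_submit_ioctl,
				  DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
		/* Privileged: not flagged, therefore rejected on render nodes. */
		DRM_IOCTL_DEF_DRV(FOO_SET_GLOBAL_PARAM, foo_set_global_param_ioctl,
				  DRM_MASTER | DRM_UNLOCKED),
	};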
<para>
With render nodes, user-space can now control access to the render node
via basic file-system access-modes. A running graphics server which
authenticates clients on the privileged primary/legacy node is no longer
required. Instead, a client can open the render node and is immediately
granted GPU access. Communication between clients (or servers) is done
via PRIME. FLINK from render node to legacy node is not supported. New
clients must not use the insecure FLINK interface.
</para>
<para>
Besides dropping all modeset/global ioctls, render nodes also drop the
DRM-Master concept. There is no reason to associate render clients with
a DRM-Master as they are independent of any graphics server. Moreover,
they must work without any running master, anyway.
Drivers must be able to run without a master object if they support
render nodes. If, on the other hand, a driver requires shared state
between clients which is visible to user-space and accessible beyond
open-file boundaries, it cannot support render nodes.
</para>
</sect1>
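Editor's note: a minimal userspace sketch of the access model described above, assuming the conventional /dev/dri/renderD128 name for the first render node.

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* No drmAuth/DRM-Master handshake: file-system permission on
		 * the node is the whole access-control model. */
		int fd = open("/dev/dri/renderD128", O_RDWR);

		if (fd < 0) {
			perror("open render node");
			return 1;
		}
		/* fd may now issue PRIME ioctls and driver ioctls marked
		 * DRM_RENDER_ALLOW; modeset and GEM_OPEN will fail. */
		close(fd);
		return 0;
	}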
<!-- External: vblank handling -->
<sect1>
...
* Samsung Image Rotator
Required properties:
- compatible : value should be one of the following:
(a) "samsung,exynos4210-rotator" for Rotator IP in Exynos4210
(b) "samsung,exynos4212-rotator" for Rotator IP in Exynos4212/4412
(c) "samsung,exynos5250-rotator" for Rotator IP in Exynos5250
- reg : Physical base address of the IP registers and length of memory
mapped region.
- interrupts : Interrupt specifier for rotator interrupt, according to format
specific to interrupt parent.
- clocks : Clock specifier for rotator clock, according to generic clock
bindings. (See Documentation/devicetree/bindings/clock/exynos*.txt)
- clock-names : Names of clocks. For exynos rotator, it should be "rotator".
Example:
rotator@12810000 {
compatible = "samsung,exynos4210-rotator";
reg = <0x12810000 0x1000>;
interrupts = <0 83 0>;
clocks = <&clock 278>;
clock-names = "rotator";
};
@@ -6,7 +6,7 @@
#
menuconfig DRM
tristate "Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)"
depends on (AGP || AGP=n) && !EMULATED_CMPXCHG && MMU && HAS_DMA
select HDMI
select I2C
select I2C_ALGOBIT
@@ -168,6 +168,17 @@ config DRM_I915_KMS
the driver to bind to PCI devices, which precludes loading things
like intelfb.
config DRM_I915_PRELIMINARY_HW_SUPPORT
bool "Enable preliminary support for prerelease Intel hardware by default"
depends on DRM_I915
help
Choose this option if you have prerelease Intel hardware and want the
i915 driver to support it by default. You can enable such support at
runtime with the module option i915.preliminary_hw_support=1; this
option changes the default for that module option.
If in doubt, say "N".
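Editor's note: concretely, that means either booting with i915.preliminary_hw_support=1 on the kernel command line or loading the module as, e.g.:

	modprobe i915 preliminary_hw_support=1

Enabling this Kconfig option only flips the default of that module parameter.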
config DRM_MGA
tristate "Matrox g200/g400"
depends on DRM && PCI
@@ -223,3 +234,5 @@ source "drivers/gpu/drm/omapdrm/Kconfig"
source "drivers/gpu/drm/tilcdc/Kconfig"
source "drivers/gpu/drm/qxl/Kconfig"
source "drivers/gpu/drm/msm/Kconfig"
@@ -7,13 +7,13 @@ ccflags-y := -Iinclude/drm
drm-y := drm_auth.o drm_buffer.o drm_bufs.o drm_cache.o \
drm_context.o drm_dma.o \
drm_drv.o drm_fops.o drm_gem.o drm_ioctl.o drm_irq.o \
drm_lock.o drm_memory.o drm_stub.o drm_vm.o \
drm_agpsupport.o drm_scatter.o drm_pci.o \
drm_platform.o drm_sysfs.o drm_hashtab.o drm_mm.o \
drm_crtc.o drm_modes.o drm_edid.o \
drm_info.o drm_debugfs.o drm_encoder_slave.o \
drm_trace_points.o drm_global.o drm_prime.o \
drm_rect.o drm_vma_manager.o drm_flip_work.o
drm-$(CONFIG_COMPAT) += drm_ioc32.o
drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o
@@ -54,4 +54,5 @@ obj-$(CONFIG_DRM_SHMOBILE) +=shmobile/
obj-$(CONFIG_DRM_OMAP) += omapdrm/
obj-$(CONFIG_DRM_TILCDC) += tilcdc/
obj-$(CONFIG_DRM_QXL) += qxl/
obj-$(CONFIG_DRM_MSM) += msm/
obj-y += i2c/
@@ -190,7 +190,6 @@ static const struct file_operations ast_fops = {
.unlocked_ioctl = drm_ioctl,
.mmap = ast_mmap,
.poll = drm_poll,
.fasync = drm_fasync,
#ifdef CONFIG_COMPAT
.compat_ioctl = drm_compat_ioctl,
#endif
@@ -198,7 +197,7 @@ static const struct file_operations ast_fops = {
};
static struct drm_driver driver = {
.driver_features = DRIVER_MODESET | DRIVER_GEM,
.dev_priv_size = 0,
.load = ast_driver_load,
@@ -216,7 +215,7 @@ static struct drm_driver driver = {
.gem_free_object = ast_gem_free_object,
.dumb_create = ast_dumb_create,
.dumb_map_offset = ast_dumb_mmap_offset,
.dumb_destroy = drm_gem_dumb_destroy,
};
...
@@ -322,9 +322,6 @@ ast_bo(struct ttm_buffer_object *bo)
extern int ast_dumb_create(struct drm_file *file,
struct drm_device *dev,
struct drm_mode_create_dumb *args);
extern int ast_dumb_destroy(struct drm_file *file,
struct drm_device *dev,
uint32_t handle);
extern int ast_gem_init_object(struct drm_gem_object *obj);
extern void ast_gem_free_object(struct drm_gem_object *obj);
...
@@ -449,13 +449,6 @@ int ast_dumb_create(struct drm_file *file,
return 0;
}
int ast_dumb_destroy(struct drm_file *file,
struct drm_device *dev,
uint32_t handle)
{
return drm_gem_handle_delete(file, handle);
}
int ast_gem_init_object(struct drm_gem_object *obj)
{
BUG();
@@ -487,7 +480,7 @@ void ast_gem_free_object(struct drm_gem_object *obj)
static inline u64 ast_bo_mmap_offset(struct ast_bo *bo)
{
return drm_vma_node_offset_addr(&bo->bo.vma_node);
}
int
ast_dumb_mmap_offset(struct drm_file *file,
...
@@ -148,7 +148,9 @@ ast_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
static int ast_bo_verify_access(struct ttm_buffer_object *bo, struct file *filp)
{
struct ast_bo *astbo = ast_bo(bo);
return drm_vma_node_verify_access(&astbo->gem.vma_node, filp);
}
static int ast_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
@@ -321,7 +323,6 @@ int ast_bo_create(struct drm_device *dev, int size, int align,
return ret;
}
astbo->gem.driver_private = NULL;
astbo->bo.bdev = &ast->ttm.bdev;
astbo->bo.bdev->dev_mapping = dev->dev_mapping;
...
@@ -85,10 +85,9 @@ static const struct file_operations cirrus_driver_fops = {
#ifdef CONFIG_COMPAT
.compat_ioctl = drm_compat_ioctl,
#endif
.fasync = drm_fasync,
};
static struct drm_driver driver = {
.driver_features = DRIVER_MODESET | DRIVER_GEM,
.load = cirrus_driver_load,
.unload = cirrus_driver_unload,
.fops = &cirrus_driver_fops,
@@ -102,7 +101,7 @@ static struct drm_driver driver = {
.gem_free_object = cirrus_gem_free_object,
.dumb_create = cirrus_dumb_create,
.dumb_map_offset = cirrus_dumb_mmap_offset,
.dumb_destroy = drm_gem_dumb_destroy,
};
static struct pci_driver cirrus_pci_driver = {
...
@@ -203,9 +203,6 @@ int cirrus_gem_create(struct drm_device *dev,
int cirrus_dumb_create(struct drm_file *file,
struct drm_device *dev,
struct drm_mode_create_dumb *args);
int cirrus_dumb_destroy(struct drm_file *file,
struct drm_device *dev,
uint32_t handle);
int cirrus_framebuffer_init(struct drm_device *dev,
struct cirrus_framebuffer *gfb,
...
@@ -255,13 +255,6 @@ int cirrus_dumb_create(struct drm_file *file,
return 0;
}
int cirrus_dumb_destroy(struct drm_file *file,
struct drm_device *dev,
uint32_t handle)
{
return drm_gem_handle_delete(file, handle);
}
int cirrus_gem_init_object(struct drm_gem_object *obj)
{
BUG();
@@ -294,7 +287,7 @@ void cirrus_gem_free_object(struct drm_gem_object *obj)
static inline u64 cirrus_bo_mmap_offset(struct cirrus_bo *bo)
{
return drm_vma_node_offset_addr(&bo->bo.vma_node);
}
int
...
@@ -148,7 +148,9 @@ cirrus_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
static int cirrus_bo_verify_access(struct ttm_buffer_object *bo, struct file *filp)
{
struct cirrus_bo *cirrusbo = cirrus_bo(bo);
return drm_vma_node_verify_access(&cirrusbo->gem.vma_node, filp);
}
static int cirrus_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
@@ -326,7 +328,6 @@ int cirrus_bo_create(struct drm_device *dev, int size, int align,
return ret;
}
cirrusbo->gem.driver_private = NULL;
cirrusbo->bo.bdev = &cirrus->ttm.bdev;
cirrusbo->bo.bdev->dev_mapping = dev->dev_mapping;
...
@@ -423,6 +423,57 @@ struct drm_agp_head *drm_agp_init(struct drm_device *dev)
return head;
}
/**
* drm_agp_clear - Clear AGP resource list
* @dev: DRM device
*
* Iterate over all AGP resources and remove them. But keep the AGP head
* intact so it can still be used. It is safe to call this if AGP is disabled or
* was already removed.
*
* If DRIVER_MODESET is active, nothing is done to protect the modesetting
* resources from getting destroyed. Drivers are responsible for cleaning them up
* during device shutdown.
*/
void drm_agp_clear(struct drm_device *dev)
{
struct drm_agp_mem *entry, *tempe;
if (!drm_core_has_AGP(dev) || !dev->agp)
return;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return;
list_for_each_entry_safe(entry, tempe, &dev->agp->memory, head) {
if (entry->bound)
drm_unbind_agp(entry->memory);
drm_free_agp(entry->memory, entry->pages);
kfree(entry);
}
INIT_LIST_HEAD(&dev->agp->memory);
if (dev->agp->acquired)
drm_agp_release(dev);
dev->agp->acquired = 0;
dev->agp->enabled = 0;
}
/**
* drm_agp_destroy - Destroy AGP head
* @agp: AGP head to destroy
*
* Destroy resources that were previously allocated via drm_agp_init(). Caller
* must ensure to clean up all AGP resources before calling this. See
* drm_agp_clear().
*
* Call this to destroy AGP heads allocated via drm_agp_init().
*/
void drm_agp_destroy(struct drm_agp_head *agp)
{
kfree(agp);
}
/**
* Binds a collection of pages into AGP memory at the given offset, returning
* the AGP memory structure containing them.
...
@@ -207,12 +207,10 @@ static int drm_addmap_core(struct drm_device * dev, resource_size_t offset,
return 0;
}
if (map->type == _DRM_FRAME_BUFFER ||
(map->flags & _DRM_WRITE_COMBINING)) {
map->mtrr =
arch_phys_wc_add(map->offset, map->size);
}
}
if (map->type == _DRM_REGISTERS) {
if (map->flags & _DRM_WRITE_COMBINING)
@@ -243,7 +241,7 @@ static int drm_addmap_core(struct drm_device * dev, resource_size_t offset,
}
map->handle = vmalloc_user(map->size);
DRM_DEBUG("%lu %d %p\n",
map->size, order_base_2(map->size), map->handle);
if (!map->handle) {
kfree(map);
return -ENOMEM;
@@ -464,8 +462,7 @@ int drm_rmmap_locked(struct drm_device *dev, struct drm_local_map *map)
iounmap(map->handle);
/* FALLTHROUGH */
case _DRM_FRAME_BUFFER:
arch_phys_wc_del(map->mtrr);
break;
case _DRM_SHM:
vfree(map->handle);
@@ -630,7 +627,7 @@ int drm_addbufs_agp(struct drm_device * dev, struct drm_buf_desc * request)
return -EINVAL;
count = request->count;
order = order_base_2(request->size);
size = 1 << order;
alignment = (request->flags & _DRM_PAGE_ALIGN)
@@ -800,7 +797,7 @@ int drm_addbufs_pci(struct drm_device * dev, struct drm_buf_desc * request)
return -EPERM;
count = request->count;
order = order_base_2(request->size);
size = 1 << order;
DRM_DEBUG("count=%d, size=%d (%d), order=%d\n",
@@ -1002,7 +999,7 @@ static int drm_addbufs_sg(struct drm_device * dev, struct drm_buf_desc * request
return -EPERM;
count = request->count;
order = order_base_2(request->size);
size = 1 << order;
alignment = (request->flags & _DRM_PAGE_ALIGN)
@@ -1130,161 +1127,6 @@ static int drm_addbufs_sg(struct drm_device * dev, struct drm_buf_desc * request
return 0;
}
static int drm_addbufs_fb(struct drm_device * dev, struct drm_buf_desc * request)
{
struct drm_device_dma *dma = dev->dma;
struct drm_buf_entry *entry;
struct drm_buf *buf;
unsigned long offset;
unsigned long agp_offset;
int count;
int order;
int size;
int alignment;
int page_order;
int total;
int byte_count;
int i;
struct drm_buf **temp_buflist;
if (!drm_core_check_feature(dev, DRIVER_FB_DMA))
return -EINVAL;
if (!dma)
return -EINVAL;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
count = request->count;
order = drm_order(request->size);
size = 1 << order;
alignment = (request->flags & _DRM_PAGE_ALIGN)
? PAGE_ALIGN(size) : size;
page_order = order - PAGE_SHIFT > 0 ? order - PAGE_SHIFT : 0;
total = PAGE_SIZE << page_order;
byte_count = 0;
agp_offset = request->agp_start;
DRM_DEBUG("count: %d\n", count);
DRM_DEBUG("order: %d\n", order);
DRM_DEBUG("size: %d\n", size);
DRM_DEBUG("agp_offset: %lu\n", agp_offset);
DRM_DEBUG("alignment: %d\n", alignment);
DRM_DEBUG("page_order: %d\n", page_order);
DRM_DEBUG("total: %d\n", total);
if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER)
return -EINVAL;
spin_lock(&dev->count_lock);
if (dev->buf_use) {
spin_unlock(&dev->count_lock);
return -EBUSY;
}
atomic_inc(&dev->buf_alloc);
spin_unlock(&dev->count_lock);
mutex_lock(&dev->struct_mutex);
entry = &dma->bufs[order];
if (entry->buf_count) {
mutex_unlock(&dev->struct_mutex);
atomic_dec(&dev->buf_alloc);
return -ENOMEM; /* May only call once for each order */
}
if (count < 0 || count > 4096) {
mutex_unlock(&dev->struct_mutex);
atomic_dec(&dev->buf_alloc);
return -EINVAL;
}
entry->buflist = kzalloc(count * sizeof(*entry->buflist),
GFP_KERNEL);
if (!entry->buflist) {
mutex_unlock(&dev->struct_mutex);
atomic_dec(&dev->buf_alloc);
return -ENOMEM;
}
entry->buf_size = size;
entry->page_order = page_order;
offset = 0;
while (entry->buf_count < count) {
buf = &entry->buflist[entry->buf_count];
buf->idx = dma->buf_count + entry->buf_count;
buf->total = alignment;
buf->order = order;
buf->used = 0;
buf->offset = (dma->byte_count + offset);
buf->bus_address = agp_offset + offset;
buf->address = (void *)(agp_offset + offset);
buf->next = NULL;
buf->waiting = 0;
buf->pending = 0;
buf->file_priv = NULL;
buf->dev_priv_size = dev->driver->dev_priv_size;
buf->dev_private = kzalloc(buf->dev_priv_size, GFP_KERNEL);
if (!buf->dev_private) {
/* Set count correctly so we free the proper amount. */
entry->buf_count = count;
drm_cleanup_buf_error(dev, entry);
mutex_unlock(&dev->struct_mutex);
atomic_dec(&dev->buf_alloc);
return -ENOMEM;
}
DRM_DEBUG("buffer %d @ %p\n", entry->buf_count, buf->address);
offset += alignment;
entry->buf_count++;
byte_count += PAGE_SIZE << page_order;
}
DRM_DEBUG("byte_count: %d\n", byte_count);
temp_buflist = krealloc(dma->buflist,
(dma->buf_count + entry->buf_count) *
sizeof(*dma->buflist), GFP_KERNEL);
if (!temp_buflist) {
/* Free the entry because it isn't valid */
drm_cleanup_buf_error(dev, entry);
mutex_unlock(&dev->struct_mutex);
atomic_dec(&dev->buf_alloc);
return -ENOMEM;
}
dma->buflist = temp_buflist;
for (i = 0; i < entry->buf_count; i++) {
dma->buflist[i + dma->buf_count] = &entry->buflist[i];
}
dma->buf_count += entry->buf_count;
dma->seg_count += entry->seg_count;
dma->page_count += byte_count >> PAGE_SHIFT;
dma->byte_count += byte_count;
DRM_DEBUG("dma->buf_count : %d\n", dma->buf_count);
DRM_DEBUG("entry->buf_count : %d\n", entry->buf_count);
mutex_unlock(&dev->struct_mutex);
request->count = entry->buf_count;
request->size = size;
dma->flags = _DRM_DMA_USE_FB;
atomic_dec(&dev->buf_alloc);
return 0;
}
/**
* Add buffers for DMA transfers (ioctl).
*
@@ -1305,6 +1147,9 @@ int drm_addbufs(struct drm_device *dev, void *data,
struct drm_buf_desc *request = data;
int ret;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA))
return -EINVAL;
@@ -1316,7 +1161,7 @@ int drm_addbufs(struct drm_device *dev, void *data,
if (request->flags & _DRM_SG_BUFFER)
ret = drm_addbufs_sg(dev, request);
else if (request->flags & _DRM_FB_BUFFER)
ret = -EINVAL;
else
ret = drm_addbufs_pci(dev, request);
@@ -1348,6 +1193,9 @@ int drm_infobufs(struct drm_device *dev, void *data,
int i;
int count;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA))
return -EINVAL;
@@ -1427,6 +1275,9 @@ int drm_markbufs(struct drm_device *dev, void *data,
int order;
struct drm_buf_entry *entry;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA))
return -EINVAL;
@@ -1435,7 +1286,7 @@ int drm_markbufs(struct drm_device *dev, void *data,
DRM_DEBUG("%d, %d, %d\n",
request->size, request->low_mark, request->high_mark);
order = order_base_2(request->size);
if (order < DRM_MIN_ORDER || order > DRM_MAX_ORDER)
return -EINVAL;
entry = &dma->bufs[order];
@@ -1472,6 +1323,9 @@ int drm_freebufs(struct drm_device *dev, void *data,
int idx;
struct drm_buf *buf;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA))
return -EINVAL;
@@ -1524,6 +1378,9 @@ int drm_mapbufs(struct drm_device *dev, void *data,
struct drm_buf_map *request = data;
int i;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA))
return -EINVAL;
@@ -1541,9 +1398,7 @@ int drm_mapbufs(struct drm_device *dev, void *data,
if (request->count >= dma->buf_count) {
if ((drm_core_has_AGP(dev) && (dma->flags & _DRM_DMA_USE_AGP))
|| (drm_core_check_feature(dev, DRIVER_SG)
&& (dma->flags & _DRM_DMA_USE_SG))) {
struct drm_local_map *map = dev->agp_buffer_map;
unsigned long token = dev->agp_buffer_token;
@@ -1600,25 +1455,28 @@ int drm_mapbufs(struct drm_device *dev, void *data,
return retcode;
}
/**
* Compute size order. Returns the exponent of the smaller power of two which
* is greater or equal to given number.
*
* \param size size.
* \return order.
*
* \todo Can be made faster.
*/
int drm_order(unsigned long size)
{
int order;
unsigned long tmp;
for (order = 0, tmp = size >> 1; tmp; tmp >>= 1, order++) ;
if (size & (size - 1))
++order;
return order;
}
EXPORT_SYMBOL(drm_order);
int drm_dma_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
if (dev->driver->dma_ioctl)
return dev->driver->dma_ioctl(dev, data, file_priv);
else
return -EINVAL;
}
struct drm_local_map *drm_getsarea(struct drm_device *dev)
{
struct drm_map_list *entry;
list_for_each_entry(entry, &dev->maplist, head) {
if (entry->map && entry->map->type == _DRM_SHM &&
(entry->map->flags & _DRM_CONTAINS_LOCK)) {
return entry->map;
}
}
return NULL;
}
EXPORT_SYMBOL(drm_getsarea);
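Editor's note: the removed drm_order() is replaced throughout this series by order_base_2() from <linux/log2.h>, which has the same semantics. A standalone sketch of that computation (hypothetical userspace check, not part of the patch):

	#include <assert.h>

	/* Exponent of the smallest power of two >= size, as the removed
	 * drm_order() computed it and as order_base_2() still does. */
	static int order(unsigned long size)
	{
		int order = 0;
		unsigned long tmp;

		for (tmp = size >> 1; tmp; tmp >>= 1)
			order++;
		if (size & (size - 1))
			++order;
		return order;
	}

	int main(void)
	{
		assert(order(4096) == 12);  /* exact power of two */
		assert(order(4097) == 13);  /* rounded up */
		return 0;
	}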
@@ -42,10 +42,6 @@
#include <drm/drmP.h>
/******************************************************************/
/** \name Context bitmap support */
/*@{*/
/**
* Free a handle from the context bitmap.
*
@@ -56,13 +52,48 @@
* in drm_device::ctx_idr, while holding the drm_device::struct_mutex
* lock.
*/
static void drm_ctxbitmap_free(struct drm_device * dev, int ctx_handle)
{
if (drm_core_check_feature(dev, DRIVER_MODESET))
return;
mutex_lock(&dev->struct_mutex);
idr_remove(&dev->ctx_idr, ctx_handle);
mutex_unlock(&dev->struct_mutex);
}
/******************************************************************/
/** \name Context bitmap support */
/*@{*/
void drm_legacy_ctxbitmap_release(struct drm_device *dev,
struct drm_file *file_priv)
{
if (drm_core_check_feature(dev, DRIVER_MODESET))
return;
mutex_lock(&dev->ctxlist_mutex);
if (!list_empty(&dev->ctxlist)) {
struct drm_ctx_list *pos, *n;
list_for_each_entry_safe(pos, n, &dev->ctxlist, head) {
if (pos->tag == file_priv &&
pos->handle != DRM_KERNEL_CONTEXT) {
if (dev->driver->context_dtor)
dev->driver->context_dtor(dev,
pos->handle);
drm_ctxbitmap_free(dev, pos->handle);
list_del(&pos->head);
kfree(pos);
--dev->ctx_count;
}
}
}
mutex_unlock(&dev->ctxlist_mutex);
}
/**
* Context bitmap allocation.
*
@@ -90,10 +121,12 @@ static int drm_ctxbitmap_next(struct drm_device * dev)
*
* Initialise the drm_device::ctx_idr
*/
void drm_legacy_ctxbitmap_init(struct drm_device * dev)
{
if (drm_core_check_feature(dev, DRIVER_MODESET))
return;
idr_init(&dev->ctx_idr);
}
/**
@@ -104,7 +137,7 @@ int drm_ctxbitmap_init(struct drm_device * dev)
* Free all idr members using drm_ctx_sarea_free helper function
* while holding the drm_device::struct_mutex lock.
*/
void drm_legacy_ctxbitmap_cleanup(struct drm_device * dev)
{
mutex_lock(&dev->struct_mutex);
idr_destroy(&dev->ctx_idr);
@@ -136,6 +169,9 @@ int drm_getsareactx(struct drm_device *dev, void *data,
struct drm_local_map *map;
struct drm_map_list *_entry;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
mutex_lock(&dev->struct_mutex);
map = idr_find(&dev->ctx_idr, request->ctx_id);
@@ -180,6 +216,9 @@ int drm_setsareactx(struct drm_device *dev, void *data,
struct drm_local_map *map = NULL;
struct drm_map_list *r_list = NULL;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
mutex_lock(&dev->struct_mutex);
list_for_each_entry(r_list, &dev->maplist, head) {
if (r_list->map
@@ -251,7 +290,6 @@ static int drm_context_switch_complete(struct drm_device *dev,
struct drm_file *file_priv, int new)
{
dev->last_context = new; /* PRE/POST: This is the _only_ writer. */
dev->last_switch = jiffies;
if (!_DRM_LOCK_IS_HELD(file_priv->master->lock.hw_lock->lock)) {
DRM_ERROR("Lock isn't held after context switch\n");
@@ -261,7 +299,6 @@ static int drm_context_switch_complete(struct drm_device *dev,
when the kernel holds the lock, release
that lock here. */
clear_bit(0, &dev->context_flag);
wake_up(&dev->context_wait);
return 0;
}
@@ -282,6 +319,9 @@ int drm_resctx(struct drm_device *dev, void *data,
struct drm_ctx ctx;
int i;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
if (res->count >= DRM_RESERVED_CONTEXTS) {
memset(&ctx, 0, sizeof(ctx));
for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
@@ -312,6 +352,9 @@ int drm_addctx(struct drm_device *dev, void *data,
struct drm_ctx_list *ctx_entry;
struct drm_ctx *ctx = data;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
ctx->handle = drm_ctxbitmap_next(dev);
if (ctx->handle == DRM_KERNEL_CONTEXT) {
/* Skip kernel's context and get a new one. */
@@ -342,12 +385,6 @@ int drm_addctx(struct drm_device *dev, void *data,
return 0;
}
int drm_modctx(struct drm_device *dev, void *data, struct drm_file *file_priv)
{
/* This does nothing */
return 0;
}
/**
* Get context.
*
@@ -361,6 +398,9 @@ int drm_getctx(struct drm_device *dev, void *data, struct drm_file *file_priv)
{
struct drm_ctx *ctx = data;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
/* This is 0, because we don't handle any context flags */
ctx->flags = 0;
@@ -383,6 +423,9 @@ int drm_switchctx(struct drm_device *dev, void *data,
{
struct drm_ctx *ctx = data;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
DRM_DEBUG("%d\n", ctx->handle); DRM_DEBUG("%d\n", ctx->handle);
return drm_context_switch(dev, dev->last_context, ctx->handle); return drm_context_switch(dev, dev->last_context, ctx->handle);
} }
...@@ -403,6 +446,9 @@ int drm_newctx(struct drm_device *dev, void *data, ...@@ -403,6 +446,9 @@ int drm_newctx(struct drm_device *dev, void *data,
{ {
struct drm_ctx *ctx = data; struct drm_ctx *ctx = data;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
DRM_DEBUG("%d\n", ctx->handle); DRM_DEBUG("%d\n", ctx->handle);
drm_context_switch_complete(dev, file_priv, ctx->handle); drm_context_switch_complete(dev, file_priv, ctx->handle);
...@@ -425,6 +471,9 @@ int drm_rmctx(struct drm_device *dev, void *data, ...@@ -425,6 +471,9 @@ int drm_rmctx(struct drm_device *dev, void *data,
{ {
struct drm_ctx *ctx = data; struct drm_ctx *ctx = data;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
DRM_DEBUG("%d\n", ctx->handle); DRM_DEBUG("%d\n", ctx->handle);
if (ctx->handle != DRM_KERNEL_CONTEXT) { if (ctx->handle != DRM_KERNEL_CONTEXT) {
if (dev->driver->context_dtor) if (dev->driver->context_dtor)
......
@@ -125,13 +125,6 @@ static const struct drm_prop_enum_list drm_scaling_mode_enum_list[] =
{ DRM_MODE_SCALE_ASPECT, "Full aspect" },
};
static const struct drm_prop_enum_list drm_dithering_mode_enum_list[] =
{
{ DRM_MODE_DITHERING_OFF, "Off" },
{ DRM_MODE_DITHERING_ON, "On" },
{ DRM_MODE_DITHERING_AUTO, "Automatic" },
};
/*
* Non-global properties, but "required" for certain connectors.
*/
@@ -186,29 +179,29 @@ static const struct drm_prop_enum_list drm_dirty_info_enum_list[] = {
struct drm_conn_prop_enum_list {
int type;
const char *name;
struct ida ida;
};
/*
* Connector and encoder types.
*/
static struct drm_conn_prop_enum_list drm_connector_enum_list[] =
{ { DRM_MODE_CONNECTOR_Unknown, "Unknown" },
{ DRM_MODE_CONNECTOR_VGA, "VGA" },
{ DRM_MODE_CONNECTOR_DVII, "DVI-I" },
{ DRM_MODE_CONNECTOR_DVID, "DVI-D" },
{ DRM_MODE_CONNECTOR_DVIA, "DVI-A" },
{ DRM_MODE_CONNECTOR_Composite, "Composite" },
{ DRM_MODE_CONNECTOR_SVIDEO, "SVIDEO" },
{ DRM_MODE_CONNECTOR_LVDS, "LVDS" },
{ DRM_MODE_CONNECTOR_Component, "Component" },
{ DRM_MODE_CONNECTOR_9PinDIN, "DIN" },
{ DRM_MODE_CONNECTOR_DisplayPort, "DP" },
{ DRM_MODE_CONNECTOR_HDMIA, "HDMI-A" },
{ DRM_MODE_CONNECTOR_HDMIB, "HDMI-B" },
{ DRM_MODE_CONNECTOR_TV, "TV" },
{ DRM_MODE_CONNECTOR_eDP, "eDP" },
{ DRM_MODE_CONNECTOR_VIRTUAL, "Virtual" },
};
static const struct drm_prop_enum_list drm_encoder_enum_list[] =
@@ -220,6 +213,22 @@ static const struct drm_prop_enum_list drm_encoder_enum_list[] =
{ DRM_MODE_ENCODER_VIRTUAL, "Virtual" },
};
void drm_connector_ida_init(void)
{
int i;
for (i = 0; i < ARRAY_SIZE(drm_connector_enum_list); i++)
ida_init(&drm_connector_enum_list[i].ida);
}
void drm_connector_ida_destroy(void)
{
int i;
for (i = 0; i < ARRAY_SIZE(drm_connector_enum_list); i++)
ida_destroy(&drm_connector_enum_list[i].ida);
}
const char *drm_get_encoder_name(const struct drm_encoder *encoder)
{
static char buf[32];
@@ -677,20 +686,19 @@ void drm_mode_probed_add(struct drm_connector *connector,
}
EXPORT_SYMBOL(drm_mode_probed_add);
/*
* drm_mode_remove - remove and free a mode
* @connector: connector list to modify
* @mode: mode to remove
*
* Remove @mode from @connector's mode list, then free it.
*/
static void drm_mode_remove(struct drm_connector *connector,
struct drm_display_mode *mode)
{
list_del(&mode->head);
drm_mode_destroy(connector->dev, mode);
}
EXPORT_SYMBOL(drm_mode_remove);
/**
* drm_connector_init - Init a preallocated connector
@@ -711,6 +719,8 @@ int drm_connector_init(struct drm_device *dev,
int connector_type)
{
int ret;
struct ida *connector_ida =
&drm_connector_enum_list[connector_type].ida;
drm_modeset_lock_all(dev);
@@ -723,7 +733,12 @@
connector->funcs = funcs;
connector->connector_type = connector_type;
connector->connector_type_id =
ida_simple_get(connector_ida, 1, 0, GFP_KERNEL);
if (connector->connector_type_id < 0) {
ret = connector->connector_type_id;
drm_mode_object_put(dev, &connector->base);
goto out;
}
INIT_LIST_HEAD(&connector->probed_modes);
INIT_LIST_HEAD(&connector->modes);
connector->edid_blob_ptr = NULL;
@@ -764,6 +779,9 @@ void drm_connector_cleanup(struct drm_connector *connector)
list_for_each_entry_safe(mode, t, &connector->modes, head)
drm_mode_remove(connector, mode);
ida_remove(&drm_connector_enum_list[connector->connector_type].ida,
connector->connector_type_id);
drm_mode_object_put(dev, &connector->base);
list_del(&connector->head);
dev->mode_config.num_connector--;
@@ -781,6 +799,41 @@ void drm_connector_unplug_all(struct drm_device *dev)
}
EXPORT_SYMBOL(drm_connector_unplug_all);
int drm_bridge_init(struct drm_device *dev, struct drm_bridge *bridge,
const struct drm_bridge_funcs *funcs)
{
int ret;
drm_modeset_lock_all(dev);
ret = drm_mode_object_get(dev, &bridge->base, DRM_MODE_OBJECT_BRIDGE);
if (ret)
goto out;
bridge->dev = dev;
bridge->funcs = funcs;
list_add_tail(&bridge->head, &dev->mode_config.bridge_list);
dev->mode_config.num_bridge++;
out:
drm_modeset_unlock_all(dev);
return ret;
}
EXPORT_SYMBOL(drm_bridge_init);
void drm_bridge_cleanup(struct drm_bridge *bridge)
{
struct drm_device *dev = bridge->dev;
drm_modeset_lock_all(dev);
drm_mode_object_put(dev, &bridge->base);
list_del(&bridge->head);
dev->mode_config.num_bridge--;
drm_modeset_unlock_all(dev);
}
EXPORT_SYMBOL(drm_bridge_cleanup);
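Editor's note: a sketch of how an encoder driver would use the new bridge API. The "foo" names are hypothetical; real bridge drivers also fill in the mode-set/enable/disable hooks of drm_bridge_funcs.

	static void foo_bridge_destroy(struct drm_bridge *bridge)
	{
		drm_bridge_cleanup(bridge);
		/* free driver data here if it was allocated dynamically */
	}

	static const struct drm_bridge_funcs foo_bridge_funcs = {
		.destroy = foo_bridge_destroy,
		/* .pre_enable, .enable, .disable, .mode_set, ... */
	};

	int foo_bridge_register(struct drm_device *dev, struct drm_bridge *bridge)
	{
		/* Adds the bridge to dev->mode_config.bridge_list and gives
		 * it a mode object id, mirroring drm_encoder_init(). */
		return drm_bridge_init(dev, bridge, &foo_bridge_funcs);
	}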
int drm_encoder_init(struct drm_device *dev,
struct drm_encoder *encoder,
const struct drm_encoder_funcs *funcs,
@@ -1134,30 +1187,6 @@ int drm_mode_create_scaling_mode_property(struct drm_device *dev)
}
EXPORT_SYMBOL(drm_mode_create_scaling_mode_property);
/**
* drm_mode_create_dithering_property - create dithering property
* @dev: DRM device
*
* Called by a driver the first time it's needed, must be attached to desired
* connectors.
*/
int drm_mode_create_dithering_property(struct drm_device *dev)
{
struct drm_property *dithering_mode;
if (dev->mode_config.dithering_mode_property)
return 0;
dithering_mode =
drm_property_create_enum(dev, 0, "dithering",
drm_dithering_mode_enum_list,
ARRAY_SIZE(drm_dithering_mode_enum_list));
dev->mode_config.dithering_mode_property = dithering_mode;
return 0;
}
EXPORT_SYMBOL(drm_mode_create_dithering_property);
/**
* drm_mode_create_dirty_property - create dirty property
* @dev: DRM device
@@ -1190,6 +1219,7 @@ static int drm_mode_group_init(struct drm_device *dev, struct drm_mode_group *gr
total_objects += dev->mode_config.num_crtc;
total_objects += dev->mode_config.num_connector;
total_objects += dev->mode_config.num_encoder;
total_objects += dev->mode_config.num_bridge;
group->id_list = kzalloc(total_objects * sizeof(uint32_t), GFP_KERNEL);
if (!group->id_list)
@@ -1198,6 +1228,7 @@
group->num_crtcs = 0;
group->num_connectors = 0;
group->num_encoders = 0;
group->num_bridges = 0;
return 0;
}
@@ -1207,6 +1238,7 @@ int drm_mode_group_init_legacy_group(struct drm_device *dev,
struct drm_crtc *crtc;
struct drm_encoder *encoder;
struct drm_connector *connector;
struct drm_bridge *bridge;
int ret;
if ((ret = drm_mode_group_init(dev, group)))
@@ -1223,6 +1255,11 @@
group->id_list[group->num_crtcs + group->num_encoders +
group->num_connectors++] = connector->base.id;
list_for_each_entry(bridge, &dev->mode_config.bridge_list, head)
group->id_list[group->num_crtcs + group->num_encoders +
group->num_connectors + group->num_bridges++] =
bridge->base.id;
return 0;
}
EXPORT_SYMBOL(drm_mode_group_init_legacy_group);
@@ -2604,10 +2641,22 @@ int drm_mode_getfb(struct drm_device *dev,
r->depth = fb->depth;
r->bpp = fb->bits_per_pixel;
r->pitch = fb->pitches[0];
if (fb->funcs->create_handle) {
if (file_priv->is_master || capable(CAP_SYS_ADMIN)) {
ret = fb->funcs->create_handle(fb, file_priv,
&r->handle);
} else {
/* GET_FB() is an unprivileged ioctl so we must not
* return a buffer-handle to non-master processes! For
* backwards-compatibility reasons, we cannot make
* GET_FB() privileged, so just return an invalid handle
* for non-masters. */
r->handle = 0;
ret = 0;
}
} else {
ret = -ENODEV; ret = -ENODEV;
}
drm_framebuffer_unreference(fb); drm_framebuffer_unreference(fb);
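Seen from userspace, a non-master GET_FB caller now receives valid framebuffer metadata but a zero handle instead of an error; a hedged caller-side sketch, using struct and ioctl names from the public DRM uapi headers:

#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <drm/drm.h>
#include <drm/drm_mode.h>

/* Returns 1 only if a usable GEM handle came back (master or CAP_SYS_ADMIN). */
static int try_get_fb_handle(int fd, uint32_t fb_id)
{
	struct drm_mode_fb_cmd cmd = { .fb_id = fb_id };

	if (ioctl(fd, DRM_IOCTL_MODE_GETFB, &cmd))
		return 0;			/* e.g. the -ENODEV path above */
	if (!cmd.handle) {
		/* deliberately zeroed for non-masters by the hunk above */
		printf("fb %u: %ux%u, no handle\n", cmd.fb_id,
		       cmd.width, cmd.height);
		return 0;
	}
	return 1;
}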
...@@ -3514,6 +3563,9 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev, ...@@ -3514,6 +3563,9 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev,
page_flip->reserved != 0) page_flip->reserved != 0)
return -EINVAL; return -EINVAL;
if ((page_flip->flags & DRM_MODE_PAGE_FLIP_ASYNC) && !dev->mode_config.async_page_flip)
return -EINVAL;
obj = drm_mode_object_find(dev, page_flip->crtc_id, DRM_MODE_OBJECT_CRTC); obj = drm_mode_object_find(dev, page_flip->crtc_id, DRM_MODE_OBJECT_CRTC);
if (!obj) if (!obj)
return -EINVAL; return -EINVAL;
...@@ -3587,7 +3639,7 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev, ...@@ -3587,7 +3639,7 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev,
} }
old_fb = crtc->fb; old_fb = crtc->fb;
ret = crtc->funcs->page_flip(crtc, fb, e); ret = crtc->funcs->page_flip(crtc, fb, e, page_flip->flags);
if (ret) { if (ret) {
if (page_flip->flags & DRM_MODE_PAGE_FLIP_EVENT) { if (page_flip->flags & DRM_MODE_PAGE_FLIP_EVENT) {
spin_lock_irqsave(&dev->event_lock, flags); spin_lock_irqsave(&dev->event_lock, flags);
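On the userspace side, the natural counterpart is to probe for async-flip support before setting the flag; a hedged sketch, assuming DRM_CAP_ASYNC_PAGE_FLIP is the capability this series exposes for dev->mode_config.async_page_flip:

#include <stdint.h>
#include <xf86drm.h>
#include <drm/drm_mode.h>

static uint32_t choose_flip_flags(int fd)
{
	uint64_t cap = 0;

	/* fall back to a vblank-synced flip when the cap is absent */
	if (drmGetCap(fd, DRM_CAP_ASYNC_PAGE_FLIP, &cap) == 0 && cap)
		return DRM_MODE_PAGE_FLIP_EVENT | DRM_MODE_PAGE_FLIP_ASYNC;
	return DRM_MODE_PAGE_FLIP_EVENT;
}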
...@@ -3905,6 +3957,7 @@ void drm_mode_config_init(struct drm_device *dev) ...@@ -3905,6 +3957,7 @@ void drm_mode_config_init(struct drm_device *dev)
INIT_LIST_HEAD(&dev->mode_config.fb_list); INIT_LIST_HEAD(&dev->mode_config.fb_list);
INIT_LIST_HEAD(&dev->mode_config.crtc_list); INIT_LIST_HEAD(&dev->mode_config.crtc_list);
INIT_LIST_HEAD(&dev->mode_config.connector_list); INIT_LIST_HEAD(&dev->mode_config.connector_list);
INIT_LIST_HEAD(&dev->mode_config.bridge_list);
INIT_LIST_HEAD(&dev->mode_config.encoder_list); INIT_LIST_HEAD(&dev->mode_config.encoder_list);
INIT_LIST_HEAD(&dev->mode_config.property_list); INIT_LIST_HEAD(&dev->mode_config.property_list);
INIT_LIST_HEAD(&dev->mode_config.property_blob_list); INIT_LIST_HEAD(&dev->mode_config.property_blob_list);
...@@ -3941,6 +3994,7 @@ void drm_mode_config_cleanup(struct drm_device *dev) ...@@ -3941,6 +3994,7 @@ void drm_mode_config_cleanup(struct drm_device *dev)
struct drm_connector *connector, *ot; struct drm_connector *connector, *ot;
struct drm_crtc *crtc, *ct; struct drm_crtc *crtc, *ct;
struct drm_encoder *encoder, *enct; struct drm_encoder *encoder, *enct;
struct drm_bridge *bridge, *brt;
struct drm_framebuffer *fb, *fbt; struct drm_framebuffer *fb, *fbt;
struct drm_property *property, *pt; struct drm_property *property, *pt;
struct drm_property_blob *blob, *bt; struct drm_property_blob *blob, *bt;
...@@ -3951,6 +4005,11 @@ void drm_mode_config_cleanup(struct drm_device *dev) ...@@ -3951,6 +4005,11 @@ void drm_mode_config_cleanup(struct drm_device *dev)
encoder->funcs->destroy(encoder); encoder->funcs->destroy(encoder);
} }
list_for_each_entry_safe(bridge, brt,
&dev->mode_config.bridge_list, head) {
bridge->funcs->destroy(bridge);
}
list_for_each_entry_safe(connector, ot, list_for_each_entry_safe(connector, ot,
&dev->mode_config.connector_list, head) { &dev->mode_config.connector_list, head) {
connector->funcs->destroy(connector); connector->funcs->destroy(connector);
......
...@@ -257,10 +257,16 @@ drm_encoder_disable(struct drm_encoder *encoder) ...@@ -257,10 +257,16 @@ drm_encoder_disable(struct drm_encoder *encoder)
{ {
struct drm_encoder_helper_funcs *encoder_funcs = encoder->helper_private; struct drm_encoder_helper_funcs *encoder_funcs = encoder->helper_private;
if (encoder->bridge)
encoder->bridge->funcs->disable(encoder->bridge);
if (encoder_funcs->disable) if (encoder_funcs->disable)
(*encoder_funcs->disable)(encoder); (*encoder_funcs->disable)(encoder);
else else
(*encoder_funcs->dpms)(encoder, DRM_MODE_DPMS_OFF); (*encoder_funcs->dpms)(encoder, DRM_MODE_DPMS_OFF);
if (encoder->bridge)
encoder->bridge->funcs->post_disable(encoder->bridge);
} }
/** /**
...@@ -424,6 +430,16 @@ bool drm_crtc_helper_set_mode(struct drm_crtc *crtc, ...@@ -424,6 +430,16 @@ bool drm_crtc_helper_set_mode(struct drm_crtc *crtc,
if (encoder->crtc != crtc) if (encoder->crtc != crtc)
continue; continue;
if (encoder->bridge && encoder->bridge->funcs->mode_fixup) {
ret = encoder->bridge->funcs->mode_fixup(
encoder->bridge, mode, adjusted_mode);
if (!ret) {
DRM_DEBUG_KMS("Bridge fixup failed\n");
goto done;
}
}
encoder_funcs = encoder->helper_private; encoder_funcs = encoder->helper_private;
if (!(ret = encoder_funcs->mode_fixup(encoder, mode, if (!(ret = encoder_funcs->mode_fixup(encoder, mode,
adjusted_mode))) { adjusted_mode))) {
...@@ -443,9 +459,16 @@ bool drm_crtc_helper_set_mode(struct drm_crtc *crtc, ...@@ -443,9 +459,16 @@ bool drm_crtc_helper_set_mode(struct drm_crtc *crtc,
if (encoder->crtc != crtc) if (encoder->crtc != crtc)
continue; continue;
if (encoder->bridge)
encoder->bridge->funcs->disable(encoder->bridge);
encoder_funcs = encoder->helper_private; encoder_funcs = encoder->helper_private;
/* Disable the encoders as the first thing we do. */ /* Disable the encoders as the first thing we do. */
encoder_funcs->prepare(encoder); encoder_funcs->prepare(encoder);
if (encoder->bridge)
encoder->bridge->funcs->post_disable(encoder->bridge);
} }
drm_crtc_prepare_encoders(dev); drm_crtc_prepare_encoders(dev);
...@@ -469,6 +492,10 @@ bool drm_crtc_helper_set_mode(struct drm_crtc *crtc, ...@@ -469,6 +492,10 @@ bool drm_crtc_helper_set_mode(struct drm_crtc *crtc,
mode->base.id, mode->name); mode->base.id, mode->name);
encoder_funcs = encoder->helper_private; encoder_funcs = encoder->helper_private;
encoder_funcs->mode_set(encoder, mode, adjusted_mode); encoder_funcs->mode_set(encoder, mode, adjusted_mode);
if (encoder->bridge && encoder->bridge->funcs->mode_set)
encoder->bridge->funcs->mode_set(encoder->bridge, mode,
adjusted_mode);
} }
/* Now enable the clocks, plane, pipe, and connectors that we set up. */ /* Now enable the clocks, plane, pipe, and connectors that we set up. */
...@@ -479,9 +506,14 @@ bool drm_crtc_helper_set_mode(struct drm_crtc *crtc, ...@@ -479,9 +506,14 @@ bool drm_crtc_helper_set_mode(struct drm_crtc *crtc,
if (encoder->crtc != crtc) if (encoder->crtc != crtc)
continue; continue;
if (encoder->bridge)
encoder->bridge->funcs->pre_enable(encoder->bridge);
encoder_funcs = encoder->helper_private; encoder_funcs = encoder->helper_private;
encoder_funcs->commit(encoder); encoder_funcs->commit(encoder);
if (encoder->bridge)
encoder->bridge->funcs->enable(encoder->bridge);
} }
/* Store real post-adjustment hardware mode. */ /* Store real post-adjustment hardware mode. */
...@@ -830,6 +862,31 @@ static int drm_helper_choose_encoder_dpms(struct drm_encoder *encoder) ...@@ -830,6 +862,31 @@ static int drm_helper_choose_encoder_dpms(struct drm_encoder *encoder)
return dpms; return dpms;
} }
/* Helper which handles bridge ordering around encoder dpms */
static void drm_helper_encoder_dpms(struct drm_encoder *encoder, int mode)
{
struct drm_bridge *bridge = encoder->bridge;
struct drm_encoder_helper_funcs *encoder_funcs;
if (bridge) {
if (mode == DRM_MODE_DPMS_ON)
bridge->funcs->pre_enable(bridge);
else
bridge->funcs->disable(bridge);
}
encoder_funcs = encoder->helper_private;
if (encoder_funcs->dpms)
encoder_funcs->dpms(encoder, mode);
if (bridge) {
if (mode == DRM_MODE_DPMS_ON)
bridge->funcs->enable(bridge);
else
bridge->funcs->post_disable(bridge);
}
}
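Read together with drm_encoder_disable() and drm_crtc_helper_set_mode() above, the bridge hooks now bracket every encoder transition; a hedged summary of the resulting ordering:

/*
 * With encoder->bridge set, the helpers drive the chain as:
 *
 *   power up:   bridge->pre_enable() -> encoder commit / dpms(ON)
 *               -> bridge->enable()
 *   power down: bridge->disable() -> encoder disable / dpms(OFF)
 *               -> bridge->post_disable()
 *
 * so the bridge stops passing the signal before the encoder is torn down,
 * and the encoder is fully programmed before the bridge starts driving
 * the panel/connector side.
 */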
static int drm_helper_choose_crtc_dpms(struct drm_crtc *crtc) static int drm_helper_choose_crtc_dpms(struct drm_crtc *crtc)
{ {
int dpms = DRM_MODE_DPMS_OFF; int dpms = DRM_MODE_DPMS_OFF;
...@@ -857,7 +914,7 @@ void drm_helper_connector_dpms(struct drm_connector *connector, int mode) ...@@ -857,7 +914,7 @@ void drm_helper_connector_dpms(struct drm_connector *connector, int mode)
{ {
struct drm_encoder *encoder = connector->encoder; struct drm_encoder *encoder = connector->encoder;
struct drm_crtc *crtc = encoder ? encoder->crtc : NULL; struct drm_crtc *crtc = encoder ? encoder->crtc : NULL;
int old_dpms; int old_dpms, encoder_dpms = DRM_MODE_DPMS_OFF;
if (mode == connector->dpms) if (mode == connector->dpms)
return; return;
...@@ -865,6 +922,9 @@ void drm_helper_connector_dpms(struct drm_connector *connector, int mode) ...@@ -865,6 +922,9 @@ void drm_helper_connector_dpms(struct drm_connector *connector, int mode)
old_dpms = connector->dpms; old_dpms = connector->dpms;
connector->dpms = mode; connector->dpms = mode;
if (encoder)
encoder_dpms = drm_helper_choose_encoder_dpms(encoder);
/* from off to on, do crtc then encoder */ /* from off to on, do crtc then encoder */
if (mode < old_dpms) { if (mode < old_dpms) {
if (crtc) { if (crtc) {
...@@ -873,22 +933,14 @@ void drm_helper_connector_dpms(struct drm_connector *connector, int mode) ...@@ -873,22 +933,14 @@ void drm_helper_connector_dpms(struct drm_connector *connector, int mode)
(*crtc_funcs->dpms) (crtc, (*crtc_funcs->dpms) (crtc,
drm_helper_choose_crtc_dpms(crtc)); drm_helper_choose_crtc_dpms(crtc));
} }
if (encoder) { if (encoder)
struct drm_encoder_helper_funcs *encoder_funcs = encoder->helper_private; drm_helper_encoder_dpms(encoder, encoder_dpms);
if (encoder_funcs->dpms)
(*encoder_funcs->dpms) (encoder,
drm_helper_choose_encoder_dpms(encoder));
}
} }
/* from on to off, do encoder then crtc */ /* from on to off, do encoder then crtc */
if (mode > old_dpms) { if (mode > old_dpms) {
if (encoder) { if (encoder)
struct drm_encoder_helper_funcs *encoder_funcs = encoder->helper_private; drm_helper_encoder_dpms(encoder, encoder_dpms);
if (encoder_funcs->dpms)
(*encoder_funcs->dpms) (encoder,
drm_helper_choose_encoder_dpms(encoder));
}
if (crtc) { if (crtc) {
struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private; struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
if (crtc_funcs->dpms) if (crtc_funcs->dpms)
...@@ -924,9 +976,8 @@ int drm_helper_resume_force_mode(struct drm_device *dev) ...@@ -924,9 +976,8 @@ int drm_helper_resume_force_mode(struct drm_device *dev)
{ {
struct drm_crtc *crtc; struct drm_crtc *crtc;
struct drm_encoder *encoder; struct drm_encoder *encoder;
struct drm_encoder_helper_funcs *encoder_funcs;
struct drm_crtc_helper_funcs *crtc_funcs; struct drm_crtc_helper_funcs *crtc_funcs;
int ret; int ret, encoder_dpms;
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
...@@ -946,10 +997,10 @@ int drm_helper_resume_force_mode(struct drm_device *dev) ...@@ -946,10 +997,10 @@ int drm_helper_resume_force_mode(struct drm_device *dev)
if(encoder->crtc != crtc) if(encoder->crtc != crtc)
continue; continue;
encoder_funcs = encoder->helper_private; encoder_dpms = drm_helper_choose_encoder_dpms(
if (encoder_funcs->dpms) encoder);
(*encoder_funcs->dpms) (encoder,
drm_helper_choose_encoder_dpms(encoder)); drm_helper_encoder_dpms(encoder, encoder_dpms);
} }
crtc_funcs = crtc->helper_private; crtc_funcs = crtc->helper_private;
......
...@@ -44,10 +44,18 @@ ...@@ -44,10 +44,18 @@
* *
* Allocate and initialize a drm_device_dma structure. * Allocate and initialize a drm_device_dma structure.
*/ */
int drm_dma_setup(struct drm_device *dev) int drm_legacy_dma_setup(struct drm_device *dev)
{ {
int i; int i;
if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA) ||
drm_core_check_feature(dev, DRIVER_MODESET)) {
return 0;
}
dev->buf_use = 0;
atomic_set(&dev->buf_alloc, 0);
dev->dma = kzalloc(sizeof(*dev->dma), GFP_KERNEL); dev->dma = kzalloc(sizeof(*dev->dma), GFP_KERNEL);
if (!dev->dma) if (!dev->dma)
return -ENOMEM; return -ENOMEM;
...@@ -66,11 +74,16 @@ int drm_dma_setup(struct drm_device *dev) ...@@ -66,11 +74,16 @@ int drm_dma_setup(struct drm_device *dev)
* Free all pages associated with DMA buffers, the buffers and pages lists, and * Free all pages associated with DMA buffers, the buffers and pages lists, and
* finally the drm_device::dma structure itself. * finally the drm_device::dma structure itself.
*/ */
void drm_dma_takedown(struct drm_device *dev) void drm_legacy_dma_takedown(struct drm_device *dev)
{ {
struct drm_device_dma *dma = dev->dma; struct drm_device_dma *dma = dev->dma;
int i, j; int i, j;
if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA) ||
drm_core_check_feature(dev, DRIVER_MODESET)) {
return;
}
if (!dma) if (!dma)
return; return;
......
...@@ -68,7 +68,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = { ...@@ -68,7 +68,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_GET_MAP, drm_getmap, DRM_UNLOCKED), DRM_IOCTL_DEF(DRM_IOCTL_GET_MAP, drm_getmap, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_GET_CLIENT, drm_getclient, DRM_UNLOCKED), DRM_IOCTL_DEF(DRM_IOCTL_GET_CLIENT, drm_getclient, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_GET_STATS, drm_getstats, DRM_UNLOCKED), DRM_IOCTL_DEF(DRM_IOCTL_GET_STATS, drm_getstats, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_GET_CAP, drm_getcap, DRM_UNLOCKED), DRM_IOCTL_DEF(DRM_IOCTL_GET_CAP, drm_getcap, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_SET_VERSION, drm_setversion, DRM_MASTER), DRM_IOCTL_DEF(DRM_IOCTL_SET_VERSION, drm_setversion, DRM_MASTER),
DRM_IOCTL_DEF(DRM_IOCTL_SET_UNIQUE, drm_setunique, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), DRM_IOCTL_DEF(DRM_IOCTL_SET_UNIQUE, drm_setunique, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
...@@ -87,7 +87,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = { ...@@ -87,7 +87,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_ADD_CTX, drm_addctx, DRM_AUTH|DRM_ROOT_ONLY), DRM_IOCTL_DEF(DRM_IOCTL_ADD_CTX, drm_addctx, DRM_AUTH|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_RM_CTX, drm_rmctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), DRM_IOCTL_DEF(DRM_IOCTL_RM_CTX, drm_rmctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_MOD_CTX, drm_modctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), DRM_IOCTL_DEF(DRM_IOCTL_MOD_CTX, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_GET_CTX, drm_getctx, DRM_AUTH), DRM_IOCTL_DEF(DRM_IOCTL_GET_CTX, drm_getctx, DRM_AUTH),
DRM_IOCTL_DEF(DRM_IOCTL_SWITCH_CTX, drm_switchctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), DRM_IOCTL_DEF(DRM_IOCTL_SWITCH_CTX, drm_switchctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_NEW_CTX, drm_newctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), DRM_IOCTL_DEF(DRM_IOCTL_NEW_CTX, drm_newctx, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
...@@ -106,8 +106,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = { ...@@ -106,8 +106,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_INFO_BUFS, drm_infobufs, DRM_AUTH), DRM_IOCTL_DEF(DRM_IOCTL_INFO_BUFS, drm_infobufs, DRM_AUTH),
DRM_IOCTL_DEF(DRM_IOCTL_MAP_BUFS, drm_mapbufs, DRM_AUTH), DRM_IOCTL_DEF(DRM_IOCTL_MAP_BUFS, drm_mapbufs, DRM_AUTH),
DRM_IOCTL_DEF(DRM_IOCTL_FREE_BUFS, drm_freebufs, DRM_AUTH), DRM_IOCTL_DEF(DRM_IOCTL_FREE_BUFS, drm_freebufs, DRM_AUTH),
/* The DRM_IOCTL_DMA ioctl should be defined by the driver. */ DRM_IOCTL_DEF(DRM_IOCTL_DMA, drm_dma_ioctl, DRM_AUTH),
DRM_IOCTL_DEF(DRM_IOCTL_DMA, NULL, DRM_AUTH),
DRM_IOCTL_DEF(DRM_IOCTL_CONTROL, drm_control, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), DRM_IOCTL_DEF(DRM_IOCTL_CONTROL, drm_control, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
...@@ -122,7 +121,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = { ...@@ -122,7 +121,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_AGP_UNBIND, drm_agp_unbind_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), DRM_IOCTL_DEF(DRM_IOCTL_AGP_UNBIND, drm_agp_unbind_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
#endif #endif
DRM_IOCTL_DEF(DRM_IOCTL_SG_ALLOC, drm_sg_alloc_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), DRM_IOCTL_DEF(DRM_IOCTL_SG_ALLOC, drm_sg_alloc, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_SG_FREE, drm_sg_free, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), DRM_IOCTL_DEF(DRM_IOCTL_SG_FREE, drm_sg_free, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank, DRM_UNLOCKED), DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank, DRM_UNLOCKED),
...@@ -131,14 +130,14 @@ static const struct drm_ioctl_desc drm_ioctls[] = { ...@@ -131,14 +130,14 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_UPDATE_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), DRM_IOCTL_DEF(DRM_IOCTL_UPDATE_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_GEM_CLOSE, drm_gem_close_ioctl, DRM_UNLOCKED), DRM_IOCTL_DEF(DRM_IOCTL_GEM_CLOSE, drm_gem_close_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_GEM_FLINK, drm_gem_flink_ioctl, DRM_AUTH|DRM_UNLOCKED), DRM_IOCTL_DEF(DRM_IOCTL_GEM_FLINK, drm_gem_flink_ioctl, DRM_AUTH|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_GEM_OPEN, drm_gem_open_ioctl, DRM_AUTH|DRM_UNLOCKED), DRM_IOCTL_DEF(DRM_IOCTL_GEM_OPEN, drm_gem_open_ioctl, DRM_AUTH|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETRESOURCES, drm_mode_getresources, DRM_CONTROL_ALLOW|DRM_UNLOCKED), DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETRESOURCES, drm_mode_getresources, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_PRIME_HANDLE_TO_FD, drm_prime_handle_to_fd_ioctl, DRM_AUTH|DRM_UNLOCKED), DRM_IOCTL_DEF(DRM_IOCTL_PRIME_HANDLE_TO_FD, drm_prime_handle_to_fd_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_PRIME_FD_TO_HANDLE, drm_prime_fd_to_handle_ioctl, DRM_AUTH|DRM_UNLOCKED), DRM_IOCTL_DEF(DRM_IOCTL_PRIME_FD_TO_HANDLE, drm_prime_fd_to_handle_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANERESOURCES, drm_mode_getplane_res, DRM_CONTROL_ALLOW|DRM_UNLOCKED), DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANERESOURCES, drm_mode_getplane_res, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, DRM_CONTROL_ALLOW|DRM_UNLOCKED), DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
...@@ -171,6 +170,31 @@ static const struct drm_ioctl_desc drm_ioctls[] = { ...@@ -171,6 +170,31 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
#define DRM_CORE_IOCTL_COUNT ARRAY_SIZE( drm_ioctls ) #define DRM_CORE_IOCTL_COUNT ARRAY_SIZE( drm_ioctls )
/**
* drm_legacy_dev_reinit
*
* Reinitializes a legacy/ums drm device in its lastclose function.
*/
static void drm_legacy_dev_reinit(struct drm_device *dev)
{
int i;
if (drm_core_check_feature(dev, DRIVER_MODESET))
return;
atomic_set(&dev->ioctl_count, 0);
atomic_set(&dev->vma_count, 0);
for (i = 0; i < ARRAY_SIZE(dev->counts); i++)
atomic_set(&dev->counts[i], 0);
dev->sigdata.lock = NULL;
dev->context_flag = 0;
dev->last_context = 0;
dev->if_version = 0;
}
/** /**
* Take down the DRM device. * Take down the DRM device.
* *
...@@ -195,32 +219,9 @@ int drm_lastclose(struct drm_device * dev) ...@@ -195,32 +219,9 @@ int drm_lastclose(struct drm_device * dev)
mutex_lock(&dev->struct_mutex); mutex_lock(&dev->struct_mutex);
/* Clear AGP information */ drm_agp_clear(dev);
if (drm_core_has_AGP(dev) && dev->agp &&
!drm_core_check_feature(dev, DRIVER_MODESET)) {
struct drm_agp_mem *entry, *tempe;
/* Remove AGP resources, but leave dev->agp
intact until drv_cleanup is called. */
list_for_each_entry_safe(entry, tempe, &dev->agp->memory, head) {
if (entry->bound)
drm_unbind_agp(entry->memory);
drm_free_agp(entry->memory, entry->pages);
kfree(entry);
}
INIT_LIST_HEAD(&dev->agp->memory);
if (dev->agp->acquired) drm_legacy_sg_cleanup(dev);
drm_agp_release(dev);
dev->agp->acquired = 0;
dev->agp->enabled = 0;
}
if (drm_core_check_feature(dev, DRIVER_SG) && dev->sg &&
!drm_core_check_feature(dev, DRIVER_MODESET)) {
drm_sg_cleanup(dev->sg);
dev->sg = NULL;
}
/* Clear vma list (only built for debugging) */ /* Clear vma list (only built for debugging) */
list_for_each_entry_safe(vma, vma_temp, &dev->vmalist, head) { list_for_each_entry_safe(vma, vma_temp, &dev->vmalist, head) {
...@@ -228,13 +229,13 @@ int drm_lastclose(struct drm_device * dev) ...@@ -228,13 +229,13 @@ int drm_lastclose(struct drm_device * dev)
kfree(vma); kfree(vma);
} }
if (drm_core_check_feature(dev, DRIVER_HAVE_DMA) && drm_legacy_dma_takedown(dev);
!drm_core_check_feature(dev, DRIVER_MODESET))
drm_dma_takedown(dev);
dev->dev_mapping = NULL; dev->dev_mapping = NULL;
mutex_unlock(&dev->struct_mutex); mutex_unlock(&dev->struct_mutex);
drm_legacy_dev_reinit(dev);
DRM_DEBUG("lastclose completed\n"); DRM_DEBUG("lastclose completed\n");
return 0; return 0;
} }
...@@ -251,6 +252,7 @@ static int __init drm_core_init(void) ...@@ -251,6 +252,7 @@ static int __init drm_core_init(void)
int ret = -ENOMEM; int ret = -ENOMEM;
drm_global_init(); drm_global_init();
drm_connector_ida_init();
idr_init(&drm_minors_idr); idr_init(&drm_minors_idr);
if (register_chrdev(DRM_MAJOR, "drm", &drm_stub_fops)) if (register_chrdev(DRM_MAJOR, "drm", &drm_stub_fops))
...@@ -263,13 +265,6 @@ static int __init drm_core_init(void) ...@@ -263,13 +265,6 @@ static int __init drm_core_init(void)
goto err_p2; goto err_p2;
} }
drm_proc_root = proc_mkdir("dri", NULL);
if (!drm_proc_root) {
DRM_ERROR("Cannot create /proc/dri\n");
ret = -1;
goto err_p3;
}
drm_debugfs_root = debugfs_create_dir("dri", NULL); drm_debugfs_root = debugfs_create_dir("dri", NULL);
if (!drm_debugfs_root) { if (!drm_debugfs_root) {
DRM_ERROR("Cannot create /sys/kernel/debug/dri\n"); DRM_ERROR("Cannot create /sys/kernel/debug/dri\n");
...@@ -292,12 +287,12 @@ static int __init drm_core_init(void) ...@@ -292,12 +287,12 @@ static int __init drm_core_init(void)
static void __exit drm_core_exit(void) static void __exit drm_core_exit(void)
{ {
remove_proc_entry("dri", NULL);
debugfs_remove(drm_debugfs_root); debugfs_remove(drm_debugfs_root);
drm_sysfs_destroy(); drm_sysfs_destroy();
unregister_chrdev(DRM_MAJOR, "drm"); unregister_chrdev(DRM_MAJOR, "drm");
drm_connector_ida_destroy();
idr_destroy(&drm_minors_idr); idr_destroy(&drm_minors_idr);
} }
...@@ -420,17 +415,15 @@ long drm_ioctl(struct file *filp, ...@@ -420,17 +415,15 @@ long drm_ioctl(struct file *filp,
/* Do not trust userspace, use our own definition */ /* Do not trust userspace, use our own definition */
func = ioctl->func; func = ioctl->func;
/* is there a local override? */
if ((nr == DRM_IOCTL_NR(DRM_IOCTL_DMA)) && dev->driver->dma_ioctl)
func = dev->driver->dma_ioctl;
if (!func) { if (!func) {
DRM_DEBUG("no function\n"); DRM_DEBUG("no function\n");
retcode = -EINVAL; retcode = -EINVAL;
} else if (((ioctl->flags & DRM_ROOT_ONLY) && !capable(CAP_SYS_ADMIN)) || } else if (((ioctl->flags & DRM_ROOT_ONLY) && !capable(CAP_SYS_ADMIN)) ||
((ioctl->flags & DRM_AUTH) && !file_priv->authenticated) || ((ioctl->flags & DRM_AUTH) && !drm_is_render_client(file_priv) && !file_priv->authenticated) ||
((ioctl->flags & DRM_MASTER) && !file_priv->is_master) || ((ioctl->flags & DRM_MASTER) && !file_priv->is_master) ||
(!(ioctl->flags & DRM_CONTROL_ALLOW) && (file_priv->minor->type == DRM_MINOR_CONTROL))) { (!(ioctl->flags & DRM_CONTROL_ALLOW) && (file_priv->minor->type == DRM_MINOR_CONTROL)) ||
(!(ioctl->flags & DRM_RENDER_ALLOW) && drm_is_render_client(file_priv))) {
retcode = -EACCES; retcode = -EACCES;
} else { } else {
if (cmd & (IOC_IN | IOC_OUT)) { if (cmd & (IOC_IN | IOC_OUT)) {
...@@ -485,19 +478,4 @@ long drm_ioctl(struct file *filp, ...@@ -485,19 +478,4 @@ long drm_ioctl(struct file *filp,
DRM_DEBUG("ret = %d\n", retcode); DRM_DEBUG("ret = %d\n", retcode);
return retcode; return retcode;
} }
EXPORT_SYMBOL(drm_ioctl); EXPORT_SYMBOL(drm_ioctl);
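With the DRM_RENDER_ALLOW check in place, a driver opts individual ioctls into render nodes via its ioctl table; a hedged sketch using the DRM_IOCTL_DEF_DRV() macro, with hypothetical MY_* ioctls:

static const struct drm_ioctl_desc my_driver_ioctls[] = {
	/* GEM-style ops touch no global state, safe on render nodes */
	DRM_IOCTL_DEF_DRV(MY_GEM_NEW, my_gem_new_ioctl,
			  DRM_UNLOCKED | DRM_AUTH | DRM_RENDER_ALLOW),
	/* modeset-ish ops stay restricted to the master */
	DRM_IOCTL_DEF_DRV(MY_SET_SCANOUT, my_set_scanout_ioctl,
			  DRM_UNLOCKED | DRM_AUTH | DRM_MASTER),
};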
struct drm_local_map *drm_getsarea(struct drm_device *dev)
{
struct drm_map_list *entry;
list_for_each_entry(entry, &dev->maplist, head) {
if (entry->map && entry->map->type == _DRM_SHM &&
(entry->map->flags & _DRM_CONTAINS_LOCK)) {
return entry->map;
}
}
return NULL;
}
EXPORT_SYMBOL(drm_getsarea);
...@@ -125,6 +125,9 @@ static struct edid_quirk { ...@@ -125,6 +125,9 @@ static struct edid_quirk {
/* ViewSonic VA2026w */ /* ViewSonic VA2026w */
{ "VSC", 5020, EDID_QUIRK_FORCE_REDUCED_BLANKING }, { "VSC", 5020, EDID_QUIRK_FORCE_REDUCED_BLANKING },
/* Medion MD 30217 PG */
{ "MED", 0x7b8, EDID_QUIRK_PREFER_LARGE_75 },
}; };
/* /*
...@@ -931,6 +934,36 @@ static const struct drm_display_mode edid_cea_modes[] = { ...@@ -931,6 +934,36 @@ static const struct drm_display_mode edid_cea_modes[] = {
.vrefresh = 100, }, .vrefresh = 100, },
}; };
/*
* HDMI 1.4 4k modes.
*/
static const struct drm_display_mode edid_4k_modes[] = {
/* 1 - 3840x2160@30Hz */
{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000,
3840, 4016, 4104, 4400, 0,
2160, 2168, 2178, 2250, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 30, },
/* 2 - 3840x2160@25Hz */
{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000,
3840, 4896, 4984, 5280, 0,
2160, 2168, 2178, 2250, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 25, },
/* 3 - 3840x2160@24Hz */
{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000,
3840, 5116, 5204, 5500, 0,
2160, 2168, 2178, 2250, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 24, },
/* 4 - 4096x2160@24Hz (SMPTE) */
{ DRM_MODE("4096x2160", DRM_MODE_TYPE_DRIVER, 297000,
4096, 5116, 5204, 5500, 0,
2160, 2168, 2178, 2250, 0,
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
.vrefresh = 24, },
};
/*** DDC fetch and block validation ***/ /*** DDC fetch and block validation ***/
static const u8 edid_header[] = { static const u8 edid_header[] = {
...@@ -2287,7 +2320,6 @@ add_detailed_modes(struct drm_connector *connector, struct edid *edid, ...@@ -2287,7 +2320,6 @@ add_detailed_modes(struct drm_connector *connector, struct edid *edid,
return closure.modes; return closure.modes;
} }
#define HDMI_IDENTIFIER 0x000C03
#define AUDIO_BLOCK 0x01 #define AUDIO_BLOCK 0x01
#define VIDEO_BLOCK 0x02 #define VIDEO_BLOCK 0x02
#define VENDOR_BLOCK 0x03 #define VENDOR_BLOCK 0x03
...@@ -2298,10 +2330,10 @@ add_detailed_modes(struct drm_connector *connector, struct edid *edid, ...@@ -2298,10 +2330,10 @@ add_detailed_modes(struct drm_connector *connector, struct edid *edid,
#define EDID_CEA_YCRCB422 (1 << 4) #define EDID_CEA_YCRCB422 (1 << 4)
#define EDID_CEA_VCDB_QS (1 << 6) #define EDID_CEA_VCDB_QS (1 << 6)
/** /*
* Search EDID for CEA extension block. * Search EDID for CEA extension block.
*/ */
u8 *drm_find_cea_extension(struct edid *edid) static u8 *drm_find_cea_extension(struct edid *edid)
{ {
u8 *edid_ext = NULL; u8 *edid_ext = NULL;
int i; int i;
...@@ -2322,7 +2354,6 @@ u8 *drm_find_cea_extension(struct edid *edid) ...@@ -2322,7 +2354,6 @@ u8 *drm_find_cea_extension(struct edid *edid)
return edid_ext; return edid_ext;
} }
EXPORT_SYMBOL(drm_find_cea_extension);
/* /*
* Calculate the alternate clock for the CEA mode * Calculate the alternate clock for the CEA mode
...@@ -2380,6 +2411,54 @@ u8 drm_match_cea_mode(const struct drm_display_mode *to_match) ...@@ -2380,6 +2411,54 @@ u8 drm_match_cea_mode(const struct drm_display_mode *to_match)
} }
EXPORT_SYMBOL(drm_match_cea_mode); EXPORT_SYMBOL(drm_match_cea_mode);
/*
* Calculate the alternate clock for HDMI modes (those from the HDMI vendor
* specific block).
*
* It's almost like cea_mode_alternate_clock(); we just need to add an
* exception for the VIC 4 mode (4096x2160@24Hz): there is no alternate clock
* for this one.
*/
static unsigned int
hdmi_mode_alternate_clock(const struct drm_display_mode *hdmi_mode)
{
if (hdmi_mode->hdisplay == 4096 && hdmi_mode->vdisplay == 2160)
return hdmi_mode->clock;
return cea_mode_alternate_clock(hdmi_mode);
}
/*
* drm_match_hdmi_mode - look for an HDMI mode matching a given mode
* @to_match: display mode
*
* An HDMI mode is one defined in the HDMI vendor specific block.
*
* Returns the HDMI Video ID (VIC) of the mode or 0 if it isn't one.
*/
static u8 drm_match_hdmi_mode(const struct drm_display_mode *to_match)
{
u8 mode;
if (!to_match->clock)
return 0;
for (mode = 0; mode < ARRAY_SIZE(edid_4k_modes); mode++) {
const struct drm_display_mode *hdmi_mode = &edid_4k_modes[mode];
unsigned int clock1, clock2;
/* Make sure to also match alternate clocks */
clock1 = hdmi_mode->clock;
clock2 = hdmi_mode_alternate_clock(hdmi_mode);
if ((KHZ2PICOS(to_match->clock) == KHZ2PICOS(clock1) ||
KHZ2PICOS(to_match->clock) == KHZ2PICOS(clock2)) &&
drm_mode_equal_no_clocks(to_match, hdmi_mode))
return mode + 1;
}
return 0;
}
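As a worked example of the clock matching, assuming cea_mode_alternate_clock() applies the usual 1000/1001 pull-down:

static unsigned int example_alternate_clock(void)
{
	unsigned int nominal = 297000;	/* kHz, nominal clock of the 4k/24 modes */

	/* 1000/1001 pull-down gives 296703 kHz, the "23.976 Hz" variant;
	 * both values compare equal through KHZ2PICOS() above, except for
	 * VIC 4 where hdmi_mode_alternate_clock() returns the nominal clock */
	return nominal * 1000 / 1001;
}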
static int static int
add_alternate_cea_modes(struct drm_connector *connector, struct edid *edid) add_alternate_cea_modes(struct drm_connector *connector, struct edid *edid)
{ {
...@@ -2397,18 +2476,26 @@ add_alternate_cea_modes(struct drm_connector *connector, struct edid *edid) ...@@ -2397,18 +2476,26 @@ add_alternate_cea_modes(struct drm_connector *connector, struct edid *edid)
* with the alternate clock for certain CEA modes. * with the alternate clock for certain CEA modes.
*/ */
list_for_each_entry(mode, &connector->probed_modes, head) { list_for_each_entry(mode, &connector->probed_modes, head) {
const struct drm_display_mode *cea_mode; const struct drm_display_mode *cea_mode = NULL;
struct drm_display_mode *newmode; struct drm_display_mode *newmode;
u8 cea_mode_idx = drm_match_cea_mode(mode) - 1; u8 mode_idx = drm_match_cea_mode(mode) - 1;
unsigned int clock1, clock2; unsigned int clock1, clock2;
if (cea_mode_idx >= ARRAY_SIZE(edid_cea_modes)) if (mode_idx < ARRAY_SIZE(edid_cea_modes)) {
continue; cea_mode = &edid_cea_modes[mode_idx];
clock2 = cea_mode_alternate_clock(cea_mode);
} else {
mode_idx = drm_match_hdmi_mode(mode) - 1;
if (mode_idx < ARRAY_SIZE(edid_4k_modes)) {
cea_mode = &edid_4k_modes[mode_idx];
clock2 = hdmi_mode_alternate_clock(cea_mode);
}
}
cea_mode = &edid_cea_modes[cea_mode_idx]; if (!cea_mode)
continue;
clock1 = cea_mode->clock; clock1 = cea_mode->clock;
clock2 = cea_mode_alternate_clock(cea_mode);
if (clock1 == clock2) if (clock1 == clock2)
continue; continue;
...@@ -2442,10 +2529,11 @@ add_alternate_cea_modes(struct drm_connector *connector, struct edid *edid) ...@@ -2442,10 +2529,11 @@ add_alternate_cea_modes(struct drm_connector *connector, struct edid *edid)
} }
static int static int
do_cea_modes (struct drm_connector *connector, u8 *db, u8 len) do_cea_modes(struct drm_connector *connector, const u8 *db, u8 len)
{ {
struct drm_device *dev = connector->dev; struct drm_device *dev = connector->dev;
u8 * mode, cea_mode; const u8 *mode;
u8 cea_mode;
int modes = 0; int modes = 0;
for (mode = db; mode < db + len; mode++) { for (mode = db; mode < db + len; mode++) {
...@@ -2465,6 +2553,68 @@ do_cea_modes (struct drm_connector *connector, u8 *db, u8 len) ...@@ -2465,6 +2553,68 @@ do_cea_modes (struct drm_connector *connector, u8 *db, u8 len)
return modes; return modes;
} }
/*
* do_hdmi_vsdb_modes - Parse the HDMI Vendor Specific data block
* @connector: connector corresponding to the HDMI sink
* @db: start of the CEA vendor specific block
* @len: length of the CEA block payload, i.e. one can access up to db[len]
*
* Parses the HDMI VSDB looking for modes to add to @connector.
*/
static int
do_hdmi_vsdb_modes(struct drm_connector *connector, const u8 *db, u8 len)
{
struct drm_device *dev = connector->dev;
int modes = 0, offset = 0, i;
u8 vic_len;
if (len < 8)
goto out;
/* no HDMI_Video_Present */
if (!(db[8] & (1 << 5)))
goto out;
/* Latency_Fields_Present */
if (db[8] & (1 << 7))
offset += 2;
/* I_Latency_Fields_Present */
if (db[8] & (1 << 6))
offset += 2;
/* the declared length must also cover the first two bytes of the
* additional video format capabilities */
offset += 2;
if (len < (8 + offset))
goto out;
vic_len = db[8 + offset] >> 5;
for (i = 0; i < vic_len && len >= (9 + offset + i); i++) {
struct drm_display_mode *newmode;
u8 vic;
vic = db[9 + offset + i];
vic--; /* VICs start at 1 */
if (vic >= ARRAY_SIZE(edid_4k_modes)) {
DRM_ERROR("Unknown HDMI VIC: %d\n", vic);
continue;
}
newmode = drm_mode_duplicate(dev, &edid_4k_modes[vic]);
if (!newmode)
continue;
drm_mode_probed_add(connector, newmode);
modes++;
}
out:
return modes;
}
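To make the offset arithmetic concrete, a hedged, purely illustrative VSDB laid out as this parser would walk it (byte values invented for the example; the kernel u8 type is assumed):

/* hypothetical sink: one HDMI VIC (3840x2160@30Hz), no latency fields */
static const u8 example_hdmi_vsdb[] = {
	0x6b,			/* db[0]: tag VENDOR_BLOCK << 5 | payload len 11 */
	0x03, 0x0c, 0x00,	/* db[1..3]: HDMI IEEE OUI, little-endian */
	0x10, 0x00,		/* db[4..5]: CEC physical address 1.0.0.0 */
	0x00,			/* db[6]: capability flags (illustrative) */
	0x3c,			/* db[7]: max TMDS clock (illustrative) */
	0x20,			/* db[8]: HDMI_Video_Present set, no latency fields */
	0x00,			/* db[9]: 3D flags, first of the two skipped bytes */
	0x20,			/* db[10]: HDMI_VIC_LEN = 1 in the top 3 bits */
	0x01,			/* db[11]: HDMI VIC 1 -> 3840x2160@30Hz */
};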
static int static int
cea_db_payload_len(const u8 *db) cea_db_payload_len(const u8 *db)
{ {
...@@ -2496,14 +2646,30 @@ cea_db_offsets(const u8 *cea, int *start, int *end) ...@@ -2496,14 +2646,30 @@ cea_db_offsets(const u8 *cea, int *start, int *end)
return 0; return 0;
} }
static bool cea_db_is_hdmi_vsdb(const u8 *db)
{
int hdmi_id;
if (cea_db_tag(db) != VENDOR_BLOCK)
return false;
if (cea_db_payload_len(db) < 5)
return false;
hdmi_id = db[1] | (db[2] << 8) | (db[3] << 16);
return hdmi_id == HDMI_IEEE_OUI;
}
#define for_each_cea_db(cea, i, start, end) \ #define for_each_cea_db(cea, i, start, end) \
for ((i) = (start); (i) < (end) && (i) + cea_db_payload_len(&(cea)[(i)]) < (end); (i) += cea_db_payload_len(&(cea)[(i)]) + 1) for ((i) = (start); (i) < (end) && (i) + cea_db_payload_len(&(cea)[(i)]) < (end); (i) += cea_db_payload_len(&(cea)[(i)]) + 1)
static int static int
add_cea_modes(struct drm_connector *connector, struct edid *edid) add_cea_modes(struct drm_connector *connector, struct edid *edid)
{ {
u8 * cea = drm_find_cea_extension(edid); const u8 *cea = drm_find_cea_extension(edid);
u8 * db, dbl; const u8 *db;
u8 dbl;
int modes = 0; int modes = 0;
if (cea && cea_revision(cea) >= 3) { if (cea && cea_revision(cea) >= 3) {
...@@ -2517,7 +2683,9 @@ add_cea_modes(struct drm_connector *connector, struct edid *edid) ...@@ -2517,7 +2683,9 @@ add_cea_modes(struct drm_connector *connector, struct edid *edid)
dbl = cea_db_payload_len(db); dbl = cea_db_payload_len(db);
if (cea_db_tag(db) == VIDEO_BLOCK) if (cea_db_tag(db) == VIDEO_BLOCK)
modes += do_cea_modes (connector, db+1, dbl); modes += do_cea_modes(connector, db + 1, dbl);
else if (cea_db_is_hdmi_vsdb(db))
modes += do_hdmi_vsdb_modes(connector, db, dbl);
} }
} }
...@@ -2570,21 +2738,6 @@ monitor_name(struct detailed_timing *t, void *data) ...@@ -2570,21 +2738,6 @@ monitor_name(struct detailed_timing *t, void *data)
*(u8 **)data = t->data.other_data.data.str.str; *(u8 **)data = t->data.other_data.data.str.str;
} }
static bool cea_db_is_hdmi_vsdb(const u8 *db)
{
int hdmi_id;
if (cea_db_tag(db) != VENDOR_BLOCK)
return false;
if (cea_db_payload_len(db) < 5)
return false;
hdmi_id = db[1] | (db[2] << 8) | (db[3] << 16);
return hdmi_id == HDMI_IDENTIFIER;
}
/** /**
* drm_edid_to_eld - build ELD from EDID * drm_edid_to_eld - build ELD from EDID
* @connector: connector corresponding to the HDMI/DP sink * @connector: connector corresponding to the HDMI/DP sink
...@@ -2731,6 +2884,58 @@ int drm_edid_to_sad(struct edid *edid, struct cea_sad **sads) ...@@ -2731,6 +2884,58 @@ int drm_edid_to_sad(struct edid *edid, struct cea_sad **sads)
} }
EXPORT_SYMBOL(drm_edid_to_sad); EXPORT_SYMBOL(drm_edid_to_sad);
/**
* drm_edid_to_speaker_allocation - extracts Speaker Allocation Data Blocks from EDID
* @edid: EDID to parse
* @sadb: pointer to the speaker block
*
* Looks for a CEA EDID block and extracts the Speaker Allocation Data Block from it.
* Note: the returned pointer must be kfreed by the caller
*
* Returns the number of Speaker Allocation Blocks found or a negative error code.
*/
int drm_edid_to_speaker_allocation(struct edid *edid, u8 **sadb)
{
int count = 0;
int i, start, end, dbl;
const u8 *cea;
cea = drm_find_cea_extension(edid);
if (!cea) {
DRM_DEBUG_KMS("SAD: no CEA Extension found\n");
return -ENOENT;
}
if (cea_revision(cea) < 3) {
DRM_DEBUG_KMS("SAD: wrong CEA revision\n");
return -ENOTSUPP;
}
if (cea_db_offsets(cea, &start, &end)) {
DRM_DEBUG_KMS("SAD: invalid data block offsets\n");
return -EPROTO;
}
for_each_cea_db(cea, i, start, end) {
const u8 *db = &cea[i];
if (cea_db_tag(db) == SPEAKER_BLOCK) {
dbl = cea_db_payload_len(db);
/* Speaker Allocation Data Block */
if (dbl == 3) {
*sadb = kmalloc(dbl, GFP_KERNEL);
if (!*sadb)
return -ENOMEM;
memcpy(*sadb, &db[1], dbl);
count = dbl;
break;
}
}
}
return count;
}
EXPORT_SYMBOL(drm_edid_to_speaker_allocation);
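A hedged usage sketch, e.g. from HDMI audio setup code; the helper documents that the caller owns the returned buffer:

static void my_log_speaker_allocation(struct edid *edid)
{
	u8 *sadb = NULL;
	int count;

	count = drm_edid_to_speaker_allocation(edid, &sadb);
	if (count < 0)
		return;			/* no CEA block, old revision, ... */
	if (count > 0)
		DRM_DEBUG_KMS("speaker allocation: 0x%02x\n", sadb[0]);
	kfree(sadb);			/* kfree(NULL) is a no-op */
}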
/** /**
* drm_av_sync_delay - HDMI/DP sink audio-video sync delay in millisecond * drm_av_sync_delay - HDMI/DP sink audio-video sync delay in millisecond
* @connector: connector associated with the HDMI/DP sink * @connector: connector associated with the HDMI/DP sink
...@@ -3102,9 +3307,10 @@ drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame, ...@@ -3102,9 +3307,10 @@ drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame,
if (err < 0) if (err < 0)
return err; return err;
if (mode->flags & DRM_MODE_FLAG_DBLCLK)
frame->pixel_repeat = 1;
frame->video_code = drm_match_cea_mode(mode); frame->video_code = drm_match_cea_mode(mode);
if (!frame->video_code)
return 0;
frame->picture_aspect = HDMI_PICTURE_ASPECT_NONE; frame->picture_aspect = HDMI_PICTURE_ASPECT_NONE;
frame->active_aspect = HDMI_ACTIVE_ASPECT_PICTURE; frame->active_aspect = HDMI_ACTIVE_ASPECT_PICTURE;
...@@ -3112,3 +3318,39 @@ drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame, ...@@ -3112,3 +3318,39 @@ drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame,
return 0; return 0;
} }
EXPORT_SYMBOL(drm_hdmi_avi_infoframe_from_display_mode); EXPORT_SYMBOL(drm_hdmi_avi_infoframe_from_display_mode);
/**
* drm_hdmi_vendor_infoframe_from_display_mode() - fill an HDMI infoframe with
* data from a DRM display mode
* @frame: HDMI vendor infoframe
* @mode: DRM display mode
*
* Note that HDMI vendor infoframes only need to be sent when using a 4k or
* stereoscopic 3D mode. So when given any other mode as input, this function
* will return -EINVAL, an error that can be safely ignored.
*
* Returns 0 on success or a negative error code on failure.
*/
int
drm_hdmi_vendor_infoframe_from_display_mode(struct hdmi_vendor_infoframe *frame,
const struct drm_display_mode *mode)
{
int err;
u8 vic;
if (!frame || !mode)
return -EINVAL;
vic = drm_match_hdmi_mode(mode);
if (!vic)
return -EINVAL;
err = hdmi_vendor_infoframe_init(frame);
if (err < 0)
return err;
frame->vic = vic;
return 0;
}
EXPORT_SYMBOL(drm_hdmi_vendor_infoframe_from_display_mode);
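A hedged sketch of how an HDMI encoder might consume the new helper; hdmi_vendor_infoframe_pack() is assumed to be the packing routine from <linux/hdmi.h>:

static void my_write_vendor_infoframe(const struct drm_display_mode *mode)
{
	struct hdmi_vendor_infoframe frame;
	u8 buffer[16];
	ssize_t len;

	/* -EINVAL simply means "not a 4k/3D mode, nothing to send" */
	if (drm_hdmi_vendor_infoframe_from_display_mode(&frame, mode) < 0)
		return;

	len = hdmi_vendor_infoframe_pack(&frame, buffer, sizeof(buffer));
	if (len < 0)
		return;

	/* hand buffer/len to the hardware's infoframe registers here */
}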
...@@ -181,11 +181,11 @@ struct drm_gem_cma_object *drm_fb_cma_get_gem_obj(struct drm_framebuffer *fb, ...@@ -181,11 +181,11 @@ struct drm_gem_cma_object *drm_fb_cma_get_gem_obj(struct drm_framebuffer *fb,
EXPORT_SYMBOL_GPL(drm_fb_cma_get_gem_obj); EXPORT_SYMBOL_GPL(drm_fb_cma_get_gem_obj);
#ifdef CONFIG_DEBUG_FS #ifdef CONFIG_DEBUG_FS
/** /*
* drm_fb_cma_describe() - Helper to dump information about a single * drm_fb_cma_describe() - Helper to dump information about a single
* CMA framebuffer object * CMA framebuffer object
*/ */
void drm_fb_cma_describe(struct drm_framebuffer *fb, struct seq_file *m) static void drm_fb_cma_describe(struct drm_framebuffer *fb, struct seq_file *m)
{ {
struct drm_fb_cma *fb_cma = to_fb_cma(fb); struct drm_fb_cma *fb_cma = to_fb_cma(fb);
int i, n = drm_format_num_planes(fb->pixel_format); int i, n = drm_format_num_planes(fb->pixel_format);
...@@ -199,7 +199,6 @@ void drm_fb_cma_describe(struct drm_framebuffer *fb, struct seq_file *m) ...@@ -199,7 +199,6 @@ void drm_fb_cma_describe(struct drm_framebuffer *fb, struct seq_file *m)
drm_gem_cma_describe(fb_cma->obj[i], m); drm_gem_cma_describe(fb_cma->obj[i], m);
} }
} }
EXPORT_SYMBOL_GPL(drm_fb_cma_describe);
/** /**
* drm_fb_cma_debugfs_show() - Helper to list CMA framebuffer objects * drm_fb_cma_debugfs_show() - Helper to list CMA framebuffer objects
......
/*
* Copyright (C) 2013 Red Hat
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include "drmP.h"
#include "drm_flip_work.h"
/**
* drm_flip_work_queue - queue work
* @work: the flip-work
* @val: the value to queue
*
* Queues work that will later be run (passed back to the drm_flip_func_t
* func) on a workqueue after drm_flip_work_commit() is called.
*/
void drm_flip_work_queue(struct drm_flip_work *work, void *val)
{
if (kfifo_put(&work->fifo, (const void **)&val)) {
atomic_inc(&work->pending);
} else {
DRM_ERROR("%s fifo full!\n", work->name);
work->func(work, val);
}
}
EXPORT_SYMBOL(drm_flip_work_queue);
/**
* drm_flip_work_commit - commit queued work
* @work: the flip-work
* @wq: the work-queue to run the queued work on
*
* Trigger work previously queued by drm_flip_work_queue() to run
* on a workqueue. The typical usage would be to queue work (via
* drm_flip_work_queue()) at any point (from vblank irq and/or
* prior), and then from vblank irq commit the queued work.
*/
void drm_flip_work_commit(struct drm_flip_work *work,
struct workqueue_struct *wq)
{
uint32_t pending = atomic_read(&work->pending);
atomic_add(pending, &work->count);
atomic_sub(pending, &work->pending);
queue_work(wq, &work->worker);
}
EXPORT_SYMBOL(drm_flip_work_commit);
static void flip_worker(struct work_struct *w)
{
struct drm_flip_work *work = container_of(w, struct drm_flip_work, worker);
uint32_t count = atomic_read(&work->count);
void *val = NULL;
atomic_sub(count, &work->count);
while(count--)
if (!WARN_ON(!kfifo_get(&work->fifo, &val)))
work->func(work, val);
}
/**
* drm_flip_work_init - initialize flip-work
* @work: the flip-work to initialize
* @size: the max queue depth
* @name: debug name
* @func: the callback work function
*
* Initializes/allocates resources for the flip-work
*
* RETURNS:
* Zero on success, error code on failure.
*/
int drm_flip_work_init(struct drm_flip_work *work, int size,
const char *name, drm_flip_func_t func)
{
int ret;
work->name = name;
atomic_set(&work->count, 0);
atomic_set(&work->pending, 0);
work->func = func;
ret = kfifo_alloc(&work->fifo, size, GFP_KERNEL);
if (ret) {
DRM_ERROR("could not allocate %s fifo\n", name);
return ret;
}
INIT_WORK(&work->worker, flip_worker);
return 0;
}
EXPORT_SYMBOL(drm_flip_work_init);
/**
* drm_flip_work_cleanup - cleans up flip-work
* @work: the flip-work to cleanup
*
* Destroy resources allocated for the flip-work
*/
void drm_flip_work_cleanup(struct drm_flip_work *work)
{
WARN_ON(!kfifo_is_empty(&work->fifo));
kfifo_free(&work->fifo);
}
EXPORT_SYMBOL(drm_flip_work_cleanup);
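A hedged usage sketch matching the kerneldoc above: unreference the old framebuffer from a workqueue once the flip has actually happened (all my_* names are hypothetical):

static struct drm_flip_work unref_work;

static void unref_worker(struct drm_flip_work *work, void *val)
{
	drm_framebuffer_unreference(val);
}

static int my_init(void)
{
	return drm_flip_work_init(&unref_work, 16, "fb unref", unref_worker);
}

/* at flip-request time, possibly from atomic context */
static void my_queue_unref(struct drm_framebuffer *old_fb)
{
	drm_flip_work_queue(&unref_work, old_fb);
}

/* from the vblank interrupt, once the flip has completed */
static void my_vblank(struct workqueue_struct *wq)
{
	drm_flip_work_commit(&unref_work, wq);
}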
...@@ -48,59 +48,21 @@ static int drm_open_helper(struct inode *inode, struct file *filp, ...@@ -48,59 +48,21 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
static int drm_setup(struct drm_device * dev) static int drm_setup(struct drm_device * dev)
{ {
int i;
int ret; int ret;
if (dev->driver->firstopen) { if (dev->driver->firstopen &&
!drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = dev->driver->firstopen(dev); ret = dev->driver->firstopen(dev);
if (ret != 0) if (ret != 0)
return ret; return ret;
} }
atomic_set(&dev->ioctl_count, 0); ret = drm_legacy_dma_setup(dev);
atomic_set(&dev->vma_count, 0); if (ret < 0)
return ret;
if (drm_core_check_feature(dev, DRIVER_HAVE_DMA) &&
!drm_core_check_feature(dev, DRIVER_MODESET)) {
dev->buf_use = 0;
atomic_set(&dev->buf_alloc, 0);
i = drm_dma_setup(dev);
if (i < 0)
return i;
}
for (i = 0; i < ARRAY_SIZE(dev->counts); i++)
atomic_set(&dev->counts[i], 0);
dev->sigdata.lock = NULL;
dev->context_flag = 0;
dev->interrupt_flag = 0;
dev->dma_flag = 0;
dev->last_context = 0;
dev->last_switch = 0;
dev->last_checked = 0;
init_waitqueue_head(&dev->context_wait);
dev->if_version = 0;
dev->ctx_start = 0;
dev->lck_start = 0;
dev->buf_async = NULL;
init_waitqueue_head(&dev->buf_readers);
init_waitqueue_head(&dev->buf_writers);
DRM_DEBUG("\n"); DRM_DEBUG("\n");
/*
* The kernel's context could be created here, but is now created
* in drm_dma_enqueue. This is more resource-efficient for
* hardware that does not do DMA, but may mean that
* drm_select_queue fails between the time the interrupt is
* initialized and the time the queues are initialized.
*/
return 0; return 0;
} }
...@@ -257,7 +219,7 @@ static int drm_open_helper(struct inode *inode, struct file *filp, ...@@ -257,7 +219,7 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
return -EBUSY; /* No exclusive opens */ return -EBUSY; /* No exclusive opens */
if (!drm_cpu_valid()) if (!drm_cpu_valid())
return -EINVAL; return -EINVAL;
if (dev->switch_power_state != DRM_SWITCH_POWER_ON) if (dev->switch_power_state != DRM_SWITCH_POWER_ON && dev->switch_power_state != DRM_SWITCH_POWER_DYNAMIC_OFF)
return -EINVAL; return -EINVAL;
DRM_DEBUG("pid = %d, minor = %d\n", task_pid_nr(current), minor_id); DRM_DEBUG("pid = %d, minor = %d\n", task_pid_nr(current), minor_id);
...@@ -300,10 +262,10 @@ static int drm_open_helper(struct inode *inode, struct file *filp, ...@@ -300,10 +262,10 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
goto out_prime_destroy; goto out_prime_destroy;
} }
/* if there is no current master make this fd it, but do not create
/* if there is no current master make this fd it */ * any master object for render clients */
mutex_lock(&dev->struct_mutex); mutex_lock(&dev->struct_mutex);
if (!priv->minor->master) { if (!priv->minor->master && !drm_is_render_client(priv)) {
/* create a new master */ /* create a new master */
priv->minor->master = drm_master_create(priv->minor); priv->minor->master = drm_master_create(priv->minor);
if (!priv->minor->master) { if (!priv->minor->master) {
...@@ -341,12 +303,11 @@ static int drm_open_helper(struct inode *inode, struct file *filp, ...@@ -341,12 +303,11 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
goto out_close; goto out_close;
} }
} }
mutex_unlock(&dev->struct_mutex); } else if (!drm_is_render_client(priv)) {
} else {
/* get a reference to the master */ /* get a reference to the master */
priv->master = drm_master_get(priv->minor->master); priv->master = drm_master_get(priv->minor->master);
mutex_unlock(&dev->struct_mutex);
} }
mutex_unlock(&dev->struct_mutex);
mutex_lock(&dev->struct_mutex); mutex_lock(&dev->struct_mutex);
list_add(&priv->lhead, &dev->filelist); list_add(&priv->lhead, &dev->filelist);
...@@ -388,18 +349,6 @@ static int drm_open_helper(struct inode *inode, struct file *filp, ...@@ -388,18 +349,6 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
return ret; return ret;
} }
/** No-op. */
int drm_fasync(int fd, struct file *filp, int on)
{
struct drm_file *priv = filp->private_data;
struct drm_device *dev = priv->minor->dev;
DRM_DEBUG("fd = %d, device = 0x%lx\n", fd,
(long)old_encode_dev(priv->minor->device));
return fasync_helper(fd, filp, on, &dev->buf_async);
}
EXPORT_SYMBOL(drm_fasync);
static void drm_master_release(struct drm_device *dev, struct file *filp) static void drm_master_release(struct drm_device *dev, struct file *filp)
{ {
struct drm_file *file_priv = filp->private_data; struct drm_file *file_priv = filp->private_data;
...@@ -490,26 +439,7 @@ int drm_release(struct inode *inode, struct file *filp) ...@@ -490,26 +439,7 @@ int drm_release(struct inode *inode, struct file *filp)
if (dev->driver->driver_features & DRIVER_GEM) if (dev->driver->driver_features & DRIVER_GEM)
drm_gem_release(dev, file_priv); drm_gem_release(dev, file_priv);
mutex_lock(&dev->ctxlist_mutex); drm_legacy_ctxbitmap_release(dev, file_priv);
if (!list_empty(&dev->ctxlist)) {
struct drm_ctx_list *pos, *n;
list_for_each_entry_safe(pos, n, &dev->ctxlist, head) {
if (pos->tag == file_priv &&
pos->handle != DRM_KERNEL_CONTEXT) {
if (dev->driver->context_dtor)
dev->driver->context_dtor(dev,
pos->handle);
drm_ctxbitmap_free(dev, pos->handle);
list_del(&pos->head);
kfree(pos);
--dev->ctx_count;
}
}
}
mutex_unlock(&dev->ctxlist_mutex);
mutex_lock(&dev->struct_mutex); mutex_lock(&dev->struct_mutex);
...@@ -547,7 +477,8 @@ int drm_release(struct inode *inode, struct file *filp) ...@@ -547,7 +477,8 @@ int drm_release(struct inode *inode, struct file *filp)
iput(container_of(dev->dev_mapping, struct inode, i_data)); iput(container_of(dev->dev_mapping, struct inode, i_data));
/* drop the reference held by the file priv */ /* drop the reference held by the file priv */
drm_master_put(&file_priv->master); if (file_priv->master)
drm_master_put(&file_priv->master);
file_priv->is_master = 0; file_priv->is_master = 0;
list_del(&file_priv->lhead); list_del(&file_priv->lhead);
mutex_unlock(&dev->struct_mutex); mutex_unlock(&dev->struct_mutex);
...@@ -555,6 +486,7 @@ int drm_release(struct inode *inode, struct file *filp) ...@@ -555,6 +486,7 @@ int drm_release(struct inode *inode, struct file *filp)
if (dev->driver->postclose) if (dev->driver->postclose)
dev->driver->postclose(dev, file_priv); dev->driver->postclose(dev, file_priv);
if (drm_core_check_feature(dev, DRIVER_PRIME)) if (drm_core_check_feature(dev, DRIVER_PRIME))
drm_prime_destroy_file_private(&file_priv->prime); drm_prime_destroy_file_private(&file_priv->prime);
......
...@@ -37,6 +37,7 @@ ...@@ -37,6 +37,7 @@
#include <linux/shmem_fs.h> #include <linux/shmem_fs.h>
#include <linux/dma-buf.h> #include <linux/dma-buf.h>
#include <drm/drmP.h> #include <drm/drmP.h>
#include <drm/drm_vma_manager.h>
/** @file drm_gem.c /** @file drm_gem.c
* *
...@@ -92,7 +93,7 @@ drm_gem_init(struct drm_device *dev) ...@@ -92,7 +93,7 @@ drm_gem_init(struct drm_device *dev)
{ {
struct drm_gem_mm *mm; struct drm_gem_mm *mm;
spin_lock_init(&dev->object_name_lock); mutex_init(&dev->object_name_lock);
idr_init(&dev->object_name_idr); idr_init(&dev->object_name_idr);
mm = kzalloc(sizeof(struct drm_gem_mm), GFP_KERNEL); mm = kzalloc(sizeof(struct drm_gem_mm), GFP_KERNEL);
...@@ -102,14 +103,9 @@ drm_gem_init(struct drm_device *dev) ...@@ -102,14 +103,9 @@ drm_gem_init(struct drm_device *dev)
} }
dev->mm_private = mm; dev->mm_private = mm;
drm_vma_offset_manager_init(&mm->vma_manager,
if (drm_ht_create(&mm->offset_hash, 12)) { DRM_FILE_PAGE_OFFSET_START,
kfree(mm); DRM_FILE_PAGE_OFFSET_SIZE);
return -ENOMEM;
}
drm_mm_init(&mm->offset_manager, DRM_FILE_PAGE_OFFSET_START,
DRM_FILE_PAGE_OFFSET_SIZE);
return 0; return 0;
} }
...@@ -119,8 +115,7 @@ drm_gem_destroy(struct drm_device *dev) ...@@ -119,8 +115,7 @@ drm_gem_destroy(struct drm_device *dev)
{ {
struct drm_gem_mm *mm = dev->mm_private; struct drm_gem_mm *mm = dev->mm_private;
drm_mm_takedown(&mm->offset_manager); drm_vma_offset_manager_destroy(&mm->vma_manager);
drm_ht_remove(&mm->offset_hash);
kfree(mm); kfree(mm);
dev->mm_private = NULL; dev->mm_private = NULL;
} }
...@@ -132,16 +127,14 @@ drm_gem_destroy(struct drm_device *dev) ...@@ -132,16 +127,14 @@ drm_gem_destroy(struct drm_device *dev)
int drm_gem_object_init(struct drm_device *dev, int drm_gem_object_init(struct drm_device *dev,
struct drm_gem_object *obj, size_t size) struct drm_gem_object *obj, size_t size)
{ {
BUG_ON((size & (PAGE_SIZE - 1)) != 0); struct file *filp;
obj->dev = dev; filp = shmem_file_setup("drm mm object", size, VM_NORESERVE);
obj->filp = shmem_file_setup("drm mm object", size, VM_NORESERVE); if (IS_ERR(filp))
if (IS_ERR(obj->filp)) return PTR_ERR(filp);
return PTR_ERR(obj->filp);
kref_init(&obj->refcount); drm_gem_private_object_init(dev, obj, size);
atomic_set(&obj->handle_count, 0); obj->filp = filp;
obj->size = size;
return 0; return 0;
} }
...@@ -152,8 +145,8 @@ EXPORT_SYMBOL(drm_gem_object_init); ...@@ -152,8 +145,8 @@ EXPORT_SYMBOL(drm_gem_object_init);
* no GEM provided backing store. Instead the caller is responsible for * no GEM provided backing store. Instead the caller is responsible for
* backing the object and handling it. * backing the object and handling it.
*/ */
int drm_gem_private_object_init(struct drm_device *dev, void drm_gem_private_object_init(struct drm_device *dev,
struct drm_gem_object *obj, size_t size) struct drm_gem_object *obj, size_t size)
{ {
BUG_ON((size & (PAGE_SIZE - 1)) != 0); BUG_ON((size & (PAGE_SIZE - 1)) != 0);
...@@ -161,10 +154,9 @@ int drm_gem_private_object_init(struct drm_device *dev, ...@@ -161,10 +154,9 @@ int drm_gem_private_object_init(struct drm_device *dev,
obj->filp = NULL; obj->filp = NULL;
kref_init(&obj->refcount); kref_init(&obj->refcount);
atomic_set(&obj->handle_count, 0); obj->handle_count = 0;
obj->size = size; obj->size = size;
drm_vma_node_reset(&obj->vma_node);
return 0;
} }
EXPORT_SYMBOL(drm_gem_private_object_init); EXPORT_SYMBOL(drm_gem_private_object_init);
...@@ -200,16 +192,79 @@ EXPORT_SYMBOL(drm_gem_object_alloc); ...@@ -200,16 +192,79 @@ EXPORT_SYMBOL(drm_gem_object_alloc);
static void static void
drm_gem_remove_prime_handles(struct drm_gem_object *obj, struct drm_file *filp) drm_gem_remove_prime_handles(struct drm_gem_object *obj, struct drm_file *filp)
{ {
if (obj->import_attach) { /*
drm_prime_remove_buf_handle(&filp->prime, * Note: obj->dma_buf can't disappear as long as we still hold a
obj->import_attach->dmabuf); * handle reference in obj->handle_count.
*/
mutex_lock(&filp->prime.lock);
if (obj->dma_buf) {
drm_prime_remove_buf_handle_locked(&filp->prime,
obj->dma_buf);
} }
if (obj->export_dma_buf) { mutex_unlock(&filp->prime.lock);
drm_prime_remove_buf_handle(&filp->prime, }
obj->export_dma_buf);
static void drm_gem_object_ref_bug(struct kref *list_kref)
{
BUG();
}
/**
* Called after the last handle to the object has been closed
*
* Removes any name for the object. Note that this must be
* called before drm_gem_object_free or we'll be touching
* freed memory
*/
static void drm_gem_object_handle_free(struct drm_gem_object *obj)
{
struct drm_device *dev = obj->dev;
/* Remove any name for this object */
if (obj->name) {
idr_remove(&dev->object_name_idr, obj->name);
obj->name = 0;
/*
* The object name held a reference to this object, drop
* that now.
*
* This cannot be the last reference, since the handle holds one too.
*/
kref_put(&obj->refcount, drm_gem_object_ref_bug);
} }
} }
static void drm_gem_object_exported_dma_buf_free(struct drm_gem_object *obj)
{
/* Unbreak the reference cycle if we have an exported dma_buf. */
if (obj->dma_buf) {
dma_buf_put(obj->dma_buf);
obj->dma_buf = NULL;
}
}
static void
drm_gem_object_handle_unreference_unlocked(struct drm_gem_object *obj)
{
if (WARN_ON(obj->handle_count == 0))
return;
/*
* Must bump handle count first as this may be the last
* ref, in which case the object would disappear before we
* checked for a name
*/
mutex_lock(&obj->dev->object_name_lock);
if (--obj->handle_count == 0) {
drm_gem_object_handle_free(obj);
drm_gem_object_exported_dma_buf_free(obj);
}
mutex_unlock(&obj->dev->object_name_lock);
drm_gem_object_unreference_unlocked(obj);
}
/** /**
* Removes the mapping from handle to filp for this object. * Removes the mapping from handle to filp for this object.
*/ */
...@@ -242,7 +297,9 @@ drm_gem_handle_delete(struct drm_file *filp, u32 handle) ...@@ -242,7 +297,9 @@ drm_gem_handle_delete(struct drm_file *filp, u32 handle)
idr_remove(&filp->object_idr, handle); idr_remove(&filp->object_idr, handle);
spin_unlock(&filp->table_lock); spin_unlock(&filp->table_lock);
drm_gem_remove_prime_handles(obj, filp); if (drm_core_check_feature(dev, DRIVER_PRIME))
drm_gem_remove_prime_handles(obj, filp);
drm_vma_node_revoke(&obj->vma_node, filp->filp);
if (dev->driver->gem_close_object) if (dev->driver->gem_close_object)
dev->driver->gem_close_object(obj, filp); dev->driver->gem_close_object(obj, filp);
...@@ -253,18 +310,36 @@ drm_gem_handle_delete(struct drm_file *filp, u32 handle) ...@@ -253,18 +310,36 @@ drm_gem_handle_delete(struct drm_file *filp, u32 handle)
EXPORT_SYMBOL(drm_gem_handle_delete); EXPORT_SYMBOL(drm_gem_handle_delete);
/** /**
* Create a handle for this object. This adds a handle reference * drm_gem_dumb_destroy - dumb fb callback helper for gem based drivers
* to the object, which includes a regular reference count. Callers *
* will likely want to dereference the object afterwards. * This implements the ->dumb_destroy kms driver callback for drivers which use
* gem to manage their backing storage.
*/
int drm_gem_dumb_destroy(struct drm_file *file,
struct drm_device *dev,
uint32_t handle)
{
return drm_gem_handle_delete(file, handle);
}
EXPORT_SYMBOL(drm_gem_dumb_destroy);
/**
* drm_gem_handle_create_tail - internal function to create a handle
*
* This expects the dev->object_name_lock to be held already and will drop it
* before returning. Used to avoid races in establishing new handles when
* importing an object from either an flink name or a dma-buf.
*/ */
int int
drm_gem_handle_create(struct drm_file *file_priv, drm_gem_handle_create_tail(struct drm_file *file_priv,
struct drm_gem_object *obj, struct drm_gem_object *obj,
u32 *handlep) u32 *handlep)
{ {
struct drm_device *dev = obj->dev; struct drm_device *dev = obj->dev;
int ret; int ret;
WARN_ON(!mutex_is_locked(&dev->object_name_lock));
/* /*
* Get the user-visible handle using idr. Preload and perform * Get the user-visible handle using idr. Preload and perform
* allocation under our spinlock. * allocation under our spinlock.
...@@ -273,14 +348,22 @@ drm_gem_handle_create(struct drm_file *file_priv, ...@@ -273,14 +348,22 @@ drm_gem_handle_create(struct drm_file *file_priv,
spin_lock(&file_priv->table_lock); spin_lock(&file_priv->table_lock);
ret = idr_alloc(&file_priv->object_idr, obj, 1, 0, GFP_NOWAIT); ret = idr_alloc(&file_priv->object_idr, obj, 1, 0, GFP_NOWAIT);
drm_gem_object_reference(obj);
obj->handle_count++;
spin_unlock(&file_priv->table_lock); spin_unlock(&file_priv->table_lock);
idr_preload_end(); idr_preload_end();
if (ret < 0) mutex_unlock(&dev->object_name_lock);
if (ret < 0) {
drm_gem_object_handle_unreference_unlocked(obj);
return ret; return ret;
}
*handlep = ret; *handlep = ret;
drm_gem_object_handle_reference(obj); ret = drm_vma_node_allow(&obj->vma_node, file_priv->filp);
if (ret) {
drm_gem_handle_delete(file_priv, *handlep);
return ret;
}
if (dev->driver->gem_open_object) { if (dev->driver->gem_open_object) {
ret = dev->driver->gem_open_object(obj, file_priv); ret = dev->driver->gem_open_object(obj, file_priv);
...@@ -292,6 +375,21 @@ drm_gem_handle_create(struct drm_file *file_priv, ...@@ -292,6 +375,21 @@ drm_gem_handle_create(struct drm_file *file_priv,
return 0; return 0;
} }
/**
* Create a handle for this object. This adds a handle reference
* to the object, which includes a regular reference count. Callers
* will likely want to dereference the object afterwards.
*/
int
drm_gem_handle_create(struct drm_file *file_priv,
struct drm_gem_object *obj,
u32 *handlep)
{
mutex_lock(&obj->dev->object_name_lock);
return drm_gem_handle_create_tail(file_priv, obj, handlep);
}
EXPORT_SYMBOL(drm_gem_handle_create); EXPORT_SYMBOL(drm_gem_handle_create);
@@ -306,81 +404,155 @@ drm_gem_free_mmap_offset(struct drm_gem_object *obj)
 {
     struct drm_device *dev = obj->dev;
     struct drm_gem_mm *mm = dev->mm_private;
-    struct drm_map_list *list = &obj->map_list;
 
-    drm_ht_remove_item(&mm->offset_hash, &list->hash);
-    drm_mm_put_block(list->file_offset_node);
-    kfree(list->map);
-    list->map = NULL;
+    drm_vma_offset_remove(&mm->vma_manager, &obj->vma_node);
 }
 EXPORT_SYMBOL(drm_gem_free_mmap_offset);
 /**
- * drm_gem_create_mmap_offset - create a fake mmap offset for an object
+ * drm_gem_create_mmap_offset_size - create a fake mmap offset for an object
  * @obj: obj in question
+ * @size: the virtual size
  *
  * GEM memory mapping works by handing back to userspace a fake mmap offset
  * it can use in a subsequent mmap(2) call. The DRM core code then looks
  * up the object based on the offset and sets up the various memory mapping
  * structures.
  *
- * This routine allocates and attaches a fake offset for @obj.
+ * This routine allocates and attaches a fake offset for @obj, in cases where
+ * the virtual size differs from the physical size (ie. obj->size). Otherwise
+ * just use drm_gem_create_mmap_offset().
  */
 int
-drm_gem_create_mmap_offset(struct drm_gem_object *obj)
+drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size)
 {
     struct drm_device *dev = obj->dev;
     struct drm_gem_mm *mm = dev->mm_private;
-    struct drm_map_list *list;
-    struct drm_local_map *map;
-    int ret;
-
-    /* Set the object up for mmap'ing */
-    list = &obj->map_list;
-    list->map = kzalloc(sizeof(struct drm_map_list), GFP_KERNEL);
-    if (!list->map)
-        return -ENOMEM;
-
-    map = list->map;
-    map->type = _DRM_GEM;
-    map->size = obj->size;
-    map->handle = obj;
 
-    /* Get a DRM GEM mmap offset allocated... */
-    list->file_offset_node = drm_mm_search_free(&mm->offset_manager,
-                                                obj->size / PAGE_SIZE, 0, false);
+    return drm_vma_offset_add(&mm->vma_manager, &obj->vma_node,
+                              size / PAGE_SIZE);
+}
+EXPORT_SYMBOL(drm_gem_create_mmap_offset_size);
+
+/**
+ * drm_gem_create_mmap_offset - create a fake mmap offset for an object
+ * @obj: obj in question
+ *
+ * GEM memory mapping works by handing back to userspace a fake mmap offset
+ * it can use in a subsequent mmap(2) call. The DRM core code then looks
+ * up the object based on the offset and sets up the various memory mapping
+ * structures.
+ *
+ * This routine allocates and attaches a fake offset for @obj.
+ */
+int drm_gem_create_mmap_offset(struct drm_gem_object *obj)
+{
+    return drm_gem_create_mmap_offset_size(obj, obj->size);
+}
+EXPORT_SYMBOL(drm_gem_create_mmap_offset);
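
With the offsets now tracked by the vma manager, a driver's ->dumb_map_offset callback reduces to a couple of calls. A kernel-side sketch under assumed "foo" names (the CMA helper further down in this same series does essentially this):

/* Kernel-side sketch of a driver's ->dumb_map_offset callback built on
 * the new helpers; "foo" names are hypothetical. */
static int foo_gem_dumb_map_offset(struct drm_file *file_priv,
                                   struct drm_device *dev,
                                   uint32_t handle, uint64_t *offset)
{
    struct drm_gem_object *obj;
    int ret;

    obj = drm_gem_object_lookup(dev, file_priv, handle);
    if (!obj)
        return -ENOENT;

    ret = drm_gem_create_mmap_offset(obj);
    if (!ret)
        /* The fake offset userspace must pass to mmap(2). */
        *offset = drm_vma_node_offset_addr(&obj->vma_node);

    drm_gem_object_unreference_unlocked(obj);
    return ret;
}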
-    if (!list->file_offset_node) {
-        DRM_ERROR("failed to allocate offset for bo %d\n", obj->name);
-        ret = -ENOSPC;
-        goto out_free_list;
-    }
-
-    list->file_offset_node = drm_mm_get_block(list->file_offset_node,
-                                              obj->size / PAGE_SIZE, 0);
-    if (!list->file_offset_node) {
-        ret = -ENOMEM;
-        goto out_free_list;
-    }
-
-    list->hash.key = list->file_offset_node->start;
-    ret = drm_ht_insert_item(&mm->offset_hash, &list->hash);
-    if (ret) {
-        DRM_ERROR("failed to add to map hash\n");
-        goto out_free_mm;
-    }
-
-    return 0;
-
-out_free_mm:
-    drm_mm_put_block(list->file_offset_node);
-out_free_list:
-    kfree(list->map);
-    list->map = NULL;
-
-    return ret;
-}
-EXPORT_SYMBOL(drm_gem_create_mmap_offset);
+/**
+ * drm_gem_get_pages - helper to allocate backing pages for a GEM object
+ * from shmem
+ * @obj: obj in question
+ * @gfpmask: gfp mask of requested pages
+ */
+struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
+{
+    struct inode *inode;
+    struct address_space *mapping;
+    struct page *p, **pages;
+    int i, npages;
+
+    /* This is the shared memory object that backs the GEM resource */
+    inode = file_inode(obj->filp);
+    mapping = inode->i_mapping;
+
+    /* We already BUG_ON() for non-page-aligned sizes in
+     * drm_gem_object_init(), so we should never hit this unless
+     * driver author is doing something really wrong:
+     */
+    WARN_ON((obj->size & (PAGE_SIZE - 1)) != 0);
+
+    npages = obj->size >> PAGE_SHIFT;
+
+    pages = drm_malloc_ab(npages, sizeof(struct page *));
+    if (pages == NULL)
+        return ERR_PTR(-ENOMEM);
+
+    gfpmask |= mapping_gfp_mask(mapping);
+
+    for (i = 0; i < npages; i++) {
+        p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
+        if (IS_ERR(p))
+            goto fail;
+        pages[i] = p;
+
+        /* There is a hypothetical issue w/ drivers that require
+         * buffer memory in the low 4GB.. if the pages are un-
+         * pinned, and swapped out, they can end up swapped back
+         * in above 4GB. If pages are already in memory, then
+         * shmem_read_mapping_page_gfp will ignore the gfpmask,
+         * even if the already in-memory page disobeys the mask.
+         *
+         * It is only a theoretical issue today, because none of
+         * the devices with this limitation can be populated with
+         * enough memory to trigger the issue. But this BUG_ON()
+         * is here as a reminder in case the problem with
+         * shmem_read_mapping_page_gfp() isn't solved by the time
+         * it does become a real issue.
+         *
+         * See this thread: http://lkml.org/lkml/2011/7/11/238
+         */
+        BUG_ON((gfpmask & __GFP_DMA32) &&
+               (page_to_pfn(p) >= 0x00100000UL));
+    }
+
+    return pages;
+
+fail:
+    while (i--)
+        page_cache_release(pages[i]);
+
+    drm_free_large(pages);
+    return ERR_CAST(p);
+}
+EXPORT_SYMBOL(drm_gem_get_pages);
+
+/**
+ * drm_gem_put_pages - helper to free backing pages for a GEM object
+ * @obj: obj in question
+ * @pages: pages to free
+ * @dirty: if true, pages will be marked as dirty
+ * @accessed: if true, the pages will be marked as accessed
+ */
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+                       bool dirty, bool accessed)
+{
+    int i, npages;
+
+    /* We already BUG_ON() for non-page-aligned sizes in
+     * drm_gem_object_init(), so we should never hit this unless
+     * driver author is doing something really wrong:
+     */
+    WARN_ON((obj->size & (PAGE_SIZE - 1)) != 0);
+
+    npages = obj->size >> PAGE_SHIFT;
+
+    for (i = 0; i < npages; i++) {
+        if (dirty)
+            set_page_dirty(pages[i]);
+
+        if (accessed)
+            mark_page_accessed(pages[i]);
+
+        /* Undo the reference we took when populating the table */
+        page_cache_release(pages[i]);
+    }
+
+    drm_free_large(pages);
+}
+EXPORT_SYMBOL(drm_gem_put_pages);
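
drm_gem_get_pages()/drm_gem_put_pages() factor out the shmem pin/unpin loop that several drivers previously open-coded. A hypothetical driver's pin path might look like the following; struct foo_bo and the call sites are assumptions, not part of this patch:

/* Hypothetical shmem-backed driver object; not from this patch. */
struct foo_bo {
    struct drm_gem_object base;
    struct page **pages;
};

static int foo_bo_pin_pages(struct foo_bo *bo)
{
    struct page **pages = drm_gem_get_pages(&bo->base, GFP_KERNEL);

    if (IS_ERR(pages))
        return PTR_ERR(pages);
    bo->pages = pages;
    return 0;
}

static void foo_bo_unpin_pages(struct foo_bo *bo)
{
    /* Mark pages dirty and accessed so shmem writes them back. */
    drm_gem_put_pages(&bo->base, bo->pages, true, true);
    bo->pages = NULL;
}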
 /** Returns a reference to the object named by the handle. */
 struct drm_gem_object *
@@ -445,8 +617,14 @@ drm_gem_flink_ioctl(struct drm_device *dev, void *data,
     if (obj == NULL)
         return -ENOENT;
 
+    mutex_lock(&dev->object_name_lock);
     idr_preload(GFP_KERNEL);
-    spin_lock(&dev->object_name_lock);
+    /* prevent races with concurrent gem_close. */
+    if (obj->handle_count == 0) {
+        ret = -ENOENT;
+        goto err;
+    }
+
     if (!obj->name) {
         ret = idr_alloc(&dev->object_name_idr, obj, 1, 0, GFP_NOWAIT);
         if (ret < 0)
@@ -462,8 +640,8 @@ drm_gem_flink_ioctl(struct drm_device *dev, void *data,
     ret = 0;
 
 err:
-    spin_unlock(&dev->object_name_lock);
     idr_preload_end();
+    mutex_unlock(&dev->object_name_lock);
     drm_gem_object_unreference_unlocked(obj);
     return ret;
 }
@@ -486,15 +664,17 @@ drm_gem_open_ioctl(struct drm_device *dev, void *data,
     if (!(dev->driver->driver_features & DRIVER_GEM))
         return -ENODEV;
 
-    spin_lock(&dev->object_name_lock);
+    mutex_lock(&dev->object_name_lock);
     obj = idr_find(&dev->object_name_idr, (int) args->name);
-    if (obj)
+    if (obj) {
         drm_gem_object_reference(obj);
-    spin_unlock(&dev->object_name_lock);
-    if (!obj)
+    } else {
+        mutex_unlock(&dev->object_name_lock);
         return -ENOENT;
+    }
 
-    ret = drm_gem_handle_create(file_priv, obj, &handle);
+    /* drm_gem_handle_create_tail unlocks dev->object_name_lock. */
+    ret = drm_gem_handle_create_tail(file_priv, obj, &handle);
     drm_gem_object_unreference_unlocked(obj);
     if (ret)
         return ret;
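
The flink/open pair is the legacy global-name sharing path that the new object_name_lock now serializes against a racing gem close. From userspace it looks roughly like this (a sketch, error handling reduced to a bare minimum; `handle` is assumed to name a GEM object on fd1):

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/drm.h>

static int share_by_name(int fd1, int fd2, uint32_t handle, uint32_t *out)
{
    struct drm_gem_flink flink = { .handle = handle };

    if (ioctl(fd1, DRM_IOCTL_GEM_FLINK, &flink))   /* publish global name */
        return -1;

    struct drm_gem_open open_req = { .name = flink.name };
    if (ioctl(fd2, DRM_IOCTL_GEM_OPEN, &open_req)) /* look it up on fd2 */
        return -1;

    *out = open_req.handle;
    return 0;
}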
@@ -527,7 +707,9 @@ drm_gem_object_release_handle(int id, void *ptr, void *data)
     struct drm_gem_object *obj = ptr;
     struct drm_device *dev = obj->dev;
 
-    drm_gem_remove_prime_handles(obj, file_priv);
+    if (drm_core_check_feature(dev, DRIVER_PRIME))
+        drm_gem_remove_prime_handles(obj, file_priv);
+    drm_vma_node_revoke(&obj->vma_node, file_priv->filp);
 
     if (dev->driver->gem_close_object)
         dev->driver->gem_close_object(obj, file_priv);
@@ -553,6 +735,8 @@ drm_gem_release(struct drm_device *dev, struct drm_file *file_private)
 void
 drm_gem_object_release(struct drm_gem_object *obj)
 {
+    WARN_ON(obj->dma_buf);
+
     if (obj->filp)
         fput(obj->filp);
 }
@@ -577,41 +761,6 @@ drm_gem_object_free(struct kref *kref)
 }
 EXPORT_SYMBOL(drm_gem_object_free);
-static void drm_gem_object_ref_bug(struct kref *list_kref)
-{
-    BUG();
-}
-
-/**
- * Called after the last handle to the object has been closed
- *
- * Removes any name for the object. Note that this must be
- * called before drm_gem_object_free or we'll be touching
- * freed memory
- */
-void drm_gem_object_handle_free(struct drm_gem_object *obj)
-{
-    struct drm_device *dev = obj->dev;
-
-    /* Remove any name for this object */
-    spin_lock(&dev->object_name_lock);
-    if (obj->name) {
-        idr_remove(&dev->object_name_idr, obj->name);
-        obj->name = 0;
-        spin_unlock(&dev->object_name_lock);
-        /*
-         * The object name held a reference to this object, drop
-         * that now.
-         *
-         * This cannot be the last reference, since the handle holds one too.
-         */
-        kref_put(&obj->refcount, drm_gem_object_ref_bug);
-    } else
-        spin_unlock(&dev->object_name_lock);
-}
-EXPORT_SYMBOL(drm_gem_object_handle_free);
-
 void drm_gem_vm_open(struct vm_area_struct *vma)
 {
     struct drm_gem_object *obj = vma->vm_private_data;
@@ -653,6 +802,10 @@ EXPORT_SYMBOL(drm_gem_vm_close);
  * the GEM object is not looked up based on its fake offset. To implement the
  * DRM mmap operation, drivers should use the drm_gem_mmap() function.
  *
+ * drm_gem_mmap_obj() assumes the user is granted access to the buffer while
+ * drm_gem_mmap() prevents unprivileged users from mapping random objects. So
+ * callers must verify access restrictions before calling this helper.
+ *
  * NOTE: This function has to be protected with dev->struct_mutex
  *
  * Return 0 on success or -EINVAL if the object size is smaller than the VMA
@@ -701,14 +854,17 @@ EXPORT_SYMBOL(drm_gem_mmap_obj);
  * Look up the GEM object based on the offset passed in (vma->vm_pgoff will
  * contain the fake offset we created when the GTT map ioctl was called on
  * the object) and map it with a call to drm_gem_mmap_obj().
+ *
+ * If the caller is not granted access to the buffer object, the mmap will fail
+ * with EACCES. Please see the vma manager for more information.
  */
 int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 {
     struct drm_file *priv = filp->private_data;
     struct drm_device *dev = priv->minor->dev;
     struct drm_gem_mm *mm = dev->mm_private;
-    struct drm_local_map *map = NULL;
-    struct drm_hash_item *hash;
+    struct drm_gem_object *obj;
+    struct drm_vma_offset_node *node;
     int ret = 0;
 
     if (drm_device_is_unplugged(dev))
@@ -716,21 +872,19 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 
     mutex_lock(&dev->struct_mutex);
 
-    if (drm_ht_find_item(&mm->offset_hash, vma->vm_pgoff, &hash)) {
+    node = drm_vma_offset_exact_lookup(&mm->vma_manager, vma->vm_pgoff,
+                                       vma_pages(vma));
+    if (!node) {
         mutex_unlock(&dev->struct_mutex);
         return drm_mmap(filp, vma);
+    } else if (!drm_vma_node_is_allowed(node, filp)) {
+        mutex_unlock(&dev->struct_mutex);
+        return -EACCES;
     }
 
-    map = drm_hash_entry(hash, struct drm_map_list, hash)->map;
-    if (!map ||
-        ((map->flags & _DRM_RESTRICTED) && !capable(CAP_SYS_ADMIN))) {
-        ret = -EPERM;
-        goto out_unlock;
-    }
-
-    ret = drm_gem_mmap_obj(map->handle, map->size, vma);
+    obj = container_of(node, struct drm_gem_object, vma_node);
+    ret = drm_gem_mmap_obj(obj, drm_vma_node_size(node) << PAGE_SHIFT, vma);
 
-out_unlock:
     mutex_unlock(&dev->struct_mutex);
 
     return ret;
...
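
From userspace the fake-offset scheme is invisible: the offset the driver's map ioctl returns is simply handed to mmap(2) on the DRM fd, and drm_gem_mmap() resolves it through the vma manager, now with the per-filp access check above. A sketch for the dumb-buffer case; error handling is omitted and the caller is expected to check for MAP_FAILED:

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/drm.h>
#include <drm/drm_mode.h>

static void *map_dumb(int fd, uint32_t handle, size_t size)
{
    struct drm_mode_map_dumb mreq = { .handle = handle };

    /* Ask the driver for the fake mmap offset of this object... */
    ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &mreq);

    /* ...then hand that offset straight back to mmap(2) on the DRM fd. */
    return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                fd, mreq.offset);
}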
@@ -27,11 +27,7 @@
 #include <drm/drmP.h>
 #include <drm/drm.h>
 #include <drm/drm_gem_cma_helper.h>
+#include <drm/drm_vma_manager.h>
 
-static unsigned int get_gem_mmap_offset(struct drm_gem_object *obj)
-{
-    return (unsigned int)obj->map_list.hash.key << PAGE_SHIFT;
-}
 
 /*
  * __drm_gem_cma_create - Create a GEM CMA object without allocating memory
@@ -172,8 +168,7 @@ void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
 {
     struct drm_gem_cma_object *cma_obj;
 
-    if (gem_obj->map_list.map)
-        drm_gem_free_mmap_offset(gem_obj);
+    drm_gem_free_mmap_offset(gem_obj);
 
     cma_obj = to_drm_gem_cma_obj(gem_obj);
@@ -237,7 +232,7 @@ int drm_gem_cma_dumb_map_offset(struct drm_file *file_priv,
         return -EINVAL;
     }
 
-    *offset = get_gem_mmap_offset(gem_obj);
+    *offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
 
     drm_gem_object_unreference(gem_obj);
@@ -286,27 +281,16 @@ int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
 
-/*
- * drm_gem_cma_dumb_destroy - (struct drm_driver)->dumb_destroy callback function
- */
-int drm_gem_cma_dumb_destroy(struct drm_file *file_priv,
-                             struct drm_device *drm, unsigned int handle)
-{
-    return drm_gem_handle_delete(file_priv, handle);
-}
-EXPORT_SYMBOL_GPL(drm_gem_cma_dumb_destroy);
-
 #ifdef CONFIG_DEBUG_FS
 void drm_gem_cma_describe(struct drm_gem_cma_object *cma_obj, struct seq_file *m)
 {
     struct drm_gem_object *obj = &cma_obj->base;
     struct drm_device *dev = obj->dev;
-    uint64_t off = 0;
+    uint64_t off;
 
     WARN_ON(!mutex_is_locked(&dev->struct_mutex));
 
-    if (obj->map_list.map)
-        off = (uint64_t)obj->map_list.hash.key;
+    off = drm_vma_node_start(&obj->vma_node);
 
     seq_printf(m, "%2d (%2d) %08llx %08Zx %p %d",
                obj->name, obj->refcount.refcount.counter,
...
@@ -207,7 +207,7 @@ static int drm_gem_one_name_info(int id, void *ptr, void *data)
 
     seq_printf(m, "%6d %8zd %7d %8d\n",
                obj->name, obj->size,
-               atomic_read(&obj->handle_count),
+               obj->handle_count,
                atomic_read(&obj->refcount.refcount));
     return 0;
 }
@@ -218,7 +218,11 @@ int drm_gem_name_info(struct seq_file *m, void *data)
     struct drm_device *dev = node->minor->dev;
 
     seq_printf(m, " name size handles refcount\n");
+
+    mutex_lock(&dev->object_name_lock);
     idr_for_each(&dev->object_name_idr, drm_gem_one_name_info, m);
+    mutex_unlock(&dev->object_name_lock);
+
     return 0;
 }
...
@@ -217,29 +217,30 @@ int drm_getclient(struct drm_device *dev, void *data,
                   struct drm_file *file_priv)
 {
     struct drm_client *client = data;
-    struct drm_file *pt;
-    int idx;
-    int i;
 
-    idx = client->idx;
-    i = 0;
-
-    mutex_lock(&dev->struct_mutex);
-    list_for_each_entry(pt, &dev->filelist, lhead) {
-        if (i++ >= idx) {
-            client->auth = pt->authenticated;
-            client->pid = pid_vnr(pt->pid);
-            client->uid = from_kuid_munged(current_user_ns(), pt->uid);
-            client->magic = pt->magic;
-            client->iocs = pt->ioctl_count;
-            mutex_unlock(&dev->struct_mutex);
-
-            return 0;
-        }
+    /*
+     * Hollowed-out getclient ioctl to keep some dead old drm tests/tools
+     * not breaking completely. Userspace tools stop enumerating once they
+     * get -EINVAL, hence this is the return value we need to hand back for
+     * no clients tracked.
+     *
+     * Unfortunately some clients (*cough* libva *cough*) use this in a fun
+     * attempt to figure out whether they're authenticated or not. Since
+     * that's the only thing they care about, give it to them directly
+     * instead of walking one giant list.
+     */
+    if (client->idx == 0) {
+        client->auth = file_priv->authenticated;
+        client->pid = pid_vnr(file_priv->pid);
+        client->uid = from_kuid_munged(current_user_ns(),
+                                       file_priv->uid);
+        client->magic = 0;
+        client->iocs = 0;
+
+        return 0;
+    } else {
+        return -EINVAL;
     }
-    mutex_unlock(&dev->struct_mutex);
-
-    return -EINVAL;
 }
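
The authentication probe the comment above refers to looks roughly like this from userspace (a sketch, error handling omitted):

#include <string.h>
#include <sys/ioctl.h>
#include <drm/drm.h>

static int is_authenticated(int fd)
{
    struct drm_client client;

    memset(&client, 0, sizeof(client));
    client.idx = 0; /* only slot 0 is still served after this change */
    if (ioctl(fd, DRM_IOCTL_GET_CLIENT, &client))
        return 0;
    return client.auth;
}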
 /**
@@ -256,21 +257,10 @@ int drm_getstats(struct drm_device *dev, void *data,
                  struct drm_file *file_priv)
 {
     struct drm_stats *stats = data;
-    int i;
 
+    /* Clear stats to prevent userspace from eating its stack garbage. */
     memset(stats, 0, sizeof(*stats));
 
-    for (i = 0; i < dev->counters; i++) {
-        if (dev->types[i] == _DRM_STAT_LOCK)
-            stats->data[i].value =
-                (file_priv->master->lock.hw_lock ? file_priv->master->lock.hw_lock->lock : 0);
-        else
-            stats->data[i].value = atomic_read(&dev->counts[i]);
-        stats->data[i].type = dev->types[i];
-    }
-
-    stats->count = dev->counters;
-
     return 0;
 }
@@ -303,6 +293,9 @@ int drm_getcap(struct drm_device *dev, void *data, struct drm_file *file_priv)
     case DRM_CAP_TIMESTAMP_MONOTONIC:
         req->value = drm_timestamp_monotonic;
         break;
+    case DRM_CAP_ASYNC_PAGE_FLIP:
+        req->value = dev->mode_config.async_page_flip;
+        break;
     default:
         return -EINVAL;
     }
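
Userspace can probe the new capability before attempting an async flip; the DRM_CAP_ASYNC_PAGE_FLIP define lands with this series, and older kernels simply return -EINVAL for unknown caps. A sketch:

#include <sys/ioctl.h>
#include <drm/drm.h>

static int supports_async_flip(int fd)
{
    struct drm_get_cap cap = { .capability = DRM_CAP_ASYNC_PAGE_FLIP };

    if (ioctl(fd, DRM_IOCTL_GET_CAP, &cap))
        return 0; /* old kernel or unsupported driver */
    return cap.value == 1;
}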
@@ -352,9 +345,6 @@ int drm_setversion(struct drm_device *dev, void *data, struct drm_file *file_pri
             retcode = -EINVAL;
             goto done;
         }
-
-        if (dev->driver->set_version)
-            dev->driver->set_version(dev, sv);
     }
 
 done:
...
@@ -86,7 +86,6 @@ void drm_free_agp(DRM_AGP_MEM * handle, int pages)
 {
     agp_free_memory(handle);
 }
-EXPORT_SYMBOL(drm_free_agp);
 
 /** Wrapper around agp_bind_memory() */
 int drm_bind_agp(DRM_AGP_MEM * handle, unsigned int start)
@@ -99,7 +98,6 @@ int drm_unbind_agp(DRM_AGP_MEM * handle)
 {
     return agp_unbind_memory(handle);
 }
-EXPORT_SYMBOL(drm_unbind_agp);
 
 #else /* __OS_HAS_AGP */
 static inline void *agp_remap(unsigned long offset, unsigned long size,
...
@@ -49,58 +49,18 @@
 
 #define MM_UNUSED_TARGET 4
 
-static struct drm_mm_node *drm_mm_kmalloc(struct drm_mm *mm, int atomic)
-{
-    struct drm_mm_node *child;
-
-    if (atomic)
-        child = kzalloc(sizeof(*child), GFP_ATOMIC);
-    else
-        child = kzalloc(sizeof(*child), GFP_KERNEL);
-
-    if (unlikely(child == NULL)) {
-        spin_lock(&mm->unused_lock);
-        if (list_empty(&mm->unused_nodes))
-            child = NULL;
-        else {
-            child =
-                list_entry(mm->unused_nodes.next,
-                           struct drm_mm_node, node_list);
-            list_del(&child->node_list);
-            --mm->num_unused;
-        }
-        spin_unlock(&mm->unused_lock);
-    }
-    return child;
-}
-
-/* drm_mm_pre_get() - pre allocate drm_mm_node structure
- * drm_mm: memory manager struct we are pre-allocating for
- *
- * Returns 0 on success or -ENOMEM if allocation fails.
- */
-int drm_mm_pre_get(struct drm_mm *mm)
-{
-    struct drm_mm_node *node;
-
-    spin_lock(&mm->unused_lock);
-    while (mm->num_unused < MM_UNUSED_TARGET) {
-        spin_unlock(&mm->unused_lock);
-        node = kzalloc(sizeof(*node), GFP_KERNEL);
-        spin_lock(&mm->unused_lock);
-
-        if (unlikely(node == NULL)) {
-            int ret = (mm->num_unused < 2) ? -ENOMEM : 0;
-            spin_unlock(&mm->unused_lock);
-            return ret;
-        }
-        ++mm->num_unused;
-        list_add_tail(&node->node_list, &mm->unused_nodes);
-    }
-    spin_unlock(&mm->unused_lock);
-    return 0;
-}
-EXPORT_SYMBOL(drm_mm_pre_get);
+static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
+                                                      unsigned long size,
+                                                      unsigned alignment,
+                                                      unsigned long color,
+                                                      enum drm_mm_search_flags flags);
+static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
+                                                               unsigned long size,
+                                                               unsigned alignment,
+                                                               unsigned long color,
+                                                               unsigned long start,
+                                                               unsigned long end,
+                                                               enum drm_mm_search_flags flags);
 static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
                                  struct drm_mm_node *node,
@@ -147,33 +107,27 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
     }
 }
 
-struct drm_mm_node *drm_mm_create_block(struct drm_mm *mm,
-                                        unsigned long start,
-                                        unsigned long size,
-                                        bool atomic)
+int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node)
 {
-    struct drm_mm_node *hole, *node;
-    unsigned long end = start + size;
+    struct drm_mm_node *hole;
+    unsigned long end = node->start + node->size;
     unsigned long hole_start;
     unsigned long hole_end;
 
+    BUG_ON(node == NULL);
+
+    /* Find the relevant hole to add our node to */
     drm_mm_for_each_hole(hole, mm, hole_start, hole_end) {
-        if (hole_start > start || hole_end < end)
+        if (hole_start > node->start || hole_end < end)
             continue;
 
-        node = drm_mm_kmalloc(mm, atomic);
-        if (unlikely(node == NULL))
-            return NULL;
-
-        node->start = start;
-        node->size = size;
         node->mm = mm;
         node->allocated = 1;
 
         INIT_LIST_HEAD(&node->hole_stack);
         list_add(&node->node_list, &hole->node_list);
 
-        if (start == hole_start) {
+        if (node->start == hole_start) {
             hole->hole_follows = 0;
             list_del_init(&hole->hole_stack);
         }
@@ -184,31 +138,14 @@ struct drm_mm_node *drm_mm_create_block(struct drm_mm *mm,
             node->hole_follows = 1;
         }
 
-        return node;
+        return 0;
     }
 
-    WARN(1, "no hole found for block 0x%lx + 0x%lx\n", start, size);
-    return NULL;
+    WARN(1, "no hole found for node 0x%lx + 0x%lx\n",
+         node->start, node->size);
+    return -ENOSPC;
 }
-EXPORT_SYMBOL(drm_mm_create_block);
-
-struct drm_mm_node *drm_mm_get_block_generic(struct drm_mm_node *hole_node,
-                                             unsigned long size,
-                                             unsigned alignment,
-                                             unsigned long color,
-                                             int atomic)
-{
-    struct drm_mm_node *node;
-
-    node = drm_mm_kmalloc(hole_node->mm, atomic);
-    if (unlikely(node == NULL))
-        return NULL;
-
-    drm_mm_insert_helper(hole_node, node, size, alignment, color);
-
-    return node;
-}
-EXPORT_SYMBOL(drm_mm_get_block_generic);
+EXPORT_SYMBOL(drm_mm_reserve_node);
 /**
  * Search for free space and insert a preallocated memory node. Returns
@@ -217,12 +154,13 @@ EXPORT_SYMBOL(drm_mm_get_block_generic);
  */
 int drm_mm_insert_node_generic(struct drm_mm *mm, struct drm_mm_node *node,
                                unsigned long size, unsigned alignment,
-                               unsigned long color)
+                               unsigned long color,
+                               enum drm_mm_search_flags flags)
 {
     struct drm_mm_node *hole_node;
 
     hole_node = drm_mm_search_free_generic(mm, size, alignment,
-                                           color, 0);
+                                           color, flags);
     if (!hole_node)
         return -ENOSPC;
 
@@ -231,13 +169,6 @@ int drm_mm_insert_node_generic(struct drm_mm *mm, struct drm_mm_node *node,
 }
 EXPORT_SYMBOL(drm_mm_insert_node_generic);
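
With the node storage now always embedded and preallocated by the caller, and the search strategy selected via the new flags, driver usage looks roughly like this kernel-side sketch (the "foo" names and sizes are illustrative assumptions):

static int foo_alloc_vram(struct drm_mm *mm, struct drm_mm_node *node,
                          unsigned long pages)
{
    /* node is zero-initialized and embedded in the driver's bo struct */
    return drm_mm_insert_node_generic(mm, node, pages, 0, 0,
                                      DRM_MM_SEARCH_DEFAULT);
}

static int foo_reserve_firmware_fb(struct drm_mm *mm,
                                   struct drm_mm_node *node,
                                   unsigned long start, unsigned long pages)
{
    /* Pre-fill start/size to carve out a fixed range (e.g. the
     * firmware framebuffer) instead of searching for a hole. */
    node->start = start;
    node->size = pages;
    return drm_mm_reserve_node(mm, node);
}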
-int drm_mm_insert_node(struct drm_mm *mm, struct drm_mm_node *node,
-                       unsigned long size, unsigned alignment)
-{
-    return drm_mm_insert_node_generic(mm, node, size, alignment, 0);
-}
-EXPORT_SYMBOL(drm_mm_insert_node);
-
 static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
                                        struct drm_mm_node *node,
                                        unsigned long size, unsigned alignment,
@@ -290,27 +221,6 @@ static void drm_mm_insert_helper_range(struct drm_mm_node *hole_node,
     }
 }
 
-struct drm_mm_node *drm_mm_get_block_range_generic(struct drm_mm_node *hole_node,
-                                                   unsigned long size,
-                                                   unsigned alignment,
-                                                   unsigned long color,
-                                                   unsigned long start,
-                                                   unsigned long end,
-                                                   int atomic)
-{
-    struct drm_mm_node *node;
-
-    node = drm_mm_kmalloc(hole_node->mm, atomic);
-    if (unlikely(node == NULL))
-        return NULL;
-
-    drm_mm_insert_helper_range(hole_node, node, size, alignment, color,
-                               start, end);
-
-    return node;
-}
-EXPORT_SYMBOL(drm_mm_get_block_range_generic);
-
 /**
  * Search for free space and insert a preallocated memory node. Returns
  * -ENOSPC if no suitable free area is available. This is for range
@@ -318,13 +228,14 @@ EXPORT_SYMBOL(drm_mm_get_block_range_generic);
  */
 int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, struct drm_mm_node *node,
                                         unsigned long size, unsigned alignment, unsigned long color,
-                                        unsigned long start, unsigned long end)
+                                        unsigned long start, unsigned long end,
+                                        enum drm_mm_search_flags flags)
 {
     struct drm_mm_node *hole_node;
 
     hole_node = drm_mm_search_free_in_range_generic(mm,
                                                     size, alignment, color,
-                                                    start, end, 0);
+                                                    start, end, flags);
     if (!hole_node)
         return -ENOSPC;
 
@@ -335,14 +246,6 @@ int drm_mm_insert_node_in_range_generic(struct drm_mm *mm, struct drm_mm_node *n
 }
 EXPORT_SYMBOL(drm_mm_insert_node_in_range_generic);
 
-int drm_mm_insert_node_in_range(struct drm_mm *mm, struct drm_mm_node *node,
-                                unsigned long size, unsigned alignment,
-                                unsigned long start, unsigned long end)
-{
-    return drm_mm_insert_node_in_range_generic(mm, node, size, alignment, 0, start, end);
-}
-EXPORT_SYMBOL(drm_mm_insert_node_in_range);
-
 /**
  * Remove a memory node from the allocator.
 */
@@ -351,6 +254,9 @@ void drm_mm_remove_node(struct drm_mm_node *node)
     struct drm_mm *mm = node->mm;
     struct drm_mm_node *prev_node;
 
+    if (WARN_ON(!node->allocated))
+        return;
+
     BUG_ON(node->scanned_block || node->scanned_prev_free
            || node->scanned_next_free);
@@ -377,28 +283,6 @@ void drm_mm_remove_node(struct drm_mm_node *node)
 }
 EXPORT_SYMBOL(drm_mm_remove_node);
 
-/*
- * Remove a memory node from the allocator and free the allocated struct
- * drm_mm_node. Only to be used on a struct drm_mm_node obtained by one of the
- * drm_mm_get_block functions.
- */
-void drm_mm_put_block(struct drm_mm_node *node)
-{
-    struct drm_mm *mm = node->mm;
-
-    drm_mm_remove_node(node);
-
-    spin_lock(&mm->unused_lock);
-    if (mm->num_unused < MM_UNUSED_TARGET) {
-        list_add(&node->node_list, &mm->unused_nodes);
-        ++mm->num_unused;
-    } else
-        kfree(node);
-    spin_unlock(&mm->unused_lock);
-}
-EXPORT_SYMBOL(drm_mm_put_block);
-
 static int check_free_hole(unsigned long start, unsigned long end,
                            unsigned long size, unsigned alignment)
 {
@@ -414,11 +298,11 @@ static int check_free_hole(unsigned long start, unsigned long end,
     return end >= start + size;
 }
 
-struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
-                                               unsigned long size,
-                                               unsigned alignment,
-                                               unsigned long color,
-                                               bool best_match)
+static struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
+                                                      unsigned long size,
+                                                      unsigned alignment,
+                                                      unsigned long color,
+                                                      enum drm_mm_search_flags flags)
 {
     struct drm_mm_node *entry;
     struct drm_mm_node *best;
@@ -441,7 +325,7 @@ struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
         if (!check_free_hole(adj_start, adj_end, size, alignment))
             continue;
 
-        if (!best_match)
+        if (!(flags & DRM_MM_SEARCH_BEST))
             return entry;
 
         if (entry->size < best_size) {
@@ -452,15 +336,14 @@ struct drm_mm_node *drm_mm_search_free_generic(const struct drm_mm *mm,
     return best;
 }
-EXPORT_SYMBOL(drm_mm_search_free_generic);
 
-struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
-                                                        unsigned long size,
-                                                        unsigned alignment,
-                                                        unsigned long color,
-                                                        unsigned long start,
-                                                        unsigned long end,
-                                                        bool best_match)
+static struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
+                                                               unsigned long size,
+                                                               unsigned alignment,
+                                                               unsigned long color,
+                                                               unsigned long start,
+                                                               unsigned long end,
+                                                               enum drm_mm_search_flags flags)
 {
     struct drm_mm_node *entry;
     struct drm_mm_node *best;
@@ -488,7 +371,7 @@ struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
         if (!check_free_hole(adj_start, adj_end, size, alignment))
             continue;
 
-        if (!best_match)
+        if (!(flags & DRM_MM_SEARCH_BEST))
             return entry;
 
         if (entry->size < best_size) {
@@ -499,7 +382,6 @@ struct drm_mm_node *drm_mm_search_free_in_range_generic(const struct drm_mm *mm,
     return best;
 }
-EXPORT_SYMBOL(drm_mm_search_free_in_range_generic);
 /**
  * Moves an allocation. To be used with embedded struct drm_mm_node.
@@ -634,8 +516,8 @@ EXPORT_SYMBOL(drm_mm_scan_add_block);
  * corrupted.
  *
  * When the scan list is empty, the selected memory nodes can be freed. An
- * immediately following drm_mm_search_free with best_match = 0 will then return
- * the just freed block (because its at the top of the free_stack list).
+ * immediately following drm_mm_search_free with !DRM_MM_SEARCH_BEST will then
+ * return the just freed block (because it's at the top of the free_stack list).
  *
  * Returns one if this block should be evicted, zero otherwise. Will always
  * return zero when no hole has been found.
@@ -672,10 +554,7 @@ EXPORT_SYMBOL(drm_mm_clean);
 void drm_mm_init(struct drm_mm * mm, unsigned long start, unsigned long size)
 {
     INIT_LIST_HEAD(&mm->hole_stack);
-    INIT_LIST_HEAD(&mm->unused_nodes);
-    mm->num_unused = 0;
     mm->scanned_blocks = 0;
-    spin_lock_init(&mm->unused_lock);
 
     /* Clever trick to avoid a special case in the free hole tracking. */
     INIT_LIST_HEAD(&mm->head_node.node_list);
@@ -695,22 +574,8 @@ EXPORT_SYMBOL(drm_mm_init);
 void drm_mm_takedown(struct drm_mm * mm)
 {
-    struct drm_mm_node *entry, *next;
-
-    if (WARN(!list_empty(&mm->head_node.node_list),
-             "Memory manager not clean. Delaying takedown\n")) {
-        return;
-    }
-
-    spin_lock(&mm->unused_lock);
-    list_for_each_entry_safe(entry, next, &mm->unused_nodes, node_list) {
-        list_del(&entry->node_list);
-        kfree(entry);
-        --mm->num_unused;
-    }
-    spin_unlock(&mm->unused_lock);
-
-    BUG_ON(mm->num_unused != 0);
+    WARN(!list_empty(&mm->head_node.node_list),
+         "Memory manager not clean during takedown.\n");
 }
 EXPORT_SYMBOL(drm_mm_takedown);
...
@@ -595,27 +595,6 @@ void drm_mode_set_name(struct drm_display_mode *mode)
 }
 EXPORT_SYMBOL(drm_mode_set_name);
 
-/**
- * drm_mode_list_concat - move modes from one list to another
- * @head: source list
- * @new: dst list
- *
- * LOCKING:
- * Caller must ensure both lists are locked.
- *
- * Move all the modes from @head to @new.
- */
-void drm_mode_list_concat(struct list_head *head, struct list_head *new)
-{
-    struct list_head *entry, *tmp;
-
-    list_for_each_safe(entry, tmp, head) {
-        list_move_tail(entry, new);
-    }
-}
-EXPORT_SYMBOL(drm_mode_list_concat);
-
 /**
  * drm_mode_width - get the width of a mode
  * @mode: mode
@@ -922,43 +901,6 @@ void drm_mode_validate_size(struct drm_device *dev,
 }
 EXPORT_SYMBOL(drm_mode_validate_size);
 
-/**
- * drm_mode_validate_clocks - validate modes against clock limits
- * @dev: DRM device
- * @mode_list: list of modes to check
- * @min: minimum clock rate array
- * @max: maximum clock rate array
- * @n_ranges: number of clock ranges (size of arrays)
- *
- * LOCKING:
- * Caller must hold a lock protecting @mode_list.
- *
- * Some code may need to check a mode list against the clock limits of the
- * device in question. This function walks the mode list, testing to make
- * sure each mode falls within a given range (defined by @min and @max
- * arrays) and sets @mode->status as needed.
- */
-void drm_mode_validate_clocks(struct drm_device *dev,
-                              struct list_head *mode_list,
-                              int *min, int *max, int n_ranges)
-{
-    struct drm_display_mode *mode;
-    int i;
-
-    list_for_each_entry(mode, mode_list, head) {
-        bool good = false;
-        for (i = 0; i < n_ranges; i++) {
-            if (mode->clock >= min[i] && mode->clock <= max[i]) {
-                good = true;
-                break;
-            }
-        }
-        if (!good)
-            mode->status = MODE_CLOCK_RANGE;
-    }
-}
-EXPORT_SYMBOL(drm_mode_validate_clocks);
-
 /**
  * drm_mode_prune_invalid - remove invalid modes from mode list
  * @dev: DRM device
...
@@ -52,10 +52,8 @@
 drm_dma_handle_t *drm_pci_alloc(struct drm_device * dev, size_t size, size_t align)
 {
     drm_dma_handle_t *dmah;
-#if 1
     unsigned long addr;
     size_t sz;
-#endif
 
     /* pci_alloc_consistent only guarantees alignment to the smallest
      * PAGE_SIZE order which is greater than or equal to the requested size.
@@ -97,10 +95,8 @@ EXPORT_SYMBOL(drm_pci_alloc);
  */
 void __drm_pci_free(struct drm_device * dev, drm_dma_handle_t * dmah)
 {
-#if 1
     unsigned long addr;
     size_t sz;
-#endif
 
     if (dmah->vaddr) {
         /* XXX - Is virt_to_page() legal for consistent mem? */
@@ -276,17 +272,26 @@ static int drm_pci_agp_init(struct drm_device *dev)
             DRM_ERROR("Cannot initialize the agpgart module.\n");
             return -EINVAL;
         }
-        if (drm_core_has_MTRR(dev)) {
-            if (dev->agp)
-                dev->agp->agp_mtrr = arch_phys_wc_add(
-                    dev->agp->agp_info.aper_base,
-                    dev->agp->agp_info.aper_size *
-                    1024 * 1024);
+        if (dev->agp) {
+            dev->agp->agp_mtrr = arch_phys_wc_add(
+                dev->agp->agp_info.aper_base,
+                dev->agp->agp_info.aper_size *
+                1024 * 1024);
         }
     }
     return 0;
 }
+static void drm_pci_agp_destroy(struct drm_device *dev)
+{
+    if (drm_core_has_AGP(dev) && dev->agp) {
+        arch_phys_wc_del(dev->agp->agp_mtrr);
+        drm_agp_clear(dev);
+        drm_agp_destroy(dev->agp);
+        dev->agp = NULL;
+    }
+}
+
 static struct drm_bus drm_pci_bus = {
     .bus_type = DRIVER_BUS_PCI,
     .get_irq = drm_pci_get_irq,
@@ -295,6 +300,7 @@ static struct drm_bus drm_pci_bus = {
     .set_unique = drm_pci_set_unique,
     .irq_by_busid = drm_pci_irq_by_busid,
     .agp_init = drm_pci_agp_init,
+    .agp_destroy = drm_pci_agp_destroy,
 };
 
 /**
@@ -348,6 +354,12 @@ int drm_get_pci_dev(struct pci_dev *pdev, const struct pci_device_id *ent,
         goto err_g2;
     }
 
+    if (drm_core_check_feature(dev, DRIVER_RENDER) && drm_rnodes) {
+        ret = drm_get_minor(dev, &dev->render, DRM_MINOR_RENDER);
+        if (ret)
+            goto err_g21;
+    }
+
     if ((ret = drm_get_minor(dev, &dev->primary, DRM_MINOR_LEGACY)))
         goto err_g3;
 
@@ -377,6 +389,9 @@ int drm_get_pci_dev(struct pci_dev *pdev, const struct pci_device_id *ent,
 err_g4:
     drm_put_minor(&dev->primary);
 err_g3:
+    if (dev->render)
+        drm_put_minor(&dev->render);
+err_g21:
     if (drm_core_check_feature(dev, DRIVER_MODESET))
         drm_put_minor(&dev->control);
 err_g2:
...
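
Once a DRIVER_RENDER driver registers its extra minor and the runtime switch is flipped (drm.rnodes=1, since the render-node uapi isn't declared stable yet), an unprivileged node shows up next to the card node. A sketch; the renderD128 path follows the render-minor naming convention for the first such node:

#include <fcntl.h>
#include <unistd.h>

static int open_render_node(void)
{
    /* No DRM master or flink auth dance is needed on render nodes. */
    return open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
}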
@@ -28,7 +28,7 @@
 #include <linux/export.h>
 #include <drm/drmP.h>
 
-/**
+/*
  * Register.
  *
  * \param platdev - Platform device structure
@@ -39,8 +39,8 @@
  *
 * Try and register, if we fail to register, backout previous work.
 */
-int drm_get_platform_dev(struct platform_device *platdev,
-                         struct drm_driver *driver)
+static int drm_get_platform_dev(struct platform_device *platdev,
+                                struct drm_driver *driver)
 {
     struct drm_device *dev;
     int ret;
@@ -69,6 +69,12 @@ int drm_get_platform_dev(struct platform_device *platdev,
         goto err_g1;
     }
 
+    if (drm_core_check_feature(dev, DRIVER_RENDER) && drm_rnodes) {
+        ret = drm_get_minor(dev, &dev->render, DRM_MINOR_RENDER);
+        if (ret)
+            goto err_g11;
+    }
+
     ret = drm_get_minor(dev, &dev->primary, DRM_MINOR_LEGACY);
     if (ret)
         goto err_g2;
@@ -100,6 +106,9 @@ int drm_get_platform_dev(struct platform_device *platdev,
 err_g3:
     drm_put_minor(&dev->primary);
 err_g2:
+    if (dev->render)
+        drm_put_minor(&dev->render);
+err_g11:
     if (drm_core_check_feature(dev, DRIVER_MODESET))
         drm_put_minor(&dev->control);
 err_g1:
@@ -107,7 +116,6 @@ int drm_get_platform_dev(struct platform_device *platdev,
     mutex_unlock(&drm_global_mutex);
     return ret;
 }
-EXPORT_SYMBOL(drm_get_platform_dev);
 
 static int drm_platform_get_irq(struct drm_device *dev)
 {
...
@@ -83,6 +83,34 @@ static int drm_prime_add_buf_handle(struct drm_prime_file_private *prime_fpriv,
     return 0;
 }
 
+static struct dma_buf *drm_prime_lookup_buf_by_handle(struct drm_prime_file_private *prime_fpriv,
+                                                      uint32_t handle)
+{
+    struct drm_prime_member *member;
+
+    list_for_each_entry(member, &prime_fpriv->head, entry) {
+        if (member->handle == handle)
+            return member->dma_buf;
+    }
+
+    return NULL;
+}
+
+static int drm_prime_lookup_buf_handle(struct drm_prime_file_private *prime_fpriv,
+                                       struct dma_buf *dma_buf,
+                                       uint32_t *handle)
+{
+    struct drm_prime_member *member;
+
+    list_for_each_entry(member, &prime_fpriv->head, entry) {
+        if (member->dma_buf == dma_buf) {
+            *handle = member->handle;
+            return 0;
+        }
+    }
+
+    return -ENOENT;
+}
+
 static int drm_gem_map_attach(struct dma_buf *dma_buf,
                               struct device *target_dev,
                               struct dma_buf_attachment *attach)
@@ -131,9 +159,8 @@ static void drm_gem_map_detach(struct dma_buf *dma_buf,
     attach->priv = NULL;
 }
 
-static void drm_prime_remove_buf_handle_locked(
-    struct drm_prime_file_private *prime_fpriv,
-    struct dma_buf *dma_buf)
+void drm_prime_remove_buf_handle_locked(struct drm_prime_file_private *prime_fpriv,
+                                        struct dma_buf *dma_buf)
 {
     struct drm_prime_member *member, *safe;
 
@@ -167,8 +194,6 @@ static struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
     if (WARN_ON(prime_attach->dir != DMA_NONE))
         return ERR_PTR(-EBUSY);
 
-    mutex_lock(&obj->dev->struct_mutex);
-
     sgt = obj->dev->driver->gem_prime_get_sg_table(obj);
 
     if (!IS_ERR(sgt)) {
@@ -182,7 +207,6 @@ static struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
         }
     }
 
-    mutex_unlock(&obj->dev->struct_mutex);
     return sgt;
 }
 
@@ -192,16 +216,14 @@ static void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
     /* nothing to be done here */
 }
 
-static void drm_gem_dmabuf_release(struct dma_buf *dma_buf)
+void drm_gem_dmabuf_release(struct dma_buf *dma_buf)
 {
     struct drm_gem_object *obj = dma_buf->priv;
 
-    if (obj->export_dma_buf == dma_buf) {
-        /* drop the reference on the export fd holds */
-        obj->export_dma_buf = NULL;
-        drm_gem_object_unreference_unlocked(obj);
-    }
+    /* drop the reference on the export fd holds */
+    drm_gem_object_unreference_unlocked(obj);
 }
+EXPORT_SYMBOL(drm_gem_dmabuf_release);
 
 static void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf)
 {
@@ -300,62 +322,107 @@ struct dma_buf *drm_gem_prime_export(struct drm_device *dev,
 }
 EXPORT_SYMBOL(drm_gem_prime_export);
+static struct dma_buf *export_and_register_object(struct drm_device *dev,
+                                                  struct drm_gem_object *obj,
+                                                  uint32_t flags)
+{
+    struct dma_buf *dmabuf;
+
+    /* prevent races with concurrent gem_close. */
+    if (obj->handle_count == 0) {
+        dmabuf = ERR_PTR(-ENOENT);
+        return dmabuf;
+    }
+
+    dmabuf = dev->driver->gem_prime_export(dev, obj, flags);
+    if (IS_ERR(dmabuf)) {
+        /* normally the created dma-buf takes ownership of the ref,
+         * but if that fails then drop the ref
+         */
+        return dmabuf;
+    }
+
+    /*
+     * Note that callers do not need to clean up the export cache
+     * since the check for obj->handle_count guarantees that someone
+     * will clean it up.
+     */
+    obj->dma_buf = dmabuf;
+    get_dma_buf(obj->dma_buf);
+    /* Grab a new ref since the caller's ref is now used by the dma-buf */
+    drm_gem_object_reference(obj);
+
+    return dmabuf;
+}
 int drm_gem_prime_handle_to_fd(struct drm_device *dev,
                                struct drm_file *file_priv, uint32_t handle, uint32_t flags,
                                int *prime_fd)
 {
     struct drm_gem_object *obj;
-    void *buf;
     int ret = 0;
     struct dma_buf *dmabuf;
 
+    mutex_lock(&file_priv->prime.lock);
     obj = drm_gem_object_lookup(dev, file_priv, handle);
-    if (!obj)
-        return -ENOENT;
+    if (!obj) {
+        ret = -ENOENT;
+        goto out_unlock;
+    }
 
-    mutex_lock(&file_priv->prime.lock);
+    dmabuf = drm_prime_lookup_buf_by_handle(&file_priv->prime, handle);
+    if (dmabuf) {
+        get_dma_buf(dmabuf);
+        goto out_have_handle;
+    }
+
+    mutex_lock(&dev->object_name_lock);
     /* re-export the original imported object */
     if (obj->import_attach) {
         dmabuf = obj->import_attach->dmabuf;
+        get_dma_buf(dmabuf);
         goto out_have_obj;
     }
 
-    if (obj->export_dma_buf) {
-        dmabuf = obj->export_dma_buf;
+    if (obj->dma_buf) {
+        get_dma_buf(obj->dma_buf);
+        dmabuf = obj->dma_buf;
         goto out_have_obj;
     }
 
-    buf = dev->driver->gem_prime_export(dev, obj, flags);
-    if (IS_ERR(buf)) {
+    dmabuf = export_and_register_object(dev, obj, flags);
+    if (IS_ERR(dmabuf)) {
         /* normally the created dma-buf takes ownership of the ref,
          * but if that fails then drop the ref
         */
-        ret = PTR_ERR(buf);
+        ret = PTR_ERR(dmabuf);
+        mutex_unlock(&dev->object_name_lock);
         goto out;
     }
-    obj->export_dma_buf = buf;
 
-    /* if we've exported this buffer the cheat and add it to the import list
-     * so we get the correct handle back
+out_have_obj:
+    /*
+     * If we've exported this buffer then cheat and add it to the import list
+     * so we get the correct handle back. We must do this under the
+     * protection of dev->object_name_lock to ensure that a racing gem close
+     * ioctl doesn't fail to remove this buffer handle from the cache.
      */
     ret = drm_prime_add_buf_handle(&file_priv->prime,
-                                   obj->export_dma_buf, handle);
+                                   dmabuf, handle);
+    mutex_unlock(&dev->object_name_lock);
     if (ret)
         goto fail_put_dmabuf;
 
-    ret = dma_buf_fd(buf, flags);
-    if (ret < 0)
-        goto fail_rm_handle;
-
-    *prime_fd = ret;
-    mutex_unlock(&file_priv->prime.lock);
-    return 0;
-
-out_have_obj:
-    get_dma_buf(dmabuf);
+out_have_handle:
     ret = dma_buf_fd(dmabuf, flags);
+    /*
+     * We must _not_ remove the buffer from the handle cache since the newly
+     * created dma buf is already linked in the global obj->dma_buf pointer,
+     * and that is invariant as long as a userspace gem handle exists.
+     * Closing the handle will clean out the cache anyway, so we don't leak.
+     */
     if (ret < 0) {
-        dma_buf_put(dmabuf);
+        goto fail_put_dmabuf;
     } else {
         *prime_fd = ret;
         ret = 0;
@@ -363,15 +430,13 @@ int drm_gem_prime_handle_to_fd(struct drm_device *dev,
 
     goto out;
 
-fail_rm_handle:
-    drm_prime_remove_buf_handle_locked(&file_priv->prime, buf);
 fail_put_dmabuf:
-    /* clear NOT to be checked when releasing dma_buf */
-    obj->export_dma_buf = NULL;
-    dma_buf_put(buf);
+    dma_buf_put(dmabuf);
 out:
     drm_gem_object_unreference_unlocked(obj);
+out_unlock:
     mutex_unlock(&file_priv->prime.lock);
+
     return ret;
 }
 EXPORT_SYMBOL(drm_gem_prime_handle_to_fd);
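
dma-buf fds are the modern replacement for flink names, and the reworked caching above guarantees a given GEM object always maps to a single dma-buf. Cross-device sharing from userspace, sketched with error handling omitted:

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/drm.h>

static int share_by_prime(int fd1, int fd2, uint32_t handle, uint32_t *out)
{
    struct drm_prime_handle req = {
        .handle = handle,
        .flags = DRM_CLOEXEC,
    };

    /* Export: GEM handle on fd1 -> dma-buf fd in req.fd. */
    if (ioctl(fd1, DRM_IOCTL_PRIME_HANDLE_TO_FD, &req))
        return -1;

    /* Import: same struct reused; fd is now input, handle is output. */
    if (ioctl(fd2, DRM_IOCTL_PRIME_FD_TO_HANDLE, &req))
        return -1;

    *out = req.handle;
    return 0;
}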
@@ -446,19 +511,26 @@ int drm_gem_prime_fd_to_handle(struct drm_device *dev,
 
     ret = drm_prime_lookup_buf_handle(&file_priv->prime,
                                       dma_buf, handle);
-    if (!ret) {
-        ret = 0;
+    if (ret == 0)
         goto out_put;
-    }
 
     /* never seen this one, need to import */
+    mutex_lock(&dev->object_name_lock);
     obj = dev->driver->gem_prime_import(dev, dma_buf);
     if (IS_ERR(obj)) {
         ret = PTR_ERR(obj);
-        goto out_put;
+        goto out_unlock;
     }
 
-    ret = drm_gem_handle_create(file_priv, obj, handle);
+    if (obj->dma_buf) {
+        WARN_ON(obj->dma_buf != dma_buf);
+    } else {
+        obj->dma_buf = dma_buf;
+        get_dma_buf(dma_buf);
+    }
+
+    /* drm_gem_handle_create_tail unlocks dev->object_name_lock. */
+    ret = drm_gem_handle_create_tail(file_priv, obj, handle);
     drm_gem_object_unreference_unlocked(obj);
     if (ret)
         goto out_put;
@@ -478,7 +550,9 @@ int drm_gem_prime_fd_to_handle(struct drm_device *dev,
     /* hmm, if driver attached, we are relying on the free-object path
      * to detach.. which seems ok..
      */
-    drm_gem_object_handle_unreference_unlocked(obj);
+    drm_gem_handle_delete(file_priv, *handle);
+out_unlock:
+    mutex_unlock(&dev->object_name_lock);
 out_put:
     dma_buf_put(dma_buf);
     mutex_unlock(&file_priv->prime.lock);
@@ -618,25 +692,3 @@ void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv)
     WARN_ON(!list_empty(&prime_fpriv->head));
 }
 EXPORT_SYMBOL(drm_prime_destroy_file_private);
int drm_prime_lookup_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf, uint32_t *handle)
{
struct drm_prime_member *member;
list_for_each_entry(member, &prime_fpriv->head, entry) {
if (member->dma_buf == dma_buf) {
*handle = member->handle;
return 0;
}
}
return -ENOENT;
}
EXPORT_SYMBOL(drm_prime_lookup_buf_handle);
void drm_prime_remove_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf)
{
mutex_lock(&prime_fpriv->lock);
drm_prime_remove_buf_handle_locked(prime_fpriv, dma_buf);
mutex_unlock(&prime_fpriv->lock);
}
EXPORT_SYMBOL(drm_prime_remove_buf_handle);
deleted file: drivers/gpu/drm/drm_proc.c (legacy /proc/dri support, removed by this merge)
/**
* \file drm_proc.c
* /proc support for DRM
*
* \author Rickard E. (Rik) Faith <faith@valinux.com>
* \author Gareth Hughes <gareth@valinux.com>
*
* \par Acknowledgements:
* Matthew J Sottek <matthew.j.sottek@intel.com> sent in a patch to fix
* the problem with the proc files not outputting all their information.
*/
/*
* Created: Mon Jan 11 09:48:47 1999 by faith@valinux.com
*
* Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
* Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/export.h>
#include <drm/drmP.h>
/***************************************************
* Initialization, etc.
**************************************************/
/**
* Proc file list.
*/
static const struct drm_info_list drm_proc_list[] = {
{"name", drm_name_info, 0},
{"vm", drm_vm_info, 0},
{"clients", drm_clients_info, 0},
{"bufs", drm_bufs_info, 0},
{"gem_names", drm_gem_name_info, DRIVER_GEM},
#if DRM_DEBUG_CODE
{"vma", drm_vma_info, 0},
#endif
};
#define DRM_PROC_ENTRIES ARRAY_SIZE(drm_proc_list)
static int drm_proc_open(struct inode *inode, struct file *file)
{
struct drm_info_node *node = PDE_DATA(inode);
return single_open(file, node->info_ent->show, node);
}
static const struct file_operations drm_proc_fops = {
.owner = THIS_MODULE,
.open = drm_proc_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
/**
* Initialize a given set of proc files for a device
*
* \param files The array of files to create
* \param count The number of files given
* \param root DRI proc dir entry.
* \param minor device minor number
* \return Zero on success, non-zero on failure
*
* Create a given set of proc files represented by an array of
* drm_info_list entries in the given root directory.
*/
static int drm_proc_create_files(const struct drm_info_list *files, int count,
struct proc_dir_entry *root, struct drm_minor *minor)
{
struct drm_device *dev = minor->dev;
struct proc_dir_entry *ent;
struct drm_info_node *tmp;
int i;
for (i = 0; i < count; i++) {
u32 features = files[i].driver_features;
if (features != 0 &&
(dev->driver->driver_features & features) != features)
continue;
tmp = kmalloc(sizeof(struct drm_info_node), GFP_KERNEL);
if (!tmp)
return -ENOMEM;
tmp->minor = minor;
tmp->info_ent = &files[i];
list_add(&tmp->list, &minor->proc_nodes.list);
ent = proc_create_data(files[i].name, S_IRUGO, root,
&drm_proc_fops, tmp);
if (!ent) {
DRM_ERROR("Cannot create /proc/dri/%u/%s\n",
minor->index, files[i].name);
list_del(&tmp->list);
kfree(tmp);
return -ENOMEM;
}
}
return 0;
}
/**
 * Initialize the DRI proc filesystem for a device
 *
 * \param minor DRM minor to create entries for
 * \param root DRI proc dir entry.
 * \return zero on success, non-zero on failure.
 *
 * Create the device proc root entry "/proc/dri/%minor%/" and each entry
 * in drm_proc_list as "/proc/dri/%minor%/%name%".
 */
int drm_proc_init(struct drm_minor *minor, struct proc_dir_entry *root)
{
char name[12];
int ret;
INIT_LIST_HEAD(&minor->proc_nodes.list);
sprintf(name, "%u", minor->index);
minor->proc_root = proc_mkdir(name, root);
if (!minor->proc_root) {
DRM_ERROR("Cannot create /proc/dri/%s\n", name);
return -ENOMEM;
}
ret = drm_proc_create_files(drm_proc_list, DRM_PROC_ENTRIES,
minor->proc_root, minor);
if (ret) {
remove_proc_subtree(name, root);
minor->proc_root = NULL;
DRM_ERROR("Failed to create core drm proc files\n");
return ret;
}
return 0;
}
static int drm_proc_remove_files(const struct drm_info_list *files, int count,
struct drm_minor *minor)
{
struct list_head *pos, *q;
struct drm_info_node *tmp;
int i;
for (i = 0; i < count; i++) {
list_for_each_safe(pos, q, &minor->proc_nodes.list) {
tmp = list_entry(pos, struct drm_info_node, list);
if (tmp->info_ent == &files[i]) {
remove_proc_entry(files[i].name,
minor->proc_root);
list_del(pos);
kfree(tmp);
}
}
}
return 0;
}
/**
 * Cleanup the proc filesystem resources.
 *
 * \param minor DRM minor whose entries to remove.
 * \param root DRI proc dir entry.
 * \return always zero.
 *
 * Remove all proc entries created by drm_proc_init().
 */
int drm_proc_cleanup(struct drm_minor *minor, struct proc_dir_entry *root)
{
char name[64];
if (!root || !minor->proc_root)
return 0;
drm_proc_remove_files(drm_proc_list, DRM_PROC_ENTRIES, minor);
sprintf(name, "%d", minor->index);
remove_proc_subtree(name, root);
return 0;
}
@@ -46,7 +46,7 @@ static inline void *drm_vmalloc_dma(unsigned long size)
 #endif
 }

-void drm_sg_cleanup(struct drm_sg_mem * entry)
+static void drm_sg_cleanup(struct drm_sg_mem * entry)
 {
 	struct page *page;
 	int i;
@@ -64,19 +64,32 @@ void drm_sg_cleanup(struct drm_sg_mem * entry)
 	kfree(entry);
 }

+void drm_legacy_sg_cleanup(struct drm_device *dev)
+{
+	if (drm_core_check_feature(dev, DRIVER_SG) && dev->sg &&
+	    !drm_core_check_feature(dev, DRIVER_MODESET)) {
+		drm_sg_cleanup(dev->sg);
+		dev->sg = NULL;
+	}
+}
+
 #ifdef _LP64
 # define ScatterHandle(x) (unsigned int)((x >> 32) + (x & ((1L << 32) - 1)))
 #else
 # define ScatterHandle(x) (unsigned int)(x)
 #endif

-int drm_sg_alloc(struct drm_device *dev, struct drm_scatter_gather * request)
+int drm_sg_alloc(struct drm_device *dev, void *data,
+		 struct drm_file *file_priv)
 {
+	struct drm_scatter_gather *request = data;
 	struct drm_sg_mem *entry;
 	unsigned long pages, i, j;

 	DRM_DEBUG("\n");

+	if (drm_core_check_feature(dev, DRIVER_MODESET))
+		return -EINVAL;
+
 	if (!drm_core_check_feature(dev, DRIVER_SG))
 		return -EINVAL;
@@ -181,21 +194,15 @@ int drm_sg_alloc(struct drm_device *dev, struct drm_scatter_gather * request)
 	return -ENOMEM;
 }

-int drm_sg_alloc_ioctl(struct drm_device *dev, void *data,
-		       struct drm_file *file_priv)
-{
-	struct drm_scatter_gather *request = data;
-
-	return drm_sg_alloc(dev, request);
-}
-
 int drm_sg_free(struct drm_device *dev, void *data,
 		struct drm_file *file_priv)
 {
 	struct drm_scatter_gather *request = data;
 	struct drm_sg_mem *entry;

+	if (drm_core_check_feature(dev, DRIVER_MODESET))
+		return -EINVAL;
+
 	if (!drm_core_check_feature(dev, DRIVER_SG))
 		return -EINVAL;
......
@@ -40,6 +40,9 @@
 unsigned int drm_debug = 0;	/* 1 to enable debug output */
 EXPORT_SYMBOL(drm_debug);

+unsigned int drm_rnodes = 0;	/* 1 to enable experimental render nodes API */
+EXPORT_SYMBOL(drm_rnodes);
+
 unsigned int drm_vblank_offdelay = 5000;	/* Default to 5000 msecs. */
 EXPORT_SYMBOL(drm_vblank_offdelay);
@@ -56,11 +59,13 @@ MODULE_AUTHOR(CORE_AUTHOR);
 MODULE_DESCRIPTION(CORE_DESC);
 MODULE_LICENSE("GPL and additional rights");
 MODULE_PARM_DESC(debug, "Enable debug output");
+MODULE_PARM_DESC(rnodes, "Enable experimental render nodes API");
 MODULE_PARM_DESC(vblankoffdelay, "Delay until vblank irq auto-disable [msecs]");
 MODULE_PARM_DESC(timestamp_precision_usec, "Max. error on timestamps [usecs]");
 MODULE_PARM_DESC(timestamp_monotonic, "Use monotonic timestamps");

 module_param_named(debug, drm_debug, int, 0600);
+module_param_named(rnodes, drm_rnodes, int, 0600);
 module_param_named(vblankoffdelay, drm_vblank_offdelay, int, 0600);
 module_param_named(timestamp_precision_usec, drm_timestamp_precision, int, 0600);
 module_param_named(timestamp_monotonic, drm_timestamp_monotonic, int, 0600);
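
Note the default of 0: render nodes stay disabled unless explicitly requested at load time, e.g. with drm.rnodes=1 on the kernel command line or "modprobe drm rnodes=1".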
@@ -68,7 +73,6 @@ module_param_named(timestamp_monotonic, drm_timestamp_monotonic, int, 0600);

 struct idr drm_minors_idr;

 struct class *drm_class;
-struct proc_dir_entry *drm_proc_root;
 struct dentry *drm_debugfs_root;

 int drm_err(const char *func, const char *format, ...)
@@ -113,12 +117,12 @@ static int drm_minor_get_id(struct drm_device *dev, int type)
 	int base = 0, limit = 63;

 	if (type == DRM_MINOR_CONTROL) {
 		base += 64;
-		limit = base + 127;
+		limit = base + 63;
 	} else if (type == DRM_MINOR_RENDER) {
 		base += 128;
-		limit = base + 255;
+		limit = base + 63;
 	}

 	mutex_lock(&dev->struct_mutex);
 	ret = idr_alloc(&drm_minors_idr, NULL, base, limit, GFP_KERNEL);
@@ -288,13 +292,7 @@ int drm_fill_in_dev(struct drm_device *dev,
 		goto error_out_unreg;
 	}

-	retcode = drm_ctxbitmap_init(dev);
-	if (retcode) {
-		DRM_ERROR("Cannot allocate memory for context bitmap.\n");
-		goto error_out_unreg;
-	}
+	drm_legacy_ctxbitmap_init(dev);

 	if (driver->driver_features & DRIVER_GEM) {
 		retcode = drm_gem_init(dev);
@@ -321,9 +319,8 @@ EXPORT_SYMBOL(drm_fill_in_dev);
  * \param sec-minor structure to hold the assigned minor
  * \return negative number on failure.
  *
- * Search an empty entry and initialize it to the given parameters, and
- * create the proc init entry via proc_init(). This routines assigns
- * minor numbers to secondary heads of multi-headed cards
+ * Search an empty entry and initialize it to the given parameters. This
+ * routines assigns minor numbers to secondary heads of multi-headed cards
  */
 int drm_get_minor(struct drm_device *dev, struct drm_minor **minor, int type)
 {
@@ -351,20 +348,11 @@ int drm_get_minor(struct drm_device *dev, struct drm_minor **minor, int type)

 	idr_replace(&drm_minors_idr, new_minor, minor_id);

-	if (type == DRM_MINOR_LEGACY) {
-		ret = drm_proc_init(new_minor, drm_proc_root);
-		if (ret) {
-			DRM_ERROR("DRM: Failed to initialize /proc/dri.\n");
-			goto err_mem;
-		}
-	} else
-		new_minor->proc_root = NULL;
-
 #if defined(CONFIG_DEBUG_FS)
 	ret = drm_debugfs_init(new_minor, minor_id, drm_debugfs_root);
 	if (ret) {
 		DRM_ERROR("DRM: Failed to initialize /sys/kernel/debug/dri.\n");
-		goto err_g2;
+		goto err_mem;
 	}
 #endif
@@ -372,7 +360,7 @@ int drm_get_minor(struct drm_device *dev, struct drm_minor **minor, int type)
 	if (ret) {
 		printk(KERN_ERR
 		       "DRM: Error sysfs_device_add.\n");
-		goto err_g2;
+		goto err_debugfs;
 	}

 	*minor = new_minor;
@@ -380,10 +368,11 @@ int drm_get_minor(struct drm_device *dev, struct drm_minor **minor, int type)

 	return 0;

-err_g2:
-	if (new_minor->type == DRM_MINOR_LEGACY)
-		drm_proc_cleanup(new_minor, drm_proc_root);
+err_debugfs:
+#if defined(CONFIG_DEBUG_FS)
+	drm_debugfs_cleanup(new_minor);
 err_mem:
+#endif
 	kfree(new_minor);
 err_idr:
 	idr_remove(&drm_minors_idr, minor_id);
@@ -397,10 +386,6 @@ EXPORT_SYMBOL(drm_get_minor);
  *
  * \param sec_minor - structure to be released
  * \return always zero
- *
- * Cleans up the proc resources. Not legal for this to be the
- * last minor released.
- *
  */
 int drm_put_minor(struct drm_minor **minor_p)
 {
@@ -408,8 +393,6 @@ int drm_put_minor(struct drm_minor **minor_p)

 	DRM_DEBUG("release secondary minor %d\n", minor->index);

-	if (minor->type == DRM_MINOR_LEGACY)
-		drm_proc_cleanup(minor, drm_proc_root);
 #if defined(CONFIG_DEBUG_FS)
 	drm_debugfs_cleanup(minor);
 #endif
@@ -451,16 +434,11 @@ void drm_put_dev(struct drm_device *dev)

 	drm_lastclose(dev);

-	if (drm_core_has_MTRR(dev) && drm_core_has_AGP(dev) && dev->agp)
-		arch_phys_wc_del(dev->agp->agp_mtrr);
-
 	if (dev->driver->unload)
 		dev->driver->unload(dev);

-	if (drm_core_has_AGP(dev) && dev->agp) {
-		kfree(dev->agp);
-		dev->agp = NULL;
-	}
+	if (dev->driver->bus->agp_destroy)
+		dev->driver->bus->agp_destroy(dev);

 	drm_vblank_cleanup(dev);
@@ -468,11 +446,14 @@ void drm_put_dev(struct drm_device *dev)
 		drm_rmmap(dev, r_list->map);
 	drm_ht_remove(&dev->map_hash);

-	drm_ctxbitmap_cleanup(dev);
+	drm_legacy_ctxbitmap_cleanup(dev);

 	if (drm_core_check_feature(dev, DRIVER_MODESET))
 		drm_put_minor(&dev->control);

+	if (dev->render)
+		drm_put_minor(&dev->render);
+
 	if (driver->driver_features & DRIVER_GEM)
 		drm_gem_destroy(dev);
@@ -489,6 +470,8 @@ void drm_unplug_dev(struct drm_device *dev)
 	/* for a USB device */
 	if (drm_core_check_feature(dev, DRIVER_MODESET))
 		drm_unplug_minor(dev->control);
+	if (dev->render)
+		drm_unplug_minor(dev->render);
 	drm_unplug_minor(dev->primary);

 	mutex_lock(&drm_global_mutex);
......
@@ -33,6 +33,12 @@ int drm_get_usb_dev(struct usb_interface *interface,
 	if (ret)
 		goto err_g1;

+	if (drm_core_check_feature(dev, DRIVER_RENDER) && drm_rnodes) {
+		ret = drm_get_minor(dev, &dev->render, DRM_MINOR_RENDER);
+		if (ret)
+			goto err_g11;
+	}
+
 	ret = drm_get_minor(dev, &dev->primary, DRM_MINOR_LEGACY);
 	if (ret)
 		goto err_g2;
@@ -62,6 +68,9 @@ int drm_get_usb_dev(struct usb_interface *interface,
 err_g3:
 	drm_put_minor(&dev->primary);
 err_g2:
+	if (dev->render)
+		drm_put_minor(&dev->render);
+err_g11:
 	drm_put_minor(&dev->control);
 err_g1:
 	kfree(dev);
......
@@ -251,8 +251,7 @@ static void drm_vm_shm_close(struct vm_area_struct *vma)
 	switch (map->type) {
 	case _DRM_REGISTERS:
 	case _DRM_FRAME_BUFFER:
-		if (drm_core_has_MTRR(dev))
-			arch_phys_wc_del(map->mtrr);
+		arch_phys_wc_del(map->mtrr);
 		iounmap(map->handle);
 		break;
 	case _DRM_SHM:
......
new file: drivers/gpu/drm/drm_vma_manager.c
/*
* Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
* Copyright (c) 2012 David Airlie <airlied@linux.ie>
* Copyright (c) 2013 David Herrmann <dh.herrmann@gmail.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <drm/drmP.h>
#include <drm/drm_mm.h>
#include <drm/drm_vma_manager.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/rbtree.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>
/**
* DOC: vma offset manager
*
* The vma-manager is responsible for mapping arbitrary driver-dependent memory
* regions into the linear user address-space. It provides offsets to the
* caller which can then be used on the address_space of the drm-device. It
* takes care to not overlap regions, size them appropriately and to not
* confuse mm-core by inconsistent fake vm_pgoff fields.
* Drivers shouldn't use this for object placement in VMEM. This manager should
* only be used to manage mappings into linear user-space VMs.
*
* We use drm_mm as backend to manage object allocations. But it is highly
* optimized for alloc/free calls, not lookups. Hence, we use an rb-tree to
* speed up offset lookups.
*
* You must not use multiple offset managers on a single address_space.
* Otherwise, mm-core will be unable to tear down memory mappings as the VM will
* no longer be linear. Please use VM_NONLINEAR in that case and implement your
* own offset managers.
*
* This offset manager works on page-based addresses. That is, every argument
* and return code (with the exception of drm_vma_node_offset_addr()) is given
* in number of pages, not number of bytes. That means, object sizes and offsets
* must always be page-aligned (as usual).
* If you want to get a valid byte-based user-space address for a given offset,
* please see drm_vma_node_offset_addr().
*
* In addition to offset management, the vma offset manager also handles access
* management. For every open-file context that is allowed to access a given
* node, you must call drm_vma_node_allow(). Otherwise, an mmap() call on this
* open-file with the offset of the node will fail with -EACCES. To revoke
* access again, use drm_vma_node_revoke(). However, the caller is responsible
* for destroying already existing mappings, if required.
*/
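
To make the flow above concrete, here is a condensed, hypothetical driver-side sketch (struct my_obj and its fields are illustrative, not kernel API): each object embeds a drm_vma_offset_node, gets an offset once, hands the byte address to userspace, and releases it on teardown.

struct my_obj {
	struct drm_vma_offset_node vma_node;
	unsigned long num_pages;	/* size visible to userspace */
};

/* Allocate a fake mmap offset and report it in bytes. */
static int my_obj_create_mmap_offset(struct drm_vma_offset_manager *mgr,
				     struct my_obj *obj, __u64 *offset)
{
	int ret;

	ret = drm_vma_offset_add(mgr, &obj->vma_node, obj->num_pages);
	if (ret)
		return ret;

	*offset = drm_vma_node_offset_addr(&obj->vma_node);
	return 0;
}

static void my_obj_free_mmap_offset(struct drm_vma_offset_manager *mgr,
				    struct my_obj *obj)
{
	drm_vma_offset_remove(mgr, &obj->vma_node);
}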
/**
* drm_vma_offset_manager_init - Initialize new offset-manager
* @mgr: Manager object
* @page_offset: Offset of available memory area (page-based)
* @size: Size of available address space range (page-based)
*
* Initialize a new offset-manager. The offset and area size available for the
* manager are given as @page_offset and @size. Both are interpreted as
* page-numbers, not bytes.
*
* Adding/removing nodes from the manager is locked internally and protected
* against concurrent access. However, node allocation and destruction is left
* for the caller. While calling into the vma-manager, a given node must
* always be guaranteed to be referenced.
*/
void drm_vma_offset_manager_init(struct drm_vma_offset_manager *mgr,
unsigned long page_offset, unsigned long size)
{
rwlock_init(&mgr->vm_lock);
mgr->vm_addr_space_rb = RB_ROOT;
drm_mm_init(&mgr->vm_addr_space_mm, page_offset, size);
}
EXPORT_SYMBOL(drm_vma_offset_manager_init);
/**
* drm_vma_offset_manager_destroy() - Destroy offset manager
* @mgr: Manager object
*
* Destroy an object manager which was previously created via
* drm_vma_offset_manager_init(). The caller must remove all allocated nodes
* before destroying the manager. Otherwise, drm_mm will refuse to free the
* requested resources.
*
* The manager must not be accessed after this function is called.
*/
void drm_vma_offset_manager_destroy(struct drm_vma_offset_manager *mgr)
{
/* take the lock to protect against buggy drivers */
write_lock(&mgr->vm_lock);
drm_mm_takedown(&mgr->vm_addr_space_mm);
write_unlock(&mgr->vm_lock);
}
EXPORT_SYMBOL(drm_vma_offset_manager_destroy);
/**
* drm_vma_offset_lookup() - Find node in offset space
* @mgr: Manager object
* @start: Start address for object (page-based)
* @pages: Size of object (page-based)
*
* Find a node given a start address and object size. This returns the _best_
* match for the given node. That is, @start may point somewhere into a valid
* region and the given node will be returned, as long as the node spans the
* whole requested area (given the size in number of pages as @pages).
*
* RETURNS:
* Returns NULL if no suitable node can be found. Otherwise, the best match
* is returned. It's the caller's responsibility to make sure the node doesn't
* get destroyed before the caller can access it.
*/
struct drm_vma_offset_node *drm_vma_offset_lookup(struct drm_vma_offset_manager *mgr,
unsigned long start,
unsigned long pages)
{
struct drm_vma_offset_node *node;
read_lock(&mgr->vm_lock);
node = drm_vma_offset_lookup_locked(mgr, start, pages);
read_unlock(&mgr->vm_lock);
return node;
}
EXPORT_SYMBOL(drm_vma_offset_lookup);
/**
* drm_vma_offset_lookup_locked() - Find node in offset space
* @mgr: Manager object
* @start: Start address for object (page-based)
* @pages: Size of object (page-based)
*
* Same as drm_vma_offset_lookup() but requires the caller to lock offset lookup
* manually. See drm_vma_offset_lock_lookup() for an example.
*
* RETURNS:
* Returns NULL if no suitable node can be found. Otherwise, the best match
* is returned.
*/
struct drm_vma_offset_node *drm_vma_offset_lookup_locked(struct drm_vma_offset_manager *mgr,
unsigned long start,
unsigned long pages)
{
struct drm_vma_offset_node *node, *best;
struct rb_node *iter;
unsigned long offset;
iter = mgr->vm_addr_space_rb.rb_node;
best = NULL;
while (likely(iter)) {
node = rb_entry(iter, struct drm_vma_offset_node, vm_rb);
offset = node->vm_node.start;
if (start >= offset) {
iter = iter->rb_right;
best = node;
if (start == offset)
break;
} else {
iter = iter->rb_left;
}
}
/* verify that the node spans the requested area */
if (best) {
offset = best->vm_node.start + best->vm_node.size;
if (offset < start + pages)
best = NULL;
}
return best;
}
EXPORT_SYMBOL(drm_vma_offset_lookup_locked);
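
A sketch of the locked-lookup pattern the comment refers to, assuming the drm_vma_offset_lock_lookup()/drm_vma_offset_unlock_lookup() helpers from the accompanying header and the illustrative struct my_obj from the earlier sketch, here extended with a kref:

static struct my_obj *my_obj_lookup(struct drm_vma_offset_manager *mgr,
				    unsigned long start_page,
				    unsigned long num_pages)
{
	struct drm_vma_offset_node *node;
	struct my_obj *obj = NULL;

	drm_vma_offset_lock_lookup(mgr);
	node = drm_vma_offset_lookup_locked(mgr, start_page, num_pages);
	if (node) {
		obj = container_of(node, struct my_obj, vma_node);
		kref_get(&obj->ref);	/* pin before dropping the lock */
	}
	drm_vma_offset_unlock_lookup(mgr);

	return obj;
}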
/* internal helper to link @node into the rb-tree */
static void _drm_vma_offset_add_rb(struct drm_vma_offset_manager *mgr,
struct drm_vma_offset_node *node)
{
struct rb_node **iter = &mgr->vm_addr_space_rb.rb_node;
struct rb_node *parent = NULL;
struct drm_vma_offset_node *iter_node;
while (likely(*iter)) {
parent = *iter;
iter_node = rb_entry(*iter, struct drm_vma_offset_node, vm_rb);
if (node->vm_node.start < iter_node->vm_node.start)
iter = &(*iter)->rb_left;
else if (node->vm_node.start > iter_node->vm_node.start)
iter = &(*iter)->rb_right;
else
BUG();
}
rb_link_node(&node->vm_rb, parent, iter);
rb_insert_color(&node->vm_rb, &mgr->vm_addr_space_rb);
}
/**
* drm_vma_offset_add() - Add offset node to manager
* @mgr: Manager object
* @node: Node to be added
* @pages: Allocation size visible to user-space (in number of pages)
*
* Add a node to the offset-manager. If the node was already added, this does
* nothing and returns 0. @pages is the size of the object given in number of
* pages.
* After this call succeeds, you can access the offset of the node until it
* is removed again.
*
* If this call fails, it is safe to retry the operation or call
* drm_vma_offset_remove(), anyway. However, no cleanup is required in that
* case.
*
* @pages is not required to be the same size as the underlying memory object
* that you want to map. It only limits the size that user-space can map into
* their address space.
*
* RETURNS:
* 0 on success, negative error code on failure.
*/
int drm_vma_offset_add(struct drm_vma_offset_manager *mgr,
struct drm_vma_offset_node *node, unsigned long pages)
{
int ret;
write_lock(&mgr->vm_lock);
if (drm_mm_node_allocated(&node->vm_node)) {
ret = 0;
goto out_unlock;
}
ret = drm_mm_insert_node(&mgr->vm_addr_space_mm, &node->vm_node,
pages, 0, DRM_MM_SEARCH_DEFAULT);
if (ret)
goto out_unlock;
_drm_vma_offset_add_rb(mgr, node);
out_unlock:
write_unlock(&mgr->vm_lock);
return ret;
}
EXPORT_SYMBOL(drm_vma_offset_add);
/**
* drm_vma_offset_remove() - Remove offset node from manager
* @mgr: Manager object
* @node: Node to be removed
*
* Remove a node from the offset manager. If the node wasn't added before, this
* does nothing. After this call returns, the offset and size will be 0 until a
* new offset is allocated via drm_vma_offset_add() again. Helper functions like
* drm_vma_node_start() and drm_vma_node_offset_addr() will return 0 if no
* offset is allocated.
*/
void drm_vma_offset_remove(struct drm_vma_offset_manager *mgr,
struct drm_vma_offset_node *node)
{
write_lock(&mgr->vm_lock);
if (drm_mm_node_allocated(&node->vm_node)) {
rb_erase(&node->vm_rb, &mgr->vm_addr_space_rb);
drm_mm_remove_node(&node->vm_node);
memset(&node->vm_node, 0, sizeof(node->vm_node));
}
write_unlock(&mgr->vm_lock);
}
EXPORT_SYMBOL(drm_vma_offset_remove);
/**
* drm_vma_node_allow - Add open-file to list of allowed users
* @node: Node to modify
* @filp: Open file to add
*
* Add @filp to the list of allowed open-files for this node. If @filp is
* already on this list, the ref-count is incremented.
*
* The list of allowed-users is preserved across drm_vma_offset_add() and
* drm_vma_offset_remove() calls. You may even call it if the node is currently
* not added to any offset-manager.
*
* You must remove all open-files the same number of times as you added them
* before destroying the node. Otherwise, you will leak memory.
*
* This is locked against concurrent access internally.
*
* RETURNS:
* 0 on success, negative error code on internal failure (out-of-mem)
*/
int drm_vma_node_allow(struct drm_vma_offset_node *node, struct file *filp)
{
struct rb_node **iter;
struct rb_node *parent = NULL;
struct drm_vma_offset_file *new, *entry;
int ret = 0;
/* Preallocate entry to avoid atomic allocations below. It is quite
* unlikely that an open-file is added twice to a single node so we
* don't optimize for this case. OOM is checked below only if the entry
* is actually used. */
new = kmalloc(sizeof(*entry), GFP_KERNEL);
write_lock(&node->vm_lock);
iter = &node->vm_files.rb_node;
while (likely(*iter)) {
parent = *iter;
entry = rb_entry(*iter, struct drm_vma_offset_file, vm_rb);
if (filp == entry->vm_filp) {
entry->vm_count++;
goto unlock;
} else if (filp > entry->vm_filp) {
iter = &(*iter)->rb_right;
} else {
iter = &(*iter)->rb_left;
}
}
if (!new) {
ret = -ENOMEM;
goto unlock;
}
new->vm_filp = filp;
new->vm_count = 1;
rb_link_node(&new->vm_rb, parent, iter);
rb_insert_color(&new->vm_rb, &node->vm_files);
new = NULL;
unlock:
write_unlock(&node->vm_lock);
kfree(new);
return ret;
}
EXPORT_SYMBOL(drm_vma_node_allow);
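
A driver would typically pair these calls with per-file handle creation and destruction; a hypothetical sketch, continuing the illustrative struct my_obj:

/* On GEM handle creation: grant the open-file access to the offset. */
static int my_obj_open(struct my_obj *obj, struct file *filp)
{
	return drm_vma_node_allow(&obj->vma_node, filp);
}

/* On GEM handle close: one revoke per successful allow. */
static void my_obj_close(struct my_obj *obj, struct file *filp)
{
	drm_vma_node_revoke(&obj->vma_node, filp);
}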
/**
* drm_vma_node_revoke - Remove open-file from list of allowed users
* @node: Node to modify
* @filp: Open file to remove
*
* Decrement the ref-count of @filp in the list of allowed open-files on @node.
* If the ref-count drops to zero, remove @filp from the list. You must call
* this once for every drm_vma_node_allow() on @filp.
*
* This is locked against concurrent access internally.
*
* If @filp is not on the list, nothing is done.
*/
void drm_vma_node_revoke(struct drm_vma_offset_node *node, struct file *filp)
{
struct drm_vma_offset_file *entry;
struct rb_node *iter;
write_lock(&node->vm_lock);
iter = node->vm_files.rb_node;
while (likely(iter)) {
entry = rb_entry(iter, struct drm_vma_offset_file, vm_rb);
if (filp == entry->vm_filp) {
if (!--entry->vm_count) {
rb_erase(&entry->vm_rb, &node->vm_files);
kfree(entry);
}
break;
} else if (filp > entry->vm_filp) {
iter = iter->rb_right;
} else {
iter = iter->rb_left;
}
}
write_unlock(&node->vm_lock);
}
EXPORT_SYMBOL(drm_vma_node_revoke);
/**
* drm_vma_node_is_allowed - Check whether an open-file is granted access
* @node: Node to check
* @filp: Open-file to check for
*
* Search the list in @node whether @filp is currently on the list of allowed
* open-files (see drm_vma_node_allow()).
*
* This is locked against concurrent access internally.
*
* RETURNS:
* true iff @filp is on the list
*/
bool drm_vma_node_is_allowed(struct drm_vma_offset_node *node,
struct file *filp)
{
struct drm_vma_offset_file *entry;
struct rb_node *iter;
read_lock(&node->vm_lock);
iter = node->vm_files.rb_node;
while (likely(iter)) {
entry = rb_entry(iter, struct drm_vma_offset_file, vm_rb);
if (filp == entry->vm_filp)
break;
else if (filp > entry->vm_filp)
iter = iter->rb_right;
else
iter = iter->rb_left;
}
read_unlock(&node->vm_lock);
return iter;
}
EXPORT_SYMBOL(drm_vma_node_is_allowed);
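
Putting the pieces together, a schematic (not driver-accurate) mmap path would look up the node by page offset and reject open-files that were never allowed:

static int my_mmap(struct file *filp, struct vm_area_struct *vma,
		   struct drm_vma_offset_manager *mgr)
{
	struct drm_vma_offset_node *node;

	node = drm_vma_offset_lookup(mgr, vma->vm_pgoff, vma_pages(vma));
	if (!node)
		return -EINVAL;

	if (!drm_vma_node_is_allowed(node, filp))
		return -EACCES;

	/* ...install the actual mapping for the backing object... */
	return 0;
}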
(Two file diffs collapsed.)
@@ -324,10 +324,8 @@ exynos_drm_encoder_create(struct drm_device *dev,
 		return NULL;

 	exynos_encoder = kzalloc(sizeof(*exynos_encoder), GFP_KERNEL);
-	if (!exynos_encoder) {
-		DRM_ERROR("failed to allocate encoder\n");
+	if (!exynos_encoder)
 		return NULL;
-	}

 	exynos_encoder->dpms = DRM_MODE_DPMS_OFF;
 	exynos_encoder->manager = manager;
......
(Remaining file diffs collapsed.)