Commit 3c2e81ef authored by Linus Torvalds

Merge branch 'drm-next' of git://people.freedesktop.org/~airlied/linux

Pull DRM updates from Dave Airlie:
 "This is the one and only next pull for 3.8, we had a regression we
  found last week, so I was waiting for that to resolve itself, and I
  ended up with some Intel fixes on top as well.

  Highlights:
   - new driver: nvidia tegra 20/30/hdmi support
   - radeon: add support for previously unused DMA engines, more HDMI
     regs, eviction speed-ups and fixes
   - i915: HSW support enable, agp removal on GEN6, seqno wrapping
   - exynos: IPP subsystem support (image post proc), HDMI
   - nouveau: display class reworking, nv20->40 z compression
   - ttm: start of locking fixes, rcu usage for lookups,
   - core: documentation updates, docbook integration, monotonic clock
     usage, move from connector to object properties"

* 'drm-next' of git://people.freedesktop.org/~airlied/linux: (590 commits)
  drm/exynos: add gsc ipp driver
  drm/exynos: add rotator ipp driver
  drm/exynos: add fimc ipp driver
  drm/exynos: add iommu support for ipp
  drm/exynos: add ipp subsystem
  drm/exynos: support device tree for fimd
  radeon: fix regression with eviction since evict caching changes
  drm/radeon: add more pedantic checks in the CP DMA checker
  drm/radeon: bump version for CS ioctl support for async DMA
  drm/radeon: enable the async DMA rings in the CS ioctl
  drm/radeon: add VM CS parser support for async DMA on cayman/TN/SI
  drm/radeon/kms: add evergreen/cayman CS parser for async DMA (v2)
  drm/radeon/kms: add 6xx/7xx CS parser for async DMA (v2)
  drm/radeon: fix htile buffer size computation for command stream checker
  drm/radeon: fix fence locking in the pageflip callback
  drm/radeon: make indirect register access concurrency-safe
  drm/radeon: add W|RREG32_IDX for MM_INDEX|DATA based mmio accesss
  drm/exynos: support extended screen coordinate of fimd
  drm/exynos: fix x, y coordinates for right bottom pixel
  drm/exynos: fix fb offset calculation for plane
  ...
...@@ -91,3 +91,12 @@ transferred to 'device' domain. This attribute can be also used for
dma_unmap_{single,page,sg} functions family to force buffer to stay in
device domain after releasing a mapping for it. Use this attribute with
care!
DMA_ATTR_FORCE_CONTIGUOUS
-------------------------
By default the DMA-mapping subsystem is allowed to assemble the buffer
allocated by the dma_alloc_attrs() function from individual pages if it
can be mapped as a contiguous chunk into the device's DMA address space.
By specifying this attribute the allocated buffer is forced to be
contiguous also in physical memory.
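
For illustration, requesting such a buffer with the struct dma_attrs API of
this era might look like the sketch below; the device pointer and the SZ_1M
size are placeholders, not taken from this commit:

DEFINE_DMA_ATTRS(attrs);
dma_addr_t dma_handle;
void *cpu_addr;

dma_set_attr(DMA_ATTR_FORCE_CONTIGUOUS, &attrs);
/* buffer is physically contiguous, not just contiguous in the IOVA space */
cpu_addr = dma_alloc_attrs(dev, SZ_1M, &dma_handle, GFP_KERNEL, &attrs);
if (!cpu_addr)
	return -ENOMEM;
/* ... use the buffer ... */
dma_free_attrs(dev, SZ_1M, cpu_addr, dma_handle, &attrs);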
...@@ -1141,23 +1141,13 @@ int max_width, max_height;</synopsis>
the <methodname>page_flip</methodname> operation will be called with a
non-NULL <parameter>event</parameter> argument pointing to a
<structname>drm_pending_vblank_event</structname> instance. Upon page
-flip completion the driver must fill the
-<parameter>event</parameter>::<structfield>event</structfield>
-<structfield>sequence</structfield>, <structfield>tv_sec</structfield>
-and <structfield>tv_usec</structfield> fields with the associated
-vertical blanking count and timestamp, add the event to the
-<parameter>drm_file</parameter> list of events to be signaled, and wake
-up any waiting process. This can be performed with
+flip completion the driver must call <methodname>drm_send_vblank_event</methodname>
+to fill in the event and send to wake up any waiting processes.
+This can be performed with
<programlisting><![CDATA[
-struct timeval now;
-event->event.sequence = drm_vblank_count_and_time(..., &now);
-event->event.tv_sec = now.tv_sec;
-event->event.tv_usec = now.tv_usec;
spin_lock_irqsave(&dev->event_lock, flags);
-list_add_tail(&event->base.link, &event->base.file_priv->event_list);
-wake_up_interruptible(&event->base.file_priv->event_wait);
+...
+drm_send_vblank_event(dev, pipe, event);
spin_unlock_irqrestore(&dev->event_lock, flags);
]]></programlisting>
</para>
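
The programlisting above elides the surrounding handler; a hypothetical
flip-done interrupt path (a sketch under assumed driver state, not code from
this commit) could look like:

static void my_flip_done_irq(struct drm_device *dev, int pipe,
			     struct drm_pending_vblank_event *event)
{
	unsigned long flags;

	spin_lock_irqsave(&dev->event_lock, flags);
	if (event)
		drm_send_vblank_event(dev, pipe, event);
	spin_unlock_irqrestore(&dev->event_lock, flags);
}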
...@@ -1621,10 +1611,10 @@ void intel_crt_init(struct drm_device *dev)
</sect2>
</sect1>
-<!-- Internals: mid-layer helper functions -->
+<!-- Internals: kms helper functions -->
<sect1>
-<title>Mid-layer Helper Functions</title>
+<title>Mode Setting Helper Functions</title>
<para>
The CRTC, encoder and connector functions provided by the drivers
implement the DRM API. They're called by the DRM core and ioctl handlers
...@@ -2106,6 +2096,21 @@ void intel_crt_init(struct drm_device *dev)
</listitem>
</itemizedlist>
</sect2>
<sect2>
<title>Modeset Helper Functions Reference</title>
!Edrivers/gpu/drm/drm_crtc_helper.c
</sect2>
<sect2>
<title>fbdev Helper Functions Reference</title>
!Pdrivers/gpu/drm/drm_fb_helper.c fbdev helpers
!Edrivers/gpu/drm/drm_fb_helper.c
</sect2>
<sect2>
<title>Display Port Helper Functions Reference</title>
!Pdrivers/gpu/drm/drm_dp_helper.c dp helpers
!Iinclude/drm/drm_dp_helper.h
!Edrivers/gpu/drm/drm_dp_helper.c
</sect2>
</sect1>
<!-- Internals: vertical blanking -->
...
NVIDIA Tegra host1x
Required properties:
- compatible: "nvidia,tegra<chip>-host1x"
- reg: Physical base address and length of the controller's registers.
- interrupts: The interrupt outputs from the controller.
- #address-cells: The number of cells used to represent physical base addresses
in the host1x address space. Should be 1.
- #size-cells: The number of cells used to represent the size of an address
range in the host1x address space. Should be 1.
- ranges: The mapping of the host1x address space to the CPU address space.
The host1x top-level node defines a number of children, each representing one
of the following host1x client modules:
- mpe: video encoder
Required properties:
- compatible: "nvidia,tegra<chip>-mpe"
- reg: Physical base address and length of the controller's registers.
- interrupts: The interrupt outputs from the controller.
- vi: video input
Required properties:
- compatible: "nvidia,tegra<chip>-vi"
- reg: Physical base address and length of the controller's registers.
- interrupts: The interrupt outputs from the controller.
- epp: encoder pre-processor
Required properties:
- compatible: "nvidia,tegra<chip>-epp"
- reg: Physical base address and length of the controller's registers.
- interrupts: The interrupt outputs from the controller.
- isp: image signal processor
Required properties:
- compatible: "nvidia,tegra<chip>-isp"
- reg: Physical base address and length of the controller's registers.
- interrupts: The interrupt outputs from the controller.
- gr2d: 2D graphics engine
Required properties:
- compatible: "nvidia,tegra<chip>-gr2d"
- reg: Physical base address and length of the controller's registers.
- interrupts: The interrupt outputs from the controller.
- gr3d: 3D graphics engine
Required properties:
- compatible: "nvidia,tegra<chip>-gr3d"
- reg: Physical base address and length of the controller's registers.
- dc: display controller
Required properties:
- compatible: "nvidia,tegra<chip>-dc"
- reg: Physical base address and length of the controller's registers.
- interrupts: The interrupt outputs from the controller.
Each display controller node has a child node, named "rgb", that represents
the RGB output associated with the controller. It can take the following
optional properties:
- nvidia,ddc-i2c-bus: phandle of an I2C controller used for DDC EDID probing
- nvidia,hpd-gpio: specifies a GPIO used for hotplug detection
- nvidia,edid: supplies a binary EDID blob
- hdmi: High Definition Multimedia Interface
Required properties:
- compatible: "nvidia,tegra<chip>-hdmi"
- reg: Physical base address and length of the controller's registers.
- interrupts: The interrupt outputs from the controller.
- vdd-supply: regulator for supply voltage
- pll-supply: regulator for PLL
Optional properties:
- nvidia,ddc-i2c-bus: phandle of an I2C controller used for DDC EDID probing
- nvidia,hpd-gpio: specifies a GPIO used for hotplug detection
- nvidia,edid: supplies a binary EDID blob
- tvo: TV encoder output
Required properties:
- compatible: "nvidia,tegra<chip>-tvo"
- reg: Physical base address and length of the controller's registers.
- interrupts: The interrupt outputs from the controller.
- dsi: display serial interface
Required properties:
- compatible: "nvidia,tegra<chip>-dsi"
- reg: Physical base address and length of the controller's registers.
Example:
/ {
...
host1x {
compatible = "nvidia,tegra20-host1x", "simple-bus";
reg = <0x50000000 0x00024000>;
interrupts = <0 65 0x04 /* mpcore syncpt */
0 67 0x04>; /* mpcore general */
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x54000000 0x54000000 0x04000000>;
mpe {
compatible = "nvidia,tegra20-mpe";
reg = <0x54040000 0x00040000>;
interrupts = <0 68 0x04>;
};
vi {
compatible = "nvidia,tegra20-vi";
reg = <0x54080000 0x00040000>;
interrupts = <0 69 0x04>;
};
epp {
compatible = "nvidia,tegra20-epp";
reg = <0x540c0000 0x00040000>;
interrupts = <0 70 0x04>;
};
isp {
compatible = "nvidia,tegra20-isp";
reg = <0x54100000 0x00040000>;
interrupts = <0 71 0x04>;
};
gr2d {
compatible = "nvidia,tegra20-gr2d";
reg = <0x54140000 0x00040000>;
interrupts = <0 72 0x04>;
};
gr3d {
compatible = "nvidia,tegra20-gr3d";
reg = <0x54180000 0x00040000>;
};
dc@54200000 {
compatible = "nvidia,tegra20-dc";
reg = <0x54200000 0x00040000>;
interrupts = <0 73 0x04>;
rgb {
status = "disabled";
};
};
dc@54240000 {
compatible = "nvidia,tegra20-dc";
reg = <0x54240000 0x00040000>;
interrupts = <0 74 0x04>;
rgb {
status = "disabled";
};
};
hdmi {
compatible = "nvidia,tegra20-hdmi";
reg = <0x54280000 0x00040000>;
interrupts = <0 75 0x04>;
status = "disabled";
};
tvo {
compatible = "nvidia,tegra20-tvo";
reg = <0x542c0000 0x00040000>;
interrupts = <0 76 0x04>;
status = "disabled";
};
dsi {
compatible = "nvidia,tegra20-dsi";
reg = <0x54300000 0x00040000>;
status = "disabled";
};
};
...
};
...@@ -213,3 +213,91 @@ presentation on krefs, which can be found at:
and:
http://www.kroah.com/linux/talks/ols_2004_kref_talk/
The above example could also be optimized using kref_get_unless_zero() in
the following way:
static struct my_data *get_entry()
{
struct my_data *entry = NULL;
mutex_lock(&mutex);
if (!list_empty(&q)) {
entry = container_of(q.next, struct my_data, link);
if (!kref_get_unless_zero(&entry->refcount))
entry = NULL;
}
mutex_unlock(&mutex);
return entry;
}
static void release_entry(struct kref *ref)
{
struct my_data *entry = container_of(ref, struct my_data, refcount);
mutex_lock(&mutex);
list_del(&entry->link);
mutex_unlock(&mutex);
kfree(entry);
}
static void put_entry(struct my_data *entry)
{
kref_put(&entry->refcount, release_entry);
}
This is useful to remove the mutex lock around kref_put() in put_entry(), but
it's important that kref_get_unless_zero is enclosed in the same critical
section that finds the entry in the lookup table, otherwise
kref_get_unless_zero may reference already freed memory.
Note that it is illegal to use kref_get_unless_zero without checking its
return value. If you are sure (by already having a valid pointer) that
kref_get_unless_zero() will return true, then use kref_get() instead.
The function kref_get_unless_zero also makes it possible to use rcu
locking for lookups in the above example:
struct my_data
{
struct rcu_head rhead;
.
struct kref refcount;
.
.
};
static struct my_data *get_entry_rcu()
{
struct my_data *entry = NULL;
rcu_read_lock();
if (!list_empty(&q)) {
entry = container_of(q.next, struct my_data, link);
if (!kref_get_unless_zero(&entry->refcount))
entry = NULL;
}
rcu_read_unlock();
return entry;
}
static void release_entry_rcu(struct kref *ref)
{
struct my_data *entry = container_of(ref, struct my_data, refcount);
mutex_lock(&mutex);
list_del_rcu(&entry->link);
mutex_unlock(&mutex);
kfree_rcu(entry, rhead);
}
static void put_entry(struct my_data *entry)
{
kref_put(&entry->refcount, release_entry_rcu);
}
But note that the struct kref member needs to remain in valid memory for an
rcu grace period after release_entry_rcu was called. That can be accomplished
by using kfree_rcu(entry, rhead) as done above, or by calling
synchronize_rcu() before using kfree, but note that synchronize_rcu() may
sleep for a substantial amount of time.
Thomas Hellstrom <thellstrom@vmware.com>
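
For illustration only (not part of this commit), the synchronize_rcu()
alternative mentioned above could be sketched as:

static void release_entry_rcu(struct kref *ref)
{
	struct my_data *entry = container_of(ref, struct my_data, refcount);

	mutex_lock(&mutex);
	list_del_rcu(&entry->link);
	mutex_unlock(&mutex);
	synchronize_rcu();	/* may sleep until all rcu readers are done */
	kfree(entry);
}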
...@@ -2549,6 +2549,15 @@ S: Supported
F: drivers/gpu/drm/exynos
F: include/drm/exynos*
DRM DRIVERS FOR NVIDIA TEGRA
M: Thierry Reding <thierry.reding@avionic-design.de>
L: dri-devel@lists.freedesktop.org
L: linux-tegra@vger.kernel.org
T: git git://gitorious.org/thierryreding/linux.git
S: Maintained
F: drivers/gpu/drm/tegra/
F: Documentation/devicetree/bindings/gpu/nvidia,tegra20-host1x.txt
DSCC4 DRIVER
M: Francois Romieu <romieu@fr.zoreil.com>
L: netdev@vger.kernel.org
...
...@@ -1034,7 +1034,8 @@ static inline void __free_iova(struct dma_iommu_mapping *mapping,
spin_unlock_irqrestore(&mapping->lock, flags);
}
-static struct page **__iommu_alloc_buffer(struct device *dev, size_t size, gfp_t gfp)
+static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
+gfp_t gfp, struct dma_attrs *attrs)
{
struct page **pages;
int count = size >> PAGE_SHIFT;
...@@ -1048,6 +1049,23 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size, gfp_t
if (!pages)
return NULL;
if (dma_get_attr(DMA_ATTR_FORCE_CONTIGUOUS, attrs))
{
unsigned long order = get_order(size);
struct page *page;
page = dma_alloc_from_contiguous(dev, count, order);
if (!page)
goto error;
__dma_clear_buffer(page, size);
for (i = 0; i < count; i++)
pages[i] = page + i;
return pages;
}
while (count) {
int j, order = __fls(count);
...@@ -1081,14 +1099,21 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size, gfp_t
return NULL;
}
-static int __iommu_free_buffer(struct device *dev, struct page **pages, size_t size)
+static int __iommu_free_buffer(struct device *dev, struct page **pages,
+size_t size, struct dma_attrs *attrs)
{
int count = size >> PAGE_SHIFT;
int array_size = count * sizeof(struct page *);
int i;
-for (i = 0; i < count; i++)
-if (pages[i])
-__free_pages(pages[i], 0);
+if (dma_get_attr(DMA_ATTR_FORCE_CONTIGUOUS, attrs)) {
+dma_release_from_contiguous(dev, pages[0], count);
+} else {
+for (i = 0; i < count; i++)
+if (pages[i])
+__free_pages(pages[i], 0);
+}
if (array_size <= PAGE_SIZE)
kfree(pages);
else
...@@ -1250,7 +1275,7 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
if (gfp & GFP_ATOMIC)
return __iommu_alloc_atomic(dev, size, handle);
-pages = __iommu_alloc_buffer(dev, size, gfp);
+pages = __iommu_alloc_buffer(dev, size, gfp, attrs);
if (!pages)
return NULL;
...@@ -1271,7 +1296,7 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
err_mapping:
__iommu_remove_mapping(dev, *handle, size);
err_buffer:
-__iommu_free_buffer(dev, pages, size);
+__iommu_free_buffer(dev, pages, size, attrs);
return NULL;
}
...@@ -1327,7 +1352,7 @@ void arm_iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
}
__iommu_remove_mapping(dev, handle, size);
-__iommu_free_buffer(dev, pages, size);
+__iommu_free_buffer(dev, pages, size, attrs);
}
static int arm_iommu_get_sgtable(struct device *dev, struct sg_table *sgt,
...
...@@ -62,12 +62,6 @@
#define I810_PTE_LOCAL 0x00000002
#define I810_PTE_VALID 0x00000001
#define I830_PTE_SYSTEM_CACHED 0x00000006
/* GT PTE cache control fields */
#define GEN6_PTE_UNCACHED 0x00000002
#define HSW_PTE_UNCACHED 0x00000000
#define GEN6_PTE_LLC 0x00000004
#define GEN6_PTE_LLC_MLC 0x00000006
#define GEN6_PTE_GFDT 0x00000008
#define I810_SMRAM_MISCC 0x70
#define I810_GFX_MEM_WIN_SIZE 0x00010000
...@@ -97,7 +91,6 @@
#define G4x_GMCH_SIZE_VT_2M (G4x_GMCH_SIZE_2M | G4x_GMCH_SIZE_VT_EN)
#define GFX_FLSH_CNTL 0x2170 /* 915+ */
#define GFX_FLSH_CNTL_VLV 0x101008
#define I810_DRAM_CTL 0x3000
#define I810_DRAM_ROW_0 0x00000001
...@@ -148,29 +141,6 @@
#define INTEL_I7505_AGPCTRL 0x70
#define INTEL_I7505_MCHCFG 0x50
#define SNB_GMCH_CTRL 0x50
#define SNB_GMCH_GMS_STOLEN_MASK 0xF8
#define SNB_GMCH_GMS_STOLEN_32M (1 << 3)
#define SNB_GMCH_GMS_STOLEN_64M (2 << 3)
#define SNB_GMCH_GMS_STOLEN_96M (3 << 3)
#define SNB_GMCH_GMS_STOLEN_128M (4 << 3)
#define SNB_GMCH_GMS_STOLEN_160M (5 << 3)
#define SNB_GMCH_GMS_STOLEN_192M (6 << 3)
#define SNB_GMCH_GMS_STOLEN_224M (7 << 3)
#define SNB_GMCH_GMS_STOLEN_256M (8 << 3)
#define SNB_GMCH_GMS_STOLEN_288M (9 << 3)
#define SNB_GMCH_GMS_STOLEN_320M (0xa << 3)
#define SNB_GMCH_GMS_STOLEN_352M (0xb << 3)
#define SNB_GMCH_GMS_STOLEN_384M (0xc << 3)
#define SNB_GMCH_GMS_STOLEN_416M (0xd << 3)
#define SNB_GMCH_GMS_STOLEN_448M (0xe << 3)
#define SNB_GMCH_GMS_STOLEN_480M (0xf << 3)
#define SNB_GMCH_GMS_STOLEN_512M (0x10 << 3)
#define SNB_GTT_SIZE_0M (0 << 8)
#define SNB_GTT_SIZE_1M (1 << 8)
#define SNB_GTT_SIZE_2M (2 << 8)
#define SNB_GTT_SIZE_MASK (3 << 8)
/* pci devices ids */
#define PCI_DEVICE_ID_INTEL_E7221_HB 0x2588
#define PCI_DEVICE_ID_INTEL_E7221_IG 0x258a
...@@ -219,66 +189,5 @@
#define PCI_DEVICE_ID_INTEL_IRONLAKE_MA_HB 0x0062
#define PCI_DEVICE_ID_INTEL_IRONLAKE_MC2_HB 0x006a
#define PCI_DEVICE_ID_INTEL_IRONLAKE_M_IG 0x0046
#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB 0x0100 /* Desktop */
#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_GT1_IG 0x0102
#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_GT2_IG 0x0112
#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_GT2_PLUS_IG 0x0122
#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB 0x0104 /* Mobile */
#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_GT1_IG 0x0106
#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_GT2_IG 0x0116
#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_GT2_PLUS_IG 0x0126
#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_S_HB 0x0108 /* Server */
#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_S_IG 0x010A
#define PCI_DEVICE_ID_INTEL_IVYBRIDGE_HB 0x0150 /* Desktop */
#define PCI_DEVICE_ID_INTEL_IVYBRIDGE_GT1_IG 0x0152
#define PCI_DEVICE_ID_INTEL_IVYBRIDGE_GT2_IG 0x0162
#define PCI_DEVICE_ID_INTEL_IVYBRIDGE_M_HB 0x0154 /* Mobile */
#define PCI_DEVICE_ID_INTEL_IVYBRIDGE_M_GT1_IG 0x0156
#define PCI_DEVICE_ID_INTEL_IVYBRIDGE_M_GT2_IG 0x0166
#define PCI_DEVICE_ID_INTEL_IVYBRIDGE_S_HB 0x0158 /* Server */
#define PCI_DEVICE_ID_INTEL_IVYBRIDGE_S_GT1_IG 0x015A
#define PCI_DEVICE_ID_INTEL_IVYBRIDGE_S_GT2_IG 0x016A
#define PCI_DEVICE_ID_INTEL_VALLEYVIEW_HB 0x0F00 /* VLV1 */
#define PCI_DEVICE_ID_INTEL_VALLEYVIEW_IG 0x0F30
#define PCI_DEVICE_ID_INTEL_HASWELL_HB 0x0400 /* Desktop */
#define PCI_DEVICE_ID_INTEL_HASWELL_D_GT1_IG 0x0402
#define PCI_DEVICE_ID_INTEL_HASWELL_D_GT2_IG 0x0412
#define PCI_DEVICE_ID_INTEL_HASWELL_D_GT2_PLUS_IG 0x0422
#define PCI_DEVICE_ID_INTEL_HASWELL_M_HB 0x0404 /* Mobile */
#define PCI_DEVICE_ID_INTEL_HASWELL_M_GT1_IG 0x0406
#define PCI_DEVICE_ID_INTEL_HASWELL_M_GT2_IG 0x0416
#define PCI_DEVICE_ID_INTEL_HASWELL_M_GT2_PLUS_IG 0x0426
#define PCI_DEVICE_ID_INTEL_HASWELL_S_HB 0x0408 /* Server */
#define PCI_DEVICE_ID_INTEL_HASWELL_S_GT1_IG 0x040a
#define PCI_DEVICE_ID_INTEL_HASWELL_S_GT2_IG 0x041a
#define PCI_DEVICE_ID_INTEL_HASWELL_S_GT2_PLUS_IG 0x042a
#define PCI_DEVICE_ID_INTEL_HASWELL_E_HB 0x0c04
#define PCI_DEVICE_ID_INTEL_HASWELL_SDV_D_GT1_IG 0x0C02
#define PCI_DEVICE_ID_INTEL_HASWELL_SDV_D_GT2_IG 0x0C12
#define PCI_DEVICE_ID_INTEL_HASWELL_SDV_D_GT2_PLUS_IG 0x0C22
#define PCI_DEVICE_ID_INTEL_HASWELL_SDV_M_GT1_IG 0x0C06
#define PCI_DEVICE_ID_INTEL_HASWELL_SDV_M_GT2_IG 0x0C16
#define PCI_DEVICE_ID_INTEL_HASWELL_SDV_M_GT2_PLUS_IG 0x0C26
#define PCI_DEVICE_ID_INTEL_HASWELL_SDV_S_GT1_IG 0x0C0A
#define PCI_DEVICE_ID_INTEL_HASWELL_SDV_S_GT2_IG 0x0C1A
#define PCI_DEVICE_ID_INTEL_HASWELL_SDV_S_GT2_PLUS_IG 0x0C2A
#define PCI_DEVICE_ID_INTEL_HASWELL_ULT_D_GT1_IG 0x0A02
#define PCI_DEVICE_ID_INTEL_HASWELL_ULT_D_GT2_IG 0x0A12
#define PCI_DEVICE_ID_INTEL_HASWELL_ULT_D_GT2_PLUS_IG 0x0A22
#define PCI_DEVICE_ID_INTEL_HASWELL_ULT_M_GT1_IG 0x0A06
#define PCI_DEVICE_ID_INTEL_HASWELL_ULT_M_GT2_IG 0x0A16
#define PCI_DEVICE_ID_INTEL_HASWELL_ULT_M_GT2_PLUS_IG 0x0A26
#define PCI_DEVICE_ID_INTEL_HASWELL_ULT_S_GT1_IG 0x0A0A
#define PCI_DEVICE_ID_INTEL_HASWELL_ULT_S_GT2_IG 0x0A1A
#define PCI_DEVICE_ID_INTEL_HASWELL_ULT_S_GT2_PLUS_IG 0x0A2A
#define PCI_DEVICE_ID_INTEL_HASWELL_CRW_D_GT1_IG 0x0D12
#define PCI_DEVICE_ID_INTEL_HASWELL_CRW_D_GT2_IG 0x0D22
#define PCI_DEVICE_ID_INTEL_HASWELL_CRW_D_GT2_PLUS_IG 0x0D32
#define PCI_DEVICE_ID_INTEL_HASWELL_CRW_M_GT1_IG 0x0D16
#define PCI_DEVICE_ID_INTEL_HASWELL_CRW_M_GT2_IG 0x0D26
#define PCI_DEVICE_ID_INTEL_HASWELL_CRW_M_GT2_PLUS_IG 0x0D36
#define PCI_DEVICE_ID_INTEL_HASWELL_CRW_S_GT1_IG 0x0D1A
#define PCI_DEVICE_ID_INTEL_HASWELL_CRW_S_GT2_IG 0x0D2A
#define PCI_DEVICE_ID_INTEL_HASWELL_CRW_S_GT2_PLUS_IG 0x0D3A
#endif
...@@ -367,62 +367,6 @@ static unsigned int intel_gtt_stolen_size(void)
stolen_size = 0;
break;
}
} else if (INTEL_GTT_GEN == 6) {
/*
* SandyBridge has new memory control reg at 0x50.w
*/
u16 snb_gmch_ctl;
pci_read_config_word(intel_private.pcidev, SNB_GMCH_CTRL, &snb_gmch_ctl);
switch (snb_gmch_ctl & SNB_GMCH_GMS_STOLEN_MASK) {
case SNB_GMCH_GMS_STOLEN_32M:
stolen_size = MB(32);
break;
case SNB_GMCH_GMS_STOLEN_64M:
stolen_size = MB(64);
break;
case SNB_GMCH_GMS_STOLEN_96M:
stolen_size = MB(96);
break;
case SNB_GMCH_GMS_STOLEN_128M:
stolen_size = MB(128);
break;
case SNB_GMCH_GMS_STOLEN_160M:
stolen_size = MB(160);
break;
case SNB_GMCH_GMS_STOLEN_192M:
stolen_size = MB(192);
break;
case SNB_GMCH_GMS_STOLEN_224M:
stolen_size = MB(224);
break;
case SNB_GMCH_GMS_STOLEN_256M:
stolen_size = MB(256);
break;
case SNB_GMCH_GMS_STOLEN_288M:
stolen_size = MB(288);
break;
case SNB_GMCH_GMS_STOLEN_320M:
stolen_size = MB(320);
break;
case SNB_GMCH_GMS_STOLEN_352M:
stolen_size = MB(352);
break;
case SNB_GMCH_GMS_STOLEN_384M:
stolen_size = MB(384);
break;
case SNB_GMCH_GMS_STOLEN_416M:
stolen_size = MB(416);
break;
case SNB_GMCH_GMS_STOLEN_448M:
stolen_size = MB(448);
break;
case SNB_GMCH_GMS_STOLEN_480M:
stolen_size = MB(480);
break;
case SNB_GMCH_GMS_STOLEN_512M:
stolen_size = MB(512);
break;
}
} else {
switch (gmch_ctrl & I855_GMCH_GMS_MASK) {
case I855_GMCH_GMS_STOLEN_1M:
...@@ -556,29 +500,9 @@ static unsigned int i965_gtt_total_entries(void)
static unsigned int intel_gtt_total_entries(void)
{
int size;
if (IS_G33 || INTEL_GTT_GEN == 4 || INTEL_GTT_GEN == 5)
return i965_gtt_total_entries();
-else if (INTEL_GTT_GEN == 6) {
+else {
u16 snb_gmch_ctl;
pci_read_config_word(intel_private.pcidev, SNB_GMCH_CTRL, &snb_gmch_ctl);
switch (snb_gmch_ctl & SNB_GTT_SIZE_MASK) {
default:
case SNB_GTT_SIZE_0M:
printk(KERN_ERR "Bad GTT size mask: 0x%04x.\n", snb_gmch_ctl);
size = MB(0);
break;
case SNB_GTT_SIZE_1M:
size = MB(1);
break;
case SNB_GTT_SIZE_2M:
size = MB(2);
break;
}
return size/4;
} else {
/* On previous hardware, the GTT size was just what was
* required to map the aperture.
*/
...@@ -778,9 +702,6 @@ bool intel_enable_gtt(void)
{
u8 __iomem *reg;
if (INTEL_GTT_GEN >= 6)
return true;
if (INTEL_GTT_GEN == 2) {
u16 gmch_ctrl;
...@@ -1149,85 +1070,6 @@ static void i965_write_entry(dma_addr_t addr,
writel(addr | pte_flags, intel_private.gtt + entry);
}
static bool gen6_check_flags(unsigned int flags)
{
return true;
}
static void haswell_write_entry(dma_addr_t addr, unsigned int entry,
unsigned int flags)
{
unsigned int type_mask = flags & ~AGP_USER_CACHED_MEMORY_GFDT;
unsigned int gfdt = flags & AGP_USER_CACHED_MEMORY_GFDT;
u32 pte_flags;
if (type_mask == AGP_USER_MEMORY)
pte_flags = HSW_PTE_UNCACHED | I810_PTE_VALID;
else if (type_mask == AGP_USER_CACHED_MEMORY_LLC_MLC) {
pte_flags = GEN6_PTE_LLC_MLC | I810_PTE_VALID;
if (gfdt)
pte_flags |= GEN6_PTE_GFDT;
} else { /* set 'normal'/'cached' to LLC by default */
pte_flags = GEN6_PTE_LLC | I810_PTE_VALID;
if (gfdt)
pte_flags |= GEN6_PTE_GFDT;
}
/* gen6 has bit11-4 for physical addr bit39-32 */
addr |= (addr >> 28) & 0xff0;
writel(addr | pte_flags, intel_private.gtt + entry);
}
static void gen6_write_entry(dma_addr_t addr, unsigned int entry,
unsigned int flags)
{
unsigned int type_mask = flags & ~AGP_USER_CACHED_MEMORY_GFDT;
unsigned int gfdt = flags & AGP_USER_CACHED_MEMORY_GFDT;
u32 pte_flags;
if (type_mask == AGP_USER_MEMORY)
pte_flags = GEN6_PTE_UNCACHED | I810_PTE_VALID;
else if (type_mask == AGP_USER_CACHED_MEMORY_LLC_MLC) {
pte_flags = GEN6_PTE_LLC_MLC | I810_PTE_VALID;
if (gfdt)
pte_flags |= GEN6_PTE_GFDT;
} else { /* set 'normal'/'cached' to LLC by default */
pte_flags = GEN6_PTE_LLC | I810_PTE_VALID;
if (gfdt)
pte_flags |= GEN6_PTE_GFDT;
}
/* gen6 has bit11-4 for physical addr bit39-32 */
addr |= (addr >> 28) & 0xff0;
writel(addr | pte_flags, intel_private.gtt + entry);
}
static void valleyview_write_entry(dma_addr_t addr, unsigned int entry,
unsigned int flags)
{
unsigned int type_mask = flags & ~AGP_USER_CACHED_MEMORY_GFDT;
unsigned int gfdt = flags & AGP_USER_CACHED_MEMORY_GFDT;
u32 pte_flags;
if (type_mask == AGP_USER_MEMORY)
pte_flags = GEN6_PTE_UNCACHED | I810_PTE_VALID;
else {
pte_flags = GEN6_PTE_LLC | I810_PTE_VALID;
if (gfdt)
pte_flags |= GEN6_PTE_GFDT;
}
/* gen6 has bit11-4 for physical addr bit39-32 */
addr |= (addr >> 28) & 0xff0;
writel(addr | pte_flags, intel_private.gtt + entry);
writel(1, intel_private.registers + GFX_FLSH_CNTL_VLV);
}
static void gen6_cleanup(void)
{
}
/* Certain Gen5 chipsets require require idling the GPU before
* unmapping anything from the GTT when VT-d is enabled.
*/
...@@ -1249,41 +1091,29 @@ static inline int needs_idle_maps(void)
static int i9xx_setup(void)
{
-u32 reg_addr;
+u32 reg_addr, gtt_addr;
int size = KB(512);
pci_read_config_dword(intel_private.pcidev, I915_MMADDR, &reg_addr);
reg_addr &= 0xfff80000;
if (INTEL_GTT_GEN >= 7)
size = MB(2);
intel_private.registers = ioremap(reg_addr, size);
if (!intel_private.registers)
return -ENOMEM;
-if (INTEL_GTT_GEN == 3) {
-u32 gtt_addr;
+switch (INTEL_GTT_GEN) {
+case 3:
pci_read_config_dword(intel_private.pcidev,
I915_PTEADDR, &gtt_addr);
intel_private.gtt_bus_addr = gtt_addr;
-} else {
-u32 gtt_offset;
-switch (INTEL_GTT_GEN) {
-case 5:
-case 6:
-case 7:
-gtt_offset = MB(2);
-break;
-case 4:
-default:
-gtt_offset = KB(512);
-break;
-}
-intel_private.gtt_bus_addr = reg_addr + gtt_offset;
+break;
+case 5:
+intel_private.gtt_bus_addr = reg_addr + MB(2);
+break;
+default:
+intel_private.gtt_bus_addr = reg_addr + KB(512);
+break;
}
if (needs_idle_maps())
...@@ -1395,32 +1225,6 @@ static const struct intel_gtt_driver ironlake_gtt_driver = {
.check_flags = i830_check_flags,
.chipset_flush = i9xx_chipset_flush,
};
static const struct intel_gtt_driver sandybridge_gtt_driver = {
.gen = 6,
.setup = i9xx_setup,
.cleanup = gen6_cleanup,
.write_entry = gen6_write_entry,
.dma_mask_size = 40,
.check_flags = gen6_check_flags,
.chipset_flush = i9xx_chipset_flush,
};
static const struct intel_gtt_driver haswell_gtt_driver = {
.gen = 6,
.setup = i9xx_setup,
.cleanup = gen6_cleanup,
.write_entry = haswell_write_entry,
.dma_mask_size = 40,
.check_flags = gen6_check_flags,
.chipset_flush = i9xx_chipset_flush,
};
static const struct intel_gtt_driver valleyview_gtt_driver = {
.gen = 7,
.setup = i9xx_setup,
.cleanup = gen6_cleanup,
.write_entry = valleyview_write_entry,
.dma_mask_size = 40,
.check_flags = gen6_check_flags,
};
/* Table to describe Intel GMCH and AGP/PCIE GART drivers. At least one of
* driver and gmch_driver must be non-null, and find_gmch will determine
...@@ -1501,106 +1305,6 @@ static const struct intel_gtt_driver_description {
"HD Graphics", &ironlake_gtt_driver },
{ PCI_DEVICE_ID_INTEL_IRONLAKE_M_IG,
"HD Graphics", &ironlake_gtt_driver },
{ PCI_DEVICE_ID_INTEL_SANDYBRIDGE_GT1_IG,
"Sandybridge", &sandybridge_gtt_driver },
{ PCI_DEVICE_ID_INTEL_SANDYBRIDGE_GT2_IG,
"Sandybridge", &sandybridge_gtt_driver },
{ PCI_DEVICE_ID_INTEL_SANDYBRIDGE_GT2_PLUS_IG,
"Sandybridge", &sandybridge_gtt_driver },
{ PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_GT1_IG,
"Sandybridge", &sandybridge_gtt_driver },
{ PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_GT2_IG,
"Sandybridge", &sandybridge_gtt_driver },
{ PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_GT2_PLUS_IG,
"Sandybridge", &sandybridge_gtt_driver },
{ PCI_DEVICE_ID_INTEL_SANDYBRIDGE_S_IG,
"Sandybridge", &sandybridge_gtt_driver },
{ PCI_DEVICE_ID_INTEL_IVYBRIDGE_GT1_IG,
"Ivybridge", &sandybridge_gtt_driver },
{ PCI_DEVICE_ID_INTEL_IVYBRIDGE_GT2_IG,
"Ivybridge", &sandybridge_gtt_driver },
{ PCI_DEVICE_ID_INTEL_IVYBRIDGE_M_GT1_IG,
"Ivybridge", &sandybridge_gtt_driver },
{ PCI_DEVICE_ID_INTEL_IVYBRIDGE_M_GT2_IG,
"Ivybridge", &sandybridge_gtt_driver },
{ PCI_DEVICE_ID_INTEL_IVYBRIDGE_S_GT1_IG,
"Ivybridge", &sandybridge_gtt_driver },
{ PCI_DEVICE_ID_INTEL_IVYBRIDGE_S_GT2_IG,
"Ivybridge", &sandybridge_gtt_driver },
{ PCI_DEVICE_ID_INTEL_VALLEYVIEW_IG,
"ValleyView", &valleyview_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_D_GT1_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_D_GT2_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_D_GT2_PLUS_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_M_GT1_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_M_GT2_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_M_GT2_PLUS_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_S_GT1_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_S_GT2_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_S_GT2_PLUS_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_SDV_D_GT1_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_SDV_D_GT2_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_SDV_D_GT2_PLUS_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_SDV_M_GT1_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_SDV_M_GT2_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_SDV_M_GT2_PLUS_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_SDV_S_GT1_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_SDV_S_GT2_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_SDV_S_GT2_PLUS_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_ULT_D_GT1_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_ULT_D_GT2_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_ULT_D_GT2_PLUS_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_ULT_M_GT1_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_ULT_M_GT2_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_ULT_M_GT2_PLUS_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_ULT_S_GT1_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_ULT_S_GT2_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_ULT_S_GT2_PLUS_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_CRW_D_GT1_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_CRW_D_GT2_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_CRW_D_GT2_PLUS_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_CRW_M_GT1_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_CRW_M_GT2_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_CRW_M_GT2_PLUS_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_CRW_S_GT1_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_CRW_S_GT2_IG,
"Haswell", &haswell_gtt_driver },
{ PCI_DEVICE_ID_INTEL_HASWELL_CRW_S_GT2_PLUS_IG,
"Haswell", &haswell_gtt_driver },
{ 0, NULL, NULL }
};
...@@ -1686,7 +1390,7 @@ int intel_gmch_probe(struct pci_dev *bridge_pdev, struct pci_dev *gpu_pdev,
}
EXPORT_SYMBOL(intel_gmch_probe);
-const struct intel_gtt *intel_gtt_get(void)
+struct intel_gtt *intel_gtt_get(void)
{
return &intel_private.base;
}
...
...@@ -210,3 +210,5 @@ source "drivers/gpu/drm/mgag200/Kconfig"
source "drivers/gpu/drm/cirrus/Kconfig"
source "drivers/gpu/drm/shmobile/Kconfig"
source "drivers/gpu/drm/tegra/Kconfig"
...@@ -8,7 +8,7 @@ drm-y := drm_auth.o drm_buffer.o drm_bufs.o drm_cache.o \
drm_context.o drm_dma.o \
drm_drv.o drm_fops.o drm_gem.o drm_ioctl.o drm_irq.o \
drm_lock.o drm_memory.o drm_proc.o drm_stub.o drm_vm.o \
-drm_agpsupport.o drm_scatter.o ati_pcigart.o drm_pci.o \
+drm_agpsupport.o drm_scatter.o drm_pci.o \
drm_platform.o drm_sysfs.o drm_hashtab.o drm_mm.o \
drm_crtc.o drm_modes.o drm_edid.o \
drm_info.o drm_debugfs.o drm_encoder_slave.o \
...@@ -16,10 +16,11 @@ drm-y := drm_auth.o drm_buffer.o drm_bufs.o drm_cache.o \
drm-$(CONFIG_COMPAT) += drm_ioc32.o
drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o
drm-$(CONFIG_PCI) += ati_pcigart.o
drm-usb-y := drm_usb.o
-drm_kms_helper-y := drm_fb_helper.o drm_crtc_helper.o drm_dp_i2c_helper.o
+drm_kms_helper-y := drm_fb_helper.o drm_crtc_helper.o drm_dp_helper.o
drm_kms_helper-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o
drm_kms_helper-$(CONFIG_DRM_KMS_CMA_HELPER) += drm_fb_cma_helper.o
...@@ -48,4 +49,5 @@ obj-$(CONFIG_DRM_GMA500) += gma500/
obj-$(CONFIG_DRM_UDL) += udl/
obj-$(CONFIG_DRM_AST) += ast/
obj-$(CONFIG_DRM_SHMOBILE) +=shmobile/
obj-$(CONFIG_DRM_TEGRA) += tegra/
obj-y += i2c/
...@@ -186,11 +186,11 @@ static void ast_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *
static int ast_bo_move(struct ttm_buffer_object *bo,
bool evict, bool interruptible,
-bool no_wait_reserve, bool no_wait_gpu,
+bool no_wait_gpu,
struct ttm_mem_reg *new_mem)
{
int r;
-r = ttm_bo_move_memcpy(bo, evict, no_wait_reserve, no_wait_gpu, new_mem);
+r = ttm_bo_move_memcpy(bo, evict, no_wait_gpu, new_mem);
return r;
}
...@@ -356,7 +356,7 @@ int ast_bo_create(struct drm_device *dev, int size, int align,
ret = ttm_bo_init(&ast->ttm.bdev, &astbo->bo, size,
ttm_bo_type_device, &astbo->placement,
-align >> PAGE_SHIFT, 0, false, NULL, acc_size,
+align >> PAGE_SHIFT, false, NULL, acc_size,
NULL, ast_bo_ttm_destroy);
if (ret)
return ret;
...@@ -383,7 +383,7 @@ int ast_bo_pin(struct ast_bo *bo, u32 pl_flag, u64 *gpu_addr)
ast_ttm_placement(bo, pl_flag);
for (i = 0; i < bo->placement.num_placement; i++)
bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
-ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false, false);
+ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret)
return ret;
...@@ -406,7 +406,7 @@ int ast_bo_unpin(struct ast_bo *bo)
for (i = 0; i < bo->placement.num_placement ; i++)
bo->placements[i] &= ~TTM_PL_FLAG_NO_EVICT;
-ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false, false);
+ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret)
return ret;
...@@ -431,7 +431,7 @@ int ast_bo_push_sysram(struct ast_bo *bo)
for (i = 0; i < bo->placement.num_placement ; i++)
bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
-ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false, false);
+ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret) {
DRM_ERROR("pushing to VRAM failed\n");
return ret;
...
...@@ -35,12 +35,15 @@ static DEFINE_PCI_DEVICE_TABLE(pciidlist) = {
};
-static void cirrus_kick_out_firmware_fb(struct pci_dev *pdev)
+static int cirrus_kick_out_firmware_fb(struct pci_dev *pdev)
{
struct apertures_struct *ap;
bool primary = false;
ap = alloc_apertures(1);
if (!ap)
return -ENOMEM;
ap->ranges[0].base = pci_resource_start(pdev, 0);
ap->ranges[0].size = pci_resource_len(pdev, 0);
...@@ -49,12 +52,18 @@ static void cirrus_kick_out_firmware_fb(struct pci_dev *pdev)
#endif
remove_conflicting_framebuffers(ap, "cirrusdrmfb", primary);
kfree(ap);
return 0;
}
static int __devinit
cirrus_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
-cirrus_kick_out_firmware_fb(pdev);
+int ret;
ret = cirrus_kick_out_firmware_fb(pdev);
if (ret)
return ret;
return drm_get_pci_dev(pdev, ent, &driver);
}
...
...@@ -186,11 +186,11 @@ static void cirrus_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_re
static int cirrus_bo_move(struct ttm_buffer_object *bo,
bool evict, bool interruptible,
-bool no_wait_reserve, bool no_wait_gpu,
+bool no_wait_gpu,
struct ttm_mem_reg *new_mem)
{
int r;
-r = ttm_bo_move_memcpy(bo, evict, no_wait_reserve, no_wait_gpu, new_mem);
+r = ttm_bo_move_memcpy(bo, evict, no_wait_gpu, new_mem);
return r;
}
...@@ -361,7 +361,7 @@ int cirrus_bo_create(struct drm_device *dev, int size, int align,
ret = ttm_bo_init(&cirrus->ttm.bdev, &cirrusbo->bo, size,
ttm_bo_type_device, &cirrusbo->placement,
-align >> PAGE_SHIFT, 0, false, NULL, acc_size,
+align >> PAGE_SHIFT, false, NULL, acc_size,
NULL, cirrus_bo_ttm_destroy);
if (ret)
return ret;
...@@ -388,7 +388,7 @@ int cirrus_bo_pin(struct cirrus_bo *bo, u32 pl_flag, u64 *gpu_addr)
cirrus_ttm_placement(bo, pl_flag);
for (i = 0; i < bo->placement.num_placement; i++)
bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
-ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false, false);
+ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret)
return ret;
...@@ -411,7 +411,7 @@ int cirrus_bo_unpin(struct cirrus_bo *bo)
for (i = 0; i < bo->placement.num_placement ; i++)
bo->placements[i] &= ~TTM_PL_FLAG_NO_EVICT;
-ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false, false);
+ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret)
return ret;
...@@ -436,7 +436,7 @@ int cirrus_bo_push_sysram(struct cirrus_bo *bo)
for (i = 0; i < bo->placement.num_placement ; i++)
bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
-ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false, false);
+ret = ttm_bo_validate(&bo->bo, &bo->placement, false, false);
if (ret) {
DRM_ERROR("pushing to VRAM failed\n");
return ret;
...
...@@ -470,10 +470,8 @@ void drm_crtc_cleanup(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
-if (crtc->gamma_store) {
-kfree(crtc->gamma_store);
-crtc->gamma_store = NULL;
-}
+kfree(crtc->gamma_store);
+crtc->gamma_store = NULL;
drm_mode_object_put(dev, &crtc->base);
list_del(&crtc->head);
...@@ -555,16 +553,17 @@ int drm_connector_init(struct drm_device *dev,
INIT_LIST_HEAD(&connector->probed_modes);
INIT_LIST_HEAD(&connector->modes);
connector->edid_blob_ptr = NULL;
+connector->status = connector_status_unknown;
list_add_tail(&connector->head, &dev->mode_config.connector_list);
dev->mode_config.num_connector++;
if (connector_type != DRM_MODE_CONNECTOR_VIRTUAL)
-drm_connector_attach_property(connector,
+drm_object_attach_property(&connector->base,
dev->mode_config.edid_property,
0);
-drm_connector_attach_property(connector,
+drm_object_attach_property(&connector->base,
dev->mode_config.dpms_property, 0);
out:
...@@ -2280,13 +2279,21 @@ static int framebuffer_check(const struct drm_mode_fb_cmd2 *r)
for (i = 0; i < num_planes; i++) {
unsigned int width = r->width / (i != 0 ? hsub : 1);
+unsigned int height = r->height / (i != 0 ? vsub : 1);
+unsigned int cpp = drm_format_plane_cpp(r->pixel_format, i);
if (!r->handles[i]) {
DRM_DEBUG_KMS("no buffer object handle for plane %d\n", i);
return -EINVAL;
}
-if (r->pitches[i] < drm_format_plane_cpp(r->pixel_format, i) * width) {
+if ((uint64_t) width * cpp > UINT_MAX)
+return -ERANGE;
+if ((uint64_t) height * r->pitches[i] + r->offsets[i] > UINT_MAX)
+return -ERANGE;
+if (r->pitches[i] < width * cpp) {
DRM_DEBUG_KMS("bad pitch %u for plane %d\n", r->pitches[i], i);
return -EINVAL;
}
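
Why the (uint64_t) casts above matter is easiest to see with concrete
numbers; a hypothetical sketch (values invented for illustration, not from
the commit):

	unsigned int width = 0x40000000, cpp = 4;	/* attacker-chosen */

	/* The 32-bit product wraps to 0 and would pass a naive pitch
	 * check, while the widened product (0x100000000) exceeds
	 * UINT_MAX and is correctly rejected. */
	if ((uint64_t)width * cpp > UINT_MAX)
		return -ERANGE;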
...@@ -2323,6 +2330,11 @@ int drm_mode_addfb2(struct drm_device *dev,
if (!drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
+if (r->flags & ~DRM_MODE_FB_INTERLACED) {
+DRM_DEBUG_KMS("bad framebuffer flags 0x%08x\n", r->flags);
+return -EINVAL;
+}
if ((config->min_width > r->width) || (r->width > config->max_width)) {
DRM_DEBUG_KMS("bad framebuffer width %d, should be >= %d && <= %d\n",
r->width, config->min_width, config->max_width);
...@@ -2916,27 +2928,6 @@ void drm_property_destroy(struct drm_device *dev, struct drm_property *property)
}
EXPORT_SYMBOL(drm_property_destroy);
void drm_connector_attach_property(struct drm_connector *connector,
struct drm_property *property, uint64_t init_val)
{
drm_object_attach_property(&connector->base, property, init_val);
}
EXPORT_SYMBOL(drm_connector_attach_property);
int drm_connector_property_set_value(struct drm_connector *connector,
struct drm_property *property, uint64_t value)
{
return drm_object_property_set_value(&connector->base, property, value);
}
EXPORT_SYMBOL(drm_connector_property_set_value);
int drm_connector_property_get_value(struct drm_connector *connector,
struct drm_property *property, uint64_t *val)
{
return drm_object_property_get_value(&connector->base, property, val);
}
EXPORT_SYMBOL(drm_connector_property_get_value);
void drm_object_attach_property(struct drm_mode_object *obj,
struct drm_property *property,
uint64_t init_val)
...@@ -3173,15 +3164,17 @@ int drm_mode_connector_update_edid_property(struct drm_connector *connector,
/* Delete edid, when there is none. */
if (!edid) {
connector->edid_blob_ptr = NULL;
-ret = drm_connector_property_set_value(connector, dev->mode_config.edid_property, 0);
+ret = drm_object_property_set_value(&connector->base, dev->mode_config.edid_property, 0);
return ret;
}
size = EDID_LENGTH * (1 + edid->extensions);
connector->edid_blob_ptr = drm_property_create_blob(connector->dev,
size, edid);
+if (!connector->edid_blob_ptr)
+return -EINVAL;
-ret = drm_connector_property_set_value(connector,
+ret = drm_object_property_set_value(&connector->base,
dev->mode_config.edid_property,
connector->edid_blob_ptr->base.id);
...@@ -3204,6 +3197,9 @@ static bool drm_property_change_is_valid(struct drm_property *property,
for (i = 0; i < property->num_values; i++)
valid_mask |= (1ULL << property->values[i]);
return !(value & ~valid_mask);
+} else if (property->flags & DRM_MODE_PROP_BLOB) {
+/* Only the driver knows */
+return true;
} else {
int i;
for (i = 0; i < property->num_values; i++)
...@@ -3245,7 +3241,7 @@ static int drm_mode_connector_set_obj_prop(struct drm_mode_object *obj,
/* store the property value if successful */
if (!ret)
-drm_connector_property_set_value(connector, property, value);
+drm_object_property_set_value(&connector->base, property, value);
return ret;
}
...@@ -3656,9 +3652,12 @@ void drm_mode_config_reset(struct drm_device *dev)
if (encoder->funcs->reset)
encoder->funcs->reset(encoder);
-list_for_each_entry(connector, &dev->mode_config.connector_list, head)
+list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+connector->status = connector_status_unknown;
if (connector->funcs->reset)
connector->funcs->reset(connector);
+}
}
EXPORT_SYMBOL(drm_mode_config_reset);
...
...@@ -39,6 +39,35 @@
#include <drm/drm_fb_helper.h>
#include <drm/drm_edid.h>
/**
* drm_helper_move_panel_connectors_to_head() - move panels to the front in the
* connector list
* @dev: drm device to operate on
*
* Some userspace presumes that the first connected connector is the main
* display, where it's supposed to display e.g. the login screen. For
* laptops, this should be the main panel. Use this function to sort all
* (eDP/LVDS) panels to the front of the connector list, instead of
* painstakingly trying to initialize them in the right order.
*/
void drm_helper_move_panel_connectors_to_head(struct drm_device *dev)
{
struct drm_connector *connector, *tmp;
struct list_head panel_list;
INIT_LIST_HEAD(&panel_list);
list_for_each_entry_safe(connector, tmp,
&dev->mode_config.connector_list, head) {
if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS ||
connector->connector_type == DRM_MODE_CONNECTOR_eDP)
list_move_tail(&connector->head, &panel_list);
}
list_splice(&panel_list, &dev->mode_config.connector_list);
}
EXPORT_SYMBOL(drm_helper_move_panel_connectors_to_head);
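
A hypothetical call site for the helper above (sketch only; the registration
helper below is invented for illustration) would run once all connectors
exist:

	static int my_driver_load(struct drm_device *dev, unsigned long flags)
	{
		my_register_all_connectors(dev);	/* hypothetical */
		/* sort eDP/LVDS panels first so userspace treats them as primary */
		drm_helper_move_panel_connectors_to_head(dev);
		return 0;
	}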
static bool drm_kms_helper_poll = true;
module_param_named(poll, drm_kms_helper_poll, bool, 0600);
...@@ -64,22 +93,21 @@ static void drm_mode_validate_flag(struct drm_connector *connector,
/**
* drm_helper_probe_single_connector_modes - get complete set of display modes
-* @dev: DRM device
+* @connector: connector to probe
* @maxX: max width for modes
* @maxY: max height for modes
*
* LOCKING:
* Caller must hold mode config lock.
*
-* Based on @dev's mode_config layout, scan all the connectors and try to detect
-* modes on them. Modes will first be added to the connector's probed_modes
-* list, then culled (based on validity and the @maxX, @maxY parameters) and
-* put into the normal modes list.
+* Based on the helper callbacks implemented by @connector try to detect all
+* valid modes. Modes will first be added to the connector's probed_modes list,
+* then culled (based on validity and the @maxX, @maxY parameters) and put into
+* the normal modes list.
*
-* Intended to be used either at bootup time or when major configuration
-* changes have occurred.
-*
-* FIXME: take into account monitor limits
+* Intended to be used as a generic implementation of the ->probe() @connector
+* callback for drivers that use the crtc helpers for output mode filtering and
+* detection.
*
* RETURNS:
* Number of modes found on @connector.
...@@ -109,9 +137,14 @@ int drm_helper_probe_single_connector_modes(struct drm_connector *connector,
connector->funcs->force(connector);
} else {
connector->status = connector->funcs->detect(connector, true);
-drm_kms_helper_poll_enable(dev);
}
/* Re-enable polling in case the global poll config changed. */
if (drm_kms_helper_poll != dev->mode_config.poll_running)
drm_kms_helper_poll_enable(dev);
dev->mode_config.poll_running = drm_kms_helper_poll;
if (connector->status == connector_status_disconnected) { if (connector->status == connector_status_disconnected) {
DRM_DEBUG_KMS("[CONNECTOR:%d:%s] disconnected\n", DRM_DEBUG_KMS("[CONNECTOR:%d:%s] disconnected\n",
connector->base.id, drm_get_connector_name(connector)); connector->base.id, drm_get_connector_name(connector));
@@ -325,17 +358,24 @@ drm_crtc_prepare_encoders(struct drm_device *dev)
 }
 
 /**
- * drm_crtc_set_mode - set a mode
+ * drm_crtc_helper_set_mode - internal helper to set a mode
  * @crtc: CRTC to program
  * @mode: mode to use
- * @x: width of mode
- * @y: height of mode
+ * @x: horizontal offset into the surface
+ * @y: vertical offset into the surface
+ * @old_fb: old framebuffer, for cleanup
  *
  * LOCKING:
  * Caller must hold mode config lock.
  *
  * Try to set @mode on @crtc. Give @crtc and its associated connectors a chance
- * to fixup or reject the mode prior to trying to set it.
+ * to fixup or reject the mode prior to trying to set it. This is an internal
+ * helper that drivers could e.g. use to update properties that require the
+ * entire output pipe to be disabled and re-enabled in a new configuration. For
+ * example for changing whether audio is enabled on an HDMI link or for changing
+ * panel fitter or dither attributes. It is also called by the
+ * drm_crtc_helper_set_config() helper function to drive the mode setting
+ * sequence.
  *
  * RETURNS:
  * True if the mode was set successfully, or false otherwise.
@@ -491,20 +531,19 @@ drm_crtc_helper_disable(struct drm_crtc *crtc)
 
 /**
  * drm_crtc_helper_set_config - set a new config from userspace
- * @crtc: CRTC to setup
- * @crtc_info: user provided configuration
- * @new_mode: new mode to set
- * @connector_set: set of connectors for the new config
- * @fb: new framebuffer
+ * @set: mode set configuration
  *
  * LOCKING:
  * Caller must hold mode config lock.
  *
- * Setup a new configuration, provided by the user in @crtc_info, and enable
- * it.
+ * Setup a new configuration, provided by the upper layers (either an ioctl call
+ * from userspace or internally e.g. from the fbdev support code) in @set, and
+ * enable it. This is the main helper function for drivers that implement
+ * kernel mode setting with the crtc helper functions and the assorted
+ * ->prepare(), ->modeset() and ->commit() helper callbacks.
  *
  * RETURNS:
- * Zero. (FIXME)
+ * Returns 0 on success, -ERRNO on failure.
  */
 int drm_crtc_helper_set_config(struct drm_mode_set *set)
 {
@@ -800,12 +839,14 @@ static int drm_helper_choose_crtc_dpms(struct drm_crtc *crtc)
 }
 
 /**
- * drm_helper_connector_dpms
- * @connector affected connector
- * @mode DPMS mode
+ * drm_helper_connector_dpms() - connector dpms helper implementation
+ * @connector: affected connector
+ * @mode: DPMS mode
  *
- * Calls the low-level connector DPMS function, then
- * calls appropriate encoder and crtc DPMS functions as well
+ * This is the main helper function provided by the crtc helper framework for
+ * implementing the DPMS connector attribute. It computes the new desired DPMS
+ * state for all encoders and crtcs in the output mesh and calls the ->dpms()
+ * callback provided by the driver appropriately.
  */
 void drm_helper_connector_dpms(struct drm_connector *connector, int mode)
 {
@@ -918,6 +959,15 @@ int drm_helper_resume_force_mode(struct drm_device *dev)
 }
 EXPORT_SYMBOL(drm_helper_resume_force_mode);
 
+void drm_kms_helper_hotplug_event(struct drm_device *dev)
+{
+	/* send a uevent + call fbdev */
+	drm_sysfs_hotplug_event(dev);
+	if (dev->mode_config.funcs->output_poll_changed)
+		dev->mode_config.funcs->output_poll_changed(dev);
+}
+EXPORT_SYMBOL(drm_kms_helper_hotplug_event);
+
 #define DRM_OUTPUT_POLL_PERIOD (10*HZ)
 static void output_poll_execute(struct work_struct *work)
 {
@@ -933,20 +983,22 @@ static void output_poll_execute(struct work_struct *work)
 
 	mutex_lock(&dev->mode_config.mutex);
 	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-		/* if this is HPD or polled don't check it -
-		   TV out for instance */
-		if (!connector->polled)
+		/* Ignore forced connectors. */
+		if (connector->force)
 			continue;
-		else if (connector->polled & (DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT))
-			repoll = true;
+
+		/* Ignore HPD capable connectors and connectors where we don't
+		 * want any hotplug detection at all for polling. */
+		if (!connector->polled || connector->polled == DRM_CONNECTOR_POLL_HPD)
+			continue;
+
+		repoll = true;
 
 		old_status = connector->status;
 		/* if we are connected and don't want to poll for disconnect
 		   skip it */
 		if (old_status == connector_status_connected &&
-		    !(connector->polled & DRM_CONNECTOR_POLL_DISCONNECT) &&
-		    !(connector->polled & DRM_CONNECTOR_POLL_HPD))
+		    !(connector->polled & DRM_CONNECTOR_POLL_DISCONNECT))
 			continue;
 
 		connector->status = connector->funcs->detect(connector, false);
@@ -960,12 +1012,8 @@ static void output_poll_execute(struct work_struct *work)
 
 	mutex_unlock(&dev->mode_config.mutex);
 
-	if (changed) {
-		/* send a uevent + call fbdev */
-		drm_sysfs_hotplug_event(dev);
-		if (dev->mode_config.funcs->output_poll_changed)
-			dev->mode_config.funcs->output_poll_changed(dev);
-	}
+	if (changed)
+		drm_kms_helper_hotplug_event(dev);
 
 	if (repoll)
 		schedule_delayed_work(delayed_work, DRM_OUTPUT_POLL_PERIOD);
@@ -988,7 +1036,8 @@ void drm_kms_helper_poll_enable(struct drm_device *dev)
 		return;
 
 	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-		if (connector->polled)
+		if (connector->polled & (DRM_CONNECTOR_POLL_CONNECT |
+					 DRM_CONNECTOR_POLL_DISCONNECT))
 			poll = true;
 	}
@@ -1014,12 +1063,34 @@ EXPORT_SYMBOL(drm_kms_helper_poll_fini);
 
 void drm_helper_hpd_irq_event(struct drm_device *dev)
 {
+	struct drm_connector *connector;
+	enum drm_connector_status old_status;
+	bool changed = false;
+
 	if (!dev->mode_config.poll_enabled)
 		return;
 
-	/* kill timer and schedule immediate execution, this doesn't block */
-	cancel_delayed_work(&dev->mode_config.output_poll_work);
-	if (drm_kms_helper_poll)
-		schedule_delayed_work(&dev->mode_config.output_poll_work, 0);
+	mutex_lock(&dev->mode_config.mutex);
+	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+
+		/* Only handle HPD capable connectors. */
+		if (!(connector->polled & DRM_CONNECTOR_POLL_HPD))
+			continue;
+
+		old_status = connector->status;
+
+		connector->status = connector->funcs->detect(connector, false);
+		DRM_DEBUG_KMS("[CONNECTOR:%d:%s] status updated from %d to %d\n",
+			      connector->base.id,
+			      drm_get_connector_name(connector),
+			      old_status, connector->status);
+		if (old_status != connector->status)
+			changed = true;
+	}
+
+	mutex_unlock(&dev->mode_config.mutex);
+
+	if (changed)
+		drm_kms_helper_hotplug_event(dev);
 }
 EXPORT_SYMBOL(drm_helper_hpd_irq_event);
...
@@ -30,6 +30,15 @@
 #include <drm/drm_dp_helper.h>
 #include <drm/drmP.h>
 
+/**
+ * DOC: dp helpers
+ *
+ * These functions contain some common logic and helpers at various abstraction
+ * levels to deal with Display Port sink devices and related things like DP aux
+ * channel transfers, EDID reading over DP aux channels, decoding certain DPCD
+ * blocks, ...
+ */
+
 /* Run a single AUX_CH I2C transaction, writing/reading data as necessary */
 static int
 i2c_algo_dp_aux_transaction(struct i2c_adapter *adapter, int mode,
@@ -37,7 +46,7 @@ i2c_algo_dp_aux_transaction(struct i2c_adapter *adapter, int mode,
 {
 	struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data;
 	int ret;
-	
+
 	ret = (*algo_data->aux_ch)(adapter, mode,
 				   write_byte, read_byte);
 	return ret;
@@ -182,7 +191,6 @@ i2c_dp_aux_reset_bus(struct i2c_adapter *adapter)
 {
 	(void) i2c_algo_dp_aux_address(adapter, 0, false);
 	(void) i2c_algo_dp_aux_stop(adapter, false);
-
 }
 
 static int
@@ -194,11 +202,23 @@ i2c_dp_aux_prepare_bus(struct i2c_adapter *adapter)
 	return 0;
 }
 
+/**
+ * i2c_dp_aux_add_bus() - register an i2c adapter using the aux ch helper
+ * @adapter: i2c adapter to register
+ *
+ * This registers an i2c adapter that uses the dp aux channel as its underlying
+ * transport. The driver needs to fill out the &i2c_algo_dp_aux_data structure
+ * and store it in the algo_data member of the @adapter argument. This will be
+ * used by the i2c over dp aux algorithm to drive the hardware.
+ *
+ * RETURNS:
+ * 0 on success, -ERRNO on failure.
+ */
 int
 i2c_dp_aux_add_bus(struct i2c_adapter *adapter)
 {
 	int error;
 
 	error = i2c_dp_aux_prepare_bus(adapter);
 	if (error)
 		return error;
@@ -206,3 +226,123 @@ i2c_dp_aux_add_bus(struct i2c_adapter *adapter)
 		return error;
 }
 EXPORT_SYMBOL(i2c_dp_aux_add_bus);
+
+/* Helpers for DP link training */
+static u8 dp_link_status(u8 link_status[DP_LINK_STATUS_SIZE], int r)
+{
+	return link_status[r - DP_LANE0_1_STATUS];
+}
+
+static u8 dp_get_lane_status(u8 link_status[DP_LINK_STATUS_SIZE],
+			     int lane)
+{
+	int i = DP_LANE0_1_STATUS + (lane >> 1);
+	int s = (lane & 1) * 4;
+	u8 l = dp_link_status(link_status, i);
+
+	return (l >> s) & 0xf;
+}
+
+bool drm_dp_channel_eq_ok(u8 link_status[DP_LINK_STATUS_SIZE],
+			  int lane_count)
+{
+	u8 lane_align;
+	u8 lane_status;
+	int lane;
+
+	lane_align = dp_link_status(link_status,
+				    DP_LANE_ALIGN_STATUS_UPDATED);
+	if ((lane_align & DP_INTERLANE_ALIGN_DONE) == 0)
+		return false;
+	for (lane = 0; lane < lane_count; lane++) {
+		lane_status = dp_get_lane_status(link_status, lane);
+		if ((lane_status & DP_CHANNEL_EQ_BITS) != DP_CHANNEL_EQ_BITS)
+			return false;
+	}
+	return true;
+}
+EXPORT_SYMBOL(drm_dp_channel_eq_ok);
+
+bool drm_dp_clock_recovery_ok(u8 link_status[DP_LINK_STATUS_SIZE],
+			      int lane_count)
+{
+	int lane;
+	u8 lane_status;
+
+	for (lane = 0; lane < lane_count; lane++) {
+		lane_status = dp_get_lane_status(link_status, lane);
+		if ((lane_status & DP_LANE_CR_DONE) == 0)
+			return false;
+	}
+	return true;
+}
+EXPORT_SYMBOL(drm_dp_clock_recovery_ok);
+
+u8 drm_dp_get_adjust_request_voltage(u8 link_status[DP_LINK_STATUS_SIZE],
+				     int lane)
+{
+	int i = DP_ADJUST_REQUEST_LANE0_1 + (lane >> 1);
+	int s = ((lane & 1) ?
+		 DP_ADJUST_VOLTAGE_SWING_LANE1_SHIFT :
+		 DP_ADJUST_VOLTAGE_SWING_LANE0_SHIFT);
+	u8 l = dp_link_status(link_status, i);
+
+	return ((l >> s) & 0x3) << DP_TRAIN_VOLTAGE_SWING_SHIFT;
+}
+EXPORT_SYMBOL(drm_dp_get_adjust_request_voltage);
+
+u8 drm_dp_get_adjust_request_pre_emphasis(u8 link_status[DP_LINK_STATUS_SIZE],
+					  int lane)
+{
+	int i = DP_ADJUST_REQUEST_LANE0_1 + (lane >> 1);
+	int s = ((lane & 1) ?
+		 DP_ADJUST_PRE_EMPHASIS_LANE1_SHIFT :
+		 DP_ADJUST_PRE_EMPHASIS_LANE0_SHIFT);
+	u8 l = dp_link_status(link_status, i);
+
+	return ((l >> s) & 0x3) << DP_TRAIN_PRE_EMPHASIS_SHIFT;
+}
+EXPORT_SYMBOL(drm_dp_get_adjust_request_pre_emphasis);
+
+void drm_dp_link_train_clock_recovery_delay(u8 dpcd[DP_RECEIVER_CAP_SIZE]) {
+	if (dpcd[DP_TRAINING_AUX_RD_INTERVAL] == 0)
+		udelay(100);
+	else
+		mdelay(dpcd[DP_TRAINING_AUX_RD_INTERVAL] * 4);
+}
+EXPORT_SYMBOL(drm_dp_link_train_clock_recovery_delay);
+
+void drm_dp_link_train_channel_eq_delay(u8 dpcd[DP_RECEIVER_CAP_SIZE]) {
+	if (dpcd[DP_TRAINING_AUX_RD_INTERVAL] == 0)
+		udelay(400);
+	else
+		mdelay(dpcd[DP_TRAINING_AUX_RD_INTERVAL] * 4);
+}
+EXPORT_SYMBOL(drm_dp_link_train_channel_eq_delay);
+
+u8 drm_dp_link_rate_to_bw_code(int link_rate)
+{
+	switch (link_rate) {
+	case 162000:
+	default:
+		return DP_LINK_BW_1_62;
+	case 270000:
+		return DP_LINK_BW_2_7;
+	case 540000:
+		return DP_LINK_BW_5_4;
+	}
+}
+EXPORT_SYMBOL(drm_dp_link_rate_to_bw_code);
+
+int drm_dp_bw_code_to_link_rate(u8 link_bw)
+{
+	switch (link_bw) {
+	case DP_LINK_BW_1_62:
+	default:
+		return 162000;
+	case DP_LINK_BW_2_7:
+		return 270000;
+	case DP_LINK_BW_5_4:
+		return 540000;
+	}
+}
+EXPORT_SYMBOL(drm_dp_bw_code_to_link_rate);
@@ -307,12 +307,9 @@ drm_do_probe_ddc_edid(struct i2c_adapter *adapter, unsigned char *buf,
 
 static bool drm_edid_is_zero(u8 *in_edid, int length)
 {
-	int i;
-	u32 *raw_edid = (u32 *)in_edid;
+	if (memchr_inv(in_edid, 0, length))
+		return false;
 
-	for (i = 0; i < length / 4; i++)
-		if (*(raw_edid + i) != 0)
-			return false;
 	return true;
 }
 
@@ -1516,6 +1513,26 @@ u8 *drm_find_cea_extension(struct edid *edid)
 }
 EXPORT_SYMBOL(drm_find_cea_extension);
 
+/*
+ * Looks for a CEA mode matching given drm_display_mode.
+ * Returns its CEA Video ID code, or 0 if not found.
+ */
+u8 drm_match_cea_mode(struct drm_display_mode *to_match)
+{
+	struct drm_display_mode *cea_mode;
+	u8 mode;
+
+	for (mode = 0; mode < drm_num_cea_modes; mode++) {
+		cea_mode = (struct drm_display_mode *)&edid_cea_modes[mode];
+
+		if (drm_mode_equal(to_match, cea_mode))
+			return mode + 1;
+	}
+	return 0;
+}
+EXPORT_SYMBOL(drm_match_cea_mode);
+
 static int
 do_cea_modes (struct drm_connector *connector, u8 *db, u8 len)
 {
@@ -1622,7 +1639,7 @@ parse_hdmi_vsdb(struct drm_connector *connector, const u8 *db)
 	if (len >= 12)
 		connector->audio_latency[1] = db[12];
 
-	DRM_LOG_KMS("HDMI: DVI dual %d, "
+	DRM_DEBUG_KMS("HDMI: DVI dual %d, "
 		    "max TMDS clock %d, "
 		    "latency present %d %d, "
 		    "video latency %d %d, "
@@ -2062,3 +2079,22 @@ int drm_add_modes_noedid(struct drm_connector *connector,
 	return num_modes;
 }
 EXPORT_SYMBOL(drm_add_modes_noedid);
+
+/**
+ * drm_mode_cea_vic - return the CEA-861 VIC of a given mode
+ * @mode: mode
+ *
+ * RETURNS:
+ * The VIC number, 0 in case it's not a CEA-861 mode.
+ */
+uint8_t drm_mode_cea_vic(const struct drm_display_mode *mode)
+{
+	uint8_t i;
+
+	for (i = 0; i < drm_num_cea_modes; i++)
+		if (drm_mode_equal(mode, &edid_cea_modes[i]))
+			return i + 1;
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_mode_cea_vic);
...
@@ -27,6 +27,8 @@
 *  Dave Airlie <airlied@linux.ie>
 *  Jesse Barnes <jesse.barnes@intel.com>
 */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
 #include <linux/kernel.h>
 #include <linux/sysrq.h>
 #include <linux/slab.h>
@@ -43,6 +45,15 @@ MODULE_LICENSE("GPL and additional rights");
 
 static LIST_HEAD(kernel_fb_helper_list);
 
+/**
+ * DOC: fbdev helpers
+ *
+ * The fb helper functions are useful to provide an fbdev on top of a drm kernel
+ * mode setting driver. They can be used mostly independently from the crtc
+ * helper functions used by many drivers to implement the kernel mode setting
+ * interfaces.
+ */
+
 /* simple single crtc case helper function */
 int drm_fb_helper_single_add_all_connectors(struct drm_fb_helper *fb_helper)
 {
@@ -95,10 +106,16 @@ static int drm_fb_helper_parse_command_line(struct drm_fb_helper *fb_helper)
 		if (mode->force) {
 			const char *s;
+
 			switch (mode->force) {
-			case DRM_FORCE_OFF: s = "OFF"; break;
-			case DRM_FORCE_ON_DIGITAL: s = "ON - dig"; break;
+			case DRM_FORCE_OFF:
+				s = "OFF";
+				break;
+			case DRM_FORCE_ON_DIGITAL:
+				s = "ON - dig";
+				break;
 			default:
-			case DRM_FORCE_ON: s = "ON"; break;
+			case DRM_FORCE_ON:
+				s = "ON";
+				break;
 			}
 
 			DRM_INFO("forcing %s connector %s\n",
@@ -265,7 +282,7 @@ int drm_fb_helper_panic(struct notifier_block *n, unsigned long ununsed,
 	if (panic_timeout < 0)
 		return 0;
 
-	printk(KERN_ERR "panic occurred, switching back to text console\n");
+	pr_err("panic occurred, switching back to text console\n");
 	return drm_fb_helper_force_kernel_mode();
 }
 EXPORT_SYMBOL(drm_fb_helper_panic);
@@ -331,7 +348,7 @@ static void drm_fb_helper_dpms(struct fb_info *info, int dpms_mode)
 		for (j = 0; j < fb_helper->connector_count; j++) {
 			connector = fb_helper->connector_info[j]->connector;
 			connector->funcs->dpms(connector, dpms_mode);
-			drm_connector_property_set_value(connector,
+			drm_object_property_set_value(&connector->base,
 				dev->mode_config.dpms_property, dpms_mode);
 		}
 	}
@@ -433,7 +450,7 @@ void drm_fb_helper_fini(struct drm_fb_helper *fb_helper)
 	if (!list_empty(&fb_helper->kernel_fb_list)) {
 		list_del(&fb_helper->kernel_fb_list);
 		if (list_empty(&kernel_fb_helper_list)) {
-			printk(KERN_INFO "drm: unregistered panic notifier\n");
+			pr_info("drm: unregistered panic notifier\n");
 			atomic_notifier_chain_unregister(&panic_notifier_list,
 							 &paniced);
 			unregister_sysrq_key('v', &sysrq_drm_fb_helper_restore_op);
@@ -724,9 +741,9 @@ int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
 
 	/* if driver picks 8 or 16 by default use that
 	   for both depth/bpp */
-	if (preferred_bpp != sizes.surface_bpp) {
+	if (preferred_bpp != sizes.surface_bpp)
 		sizes.surface_depth = sizes.surface_bpp = preferred_bpp;
-	}
+
 	/* first up get a count of crtcs now in use and new min/maxes width/heights */
 	for (i = 0; i < fb_helper->connector_count; i++) {
 		struct drm_fb_helper_connector *fb_helper_conn = fb_helper->connector_info[i];
@@ -794,18 +811,16 @@ int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
 	info = fb_helper->fbdev;
 
 	/* set the fb pointer */
-	for (i = 0; i < fb_helper->crtc_count; i++) {
+	for (i = 0; i < fb_helper->crtc_count; i++)
 		fb_helper->crtc_info[i].mode_set.fb = fb_helper->fb;
-	}
 
 	if (new_fb) {
 		info->var.pixclock = 0;
-		if (register_framebuffer(info) < 0) {
+		if (register_framebuffer(info) < 0)
 			return -EINVAL;
-		}
 
-		printk(KERN_INFO "fb%d: %s frame buffer device\n", info->node,
-		       info->fix.id);
+		dev_info(fb_helper->dev->dev, "fb%d: %s frame buffer device\n",
+			 info->node, info->fix.id);
 	} else {
 		drm_fb_helper_set_par(info);
@@ -814,7 +829,7 @@ int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
 	/* Switch back to kernel console on panic */
 	/* multi card linked list maybe */
 	if (list_empty(&kernel_fb_helper_list)) {
-		printk(KERN_INFO "drm: registered panic notifier\n");
+		dev_info(fb_helper->dev->dev, "registered panic notifier\n");
 		atomic_notifier_chain_register(&panic_notifier_list,
 					       &paniced);
 		register_sysrq_key('v', &sysrq_drm_fb_helper_restore_op);
@@ -1002,11 +1017,11 @@ static bool drm_connector_enabled(struct drm_connector *connector, bool strict)
 {
 	bool enable;
 
-	if (strict) {
+	if (strict)
 		enable = connector->status == connector_status_connected;
-	} else {
+	else
 		enable = connector->status != connector_status_disconnected;
-	}
 
 	return enable;
 }
@@ -1191,9 +1206,8 @@ static int drm_pick_crtcs(struct drm_fb_helper *fb_helper,
 		for (c = 0; c < fb_helper->crtc_count; c++) {
 			crtc = &fb_helper->crtc_info[c];
 
-			if ((encoder->possible_crtcs & (1 << c)) == 0) {
+			if ((encoder->possible_crtcs & (1 << c)) == 0)
 				continue;
-			}
 
 			for (o = 0; o < n; o++)
 				if (best_crtcs[o] == crtc)
@@ -1246,6 +1260,11 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper)
 			       sizeof(struct drm_display_mode *), GFP_KERNEL);
 	enabled = kcalloc(dev->mode_config.num_connector,
 			  sizeof(bool), GFP_KERNEL);
+	if (!crtcs || !modes || !enabled) {
+		DRM_ERROR("Memory allocation failed\n");
+		goto out;
+	}
+
 
 	drm_enable_connectors(fb_helper, enabled);
 
@@ -1284,6 +1303,7 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper)
 		}
 	}
 
+out:
 	kfree(crtcs);
 	kfree(modes);
 	kfree(enabled);
@@ -1291,12 +1311,14 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper)
 
 /**
  * drm_helper_initial_config - setup a sane initial connector configuration
- * @dev: DRM device
+ * @fb_helper: fb_helper device struct
+ * @bpp_sel: bpp value to use for the framebuffer configuration
  *
  * LOCKING:
- * Called at init time, must take mode config lock.
+ * Called at init time by the driver to set up the @fb_helper initial
+ * configuration, must take the mode config lock.
  *
- * Scan the CRTCs and connectors and try to put together an initial setup.
+ * Scans the CRTCs and connectors and tries to put together an initial setup.
  * At the moment, this is a cloned configuration across all heads with
  * a new framebuffer object as the backing store.
  *
@@ -1319,9 +1341,9 @@ bool drm_fb_helper_initial_config(struct drm_fb_helper *fb_helper, int bpp_sel)
 	/*
 	 * we shouldn't end up with no modes here.
 	 */
-	if (count == 0) {
-		printk(KERN_INFO "No connectors reported connected with modes\n");
-	}
+	if (count == 0)
+		dev_info(fb_helper->dev->dev, "No connectors reported connected with modes\n");
+
 	drm_setup_crtcs(fb_helper);
 
 	return drm_fb_helper_single_fb_probe(fb_helper, bpp_sel);
@@ -1330,7 +1352,7 @@ EXPORT_SYMBOL(drm_fb_helper_initial_config);
 
 /**
  * drm_fb_helper_hotplug_event - respond to a hotplug notification by
- *                               probing all the outputs attached to the fb.
+ *                               probing all the outputs attached to the fb
  * @fb_helper: the drm_fb_helper
  *
  * LOCKING:
...
@@ -67,10 +67,8 @@ void drm_ht_verbose_list(struct drm_open_hash *ht, unsigned long key)
 	hashed_key = hash_long(key, ht->order);
 	DRM_DEBUG("Key is 0x%08lx, Hashed key is 0x%08x\n", key, hashed_key);
 	h_list = &ht->table[hashed_key];
-	hlist_for_each(list, h_list) {
-		entry = hlist_entry(list, struct drm_hash_item, head);
+	hlist_for_each_entry(entry, list, h_list, head)
 		DRM_DEBUG("count %d, key: 0x%08lx\n", count++, entry->key);
-	}
 }
 
 static struct hlist_node *drm_ht_find_key(struct drm_open_hash *ht,
@@ -83,8 +81,7 @@ static struct hlist_node *drm_ht_find_key(struct drm_open_hash *ht,
 	hashed_key = hash_long(key, ht->order);
 	h_list = &ht->table[hashed_key];
-	hlist_for_each(list, h_list) {
-		entry = hlist_entry(list, struct drm_hash_item, head);
+	hlist_for_each_entry(entry, list, h_list, head) {
 		if (entry->key == key)
 			return list;
 		if (entry->key > key)
@@ -93,6 +90,24 @@ static struct hlist_node *drm_ht_find_key(struct drm_open_hash *ht,
 	return NULL;
 }
 
+static struct hlist_node *drm_ht_find_key_rcu(struct drm_open_hash *ht,
+					      unsigned long key)
+{
+	struct drm_hash_item *entry;
+	struct hlist_head *h_list;
+	struct hlist_node *list;
+	unsigned int hashed_key;
+
+	hashed_key = hash_long(key, ht->order);
+	h_list = &ht->table[hashed_key];
+	hlist_for_each_entry_rcu(entry, list, h_list, head) {
+		if (entry->key == key)
+			return list;
+		if (entry->key > key)
+			break;
+	}
+	return NULL;
+}
+
 int drm_ht_insert_item(struct drm_open_hash *ht, struct drm_hash_item *item)
 {
@@ -105,8 +120,7 @@ int drm_ht_insert_item(struct drm_open_hash *ht, struct drm_hash_item *item)
 	hashed_key = hash_long(key, ht->order);
 	h_list = &ht->table[hashed_key];
 	parent = NULL;
-	hlist_for_each(list, h_list) {
-		entry = hlist_entry(list, struct drm_hash_item, head);
+	hlist_for_each_entry(entry, list, h_list, head) {
 		if (entry->key == key)
 			return -EINVAL;
 		if (entry->key > key)
@@ -114,9 +128,9 @@ int drm_ht_insert_item(struct drm_open_hash *ht, struct drm_hash_item *item)
 		parent = list;
 	}
 	if (parent) {
-		hlist_add_after(parent, &item->head);
+		hlist_add_after_rcu(parent, &item->head);
 	} else {
-		hlist_add_head(&item->head, h_list);
+		hlist_add_head_rcu(&item->head, h_list);
 	}
 	return 0;
 }
@@ -156,7 +170,7 @@ int drm_ht_find_item(struct drm_open_hash *ht, unsigned long key,
 {
 	struct hlist_node *list;
 
-	list = drm_ht_find_key(ht, key);
+	list = drm_ht_find_key_rcu(ht, key);
 	if (!list)
 		return -EINVAL;
 
@@ -171,7 +185,7 @@ int drm_ht_remove_key(struct drm_open_hash *ht, unsigned long key)
 	list = drm_ht_find_key(ht, key);
 	if (list) {
-		hlist_del_init(list);
+		hlist_del_init_rcu(list);
 		return 0;
 	}
 	return -EINVAL;
@@ -179,7 +193,7 @@ int drm_ht_remove_key(struct drm_open_hash *ht, unsigned long key)
 
 int drm_ht_remove_item(struct drm_open_hash *ht, struct drm_hash_item *item)
 {
-	hlist_del_init(&item->head);
+	hlist_del_init_rcu(&item->head);
 	return 0;
 }
 EXPORT_SYMBOL(drm_ht_remove_item);
...
@@ -287,6 +287,9 @@ int drm_getcap(struct drm_device *dev, void *data, struct drm_file *file_priv)
 		req->value |= dev->driver->prime_fd_to_handle ? DRM_PRIME_CAP_IMPORT : 0;
 		req->value |= dev->driver->prime_handle_to_fd ? DRM_PRIME_CAP_EXPORT : 0;
 		break;
+	case DRM_CAP_TIMESTAMP_MONOTONIC:
+		req->value = drm_timestamp_monotonic;
+		break;
 	default:
 		return -EINVAL;
 	}
...
@@ -106,6 +106,7 @@ static void vblank_disable_and_save(struct drm_device *dev, int crtc)
 	s64 diff_ns;
 	int vblrc;
 	struct timeval tvblank;
+	int count = DRM_TIMESTAMP_MAXRETRIES;
 
 	/* Prevent vblank irq processing while disabling vblank irqs,
 	 * so no updates of timestamps or count can happen after we've
@@ -131,7 +132,10 @@ static void vblank_disable_and_save(struct drm_device *dev, int crtc)
 	do {
 		dev->last_vblank[crtc] = dev->driver->get_vblank_counter(dev, crtc);
 		vblrc = drm_get_last_vbltimestamp(dev, crtc, &tvblank, 0);
-	} while (dev->last_vblank[crtc] != dev->driver->get_vblank_counter(dev, crtc));
+	} while (dev->last_vblank[crtc] != dev->driver->get_vblank_counter(dev, crtc) && (--count) && vblrc);
+
+	if (!count)
+		vblrc = 0;
 
 	/* Compute time difference to stored timestamp of last vblank
 	 * as updated by last invocation of drm_handle_vblank() in vblank irq.
@@ -576,7 +580,8 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc,
 					  unsigned flags,
 					  struct drm_crtc *refcrtc)
 {
-	struct timeval stime, raw_time;
+	ktime_t stime, etime, mono_time_offset;
+	struct timeval tv_etime;
 	struct drm_display_mode *mode;
 	int vbl_status, vtotal, vdisplay;
 	int vpos, hpos, i;
@@ -625,13 +630,15 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc,
 		preempt_disable();
 
 		/* Get system timestamp before query. */
-		do_gettimeofday(&stime);
+		stime = ktime_get();
 
 		/* Get vertical and horizontal scanout pos. vpos, hpos. */
 		vbl_status = dev->driver->get_scanout_position(dev, crtc, &vpos, &hpos);
 
 		/* Get system timestamp after query. */
-		do_gettimeofday(&raw_time);
+		etime = ktime_get();
+		if (!drm_timestamp_monotonic)
+			mono_time_offset = ktime_get_monotonic_offset();
 
 		preempt_enable();
@@ -642,7 +649,7 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc,
 			return -EIO;
 		}
 
-		duration_ns = timeval_to_ns(&raw_time) - timeval_to_ns(&stime);
+		duration_ns = ktime_to_ns(etime) - ktime_to_ns(stime);
 
 		/* Accept result with < max_error nsecs timing uncertainty. */
 		if (duration_ns <= (s64) *max_error)
@@ -689,14 +696,20 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc,
 		vbl_status |= 0x8;
 	}
 
+	if (!drm_timestamp_monotonic)
+		etime = ktime_sub(etime, mono_time_offset);
+
+	/* save this only for debugging purposes */
+	tv_etime = ktime_to_timeval(etime);
 	/* Subtract time delta from raw timestamp to get final
 	 * vblank_time timestamp for end of vblank.
 	 */
-	*vblank_time = ns_to_timeval(timeval_to_ns(&raw_time) - delta_ns);
+	etime = ktime_sub_ns(etime, delta_ns);
+	*vblank_time = ktime_to_timeval(etime);
 
 	DRM_DEBUG("crtc %d : v %d p(%d,%d)@ %ld.%ld -> %ld.%ld [e %d us, %d rep]\n",
 		  crtc, (int)vbl_status, hpos, vpos,
-		  (long)raw_time.tv_sec, (long)raw_time.tv_usec,
+		  (long)tv_etime.tv_sec, (long)tv_etime.tv_usec,
 		  (long)vblank_time->tv_sec, (long)vblank_time->tv_usec,
 		  (int)duration_ns/1000, i);
@@ -708,6 +721,17 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc,
 }
 EXPORT_SYMBOL(drm_calc_vbltimestamp_from_scanoutpos);
 
+static struct timeval get_drm_timestamp(void)
+{
+	ktime_t now;
+
+	now = ktime_get();
+	if (!drm_timestamp_monotonic)
+		now = ktime_sub(now, ktime_get_monotonic_offset());
+
+	return ktime_to_timeval(now);
+}
+
 /**
  * drm_get_last_vbltimestamp - retrieve raw timestamp for the most recent
  * vblank interval.
@@ -745,9 +769,9 @@ u32 drm_get_last_vbltimestamp(struct drm_device *dev, int crtc,
 	}
 
 	/* GPU high precision timestamp query unsupported or failed.
-	 * Return gettimeofday timestamp as best estimate.
+	 * Return current monotonic/gettimeofday timestamp as best estimate.
 	 */
-	do_gettimeofday(tvblank);
+	*tvblank = get_drm_timestamp();
 
 	return 0;
 }
@@ -802,6 +826,47 @@ u32 drm_vblank_count_and_time(struct drm_device *dev, int crtc,
 }
 EXPORT_SYMBOL(drm_vblank_count_and_time);
 
+static void send_vblank_event(struct drm_device *dev,
+		struct drm_pending_vblank_event *e,
+		unsigned long seq, struct timeval *now)
+{
+	WARN_ON_SMP(!spin_is_locked(&dev->event_lock));
+	e->event.sequence = seq;
+	e->event.tv_sec = now->tv_sec;
+	e->event.tv_usec = now->tv_usec;
+
+	list_add_tail(&e->base.link,
+		      &e->base.file_priv->event_list);
+	wake_up_interruptible(&e->base.file_priv->event_wait);
+	trace_drm_vblank_event_delivered(e->base.pid, e->pipe,
+					 e->event.sequence);
+}
+
+/**
+ * drm_send_vblank_event - helper to send vblank event after pageflip
+ * @dev: DRM device
+ * @crtc: CRTC in question
+ * @e: the event to send
+ *
+ * Updates sequence # and timestamp on event, and sends it to userspace.
+ * Caller must hold event lock.
+ */
+void drm_send_vblank_event(struct drm_device *dev, int crtc,
+		struct drm_pending_vblank_event *e)
+{
+	struct timeval now;
+	unsigned int seq;
+
+	if (crtc >= 0) {
+		seq = drm_vblank_count_and_time(dev, crtc, &now);
+	} else {
+		seq = 0;
+		now = get_drm_timestamp();
+	}
+	send_vblank_event(dev, e, seq, &now);
+}
+EXPORT_SYMBOL(drm_send_vblank_event);
+
 /**
  * drm_update_vblank_count - update the master vblank counter
  * @dev: DRM device
@@ -936,6 +1001,13 @@ void drm_vblank_put(struct drm_device *dev, int crtc)
 }
 EXPORT_SYMBOL(drm_vblank_put);
 
+/**
+ * drm_vblank_off - disable vblank events on a CRTC
+ * @dev: DRM device
+ * @crtc: CRTC in question
+ *
+ * Caller must hold event lock.
+ */
 void drm_vblank_off(struct drm_device *dev, int crtc)
 {
 	struct drm_pending_vblank_event *e, *t;
@@ -949,22 +1021,19 @@ void drm_vblank_off(struct drm_device *dev, int crtc)
 	/* Send any queued vblank events, lest the natives grow disquiet */
 	seq = drm_vblank_count_and_time(dev, crtc, &now);
 
+	spin_lock(&dev->event_lock);
 	list_for_each_entry_safe(e, t, &dev->vblank_event_list, base.link) {
 		if (e->pipe != crtc)
 			continue;
 		DRM_DEBUG("Sending premature vblank event on disable: \
 			  wanted %d, current %d\n",
 			  e->event.sequence, seq);
-
-		e->event.sequence = seq;
-		e->event.tv_sec = now.tv_sec;
-		e->event.tv_usec = now.tv_usec;
+		list_del(&e->base.link);
 		drm_vblank_put(dev, e->pipe);
-		list_move_tail(&e->base.link, &e->base.file_priv->event_list);
-		wake_up_interruptible(&e->base.file_priv->event_wait);
-		trace_drm_vblank_event_delivered(e->base.pid, e->pipe,
-						 e->event.sequence);
+		send_vblank_event(dev, e, seq, &now);
 	}
+	spin_unlock(&dev->event_lock);
 
 	spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
 }
@@ -1107,15 +1176,9 @@ static int drm_queue_vblank_event(struct drm_device *dev, int pipe,
 
 	e->event.sequence = vblwait->request.sequence;
 	if ((seq - vblwait->request.sequence) <= (1 << 23)) {
-		e->event.sequence = seq;
-		e->event.tv_sec = now.tv_sec;
-		e->event.tv_usec = now.tv_usec;
 		drm_vblank_put(dev, pipe);
-		list_add_tail(&e->base.link, &e->base.file_priv->event_list);
-		wake_up_interruptible(&e->base.file_priv->event_wait);
+		send_vblank_event(dev, e, seq, &now);
 		vblwait->reply.sequence = seq;
-		trace_drm_vblank_event_delivered(current->pid, pipe,
-						 vblwait->request.sequence);
 	} else {
 		/* drm_handle_vblank_events will call drm_vblank_put */
 		list_add_tail(&e->base.link, &dev->vblank_event_list);
@@ -1256,14 +1319,9 @@ static void drm_handle_vblank_events(struct drm_device *dev, int crtc)
 		DRM_DEBUG("vblank event on %d, current %d\n",
 			  e->event.sequence, seq);
 
-		e->event.sequence = seq;
-		e->event.tv_sec = now.tv_sec;
-		e->event.tv_usec = now.tv_usec;
+		list_del(&e->base.link);
 		drm_vblank_put(dev, e->pipe);
-		list_move_tail(&e->base.link, &e->base.file_priv->event_list);
-		wake_up_interruptible(&e->base.file_priv->event_wait);
-		trace_drm_vblank_event_delivered(e->base.pid, e->pipe,
-						 e->event.sequence);
+		send_vblank_event(dev, e, seq, &now);
 	}
 
 	spin_unlock_irqrestore(&dev->event_lock, flags);
...
@@ -46,7 +46,7 @@
  *
  * Describe @mode using DRM_DEBUG.
  */
-void drm_mode_debug_printmodeline(struct drm_display_mode *mode)
+void drm_mode_debug_printmodeline(const struct drm_display_mode *mode)
 {
 	DRM_DEBUG_KMS("Modeline %d:\"%s\" %d %d %d %d %d %d %d %d %d %d "
 			"0x%x 0x%x\n",
@@ -558,7 +558,7 @@ EXPORT_SYMBOL(drm_mode_list_concat);
  * RETURNS:
  * @mode->hdisplay
  */
-int drm_mode_width(struct drm_display_mode *mode)
+int drm_mode_width(const struct drm_display_mode *mode)
 {
 	return mode->hdisplay;
@@ -579,7 +579,7 @@ EXPORT_SYMBOL(drm_mode_width);
  * RETURNS:
  * @mode->vdisplay
  */
-int drm_mode_height(struct drm_display_mode *mode)
+int drm_mode_height(const struct drm_display_mode *mode)
 {
 	return mode->vdisplay;
 }
@@ -768,7 +768,7 @@ EXPORT_SYMBOL(drm_mode_duplicate);
  * RETURNS:
  * True if the modes are equal, false otherwise.
  */
-bool drm_mode_equal(struct drm_display_mode *mode1, struct drm_display_mode *mode2)
+bool drm_mode_equal(const struct drm_display_mode *mode1, const struct drm_display_mode *mode2)
 {
 	/* do clock check convert to PICOS so fb modes get matched
 	 * the same */
...
@@ -470,7 +470,7 @@ int drm_pcie_get_speed_cap_mask(struct drm_device *dev, u32 *mask)
 {
 	struct pci_dev *root;
 	int pos;
-	u32 lnkcap, lnkcap2;
+	u32 lnkcap = 0, lnkcap2 = 0;
 
 	*mask = 0;
 	if (!dev->pdev)
...
@@ -46,16 +46,24 @@ EXPORT_SYMBOL(drm_vblank_offdelay);
 unsigned int drm_timestamp_precision = 20;  /* Default to 20 usecs. */
 EXPORT_SYMBOL(drm_timestamp_precision);
 
+/*
+ * Default to use monotonic timestamps for wait-for-vblank and page-flip
+ * complete events.
+ */
+unsigned int drm_timestamp_monotonic = 1;
+
 MODULE_AUTHOR(CORE_AUTHOR);
 MODULE_DESCRIPTION(CORE_DESC);
 MODULE_LICENSE("GPL and additional rights");
 MODULE_PARM_DESC(debug, "Enable debug output");
 MODULE_PARM_DESC(vblankoffdelay, "Delay until vblank irq auto-disable [msecs]");
 MODULE_PARM_DESC(timestamp_precision_usec, "Max. error on timestamps [usecs]");
+MODULE_PARM_DESC(timestamp_monotonic, "Use monotonic timestamps");
 
 module_param_named(debug, drm_debug, int, 0600);
 module_param_named(vblankoffdelay, drm_vblank_offdelay, int, 0600);
 module_param_named(timestamp_precision_usec, drm_timestamp_precision, int, 0600);
+module_param_named(timestamp_monotonic, drm_timestamp_monotonic, int, 0600);
 
 struct idr drm_minors_idr;
@@ -221,20 +229,20 @@ int drm_setmaster_ioctl(struct drm_device *dev, void *data,
 	if (!file_priv->master)
 		return -EINVAL;
 
-	if (!file_priv->minor->master &&
-	    file_priv->minor->master != file_priv->master) {
-		mutex_lock(&dev->struct_mutex);
-		file_priv->minor->master = drm_master_get(file_priv->master);
-		file_priv->is_master = 1;
-		if (dev->driver->master_set) {
-			ret = dev->driver->master_set(dev, file_priv, false);
-			if (unlikely(ret != 0)) {
-				file_priv->is_master = 0;
-				drm_master_put(&file_priv->minor->master);
-			}
+	if (file_priv->minor->master)
+		return -EINVAL;
+
+	mutex_lock(&dev->struct_mutex);
+	file_priv->minor->master = drm_master_get(file_priv->master);
+	file_priv->is_master = 1;
+	if (dev->driver->master_set) {
+		ret = dev->driver->master_set(dev, file_priv, false);
+		if (unlikely(ret != 0)) {
+			file_priv->is_master = 0;
+			drm_master_put(&file_priv->minor->master);
 		}
-		mutex_unlock(&dev->struct_mutex);
 	}
+	mutex_unlock(&dev->struct_mutex);
 
 	return 0;
 }
@@ -492,10 +500,7 @@ void drm_put_dev(struct drm_device *dev)
 	drm_put_minor(&dev->primary);
 
 	list_del(&dev->driver_item);
-	if (dev->devname) {
-		kfree(dev->devname);
-		dev->devname = NULL;
-	}
+	kfree(dev->devname);
 
 	kfree(dev);
 }
EXPORT_SYMBOL(drm_put_dev);
...
@@ -182,7 +182,7 @@ static ssize_t dpms_show(struct device *device,
 	uint64_t dpms_status;
 	int ret;
 
-	ret = drm_connector_property_get_value(connector,
+	ret = drm_object_property_get_value(&connector->base,
 					    dev->mode_config.dpms_property,
 					    &dpms_status);
 	if (ret)
@@ -277,7 +277,7 @@ static ssize_t subconnector_show(struct device *device,
 		return 0;
 	}
 
-	ret = drm_connector_property_get_value(connector, prop, &subconnector);
+	ret = drm_object_property_get_value(&connector->base, prop, &subconnector);
 	if (ret)
 		return 0;
 
@@ -318,7 +318,7 @@ static ssize_t select_subconnector_show(struct device *device,
 		return 0;
 	}
 
-	ret = drm_connector_property_get_value(connector, prop, &subconnector);
+	ret = drm_object_property_get_value(&connector->base, prop, &subconnector);
 	if (ret)
 		return 0;
 
...
@@ -10,6 +10,12 @@ config DRM_EXYNOS
 	  Choose this option if you have a Samsung SoC EXYNOS chipset.
 	  If M is selected the module will be called exynosdrm.
 
+config DRM_EXYNOS_IOMMU
+	bool "EXYNOS DRM IOMMU Support"
+	depends on DRM_EXYNOS && EXYNOS_IOMMU && ARM_DMA_USE_IOMMU
+	help
+	  Choose this option if you want to use IOMMU feature for DRM.
+
 config DRM_EXYNOS_DMABUF
 	bool "EXYNOS DRM DMABUF"
 	depends on DRM_EXYNOS
@@ -39,3 +45,27 @@ config DRM_EXYNOS_G2D
 	depends on DRM_EXYNOS && !VIDEO_SAMSUNG_S5P_G2D
 	help
 	  Choose this option if you want to use Exynos G2D for DRM.
+
+config DRM_EXYNOS_IPP
+	bool "Exynos DRM IPP"
+	depends on DRM_EXYNOS
+	help
+	  Choose this option if you want to use IPP feature for DRM.
+
+config DRM_EXYNOS_FIMC
+	bool "Exynos DRM FIMC"
+	depends on DRM_EXYNOS_IPP
+	help
+	  Choose this option if you want to use Exynos FIMC for DRM.
+
+config DRM_EXYNOS_ROTATOR
+	bool "Exynos DRM Rotator"
+	depends on DRM_EXYNOS_IPP
+	help
+	  Choose this option if you want to use Exynos Rotator for DRM.
+
+config DRM_EXYNOS_GSC
+	bool "Exynos DRM GSC"
+	depends on DRM_EXYNOS_IPP && ARCH_EXYNOS5
+	help
+	  Choose this option if you want to use Exynos GSC for DRM.
@@ -8,6 +8,7 @@ exynosdrm-y := exynos_drm_drv.o exynos_drm_encoder.o exynos_drm_connector.o \
 		exynos_drm_buf.o exynos_drm_gem.o exynos_drm_core.o \
 		exynos_drm_plane.o
 
+exynosdrm-$(CONFIG_DRM_EXYNOS_IOMMU)	+= exynos_drm_iommu.o
 exynosdrm-$(CONFIG_DRM_EXYNOS_DMABUF)	+= exynos_drm_dmabuf.o
 exynosdrm-$(CONFIG_DRM_EXYNOS_FIMD)	+= exynos_drm_fimd.o
 exynosdrm-$(CONFIG_DRM_EXYNOS_HDMI)	+= exynos_hdmi.o exynos_mixer.o \
@@ -15,5 +16,9 @@ exynosdrm-$(CONFIG_DRM_EXYNOS_HDMI) += exynos_hdmi.o exynos_mixer.o \
 					   exynos_drm_hdmi.o
 exynosdrm-$(CONFIG_DRM_EXYNOS_VIDI)	+= exynos_drm_vidi.o
 exynosdrm-$(CONFIG_DRM_EXYNOS_G2D)	+= exynos_drm_g2d.o
+exynosdrm-$(CONFIG_DRM_EXYNOS_IPP)	+= exynos_drm_ipp.o
+exynosdrm-$(CONFIG_DRM_EXYNOS_FIMC)	+= exynos_drm_fimc.o
+exynosdrm-$(CONFIG_DRM_EXYNOS_ROTATOR)	+= exynos_drm_rotator.o
+exynosdrm-$(CONFIG_DRM_EXYNOS_GSC)	+= exynos_drm_gsc.o
 
 obj-$(CONFIG_DRM_EXYNOS)		+= exynosdrm.o
@@ -48,6 +48,7 @@ static struct i2c_device_id ddc_idtable[] = {
 	{ },
 };
 
+#ifdef CONFIG_OF
 static struct of_device_id hdmiddc_match_types[] = {
 	{
 		.compatible = "samsung,exynos5-hdmiddc",
@@ -55,12 +56,13 @@ static struct of_device_id hdmiddc_match_types[] = {
 		/* end node */
 	}
 };
+#endif
 
 struct i2c_driver ddc_driver = {
 	.driver = {
 		.name = "exynos-hdmiddc",
 		.owner = THIS_MODULE,
-		.of_match_table = hdmiddc_match_types,
+		.of_match_table = of_match_ptr(hdmiddc_match_types),
 	},
 	.id_table = ddc_idtable,
 	.probe = s5p_ddc_probe,
...
@@ -33,89 +33,64 @@
 static int lowlevel_buffer_allocate(struct drm_device *dev,
 		unsigned int flags, struct exynos_drm_gem_buf *buf)
 {
-	dma_addr_t start_addr;
-	unsigned int npages, i = 0;
-	struct scatterlist *sgl;
 	int ret = 0;
+	enum dma_attr attr;
+	unsigned int nr_pages;
 
 	DRM_DEBUG_KMS("%s\n", __FILE__);
 
-	if (IS_NONCONTIG_BUFFER(flags)) {
-		DRM_DEBUG_KMS("not support allocation type.\n");
-		return -EINVAL;
-	}
-
 	if (buf->dma_addr) {
 		DRM_DEBUG_KMS("already allocated.\n");
 		return 0;
 	}
 
-	if (buf->size >= SZ_1M) {
-		npages = buf->size >> SECTION_SHIFT;
-		buf->page_size = SECTION_SIZE;
-	} else if (buf->size >= SZ_64K) {
-		npages = buf->size >> 16;
-		buf->page_size = SZ_64K;
-	} else {
-		npages = buf->size >> PAGE_SHIFT;
-		buf->page_size = PAGE_SIZE;
-	}
+	init_dma_attrs(&buf->dma_attrs);
 
-	buf->sgt = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
-	if (!buf->sgt) {
-		DRM_ERROR("failed to allocate sg table.\n");
-		return -ENOMEM;
-	}
+	/*
+	 * if EXYNOS_BO_CONTIG, fully physically contiguous memory
+	 * region will be allocated else physically contiguous
+	 * as possible.
+	 */
+	if (flags & EXYNOS_BO_CONTIG)
+		dma_set_attr(DMA_ATTR_FORCE_CONTIGUOUS, &buf->dma_attrs);
 
-	ret = sg_alloc_table(buf->sgt, npages, GFP_KERNEL);
-	if (ret < 0) {
-		DRM_ERROR("failed to initialize sg table.\n");
-		kfree(buf->sgt);
-		buf->sgt = NULL;
-		return -ENOMEM;
-	}
+	/*
+	 * if EXYNOS_BO_WC or EXYNOS_BO_NONCACHABLE, writecombine mapping
+	 * else cachable mapping.
+	 */
+	if (flags & EXYNOS_BO_WC || !(flags & EXYNOS_BO_CACHABLE))
+		attr = DMA_ATTR_WRITE_COMBINE;
+	else
+		attr = DMA_ATTR_NON_CONSISTENT;
 
-	buf->kvaddr = dma_alloc_writecombine(dev->dev, buf->size,
-			&buf->dma_addr, GFP_KERNEL);
-	if (!buf->kvaddr) {
-		DRM_ERROR("failed to allocate buffer.\n");
-		ret = -ENOMEM;
-		goto err1;
-	}
+	dma_set_attr(attr, &buf->dma_attrs);
+	dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &buf->dma_attrs);
 
-	buf->pages = kzalloc(sizeof(struct page) * npages, GFP_KERNEL);
+	buf->pages = dma_alloc_attrs(dev->dev, buf->size,
+			&buf->dma_addr, GFP_KERNEL, &buf->dma_attrs);
 	if (!buf->pages) {
-		DRM_ERROR("failed to allocate pages.\n");
-		ret = -ENOMEM;
-		goto err2;
+		DRM_ERROR("failed to allocate buffer.\n");
+		return -ENOMEM;
 	}
 
-	sgl = buf->sgt->sgl;
-	start_addr = buf->dma_addr;
-
-	while (i < npages) {
-		buf->pages[i] = phys_to_page(start_addr);
-		sg_set_page(sgl, buf->pages[i], buf->page_size, 0);
-		sg_dma_address(sgl) = start_addr;
-		start_addr += buf->page_size;
-		sgl = sg_next(sgl);
-		i++;
+	nr_pages = buf->size >> PAGE_SHIFT;
+	buf->sgt = drm_prime_pages_to_sg(buf->pages, nr_pages);
+	if (!buf->sgt) {
+		DRM_ERROR("failed to get sg table.\n");
+		ret = -ENOMEM;
+		goto err_free_attrs;
 	}
 
-	DRM_DEBUG_KMS("vaddr(0x%lx), dma_addr(0x%lx), size(0x%lx)\n",
-			(unsigned long)buf->kvaddr,
+	DRM_DEBUG_KMS("dma_addr(0x%lx), size(0x%lx)\n",
			(unsigned long)buf->dma_addr,
			buf->size);
 
 	return ret;
 
-err2:
-	dma_free_writecombine(dev->dev, buf->size, buf->kvaddr,
-			(dma_addr_t)buf->dma_addr);
+err_free_attrs:
+	dma_free_attrs(dev->dev, buf->size, buf->pages,
+			(dma_addr_t)buf->dma_addr, &buf->dma_attrs);
 	buf->dma_addr = (dma_addr_t)NULL;
-err1:
-	sg_free_table(buf->sgt);
-	kfree(buf->sgt);
-	buf->sgt = NULL;
 
 	return ret;
 }
...@@ -125,23 +100,12 @@ static void lowlevel_buffer_deallocate(struct drm_device *dev, ...@@ -125,23 +100,12 @@ static void lowlevel_buffer_deallocate(struct drm_device *dev,
{ {
DRM_DEBUG_KMS("%s.\n", __FILE__); DRM_DEBUG_KMS("%s.\n", __FILE__);
/*
* release only physically continuous memory and
* non-continuous memory would be released by exynos
* gem framework.
*/
if (IS_NONCONTIG_BUFFER(flags)) {
DRM_DEBUG_KMS("not support allocation type.\n");
return;
}
if (!buf->dma_addr) { if (!buf->dma_addr) {
DRM_DEBUG_KMS("dma_addr is invalid.\n"); DRM_DEBUG_KMS("dma_addr is invalid.\n");
return; return;
} }
DRM_DEBUG_KMS("vaddr(0x%lx), dma_addr(0x%lx), size(0x%lx)\n", DRM_DEBUG_KMS("dma_addr(0x%lx), size(0x%lx)\n",
(unsigned long)buf->kvaddr,
(unsigned long)buf->dma_addr, (unsigned long)buf->dma_addr,
buf->size); buf->size);
...@@ -150,11 +114,8 @@ static void lowlevel_buffer_deallocate(struct drm_device *dev, ...@@ -150,11 +114,8 @@ static void lowlevel_buffer_deallocate(struct drm_device *dev,
kfree(buf->sgt); kfree(buf->sgt);
buf->sgt = NULL; buf->sgt = NULL;
kfree(buf->pages); dma_free_attrs(dev->dev, buf->size, buf->pages,
buf->pages = NULL; (dma_addr_t)buf->dma_addr, &buf->dma_attrs);
dma_free_writecombine(dev->dev, buf->size, buf->kvaddr,
(dma_addr_t)buf->dma_addr);
buf->dma_addr = (dma_addr_t)NULL; buf->dma_addr = (dma_addr_t)NULL;
} }
......
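The allocation rewrite above funnels everything through the dma_attrs API of this era (include/linux/dma-attrs.h). A minimal standalone sketch of the same pattern, under the assumption that the caller wants a write-combined buffer with no kernel mapping:

	#include <linux/dma-mapping.h>
	#include <linux/dma-attrs.h>

	/*
	 * Sketch only: allocate a device-visible buffer. With
	 * DMA_ATTR_NO_KERNEL_MAPPING the return value is an opaque cookie
	 * (a struct page ** array on ARM), not a CPU virtual address.
	 */
	static void *sketch_alloc(struct device *dev, size_t size,
				  dma_addr_t *handle, struct dma_attrs *attrs)
	{
		init_dma_attrs(attrs);
		dma_set_attr(DMA_ATTR_WRITE_COMBINE, attrs);
		dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs);

		return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, attrs);
	}

The same attrs must later be passed back to dma_free_attrs(), which is why the driver stores them in buf->dma_attrs.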
@@ -34,12 +34,12 @@ struct exynos_drm_gem_buf *exynos_drm_init_buf(struct drm_device *dev,
 void exynos_drm_fini_buf(struct drm_device *dev,
 				struct exynos_drm_gem_buf *buffer);
 
-/* allocate physical memory region and setup sgt and pages. */
+/* allocate physical memory region and setup sgt. */
 int exynos_drm_alloc_buf(struct drm_device *dev,
 			struct exynos_drm_gem_buf *buf,
 			unsigned int flags);
 
-/* release physical memory region, sgt and pages. */
+/* release physical memory region, and sgt. */
 void exynos_drm_free_buf(struct drm_device *dev,
 			unsigned int flags,
 			struct exynos_drm_gem_buf *buffer);
...
@@ -236,16 +236,21 @@ static int exynos_drm_crtc_page_flip(struct drm_crtc *crtc,
 			goto out;
 		}
 
+		spin_lock_irq(&dev->event_lock);
 		list_add_tail(&event->base.link,
 				&dev_priv->pageflip_event_list);
+		spin_unlock_irq(&dev->event_lock);
 
 		crtc->fb = fb;
 		ret = exynos_drm_crtc_mode_set_base(crtc, crtc->x, crtc->y,
 						    NULL);
 		if (ret) {
 			crtc->fb = old_fb;
+
+			spin_lock_irq(&dev->event_lock);
 			drm_vblank_put(dev, exynos_crtc->pipe);
 			list_del(&event->base.link);
+			spin_unlock_irq(&dev->event_lock);
+
 			goto out;
 		}
...
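The added locking pairs with the vblank interrupt path: pageflip_event_list is also walked from the IRQ handler, so every access has to hold dev->event_lock. A sketch of the rule (not the driver's literal code):

	/* process context: queue a flip event for the IRQ handler */
	static void queue_flip_event(struct drm_device *dev,
				     struct exynos_drm_private *dev_priv,
				     struct drm_pending_vblank_event *event)
	{
		spin_lock_irq(&dev->event_lock);
		list_add_tail(&event->base.link,
				&dev_priv->pageflip_event_list);
		spin_unlock_irq(&dev->event_lock);
	}

The interrupt side takes the same lock with spin_lock_irqsave(), so the list_del() in the error path above must be covered as well.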
@@ -30,70 +30,108 @@
 #include <linux/dma-buf.h>
 
-static struct sg_table *exynos_pages_to_sg(struct page **pages, int nr_pages,
-		unsigned int page_size)
-{
-	struct sg_table *sgt = NULL;
-	struct scatterlist *sgl;
-	int i, ret;
-
-	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
-	if (!sgt)
-		goto out;
-
-	ret = sg_alloc_table(sgt, nr_pages, GFP_KERNEL);
-	if (ret)
-		goto err_free_sgt;
-
-	if (page_size < PAGE_SIZE)
-		page_size = PAGE_SIZE;
-
-	for_each_sg(sgt->sgl, sgl, nr_pages, i)
-		sg_set_page(sgl, pages[i], page_size, 0);
-
-	return sgt;
-
-err_free_sgt:
-	kfree(sgt);
-	sgt = NULL;
-out:
-	return NULL;
+struct exynos_drm_dmabuf_attachment {
+	struct sg_table sgt;
+	enum dma_data_direction dir;
+};
+
+static int exynos_gem_attach_dma_buf(struct dma_buf *dmabuf,
+					struct device *dev,
+					struct dma_buf_attachment *attach)
+{
+	struct exynos_drm_dmabuf_attachment *exynos_attach;
+
+	exynos_attach = kzalloc(sizeof(*exynos_attach), GFP_KERNEL);
+	if (!exynos_attach)
+		return -ENOMEM;
+
+	exynos_attach->dir = DMA_NONE;
+	attach->priv = exynos_attach;
+
+	return 0;
+}
+
+static void exynos_gem_detach_dma_buf(struct dma_buf *dmabuf,
+					struct dma_buf_attachment *attach)
+{
+	struct exynos_drm_dmabuf_attachment *exynos_attach = attach->priv;
+	struct sg_table *sgt;
+
+	if (!exynos_attach)
+		return;
+
+	sgt = &exynos_attach->sgt;
+
+	if (exynos_attach->dir != DMA_NONE)
+		dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents,
+				exynos_attach->dir);
+
+	sg_free_table(sgt);
+	kfree(exynos_attach);
+	attach->priv = NULL;
 }
 
 static struct sg_table *
 		exynos_gem_map_dma_buf(struct dma_buf_attachment *attach,
 					enum dma_data_direction dir)
 {
+	struct exynos_drm_dmabuf_attachment *exynos_attach = attach->priv;
 	struct exynos_drm_gem_obj *gem_obj = attach->dmabuf->priv;
 	struct drm_device *dev = gem_obj->base.dev;
 	struct exynos_drm_gem_buf *buf;
+	struct scatterlist *rd, *wr;
 	struct sg_table *sgt = NULL;
-	unsigned int npages;
-	int nents;
+	unsigned int i;
+	int nents, ret;
 
 	DRM_DEBUG_PRIME("%s\n", __FILE__);
 
-	mutex_lock(&dev->struct_mutex);
+	if (WARN_ON(dir == DMA_NONE))
+		return ERR_PTR(-EINVAL);
+
+	/* just return current sgt if already requested. */
+	if (exynos_attach->dir == dir)
+		return &exynos_attach->sgt;
+
+	/* reattaching is not allowed. */
+	if (WARN_ON(exynos_attach->dir != DMA_NONE))
+		return ERR_PTR(-EBUSY);
 
 	buf = gem_obj->buffer;
+	if (!buf) {
+		DRM_ERROR("buffer is null.\n");
+		return ERR_PTR(-ENOMEM);
+	}
 
-	/* there should always be pages allocated. */
-	if (!buf->pages) {
-		DRM_ERROR("pages is null.\n");
-		goto err_unlock;
+	sgt = &exynos_attach->sgt;
+
+	ret = sg_alloc_table(sgt, buf->sgt->orig_nents, GFP_KERNEL);
+	if (ret) {
+		DRM_ERROR("failed to alloc sgt.\n");
+		return ERR_PTR(-ENOMEM);
 	}
 
-	npages = buf->size / buf->page_size;
+	mutex_lock(&dev->struct_mutex);
 
-	sgt = exynos_pages_to_sg(buf->pages, npages, buf->page_size);
-	if (!sgt) {
-		DRM_DEBUG_PRIME("exynos_pages_to_sg returned NULL!\n");
+	rd = buf->sgt->sgl;
+	wr = sgt->sgl;
+	for (i = 0; i < sgt->orig_nents; ++i) {
+		sg_set_page(wr, sg_page(rd), rd->length, rd->offset);
+		rd = sg_next(rd);
+		wr = sg_next(wr);
+	}
+
+	nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir);
+	if (!nents) {
+		DRM_ERROR("failed to map sgl with iommu.\n");
+		sgt = ERR_PTR(-EIO);
 		goto err_unlock;
 	}
 
-	nents = dma_map_sg(attach->dev, sgt->sgl, sgt->nents, dir);
-
-	DRM_DEBUG_PRIME("npages = %d buffer size = 0x%lx page_size = 0x%lx\n",
-			npages, buf->size, buf->page_size);
+	exynos_attach->dir = dir;
+	attach->priv = exynos_attach;
+
+	DRM_DEBUG_PRIME("buffer size = 0x%lx\n", buf->size);
 
 err_unlock:
 	mutex_unlock(&dev->struct_mutex);
@@ -104,10 +142,7 @@ static void exynos_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
 						struct sg_table *sgt,
 						enum dma_data_direction dir)
 {
-	dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
-	sg_free_table(sgt);
-	kfree(sgt);
-	sgt = NULL;
+	/* Nothing to do. */
 }
 
 static void exynos_dmabuf_release(struct dma_buf *dmabuf)
@@ -169,6 +204,8 @@ static int exynos_gem_dmabuf_mmap(struct dma_buf *dma_buf,
 }
 
 static struct dma_buf_ops exynos_dmabuf_ops = {
+	.attach			= exynos_gem_attach_dma_buf,
+	.detach			= exynos_gem_detach_dma_buf,
 	.map_dma_buf		= exynos_gem_map_dma_buf,
 	.unmap_dma_buf		= exynos_gem_unmap_dma_buf,
 	.kmap			= exynos_gem_dmabuf_kmap,
@@ -196,7 +233,6 @@ struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev,
 	struct scatterlist *sgl;
 	struct exynos_drm_gem_obj *exynos_gem_obj;
 	struct exynos_drm_gem_buf *buffer;
-	struct page *page;
 	int ret;
 
 	DRM_DEBUG_PRIME("%s\n", __FILE__);
@@ -233,38 +269,27 @@ struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev,
 		goto err_unmap_attach;
 	}
 
-	buffer->pages = kzalloc(sizeof(*page) * sgt->nents, GFP_KERNEL);
-	if (!buffer->pages) {
-		DRM_ERROR("failed to allocate pages.\n");
-		ret = -ENOMEM;
-		goto err_free_buffer;
-	}
-
 	exynos_gem_obj = exynos_drm_gem_init(drm_dev, dma_buf->size);
 	if (!exynos_gem_obj) {
 		ret = -ENOMEM;
-		goto err_free_pages;
+		goto err_free_buffer;
 	}
 
 	sgl = sgt->sgl;
 
-	if (sgt->nents == 1) {
-		buffer->dma_addr = sg_dma_address(sgt->sgl);
-		buffer->size = sg_dma_len(sgt->sgl);
+	buffer->size = dma_buf->size;
+	buffer->dma_addr = sg_dma_address(sgl);
 
+	if (sgt->nents == 1) {
 		/* always physically continuous memory if sgt->nents is 1. */
 		exynos_gem_obj->flags |= EXYNOS_BO_CONTIG;
 	} else {
-		unsigned int i = 0;
-
-		buffer->dma_addr = sg_dma_address(sgl);
-		while (i < sgt->nents) {
-			buffer->pages[i] = sg_page(sgl);
-			buffer->size += sg_dma_len(sgl);
-			sgl = sg_next(sgl);
-			i++;
-		}
-
+		/*
+		 * this case could be CONTIG or NONCONTIG type but for now
+		 * sets NONCONTIG.
+		 * TODO. we have to find a way that exporter can notify
+		 * the type of its own buffer to importer.
+		 */
 		exynos_gem_obj->flags |= EXYNOS_BO_NONCONTIG;
 	}
@@ -277,9 +302,6 @@ struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev,
 
 	return &exynos_gem_obj->base;
 
-err_free_pages:
-	kfree(buffer->pages);
-	buffer->pages = NULL;
 err_free_buffer:
 	kfree(buffer);
 	buffer = NULL;
...
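The map_dma_buf change above caches one mapping per attachment instead of rebuilding it on every call; the exporter's sg_table is cloned entry by entry before dma_map_sg(). A reduced sketch of that cloning step, using a helper name (clone_sgt) that is not part of the driver:

	static struct sg_table *clone_sgt(struct sg_table *src)
	{
		struct scatterlist *rd, *wr;
		struct sg_table *dst;
		unsigned int i;

		dst = kzalloc(sizeof(*dst), GFP_KERNEL);
		if (!dst || sg_alloc_table(dst, src->orig_nents, GFP_KERNEL)) {
			kfree(dst);
			return NULL;
		}

		rd = src->sgl;
		wr = dst->sgl;
		for (i = 0; i < src->orig_nents; i++) {
			sg_set_page(wr, sg_page(rd), rd->length, rd->offset);
			rd = sg_next(rd);
			wr = sg_next(wr);
		}
		return dst;
	}

Cloning matters because dma_map_sg() rewrites the dma_address/dma_length fields of the table it is given, and the exporter's own table must stay untouched for other importers.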
@@ -40,6 +40,8 @@
 #include "exynos_drm_vidi.h"
 #include "exynos_drm_dmabuf.h"
 #include "exynos_drm_g2d.h"
+#include "exynos_drm_ipp.h"
+#include "exynos_drm_iommu.h"
 
 #define DRIVER_NAME	"exynos"
 #define DRIVER_DESC	"Samsung SoC DRM"
@@ -49,6 +51,9 @@
 
 #define VBLANK_OFF_DELAY	50000
 
+/* platform device pointer for eynos drm device. */
+static struct platform_device *exynos_drm_pdev;
+
 static int exynos_drm_load(struct drm_device *dev, unsigned long flags)
 {
 	struct exynos_drm_private *private;
@@ -66,6 +71,18 @@ static int exynos_drm_load(struct drm_device *dev, unsigned long flags)
 	INIT_LIST_HEAD(&private->pageflip_event_list);
 	dev->dev_private = (void *)private;
 
+	/*
+	 * create mapping to manage iommu table and set a pointer to iommu
+	 * mapping structure to iommu_mapping of private data.
+	 * also this iommu_mapping can be used to check if iommu is supported
+	 * or not.
+	 */
+	ret = drm_create_iommu_mapping(dev);
+	if (ret < 0) {
+		DRM_ERROR("failed to create iommu mapping.\n");
+		goto err_crtc;
+	}
+
 	drm_mode_config_init(dev);
 
 	/* init kms poll for handling hpd */
@@ -80,7 +97,7 @@ static int exynos_drm_load(struct drm_device *dev, unsigned long flags)
 	for (nr = 0; nr < MAX_CRTC; nr++) {
 		ret = exynos_drm_crtc_create(dev, nr);
 		if (ret)
-			goto err_crtc;
+			goto err_release_iommu_mapping;
 	}
 
 	for (nr = 0; nr < MAX_PLANE; nr++) {
@@ -89,12 +106,12 @@ static int exynos_drm_load(struct drm_device *dev, unsigned long flags)
 
 		plane = exynos_plane_init(dev, possible_crtcs, false);
 		if (!plane)
-			goto err_crtc;
+			goto err_release_iommu_mapping;
 	}
 
 	ret = drm_vblank_init(dev, MAX_CRTC);
 	if (ret)
-		goto err_crtc;
+		goto err_release_iommu_mapping;
 
 	/*
 	 * probe sub drivers such as display controller and hdmi driver,
@@ -126,6 +143,8 @@ static int exynos_drm_load(struct drm_device *dev, unsigned long flags)
 	exynos_drm_device_unregister(dev);
 err_vblank:
 	drm_vblank_cleanup(dev);
+err_release_iommu_mapping:
+	drm_release_iommu_mapping(dev);
 err_crtc:
 	drm_mode_config_cleanup(dev);
 	kfree(private);
@@ -142,6 +161,8 @@ static int exynos_drm_unload(struct drm_device *dev)
 	drm_vblank_cleanup(dev);
 	drm_kms_helper_poll_fini(dev);
 	drm_mode_config_cleanup(dev);
+
+	drm_release_iommu_mapping(dev);
 	kfree(dev->dev_private);
 
 	dev->dev_private = NULL;
@@ -229,6 +250,14 @@ static struct drm_ioctl_desc exynos_ioctls[] = {
 			exynos_g2d_set_cmdlist_ioctl, DRM_UNLOCKED | DRM_AUTH),
 	DRM_IOCTL_DEF_DRV(EXYNOS_G2D_EXEC,
 			exynos_g2d_exec_ioctl, DRM_UNLOCKED | DRM_AUTH),
+	DRM_IOCTL_DEF_DRV(EXYNOS_IPP_GET_PROPERTY,
+			exynos_drm_ipp_get_property, DRM_UNLOCKED | DRM_AUTH),
+	DRM_IOCTL_DEF_DRV(EXYNOS_IPP_SET_PROPERTY,
+			exynos_drm_ipp_set_property, DRM_UNLOCKED | DRM_AUTH),
+	DRM_IOCTL_DEF_DRV(EXYNOS_IPP_QUEUE_BUF,
+			exynos_drm_ipp_queue_buf, DRM_UNLOCKED | DRM_AUTH),
+	DRM_IOCTL_DEF_DRV(EXYNOS_IPP_CMD_CTRL,
+			exynos_drm_ipp_cmd_ctrl, DRM_UNLOCKED | DRM_AUTH),
 };
 
 static const struct file_operations exynos_drm_driver_fops = {
@@ -279,6 +308,7 @@ static int exynos_drm_platform_probe(struct platform_device *pdev)
 {
 	DRM_DEBUG_DRIVER("%s\n", __FILE__);
 
+	pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
 	exynos_drm_driver.num_ioctls = DRM_ARRAY_SIZE(exynos_ioctls);
 
 	return drm_platform_init(&exynos_drm_driver, pdev);
@@ -324,6 +354,10 @@ static int __init exynos_drm_init(void)
 	ret = platform_driver_register(&exynos_drm_common_hdmi_driver);
 	if (ret < 0)
 		goto out_common_hdmi;
+
+	ret = exynos_platform_device_hdmi_register();
+	if (ret < 0)
+		goto out_common_hdmi_dev;
 #endif
 
 #ifdef CONFIG_DRM_EXYNOS_VIDI
@@ -338,24 +372,80 @@ static int __init exynos_drm_init(void)
 		goto out_g2d;
 #endif
 
+#ifdef CONFIG_DRM_EXYNOS_FIMC
+	ret = platform_driver_register(&fimc_driver);
+	if (ret < 0)
+		goto out_fimc;
+#endif
+
+#ifdef CONFIG_DRM_EXYNOS_ROTATOR
+	ret = platform_driver_register(&rotator_driver);
+	if (ret < 0)
+		goto out_rotator;
+#endif
+
+#ifdef CONFIG_DRM_EXYNOS_GSC
+	ret = platform_driver_register(&gsc_driver);
+	if (ret < 0)
+		goto out_gsc;
+#endif
+
+#ifdef CONFIG_DRM_EXYNOS_IPP
+	ret = platform_driver_register(&ipp_driver);
+	if (ret < 0)
+		goto out_ipp;
+#endif
+
 	ret = platform_driver_register(&exynos_drm_platform_driver);
 	if (ret < 0)
+		goto out_drm;
+
+	exynos_drm_pdev = platform_device_register_simple("exynos-drm", -1,
+				NULL, 0);
+	if (IS_ERR_OR_NULL(exynos_drm_pdev)) {
+		ret = PTR_ERR(exynos_drm_pdev);
 		goto out;
+	}
 
 	return 0;
 
 out:
+	platform_driver_unregister(&exynos_drm_platform_driver);
+
+out_drm:
+#ifdef CONFIG_DRM_EXYNOS_IPP
+	platform_driver_unregister(&ipp_driver);
+out_ipp:
+#endif
+
+#ifdef CONFIG_DRM_EXYNOS_GSC
+	platform_driver_unregister(&gsc_driver);
out_gsc:
+#endif
+
+#ifdef CONFIG_DRM_EXYNOS_ROTATOR
+	platform_driver_unregister(&rotator_driver);
+out_rotator:
+#endif
+
+#ifdef CONFIG_DRM_EXYNOS_FIMC
+	platform_driver_unregister(&fimc_driver);
+out_fimc:
+#endif
+
 #ifdef CONFIG_DRM_EXYNOS_G2D
 	platform_driver_unregister(&g2d_driver);
 out_g2d:
 #endif
 
 #ifdef CONFIG_DRM_EXYNOS_VIDI
-out_vidi:
 	platform_driver_unregister(&vidi_driver);
+out_vidi:
 #endif
 
 #ifdef CONFIG_DRM_EXYNOS_HDMI
+	exynos_platform_device_hdmi_unregister();
+out_common_hdmi_dev:
 	platform_driver_unregister(&exynos_drm_common_hdmi_driver);
 out_common_hdmi:
 	platform_driver_unregister(&mixer_driver);
@@ -375,13 +465,32 @@ static void __exit exynos_drm_exit(void)
 {
 	DRM_DEBUG_DRIVER("%s\n", __FILE__);
 
+	platform_device_unregister(exynos_drm_pdev);
+
 	platform_driver_unregister(&exynos_drm_platform_driver);
 
+#ifdef CONFIG_DRM_EXYNOS_IPP
+	platform_driver_unregister(&ipp_driver);
+#endif
+
+#ifdef CONFIG_DRM_EXYNOS_GSC
+	platform_driver_unregister(&gsc_driver);
+#endif
+
+#ifdef CONFIG_DRM_EXYNOS_ROTATOR
+	platform_driver_unregister(&rotator_driver);
+#endif
+
+#ifdef CONFIG_DRM_EXYNOS_FIMC
+	platform_driver_unregister(&fimc_driver);
+#endif
+
 #ifdef CONFIG_DRM_EXYNOS_G2D
 	platform_driver_unregister(&g2d_driver);
 #endif
 
 #ifdef CONFIG_DRM_EXYNOS_HDMI
+	exynos_platform_device_hdmi_unregister();
 	platform_driver_unregister(&exynos_drm_common_hdmi_driver);
 	platform_driver_unregister(&mixer_driver);
 	platform_driver_unregister(&hdmi_driver);
...
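The reworked init path follows the usual reverse-order goto unwind: each newly registered driver gets a label that undoes everything registered before it. Stripped to two hypothetical drivers, the idiom looks like:

	static int __init example_init(void)
	{
		int ret;

		ret = platform_driver_register(&a_driver);
		if (ret < 0)
			return ret;

		ret = platform_driver_register(&b_driver);
		if (ret < 0)
			goto err_unregister_a;	/* undo everything before b */

		return 0;

	err_unregister_a:
		platform_driver_unregister(&a_driver);
		return ret;
	}

This is also why the out_vidi: label moves below its unregister call above: the labels must mirror the registration order exactly.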
@@ -74,8 +74,6 @@ enum exynos_drm_output_type {
  * @commit: apply hardware specific overlay data to registers.
  * @enable: enable hardware specific overlay.
  * @disable: disable hardware specific overlay.
- * @wait_for_vblank: wait for vblank interrupt to make sure that
- *	hardware overlay is disabled.
  */
 struct exynos_drm_overlay_ops {
 	void (*mode_set)(struct device *subdrv_dev,
@@ -83,7 +81,6 @@ struct exynos_drm_overlay_ops {
 	void (*commit)(struct device *subdrv_dev, int zpos);
 	void (*enable)(struct device *subdrv_dev, int zpos);
 	void (*disable)(struct device *subdrv_dev, int zpos);
-	void (*wait_for_vblank)(struct device *subdrv_dev);
 };
 
 /*
@@ -110,7 +107,6 @@ struct exynos_drm_overlay_ops {
  * @pixel_format: fourcc pixel format of this overlay
  * @dma_addr: array of bus(accessed by dma) address to the memory region
  *	allocated for a overlay.
- * @vaddr: array of virtual memory addresss to this overlay.
  * @zpos: order of overlay layer(z position).
  * @default_win: a window to be enabled.
  * @color_key: color key on or off.
@@ -142,7 +138,6 @@ struct exynos_drm_overlay {
 	unsigned int pitch;
 	uint32_t pixel_format;
 	dma_addr_t dma_addr[MAX_FB_BUFFER];
-	void __iomem *vaddr[MAX_FB_BUFFER];
 	int zpos;
 	bool default_win;
@@ -186,6 +181,8 @@ struct exynos_drm_display_ops {
  * @commit: set current hw specific display mode to hw.
  * @enable_vblank: specific driver callback for enabling vblank interrupt.
  * @disable_vblank: specific driver callback for disabling vblank interrupt.
+ * @wait_for_vblank: wait for vblank interrupt to make sure that
+ *	hardware overlay is updated.
  */
 struct exynos_drm_manager_ops {
 	void (*dpms)(struct device *subdrv_dev, int mode);
@@ -200,6 +197,7 @@ struct exynos_drm_manager_ops {
 	void (*commit)(struct device *subdrv_dev);
 	int (*enable_vblank)(struct device *subdrv_dev);
 	void (*disable_vblank)(struct device *subdrv_dev);
+	void (*wait_for_vblank)(struct device *subdrv_dev);
 };
 
 /*
@@ -231,16 +229,28 @@ struct exynos_drm_g2d_private {
 	struct device		*dev;
 	struct list_head	inuse_cmdlist;
 	struct list_head	event_list;
-	struct list_head	gem_list;
-	unsigned int		gem_nr;
+	struct list_head	userptr_list;
+};
+
+struct exynos_drm_ipp_private {
+	struct device	*dev;
+	struct list_head	event_list;
 };
 
 struct drm_exynos_file_private {
 	struct exynos_drm_g2d_private	*g2d_priv;
+	struct exynos_drm_ipp_private	*ipp_priv;
 };
 
 /*
  * Exynos drm private structure.
+ *
+ * @da_start: start address to device address space.
+ *	with iommu, device address space starts from this address
+ *	otherwise default one.
+ * @da_space_size: size of device address space.
+ *	if 0 then default value is used for it.
+ * @da_space_order: order to device address space.
  */
 struct exynos_drm_private {
 	struct drm_fb_helper *fb_helper;
@@ -255,6 +265,10 @@ struct exynos_drm_private {
 	struct drm_crtc *crtc[MAX_CRTC];
 	struct drm_property *plane_zpos_property;
 	struct drm_property *crtc_mode_property;
+
+	unsigned long da_start;
+	unsigned long da_space_size;
+	unsigned long da_space_order;
 };
 
 /*
@@ -318,10 +332,25 @@ int exynos_drm_subdrv_unregister(struct exynos_drm_subdrv *drm_subdrv);
 int exynos_drm_subdrv_open(struct drm_device *dev, struct drm_file *file);
 void exynos_drm_subdrv_close(struct drm_device *dev, struct drm_file *file);
 
+/*
+ * this function registers exynos drm hdmi platform device. It ensures only one
+ * instance of the device is created.
+ */
+extern int exynos_platform_device_hdmi_register(void);
+
+/*
+ * this function unregisters exynos drm hdmi platform device if it exists.
+ */
+void exynos_platform_device_hdmi_unregister(void);
+
 extern struct platform_driver fimd_driver;
 extern struct platform_driver hdmi_driver;
 extern struct platform_driver mixer_driver;
 extern struct platform_driver exynos_drm_common_hdmi_driver;
 extern struct platform_driver vidi_driver;
 extern struct platform_driver g2d_driver;
+extern struct platform_driver fimc_driver;
+extern struct platform_driver rotator_driver;
+extern struct platform_driver gsc_driver;
+extern struct platform_driver ipp_driver;
 #endif
@@ -234,6 +234,32 @@ static void exynos_drm_encoder_commit(struct drm_encoder *encoder)
 	exynos_encoder->dpms = DRM_MODE_DPMS_ON;
 }
 
+void exynos_drm_encoder_complete_scanout(struct drm_framebuffer *fb)
+{
+	struct exynos_drm_encoder *exynos_encoder;
+	struct exynos_drm_manager_ops *ops;
+	struct drm_device *dev = fb->dev;
+	struct drm_encoder *encoder;
+
+	/*
+	 * make sure that overlay data are updated to real hardware
+	 * for all encoders.
+	 */
+	list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+		exynos_encoder = to_exynos_encoder(encoder);
+		ops = exynos_encoder->manager->ops;
+
+		/*
+		 * wait for vblank interrupt
+		 * - this makes sure that overlay data are updated to
+		 *	real hardware.
+		 */
+		if (ops->wait_for_vblank)
+			ops->wait_for_vblank(exynos_encoder->manager->dev);
+	}
+}
+
 static void exynos_drm_encoder_disable(struct drm_encoder *encoder)
 {
 	struct drm_plane *plane;
@@ -505,14 +531,4 @@ void exynos_drm_encoder_plane_disable(struct drm_encoder *encoder, void *data)
 
 	if (overlay_ops && overlay_ops->disable)
 		overlay_ops->disable(manager->dev, zpos);
-
-	/*
-	 * wait for vblank interrupt
-	 * - this makes sure that hardware overlay is disabled to avoid
-	 * for the dma accesses to memory after gem buffer was released
-	 * because the setting for disabling the overlay will be updated
-	 * at vsync.
-	 */
-	if (overlay_ops && overlay_ops->wait_for_vblank)
-		overlay_ops->wait_for_vblank(manager->dev);
 }
@@ -46,5 +46,6 @@ void exynos_drm_encoder_plane_mode_set(struct drm_encoder *encoder, void *data);
 void exynos_drm_encoder_plane_commit(struct drm_encoder *encoder, void *data);
 void exynos_drm_encoder_plane_enable(struct drm_encoder *encoder, void *data);
 void exynos_drm_encoder_plane_disable(struct drm_encoder *encoder, void *data);
+void exynos_drm_encoder_complete_scanout(struct drm_framebuffer *fb);
 
 #endif
@@ -30,10 +30,13 @@
 #include <drm/drm_crtc.h>
 #include <drm/drm_crtc_helper.h>
 #include <drm/drm_fb_helper.h>
+#include <uapi/drm/exynos_drm.h>
 
 #include "exynos_drm_drv.h"
 #include "exynos_drm_fb.h"
 #include "exynos_drm_gem.h"
+#include "exynos_drm_iommu.h"
+#include "exynos_drm_encoder.h"
 
 #define to_exynos_fb(x)	container_of(x, struct exynos_drm_fb, fb)
@@ -50,6 +53,32 @@ struct exynos_drm_fb {
 	struct exynos_drm_gem_obj	*exynos_gem_obj[MAX_FB_BUFFER];
 };
 
+static int check_fb_gem_memory_type(struct drm_device *drm_dev,
+				struct exynos_drm_gem_obj *exynos_gem_obj)
+{
+	unsigned int flags;
+
+	/*
+	 * if exynos drm driver supports iommu then framebuffer can use
+	 * all the buffer types.
+	 */
+	if (is_drm_iommu_supported(drm_dev))
+		return 0;
+
+	flags = exynos_gem_obj->flags;
+
+	/*
+	 * without iommu support, not support physically non-continuous memory
+	 * for framebuffer.
+	 */
+	if (IS_NONCONTIG_BUFFER(flags)) {
+		DRM_ERROR("cannot use this gem memory type for fb.\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static void exynos_drm_fb_destroy(struct drm_framebuffer *fb)
 {
 	struct exynos_drm_fb *exynos_fb = to_exynos_fb(fb);
@@ -57,6 +86,9 @@ static void exynos_drm_fb_destroy(struct drm_framebuffer *fb)
 
 	DRM_DEBUG_KMS("%s\n", __FILE__);
 
+	/* make sure that overlay data are updated before relesing fb. */
+	exynos_drm_encoder_complete_scanout(fb);
+
 	drm_framebuffer_cleanup(fb);
 
 	for (i = 0; i < ARRAY_SIZE(exynos_fb->exynos_gem_obj); i++) {
@@ -128,23 +160,32 @@ exynos_drm_framebuffer_init(struct drm_device *dev,
 			    struct drm_gem_object *obj)
 {
 	struct exynos_drm_fb *exynos_fb;
+	struct exynos_drm_gem_obj *exynos_gem_obj;
 	int ret;
 
+	exynos_gem_obj = to_exynos_gem_obj(obj);
+
+	ret = check_fb_gem_memory_type(dev, exynos_gem_obj);
+	if (ret < 0) {
+		DRM_ERROR("cannot use this gem memory type for fb.\n");
+		return ERR_PTR(-EINVAL);
+	}
+
 	exynos_fb = kzalloc(sizeof(*exynos_fb), GFP_KERNEL);
 	if (!exynos_fb) {
 		DRM_ERROR("failed to allocate exynos drm framebuffer\n");
 		return ERR_PTR(-ENOMEM);
 	}
 
+	drm_helper_mode_fill_fb_struct(&exynos_fb->fb, mode_cmd);
+	exynos_fb->exynos_gem_obj[0] = exynos_gem_obj;
+
 	ret = drm_framebuffer_init(dev, &exynos_fb->fb, &exynos_drm_fb_funcs);
 	if (ret) {
 		DRM_ERROR("failed to initialize framebuffer\n");
 		return ERR_PTR(ret);
 	}
 
-	drm_helper_mode_fill_fb_struct(&exynos_fb->fb, mode_cmd);
-	exynos_fb->exynos_gem_obj[0] = to_exynos_gem_obj(obj);
-
 	return &exynos_fb->fb;
 }
@@ -190,9 +231,8 @@ exynos_user_fb_create(struct drm_device *dev, struct drm_file *file_priv,
 		      struct drm_mode_fb_cmd2 *mode_cmd)
 {
 	struct drm_gem_object *obj;
-	struct drm_framebuffer *fb;
 	struct exynos_drm_fb *exynos_fb;
-	int i;
+	int i, ret;
 
 	DRM_DEBUG_KMS("%s\n", __FILE__);
@@ -202,30 +242,56 @@ exynos_user_fb_create(struct drm_device *dev, struct drm_file *file_priv,
 		return ERR_PTR(-ENOENT);
 	}
 
-	fb = exynos_drm_framebuffer_init(dev, mode_cmd, obj);
-	if (IS_ERR(fb)) {
-		drm_gem_object_unreference_unlocked(obj);
-		return fb;
+	exynos_fb = kzalloc(sizeof(*exynos_fb), GFP_KERNEL);
+	if (!exynos_fb) {
+		DRM_ERROR("failed to allocate exynos drm framebuffer\n");
+		return ERR_PTR(-ENOMEM);
 	}
 
-	exynos_fb = to_exynos_fb(fb);
+	drm_helper_mode_fill_fb_struct(&exynos_fb->fb, mode_cmd);
+	exynos_fb->exynos_gem_obj[0] = to_exynos_gem_obj(obj);
 	exynos_fb->buf_cnt = exynos_drm_format_num_buffers(mode_cmd);
 
 	DRM_DEBUG_KMS("buf_cnt = %d\n", exynos_fb->buf_cnt);
 
 	for (i = 1; i < exynos_fb->buf_cnt; i++) {
+		struct exynos_drm_gem_obj *exynos_gem_obj;
+		int ret;
+
 		obj = drm_gem_object_lookup(dev, file_priv,
 				mode_cmd->handles[i]);
 		if (!obj) {
 			DRM_ERROR("failed to lookup gem object\n");
-			exynos_drm_fb_destroy(fb);
+			kfree(exynos_fb);
 			return ERR_PTR(-ENOENT);
 		}
 
+		exynos_gem_obj = to_exynos_gem_obj(obj);
+
+		ret = check_fb_gem_memory_type(dev, exynos_gem_obj);
+		if (ret < 0) {
+			DRM_ERROR("cannot use this gem memory type for fb.\n");
+			kfree(exynos_fb);
+			return ERR_PTR(ret);
+		}
+
 		exynos_fb->exynos_gem_obj[i] = to_exynos_gem_obj(obj);
 	}
 
-	return fb;
+	ret = drm_framebuffer_init(dev, &exynos_fb->fb, &exynos_drm_fb_funcs);
+	if (ret) {
+		for (i = 0; i < exynos_fb->buf_cnt; i++) {
+			struct exynos_drm_gem_obj *gem_obj;
+
+			gem_obj = exynos_fb->exynos_gem_obj[i];
+			drm_gem_object_unreference_unlocked(&gem_obj->base);
+		}
+
+		kfree(exynos_fb);
+		return ERR_PTR(ret);
+	}
+
+	return &exynos_fb->fb;
 }
@@ -243,9 +309,7 @@ struct exynos_drm_gem_buf *exynos_drm_fb_buffer(struct drm_framebuffer *fb,
 	if (!buffer)
 		return NULL;
 
-	DRM_DEBUG_KMS("vaddr = 0x%lx, dma_addr = 0x%lx\n",
-			(unsigned long)buffer->kvaddr,
-			(unsigned long)buffer->dma_addr);
+	DRM_DEBUG_KMS("dma_addr = 0x%lx\n", (unsigned long)buffer->dma_addr);
 
 	return buffer;
 }
...
@@ -46,8 +46,38 @@ struct exynos_drm_fbdev {
 	struct exynos_drm_gem_obj	*exynos_gem_obj;
 };
 
+static int exynos_drm_fb_mmap(struct fb_info *info,
+			struct vm_area_struct *vma)
+{
+	struct drm_fb_helper *helper = info->par;
+	struct exynos_drm_fbdev *exynos_fbd = to_exynos_fbdev(helper);
+	struct exynos_drm_gem_obj *exynos_gem_obj = exynos_fbd->exynos_gem_obj;
+	struct exynos_drm_gem_buf *buffer = exynos_gem_obj->buffer;
+	unsigned long vm_size;
+	int ret;
+
+	DRM_DEBUG_KMS("%s\n", __func__);
+
+	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
+
+	vm_size = vma->vm_end - vma->vm_start;
+
+	if (vm_size > buffer->size)
+		return -EINVAL;
+
+	ret = dma_mmap_attrs(helper->dev->dev, vma, buffer->pages,
+		buffer->dma_addr, buffer->size, &buffer->dma_attrs);
+	if (ret < 0) {
+		DRM_ERROR("failed to mmap.\n");
+		return ret;
+	}
+
+	return 0;
+}
+
 static struct fb_ops exynos_drm_fb_ops = {
 	.owner		= THIS_MODULE,
+	.fb_mmap	= exynos_drm_fb_mmap,
 	.fb_fillrect	= cfb_fillrect,
 	.fb_copyarea	= cfb_copyarea,
 	.fb_imageblit	= cfb_imageblit,
@@ -79,6 +109,17 @@ static int exynos_drm_fbdev_update(struct drm_fb_helper *helper,
 		return -EFAULT;
 	}
 
+	/* map pages with kernel virtual space. */
+	if (!buffer->kvaddr) {
+		unsigned int nr_pages = buffer->size >> PAGE_SHIFT;
+		buffer->kvaddr = vmap(buffer->pages, nr_pages, VM_MAP,
+				pgprot_writecombine(PAGE_KERNEL));
+		if (!buffer->kvaddr) {
+			DRM_ERROR("failed to map pages to kernel space.\n");
+			return -EIO;
+		}
+	}
+
 	/* buffer count to framebuffer always is 1 at booting time. */
 	exynos_drm_fb_set_buf_cnt(fb, 1);
@@ -87,8 +128,8 @@ static int exynos_drm_fbdev_update(struct drm_fb_helper *helper,
 
 	dev->mode_config.fb_base = (resource_size_t)buffer->dma_addr;
 	fbi->screen_base = buffer->kvaddr + offset;
-	fbi->fix.smem_start = (unsigned long)(page_to_phys(buffer->pages[0]) +
-				offset);
+	fbi->fix.smem_start = (unsigned long)
+			(page_to_phys(sg_page(buffer->sgt->sgl)) + offset);
 	fbi->screen_size = size;
 	fbi->fix.smem_len = size;
@@ -134,7 +175,7 @@ static int exynos_drm_fbdev_create(struct drm_fb_helper *helper,
 	exynos_gem_obj = exynos_drm_gem_create(dev, 0, size);
 	if (IS_ERR(exynos_gem_obj)) {
 		ret = PTR_ERR(exynos_gem_obj);
-		goto out;
+		goto err_release_framebuffer;
 	}
 
 	exynos_fbdev->exynos_gem_obj = exynos_gem_obj;
@@ -144,7 +185,7 @@ static int exynos_drm_fbdev_create(struct drm_fb_helper *helper,
 	if (IS_ERR_OR_NULL(helper->fb)) {
 		DRM_ERROR("failed to create drm framebuffer.\n");
 		ret = PTR_ERR(helper->fb);
-		goto out;
+		goto err_destroy_gem;
 	}
 
 	helper->fbdev = fbi;
@@ -156,14 +197,24 @@ static int exynos_drm_fbdev_create(struct drm_fb_helper *helper,
 	ret = fb_alloc_cmap(&fbi->cmap, 256, 0);
 	if (ret) {
 		DRM_ERROR("failed to allocate cmap.\n");
-		goto out;
+		goto err_destroy_framebuffer;
 	}
 
 	ret = exynos_drm_fbdev_update(helper, helper->fb);
-	if (ret < 0) {
-		fb_dealloc_cmap(&fbi->cmap);
-		goto out;
-	}
+	if (ret < 0)
+		goto err_dealloc_cmap;
+
+	mutex_unlock(&dev->struct_mutex);
+	return ret;
+
+err_dealloc_cmap:
+	fb_dealloc_cmap(&fbi->cmap);
+err_destroy_framebuffer:
+	drm_framebuffer_cleanup(helper->fb);
+err_destroy_gem:
+	exynos_drm_gem_destroy(exynos_gem_obj);
+err_release_framebuffer:
+	framebuffer_release(fbi);
 
 /*
  * if failed, all resources allocated above would be released by
@@ -265,8 +316,13 @@ int exynos_drm_fbdev_init(struct drm_device *dev)
 static void exynos_drm_fbdev_destroy(struct drm_device *dev,
 				      struct drm_fb_helper *fb_helper)
 {
+	struct exynos_drm_fbdev *exynos_fbd = to_exynos_fbdev(fb_helper);
+	struct exynos_drm_gem_obj *exynos_gem_obj = exynos_fbd->exynos_gem_obj;
 	struct drm_framebuffer *fb;
 
+	if (exynos_gem_obj->buffer->kvaddr)
+		vunmap(exynos_gem_obj->buffer->kvaddr);
+
 	/* release drm framebuffer and real buffer */
 	if (fb_helper->fb && fb_helper->fb->funcs) {
 		fb = fb_helper->fb;
...
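Because the backing storage is now allocated with DMA_ATTR_NO_KERNEL_MAPPING, the fbdev code creates a CPU mapping lazily with vmap() and hands user mappings out via dma_mmap_attrs(). A sketch of the lazy kernel mapping, assuming pages is the cookie returned by dma_alloc_attrs() on ARM:

	static void *map_for_cpu(struct page **pages, size_t size)
	{
		unsigned int nr_pages = size >> PAGE_SHIFT;

		/* writecombine to match the device-side attributes */
		return vmap(pages, nr_pages, VM_MAP,
			    pgprot_writecombine(PAGE_KERNEL));
	}

The matching vunmap() in exynos_drm_fbdev_destroy() ties the mapping's lifetime to the fbdev emulation rather than to the GEM object itself.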
(This diff has been collapsed.)
/*
* Copyright (c) 2012 Samsung Electronics Co., Ltd.
*
* Authors:
* Eunchul Kim <chulspro.kim@samsung.com>
* Jinyoung Jeon <jy0.jeon@samsung.com>
* Sangmin Lee <lsmin.lee@samsung.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef _EXYNOS_DRM_FIMC_H_
#define _EXYNOS_DRM_FIMC_H_
/*
* TODO
* FIMD output interface notifier callback.
*/
#endif /* _EXYNOS_DRM_FIMC_H_ */
@@ -17,6 +17,7 @@
 #include <linux/module.h>
 #include <linux/platform_device.h>
 #include <linux/clk.h>
+#include <linux/of_device.h>
 #include <linux/pm_runtime.h>
 
 #include <video/samsung_fimd.h>
@@ -25,6 +26,7 @@
 #include "exynos_drm_drv.h"
 #include "exynos_drm_fbdev.h"
 #include "exynos_drm_crtc.h"
+#include "exynos_drm_iommu.h"
 
 /*
  * FIMD is stand for Fully Interactive Mobile Display and
@@ -78,10 +80,10 @@ struct fimd_win_data {
 	unsigned int		fb_height;
 	unsigned int		bpp;
 	dma_addr_t		dma_addr;
-	void __iomem		*vaddr;
 	unsigned int		buf_offsize;
 	unsigned int		line_size;	/* bytes */
 	bool			enabled;
+	bool			resume;
 };
 
 struct fimd_context {
@@ -99,13 +101,34 @@ struct fimd_context {
 	u32				vidcon1;
 	bool				suspended;
 	struct mutex			lock;
+	wait_queue_head_t		wait_vsync_queue;
+	atomic_t			wait_vsync_event;
 
 	struct exynos_drm_panel_info *panel;
 };
 
+#ifdef CONFIG_OF
+static const struct of_device_id fimd_driver_dt_match[] = {
+	{ .compatible = "samsung,exynos4-fimd",
+	  .data = &exynos4_fimd_driver_data },
+	{ .compatible = "samsung,exynos5-fimd",
+	  .data = &exynos5_fimd_driver_data },
+	{},
+};
+MODULE_DEVICE_TABLE(of, fimd_driver_dt_match);
+#endif
+
 static inline struct fimd_driver_data *drm_fimd_get_driver_data(
 	struct platform_device *pdev)
 {
+#ifdef CONFIG_OF
+	const struct of_device_id *of_id =
+			of_match_device(fimd_driver_dt_match, &pdev->dev);
+
+	if (of_id)
+		return (struct fimd_driver_data *)of_id->data;
+#endif
+
 	return (struct fimd_driver_data *)
 		platform_get_device_id(pdev)->driver_data;
 }
@@ -240,7 +263,9 @@ static void fimd_commit(struct device *dev)
 
 	/* setup horizontal and vertical display size. */
 	val = VIDTCON2_LINEVAL(timing->yres - 1) |
-	       VIDTCON2_HOZVAL(timing->xres - 1);
+	       VIDTCON2_HOZVAL(timing->xres - 1) |
+	       VIDTCON2_LINEVAL_E(timing->yres - 1) |
+	       VIDTCON2_HOZVAL_E(timing->xres - 1);
 	writel(val, ctx->regs + driver_data->timing_base + VIDTCON2);
 
 	/* setup clock source, clock divider, enable dma. */
@@ -307,12 +332,32 @@ static void fimd_disable_vblank(struct device *dev)
 	}
 }
 
+static void fimd_wait_for_vblank(struct device *dev)
+{
+	struct fimd_context *ctx = get_fimd_context(dev);
+
+	if (ctx->suspended)
+		return;
+
+	atomic_set(&ctx->wait_vsync_event, 1);
+
+	/*
+	 * wait for FIMD to signal VSYNC interrupt or return after
+	 * timeout which is set to 50ms (refresh rate of 20).
+	 */
+	if (!wait_event_timeout(ctx->wait_vsync_queue,
+				!atomic_read(&ctx->wait_vsync_event),
+				DRM_HZ/20))
+		DRM_DEBUG_KMS("vblank wait timed out.\n");
+}
+
 static struct exynos_drm_manager_ops fimd_manager_ops = {
 	.dpms = fimd_dpms,
 	.apply = fimd_apply,
 	.commit = fimd_commit,
 	.enable_vblank = fimd_enable_vblank,
 	.disable_vblank = fimd_disable_vblank,
+	.wait_for_vblank = fimd_wait_for_vblank,
 };
 
 static void fimd_win_mode_set(struct device *dev,
@@ -351,7 +396,6 @@ static void fimd_win_mode_set(struct device *dev,
 	win_data->fb_width = overlay->fb_width;
 	win_data->fb_height = overlay->fb_height;
 	win_data->dma_addr = overlay->dma_addr[0] + offset;
-	win_data->vaddr = overlay->vaddr[0] + offset;
 	win_data->bpp = overlay->bpp;
 	win_data->buf_offsize = (overlay->fb_width - overlay->crtc_width) *
 				(overlay->bpp >> 3);
@@ -361,9 +405,7 @@ static void fimd_win_mode_set(struct device *dev,
 			win_data->offset_x, win_data->offset_y);
 	DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n",
 			win_data->ovl_width, win_data->ovl_height);
-	DRM_DEBUG_KMS("paddr = 0x%lx, vaddr = 0x%lx\n",
-			(unsigned long)win_data->dma_addr,
-			(unsigned long)win_data->vaddr);
+	DRM_DEBUG_KMS("paddr = 0x%lx\n", (unsigned long)win_data->dma_addr);
 	DRM_DEBUG_KMS("fb_width = %d, crtc_width = %d\n",
 			overlay->fb_width, overlay->crtc_width);
 }
@@ -451,6 +493,8 @@ static void fimd_win_commit(struct device *dev, int zpos)
 	struct fimd_win_data *win_data;
 	int win = zpos;
 	unsigned long val, alpha, size;
+	unsigned int last_x;
+	unsigned int last_y;
 
 	DRM_DEBUG_KMS("%s\n", __FILE__);
@@ -496,24 +540,32 @@ static void fimd_win_commit(struct device *dev, int zpos)
 
 	/* buffer size */
 	val = VIDW_BUF_SIZE_OFFSET(win_data->buf_offsize) |
-		VIDW_BUF_SIZE_PAGEWIDTH(win_data->line_size);
+		VIDW_BUF_SIZE_PAGEWIDTH(win_data->line_size) |
+		VIDW_BUF_SIZE_OFFSET_E(win_data->buf_offsize) |
+		VIDW_BUF_SIZE_PAGEWIDTH_E(win_data->line_size);
 	writel(val, ctx->regs + VIDWx_BUF_SIZE(win, 0));
 
 	/* OSD position */
 	val = VIDOSDxA_TOPLEFT_X(win_data->offset_x) |
-		VIDOSDxA_TOPLEFT_Y(win_data->offset_y);
+		VIDOSDxA_TOPLEFT_Y(win_data->offset_y) |
+		VIDOSDxA_TOPLEFT_X_E(win_data->offset_x) |
+		VIDOSDxA_TOPLEFT_Y_E(win_data->offset_y);
 	writel(val, ctx->regs + VIDOSD_A(win));
 
-	val = VIDOSDxB_BOTRIGHT_X(win_data->offset_x +
-			win_data->ovl_width - 1) |
-		VIDOSDxB_BOTRIGHT_Y(win_data->offset_y +
-			win_data->ovl_height - 1);
+	last_x = win_data->offset_x + win_data->ovl_width;
+	if (last_x)
+		last_x--;
+	last_y = win_data->offset_y + win_data->ovl_height;
+	if (last_y)
+		last_y--;
+
+	val = VIDOSDxB_BOTRIGHT_X(last_x) | VIDOSDxB_BOTRIGHT_Y(last_y) |
+		VIDOSDxB_BOTRIGHT_X_E(last_x) | VIDOSDxB_BOTRIGHT_Y_E(last_y);
+
 	writel(val, ctx->regs + VIDOSD_B(win));
 
 	DRM_DEBUG_KMS("osd pos: tx = %d, ty = %d, bx = %d, by = %d\n",
-			win_data->offset_x, win_data->offset_y,
-			win_data->offset_x + win_data->ovl_width - 1,
-			win_data->offset_y + win_data->ovl_height - 1);
+			win_data->offset_x, win_data->offset_y, last_x, last_y);
 
 	/* hardware window 0 doesn't support alpha channel. */
 	if (win != 0) {
@@ -573,6 +625,12 @@ static void fimd_win_disable(struct device *dev, int zpos)
 
 	win_data = &ctx->win_data[win];
 
+	if (ctx->suspended) {
+		/* do not resume this window*/
+		win_data->resume = false;
+		return;
+	}
+
 	/* protect windows */
 	val = readl(ctx->regs + SHADOWCON);
 	val |= SHADOWCON_WINx_PROTECT(win);
@@ -592,22 +650,10 @@ static void fimd_win_disable(struct device *dev, int zpos)
 	win_data->enabled = false;
 }
 
-static void fimd_wait_for_vblank(struct device *dev)
-{
-	struct fimd_context *ctx = get_fimd_context(dev);
-	int ret;
-
-	ret = wait_for((__raw_readl(ctx->regs + VIDCON1) &
-					VIDCON1_VSTATUS_VSYNC), 50);
-	if (ret < 0)
-		DRM_DEBUG_KMS("vblank wait timed out.\n");
-}
-
 static struct exynos_drm_overlay_ops fimd_overlay_ops = {
 	.mode_set = fimd_win_mode_set,
 	.commit = fimd_win_commit,
 	.disable = fimd_win_disable,
-	.wait_for_vblank = fimd_wait_for_vblank,
 };
 
 static struct exynos_drm_manager fimd_manager = {
@@ -623,7 +669,6 @@ static void fimd_finish_pageflip(struct drm_device *drm_dev, int crtc)
 	struct drm_pending_vblank_event *e, *t;
 	struct timeval now;
 	unsigned long flags;
-	bool is_checked = false;
 
 	spin_lock_irqsave(&drm_dev->event_lock, flags);
@@ -633,8 +678,6 @@ static void fimd_finish_pageflip(struct drm_device *drm_dev, int crtc)
 		if (crtc != e->pipe)
 			continue;
 
-		is_checked = true;
-
 		do_gettimeofday(&now);
 		e->event.sequence = 0;
 		e->event.tv_sec = now.tv_sec;
@@ -642,22 +685,7 @@ static void fimd_finish_pageflip(struct drm_device *drm_dev, int crtc)
 
 		list_move_tail(&e->base.link, &e->base.file_priv->event_list);
 		wake_up_interruptible(&e->base.file_priv->event_wait);
-	}
-
-	if (is_checked) {
-		/*
-		 * call drm_vblank_put only in case that drm_vblank_get was
-		 * called.
-		 */
-		if (atomic_read(&drm_dev->vblank_refcount[crtc]) > 0)
-			drm_vblank_put(drm_dev, crtc);
-
-		/*
-		 * don't off vblank if vblank_disable_allowed is 1,
-		 * because vblank would be off by timer handler.
-		 */
-		if (!drm_dev->vblank_disable_allowed)
-			drm_vblank_off(drm_dev, crtc);
+		drm_vblank_put(drm_dev, crtc);
 	}
 
 	spin_unlock_irqrestore(&drm_dev->event_lock, flags);
@@ -684,6 +712,11 @@ static irqreturn_t fimd_irq_handler(int irq, void *dev_id)
 	drm_handle_vblank(drm_dev, manager->pipe);
 	fimd_finish_pageflip(drm_dev, manager->pipe);
 
+	/* set wait vsync event to zero and wake up queue. */
+	if (atomic_read(&ctx->wait_vsync_event)) {
+		atomic_set(&ctx->wait_vsync_event, 0);
+		DRM_WAKEUP(&ctx->wait_vsync_queue);
+	}
 out:
 	return IRQ_HANDLED;
 }
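The new wait_for_vblank is a classic flag-plus-waitqueue handshake between process context and the VSYNC interrupt. Condensed to its core (same field names as the fimd context above):

	static void wait_one_vblank(struct fimd_context *ctx)
	{
		atomic_set(&ctx->wait_vsync_event, 1);

		/* bail out after 50ms, i.e. a refresh rate as low as 20Hz */
		if (!wait_event_timeout(ctx->wait_vsync_queue,
					!atomic_read(&ctx->wait_vsync_event),
					HZ / 20))
			DRM_DEBUG_KMS("vblank wait timed out.\n");
	}

The IRQ handler clears the flag and wakes the queue only when somebody is actually waiting, replacing the old busy-poll of the VIDCON1 status register.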
...@@ -709,6 +742,10 @@ static int fimd_subdrv_probe(struct drm_device *drm_dev, struct device *dev) ...@@ -709,6 +742,10 @@ static int fimd_subdrv_probe(struct drm_device *drm_dev, struct device *dev)
*/ */
drm_dev->vblank_disable_allowed = 1; drm_dev->vblank_disable_allowed = 1;
/* attach this sub driver to iommu mapping if supported. */
if (is_drm_iommu_supported(drm_dev))
drm_iommu_attach_device(drm_dev, dev);
return 0; return 0;
} }
...@@ -716,7 +753,9 @@ static void fimd_subdrv_remove(struct drm_device *drm_dev, struct device *dev) ...@@ -716,7 +753,9 @@ static void fimd_subdrv_remove(struct drm_device *drm_dev, struct device *dev)
{ {
DRM_DEBUG_KMS("%s\n", __FILE__); DRM_DEBUG_KMS("%s\n", __FILE__);
/* TODO. */ /* detach this sub driver from iommu mapping if supported. */
if (is_drm_iommu_supported(drm_dev))
drm_iommu_detach_device(drm_dev, dev);
} }
static int fimd_calc_clkdiv(struct fimd_context *ctx, static int fimd_calc_clkdiv(struct fimd_context *ctx,
@@ -805,11 +844,38 @@ static int fimd_clock(struct fimd_context *ctx, bool enable)
 	return 0;
 }
+static void fimd_window_suspend(struct device *dev)
+{
+	struct fimd_context *ctx = get_fimd_context(dev);
+	struct fimd_win_data *win_data;
+	int i;
+
+	for (i = 0; i < WINDOWS_NR; i++) {
+		win_data = &ctx->win_data[i];
+		win_data->resume = win_data->enabled;
+		fimd_win_disable(dev, i);
+	}
+	fimd_wait_for_vblank(dev);
+}
+
+static void fimd_window_resume(struct device *dev)
+{
+	struct fimd_context *ctx = get_fimd_context(dev);
+	struct fimd_win_data *win_data;
+	int i;
+
+	for (i = 0; i < WINDOWS_NR; i++) {
+		win_data = &ctx->win_data[i];
+		win_data->enabled = win_data->resume;
+		win_data->resume = false;
+	}
+}
+
 static int fimd_activate(struct fimd_context *ctx, bool enable)
 {
+	struct device *dev = ctx->subdrv.dev;
+
 	if (enable) {
 		int ret;
-		struct device *dev = ctx->subdrv.dev;
 
 		ret = fimd_clock(ctx, true);
 		if (ret < 0)
@@ -820,7 +886,11 @@ static int fimd_activate(struct fimd_context *ctx, bool enable)
 		/* if vblank was enabled status, enable it again. */
 		if (test_and_clear_bit(0, &ctx->irq_flags))
 			fimd_enable_vblank(dev);
+
+		fimd_window_resume(dev);
 	} else {
+		fimd_window_suspend(dev);
+
 		fimd_clock(ctx, false);
 		ctx->suspended = true;
 	}
@@ -857,18 +927,16 @@ static int __devinit fimd_probe(struct platform_device *pdev)
 	if (!ctx)
 		return -ENOMEM;
-	ctx->bus_clk = clk_get(dev, "fimd");
+	ctx->bus_clk = devm_clk_get(dev, "fimd");
 	if (IS_ERR(ctx->bus_clk)) {
 		dev_err(dev, "failed to get bus clock\n");
-		ret = PTR_ERR(ctx->bus_clk);
-		goto err_clk_get;
+		return PTR_ERR(ctx->bus_clk);
 	}
-	ctx->lcd_clk = clk_get(dev, "sclk_fimd");
+	ctx->lcd_clk = devm_clk_get(dev, "sclk_fimd");
 	if (IS_ERR(ctx->lcd_clk)) {
 		dev_err(dev, "failed to get lcd clock\n");
-		ret = PTR_ERR(ctx->lcd_clk);
-		goto err_bus_clk;
+		return PTR_ERR(ctx->lcd_clk);
 	}
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
@@ -876,14 +944,13 @@ static int __devinit fimd_probe(struct platform_device *pdev)
 	ctx->regs = devm_request_and_ioremap(&pdev->dev, res);
 	if (!ctx->regs) {
 		dev_err(dev, "failed to map registers\n");
-		ret = -ENXIO;
-		goto err_clk;
+		return -ENXIO;
 	}
 	res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
 	if (!res) {
 		dev_err(dev, "irq request failed.\n");
-		goto err_clk;
+		return -ENXIO;
 	}
 	ctx->irq = res->start;
@@ -892,13 +959,15 @@ static int __devinit fimd_probe(struct platform_device *pdev)
 			0, "drm_fimd", ctx);
 	if (ret) {
 		dev_err(dev, "irq request failed.\n");
-		goto err_clk;
+		return ret;
 	}
 	ctx->vidcon0 = pdata->vidcon0;
 	ctx->vidcon1 = pdata->vidcon1;
 	ctx->default_win = pdata->default_win;
 	ctx->panel = panel;
+	DRM_INIT_WAITQUEUE(&ctx->wait_vsync_queue);
+	atomic_set(&ctx->wait_vsync_event, 0);
 	subdrv = &ctx->subdrv;
@@ -926,17 +995,6 @@ static int __devinit fimd_probe(struct platform_device *pdev)
 	exynos_drm_subdrv_register(subdrv);
 	return 0;
-
-err_clk:
-	clk_disable(ctx->lcd_clk);
-	clk_put(ctx->lcd_clk);
-
-err_bus_clk:
-	clk_disable(ctx->bus_clk);
-	clk_put(ctx->bus_clk);
-
-err_clk_get:
-	return ret;
 }
 static int __devexit fimd_remove(struct platform_device *pdev)
@@ -960,9 +1018,6 @@ static int __devexit fimd_remove(struct platform_device *pdev)
 out:
 	pm_runtime_disable(dev);
-	clk_put(ctx->lcd_clk);
-	clk_put(ctx->bus_clk);
-
 	return 0;
 }
@@ -1056,5 +1111,6 @@ struct platform_driver fimd_driver = {
 		.name	= "exynos4-fb",
 		.owner	= THIS_MODULE,
 		.pm	= &fimd_pm_ops,
+		.of_match_table = of_match_ptr(fimd_driver_dt_match),
 	},
 };
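The probe conversion above is the reason the error-label block could be deleted: clk_get() requires a matching clk_put() on every failure path, which is what err_clk/err_bus_clk/err_clk_get implemented, while devm_clk_get() ties the clock reference to the device lifetime so each error path can simply return. A minimal sketch of the resulting shape (the driver and clock names here are placeholders, not from the commit):

static int example_probe(struct platform_device *pdev)
{
	struct clk *bus_clk;

	/*
	 * managed lookup: the clk reference is dropped automatically when
	 * the device is unbound, so no clk_put() and no unwind labels are
	 * needed in the error paths.
	 */
	bus_clk = devm_clk_get(&pdev->dev, "bus");
	if (IS_ERR(bus_clk))
		return PTR_ERR(bus_clk);

	return clk_prepare_enable(bus_clk);
}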
@@ -17,11 +17,14 @@
 #include <linux/pm_runtime.h>
 #include <linux/slab.h>
 #include <linux/workqueue.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-attrs.h>
 #include <drm/drmP.h>
 #include <drm/exynos_drm.h>
 #include "exynos_drm_drv.h"
 #include "exynos_drm_gem.h"
+#include "exynos_drm_iommu.h"
 #define G2D_HW_MAJOR_VER	4
 #define G2D_HW_MINOR_VER	1
@@ -92,11 +95,21 @@
 #define G2D_CMDLIST_POOL_SIZE	(G2D_CMDLIST_SIZE * G2D_CMDLIST_NUM)
 #define G2D_CMDLIST_DATA_NUM	(G2D_CMDLIST_SIZE / sizeof(u32) - 2)
+#define MAX_BUF_ADDR_NR		6
+
+/* maximum buffer pool size of userptr is 64MB as default */
+#define MAX_POOL	(64 * 1024 * 1024)
+
+enum {
+	BUF_TYPE_GEM = 1,
+	BUF_TYPE_USERPTR,
+};
+
 /* cmdlist data structure */
 struct g2d_cmdlist {
 	u32	head;
-	u32	data[G2D_CMDLIST_DATA_NUM];
+	unsigned long	data[G2D_CMDLIST_DATA_NUM];
 	u32	last;	/* last data offset */
 };
 struct drm_exynos_pending_g2d_event {
@@ -104,15 +117,26 @@ struct drm_exynos_pending_g2d_event {
 	struct drm_exynos_g2d_event	event;
 };
-struct g2d_gem_node {
+struct g2d_cmdlist_userptr {
 	struct list_head	list;
-	unsigned int		handle;
+	dma_addr_t		dma_addr;
+	unsigned long		userptr;
+	unsigned long		size;
+	struct page		**pages;
+	unsigned int		npages;
+	struct sg_table		*sgt;
+	struct vm_area_struct	*vma;
+	atomic_t		refcount;
+	bool			in_pool;
+	bool			out_of_list;
 };
 struct g2d_cmdlist_node {
 	struct list_head	list;
 	struct g2d_cmdlist	*cmdlist;
-	unsigned int		gem_nr;
+	unsigned int		map_nr;
+	unsigned long		handles[MAX_BUF_ADDR_NR];
+	unsigned int		obj_type[MAX_BUF_ADDR_NR];
 	dma_addr_t		dma_addr;
 	struct drm_exynos_pending_g2d_event	*event;
@@ -122,6 +146,7 @@ struct g2d_runqueue_node {
 	struct list_head	list;
 	struct list_head	run_cmdlist;
 	struct list_head	event_list;
+	struct drm_file		*filp;
 	pid_t			pid;
 	struct completion	complete;
 	int			async;
@@ -143,23 +168,33 @@ struct g2d_data {
 	struct mutex			cmdlist_mutex;
 	dma_addr_t			cmdlist_pool;
 	void				*cmdlist_pool_virt;
+	struct dma_attrs		cmdlist_dma_attrs;
 	/* runqueue*/
 	struct g2d_runqueue_node	*runqueue_node;
 	struct list_head		runqueue;
 	struct mutex			runqueue_mutex;
 	struct kmem_cache		*runqueue_slab;
+
+	unsigned long			current_pool;
+	unsigned long			max_pool;
 };
 static int g2d_init_cmdlist(struct g2d_data *g2d)
 {
 	struct device *dev = g2d->dev;
 	struct g2d_cmdlist_node *node = g2d->cmdlist_node;
+	struct exynos_drm_subdrv *subdrv = &g2d->subdrv;
 	int nr;
 	int ret;
-	g2d->cmdlist_pool_virt = dma_alloc_coherent(dev, G2D_CMDLIST_POOL_SIZE,
-						&g2d->cmdlist_pool, GFP_KERNEL);
+	init_dma_attrs(&g2d->cmdlist_dma_attrs);
+	dma_set_attr(DMA_ATTR_WRITE_COMBINE, &g2d->cmdlist_dma_attrs);
+
+	g2d->cmdlist_pool_virt = dma_alloc_attrs(subdrv->drm_dev->dev,
+						G2D_CMDLIST_POOL_SIZE,
+						&g2d->cmdlist_pool, GFP_KERNEL,
+						&g2d->cmdlist_dma_attrs);
 	if (!g2d->cmdlist_pool_virt) {
 		dev_err(dev, "failed to allocate dma memory\n");
 		return -ENOMEM;
@@ -184,18 +219,20 @@ static int g2d_init_cmdlist(struct g2d_data *g2d)
 	return 0;
 err:
-	dma_free_coherent(dev, G2D_CMDLIST_POOL_SIZE, g2d->cmdlist_pool_virt,
-			g2d->cmdlist_pool);
+	dma_free_attrs(subdrv->drm_dev->dev, G2D_CMDLIST_POOL_SIZE,
+			g2d->cmdlist_pool_virt,
+			g2d->cmdlist_pool, &g2d->cmdlist_dma_attrs);
 	return ret;
 }
 static void g2d_fini_cmdlist(struct g2d_data *g2d)
 {
-	struct device *dev = g2d->dev;
+	struct exynos_drm_subdrv *subdrv = &g2d->subdrv;
 	kfree(g2d->cmdlist_node);
-	dma_free_coherent(dev, G2D_CMDLIST_POOL_SIZE, g2d->cmdlist_pool_virt,
-			g2d->cmdlist_pool);
+	dma_free_attrs(subdrv->drm_dev->dev, G2D_CMDLIST_POOL_SIZE,
+			g2d->cmdlist_pool_virt,
+			g2d->cmdlist_pool, &g2d->cmdlist_dma_attrs);
 }
 static struct g2d_cmdlist_node *g2d_get_cmdlist(struct g2d_data *g2d)
@@ -245,62 +282,300 @@ static void g2d_add_cmdlist_to_inuse(struct exynos_drm_g2d_private *g2d_priv,
 	list_add_tail(&node->event->base.link, &g2d_priv->event_list);
 }
-static int g2d_get_cmdlist_gem(struct drm_device *drm_dev,
-				struct drm_file *file,
-				struct g2d_cmdlist_node *node)
+static void g2d_userptr_put_dma_addr(struct drm_device *drm_dev,
+					unsigned long obj,
+					bool force)
 {
-	struct drm_exynos_file_private *file_priv = file->driver_priv;
+	struct g2d_cmdlist_userptr *g2d_userptr =
+					(struct g2d_cmdlist_userptr *)obj;
+
+	if (!obj)
+		return;
+
+	if (force)
+		goto out;
+
+	atomic_dec(&g2d_userptr->refcount);
+
+	if (atomic_read(&g2d_userptr->refcount) > 0)
+		return;
+
+	if (g2d_userptr->in_pool)
+		return;
+
+out:
+	exynos_gem_unmap_sgt_from_dma(drm_dev, g2d_userptr->sgt,
+					DMA_BIDIRECTIONAL);
+
+	exynos_gem_put_pages_to_userptr(g2d_userptr->pages,
+					g2d_userptr->npages,
+					g2d_userptr->vma);
+
+	if (!g2d_userptr->out_of_list)
+		list_del_init(&g2d_userptr->list);
+
+	sg_free_table(g2d_userptr->sgt);
+	kfree(g2d_userptr->sgt);
+	g2d_userptr->sgt = NULL;
+
+	kfree(g2d_userptr->pages);
+	g2d_userptr->pages = NULL;
+	kfree(g2d_userptr);
+	g2d_userptr = NULL;
+}
+
+dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev,
+					unsigned long userptr,
+					unsigned long size,
+					struct drm_file *filp,
+					unsigned long *obj)
+{
+	struct drm_exynos_file_private *file_priv = filp->driver_priv;
+	struct exynos_drm_g2d_private *g2d_priv = file_priv->g2d_priv;
+	struct g2d_cmdlist_userptr *g2d_userptr;
+	struct g2d_data *g2d;
+	struct page **pages;
+	struct sg_table	*sgt;
+	struct vm_area_struct *vma;
+	unsigned long start, end;
+	unsigned int npages, offset;
+	int ret;
+
+	if (!size) {
+		DRM_ERROR("invalid userptr size.\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	g2d = dev_get_drvdata(g2d_priv->dev);
+
+	/* check if userptr already exists in userptr_list. */
+	list_for_each_entry(g2d_userptr, &g2d_priv->userptr_list, list) {
+		if (g2d_userptr->userptr == userptr) {
+			/*
+			 * also check size because there could be same address
+			 * and different size.
+			 */
+			if (g2d_userptr->size == size) {
+				atomic_inc(&g2d_userptr->refcount);
+				*obj = (unsigned long)g2d_userptr;
+
+				return &g2d_userptr->dma_addr;
+			}
+
+			/*
+			 * at this moment, maybe g2d dma is accessing this
+			 * g2d_userptr memory region so just remove this
+			 * g2d_userptr object from userptr_list not to be
+			 * referred again and also except it the userptr
+			 * pool to be released after the dma access completion.
+			 */
+			g2d_userptr->out_of_list = true;
+			g2d_userptr->in_pool = false;
+			list_del_init(&g2d_userptr->list);
+
+			break;
+		}
+	}
+
+	g2d_userptr = kzalloc(sizeof(*g2d_userptr), GFP_KERNEL);
+	if (!g2d_userptr) {
+		DRM_ERROR("failed to allocate g2d_userptr.\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	atomic_set(&g2d_userptr->refcount, 1);
+
+	start = userptr & PAGE_MASK;
+	offset = userptr & ~PAGE_MASK;
+	end = PAGE_ALIGN(userptr + size);
+	npages = (end - start) >> PAGE_SHIFT;
+	g2d_userptr->npages = npages;
+
+	pages = kzalloc(npages * sizeof(struct page *), GFP_KERNEL);
+	if (!pages) {
+		DRM_ERROR("failed to allocate pages.\n");
+		kfree(g2d_userptr);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	vma = find_vma(current->mm, userptr);
+	if (!vma) {
+		DRM_ERROR("failed to get vm region.\n");
+		ret = -EFAULT;
+		goto err_free_pages;
+	}
+
+	if (vma->vm_end < userptr + size) {
+		DRM_ERROR("vma is too small.\n");
+		ret = -EFAULT;
+		goto err_free_pages;
+	}
+
+	g2d_userptr->vma = exynos_gem_get_vma(vma);
+	if (!g2d_userptr->vma) {
+		DRM_ERROR("failed to copy vma.\n");
+		ret = -ENOMEM;
+		goto err_free_pages;
+	}
+
+	g2d_userptr->size = size;
+
+	ret = exynos_gem_get_pages_from_userptr(start & PAGE_MASK,
+						npages, pages, vma);
+	if (ret < 0) {
+		DRM_ERROR("failed to get user pages from userptr.\n");
+		goto err_put_vma;
+	}
+
+	g2d_userptr->pages = pages;
+
+	sgt = kzalloc(sizeof *sgt, GFP_KERNEL);
+	if (!sgt) {
+		DRM_ERROR("failed to allocate sg table.\n");
+		ret = -ENOMEM;
+		goto err_free_userptr;
+	}
+
+	ret = sg_alloc_table_from_pages(sgt, pages, npages, offset,
+					size, GFP_KERNEL);
+	if (ret < 0) {
+		DRM_ERROR("failed to get sgt from pages.\n");
+		goto err_free_sgt;
+	}
+
+	g2d_userptr->sgt = sgt;
+
+	ret = exynos_gem_map_sgt_with_dma(drm_dev, g2d_userptr->sgt,
+						DMA_BIDIRECTIONAL);
+	if (ret < 0) {
+		DRM_ERROR("failed to map sgt with dma region.\n");
+		goto err_free_sgt;
+	}
+
+	g2d_userptr->dma_addr = sgt->sgl[0].dma_address;
+	g2d_userptr->userptr = userptr;
+
+	list_add_tail(&g2d_userptr->list, &g2d_priv->userptr_list);
+
+	if (g2d->current_pool + (npages << PAGE_SHIFT) < g2d->max_pool) {
+		g2d->current_pool += npages << PAGE_SHIFT;
+		g2d_userptr->in_pool = true;
+	}
+
+	*obj = (unsigned long)g2d_userptr;
+
+	return &g2d_userptr->dma_addr;
+
+err_free_sgt:
+	sg_free_table(sgt);
+	kfree(sgt);
+	sgt = NULL;
+
+err_free_userptr:
+	exynos_gem_put_pages_to_userptr(g2d_userptr->pages,
+					g2d_userptr->npages,
+					g2d_userptr->vma);
+
+err_put_vma:
+	exynos_gem_put_vma(g2d_userptr->vma);
+
+err_free_pages:
+	kfree(pages);
+	kfree(g2d_userptr);
+	pages = NULL;
+	g2d_userptr = NULL;
+
+	return ERR_PTR(ret);
+}
+
+static void g2d_userptr_free_all(struct drm_device *drm_dev,
+					struct g2d_data *g2d,
+					struct drm_file *filp)
+{
+	struct drm_exynos_file_private *file_priv = filp->driver_priv;
 	struct exynos_drm_g2d_private *g2d_priv = file_priv->g2d_priv;
+	struct g2d_cmdlist_userptr *g2d_userptr, *n;
+
+	list_for_each_entry_safe(g2d_userptr, n, &g2d_priv->userptr_list, list)
+		if (g2d_userptr->in_pool)
+			g2d_userptr_put_dma_addr(drm_dev,
+						(unsigned long)g2d_userptr,
+						true);
+
+	g2d->current_pool = 0;
+}
+
+static int g2d_map_cmdlist_gem(struct g2d_data *g2d,
+				struct g2d_cmdlist_node *node,
+				struct drm_device *drm_dev,
+				struct drm_file *file)
+{
 	struct g2d_cmdlist *cmdlist = node->cmdlist;
+	dma_addr_t *addr;
 	int offset;
 	int i;
-	for (i = 0; i < node->gem_nr; i++) {
-		struct g2d_gem_node *gem_node;
-		dma_addr_t *addr;
-
-		gem_node = kzalloc(sizeof(*gem_node), GFP_KERNEL);
-		if (!gem_node) {
-			dev_err(g2d_priv->dev, "failed to allocate gem node\n");
-			return -ENOMEM;
-		}
+	for (i = 0; i < node->map_nr; i++) {
+		unsigned long handle;
 		offset = cmdlist->last - (i * 2 + 1);
-		gem_node->handle = cmdlist->data[offset];
-		addr = exynos_drm_gem_get_dma_addr(drm_dev, gem_node->handle,
-						file);
-		if (IS_ERR(addr)) {
-			node->gem_nr = i;
-			kfree(gem_node);
-			return PTR_ERR(addr);
+		handle = cmdlist->data[offset];
+		if (node->obj_type[i] == BUF_TYPE_GEM) {
+			addr = exynos_drm_gem_get_dma_addr(drm_dev, handle,
+								file);
+			if (IS_ERR(addr)) {
+				node->map_nr = i;
+				return -EFAULT;
+			}
+		} else {
+			struct drm_exynos_g2d_userptr g2d_userptr;
+
+			if (copy_from_user(&g2d_userptr, (void __user *)handle,
+				sizeof(struct drm_exynos_g2d_userptr))) {
+				node->map_nr = i;
+				return -EFAULT;
+			}
+
+			addr = g2d_userptr_get_dma_addr(drm_dev,
+							g2d_userptr.userptr,
+							g2d_userptr.size,
+							file,
+							&handle);
+			if (IS_ERR(addr)) {
+				node->map_nr = i;
+				return -EFAULT;
+			}
 		}
 		cmdlist->data[offset] = *addr;
-		list_add_tail(&gem_node->list, &g2d_priv->gem_list);
-		g2d_priv->gem_nr++;
+		node->handles[i] = handle;
 	}
 	return 0;
 }
-static void g2d_put_cmdlist_gem(struct drm_device *drm_dev,
-				struct drm_file *file,
-				unsigned int nr)
+static void g2d_unmap_cmdlist_gem(struct g2d_data *g2d,
+				struct g2d_cmdlist_node *node,
+				struct drm_file *filp)
 {
-	struct drm_exynos_file_private *file_priv = file->driver_priv;
-	struct exynos_drm_g2d_private *g2d_priv = file_priv->g2d_priv;
-	struct g2d_gem_node *node, *n;
+	struct exynos_drm_subdrv *subdrv = &g2d->subdrv;
+	int i;
-	list_for_each_entry_safe_reverse(node, n, &g2d_priv->gem_list, list) {
-		if (!nr)
-			break;
-
-		exynos_drm_gem_put_dma_addr(drm_dev, node->handle, file);
-		list_del_init(&node->list);
-		kfree(node);
-		nr--;
+	for (i = 0; i < node->map_nr; i++) {
+		unsigned long handle = node->handles[i];
+
+		if (node->obj_type[i] == BUF_TYPE_GEM)
+			exynos_drm_gem_put_dma_addr(subdrv->drm_dev, handle,
+							filp);
+		else
+			g2d_userptr_put_dma_addr(subdrv->drm_dev, handle,
+							false);
+
+		node->handles[i] = 0;
 	}
+
+	node->map_nr = 0;
 }
 static void g2d_dma_start(struct g2d_data *g2d,
@@ -337,10 +612,18 @@ static struct g2d_runqueue_node *g2d_get_runqueue_node(struct g2d_data *g2d)
 static void g2d_free_runqueue_node(struct g2d_data *g2d,
 				struct g2d_runqueue_node *runqueue_node)
 {
+	struct g2d_cmdlist_node *node;
+
 	if (!runqueue_node)
 		return;
 	mutex_lock(&g2d->cmdlist_mutex);
+	/*
+	 * commands in run_cmdlist have been completed so unmap all gem
+	 * objects in each command node so that they are unreferenced.
+	 */
+	list_for_each_entry(node, &runqueue_node->run_cmdlist, list)
+		g2d_unmap_cmdlist_gem(g2d, node, runqueue_node->filp);
 	list_splice_tail_init(&runqueue_node->run_cmdlist, &g2d->free_cmdlist);
 	mutex_unlock(&g2d->cmdlist_mutex);
@@ -430,15 +713,28 @@ static irqreturn_t g2d_irq_handler(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
-static int g2d_check_reg_offset(struct device *dev, struct g2d_cmdlist *cmdlist,
+static int g2d_check_reg_offset(struct device *dev,
+				struct g2d_cmdlist_node *node,
 				int nr, bool for_addr)
 {
+	struct g2d_cmdlist *cmdlist = node->cmdlist;
 	int reg_offset;
 	int index;
 	int i;
 	for (i = 0; i < nr; i++) {
 		index = cmdlist->last - 2 * (i + 1);
+
+		if (for_addr) {
+			/* check userptr buffer type. */
+			reg_offset = (cmdlist->data[index] &
+					~0x7fffffff) >> 31;
+			if (reg_offset) {
+				node->obj_type[i] = BUF_TYPE_USERPTR;
+				cmdlist->data[index] &= ~G2D_BUF_USERPTR;
+			}
+		}
+
 		reg_offset = cmdlist->data[index] & ~0xfffff000;
 		if (reg_offset < G2D_VALID_START || reg_offset > G2D_VALID_END)
@@ -455,6 +751,9 @@ static int g2d_check_reg_offset(struct device *dev, struct g2d_cmdlist *cmdlist,
 		case G2D_MSK_BASE_ADDR:
 			if (!for_addr)
 				goto err;
+
+			if (node->obj_type[i] != BUF_TYPE_USERPTR)
+				node->obj_type[i] = BUF_TYPE_GEM;
 			break;
 		default:
 			if (for_addr)
@@ -466,7 +765,7 @@ static int g2d_check_reg_offset(struct device *dev, struct g2d_cmdlist *cmdlist,
 	return 0;
 err:
-	dev_err(dev, "Bad register offset: 0x%x\n", cmdlist->data[index]);
+	dev_err(dev, "Bad register offset: 0x%lx\n", cmdlist->data[index]);
 	return -EINVAL;
 }
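The bit-31 check above implies the userspace contract: an address slot in the command list normally carries a GEM handle, but when userspace tags it with G2D_BUF_USERPTR the kernel instead treats the value as a pointer to a struct drm_exynos_g2d_userptr (see the copy_from_user() in g2d_map_cmdlist_gem() above). A hedged sketch of the submitting side; the cmd field names (offset/data) are assumed from the 3.8-era uapi, only G2D_BUF_USERPTR and the struct names appear in this diff:

/* Userspace-side sketch: fill one command slot with a userptr buffer. */
static void fill_userptr_cmd(struct drm_exynos_g2d_cmd *cmd,
			     struct drm_exynos_g2d_userptr *desc,
			     void *buf, unsigned long size)
{
	desc->userptr = (unsigned long)buf;	/* e.g. a malloc()ed buffer */
	desc->size = size;

	cmd->offset = G2D_MSK_BASE_ADDR;	/* an address-class register */
	/*
	 * set bit 31 so g2d_check_reg_offset() marks this slot
	 * BUF_TYPE_USERPTR and the mapper reads *desc instead of
	 * resolving a GEM handle.
	 */
	cmd->data = (unsigned long)desc | G2D_BUF_USERPTR;
}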
@@ -566,7 +865,7 @@ int exynos_g2d_set_cmdlist_ioctl(struct drm_device *drm_dev, void *data,
 	}
 	/* Check size of cmdlist: last 2 is about G2D_BITBLT_START */
-	size = cmdlist->last + req->cmd_nr * 2 + req->cmd_gem_nr * 2 + 2;
+	size = cmdlist->last + req->cmd_nr * 2 + req->cmd_buf_nr * 2 + 2;
 	if (size > G2D_CMDLIST_DATA_NUM) {
 		dev_err(dev, "cmdlist size is too big\n");
 		ret = -EINVAL;
@@ -583,29 +882,29 @@ int exynos_g2d_set_cmdlist_ioctl(struct drm_device *drm_dev, void *data,
 	}
 	cmdlist->last += req->cmd_nr * 2;
-	ret = g2d_check_reg_offset(dev, cmdlist, req->cmd_nr, false);
+	ret = g2d_check_reg_offset(dev, node, req->cmd_nr, false);
 	if (ret < 0)
 		goto err_free_event;
-	node->gem_nr = req->cmd_gem_nr;
-	if (req->cmd_gem_nr) {
-		struct drm_exynos_g2d_cmd *cmd_gem;
+	node->map_nr = req->cmd_buf_nr;
+	if (req->cmd_buf_nr) {
+		struct drm_exynos_g2d_cmd *cmd_buf;
-		cmd_gem = (struct drm_exynos_g2d_cmd *)(uint32_t)req->cmd_gem;
+		cmd_buf = (struct drm_exynos_g2d_cmd *)(uint32_t)req->cmd_buf;
 		if (copy_from_user(cmdlist->data + cmdlist->last,
-					(void __user *)cmd_gem,
-					sizeof(*cmd_gem) * req->cmd_gem_nr)) {
+					(void __user *)cmd_buf,
+					sizeof(*cmd_buf) * req->cmd_buf_nr)) {
 			ret = -EFAULT;
 			goto err_free_event;
 		}
-		cmdlist->last += req->cmd_gem_nr * 2;
+		cmdlist->last += req->cmd_buf_nr * 2;
-		ret = g2d_check_reg_offset(dev, cmdlist, req->cmd_gem_nr, true);
+		ret = g2d_check_reg_offset(dev, node, req->cmd_buf_nr, true);
 		if (ret < 0)
 			goto err_free_event;
-		ret = g2d_get_cmdlist_gem(drm_dev, file, node);
+		ret = g2d_map_cmdlist_gem(g2d, node, drm_dev, file);
 		if (ret < 0)
 			goto err_unmap;
 	}
@@ -624,7 +923,7 @@ int exynos_g2d_set_cmdlist_ioctl(struct drm_device *drm_dev, void *data,
 	return 0;
 err_unmap:
-	g2d_put_cmdlist_gem(drm_dev, file, node->gem_nr);
+	g2d_unmap_cmdlist_gem(g2d, node, file);
 err_free_event:
 	if (node->event) {
 		spin_lock_irqsave(&drm_dev->event_lock, flags);
@@ -680,6 +979,7 @@ int exynos_g2d_exec_ioctl(struct drm_device *drm_dev, void *data,
 	mutex_lock(&g2d->runqueue_mutex);
 	runqueue_node->pid = current->pid;
+	runqueue_node->filp = file;
 	list_add_tail(&runqueue_node->list, &g2d->runqueue);
 	if (!g2d->runqueue_node)
 		g2d_exec_runqueue(g2d);
@@ -696,6 +996,43 @@ int exynos_g2d_exec_ioctl(struct drm_device *drm_dev, void *data,
 }
 EXPORT_SYMBOL_GPL(exynos_g2d_exec_ioctl);
+static int g2d_subdrv_probe(struct drm_device *drm_dev, struct device *dev)
+{
+	struct g2d_data *g2d;
+	int ret;
+
+	g2d = dev_get_drvdata(dev);
+	if (!g2d)
+		return -EFAULT;
+
+	/* allocate dma-aware cmdlist buffer. */
+	ret = g2d_init_cmdlist(g2d);
+	if (ret < 0) {
+		dev_err(dev, "cmdlist init failed\n");
+		return ret;
+	}
+
+	if (!is_drm_iommu_supported(drm_dev))
+		return 0;
+
+	ret = drm_iommu_attach_device(drm_dev, dev);
+	if (ret < 0) {
+		dev_err(dev, "failed to enable iommu.\n");
+		g2d_fini_cmdlist(g2d);
+	}
+
+	return ret;
+}
+
+static void g2d_subdrv_remove(struct drm_device *drm_dev, struct device *dev)
+{
+	if (!is_drm_iommu_supported(drm_dev))
+		return;
+
+	drm_iommu_detach_device(drm_dev, dev);
+}
 static int g2d_open(struct drm_device *drm_dev, struct device *dev,
 			struct drm_file *file)
 {
@@ -713,7 +1050,7 @@ static int g2d_open(struct drm_device *drm_dev, struct device *dev,
 	INIT_LIST_HEAD(&g2d_priv->inuse_cmdlist);
 	INIT_LIST_HEAD(&g2d_priv->event_list);
-	INIT_LIST_HEAD(&g2d_priv->gem_list);
+	INIT_LIST_HEAD(&g2d_priv->userptr_list);
 	return 0;
 }
@@ -734,11 +1071,21 @@ static void g2d_close(struct drm_device *drm_dev, struct device *dev,
 		return;
 	mutex_lock(&g2d->cmdlist_mutex);
-	list_for_each_entry_safe(node, n, &g2d_priv->inuse_cmdlist, list)
+	list_for_each_entry_safe(node, n, &g2d_priv->inuse_cmdlist, list) {
+		/*
+		 * unmap all gem objects not completed.
+		 *
+		 * P.S. if current process was terminated forcely then
+		 * there may be some commands in inuse_cmdlist so unmap
+		 * them.
+		 */
+		g2d_unmap_cmdlist_gem(g2d, node, file);
 		list_move_tail(&node->list, &g2d->free_cmdlist);
+	}
 	mutex_unlock(&g2d->cmdlist_mutex);
-	g2d_put_cmdlist_gem(drm_dev, file, g2d_priv->gem_nr);
+	/* release all g2d_userptr in pool. */
+	g2d_userptr_free_all(drm_dev, g2d, file);
 	kfree(file_priv->g2d_priv);
 }
@@ -778,15 +1125,11 @@ static int __devinit g2d_probe(struct platform_device *pdev)
 	mutex_init(&g2d->cmdlist_mutex);
 	mutex_init(&g2d->runqueue_mutex);
-	ret = g2d_init_cmdlist(g2d);
-	if (ret < 0)
-		goto err_destroy_workqueue;
-
-	g2d->gate_clk = clk_get(dev, "fimg2d");
+	g2d->gate_clk = devm_clk_get(dev, "fimg2d");
 	if (IS_ERR(g2d->gate_clk)) {
 		dev_err(dev, "failed to get gate clock\n");
 		ret = PTR_ERR(g2d->gate_clk);
-		goto err_fini_cmdlist;
+		goto err_destroy_workqueue;
 	}
 	pm_runtime_enable(dev);
@@ -814,10 +1157,14 @@ static int __devinit g2d_probe(struct platform_device *pdev)
 		goto err_put_clk;
 	}
+	g2d->max_pool = MAX_POOL;
+
 	platform_set_drvdata(pdev, g2d);
 	subdrv = &g2d->subdrv;
 	subdrv->dev = dev;
+	subdrv->probe = g2d_subdrv_probe;
+	subdrv->remove = g2d_subdrv_remove;
 	subdrv->open = g2d_open;
 	subdrv->close = g2d_close;
@@ -834,9 +1181,6 @@ static int __devinit g2d_probe(struct platform_device *pdev)
 err_put_clk:
 	pm_runtime_disable(dev);
-	clk_put(g2d->gate_clk);
-
-err_fini_cmdlist:
-	g2d_fini_cmdlist(g2d);
 err_destroy_workqueue:
 	destroy_workqueue(g2d->g2d_workq);
 err_destroy_slab:
@@ -857,7 +1201,6 @@ static int __devexit g2d_remove(struct platform_device *pdev)
 	}
 	pm_runtime_disable(&pdev->dev);
-	clk_put(g2d->gate_clk);
 	g2d_fini_cmdlist(g2d);
 	destroy_workqueue(g2d->g2d_workq);
......
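One detail of the g2d changes above generalizes beyond this driver: the cmdlist pool moves from dma_alloc_coherent() to dma_alloc_attrs() with DMA_ATTR_WRITE_COMBINE, which suits a buffer the CPU only writes sequentially and the engine only reads. A minimal sketch of that 3.8-era allocation pattern; the function name, 'dev' and 'pool_size' are placeholders, not from the commit:

#include <linux/dma-mapping.h>
#include <linux/dma-attrs.h>

/* Sketch: allocate a write-combined DMA pool the way g2d_init_cmdlist()
 * does above. Free it with dma_free_attrs() and the same attrs. */
static void *alloc_wc_pool(struct device *dev, size_t pool_size,
			   dma_addr_t *handle, struct dma_attrs *attrs)
{
	init_dma_attrs(attrs);
	dma_set_attr(DMA_ATTR_WRITE_COMBINE, attrs);

	/*
	 * CPU writes are buffered and combined but never cached, so no
	 * cache maintenance is needed before the engine fetches the
	 * command list.
	 */
	return dma_alloc_attrs(dev, pool_size, handle, GFP_KERNEL, attrs);
}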
@@ -35,21 +35,27 @@
  * exynos drm gem buffer structure.
  *
  * @kvaddr: kernel virtual address to allocated memory region.
+ * *userptr: user space address.
  * @dma_addr: bus address(accessed by dma) to allocated memory region.
  *	- this address could be physical address without IOMMU and
  *	device address with IOMMU.
+ * @write: whether pages will be written to by the caller.
+ * @pages: Array of backing pages.
  * @sgt: sg table to transfer page data.
- * @pages: contain all pages to allocated memory region.
- * @page_size: could be 4K, 64K or 1MB.
 * @size: size of allocated memory region.
+ * @pfnmap: indicate whether memory region from userptr is mmaped with
+ *	VM_PFNMAP or not.
 */
 struct exynos_drm_gem_buf {
 	void __iomem		*kvaddr;
+	unsigned long		userptr;
 	dma_addr_t		dma_addr;
-	struct sg_table		*sgt;
+	struct dma_attrs	dma_attrs;
+	unsigned int		write;
 	struct page		**pages;
-	unsigned long		page_size;
+	struct sg_table		*sgt;
 	unsigned long		size;
+	bool			pfnmap;
 };
 /*
@@ -65,6 +71,7 @@ struct exynos_drm_gem_buf {
  *	or at framebuffer creation.
  * @size: size requested from user, in bytes and this size is aligned
  *	in page unit.
+ * @vma: a pointer to vm_area.
  * @flags: indicate memory type to allocated buffer and cache attruibute.
  *
  * P.S. this object would be transfered to user as kms_bo.handle so
@@ -74,6 +81,7 @@ struct exynos_drm_gem_obj {
 	struct drm_gem_object		base;
 	struct exynos_drm_gem_buf	*buffer;
 	unsigned long			size;
+	struct vm_area_struct		*vma;
 	unsigned int			flags;
 };
@@ -104,9 +112,9 @@ int exynos_drm_gem_create_ioctl(struct drm_device *dev, void *data,
  * other drivers such as 2d/3d acceleration drivers.
  * with this function call, gem object reference count would be increased.
  */
-void *exynos_drm_gem_get_dma_addr(struct drm_device *dev,
+dma_addr_t *exynos_drm_gem_get_dma_addr(struct drm_device *dev,
 					unsigned int gem_handle,
-					struct drm_file *file_priv);
+					struct drm_file *filp);
 /*
  * put dma address from gem handle and this function could be used for
@@ -115,7 +123,7 @@ void *exynos_drm_gem_get_dma_addr(struct drm_device *dev,
  */
 void exynos_drm_gem_put_dma_addr(struct drm_device *dev,
 					unsigned int gem_handle,
-					struct drm_file *file_priv);
+					struct drm_file *filp);
 /* get buffer offset to map to user space. */
 int exynos_drm_gem_map_offset_ioctl(struct drm_device *dev, void *data,
@@ -128,6 +136,10 @@ int exynos_drm_gem_map_offset_ioctl(struct drm_device *dev, void *data,
 int exynos_drm_gem_mmap_ioctl(struct drm_device *dev, void *data,
 			      struct drm_file *file_priv);
+/* map user space allocated by malloc to pages. */
+int exynos_drm_gem_userptr_ioctl(struct drm_device *dev, void *data,
+				      struct drm_file *file_priv);
+
 /* get buffer information to memory region allocated by gem. */
 int exynos_drm_gem_get_ioctl(struct drm_device *dev, void *data,
 				      struct drm_file *file_priv);
@@ -163,4 +175,36 @@ int exynos_drm_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
 /* set vm_flags and we can change the vm attribute to other one at here. */
 int exynos_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
+static inline int vma_is_io(struct vm_area_struct *vma)
+{
+	return !!(vma->vm_flags & (VM_IO | VM_PFNMAP));
+}
+
+/* get a copy of a virtual memory region. */
+struct vm_area_struct *exynos_gem_get_vma(struct vm_area_struct *vma);
+
+/* release a userspace virtual memory area. */
+void exynos_gem_put_vma(struct vm_area_struct *vma);
+
+/* get pages from user space. */
+int exynos_gem_get_pages_from_userptr(unsigned long start,
+						unsigned int npages,
+						struct page **pages,
+						struct vm_area_struct *vma);
+
+/* drop the reference to pages. */
+void exynos_gem_put_pages_to_userptr(struct page **pages,
+					unsigned int npages,
+					struct vm_area_struct *vma);
+
+/* map sgt with dma region. */
+int exynos_gem_map_sgt_with_dma(struct drm_device *drm_dev,
+				struct sg_table *sgt,
+				enum dma_data_direction dir);
+
+/* unmap sgt from dma region. */
+void exynos_gem_unmap_sgt_from_dma(struct drm_device *drm_dev,
+				struct sg_table *sgt,
+				enum dma_data_direction dir);
+
 #endif
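The helpers declared above (exynos_gem_get_pages_from_userptr() and friends) implement the usual userptr recipe: pin the user pages, build an sg_table, and DMA-map it. Their bodies are not in this excerpt, so the following is only a sketch of that recipe using generic 3.8-era kernel APIs, not the exynos implementation; the function name and simplified error handling are assumptions:

static int pin_userptr_sketch(struct device *dev, unsigned long userptr,
			      unsigned long size, struct page **pages,
			      struct sg_table *sgt)
{
	unsigned long start = userptr & PAGE_MASK;
	unsigned int offset = userptr & ~PAGE_MASK;
	unsigned int npages = (PAGE_ALIGN(userptr + size) - start) >> PAGE_SHIFT;
	int pinned, ret;

	/* pin the user pages so they cannot be swapped or migrated while
	 * the engine uses them (pre-4.6 get_user_pages() signature) */
	down_read(&current->mm->mmap_sem);
	pinned = get_user_pages(current, current->mm, start, npages,
				1 /* write */, 0 /* force */, pages, NULL);
	up_read(&current->mm->mmap_sem);
	if (pinned != npages)
		return -EFAULT;	/* a full version would unpin 'pinned' pages */

	/* collapse the page array into scatterlist entries, preserving
	 * the sub-page offset of the original pointer */
	ret = sg_alloc_table_from_pages(sgt, pages, npages, offset,
					size, GFP_KERNEL);
	if (ret)
		return ret;

	/* hand the table to the IOMMU/DMA layer */
	if (!dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL))
		return -ENOMEM;

	return 0;
}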
(This diff has been collapsed.)
/*
* Copyright (c) 2012 Samsung Electronics Co., Ltd.
*
* Authors:
* Eunchul Kim <chulspro.kim@samsung.com>
* Jinyoung Jeon <jy0.jeon@samsung.com>
* Sangmin Lee <lsmin.lee@samsung.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef _EXYNOS_DRM_GSC_H_
#define _EXYNOS_DRM_GSC_H_
/*
* TODO
* FIMD output interface notifier callback.
* Mixer output interface notifier callback.
*/
#endif /* _EXYNOS_DRM_GSC_H_ */
@@ -29,6 +29,9 @@
 #define get_ctx_from_subdrv(subdrv)	container_of(subdrv,\
 				struct drm_hdmi_context, subdrv);
+/* platform device pointer for common drm hdmi device. */
+static struct platform_device *exynos_drm_hdmi_pdev;
+
 /* Common hdmi subdrv needs to access the hdmi and mixer though context.
 * These should be initialied by the repective drivers */
 static struct exynos_drm_hdmi_context *hdmi_ctx;
@@ -46,6 +49,25 @@ struct drm_hdmi_context {
 	bool	enabled[MIXER_WIN_NR];
 };
+int exynos_platform_device_hdmi_register(void)
+{
+	if (exynos_drm_hdmi_pdev)
+		return -EEXIST;
+
+	exynos_drm_hdmi_pdev = platform_device_register_simple(
+			"exynos-drm-hdmi", -1, NULL, 0);
+	if (IS_ERR_OR_NULL(exynos_drm_hdmi_pdev))
+		return PTR_ERR(exynos_drm_hdmi_pdev);
+
+	return 0;
+}
+
+void exynos_platform_device_hdmi_unregister(void)
+{
+	if (exynos_drm_hdmi_pdev)
+		platform_device_unregister(exynos_drm_hdmi_pdev);
+}
+
 void exynos_hdmi_drv_attach(struct exynos_drm_hdmi_context *ctx)
 {
 	if (ctx)
@@ -157,6 +179,16 @@ static void drm_hdmi_disable_vblank(struct device *subdrv_dev)
 		return mixer_ops->disable_vblank(ctx->mixer_ctx->ctx);
 }
+static void drm_hdmi_wait_for_vblank(struct device *subdrv_dev)
+{
+	struct drm_hdmi_context *ctx = to_context(subdrv_dev);
+
+	DRM_DEBUG_KMS("%s\n", __FILE__);
+
+	if (mixer_ops && mixer_ops->wait_for_vblank)
+		mixer_ops->wait_for_vblank(ctx->mixer_ctx->ctx);
+}
+
 static void drm_hdmi_mode_fixup(struct device *subdrv_dev,
 				struct drm_connector *connector,
 				const struct drm_display_mode *mode,
@@ -238,6 +270,7 @@ static struct exynos_drm_manager_ops drm_hdmi_manager_ops = {
 	.apply = drm_hdmi_apply,
 	.enable_vblank = drm_hdmi_enable_vblank,
 	.disable_vblank = drm_hdmi_disable_vblank,
+	.wait_for_vblank = drm_hdmi_wait_for_vblank,
 	.mode_fixup = drm_hdmi_mode_fixup,
 	.mode_set = drm_hdmi_mode_set,
 	.get_max_resol = drm_hdmi_get_max_resol,
@@ -291,21 +324,10 @@ static void drm_mixer_disable(struct device *subdrv_dev, int zpos)
 	ctx->enabled[win] = false;
 }
-static void drm_mixer_wait_for_vblank(struct device *subdrv_dev)
-{
-	struct drm_hdmi_context *ctx = to_context(subdrv_dev);
-
-	DRM_DEBUG_KMS("%s\n", __FILE__);
-
-	if (mixer_ops && mixer_ops->wait_for_vblank)
-		mixer_ops->wait_for_vblank(ctx->mixer_ctx->ctx);
-}
-
 static struct exynos_drm_overlay_ops drm_hdmi_overlay_ops = {
 	.mode_set = drm_mixer_mode_set,
 	.commit = drm_mixer_commit,
 	.disable = drm_mixer_disable,
-	.wait_for_vblank = drm_mixer_wait_for_vblank,
 };
 static struct exynos_drm_manager hdmi_manager = {
@@ -346,9 +368,23 @@ static int hdmi_subdrv_probe(struct drm_device *drm_dev,
 	ctx->hdmi_ctx->drm_dev = drm_dev;
 	ctx->mixer_ctx->drm_dev = drm_dev;
+	if (mixer_ops->iommu_on)
+		mixer_ops->iommu_on(ctx->mixer_ctx->ctx, true);
+
 	return 0;
 }
+static void hdmi_subdrv_remove(struct drm_device *drm_dev, struct device *dev)
+{
+	struct drm_hdmi_context *ctx;
+	struct exynos_drm_subdrv *subdrv = to_subdrv(dev);
+
+	ctx = get_ctx_from_subdrv(subdrv);
+
+	if (mixer_ops->iommu_on)
+		mixer_ops->iommu_on(ctx->mixer_ctx->ctx, false);
+}
+
 static int __devinit exynos_drm_hdmi_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
@@ -368,6 +404,7 @@ static int __devinit exynos_drm_hdmi_probe(struct platform_device *pdev)
 	subdrv->dev = dev;
 	subdrv->manager = &hdmi_manager;
 	subdrv->probe = hdmi_subdrv_probe;
+	subdrv->remove = hdmi_subdrv_remove;
 	platform_set_drvdata(pdev, subdrv);
......
@@ -62,12 +62,13 @@ struct exynos_hdmi_ops {
 struct exynos_mixer_ops {
 	/* manager */
+	int (*iommu_on)(void *ctx, bool enable);
 	int (*enable_vblank)(void *ctx, int pipe);
 	void (*disable_vblank)(void *ctx);
+	void (*wait_for_vblank)(void *ctx);
 	void (*dpms)(void *ctx, int mode);
 	/* overlay */
-	void (*wait_for_vblank)(void *ctx);
 	void (*win_mode_set)(void *ctx, struct exynos_drm_overlay *overlay);
 	void (*win_commit)(void *ctx, int zpos);
 	void (*win_disable)(void *ctx, int zpos);
......
(Several more file diffs have been collapsed.)
@@ -1650,7 +1650,7 @@ cdv_intel_dp_set_property(struct drm_connector *connector,
 	struct cdv_intel_dp *intel_dp = encoder->dev_priv;
 	int ret;
-	ret = drm_connector_property_set_value(connector, property, val);
+	ret = drm_object_property_set_value(&connector->base, property, val);
 	if (ret)
 		return ret;
......
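The one-line gma500 change above tracks the DRM core rework named in the pull summary ("move from connector to object properties"): property accessors now take the embedded struct drm_mode_object, so one helper serves connectors, CRTCs and planes alike. A before/after fragment, with the getter shown on the assumption that it mirrors the setter:

	/* before: connector-specific helper */
	ret = drm_connector_property_set_value(connector, property, val);

	/* after: generic helper on the embedded mode object */
	ret = drm_object_property_set_value(&connector->base, property, val);
	ret = drm_object_property_get_value(&connector->base, property, &val);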
(The remaining file diffs have been collapsed.)