Commit 20a2078c authored by Linus Torvalds

Merge branch 'drm-next' of git://people.freedesktop.org/~airlied/linux

Pull drm updates from Dave Airlie:
 "This is the main drm pull request for 3.10.

  Weird bits:
   - OMAP drm changes required OMAP dss changes, in drivers/video, so I
     took them in here.
   - one more fbcon fix for font handover
   - VT switch avoidance in pm code
   - scatterlist helpers for gpu drivers - have acks from akpm

  Highlights:
   - qxl kms driver - driver for the spice qxl virtual GPU

  Nouveau:
   - fermi/kepler VRAM compression
   - GK110/nvf0 modesetting support.

  Tegra:
   - host1x core merged with 2D engine support

  i915:
   - vt switchless resume
   - more valleyview support
   - vblank fixes
   - modesetting pipe config rework

  radeon:
   - UVD engine support
   - SI chip tiling support
   - GPU registers initialisation from golden values.

  exynos:
   - device tree changes
   - fimc block support

  Otherwise:
   - bunches of fixes all over the place."

* 'drm-next' of git://people.freedesktop.org/~airlied/linux: (513 commits)
  qxl: update to new idr interfaces.
  drm/nouveau: fix build with nv50->nvc0
  drm/radeon: fix handling of v6 power tables
  drm/radeon: clarify family checks in pm table parsing
  drm/radeon: consolidate UVD clock programming
  drm/radeon: fix UPLL_REF_DIV_MASK definition
  radeon: add bo tracking debugfs
  drm/radeon: add new richland pci ids
  drm/radeon: add some new SI PCI ids
  drm/radeon: fix scratch reg handling for UVD fence
  drm/radeon: allocate SA bo in the requested domain
  drm/radeon: fix possible segfault when parsing pm tables
  drm/radeon: fix endian bugs in atom_allocate_fb_scratch()
  OMAPDSS: TFP410: return EPROBE_DEFER if the i2c adapter not found
  OMAPDSS: VENC: Add error handling for venc_probe_pdata
  OMAPDSS: HDMI: Add error handling for hdmi_probe_pdata
  OMAPDSS: RFBI: Add error handling for rfbi_probe_pdata
  OMAPDSS: DSI: Add error handling for dsi_probe_pdata
  OMAPDSS: SDI: Add error handling for sdi_probe_pdata
  OMAPDSS: DPI: Add error handling for dpi_probe_pdata
  ...
/*
1600x1200.S: EDID data set for standard 1600x1200 60 Hz monitor
Copyright (C) 2013 Carsten Emde <C.Emde@osadl.org>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*/
/* EDID */
#define VERSION 1
#define REVISION 3
/* Display */
#define CLOCK 162000 /* kHz */
#define XPIX 1600
#define YPIX 1200
#define XY_RATIO XY_RATIO_4_3
#define XBLANK 560
#define YBLANK 50
#define XOFFSET 64
#define XPULSE 192
#define YOFFSET (63+1)
#define YPULSE (63+3)
#define DPI 72
#define VFREQ 60 /* Hz */
#define TIMING_NAME "Linux UXGA"
#define ESTABLISHED_TIMINGS_BITS 0x00 /* none */
#define HSYNC_POL 1
#define VSYNC_POL 1
#define CRC 0x9d
#include "edid.S"
...@@ -18,12 +18,12 @@ CONFIG_DRM_LOAD_EDID_FIRMWARE was introduced. It allows to provide an
 individually prepared or corrected EDID data set in the /lib/firmware
 directory from where it is loaded via the firmware interface. The code
 (see drivers/gpu/drm/drm_edid_load.c) contains built-in data sets for
-commonly used screen resolutions (1024x768, 1280x1024, 1680x1050,
-1920x1080) as binary blobs, but the kernel source tree does not contain
-code to create these data. In order to elucidate the origin of the
-built-in binary EDID blobs and to facilitate the creation of individual
-data for a specific misbehaving monitor, commented sources and a
-Makefile environment are given here.
+commonly used screen resolutions (1024x768, 1280x1024, 1600x1200,
+1680x1050, 1920x1080) as binary blobs, but the kernel source tree does
+not contain code to create these data. In order to elucidate the origin
+of the built-in binary EDID blobs and to facilitate the creation of
+individual data for a specific misbehaving monitor, commented sources
+and a Makefile environment are given here.
 
 To create binary EDID and C source code files from the existing data
 material, simply type "make".
......
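(A rough sketch of the workflow the HOWTO text above describes; the 1600x1200 file name and the exact commands are illustrative assumptions, not quoted from the document:

    cd Documentation/EDID
    make                                  # assembles the *.S sources into *.bin EDID blobs
    sudo mkdir -p /lib/firmware/edid
    sudo cp 1600x1200.bin /lib/firmware/edid/
    # then boot with something like: drm_kms_helper.edid_firmware=edid/1600x1200.bin

The parameter name is inferred from the edid_firmware module parameter visible in the drm_edid_load.c hunk further down; verify it against the full HOWTO for your kernel version.)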
Samsung 2D Graphics Accelerator using the DRM framework
Samsung FIMG2D is a graphics 2D accelerator which supports Bit Block Transfer.
We set the drawing-context registers for configuring rendering parameters and
then start rendering.
This driver is for SOCs which contain G2D IPs with version 4.1.
Required properties:
-compatible:
should be "samsung,exynos-g2d-41".
-reg:
physical base address of the controller and length
of memory mapped region.
-interrupts:
interrupt combiner values.
Example:
g2d {
compatible = "samsung,exynos-g2d-41";
reg = <0x10850000 0x1000>;
interrupts = <0 91 0>;
};
...@@ -38,7 +38,7 @@
 #include "gpmc-smc91x.h"
 #include <video/omapdss.h>
-#include <video/omap-panel-generic-dpi.h>
+#include <video/omap-panel-data.h>
 #include "mux.h"
 #include "hsmmc.h"
......
...@@ -35,7 +35,7 @@
 #include "common.h"
 #include <linux/omap-dma.h>
 #include <video/omapdss.h>
-#include <video/omap-panel-tfp410.h>
+#include <video/omap-panel-data.h>
 #include "gpmc.h"
 #include "gpmc-smc91x.h"
......
...@@ -35,8 +35,7 @@
 #include "common.h"
 #include <video/omapdss.h>
-#include <video/omap-panel-generic-dpi.h>
-#include <video/omap-panel-tfp410.h>
+#include <video/omap-panel-data.h>
 #include "am35xx-emac.h"
 #include "mux.h"
......
...@@ -41,8 +41,7 @@
 #include <linux/platform_data/mtd-nand-omap2.h>
 #include <video/omapdss.h>
-#include <video/omap-panel-generic-dpi.h>
-#include <video/omap-panel-tfp410.h>
+#include <video/omap-panel-data.h>
 #include <linux/platform_data/spi-omap2-mcspi.h>
 #include "common.h"
......
...@@ -43,8 +43,7 @@
 #include "gpmc.h"
 #include <linux/platform_data/mtd-nand-omap2.h>
 #include <video/omapdss.h>
-#include <video/omap-panel-generic-dpi.h>
-#include <video/omap-panel-tfp410.h>
+#include <video/omap-panel-data.h>
 #include <linux/platform_data/spi-omap2-mcspi.h>
 #include <linux/input/matrix_keypad.h>
......
...@@ -34,7 +34,7 @@
 #include <asm/mach/map.h>
 #include <video/omapdss.h>
-#include <video/omap-panel-generic-dpi.h>
+#include <video/omap-panel-data.h>
 #include "common.h"
 #include "mux.h"
......
...@@ -31,7 +31,7 @@
 #include <asm/mach/arch.h>
 #include <video/omapdss.h>
-#include <video/omap-panel-tfp410.h>
+#include <video/omap-panel-data.h>
 #include <linux/platform_data/mtd-onenand-omap2.h>
 #include "common.h"
......
...@@ -41,7 +41,7 @@
 #include "gpmc-smsc911x.h"
 #include <video/omapdss.h>
-#include <video/omap-panel-generic-dpi.h>
+#include <video/omap-panel-data.h>
 #include "board-flash.h"
 #include "mux.h"
......
...@@ -43,7 +43,7 @@
 #include <asm/mach/flash.h>
 #include <video/omapdss.h>
-#include <video/omap-panel-tfp410.h>
+#include <video/omap-panel-data.h>
 #include <linux/platform_data/mtd-nand-omap2.h>
 #include "common.h"
......
...@@ -51,7 +51,7 @@
 #include "common.h"
 #include <linux/platform_data/spi-omap2-mcspi.h>
 #include <video/omapdss.h>
-#include <video/omap-panel-tfp410.h>
+#include <video/omap-panel-data.h>
 #include "soc.h"
 #include "mux.h"
......
...@@ -44,8 +44,7 @@
 #include "gpmc.h"
 #include <linux/platform_data/mtd-nand-omap2.h>
 #include <video/omapdss.h>
-#include <video/omap-panel-generic-dpi.h>
-#include <video/omap-panel-tfp410.h>
+#include <video/omap-panel-data.h>
 #include <linux/platform_data/spi-omap2-mcspi.h>
......
...@@ -47,8 +47,7 @@
 #include <asm/mach/map.h>
 #include <video/omapdss.h>
-#include <video/omap-panel-generic-dpi.h>
-#include <video/omap-panel-tfp410.h>
+#include <video/omap-panel-data.h>
 #include "common.h"
 #include "mux.h"
......
...@@ -27,9 +27,7 @@
 #include <linux/gpio.h>
 #include <video/omapdss.h>
-#include <video/omap-panel-tfp410.h>
-#include <video/omap-panel-nokia-dsi.h>
-#include <video/omap-panel-picodlp.h>
+#include <video/omap-panel-data.h>
 #include "soc.h"
 #include "dss-common.h"
......
 obj-y += drm/ vga/
+obj-$(CONFIG_TEGRA_HOST1X) += host1x/
...@@ -215,8 +215,8 @@ source "drivers/gpu/drm/cirrus/Kconfig"
 source "drivers/gpu/drm/shmobile/Kconfig"
-source "drivers/gpu/drm/tegra/Kconfig"
 source "drivers/gpu/drm/omapdrm/Kconfig"
 source "drivers/gpu/drm/tilcdc/Kconfig"
+source "drivers/gpu/drm/qxl/Kconfig"
...@@ -49,7 +49,7 @@ obj-$(CONFIG_DRM_GMA500) += gma500/
 obj-$(CONFIG_DRM_UDL) += udl/
 obj-$(CONFIG_DRM_AST) += ast/
 obj-$(CONFIG_DRM_SHMOBILE) +=shmobile/
-obj-$(CONFIG_DRM_TEGRA) += tegra/
 obj-$(CONFIG_DRM_OMAP) += omapdrm/
 obj-$(CONFIG_DRM_TILCDC) += tilcdc/
+obj-$(CONFIG_DRM_QXL) += qxl/
 obj-y += i2c/
...@@ -241,6 +241,8 @@ struct ast_fbdev {
 void *sysram;
 int size;
 struct ttm_bo_kmap_obj mapping;
+int x1, y1, x2, y2; /* dirty rect */
+spinlock_t dirty_lock;
 };
 #define to_ast_crtc(x) container_of(x, struct ast_crtc, base)
......
...@@ -53,16 +53,52 @@ static void ast_dirty_update(struct ast_fbdev *afbdev,
 int bpp = (afbdev->afb.base.bits_per_pixel + 7)/8;
 int ret;
 bool unmap = false;
+bool store_for_later = false;
+int x2, y2;
+unsigned long flags;
 obj = afbdev->afb.obj;
 bo = gem_to_ast_bo(obj);
+/*
+ * try and reserve the BO, if we fail with busy
+ * then the BO is being moved and we should
+ * store up the damage until later.
+ */
 ret = ast_bo_reserve(bo, true);
 if (ret) {
-DRM_ERROR("failed to reserve fb bo\n");
+if (ret != -EBUSY)
+return;
+store_for_later = true;
+}
+x2 = x + width - 1;
+y2 = y + height - 1;
+spin_lock_irqsave(&afbdev->dirty_lock, flags);
+if (afbdev->y1 < y)
+y = afbdev->y1;
+if (afbdev->y2 > y2)
+y2 = afbdev->y2;
+if (afbdev->x1 < x)
+x = afbdev->x1;
+if (afbdev->x2 > x2)
+x2 = afbdev->x2;
+if (store_for_later) {
+afbdev->x1 = x;
+afbdev->x2 = x2;
+afbdev->y1 = y;
+afbdev->y2 = y2;
+spin_unlock_irqrestore(&afbdev->dirty_lock, flags);
 return;
 }
+afbdev->x1 = afbdev->y1 = INT_MAX;
+afbdev->x2 = afbdev->y2 = 0;
+spin_unlock_irqrestore(&afbdev->dirty_lock, flags);
 if (!bo->kmap.virtual) {
 ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
 if (ret) {
...@@ -72,10 +108,10 @@ static void ast_dirty_update(struct ast_fbdev *afbdev,
 }
 unmap = true;
 }
-for (i = y; i < y + height; i++) {
+for (i = y; i <= y2; i++) {
 /* assume equal stride for now */
 src_offset = dst_offset = i * afbdev->afb.base.pitches[0] + (x * bpp);
-memcpy_toio(bo->kmap.virtual + src_offset, afbdev->sysram + src_offset, width * bpp);
+memcpy_toio(bo->kmap.virtual + src_offset, afbdev->sysram + src_offset, (x2 - x + 1) * bpp);
 }
 if (unmap)
...@@ -292,6 +328,7 @@ int ast_fbdev_init(struct drm_device *dev)
 ast->fbdev = afbdev;
 afbdev->helper.funcs = &ast_fb_helper_funcs;
+spin_lock_init(&afbdev->dirty_lock);
 ret = drm_fb_helper_init(dev, &afbdev->helper,
 1, 1);
 if (ret) {
......
...@@ -316,7 +316,7 @@ int ast_bo_reserve(struct ast_bo *bo, bool no_wait)
 ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, 0);
 if (ret) {
-if (ret != -ERESTARTSYS)
+if (ret != -ERESTARTSYS && ret != -EBUSY)
 DRM_ERROR("reserve failed %p\n", bo);
 return ret;
 }
......
...@@ -154,6 +154,8 @@ struct cirrus_fbdev {
 struct list_head fbdev_list;
 void *sysram;
 int size;
+int x1, y1, x2, y2; /* dirty rect */
+spinlock_t dirty_lock;
 };
 struct cirrus_bo {
......
...@@ -27,16 +27,51 @@ static void cirrus_dirty_update(struct cirrus_fbdev *afbdev,
 int bpp = (afbdev->gfb.base.bits_per_pixel + 7)/8;
 int ret;
 bool unmap = false;
+bool store_for_later = false;
+int x2, y2;
+unsigned long flags;
 obj = afbdev->gfb.obj;
 bo = gem_to_cirrus_bo(obj);
+/*
+ * try and reserve the BO, if we fail with busy
+ * then the BO is being moved and we should
+ * store up the damage until later.
+ */
 ret = cirrus_bo_reserve(bo, true);
 if (ret) {
-DRM_ERROR("failed to reserve fb bo\n");
+if (ret != -EBUSY)
+return;
+store_for_later = true;
+}
+x2 = x + width - 1;
+y2 = y + height - 1;
+spin_lock_irqsave(&afbdev->dirty_lock, flags);
+if (afbdev->y1 < y)
+y = afbdev->y1;
+if (afbdev->y2 > y2)
+y2 = afbdev->y2;
+if (afbdev->x1 < x)
+x = afbdev->x1;
+if (afbdev->x2 > x2)
+x2 = afbdev->x2;
+if (store_for_later) {
+afbdev->x1 = x;
+afbdev->x2 = x2;
+afbdev->y1 = y;
+afbdev->y2 = y2;
+spin_unlock_irqrestore(&afbdev->dirty_lock, flags);
 return;
 }
+afbdev->x1 = afbdev->y1 = INT_MAX;
+afbdev->x2 = afbdev->y2 = 0;
+spin_unlock_irqrestore(&afbdev->dirty_lock, flags);
 if (!bo->kmap.virtual) {
 ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
 if (ret) {
...@@ -268,6 +303,7 @@ int cirrus_fbdev_init(struct cirrus_device *cdev)
 cdev->mode_info.gfbdev = gfbdev;
 gfbdev->helper.funcs = &cirrus_fb_helper_funcs;
+spin_lock_init(&gfbdev->dirty_lock);
 ret = drm_fb_helper_init(cdev->dev, &gfbdev->helper,
 cdev->num_crtc, CIRRUSFB_CONN_LIMIT);
......
...@@ -321,7 +321,7 @@ int cirrus_bo_reserve(struct cirrus_bo *bo, bool no_wait)
 ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, 0);
 if (ret) {
-if (ret != -ERESTARTSYS)
+if (ret != -ERESTARTSYS && ret != -EBUSY)
 DRM_ERROR("reserve failed %p\n", bo);
 return ret;
 }
......
...@@ -105,12 +105,11 @@ drm_clflush_sg(struct sg_table *st)
 {
 #if defined(CONFIG_X86)
 if (cpu_has_clflush) {
-struct scatterlist *sg;
-int i;
+struct sg_page_iter sg_iter;
 mb();
-for_each_sg(st->sgl, sg, st->nents, i)
-drm_clflush_page(sg_page(sg));
+for_each_sg_page(st->sgl, &sg_iter, st->nents, 0)
+drm_clflush_page(sg_page_iter_page(&sg_iter));
 mb();
 return;
......
...@@ -178,9 +178,6 @@ static struct drm_prop_enum_list drm_dirty_info_enum_list[] = { ...@@ -178,9 +178,6 @@ static struct drm_prop_enum_list drm_dirty_info_enum_list[] = {
{ DRM_MODE_DIRTY_ANNOTATE, "Annotate" }, { DRM_MODE_DIRTY_ANNOTATE, "Annotate" },
}; };
DRM_ENUM_NAME_FN(drm_get_dirty_info_name,
drm_dirty_info_enum_list)
struct drm_conn_prop_enum_list { struct drm_conn_prop_enum_list {
int type; int type;
char *name; char *name;
...@@ -412,7 +409,7 @@ struct drm_framebuffer *drm_framebuffer_lookup(struct drm_device *dev, ...@@ -412,7 +409,7 @@ struct drm_framebuffer *drm_framebuffer_lookup(struct drm_device *dev,
mutex_lock(&dev->mode_config.fb_lock); mutex_lock(&dev->mode_config.fb_lock);
fb = __drm_framebuffer_lookup(dev, id); fb = __drm_framebuffer_lookup(dev, id);
if (fb) if (fb)
kref_get(&fb->refcount); drm_framebuffer_reference(fb);
mutex_unlock(&dev->mode_config.fb_lock); mutex_unlock(&dev->mode_config.fb_lock);
return fb; return fb;
...@@ -706,7 +703,6 @@ int drm_connector_init(struct drm_device *dev, ...@@ -706,7 +703,6 @@ int drm_connector_init(struct drm_device *dev,
connector->connector_type = connector_type; connector->connector_type = connector_type;
connector->connector_type_id = connector->connector_type_id =
++drm_connector_enum_list[connector_type].count; /* TODO */ ++drm_connector_enum_list[connector_type].count; /* TODO */
INIT_LIST_HEAD(&connector->user_modes);
INIT_LIST_HEAD(&connector->probed_modes); INIT_LIST_HEAD(&connector->probed_modes);
INIT_LIST_HEAD(&connector->modes); INIT_LIST_HEAD(&connector->modes);
connector->edid_blob_ptr = NULL; connector->edid_blob_ptr = NULL;
...@@ -747,9 +743,6 @@ void drm_connector_cleanup(struct drm_connector *connector) ...@@ -747,9 +743,6 @@ void drm_connector_cleanup(struct drm_connector *connector)
list_for_each_entry_safe(mode, t, &connector->modes, head) list_for_each_entry_safe(mode, t, &connector->modes, head)
drm_mode_remove(connector, mode); drm_mode_remove(connector, mode);
list_for_each_entry_safe(mode, t, &connector->user_modes, head)
drm_mode_remove(connector, mode);
drm_mode_object_put(dev, &connector->base); drm_mode_object_put(dev, &connector->base);
list_del(&connector->head); list_del(&connector->head);
dev->mode_config.num_connector--; dev->mode_config.num_connector--;
...@@ -1120,45 +1113,7 @@ int drm_mode_create_dirty_info_property(struct drm_device *dev) ...@@ -1120,45 +1113,7 @@ int drm_mode_create_dirty_info_property(struct drm_device *dev)
} }
EXPORT_SYMBOL(drm_mode_create_dirty_info_property); EXPORT_SYMBOL(drm_mode_create_dirty_info_property);
/** static int drm_mode_group_init(struct drm_device *dev, struct drm_mode_group *group)
* drm_mode_config_init - initialize DRM mode_configuration structure
* @dev: DRM device
*
* Initialize @dev's mode_config structure, used for tracking the graphics
* configuration of @dev.
*
* Since this initializes the modeset locks, no locking is possible. Which is no
* problem, since this should happen single threaded at init time. It is the
* driver's problem to ensure this guarantee.
*
*/
void drm_mode_config_init(struct drm_device *dev)
{
mutex_init(&dev->mode_config.mutex);
mutex_init(&dev->mode_config.idr_mutex);
mutex_init(&dev->mode_config.fb_lock);
INIT_LIST_HEAD(&dev->mode_config.fb_list);
INIT_LIST_HEAD(&dev->mode_config.crtc_list);
INIT_LIST_HEAD(&dev->mode_config.connector_list);
INIT_LIST_HEAD(&dev->mode_config.encoder_list);
INIT_LIST_HEAD(&dev->mode_config.property_list);
INIT_LIST_HEAD(&dev->mode_config.property_blob_list);
INIT_LIST_HEAD(&dev->mode_config.plane_list);
idr_init(&dev->mode_config.crtc_idr);
drm_modeset_lock_all(dev);
drm_mode_create_standard_connector_properties(dev);
drm_modeset_unlock_all(dev);
/* Just to be sure */
dev->mode_config.num_fb = 0;
dev->mode_config.num_connector = 0;
dev->mode_config.num_crtc = 0;
dev->mode_config.num_encoder = 0;
}
EXPORT_SYMBOL(drm_mode_config_init);
int drm_mode_group_init(struct drm_device *dev, struct drm_mode_group *group)
{ {
uint32_t total_objects = 0; uint32_t total_objects = 0;
...@@ -1202,69 +1157,6 @@ int drm_mode_group_init_legacy_group(struct drm_device *dev, ...@@ -1202,69 +1157,6 @@ int drm_mode_group_init_legacy_group(struct drm_device *dev,
} }
EXPORT_SYMBOL(drm_mode_group_init_legacy_group); EXPORT_SYMBOL(drm_mode_group_init_legacy_group);
/**
* drm_mode_config_cleanup - free up DRM mode_config info
* @dev: DRM device
*
* Free up all the connectors and CRTCs associated with this DRM device, then
* free up the framebuffers and associated buffer objects.
*
* Note that since this /should/ happen single-threaded at driver/device
* teardown time, no locking is required. It's the driver's job to ensure that
* this guarantee actually holds true.
*
* FIXME: cleanup any dangling user buffer objects too
*/
void drm_mode_config_cleanup(struct drm_device *dev)
{
struct drm_connector *connector, *ot;
struct drm_crtc *crtc, *ct;
struct drm_encoder *encoder, *enct;
struct drm_framebuffer *fb, *fbt;
struct drm_property *property, *pt;
struct drm_plane *plane, *plt;
list_for_each_entry_safe(encoder, enct, &dev->mode_config.encoder_list,
head) {
encoder->funcs->destroy(encoder);
}
list_for_each_entry_safe(connector, ot,
&dev->mode_config.connector_list, head) {
connector->funcs->destroy(connector);
}
list_for_each_entry_safe(property, pt, &dev->mode_config.property_list,
head) {
drm_property_destroy(dev, property);
}
/*
* Single-threaded teardown context, so it's not required to grab the
* fb_lock to protect against concurrent fb_list access. Contrary, it
* would actually deadlock with the drm_framebuffer_cleanup function.
*
* Also, if there are any framebuffers left, that's a driver leak now,
* so politely WARN about this.
*/
WARN_ON(!list_empty(&dev->mode_config.fb_list));
list_for_each_entry_safe(fb, fbt, &dev->mode_config.fb_list, head) {
drm_framebuffer_remove(fb);
}
list_for_each_entry_safe(plane, plt, &dev->mode_config.plane_list,
head) {
plane->funcs->destroy(plane);
}
list_for_each_entry_safe(crtc, ct, &dev->mode_config.crtc_list, head) {
crtc->funcs->destroy(crtc);
}
idr_destroy(&dev->mode_config.crtc_idr);
}
EXPORT_SYMBOL(drm_mode_config_cleanup);
/** /**
* drm_crtc_convert_to_umode - convert a drm_display_mode into a modeinfo * drm_crtc_convert_to_umode - convert a drm_display_mode into a modeinfo
* @out: drm_mode_modeinfo struct to return to the user * @out: drm_mode_modeinfo struct to return to the user
...@@ -2717,192 +2609,6 @@ void drm_fb_release(struct drm_file *priv) ...@@ -2717,192 +2609,6 @@ void drm_fb_release(struct drm_file *priv)
mutex_unlock(&priv->fbs_lock); mutex_unlock(&priv->fbs_lock);
} }
/**
* drm_mode_attachmode - add a mode to the user mode list
* @dev: DRM device
* @connector: connector to add the mode to
* @mode: mode to add
*
* Add @mode to @connector's user mode list.
*/
static void drm_mode_attachmode(struct drm_device *dev,
struct drm_connector *connector,
struct drm_display_mode *mode)
{
list_add_tail(&mode->head, &connector->user_modes);
}
int drm_mode_attachmode_crtc(struct drm_device *dev, struct drm_crtc *crtc,
const struct drm_display_mode *mode)
{
struct drm_connector *connector;
int ret = 0;
struct drm_display_mode *dup_mode, *next;
LIST_HEAD(list);
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
if (!connector->encoder)
continue;
if (connector->encoder->crtc == crtc) {
dup_mode = drm_mode_duplicate(dev, mode);
if (!dup_mode) {
ret = -ENOMEM;
goto out;
}
list_add_tail(&dup_mode->head, &list);
}
}
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
if (!connector->encoder)
continue;
if (connector->encoder->crtc == crtc)
list_move_tail(list.next, &connector->user_modes);
}
WARN_ON(!list_empty(&list));
out:
list_for_each_entry_safe(dup_mode, next, &list, head)
drm_mode_destroy(dev, dup_mode);
return ret;
}
EXPORT_SYMBOL(drm_mode_attachmode_crtc);
static int drm_mode_detachmode(struct drm_device *dev,
struct drm_connector *connector,
struct drm_display_mode *mode)
{
int found = 0;
int ret = 0;
struct drm_display_mode *match_mode, *t;
list_for_each_entry_safe(match_mode, t, &connector->user_modes, head) {
if (drm_mode_equal(match_mode, mode)) {
list_del(&match_mode->head);
drm_mode_destroy(dev, match_mode);
found = 1;
break;
}
}
if (!found)
ret = -EINVAL;
return ret;
}
int drm_mode_detachmode_crtc(struct drm_device *dev, struct drm_display_mode *mode)
{
struct drm_connector *connector;
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
drm_mode_detachmode(dev, connector, mode);
}
return 0;
}
EXPORT_SYMBOL(drm_mode_detachmode_crtc);
/**
* drm_fb_attachmode - Attach a user mode to an connector
* @dev: drm device for the ioctl
* @data: data pointer for the ioctl
* @file_priv: drm file for the ioctl call
*
* This attaches a user specified mode to an connector.
* Called by the user via ioctl.
*
* RETURNS:
* Zero on success, errno on failure.
*/
int drm_mode_attachmode_ioctl(struct drm_device *dev,
void *data, struct drm_file *file_priv)
{
struct drm_mode_mode_cmd *mode_cmd = data;
struct drm_connector *connector;
struct drm_display_mode *mode;
struct drm_mode_object *obj;
struct drm_mode_modeinfo *umode = &mode_cmd->mode;
int ret;
if (!drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
drm_modeset_lock_all(dev);
obj = drm_mode_object_find(dev, mode_cmd->connector_id, DRM_MODE_OBJECT_CONNECTOR);
if (!obj) {
ret = -EINVAL;
goto out;
}
connector = obj_to_connector(obj);
mode = drm_mode_create(dev);
if (!mode) {
ret = -ENOMEM;
goto out;
}
ret = drm_crtc_convert_umode(mode, umode);
if (ret) {
DRM_DEBUG_KMS("Invalid mode\n");
drm_mode_destroy(dev, mode);
goto out;
}
drm_mode_attachmode(dev, connector, mode);
out:
drm_modeset_unlock_all(dev);
return ret;
}
/**
* drm_fb_detachmode - Detach a user specified mode from an connector
* @dev: drm device for the ioctl
* @data: data pointer for the ioctl
* @file_priv: drm file for the ioctl call
*
* Called by the user via ioctl.
*
* RETURNS:
* Zero on success, errno on failure.
*/
int drm_mode_detachmode_ioctl(struct drm_device *dev,
void *data, struct drm_file *file_priv)
{
struct drm_mode_object *obj;
struct drm_mode_mode_cmd *mode_cmd = data;
struct drm_connector *connector;
struct drm_display_mode mode;
struct drm_mode_modeinfo *umode = &mode_cmd->mode;
int ret;
if (!drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
drm_modeset_lock_all(dev);
obj = drm_mode_object_find(dev, mode_cmd->connector_id, DRM_MODE_OBJECT_CONNECTOR);
if (!obj) {
ret = -EINVAL;
goto out;
}
connector = obj_to_connector(obj);
ret = drm_crtc_convert_umode(&mode, umode);
if (ret) {
DRM_DEBUG_KMS("Invalid mode\n");
goto out;
}
ret = drm_mode_detachmode(dev, connector, &mode);
out:
drm_modeset_unlock_all(dev);
return ret;
}
struct drm_property *drm_property_create(struct drm_device *dev, int flags, struct drm_property *drm_property_create(struct drm_device *dev, int flags,
const char *name, int num_values) const char *name, int num_values)
{ {
...@@ -3739,6 +3445,12 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev, ...@@ -3739,6 +3445,12 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev,
goto out; goto out;
} }
if (crtc->fb->pixel_format != fb->pixel_format) {
DRM_DEBUG_KMS("Page flip is not allowed to change frame buffer format.\n");
ret = -EINVAL;
goto out;
}
if (page_flip->flags & DRM_MODE_PAGE_FLIP_EVENT) { if (page_flip->flags & DRM_MODE_PAGE_FLIP_EVENT) {
ret = -ENOMEM; ret = -ENOMEM;
spin_lock_irqsave(&dev->event_lock, flags); spin_lock_irqsave(&dev->event_lock, flags);
...@@ -4064,3 +3776,110 @@ int drm_format_vert_chroma_subsampling(uint32_t format) ...@@ -4064,3 +3776,110 @@ int drm_format_vert_chroma_subsampling(uint32_t format)
} }
} }
EXPORT_SYMBOL(drm_format_vert_chroma_subsampling); EXPORT_SYMBOL(drm_format_vert_chroma_subsampling);
/**
* drm_mode_config_init - initialize DRM mode_configuration structure
* @dev: DRM device
*
* Initialize @dev's mode_config structure, used for tracking the graphics
* configuration of @dev.
*
* Since this initializes the modeset locks, no locking is possible. Which is no
* problem, since this should happen single threaded at init time. It is the
* driver's problem to ensure this guarantee.
*
*/
void drm_mode_config_init(struct drm_device *dev)
{
mutex_init(&dev->mode_config.mutex);
mutex_init(&dev->mode_config.idr_mutex);
mutex_init(&dev->mode_config.fb_lock);
INIT_LIST_HEAD(&dev->mode_config.fb_list);
INIT_LIST_HEAD(&dev->mode_config.crtc_list);
INIT_LIST_HEAD(&dev->mode_config.connector_list);
INIT_LIST_HEAD(&dev->mode_config.encoder_list);
INIT_LIST_HEAD(&dev->mode_config.property_list);
INIT_LIST_HEAD(&dev->mode_config.property_blob_list);
INIT_LIST_HEAD(&dev->mode_config.plane_list);
idr_init(&dev->mode_config.crtc_idr);
drm_modeset_lock_all(dev);
drm_mode_create_standard_connector_properties(dev);
drm_modeset_unlock_all(dev);
/* Just to be sure */
dev->mode_config.num_fb = 0;
dev->mode_config.num_connector = 0;
dev->mode_config.num_crtc = 0;
dev->mode_config.num_encoder = 0;
}
EXPORT_SYMBOL(drm_mode_config_init);
/**
* drm_mode_config_cleanup - free up DRM mode_config info
* @dev: DRM device
*
* Free up all the connectors and CRTCs associated with this DRM device, then
* free up the framebuffers and associated buffer objects.
*
* Note that since this /should/ happen single-threaded at driver/device
* teardown time, no locking is required. It's the driver's job to ensure that
* this guarantee actually holds true.
*
* FIXME: cleanup any dangling user buffer objects too
*/
void drm_mode_config_cleanup(struct drm_device *dev)
{
struct drm_connector *connector, *ot;
struct drm_crtc *crtc, *ct;
struct drm_encoder *encoder, *enct;
struct drm_framebuffer *fb, *fbt;
struct drm_property *property, *pt;
struct drm_property_blob *blob, *bt;
struct drm_plane *plane, *plt;
list_for_each_entry_safe(encoder, enct, &dev->mode_config.encoder_list,
head) {
encoder->funcs->destroy(encoder);
}
list_for_each_entry_safe(connector, ot,
&dev->mode_config.connector_list, head) {
connector->funcs->destroy(connector);
}
list_for_each_entry_safe(property, pt, &dev->mode_config.property_list,
head) {
drm_property_destroy(dev, property);
}
list_for_each_entry_safe(blob, bt, &dev->mode_config.property_blob_list,
head) {
drm_property_destroy_blob(dev, blob);
}
/*
* Single-threaded teardown context, so it's not required to grab the
* fb_lock to protect against concurrent fb_list access. Contrary, it
* would actually deadlock with the drm_framebuffer_cleanup function.
*
* Also, if there are any framebuffers left, that's a driver leak now,
* so politely WARN about this.
*/
WARN_ON(!list_empty(&dev->mode_config.fb_list));
list_for_each_entry_safe(fb, fbt, &dev->mode_config.fb_list, head) {
drm_framebuffer_remove(fb);
}
list_for_each_entry_safe(plane, plt, &dev->mode_config.plane_list,
head) {
plane->funcs->destroy(plane);
}
list_for_each_entry_safe(crtc, ct, &dev->mode_config.crtc_list, head) {
crtc->funcs->destroy(crtc);
}
idr_destroy(&dev->mode_config.crtc_idr);
}
EXPORT_SYMBOL(drm_mode_config_cleanup);
...@@ -648,6 +648,9 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
 } else if (set->fb->bits_per_pixel !=
 set->crtc->fb->bits_per_pixel) {
 mode_changed = true;
+} else if (set->fb->pixel_format !=
+set->crtc->fb->pixel_format) {
+mode_changed = true;
 } else
 fb_changed = true;
 }
......
...@@ -60,7 +60,7 @@ static int drm_version(struct drm_device *dev, void *data,
 [DRM_IOCTL_NR(ioctl)] = {.cmd = ioctl, .func = _func, .flags = _flags, .cmd_drv = 0}
 /** Ioctl table */
-static struct drm_ioctl_desc drm_ioctls[] = {
+static const struct drm_ioctl_desc drm_ioctls[] = {
 DRM_IOCTL_DEF(DRM_IOCTL_VERSION, drm_version, DRM_UNLOCKED),
 DRM_IOCTL_DEF(DRM_IOCTL_GET_UNIQUE, drm_getunique, 0),
 DRM_IOCTL_DEF(DRM_IOCTL_GET_MAGIC, drm_getmagic, 0),
...@@ -150,8 +150,8 @@ static struct drm_ioctl_desc drm_ioctls[] = {
 DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETGAMMA, drm_mode_gamma_set_ioctl, DRM_MASTER|DRM_UNLOCKED),
 DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETENCODER, drm_mode_getencoder, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
 DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCONNECTOR, drm_mode_getconnector, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
-DRM_IOCTL_DEF(DRM_IOCTL_MODE_ATTACHMODE, drm_mode_attachmode_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
-DRM_IOCTL_DEF(DRM_IOCTL_MODE_DETACHMODE, drm_mode_detachmode_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+DRM_IOCTL_DEF(DRM_IOCTL_MODE_ATTACHMODE, drm_noop, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+DRM_IOCTL_DEF(DRM_IOCTL_MODE_DETACHMODE, drm_noop, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
 DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPROPERTY, drm_mode_getproperty_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
 DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETPROPERTY, drm_mode_connector_property_set_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
 DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPROPBLOB, drm_mode_getblob_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
...@@ -375,7 +375,7 @@ long drm_ioctl(struct file *filp,
 {
 struct drm_file *file_priv = filp->private_data;
 struct drm_device *dev;
-struct drm_ioctl_desc *ioctl;
+const struct drm_ioctl_desc *ioctl;
 drm_ioctl_t *func;
 unsigned int nr = DRM_IOCTL_NR(cmd);
 int retcode = -EINVAL;
...@@ -408,6 +408,7 @@ long drm_ioctl(struct file *filp,
 usize = asize = _IOC_SIZE(cmd);
 if (drv_size > asize)
 asize = drv_size;
+cmd = ioctl->cmd_drv;
 }
 else if ((nr >= DRM_COMMAND_END) || (nr < DRM_COMMAND_BASE)) {
 ioctl = &drm_ioctls[nr];
......
(This file's diff has been collapsed and is not shown.)
...@@ -31,10 +31,11 @@ module_param_string(edid_firmware, edid_firmware, sizeof(edid_firmware), 0644);
 MODULE_PARM_DESC(edid_firmware, "Do not probe monitor, use specified EDID blob "
 "from built-in data or /lib/firmware instead. ");
-#define GENERIC_EDIDS 4
+#define GENERIC_EDIDS 5
 static char *generic_edid_name[GENERIC_EDIDS] = {
 "edid/1024x768.bin",
 "edid/1280x1024.bin",
+"edid/1600x1200.bin",
 "edid/1680x1050.bin",
 "edid/1920x1080.bin",
 };
...@@ -79,6 +80,24 @@ static u8 generic_edid[GENERIC_EDIDS][128] = {
 {
 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00,
 0x31, 0xd8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x05, 0x16, 0x01, 0x03, 0x6d, 0x37, 0x29, 0x78,
+0xea, 0x5e, 0xc0, 0xa4, 0x59, 0x4a, 0x98, 0x25,
+0x20, 0x50, 0x54, 0x00, 0x00, 0x00, 0xa9, 0x40,
+0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
+0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x48, 0x3f,
+0x40, 0x30, 0x62, 0xb0, 0x32, 0x40, 0x40, 0xc0,
+0x13, 0x00, 0x2b, 0xa0, 0x21, 0x00, 0x00, 0x1e,
+0x00, 0x00, 0x00, 0xff, 0x00, 0x4c, 0x69, 0x6e,
+0x75, 0x78, 0x20, 0x23, 0x30, 0x0a, 0x20, 0x20,
+0x20, 0x20, 0x00, 0x00, 0x00, 0xfd, 0x00, 0x3b,
+0x3d, 0x4a, 0x4c, 0x11, 0x00, 0x0a, 0x20, 0x20,
+0x20, 0x20, 0x20, 0x20, 0x00, 0x00, 0x00, 0xfc,
+0x00, 0x4c, 0x69, 0x6e, 0x75, 0x78, 0x20, 0x55,
+0x58, 0x47, 0x41, 0x0a, 0x20, 0x20, 0x00, 0x9d,
+},
+{
+0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00,
+0x31, 0xd8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 0x05, 0x16, 0x01, 0x03, 0x6d, 0x2b, 0x1b, 0x78,
 0xea, 0x5e, 0xc0, 0xa4, 0x59, 0x4a, 0x98, 0x25,
 0x20, 0x50, 0x54, 0x00, 0x00, 0x00, 0xb3, 0x00,
......
...@@ -1398,7 +1398,7 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper)
 struct drm_mode_set *modeset;
 bool *enabled;
 int width, height;
-int i, ret;
+int i;
 DRM_DEBUG_KMS("\n");
...@@ -1419,16 +1419,23 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper)
 drm_enable_connectors(fb_helper, enabled);
-ret = drm_target_cloned(fb_helper, modes, enabled, width, height);
-if (!ret) {
-ret = drm_target_preferred(fb_helper, modes, enabled, width, height);
-if (!ret)
+if (!(fb_helper->funcs->initial_config &&
+fb_helper->funcs->initial_config(fb_helper, crtcs, modes,
+enabled, width, height))) {
+memset(modes, 0, dev->mode_config.num_connector*sizeof(modes[0]));
+memset(crtcs, 0, dev->mode_config.num_connector*sizeof(crtcs[0]));
+if (!drm_target_cloned(fb_helper,
+modes, enabled, width, height) &&
+!drm_target_preferred(fb_helper,
+modes, enabled, width, height))
 DRM_ERROR("Unable to find initial modes\n");
-}
-DRM_DEBUG_KMS("picking CRTCs for %dx%d config\n", width, height);
+DRM_DEBUG_KMS("picking CRTCs for %dx%d config\n",
+width, height);
 drm_pick_crtcs(fb_helper, crtcs, modes, 0, width, height);
+}
 /* need to set the modesets up here for use later */
 /* fill out the connector<->crtc mappings into the modesets */
......
...@@ -205,11 +205,11 @@ static void
 drm_gem_remove_prime_handles(struct drm_gem_object *obj, struct drm_file *filp)
 {
 if (obj->import_attach) {
-drm_prime_remove_imported_buf_handle(&filp->prime,
+drm_prime_remove_buf_handle(&filp->prime,
 obj->import_attach->dmabuf);
 }
 if (obj->export_dma_buf) {
-drm_prime_remove_imported_buf_handle(&filp->prime,
+drm_prime_remove_buf_handle(&filp->prime,
 obj->export_dma_buf);
 }
 }
......
...@@ -848,6 +848,26 @@ bool drm_mode_equal(const struct drm_display_mode *mode1, const struct drm_displ
 } else if (mode1->clock != mode2->clock)
 return false;
+return drm_mode_equal_no_clocks(mode1, mode2);
+}
+EXPORT_SYMBOL(drm_mode_equal);
+/**
+ * drm_mode_equal_no_clocks - test modes for equality
+ * @mode1: first mode
+ * @mode2: second mode
+ *
+ * LOCKING:
+ * None.
+ *
+ * Check to see if @mode1 and @mode2 are equivalent, but
+ * don't check the pixel clocks.
+ *
+ * RETURNS:
+ * True if the modes are equal, false otherwise.
+ */
+bool drm_mode_equal_no_clocks(const struct drm_display_mode *mode1, const struct drm_display_mode *mode2)
+{
 if (mode1->hdisplay == mode2->hdisplay &&
 mode1->hsync_start == mode2->hsync_start &&
 mode1->hsync_end == mode2->hsync_end &&
...@@ -863,7 +883,7 @@ bool drm_mode_equal(const struct drm_display_mode *mode1, const struct drm_displ
 return false;
 }
-EXPORT_SYMBOL(drm_mode_equal);
+EXPORT_SYMBOL(drm_mode_equal_no_clocks);
 /**
 * drm_mode_validate_size - make sure modes adhere to size constraints
......
...@@ -152,7 +152,7 @@ static const char *drm_pci_get_name(struct drm_device *dev)
 return pdriver->name;
 }
-int drm_pci_set_busid(struct drm_device *dev, struct drm_master *master)
+static int drm_pci_set_busid(struct drm_device *dev, struct drm_master *master)
 {
 int len, ret;
 struct pci_driver *pdriver = dev->driver->kdriver.pci;
...@@ -194,9 +194,9 @@ int drm_pci_set_busid(struct drm_device *dev, struct drm_master *master)
 return ret;
 }
-int drm_pci_set_unique(struct drm_device *dev,
+static int drm_pci_set_unique(struct drm_device *dev,
 struct drm_master *master,
 struct drm_unique *u)
 {
 int domain, bus, slot, func, ret;
 const char *bus_name;
...@@ -266,7 +266,7 @@ static int drm_pci_irq_by_busid(struct drm_device *dev, struct drm_irq_busid *p)
 return 0;
 }
-int drm_pci_agp_init(struct drm_device *dev)
+static int drm_pci_agp_init(struct drm_device *dev)
 {
 if (drm_core_has_AGP(dev)) {
 if (drm_pci_device_is_agp(dev))
......
...@@ -62,6 +62,7 @@ struct drm_prime_member { ...@@ -62,6 +62,7 @@ struct drm_prime_member {
struct dma_buf *dma_buf; struct dma_buf *dma_buf;
uint32_t handle; uint32_t handle;
}; };
static int drm_prime_add_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf, uint32_t handle);
static struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach, static struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
enum dma_data_direction dir) enum dma_data_direction dir)
...@@ -200,7 +201,8 @@ int drm_gem_prime_handle_to_fd(struct drm_device *dev, ...@@ -200,7 +201,8 @@ int drm_gem_prime_handle_to_fd(struct drm_device *dev,
{ {
struct drm_gem_object *obj; struct drm_gem_object *obj;
void *buf; void *buf;
int ret; int ret = 0;
struct dma_buf *dmabuf;
obj = drm_gem_object_lookup(dev, file_priv, handle); obj = drm_gem_object_lookup(dev, file_priv, handle);
if (!obj) if (!obj)
...@@ -209,43 +211,44 @@ int drm_gem_prime_handle_to_fd(struct drm_device *dev, ...@@ -209,43 +211,44 @@ int drm_gem_prime_handle_to_fd(struct drm_device *dev,
mutex_lock(&file_priv->prime.lock); mutex_lock(&file_priv->prime.lock);
/* re-export the original imported object */ /* re-export the original imported object */
if (obj->import_attach) { if (obj->import_attach) {
get_dma_buf(obj->import_attach->dmabuf); dmabuf = obj->import_attach->dmabuf;
*prime_fd = dma_buf_fd(obj->import_attach->dmabuf, flags); goto out_have_obj;
drm_gem_object_unreference_unlocked(obj);
mutex_unlock(&file_priv->prime.lock);
return 0;
} }
if (obj->export_dma_buf) { if (obj->export_dma_buf) {
get_dma_buf(obj->export_dma_buf); dmabuf = obj->export_dma_buf;
*prime_fd = dma_buf_fd(obj->export_dma_buf, flags); goto out_have_obj;
drm_gem_object_unreference_unlocked(obj); }
} else {
buf = dev->driver->gem_prime_export(dev, obj, flags); buf = dev->driver->gem_prime_export(dev, obj, flags);
if (IS_ERR(buf)) { if (IS_ERR(buf)) {
/* normally the created dma-buf takes ownership of the ref, /* normally the created dma-buf takes ownership of the ref,
* but if that fails then drop the ref * but if that fails then drop the ref
*/ */
drm_gem_object_unreference_unlocked(obj); ret = PTR_ERR(buf);
mutex_unlock(&file_priv->prime.lock); goto out;
return PTR_ERR(buf);
}
obj->export_dma_buf = buf;
*prime_fd = dma_buf_fd(buf, flags);
} }
obj->export_dma_buf = buf;
/* if we've exported this buffer the cheat and add it to the import list /* if we've exported this buffer the cheat and add it to the import list
* so we get the correct handle back * so we get the correct handle back
*/ */
ret = drm_prime_add_imported_buf_handle(&file_priv->prime, ret = drm_prime_add_buf_handle(&file_priv->prime,
obj->export_dma_buf, handle); obj->export_dma_buf, handle);
if (ret) { if (ret)
drm_gem_object_unreference_unlocked(obj); goto out;
mutex_unlock(&file_priv->prime.lock);
return ret;
}
*prime_fd = dma_buf_fd(buf, flags);
mutex_unlock(&file_priv->prime.lock); mutex_unlock(&file_priv->prime.lock);
return 0; return 0;
out_have_obj:
get_dma_buf(dmabuf);
*prime_fd = dma_buf_fd(dmabuf, flags);
out:
drm_gem_object_unreference_unlocked(obj);
mutex_unlock(&file_priv->prime.lock);
return ret;
} }
EXPORT_SYMBOL(drm_gem_prime_handle_to_fd); EXPORT_SYMBOL(drm_gem_prime_handle_to_fd);
...@@ -268,7 +271,6 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev, ...@@ -268,7 +271,6 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
* refcount on gem itself instead of f_count of dmabuf. * refcount on gem itself instead of f_count of dmabuf.
*/ */
drm_gem_object_reference(obj); drm_gem_object_reference(obj);
dma_buf_put(dma_buf);
return obj; return obj;
} }
} }
...@@ -277,6 +279,8 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev, ...@@ -277,6 +279,8 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
if (IS_ERR(attach)) if (IS_ERR(attach))
return ERR_PTR(PTR_ERR(attach)); return ERR_PTR(PTR_ERR(attach));
get_dma_buf(dma_buf);
sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
if (IS_ERR_OR_NULL(sgt)) { if (IS_ERR_OR_NULL(sgt)) {
ret = PTR_ERR(sgt); ret = PTR_ERR(sgt);
...@@ -297,6 +301,8 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev, ...@@ -297,6 +301,8 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL); dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
fail_detach: fail_detach:
dma_buf_detach(dma_buf, attach); dma_buf_detach(dma_buf, attach);
dma_buf_put(dma_buf);
return ERR_PTR(ret); return ERR_PTR(ret);
} }
EXPORT_SYMBOL(drm_gem_prime_import); EXPORT_SYMBOL(drm_gem_prime_import);
...@@ -314,7 +320,7 @@ int drm_gem_prime_fd_to_handle(struct drm_device *dev, ...@@ -314,7 +320,7 @@ int drm_gem_prime_fd_to_handle(struct drm_device *dev,
mutex_lock(&file_priv->prime.lock); mutex_lock(&file_priv->prime.lock);
ret = drm_prime_lookup_imported_buf_handle(&file_priv->prime, ret = drm_prime_lookup_buf_handle(&file_priv->prime,
dma_buf, handle); dma_buf, handle);
if (!ret) { if (!ret) {
ret = 0; ret = 0;
...@@ -333,12 +339,15 @@ int drm_gem_prime_fd_to_handle(struct drm_device *dev, ...@@ -333,12 +339,15 @@ int drm_gem_prime_fd_to_handle(struct drm_device *dev,
if (ret) if (ret)
goto out_put; goto out_put;
ret = drm_prime_add_imported_buf_handle(&file_priv->prime, ret = drm_prime_add_buf_handle(&file_priv->prime,
dma_buf, *handle); dma_buf, *handle);
if (ret) if (ret)
goto fail; goto fail;
mutex_unlock(&file_priv->prime.lock); mutex_unlock(&file_priv->prime.lock);
dma_buf_put(dma_buf);
return 0; return 0;
fail: fail:
...@@ -401,21 +410,17 @@ int drm_prime_fd_to_handle_ioctl(struct drm_device *dev, void *data, ...@@ -401,21 +410,17 @@ int drm_prime_fd_to_handle_ioctl(struct drm_device *dev, void *data,
struct sg_table *drm_prime_pages_to_sg(struct page **pages, int nr_pages) struct sg_table *drm_prime_pages_to_sg(struct page **pages, int nr_pages)
{ {
struct sg_table *sg = NULL; struct sg_table *sg = NULL;
struct scatterlist *iter;
int i;
int ret; int ret;
sg = kmalloc(sizeof(struct sg_table), GFP_KERNEL); sg = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
if (!sg) if (!sg)
goto out; goto out;
ret = sg_alloc_table(sg, nr_pages, GFP_KERNEL); ret = sg_alloc_table_from_pages(sg, pages, nr_pages, 0,
nr_pages << PAGE_SHIFT, GFP_KERNEL);
if (ret) if (ret)
goto out; goto out;
for_each_sg(sg->sgl, iter, nr_pages, i)
sg_set_page(iter, pages[i], PAGE_SIZE, 0);
return sg; return sg;
out: out:
kfree(sg); kfree(sg);
...@@ -483,15 +488,12 @@ EXPORT_SYMBOL(drm_prime_init_file_private); ...@@ -483,15 +488,12 @@ EXPORT_SYMBOL(drm_prime_init_file_private);
void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv) void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv)
{ {
struct drm_prime_member *member, *safe; /* by now drm_gem_release should've made sure the list is empty */
list_for_each_entry_safe(member, safe, &prime_fpriv->head, entry) { WARN_ON(!list_empty(&prime_fpriv->head));
list_del(&member->entry);
kfree(member);
}
} }
EXPORT_SYMBOL(drm_prime_destroy_file_private); EXPORT_SYMBOL(drm_prime_destroy_file_private);
int drm_prime_add_imported_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf, uint32_t handle) static int drm_prime_add_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf, uint32_t handle)
{ {
struct drm_prime_member *member; struct drm_prime_member *member;
...@@ -499,14 +501,14 @@ int drm_prime_add_imported_buf_handle(struct drm_prime_file_private *prime_fpriv ...@@ -499,14 +501,14 @@ int drm_prime_add_imported_buf_handle(struct drm_prime_file_private *prime_fpriv
if (!member) if (!member)
return -ENOMEM; return -ENOMEM;
get_dma_buf(dma_buf);
member->dma_buf = dma_buf; member->dma_buf = dma_buf;
member->handle = handle; member->handle = handle;
list_add(&member->entry, &prime_fpriv->head); list_add(&member->entry, &prime_fpriv->head);
return 0; return 0;
} }
EXPORT_SYMBOL(drm_prime_add_imported_buf_handle);
int drm_prime_lookup_imported_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf, uint32_t *handle) int drm_prime_lookup_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf, uint32_t *handle)
{ {
struct drm_prime_member *member; struct drm_prime_member *member;
...@@ -518,19 +520,20 @@ int drm_prime_lookup_imported_buf_handle(struct drm_prime_file_private *prime_fp ...@@ -518,19 +520,20 @@ int drm_prime_lookup_imported_buf_handle(struct drm_prime_file_private *prime_fp
} }
return -ENOENT; return -ENOENT;
} }
EXPORT_SYMBOL(drm_prime_lookup_imported_buf_handle); EXPORT_SYMBOL(drm_prime_lookup_buf_handle);
void drm_prime_remove_imported_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf) void drm_prime_remove_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf)
{ {
struct drm_prime_member *member, *safe; struct drm_prime_member *member, *safe;
mutex_lock(&prime_fpriv->lock); mutex_lock(&prime_fpriv->lock);
list_for_each_entry_safe(member, safe, &prime_fpriv->head, entry) { list_for_each_entry_safe(member, safe, &prime_fpriv->head, entry) {
if (member->dma_buf == dma_buf) { if (member->dma_buf == dma_buf) {
dma_buf_put(dma_buf);
list_del(&member->entry); list_del(&member->entry);
kfree(member); kfree(member);
} }
} }
mutex_unlock(&prime_fpriv->lock); mutex_unlock(&prime_fpriv->lock);
} }
EXPORT_SYMBOL(drm_prime_remove_imported_buf_handle); EXPORT_SYMBOL(drm_prime_remove_buf_handle);
...@@ -422,6 +422,7 @@ void drm_vm_open_locked(struct drm_device *dev,
 list_add(&vma_entry->head, &dev->vmalist);
 }
 }
+EXPORT_SYMBOL_GPL(drm_vm_open_locked);
 static void drm_vm_open(struct vm_area_struct *vma)
 {
......
...@@ -24,7 +24,9 @@ config DRM_EXYNOS_DMABUF
 config DRM_EXYNOS_FIMD
 bool "Exynos DRM FIMD"
-depends on DRM_EXYNOS && !FB_S3C && !ARCH_MULTIPLATFORM
+depends on OF && DRM_EXYNOS && !FB_S3C && !ARCH_MULTIPLATFORM
+select FB_MODE_HELPERS
+select VIDEOMODE_HELPERS
 help
 Choose this option if you want to use Exynos FIMD for DRM.
...@@ -54,7 +56,7 @@ config DRM_EXYNOS_IPP
 config DRM_EXYNOS_FIMC
 bool "Exynos DRM FIMC"
-depends on DRM_EXYNOS_IPP
+depends on DRM_EXYNOS_IPP && MFD_SYSCON && OF
 help
 Choose this option if you want to use Exynos FIMC for DRM.
......
...@@ -124,7 +124,7 @@ static int exynos_drm_connector_get_modes(struct drm_connector *connector)
 }
 count = drm_add_edid_modes(connector, edid);
-if (count < 0) {
+if (!count) {
 DRM_ERROR("Add edid modes failed %d\n", count);
 goto out;
 }
......
...@@ -235,7 +235,6 @@ struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev, ...@@ -235,7 +235,6 @@ struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev,
* refcount on gem itself instead of f_count of dmabuf. * refcount on gem itself instead of f_count of dmabuf.
*/ */
drm_gem_object_reference(obj); drm_gem_object_reference(obj);
dma_buf_put(dma_buf);
return obj; return obj;
} }
} }
...@@ -244,6 +243,7 @@ struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev, ...@@ -244,6 +243,7 @@ struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev,
if (IS_ERR(attach)) if (IS_ERR(attach))
return ERR_PTR(-EINVAL); return ERR_PTR(-EINVAL);
get_dma_buf(dma_buf);
sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
if (IS_ERR_OR_NULL(sgt)) { if (IS_ERR_OR_NULL(sgt)) {
...@@ -298,6 +298,8 @@ struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev, ...@@ -298,6 +298,8 @@ struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev,
dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL); dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
err_buf_detach: err_buf_detach:
dma_buf_detach(dma_buf, attach); dma_buf_detach(dma_buf, attach);
dma_buf_put(dma_buf);
return ERR_PTR(ret); return ERR_PTR(ret);
} }
......
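The dma-buf hunks above shift reference ownership: the importer now takes its own reference with get_dma_buf() once the attachment exists and drops it on the detach/error path, so the removed dma_buf_put() calls in the PRIME lookup table and in the "GEM object already imported" branch are no longer needed. A minimal sketch of that ownership rule with the driver specifics stripped out (not the exynos code itself):

#include <linux/err.h>
#include <linux/dma-buf.h>

/* Sketch only: one attachment holds one dma-buf reference. */
static struct sg_table *import_sketch(struct dma_buf *dma_buf, struct device *dev)
{
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	attach = dma_buf_attach(dma_buf, dev);
	if (IS_ERR(attach))
		return ERR_CAST(attach);

	get_dma_buf(dma_buf);			/* importer keeps its own reference */

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR_OR_NULL(sgt)) {
		dma_buf_detach(dma_buf, attach);
		dma_buf_put(dma_buf);		/* balances get_dma_buf() on failure */
		return sgt ? ERR_CAST(sgt) : ERR_PTR(-ENOMEM);
	}
	return sgt;	/* teardown later: unmap, detach, then dma_buf_put() */
}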
...@@ -380,6 +380,10 @@ static int __init exynos_drm_init(void) ...@@ -380,6 +380,10 @@ static int __init exynos_drm_init(void)
ret = platform_driver_register(&ipp_driver); ret = platform_driver_register(&ipp_driver);
if (ret < 0) if (ret < 0)
goto out_ipp; goto out_ipp;
ret = exynos_platform_device_ipp_register();
if (ret < 0)
goto out_ipp_dev;
#endif #endif
ret = platform_driver_register(&exynos_drm_platform_driver); ret = platform_driver_register(&exynos_drm_platform_driver);
...@@ -388,7 +392,7 @@ static int __init exynos_drm_init(void) ...@@ -388,7 +392,7 @@ static int __init exynos_drm_init(void)
exynos_drm_pdev = platform_device_register_simple("exynos-drm", -1, exynos_drm_pdev = platform_device_register_simple("exynos-drm", -1,
NULL, 0); NULL, 0);
if (IS_ERR_OR_NULL(exynos_drm_pdev)) { if (IS_ERR(exynos_drm_pdev)) {
ret = PTR_ERR(exynos_drm_pdev); ret = PTR_ERR(exynos_drm_pdev);
goto out; goto out;
} }
...@@ -400,6 +404,8 @@ static int __init exynos_drm_init(void) ...@@ -400,6 +404,8 @@ static int __init exynos_drm_init(void)
out_drm: out_drm:
#ifdef CONFIG_DRM_EXYNOS_IPP #ifdef CONFIG_DRM_EXYNOS_IPP
exynos_platform_device_ipp_unregister();
out_ipp_dev:
platform_driver_unregister(&ipp_driver); platform_driver_unregister(&ipp_driver);
out_ipp: out_ipp:
#endif #endif
...@@ -456,6 +462,7 @@ static void __exit exynos_drm_exit(void) ...@@ -456,6 +462,7 @@ static void __exit exynos_drm_exit(void)
platform_driver_unregister(&exynos_drm_platform_driver); platform_driver_unregister(&exynos_drm_platform_driver);
#ifdef CONFIG_DRM_EXYNOS_IPP #ifdef CONFIG_DRM_EXYNOS_IPP
exynos_platform_device_ipp_unregister();
platform_driver_unregister(&ipp_driver); platform_driver_unregister(&ipp_driver);
#endif #endif
......
...@@ -322,13 +322,23 @@ void exynos_drm_subdrv_close(struct drm_device *dev, struct drm_file *file); ...@@ -322,13 +322,23 @@ void exynos_drm_subdrv_close(struct drm_device *dev, struct drm_file *file);
* this function registers exynos drm hdmi platform device. It ensures only one * this function registers exynos drm hdmi platform device. It ensures only one
* instance of the device is created. * instance of the device is created.
*/ */
extern int exynos_platform_device_hdmi_register(void); int exynos_platform_device_hdmi_register(void);
/* /*
* this function unregisters exynos drm hdmi platform device if it exists. * this function unregisters exynos drm hdmi platform device if it exists.
*/ */
void exynos_platform_device_hdmi_unregister(void); void exynos_platform_device_hdmi_unregister(void);
/*
* this function registers exynos drm ipp platform device.
*/
int exynos_platform_device_ipp_register(void);
/*
* this function unregisters exynos drm ipp platform device if it exists.
*/
void exynos_platform_device_ipp_unregister(void);
extern struct platform_driver fimd_driver; extern struct platform_driver fimd_driver;
extern struct platform_driver hdmi_driver; extern struct platform_driver hdmi_driver;
extern struct platform_driver mixer_driver; extern struct platform_driver mixer_driver;
......
...@@ -12,11 +12,12 @@ ...@@ -12,11 +12,12 @@
* *
*/ */
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/clk.h> #include <linux/clk.h>
#include <linux/pm_runtime.h> #include <linux/pm_runtime.h>
#include <plat/map-base.h>
#include <drm/drmP.h> #include <drm/drmP.h>
#include <drm/exynos_drm.h> #include <drm/exynos_drm.h>
...@@ -76,6 +77,27 @@ enum fimc_wb { ...@@ -76,6 +77,27 @@ enum fimc_wb {
FIMC_WB_B, FIMC_WB_B,
}; };
enum {
FIMC_CLK_LCLK,
FIMC_CLK_GATE,
FIMC_CLK_WB_A,
FIMC_CLK_WB_B,
FIMC_CLK_MUX,
FIMC_CLK_PARENT,
FIMC_CLKS_MAX
};
static const char * const fimc_clock_names[] = {
[FIMC_CLK_LCLK] = "sclk_fimc",
[FIMC_CLK_GATE] = "fimc",
[FIMC_CLK_WB_A] = "pxl_async0",
[FIMC_CLK_WB_B] = "pxl_async1",
[FIMC_CLK_MUX] = "mux",
[FIMC_CLK_PARENT] = "parent",
};
#define FIMC_DEFAULT_LCLK_FREQUENCY 133000000UL
/* /*
* A structure of scaler. * A structure of scaler.
* *
...@@ -118,15 +140,6 @@ struct fimc_capability { ...@@ -118,15 +140,6 @@ struct fimc_capability {
u32 rl_h_rot; u32 rl_h_rot;
}; };
/*
* A structure of fimc driver data.
*
* @parent_clk: name of parent clock.
*/
struct fimc_driverdata {
char *parent_clk;
};
/* /*
* A structure of fimc context. * A structure of fimc context.
* *
...@@ -134,13 +147,10 @@ struct fimc_driverdata { ...@@ -134,13 +147,10 @@ struct fimc_driverdata {
* @regs_res: register resources. * @regs_res: register resources.
* @regs: memory mapped io registers. * @regs: memory mapped io registers.
* @lock: locking of operations. * @lock: locking of operations.
* @sclk_fimc_clk: fimc source clock. * @clocks: fimc clocks.
* @fimc_clk: fimc clock. * @clk_frequency: LCLK clock frequency.
* @wb_clk: writeback a clock. * @sysreg: handle to SYSREG block regmap.
* @wb_b_clk: writeback b clock.
* @sc: scaler information. * @sc: scaler information.
* @odr: ordering of YUV.
* @ver: fimc version.
* @pol: polarity of writeback. * @pol: polarity of writeback.
* @id: fimc id. * @id: fimc id.
* @irq: irq number. * @irq: irq number.
...@@ -151,12 +161,10 @@ struct fimc_context { ...@@ -151,12 +161,10 @@ struct fimc_context {
struct resource *regs_res; struct resource *regs_res;
void __iomem *regs; void __iomem *regs;
struct mutex lock; struct mutex lock;
struct clk *sclk_fimc_clk; struct clk *clocks[FIMC_CLKS_MAX];
struct clk *fimc_clk; u32 clk_frequency;
struct clk *wb_clk; struct regmap *sysreg;
struct clk *wb_b_clk;
struct fimc_scaler sc; struct fimc_scaler sc;
struct fimc_driverdata *ddata;
struct exynos_drm_ipp_pol pol; struct exynos_drm_ipp_pol pol;
int id; int id;
int irq; int irq;
...@@ -200,17 +208,13 @@ static void fimc_sw_reset(struct fimc_context *ctx) ...@@ -200,17 +208,13 @@ static void fimc_sw_reset(struct fimc_context *ctx)
fimc_write(0x0, EXYNOS_CIFCNTSEQ); fimc_write(0x0, EXYNOS_CIFCNTSEQ);
} }
static void fimc_set_camblk_fimd0_wb(struct fimc_context *ctx) static int fimc_set_camblk_fimd0_wb(struct fimc_context *ctx)
{ {
u32 camblk_cfg;
DRM_DEBUG_KMS("%s\n", __func__); DRM_DEBUG_KMS("%s\n", __func__);
camblk_cfg = readl(SYSREG_CAMERA_BLK); return regmap_update_bits(ctx->sysreg, SYSREG_CAMERA_BLK,
camblk_cfg &= ~(SYSREG_FIMD0WB_DEST_MASK); SYSREG_FIMD0WB_DEST_MASK,
camblk_cfg |= ctx->id << (SYSREG_FIMD0WB_DEST_SHIFT); ctx->id << SYSREG_FIMD0WB_DEST_SHIFT);
writel(camblk_cfg, SYSREG_CAMERA_BLK);
} }
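The SYSREG camera block is now reached through a syscon regmap (looked up via the "samsung,sysreg" phandle in the probe hunk further down), so the open-coded read-modify-write becomes one regmap_update_bits() call that can also report failure. Roughly what that helper does for a memory-mapped syscon, with locking and register caching omitted (a sketch, not the regmap implementation):

static void rmw_sketch(void __iomem *base, unsigned int reg, u32 mask, u32 val)
{
	u32 tmp = readl(base + reg);	/* what the driver previously open-coded */

	tmp &= ~mask;
	tmp |= val & mask;
	writel(tmp, base + reg);
}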
static void fimc_set_type_ctrl(struct fimc_context *ctx, enum fimc_wb wb) static void fimc_set_type_ctrl(struct fimc_context *ctx, enum fimc_wb wb)
...@@ -1301,14 +1305,12 @@ static int fimc_clk_ctrl(struct fimc_context *ctx, bool enable) ...@@ -1301,14 +1305,12 @@ static int fimc_clk_ctrl(struct fimc_context *ctx, bool enable)
DRM_DEBUG_KMS("%s:enable[%d]\n", __func__, enable); DRM_DEBUG_KMS("%s:enable[%d]\n", __func__, enable);
if (enable) { if (enable) {
clk_enable(ctx->sclk_fimc_clk); clk_prepare_enable(ctx->clocks[FIMC_CLK_GATE]);
clk_enable(ctx->fimc_clk); clk_prepare_enable(ctx->clocks[FIMC_CLK_WB_A]);
clk_enable(ctx->wb_clk);
ctx->suspended = false; ctx->suspended = false;
} else { } else {
clk_disable(ctx->sclk_fimc_clk); clk_disable_unprepare(ctx->clocks[FIMC_CLK_GATE]);
clk_disable(ctx->fimc_clk); clk_disable_unprepare(ctx->clocks[FIMC_CLK_WB_A]);
clk_disable(ctx->wb_clk);
ctx->suspended = true; ctx->suspended = true;
} }
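The clk_enable()/clk_disable() pairs become clk_prepare_enable()/clk_disable_unprepare() because the common clock framework splits clock handling into a sleepable prepare step and an atomic-safe enable step, and both must be done before a clock may tick. The combined helper behaves roughly like this sketch (not the kernel's exact definition):

static int clk_prepare_enable_sketch(struct clk *clk)
{
	int ret;

	ret = clk_prepare(clk);		/* may sleep */
	if (ret)
		return ret;

	ret = clk_enable(clk);		/* safe in atomic context */
	if (ret)
		clk_unprepare(clk);

	return ret;
}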
...@@ -1613,7 +1615,11 @@ static int fimc_ippdrv_start(struct device *dev, enum drm_exynos_ipp_cmd cmd) ...@@ -1613,7 +1615,11 @@ static int fimc_ippdrv_start(struct device *dev, enum drm_exynos_ipp_cmd cmd)
fimc_handle_lastend(ctx, true); fimc_handle_lastend(ctx, true);
/* setup FIMD */ /* setup FIMD */
fimc_set_camblk_fimd0_wb(ctx); ret = fimc_set_camblk_fimd0_wb(ctx);
if (ret < 0) {
dev_err(dev, "camblk setup failed.\n");
return ret;
}
set_wb.enable = 1; set_wb.enable = 1;
set_wb.refresh = property->refresh_rate; set_wb.refresh = property->refresh_rate;
...@@ -1713,76 +1719,118 @@ static void fimc_ippdrv_stop(struct device *dev, enum drm_exynos_ipp_cmd cmd) ...@@ -1713,76 +1719,118 @@ static void fimc_ippdrv_stop(struct device *dev, enum drm_exynos_ipp_cmd cmd)
fimc_write(cfg, EXYNOS_CIGCTRL); fimc_write(cfg, EXYNOS_CIGCTRL);
} }
static void fimc_put_clocks(struct fimc_context *ctx)
{
int i;
for (i = 0; i < FIMC_CLKS_MAX; i++) {
if (IS_ERR(ctx->clocks[i]))
continue;
clk_put(ctx->clocks[i]);
ctx->clocks[i] = ERR_PTR(-EINVAL);
}
}
static int fimc_setup_clocks(struct fimc_context *ctx)
{
struct device *fimc_dev = ctx->ippdrv.dev;
struct device *dev;
int ret, i;
for (i = 0; i < FIMC_CLKS_MAX; i++)
ctx->clocks[i] = ERR_PTR(-EINVAL);
for (i = 0; i < FIMC_CLKS_MAX; i++) {
if (i == FIMC_CLK_WB_A || i == FIMC_CLK_WB_B)
dev = fimc_dev->parent;
else
dev = fimc_dev;
ctx->clocks[i] = clk_get(dev, fimc_clock_names[i]);
if (IS_ERR(ctx->clocks[i])) {
if (i >= FIMC_CLK_MUX)
break;
ret = PTR_ERR(ctx->clocks[i]);
dev_err(fimc_dev, "failed to get clock: %s\n",
fimc_clock_names[i]);
goto e_clk_free;
}
}
/* Optional FIMC LCLK parent clock setting */
if (!IS_ERR(ctx->clocks[FIMC_CLK_PARENT])) {
ret = clk_set_parent(ctx->clocks[FIMC_CLK_MUX],
ctx->clocks[FIMC_CLK_PARENT]);
if (ret < 0) {
dev_err(fimc_dev, "failed to set parent.\n");
goto e_clk_free;
}
}
ret = clk_set_rate(ctx->clocks[FIMC_CLK_LCLK], ctx->clk_frequency);
if (ret < 0)
goto e_clk_free;
ret = clk_prepare_enable(ctx->clocks[FIMC_CLK_LCLK]);
if (!ret)
return ret;
e_clk_free:
fimc_put_clocks(ctx);
return ret;
}
static int fimc_parse_dt(struct fimc_context *ctx)
{
struct device_node *node = ctx->ippdrv.dev->of_node;
/* Handle only devices that support the LCD Writeback data path */
if (!of_property_read_bool(node, "samsung,lcd-wb"))
return -ENODEV;
if (of_property_read_u32(node, "clock-frequency",
&ctx->clk_frequency))
ctx->clk_frequency = FIMC_DEFAULT_LCLK_FREQUENCY;
ctx->id = of_alias_get_id(node, "fimc");
if (ctx->id < 0) {
dev_err(ctx->ippdrv.dev, "failed to get node alias id.\n");
return -EINVAL;
}
return 0;
}
static int fimc_probe(struct platform_device *pdev) static int fimc_probe(struct platform_device *pdev)
{ {
struct device *dev = &pdev->dev; struct device *dev = &pdev->dev;
struct fimc_context *ctx; struct fimc_context *ctx;
struct clk *parent_clk;
struct resource *res; struct resource *res;
struct exynos_drm_ippdrv *ippdrv; struct exynos_drm_ippdrv *ippdrv;
struct exynos_drm_fimc_pdata *pdata;
struct fimc_driverdata *ddata;
int ret; int ret;
pdata = pdev->dev.platform_data; if (!dev->of_node) {
if (!pdata) { dev_err(dev, "device tree node not found.\n");
dev_err(dev, "no platform data specified.\n"); return -ENODEV;
return -EINVAL;
} }
ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL); ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
if (!ctx) if (!ctx)
return -ENOMEM; return -ENOMEM;
ddata = (struct fimc_driverdata *) ctx->ippdrv.dev = dev;
platform_get_device_id(pdev)->driver_data;
/* clock control */
ctx->sclk_fimc_clk = devm_clk_get(dev, "sclk_fimc");
if (IS_ERR(ctx->sclk_fimc_clk)) {
dev_err(dev, "failed to get src fimc clock.\n");
return PTR_ERR(ctx->sclk_fimc_clk);
}
clk_enable(ctx->sclk_fimc_clk);
ctx->fimc_clk = devm_clk_get(dev, "fimc");
if (IS_ERR(ctx->fimc_clk)) {
dev_err(dev, "failed to get fimc clock.\n");
clk_disable(ctx->sclk_fimc_clk);
return PTR_ERR(ctx->fimc_clk);
}
ctx->wb_clk = devm_clk_get(dev, "pxl_async0");
if (IS_ERR(ctx->wb_clk)) {
dev_err(dev, "failed to get writeback a clock.\n");
clk_disable(ctx->sclk_fimc_clk);
return PTR_ERR(ctx->wb_clk);
}
ctx->wb_b_clk = devm_clk_get(dev, "pxl_async1");
if (IS_ERR(ctx->wb_b_clk)) {
dev_err(dev, "failed to get writeback b clock.\n");
clk_disable(ctx->sclk_fimc_clk);
return PTR_ERR(ctx->wb_b_clk);
}
parent_clk = devm_clk_get(dev, ddata->parent_clk); ret = fimc_parse_dt(ctx);
if (ret < 0)
if (IS_ERR(parent_clk)) { return ret;
dev_err(dev, "failed to get parent clock.\n");
clk_disable(ctx->sclk_fimc_clk);
return PTR_ERR(parent_clk);
}
if (clk_set_parent(ctx->sclk_fimc_clk, parent_clk)) { ctx->sysreg = syscon_regmap_lookup_by_phandle(dev->of_node,
dev_err(dev, "failed to set parent.\n"); "samsung,sysreg");
clk_disable(ctx->sclk_fimc_clk); if (IS_ERR(ctx->sysreg)) {
return -EINVAL; dev_err(dev, "syscon regmap lookup failed.\n");
return PTR_ERR(ctx->sysreg);
} }
devm_clk_put(dev, parent_clk);
clk_set_rate(ctx->sclk_fimc_clk, pdata->clk_rate);
/* resource memory */ /* resource memory */
ctx->regs_res = platform_get_resource(pdev, IORESOURCE_MEM, 0); ctx->regs_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
ctx->regs = devm_ioremap_resource(dev, ctx->regs_res); ctx->regs = devm_ioremap_resource(dev, ctx->regs_res);
...@@ -1804,13 +1852,11 @@ static int fimc_probe(struct platform_device *pdev) ...@@ -1804,13 +1852,11 @@ static int fimc_probe(struct platform_device *pdev)
return ret; return ret;
} }
/* context initailization */ ret = fimc_setup_clocks(ctx);
ctx->id = pdev->id; if (ret < 0)
ctx->pol = pdata->pol; goto err_free_irq;
ctx->ddata = ddata;
ippdrv = &ctx->ippdrv; ippdrv = &ctx->ippdrv;
ippdrv->dev = dev;
ippdrv->ops[EXYNOS_DRM_OPS_SRC] = &fimc_src_ops; ippdrv->ops[EXYNOS_DRM_OPS_SRC] = &fimc_src_ops;
ippdrv->ops[EXYNOS_DRM_OPS_DST] = &fimc_dst_ops; ippdrv->ops[EXYNOS_DRM_OPS_DST] = &fimc_dst_ops;
ippdrv->check_property = fimc_ippdrv_check_property; ippdrv->check_property = fimc_ippdrv_check_property;
...@@ -1820,7 +1866,7 @@ static int fimc_probe(struct platform_device *pdev) ...@@ -1820,7 +1866,7 @@ static int fimc_probe(struct platform_device *pdev)
ret = fimc_init_prop_list(ippdrv); ret = fimc_init_prop_list(ippdrv);
if (ret < 0) { if (ret < 0) {
dev_err(dev, "failed to init property list.\n"); dev_err(dev, "failed to init property list.\n");
goto err_get_irq; goto err_put_clk;
} }
DRM_DEBUG_KMS("%s:id[%d]ippdrv[0x%x]\n", __func__, ctx->id, DRM_DEBUG_KMS("%s:id[%d]ippdrv[0x%x]\n", __func__, ctx->id,
...@@ -1835,17 +1881,18 @@ static int fimc_probe(struct platform_device *pdev) ...@@ -1835,17 +1881,18 @@ static int fimc_probe(struct platform_device *pdev)
ret = exynos_drm_ippdrv_register(ippdrv); ret = exynos_drm_ippdrv_register(ippdrv);
if (ret < 0) { if (ret < 0) {
dev_err(dev, "failed to register drm fimc device.\n"); dev_err(dev, "failed to register drm fimc device.\n");
goto err_ippdrv_register; goto err_pm_dis;
} }
dev_info(&pdev->dev, "drm fimc registered successfully.\n"); dev_info(&pdev->dev, "drm fimc registered successfully.\n");
return 0; return 0;
err_ippdrv_register: err_pm_dis:
devm_kfree(dev, ippdrv->prop_list);
pm_runtime_disable(dev); pm_runtime_disable(dev);
err_get_irq: err_put_clk:
fimc_put_clocks(ctx);
err_free_irq:
free_irq(ctx->irq, ctx); free_irq(ctx->irq, ctx);
return ret; return ret;
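The error labels in fimc_probe() are reshuffled so every failure point unwinds exactly the steps that already succeeded, in reverse order: irq, then clocks, then runtime PM / registration. The general shape of that goto ladder, with hypothetical step_*/undo_* helpers standing in for the real calls:

static int probe_unwind_sketch(struct device *dev)
{
	int ret;

	ret = step_irq(dev);		/* e.g. request_threaded_irq()        */
	if (ret)
		return ret;

	ret = step_clocks(dev);		/* e.g. fimc_setup_clocks()           */
	if (ret)
		goto err_free_irq;

	ret = step_register(dev);	/* e.g. exynos_drm_ippdrv_register()  */
	if (ret)
		goto err_put_clk;

	return 0;

err_put_clk:
	undo_clocks(dev);		/* e.g. fimc_put_clocks()             */
err_free_irq:
	undo_irq(dev);			/* e.g. free_irq()                    */
	return ret;
}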
...@@ -1857,10 +1904,10 @@ static int fimc_remove(struct platform_device *pdev) ...@@ -1857,10 +1904,10 @@ static int fimc_remove(struct platform_device *pdev)
struct fimc_context *ctx = get_fimc_context(dev); struct fimc_context *ctx = get_fimc_context(dev);
struct exynos_drm_ippdrv *ippdrv = &ctx->ippdrv; struct exynos_drm_ippdrv *ippdrv = &ctx->ippdrv;
devm_kfree(dev, ippdrv->prop_list);
exynos_drm_ippdrv_unregister(ippdrv); exynos_drm_ippdrv_unregister(ippdrv);
mutex_destroy(&ctx->lock); mutex_destroy(&ctx->lock);
fimc_put_clocks(ctx);
pm_runtime_set_suspended(dev); pm_runtime_set_suspended(dev);
pm_runtime_disable(dev); pm_runtime_disable(dev);
...@@ -1915,36 +1962,22 @@ static int fimc_runtime_resume(struct device *dev) ...@@ -1915,36 +1962,22 @@ static int fimc_runtime_resume(struct device *dev)
} }
#endif #endif
static struct fimc_driverdata exynos4210_fimc_data = {
.parent_clk = "mout_mpll",
};
static struct fimc_driverdata exynos4410_fimc_data = {
.parent_clk = "mout_mpll_user",
};
static struct platform_device_id fimc_driver_ids[] = {
{
.name = "exynos4210-fimc",
.driver_data = (unsigned long)&exynos4210_fimc_data,
}, {
.name = "exynos4412-fimc",
.driver_data = (unsigned long)&exynos4410_fimc_data,
},
{},
};
MODULE_DEVICE_TABLE(platform, fimc_driver_ids);
static const struct dev_pm_ops fimc_pm_ops = { static const struct dev_pm_ops fimc_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(fimc_suspend, fimc_resume) SET_SYSTEM_SLEEP_PM_OPS(fimc_suspend, fimc_resume)
SET_RUNTIME_PM_OPS(fimc_runtime_suspend, fimc_runtime_resume, NULL) SET_RUNTIME_PM_OPS(fimc_runtime_suspend, fimc_runtime_resume, NULL)
}; };
static const struct of_device_id fimc_of_match[] = {
{ .compatible = "samsung,exynos4210-fimc" },
{ .compatible = "samsung,exynos4212-fimc" },
{ },
};
struct platform_driver fimc_driver = { struct platform_driver fimc_driver = {
.probe = fimc_probe, .probe = fimc_probe,
.remove = fimc_remove, .remove = fimc_remove,
.id_table = fimc_driver_ids,
.driver = { .driver = {
.of_match_table = fimc_of_match,
.name = "exynos-drm-fimc", .name = "exynos-drm-fimc",
.owner = THIS_MODULE, .owner = THIS_MODULE,
.pm = &fimc_pm_ops, .pm = &fimc_pm_ops,
......
...@@ -20,6 +20,7 @@ ...@@ -20,6 +20,7 @@
#include <linux/of_device.h> #include <linux/of_device.h>
#include <linux/pm_runtime.h> #include <linux/pm_runtime.h>
#include <video/of_display_timing.h>
#include <video/samsung_fimd.h> #include <video/samsung_fimd.h>
#include <drm/exynos_drm.h> #include <drm/exynos_drm.h>
...@@ -800,18 +801,18 @@ static int fimd_clock(struct fimd_context *ctx, bool enable) ...@@ -800,18 +801,18 @@ static int fimd_clock(struct fimd_context *ctx, bool enable)
if (enable) { if (enable) {
int ret; int ret;
ret = clk_enable(ctx->bus_clk); ret = clk_prepare_enable(ctx->bus_clk);
if (ret < 0) if (ret < 0)
return ret; return ret;
ret = clk_enable(ctx->lcd_clk); ret = clk_prepare_enable(ctx->lcd_clk);
if (ret < 0) { if (ret < 0) {
clk_disable(ctx->bus_clk); clk_disable_unprepare(ctx->bus_clk);
return ret; return ret;
} }
} else { } else {
clk_disable(ctx->lcd_clk); clk_disable_unprepare(ctx->lcd_clk);
clk_disable(ctx->bus_clk); clk_disable_unprepare(ctx->bus_clk);
} }
return 0; return 0;
...@@ -884,10 +885,25 @@ static int fimd_probe(struct platform_device *pdev) ...@@ -884,10 +885,25 @@ static int fimd_probe(struct platform_device *pdev)
DRM_DEBUG_KMS("%s\n", __FILE__); DRM_DEBUG_KMS("%s\n", __FILE__);
pdata = pdev->dev.platform_data; if (pdev->dev.of_node) {
if (!pdata) { pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
dev_err(dev, "no platform data specified\n"); if (!pdata) {
return -EINVAL; DRM_ERROR("memory allocation for pdata failed\n");
return -ENOMEM;
}
ret = of_get_fb_videomode(dev->of_node, &pdata->panel.timing,
OF_USE_NATIVE_MODE);
if (ret) {
DRM_ERROR("failed: of_get_fb_videomode() : %d\n", ret);
return ret;
}
} else {
pdata = pdev->dev.platform_data;
if (!pdata) {
DRM_ERROR("no platform data specified\n");
return -EINVAL;
}
} }
panel = &pdata->panel; panel = &pdata->panel;
...@@ -918,7 +934,7 @@ static int fimd_probe(struct platform_device *pdev) ...@@ -918,7 +934,7 @@ static int fimd_probe(struct platform_device *pdev)
if (IS_ERR(ctx->regs)) if (IS_ERR(ctx->regs))
return PTR_ERR(ctx->regs); return PTR_ERR(ctx->regs);
res = platform_get_resource(pdev, IORESOURCE_IRQ, 0); res = platform_get_resource_byname(pdev, IORESOURCE_IRQ, "vsync");
if (!res) { if (!res) {
dev_err(dev, "irq request failed.\n"); dev_err(dev, "irq request failed.\n");
return -ENXIO; return -ENXIO;
...@@ -980,9 +996,6 @@ static int fimd_remove(struct platform_device *pdev) ...@@ -980,9 +996,6 @@ static int fimd_remove(struct platform_device *pdev)
if (ctx->suspended) if (ctx->suspended)
goto out; goto out;
clk_disable(ctx->lcd_clk);
clk_disable(ctx->bus_clk);
pm_runtime_set_suspended(dev); pm_runtime_set_suspended(dev);
pm_runtime_put_sync(dev); pm_runtime_put_sync(dev);
......
...@@ -682,7 +682,8 @@ int exynos_drm_gem_dumb_create(struct drm_file *file_priv, ...@@ -682,7 +682,8 @@ int exynos_drm_gem_dumb_create(struct drm_file *file_priv,
args->pitch = args->width * ((args->bpp + 7) / 8); args->pitch = args->width * ((args->bpp + 7) / 8);
args->size = args->pitch * args->height; args->size = args->pitch * args->height;
exynos_gem_obj = exynos_drm_gem_create(dev, args->flags, args->size); exynos_gem_obj = exynos_drm_gem_create(dev, EXYNOS_BO_CONTIG |
EXYNOS_BO_WC, args->size);
if (IS_ERR(exynos_gem_obj)) if (IS_ERR(exynos_gem_obj))
return PTR_ERR(exynos_gem_obj); return PTR_ERR(exynos_gem_obj);
......
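For reference, the pitch/size arithmetic above with illustrative numbers (not taken from the patch): a 1920x1080 dumb buffer at 32 bpp, which after this change is always allocated contiguous and write-combined.

/*
 * pitch = 1920 * ((32 + 7) / 8) = 1920 * 4 = 7680 bytes per scanline
 * size  = 7680 * 1080           = 8294400 bytes (~7.9 MiB)
 * flags are forced to EXYNOS_BO_CONTIG | EXYNOS_BO_WC regardless of args->flags
 */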
...@@ -51,21 +51,27 @@ struct drm_hdmi_context { ...@@ -51,21 +51,27 @@ struct drm_hdmi_context {
int exynos_platform_device_hdmi_register(void) int exynos_platform_device_hdmi_register(void)
{ {
struct platform_device *pdev;
if (exynos_drm_hdmi_pdev) if (exynos_drm_hdmi_pdev)
return -EEXIST; return -EEXIST;
exynos_drm_hdmi_pdev = platform_device_register_simple( pdev = platform_device_register_simple(
"exynos-drm-hdmi", -1, NULL, 0); "exynos-drm-hdmi", -1, NULL, 0);
if (IS_ERR_OR_NULL(exynos_drm_hdmi_pdev)) if (IS_ERR(pdev))
return PTR_ERR(exynos_drm_hdmi_pdev); return PTR_ERR(pdev);
exynos_drm_hdmi_pdev = pdev;
return 0; return 0;
} }
void exynos_platform_device_hdmi_unregister(void) void exynos_platform_device_hdmi_unregister(void)
{ {
if (exynos_drm_hdmi_pdev) if (exynos_drm_hdmi_pdev) {
platform_device_unregister(exynos_drm_hdmi_pdev); platform_device_unregister(exynos_drm_hdmi_pdev);
exynos_drm_hdmi_pdev = NULL;
}
} }
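platform_device_register_simple() returns either a valid pointer or an ERR_PTR(), never NULL, so IS_ERR() is the correct test; with IS_ERR_OR_NULL() a NULL would have fallen through as PTR_ERR(NULL) == 0 and looked like success. The same reasoning drives the devm_clk_get()/clk_get() conversions elsewhere in this pull. A minimal sketch (the device name is made up):

struct platform_device *pdev;

pdev = platform_device_register_simple("example-dev", -1, NULL, 0);
if (IS_ERR(pdev))		/* the API never returns NULL */
	return PTR_ERR(pdev);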
void exynos_hdmi_drv_attach(struct exynos_drm_hdmi_context *ctx) void exynos_hdmi_drv_attach(struct exynos_drm_hdmi_context *ctx)
...@@ -205,13 +211,45 @@ static void drm_hdmi_mode_fixup(struct device *subdrv_dev, ...@@ -205,13 +211,45 @@ static void drm_hdmi_mode_fixup(struct device *subdrv_dev,
const struct drm_display_mode *mode, const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode) struct drm_display_mode *adjusted_mode)
{ {
struct drm_hdmi_context *ctx = to_context(subdrv_dev); struct drm_display_mode *m;
int mode_ok;
DRM_DEBUG_KMS("%s\n", __FILE__); DRM_DEBUG_KMS("%s\n", __FILE__);
if (hdmi_ops && hdmi_ops->mode_fixup) drm_mode_set_crtcinfo(adjusted_mode, 0);
hdmi_ops->mode_fixup(ctx->hdmi_ctx->ctx, connector, mode,
adjusted_mode); mode_ok = drm_hdmi_check_timing(subdrv_dev, adjusted_mode);
/* just return if user desired mode exists. */
if (mode_ok == 0)
return;
/*
* otherwise, find the most suitable mode among modes and change it
* to adjusted_mode.
*/
list_for_each_entry(m, &connector->modes, head) {
mode_ok = drm_hdmi_check_timing(subdrv_dev, m);
if (mode_ok == 0) {
struct drm_mode_object base;
struct list_head head;
DRM_INFO("desired mode doesn't exist so\n");
DRM_INFO("use the most suitable mode among modes.\n");
DRM_DEBUG_KMS("Adjusted Mode: [%d]x[%d] [%d]Hz\n",
m->hdisplay, m->vdisplay, m->vrefresh);
/* preserve display mode header while copying. */
head = adjusted_mode->head;
base = adjusted_mode->base;
memcpy(adjusted_mode, m, sizeof(*m));
adjusted_mode->head = head;
adjusted_mode->base = base;
break;
}
}
} }
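When the user's mode fails drm_hdmi_check_timing(), the new fixup walks connector->modes and copies the first mode that passes into adjusted_mode. Because struct drm_display_mode carries its own list linkage (head) and mode object (base), those two fields are saved and restored around the memcpy so the destination stays correctly linked. The pattern in isolation, as a sketch:

/* Copy src over dst without corrupting dst's own list/object identity. */
static void copy_mode_sketch(struct drm_display_mode *dst,
			     const struct drm_display_mode *src)
{
	struct list_head head = dst->head;
	struct drm_mode_object base = dst->base;

	memcpy(dst, src, sizeof(*dst));
	dst->head = head;
	dst->base = base;
}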
static void drm_hdmi_mode_set(struct device *subdrv_dev, void *mode) static void drm_hdmi_mode_set(struct device *subdrv_dev, void *mode)
......
...@@ -36,9 +36,6 @@ struct exynos_hdmi_ops { ...@@ -36,9 +36,6 @@ struct exynos_hdmi_ops {
int (*power_on)(void *ctx, int mode); int (*power_on)(void *ctx, int mode);
/* manager */ /* manager */
void (*mode_fixup)(void *ctx, struct drm_connector *connector,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode);
void (*mode_set)(void *ctx, void *mode); void (*mode_set)(void *ctx, void *mode);
void (*get_max_resol)(void *ctx, unsigned int *width, void (*get_max_resol)(void *ctx, unsigned int *width,
unsigned int *height); unsigned int *height);
......
...@@ -47,6 +47,9 @@ ...@@ -47,6 +47,9 @@
#define get_ipp_context(dev) platform_get_drvdata(to_platform_device(dev)) #define get_ipp_context(dev) platform_get_drvdata(to_platform_device(dev))
#define ipp_is_m2m_cmd(c) (c == IPP_CMD_M2M) #define ipp_is_m2m_cmd(c) (c == IPP_CMD_M2M)
/* platform device pointer for ipp device. */
static struct platform_device *exynos_drm_ipp_pdev;
/* /*
* A structure of event. * A structure of event.
* *
...@@ -102,6 +105,30 @@ static LIST_HEAD(exynos_drm_ippdrv_list); ...@@ -102,6 +105,30 @@ static LIST_HEAD(exynos_drm_ippdrv_list);
static DEFINE_MUTEX(exynos_drm_ippdrv_lock); static DEFINE_MUTEX(exynos_drm_ippdrv_lock);
static BLOCKING_NOTIFIER_HEAD(exynos_drm_ippnb_list); static BLOCKING_NOTIFIER_HEAD(exynos_drm_ippnb_list);
int exynos_platform_device_ipp_register(void)
{
struct platform_device *pdev;
if (exynos_drm_ipp_pdev)
return -EEXIST;
pdev = platform_device_register_simple("exynos-drm-ipp", -1, NULL, 0);
if (IS_ERR(pdev))
return PTR_ERR(pdev);
exynos_drm_ipp_pdev = pdev;
return 0;
}
void exynos_platform_device_ipp_unregister(void)
{
if (exynos_drm_ipp_pdev) {
platform_device_unregister(exynos_drm_ipp_pdev);
exynos_drm_ipp_pdev = NULL;
}
}
int exynos_drm_ippdrv_register(struct exynos_drm_ippdrv *ippdrv) int exynos_drm_ippdrv_register(struct exynos_drm_ippdrv *ippdrv)
{ {
DRM_DEBUG_KMS("%s\n", __func__); DRM_DEBUG_KMS("%s\n", __func__);
......
...@@ -674,7 +674,7 @@ static int rotator_probe(struct platform_device *pdev) ...@@ -674,7 +674,7 @@ static int rotator_probe(struct platform_device *pdev)
} }
rot->clock = devm_clk_get(dev, "rotator"); rot->clock = devm_clk_get(dev, "rotator");
if (IS_ERR_OR_NULL(rot->clock)) { if (IS_ERR(rot->clock)) {
dev_err(dev, "failed to get clock\n"); dev_err(dev, "failed to get clock\n");
ret = PTR_ERR(rot->clock); ret = PTR_ERR(rot->clock);
goto err_clk_get; goto err_clk_get;
......
...@@ -643,12 +643,14 @@ static void mixer_win_reset(struct mixer_context *ctx) ...@@ -643,12 +643,14 @@ static void mixer_win_reset(struct mixer_context *ctx)
/* setting graphical layers */ /* setting graphical layers */
val = MXR_GRP_CFG_COLOR_KEY_DISABLE; /* no blank key */ val = MXR_GRP_CFG_COLOR_KEY_DISABLE; /* no blank key */
val |= MXR_GRP_CFG_WIN_BLEND_EN; val |= MXR_GRP_CFG_WIN_BLEND_EN;
val |= MXR_GRP_CFG_BLEND_PRE_MUL;
val |= MXR_GRP_CFG_PIXEL_BLEND_EN;
val |= MXR_GRP_CFG_ALPHA_VAL(0xff); /* non-transparent alpha */ val |= MXR_GRP_CFG_ALPHA_VAL(0xff); /* non-transparent alpha */
/* the same configuration for both layers */ /* Don't blend layer 0 onto the mixer background */
mixer_reg_write(res, MXR_GRAPHIC_CFG(0), val); mixer_reg_write(res, MXR_GRAPHIC_CFG(0), val);
/* Blend layer 1 into layer 0 */
val |= MXR_GRP_CFG_BLEND_PRE_MUL;
val |= MXR_GRP_CFG_PIXEL_BLEND_EN;
mixer_reg_write(res, MXR_GRAPHIC_CFG(1), val); mixer_reg_write(res, MXR_GRAPHIC_CFG(1), val);
/* setting video layers */ /* setting video layers */
...@@ -820,7 +822,6 @@ static void mixer_win_disable(void *ctx, int win) ...@@ -820,7 +822,6 @@ static void mixer_win_disable(void *ctx, int win)
static int mixer_check_timing(void *ctx, struct fb_videomode *timing) static int mixer_check_timing(void *ctx, struct fb_videomode *timing)
{ {
struct mixer_context *mixer_ctx = ctx;
u32 w, h; u32 w, h;
w = timing->xres; w = timing->xres;
...@@ -831,9 +832,6 @@ static int mixer_check_timing(void *ctx, struct fb_videomode *timing) ...@@ -831,9 +832,6 @@ static int mixer_check_timing(void *ctx, struct fb_videomode *timing)
timing->refresh, (timing->vmode & timing->refresh, (timing->vmode &
FB_VMODE_INTERLACED) ? true : false); FB_VMODE_INTERLACED) ? true : false);
if (mixer_ctx->mxr_ver == MXR_VER_0_0_0_16)
return 0;
if ((w >= 464 && w <= 720 && h >= 261 && h <= 576) || if ((w >= 464 && w <= 720 && h >= 261 && h <= 576) ||
(w >= 1024 && w <= 1280 && h >= 576 && h <= 720) || (w >= 1024 && w <= 1280 && h >= 576 && h <= 720) ||
(w >= 1664 && w <= 1920 && h >= 936 && h <= 1080)) (w >= 1664 && w <= 1920 && h >= 936 && h <= 1080))
...@@ -1047,13 +1045,13 @@ static int mixer_resources_init(struct exynos_drm_hdmi_context *ctx, ...@@ -1047,13 +1045,13 @@ static int mixer_resources_init(struct exynos_drm_hdmi_context *ctx,
spin_lock_init(&mixer_res->reg_slock); spin_lock_init(&mixer_res->reg_slock);
mixer_res->mixer = devm_clk_get(dev, "mixer"); mixer_res->mixer = devm_clk_get(dev, "mixer");
if (IS_ERR_OR_NULL(mixer_res->mixer)) { if (IS_ERR(mixer_res->mixer)) {
dev_err(dev, "failed to get clock 'mixer'\n"); dev_err(dev, "failed to get clock 'mixer'\n");
return -ENODEV; return -ENODEV;
} }
mixer_res->sclk_hdmi = devm_clk_get(dev, "sclk_hdmi"); mixer_res->sclk_hdmi = devm_clk_get(dev, "sclk_hdmi");
if (IS_ERR_OR_NULL(mixer_res->sclk_hdmi)) { if (IS_ERR(mixer_res->sclk_hdmi)) {
dev_err(dev, "failed to get clock 'sclk_hdmi'\n"); dev_err(dev, "failed to get clock 'sclk_hdmi'\n");
return -ENODEV; return -ENODEV;
} }
...@@ -1096,17 +1094,17 @@ static int vp_resources_init(struct exynos_drm_hdmi_context *ctx, ...@@ -1096,17 +1094,17 @@ static int vp_resources_init(struct exynos_drm_hdmi_context *ctx,
struct resource *res; struct resource *res;
mixer_res->vp = devm_clk_get(dev, "vp"); mixer_res->vp = devm_clk_get(dev, "vp");
if (IS_ERR_OR_NULL(mixer_res->vp)) { if (IS_ERR(mixer_res->vp)) {
dev_err(dev, "failed to get clock 'vp'\n"); dev_err(dev, "failed to get clock 'vp'\n");
return -ENODEV; return -ENODEV;
} }
mixer_res->sclk_mixer = devm_clk_get(dev, "sclk_mixer"); mixer_res->sclk_mixer = devm_clk_get(dev, "sclk_mixer");
if (IS_ERR_OR_NULL(mixer_res->sclk_mixer)) { if (IS_ERR(mixer_res->sclk_mixer)) {
dev_err(dev, "failed to get clock 'sclk_mixer'\n"); dev_err(dev, "failed to get clock 'sclk_mixer'\n");
return -ENODEV; return -ENODEV;
} }
mixer_res->sclk_dac = devm_clk_get(dev, "sclk_dac"); mixer_res->sclk_dac = devm_clk_get(dev, "sclk_dac");
if (IS_ERR_OR_NULL(mixer_res->sclk_dac)) { if (IS_ERR(mixer_res->sclk_dac)) {
dev_err(dev, "failed to get clock 'sclk_dac'\n"); dev_err(dev, "failed to get clock 'sclk_dac'\n");
return -ENODEV; return -ENODEV;
} }
......
...@@ -661,9 +661,8 @@ ...@@ -661,9 +661,8 @@
#define EXYNOS_CLKSRC_SCLK (1 << 1) #define EXYNOS_CLKSRC_SCLK (1 << 1)
/* SYSREG for FIMC writeback */ /* SYSREG for FIMC writeback */
#define SYSREG_CAMERA_BLK (S3C_VA_SYS + 0x0218) #define SYSREG_CAMERA_BLK (0x0218)
#define SYSREG_ISP_BLK (S3C_VA_SYS + 0x020c) #define SYSREG_FIMD0WB_DEST_MASK (0x3 << 23)
#define SYSREG_FIMD0WB_DEST_MASK (0x3 << 23) #define SYSREG_FIMD0WB_DEST_SHIFT 23
#define SYSREG_FIMD0WB_DEST_SHIFT 23
#endif /* EXYNOS_REGS_FIMC_H */ #endif /* EXYNOS_REGS_FIMC_H */
...@@ -2,10 +2,15 @@ config DRM_GMA500 ...@@ -2,10 +2,15 @@ config DRM_GMA500
tristate "Intel GMA5/600 KMS Framebuffer" tristate "Intel GMA5/600 KMS Framebuffer"
depends on DRM && PCI && X86 depends on DRM && PCI && X86
select FB_CFB_COPYAREA select FB_CFB_COPYAREA
select FB_CFB_FILLRECT select FB_CFB_FILLRECT
select FB_CFB_IMAGEBLIT select FB_CFB_IMAGEBLIT
select DRM_KMS_HELPER select DRM_KMS_HELPER
select DRM_TTM select DRM_TTM
# GMA500 depends on ACPI_VIDEO when ACPI is enabled, just like i915
select ACPI_VIDEO if ACPI
select BACKLIGHT_CLASS_DEVICE if ACPI
select VIDEO_OUTPUT_CONTROL if ACPI
select INPUT if ACPI
help help
Say yes for an experimental 2D KMS framebuffer driver for the Say yes for an experimental 2D KMS framebuffer driver for the
Intel GMA500 ('Poulsbo') and other Intel IMG based graphics Intel GMA500 ('Poulsbo') and other Intel IMG based graphics
......
...@@ -276,6 +276,7 @@ void cdv_intel_crt_init(struct drm_device *dev, ...@@ -276,6 +276,7 @@ void cdv_intel_crt_init(struct drm_device *dev,
goto failed_connector; goto failed_connector;
connector = &psb_intel_connector->base; connector = &psb_intel_connector->base;
connector->polled = DRM_CONNECTOR_POLL_HPD;
drm_connector_init(dev, connector, drm_connector_init(dev, connector,
&cdv_intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA); &cdv_intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA);
......
...@@ -319,6 +319,7 @@ void cdv_hdmi_init(struct drm_device *dev, ...@@ -319,6 +319,7 @@ void cdv_hdmi_init(struct drm_device *dev,
goto err_priv; goto err_priv;
connector = &psb_intel_connector->base; connector = &psb_intel_connector->base;
connector->polled = DRM_CONNECTOR_POLL_HPD;
encoder = &psb_intel_encoder->base; encoder = &psb_intel_encoder->base;
drm_connector_init(dev, connector, drm_connector_init(dev, connector,
&cdv_hdmi_connector_funcs, &cdv_hdmi_connector_funcs,
......
...@@ -431,7 +431,7 @@ static int psbfb_create(struct psb_fbdev *fbdev, ...@@ -431,7 +431,7 @@ static int psbfb_create(struct psb_fbdev *fbdev,
fbdev->psb_fb_helper.fbdev = info; fbdev->psb_fb_helper.fbdev = info;
drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth); drm_fb_helper_fill_fix(info, fb->pitches[0], fb->depth);
strcpy(info->fix.id, "psbfb"); strcpy(info->fix.id, "psbdrmfb");
info->flags = FBINFO_DEFAULT; info->flags = FBINFO_DEFAULT;
if (dev_priv->ops->accel_2d && pitch_lines > 8) /* 2D engine */ if (dev_priv->ops->accel_2d && pitch_lines > 8) /* 2D engine */
...@@ -772,8 +772,8 @@ void psb_modeset_init(struct drm_device *dev) ...@@ -772,8 +772,8 @@ void psb_modeset_init(struct drm_device *dev)
for (i = 0; i < dev_priv->num_pipe; i++) for (i = 0; i < dev_priv->num_pipe; i++)
psb_intel_crtc_init(dev, i, mode_dev); psb_intel_crtc_init(dev, i, mode_dev);
dev->mode_config.max_width = 2048; dev->mode_config.max_width = 4096;
dev->mode_config.max_height = 2048; dev->mode_config.max_height = 4096;
psb_setup_outputs(dev); psb_setup_outputs(dev);
......
...@@ -80,7 +80,8 @@ static u32 __iomem *psb_gtt_entry(struct drm_device *dev, struct gtt_range *r) ...@@ -80,7 +80,8 @@ static u32 __iomem *psb_gtt_entry(struct drm_device *dev, struct gtt_range *r)
* the GTT. This is protected via the gtt mutex which the caller * the GTT. This is protected via the gtt mutex which the caller
* must hold. * must hold.
*/ */
static int psb_gtt_insert(struct drm_device *dev, struct gtt_range *r) static int psb_gtt_insert(struct drm_device *dev, struct gtt_range *r,
int resume)
{ {
u32 __iomem *gtt_slot; u32 __iomem *gtt_slot;
u32 pte; u32 pte;
...@@ -97,8 +98,10 @@ static int psb_gtt_insert(struct drm_device *dev, struct gtt_range *r) ...@@ -97,8 +98,10 @@ static int psb_gtt_insert(struct drm_device *dev, struct gtt_range *r)
gtt_slot = psb_gtt_entry(dev, r); gtt_slot = psb_gtt_entry(dev, r);
pages = r->pages; pages = r->pages;
/* Make sure changes are visible to the GPU */ if (!resume) {
set_pages_array_wc(pages, r->npage); /* Make sure changes are visible to the GPU */
set_pages_array_wc(pages, r->npage);
}
/* Write our page entries into the GTT itself */ /* Write our page entries into the GTT itself */
for (i = r->roll; i < r->npage; i++) { for (i = r->roll; i < r->npage; i++) {
...@@ -269,7 +272,7 @@ int psb_gtt_pin(struct gtt_range *gt) ...@@ -269,7 +272,7 @@ int psb_gtt_pin(struct gtt_range *gt)
ret = psb_gtt_attach_pages(gt); ret = psb_gtt_attach_pages(gt);
if (ret < 0) if (ret < 0)
goto out; goto out;
ret = psb_gtt_insert(dev, gt); ret = psb_gtt_insert(dev, gt, 0);
if (ret < 0) { if (ret < 0) {
psb_gtt_detach_pages(gt); psb_gtt_detach_pages(gt);
goto out; goto out;
...@@ -421,9 +424,11 @@ int psb_gtt_init(struct drm_device *dev, int resume) ...@@ -421,9 +424,11 @@ int psb_gtt_init(struct drm_device *dev, int resume)
int ret = 0; int ret = 0;
uint32_t pte; uint32_t pte;
mutex_init(&dev_priv->gtt_mutex); if (!resume) {
mutex_init(&dev_priv->gtt_mutex);
psb_gtt_alloc(dev);
}
psb_gtt_alloc(dev);
pg = &dev_priv->gtt; pg = &dev_priv->gtt;
/* Enable the GTT */ /* Enable the GTT */
...@@ -505,7 +510,8 @@ int psb_gtt_init(struct drm_device *dev, int resume) ...@@ -505,7 +510,8 @@ int psb_gtt_init(struct drm_device *dev, int resume)
/* /*
* Map the GTT and the stolen memory area * Map the GTT and the stolen memory area
*/ */
dev_priv->gtt_map = ioremap_nocache(pg->gtt_phys_start, if (!resume)
dev_priv->gtt_map = ioremap_nocache(pg->gtt_phys_start,
gtt_pages << PAGE_SHIFT); gtt_pages << PAGE_SHIFT);
if (!dev_priv->gtt_map) { if (!dev_priv->gtt_map) {
dev_err(dev->dev, "Failure to map gtt.\n"); dev_err(dev->dev, "Failure to map gtt.\n");
...@@ -513,7 +519,9 @@ int psb_gtt_init(struct drm_device *dev, int resume) ...@@ -513,7 +519,9 @@ int psb_gtt_init(struct drm_device *dev, int resume)
goto out_err; goto out_err;
} }
dev_priv->vram_addr = ioremap_wc(dev_priv->stolen_base, stolen_size); if (!resume)
dev_priv->vram_addr = ioremap_wc(dev_priv->stolen_base,
stolen_size);
if (!dev_priv->vram_addr) { if (!dev_priv->vram_addr) {
dev_err(dev->dev, "Failure to map stolen base.\n"); dev_err(dev->dev, "Failure to map stolen base.\n");
ret = -ENOMEM; ret = -ENOMEM;
...@@ -549,3 +557,31 @@ int psb_gtt_init(struct drm_device *dev, int resume) ...@@ -549,3 +557,31 @@ int psb_gtt_init(struct drm_device *dev, int resume)
psb_gtt_takedown(dev); psb_gtt_takedown(dev);
return ret; return ret;
} }
int psb_gtt_restore(struct drm_device *dev)
{
struct drm_psb_private *dev_priv = dev->dev_private;
struct resource *r = dev_priv->gtt_mem->child;
struct gtt_range *range;
unsigned int restored = 0, total = 0, size = 0;
/* On resume, the gtt_mutex is already initialized */
mutex_lock(&dev_priv->gtt_mutex);
psb_gtt_init(dev, 1);
while (r != NULL) {
range = container_of(r, struct gtt_range, resource);
if (range->pages) {
psb_gtt_insert(dev, range, 1);
size += range->resource.end - range->resource.start;
restored++;
}
r = r->sibling;
total++;
}
mutex_unlock(&dev_priv->gtt_mutex);
DRM_DEBUG_DRIVER("Restored %u of %u gtt ranges (%u KB)", restored,
total, (size / 1024));
return 0;
}
...@@ -60,5 +60,5 @@ extern int psb_gtt_pin(struct gtt_range *gt); ...@@ -60,5 +60,5 @@ extern int psb_gtt_pin(struct gtt_range *gt);
extern void psb_gtt_unpin(struct gtt_range *gt); extern void psb_gtt_unpin(struct gtt_range *gt);
extern void psb_gtt_roll(struct drm_device *dev, extern void psb_gtt_roll(struct drm_device *dev,
struct gtt_range *gt, int roll); struct gtt_range *gt, int roll);
extern int psb_gtt_restore(struct drm_device *dev);
#endif #endif
...@@ -218,12 +218,11 @@ static void parse_backlight_data(struct drm_psb_private *dev_priv, ...@@ -218,12 +218,11 @@ static void parse_backlight_data(struct drm_psb_private *dev_priv,
bl_start = find_section(bdb, BDB_LVDS_BACKLIGHT); bl_start = find_section(bdb, BDB_LVDS_BACKLIGHT);
vbt_lvds_bl = (struct bdb_lvds_backlight *)(bl_start + 1) + p_type; vbt_lvds_bl = (struct bdb_lvds_backlight *)(bl_start + 1) + p_type;
lvds_bl = kzalloc(sizeof(*vbt_lvds_bl), GFP_KERNEL); lvds_bl = kmemdup(vbt_lvds_bl, sizeof(*vbt_lvds_bl), GFP_KERNEL);
if (!lvds_bl) { if (!lvds_bl) {
dev_err(dev_priv->dev->dev, "out of memory for backlight data\n"); dev_err(dev_priv->dev->dev, "out of memory for backlight data\n");
return; return;
} }
memcpy(lvds_bl, vbt_lvds_bl, sizeof(*vbt_lvds_bl));
dev_priv->lvds_bl = lvds_bl; dev_priv->lvds_bl = lvds_bl;
} }
......
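kmemdup(src, len, gfp) allocates len bytes and copies src into them in one step, so the kzalloc() + memcpy() pair collapses into a single call; the zeroing was redundant because every byte is overwritten anyway. Approximately what the helper does (a sketch, not the kernel source):

static void *kmemdup_sketch(const void *src, size_t len, gfp_t gfp)
{
	void *p = kmalloc(len, gfp);

	if (p)
		memcpy(p, src, len);
	return p;
}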
...@@ -19,8 +19,8 @@ ...@@ -19,8 +19,8 @@
* *
*/ */
#ifndef _I830_BIOS_H_ #ifndef _INTEL_BIOS_H_
#define _I830_BIOS_H_ #define _INTEL_BIOS_H_
#include <drm/drmP.h> #include <drm/drmP.h>
#include <drm/drm_dp_helper.h> #include <drm/drm_dp_helper.h>
...@@ -618,4 +618,4 @@ extern void psb_intel_destroy_bios(struct drm_device *dev); ...@@ -618,4 +618,4 @@ extern void psb_intel_destroy_bios(struct drm_device *dev);
#define PORT_IDPC 8 #define PORT_IDPC 8
#define PORT_IDPD 9 #define PORT_IDPD 9
#endif /* _I830_BIOS_H_ */ #endif /* _INTEL_BIOS_H_ */
...@@ -92,8 +92,8 @@ void mdfld_dsi_brightness_init(struct mdfld_dsi_config *dsi_config, int pipe) ...@@ -92,8 +92,8 @@ void mdfld_dsi_brightness_init(struct mdfld_dsi_config *dsi_config, int pipe)
{ {
struct mdfld_dsi_pkg_sender *sender = struct mdfld_dsi_pkg_sender *sender =
mdfld_dsi_get_pkg_sender(dsi_config); mdfld_dsi_get_pkg_sender(dsi_config);
struct drm_device *dev = sender->dev; struct drm_device *dev;
struct drm_psb_private *dev_priv = dev->dev_private; struct drm_psb_private *dev_priv;
u32 gen_ctrl_val; u32 gen_ctrl_val;
if (!sender) { if (!sender) {
...@@ -101,6 +101,9 @@ void mdfld_dsi_brightness_init(struct mdfld_dsi_config *dsi_config, int pipe) ...@@ -101,6 +101,9 @@ void mdfld_dsi_brightness_init(struct mdfld_dsi_config *dsi_config, int pipe)
return; return;
} }
dev = sender->dev;
dev_priv = dev->dev_private;
/* Set default display backlight value to 85% (0xd8)*/ /* Set default display backlight value to 85% (0xd8)*/
mdfld_dsi_send_mcs_short(sender, write_display_brightness, 0xd8, 1, mdfld_dsi_send_mcs_short(sender, write_display_brightness, 0xd8, 1,
true); true);
......
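The hunk above fixes a use-before-check: sender->dev was read in the initializers before the !sender test, so a NULL sender would already have oopsed by the time the check ran. Moving the dereferences below the check restores its purpose. The bug class in miniature, with a hypothetical struct foo and get_foo() provider:

struct foo { int bar; };
struct foo *get_foo(void);	/* hypothetical provider */

static int broken(void)
{
	struct foo *f = get_foo();
	int bar = f->bar;	/* dereference happens here ...          */

	if (!f)			/* ... so this check can no longer help  */
		return -EINVAL;
	return bar;
}

static int fixed(void)
{
	struct foo *f = get_foo();

	if (!f)
		return -EINVAL;
	return f->bar;		/* dereference only after the NULL check */
}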
...@@ -110,6 +110,8 @@ static void gma_resume_display(struct pci_dev *pdev) ...@@ -110,6 +110,8 @@ static void gma_resume_display(struct pci_dev *pdev)
PSB_WVDC32(dev_priv->pge_ctl | _PSB_PGETBL_ENABLED, PSB_PGETBL_CTL); PSB_WVDC32(dev_priv->pge_ctl | _PSB_PGETBL_ENABLED, PSB_PGETBL_CTL);
pci_write_config_word(pdev, PSB_GMCH_CTRL, pci_write_config_word(pdev, PSB_GMCH_CTRL,
dev_priv->gmch_ctrl | _PSB_GMCH_ENABLED); dev_priv->gmch_ctrl | _PSB_GMCH_ENABLED);
psb_gtt_restore(dev); /* Rebuild our GTT mappings */
dev_priv->ops->restore_regs(dev); dev_priv->ops->restore_regs(dev);
} }
...@@ -313,3 +315,18 @@ int psb_runtime_idle(struct device *dev) ...@@ -313,3 +315,18 @@ int psb_runtime_idle(struct device *dev)
else else
return 1; return 1;
} }
int gma_power_thaw(struct device *_dev)
{
return gma_power_resume(_dev);
}
int gma_power_freeze(struct device *_dev)
{
return gma_power_suspend(_dev);
}
int gma_power_restore(struct device *_dev)
{
return gma_power_resume(_dev);
}
...@@ -41,6 +41,9 @@ void gma_power_uninit(struct drm_device *dev); ...@@ -41,6 +41,9 @@ void gma_power_uninit(struct drm_device *dev);
*/ */
int gma_power_suspend(struct device *dev); int gma_power_suspend(struct device *dev);
int gma_power_resume(struct device *dev); int gma_power_resume(struct device *dev);
int gma_power_thaw(struct device *dev);
int gma_power_freeze(struct device *dev);
int gma_power_restore(struct device *_dev);
/* /*
* These are the functions the driver should use to wrap all hw access * These are the functions the driver should use to wrap all hw access
......
...@@ -601,6 +601,9 @@ static void psb_remove(struct pci_dev *pdev) ...@@ -601,6 +601,9 @@ static void psb_remove(struct pci_dev *pdev)
static const struct dev_pm_ops psb_pm_ops = { static const struct dev_pm_ops psb_pm_ops = {
.resume = gma_power_resume, .resume = gma_power_resume,
.suspend = gma_power_suspend, .suspend = gma_power_suspend,
.thaw = gma_power_thaw,
.freeze = gma_power_freeze,
.restore = gma_power_restore,
.runtime_suspend = psb_runtime_suspend, .runtime_suspend = psb_runtime_suspend,
.runtime_resume = psb_runtime_resume, .runtime_resume = psb_runtime_resume,
.runtime_idle = psb_runtime_idle, .runtime_idle = psb_runtime_idle,
......
...@@ -876,7 +876,6 @@ extern const struct psb_ops cdv_chip_ops; ...@@ -876,7 +876,6 @@ extern const struct psb_ops cdv_chip_ops;
#define PSB_D_MSVDX (1 << 9) #define PSB_D_MSVDX (1 << 9)
#define PSB_D_TOPAZ (1 << 10) #define PSB_D_TOPAZ (1 << 10)
extern int drm_psb_no_fb;
extern int drm_idle_check_interval; extern int drm_idle_check_interval;
/* /*
......
...@@ -50,119 +50,41 @@ struct psb_intel_p2_t { ...@@ -50,119 +50,41 @@ struct psb_intel_p2_t {
int p2_slow, p2_fast; int p2_slow, p2_fast;
}; };
#define INTEL_P2_NUM 2
struct psb_intel_limit_t { struct psb_intel_limit_t {
struct psb_intel_range_t dot, vco, n, m, m1, m2, p, p1; struct psb_intel_range_t dot, vco, n, m, m1, m2, p, p1;
struct psb_intel_p2_t p2; struct psb_intel_p2_t p2;
}; };
#define I8XX_DOT_MIN 25000 #define INTEL_LIMIT_I9XX_SDVO_DAC 0
#define I8XX_DOT_MAX 350000 #define INTEL_LIMIT_I9XX_LVDS 1
#define I8XX_VCO_MIN 930000
#define I8XX_VCO_MAX 1400000
#define I8XX_N_MIN 3
#define I8XX_N_MAX 16
#define I8XX_M_MIN 96
#define I8XX_M_MAX 140
#define I8XX_M1_MIN 18
#define I8XX_M1_MAX 26
#define I8XX_M2_MIN 6
#define I8XX_M2_MAX 16
#define I8XX_P_MIN 4
#define I8XX_P_MAX 128
#define I8XX_P1_MIN 2
#define I8XX_P1_MAX 33
#define I8XX_P1_LVDS_MIN 1
#define I8XX_P1_LVDS_MAX 6
#define I8XX_P2_SLOW 4
#define I8XX_P2_FAST 2
#define I8XX_P2_LVDS_SLOW 14
#define I8XX_P2_LVDS_FAST 14 /* No fast option */
#define I8XX_P2_SLOW_LIMIT 165000
#define I9XX_DOT_MIN 20000
#define I9XX_DOT_MAX 400000
#define I9XX_VCO_MIN 1400000
#define I9XX_VCO_MAX 2800000
#define I9XX_N_MIN 1
#define I9XX_N_MAX 6
#define I9XX_M_MIN 70
#define I9XX_M_MAX 120
#define I9XX_M1_MIN 8
#define I9XX_M1_MAX 18
#define I9XX_M2_MIN 3
#define I9XX_M2_MAX 7
#define I9XX_P_SDVO_DAC_MIN 5
#define I9XX_P_SDVO_DAC_MAX 80
#define I9XX_P_LVDS_MIN 7
#define I9XX_P_LVDS_MAX 98
#define I9XX_P1_MIN 1
#define I9XX_P1_MAX 8
#define I9XX_P2_SDVO_DAC_SLOW 10
#define I9XX_P2_SDVO_DAC_FAST 5
#define I9XX_P2_SDVO_DAC_SLOW_LIMIT 200000
#define I9XX_P2_LVDS_SLOW 14
#define I9XX_P2_LVDS_FAST 7
#define I9XX_P2_LVDS_SLOW_LIMIT 112000
#define INTEL_LIMIT_I8XX_DVO_DAC 0
#define INTEL_LIMIT_I8XX_LVDS 1
#define INTEL_LIMIT_I9XX_SDVO_DAC 2
#define INTEL_LIMIT_I9XX_LVDS 3
static const struct psb_intel_limit_t psb_intel_limits[] = { static const struct psb_intel_limit_t psb_intel_limits[] = {
{ /* INTEL_LIMIT_I8XX_DVO_DAC */
.dot = {.min = I8XX_DOT_MIN, .max = I8XX_DOT_MAX},
.vco = {.min = I8XX_VCO_MIN, .max = I8XX_VCO_MAX},
.n = {.min = I8XX_N_MIN, .max = I8XX_N_MAX},
.m = {.min = I8XX_M_MIN, .max = I8XX_M_MAX},
.m1 = {.min = I8XX_M1_MIN, .max = I8XX_M1_MAX},
.m2 = {.min = I8XX_M2_MIN, .max = I8XX_M2_MAX},
.p = {.min = I8XX_P_MIN, .max = I8XX_P_MAX},
.p1 = {.min = I8XX_P1_MIN, .max = I8XX_P1_MAX},
.p2 = {.dot_limit = I8XX_P2_SLOW_LIMIT,
.p2_slow = I8XX_P2_SLOW, .p2_fast = I8XX_P2_FAST},
},
{ /* INTEL_LIMIT_I8XX_LVDS */
.dot = {.min = I8XX_DOT_MIN, .max = I8XX_DOT_MAX},
.vco = {.min = I8XX_VCO_MIN, .max = I8XX_VCO_MAX},
.n = {.min = I8XX_N_MIN, .max = I8XX_N_MAX},
.m = {.min = I8XX_M_MIN, .max = I8XX_M_MAX},
.m1 = {.min = I8XX_M1_MIN, .max = I8XX_M1_MAX},
.m2 = {.min = I8XX_M2_MIN, .max = I8XX_M2_MAX},
.p = {.min = I8XX_P_MIN, .max = I8XX_P_MAX},
.p1 = {.min = I8XX_P1_LVDS_MIN, .max = I8XX_P1_LVDS_MAX},
.p2 = {.dot_limit = I8XX_P2_SLOW_LIMIT,
.p2_slow = I8XX_P2_LVDS_SLOW, .p2_fast = I8XX_P2_LVDS_FAST},
},
{ /* INTEL_LIMIT_I9XX_SDVO_DAC */ { /* INTEL_LIMIT_I9XX_SDVO_DAC */
.dot = {.min = I9XX_DOT_MIN, .max = I9XX_DOT_MAX}, .dot = {.min = 20000, .max = 400000},
.vco = {.min = I9XX_VCO_MIN, .max = I9XX_VCO_MAX}, .vco = {.min = 1400000, .max = 2800000},
.n = {.min = I9XX_N_MIN, .max = I9XX_N_MAX}, .n = {.min = 1, .max = 6},
.m = {.min = I9XX_M_MIN, .max = I9XX_M_MAX}, .m = {.min = 70, .max = 120},
.m1 = {.min = I9XX_M1_MIN, .max = I9XX_M1_MAX}, .m1 = {.min = 8, .max = 18},
.m2 = {.min = I9XX_M2_MIN, .max = I9XX_M2_MAX}, .m2 = {.min = 3, .max = 7},
.p = {.min = I9XX_P_SDVO_DAC_MIN, .max = I9XX_P_SDVO_DAC_MAX}, .p = {.min = 5, .max = 80},
.p1 = {.min = I9XX_P1_MIN, .max = I9XX_P1_MAX}, .p1 = {.min = 1, .max = 8},
.p2 = {.dot_limit = I9XX_P2_SDVO_DAC_SLOW_LIMIT, .p2 = {.dot_limit = 200000,
.p2_slow = I9XX_P2_SDVO_DAC_SLOW, .p2_fast = .p2_slow = 10, .p2_fast = 5},
I9XX_P2_SDVO_DAC_FAST},
}, },
{ /* INTEL_LIMIT_I9XX_LVDS */ { /* INTEL_LIMIT_I9XX_LVDS */
.dot = {.min = I9XX_DOT_MIN, .max = I9XX_DOT_MAX}, .dot = {.min = 20000, .max = 400000},
.vco = {.min = I9XX_VCO_MIN, .max = I9XX_VCO_MAX}, .vco = {.min = 1400000, .max = 2800000},
.n = {.min = I9XX_N_MIN, .max = I9XX_N_MAX}, .n = {.min = 1, .max = 6},
.m = {.min = I9XX_M_MIN, .max = I9XX_M_MAX}, .m = {.min = 70, .max = 120},
.m1 = {.min = I9XX_M1_MIN, .max = I9XX_M1_MAX}, .m1 = {.min = 8, .max = 18},
.m2 = {.min = I9XX_M2_MIN, .max = I9XX_M2_MAX}, .m2 = {.min = 3, .max = 7},
.p = {.min = I9XX_P_LVDS_MIN, .max = I9XX_P_LVDS_MAX}, .p = {.min = 7, .max = 98},
.p1 = {.min = I9XX_P1_MIN, .max = I9XX_P1_MAX}, .p1 = {.min = 1, .max = 8},
/* The single-channel range is 25-112Mhz, and dual-channel /* The single-channel range is 25-112Mhz, and dual-channel
* is 80-224Mhz. Prefer single channel as much as possible. * is 80-224Mhz. Prefer single channel as much as possible.
*/ */
.p2 = {.dot_limit = I9XX_P2_LVDS_SLOW_LIMIT, .p2 = {.dot_limit = 112000,
.p2_slow = I9XX_P2_LVDS_SLOW, .p2_fast = I9XX_P2_LVDS_FAST}, .p2_slow = 14, .p2_fast = 7},
}, },
}; };
...@@ -177,9 +99,7 @@ static const struct psb_intel_limit_t *psb_intel_limit(struct drm_crtc *crtc) ...@@ -177,9 +99,7 @@ static const struct psb_intel_limit_t *psb_intel_limit(struct drm_crtc *crtc)
return limit; return limit;
} }
/** Derive the pixel clock for the given refclk and divisors for 8xx chips. */ static void psb_intel_clock(int refclk, struct psb_intel_clock_t *clock)
static void i8xx_clock(int refclk, struct psb_intel_clock_t *clock)
{ {
clock->m = 5 * (clock->m1 + 2) + (clock->m2 + 2); clock->m = 5 * (clock->m1 + 2) + (clock->m2 + 2);
clock->p = clock->p1 * clock->p2; clock->p = clock->p1 * clock->p2;
...@@ -187,22 +107,6 @@ static void i8xx_clock(int refclk, struct psb_intel_clock_t *clock) ...@@ -187,22 +107,6 @@ static void i8xx_clock(int refclk, struct psb_intel_clock_t *clock)
clock->dot = clock->vco / clock->p; clock->dot = clock->vco / clock->p;
} }
/** Derive the pixel clock for the given refclk and divisors for 9xx chips. */
static void i9xx_clock(int refclk, struct psb_intel_clock_t *clock)
{
clock->m = 5 * (clock->m1 + 2) + (clock->m2 + 2);
clock->p = clock->p1 * clock->p2;
clock->vco = refclk * clock->m / (clock->n + 2);
clock->dot = clock->vco / clock->p;
}
static void psb_intel_clock(struct drm_device *dev, int refclk,
struct psb_intel_clock_t *clock)
{
return i9xx_clock(refclk, clock);
}
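The divisor maths kept in psb_intel_clock() (and visible in the removed i9xx_clock() copy above) is m = 5*(m1 + 2) + (m2 + 2), vco = refclk * m / (n + 2), p = p1 * p2 and dot = vco / p. A worked example with hypothetical divisors chosen to sit inside the INTEL_LIMIT_I9XX_SDVO_DAC ranges listed earlier:

/*
 * refclk = 96000 kHz, n = 4, m1 = 14, m2 = 6, p1 = 5, p2 = 5
 *   m   = 5 * (14 + 2) + (6 + 2) = 88
 *   vco = 96000 * 88 / (4 + 2)   = 1408000 kHz
 *   p   = 5 * 5                  = 25
 *   dot = 1408000 / 25           = 56320 kHz  (~56.3 MHz pixel clock)
 */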
/** /**
* Returns whether any output on the specified pipe is of the specified type * Returns whether any output on the specified pipe is of the specified type
*/ */
...@@ -308,7 +212,7 @@ static bool psb_intel_find_best_PLL(struct drm_crtc *crtc, int target, ...@@ -308,7 +212,7 @@ static bool psb_intel_find_best_PLL(struct drm_crtc *crtc, int target,
clock.p1++) { clock.p1++) {
int this_err; int this_err;
psb_intel_clock(dev, refclk, &clock); psb_intel_clock(refclk, &clock);
if (!psb_intel_PLL_is_valid if (!psb_intel_PLL_is_valid
(crtc, &clock)) (crtc, &clock))
...@@ -1068,7 +972,7 @@ static int psb_intel_crtc_cursor_move(struct drm_crtc *crtc, int x, int y) ...@@ -1068,7 +972,7 @@ static int psb_intel_crtc_cursor_move(struct drm_crtc *crtc, int x, int y)
return 0; return 0;
} }
void psb_intel_crtc_gamma_set(struct drm_crtc *crtc, u16 *red, static void psb_intel_crtc_gamma_set(struct drm_crtc *crtc, u16 *red,
u16 *green, u16 *blue, uint32_t type, uint32_t size) u16 *green, u16 *blue, uint32_t type, uint32_t size)
{ {
struct psb_intel_crtc *psb_intel_crtc = to_psb_intel_crtc(crtc); struct psb_intel_crtc *psb_intel_crtc = to_psb_intel_crtc(crtc);
...@@ -1149,9 +1053,9 @@ static int psb_intel_crtc_clock_get(struct drm_device *dev, ...@@ -1149,9 +1053,9 @@ static int psb_intel_crtc_clock_get(struct drm_device *dev,
if ((dpll & PLL_REF_INPUT_MASK) == if ((dpll & PLL_REF_INPUT_MASK) ==
PLLB_REF_INPUT_SPREADSPECTRUMIN) { PLLB_REF_INPUT_SPREADSPECTRUMIN) {
/* XXX: might not be 66MHz */ /* XXX: might not be 66MHz */
i8xx_clock(66000, &clock); psb_intel_clock(66000, &clock);
} else } else
i8xx_clock(48000, &clock); psb_intel_clock(48000, &clock);
} else { } else {
if (dpll & PLL_P1_DIVIDE_BY_TWO) if (dpll & PLL_P1_DIVIDE_BY_TWO)
clock.p1 = 2; clock.p1 = 2;
...@@ -1166,7 +1070,7 @@ static int psb_intel_crtc_clock_get(struct drm_device *dev, ...@@ -1166,7 +1070,7 @@ static int psb_intel_crtc_clock_get(struct drm_device *dev,
else else
clock.p2 = 2; clock.p2 = 2;
i8xx_clock(48000, &clock); psb_intel_clock(48000, &clock);
} }
/* XXX: It would be nice to validate the clocks, but we can't reuse /* XXX: It would be nice to validate the clocks, but we can't reuse
...@@ -1225,7 +1129,7 @@ struct drm_display_mode *psb_intel_crtc_mode_get(struct drm_device *dev, ...@@ -1225,7 +1129,7 @@ struct drm_display_mode *psb_intel_crtc_mode_get(struct drm_device *dev,
return mode; return mode;
} }
void psb_intel_crtc_destroy(struct drm_crtc *crtc) static void psb_intel_crtc_destroy(struct drm_crtc *crtc)
{ {
struct psb_intel_crtc *psb_intel_crtc = to_psb_intel_crtc(crtc); struct psb_intel_crtc *psb_intel_crtc = to_psb_intel_crtc(crtc);
struct gtt_range *gt; struct gtt_range *gt;
......
...@@ -21,8 +21,5 @@ ...@@ -21,8 +21,5 @@
#define _INTEL_DISPLAY_H_ #define _INTEL_DISPLAY_H_
bool psb_intel_pipe_has_type(struct drm_crtc *crtc, int type); bool psb_intel_pipe_has_type(struct drm_crtc *crtc, int type);
void psb_intel_crtc_gamma_set(struct drm_crtc *crtc, u16 *red,
u16 *green, u16 *blue, uint32_t type, uint32_t size);
void psb_intel_crtc_destroy(struct drm_crtc *crtc);
#endif #endif
...@@ -32,9 +32,6 @@ ...@@ -32,9 +32,6 @@
/* maximum connectors per crtcs in the mode set */ /* maximum connectors per crtcs in the mode set */
#define INTELFB_CONN_LIMIT 4 #define INTELFB_CONN_LIMIT 4
#define INTEL_I2C_BUS_DVO 1
#define INTEL_I2C_BUS_SDVO 2
/* Intel Pipe Clone Bit */ /* Intel Pipe Clone Bit */
#define INTEL_HDMIB_CLONE_BIT 1 #define INTEL_HDMIB_CLONE_BIT 1
#define INTEL_HDMIC_CLONE_BIT 2 #define INTEL_HDMIC_CLONE_BIT 2
...@@ -68,11 +65,6 @@ ...@@ -68,11 +65,6 @@
#define INTEL_OUTPUT_DISPLAYPORT 9 #define INTEL_OUTPUT_DISPLAYPORT 9
#define INTEL_OUTPUT_EDP 10 #define INTEL_OUTPUT_EDP 10
#define INTEL_DVO_CHIP_NONE 0
#define INTEL_DVO_CHIP_LVDS 1
#define INTEL_DVO_CHIP_TMDS 2
#define INTEL_DVO_CHIP_TVOUT 4
#define INTEL_MODE_PIXEL_MULTIPLIER_SHIFT (0x0) #define INTEL_MODE_PIXEL_MULTIPLIER_SHIFT (0x0)
#define INTEL_MODE_PIXEL_MULTIPLIER_MASK (0xf << INTEL_MODE_PIXEL_MULTIPLIER_SHIFT) #define INTEL_MODE_PIXEL_MULTIPLIER_MASK (0xf << INTEL_MODE_PIXEL_MULTIPLIER_SHIFT)
......
...@@ -493,7 +493,6 @@ ...@@ -493,7 +493,6 @@
#define PIPEACONF_DISABLE 0 #define PIPEACONF_DISABLE 0
#define PIPEACONF_DOUBLE_WIDE (1 << 30) #define PIPEACONF_DOUBLE_WIDE (1 << 30)
#define PIPECONF_ACTIVE (1 << 30) #define PIPECONF_ACTIVE (1 << 30)
#define I965_PIPECONF_ACTIVE (1 << 30)
#define PIPECONF_DSIPLL_LOCK (1 << 29) #define PIPECONF_DSIPLL_LOCK (1 << 29)
#define PIPEACONF_SINGLE_WIDE 0 #define PIPEACONF_SINGLE_WIDE 0
#define PIPEACONF_PIPE_UNLOCKED 0 #define PIPEACONF_PIPE_UNLOCKED 0
......
...@@ -134,6 +134,9 @@ struct psb_intel_sdvo { ...@@ -134,6 +134,9 @@ struct psb_intel_sdvo {
/* Input timings for adjusted_mode */ /* Input timings for adjusted_mode */
struct psb_intel_sdvo_dtd input_dtd; struct psb_intel_sdvo_dtd input_dtd;
/* Saved SDVO output states */
uint32_t saveSDVO; /* Can be SDVOB or SDVOC depending on sdvo_reg */
}; };
struct psb_intel_sdvo_connector { struct psb_intel_sdvo_connector {
...@@ -1830,6 +1833,34 @@ psb_intel_sdvo_set_property(struct drm_connector *connector, ...@@ -1830,6 +1833,34 @@ psb_intel_sdvo_set_property(struct drm_connector *connector,
#undef CHECK_PROPERTY #undef CHECK_PROPERTY
} }
static void psb_intel_sdvo_save(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct psb_intel_encoder *psb_intel_encoder =
psb_intel_attached_encoder(connector);
struct psb_intel_sdvo *sdvo =
to_psb_intel_sdvo(&psb_intel_encoder->base);
sdvo->saveSDVO = REG_READ(sdvo->sdvo_reg);
}
static void psb_intel_sdvo_restore(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct drm_encoder *encoder =
&psb_intel_attached_encoder(connector)->base;
struct psb_intel_sdvo *sdvo = to_psb_intel_sdvo(encoder);
struct drm_crtc *crtc = encoder->crtc;
REG_WRITE(sdvo->sdvo_reg, sdvo->saveSDVO);
/* Force a full mode set on the crtc. We're supposed to have the
mode_config lock already. */
if (connector->status == connector_status_connected)
drm_crtc_helper_set_mode(crtc, &crtc->mode, crtc->x, crtc->y,
NULL);
}
static const struct drm_encoder_helper_funcs psb_intel_sdvo_helper_funcs = { static const struct drm_encoder_helper_funcs psb_intel_sdvo_helper_funcs = {
.dpms = psb_intel_sdvo_dpms, .dpms = psb_intel_sdvo_dpms,
.mode_fixup = psb_intel_sdvo_mode_fixup, .mode_fixup = psb_intel_sdvo_mode_fixup,
...@@ -1840,6 +1871,8 @@ static const struct drm_encoder_helper_funcs psb_intel_sdvo_helper_funcs = { ...@@ -1840,6 +1871,8 @@ static const struct drm_encoder_helper_funcs psb_intel_sdvo_helper_funcs = {
static const struct drm_connector_funcs psb_intel_sdvo_connector_funcs = { static const struct drm_connector_funcs psb_intel_sdvo_connector_funcs = {
.dpms = drm_helper_connector_dpms, .dpms = drm_helper_connector_dpms,
.save = psb_intel_sdvo_save,
.restore = psb_intel_sdvo_restore,
.detect = psb_intel_sdvo_detect, .detect = psb_intel_sdvo_detect,
.fill_modes = drm_helper_probe_single_connector_modes, .fill_modes = drm_helper_probe_single_connector_modes,
.set_property = psb_intel_sdvo_set_property, .set_property = psb_intel_sdvo_set_property,
......
...@@ -211,7 +211,7 @@ irqreturn_t psb_irq_handler(DRM_IRQ_ARGS) ...@@ -211,7 +211,7 @@ irqreturn_t psb_irq_handler(DRM_IRQ_ARGS)
vdc_stat = PSB_RVDC32(PSB_INT_IDENTITY_R); vdc_stat = PSB_RVDC32(PSB_INT_IDENTITY_R);
if (vdc_stat & _PSB_PIPE_EVENT_FLAG) if (vdc_stat & (_PSB_PIPE_EVENT_FLAG|_PSB_IRQ_ASLE))
dsp_int = 1; dsp_int = 1;
/* FIXME: Handle Medfield /* FIXME: Handle Medfield
......
...@@ -21,8 +21,8 @@ ...@@ -21,8 +21,8 @@
* *
**************************************************************************/ **************************************************************************/
#ifndef _SYSIRQ_H_ #ifndef _PSB_IRQ_H_
#define _SYSIRQ_H_ #define _PSB_IRQ_H_
#include <drm/drmP.h> #include <drm/drmP.h>
...@@ -44,4 +44,4 @@ u32 psb_get_vblank_counter(struct drm_device *dev, int pipe); ...@@ -44,4 +44,4 @@ u32 psb_get_vblank_counter(struct drm_device *dev, int pipe);
int mdfld_enable_te(struct drm_device *dev, int pipe); int mdfld_enable_te(struct drm_device *dev, int pipe);
void mdfld_disable_te(struct drm_device *dev, int pipe); void mdfld_disable_te(struct drm_device *dev, int pipe);
#endif /* _SYSIRQ_H_ */ #endif /* _PSB_IRQ_H_ */
...@@ -772,6 +772,23 @@ static int i915_error_state(struct seq_file *m, void *unused) ...@@ -772,6 +772,23 @@ static int i915_error_state(struct seq_file *m, void *unused)
} }
} }
} }
obj = error->ring[i].ctx;
if (obj) {
seq_printf(m, "%s --- HW Context = 0x%08x\n",
dev_priv->ring[i].name,
obj->gtt_offset);
offset = 0;
for (elt = 0; elt < PAGE_SIZE/16; elt += 4) {
seq_printf(m, "[%04x] %08x %08x %08x %08x\n",
offset,
obj->pages[0][elt],
obj->pages[0][elt+1],
obj->pages[0][elt+2],
obj->pages[0][elt+3]);
offset += 16;
}
}
} }
if (error->overlay) if (error->overlay)
...@@ -849,76 +866,42 @@ static const struct file_operations i915_error_state_fops = { ...@@ -849,76 +866,42 @@ static const struct file_operations i915_error_state_fops = {
.release = i915_error_state_release, .release = i915_error_state_release,
}; };
static ssize_t static int
i915_next_seqno_read(struct file *filp, i915_next_seqno_get(void *data, u64 *val)
char __user *ubuf,
size_t max,
loff_t *ppos)
{ {
struct drm_device *dev = filp->private_data; struct drm_device *dev = data;
drm_i915_private_t *dev_priv = dev->dev_private; drm_i915_private_t *dev_priv = dev->dev_private;
char buf[80];
int len;
int ret; int ret;
ret = mutex_lock_interruptible(&dev->struct_mutex); ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret) if (ret)
return ret; return ret;
len = snprintf(buf, sizeof(buf), *val = dev_priv->next_seqno;
"next_seqno : 0x%x\n",
dev_priv->next_seqno);
mutex_unlock(&dev->struct_mutex); mutex_unlock(&dev->struct_mutex);
if (len > sizeof(buf)) return 0;
len = sizeof(buf);
return simple_read_from_buffer(ubuf, max, ppos, buf, len);
} }
static ssize_t static int
i915_next_seqno_write(struct file *filp, i915_next_seqno_set(void *data, u64 val)
const char __user *ubuf, {
size_t cnt, struct drm_device *dev = data;
loff_t *ppos)
{
struct drm_device *dev = filp->private_data;
char buf[20];
u32 val = 1;
int ret; int ret;
if (cnt > 0) {
if (cnt > sizeof(buf) - 1)
return -EINVAL;
if (copy_from_user(buf, ubuf, cnt))
return -EFAULT;
buf[cnt] = 0;
ret = kstrtouint(buf, 0, &val);
if (ret < 0)
return ret;
}
ret = mutex_lock_interruptible(&dev->struct_mutex); ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret) if (ret)
return ret; return ret;
ret = i915_gem_set_seqno(dev, val); ret = i915_gem_set_seqno(dev, val);
mutex_unlock(&dev->struct_mutex); mutex_unlock(&dev->struct_mutex);
return ret ?: cnt; return ret;
} }
static const struct file_operations i915_next_seqno_fops = { DEFINE_SIMPLE_ATTRIBUTE(i915_next_seqno_fops,
.owner = THIS_MODULE, i915_next_seqno_get, i915_next_seqno_set,
.open = simple_open, "0x%llx\n");
.read = i915_next_seqno_read,
.write = i915_next_seqno_write,
.llseek = default_llseek,
};
static int i915_rstdby_delays(struct seq_file *m, void *unused) static int i915_rstdby_delays(struct seq_file *m, void *unused)
{ {
...@@ -1023,6 +1006,9 @@ static int i915_cur_delayinfo(struct seq_file *m, void *unused) ...@@ -1023,6 +1006,9 @@ static int i915_cur_delayinfo(struct seq_file *m, void *unused)
max_freq = rp_state_cap & 0xff; max_freq = rp_state_cap & 0xff;
seq_printf(m, "Max non-overclocked (RP0) frequency: %dMHz\n", seq_printf(m, "Max non-overclocked (RP0) frequency: %dMHz\n",
max_freq * GT_FREQUENCY_MULTIPLIER); max_freq * GT_FREQUENCY_MULTIPLIER);
seq_printf(m, "Max overclocked frequency: %dMHz\n",
dev_priv->rps.hw_max * GT_FREQUENCY_MULTIPLIER);
} else { } else {
seq_printf(m, "no P-state info available\n"); seq_printf(m, "no P-state info available\n");
} }
...@@ -1371,7 +1357,7 @@ static int i915_ring_freq_table(struct seq_file *m, void *unused) ...@@ -1371,7 +1357,7 @@ static int i915_ring_freq_table(struct seq_file *m, void *unused)
if (ret) if (ret)
return ret; return ret;
seq_printf(m, "GPU freq (MHz)\tEffective CPU freq (MHz)\n"); seq_printf(m, "GPU freq (MHz)\tEffective CPU freq (MHz)\tEffective Ring freq (MHz)\n");
for (gpu_freq = dev_priv->rps.min_delay; for (gpu_freq = dev_priv->rps.min_delay;
gpu_freq <= dev_priv->rps.max_delay; gpu_freq <= dev_priv->rps.max_delay;
...@@ -1380,7 +1366,10 @@ static int i915_ring_freq_table(struct seq_file *m, void *unused) ...@@ -1380,7 +1366,10 @@ static int i915_ring_freq_table(struct seq_file *m, void *unused)
sandybridge_pcode_read(dev_priv, sandybridge_pcode_read(dev_priv,
GEN6_PCODE_READ_MIN_FREQ_TABLE, GEN6_PCODE_READ_MIN_FREQ_TABLE,
&ia_freq); &ia_freq);
seq_printf(m, "%d\t\t%d\n", gpu_freq * GT_FREQUENCY_MULTIPLIER, ia_freq * 100); seq_printf(m, "%d\t\t%d\t\t\t\t%d\n",
gpu_freq * GT_FREQUENCY_MULTIPLIER,
((ia_freq >> 0) & 0xff) * 100,
((ia_freq >> 8) & 0xff) * 100);
} }
mutex_unlock(&dev_priv->rps.hw_lock); mutex_unlock(&dev_priv->rps.hw_lock);
...@@ -1680,105 +1669,51 @@ static int i915_dpio_info(struct seq_file *m, void *data) ...@@ -1680,105 +1669,51 @@ static int i915_dpio_info(struct seq_file *m, void *data)
return 0; return 0;
} }
static ssize_t static int
i915_wedged_read(struct file *filp, i915_wedged_get(void *data, u64 *val)
char __user *ubuf,
size_t max,
loff_t *ppos)
{ {
struct drm_device *dev = filp->private_data; struct drm_device *dev = data;
drm_i915_private_t *dev_priv = dev->dev_private; drm_i915_private_t *dev_priv = dev->dev_private;
char buf[80];
int len;
len = snprintf(buf, sizeof(buf), *val = atomic_read(&dev_priv->gpu_error.reset_counter);
"wedged : %d\n",
atomic_read(&dev_priv->gpu_error.reset_counter));
if (len > sizeof(buf)) return 0;
len = sizeof(buf);
return simple_read_from_buffer(ubuf, max, ppos, buf, len);
} }
static ssize_t static int
i915_wedged_write(struct file *filp, i915_wedged_set(void *data, u64 val)
const char __user *ubuf,
size_t cnt,
loff_t *ppos)
{ {
struct drm_device *dev = filp->private_data; struct drm_device *dev = data;
char buf[20];
int val = 1;
if (cnt > 0) {
if (cnt > sizeof(buf) - 1)
return -EINVAL;
if (copy_from_user(buf, ubuf, cnt)) DRM_INFO("Manually setting wedged to %llu\n", val);
return -EFAULT;
buf[cnt] = 0;
val = simple_strtoul(buf, NULL, 0);
}
DRM_INFO("Manually setting wedged to %d\n", val);
i915_handle_error(dev, val); i915_handle_error(dev, val);
return cnt; return 0;
} }
static const struct file_operations i915_wedged_fops = { DEFINE_SIMPLE_ATTRIBUTE(i915_wedged_fops,
.owner = THIS_MODULE, i915_wedged_get, i915_wedged_set,
.open = simple_open, "%llu\n");
.read = i915_wedged_read,
.write = i915_wedged_write,
.llseek = default_llseek,
};
static ssize_t static int
i915_ring_stop_read(struct file *filp, i915_ring_stop_get(void *data, u64 *val)
char __user *ubuf,
size_t max,
loff_t *ppos)
{ {
struct drm_device *dev = filp->private_data; struct drm_device *dev = data;
drm_i915_private_t *dev_priv = dev->dev_private; drm_i915_private_t *dev_priv = dev->dev_private;
char buf[20];
int len;
len = snprintf(buf, sizeof(buf), *val = dev_priv->gpu_error.stop_rings;
"0x%08x\n", dev_priv->gpu_error.stop_rings);
if (len > sizeof(buf)) return 0;
len = sizeof(buf);
return simple_read_from_buffer(ubuf, max, ppos, buf, len);
} }
static ssize_t static int
i915_ring_stop_write(struct file *filp, i915_ring_stop_set(void *data, u64 val)
const char __user *ubuf,
size_t cnt,
loff_t *ppos)
{ {
struct drm_device *dev = filp->private_data; struct drm_device *dev = data;
struct drm_i915_private *dev_priv = dev->dev_private; struct drm_i915_private *dev_priv = dev->dev_private;
char buf[20]; int ret;
int val = 0, ret;
if (cnt > 0) {
if (cnt > sizeof(buf) - 1)
return -EINVAL;
if (copy_from_user(buf, ubuf, cnt))
return -EFAULT;
buf[cnt] = 0;
val = simple_strtoul(buf, NULL, 0);
}
DRM_DEBUG_DRIVER("Stopping rings 0x%08x\n", val); DRM_DEBUG_DRIVER("Stopping rings 0x%08llx\n", val);
ret = mutex_lock_interruptible(&dev->struct_mutex); ret = mutex_lock_interruptible(&dev->struct_mutex);
if (ret) if (ret)
...@@ -1787,16 +1722,12 @@ i915_ring_stop_write(struct file *filp, ...@@ -1787,16 +1722,12 @@ i915_ring_stop_write(struct file *filp,
dev_priv->gpu_error.stop_rings = val; dev_priv->gpu_error.stop_rings = val;
mutex_unlock(&dev->struct_mutex); mutex_unlock(&dev->struct_mutex);
return cnt; return 0;
} }
static const struct file_operations i915_ring_stop_fops = { DEFINE_SIMPLE_ATTRIBUTE(i915_ring_stop_fops,
.owner = THIS_MODULE, i915_ring_stop_get, i915_ring_stop_set,
.open = simple_open, "0x%08llx\n");
.read = i915_ring_stop_read,
.write = i915_ring_stop_write,
.llseek = default_llseek,
};
#define DROP_UNBOUND 0x1 #define DROP_UNBOUND 0x1
#define DROP_BOUND 0x2 #define DROP_BOUND 0x2
...@@ -1806,46 +1737,23 @@ static const struct file_operations i915_ring_stop_fops = { ...@@ -1806,46 +1737,23 @@ static const struct file_operations i915_ring_stop_fops = {
DROP_BOUND | \ DROP_BOUND | \
DROP_RETIRE | \ DROP_RETIRE | \
DROP_ACTIVE) DROP_ACTIVE)
static ssize_t static int
i915_drop_caches_read(struct file *filp, i915_drop_caches_get(void *data, u64 *val)
char __user *ubuf,
size_t max,
loff_t *ppos)
{ {
char buf[20]; *val = DROP_ALL;
int len;
len = snprintf(buf, sizeof(buf), "0x%08x\n", DROP_ALL); return 0;
if (len > sizeof(buf))
len = sizeof(buf);
return simple_read_from_buffer(ubuf, max, ppos, buf, len);
} }
static ssize_t static int
i915_drop_caches_write(struct file *filp, i915_drop_caches_set(void *data, u64 val)
const char __user *ubuf,
size_t cnt,
loff_t *ppos)
{ {
struct drm_device *dev = filp->private_data; struct drm_device *dev = data;
struct drm_i915_private *dev_priv = dev->dev_private; struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_gem_object *obj, *next; struct drm_i915_gem_object *obj, *next;
char buf[20]; int ret;
int val = 0, ret;
if (cnt > 0) {
if (cnt > sizeof(buf) - 1)
return -EINVAL;
if (copy_from_user(buf, ubuf, cnt))
return -EFAULT;
buf[cnt] = 0;
val = simple_strtoul(buf, NULL, 0);
}
DRM_DEBUG_DRIVER("Dropping caches: 0x%08x\n", val); DRM_DEBUG_DRIVER("Dropping caches: 0x%08llx\n", val);
/* No need to check and wait for gpu resets, only libdrm auto-restarts /* No need to check and wait for gpu resets, only libdrm auto-restarts
* on ioctls on -EAGAIN. */ * on ioctls on -EAGAIN. */
...@@ -1883,27 +1791,19 @@ i915_drop_caches_write(struct file *filp, ...@@ -1883,27 +1791,19 @@ i915_drop_caches_write(struct file *filp,
unlock: unlock:
mutex_unlock(&dev->struct_mutex); mutex_unlock(&dev->struct_mutex);
return ret ?: cnt; return ret;
} }
static const struct file_operations i915_drop_caches_fops = { DEFINE_SIMPLE_ATTRIBUTE(i915_drop_caches_fops,
.owner = THIS_MODULE, i915_drop_caches_get, i915_drop_caches_set,
.open = simple_open, "0x%08llx\n");
.read = i915_drop_caches_read,
.write = i915_drop_caches_write,
.llseek = default_llseek,
};
static ssize_t static int
i915_max_freq_read(struct file *filp, i915_max_freq_get(void *data, u64 *val)
char __user *ubuf,
size_t max,
loff_t *ppos)
{ {
struct drm_device *dev = filp->private_data; struct drm_device *dev = data;
drm_i915_private_t *dev_priv = dev->dev_private; drm_i915_private_t *dev_priv = dev->dev_private;
char buf[80]; int ret;
int len, ret;
if (!(IS_GEN6(dev) || IS_GEN7(dev))) if (!(IS_GEN6(dev) || IS_GEN7(dev)))
return -ENODEV; return -ENODEV;
...@@ -1912,42 +1812,23 @@ i915_max_freq_read(struct file *filp, ...@@ -1912,42 +1812,23 @@ i915_max_freq_read(struct file *filp,
if (ret) if (ret)
return ret; return ret;
len = snprintf(buf, sizeof(buf), *val = dev_priv->rps.max_delay * GT_FREQUENCY_MULTIPLIER;
"max freq: %d\n", dev_priv->rps.max_delay * GT_FREQUENCY_MULTIPLIER);
mutex_unlock(&dev_priv->rps.hw_lock); mutex_unlock(&dev_priv->rps.hw_lock);
if (len > sizeof(buf)) return 0;
len = sizeof(buf);
return simple_read_from_buffer(ubuf, max, ppos, buf, len);
} }
static ssize_t static int
i915_max_freq_write(struct file *filp, i915_max_freq_set(void *data, u64 val)
const char __user *ubuf,
size_t cnt,
loff_t *ppos)
{ {
struct drm_device *dev = filp->private_data; struct drm_device *dev = data;
struct drm_i915_private *dev_priv = dev->dev_private; struct drm_i915_private *dev_priv = dev->dev_private;
char buf[20]; int ret;
int val = 1, ret;
if (!(IS_GEN6(dev) || IS_GEN7(dev))) if (!(IS_GEN6(dev) || IS_GEN7(dev)))
return -ENODEV; return -ENODEV;
if (cnt > 0) { DRM_DEBUG_DRIVER("Manually setting max freq to %llu\n", val);
if (cnt > sizeof(buf) - 1)
return -EINVAL;
if (copy_from_user(buf, ubuf, cnt))
return -EFAULT;
buf[cnt] = 0;
val = simple_strtoul(buf, NULL, 0);
}
DRM_DEBUG_DRIVER("Manually setting max freq to %d\n", val);
ret = mutex_lock_interruptible(&dev_priv->rps.hw_lock); ret = mutex_lock_interruptible(&dev_priv->rps.hw_lock);
if (ret) if (ret)
...@@ -1956,30 +1837,24 @@ i915_max_freq_write(struct file *filp, ...@@ -1956,30 +1837,24 @@ i915_max_freq_write(struct file *filp,
/* /*
* Turbo will still be enabled, but won't go above the set value. * Turbo will still be enabled, but won't go above the set value.
*/ */
dev_priv->rps.max_delay = val / GT_FREQUENCY_MULTIPLIER; do_div(val, GT_FREQUENCY_MULTIPLIER);
dev_priv->rps.max_delay = val;
gen6_set_rps(dev, val / GT_FREQUENCY_MULTIPLIER); gen6_set_rps(dev, val);
mutex_unlock(&dev_priv->rps.hw_lock); mutex_unlock(&dev_priv->rps.hw_lock);
return cnt; return 0;
} }
static const struct file_operations i915_max_freq_fops = { DEFINE_SIMPLE_ATTRIBUTE(i915_max_freq_fops,
.owner = THIS_MODULE, i915_max_freq_get, i915_max_freq_set,
.open = simple_open, "%llu\n");
.read = i915_max_freq_read,
.write = i915_max_freq_write,
.llseek = default_llseek,
};
static ssize_t static int
i915_min_freq_read(struct file *filp, char __user *ubuf, size_t max, i915_min_freq_get(void *data, u64 *val)
loff_t *ppos)
{ {
struct drm_device *dev = filp->private_data; struct drm_device *dev = data;
drm_i915_private_t *dev_priv = dev->dev_private; drm_i915_private_t *dev_priv = dev->dev_private;
char buf[80]; int ret;
int len, ret;
if (!(IS_GEN6(dev) || IS_GEN7(dev))) if (!(IS_GEN6(dev) || IS_GEN7(dev)))
return -ENODEV; return -ENODEV;
...@@ -1988,40 +1863,23 @@ i915_min_freq_read(struct file *filp, char __user *ubuf, size_t max, ...@@ -1988,40 +1863,23 @@ i915_min_freq_read(struct file *filp, char __user *ubuf, size_t max,
if (ret) if (ret)
return ret; return ret;
len = snprintf(buf, sizeof(buf), *val = dev_priv->rps.min_delay * GT_FREQUENCY_MULTIPLIER;
"min freq: %d\n", dev_priv->rps.min_delay * GT_FREQUENCY_MULTIPLIER);
mutex_unlock(&dev_priv->rps.hw_lock); mutex_unlock(&dev_priv->rps.hw_lock);
if (len > sizeof(buf)) return 0;
len = sizeof(buf);
return simple_read_from_buffer(ubuf, max, ppos, buf, len);
} }
static ssize_t static int
i915_min_freq_write(struct file *filp, const char __user *ubuf, size_t cnt, i915_min_freq_set(void *data, u64 val)
loff_t *ppos)
{ {
struct drm_device *dev = filp->private_data; struct drm_device *dev = data;
struct drm_i915_private *dev_priv = dev->dev_private; struct drm_i915_private *dev_priv = dev->dev_private;
char buf[20]; int ret;
int val = 1, ret;
if (!(IS_GEN6(dev) || IS_GEN7(dev))) if (!(IS_GEN6(dev) || IS_GEN7(dev)))
return -ENODEV; return -ENODEV;
if (cnt > 0) { DRM_DEBUG_DRIVER("Manually setting min freq to %llu\n", val);
if (cnt > sizeof(buf) - 1)
return -EINVAL;
if (copy_from_user(buf, ubuf, cnt))
return -EFAULT;
buf[cnt] = 0;
val = simple_strtoul(buf, NULL, 0);
}
DRM_DEBUG_DRIVER("Manually setting min freq to %d\n", val);
ret = mutex_lock_interruptible(&dev_priv->rps.hw_lock); ret = mutex_lock_interruptible(&dev_priv->rps.hw_lock);
if (ret) if (ret)
...@@ -2030,33 +1888,25 @@ i915_min_freq_write(struct file *filp, const char __user *ubuf, size_t cnt, ...@@ -2030,33 +1888,25 @@ i915_min_freq_write(struct file *filp, const char __user *ubuf, size_t cnt,
/* /*
* Turbo will still be enabled, but won't go below the set value. * Turbo will still be enabled, but won't go below the set value.
*/ */
dev_priv->rps.min_delay = val / GT_FREQUENCY_MULTIPLIER; do_div(val, GT_FREQUENCY_MULTIPLIER);
dev_priv->rps.min_delay = val;
gen6_set_rps(dev, val / GT_FREQUENCY_MULTIPLIER); gen6_set_rps(dev, val);
mutex_unlock(&dev_priv->rps.hw_lock); mutex_unlock(&dev_priv->rps.hw_lock);
return cnt; return 0;
} }
static const struct file_operations i915_min_freq_fops = { DEFINE_SIMPLE_ATTRIBUTE(i915_min_freq_fops,
.owner = THIS_MODULE, i915_min_freq_get, i915_min_freq_set,
.open = simple_open, "%llu\n");
.read = i915_min_freq_read,
.write = i915_min_freq_write,
.llseek = default_llseek,
};
static ssize_t static int
i915_cache_sharing_read(struct file *filp, i915_cache_sharing_get(void *data, u64 *val)
char __user *ubuf,
size_t max,
loff_t *ppos)
{ {
struct drm_device *dev = filp->private_data; struct drm_device *dev = data;
drm_i915_private_t *dev_priv = dev->dev_private; drm_i915_private_t *dev_priv = dev->dev_private;
char buf[80];
u32 snpcr; u32 snpcr;
int len, ret; int ret;
if (!(IS_GEN6(dev) || IS_GEN7(dev))) if (!(IS_GEN6(dev) || IS_GEN7(dev)))
return -ENODEV; return -ENODEV;
...@@ -2068,46 +1918,25 @@ i915_cache_sharing_read(struct file *filp, ...@@ -2068,46 +1918,25 @@ i915_cache_sharing_read(struct file *filp,
snpcr = I915_READ(GEN6_MBCUNIT_SNPCR); snpcr = I915_READ(GEN6_MBCUNIT_SNPCR);
mutex_unlock(&dev_priv->dev->struct_mutex); mutex_unlock(&dev_priv->dev->struct_mutex);
len = snprintf(buf, sizeof(buf), *val = (snpcr & GEN6_MBC_SNPCR_MASK) >> GEN6_MBC_SNPCR_SHIFT;
"%d\n", (snpcr & GEN6_MBC_SNPCR_MASK) >>
GEN6_MBC_SNPCR_SHIFT);
if (len > sizeof(buf)) return 0;
len = sizeof(buf);
return simple_read_from_buffer(ubuf, max, ppos, buf, len);
} }
static ssize_t static int
i915_cache_sharing_write(struct file *filp, i915_cache_sharing_set(void *data, u64 val)
const char __user *ubuf,
size_t cnt,
loff_t *ppos)
{ {
struct drm_device *dev = filp->private_data; struct drm_device *dev = data;
struct drm_i915_private *dev_priv = dev->dev_private; struct drm_i915_private *dev_priv = dev->dev_private;
char buf[20];
u32 snpcr; u32 snpcr;
int val = 1;
if (!(IS_GEN6(dev) || IS_GEN7(dev))) if (!(IS_GEN6(dev) || IS_GEN7(dev)))
return -ENODEV; return -ENODEV;
if (cnt > 0) { if (val > 3)
if (cnt > sizeof(buf) - 1)
return -EINVAL;
if (copy_from_user(buf, ubuf, cnt))
return -EFAULT;
buf[cnt] = 0;
val = simple_strtoul(buf, NULL, 0);
}
if (val < 0 || val > 3)
return -EINVAL; return -EINVAL;
DRM_DEBUG_DRIVER("Manually setting uncore sharing to %d\n", val); DRM_DEBUG_DRIVER("Manually setting uncore sharing to %llu\n", val);
/* Update the cache sharing policy here as well */ /* Update the cache sharing policy here as well */
snpcr = I915_READ(GEN6_MBCUNIT_SNPCR); snpcr = I915_READ(GEN6_MBCUNIT_SNPCR);
...@@ -2115,16 +1944,12 @@ i915_cache_sharing_write(struct file *filp, ...@@ -2115,16 +1944,12 @@ i915_cache_sharing_write(struct file *filp,
snpcr |= (val << GEN6_MBC_SNPCR_SHIFT); snpcr |= (val << GEN6_MBC_SNPCR_SHIFT);
I915_WRITE(GEN6_MBCUNIT_SNPCR, snpcr); I915_WRITE(GEN6_MBCUNIT_SNPCR, snpcr);
return cnt; return 0;
} }
static const struct file_operations i915_cache_sharing_fops = { DEFINE_SIMPLE_ATTRIBUTE(i915_cache_sharing_fops,
.owner = THIS_MODULE, i915_cache_sharing_get, i915_cache_sharing_set,
.open = simple_open, "%llu\n");
.read = i915_cache_sharing_read,
.write = i915_cache_sharing_write,
.llseek = default_llseek,
};
/* As the drm_debugfs_init() routines are called before dev->dev_private is /* As the drm_debugfs_init() routines are called before dev->dev_private is
* allocated we need to hook into the minor for release. */ * allocated we need to hook into the minor for release. */
......
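Note on the i915_debugfs.c hunks above: each hand-rolled read/write pair of file_operations (stack buffer, snprintf(), copy_from_user() plus simple_strtoul()/kstrtouint()) is replaced by a get/set callback pair wired up with the DEFINE_SIMPLE_ATTRIBUTE() helper from <linux/fs.h>, which handles the user-space copying and number parsing and formats the value with the given printf string. Below is a minimal, self-contained sketch of that pattern outside the driver; the names my_value, my_value_get/my_value_set, my_attr_fops and the "simple_attr_demo" directory are illustrative only and are not part of this commit.

#include <linux/debugfs.h>
#include <linux/fs.h>
#include <linux/module.h>

static u64 my_value;		/* state exposed through debugfs (illustrative) */
static struct dentry *my_dir;

static int my_value_get(void *data, u64 *val)
{
	*val = *(u64 *)data;	/* data is the pointer passed to debugfs_create_file() */
	return 0;
}

static int my_value_set(void *data, u64 val)
{
	*(u64 *)data = val;	/* value already parsed from user space by the helper */
	return 0;
}

/* Generates the my_attr_fops file_operations (open/read/write/llseek). */
DEFINE_SIMPLE_ATTRIBUTE(my_attr_fops, my_value_get, my_value_set, "0x%08llx\n");

static int __init my_attr_init(void)
{
	my_dir = debugfs_create_dir("simple_attr_demo", NULL);
	debugfs_create_file("value", 0644, my_dir, &my_value, &my_attr_fops);
	return 0;
}

static void __exit my_attr_exit(void)
{
	debugfs_remove_recursive(my_dir);
}

module_init(my_attr_init);
module_exit(my_attr_exit);
MODULE_LICENSE("GPL");

Because the callbacks operate on u64, the setters above that need a 64-bit division (the max/min frequency files) switch to do_div(), which divides the value in place and returns the remainder.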
...@@ -1322,6 +1322,10 @@ static int i915_load_modeset_init(struct drm_device *dev) ...@@ -1322,6 +1322,10 @@ static int i915_load_modeset_init(struct drm_device *dev)
/* Always safe in the mode setting case. */ /* Always safe in the mode setting case. */
/* FIXME: do pre/post-mode set stuff in core KMS code */ /* FIXME: do pre/post-mode set stuff in core KMS code */
dev->vblank_disable_allowed = 1; dev->vblank_disable_allowed = 1;
if (INTEL_INFO(dev)->num_pipes == 0) {
dev_priv->mm.suspended = 0;
return 0;
}
ret = intel_fbdev_init(dev); ret = intel_fbdev_init(dev);
if (ret) if (ret)
...@@ -1452,6 +1456,22 @@ static void i915_dump_device_info(struct drm_i915_private *dev_priv) ...@@ -1452,6 +1456,22 @@ static void i915_dump_device_info(struct drm_i915_private *dev_priv)
#undef DEV_INFO_SEP #undef DEV_INFO_SEP
} }
/**
* intel_early_sanitize_regs - clean up BIOS state
* @dev: DRM device
*
* This function must be called before we do any I915_READ or I915_WRITE. Its
* purpose is to clean up any state left by the BIOS that may affect us when
* reading and/or writing registers.
*/
static void intel_early_sanitize_regs(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
if (IS_HASWELL(dev))
I915_WRITE_NOTRACE(FPGA_DBG, FPGA_DBG_RM_NOCLAIM);
}
/** /**
* i915_driver_load - setup chip and create an initial config * i915_driver_load - setup chip and create an initial config
* @dev: DRM device * @dev: DRM device
...@@ -1498,6 +1518,28 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags) ...@@ -1498,6 +1518,28 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
goto free_priv; goto free_priv;
} }
mmio_bar = IS_GEN2(dev) ? 1 : 0;
/* Before gen4, the registers and the GTT are behind different BARs.
* However, from gen4 onwards, the registers and the GTT are shared
* in the same BAR, so we want to restrict this ioremap from
* clobbering the GTT which we want ioremap_wc instead. Fortunately,
* the register BAR remains the same size for all the earlier
* generations up to Ironlake.
*/
if (info->gen < 5)
mmio_size = 512*1024;
else
mmio_size = 2*1024*1024;
dev_priv->regs = pci_iomap(dev->pdev, mmio_bar, mmio_size);
if (!dev_priv->regs) {
DRM_ERROR("failed to map registers\n");
ret = -EIO;
goto put_bridge;
}
intel_early_sanitize_regs(dev);
ret = i915_gem_gtt_init(dev); ret = i915_gem_gtt_init(dev);
if (ret) if (ret)
goto put_bridge; goto put_bridge;
...@@ -1522,26 +1564,6 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags) ...@@ -1522,26 +1564,6 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
if (IS_BROADWATER(dev) || IS_CRESTLINE(dev)) if (IS_BROADWATER(dev) || IS_CRESTLINE(dev))
dma_set_coherent_mask(&dev->pdev->dev, DMA_BIT_MASK(32)); dma_set_coherent_mask(&dev->pdev->dev, DMA_BIT_MASK(32));
mmio_bar = IS_GEN2(dev) ? 1 : 0;
/* Before gen4, the registers and the GTT are behind different BARs.
* However, from gen4 onwards, the registers and the GTT are shared
* in the same BAR, so we want to restrict this ioremap from
* clobbering the GTT which we want ioremap_wc instead. Fortunately,
* the register BAR remains the same size for all the earlier
* generations up to Ironlake.
*/
if (info->gen < 5)
mmio_size = 512*1024;
else
mmio_size = 2*1024*1024;
dev_priv->regs = pci_iomap(dev->pdev, mmio_bar, mmio_size);
if (!dev_priv->regs) {
DRM_ERROR("failed to map registers\n");
ret = -EIO;
goto put_gmch;
}
aperture_size = dev_priv->gtt.mappable_end; aperture_size = dev_priv->gtt.mappable_end;
dev_priv->gtt.mappable = dev_priv->gtt.mappable =
...@@ -1612,16 +1634,15 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags) ...@@ -1612,16 +1634,15 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
mutex_init(&dev_priv->rps.hw_lock); mutex_init(&dev_priv->rps.hw_lock);
mutex_init(&dev_priv->modeset_restore_lock); mutex_init(&dev_priv->modeset_restore_lock);
if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) dev_priv->num_plane = 1;
dev_priv->num_pipe = 3; if (IS_VALLEYVIEW(dev))
else if (IS_MOBILE(dev) || !IS_GEN2(dev)) dev_priv->num_plane = 2;
dev_priv->num_pipe = 2;
else
dev_priv->num_pipe = 1;
ret = drm_vblank_init(dev, dev_priv->num_pipe); if (INTEL_INFO(dev)->num_pipes) {
if (ret) ret = drm_vblank_init(dev, INTEL_INFO(dev)->num_pipes);
goto out_gem_unload; if (ret)
goto out_gem_unload;
}
/* Start out suspended */ /* Start out suspended */
dev_priv->mm.suspended = 1; dev_priv->mm.suspended = 1;
...@@ -1636,9 +1657,11 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags) ...@@ -1636,9 +1657,11 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
i915_setup_sysfs(dev); i915_setup_sysfs(dev);
/* Must be done after probing outputs */ if (INTEL_INFO(dev)->num_pipes) {
intel_opregion_init(dev); /* Must be done after probing outputs */
acpi_video_register(); intel_opregion_init(dev);
acpi_video_register();
}
if (IS_GEN5(dev)) if (IS_GEN5(dev))
intel_gpu_ips_init(dev_priv); intel_gpu_ips_init(dev_priv);
...@@ -1663,10 +1686,9 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags) ...@@ -1663,10 +1686,9 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
dev_priv->mm.gtt_mtrr = -1; dev_priv->mm.gtt_mtrr = -1;
} }
io_mapping_free(dev_priv->gtt.mappable); io_mapping_free(dev_priv->gtt.mappable);
dev_priv->gtt.gtt_remove(dev);
out_rmmap: out_rmmap:
pci_iounmap(dev->pdev, dev_priv->regs); pci_iounmap(dev->pdev, dev_priv->regs);
put_gmch:
dev_priv->gtt.gtt_remove(dev);
put_bridge: put_bridge:
pci_dev_put(dev_priv->bridge_dev); pci_dev_put(dev_priv->bridge_dev);
free_priv: free_priv:
......
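Note on the i915_dma.c hunks above: the register mapping is pulled in front of i915_gem_gtt_init() so that intel_early_sanitize_regs() runs before any other MMIO access, the error path is reordered to match, and the remaining hunks let the load path cope with hardware that reports no display pipes. The mapping itself relies on pci_iomap() honouring a maximum length, so only the register portion of the shared BAR is ioremapped (512 KiB before gen5, 2 MiB from gen5 on) and the GTT aperture can still be mapped write-combined later. A rough sketch of that idiom with made-up names (demo_map_regs, DEMO_*), purely for illustration:

#include <linux/pci.h>

/* Map only the register part of a BAR that also contains an aperture
 * we want to ioremap_wc() separately later. */
#define DEMO_MMIO_BAR	0
#define DEMO_MMIO_SIZE	(512 * 1024)

static void __iomem *demo_map_regs(struct pci_dev *pdev)
{
	/* A non-zero maxlen caps the mapping; 0 would map the whole BAR. */
	void __iomem *regs = pci_iomap(pdev, DEMO_MMIO_BAR, DEMO_MMIO_SIZE);

	if (!regs)
		return NULL;

	/* Registers are then accessed with ioread32()/iowrite32(). */
	return regs;
}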
...@@ -152,6 +152,13 @@ create_hw_context(struct drm_device *dev, ...@@ -152,6 +152,13 @@ create_hw_context(struct drm_device *dev,
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
} }
if (INTEL_INFO(dev)->gen >= 7) {
ret = i915_gem_object_set_cache_level(ctx->obj,
I915_CACHE_LLC_MLC);
if (ret)
goto err_out;
}
/* The ring associated with the context object is handled by the normal /* The ring associated with the context object is handled by the normal
* object tracking code. We give an initial ring value simple to pass an * object tracking code. We give an initial ring value simple to pass an
* assertion in the context switch code. * assertion in the context switch code.
......
...@@ -62,7 +62,7 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme ...@@ -62,7 +62,7 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
src = obj->pages->sgl; src = obj->pages->sgl;
dst = st->sgl; dst = st->sgl;
for (i = 0; i < obj->pages->nents; i++) { for (i = 0; i < obj->pages->nents; i++) {
sg_set_page(dst, sg_page(src), PAGE_SIZE, 0); sg_set_page(dst, sg_page(src), src->length, 0);
dst = sg_next(dst); dst = sg_next(dst);
src = sg_next(src); src = sg_next(src);
} }
...@@ -105,7 +105,7 @@ static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf) ...@@ -105,7 +105,7 @@ static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf)
{ {
struct drm_i915_gem_object *obj = dma_buf->priv; struct drm_i915_gem_object *obj = dma_buf->priv;
struct drm_device *dev = obj->base.dev; struct drm_device *dev = obj->base.dev;
struct scatterlist *sg; struct sg_page_iter sg_iter;
struct page **pages; struct page **pages;
int ret, i; int ret, i;
...@@ -124,14 +124,15 @@ static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf) ...@@ -124,14 +124,15 @@ static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf)
ret = -ENOMEM; ret = -ENOMEM;
pages = drm_malloc_ab(obj->pages->nents, sizeof(struct page *)); pages = drm_malloc_ab(obj->base.size >> PAGE_SHIFT, sizeof(*pages));
if (pages == NULL) if (pages == NULL)
goto error; goto error;
for_each_sg(obj->pages->sgl, sg, obj->pages->nents, i) i = 0;
pages[i] = sg_page(sg); for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0)
pages[i++] = sg_page_iter_page(&sg_iter);
obj->dma_buf_vmapping = vmap(pages, obj->pages->nents, 0, PAGE_KERNEL); obj->dma_buf_vmapping = vmap(pages, i, 0, PAGE_KERNEL);
drm_free_large(pages); drm_free_large(pages);
if (!obj->dma_buf_vmapping) if (!obj->dma_buf_vmapping)
...@@ -271,7 +272,6 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev, ...@@ -271,7 +272,6 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
* refcount on gem itself instead of f_count of dmabuf. * refcount on gem itself instead of f_count of dmabuf.
*/ */
drm_gem_object_reference(&obj->base); drm_gem_object_reference(&obj->base);
dma_buf_put(dma_buf);
return &obj->base; return &obj->base;
} }
} }
...@@ -281,6 +281,8 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev, ...@@ -281,6 +281,8 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
if (IS_ERR(attach)) if (IS_ERR(attach))
return ERR_CAST(attach); return ERR_CAST(attach);
get_dma_buf(dma_buf);
obj = i915_gem_object_alloc(dev); obj = i915_gem_object_alloc(dev);
if (obj == NULL) { if (obj == NULL) {
ret = -ENOMEM; ret = -ENOMEM;
...@@ -300,5 +302,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev, ...@@ -300,5 +302,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
fail_detach: fail_detach:
dma_buf_detach(dma_buf, attach); dma_buf_detach(dma_buf, attach);
dma_buf_put(dma_buf);
return ERR_PTR(ret); return ERR_PTR(ret);
} }
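Note on the i915_gem_dmabuf.c hunks above: the export/vmap path stops assuming one page per scatterlist entry (sg_set_page() now uses the entry's real length and the page array is sized from the object size) and walks the table with the sg_page_iter helpers, which advance one page at a time even when entries have been coalesced. The import path additionally takes its own reference on the dma_buf with get_dma_buf() and drops it with dma_buf_put() only on the failure path, instead of consuming the caller's reference. A small sketch of the page-iterator pattern, independent of i915 (collect_pages() is an illustrative name, not a kernel API):

#include <linux/mm.h>
#include <linux/scatterlist.h>

/* Collect every backing page of a possibly-coalesced sg table,
 * one page per slot, the way the vmap path above builds its array. */
static int collect_pages(struct sg_table *st, struct page **pages, int max)
{
	struct sg_page_iter sg_iter;
	int i = 0;

	for_each_sg_page(st->sgl, &sg_iter, st->nents, 0) {
		if (i == max)
			return -ENOSPC;	/* caller sized the array too small */
		pages[i++] = sg_page_iter_page(&sg_iter);
	}

	return i;	/* number of pages written into the array */
}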
...@@ -312,6 +312,71 @@ i915_gem_object_create_stolen(struct drm_device *dev, u32 size) ...@@ -312,6 +312,71 @@ i915_gem_object_create_stolen(struct drm_device *dev, u32 size)
return NULL; return NULL;
} }
struct drm_i915_gem_object *
i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
u32 stolen_offset,
u32 gtt_offset,
u32 size)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_gem_object *obj;
struct drm_mm_node *stolen;
if (dev_priv->mm.stolen_base == 0)
return NULL;
DRM_DEBUG_KMS("creating preallocated stolen object: stolen_offset=%x, gtt_offset=%x, size=%x\n",
stolen_offset, gtt_offset, size);
/* KISS and expect everything to be page-aligned */
BUG_ON(stolen_offset & 4095);
BUG_ON(gtt_offset & 4095);
BUG_ON(size & 4095);
if (WARN_ON(size == 0))
return NULL;
stolen = drm_mm_create_block(&dev_priv->mm.stolen,
stolen_offset, size,
false);
if (stolen == NULL) {
DRM_DEBUG_KMS("failed to allocate stolen space\n");
return NULL;
}
obj = _i915_gem_object_create_stolen(dev, stolen);
if (obj == NULL) {
DRM_DEBUG_KMS("failed to allocate stolen object\n");
drm_mm_put_block(stolen);
return NULL;
}
/* To simplify the initialisation sequence between KMS and GTT,
* we allow construction of the stolen object prior to
* setting up the GTT space. The actual reservation will occur
* later.
*/
if (drm_mm_initialized(&dev_priv->mm.gtt_space)) {
obj->gtt_space = drm_mm_create_block(&dev_priv->mm.gtt_space,
gtt_offset, size,
false);
if (obj->gtt_space == NULL) {
DRM_DEBUG_KMS("failed to allocate stolen GTT space\n");
drm_gem_object_unreference(&obj->base);
return NULL;
}
} else
obj->gtt_space = I915_GTT_RESERVED;
obj->gtt_offset = gtt_offset;
obj->has_global_gtt_mapping = 1;
list_add_tail(&obj->gtt_list, &dev_priv->mm.bound_list);
list_add_tail(&obj->mm_list, &dev_priv->mm.inactive_list);
return obj;
}
void void
i915_gem_object_release_stolen(struct drm_i915_gem_object *obj) i915_gem_object_release_stolen(struct drm_i915_gem_object *obj)
{ {
......
...@@ -209,7 +209,8 @@ static void i915_save_display(struct drm_device *dev) ...@@ -209,7 +209,8 @@ static void i915_save_display(struct drm_device *dev)
dev_priv->regfile.saveBLC_PWM_CTL2 = I915_READ(BLC_PWM_PCH_CTL2); dev_priv->regfile.saveBLC_PWM_CTL2 = I915_READ(BLC_PWM_PCH_CTL2);
dev_priv->regfile.saveBLC_CPU_PWM_CTL = I915_READ(BLC_PWM_CPU_CTL); dev_priv->regfile.saveBLC_CPU_PWM_CTL = I915_READ(BLC_PWM_CPU_CTL);
dev_priv->regfile.saveBLC_CPU_PWM_CTL2 = I915_READ(BLC_PWM_CPU_CTL2); dev_priv->regfile.saveBLC_CPU_PWM_CTL2 = I915_READ(BLC_PWM_CPU_CTL2);
dev_priv->regfile.saveLVDS = I915_READ(PCH_LVDS); if (HAS_PCH_IBX(dev) || HAS_PCH_CPT(dev))
dev_priv->regfile.saveLVDS = I915_READ(PCH_LVDS);
} else { } else {
dev_priv->regfile.savePP_CONTROL = I915_READ(PP_CONTROL); dev_priv->regfile.savePP_CONTROL = I915_READ(PP_CONTROL);
dev_priv->regfile.savePFIT_PGM_RATIOS = I915_READ(PFIT_PGM_RATIOS); dev_priv->regfile.savePFIT_PGM_RATIOS = I915_READ(PFIT_PGM_RATIOS);
...@@ -255,6 +256,7 @@ static void i915_save_display(struct drm_device *dev) ...@@ -255,6 +256,7 @@ static void i915_save_display(struct drm_device *dev)
static void i915_restore_display(struct drm_device *dev) static void i915_restore_display(struct drm_device *dev)
{ {
struct drm_i915_private *dev_priv = dev->dev_private; struct drm_i915_private *dev_priv = dev->dev_private;
u32 mask = 0xffffffff;
/* Display arbitration */ /* Display arbitration */
if (INTEL_INFO(dev)->gen <= 4) if (INTEL_INFO(dev)->gen <= 4)
...@@ -267,10 +269,13 @@ static void i915_restore_display(struct drm_device *dev) ...@@ -267,10 +269,13 @@ static void i915_restore_display(struct drm_device *dev)
if (INTEL_INFO(dev)->gen >= 4 && !HAS_PCH_SPLIT(dev)) if (INTEL_INFO(dev)->gen >= 4 && !HAS_PCH_SPLIT(dev))
I915_WRITE(BLC_PWM_CTL2, dev_priv->regfile.saveBLC_PWM_CTL2); I915_WRITE(BLC_PWM_CTL2, dev_priv->regfile.saveBLC_PWM_CTL2);
if (HAS_PCH_SPLIT(dev)) { if (drm_core_check_feature(dev, DRIVER_MODESET))
I915_WRITE(PCH_LVDS, dev_priv->regfile.saveLVDS); mask = ~LVDS_PORT_EN;
} else if (IS_MOBILE(dev) && !IS_I830(dev))
I915_WRITE(LVDS, dev_priv->regfile.saveLVDS); if (HAS_PCH_IBX(dev) || HAS_PCH_CPT(dev))
I915_WRITE(PCH_LVDS, dev_priv->regfile.saveLVDS & mask);
else if (INTEL_INFO(dev)->gen <= 4 && IS_MOBILE(dev) && !IS_I830(dev))
I915_WRITE(LVDS, dev_priv->regfile.saveLVDS & mask);
if (!IS_I830(dev) && !IS_845G(dev) && !HAS_PCH_SPLIT(dev)) if (!IS_I830(dev) && !IS_845G(dev) && !HAS_PCH_SPLIT(dev))
I915_WRITE(PFIT_CONTROL, dev_priv->regfile.savePFIT_CONTROL); I915_WRITE(PFIT_CONTROL, dev_priv->regfile.savePFIT_CONTROL);
......
...@@ -127,7 +127,9 @@ struct bdb_general_features { ...@@ -127,7 +127,9 @@ struct bdb_general_features {
/* bits 3 */ /* bits 3 */
u8 disable_smooth_vision:1; u8 disable_smooth_vision:1;
u8 single_dvi:1; u8 single_dvi:1;
u8 rsvd9:6; /* finish byte */ u8 rsvd9:1;
u8 fdi_rx_polarity_inverted:1;
u8 rsvd10:4; /* finish byte */
/* bits 4 */ /* bits 4 */
u8 legacy_monitor_detect; u8 legacy_monitor_detect;
......