Commit 5c37daf5 authored by Dave Airlie

Merge tag 'drm-intel-next-2017-01-09' of git://anongit.freedesktop.org/git/drm-intel into drm-next

More 4.11 stuff, holidays edition (i.e. not much):

- docs and cleanups for shared dpll code (Ander)
- some kerneldoc work (Chris)
- fbc by default on gen9+ too, yeah! (Paulo)
- fixes, polish and other small things all over gem code (Chris)
- and a few small things on top

Plus a backmerge, because Dave was enjoying time off too.

* tag 'drm-intel-next-2017-01-09' of git://anongit.freedesktop.org/git/drm-intel: (275 commits)
  drm/i915: Update DRIVER_DATE to 20170109
  drm/i915: Drain freed objects for mmap space exhaustion
  drm/i915: Purge loose pages if we run out of DMA remap space
  drm/i915: Fix phys pwrite for struct_mutex-less operation
  drm/i915: Simplify testing for am-I-the-kernel-context?
  drm/i915: Use range_overflows()
  drm/i915: Use fixed-sized types for stolen
  drm/i915: Use phys_addr_t for the address of stolen memory
  drm/i915: Consolidate checks for memcpy-from-wc support
  drm/i915: Only skip requests once a context is banned
  drm/i915: Move a few more utility macros to i915_utils.h
  drm/i915: Clear ret before unbinding in i915_gem_evict_something()
  drm/i915/guc: Exclude the upper end of the Global GTT for the GuC
  drm/i915: Move a few utility macros into a separate header
  drm/i915/execlists: Reorder execlists register enabling
  drm/i915: Assert that we do create the deferred context
  drm/i915: Assert all timeline requests are gone before fini
  drm/i915: Revoke fenced GTT mmapings across GPU reset
  drm/i915: enable FBC on gen9+ too
  drm/i915: actually drive the BDW reserved IDs
  ...
......@@ -213,6 +213,18 @@ Video BIOS Table (VBT)
.. kernel-doc:: drivers/gpu/drm/i915/intel_vbt_defs.h
   :internal:

Display PLLs
------------

.. kernel-doc:: drivers/gpu/drm/i915/intel_dpll_mgr.c
   :doc: Display PLLs

.. kernel-doc:: drivers/gpu/drm/i915/intel_dpll_mgr.c
   :internal:

.. kernel-doc:: drivers/gpu/drm/i915/intel_dpll_mgr.h
   :internal:

Memory Management and Command Submission
========================================
......@@ -356,4 +368,95 @@ switch_mm
.. kernel-doc:: drivers/gpu/drm/i915/i915_trace.h
   :doc: switch_mm tracepoint

Perf
====

Overview
--------

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :doc: i915 Perf Overview

Comparison with Core Perf
-------------------------

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :doc: i915 Perf History and Comparison with Core Perf

i915 Driver Entry Points
------------------------

This section covers the entrypoints exported outside of i915_perf.c to
integrate with drm/i915 and to handle the `DRM_I915_PERF_OPEN` ioctl.
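
As a rough illustration only (this sketch is not part of the series'
documentation), userspace reaches this entry point through an open DRM file
descriptor; the property, flag and format names below are the ones added by
the i915-perf uapi, while the metrics-set id and OA exponent are
hardware-specific placeholders::

   /*
    * Hypothetical sketch: open an i915 perf (OA) stream on a DRM fd.
    * Property/flag names follow the uapi added by the i915-perf series;
    * the metrics-set id, OA format and exponent are placeholders and are
    * hardware specific.
    */
   #include <stdint.h>
   #include <sys/ioctl.h>
   #include <drm/i915_drm.h>

   static int open_oa_stream(int drm_fd)
   {
           uint64_t properties[] = {
                   DRM_I915_PERF_PROP_SAMPLE_OA, 1,
                   DRM_I915_PERF_PROP_OA_METRICS_SET, 1,  /* placeholder set id */
                   DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A45_B8_C8,
                   DRM_I915_PERF_PROP_OA_EXPONENT, 16,    /* sampling period */
           };
           struct drm_i915_perf_open_param param = {
                   .flags = I915_PERF_FLAG_FD_CLOEXEC,
                   .num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
                   .properties_ptr = (uint64_t)(uintptr_t)properties,
           };

           /* Returns a new stream fd on success, a negative value on error. */
           return ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
   }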

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_perf_init

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_perf_fini

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_perf_register

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_perf_unregister

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_perf_open_ioctl

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_perf_release

i915 Perf Stream
----------------

This section covers the stream-semantics-agnostic structures and functions
for representing an i915 perf stream FD and associated file operations.
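
Purely as a hedged illustration of how the resulting stream FD is intended to
be driven from userspace (the ioctl and record-header names come from the
i915-perf uapi; the buffer handling below is a sketch, not a reference
implementation)::

   /*
    * Hypothetical sketch: drive an already-open i915 perf stream fd.
    * I915_PERF_IOCTL_ENABLE/DISABLE and struct drm_i915_perf_record_header
    * come from the i915-perf uapi; the buffer size here is arbitrary.
    */
   #include <stdint.h>
   #include <unistd.h>
   #include <sys/ioctl.h>
   #include <drm/i915_drm.h>

   static void drain_stream(int stream_fd)
   {
           uint8_t buf[4096];
           ssize_t len, offset;

           ioctl(stream_fd, I915_PERF_IOCTL_ENABLE, 0);

           /* read() only ever returns whole records, each with a header. */
           len = read(stream_fd, buf, sizeof(buf));
           for (offset = 0; offset < len; ) {
                   const struct drm_i915_perf_record_header *header =
                           (const void *)(buf + offset);

                   if (!header->size)      /* defensive: malformed record */
                           break;

                   if (header->type == DRM_I915_PERF_RECORD_SAMPLE) {
                           /* raw OA report follows the header when SAMPLE_OA was requested */
                   }

                   offset += header->size;
           }

           ioctl(stream_fd, I915_PERF_IOCTL_DISABLE, 0);
   }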

.. kernel-doc:: drivers/gpu/drm/i915/i915_drv.h
   :functions: i915_perf_stream

.. kernel-doc:: drivers/gpu/drm/i915/i915_drv.h
   :functions: i915_perf_stream_ops

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: read_properties_unlocked

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_perf_open_ioctl_locked

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_perf_destroy_locked

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_perf_read

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_perf_ioctl

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_perf_enable_locked

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_perf_disable_locked

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_perf_poll

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_perf_poll_locked

i915 Perf Observation Architecture Stream
-----------------------------------------

.. kernel-doc:: drivers/gpu/drm/i915/i915_drv.h
   :functions: i915_oa_ops

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_oa_stream_init

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_oa_read

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_oa_stream_enable

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_oa_stream_disable

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_oa_wait_unlocked

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :functions: i915_oa_poll_wait

All i915 Perf Internals
-----------------------

This section simply includes all currently documented i915 perf internals, in
no particular order; it may cover more minor utilities and platform-specific
details than the higher-level sections above.

.. kernel-doc:: drivers/gpu/drm/i915/i915_perf.c
   :internal:
.. WARNING: DOCPROC directive not supported: !Cdrivers/gpu/drm/i915/i915_irq.c
......@@ -1420,8 +1420,10 @@ int intel_gmch_probe(struct pci_dev *bridge_pdev, struct pci_dev *gpu_pdev,
}
EXPORT_SYMBOL(intel_gmch_probe);
void intel_gtt_get(u64 *gtt_total, size_t *stolen_size,
phys_addr_t *mappable_base, u64 *mappable_end)
void intel_gtt_get(u64 *gtt_total,
u32 *stolen_size,
phys_addr_t *mappable_base,
u64 *mappable_end)
{
*gtt_total = intel_private.gtt_total_entries << PAGE_SHIFT;
*stolen_size = intel_private.stolen_size;
......
......@@ -19,9 +19,12 @@ config DRM_I915_DEBUG
bool "Enable additional driver debugging"
depends on DRM_I915
select PREEMPT_COUNT
select I2C_CHARDEV
select DRM_DP_AUX_CHARDEV
select X86_MSR # used by igt/pm_rpm
select DRM_VGEM # used by igt/prime_vgem (dmabuf interop checks)
select DRM_DEBUG_MM if DRM=y
select DRM_I915_SW_FENCE_DEBUG_OBJECTS
default n
help
Choose this option to turn on extra driver debugging that may affect
......@@ -43,3 +46,15 @@ config DRM_I915_DEBUG_GEM
If in doubt, say "N".
config DRM_I915_SW_FENCE_DEBUG_OBJECTS
bool "Enable additional driver debugging for fence objects"
depends on DRM_I915
select DEBUG_OBJECTS
default n
help
Choose this option to turn on extra driver debugging that may affect
performance but will catch some internal issues.
Recommended for driver developers only.
If in doubt, say "N".
......@@ -24,7 +24,7 @@ i915-y := i915_drv.o \
intel_runtime_pm.o
i915-$(CONFIG_COMPAT) += i915_ioc32.o
i915-$(CONFIG_DEBUG_FS) += i915_debugfs.o
i915-$(CONFIG_DEBUG_FS) += i915_debugfs.o intel_pipe_crc.o
# GEM code
i915-y += i915_cmd_parser.o \
......@@ -55,7 +55,8 @@ i915-y += i915_cmd_parser.o \
intel_uncore.o
# general-purpose microcontroller (GuC) support
i915-y += intel_guc_loader.o \
i915-y += intel_uc.o \
intel_guc_loader.o \
i915_guc_submission.o
# autogenerated null render state
......@@ -117,6 +118,10 @@ i915-$(CONFIG_DRM_I915_CAPTURE_ERROR) += i915_gpu_error.o
# virtual gpu code
i915-y += i915_vgpu.o
# perf code
i915-y += i915_perf.o \
i915_oa_hsw.o
ifeq ($(CONFIG_DRM_I915_GVT),y)
i915-y += intel_gvt.o
include $(src)/gvt/Makefile
......
......@@ -73,12 +73,15 @@ static int alloc_gm(struct intel_vgpu *vgpu, bool high_gm)
mutex_lock(&dev_priv->drm.struct_mutex);
search_again:
ret = drm_mm_insert_node_in_range_generic(&dev_priv->ggtt.base.mm,
node, size, 4096, 0,
node, size, 4096,
I915_COLOR_UNEVICTABLE,
start, end, search_flag,
alloc_flag);
if (ret) {
ret = i915_gem_evict_something(&dev_priv->ggtt.base,
size, 4096, 0, start, end, 0);
size, 4096,
I915_COLOR_UNEVICTABLE,
start, end, 0);
if (ret == 0 && ++retried < 3)
goto search_again;
......
......@@ -1602,7 +1602,7 @@ static int perform_bb_shadow(struct parser_exec_state *s)
return -ENOMEM;
entry_obj->obj =
i915_gem_object_create(&(s->vgpu->gvt->dev_priv->drm),
i915_gem_object_create(s->vgpu->gvt->dev_priv,
roundup(bb_size, PAGE_SIZE));
if (IS_ERR(entry_obj->obj)) {
ret = PTR_ERR(entry_obj->obj);
......@@ -2665,14 +2665,13 @@ int intel_gvt_scan_and_shadow_workload(struct intel_vgpu_workload *workload)
static int shadow_indirect_ctx(struct intel_shadow_wa_ctx *wa_ctx)
{
struct drm_device *dev = &wa_ctx->workload->vgpu->gvt->dev_priv->drm;
int ctx_size = wa_ctx->indirect_ctx.size;
unsigned long guest_gma = wa_ctx->indirect_ctx.guest_gma;
struct drm_i915_gem_object *obj;
int ret = 0;
void *map;
obj = i915_gem_object_create(dev,
obj = i915_gem_object_create(wa_ctx->workload->vgpu->gvt->dev_priv,
roundup(ctx_size + CACHELINE_BYTES,
PAGE_SIZE));
if (IS_ERR(obj))
......
......@@ -2200,7 +2200,7 @@ static int init_generic_mmio_info(struct intel_gvt *gvt)
MMIO_DFH(0x1217c, D_ALL, F_CMD_ACCESS, NULL, NULL);
MMIO_F(0x2290, 8, 0, 0, 0, D_HSW_PLUS, NULL, NULL);
MMIO_D(OACONTROL, D_HSW);
MMIO_D(GEN7_OACONTROL, D_HSW);
MMIO_D(0x2b00, D_BDW_PLUS);
MMIO_D(0x2360, D_BDW_PLUS);
MMIO_F(0x5200, 32, 0, 0, 0, D_ALL, NULL, NULL);
......
......@@ -549,18 +549,10 @@ int intel_gvt_init_workload_scheduler(struct intel_gvt *gvt)
void intel_vgpu_clean_gvt_context(struct intel_vgpu *vgpu)
{
struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
atomic_notifier_chain_unregister(&vgpu->shadow_ctx->status_notifier,
&vgpu->shadow_ctx_notifier_block);
mutex_lock(&dev_priv->drm.struct_mutex);
/* a little hacky to mark as ctx closed */
vgpu->shadow_ctx->closed = true;
i915_gem_context_put(vgpu->shadow_ctx);
mutex_unlock(&dev_priv->drm.struct_mutex);
i915_gem_context_put_unlocked(vgpu->shadow_ctx);
}
int intel_vgpu_init_gvt_context(struct intel_vgpu *vgpu)
......
......@@ -86,6 +86,102 @@
* general bitmasking mechanism.
*/
/*
* A command that requires special handling by the command parser.
*/
struct drm_i915_cmd_descriptor {
/*
* Flags describing how the command parser processes the command.
*
* CMD_DESC_FIXED: The command has a fixed length if this is set,
* a length mask if not set
* CMD_DESC_SKIP: The command is allowed but does not follow the
* standard length encoding for the opcode range in
* which it falls
* CMD_DESC_REJECT: The command is never allowed
* CMD_DESC_REGISTER: The command should be checked against the
* register whitelist for the appropriate ring
* CMD_DESC_MASTER: The command is allowed if the submitting process
* is the DRM master
*/
u32 flags;
#define CMD_DESC_FIXED (1<<0)
#define CMD_DESC_SKIP (1<<1)
#define CMD_DESC_REJECT (1<<2)
#define CMD_DESC_REGISTER (1<<3)
#define CMD_DESC_BITMASK (1<<4)
#define CMD_DESC_MASTER (1<<5)
/*
* The command's unique identification bits and the bitmask to get them.
* This isn't strictly the opcode field as defined in the spec and may
* also include type, subtype, and/or subop fields.
*/
struct {
u32 value;
u32 mask;
} cmd;
/*
* The command's length. The command is either fixed length (i.e. does
* not include a length field) or has a length field mask. The flag
* CMD_DESC_FIXED indicates a fixed length. Otherwise, the command has
* a length mask. All command entries in a command table must include
* length information.
*/
union {
u32 fixed;
u32 mask;
} length;
/*
* Describes where to find a register address in the command to check
* against the ring's register whitelist. Only valid if flags has the
* CMD_DESC_REGISTER bit set.
*
* A non-zero step value implies that the command may access multiple
* registers in sequence (e.g. LRI), in that case step gives the
* distance in dwords between individual offset fields.
*/
struct {
u32 offset;
u32 mask;
u32 step;
} reg;
#define MAX_CMD_DESC_BITMASKS 3
/*
* Describes command checks where a particular dword is masked and
* compared against an expected value. If the command does not match
* the expected value, the parser rejects it. Only valid if flags has
* the CMD_DESC_BITMASK bit set. Only entries where mask is non-zero
* are valid.
*
* If the check specifies a non-zero condition_mask then the parser
* only performs the check when the bits specified by condition_mask
* are non-zero.
*/
struct {
u32 offset;
u32 mask;
u32 expected;
u32 condition_offset;
u32 condition_mask;
} bits[MAX_CMD_DESC_BITMASKS];
};
/*
* A table of commands requiring special handling by the command parser.
*
* Each engine has an array of tables. Each table consists of an array of
* command descriptors, which must be sorted with command opcodes in
* ascending order.
*/
struct drm_i915_cmd_table {
const struct drm_i915_cmd_descriptor *table;
int count;
};
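/*
 * Illustrative only, not part of this patch: a hand-rolled descriptor for a
 * hypothetical fixed-length command that writes one register, showing how the
 * fields above combine. The real tables in i915_cmd_parser.c are built with
 * helper macros and paired with per-engine register whitelists.
 */
static const struct drm_i915_cmd_descriptor example_desc = {
        .flags = CMD_DESC_FIXED | CMD_DESC_REGISTER,
        .cmd = { .value = 0x11000000, .mask = 0xff800000 }, /* placeholder opcode */
        .length = { .fixed = 3 },               /* opcode + register offset + value */
        .reg = { .offset = 1, .mask = 0x007ffffc }, /* dword 1 holds the register */
};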
#define STD_MI_OPCODE_SHIFT (32 - 9)
#define STD_3D_OPCODE_SHIFT (32 - 16)
#define STD_2D_OPCODE_SHIFT (32 - 10)
......@@ -450,7 +546,6 @@ static const struct drm_i915_reg_descriptor gen7_render_regs[] = {
REG64(PS_INVOCATION_COUNT),
REG64(PS_DEPTH_COUNT),
REG64_IDX(RING_TIMESTAMP, RENDER_RING_BASE),
REG32(OACONTROL), /* Only allowed for LRI and SRM. See below. */
REG64(MI_PREDICATE_SRC0),
REG64(MI_PREDICATE_SRC1),
REG32(GEN7_3DPRIM_END_OFFSET),
......@@ -559,7 +654,7 @@ static const struct drm_i915_reg_table hsw_blt_reg_tables[] = {
static u32 gen7_render_get_cmd_length_mask(u32 cmd_header)
{
u32 client = (cmd_header & INSTR_CLIENT_MASK) >> INSTR_CLIENT_SHIFT;
u32 client = cmd_header >> INSTR_CLIENT_SHIFT;
u32 subclient =
(cmd_header & INSTR_SUBCLIENT_MASK) >> INSTR_SUBCLIENT_SHIFT;
......@@ -578,7 +673,7 @@ static u32 gen7_render_get_cmd_length_mask(u32 cmd_header)
static u32 gen7_bsd_get_cmd_length_mask(u32 cmd_header)
{
u32 client = (cmd_header & INSTR_CLIENT_MASK) >> INSTR_CLIENT_SHIFT;
u32 client = cmd_header >> INSTR_CLIENT_SHIFT;
u32 subclient =
(cmd_header & INSTR_SUBCLIENT_MASK) >> INSTR_SUBCLIENT_SHIFT;
u32 op = (cmd_header & INSTR_26_TO_24_MASK) >> INSTR_26_TO_24_SHIFT;
......@@ -601,7 +696,7 @@ static u32 gen7_bsd_get_cmd_length_mask(u32 cmd_header)
static u32 gen7_blt_get_cmd_length_mask(u32 cmd_header)
{
u32 client = (cmd_header & INSTR_CLIENT_MASK) >> INSTR_CLIENT_SHIFT;
u32 client = cmd_header >> INSTR_CLIENT_SHIFT;
if (client == INSTR_MI_CLIENT)
return 0x3F;
......@@ -984,7 +1079,7 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
src = ERR_PTR(-ENODEV);
if (src_needs_clflush &&
i915_memcpy_from_wc((void *)(uintptr_t)batch_start_offset, NULL, 0)) {
i915_can_memcpy_from_wc(NULL, batch_start_offset, 0)) {
src = i915_gem_object_pin_map(src_obj, I915_MAP_WC);
if (!IS_ERR(src)) {
i915_memcpy_from_wc(dst,
......@@ -1036,32 +1131,10 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
return dst;
}
/**
* intel_engine_needs_cmd_parser() - should a given engine use software
* command parsing?
* @engine: the engine in question
*
* Only certain platforms require software batch buffer command parsing, and
* only when enabled via module parameter.
*
* Return: true if the engine requires software command parsing
*/
bool intel_engine_needs_cmd_parser(struct intel_engine_cs *engine)
{
if (!engine->needs_cmd_parser)
return false;
if (!USES_PPGTT(engine->i915))
return false;
return (i915.enable_cmd_parser == 1);
}
static bool check_cmd(const struct intel_engine_cs *engine,
const struct drm_i915_cmd_descriptor *desc,
const u32 *cmd, u32 length,
const bool is_master,
bool *oacontrol_set)
const bool is_master)
{
if (desc->flags & CMD_DESC_SKIP)
return true;
......@@ -1098,31 +1171,6 @@ static bool check_cmd(const struct intel_engine_cs *engine,
return false;
}
/*
* OACONTROL requires some special handling for
* writes. We want to make sure that any batch which
* enables OA also disables it before the end of the
* batch. The goal is to prevent one process from
* snooping on the perf data from another process. To do
* that, we need to check the value that will be written
* to the register. Hence, limit OACONTROL writes to
* only MI_LOAD_REGISTER_IMM commands.
*/
if (reg_addr == i915_mmio_reg_offset(OACONTROL)) {
if (desc->cmd.value == MI_LOAD_REGISTER_MEM) {
DRM_DEBUG_DRIVER("CMD: Rejected LRM to OACONTROL\n");
return false;
}
if (desc->cmd.value == MI_LOAD_REGISTER_REG) {
DRM_DEBUG_DRIVER("CMD: Rejected LRR to OACONTROL\n");
return false;
}
if (desc->cmd.value == MI_LOAD_REGISTER_IMM(1))
*oacontrol_set = (cmd[offset + 1] != 0);
}
/*
* Check the value written to the register against the
* allowed mask/value pair given in the whitelist entry.
......@@ -1214,7 +1262,6 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
u32 *cmd, *batch_end;
struct drm_i915_cmd_descriptor default_desc = noop_desc;
const struct drm_i915_cmd_descriptor *desc = &default_desc;
bool oacontrol_set = false; /* OACONTROL tracking. See check_cmd() */
bool needs_clflush_after = false;
int ret = 0;
......@@ -1270,20 +1317,14 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
break;
}
if (!check_cmd(engine, desc, cmd, length, is_master,
&oacontrol_set)) {
ret = -EINVAL;
if (!check_cmd(engine, desc, cmd, length, is_master)) {
ret = -EACCES;
break;
}
cmd += length;
}
if (oacontrol_set) {
DRM_DEBUG_DRIVER("CMD: batch set OACONTROL but did not clear it\n");
ret = -EINVAL;
}
if (cmd >= batch_end) {
DRM_DEBUG_DRIVER("CMD: Got to the end of the buffer w/o a BBE cmd!\n");
ret = -EINVAL;
......@@ -1313,7 +1354,7 @@ int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv)
/* If the command parser is not enabled, report 0 - unsupported */
for_each_engine(engine, dev_priv, id) {
if (intel_engine_needs_cmd_parser(engine)) {
if (engine->needs_cmd_parser) {
active = true;
break;
}
......@@ -1333,6 +1374,11 @@ int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv)
* 5. GPGPU dispatch compute indirect registers.
* 6. TIMESTAMP register and Haswell CS GPR registers
* 7. Allow MI_LOAD_REGISTER_REG between whitelisted registers.
* 8. Don't report cmd_check() failures as EINVAL errors to userspace;
* rely on the HW to NOOP disallowed commands as it would without
* the parser enabled.
* 9. Don't whitelist or handle oacontrol specially, as ownership
* for oacontrol state is moving to i915-perf.
*/
return 7;
return 9;
}
This diff is collapsed.
......@@ -142,9 +142,8 @@ static enum intel_pch intel_virt_detect_pch(struct drm_i915_private *dev_priv)
return ret;
}
static void intel_detect_pch(struct drm_device *dev)
static void intel_detect_pch(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct pci_dev *pch = NULL;
/* In all current cases, num_pipes is equivalent to the PCH_NOP setting
......@@ -361,10 +360,8 @@ static int i915_getparam(struct drm_device *dev, void *data,
return 0;
}
static int i915_get_bridge_dev(struct drm_device *dev)
static int i915_get_bridge_dev(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
dev_priv->bridge_dev = pci_get_bus_and_slot(0, PCI_DEVFN(0, 0));
if (!dev_priv->bridge_dev) {
DRM_ERROR("bridge device not found\n");
......@@ -375,9 +372,8 @@ static int i915_get_bridge_dev(struct drm_device *dev)
/* Allocate space for the MCH regs if needed, return nonzero on error */
static int
intel_alloc_mchbar_resource(struct drm_device *dev)
intel_alloc_mchbar_resource(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
int reg = INTEL_GEN(dev_priv) >= 4 ? MCHBAR_I965 : MCHBAR_I915;
u32 temp_lo, temp_hi = 0;
u64 mchbar_addr;
......@@ -421,9 +417,8 @@ intel_alloc_mchbar_resource(struct drm_device *dev)
/* Setup MCHBAR if possible, return true if we should disable it again */
static void
intel_setup_mchbar(struct drm_device *dev)
intel_setup_mchbar(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
int mchbar_reg = INTEL_GEN(dev_priv) >= 4 ? MCHBAR_I965 : MCHBAR_I915;
u32 temp;
bool enabled;
......@@ -445,7 +440,7 @@ intel_setup_mchbar(struct drm_device *dev)
if (enabled)
return;
if (intel_alloc_mchbar_resource(dev))
if (intel_alloc_mchbar_resource(dev_priv))
return;
dev_priv->mchbar_need_disable = true;
......@@ -461,9 +456,8 @@ intel_setup_mchbar(struct drm_device *dev)
}
static void
intel_teardown_mchbar(struct drm_device *dev)
intel_teardown_mchbar(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
int mchbar_reg = INTEL_GEN(dev_priv) >= 4 ? MCHBAR_I965 : MCHBAR_I915;
if (dev_priv->mchbar_need_disable) {
......@@ -493,9 +487,9 @@ intel_teardown_mchbar(struct drm_device *dev)
/* true = enable decode, false = disable decoder */
static unsigned int i915_vga_set_decode(void *cookie, bool state)
{
struct drm_device *dev = cookie;
struct drm_i915_private *dev_priv = cookie;
intel_modeset_vga_set_state(to_i915(dev), state);
intel_modeset_vga_set_state(dev_priv, state);
if (state)
return VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM |
VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM;
......@@ -503,6 +497,9 @@ static unsigned int i915_vga_set_decode(void *cookie, bool state)
return VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM;
}
static int i915_resume_switcheroo(struct drm_device *dev);
static int i915_suspend_switcheroo(struct drm_device *dev, pm_message_t state);
static void i915_switcheroo_set_state(struct pci_dev *pdev, enum vga_switcheroo_state state)
{
struct drm_device *dev = pci_get_drvdata(pdev);
......@@ -544,12 +541,11 @@ static const struct vga_switcheroo_client_ops i915_switcheroo_ops = {
static void i915_gem_fini(struct drm_i915_private *dev_priv)
{
mutex_lock(&dev_priv->drm.struct_mutex);
i915_gem_cleanup_engines(&dev_priv->drm);
i915_gem_context_fini(&dev_priv->drm);
i915_gem_cleanup_engines(dev_priv);
i915_gem_context_fini(dev_priv);
mutex_unlock(&dev_priv->drm.struct_mutex);
rcu_barrier();
flush_work(&dev_priv->mm.free_work);
i915_gem_drain_freed_objects(dev_priv);
WARN_ON(!list_empty(&dev_priv->context_list));
}
......@@ -574,7 +570,7 @@ static int i915_load_modeset_init(struct drm_device *dev)
* then we do not take part in VGA arbitration and the
* vga_client_register() fails with -ENODEV.
*/
ret = vga_client_register(pdev, dev, NULL, i915_vga_set_decode);
ret = vga_client_register(pdev, dev_priv, NULL, i915_vga_set_decode);
if (ret && ret != -ENODEV)
goto out;
......@@ -595,7 +591,7 @@ static int i915_load_modeset_init(struct drm_device *dev)
if (ret)
goto cleanup_csr;
intel_setup_gmbus(dev);
intel_setup_gmbus(dev_priv);
/* Important: The output setup functions called by modeset_init need
* working irqs for e.g. gmbus and dp aux transfers. */
......@@ -603,9 +599,9 @@ static int i915_load_modeset_init(struct drm_device *dev)
if (ret)
goto cleanup_irq;
intel_guc_init(dev);
intel_guc_init(dev_priv);
ret = i915_gem_init(dev);
ret = i915_gem_init(dev_priv);
if (ret)
goto cleanup_irq;
......@@ -626,13 +622,13 @@ static int i915_load_modeset_init(struct drm_device *dev)
return 0;
cleanup_gem:
if (i915_gem_suspend(dev))
if (i915_gem_suspend(dev_priv))
DRM_ERROR("failed to idle hardware; continuing to unload!\n");
i915_gem_fini(dev_priv);
cleanup_irq:
intel_guc_fini(dev);
intel_guc_fini(dev_priv);
drm_irq_uninstall(dev);
intel_teardown_gmbus(dev);
intel_teardown_gmbus(dev_priv);
cleanup_csr:
intel_csr_ucode_fini(dev_priv);
intel_power_domains_fini(dev_priv);
......@@ -643,7 +639,6 @@ static int i915_load_modeset_init(struct drm_device *dev)
return ret;
}
#if IS_ENABLED(CONFIG_FB)
static int i915_kick_out_firmware_fb(struct drm_i915_private *dev_priv)
{
struct apertures_struct *ap;
......@@ -668,12 +663,6 @@ static int i915_kick_out_firmware_fb(struct drm_i915_private *dev_priv)
return ret;
}
#else
static int i915_kick_out_firmware_fb(struct drm_i915_private *dev_priv)
{
return 0;
}
#endif
#if !defined(CONFIG_VGA_CONSOLE)
static int i915_kick_out_vgacon(struct drm_i915_private *dev_priv)
......@@ -811,12 +800,15 @@ static int i915_driver_init_early(struct drm_i915_private *dev_priv,
spin_lock_init(&dev_priv->uncore.lock);
spin_lock_init(&dev_priv->mm.object_stat_lock);
spin_lock_init(&dev_priv->mmio_flip_lock);
spin_lock_init(&dev_priv->wm.dsparb_lock);
mutex_init(&dev_priv->sb_lock);
mutex_init(&dev_priv->modeset_restore_lock);
mutex_init(&dev_priv->av_mutex);
mutex_init(&dev_priv->wm.wm_mutex);
mutex_init(&dev_priv->pps_mutex);
intel_uc_init_early(dev_priv);
i915_memcpy_init_early(dev_priv);
ret = i915_workqueues_init(dev_priv);
......@@ -828,9 +820,9 @@ static int i915_driver_init_early(struct drm_i915_private *dev_priv,
goto err_workqueues;
/* This must be called before any calls to HAS_PCH_* */
intel_detect_pch(&dev_priv->drm);
intel_detect_pch(dev_priv);
intel_pm_setup(&dev_priv->drm);
intel_pm_setup(dev_priv);
intel_init_dpio(dev_priv);
intel_power_domains_init(dev_priv);
intel_irq_init(dev_priv);
......@@ -838,7 +830,7 @@ static int i915_driver_init_early(struct drm_i915_private *dev_priv,
intel_init_display_hooks(dev_priv);
intel_init_clock_gating_hooks(dev_priv);
intel_init_audio_hooks(dev_priv);
ret = i915_gem_load_init(&dev_priv->drm);
ret = i915_gem_load_init(dev_priv);
if (ret < 0)
goto err_gvt;
......@@ -848,6 +840,8 @@ static int i915_driver_init_early(struct drm_i915_private *dev_priv,
intel_detect_preproduction_hw(dev_priv);
i915_perf_init(dev_priv);
return 0;
err_gvt:
......@@ -863,13 +857,13 @@ static int i915_driver_init_early(struct drm_i915_private *dev_priv,
*/
static void i915_driver_cleanup_early(struct drm_i915_private *dev_priv)
{
i915_gem_load_cleanup(&dev_priv->drm);
i915_perf_fini(dev_priv);
i915_gem_load_cleanup(dev_priv);
i915_workqueues_cleanup(dev_priv);
}
static int i915_mmio_setup(struct drm_device *dev)
static int i915_mmio_setup(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct pci_dev *pdev = dev_priv->drm.pdev;
int mmio_bar;
int mmio_size;
......@@ -895,17 +889,16 @@ static int i915_mmio_setup(struct drm_device *dev)
}
/* Try to make sure MCHBAR is enabled before poking at it */
intel_setup_mchbar(dev);
intel_setup_mchbar(dev_priv);
return 0;
}
static void i915_mmio_cleanup(struct drm_device *dev)
static void i915_mmio_cleanup(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct pci_dev *pdev = dev_priv->drm.pdev;
intel_teardown_mchbar(dev);
intel_teardown_mchbar(dev_priv);
pci_iounmap(pdev, dev_priv->regs);
}
......@@ -920,16 +913,15 @@ static void i915_mmio_cleanup(struct drm_device *dev)
*/
static int i915_driver_init_mmio(struct drm_i915_private *dev_priv)
{
struct drm_device *dev = &dev_priv->drm;
int ret;
if (i915_inject_load_failure())
return -ENODEV;
if (i915_get_bridge_dev(dev))
if (i915_get_bridge_dev(dev_priv))
return -EIO;
ret = i915_mmio_setup(dev);
ret = i915_mmio_setup(dev_priv);
if (ret < 0)
goto put_bridge;
......@@ -949,10 +941,8 @@ static int i915_driver_init_mmio(struct drm_i915_private *dev_priv)
*/
static void i915_driver_cleanup_mmio(struct drm_i915_private *dev_priv)
{
struct drm_device *dev = &dev_priv->drm;
intel_uncore_fini(dev_priv);
i915_mmio_cleanup(dev);
i915_mmio_cleanup(dev_priv);
pci_dev_put(dev_priv->bridge_dev);
}
......@@ -1043,7 +1033,7 @@ static int i915_driver_init_hw(struct drm_i915_private *dev_priv)
* behaviour if any general state is accessed within a page above 4GB,
* which also needs to be handled carefully.
*/
if (IS_BROADWATER(dev_priv) || IS_CRESTLINE(dev_priv)) {
if (IS_I965G(dev_priv) || IS_I965GM(dev_priv)) {
ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
if (ret) {
......@@ -1126,6 +1116,9 @@ static void i915_driver_register(struct drm_i915_private *dev_priv)
i915_debugfs_register(dev_priv);
i915_guc_register(dev_priv);
i915_setup_sysfs(dev_priv);
/* Depends on sysfs having been initialized */
i915_perf_register(dev_priv);
} else
DRM_ERROR("Failed to register driver for userspace access!\n");
......@@ -1162,6 +1155,8 @@ static void i915_driver_unregister(struct drm_i915_private *dev_priv)
acpi_video_unregister();
intel_opregion_unregister(dev_priv);
i915_perf_unregister(dev_priv);
i915_teardown_sysfs(dev_priv);
i915_guc_unregister(dev_priv);
i915_debugfs_unregister(dev_priv);
......@@ -1194,8 +1189,7 @@ int i915_driver_load(struct pci_dev *pdev, const struct pci_device_id *ent)
if (dev_priv)
ret = drm_dev_init(&dev_priv->drm, &driver, &pdev->dev);
if (ret) {
dev_printk(KERN_ERR, &pdev->dev,
"[" DRM_NAME ":%s] allocation failed\n", __func__);
DRM_DEV_ERROR(&pdev->dev, "allocation failed\n");
kfree(dev_priv);
return ret;
}
......@@ -1243,6 +1237,8 @@ int i915_driver_load(struct pci_dev *pdev, const struct pci_device_id *ent)
intel_runtime_pm_enable(dev_priv);
dev_priv->ipc_enabled = false;
/* Everything is in place, we can now relax! */
DRM_INFO("Initialized %s %d.%d.%d %s for %s on minor %d\n",
driver.name, driver.major, driver.minor, driver.patchlevel,
......@@ -1280,7 +1276,7 @@ void i915_driver_unload(struct drm_device *dev)
intel_fbdev_fini(dev);
if (i915_gem_suspend(dev))
if (i915_gem_suspend(dev_priv))
DRM_ERROR("failed to idle hardware; continuing to unload!\n");
intel_display_power_get(dev_priv, POWER_DOMAIN_INIT);
......@@ -1312,12 +1308,12 @@ void i915_driver_unload(struct drm_device *dev)
/* Free error state after interrupts are fully disabled. */
cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
i915_destroy_error_state(dev);
i915_destroy_error_state(dev_priv);
/* Flush any outstanding unpin_work. */
drain_workqueue(dev_priv->wq);
intel_guc_fini(dev);
intel_guc_fini(dev_priv);
i915_gem_fini(dev_priv);
intel_fbc_cleanup_cfb(dev_priv);
......@@ -1422,14 +1418,14 @@ static int i915_drm_suspend(struct drm_device *dev)
pci_save_state(pdev);
error = i915_gem_suspend(dev);
error = i915_gem_suspend(dev_priv);
if (error) {
dev_err(&pdev->dev,
"GEM idle failed, resume might fail\n");
goto out;
}
intel_guc_suspend(dev);
intel_guc_suspend(dev_priv);
intel_display_suspend(dev);
......@@ -1444,7 +1440,7 @@ static int i915_drm_suspend(struct drm_device *dev)
i915_gem_suspend_gtt_mappings(dev_priv);
i915_save_state(dev);
i915_save_state(dev_priv);
opregion_target_state = suspend_to_idle(dev_priv) ? PCI_D1 : PCI_D3cold;
intel_opregion_notify_adapter(dev_priv, opregion_target_state);
......@@ -1527,7 +1523,7 @@ static int i915_drm_suspend_late(struct drm_device *dev, bool hibernation)
return ret;
}
int i915_suspend_switcheroo(struct drm_device *dev, pm_message_t state)
static int i915_suspend_switcheroo(struct drm_device *dev, pm_message_t state)
{
int error;
......@@ -1565,33 +1561,36 @@ static int i915_drm_resume(struct drm_device *dev)
intel_csr_ucode_resume(dev_priv);
i915_gem_resume(dev);
i915_gem_resume(dev_priv);
i915_restore_state(dev);
i915_restore_state(dev_priv);
intel_pps_unlock_regs_wa(dev_priv);
intel_opregion_setup(dev_priv);
intel_init_pch_refclk(dev);
drm_mode_config_reset(dev);
intel_init_pch_refclk(dev_priv);
/*
* Interrupts have to be enabled before any batches are run. If not the
* GPU will hang. i915_gem_init_hw() will initiate batches to
* update/restore the context.
*
* drm_mode_config_reset() needs AUX interrupts.
*
* Modeset enabling in intel_modeset_init_hw() also needs working
* interrupts.
*/
intel_runtime_pm_enable_interrupts(dev_priv);
drm_mode_config_reset(dev);
mutex_lock(&dev->struct_mutex);
if (i915_gem_init_hw(dev)) {
if (i915_gem_init_hw(dev_priv)) {
DRM_ERROR("failed to re-initialize GPU, declaring wedged!\n");
i915_gem_set_wedged(dev_priv);
}
mutex_unlock(&dev->struct_mutex);
intel_guc_resume(dev);
intel_guc_resume(dev_priv);
intel_modeset_init_hw(dev);
......@@ -1715,7 +1714,7 @@ static int i915_drm_resume_early(struct drm_device *dev)
return ret;
}
int i915_resume_switcheroo(struct drm_device *dev)
static int i915_resume_switcheroo(struct drm_device *dev)
{
int ret;
......@@ -1764,11 +1763,10 @@ static void enable_engines_irq(struct drm_i915_private *dev_priv)
*/
void i915_reset(struct drm_i915_private *dev_priv)
{
struct drm_device *dev = &dev_priv->drm;
struct i915_gpu_error *error = &dev_priv->gpu_error;
int ret;
lockdep_assert_held(&dev->struct_mutex);
lockdep_assert_held(&dev_priv->drm.struct_mutex);
if (!test_and_clear_bit(I915_RESET_IN_PROGRESS, &error->flags))
return;
......@@ -1778,6 +1776,7 @@ void i915_reset(struct drm_i915_private *dev_priv)
error->reset_count++;
pr_notice("drm/i915: Resetting chip after gpu hang\n");
i915_gem_reset_prepare(dev_priv);
disable_engines_irq(dev_priv);
ret = intel_gpu_reset(dev_priv, ALL_ENGINES);
......@@ -1791,7 +1790,7 @@ void i915_reset(struct drm_i915_private *dev_priv)
goto error;
}
i915_gem_reset(dev_priv);
i915_gem_reset_finish(dev_priv);
intel_overlay_reset(dev_priv);
/* Ok, now get things going again... */
......@@ -1808,12 +1807,14 @@ void i915_reset(struct drm_i915_private *dev_priv)
* was running at the time of the reset (i.e. we weren't VT
* switched away).
*/
ret = i915_gem_init_hw(dev);
ret = i915_gem_init_hw(dev_priv);
if (ret) {
DRM_ERROR("Failed hw init on reset %d\n", ret);
goto error;
}
i915_queue_hangcheck(dev_priv);
wakeup:
wake_up_bit(&error->flags, I915_RESET_IN_PROGRESS);
return;
......@@ -2320,7 +2321,7 @@ static int intel_runtime_suspend(struct device *kdev)
*/
i915_gem_runtime_suspend(dev_priv);
intel_guc_suspend(dev);
intel_guc_suspend(dev_priv);
intel_runtime_pm_disable_interrupts(dev_priv);
......@@ -2405,10 +2406,10 @@ static int intel_runtime_resume(struct device *kdev)
if (intel_uncore_unclaimed_mmio(dev_priv))
DRM_DEBUG_DRIVER("Unclaimed access during suspend, bios?\n");
intel_guc_resume(dev);
intel_guc_resume(dev_priv);
if (IS_GEN6(dev_priv))
intel_init_pch_refclk(dev);
intel_init_pch_refclk(dev_priv);
if (IS_BROXTON(dev_priv)) {
bxt_disable_dc9(dev_priv);
......@@ -2565,6 +2566,7 @@ static const struct drm_ioctl_desc i915_ioctls[] = {
DRM_IOCTL_DEF_DRV(I915_GEM_USERPTR, i915_gem_userptr_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_GETPARAM, i915_gem_context_getparam_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_SETPARAM, i915_gem_context_setparam_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_PERF_OPEN, i915_perf_open_ioctl, DRM_RENDER_ALLOW),
};
static struct drm_driver driver = {
......
This diff is collapsed.
......@@ -38,6 +38,7 @@
#include <linux/reservation.h>
#include <linux/shmem_fs.h>
#include <linux/slab.h>
#include <linux/stop_machine.h>
#include <linux/swap.h>
#include <linux/pci.h>
#include <linux/dma-buf.h>
......@@ -69,7 +70,8 @@ insert_mappable_node(struct i915_ggtt *ggtt,
{
memset(node, 0, sizeof(*node));
return drm_mm_insert_node_in_range_generic(&ggtt->base.mm, node,
size, 0, -1,
size, 0,
I915_COLOR_UNEVICTABLE,
0, ggtt->mappable_end,
DRM_MM_SEARCH_DEFAULT,
DRM_MM_CREATE_DEFAULT);
......@@ -595,52 +597,25 @@ i915_gem_phys_pwrite(struct drm_i915_gem_object *obj,
struct drm_i915_gem_pwrite *args,
struct drm_file *file)
{
struct drm_device *dev = obj->base.dev;
void *vaddr = obj->phys_handle->vaddr + args->offset;
char __user *user_data = u64_to_user_ptr(args->data_ptr);
int ret;
/* We manually control the domain here and pretend that it
* remains coherent i.e. in the GTT domain, like shmem_pwrite.
*/
lockdep_assert_held(&obj->base.dev->struct_mutex);
ret = i915_gem_object_wait(obj,
I915_WAIT_INTERRUPTIBLE |
I915_WAIT_LOCKED |
I915_WAIT_ALL,
MAX_SCHEDULE_TIMEOUT,
to_rps_client(file));
if (ret)
return ret;
intel_fb_obj_invalidate(obj, ORIGIN_CPU);
if (__copy_from_user_inatomic_nocache(vaddr, user_data, args->size)) {
unsigned long unwritten;
/* The physical object once assigned is fixed for the lifetime
* of the obj, so we can safely drop the lock and continue
* to access vaddr.
*/
mutex_unlock(&dev->struct_mutex);
unwritten = copy_from_user(vaddr, user_data, args->size);
mutex_lock(&dev->struct_mutex);
if (unwritten) {
ret = -EFAULT;
goto out;
}
}
if (copy_from_user(vaddr, user_data, args->size))
return -EFAULT;
drm_clflush_virt_range(vaddr, args->size);
i915_gem_chipset_flush(to_i915(dev));
i915_gem_chipset_flush(to_i915(obj->base.dev));
out:
intel_fb_obj_flush(obj, false, ORIGIN_CPU);
return ret;
return 0;
}
void *i915_gem_object_alloc(struct drm_device *dev)
void *i915_gem_object_alloc(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
return kmem_cache_zalloc(dev_priv->objects, GFP_KERNEL);
}
......@@ -652,7 +627,7 @@ void i915_gem_object_free(struct drm_i915_gem_object *obj)
static int
i915_gem_create(struct drm_file *file,
struct drm_device *dev,
struct drm_i915_private *dev_priv,
uint64_t size,
uint32_t *handle_p)
{
......@@ -665,7 +640,7 @@ i915_gem_create(struct drm_file *file,
return -EINVAL;
/* Allocate the new object */
obj = i915_gem_object_create(dev, size);
obj = i915_gem_object_create(dev_priv, size);
if (IS_ERR(obj))
return PTR_ERR(obj);
......@@ -687,7 +662,7 @@ i915_gem_dumb_create(struct drm_file *file,
/* have to work out size/pitch and return them */
args->pitch = ALIGN(args->width * DIV_ROUND_UP(args->bpp, 8), 64);
args->size = args->pitch * args->height;
return i915_gem_create(file, dev,
return i915_gem_create(file, to_i915(dev),
args->size, &args->handle);
}
......@@ -701,11 +676,12 @@ int
i915_gem_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *file)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_gem_create *args = data;
i915_gem_flush_free_objects(to_i915(dev));
i915_gem_flush_free_objects(dev_priv);
return i915_gem_create(file, dev,
return i915_gem_create(file, dev_priv,
args->size, &args->handle);
}
......@@ -1140,8 +1116,7 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
return -ENOENT;
/* Bounds check source. */
if (args->offset > obj->base.size ||
args->size > obj->base.size - args->offset) {
if (range_overflows_t(u64, args->offset, args->size, obj->base.size)) {
ret = -EINVAL;
goto out;
}
......@@ -1454,8 +1429,7 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
return -ENOENT;
/* Bounds check destination. */
if (args->offset > obj->base.size ||
args->size > obj->base.size - args->offset) {
if (range_overflows_t(u64, args->offset, args->size, obj->base.size)) {
ret = -EINVAL;
goto err;
}
......@@ -1517,7 +1491,7 @@ static void i915_gem_object_bump_inactive_ggtt(struct drm_i915_gem_object *obj)
list_for_each_entry(vma, &obj->vma_list, obj_link) {
if (!i915_vma_is_ggtt(vma))
continue;
break;
if (i915_vma_is_active(vma))
continue;
......@@ -2098,7 +2072,8 @@ u64 i915_gem_get_ggtt_alignment(struct drm_i915_private *dev_priv, u64 size,
* Minimum alignment is 4k (GTT page size), but might be greater
* if a fence register is needed for the object.
*/
if (INTEL_GEN(dev_priv) >= 4 || (!fenced && IS_G33(dev_priv)) ||
if (INTEL_GEN(dev_priv) >= 4 ||
(!fenced && (IS_G33(dev_priv) || IS_PINEVIEW(dev_priv))) ||
tiling_mode == I915_TILING_NONE)
return 4096;
......@@ -2115,23 +2090,21 @@ static int i915_gem_object_create_mmap_offset(struct drm_i915_gem_object *obj)
int err;
err = drm_gem_create_mmap_offset(&obj->base);
if (!err)
if (likely(!err))
return 0;
/* We can idle the GPU locklessly to flush stale objects, but in order
* to claim that space for ourselves, we need to take the big
* struct_mutex to free the requests+objects and allocate our slot.
*/
err = i915_gem_wait_for_idle(dev_priv, I915_WAIT_INTERRUPTIBLE);
if (err)
return err;
/* Attempt to reap some mmap space from dead objects */
do {
err = i915_gem_wait_for_idle(dev_priv, I915_WAIT_INTERRUPTIBLE);
if (err)
break;
err = i915_mutex_lock_interruptible(&dev_priv->drm);
if (!err) {
i915_gem_retire_requests(dev_priv);
i915_gem_drain_freed_objects(dev_priv);
err = drm_gem_create_mmap_offset(&obj->base);
mutex_unlock(&dev_priv->drm.struct_mutex);
}
if (!err)
break;
} while (flush_delayed_work(&dev_priv->gt.retire_work));
return err;
}
......@@ -2324,6 +2297,7 @@ static void i915_sg_trim(struct sg_table *orig_st)
/* called before being DMA mapped, no need to copy sg->dma_* */
new_sg = sg_next(new_sg);
}
GEM_BUG_ON(new_sg); /* Should walk exactly nents and hit the end */
sg_free_table(orig_st);
......@@ -2645,35 +2619,34 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
goto out_unlock;
}
static bool i915_context_is_banned(const struct i915_gem_context *ctx)
static bool ban_context(const struct i915_gem_context *ctx)
{
unsigned long elapsed;
return (i915_gem_context_is_bannable(ctx) &&
ctx->ban_score >= CONTEXT_SCORE_BAN_THRESHOLD);
}
if (ctx->hang_stats.banned)
return true;
static void i915_gem_context_mark_guilty(struct i915_gem_context *ctx)
{
ctx->guilty_count++;
ctx->ban_score += CONTEXT_SCORE_GUILTY;
if (ban_context(ctx))
i915_gem_context_set_banned(ctx);
elapsed = get_seconds() - ctx->hang_stats.guilty_ts;
if (ctx->hang_stats.ban_period_seconds &&
elapsed <= ctx->hang_stats.ban_period_seconds) {
DRM_DEBUG("context hanging too fast, banning!\n");
return true;
}
DRM_DEBUG_DRIVER("context %s marked guilty (score %d) banned? %s\n",
ctx->name, ctx->ban_score,
yesno(i915_gem_context_is_banned(ctx)));
return false;
if (!i915_gem_context_is_banned(ctx) || IS_ERR_OR_NULL(ctx->file_priv))
return;
ctx->file_priv->context_bans++;
DRM_DEBUG_DRIVER("client %s has had %d context banned\n",
ctx->name, ctx->file_priv->context_bans);
}
static void i915_set_reset_status(struct i915_gem_context *ctx,
const bool guilty)
static void i915_gem_context_mark_innocent(struct i915_gem_context *ctx)
{
struct i915_ctx_hang_stats *hs = &ctx->hang_stats;
if (guilty) {
hs->banned = i915_context_is_banned(ctx);
hs->batch_active++;
hs->guilty_ts = get_seconds();
} else {
hs->batch_pending++;
}
ctx->active_count++;
}
struct drm_i915_gem_request *
......@@ -2716,10 +2689,15 @@ static void reset_request(struct drm_i915_gem_request *request)
memset(vaddr + head, 0, request->postfix - head);
}
void i915_gem_reset_prepare(struct drm_i915_private *dev_priv)
{
i915_gem_revoke_fences(dev_priv);
}
static void i915_gem_reset_engine(struct intel_engine_cs *engine)
{
struct drm_i915_gem_request *request;
struct i915_gem_context *incomplete_ctx;
struct i915_gem_context *hung_ctx;
struct intel_timeline *timeline;
unsigned long flags;
bool ring_hung;
......@@ -2731,11 +2709,21 @@ static void i915_gem_reset_engine(struct intel_engine_cs *engine)
if (!request)
return;
ring_hung = engine->hangcheck.score >= HANGCHECK_SCORE_RING_HUNG;
if (engine->hangcheck.seqno != intel_engine_get_seqno(engine))
hung_ctx = request->ctx;
ring_hung = engine->hangcheck.stalled;
if (engine->hangcheck.seqno != intel_engine_get_seqno(engine)) {
DRM_DEBUG_DRIVER("%s pardoned, was guilty? %s\n",
engine->name,
yesno(ring_hung));
ring_hung = false;
}
if (ring_hung)
i915_gem_context_mark_guilty(hung_ctx);
else
i915_gem_context_mark_innocent(hung_ctx);
i915_set_reset_status(request->ctx, ring_hung);
if (!ring_hung)
return;
......@@ -2745,6 +2733,10 @@ static void i915_gem_reset_engine(struct intel_engine_cs *engine)
/* Setup the CS to resume from the breadcrumb of the hung request */
engine->reset_hw(engine, request);
/* If this context is now banned, skip all of its pending requests. */
if (!i915_gem_context_is_banned(hung_ctx))
return;
/* Users of the default context do not rely on logical state
* preserved between batches. They have to emit full state on
* every batch and so it is safe to execute queued requests following
......@@ -2753,17 +2745,16 @@ static void i915_gem_reset_engine(struct intel_engine_cs *engine)
* Other contexts preserve state, now corrupt. We want to skip all
* queued requests that reference the corrupt context.
*/
incomplete_ctx = request->ctx;
if (i915_gem_context_is_default(incomplete_ctx))
if (i915_gem_context_is_default(hung_ctx))
return;
timeline = i915_gem_context_lookup_timeline(incomplete_ctx, engine);
timeline = i915_gem_context_lookup_timeline(hung_ctx, engine);
spin_lock_irqsave(&engine->timeline->lock, flags);
spin_lock(&timeline->lock);
list_for_each_entry_continue(request, &engine->timeline->requests, link)
if (request->ctx == incomplete_ctx)
if (request->ctx == hung_ctx)
reset_request(request);
list_for_each_entry(request, &timeline->requests, link)
......@@ -2773,7 +2764,7 @@ static void i915_gem_reset_engine(struct intel_engine_cs *engine)
spin_unlock_irqrestore(&engine->timeline->lock, flags);
}
void i915_gem_reset(struct drm_i915_private *dev_priv)
void i915_gem_reset_finish(struct drm_i915_private *dev_priv)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
......@@ -2803,6 +2794,12 @@ static void nop_submit_request(struct drm_i915_gem_request *request)
static void i915_gem_cleanup_engine(struct intel_engine_cs *engine)
{
/* We need to be sure that no thread is running the old callback as
* we install the nop handler (otherwise we would submit a request
* to hardware that will never complete). In order to prevent this
* race, we wait until the machine is idle before making the swap
* (using stop_machine()).
*/
engine->submit_request = nop_submit_request;
/* Mark all pending requests as complete so that any concurrent
......@@ -2833,20 +2830,29 @@ static void i915_gem_cleanup_engine(struct intel_engine_cs *engine)
}
}
void i915_gem_set_wedged(struct drm_i915_private *dev_priv)
static int __i915_gem_set_wedged_BKL(void *data)
{
struct drm_i915_private *i915 = data;
struct intel_engine_cs *engine;
enum intel_engine_id id;
for_each_engine(engine, i915, id)
i915_gem_cleanup_engine(engine);
return 0;
}
void i915_gem_set_wedged(struct drm_i915_private *dev_priv)
{
lockdep_assert_held(&dev_priv->drm.struct_mutex);
set_bit(I915_WEDGED, &dev_priv->gpu_error.flags);
i915_gem_context_lost(dev_priv);
for_each_engine(engine, dev_priv, id)
i915_gem_cleanup_engine(engine);
mod_delayed_work(dev_priv->wq, &dev_priv->gt.idle_work, 0);
stop_machine(__i915_gem_set_wedged_BKL, dev_priv, NULL);
i915_gem_context_lost(dev_priv);
i915_gem_retire_requests(dev_priv);
mod_delayed_work(dev_priv->wq, &dev_priv->gt.idle_work, 0);
}
static void
......@@ -3532,7 +3538,7 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
void
i915_gem_object_unpin_from_display_plane(struct i915_vma *vma)
{
lockdep_assert_held(&vma->vm->dev->struct_mutex);
lockdep_assert_held(&vma->vm->i915->drm.struct_mutex);
if (WARN_ON(vma->obj->pin_display == 0))
return;
......@@ -3966,14 +3972,9 @@ static const struct drm_i915_gem_object_ops i915_gem_object_ops = {
.put_pages = i915_gem_object_put_pages_gtt,
};
/* Note we don't consider signbits :| */
#define overflows_type(x, T) \
(sizeof(x) > sizeof(T) && (x) >> (sizeof(T) * BITS_PER_BYTE))
struct drm_i915_gem_object *
i915_gem_object_create(struct drm_device *dev, u64 size)
i915_gem_object_create(struct drm_i915_private *dev_priv, u64 size)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_gem_object *obj;
struct address_space *mapping;
gfp_t mask;
......@@ -3990,16 +3991,16 @@ i915_gem_object_create(struct drm_device *dev, u64 size)
if (overflows_type(size, obj->base.size))
return ERR_PTR(-E2BIG);
obj = i915_gem_object_alloc(dev);
obj = i915_gem_object_alloc(dev_priv);
if (obj == NULL)
return ERR_PTR(-ENOMEM);
ret = drm_gem_object_init(dev, &obj->base, size);
ret = drm_gem_object_init(&dev_priv->drm, &obj->base, size);
if (ret)
goto fail;
mask = GFP_HIGHUSER | __GFP_RECLAIMABLE;
if (IS_CRESTLINE(dev_priv) || IS_BROADWATER(dev_priv)) {
if (IS_I965GM(dev_priv) || IS_I965G(dev_priv)) {
/* 965gm cannot relocate objects above 4GiB. */
mask &= ~__GFP_HIGHMEM;
mask |= __GFP_DMA32;
......@@ -4192,12 +4193,12 @@ static void assert_kernel_context_is_current(struct drm_i915_private *dev_priv)
enum intel_engine_id id;
for_each_engine(engine, dev_priv, id)
GEM_BUG_ON(engine->last_context != dev_priv->kernel_context);
GEM_BUG_ON(!i915_gem_context_is_kernel(engine->last_retired_context));
}
int i915_gem_suspend(struct drm_device *dev)
int i915_gem_suspend(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_device *dev = &dev_priv->drm;
int ret;
intel_suspend_gt_powersave(dev_priv);
......@@ -4231,8 +4232,14 @@ int i915_gem_suspend(struct drm_device *dev)
cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
cancel_delayed_work_sync(&dev_priv->gt.retire_work);
flush_delayed_work(&dev_priv->gt.idle_work);
flush_work(&dev_priv->mm.free_work);
/* As the idle_work is rearming if it detects a race, play safe and
* repeat the flush until it is definitely idle.
*/
while (flush_delayed_work(&dev_priv->gt.idle_work))
;
i915_gem_drain_freed_objects(dev_priv);
/* Assert that we successfully flushed all the work and
* reset the GPU back to its idle, low power state.
......@@ -4271,9 +4278,9 @@ int i915_gem_suspend(struct drm_device *dev)
return ret;
}
void i915_gem_resume(struct drm_device *dev)
void i915_gem_resume(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_device *dev = &dev_priv->drm;
WARN_ON(dev_priv->gt.awake);
......@@ -4338,9 +4345,8 @@ static void init_unused_rings(struct drm_i915_private *dev_priv)
}
int
i915_gem_init_hw(struct drm_device *dev)
i915_gem_init_hw(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_engine_cs *engine;
enum intel_engine_id id;
int ret;
......@@ -4394,10 +4400,10 @@ i915_gem_init_hw(struct drm_device *dev)
goto out;
}
intel_mocs_init_l3cc_table(dev);
intel_mocs_init_l3cc_table(dev_priv);
/* We can't enable contexts until all firmware is loaded */
ret = intel_guc_setup(dev);
ret = intel_guc_setup(dev_priv);
if (ret)
goto out;
......@@ -4427,12 +4433,11 @@ bool intel_sanitize_semaphores(struct drm_i915_private *dev_priv, int value)
return true;
}
int i915_gem_init(struct drm_device *dev)
int i915_gem_init(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
int ret;
mutex_lock(&dev->struct_mutex);
mutex_lock(&dev_priv->drm.struct_mutex);
if (!i915.enable_execlists) {
dev_priv->gt.resume = intel_legacy_submission_resume;
......@@ -4456,15 +4461,15 @@ int i915_gem_init(struct drm_device *dev)
if (ret)
goto out_unlock;
ret = i915_gem_context_init(dev);
ret = i915_gem_context_init(dev_priv);
if (ret)
goto out_unlock;
ret = intel_engines_init(dev);
ret = intel_engines_init(dev_priv);
if (ret)
goto out_unlock;
ret = i915_gem_init_hw(dev);
ret = i915_gem_init_hw(dev_priv);
if (ret == -EIO) {
/* Allow engine initialisation to fail by marking the GPU as
* wedged. But we only want to do this where the GPU is angry,
......@@ -4477,15 +4482,14 @@ int i915_gem_init(struct drm_device *dev)
out_unlock:
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
mutex_unlock(&dev->struct_mutex);
mutex_unlock(&dev_priv->drm.struct_mutex);
return ret;
}
void
i915_gem_cleanup_engines(struct drm_device *dev)
i915_gem_cleanup_engines(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_engine_cs *engine;
enum intel_engine_id id;
......@@ -4501,8 +4505,9 @@ i915_gem_load_init_fences(struct drm_i915_private *dev_priv)
if (INTEL_INFO(dev_priv)->gen >= 7 && !IS_VALLEYVIEW(dev_priv) &&
!IS_CHERRYVIEW(dev_priv))
dev_priv->num_fence_regs = 32;
else if (INTEL_INFO(dev_priv)->gen >= 4 || IS_I945G(dev_priv) ||
IS_I945GM(dev_priv) || IS_G33(dev_priv))
else if (INTEL_INFO(dev_priv)->gen >= 4 ||
IS_I945G(dev_priv) || IS_I945GM(dev_priv) ||
IS_G33(dev_priv) || IS_PINEVIEW(dev_priv))
dev_priv->num_fence_regs = 16;
else
dev_priv->num_fence_regs = 8;
......@@ -4525,9 +4530,8 @@ i915_gem_load_init_fences(struct drm_i915_private *dev_priv)
}
int
i915_gem_load_init(struct drm_device *dev)
i915_gem_load_init(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
int err = -ENOMEM;
dev_priv->objects = KMEM_CACHE(drm_i915_gem_object, SLAB_HWCACHE_ALIGN);
......@@ -4596,10 +4600,8 @@ i915_gem_load_init(struct drm_device *dev)
return err;
}
void i915_gem_load_cleanup(struct drm_device *dev)
void i915_gem_load_cleanup(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
WARN_ON(!llist_empty(&dev_priv->mm.free_list));
mutex_lock(&dev_priv->drm.struct_mutex);
......@@ -4750,7 +4752,7 @@ void i915_gem_track_fb(struct drm_i915_gem_object *old,
/* Allocate a new GEM object and fill it with the supplied data */
struct drm_i915_gem_object *
i915_gem_object_create_from_data(struct drm_device *dev,
i915_gem_object_create_from_data(struct drm_i915_private *dev_priv,
const void *data, size_t size)
{
struct drm_i915_gem_object *obj;
......@@ -4758,7 +4760,7 @@ i915_gem_object_create_from_data(struct drm_device *dev,
size_t bytes;
int ret;
obj = i915_gem_object_create(dev, round_up(size, PAGE_SIZE));
obj = i915_gem_object_create(dev_priv, round_up(size, PAGE_SIZE));
if (IS_ERR(obj))
return obj;
......
......@@ -27,8 +27,10 @@
#ifdef CONFIG_DRM_I915_DEBUG_GEM
#define GEM_BUG_ON(expr) BUG_ON(expr)
#define GEM_WARN_ON(expr) WARN_ON(expr)
#else
#define GEM_BUG_ON(expr) do { } while (0)
#define GEM_BUG_ON(expr) BUILD_BUG_ON_INVALID(expr)
#define GEM_WARN_ON(expr) (BUILD_BUG_ON_INVALID(expr), 0)
#endif
#define I915_NUM_ENGINES 5
......
......@@ -141,7 +141,7 @@ void i915_gem_context_free(struct kref *ctx_ref)
lockdep_assert_held(&ctx->i915->drm.struct_mutex);
trace_i915_context_free(ctx);
GEM_BUG_ON(!ctx->closed);
GEM_BUG_ON(!i915_gem_context_is_closed(ctx));
i915_ppgtt_put(ctx->ppgtt);
......@@ -166,15 +166,15 @@ void i915_gem_context_free(struct kref *ctx_ref)
kfree(ctx);
}
struct drm_i915_gem_object *
i915_gem_alloc_context_obj(struct drm_device *dev, size_t size)
static struct drm_i915_gem_object *
alloc_context_obj(struct drm_i915_private *dev_priv, u64 size)
{
struct drm_i915_gem_object *obj;
int ret;
lockdep_assert_held(&dev->struct_mutex);
lockdep_assert_held(&dev_priv->drm.struct_mutex);
obj = i915_gem_object_create(dev, size);
obj = i915_gem_object_create(dev_priv, size);
if (IS_ERR(obj))
return obj;
......@@ -193,7 +193,7 @@ i915_gem_alloc_context_obj(struct drm_device *dev, size_t size)
* This is only applicable for Ivy Bridge devices since
* later platforms don't have L3 control bits in the PTE.
*/
if (IS_IVYBRIDGE(to_i915(dev))) {
if (IS_IVYBRIDGE(dev_priv)) {
ret = i915_gem_object_set_cache_level(obj, I915_CACHE_L3_LLC);
/* Failure shouldn't ever happen this early */
if (WARN_ON(ret)) {
......@@ -228,8 +228,7 @@ static void i915_ppgtt_close(struct i915_address_space *vm)
static void context_close(struct i915_gem_context *ctx)
{
GEM_BUG_ON(ctx->closed);
ctx->closed = true;
i915_gem_context_set_closed(ctx);
if (ctx->ppgtt)
i915_ppgtt_close(&ctx->ppgtt->base);
ctx->file_priv = ERR_PTR(-EBADF);
......@@ -259,10 +258,9 @@ static int assign_hw_id(struct drm_i915_private *dev_priv, unsigned *out)
}
static struct i915_gem_context *
__create_hw_context(struct drm_device *dev,
__create_hw_context(struct drm_i915_private *dev_priv,
struct drm_i915_file_private *file_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct i915_gem_context *ctx;
int ret;
......@@ -286,8 +284,7 @@ __create_hw_context(struct drm_device *dev,
struct drm_i915_gem_object *obj;
struct i915_vma *vma;
obj = i915_gem_alloc_context_obj(dev,
dev_priv->hw_context_size);
obj = alloc_context_obj(dev_priv, dev_priv->hw_context_size);
if (IS_ERR(obj)) {
ret = PTR_ERR(obj);
goto err_out;
......@@ -331,12 +328,21 @@ __create_hw_context(struct drm_device *dev,
* is no remap info, it will be a NOP. */
ctx->remap_slice = ALL_L3_SLICES(dev_priv);
ctx->hang_stats.ban_period_seconds = DRM_I915_CTX_BAN_PERIOD;
i915_gem_context_set_bannable(ctx);
ctx->ring_size = 4 * PAGE_SIZE;
ctx->desc_template = GEN8_CTX_ADDRESSING_MODE(dev_priv) <<
GEN8_CTX_ADDRESSING_MODE_SHIFT;
ATOMIC_INIT_NOTIFIER_HEAD(&ctx->status_notifier);
/* GuC requires the ring to be placed above GUC_WOPCM_TOP. If GuC is not
* present or not in use we still need a small bias as ring wraparound
* at offset 0 sometimes hangs. No idea why.
*/
if (HAS_GUC(dev_priv) && i915.enable_guc_loading)
ctx->ggtt_offset_bias = GUC_WOPCM_TOP;
else
ctx->ggtt_offset_bias = 4096;
return ctx;
err_pid:
......@@ -353,21 +359,21 @@ __create_hw_context(struct drm_device *dev,
* well as an idle case.
*/
static struct i915_gem_context *
i915_gem_create_context(struct drm_device *dev,
i915_gem_create_context(struct drm_i915_private *dev_priv,
struct drm_i915_file_private *file_priv)
{
struct i915_gem_context *ctx;
lockdep_assert_held(&dev->struct_mutex);
lockdep_assert_held(&dev_priv->drm.struct_mutex);
ctx = __create_hw_context(dev, file_priv);
ctx = __create_hw_context(dev_priv, file_priv);
if (IS_ERR(ctx))
return ctx;
if (USES_FULL_PPGTT(dev)) {
if (USES_FULL_PPGTT(dev_priv)) {
struct i915_hw_ppgtt *ppgtt;
ppgtt = i915_ppgtt_create(to_i915(dev), file_priv, ctx->name);
ppgtt = i915_ppgtt_create(dev_priv, file_priv, ctx->name);
if (IS_ERR(ppgtt)) {
DRM_DEBUG_DRIVER("PPGTT setup failed (%ld)\n",
PTR_ERR(ppgtt));
......@@ -407,35 +413,24 @@ i915_gem_context_create_gvt(struct drm_device *dev)
if (ret)
return ERR_PTR(ret);
ctx = i915_gem_create_context(dev, NULL);
ctx = __create_hw_context(to_i915(dev), NULL);
if (IS_ERR(ctx))
goto out;
ctx->execlists_force_single_submission = true;
ctx->file_priv = ERR_PTR(-EBADF);
i915_gem_context_set_closed(ctx); /* not user accessible */
i915_gem_context_clear_bannable(ctx);
i915_gem_context_set_force_single_submission(ctx);
ctx->ring_size = 512 * PAGE_SIZE; /* Max ring buffer size */
GEM_BUG_ON(i915_gem_context_is_kernel(ctx));
out:
mutex_unlock(&dev->struct_mutex);
return ctx;
}
static void i915_gem_context_unpin(struct i915_gem_context *ctx,
struct intel_engine_cs *engine)
int i915_gem_context_init(struct drm_i915_private *dev_priv)
{
if (i915.enable_execlists) {
intel_lr_context_unpin(ctx, engine);
} else {
struct intel_context *ce = &ctx->engine[engine->id];
if (ce->state)
i915_vma_unpin(ce->state);
i915_gem_context_put(ctx);
}
}
int i915_gem_context_init(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct i915_gem_context *ctx;
/* Init should only be called once per module load. Eventually the
......@@ -469,16 +464,19 @@ int i915_gem_context_init(struct drm_device *dev)
}
}
ctx = i915_gem_create_context(dev, NULL);
ctx = i915_gem_create_context(dev_priv, NULL);
if (IS_ERR(ctx)) {
DRM_ERROR("Failed to create default global context (error %ld)\n",
PTR_ERR(ctx));
return PTR_ERR(ctx);
}
i915_gem_context_clear_bannable(ctx);
ctx->priority = I915_PRIORITY_MIN; /* lowest priority; idle task */
dev_priv->kernel_context = ctx;
GEM_BUG_ON(!i915_gem_context_is_kernel(ctx));
DRM_DEBUG_DRIVER("%s context support initialized\n",
i915.enable_execlists ? "LR" :
dev_priv->hw_context_size ? "HW" : "fake");
......@@ -493,10 +491,13 @@ void i915_gem_context_lost(struct drm_i915_private *dev_priv)
lockdep_assert_held(&dev_priv->drm.struct_mutex);
for_each_engine(engine, dev_priv, id) {
if (engine->last_context) {
i915_gem_context_unpin(engine->last_context, engine);
engine->last_context = NULL;
}
engine->legacy_active_context = NULL;
if (!engine->last_retired_context)
continue;
engine->context_unpin(engine, engine->last_retired_context);
engine->last_retired_context = NULL;
}
/* Force the GPU state to be restored on enabling */
......@@ -522,12 +523,13 @@ void i915_gem_context_lost(struct drm_i915_private *dev_priv)
}
}
void i915_gem_context_fini(struct drm_device *dev)
void i915_gem_context_fini(struct drm_i915_private *dev_priv)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct i915_gem_context *dctx = dev_priv->kernel_context;
lockdep_assert_held(&dev->struct_mutex);
lockdep_assert_held(&dev_priv->drm.struct_mutex);
GEM_BUG_ON(!i915_gem_context_is_kernel(dctx));
context_close(dctx);
dev_priv->kernel_context = NULL;
......@@ -551,9 +553,11 @@ int i915_gem_context_open(struct drm_device *dev, struct drm_file *file)
idr_init(&file_priv->context_idr);
mutex_lock(&dev->struct_mutex);
ctx = i915_gem_create_context(dev, file_priv);
ctx = i915_gem_create_context(to_i915(dev), file_priv);
mutex_unlock(&dev->struct_mutex);
GEM_BUG_ON(i915_gem_context_is_kernel(ctx));
if (IS_ERR(ctx)) {
idr_destroy(&file_priv->context_idr);
return PTR_ERR(ctx);
......@@ -719,7 +723,7 @@ static inline bool skip_rcs_switch(struct i915_hw_ppgtt *ppgtt,
if (ppgtt && (intel_engine_flag(engine) & ppgtt->pd_dirty_rings))
return false;
return to == engine->last_context;
return to == engine->legacy_active_context;
}
static bool
......@@ -731,11 +735,11 @@ needs_pd_load_pre(struct i915_hw_ppgtt *ppgtt,
return false;
/* Always load the ppgtt on first use */
if (!engine->last_context)
if (!engine->legacy_active_context)
return true;
/* Same context without new entries, skip */
if (engine->last_context == to &&
if (engine->legacy_active_context == to &&
!(intel_engine_flag(engine) & ppgtt->pd_dirty_rings))
return false;
......@@ -765,57 +769,20 @@ needs_pd_load_post(struct i915_hw_ppgtt *ppgtt,
return false;
}
struct i915_vma *
i915_gem_context_pin_legacy(struct i915_gem_context *ctx,
unsigned int flags)
{
struct i915_vma *vma = ctx->engine[RCS].state;
int ret;
/* Clear this page out of any CPU caches for coherent swap-in/out.
* We only want to do this on the first bind so that we do not stall
* on an active context (which by nature is already on the GPU).
*/
if (!(vma->flags & I915_VMA_GLOBAL_BIND)) {
ret = i915_gem_object_set_to_gtt_domain(vma->obj, false);
if (ret)
return ERR_PTR(ret);
}
ret = i915_vma_pin(vma, 0, ctx->ggtt_alignment, PIN_GLOBAL | flags);
if (ret)
return ERR_PTR(ret);
return vma;
}
static int do_rcs_switch(struct drm_i915_gem_request *req)
{
struct i915_gem_context *to = req->ctx;
struct intel_engine_cs *engine = req->engine;
struct i915_hw_ppgtt *ppgtt = to->ppgtt ?: req->i915->mm.aliasing_ppgtt;
struct i915_vma *vma;
struct i915_gem_context *from;
struct i915_gem_context *from = engine->legacy_active_context;
u32 hw_flags;
int ret, i;
GEM_BUG_ON(engine->id != RCS);
if (skip_rcs_switch(ppgtt, engine, to))
return 0;
/* Trying to pin first makes error handling easier. */
vma = i915_gem_context_pin_legacy(to, 0);
if (IS_ERR(vma))
return PTR_ERR(vma);
/*
* Pin can switch back to the default context if we end up calling into
* evict_everything - as a last ditch gtt defrag effort that also
* switches to the default context. Hence we need to reload from here.
*
* XXX: Doing so is painfully broken!
*/
from = engine->last_context;
if (needs_pd_load_pre(ppgtt, engine, to)) {
/* Older GENs and non render rings still want the load first,
* "PP_DCLV followed by PP_DIR_BASE register through Load
......@@ -824,7 +791,7 @@ static int do_rcs_switch(struct drm_i915_gem_request *req)
trace_switch_mm(engine, to);
ret = ppgtt->switch_mm(ppgtt, req);
if (ret)
goto err;
return ret;
}
if (!to->engine[RCS].initialised || i915_gem_context_is_default(to))
......@@ -841,29 +808,10 @@ static int do_rcs_switch(struct drm_i915_gem_request *req)
if (to != from || (hw_flags & MI_FORCE_RESTORE)) {
ret = mi_set_context(req, hw_flags);
if (ret)
goto err;
}
return ret;
/* The backing object for the context is done after switching to the
* *next* context. Therefore we cannot retire the previous context until
* the next context has already started running. In fact, the below code
* is a bit suboptimal because the retiring can occur simply after the
* MI_SET_CONTEXT instead of when the next seqno has completed.
*/
if (from != NULL) {
/* As long as MI_SET_CONTEXT is serializing, ie. it flushes the
* whole damn pipeline, we don't need to explicitly mark the
* object dirty. The only exception is that the context must be
* correct in case the object gets swapped out. Ideally we'd be
* able to defer doing this until we know the object would be
* swapped, but there is no way to do that yet.
*/
i915_vma_move_to_active(from->engine[RCS].state, req, 0);
/* state is kept alive until the next request */
i915_vma_unpin(from->engine[RCS].state);
i915_gem_context_put(from);
engine->legacy_active_context = to;
}
engine->last_context = i915_gem_context_get(to);
/* GEN8 does *not* require an explicit reload if the PDPs have been
* setup, and we do not wish to move them.
......@@ -904,10 +852,6 @@ static int do_rcs_switch(struct drm_i915_gem_request *req)
}
return 0;
err:
i915_vma_unpin(vma);
return ret;
}
/**
......@@ -947,12 +891,6 @@ int i915_switch_context(struct drm_i915_gem_request *req)
ppgtt->pd_dirty_rings &= ~intel_engine_flag(engine);
}
if (to != engine->last_context) {
if (engine->last_context)
i915_gem_context_put(engine->last_context);
engine->last_context = i915_gem_context_get(to);
}
return 0;
}
......@@ -1003,6 +941,11 @@ static bool contexts_enabled(struct drm_device *dev)
return i915.enable_execlists || to_i915(dev)->hw_context_size;
}
static bool client_is_banned(struct drm_i915_file_private *file_priv)
{
return file_priv->context_bans > I915_MAX_CLIENT_CONTEXT_BANS;
}
int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *file)
{
......@@ -1017,17 +960,27 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
if (args->pad != 0)
return -EINVAL;
if (client_is_banned(file_priv)) {
DRM_DEBUG("client %s[%d] banned from creating ctx\n",
current->comm,
pid_nr(get_task_pid(current, PIDTYPE_PID)));
return -EIO;
}
ret = i915_mutex_lock_interruptible(dev);
if (ret)
return ret;
ctx = i915_gem_create_context(dev, file_priv);
ctx = i915_gem_create_context(to_i915(dev), file_priv);
mutex_unlock(&dev->struct_mutex);
if (IS_ERR(ctx))
return PTR_ERR(ctx);
GEM_BUG_ON(i915_gem_context_is_kernel(ctx));
args->ctx_id = ctx->user_handle;
DRM_DEBUG_DRIVER("HW context %d created\n", args->ctx_id);
DRM_DEBUG("HW context %d created\n", args->ctx_id);
return 0;
}
......@@ -1060,7 +1013,7 @@ int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
context_close(ctx);
mutex_unlock(&dev->struct_mutex);
DRM_DEBUG_DRIVER("HW context %d destroyed\n", args->ctx_id);
DRM_DEBUG("HW context %d destroyed\n", args->ctx_id);
return 0;
}
......@@ -1085,7 +1038,7 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
args->size = 0;
switch (args->param) {
case I915_CONTEXT_PARAM_BAN_PERIOD:
args->value = ctx->hang_stats.ban_period_seconds;
ret = -EINVAL;
break;
case I915_CONTEXT_PARAM_NO_ZEROMAP:
args->value = ctx->flags & CONTEXT_NO_ZEROMAP;
......@@ -1099,7 +1052,10 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
args->value = to_i915(dev)->ggtt.base.total;
break;
case I915_CONTEXT_PARAM_NO_ERROR_CAPTURE:
args->value = !!(ctx->flags & CONTEXT_NO_ERROR_CAPTURE);
args->value = i915_gem_context_no_error_capture(ctx);
break;
case I915_CONTEXT_PARAM_BANNABLE:
args->value = i915_gem_context_is_bannable(ctx);
break;
default:
ret = -EINVAL;
......@@ -1130,13 +1086,7 @@ int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
switch (args->param) {
case I915_CONTEXT_PARAM_BAN_PERIOD:
if (args->size)
ret = -EINVAL;
else if (args->value < ctx->hang_stats.ban_period_seconds &&
!capable(CAP_SYS_ADMIN))
ret = -EPERM;
else
ctx->hang_stats.ban_period_seconds = args->value;
ret = -EINVAL;
break;
case I915_CONTEXT_PARAM_NO_ZEROMAP:
if (args->size) {
......@@ -1147,14 +1097,22 @@ int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
}
break;
case I915_CONTEXT_PARAM_NO_ERROR_CAPTURE:
if (args->size) {
if (args->size)
ret = -EINVAL;
} else {
if (args->value)
ctx->flags |= CONTEXT_NO_ERROR_CAPTURE;
else
ctx->flags &= ~CONTEXT_NO_ERROR_CAPTURE;
}
else if (args->value)
i915_gem_context_set_no_error_capture(ctx);
else
i915_gem_context_clear_no_error_capture(ctx);
break;
case I915_CONTEXT_PARAM_BANNABLE:
if (args->size)
ret = -EINVAL;
else if (!capable(CAP_SYS_ADMIN) && !args->value)
ret = -EPERM;
else if (args->value)
i915_gem_context_set_bannable(ctx);
else
i915_gem_context_clear_bannable(ctx);
break;
default:
ret = -EINVAL;
......@@ -1170,7 +1128,6 @@ int i915_gem_context_reset_stats_ioctl(struct drm_device *dev,
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_reset_stats *args = data;
struct i915_ctx_hang_stats *hs;
struct i915_gem_context *ctx;
int ret;
......@@ -1189,15 +1146,14 @@ int i915_gem_context_reset_stats_ioctl(struct drm_device *dev,
mutex_unlock(&dev->struct_mutex);
return PTR_ERR(ctx);
}
hs = &ctx->hang_stats;
if (capable(CAP_SYS_ADMIN))
args->reset_count = i915_reset_count(&dev_priv->gpu_error);
else
args->reset_count = 0;
args->batch_active = hs->batch_active;
args->batch_pending = hs->batch_pending;
args->batch_active = ctx->guilty_count;
args->batch_pending = ctx->active_count;
mutex_unlock(&dev->struct_mutex);
......
/*
* Copyright © 2016 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*
*/
#ifndef __I915_GEM_CONTEXT_H__
#define __I915_GEM_CONTEXT_H__
#include <linux/bitops.h>
#include <linux/list.h>
struct pid;
struct drm_device;
struct drm_file;
struct drm_i915_private;
struct drm_i915_file_private;
struct i915_hw_ppgtt;
struct i915_vma;
struct intel_ring;
#define DEFAULT_CONTEXT_HANDLE 0
/**
* struct i915_gem_context - client state
*
* The struct i915_gem_context represents the combined view of the driver and
* logical hardware state for a particular client.
*/
struct i915_gem_context {
/** i915: i915 device backpointer */
struct drm_i915_private *i915;
/** file_priv: owning file descriptor */
struct drm_i915_file_private *file_priv;
/**
* @ppgtt: unique address space (GTT)
*
* In full-ppgtt mode, each context has its own address space ensuring
* complete separation of one client from all others.
*
* In other modes, this is a NULL pointer with the expectation that
* the caller uses the shared global GTT.
*/
struct i915_hw_ppgtt *ppgtt;
/**
* @pid: process id of creator
*
* Note that who created the context may not be the principal user,
* as the context may be shared across a local socket. However,
* that should only affect the default context; all contexts created
* explicitly by the client are expected to be isolated.
*/
struct pid *pid;
/**
* @name: arbitrary name
*
* A name is constructed for the context from the creator's process
* name, pid and user handle in order to uniquely identify the
* context in messages.
*/
const char *name;
/** link: link into &drm_i915_private.context_list */
struct list_head link;
/**
* @ref: reference count
*
* A reference to a context is held both by the client who created it
* and by each request submitted to the hardware that uses the context
* (to ensure the hardware has access to the state until it has
* finished all pending writes). See i915_gem_context_get() and
* i915_gem_context_put() for access.
*/
struct kref ref;
/**
* @flags: small set of booleans
*/
unsigned long flags;
#define CONTEXT_NO_ZEROMAP BIT(0)
#define CONTEXT_NO_ERROR_CAPTURE 1
#define CONTEXT_CLOSED 2
#define CONTEXT_BANNABLE 3
#define CONTEXT_BANNED 4
#define CONTEXT_FORCE_SINGLE_SUBMISSION 5
/**
* @hw_id: unique identifier for the context
*
* The hardware needs to uniquely identify the context for a few
* functions like fault reporting, PASID, scheduling. The
* &drm_i915_private.context_hw_ida is used to assign a unique
* id for the lifetime of the context.
*/
unsigned int hw_id;
/**
* @user_handle: userspace identifier
*
* A unique per-file identifier is generated from
* &drm_i915_file_private.contexts.
*/
u32 user_handle;
/**
* @priority: execution and service priority
*
* All clients are equal, but some are more equal than others!
*
* Requests from a context with a greater (more positive) value of
* @priority will be executed before those with a lower @priority
* value, forming a simple QoS.
*
* The &drm_i915_private.kernel_context is assigned the lowest priority.
*/
int priority;
/** ggtt_alignment: alignment restriction for context objects */
u32 ggtt_alignment;
/** ggtt_offset_bias: placement restriction for context objects */
u32 ggtt_offset_bias;
/** engine: per-engine logical HW state */
struct intel_context {
struct i915_vma *state;
struct intel_ring *ring;
u32 *lrc_reg_state;
u64 lrc_desc;
int pin_count;
bool initialised;
} engine[I915_NUM_ENGINES];
/** ring_size: size for allocating the per-engine ring buffer */
u32 ring_size;
/** desc_template: invariant fields for the HW context descriptor */
u32 desc_template;
/** status_notifier: list of callbacks for context-switch changes */
struct atomic_notifier_head status_notifier;
/** guilty_count: How many times this context has caused a GPU hang. */
unsigned int guilty_count;
/**
* @active_count: How many times this context was active during a GPU
* hang, but did not cause it.
*/
unsigned int active_count;
#define CONTEXT_SCORE_GUILTY 10
#define CONTEXT_SCORE_BAN_THRESHOLD 40
/** ban_score: Accumulated score of all hangs caused by this context. */
int ban_score;
/** remap_slice: Bitmask of L3 slices that need remapping */
u8 remap_slice;
};
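The guilty/ban bookkeeping above amounts to a simple accumulator; a minimal sketch of the intended arithmetic, using a hypothetical helper name (the real hang-recovery path is not part of this hunk):

	static void mark_guilty_sketch(struct i915_gem_context *ctx)
	{
		/* each hang this context caused raises its score */
		ctx->guilty_count++;
		ctx->ban_score += CONTEXT_SCORE_GUILTY;

		/* once the score crosses the threshold, a bannable context
		 * is banned and further submissions from it are rejected
		 * with -EIO
		 */
		if (i915_gem_context_is_bannable(ctx) &&
		    ctx->ban_score >= CONTEXT_SCORE_BAN_THRESHOLD)
			i915_gem_context_set_banned(ctx);
	}

With CONTEXT_SCORE_GUILTY = 10 and CONTEXT_SCORE_BAN_THRESHOLD = 40, a bannable context would be banned after its fourth hang.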
static inline bool i915_gem_context_is_closed(const struct i915_gem_context *ctx)
{
return test_bit(CONTEXT_CLOSED, &ctx->flags);
}
static inline void i915_gem_context_set_closed(struct i915_gem_context *ctx)
{
GEM_BUG_ON(i915_gem_context_is_closed(ctx));
__set_bit(CONTEXT_CLOSED, &ctx->flags);
}
static inline bool i915_gem_context_no_error_capture(const struct i915_gem_context *ctx)
{
return test_bit(CONTEXT_NO_ERROR_CAPTURE, &ctx->flags);
}
static inline void i915_gem_context_set_no_error_capture(struct i915_gem_context *ctx)
{
__set_bit(CONTEXT_NO_ERROR_CAPTURE, &ctx->flags);
}
static inline void i915_gem_context_clear_no_error_capture(struct i915_gem_context *ctx)
{
__clear_bit(CONTEXT_NO_ERROR_CAPTURE, &ctx->flags);
}
static inline bool i915_gem_context_is_bannable(const struct i915_gem_context *ctx)
{
return test_bit(CONTEXT_BANNABLE, &ctx->flags);
}
static inline void i915_gem_context_set_bannable(struct i915_gem_context *ctx)
{
__set_bit(CONTEXT_BANNABLE, &ctx->flags);
}
static inline void i915_gem_context_clear_bannable(struct i915_gem_context *ctx)
{
__clear_bit(CONTEXT_BANNABLE, &ctx->flags);
}
static inline bool i915_gem_context_is_banned(const struct i915_gem_context *ctx)
{
return test_bit(CONTEXT_BANNED, &ctx->flags);
}
static inline void i915_gem_context_set_banned(struct i915_gem_context *ctx)
{
__set_bit(CONTEXT_BANNED, &ctx->flags);
}
static inline bool i915_gem_context_force_single_submission(const struct i915_gem_context *ctx)
{
return test_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &ctx->flags);
}
static inline void i915_gem_context_set_force_single_submission(struct i915_gem_context *ctx)
{
__set_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &ctx->flags);
}
static inline bool i915_gem_context_is_default(const struct i915_gem_context *c)
{
return c->user_handle == DEFAULT_CONTEXT_HANDLE;
}
static inline bool i915_gem_context_is_kernel(struct i915_gem_context *ctx)
{
return !ctx->file_priv;
}
/* i915_gem_context.c */
int __must_check i915_gem_context_init(struct drm_i915_private *dev_priv);
void i915_gem_context_lost(struct drm_i915_private *dev_priv);
void i915_gem_context_fini(struct drm_i915_private *dev_priv);
int i915_gem_context_open(struct drm_device *dev, struct drm_file *file);
void i915_gem_context_close(struct drm_device *dev, struct drm_file *file);
int i915_switch_context(struct drm_i915_gem_request *req);
int i915_gem_switch_to_kernel_context(struct drm_i915_private *dev_priv);
void i915_gem_context_free(struct kref *ctx_ref);
struct i915_gem_context *
i915_gem_context_create_gvt(struct drm_device *dev);
int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int i915_gem_context_reset_stats_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
#endif /* !__I915_GEM_CONTEXT_H__ */
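With BAN_PERIOD now rejected in the get/setparam ioctls above, the per-context knob left to userspace is I915_CONTEXT_PARAM_BANNABLE; a hedged userspace sketch of flipping it via libdrm (fd and ctx_id are assumed to come from the caller, and the i915 uapi header must be new enough to define the parameter):

	#include <stdint.h>
	#include <xf86drm.h>
	#include <i915_drm.h>

	/* opt a context out of banning; per the kernel code above,
	 * clearing the flag requires CAP_SYS_ADMIN
	 */
	static int context_set_unbannable(int fd, uint32_t ctx_id)
	{
		struct drm_i915_gem_context_param p = {
			.ctx_id = ctx_id,
			.param = I915_CONTEXT_PARAM_BANNABLE,
			.value = 0,
		};

		return drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &p);
	}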
......@@ -278,7 +278,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
get_dma_buf(dma_buf);
obj = i915_gem_object_alloc(dev);
obj = i915_gem_object_alloc(to_i915(dev));
if (obj == NULL) {
ret = -ENOMEM;
goto fail_detach;
......
......@@ -274,6 +274,7 @@ static void eb_destroy(struct eb_vmas *eb)
exec_list);
list_del_init(&vma->exec_list);
i915_gem_execbuffer_unreserve_vma(vma);
vma->exec_entry = NULL;
i915_vma_put(vma);
}
kfree(eb);
......@@ -437,7 +438,7 @@ static void *reloc_iomap(struct drm_i915_gem_object *obj,
memset(&cache->node, 0, sizeof(cache->node));
ret = drm_mm_insert_node_in_range_generic
(&ggtt->base.mm, &cache->node,
4096, 0, 0,
4096, 0, I915_COLOR_UNEVICTABLE,
0, ggtt->mappable_end,
DRM_MM_SEARCH_DEFAULT,
DRM_MM_CREATE_DEFAULT);
......@@ -1232,14 +1233,12 @@ i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
struct intel_engine_cs *engine, const u32 ctx_id)
{
struct i915_gem_context *ctx;
struct i915_ctx_hang_stats *hs;
ctx = i915_gem_context_lookup(file->driver_priv, ctx_id);
if (IS_ERR(ctx))
return ctx;
hs = &ctx->hang_stats;
if (hs->banned) {
if (i915_gem_context_is_banned(ctx)) {
DRM_DEBUG("Context %u tried to submit while banned\n", ctx_id);
return ERR_PTR(-EIO);
}
......@@ -1260,6 +1259,7 @@ void i915_vma_move_to_active(struct i915_vma *vma,
struct drm_i915_gem_object *obj = vma->obj;
const unsigned int idx = req->engine->id;
lockdep_assert_held(&req->i915->drm.struct_mutex);
GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
/* Add a reference if we're newly entering the active list.
......@@ -1715,7 +1715,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
}
params->args_batch_start_offset = args->batch_start_offset;
if (intel_engine_needs_cmd_parser(engine) && args->batch_len) {
if (engine->needs_cmd_parser && args->batch_len) {
struct i915_vma *vma;
vma = i915_gem_execbuffer_parse(engine, &shadow_exec_entry,
......
......@@ -290,7 +290,7 @@ i915_vma_put_fence(struct i915_vma *vma)
{
struct drm_i915_fence_reg *fence = vma->fence;
assert_rpm_wakelock_held(to_i915(vma->vm->dev));
assert_rpm_wakelock_held(vma->vm->i915);
if (!fence)
return 0;
......@@ -313,7 +313,7 @@ static struct drm_i915_fence_reg *fence_find(struct drm_i915_private *dev_priv)
}
/* Wait for completion of pending flips which consume fences */
if (intel_has_pending_fb_unpin(&dev_priv->drm))
if (intel_has_pending_fb_unpin(dev_priv))
return ERR_PTR(-EAGAIN);
return ERR_PTR(-EDEADLK);
......@@ -346,7 +346,7 @@ i915_vma_get_fence(struct i915_vma *vma)
/* Note that we revoke fences on runtime suspend. Therefore the user
* must keep the device awake whilst using the fence.
*/
assert_rpm_wakelock_held(to_i915(vma->vm->dev));
assert_rpm_wakelock_held(vma->vm->i915);
/* Just update our place in the LRU if our fence is getting reused. */
if (vma->fence) {
......@@ -357,7 +357,7 @@ i915_vma_get_fence(struct i915_vma *vma)
return 0;
}
} else if (set) {
fence = fence_find(to_i915(vma->vm->dev));
fence = fence_find(vma->vm->i915);
if (IS_ERR(fence))
return PTR_ERR(fence);
} else
......@@ -366,6 +366,30 @@ i915_vma_get_fence(struct i915_vma *vma)
return fence_update(fence, set);
}
/**
* i915_gem_revoke_fences - revoke fence state
* @dev_priv: i915 device private
*
* Removes all GTT mmappings via the fence registers. This forces any user
* of the fence to reacquire that fence before continuing with their access.
* One use is during GPU reset where the fence register is lost and we need to
* revoke concurrent userspace access via GTT mmaps until the hardware has been
* reset and the fence registers have been restored.
*/
void i915_gem_revoke_fences(struct drm_i915_private *dev_priv)
{
int i;
lockdep_assert_held(&dev_priv->drm.struct_mutex);
for (i = 0; i < dev_priv->num_fence_regs; i++) {
struct drm_i915_fence_reg *fence = &dev_priv->fence_regs[i];
if (fence->vma)
i915_gem_release_mmap(fence->vma->obj);
}
}
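A minimal sketch of how this pairs with fence restoration around a GPU reset (the surrounding reset entry points are assumptions, not shown in this hunk):

	/* before touching the hardware: drop userspace GTT mmaps that go
	 * through fences so nobody races the reset
	 */
	i915_gem_revoke_fences(dev_priv);

	/* ... reset the GPU ... */

	/* afterwards: rewrite the fence registers from the tracked state */
	i915_gem_restore_fences(dev_priv);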
/**
* i915_gem_restore_fences - restore fence state
* @dev_priv: i915 device private
......@@ -512,8 +536,8 @@ i915_gem_detect_bit_6_swizzle(struct drm_i915_private *dev_priv)
*/
swizzle_x = I915_BIT_6_SWIZZLE_NONE;
swizzle_y = I915_BIT_6_SWIZZLE_NONE;
} else if (IS_MOBILE(dev_priv) || (IS_GEN3(dev_priv) &&
!IS_G33(dev_priv))) {
} else if (IS_MOBILE(dev_priv) ||
IS_I915G(dev_priv) || IS_I945G(dev_priv)) {
uint32_t dcc;
/* On 9xx chipsets, channel interleave by the CPU is
......
...... (remaining file diffs in this merge are collapsed and not shown)