- 23 January 2015, 3 commits
-
-
Committed by Michel Dänzer

radeon_vm_map_gart can use rdev->gart.pages_entry instead. Also move the masking of the page address into radeon_vm_map_gart from its callers.

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Michel Dänzer

The GART table BO has to be moved out of VRAM for suspend/resume. Any updates to the GART table during that time were silently dropped without this change. This caused GPU lockups on resume in some cases, see the bug reports referenced below. This might also make GPU reset more robust in some cases, as we no longer rely on the GART table in VRAM being preserved across the GPU lockup/reset.

v2: Add logic to radeon_gart_table_vram_pin directly instead of reinstating radeon_gart_restore
v3: Move code after the assignment of rdev->gart.table_addr so that the GART TLB flush can work as intended, add code comment explaining why we're doing this

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=85204
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=86267
Reviewed-by: Christian König <christian.koenig@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
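The shape of the v2/v3 change is roughly the following (a simplified sketch, not the verbatim driver code; the field and helper names mirror the radeon driver but are assumptions here): once the table BO is pinned back into VRAM and table_addr is known, every cached entry is replayed into the table before the TLB flush.

```c
/* Sketch only: replay cached GART entries after the table BO is pinned
 * back into VRAM (names follow the radeon driver by assumption). */
rdev->gart.table_addr = gpu_addr;

if (!r) {
	int i;

	/* Updates made while the table was evicted only reached the
	 * CPU-side cache (gart.pages_entry[]); write them back now. */
	for (i = 0; i < rdev->gart.num_gpu_pages; i++)
		radeon_gart_set_page(rdev, i, rdev->gart.pages_entry[i]);
	mb();	/* make the table writes visible to the GPU */
	radeon_gart_tlb_flush(rdev);
}
```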
-
Committed by Michel Dänzer

get_page_entry calculates the GART page table entry, which is then simply written to the GART page table by set_page_entry. This is a prerequisite for the following fix.

Reviewed-by: Christian König <christian.koenig@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
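In rough terms the split looks like this (a minimal sketch using an r100-style flat PCI GART table; the exact masking and flag handling in the real per-ASIC callbacks differs):

```c
/* Sketch: one callback computes the page-table entry, the other only
 * writes a precomputed entry (r100-style flat table, simplified). */
uint64_t r100_pci_gart_get_page_entry(uint64_t addr, uint32_t flags)
{
	/* For the PCI GART the entry is essentially the bus address. */
	return addr;
}

void r100_pci_gart_set_page(struct radeon_device *rdev, unsigned i,
			    uint64_t entry)
{
	u32 *gtt = rdev->gart.ptr;

	gtt[i] = cpu_to_le32(lower_32_bits(entry));
}
```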
-
- 22 January 2015, 3 commits
-
-
Committed by Oded Gabbay

This patch fixes a bug where the first_pipe index passed into init_pipelines() was a #define instead of the value that is passed into amdkfd by radeon.

Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
-
Committed by Oded Gabbay

This patch fixes a bug in the call to the init_pipeline() interface. The index that was passed to that function didn't take into account the first_pipe value, which represents the first pipe index that is under amdkfd's responsibility.

Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
-
Committed by Oded Gabbay

This patch fixes the behavior of kgd_init_pipeline so that the function no longer automatically increases the pipe_id argument right at its start. This is because the first_pipe value might not always be 1, and because a proper interface function should not hide this information inside its implementation. In other words, the calling function should provide the real pipe_id and not count on kgd_init_pipeline to "fix" it.

Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
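Conceptually, after these three patches the caller computes the absolute HW pipe index itself, e.g. (an illustrative fragment; the identifiers here are assumptions, not the exact amdkfd code):

```c
/* Sketch: the caller passes the real (absolute) HW pipe index;
 * kgd_init_pipeline no longer adds first_pipe internally. */
for (i = 0; i < pipe_count; i++) {
	unsigned int pipe_id = first_pipe + i;	/* absolute pipe index */

	err = kfd2kgd->init_pipeline(dqm->dev->kgd, pipe_id,
				     CIK_HPD_EQ_SIZE,
				     dqm->pipelines_addr + i * CIK_HPD_EQ_SIZE);
	if (err != 0)
		return err;
}
```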
-
- 21 January 2015, 2 commits
-
-
Committed by Andrew Jackson

The I2C address for the TDA9989 and TDA19989 is fixed at 0x34, but the two LSBs of the TDA19988's address are set by two configuration pins on the chip. Irrespective of the chip, the associated CEC peripheral's I2C address is based upon the main I2C address. This patch avoids any special handling required to support systems that contain multiple TDA19988 devices on the same I2C bus.

Signed-off-by: Andrew Jackson <Andrew.Jackson@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
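A hedged sketch of the idea (the 0x34 base comes from the description above; the exact way the two LSBs are folded in is an assumption, not the verbatim patch):

```c
/* Sketch: derive the CEC slave address from the main slave address
 * instead of hard-coding 0x34, so boards with several TDA19988s on one
 * bus get distinct CEC addresses as well. */
unsigned int cec_addr;

/* The two LSBs of the main address are set by the configuration pins;
 * the CEC peripheral is assumed to track them on top of the 0x34 base. */
cec_addr = 0x34 + (client->addr & 0x03);
priv->cec = i2c_new_dummy(client->adapter, cec_addr);
```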
-
Committed by Rui Wang

There are still some places in the fb helper that need to avoid sleeping in panic context. Here's an example:

[ 65.615496] bad: scheduling from the idle thread!
[ 65.620747] CPU: 92 PID: 0 Comm: swapper/92 Tainted: G M E 3.18.0-rc4-7-default+ #20
[ 65.630364] Hardware name: Intel Corporation BRICKLAND/BRICKLAND, BIOS BRHSXSD1.86B.0056.R01.1409242327 09/24/2014
[ 65.641923] ffff88087f693d80 ffff88087f689878 ffffffff81566db9 0000000000000000
[ 65.650226] ffff88087f693d80 ffff88087f689898 ffffffff810871ff ffff88046eb3e0d0
[ 65.658527] ffff88087f693d80 ffff88087f6898c8 ffffffff8107c1fa 000000017f6898b8
[ 65.666830] Call Trace:
[ 65.669557] <#MC> [<ffffffff81566db9>] dump_stack+0x46/0x58
[ 65.675994] [<ffffffff810871ff>] dequeue_task_idle+0x2f/0x40
[ 65.682412] [<ffffffff8107c1fa>] dequeue_task+0x5a/0x80
[ 65.688345] [<ffffffff810804f3>] deactivate_task+0x23/0x30
[ 65.694569] [<ffffffff81569050>] __schedule+0x580/0x7f0
[ 65.700502] [<ffffffff81569739>] schedule_preempt_disabled+0x29/0x70
[ 65.707696] [<ffffffff8156abb6>] __ww_mutex_lock_slowpath+0xb8/0x162
[ 65.714891] [<ffffffff8156acb3>] __ww_mutex_lock+0x53/0x85
[ 65.721125] [<ffffffffa00b3a5d>] drm_modeset_lock+0x3d/0x110 [drm]
[ 65.728132] [<ffffffffa00b3c2a>] __drm_modeset_lock_all+0x8a/0x120 [drm]
[ 65.735721] [<ffffffffa00b3cd0>] drm_modeset_lock_all+0x10/0x30 [drm]
[ 65.743015] [<ffffffffa01af8bf>] drm_fb_helper_pan_display+0x2f/0xf0 [drm_kms_helper]
[ 65.751857] [<ffffffff8132bd21>] fb_pan_display+0xd1/0x1a0
[ 65.758081] [<ffffffff81326010>] bit_update_start+0x20/0x50
[ 65.764400] [<ffffffff813259f2>] fbcon_switch+0x3a2/0x550
[ 65.770528] [<ffffffff813a01c9>] redraw_screen+0x189/0x240
[ 65.776750] [<ffffffff81322f8a>] fbcon_blank+0x20a/0x2d0
[ 65.782778] [<ffffffff8137d359>] ? erst_writer+0x209/0x330
[ 65.789002] [<ffffffff810ba2f3>] ? internal_add_timer+0x63/0x80
[ 65.795710] [<ffffffff810bc137>] ? mod_timer+0x127/0x1e0
[ 65.801740] [<ffffffff813a0cd8>] do_unblank_screen+0xa8/0x1d0
[ 65.808255] [<ffffffff813a0e10>] unblank_screen+0x10/0x20
[ 65.814381] [<ffffffff812ca0d9>] bust_spinlocks+0x19/0x40
[ 65.820508] [<ffffffff81561ca7>] panic+0x106/0x1f5
[ 65.825955] [<ffffffff8102336c>] mce_panic+0x2ac/0x2e0
[ 65.831789] [<ffffffff812c796a>] ? delay_tsc+0x4a/0x80
[ 65.837625] [<ffffffff81024e1f>] do_machine_check+0xbaf/0xbf0
[ 65.844138] [<ffffffff813365d7>] ? intel_idle+0xc7/0x150
[ 65.850166] [<ffffffff8156f03f>] machine_check+0x1f/0x30
[ 65.856195] [<ffffffff813365d7>] ? intel_idle+0xc7/0x150
[ 65.862222] <<EOE>> [<ffffffff814283d5>] cpuidle_enter_state+0x55/0x170
[ 65.869823] [<ffffffff814285a7>] cpuidle_enter+0x17/0x20
[ 65.875852] [<ffffffff81097b08>] cpu_startup_entry+0x2d8/0x370
[ 65.882467] [<ffffffff8102fe29>] start_secondary+0x159/0x180

There's __drm_modeset_lock_all(), which Daniel Vetter introduced for this purpose. We can leverage that without reinventing anything. This patch works with the latest kernel.

Reviewed-by: Rob Clark <robdclark@gmail.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Rui Wang <rui.y.wang@intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
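The gist of the fix (a simplified sketch of one affected path, not the full patch): pass a trylock flag to __drm_modeset_lock_all() when we may be in panic context, and bail out instead of sleeping.

```c
/* Sketch: avoid sleeping locks from panic context by using the trylock
 * variant of modeset locking (simplified; do_pan_display() is a
 * hypothetical helper standing in for the real body). */
static int pan_display(struct fb_var_screeninfo *var, struct fb_info *info)
{
	struct drm_fb_helper *fb_helper = info->par;
	struct drm_device *dev = fb_helper->dev;
	int ret;

	/* In panic context (oops_in_progress) only try to lock. */
	if (__drm_modeset_lock_all(dev, !!oops_in_progress))
		return -EBUSY;

	ret = do_pan_display(fb_helper, var);

	drm_modeset_unlock_all(dev);
	return ret;
}
```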
-
- 19 January 2015, 1 commit
-
-
Committed by Thomas Hellstrom

Fixes a case where we call vmw_fifo_idle() from within a wait function with task state !TASK_RUNNING, which is illegal. In addition, make the locking fine-grained, so that it is performed once for every read and write operation. This is of course more costly, but we don't perform much register access in the timing-critical paths anyway. Instead we have the extra benefit of being sure that we don't forget the hw lock around register accesses. I think the kms code was currently quite buggy w.r.t. this.

This fixes Red Hat Bugzilla Bug 1180796.

Cc: stable@vger.kernel.org
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
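The register-access part of the change looks roughly like this (a sketch under the assumption that vmwgfx still uses the index/value port pair; not the verbatim driver code): each accessor takes a hw spinlock itself, so callers cannot forget it and nothing sleeps.

```c
/* Sketch: fine-grained, non-sleeping locking around each register access
 * (port and field names follow vmwgfx by assumption). */
static u32 vmw_read(struct vmw_private *dev_priv, unsigned int offset)
{
	u32 val;

	spin_lock(&dev_priv->hw_lock);
	outl(offset, dev_priv->io_start + VMWGFX_INDEX_PORT);
	val = inl(dev_priv->io_start + VMWGFX_VALUE_PORT);
	spin_unlock(&dev_priv->hw_lock);

	return val;
}
```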
-
- 18 January 2015, 4 commits
-
-
Committed by Oded Gabbay

This patch replaces the two current amdkfd module parameters with a new one. The parameters being replaced are:

- Maximum number of HSA processes
- Maximum number of queues per process

The new parameter that replaces them is called "Maximum queues per device".

This replacement achieves two goals, as illustrated by the sketch after this list:

- It allows the user to have as many HSA processes as they want (up to a maximum of 512 HSA processes on Kaveri).
- It removes the limit on the number of queues per HSA process. E.g. the user can now have processes with only one queue and other processes with hundreds of queues, whereas before the user couldn't have more than 128 queues per process (by default).

The default value of the new parameter is 4096 (32 * 128, the product of the old defaults). There is almost no additional GART memory required for the default case. As a reminder, this amount of queues requires a little bit below 4MB of GART memory.

v2: In addition, this patch defines a new counter for queue accounting in the DQM structure. This is done because the current counter only counts active queues, which allows the user to create more queues than the max_num_of_queues_per_device module parameter allows. However, we still need the current counter for the runlist packet build process, so the solution is to have a dedicated counter for this accounting.

Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Reviewed-by: Ben Goz <ben.goz@amd.com>
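The new parameter itself is plumbed in the usual module_param() way; a minimal sketch (the parameter name is taken from the description above, the description string is illustrative):

```c
/* Sketch: the replacement module parameter, default 4096 (= 32 * 128). */
int max_num_of_queues_per_device = 4096;
module_param(max_num_of_queues_per_device, int, 0444);
MODULE_PARM_DESC(max_num_of_queues_per_device,
		 "Maximum number of supported queues per device (default 4096)");
```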
-
Committed by Joonyoung Shim

Re-enabling the vblank interrupt is prevented by drm_vblank_off(), so drm_vblank_get() called from mixer_wait_for_vblank() returns an error after drm_vblank_off(). Without handling that error, the vblank reference count gets mismatched by this sequence and we get the warnings below.

setting mode 1920x1080-60Hz@XR24 on connectors 16, crtc 13
[ 19.900793] ------------[ cut here ]------------
[ 19.903959] WARNING: CPU: 0 PID: 0 at drivers/gpu/drm/drm_irq.c:1072 exynos_drm_crtc_finish_pageflip+0xac/0xdc()
[ 19.914076] Modules linked in:
[ 19.917116] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.19.0-rc4-00040-g3d729789-dirty #46
[ 19.925342] Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
[ 19.931437] [<c0014430>] (unwind_backtrace) from [<c001158c>] (show_stack+0x10/0x14)
[ 19.939131] [<c001158c>] (show_stack) from [<c04cdd50>] (dump_stack+0x84/0xc4)
[ 19.946329] [<c04cdd50>] (dump_stack) from [<c00226f4>] (warn_slowpath_common+0x80/0xb0)
[ 19.954382] [<c00226f4>] (warn_slowpath_common) from [<c00227c0>] (warn_slowpath_null+0x1c/0x24)
[ 19.963132] [<c00227c0>] (warn_slowpath_null) from [<c02c20cc>] (exynos_drm_crtc_finish_pageflip+0xac/0xdc)
[ 19.972841] [<c02c20cc>] (exynos_drm_crtc_finish_pageflip) from [<c02cb7ec>] (mixer_irq_handler+0xdc/0x104)
[ 19.982546] [<c02cb7ec>] (mixer_irq_handler) from [<c005c904>] (handle_irq_event_percpu+0x78/0x134)
[ 19.991555] [<c005c904>] (handle_irq_event_percpu) from [<c005c9fc>] (handle_irq_event+0x3c/0x5c)
[ 20.000395] [<c005c9fc>] (handle_irq_event) from [<c005f384>] (handle_fasteoi_irq+0xe0/0x1ac)
[ 20.008885] [<c005f384>] (handle_fasteoi_irq) from [<c005bf88>] (generic_handle_irq+0x2c/0x3c)
[ 20.017463] [<c005bf88>] (generic_handle_irq) from [<c005c254>] (__handle_domain_irq+0x7c/0xec)
[ 20.026128] [<c005c254>] (__handle_domain_irq) from [<c0008698>] (gic_handle_irq+0x30/0x68)
[ 20.034449] [<c0008698>] (gic_handle_irq) from [<c00120c0>] (__irq_svc+0x40/0x74)
[ 20.041893] Exception stack(0xc06fff68 to 0xc06fffb0)
[ 20.046923] ff60: 00000000 00000000 000052f6 c001b460 c06fe000 c07064e8
[ 20.055070] ff80: c04d743c c07392a2 c0739440 c06da340 ef7fca80 00000000 01000000 c06fffb0
[ 20.063212] ffa0: c000f24c c000f250 60000013 ffffffff
[ 20.068245] [<c00120c0>] (__irq_svc) from [<c000f250>] (arch_cpu_idle+0x38/0x3c)
[ 20.075611] [<c000f250>] (arch_cpu_idle) from [<c0050948>] (cpu_startup_entry+0x108/0x16c)
[ 20.083846] [<c0050948>] (cpu_startup_entry) from [<c06aec5c>] (start_kernel+0x3a0/0x3ac)
[ 20.091980] ---[ end trace 2c76ee0500489d1b ]---

Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
-
Committed by Joonyoung Shim

During boot we can see the message below:

[ 3.241728] exynos-mixer 14450000.mixer: Unbalanced pm_runtime_enable!

pm_runtime_enable is already called by the probe function, so remove pm_runtime_enable/disable from mixer_bind and mixer_unbind.

Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
-
Committed by Joonyoung Shim

This fixes the reset code to support a memory-mapped hdmi phy as well as hdmi phys with dedicated i2c lines.

Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
-
- 16 January 2015, 1 commit
-
-
Committed by Alex Deucher

This was accidentally lost in 76a0df85.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
- 15 January 2015, 1 commit
-
-
Committed by Ben Goz

If creation of the first queue failed on DQM, then PQM should unregister the process from DQM.

Signed-off-by: Ben Goz <ben.goz@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
-
- 13 January 2015, 2 commits
-
-
Committed by Alex Deucher

This adds a quirks list to fix stability problems with certain SI boards.

Bug: https://bugs.freedesktop.org/show_bug.cgi?id=76490
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
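The quirk list is a simple PCI-id match table; a sketch of its shape (the struct layout and the single example entry are illustrative, drawn from the bug report above, not guaranteed to match the merged patch):

```c
/* Sketch: per-board quirk table clamping engine/memory clocks on boards
 * known to be unstable at their default dpm settings. */
struct si_dpm_quirk {
	u32 chip_vendor;
	u32 chip_device;
	u32 subsys_vendor;
	u32 subsys_device;
	u32 max_sclk;	/* in 10 kHz units, 0 = no cap */
	u32 max_mclk;
};

static struct si_dpm_quirk si_dpm_quirk_list[] = {
	/* PITCAIRN board from fdo bug 76490: cap mclk */
	{ PCI_VENDOR_ID_ATI, 0x6810, 0x1462, 0x3036, 0, 120000 },
	{ 0, 0, 0, 0 },
};
```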
-
Committed by Christian König

Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 12 January 2015, 6 commits
-
-
Committed by Chris Wilson

If CONFIG_DEBUG_MUTEXES is set, the mutex->owner field is only cleared if the mutex debugging is enabled, which introduces a race in our mutex_is_locked_by() - i.e. we may inspect the old owner value before it is acquired by the new task.

This is the root cause of this error:

diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
index 5cf6731..3ef3736 100644
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -80,13 +80,13 @@ void debug_mutex_unlock(struct mutex *lock)
 			DEBUG_LOCKS_WARN_ON(lock->owner != current);

 		DEBUG_LOCKS_WARN_ON(!lock->wait_list.prev && !lock->wait_list.next);
-		mutex_clear_owner(lock);
 	}

 	/*
 	 * __mutex_slowpath_needs_to_unlock() is explicitly 0 for debug
 	 * mutexes so that we can do it here after we've verified state.
 	 */
+	mutex_clear_owner(lock);
 	atomic_set(&lock->count, 1);
 }

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=87955
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Committed by Chris Wilson

Like Ivybridge, we have reports that we get random hangs when flipping with multiple pipes. Extend

    commit 2a92d5bc
    Author: Chris Wilson <chris@chris-wilson.co.uk>
    Date:   Tue Jul 8 10:40:29 2014 +0100

        drm/i915: Disable RCS flips on Ivybridge

to also apply to Haswell.

Reported-and-tested-by: Scott Tsai <scottt.tw@gmail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=87759
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org # 2a92d5bc drm/i915: Disable RCS flips on Ivybridge
Cc: stable@vger.kernel.org
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Committed by Imre Deak

We apply the RPS interrupt workaround on VLV everywhere except when writing the mask directly while idling the GPU. For consistency do this there too. While at it, also extend the code comment about affected platforms.

I couldn't reproduce the issue fixed by this workaround on VLV by removing the workaround from everywhere, while it's 100% reproducible on SNB using igt/gem_reset_stats/ban-ctx-render. So also add a note that it hasn't been verified whether the workaround really applies to VLV/CHV.

Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Committed by Imre Deak

In

    commit dbea3cea
    Author: Imre Deak <imre.deak@intel.com>
    Date:   Mon Dec 15 18:59:28 2014 +0200

        drm/i915: sanitize RPS resetting during GPU reset

we disable RPS interrupts during GPU reset, but don't apply the necessary GEN6 HW workaround. This leads to a HW lockup during a subsequent "looping batchbuffer" workload. It is triggered by the testcase that submits exactly this kind of workload after a simulated GPU reset. I'm not sure how likely the bug would have triggered otherwise, since we would have applied the workaround anyway shortly after the GPU reset, when enabling GT powersaving from the deferred work. This may also fix unrelated issues, since during driver loading / suspending we also disable RPS interrupts, so we also had a short window during the rest of the loading / resuming where a similar workload could run without the workaround applied.

v2:
- separate the fix to route RPS interrupts to the CPU on GEN9 too into a separate patch (Daniel)

Bisected-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Testcase: igt/gem_reset_stats/ban-ctx-render
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=87429
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Committed by Imre Deak

GEN8+ HW has the option to route PM interrupts to either the CPU or to GT. For GEN8 this was already set correctly to route to the CPU, but not for GEN9, so fix this. Note that when disabling RPS interrupts this was already set correctly, though in that case it didn't matter much except for the possibility of spurious interrupts.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Committed by Hyungwon Hwang

This code is unnecessary because the same logic is already included. Refer to this mail thread [1] for details.

[1] http://lists.freedesktop.org/archives/dri-devel/2015-January/075132.html

Signed-off-by: Hyungwon Hwang <human.hwang@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
-
- 09 January 2015, 1 commit
-
-
Committed by Alex Deucher

Disable dpm on certain problematic boards rather than disabling dpm for the entire chip family, since most boards work fine.

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1386534
https://bugzilla.kernel.org/show_bug.cgi?id=83731

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
- 08 January 2015, 5 commits
-
-
Committed by Oded Gabbay

Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
-
Committed by Alex Deucher

We need to wait for the GPUVM flush to complete. There was some confusion as to how this mechanism was supposed to work. The operation is not atomic. For GPU-initiated invalidations you need to read back a VM register to introduce enough latency for the update to complete.

v2: drop gart changes
v3: just read back rather than polling

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
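The underlying pattern, shown here with the CPU-side MMIO accessors for brevity (a hedged illustration only; the patch itself adjusts the flush packets emitted on the rings, and the exact register usage is an assumption), is write-then-read-back:

```c
/* Sketch: trigger the invalidate, then read the register back so the
 * posted write has actually landed before the flush is treated as done. */
WREG32(VM_INVALIDATE_REQUEST, 1 << vm_id);
(void)RREG32(VM_INVALIDATE_REQUEST);	/* read back introduces the latency */
```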
-
Committed by Alex Deucher

We need to wait for the GPUVM flush to complete. There was some confusion as to how this mechanism was supposed to work. The operation is not atomic. For GPU-initiated invalidations you need to read back a VM register to introduce enough latency for the update to complete.

v2: drop gart changes
v3: just read back rather than polling

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
Committed by Alex Deucher

We need to wait for the GPUVM flush to complete. There was some confusion as to how this mechanism was supposed to work. The operation is not atomic. For GPU-initiated invalidations you need to read back a VM register to introduce enough latency for the update to complete.

v2: drop gart changes
v3: just read back rather than polling

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
Committed by Michel Dänzer

The work queue couldn't reliably prevent the SW ring buffer from overflowing, so dmesg was spammed by

    kfd kfd: Interrupt ring overflow, dropping interrupt.

messages when running e.g. the Atlantis Substance demo from https://wiki.unrealengine.com/Linux_Demos on Kaveri.

Since the SW ring buffer doesn't actually do anything at this point, just remove it for now. When actual interrupt processing code is added to amdkfd, it should try to do things immediately and only defer to work queues when necessary.

Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
-
- 07 January 2015, 3 commits
-
-
Committed by Oded Gabbay

This patch changes kfd_ioctl() to be very similar to drm_ioctl(). The patch defines an array of amdkfd_ioctls, which maps each IOCTL definition to its ioctl function. kfd_ioctl() uses that mapping to call the appropriate ioctl function through a function pointer. This patch also declares a new typedef for the ioctl function pointer.

v2: Renamed KFD_COMMAND_(START|END) to AMDKFD_...

Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
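A sketch of the dispatch-table approach, modelled on drm_ioctl() (the macro, struct, and handler names here are illustrative rather than the exact amdkfd ones):

```c
/* Sketch: table mapping ioctl numbers to handlers with a common signature. */
typedef int amdkfd_ioctl_t(struct file *filep, struct kfd_process *p,
			   void *data);

struct amdkfd_ioctl_desc {
	unsigned int cmd;
	amdkfd_ioctl_t *func;
	const char *name;
};

#define AMDKFD_IOCTL_DEF(ioctl, _func)					\
	[_IOC_NR(ioctl)] = { .cmd = ioctl, .func = _func, .name = #ioctl }

static const struct amdkfd_ioctl_desc amdkfd_ioctls[] = {
	AMDKFD_IOCTL_DEF(AMDKFD_IOC_GET_VERSION, kfd_ioctl_get_version),
	AMDKFD_IOCTL_DEF(AMDKFD_IOC_CREATE_QUEUE, kfd_ioctl_create_queue),
	AMDKFD_IOCTL_DEF(AMDKFD_IOC_DESTROY_QUEUE, kfd_ioctl_destroy_queue),
};
```

kfd_ioctl() then looks up amdkfd_ioctls[_IOC_NR(cmd)] and calls the handler through ->func.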
-
Committed by Oded Gabbay

This patch reformats the ioctl definitions in kfd_ioctl.h to be similar to the drm ioctl definition style.

v2: Renamed KFD_COMMAND_(START|END) to AMDKFD_...

Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
-
Committed by Oded Gabbay

This patch moves the copy_to_user() and copy_from_user() calls from the individual ioctl functions in amdkfd to the common kfd_ioctl() function, as this code is shared by all ioctls. This was done following the example of drm_ioctl.c.

Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
-
- 06 January 2015, 5 commits
-
-
Committed by Dan Carpenter

The test:

	if (size > RADEON_MAX_TEXTURE_SIZE) {

"size" is an integer and it's controlled by the user, so it can be negative and the test can underflow. Later we use "size" in:

	dwords = size / 4;
	...
	RADEON_COPY_MT(buffer, data, (int)(dwords * sizeof(u32)));

It causes memory corruption to copy a negative size buffer.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
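The obvious shape of the fix is to reject negative values before "size" is reused as a copy length (a sketch of the idea, not necessarily the exact patch; the error string is illustrative):

```c
/* Sketch: "size" comes from userspace as a signed int, so check the
 * lower bound as well before it is used to derive a copy length. */
if (size < 0 || size > RADEON_MAX_TEXTURE_SIZE) {
	DRM_ERROR("invalid texture blit size %d\n", size);
	return -EINVAL;
}
```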
-
Committed by Alex Deucher

Enabling bapm seems to cause clocking problems on some KV configurations. Disable it by default for now.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
Committed by Alex Deucher

The check was already in place in the dp mode_valid check, but radeon_dp_get_dp_link_clock() never returned the high clock that mode_valid was checking for, because that function clipped the clock based on the hw capabilities. Add an explicit check in the mode_valid function.

Bug: https://bugs.freedesktop.org/show_bug.cgi?id=87172

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
Committed by Alex Deucher

Make it consistent with the sad code for other asics to deal with monitors that don't report sads.

Bug: https://bugzilla.kernel.org/show_bug.cgi?id=89461

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
Committed by Alex Deucher

Enable all three in the driver. Early documentation indicated the 3rd one was used for something else, but that is not the case.

v2: handle disable as well

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
- 05 January 2015, 1 commit
-
-
Committed by Ben Goz

This patch fixes a bug where deallocate_vmid() didn't actually unmap the VMID<-->PASID mapping (in the registers). That can cause undefined behavior. This bug only occurs in non-HWS mode.

Signed-off-by: Ben Goz <ben.goz@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
-
- 29 December 2014, 2 commits
-
-
Committed by Oded Gabbay

This patch changes radeon_kfd_init(), which is used to initialize the interface between radeon and amdkfd, so that the interface is initialized only if amdkfd was built, either as a module or into the kernel image. In the module case, symbol_request() is used (same as the old code). In the built-in case, a direct call to kgd2kfd_init() is made. In all other cases, radeon_kfd_init() just returns false.

This patch is necessary because with the following specific configuration - 32-bit kernel, no module support, randomized kernel base and no hibernation - symbol_request() doesn't work as expected: it doesn't return NULL if the symbol doesn't exist, which makes the kernel panic.

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
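The resulting structure is roughly the following (a sketch assuming CONFIG_HSA_AMD / CONFIG_HSA_AMD_MODULE Kconfig symbols and a kgd2kfd_init() entry point; argument lists and globals are simplified):

```c
/* Sketch: pick the init path based on how (and whether) amdkfd is built. */
bool radeon_kfd_init(void)
{
#if defined(CONFIG_HSA_AMD_MODULE)
	/* amdkfd may be a module: resolve the symbol at runtime */
	kgd2kfd_init_p = symbol_request(kgd2kfd_init);
	if (kgd2kfd_init_p == NULL)
		return false;
	return kgd2kfd_init_p(KFD_INTERFACE_VERSION, &kgd2kfd);
#elif defined(CONFIG_HSA_AMD)
	/* amdkfd is built into the kernel image: call it directly */
	return kgd2kfd_init(KFD_INTERFACE_VERSION, &kgd2kfd);
#else
	/* amdkfd was not built at all */
	return false;
#endif
}
```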
-
Committed by Sasha Levin

Commit "amdkfd: use sizeof(long) granularity for the pasid bitmask" calculated the number of longs it would need, but ended up allocating that number of bytes rather than longs. Fix that silly error and allocate the amount of memory really required.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
-