- 15 June 2017 (2 commits)
-
-
By Huang Rui

The gpu_info firmware is currently released right after its data is used. But when the system enters suspend, the upper firmware-cache layer caches all firmware names, and at that point gpu_info fails to load again. This looks like an upper-layer issue; in any case we should not release the gpu_info firmware until the device is torn down.

[ 903.236589] cache_firmware: amdgpu/vega10_sdma1.bin
[ 903.236590] fw_set_page_data: fw-amdgpu/vega10_sdma1.bin buf=ffff88041eee10c0 data=ffffc90002561000 size=17408
[ 903.236591] cache_firmware: amdgpu/vega10_sdma1.bin ret=0
[ 903.464160] __allocate_fw_buf: fw-amdgpu/vega10_gpu_info.bin buf=ffff88041eee2c00
[ 903.471815] (NULL device *): loading /lib/firmware/updates/4.11.0-custom/amdgpu/vega10_gpu_info.bin failed with error -2
[ 903.482870] (NULL device *): loading /lib/firmware/updates/amdgpu/vega10_gpu_info.bin failed with error -2
[ 903.492716] (NULL device *): loading /lib/firmware/4.11.0-custom/amdgpu/vega10_gpu_info.bin failed with error -2
[ 903.503156] (NULL device *): direct-loading amdgpu/vega10_gpu_info.bin

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
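For illustration, a minimal sketch of the intended firmware lifetime (the struct and function names here are made up, not the actual amdgpu symbols): the firmware handle is kept in the device structure and only released at device teardown, so the firmware-cache layer can still find the entry across suspend/resume.

```c
#include <linux/firmware.h>

struct my_gpu {
	struct device *dev;
	const struct firmware *gpu_info_fw;	/* held until device fini */
};

static int my_gpu_parse_gpu_info(struct my_gpu *adev)
{
	int r;

	r = request_firmware(&adev->gpu_info_fw,
			     "amdgpu/vega10_gpu_info.bin", adev->dev);
	if (r)
		return r;

	/* ... parse adev->gpu_info_fw->data here ... */

	/* Do NOT call release_firmware() here: the firmware cache must be
	 * able to cache this entry when the system suspends. */
	return 0;
}

static void my_gpu_fini(struct my_gpu *adev)
{
	release_firmware(adev->gpu_info_fw);	/* released only at teardown */
	adev->gpu_info_fw = NULL;
}
```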
-
By Hawking Zhang

Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 09 June 2017 (2 commits)
-
-
By Alex Xie

There are two identical function prototypes in the same header file.

Signed-off-by: Alex Xie <AlexBin.Xie@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Harish Kasiviswanathan

Add a VM update mode module parameter (amdgpu.vm_update_mode) that can be used to control how VM PDEs/PTEs are updated for Graphics and Compute. BIT0 controls Graphics and BIT1 Compute:
BIT0 [= 0] Graphics updated by SDMA, [= 1] by CPU
BIT1 [= 0] Compute updated by SDMA, [= 1] by CPU
By default vm_update_mode = 2 only on large-BAR systems, meaning Graphics VMs are updated via SDMA and Compute VMs are updated via CPU. For all other systems the default is vm_update_mode = 0.

Signed-off-by: Harish Kasiviswanathan <Harish.Kasiviswanathan@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
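A hedged sketch of how such a bit-mask module parameter can be declared and consumed (the parameter default and helper names are illustrative, not the exact amdgpu code):

```c
#include <linux/module.h>

/* -1 = auto (driver decides), bit0 = Graphics by CPU, bit1 = Compute by CPU */
static int vm_update_mode = -1;
module_param(vm_update_mode, int, 0444);
MODULE_PARM_DESC(vm_update_mode,
	"VM update mode (0 = SDMA, bit0 = graphics by CPU, bit1 = compute by CPU, -1 = auto)");

static bool vm_use_cpu_for_update(bool is_compute, bool large_bar)
{
	int mode = vm_update_mode;

	if (mode == -1)		/* auto: compute by CPU only on large-BAR systems */
		mode = large_bar ? 2 : 0;

	return is_compute ? (mode & 2) : (mode & 1);
}
```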
-
- 08 June 2017 (3 commits)
-
-
By Felix Kuehling

If amdgpu supports SI, add a module parameter to control SI support. It is off by default in amdgpu as long as SI support is experimental, while it is on by default in radeon.

Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Acked-by: Michel Dänzer <michel.daenzer@amd.com>
[ Michel Dänzer: Squash in amdgpu_si_support initialization fix ]
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
-
By Felix Kuehling

If amdgpu supports CIK, add a module parameter to control CIK support. It is on by default in amdgpu, while it is off by default in radeon.

Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Acked-by: Michel Dänzer <michel.daenzer@amd.com>
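A small sketch of what the two toggles from this and the previous commit might look like (variable names and defaults are illustrative; the assumption here is that the driver gates them on the SI/CIK Kconfig options):

```c
#include <linux/module.h>

#ifdef CONFIG_DRM_AMDGPU_SI
static int si_support;			/* off by default while SI support is experimental */
module_param(si_support, int, 0444);
MODULE_PARM_DESC(si_support, "SI support (1 = enabled, 0 = disabled (default))");
#endif

#ifdef CONFIG_DRM_AMDGPU_CIK
static int cik_support = 1;		/* on by default in amdgpu */
module_param(cik_support, int, 0444);
MODULE_PARM_DESC(cik_support, "CIK support (1 = enabled (default), 0 = disabled)");
#endif
```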
-
By Alex Deucher

They are gfx related, not general helpers.

Reviewed-by: Alex Xie <AlexBin.Xie@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 07 June 2017 (1 commit)
-
-
By Huang Rui

Something writes over the first 8 MB, so reserve this region on vega10 until we root-cause it.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 01 June 2017 (9 commits)
-
-
By Andres Rodriguez

Use an LRU policy to map usermode rings to HW compute queues. Most compute clients use one queue, and usually the first queue available. This results in poor pipe/queue work distribution when multiple compute apps are running: in most cases pipe 0 queue 0 is the only queue that gets used. In order to better distribute work across multiple HW queues, adopt a policy that maps the usermode ring ids to the least recently used HW queue. This fixes the large majority of multi-app compute workloads sharing the same HW queue even though 7 other queues are available.

v2: use ring->funcs->type instead of ring->hw_ip
v3: remove amdgpu_queue_mapper_funcs
v4: change ring_lru_list_lock to a spinlock, grab it only once in lru_get()

Signed-off-by: Andres Rodriguez <andresx7@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
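A simplified sketch of the LRU selection described above, with the list protected by a spinlock as the v4 note mentions (structure and field names are illustrative):

```c
#include <linux/list.h>
#include <linux/spinlock.h>

struct hw_ring {
	struct list_head lru_node;	/* position in the LRU list */
	int id;
};

static DEFINE_SPINLOCK(ring_lru_lock);
static LIST_HEAD(ring_lru_list);	/* least recently used ring sits at the head */

/* Pick the least recently used ring and move it to the tail (most recent). */
static struct hw_ring *ring_lru_get(void)
{
	struct hw_ring *ring = NULL;

	spin_lock(&ring_lru_lock);
	if (!list_empty(&ring_lru_list)) {
		ring = list_first_entry(&ring_lru_list, struct hw_ring, lru_node);
		list_move_tail(&ring->lru_node, &ring_lru_list);
	}
	spin_unlock(&ring_lru_lock);

	return ring;
}
```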
-
By Andres Rodriguez

Add amdgpu_queue_mgr, a mechanism that decouples usermode ring ids from the kernel's ring ids. The queue manager maintains a per-file-descriptor map of user ring ids to amdgpu_ring pointers. Once a map entry is created it is permanent (this is required to maintain FIFO execution guarantees for a context's ring). Different queue-mapping policies can be configured for each HW IP. Currently all HW IPs use the identity mapper, i.e. the kernel ring id is equal to the user ring id. The purpose of this mechanism is to distribute the load across multiple queues more effectively for HW IPs that support multiple rings. Userspace clients are unable to check whether a specific resource is in use by a different client, therefore it is up to the kernel driver to make the optimal choice.

v2: remove amdgpu_queue_mapper_funcs
v3: made amdgpu_queue_mgr per context instead of per fd
v4: add context_put on error paths
v5: rebase and include the new IPs UVD_ENC & VCN_*
v6: drop unused amdgpu_ring_is_valid_index (Alex)

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Andres Rodriguez <andresx7@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
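A hedged sketch of the mapping idea: user ring ids index a small table of HW ring pointers, and an entry, once created, never changes. Names and bounds are illustrative, not the actual amdgpu_queue_mgr code.

```c
#include <linux/mutex.h>
#include <linux/errno.h>

#define MAX_USER_RINGS 16		/* illustrative bound */

struct hw_ring;				/* opaque HW ring, as in the previous sketch */

struct queue_mapper {
	struct mutex lock;		/* assume mutex_init() ran at context creation */
	struct hw_ring *map[MAX_USER_RINGS];	/* user ring id -> HW ring, fixed once set */
};

static int queue_mgr_map(struct queue_mapper *m, unsigned int user_ring,
			 struct hw_ring *(*pick)(void), struct hw_ring **out)
{
	if (user_ring >= MAX_USER_RINGS)
		return -EINVAL;

	mutex_lock(&m->lock);
	if (!m->map[user_ring])
		m->map[user_ring] = pick();	/* policy hook: identity, LRU, ... */
	*out = m->map[user_ring];
	mutex_unlock(&m->lock);

	return *out ? 0 : -ENOENT;
}
```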
-
By Andres Rodriguez

Instead of picking an arbitrary queue for the KIQ, search for one according to policy. The queue must be unused. Also report the KIQ as an unavailable resource to KFD. In testing I ran into KCQ initialization issues when using pipes 2/3 of MEC2 for the KIQ, therefore the policy disallows grabbing one of these.

v2: fix (ring.me + 1) to (ring.me - 1) in amdgpu_amdkfd_device_init

Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Andres Rodriguez <andresx7@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
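A rough sketch of such a selection policy, assuming a flat queue bitmap laid out as mec * pipes * queues (the constants and layout are illustrative, not the real gfx configuration):

```c
#include <linux/bitops.h>
#include <linux/errno.h>

#define QUEUES_PER_PIPE  8
#define PIPES_PER_MEC    4
#define NUM_MEC          2
#define TOTAL_QUEUES     (NUM_MEC * PIPES_PER_MEC * QUEUES_PER_PIPE)

/* Pick a free queue for the KIQ, skipping pipes 2/3 of MEC2 (mec index 1),
 * which the commit says caused KCQ init issues in testing. */
static int pick_kiq_queue(const unsigned long *used_queues,
			  int *mec, int *pipe, int *queue)
{
	unsigned int bit;

	for_each_clear_bit(bit, used_queues, TOTAL_QUEUES) {
		int m = bit / (PIPES_PER_MEC * QUEUES_PER_PIPE);
		int p = (bit / QUEUES_PER_PIPE) % PIPES_PER_MEC;

		if (m == 1 && p >= 2)	/* policy: never grab MEC2 pipes 2/3 */
			continue;

		*mec = m;
		*pipe = p;
		*queue = bit % QUEUES_PER_PIPE;
		return 0;
	}

	return -ENOSPC;	/* no suitable queue left */
}
```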
-
By Andres Rodriguez

Pipes provide better concurrency than queues, therefore we want to make sure that apps use queues from different pipes whenever possible. Optimize for the trivial case where an app consumes rings in order, so adjacent rings should not belong to the same pipe.

Reviewed-by: Edward O'Callaghan <funfunctor@folklore1984.net>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Andres Rodriguez <andresx7@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
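The ordering idea can be sketched in a few lines: enumerate queues pipe-first, so consecutive ring indices land on different pipes (numbers and names are illustrative):

```c
/* Map a compute ring index to pipe/queue so that adjacent ring indices
 * hit different pipes: ring 0 -> pipe 0, ring 1 -> pipe 1, and so on. */
static void ring_index_to_pipe_queue(int ring, int num_pipes,
				     int *pipe, int *queue)
{
	*pipe  = ring % num_pipes;	/* adjacent rings use different pipes */
	*queue = ring / num_pipes;	/* wrap to the next queue once all pipes are used */
}
```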
-
By Andres Rodriguez

Previously the queue/pipe split with kfd operated at pipe granularity. This patch allows amdgpu to take ownership of an arbitrary set of queues. It also consolidates the last few magic numbers in the compute initialization process into mec_init.

v2: support for gfx9
v3: renamed AMDGPU_MAX_QUEUES to AMDGPU_MAX_COMPUTE_QUEUES
v4: fix off-by-one in num_mec checks in *_compute_queue_acquire

Reviewed-by: Edward O'Callaghan <funfunctor@folklore1984.net>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Andres Rodriguez <andresx7@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Andres Rodriguez

Make amdgpu the owner of all per-pipe state of the HQDs. This change will allow us to split the queues between kfd and amdgpu with queue granularity instead of pipe granularity. This patch also fixes kfd allocating an HDP_EOP region for its 3 pipes which went unused.

v2: support for gfx9
v3: fix gfx7 HPD initialization

Reviewed-by: Edward O'Callaghan <funfunctor@folklore1984.net>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Andres Rodriguez <andresx7@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Christian König

Rename adjust_mc_addr to get_vm_pde and check the address bits in one place.

v2: handle vcn as well, keep setting the valid bit manually, add a BUG_ON() for GMC v6, v7 and v8 as well
v3: handle vcn_v1_0_enc_ring_emit_vm_flush as well
v4: fix the BUG_ON mask for GFX6-8

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
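A minimal sketch of the get_vm_pde idea, assuming an illustrative address mask and valid bit (the real masks differ per GMC generation):

```c
#include <linux/bug.h>
#include <linux/types.h>

#define PDE_VALID_BIT	0x1ULL
#define PDE_ADDR_MASK	0x0000FFFFFFFFF000ULL	/* illustrative supported address bits */

/* Turn a page-directory base address into a PDE, checking that no
 * unsupported address bits are set (mirrors the BUG_ON in the commit). */
static u64 get_vm_pde(u64 addr)
{
	BUG_ON(addr & ~PDE_ADDR_MASK);
	return addr | PDE_VALID_BIT;
}
```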
-
By Hawking Zhang

Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Shirish S

amdgpu_device_resume() and amdgpu_device_init() contain a time-consuming, blocking call to amdgpu_late_init(), which sets the clockgating state of all IP blocks. This patch defers only this clockgating-state setup until after the amdgpu driver has resumed, ideally before the UI comes up, or in some cases after it. With this change the resume time of amdgpu_device drops from 1.299 s to 0.199 s, which further helps reduce the overall system resume time.

V1: made the optimization applicable during driver load as well.

TEST (for ChromiumOS on STONEY only):
* UI comes up
* amdgpu_late_init() gets called consistently and no errors are reported

Signed-off-by: Shirish S <shirish.s@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
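A hedged sketch of deferring the clockgating setup with a delayed work item (the struct, function names and delay value are illustrative, not the actual amdgpu implementation):

```c
#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

struct my_adev {
	struct delayed_work late_init_work;
};

static void late_init_clockgating_func(struct work_struct *work)
{
	struct my_adev *adev =
		container_of(work, struct my_adev, late_init_work.work);

	/* set the clockgating state for all IP blocks here
	 * (previously done inline in init/resume) */
	(void)adev;
}

static int my_adev_init(struct my_adev *adev)
{
	INIT_DELAYED_WORK(&adev->late_init_work, late_init_clockgating_func);
	/* defer during driver load as well (the V1 note) */
	schedule_delayed_work(&adev->late_init_work, msecs_to_jiffies(100));
	return 0;
}

static int my_adev_resume(struct my_adev *adev)
{
	/* ... restore hardware state ... */

	/* resume returns immediately; clockgating is programmed shortly after */
	schedule_delayed_work(&adev->late_init_work, msecs_to_jiffies(100));
	return 0;
}
```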
-
- 25 May 2017 (16 commits)
-
-
By Marek Olšák

v2: bump the DRM version

Signed-off-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Chunming Zhou

The fence in dep_sync cannot be optimized.

Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Tested and Reviewed-by: Roger.He <Hongbo.He@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Chunming Zhou

The following ioctls will return -ENODEV:
amdgpu_cs_ioctl
amdgpu_cs_wait_ioctl
amdgpu_cs_wait_fences_ioctl
amdgpu_gem_va_ioctl
amdgpu_info_ioctl

v2: only for map and replace cases in amdgpu_gem_va_ioctl

Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
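Presumably each affected ioctl gains a guard near its entry point; a minimal sketch, assuming a device flag that records the lost state (the flag and function names are made up):

```c
#include <linux/errno.h>
#include <linux/types.h>

struct my_adev {
	bool gpu_lost;	/* illustrative: set when device/VRAM state is lost */
};

/* Guard placed at the top of the affected ioctls (cs, wait, gem_va, info). */
static int my_ioctl_common_check(struct my_adev *adev)
{
	if (adev->gpu_lost)
		return -ENODEV;	/* userspace must recreate its context */
	return 0;
}
```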
-
By Chunming Zhou

Back up the first 64 bytes of the GART table as a reset magic and check whether the magic is still the same after a GPU hardware reset.

v2: use memcmp instead of a manual comparison

Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
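A small sketch of the reset-magic idea, assuming a CPU mapping of the GART table is available (struct and field names are illustrative):

```c
#include <linux/string.h>
#include <linux/types.h>

#define RESET_MAGIC_LEN 64

struct my_adev {
	u8 reset_magic[RESET_MAGIC_LEN];	/* copy of the first 64 bytes of the GART table */
	void *gart_table_cpu_addr;
};

/* Save the magic before the GPU reset ... */
static void save_reset_magic(struct my_adev *adev)
{
	memcpy(adev->reset_magic, adev->gart_table_cpu_addr, RESET_MAGIC_LEN);
}

/* ... and check afterwards whether the GART contents survived the HW reset. */
static bool gart_survived_reset(struct my_adev *adev)
{
	return memcmp(adev->reset_magic, adev->gart_table_cpu_addr,
		      RESET_MAGIC_LEN) == 0;
}
```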
-
By Leo Liu

Signed-off-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Leo Liu

VCN is the new media block on Raven. Add core support and the ring and IB tests for decode.

Signed-off-by: Leo Liu <leo.liu@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Monk Liu

1. TDR will kick out the guilty job if its hang count exceeds the threshold given by the kernel parameter "job_hang_limit", so that a bad command stream cannot cause GPU hangs indefinitely. By default this threshold is 1, so a job is kicked out after it hangs once.
2. If a job times out, the TDR routine will no longer reset all scheds/rings; instead it only resets the one indicated by @job of amdgpu_sriov_gpu_reset. That way we don't need to reset and recover every sched/ring when we already know which job caused the GPU hang.
3. Unblock sriov_gpu_reset for the AI family.

V2:
1: put the kickout of the guilty job after the sched is parked.
2: since parking the scheduler prior to kickout already takes a while, we can do a last check on the job in question before doing the hw_reset.

TODO:
1: when a job is considered guilty, we should mark some flag in its fence status, and let the UMD side be aware that this fence signaled not because the job completed but because it hung.
2: if a GPU reset causes all video memory to be lost, we need to introduce a new policy to implement TDR, e.g. drop all jobs not yet signaled, and have all IOCTLs on this device return ERROR DEVICE_LOST. This will be implemented later.

Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
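A hedged sketch of the kick-out decision from point 1, assuming a per-job hang counter (names and structure are illustrative, not the actual TDR code):

```c
#include <linux/types.h>

/* Illustrative module parameter; default 1 means a job is kicked out
 * the first time it is caught hanging. */
static int job_hang_limit = 1;

struct my_job {
	int hang_count;		/* times this job was found at the head of a hung ring */
	bool guilty;
};

/* Called from the TDR path after the scheduler for the hung ring is parked. */
static bool job_should_be_kicked_out(struct my_job *job)
{
	job->hang_count++;
	if (job->hang_count >= job_hang_limit) {
		job->guilty = true;	/* TODO per commit: also flag the fence for the UMD */
		return true;		/* drop it instead of resubmitting after reset */
	}
	return false;
}
```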
-
By Chunming Zhou

v2: directly return for the 'if' case.

Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Chunming Zhou

This is an improvement over the previous patch. sched_sync stores the fences that could be skipped as "scheduled"; when the job is executed, we don't need the pipeline sync if all fences in sched_sync have signaled, otherwise we still insert the pipeline sync.

v2: handle the error when adding a fence to the sync fails.

Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com> (v1)
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
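A minimal sketch of the execution-time check, assuming sched_sync is a simple fence array (the real container differs; dma_fence_is_signaled() is the standard kernel primitive):

```c
#include <linux/dma-fence.h>

#define MAX_SCHED_DEPS 16	/* illustrative bound */

struct sched_sync {
	struct dma_fence *fences[MAX_SCHED_DEPS];
	unsigned int count;
};

/* If every dependency that was merely "scheduled" has actually signaled by
 * the time the job runs, the explicit pipeline sync can be skipped. */
static bool need_pipeline_sync(const struct sched_sync *sync)
{
	unsigned int i;

	for (i = 0; i < sync->count; i++)
		if (!dma_fence_is_signaled(sync->fences[i]))
			return true;	/* still pending: keep the pipeline sync */

	return false;
}
```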
-
By Roger.He

Cover the following case:
1. A task binds/unbinds GART memory but has not yet added it to adev->gtt_list.
2. A GPU reset happens at this time; GART recovery only restores the entries already on adev->gtt_list.

Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Roger.He <Hongbo.He@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Nikola Pajkovsky

The else branch is pointless when it sits right at the end of the function; also use unlikely() on the error path.

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Nikola Pajkovsky <npajkovsky@suse.cz>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Monk Liu

Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Monk Liu

AI is affected: the CP/HW team requires the KMD to insert FRAME_CONTROL(end) after the last IB and before the fence of the DMA frame. This makes sure the caches are flushed, and it is a mandatory change regardless of MCBP/SR-IOV or bare-metal, because the new CP hardware no longer flushes the cache for each IB; it leaves that to the KMD now. With this patch, certain MCBP hang issues when rendering Vulkan/chained-IB workloads are resolved.

v2: drop gfx8 changes; gfx8 is not affected (Alex)

Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Shaoyun Liu

The usage of the KIQ should not depend on virtualization.

Signed-off-by: Shaoyun Liu <Shaoyun.Liu@amd.com>
Reviewed-by: Andres Rodriguez <andresx7@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By Christian König

I couldn't figure out what this was originally good for, but we don't use it any more.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
By David Panariti

KIQ is the Kernel Interface Queue for managing the MEC. Rather than setting up rings via direct MMIO of ring registers, the rings are configured via special packets sent to the KIQ. This allows the MEC to better manage shared resources and certain power events.

v2: squash in s3/s4 fix from Rex
v3: further fixes from Rex

Signed-off-by: David Panariti <David.Panariti@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Tom St Denis <tom.stdenis@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 16 May 2017 (1 commit)
-
-
By Masahiro Yamada

Include <drm/*.h> instead of relative paths from include/drm, then remove the -Iinclude/drm compiler flag.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1493009447-31524-4-git-send-email-yamada.masahiro@socionext.com
-
- 11 May 2017 (1 commit)
-
-
By Chunming Zhou

The problem is that executing the jobs in the right order doesn't give you the right result, because consecutive jobs executed on the same engine are pipelined. In other words, job B does its buffer read before job A has written its result.

Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 10 May 2017 (3 commits)
-
-
By Daniel Vetter

If we restrict this helper to only KMS drivers (which is the case) we can look up the correct mode easily ourselves. But it's a bit tricky:

- All legacy drivers look at crtc->hwmode. But that is updated already at the beginning of the modeset helper, which means when we disable a pipe. Hence the final timestamps might be a bit off. But since this is an existing bug I'm not going to change it, but just try to be bug-for-bug compatible with the current code. This only applies to radeon & amdgpu.
- i915 tries to get it perfect by updating crtc->hwmode when the pipe is off (i.e. vblank->enabled = false).
- All other atomic drivers look at crtc->state->adjusted_mode. Those that look at state->requested_mode simply don't adjust their mode, so it's the same. That has two problems: accessing crtc->state from interrupt handling code is unsafe, and it's updated before we shut down the pipe. For nonblocking modesets it's even worse.

For atomic drivers try to implement what i915 does. To do that we add a new hwmode field to the vblank structure, and update it from drm_calc_timestamping_constants(). For atomic drivers that's called from the right spot by the helper library already, so all fine. But for safety let's enforce that. For legacy drivers this function is only called at the end (oh the fun), which is broken, so again let's not bother and just stay bug-for-bug compatible.

The benefit is that we can use drm_calc_vbltimestamp_from_scanoutpos directly to implement ->get_vblank_timestamp in every driver, deleting a lot of code.

v2: Completely new approach, trying to mimic the i915 solution.
v3: Fixup kerneldoc.
v4: Drop the WARN_ON to check that the vblank is off, atomic helpers currently unconditionally call this. Recomputing the same stuff should be harmless.
v5: Fix typos and move misplaced hunks to the right patches (Neil).
v6: Undo hunk movement (kbuild).

Cc: Mario Kleiner <mario.kleiner@tuebingen.mpg.de>
Cc: Eric Anholt <eric@anholt.net>
Cc: Rob Clark <robdclark@gmail.com>
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Reviewed-by: Neil Armstrong <narmstrong@baylibre.com>
Acked-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170509140329.24114-4-daniel.vetter@ffwll.ch
-
By Daniel Vetter

It's overkill to have a flag parameter which is essentially used just as a boolean. This takes care of the core plus adjusting drivers. Adjusting the scanout position callback is a bit harder, since radeon also supplies its own driver-private flags in there.

v2: Fixup misplaced hunks (Neil).
v3: kbuild says v1 was better ...

Cc: Mario Kleiner <mario.kleiner@tuebingen.mpg.de>
Cc: Eric Anholt <eric@anholt.net>
Cc: Rob Clark <robdclark@gmail.com>
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170509140329.24114-2-daniel.vetter@ffwll.ch
-
By Daniel Vetter

There's really no reason for anything more:

- Calling this while the crtc vblank stuff isn't set up is a driver bug. Those places already DRM_ERROR.
- Calling this when the crtc is off is either a driver bug (calling drm_crtc_handle_vblank at the wrong time) or a core bug (for anything else). Again, we DRM_ERROR.
- EINVAL is checked at higher levels already, and if we'd use struct drm_crtc * instead of (dev, pipe) it would be really obvious that those are again core bugs.

The only valid failure mode is crap hardware that couldn't sample a useful timestamp, to ask the core to just grab a not-so-accurate timestamp. Bool is perfectly fine for that.

v2: Also fix up the one caller, I lost that in the shuffling (Jani).
v3: Fixup commit message (Neil).

Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Mario Kleiner <mario.kleiner@tuebingen.mpg.de>
Cc: Eric Anholt <eric@anholt.net>
Cc: Rob Clark <robdclark@gmail.com>
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Reviewed-by: Neil Armstrong <narmstrong@baylibre.com>
Acked-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170509140329.24114-1-daniel.vetter@ffwll.ch
-
- 06 May 2017 (2 commits)
-
-
By Guenter Roeck

alpha:allmodconfig fails to build as follows:

drivers/gpu/drm/amd/amdgpu/amdgpu.h:1006:2: error: expected identifier before '(' token
drivers/gpu/drm/amd/amdgpu/amdgpu.h:1011:28: error: 'NGG_BUF_MAX' undeclared here

The problem is not really the enum definition of NGG_BUF_MAX but PARAM, which happens to be defined differently for alpha and a couple of other architectures. Use less generic defines for the NGG enums to solve the problem.

Fixes: bce23e00 ("drm/amdgpu: add NGG parameters")
Cc: Christian König <christian.koenig@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
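The fix pattern can be sketched as follows; the enumerator names below illustrate the approach rather than the exact identifiers used:

```c
/* Before: a generic enumerator name can collide with an arch-specific
 * macro (e.g. PARAM on alpha), breaking the build. */
#if 0
enum ngg_buf { PRIM, POS, CNTL, PARAM, NGG_BUF_MAX };
#endif

/* After: prefix the enumerators so they cannot clash with unrelated defines. */
enum amdgpu_ngg_buf {
	NGG_PRIM,
	NGG_POS,
	NGG_CNTL,
	NGG_PARAM,
	NGG_BUF_MAX,
};
```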
-
By Alex Deucher

We already have this info: max_gs_threads. Drop the duplicate.

Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-