- 15 July 2021, 11 commits
-
-
Submitted by Matt Roper

All of the Cannon Lake hardware that came out had graphics fused off, and our userspace drivers have already dropped their support for the platform; CNL-specific code in i915 that isn't inherited by subsequent platforms is effectively dead code. Let's remove all of the CNL-specific workarounds as a quick and easy first step.

References: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/6899
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210713193635.3390052-12-matthew.d.roper@intel.com
-
Submitted by Matt Roper

Switch DG1 to use a revid->stepping table, as we're trying to do on all platforms going forward. This removes the last use of IS_REVID() and REVID_FOREVER, so remove those now-unused macros as well to prevent their accidental use on future platforms.

v2:
 - Use COMMON_STEP() macro in table. (Anusha)

Bspec: 44463
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210713193635.3390052-11-matthew.d.roper@intel.com
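A minimal sketch of what such a revid->stepping lookup can look like, with struct, macro, and table names loosely modeled on i915's stepping code; the DG1 revision values shown are illustrative, not taken from the patch:

    /* Illustrative only: index by PCI revision ID and read back a
     * GT/display stepping pair; COMMON_STEP() sets both at once. */
    struct step_info {
            u8 gt_step;
            u8 display_step;
    };

    #define COMMON_STEP(x)  .gt_step = STEP_##x, .display_step = STEP_##x

    static const struct step_info dg1_revid_step_tbl[] = {
            [0] = { COMMON_STEP(A0) },      /* hypothetical revid 0 -> A0 */
            [1] = { COMMON_STEP(B0) },      /* hypothetical revid 1 -> B0 */
    };

A lookup helper can then map INTEL_REVID() through this table instead of relying on open-coded IS_REVID() range checks.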
-
Submitted by Matt Roper

Switch RKL to use a revid->stepping table, as we're trying to do on all platforms going forward.

Bspec: 44501
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210713193635.3390052-10-matthew.d.roper@intel.com
-
Submitted by Matt Roper

Switch JSL/EHL to use a revid->stepping table, as we're trying to do on all platforms going forward.

v2:
 - Use COMMON_STEP(). (Anusha)

Bspec: 29153
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210713193635.3390052-9-matthew.d.roper@intel.com
-
Submitted by Matt Roper

Switch ICL to use a revid->stepping table, as we're trying to do on all platforms going forward. While we're at it, let's include some additional steppings that have popped up, even if we don't yet have any workarounds tied to those steppings (we probably need to audit our workaround list soon to see if any of the bounds have moved or if new workarounds have appeared).

Note that the current bspec table is missing information about how to map PCI revision ID to GT/display steppings; it only provides an SoC stepping. The mapping to GT/display steppings (which aren't always the same as the SoC stepping) used to be in the bspec, but was apparently dropped during an update in Nov 2019; I've made my changes here based on an older bspec snapshot that still had the necessary information. We've requested that the missing information be restored.

I'm only including the production revids in the table here, since we're past the point at which we usually stop trying to support pre-production hardware. An appropriate check is added to intel_detect_preproduction_hw() to print an error and taint the kernel just in case someone still tries to load the driver on old pre-production hardware.

v2:
 - Drop pre-production steppings and add error/taint at startup when loading on pre-production hardware.

Bspec: 21141 # pre-Nov 2019 snapshot
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210713193635.3390052-8-matthew.d.roper@intel.com
-
Submitted by Matt Roper

Switch GLK to use a revid->stepping table, as we're trying to do on all platforms going forward. Pre-production and placeholder revisions are omitted.

Although nothing in the code is using the data from this table at the moment, we expect some upcoming DMC patches to start utilizing it.

Bspec: 19131
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210713193635.3390052-7-matthew.d.roper@intel.com
-
Submitted by Matt Roper

Switch BXT to use a revid->stepping table, as we're trying to do on all platforms going forward. Note that the REVID macros we had before weren't being used anywhere in the code and weren't even correct; the table values come from the bspec (and omit all the placeholder and pre-production revisions).

Although nothing in the code is using the data from this table at the moment, we expect some upcoming DMC patches to start utilizing it.

Bspec: 13620
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210713193635.3390052-6-matthew.d.roper@intel.com
-
Submitted by Matt Roper

We're long past the point where we need to care about pre-production hardware, and we already warn the user and taint the kernel if we detect the driver is being loaded on pre-production hardware.

Bspec: 18329
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210713193635.3390052-5-matthew.d.roper@intel.com
-
Submitted by Matt Roper

Switch SKL to use a revid->stepping table, as we're trying to do on all platforms going forward. Also drop the pre-production revisions and add the newer steppings we hadn't already handled.

Note that SKL has a case where a newer revision ID corresponds to an older GT/display stepping (0x9 -> STEP_J0, 0xA -> STEP_I1). Also, the lack of a revision ID 0x8 in the table is intentional and not an oversight. We'll re-write the KBL-specific comment to make it clear that these kinds of quirks are expected.

v2:
 - Since GT and display steppings are always identical on SKL, use a macro to set both values at once in a more readable manner. (Anusha)
 - Drop pre-production steppings.

Bspec: 13626
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210713193635.3390052-4-matthew.d.roper@intel.com
-
Submitted by Matt Roper

Although we're converting our workarounds to use a revid->stepping lookup table, the function that detects pre-production hardware should continue to compare against PCI revision ID values directly. These are listed in the bspec as integers, so it's easier to confirm their correctness if we just use an integer literal rather than a symbolic name anyway.

Bspec: 13620, 19131, 13626, 18329
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210713193635.3390052-3-matthew.d.roper@intel.com
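As a rough illustration of the style this patch keeps, the pre-production check compares the raw revision ID against integer bounds from the bspec. The platform checks and cut-off values below are placeholders, not the real limits:

    static void intel_detect_preproduction_hw(struct drm_i915_private *i915)
    {
            bool pre = false;

            /* Bounds are illustrative; the bspec lists them as plain
             * integers, so literals are easy to verify against it. */
            pre |= IS_SKYLAKE(i915) && INTEL_REVID(i915) < 0x6;
            pre |= IS_BROXTON(i915) && INTEL_REVID(i915) < 0xA;

            if (pre) {
                    drm_err(&i915->drm,
                            "This is a pre-production stepping. It may not be fully functional.\n");
                    add_taint(TAINT_MACHINE_CHECK, LOCKDEP_STILL_OK);
            }
    }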
-
Submitted by Anusha Srivatsa

Simplify the stepping info array name.

Cc: Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210713193635.3390052-2-matthew.d.roper@intel.com
-
- 07 July 2021, 4 commits
-
-
Submitted by Thomas Zimmermann

Remove all references to DRM's IRQ midlayer. i915 uses the Linux interrupt functions directly.

v2:
 * also remove an outdated comment
 * move IRQ fix into separate patch
 * update Fixes tag (Daniel)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Fixes: b318b824 ("drm/i915: Nuke drm_driver irq vfuncs")
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: intel-gfx@lists.freedesktop.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210701173618.10718-3-tzimmermann@suse.de
(cherry picked from commit 91b96f00)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Submitted by Thomas Zimmermann

The code in xcs_resume() probably didn't work as intended. It uses struct drm_device.irq, which is allocated to 0 but never initialized by i915 to the device's interrupt number. Change all calls to synchronize_hardirq() to intel_synchronize_irq(), which uses the correct interrupt number. The _hardirq() functions are not needed in this context.

v5:
 * go back to _hardirq() after PCI probe reported wrong context; add rsp comment
v4:
 * switch everything to intel_synchronize_irq() (Daniel)
v3:
 * also use intel_synchronize_hardirq() at another callsite
v2:
 * wrap irq code in intel_synchronize_hardirq() (Ville)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Fixes: 536f77b1 ("drm/i915/gt: Call stop_ring() from ring resume, again")
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210701173618.10718-2-tzimmermann@suse.de
(cherry picked from commit 27e4b467)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
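A sketch of the shape of the fix, assuming the helper simply wraps synchronize_irq() on the interrupt line the driver actually requested from its PCI device (names follow the commit message; the exact implementation in the tree may differ):

    static inline void intel_synchronize_irq(struct drm_i915_private *i915)
    {
            /* Use the real vector from the PCI device, not drm_device.irq,
             * which i915 never initializes. */
            synchronize_irq(to_pci_dev(i915->drm.dev)->irq);
    }

In xcs_resume() and the other call sites, the synchronize_hardirq() calls on drm_device.irq are then replaced with intel_synchronize_irq(engine->i915).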
-
Submitted by José Roberto de Souza

_DG1_DPCLKA0_CFGCR0 maps between DPLLs 0 and 1 with one bit for PHYs A and B, while _DG1_DPCLKA1_CFGCR0 maps between DPLLs 2 and 3 with one bit for PHYs C and D. Reusing _cnl_ddi_get_pll() doesn't take that into consideration, returning DPLL 0 and 1 for PHYs C and D. That is a regression introduced by the refactor done in commit 351221ff ("drm/i915: Move DDI clock readout to encoder->get_config()").

While at it, also drop the macros previously used instead of reusing them, to improve readability.

BSpec: 50286
Fixes: 351221ff ("drm/i915: Move DDI clock readout to encoder->get_config()")
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210630210522.162674-1-jose.souza@intel.com
(cherry picked from commit 3352d86d)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
-
Submitted by Kees Cook

intel_dp_vsc_sdp_unpack() was using a memset() size (36, struct dp_sdp) larger than the destination (24, struct drm_dp_vsc_sdp), clobbering fields in struct intel_crtc_state after infoframes.vsc. Use the actual target size for the memset().

Fixes: 1b404b7d ("drm/i915/dp: Read out DP SDPs")
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210617213301.1824728-1-keescook@chromium.org
(cherry picked from commit c88e2647)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
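A sketch of the overrun and its fix, with the function shape inferred from the description above; variable names are illustrative:

    static void example_vsc_sdp_unpack(struct drm_dp_vsc_sdp *vsc,
                                       const void *buffer, size_t size)
    {
            /* Before: memset(vsc, 0, size) cleared the 36-byte source SDP
             * size, overrunning the 24-byte *vsc and the intel_crtc_state
             * fields that follow infoframes.vsc.
             *
             * After: size the clear by the destination actually written. */
            memset(vsc, 0, sizeof(*vsc));

            /* ... unpack the SDP header/payload from 'buffer' into 'vsc' ... */
    }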
-
- 02 July 2021, 3 commits
-
-
Submitted by Alistair Popple

Some NVIDIA GPUs do not support direct atomic access to system memory via PCIe. Instead this must be emulated by granting the GPU exclusive access to the memory. This is achieved by replacing CPU page table entries with special swap entries that fault on userspace access.

The driver then grants the GPU permission to update the page undergoing atomic access via the GPU page tables. When CPU access to the page is required, a CPU fault is raised which calls into the device driver via MMU notifiers to revoke the atomic access. The original page table entries are then restored, allowing CPU access to proceed.

Link: https://lkml.kernel.org/r/20210616105937.23201-11-apopple@nvidia.com
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Reviewed-by: Ben Skeggs <bskeggs@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Alistair Popple

Call mmu_interval_notifier_insert() as part of nouveau_range_fault(). This doesn't introduce any functional change but makes it easier for a subsequent patch to alter the behaviour of nouveau_range_fault() to support GPU atomic operations.

Link: https://lkml.kernel.org/r/20210616105937.23201-10-apopple@nvidia.com
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Reviewed-by: Ben Skeggs <bskeggs@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
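Roughly, the change registers an interval notifier covering the faulting range before the hmm_range_fault() walk, so a later patch can react to invalidations of that range. Everything except the mmu_interval_notifier_* API below is an illustrative stand-in for the nouveau code:

    struct example_fault_ctx {
            struct mmu_interval_notifier notifier;
    };

    static int example_range_fault(struct mm_struct *mm, unsigned long start,
                                   unsigned long size,
                                   const struct mmu_interval_notifier_ops *ops)
    {
            struct example_fault_ctx ctx;
            int ret;

            ret = mmu_interval_notifier_insert(&ctx.notifier, mm, start, size, ops);
            if (ret)
                    return ret;

            /* ... fill a struct hmm_range with .notifier = &ctx.notifier and
             * call hmm_range_fault(), retrying if the sequence is stale ... */

            mmu_interval_notifier_remove(&ctx.notifier);
            return 0;
    }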
-
Submitted by Alistair Popple

MMU notifier ranges have a migrate_pgmap_owner field which is used by drivers to store a pointer. This is subsequently used by the driver callback to filter MMU_NOTIFY_MIGRATE events. Other notifier event types can also benefit from this filtering, so rename the 'migrate_pgmap_owner' field to 'owner' and create a new notifier initialisation function to initialise this field.

Link: https://lkml.kernel.org/r/20210616105937.23201-6-apopple@nvidia.com
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Suggested-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
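A sketch of the filtering this enables, assuming a driver-side interval-notifier callback; the surrounding types are illustrative, and only the mmu_notifier_range fields follow the rename described above:

    struct example_svmm {
            struct mmu_interval_notifier notifier;
    };

    static bool example_svm_invalidate(struct mmu_interval_notifier *mni,
                                       const struct mmu_notifier_range *range,
                                       unsigned long cur_seq)
    {
            struct example_svmm *svmm =
                    container_of(mni, struct example_svmm, notifier);

            /* Ignore events this driver raised itself: it stored its own
             * pointer in the (newly renamed) owner field. */
            if (range->event == MMU_NOTIFY_MIGRATE && range->owner == svmm)
                    return true;

            /* ... otherwise invalidate the GPU mappings for the range ... */
            return true;
    }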
-
- 01 July 2021, 22 commits
-
-
Submitted by Mukul Joshi

Reset SDMA RAS error counts during init only if persistent EDC harvesting is not supported.

Signed-off-by: Mukul Joshi <mukul.joshi@amd.com>
Reviewed-by: John Clements <john.clements@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Sierra

Each zone-device page holds a reference to the SVM BO that manages its backing storage. This is necessary to correctly hold on to the BO in case zone-device pages are shared with a child process.

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Sierra

This is for debug purposes only. It conditionally generates partial migrations to test mixed CPU/GPU memory domain pages in a prange easily.

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Sierra

Migration is skipped for pages that are already in the VRAM domain. These could be the result of previous partial migrations to system RAM followed by a prefetch back to VRAM, e.g. coherent pages in VRAM that were not written/invalidated after a copy-on-write.

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Sierra

Invalid pages can be the result of pages that have already been migrated due to a copy-on-write, or pages that were never migrated to VRAM in the first place. This is no longer an issue, as pranges now support mixed memory domains (CPU/GPU).

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Sierra

[Why]
svm ranges can have mixed pages from device or system memory. A good example is a prange that was allocated in VRAM and then hit by a copy-on-write triggered by a fork; this invalidates some pages inside the prange, ending up with mixed pages.

[How]
By classifying each page inside a prange based on its type (device or system memory) during the DMA mapping call. If a page belongs to the VRAM domain, a flag is set in its dma_addr entry for each GPU. Then, at GPU page table mapping time, each group of contiguous pages of the same type is mapped with the proper PTE flags.

v2: Instead of using ttm_res to calculate VRAM pfns in the svm_range, this is now done by setting the VRAM real physical address into the dma_addr array. This makes VRAM management more flexible and removes the need to have a BO reference in the svm_range.

v3: Remove mapping member from svm_range

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
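A sketch of the bookkeeping described above; the flag name, the example_vram_phys() helper, and the bit choice are hypothetical (page-aligned addresses leave the low bits free), not the actual amdkfd definitions:

    #define EXAMPLE_SVM_RANGE_VRAM_DOMAIN  (1UL << 0)   /* hypothetical flag */

    static void example_fill_dma_addr(struct device *dev, dma_addr_t *dma_addr,
                                      struct page **pages, unsigned long npages)
    {
            unsigned long i;

            for (i = 0; i < npages; i++) {
                    if (is_zone_device_page(pages[i])) {
                            /* VRAM page: record its device physical address,
                             * tagged so the PTE builder uses VRAM attributes.
                             * example_vram_phys() is a hypothetical helper. */
                            dma_addr[i] = example_vram_phys(pages[i]) |
                                          EXAMPLE_SVM_RANGE_VRAM_DOMAIN;
                    } else {
                            /* System page: normal streaming DMA mapping. */
                            dma_addr[i] = dma_map_page(dev, pages[i], 0,
                                                       PAGE_SIZE,
                                                       DMA_BIDIRECTIONAL);
                    }
            }
    }

At GPU page-table update time, runs of entries with the same flag state can then be mapped together with matching PTE flags.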
-
Submitted by Alex Sierra

Now that a prange can have mixed domains (VRAM or SYSRAM), neither actual_loc nor svm_bo can be used to check its current domain and eventually get the pfns to map in the GPU. Instead, pfns from both domains are now obtained from hmm_range_fault through an amdgpu_hmm_range_get_pages call. This is done every time a GPU mapping occurs.

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Sierra

Get the proper owner reference for the amdgpu_hmm_range_get_pages function. This is useful for partial migrations, to avoid migrating back to system memory VRAM pages that are accessible by all devices in the same memory domain, e.g. multiple devices in the same hive.

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Sierra

svm_range_prefault is called right before migrations to VRAM, to make sure pages are resident in system memory before the migration. With partial migrations, this reference is used by hmm_range_get_pages to avoid migrating pages that are already in the same VRAM domain.

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Sierra

The parameter is used as the dev_private_owner to decide if device pages in the range need to be migrated back to system memory, based on whether or not they are in the same memory domain. In this case, the reference could come from the same memory domain, with devices connected to the same hive.

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Sierra

GPUs in the same XGMI hive have direct access to all members' VRAM. When mapping memory to a GPU, we don't need hmm_range_fault to fault device-private pages in the same hive back to the host. Identifying the page owner as the hive, rather than the individual GPU, accomplishes this.

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
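A sketch of the owner selection with illustrative names; the point is simply that hmm_range_fault()'s dev_private_owner then matches any GPU in the hive, so device-private pages already resident there are returned rather than migrated back to the host:

    /* Hypothetical helper: use the XGMI hive as the owner when present,
     * otherwise the individual device. */
    static void *example_hmm_owner(struct example_device *dev)
    {
            return dev->hive ? (void *)dev->hive : (void *)dev;
    }

    /* ...
     *   range.dev_private_owner = example_hmm_owner(dev);
     *   ret = hmm_range_fault(&range);
     * ...
     */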
-
Submitted by Alex Sierra

During GPU page table invalidation with XNACK off, new range splits may occur concurrently in the same prange, creating a new child per split. Each child should also increment its invalid counter, to ensure GPU page table updates in these ranges.

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Nicholas Kazlauskas

[Why & How]
Extend existing support for DCN2.1 DMUB diagnostic logging to DCN3.1 so we can collect useful information if the DMUB hangs.

Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Reviewed-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Joseph Greathouse

Navi series GPUs have 2 SIMDs per CU (and 2 CUs per WGP). The NV enum headers incorrectly listed this as 4, which later meant we were incorrectly reporting the number of SIMDs in the HSA topology. This could cause problems down the line for user-space applications that want to launch a fixed amount of work to each SIMD.

Signed-off-by: Joseph Greathouse <Joseph.Greathouse@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
Submitted by Alex Deucher

Add new PCI device id.

Reviewed-by: Guchun Chen <guchun.chen@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
Submitted by Shyam Sundar S K

The documentation around the PrepareMp1ForUnload message says that anything sent to the SMU after this command would be stalled, as the PMFW would not be in a state to take further job requests. Technically this is right for the S3 scenario. But this might not be the case during S0ix, as the PMC driver would be the last to send the SMU the OS_HINT. If the SMU gets a PrepareMp1ForUnload message before the OS_HINT, this would stall the entire S0ix process.

Results show that this message to the SMU is not required during S0ix, so skip it.

Reviewed-by: Prike Liang <Prike.Liang@amd.com>
Signed-off-by: Shyam Sundar S K <Shyam-sundar.S-k@amd.com>
Acked-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
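A sketch of the intent, assuming the check lives in the SMU suspend path and uses the generic message helper; the function name and placement are illustrative:

    static int example_smu_notify_mp1_unload(struct smu_context *smu)
    {
            struct amdgpu_device *adev = smu->adev;

            /* Only tell the PMFW to prepare MP1 for unload on a real
             * suspend; on the S0ix path the PMC driver still needs to send
             * the OS_HINT afterwards, and this message would stall it. */
            if (adev->in_s0ix)
                    return 0;

            return smu_cmn_send_smc_msg(smu, SMU_MSG_PrepareMp1ForUnload, NULL);
    }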
-
Submitted by Huang Rui

In some ASICs, we need to adjust the behavior according to the APU flags at a very early stage.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Aaron Liu <aaron.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Reka Norman

Setting CONFIG_FRAME_WARN=0 should disable 'stack frame larger than' warnings. This is useful for example in KASAN builds. Make the dml Makefile respect this config.

Fixes the following build warnings with CONFIG_KASAN=y and CONFIG_FRAME_WARN=0:

    drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn30/display_mode_vba_30.c:3642:6: warning: stack frame size of 2216 bytes in function 'dml30_ModeSupportAndSystemConfigurationFull' [-Wframe-larger-than=]
    drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn31/display_mode_vba_31.c:3957:6: warning: stack frame size of 2568 bytes in function 'dml31_ModeSupportAndSystemConfigurationFull' [-Wframe-larger-than=]

Reviewed-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Reka Norman <rekanorman@google.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Jing Xiangfeng

radeon_user_framebuffer_create() misses a drm_gem_object_put() call in an error path. Add the missing call to fix it.

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
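The shape of the leak and the fix, sketched from the description above with the surrounding code simplified:

    obj = drm_gem_object_lookup(file_priv, mode_cmd->handles[0]);
    if (obj == NULL)
            return ERR_PTR(-ENOENT);

    radeon_fb = kzalloc(sizeof(*radeon_fb), GFP_KERNEL);
    if (radeon_fb == NULL) {
            drm_gem_object_put(obj);        /* previously missing */
            return ERR_PTR(-ENOMEM);
    }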
-
Submitted by Michal Suchanek

Also copy over the part that makes old gcc handling cross-platform.

Fixes: df7a1658 ("drm/amdgpu/dc: fix DCN3.1 Makefile for PPC64")
Fixes: 926d6972 ("drm/amd/display: Add DCN3.1 blocks to the DC Makefile")
Reviewed-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Tiezhu Yang

On the Loongson64 platform used with a Radeon GPU, shutdown or reboot fails when console=tty is in the boot cmdline. radeon_suspend_kms() puts the hw in the suspend state, and in particular sets the fb state to FBINFO_STATE_SUSPENDED:

    if (fbcon) {
            console_lock();
            radeon_fbdev_set_suspend(rdev, 1);
            console_unlock();
    }

Subsequent fb operations then bail out early in the related functions:

    if (p->state != FBINFO_STATE_RUNNING)
            return;

So call radeon_suspend_kms() in radeon_pci_shutdown() for Loongson64 to fix this issue; it is a workaround of the same kind as the existing one for powerpc.

Co-developed-by: Jianmin Lv <lvjianmin@loongson.cn>
Signed-off-by: Jianmin Lv <lvjianmin@loongson.cn>
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
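A sketch of the workaround, mirroring the existing PPC64 handling; the config guard follows the commit text, but the exact code in the tree may differ:

    static void radeon_pci_shutdown(struct pci_dev *pdev)
    {
    #if defined(CONFIG_PPC64) || defined(CONFIG_MACH_LOONGSON64)
            struct drm_device *ddev = pci_get_drvdata(pdev);

            /* Some adapters need to be suspended before a shutdown or
             * reboot; otherwise later fbcon accesses can hang the box. */
            radeon_suspend_kms(ddev, true, true, false);
    #endif
    }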
-
Submitted by Oak Zeng

The TTM caching flags (ttm_cached, ttm_write_combined, etc.) are used to determine a buffer object's mapping attributes in both the CPU page table and the GPU page table (when that buffer is also accessed by the GPU). Currently the TTM caching flags are set in amdgpu_ttm_io_mem_reserve, which is called during the DRM_AMDGPU_GEM_MMAP ioctl. This is a problem because the GPU mapping of the buffer object (ioctl DRM_AMDGPU_GEM_VA) can happen earlier than the mmap, so the GPU page table update code can't pick up the right TTM caching flags when deciding the GPU page table attributes.

This patch moves the TTM caching flag setup to amdgpu_vram_mgr_new. This function is called during the first step of buffer object creation (e.g. DRM_AMDGPU_GEM_CREATE), so both the later CPU and GPU mapping calls will pick up this flag for CPU/GPU page table setup.

v2: rebase (Alex)

Signed-off-by: Oak Zeng <Oak.Zeng@amd.com>
Suggested-by: Christian Koenig <Christian.Koenig@amd.com>
Reviewed-by: Christian Koenig <Christian.Koenig@amd.com>
Reviewed-by: Feifei Xu <Feifei.Xu@amd.com>
Tested-by: Po Huang <Po.Huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
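A sketch of the idea (not the actual amdgpu_vram_mgr_new() body): record the caching attribute on the TTM resource as soon as the VRAM placement is created, so whichever mapping happens first, GEM_VA or mmap, sees the same choice:

    /* Called from the VRAM manager right after the node backing 'res' is
     * allocated; the helper name is illustrative. */
    static void example_vram_set_caching(struct ttm_resource *res)
    {
            /* VRAM is accessed write-combined from the CPU; recording it
             * here lets the GPU page-table code pick matching attributes
             * at DRM_AMDGPU_GEM_VA time instead of waiting for mmap. */
            res->bus.caching = ttm_write_combined;
    }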
-