- 04 Jan 2013, 2 commits
-
-
Submitted by Jerome Glisse
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
The count must be a multiple of 2. Fixes crashes on R6xx chips reported by a number of people.
Cc: Borislav Petkov <bp@alien8.de>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Tested-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
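A trivial illustration of the constraint being enforced; the helper below is a hypothetical sketch, not the driver's actual code, which works on the packet fields directly.

```c
#include <stdint.h>

/*
 * Illustrative helper only (hypothetical, not the driver's code): round a
 * packet count up to the next multiple of 2, which is the constraint the
 * fix above enforces for R6xx.
 */
static inline uint32_t r6xx_even_count(uint32_t count)
{
	return (count + 1) & ~1u;	/* 5 -> 6, 4 stays 4 */
}
```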
-
- 03 Jan 2013, 1 commit
-
-
Submitted by Alex Deucher
Apple cards do not provide data tables in the vbios, so we have to hard-code the connector parameters in the driver.
Reported-by: Albrecht Dreß <albrecht.dress@arcor.de>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
- 22 Dec 2012, 1 commit
-
-
Submitted by Alex Deucher
It's used in a recent mesa commit: http://cgit.freedesktop.org/mesa/mesa/commit/?id=24b1206ab2dcd506aaac3ef656aebc8bc20cd27a and there may be other cases in the future where it's required.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Cc: stable@vger.kernel.org
-
- 20 Dec 2012, 4 commits
-
-
Submitted by Jerome Glisse
To make it easier to debug some lockups from userspace, add support for the MEM_WRITE packet.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Jerome Glisse
The modeset path sometimes seems to conflict with memory management, leading to a kernel deadlock. This moves the modesetting reset after the GPU acceleration reset.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
Submitted by Jerome Glisse
radeon_fence_wait_empty_locked should not trigger a GPU reset: no place it is called from would benefit from one, and it can actually lead to a kernel deadlock if the reset is triggered from the pm code path. Instead, force ring completion where that makes sense, or return early elsewhere.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
Submitted by Jerome Glisse
Force all fences to signal if the GPU reset failed, so no process gets stuck waiting on a fence.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
- 14 Dec 2012, 11 commits
-
-
Submitted by Dave Airlie
Since 0d0b3e74 ("drm/radeon: use cached memory when evicting for vram on non agp"), evicting from TTM would try to evict to TTM instead of system memory, which is not so good. This should fix: https://bugs.freedesktop.org/show_bug.cgi?id=58272
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
Non-mem-to-mem transfers require a dword-aligned byte count.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
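A hypothetical validation helper illustrating the rule above; the real check lives in the CS parser and operates on the packet fields, so treat this as a sketch only.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch: transfers that are not memory-to-memory must have a byte count
 * aligned to a dword (4 bytes).
 */
static bool dma_byte_count_ok(uint32_t byte_count, bool mem_to_mem)
{
	if (mem_to_mem)
		return true;			/* unrestricted here */
	return (byte_count & 0x3) == 0;		/* must be a multiple of 4 */
}
```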
-
Submitted by Alex Deucher
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
This enables the functionality added in the previous patches. Userspace acceleration drivers can use the CS ioctl to submit command buffers to the async DMA rings.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
Allows us to use async DMA from userspace.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
Allows us to use the DMA ring from userspace. DMA doesn't have a good NOP packet in which to embed the reloc index, so userspace has to add a reloc for each buffer used and order them to match the command stream.
v2: fix address bounds checking
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
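A rough userspace-side sketch of the reloc ordering rule this describes (it applies equally to the 6xx/7xx variant in the next entry); all types and helpers here are hypothetical stand-ins, not the libdrm or kernel CS API.

```c
#include <stdint.h>

enum reloc_domain { DOMAIN_VRAM, DOMAIN_GTT };

struct dma_reloc {
	uint32_t handle;		/* GEM handle of the buffer */
	enum reloc_domain domain;	/* requested placement */
};

struct dma_cs {
	uint32_t ib[256];		/* command dwords */
	unsigned int ib_dw;
	struct dma_reloc relocs[32];	/* one entry per referenced buffer */
	unsigned int nrelocs;
};

static void cs_add_reloc(struct dma_cs *cs, uint32_t handle, enum reloc_domain dom)
{
	cs->relocs[cs->nrelocs].handle = handle;
	cs->relocs[cs->nrelocs].domain = dom;
	cs->nrelocs++;
}

/*
 * Emit a stand-in copy packet, then append relocs in exactly the order the
 * kernel parser will encounter the buffers: destination first, then source,
 * matching the packet layout.
 */
static void emit_dma_copy(struct dma_cs *cs, uint32_t dst_handle, uint32_t src_handle)
{
	cs->ib[cs->ib_dw++] = 0;	/* placeholder packet header */
	cs_add_reloc(cs, dst_handle, DOMAIN_VRAM);
	cs_add_reloc(cs, src_handle, DOMAIN_GTT);
}
```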
-
Submitted by Alex Deucher
Allows us to use the DMA ring from userspace. DMA doesn't have a good NOP packet in which to embed the reloc index, so userspace has to add a reloc for each buffer used and order them to match the command stream.
v2: fix address bounds checking, reloc indexing
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Jerome Glisse
Fix the size computation of the htile buffer.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Daniel Vetter
We need to hold bdev->fence_lock while grabbing a reference to the fence, to prevent concurrent clearing/changing of the ttm_bo->sync_obj field.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
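A minimal sketch of the locking pattern this describes, using the field names quoted in the message (bdev->fence_lock, bo->sync_obj); treat it as an outline rather than the actual radeon/TTM code of that era.

```c
#include <drm/ttm/ttm_bo_driver.h>
#include "radeon.h"

/* Outline only: take the fence lock so another thread cannot clear or
 * replace bo->sync_obj between the NULL check and the reference grab. */
static struct radeon_fence *grab_bo_fence(struct ttm_buffer_object *bo)
{
	struct radeon_fence *fence = NULL;

	spin_lock(&bo->bdev->fence_lock);
	if (bo->sync_obj)
		fence = radeon_fence_ref(bo->sync_obj);
	spin_unlock(&bo->bdev->fence_lock);

	return fence;
}
```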
-
Submitted by Daniel Vetter
With the new per-crtc locking, multiple set-cursor calls could happen in parallel. Out of sheer paranoia I've opted for an irqsave spinlock. But if there is indeed an access to these registers from interrupt context, it was already broken with the old code, so this can likely be reduced to a normal spinlock. On the other hand, the pageflip completion happens from the vblank irq handler ...
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
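A sketch of the irqsave variant chosen here; the lock name and register offsets are illustrative, not the actual radeon cursor registers.

```c
#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/types.h>

static DEFINE_SPINLOCK(cursor_reg_lock);

/* Illustrative only: with per-crtc locking, two CRTCs may program their
 * cursors concurrently, so the MMIO accesses are serialized; irqsave keeps
 * this safe even if the registers were ever touched from irq context. */
static void set_cursor_position(void __iomem *mmio, u32 x, u32 y)
{
	unsigned long flags;

	spin_lock_irqsave(&cursor_reg_lock, flags);
	writel(x, mmio + 0x0000);	/* hypothetical CUR_POS_X offset */
	writel(y, mmio + 0x0004);	/* hypothetical CUR_POS_Y offset */
	spin_unlock_irqrestore(&cursor_reg_lock, flags);
}
```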
-
Submitted by Daniel Vetter
Just refactoring to make the next patch simpler. Now all indirect register access in the new modesetting driver should go through the r100_mm_(w|r)reg functions. RADEON_READ_MM from the old driver seems to be totally unused, so just kill it.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 13 Dec 2012, 9 commits
-
-
Submitted by Jerome Glisse
The DMA ring can't write to registers, so it has to write its fence value to memory. This ensures the fence driver doesn't try to use a scratch register for the DMA ring. Should fix: https://bugs.freedesktop.org/show_bug.cgi?id=58166
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
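Roughly, the read side of that arrangement looks like the sketch below; the field names approximate the driver's fence bookkeeping and should be treated as illustrative.

```c
#include "radeon.h"

/* Illustrative: a ring whose engine cannot write MMIO (the DMA ring) reads
 * its fence sequence from a memory word the engine can write, instead of
 * from a scratch register. */
static u32 fence_seq_read(struct radeon_device *rdev, int ring)
{
	struct radeon_fence_driver *drv = &rdev->fence_drv[ring];

	if (drv->cpu_addr)			/* memory-backed fence */
		return le32_to_cpu(*drv->cpu_addr);
	return RREG32(drv->scratch_reg);	/* register-backed fence */
}
```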
-
Submitted by Alex Deucher
Copies involving registers need to be verified.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
Copies involving registers need to be verified.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
Currently only memory and GDS transfers are allowed.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
Currently only memory-to-memory transfers are allowed.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
Along the same lines as what was done for evergreen+ in the last kernel.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Jerome Glisse
Set the proper number of tile pipes; it should be a multiple of the pipe count, depending on the number of SE engines.
Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=56405
https://bugs.freedesktop.org/show_bug.cgi?id=56720
v2: Don't change sumo2
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Cc: stable@vger.kernel.org
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
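The underlying arithmetic is just keeping the pipe count a whole multiple across shader engines; a standalone illustration of that idea (not the driver's setup code):

```c
#include <stdint.h>

/* Illustration only: trim a raw tile-pipe count down to the nearest
 * multiple of the shader-engine count, so each SE gets the same share. */
static uint32_t align_tile_pipes(uint32_t raw_pipes, uint32_t num_se)
{
	if (num_se == 0)
		return raw_pipes;
	return raw_pipes - (raw_pipes % num_se);	/* e.g. 7 pipes, 2 SEs -> 6 */
}
```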
-
Submitted by Jerome Glisse
The bo creation placement is where the bo will stay. Instead of trying to move the bo on each command stream, leave that work to another worker thread that will use a more advanced heuristic.
agd5f: remove leftover unused variable
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
-
- 11 Dec 2012, 10 commits
-
-
Submitted by Alex Deucher
The DMA engine has special packets to facilitate this, and it also keeps the 3D engine free for other things.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
Async DMA has a special packet for contiguous PT updates, which saves overhead.
v2: rebase
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
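The saving comes from batching runs of contiguous pages into one packet rather than issuing one write per entry; a hedged sketch of that batching loop, with a hypothetical per-run emitter callback:

```c
#include <stdint.h>

#define PAGE_SZ 4096u

/* Sketch only: walk the page addresses and emit one (hypothetical) DMA
 * packet per physically contiguous run instead of one write per entry. */
static void emit_pt_updates(const uint64_t *addrs, unsigned int count,
			    void (*emit_run)(unsigned int first, unsigned int len))
{
	unsigned int start = 0, i;

	for (i = 1; i <= count; i++) {
		/* a run ends at the last entry or when contiguity breaks */
		if (i == count || addrs[i] != addrs[i - 1] + PAGE_SZ) {
			emit_run(start, i - start);
			start = i;
		}
	}
}
```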
-
Submitted by Alex Deucher
The DMA engine has special packets to facilitate this, and it also keeps the 3D engine free for other things.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
Async DMA has a special packet for contiguous PT updates, which saves overhead.
v2: leave the CP method enabled for now, as doing the updates in the DMA rings is not working properly yet
v3: update for 2-level PTs
v4: rebase
v5: drop the pte/pde packet; it doesn't seem to work on NI
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
Pretty much the same as cayman. Some changes to the copy packets.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
There are two async DMA engines on cayman, one at 0xd000 and one at 0xd800. The programming interface is the same as evergreen; however, there are some changes to the commands for using vmids.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
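Since both engines share one register layout at different bases, per-instance register addressing reduces to adding the right base; a trivial sketch using the offsets quoted above (the helper name is made up):

```c
#include <stdint.h>

#define CAYMAN_DMA0_BASE 0xd000u	/* first async DMA engine */
#define CAYMAN_DMA1_BASE 0xd800u	/* second async DMA engine */

/* Hypothetical helper: same per-engine register layout, just rebased. */
static uint32_t dma_reg(unsigned int instance, uint32_t reg)
{
	return (instance == 0 ? CAYMAN_DMA0_BASE : CAYMAN_DMA1_BASE) + reg;
}
```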
-
Submitted by Alex Deucher
Pretty similar to 6xx/7xx, except the count field in the packet header increased and the max IB size increased.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
Uses the new multi-ring infrastructure. 6xx/7xx have a single async DMA ring.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
- 10 Dec 2012, 2 commits
-
-
Submitted by Maarten Lankhorst
All items on the lru list are always reservable, so this is a stupid thing to keep. Not only that, it is used in a way which would guarantee deadlocks if it were ever to be set to block on reserve. This is a lot of churn, but mostly because of the removal of the argument, which can be nested arbitrarily deeply in many places. No change of code in this patch except removal of the no_wait_reserve argument; the previous patch removed the use of no_wait_reserve.
v2:
- Warn if -EBUSY is returned on reservation; all objects on the list should be reservable. Adjusted patch slightly due to conflicts.
v3:
- Focus on no_wait_reserve removal only.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
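For orientation, the churn at a typical call site looks roughly like the before/after below; the argument lists are reconstructed from memory and abbreviated, so treat them as approximate rather than the exact TTM prototypes.

```c
/* before: every caller had to thread a no_wait_reserve flag through,
 * even though blocking on reserve from the LRU path could deadlock */
ret = ttm_bo_validate(bo, &placement, interruptible,
		      false /* no_wait_reserve */, no_wait_gpu);

/* after: the argument is gone; reservation blocking is no longer a
 * per-call decision */
ret = ttm_bo_validate(bo, &placement, interruptible, no_wait_gpu);
```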
-
Submitted by Maarten Lankhorst
The few places that care should have those checks instead. This allows destruction of bo-backed memory without a reservation. It's required for being able to rework the delayed destroy path, as it is no longer guaranteed to hold a reservation before unlocking. However, any previous wait is still guaranteed to complete, and it's one of the last things to be done before the buffer object is freed.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-