- 10 Sep 2014, 15 commits
-
-
Committed by David Herrmann
Same as the other legacy APIs, most of this is internal, so prefix it with drm_legacy_* and move it into drm_legacy.h. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by David Herrmann
This merges all that remains of drm_usb into its only user, udl. We can then drop all the drm_usb stuff, including dev->usbdev. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by David Herrmann
..we will not miss you.. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by David Herrmann
One step closer to dropping all the drm_bus_* code: add a driver->set_busid() callback and make all drivers use the generic helpers. Nouveau is the only driver that uses two different bus types with the same drm_driver. This is totally broken if both buses are available on the same machine (unlikely, but let's be safe). Therefore, we create two different drivers for each platform during module_init() and set the set_busid() callback respectively. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
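For illustration, a minimal sketch of what the new hookup might look like for a PCI-only driver, assuming the drm_pci_set_busid()/drm_platform_set_busid() helpers this series adds to the DRM core; the other field values here are placeholders, not taken from any real driver:

```c
#include <drm/drmP.h>

/* Sketch only: a PCI-bound driver now just points ->set_busid at the
 * generic PCI helper instead of relying on the drm_bus_* indirection. */
static struct drm_driver example_pci_driver = {
	.driver_features = DRIVER_GEM | DRIVER_MODESET,
	.set_busid       = drm_pci_set_busid,
	/* ... fops, ioctls, name/desc/date, etc. ... */
};

/* A driver that also supports a platform bus (as nouveau does) would clone
 * this struct at module_init() time and set .set_busid to
 * drm_platform_set_busid in the platform copy. */
```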
-
Committed by David Herrmann
This field is unused and there is really no reason to optimize unique allocations. Drop it. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by David Herrmann
Let's use kasprintf() to avoid pre-allocating the buffer. There is really nothing here to optimize for speed, and the input is trusted, so kasprintf() is just fine. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
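A short sketch of the pattern being described (the identifiers holding the bus numbers are placeholders; the format string mirrors the usual DRM PCI busid layout):

```c
/* kasprintf() computes the length, allocates, and formats in one call,
 * so no fixed-size buffer has to be guessed up front.  The caller owns
 * the returned string and must kfree() it. */
char *unique = kasprintf(GFP_KERNEL, "pci:%04x:%02x:%02x.%d",
			 domain, bus, slot, func);
if (!unique)
	return -ENOMEM;
```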
-
Committed by David Herrmann
The sigdata structure is only used to group two fields in drm_device. Inline it and make it an unnamed object. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
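Roughly what the change amounts to (the member names shown are illustrative, not necessarily the exact ones in drm_device):

```c
struct drm_device {
	/* ... */
	struct {			/* formerly a named struct drm_sigdata */
		int context;
		struct drm_hw_lock *lock;
	} sigdata;
	/* ... */
};
```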
-
Committed by David Herrmann
DRM_DEBUG_CODE is currently always set, so distributions enable it. The only reason to keep support in the code would be if developers wanted to disable debug support, which sounds unlikely. All the DRM_DEBUG() printks are still guarded by a drm_debug read, so once its cacheline has been read they are discarded pretty fast. There should hardly be any performance penalty; the check is even guarded by unlikely(). Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
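The guard being referred to looks roughly like this (paraphrased, not copied from drmP.h; the macro name here is deliberately not the real one):

```c
/* Each DRM_DEBUG() expansion only reaches the printk when the matching
 * bit is set in the drm_debug module parameter, and the test itself is
 * wrapped in unlikely(), so the disabled case stays cheap. */
#define EXAMPLE_DRM_DEBUG(fmt, ...)					\
	do {								\
		if (unlikely(drm_debug & DRM_UT_CORE))			\
			printk(KERN_DEBUG "[drm] " fmt, ##__VA_ARGS__);	\
	} while (0)
```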
-
Committed by David Herrmann
The drm_memory.h header is only used to define PAGE_AGP, which is only used in drm_memory.c. Fold the header into drm_memory.c and drop it. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by David Herrmann
pte_wrprotect() is only used by drm_vm.c, so move the include there. Also include it unconditionally; all architectures provide this header. Furthermore, replace asm/current.h with sched.h, which includes asm/current.h unconditionally. This way we get the same effect and avoid direct asm/ includes. Finally, drop the weird __alpha__ protection. It's safe to include sched.h everywhere (and the wait.h comment doesn't apply anyway). Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by David Herrmann
Move drm_agp_head to drm_agpsupport.h and drm_agp_mem into drm_legacy.h. Unfortunately, drivers still heavily access drm_agp_head, so we cannot move it to drm_legacy.h. However, at least it is no longer visible in drmP.h itself (its header is still included from there, though). Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by David Herrmann
In drm_release(), we currently call drm_remove_magic() if the drm_file has a drm-magic attached. Therefore, once drm_master_release() is called, the magic list _must_ be empty. By dropping the no-op cleanup, we can move "struct drm_magic_entry" to drm_auth.c and avoid exposing it to all of DRM. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by David Herrmann
Make all the drm_vma_entry handling local to drm_vm.c and hide it from global headers. This requires extracting the inlined legacy drm_vma_entry cleanup into a small helper, and also moving a weirdly placed drm_vma_info helper into drm_vm.c. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by David Herrmann
Move internal declarations to drm_legacy.h and add the drm_legacy_*() prefix to all legacy functions. [airlied: add a bit of an explanation to drm_legacy.h] Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by David Herrmann
Radeon UMS is the last user of drm_buffer. Move it out of sight so radeon can drop it together with UMS. Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Reviewed-by: Thierry Reding <treding@nvidia.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 03 Sep 2014, 5 commits
-
-
Committed by Maarten Lankhorst
This crash was already here before the conversion, but qxl never leaked hard enough to hit it. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
Committed by Maarten Lankhorst
This is how you implement a memory sieve in a driver. ;-) Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
Committed by Maarten Lankhorst
The locking of release_lock was stupid; it should have been taken with fence_lock_irq if it was being used legitimately. Unfortunately it never correctly protected anything except the fence implementation. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
Committed by Andreas Pokorny
As there should not be any other virtual device that might share buffers, the callbacks remain empty stubs. PRIME can still be used to transfer buffers between processes that use qxl. Signed-off-by: Andreas Pokorny <andreas.pokorny@canonical.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Andreas Pokorny
Signed-off-by: Andreas Pokorny <andreas.pokorny@canonical.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 02 Sep 2014, 11 commits
-
-
Committed by Maarten Lankhorst
nouveau keeps track in userspace of whether a buffer is being written to or being read, but it doesn't use that information. Change this to allow multiple readers on the same bo. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Acked-by: Ben Skeggs <bskeggs@redhat.com>
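A sketch of the read/write distinction this enables, using the reservation API of that era (assumed here; the actual nouveau code is more involved): a write attaches an exclusive fence, while a read only attaches a shared one, so several readers can be outstanding at once.

```c
int ret;

if (write) {
	/* Writers are serialized behind a single exclusive fence. */
	reservation_object_add_excl_fence(resv, fence);
} else {
	/* Readers each occupy a shared-fence slot and can overlap. */
	ret = reservation_object_reserve_shared(resv);
	if (ret)
		return ret;
	reservation_object_add_shared_fence(resv, fence);
}
```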
-
Committed by Maarten Lankhorst
Maintain the original order to handle VRAM/GART/mixed correctly for <nv50; it's likely not as important on newer cards. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Acked-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Maarten Lankhorst
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
Committed by Maarten Lankhorst
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
Committed by Maarten Lankhorst
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Christian König <christian.koenig@amd.com>
-
Committed by Maarten Lankhorst
With the conversion to the reservation API this should be safe. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Acked-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Maarten Lankhorst
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
Committed by Maarten Lankhorst
Final driver! \o/ This is not a proper dma_fence because the hardware may never signal anything, so don't ever use dma-buf with qxl. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
Committed by Maarten Lankhorst
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Acked-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Maarten Lankhorst
Use the new fence interface in vmwgfx too. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> --- Changes since v1: Fix a "sleeping function called from invalid context" warning in enable_signaling.
-
Committed by Maarten Lankhorst
Only one type was ever used. This is needed to simplify the fence support in the next commit. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
- 01 Sep 2014, 7 commits
-
-
Committed by Maarten Lankhorst
Changes since v1:
- Kill the SW interrupt dance; add and use radeon_irq_kms_sw_irq_get_delayed instead.
- Change the custom wait function; lockdep complained about it. Holding exclusive_lock in the wait function might cause deadlocks. Instead, do all the processing in .enable_signaling and wait on the global fence_queue to pick up GPU resets.
- Process all fences in radeon_gpu_reset after a reset, to close a race with the trylock in enable_signaling.
Changes since v2:
- Small changes to work with the rewritten lockup-recovery patches.
Changes since v3:
- Call radeon_fence_schedule_check when exclusive_lock cannot be acquired, to always cause a wake-up.
- Reset irqs from the hangup check.
- Drop reading the seqno in the callback; use the cached value.
- Fix indentation in radeon_fence_default_wait.
- Add a radeon_test_signaled() function (sketched below); drop a few test_bit() calls.
- Make to_radeon_fence global.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Reviewed-by: Christian König <christian.koenig@amd.com>
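The radeon_test_signaled() helper mentioned in the change list amounts to a one-line check of the common fence flag; a hedged reconstruction against the struct fence of that kernel generation (the actual radeon code may differ in details):

```c
static bool radeon_test_signaled(struct radeon_fence *fence)
{
	/* Relies on fence->base being the struct fence embedded by this
	 * conversion; reads the cached signaled bit instead of open-coding
	 * test_bit() at every call site. */
	return test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->base.flags);
}
```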
-
Committed by Maarten Lankhorst
This reorders the list to keep track of which buffers are reserved, so previous members are always unreserved. This gets rid of some bookkeeping that's no longer needed, while simplifying the code somewhat. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
Committed by Maarten Lankhorst
It seems some drivers, like vmwgfx, really want this as a parameter. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
Committed by Maarten Lankhorst
No users are left, kill it off! :D Conversion to the reservation API is next on the list; after that the functionality can be restored with RCU. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
Committed by Maarten Lankhorst
This is the last remaining function that doesn't rely entirely on the reservation lock to fence off access to a buffer. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
-
Committed by Maarten Lankhorst
This will ensure we always hold the required lock when calling those functions. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Acked-by: Ben Skeggs <bskeggs@redhat.com>
-
Committed by Maarten Lankhorst
Apart from some code inside ttm itself and nouveau_bo_vma_del, this is the only place where ttm_bo_wait is used without a reservation. Fix this so we can remove the fence_lock later on. After the switch to RCU the reservation lock will be removed again. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Acked-by: Ben Skeggs <bskeggs@redhat.com>
-
- 28 Aug 2014, 2 commits
-
-
Committed by Christian König
Allocating memory for UVD create and destroy messages can fail, which is rather annoying when it happens in the middle of a GPU reset. Try to avoid this condition by preallocating a page for those dummy messages. Signed-off-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
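Illustrative only (the function and variable names below are made up; the real radeon code keeps the messages elsewhere): the general shape of the fix is to grab the page once at init time so nothing has to be allocated while a reset is in progress.

```c
static void *example_uvd_msg_page;

static int example_uvd_init_msgs(void)
{
	/* One page is enough for the dummy create/destroy messages. */
	example_uvd_msg_page = (void *)__get_free_page(GFP_KERNEL);
	if (!example_uvd_msg_page)
		return -ENOMEM;
	return 0;
}

static void *example_uvd_get_msg_buffer(void)
{
	/* No allocation on this path, so it cannot fail mid GPU reset. */
	return example_uvd_msg_page;
}
```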
-
Committed by Christian König
This improves concurrent stream decoding. Signed-off-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-