- 29 October 2016, 29 commits
-
-
Committed by Chris Wilson
Though we will have multiple timelines, we still have a single timeline of execution. This we can use to provide an execution and retirement order of requests. This keeps tracking execution of requests simple, and is vital for preserving a single waiter (i.e. so that we can order the waiters so that only the earliest to wake up need be woken). To accomplish this we distinguish the seqno used to order requests per-context (external) from that used internally for execution.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-26-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
In future patches, we will no longer be able to wait on a static global seqno and instead have to break our wait up into phases. First we wait for the global seqno assignment (upon submission to hardware), and once submitted we wait for the hardware to complete.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-25-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Before suspend, we wait for the switch to the kernel context. In order for all the other context images to be complete upon suspend, that switch must be the last operation by the GPU (i.e. this idling request must not overtake any pending requests). To make this request execute last, we make it depend on every other inflight request.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-24-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Our timelines are more than just a seqno. They also provide an ordered list of requests to be executed. Due to the restriction of handling individual address spaces, we are limited to a timeline per address space, but we use a fence context per engine within. Our first step to introducing independent timelines per context (i.e. to allow each context to have a queue of requests to execute that have a defined set of dependencies on other requests) is to provide a timeline abstraction for the global execution queue.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-23-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
After combining the dma-buf reservation object and the GEM reservation object, we lost the ability to do a nonblocking wait on the i915 request (as we blocked upon the reservation object during prepare_fb). We can instead convert the reservation object into a fence upon which we can asynchronously wait (including a forced timeout in case the DMA fence is never signaled).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-22-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
In preparation to support many distinct timelines, we need to expand the activity tracking on the GEM object to handle more than just a request per engine. We already use the struct reservation_object on the dma-buf to handle many fence contexts, so integrating that into the GEM object itself is the preferred solution. (For example, we can now share the same reservation_object between every consumer/producer using this buffer and skip the manual import/export via dma-buf.)
v2: Reimplement busy-ioctl (by walking the reservation object), postpone the ABI change for another day. Similarly use the reservation object to find the last_write request (if active and from i915) for choosing display CS flips.
Caveats:
* busy-ioctl: busy-ioctl only reports on the native fences, it will not warn of stalls (in set-domain-ioctl, pread/pwrite etc) if the object is being rendered to by external fences. It also will not report the same busy state as wait-ioctl (or polling on the dma-buf) in the same circumstances. On the plus side, it does retain reporting of which *i915* engines are engaged with this object.
* non-blocking atomic modesets take a step backwards as the wait for render completion blocks the ioctl. This is fixed in a subsequent patch to use a fence instead for awaiting on the rendering, see "drm/i915: Restore nonblocking awaits for modesetting".
* dynamic array manipulation for shared-fences in reservation is slower than the previous lockless static assignment (e.g. gem_exec_lut_handle runtime on ivb goes from 42s to 66s), mainly due to atomic operations (maintaining the fence refcounts).
* loss of object-level retirement callbacks, emulated by VMA retirement tracking.
* minor loss of object-level last activity information from debugfs, could be replaced with per-vma information if desired.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-21-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Having moved the locked phase of freeing an object to a separate worker, we can now declare to the core that we only need the unlocked variant of driver->gem_free_object, and can use the simple unreference internally.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-20-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
We want to hide the latency of releasing objects and their backing storage from the submission, so we move the actual free to a worker. This allows us to switch to struct_mutex freeing of the object in the next patch. Furthermore, if we know that the object we are dereferencing remains valid for the duration of our access, we can forgo the usual synchronisation barriers and atomic reference counting. To ensure this we defer freeing an object until after an RCU grace period, such that any lookup of the object within an RCU read critical section will remain valid until after we exit that critical section. We also employ this delay for rate-limiting the serialisation on reallocation - we have to slow down object creation in order to prevent resource starvation (in particular, files).
v2: Return early in i915_gem_tiling() ioctl to skip over superfluous work on error.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-19-chris@chris-wilson.co.uk
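The free-via-worker plus RCU-deferred-free scheme described above can be sketched with generic kernel primitives. This is only an illustrative sketch under assumed names (my_obj and friends), not the actual i915 code: the last reference queues an RCU callback, and only after the grace period does the heavyweight, possibly lock-taking teardown run from a workqueue.

```c
#include <linux/kref.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct my_obj {
	struct kref ref;
	struct rcu_head rcu;
	struct work_struct free_work;
	/* backing storage, vma lists, etc. */
};

/* Runs from a workqueue: may take heavy locks (e.g. struct_mutex)
 * without stalling the submission path that dropped the last ref. */
static void my_obj_free_work(struct work_struct *work)
{
	struct my_obj *obj = container_of(work, struct my_obj, free_work);

	/* release backing storage, unbind vmas, ... */
	kfree(obj);
}

/* Called once an RCU grace period has elapsed since the last kref_put(),
 * so any reader that found the object under rcu_read_lock() has left its
 * critical section by now. */
static void my_obj_free_rcu(struct rcu_head *rcu)
{
	struct my_obj *obj = container_of(rcu, struct my_obj, rcu);

	INIT_WORK(&obj->free_work, my_obj_free_work);
	schedule_work(&obj->free_work);
}

static void my_obj_release(struct kref *ref)
{
	struct my_obj *obj = container_of(ref, struct my_obj, ref);

	call_rcu(&obj->rcu, my_obj_free_rcu);
}

static inline void my_obj_put(struct my_obj *obj)
{
	kref_put(&obj->ref, my_obj_release);
}
```

With this shape, a lookup path can find the object under rcu_read_lock() and take a reference with kref_get_unless_zero(), without any global lock.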
-
Committed by Chris Wilson
As we can locklessly (well, struct_mutex-lessly) acquire the backing storage, do so in set-domain-ioctl to reduce the contention on the struct_mutex.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-18-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
We only need struct_mutex within pwrite for a brief window where we need to serialise with rendering and control our cache domains. Elsewhere we can rely on the backing storage being pinned, and forgive userspace any races against us.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-17-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
We only need struct_mutex within pread for a brief window where we need to serialise with rendering and control our cache domains. Elsewhere we can rely on the backing storage being pinned, and forgive userspace any races against us.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-16-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Use the per-object mm.lock to allocate the backing storage (and hold a reference to it across the dmabuf access) without resorting to struct_mutex.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-15-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Break the allocation of the backing storage away from struct_mutex into a per-object lock. This allows parallel page allocation, provided we can do so outside of struct_mutex (i.e. set-domain-ioctl, pwrite, GTT fault), i.e. before execbuf! The increased cost of the atomic counters is hidden behind i915_vma_pin() for the typical case of execbuf, i.e. as the object is typically bound between execbufs, the page_pin_count is static. The cost will be felt around set-domain and pwrite, but offset by the improvement from reduced struct_mutex contention.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-14-chris@chris-wilson.co.uk
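A rough sketch of the per-object pinning scheme described above (hypothetical names, not the i915 structures): the uncontended pin is a single atomic increment, and only the 0 -> 1 transition takes the object's own mutex to populate the backing store.

```c
#include <linux/atomic.h>
#include <linux/mutex.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

struct my_obj_mm {
	struct mutex lock;		/* guards @pages and the 0 <-> 1 pin transition */
	struct sg_table *pages;
	atomic_t pages_pin_count;
};

/* Hypothetical backend: allocate and fill mm->pages (details elided). */
static int my_obj_get_pages(struct my_obj_mm *mm)
{
	mm->pages = kzalloc(sizeof(*mm->pages), GFP_KERNEL);
	return mm->pages ? 0 : -ENOMEM;
}

static int my_obj_pin_pages(struct my_obj_mm *mm)
{
	int err = 0;

	/* Fast path: pages already allocated and pinned by someone else. */
	if (atomic_inc_not_zero(&mm->pages_pin_count))
		return 0;

	mutex_lock(&mm->lock);
	if (!mm->pages)
		err = my_obj_get_pages(mm);
	if (!err)
		atomic_inc(&mm->pages_pin_count);
	mutex_unlock(&mm->lock);

	return err;
}

static void my_obj_unpin_pages(struct my_obj_mm *mm)
{
	/* Dropping to zero does not free the pages; a shrinker may later
	 * reap them under mm->lock while they remain unpinned. */
	atomic_dec(&mm->pages_pin_count);
}
```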
-
Committed by Chris Wilson
The plan is to move obj->pages out from under the struct_mutex into its own per-object lock. We need to prune any assumption of the struct_mutex from the get_pages/put_pages backends, and to make it easier we pass around the sg_table to operate on rather than indirectly via the obj.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-13-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
The plan is to make obtaining the backing storage for the object avoid struct_mutex (i.e. use its own locking). The first step is to update the API so that normal users only call pin/unpin whilst working on the backing storage.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-12-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
We can use the radixtree index of the obj->pages to find the start position of the desired partial range.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-11-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
A while ago we switched from a contiguous array of pages into an sglist, for that was both more convenient for mapping to hardware and avoided the requirement for a vmalloc array of pages on every object. However, certain GEM API calls (like pwrite, pread as well as performing relocations) do desire access to individual struct pages. A quick hack was to introduce a cache of the last access such that finding the following page was quick - this works so long as the caller desired sequential access. Walking backwards, or multiple callers, still hits a slow linear search for each page. One solution is to store each successful lookup in a radix tree.
v2: Rewrite building the radixtree for clarity, hopefully.
v3: Rearrange execbuf to avoid calling i915_gem_object_get_sg() from within an atomic section and so relax the allocation context to a simple GFP_KERNEL and mutex.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-10-chris@chris-wilson.co.uk
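A simplified sketch of the radix-tree lookup cache (hypothetical names; the real code also records the offset into the chunk and handles allocation failure and locking more carefully). The caller is assumed to serialise the miss path, e.g. with the object's own lock, and the sg chunks are assumed to be page aligned.

```c
#include <linux/mm.h>
#include <linux/radix-tree.h>
#include <linux/scatterlist.h>

struct sg_page_cache {
	struct radix_tree_root radix;	/* page index -> struct scatterlist * */
};

static void sg_page_cache_init(struct sg_page_cache *cache)
{
	INIT_RADIX_TREE(&cache->radix, GFP_KERNEL);
}

/* Find the sg chunk containing page @n of the object. */
static struct scatterlist *
sg_page_cache_lookup(struct sg_page_cache *cache,
		     struct sg_table *st, unsigned long n)
{
	struct scatterlist *sg;
	unsigned long idx = 0;

	sg = radix_tree_lookup(&cache->radix, n);
	if (sg)
		return sg;

	/* Miss: linear walk from the start, then remember the answer so
	 * repeated or backwards lookups become radix-tree hits. */
	for (sg = st->sgl; sg; sg = sg_next(sg)) {
		unsigned long npages = sg->length >> PAGE_SHIFT;

		if (n < idx + npages) {
			/* Caching is best effort; ignore insert failure. */
			radix_tree_insert(&cache->radix, n, sg);
			return sg;
		}
		idx += npages;
	}

	return NULL;
}
```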
-
Committed by Chris Wilson
Add lockdep_assert_held(struct_mutex) to the API preamble of the internal GEM interfaces.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-9-chris@chris-wilson.co.uk
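For illustration, the kind of annotation being added looks like the following generic example (not a specific i915 function): lockdep_assert_held() documents the locking contract in the function itself, warns on lockdep-enabled builds if a caller forgot the mutex, and compiles away otherwise.

```c
#include <linux/lockdep.h>
#include <linux/mutex.h>

struct my_dev {
	struct mutex struct_mutex;
	/* ... */
};

static void my_gem_internal_op(struct my_dev *dev)
{
	/* Callers must hold struct_mutex; checked only on lockdep builds. */
	lockdep_assert_held(&dev->struct_mutex);

	/* ... body that relies on struct_mutex serialisation ... */
}
```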
-
Committed by Chris Wilson
The golden render state is constant, but we recreate the batch setting it up for every new context. If we keep that batch in a volatile cache we can safely reuse it whenever we need to initialise a new context. We mark the pages as purgeable and use the shrinker to recover pages from the batch whenever we face memory pressure, recreating that batch afresh on the next new context.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtien@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-8-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Quite a few of our objects used for internal hardware programming do not benefit from being swappable or from being zero initialised. As such they do not benefit from using a shmemfs backing storage and, since they are internal and never directly exposed to the user, we do not need to worry about providing a filp. For these we can use a drm_i915_gem_object wrapper around an sg_table of plain struct page. They are not swap backed and not automatically pinned. If they are reaped by the shrinker, the pages are released and the contents discarded. For the internal use case, this is fine as, for example, ringbuffers are pinned from being written by a request to being read by the hardware. Once they are idle, they can be discarded entirely. As such they are a good match for execlist ringbuffers and a small variety of other internal objects. In the first iteration, this is limited to the scratch batch buffers we use (for command parsing and state initialisation).
v2: Allocate physically contiguous pages, where possible.
v3: Reduce maximum order on subsequent requests following an allocation failure.
v4: Fix up mismatch between swiotlb segment size and page count (it counts in 2k units, not 4k pages)
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-7-chris@chris-wilson.co.uk
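The v3 behaviour (reduce the maximum order after an allocation failure) follows a common pattern, sketched below with a hypothetical helper rather than the i915 function: try the largest physically contiguous chunk first and step down towards order 0.

```c
#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Try to allocate a physically contiguous chunk of 2^(*order) pages,
 * stepping the order down on failure. On success *order holds the order
 * actually obtained, so the caller can start its next chunk from there.
 */
static struct page *alloc_backing_chunk(unsigned int *order)
{
	unsigned int n = *order;

	for (;;) {
		struct page *page;

		page = alloc_pages(GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY, n);
		if (page) {
			*order = n;
			return page;
		}
		if (n-- == 0)	/* even order 0 failed: give up */
			return NULL;
	}
}
```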
-
Committed by Chris Wilson
We only need the active reference to keep the object alive after the handle has been deleted (so as to prevent a synchronous gem_close). Why then pay the price of a kref on every execbuf when we can insert that final active ref just in time for the handle deletion?
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-6-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Since we only use the more generic unlocked variant, just rename it as the normal i915_gem_active_wait(). The temporary cost is that we need to always acquire the reference in an RCU-safe manner, but the benefit is that we will combine the common paths.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-5-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Our low-level wait routine has evolved from our generic wait interface that handled unlocked, RPS boosting, waits with time tracking. If we push our GEM fence tracking to use reservation_objects (required for handling multiple timelines), we lose the ability to pass the required information down to i915_wait_request(). However, if we push the extra functionality from i915_wait_request() to the individual callsites (i915_gem_object_wait_rendering and i915_gem_wait_ioctl) that make use of those extras, we can both simplify our low level wait and prepare for extending the GEM interface for use of reservation_objects.
v2: Rewrite i915_wait_request() kerneldocs
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-4-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
The throttle-ioctl never touches the struct_mutex. It does, however, as part of its ABI, report whether the hardware is terminally wedged. For that purpose, it only has to report the current state and not incur the cost of checking/waiting every invocation, as we do not have to wait for a reset before waiting on a request to ensure completion (that is baked into the wait request implementation).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-3-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
In forthcoming patches, we want to be able to dynamically allocate the wait_queue_t used whilst awaiting. This is more convenient if we extend i915_sw_fence_await_sw_fence() to perform the allocation for us if we pass in a gfp mask as an alternative to a preallocated struct.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-2-chris@chris-wilson.co.uk
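The shape of that change, heavily simplified and with hypothetical types (this is not the i915_sw_fence API itself): if the caller passes no preallocated wait entry, allocate one with the supplied gfp mask and flag it so it can be freed once the wait completes.

```c
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/slab.h>

struct my_fence;			/* opaque for this sketch */

struct my_wait {
	struct list_head link;
	unsigned int flags;
};
#define MY_WAIT_ALLOCATED 0x1		/* free the entry on completion */

static int my_fence_await(struct my_fence *fence, struct my_wait *wq, gfp_t gfp)
{
	if (!wq) {
		wq = kmalloc(sizeof(*wq), gfp);
		if (!wq)
			return -ENOMEM;
		wq->flags = MY_WAIT_ALLOCATED;
	} else {
		wq->flags = 0;
	}

	/* ... hook wq->link onto the fence's waiter list as before ... */
	return 0;
}
```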
-
Committed by Chris Wilson
We will need to wait on DMA completion (as signaled via struct fence) before executing our i915_gem_request. Therefore we want to expose a method for adding the await on the fence itself to the request.
v2: Add a comment detailing a failure to handle a signal-on-any fence-array.
v3: Pretend that magic numbers don't exist.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028125858.23563-1-chris@chris-wilson.co.uk
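As a generic sketch of the mechanism, using today's dma_fence naming (the 2016 code predates the fence -> dma_fence rename) and a placeholder request type: the request registers a callback on the foreign fence and only proceeds to submission once it fires, treating an already-signalled fence as no wait at all.

```c
#include <linux/dma-fence.h>

struct my_request {
	struct dma_fence_cb await_cb;
	/* ... */
};

static void my_request_dma_done(struct dma_fence *fence,
				struct dma_fence_cb *cb)
{
	/* The external DMA work is complete; the request may now be
	 * queued to the hardware (submission path elided here). */
}

static int my_request_await_fence(struct my_request *rq,
				  struct dma_fence *fence)
{
	int err;

	err = dma_fence_add_callback(fence, &rq->await_cb, my_request_dma_done);
	if (err == -ENOENT)	/* fence already signalled: nothing to await */
		err = 0;
	return err;
}
```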
-
Committed by Chris Wilson
We are not allowed to touch the GTT entries underneath an atomic section, as they take an rpm wakelock (which is illegal from atomic context) and in the near future acquiring the DMA address for a page within an object may sleep for an allocation. This makes the current shortcircuit in relocation_iomap() for performing a second relocation on an adjacent page illegal, and we need to release the atomic iomapping, look up the DMA, and insert it into the GTT before reentering the atomic iomap section. As it happens, this is precisely what we do if we are using an iomapping over the full object and not just a single page, and by removing the shortcut, we do the right thing.
Fixes: 9c870d03 ("drm/i915: Use RPM as the barrier for controlling...")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20161028142756.3850-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
Committed by Matt Roper
This was the only use of the (misleadingly-named) intel_num_planes() function, so we can remove it as well.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1477522291-10874-3-git-send-email-matthew.d.roper@intel.com
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
-
Committed by Matt Roper
This macro's name is a bit misleading; it doesn't actually iterate over all planes since it omits the cursor plane. Its only uses are in gen9 code which is using it to iterate over the universal planes (which we treat as primary+sprites); in these cases the legacy cursor registers are programmed independently if necessary. The macro's iterator value (0 for primary plane, spritenum+1 for each secondary plane) also isn't meaningful outside the gen9 context where the hardware considers them to all be "universal" planes that follow this numbering. This is just a renaming/clarification patch with no functional change; however, it will make the subsequent patches clearer.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1477522291-10874-2-git-send-email-matthew.d.roper@intel.com
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
-
- 28 October 2016, 10 commits
-
-
Committed by Navare, Manasi D
These static helper functions are required during the fallback link rate implementation, so they need to be placed at the top of the file.
v3: * Add cleanup to other patch (Mika Kahola)
v2: * Don't move around functions declared in intel_drv.h (Rodrigo Vivi)
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
Reviewed-by: Mika Kahola <mika.kahola@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1477524358-16563-4-git-send-email-manasi.d.navare@intel.com
-
The port registers related to the phys in broxton map to different channels and specific phys. Make that mapping explicit.
v2: Pass enum dpio_phy to macros instead of mmio base. (Imre)
v3: Fix typo in macros. (Imre)
v4: Also change variables from u32 to enum dpio_phy. (Imre) Remove leftovers from previous version. (Imre)
v5: Actually git add the changes.
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1476863940-6019-1-git-send-email-ander.conselvan.de.oliveira@intel.com
-
Use struct bxt_ddi_phy_info to hold information of where the Rcomp resistor is located, instead of hard coding it in the init sequence. Note that this moves the enabling of the phy with the Rcomp resistor out of the power well enable code. That should be safe since bxt_ddi_phy_init() is called while the power domains lock is held, and that is the only way that function gets called, so there is no possibility of a concurrent phy enable caused by a power domain get call.
v2: Replace comment about lock with lockdep_assert_held() (Imre)
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/62d209950ad48484564f3e793cf247cf62572a39.1475770848.git-series.ander.conselvan.de.oliveira@intel.com
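A hedged sketch of the kind of per-phy descriptor table this series builds up (field names and values are illustrative of the idea, not copied from the driver): facts such as dual-channel capability and which phy houses the Rcomp resistor become static data that the init sequence consults, instead of being hard coded in the sequence itself.

```c
#include <linux/types.h>

enum dpio_phy { DPIO_PHY0, DPIO_PHY1 };

struct bxt_ddi_phy_info {
	bool dual_channel;	/* does this phy drive two channels? */
	int rcomp_phy;		/* phy housing the Rcomp resistor, or -1 */
};

static const struct bxt_ddi_phy_info bxt_ddi_phy_info[] = {
	[DPIO_PHY0] = { .dual_channel = true,  .rcomp_phy = DPIO_PHY1 },
	[DPIO_PHY1] = { .dual_channel = false, .rcomp_phy = -1 },
};
```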
-
Information about which phy is dual channel is hardcoded in the phy init sequence. Split that to a separate struct so the init sequence is more generic.
v2: Restore mangled part that ended up in following patch. (Imre)
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/9102f4c984044126057e4fdd1b91a615ff25fae6.1475770848.git-series.ander.conselvan.de.oliveira@intel.com
-
The vswing sequence is related to the DPIO phy, so move it closer to the rest of DPIO phy related code.
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/59aa5c85a115c5cbed81e793f20cd7b9f8de694b.1475770848.git-series.ander.conselvan.de.oliveira@intel.com
-
Move the DPIO phy documentation section to intel_dpio_phy.c, since that is a more suitable place now that there is a source file dedicated for those phys.
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/55a2d38c15c06a8c5bce498b28decc03948f0224.1475770848.git-series.ander.conselvan.de.oliveira@intel.com
-
The phy in broxton is also a dpio phy, similar to cherryview but with programming through MMIO. So move the code together with the other similar phys.
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/d611de6d256593cf904172db7ff27f164480c228.1475770848.git-series.ander.conselvan.de.oliveira@intel.com
-
Pass lane count to bxt_ddi_phy_calc_lane_optmin_mask() instead of having it extract that number from a pipe_config, to decouple the phy code from intel_crtc_state.
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/a4977e0207e594953c4f9d1b5f2ef972a8679e74.1475770848.git-series.ander.conselvan.de.oliveira@intel.com
-
The mapping from the BXT_DPIO_CMN_* power wells to their respective phys required a detour implemented in the bxt_power_well_to_phy() function. Instead, embed that information directly into the power_well struct, by resurrecting the data field.
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/7fe97582fa08c7340ce6a3b6b0ea3e72a73182d7.1475770848.git-series.ander.conselvan.de.oliveira@intel.com
-
Calling it data seems to imply arbitrary data can be associated with the power well. However, that field is used for lookups and expected to be unique, so rename it.
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/f3916c3c5bfa793b0fc870fd44007a3ff425194d.1475770848.git-series.ander.conselvan.de.oliveira@intel.com
-
- 27 October 2016, 1 commit
-
-
Committed by Tvrtko Ursulin
A newline somehow ended up in the middle of the line.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1477572512-4030-1-git-send-email-tvrtko.ursulin@linux.intel.com
-