- 08 August 2013, 5 commits
-
-
Committed by Damien Lespiau

Cc: Thierry Reding <thierry.reding@avionic-design.de>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Acked-by: Dave Airlie <airlied@gmail.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

In some places, we want to know if an object is bound in any address space, and not just the global GTT. This often applies when there is a single global resource (object, pages, etc.)

function | reason
--------------------------------------------------
i915_gem_object_is_inactive | global object
i915_gem_object_put_pages | object's pages
i915_gem_object_unpin | global object
i915_gem_execbuffer_unreserve_object | temporary until we plumb vma
pread/pwrite | see the note below

Note: set_to_gtt_domain in pwrite/pread is abused as a wait_rendering call - but that only works if the object is bound. We really should replace this with a plain wait_rendering call, which would have the upside that in pread it would be clearer that we actually only wait for outstanding gpu writes.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Explain the set_to_gtt_domain in pwrite/pread and volunteer Ben to replace those with wait_rendering calls.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
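As an illustration of the kind of "bound anywhere" helper this describes, a minimal sketch could walk the object's per-VM VMA list; the function name and fields below are assumptions modelled on the i915 code of that era, not a quote of the patch:

    /* Hedged sketch: report whether a GEM object is bound in any address
     * space by checking every VMA hanging off the object. */
    bool i915_gem_obj_bound_any(struct drm_i915_gem_object *o)
    {
        struct i915_vma *vma;

        list_for_each_entry(vma, &o->vma_list, vma_link)
            if (drm_mm_node_allocated(&vma->node))
                return true;

        return false;
    }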
-
Committed by Ben Widawsky

Eviction code, like the rest of the converted code, needs to be aware of the address space for which it is evicting (or the everything case, all address spaces). With the updated bind/unbind interfaces of the last patch, we can now safely move the eviction code over.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

As alluded to in several patches, and as will be reiterated later... A VMA is an abstraction for a GEM BO bound into an address space. Therefore it stands to reason that the existing bind and unbind are the ones which will be the most impacted. This patch implements this, and updates all callers which weren't already updated in the series (because it was too messy).

This patch represents the bulk of an earlier, larger patch. I've pulled out a bunch of things at the request of Daniel. The history is preserved for posterity with the email convention of ">".

One big change from the original patch, aside from a bunch of cropping, is that I've created an i915_vma_unbind() function. That is because we always have the VMA anyway, and doing an extra lookup is unnecessary. There is a caveat: we retain an i915_gem_object_ggtt_unbind for the global cases which might not talk in VMAs.

> drm/i915: plumb VM into object operations
>
> This patch was formerly known as:
> "drm/i915: Create VMAs (part 3) - plumbing"
>
> This patch adds a VM argument, bind/unbind, and the object
> offset/size/color getters/setters. It preserves the old ggtt helper
> functions because things still need, and will continue to need, them.
>
> Some code will still need to be ported over after this.
>
> v2: Fix purge to pick an object and unbind all vmas
> This was doable because of the global bound list change.
>
> v3: With the commit to actually pin/unpin pages in place, there is no
> longer a need to check if unbind succeeded before calling put_pages().
> Make put_pages only BUG() after checking pin count.
>
> v4: Rebased on top of the new hangcheck work by Mika
> plumbed eb_destroy also
> Many checkpatch related fixes
>
> v5: Very large rebase
>
> v6:
> Change BUG_ON to WARN_ON (Daniel)
> Rename vm to ggtt in preallocate stolen, since it is always ggtt when
> dealing with stolen memory. (Daniel)
> list_for_each will short-circuit already (Daniel)
> remove superfluous space (Daniel)
> Use per object list of vmas (Daniel)
> Make obj_bound_any() use obj_bound for each vm (Ben)
> s/bind_to_gtt/bind_to_vm/ (Ben)
>
> Fixed up the inactive shrinker. As Daniel noticed, the code could
> potentially count the same object multiple times. While it's not
> possible in the current case, since 1 object can only ever be bound into
> 1 address space thus far - we may as well try to get something more
> future proof in place now. With a prep patch before this to switch over
> to using the bound list + inactive check, we're now able to carry that
> forward for every address space an object is bound into.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Rebase on top of the loss of "drm/i915: Cleanup more of VMA in destroy".]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
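To make the interface split described above concrete, here is a hedged sketch of the two unbind entry points; the signatures and the i915_gem_obj_to_ggtt() lookup are assumptions based on the description, not copied from the patch:

    /* Sketch only: the VMA-based unbind, plus a GGTT convenience wrapper
     * for callers that do not deal in VMAs. */
    int i915_vma_unbind(struct i915_vma *vma);

    static inline int
    i915_gem_object_ggtt_unbind(struct drm_i915_gem_object *obj)
    {
        struct i915_vma *vma = i915_gem_obj_to_ggtt(obj);

        return vma ? i915_vma_unbind(vma) : 0;
    }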
-
Committed by Ben Widawsky

In order to do this for all VMs, it's convenient to rework the logic a bit. This should have no functional impact.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 07 August 2013, 2 commits
-
-
Committed by Ben Widawsky

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Chris Wilson

Rather than open-code the teardown of a framebuffer, export the routine from intel_display.c. This then makes intel_fbdev symmetric in its use of the common intel_framebuffer routines to initialise and clean up the struct intel_framebuffer. (And new features need only be added in one location!)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 06 August 2013, 33 commits
-
-
Committed by Chris Wilson

MLC_LLC was never validated for Sandybridge and was superseded by a new level of caching for the GPU in Ivybridge. Update our names to be consistent with usage, and in the process stop setting the unwanted bit on Sandybridge.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: s/BUG/WARN_ON(1)/ bikeshed.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Paulo Zanoni

We set the mode based on the port, and we already pass the port as an argument.

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Paulo Zanoni

These messages are not really useful since it's very easy to check which mode is used for each port: the values programmed are based on the port type, then assigned to the ddi_translations variable. Currently we use DP mode for ports A-D and FDI mode for port E. Also, when we add the code to enable/disable PC8+, intel_prepare_ddi_buffers will be called more often and will eat your dmesg buffers.

While at it, fix the coding style of the "for" statement above.

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
[danvet: Pimp commit message with Paulo's more detailed explanation of how the ddi translation buffer settings are computed, to answer a question from Chris.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

The code itself is no longer accurate without updating once we have multiple address spaces, since clearing the domains of every object requires scanning the inactive list for all VMs.

"This code is dead. Just remove it rather than port it to vma." - Chris Wilson

Recommended-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <benjamin.widawsky@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ville Syrjälä

All the ILK+ WM compute functions take the latency values in 0.1us units. Add a few comments to remind people about that.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ville Syrjälä

Adjust the current ILK/SNB/IVB watermark codepaths to use the pre-populated latency values from dev_priv instead of reading them out from the registers every time.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ville Syrjälä

Return UINT_MAX for the calculated WM level if the latency is zero. This will lead to marking the WM level as disabled. I'm not sure if latency==0 should mean that we want to disable the level, but that's the implication I got from the fact that we don't even enable the watermark code if the SSKPD register is 0.

v2: Use WARN() to scare people

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
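A minimal sketch of the guard this describes; the WARN message and the surrounding example function are assumptions, only the WARN-plus-UINT_MAX behaviour comes from the commit text:

    /* Hedged sketch: bail out of the per-level watermark computation when
     * the latency for this level was never filled in. Returning UINT_MAX
     * later causes the level to be treated as disabled. */
    static uint32_t ilk_wm_method_example(uint32_t pixel_rate,
                                          uint8_t bytes_per_pixel,
                                          uint32_t latency /* 0.1us units */)
    {
        if (WARN(latency == 0, "Latency value missing\n"))
            return UINT_MAX;

        /* ... the normal watermark math would follow here ... */
        return 0;
    }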
-
Committed by Ville Syrjälä

Seeing the watermark latency values in dmesg might help sometimes.

v2: Use DRM_ERROR() when expected latency values are missing

Note: We might hit the DRM_ERROR added in this patch and apparently there's not much we can do about that. But I think it'd be interesting to figure out whether that actually happens in the real world, so I didn't apply a s/DRM_ERROR/DRM_DEBUG_KMS/ bikeshed while applying.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
[danvet: Add note about new error dmesg output.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ville Syrjälä

Rather than pass around the plane latencies, just grab them from dev_priv nearer to where they're needed. Do the same for cursor latencies.

v2: Add some comments about latency units

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ville Syrjälä

Rather than having to read the latency values out every time, just store them in dev_priv. On ILK and IVB there is a difference between some of the latency values for different planes, so store the latency values for each plane type separately, and apply the necessary fixups during init.

v2: Fix some checkpatch complaints

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
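The per-plane-type storage this describes could look roughly like the following; the struct and field names (pri/spr/cur) are assumptions modelled on the i915 code of that time, not a quote of the patch:

    /* Hedged sketch: latency caches hung off dev_priv, one array per plane
     * type. Five entries cover the LP0..LP4 watermark levels; values are
     * kept in the raw register units and fixed up once at init time. */
    struct i915_wm_latencies_example {
        uint16_t pri_latency[5];    /* primary plane */
        uint16_t spr_latency[5];    /* sprite plane */
        uint16_t cur_latency[5];    /* cursor */
    };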
-
Committed by Ville Syrjälä

ILK has a slightly different way to read out the watermark latency values. On ILK the LP0 latency values are in fact not stored in any register, and instead we must use fixed values.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

Hangcheck, and some of the recent reset code for guilty batches, need to know which address space the object was in at the time of a hangcheck. This is because we use offsets in the (PP|G)GTT to determine this information, and those offsets can differ depending on which VM they are bound into. Since we still only ever have 1 VM, this code shouldn't yet have any impact.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

With multiple VMs, the eviction code benefits from being able to blindly put pages without needing to know if there are any entities still holding on to those pages. As such it's preferable to return -EBUSY before the BUG. Eviction code is the only user for now, but overall it makes sense anyway, IMO.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
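A hedged sketch of what "return -EBUSY before the BUG" amounts to in a put_pages-style path; the field names are assumptions from that era's driver and the function is illustrative only:

    /* Sketch only: refuse to drop the backing pages while someone still
     * has them pinned, instead of hitting the BUG/WARN further down. */
    static int i915_gem_object_put_pages_example(struct drm_i915_gem_object *obj)
    {
        if (obj->pages == NULL)
            return 0;

        if (obj->pages_pin_count)
            return -EBUSY;

        /* ... the actual page teardown follows in the real code ... */
        return 0;
    }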
-
Committed by Ben Widawsky

For now, objects will maintain the same cache levels amongst all address spaces. This is to limit the risk of bugs, as playing with cacheability in the different domains can be very error prone. In the future, it may be optimal to allow setting domains per VMA (i.e. an object bound into an address space).

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

This represents the first half of hooking up VMs to execbuf. Here we basically pass an address space all around to the different internal functions. It should be much more readable, and have less risk than the second half, which begins switching over to using VMAs instead of an (obj, vm) pair. The overall series echoes this style of "add a VM, then make it smart later".

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Switch a BUG_ON to WARN_ON.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

Make it aware of which domain it is bound into: GGTT or PPGTT. While modifying the function, add a global gtt flag to the object description. Global is more interesting than aliasing, since aliasing is the default.

v2: Access VMA directly for start/size instead of helpers (Daniel)

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

Just some small cleanups, and a rename of vm->ggtt_vm requested by Daniel.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

To verbalize it, one can say, "pin an object into the given address space." The semantics of pinning remain the same otherwise. Certain objects will always have to be bound into the global GTT. Therefore, global GTT is a special case, and we keep a special interface around for it (i915_gem_obj_ggtt_pin).

v2: s/i915_gem_ggtt_pin/i915_gem_obj_ggtt_pin

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
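To make the split concrete, here is a hedged sketch of what the pin entry points might look like after this change; the parameter lists are assumptions based on the description, and obj_to_ggtt_vm() is a hypothetical helper standing in for however the global GTT address space is looked up:

    /* Sketch only: pin an object into an arbitrary address space, with a
     * thin GGTT wrapper for objects that must live in the global GTT. */
    int i915_gem_object_pin(struct drm_i915_gem_object *obj,
                            struct i915_address_space *vm,
                            uint32_t alignment,
                            bool map_and_fenceable,
                            bool nonblocking);

    static inline int
    i915_gem_obj_ggtt_pin(struct drm_i915_gem_object *obj, uint32_t alignment,
                          bool map_and_fenceable, bool nonblocking)
    {
        /* obj_to_ggtt_vm() is hypothetical: returns the global GTT VM. */
        return i915_gem_object_pin(obj, obj_to_ggtt_vm(obj), alignment,
                                   map_and_fenceable, nonblocking);
    }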
-
Committed by Ben Widawsky

Due to the move of the active/inactive lists into the VM, it no longer makes sense to use them for shrinking, since shrinking isn't VM specific (such a need may also exist, but doesn't yet). What we can do instead is use the global bound list to find all objects which aren't active.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
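A hedged sketch of the kind of scan this enables, walking the global bound list and skipping active or pinned objects; the list and field names are assumptions from that era's i915 code:

    /* Sketch only: count shrinkable pages by walking the single global
     * bound list instead of the per-VM inactive lists. */
    static unsigned long count_inactive_pages_example(struct drm_i915_private *dev_priv)
    {
        struct drm_i915_gem_object *obj;
        unsigned long count = 0;

        list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list)
            if (!obj->active && obj->pin_count == 0)
                count += obj->base.size >> PAGE_SHIFT;

        return count;
    }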
-
Committed by Ben Widawsky

Earlier in the conversion sequence we attempted to quickly wedge in the transitional interface as static inlines. Now that we're sure these interfaces are sane, for easier debug and to decrease code size (since many of these functions may be called quite a bit), make them real functions.

While at it, kill off the set_color interface. We'll always have the VMA, or easily get to it.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

With an upcoming change to bind, to make checkpatch happy and keep the code clean, we need to rework this code a bit. This should have no functional impact.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Add the newline Chris requested.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

Move all the similar address space (VM) initialization code to one function. Until we have multiple VMs, there should only ever be 1 VM. The aliasing ppgtt is a special case without its own VM (since it doesn't need its own address space management).

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

The default LLC age was changed:

commit 0d8ff15e
Author: Ben Widawsky <benjamin.widawsky@intel.com>
Date: Thu Jul 4 11:02:03 2013 -0700

    drm/i915/hsw: Set correct Haswell PTE encodings.

On the surface it would seem setting a default age wouldn't matter because all GEM BOs are aged similarly, so the order in which objects are evicted would not be subject to aging. The current working theory as to why this caused a regression though is that LLC is a bit special in that it is shared with the CPU. Presumably (not verified) the CPU fetches cachelines with age 3, and therefore recently cached GPU objects would be evicted before similar CPU objects when the LLC is full. It stands to reason therefore that this would negatively impact CPU bound benchmarks - but those seem to be low on the priority list.

eLLC OTOH does not have this same property as LLC. It should be used entirely for the GPU, and so the age really shouldn't matter. Furthermore, we have no evidence to suggest one is better than another on eLLC. Since we've never properly supported eLLC before now, there should be no regression. If the GPU client really wants "younger" objects, they should use MOCS.

v2: Drop the extra #define (Chad)
v3: Actually git add
v4: Pimped commit message

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=67062
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Peter Wu

Since commit 29a241cc (ACPICA: Add argument typechecking for all predefined ACPI names), _DSM parameters are validated, which triggers the following warnings:

ACPI Warning: \_SB_.PCI0.GFX0._DSM: Argument #4 type mismatch - Found [Integer], ACPI requires [Package] (20130517/nsarguments-95)
ACPI Warning: \_SB_.PCI0.GFX0._DSM: Argument #4 type mismatch - Found [Integer], ACPI requires [Package] (20130517/nsarguments-95)
ACPI Warning: \_SB_.PCI0.P0P2.PEGP._DSM: Argument #4 type mismatch - Found [Integer], ACPI requires [Package] (20130517/nsarguments-95)
ACPI Warning: \_SB_.PCI0.P0P2.PEGP._DSM: Argument #4 type mismatch - Found [Integer], ACPI requires [Package] (20130517/nsarguments-95)

As the Intel _DSM method seems to ignore this parameter, let's comply with the ACPI spec and use a Package instead.

Signed-off-by: Peter Wu <lekensteyn@gmail.com>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=32602
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
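A hedged sketch of the change this implies in the _DSM argument setup; the helper name and surrounding code are assumptions, only the Integer-to-Package switch for argument #4 is taken from the commit text:

    #include <linux/acpi.h>

    /* Sketch only: pass argument #4 of the Intel _DSM as an empty Package
     * rather than an Integer, since the method ignores it anyway. */
    static void intel_dsm_build_arg4_example(union acpi_object *arg4)
    {
        arg4->type = ACPI_TYPE_PACKAGE;   /* was ACPI_TYPE_INTEGER */
        arg4->package.count = 0;
        arg4->package.elements = NULL;
    }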
-
Committed by Imre Deak

Userspace can pass a mode with an unspecified vsync/hsync polarity setting. All encoders in the Intel driver take this to mean a negative polarity setting. The HW readout/state checker code on the other hand needs these flags to be explicitly set, otherwise the state checker will WARN about the mismatch. Get rid of the WARN by making the polarity setting explicit in the adjusted mode flags based on the requested mode flags. This will keep the existing behavior otherwise.

Note that we could guess from the other timing parameters whether the user wanted a VESA or other standard mode and set the polarity accordingly. This is what the NV driver does (drivers/gpu/drm/nouveau/dispnv04/crtc.c), but I think that's not very exact and would change the existing behavior of the Intel driver.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=65442
Signed-off-by: Imre Deak <imre.deak@intel.com>
Tested-by: cancan,feng <cancan.feng@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
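A hedged sketch of the kind of flag fix-up described here, run while computing the adjusted mode; the helper name is an assumption, only the "unspecified means negative" rule comes from the commit text:

    #include <drm/drm_crtc.h>

    /* Sketch only: if the requested mode leaves a sync polarity unspecified,
     * make the driver's implicit "negative" choice explicit in the adjusted
     * mode so the state checker sees consistent flags. */
    static void intel_fixup_sync_flags_example(struct drm_display_mode *adjusted_mode)
    {
        if (!(adjusted_mode->flags & (DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_NHSYNC)))
            adjusted_mode->flags |= DRM_MODE_FLAG_NHSYNC;

        if (!(adjusted_mode->flags & (DRM_MODE_FLAG_PVSYNC | DRM_MODE_FLAG_NVSYNC)))
            adjusted_mode->flags |= DRM_MODE_FLAG_NVSYNC;
    }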
-
Committed by Jani Nikula

VLV wants encoder enabling before the pipe is up. With the previously rearranged VLV DP and HDMI ->pre_enable and ->enable callbacks in place, this no longer depends on the early ->enable hook call. Move the ->enable call to the end of the sequence, similar to the crtc enable on other platforms. This will be needed e.g. for moving the eDP backlight enabling to the right place in the sequence, which is currently done too early on VLV. There should be no functional changes.

v2: Rebase.
v3: Explain why this is needed in the commit message (Chris).

Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Jani Nikula

VLV wants encoder enabling before the pipe is up. This is currently achieved by calling the ->enable callback early, right after the ->pre_enable callback, in valleyview_crtc_enable(). This loses both the distinction between ->pre_enable and ->enable on VLV and the possibility to use a hook at the end of the modeset sequence.

Rearrange the HDMI callbacks to make it possible to move the ->enable call later. Basically do everything in ->pre_enable on VLV, and make ->enable a NOP. There should be no functional changes.

v2: Rebase.
v3: Explain why this is needed in the commit message (Chris).

Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Jani Nikula

VLV wants encoder enabling before the pipe is up. This is currently achieved by calling the ->enable callback early, right after the ->pre_enable callback, in valleyview_crtc_enable(). This loses both the distinction between ->pre_enable and ->enable on VLV and the possibility to use a hook at the end of the modeset sequence.

Rearrange the DP callbacks to make it possible to move the ->enable call later. Basically do everything in ->pre_enable on VLV, and make ->enable a NOP. There should be no functional changes.

v2: Rebase.
v3: Explain why this is needed in the commit message (Chris).

Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Chris Wilson

Otherwise we get flooded by the kernel warning us that we are doing long sequences of IO without serialisation. For example,

WARNING: CPU: 0 PID: 11136 at drivers/gpu/drm/i915/intel_sideband.c:40 vlv_sideband_rw+0x48/0x1ef()
Modules linked in:
CPU: 0 PID: 11136 Comm: kworker/u2:0 Tainted: G W 3.11.0-rc2+ #4
Call Trace:
[<c2028564>] ? warn_slowpath_common+0x63/0x78
[<c227ad43>] ? vlv_sideband_rw+0x48/0x1ef
[<c20285dd>] ? warn_slowpath_null+0xf/0x13
[<c227ad43>] ? vlv_sideband_rw+0x48/0x1ef
[<c227b060>] ? vlv_dpio_write+0x1c/0x21
[<c2262b3b>] ? intel_dp_set_signal_levels+0x24a/0x385
[<c2264909>] ? intel_dp_complete_link_train+0x25/0x1d1
[<c2264c55>] ? intel_dp_check_link_status+0xf7/0x106
[<c2238ced>] ? i915_hotplug_work_func+0x17b/0x221
[<c203a204>] ? process_one_work+0x12e/0x210
[<c203a5e4>] ? worker_thread+0x116/0x1ad
[<c203a4ce>] ? rescuer_thread+0x1cb/0x1cb
[<c203d8f5>] ? kthread+0x67/0x6c
[<c2457ebb>] ? ret_from_kernel_thread+0x1b/0x30
[<c203d88e>] ? init_completion+0x18/0x18

v2: Retire the locking in vlv_crtc_enable() and do it close to the meat.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
[danvet: Squash in a s/mutex_lock/mutex_unlock/ fixup spotted by the 0-day kernel build/coccinelle and reported by Dan Carpenter.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
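A hedged sketch of the serialisation this adds around the sideband access; the lock name and the vlv_dpio_write() signature are assumptions from that era's driver, and the real patch takes the lock in the callers around a batch of writes rather than per write:

    /* Sketch only: take the sideband/DPIO mutex around the register write
     * so the long IO sequence is serialised and the WARN above goes away. */
    static void vlv_dpio_write_locked_example(struct drm_i915_private *dev_priv,
                                              int reg, u32 val)
    {
        mutex_lock(&dev_priv->dpio_lock);
        vlv_dpio_write(dev_priv, reg, val);
        mutex_unlock(&dev_priv->dpio_lock);
    }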
-
Committed by Jesse Barnes

Art confirms that this should work fine. Since most panels are 18bpp with dithering from 24bpp, the existing code wouldn't be enabled in most cases.

Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ville Syrjälä

SNB and IVB have a slightly different way to read out the watermark latency values.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ville Syrjälä

The LP1+ watermark latency values need to be multiplied by 5 to make them suitable for watermark calculations. However on pre-HSW platforms we're going to need the raw value later when we have to write it to the WM_LPn registers' latency field. So delay the multiplication until it's needed.

Note: Paulo complains that the units of wm (now in 100ns) aren't really clear and I agree. But that can be fixed later on ...

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
[danvet: Add a comment about the unit obfuscation.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
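The unit arithmetic is easy to trip over: since the watermark computations take 0.1us units (see the unit-comment patch above) and the conversion is a factor of 5, the raw LP1+ register values are presumably in 0.5us steps. A hedged sketch of the late conversion (the helper is illustrative, not the patch's code):

    /* Sketch only: keep the raw register value for programming WM_LPn, and
     * scale to 0.1us units only where the watermark math needs it. */
    static inline u32 ilk_wm_latency_scale_example(u16 raw_latency)
    {
        return (u32)raw_latency * 5;    /* 0.5us units -> 0.1us units */
    }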
-
Committed by Ville Syrjälä

Move parsing of MCH_SSKPD to a separate function; we'll add other platforms there later.

Note: Chris spotted an empty struct initializer and wondered whether that is hiding a compiler warning. Ville explained that it should have been part of the patch that extends this function to snb/ivb, which don't have all the levels hsw has. I've figured it's ok to keep it here with a small note.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
[danvet: Add note about the ominous struct initializer.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
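A hedged sketch of what such a dedicated readout helper could look like; the bit positions below are purely illustrative (the real MCH_SSKPD field layout differs per level and per platform), only the "read the register once, fill an array of latencies" shape comes from the commit text:

    /* Sketch only: pull the five watermark latency values out of MCH_SSKPD
     * into an array, one entry per watermark level. */
    static void intel_read_wm_latency_example(struct drm_i915_private *dev_priv,
                                              u16 wm[5])
    {
        u64 sskpd = I915_READ64(MCH_SSKPD);
        int level;

        /* Illustrative extraction only; the real per-level field widths
         * and offsets are taken from the hardware documentation. */
        for (level = 0; level < 5; level++)
            wm[level] = (sskpd >> (level * 8)) & 0xff;
    }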
-