- 17 June 2014, 1 commit
-
-
Authored by Chew, Chiau Ee

It was observed that after module removal followed by insertion, the SW mode chipselect is not properly set, causing transfer failures due to incorrect CS toggling.

Signed-off-by: Chew, Chiau Ee <chiau.ee.chew@intel.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
- 06 June 2014, 1 commit
-
-
Authored by Chew, Chiau Ee

This fixes SPI DMA transfer failures for speeds below 1M. With the current DMA burst size setting (16), the Rx data bytes are invalid because each data byte is repeated according to the burst size setting. Suppose we should receive the following 18 bytes of data:

    01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18

Instead, the data bytes received consist of "16 bytes of '01' + 2 bytes of '02'":

    01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 02 02

Signed-off-by: Chew, Chiau Ee <chiau.ee.chew@intel.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
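As a rough illustration of the idea (not the actual driver change), here is a minimal dmaengine sketch that lowers the Rx burst size to one word so slow transfers no longer see each byte replicated across a whole burst; the helper name, the FIFO address parameter, and the value of 1 are assumptions, only the struct dma_slave_config fields are standard API.

    #include <linux/dmaengine.h>

    /*
     * Illustrative only: configure the Rx channel to burst one word at a
     * time instead of 16, so a slow transfer does not see each received
     * byte replicated across a whole burst.
     */
    static int example_cfg_rx_chan(struct dma_chan *chan, dma_addr_t fifo)
    {
        struct dma_slave_config cfg = {
            .direction      = DMA_DEV_TO_MEM,
            .src_addr       = fifo,
            .src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE,
            .src_maxburst   = 1,    /* assumed fix: previously 16 */
        };

        return dmaengine_slave_config(chan, &cfg);
    }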
-
- 02 June 2014, 17 commits
-
-
Authored by Geert Uytterhoeven

Extract the common parts of rspi_transfer_one(), rspi_rz_transfer_one(), and qspi_transfer_out_in() into the new function rspi_common_transfer().

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

Enable DMA support for RSPI on r7s72100 (RZ/A1H).

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

Enable DMA support for QSPI on R-Car Gen2, for Single, Dual, and Quad SPI Transfers. Performance figures for reading from a QSPI FLASH driven at 24.375 MHz on r8a7791/koelsch:
  - Single: 1.1 Mbps PIO, 23 Mbps DMA
  - Dual:   12.7 Mbps PIO, 48 Mbps DMA
  - Quad:   13 Mbps PIO, 70 Mbps DMA

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

rspi_send_dma() and rspi_send_receive_dma() are very similar. Consolidate them into a single function, rspi_dma_transfer(), and add missing checks for dmaengine_submit() failures. Both sg_table pointer parameters can be NULL, as RSPI supports TX-only mode, and unidirectional DMA transfers will also be needed later for Dual/Quad DMA support.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

The DMA routines only need access to the scatter-gather tables inside the spi_transfer structures, hence just pass those.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

Refactor RSPI (on SH) DMA handling to make it reusable for other RSPI implementations:
  - Call the DMA routines after configuring the TX Mode bit and after calling rspi_receive_init(), so these RSPI-specific operations can be removed from the DMA routines,
  - Absorb rspi_transfer_out_in() into rspi_transfer_one().

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

Use the SPI core DMA mapping framework instead of our own. If available, DMA is used for transfers larger than the FIFO size (8 or 32 bytes).

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

The SPI core DMA framework needs both RX and TX DMA to function. As a preparation for converting the driver to use this framework, fall back to PIO if no DMA channel, or only one DMA channel, is available. This affects only RSPI, which could previously do DMA transfers for TX-only. RSPI-RZ and QSPI (at least for Single SPI Transfers) will need both RX and TX DMA anyway.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

The resource is known to exist, as rspi_probe() has already mapped it. Remove the test and just pass the resource. Pass the device pointer instead of the platform device pointer, as the latter is no longer needed.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

Setup of the receive and transmit DMA channels is very similar, so let's consolidate it.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

Fall back to PIO if DMA configuration failed.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

The various PIO loops are very similar. Consolidate them into a single function, rspi_pio_transfer(). Both buffer pointers can be NULL, as RSPI supports TX-only mode, and Dual/Quad SPI Transfers are unidirectional.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
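For illustration, a minimal sketch of such a consolidated PIO loop with optional buffers; all example_* names are hypothetical stand-ins for the driver's own per-word helpers.

    #include <linux/types.h>

    struct example_spi;                                      /* stands in for the driver's private data */
    int example_data_out(struct example_spi *spi, u8 byte);  /* assumed send-one-word helper */
    int example_data_in(struct example_spi *spi);            /* assumed receive-one-word helper */

    /*
     * Consolidated PIO loop: either tx or rx may be NULL for
     * unidirectional (TX-only or Dual/Quad) transfers.
     */
    static int example_pio_transfer(struct example_spi *spi, const u8 *tx,
                                    u8 *rx, unsigned int n)
    {
        while (n-- > 0) {
            if (tx) {
                int ret = example_data_out(spi, *tx++);
                if (ret < 0)
                    return ret;
            }
            if (rx) {
                int ret = example_data_in(spi);
                if (ret < 0)
                    return ret;
                *rx++ = ret;
            }
        }
        return 0;
    }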
-
Authored by Geert Uytterhoeven

RSPI needs dummy transfers to generate the SPI clock on receive; RSPI-RZ and QSPI always do both transmit and receive. Use the SPI core SPI_MASTER_MUST_RX/SPI_MASTER_MUST_TX infrastructure instead of checking for the presence of buffers and providing dummy data ourselves (for PIO), or providing a dummy buffer (for DMA). rspi_receive_dma() now provides full-duplex DMA transfers on RSPI, and is renamed to rspi_send_receive_dma(). As the SPI core will always provide a TX buffer, the logic to choose between DMA send and DMA send/receive in rspi_transfer_one() now has to check for the presence of an RX buffer; likewise for the DMA availability tests in rspi_is_dma(). The buffer tests in qspi_transfer_one() are now always true, so they're removed.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
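A small sketch of the core infrastructure mentioned above: SPI_MASTER_MUST_RX and SPI_MASTER_MUST_TX are the SPI core's flags, while the setup helper and where it is called from are assumptions.

    #include <linux/spi/spi.h>

    /*
     * With these flags set, the SPI core hands the driver a dummy rx
     * buffer for tx-only transfers and a dummy tx buffer for rx-only
     * transfers, so the driver no longer has to synthesize dummy data.
     */
    static void example_setup_master(struct spi_master *master)
    {
        master->flags |= SPI_MASTER_MUST_RX | SPI_MASTER_MUST_TX;
    }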
-
Authored by Geert Uytterhoeven

The 16-bit DMA support doesn't fit well within the SPI core DMA framework, as it needs to manage its own double-sized temporary buffers for handling the interleaved data. Remove it, as there is no in-tree board code that sets rspi_plat_data.dma_width_16bit.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

Since commit 8449fd76 ("spi: rspi: Merge rspi_send_pio() and rspi_receive_pio()"), rspi_receive_init() is called for transmit-only transfers too, although this is not needed. Only call rspi_receive_init() when receiving, to preserve the previous behavior of RSPI on SH.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
- 01 June 2014, 1 commit
-
-
Authored by Antonio Ospite

In commit 7dd62787 ("spi/pxa2xx: Convert to core runtime PM"), master->auto_runtime_pm was set to true. In this case pm_runtime_enable() must be called *before* spi_register_master(), otherwise the kernel hangs with this error message:

    spi_master spi0: Failed to power device: -13

A similar fix, but for spi/hspi, was applied in commit 268d7643.

Signed-off-by: Antonio Ospite <ao2@ao2.it>
Signed-off-by: Mark Brown <broonie@linaro.org>
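A hedged probe-order sketch of the fix described above; pm_runtime_set_active(), pm_runtime_enable() and devm_spi_register_master() are the standard calls, the helper name and its arguments are assumptions.

    #include <linux/pm_runtime.h>
    #include <linux/spi/spi.h>

    /*
     * With auto_runtime_pm set, the SPI core resumes the device while
     * registering the master, so runtime PM must already be enabled at
     * that point.
     */
    static int example_probe_tail(struct device *dev, struct spi_master *master)
    {
        master->auto_runtime_pm = true;

        pm_runtime_set_active(dev);
        pm_runtime_enable(dev);                 /* must precede registration */

        return devm_spi_register_master(dev, master);
    }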
-
- 31 May 2014, 1 commit
-
-
Authored by Daniel Vetter

So a few people complained that

    commit 177cf92d
    Author: Daniel Vetter <daniel.vetter@ffwll.ch>
    Date:   Tue Apr 1 22:14:59 2014 +0200

        drm/crtc-helpers: fix dpms on logic

which was merged into 3.15-rc1, broke resume on radeons. Strangely, git bisect led everyone to

    commit 25f397a4
    Author: Daniel Vetter <daniel.vetter@ffwll.ch>
    Date:   Fri Jul 19 18:57:11 2013 +0200

        drm/crtc-helper: explicit DPMS on after modeset

which was merged long ago and was actually part of 3.14.

Digging deeper, I noticed (again) that the call to drm_helper_resume_force_mode in the radeon resume handlers was previously a no-op, because everything gets shut down on suspend; radeon does this with explicit calls to drm_helper_connector_dpms with DPMS_OFF. But with 177cf92d we now force the dpms state to ON, so suddenly resume_force_mode actually forced the crtcs back on. This is the intention of the change after all; the problem is that radeon resumes the fbdev console layer _before_ restoring the display, through calling fb_set_suspend. And fbcon does an immediate ->set_par, which in turn causes the same forced mode restore to happen. Two concurrent modeset operations didn't lead to happiness. Fix this by delaying the fbcon resume until the end of the radeon resume functions.

v2: Fix up a bit of the spelling fail.

References: https://lkml.org/lkml/2014/5/29/1043
References: https://lkml.org/lkml/2014/5/2/388
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=74751
Tested-by: Ken Moffat <zarniwhoop@ntlworld.com>
Cc: Alex Deucher <alexdeucher@gmail.com>
Cc: Ken Moffat <zarniwhoop@ntlworld.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
-
- 30 May 2014, 4 commits
-
-
Authored by Christian König

No need to always allocate the theoretical maximum here.

Signed-off-by: Christian König <christian.koenig@amd.com>
-
Authored by Marek Olšák

It hangs the hardware.

Signed-off-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Cc: stable@vger.kernel.org
-
Authored by Christian König

Signed-off-by: Christian König <christian.koenig@amd.com>
CC: stable@vger.kernel.org
-
Authored by Christian König

Let's be conservative and use 100 here until we find something better.

Bugs: https://bugzilla.kernel.org/show_bug.cgi?id=75241
Signed-off-by: Christian König <christian.koenig@amd.com>
-
- 29 May 2014, 1 commit
-
-
Authored by Stefan Richter

Undo a feature introduced in v3.14 by commit fcd46b34 ("firewire: Enable remote DMA above 4 GB"). That change raised the minimum address at which protocol drivers and user programs can register for request reception from 0x0001'0000'0000 to 0x8000'0000'0000. It turned out that at least one vendor-specific protocol exists which uses lower addresses: https://bugzilla.kernel.org/show_bug.cgi?id=76921

For the time being, revert most of commit fcd46b34 so that affected protocols work as they did with kernel v3.13 and before. Just keep the valid documentation parts from the regressing commit, and the ability to identify controllers which could be programmed to accept >32-bit physical DMA addresses. The rest of fcd46b34 should probably be brought back as an optional feature instead of a default one.

Reported-by: Fabien Spindler <fabien.spindler@inria.fr>
Cc: <stable@vger.kernel.org> # 3.14+
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
-
- 27 May 2014, 9 commits
-
-
Authored by Hannes Reinecke

lockdep complains about a circular locking dependency, and indeed we need to release the lock before calling dm_table_run_md_queue_async(). As such, commit 4cdd2ad7 ("dm mpath: fix lock order inconsistency in multipath_ioctl") must also be reverted, in addition to fixing the lock order in the other dm_table_run_md_queue_async() callers.

Reported-by: Bart van Assche <bvanassche@acm.org>
Tested-by: Bart van Assche <bvanassche@acm.org>
Signed-off-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
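To make the lock-order rule concrete, a hedged sketch: the multipath structure, field names and the condition are placeholders, only dm_table_run_md_queue_async() is the real DM call.

    #include <linux/device-mapper.h>
    #include <linux/spinlock.h>

    struct example_multipath {              /* stands in for struct multipath */
        spinlock_t lock;
        bool queue_io;
        struct dm_table *table;
    };

    /*
     * Corrected lock order: decide under the spinlock, but call
     * dm_table_run_md_queue_async() only after the lock is released.
     */
    static void example_kick_queue(struct example_multipath *m)
    {
        unsigned long flags;
        bool run_queue;

        spin_lock_irqsave(&m->lock, flags);
        run_queue = !m->queue_io;           /* placeholder for the real condition */
        spin_unlock_irqrestore(&m->lock, flags);

        if (run_queue)
            dm_table_run_md_queue_async(m->table);
    }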
-
Authored by Ming Lei

When there aren't enough vring descriptors for adding to the vq, blk-mq is put into the stopped state until some of the pending descriptors are completed and freed. Unfortunately, the vq's interrupt may come just before blk-mq's BLK_MQ_S_STOPPED flag is set, so blk-mq may stay stopped even though lots of descriptors have been completed and freed in the interrupt handler. The worst case is that all pending descriptors are freed in the interrupt handler and the queue is kept stopped forever. This patch fixes the problem by starting/stopping blk-mq while holding vq_lock.

Cc: Jens Axboe <axboe@kernel.dk>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>

Conflicts:
	drivers/block/virtio_blk.c
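A hedged sketch of the completion-side half of the fix; the structure and handler names are placeholders, and the blk-mq call reflects the generic API rather than the exact driver code.

    #include <linux/blk-mq.h>
    #include <linux/blkdev.h>
    #include <linux/spinlock.h>

    struct example_vblk {                   /* stands in for struct virtio_blk */
        spinlock_t vq_lock;
        struct gendisk *disk;
    };

    /*
     * Restart the stopped hardware queues while holding the same lock the
     * submission path holds when it runs out of descriptors and stops the
     * queue, so the "stop" cannot race past this "start".
     */
    static void example_virtblk_done(struct example_vblk *vblk)
    {
        unsigned long flags;

        spin_lock_irqsave(&vblk->vq_lock, flags);
        /* ... reap completed descriptors from the virtqueue here ... */
        blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
        spin_unlock_irqrestore(&vblk->vq_lock, flags);
    }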
-
Authored by Heinz Mauelshagen

The DM cache target cannot cope with discards that span multiple cache blocks, so each discard bio that spans more than one cache block must be split by the DM core.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # v3.9+
-
Authored by Chris Wilson

This is pure evil. Userspace, I'm looking at you SNA, repacks batch buffers on the fly after generation as they are being passed to the kernel for execution. These batches also contain self-referenced relocations as a single buffer encompasses the state commands, kernels, vertices and sampler. During generation the buffers are placed at known offsets within the full batch, and then the relocation deltas (as passed to the kernel) are tweaked as the batch is repacked into a smaller buffer. This means that userspace is passing negative relocation deltas, which subsequently wrap to large values if the batch is at a low address. The GPU hangs when it then tries to use the large value as a base for its address offsets, rather than wrapping back to the real value (as one would hope). As the GPU uses positive offsets from the base, we can treat the relocation address as the minimum address read by the GPU. For the upper bound, we trust that userspace will not read beyond the end of the buffer.

So, how do we fix negative relocations from wrapping? We can either check that every relocation looks valid when we write it, and then position each object such that we prevent the offset wraparound, or we just special-case the self-referential behaviour of SNA and force all batches to be above 256k. Daniel prefers the latter approach.

This fixes a GPU hang when it tries to use an address (relocation + offset) greater than the GTT size. The issue would occur quite easily with full-ppgtt as each fd gets its own VM space, so low offsets would often be handed out. However, with the rearrangement of the low GTT due to capturing the BIOS framebuffer, it is already affecting kernels 3.15 onwards. I think only IVB+ is susceptible to this bug, but the workaround should only kick in rarely, so it seems sensible to always apply it.

v3: Use a bias for batch buffers to prevent small negative delta relocations from wrapping.

v4 from Daniel:
- s/BIAS/BATCH_OFFSET_BIAS/
- Extract eb_vma_misplaced/i915_vma_misplaced since the conditions were growing rather cumbersome.
- Add a comment to eb_get_batch explaining why we do this.
- Apply the batch offset bias everywhere but mention that we've only observed it on gen7 gpus.
- Drop PIN_OFFSET_FIX for now, that slipped in from a feature patch.

v5: Add static to eb_get_batch, spotted by 0-day tester.

Testcase: igt/gem_bad_reloc
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=78533
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> (v3)
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Authored by Chris Wilson

We only want to modify a single field in the userspace view of the execbuffer command buffer, so explicitly change that rather than copy everything back again. This serves two purposes:

1. The single fields are much cheaper to copy (constant size, so the copy uses special-case code) and much smaller than the whole array.
2. We modify the array for internal use, and those modifications need to be masked from the user.

Note: We need this backported since without it the next bugfix will blow up when userspace recycles batchbuffers and relocations.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Authored by Chris Wilson

A single object may be referenced by multiple registers, fundamentally breaking the static allotment of ids in the current design. When the object is used the second time, the physical address of the first assignment is relinquished and a second one granted. However, the hardware is still reading (and possibly writing) the old physical address, which has now been returned to the system. Eventually hilarity will ensue, but in the short term it just means that cursors are broken when using more than one pipe.

v2: Fix up leak of pci handle when handling an error during attachment, and avoid a double kmap/kunmap. (Ville) Rebase against -fixes.

v3: And fix the error handling added in v2 (Ville)

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=77351
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Authored by Hans de Goede

Most of the affected models share pnp-ids for the touchpad, so switching to pnp-ids gives us 2 advantages:
1) It shrinks the quirk list.
2) It lowers the frequency of new quirk additions; for example, the recently added W540 quirk would not have been necessary, since it uses the same LEN0034 pnp-ids as other models added before it.

As an added bonus it actually puts the quirk on the actual psmouse, rather than on the machine, which is technically more correct.

Cc: stable@vger.kernel.org
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
-
Authored by Hans de Goede

This is a preparation patch for simplifying the min/max quirk table.

Cc: stable@vger.kernel.org
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
-
Authored by Hans de Goede

The T540p has a touchpad with pnp-id LEN0034. All the models with this pnp-id have the same min/max values, except the T540p, where the values are slightly off. Fix them to be identical. This is a preparation patch for simplifying the quirk table.

Cc: stable@vger.kernel.org
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
-
- 26 May 2014, 4 commits
-
-
Authored by Valentin Longchamp

By default, for every espi transfer the rx_buf is placed right after the tx_buf. This can lead to a buffer overflow when the cumulative size of the TX and RX data is larger than the allocated 64K transfer buffer (this is the case when sending, for instance, a read command and reading 64K back; please see http://article.gmane.org/gmane.linux.drivers.mtd/53411). This is fixed by always setting the RX buffer pointer to the beginning of the transfer buffer.

[The driver shouldn't be doing the copy in the first place and instead sending directly from the supplied buffer, but this is at least not worse than what's there -- broonie]

Signed-off-by: Valentin Longchamp <valentin.longchamp@keymile.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
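A hedged before/after sketch of the buffer layout; the structure and field names are assumptions, only the fixed-size bounce buffer constraint comes from the changelog.

    #include <linux/types.h>

    struct example_espi_trans {             /* stands in for the driver's transfer bookkeeping */
        const void *tx_buf;
        void *rx_buf;
        size_t tx_len;
        size_t rx_len;
    };

    /*
     * Placing rx right after tx overflows the fixed-size bounce buffer
     * once tx_len + rx_len exceeds it, so rx now reuses the buffer from
     * its start instead.
     */
    static void example_layout_bufs(struct example_espi_trans *t, void *bounce)
    {
        t->tx_buf = bounce;
        /* old, overflow-prone layout: t->rx_buf = bounce + t->tx_len; */
        t->rx_buf = bounce;             /* fixed: RX starts at the beginning */
    }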
-
Authored by Geert Uytterhoeven

Rejecting unsupported values of spi-tx-bus-width and spi-rx-bus-width may break compatibility with future DTs. Just ignore them, falling back to Single SPI Transfers.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Authored by Geert Uytterhoeven

The calculation of the bit rate divider used a standard C division, which rounds down the quotient. This may lead to a higher bit rate than requested. Round up to avoid this. E.g. on Koelsch, the SPI flash (configured for 30 MHz) was driven at 48.75 MHz. After this patch it is driven at a safe 24.375 MHz.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@linaro.org>
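A hedged sketch of the rounding change: DIV_ROUND_UP() is the standard kernel macro, while the divide-by-two formula and the 97.5 MHz parent clock used in the comment are assumptions inferred from the figures above, not the QSPI block's exact register encoding.

    #include <linux/kernel.h>

    /*
     * Round the bit rate divider up so the resulting SPI clock never
     * exceeds the requested rate. With e.g. a 97.5 MHz parent and a
     * 30 MHz request, plain division yields a divider giving 48.75 MHz,
     * while DIV_ROUND_UP yields the safe 24.375 MHz from the changelog.
     */
    static unsigned long example_bitrate_div(unsigned long parent_hz,
                                             unsigned long spi_hz)
    {
        return DIV_ROUND_UP(parent_hz, 2 * spi_hz);     /* was: parent_hz / (2 * spi_hz) */
    }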
-
Authored by Aaron Lu

When the thermal module is to be removed, we should destroy the acpi_thermal_pm_queue workqueue after the ACPI driver's remove callback has been executed, as we still need to flush the workqueue there; otherwise a NULL pointer access will be hit.

Reported-and-tested-by: Kui Zhang <kuizhang@gmail.com>
References: http://www.spinics.net/lists/kernel/msg1747251.html
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Aaron Lu <aaron.lu@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
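A hedged sketch of the resulting module-exit ordering; the driver and workqueue variables are placeholders, only acpi_bus_unregister_driver() and destroy_workqueue() are the real calls.

    #include <linux/acpi.h>
    #include <linux/init.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *example_thermal_pm_queue;
    static struct acpi_driver example_thermal_driver;   /* its .remove flushes the workqueue */

    /*
     * Unregister the driver first, so its remove callback can still flush
     * work on the PM workqueue, and only then destroy the workqueue.
     */
    static void __exit example_thermal_exit(void)
    {
        acpi_bus_unregister_driver(&example_thermal_driver);
        destroy_workqueue(example_thermal_pm_queue);    /* safe only after unregister */
    }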
-
- 25 May 2014, 1 commit
-
-
Authored by Jean Delvare

The mapping from OF device IDs to platform device IDs is wrong. TYPE_NCPXXWB473 is 0 and TYPE_NCPXXWL333 is 1, so ntc_thermistor_id[TYPE_NCPXXWB473] is { "ncp15wb473", TYPE_NCPXXWB473 } while ntc_thermistor_id[TYPE_NCPXXWL333] is { "ncp18wb473", TYPE_NCPXXWB473 }. So the name is wrong for all but the "ntc,ncp15wb473" entry, and the type is wrong for the "ntc,ncp15wl333" entry. So map the entries by index; it is neither elegant nor robust, but at least it is correct.

Signed-off-by: Jean Delvare <jdelvare@suse.de>
Fixes: 9e8269de ("hwmon: (ntc_thermistor) Add DT with IIO support to NTC thermistor driver")
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Cc: Naveen Krishna Chatradhi <ch.naveen@samsung.com>
Cc: Doug Anderson <dianders@chromium.org>
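A hedged sketch of the "map by index" idea: the lookup helper is an assumption, the types are the standard ones from mod_devicetable.h.

    #include <linux/mod_devicetable.h>

    /*
     * "Map by index": the OF match data stores the index of the matching
     * platform_device_id entry, so the name and the type travel together
     * and can no longer disagree.
     */
    static const struct platform_device_id *
    example_of_to_pdev_id(const struct of_device_id *of_id,
                          const struct platform_device_id *table)
    {
        return &table[(unsigned long)of_id->data];
    }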
-