- 23 Jul 2014, 1 commit
-
-
Submitted by Rafael J. Wysocki
During system suspend, mark ACPI buttons (other than the lid) as "suspended"; while in that state, report wakeup events on button events but do not propagate those events up the stack. This prevents systems from being turned off after a button-triggered wakeup from the "freeze" sleep state.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=77611
Tested-on: Acer Aspire S5, Toshiba Portege R500
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 19 Jul 2014, 1 commit
-
-
Submitted by Hannes Frederic Sowa
The expression entropy_count -= ibytes << (ENTROPY_SHIFT + 3) could actually increase entropy_count: because the RHS is unsigned (mind the -=), the subtraction is performed modulo 2^width(int) before the result is assigned back to entropy_count. Trinity found this.
[ Commit modified by tytso to add an additional safety check for a negative entropy_count -- which should never happen -- and to also add an additional paranoia check to prevent overly large count values from being passed into urandom_read(). ]
Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
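A minimal, self-contained C sketch of the wraparound class of bug described above (the constants are illustrative; this is not the driver code). A signed counter "decremented" by an unsigned expression can end up larger than it started:

    #include <stdio.h>

    int main(void)
    {
            int entropy_count = 100;           /* signed pool counter */
            unsigned int ibytes = 0x3FFFFFF8u; /* attacker-influenced size */

            /* The RHS is unsigned, so the whole subtraction happens in
             * unsigned arithmetic modulo 2^32: 0x3FFFFFF8 << 6 wraps to
             * 0xFFFFFE00, and 100 - 0xFFFFFE00 wraps back up to 612. */
            entropy_count -= ibytes << 6;
            printf("buggy decrement:   %d\n", entropy_count); /* prints 612 */

            /* Defensive variant: widen, clamp, and never go below zero. */
            int count = 100;
            long long dec = (long long)ibytes << 6;
            count = (dec > count) ? 0 : count - (int)dec;
            printf("clamped decrement: %d\n", count);         /* prints 0 */
            return 0;
    }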
-
- 18 Jul 2014, 4 commits
-
-
Submitted by Tomasz Figa
Certain GIC implementations, namely those found on earlier, single-cluster Exynos SoCs, have registers mapped without per-CPU banking, which means that the driver needs to use a different offset for each CPU. Currently the driver calculates the offset by multiplying the value returned by cpu_logical_map() by the CPU offset parsed from DT. This is correct when the CPU topology is not specified in DT and the aforementioned function returns the core ID alone. However, when DT contains the CPU topology, the function also returns the cluster ID, which is non-zero on the mentioned SoCs and so breaks the calculation in the GIC driver. This patch fixes this by masking out the cluster ID in the CPU offset calculation so that only the core ID is considered. Multi-cluster Exynos SoCs already have banked GIC implementations, so this simple fix should be enough.
Reported-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Tomasz Figa <t.figa@samsung.com>
Fixes: db0d4db2 ("ARM: gic: allow GIC to support non-banked setups")
Cc: <stable@vger.kernel.org> # v3.3+
Link: https://lkml.kernel.org/r/1405610624-18722-1-git-send-email-t.figa@samsung.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
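A hedged sketch of the offset calculation being described; the mask stands for MPIDR affinity level 0 and is illustrative, not copied from the patch:

    /* Only the core number (MPIDR affinity level 0) may scale the per-CPU
     * register offset; once DT describes the topology, cpu_logical_map()
     * also carries the cluster ID in higher bits, which must be masked off. */
    #define AFF0_MASK 0xffUL   /* illustrative: affinity level 0 field */

    static unsigned long gic_percpu_reg_offset(unsigned long logical_map,
                                               unsigned long percpu_offset)
    {
            return percpu_offset * (logical_map & AFF0_MASK);
    }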
-
Submitted by Dexuan Cui
We should schedule the 5s "timer work" before starting the data transfer; otherwise the data transfer code may finish so quickly on another virtual CPU that the code in fcopy_write() which tries to cancel the 5s "timer work" can occasionally fail because the "timer work" may not have been scheduled yet, and as a result the fcopy process is wrongly aborted by fcopy_work_func() after 5s.
Thanks to Liz Zhang <lizzha@microsoft.com> for the initial investigation of the bug.
This addresses https://bugzilla.redhat.com/show_bug.cgi?id=1118123
Tested-by: Liz Zhang <lizzha@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
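A minimal ordering sketch of the race being closed; fcopy_timeout_work and fcopy_send_data() are illustrative names, while schedule_delayed_work() and HZ are the standard kernel APIs:

    /* Arm the 5s timeout *before* the host can possibly complete the
     * transfer, so fcopy_write() always finds a queued work item to cancel. */
    schedule_delayed_work(&fcopy_timeout_work, 5 * HZ); /* 1) arm timeout    */
    fcopy_send_data();                                  /* 2) start transfer */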
-
Submitted by Gavin Guo
When using a USB 3.0 pen drive with the [AMD] FCH USB XHCI Controller [1022:7814], on the second hot plug the USB 3.0 pen drive is recognized as a high-speed device. After bisecting the kernel, I found that commit 41e7e056 (USB: Allow USB 3.0 ports to be disabled.) causes the bug. After doing some experiments, the bug can be fixed by avoiding the execution of hub_usb3_port_disable(). Because the port status with the [AMD] FCH USB XHCI Controller [1022:7814] is already RxDetect (I tried printing out the port status before setting it to the Disabled state), it's reasonable to check the port status before really executing hub_usb3_port_disable().
Fixes: 41e7e056 (USB: Allow USB 3.0 ports to be disabled.)
Signed-off-by: Gavin Guo <gavin.guo@canonical.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Cc: <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Abbas Raza
There are two methods for ZLP (zero-length packet) generation:
1) in software;
2) automatic generation by the device controller.
Method 1) is implemented in the UDC driver, which attaches a ZLP to an IN packet if descriptor->size < wLength. Method 2) can be enabled/disabled by setting the ZLT bit in the QH.
When the gadget ffs is connected to an Ubuntu host, the host sends a get descriptor request and wLength in the setup packet is 255, while the size of the descriptor that will be sent by the gadget in the IN packet is 64 bytes. So the composite driver sets req->zero = 1. In the UDC driver the following code will then be executed:
if (hwreq->req.zero && hwreq->req.length && (hwreq->req.length % hwep->ep.maxpacket == 0)) add_td_to_list(hwep, hwreq, 0);
Case A: In the case of the Ubuntu host, the UDC driver will attach a ZLP to the IN packet. The Ubuntu host requests 255 bytes in the IN request, the gadget sends 64 bytes with a ZLP, and the host thereby learns that there is no more data. But hold on: by default ZLT=0 for endpoint 0, so the hardware also tries to automatically generate the ZLP, which blocks enumeration for ~6 seconds due to an endpoint 0 STALL; NAKs are sent to the host for any requests (OUT/PING).
Case B: When the gadget ffs is connected to an Apple device, the Apple device sends a setup packet with wLength=64. So descriptor->size = 64 and wLength=64, therefore req->zero = 0 and the UDC driver does not attach any ZLP to the IN packet. The Apple device requests 64 bytes, gets 64 bytes and does not request further IN data. But ZLT=0 by default for endpoint 0, so the hardware tries to automatically generate the ZLP, which blocks enumeration for ~6 seconds due to an endpoint 0 STALL; NAKs are sent to the host for any requests (OUT/PING).
According to the USB 2.0 specs, 8.5.3.2 Variable-length Data Stage: A control pipe may have a variable-length data phase in which the host requests more data than is contained in the specified data structure. When all of the data structure is returned to the host, the function should indicate that the Data stage is ended by returning a packet that is shorter than the MaxPacketSize for the pipe. If the data structure is an exact multiple of wMaxPacketSize for the pipe, the function will return a zero-length packet to indicate the end of the Data stage.
In Case A above: if we disable software ZLP generation and keep ZLT=0 for endpoint 0, OR if software ZLP generation is not disabled but we set ZLT=1 for endpoint 0, then enumeration doesn't block for 6 seconds.
In Case B above: if we disable software ZLP generation and keep ZLT=0 for endpoint 0, then enumeration still blocks due to the ZLP automatically generated by the hardware, which the host does not need. But if we keep software ZLP generation enabled and set ZLT=1 for endpoint 0, then enumeration doesn't block for 6 seconds.
So the proper solution for this issue seems to be to disable automatic ZLP generation by the hardware (i.e. by setting ZLT=1 for endpoint 0) and let software (the UDC driver) handle ZLP generation based on the req->zero field.
Cc: stable@vger.kernel.org
Signed-off-by: Abbas Raza <Abbas_Raza@mentor.com>
Signed-off-by: Peter Chen <peter.chen@freescale.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 17 Jul 2014, 13 commits
-
-
Submitted by Mario Kleiner
The mmio flip programming needs to be protected by the event lock as well. We also need to enable the pflip irq first and only then do the mmio programming; otherwise a flip completion may go unnoticed in the vblank of actual completion if the flip is programmed but radeon_flip_work_func gets preempted immediately after the mmio programming and before vblank. In that case the vblank irq handler wouldn't run radeon_crtc_handle_vblank() with the completion check routine, would miss the completed flip, and would only notice it one vblank after actual completion, causing a false/delayed report of flip completion.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
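A hedged sketch of the ordering the commit describes; program_mmio_flip() is a hypothetical stand-in for the register writes, while the irq and lock calls are the usual radeon/DRM primitives:

    radeon_irq_kms_pflip_irq_get(rdev, radeon_crtc->crtc_id); /* 1) pflip irq on     */

    spin_lock_irqsave(&crtc->dev->event_lock, flags);         /* 2) hold event lock  */
    program_mmio_flip(radeon_crtc, new_crtc_base);            /* 3) program new base */
    spin_unlock_irqrestore(&crtc->dev->event_lock, flags);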
-
Submitted by Mario Kleiner
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Mario Kleiner
Not needed anymore, as it is already unreffed within radeon_flip_work_func() after its only use.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Michel Dänzer
Otherwise the DRM core and userspace will be confused about which BO the CRTC is scanning out.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Michel Dänzer
As well as enabling the vblank interrupt. These shouldn't take any significant amount of time, but at least pinning the BO has actually been seen to fail in practice before, in which case we need to let userspace know about it.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Mario Kleiner
Since 3.16-rc1 we have this new failure: when the userspace XOrg ddx schedules vblank events to trigger deferred kms-pageflips, e.g. via the OML_sync_control extension call glXSwapBuffersMscOML(), or if a glXSwapBuffers() is called immediately after completion of a previous swapbuffers call, e.g. in a tight rendering loop with minimal rendering, it happens frequently that the pageflip ioctl() is executed within the same vblank in which a previous kms-pageflip completed, or - for deferred swaps - always one vblank earlier than requested by the client app. This causes premature pageflips and detection of failure by the ddx, e.g. XOrg log warnings like...
"(WW) RADEON(1): radeon_dri2_flip_event_handler: Pageflip completion event has impossible msc 201025 < target_msc 201026"
... and error/invalid return values of glXWaitForSbcOML() and the Intel_swap_events extension. The reason is the new way in which kms-pageflips are programmed since 3.16. This commit changes the time window in which the hw can execute pending programmed pageflips. Before, a pending flip would get executed anywhere within the vblank interval. Now a pending flip only gets executed at the leading edge of vblank (start of front porch), making sure that an invocation of the pageflip ioctl() within a given vblank interval will only lead to pageflip completion in the following vblank. Tested to death on a DCE-4 card.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Alex Deucher
If the value in the scratch register is 0, set it to the max level. This fixes an issue where the console fb blanking code calls back into the backlight driver on unblank and then sets the backlight level to 0 after the driver has already set the mode and enabled the backlight.
Bugs:
https://bugs.freedesktop.org/show_bug.cgi?id=81382
https://bugs.freedesktop.org/show_bug.cgi?id=70207
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Tested-by: David Heidelberger <david.heidelberger@ixit.cz>
Cc: stable@vger.kernel.org
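A minimal sketch of the described behaviour, with illustrative names rather than the actual radeon accessors:

    level = read_backlight_scratch(rdev);  /* hypothetical accessor          */
    if (level == 0)                        /* 0 means "never set", not "off" */
            level = max_backlight_level;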
-
Submitted by Alex Deucher
In some cases we fetch the EDID in the detect() callback in order to determine what sort of monitor is connected. If that happens, don't fetch the EDID again in the get_modes() callback or we will leak the EDID.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
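A hedged sketch of the reuse pattern being described; connector_priv and ddc_adapter are illustrative names, drm_get_edid() is the DRM helper:

    /* Fetch the EDID only if detect() has not already cached one; fetching a
     * second copy here would leak the first allocation. */
    if (!connector_priv->edid)
            connector_priv->edid = drm_get_edid(connector, ddc_adapter);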
-
Submitted by Suravee Suthikulpanit
Commit 3ab72f91 "dt-bindings: add GIC-400 binding" added the "arm,gic-400" compatible string, but the corresponding IRQCHIP_DECLARE was never added to the gic driver. Therefore add the missing irqchip declaration for it.
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
[Removed an additional empty line and adapted the commit message to mark it as fixing an issue.]
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Acked-by: Will Deacon <will.deacon@arm.com>
Fixes: 3ab72f91 ("dt-bindings: add GIC-400 binding")
Cc: <stable@vger.kernel.org> # v3.14+
Link: https://lkml.kernel.org/r/2621565.f5eISveXXJ@diego
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
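A sketch of what such a one-line declaration looks like in the GIC driver, assuming it reuses the driver's existing gic_of_init callback like the other GIC compatible strings do:

    IRQCHIP_DECLARE(gic_400, "arm,gic-400", gic_of_init);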
-
Submitted by Viresh Kumar
This is only relevant to implementations with multiple clusters, where clusters have separate clock lines but all CPUs within a cluster share one.
Consider a dual-cluster platform with 2 cores per cluster. During suspend we start hot-unplugging CPUs in order 1 to 3. When CPU2 is removed, policy->kobj is moved to CPU3, and when CPU3 goes down we don't free the policy or its kobj, as we want to retain permissions/values/etc. Now on resume, we get CPU2 before CPU3 and call __cpufreq_add_dev(). We recover the old policy and update policy->cpu from 3 to 2 in update_policy_cpu(). But the kobj is still tied to CPU3 and isn't moved to CPU2. We wouldn't create a link for CPU2, but would try that for CPU3 while bringing it online, which reports errors as CPU3 already has a kobj assigned to it.
This bug was introduced by commit 42f921a6, which overlooked this scenario. To fix it, let's move the kobj to the new policy->cpu while bringing the first CPU of a cluster back. Also do a WARN_ON() if kobject_move() failed, as we would reach this point only for the first CPU of a non-boot cluster, and we can't recover from this situation if kobject_move() fails.
Fixes: 42f921a6 (cpufreq: remove sysfs files for CPUs which failed to come back after resume)
Cc: 3.13+ <stable@vger.kernel.org> # 3.13+
Reported-and-tested-by: Bu Yitian <ybu@qti.qualcomm.com>
Reported-by: Saravana Kannan <skannan@codeaurora.org>
Reviewed-by: Srivatsa S. Bhat <srivatsa@mit.edu>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
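A hedged sketch of the move described above; the condition and helper names are illustrative of the recovery path, while kobject_move() and WARN_ON() are the real kernel APIs:

    /* First CPU of this cluster coming back: the retained kobject still
     * belongs to the CPU that went down last, so re-parent it and warn if
     * that ever fails - there is no way to recover at this point. */
    if (recovering_saved_policy) {
            WARN_ON(kobject_move(&policy->kobj, &cpu_dev->kobj));
            update_policy_cpu(policy, cpu);
    }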
-
Submitted by Or Gerlitz
In commit f360d88a, we advertise blocking multicast loopback to both kernel and userspace consumers, but don't allow kernel consumers (e.g. IPoIB) to use it with their UD QPs. Fix that.
Fixes: f360d88a ("IB/mlx5: Add block multicast loopback support")
Reported-by: Haggai Eran <haggaie@mellanox.com>
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
Submitted by Guenter Roeck
Temperature limit registers are signed. Limits therefore need to be clamped to (-128, 127) degrees C and not to (0, 255) degrees C. Without this fix, writing a limit of 128 degrees C sets the actual limit to -128 degrees C.
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Cc: stable@vger.kernel.org
Reviewed-by: Axel Lin <axel.lin@ingics.com>
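A hedged sketch of the store path for such an 8-bit signed limit register, using the kernel's clamp_val() and DIV_ROUND_CLOSEST() helpers (hwmon sysfs values are in millidegrees C; the register write helper is illustrative):

    /* Clamp to the signed 8-bit range before converting, so 128000 becomes
     * 127 degrees C instead of wrapping to -128. */
    val = clamp_val(val, -128000, 127000);
    write_limit_reg(client, reg, DIV_ROUND_CLOSEST(val, 1000));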
-
Submitted by Jason Wang
Return IRQ_NONE if it was not our irq. This is necessary for the case when qxl is sharing an irq line with a device A in a crash kernel. If qxl is initialized before A and A's irq was raised during this gap, returning IRQ_HANDLED in this case will cause this irq to be raised again after EOI, since the kernel thinks it was handled but in fact it was not.
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
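A minimal sketch of the shared-interrupt convention involved; the pending-bits accessor is hypothetical, not the qxl register interface:

    static irqreturn_t example_irq_handler(int irq, void *dev_id)
    {
            u32 pending = read_pending_bits(dev_id); /* hypothetical */

            if (!pending)
                    return IRQ_NONE; /* not ours - let the sharing device's handler run */

            /* ... acknowledge and handle our events ... */
            return IRQ_HANDLED;
    }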
-
- 16 Jul 2014, 8 commits
-
-
Submitted by Viresh Kumar
OPPs can be populated statically, via DT, or added at run time with dev_pm_opp_add(). While this driver handles the first case correctly, it would fail to populate OPPs added at runtime, because the call to of_init_opp_table() would fail when there are no OPPs in DT and probe would return early. To fix this, remove the error checking and call dev_pm_opp_init_cpufreq_table() unconditionally. Update the bindings as well.
Suggested-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Quentin Armitage
Commit ff1f0018 ("drivers: Enable building of Kirkwood drivers for mach-mvebu") added Kirkwood into mach-mvebu, adding MACH_KIRKWOOD to ARCH_KIRKWOOD in the Kconfig files. The change for ARM_KIRKWOOD_CPUFREQ replaced ARCH_KIRKWOOD with MACH_KIRKWOOD, whereas all the other changes were ARCH_KIRKWOOD || MACH_KIRKWOOD. As a consequence of this change, the cpufreq driver is no longer enabled for ARCH_KIRKWOOD. This patch reinstates ARM_KIRKWOOD_CPUFREQ for ARCH_KIRKWOOD.
Signed-off-by: Quentin Armitage <quentin@armitage.org.uk>
Acked-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Nicolas Del Piano
PM_OPP is a library used by several of the existing cpufreq drivers. The ARM IMX6Q cpufreq driver uses this library for its functionality; thus, it should be selected in Kconfig.
Reported-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Signed-off-by: Nicolas Del Piano <ndel314@gmail.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Linus Walleij
The Compaq iPAQ h3600 also has the K4S281632b-1H memory type. Verified by prying apart a broken board.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Hans de Goede
As reported here: https://bugzilla.redhat.com/show_bug.cgi?id=1025690 this is yet another model which needs this quirk.
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1025690
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Brian Norris
The return value from 'ubi_io_read_ec_hdr()' was stored in 'err', not in 'ret'. This fix makes sure Fastmap-enabled UBI does not miss bit-flip events while reading EC headers, and scrubs the affected PEBs. This issue was reported by Coverity Scan.
Artem: improved the commit message.
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Acked-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
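A hedged sketch of this class of bug (the surrounding fastmap code differs; UBI_IO_BITFLIPS is the bit-flip return code of ubi_io_read_ec_hdr()):

    err = ubi_io_read_ec_hdr(ubi, pnum, ech, 0);
    if (ret == UBI_IO_BITFLIPS) /* BUG: tests 'ret', but the result is in 'err', */
            scrub = 1;          /* so bit-flip scrubbing is silently skipped     */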
-
Submitted by Mike Snitzer
The block size of the dm-cache's data device must remain fixed for the life of the cache. Disallow any attempt to change the cache's data block size.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
Cc: stable@vger.kernel.org
-
Submitted by Mike Snitzer
The block size of the thin-pool's data device must remain fixed for the life of the thin-pool. Disallow any attempt to change the thin-pool's data block size. It should be noted that an attempt to change the data block size via a thin-pool table reload will be ignored as a side-effect of the thin-pool handover that the thin-pool target does during thin-pool table reload.
Here is an example outcome of attempting to load a thin-pool table that reduced the thin-pool's data block size from 1024K to 512K.
Before:
kernel: device-mapper: thin: 253:4: growing the data device from 204800 to 409600 blocks
After:
kernel: device-mapper: thin metadata: changing the data block size (from 2048 to 1024) is not supported
kernel: device-mapper: table: 253:4: thin-pool: Error creating metadata object
kernel: device-mapper: ioctl: error adding target to table
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
Cc: stable@vger.kernel.org
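A hedged sketch of the rejection check that the log above implies; the field name and error path are illustrative, DMERR() is the device-mapper logging macro:

    /* The pool's metadata was created with one data block size; refuse a
     * table reload/handover that would silently change it. */
    if (new_data_block_size != pool->sectors_per_block) {
            DMERR("changing the data block size (from %u to %u) is not supported",
                  (unsigned int)pool->sectors_per_block,
                  (unsigned int)new_data_block_size);
            return -EINVAL;
    }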
-
- 15 Jul 2014, 12 commits
-
-
Submitted by Martin Peres
Signed-off-by: Martin Peres <martin.peres@free.fr>
Tested-by: Stefan Ringel <mail@stefanringel.de>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Submitted by Olivier Sobrie
When the module sends bursts of data, sometimes a deadlock happens in the hso driver when the tty buffer doesn't get the chance to be flushed quickly enough. Remove the endless while loop in the function put_rxbuf_data(), which is called by the urb completion handler. If there isn't enough room in the tty buffer, discard all the data received in the URB.
Cc: David Miller <davem@davemloft.net>
Cc: David Laight <David.Laight@ACULAB.COM>
Cc: One Thousand Gnomes <gnomes@lxorguk.ukuu.org.uk>
Cc: Dan Williams <dcbw@redhat.com>
Cc: Jan Dumon <j.dumon@option.com>
Signed-off-by: Olivier Sobrie <olivier@sobrie.be>
Acked-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Olivier Sobrie
The workqueue "retry_unthrottle_workqueue" is not scheduled anywhere in the code. So, remove it.
Signed-off-by: Olivier Sobrie <olivier@sobrie.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Christoph Schulz
Commit 568f194e ("net: ppp: use sk_unattached_filter api") causes sk_chk_filter() to be called twice when setting a PPP pass or active filter. This applies to both the generic PPP subsystem implemented by drivers/net/ppp/ppp_generic.c and the ISDN PPP subsystem implemented by drivers/isdn/i4l/isdn_ppp.c. The first call is from within get_filter(). The second one is through the call chain
ppp_ioctl() or isdn_ppp_ioctl() --> sk_unattached_filter_create() --> __sk_prepare_filter() --> sk_chk_filter()
The first call from within get_filter() should be deleted, as get_filter() is called just before calling sk_unattached_filter_create() later on, which eventually calls sk_chk_filter() anyway. For 3.15.x, this proposed change is a bugfix rather than a pure optimization, as in that branch sk_chk_filter() may replace filter codes by other codes which are not recognized when executing sk_chk_filter() a second time. So with 3.15.x, if sk_chk_filter() is called twice, the second invocation may yield EINVAL (this depends on the filter codes found in the filter to be set, but because the replacement is done for frequently used codes, this is almost always the case). The net effect is that setting pass and/or active PPP filters does not work anymore, since sk_unattached_filter_create() always returns EINVAL due to the second call to sk_chk_filter(), regardless of whether the filter was originally sane or not.
Signed-off-by: Christoph Schulz <develop@kristov.de>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jason Wang
The napi id was not marked for gro_skb, so rx busy polling does not work correctly: the stack never tries the low-latency receive method because the socket's napi id is zero. Fix this by marking the napi id for gro_skb. The transaction rate of a 1-byte netperf TCP_RR test increases by about 50% (from 20531.68 to 30610.88).
Cc: Amir Vadai <amirv@mellanox.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
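A hedged sketch of the one-line marking in the GRO receive path, assuming an mlx4_en-style cq->napi field; skb_mark_napi_id() is the standard busy-poll helper:

    /* Copy the receive queue's napi id into the GRO skb so the socket ends
     * up with a non-zero napi id and busy polling can engage. */
    skb_mark_napi_id(gro_skb, &cq->napi);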
-
Submitted by Nikolay Aleksandrov
Obvious copy/paste error when I converted ad_select to the new option API: "lacp_rate" there should be "ad_select" so we can get the proper value.
CC: Jay Vosburgh <j.vosburgh@gmail.com>
CC: Veaceslav Falico <vfalico@gmail.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: David S. Miller <davem@davemloft.net>
Fixes: 9e5f5eeb ("bonding: convert ad_select to use the new option API")
Reported-by: Karim Scheik <karim.scheik@prisma-solutions.at>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Christoph Schulz
The PPP channel MTU is used with Multilink PPP when ppp_mp_explode() (see the ppp_generic module) tries to determine how big a fragment might be. According to RFC 1661, the MTU excludes the 2-byte PPP protocol field; see the corresponding comment and code in ppp_mp_explode():
/*
 * hdrlen includes the 2-byte PPP protocol field, but the
 * MTU counts only the payload excluding the protocol field.
 * (RFC1661 Section 2)
 */
mtu = pch->chan->mtu - (hdrlen - 2);
However, the pppoe module *does* include the PPP protocol field in the channel MTU, which is wrong as it causes the PPP payload to be 1-2 bytes too big under certain circumstances (one byte if PPP protocol compression is used, two otherwise), causing the generated Ethernet packets to be dropped. So the pppoe module has to subtract two bytes from the channel MTU. This error only manifests itself when using Multilink PPP, as otherwise the channel MTU is not used anywhere.
In the following, I will describe how to reproduce this bug. We configure two pppd instances for multilink PPP over two PPPoE links, say eth2 and eth3, with an MTU of 1492 bytes for each link and an MRRU of 2976 bytes. (This MRRU is computed by adding the two link MTUs and subtracting the MP header twice, which is 4 bytes long.) The necessary pppd statements on both sides are "multilink mtu 1492 mru 1492 mrru 2976". On the client side, we additionally need "plugin rp-pppoe.so eth2" and "plugin rp-pppoe.so eth3", respectively; on the server side, we additionally need to start two pppoe-server instances to be able to establish two PPPoE sessions, one over eth2 and one over eth3. We set the MTU of the PPP network interface to the MRRU (2976) on both sides of the connection in order to make use of the higher bandwidth. (If we didn't do that, IP fragmentation would kick in, which we want to avoid.)
Now we send an ICMPv4 echo request with a payload of 2948 bytes from client to server over the PPP link. This results in the following network packet:
   2948 (echo payload)
   +  8 (ICMPv4 header)
   + 20 (IPv4 header)
   ---------------------
   2976 (PPP payload)
These 2976 bytes do not exceed the MTU of the PPP network interface, so the IP packet is not fragmented. Now the multilink PPP code in ppp_mp_explode() prepends one protocol byte (0x21 for IPv4), making the packet one byte bigger than the negotiated MRRU. So this packet would have to be divided into three fragments. But this does not happen, as each link MTU is assumed to be two bytes larger. So this packet is divided into two fragments only, one of size 1489 and one of size 1488. Now we have for that bigger fragment:
   1489 (PPP payload)
   +  4 (MP header)
   +  2 (PPP protocol field for the MP payload (0x3d))
   +  6 (PPPoE header)
   --------------------------
   1501 (Ethernet payload)
This packet exceeds the link MTU and is discarded. If one configures the link MTU on the client side to 1501, one can see the discarded Ethernet frames with tcpdump running on the client.
A ping -s 2948 -c 1 192.168.15.254 leads to the smaller fragment that is correctly received on the server side:
(tcpdump -vvvne -i eth3 pppoes and ppp proto 0x3d)
52:54:00:ad:87:fd > 52:54:00:79:5c:d0, ethertype PPPoE S (0x8864), length 1514: PPPoE [ses 0x3] MLPPP (0x003d), length 1494: seq 0x000, Flags [end], length 1492
and to the bigger fragment that is not received on the server side:
(tcpdump -vvvne -i eth2 pppoes and ppp proto 0x3d)
52:54:00:70:9e:89 > 52:54:00:5d:6f:b0, ethertype PPPoE S (0x8864), length 1515: PPPoE [ses 0x5] MLPPP (0x003d), length 1495: seq 0x000, Flags [begin], length 1493
With the patch below, we correctly obtain three fragments:
52:54:00:ad:87:fd > 52:54:00:79:5c:d0, ethertype PPPoE S (0x8864), length 1514: PPPoE [ses 0x1] MLPPP (0x003d), length 1494: seq 0x000, Flags [begin], length 1492
52:54:00:70:9e:89 > 52:54:00:5d:6f:b0, ethertype PPPoE S (0x8864), length 1514: PPPoE [ses 0x1] MLPPP (0x003d), length 1494: seq 0x000, Flags [none], length 1492
52:54:00:ad:87:fd > 52:54:00:79:5c:d0, ethertype PPPoE S (0x8864), length 27: PPPoE [ses 0x1] MLPPP (0x003d), length 7: seq 0x000, Flags [end], length 5
And the ICMPv4 echo request is successfully received at the server side:
IP (tos 0x0, ttl 64, id 21925, offset 0, flags [DF], proto ICMP (1), length 2976) 192.168.222.2 > 192.168.15.254: ICMP echo request, id 30530, seq 0, length 2956
The bug was introduced in commit c9aa6895 ("[PPPOE]: Advertise PPPoE MTU") from the very beginning. This patch applies to 3.10 upwards, but the fix can be applied (with minor modifications) to kernels as old as 2.6.32.
Signed-off-by: Christoph Schulz <develop@kristov.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
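A hedged sketch of the resulting channel MTU assignment in the pppoe code (struct pppoe_hdr is the real PPPoE header structure; the extra 2 accounts for the PPP protocol field per RFC 1661):

    /* Channel MTU = Ethernet MTU minus the PPPoE header minus the 2-byte
     * PPP protocol field. */
    po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr) - 2;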
-
Submitted by Dave Airlie
This reverts commit 38aecea0. It breaks a Haswell Thinkpad + Lenovo dock in SST mode with an HDMI monitor attached. Before this we got the 1920x1200 mode; after this we only ever get 1024x768, and a lot of deferring. This didn't revert cleanly, but it should be fine.
bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1117008
Cc: stable@vger.kernel.org # v3.15
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Rafael J. Wysocki
This reverts commit 886129a8 (ACPI / video: change acpi-video brightness_switch_enabled default to 0), as it is reported to cause problems.
Fixes: 886129a8 (ACPI / video: change acpi-video brightness_switch_enabled default to 0)
Link: http://marc.info/?l=linux-acpi&m=140534286826819&w=2
Reported-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Axel Lin
Dashes are not allowed in hwmon name attributes. Use "da9055" instead of "da9055-hwmon".
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Cc: stable@vger.kernel.org
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
-
Submitted by Axel Lin
Dashes are not allowed in hwmon name attributes. Use "da9052" instead of "da9052-hwmon".
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Cc: stable@vger.kernel.org
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
-
Submitted by Daniel Vetter
commit 98ec7739
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date: Wed Apr 30 17:43:01 2014 +0300
drm/i915: Make primary_enabled match the actual hardware state
introduced more accurate tracking of the primary plane and some checks. It missed the plane->pipe reassignment code for gen2/3 though, which the checks caught, resulting in WARNING backtraces. Since we only use this path if the plane is on and on the wrong pipe, we can just always set the tracking bit to "enabled".
Reported-and-tested-by: Paul Bolle <pebolle@tiscali.nl>
Cc: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 14 Jul 2014, 1 commit
-
-
Submitted by Amit Shah
Since commit d9e79726, the hwrng core asks for random data in the hwrng_register() call itself. This doesn't play well with virtio -- the DRIVER_OK bit is only set by the virtio core on a successful probe, and we're not yet out of our probe routine when this call is made. This causes the host to not acknowledge any requests we put in the virtqueue, and the insmod or kernel boot process just waits for data to arrive from the host, which never happens.
CC: Kees Cook <keescook@chromium.org>
CC: Jason Cooper <jason@lakedaemon.net>
CC: Herbert Xu <herbert@gondor.apana.org.au>
CC: <stable@vger.kernel.org> # For v3.15+
Reviewed-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-