- 24 July 2019, 5 commits
-
-
Submitted by Mike Snitzer
commit 075c18c3e124a1511ebc10a89f1858c8a77dcb01 upstream. Provides useful context about bio splits in blktrace. Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com> Signed-off-by: Shile Zhang <shile.zhang@linux.alibaba.com> Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
-
Submitted by Mike Snitzer
commit 6548c7c538e5658cbce686c2dd1a9b4f5398bf34 upstream. Otherwise targets that don't support/expect IO splitting could resubmit bios using code paths with unnecessary IO splitting complexity. Depends-on: 24113d487843 ("dm: avoid indirect call in __dm_make_request") Fixes: 978e51ba ("dm: optimize bio-based NVMe IO submission") Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com> Signed-off-by: Shile Zhang <shile.zhang@linux.alibaba.com> Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
-
Submitted by Mikulas Patocka
commit 24113d4878439baf1f23c1a33dfcc340fba66e97 upstream. Indirect calls are inefficient because of retpolines, which are used as a Spectre workaround. This patch replaces an indirect call with a condition (that can be predicted by the branch predictor). Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com> Signed-off-by: Shile Zhang <shile.zhang@linux.alibaba.com> Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
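To illustrate the technique (this is a minimal userspace sketch, not the DM code; all names below are made up), here is an indirect call through a function pointer contrasted with a direct call selected by a condition the branch predictor can learn:

```c
#include <stdio.h>

/* Illustrative sketch only: indirect dispatch pays the retpoline cost,
 * while a predictable branch picking a direct call does not. */
struct bio { int sector; };

static int nvme_submit(struct bio *b)    { return b->sector + 1; }
static int generic_submit(struct bio *b) { return b->sector + 2; }

typedef int (*submit_fn)(struct bio *);

struct queue {
	int is_nvme;		/* cheap, predictable condition */
	submit_fn submit;	/* indirect path */
};

/* before: one indirect call through a function pointer */
static int submit_indirect(struct queue *q, struct bio *b)
{
	return q->submit(b);
}

/* after: a predictable branch picks one of two known direct calls */
static int submit_direct(struct queue *q, struct bio *b)
{
	return q->is_nvme ? nvme_submit(b) : generic_submit(b);
}

int main(void)
{
	struct bio b = { .sector = 0 };
	struct queue q = { .is_nvme = 1, .submit = nvme_submit };

	printf("%d %d\n", submit_indirect(&q, &b), submit_direct(&q, &b));
	return 0;
}
```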
-
Submitted by Mike Snitzer
commit a1e1cb72d96491277ede8d257ce6b48a381dd336 upstream. [Joseph] cherry-pick part_stat_get() from commit 1226b8dd0e91 ("block: switch to per-cpu in-flight counters") since we don't want the whole patch series to get involved. The risk of redundant IO accounting was not taken into consideration when commit 18a25da8 ("dm: ensure bio submission follows a depth-first tree walk") introduced IO splitting in terms of recursion via generic_make_request(). Fix this by subtracting the split bio's payload from the IO stats that were already accounted for by start_io_acct() upon dm_make_request() entry. This repeated oscillation of the IO accounting, up then down, isn't ideal, but refactoring DM core's IO splitting to pre-split bios _before_ they are accounted turned out to be an excessive amount of change that will need a full development cycle to refine and verify. Before this fix: /dev/mapper/stripe_dev is a 4-way stripe using a 32k chunksize, so bios are split on 32k boundaries.
# fio --name=16M --filename=/dev/mapper/stripe_dev --rw=write --bs=64k --size=16M \
  --iodepth=1 --ioengine=libaio --direct=1 --refill_buffers
with debugging added:
[103898.310264] device-mapper: core: start_io_acct: dm-2 WRITE bio->bi_iter.bi_sector=0 len=128
[103898.318704] device-mapper: core: __split_and_process_bio: recursing for following split bio:
[103898.329136] device-mapper: core: start_io_acct: dm-2 WRITE bio->bi_iter.bi_sector=64 len=64
...
16M written yet 136M (278528 * 512b) accounted:
# cat /sys/block/dm-2/stat | awk '{ print $7 }'
278528
After this fix, 16M written and 16M (32768 * 512b) accounted:
# cat /sys/block/dm-2/stat | awk '{ print $7 }'
32768
Fixes: 18a25da8 ("dm: ensure bio submission follows a depth-first tree walk") Cc: stable@vger.kernel.org # 4.16+ Reported-by: Bryan Gurney <bgurney@redhat.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com> Signed-off-by: Shile Zhang <shile.zhang@linux.alibaba.com> Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
-
Submitted by Mike Snitzer
commit 57c36519e4b949f89381053f7283f5d605595b42 upstream. DM's clone_bio() now benefits from using bio_trim() by fixing the fact that clone_bio() wasn't clearing BIO_SEG_VALID like bio_trim() does, which triggers blk_recount_segments() via bio_phys_segments(). Reviewed-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com> Signed-off-by: Shile Zhang <shile.zhang@linux.alibaba.com> Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
-
- 05 July 2019, 7 commits
-
-
Submitted by Suren Baghdasaryan
commit 8af0c18af1425fc70686c0fdcfc0072cd8431aa0 upstream. kthread.h can't be included in psi_types.h because that creates a circular inclusion: kthread.h would eventually include psi_types.h and then complain about kthread structures not being defined, because they are defined further down in kthread.h. Resolve this by removing the psi_types.h inclusion from the headers included from kthread.h. Link: http://lkml.kernel.org/r/20190319235619.260832-7-surenb@google.com Signed-off-by: Suren Baghdasaryan <surenb@google.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Dennis Zhou <dennis@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Li Zefan <lizefan@huawei.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tejun Heo <tj@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com> Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
-
Submitted by Johannes Weiner
commit 8508cf3ffad4defa202b303e5b6379efc4cd9054 upstream. There are several definitions of those functions/macros in places that mess with fixed-point load averages. Provide an official version. [Joseph] use stat.mean instead of stat->rqs.mean to fix conflicts in iolatency_check_latencies(); [akpm@linux-foundation.org: fix missed conversion in block/blk-iolatency.c] Link: http://lkml.kernel.org/r/20180828172258.3185-5-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Suren Baghdasaryan <surenb@google.com> Tested-by: Daniel Drake <drake@endlessm.com> Cc: Christopher Lameter <cl@linux.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Johannes Weiner <jweiner@fb.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Enderborg <peter.enderborg@sony.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Shakeel Butt <shakeelb@google.com> Cc: Tejun Heo <tj@kernel.org> Cc: Vinayak Menon <vinmenon@codeaurora.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com> Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
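For reference, the consolidated helpers operate on fixed-point values. Below is a minimal standalone sketch in the style of the LOAD_INT()/LOAD_FRAC() macros from include/linux/sched/loadavg.h; treat the exact constants as an assumption of this sketch rather than a quote of the header:

```c
#include <stdio.h>

/* Fixed-point load-average helpers, sketched after the kernel's loadavg.h
 * (constants assumed for illustration). */
#define FSHIFT		11			/* bits of fractional precision */
#define FIXED_1		(1 << FSHIFT)		/* 1.0 in fixed point */
#define LOAD_INT(x)	((x) >> FSHIFT)		/* integer part */
#define LOAD_FRAC(x)	LOAD_INT(((x) & (FIXED_1 - 1)) * 100)	/* two decimals */

int main(void)
{
	unsigned long avnrun = 3 * FIXED_1 + FIXED_1 / 4;	/* 3.25 in fixed point */

	printf("load: %lu.%02lu\n", LOAD_INT(avnrun), LOAD_FRAC(avnrun));	/* 3.25 */
	return 0;
}
```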
-
Submitted by Alex Williamson
commit ddefc033eecf23f1e8b81d0663c5db965adf5516 upstream. The commit referenced below introduced device locking around save and restore of state for each device during a PCI bus "try" reset, making it decidedly non-"try" and prone to deadlock in the event that a device is already locked. Restore __pci_reset_bus() and __pci_reset_slot() to their advertised locking semantics by pushing the save and restore functions into the branch where the entire tree is already locked. Extend the helper function names with "_locked" and update the comment to reflect this calling requirement. Fixes: b014e96d ("PCI: Protect pci_error_handlers->reset_notify() usage with device_lock()") Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Zhiyuan Hou <zhiyuan2048@linux.alibaba.com> Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
-
Submitted by Changpeng Liu
commit 1f23816b8eb8fdc39990abe166c10a18c16f6b21 upstream. In commit 88c85538, "virtio-blk: add discard and write zeroes features to specification" (https://github.com/oasis-tcs/virtio-spec), the virtio block specification has been extended to add the VIRTIO_BLK_T_DISCARD and VIRTIO_BLK_T_WRITE_ZEROES commands. This patch enables support for discard and write zeroes in the virtio-blk driver when the device advertises the corresponding features, VIRTIO_BLK_F_DISCARD and VIRTIO_BLK_F_WRITE_ZEROES. Signed-off-by: Changpeng Liu <changpeng.liu@intel.com> Signed-off-by: Daniel Verkamp <dverkamp@chromium.org> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Jiufei Xue <jiufei.xue@linux.alibaba.com> Reviewed-by: Liu Bo <bo.liu@linux.alibaba.com> Acked-by: Joseph Qi <joseph.qi@linux.alibaba.com>
-
Submitted by Eryu Guan
Prior to the xdragon platform 20181230 release (e.g. the 0930 release), vring_use_dma_api() is required to return 'true' unconditionally. Introduce a new kernel boot parameter called "vring_force_dma_api" to control the behavior: boot the xdragon host with "vring_force_dma_api" on the command line to make ENI hotplug work, while normal ECS hosts keep the original behavior. Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com> Signed-off-by: Eryu Guan <eguan@linux.alibaba.com>
-
Submitted by Arjan van de Ven
Cherry-picked from the clear-linux patches: https://github.com/clearlinux-pkgs/linux-kvm/0104-give-rdrand-some-credit.patch Try to credit rdrand/rdseed with some entropy. In VMs, and even on modern hardware, we're super starved for entropy, and while we can and do wear a tin foil hat, it's very hard to argue that rdrand and rdtsc add zero entropy. Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com> Reviewed-by: Jiufei Xue <jiufei.xue@linux.alibaba.com>
-
Submitted by Arjan van de Ven
Cherry-picked from the kata-containers patches: https://github.com/kata-containers/packaging/tree/master/kernel/patches/0002-Compile-in-evged-always.patch We need evged for NEMU (and in general for hardware-reduced ACPI). The config option cannot be set normally since it breaks all regular systems, and hardware-reduced is really a runtime choice. Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Eryu Guan <eguan@linux.alibaba.com> Reviewed-by: Jiufei Xue <jiufei.xue@linux.alibaba.com> Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
-
- 03 July 2019, 23 commits
-
-
Submitted by Thinh Nguyen
commit c7152763f02e05567da27462b2277a554e507c89 upstream. Currently req->num_trbs is not reset after the TRBs are skipped and processed from the cancelled list. The gadget driver may reuse the request with an invalid req->num_trbs, and DWC3 will incorrectly skip TRBs. To fix this, simply reset req->num_trbs to 0 after skipping through all of them. Fixes: c3acd5901414 ("usb: dwc3: gadget: use num_trbs when skipping TRBs on ->dequeue()") Signed-off-by: Thinh Nguyen <thinhn@synopsys.com> Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> Cc: Sasha Levin <sashal@kernel.org> Cc: John Stultz <john.stultz@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Jason Gunthorpe
commit 641114d2af312d39ca9bbc2369d18a5823da51c6 upstream. gcc 9 now does allocation size tracking and thinks that passing the member of a union and then accessing beyond that member's bounds is an overflow. Instead of using the union member, use the entire union with a cast to get to the sockaddr. gcc will now know that the memory extends to the full size of the union. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Fei Li
[ Upstream commit 72b319dc08b4924a29f5e2560ef6d966fa54c429 ] Currently, after setting the tap0 link up, the tun code wakes the tx/rx wait queues up in tun_net_open() when .ndo_open() is called; however, the IFF_UP flag has not been set yet. If there's already a wait queue, it would fail to transmit when checking the IFF_UP flag in tun_sendmsg(). Then vhost_poll_start() will add the wq into wqh until it is woken up again. Although this works when the IFF_UP flag has been set by the time tun_chr_poll detects it, it does not if the IFF_UP flag has not been set at that time. Sadly the latter case is a fatal error, as the wq will never be woken up in the future unless the link is later manually set up on purpose. Fix this by moving the wakeup into the NETDEV_UP event notifier; this makes sure IFF_UP has been set before the wait queues are woken up. Signed-off-by: Fei Li <lifei.shirley@bytedance.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by YueHaibing
[ Upstream commit ee4297420d56a0033a8593e80b33fcc93fda8509 ] We should rather have vlan_tci filled all the way down to the transmitting netdevice and let it do the hw/sw vlan implementation. Suggested-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Roland Hii
[ Upstream commit d0bb82fd60183868f46c8ccc595a3d61c3334a18 ] When transmitting certain PTP frames, e.g. SYNC and DELAY_REQ, the PTP daemon, e.g. ptp4l, polls the driver for the frame's transmit hardware timestamp. The polling will most likely time out if tx coalescing is enabled, because the Interrupt-on-Completion (IC) bit is not set in the tx descriptor for those frames. This patch ignores the tx coalesce parameter and sets the IC bit when transmitting PTP frames which need to report the frame transmit hardware timestamp to user space. Fixes: f748be53 ("net: stmmac: Rework coalesce timer and fix multi-queue races") Signed-off-by: Roland Hii <roland.king.guan.hii@intel.com> Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: Voon Weifeng <weifeng.voon@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Roland Hii
[ Upstream commit a1e5388b4d5fc78688e5e9ee6641f779721d6291 ] When the ADDSUB bit is set, the system time seconds field is calculated as the complement of the seconds part of the update value. For example, if 3.000000001 seconds need to be subtracted from the system time, this field is calculated as 2^32 - 3 = 4294967296 - 3 = 0x100000000 - 3 = 0xFFFFFFFD. Previously, 0x100000000 was mistakenly written as 100000000. This is further simplified from sec = (0x100000000ULL - sec); to sec = -sec; Fixes: ba1ffd74 ("stmmac: fix PTP support for GMAC4") Signed-off-by: Roland Hii <roland.king.guan.hii@intel.com> Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: Voon Weifeng <weifeng.voon@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
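A tiny standalone demonstration of the arithmetic described above (not the stmmac driver code): for a 32-bit seconds register, subtracting sec from 0x100000000 and negating sec as an unsigned 32-bit value yield the same bit pattern.

```c
#include <stdint.h>
#include <stdio.h>

/* Show that 0x100000000 - sec and the unsigned negation -sec agree
 * for a 32-bit register value. */
int main(void)
{
	uint32_t sec = 3;

	uint32_t by_subtraction = (uint32_t)(0x100000000ULL - sec);
	uint32_t by_negation = (uint32_t)-sec;

	/* both print 0xfffffffd */
	printf("0x%08x 0x%08x\n", by_subtraction, by_negation);
	return 0;
}
```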
-
Submitted by YueHaibing
[ Upstream commit 30d8177e8ac776d89d387fad547af6a0f599210e ] We build a vlan on top of a bonding interface whose vlan offload is off, with bond mode 802.3ad (LACP) and xmit_hash_policy BOND_XMIT_POLICY_ENCAP34. Because vlan tx offload is off, the vlan tci is cleared and the skb pushes the vlan header in validate_xmit_vlan() when sending from vlan devices. Then in bond_xmit_hash, __skb_flow_dissect() fails to get information from the protocol headers encapsulated within the vlan, because 'nhoff' points to the IP header, so bond hashing is based on layer 2 info, which fails to distribute packets across slaves. This patch always enables the bonding master's vlan tx offload and passes vlan packets to the slave devices with the vlan tci, letting them handle the vlan implementation. Fixes: 278339a4 ("bonding: propogate vlan_features to bonding master") Suggested-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Wang Xin
commit 9a9e295e7c5c0409c020088b0ae017e6c2b7df6e upstream. Within at24_loop_until_timeout the timestamp used for timeout checking is recorded after the I2C transfer and usleep_range(). Under high CPU load either the execution time for the I2C transfer or usleep_range() could actually be larger than the timeout value. In the worst case the I2C transfer is only tried once, because the loop will exit due to the timeout although the EEPROM is now ready. To fix this issue, the timestamp is recorded at the beginning of each iteration, that is, before the I2C transfer and the sleep. The timeout is then actually checked against the timestamp of the previous iteration. This makes sure that even if the timeout is reached, there is still one more chance to try the I2C transfer in case the EEPROM is ready. Example: if you have a system which combines high CPU load with repeated EEPROM writes, you will run into the following scenario.
- The system makes a successful regmap_bulk_write() to the EEPROM.
- The system wants to perform another write to the EEPROM, but the EEPROM is still busy with the last write.
- Because of high CPU load, usleep_range() sleeps more than 25 ms (at24_write_timeout).
- During the over-long sleep the EEPROM finishes the previous write operation and is ready again.
- at24_loop_until_timeout() detects the timeout and won't try to write.
Signed-off-by: Wang Xin <xin.wang7@cn.bosch.com> Signed-off-by: Mark Jonas <mark.jonas@de.bosch.com> Signed-off-by: Bartosz Golaszewski <brgl@bgdev.pl> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
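A hypothetical userspace sketch of the retry pattern described above (names and timings are illustrative, not the at24 driver): the timestamp is taken at the top of each iteration and the timeout is checked against the previous iteration's timestamp, so one more attempt is made even when a sleep overshoots the budget.

```c
#include <stdbool.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static long long now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
}

static bool try_transfer(int attempt)
{
	return attempt >= 3;	/* pretend the EEPROM is ready on the 3rd try */
}

int main(void)
{
	const long long deadline = now_ms() + 25;	/* 25 ms write timeout */
	long long prev_stamp = 0;
	int attempt = 0;

	/* the check uses the timestamp recorded before the previous attempt */
	while (!prev_stamp || prev_stamp < deadline) {
		long long stamp = now_ms();	/* taken before transfer and sleep */

		if (try_transfer(++attempt)) {
			printf("succeeded on attempt %d\n", attempt);
			return 0;
		}
		usleep(10 * 1000);	/* may overshoot badly under heavy load */
		prev_stamp = stamp;	/* next check uses this pre-sleep value */
	}
	printf("timed out after %d attempts\n", attempt);
	return 1;
}
```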
-
Submitted by Paul Burton
commit 6d4d367d0e9ffab4d64a3436256a6a052dc1195d upstream. The MIPS GIC contains a block of registers used to map local interrupts to a particular CPU interrupt pin. Since these registers are found at a consecutive range of addresses, we access them using an index, via the (read|write)_gic_v[lo]_map accessor functions. We currently use values from enum mips_gic_local_interrupt as those indices. Unfortunately, whilst enum mips_gic_local_interrupt provides the correct offsets for bits in the pending & mask registers, the ordering of the map registers is subtly different... Compared with the ordering of pending & mask bits, the map registers move the FDC from the end of the list to index 3, after the timer interrupt. As a result the performance counter & software interrupts are therefore at indices 4-6 rather than indices 3-5. Notably this causes problems with performance counter interrupts being incorrectly mapped on some systems, and presumably will also cause problems for FDC interrupts. Introduce a function to map from enum mips_gic_local_interrupt to the index of the corresponding map register, and use it to ensure we access the map registers for the correct interrupts. Signed-off-by: Paul Burton <paul.burton@mips.com> Fixes: a0dc5cb5 ("irqchip: mips-gic: Simplify gic_local_irq_domain_map()") Fixes: da61fcf9 ("irqchip: mips-gic: Use irq_cpu_online to (un)mask all-VP(E) IRQs") Reported-and-tested-by: Archer Yan <ayan@wavecomp.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Jason Cooper <jason@lakedaemon.net> Cc: stable@vger.kernel.org # v4.14+ Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
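Based only on the ordering described above, here is a hypothetical sketch of such a translation function; the enum and names are illustrative, not a copy of asm/mips-gic.h. FDC maps to index 3 right after the timer, and the entries that follow it in the pending/mask ordering shift up by one.

```c
#include <stdio.h>

/* Illustrative pending/mask bit order (an assumption of this sketch). */
enum local_irq {
	LOCAL_INT_WD,
	LOCAL_INT_COMPARE,
	LOCAL_INT_TIMER,
	LOCAL_INT_PERFCTR,
	LOCAL_INT_SWINT0,
	LOCAL_INT_SWINT1,
	LOCAL_INT_FDC,
};

/* translate a pending/mask index into the corresponding map-register index */
static int map_reg_index(int intr)
{
	if (intr == LOCAL_INT_FDC)
		return 3;		/* FDC sits right after the timer */
	if (intr > LOCAL_INT_TIMER)
		return intr + 1;	/* perf counter and SW interrupts shift to 4-6 */
	return intr;
}

int main(void)
{
	for (int i = LOCAL_INT_WD; i <= LOCAL_INT_FDC; i++)
		printf("irq %d -> map register %d\n", i, map_reg_index(i));
	return 0;
}
```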
-
Submitted by Jan Kara
commit 240b4cc8fd5db138b675297d4226ec46594d9b3b upstream. Once we unlock adapter->hw_lock in pvscsi_queue_lck(), nothing prevents the just-queued scsi_cmnd from completing and freeing the request. Thus the cmd->cmnd[0] dereference can access an already freed request, leading to kernel crashes or other issues (which one of our customers observed). Store cmd->cmnd[0] in a local variable before unlocking adapter->hw_lock to fix the issue. CC: <stable@vger.kernel.org> Signed-off-by: Jan Kara <jack@suse.cz> Reviewed-by: Ewan D. Milne <emilne@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
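The fix follows a general pattern; below is a hypothetical userspace sketch (not the vmw_pvscsi code, names are made up): anything still needed from an object that another context may free once the lock is dropped is copied into a local variable while the lock is held.

```c
#include <pthread.h>
#include <stdio.h>

struct cmd {
	unsigned char opcode;
};

static pthread_mutex_t hw_lock = PTHREAD_MUTEX_INITIALIZER;

static void queue_cmd(struct cmd *cmd)
{
	unsigned char op;

	pthread_mutex_lock(&hw_lock);
	/* copy while the command cannot yet complete and be freed */
	op = cmd->opcode;
	/* ... hand the command to the (pretend) hardware here ... */
	pthread_mutex_unlock(&hw_lock);

	/* after unlock, 'cmd' may already be gone; use the saved copy */
	printf("queued opcode 0x%02x\n", op);
}

int main(void)
{
	struct cmd c = { .opcode = 0x2a };	/* WRITE(10), for example */

	queue_cmd(&c);
	return 0;
}
```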
-
Submitted by zhangyi (F)
commit 211ad4b733037f66f9be0a79eade3da7ab11cbb8 upstream. Currently, although we submit super bios in order (and super.nr_entries is incremented by each logged entry), submit_bio() is async, so each super sector may not be written to the log device in order, and then the final nr_entries may be smaller than it should be. This problem can be reproduced by xfstests generic/455 with ext4:
QA output created by 455
-Silence is golden
+mark 'end' does not exist
Fix this by serializing the submission of super sectors to make sure each is written to the log disk in order. Fixes: 0e9cebe7 ("dm: add log writes target") Cc: stable@vger.kernel.org Signed-off-by: zhangyi (F) <yi.zhang@huawei.com> Suggested-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Dinh Nguyen
commit 74684cce5ebd567b01e9bc0e9a1945c70a32f32f upstream. The fixed dividers for the emac clocks should be 2, not 4. Cc: stable@vger.kernel.org Signed-off-by: Dinh Nguyen <dinguyen@kernel.org> Signed-off-by: Stephen Boyd <sboyd@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Jack Pham
commit bd6742249b9ca918565e4e3abaa06665e587f4b5 upstream. OUT endpoint requests may sometimes have this flag set when preparing to be submitted to HW, indicating that there is an additional TRB chained to the request for alignment purposes. If that request is removed before the controller can execute the transfer (e.g. ep_dequeue/ep_disable), the request will not go through the dwc3_gadget_ep_cleanup_completed_request() handler and will not have its needs_extra_trb flag cleared when dwc3_gadget_giveback() is called. This same request could later be requeued for a new transfer that does not require an extra TRB, and if it is successfully completed, the cleanup and TRB reclamation will incorrectly process the additional TRB which belongs to the next request, and incorrectly advance the TRB dequeue pointer, thereby messing up the calculation of the next request's actual/remaining count when it completes. The right thing to do here is to ensure that the flag is cleared before the request is given back to the function driver. A good place to do that is in dwc3_gadget_del_and_unmap_request(). Fixes: c6267a51 ("usb: dwc3: gadget: align transfers to wMaxPacketSize") Cc: Fei Yang <fei.yang@intel.com> Cc: Sam Protsenko <semen.protsenko@linaro.org> Cc: Felipe Balbi <balbi@kernel.org> Cc: linux-usb@vger.kernel.org Cc: stable@vger.kernel.org # 4.19.y Signed-off-by: Jack Pham <jackp@codeaurora.org> Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> (cherry picked from commit bd6742249b9ca918565e4e3abaa06665e587f4b5) Signed-off-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Felipe Balbi
commit fec9095bdef4e7c988adb603d0d4f92ee735d4a1 upstream. Now that we have a list of cancelled requests, we can skip over TRBs when the END_TRANSFER command completes. Cc: Fei Yang <fei.yang@intel.com> Cc: Sam Protsenko <semen.protsenko@linaro.org> Cc: Felipe Balbi <balbi@kernel.org> Cc: linux-usb@vger.kernel.org Cc: stable@vger.kernel.org # 4.19.y Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> (cherry picked from commit fec9095bdef4e7c988adb603d0d4f92ee735d4a1) Signed-off-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Felipe Balbi
commit d4f1afe5e896c18ae01099a85dab5e1a198bd2a8 upstream. Whenever we have a request in flight, we can move it to the cancelled list and later simply iterate over that list and skip over any TRBs we find. Cc: Fei Yang <fei.yang@intel.com> Cc: Sam Protsenko <semen.protsenko@linaro.org> Cc: Felipe Balbi <balbi@kernel.org> Cc: linux-usb@vger.kernel.org Cc: stable@vger.kernel.org # 4.19.y Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> (cherry picked from commit d4f1afe5e896c18ae01099a85dab5e1a198bd2a8) Signed-off-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Felipe Balbi
commit d5443bbf5fc8f8389cce146b1fc2987cdd229d12 upstream. This list will host cancelled requests that still have TRBs being processed. Cc: Fei Yang <fei.yang@intel.com> Cc: Sam Protsenko <semen.protsenko@linaro.org> Cc: Felipe Balbi <balbi@kernel.org> Cc: linux-usb@vger.kernel.org Cc: stable@vger.kernel.org # 4.19.y Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> (cherry picked from commit d5443bbf5fc8f8389cce146b1fc2987cdd229d12) Signed-off-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Felipe Balbi
commit 7746a8dfb3f9c91b3a0b63a1d5c2664410e6498d upstream. Extract the logic for skipping over TRBs into its own function. This makes the code slightly more readable and makes it easier to move this call to its final resting place in a following patch. Cc: Fei Yang <fei.yang@intel.com> Cc: Sam Protsenko <semen.protsenko@linaro.org> Cc: Felipe Balbi <balbi@kernel.org> Cc: linux-usb@vger.kernel.org Cc: stable@vger.kernel.org # 4.19.y Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> (cherry picked from commit 7746a8dfb3f9c91b3a0b63a1d5c2664410e6498d) Signed-off-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Felipe Balbi
commit c3acd59014148470dc58519870fbc779785b4bf7 upstream. Now that we track how many TRBs a request uses, it's easier to skip over them in case of a call to usb_ep_dequeue(). Let's do so and simplify the code a bit. Cc: Fei Yang <fei.yang@intel.com> Cc: Sam Protsenko <semen.protsenko@linaro.org> Cc: Felipe Balbi <balbi@kernel.org> Cc: linux-usb@vger.kernel.org Cc: stable@vger.kernel.org # 4.19.y Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> (cherry picked from commit c3acd59014148470dc58519870fbc779785b4bf7) Signed-off-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Felipe Balbi
commit 09fe1f8d7e2f461275b1cdd832f2cfa5e9be346d upstream. This will help us remove the wait_event() from our ->dequeue(). Cc: Fei Yang <fei.yang@intel.com> Cc: Sam Protsenko <semen.protsenko@linaro.org> Cc: Felipe Balbi <balbi@kernel.org> Cc: linux-usb@vger.kernel.org Cc: stable@vger.kernel.org # 4.19.y Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> (cherry picked from commit 09fe1f8d7e2f461275b1cdd832f2cfa5e9be346d) Signed-off-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Felipe Balbi
commit 1a22ec643580626f439c8583edafdcc73798f2fb upstream. Both flags are used for the same purpose in dwc3: appending an extra TRB at the end to deal with controller requirements. By combining both flags into one, we make it clear that the situation is the same and that they should be treated equally. Cc: Fei Yang <fei.yang@intel.com> Cc: Sam Protsenko <semen.protsenko@linaro.org> Cc: Felipe Balbi <balbi@kernel.org> Cc: linux-usb@vger.kernel.org Cc: stable@vger.kernel.org # 4.19.y Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> (cherry picked from commit 1a22ec643580626f439c8583edafdcc73798f2fb) Signed-off-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by John Stultz
This reverts commit 25ad17d6, as we will be cherry-picking a number of changes from upstream that allow us to later cherry-pick the same fix from upstream rather than using this modified backported version. Cc: Fei Yang <fei.yang@intel.com> Cc: Sam Protsenko <semen.protsenko@linaro.org> Cc: Felipe Balbi <balbi@kernel.org> Cc: linux-usb@vger.kernel.org Cc: stable@vger.kernel.org # 4.19.y Signed-off-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Bjørn Mork
[ Upstream commit 904d88d743b0c94092c5117955eab695df8109e8 ] syzbot reported:
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0xca/0x13e lib/dump_stack.c:113
print_address_description+0x67/0x231 mm/kasan/report.c:188
__kasan_report.cold+0x1a/0x32 mm/kasan/report.c:317
kasan_report+0xe/0x20 mm/kasan/common.c:614
qmi_wwan_probe+0x342/0x360 drivers/net/usb/qmi_wwan.c:1417
usb_probe_interface+0x305/0x7a0 drivers/usb/core/driver.c:361
really_probe+0x281/0x660 drivers/base/dd.c:509
driver_probe_device+0x104/0x210 drivers/base/dd.c:670
__device_attach_driver+0x1c2/0x220 drivers/base/dd.c:777
bus_for_each_drv+0x15c/0x1e0 drivers/base/bus.c:454
This was caused by too many confusing indirections and casts. id->driver_info is a pointer stored in a long. We want the pointer here, not the address of it. Thanks-to: Hillf Danton <hdanton@sina.com> Reported-by: syzbot+b68605d7fadd21510de1@syzkaller.appspotmail.com Cc: Kristian Evensen <kristian.evensen@gmail.com> Fixes: e4bf63482c30 ("qmi_wwan: Add quirk for Quectel dynamic config") Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
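A simplified sketch of the mix-up (types are reduced for illustration and are not the actual qmi_wwan structures): the driver_info field holds a pointer value stored in an unsigned long, so the code must cast the value itself rather than take the address of the field.

```c
#include <stdio.h>

struct quirk {
	int quirks;
};

struct usb_device_id {
	unsigned long driver_info;	/* holds a pointer, stored as a long */
};

static const struct quirk quectel_quirk = { .quirks = 0x1 };

int main(void)
{
	struct usb_device_id id = {
		.driver_info = (unsigned long)&quectel_quirk,
	};

	/* wrong: &id.driver_info is the address of the field, not the data */
	const struct quirk *bogus = (const struct quirk *)&id.driver_info;

	/* right: the stored value already is the pointer we want */
	const struct quirk *info = (const struct quirk *)id.driver_info;

	printf("right: %d (bogus pointer, never dereferenced: %p)\n",
	       info->quirks, (const void *)bogus);
	return 0;
}
```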
-
Submitted by Mike Marciniszyn
commit da9de5f8527f4b9efc82f967d29a583318c034c7 upstream. The call to sdma_progress() is made outside the wait lock. In this case, there is a race condition where sdma_progress() can return false and the sdma_engine can idle. If that happens, there will be no more sdma interrupts to cause the wakeup and the user_sdma xmit will hang. Fix by moving the lock to enclose the sdma_progress() call. Also, delete busycount. The need for this was removed by: commit bcad2913 ("IB/hfi1: Serve the most starved iowait entry first") Ported to linux-4.19.y. Cc: <stable@vger.kernel.org> Fixes: 77241056 ("IB/hfi1: add driver files") Reviewed-by: Gary Leshner <Gary.S.Leshner@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 25 June 2019, 5 commits
-
-
Submitted by Gao Xiang
commit 5efe5137f05bbb4688890620934538c005e7d1d6 upstream. There are some backward-incompatible features pending for months, mainly due to on-disk format expansions. However, we should ensure that such images cannot be mounted with old kernels; otherwise, it will cause unexpected behaviors. Fixes: ba2b77a8 ("staging: erofs: add super block operations") Cc: <stable@vger.kernel.org> # 4.19+ Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Gao Xiang <gaoxiang25@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
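As a generic illustration of the kind of guard being added (constants and names below are made up, not the erofs on-disk format), a mount-time check refuses any image that advertises incompatible feature bits the running kernel does not understand:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FEATURE_INCOMPAT_SUPPORTED	0x00000001u	/* what this build understands */

struct superblock {
	uint32_t feature_incompat;	/* incompat feature bits from disk */
};

static bool check_incompat_features(const struct superblock *sb)
{
	uint32_t unsupported = sb->feature_incompat & ~FEATURE_INCOMPAT_SUPPORTED;

	if (unsupported) {
		fprintf(stderr, "unsupported incompat features %#x, refusing mount\n",
			unsupported);
		return false;
	}
	return true;
}

int main(void)
{
	struct superblock old_image = { .feature_incompat = 0x0 };
	struct superblock new_image = { .feature_incompat = 0x6 };

	printf("old image mountable: %d\n", check_incompat_features(&old_image));
	printf("new image mountable: %d\n", check_incompat_features(&new_image));
	return 0;
}
```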
-
Submitted by Thomas Hellstrom
commit cc0ba0d8624f210995924bb57a8b181ce8976606 upstream. The HB port may not be available for various reasons: either it has been disabled by a config option or by the hypervisor for other reasons. In that case, make sure we have a backup plan and use the backdoor port instead, with a performance penalty. Cc: stable@vger.kernel.org Fixes: 89da76fd ("drm/vmwgfx: Add VMWare host messaging capability") Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Deepak Rawat <drawat@vmware.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Joakim Zhang
commit 247e5356a709eb49a0d95ff2a7f07dac05c8252c upstream. Currently we can hit a timeout issue when setting a small bitrate such as 10000 on an i.MX6UL EVK board (ipg clock = 66MHZ, per clock = 30MHZ), as follows:
| root@imx6ul7d:~# ip link set can0 up type can bitrate 10000
| A link change request failed with some changes committed already. Interface can0 may have been left with an inconsistent configuration, please check.
| RTNETLINK answers: Connection timed out
It is caused by flexcan_chip_unfreeze() timing out. Originally the code used usleep_range(10, 20) for the unfreeze operation, but the patch (8badd65e can: flexcan: avoid calling usleep_range from interrupt context) changed it into udelay(10), which is only half the previous delay; there are also some other delay changes. Doubling FLEXCAN_TIMEOUT_US to 100 fixes the issue. Meanwhile, Rasmus Villemoes reported that even with a timeout of 100, flexcan_probe() fails on the MPC8309, which requires a value of at least 140 to work reliably. 250 works for everyone. Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com> Reviewed-by: Dong Aisheng <aisheng.dong@nxp.com> Cc: linux-stable <stable@vger.kernel.org> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Anssi Hannula
commit 904044dd8fff43e289c11a2f90fa532e946a1d8b upstream. Commit 9e5f1b27 ("can: xilinx_can: add support for Xilinx CAN FD core") added a new can_bittiming_const structure for CAN FD cores that support larger values for tseg1, tseg2, and sjw than previous Xilinx CAN cores, but the commit did not actually take it into use. Fix that. Tested with the CAN FD core on a ZynqMP board. Fixes: 9e5f1b27 ("can: xilinx_can: add support for Xilinx CAN FD core") Reported-by: Shubhrajyoti Datta <shubhrajyoti.datta@gmail.com> Signed-off-by: Anssi Hannula <anssi.hannula@bitwise.fi> Cc: Michal Simek <michal.simek@xilinx.com> Reviewed-by: Shubhrajyoti Datta <shubhrajyoti.datta@gmail.com> Cc: linux-stable <stable@vger.kernel.org> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Jaesoo Lee
[ Upstream commit c8e8c77b3bdbade6e26e8e76595f141ede12b692 ] The Number of Namespaces (nn) field in the identify controller data structure is defined as u32, and the maximum allowed value in the NVMe specification is 0xFFFFFFFEUL. This change fixes the possible overflow of the DIV_ROUND_UP() operation used in nvme_scan_ns_list() by casting nn to u64. Signed-off-by: Jaesoo Lee <jalee@purestorage.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Sasha Levin <sashal@kernel.org>
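A small standalone illustration of the overflow (DIV_ROUND_UP is written out here, and 1024 is used as an illustrative per-page namespace-ID count): adding (divisor - 1) to a u32 close to UINT32_MAX wraps around unless the value is widened to 64 bits before the addition.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* round-up division: the (n + d - 1) step is where the wrap can happen */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	uint32_t nn = 0xFFFFFFFEu;	/* maximum Number of Namespaces */

	uint32_t wrapped = DIV_ROUND_UP(nn, 1024u);		/* wraps, yields 0 */
	uint64_t widened = DIV_ROUND_UP((uint64_t)nn, 1024u);	/* 4194304 */

	printf("u32: %" PRIu32 ", u64: %" PRIu64 "\n", wrapped, widened);
	return 0;
}
```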
-