- 24 March 2019, 40 commits
-
Submitted by Martin Schwidefsky
commit 86a86804e4f18fc3880541b3d5a07f4df0fe29cb upstream. The fix to make WARN work in the early boot code created a problem on older machines without EDAT-1. The setup_lowcore_dat_on function uses the pointer from lowcore_ptr[0] to set the DAT bit in the new PSWs. That does not work if the kernel page table is set up with 4K pages as the prefix address maps to absolute zero. To make this work the PSWs need to be changed via address 0 in form of the S390_lowcore definition. Reported-by: Guenter Roeck <linux@roeck-us.net> Tested-by: Cornelia Huck <cohuck@redhat.com> Fixes: 94f85ed3e2f8 ("s390/setup: fix early warning messages") Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Coly Li
commit dc7292a5bcb4c878b076fca2ac3fc22f81b8f8df upstream. In 'commit 752f66a75aba ("bcache: use REQ_PRIO to indicate bio for metadata")' REQ_META is replaced by REQ_PRIO to indicate metadata bio. This assumption is not always correct, e.g. XFS uses REQ_META to mark metadata bio other than REQ_PRIO. This is why Nix noticed that bcache does not cache metadata for XFS after the above commit. Thanks to Dave Chinner, he explains the difference between REQ_META and REQ_PRIO from view of file system developer. Here I quote part of his explanation from mailing list, REQ_META is used for metadata. REQ_PRIO is used to communicate to the lower layers that the submitter considers this IO to be more important that non REQ_PRIO IO and so dispatch should be expedited. IOWs, if the filesystem considers metadata IO to be more important that user data IO, then it will use REQ_PRIO | REQ_META rather than just REQ_META. Then it seems bios with REQ_META or REQ_PRIO should both be cached for performance optimation, because they are all probably low I/O latency demand by upper layer (e.g. file system). So in this patch, when we want to decide whether to bypass the cache, REQ_META and REQ_PRIO are both checked. Then both metadata and high priority I/O requests will be handled properly. Reported-by: NNix <nix@esperi.org.uk> Signed-off-by: NColy Li <colyli@suse.de> Reviewed-by: NAndre Noll <maan@tuebingen.mpg.de> Tested-by: NNix <nix@esperi.org.uk> Cc: stable@vger.kernel.org Cc: Dave Chinner <david@fromorbit.com> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: NJens Axboe <axboe@kernel.dk> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
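A minimal standalone sketch of the bypass decision described above; the flag bit values and the function name are illustrative placeholders, not the block layer's definitions from blk_types.h:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative bit values; the real REQ_META and REQ_PRIO are defined
 * by the block layer, not here. */
#define REQ_META (1u << 0)
#define REQ_PRIO (1u << 1)

/* After the fix, a bio counts as latency-sensitive metadata when either
 * flag is set, so XFS-style REQ_META submitters and REQ_PRIO submitters
 * are both cached instead of bypassed. */
static bool metadata_like(uint32_t bi_opf)
{
	return (bi_opf & (REQ_META | REQ_PRIO)) != 0;
}

int main(void)
{
	printf("REQ_META only: %d\n", metadata_like(REQ_META));
	printf("REQ_PRIO only: %d\n", metadata_like(REQ_PRIO));
	printf("plain data:    %d\n", metadata_like(0));
	return 0;
}
```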
-
Submitted by Sean Christopherson
commit 34333cc6c2cb021662fd32e24e618d1b86de95bf upstream. Regarding segments with a limit==0xffffffff, the SDM officially states: When the effective limit is FFFFFFFFH (4 GBytes), these accesses may or may not cause the indicated exceptions. Behavior is implementation-specific and may vary from one execution to another. In practice, all CPUs that support VMX ignore limit checks for "flat segments", i.e. an expand-up data or code segment with base=0 and limit=0xffffffff. This is subtly different than wrapping the effective address calculation based on the address size, as the flat segment behavior also applies to accesses that would wrap the 4g boundary, e.g. a 4-byte access starting at 0xffffffff will access linear addresses 0xffffffff, 0x0, 0x1 and 0x2. Fixes: f9eb4af6 ("KVM: nVMX: VMX instructions: add checks for #GP/#SS exceptions") Cc: stable@vger.kernel.org Signed-off-by: NSean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: NPaolo Bonzini <pbonzini@redhat.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
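A sketch of the "flat segment" test the commit describes; the struct and field names are placeholders rather than KVM's:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder segment descriptor fields, not KVM's structures. */
struct seg { uint64_t base; uint32_t limit; bool expand_down; };

/* Expand-up segment with base 0 and limit 0xffffffff: hardware skips
 * the limit check for such segments, so the emulation should as well. */
static bool is_flat_segment(const struct seg *s)
{
	return s->base == 0 && s->limit == 0xffffffffu && !s->expand_down;
}

int main(void)
{
	struct seg flat  = { 0, 0xffffffffu, false };
	struct seg small = { 0, 0xffffu, false };
	printf("flat segment, skip limit check:  %d\n", is_flat_segment(&flat));
	printf("small segment, skip limit check: %d\n", is_flat_segment(&small));
	return 0;
}
```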
-
Submitted by Sean Christopherson
commit 8570f9e881e3fde98801bb3a47eef84dd934d405 upstream. The address size of an instruction affects the effective address, not the virtual/linear address. The final address may still be truncated, e.g. to 32-bits outside of long mode, but that happens irrespective of the address size, e.g. a 32-bit address size can yield a 64-bit virtual address when using FS/GS with a non-zero base. Fixes: 064aea77 ("KVM: nVMX: Decoding memory operands of VMX instructions") Cc: stable@vger.kernel.org Signed-off-by: NSean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: NPaolo Bonzini <pbonzini@redhat.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Sean Christopherson
commit 946c522b603f281195af1df91837a1d4d1eb3bc9 upstream. The VMCS.EXIT_QUALIFCATION field reports the displacements of memory operands for various instructions, including VMX instructions, as a naturally sized unsigned value, but masks the value by the addr size, e.g. given a ModRM encoded as -0x28(%ebp), the -0x28 displacement is reported as 0xffffffd8 for a 32-bit address size. Despite some weird wording regarding sign extension, the SDM explicitly states that bits beyond the instructions address size are undefined: In all cases, bits of this field beyond the instruction’s address size are undefined. Failure to sign extend the displacement results in KVM incorrectly treating a negative displacement as a large positive displacement when the address size of the VMX instruction is smaller than KVM's native size, e.g. a 32-bit address size on a 64-bit KVM. The very original decoding, added by commit 064aea77 ("KVM: nVMX: Decoding memory operands of VMX instructions"), sort of modeled sign extension by truncating the final virtual/linear address for a 32-bit address size. I.e. it messed up the effective address but made it work by adjusting the final address. When segmentation checks were added, the truncation logic was kept as-is and no sign extension logic was introduced. In other words, it kept calculating the wrong effective address while mostly generating the correct virtual/linear address. As the effective address is what's used in the segment limit checks, this results in KVM incorreclty injecting #GP/#SS faults due to non-existent segment violations when a nested VMM uses negative displacements with an address size smaller than KVM's native address size. Using the -0x28(%ebp) example, an EBP value of 0x1000 will result in KVM using 0x100000fd8 as the effective address when checking for a segment limit violation. This causes a 100% failure rate when running a 32-bit KVM build as L1 on top of a 64-bit KVM L0. Fixes: f9eb4af6 ("KVM: nVMX: VMX instructions: add checks for #GP/#SS exceptions") Cc: stable@vger.kernel.org Signed-off-by: NSean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: NPaolo Bonzini <pbonzini@redhat.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
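A standalone illustration of the sign-extension step using the commit's -0x28(%ebp) example; the helper name and types are assumptions, not KVM's code:

```c
#include <stdint.h>
#include <stdio.h>

/* The exit qualification reports the displacement masked to the
 * instruction's address size; interpret it as a signed value of that
 * size before it is added into the effective address. */
static int64_t sign_extend_disp(uint64_t raw, int addr_size_bytes)
{
	switch (addr_size_bytes) {
	case 2: return (int16_t)raw;
	case 4: return (int32_t)raw;
	default: return (int64_t)raw;	/* 8-byte address size: already full width */
	}
}

int main(void)
{
	uint64_t raw = 0xffffffd8ULL;	/* -0x28 as seen with a 32-bit address size */
	uint64_t ebp = 0x1000;

	/* Without sign extension: 0x1000 + 0xffffffd8 = 0x100000fd8, the bogus
	 * effective address that trips the segment limit check. */
	printf("wrong: 0x%llx\n", (unsigned long long)(ebp + raw));
	/* With sign extension the effective address is 0xfd8, as intended. */
	printf("right: 0x%llx\n",
	       (unsigned long long)(ebp + sign_extend_disp(raw, 4)));
	return 0;
}
```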
-
Submitted by Sean Christopherson
commit ddfd1730fd829743e41213e32ccc8b4aa6dc8325 upstream. When installing new memslots, KVM sets bit 0 of the generation number to indicate that an update is in-progress. Until the update is complete, there are no guarantees as to whether a vCPU will see the old or the new memslots. Explicity prevent caching MMIO accesses so as to avoid using an access cached from the old memslots after the new memslots have been installed. Note that it is unclear whether or not disabling caching during the update window is strictly necessary as there is no definitive documentation as to what ordering guarantees KVM provides with respect to updating memslots. That being said, the MMIO spte code does not allow reusing sptes created while an update is in-progress, and the associated documentation explicitly states: We do not want to use an MMIO sptes created with an odd generation number, ... If KVM is unlucky and creates an MMIO spte while the low bit is 1, the next access to the spte will always be a cache miss. At the very least, disabling the per-vCPU MMIO cache during updates will make its behavior consistent with the MMIO spte behavior and documentation. Fixes: 56f17dd3 ("kvm: x86: fix stale mmio cache bug") Cc: <stable@vger.kernel.org> Signed-off-by: NSean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: NPaolo Bonzini <pbonzini@redhat.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
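A sketch of the rule being enforced; the structure, field names and generation values are illustrative stand-ins, while the convention that bit 0 marks an in-progress update comes from the commit text:

```c
#include <stdint.h>
#include <stdio.h>

struct mmio_cache { uint64_t gen; uint64_t gpa; int valid; };

/* Refuse to populate the per-vCPU MMIO cache while a memslot update is
 * in progress (low bit of the generation set), so a translation cached
 * against the old memslots can never be reused after the update. */
static void cache_mmio_info(struct mmio_cache *c, uint64_t gpa, uint64_t gen)
{
	if (gen & 1)
		return;		/* update in progress: do not cache */
	c->gen = gen;
	c->gpa = gpa;
	c->valid = 1;
}

int main(void)
{
	struct mmio_cache c = { 0, 0, 0 };
	cache_mmio_info(&c, 0xfee00000u, 0x21);	/* odd generation: skipped */
	printf("cached during update? %d\n", c.valid);
	cache_mmio_info(&c, 0xfee00000u, 0x22);	/* even generation: cached */
	printf("cached after update?  %d\n", c.valid);
	return 0;
}
```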
-
Submitted by Sean Christopherson
commit e1359e2beb8b0a1188abc997273acbaedc8ee791 upstream. The check to detect a wrap of the MMIO generation explicitly looks for a generation number of zero. Now that unique memslots generation numbers are assigned to each address space, only address space 0 will get a generation number of exactly zero when wrapping. E.g. when address space 1 goes from 0x7fffe to 0x80002, the MMIO generation number will wrap to 0x2. Adjust the MMIO generation to strip the address space modifier prior to checking for a wrap. Fixes: 4bd518f1 ("KVM: use separate generations for each address space") Cc: <stable@vger.kernel.org> Signed-off-by: NSean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: NPaolo Bonzini <pbonzini@redhat.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Sean Christopherson
commit 152482580a1b0accb60676063a1ac57b2d12daf6 upstream. kvm_arch_memslots_updated() is at this point in time an x86-specific hook for handling MMIO generation wraparound. x86 stashes 19 bits of the memslots generation number in its MMIO sptes in order to avoid full page fault walks for repeat faults on emulated MMIO addresses. Because only 19 bits are used, wrapping the MMIO generation number is possible, if unlikely. kvm_arch_memslots_updated() alerts x86 that the generation has changed so that it can invalidate all MMIO sptes in case the effective MMIO generation has wrapped so as to avoid using a stale spte, e.g. a (very) old spte that was created with generation==0. Given that the purpose of kvm_arch_memslots_updated() is to prevent consuming stale entries, it needs to be called before the new generation is propagated to memslots. Invalidating the MMIO sptes after updating memslots means that there is a window where a vCPU could dereference the new memslots generation, e.g. 0, and incorrectly reuse an old MMIO spte that was created with (pre-wrap) generation==0. Fixes: e59dbe09 ("KVM: Introduce kvm_arch_memslots_updated()") Cc: <stable@vger.kernel.org> Signed-off-by: NSean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: NPaolo Bonzini <pbonzini@redhat.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Harry Wentland
commit 59d3191f14dc18881fec1172c7096b7863622803 upstream. Powerplay functions called from dm_pp_* functions tend to do a mutex_lock which isn't safe to do inside a kernel_fpu_begin/end block as those will disable/enable preemption. Rearrange the dm_pp_get_clock_levels_by_type_with_voltage calls to make sure they happen outside of kernel_fpu_begin/end. Cc: stable@vger.kernel.org Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Harry Wentland <harry.wentland@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Evan Quan
commit f5742ec36422a39b57f0256e4847f61b3c432f8c upstream. Set the sampling period to 500ms to provide a smooth power reading output. Also, correct the register for power reading. Signed-off-by: Evan Quan <evan.quan@amd.com> Reviewed-by: Feifei Xu <Feifei.Xu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Gustavo A. R. Silva
commit cc5034a5d293dd620484d1d836aa16c6764a1c8c upstream. Add missing break statement in order to prevent the code from falling through to case CB_TARGET_MASK. This bug was found thanks to the ongoing efforts to enable -Wimplicit-fallthrough. Fixes: dd220a00 ("drm/radeon/kms: add support for streamout v7") Cc: stable@vger.kernel.org Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
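The bug class is easiest to see in a reduced form; the opcodes and weights below are placeholders, not the radeon command-stream parser's:

```c
#include <stdio.h>

static int parse_packet(int opcode)
{
	int checks = 0;

	switch (opcode) {
	case 1:			/* the streamout-style packet */
		checks += 1;
		break;		/* the missing statement: without it, control
				 * falls through into the next case's checks */
	case 2:			/* the CB_TARGET_MASK-style packet */
		checks += 10;
		break;
	default:
		break;
	}
	return checks;
}

int main(void)
{
	/* With the break in place this prints 1; with the fall-through bug it
	 * would print 11, i.e. the wrong validation path also runs. */
	printf("opcode 1 -> %d\n", parse_packet(1));
	return 0;
}
```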
-
Submitted by Noralf Trønnes
commit 78de14c23e031420aa5f61973583635eccd6cd2a upstream. If fbdev setup has failed, lastclose will give a NULL pointer deref: [ 77.794295] [drm:drm_lastclose] [ 77.794414] [drm:drm_lastclose] driver lastclose completed [ 77.794660] Unable to handle kernel NULL pointer dereference at virtual address 00000014 [ 77.809460] pgd = b376b71b [ 77.818275] [00000014] *pgd=175ba831, *pte=00000000, *ppte=00000000 [ 77.830813] Internal error: Oops: 17 [#1] ARM [ 77.840963] Modules linked in: mi0283qt mipi_dbi tinydrm raspberrypi_hwmon gpio_backlight backlight snd_bcm2835(C) bcm2835_rng rng_core [ 77.865203] CPU: 0 PID: 527 Comm: lt-modetest Tainted: G C 5.0.0-rc1+ #1 [ 77.879525] Hardware name: BCM2835 [ 77.889185] PC is at restore_fbdev_mode+0x20/0x164 [ 77.900261] LR is at drm_fb_helper_restore_fbdev_mode_unlocked+0x54/0x9c [ 78.002446] Process lt-modetest (pid: 527, stack limit = 0x7a3d5c14) [ 78.291030] Backtrace: [ 78.300815] [<c04f2d0c>] (restore_fbdev_mode) from [<c04f4708>] (drm_fb_helper_restore_fbdev_mode_unlocked+0x54/0x9c) [ 78.319095] r9:d8a8a288 r8:d891acf0 r7:d7697910 r6:00000000 r5:d891ac00 r4:d891ac00 [ 78.334432] [<c04f46b4>] (drm_fb_helper_restore_fbdev_mode_unlocked) from [<c04f47e8>] (drm_fbdev_client_restore+0x18/0x20) [ 78.353296] r8:d76978c0 r7:d7697910 r6:d7697950 r5:d7697800 r4:d891ac00 r3:c04f47d0 [ 78.368689] [<c04f47d0>] (drm_fbdev_client_restore) from [<c051b6b4>] (drm_client_dev_restore+0x7c/0xc0) [ 78.385982] [<c051b638>] (drm_client_dev_restore) from [<c04f8fd0>] (drm_lastclose+0xc4/0xd4) [ 78.402332] r8:d76978c0 r7:d7471080 r6:c0e0c088 r5:d8a85e00 r4:d7697800 [ 78.416688] [<c04f8f0c>] (drm_lastclose) from [<c04f9088>] (drm_release+0xa8/0x10c) [ 78.431929] r5:d8a85e00 r4:d7697800 [ 78.442989] [<c04f8fe0>] (drm_release) from [<c02640c4>] (__fput+0x104/0x1c8) [ 78.457740] r8:d5ccea10 r7:d96cfb10 r6:00000008 r5:d74c1b90 r4:d8a8a280 [ 78.472043] [<c0263fc0>] (__fput) from [<c02641ec>] (____fput+0x18/0x1c) [ 78.486363] r10:00000006 r9:d7722000 r8:c01011c4 r7:00000000 r6:c0ebac6c r5:d892a340 [ 78.501869] r4:d8a8a280 [ 78.512002] [<c02641d4>] (____fput) from [<c013ef1c>] (task_work_run+0x98/0xac) [ 78.527186] [<c013ee84>] (task_work_run) from [<c010cc54>] (do_work_pending+0x4f8/0x570) [ 78.543238] r7:d7722030 r6:00000004 r5:d7723fb0 r4:00000000 [ 78.556825] [<c010c75c>] (do_work_pending) from [<c0101034>] (slow_work_pending+0xc/0x20) [ 78.674256] ---[ end trace 70d3a60cf739be3b ]--- Fix by using drm_fb_helper_lastclose() which checks if fbdev is in use. Fixes: 9060d7f4 ("drm/fb-helper: Finish the generic fbdev emulation") Cc: stable@vger.kernel.org Signed-off-by: NNoralf Trønnes <noralf@tronnes.org> Reviewed-by: NGerd Hoffmann <kraxel@redhat.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190125150300.33268-1-noralf@tronnes.orgSigned-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
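The shape of the fix is the guard sketched below; the types and function here are placeholders, while the real helper that performs this check is drm_fb_helper_lastclose():

```c
#include <stddef.h>
#include <stdio.h>

struct fbdev_helper { int initialized; };	/* placeholder, not the DRM struct */

/* Restore the fbdev mode only if fbdev emulation was actually set up;
 * when setup failed, lastclose must not dereference the helper. */
static void lastclose_restore(struct fbdev_helper *helper)
{
	if (!helper || !helper->initialized) {
		printf("fbdev not set up, nothing to restore\n");
		return;
	}
	printf("restoring fbdev mode\n");
}

int main(void)
{
	struct fbdev_helper ok = { 1 };

	lastclose_restore(NULL);	/* the failed-setup case from the oops */
	lastclose_restore(&ok);
	return 0;
}
```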
-
Submitted by Steve Longerbeam
commit 4bc1ab41eee9d02ad2483bf8f51a7b72e3504eba upstream. Move upstream stream off to just after receiving the last EOF completion and disabling the CSI (and thus before disabling the IDMA channel) in csi_stop(). For symmetry also move upstream stream on to beginning of csi_start(). Doing this makes csi_s_stream() more symmetric with prp_s_stream() which will require the same change to fix a hard lockup. Signed-off-by: NSteve Longerbeam <slongerbeam@gmail.com> Cc: stable@vger.kernel.org # for 4.13 and up Signed-off-by: NHans Verkuil <hverkuil-cisco@xs4all.nl> Signed-off-by: NMauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Steve Longerbeam
commit 2e0fe66e0a136252f4d89dbbccdcb26deb867eb8 upstream. Disable the CSI immediately after receiving the last EOF before stream off (and thus before disabling the IDMA channel). Do this by moving the wait for EOF completion into a new function csi_idmac_wait_last_eof(). This fixes a complete system hard lockup on the SabreAuto when streaming from the ADV7180, by repeatedly sending a stream off immediately followed by stream on: while true; do v4l2-ctl -d4 --stream-mmap --stream-count=3; done Eventually this either causes the system lockup or EOF timeouts at all subsequent stream on, until a system reset. The lockup occurs when disabling the IDMA channel at stream off. Disabling the CSI before disabling the IDMA channel appears to be a reliable fix for the hard lockup. Fixes: 4a34ec8e ("[media] media: imx: Add CSI subdev driver") Reported-by: NGaël PORTAY <gael.portay@collabora.com> Signed-off-by: NSteve Longerbeam <slongerbeam@gmail.com> Cc: stable@vger.kernel.org # for 4.13 and up Signed-off-by: NHans Verkuil <hverkuil-cisco@xs4all.nl> Signed-off-by: NMauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Lucas A. M. Magalhães
commit adc589d2a20808fb99d46a78175cd023f2040338 upstream. Add a linear pipeline logic for the stream control. It's created by walking backwards on the entity graph. When the stream starts it will simply loop through the pipeline calling the respective process_frame function of each entity. Fixes: f2fe8906 ("vimc: Virtual Media Controller core, capture and sensor") Cc: stable@vger.kernel.org # for v4.20 Signed-off-by: NLucas A. M. Magalhães <lucmaga@gmail.com> Acked-by: NHelen Koike <helen.koike@collabora.com> Signed-off-by: NHans Verkuil <hverkuil-cisco@xs4all.nl> [hverkuil-cisco@xs4all.nl: fixed small space-after-tab issue in the patch] Signed-off-by: NMauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Sakari Ailus
commit 9dd0627d8d62a7ddb001a75f63942d92b5336561 upstream. The UVC video driver converts the timestamp from hardware specific unit to one known by the kernel at the time when the buffer is dequeued. This is fine in general, but the streamoff operation consists of the following steps (among other things): 1. uvc_video_clock_cleanup --- the hardware clock sample array is released and the pointer to the array is set to NULL, 2. buffers in active state are returned to the user and 3. buf_finish callback is called on buffers that are prepared. buf_finish includes calling uvc_video_clock_update that accesses the hardware clock sample array. The above is serialised by a queue specific mutex. Address the problem by skipping the clock conversion if the hardware clock sample array is already released. Fixes: 9c0863b1 ("[media] vb2: call buf_finish from __queue_cancel") Reported-by: NChiranjeevi Rapolu <chiranjeevi.rapolu@intel.com> Tested-by: NChiranjeevi Rapolu <chiranjeevi.rapolu@intel.com> Signed-off-by: NSakari Ailus <sakari.ailus@linux.intel.com> Cc: stable@vger.kernel.org Signed-off-by: NLaurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: NMauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by French, Nicholas A
commit 1b4fd9de6ec7f3722c2b3e08cc5ad171c11f93be upstream. A typo in code cleanup commit db9c1007 ("media: lgdt330x: do some cleanups at status logic") broke the FE_HAS_LOCK reporting for 3303 chips by inadvertently modifying the register mask. The broken lock status is critial as it prevents video capture cards from reporting signal strength, scanning for channels, and capturing video. Fix regression by reverting mask change. Cc: stable@vger.kernel.org # Kernel 4.17+ Fixes: db9c1007 ("media: lgdt330x: do some cleanups at status logic") Signed-off-by: NNick French <naf@ou.edu> Reviewed-by: NLaurent Pinchart <laurent.pinchart@ideasonboard.com> Tested-by: NAdam Stylinski <kungfujesus06@gmail.com> Signed-off-by: NMauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Steve Longerbeam
commit a19c22677377b87e4354f7306f46ad99bc982a9f upstream. Upstream must be stopped immediately after receiving the last EOF and before disabling the IDMA channel. This can be accomplished by moving upstream stream off to just after receiving the last EOF completion in prp_stop(). For symmetry also move upstream stream on to end of prp_start(). This fixes a complete system hard lockup on the SabreAuto when streaming from the ADV7180, by repeatedly sending a stream off immediately followed by stream on: while true; do v4l2-ctl -d1 --stream-mmap --stream-count=3; done Eventually this either causes the system lockup or EOF timeouts at all subsequent stream on, until a system reset. The lockup occurs when disabling the IDMA channel at stream off. Stopping the video data stream entering the IDMA channel before disabling the channel itself appears to be a reliable fix for the hard lockup. Fixes: f0d9c892 ("[media] media: imx: Add IC subdev drivers") Reported-by: NGaël PORTAY <gael.portay@collabora.com> Tested-by: NGaël PORTAY <gael.portay@collabora.com> Signed-off-by: NSteve Longerbeam <slongerbeam@gmail.com> Cc: stable@vger.kernel.org # for 4.13 and up Signed-off-by: NHans Verkuil <hverkuil-cisco@xs4all.nl> Signed-off-by: NMauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Zhang, Jun
commit 1d1f898df6586c5ea9aeaf349f13089c6fa37903 upstream. The rcu_gp_kthread_wake() function is invoked when it might be necessary to wake the RCU grace-period kthread. Because self-wakeups are normally a useless waste of CPU cycles, if rcu_gp_kthread_wake() is invoked from this kthread, it naturally refuses to do the wakeup. Unfortunately, natural though it might be, this heuristic fails when rcu_gp_kthread_wake() is invoked from an interrupt or softirq handler that interrupted the grace-period kthread just after the final check of the wait-event condition but just before the schedule() call. In this case, a wakeup is required, even though the call to rcu_gp_kthread_wake() is within the RCU grace-period kthread's context. Failing to provide this wakeup can result in grace periods failing to start, which in turn results in out-of-memory conditions. This race window is quite narrow, but it actually did happen during real testing. It would of course need to be fixed even if it was strictly theoretical in nature. This patch does not Cc stable because it does not apply cleanly to earlier kernel versions. Fixes: 48a7639c ("rcu: Make callers awaken grace-period kthread") Reported-by: N"He, Bo" <bo.he@intel.com> Co-developed-by: N"Zhang, Jun" <jun.zhang@intel.com> Co-developed-by: N"He, Bo" <bo.he@intel.com> Co-developed-by: N"xiao, jin" <jin.xiao@intel.com> Co-developed-by: NBai, Jie A <jie.a.bai@intel.com> Signed-off: "Zhang, Jun" <jun.zhang@intel.com> Signed-off: "He, Bo" <bo.he@intel.com> Signed-off: "xiao, jin" <jin.xiao@intel.com> Signed-off: Bai, Jie A <jie.a.bai@intel.com> Signed-off-by: N"Zhang, Jun" <jun.zhang@intel.com> [ paulmck: Switch from !in_softirq() to "!in_interrupt() && !in_serving_softirq() to avoid redundant wakeups and to also handle the interrupt-handler scenario as well as the softirq-handler scenario that actually occurred in testing. ] Signed-off-by: NPaul E. McKenney <paulmck@linux.ibm.com> Link: https://lkml.kernel.org/r/CD6925E8781EFD4D8E11882D20FC406D52A11F61@SHSMSX104.ccr.corp.intel.comSigned-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
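A sketch of the corrected self-wakeup test, with plain booleans standing in for current, in_interrupt() and in_serving_softirq():

```c
#include <stdbool.h>
#include <stdio.h>

/* Only skip the wakeup when we really are the grace-period kthread
 * running in task context; an interrupt or softirq handler that merely
 * interrupted that kthread must still issue the wakeup. */
static bool skip_self_wakeup(bool current_is_gp_kthread,
			     bool in_interrupt, bool in_serving_softirq)
{
	return current_is_gp_kthread && !in_interrupt && !in_serving_softirq;
}

int main(void)
{
	/* IRQ handler interrupting the GP kthread: prints 0, wakeup proceeds. */
	printf("irq on gp kthread, skip: %d\n", skip_self_wakeup(true, true, false));
	/* GP kthread itself in task context: prints 1, self-wakeup is skipped. */
	printf("gp kthread itself, skip: %d\n", skip_self_wakeup(true, false, false));
	return 0;
}
```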
-
Submitted by Jarkko Sakkinen
commit f5595f5baa30e009bf54d0d7653a9a0cc465be60 upstream. The send() callback should never return the transmitted length; every driver except tpm_crb already returns 0 on success. The reason is that the main transmit functionality only cares about whether the transmit was successful or not and ignores the count completely. Suggested-by: Stefan Berger <stefanb@linux.ibm.com> Cc: stable@vger.kernel.org Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Reviewed-by: Stefan Berger <stefanb@linux.ibm.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Tested-by: Alexander Steffen <Alexander.Steffen@infineon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Jarkko Sakkinen
commit 3d7a850fdc1a2e4d2adbc95cc0fc962974725e88 upstream. The current approach to read first 6 bytes from the response and then tail of the response, can cause the 2nd memcpy_fromio() to do an unaligned read (e.g. read 32-bit word from address aligned to a 16-bits), depending on how memcpy_fromio() is implemented. If this happens, the read will fail and the memory controller will fill the read with 1's. This was triggered by 170d13ca3a2f, which should be probably refined to check and react to the address alignment. Before that commit, on x86 memcpy_fromio() turned out to be memcpy(). By a luck GCC has done the right thing (from tpm_crb's perspective) for us so far, but we should not rely on that. Thus, it makes sense to fix this also in tpm_crb, not least because the fix can be then backported to stable kernels and make them more robust when compiled in differing environments. Cc: stable@vger.kernel.org Cc: James Morris <jmorris@namei.org> Cc: Tomas Winkler <tomas.winkler@intel.com> Cc: Jerry Snitselaar <jsnitsel@redhat.com> Fixes: 30fc8d13 ("tpm: TPM 2.0 CRB Interface") Signed-off-by: NJarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Reviewed-by: NJerry Snitselaar <jsnitsel@redhat.com> Acked-by: NTomas Winkler <tomas.winkler@intel.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Aditya Pakki
commit e406f12dde1a8375d77ea02d91f313fb1a9c6aec upstream. mddev->sync_thread can be set to NULL on kzalloc failure downstream. The patch checks for such a scenario and frees allocated resources. Committer note: Added similar fix to raid5.c, as suggested by Guoqing. Cc: stable@vger.kernel.org # v3.16+ Acked-by: Guoqing Jiang <gqjiang@suse.com> Signed-off-by: Aditya Pakki <pakki001@umn.edu> Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Adrian Hunter
commit 076333870c2f5bdd9b6d31e7ca1909cf0c84cbfa upstream. When TSC is not available, "timeless" decoding is used but a divide by zero occurs if perf_time_to_tsc() is called. Ensure the divisor is not zero. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: stable@vger.kernel.org # v4.9+ Link: https://lkml.kernel.org/n/tip-1i4j0wqoc8vlbkcizqqxpsf4@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
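A sketch of the guard; the structure, field names and conversion formula below are assumptions made for illustration, not the perf tools' actual tsc code:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative conversion parameters; with "timeless" decoding they are
 * never filled in, so time_mult stays zero and must be checked before
 * being used as a divisor. */
struct tc_params { uint16_t time_shift; uint32_t time_mult; uint64_t time_zero; };

static uint64_t time_to_cycles(uint64_t ns, const struct tc_params *tc)
{
	if (!tc->time_mult)		/* no TSC information available */
		return 0;
	return ((ns - tc->time_zero) << tc->time_shift) / tc->time_mult;
}

int main(void)
{
	struct tc_params timeless = { 0, 0, 0 };
	struct tc_params with_tsc = { 10, 642, 0 };

	printf("timeless: %llu\n", (unsigned long long)time_to_cycles(1000, &timeless));
	printf("with tsc: %llu\n", (unsigned long long)time_to_cycles(1000, &with_tsc));
	return 0;
}
```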
-
Submitted by Kan Liang
commit 8041ffd36f42d8521d66dd1e236feb58cecd68bc upstream. The client IMC bandwidth events currently return very large values: $ perf stat -e uncore_imc/data_reads/ -e uncore_imc/data_writes/ -I 10000 -a 10.000117222 34,788.76 MiB uncore_imc/data_reads/ 10.000117222 8.26 MiB uncore_imc/data_writes/ 20.000374584 34,842.89 MiB uncore_imc/data_reads/ 20.000374584 10.45 MiB uncore_imc/data_writes/ 30.000633299 37,965.29 MiB uncore_imc/data_reads/ 30.000633299 323.62 MiB uncore_imc/data_writes/ 40.000891548 41,012.88 MiB uncore_imc/data_reads/ 40.000891548 6.98 MiB uncore_imc/data_writes/ 50.001142480 1,125,899,906,621,494.75 MiB uncore_imc/data_reads/ 50.001142480 6.97 MiB uncore_imc/data_writes/ The client IMC events are freerunning counters. They still use the old event encoding format (0x1 for data_read and 0x2 for data write). The counter bit width is calculated by common code, which assume that the standard encoding format is used for the freerunning counters. Error bit width information is calculated. The patch intends to convert the old client IMC event encoding to the standard encoding format. Current common code uses event->attr.config which directly copy from user space. We should not implicitly modify it for a converted event. The event->hw.config is used to replace the event->attr.config in common code. For client IMC events, the event->attr.config is used to calculate a converted event with standard encoding format in the custom event_init(). The converted event is stored in event->hw.config. For other events of freerunning counters, they already use the standard encoding format. The same value as event->attr.config is assigned to event->hw.config in common event_init(). Reported-by: NJin Yao <yao.jin@linux.intel.com> Tested-by: NJin Yao <yao.jin@linux.intel.com> Signed-off-by: NKan Liang <kan.liang@linux.intel.com> Signed-off-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: stable@kernel.org # v4.18+ Fixes: 9aae1780 ("perf/x86/intel/uncore: Clean up client IMC uncore") Link: https://lkml.kernel.org/r/20190227165729.1861-1-kan.liang@linux.intel.comSigned-off-by: NIngo Molnar <mingo@kernel.org> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Adrian Hunter
commit 5a99d99e3310a565b0cf63f785b347be9ee0da45 upstream. Auxtrace records might have up to 7 bytes of padding appended. Adjust the overlap accordingly. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20190206103947.15750-3-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Adrian Hunter
commit c3fcadf0bb765faf45d6d562246e1d08885466df upstream. Define auxtrace record alignment so that it can be referenced elsewhere. Note this is preparation for the patch "perf intel-pt: Fix overlap calculation for padding". Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20190206103947.15750-2-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Adrian Hunter
commit d6d457451eb94fa747dc202765592eb8885a7352 upstream. Kallsyms symbols do not have a size, so the size becomes the distance to the next symbol. Consequently the recently added trampoline symbols end up with large sizes because the trampolines are some distance from one another and the main kernel map. However, symbols that end outside their map can disrupt the symbol tree because, after mapping, it can appear incorrectly that they overlap other symbols. Add logic to truncate symbol size to the end of the corresponding map. Signed-off-by: NAdrian Hunter <adrian.hunter@intel.com> Acked-by: NJiri Olsa <jolsa@kernel.org> Cc: stable@vger.kernel.org Fixes: d83212d5 ("kallsyms, x86: Export addresses of PTI entry trampolines") Link: http://lkml.kernel.org/r/20190109091835.5570-2-adrian.hunter@intel.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
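A sketch of the truncation rule; the structures and addresses are illustrative, and only the clip-to-map-end logic reflects the commit:

```c
#include <stdint.h>
#include <stdio.h>

struct map    { uint64_t start, end; };		/* placeholder types */
struct symbol { uint64_t start, end; };

/* A kallsyms symbol has no size, so its end is taken from the next
 * symbol's start; if that lies beyond the map (as with the PTI entry
 * trampolines), clip it so the symbol cannot appear to overlap others. */
static void truncate_symbol_to_map(struct symbol *sym, const struct map *map)
{
	if (sym->end > map->end)
		sym->end = map->end;
}

int main(void)
{
	struct map kmap = { 0xffffffff81000000ULL, 0xffffffff82000000ULL };
	struct symbol tramp = { 0xffffffff81fff000ULL, 0xffffffffa0000000ULL };

	truncate_symbol_to_map(&tramp, &kmap);
	printf("symbol end clipped to 0x%llx\n", (unsigned long long)tramp.end);
	return 0;
}
```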
-
Submitted by Adrian Hunter
commit 03997612904866abe7cdcc992784ef65cb3a4b81 upstream. CYC packet timestamp calculation depends upon CBR which was being cleared upon overflow (OVF). That can cause errors due to failing to synchronize with sideband events. Even if a CBR change has been lost, the old CBR is still a better estimate than zero. So remove the clearing of CBR. Signed-off-by: NAdrian Hunter <adrian.hunter@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20190206103947.15750-4-adrian.hunter@intel.comSigned-off-by: NArnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Josh Poimboeuf
commit f76a16adc485699f95bb71fce114f97c832fe664 upstream. The .orc_unwind section is a packed array of 6-byte structs. It's currently aligned to 6 bytes, which is causing warnings in the LLD linker. Six isn't a power of two, so it's not a valid alignment value. The actual alignment doesn't matter much because it's an array of packed structs. An alignment of two is sufficient. In reality it always gets aligned to four bytes because it comes immediately after the 4-byte-aligned .orc_unwind_ip section. Fixes: ee9f8fce ("x86/unwind: Add the ORC unwinder") Reported-by: NNick Desaulniers <ndesaulniers@google.com> Reported-by: NDmitry Golovin <dima@golovin.in> Reported-by: NSedat Dilek <sedat.dilek@gmail.com> Signed-off-by: NJosh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: NThomas Gleixner <tglx@linutronix.de> Tested-by: NSedat Dilek <sedat.dilek@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://github.com/ClangBuiltLinux/linux/issues/218 Link: https://lkml.kernel.org/r/d55027ee95fe73e952dcd8be90aebd31b0095c45.1551892041.git.jpoimboe@redhat.comSigned-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Nicolas Pitre
commit a6dbe442755999960ca54a9b8ecfd9606be0ea75 upstream. Commit 4b4ecd9c ("vt: Perform safe console erase only once") removed what appeared to be an extra call to scr_memsetw(). This missed the fact that set_origin() must be called before clearing the screen otherwise old screen content gets restored on the screen when using vgacon. Let's fix that by moving all the scrollback handling to flush_scrollback() where it logically belongs, and invoking it before the actual screen clearing in csi_J(), making the code simpler in the end. Reported-by: NMatthew Whitehead <tedheadster@gmail.com> Signed-off-by: NNicolas Pitre <nico@linaro.org> Tested-by: NMatthew Whitehead <tedheadster@gmail.com> Fixes: 4b4ecd9c ("vt: Perform safe console erase only once") Cc: stable@vger.kernel.org Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Greg Kroah-Hartman
commit a41e8f25fa8f8f67360d88eb0eebbabe95a64bdf upstream. The networking maintainer keeps a public list of the patches being queued up for the next round of stable releases. Be sure to check there before asking for a patch to be applied so that you do not waste people's time. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jonathan Corbet <corbet@lwn.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Daniel Axtens
commit 9951379b0ca88c95876ad9778b9099e19a95d566 upstream. Some users see panics like the following when performing fstrim on a bcached volume: [ 529.803060] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 [ 530.183928] #PF error: [normal kernel read fault] [ 530.412392] PGD 8000001f42163067 P4D 8000001f42163067 PUD 1f42168067 PMD 0 [ 530.750887] Oops: 0000 [#1] SMP PTI [ 530.920869] CPU: 10 PID: 4167 Comm: fstrim Kdump: loaded Not tainted 5.0.0-rc1+ #3 [ 531.290204] Hardware name: HP ProLiant DL360 Gen9/ProLiant DL360 Gen9, BIOS P89 12/27/2015 [ 531.693137] RIP: 0010:blk_queue_split+0x148/0x620 [ 531.922205] Code: 60 38 89 55 a0 45 31 db 45 31 f6 45 31 c9 31 ff 89 4d 98 85 db 0f 84 7f 04 00 00 44 8b 6d 98 4c 89 ee 48 c1 e6 04 49 03 70 78 <8b> 46 08 44 8b 56 0c 48 8b 16 44 29 e0 39 d8 48 89 55 a8 0f 47 c3 [ 532.838634] RSP: 0018:ffffb9b708df39b0 EFLAGS: 00010246 [ 533.093571] RAX: 00000000ffffffff RBX: 0000000000046000 RCX: 0000000000000000 [ 533.441865] RDX: 0000000000000200 RSI: 0000000000000000 RDI: 0000000000000000 [ 533.789922] RBP: ffffb9b708df3a48 R08: ffff940d3b3fdd20 R09: 0000000000000000 [ 534.137512] R10: ffffb9b708df3958 R11: 0000000000000000 R12: 0000000000000000 [ 534.485329] R13: 0000000000000000 R14: 0000000000000000 R15: ffff940d39212020 [ 534.833319] FS: 00007efec26e3840(0000) GS:ffff940d1f480000(0000) knlGS:0000000000000000 [ 535.224098] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 535.504318] CR2: 0000000000000008 CR3: 0000001f4e256004 CR4: 00000000001606e0 [ 535.851759] Call Trace: [ 535.970308] ? mempool_alloc_slab+0x15/0x20 [ 536.174152] ? bch_data_insert+0x42/0xd0 [bcache] [ 536.403399] blk_mq_make_request+0x97/0x4f0 [ 536.607036] generic_make_request+0x1e2/0x410 [ 536.819164] submit_bio+0x73/0x150 [ 536.980168] ? submit_bio+0x73/0x150 [ 537.149731] ? bio_associate_blkg_from_css+0x3b/0x60 [ 537.391595] ? _cond_resched+0x1a/0x50 [ 537.573774] submit_bio_wait+0x59/0x90 [ 537.756105] blkdev_issue_discard+0x80/0xd0 [ 537.959590] ext4_trim_fs+0x4a9/0x9e0 [ 538.137636] ? ext4_trim_fs+0x4a9/0x9e0 [ 538.324087] ext4_ioctl+0xea4/0x1530 [ 538.497712] ? _copy_to_user+0x2a/0x40 [ 538.679632] do_vfs_ioctl+0xa6/0x600 [ 538.853127] ? __do_sys_newfstat+0x44/0x70 [ 539.051951] ksys_ioctl+0x6d/0x80 [ 539.212785] __x64_sys_ioctl+0x1a/0x20 [ 539.394918] do_syscall_64+0x5a/0x110 [ 539.568674] entry_SYSCALL_64_after_hwframe+0x44/0xa9 We have observed it where both: 1) LVM/devmapper is involved (bcache backing device is LVM volume) and 2) writeback cache is involved (bcache cache_mode is writeback) On one machine, we can reliably reproduce it with: # echo writeback > /sys/block/bcache0/bcache/cache_mode (not sure whether above line is required) # mount /dev/bcache0 /test # for i in {0..10}; do file="$(mktemp /test/zero.XXX)" dd if=/dev/zero of="$file" bs=1M count=256 sync rm $file done # fstrim -v /test Observing this with tracepoints on, we see the following writes: fstrim-18019 [022] .... 91107.302026: bcache_write: 73f95583-561c-408f-a93a-4cbd2498f5c8 inode 0 DS 4260112 + 196352 hit 0 bypass 1 fstrim-18019 [022] .... 91107.302050: bcache_write: 73f95583-561c-408f-a93a-4cbd2498f5c8 inode 0 DS 4456464 + 262144 hit 0 bypass 1 fstrim-18019 [022] .... 91107.302075: bcache_write: 73f95583-561c-408f-a93a-4cbd2498f5c8 inode 0 DS 4718608 + 81920 hit 0 bypass 1 fstrim-18019 [022] .... 91107.302094: bcache_write: 73f95583-561c-408f-a93a-4cbd2498f5c8 inode 0 DS 5324816 + 180224 hit 0 bypass 1 fstrim-18019 [022] .... 
91107.302121: bcache_write: 73f95583-561c-408f-a93a-4cbd2498f5c8 inode 0 DS 5505040 + 262144 hit 0 bypass 1 fstrim-18019 [022] .... 91107.302145: bcache_write: 73f95583-561c-408f-a93a-4cbd2498f5c8 inode 0 DS 5767184 + 81920 hit 0 bypass 1 fstrim-18019 [022] .... 91107.308777: bcache_write: 73f95583-561c-408f-a93a-4cbd2498f5c8 inode 0 DS 6373392 + 180224 hit 1 bypass 0 <crash> Note the final one has different hit/bypass flags. This is because in should_writeback(), we were hitting a case where the partial stripe condition was returning true and so should_writeback() was returning true early. If that hadn't been the case, it would have hit the would_skip test, and as would_skip == s->iop.bypass == true, should_writeback() would have returned false. Looking at the git history from 'commit 72c27061 ("bcache: Write out full stripes")', it looks like the idea was to optimise for raid5/6: * If a stripe is already dirty, force writes to that stripe to writeback mode - to help build up full stripes of dirty data To fix this issue, make sure that should_writeback() on a discard op never returns true. More details of debugging: https://www.spinics.net/lists/linux-bcache/msg06996.html Previous reports: - https://bugzilla.kernel.org/show_bug.cgi?id=201051 - https://bugzilla.kernel.org/show_bug.cgi?id=196103 - https://www.spinics.net/lists/linux-bcache/msg06885.html (Coly Li: minor modification to follow maximum 75 chars per line rule) Cc: Kent Overstreet <koverstreet@google.com> Cc: stable@vger.kernel.org Fixes: 72c27061 ("bcache: Write out full stripes") Signed-off-by: NDaniel Axtens <dja@axtens.net> Signed-off-by: NColy Li <colyli@suse.de> Signed-off-by: NJens Axboe <axboe@kernel.dk> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
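A condensed sketch of the decision after the fix; the names and surrounding logic are simplified stand-ins for bcache's should_writeback(), and only the "never writeback a discard" rule is from the commit:

```c
#include <stdbool.h>
#include <stdio.h>

static bool should_writeback(bool is_discard, bool partial_stripe_dirty,
			     bool writeback_mode, bool would_skip)
{
	if (is_discard)
		return false;		/* the fix: discards are never written back */
	if (partial_stripe_dirty)
		return true;		/* "write out full stripes" optimisation */
	return writeback_mode && !would_skip;
}

int main(void)
{
	/* The fstrim case from the oops: a discard landing on a dirty stripe. */
	printf("discard on dirty stripe: %d\n",
	       should_writeback(true, true, true, true));
	printf("normal write, wb mode:   %d\n",
	       should_writeback(false, false, true, false));
	return 0;
}
```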
-
Submitted by Viresh Kumar
commit 1fad17fb1bbcd73159c2b992668a6957ecc5af8a upstream. If wakeup_source_add() is called right after wakeup_source_remove() for the same wakeup source, timer_setup() may be called for a potentially scheduled timer, which is incorrect. To avoid that, move the wakeup source timer cancellation from wakeup_source_drop() to wakeup_source_remove(). Moreover, make wakeup_source_remove() clear the timer function after canceling the timer to let wakeup_source_not_registered() treat unregistered wakeup sources in the same way as the ones that have never been registered. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Cc: 4.4+ <stable@vger.kernel.org> # 4.4+ [ rjw: Subject, changelog, merged two patches together ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by J. Bruce Fields
commit b7e5034cbecf5a65b7bfdc2b20a8378039577706 upstream. James Pearson found that an NFS server stopped responding to UDP requests if started with more than 1017 threads. sv_max_mesg is about 2^20, so that is probably where the calculation performed by svc_sock_setbufsize(svsk->sk_sock, (serv->sv_nrthreads+3) * serv->sv_max_mesg, (serv->sv_nrthreads+3) * serv->sv_max_mesg); starts to overflow an int. Reported-by: James Pearson <jcpearson@gmail.com> Tested-by: James Pearson <jcpearson@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
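A worked example of the overflow; the message size (about 1 MiB plus a page of overhead) and the doubling applied by the socket layer are assumptions used only to show the order of magnitude, while the 1017-thread threshold comes from the report:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
	int sv_max_mesg = (1 << 20) + 4096;	/* assumed, roughly 2^20 */
	int nrthreads   = 1018;			/* just past the reported threshold */

	/* Computed in unsigned arithmetic, then converted, to show what the
	 * kernel's int-sized buffer value ends up holding. */
	unsigned int product = (unsigned int)(nrthreads + 3)
			     * (unsigned int)sv_max_mesg * 2u;
	int as_int = (int)product;
	long long intended = (long long)(nrthreads + 3) * sv_max_mesg * 2;

	printf("INT_MAX:   %d\n", INT_MAX);
	printf("as int:    %d\n", as_int);	/* negative: UDP buffers misconfigured */
	printf("intended:  %lld\n", intended);
	return 0;
}
```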
-
Submitted by Trond Myklebust
commit c1dffe0bf7f9c3d57d9f237a7cb2a81e62babd2b upstream. If we have to retransmit a request, we should ensure that we reinitialise the sequence results structure, since in the event of a signal we need to treat the request as if it had not been sent. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Yihao Wu
commit dd838821f0a29781b185cd8fb8e48d5c177bd838 upstream. Commit 62a063b8e7d1 "nfsd4: fix crash on writing v4_end_grace before nfsd startup" is trying to fix a NULL dereference issue, but it mistakenly checks if the nfsd server is started. So fix it. Fixes: 62a063b8e7d1 "nfsd4: fix crash on writing v4_end_grace before nfsd startup" Cc: stable@vger.kernel.org Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com> Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by NeilBrown
commit b602345da6cbb135ba68cf042df8ec9a73da7981 upstream. If the result of an NFSv3 readdir{,plus} request results in the "offset" on one entry having to be split across 2 pages, and is sized so that the next directory entry doesn't fit in the requested size, then memory corruption can happen. When encode_entry() is called after encoding the last entry that fits, it notices that ->offset and ->offset1 are set, and so stores the offset value in the two pages as required. It clears ->offset1 but *does not* clear ->offset. Normally this omission doesn't matter as encode_entry_baggage() will be called, and will set ->offset to a suitable value (not on a page boundary). But in the case where cd->buflen < elen and nfserr_toosmall is returned, ->offset is not reset. This means that nfsd3proc_readdirplus will see ->offset with a value 4 bytes before the end of a page, and ->offset1 set to NULL. It will try to write 8bytes to ->offset. If we are lucky, the next page will be read-only, and the system will BUG: unable to handle kernel paging request at... If we are unlucky, some innocent page will have the first 4 bytes corrupted. nfsd3proc_readdir() doesn't even check for ->offset1, it just blindly writes 8 bytes to the offset wherever it is. Fix this by clearing ->offset after it is used, and copying the ->offset handling code from nfsd3_proc_readdirplus into nfsd3_proc_readdir. (Note that the commit hash in the Fixes tag is from the 'history' tree - this bug predates git). Fixes: 0b1d57cf7654 ("[PATCH] kNFSd: Fix nfs3 dentry encoding") Fixes-URL: https://git.kernel.org/pub/scm/linux/kernel/git/history/history.git/commit/?id=0b1d57cf7654 Cc: stable@vger.kernel.org (v2.6.12+) Signed-off-by: NNeilBrown <neilb@suse.com> Signed-off-by: NJ. Bruce Fields <bfields@redhat.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by J. Bruce Fields
commit c54f24e338ed2a35218f117a4a1afb5f9e2b4e64 upstream. We're unintentionally limiting the number of slots per nfsv4.1 session to 10. Often more than 10 simultaneous RPCs are needed for the best performance. This calculation was meant to prevent any one client from using up more than a third of the limit we set for total memory use across all clients and sessions. Instead, it's limiting the client to a third of the maximum for a single session. Fix this. Reported-by: Chris Tracy <ctracy@engr.scu.edu> Cc: stable@vger.kernel.org Fixes: de766e57 "nfsd: give out fewer session slots as limit approaches" Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Trond Myklebust
commit 8127d82705998568b52ac724e28e00941538083d upstream. If the I/O completion failed with a fatal error, then we should just exit nfs_pageio_complete_mirror() rather than try to recoalesce. Fixes: a7d42ddb ("nfs: add mirroring support to pgio layer") Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Cc: stable@vger.kernel.org # v4.0+ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Trond Myklebust
commit 4d91969ed4dbcefd0e78f77494f0cb8fada9048a upstream. Whether we need to exit early, or just reprocess the list, we must not lose track of the request which failed to get recoalesced. Fixes: 03d5eb65 ("NFS: Fix a memory leak in nfs_do_recoalesce") Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Cc: stable@vger.kernel.org # v4.0+ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-