- 18 March 2020, 40 commits
-
Submitted by Yufen Yu

commit 3b7436cc9449d5ff7fa1c1fd5bc3edb6402ff5b8 upstream.

For super_90_load, we need to make sure 'desc_nr' is less than MD_SB_DISKS, avoiding invalid memory access of 'sb->disks'.

Fixes: 228fc7d76db6 ("md: avoid invalid memory access for array sb->dev_roles")
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
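As an illustration of the check described above, here is a minimal hedged sketch (simplified, using the 0.90 superblock field names; not the exact upstream diff):

```c
/*
 * Hedged sketch: validate desc_nr against MD_SB_DISKS before it can
 * ever be used to index sb->disks[] (simplified, not the actual patch).
 */
rdev->desc_nr = sb->this_disk.number;
if (rdev->desc_nr < 0 || rdev->desc_nr >= MD_SB_DISKS)
	return -EINVAL;		/* corrupt superblock: refuse to use it */
```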
-
Submitted by Yufen Yu

commit 228fc7d76db68732677230a3c64337908fd298e3 upstream.

We need to guarantee that 'desc_nr' is valid before accessing the sb->dev_roles array. In addition, we should avoid .load_super always returning '0' when the level is LEVEL_MULTIPATH, which is not expected.

Reported-by: coverity-bot <keescook+coverity-bot@chromium.org>
Addresses-Coverity-ID: 1487373 ("Memory - illegal accesses")
Fixes: 6a5cb53aaa4e ("md: no longer compare spare disk superblock events in super_load")
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
-
Submitted by Yufen Yu

commit 6a5cb53aaa4ef515ddeffa04ce18b771121127b4 upstream.

We have a test case as follows:

  mdadm -CR /dev/md1 -l 1 -n 4 /dev/sd[a-d] --assume-clean --bitmap=internal
  mdadm -S /dev/md1
  mdadm -A /dev/md1 /dev/sd[b-c] --run --force
  mdadm --zero /dev/sda
  mdadm /dev/md1 -a /dev/sda
  echo offline > /sys/block/sdc/device/state
  echo offline > /sys/block/sdb/device/state
  sleep 5
  mdadm -S /dev/md1
  echo running > /sys/block/sdb/device/state
  echo running > /sys/block/sdc/device/state
  mdadm -A /dev/md1 /dev/sd[a-c] --run --force

When we re-add /dev/sda to the array, it starts recovery. After the other two disks in md1 are taken offline, the recovery is interrupted and the superblock update info cannot be written to the offline disks, while the spare disk (/dev/sda) can continue to update its superblock info.

After stopping the array and assembling it again, we found that the array fails to run, with the following kernel messages:

  [ 172.986064] md: kicking non-fresh sdb from array!
  [ 173.004210] md: kicking non-fresh sdc from array!
  [ 173.022383] md/raid1:md1: active with 0 out of 4 mirrors
  [ 173.022406] md1: failed to create bitmap (-5)
  [ 173.023466] md: md1 stopped.

Since both sdb and sdc have a 'sb->events' value smaller than that of sda, they are kicked from the array. However, the only remaining disk, sda, is in the 'spare' state before the stop and cannot be added to the conf->mirrors[] array. In the end, assembling and running the raid array fails.

In fact, we can use the older disk sdb or sdc to assemble the array. That means we should not choose the 'spare' disk as the fresh disk in analyze_sbs(). To fix the problem, do not compare superblock events when the disk is a spare, the same as validate_super.

Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
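The idea can be sketched roughly as follows (simplified, variable names taken from the super_90_load context; not the exact upstream diff):

```c
/*
 * Hedged sketch: when picking the "freshest" superblock in .load_super,
 * only let active/in-sync members win on sb->events; a spare's newer
 * event count must not kick real array members.
 */
bool spare_disk = true;

if (sb->level == LEVEL_MULTIPATH ||
    (rdev->desc_nr >= 0 && rdev->desc_nr < MD_SB_DISKS &&
     sb->disks[rdev->desc_nr].state &
     ((1 << MD_DISK_SYNC) | (1 << MD_DISK_ACTIVE))))
	spare_disk = false;		/* a real array member */

if (!refdev)
	return spare_disk ? 0 : 1;	/* only non-spares become the reference */

if (!spare_disk && ev1 > ev2)
	return 1;			/* genuinely fresher than the reference */

return 0;
```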
-
Submitted by Pawel Baldysiak

commit c42d3240990814eec1e4b2b93fa0487fc4873aed upstream.

mdadm expects that setting a drive as faulty will fail with -EBUSY only if this operation would cause the RAID to fail. If this happens, it will try to stop the array. Currently -EBUSY might also be returned if the rdev is in the middle of the removal process - for example, there is a race with mdmon that has already requested the drive to be failed/removed.

If the rdev does not contain mddev, return -ENODEV instead, so the caller can distinguish between those two cases and behave accordingly.

Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
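A minimal hedged sketch of the intended error semantics, assuming the sysfs "faulty" store path (not the exact upstream diff):

```c
/* Hedged sketch: distinguish "rdev mid-removal" from "array would fail". */
struct mddev *mddev = rdev->mddev;
int err;

if (!mddev) {
	/* rdev already being torn down: the caller should not stop the array */
	err = -ENODEV;
} else {
	/* keep -EBUSY for "failing this drive would fail the array" */
	md_error(mddev, rdev);
	err = test_bit(Faulty, &rdev->flags) ? 0 : -EBUSY;
}
```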
-
Submitted by Alex Wu

commit ee37d7314a32ab6809eacc3389bad0406c69a81f upstream.

[Symptom]

The resync thread hangs when a newly added disk is set faulty during replacement.

[Root Cause]

In raid10_sync_request(), we expect to issue a bio with callback end_sync_read(), and a bio with callback end_sync_write(). In the normal situation, we add the resyncing sectors to mddev->recovery_active when raid10_sync_request() returns, and subtract the resynced sectors from mddev->recovery_active when end_sync_write() calls end_sync_request().

If the newly added disk, which is replacing the old disk, is set faulty, there is a race condition:
1. In the first RCU-protected section, the resync thread does not detect that mreplace is set faulty and passes the condition.
2. In the second RCU-protected section, mreplace is set faulty.
3. But the resync thread prepares the read object first, and then checks the write condition.
4. It finds that mreplace is set faulty and does not have to prepare the write object.
This causes us to add resync sectors but never subtract them.

[How to Reproduce]

This issue can be easily reproduced by the following steps:

  mdadm -C /dev/md0 --assume-clean -l 10 -n 4 /dev/sd[abcd]
  mdadm /dev/md0 -a /dev/sde
  mdadm /dev/md0 --replace /dev/sdd
  sleep 1
  mdadm /dev/md0 -f /dev/sde

[How to Fix]

This issue can be fixed by using local variables to record the result of the test conditions. Once the conditions are satisfied, we can make sure that we need to issue a bio for read and a bio for write.

The previous commit 24afd80d ("md/raid10: handle recovery of replacement devices.") also checks whether the bio is NULL, but leaves a comment saying that it is a pointless test. So we remove this dummy check.

Reported-by: Alex Chen <alexchen@synology.com>
Reviewed-by: Allen Peng <allenpeng@synology.com>
Reviewed-by: BingJing Chang <bingjingc@synology.com>
Signed-off-by: Alex Wu <alexwu@synology.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
-
Submitted by Rafael J. Wysocki

commit 22782b3f9bb8ae21c710e2880db21bc729771e92 upstream

After commit 61cb5758d3c4 ("cpuidle: Add cpuidle.governor= command line parameter"), new cpuidle governors are not added to the list of available governors, so governor selection via sysfs doesn't work as expected (even though it is rarely used anyway).

Fix that by making cpuidle_register_governor() add new governors to cpuidle_governors again.

Fixes: 61cb5758d3c4 ("cpuidle: Add cpuidle.governor= command line parameter")
Reported-by: Kees Cook <keescook@chromium.org>
Cc: 5.0+ <stable@vger.kernel.org> # 5.0+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
-
Submitted by Marc Zyngier

commit 8e01d9a396e6db153d94a6004e6473d9ff251a6a upstream.

When the VHE code was reworked, a lot of the vgic stuff was moved around, but the GICv4 residency code stayed untouched, meaning that we come in and out of residency on each flush/sync, which is obviously suboptimal.

To address this, let's move things around a bit:

- Residency entry (flush) moves to vcpu_load
- Residency exit (sync) moves to vcpu_put
- On blocking (entry to WFI), we "put"
- On unblocking (exit from WFI), we "load"

Because these can nest (load/block/put/load/unblock/put, for example), we now have per-VPE tracking of the residency state.

Additionally, vgic_v4_put gains a "need doorbell" parameter, which only gets set to true when blocking because of a WFI. This allows finer control of the doorbell, which now also gets disabled as soon as it is signaled.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20191027144234.8395-2-maz@kernel.org
Signed-off-by: Shannon Zhao <shannon.zhao@linux.alibaba.com>
Acked-by: Zou Cao <zoucao@linux.alibaba.com>
-
Submitted by Tony Luck

commit e80634a75aba90e7485cd1fdb463fcac5d45f14d upstream

Skylake logs some additional useful information in per-channel registers in addition to the architectural status/addr/misc logged in the machine check bank. Pick up this information and add it to the EDAC log:

  retry_rd_err_[five 32-bit register values]

Sorry, no definitions for these registers. OEMs and DIMM vendors will be able to use them to isolate which cells in the DIMM are causing problems.

  correrrcnt[per rank corrected error counts]

Note that if additional errors are logged while these registers are being read, you may see a jumble of values, some from earlier errors, others from later errors (since the registers report the most recent logged error). The correrrcnt registers provide error counts per possible rank. If these counts only change by one since the previous error logged for this channel, then it is safe to assume that the registers logged provide a coherent view of one error.

With this change EDAC logs look like this:

  EDAC MC4: 1 CE memory read error on CPU_SrcID#2_MC#0_Chan#1_DIMM#0 (channel:1 slot:0 page:0x8f26018 offset:0x0 grain:32 syndrome:0x0 - err_code:0x0101:0x0091 socket:2 imc:0 rank:0 bg:0 ba:0 row:0x1f880 col:0x200 retry_rd_err_log[0001a209 00000000 00000001 04800001 0001f880] correrrcnt[0001 0000 0000 0000 0000 0000 0000 0000])

Acked-by: Aristeu Rozanski <aris@redhat.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: zhaobing <zhaobing@linux.alibaba.com>
Reviewed-by: luanshi <zhangliguang@linux.alibaba.com>
-
Submitted by Sai Praneeth

[ Upstream commit 9dbbedaa6171247c4c7c40b83f05b200a117c2e0 ]

After the kernel has booted, if any access by firmware causes a page fault, the EFI page fault handler freezes efi_rts_wq and schedules a new process. To do this, the EFI page fault handler needs efi_rts_work; hence, make it accessible. There will be no race conditions in accessing this structure, because all the calls to EFI runtime services are already serialized.

[ Wen: This patch also fixes a memory corruption:

  #define efi_queue_work(_rts, _arg1, _arg2, _arg3, _arg4, _arg5)	\
  ({									\
  	struct efi_runtime_work efi_rts_work;				\
  	…								\
  	init_completion(&efi_rts_work.efi_rts_comp);			\
  	INIT_WORK(&efi_rts_work.work, efi_call_rts);			\
  	…

  efi_rts_work is on the stack; registering it with the workqueue causes the following error:

  ODEBUG: object (____ptrval____) is on stack (____ptrval____), but NOT annotated.
  ------------[ cut here ]------------
  WARNING: CPU: 6 PID: 1 at lib/debugobjects.c:368 __debug_object_init+0x218/0x538
  Modules linked in:
  CPU: 6 PID: 1 Comm: swapper/0 Tainted: G        W  4.19.91 #19
  …
  Call trace:
   __debug_object_init+0x218/0x538
   debug_object_init+0x20/0x28
   __init_work+0x34/0x58
   virt_efi_get_time.part.5+0x6c/0x12c
   virt_efi_get_time+0x4c/0x58
   efi_read_time+0x40/0x9c
   __rtc_read_time+0x50/0x118
   rtc_read_time+0x60/0x1f0
   rtc_hctosys+0x74/0x124
   do_one_initcall+0xac/0x3d4
   kernel_init_freeable+0x49c/0x59c
   kernel_init+0x18/0x110
]

Tested-by: Bhupesh Sharma <bhsharma@redhat.com>
Suggested-by: Matt Fleming <matt@codeblueprint.co.uk>
Based-on-code-from: Ricardo Neri <ricardo.neri@intel.com>
Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: 3eb420e7 ("efi: Use a work queue to invoke EFI Runtime Services")
Signed-off-by: Wen Yang <wenyang@linux.alibaba.com>
Acked-by: Caspar Zhang <caspar@linux.alibaba.com>
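The core of the fix can be sketched as follows (assumed/simplified names around the real efi_rts_work, efi_rts_wq and efi_call_rts symbols; not the exact upstream diff):

```c
/*
 * Hedged sketch: a work item handed to a workqueue must outlive the
 * function that queued it, so use one global efi_rts_work instead of a
 * stack-local one.  EFI runtime calls are serialized, so a single
 * instance is enough.
 */
struct efi_runtime_work efi_rts_work;		/* global: still valid after return */

static int efi_queue_one_call(void)
{
	init_completion(&efi_rts_work.efi_rts_comp);
	INIT_WORK(&efi_rts_work.work, efi_call_rts);

	/* Previously this struct lived on the caller's stack, so the queued
	 * work could reference freed stack memory once the caller returned. */
	if (!queue_work(efi_rts_wq, &efi_rts_work.work))
		return -EBUSY;

	wait_for_completion(&efi_rts_work.efi_rts_comp);
	return efi_rts_work.status == EFI_SUCCESS ? 0 : -EIO;
}
```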
-
Submitted by Zhenzhong Duan

commit be721b9b24394c088326384d44dcb9ec4b321aae upstream

Currently haltpoll isn't aware of the 'idle=' override; the priority is 'idle=poll' > haltpoll > 'idle=halt'. When 'idle=poll' is used, the cpuidle driver is bypassed but current_driver in sysfs still shows 'haltpoll'. When 'idle=halt' is used, haltpoll takes precedence and makes 'idle=halt' have no effect.

Add a check to prevent the haltpoll driver from loading if 'idle=' is present.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Co-developed-by: Joao Martins <joao.m.martins@oracle.com>
[ rjw: Subject ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
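A hedged sketch of the check (x86-specific and simplified; the driver's full init does more than this):

```c
/*
 * Hedged sketch: bail out of haltpoll_init() when the user passed any
 * 'idle=' option, so 'idle=poll' / 'idle=halt' keep their documented
 * behaviour instead of being shadowed by haltpoll.
 */
static int __init haltpoll_init(void)
{
	if (boot_option_idle_override != IDLE_NO_OVERRIDE)
		return -ENODEV;		/* respect the 'idle=' boot parameter */

	/* ... normal driver/governor registration continues here ... */
	return 0;
}
```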
-
Submitted by Joao Martins

commit b0513562cf8c97035a60b6463b459edf8b3c9e5d upstream

cpuidle-haltpoll can be built as a module to allow optional late loading. Given we are setting @owner to THIS_MODULE, cpuidle will attempt to grab a module reference every time a cpuidle_device is registered -- so essentially all online CPUs get a reference.

This prevents the module from being unloaded later, which makes the module_exit callback entirely unused. Thus remove the @owner and allow the module to be unloaded.

Fixes: fa86ee90eb11 ("add cpuidle-haltpoll driver")
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
-
Submitted by Joao Martins

commit 26ee75559bd10f07b5b99e40e05df5f0d7d7b7ec upstream

When a user loads cpuidle-haltpoll on a non-KVM guest, the module loads successfully even though idle driver registration didn't take place. We should instead return -ENODEV, signaling to the user that the driver can't be loaded, like other error paths in haltpoll_init(). An example of such an error path is when we return -EBUSY when attempting to register an idle driver while another one is already registered (e.g. intel_idle loads at boot and then we attempt to insert the cpuidle-haltpoll module).

Fixes: fa86ee90eb11 ("add cpuidle-haltpoll driver")
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
-
Submitted by Joao Martins

commit c194c2ad74696ce686a2ee1239f44d9750cb5e5c upstream

Right now, guest current governors have the following ratings:

  * ladder            -> 10
  * teo               -> 19
  * menu              -> 20
  * haltpoll          -> 21
  * ladder + nohz=off -> 25

The haltpoll governor was introduced and is now the default governor given its highest rating -- with ladder+nohz being the exception -- regardless of the idle driver in the guest. An example of an undesirable case is x86 KVM guests with MWAIT, which have intel_idle registered first and consequently end up with haltpoll as governor; that limits them to the poll state and state 1, and the other states wouldn't get used.

To keep the previous defaults, we decrease the rating of the governor to 9 (below the current lowest rating) and thus rely on the @governor switch in cpuidle_register_driver() to tie the haltpoll idle driver and governor together.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
-
Submitted by Joao Martins

commit 11c59eae6633b8a7e77b8ee1cf908964d80c78cd upstream

The recently introduced haltpoll driver is largely only useful with the haltpoll governor. To allow drivers to associate with a particular idle behaviour, add a @governor property to 'struct cpuidle_driver' and thus allow a cpuidle driver to switch to a *preferred* governor on idle driver registration. We save the previous governor, and when an idle driver is unregistered we switch back to that.

The @governor can be overridden by the cpuidle.governor= boot parameter, or alternatively be ignored if the governor doesn't exist.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
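Roughly, the mechanism looks like the sketch below (helper names such as cpuidle_find_governor() and the registration body are assumptions for illustration; not the exact upstream code):

```c
/*
 * Hedged sketch: the driver names its preferred governor; registration
 * switches to it and remembers the previous one so unregistration can
 * switch back.
 */
struct cpuidle_driver {
	/* ... existing fields ... */
	const char *governor;		/* preferred governor, e.g. "haltpoll" */
};

static struct cpuidle_governor *prev_governor;

int cpuidle_register_driver(struct cpuidle_driver *drv)
{
	/* ... existing driver registration ... */

	if (drv->governor) {		/* skipped when cpuidle.governor= was given */
		struct cpuidle_governor *gov =
			cpuidle_find_governor(drv->governor);	/* assumed helper */

		if (gov && gov != cpuidle_curr_governor) {
			prev_governor = cpuidle_curr_governor;
			cpuidle_switch_governor(gov);
		}
	}
	return 0;
}
```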
-
Submitted by Marcelo Tosatti

commit 97d3eb9da84cae0548359b0aecb8619faad003b7 upstream

When cpus != maxcpus, cpuidle-haltpoll fails to register all vcpus past the online ones and thus fails to register the idle driver. This is because cpuidle_add_sysfs() returns -ENODEV as a consequence of get_cpu_device() returning no device for a non-existing CPU.

Instead, switch to cpuidle_register_driver() and manually register each of the present CPUs through cpuhp_setup_state() callbacks, as well as future ones that get onlined or offlined. This mimics similar logic that intel_idle does.

Fixes: fa86ee90eb11 ("add cpuidle-haltpoll driver")
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
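A hedged sketch of the registration scheme (simplified; the per-CPU device array and callback bodies are assumptions, only the cpuhp/cpuidle API calls are real):

```c
/*
 * Hedged sketch: register the driver once, then bring each CPU's
 * cpuidle_device up/down from CPU hotplug callbacks so CPUs that come
 * online later are covered too.
 */
static struct cpuidle_device __percpu *haltpoll_cpuidle_devices; /* assumed alloc_percpu() elsewhere */
static enum cpuhp_state haltpoll_hp_state;

static int haltpoll_cpu_online(unsigned int cpu)
{
	struct cpuidle_device *dev = per_cpu_ptr(haltpoll_cpuidle_devices, cpu);

	dev->cpu = cpu;
	return cpuidle_register_device(dev);	/* only called for CPUs that exist */
}

static int haltpoll_cpu_offline(unsigned int cpu)
{
	cpuidle_unregister_device(per_cpu_ptr(haltpoll_cpuidle_devices, cpu));
	return 0;
}

static int __init haltpoll_hotplug_init(void)
{
	int ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "cpuidle/haltpoll:online",
				    haltpoll_cpu_online, haltpoll_cpu_offline);
	if (ret < 0)
		return ret;
	haltpoll_hp_state = ret;	/* dynamic state id, needed for teardown */
	return 0;
}
```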
-
Submitted by Marcelo Tosatti

commit 2cffe9f6b96fece065ee8522673c90e92ef2085d upstream

The cpuidle_haltpoll governor, in conjunction with the haltpoll cpuidle driver, allows guest vcpus to poll for a specified amount of time before halting. This provides the following benefits over host side polling:

1) The POLL flag is set while polling is performed, which allows a remote vCPU to avoid sending an IPI (and the associated cost of handling the IPI) when performing a wakeup.

2) The VM-exit cost can be avoided.

The downside of guest side polling is that polling is performed even with other runnable tasks in the host.

Results comparing halt_poll_ns and a server/client application where a small packet is ping-ponged:

  host                                        --> 31.33
  halt_poll_ns=300000 / no guest busy spin    --> 33.40 (93.8%)
  halt_poll_ns=0 / guest_halt_poll_ns=300000  --> 32.73 (95.7%)

For the SAP HANA benchmarks (where idle_spin is a parameter of the previous version of the patch, results should be the same):

  hpns == halt_poll_ns

                            idle_spin=0/  idle_spin=800/  idle_spin=0/
                            hpns=200000   hpns=0          hpns=800000
  DeleteC06T03 (100 thread) 1.76          1.71 (-3%)      1.78 (+1%)
  InsertC16T02 (100 thread) 2.14          2.07 (-3%)      2.18 (+1.8%)
  DeleteC00T01 (1 thread)   1.34          1.28 (-4.5%)    1.29 (-3.7%)
  UpdateC00T03 (1 thread)   4.72          4.18 (-12%)     4.53 (-5%)

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
-
Submitted by Marcelo Tosatti

commit fa86ee90eb1111267de67cb4272b5ce711f18cbb upstream

Add a cpuidle driver that calls the architecture default_idle routine. To be used in conjunction with the haltpoll governor.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
-
Submitted by Marcelo Tosatti

commit 7d4daeedd575bbc3c40c87fc6708a8b88c50fe7e upstream

Since this field is shared by all governors, move it to the cpuidle device structure.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
-
Submitted by Rafael J. Wysocki

commit eb40a380bff28f84b6583bba6786b46ef26ef548 upstream

It is not necessary to update data->last_state_idx in menu_select(), as it is only used in menu_update(), which only runs when data->needs_update is set, and that is set only when updating data->last_state_idx in menu_reflect().

Accordingly, drop the update of data->last_state_idx from menu_select() and get rid of the (now redundant) "out" label from it.

No intentional behavior changes.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
-
Submitted by Marcelo Tosatti

commit 259231a045616c4101d023a8f4dcc8379af265a6 upstream

Add a poll_limit_ns variable to the cpuidle_device structure. Calculate and configure it in the new cpuidle_poll_time function, in case it is zero. Individual governors are allowed to override this value.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
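The helper works roughly like the sketch below (close to, but not guaranteed to be, the exact upstream code):

```c
/*
 * Hedged sketch of cpuidle_poll_time(): cache the poll limit in the
 * device, defaulting to the target residency of the first enabled state
 * deeper than the polling state (or one tick if none is enabled).
 */
u64 cpuidle_poll_time(struct cpuidle_driver *drv, struct cpuidle_device *dev)
{
	u64 limit_ns;
	int i;

	if (dev->poll_limit_ns)
		return dev->poll_limit_ns;	/* already computed or overridden */

	limit_ns = TICK_NSEC;
	for (i = 1; i < drv->state_count; i++) {
		if (drv->states[i].disabled || dev->states_usage[i].disable)
			continue;

		/* target_residency is in microseconds */
		limit_ns = (u64)drv->states[i].target_residency * NSEC_PER_USEC;
		break;
	}

	dev->poll_limit_ns = limit_ns;
	return dev->poll_limit_ns;
}
```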
-
Submitted by Doug Smythies

commit 1617971c6616c87185cbc78fa1a86dfc70dd16b6 upstream

The default time is declared in units of microseconds, but is used as nanoseconds, resulting in significant accounting errors for idle state 0 time when all idle states deeper than 0 are disabled. Under these unusual conditions, we don't really care about the poll time limit anyhow.

Fixes: 800fb34a99ce ("cpuidle: poll_state: Disregard disable idle states")
Signed-off-by: Doug Smythies <dsmythies@telus.net>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
-
Submitted by Rafael J. Wysocki

commit 61cb5758d3c46bc1ba87694fefc0d9653613ce6b upstream

Add a cpuidle.governor= command line parameter to allow the default cpuidle governor to be replaced. That is useful, for example, if someone running a tickful kernel wants to use the menu governor on it.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
-
Submitted by Rafael J. Wysocki

commit 800fb34a99ce7d22dca839c90f869c7a12b50f70 upstream

When computing the limit of time to spend in the loop in poll_idle(), use the target residency of the first enabled idle state deeper than state 0 instead of always using the target residency of state 1. This helps when state 1 is disabled for diagnostics, for instance.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
-
Submitted by Rafael J. Wysocki

commit 01bad1c6896db021db82042e71c2bf1f97cc026b upstream

If need_resched() returns "false", breaking out of the loop in poll_idle() will cause a new idle state to be selected, so in fact it usually doesn't make sense to spin in it longer than the target residency of the second state. [Note that the "polling" state is used only if there is at least one "real" state defined in addition to it, so the second state is always there.] On the other hand, breaking out of it early (say in case the next state is disabled) shouldn't hurt, as it is polling anyway.

For this reason, make the loop in poll_idle() break if the CPU has been spinning longer than the target residency of the second state (the "polling" state can only be state[0]).

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yihao Wu <wuyihao@linux.alibaba.com>
Acked-by: Michael Wang <yun.wang@linux.alibaba.com>
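A simplified, hedged sketch of the bounded polling loop (the real poll_idle() also manages the TIF polling flag and was later refined further; this is only the core idea):

```c
/*
 * Hedged sketch: spin while nothing is runnable, but give up once we
 * have polled longer than the target residency of the next idle state.
 */
static int poll_idle(struct cpuidle_device *dev,
		     struct cpuidle_driver *drv, int index)
{
	u64 time_start = local_clock();
	/* state[0] is the polling state, so state[1] is the next deeper one */
	u64 limit = (u64)drv->states[1].target_residency * NSEC_PER_USEC;

	local_irq_enable();
	while (!need_resched()) {
		cpu_relax();
		if (local_clock() - time_start > limit)
			break;	/* a real idle state is the better choice now */
	}

	return index;
}
```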
-
Submitted by Cambda Zhu

commit 4fad78ad6422d9bca62135bbed8b6abc4cbb85b8 upstream

This patch fixes the calculation of the queue when we restore flow director filters after resetting the adapter. In ixgbe_fdir_filter_restore(), the filter's vf may be zero, which makes the queue fall outside of the rx_ring array. The calculation is changed to be the same as in ixgbe_add_ethtool_fdir_entry().

Signed-off-by: Cambda Zhu <cambda@linux.alibaba.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Tony Lu <tonylu@linux.alibaba.com>
Acked-by: Dust Li <dust.li@linux.alibaba.com>
-
Submitted by Shirish S

commit 05794eff1aa6060248bfca34ee936c613f94a942 upstream.

Initializing structures with { } is known to be problematic since it doesn't necessarily initialize all bytes, in the case of padding, causing random failures when structures are compared with memcmp(). This patch fixes the structure-initialisation-related compiler error by using memset().

V2: rectified missing piece in coding

Signed-off-by: Shirish S <shirish.s@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Xu Yu <xuyu@linux.alibaba.com>
Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
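A minimal, generic illustration of the problem and the fix (not the driver's actual structures):

```c
/*
 * "= { }" zeroes the members, but padding bytes are indeterminate, so
 * memcmp() of two such structs can differ; memset() zeroes padding too.
 */
#include <string.h>

struct sample {
	char a;		/* typically followed by padding before 'b' */
	long b;
};

static int sample_equal_buggy(void)
{
	struct sample x = { 0 }, y = { 0 };	/* padding not guaranteed zero */
	return memcmp(&x, &y, sizeof(x)) == 0;	/* may be false */
}

static int sample_equal_fixed(void)
{
	struct sample x, y;

	memset(&x, 0, sizeof(x));	/* zeroes members and padding */
	memset(&y, 0, sizeof(y));
	return memcmp(&x, &y, sizeof(x)) == 0;	/* reliably true */
}
```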
-
Submitted by Lorenzo Pieralisi

commit 3e77eeb7a27fc3dcf6b65e7ee01ac00bf5d2b4fb upstream.

Commit 36a2ba07757d ("ACPI/IORT: Reject platform device creation on NUMA node mapping failure") introduced a local variable 'node' in arm_smmu_v3_set_proximity() that shadows the struct acpi_iort_node pointer function parameter. Execution was unaffected, but it is prone to errors and can lead to subtle bugs. Rename the local variable to prevent any issue.

Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
-
Submitted by Zenghui Yu

commit 342be1068d9b5b1fd364d270b4f731764e23de2b upstream.

We try to find a free LPI region in the device's lpi_map and allocate the LPIs (set them to 1) when we want to allocate LPIs for this device. This is what bitmap_find_free_region() has already done for us. The following set_bit is redundant and a bit confusing (since we only set_bit against the first allocated LPI idx). Remove it, and make the set_bit behaviour explicit with a comment.

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
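A generic illustration of the point (not the ITS code itself):

```c
/*
 * bitmap_find_free_region() both finds *and* marks the region as
 * allocated, so a follow-up set_bit() on the returned index is redundant.
 */
#include <linux/bitmap.h>

static int alloc_one_id(unsigned long *map, unsigned int map_bits)
{
	/* Finds a free region of 1 << 0 bits, sets it, returns its index. */
	int idx = bitmap_find_free_region(map, map_bits, 0);

	if (idx < 0)
		return idx;		/* no free region */

	/* No set_bit(idx, map) needed here: the bit is already set. */
	return idx;
}
```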
-
Submitted by Shameer Kolothum

commit 24062fe85860debfdae0eeaa495f27c9971ec163 upstream

HiSilicon erratum 162001800 describes a limitation of the SMMUv3 PMCG implementation on HiSilicon Hip08 platforms. On these platforms, the PMCG event counter registers (SMMU_PMCG_EVCNTRn) are read-only, and as a result it is not possible to set the initial counter period value on event monitor start.

To work around this, the current value of the counter is read and used for delta calculations. OEM information from the ACPI header is used to identify the affected hardware platforms.

Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
[will: update silicon-errata.txt and add reason string to acpi match]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
-
Submitted by Shameer Kolothum

commit f202cdab3b48d8c2c1846c938ea69cb8aa897699 upstream

This adds support for an MSI-based counter overflow interrupt.

Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
-
Submitted by Neil Leeder

commit 7d839b4b9e00645e49345d6ce5dfa8edf53c1a21 upstream

Adds a new driver to support the SMMUv3 PMU and adds it into the perf events framework.

Each SMMU node may have multiple PMUs associated with it, each of which may support different events.

SMMUv3 PMCG devices are named smmuv3_pmcg_<phys_addr_page>, where <phys_addr_page> is the physical page address of the SMMU PMCG wrapped to a 4K boundary. For example, the PMCG at 0xff88840000 is named smmuv3_pmcg_ff88840.

Filtering by stream id is done by specifying filtering parameters with the event. The options are:

  filter_enable    - 0 = no filtering, 1 = filtering enabled
  filter_span      - 0 = exact match, 1 = pattern match
  filter_stream_id - pattern to filter against

Example:

  perf stat -e smmuv3_pmcg_ff88840/transaction,filter_enable=1,filter_span=1,filter_stream_id=0x42/ -a netperf

This applies filter pattern 0x42 to transaction events, which means events matching stream ids 0x42 and 0x43 are counted, as only the upper StreamID bits are required to match the given filter. Further filtering information is available in the SMMU documentation.

SMMU events are not attributable to a CPU, so task mode and sampling are not supported.

Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
[will: fold in review feedback from Robin]
[will: rewrite Kconfig text and allow building as a module]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
-
Submitted by Neil Leeder

commit 24e516049360eda85cf3fe9903221d43886c2689 upstream.

Add support for the SMMU Performance Monitor Counter Group information from ACPI. This is in preparation for its use in the SMMUv3 PMU driver.

Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
-
Submitted by Ganapatrao Kulkarni

commit c4b17afb0a4e8d042320efaf2acf55cb26795f78 upstream.

Change function __iommu_dma_alloc_pages() to allocate pages for DMA from the respective device's NUMA node. The ternary operator which would be used for alloc_pages_node() is tidied along with this.

The motivation for this change is to have a policy for page allocation consistent with direct DMA mapping, which attempts to allocate pages local to the device, as mentioned in [1]. In addition, a marginal performance improvement has been observed for certain workloads: the patch yielded a 0.9% average throughput improvement when running tcrypt with the HiSilicon crypto engine.

We also include a modification to use kvzalloc() for the kzalloc()/vzalloc() combination.

[1] https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1692998.html

Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
[JPG: Added kvzalloc(), drop pages ** being device local, remove ternary operator, update message]
Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: Xunlei Pang <xlpang@linux.alibaba.com>
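A hedged sketch of the allocation policy (simplified helper for illustration, not the exact upstream diff; error unwinding omitted):

```c
/*
 * Hedged sketch: allocate the page array with kvzalloc() and the DMA
 * pages on the device's own NUMA node.
 */
static struct page **alloc_dev_local_pages(struct device *dev,
					   unsigned int count, gfp_t gfp)
{
	int nid = dev_to_node(dev);		/* NUMA node the device sits on */
	struct page **pages;
	unsigned int i;

	pages = kvzalloc(count * sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	for (i = 0; i < count; i++) {
		/* order-0 allocation, local to the device's node */
		pages[i] = alloc_pages_node(nid, gfp, 0);
		if (!pages[i])
			break;	/* real code would unwind the partial allocation */
	}
	return pages;
}
```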
-
Submitted by David Hildenbrand

commit d15e59260f62bd5e0f625cf5f5240f6ffac78ab6 upstream

Patch series "mm: online/offline_pages called w.o. mem_hotplug_lock", v3.

Reading through the code and studying how mem_hotplug_lock is to be used, I noticed that there are two places where we can end up calling device_online()/device_offline() - online_pages()/offline_pages() without the mem_hotplug_lock. And there are other places where we call device_online()/device_offline() without the device_hotplug_lock.

While e.g.

  echo "online" > /sys/devices/system/memory/memory9/state

is fine, e.g.

  echo 1 > /sys/devices/system/memory/memory9/online

will not take the mem_hotplug_lock. It does, however, take the device_lock() and device_hotplug_lock.

E.g. via memory_probe_store(), we can end up calling add_memory()->online_pages() without the device_hotplug_lock. So we can have concurrent callers in online_pages(), where we then e.g. touch zone->present_pages basically unprotected.

Looks like there is a longer history to that (see Patch #2 for details), and fixing it to work the way it was intended is not really possible. We would e.g. have to take the mem_hotplug_lock in device/base/core.c, which sounds wrong.

Summary: We had a lock inversion on mem_hotplug_lock and device_lock(). More details can be found in patch 3 and patch 6.

I propose the general rules (documentation added in patch 6):

1. add_memory/add_memory_resource() must only be called with device_hotplug_lock.
2. remove_memory() must only be called with device_hotplug_lock. This is already documented and holds for all callers.
3. device_online()/device_offline() must only be called with device_hotplug_lock. This is already documented and true for now in core code. Other callers (related to memory hotplug) have to be fixed up.
4. mem_hotplug_lock is taken inside of add_memory/remove_memory/online_pages/offline_pages.

To me, this looks way cleaner than what we have right now (and easier to verify). And looking at the documentation of remove_memory, using lock_device_hotplug also for add_memory() feels natural.

This patch (of 6):

remove_memory() is exported right now but requires the device_hotplug_lock, which is not exported. So let's provide a variant that takes the lock and only export that one.

The lock is already held in
  arch/powerpc/platforms/pseries/hotplug-memory.c
  drivers/acpi/acpi_memhotplug.c
  arch/powerpc/platforms/powernv/memtrace.c

Apart from that, there are no other users in the tree.

Link: http://lkml.kernel.org/r/20180925091457.28651-2-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Rashmica Gupta <rashmica.g@gmail.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Len Brown <lenb@kernel.org>
Cc: Rashmica Gupta <rashmica.g@gmail.com>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Cc: John Allen <jallen@linux.vnet.ibm.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: YASUAKI ISHIMATSU <yasu.isimatu@gmail.com>
Cc: Mathieu Malaterre <malat@debian.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Philippe Ombredanne <pombredanne@nexb.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: yinhe <yinhe@linux.alibaba.com>
Reviewed-by: Yang Shi <yang.shi@linux.alibaba.com>
-
Submitted by Christoph Hellwig

commit 9d6610b76fa374eae3deb93bcbace4a06c2e3b95 upstream.

The ->poll_fn has been stale for a while, as a lot of places check for mq ops. But there is no real point in it anyway, as we don't even use the multipath code for subsystems without multiple ports, which is usually what we do high-performance I/O to. If it really becomes an issue we should rework the nvme code to also skip the multipath code for any private namespace, even if that could mean some trouble when rescanning.

Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Shile Zhang <shile.zhang@linux.alibaba-inc.com>
Reviewed-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
-
Submitted by David Howells

commit 1fcb748d187d0c7732a75a509e924ead6d070e04 upstream.

Remove the undefinition of READ and WRITE because these constants may be used elsewhere in subsequently included header files, thus breaking them. These constants don't actually appear to be used in the driver, so the undefinition seems pointless.

Fixes: 4562236b ("drm/amd/dc: Add dc display driver (v2)")
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Shile Zhang <shile.zhang@linux.alibaba.com>
Acked-by: Joseph Qi <joseph.qi@linux.alibaba.com>
-
Submitted by zhengbin

commit 70fc085c5015c54a7b8742a45fc9ab05d6da90da upstream.

Use dd to test a SCSI device:

1. echo "blocked" > /sys/block/sda/device/state
2. dd if=/dev/sda of=/mnt/t.log bs=1M count=10
3. echo "running" > /sys/block/sda/device/state

dd should finish this work after step 3, but it hangs.

After step 2, the call chain is this:

  blk_mq_dispatch_rq_list --> scsi_queue_rq --> prep_to_mq

prep_to_mq returns BLK_STS_RESOURCE, and scsi_queue_rq transitions it to BLK_STS_DEV_RESOURCE, which means the driver guarantees that I/O dispatch will be triggered in the future when the resource is available. We need to follow that rule when we set the device state to running.

[mkp: tweaked commit description and code comment as suggested by Bart]

Signed-off-by: zhengbin <zhengbin13@huawei.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
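A hedged sketch of the fix (simplified sysfs store path with an assumed signature; not the exact upstream diff):

```c
/*
 * Hedged sketch: requests parked with BLK_STS_DEV_RESOURCE are only
 * redispatched if someone reruns the queue, so kick it when the device
 * transitions back to SDEV_RUNNING via sysfs.
 */
static int example_set_running(struct scsi_device *sdev)
{
	int ret = scsi_device_set_state(sdev, SDEV_RUNNING);

	if (ret)
		return ret;

	/* Rerun the hardware queues so I/O requeued while the device was
	 * "blocked" gets dispatched again instead of hanging forever. */
	blk_mq_run_hw_queues(sdev->request_queue, true);
	return 0;
}
```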
-
Submitted by Stephen Boyd

commit 8ab5e82afa969b65b286d8949c12d2a64c83960c upstream.

Cr50 firmware has a different flow control protocol from the one used by this TPM PTP SPI driver. Introduce a flow control callback so we can override the standard sequence with the custom one that Cr50 uses.

Cc: Andrey Pronin <apronin@chromium.org>
Cc: Duncan Laurie <dlaurie@chromium.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Guenter Roeck <groeck@chromium.org>
Cc: Alexander Steffen <Alexander.Steffen@infineon.com>
Cc: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Stephen Boyd <swboyd@chromium.org>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Xunlei Pang <xlpang@linux.alibaba.com>
-
Submitted by Torsten Duwe

commit e1a7eafb7350ff118e7538aca76b3f79e9280a8f upstream.

In preparation for arm64 supporting ftrace built with other compiler options, let's have Makefiles remove the $(CC_FLAGS_FTRACE) flags, whatever these may be, rather than assuming '-pg'.

While at it, fix arm32 as well. There should be no functional change as a result of this patch.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Acked-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
-
Submitted by Geert Uytterhoeven

[ Upstream commit 1723fdec5fcbc4de3d26bbb23a9e1704ee258955 ]

While devm_gpiod_get_index_optional() returns NULL if the GPIO is not present (i.e. -ENOENT), it may still return other error codes, like -EPROBE_DEFER. Currently these are not handled, leading to unrecoverable failures later in case of probe deferral:

  gpiod_set_consumer_name: invalid GPIO (errorpointer)
  gpiod_direction_output: invalid GPIO (errorpointer)
  gpiod_set_value_cansleep: invalid GPIO (errorpointer)
  gpiod_set_value_cansleep: invalid GPIO (errorpointer)
  gpiod_set_value_cansleep: invalid GPIO (errorpointer)

Detect and propagate errors to fix this.

Fixes: f3186dd876697e69 ("spi: Optionally use GPIO descriptors for CS GPIOs")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Zou Cao <zoucao@linux.alibaba.com>
Reviewed-by: Baoyou Xie <xie.baoyou@linux.alibaba.com>
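The error-handling pattern looks roughly like this (generic probe fragment for illustration, not the exact upstream diff):

```c
/*
 * "Optional" gpiod getters return NULL only for -ENOENT; other errors
 * such as -EPROBE_DEFER must still be detected and propagated.
 */
static int example_get_cs_gpio(struct device *dev, unsigned int idx,
			       struct gpio_desc **out)
{
	struct gpio_desc *cs = devm_gpiod_get_index_optional(dev, "cs", idx,
							     GPIOD_OUT_LOW);

	if (IS_ERR(cs))
		return PTR_ERR(cs);	/* e.g. -EPROBE_DEFER: retry probe later */

	*out = cs;			/* may be NULL: GPIO simply not present */
	return 0;
}
```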
-