- 05 Oct 2019, 40 commits
-
Committed by Phil Auld
[ Upstream commit a46d14eca7b75fffe35603aa8b81df654353d80f ] Enabling WARN_DOUBLE_CLOCK in /sys/kernel/debug/sched_features causes warning to fire in update_rq_clock. This seems to be caused by onlining a new fair sched group not using the rq lock wrappers. [] rq->clock_update_flags & RQCF_UPDATED [] WARNING: CPU: 5 PID: 54385 at kernel/sched/core.c:210 update_rq_clock+0xec/0x150 [] Call Trace: [] online_fair_sched_group+0x53/0x100 [] cpu_cgroup_css_online+0x16/0x20 [] online_css+0x1c/0x60 [] cgroup_apply_control_enable+0x231/0x3b0 [] cgroup_mkdir+0x41b/0x530 [] kernfs_iop_mkdir+0x61/0xa0 [] vfs_mkdir+0x108/0x1a0 [] do_mkdirat+0x77/0xe0 [] do_syscall_64+0x55/0x1d0 [] entry_SYSCALL_64_after_hwframe+0x44/0xa9 Using the wrappers in online_fair_sched_group instead of the raw locking removes this warning. [ tglx: Use rq_*lock_irq() ] Signed-off-by: NPhil Auld <pauld@redhat.com> Signed-off-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: NThomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Vincent Guittot <vincent.guittot@linaro.org> Cc: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20190801133749.11033-1-pauld@redhat.comSigned-off-by: NSasha Levin <sashal@kernel.org>
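For reference, the shape of the fix is to take the rq lock through the wrappers that track the clock-update flags instead of the raw spinlock; a sketch of online_fair_sched_group() along those lines (close to the upstream code but reproduced from memory, so treat details as approximate):

    void online_fair_sched_group(struct task_group *tg)
    {
            struct sched_entity *se;
            struct rq_flags rf;
            struct rq *rq;
            int i;

            for_each_possible_cpu(i) {
                    rq = cpu_rq(i);
                    se = tg->se[i];
                    rq_lock_irq(rq, &rf);           /* wrapper pins the rq clock state */
                    update_rq_clock(rq);
                    attach_entity_cfs_rq(se);
                    sync_throttle(tg, i);
                    rq_unlock_irq(rq, &rf);         /* instead of raw_spin_unlock_irq(&rq->lock) */
            }
    }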
-
Committed by Sudeep Holla
[ Upstream commit 9dc34d635c67e57051853855c43249408641a5ab ] Sometimes the platform may take too long to respond to a command and the OS might time out before the platform transfers ownership of the shared memory region back to the OS with the response. Since the mailbox channel associated with the command is freed and new commands are dispatched on the same channel, the OS needs to wait until it gets the ownership back. If not, either the OS may end up overwriting the platform's response for the last command (which is fine, as the OS timed out that command), or the platform might overwrite the payload for the next command with the response for the old one. The latter is problematic, as the platform may end up interpreting the response as the payload. In order to avoid such a race, let's wait until the OS gets the ownership back before we prepare the shared memory with the payload for the next command. Reported-by: NJim Quinlan <james.quinlan@broadcom.com> Signed-off-by: NSudeep Holla <sudeep.holla@arm.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Xiaofei Tan
[ Upstream commit b194a77fcc4001dc40aecdd15d249648e8a436d1 ] AER info for a PCIe fatal error is not printed by the current driver, because the APEI driver panics directly on a fatal error and never reaches the code that prints the AER info. An example log is as follows: {763}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 11 {763}[Hardware Error]: event severity: fatal {763}[Hardware Error]: Error 0, type: fatal {763}[Hardware Error]: section_type: PCIe error {763}[Hardware Error]: port_type: 0, PCIe end point {763}[Hardware Error]: version: 4.0 {763}[Hardware Error]: command: 0x0000, status: 0x0010 {763}[Hardware Error]: device_id: 0000:82:00.0 {763}[Hardware Error]: slot: 0 {763}[Hardware Error]: secondary_bus: 0x00 {763}[Hardware Error]: vendor_id: 0x8086, device_id: 0x10fb {763}[Hardware Error]: class_code: 000002 Kernel panic - not syncing: Fatal hardware error! This issue was introduced by commit 37448adf ("aerdrv: Move cper_print_aer() call out of interrupt context"). To fix it, this patch adds printing of the AER info in cper_print_pcie() for fatal errors. Here is an example log with this patch applied: {24}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 10 {24}[Hardware Error]: event severity: fatal {24}[Hardware Error]: Error 0, type: fatal {24}[Hardware Error]: section_type: PCIe error {24}[Hardware Error]: port_type: 0, PCIe end point {24}[Hardware Error]: version: 4.0 {24}[Hardware Error]: command: 0x0546, status: 0x4010 {24}[Hardware Error]: device_id: 0000:01:00.0 {24}[Hardware Error]: slot: 0 {24}[Hardware Error]: secondary_bus: 0x00 {24}[Hardware Error]: vendor_id: 0x15b3, device_id: 0x1019 {24}[Hardware Error]: class_code: 000002 {24}[Hardware Error]: aer_uncor_status: 0x00040000, aer_uncor_mask: 0x00000000 {24}[Hardware Error]: aer_uncor_severity: 0x00062010 {24}[Hardware Error]: TLP Header: 000000c0 01010000 00000001 00000000 Kernel panic - not syncing: Fatal hardware error! Fixes: 37448adf ("aerdrv: Move cper_print_aer() call out of interrupt context") Signed-off-by: NXiaofei Tan <tanxiaofei@huawei.com> Reviewed-by: NJames Morse <james.morse@arm.com> [ardb: put parens around terms of && operator] Signed-off-by: NArd Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Stephen Douthit
[ Upstream commit 29a3388bfcce7a6d087051376ea02bf8326a957b ] Depending on how BIOS has marked the reserved region containing the 32KB MCHBAR you can get warnings like: resource sanity check: requesting [mem 0xfed10000-0xfed1ffff], which spans more than reserved [mem 0xfed10000-0xfed17fff] caller dnv_rd_reg+0xc8/0x240 [pnd2_edac] mapping multiple BARs Not all of the mmio regions used in dnv_rd_reg() are the same size. The MCHBAR window is 32KB and the sideband ports are 64KB. Pass the correct size to ioremap() depending on which resource we're reading from. Signed-off-by: NStephen Douthit <stephend@silicom-usa.com> Signed-off-by: NTony Luck <tony.luck@intel.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Alessio Balsini
[ Upstream commit fdbe4eeeb1aac219b14f10c0ed31ae5d1123e9b8 ] Enabling Direct I/O with loop devices helps reduce memory usage by avoiding double caching. 32-bit applications running on 64-bit systems are currently not able to request direct I/O because LOOP_SET_DIRECT_IO is missing from lo_compat_ioctl(). This patch fixes the compatibility issue mentioned above by exporting LOOP_SET_DIRECT_IO as an additional lo_compat_ioctl() entry. The input argument for this ioctl is a single long converted to a 1-bit boolean, so compatibility is preserved. Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: NAlessio Balsini <balsini@android.com> Signed-off-by: NJens Axboe <axboe@kernel.dk> Signed-off-by: NSasha Levin <sashal@kernel.org>
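A rough sketch of the compat entry point with the added case; the argument is forwarded unchanged because it is a plain long, so no compat_ptr() translation is needed (surrounding cases abbreviated, details approximate):

    static int lo_compat_ioctl(struct block_device *bdev, fmode_t mode,
                               unsigned int cmd, unsigned long arg)
    {
            int err;

            switch (cmd) {
            /* ... existing compat cases ... */
            case LOOP_SET_DIRECT_IO:
                    err = lo_ioctl(bdev, mode, cmd, arg);   /* forward as-is */
                    break;
            default:
                    err = -ENOIOCTLCMD;
                    break;
            }
            return err;
    }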
-
Committed by Jiri Slaby
[ Upstream commit 2c2b005f549544c13ef4cfb0e4842949066889bc ] Some platforms define their processors in this manner: Device (SCK0) { Name (_HID, "ACPI0004" /* Module Device */) // _HID: Hardware ID Name (_UID, "CPUSCK0") // _UID: Unique ID Processor (CP00, 0x00, 0x00000410, 0x06){} Processor (CP01, 0x02, 0x00000410, 0x06){} Processor (CP02, 0x04, 0x00000410, 0x06){} Processor (CP03, 0x06, 0x00000410, 0x06){} Processor (CP04, 0x01, 0x00000410, 0x06){} Processor (CP05, 0x03, 0x00000410, 0x06){} Processor (CP06, 0x05, 0x00000410, 0x06){} Processor (CP07, 0x07, 0x00000410, 0x06){} Processor (CP08, 0xFF, 0x00000410, 0x06){} Processor (CP09, 0xFF, 0x00000410, 0x06){} Processor (CP0A, 0xFF, 0x00000410, 0x06){} Processor (CP0B, 0xFF, 0x00000410, 0x06){} ... The processors marked as 0xff are invalid, there are only 8 of them in this case. So do not print an error on ids == 0xff, just print an info message. Actually, we could return ENODEV even on the first CPU with ID 0xff, but ACPI spec does not forbid the 0xff value to be a processor ID. Given 0xff could be a correct one, we would break working systems if we returned ENODEV. Signed-off-by: NJiri Slaby <jslaby@suse.cz> Signed-off-by: NRafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Randy Dunlap
[ Upstream commit 6898dd580a045341f844862ceb775144156ec1af ] arch/microblaze/ defines out_be32() and in_be32(), so don't do that again in the driver source. Fixes these build warnings: ../drivers/media/platform/fsl-viu.c:36: warning: "out_be32" redefined ../arch/microblaze/include/asm/io.h:50: note: this is the location of the previous definition ../drivers/media/platform/fsl-viu.c:37: warning: "in_be32" redefined ../arch/microblaze/include/asm/io.h:53: note: this is the location of the previous definition Fixes: 29d75068 ("media: fsl-viu: allow building it with COMPILE_TEST") Signed-off-by: NRandy Dunlap <rdunlap@infradead.org> Signed-off-by: NHans Verkuil <hverkuil-cisco@xs4all.nl> Signed-off-by: NMauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Guoqing Jiang
[ Upstream commit 062f5b2ae12a153644c765e7ba3b0f825427be1d ] When a disk is added to an array, the following path is called in mdadm: Manage_subdevs -> sysfs_freeze_array -> Manage_add -> sysfs_set_str(&info, NULL, "sync_action","idle"). Then, on the kernel side, Manage_add invokes the path (add_new_disk -> validate_super = super_1_validate) to set the In_sync flag. Since In_sync means "device is in_sync with rest of array", the newly added disk needs the resync thread to synchronize its data first, and md_reap_sync_thread will call spare_active to finally set In_sync for the newly added disk. So don't set In_sync if the array is frozen. Signed-off-by: NGuoqing Jiang <guoqing.jiang@cloud.ionos.com> Signed-off-by: NSong Liu <songliubraving@fb.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Guoqing Jiang
[ Upstream commit 0d8ed0e9bf9643f27f4816dca61081784dedb38d ] When adding a disk to an array, md_reap_sync_thread() is responsible for activating the spare and setting the In_sync flag for the new member in spare_active(). But suppose raid1 has one member, disk A, and disk B is added to the array; if we take A offline before all the data has been synchronized from A to B, B obviously does not have the latest data that A had, yet B is still marked with the In_sync flag. So let's not call spare_active in that case, otherwise B is still shown in the 'U' state, which is not correct. Signed-off-by: NGuoqing Jiang <guoqing.jiang@cloud.ionos.com> Signed-off-by: NSong Liu <songliubraving@fb.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Yufen Yu
[ Upstream commit eeba6809d8d58908b5ed1b5ceb5fcb09a98a7cad ] When a write bio returns an error, it is added to conf->retry_list and waits for the raid1d thread to retry the write and acknowledge badblocks. In narrow_write_error(), the error bio is split in units of the badblock shift (such as one sector) and the raid1d thread issues the pieces one by one. Only after all of the split bios have finished can the raid1d thread go on to process other things, which is time consuming. But there is one error-handling case that is not necessary. When the device has been set faulty, flush_bio_list() may end bios in pending_bio_list with an error status. Since these bios have not actually been issued to the device, the error handling to retry the write and acknowledge badblocks makes no sense. Even apart from that case, when the device is faulty, badblocks info cannot be written out to the device, so there is also no need to handle the error IO. Signed-off-by: NYufen Yu <yuyufen@huawei.com> Signed-off-by: NSong Liu <songliubraving@fb.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Qian Cai
[ Upstream commit b99286b088ea843b935dcfb29f187697359fe5cd ] The commit d5370f75 ("arm64: prefetch: add alternative pattern for CPUs without a prefetcher") introduced MIDR_IS_CPU_MODEL_RANGE() to be used in has_no_hw_prefetch() with rv_min=0 which generates a compilation warning from GCC, In file included from ./arch/arm64/include/asm/cache.h:8, from ./include/linux/cache.h:6, from ./include/linux/printk.h:9, from ./include/linux/kernel.h:15, from ./include/linux/cpumask.h:10, from arch/arm64/kernel/cpufeature.c:11: arch/arm64/kernel/cpufeature.c: In function 'has_no_hw_prefetch': ./arch/arm64/include/asm/cputype.h:59:26: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits] _model == (model) && rv >= (rv_min) && rv <= (rv_max); \ ^~ arch/arm64/kernel/cpufeature.c:889:9: note: in expansion of macro 'MIDR_IS_CPU_MODEL_RANGE' return MIDR_IS_CPU_MODEL_RANGE(midr, MIDR_THUNDERX, ^~~~~~~~~~~~~~~~~~~~~~~ Fix it by converting MIDR_IS_CPU_MODEL_RANGE to a static inline function. Signed-off-by: NQian Cai <cai@lca.pw> Signed-off-by: NWill Deacon <will@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
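The conversion that silences -Wtype-limits is a static inline with properly typed parameters; a sketch assuming the mask names from arch/arm64/include/asm/cputype.h:

    static inline bool midr_is_cpu_model_range(u32 midr, u32 model,
                                               u32 rv_min, u32 rv_max)
    {
            u32 _model = midr & MIDR_CPU_MODEL_MASK;
            u32 rv = midr & (MIDR_REVISION_MASK | MIDR_VARIANT_MASK);

            /* rv_min is now an ordinary function argument, so the compiler
             * no longer warns about a tautological 'rv >= 0' comparison.
             */
            return _model == model && rv >= rv_min && rv <= rv_max;
    }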
-
Committed by Kuninori Morimoto
[ Upstream commit 06e8f5c842f2dbb232897ba967ea7b422745c271 ] ADG is using clk_get_rate() in atomic context, so we might hit a scheduling issue. To avoid it, we need to get and keep the clk rate in non-atomic context, which means handling ADG as a special device in the Renesas sound driver. From a SW point of view, we would like to implement this as rsnd_mod_ops :: prepare, but that just complicates the code. To avoid a complicated code/patch, this patch adds a new clk_rate[] array and keeps the clk IN rate when rsnd_adg_clk_enable() is called. Reported-by: NLeon Kong <Leon.KONG@cn.bosch.com> Signed-off-by: NKuninori Morimoto <kuninori.morimoto.gx@renesas.com> Tested-by: NLeon Kong <Leon.KONG@cn.bosch.com> Link: https://lore.kernel.org/r/87v9vb0xkp.wl-kuninori.morimoto.gx@renesas.com Signed-off-by: NMark Brown <broonie@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Dan Carpenter
[ Upstream commit 8faa1cf6ed82f33009f63986c3776cc48af1b7b2 ] Smatch complains about the cast of a u32 pointer to unsigned long: drivers/edac/altera_edac.c:1878 altr_edac_a10_irq_handler() warn: passing casted pointer '&irq_status' to 'find_first_bit()' This code wouldn't work on a 64 bit big endian system because it would read past the end of &irq_status. [ bp: massage. ] Fixes: 13ab8448 ("EDAC, altera: Add ECC Manager IRQ controller support") Signed-off-by: NDan Carpenter <dan.carpenter@oracle.com> Signed-off-by: NBorislav Petkov <bp@suse.de> Reviewed-by: NThor Thayer <thor.thayer@linux.intel.com> Cc: James Morse <james.morse@arm.com> Cc: kernel-janitors@vger.kernel.org Cc: linux-edac <linux-edac@vger.kernel.org> Cc: Mauro Carvalho Chehab <mchehab@kernel.org> Cc: Tony Luck <tony.luck@intel.com> Link: https://lkml.kernel.org/r/20190624134717.GA1754@mwandaSigned-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by chenzefeng
[ Upstream commit c5e5c48c16422521d363c33cfb0dcf58f88c119b ] The function free_module() in kernel/module.c looks as follows: void free_module(struct module *mod) { ...... module_arch_cleanup(mod); ...... module_arch_freeing_init(mod); ...... } Both module_arch_cleanup() and module_arch_freeing_init() free mod->arch.init_unw_table, which causes a double free. Here, set mod->arch.init_unw_table = NULL after removing the unwind table to avoid the double free. Signed-off-by: Nchenzefeng <chenzefeng2@huawei.com> Signed-off-by: NTony Luck <tony.luck@intel.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
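Conceptually the fix is just to clear the pointer after the table is removed, so the second free path sees NULL; a hedged sketch of the ia64 cleanup hook (placement and surrounding code approximated):

    void module_arch_cleanup(struct module *mod)
    {
            if (mod->arch.init_unw_table) {
                    unw_remove_unwind_table(mod->arch.init_unw_table);
                    mod->arch.init_unw_table = NULL;    /* module_arch_freeing_init() now finds nothing to free */
            }
            if (mod->arch.core_unw_table) {
                    unw_remove_unwind_table(mod->arch.core_unw_table);
                    mod->arch.core_unw_table = NULL;
            }
    }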
-
Committed by Ard van Breemen
[ Upstream commit 1b34121d9f26d272b0b2334209af6b6fc82d4bf1 ] The Linux kernel assumes that get_endpoint(alts,0) and get_endpoint(alts,1) are each other's feedback endpoints. To verify that assumption it tests bSynchAddress for compliance. But if bSynchAddress is 0 (invalid), it flags that as a wrong assumption and returns an error. Fix: skip the test if bSynchAddress is 0. Note: devices with a valid bSynchAddress should have a code quirk added. Signed-off-by: NArd van Breemen <ard@kwaak.net> Signed-off-by: NTakashi Iwai <tiwai@suse.de> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Vinod Koul
[ Upstream commit f7ccc7a397cf2ef64aebb2f726970b93203858d2 ] Qcom Socinfo driver can be built as a module, so export these two APIs. Tested-by: NVinod Koul <vkoul@kernel.org> Signed-off-by: NVinod Koul <vkoul@kernel.org> Signed-off-by: NVaishali Thakkar <vaishali.thakkar@linaro.org> Reviewed-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: NStephen Boyd <swboyd@chromium.org> Reviewed-by: NBjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: NBjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Oliver Neukum
[ Upstream commit ab1cbdf159beba7395a13ab70bc71180929ca064 ] The driver needs to check the endpoint types, too, as opposed to the number of endpoints. This also requires moving the check earlier. Reported-by: syzbot+01a77b82edaa374068e1@syzkaller.appspotmail.com Signed-off-by: NOliver Neukum <oneukum@suse.com> Signed-off-by: NSean Young <sean@mess.org> Signed-off-by: NMauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Robert Richter
[ Upstream commit 3724ace582d9f675134985727fd5e9811f23c059 ] The grain in EDAC is defined as "minimum granularity for an error report, in bytes". The following calculation of the grain_bits in edac_mc is wrong: grain_bits = fls_long(e->grain) + 1; Where grain_bits is defined as: grain = 1 << grain_bits Example: grain = 8 # 64 bit (8 bytes) grain_bits = fls_long(8) + 1 grain_bits = 4 + 1 = 5 grain = 1 << grain_bits grain = 1 << 5 = 32 Replace it with the correct calculation: grain_bits = fls_long(e->grain - 1); The example gives now: grain_bits = fls_long(8 - 1) grain_bits = fls_long(7) grain_bits = 3 grain = 1 << 3 = 8 Also, check if the hardware reports a reasonable grain != 0 and fallback with a warning to 1 byte granularity otherwise. [ bp: massage a bit. ] Signed-off-by: NRobert Richter <rrichter@marvell.com> Signed-off-by: NBorislav Petkov <bp@suse.de> Cc: "linux-edac@vger.kernel.org" <linux-edac@vger.kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Mauro Carvalho Chehab <mchehab@kernel.org> Cc: Tony Luck <tony.luck@intel.com> Link: https://lkml.kernel.org/r/20190624150758.6695-2-rrichter@marvell.comSigned-off-by: NSasha Levin <sashal@kernel.org>
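In code form, the corrected computation plus the fallback reads roughly like this (a sketch following the commit text, not a verbatim copy of edac_mc.c):

    /* grain is expected to be a power of two, so that grain == 1 << grain_bits */
    if (WARN(!e->grain, "edac: invalid grain reported, assuming 1 byte\n"))
            e->grain = 1;                   /* fall back to 1 byte granularity */

    grain_bits = fls_long(e->grain - 1);    /* e.g. grain 8 -> fls_long(7) == 3 -> 1 << 3 == 8 */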
-
Committed by Jia-Ju Bai
[ Upstream commit 2127c01b7f63b06a21559f56a8c81a3c6535bd1a ] In build_adc_controls(), there is an if statement on line 773 to check whether ak->adc_info is NULL: if (! ak->adc_info || ! ak->adc_info[mixer_ch].switch_name) When ak->adc_info is NULL, it is used on line 792: knew.name = ak->adc_info[mixer_ch].selector_name; Thus, a possible null-pointer dereference may occur. To fix this bug, referring to lines 773 and 774, ak->adc_info and ak->adc_info[mixer_ch].selector_name are checked before being used. This bug is found by a static analysis tool STCheck written by us. Signed-off-by: NJia-Ju Bai <baijiaju1990@gmail.com> Signed-off-by: NTakashi Iwai <tiwai@suse.de> Signed-off-by: NSasha Levin <sashal@kernel.org>
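The shape of the fix is simply to apply the same NULL test before the later dereference; a simplified sketch, not the literal driver code (the fallback name is a placeholder):

    if (!ak->adc_info ||
        !ak->adc_info[mixer_ch].selector_name) {
            /* avoid dereferencing a NULL adc_info */
            knew.name = "Capture Channel";          /* placeholder for illustration */
    } else {
            knew.name = ak->adc_info[mixer_ch].selector_name;
    }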
-
Committed by Takashi Iwai
[ Upstream commit dd65f7e19c6961ba6a69f7c925021b7a270cb950 ] The last fallback of CORB/RIRB communication error recovery is to turn on the single command mode, and this last resort usually means that something is really screwed up. Instead of a normal dev_err(), show the error more clearly with dev_WARN() with the caller stack trace. Also, show the bus-reset fallback also as an error, too. Signed-off-by: NTakashi Iwai <tiwai@suse.de> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Thomas Gleixner
[ Upstream commit 2640da4cccf5cc613bf26f0998b9e340f4b5f69c ] If the APIC was already enabled on entry of setup_local_APIC() then disabling it soft via the SPIV register makes a lot of sense. That masks all LVT entries and brings it into a well defined state. Otherwise previously enabled LVTs which are not touched in the setup function stay unmasked and might surprise the just booting kernel. Signed-off-by: NThomas Gleixner <tglx@linutronix.de> Acked-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20190722105219.068290579@linutronix.deSigned-off-by: NSasha Levin <sashal@kernel.org>
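Soft-disabling through SPIV is only a few lines; a sketch of the idea with the standard APIC accessors (the real setup_local_APIC() does much more around it):

    unsigned int value;

    /* The APIC may have been left enabled by firmware or a kexec'd kernel:
     * soft-disable it so every LVT entry is masked until setup re-enables it.
     */
    value = apic_read(APIC_SPIV);
    value &= ~APIC_SPIV_APIC_ENABLED;
    apic_write(APIC_SPIV, value);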
-
Committed by Grzegorz Halat
[ Upstream commit 747d5a1bf293dcb33af755a6d285d41b8c1ea010 ] A reboot request sends an IPI via the reboot vector and waits for all other CPUs to stop. If one or more CPUs are in critical regions with interrupts disabled then the IPI is not handled on those CPUs and the shutdown hangs if native_stop_other_cpus() is called with the wait argument set. Such a situation can happen when one CPU was stopped within a lock held section and another CPU is trying to acquire that lock with interrupts disabled. There are other scenarios which can cause such a lockup as well. In theory the shutdown should be attempted by an NMI IPI after the timeout period elapsed. Though the wait loop after sending the reboot vector IPI prevents this. It checks the wait request argument and the timeout. If wait is set, which is true for sys_reboot() then it won't fall through to the NMI shutdown method after the timeout period has finished. This was an oversight when the NMI shutdown mechanism was added to handle the 'reboot IPI is not working' situation. The mechanism was added to deal with stuck panic shutdowns, which do not have the wait request set, so the 'wait request' case was probably not considered. Remove the wait check from the post reboot vector IPI wait loop and enforce that the wait loop in the NMI fallback path is invoked even if NMI IPIs are disabled or the registration of the NMI handler fails. That second wait loop will then hang if not all CPUs shutdown and the wait argument is set. [ tglx: Avoid the hard to parse line break in the NMI fallback path, add comments and massage the changelog ] Fixes: 7d007d21 ("x86/reboot: Use NMI to assist in shutting down if IRQ fails") Signed-off-by: NGrzegorz Halat <ghalat@redhat.com> Signed-off-by: NThomas Gleixner <tglx@linutronix.de> Cc: Don Zickus <dzickus@redhat.com> Link: https://lkml.kernel.org/r/20190628122813.15500-1-ghalat@redhat.comSigned-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Juri Lelli
[ Upstream commit 59d06cea1198d665ba11f7e8c5f45b00ff2e4812 ] If a task happens to be throttled while the CPU it was running on gets hotplugged off, the bandwidth associated with the task is not correctly migrated with it when the replenishment timer fires (offline_migration). Fix things up, for this_bw, running_bw and total_bw, when replenishment timer fires and task is migrated (dl_task_offline_migration()). Tested-by: NDietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: NJuri Lelli <juri.lelli@redhat.com> Signed-off-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bristot@redhat.com Cc: claudio@evidence.eu.com Cc: lizefan@huawei.com Cc: longman@redhat.com Cc: luca.abeni@santannapisa.it Cc: mathieu.poirier@linaro.org Cc: rostedt@goodmis.org Cc: tj@kernel.org Cc: tommaso.cucinotta@santannapisa.it Link: https://lkml.kernel.org/r/20190719140000.31694-5-juri.lelli@redhat.comSigned-off-by: NIngo Molnar <mingo@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Thomas Gleixner
[ Upstream commit cc8bf191378c1da8ad2b99cf470ee70193ace84e ] In the course of developing shorthand-based IPI support, issues with the function which tries to clear eventually pending ISR bits in the local APIC were observed. 1) 0-day testing triggered the WARN_ON() in apic_pending_intr_clear(). This warning is emitted when the function fails to clear pending ISR bits or observes pending IRR bits which are not delivered to the CPU after the stale ISR bit(s) are ACK'ed. Unfortunately the function only emits a WARN_ON() and fails to dump the IRR/ISR content. That's useless for debugging. Feng added spot-on debug printk's which revealed that the stale IRR bit belonged to the APIC timer interrupt vector, but adding ad hoc debug code does not help with sporadic failures in the field. Rework the loop so the full IRR/ISR contents are saved and on failure dumped. 2) The loop termination logic is interesting at best. If the machine has no TSC or cpu_khz is not known yet it tries 1 million times to ack stale IRR/ISR bits. What? With TSC it uses the TSC to calculate the loop termination. It takes a timestamp at entry and terminates the loop when: (rdtsc() - start_timestamp) >= (cpu_khz << 10) That's roughly one second. Both methods are problematic. The APIC has 256 vectors, which means that in theory max. 256 IRR/ISR bits can be set. In practice this is impossible and the chance that more than a few bits are set is close to zero. With the pure loop based approach the 1 million retries are complete overkill. With TSC this can terminate too early in a guest which is running on a heavily loaded host even with only a couple of IRR/ISR bits set. The reason is that after acknowledging the highest priority ISR bit, pending IRRs must get serviced first before the next round of acknowledge can take place as the APIC (real and virtualized) does not honour EOI without a preceding interrupt on the CPU. And every APIC read/write takes a VMEXIT if the APIC is virtualized. While trying to reproduce the issue 0-day reported it was observed that the guest was scheduled out long enough under heavy load that it terminated after 8 iterations. Make the loop terminate after 512 iterations. That's plenty enough in any case and does not take endless time to complete. Signed-off-by: NThomas Gleixner <tglx@linutronix.de> Acked-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20190722105219.158847694@linutronix.de Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Juri Lelli
[ Upstream commit a07db5c0865799ebed1f88be0df50c581fb65029 ] On !CONFIG_RT_GROUP_SCHED configurations it is currently not possible to move RT tasks between cgroups to which CPU controller has been attached; but it is oddly possible to first move tasks around and then make them RT (setschedule to FIFO/RR). E.g.: # mkdir /sys/fs/cgroup/cpu,cpuacct/group1 # chrt -fp 10 $$ # echo $$ > /sys/fs/cgroup/cpu,cpuacct/group1/tasks bash: echo: write error: Invalid argument # chrt -op 0 $$ # echo $$ > /sys/fs/cgroup/cpu,cpuacct/group1/tasks # chrt -fp 10 $$ # cat /sys/fs/cgroup/cpu,cpuacct/group1/tasks 2345 2598 # chrt -p 2345 pid 2345's current scheduling policy: SCHED_FIFO pid 2345's current scheduling priority: 10 Also, as Michal noted, it is currently not possible to enable CPU controller on unified hierarchy with !CONFIG_RT_GROUP_SCHED (if there are any kernel RT threads in root cgroup, they can't be migrated to the newly created CPU controller's root in cgroup_update_dfl_csses()). Existing code comes with a comment saying the "we don't support RT-tasks being in separate groups". Such comment is however stale and belongs to pre-RT_GROUP_SCHED times. Also, it doesn't make much sense for !RT_GROUP_ SCHED configurations, since checks related to RT bandwidth are not performed at all in these cases. Make moving RT tasks between CPU controller groups viable by removing special case check for RT (and DEADLINE) tasks. Signed-off-by: NJuri Lelli <juri.lelli@redhat.com> Signed-off-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: NMichal Koutný <mkoutny@suse.com> Reviewed-by: NDaniel Bristot de Oliveira <bristot@redhat.com> Acked-by: NTejun Heo <tj@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: lizefan@huawei.com Cc: longman@redhat.com Cc: luca.abeni@santannapisa.it Cc: rostedt@goodmis.org Link: https://lkml.kernel.org/r/20190719063455.27328-1-juri.lelli@redhat.comSigned-off-by: NIngo Molnar <mingo@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Vincent Guittot
[ Upstream commit f6cad8df6b30a5d2bbbd2e698f74b4cafb9fb82b ] load_balance() has a dedicated mechanism to detect when an imbalance is due to CPU affinity and must be handled at the parent level. In this case, the imbalance field of the parent's sched_group is set. The description of sg_imbalanced() gives a typical example of two groups of 4 CPUs each and 4 tasks, each with a cpumask covering 1 CPU of the first group and 3 CPUs of the second group. Something like: { 0 1 2 3 } { 4 5 6 7 } * * * * But load_balance fails to fix this use case on my octo-core system made of 2 clusters of quad cores. While load_balance is able to detect that the imbalance is due to CPU affinity, it fails to fix it because the imbalance field is cleared before the parent level gets a chance to run. In fact, when the imbalance is detected, load_balance reruns without the CPU with pinned tasks. But there are no other running tasks in the situation described above and everything looks balanced this time, so the imbalance field is immediately cleared. The imbalance field should not be cleared if there is no other task to move when the imbalance is detected. Signed-off-by: NVincent Guittot <vincent.guittot@linaro.org> Signed-off-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/1561996022-28829-1-git-send-email-vincent.guittot@linaro.org Signed-off-by: NIngo Molnar <mingo@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Paul E. McKenney
[ Upstream commit 84ec3a0787086fcd25f284f59b3aa01fd6fc0a5d ] time/tick-broadcast: Fix tick_broadcast_offline() lockdep complaint The TASKS03 and TREE04 rcutorture scenarios produce the following lockdep complaint: WARNING: inconsistent lock state 5.2.0-rc1+ #513 Not tainted -------------------------------- inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage. migration/1/14 [HC0[0]:SC0[0]:HE1:SE1] takes: (____ptrval____) (tick_broadcast_lock){?...}, at: tick_broadcast_offline+0xf/0x70 {IN-HARDIRQ-W} state was registered at: lock_acquire+0xb0/0x1c0 _raw_spin_lock_irqsave+0x3c/0x50 tick_broadcast_switch_to_oneshot+0xd/0x40 tick_switch_to_oneshot+0x4f/0xd0 hrtimer_run_queues+0xf3/0x130 run_local_timers+0x1c/0x50 update_process_times+0x1c/0x50 tick_periodic+0x26/0xc0 tick_handle_periodic+0x1a/0x60 smp_apic_timer_interrupt+0x80/0x2a0 apic_timer_interrupt+0xf/0x20 _raw_spin_unlock_irqrestore+0x4e/0x60 rcu_nocb_gp_kthread+0x15d/0x590 kthread+0xf3/0x130 ret_from_fork+0x3a/0x50 irq event stamp: 171 hardirqs last enabled at (171): [<ffffffff8a201a37>] trace_hardirqs_on_thunk+0x1a/0x1c hardirqs last disabled at (170): [<ffffffff8a201a53>] trace_hardirqs_off_thunk+0x1a/0x1c softirqs last enabled at (0): [<ffffffff8a264ee0>] copy_process.part.56+0x650/0x1cb0 softirqs last disabled at (0): [<0000000000000000>] 0x0 [...] To reproduce, run the following rcutorture test: $ tools/testing/selftests/rcutorture/bin/kvm.sh --duration 5 --kconfig "CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_PROVE_LOCKING=y" --configs "TASKS03 TREE04" It turns out that tick_broadcast_offline() was an innocent bystander. After all, interrupts are supposed to be disabled throughout take_cpu_down(), and therefore should have been disabled upon entry to tick_offline_cpu() and thus to tick_broadcast_offline(). This suggests that one of the CPU-hotplug notifiers was incorrectly enabling interrupts, and leaving them enabled on return. Some debugging code showed that the culprit was sched_cpu_dying(). It had irqs enabled after return from sched_tick_stop(). Which in turn had irqs enabled after return from cancel_delayed_work_sync(). Which is a wrapper around __cancel_work_timer(). Which can sleep in the case where something else is concurrently trying to cancel the same delayed work, and as Thomas Gleixner pointed out on IRC, sleeping is a decidedly bad idea when you are invoked from take_cpu_down(), regardless of the state you leave interrupts in upon return. Code inspection located no reason why the delayed work absolutely needed to be canceled from sched_tick_stop(): The work is not bound to the outgoing CPU by design, given that the whole point is to collect statistics without disturbing the outgoing CPU. This commit therefore simply drops the cancel_delayed_work_sync() from sched_tick_stop(). Instead, a new ->state field is added to the tick_work structure so that the delayed-work handler function sched_tick_remote() can avoid reposting itself. A cpu_is_offline() check is also added to sched_tick_remote() to avoid mucking with the state of an offlined CPU (though it does appear safe to do so). The sched_tick_start() and sched_tick_stop() functions also update ->state, and sched_tick_start() also schedules the delayed work if ->state indicates that it is not already in flight. Signed-off-by: NPaul E. McKenney <paulmck@linux.ibm.com> [ paulmck: Apply Peter Zijlstra and Frederic Weisbecker atomics feedback. 
] Signed-off-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: NFrederic Weisbecker <frederic@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20190625165238.GJ26519@linux.ibm.comSigned-off-by: NIngo Molnar <mingo@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Fabio Estevam
[ Upstream commit 8791a102ce579346cea9d2f911afef1c1985213c ] The power down and reset GPIO are optional, but the return value from devm_gpiod_get_optional() needs to be checked and propagated in the case of error, so that probe deferral can work. Signed-off-by: NFabio Estevam <festevam@gmail.com> Signed-off-by: NSakari Ailus <sakari.ailus@linux.intel.com> Signed-off-by: NMauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
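The usual pattern for this, sketched with hypothetical con_id and field names (the point is that IS_ERR() covers -EPROBE_DEFER as well as real failures):

    struct gpio_desc *gpio;

    gpio = devm_gpiod_get_optional(&client->dev, "powerdown", GPIOD_OUT_HIGH);
    if (IS_ERR(gpio))
            return PTR_ERR(gpio);           /* propagate -EPROBE_DEFER and real errors */
    sensor->pwdn_gpio = gpio;               /* hypothetical field name */

    gpio = devm_gpiod_get_optional(&client->dev, "reset", GPIOD_OUT_HIGH);
    if (IS_ERR(gpio))
            return PTR_ERR(gpio);
    sensor->reset_gpio = gpio;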
-
Committed by Luke Nowakowski-Krijger
[ Upstream commit d4a6a9537bc32811486282206ecfb7c53754b74d ] Add an hdpvr device number check and error handling. We need to increment the device count atomically before we check out a device, to make sure that we do not reach the max count; otherwise we get out-of-bounds errors, as reported by syzbot. Reported-and-tested-by: syzbot+aac8d0d7205f112045d2@syzkaller.appspotmail.com Signed-off-by: NLuke Nowakowski-Krijger <lnowakow@eng.ucsd.edu> Signed-off-by: NHans Verkuil <hverkuil-cisco@xs4all.nl> Signed-off-by: NMauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Wen Yang
[ Upstream commit da79bf41a4d170ca93cc8f3881a70d734a071c37 ] The call to of_get_child_by_name returns a node pointer with refcount incremented thus it must be explicitly decremented after the last usage. Detected by coccinelle with the following warnings: drivers/media/platform/exynos4-is/fimc-is.c:813:2-8: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 807, but without a corresponding object release within this function. drivers/media/platform/exynos4-is/fimc-is.c:870:1-7: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 807, but without a corresponding object release within this function. drivers/media/platform/exynos4-is/fimc-is.c:885:1-7: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 807, but without a corresponding object release within this function. drivers/media/platform/exynos4-is/media-dev.c:545:1-7: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 541, but without a corresponding object release within this function. drivers/media/platform/exynos4-is/media-dev.c:528:1-7: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 499, but without a corresponding object release within this function. drivers/media/platform/exynos4-is/media-dev.c:534:1-7: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 499, but without a corresponding object release within this function. Signed-off-by: NWen Yang <wen.yang99@zte.com.cn> Signed-off-by: NHans Verkuil <hverkuil-cisco@xs4all.nl> Signed-off-by: NMauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
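The general pattern being fixed, as a minimal sketch (child node name and helper are hypothetical):

    struct device_node *node;
    int ret;

    node = of_get_child_by_name(dev->of_node, "camera");   /* takes a reference */
    if (!node)
            return -ENODEV;

    ret = parse_camera_node(dev, node);     /* hypothetical helper */

    of_node_put(node);      /* drop the reference on every exit path, including errors */
    return ret;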
-
Committed by Sean Young
[ Upstream commit 5dd4b89dc098bf22cd13e82a308f42a02c102b2b ] The rc-mm protocol can't be decoded by the mtk-cir since the de-glitch filter removes pulses/spaces shorter than 294 microseconds. Tested on a BananaPi R2. Signed-off-by: NSean Young <sean@mess.org> Acked-by: NSean Wang <sean.wang@kernel.org> Signed-off-by: NMauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Arnd Bergmann
[ Upstream commit 765bb8610d305ee488b35d07e2a04ae52fb2df9c ] When CONFIG_DVB_DIB9000 is disabled, we can still compile code that now fails to link against dibx000_i2c_set_speed: drivers/media/usb/dvb-usb/dib0700_devices.o: In function `dib01x0_pmu_update.constprop.7': dib0700_devices.c:(.text.unlikely+0x1c9c): undefined reference to `dibx000_i2c_set_speed' The call sites are both through dib01x0_pmu_update(), which gets passed an 'i2c' pointer from dib9000_get_i2c_master(), which has returned NULL. Checking this pointer seems to be a good idea anyway, and it avoids the link failure in most cases. Sean Young found another case that is not fixed by that, where certain gcc versions leave an unused function in place that causes the link error, but adding an explicit IS_ENABLED() check also solves this. Fixes: b7f54910 ("V4L/DVB (4647): Added module for DiB0700 based devices") Signed-off-by: NArnd Bergmann <arnd@arndb.de> Signed-off-by: NSean Young <sean@mess.org> Signed-off-by: NMauro Carvalho Chehab <mchehab+samsung@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
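A hedged sketch of the call-site check described above; the interface constant and speed value are illustrative, not taken from the driver:

    struct i2c_adapter *i2c;

    if (!IS_ENABLED(CONFIG_DVB_DIB9000))
            return -ENODEV;         /* lets the compiler drop the dibx000_i2c_set_speed() reference */

    i2c = dib9000_get_i2c_master(fe, DIBX000_I2C_INTERFACE_GPIO_1_2, 1);
    if (!i2c)
            return -ENODEV;         /* NULL check that also guards the runtime path */

    dibx000_i2c_set_speed(i2c, 1500);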
-
Committed by Nick Stoughton
[ Upstream commit ed2abfebb041473092b41527903f93390d38afa7 ] Firmware files are in ASCII, using 2 hex characters per byte. The maximum length of a firmware string is therefore 16 (commands) * 2 (bytes per command) * 2 (characters per byte) = 64 Fixes: ff45262a ("leds: add new LP5562 LED driver") Signed-off-by: NNick Stoughton <nstoughton@logitech.com> Acked-by: NPavel Machek <pavel@ucw.cz> Signed-off-by: NJacek Anaszewski <jacek.anaszewski@gmail.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Stefan Wahren
[ Upstream commit 72503b25ee363827aafffc3e8d872e6a92a7e422 ] During enabling of the RPi 4, we found out that the driver doesn't provide a helpful error message in case setting DMA mask fails. So add one. Signed-off-by: NStefan Wahren <wahrenst@gmx.net> Link: https://lore.kernel.org/r/1563297318-4900-1-git-send-email-wahrenst@gmx.netSigned-off-by: NVinod Koul <vkoul@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Stephen Boyd
[ Upstream commit 6e37ccf78a53296c6c7bf426065762c27829eb84 ] We need to use the proper types and convert between physical addresses and dma addresses here to avoid mismatch warnings. This is especially important on systems with a different size for dma addresses and physical addresses. Otherwise, we get the following warning: drivers/firmware/qcom_scm.c: In function "qcom_scm_assign_mem": drivers/firmware/qcom_scm.c:469:47: error: passing argument 3 of "dma_alloc_coherent" from incompatible pointer type [-Werror=incompatible-pointer-types] We also fix the size argument to dma_free_coherent() because that size doesn't need to be aligned after it's already aligned on the allocation size. In fact, dma debugging expects the same arguments to be passed to both the allocation and freeing sides of the functions so changing the size is incorrect regardless. Reported-by: NIan Jackson <ian.jackson@citrix.com> Cc: Ian Jackson <ian.jackson@citrix.com> Cc: Julien Grall <julien.grall@arm.com> Cc: Bjorn Andersson <bjorn.andersson@linaro.org> Cc: Avaneesh Kumar Dwivedi <akdwived@codeaurora.org> Tested-by: NBjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: NStephen Boyd <swboyd@chromium.org> Signed-off-by: NBjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
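The type fix boils down to letting dma_alloc_coherent() fill in a dma_addr_t and freeing with the same size that was allocated; a simplified sketch of the allocation/free pair (the sizing expression is illustrative):

    void *ptr;
    dma_addr_t ptr_phys;                    /* was phys_addr_t, which triggered the warning */
    size_t ptr_sz = ALIGN(mem_sz, SZ_64);   /* illustrative sizing */

    ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_phys, GFP_KERNEL);
    if (!ptr)
            return -ENOMEM;
    /* ... build the descriptor, make the SCM call ... */
    dma_free_coherent(__scm->dev, ptr_sz, ptr, ptr_phys);  /* same size as the allocation */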
-
Committed by Oleksandr Suvorov
[ Upstream commit b6319b061ba279577fd7030a9848fbd6a17151e3 ] If VDDA != VDDIO and any of them is greater than 3.1V, charge pump source can be assigned automatically [1]. [1] https://www.nxp.com/docs/en/data-sheet/SGTL5000.pdfSigned-off-by: NOleksandr Suvorov <oleksandr.suvorov@toradex.com> Reviewed-by: NMarcel Ziswiler <marcel.ziswiler@toradex.com> Reviewed-by: NIgor Opaniuk <igor.opaniuk@toradex.com> Reviewed-by: NFabio Estevam <festevam@gmail.com> Link: https://lore.kernel.org/r/20190719100524.23300-7-oleksandr.suvorov@toradex.comSigned-off-by: NMark Brown <broonie@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Oleksandr Suvorov
[ Upstream commit 631bc8f0134ae9620d86a96b8c5f9445d91a2dca ] To enable "zero cross detect" for ADC/HP, change HP_ZCD_EN/ADC_ZCD_EN bits only instead of writing the whole CHIP_ANA_CTRL register. Signed-off-by: NOleksandr Suvorov <oleksandr.suvorov@toradex.com> Reviewed-by: NMarcel Ziswiler <marcel.ziswiler@toradex.com> Reviewed-by: NIgor Opaniuk <igor.opaniuk@toradex.com> Reviewed-by: NFabio Estevam <festevam@gmail.com> Link: https://lore.kernel.org/r/20190719100524.23300-6-oleksandr.suvorov@toradex.comSigned-off-by: NMark Brown <broonie@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Lucas Stach
[ Upstream commit b7e814deae33eb30f8f8c6528e8e69b107978d88 ] Both the supplies and reset GPIO might need a probe deferral for the resource to be available. Don't print a error message in that case, as it is a normal operating condition. Signed-off-by: NLucas Stach <l.stach@pengutronix.de> Acked-by: NAndrew F. Davis <afd@ti.com> Link: https://lore.kernel.org/r/20190719143637.2018-1-l.stach@pengutronix.deSigned-off-by: NMark Brown <broonie@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
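The standard way to stay quiet on deferral looks roughly like this (GPIO name and variable names hypothetical):

    reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
    if (IS_ERR(reset_gpio)) {
            ret = PTR_ERR(reset_gpio);
            if (ret != -EPROBE_DEFER)               /* only complain about real failures */
                    dev_err(dev, "failed to get reset GPIO: %d\n", ret);
            return ret;
    }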
-
Committed by Axel Lin
[ Upstream commit 1e2cc8c5e0745b545d4974788dc606d678b6e564 ] According to the datasheet https://www.ti.com/lit/ds/symlink/lm3632a.pdf Table 20. VPOS Bias Register Field Descriptions VPOS[5:0] Sets the Positive Display Bias (LDO) Voltage (50 mV per step) 000000: 4 V 000001: 4.05 V 000010: 4.1 V .................... 011101: 5.45 V 011110: 5.5 V (Default) 011111: 5.55 V .................... 100111: 5.95 V 101000: 6 V Note: Codes 101001 to 111111 map to 6 V The LM3632_LDO_VSEL_MAX should be 0b101000 (0x28), so the maximum voltage can match the datasheet. Fixes: 3a8d1a73 ("regulator: add LM363X driver") Signed-off-by: NAxel Lin <axel.lin@ingics.com> Link: https://lore.kernel.org/r/20190626132632.32629-1-axel.lin@ingics.comSigned-off-by: NMark Brown <broonie@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
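Numerically, with 4 V at code 0 and 50 mV per step, code 0x28 (decimal 40) is the last step that changes the output: 4 V + 40 * 0.05 V = 6 V. A sketch of the constant and what it implies for the regulator description (macro name as in the commit text, surrounding details assumed):

    #define LM3632_LDO_VSEL_MAX     0x28    /* 0b101000: codes above this all map to 6 V */

    /* with min_uV = 4000000 and uV_step = 50000, n_voltages becomes
     * LM3632_LDO_VSEL_MAX + 1 = 41 selectable steps, i.e. 4.00 V .. 6.00 V
     */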
-
Committed by Chris Wilson
[ Upstream commit caa8422d01e983782548648e125fd617cadcec3f ] I was looking at <4> [241.835158] general protection fault: 0000 [#1] PREEMPT SMP PTI <4> [241.835181] CPU: 1 PID: 214 Comm: kworker/1:3 Tainted: G U 5.2.0-CI-CI_DRM_6509+ #1 <4> [241.835199] Hardware name: Dell Inc. OptiPlex 745 /0GW726, BIOS 2.3.1 05/21/2007 <4> [241.835234] Workqueue: events snd_hdac_bus_process_unsol_events [snd_hda_core] <4> [241.835256] RIP: 0010:input_handle_event+0x16d/0x5e0 <4> [241.835270] Code: 48 8b 93 58 01 00 00 8b 52 08 89 50 04 8b 83 f8 06 00 00 48 8b 93 00 07 00 00 8d 70 01 48 8d 04 c2 83 e1 08 89 b3 f8 06 00 00 <66> 89 28 66 44 89 60 02 44 89 68 04 8b 93 f8 06 00 00 0f 84 fd fe <4> [241.835304] RSP: 0018:ffffc9000019fda0 EFLAGS: 00010046 <4> [241.835317] RAX: 6b6b6b6ec6c6c6c3 RBX: ffff8880290fefc8 RCX: 0000000000000000 <4> [241.835332] RDX: 000000006b6b6b6b RSI: 000000006b6b6b6c RDI: 0000000000000046 <4> [241.835347] RBP: 0000000000000005 R08: 0000000000000000 R09: 0000000000000001 <4> [241.835362] R10: ffffc9000019faa0 R11: 0000000000000000 R12: 0000000000000004 <4> [241.835377] R13: 0000000000000000 R14: ffff8880290ff1d0 R15: 0000000000000293 <4> [241.835392] FS: 0000000000000000(0000) GS:ffff88803de80000(0000) knlGS:0000000000000000 <4> [241.835409] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 <4> [241.835422] CR2: 00007ffe9a99e9b7 CR3: 000000002f588000 CR4: 00000000000006e0 <4> [241.835436] Call Trace: <4> [241.835449] input_event+0x45/0x70 <4> [241.835464] snd_jack_report+0xdc/0x100 <4> [241.835490] snd_hda_jack_report_sync+0x83/0xc0 [snd_hda_codec] <4> [241.835512] snd_hdac_bus_process_unsol_events+0x5a/0x70 [snd_hda_core] <4> [241.835530] process_one_work+0x245/0x610 which has the hallmarks of a worker queued from interrupt after it was supposedly cancelled (note the POISON_FREE), and I could not see where the interrupt would be flushed on shutdown so added the likely suspects. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111174Signed-off-by: NChris Wilson <chris@chris-wilson.co.uk> Signed-off-by: NTakashi Iwai <tiwai@suse.de> Signed-off-by: NSasha Levin <sashal@kernel.org>
-