- 02 December 2020, 2 commits
-
-
Submitted by Vladimir Oltean
The last user of the RTNL brother of dev_getfirstbyhwtype (the latter being synchronized under RCU) has been deleted in commit b4db2b35 ("afs: Use core kernel UUID generation"). Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Howells <dhowells@redhat.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://lore.kernel.org/r/20201129200550.2433401-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
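For context, a minimal sketch of how the surviving RCU-synchronized lookup is typically used; it returns with a reference held, and the ARPHRD_ETHER type and the net pointer are illustrative choices, not taken from this commit:

    struct net_device *dev;

    dev = dev_getfirstbyhwtype(net, ARPHRD_ETHER); /* takes a reference on the device */
    if (dev) {
            /* ... use dev without holding RTNL ... */
            dev_put(dev);   /* drop the reference taken by the lookup */
    }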
-
Submitted by Marco Elver
It turns out that usage of skb extensions can cause memory leaks. Ido Schimmel reported: "[...] there are instances that blindly overwrite 'skb->extensions' by invoking skb_copy_header() after __alloc_skb()." Therefore, give up on using skb extensions for the KCOV handle, and instead store kcov_handle directly in sk_buff. Fixes: 6370cc3b ("net: add kcov handle to skb extensions") Fixes: 85ce50d3 ("net: kcov: don't select SKB_EXTENSIONS when there is no NET") Fixes: 97f53a08 ("net: linux/skbuff.h: combine SKB_EXTENSIONS + KCOV handling") Link: https://lore.kernel.org/linux-wireless/20201121160941.GA485907@shredder.lan/ Reported-by: Ido Schimmel <idosch@idosch.org> Signed-off-by: Marco Elver <elver@google.com> Link: https://lore.kernel.org/r/20201125224840.2014773-1-elver@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
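A hedged sketch of what the direct-storage helpers can look like after this change; the names follow the description above, but treat the exact field layout and guards as illustrative:

    /* Sketch, assuming an skb->kcov_handle field that exists only under CONFIG_KCOV. */
    static inline void skb_set_kcov_handle(struct sk_buff *skb, const u64 kcov_handle)
    {
    #ifdef CONFIG_KCOV
            skb->kcov_handle = kcov_handle;
    #endif
    }

    static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
    {
    #ifdef CONFIG_KCOV
            return skb->kcov_handle;
    #else
            return 0;
    #endif
    }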
-
- 27 November 2020, 12 commits
-
-
Submitted by Parav Pandit
When the eswitch manager is running on the ECPF, the host PF should be treated as a non-eswitch-manager port, similar to other VF vports. Failing to do so results in firmware treating the PF's vport as the ECPF vport for eswitch ACL tables. A non-zero check to figure out whether a given vport is an "other" vport is not sufficient because the PF vport number is 0 on the ECPF. Hence, create the esw ACL tables with an "other vport" attribute. Signed-off-by: Parav Pandit <parav@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Parav Pandit
To match the hardware spec, rename peer_pf to host_pf. Signed-off-by: Parav Pandit <parav@nvidia.com> Reviewed-by: Bodong Wang <bodong@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Parav Pandit
A subsequent patch implements a helper API that takes mlx5_core_dev as a const pointer, so make its caller APIs take a const pointer too. Signed-off-by: Parav Pandit <parav@nvidia.com> Reviewed-by: Bodong Wang <bodong@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Yishai Hadas
Expose the "other function" ifc bits to enable setting HCA caps on behalf of another function. In addition, expose the vhca_resource_manager bit to control whether the other-function functionality is supported by firmware. Signed-off-by: Yishai Hadas <yishaih@nvidia.com> Reviewed-by: Parav Pandit <parav@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Aya Levin
Expose the FW indication that it supports stateless offloads for IP-over-IP tunneled packets per direction. In some HW, like ConnectX-4, IP-in-IP support is not symmetric: it supports steering on the inner header but not TX checksum and TSO. Add an IP-in-IP capability per direction to cover this case as well. Note: the global tunnel_stateless_ip_over_ip is on only if both per-direction indications are turned on. Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Parav Pandit
Update the hardware interface definitions to query and modify vhca state, and the related EQE and event code. Signed-off-by: Parav Pandit <parav@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Parav Pandit
The mlx5 command init and cleanup routines are internal to the mlx5_core driver. Hence, avoid exporting them and move their declarations to the mlx5_core driver's internal file mlx5_core.h. Signed-off-by: Parav Pandit <parav@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Eran Ben Elisha
Add a bit in the HCA capabilities layout to indicate whether ts_cqe_to_dest_cqn is supported. In addition, add a ts_cqe_to_dest_cqn field to the SQ context, for the driver to set the actual CQN. Signed-off-by: Eran Ben Elisha <eranbe@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Muhammad Sammar
Add misc4 match params to enable matching on prog_sample_fields. Signed-off-by: Muhammad Sammar <muhammads@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Chris Mi
The flow sampler object is a new destination type. Add a new member for the flow destination. Signed-off-by: Chris Mi <cmi@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Chris Mi
Hardware introduces a flow sampler object for packet sampling. Add the offload hardware bits and structures. Signed-off-by: Chris Mi <cmi@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Feng Tang
0day reported a -22.7% regression for the will-it-scale page_fault2 case [1] on a 4-socket, 144-CPU platform, and bisected it to Waiman's optimization (commit bd0b230f) that saves one 'struct page_counter' worth of space in 'struct mem_cgroup'. Initially we thought it was due to the cache alignment change introduced by the patch, but further debugging shows that it is due to some hot data members ('vmstats_local', 'vmstats_percpu', 'vmstats') sitting in 2 adjacent cachelines (cachelines 2N and 2N+1); when adjacent cache line prefetch is enabled, this triggers an "extended level" of cache false sharing across the 2 adjacent cache lines. So exchange the 2 member blocks, while mostly keeping the original cache alignment, which restores and even improves the performance, and saves 64 bytes of space for 'struct mem_cgroup' (from 2880 to 2816, with 0day's default RHEL-8.3 kernel config) [1]. https://lore.kernel.org/lkml/20201102091543.GM31092@shao2-debian/ Fixes: bd0b230f ("mm/memcg: unify swap and memsw page counters") Reported-by: kernel test robot <rong.a.chen@intel.com> Signed-off-by: Feng Tang <feng.tang@intel.com> Acked-by: Waiman Long <longman@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 26 November 2020, 2 commits
-
-
Submitted by Yunsheng Lin
The current semantic of napi_consume_skb() is that the caller needs to provide a non-zero budget when calling from NAPI context; breaking this semantic causes hard-to-debug problems, because _kfree_skb_defer() needs to run in atomic context in order to push the skb onto the particular CPU's napi_alloc_cache atomically. So add lockdep_assert_in_softirq() to assert when the running context is not in_softirq. in_softirq means softirq is being served or BH is disabled, which has ambiguous semantics due to the BH-disabled confusion, so add a comment to emphasize that. And since softirq context can be interrupted by hard IRQ or NMI context, lockdep_assert_in_softirq() needs to assert about hard IRQ or NMI context too. Suggested-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
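A sketch of the assertion described above, assuming the usual lockdep WARN_ON_ONCE style; the exact guard used in the tree may differ:

    /* Assert softirq context: softirq is being served or BH is disabled,
     * and we were not entered from hard IRQ or NMI context. */
    #define lockdep_assert_in_softirq()                                     \
    do {                                                                    \
            WARN_ON_ONCE(__lockdep_enabled &&                               \
                         (!in_softirq() || in_irq() || in_nmi()));          \
    } while (0)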
-
Submitted by Ioana Ciornei
Now that all the PHY drivers have been migrated to directly implement the generic .handle_interrupt() callback for seamless support of shared IRQs, and all the .config_intr() implementations clear any pending interrupts, we can safely remove the two old callbacks. With this patch, phylib has proper support for shared IRQs (and not just for multi-PHY devices). A PHY driver must implement both the .handle_interrupt() and .config_intr() callbacks for its IRQs to actually be used. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
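To illustrate the contract, a minimal sketch of a driver-side .handle_interrupt() implementation; FOO_INTR_STATUS and FOO_INTR_MASK are hypothetical vendor registers, and only phy_read(), phy_error() and phy_trigger_machine() are real phylib calls:

    #define FOO_INTR_STATUS 0x1a    /* hypothetical interrupt status register */
    #define FOO_INTR_MASK   0x01    /* hypothetical "link changed" bit */

    static irqreturn_t foo_handle_interrupt(struct phy_device *phydev)
    {
            int irq_status = phy_read(phydev, FOO_INTR_STATUS);

            if (irq_status < 0) {
                    phy_error(phydev);
                    return IRQ_NONE;
            }

            if (!(irq_status & FOO_INTR_MASK))
                    return IRQ_NONE;        /* not ours: lets other sharers of the IRQ run */

            phy_trigger_machine(phydev);    /* hand the event to the phylib state machine */
            return IRQ_HANDLED;
    }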
-
- 24 November 2020, 5 commits
-
-
Submitted by Amit Sunil Dhamne
Currently an array of fixed length PM_API_MAX is used to cache the pm_api version (valid or invalid). However, ATF-based PM API values are much higher than PM_API_MAX. So, to cover ATF-based PM APIs as well, use a hash table to store the pm_api version status. Signed-off-by: Amit Sunil Dhamne <amit.sunil.dhamne@xilinx.com> Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Ravi Patel <ravi.patel@xilinx.com> Signed-off-by: Rajan Vaja <rajan.vaja@xilinx.com> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Tested-by: Michal Simek <michal.simek@xilinx.com> Fixes: f3217d6f ("firmware: xilinx: fix out-of-bounds access") Cc: stable <stable@vger.kernel.org> Link: https://lore.kernel.org/r/1606197161-25976-1-git-send-email-rajan.vaja@xilinx.com Signed-off-by: Michal Simek <michal.simek@xilinx.com>
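A sketch of the hash-table approach using the generic <linux/hashtable.h> API; the structure and function names here are hypothetical, chosen only to illustrate keying the version cache by PM API id:

    #include <linux/hashtable.h>
    #include <linux/slab.h>

    struct pm_api_version_entry {          /* hypothetical cache entry */
            u32 pm_api_id;
            int feature_status;
            struct hlist_node hentry;
    };

    static DEFINE_HASHTABLE(pm_api_version_map, 7);        /* 128 buckets */

    static int cache_pm_api_version(u32 api_id, int status)
    {
            struct pm_api_version_entry *entry;

            entry = kmalloc(sizeof(*entry), GFP_KERNEL);
            if (!entry)
                    return -ENOMEM;

            entry->pm_api_id = api_id;
            entry->feature_status = status;
            hash_add(pm_api_version_map, &entry->hentry, api_id);
            return 0;
    }

    static struct pm_api_version_entry *find_pm_api_version(u32 api_id)
    {
            struct pm_api_version_entry *entry;

            /* walk only the bucket that api_id hashes to */
            hash_for_each_possible(pm_api_version_map, entry, hentry, api_id) {
                    if (entry->pm_api_id == api_id)
                            return entry;
            }
            return NULL;
    }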
-
Submitted by Eyal Birger
In the patchset merged by commit b9fcf0a0 ("Merge branch 'support-AF_PACKET-for-layer-3-devices'"), L3 devices which did not have header_ops were given one for the purpose of protocol parsing on the af_packet transmit path. That change made the af_packet receive path regard these devices as having a visible L3 header and therefore align incoming skb->data to point to the skb's mac_header. Some devices, such as ipip, xfrmi, and others, do not reset their mac_header prior to ingress and therefore their incoming packets became malformed. Ideally these devices would reset their mac headers, or af_packet would be able to rely on dev->hard_header_len being 0 for such cases, but it seems this is not the case. Fix by changing the af_packet RX link-layer visibility criteria to include the existence of a '.create()' header operation, which is used when creating a device hard header - via dev_hard_header() - by upper layers, and does not exist in these L3 devices. As this predicate may be useful in other situations, add it as a common dev_has_header() helper in netdevice.h. Fixes: b9fcf0a0 ("Merge branch 'support-AF_PACKET-for-layer-3-devices'") Signed-off-by: Eyal Birger <eyal.birger@gmail.com> Acked-by: Jason A. Donenfeld <Jason@zx2c4.com> Acked-by: Willem de Bruijn <willemb@google.com> Link: https://lore.kernel.org/r/20201121062817.3178900-1-eyal.birger@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
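Going by the description, the helper presumably boils down to checking for a '.create' header op; a sketch:

    /* True if the device exposes a visible link-layer header that upper
     * layers can construct via dev_hard_header(). */
    static inline bool dev_has_header(const struct net_device *dev)
    {
            return dev->header_ops && dev->header_ops->create;
    }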
-
Submitted by Jakub Kicinski
linux/netdevice.h is included in very many places, so touching any of its dependencies causes large incremental builds. Drop the linux/ethtool.h include; linux/netdevice.h just needs a forward declaration of struct ethtool_ops. Fix all the places which made use of this implicit include. Acked-by: Johannes Berg <johannes@sipsolutions.net> Acked-by: Shannon Nelson <snelson@pensando.io> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Link: https://lore.kernel.org/r/20201120225052.1427503-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
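A sketch of the dependency trim; netdevice.h only uses the type through a pointer, so a forward declaration is enough, and users of the full ethtool API include it themselves:

    /* linux/netdevice.h: forward declaration instead of #include <linux/ethtool.h> */
    struct ethtool_ops;

    struct net_device {
            /* ... other members ... */
            const struct ethtool_ops *ethtool_ops;  /* pointer use only, no full type needed */
    };

    /* a driver that dereferences ethtool types must now pull in the header itself */
    #include <linux/ethtool.h>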
-
Submitted by Christian Eggers
Using PTP-wide defines obsoletes various driver-internal defines and uses of magic numbers. Signed-off-by: Christian Eggers <ceggers@arri.de> Cc: Kurt Kanzenbach <kurt@linutronix.de> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by David Howells
Provide the proposed description (add key) or the original description (update/instantiate key) when preparsing a key so that the key type can validate it against the data. This is important for rxrpc server keys as we need to check that they have the right amount of key material present - and it's better to do that when the key is loaded rather than deep in trying to process a response packet. Signed-off-by: David Howells <dhowells@redhat.com> cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> cc: keyrings@vger.kernel.org
-
- 23 November 2020, 3 commits
-
-
Submitted by Matthew Wilcox (Oracle)
Both btrfs and fuse have reported faults caused by seeing a retry entry instead of the page they were looking for. This was caused by a missing check in the iterator. As can be seen in the panic log below, accessing 0x402 causes a panic; in xarray.h, 0x402 means RETRY_ENTRY. BUG: kernel NULL pointer dereference, address: 0000000000000402 CPU: 14 PID: 306003 Comm: as Not tainted 5.9.0-1-amd64 #1 Debian 5.9.1-1 Hardware name: Lenovo ThinkSystem SR665/7D2VCTO1WW, BIOS D8E106Q-1.01 05/30/2020 RIP: 0010:fuse_readahead+0x152/0x470 [fuse] Code: 41 8b 57 18 4c 8d 54 10 ff 4c 89 d6 48 8d 7c 24 10 e8 d2 e3 28 f9 48 85 c0 0f 84 fe 00 00 00 44 89 f2 49 89 04 d4 44 8d 72 01 <48> 8b 10 41 8b 4f 1c 48 c1 ea 10 83 e2 01 80 fa 01 19 d2 81 e2 01 RSP: 0018:ffffad99ceaebc50 EFLAGS: 00010246 RAX: 0000000000000402 RBX: 0000000000000001 RCX: 0000000000000002 RDX: 0000000000000000 RSI: ffff94c5af90bd98 RDI: ffffad99ceaebc60 RBP: ffff94ddc1749a00 R08: 0000000000000402 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000100 R12: ffff94de6c429ce0 R13: ffff94de6c4d3700 R14: 0000000000000001 R15: ffffad99ceaebd68 FS: 00007f228c5c7040(0000) GS:ffff94de8ed80000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000402 CR3: 0000001dbd9b4000 CR4: 0000000000350ee0 Call Trace: read_pages+0x83/0x270 page_cache_readahead_unbounded+0x197/0x230 generic_file_buffered_read+0x57a/0xa20 new_sync_read+0x112/0x1a0 vfs_read+0xf8/0x180 ksys_read+0x5f/0xe0 do_syscall_64+0x33/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xa9 Fixes: 042124cc ("mm: add new readahead_control API") Reported-by: David Sterba <dsterba@suse.com> Reported-by: Wonhyuk Yang <vvghjk1234@gmail.com> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: <stable@vger.kernel.org> Link: https://lkml.kernel.org/r/20201103142852.8543-1-willy@infradead.org Link: https://lkml.kernel.org/r/20201103124349.16722-1-vvghjk1234@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Dan Williams
The core-mm has a default __weak implementation of phys_to_target_node() to mirror the weak definition of memory_add_physaddr_to_nid(). That symbol is exported for modules. However, while the export in mm/memory_hotplug.c exported the symbol in the configuration cases of: CONFIG_NUMA_KEEP_MEMINFO=y CONFIG_MEMORY_HOTPLUG=y ...and: CONFIG_NUMA_KEEP_MEMINFO=n CONFIG_MEMORY_HOTPLUG=y ...it failed to export the symbol in the case of: CONFIG_NUMA_KEEP_MEMINFO=y CONFIG_MEMORY_HOTPLUG=n Not only is that broken, but Christoph points out that the kernel should not be exporting any __weak symbol, which means that the memory_add_physaddr_to_nid() example that phys_to_target_node() copied is broken too. Rework the definition of phys_to_target_node() and memory_add_physaddr_to_nid() to not require weak symbols. Move to the common arch override design pattern of an asm header defining a symbol to replace the default implementation. The only common header that all memory_add_physaddr_to_nid() producing architectures implement is asm/sparsemem.h. In fact, powerpc already defines its memory_add_physaddr_to_nid() helper in sparsemem.h. Double-down on that observation and define phys_to_target_node() where necessary in asm/sparsemem.h. An alternate consideration that was discarded was to put this override in asm/numa.h, but that entangles with the definition of MAX_NUMNODES relative to the inclusion of linux/nodemask.h, and requires powerpc to grow a new header. The dependency on NUMA_KEEP_MEMINFO for DEV_DAX_HMEM_DEVICES is invalid now that the symbol is properly exported / stubbed in all combinations of CONFIG_NUMA_KEEP_MEMINFO and CONFIG_MEMORY_HOTPLUG. [dan.j.williams@intel.com: v4] Link: https://lkml.kernel.org/r/160461461867.1505359.5301571728749534585.stgit@dwillia2-desk3.amr.corp.intel.com [dan.j.williams@intel.com: powerpc: fix create_section_mapping compile warning] Link: https://lkml.kernel.org/r/160558386174.2948926.2740149041249041764.stgit@dwillia2-desk3.amr.corp.intel.com Fixes: a035b6bf ("mm/memory_hotplug: introduce default phys_to_target_node() implementation") Reported-by: Randy Dunlap <rdunlap@infradead.org> Reported-by: Thomas Gleixner <tglx@linutronix.de> Reported-by: kernel test robot <lkp@intel.com> Reported-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: Randy Dunlap <rdunlap@infradead.org> Tested-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Joao Martins <joao.m.martins@oracle.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Vishal Verma <vishal.l.verma@intel.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Link: https://lkml.kernel.org/r/160447639846.1133764.7044090803980177548.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
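The override pattern described above, sketched for a hypothetical architecture; the '#define foo foo' idiom is what lets the generic header detect that an arch-specific implementation exists and skip its fallback stub:

    /* arch/foo/include/asm/sparsemem.h: the arch provides real implementations */
    int memory_add_physaddr_to_nid(u64 addr);
    #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid

    int phys_to_target_node(u64 start);
    #define phys_to_target_node phys_to_target_node

    /* generic header: fall back to a stub only if the arch did not define it */
    #ifndef memory_add_physaddr_to_nid
    static inline int memory_add_physaddr_to_nid(u64 addr)
    {
            return 0;
    }
    #endif

    #ifndef phys_to_target_node
    static inline int phys_to_target_node(u64 start)
    {
            return 0;
    }
    #endif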
-
Submitted by Nick Desaulniers
bpftrace parses the kernel headers and uses Clang under the hood. Remove the version check when __BPF_TRACING__ is defined (as bpftrace does) so that this tool can continue to parse kernel headers, even with older clang sources. Fixes: commit 1f7a44f6 ("compiler-clang: add build check for clang 10.0.1") Reported-by: Chen Yu <yu.chen.surf@gmail.com> Reported-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: Jarkko Sakkinen <jarkko@kernel.org> Acked-by: Jarkko Sakkinen <jarkko@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Acked-by: Nathan Chancellor <natechancellor@gmail.com> Acked-by: Miguel Ojeda <ojeda@kernel.org> Link: https://lkml.kernel.org/r/20201104191052.390657-1-ndesaulniers@google.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 21 November 2020, 2 commits
-
-
Submitted by Antonio Cardace
This bitmask represents all existing coalesce parameters. Signed-off-by: Antonio Cardace <acardace@redhat.com> Reviewed-by: Michal Kubecek <mkubecek@suse.cz> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Srujana Challa
On the OcteonTX2 platform, CPT instruction enqueue and NIX packet send are only possible via LMTST operations, which use the LDEOR instruction. This patch moves the lmt flush function from the OcteonTX2 nic driver to include/linux/soc since it will be used by both the OcteonTX2 CPT and NIC drivers for LMTST. Signed-off-by: Suheil Chandran <schandran@marvell.com> Signed-off-by: Srujana Challa <schalla@marvell.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 20 November 2020, 6 commits
-
-
Submitted by Oliver Hartkopp
This patch adds the following helper functions to access Classical CAN DLC values: can_get_cc_dlc() - get the data length code for Classical CAN raw DLC access; can_frame_set_cc_len() - set the len and len8_dlc values for Classical CAN raw DLC access. Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net> Link: https://lore.kernel.org/r/20201110154913.1404582-2-mkl@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
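A hedged usage sketch, assuming the helpers take the frame plus the device's ctrlmode so that CAN_CTRLMODE_CC_LEN8_DLC (raw DLC access) can be honored; foo_priv and the surrounding functions are hypothetical:

    struct foo_priv {
            struct can_priv can;    /* CAN drivers embed can_priv first */
            /* ... device-specific fields ... */
    };

    /* RX path sketch: derive len (and len8_dlc for DLC values 9..15)
     * from the raw DLC read from the controller. */
    static void foo_rx_frame(struct foo_priv *priv, struct can_frame *cf, u8 hw_dlc)
    {
            can_frame_set_cc_len(cf, hw_dlc, priv->can.ctrlmode);
    }

    /* TX path sketch: pick the DLC value to write back to the controller,
     * honoring raw DLC access when the user enabled it. */
    static u8 foo_tx_dlc(struct foo_priv *priv, const struct can_frame *cf)
    {
            return can_get_cc_dlc(cf, priv->can.ctrlmode);
    }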
-
Submitted by Oliver Hartkopp
The helper functions can_len2dlc and can_dlc2len are only relevant for CAN FD data length code (DLC) conversion. To fit the introduced can_cc_dlc2len for Classical CAN, we rename: can_dlc2len -> can_fd_dlc2len, to get the payload length from the DLC; can_len2dlc -> can_fd_len2dlc, to get the DLC from the payload length. Suggested-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr> Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net> Link: https://lore.kernel.org/r/20201110101852.1973-6-socketcan@hartkopp.net Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Submitted by Oliver Hartkopp
The naming of can_dlc, as an element of struct can_frame and also as a variable name, is misleading, as it claims to be a 'data length CODE' but in reality it always was a plain data length. With the introduction of a new 'len' element in struct can_frame, we can now remove can_dlc as a name and make clear which of the former uses was a plain length (-> 'len') or a data length code (-> 'dlc') value. Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net> Link: https://lore.kernel.org/r/20201120100444.3199-1-socketcan@hartkopp.net [mkl: gs_usb: keep struct gs_host_frame::can_dlc as is] Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
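For reference, a sketch of the resulting uapi layout, where can_dlc survives as a deprecated alias of len inside an anonymous union (treat the padding details as illustrative):

    struct can_frame {
            canid_t can_id;         /* 32 bit CAN_ID + EFF/RTR/ERR flags */
            union {
                    __u8 len;       /* CAN frame payload length in bytes */
                    __u8 can_dlc;   /* deprecated: a plain length, not a DLC */
            };
            __u8 __pad;             /* padding */
            __u8 __res0;            /* reserved / padding */
            __u8 len8_dlc;          /* optional DLC (9..15) for 8-byte payloads */
            __u8 data[CAN_MAX_DLEN] __attribute__((aligned(8)));
    };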
-
Submitted by Oliver Hartkopp
The macro was always used together with can_dlc2len(), which sanitizes the given dlc value on its own. Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net> Link: https://lore.kernel.org/r/20201110101852.1973-4-socketcan@hartkopp.net Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Submitted by Oliver Hartkopp
The get_can_dlc() macro is used to ensure that the payload length information of a Classical CAN frame is at most 8 bytes (CAN_MAX_DLEN). Rename the macro and use the correct constant in preparation for the len/dlc cleanup for Classical CAN frames. Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net> Link: https://lore.kernel.org/r/20201110101852.1973-3-socketcan@hartkopp.net Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Submitted by Mauro Carvalho Chehab
Kernel-doc markup should use this format: identifier - description. It should not have any type before the identifier, as otherwise the parser won't do the right thing. Also, some identifiers have different names between their prototypes and the kernel-doc markup. Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org> Link: https://lore.kernel.org/r/72f5c6628f5f278d67625f60893ffbc2ca28d46e.1605521731.git.mchehab+huawei@kernel.org Signed-off-by: Theodore Ts'o <tytso@mit.edu>
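For reference, the expected kernel-doc shape, shown on a hypothetical helper:

    /**
     * ext4_do_something - one-line description of the helper
     * @inode: inode being operated on
     * @flags: behavior flags
     *
     * Longer description goes here. The identifier after the opening marker
     * must match the prototype name exactly and must not carry a type.
     */
    int ext4_do_something(struct inode *inode, unsigned int flags);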
-
- 19 November 2020, 2 commits
-
-
Submitted by Sebastian Andrzej Siewior
push_scqe() uses in_interrupt() to figure out if it is allowed to sleep. The usage of in_interrupt() in drivers is being phased out, and Linus clearly requested that code which changes behaviour depending on context should either be separated, or the context be conveyed in an argument passed by the caller, which usually knows the context. Aside from that, in_interrupt() is not correct, as it does not catch preempt-disabled regions, which cannot sleep either. ns_send() (the only caller of push_scqe()) has the following callers: - vcc_sendmsg(), used as proto_ops::sendmsg, is expected to be invoked in preemptible context. -> vcc->dev->ops->send() (ns_send()) - atm_vcc::send via atmdev_ops::send, either directly (pointer copied by atm_init_aal34() or atm_init_aal5()) or via atm_send_aal0(). This is invoked by drivers (like br2684, clip, pppoatm, ...) which are called from net_device_ops::ndo_start_xmit with BH disabled. Add atmdev_ops::send_bh, which is used by callers from BH context (atm_send_aal*()); if this callback is missing then ::send is used instead. Implement this callback in nicstar and use it to replace in_interrupt(). Cc: Chas Williams <3chas3@gmail.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
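A sketch of the resulting dispatch, assuming the ATM core prefers ::send_bh when the driver provides it and falls back to ::send otherwise; the wrapper name is hypothetical:

    /* Hypothetical helper used by BH-context callers such as atm_send_aal*(). */
    static inline int atm_send_aal_skb(struct atm_vcc *vcc, struct sk_buff *skb)
    {
            if (vcc->dev->ops->send_bh)
                    return vcc->dev->ops->send_bh(vcc, skb);  /* must not sleep */

            return vcc->dev->ops->send(vcc, skb);             /* process context, may sleep */
    }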
-
Submitted by Ahmad Fatoum
It's arguable that most people interested in configuring a PPS signal want it as an external output, not as kernel input. PTP_CLK_REQ_PPS is for input, though. Add documentation to nudge readers in the correct direction. Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de> Acked-by: Richard Cochran <richardcochran@gmail.com> Link: https://lore.kernel.org/r/20201117213826.18235-1-a.fatoum@pengutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 18 November 2020, 4 commits
-
-
Submitted by Zhenzhong Duan
The "intel_iommu=off" command line is used to disable the IOMMU, but the IOMMU is force-enabled in a tboot system for security reasons. However, for better performance on high-speed network devices, a new option "intel_iommu=tboot_noforce" was introduced to disable this forcing. By default the kernel should panic if IOMMU init fails in tboot for security reasons, but this is unnecessary if we use "intel_iommu=tboot_noforce,off". Fix the code setting force_on and move intel_iommu_tboot_noforce from the tboot code to the Intel IOMMU code. Fixes: 7304e8f2 ("iommu/vt-d: Correctly disable Intel IOMMU force on") Signed-off-by: Zhenzhong Duan <zhenzhong.duan@gmail.com> Tested-by: Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com> Acked-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20201110071908.3133-1-zhenzhong.duan@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
-
Submitted by Mauro Carvalho Chehab
Some identifiers have different names between their prototypes and the kernel-doc markup. In the specific case of netif_subqueue_stopped(), keep the current markup for __netif_subqueue_stopped(), adding a new one for netif_subqueue_stopped(). Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Xie He
The DLCI driver (dlci.c) implements the Frame Relay protocol. However, we already have another, newer and better implementation of Frame Relay provided by the HDLC_FR driver (hdlc_fr.c). The DLCI driver's implementation of Frame Relay is used by only one hardware driver in the kernel - the SDLA driver (sdla.c). The SDLA driver provides Frame Relay support for the Sangoma S50x devices. However, the vendor provides their own driver (along with their own multi-WAN-protocol implementations including Frame Relay), called WANPIPE. I believe most users of the hardware would use the vendor-provided WANPIPE driver instead. (The WANPIPE driver was even once in the kernel, but was deleted in commit 8db60bcf ("[WAN]: Remove broken and unmaintained Sangoma drivers.") because the vendor no longer updated the in-kernel WANPIPE driver.) Cc: Mike McLagan <mike.mclagan@linux.org> Signed-off-by: Xie He <xie.he.0141@gmail.com> Link: https://lore.kernel.org/r/20201114150921.685594-1-xie.he.0141@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Randy Dunlap
The previous Kconfig patch led to some other build errors as reported by the 0day bot and my own overnight build testing. These are all in <linux/skbuff.h> when KCOV is enabled but SKB_EXTENSIONS is not, so fix those by combining the two conditions in the header file. Fixes: 6370cc3b ("net: add kcov handle to skb extensions") Fixes: 85ce50d3 ("net: kcov: don't select SKB_EXTENSIONS when there is no NET") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Reported-by: kernel test robot <lkp@intel.com> Cc: Aleksandr Nogikh <nogikh@google.com> Cc: Willem de Bruijn <willemb@google.com> Acked-by: Florian Westphal <fw@strlen.de> Link: https://lore.kernel.org/r/20201116212108.32465-1-rdunlap@infradead.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 17 November 2020, 2 commits
-
-
Submitted by Juri Lelli
Glenn reported that "an application [he developed produces] a BUG in deadline.c when a SCHED_DEADLINE task contends with CFS tasks on nested PTHREAD_PRIO_INHERIT mutexes. I believe the bug is triggered when a CFS task that was boosted by a SCHED_DEADLINE task boosts another CFS task (nested priority inheritance). ------------[ cut here ]------------ kernel BUG at kernel/sched/deadline.c:1462! invalid opcode: 0000 [#1] PREEMPT SMP CPU: 12 PID: 19171 Comm: dl_boost_bug Tainted: ... Hardware name: ... RIP: 0010:enqueue_task_dl+0x335/0x910 Code: ... RSP: 0018:ffffc9000c2bbc68 EFLAGS: 00010002 RAX: 0000000000000009 RBX: ffff888c0af94c00 RCX: ffffffff81e12500 RDX: 000000000000002e RSI: ffff888c0af94c00 RDI: ffff888c10b22600 RBP: ffffc9000c2bbd08 R08: 0000000000000009 R09: 0000000000000078 R10: ffffffff81e12440 R11: ffffffff81e1236c R12: ffff888bc8932600 R13: ffff888c0af94eb8 R14: ffff888c10b22600 R15: ffff888bc8932600 FS: 00007fa58ac55700(0000) GS:ffff888c10b00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007fa58b523230 CR3: 0000000bf44ab003 CR4: 00000000007606e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 PKRU: 55555554 Call Trace: ? intel_pstate_update_util_hwp+0x13/0x170 rt_mutex_setprio+0x1cc/0x4b0 task_blocks_on_rt_mutex+0x225/0x260 rt_spin_lock_slowlock_locked+0xab/0x2d0 rt_spin_lock_slowlock+0x50/0x80 hrtimer_grab_expiry_lock+0x20/0x30 hrtimer_cancel+0x13/0x30 do_nanosleep+0xa0/0x150 hrtimer_nanosleep+0xe1/0x230 ? __hrtimer_init_sleeper+0x60/0x60 __x64_sys_nanosleep+0x8d/0xa0 do_syscall_64+0x4a/0x100 entry_SYSCALL_64_after_hwframe+0x49/0xbe RIP: 0033:0x7fa58b52330d ... ---[ end trace 0000000000000002 ]--- He also provided a simple reproducer creating the situation below: So the execution order of locking steps is the following (N1 and N2 are non-deadline tasks. D1 is a deadline task. M1 and M2 are mutexes that are enabled with priority inheritance.) Time moves forward as this timeline goes down: N1 N2 D1 | | | | | | Lock(M1) | | | | | | Lock(M2) | | | | | | Lock(M2) | | | | Lock(M1) | | (!!bug triggered!) | Daniel reported a similar situation as well, by just letting ksoftirqd run with DEADLINE (and eventually block on a mutex). The problem is that boosted entities (Priority Inheritance) use the static DEADLINE parameters of the top priority waiter. However, there might be cases where the top waiter could be a non-DEADLINE entity that is currently boosted by a DEADLINE entity from a different lock chain (i.e., nested priority chains involving entities of non-DEADLINE classes). In this case, the top waiter's static DEADLINE parameters could be null (initialized to 0 at fork()) and replenish_dl_entity() would hit a BUG(). Fix this by keeping track of the original donor and using its parameters when a task is boosted. Reported-by: Glenn Elliott <glenn@aurora.tech> Reported-by: Daniel Bristot de Oliveira <bristot@redhat.com> Signed-off-by: Juri Lelli <juri.lelli@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Daniel Bristot de Oliveira <bristot@redhat.com> Link: https://lkml.kernel.org/r/20201117061432.517340-1-juri.lelli@redhat.com
-
Submitted by Peter Zijlstra
Mel reported that on some ARM64 platforms loadavg goes bananas and Will tracked it down to the following race: CPU0 CPU1 schedule() prev->sched_contributes_to_load = X; deactivate_task(prev); try_to_wake_up() if (p->on_rq &&) // false if (smp_load_acquire(&p->on_cpu) && // true ttwu_queue_wakelist()) p->sched_remote_wakeup = Y; smp_store_release(prev->on_cpu, 0); where both p->sched_contributes_to_load and p->sched_remote_wakeup are in the same word, and thus the stores X and Y race (and can clobber one another's data). Whereas prior to commit c6e7bd7a ("sched/core: Optimize ttwu() spinning on p->on_cpu") the p->on_cpu handoff serialized access to p->sched_remote_wakeup (just as it still does with p->sched_contributes_to_load), that commit broke this by calling ttwu_queue_wakelist() with p->on_cpu != 0. However, due to p->XXX = X ttwu() schedule() if (p->on_rq && ...) // false smp_mb__after_spinlock() if (smp_load_acquire(&p->on_cpu) && deactivate_task() ttwu_queue_wakelist()) p->on_rq = 0; p->sched_remote_wakeup = Y; we can be sure any 'current' store is complete and 'current' is guaranteed asleep. Therefore we can move p->sched_remote_wakeup into the current flags word. Note: while the observed failure was loadavg accounting gone wrong due to ttwu() clobbering p->sched_contributes_to_load, the reverse problem is also possible where schedule() clobbers p->sched_remote_wakeup; this could result in enqueue_entity() wrecking ->vruntime and causing scheduling artifacts. Fixes: c6e7bd7a ("sched/core: Optimize ttwu() spinning on p->on_cpu") Reported-by: Mel Gorman <mgorman@techsingularity.net> Debugged-by: Will Deacon <will@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20201117083016.GK3121392@hirez.programming.kicks-ass.net
-