- 29 December 2018, 2 commits
-
-
Submitted by Christian Brauner
commit 94f82008ce30e2624537d240d64ce718255e0b80 upstream.

This reverts commit 55956b59.

commit 55956b59 ("vfs: Allow userns root to call mknod on owned filesystems.") enabled mknod() in user namespaces for userns root if CAP_MKNOD is available. However, these device nodes are useless since any filesystem mounted from a non-initial user namespace will set the SB_I_NODEV flag on the filesystem. Now, when a device node is created in a non-initial user namespace a call to open() on said device node will fail due to:

    bool may_open_dev(const struct path *path)
    {
            return !(path->mnt->mnt_flags & MNT_NODEV) &&
                   !(path->mnt->mnt_sb->s_iflags & SB_I_NODEV);
    }

The problem with this is that as of the aforementioned commit mknod() creates partially functional device nodes in non-initial user namespaces. In particular, it has the consequence that as of the aforementioned commit open() will be more privileged with respect to device nodes than mknod(). Before it was the other way around. Specifically, if mknod() succeeded then it was transparent for any userspace application that a fatal error must have occurred when open() failed.

All of this breaks multiple userspace workloads and a widespread assumption about how to handle mknod(). Basically, all container runtimes and systemd live by the slogan "ask for forgiveness not permission" when running user namespace workloads. For mknod() the assumption is that if the syscall succeeds the device nodes are usable irrespective of whether it succeeds in a non-initial user namespace or not. This logic was chosen explicitly to allow for the glorious day when mknod() will actually be able to create fully functional device nodes in user namespaces.

A specific problem people are already running into when running 4.18 rc kernels are failing systemd services. For any distro that is run in a container, systemd services started with the PrivateDevices= property set will fail to start since the device nodes in question cannot be opened (cf. the arguments in [1]).

Full disclosure, Seth made the very sound argument that it is already possible to end up with partially functional device nodes. Any filesystem mounted with MS_NODEV set will allow mknod() to succeed but will not allow open() to succeed. The difference to the case here is that the MS_NODEV case is transparent to userspace since it is an explicitly set mount option while the SB_I_NODEV case is an implicit property enforced by the kernel and hence opaque to userspace.

[1]: https://github.com/systemd/systemd/pull/9483

Signed-off-by: Christian Brauner <christian@brauner.io>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Seth Forshee <seth.forshee@canonical.com>
Cc: Serge Hallyn <serge@hallyn.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
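For illustration, here is a minimal userspace sketch of the "ask for forgiveness not permission" pattern the commit describes; the device path and device numbers are made up and this is not code from the patch:

```c
/* Sketch (not from the patch): a container runtime's assumption that a
 * device node which mknod() created successfully can also be open()ed. */
#include <stdio.h>
#include <fcntl.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/tmp/demo-null";    /* hypothetical node */
	dev_t dev = makedev(1, 3);              /* same numbers as /dev/null */

	if (mknod(path, S_IFCHR | 0600, dev) < 0) {
		perror("mknod");                /* ask for forgiveness: give up here */
		return 1;
	}

	int fd = open(path, O_RDWR);
	if (fd < 0) {
		/* With SB_I_NODEV this is where 4.18-rc workloads blew up:
		 * mknod() said yes, open() says no. */
		fprintf(stderr, "open after successful mknod failed: %s\n",
			strerror(errno));
		unlink(path);
		return 1;
	}

	close(fd);
	unlink(path);
	return 0;
}
```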
-
Submitted by Dave Chinner
[ Upstream commit a837eca2412051628c0529768c9bc4f3580b040e ]

This reverts commit 61c6de667263184125d5ca75e894fcad632b0dd3.

The reverted commit added page reference counting to iomap page structures that are used to track block size < page size state. This was supposed to align the code with page migration page accounting assumptions, but what it has done instead is break XFS filesystems. Every fstests run I've done on sub-page block size XFS filesystems since picking up this commit 2 days ago has failed with bad page state errors such as:

    # ./run_check.sh "-m rmapbt=1,reflink=1 -i sparse=1 -b size=1k" "generic/038"
    ....
    SECTION       -- xfs
    FSTYP         -- xfs (debug)
    PLATFORM      -- Linux/x86_64 test1 4.20.0-rc6-dgc+
    MKFS_OPTIONS  -- -f -m rmapbt=1,reflink=1 -i sparse=1 -b size=1k /dev/sdc
    MOUNT_OPTIONS -- /dev/sdc /mnt/scratch

    generic/038 454s ...
     run fstests generic/038 at 2018-12-20 18:43:05
    XFS (sdc): Unmounting Filesystem
    XFS (sdc): Mounting V5 Filesystem
    XFS (sdc): Ending clean mount
    BUG: Bad page state in process kswapd0  pfn:3a7fa
    page:ffffea0000ccbeb0 count:0 mapcount:0 mapping:ffff88800d9b6360 index:0x1
    flags: 0xfffffc0000000()
    raw: 000fffffc0000000 dead000000000100 dead000000000200 ffff88800d9b6360
    raw: 0000000000000001 0000000000000000 00000000ffffffff
    page dumped because: non-NULL mapping
    CPU: 0 PID: 676 Comm: kswapd0 Not tainted 4.20.0-rc6-dgc+ #915
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.1-1 04/01/2014
    Call Trace:
     dump_stack+0x67/0x90
     bad_page.cold.116+0x8a/0xbd
     free_pcppages_bulk+0x4bf/0x6a0
     free_unref_page_list+0x10f/0x1f0
     shrink_page_list+0x49d/0xf50
     shrink_inactive_list+0x19d/0x3b0
     shrink_node_memcg.constprop.77+0x398/0x690
     ? shrink_slab.constprop.81+0x278/0x3f0
     shrink_node+0x7a/0x2f0
     kswapd+0x34b/0x6d0
     ? node_reclaim+0x240/0x240
     kthread+0x11f/0x140
     ? __kthread_bind_mask+0x60/0x60
     ret_from_fork+0x24/0x30
    Disabling lock debugging due to kernel taint
    ....

The failures are from anywhere that frees pages and empties the per-cpu page magazines, so it's not a predictable failure or an easy to debug failure.

generic/038 is a reliable reproducer of this problem - it has a 9 in 10 failure rate on one of my test machines. Failures on other machines have been at random points in fstests runs but every run has ended up tripping this problem. Hence generic/038 was used to bisect the failure because it was the most reliable failure.

It is too close to the 4.20 release (not to mention holidays) to try to diagnose, fix and test the underlying cause of the problem, so reverting the commit is the only option we have right now. The revert has been tested against a current tot 4.20-rc7+ kernel across multiple machines running sub-page block size XFS filesystems and none of the bad page state failures have been seen.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Cc: Piotr Jaroszynski <pjaroszynski@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Brian Foster <bfoster@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 21 December 2018, 38 commits
-
-
Submitted by Greg Kroah-Hartman
-
Submitted by Omar Sandoval
[ Upstream commit d6fd0ae2 ] There's a race between close_ctree() and cleaner_kthread(). close_ctree() sets btrfs_fs_closing(), and the cleaner stops when it sees it set, but this is racy; the cleaner might have already checked the bit and could be cleaning stuff. In particular, if it deletes unused block groups, it will create delayed iputs for the free space cache inodes. As of "btrfs: don't run delayed_iputs in commit", we're no longer running delayed iputs after a commit. Therefore, if the cleaner creates more delayed iputs after delayed iputs are run in btrfs_commit_super(), we will leak inodes on unmount and get a busy inode crash from the VFS. Fix it by parking the cleaner before we actually close anything. Then, any remaining delayed iputs will always be handled in btrfs_commit_super(). This also ensures that the commit in close_ctree() is really the last commit, so we can get rid of the commit in cleaner_kthread(). The fstest/generic/475 followed by 476 can trigger a crash that manifests as a slab corruption caused by accessing the freed kthread structure by a wake up function. Sample trace: [ 5657.077612] BUG: unable to handle kernel NULL pointer dereference at 00000000000000cc [ 5657.079432] PGD 1c57a067 P4D 1c57a067 PUD da10067 PMD 0 [ 5657.080661] Oops: 0000 [#1] PREEMPT SMP [ 5657.081592] CPU: 1 PID: 5157 Comm: fsstress Tainted: G W 4.19.0-rc8-default+ #323 [ 5657.083703] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.2-0-gf9626cc-prebuilt.qemu-project.org 04/01/2014 [ 5657.086577] RIP: 0010:shrink_page_list+0x2f9/0xe90 [ 5657.091937] RSP: 0018:ffffb5c745c8f728 EFLAGS: 00010287 [ 5657.092953] RAX: 0000000000000074 RBX: ffffb5c745c8f830 RCX: 0000000000000000 [ 5657.094590] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff9a8747fdf3d0 [ 5657.095987] RBP: ffffb5c745c8f9e0 R08: 0000000000000000 R09: 0000000000000000 [ 5657.097159] R10: ffff9a8747fdf5e8 R11: 0000000000000000 R12: ffffb5c745c8f788 [ 5657.098513] R13: ffff9a877f6ff2c0 R14: ffff9a877f6ff2c8 R15: dead000000000200 [ 5657.099689] FS: 00007f948d853b80(0000) GS:ffff9a877d600000(0000) knlGS:0000000000000000 [ 5657.101032] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 5657.101953] CR2: 00000000000000cc CR3: 00000000684bd000 CR4: 00000000000006e0 [ 5657.103159] Call Trace: [ 5657.103776] shrink_inactive_list+0x194/0x410 [ 5657.104671] shrink_node_memcg.constprop.84+0x39a/0x6a0 [ 5657.105750] shrink_node+0x62/0x1c0 [ 5657.106529] try_to_free_pages+0x1a4/0x500 [ 5657.107408] __alloc_pages_slowpath+0x2c9/0xb20 [ 5657.108418] __alloc_pages_nodemask+0x268/0x2b0 [ 5657.109348] kmalloc_large_node+0x37/0x90 [ 5657.110205] __kmalloc_node+0x236/0x310 [ 5657.111014] kvmalloc_node+0x3e/0x70 Fixes: 30928e9b ("btrfs: don't run delayed_iputs in commit") Signed-off-by: NOmar Sandoval <osandov@fb.com> Reviewed-by: NDavid Sterba <dsterba@suse.com> [ add trace ] Signed-off-by: NDavid Sterba <dsterba@suse.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Israel Rukshin
[ Upstream commit d7dcdf9d4e15189ecfda24cc87339a3425448d5c ] nvmet_rdma_release_rsp() may free the response before using it at error flow. Fixes: 8407879c ("nvmet-rdma: fix possible bogus dereference under heavy load") Signed-off-by: NIsrael Rukshin <israelr@mellanox.com> Reviewed-by: NSagi Grimberg <sagi@grimberg.me> Reviewed-by: NMax Gurtovoy <maxg@mellanox.com> Signed-off-by: NChristoph Hellwig <hch@lst.de> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by James Smart
[ Upstream commit 86880d64 ]

Delete operations are seeing NULL pointer references in call_timer_fn. Tracking these back, the timer appears to be the keep alive timer.

nvme_keep_alive_work(), which is tied to the timer that is cancelled by nvme_stop_keep_alive(), simply starts the keep alive io but doesn't wait for its completion. So nvme_stop_keep_alive() only stops a timer when it's pending. When a keep alive is in flight, there is no timer running and the nvme_stop_keep_alive() will have no effect on the keep alive io. Thus, if the io completes successfully, the keep alive timer will be rescheduled.

In the failure case, delete is called, the controller state is changed, the nvme_stop_keep_alive() is called while the io is outstanding, and the delete path continues on. The keep alive happens to successfully complete before the delete paths mark it as aborted as part of the queue termination, so the timer is restarted. The delete paths then tear down the controller, and later on the timer code fires and the timer entry is now corrupt.

Fix by validating the controller state before rescheduling the keep alive. Testing with the fix has confirmed the condition above was hit.

Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
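A simplified userspace model of the idea behind the fix, not the actual nvme code (the state enum, struct and function names here are invented): the completion path re-arms the keep-alive only while the controller is still live, so a delete that already changed the state cannot have the timer restarted behind its back.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative states, loosely modelled on the controller lifecycle. */
enum ctrl_state { CTRL_LIVE, CTRL_DELETING, CTRL_DEAD };

struct ctrl {
	enum ctrl_state state;
	bool timer_armed;
};

/* Completion handler for the keep-alive I/O: re-arm the timer only while
 * the controller is still live, so a concurrent delete that moved the
 * state on never sees the timer come back to life. */
static void keep_alive_complete(struct ctrl *c)
{
	if (c->state != CTRL_LIVE) {
		printf("controller not live, keep-alive not rescheduled\n");
		return;
	}
	c->timer_armed = true;
	printf("keep-alive timer re-armed\n");
}

int main(void)
{
	struct ctrl c = { .state = CTRL_LIVE, .timer_armed = false };

	keep_alive_complete(&c);        /* normal case: timer re-armed */

	c.state = CTRL_DELETING;        /* delete path changed the state ... */
	c.timer_armed = false;
	keep_alive_complete(&c);        /* ... so the completion must not re-arm */
	return 0;
}
```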
-
Submitted by Masahiro Yamada
[ Upstream commit ece27a337d42a3197935711997f2880f0957ed7e ]

Currently, the clock duty is set as tLOW/tHIGH = 1/1. For Fast-mode, tLOW is set to 1.25 us while the I2C spec requires tLOW >= 1.3 us.

tLOW/tHIGH = 5/4 would meet both Standard-mode and Fast-mode:

    Standard-mode: tLOW = 5.56 us, tHIGH = 4.44 us
    Fast-mode:     tLOW = 1.39 us, tHIGH = 1.11 us

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
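The quoted timings follow from splitting the SCL period 5:4; a small stand-alone check of the arithmetic (not part of the patch), assuming the nominal 100 kHz Standard-mode and 400 kHz Fast-mode rates:

```c
#include <stdio.h>

/* Reproduce the tLOW/tHIGH figures from the commit message for a 5:4 duty. */
static void duty(const char *mode, double scl_hz)
{
	double period_us = 1e6 / scl_hz;        /* full SCL period in us */
	double tlow  = period_us * 5.0 / 9.0;   /* 5 parts low            */
	double thigh = period_us * 4.0 / 9.0;   /* 4 parts high           */

	printf("%-14s tLOW = %.2f us, tHIGH = %.2f us\n", mode, tlow, thigh);
}

int main(void)
{
	duty("Standard-mode", 100000.0);   /* prints tLOW = 5.56, tHIGH = 4.44 */
	duty("Fast-mode",     400000.0);   /* prints tLOW = 1.39, tHIGH = 1.11 */
	return 0;
}
```

With the old 1/1 duty the same computation gives a Fast-mode tLOW of 2.5 us / 2 = 1.25 us, which is the spec violation the patch fixes.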
-
Submitted by Masahiro Yamada
[ Upstream commit 8469636ab5d8c77645b953746c10fda6983a8830 ]

Currently, the clock duty is set as tLOW/tHIGH = 1/1. For Fast-mode, tLOW is set to 1.25 us while the I2C spec requires tLOW >= 1.3 us.

tLOW/tHIGH = 5/4 would meet both Standard-mode and Fast-mode:

    Standard-mode: tLOW = 5.56 us, tHIGH = 4.44 us
    Fast-mode:     tLOW = 1.39 us, tHIGH = 1.11 us

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Hans de Goede
[ Upstream commit 0544ee4b1ad574aec3b6379af5f5cdee42840971 ] Some AMD based HP laptops have a SMB0001 ACPI device node which does not define any methods. This leads to the following error in dmesg: [ 5.222731] cmi: probe of SMB0001:00 failed with error -5 This commit makes acpi_smbus_cmi_add() return -ENODEV instead in this case silencing the error. In case of a failure of the i2c_add_adapter() call this commit now propagates the error from that call instead of -EIO. Signed-off-by: NHans de Goede <hdegoede@redhat.com> Signed-off-by: NWolfram Sang <wsa@the-dreams.de> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Krzysztof Adamski

[ Upstream commit 6c7f25cae54b840302e4f1b371dbf318fbf09ab2 ]

According to the Intel (R) Axxia TM Lionfish Communication Processor Peripheral Subsystem Hardware Reference Manual, the AXXIA I2C module has a programmable Master Wait Timer which, among others, checks the time between commands sent in manual mode. When a timeout (25ms) passes, the TSS bit is set in the Master Interrupt Status register and a Stop command is issued by the hardware.

axxia_i2c_xfer(), however, does not properly handle this situation. For each message a separate axxia_i2c_xfer_msg() is called and this function incorrectly assumes that any interrupt might happen only when waiting for completion. This is mostly correct but there is one exception - a master timeout can trigger if enough time has passed between individual transfers. It will, by definition, happen between transfers when the interrupts are disabled by the code.

If that happens, the hardware issues a Stop command. The interrupt indicating the timeout will not be triggered as soon as we enable them since the Master Interrupt Status is cleared when master mode is entered again (which happens before enabling irqs), meaning this error is lost and the transfer is continued even though the Stop was issued on the bus. The subsequent operations complete without error but a bogus value (0xFF in case of read) is read as the client device is confused because of the aborted transfer. No error is returned from master_xfer(), making the caller believe that a valid value was read.

To fix the problem, the TSS bit (indicating timeout) in the Master Interrupt Status register is checked before each transfer. If it is set, there was a timeout before this transfer and (as described above) the hardware already issued a Stop command, so the transaction should be aborted and -ETIMEDOUT is returned from the master_xfer() callback.

In order to be sure no timeout was issued, we can't just read the status just before starting a new transaction as there will always be a small window of time (few CPU cycles at best) where this might still happen. For this reason we have to temporarily disable the timer before checking for the TSS bit. Disabling it will, however, clear the TSS bit, so in order to preserve that information we have to read it in the ISR, and we have to ensure that the TSS interrupt is not masked between transfers of one transaction. There is no need to call bus recovery or controller reinitialization if that happens, so it's skipped.

Signed-off-by: Krzysztof Adamski <krzysztof.adamski@nokia.com>
Reviewed-by: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
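A schematic model of the added check, in plain C with invented register and state names rather than the real Axxia accessors: pause the wait timer, consult the timeout status latched by the ISR, and abort with -ETIMEDOUT if the hardware already put a Stop on the bus.

```c
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Made-up model of the relevant hardware state; names are illustrative,
 * not the real Axxia register layout. */
struct i2c_hw {
	bool wait_timer_running;
	bool tss_latched;        /* timeout seen by the ISR since last transfer */
};

/* The shape of the fix: consult the latched timeout *before* programming a
 * new transfer, with the wait timer paused so no new timeout can sneak in
 * between the check and the transfer setup. */
static int start_transfer(struct i2c_hw *hw)
{
	hw->wait_timer_running = false;          /* temporarily stop the timer */

	if (hw->tss_latched) {
		hw->tss_latched = false;
		return -ETIMEDOUT;               /* Stop already went out on the bus */
	}

	hw->wait_timer_running = true;           /* re-arm and run the transfer */
	return 0;
}

int main(void)
{
	struct i2c_hw hw = { .wait_timer_running = true, .tss_latched = true };

	int ret = start_transfer(&hw);
	printf("first transfer: %s\n", ret == -ETIMEDOUT ? "aborted (-ETIMEDOUT)" : "ok");

	ret = start_transfer(&hw);
	printf("second transfer: %s\n", ret ? "aborted" : "ok");
	return 0;
}
```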
-
Submitted by Ido Schimmel
[ Upstream commit 993107fea5eefdfdfde1ca38d3f01f0bebf76e77 ] When deleting a VLAN device using an ioctl the netdev is unregistered before the VLAN filter is updated via ndo_vlan_rx_kill_vid(). It can lead to a use-after-free in mlxsw in case the VLAN device is deleted while being enslaved to a bridge. The reason for the above is that when mlxsw receives the CHANGEUPPER event, it wrongly assumes that the VLAN device is no longer its upper and thus destroys the internal representation of the bridge port despite the reference count being non-zero. Fix this by checking if the VLAN device is our upper using its real device. In net-next I'm going to remove this trick and instead make mlxsw completely agnostic to the order of the events. Fixes: c57529e1 ("mlxsw: spectrum: Replace vPorts with Port-VLAN") Signed-off-by: NIdo Schimmel <idosch@mellanox.com> Reviewed-by: NPetr Machata <petrm@mellanox.com> Signed-off-by: NDavid S. Miller <davem@davemloft.net> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Stefan Hajnoczi
[ Upstream commit c38f57da428b033f2721b611d84b1f40bde674a8 ] If a local process has closed a connected socket and hasn't received a RST packet yet, then the socket remains in the table until a timeout expires. When a vhost_vsock instance is released with the timeout still pending, the socket is never freed because vhost_vsock has already set the SOCK_DONE flag. Check if the close timer is pending and let it close the socket. This prevents the race which can leak sockets. Reported-by: NMaximilian Riemensberger <riemensberger@cadami.net> Cc: Graham Whaley <graham.whaley@gmail.com> Signed-off-by: NStefan Hajnoczi <stefanha@redhat.com> Signed-off-by: NMichael S. Tsirkin <mst@redhat.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Steve French
[ Upstream commit 6e785302dad32228819d8066e5376acd15d0e6ba ] Missing a dependency. Shouldn't show cifs posix extensions in Kconfig if CONFIG_CIFS_ALLOW_INSECURE_DIALECTS (ie SMB1 protocol) is disabled. Signed-off-by: NSteve French <stfrench@microsoft.com> Reviewed-by: NPavel Shilovsky <pshilov@microsoft.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Sam Bobroff
[ Upstream commit e594a5e349ddbfdaca1951bb3f8d72f3f1660d73 ]

When unloading the ast driver, a warning message is printed by drm_mode_config_cleanup() because a reference is still held to one of the drm_connector structs. Correct this by calling drm_crtc_force_disable_all() in ast_fbdev_destroy().

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1e613f3c630c7bbc72e04a44b178259b9164d2f6.1543798395.git.sbobroff@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Dan Williams
[ Upstream commit b5fd2e00a60248902315fb32210550ac3cb9f44c ] A "short" ARS (address range scrub) instructs the platform firmware to return known errors. In contrast, a "long" ARS instructs platform firmware to arrange every data address on the DIMM to be read / checked for poisoned data. The conversion of the flags in commit d3abaf43bab8 "acpi, nfit: Fix Address Range Scrub completion tracking", changed the meaning of passing '0' to acpi_nfit_ars_rescan(). Previously '0' meant "not short", now '0' is ARS_REQ_SHORT. Pass ARS_REQ_LONG to restore the expected scrub-type behavior of user-initiated ARS sessions. Fixes: d3abaf43bab8 ("acpi, nfit: Fix Address Range Scrub completion tracking") Reported-by: NJacek Zloch <jacek.zloch@intel.com> Cc: Vishal Verma <vishal.l.verma@intel.com> Reviewed-by: NDave Jiang <dave.jiang@intel.com> Reviewed-by: NVishal Verma <vishal.l.verma@intel.com> Signed-off-by: NDan Williams <dan.j.williams@intel.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
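A minimal illustration of why this kind of flag conversion bites (the function shapes and enum values here are invented for the example, not the nfit code): once the zero value of the new request type means "short", any caller still passing a literal 0 silently asks for the opposite scrub.

```c
#include <stdio.h>

/* Old interface: a "short scrub?" flag, so passing 0 requested a long scrub. */
static void ars_rescan_old(int short_flag)
{
	printf("old API, passing %d -> %s ARS\n", short_flag,
	       short_flag ? "short" : "long");
}

/* New interface: an explicit request type whose zero value is the short scrub. */
enum ars_req { ARS_REQ_SHORT = 0, ARS_REQ_LONG = 1 };

static void ars_rescan_new(enum ars_req req)
{
	printf("new API, passing %d -> %s ARS\n", (int)req,
	       req == ARS_REQ_LONG ? "long" : "short");
}

int main(void)
{
	ars_rescan_old(0);             /* user-initiated rescan: long scrub, as intended */
	ars_rescan_new(0);             /* unconverted caller: now silently a short scrub */
	ars_rescan_new(ARS_REQ_LONG);  /* the fix: request the long scrub explicitly     */
	return 0;
}
```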
-
Submitted by Dan Williams
[ Upstream commit e3f5df762d4a6ef6326c3c09bc9f89ea8a2eab2c ] In preparation for libnvdimm growing new restrictions to detect section conflicts between persistent memory regions, enable nfit_test to allocate aligned resources. Use a gen_pool to allocate nfit_test's fake resources in a separate address space from the virtual translation of the same. Reviewed-by: NVishal Verma <vishal.l.verma@intel.com> Tested-by: NVishal Verma <vishal.l.verma@intel.com> Signed-off-by: NDan Williams <dan.j.williams@intel.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by James Zhu
[ Upstream commit 0a9b89b2e2e7b6d90f81ddc47e489be1043e01b1 ] Replace vcn_v1_0_stop with vcn_v1_0_set_powergating_state during suspend, to keep adev->vcn.cur_state update. It will fix VCN S3 hung issue. Signed-off-by: NJames Zhu <James.Zhu@amd.com> Reviewed-by: NLeo Liu <leo.liu@amd.com> Signed-off-by: NAlex Deucher <alexander.deucher@amd.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Baruch Siach
[ Upstream commit 0fb628f0 ] The .validate phylink callback should empty the supported bitmap when the interface mode is invalid. Cc: Maxime Chevallier <maxime.chevallier@bootlin.com> Cc: Antoine Tenart <antoine.tenart@bootlin.com> Reported-by: NRussell King <rmk+kernel@armlinux.org.uk> Signed-off-by: NBaruch Siach <baruch@tkos.co.il> Signed-off-by: NDavid S. Miller <davem@davemloft.net> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Baruch Siach
[ Upstream commit 01b3fd5a ]

The mvpp2_phylink_validate() relies on the interface field of phylink_link_state to determine valid link modes. However, when called from phylink_sfp_module_insert() this field is not initialized. The default switch case then excludes 10G link modes. This allows 10G SFP modules that are detected correctly to be configured at a max rate of 2.5G.

Catch the uninitialized PHY mode case, and allow 10G rates.

Fixes: d97c9f4a ("net: mvpp2: 1000baseX support")
Cc: Maxime Chevallier <maxime.chevallier@bootlin.com>
Cc: Antoine Tenart <antoine.tenart@bootlin.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Russell King
[ Upstream commit 70bb27b79adf63ea39e37371d09c823c7a8f93ce ]

Commit 8c0e64ac ("thermal: armada: get rid of the ->is_valid() pointer") removed the unnecessary indirection through a function pointer, but in doing so, also removed the negation operator too:

    -	if (priv->data->is_valid && !priv->data->is_valid(priv)) {
    +	if (armada_is_valid(priv)) {

which results in:

    armada_thermal f06f808c.thermal: Temperature sensor reading not valid
    armada_thermal f2400078.thermal: Temperature sensor reading not valid
    armada_thermal f4400078.thermal: Temperature sensor reading not valid

at boot, or whenever the "temp" sysfs file is read.

Replace the negation operator.

Fixes: 8c0e64ac ("thermal: armada: get rid of the ->is_valid() pointer")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Eduardo Valentin <edubezval@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Nicolas Saenz Julienne
[ Upstream commit ecb239d96d369c23c33d41708646df646de669f4 ] After getting a reference to the platform device's of_node the probe function ends up calling of_find_matching_node() using the node as an argument. The function takes care of decreasing the refcount on it. We are then incorrectly decreasing the refcount on that node again. This patch removes the unwarranted call to of_node_put(). Fixes: 414fd46e ("fsl/fman: Add FMan support") Signed-off-by: NNicolas Saenz Julienne <nsaenzjulienne@suse.de> Signed-off-by: NDavid S. Miller <davem@davemloft.net> Signed-off-by: NSasha Levin <sashal@kernel.org>
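A generic refcounting sketch of the bug pattern in plain C, not the device-tree API: when a helper consumes the reference passed to it, an extra put by the caller drops a count the caller no longer owns.

```c
#include <stdio.h>

struct node {
	int refcount;
};

static void node_get(struct node *n) { n->refcount++; }
static void node_put(struct node *n) { n->refcount--; }

/* Like of_find_matching_node(): the helper itself drops the reference it
 * was handed on the "from" node. */
static void find_matching(struct node *from)
{
	/* ... walk the tree ... */
	node_put(from);
}

int main(void)
{
	struct node dev_node = { .refcount = 1 };   /* base reference held elsewhere */

	node_get(&dev_node);          /* probe takes its own reference  */
	find_matching(&dev_node);     /* helper consumes that reference */
	node_put(&dev_node);          /* BUG: extra put drops the base reference too */

	printf("refcount after probe: %d (expected 1)\n", dev_node.refcount);
	return 0;
}
```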
-
Submitted by Nathan Jones
[ Upstream commit c2a3831df6dc164af66d8d86cf356a90c021b86f ] While trying to use the dma_mmap_*() interface, it was noticed that this interface returns strange values when passed an incorrect length. If neither of the if() statements fire then the return value is uninitialized. In the worst case it returns 0 which means the caller will think the function succeeded. Fixes: 1655cf88 ("ARM: dma-mapping: Remove traces of NOMMU code") Signed-off-by: NNathan Jones <nathanj439@gmail.com> Reviewed-by: NRobin Murphy <robin.murphy@arm.com> Acked-by: NVladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: NRussell King <rmk+kernel@armlinux.org.uk> Signed-off-by: NSasha Levin <sashal@kernel.org>
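The bug class is easy to show outside the kernel; a sketch with an invented function (not the actual ARM dma-mapping code) where neither branch fires and the return value would be whatever garbage is lying around, possibly 0, so the caller thinks the call succeeded:

```c
#include <errno.h>
#include <stdio.h>

/* Buggy shape: in the real code "ret" was simply left uninitialized and only
 * assigned inside the two if() branches; 0xdead stands in for stack garbage. */
static int mmap_range_buggy(unsigned long off, unsigned long len, unsigned long size)
{
	int ret = 0xdead;

	if (off >= size)
		ret = -EINVAL;          /* offset entirely out of range  */
	if (off + len <= size)
		ret = 0;                /* pretend the mapping succeeded */

	return ret;                     /* off < size but off + len > size: garbage */
}

/* Fixed shape: start from an error value and only overwrite it on success. */
static int mmap_range_fixed(unsigned long off, unsigned long len, unsigned long size)
{
	int ret = -ENXIO;

	if (off < size && off + len <= size)
		ret = 0;

	return ret;
}

int main(void)
{
	/* Offset inside the buffer but length running past the end: the gap case. */
	printf("buggy: %d\n", mmap_range_buggy(10, 100, 64));
	printf("fixed: %d\n", mmap_range_fixed(10, 100, 64));
	return 0;
}
```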
-
Submitted by Vladimir Murzin
[ Upstream commit 3d0358d0ba048c5afb1385787aaec8fa5ad78fcc ]

Chris has discovered and reported that v7_dma_inv_range() may corrupt memory if the address range is not aligned to cache line size. Since the whole cache-v7m.S was lifted from cache-v7.S, the same observation applies to v7m_dma_inv_range(). So the fix just mirrors what has been done for v7, with a few M-class specifics.

Cc: Chris Cole <chris@sageembedded.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Chris Cole
[ Upstream commit a1208f6a ]

This patch addresses possible memory corruption when v7_dma_inv_range(start_address, end_address) address parameters are not aligned to whole cache lines. This function issues "invalidate" cache management operations to all cache lines from start_address (inclusive) to end_address (exclusive). When start_address and/or end_address are not aligned, the start and/or end cache lines are first issued a "clean & invalidate" operation. The assumption is this is done to ensure that any dirty data addresses outside the address range (but part of the first or last cache lines) are cleaned/flushed so that data is not lost, which could happen if just an invalidate is issued.

The problem is that these first/last partial cache lines are issued "clean & invalidate" and then "invalidate". This second "invalidate" is not required and worse can cause "lost" writes to addresses outside the address range but part of the cache line. If another component writes to its part of the cache line between the "clean & invalidate" and "invalidate" operations, the write can get lost. This fix is to remove the extra "invalidate" operation when unaligned addresses are used.

A kernel module is available that has a stress test to reproduce the issue and a unit test of the updated v7_dma_inv_range(). It can be downloaded from http://ftp.sageembedded.com/outgoing/linux/cache-test-20181107.tgz.

v7_dma_inv_range() is called by dmac_[un]map_area(addr, len, direction) when the direction is DMA_FROM_DEVICE. One can (I believe) successfully argue that DMA from a device to main memory should use buffers aligned to cache line size, because the "clean & invalidate" might overwrite data that the device just wrote using DMA. But if a driver does use unaligned buffers, at least this fix will prevent memory corruption outside the buffer.

Signed-off-by: Chris Cole <chris@sageembedded.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
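A small stand-alone illustration, assuming a 32-byte cache line and made-up buffer addresses, of which bytes share the partial first and last lines of an unaligned DMA buffer; it is those outside-the-buffer bytes that the redundant extra "invalidate" could clobber:

```c
#include <stdio.h>
#include <stdint.h>

#define CACHE_LINE 32u   /* assumed line size for the example */

int main(void)
{
	uintptr_t start = 0x1010;            /* hypothetical unaligned buffer start */
	uintptr_t end   = 0x1070;            /* exclusive end                       */

	/* Round start down and (end - 1) down to the owning cache line. */
	uintptr_t first_line = start & ~(uintptr_t)(CACHE_LINE - 1);
	uintptr_t last_line  = (end - 1) & ~(uintptr_t)(CACHE_LINE - 1);

	printf("buffer:     [0x%lx, 0x%lx)\n",
	       (unsigned long)start, (unsigned long)end);
	printf("first line: [0x%lx, 0x%lx), bytes [0x%lx, 0x%lx) lie outside the buffer\n",
	       (unsigned long)first_line, (unsigned long)(first_line + CACHE_LINE),
	       (unsigned long)first_line, (unsigned long)start);
	printf("last line:  [0x%lx, 0x%lx), bytes [0x%lx, 0x%lx) lie outside the buffer\n",
	       (unsigned long)last_line, (unsigned long)(last_line + CACHE_LINE),
	       (unsigned long)end, (unsigned long)(last_line + CACHE_LINE));
	return 0;
}
```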
-
Submitted by Alexei Starovoitov
[ Upstream commit c3494801cd1785e2c25f1a5735fa19ddcf9665da ] Malicious user space may try to force the verifier to use as much cpu time and memory as possible. Hence check for pending signals while verifying the program. Note that suspend of sys_bpf(PROG_LOAD) syscall will lead to EAGAIN, since the kernel has to release the resources used for program verification. Reported-by: NAnatoly Trosinenko <anatoly.trosinenko@gmail.com> Signed-off-by: NAlexei Starovoitov <ast@kernel.org> Acked-by: NDaniel Borkmann <daniel@iogearbox.net> Acked-by: NEdward Cree <ecree@solarflare.com> Signed-off-by: NDaniel Borkmann <daniel@iogearbox.net> Signed-off-by: NSasha Levin <sashal@kernel.org>
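A userspace analogue of the idea (the verifier itself checks signal_pending() on the current task): a long-running loop periodically tests whether a signal arrived and bails out with an EAGAIN-style error instead of consuming CPU to completion.

```c
#include <errno.h>
#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t got_signal;

static void handle_sig(int sig)
{
	(void)sig;
	got_signal = 1;
}

/* Stand-in for a long verification pass: abort early once a signal arrives
 * instead of pinning the CPU for the full run. */
static int verify(void)
{
	for (unsigned long insn = 0; insn < 2000000000UL; insn++) {
		if ((insn & 0xffff) == 0 && got_signal)
			return -EAGAIN;   /* caller can retry after rescheduling */
		/* ... per-instruction verification work would go here ... */
	}
	return 0;
}

int main(void)
{
	signal(SIGINT, handle_sig);       /* press Ctrl+C to trigger the early exit */

	int ret = verify();
	printf("verify() returned %d (%s)\n", ret,
	       ret == -EAGAIN ? "interrupted by a pending signal" : "ran to completion");
	return 0;
}
```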
-
Submitted by Saeed Mahameed
[ Upstream commit 1b603f9e4313348608f256b564ed6e3d9e67f377 ]

MLX4_EN depends on NETDEVICES, ETHERNET and INET Kconfigs. Make sure they are listed in MLX4_EN Kconfig dependencies.

This fixes the following build break:

    drivers/net/ethernet/mellanox/mlx4/en_rx.c:582:18: warning: ‘struct iphdr’ declared inside parameter list [enabled by default]
             struct iphdr *iph)
                           ^
    drivers/net/ethernet/mellanox/mlx4/en_rx.c:582:18: warning: its scope is only this definition or declaration, which is probably not what you want [enabled by default]
    drivers/net/ethernet/mellanox/mlx4/en_rx.c: In function ‘get_fixed_ipv4_csum’:
    drivers/net/ethernet/mellanox/mlx4/en_rx.c:586:20: error: dereferencing pointer to incomplete type
      __u8 ipproto = iph->protocol;

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Anderson Luiz Alves
[ Upstream commit a74515604a7b171f2702bdcbd1e231225fb456d0 ] Disable hardware level MAC learning because it breaks station roaming. When enabled it drops all frames that arrive from a MAC address that is on a different port at learning table. Signed-off-by: NAnderson Luiz Alves <alacn1@gmail.com> Reviewed-by: NAndrew Lunn <andrew@lunn.ch> Signed-off-by: NDavid S. Miller <davem@davemloft.net> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Matteo Croce
[ Upstream commit 59f997b088d26a774958cb7b17b0763cd82de7ec ]

A MAC address must be unique among all the macvlan devices with the same lower device. The only exception is the passthru [sic] mode, which shares the lower device address.

When duplicate addresses are detected, EBUSY is returned when bringing the interface up:

    # ip link add macvlan0 link eth0 type macvlan
    # read addr </sys/class/net/eth0/address
    # ip link set macvlan0 address $addr
    # ip link set macvlan0 up
    RTNETLINK answers: Device or resource busy

Use correct error code which is EADDRINUSE, and do the check also earlier, on address change:

    # ip link set macvlan0 address $addr
    RTNETLINK answers: Address already in use

Signed-off-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Juha-Matti Tilli
[ Upstream commit fd6f32f7 ] These devices support read zero after trim (RZAT), as they advertise to the OS. However, the OS doesn't believe the SSDs unless they are explicitly whitelisted. Acked-by: NMartin K. Petersen <martin.petersen@oracle.com> Signed-off-by: NJuha-Matti Tilli <juha-matti.tilli@iki.fi> Signed-off-by: NJens Axboe <axboe@kernel.dk> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Tony Lindgren
[ Upstream commit 6c3516fed7b61a3527459ccfa67fab130d910610 ] I noticed that the Android v3.0.8 kernel on droid4 is using different keypad values from the mainline kernel and does not have issues with keys occasionally being stuck until pressed again. Turns out there was an earlier patch posted to fix this as "Input: omap-keypad: errata i689: Correct debounce time", but it was never reposted to fix use macros for timing calculations. This updated version is using macros, and also fixes the use of the input clock rate to use 32768KiHz instead of 32000KiHz. And we want to use the known good Android kernel values of 3 and 6 instead of 2 and 6 in the earlier patch. Reported-by: NPavel Machek <pavel@ucw.cz> Signed-off-by: NTony Lindgren <tony@atomide.com> Signed-off-by: NDmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Teika Kazura
[ Upstream commit 5a6dab15f7a79817cab4af612ddd99eda793fce6 ]

SMBus works fine for the touchpad with id SYN3221, used in the HP 15-ay000 series. This device has been reported in these messages in the "linux-input" mailing list:

* https://marc.info/?l=linux-input&m=152016683003369&w=2
* https://www.spinics.net/lists/linux-input/msg52525.html

Reported-by: Nitesh Debnath <niteshkd1999@gmail.com>
Reported-by: Teika Kazura <teika@gmx.com>
Signed-off-by: Teika Kazura <teika@gmx.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Dan Carpenter
[ Upstream commit 2e85c57493e391b93445c1e0d530b36b95becc64 ] The > comparison should be >= or we write one element beyond the end of the unit->clk_table[] array. (The unit->clk_table[] array is allocated in the mmp_clk_init() function and it has unit->nr_clks elements). Fixes: 4661fda1 ("clk: mmp: add basic support functions for DT support") Signed-off-by: NDan Carpenter <dan.carpenter@oracle.com> Signed-off-by: NStephen Boyd <sboyd@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
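A self-contained sketch of the off-by-one pattern that this and the following mvebu fix address (the array name and size are invented): with ">" as the guard, an index equal to the array length is accepted and the store would land one element past the end.

```c
#include <stdio.h>
#include <stdbool.h>

#define NR_CLKS 4   /* stands in for unit->nr_clks */

/* Buggy guard (">", as in the original code): idx == NR_CLKS is accepted,
 * so a subsequent clk_table[idx] store would land one past the end. */
static bool idx_ok_buggy(unsigned int idx)
{
	return !(idx > NR_CLKS);
}

/* Fixed guard (">="): anything at or past the end of the table is rejected. */
static bool idx_ok_fixed(unsigned int idx)
{
	return !(idx >= NR_CLKS);
}

int main(void)
{
	unsigned int idx = NR_CLKS;   /* the boundary case */

	printf("idx=%u, buggy check: %s\n", idx,
	       idx_ok_buggy(idx) ? "accepted (out-of-bounds write)" : "rejected");
	printf("idx=%u, fixed check: %s\n", idx,
	       idx_ok_fixed(idx) ? "accepted" : "rejected");
	return 0;
}
```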
-
Submitted by Dan Carpenter
[ Upstream commit d9f5b7f5dd0fa74a89de5a7ac1e26366f211ccee ]

These > comparisons should be >= to prevent reading beyond the end of the clk_data->hws[] buffer.

The clk_data->hws[] array is allocated in cp110_syscon_common_probe() when we do:

    cp110_clk_data = devm_kzalloc(dev, sizeof(*cp110_clk_data) +
                                  sizeof(struct clk_hw *) * CP110_CLK_NUM,
                                  GFP_KERNEL);

As you can see, it has CP110_CLK_NUM elements which is equivalent to CP110_MAX_CORE_CLOCKS + CP110_MAX_GATABLE_CLOCKS.

Fixes: d3da3eae ("clk: mvebu: new driver for Armada CP110 system controller")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Evan Quan
[ Upstream commit 10cb3e6b63bf4266a5198813526fdd7259ffb8be ] For display config change event only, pre-display config settings are needed. Signed-off-by: NEvan Quan <evan.quan@amd.com> Acked-by: NAlex Deucher <alexander.deucher@amd.com> Signed-off-by: NAlex Deucher <alexander.deucher@amd.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Wen Yang
[ Upstream commit 098336deb946f37a70afc0979af388b615c378bf ] The error checks on ret for a negative error return always fails because the return value of iommu_map_sg() is unsigned and can never be negative. Detected with Coccinelle: drivers/gpu/drm/msm/msm_iommu.c:69:9-12: WARNING: Unsigned expression compared with zero: ret < 0 Signed-off-by: NWen Yang <wen.yang99@zte.com.cn> CC: Rob Clark <robdclark@gmail.com> CC: David Airlie <airlied@linux.ie> CC: Julia Lawall <julia.lawall@lip6.fr> CC: linux-arm-msm@vger.kernel.org CC: dri-devel@lists.freedesktop.org CC: freedreno@lists.freedesktop.org CC: linux-kernel@vger.kernel.org Signed-off-by: NSean Paul <seanpaul@chromium.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
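A stand-alone sketch of the Coccinelle finding, with invented names and a simplified stand-in for the mapping call: when the returned type is unsigned, "ret < 0" can never be true, so the error check is dead code; a meaningful check compares the mapped size against the expected size instead.

```c
#include <stdio.h>
#include <stddef.h>

/* Simplified stand-in for a map call that returns the number of bytes
 * mapped (0 on failure) as an unsigned size_t, which is the whole point. */
static size_t fake_map_sg(size_t want, int fail)
{
	return fail ? 0 : want;
}

int main(void)
{
	size_t want = 4096;
	size_t ret = fake_map_sg(want, 1 /* simulate failure */);

	if (ret < 0)                    /* never true: ret is unsigned */
		printf("dead code: an unsigned value is never negative\n");

	if (ret < want)                 /* meaningful check: did we map it all? */
		printf("mapping failed or was short (%zu of %zu bytes)\n", ret, want);

	return 0;
}
```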
-
Submitted by YueHaibing
[ Upstream commit ce25aa3ee6939d83979cccf7adc5737cba9a0cb7 ]

'dpu_enc' is a member of 'drm_enc', and 'drm_enc' got allocated with devm_kzalloc in dpu_encoder_init. This gives this error message:

    ./drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c:459:1-6: WARNING: invalid free of devm_ allocated data

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Sean Paul
[ Upstream commit 081679c5 ] It causes a WARN in drm_atomic_get_plane_state(), and is not used by atomic (or dpu). Signed-off-by: NSean Paul <seanpaul@chromium.org> Reviewed-by: NDaniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: NRob Clark <robdclark@gmail.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Todor Tomov
[ Upstream commit ee445635 ]

SoCs that contain MDP5 have a top level wrapper called MDSS that manages locks, power and irq for the sub-blocks within it. Irq for HDMI is also routed through the MDSS.

Shortly after Hot Plug Detection (HPD) is enabled in HDMI, HDMI interrupts are received by the MDSS interrupt handler. However at this moment the HDMI irq is still not mapped to the MDSS irq domain, so the HDMI irq handler cannot be called to process the interrupts. This leads to a flood of HDMI interrupts on CPU 0.

If we are lucky to have the HDMI initialization running on a different CPU, it will eventually map the HDMI irq to the MDSS irq domain, the next HDMI interrupt will be handled by the HDMI irq handler, the interrupt flood will stop and we will recover. If the HDMI initialization is running on CPU 0, then it cannot complete and there is nothing to stop the interrupt flood on CPU 0. The system is stuck.

Fix this by moving the HPD enablement after the HDMI irq is mapped to the MDSS irq domain.

Signed-off-by: Todor Tomov <todor.tomov@linaro.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Yangtao Li
[ Upstream commit a51921c0db3fd26c4ed83dc0ec5d32988fa02aa5 ] use of_node_put() to release the refcount. Signed-off-by: NYangtao Li <tiny.windzz@gmail.com> Signed-off-by: NDavid S. Miller <davem@davemloft.net> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Yangtao Li
[ Upstream commit dac097c4546e4c5b16dd303a1e97c1d319c8ab3e ] of_find_node_by_path() acquires a reference to the node returned by it and that reference needs to be dropped by its caller. This place is not doing this, so fix it. Signed-off-by: NYangtao Li <tiny.windzz@gmail.com> Signed-off-by: NDavid S. Miller <davem@davemloft.net> Signed-off-by: NSasha Levin <sashal@kernel.org>
-