- 24 Mar 2019, 40 commits
-
Submitted by Jann Horn
commit 83540fbc8812a580b6ad8f93f4c29e62e417687e upstream. The first version of this method was missing the check for `ret == PATH_MAX`; then such a check was added, but it didn't call kfree() on error, so there was still a small memory leak in the error case. Fix it by using strndup_user() instead of open-coding it. Link: http://lkml.kernel.org/r/20190220165443.152385-1-jannh@google.com Cc: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org Fixes: 0eadcc7a ("perf/core: Fix perf_uprobe_init()") Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
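The fix is easier to see side by side. Below is a hedged sketch (not the literal patch; the function names and error codes are illustrative) of the two shapes: the open-coded copy, where every early return must remember its own cleanup, versus strndup_user(), which folds allocation, copy, bounds check and error reporting into one call.

```c
#include <linux/err.h>
#include <linux/limits.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/uaccess.h>

/* Open-coded shape: each early return must remember to kfree(). */
static int copy_path_open_coded(const char __user *upath, char **out)
{
	char *path = kzalloc(PATH_MAX, GFP_KERNEL);
	long len;

	if (!path)
		return -ENOMEM;

	len = strncpy_from_user(path, upath, PATH_MAX);
	if (len == PATH_MAX) {		/* name too long */
		kfree(path);		/* the leak was forgetting this */
		return -E2BIG;
	}
	if (len < 0) {
		kfree(path);
		return len;
	}
	*out = path;
	return 0;
}

/* strndup_user() shape: nothing to free on the error path. */
static int copy_path_strndup(const char __user *upath, char **out)
{
	char *path = strndup_user(upath, PATH_MAX);

	if (IS_ERR(path))
		return PTR_ERR(path);
	*out = path;
	return 0;
}
```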
-
Submitted by zhangyi (F)
commit e7f0c424d0806b05d6f47be9f202b037eb701707 upstream. Commit d716ff71 ("tracing: Remove taking of trace_types_lock in pipe files") uses the current tracer instead of a copy in tracing_open_pipe(), but it forgot to remove the kfree() in the error path. That error path can call kfree(iter->trace) after iter->trace has been assigned to tr->current_trace, which must not be freed. Link: http://lkml.kernel.org/r/1550060946-45984-1-git-send-email-yi.zhang@huawei.com Cc: stable@vger.kernel.org Fixes: d716ff71 ("tracing: Remove taking of trace_types_lock in pipe files") Signed-off-by: zhangyi (F) <yi.zhang@huawei.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
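A minimal user-space sketch of the bug shape (all names below are illustrative stand-ins for the tracing structures, not the real code): once the iterator holds a pointer to the shared current tracer rather than its own copy, the leftover error-path free would release memory still owned by the trace array.

```c
#include <stdio.h>

struct tracer { const char *name; };
struct trace_array { struct tracer *current_trace; };
struct trace_iter { struct tracer *trace; };

/* Before d716ff71 the iterator kept its own allocated copy, so freeing it on
 * error was fine.  Afterwards it aliases the shared tracer, so the leftover
 * kfree() in the error path would free an object it does not own. */
static int open_pipe_sketch(struct trace_array *tr, struct trace_iter *it,
			    int fail)
{
	it->trace = tr->current_trace;	/* shared pointer, not a copy */

	if (fail) {
		/* BUG (removed by the fix): kfree(it->trace); */
		return -1;		/* just back out, nothing to free */
	}
	return 0;
}

int main(void)
{
	struct tracer nop = { "nop" };
	struct trace_array tr = { &nop };
	struct trace_iter it;

	printf("open failed (%d), current tracer still valid: %s\n",
	       open_pipe_sketch(&tr, &it, 1), tr.current_trace->name);
	return 0;
}
```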
-
Submitted by Tom Zanussi
commit 9f0bbf3115ca9f91f43b7c74e9ac7d79f47fc6c2 upstream. Because there may be random garbage beyond a string's null terminator, it's not correct to copy the complete character array for use as a hist trigger key. This results in multiple histogram entries for the 'same' string key. So, in the case of a string key, use strncpy instead of memcpy to avoid copying in the extra bytes. Before, using the gdbus entries in the following hist trigger as an example:

  # echo 'hist:key=comm' > /sys/kernel/debug/tracing/events/sched/sched_waking/trigger
  # cat /sys/kernel/debug/tracing/events/sched/sched_waking/hist
  ...
  { comm: ImgDecoder #4 } hitcount: 203
  { comm: gmain } hitcount: 213
  { comm: gmain } hitcount: 216
  { comm: StreamTrans #73 } hitcount: 221
  { comm: mozStorage #3 } hitcount: 230
  { comm: gdbus } hitcount: 233
  { comm: StyleThread#5 } hitcount: 253
  { comm: gdbus } hitcount: 256
  { comm: gdbus } hitcount: 260
  { comm: StyleThread#4 } hitcount: 271
  ...
  # cat /sys/kernel/debug/tracing/events/sched/sched_waking/hist | egrep gdbus | wc -l
  51

After:

  # cat /sys/kernel/debug/tracing/events/sched/sched_waking/hist | egrep gdbus | wc -l
  1

Link: http://lkml.kernel.org/r/50c35ae1267d64eee975b8125e151e600071d4dc.1549309756.git.tom.zanussi@linux.intel.com Cc: Namhyung Kim <namhyung@kernel.org> Cc: stable@vger.kernel.org Fixes: 79e577cb ("tracing: Support string type key properly") Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
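A small user-space demonstration of the effect (the 16-byte key size and the buffer contents are made up for illustration): with memcpy() the stale bytes after the NUL become part of the key, so two identical comm strings land in different histogram entries; strncpy() stops at the NUL and zero-pads, so they collapse into one.

```c
#include <stdio.h>
#include <string.h>

#define KEY_LEN 16

/* Build a key field the way a hist trigger would, from a comm buffer that
 * may contain stale bytes after the terminating NUL. */
static void make_key(char *key, const char *buf_with_garbage, int use_strncpy)
{
	if (use_strncpy)
		strncpy(key, buf_with_garbage, KEY_LEN);  /* stops at NUL, zero-pads */
	else
		memcpy(key, buf_with_garbage, KEY_LEN);   /* copies the garbage too */
}

int main(void)
{
	char a[KEY_LEN] = "gdbus\0AAAAAAAAA";   /* same string, different garbage */
	char b[KEY_LEN] = "gdbus\0BBBBBBBBB";
	char ka[KEY_LEN], kb[KEY_LEN];

	make_key(ka, a, 0);
	make_key(kb, b, 0);
	printf("memcpy keys equal:  %d\n", !memcmp(ka, kb, KEY_LEN)); /* 0: two entries */

	make_key(ka, a, 1);
	make_key(kb, b, 1);
	printf("strncpy keys equal: %d\n", !memcmp(ka, kb, KEY_LEN)); /* 1: one entry */
	return 0;
}
```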
-
Submitted by Pavel Shilovsky
commit 6dfbd84684700cb58b34e8602c01c12f3d2595c8 upstream. When we have a READ lease for a file and have just issued a write operation to the server we need to purge the cache and set oplock/lease level to NONE to avoid reading stale data. Currently we do that only if a write operation succeeded, thus not covering cases when a request was sent to the server but a negative error code was returned later for some other reason (e.g. -EIOCBQUEUED or -EINTR). Fix this by turning off caching regardless of the error code being returned. The patch fixes generic tests 075 and 112 from the xfs-tests. Cc: <stable@vger.kernel.org> Signed-off-by: Pavel Shilovsky <pshilov@microsoft.com> Signed-off-by: Steve French <stfrench@microsoft.com> Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Pavel Shilovsky
commit c781af7e0c1fed9f1d0e0ec31b86f5b21a8dca17 upstream. When we hit failures while constructing MIDs or sending PDUs through the network, we end up not using message IDs assigned to the packet. The next SMB packet will skip those message IDs and continue with the next one. This behavior may lead to a server not granting us credits until we use the skipped IDs. Fix this by reverting the current ID to the original value if any errors occur before we push the packet through the network stack. This patch fixes the generic/310 test from the xfs-tests. Cc: <stable@vger.kernel.org> # 4.19.x Signed-off-by: Pavel Shilovsky <pshilov@microsoft.com> Signed-off-by: Steve French <stfrench@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Pavel Shilovsky
commit 7b9b9edb49ad377b1e06abf14354c227e9ac4b06 upstream. Currently on lease break the client sets a caching level twice: when oplock is detected and when oplock is processed. While the 1st attempt sets the level to the value provided by the server, the 2nd one resets the level to None unconditionally. This happens because the oplock/lease processing code was changed to avoid races between page cache flushes and oplock breaks. The commit c11f1df5 ("cifs: Wait for writebacks to complete before attempting write.") fixed the races for oplocks but didn't apply the same changes for leases, resulting in the server-granted value being overwritten with None. Fix this by properly processing lease breaks. Signed-off-by: Pavel Shilovsky <pshilov@microsoft.com> Signed-off-by: Steve French <stfrench@microsoft.com> CC: Stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Ard Biesheuvel
commit 969e2f59d589c15f6aaf306e590dde16f12ea4b3 upstream. Commit 5092fcf3 ("crypto: arm64/aes-ce-ccm: add non-SIMD generic fallback") introduced C fallback code to replace the NEON routines when invoked from a context where the NEON is not available (i.e., from the context of a softirq taken while the NEON is already being used in kernel process context). Fix two logical flaws in the MAC calculation of the associated data. Reported-by: Eric Biggers <ebiggers@kernel.org> Fixes: 5092fcf3 ("crypto: arm64/aes-ce-ccm: add non-SIMD generic fallback") Cc: stable@vger.kernel.org Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Ard Biesheuvel
commit eaf46edf6ea89675bd36245369c8de5063a0272c upstream. The NEON MAC calculation routine fails to correctly handle the case where there is some data in the buffer and the input fills it up exactly. In this case, we enter the loop at the end with w8 == 0, while a negative value is assumed, and so the loop carries on until the increment of the 32-bit counter wraps around, which is quite obviously wrong. So omit the loop altogether in this case, and exit right away. Reported-by: Eric Biggers <ebiggers@kernel.org> Fixes: a3fd8210 ("arm64/crypto: AES in CCM mode using ARMv8 Crypto ...") Cc: stable@vger.kernel.org Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Eric Biggers
commit 2060e284e9595fc3baed6e035903c05b93266555 upstream. The x86 MORUS implementations all fail the improved AEAD tests because they produce the wrong result with some data layouts. The issue is that they assume that if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a. walk.stride), then it is the end of the data. In fact, this can happen before the end. Also, when the CRYPTO_TFM_REQ_MAY_SLEEP flag is given, they can incorrectly sleep in the skcipher_walk_*() functions while preemption has been disabled by kernel_fpu_begin(). Fix these bugs. Fixes: 56e8e57f ("crypto: morus - Add common SIMD glue code for MORUS") Cc: <stable@vger.kernel.org> # v4.18+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
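The walk-handling pattern these fixes (and the related AEGIS and generic MORUS/AEGIS entries below) converge on is worth showing. The sketch is schematic: the process() call is a placeholder and the request setup is omitted, but the loop shape matches the description: a chunk smaller than the stride is only final if it also exhausts walk->total; otherwise it is rounded down and the remainder is handed back to the walker. The sleeping issue mentioned above is separate and is typically addressed by running the walk in atomic mode while the FPU region is active.

```c
#include <linux/kernel.h>
#include <crypto/internal/skcipher.h>

/* Schematic only: how a stride-based cipher should consume a skcipher_walk. */
static void crypt_walk_sketch(struct skcipher_walk *walk)
{
	while (walk->nbytes) {
		unsigned int nbytes = walk->nbytes;

		/* A partial chunk is the final one only when it also ends the
		 * request; otherwise round down to the stride and let the walk
		 * return the leftover bytes on the next iteration. */
		if (nbytes < walk->total)
			nbytes = round_down(nbytes, walk->stride);

		/* process(walk->dst.virt.addr, walk->src.virt.addr, nbytes); */

		skcipher_walk_done(walk, walk->nbytes - nbytes);
	}
}
```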
-
Submitted by Eric Biggers
commit 3af349639597fea582a93604734d717e59a0e223 upstream. gcmaes_crypt_by_sg() dereferences the NULL pointer returned by scatterwalk_ffwd() when encrypting an empty plaintext and the source scatterlist ends immediately after the associated data. Fix it by only fast-forwarding to the src/dst data scatterlists if the data length is nonzero. This bug is reproduced by the "rfc4543(gcm(aes))" test vectors when run with the new AEAD test manager. Fixes: e8455207 ("crypto: aesni - Update aesni-intel_glue to use scatter/gather") Cc: <stable@vger.kernel.org> # v4.17+ Cc: Dave Watson <davejwatson@fb.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
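A hedged sketch of the guard being described (the helper name and parameter list are illustrative, not the aesni-intel code): fast-forward past the associated data only when there is payload left to process.

```c
#include <crypto/aead.h>
#include <crypto/scatterwalk.h>

/* Illustrative only: pick the data scatterlists for an AEAD request, but skip
 * the fast-forward entirely when the payload length is zero, since the
 * scatterlist may end right after the associated data. */
static void pick_data_sg_sketch(struct aead_request *req, unsigned long left,
				struct scatterlist bounce_src[2],
				struct scatterlist bounce_dst[2],
				struct scatterlist **src,
				struct scatterlist **dst)
{
	*src = NULL;
	*dst = NULL;

	if (!left)	/* empty plaintext/ciphertext: nothing to walk */
		return;

	*src = scatterwalk_ffwd(bounce_src, req->src, req->assoclen);
	*dst = (req->src == req->dst) ? *src :
		scatterwalk_ffwd(bounce_dst, req->dst, req->assoclen);
}
```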
-
Submitted by Eric Biggers
commit ba6771c0a0bc2fac9d6a8759bab8493bd1cffe3b upstream. The x86 AEGIS implementations all fail the improved AEAD tests because they produce the wrong result with some data layouts. The issue is that they assume that if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a. walk.stride), then it is the end of the data. In fact, this can happen before the end. Also, when the CRYPTO_TFM_REQ_MAY_SLEEP flag is given, they can incorrectly sleep in the skcipher_walk_*() functions while preemption has been disabled by kernel_fpu_begin(). Fix these bugs. Fixes: 1d373d4e ("crypto: x86 - Add optimized AEGIS implementations") Cc: <stable@vger.kernel.org> # v4.18+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Eric Biggers
commit eb5e6730db98fcc4b51148b4a819fa4bf864ae54 upstream. Instantiating "cryptd(crc32c)" causes a crypto self-test failure because the crypto_alloc_shash() in alg_test_crc32c() fails. This is because cryptd(crc32c) is an ahash algorithm, not a shash algorithm; so it can only be accessed through the ahash API, unlike shash algorithms which can be accessed through both the ahash and shash APIs. As the test is testing the shash descriptor format which is only applicable to shash algorithms, skip it for ahash algorithms. (Note that it's still important to fix crypto self-test failures even for weird algorithm instantiations like cryptd(crc32c) that no one would really use; in fips_enabled mode unprivileged users can use them to panic the kernel, and also they prevent treating a crypto self-test failure as a bug when fuzzing the kernel.) Fixes: 8e3ee85e ("crypto: crc32c - Test descriptor context format") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Eric Biggers
commit b1f6b4bf416b49f00f3abc49c639371cdecaaad1 upstream. Some algorithms have a ->setkey() method that is not atomic, in the sense that setting a key can fail after changes were already made to the tfm context. In this case, if a key was already set the tfm can end up in a state that corresponds to neither the old key nor the new key. For example, in lrw.c, if gf128mul_init_64k_bbe() fails due to lack of memory, then priv::table will be left NULL. After that, encryption with that tfm will cause a NULL pointer dereference. It's not feasible to make all ->setkey() methods atomic, especially ones that have to key multiple sub-tfms. Therefore, make the crypto API set CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a key, to prevent the tfm from being used until a new key is set. [Cc stable mainly because when introducing the NEED_KEY flag I changed AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG previously didn't have this problem. So these "incompletely keyed" states became theoretically accessible via AF_ALG -- though, the opportunities for causing real mischief seem pretty limited.] Fixes: f8d33fac ("crypto: skcipher - prevent using skciphers without setting key") Cc: <stable@vger.kernel.org> # v4.16+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
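A hedged sketch of the rule the commit describes, using the skcipher flavor as the example (the helper shape is illustrative, not the exact upstream code): when ->setkey() fails partway, the tfm is flagged as needing a key so a half-updated context cannot be used.

```c
#include <crypto/internal/skcipher.h>

/* Illustrative sketch: mark the tfm unkeyed whenever keying fails partway. */
static int setkey_with_needkey_sketch(struct crypto_skcipher *tfm,
				      const u8 *key, unsigned int keylen)
{
	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
	int err = alg->setkey(tfm, key, keylen);

	if (err) {
		/* The old key (or no key) may be partially overwritten:
		 * refuse further use until a later setkey succeeds. */
		crypto_skcipher_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
		return err;
	}

	crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
	return 0;
}
```

The hash-side counterpart (ba7d7433a0e9, later in this list) has the extra wrinkle that OPTIONAL_KEY algorithms cannot use the flag, so their ->setkey() must remain atomic.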
-
Submitted by Eric Biggers
commit 251b7aea34ba3c4d4fdfa9447695642eb8b8b098 upstream. The memcpy()s in the PCBC implementation use walk->iv as both the source and destination, which has undefined behavior. These memcpy()s are actually unneeded, because walk->iv is already used to hold the previous plaintext block XOR'd with the previous ciphertext block. Thus, walk->iv is already updated to its final value. So remove the broken and unnecessary memcpy()s. Fixes: 91652be5 ("[CRYPTO] pcbc: Add Propagated CBC template") Cc: <stable@vger.kernel.org> # v2.6.21+ Cc: David Howells <dhowells@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Eric Biggers
commit d644f1c8746ed24f81075480f9e9cb3777ae8d65 upstream. The generic MORUS implementations all fail the improved AEAD tests because they produce the wrong result with some data layouts. The issue is that they assume that if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a. walk.stride), then it is the end of the data. In fact, this can happen before the end. Fix them. Fixes: 396be41f ("crypto: morus - Add generic MORUS AEAD implementations") Cc: <stable@vger.kernel.org> # v4.18+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Eric Biggers
commit ba7d7433a0e998c902132bd47330e355a1eaa894 upstream. Some algorithms have a ->setkey() method that is not atomic, in the sense that setting a key can fail after changes were already made to the tfm context. In this case, if a key was already set the tfm can end up in a state that corresponds to neither the old key nor the new key. It's not feasible to make all ->setkey() methods atomic, especially ones that have to key multiple sub-tfms. Therefore, make the crypto API set CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a key, to prevent the tfm from being used until a new key is set. Note: we can't set CRYPTO_TFM_NEED_KEY for OPTIONAL_KEY algorithms, so ->setkey() for those must nevertheless be atomic. That's fine for now since only the crc32 and crc32c algorithms set OPTIONAL_KEY, and it's not intended that OPTIONAL_KEY be used much. [Cc stable mainly because when introducing the NEED_KEY flag I changed AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG previously didn't have this problem. So these "incompletely keyed" states became theoretically accessible via AF_ALG -- though, the opportunities for causing real mischief seem pretty limited.] Fixes: 9fa68f62 ("crypto: hash - prevent using keyed hashes without setting key") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Ard Biesheuvel
commit d72b9d4acd548251f55b16843fc7a05dc5c80de8 upstream. The SIMD routine ported from x86 used to have a special code path for inputs < 16 bytes, which got lost somewhere along the way. Instead, the current glue code aligns the input pointer to 16 bytes, which is not really necessary on this architecture (although it could be beneficial to performance to expose aligned data to the NEON routine), but this could result in inputs of less than 16 bytes being passed in. This not only fails the new extended tests that Eric has implemented, it also results in the code reading past the end of the input, which could potentially result in crashes when dealing with less than 16 bytes of input at the end of a page which is followed by an unmapped page. So update the glue code to only invoke the NEON routine if the input is at least 16 bytes. Reported-by: Eric Biggers <ebiggers@kernel.org> Reviewed-by: Eric Biggers <ebiggers@kernel.org> Fixes: 6ef5737f ("crypto: arm64/crct10dif - port x86 SSE implementation to arm64") Cc: <stable@vger.kernel.org> # v4.10+ Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
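The resulting glue shape is roughly the hedged sketch below; crc_t10dif_pmull() stands in for the NEON routine (the name is an assumption) and the 16-byte threshold mirrors the minimum block the SIMD code can consume. may_use_simd()/kernel_neon_begin() are the usual arm64 gates, and crc_t10dif_generic() is the library fallback.

```c
#include <asm/neon.h>
#include <asm/simd.h>
#include <crypto/internal/hash.h>
#include <linux/crc-t10dif.h>

#define PMULL_MIN_LEN	16U	/* NEON path needs at least one full block */

u16 crc_t10dif_pmull(u16 init_crc, const u8 *buf, size_t len); /* asm routine (name assumed) */

static int crct10dif_update_sketch(struct shash_desc *desc, const u8 *data,
				   unsigned int length)
{
	u16 *crc = shash_desc_ctx(desc);

	if (length >= PMULL_MIN_LEN && may_use_simd()) {
		kernel_neon_begin();
		*crc = crc_t10dif_pmull(*crc, data, length);
		kernel_neon_end();
	} else {
		/* short or non-SIMD case: fall back to the table-based code */
		*crc = crc_t10dif_generic(*crc, data, length);
	}
	return 0;
}
```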
-
Submitted by Eric Biggers
commit 12455e320e19e9cc7ad97f4ab89c280fe297387c upstream. The arm64 NEON bit-sliced implementation of AES-CTR fails the improved skcipher tests because it sometimes produces the wrong ciphertext. The bug is that the final keystream block isn't returned from the assembly code when the number of non-final blocks is zero. This can happen if the input data ends a few bytes after a page boundary. In this case the last bytes get "encrypted" by XOR'ing them with uninitialized memory. Fix the assembly code to return the final keystream block when needed. Fixes: 88a3f582 ("crypto: arm64/aes - don't use IV buffer to return final keystream block") Cc: <stable@vger.kernel.org> # v4.11+ Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Ard Biesheuvel
commit 62fecf295e3c48be1b5f17c440b93875b9adb4d6 upstream. The SIMD routine ported from x86 used to have a special code path for inputs < 16 bytes, which got lost somewhere along the way. Instead, the current glue code aligns the input pointer to permit the NEON routine to use special versions of the vld1 instructions that assume 16 byte alignment, but this could result in inputs of less than 16 bytes being passed in. This not only fails the new extended tests that Eric has implemented, it also results in the code reading past the end of the input, which could potentially result in crashes when dealing with less than 16 bytes of input at the end of a page which is followed by an unmapped page. So update the glue code to only invoke the NEON routine if the input is at least 16 bytes. Reported-by: Eric Biggers <ebiggers@kernel.org> Reviewed-by: Eric Biggers <ebiggers@kernel.org> Fixes: 1d481f1c ("crypto: arm/crct10dif - port x86 SSE implementation to ARM") Cc: <stable@vger.kernel.org> # v4.10+ Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Eric Biggers
commit 0f533e67d26f228ea5dfdacc8a4bdeb487af5208 upstream. The generic AEGIS implementations all fail the improved AEAD tests because they produce the wrong result with some data layouts. The issue is that they assume that if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a. walk.stride), then it is the end of the data. In fact, this can happen before the end. Fix them. Fixes: f606a88e ("crypto: aegis - Add generic AEGIS AEAD implementations") Cc: <stable@vger.kernel.org> # v4.18+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Eric Biggers
commit 6ebc97006b196aafa9df0497fdfa866cf26f259b upstream. Some algorithms have a ->setkey() method that is not atomic, in the sense that setting a key can fail after changes were already made to the tfm context. In this case, if a key was already set the tfm can end up in a state that corresponds to neither the old key nor the new key. For example, in gcm.c, if the kzalloc() fails due to lack of memory, then the CTR part of GCM will have the new key but GHASH will not. It's not feasible to make all ->setkey() methods atomic, especially ones that have to key multiple sub-tfms. Therefore, make the crypto API set CRYPTO_TFM_NEED_KEY if ->setkey() fails, to prevent the tfm from being used until a new key is set. [Cc stable mainly because when introducing the NEED_KEY flag I changed AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG previously didn't have this problem. So these "incompletely keyed" states became theoretically accessible via AF_ALG -- though, the opportunities for causing real mischief seem pretty limited.] Fixes: dc26c17f ("crypto: aead - prevent using AEADs without setting key") Cc: <stable@vger.kernel.org> # v4.16+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Al Viro
commit 399504e21a10be16dd1408ba0147367d9d82a10c upstream. same story as with last May fixes in sysfs (7b745a4e "unfuck sysfs_mount()"); new_sb is left uninitialized in case of early errors in kernfs_mount_ns() and papering over it by treating any error from kernfs_mount_ns() as equivalent to !new_sb ends up conflating the cases where objects had never been transferred to a superblock with ones where that has happened and the resulting new superblock had been dropped. Easily fixed (same way as in sysfs case). Additionally, there's a superblock leak on kernfs_node_dentry() failure *and* a dentry leak inside kernfs_node_dentry() itself - the latter on probably impossible errors, but the former not impossible to trigger (as a matter of fact, injecting allocation failures at that point *does* trigger it). Cc: stable@kernel.org Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Oliver O'Halloran
commit 07464e88365e9236febaca9ed1a2e2006d8bc952 upstream. Libnvdimm reserves the first 8K of pfn and devicedax namespaces to store a superblock describing the namespace. This 8K reservation is contained within the altmap area which the kernel uses for the vmemmap backing for the pages within the namespace. The altmap allows for some pages at the start of the altmap area to be reserved, and that mechanism is used to protect the superblock from being re-used as vmemmap backing. The number of PFNs to reserve is calculated using PHYS_PFN(SZ_8K), which is implemented as: #define PHYS_PFN(x) ((unsigned long)((x) >> PAGE_SHIFT)) So on systems where PAGE_SIZE is greater than 8K, the reservation size is truncated to zero and the superblock area is re-used as vmemmap backing. As a result all the namespace information stored in the superblock (i.e. whether it's a PFN or DAX namespace) is lost and the namespace needs to be re-created to get access to the contents. This patch fixes this by using PFN_UP() rather than PHYS_PFN() to ensure that at least one page is reserved. On systems with a 4K page size this patch should have no effect. Cc: stable@vger.kernel.org Cc: Dan Williams <dan.j.williams@intel.com> Fixes: ac515c08 ("libnvdimm, pmem, pfn: move pfn setup to the core") Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Reviewed-by: Vishal Verma <vishal.l.verma@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
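The arithmetic is easy to check in user space. The sketch below re-evaluates the two kernel macros for 4K and 64K pages (passing PAGE_SHIFT in explicitly is the only liberty taken): PHYS_PFN(SZ_8K) truncates to zero reserved pages on 64K-page systems, while PFN_UP() always reserves at least one.

```c
#include <stdio.h>

#define SZ_8K (8UL * 1024)

/* Same arithmetic as the kernel macros, with PAGE_SHIFT as a parameter. */
#define PHYS_PFN(x, page_shift)  ((unsigned long)((x) >> (page_shift)))
#define PFN_UP(x, page_shift)    (((x) + (1UL << (page_shift)) - 1) >> (page_shift))

int main(void)
{
	unsigned long shifts[] = { 12, 16 };	/* 4K and 64K pages */

	for (int i = 0; i < 2; i++) {
		unsigned long s = shifts[i];

		printf("PAGE_SIZE=%2luK: PHYS_PFN(SZ_8K)=%lu  PFN_UP(SZ_8K)=%lu\n",
		       (1UL << s) / 1024, PHYS_PFN(SZ_8K, s), PFN_UP(SZ_8K, s));
	}
	return 0;
}
```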
-
Submitted by Dan Williams
commit fa7d2e639cd90442d868dfc6ca1d4cc9d8bf206e upstream. For recovery, where non-dax access is needed to a given physical address range, and for testing, allow the 'force_raw' attribute to override the default establishment of a dev_pagemap. Without this capability it is possible to end up with a namespace that can not be activated due to a corrupted info-block, and one that can not be repaired due to a section collision. Cc: <stable@vger.kernel.org> Fixes: 004f1afb ("libnvdimm, pmem: direct map legacy pmem by default") Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Wei Yang
commit f101ada7da6551127d192c2f1742c1e9e0f62799 upstream. By the time we check whether the current nd_region intersects with others, trim_pfn_device() has already expanded *size* to the SECTION size. Do not append 'adjust' to 'size' a second time when calculating whether the end of a region collides with the next pmem region. Fixes: ae86cbfef381 ("libnvdimm, pfn: Pad pfn namespaces relative to other regions") Cc: <stable@vger.kernel.org> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
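A few lines of arithmetic make the double-counted padding concrete (the numbers below are made up; only the shape of the calculation matters): once 'adjust' has been folded into 'size', the collision check must use start + size, not start + size + adjust.

```c
#include <stdio.h>

int main(void)
{
	unsigned long start   = 0x10000000UL;
	unsigned long size    = 0x08010000UL;
	unsigned long section = 128UL << 20;	/* align the end to a 128M section */
	unsigned long adjust;

	/* pad the region end out to the next section boundary */
	adjust = ((start + size + section - 1) & ~(section - 1)) - (start + size);
	size += adjust;			/* 'size' now already includes the padding */

	printf("end for collision check (fixed): %#lx\n", start + size);
	printf("end for collision check (buggy): %#lx\n", start + size + adjust);
	return 0;
}
```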
-
Submitted by Dan Williams
commit 966d23a006ca7b44ac8cf4d0c96b19785e0c3da0 upstream. The UEFI 2.7 specification sets expectations that the 'updating' flag is eventually cleared. To date, the libnvdimm core has never adhered to that protocol. The policy of the core matches the policy of other multi-device info-block formats like MD-Software-RAID that expect administrator intervention on inconsistent info-blocks, not automatic invalidation. However, some pre-boot environments may unfortunately attempt to "clean up" the labels and invalidate a set when it fails to find at least one "non-updating" label in the set. Clear the updating flag after set updates to minimize the window of vulnerability to aggressive pre-boot environments. Ideally implementations would not write to the label area outside of creating namespaces. Note that this only minimizes the window, it does not close it, as the system can still crash while clearing the flag and the set can be subsequently deleted / invalidated by the pre-boot environment. Fixes: f524bf27 ("libnvdimm: write pmem label set") Cc: <stable@vger.kernel.org> Cc: Kelly Couch <kelly.j.couch@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Dan Williams
commit fa3ed4d981b1fc19acdd07fcb152a4bd3706892b upstream. The no_init_ars option is meant to prevent long-ARS, but short-ARS should be allowed to grab any immediate results. Fixes: bc6ba808 ("nfit, address-range-scrub: rework and simplify ARS...") Cc: <stable@vger.kernel.org> Reported-by: Erwin Tsaur <erwin.tsaur@oracle.com> Reviewed-by: Toshi Kani <toshi.kani@hpe.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Dan Williams
commit c6c5df293bf1b488cf8459aac658aecfdccb13a9 upstream. If query-ARS reports that ARS has stopped and requires continuation, attempt to retrieve short-ARS results before continuing the long operation. Fixes: bc6ba808 ("nfit, address-range-scrub: rework and simplify ARS...") Cc: <stable@vger.kernel.org> Reported-by: Krzysztof Rusocki <krzysztof.rusocki@intel.com> Reviewed-by: Toshi Kani <toshi.kani@hpe.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Dan Williams
commit ebe9f6f19d80d8978d16078dff3d5bd93ad8d102 upstream. Commit 11189c1089da "acpi/nfit: Fix command-supported detection" broke ND_CMD_CALL for bus-level commands. The "func = cmd" assumption is only valid for: ND_CMD_ARS_CAP, ND_CMD_ARS_START, ND_CMD_ARS_STATUS and ND_CMD_CLEAR_ERROR. The function number otherwise needs to be pulled from the command payload for: NFIT_CMD_TRANSLATE_SPA, NFIT_CMD_ARS_INJECT_SET, NFIT_CMD_ARS_INJECT_CLEAR and NFIT_CMD_ARS_INJECT_GET. Update cmd_to_func() for the bus case and call it in the common path. Fixes: 11189c1089da ("acpi/nfit: Fix command-supported detection") Cc: <stable@vger.kernel.org> Reviewed-by: Vishal Verma <vishal.l.verma@intel.com> Reported-by: Grzegorz Burzynski <grzegorz.burzynski@intel.com> Tested-by: Jeff Moyer <jmoyer@redhat.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
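In code, the distinction the commit draws looks roughly like the hedged sketch below (this is not the upstream cmd_to_func(); it only shows where the function number has to come from for bus-level passthrough commands).

```c
#include <linux/ndctl.h>

/* Illustrative: bus-level ND_CMD_CALL carries the real function number in the
 * passthrough payload (NFIT_CMD_TRANSLATE_SPA, NFIT_CMD_ARS_INJECT_*, ...);
 * only the native ARS and clear-error commands map 1:1 from 'cmd'. */
static unsigned int bus_cmd_to_func_sketch(unsigned int cmd,
					   const struct nd_cmd_pkg *call_pkg)
{
	if (cmd == ND_CMD_CALL)
		return call_pkg->nd_command;

	return cmd;	/* ND_CMD_ARS_CAP/START/STATUS, ND_CMD_CLEAR_ERROR */
}
```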
-
Submitted by Dexuan Cui
commit 43f89877f26671c6309cd87d7364b1a3e66e71cf upstream. In the case of ND_CMD_CALL, we should also check out_obj->type. The patch uses out_obj->type, which is a short alias to out_obj->package.type. Fixes: 31eca76b ("nfit, libnvdimm: limited/whitelisted dimm command marshaling mechanism") Cc: <stable@vger.kernel.org> Signed-off-by: Dexuan Cui <decui@microsoft.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Alexander Shishkin
commit bf7cbaae0831252b416f375ca9b1027ecd4642dd upstream. Using the STP_POLICY_ID_SET ioctl command with the dummy_stm device, or any STM device that supplies a zero mmio channel size, will trigger a division by zero bug in the kernel. Prevent this by disallowing channel widths other than 1 for such devices. Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Fixes: 7bd1d409 ("stm class: Introduce an abstraction for System Trace Module devices") CC: stable@vger.kernel.org # v4.4+ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Darrick J. Wong
[ Upstream commit 29b00e609960ae0fcff382f4c7079dd0874a5311 ] When we made the shmem_reserve_inode call in shmem_link conditional, we forgot to update the declaration for ret so that it always has a known value. Dan Carpenter pointed out this deficiency in the original patch. Fixes: 1062af920c07 ("tmpfs: fix link accounting when a tmpfile is linked in") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Hugh Dickins <hughd@google.com> Cc: Matej Kupljen <matej.kupljen@gmail.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Thadeu Lima de Souza Cascardo
[ Upstream commit af548a27b158d548d41e56255e6eaca1658cc3be ] Just like commit e2ba732a ("selftests: fib_tests: sleep after changing carrier"), wait one second to allow linkwatch to propagate the carrier change to the stack. There are two sets of carrier tests. The first slept after the carrier was set to off, and when the second set ran, it was likely that the linkwatch would be able to run again without much delay, reducing the likelihood of a race. However, if you run 'fib_tests.sh -t carrier' in a loop, you will quickly notice the failures. Sleeping on the second set of tests makes the failures go away. Cc: David Ahern <dsahern@gmail.com> Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Reviewed-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Mao Wenan
[ Upstream commit 4593403fa516a5a4cffe6883c5062d60932cbfbe ] cards_found is a static variable, but it is set to zero each time atl2_probe() is entered, so its value is not consistent with the previous probe and the subsequent behavior is not what we expect. Signed-off-by: Mao Wenan <maowenan@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Michael Chan
[ Upstream commit 0000b81a063b5f3ab82fa18041c28327ce72c312 ] The code waits up to 20 usec for the firmware response to complete once we've seen the valid response header in the buffer. It turns out that in some scenarios, this wait time is not long enough. Extend it to 150 usec and use usleep_range() instead of udelay(). Fixes: 9751e8e7 ("bnxt_en: reduce timeout on initial HWRM calls") Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Michael Chan
[ Upstream commit 67681d02aaa1db9044a16df4ca9c77cde1221a3e ] The logic that polls for the firmware message response uses a shorter sleep interval for the first few passes. But there was a typo, so it was using the wrong (larger) counter for these short sleep passes. The result is a slightly shorter timeout period for these firmware messages than intended. Fix it by using the proper counter. Fixes: 9751e8e7 ("bnxt_en: reduce timeout on initial HWRM calls") Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Jiong Wang
[ Upstream commit f036ebd9bfbe1e91a3d855e85e05fc5ff156b641 ] The NFP BPF JIT compiler does a couple of small optimizations when jitting ALU imm instructions; some of these optimizations can save code-gen, for example: A & -1 = A, A | 0 = A, A ^ 0 = A. However, for ALU32, the high 32 bits of the 64-bit register should still be cleared according to the ISA semantics. Fixes: cd7df56e ("nfp: add BPF to NFP code translator") Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
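The ISA point generalizes beyond the JIT and is easy to demonstrate in plain C (the helper below only models one BPF ALU32 instruction, it is not NFP code): even an arithmetically useless 32-bit op still has the architectural side effect of zeroing bits 63..32 of the destination register, so it cannot simply be elided.

```c
#include <stdint.h>
#include <stdio.h>

/* Model of BPF "ALU32 dst &= imm": the result is computed in 32 bits and
 * zero-extended into the 64-bit register, per the eBPF ISA. */
static uint64_t alu32_and_imm(uint64_t dst, uint32_t imm)
{
	return (uint32_t)dst & imm;	/* high 32 bits become zero */
}

int main(void)
{
	uint64_t r = 0xdeadbeef00000001ULL;

	/* Eliding "r &= -1" as a no-op keeps the stale high word: wrong. */
	printf("optimized away: %#llx\n", (unsigned long long)r);
	/* Executing the ALU32 semantics clears bits 63..32 as required. */
	printf("executed:       %#llx\n",
	       (unsigned long long)alu32_and_imm(r, 0xffffffffu));
	return 0;
}
```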
-
Submitted by Jiong Wang
[ Upstream commit 71c190249f0ced5b26377ea6bf829ab3af77a40c ] The intended optimization should be A ^ 0 = A, not A ^ -1 = A. Fixes: cd7df56e ("nfp: add BPF to NFP code translator") Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Vadim Lomovtsev
[ Upstream commit 7db730d9d2f7b6af6aeac621b1890ea477a0cb8d ] rx_set_mode sends a number of messages to the PF to configure the receive mode. If any of them fails, we need to stop sending messages and release the allocated memory. This commit implements a check of the nicvf_msg_send_to_pf() result. Signed-off-by: Vadim Lomovtsev <vlomovtsev@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Vadim Lomovtsev
[ Upstream commit 0dd563b9a62c4cbabf5d4fd6596440c2491e72b1 ] At the end of NIC VF initialization, the VF sends a CFG_DONE message to the PF without using the nicvf_msg_send_to_pf routine, which could potentially overwrite data in the mailbox. This commit sends the CFG_DONE message in the common way, the same as the other configuration messages, by using the nicvf_send_msg_to_pf() routine. Signed-off-by: Vadim Lomovtsev <vlomovtsev@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
-