- 17 Nov 2019, 10 commits
-
-
Committed by Ard Biesheuvel

This integrates the accelerated MIPS 32r2 implementation of ChaCha into both the API and library interfaces of the kernel crypto stack. The significance of this is that, in addition to becoming available as an accelerated library implementation, it can also be used by existing crypto API code such as Adiantum (for block encryption on ultra low performance cores) or IPsec using chacha20poly1305. These are use cases that have already opted into using the abstract crypto API. In order to support Adiantum, the core assembler routine has been adapted to take the round count as a function argument rather than hardcoding it to 20.

Co-developed-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Jason A. Donenfeld

This imports the accelerated MIPS 32r2 ChaCha20 implementation from the Zinc patch set.

Co-developed-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Ard Biesheuvel

Expose the accelerated NEON ChaCha routine directly as a symbol export so that users of the ChaCha library API can use it directly. Given that calls into the library API will always go through the routines in this module if it is enabled, switch to static keys to select the optimal implementation available (which may be none at all, in which case we defer to the generic implementation for all invocations).

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Ard Biesheuvel

Instead of falling back to the generic ChaCha skcipher driver for non-SIMD cases, use a fast scalar implementation for ARM authored by Eric Biggers. This removes the module dependency on chacha-generic altogether, which also simplifies things when we expose the ChaCha library interface from this module.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Ard Biesheuvel

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Ard Biesheuvel

Expose the accelerated NEON ChaCha routine directly as a symbol export so that users of the ChaCha library API can use it directly. Given that calls into the library API will always go through the routines in this module if it is enabled, switch to static keys to select the optimal implementation available (which may be none at all, in which case we defer to the generic implementation for all invocations).

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Ard Biesheuvel

Depend on the generic ChaCha library routines instead of pulling in the generic ChaCha skcipher driver, which is more than we need, and makes managing the dependencies between the generic library, generic driver, accelerated library and driver more complicated. While at it, drop the logic to prefer the scalar code on short inputs. Turning the NEON on and off is cheap these days, and one major use case for ChaCha20 is ChaCha20-Poly1305, which is guaranteed to hit the scalar path upon every invocation (when doing the Poly1305 nonce generation).

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Ard Biesheuvel

Wire the existing x86 SIMD ChaCha code into the new ChaCha library interface, so that users of the library interface will get the accelerated version when available. Given that calls into the library API will always go through the routines in this module if it is enabled, switch to static keys to select the optimal implementation available (which may be none at all, in which case we defer to the generic implementation for all invocations).

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
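A minimal sketch of the static-key dispatch pattern these commits describe, for x86. Symbol names approximate the glue code in this series and should be read as illustrative, not as the module's exact API:

    #include <linux/jump_label.h>
    #include <crypto/internal/simd.h>

    static __ro_after_init DEFINE_STATIC_KEY_FALSE(chacha_use_simd);

    void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src,
                           unsigned int bytes, int nrounds)
    {
            /* Fall back to the generic C code when SIMD is unavailable
             * or the static key was never enabled at module init. */
            if (!static_branch_likely(&chacha_use_simd) ||
                !crypto_simd_usable()) {
                    chacha_crypt_generic(state, dst, src, bytes, nrounds);
                    return;
            }

            kernel_fpu_begin();
            chacha_dosimd(state, dst, src, bytes, nrounds); /* SIMD core */
            kernel_fpu_end();
    }

    static int __init chacha_simd_mod_init(void)
    {
            /* Patch the branch once, at load time, based on CPU features. */
            if (boot_cpu_has(X86_FEATURE_SSSE3))
                    static_branch_enable(&chacha_use_simd);
            return 0;
    }

The point of the static key is that the common-case branch is patched into the instruction stream at init time, so per-call dispatch costs nothing when the SIMD path is active.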
-
Committed by Ard Biesheuvel

In preparation of extending the x86 ChaCha driver to also expose the ChaCha library interface, drop the dependency on the chacha_generic crypto driver as a non-SIMD fallback, and depend on the generic ChaCha library directly. This way, we only pull in the code we actually need, without registering a set of ChaCha skciphers that we will never use. Since turning the FPU on and off is cheap these days, simplify the SIMD routine by dropping the per-page yield, which makes for a cleaner switch to the library API as well. This also allows us to invoke the skcipher walk routines in non-atomic mode.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Ard Biesheuvel

Currently, our generic ChaCha implementation consists of a permute function in lib/chacha.c that operates on the 64-byte ChaCha state directly [and which is always included in the core kernel since it is used by the /dev/random driver], and the crypto API plumbing to expose it as a skcipher.

In order to support in-kernel users that need the ChaCha streamcipher but have no need [or tolerance] for going through the abstractions of the crypto API, let's expose the streamcipher bits via a library API as well, in a way that permits the implementation to be superseded by an architecture-specific one if provided. So move the streamcipher code into a separate module in lib/crypto, and expose the init() and crypt() routines to users of the library.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
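As an illustration, a caller of the library interface looks roughly like this. The helper names and signatures reflect the include/crypto/chacha.h interface as this series shapes it, so treat them as approximate:

    #include <crypto/chacha.h>

    /* Sketch: encrypt a buffer in place with ChaCha20 via the library
     * API, without going through the skcipher abstraction. */
    static void chacha20_encrypt_buf(const u32 key[8], const u8 iv[16],
                                     u8 *buf, unsigned int len)
    {
            u32 state[16];

            chacha_init(state, key, iv);
            chacha_crypt(state, buf, buf, len, 20); /* nrounds = 20 */
    }

Passing the round count explicitly is what lets Adiantum request the 12-round variant through the same entry point.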
-
- 01 Nov 2019, 2 commits
-
-
Committed by Eric Biggers

Now that the blkcipher algorithm type has been removed in favor of skcipher, rename the crypto_blkcipher kernel module to crypto_skcipher, and rename the config options accordingly:

    CONFIG_CRYPTO_BLKCIPHER  => CONFIG_CRYPTO_SKCIPHER
    CONFIG_CRYPTO_BLKCIPHER2 => CONFIG_CRYPTO_SKCIPHER2

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Yunfeng Ye

A warning is found by the static code analysis tool: "Identical condition 'err', second condition is always false". Fix this by capturing the return value of skcipher_walk_done().

Fixes: 67cfa5d3 ("crypto: arm64/aes-neonbs - implement ciphertext stealing for XTS")
Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
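The class of bug is easy to picture; a schematic before/after (not the literal driver code, variable names are illustrative):

    /* Before: the return value of skcipher_walk_done() is dropped, so
     * a later `if (err)` test re-evaluates a stale value that is
     * always false. */
    skcipher_walk_done(&walk, walk.nbytes - nbytes);

    /* After: assign it, so a failure in the final walk step actually
     * propagates to the caller. */
    err = skcipher_walk_done(&walk, walk.nbytes - nbytes);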
-
- 25 Oct 2019, 4 commits
-
-
Committed by Ard Biesheuvel

Add the logic to deal with input sizes that are not a round multiple of the AES block size, as described by the XTS spec. This brings the SPE implementation in line with other kernel drivers that have been updated recently to take this into account.

Cc: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

Convert the glue code for the PowerPC SPE implementations of AES-ECB, AES-CBC, AES-CTR, and AES-XTS from the deprecated "blkcipher" API to the "skcipher" API. This is needed in order for the blkcipher API to be removed.

Tested with:

    export ARCH=powerpc CROSS_COMPILE=powerpc-linux-gnu-
    make mpc85xx_defconfig
    cat >> .config << EOF
    # CONFIG_MODULES is not set
    # CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
    CONFIG_DEBUG_KERNEL=y
    CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y
    CONFIG_CRYPTO_AES=y
    CONFIG_CRYPTO_CBC=y
    CONFIG_CRYPTO_CTR=y
    CONFIG_CRYPTO_ECB=y
    CONFIG_CRYPTO_XTS=y
    CONFIG_CRYPTO_AES_PPC_SPE=y
    EOF
    make olddefconfig
    make -j32
    qemu-system-ppc -M mpc8544ds -cpu e500 -nographic \
        -kernel arch/powerpc/boot/zImage \
        -append cryptomgr.fuzz_iterations=1000

Note that xts-ppc-spe still fails the comparison tests due to the lack of ciphertext stealing support. This is not addressed by this patch.

This patch also cleans up the code by making ->encrypt() and ->decrypt() call a common function for each of ECB, CBC, and XTS, and by using a clearer way to compute the length to process at each step.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

Set the ivsize for the "ecb-ppc-spe" algorithm to 0, since ECB mode doesn't take an IV. This fixes a failure in the extra crypto self-tests:

    alg: skcipher: ivsize for ecb-ppc-spe (16) doesn't match generic impl (0)

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
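Schematically, the relevant field in an skcipher_alg definition; the function and variable names below are invented for illustration, not the actual ppc-spe glue symbols:

    static struct skcipher_alg ppc_spe_ecb_alg = {
            .base.cra_name          = "ecb(aes)",
            .base.cra_driver_name   = "ecb-ppc-spe",
            .min_keysize            = AES_MIN_KEY_SIZE,
            .max_keysize            = AES_MAX_KEY_SIZE,
            .ivsize                 = 0,    /* ECB takes no IV */
            .setkey                 = ppc_aes_setkey_skcipher,
            .encrypt                = ppc_ecb_encrypt,
            .decrypt                = ppc_ecb_decrypt,
    };

The self-test compares each driver's advertised ivsize against the generic implementation, which is how the stray 16 was caught.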
-
Committed by Eric Biggers

The PowerPC SPE implementations of AES modes only disable preemption during the actual encryption/decryption, not during the scatterwalk functions. It's therefore unnecessary to request an atomic scatterwalk. So don't do so.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 23 Oct 2019, 7 commits
-
-
Committed by Eric Biggers

Convert the glue code for the S390 CPACF implementations of DES-ECB, DES-CBC, DES-CTR, 3DES-ECB, 3DES-CBC, and 3DES-CTR from the deprecated "blkcipher" API to the "skcipher" API. This is needed in order for the blkcipher API to be removed. Note: I made CTR use the same function for encryption and decryption, since CTR encryption and decryption are identical.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
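CTR is a stream mode, so both directions XOR the data with the same keystream and one handler can serve both. A sketch with hypothetical names (the common helper is invented for illustration):

    static int ctr_des_crypt(struct skcipher_request *req)
    {
            /* Generate the keystream via CPACF and XOR it with the
             * data; the operation is direction-independent. */
            return ctr_des_do_crypt(req);   /* hypothetical helper */
    }

    static struct skcipher_alg ctr_des_alg = {
            .encrypt        = ctr_des_crypt,
            .decrypt        = ctr_des_crypt, /* identical in CTR mode */
    };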
-
Committed by Eric Biggers

Convert the glue code for the S390 CPACF protected key implementations of AES-ECB, AES-CBC, AES-XTS, and AES-CTR from the deprecated "blkcipher" API to the "skcipher" API. This is needed in order for the blkcipher API to be removed. Note: I made CTR use the same function for encryption and decryption, since CTR encryption and decryption are identical.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

Convert the glue code for the S390 CPACF implementations of AES-ECB, AES-CBC, AES-XTS, and AES-CTR from the deprecated "blkcipher" API to the "skcipher" API. This is needed in order for the blkcipher API to be removed. Note: I made CTR use the same function for encryption and decryption, since CTR encryption and decryption are identical.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

Convert the glue code for the SPARC64 DES opcodes implementations of DES-ECB, DES-CBC, 3DES-ECB, and 3DES-CBC from the deprecated "blkcipher" API to the "skcipher" API. This is needed in order for the blkcipher API to be removed.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

Convert the glue code for the SPARC64 Camellia opcodes implementations of Camellia-ECB and Camellia-CBC from the deprecated "blkcipher" API to the "skcipher" API. This is needed in order for the blkcipher API to be removed.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Eric Biggers

Convert the glue code for the SPARC64 AES opcodes implementations of AES-ECB, AES-CBC, and AES-CTR from the deprecated "blkcipher" API to the "skcipher" API. This is needed in order for the blkcipher API to be removed.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Ard Biesheuvel

Instead of allowing the Crypto Extensions algorithms to be selected when using a toolchain that does not support them, and complaining about it at build time, use the information we have about the compiler to prevent them from being selected in the first place. Users that are stuck with a GCC version <4.8 are unlikely to care about these routines anyway, and it cleans up the Makefile considerably.

While at it, add explicit 'armv8-a' CPU specifiers to the code that uses the 'crypto-neon-fp-armv8' FPU specifier so we don't regress Clang, which will complain about this in version 10 and later.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 04 Oct 2019, 3 commits
-
-
Committed by Tony Lindgren

Commit 0ed266d7 ("clk: ti: omap3: cleanup unnecessary clock aliases") removed old omap3 clock framework aliases but caused omap3-rom-rng to stop working with a clock-not-found error. Based on discussions on the mailing list, Tero Kristo requested that this be fixed by probing omap3-rom-rng via device tree so it gets a proper clk property. The other option would be to add back the missing clock alias, but that does not help moving things forward with removing old legacy platform_data. Let's also add a proper device tree binding and keep it together with the fix.

Cc: devicetree@vger.kernel.org
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: Adam Ford <aford173@gmail.com>
Cc: Pali Rohár <pali.rohar@gmail.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Sebastian Reichel <sre@kernel.org>
Cc: Tero Kristo <t-kristo@ti.com>
Fixes: 0ed266d7 ("clk: ti: omap3: cleanup unnecessary clock aliases")
Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Tony Lindgren

In general we should check for a GP device instead of an HS device unless the other options such as EMU are also checked. Otherwise omap3-rom-rng won't probe on a few of the old n900 macro boards still in service in automated build and boot test systems.

Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: Adam Ford <aford173@gmail.com>
Cc: Pali Rohár <pali.rohar@gmail.com>
Cc: Sebastian Reichel <sre@kernel.org>
Cc: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Committed by Ard Biesheuvel

To improve performance on cores with deep pipelines such as ThunderX2, reimplement gcm(aes) using a 4-way interleave rather than the 2-way interleave we use currently.

This comes down to a complete rewrite of the GCM part of the combined GCM/GHASH driver, and instead of interleaving two invocations of AES with the GHASH handling at the instruction level, the new version uses a more coarse grained approach where each chunk of 64 bytes is encrypted first and then ghashed (or ghashed and then decrypted in the converse case).

The core NEON routine is now able to consume inputs of any size, and tail blocks of less than 64 bytes are handled using overlapping loads and stores, and processed by the same 4-way encryption and hashing routines. This gets rid of most of the branches, and avoids having to return to the C code to handle the tail block using a stack buffer.

The table below compares the performance of the old driver and the new one on various micro-architectures and running in various modes.

            |     AES-128      |     AES-192      |     AES-256      |
     #bytes | 512 | 1500 |  4k | 512 | 1500 |  4k | 512 | 1500 |  4k |
     -------+-----+------+-----+-----+------+-----+-----+------+-----+
        TX2 | 35% |  23% | 11% | 34% |  20% |  9% | 38% |  25% | 16% |
       EMAG | 11% |   6% |  3% | 12% |   4% |  2% | 11% |   4% |  2% |
        A72 |  8% |   5% | -4% |  9% |   4% | -5% |  7% |   4% | -5% |
        A53 | 11% |   6% | -1% | 10% |   8% | -1% | 10% |   8% | -2% |

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 30 Sep 2019, 5 commits
-
-
Committed by Krzysztof Wilczynski

Move the static keyword to the front of the declaration of csky_pmu_of_device_ids, and resolve the following compiler warning that can be seen when building with warnings enabled (W=1):

    arch/csky/kernel/perf_event.c:1340:1: warning: 'static' is not at beginning of declaration [-Wold-style-declaration]

Signed-off-by: Krzysztof Wilczynski <kw@linux.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
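The warning is purely about declaration-specifier order; a schematic before/after (the table entry is illustrative, not the file's exact contents):

    /* Before: accepted by the compiler, but triggers
     * -Wold-style-declaration under W=1. */
    const static struct of_device_id csky_pmu_of_device_ids[] = {
            { .compatible = "csky,csky-pmu" },      /* illustrative */
            {},
    };

    /* After: 'static' leads the declaration specifiers. */
    static const struct of_device_id csky_pmu_of_device_ids[] = {
            { .compatible = "csky,csky-pmu" },
            {},
    };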
-
Committed by Valentin Schneider

Since the enabling and disabling of IRQs within preempt_schedule_irq() is contained in a need_resched() loop, we don't need the outer arch code loop.

Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
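Why the arch loop is redundant: preempt_schedule_irq() in kernel/sched/core.c already re-checks need_resched() itself, roughly as follows (a sketch of the core loop, details elided):

    do {
            preempt_disable();
            local_irq_enable();
            __schedule(true);               /* preempt = true */
            local_irq_disable();
            sched_preempt_enable_no_resched();
    } while (need_resched());

So wrapping the call in another need_resched() loop in the entry assembly only duplicates work the scheduler core already does.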
-
Committed by Mao Han

csky_pmu.max_period has type u64, but BIT() can only return a 32-bit unsigned long on C-SKY, so the initialization of max_period is incorrect when count_width is bigger than 32. Use BIT_ULL() instead.

Signed-off-by: Mao Han <han_mao@c-sky.com>
Signed-off-by: Guo Ren <ren_guo@c-sky.com>
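A minimal sketch of the fix:

    #include <linux/bits.h>

    /* BIT(n) expands to (1UL << (n)); unsigned long is 32 bits on
     * C-SKY, so BIT(count_width) - 1 truncates once count_width >= 32.
     * BIT_ULL(n) expands to (1ULL << (n)) and keeps the full 64-bit
     * range that the u64 max_period field expects. */
    csky_pmu.max_period = BIT_ULL(count_width) - 1;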
-
Committed by Guo Ren

We need to set fp to zero so that backtrace knows where to stop. This patch fixes a perf callchain panic caused by backtrace not knowing the end of the fp chain.

Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Reported-by: Mao Han <han_mao@c-sky.com>
-
Committed by Mike Rapoport

The csky implementation of free_initrd_mem() is an open-coded version of free_reserved_area() without poisoning. Remove it and make csky use the generic version of free_initrd_mem().

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
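For reference, the generic version being adopted looks approximately like this (a weak default in init/initramfs.c that architectures may override; modulo details):

    void __weak free_initrd_mem(unsigned long start, unsigned long end)
    {
            free_reserved_area((void *)start, (void *)end,
                               POISON_FREE_INITMEM, "initrd");
    }

Architectures with no special initrd handling can simply delete their local stub and inherit this one.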
-
- 27 Sep 2019, 3 commits
-
-
Committed by Oliver O'Halloran

s/CONFIG_IOV/CONFIG_PCI_IOV/

Whoops.

Fixes: bd6461cc ("powerpc/eeh: Add a eeh_dev_break debugfs interface")
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
[mpe: Fixup the #endif comment as well]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190926122502.14826-1-oohall@gmail.com
-
Committed by Andrew Morton

A last-minute fixlet which I'd failed to merge at the appropriate time had the predictable effect.

Fixes: f672e2c217e2d4b2 ("lib: untag user pointers in strn*_user")
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Mark Rutland

The naming of pgtable_page_{ctor,dtor}() seems to have confused a few people, and until recently arm64 used these erroneously/pointlessly for other levels of page table. To make it incredibly clear that these only apply to the PTE level, and to align with the naming of pgtable_pmd_page_{ctor,dtor}(), let's rename them to pgtable_pte_page_{ctor,dtor}(). These changes were generated with the following shell script:

    git grep -lw 'pgtable_page_.tor' | while read FILE; do
        sed -i '{s/pgtable_page_ctor/pgtable_pte_page_ctor/}' $FILE;
        sed -i '{s/pgtable_page_dtor/pgtable_pte_page_dtor/}' $FILE;
    done

... with the documentation re-flowed to remain under 80 columns, and whitespace fixed up in macros to keep backslashes aligned. There should be no functional change as a result of this patch.

Link: http://lkml.kernel.org/r/20190722141133.3116-1-mark.rutland@arm.com
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 26 Sep 2019, 6 commits
-
-
Committed by Mike Rapoport

hexagon never reserves or initializes initrd, and the only mention of it is the empty free_initrd_mem() function. As we have a generic implementation of free_initrd_mem(), there is no need to define an empty stub for the hexagon implementation, and it can be dropped.

Link: http://lkml.kernel.org/r/1565858133-25852-1-git-send-email-rppt@linux.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Richard Kuo <rkuo@codeaurora.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Minchan Kim

When a process expects no accesses to a certain memory range for a long time, it could hint the kernel that the pages can be reclaimed instantly but the data should be preserved for future use. This could reduce workingset eviction, so it ends up increasing performance.

This patch introduces the new MADV_PAGEOUT hint to the madvise(2) syscall. MADV_PAGEOUT can be used by a process to mark a memory range as not expected to be used for a long time so that the kernel reclaims *any LRU* pages instantly. The hint can help the kernel in deciding which pages to evict proactively.

A note: it intentionally doesn't apply the SWAP_CLUSTER_MAX LRU page isolation limit, because the work is automatically bounded by PMD size. If the PMD size (e.g., 256 pages) causes trouble, we could fix it later by limiting it to SWAP_CLUSTER_MAX [1].

Man-page material:

    MADV_PAGEOUT (since Linux x.x)

    Do not expect access in the near future, so pages in the specified
    regions could be reclaimed instantly regardless of memory pressure.
    Thus, access in the range after a successful operation could cause a
    major page fault but never lose the up-to-date contents, unlike
    MADV_DONTNEED. Pages belonging to a shared mapping are only processed
    if a write access is allowed for the calling process. MADV_PAGEOUT
    cannot be applied to locked pages, Huge TLB pages, or VM_PFNMAP pages.

[1] https://lore.kernel.org/lkml/20190710194719.GS29695@dhcp22.suse.cz/

[minchan@kernel.org: clear PG_active on MADV_PAGEOUT]
Link: http://lkml.kernel.org/r/20190802200643.GA181880@google.com
[akpm@linux-foundation.org: resolve conflicts with hmm.git]
Link: http://lkml.kernel.org/r/20190726023435.214162-5-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: kbuild test robot <lkp@intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Chris Zankel <chris@zankel.net>
Cc: Daniel Colascione <dancol@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Oleksandr Natalenko <oleksandr@redhat.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Sonny Rao <sonnyrao@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Tim Murray <timmurray@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
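A userspace usage sketch covering this hint and MADV_COLD from the same series. The constants match the values the series proposes, but older libc headers may not define them yet, hence the guards:

    #include <stdio.h>
    #include <sys/mman.h>

    #ifndef MADV_COLD
    #define MADV_COLD       20      /* deactivate; reclaim later under pressure */
    #endif
    #ifndef MADV_PAGEOUT
    #define MADV_PAGEOUT    21      /* reclaim now; contents are preserved */
    #endif

    /* Hint that [buf, buf + len) will not be touched for a while. */
    static void hint_range(void *buf, size_t len, int reclaim_now)
    {
            if (madvise(buf, len, reclaim_now ? MADV_PAGEOUT : MADV_COLD) != 0)
                    perror("madvise");
    }

Unlike MADV_DONTNEED, a later read of the range after MADV_PAGEOUT faults the preserved contents back in rather than returning zero pages.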
-
Committed by Minchan Kim

Patch series "Introduce MADV_COLD and MADV_PAGEOUT", v7.

- Background

The Android terminology used for forking a new process and starting an app from scratch is a cold start, while resuming an existing app is a hot start. While we continually try to improve the performance of cold starts, hot starts will always be significantly less power hungry as well as faster, so we are trying to make hot starts more likely than cold starts.

To increase hot starts, Android userspace manages the order in which apps should be killed in a process called ActivityManagerService. ActivityManagerService tracks every Android app or service that the user could be interacting with at any time and translates that into a ranked list for lmkd (the low memory killer daemon). They are likely to be killed by lmkd if the system has to reclaim memory. In that sense they are similar to entries in any other cache. Those apps are kept alive for opportunistic performance improvements, but those improvements will vary based on the memory requirements of individual workloads.

- Problem

Naturally, cached apps were dominant consumers of memory on the system. However, they were not significant consumers of swap even though they are good candidates for swap. Under investigation, swapping out only begins once the low zone watermark is hit and kswapd wakes up, but the overall allocation rate in the system might trip lmkd thresholds and cause a cached process to be killed (we measured the performance of swapping out vs. zapping the memory by killing a process; unsurprisingly, zapping is 10x faster even though we use zram, which is much faster than real storage), so a kill from lmkd will often satisfy the high zone watermark, resulting in very few pages actually being moved to swap.

- Approach

The approach we chose was to use a new interface to allow userspace to proactively reclaim entire processes by leveraging platform information. This allowed us to bypass the inaccuracy of the kernel's LRUs for pages that are known to be cold from userspace and to avoid races with lmkd by reclaiming apps as soon as they entered the cached state. Additionally, it could provide many chances for the platform to use extra information to optimize memory efficiency.

To achieve the goal, the patchset introduces two new options for madvise. One is MADV_COLD, which will deactivate activated pages, and the other is MADV_PAGEOUT, which will reclaim private pages instantly. These new options complement MADV_DONTNEED and MADV_FREE by adding non-destructive ways to gain some free memory space. MADV_PAGEOUT is similar to MADV_DONTNEED in that it hints the kernel that the memory region is not currently needed and should be reclaimed immediately; MADV_COLD is similar to MADV_FREE in that it hints the kernel that the memory region is not currently needed and should be reclaimed when memory pressure rises.

This patch (of 5):

When a process expects no accesses to a certain memory range, it could give a hint to the kernel that the pages can be reclaimed when memory pressure happens, but the data should be preserved for future use. This could reduce workingset eviction, so it ends up increasing performance.

This patch introduces the new MADV_COLD hint to the madvise(2) syscall. MADV_COLD can be used by a process to mark a memory range as not expected to be used in the near future. The hint can help the kernel in deciding which pages to evict early during memory pressure. It works for every LRU page like MADV_[DONTNEED|FREE]. IOW, it moves:

    active file page -> inactive file LRU
    active anon page -> inactive anon LRU

Unlike MADV_FREE, it doesn't move active anonymous pages to the inactive file LRU's head, because MADV_COLD has slightly different semantics. MADV_FREE means it's okay to discard under memory pressure because the content of the page is *garbage*, so freeing such pages is almost zero overhead: we don't need to swap them out, and a later access causes just a minor fault. Thus, it makes sense to put those freeable pages on the inactive file LRU to compete with other used-once pages. It makes sense from an implementation point of view, too, because such a page is no longer swap-backed until it is re-dirtied. It could even be a bonus that they can be reclaimed on a swapless system.

However, MADV_COLD doesn't mean garbage, so reclaiming such pages requires swap-out/in in the end, which is a bigger cost. Since we have designed VM LRU aging based on a cost model, anonymous cold pages are better positioned on the inactive anon LRU list, not the file LRU. Furthermore, this helps avoid unnecessary scanning if the system doesn't have a swap device. Let's start with the simpler way without adding complexity at this moment. Keep in mind, though, the caveat that workloads with a lot of page cache are likely to effectively ignore MADV_COLD on anonymous memory, because we rarely age anonymous LRU lists.

Man-page material:

    MADV_COLD (since Linux x.x)

    Pages in the specified regions will be treated as less-recently-accessed
    compared to pages in the system with similar access frequencies. In
    contrast to MADV_FREE, the contents of the region are preserved
    regardless of subsequent writes to pages. MADV_COLD cannot be applied to
    locked pages, Huge TLB pages, or VM_PFNMAP pages.

[akpm@linux-foundation.org: resolve conflicts with hmm.git]
Link: http://lkml.kernel.org/r/20190726023435.214162-2-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: kbuild test robot <lkp@intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Chris Zankel <chris@zankel.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Daniel Colascione <dancol@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Oleksandr Natalenko <oleksandr@redhat.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Sonny Rao <sonnyrao@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Tim Murray <timmurray@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Andrey Konovalov

Patch series "arm64: untag user pointers passed to the kernel", v19.

=== Overview

arm64 has a feature called Top Byte Ignore, which allows embedding pointer tags into the top byte of each pointer. Userspace programs (such as HWASan, a memory debugging tool [1]) might use this feature and pass tagged user pointers to the kernel through syscalls or other interfaces.

Right now the kernel is already able to handle user faults with tagged pointers, due to these patches:

1. 81cddd65 ("arm64: traps: fix userspace cache maintenance emulation on a tagged pointer")
2. 7dcd9dd8 ("arm64: hw_breakpoint: fix watchpoint matching for tagged pointers")
3. 276e9327 ("arm64: entry: improve data abort handling of tagged pointers")

This patchset extends tagged pointer support to syscall arguments. As per the proposed ABI change [3], tagged pointers are only allowed to be passed to syscalls when they point to memory ranges obtained by anonymous mmap() or sbrk() (see the patchset [3] for more details).

For non-memory syscalls this is done by untagging user pointers when the kernel performs pointer checking to find out whether the pointer comes from userspace (most notably in access_ok). The untagging is done only when the pointer is being checked; the tag is preserved as the pointer makes its way through the kernel and stays tagged when the kernel dereferences the pointer when performing user memory accesses.

The mmap and mremap (only new_addr) syscalls do not currently accept tagged addresses. Architectures may interpret the tag as a background colour for the corresponding vma.

Other memory syscalls (mprotect, etc.) don't do user memory accesses but rather deal with memory ranges, and untagged pointers are better suited to describe memory ranges internally. Thus for memory syscalls we untag pointers completely when they enter the kernel.

=== Other approaches

One of the alternative approaches to untagging that was considered is to completely strip the pointer tag as the pointer enters the kernel with some kind of syscall wrapper, but that won't work with the countless number of different ioctl calls. With this approach we would need a custom wrapper for each ioctl variation, which doesn't seem practical.

An alternative approach to untagging pointers in memory syscall prologues is to instead allow tagged pointers to be passed to find_vma() (and other vma-related functions) and untag them there. Unfortunately, a lot of find_vma() callers then compare or subtract the returned vma start and end fields against the pointer that was being searched. Thus this approach would still require changing all find_vma() callers.

=== Testing

The following testing approaches have been taken to find potential issues with user pointer untagging:

1. Static testing (with sparse [2] and separately with a custom static analyzer based on Clang) to track casts of __user pointers to integer types, to find places where untagging needs to be done.
2. Static testing with grep to find parts of the kernel that call find_vma() (and other similar functions) or directly compare against vm_start/vm_end fields of a vma.
3. Static testing with grep to find parts of the kernel that compare user pointers with TASK_SIZE or other similar consts and macros.
4. Dynamic testing: adding BUG_ON(has_tag(addr)) to find_vma() and running a modified syzkaller version that passes tagged pointers to the kernel.

Based on the results of the testing, the required patches have been added to the patchset.

=== Notes

This patchset is meant to be merged together with "arm64 relaxed ABI" [3]. This patchset is a prerequisite for ARM's memory tagging hardware feature support [4]. This patchset has been merged into the Pixel 2 & 3 kernel trees and is now being used to enable testing of Pixel phones with HWASan. Thanks!

[1] http://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html
[2] https://github.com/lucvoo/sparse-dev/commit/5f960cb10f56ec2017c128ef9d16060e0145f292
[3] https://lkml.org/lkml/2019/6/12/745
[4] https://community.arm.com/processors/b/blog/posts/arm-a-profile-architecture-2018-developments-armv85a

This patch (of 11):

This patch is a part of a series that extends the kernel ABI to allow passing tagged user pointers (with the top byte set to something other than 0x00) as syscall arguments.

strncpy_from_user and strnlen_user accept user addresses as arguments, and do not go through the same path as copy_from_user and others, so here we need to handle the case of tagged user addresses separately. Untag user pointers passed to these functions.

Note that this patch only temporarily untags the pointers to perform validity checks, but then uses them as is to perform user memory accesses.

[andreyknvl@google.com: fix sparc4 build]
Link: http://lkml.kernel.org/r/CAAeHK+yx4a-P0sDrXTUxMvO2H0CJZUFPffBrg_cU7oJOZyC7ew@mail.gmail.com
Link: http://lkml.kernel.org/r/c5a78bcad3e94d6cda71fcaa60a423231ae71e4c.1563904656.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Eric Auger <eric.auger@redhat.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jens Wiklander <jens.wiklander@linaro.org>
Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
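The untagging helper at the heart of the series is essentially a sign-extension from bit 55; a sketch of the idea, not the exact arm64 header:

    #include <linux/bitops.h>

    /* Sign-extend from bit 55: this clears the tag in the top byte of
     * a userspace address (bit 55 clear) while leaving kernel
     * addresses (bit 55 set) with their top bits intact. */
    #define untagged_addr(addr) \
            ((__force __typeof__(addr))sign_extend64((__force u64)(addr), 55))

In strncpy_from_user()/strnlen_user(), a form of this macro is applied to the address used for the range check, while the original tagged pointer is still what gets dereferenced.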
-
Committed by Michel Lespinasse

Add RB_DECLARE_CALLBACKS_MAX, which generates augmented rbtree callbacks for the case where the augmented value is a scalar whose definition follows a max(f(node)) pattern. This actually covers all present uses of RB_DECLARE_CALLBACKS, and saves some (source) code duplication in the various RBCOMPUTE function definitions.

[walken@google.com: fix mm/vmalloc.c]
Link: http://lkml.kernel.org/r/CANN689FXgK13wDYNh1zKxdipeTuALG4eKvKpsdZqKFJ-rvtGiQ@mail.gmail.com
[walken@google.com: re-add check to check_augmented()]
Link: http://lkml.kernel.org/r/20190727022027.GA86863@google.com
Link: http://lkml.kernel.org/r/20190703040156.56953-3-walken@google.com
Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: Uladzislau Rezki <urezki@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
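Usage follows the interval-tree pattern; a hypothetical instantiation (struct and function names invented for illustration, macro argument order as in linux/rbtree_augmented.h):

    #include <linux/rbtree_augmented.h>

    struct itnode {
            struct rb_node rb;
            unsigned long start, last;
            unsigned long __subtree_last;   /* max of 'last' in subtree */
    };

    static inline unsigned long itnode_last(struct itnode *n)
    {
            return n->last;
    }

    /* Generates the propagate/copy/rotate callbacks that keep
     * __subtree_last equal to max(itnode_last(node)) over each subtree,
     * so callers no longer hand-write an RBCOMPUTE function. */
    RB_DECLARE_CALLBACKS_MAX(static, itnode_augment,
                             struct itnode, rb,
                             unsigned long, __subtree_last, itnode_last)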
-
Committed by Paolo Bonzini

KVM was incorrectly checking vmcs12->host_ia32_efer even if the "load IA32_EFER" exit control was reset. Also, some checks were not using the new CC macro for tracing. Clean up everything so that the vCPU's 64-bit mode is determined directly from EFER_LMA and the VMCS checks are based on that, which matches section 26.2.4 of the SDM.

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Fixes: 5845038c
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-