- 12 June 2017, 14 commits
-
-
Submitted by Heiko Carstens

_REGION3_ENTRY_ORIGIN defines a wrong mask which can be used to extract a segment table origin from a region 3 table entry. It removes only the lower 11 instead of 12 bits from a region 3 table entry. Luckily this bit is currently always zero, so nothing bad has happened yet. In order to avoid future bugs just remove the region 3 specific mask and use the correct generic _REGION_ENTRY_ORIGIN mask.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
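
A minimal standalone sketch of what the off-by-one bit in the mask means; the concrete mask values below are assumptions chosen for illustration (clear the low 12 bits versus clear only the low 11), not values copied from the kernel headers:

```c
#include <stdio.h>

/* Assumed values for illustration: the generic mask clears the low 12 bits,
 * while the removed region 3 specific mask only cleared the low 11 bits. */
#define GENERIC_REGION_ENTRY_ORIGIN  (~0xfffUL)  /* correct */
#define OLD_REGION3_ENTRY_ORIGIN     (~0x7ffUL)  /* leaves bit 2^11 in place */

int main(void)
{
	/* A region 3 entry whose low bits carry flags, including bit 2^11. */
	unsigned long entry = 0x123456789000UL | 0x800UL | 0x7fUL;

	printf("generic mask: %#lx\n", entry & GENERIC_REGION_ENTRY_ORIGIN);
	printf("old r3 mask:  %#lx\n", entry & OLD_REGION3_ENTRY_ORIGIN);
	return 0;
}
```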
-
Submitted by Martin Schwidefsky

The regset functions for guarded storage are supposed to work on the current task as well. For task == current, add the required load and store instructions for the guarded storage control block.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Heiko Carstens

All call sites of "stfle" check if the instruction is available before executing it. Therefore there is no reason to have the corresponding facility bit set within the architecture level set. This removes the last more or less odd bit from the list.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Thomas Huth

The code in arch/s390/crypto checks for the availability of the 'message security assist' facility on its own, either by using module_cpu_feature_match(MSA, ...) or by checking the facility bit during cpacf_query(). Thus setting the MSA facility bit in gen_facilities.c as a hard requirement is not necessary. We can remove it here, so that the kernel can also run on systems that do not provide the MSA facility yet (like the emulated environment of QEMU, for example).

Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
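
A rough sketch of the two runtime checks mentioned above, assuming the usual cpacf helper signatures from arch/s390/include/asm/cpacf.h; the init function, its name and its error handling are simplified placeholders, not the real aes_s390 code:

```c
#include <linux/module.h>
#include <linux/cpufeature.h>
#include <linux/errno.h>
#include <asm/cpacf.h>

static cpacf_mask_t km_functions;

static int __init msa_example_init(void)
{
	/* Ask the machine which KM subfunctions it really provides. */
	cpacf_query(CPACF_KM, &km_functions);

	if (!cpacf_test_func(&km_functions, CPACF_KM_AES_128))
		return -ENODEV;	/* MSA/AES not available, refuse to load */
	return 0;
}

/* Only bind the module when the MSA CPU feature is present at all. */
module_cpu_feature_match(MSA, msa_example_init);

MODULE_LICENSE("GPL");
```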
-
Submitted by Heiko Carstens

When masking an ASCE to get its origin, use the corresponding define instead of the unrelated PAGE_MASK. This doesn't fix a bug since both masks are identical.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Heiko Carstens

Use the proper define instead of open-coding the condition code value.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Christian Borntraeger

I get (number of CPUs - 1) kmemleak hits like:

  unreferenced object 0x37ec6f000 (size 1024):
    comm "swapper/0", pid 1, jiffies 4294937330 (age 889.690s)
    hex dump (first 32 bytes):
      6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
      6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
    backtrace:
      [<000000000034a848>] kmem_cache_alloc+0x2b8/0x3d0
      [<00000000001164de>] __cpu_up+0x456/0x488
      [<000000000016f60c>] bringup_cpu+0x4c/0xd0
      [<000000000016d5d2>] cpuhp_invoke_callback+0xe2/0x9e8
      [<000000000016f3c6>] cpuhp_up_callbacks+0x5e/0x110
      [<000000000016f988>] _cpu_up+0xe0/0x158
      [<000000000016faf0>] do_cpu_up+0xf0/0x110
      [<0000000000dae1ee>] smp_init+0x126/0x130
      [<0000000000d9bd04>] kernel_init_freeable+0x174/0x2e0
      [<000000000089fc62>] kernel_init+0x2a/0x148
      [<00000000008adce2>] kernel_thread_starter+0x6/0xc
      [<00000000008adcdc>] kernel_thread_starter+0x0/0xc
      [<ffffffffffffffff>] 0xffffffffffffffff

The pointer of this data structure is stored in the prefix page of that CPU together with some extra bits ORed into the low bits. Mark the data structure as non-leak.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
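
The usual way to silence such a false positive is kmemleak_not_leak(); a minimal sketch of that pattern follows (the structure and function names are made up for illustration, only the kmemleak call reflects the kind of fix described above):

```c
#include <linux/slab.h>
#include <linux/kmemleak.h>

/* Illustrative per-cpu object; not the real s390 lowcore structure. */
struct pcpu_example {
	unsigned long data[128];
};

static struct pcpu_example *alloc_pcpu_example(void)
{
	struct pcpu_example *p = kmalloc(sizeof(*p), GFP_KERNEL);

	if (!p)
		return NULL;
	/*
	 * The only stored reference has extra bits ORed into its low bits,
	 * so kmemleak cannot recognize it as a pointer to this object.
	 * Tell kmemleak explicitly that the object is not leaked.
	 */
	kmemleak_not_leak(p);
	return p;
}
```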
-
Submitted by Harald Freudenberger

The s390_paes and s390_aes kernel modules used just one config symbol, CONFIG_CRYPTO_AES. As paes has a dependency on PKEY, and PKEY requires ZCRYPT, the aes module also ended up with a dependency on the zcrypt device driver, which is not correct. Fix this by introducing a new config symbol, CONFIG_CRYPTO_PAES, which has dependencies on PKEY and ZCRYPT, and by removing the ZCRYPT dependency from the aes module.

Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Martin Schwidefsky

Switch to the improved _install_special_mapping function to install the vdso mapping. This has two advantages: the arch_vma_name function is no longer needed, and the vdso vma keeps its name after its memory location has been changed with mremap.

Tested-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
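
A condensed sketch of the _install_special_mapping() pattern described above; the mapping setup, flags and names are simplified assumptions, not the actual s390 vdso code:

```c
#include <linux/err.h>
#include <linux/mm.h>
#include <linux/mm_types.h>

static struct page *vdso_pagelist[1];	/* assumed to be filled at init time */

static const struct vm_special_mapping vdso_mapping = {
	.name  = "[vdso]",	/* the name now survives mremap of the vma */
	.pages = vdso_pagelist,
};

static int map_vdso_example(struct mm_struct *mm, unsigned long addr,
			    unsigned long len)
{
	struct vm_area_struct *vma;

	vma = _install_special_mapping(mm, addr, len,
				       VM_READ | VM_EXEC |
				       VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC,
				       &vdso_mapping);
	return PTR_ERR_OR_ZERO(vma);	/* no arch_vma_name() needed anymore */
}
```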
-
Submitted by Martin Schwidefsky

The account_system_index_scaled function gets two cputime values: a raw value derived from CPU timer deltas and a scaled value. The scaled value is always calculated from the raw value, so the code can be simplified by moving the scale_vtime call into account_system_index_scaled.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Heiko Carstens

I missed at least these two header files where we can make use of the generic ones. vga.h is another one, however that is already addressed by a patch from Jiri Slaby.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Heiko Carstens

Add __rcu annotations so sparse correctly warns only if "slot" gets dereferenced without using rcu_dereference(). Right now we get warnings because of the missing annotation:

  arch/s390/mm/gmap.c:135:17: warning: incorrect type in assignment (different address spaces)
  arch/s390/mm/gmap.c:135:17:    expected void **slot
  arch/s390/mm/gmap.c:135:17:    got void [noderef] <asn:4>**

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
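
A small sketch of what the annotation buys (simplified, not the gmap.c code itself): once the slot is declared with __rcu, sparse stays quiet for accesses that go through rcu_dereference() and complains about plain dereferences.

```c
#include <linux/rcupdate.h>

/* Caller must hold rcu_read_lock(); the slot points into an RCU-protected
 * structure such as a radix tree. */
static void *deref_slot(void __rcu **slot)
{
	/* Fine for sparse: the __rcu address space is handled by the helper. */
	return rcu_dereference(*slot);
}

/* A plain "return *slot;" here would now trigger the address space warning,
 * which is exactly the behaviour the annotation is supposed to enforce. */
```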
-
Submitted by Heiko Carstens

Add missing include statements to make sure that prototypes match their implementation. As reported by sparse:

  arch/s390/crypto/arch_random.c:18:1: warning: symbol 's390_arch_random_available' was not declared. Should it be static?
  arch/s390/kernel/traps.c:279:13: warning: symbol 'trap_init' was not declared. Should it be static?

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Martin Schwidefsky

Add the logic to upgrade the page table for a 64-bit process to five levels. This increases TASK_SIZE from 8PB to 16EB-4K.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 8 June 2017, 1 commit
-
-
Submitted by Martin Schwidefsky

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 17 May 2017, 1 commit
-
-
Submitted by Christian Borntraeger

In most cases a protection exception in the host (e.g. copy on write or dirty tracking) on the sie instruction will indicate an instruction length of 4. It turns out that there are some corner cases (e.g. runtime instrumentation) where this is not necessarily true and the ILC is unpredictable. Let's replace our 4 byte rewind_pad with 3 byte nops to prepare for all possible ILCs.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 11 May 2017, 1 commit
-
-
Submitted by Elena Reshetova

The refcount_t type and the corresponding API should be used instead of atomic_t when the variable is used as a reference counter. This allows us to avoid accidental reference counter overflows that might lead to use-after-free situations.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David Windsor <dwindsor@gmail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
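
A generic before/after sketch of the conversion pattern (the struct and function names are illustrative only):

```c
#include <linux/refcount.h>
#include <linux/slab.h>

struct session {
	refcount_t refcount;		/* was: atomic_t refcount; */
	/* ... payload ... */
};

static struct session *session_new(void)
{
	struct session *s = kzalloc(sizeof(*s), GFP_KERNEL);

	if (s)
		refcount_set(&s->refcount, 1);	/* was: atomic_set(&s->refcount, 1) */
	return s;
}

static struct session *session_get(struct session *s)
{
	refcount_inc(&s->refcount);	/* saturates instead of wrapping on overflow */
	return s;
}

static void session_put(struct session *s)
{
	/* was: if (atomic_dec_and_test(&s->refcount)) kfree(s); */
	if (refcount_dec_and_test(&s->refcount))
		kfree(s);
}
```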
-
- 10 May 2017, 1 commit
-
-
Submitted by Nicolas Dichtel

Regularly, when a new header is created in include/uapi/, the developer forgets to add it to the corresponding Kbuild file. This error is usually detected after the release is out. In fact, all headers under uapi directories should be exported, so it is useless to maintain an exhaustive list.

After this patch, the following files, which were not exported, are now exported (with make headers_install_all):

  asm-arc/kvm_para.h  asm-arc/ucontext.h  asm-blackfin/shmparam.h
  asm-blackfin/ucontext.h  asm-c6x/shmparam.h  asm-c6x/ucontext.h
  asm-cris/kvm_para.h  asm-h8300/shmparam.h  asm-h8300/ucontext.h
  asm-hexagon/shmparam.h  asm-m32r/kvm_para.h  asm-m68k/kvm_para.h
  asm-m68k/shmparam.h  asm-metag/kvm_para.h  asm-metag/shmparam.h
  asm-metag/ucontext.h  asm-mips/hwcap.h  asm-mips/reg.h
  asm-mips/ucontext.h  asm-nios2/kvm_para.h  asm-nios2/ucontext.h
  asm-openrisc/shmparam.h  asm-parisc/kvm_para.h  asm-powerpc/perf_regs.h
  asm-sh/kvm_para.h  asm-sh/ucontext.h  asm-tile/shmparam.h
  asm-unicore32/shmparam.h  asm-unicore32/ucontext.h  asm-x86/hwcap2.h
  asm-xtensa/kvm_para.h  drm/armada_drm.h  drm/etnaviv_drm.h
  drm/vgem_drm.h  linux/aspeed-lpc-ctrl.h  linux/auto_dev-ioctl.h
  linux/bcache.h  linux/btrfs_tree.h  linux/can/vxcan.h
  linux/cifs/cifs_mount.h  linux/coresight-stm.h  linux/cryptouser.h
  linux/fsmap.h  linux/genwqe/genwqe_card.h  linux/hash_info.h
  linux/kcm.h  linux/kcov.h  linux/kfd_ioctl.h
  linux/lightnvm.h  linux/module.h  linux/nbd-netlink.h
  linux/nilfs2_api.h  linux/nilfs2_ondisk.h  linux/nsfs.h
  linux/pr.h  linux/qrtr.h  linux/rpmsg.h
  linux/sched/types.h  linux/sed-opal.h  linux/smc.h
  linux/smc_diag.h  linux/stm.h  linux/switchtec_ioctl.h
  linux/vfio_ccw.h  linux/wil6210_uapi.h  rdma/bnxt_re-abi.h

Note that I have removed from this list the files which are generated in every exported directory (like .install or .install.cmd).

Thanks to Julien Floret <julien.floret@6wind.com> for the tip to get all subdirs with a pure makefile command.

For the record, note that exported files for asm directories are a mix of files listed by:
- include/uapi/asm-generic/Kbuild.asm;
- arch/<arch>/include/uapi/asm/Kbuild;
- arch/<arch>/include/asm/Kbuild.

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 9 May 2017, 8 commits
-
-
Submitted by Heiko Carstens

The perf tool assumes that kernel symbols are never present at address zero. In fact it assumes that if functions that map symbols to addresses return zero, the symbol was not found. Given that s390's _text symbol historically is located at address zero, this yields at least a couple of false errors and warnings in one of perf's test cases about symbols not being present ("perf test 1").

To fix this simply move the _text symbol to address 0x200, just behind the initial psw and channel program located at the beginning of the kernel image. This is now hard coded within the linker script.

I tried a nicer solution which moves the initial psw and channel program into an own section. However that would move the symbols within the "real" head.text section to different addresses, since the ".org" statements within head.S are relative to the head.text section. If there is a new section in front, everything else will be moved. Alternatively I could have adjusted all ".org" statements. But the current solution seems to be the easiest one, since nobody really cares where the _text symbol is actually located.

Reported-by: Zvonko Kosic <zkosic@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Heiko Carstens

Fixes a couple of compile warnings seen with gcc 7.1.0:

  arch/s390/kernel/sysinfo.c:578:20: note: directive argument in the range [-2147483648, 4]
    sprintf(link_to, "15_1_%d", topology_mnest_limit());

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
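
A small userspace illustration of this class of gcc 7 warning and one common way to address it; this is only a sketch of the general fix pattern (bound the printed value's type, or size the buffer for the worst case), not the change that went into sysinfo.c:

```c
#include <stdio.h>

/* A narrow return type tells the compiler the value range is small. */
static unsigned char mnest_limit(void)
{
	return 4;
}

int main(void)
{
	/* "15_1_" plus at most three digits plus the NUL fits easily here. */
	char link_to[16];

	snprintf(link_to, sizeof(link_to), "15_1_%d", mnest_limit());
	puts(link_to);
	return 0;
}
```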
-
Submitted by Heiko Carstens

The average string that is copied from user space to kernel space is rather short. E.g. booting a system involves about 50,000 strncpy_from_user() calls where the NULL terminated string has an average size of 27 bytes. By default our s390 specific strncpy_from_user() implementation however copies up to 4096 bytes, which is a waste of CPU cycles and cache lines. Therefore reduce the default length to L1_CACHE_BYTES (256 bytes), which also reduces the average execution time of strncpy_from_user() by 30-40%.

Alternatively we could have switched to the generic strncpy_from_user() implementation, however it turned out that that variant would be slower than the now optimized s390 variant.

Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
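
A rough userspace model of the chunking idea (the real implementation is architecture-specific and uses low-level user copy primitives; raw_copy() below is a stand-in and the constants are assumptions):

```c
#include <stddef.h>
#include <string.h>

#define L1_CACHE_BYTES 256	/* assumed cache line size, as in the commit */

/* Stand-in for the architecture's fault-handling user copy primitive. */
static size_t raw_copy(char *dst, const char *src, size_t len)
{
	memcpy(dst, src, len);
	return 0;	/* 0 == success; nonzero would mean a fault */
}

static long strncpy_from_user_model(char *dst, const char *src, long count)
{
	long copied = 0;

	while (copied < count) {
		long chunk = count - copied;
		char *nul;

		/* Copy at most one cache line per iteration ... */
		if (chunk > L1_CACHE_BYTES)
			chunk = L1_CACHE_BYTES;
		if (raw_copy(dst + copied, src + copied, chunk))
			return -1;	/* -EFAULT in the kernel */

		/* ... and stop as soon as the terminating NUL shows up. */
		nul = memchr(dst + copied, 0, chunk);
		if (nul)
			return nul - dst;
		copied += chunk;
	}
	return count;	/* not NUL terminated within count bytes */
}
```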
-
Submitted by Laura Abbott

Now that all call sites have been converted, completely decouple cacheflush.h and set_memory.h.

[sfr@canb.auug.org.au: kprobes/x86: merge fix for set_memory.h decoupling]
Link: http://lkml.kernel.org/r/20170418180903.10300fd3@canb.auug.org.au
Link: http://lkml.kernel.org/r/1488920133-27229-17-git-send-email-labbott@redhat.com
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Laura Abbott

The set_memory_* functions have moved to set_memory.h. Switch to including this header explicitly.

Link: http://lkml.kernel.org/r/1488920133-27229-5-git-send-email-labbott@redhat.com
Signed-off-by: Laura Abbott <labbott@redhat.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
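
A minimal sketch of the resulting include and call pattern (the function here is illustrative; set_memory_ro() is the kind of API that now lives in asm/set_memory.h):

```c
#include <linux/mm.h>
#include <asm/set_memory.h>	/* no longer pulled in implicitly via cacheflush.h */

/* Mark a single page of kernel memory read-only. */
static void protect_one_page(unsigned long addr)
{
	set_memory_ro(addr & PAGE_MASK, 1);
}
```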
-
Submitted by Laura Abbott

Patch series "set_memory_* functions header refactor", v3.

The set_memory_* APIs came out of a desire to have a better way to change memory attributes. Many of these attributes were linked to cache functionality, so the prototypes were put in cacheflush.h. These days, the APIs have grown and have a much wider use than just cache APIs. To support this growth, split off set_memory_* and friends into a separate header file to avoid growing cacheflush.h for APIs that have nothing to do with caches.

Link: http://lkml.kernel.org/r/1488920133-27229-2-git-send-email-labbott@redhat.com
Signed-off-by: Laura Abbott <labbott@redhat.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Michal Hocko

There are many code paths opencoding kvmalloc. Let's use the helper instead. The main difference to kvmalloc is that those users are usually not considering all the aspects of the memory allocator. E.g. allocation requests <= 32kB (with 4kB pages) are basically never failing and invoke the OOM killer to satisfy the allocation. This sounds too disruptive for something that has a reasonable fallback - the vmalloc. On the other hand those requests might fall back to vmalloc even when the memory allocator would succeed after several more reclaim/compaction attempts previously. There is no guarantee something like that happens though.

This patch converts many of those places to kv[mz]alloc* helpers because they are more conservative.

Link: http://lkml.kernel.org/r/20170306103327.2766-2-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> # Xen bits
Acked-by: Kees Cook <keescook@chromium.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Andreas Dilger <andreas.dilger@intel.com> # Lustre
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> # KVM/s390
Acked-by: Dan Williams <dan.j.williams@intel.com> # nvdim
Acked-by: David Sterba <dsterba@suse.com> # btrfs
Acked-by: Ilya Dryomov <idryomov@gmail.com> # Ceph
Acked-by: Tariq Toukan <tariqt@mellanox.com> # mlx4
Acked-by: Leon Romanovsky <leonro@mellanox.com> # mlx5
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Cc: Santosh Raspatur <santosh@chelsio.com>
Cc: Hariprasad S <hariprasad@chelsio.com>
Cc: Yishai Hadas <yishaih@mellanox.com>
Cc: Oleg Drokin <oleg.drokin@intel.com>
Cc: "Yan, Zheng" <zyan@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
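
A before/after sketch of the conversion (the wrapper function names are illustrative; kvmalloc() and kvfree() are the real helpers):

```c
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Before: the open-coded fallback that many call sites carried around. */
static void *alloc_table_old(size_t size)
{
	void *p = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);

	if (!p)
		p = vmalloc(size);
	return p;
}

/* After: let the helper pick between kmalloc and vmalloc conservatively. */
static void *alloc_table_new(size_t size)
{
	return kvmalloc(size, GFP_KERNEL);
}

/* kvfree() handles both cases, so the free path stays the same either way. */
static void free_table(void *p)
{
	kvfree(p);
}
```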
-
Submitted by Alexey Dobriyan

Bit searching functions accept "unsigned long" indices but "nr_cpumask_bits" is "int", which is signed, so inevitable sign extensions occur on x86_64. Those MOVSX instructions are the #1 source of MOVSX bloat by number of uses across the whole kernel.

Change "nr_cpumask_bits" to unsigned; this number can't be negative after all. It allows implicit zero-extension on x86_64 without MOVSX. Change signed comparisons into unsigned comparisons where necessary. Other uses look fine because it is either an argument passed to a function or the comparison is already unsigned.

Net win on an allyesconfig type of kernel: ~2.8 KB (!)

  add/remove: 0/0 grow/shrink: 8/725 up/down: 93/-2926 (-2833)
  function                      old     new   delta
  xen_exit_mmap                 691     735     +44
  qstat_read                    426     440     +14
  __cpufreq_cooling_register   1678    1687      +9
  trace_rb_cpu_prepare          447     455      +8
  vermagic                       54      60      +6
  nfp_driver_version             54      60      +6
  rcu_torture_stats_print      1147    1151      +4
  find_next_push_cpu            267     269      +2
  xen_irq_resume                961     960      -1
  ...
  init_vp_index                 946     906     -40
  od_set_powersave_bias         328     281     -47
  power_cpu_exit                193     139     -54
  arch_show_interrupts         3538    3484     -54
  select_idle_sibling          1558    1471     -87
  Total: Before=158358910, After=158356077, chg -0.00%

The same arguments apply to "nr_cpu_ids", but I haven't yet found enough courage to delve into this issue (and a proper fix may require a new type "cpu_t", which is a whole separate story).

Link: http://lkml.kernel.org/r/20170309205322.GA1728@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 3 May 2017, 3 commits
-
-
Submitted by Heiko Carstens

Fix the following compile errors if CONFIG_KPROBES is disabled:

  arch/s390/kernel/uprobes.c:79:14: error: implicit declaration of function 'probe_get_fixup_type'
  arch/s390/kernel/uprobes.c:87:14: error: 'FIXUP_PSW_NORMAL' undeclared (first use in this function)

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Heiko Carstens

Fix this compile error if CONFIG_MODULES is disabled:

  arch/s390/built-in.o: In function `ftrace_plt_init':
  arch/s390/kernel/ftrace.o:(.init.text+0x34cc): undefined reference to `module_alloc'

Reported-by: Rob Landley <rob@landley.net>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
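
A sketch of the usual fix pattern for this kind of build error (illustrative; the actual change may differ): only reference module_alloc() when the module loader is built in.

```c
#include <linux/moduleloader.h>

static void *alloc_ftrace_plt_page(unsigned long size)
{
#ifdef CONFIG_MODULES
	return module_alloc(size);	/* provided by the module loader */
#else
	return NULL;	/* no modules, so no ftrace PLT trampoline needed */
#endif
}
```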
-
Submitted by Martin Schwidefsky

git commit c5328901 "[S390] entry[64].S improvements" removed the update of the exit_timer lowcore field from the critical section cleanup of the .Lsysc_restore/.Lsysc_done and .Lio_restore/.Lio_done blocks. If the PSW is updated by the critical section cleanup to point to user space again, the interrupt entry code will do a vtime calculation after the cleanup completed with an exit_timer value which has *not* been updated. Due to this, incorrect system time deltas are calculated.

If an interrupt occurred with an old PSW between .Lsysc_restore/.Lsysc_done or .Lio_restore/.Lio_done, update __LC_EXIT_TIMER with the system entry time of the interrupt.

Cc: stable@vger.kernel.org # 3.3+
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 27 April 2017, 3 commits
-
-
Submitted by Radim Krčmář

Users were expected to use kvm_check_request() for testing and clearing, but requests have expanded their use since then and some users want to only test, or do a faster clear. Make sure that requests are not directly accessed with bit operations.

Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
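
A short sketch of the accessor-based usage described above; KVM_REQ_UNHALT is used only as a convenient existing request bit, and the surrounding function is illustrative rather than real KVM code:

```c
#include <linux/kvm_host.h>
#include <linux/printk.h>

static void request_example(struct kvm_vcpu *vcpu)
{
	/* Producer side: raise a request for this vcpu. */
	kvm_make_request(KVM_REQ_UNHALT, vcpu);

	/* Peek without clearing, for users that only want to test. */
	if (kvm_test_request(KVM_REQ_UNHALT, vcpu))
		pr_debug("request pending\n");

	/* Consumer side: test and clear in one step. */
	if (kvm_check_request(KVM_REQ_UNHALT, vcpu))
		pr_debug("request handled\n");
}
```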
-
Submitted by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Submitted by Al Viro

All architectures converted.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 26 April 2017, 8 commits
-
-
Submitted by Harald Freudenberger

For automatic module loading (e.g. as it is used with cryptsetup) an alias "paes" for the paes_s390 kernel module is needed. Correct the paes_s390 module alias from "aes-all" to "paes".

Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
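
A stripped-down sketch of the alias declaration (everything except the alias macro is boilerplate added just to make the stub self-contained):

```c
#include <linux/module.h>
#include <linux/crypto.h>

/* Announce the module under the algorithm name user space asks for,
 * so a request for "paes" auto-loads paes_s390. */
MODULE_ALIAS_CRYPTO("paes");	/* was: MODULE_ALIAS_CRYPTO("aes-all") */

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Stub illustrating the paes crypto module alias");
```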
-
Submitted by Jason J. Herne

msa6 and msa7 require no changes. msa8 adds the kma instruction and its feature area.

Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
Submitted by Jason J. Herne

Provide a kma instruction definition for use by callers of __cpacf_query.

Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
Submitted by Jason J. Herne

The new KMA instruction requires unique parameters. Update __cpacf_query to generate a compatible assembler instruction.

Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Acked-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
Submitted by Harald Freudenberger

This patch introduces s390 specific arch random functionality. There exists a generic kernel API for arch specific random number implementations (see include/linux/random.h). This patch adds the header file and a very small static code part implementing the arch_random_* API, based on the TRNG subfunction coming with the reworked PRNG instruction.

The arch random implementation hooks into kernel initialization and checks for availability of the TRNG function. In accordance with the arch random API all functions return false if the TRNG is not available. Otherwise the new high quality entropy source provides fresh random data on each invocation.

The s390 arch random feature build is controlled via CONFIG_ARCH_RANDOM. This config option, located in arch/s390/Kconfig, is enabled by default and appears as the entry "s390 architectural random number generation API" in the submenu "Processor type and features" for s390 builds.

Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
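
A condensed sketch of how such an arch_random hook can look on s390 (simplified; the exact cpacf_trng() signature and the static key name are assumptions and should be checked against the real asm/archrandom.h and asm/cpacf.h):

```c
#include <linux/types.h>
#include <linux/jump_label.h>
#include <asm/cpacf.h>

/* Flipped to true during init if the TRNG subfunction is available. */
static DEFINE_STATIC_KEY_FALSE(example_trng_available);

static inline bool example_arch_get_random_long(unsigned long *v)
{
	if (!static_branch_likely(&example_trng_available))
		return false;	/* per the API: no entropy source, report failure */

	/* Fill the caller's word with fresh entropy from the TRNG. */
	cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
	return true;
}
```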
-
Submitted by Harald Freudenberger

There is a new TRNG extension in the subcodes for the cpacf PRNO function. This patch introduces new defines and a new cpacf_trng inline function to provide these new features to other kernel code parts.

Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Harald Freudenberger

The PPNO (Perform Pseudorandom Number Operation) instruction has been renamed to PRNO (Perform Random Number Operation). To avoid confusion and conflicts with future extensions of this instruction (like e.g. providing a true random number generator) this patch renames all occurrences in cpacf.h and adjusts the only exploiting code, which is the prng device driver and one line in the s390 kvm feature check.

Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Heiko Carstens

The kernel page table splitting code will split page tables even for features the CPU does not support. E.g. a CPU may not support the NX feature. In order to avoid this, remove those bits from the flags parameter that correlate with unsupported CPU features within __set_memory(). In addition, add an early exit if the flags parameter does not have any bits set afterwards.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
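
A sketch of the early-exit logic described above (the SET_MEMORY_* flag names and MACHINE_HAS_NX follow s390 conventions, but the surrounding function body is illustrative, not the actual __set_memory() change):

```c
#include <asm/setup.h>		/* MACHINE_HAS_NX */
#include <asm/set_memory.h>	/* SET_MEMORY_* flags */

static int set_memory_example(unsigned long addr, int numpages,
			      unsigned long flags)
{
	/* Drop requests the CPU cannot honour anyway. */
	if (!MACHINE_HAS_NX)
		flags &= ~(SET_MEMORY_NX | SET_MEMORY_X);

	/* Nothing left to change: avoid splitting large page table entries. */
	if (!flags)
		return 0;

	/* ... walk and split the kernel page tables for the remaining flags ... */
	return 0;
}
```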
-