- 17 April 2013, 17 commits
-
-
By Sebastian Ott
Use debugfs to keep track of a PCI function's status changes.
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Sebastian Ott
The hash used for mapping irq numbers to msi descriptors does not utilize all buckets that were allocated. Fix this by using the same value (derived from the number of bits used for the hash function) at the relevant places.
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
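The underlying pattern is to derive both the bucket count and the hash index from one bit-width constant; a minimal sketch of that idea (the constant and helper names are illustrative, not the actual s390 identifiers):

  #include <linux/hash.h>
  #include <linux/list.h>

  #define MSI_HASH_BITS    4                        /* illustrative width */
  #define MSI_HASH_BUCKETS (1U << MSI_HASH_BITS)    /* bucket count derived from the same constant */

  static struct hlist_head msi_hash[MSI_HASH_BUCKETS];

  /* Index with the same bit width that sized the table, so every
   * allocated bucket can actually be reached by the hash function. */
  static inline struct hlist_head *msi_bucket(unsigned int irq)
  {
          return &msi_hash[hash_32(irq, MSI_HASH_BITS)];
  }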
-
By Michael Holzheu
This patch adds the last breaking event address as a parameter for 31 bit compat program signal handlers, as is already done for 64 bit programs.
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Sebastian Ott
force_console is used to wake up the CCW based console device to print a panic message in case something goes wrong in a suspend or resume cycle. Stop using the static console_subchannel and add a parameter to this function to specify which ccw device we have to wake up.
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Sebastian Ott
wait_cons_dev is used to busy wait for an interrupt on the console ccw device. Stop using the static console_subchannel and add a parameter to this function to specify on which ccw device/subchannel we have to do the polling. While at it, rename the function to ccw_device_wait_idle and move it to device.c.
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Michael Holzheu
Since commit 5f954c34 ([S390] hibernation: fix lowcore handling) the absolute zero lowcore is lost during suspend/resume. For example, this leads to the problem that the re-IPL device for kdump is no longer set after resume. With this patch, during suspend a buffer is allocated in the new PM notifier "suspend_pm_cb" and the absolute zero lowcore is saved to that buffer. The resume code then copies this buffer back to absolute zero, and afterwards the PM notifier releases the memory.
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Akinobu Mita
Remove unused __BITOPS_ALIGN, and replace __BITOPS_WORDSIZE with BITS_PER_LONG.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Use sske with multiple block control to initialize storage keys within a 1 MB frame at once. It turned out that sske with mb=1 is an order of magnitude faster than pfmf. This is only an issue for very large systems (several 100GB), where storage key initialization could otherwise last more than a minute.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
dumpstack() did not always print a sane call chain. The reason is that show_trace() accessed register 15 directly to get the current stack pointer and passed that pointer to __show_trace(), which expects a valid stack frame pointer as argument. However, due to tail call optimization the stack frame may not exist anymore when __show_trace() gets called, and therefore an invalid stack frame pointer gets passed. To prevent that, disable tail call optimization for the call chain walking functions: move all the show_* functions to a dumpstack.c file, like other architectures already have, and add a -fno-optimize-sibling-calls compile flag to both dumpstack.c and stacktrace.c. Fixes call chains that looked e.g. like this:

  [ 12.868258] Call Trace:
  [ 12.868262] ([<0000000000008000>] 0x8000)

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Alexandru Gheorghiu
Use the PTR_RET() function instead of IS_ERR() and PTR_ERR(). Patch found using Coccinelle.
Signed-off-by: Alexandru Gheorghiu <gheorghiuandru@gmail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
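PTR_RET() collapses the common "return the error code, or zero on success" tail into one call. A minimal sketch; example_create() and struct example are made up and stand for any helper returning either a valid pointer or an ERR_PTR()-encoded error:

  #include <linux/err.h>

  struct example;
  struct example *example_create(void);   /* hypothetical: returns a pointer or an ERR_PTR() */

  int example_init(void)
  {
          struct example *ex = example_create();

          /* Before:
           *      if (IS_ERR(ex))
           *              return PTR_ERR(ex);
           *      return 0;
           */
          return PTR_RET(ex);     /* 0 on success, the negative errno otherwise */
  }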
-
By Alexandru Gheorghiu
Rewrote the conditional statement and eliminated the out_kthread label.
Signed-off-by: Alexandru Gheorghiu <gheorghiuandru@gmail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Stelian Nirlu
Signed-off-by: Stelian Nirlu <steliannirlu@gmail.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Stefan Raspl
Pass the buffer length in an extra parameter.
Signed-off-by: Stefan Raspl <raspl@linux.vnet.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Wei Yongjun
Use kmem_cache_zalloc() instead of kmem_cache_alloc() followed by memset().
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
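The transformation is mechanical; a small sketch of the pattern with a made-up cache and structure:

  #include <linux/slab.h>

  struct item {
          int id;
          char name[16];
  };

  static struct kmem_cache *item_cache;   /* assumed to be set up with kmem_cache_create() */

  static struct item *item_alloc(void)
  {
          /* Before:
           *      struct item *it = kmem_cache_alloc(item_cache, GFP_KERNEL);
           *      if (it)
           *              memset(it, 0, sizeof(*it));
           *      return it;
           * After: allocate and zero in one call.
           */
          return kmem_cache_zalloc(item_cache, GFP_KERNEL);
  }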
-
By Martin Schwidefsky
To avoid cache synonyms on System zEC12, 32 independent zero pages are required, one for each combination of bits 2**12 to 2**16 of the virtual address. To avoid wasting too much memory on small virtual systems the number of zero pages is limited to 4 if the memory size is less than or equal to 64MB.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
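Conceptually, the zero page handed out for a read fault is then chosen so that its cache colour matches those bits of the faulting virtual address; a rough sketch of the selection (names and layout are illustrative, not the actual s390 code):

  #include <linux/mm.h>

  #define ZP_COLOUR_BITS  5                         /* address bits 2**12 .. 2**16 */
  #define ZP_COLOUR_COUNT (1UL << ZP_COLOUR_BITS)   /* 32 zero pages on zEC12 */
  #define ZP_COLOUR_MASK  (ZP_COLOUR_COUNT - 1)

  /* Start of the block of zeroed pages set up at boot (illustrative). */
  extern unsigned long zero_page_base;

  /* Pick the zero page whose colour matches the faulting user address,
   * so mapping it can never create a cache synonym. */
  static inline unsigned long colour_zero_page(unsigned long user_addr)
  {
          unsigned long colour = (user_addr >> PAGE_SHIFT) & ZP_COLOUR_MASK;

          return zero_page_base + (colour << PAGE_SHIFT);
  }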
-
By Martin Schwidefsky
Protection exceptions usually are suppressing, and the fault handler needs to rewind the PSW by the instruction length to get the correct fault address. The exception is a protection exception raised while the CPU is in the middle of a transaction: the CPU stores the transaction abort PSW at the start of the transaction, so if the transaction is aborted the PSW is already correct and may not be modified by the fault handler.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 15 April 2013, 1 commit
-
-
By Michael Holzheu
For s390 the page table mapping for the crashkernel memory is removed to protect the pre-loaded kdump kernel and ramdisk. Because the crashkernel memory is not included in the page tables for suspend/resume, it is not included in the suspend image. Therefore, after resume the resumed system no longer contains the pre-loaded kdump kernel, and when kdump is triggered it fails. This patch adds a PM notifier that creates the page tables before suspend is done and removes them again after resume. This ensures that the kdump kernel is included in the suspend image.
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 02 April 2013, 2 commits
-
-
By Heiko Carstens
All architectures need to provide a check_pgt_cache() function. The s390 one got lost somewhere. So reintroduce it to prevent future compile errors e.g. if Thomas Gleixner's idle loop rework patches get merged.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
When translating user space addresses to kernel addresses the follow_table() function had two bugs:

- PROT_NONE mappings could be read via the kernel mapping. That is, e.g. putting a filename into a user page, then protecting the page with PROT_NONE, and afterwards issuing the "open" syscall with a pointer to the filename would incorrectly succeed.
- When walking the page tables it used the pgd/pud/pmd/pte primitives, which with dynamic page tables give no indication which real level of page tables is being walked (region2, region3, segment or page table). So in case of an exception the translation exception code passed to __handle_fault() is not necessarily correct. This is not really an issue since __handle_fault() doesn't evaluate the code. Only in case of e.g. a SIGBUS does this code get passed to user space. Whether user space can do something sane with the value is a different question though.

To fix these issues don't use any Linux primitives. Only walk the page tables like the hardware would do it, however we leave out quite a few checks since we know that we only have full size page tables and each index is within bounds. In theory this should fix all issues...
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 21 March 2013, 1 commit
-
-
By Heiko Carstens
The page table walker variant of clear_user() is supposed to copy the contents of the empty zero page to user space. However, since 238ec4ef "[S390] zero page cache synonyms" empty_zero_page is no longer the page itself but contains a pointer to the empty zero pages. Therefore the page table walker variant of clear_user() copied the address of the first empty zero page, and afterwards more or less random data, to user space instead of clearing the given user space range.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 13 March 2013, 1 commit
-
-
By Stephen Rothwell
In commit 887cbce0 ("arch Kconfig: centralise ARCH_NO_VIRT_TO_BUS") I introduced the config symbol HAVE_VIRT_TO_BUS and selected that where needed. I am not sure what I was thinking. Instead, just directly select VIRT_TO_BUS where it is needed.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 11 March 2013, 1 commit
-
-
By Li Zefan
Commit 877c6856 ("perf: Remove include of cgroup.h from perf_event.h") caused this build failure if PERF_EVENTS is enabled:

  In file included from arch/s390/include/asm/perf_event.h:9:0,
                   from include/linux/perf_event.h:24,
                   from kernel/events/ring_buffer.c:12:
  arch/s390/include/asm/cpu_mf.h: In function 'qctri':
  arch/s390/include/asm/cpu_mf.h:61:12: error: 'EINVAL' undeclared (first use in this function)

cpu_mf.h had an implicit errno.h dependency, which used to be satisfied indirectly via cgroup.h but is not anymore. Add it explicitly.
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Tested-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Li Zefan <lizefan@huawei.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Link: http://lkml.kernel.org/r/51385F79.7000106@huawei.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
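The fix follows the usual rule for an implicit header dependency: include what you use in the header itself instead of relying on whatever happens to be included first. A sketch of the pattern (the guard name and the qctri() body are illustrative):

  /* arch/s390/include/asm/cpu_mf.h (sketch) */
  #ifndef _ASM_S390_CPU_MF_H
  #define _ASM_S390_CPU_MF_H

  #include <linux/errno.h>        /* EINVAL: no longer pulled in via cgroup.h */

  static inline int qctri(void *info)
  {
          int rc = -EINVAL;       /* needs errno.h in every inclusion order */

          /* ... issue the query-counter-information instruction, set rc ... */
          return rc;
  }

  #endif /* _ASM_S390_CPU_MF_H */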
-
- 07 March 2013, 3 commits
-
-
By Sebastian Ott
Let the bus code process scm availability information and notify scm device drivers about the new state.
Reviewed-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Sebastian Ott
Stop writing to scm after certain error conditions such as a concurrent firmware upgrade. Resume normal operation once scm_blk_set_available() is called (due to an scm availability notification).
Reviewed-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Sebastian Ott
Extend the notify callback of scm_driver by an event parameter to allow distinguishing between different notifications.
Reviewed-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
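The shape of such a change is a small event enum plus an extra argument threaded through the callback; a sketch (the enum and member names are illustrative, not necessarily the real scm_driver definitions):

  /* Illustrative sketch of widening a notify callback with an event code. */
  enum scm_event { SCM_CHANGE, SCM_AVAIL };

  struct scm_device;

  struct scm_driver {
          /* Before: void (*notify)(struct scm_device *scmdev); */
          void (*notify)(struct scm_device *scmdev, enum scm_event event);
  };

  /* The bus code can now tell the driver why it is being notified. */
  static void scm_notify_driver(struct scm_driver *scmdrv,
                                struct scm_device *scmdev, enum scm_event event)
  {
          if (scmdrv->notify)
                  scmdrv->notify(scmdev, event);
  }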
-
- 05 March 2013, 3 commits
-
-
By Heiko Carstens
Our flush_tlb_kernel_range() implementation calls __tlb_flush_mm() with &init_mm as argument. __tlb_flush_mm() however will only flush TLBs for the passed-in mm if its mm_cpumask is not empty. For the init_mm however its mm_cpumask never has any bits set. Which in turn means that our flush_tlb_kernel_range() implementation doesn't work at all. This can be easily verified with a vmalloc/vfree loop which allocates a page, writes to it and then frees the page again: a crash will follow almost instantly. To fix this remove the cpumask_empty() check in __tlb_flush_mm(), since there shouldn't be too many mms with a zero mm_cpumask, besides the init_mm of course.
Cc: stable@vger.kernel.org
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
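The reproducer described above is just a tight map/dirty/unmap loop; a hedged sketch of such a test (module boilerplate omitted):

  #include <linux/vmalloc.h>
  #include <linux/mm.h>
  #include <linux/string.h>

  /* Repeatedly map, dirty and unmap one page. With a non-working
   * flush_tlb_kernel_range() a stale TLB entry for a freed mapping
   * survives, and the loop crashes once the address is reused. */
  static void vmalloc_stress(unsigned int iterations)
  {
          unsigned int i;

          for (i = 0; i < iterations; i++) {
                  void *p = vmalloc(PAGE_SIZE);

                  if (!p)
                          break;
                  memset(p, 0x5a, PAGE_SIZE);
                  vfree(p);
          }
  }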
-
By Heiko Carstens
The size of the vmemmap must be a multiple of PAGES_PER_SECTION, since the common code always initializes the vmemmap in such pieces. So we must round up, so that the vmemmap does not end up too small. Fixes an IPL crash on 31 bit with more than 1920MB.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Martin Schwidefsky
The current machine check code uses the registers stored by the machine in the lowcore at __LC_GPREGS_SAVE_AREA as the registers of the interrupted context. The registers 0-7 of a user process can get clobbered if a machine check interrupts the execution of a critical section in entry[64].S. The reason is that the critical section cleanup code may need to modify the PSW and the registers for the previous context to get to the end of a critical section. If registers 0-7 have to be replaced, the relevant copy will be in the registers, which invalidates the copy in the lowcore. The machine check handler needs to explicitly store registers 0-7 to the stack.
Cc: stable@vger.kernel.org
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 04 March 2013, 1 commit
-
-
By Eric W. Biederman
Modify the request_module to prefix the file system type with "fs-" and add aliases to all of the filesystems that can be built as modules to match.

A common practice is to build all of the kernel code and leave code that is not commonly needed as modules, with the result that many users are exposed to any bug anywhere in the kernel. Looking for filesystems with a fs- prefix limits the pool of possible modules that can be loaded by mount to just filesystems, trivially making things safer with no real cost.

Using aliases means user space can control the policy of which filesystem modules are auto-loaded by editing /etc/modprobe.d/*.conf with blacklist and alias directives, allowing simple, safe, well understood work-arounds to known problematic software.

This also addresses a rare but unfortunate problem where the filesystem name is not the same as its module name and module auto-loading would not work. While writing this patch I saw a handful of such cases, the most significant being autofs, which lives in the module autofs4.

This is relevant to user namespaces because we can reach the request module in get_fs_type() without having any special permissions, and people get uncomfortable when a user specified string (in this case the filesystem type) goes all of the way to request_module.

After having looked at this issue I don't think there is any particular reason to perform any filtering or permission checks beyond making it clear in the module request that we want a filesystem module. The common pattern in the kernel is to call request_module() without regard to the user's permissions. In general all a filesystem module does once loaded is call register_filesystem() and go to sleep. Which means there is not much attack surface exposed by loading a filesystem module unless the filesystem is mounted. In a user namespace filesystems are not mounted unless .fs_flags = FS_USERNS_MOUNT, which most filesystems do not set today.
Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Acked-by: Kees Cook <keescook@chromium.org>
Reported-by: Kees Cook <keescook@google.com>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
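Both halves of the mechanism are small: the VFS requests a module with an explicit "fs-" prefix, and each modular filesystem declares a matching alias. A sketch of that shape; the load_fs_module() wrapper and the "examplefs" name are made up, and MODULE_ALIAS_FS() is assumed to expand to MODULE_ALIAS("fs-" ...):

  #include <linux/kmod.h>
  #include <linux/module.h>
  #include <linux/fs.h>

  /* VFS side: only modules advertising themselves as filesystems
   * ("fs-<type>") can be pulled in through a mount by type name. */
  static int load_fs_module(const char *name, int len)
  {
          return request_module("fs-%.*s", len, name);
  }

  /* Filesystem module side: the alias lets modprobe find the module even
   * when the module name differs from the filesystem name. */
  MODULE_ALIAS_FS("examplefs");   /* matches a request for "fs-examplefs" */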
-
- 28 February 2013, 10 commits
-
-
By Heiko Carstens
Get rid of this one (false positive):

  arch/s390/kernel/module.c: In function ‘apply_relocate_add’:
  arch/s390/kernel/module.c:404:5: warning: ‘rc’ may be used uninitialized in this function [-Wmaybe-uninitialized]
  arch/s390/kernel/module.c:225:6: note: ‘rc’ was declared here

Play safe and preinitialize rc with an error value, so we see an error if new users indeed don't initialize it.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
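The pattern is simply to give the variable a defined error value up front; a sketch (the -ENOEXEC fallback and the surrounding switch are illustrative of apply_relocate_add()-style code, not copied from it):

  #include <linux/errno.h>

  static int apply_one_rela(unsigned int r_type)
  {
          int rc = -ENOEXEC;      /* defined fallback: unhandled relocation types fail loudly */

          switch (r_type) {
          case 1:
                  rc = 0;         /* handled relocation */
                  break;
          /* A newly added case that forgets to set rc now returns an error
           * instead of whatever happened to be in the variable. */
          }
          return rc;
  }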
-
By Heiko Carstens
When the kernel resides in home space and the mvcos instruction is not available, uaccesses for kernel ds happen via simple strnlen() or memcpy() calls. This however can break badly, since uaccesses in kernel space may fail as well, especially if CONFIG_DEBUG_PAGEALLOC is turned on. To fix this, implement strnlen_kernel() and copy_in_kernel() functions which can only be used by the page table uaccess functions. These two functions detect invalid memory accesses and return the correct length of processed data. Both functions are more or less a copy of the standard variants without the sacf calls. Fixes IPL crashes on 31 bit machines as well as on 64 bit machines without mvcos. Caused by changing the default address space of the kernel to home space.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
The "standard" and page table walk variants of strncpy_from_user() first check the length of the to-be-copied string in user space. The string is then copied to kernel space and the length returned to the caller. However, user space can modify the string at any time while the kernel checks the length of the string or copies it. As a result the returned length of the string is not necessarily correct. Fix this by copying in a loop, which mimics the mvcos variant of strncpy_from_user() and handles this correctly.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
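The race disappears when the kernel derives the length from bytes it has already copied instead of from a separate earlier pass over user memory; a hedged sketch of that shape (the helper name is made up and the real implementations differ):

  #include <linux/kernel.h>
  #include <linux/string.h>
  #include <linux/uaccess.h>

  /* Copy at most 'count' bytes and derive the length from the copied data,
   * so a concurrent writer in user space cannot desynchronize the reported
   * length from the contents the caller actually received. */
  static long strncpy_from_user_sketch(char *dst, const char __user *src, long count)
  {
          long copied = 0;

          while (copied < count) {
                  long chunk = min_t(long, count - copied, 64L);
                  long i;

                  if (copy_from_user(dst + copied, src + copied, chunk))
                          return -EFAULT;
                  /* Scan only bytes already present in kernel memory. */
                  for (i = 0; i < chunk; i++)
                          if (dst[copied + i] == '\0')
                                  return copied + i;
                  copied += chunk;
          }
          return count;   /* no terminator within 'count' bytes */
  }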
-
By Syam Sidhardhan
We are using the sizeof operator on an array given as a function argument, which is incorrect: the parameter decays to a pointer, so sizeof yields the pointer size rather than the size of the array.
Signed-off-by: Syam Sidhardhan <s.syam@samsung.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
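The classic pitfall, shown as a standalone sketch:

  #include <stdio.h>
  #include <string.h>

  /* 'buf' is declared as an array, but as a parameter it decays to 'char *'. */
  static void fill(char buf[16])
  {
          /* Bug: sizeof(buf) == sizeof(char *), typically 8, not 16. */
          memset(buf, 0, sizeof(buf));
  }

  /* Fix: pass the real size explicitly (or take sizeof at the caller). */
  static void fill_fixed(char *buf, size_t len)
  {
          memset(buf, 0, len);
  }

  int main(void)
  {
          char name[16] = "uninitialized!!";

          fill(name);                     /* clears only the first 8 bytes */
          fill_fixed(name, sizeof(name)); /* clears all 16 bytes */
          printf("array: %zu bytes, decayed pointer: %zu bytes\n",
                 sizeof(name), sizeof((char *)name));
          return 0;
  }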
-
By Heiko Carstens
access_ok() always returns 'true' on s390. Therefore all calls are quite pointless and can be removed.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
If the maximum length specified for the string to be accessed by strncpy_from_user() and strnlen_user() is zero, the following incorrect values would be returned or incorrect memory accesses would happen:

- strnlen_user_std() and strnlen_user_pt() incorrectly return "1"
- strncpy_from_user_pt() would incorrectly access "dst[maxlen - 1]"
- strncpy_from_user_mvcos() would incorrectly return "-EFAULT"

Fix all these oddities by adding early checks.
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Always stay within page boundaries when copying from user space within strlen_user_mvcos()/strncpy_from_user_mvcos(). This allows shortening the code a bit and may prevent unnecessary faults, since we copy quite large amounts of memory to kernel space. Also directly call the mvcos variants of copy_from_user() to avoid indirect branches.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Add a hint to the page tables that we don't care about the change bit in the storage keys that belong to vmemmap pages.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Heiko Carstens
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Sasha Levin
I'm not sure why, but the hlist for-each-entry iterators were conceived differently from the list ones. While the list iterator is formed as

  list_for_each_entry(pos, head, member)

the hlist ones were greedy and wanted an extra parameter:

  hlist_for_each_entry(tpos, pos, head, member)

Why did they need an extra pos parameter? I'm not quite sure. Not only do they not really need it, it also prevents the iterator from looking exactly like the list iterator, which is unfortunate.

Besides the semantic patch, there was some manual work required:

- Fix up the actual hlist iterators in linux/list.h
- Fix up the declaration of other iterators based on the hlist ones.
- A very small amount of places were using the 'node' parameter; these were modified to use 'obj->member' instead.
- Coccinelle didn't handle the hlist_for_each_entry_safe iterator properly, so those had to be fixed up manually.

The semantic patch, which is mostly the work of Peter Senna Tschudin, is here:

  @@
  iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;
  type T;
  expression a,c,d,e;
  identifier b;
  statement S;
  @@
  -T b;
  <+... when != b
  (
  hlist_for_each_entry(a, - b, c, d) S
  |
  hlist_for_each_entry_continue(a, - b, c) S
  |
  hlist_for_each_entry_from(a, - b, c) S
  |
  hlist_for_each_entry_rcu(a, - b, c, d) S
  |
  hlist_for_each_entry_rcu_bh(a, - b, c, d) S
  |
  hlist_for_each_entry_continue_rcu_bh(a, - b, c) S
  |
  for_each_busy_worker(a, c, - b, d) S
  |
  ax25_uid_for_each(a, - b, c) S
  |
  ax25_for_each(a, - b, c) S
  |
  inet_bind_bucket_for_each(a, - b, c) S
  |
  sctp_for_each_hentry(a, - b, c) S
  |
  sk_for_each(a, - b, c) S
  |
  sk_for_each_rcu(a, - b, c) S
  |
  sk_for_each_from -(a, b) +(a) S
  + sk_for_each_from(a) S
  |
  sk_for_each_safe(a, - b, c, d) S
  |
  sk_for_each_bound(a, - b, c) S
  |
  hlist_for_each_entry_safe(a, - b, c, d, e) S
  |
  hlist_for_each_entry_continue_rcu(a, - b, c) S
  |
  nr_neigh_for_each(a, - b, c) S
  |
  nr_neigh_for_each_safe(a, - b, c, d) S
  |
  nr_node_for_each(a, - b, c) S
  |
  nr_node_for_each_safe(a, - b, c, d) S
  |
  - for_each_gfn_sp(a, c, d, b) S
  + for_each_gfn_sp(a, c, d) S
  |
  - for_each_gfn_indirect_valid_sp(a, c, d, b) S
  + for_each_gfn_indirect_valid_sp(a, c, d) S
  |
  for_each_host(a, - b, c) S
  |
  for_each_host_safe(a, - b, c, d) S
  |
  for_each_mesh_entry(a, - b, c, d) S
  )
  ...+>

[akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
[akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
[akpm@linux-foundation.org: checkpatch fixes]
[akpm@linux-foundation.org: fix warnings]
[akpm@linux-foudnation.org: redo intrusive kvm changes]
Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
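After the conversion the hlist iterator takes the same three arguments as its list counterpart; a minimal before/after sketch with a made-up bucket structure:

  #include <linux/list.h>
  #include <linux/printk.h>

  struct entry {
          int key;
          struct hlist_node node;
  };

  static void dump_bucket(struct hlist_head *head)
  {
          struct entry *e;

          /* Before this change an extra cursor was required:
           *      struct hlist_node *pos;
           *      hlist_for_each_entry(e, pos, head, node)
           *              pr_info("%d\n", e->key);
           */

          /* Now it mirrors list_for_each_entry(): */
          hlist_for_each_entry(e, head, node)
                  pr_info("%d\n", e->key);
  }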
-