- 13 Sep 2013, 3 commits
-
-
Committed by Johannes Weiner

Unlike global OOM handling, the memory cgroup code will invoke the OOM killer in any OOM situation because it has no way of telling faults occurring in kernel context - which could be handled more gracefully - from user-triggered faults. Pass a flag that identifies faults originating in user space from the architecture-specific fault handlers to generic code so that memcg OOM handling can be improved.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
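In sketch form, the per-architecture conversion looks like this (using the 3.x-era handle_mm_fault() signature; error handling is elided, so treat the handler shape as illustrative):

```c
/* Sketch of a converted arch fault handler: only faults that came from
 * user mode carry FAULT_FLAG_USER down into generic mm code.
 * Return-value handling is elided for brevity. */
static void do_page_fault(struct pt_regs *regs, unsigned long address)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma;
	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

	if (user_mode(regs))
		flags |= FAULT_FLAG_USER;	/* fault originated in user space */

	down_read(&mm->mmap_sem);
	vma = find_vma(mm, address);
	if (vma)
		handle_mm_fault(mm, vma, address, flags);
	up_read(&mm->mmap_sem);
}
```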
-
Committed by Johannes Weiner

Kernel faults are expected to handle OOM conditions gracefully (gup, uaccess etc.), so they should never invoke the OOM killer. Reserve this for faults triggered in user context when it is the only option.

Most architectures already do this, fix up the remaining few.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
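The pattern the remaining architectures were converted to looks roughly like this (a sketch; no_context() stands in for each arch's kernel-fault fixup/oops path):

```c
/* Sketch: the OOM killer is reserved for user-context faults; kernel
 * faults (gup, uaccess, ...) unwind gracefully via the fixup path. */
static void mm_fault_out_of_memory(struct pt_regs *regs, struct mm_struct *mm)
{
	up_read(&mm->mmap_sem);
	if (!user_mode(regs)) {
		no_context(regs);	/* hypothetical arch fixup path */
		return;
	}
	pagefault_out_of_memory();	/* last resort for user faults */
}
```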
-
Committed by Johannes Weiner

The memcg code can trap tasks in the context of the failing allocation until an OOM situation is resolved. They can hold all kinds of locks (fs, mm) at this point, which makes it prone to deadlocking. This series converts memcg OOM handling into a two step process that is started in the charge context, but any waiting is done after the fault stack is fully unwound.

Patches 1-4 prepare architecture handlers to support the new memcg requirements, but in doing so they also remove old cruft and unify out-of-memory behavior across architectures.

Patch 5 disables the memcg OOM handling for syscalls, readahead, and kernel faults, because they can gracefully unwind the stack with -ENOMEM. OOM handling is restricted to user-triggered faults that have no other option.

Patch 6 reworks memcg's hierarchical OOM locking to make it a little more obvious what is going on in there: reduce locked regions, rename locking functions, reorder and document.

Patch 7 implements the two-part OOM handling such that tasks are never trapped with the full charge stack in an OOM situation.

This patch:

Back before smart OOM killing, when faulting tasks were killed directly on allocation failures, the arch-specific fault handlers needed special protection for the init process. Now that all fault handlers call into the generic OOM killer (see commit 609838cf: "mm: invoke oom-killer from remaining unconverted page fault handlers"), which already provides init protection, the arch-specific leftovers can be removed.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arch/arc bits]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
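The kind of arch-specific leftover removed here looked roughly like this (illustrative; the exact shape varied per architecture):

```c
/* Old fault-handler leftover from the direct-kill days: spare init and
 * retry.  Redundant now that every handler calls the generic OOM
 * killer, which already protects init. */
if (is_global_init(current)) {
	yield();
	goto survive;	/* retry the fault instead of killing init */
}
```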
-
- 12 Sep 2013, 9 commits
-
-
Committed by Michael Holzheu

Modify the s390 copy_oldmem_page() and remap_oldmem_pfn_range() functions for zfcpdump to read from the HSA memory if memory below HSA_SIZE bytes is requested. Otherwise real memory is used.

Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Cc: Jan Willeke <willeke@de.ibm.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
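A hedged sketch of the resulting dispatch; memcpy_hsa() and memcpy_real_buf() are hypothetical helper names, and the caller is assumed to split reads at the HSA_SIZE boundary:

```c
/* Sketch: dump memory below HSA_SIZE is served from the HSA (where
 * zfcpdump saved it); everything above comes from real memory. */
static ssize_t copy_oldmem_chunk(void *buf, unsigned long src, size_t count)
{
	if (src < HSA_SIZE)
		return memcpy_hsa(buf, src, count);	/* hypothetical */
	return memcpy_real_buf(buf, src, count);	/* hypothetical */
}
```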
-
Committed by Jan Willeke

Introduce the s390 specific way to map pages from oldmem. The memory area below OLDMEM_SIZE is mapped with offset OLDMEM_BASE. The other old memory is mapped directly.

Signed-off-by: Jan Willeke <willeke@de.ibm.com>
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
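Roughly, built on remap_pfn_range() (a sketch, not the exact implementation):

```c
/* Sketch: in the new kernel's view, old memory below OLDMEM_SIZE has
 * been swapped up to OLDMEM_BASE, so shift those pfns; the rest of
 * oldmem maps 1:1. */
static int remap_oldmem_pfn(struct vm_area_struct *vma, unsigned long vaddr,
			    unsigned long pfn, unsigned long size)
{
	if (pfn < (OLDMEM_SIZE >> PAGE_SHIFT))
		pfn += OLDMEM_BASE >> PAGE_SHIFT;
	return remap_pfn_range(vma, vaddr, pfn, size, vma->vm_page_prot);
}
```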
-
Committed by Michael Holzheu

Replace the old relocate mechanism with the new arch function call override mechanism, which allows the ELF core header to be created in the 2nd kernel.

Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Cc: Jan Willeke <willeke@de.ibm.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Heiko Carstens

With the general-instruction extension facility (z10) a couple of instructions with a pc-relative long displacement were introduced. The kprobes support for these instructions however was never implemented. As a result, if anybody ever put a probe on any of these instructions, the result would have been random behaviour after the instruction got executed within the insn slot.

So let's add the missing handling for these instructions. Since all of the new instructions have a 32-bit signed displacement, the easiest solution is to allocate an insn slot that is within the same 2GB area as the original instruction and patch the displacement field.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
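The displacement fixup is plain arithmetic (a sketch; s390 encodes these displacements in halfwords, hence the division by 2):

```c
/* Sketch: keep the pc-relative target unchanged when the probed
 * instruction is copied into the insn slot:
 *
 *   insn + 2*disp == slot + 2*new_disp
 *   =>  new_disp = disp + (insn - slot) / 2
 *
 * The slot lives in the same 2GB area as the original instruction, so
 * new_disp still fits the signed 32-bit displacement field. */
static s32 fixup_rel_disp(unsigned long insn, unsigned long slot, s32 disp)
{
	return disp + (long)(insn - slot) / 2;
}
```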
-
Committed by Mathieu Desnoyers

I found the following pattern that leads to interesting findings:

  grep -r "ret.*|=.*__put_user" *
  grep -r "ret.*|=.*__get_user" *
  grep -r "ret.*|=.*__copy" *

The __put_user() calls in compat_ioctl.c, ptrace compat, and signal compat appear in compat code, so we could probably expect the kernel addresses not to be reachable in the lower 32-bit range, meaning they might not be exploitable.

For the "__get_user" cases, I don't think those are exploitable: the worst that can happen is that the kernel will copy kernel memory into in-kernel buffers, and will fail immediately afterward.

The alpha csum_partial_copy_from_user() seems to be missing the access_ok() check entirely. The fix is inspired by x86. This could lead to an information leak on alpha.

I also noticed that many architectures map csum_partial_copy_from_user() to csum_partial_copy_generic(), but I wonder if the latter is performing the access checks on every architecture.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
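A sketch of the shape of the fix, modeled on the x86 version; do_csum_partial_copy_from_user() is a hypothetical name for the pre-existing copy-and-checksum loop:

```c
/* Sketch: validate the user range up front so an unchecked kernel
 * address can never be checksum-copied (the information leak above).
 * access_ok() is used with its signature of that era. */
__wsum csum_partial_copy_from_user(const void __user *src, void *dst,
				   int len, __wsum sum, int *errp)
{
	if (!access_ok(VERIFY_READ, src, len)) {
		*errp = -EFAULT;
		return sum;	/* touch nothing, fail early */
	}
	return do_csum_partial_copy_from_user(src, dst, len, sum, errp);
}
```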
-
Committed by Cyrill Gorcunov

The _PAGE_SOFT_DIRTY bit should never be set on a present pte, so add a VM_BUG_ON to catch any potential future abuse. Also add a comment on the _PAGE_SWP_SOFT_DIRTY definition explaining the scope of its usage.

Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Acked-by: Pavel Emelyanov <xemul@parallels.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
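In sketch form (exactly which helper gained the assertion may differ; the point is the VM_BUG_ON):

```c
/* Sketch: the swap variant of the soft-dirty bit is only meaningful on
 * non-present (swap) ptes, so assert that before setting it. */
static inline pte_t pte_swp_mksoft_dirty(pte_t pte)
{
	VM_BUG_ON(pte_present(pte));	/* must never fire on a present pte */
	return pte_set_flags(pte, _PAGE_SWP_SOFT_DIRTY);
}
```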
-
Committed by Naoya Horiguchi

Currently hugepage migration works well only for pmd-based hugepages (mainly due to lack of testing), so we had better not enable migration of other levels of hugepages until we are ready for it.

Some users of hugepage migration (mbind, move_pages, and migrate_pages) do a page table walk and check pud/pmd_huge() there, so they are safe. But the other users (soft offline and memory hotremove) don't do this, so without this patch they can try to migrate unexpected types of hugepages.

To prevent this, we introduce hugepage_migration_support() as an architecture-dependent check of whether hugepages are implemented on a pmd basis or not. On some architectures multiple sizes of hugepages are available, so hugepage_migration_support() also checks the hugepage size.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Hillf Danton <dhillf@gmail.com>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Rik van Riel <riel@redhat.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
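As a sketch, for an architecture whose migratable hugepages are purely pmd-based the check reduces to a page-size comparison:

```c
/* Sketch: advertise migration support only for pmd-sized hugepages;
 * other sizes (e.g. pud-based) are refused until they are ready. */
int hugepage_migration_support(struct hstate *h)
{
	return huge_page_shift(h) == PMD_SHIFT;
}
```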
-
Committed by Dave Hansen

The previous patch doing vmstats for TLB flushes ("mm: vmstats: tlb flush counters") effectively missed UP since arch/x86/mm/tlb.c is only compiled for SMP. UP systems do not do remote TLB flushes, so compile those counters out on UP.

arch/x86/kernel/cpu/mtrr/generic.c calls __flush_tlb() directly. This is probably an optimization since both the mtrr code and __flush_tlb() write cr4. It would probably be safe to make that a flush_tlb_all() (and then get these statistics), but the mtrr code is ancient and I'm hesitant to touch it other than to just stick in the counters.

[akpm@linux-foundation.org: tweak comments]
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Dave Hansen

I was investigating some TLB flush scaling issues and realized that we do not have any good methods for figuring out how many TLB flushes we are doing.

It would be nice to be able to do these in generic code, but the arch-independent calls don't explicitly specify whether we actually need to do remote flushes or not. In the end, we really need to know if we actually _did_ global vs. local invalidations, so that leaves us with few options other than to muck with the counters from arch-specific code.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
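A sketch of the arch-side instrumentation (counter names follow this series' vmstat convention; the surrounding functions are illustrative):

```c
/* Sketch: bump a vmstat counter at each flush site in arch code so
 * local and remote invalidations can be told apart afterwards. */
static void do_flush_tlb_all_counted(void *info)
{
	count_vm_event(NR_TLB_REMOTE_FLUSH_RECEIVED);	/* we got the IPI */
	__flush_tlb_all();
}

static void flush_tlb_all_counted(void)
{
	count_vm_event(NR_TLB_REMOTE_FLUSH);	/* we initiated remote flushes */
	on_each_cpu(do_flush_tlb_all_counted, NULL, 1);
}
```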
-
- 11 Sep 2013, 5 commits
-
-
Committed by Vaidyanathan Srinivasan

When adding cpuidle support to pSeries, we introduced two regressions:

 - The new cpuidle backend driver only works under hypervisors supporting the "SPLPAR" option, which isn't the case of the old POWER4 hypervisor and the HV "light" used on js2x blades

 - The cpuidle driver registers fairly late, meaning that for a significant portion of the boot process, we end up having all threads spinning. This slows down the boot process and increases the overall resource usage if the hypervisor has shared processors.

This fixes both by implementing a "default" idle that will cede to the hypervisor when possible, in a very simple way without all the bells and whistles of cpuidle.

Reported-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Acked-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: <stable@vger.kernel.org>
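A simplified sketch of such a default idle on a shared-processor LPAR (the real patch wires this into the platform idle hook; treat the body as illustrative):

```c
/* Sketch: instead of spinning, mark the vcpu idle and cede it to the
 * hypervisor (H_CEDE) until cpuidle takes over later in boot. */
static void pseries_lpar_idle(void)
{
	get_lppaca()->idle = 1;	/* tell the hypervisor we are idle */
	cede_processor();	/* give the processor back until woken */
	get_lppaca()->idle = 0;
}
```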
-
Committed by Vladimir Murzin

While cross-building for PPC64 I've got:

  WARNING: vmlinux.o(.text.unlikely+0x1ba): Section mismatch in reference from the function .prom_rtas_call() to the variable .init.data:dt_string_start
  The function .prom_rtas_call() references the variable __initdata dt_string_start. This is often because .prom_rtas_call lacks a __initdata annotation or the annotation of dt_string_start is wrong.

  WARNING: vmlinux.o(.meminit.text+0xeb0): Section mismatch in reference from the function .free_area_init_core.isra.47() to the function .init.text:.set_pageblock_order()
  The function __meminit .free_area_init_core.isra.47() references a function __init .set_pageblock_order(). If .set_pageblock_order is only used by .free_area_init_core.isra.47 then annotate .set_pageblock_order with a matching annotation.

Fix this by properly annotating prom_rtas_call().

Signed-off-by: Vladimir Murzin <murzin.v@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
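The fix itself is just the annotation (signature simplified; the parameters shown are illustrative):

```c
/* Sketch: with __init, the reference to the __initdata dt_string_start
 * no longer crosses sections, silencing the mismatch warning. */
static int __init prom_rtas_call(int token, int nargs, int nret,
				 int *outputs, ...);
```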
-
Committed by Aneesh Kumar K.V

stack_grow_into/14082 is trying to acquire lock:
 (&mm->mmap_sem){++++++}, at: [<c000000000206d28>] .might_fault+0x78/0xe0

but task is already holding lock:
 (&mm->mmap_sem){++++++}, at: [<c0000000007ffd8c>] .do_page_fault+0x24c/0x910

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&mm->mmap_sem);
  lock(&mm->mmap_sem);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

1 lock held by stack_grow_into/14082:
 #0: (&mm->mmap_sem){++++++}, at: [<c0000000007ffd8c>] .do_page_fault+0x24c/0x910

stack backtrace:
CPU: 21 PID: 14082 Comm: stack_grow_into Not tainted 3.10.0-10.el7.ppc64.debug #1
Call Trace:
[c0000003d396b850] [c000000000016e7c] .show_stack+0x7c/0x1f0 (unreliable)
[c0000003d396b920] [c000000000813fc8] .dump_stack+0x28/0x3c
[c0000003d396b990] [c000000000124b90] .__lock_acquire+0x1640/0x1800
[c0000003d396bab0] [c00000000012570c] .lock_acquire+0xac/0x250
[c0000003d396bb80] [c000000000206d54] .might_fault+0xa4/0xe0
[c0000003d396bbf0] [c0000000007ffe2c] .do_page_fault+0x2ec/0x910
[c0000003d396be30] [c0000000000092e8] handle_page_fault+0x10/0x30

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Guenter Roeck

powerpc allmodconfig build fails with:

  ERROR: ".cpu_to_chip_id" [drivers/block/mtip32xx/mtip32xx.ko] undefined!

The problem was introduced with commit 15863ff3 (powerpc: Make chip-id information available to userspace). Export the missing symbol.

Cc: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Cc: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
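The fix is a one-liner next to the function's definition (sketch):

```c
/* Export so modules such as mtip32xx can link against it. */
EXPORT_SYMBOL(cpu_to_chip_id);
```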
-
Committed by Akira Takeuchi

The mn10300 kernel crashes just after starting userspace programs if CONFIG_PREEMPT is disabled:

  Freeing unused kernel memory: 96K (90286000 - 9029e000)
  MISALIGN: 97c33ff9: unsupported instruction f
  MISALIGN: 97c33ff9: unsupported instruction f
  MISALIGN: 97c33ff9: unsupported instruction f
  :

This fixes the problem that was introduced by commit d17fc238 ("MN10300: Enable IRQs more in system call exit work path").

Signed-off-by: Akira Takeuchi <takeuchi.akr@jp.panasonic.com>
Signed-off-by: Kiyoshi Owada <owada.kiyoshi@jp.panasonic.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 10 Sep 2013, 11 commits
-
-
Committed by Paul Bolle

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Committed by Jesper Nilsson

Copied from frv.

Reviewed-by: David Howells <dhowells@redhat.com>
Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Committed by Jesper Nilsson

CC: Mikael Starvik <starvik@axis.com>
CC: linux-cris-kernel@axis.com
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Committed by Paul Bolle

These legacy drivers were removed in commit 9c75fc8c ("CRIS: Remove legacy RTC drivers"). Now remove their last traces in two Kconfig files and one Makefile.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Jesper Nilsson <jespern@axis.com>
-
Committed by Paul Bolle

The Kconfig symbol OOM_REBOOT got added in v2.6.25. It has never been used. Its entry can safely be removed.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
Committed by Konrad Rzeszutek Wilk

This avoids compile warnings about .init.data being used by non-init functions.

Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Committed by Konrad Rzeszutek Wilk

This reverts commit 70dd4998. Now that the bugs have been resolved, we can re-enable the PV ticketlock implementation under PVHVM Xen guests.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
-
Committed by Konrad Rzeszutek Wilk

There is no need to set up this kicker IPI if we are never going to use the paravirtualized ticketlock mechanism.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
-
Committed by Konrad Rzeszutek Wilk

Before this patch we would patch all of the pv_lock_ops sites using the alternative assembler, and only later in the bootup cycle change unlock_kick and lock_spinning to the Xen-specific versions - without re-patching. That meant that for the core of the kernel we would be running with the baremetal version of unlock_kick and lock_spinning, while modules would have the proper Xen-specific slowpaths.

As most modules use some API from the core kernel, that ended up with slowpath lockers waiting forever to be kicked (b/c they would be using the Xen-specific slowpath logic), and the kick never came b/c the unlock path that was taken was the baremetal one. On PV we do not have this problem, as we initialise before the alternative code kicks in.

The fix is to make the updating of the pv_lock_ops functions happen before the alternative code starts patching.

Note that this patch fixes issues discovered by commit f10cd522 ("xen: disable PV spinlocks on HVM"), wherein it was mentioned that PV spinlocks cannot possibly work with the current code because they are enabled after pvops patching has already been done, and because PV spinlocks use a different data structure than native spinlocks, so we cannot switch between them dynamically. The first problem is solved by this patch. The second problem has been solved by commit 816434ec (Merge branch 'x86-spinlocks-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip).

P.S. There is still commit 70dd4998 (xen/spinlock: Disable IRQ spinlock (PV) allocation on PVHVM) to revert, but that can be done later after all other bugs have been fixed.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
-
Committed by Konrad Rzeszutek Wilk

We are now using the generic ticketlock structs, so these old structures are not needed anymore.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
-
Committed by Konrad Rzeszutek Wilk

xen_lock_spinning() has a check for the kicker interrupts: if they are not initialized it will spin normally (not enter the slowpath). But in the PVHVM case we would initialize the kicker interrupt before the CPU came online. This meant that if the booting CPU used a spinlock and went into the slowpath, it would block forever. It blocks forever because during bootup the spinlock is taken _before_ the CPU sets itself to be online (more on this below), and we then poll on the event channel indefinitely.

The bootup CPU (see commit fc78d343 "xen/smp: initialize IPI vectors before marking CPU online" for details) and the CPU that started the bootup consult the cpu_online_mask to determine whether the booting CPU should get an IPI. The booting CPU has to set itself in this mask via:

  set_cpu_online(smp_processor_id(), true);

However, the spinlock is taken before this point, and once it polls on an event channel it will never be woken up, as the kernel will never send an IPI to an offline CPU.

Note that the PVHVM logic for sending IPIs uses the HVM path, which has numerous checks using the cpu_online_mask and cpu_active_mask. See the above-mentioned git commit for details.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
-
- 09 Sep 2013, 4 commits
-
-
Committed by Julien Grall

When Linux is running as dom0, Xen doesn't show the physical CPU but a virtual CPU. On some ARM SoCs (for instance the Exynos 5250), Linux registers callbacks for cpuidle and cpufreq. When these callbacks are called, they will directly modify the physical CPU, not the virtual one. This can impact the whole board instead of only dom0.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Committed by Boris Ostrovsky

m2p_remove_override() calls get_balloon_scratch_page() in MULTI_update_va_mapping() even though it already has a pointer to this page from the earlier call (in scratch_page). This second call doesn't have a matching put_balloon_scratch_page(), so the preempt count is never restored. (Also, there is no put_balloon_scratch_page() in the error path.)

In addition, the second multicall uses __xen_mc_entry(), which does not disable interrupts. Rearrange the xen_mc_* calls to keep interrupts off while performing multicalls.

This commit fixes a regression introduced by:

    commit ee072640
    Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Date:   Tue Jul 23 17:23:54 2013 +0000

        xen/m2p: use GNTTABOP_unmap_and_replace to reinstate the original mapping

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
Committed by Rob Herring

xen_pm_init was unconditionally setting the pm_power_off and arm_pm_restart function pointers. This breaks multi-platform kernels. Make this conditional on running as a Xen guest, and make it a late_initcall to ensure it is set up after platform code for Dom0.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: stable@vger.kernel.org
-
Committed by Heiko Carstens

Change the hash algorithm a bit so it produces only values in the range of 0..31. This makes it possible to reduce the size of the external interrupt handler hash array even further, while making sure that each of the known interrupt sources keeps its unique hash with the slightly modified algorithm:

  0x1004 --> 12
  0x1201 --> 10
  0x1202 --> 11
  0x1406 --> 16
  0x1407 --> 17
  0x2401 --> 19
  0x2603 --> 22
  0x4000 --> 0

This also means that the entire array now fits into exactly one cache line, so add a proper align statement as well.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
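The listed mapping is reproduced by a simple hash - a reconstruction from the table above, not necessarily the exact kernel expression. A small standalone check:

```c
#include <stdio.h>

/* (code + (code >> 9)) & 0x1f yields exactly the table from the commit
 * message, e.g. 0x1004 -> 12 and 0x4000 -> 0, all within 0..31. */
static unsigned int ext_hash(unsigned short code)
{
	return (code + (code >> 9)) & 0x1f;
}

int main(void)
{
	const unsigned short codes[] = { 0x1004, 0x1201, 0x1202, 0x1406,
					 0x1407, 0x2401, 0x2603, 0x4000 };
	for (unsigned int i = 0; i < sizeof(codes) / sizeof(codes[0]); i++)
		printf("0x%04x --> %u\n", codes[i], ext_hash(codes[i]));
	return 0;
}
```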
-
- 07 Sep 2013, 8 commits
-
-
Committed by Heiko Carstens

86a264ab ("CRED: Wrap current->cred and a few other accessors") converted all uses of current->cred into current_cred(), but left s390 alone. So let's finally convert s390 as well, only five years later. This way we also get rid of a sparse warning which complains about a possible invalid RCU dereference - a false positive.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
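The conversion pattern, as a before/after sketch (the uid-printing helpers are illustrative):

```c
/* Before: raw dereference -- sparse flags a possible invalid RCU
 * dereference (a false positive, but noisy). */
static void show_uid_old(void)
{
	const struct cred *cred = current->cred;

	pr_info("uid=%u\n", from_kuid(&init_user_ns, cred->uid));
}

/* After: the accessor documents the access and keeps sparse quiet. */
static void show_uid_new(void)
{
	const struct cred *cred = current_cred();

	pr_info("uid=%u\n", from_kuid(&init_user_ns, cred->uid));
}
```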
-
Committed by Heiko Carstens

Pointer arithmetic on function pointers is not really defined, but seems to do the right thing. Let's cast to a void pointer to have defined behaviour, at least when using gcc.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
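Sketched as a tiny helper:

```c
/* Sketch: gcc defines arithmetic on void * (byte granularity), so cast
 * the function pointer before adding the offset. */
static void *fn_plus_offset(void (*fn)(void), unsigned long offset)
{
	return (void *)fn + offset;
}
```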
-
Committed by Heiko Carstens

Make various functions static, add declarations to header files to fix a couple of sparse findings.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Heiko Carstens

Add __force annotations to get rid of a couple of sparse warnings:

  arch/s390/kernel/compat_signal.c:335:35: warning: cast removes address space of expression

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Heiko Carstens

Let sparse not incorrectly complain about unbalanced locking.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Heiko Carstens

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Richard Weinberger

These handlers are not optional and in our case need dummy implementations to avoid NULL pointer bugs within the irq core code.

Reported-and-tested-by: Toralf Foester <toralf.foerster@gmx.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
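In sketch form (which chip and which hooks get the dummies are illustrative):

```c
/* Sketch: empty handlers so the generic irq core never calls through a
 * NULL pointer on ack/mask/unmask. */
static void dummy_irq_handler(struct irq_data *d)
{
}

static struct irq_chip example_irq_chip = {
	.name       = "example",	/* hypothetical chip */
	.irq_ack    = dummy_irq_handler,
	.irq_mask   = dummy_irq_handler,
	.irq_unmask = dummy_irq_handler,
};
```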
-
Committed by Richard Weinberger

On recent toolchains we hit:

  In file included from arch/x86/um/os-Linux/prctl.c:7:0:
  /usr/include/linux/ptrace.h:58:8: error: redefinition of ‘struct ptrace_peeksiginfo_args’
   struct ptrace_peeksiginfo_args {
          ^
  In file included from arch/x86/um/os-Linux/prctl.c:6:0:
  /usr/include/sys/ptrace.h:191:8: note: originally defined here
   struct ptrace_peeksiginfo_args
          ^
  make[2]: *** [arch/x86/um/os-Linux/prctl.o] Error 1
  make[1]: *** [arch/x86/um/os-Linux] Error 2
  make: *** [arch/x86/um] Error 2

The solution is to not include linux/ptrace.h and to obtain the arch-specific ptrace commands from asm/ptrace.h instead.

Reported-and-tested-by: David Oberhollenzer <david.oberhollenzer@tele2.at>
Signed-off-by: Richard Weinberger <richard@nod.at>
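The include change, sketched:

```c
/* Sketch: sys/ptrace.h (libc) and linux/ptrace.h (uapi) both define
 * struct ptrace_peeksiginfo_args on recent toolchains; taking the
 * arch-specific commands from asm/ptrace.h avoids the clash. */
#include <sys/ptrace.h>
#include <asm/ptrace.h>		/* instead of <linux/ptrace.h> */
```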
-