- 14 April 2011 (1 commit)
Committed by Peter Zijlstra
For future rework of try_to_wake_up() we'd like to push part of that function onto the CPU the task is actually going to run on. In order to do so we need a generic callback from the existing scheduler IPI. This patch introduces such a generic callback, scheduler_ipi(), and implements it as a NOP. BenH notes: PowerPC might use this IPI on offline CPUs under rare conditions! Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Acked-by: Chris Metcalf <cmetcalf@tilera.com> Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Reviewed-by: Frank Rowand <frank.rowand@am.sony.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Nick Piggin <npiggin@kernel.dk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20110405152728.744338123@chello.nl
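A minimal sketch of the hook this introduces, based only on the description above; the arch-side handler name below is illustrative, not taken from the patch:

```c
/* Generic callback, initially a no-op; later reworks of try_to_wake_up()
 * can hook remote wakeup processing in here. */
static inline void scheduler_ipi(void)
{
}

/* Illustrative arch-side reschedule-IPI handler calling into it. */
void handle_reschedule_ipi(void)
{
	scheduler_ipi();
}
```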
-
- 31 March 2011 (1 commit)
Committed by Lucas De Marchi
Fixes generated by 'codespell' and manually reviewed. Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
-
- 30 March 2011 (1 commit)
Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 26 March 2011 (2 commits)
Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Chris Metcalf <cmetcalf@tilera.com> LKML-Reference: <20110325142049.536190130@linutronix.de>
-
Committed by Thomas Gleixner
Converted with coccinelle. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Chris Metcalf <cmetcalf@tilera.com> LKML-Reference: <20110325142049.441954268@linutronix.de>
-
- 25 March 2011 (1 commit)
Committed by David Rientjes
Commit ddd588b5 ("oom: suppress nodes that are not allowed from meminfo on oom kill") moved lib/show_mem.o out of lib/lib.a, which resulted in build warnings on all architectures that implement their own versions of show_mem():

lib/lib.a(show_mem.o): In function `show_mem':
show_mem.c:(.text+0x1f4): multiple definition of `show_mem'
arch/sparc/mm/built-in.o:(.text+0xd70): first defined here

The fix is to remove __show_mem() and add its argument to show_mem() in all implementations to prevent this breakage. Architectures that implement their own show_mem() don't actually do anything with the argument yet, but they could be made to filter nodes that aren't allowed in the current context in the future, just like the generic implementation. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Reported-by: James Bottomley <James.Bottomley@hansenpartnership.com> Suggested-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
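A sketch of the consolidated interface after this change; the body below is illustrative only (the real generic version prints considerably more detail):

```c
#include <linux/kernel.h>
#include <linux/mm.h>

/* show_mem() now carries the filter argument directly; the old
 * __show_mem() wrapper is gone.  Architecture overrides ignore 'filter'
 * for now, but could honour it (e.g. SHOW_MEM_FILTER_NODES) to skip
 * nodes that are not allowed in the current allocation context. */
void show_mem(unsigned int filter)
{
	printk("Mem-Info:\n");
	show_free_areas();	/* illustrative body only */
}
```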
-
- 24 March 2011 (3 commits)
Committed by Akinobu Mita
The minix bit operations are only used by the minix filesystem and are useless to other modules, and the byte order of the inode and block bitmaps differs across architectures:

m68k: big-endian 16-bit indexed bitmaps
h8300, microblaze, s390, sparc, m68knommu: big-endian 32- or 64-bit indexed bitmaps
m32r, mips, sh, xtensa: big-endian 32- or 64-bit indexed bitmaps in big-endian mode, little-endian bitmaps in little-endian mode
others: little-endian bitmaps

In order to move the minix bit operations from asm/bitops.h to architecture-independent code in the minix filesystem, this provides two config options. CONFIG_MINIX_FS_BIG_ENDIAN_16BIT_INDEXED is selected only by m68k. CONFIG_MINIX_FS_NATIVE_ENDIAN is selected by the architectures which use native byte-order bitmaps (h8300, microblaze, s390, sparc, m68knommu, m32r, mips, sh, xtensa). The architectures which always use little-endian bitmaps do not select these options. Finally, we can remove the minix bit operations from asm/bitops.h for all architectures. Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Greg Ungerer <gerg@uclinux.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Roman Zippel <zippel@linux-m68k.org> Cc: Andreas Schwab <schwab@linux-m68k.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Michal Simek <monstr@monstr.eu> Cc: "David S. Miller" <davem@davemloft.net> Cc: Hirokazu Takata <takata@linux-m32r.org> Acked-by: Ralf Baechle <ralf@linux-mips.org> Acked-by: Paul Mundt <lethal@linux-sh.org> Cc: Chris Zankel <chris@zankel.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
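Roughly how the fs-local header can then select its bitmap accessors based on the new options, instead of every asm/bitops.h carrying minix helpers; the macro bodies here are simplified illustrations, not the exact fs/minix code:

```c
#include <linux/bitops.h>

#if defined(CONFIG_MINIX_FS_BIG_ENDIAN_16BIT_INDEXED)
/* m68k: bitmaps indexed as big-endian 16-bit units */
#define minix_set_bit(nr, addr)	\
	__set_bit((nr) ^ 16, (unsigned long *)(addr))
#elif defined(CONFIG_MINIX_FS_NATIVE_ENDIAN)
/* h8300, microblaze, s390, sparc, m68knommu, m32r, mips, sh, xtensa */
#define minix_set_bit(nr, addr)	\
	__set_bit((nr), (unsigned long *)(addr))
#else
/* remaining architectures: little-endian bitmaps */
#define minix_set_bit(nr, addr)	\
	__set_bit_le((nr), (unsigned long *)(addr))
#endif
```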
-
Committed by Akinobu Mita
As a result of the conversions, there are no users of the ext2 non-atomic bit operations except for the ext2 filesystem itself. Now we can put them into architecture-independent code in the ext2 filesystem, and remove them from asm/bitops.h for all architectures. Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: Jan Kara <jack@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Akinobu Mita
Introduce little-endian bit operations to the big-endian architectures which do not have native little-endian bit operations, and to the little-endian architectures (alpha, avr32, blackfin, cris, frv, h8300, ia64, m32r, mips, mn10300, parisc, sh, sparc, tile, x86, xtensa). These architectures can just include the generic implementation (asm-generic/bitops/le.h). Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: Richard Henderson <rth@twiddle.net> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Mikael Starvik <starvik@axis.com> Cc: David Howells <dhowells@redhat.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Kyle McMartin <kyle@mcmartin.ca> Cc: Matthew Wilcox <willy@debian.org> Cc: Grant Grundler <grundler@parisc-linux.org> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp> Cc: Hirokazu Takata <takata@linux-m32r.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Chris Zankel <chris@zankel.net> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Acked-by: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com> Acked-by: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
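For the architectures listed, the net effect on asm/bitops.h is essentially a one-line inclusion of the generic implementation, as the message says (sketch):

```c
/* asm/bitops.h (sketch): pick up the generic little-endian bit
 * operations instead of open-coding them per architecture. */
#include <asm-generic/bitops/le.h>
```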
-
- 23 March 2011 (1 commit)
Committed by Eric Dumazet
Add a node parameter to alloc_thread_info(), and change its name to alloc_thread_info_node(). This change is needed to allow a NUMA-aware kthread_create_on_cpu(). Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: David S. Miller <davem@davemloft.net> Reviewed-by: Andi Kleen <ak@linux.intel.com> Acked-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Tejun Heo <tj@kernel.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: David Howells <dhowells@redhat.com> Cc: <linux-arch@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
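A sketch of the renamed hook with its new node argument, assuming the common generic fallback shape; the GFP flags and order macro are illustrative and vary by configuration:

```c
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/sched.h>

static struct thread_info *alloc_thread_info_node(struct task_struct *tsk,
						  int node)
{
	/* Allocate the thread_info/stack pages on the requested NUMA node,
	 * so kthread_create_on_cpu() can keep them local to that CPU. */
	struct page *page = alloc_pages_node(node, GFP_KERNEL,
					     THREAD_SIZE_ORDER);

	return page ? page_address(page) : NULL;
}
```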
-
- 20 March 2011 (1 commit)
Committed by Chris Metcalf
Commit 8d7718aa changed "int" to "u32" in the prototypes but not the definition. I missed this when I saw the patch go by on LKML. We cast "u32 *" to "int *" since we are tying into the underlying atomics framework, and atomic_t uses int as its value type. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com> Reviewed-by: Michel Lespinasse <walken@google.com>
-
- 18 March 2011 (1 commit)
Committed by Chris Metcalf
This change supports building the kernel with newer binutils, where a shift of greater than the word size is no longer interpreted silently as modulo the word size but instead generates a warning. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
- 11 March 2011 (7 commits)
Committed by Michel Lespinasse
Change the futex_atomic_op_inuser and futex_atomic_cmpxchg_inatomic prototypes to use u32 types for the futex, as this is the data type the futex core code uses all over the place. Signed-off-by: Michel Lespinasse <walken@google.com> Cc: Darren Hart <darren@dvhart.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Matt Turner <mattst88@gmail.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: David Howells <dhowells@redhat.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Michal Simek <monstr@monstr.eu> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> LKML-Reference: <20110311025058.GD26122@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
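Roughly the sanitized shapes after this series; the exact per-arch declarations may differ in qualifiers, so treat these as an illustration:

```c
#include <linux/compiler.h>
#include <linux/types.h>

/* The futex word is consistently a u32 now, matching kernel/futex.c. */
int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr);
int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
				  u32 oldval, u32 newval);
```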
-
Committed by Michel Lespinasse
The cmpxchg_futex_value_locked API was funny in that it returned either the original, user-exposed futex value OR an error code such as -EFAULT. This was confusing at best, and could be a source of livelocks in places that retry the cmpxchg_futex_value_locked after trying to fix the issue by running fault_in_user_writeable(). This change makes the cmpxchg_futex_value_locked API more similar to the get_futex_value_locked one, returning an error code and updating the original value through a reference argument. Signed-off-by: Michel Lespinasse <walken@google.com> Acked-by: Chris Metcalf <cmetcalf@tilera.com> [tile] Acked-by: Tony Luck <tony.luck@intel.com> [ia64] Acked-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Michal Simek <monstr@monstr.eu> [microblaze] Acked-by: David Howells <dhowells@redhat.com> [frv] Cc: Darren Hart <darren@dvhart.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Matt Turner <mattst88@gmail.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> LKML-Reference: <20110311024851.GC26122@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
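An illustrative caller under the new convention, written as it might appear inside kernel/futex.c; the helper name is the one described above, while the surrounding function is hypothetical:

```c
/* Sketch of a call site under the new convention; example_futex_update()
 * itself is hypothetical. */
static int example_futex_update(u32 __user *uaddr, u32 expected, u32 newval)
{
	u32 curval;
	int ret;

	/* Returns 0 or an error such as -EFAULT; the value actually seen
	 * in the futex word comes back through &curval. */
	ret = cmpxchg_futex_value_locked(&curval, uaddr, expected, newval);
	if (ret)
		return ret;
	if (curval != expected)
		return -EAGAIN;	/* another task changed the futex word first */
	return 0;
}
```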
-
Committed by Chris Metcalf
The first issue fixed in this patch is that pending rwlock write locks could lock out new readers; this could cause a deadlock if a read lock was held on cpu 1, a write lock was then attempted on cpu 2 and was pending, and cpu 1 was interrupted and attempted to re-acquire a read lock. The write lock code was modified to not lock out new readers. The second issue fixed is that there was a narrow race window where a tns instruction had been issued (setting the lock value to "1") and the store instruction to reset the lock value correctly had not yet been issued. In this case, if an interrupt occurred and the same cpu then tried to manipulate the lock, it would find the lock value set to "1" and spin forever, assuming some other cpu was partway through updating it. The fix is to enforce an interrupt critical section around the tns/store pair. In addition, this change now arranges to always validate that after a readlock we have not wrapped around the count of readers, which is only eight bits. Since these changes make the rwlock "fast path" code heavier weight, I decided to move all the rwlock code out of line, leaving only the conventional spinlock code with fastpath inlines. Since the read_lock and read_trylock implementations ended up very similar, I just expressed read_lock in terms of read_trylock. As part of this change I also eliminate support for the now-obsolete tns_atomic mode. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
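The "read_lock in terms of read_trylock" structure mentioned above, in sketch form; the real out-of-line code also manages the interrupt-critical-section window around the tns/store pair:

```c
#include <linux/spinlock.h>

void arch_read_lock(arch_rwlock_t *rwlock)
{
	/* Spin until a trylock succeeds.  Expressing read_lock this way
	 * lets both paths share a single out-of-line slow path. */
	while (!arch_read_trylock(rwlock))
		cpu_relax();
}
```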
-
Committed by Chris Metcalf
Add tile support for the EDAC driver, which provides unified system error (memory, PCI, etc.) reporting. For now, the TILEPro port reports memory correctable errors (CE) only. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
The Tilera architecture traditionally supports 64KB page sizes to improve TLB utilization and improve performance when the hardware is being used primarily to run a single application. For more generic server scenarios, it can be beneficial to run with 4KB page sizes, so this commit allows that to be specified (by modifying the arch/tile/include/hv/pagesize.h header). As part of this change, we also re-worked the PTE management slightly so that PTE writes all go through a __set_pte() function where we can do some additional validation. The set_pte_order() function was eliminated since the "order" argument wasn't being used. One bug uncovered was in the PCI DMA code, which wasn't properly flushing the specified range. This was benign with 64KB pages, but with 4KB pages we were getting some larger flushes wrong. The per-cpu memory reservation code also needed updating to conform with the newer percpu stuff; before it always chose 64KB, and that was always correct, but with 4KB granularity we now have to pay closer attention and reserve the amount of memory that will be requested when the percpu code starts allocating. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
This renames 3G_OPT to 2_75G, and adds 2_5G and 2_25G. For memory-intensive applications that are also network-buffer intensive it can be helpful to be able to tune the virtual address of the start of kernel memory. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
This is a grab bag of changes with no actual change to generated code. This includes whitespace and comment typos, plus a couple of stale comments being removed. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
- 2 March 2011 (14 commits)
Committed by Chris Metcalf
This adds a grab bag of symbols that have been missing for various modules. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
It now takes an additional argument so it can be used to flush-and-invalidate pages that are cached using hash-for-home as well as those that are cached with a coherence point on a single cpu. This allows it to be used more widely for changing the coherence point of arbitrary pages when necessary. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
The first issue is that we were using an incorrect hand-rolled variant of __kernel_text_address() which didn't handle module PCs; we now just use the standard API. The second is that we weren't accounting for the three-level page table when we were trying to pre-verify the addresses on the 64-bit TILE-Gx processor; we now do that correctly. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
This avoids having to maintain an additional separate assembly file, and of course the inline is slightly more efficient as well. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
Previously we used iret to atomically return to kernel PL with interrupts enabled. However, it turns out that we are architecturally guaranteed that we can just set and clear the "interrupt critical section" and only interrupt on the following instruction, so we now do that instead, since it's cleaner. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
These headers are used by Linux but are maintained upstream. This change incorporates a few minor fixes to these headers, including a new sim_print() function, cleaner support for the sim_syscall() API, and a sim_query_cpu_speed() method. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
This fixes the "initfree" boot argument. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
As the added comment says, we can sometimes see a coherence warning from our simulator if the "swapper_pgprot" variable on the boot cpu has not been evicted from cache by the time the other cpus come up. Force it to be evicted so we never see the warning. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
This should have been part of the initial hardwall submission to LKML but was overlooked. The header provides the ioctl definitions for manipulating the hardwall fd, so it needs to be available to userspace. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
Previously we assumed this was impossible, but in fact it can happen. Handle it gracefully by retrying after issuing a warning. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
The problem was that this could lead to IPIs being disabled during the softirq processing after a hypervisor downcall (e.g. for I/O), since both IPI and device interrupts use the INTCTRL_1 downcall mechanism. When this happened at the wrong time, it could lead to deadlock. Luckily, we were already maintaining the per-interrupt state we need, and using it in the proper way in the hypervisor, so all we had to do was to change Linux to stop blocking downcall interrupts for the entire length of the downcall. (Now they're blocked while we're executing the downcall routine itself, but not while we're executing any subsequent softirq routines.) The hypervisor is doing a very small amount of work it no longer needs to do (masking INTCTRL_1 on entry to the client interrupt routine), but doing so means that older versions of Tile Linux will continue to work with a current hypervisor, so that seems reasonable. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
The current implementations of __ndelay and __udelay call a hypervisor service to delay, but the hypervisor service isn't actually implemented very well, and the consensus is that Linux should handle figuring this out natively and not use a hypervisor service. By converting nanoseconds to cycles, and then spinning until the cycle counter reaches the desired cycle, we get several benefits: first, we are sensitive to the actual clock speed; second, we use less power by issuing a slow SPR read once every six cycles while we delay; and third, we properly handle the case of an interrupt by exiting at the target time rather than after some number of cycles. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
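A sketch of the cycle-counting scheme described here; ns2cycles() is a hypothetical stand-in for whatever nanosecond-to-cycle conversion the port uses, and the real loop reads the cycle counter via a slow SPR read to save power:

```c
#include <linux/timex.h>
#include <asm/processor.h>

void __ndelay(unsigned long nsecs)
{
	cycles_t target = get_cycles() + ns2cycles(nsecs); /* hypothetical helper */

	/* Spin on the free-running cycle counter; an interrupt merely delays
	 * the next comparison, so we still exit at the target time rather
	 * than after a fixed number of iterations. */
	while (get_cycles() < target)
		cpu_relax();
}
```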
-
Committed by Chris Metcalf
To handle single-step, tile mmap's a page of memory in the process space for each thread and uses it to construct a version of the instruction that we want to single-step. If the process exec's, though, we lose that mapping, and the kernel needs to be aware that it will need to recreate it if the exec'ed process then tries to single-step as well. Also correct some int32_t to s32 for better kernel style. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf
The convention changed to, e.g., ".data..page_aligned". This commit fixes the places in the tile architecture that were still using the old convention. One tile-specific section (.init.page) was dropped in favor of just using an "aligned" attribute. Sam Ravnborg <sam@ravnborg.org> pointed out __PAGE_ALIGNED_BSS, etc. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
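The flavor of the rename, as a before/after attribute sketch; the variable names are illustrative, and most call sites can simply use the __page_aligned_data / __PAGE_ALIGNED_BSS helpers mentioned above:

```c
#include <asm/page.h>

/* old convention */
static char boot_stack_old[PAGE_SIZE]
	__attribute__((__section__(".data.page_aligned")));

/* new convention: ".." separates the base section from the qualifier */
static char boot_stack_new[PAGE_SIZE]
	__attribute__((__section__(".data..page_aligned")));
```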
-
- 25 February 2011 (1 commit)
Committed by Chris Metcalf
This adds the volatile cast which forces the compiler to emit the load. Suggested by Peter Zijlstra <peterz@infradead.org>. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
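The idiom in question, sketched generically; it is the same pattern the kernel's ACCESS_ONCE() macro uses, and the macro name below is illustrative:

```c
/* Casting through a volatile-qualified pointer forces the compiler to
 * emit a real load each time instead of reusing a cached value. */
#define FORCE_LOAD(x)	(*(volatile typeof(x) *)&(x))
```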
-
- 24 February 2011 (4 commits)
Committed by Thomas Gleixner
The irq chip is converted and proper accessor functions are used. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Thomas Gleixner
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Peter Zijlstra
Tile's __pte_free_tlb() implementation makes assumptions about the generic mmu_gather implementation; cure this ;-) Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
- 25 January 2011 (1 commit)
Committed by Tejun Heo
Currently the percpu readmostly subsection may share cachelines with other percpu subsections, which may result in unnecessary cacheline bouncing and performance degradation. This patch adds a @cacheline parameter to the PERCPU() and PERCPU_VADDR() linker macros, makes each arch linker script specify its cacheline size, and uses it to align the percpu subsections. This is based on Shaohua's x86-only patch. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Shaohua Li <shaohua.li@intel.com>
-
- 21 January 2011 (1 commit)
Committed by Thomas Gleixner
No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Chris Metcalf <cmetcalf@tilera.com>
-