- 19 June 2012, 1 commit
Committed by Chris Metcalf
Instruction bundles are always little-endian, even when running in big-endian mode. I missed this internal bug fix when cherry-picking the big-endian code to return to the community. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
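A minimal sketch of the rule, assuming a 64-bit bundle; the helper name and accessor are illustrative, not the actual patched code:

```c
#include <linux/types.h>
#include <asm/byteorder.h>

/*
 * Hypothetical helper: instruction bundles are stored little-endian in
 * memory even when the kernel itself runs big-endian, so convert
 * explicitly rather than dereferencing the PC as a native 64-bit word.
 */
static inline unsigned long long read_instruction_bundle(const void *pc)
{
	return le64_to_cpu(*(const __le64 *)pc);
}
```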
- 06 June 2012, 2 commits
Committed by Chris Metcalf
Some code was moved from init_task.c to setup.c, but the corresponding header needed to move as well. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
This routine isn't used unless CONFIG_HOMECACHE is enabled, which isn't even available as a public configuration option yet. Since it no longer links correctly in 3.4, just remove it for now. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
- 02 June 2012, 5 commits
Committed by Al Viro
Does block_sigmask() + tracehook_signal_handler(); called when the sigframe has been successfully built. All architectures are converted to it; block_sigmask() itself is gone now (merged into this one). I'm still not too happy with the signature, but that's a separate story (IMO we need a structure that would contain signal number + siginfo + k_sigaction, so that get_signal_to_deliver() would fill one, and signal_delivered(), handle_signal() and probably setup...frame() would take one). Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
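A sketch of the caller side in an arch handle_signal(), under the assumption that the frame-setup routine is named setup_rt_frame() and that the port uses a TIF_SINGLESTEP-style flag; both names vary by architecture:

```c
static void handle_signal(int sig, siginfo_t *info, struct k_sigaction *ka,
			  struct pt_regs *regs)
{
	sigset_t *oldset = sigmask_to_save();

	/* Build the signal frame; on failure the caller forces SIGSEGV. */
	if (setup_rt_frame(sig, ka, info, oldset, regs) < 0)
		return;

	/*
	 * Frame built successfully: clear the saved-sigmask state, block
	 * the handled signal, and notify any tracer, all in one call that
	 * replaces the old open-coded block_sigmask() sequence.
	 */
	signal_delivered(sig, info, ka, regs,
			 test_thread_flag(TIF_SINGLESTEP));
}
```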
Committed by Al Viro
Only 3 out of 63 callers do not want the unblockable signals excluded. Renamed the current variant to __set_current_blocked(), added a set_current_blocked() that excludes unblockable signals, and switched the open-coded instances to it. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
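The resulting split looks roughly like this sketch of the generic kernel/signal.c side (__set_task_blocked() is the existing siglock-protected worker):

```c
void __set_current_blocked(const sigset_t *newset)
{
	struct task_struct *tsk = current;

	spin_lock_irq(&tsk->sighand->siglock);
	__set_task_blocked(tsk, newset);
	spin_unlock_irq(&tsk->sighand->siglock);
}

/* The common entry point now strips the unblockable signals itself. */
void set_current_blocked(sigset_t *newset)
{
	sigdelsetmask(newset, sigmask(SIGKILL) | sigmask(SIGSTOP));
	__set_current_blocked(newset);
}
```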
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Committed by Al Viro
Replace the boilerplate "should we use ->saved_sigmask or ->blocked?" logic with calls to an obvious inlined helper... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
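The helper is small; roughly:

```c
static inline sigset_t *sigmask_to_save(void)
{
	sigset_t *res = &current->blocked;

	/* If a sigsuspend()-style saved mask is pending, use that one. */
	if (unlikely(test_restore_sigmask()))
		res = &current->saved_sigmask;
	return res;
}
```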
Committed by Al Viro
First fruits of the ..._restore_sigmask() helpers: now we can pull the boilerplate "signal didn't have a handler, clear RESTORE_SIGMASK and restore the blocked mask from ->saved_mask" into a common helper. Open-coded instances switched over... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
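The common helper amounts to:

```c
static inline void restore_saved_sigmask(void)
{
	/* No handler ran: drop the flag and put the old blocked mask back. */
	if (test_and_clear_restore_sigmask())
		__set_current_blocked(&current->saved_sigmask);
}
```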
- 26 May 2012, 10 commits
Committed by Chris Metcalf
If the kernel unexpectedly takes a bad trap, it's convenient to have it report the type of trap as part of the error. This gives customers a bit more context before they call up customer support. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
This just adds a few more attributes to the information Linux can query from the hypervisor for the /sys/hypervisor/board/ directory, providing the part, serial number, revision number, and description for CPU modules (as opposed to the board itself, or any mezzanine boards). Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
The hardwall drain code was not properly implemented for tilegx, only for tilepro, so you couldn't reliably restart an application that made use of the udn. In addition, the code was only applicable to the udn (user dynamic network). On tilegx there is a second user network available (the "idn"), and there is support for having I/O shims deliver user-level interrupts to applications ("ipi"), which functions in a very similar way to the inter-core permissions used for the udn/idn. So this change also generalizes the code from supporting just the udn to supporting udn/idn/ipi on tilegx. By default we now use /dev/hardwall/{udn,idn,ipi}, with separate minor numbers for the three devices. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
This change adds support for a new "super" bit in the PTE, using the new arch_make_huge_pte() method. The Tilera hypervisor sees the bit set at a given level of the page table and gangs together 4, 16, or 64 consecutive pages from that level of the hierarchy to create a larger TLB entry.

One extra "super" page size can be specified at each of the three levels of the page table hierarchy on tilegx, using the "hugepagesz" argument on the boot command line. A new hypervisor API is added to allow Linux to tell the hypervisor how many PTEs to gang together at each level of the page table.

To allow pre-allocating huge pages larger than the buddy allocator can handle, this change modifies the Tilera bootmem support to put all memory on tilegx platforms into bootmem. As part of this change I eliminate the vestigial CONFIG_HIGHPTE support, which never worked anyway, and eliminate the hv_page_size() API in favor of the standard vma_kernel_pagesize() API. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
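A sketch of the arch hook this relies on; the pte_mksuper() name and the exact size test are assumptions based on the description above, not verified tile code:

```c
pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
			 struct page *page, int writable)
{
	size_t pagesize = huge_page_size(hstate_vma(vma));

	/*
	 * A huge page whose size is not a native page-table level is built
	 * by ganging consecutive PTEs, so mark the entry "super" and let
	 * the hypervisor assemble the larger TLB entry.
	 */
	if (pagesize != PUD_SIZE && pagesize != PMD_SIZE)
		entry = pte_mksuper(entry);

	return entry;
}
```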
Committed by Chris Metcalf
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
We already had a syscall that did some dcache flushing, but it was not used in practice. Make it MIPS-compatible instead, so it can perform both the DCACHE and ICACHE actions. We have code that wants to be able to use the ICACHE flush mode from userspace, so this change enables that. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
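A userspace sketch of what this enables; the syscall number macro and the flag values are assumed to follow the MIPS <asm/cachectl.h> convention rather than confirmed for the tile port:

```c
#include <unistd.h>
#include <sys/syscall.h>

/* Assumed to mirror the MIPS cachectl.h values. */
#define ICACHE	(1 << 0)	/* flush the instruction cache */
#define DCACHE	(1 << 1)	/* writeback/invalidate the data cache */
#define BCACHE	(ICACHE | DCACHE)

/* Make freshly written code (e.g. from a JIT) safe to execute. */
static int flush_code_range(void *addr, size_t len)
{
	return syscall(__NR_cacheflush, addr, len, BCACHE);
}
```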
Committed by Chris Metcalf
This change introduces new flags for the hv_install_context() API that passes a page table pointer to the hypervisor. Clients can explicitly request 4K, 16K, or 64K small pages when they install a new context. In practice, the page size is fixed at kernel compile time and the same size is always requested every time a new page table is installed.

The <hv/hypervisor.h> header changes so that it provides more abstract macros for managing "page" things like PFNs and page tables. For example, there is now a HV_DEFAULT_PAGE_SIZE_SMALL instead of the old HV_PAGE_SIZE_SMALL. The various PFN routines have been eliminated and only PA- or PTFN-based ones remain (since PTFNs are always expressed in a fixed 2KB "page" size). The page-table management macros are renamed with a leading underscore and take page-size arguments, with the presumption that clients will use those macros in some single place to provide the "real" macros they will use themselves.

I happened to notice that the old hv_set_caching() API was totally broken (it assumed 4KB pages), so I changed it so it would nominally work correctly with other page sizes. Tag modules with the page size so you can't load a module built with a conflicting page size. (And add a test for SMP while we're at it.) Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
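A heavily hedged sketch of the install-time selection; the HV_CTX_PG_SM_* flag names and the helper shown are reconstructed from the description above and may not match <hv/hypervisor.h> exactly:

```c
/* Choose the small-page flag at kernel compile time (names illustrative). */
#if defined(CONFIG_PAGE_SIZE_64KB)
#define CTX_PAGE_FLAG	HV_CTX_PG_SM_64K
#elif defined(CONFIG_PAGE_SIZE_16KB)
#define CTX_PAGE_FLAG	HV_CTX_PG_SM_16K
#else
#define CTX_PAGE_FLAG	HV_CTX_PG_SM_4K
#endif

/* Install a new page table, telling the hypervisor the small-page size. */
static void install_page_table(unsigned long pgdir_pa, HV_PTE access, int asid)
{
	int rc = hv_install_context(pgdir_pa, access, asid, CTX_PAGE_FLAG);

	if (rc < 0)
		panic("hv_install_context failed: %d", rc);
}
```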
Committed by Chris Metcalf
Use direct load/store for get_user/put_user. Previously, we would call out to a helper routine that would do the appropriate thing and then return, handling the possible exception internally. Now we inline the load or store, along with a "we succeeded" indication in a register; if the load or store faults, we write a "we failed" indication into the same register and then return to the following instruction. This is more efficient and gives us more compact code, as well as being more in line with what other architectures do.

The special futex assembly source file for TILE-Gx also disappears in this change; we just use the same inlining idiom there as well, putting the appropriate atomic operations directly into futex_atomic_op_inuser() (and thus into the FUTEX_WAIT function).

The underlying atomic copy_from_user and copy_to_user functions were renamed using the (cryptic) x86 convention as copy_from_user_ll and copy_to_user_ll. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
The toolchain now supports big-endian mode, so add support for building the kernel to run big-endian as well. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
In general we want to avoid ever touching memory while within an interrupt critical section, since the page fault path goes through a different path from the hypervisor when in an interrupt critical section, and we carefully decided with tilegx that we didn't need to support this path in the kernel. (On tilepro we did implement that path as part of supporting atomic instructions in software.) In practice we always need to touch the kernel stack, since that's where we store the interrupt state before releasing the critical section, but this change cleans up a few things.

The IRQ_ENABLE macro is split up so that when we want to enable interrupts in a deferred way (e.g. for cpu_idle or for interrupt return), we can read the per-cpu enable mask before entering the critical section. The cache-migration code is changed to use interrupt masking instead of interrupt critical sections. And the interrupt-entry code is changed so that we defer loading "tp" from per-cpu data until after we have released the interrupt critical section. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
- 24 May 2012, 1 commit
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
- 17 May 2012, 2 commits
Committed by Chris Metcalf
This passes siginfo and mcontext to tilegx32 signal handlers that don't have SA_SIGINFO set, just as we have been doing for tilegx64. Cc: stable@vger.kernel.org Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
First, we were at risk of handling thread-info flags, in particular do_signal(), when returning to kernel space. This could happen after a failed kernel_execve(), or when forking a kernel thread. The fix is to test user_mode() in do_work_pending() and return immediately if we are not returning to user space; we already had this test for one of the flags, so I just hoisted it to the top of the function.

Second, if a ptraced process updated the callee-saved registers in the pt_regs struct and then processed another thread-info flag, we would overwrite the modifications with the original callee-saved registers. To fix this, we add a register to note whether we've already saved the registers once, and skip doing it on additional passes through the loop. To avoid a performance hit from the couple of extra instructions involved, I modified the GET_THREAD_INFO() macro to be guaranteed to be one instruction, then bundled it with adjacent instructions, yielding an overall net savings. Reported-by: Al Viro <viro@ZenIV.linux.org.uk> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
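A simplified sketch of the hoisted test; the real tile do_work_pending() handles more flags and a different return convention, so treat this as illustrative only:

```c
int do_work_pending(struct pt_regs *regs, u32 thread_info_flags)
{
	/*
	 * If we are on the way back to kernel space (e.g. after a failed
	 * kernel_execve() or when forking a kernel thread), do nothing:
	 * none of the user-return work applies.
	 */
	if (!user_mode(regs))
		return 0;

	if (thread_info_flags & _TIF_NEED_RESCHED)
		schedule();
	if (thread_info_flags & _TIF_SIGPENDING)
		do_signal(regs);
	if (thread_info_flags & _TIF_NOTIFY_RESUME) {
		clear_thread_flag(TIF_NOTIFY_RESUME);
		tracehook_notify_resume(regs);
	}
	return 1;
}
```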
- 08 May 2012, 1 commit
Committed by Thomas Gleixner
Use the core allocator and deal with the extra cleanup in arch_release_thread_info(). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Chris Metcalf <cmetcalf@tilera.com> Link: http://lkml.kernel.org/r/20120505150142.311126440@linutronix.de
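The hook itself has this shape; the tile-specific cleanup shown (freeing any single-step state hanging off the thread_info) is my reading of what needs to happen here, not verified code:

```c
void arch_release_thread_info(struct thread_info *info)
{
	struct single_step_state *step_state = info->step_state;

	/* Free the per-thread single-step machinery, if it was allocated. */
	if (step_state) {
		kfree(step_state);
		info->step_state = NULL;
	}
}
```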
- 05 May 2012, 1 commit
Committed by Thomas Gleixner
Same code; use the generic version. The special Makefile treatment is pointless anyway, as init_task.o contains only data, which is handled by the linker script, so there is no point in treating it like head text. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Chris Metcalf <cmetcalf@tilera.com> Link: http://lkml.kernel.org/r/20120503085035.528129988@linutronix.de
- 26 April 2012, 2 commits
Committed by Thomas Gleixner
Preparatory patch to make the idle thread allocation for secondary CPUs generic. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Mike Frysinger <vapier@gentoo.org> Cc: Jesper Nilsson <jesper.nilsson@axis.com> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Hirokazu Takata <takata@linux-m32r.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Howells <dhowells@redhat.com> Cc: James E.J. Bottomley <jejb@parisc-linux.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: David S. Miller <davem@davemloft.net> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: Richard Weinberger <richard@nod.at> Cc: x86@kernel.org Link: http://lkml.kernel.org/r/20120420124556.964170564@linutronix.de
Committed by Chris Metcalf
They were marked __devinit by mistake, causing some warnings at link time. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
- 21 April 2012, 1 commit
Committed by Linus Torvalds
This continues the theme started with vm_brk() and vm_munmap(): vm_mmap() does the same thing as do_mmap(), but additionally does the required VM locking. This uninlines (and rewrites it to be clearer) do_mmap(), which sadly duplicates it in mm/mmap.c and mm/nommu.c. But that way we don't have to export our internal do_mmap_pgoff() function. Some day we hopefully don't have to export do_mmap() either, if all modular users can become the simpler vm_mmap() instead. We're actually very close to that already, with the notable exception of the (broken) use in i810, and a couple of stragglers in binfmt_elf. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
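For a modular caller the difference looks like this sketch (the surrounding function and names are made up):

```c
#include <linux/mm.h>
#include <linux/mman.h>

/* Map 'len' bytes of 'file' into the current process for driver use. */
static unsigned long map_buffer(struct file *file, unsigned long len,
				unsigned long off)
{
	/*
	 * Previously the caller had to take the VM lock itself:
	 *
	 *	down_write(&current->mm->mmap_sem);
	 *	addr = do_mmap(file, 0, len, PROT_READ | PROT_WRITE,
	 *		       MAP_SHARED, off);
	 *	up_write(&current->mm->mmap_sem);
	 *
	 * vm_mmap() now does that locking internally.
	 */
	return vm_mmap(file, 0, len, PROT_READ | PROT_WRITE, MAP_SHARED, off);
}
```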
- 12 April 2012, 1 commit
Committed by Chris Metcalf
Until we push the unaligned access support for tilegx, it's silly to have arch/tile/kernel/proc.c generate a warning about an unused variable. Extend the #ifdef to cover all the code and data for now. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
- 10 April 2012, 1 commit
Committed by Srivatsa S. Bhat
The scheduler depends on receiving the CPU_STARTING notification, without which we end up in a lot of trouble. So add the missing call to notify_cpu_starting() in the bringup code. Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
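The shape of the fix, sketched against the secondary-CPU bringup path; the surrounding tile-specific details are elided and the function name is as I recall it, not verified:

```c
void online_secondary(void)
{
	/* ... per-cpu init, interrupt setup, clock calibration ... */

	/*
	 * Tell CPU_STARTING subscribers (the scheduler in particular)
	 * that this CPU is coming up, before marking it online.
	 */
	notify_cpu_starting(smp_processor_id());

	set_cpu_online(smp_processor_id(), 1);

	cpu_idle();
}
```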
- 03 April 2012, 12 commits
Committed by Chris Metcalf
The return path, as we reload registers and core state, requires that r30 hold a boolean indicating whether we are returning from an NMI, but in a couple of cases we weren't setting this properly, with the result that we could accidentally unmask the NMI interrupt(s), which could cause confusion. Now we set r30 in every place where we jump into the interrupt return path. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
Previously we were returning SIGSEGV in this case. It seems cleaner to return SIGBUS, since the hardware figures out alignment traps before TLB violations, so SIGBUS is the "more correct" signal. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
If we are single-stepping and make a syscall, we call ptrace_notify() explicitly on the return path back to user space, since we are returning to a pc value set artificially to the next instruction, and otherwise we won't register that we stepped over the syscall instruction (swint1). Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
This allows the later-panicking tiles to wait in a lower power state until they get interrupted by an smp_send_stop(). Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
This avoids the hardware istream prefetcher doing unnecessary work. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
This is more standard and avoids having to remember what units the options actually take. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
We were failing to track the memory when we allocated it. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
Not associated with any code changes, so I'm just lumping these comment changes into a commit by themselves. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
We now respond to MEM_ERROR traps (e.g. an atomic instruction to non-cacheable memory) with a SIGBUS. We also no longer generate a console crash message if a user process dies due to a SIGTRAP. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
In certain circumstances we need to do a bunch of jump-and-link instructions to fill the hardware return-address stack with nonzero values. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Committed by Chris Metcalf
Fix a long-standing bug in the stack backtracer where we would print garbage to the console instead of kernel function names if the kernel wasn't built with symbol support (e.g. mboot).

Make sure to tag every line of userspace backtrace output if we actually have the mmap_sem, since that way, if there's no tag, we know it's because we couldn't trylock the semaphore.

Stop doing a TLB flush and examining page tables during backtrace. Instead, just trust that __copy_from_user_inatomic() will properly fault and return a failure, which it should do in all cases.

Fix a latent bug where the backtracer would directly examine a signal context in user space rather than copying it safely to kernel memory first. This meant that a race with another thread could potentially have caused a kernel panic.

Guard against an unaligned sp when trying to restart the backtrace at an interrupt or signal handler point in the kernel backtracer.

Report kernel symbolic information for the call instruction rather than for the following instruction. We still report the actual numeric address corresponding to the instruction after the call, for the sake of consistency with the normal expectations for stack backtracers. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
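The "trust the fault path" approach amounts to something like this hypothetical helper (the name is illustrative):

```c
/*
 * Read one word from a userspace frame without walking page tables:
 * if the access faults, the copy simply fails and the backtrace stops.
 */
static bool read_user_word(unsigned long __user *addr, unsigned long *val)
{
	int bytes_uncopied;

	pagefault_disable();
	bytes_uncopied = __copy_from_user_inatomic(val, addr, sizeof(*val));
	pagefault_enable();

	return bytes_uncopied == 0;
}
```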
Committed by Chris Metcalf
With lockstat we can end up trying to get a backtrace before "high_memory" is initialized, so don't worry about range testing if it is zero. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>