- 10 September 2009 (2 commits)

By David S. Miller:
When the perf counter subsystem needs to reschedule work out from an NMI, it invokes set_perf_counter_pending(). This triggers a non-NMI irq which should invoke perf_counter_do_pending(). Currently this won't trigger, because sparc64 does not yet drive the perf counter subsystem from NMIs, but it will once hardware counter support is added.
Signed-off-by: David S. Miller <davem@davemloft.net>
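
The deferral pattern this describes can be modeled outside the kernel. The sketch below is a simplified stand-in, not the sparc64 code: an NMI-context caller only latches a pending marker and raises an ordinary (maskable) interrupt, and the handler for that interrupt performs the real perf work.

```c
/* Simplified model of deferring work out of NMI context. Names mirror the
 * commit message; the flag and the "raise an IRQ" step are illustrative. */
#include <stdatomic.h>
#include <stdio.h>

static _Atomic int perf_pending;            /* pending marker (illustrative) */

static void perf_counter_do_pending(void)   /* stand-in for the real hook */
{
        printf("perf work runs in normal IRQ context\n");
}

static void set_perf_counter_pending(void)  /* called from NMI context */
{
        atomic_store(&perf_pending, 1);
        /* the real kernel would raise a maskable interrupt here */
}

static void deferred_irq_handler(void)      /* the non-NMI IRQ being wired up */
{
        if (atomic_exchange(&perf_pending, 0))
                perf_counter_do_pending();
}

int main(void)
{
        set_perf_counter_pending();  /* as if from an NMI */
        deferred_irq_handler();      /* as if the triggered IRQ fired */
        return 0;
}
```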

By David S. Miller:
Signed-off-by: David S. Miller <davem@davemloft.net>

- 09 September 2009 (1 commit)

By David S. Miller:
Use a per-cpu 'wd_enabled' boolean and a global atomic_t count of cpus with the watchdog NMI enabled; the count is set to -1 if something is wrong with the watchdog and it cannot be used.
Signed-off-by: David S. Miller <davem@davemloft.net>
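
A minimal sketch of that bookkeeping (the array size, counter name, and helpers are illustrative, not the kernel's): each cpu keeps its own wd_enabled flag, and the global atomic count is forced to -1 when the watchdog is unusable so enable paths can bail out.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

static bool wd_enabled[NR_CPUS];        /* per-cpu in the real kernel */
static atomic_int nmi_active;           /* illustrative name; -1 means "watchdog broken" */

static void watchdog_enable(int cpu)
{
        if (atomic_load(&nmi_active) < 0)
                return;                 /* watchdog marked unusable */
        wd_enabled[cpu] = true;
        atomic_fetch_add(&nmi_active, 1);
}

static void watchdog_mark_broken(void)
{
        atomic_store(&nmi_active, -1);
}

int main(void)
{
        watchdog_enable(0);
        watchdog_enable(1);
        printf("cpus with NMI watchdog: %d\n", atomic_load(&nmi_active));
        watchdog_mark_broken();
        watchdog_enable(2);             /* refused: watchdog is broken */
        printf("state after failure: %d\n", atomic_load(&nmi_active));
        return 0;
}
```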

- 04 September 2009 (1 commit)

By Jens Axboe:
This wires up the perf_counter_open() syscall so that basic software perf support works.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 26 August 2009 (1 commit)

By David S. Miller:
When page alloc debugging is not enabled, we essentially accept any virtual address for linear kernel TLB misses. But with kgdb, kernel address probing, and other facilities we can end up accessing arbitrary garbage, so make sure the address we miss on will translate to physical memory that actually exists.

To make this work we have to embed the valid address bitmap in the kernel image, and to make that less expensive the maximum physical memory address is decreased to 1 << 41, even on chips that support a 42-bit physical address space. We can do this because bit 41 indicates "I/O space" and thus covers non-memory ranges. As a result:
1) kpte_linear_bitmap shrinks from 2K to 1K in size
2) we need 64K more for the valid address bitmap

We can't let the valid address bitmap be dynamically allocated once we start using it to validate TLB misses, otherwise we have crazy issues to deal with wrt. recursive TLB misses and such. If we're in a TLB miss, it could be the deepest trap level that's legal inside the cpu, so if we TLB miss while referencing the bitmap, the cpu runs out of trap levels and enters RED state.

To guard against out-of-range accesses to the bitmap, we check that no bits in the physical address above bit 40 are set. We could export and use last_valid_pfn for this check, but that's just an unnecessary extra memory reference.

On the plus side, since we load all of these translations into the special 4MB mapping TSB, and we check the TSB first for TLB misses, there should be no real cost for these new checks in the TLB miss path.
Reported-by: heyongli@gmail.com
Signed-off-by: David S. Miller <davem@davemloft.net>
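
An illustrative model of the check being added (the constants and function names are assumptions chosen to match the description above, not the kernel's): one bit per 4MB chunk of the first 1 << 41 bytes of physical address space, plus a guard that rejects any physical address with bits set above bit 40.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_PHYS_BITS   41
#define CHUNK_SHIFT     22                       /* 4MB granules */
#define BITMAP_BITS     (1ULL << (MAX_PHYS_BITS - CHUNK_SHIFT))

static uint64_t valid_addr_bitmap[BITMAP_BITS / 64];   /* 64KB when built in */

static void mark_phys_present(uint64_t paddr)
{
        uint64_t bit = paddr >> CHUNK_SHIFT;

        valid_addr_bitmap[bit / 64] |= 1ULL << (bit % 64);
}

static bool phys_addr_valid(uint64_t paddr)      /* illustrative name */
{
        uint64_t bit = paddr >> CHUNK_SHIFT;

        if (paddr >> MAX_PHYS_BITS)              /* bits above bit 40 set: reject */
                return false;
        return valid_addr_bitmap[bit / 64] & (1ULL << (bit % 64));
}

int main(void)
{
        mark_phys_present(0x40000000ULL);        /* pretend the 1GB mark has RAM */
        printf("%d %d\n",
               phys_addr_valid(0x40000000ULL),   /* 1: present      */
               phys_addr_valid(1ULL << 41));     /* 0: out of range */
        return 0;
}
```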

- 18 August 2009 (3 commits)

By Konrad Eisele:
Add the sparc_leon enum and the M_LEON|M_LEON3_SOC machine. Compile leon.c in mm and kernel when CONFIG_SPARC_LEON is defined, and add sparc_leon-dependent initialization to the switch statements and head.S.
Signed-off-by: Konrad Eisele <konrad@gaisler.com>
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Konrad Eisele:
SPARC-LEON uses a different ASI for MMU register accesses.
Signed-off-by: Konrad Eisele <konrad@gaisler.com>
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Konrad Eisele:
The macro CONFIG_SPARC_LEON will, if undefined, shield the sun-sparc code from LEON-specific code. In particular, include/asm/leon.h becomes empty through #ifdef, and leon_kernel.c and leon_mm.c are not compiled.
Signed-off-by: Konrad Eisele <konrad@gaisler.com>
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 17 August 2009 (2 commits)

By Alexey Dobriyan:
The sched.h inclusion is definitely not needed, just as in the 32-bit version; remove it and fix up the resulting compilation.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Jaswinder Singh Rajput:
The only difference between the 32-bit and 64-bit versions is dma64_addr_t; the rest is the same. Also fixes the following 'make includecheck' warning: arch/sparc/include/asm/types.h: asm-generic/int-ll64.h is included more than once.
Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 28 July 2009 (1 commit)

By Benjamin Herrenschmidt:
mm: Pass virtual address to [__]p{te,ud,md}_free_tlb()

Upcoming patches to support the new 64-bit "BookE" powerpc architecture need the virtual address corresponding to a PTE page when freeing it, due to the way the HW table walker works. Basically, the TLB can be loaded with "large" pages that cover the whole virtual space (well, sort of; half of it, actually) represented by a PTE page, and which contain an "indirect" bit indicating that this TLB entry RPN points to an array of PTEs from which the TLB can then create direct entries. Thus, in order to invalidate those when PTE pages are deleted, we need the virtual address to pass to tlbilx or tlbivax instructions.

The old trick of sticking it somewhere in the PTE page's struct page sucks too much; the address is almost readily available in all call sites and almost everybody implements these as macros, so we may as well add the argument everywhere. I added it to the pmd and pud variants for consistency.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: David Howells <dhowells@redhat.com> [MN10300 & FRV]
Acked-by: Nick Piggin <npiggin@suse.de>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
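
A toy sketch of the interface change, with stand-in types rather than the real mm structures: the only point is that the free hook now carries the virtual address, so an architecture with indirect TLB entries has something to feed its invalidate instructions.

```c
#include <stdio.h>

struct mmu_gather { int dummy; };       /* stand-in types, not the kernel's */
struct page { unsigned long pfn; };

/* old shape:  __pte_free_tlb(tlb, ptepage)          */
/* new shape:  __pte_free_tlb(tlb, ptepage, address) */
static void __pte_free_tlb(struct mmu_gather *tlb, struct page *ptepage,
                           unsigned long address)
{
        /* an arch like 64-bit BookE would use 'address' here to issue
         * tlbilx/tlbivax against the indirect TLB entry covering it */
        printf("free PTE page (pfn %lu) for VA 0x%lx\n", ptepage->pfn, address);
        (void)tlb;
}

int main(void)
{
        struct mmu_gather tlb = { 0 };
        struct page ptepage = { .pfn = 42 };

        __pte_free_tlb(&tlb, &ptepage, 0xc0000000UL);
        return 0;
}
```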

- 11 July 2009 (1 commit)

By Peter Zijlstra:
Pull the initial preempt_count value into a single definition site. Maintainers for alpha, ia64, and m68k, please have a look; your arch code is funny. The header magic is a bit odd, but similar to the KERNEL_DS one: CPP waits to expand these macros until the INIT_THREAD_INFO macro itself is expanded, which happens in arch/*/kernel/init_task.c where we've already included sched.h, so we're good.
Cc: tony.luck@intel.com
Cc: rth@twiddle.net
Cc: geert@linux-m68k.org
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
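
A minimal sketch of the consolidation, using a toy struct and an illustrative value rather than the kernel's real definitions: the initial count is spelled out once as INIT_PREEMPT_COUNT and every initializer that needs it expands that macro instead of hard-coding its own literal.

```c
/* The value 1 here is illustrative: a non-zero preempt_count keeps
 * preemption disabled until the scheduler is up. */
#define INIT_PREEMPT_COUNT 1

struct thread_info_model {      /* toy stand-in for struct thread_info */
        int preempt_count;
};

#define INIT_THREAD_INFO_MODEL() \
        { .preempt_count = INIT_PREEMPT_COUNT, }

static struct thread_info_model init_thread_info = INIT_THREAD_INFO_MODEL();

int main(void)
{
        return init_thread_info.preempt_count == INIT_PREEMPT_COUNT ? 0 : 1;
}
```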

- 18 June 2009 (1 commit)

By Matthew Wilcox:
This function was only used by pci_claim_resource(), and the previous commit deleted that use.
Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 17 June 2009 (1 commit)

By Randy Dunlap:
Convert most arches to use asm-generic/kmap_types.h. Move the KM_FENCE_ macro additions into asm-generic/kmap_types.h, controlled by __WITH_KM_FENCE from each arch's kmap_types.h file. It would be nice to be able to add custom KM_ types per arch, but I don't yet see a nice, clean way to do that. Built on x86_64, i386, mips, sparc, alpha (tonyb), powerpc (tonyb), and 68k (tonyb). Note: avr32 should be able to remove KM_PTE2 (since it's not used) and then just use the generic kmap_types.h file; get avr32 maintainer approval.
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: <linux-arch@vger.kernel.org>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Bryan Wu <cooloney@kernel.org>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: "Luck Tony" <tony.luck@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
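
The fence mechanism can be sketched as below; this is a simplified rendition of the pattern from memory, not a copy of asm-generic/kmap_types.h, and the slot names are trimmed down: when an arch defines __WITH_KM_FENCE, every kmap slot gets an interleaved fence slot, otherwise the enum collapses to just the plain slots.

```c
#ifdef __WITH_KM_FENCE
# define KMAP_D(n) __KM_FENCE_##n ,
#else
# define KMAP_D(n)
#endif

enum km_type_model {                    /* abbreviated slot list */
        KMAP_D(0) KM_BOUNCE_READ,
        KMAP_D(1) KM_USER0,
        KMAP_D(2) KM_USER1,
        KMAP_D(3) KM_TYPE_NR
};

int main(void)
{
        /* with __WITH_KM_FENCE defined, KM_TYPE_NR roughly doubles: every
         * real slot is separated by a fence slot used for debug checks */
        return (int)KM_TYPE_NR;
}
```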

- 16 June 2009 (14 commits)

By David S. Miller:
Signed-off-by: David S. Miller <davem@davemloft.net>

By David S. Miller:
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Ingo Molnar <mingo@elte.hu>

By FUJITA Tomonori:
This converts SPARC32 to struct dma_map_ops, which means we can remove dma-mapping_{32|64}.h.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Tested-by: Robert Reif <reif@earthlink.net>
Signed-off-by: David S. Miller <davem@davemloft.net>

By FUJITA Tomonori:
This patch converts dma_map_single and dma_unmap_single to use map_page and unmap_page respectively, and removes the now-unnecessary map_single and unmap_single. map_page can be used to implement map_single, but the opposite is impossible, so having only dma_map_page in struct dma_ops is enough.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Tested-by: Robert Reif <reif@earthlink.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
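
The asymmetry the commit relies on is easy to show with a toy model (the model_* names and fake bus addresses are made up, not the DMA API): a single mapping is just a page mapping at an offset, so only the page-based hook needs to exist.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE  4096UL
#define PAGE_MASK  (~(PAGE_SIZE - 1))

typedef uint64_t dma_addr_t;

/* stand-in for ops->map_page(dev, page, offset, size, dir) */
static dma_addr_t model_map_page(void *page, unsigned long offset, size_t size)
{
        printf("map page %p offset %lu len %zu\n", page, offset, size);
        return (dma_addr_t)(uintptr_t)page + offset;    /* fake bus address */
}

/* dma_map_single expressed entirely in terms of map_page */
static dma_addr_t model_map_single(void *cpu_addr, size_t size)
{
        void *page = (void *)((uintptr_t)cpu_addr & PAGE_MASK);
        unsigned long offset = (uintptr_t)cpu_addr & ~PAGE_MASK;

        return model_map_page(page, offset, size);
}

int main(void)
{
        static char buf[256];

        model_map_single(buf + 100, sizeof(buf) - 100);
        return 0;
}
```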

By FUJITA Tomonori:
This adds sync_single_for_device() and sync_sg_for_device() to struct dma_ops in order to unify dma-mapping_{32|64}.h; dma-mapping_32.h needs them even though dma-mapping_64.h doesn't.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Tested-by: Robert Reif <reif@earthlink.net>
Signed-off-by: David S. Miller <davem@davemloft.net>

By FUJITA Tomonori:
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Tested-by: Robert Reif <reif@earthlink.net>
Signed-off-by: David S. Miller <davem@davemloft.net>

By David S. Miller:
Now that we defer the cpu_data() initializations to the end of per-cpu setup, we can get rid of the local hack we had to set up the per-cpu areas early. This is a necessary step toward supporting HAVE_DYNAMIC_PER_CPU_AREA, since the per-cpu setup must run when page structs are available.
Signed-off-by: David S. Miller <davem@davemloft.net>

By David S. Miller:
Signed-off-by: David S. Miller <davem@davemloft.net>

By David S. Miller:
We need to split the cpu present mask setup from the cpu_data initialization, and this is a first step towards that.
Signed-off-by: David S. Miller <davem@davemloft.net>

By David S. Miller:
Signed-off-by: David S. Miller <davem@davemloft.net>

By David S. Miller:
With feedback from Sam Ravnborg.
Signed-off-by: David S. Miller <davem@davemloft.net>

By David S. Miller:
Surprisingly, this actually makes LOAD_PER_CPU_BASE() a little more efficient.
Signed-off-by: David S. Miller <davem@davemloft.net>

By David S. Miller:
Later we're going to want to get at these definitions from asm/percpu.h, and that's not possible via cpudata.h because of the set of dependencies the non-trap_block[] pieces have.
Signed-off-by: David S. Miller <davem@davemloft.net>

By David S. Miller:
This really isn't necessary at all; a local variable suits the job just fine. This frees up 8 bytes in the trap_block[] that we can use later to store the per-cpu base addresses.
Signed-off-by: David S. Miller <davem@davemloft.net>

- 12 June 2009 (5 commits)

By Rusty Russell:
It's theoretically possible that there are exception table entries which point into the (freed) init text of modules. These could cause future problems if other modules get loaded into that memory and cause an exception, as we'd see the wrong fixup. The only case I know of is kvm-intel.ko (when CONFIG_CC_OPTIMIZE_FOR_SIZE=n). Amerigo fixed this long-standing FIXME in the x86 version, but this patch is more general.

This implements trim_init_extable(); most archs are simple since they use the standard lib/extable.c sort code. Alpha and IA64 use relative addresses in their fixups, so their trimming is a slight variation. Sparc32 is unique: it doesn't seem to define ARCH_HAS_SORT_EXTABLE, yet it defines its own sort_extable() which overrides the one in lib. It doesn't sort, so we have to mark deleted entries instead of actually trimming them.
Inspired-by: Amerigo Wang <amwang@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: linux-alpha@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: linux-ia64@vger.kernel.org
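
A simplified illustration of the trimming idea, not the kernel's implementation (the real code works on sorted, arch-specific table layouts and may poison rather than shrink): drop every exception entry whose faulting address falls inside the freed init region.

```c
#include <stdbool.h>
#include <stdio.h>

struct exception_entry { unsigned long insn, fixup; };  /* illustrative layout */

static bool within_init(unsigned long addr,
                        unsigned long init_start, unsigned long init_end)
{
        return addr >= init_start && addr < init_end;
}

/* keep only entries whose faulting address lies outside the freed region */
static unsigned trim_init_extable_model(struct exception_entry *tbl,
                                        unsigned num,
                                        unsigned long init_start,
                                        unsigned long init_end)
{
        unsigned i, out = 0;

        for (i = 0; i < num; i++)
                if (!within_init(tbl[i].insn, init_start, init_end))
                        tbl[out++] = tbl[i];
        return out;                       /* new table length */
}

int main(void)
{
        struct exception_entry tbl[] = {
                { 0x1000, 0x1100 },       /* core text: kept    */
                { 0x9000, 0x9100 },       /* init text: dropped */
        };
        unsigned n = trim_init_extable_model(tbl, 2, 0x8000, 0xa000);

        printf("%u entries remain\n", n);
        return 0;
}
```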

By Arnd Bergmann:
The current asm-generic/page.h only contains the get_order function, and asm-generic/uaccess.h only implements unaligned accesses. This renames the files to getorder.h and uaccess-unaligned.h to make room for new page.h and uaccess.h files that will be usable by all simple (e.g. nommu) architectures.
Signed-off-by: Remis Lima Baima <remis.developer@googlemail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>

By Arnd Bergmann:
The existing asm-generic/atomic.h only defines the atomic_long type. This renames it to atomic-long.h so we have a place to add a truly generic atomic.h that can be used on all non-SMP systems.
Signed-off-by: Remis Lima Baima <remis.developer@googlemail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Ingo Molnar <mingo@elte.hu>

By Arnd Bergmann:
This provides a reliable way for asm-generic/types.h and other files to find out whether they are on a 32- or 64-bit platform. We cannot use CONFIG_64BIT for this in headers that are included from user space because CONFIG symbols are not available there. We also cannot do it inside asm/types.h because some headers need the word size but cannot include types.h. The solution is to introduce a new header <asm/bitsperlong.h> that defines both __BITS_PER_LONG for user space and BITS_PER_LONG for use in the kernel. The asm-generic version falls back to 32 bits unless the architecture overrides it, which I did for all 64-bit platforms.
Signed-off-by: Remis Lima Baima <remis.developer@googlemail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
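
A condensed sketch of the pattern described, reconstructed from the commit text rather than copied from the real asm-generic/bitsperlong.h: user space sees __BITS_PER_LONG with a 32-bit fallback, while kernel builds derive BITS_PER_LONG from CONFIG_64BIT and cross-check the two.

```c
#ifndef __BITS_PER_LONG
#define __BITS_PER_LONG 32              /* generic fallback; 64-bit arches override */
#endif

#ifdef __KERNEL__
# ifdef CONFIG_64BIT
#  define BITS_PER_LONG 64
# else
#  define BITS_PER_LONG 32
# endif
# if BITS_PER_LONG != __BITS_PER_LONG
#  error Inconsistent word size. Check asm/bitsperlong.h
# endif
#endif

int main(void)
{
        return __BITS_PER_LONG;         /* 32 here, since nothing overrode it */
}
```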

By Arnd Bergmann:
The existing asm-generic versions are incomplete and included by some architectures. New architectures should be able to use a generic version, so rename the existing files and change all users, which lets us add the new files.
Signed-off-by: Remis Lima Baima <remis.developer@googlemail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>

- 07 June 2009 (1 commit)

By Alexander Beregalov:
Commit 1f87f7d3 (cfg80211: add rfkill support) added ERFKILL to asm-generic/errno.h, but alpha, mips, parisc, and sparc use their own numbering schemes and do not include asm-generic/errno.h, so we need to add a definition of ERFKILL for them.
Signed-off-by: Alexander Beregalov <a.beregalov@gmail.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 08 May 2009 (1 commit)

By David S. Miller:
Signed-off-by: David S. Miller <davem@davemloft.net>

- 15 April 2009 (1 commit)

By Stephen Rothwell:
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 14 April 2009 (1 commit)

By Alan Cox:
These got overlooked the first time around.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 09 April 2009 (1 commit)

By Stephen Rothwell:
Impact: build fix

Today's linux-next build (sparc64 defconfig) failed like this:
arch/sparc/kernel/built-in.o: In function `trap_init':
(.init.text+0x4): undefined reference to `thread_info_offsets_are_bolixed_dave'

Caused by commit 52400ba9 ("futex: add requeue_pi functionality") from the tip-core tree, which changed the size of struct restart_block. Shift TI_KUNA_REGS and TI_KUNA_INSN up by 8 bytes to make space for the larger restart block.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: "David S. Miller" <davem@davemloft.net>
Cc: Darren Hart <dvhltc@us.ibm.com>
LKML-Reference: <20090409151722.c8eabb56.sfr@canb.auug.org.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 08 April 2009 (1 commit)

By David S. Miller:
Signed-off-by: David S. Miller <davem@davemloft.net>

- 03 April 2009 (1 commit)

By Robin Holt:
Pass the original flags to the rwlock arch code so that it can re-enable interrupts, where the architecture implements that. Initially, make __raw_read_lock_flags and __raw_write_lock_flags stubs which just do the same thing as the non-flags variants.
Signed-off-by: Petr Tesarik <ptesarik@suse.cz>
Signed-off-by: Robin Holt <holt@sgi.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <linux-arch@vger.kernel.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
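
The stub arrangement can be sketched with a toy lock (this is illustrative, not an arch implementation): the _flags variant accepts the caller's saved interrupt state so an architecture can re-enable interrupts while it spins, but the initial stub simply ignores the flags and calls the plain variant.

```c
#include <stdio.h>

typedef struct { volatile int taken; } raw_rwlock_model_t;   /* toy lock */

static void __raw_write_lock(raw_rwlock_model_t *lock)
{
        while (__sync_lock_test_and_set(&lock->taken, 1))
                ;                       /* toy spin loop */
}

static void __raw_write_unlock(raw_rwlock_model_t *lock)
{
        __sync_lock_release(&lock->taken);
}

/* initial stub: same behaviour as the non-flags variant */
#define __raw_write_lock_flags(lock, flags) \
        do { (void)(flags); __raw_write_lock(lock); } while (0)

int main(void)
{
        raw_rwlock_model_t lock = { 0 };
        unsigned long flags = 0;        /* would hold the saved IRQ state */

        __raw_write_lock_flags(&lock, flags);
        puts("lock taken via the _flags stub");
        __raw_write_unlock(&lock);
        return 0;
}
```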