- 05 Feb 2009, 3 commits
-
-
By Jeremy Fitzhardinge

Now that x86-64 has directly accessible percpu variables, it can also implement the direct versions of these operations, which operate on a vcpu_info structure directly embedded in the percpu area. In fact, the 32- and 64-bit versions are more or less identical, and so can be shared. The only two differences are:

1. xen_restore_fl_direct takes its argument in eax on 32-bit and rdi on 64-bit. Unfortunately it isn't possible to refer directly to the 2nd lsb of rdi (as you can with %ah), so the code isn't quite as dense.

2. check_events needs two variants to save different registers.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
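A C-level sketch of what the "direct" variants boil down to is shown below; the real versions are hand-written assembly in xen-asm.S so they can be patched inline, struct vcpu_info comes from the Xen interface headers, and xen_force_evtchn_callback() is assumed to be the existing helper that kicks event delivery:

	/* Sketch only: with the vcpu_info embedded in the per-cpu area,
	 * masking/unmasking events is a plain store through the per-cpu
	 * segment, with no indirection through a vcpu_info pointer. */
	DEFINE_PER_CPU(struct vcpu_info, xen_vcpu_info);

	static void xen_irq_disable_sketch(void)
	{
		__get_cpu_var(xen_vcpu_info).evtchn_upcall_mask = 1;
	}

	static void xen_irq_enable_sketch(void)
	{
		struct vcpu_info *vcpu = &__get_cpu_var(xen_vcpu_info);

		vcpu->evtchn_upcall_mask = 0;
		barrier();	/* unmask before checking for pending events */
		if (vcpu->evtchn_upcall_pending)
			xen_force_evtchn_callback();	/* assumed helper */
	}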
-
By Jeremy Fitzhardinge

We need to access percpu data fairly early, so set up the percpu registers as soon as possible. We only need to load the appropriate segment register. We already have a GDT, but it's hard to change it early because we need to manipulate the pagetable to do so, and that hasn't been set up yet.

Also, set the kernel stack when bringing up secondary CPUs. If we don't, they all end up sharing the same stack.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
By Jeremy Fitzhardinge

Moving the mmu code from enlighten.c to mmu.c inadvertently broke the 32-bit build. Fix it.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 04 Feb 2009, 1 commit
-
-
By Jeremy Fitzhardinge

Impact: Bug fix

A hunk went missing in the original patch, and callee-save callsites were not marked as returning the upper 32 bits of the result, causing Badness.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 03 Feb 2009, 3 commits
-
-
By Yinghai Lu

Impact: fix regression with kexec with vmlinux

Split data.init into data.init, percpu, and data.init2 sections instead of letting data.init wrap the percpu section. Thus kexec loading will be happy, because the sections will not overlap.

Before the patch we have:

Elf file type is EXEC (Executable file)
Entry point 0x200000
There are 6 program headers, starting at offset 64

Program Headers:
  Type  Offset             VirtAddr           PhysAddr           FileSiz            MemSiz             Flags  Align
  LOAD  0x0000000000200000 0xffffffff80200000 0x0000000000200000 0x0000000000ca6000 0x0000000000ca6000 R E    200000
  LOAD  0x0000000000ea6000 0xffffffff80ea6000 0x0000000000ea6000 0x000000000014dfe0 0x000000000014dfe0 RWE    200000
  LOAD  0x0000000001000000 0xffffffffff600000 0x0000000000ff4000 0x0000000000000888 0x0000000000000888 RWE    200000
  LOAD  0x00000000011f6000 0xffffffff80ff6000 0x0000000000ff6000 0x0000000000073086 0x0000000000a2d938 RWE    200000
  LOAD  0x0000000001400000 0x0000000000000000 0x000000000106a000 0x00000000001d2ce0 0x00000000001d2ce0 RWE    200000
  NOTE  0x00000000009e2c1c 0xffffffff809e2c1c 0x00000000009e2c1c 0x0000000000000024 0x0000000000000024        4

Section to Segment mapping:
  Segment Sections...
   00     .text .notes __ex_table .rodata __bug_table .pci_fixup .builtin_fw __ksymtab __ksymtab_gpl __ksymtab_strings __init_rodata __param
   01     .data .init.rodata .data.cacheline_aligned .data.read_mostly
   02     .vsyscall_0 .vsyscall_fn .vsyscall_gtod_data .vsyscall_1 .vsyscall_2 .vgetcpu_mode .jiffies
   03     .data.init_task .smp_locks .init.text .init.data .init.setup .initcall.init .con_initcall.init .x86_cpu_dev.init .altinstructions .altinstr_replacement .exit.text .init.ramfs .bss
   04     .data.percpu
   05     .notes

After the patch we've got:

Elf file type is EXEC (Executable file)
Entry point 0x200000
There are 7 program headers, starting at offset 64

Program Headers:
  Type  Offset             VirtAddr           PhysAddr           FileSiz            MemSiz             Flags  Align
  LOAD  0x0000000000200000 0xffffffff80200000 0x0000000000200000 0x0000000000ca6000 0x0000000000ca6000 R E    200000
  LOAD  0x0000000000ea6000 0xffffffff80ea6000 0x0000000000ea6000 0x000000000014dfe0 0x000000000014dfe0 RWE    200000
  LOAD  0x0000000001000000 0xffffffffff600000 0x0000000000ff4000 0x0000000000000888 0x0000000000000888 RWE    200000
  LOAD  0x00000000011f6000 0xffffffff80ff6000 0x0000000000ff6000 0x0000000000073086 0x0000000000073086 RWE    200000
  LOAD  0x0000000001400000 0x0000000000000000 0x000000000106a000 0x00000000001d2ce0 0x00000000001d2ce0 RWE    200000
  LOAD  0x000000000163d000 0xffffffff8123d000 0x000000000123d000 0x0000000000000000 0x00000000007e6938 RWE    200000
  NOTE  0x00000000009e2c1c 0xffffffff809e2c1c 0x00000000009e2c1c 0x0000000000000024 0x0000000000000024        4

Section to Segment mapping:
  Segment Sections...
   00     .text .notes __ex_table .rodata __bug_table .pci_fixup .builtin_fw __ksymtab __ksymtab_gpl __ksymtab_strings __init_rodata __param
   01     .data .init.rodata .data.cacheline_aligned .data.read_mostly
   02     .vsyscall_0 .vsyscall_fn .vsyscall_gtod_data .vsyscall_1 .vsyscall_2 .vgetcpu_mode .jiffies
   03     .data.init_task .smp_locks .init.text .init.data .init.setup .initcall.init .con_initcall.init .x86_cpu_dev.init .altinstructions .altinstr_replacement .exit.text .init.ramfs
   04     .data.percpu
   05     .bss
   06     .notes

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Jeremy Fitzhardinge

Zach says:
> Enable/Disable have no clobbers at all.
> Save clobbers only return value, %eax
> Restore also clobbers nothing.

This is precisely compatible with the calling convention, so we can just call them directly without wrapping. (Compile tested only.)

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
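As a rough illustration (the helpers here are simplified stand-ins, not the exact lguest code, and they assume the struct paravirt_callee_save plumbing described in the 31 Jan entries below), functions that already satisfy the callee-save convention can be installed without a register-saving thunk:

	/* Sketch only: these guest helpers clobber nothing beyond the return
	 * register, so they already follow the callee-save convention and can
	 * be hooked up directly with __PV_IS_CALLEE_SAVE(). */
	static unsigned long sketch_save_fl(void)
	{
		return lguest_data.irq_enabled;		/* assumed guest flag */
	}

	static void sketch_irq_disable(void)
	{
		lguest_data.irq_enabled = 0;
	}

	static void __init sketch_install_irq_ops(void)
	{
		pv_irq_ops.save_fl     = __PV_IS_CALLEE_SAVE(sketch_save_fl);
		pv_irq_ops.irq_disable = __PV_IS_CALLEE_SAVE(sketch_irq_disable);
	}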
-
By Jeremy Fitzhardinge

Impact: bugfix

In the 32-bit calling convention, %eax:%edx is used to return 64-bit values. Don't save and restore %edx around wrapped functions, or they can't return a full 64-bit result.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 31 Jan 2009, 12 commits
-
-
By Jeremy Fitzhardinge

Impact: fix xen booting

We need to access percpu data fairly early, so set up the percpu registers as soon as possible. We only need to load the appropriate segment register. We already have a GDT, but it's hard to change it early because we need to manipulate the pagetable to do so, and that hasn't been set up yet.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
By Jeremy Fitzhardinge

Impact: split out a function, no functional change

Xen needs to be able to access percpu data from very early on. For various reasons, it cannot also load the gdt at that time. It does, however, have a perfectly functional gdt at that point, so there's no pressing need to reload the gdt. Split off the function that loads the segment registers, so Xen can call it directly.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
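A minimal sketch of such a split-out helper, assuming the usual x86 segment/MSR helpers (the real kernel function differs in naming and in how the 64-bit GS base value is obtained):

	/* Sketch only: reload just the per-cpu segment/base, leaving the GDT
	 * that is already in place untouched. */
	static void load_percpu_segment_sketch(int cpu)
	{
	#ifdef CONFIG_X86_32
		/* 32-bit addresses per-cpu data through %fs */
		loadsegment(fs, __KERNEL_PERCPU);
	#else
		/* 64-bit uses a null %gs selector plus the GS base MSR */
		loadsegment(gs, 0);
		wrmsrl(MSR_GS_BASE, per_cpu_offset(cpu));
	#endif
	}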
-
By Brian Gerst

Impact: cleanup, prepare for xen boot fix

Xen needs to call this function very early to set up the GDT and per-cpu segments. Remove the call to smp_processor_id() and just pass in the cpu number.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
By Cliff Wickman

Impact: fix possible tlb mis-flushing on UV

uv_flush_send_and_wait() should return a pointer if the broadcast remote tlb shootdown requests fail. That causes the conventional IPI method of shootdown to be used.

Signed-off-by: Cliff Wickman <cpw@sgi.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
By Jeremy Fitzhardinge

Impact: Fix build when CONFIG_PARAVIRT_DEBUG is enabled

Fix a missed conversion to using callee-saved calls for pud_val, which causes a compile error when CONFIG_PARAVIRT_DEBUG is enabled.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
By Jeremy Fitzhardinge

Impact: Optimization

In the native case, pte_val, make_pte, etc. are all just identity functions, so there's no need to clobber a lot of registers over them. (This changes the 32-bit callee-save calling convention to return both EAX and EDX, so functions can return 64-bit values.)

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
By Jeremy Fitzhardinge

Impact: Optimization

Functions with the callee-save calling convention clobber many fewer registers than the normal C calling convention. Implement variants of PVOP_V?CALL* accordingly. This only bothers with functions up to 3 args, since functions with more args may as well use the normal calling convention.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
By Jeremy Fitzhardinge

Impact: Optimization

One of the problems with inserting a pile of C calls where previously there were none is that the register pressure is greatly increased. The C calling convention says that the caller must expect a certain set of registers may be trashed by the callee, and that the callee can use those registers without restriction. This includes the function argument registers, and several others.

This patch seeks to alleviate this pressure by introducing wrapper thunks that will do the register saving/restoring, so that the callsite doesn't need to worry about it, but the callee function can be conventional compiler-generated code. In many cases (particularly performance-sensitive cases) the callee will be in assembler anyway, and need not use the compiler's calling convention.

The standard calling convention is:

           arguments        return  scratch
   x86-32  eax edx ecx      eax     ?
   x86-64  rdi rsi rdx rcx  rax     r8 r9 r10 r11

The thunk preserves all argument and scratch registers. The return register is not preserved, and is available as a scratch register for unwrapped callee code (and of course the return value).

Wrapped function pointers are themselves wrapped in a struct paravirt_callee_save structure, in order to get some warning from the compiler when functions with mismatched calling conventions are used.

The most common paravirt ops, both statically and dynamically, are interrupt enable/disable/save/restore, so handle them first. This is particularly easy since their calls are handled specially anyway.

XXX Deal with VMI. What's their calling convention?

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
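The wrapper struct and helper macros amount to roughly the following condensed sketch (the real definitions live in the x86 paravirt headers and also generate the asm thunks themselves):

	/* One-member wrapper: mixing up a callee-save pointer with a plain C
	 * function pointer now at least produces a type mismatch warning. */
	struct paravirt_callee_save {
		void *func;
	};

	/* C implementation: route the call through an asm thunk named
	 * __raw_callee_save_<func> that saves/restores the argument and
	 * scratch registers around the real function. */
	#define PV_CALLEE_SAVE(func)					\
		((struct paravirt_callee_save) { __raw_callee_save_##func })

	/* Implementation already written to the callee-save convention
	 * (hand-coded asm, or trivial code that clobbers nothing): install
	 * the function directly, no thunk needed. */
	#define __PV_IS_CALLEE_SAVE(func)				\
		((struct paravirt_callee_save) { func })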
-
By Jeremy Fitzhardinge

Impact: Optimization

Each asm paravirt-ops call says what registers are available for clobbering. This patch makes use of this to selectively save/restore registers around each pvops call. In many cases this significantly shrinks code size.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
By Jeremy Fitzhardinge

Impact: Fix latent bug

The clobber is trying to say that anything except RDI is available for clobbering, but actually clobbers everything. This hasn't mattered because the clobbers were basically ignored, but subsequent patches will rely on them.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
By Jeremy Fitzhardinge

Impact: Optimization

Several paravirt ops implementations simply return their arguments, the most obvious being the make_pte/pte_val class of operations on native. On 32-bit, the identity function is literally a no-op, as the calling convention uses the same registers for the first argument and the return value. On 64-bit, it can be implemented with a single "mov".

This patch adds special identity functions for 32- and 64-bit arguments, and machinery to recognize them and replace them with either nops or a mov as appropriate. At the moment, the only users of the identity functions are the pagetable entry conversion functions. The result is a measurable improvement on pagetable-heavy benchmarks (2-3%, reducing the pvops overhead from 5% to 2%).

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
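The identity helpers themselves are tiny; a trimmed sketch of what is described above, leaving out the patching machinery that later rewrites call sites into a nop or a single mov:

	/* Native pte_val()/__pte() and friends reduce to these. */
	u32 _paravirt_ident_32(u32 x)
	{
		return x;
	}

	u64 _paravirt_ident_64(u64 x)
	{
		return x;
	}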
-
By Jeremy Fitzhardinge

Impact: Cleanup

Move remaining mmu-related stuff into mmu.c. A general cleanup, and lay the groundwork for later patches.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- 30 Jan 2009, 5 commits
-
-
By Randy Dunlap

Move DMA-mapping.txt to Documentation/PCI/.

DMA-mapping.txt was supposed to be moved from Documentation/ to Documentation/PCI/. The 00-INDEX files in those two directories were updated, along with a few other text files, but the file itself somehow escaped being moved, so move it and update more text files and source files with its new location.

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Ivan Kokshaysky

The commit "alpha: teach the compiler that BUG doesn't return" (ed6b9b97) moved the asm code into an inline function which takes __FILE__ and __LINE__ as arguments. This violates the asm constraints there ("i" - an immediate operand with constant value), so compilation may result in a warning or error, depending on compiler version. Just adding an infinite loop to BUG() is sufficient.

Signed-off-by: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
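An illustrative sketch of the macro form (not the exact alpha header): as a macro, __FILE__ and __LINE__ stay compile-time constants at the expansion site, which satisfies the "i" constraints, and the trailing loop tells gcc that BUG() never returns.

	#define BUG()							\
	do {								\
		__asm__ __volatile__(					\
			"call_pal %0  # bugchk\n\t"			\
			".long %1\n\t.8byte %2"				\
			: : "i"(PAL_bugchk), "i"(__LINE__),		\
			    "i"(__FILE__));				\
		/* tell gcc we never get here */			\
		for ( ; ; )						\
			;						\
	} while (0)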
-
By Ivan Kokshaysky

- jensen build: fix conflicting declarations for pci_alloc_consistent() and undefined virt_to_phys();
- SMP: arch/alpha/kernel/smp.c:124: warning: passing argument 2 of '__cpu_test_and_set' discards qualifiers from pointer target type
  Interestingly, this only happens with gcc-4.2; gcc <= 4.1 and gcc-4.3 are OK. Fixed with an extra assignment.

Signed-off-by: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Richard Henderson <rth@twiddle.net>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Ivan Kokshaysky

Convert OSF syscalls and add an alpha-specific SYSCALL_ALIAS() macro.

Signed-off-by: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
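SYSCALL_ALIAS() is typically just an assembler-level alias; a sketch of the usual shape (not necessarily the exact alpha definition):

	/* Make "alias" another global name for the symbol "name", so an OSF
	 * emulation entry point and the native sys_* implementation can
	 * share one body without a C-level wrapper. */
	#define SYSCALL_ALIAS(alias, name)				\
		asm ("\t.globl " #alias "\n\t.set " #alias ", " #name)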
-
By Atsushi SAKAI

3 points:
  lguest_asm.S => i386_head.S
  LHCALL_BREAK => LHREQ_BREAK
  perferred => preferred

Signed-off-by: Atsushi SAKAI <sakaia@jp.fujitsu.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 28 Jan 2009, 5 commits
-
-
By Gerhard Pircher

_PAGE_COHERENT is now always set in _PAGE_RAM and PAGE_KERNEL. Thus it has to be masked out if the BAT mapping should be non-cacheable or if CPU_FTR_NEED_COHERENT is not set. This will work on normal SMP setups because we force-set CPU_FTR_NEED_COHERENT as part of CPU_FTR_COMMON on SMP.

Signed-off-by: Gerhard Pircher <gerhard_pircher@gmx.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Robert Jennings

In the VIO bus code the wrappers for the dma alloc_coherent and free_coherent calls are rounding to IOMMU_PAGE_SIZE. Taking a look at the underlying calls, the actual mapping is promoted to PAGE_SIZE. Changing the rounding in these two functions fixes the under-reporting of the entitlement used by the system. Without this change, the system could run out of entitlement before it believes it has, and incur mapping failures at the firmware level.

Also in the VIO bus code, the wrapper for dma map_sg is not exiting in an error path where it should. Rather than fall through to code for the success case, this patch adds the return that is needed in the error path.

Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
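A simplified sketch of the accounting pattern (the vio_cmo_* names follow the VIO CMO code, while vio_real_alloc_coherent() is a hypothetical stand-in for the underlying IOMMU allocator, not a real function):

	/* Charge the CMO entitlement in PAGE_SIZE units -- the granularity
	 * the underlying mapping actually uses -- rather than in
	 * IOMMU_PAGE_SIZE units, so usage is not under-reported. */
	static void *vio_alloc_coherent_sketch(struct vio_dev *viodev, size_t size,
					       dma_addr_t *handle, gfp_t flag)
	{
		void *ret;

		if (vio_cmo_alloc(viodev, roundup(size, PAGE_SIZE)))
			return NULL;		/* out of entitlement */

		ret = vio_real_alloc_coherent(viodev, size, handle, flag);
		if (!ret)			/* give the entitlement back */
			vio_cmo_dealloc(viodev, roundup(size, PAGE_SIZE));

		return ret;
	}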
-
By Josh Boyer

Remove some leftover cruft from the arch/ppc days.

Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Stephen Rothwell

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By David Brownell

The DaVinci code had an implementation of the OTG transceiver glue too; make it use the new-standard one.

Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Acked-by: Felipe Balbi <felipe.balbi@nokia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 27 Jan 2009, 11 commits
-
-
By Matt Waddel

The 5329 ColdFire peripheral IO register addresses are not relative to the MBAR register. So fix the serial platform setup array and IRQ acking to use just the direct addresses.

Signed-off-by: Matt Waddel <Matt.Waddel@freescale.com>
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
-
By Greg Ungerer

Make restart blocks work; this is required for proper syscall restarting. Derived from the same changes for the m68k arch by Andreas Schwab <schwab@suse.de>.

Signed-off-by: Greg Ungerer <gerg@uclinux.org>
-
By Greg Ungerer

Signed-off-by: Greg Ungerer <gerg@uclinux.org>
-
By Adrian Bunk

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
-
By Adrian Bunk

Greg Ungerer said about this board: only ever a handful were made, and that was in 1999.

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
-
By Adrian Bunk

m68knommu does not set the Kconfig NO_DMA variable, but also does not provide the required functions, resulting in the following build error triggered by commit a40c24a1 (net: Add SKB DMA mapping helper functions.):

<-- snip -->

..
  LD      vmlinux
net/built-in.o: In function `skb_dma_unmap':
(.text+0xac5e): undefined reference to `dma_unmap_single'
net/built-in.o: In function `skb_dma_unmap':
(.text+0xac7a): undefined reference to `dma_unmap_page'
net/built-in.o: In function `skb_dma_map':
(.text+0xacdc): undefined reference to `dma_map_single'
net/built-in.o: In function `skb_dma_map':
(.text+0xace8): undefined reference to `dma_mapping_error'
net/built-in.o: In function `skb_dma_map':
(.text+0xad10): undefined reference to `dma_map_page'
net/built-in.o: In function `skb_dma_map':
(.text+0xad82): undefined reference to `dma_unmap_page'
net/built-in.o: In function `skb_dma_map':
(.text+0xadc6): undefined reference to `dma_unmap_single'
make[1]: *** [vmlinux] Error 1

<-- snip -->

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
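A hedged sketch of the kind of helpers that would satisfy those references on a nommu ColdFire (illustrative only, not necessarily the exact fix that was merged); with no IOMMU, a DMA "mapping" is essentially the buffer's physical address, and any cache maintenance is omitted here:

	#include <linux/dma-mapping.h>	/* prototypes and dma types */

	dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
				  enum dma_data_direction dir)
	{
		return virt_to_phys(ptr);	/* physical address is the handle */
	}

	void dma_unmap_single(struct device *dev, dma_addr_t addr, size_t size,
			      enum dma_data_direction dir)
	{
		/* nothing to tear down */
	}

	int dma_mapping_error(struct device *dev, dma_addr_t handle)
	{
		return 0;			/* mappings here cannot fail */
	}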
-
By Greg Ungerer

Fix cache flushing for the 527x ColdFire processors. Their CACR register format is slightly different. Along with this, add support for flushing the 523x cache, which uses the same format as the 527x ColdFire's and was missing flush support.

Signed-off-by: Greg Ungerer <gerg@uclinux.org>
-
By Sebastian Siewior

Part of the code that did not make sense to me got removed by Greg. This is part two: the first compare checks whether interrupts are disabled or not. Depending on the result, we execute the RESTORE_ALL macro, which not only restores the stack but also returns to the caller. The test for pending softirqs has been removed because it is already done in irq_exit(). Since system_call() is also using the SAVE_ALL macro and returning via the ret_from_exception label, I see no reason why we could not do this here as well. This is also handy because if we return from the timer interrupt and we need to resched, then we check for this :)

Signed-off-by: Sebastian Siewior <bigeasy@linutronix.de>
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
-
By Tejun Heo

Impact: cosmetic cleanup

Signed-off-by: Tejun Heo <tj@kernel.org>
-
By James Bottomley

Impact: build fix

x86_cpu_to_apicid and x86_bios_cpu_apicid aren't defined for Voyager. An earlier patch forgot to conditionalize the early percpu clearing. Fix it.

Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
By Brian Gerst

Impact: sync 32 and 64-bit code

Merge load_gs_base() into switch_to_new_gdt(). Load the GDT and per-cpu state for the boot cpu when its new area is set up.

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-