- 13 Aug 2008, 1 commit
-
-
Committed by Tony Luck
ia64 handles per-cpu variables a little differently from other architectures in that it maps the physical memory allocated for each cpu at a constant virtual address (0xffffffffffff0000). This mapping is not enabled until the architecture-specific cpu_init() function is run, which causes problems since some generic code runs before this point. In particular, when CONFIG_PRINTK_TIME is enabled, the boot cpu will trap on the access to per-cpu memory at the first printk() call, so the boot will fail without the kernel printing anything to the console. Fix this by allocating percpu memory for cpu0 in the kernel data section and doing all initialization to enable percpu access in head.S before calling any generic code. Other cpus must take care not to access per-cpu variables too early, but their code path from start_secondary() to cpu_init() is all in arch/ia64.

Signed-off-by: Tony Luck <tony.luck@intel.com>
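
As a rough illustration of why this bites so early (the variable and function below are hypothetical; __get_cpu_var() is the accessor of that era):

    #include <linux/percpu.h>

    DEFINE_PER_CPU(unsigned long, boot_ticks);

    void tick(void)
    {
            /*
             * On ia64 this access resolves through the fixed per-cpu
             * mapping at 0xffffffffffff0000. Before that mapping is
             * established, it faults -- which is exactly what an early
             * printk() with CONFIG_PRINTK_TIME triggered.
             */
            __get_cpu_var(boot_ticks)++;
    }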
-
- 28 May 2008, 2 commits
-
-
Committed by Isaku Yamahata
Move the LOAD_OFFSET definition from vmlinux.lds.S into system.h. In paravirtualized environments it is necessary to detect the execution environment. One solution is a multi entry point: it allows a boot loader to start kernel execution from an entry point different from the ELF entry point. The non-standard entry point will be defined by a specialized ELF note which contains the LMA of the entry point symbol. The constant LOAD_OFFSET is necessary to calculate that symbol's LMA, so move the definition into the public header file to make it available to the multi entry point support.

Cc: "He, Qing" <qing.he@intel.com>
Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Tony Luck <tony.luck@intel.com>
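
A minimal sketch of the relationship the commit relies on (the macro below is illustrative, not the actual ia64 definition):

    /*
     * Sections in vmlinux.lds.S are linked at a virtual address but
     * loaded at VMA - LOAD_OFFSET, so the LMA that the ELF note must
     * carry can be computed from the entry symbol's link-time address:
     */
    #define ENTRY_POINT_LMA(vaddr)  ((vaddr) - LOAD_OFFSET)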
-
Committed by Tony Luck
Problem: An application violating the architectural rules regarding operation dependencies, and having specific Register Stack Engine (RSE) state at the time of the violation, may result in an illegal operation fault and invalid RSE state. Such faults may initiate a cascade of repeated illegal operation faults within OS interruption handlers. The specific behavior is OS dependent.

Implication: An application causing an illegal operation fault with specific RSE state may result in a series of illegal operation faults and an eventual OS stack overflow condition.

Workaround: OS interruption handlers that switch to kernel backing store implement a check for invalid RSE state to avoid the series of illegal operation faults.

The core of the workaround is the RSE_WORKAROUND code sequence inserted into each invocation of the SAVE_MIN_WITH_COVER and SAVE_MIN_WITH_COVER_R19 macros. This sequence includes hard-coded constants that depend on the number of stacked physical registers being 96. The rest of this patch consists of code to disable the workaround should this not be the case (with the presumption that if a future Itanium processor increases the number of registers, it would also remove the need for this patch).

Move the start of the RBS up to a mod32 boundary to avoid some corner cases. The dispatch_illegal_op_fault code outgrew the spot it was squatting in when built with this patch and CONFIG_VIRT_CPU_ACCOUNTING=y; move it out to the end of the ivt.

Signed-off-by: Tony Luck <tony.luck@intel.com>
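
A hedged sketch of the disabling logic described above (both names below are hypothetical):

    /*
     * The RSE_WORKAROUND sequence hard-codes constants that are only
     * valid when the CPU has 96 stacked physical registers; on any
     * other count, patch the sequence out at boot.
     */
    static void __init maybe_disable_rse_workaround(unsigned long num_phys_stacked)
    {
            if (num_phys_stacked != 96)
                    patch_out_rse_workaround();     /* hypothetical */
    }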
-
- 29 Jan 2008, 1 commit
-
-
Committed by Sam Ravnborg
This patch consolidates all definitions of the .init.text, .init.data and .exit.text, .exit.data sections in the generic vmlinux.lds.h. This is a preparatory patch; alone it does not buy us much.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
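
A simplified sketch of the consolidated definitions that end up in include/asm-generic/vmlinux.lds.h (the real macros accumulate more input-section patterns over time):

    #define INIT_TEXT  *(.init.text)
    #define INIT_DATA  *(.init.data)
    #define EXIT_TEXT  *(.exit.text)
    #define EXIT_DATA  *(.exit.data)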
-
- 08 Dec 2007, 1 commit
-
-
Committed by Bernhard Walle
Rename _bss to __bss_start as on other architectures. That makes it possible to use <linux/sections.h> instead of our own declarations. Also add __bss_stop, because that symbol exists on other architectures.

Signed-off-by: Bernhard Walle <bwalle@suse.de>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
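
For example, with the conventional names, generic code can size or clear the BSS without arch-private declarations (a minimal sketch):

    #include <linux/string.h>

    extern char __bss_start[], __bss_stop[];

    static void clear_bss(void)
    {
            memset(__bss_start, 0, __bss_stop - __bss_start);
    }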
-
- 14 Aug 2007, 2 commits
-
-
Committed by David Mosberger-Tang
Explicitly put the unwind section into its own program header. This used to be unnecessary (probably because binutils did it for us), but with current binutils (e.g., v2.17.50.20070804) we won't get the PT_IA_64_UNWIND header without this patch, which breaks unwinding in a debugger and in simulators such as Ski.

Signed-off-by: David Mosberger-Tang <dmosberger@gmail.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
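
The shape of such a fix in vmlinux.lds.S is roughly the following (a sketch; the header and section names follow ia64 convention and are not copied from the patch):

    PHDRS {
            code   PT_LOAD;
            unwind PT_IA_64_UNWIND;
    }
    ...
    .IA_64.unwind : { *(.IA_64.unwind*) } :code :unwind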
-
Committed by David Mosberger-Tang
Add NOTES to the linker script so that the kernel can be built with recent versions of binutils. Without this patch, the final link fails with:

    ld: .tmp_vmlinux1: section `.text' can't be allocated in segment 0
    ld: final link failed: Bad value

The error occurs because the --build-id option is used with newer linkers to include a .notes section in the kernel; without the NOTES macro that section won't be included, which leads to the message above.

Signed-off-by: David Mosberger-Tang <dmosberger@gmail.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
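
A simplified sketch of the generic macro (the real NOTES definition in include/asm-generic/vmlinux.lds.h also brackets the section with start/stop symbols):

    #define NOTES                           \
            .notes : { *(.note.*) }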
-
- 26 Jul 2007, 1 commit
-
-
Committed by Tony Luck
In commit 741f98fe, Sam added full section-mismatch checking across the entire vmlinux image. This flushed out a dozen new section mismatch warnings. Start the whack-a-mole game again to stomp them out.

Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 20 Jul 2007, 1 commit
-
-
Committed by Fenghua Yu
The per-cpu data section contains two types of data: one set that is exclusively accessed by the local cpu, and another set that is per-cpu but also shared with remote cpus. In the current kernel these two sets are not clearly separated. This can cause the same cacheline to be shared between the two sets of data, resulting in unnecessary bouncing of the cacheline between cpus. One way to fix the problem is to cacheline-align the remotely accessed per-cpu data, both at the beginning and at the end; because of the padding at both ends, this would likely waste memory, and the interface to achieve it is not clean.

This patch moves the remotely accessed per-cpu data (currently marked ____cacheline_aligned_in_smp) into a different section in which all data elements are cacheline aligned, cleanly separating local-only data from remotely accessed data.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: <linux-arch@vger.kernel.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
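
A quick illustration of the resulting usage split (the struct and variable names here are hypothetical):

    struct runqueue_stats {         /* hypothetical */
            unsigned long nr_running;
    };

    /* Local-only: plain definition, packed with its neighbours. */
    DEFINE_PER_CPU(unsigned long, local_counter);

    /*
     * Also touched by remote cpus: placed in the cacheline-aligned
     * per-cpu subsection so it never shares a line with local-only
     * data.
     */
    DEFINE_PER_CPU_SHARED_ALIGNED(struct runqueue_stats, rq_stats);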
-
- 19 May 2007, 2 commits
-
-
Committed by Sam Ravnborg
With this consolidation we can now modify the .data section definition in one spot for all archs.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
-
Committed by Sam Ravnborg
Move the definition of the .text section to asm-generic.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
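
A simplified sketch of the two generic macros these consolidation commits introduce (the real bodies grow extra patterns later):

    #define TEXT_TEXT  *(.text)
    #define DATA_DATA  *(.data)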
-
- 12 Feb 2007, 1 commit
-
-
Committed by Jean-Paul Saman
Update all arch/*/kernel/vmlinux.lds.S to not include space for the initramfs when CONFIG_BLK_DEV_INITRAMFS is not selected. This saves another 4 kbytes on most platforms (some reserve PAGE_SIZE for the initramfs).

Signed-off-by: Jean-Paul Saman <jean-paul.saman@nxp.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
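
The conditional shape in the linker script is roughly as follows (a sketch; the config name is taken from the log above, and the symbol names are assumed):

    #ifdef CONFIG_BLK_DEV_INITRAMFS
            . = ALIGN(PAGE_SIZE);
            __initramfs_start = .;
            *(.init.ramfs)
            __initramfs_end = .;
    #endif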
-
- 07 Feb 2007, 1 commit
-
-
Committed by Chen, Kenneth W
It's not efficient to use a per-cpu variable just to store how many physical stacked registers a cpu has. Ever since the first incarnation of ia64 up through the upcoming Montecito processor, that value has been glued to 96. Having a variable in memory means the kernel burns an extra cacheline access on every syscall and kernel exit path. Such a "static" value is better served by the instruction patching utility that exists today. Convert ia64_phys_stacked_size_p8 into dynamic insn patching.

This also has the pleasant side effect of eliminating access to the per-cpu area while psr.ic=0 in the kernel exit path. (This is fixable for the per-cpu DTC work, but why bother?)

There might be concerns about the default value encoded in the instruction in the kernel image, but they are unfounded: (1) cpu_init() is called at CPU initialization; there we find out the physical stacked register size from PAL and patch two instructions in the kernel exit code, and the code in question cannot be executed before the patching is done. (2) The current implementation stores zero in ia64_phys_stacked_size_p8, and that is what the current kernel exit path loads. With the new code, it is as if we stored a register size of 96 in ia64_phys_stacked_size_p8, creating a better safety net. Given that (1) can never fail, (2) is just a bonus.

All in all, this patch allows one less memory reference in the kernel exit path, reducing syscall and interrupt return latency, and avoids polluting potentially useful data in the CPU cache.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
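
A hedged sketch of the cpu_init()-time flow (ia64_pal_rse_info() is the existing PAL wrapper; the patch helper named below is hypothetical):

    static void __init setup_phys_stacked_patch(void)
    {
            u64 num_phys_stacked;

            /* Ask PAL how many stacked physical registers exist... */
            if (ia64_pal_rse_info(&num_phys_stacked, NULL) != 0)
                    num_phys_stacked = 96;  /* safe default */
            /*
             * ...then rewrite the two kernel-exit instructions that
             * used to load ia64_phys_stacked_size_p8 from memory.
             */
            patch_phys_stacked_register_size(num_phys_stacked);
    }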
-
- 06 Feb 2007, 1 commit
-
-
Committed by Kirill Korotaev
Occasionally the FSYS_RETURN patch list can have an odd length, causing other data structures to get out of alignment. In OpenVZ it is odd, and we get a misaligned kernel image which does not boot.

Signed-off-by: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Signed-off-by: Kirill Korotaev <dev@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 28 Oct 2006, 1 commit
-
-
Committed by Andrew Morton
Add a vmlinux.lds.h helper macro for defining the eight-level initcall table, and teach all the architectures to use it. This is a prerequisite for a patch which performs initcall synchronisation for multithreaded probing.

Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
[ Added AVR32 as well ]
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
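
A simplified sketch of the helper (the real macro lives in include/asm-generic/vmlinux.lds.h; eight levels per the log):

    #define INITCALLS                       \
            *(.initcall0.init)              \
            *(.initcall1.init)              \
            *(.initcall2.init)              \
            *(.initcall3.init)              \
            *(.initcall4.init)              \
            *(.initcall5.init)              \
            *(.initcall6.init)              \
            *(.initcall7.init)

An architecture's vmlinux.lds.S then places INITCALLS between its initcall start and end symbols instead of listing each level by hand.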
-
- 27 Sep 2006, 1 commit
-
-
Committed by Al Stone
Minor reformatting of vmlinux.lds.S to make it usable at 80 columns, in accordance with Linux coding style.

Signed-off-by: Al Stone <ahs3@fc.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 01 Jul 2006, 1 commit
-
-
Committed by Jörn Engel
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
-
- 30 Mar 2006, 1 commit
-
-
Committed by Russ Anderson
Move __mca_table out of the __init section.

Signed-off-by: Tony Luck <tony.luck@intel.com>
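
In linker-script terms the destination looks roughly like this (a sketch; the bracketing symbol names follow kernel convention):

    __mca_table : {
            __start___mca_table = .;
            *(__mca_table)
            __stop___mca_table = .;
    }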
-
- 25 Mar 2006, 1 commit
-
-
Committed by Russ Anderson
Memory errors encountered by user applications may surface when the CPU is running in kernel context. The current code will not attempt recovery if the MCA surfaces in kernel context (privilege mode 0). This patch adds a check for cases where the user initiated the load that surfaces in kernel interrupt code.

An example is a user process launching a load from memory where the data has bad ECC. Before the bad data reaches a CPU register, an interrupt comes in. The code jumps to the IVT interrupt entry point and begins execution in kernel context. The process of saving the user registers (SAVE_REST) causes the bad data to be loaded into a CPU register, triggering the MCA. The MCA surfaces in kernel context, even though the load was initiated from user context.

As suggested by David and Tony, this patch uses an exception-table-like approach, putting the tagged recovery addresses in a searchable table. One difference from the exception table is that MCAs do not surface in precise places (such as with a TLB miss), so instead of tagging specific instructions, address ranges are registered. A single macro is used to do the tagging: its input parameter is the label of the starting address, and the location of the macro itself marks the ending address. This limits clutter in the code.

This patch only tags one spot, the interrupt ivt entry. Testing showed that spot to be a "heavy hitter", with MCAs surfacing while saving user registers. Other spots can be added as needed, each with a single macro.

Signed-off-by: Russ Anderson (rja@sgi.com)
Signed-off-by: Tony Luck <tony.luck@intel.com>
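
A hedged sketch of how such a table can be consulted at MCA time (the entry layout and function name are assumptions):

    struct mca_table_entry {
            unsigned long start, end;       /* tagged address range */
    };

    extern const struct mca_table_entry __start___mca_table[];
    extern const struct mca_table_entry __stop___mca_table[];

    /*
     * Return nonzero if the interrupted IP lies in a tagged range,
     * i.e. the MCA is a candidate for recovery.
     */
    static int search_mca_table(unsigned long ip)
    {
            const struct mca_table_entry *e;

            for (e = __start___mca_table; e < __stop___mca_table; e++)
                    if (ip >= e->start && ip < e->end)
                            return 1;
            return 0;
    }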
-
- 23 Mar 2006, 1 commit
-
-
Committed by Chen, Kenneth W
ia64_mv is initialized based on the platform detected or specified. However, there is one instantiation of each platform type, and we don't expect to switch platform vectors at run time. Move those platform-specific instances into the init section, since a copy is made into the global ia64_mv at initialization. Also move the instruction patch lists into the init section.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 17 Dec 2005, 1 commit
-
-
Committed by Christoph Lameter
sparc64, i386 and x86_64 have support for a special data section dedicated to rarely updated data that is frequently read. The section was created to avoid false sharing of that rarely written data with frequently written kernel data. This patch creates such a data section for ia64 and will group rarely written data into it.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
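
On the C side the section is reached through an attribute, following the pattern already used by the other architectures (the example variable is hypothetical):

    #define __read_mostly \
            __attribute__((__section__(".data.read_mostly")))

    /* Read on every call, written only at configuration time. */
    static int max_widgets __read_mostly = 64;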
-
- 08 Sep 2005, 1 commit
-
-
Committed by Prasanna S Panchamukhi
This patch contains the ia64 architecture-specific changes to prevent possible race conditions.

Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 28 Jun 2005, 1 commit
-
-
Committed by Keshavamurthy Anil S
It is not safe to insert kprobes in IVT code. This patch checks whether the address at which a kprobe is being inserted lies in IVT code, and if so refuses to register the kprobe.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Acked-by: David Mosberger <davidm@napali.hpl.hp.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
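
A minimal sketch of such a registration-time check (the bracketing symbols would be provided by vmlinux.lds.S; names are assumptions):

    extern char __start_ivt_text[], __end_ivt_text[];

    static int in_ivt_code(unsigned long addr)
    {
            return addr >= (unsigned long)__start_ivt_text &&
                   addr <  (unsigned long)__end_ivt_text;
    }

    /* In arch_prepare_kprobe():
     *      if (in_ivt_code(addr))
     *              return -EINVAL;
     */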
-
- 17 Apr 2005, 1 commit
-
-
Committed by Linus Torvalds
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!
-