- 29 March 2006, 2 commits
-
-
By Satoru Takeuchi
This patch corrects some wrong comments and a printk message. It also fixes a few minor issues and makes all lines fit in 80 columns. Signed-off-by: Satoru Takeuchi <takeuchi_satoru@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Dean Roe
Fix a for-loop in sn_hwperf_geoid_to_cnode(). It needs to loop over num_cnodes to ensure it can still process TIO nodes in addition to compute nodes on systems with many nodes. Interim fix until better support for many (>265) nodes is complete. Signed-off-by: Dean Roe <roe@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 28 March 2006, 3 commits
-
-
By hawkes@sgi.com
Eliminate an unnecessary -- and flawed -- use of the expensive num_online_cpus(). Signed-off-by: John Hawkes <hawkes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Chen, Kenneth W
It was reported by a field customer that the global spin lock ptcg_lock causes a lot of grief for munmap performance on a large NUMA machine. The problem appears to come from flush_tlb_range(), which currently calls platform_global_tlb_purge() unconditionally. On some of the NUMA machines in existence today, this function maps to ia64_global_tlb_purge(), which holds the ptcg_lock spin lock while executing the ptc.ga instruction. Here is a patch that attempts to avoid the global TLB purge whenever possible and use a local TLB purge instead, though the conditions for using the local purge are pretty restrictive. One side effect of having a flush-TLB-range instruction on ia64 is that the kernel never gets a chance to clear out cpu_vm_mask. On ia64 this mask is sticky, and it accumulates as a process bounces around, diminishing the opportunities to use ptc.l. Thoughts? Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Acked-by: Jack Steiner <steiner@sgi.com> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
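As a rough illustration of the heuristic the commit describes: purge locally with ptc.l only when the flush can be shown to affect just the current CPU, otherwise fall back to the ptcg_lock-protected global purge. The sketch below is illustrative only; the mm model and helper names are hypothetical, not the ia64 implementation.

    #include <stdint.h>

    /* Minimal, hypothetical model of the state the decision looks at. */
    struct mm_model {
        uint64_t cpu_vm_mask;  /* CPUs this mm has ever run on; sticky on ia64 */
        int      active_users; /* tasks currently using this mm */
    };

    enum purge_kind { PURGE_LOCAL, PURGE_GLOBAL };

    /*
     * Only when the mm is confined to the current CPU and has a single
     * active user can the cheaper local ptc.l be used; any doubt forces
     * the ptc.ga path under ptcg_lock. Because ia64 never clears
     * cpu_vm_mask, a process that has bounced between CPUs keeps failing
     * the first test -- the "sticky mask" drawback the commit mentions.
     */
    static enum purge_kind choose_purge(const struct mm_model *mm, int this_cpu)
    {
        if (mm->cpu_vm_mask == (1ULL << this_cpu) && mm->active_users == 1)
            return PURGE_LOCAL;
        return PURGE_GLOBAL;
    }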
-
By Zhang, Yanmin
Function lazy_mmu_prot_update() is also used on huge pages when it is called by set_huge_ptep_writable(), but it isn't aware of huge pages. Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com> Acked-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 27 March 2006, 12 commits
-
-
By Akinobu Mita
The find_*_bit() routines are defined to work on a pointer to unsigned long, but partial_page.bitmap is unsigned int and is passed to find_*_bit() in arch/ia64/ia32/sys_ia32.c, so the compiler prints warnings. This patch changes the field to unsigned long instead. Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
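The type mismatch matters beyond the compiler warning: on an LP64 target such as ia64, unsigned long is 64 bits while unsigned int is 32, so a word-at-a-time bit search hands back wrong results or reads past the object when given the narrower type. A small user-space sketch of the contract (bitmap_first_set below is an illustrative stand-in, not the kernel's find_first_bit):

    #include <stdio.h>
    #include <limits.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    /* Stand-in with the same contract as the kernel find_*_bit() family:
     * the bitmap must be an array of unsigned long. */
    static unsigned long bitmap_first_set(const unsigned long *map,
                                          unsigned long nbits)
    {
        for (unsigned long i = 0; i < nbits; i++)
            if (map[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
                return i;
        return nbits;
    }

    int main(void)
    {
        unsigned long wide = 1UL << 40;   /* bit 40 exists only in a 64-bit word */
        unsigned int narrow = 0;          /* what partial_page.bitmap used to be */

        printf("first set bit: %lu\n", bitmap_first_set(&wide, BITS_PER_LONG));

        /* Passing &narrow would need a cast, draws an incompatible-pointer
         * warning, and makes the scan read beyond the 32-bit object. */
        (void)narrow;
        return 0;
    }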
-
By Akinobu Mita
- remove generic_fls64()
- remove find_{next,first}{,_zero}_bit()
- remove ext2_{set,clear,test,find_first_zero,find_next_zero}_bit()
- remove minix_{test,set,test_and_clear,test,find_first_zero}_bit()
- remove sched_find_first_bit()
Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
By Akinobu Mita
__set_bit() --> cpu_set() cleanup. Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
By Prasanna S Panchamukhi
Provide proper kprobes fault handling when a user-specified pre/post handler tries to access user address space through copy_from_user(), get_user(), etc. The user-specified fault handler gets called only if the fault occurs while executing the user-specified handlers. In that case the user-specified handler is allowed to fix it first; if it does not, we try to fix it by calling fix_exception(). The user-specified handler will not be called if the fault happens while single-stepping the original instruction; instead we reset the current probe and allow the system page fault handler to fix it up. Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com> Acked-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
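A condensed sketch of the dispatch order described above; the type and helper names here are hypothetical stand-ins, not the actual kprobes internals:

    /* Hypothetical sketch of the fault-dispatch order; not kprobes code. */
    enum probe_state { PROBE_SSTEP, PROBE_IN_USER_HANDLER };

    struct probe {
        enum probe_state state;
        /* user-specified fault handler; returns nonzero if it fixed the fault */
        int (*user_fault_handler)(struct probe *p, int trapnr);
    };

    /* Returns 1 if the fault was handled here, 0 to fall through to the
     * normal page fault path. */
    static int probe_fault_dispatch(struct probe *p, int trapnr,
                                    void (*reset_probe)(struct probe *),
                                    int (*exception_fixup)(int trapnr))
    {
        if (p->state == PROBE_SSTEP) {
            /* Fault while single-stepping the original instruction:
             * skip the user handler, reset the probe, and let the
             * system page fault handler fix it up. */
            reset_probe(p);
            return 0;
        }

        /* Fault raised from inside a user-specified pre/post handler,
         * e.g. via copy_from_user()/get_user(): that handler's own fault
         * callback gets the first chance, then the exception fixup tables. */
        if (p->user_fault_handler && p->user_fault_handler(p, trapnr))
            return 1;
        return exception_fixup(trapnr);
    }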
-
By bibo,mao
Currently kprobe handler traps only happen in kernel space, so kprobe_exceptions_notify() should skip traps that happen in user space. This patch modifies this; it is based on 2.6.16-rc4. Signed-off-by: bibo mao <bibo.mao@intel.com> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com> Cc: <hiramatu@sdl.hitachi.co.jp> Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
By bibo mao
When kretprobe probes the schedule() function, if the probed process exits then schedule() never returns, so some kretprobe instances are never recycled. With this patch the parent process recycles the retprobe instances of the probed function, so kretprobe instances no longer leak. Signed-off-by: bibo mao <bibo.mao@intel.com> Cc: Masami Hiramatsu <hiramatu@sdl.hitachi.co.jp> Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
By Stephen Rothwell
Create compat_sys_adjtimex and use it in all appropriate places. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Arnd Bergmann <arnd@arndb.de> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
By Stephen Rothwell
We had a copy of the compatibility version of struct timex in each 64-bit architecture. This patch just creates a global one and replaces all usages of the old ones. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Arnd Bergmann <arnd@arndb.de> Acked-by: Kyle McMartin <kyle@parisc-linux.org> Acked-by: Tony Luck <tony.luck@intel.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
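The consolidation follows the usual 64-bit compat pattern: one 32-bit-layout mirror of the native structure plus one conversion helper, instead of a copy per architecture. A hedged sketch of that pattern with an illustrative field subset (not the full compat struct timex definition):

    #include <stdint.h>
    #include <string.h>

    /* Illustrative subset of a native adjtimex-style structure; 'long'
     * is 8 bytes on a 64-bit kernel. */
    struct native_timex {
        unsigned int modes;
        long offset;
        long freq;
        long tv_sec;
        long tv_usec;
    };

    /* 32-bit userspace layout: every 'long' becomes a fixed 32-bit field. */
    struct compat32_timex {
        uint32_t modes;
        int32_t offset;
        int32_t freq;
        int32_t tv_sec;
        int32_t tv_usec;
    };

    /* One shared widening helper replaces the per-architecture duplicates. */
    static void compat32_to_native(struct native_timex *dst,
                                   const struct compat32_timex *src)
    {
        memset(dst, 0, sizeof(*dst));
        dst->modes   = src->modes;
        dst->offset  = src->offset;
        dst->freq    = src->freq;
        dst->tv_sec  = src->tv_sec;
        dst->tv_usec = src->tv_usec;
    }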
-
By Bjorn Helgaas
Almost all users of the table addresses from the EFI system table want physical addresses, so rather than doing the pa->va->pa conversion, just keep physical addresses in struct efi. This fixes a DMI bug: the efi structure contained the physical SMBIOS address on x86 but the virtual address on ia64, so dmi_scan_machine() used ioremap() on a virtual address on ia64. This is essentially the same as an earlier patch by Matt Tolentino: http://marc.theaimsgroup.com/?l=linux-kernel&m=112130292316281&w=2 except that this changes all table addresses, not just ACPI addresses. Matt's original patch was backed out because it caused MCAs on HP sx1000 systems. That problem is resolved by the ioremap() attribute checking added for ia64. Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: Matt Domsch <Matt_Domsch@dell.com> Cc: "Tolentino, Matthew E" <matthew.e.tolentino@intel.com> Cc: "Brown, Len" <len.brown@intel.com> Cc: Andi Kleen <ak@muc.de> Acked-by: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
By Bjorn Helgaas
Check the EFI memory map so we can use the correct memory attributes for ioremap(). Previously we always used uncacheable access, which blows up on some machines for regular system memory. Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: Matt Domsch <Matt_Domsch@dell.com> Cc: "Tolentino, Matthew E" <matthew.e.tolentino@intel.com> Cc: "Brown, Len" <len.brown@intel.com> Cc: Andi Kleen <ak@muc.de> Acked-by: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
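The idea is to derive the mapping attribute from what the EFI memory map reports for the physical range instead of always mapping uncacheable. A simplified sketch of that decision using the standard EFI attribute bits (the helper below is illustrative; the real code walks the EFI memory map):

    #include <stdint.h>

    /* Standard EFI memory attribute bits (from the EFI/UEFI spec). */
    #define EFI_MEMORY_UC 0x0000000000000001ULL /* uncacheable          */
    #define EFI_MEMORY_WB 0x0000000000000008ULL /* write-back cacheable */

    enum map_kind { MAP_CACHED, MAP_UNCACHED, MAP_REFUSE };

    /* 'attr' stands for the attributes supported by every EFI memory map
     * descriptor covering the requested physical range. */
    static enum map_kind pick_ioremap_attribute(uint64_t attr)
    {
        if (attr & EFI_MEMORY_WB)
            return MAP_CACHED;   /* regular RAM: a UC mapping here can blow up */
        if (attr & EFI_MEMORY_UC)
            return MAP_UNCACHED; /* MMIO and similar */
        return MAP_REFUSE;       /* nothing safe to do with this range */
    }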
-
By Bjorn Helgaas
Pass the size, not a pointer to the size, to efi_mem_attribute_range(). This function validates memory regions for the /dev/mem read/write/mmap paths. The pointer allows arches to reduce the size of the range, but I think that's unnecessary complexity. Simplifying it will let me use efi_mem_attribute_range() to improve the ia64 ioremap() implementation. Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: Matt Domsch <Matt_Domsch@dell.com> Cc: "Tolentino, Matthew E" <matthew.e.tolentino@intel.com> Cc: "Brown, Len" <len.brown@intel.com> Cc: Andi Kleen <ak@muc.de> Acked-by: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
By Matt Domsch
Enable DMI table parsing on ia64. Andi Kleen has a patch in his x86_64 tree which enables the use of i386 dmi_scan.c on x86_64. The dmi_scan.c functions are used by the drivers/char/ipmi/ipmi_si_intf.c driver for autodetecting the ports or memory spaces where the IPMI controllers may be found. This patch adds the equivalent changes for ia64. In addition, I reworked the DMI detection so that on EFI-capable systems it uses the efi.smbios pointer to find the table rather than brute-force searching from 0xF0000; on non-EFI systems it continues the brute-force search. My test system, an Intel S870BN4 'Tiger4', aka Dell PowerEdge 7250, with the latest BIOS, does not list the IPMI controller in the ACPI namespace, nor does it have an ACPI SPMI table. Also note that currently shipping Dell x8xx EM64T servers don't have these either, so DMI is the only method for obtaining the address of the IPMI controller. Signed-off-by: Matt Domsch <Matt_Domsch@dell.com> Acked-by: "Luck, Tony" <tony.luck@intel.com> Cc: Andi Kleen <ak@muc.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
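A condensed sketch of the lookup order described above; efi_smbios_phys and the two helpers are stand-ins for the real efi.smbios pointer and dmi_scan plumbing, with trivial stubs so the sketch is self-contained:

    #include <stdio.h>

    #define EFI_INVALID_TABLE_ADDR (~0UL)

    /* Stand-in for efi.smbios: the physical SMBIOS address, if EFI gave us one. */
    static unsigned long efi_smbios_phys = EFI_INVALID_TABLE_ADDR;

    /* Stubs standing in for "map and parse the entry point here" and for
     * the legacy 0xF0000-0xFFFFF signature scan. */
    static int parse_smbios_at(unsigned long phys)
    {
        printf("parse SMBIOS entry point at %#lx\n", phys);
        return 0;
    }

    static int scan_for_smbios(void)
    {
        printf("brute-force scan from 0xF0000\n");
        return 0;
    }

    static int locate_smbios(void)
    {
        if (efi_smbios_phys != EFI_INVALID_TABLE_ADDR)
            return parse_smbios_at(efi_smbios_phys); /* EFI told us where */
        return scan_for_smbios();                    /* non-EFI fallback  */
    }

    int main(void)
    {
        return locate_smbios();
    }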
-
- 26 March 2006, 1 commit
-
-
By Thomas Gleixner
alarm() calls the kernel with an unsigned int timeout in seconds. The value is stored in the tv_sec field of a struct timeval to set up the itimer. The tv_sec field of struct timeval is of type long, which causes the tv_sec value to be negative on 32-bit machines if seconds > INT_MAX. Before the hrtimer merge (pre 2.6.16) such a negative value was converted to the maximum jiffies timeout by the timeval_to_jiffies conversion. It's not clear whether this was intended or just happened to be done by the timeval_to_jiffies code. hrtimers expect a timeval in canonical form and treat a negative timeout as already expired. This breaks the legitimate usage of alarm() with a timeout value > INT_MAX seconds. For 32-bit machines it is therefore necessary to limit the internal seconds value to avoid API breakage. Instead of doing this in all implementations of sys_alarm, the duplicated sys_alarm code is moved into a common function in itimer.c. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
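The overflow is easy to model in plain C by treating tv_sec as a signed 32-bit value, and the fix amounts to clamping the seconds before the assignment; the names below are illustrative, not the kernel's sys_alarm code:

    #include <stdio.h>
    #include <stdint.h>
    #include <limits.h>

    /* tv_sec as it looks on a 32-bit machine: a signed 32-bit long. */
    struct timeval32 { int32_t tv_sec; int32_t tv_usec; };

    static struct timeval32 alarm_interval(unsigned int seconds)
    {
        struct timeval32 tv;

        /* Clamp before the assignment: without it, seconds > INT_MAX wraps
         * to a negative tv_sec, which hrtimers treat as already expired
         * instead of as a very long timeout. */
        if (seconds > INT_MAX)
            seconds = INT_MAX;

        tv.tv_sec = (int32_t)seconds;
        tv.tv_usec = 0;
        return tv;
    }

    int main(void)
    {
        unsigned int big = 3000000000u;          /* > INT_MAX seconds     */
        int32_t unclamped = (int32_t)big;        /* the pre-fix behaviour */
        struct timeval32 tv = alarm_interval(big);

        printf("unclamped tv_sec = %d, clamped tv_sec = %d\n", unclamped, tv.tv_sec);
        return 0;
    }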
-
- 25 March 2006, 8 commits
-
-
By Fenghua Yu
IPF SDM 2.2 changes the definition of PAL_LOGICAL_TO_PHYSICAL to add proc_number=-1 for getting core/thread mapping info on the running processor. Based on this change, we should update the existing core/thread detection in the ia64 kernel accordingly. The attached patch implements this change. It simplifies the detection code and eliminates a potential race condition. It also runs a bit faster and has better scalability, especially as the number of cores and threads per package grows. Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Jack Steiner
Update configuration files with the new CONFIG option. Signed-off-by: Jack Steiner <steiner@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Jack Steiner
Node numbers are kept in cpu_to_node_map, which is currently defined as u8. Change it to u16 to accommodate larger node numbers. Signed-off-by: Jack Steiner <steiner@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Jack Steiner
Add support in ia64 ACPI for platforms that support more than 256 nodes. Currently, ACPI is limited to 256 nodes because the proximity domain number is 8 bits. Long term, we expect to use ACPI 3.0 to support >256 nodes. This patch is an interim solution that works with platforms that pass the high-order bits of the proximity domain in "reserved" fields of the ACPI tables. This code is enabled ONLY on SN platforms. Signed-off-by: Jack Steiner <steiner@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Jack Steiner
Add a configuration option to allow the maximum number of nodes to be configurable for GENERIC or SN kernels. Signed-off-by: Jack Steiner <steiner@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Prarit Bhargava
arch/ia64/sn and include/asm-ia64/sn changes required to support Tollhouse system PCI hotplug; also fixes the ia64_sn_sysctl_ioboard_get call and introduces the PRF_HOTPLUG_SUPPORT feature bit. Signed-off-by: Prarit Bhargava <prarit@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Chen, Kenneth W
dig_irq_init is equivalent to machvec_noop; there is no need to define another empty function. Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Russ Anderson
Memory errors encountered by user applications may surface when the CPU is running in kernel context. The current code will not attempt recovery if the MCA surfaces in kernel context (privilege mode 0). This patch adds a check for cases where the user initiated the load that surfaces in kernel interrupt code. An example is a user process launching a load from memory where the data in memory has bad ECC. Before the bad data gets to the CPU register, an interrupt comes in. The code jumps to the IVT interrupt entry point and begins execution in kernel context. The process of saving the user registers (SAVE_REST) causes the bad data to be loaded into a CPU register, triggering the MCA. The MCA surfaces in kernel context, even though the load was initiated from user context. As suggested by David and Tony, this patch uses an exception-table-like approach, putting the tagged recovery addresses in a searchable table. One difference from the exception table is that MCAs do not surface at precise places (such as with a TLB miss), so instead of tagging specific instructions, address ranges are registered. A single macro is used to do the tagging, with the input parameter being the label of the starting address and the macro itself marking the ending address. This limits clutter in the code. This patch only tags one spot, the interrupt IVT entry. Testing showed that spot to be a "heavy hitter", with MCAs surfacing while saving user registers. Other spots can be added as needed by adding a single macro. Signed-off-by: Russ Anderson (rja@sgi.com) Signed-off-by: Tony Luck <tony.luck@intel.com>
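The recovery check described above boils down to asking whether the interrupted IP falls inside any registered [start, end) range. A minimal user-space sketch of that table-and-search idea; the names and the placeholder addresses are purely illustrative, not the kernel's symbols:

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* One tagged region: recovery is attempted if the MCA surfaces while
     * executing anywhere inside [start, end). */
    struct mca_range {
        uint64_t start;
        uint64_t end;
    };

    /* In the real patch these entries are emitted by an assembler macro at
     * the tagged spots (e.g. around the IVT interrupt entry's SAVE_REST);
     * the addresses below are placeholders for illustration. */
    static const struct mca_range recoverable[] = {
        { 0xa000000100010000ull, 0xa000000100010400ull },
        { 0xa000000100020000ull, 0xa000000100020200ull },
    };

    static int ip_is_recoverable(uint64_t ip)
    {
        for (size_t i = 0; i < sizeof(recoverable) / sizeof(recoverable[0]); i++)
            if (ip >= recoverable[i].start && ip < recoverable[i].end)
                return 1;
        return 0;
    }

    int main(void)
    {
        uint64_t ip = 0xa000000100010123ull;
        printf("ip %#llx recoverable: %d\n",
               (unsigned long long)ip, ip_is_recoverable(ip));
        return 0;
    }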
-
- 24 March 2006, 3 commits
-
-
By Alexey Dobriyan
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
By Adrian Bunk
include/linux/platform.h contained nothing that was actually used except the default_idle() prototype, and is therefore removed by this patch. This patch does the following with the platform-specific default_idle() functions on different architectures:
- remove the unused function: parisc, sparc64
- make the needlessly global function static: arm, h8300, m68k, m68knommu, s390, v850, x86_64
- add a prototype in asm/system.h: cris, i386, ia64
Signed-off-by: Adrian Bunk <bunk@stusta.de> Acked-by: Patrick Mochel <mochel@digitalimplant.org> Acked-by: Kyle McMartin <kyle@parisc-linux.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
By Horms
I'm not sure of the worthiness of this idea, so please consider it an RFC. Its key merits are:
* Reuse existing infrastructure
* Greatly tightens up the parsing of nomca
* Greatly simplifies the parsing of machvec
Additional cleanup (moving setup_mvec() to machvec.c) by Ken Chen. Signed-off-by: Horms <horms@verge.net.au> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 23 March 2006, 10 commits
-
-
By Adrian Bunk
This patch removes all occurrences of _INLINE_ in the kernel. With the exception of tty_flip.h, I've simply removed the inlines, since gcc should know best which functions to inline. Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
By Chen, Kenneth W
ia64_mv is initialized based on the platform detected or specified. However, there is one instantiation of each platform type, and we don't expect to switch platform vectors at run time. Move those platform-specific instances into the init section, since a copy is made into the global ia64_mv at initialization. Also move the instruction patch list into the init section. Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
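The pattern used throughout this group of patches is the standard kernel one: mark boot-only code __init and boot-only data __initdata so their memory is reclaimed after init. A generic, hedged sketch of the annotation (the symbols are made up for illustration, and this compiles only as part of a kernel build, not standalone):

    #include <linux/init.h>
    #include <linux/kernel.h>

    /* Boot-time-only data: lives in .init.data and is freed after init. */
    static int boot_only_flag __initdata = 1;

    /* Boot-time-only code: placed in .init.text and discarded after boot.
     * Referencing it from non-init code triggers section-mismatch warnings. */
    static int __init probe_platform(void)
    {
        if (boot_only_flag)
            printk(KERN_INFO "platform probe ran at boot\n");
        return 0;
    }
    arch_initcall(probe_platform);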
-
By Chen, Kenneth W
Add __initdata to nolwsys. Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Chen, Kenneth W
Add init declarations to a bunch of patch functions and the gate page setup function. Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Chen, Kenneth W
Add init declarations to variables/functions used for memory initialization. I don't think they clash with memory hotplug; if they do, please yell. Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Chen, Kenneth W
Add init declarations to CPU initialization functions. Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Chen, Kenneth W
Mark init-related variables and functions in the MCA code with the appropriate __init* declarations. Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Kenji Kaneshige
According to the ACPI spec, the OSPM must ignore the contents of the Processor Local APIC/SAPIC Affinity Structure in the System Resource Affinity Table (SRAT) if its enable flag is cleared. However, ia64 Linux refers to all of the Processor Local APIC/SAPIC Affinity Structures in the SRAT regardless of the enable flag. This is obviously against the ACPI spec. This patch fixes the bug. Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
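The fix amounts to honoring the enabled bit in each affinity structure before consuming it. A hedged sketch with a simplified stand-in for the SRAT processor affinity entry (not the exact ACPI table layout):

    #include <stdint.h>

    /* Simplified stand-in for the SRAT Processor Local APIC/SAPIC Affinity
     * Structure; only the fields needed for the illustration. */
    struct srat_cpu_affinity {
        uint8_t  proximity_domain;
        uint8_t  apic_id;
        uint32_t flags;            /* bit 0: entry is enabled */
    };

    #define SRAT_CPU_ENABLED 0x1u

    static void srat_parse_cpu(const struct srat_cpu_affinity *pa,
                               void (*register_cpu)(uint8_t apic_id, uint8_t pxm))
    {
        /* Per the ACPI spec, the OSPM must ignore the entry when the
         * enabled flag is clear -- the check ia64 was missing. */
        if (!(pa->flags & SRAT_CPU_ENABLED))
            return;
        register_cpu(pa->apic_id, pa->proximity_domain);
    }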
-
By Bjorn Helgaas
Use the recently added ia64_get_irr() rather than duplicating the code. Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Acked-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Chen, Kenneth W
Fix the is_hugepage_only_range() definition to mean "overlaps" instead of "within the architecturally restricted hugetlb address range". Simplify the ia64-specific code that used is_hugepage_only_range() to just check which region the address is in. Signed-off-by: Ken Chen <kenneth.w.chen@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 22 March 2006, 1 commit
-
-
By David Gibson
Quite a long time back, prepare_hugepage_range() replaced is_aligned_hugepage_range() as the callback from mm/mmap.c to arch code to verify whether an address range is suitable for a hugepage mapping. is_aligned_hugepage_range() stuck around, but only to implement prepare_hugepage_range() on archs which didn't implement their own. Most archs (everything except ia64 and powerpc) used the same implementation of is_aligned_hugepage_range(). On powerpc, which implements its own prepare_hugepage_range(), the custom version was never used. In addition, "is_aligned_hugepage_range()" was a bad name, because it suggests it returns true iff the given range is a good hugepage range, whereas in fact it returns 0-or-error (so the sense is reversed). This patch cleans up by abolishing is_aligned_hugepage_range(). Instead, prepare_hugepage_range() is defined directly. Most archs use the default version, which simply checks that the given region is aligned to the size of a hugepage. ia64 and powerpc define custom versions: the ia64 one simply checks that the range is in the correct address space region in addition to being suitably aligned; the powerpc version (just as previously) checks for suitable addresses and, if necessary, performs low-level MMU frobbing to set up new areas for use by hugepages. No libhugetlbfs testsuite regressions on ppc64 (POWER5 LPAR). Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: William Lee Irwin III <wli@holomorphy.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
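The default version described above reduces to two alignment tests against the hugepage size. A hedged sketch of that check in plain C; HPAGE_SHIFT here is a local placeholder, since the real value comes from the architecture:

    #include <stdio.h>

    /* Placeholder hugepage geometry for illustration (4 MB pages). */
    #define HPAGE_SHIFT 22
    #define HPAGE_SIZE  (1UL << HPAGE_SHIFT)
    #define HPAGE_MASK  (~(HPAGE_SIZE - 1))

    /* Default-style check: the range is acceptable iff both the start and
     * the length are hugepage-aligned. Returns 0 on success and nonzero on
     * error -- the reversed sense the commit message complains about. */
    static int prepare_hugepage_range_default(unsigned long addr, unsigned long len)
    {
        if (len & ~HPAGE_MASK)
            return -1;
        if (addr & ~HPAGE_MASK)
            return -1;
        return 0;
    }

    int main(void)
    {
        printf("aligned:   %d\n",
               prepare_hugepage_range_default(HPAGE_SIZE, HPAGE_SIZE));
        printf("unaligned: %d\n",
               prepare_hugepage_range_default(HPAGE_SIZE + 4096, HPAGE_SIZE));
        return 0;
    }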
-