- 17 July 2008 (2 commits)
-
Committed by Zhao Yakui
"idle=nomwait" disables the use of the MWAIT instruction from both C1 (C1_FFH) and deeper (C2C3_FFH) C-states. When MWAIT is unavailable, the BIOS and OS generally negotiate to use the HALT instruction for C1, and use IO accesses for deeper C-states. This option is useful for power and performance comparisons, and also to work around BIOS bugs where broken MWAIT support is advertised. http://bugzilla.kernel.org/show_bug.cgi?id=10807 http://bugzilla.kernel.org/show_bug.cgi?id=10914Signed-off-by: NZhao Yakui <yakui.zhao@intel.com> Signed-off-by: NLi Shaohua <shaohua.li@intel.com> Signed-off-by: NLen Brown <len.brown@intel.com> Signed-off-by: NAndi Kleen <ak@linux.intel.com>
-
Committed by Zhao Yakui
"idle=halt" limits the idle loop to using the halt instruction. No MWAIT, no IO accesses, no C-states deeper than C1. If something is broken in the idle code, "idle=halt" is a less severe workaround than "idle=poll" which disables all power savings. Signed-off-by: NZhao Yakui <yakui.zhao@intel.com> Signed-off-by: NLen Brown <len.brown@intel.com> Signed-off-by: NAndi Kleen <ak@linux.intel.com>
-
- 26 June 2008 (1 commit)
-
Committed by Jens Axboe
This converts ia64 to use the new helpers for smp_call_function() and friends, and adds support for smp_call_function_single(). Cc: Tony Luck <tony.luck@intel.com> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
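For context, a minimal usage sketch of smp_call_function_single() as exposed by the generic helpers; the 4-argument form shown here is the one that stuck, while kernels of this era briefly carried an extra retry/nonatomic argument.

```c
#include <linux/smp.h>
#include <linux/kernel.h>

/* Runs on the target CPU, in interrupt context. */
static void say_hello(void *info)
{
	pr_info("hello from cpu %d\n", smp_processor_id());
}

static void poke_cpu_one(void)
{
	/* Run say_hello() on CPU 1 and wait until it has completed. */
	smp_call_function_single(1, say_hello, NULL, 1);
}
```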
-
- 17 June 2008 (1 commit)
-
Committed by Jack Steiner
Fix build error in CONFIG_IA64_SGI_UV config. (GENERIC builds are ok). Signed-off-by: Jack Steiner <steiner@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 28 May 2008 (16 commits)
-
Committed by Isaku Yamahata
Introduce pv_time_ops, which adds hooks for steal time accounting. In a virtualized environment, CPUs are shared by many guests, and steal time is the time spent running other guests; this steal time should be accounted for. Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
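A rough sketch of what such a hook table can look like; the member names below are illustrative assumptions, not necessarily the exact fields of the real pv_time_ops.

```c
/*
 * Hypothetical shape of a steal-time hook table: the native instance
 * does nothing, while a hypervisor-specific instance supplies its own
 * accounting.
 */
struct pv_time_ops_sketch {
	void (*init_missing_ticks_accounting)(int cpu);
	/* Returns nonzero if stolen ticks were accounted for this CPU. */
	int (*do_steal_accounting)(unsigned long *new_itm);
};

static int native_do_steal_accounting(unsigned long *new_itm)
{
	return 0;	/* bare metal: no time is ever stolen */
}

struct pv_time_ops_sketch pv_time_ops_sketch = {
	.do_steal_accounting = native_do_steal_accounting,
};
```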
-
Committed by Isaku Yamahata
Introduce pv_irq_ops, which adds hooks to paravirtualize irq-related operations. In a virtualized environment, interruptions may be replaced by something virtualization-friendly, so irq-related operations may also need paravirtualization. This patch adds the necessary hooks. Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com> Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Isaku Yamahata
Add hooks to paravirtualize the iosapic, which is a real hardware resource; in a virtualized environment it may be replaced by something virtualization-friendly. Define pv_iosapic_ops and add the hooks. Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Isaku Yamahata
Define the pv_init_ops hooks, which represent the various initialization hooks for a paravirtualized environment, and add them. Signed-off-by: Alex Williamson <alex.williamson@hp.com> Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Isaku Yamahata
Make NR_IRQS overridable by each pv instance. A pv instance may need its own number of irqs, so NR_IRQS should be the maximum of the nr_irqs that the pv instances need. Cc: Jes Sorensen <jes@sgi.com> Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Isaku Yamahata
Paravirtualize ia64_switch_to, ia64_leave_syscall and ia64_leave_kernel. They include sensitive or performance-critical privileged instructions, so they need paravirtualization. To paravirtualize them with a single source and multiple compiles, they are converted into indirect jumps, and each pv instance is defined. Cc: Keith Owens <kaos@ocs.com.au> Cc: "Dong, Eddie" <eddie.dong@intel.com> Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Isaku Yamahata
Paravirtualize minstate.h, which is hand-written assembly code. It includes sensitive or performance-critical privileged instructions, which makes it appropriate for paravirtualization. Cc: Keith Owens <kaos@ocs.com.au> Cc: Akio Takebe <takebe_akio@jp.fujitsu.com> Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com> Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Isaku Yamahata
pv_cpu_asm_ops: introduce the paravirtualized assembly-code hooks and define the instance for the native execution environment. Cc: Keith Owens <kaos@ocs.com.au> Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com> Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Isaku Yamahata
Introduce pv_cpu_ops to paravirtualize the privileged instructions that are defined as ia64 intrinsics. Make them indirect C function calls by introducing a function table, pv_cpu_ops. Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com> Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
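A trimmed-down sketch of the pattern just described: a privileged intrinsic becomes an indirect call through a per-environment function table. The member and wrapper names are illustrative assumptions, not the full real pv_cpu_ops.

```c
/* Hypothetical, shortened function table: one entry per privileged intrinsic. */
struct pv_cpu_ops_sketch {
	unsigned long (*get_psr)(void);
	void (*set_psr)(unsigned long psr);
	unsigned long (*getreg)(int regnum);
};

extern struct pv_cpu_ops_sketch pv_cpu_ops_sketch;

/*
 * Call sites use a wrapper instead of the raw intrinsic; the native
 * instance forwards to the real instruction, while a hypervisor
 * instance can substitute a cheaper virtualized path.
 */
static inline unsigned long paravirt_get_psr(void)
{
	return pv_cpu_ops_sketch.get_psr();
}
```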
-
Committed by Isaku Yamahata
This patch adds a setup hook in the very early boot sequence, before start_kernel(), to initialize the paravirtualization machinery. The hook will be set by each pv loader code or by using a multi entry point. Signed-off-by: Qing He <qing.he@intel.com> Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Isaku Yamahata
Introduce pv_info, which describes some basic information about the underlying execution environment. Cc: Jes Sorensen <jes@sgi.com> Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Isaku Yamahata
__local_irq_save() and local_save_flags() are used to mask interruptions. They read all psr bits, which requires emulating the whole register. On the other hand, reading only the single psr.i bit can be virtualized cheaply. Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Isaku Yamahata
[IA64] pvops: preparation: introduce ia64_set_rr0_to_rr4() to make the kernel paravirtualization-friendly. ia64/Xen will later replace setting rr[0-4] with a single hypercall. Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
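A plausible shape for the native side of this helper, assuming it simply writes the five region registers one at a time through the existing ia64_set_rr() wrapper (the constants are the region bases, i.e. the region number in the top three address bits); treat this as a sketch of the idea rather than a verified copy of the kernel code.

```c
/* Native sketch: set region registers rr0..rr4 individually.
 * A Xen instance could replace this whole body with one hypercall. */
static inline void ia64_set_rr0_to_rr4(unsigned long val0, unsigned long val1,
				       unsigned long val2, unsigned long val3,
				       unsigned long val4)
{
	ia64_set_rr(0x0000000000000000UL, val0);	/* region 0 */
	ia64_set_rr(0x2000000000000000UL, val1);	/* region 1 */
	ia64_set_rr(0x4000000000000000UL, val2);	/* region 2 */
	ia64_set_rr(0x6000000000000000UL, val3);	/* region 3 */
	ia64_set_rr(0x8000000000000000UL, val4);	/* region 4 */
}
```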
-
Committed by Isaku Yamahata
Move the LOAD_OFFSET definition from vmlinux.lds.S into system.h. On paravirtualized environments, it is necessary to detect the execution environment. One of the solutions is a multi entry point, which allows a boot loader to start kernel execution from an entry point different from the ELF entry point. The non-standard entry point will be defined as a specialized ELF note containing the LMA of the entry point symbol. The constant LOAD_OFFSET is needed to calculate the symbol's LMA, so move the definition into the public header file to make it available to the multi entry point support. Cc: "He, Qing" <qing.he@intel.com> Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Isaku Yamahata
Remove the extern declaration of handle_IPI() from irq_ia64.c and declare it in asm-ia64/smp.h instead, since handle_IPI() will later be referenced from another file. Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Tony Luck
Problem: An application violating the architectural rules regarding operation dependencies, and having specific Register Stack Engine (RSE) state at the time of the violation, may cause an illegal operation fault and invalid RSE state. Such faults may initiate a cascade of repeated illegal operation faults within OS interruption handlers. The specific behavior is OS dependent. Implication: An application causing an illegal operation fault with specific RSE state may result in a series of illegal operation faults and an eventual OS stack overflow condition. Workaround: OS interruption handlers that switch to the kernel backing store implement a check for invalid RSE state to avoid the series of illegal operation faults. The core of the workaround is the RSE_WORKAROUND code sequence inserted into each invocation of the SAVE_MIN_WITH_COVER and SAVE_MIN_WITH_COVER_R19 macros. This sequence includes hard-coded constants that depend on the number of stacked physical registers being 96. The rest of this patch consists of code to disable this workaround should this not be the case (with the presumption that if a future Itanium processor increases the number of registers, it would also remove the need for this patch). Move the start of the RBS up to a mod32 boundary to avoid some corner cases. The dispatch_illegal_op_fault code outgrew the spot it was squatting in when built with this patch and CONFIG_VIRT_CPU_ACCOUNTING=y, so move it out to the end of the ivt. Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 18 May 2008 (2 commits)
-
Committed by Xiantao Zhang
Guest's firmware needs an iosapic with 48 pins for ia64 guests. Needed to get networking going. Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Xiantao Zhang
The kernel's ia64_fpreg structure conflicts with userspace headers, so define a new structure to replace it. Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
- 15 May 2008 (2 commits)
-
Committed by Jack Steiner
This patch adds the basic IA64 machvec infrastructure to support the SGI "UV" platform. Signed-off-by: Jack Steiner <steiner@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Jack Steiner
Add new UV-specific header files. Signed-off-by: Jack Steiner <steiner@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 03 May 2008 (1 commit)
-
Committed by H. Peter Anvin
This modifies <asm-ia64/types.h> to use the <asm-generic/int-*.h> generic include files. Signed-off-by: H. Peter Anvin <hpa@zytor.com> Acked-by: Tony Luck <tony.luck@intel.com>
-
- 02 May 2008 (1 commit)
-
Committed by Roland McGrath
Replace TIF_RESTORE_SIGMASK with TS_RESTORE_SIGMASK and define our own set_restore_sigmask() function. This saves the costly SMP-safe set_bit operation, which we do not need for the sigmask flag since TIF_SIGPENDING always has to be set too. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
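The core of the trick is that the flag moves from the atomic thread_info->flags word into the non-atomic thread_info->status word, which only the owning task touches. A sketch of what such a helper looks like; the field names follow the common thread_info layout and the flag value is illustrative, so treat the details as an approximation.

```c
#include <linux/thread_info.h>
#include <linux/bitops.h>

#define TS_RESTORE_SIGMASK	0x0008	/* value here is illustrative */

static inline void set_restore_sigmask(void)
{
	struct thread_info *ti = current_thread_info();

	ti->status |= TS_RESTORE_SIGMASK;	/* plain OR: no atomic op needed */
	/* Signal work must be flagged anyway, so TIF_SIGPENDING is set too. */
	set_bit(TIF_SIGPENDING, (unsigned long *)&ti->flags);
}
```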
-
- 01 May 2008 (2 commits)
-
Committed by Jean Delvare
The declaration of dmi helper functions is a bit messy and inconsistent at the moment: * On ia64 they are declared in <asm/io.h>. * On x86-64 they are declared in <asm/dmi.h>. * On i386 they are declared both in <asm/io.h> and <asm/dmi.h>. Fix the header files so that the dmi helper functions are consistently defined in <asm/dmi.h>. Signed-off-by: Jean Delvare <khali@linux-fr.org> Cc: Matt Domsch <Matt_Domsch@dell.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Hidetoshi Seto
This patch silences: WARNING: vmlinux.o(.text+0x44672): Section mismatch in reference from the function arch_register_cpu() to the function .cpuinit.text:register_cpu(). The changes are based on the code in arch/x86/kernel/topology.c. Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 30 April 2008 (4 commits)
-
Committed by Jeff Dike
Lots of asm-*/futex.h call pagefault_enable and pagefault_disable, which are declared in linux/uaccess.h, without including linux/uaccess.h. They all include asm/uaccess.h, so this patch replaces asm/uaccess.h with linux/uaccess.h. Signed-off-by: Jeff Dike <jdike@linux.intel.com> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Kyle McMartin <kyle@mcmartin.ca> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mundt <lethal@linux-sh.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
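For reference, a small sketch of the pattern these futex headers rely on: pagefault_disable()/pagefault_enable() come from <linux/uaccess.h>, and user accesses between them fail fast with -EFAULT instead of sleeping on a fault. The helper name is made up for illustration.

```c
#include <linux/uaccess.h>
#include <linux/types.h>
#include <linux/errno.h>

/* Hypothetical helper: read a user word without taking the fault path. */
static int peek_user_word(u32 __user *uaddr, u32 *val)
{
	int ret;

	pagefault_disable();		/* declared in <linux/uaccess.h> */
	ret = __get_user(*val, uaddr);	/* returns -EFAULT rather than faulting in */
	pagefault_enable();

	return ret;
}
```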
-
Committed by Roland McGrath
TIF_RESTORE_SIGMASK no longer needs to be in the _TIF_WORK_* masks. Those low bits are scarce. Renumber TIF_RESTORE_SIGMASK to free one up. Signed-off-by: Roland McGrath <roland@redhat.com> Cc: Oleg Nesterov <oleg@tv-sign.ru> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Alex Chiang
Legacy HP ia64 platforms currently cannot provide /proc/cpuinfo/physical_id due to legacy SAL/PAL implementations. However, that physical topology information can be obtained via ACPI. Provide an interface that gives ACPI one last chance to provide physical_id for these legacy platforms. This logic only comes into play iff: - ACPI actually provides slot information for the CPU - we lack a valid socket_id Otherwise, we don't do anything. Since x86 uses the ACPI processor driver as well, we provide a nop stub function for arch_fix_phys_package_id() in asm-x86/topology.h Signed-off-by: Alex Chiang <achiang@hp.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
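The x86 stub mentioned at the end is just a no-op; a sketch of its shape, with the argument names assumed rather than copied from the header:

```c
#include <linux/types.h>

/* Sketch of the x86 no-op stub; argument names are assumptions. */
static inline void arch_fix_phys_package_id(int num, u32 slot)
{
	/* nothing to do: x86 gets a physical id without ACPI slot info */
}
```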
-
Committed by Dean Nelson
Enable the uncached allocator to allocate multiple pages of contiguous uncached memory. Signed-off-by: Dean Nelson <dcn@sgi.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
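A usage sketch of a multi-page uncached allocator interface; the prototypes below are assumptions inferred from the description (a node to allocate from, a page count, and an uncached kernel address in return), not verified declarations.

```c
/* Assumed prototypes, for illustration only. */
extern unsigned long uncached_alloc_page(int starting_nid, int n_pages);
extern void uncached_free_page(unsigned long uc_addr, int n_pages);

static int use_uncached_region(int nid)
{
	unsigned long uc_addr;

	uc_addr = uncached_alloc_page(nid, 4);	/* 4 contiguous uncached pages */
	if (!uc_addr)
		return -1;

	/* ... use the uncached region at uc_addr ... */

	uncached_free_page(uc_addr, 4);
	return 0;
}
```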
-
- 29 April 2008 (3 commits)
-
Committed by Harvey Harrison
Unaligned access is ok for the following arches: cris, m68k, mn10300, powerpc, s390, x86. Arches that use the memmove implementation for native endian and the byteshifting helpers for the opposite endianness: h8300, m32r, xtensa. Packed struct for native endian, byteshifting for the other endian: alpha, blackfin, ia64, parisc, sparc, sparc64, mips, sh. m68knommu is generic_be for Coldfire; otherwise unaligned access is ok. frv and arm choose endianness based on compiler settings and use the byteshifting versions; remove the unaligned trap handler from frv as it is now unused. v850 is le and uses the byteshifting versions for both be and le. Remove the now unused asm-generic implementation. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Acked-by: David S. Miller <davem@davemloft.net> Cc: <linux-arch@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
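Whichever per-arch implementation is selected, callers see the same <asm/unaligned.h> helpers. A small usage sketch (the field layout is invented for illustration):

```c
#include <asm/unaligned.h>
#include <linux/types.h>

/* Parse a little-endian 32-bit length field that may sit at any offset. */
static u32 read_len_field(const u8 *buf)
{
	return get_unaligned_le32(buf);
}

/* Write a big-endian 16-bit checksum at a possibly unaligned offset. */
static void write_csum_field(u8 *buf, u16 csum)
{
	put_unaligned_be16(csum, buf);
}
```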
-
Committed by Arthur Kepner
Change all ia64 machvecs to use the new dma_*map*_attrs() interfaces. Implement the old dma_*map_*() interfaces in terms of the corresponding new interfaces. For ia64/sn, make use of one dma attribute, DMA_ATTR_WRITE_BARRIER. Introduce swiotlb_*map*_attrs() functions. Signed-off-by: Arthur Kepner <akepner@sgi.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Jesse Barnes <jbarnes@virtuousgeek.org> Cc: Jes Sorensen <jes@sgi.com> Cc: Randy Dunlap <randy.dunlap@oracle.com> Cc: Roland Dreier <rdreier@cisco.com> Cc: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: David Miller <davem@davemloft.net> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Grant Grundler <grundler@parisc-linux.org> Cc: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
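A caller-side sketch of the *_attrs interface as it looked in this era, where attributes were carried in a struct dma_attrs (later kernels replaced the struct with an unsigned long bit mask); treat this as an approximation of the API rather than a verified snippet.

```c
#include <linux/dma-mapping.h>

/* Map a buffer for device reads, asking for write-barrier semantics. */
static dma_addr_t map_with_write_barrier(struct device *dev, void *buf,
					 size_t len)
{
	DEFINE_DMA_ATTRS(attrs);

	dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs);
	return dma_map_single_attrs(dev, buf, len, DMA_TO_DEVICE, &attrs);
}
```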
-
Committed by Al Viro
psr is not a good name for a local variable in a macro body when it has a good chance of being the argument of said macro (and actually is, in at least one place). Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 28 April 2008 (4 commits)
-
Committed by Gerald Schaefer
Huge ptes have a special type on s390 and cannot be handled with the standard pte functions in certain cases, e.g. because of a different location of the invalid bit. This patch adds some new architecture-specific functions to hugetlb common code, as a prerequisite for the s390 large page support. This won't affect other architectures in functionality, but I need to add some new dummy inline functions to the headers. Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Gerald Schaefer
A cow break on a hugetlbfs page with page_count > 1 will set a new pte with set_huge_pte_at(), without any tlb flush operation. The old pte will remain in the tlb, and subsequent write access to the page will result in a page fault loop for as long as it may take until the tlb is flushed from somewhere else. This patch introduces an architecture-specific huge_ptep_clear_flush() function, which is called before the set_huge_pte_at() in hugetlb_cow(). ATTENTION: This is just a nop on all architectures for now; the s390 implementation will come with our large page patch later. Other architectures should define their own huge_ptep_clear_flush() if needed. Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
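The per-arch hook described here starts out as a no-op everywhere. A sketch of the stub an architecture header would carry, with the signature reconstructed from the description (later kernels changed it to return the old pte):

```c
#include <linux/mm_types.h>
#include <asm/pgtable.h>

/* No-op stub: architectures that need it (e.g. s390) flush the old
 * huge pte from the TLB here before hugetlb_cow() installs the new one. */
static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
					 unsigned long addr, pte_t *ptep)
{
}
```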
-
Committed by Gerald Schaefer
This patch moves all architecture functions for hugetlb to architecture header files (include/asm-foo/hugetlb.h) and converts all macros to inline functions. It also removes (!) ARCH_HAS_HUGEPAGE_ONLY_RANGE, ARCH_HAS_HUGETLB_FREE_PGD_RANGE, ARCH_HAS_PREPARE_HUGEPAGE_RANGE, ARCH_HAS_SETCLEAR_HUGE_PTE and ARCH_HAS_HUGETLB_PREFAULT_HOOK. Getting rid of the ARCH_HAS_xxx #ifdef and macro fugliness should increase readability and maintainability, at the price of some code duplication. An asm-generic common part would have reduced the loc, but we would end up with new ARCH_HAS_xxx defines eventually. Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Nick Piggin
s390, for one, cannot implement VM_MIXEDMAP with pfn_valid, due to their memory model (which is more dynamic than most). Instead, they had proposed to implement it with an additional path through vm_normal_page(), using a bit in the pte to determine whether or not the page should be refcounted: vm_normal_page() { ... if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) { if (vma->vm_flags & VM_MIXEDMAP) { #ifdef s390 if (!mixedmap_refcount_pte(pte)) return NULL; #else if (!pfn_valid(pfn)) return NULL; #endif goto out; } ... } This is fine, however if we are allowed to use a bit in the pte to determine refcountedness, we can use that to _completely_ replace all the vma based schemes. So instead of adding more cases to the already complex vma-based scheme, we can have a clearly separate and simple pte-based scheme (and get slightly better code generation in the process): vm_normal_page() { #ifdef s390 if (!mixedmap_refcount_pte(pte)) return NULL; return pte_page(pte); #else ... #endif } And finally, we may rather make this concept usable by any architecture rather than making it s390 only, so implement a new type of pte state for this. Unfortunately the old vma based code must stay, because some architectures may not be able to spare pte bits. This makes vm_normal_page a little bit more ugly than we would like, but the 2 cases are clearly separate. So introduce a pte_special pte state, and use it in mm/memory.c. It is currently a noop for all architectures, so this doesn't actually result in any compiled code changes to mm/memory.o. BTW: I haven't put vm_normal_page() into arch code as-per an earlier suggestion. The reason is that, regardless of where vm_normal_page is actually implemented, the *abstraction* is still exactly the same. Also, while it depends on whether the architecture has pte_special or not, those are the only two possible cases, and it really isn't an arch specific function -- the role of the arch code should be to provide primitive functions and accessors with which to build the core code; pte_special does that. We do not want architectures to know or care about vm_normal_page itself, and we definitely don't want them being able to invent something new there out of sight of mm/ code. If we made vm_normal_page an arch function, then we have to make vm_insert_mixed (next patch) an arch function too. So I don't think moving it to arch code fundamentally improves any abstractions, while it does practically make the code more difficult to follow, for both mm and arch developers, and easier to misuse. [akpm@linux-foundation.org: build fix] Signed-off-by: Nick Piggin <npiggin@suse.de> Acked-by: Carsten Otte <cotte@de.ibm.com> Cc: Jared Hulbert <jaredeh@gmail.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
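A condensed sketch of the pte_special() fast path the message describes; the real vm_normal_page() keeps the vma-based VM_PFNMAP/VM_MIXEDMAP logic alongside it for architectures without a spare pte bit, so read this as an illustration of the concept only.

```c
#include <linux/mm.h>

/* Sketch: with a pte_special bit, "is this a normal refcounted page?"
 * can be answered from the pte alone, without inspecting the vma. */
static struct page *normal_page_sketch(pte_t pte)
{
	if (pte_special(pte))
		return NULL;		/* special mapping: do not touch refcounts */

	return pte_page(pte);		/* ordinary page backed by a struct page */
}
```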
-
- 27 April 2008 (1 commit)
-
Committed by Avi Kivity
We wish to export it to userspace, so move it into the kvm namespace. Signed-off-by: Avi Kivity <avi@qumranet.com>
-