- 05 August 2008 (3 commits)
-
-
Committed by Robin Holt

ia64 has compiled with NR_CPUS=4096 for a couple of releases; we just forgot to update Kconfig to allow it.

Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Robin Holt
arch/ia64/kernel/vmlinux.lds is a generated file. Tell git to ignore it.

Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Isaku Yamahata

Recent kernels are not booting on some HP systems (though they do boot on others). James and Willy reported the problem, and James did the bisection to find the commit that caused it: 498c5170 ("[IA64] pvops: paravirtualize ivt.S"). Two instructions were wrongly paravirtualized such that the _FROM_ macro had been used where _TO_ was intended.

Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: "Wilcox, Matthew R" <matthew.r.wilcox@intel.com>
Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 02 August 2008 (1 commit)
-
-
Committed by Tony Luck

After moving the include files there were a few clean-ups:

1) Some files used #include <asm-ia64/xyz.h>; changed to <asm/xyz.h>.
2) Some comments alerted maintainers to look at various header files to make matching updates if certain code were to be changed. Updated these comments to use the new include paths.
3) Some header files mentioned their own names in initial comments. Just deleted these self references.

Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 31 July 2008 (1 commit)
-
-
Committed by Jack Steiner

This series of patches adds a driver for the SGI UV GRU. The driver is still in development, but it currently compiles for both x86_64 and IA64. All simple regression tests pass on IA64. Although features remain to be added, I'd like to start the process of getting the driver into the kernel. Additional kernel drivers will depend on services provided by the GRU driver.

The GRU is a hardware resource located in the system chipset. The GRU contains memory that is mmaped into the user address space. This memory is used to communicate with the GRU to perform functions such as load/store, scatter/gather, bcopy, AMOs, etc. The GRU is directly accessed by user instructions using user virtual addresses. GRU instructions (e.g., bcopy) use user virtual addresses for operands.

The GRU contains a large TLB that is functionally very similar to processor TLBs. Because this external TLB holds user virtual addresses, it requires callouts from the core VM system when certain types of changes are made to the process page tables. There are several MMUOPS patches currently being discussed, but none has been accepted into the kernel. The GRU driver is built using version V18 from Andrea Arcangeli.

This patch contains the definitions of the hardware GRU data structures that are used by the driver to manage the GRU.

[akpm@linux-foundation.org: export hpage_shift]
Signed-off-by: Jack Steiner <steiner@sgi.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 27 July 2008 (3 commits)
-
-
Committed by Julia Lawall
There is a call to local_irq_restore in the normal exit case, so it would seem that there should be one on an error return as well.

The semantic patch that finds this problem is as follows (http://www.emn.fr/x-info/coccinelle/):

// <smpl>
@@
expression l;
expression E,E1,E2;
@@

local_irq_save(l);
... when != local_irq_restore(l)
    when != spin_unlock_irqrestore(E,l)
    when any
    when strict
(
if (...) { ... when != local_irq_restore(l)
               when != spin_unlock_irqrestore(E1,l)
+ local_irq_restore(l);
   return ...;
}
|
if (...)
+ {local_irq_restore(l);
   return ...;
+ }
|
spin_unlock_irqrestore(E2,l);
|
local_irq_restore(l);
)
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Roland McGrath
This extends wait_task_inactive() with a new argument so it can be used in a "soft" mode where it will check for the task changing state unexpectedly and back off. There is no change to existing callers. This lays the groundwork to allow robust, noninvasive tracing that can try to sample a blocked thread but back off safely if it wakes up.

Signed-off-by: Roland McGrath <roland@redhat.com>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Reviewed-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
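For reference, the extended interface looks roughly like the sketch below; the parameter name, return convention and caller shown here are assumptions for illustration, not text from the patch.

/*
 * Sketch: the caller passes the task state it last observed; a nonzero
 * return means the task really did stop while still in that state, and
 * 0 means it changed state so the caller should back off.
 */
unsigned long wait_task_inactive(struct task_struct *p, long match_state);

/* hypothetical "soft" caller */
if (!wait_task_inactive(child, TASK_TRACED))
        goto retry;             /* task woke up; leave it alone */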
-
Committed by FUJITA Tomonori

Add per-device dma_mapping_ops support for CONFIG_X86_64, as the POWER architecture does. This enables us to cleanly fix the Calgary IOMMU issue that some devices are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).

I think that per-device dma_mapping_ops support would also be helpful for KVM people to support PCI passthrough, but Andi thinks that this makes it difficult to support PCI passthrough (see the above thread). So I CC'ed this to the KVM camp. Comments are appreciated.

A pointer to dma_mapping_ops is added to struct dev_archdata. If the pointer is non-NULL, the DMA operations in asm/dma-mapping.h use it. If it is NULL, the system-wide dma_ops pointer is used as before.

If it is useful for KVM people, I plan to implement a mechanism to register a hook called when a new pci (or dma capable) device is created (it works with hot plugging). It enables IOMMUs to set up an appropriate dma_mapping_ops per device.

The major obstacle is that dma_mapping_error doesn't take a pointer to the device, unlike the other DMA operations, so x86 can't have dma_mapping_ops per device. Note that all the POWER IOMMUs use the same dma_mapping_error function, so this is not a problem for POWER, but x86 IOMMUs use different dma_mapping_error functions.

The first patch adds the device argument to dma_mapping_error. The patch is trivial but large since it touches lots of drivers and dma-mapping.h in all the architectures.

This patch: dma_mapping_error() doesn't take a pointer to the device, unlike the other DMA operations, so we can't have dma_mapping_ops per device. Note that POWER already has dma_mapping_ops per device, but all the POWER IOMMUs use the same dma_mapping_error function; x86 IOMMUs use different ones, hence the device argument.

[akpm@linux-foundation.org: fix sge]
[akpm@linux-foundation.org: fix svc_rdma]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix bnx2x]
[akpm@linux-foundation.org: fix s2io]
[akpm@linux-foundation.org: fix pasemi_mac]
[akpm@linux-foundation.org: fix sdhci]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix sparc]
[akpm@linux-foundation.org: fix ibmvscsi]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Avi Kivity <avi@qumranet.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
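To make the per-device fallback concrete, here is a minimal sketch of the idea; the field and helper names are illustrative assumptions, not the exact code in the patch.

/* dev_archdata gains a per-device ops pointer; helpers fall back to the
 * global dma_ops when it is NULL */
struct dev_archdata {
        struct dma_mapping_ops *dma_ops;        /* NULL => use global dma_ops */
};

static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
{
        if (dev && dev->archdata.dma_ops)
                return dev->archdata.dma_ops;
        return dma_ops;                         /* system-wide default */
}

static inline int dma_mapping_error(struct device *dev, dma_addr_t addr)
{
        /* the new 'dev' argument lets each IOMMU supply its own check */
        return get_dma_ops(dev)->mapping_error(dev, addr);
}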
-
- 26 July 2008 (2 commits)
-
-
Committed by Srinivasa D S

Currently, the list of kretprobe instances is stored in the kretprobe object (as used_instances/free_instances) and in the kretprobe hash table. We have one global kretprobe lock to serialise access to these lists. This causes only one kretprobe handler to execute at a time, which hurts system performance, particularly on SMP systems and when return probes are set on a lot of functions (like on all system calls).

The solution proposed here uses fine-grained locks that perform better on SMP systems compared to the present kretprobe implementation.

Solution:
1) Instead of having one global lock to protect the kretprobe instances present in the kretprobe object and the kretprobe hash table, we will have two locks: one lock protecting the kretprobe hash table and another lock for the kretprobe object.
2) We hold the lock present in the kretprobe object while we modify a kretprobe instance in the kretprobe object, and we hold the per-hash-list lock while modifying kretprobe instances present in that hash list. To prevent deadlock, we never grab a per-hash-list lock while holding a kretprobe lock.
3) We can remove used_instances from struct kretprobe, as we can track used instances of kretprobe instances using the kretprobe hash table.

Time duration for kernel compilation ("make -j 8") on an 8-way ppc64 system with return probes set on all system calls:

                Un-patched      cacheline       non-cacheline
                kernel          aligned patch   aligned patch
=====================================================================
real            9m46.784s       9m54.412s       10m2.450s
user            40m5.715s       40m7.142s       40m4.273s
sys             2m57.754s       2m58.583s       3m17.430s
=====================================================================

Time duration for kernel compilation ("make -j 8") on the same system, when the kernel is not probed:

=========================
real    9m26.389s
user    40m8.775s
sys     2m7.283s
=========================

Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
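A minimal sketch of the per-hash-list locking described in the solution above; the names and table size are assumptions, not the patch itself.

#define KPROBE_TABLE_BITS       6
#define KPROBE_TABLE_SIZE       (1 << KPROBE_TABLE_BITS)

/* instances are hashed by the probed task; each bucket has its own lock
 * instead of one global kretprobe_lock */
static struct hlist_head kretprobe_inst_table[KPROBE_TABLE_SIZE];
static spinlock_t kretprobe_table_locks[KPROBE_TABLE_SIZE];

static void kretprobe_hash_lock(struct task_struct *tsk,
                                struct hlist_head **head,
                                unsigned long *flags)
{
        unsigned long hash = hash_ptr(tsk, KPROBE_TABLE_BITS);

        spin_lock_irqsave(&kretprobe_table_locks[hash], *flags);
        *head = &kretprobe_inst_table[hash];
}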
-
Committed by Tony Luck
Six new system calls: signalfd4, eventfd2, epoll_create1, dup3, pipe2 and inotify_init1.

Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 25 July 2008 (6 commits)
-
-
Committed by Ulrich Drepper
This patch introduces the new syscall pipe2 which is like pipe but it also takes an additional parameter which takes a flag value. This patch implements the handling of O_CLOEXEC for the flag.

I did not add support for the new syscall for the architectures which have a special sys_pipe implementation. I think the maintainers of those archs have the chance to go with the unified implementation but that's up to them.

The implementation introduces do_pipe_flags. I did that instead of changing all callers of do_pipe because some of the callers are written in assembler. I would probably screw up changing the assembly code. To avoid breaking code do_pipe is now a small wrapper around do_pipe_flags. Once all callers are changed over to do_pipe_flags the old do_pipe function can be removed.

The following test must be adjusted for architectures other than x86 and x86-64 and in case the syscall numbers changed.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_pipe2
# ifdef __x86_64__
#  define __NR_pipe2 293
# elif defined __i386__
#  define __NR_pipe2 331
# else
#  error "need __NR_pipe2"
# endif
#endif

int main (void)
{
  int fd[2];

  if (syscall (__NR_pipe2, fd, 0) != 0)
    {
      puts ("pipe2(0) failed");
      return 1;
    }
  for (int i = 0; i < 2; ++i)
    {
      int coe = fcntl (fd[i], F_GETFD);
      if (coe == -1)
        {
          puts ("fcntl failed");
          return 1;
        }
      if (coe & FD_CLOEXEC)
        {
          printf ("pipe2(0) set close-on-exit for fd[%d]\n", i);
          return 1;
        }
    }
  close (fd[0]);
  close (fd[1]);

  if (syscall (__NR_pipe2, fd, O_CLOEXEC) != 0)
    {
      puts ("pipe2(O_CLOEXEC) failed");
      return 1;
    }
  for (int i = 0; i < 2; ++i)
    {
      int coe = fcntl (fd[i], F_GETFD);
      if (coe == -1)
        {
          puts ("fcntl failed");
          return 1;
        }
      if ((coe & FD_CLOEXEC) == 0)
        {
          printf ("pipe2(O_CLOEXEC) does not set close-on-exit for fd[%d]\n", i);
          return 1;
        }
    }
  close (fd[0]);
  close (fd[1]);

  puts ("OK");
  return 0;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Signed-off-by: Ulrich Drepper <drepper@redhat.com>
Acked-by: Davide Libenzi <davidel@xmailserver.org>
Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Johannes Weiner

Almost all users of this field need a PFN instead of a physical address, so replace node_boot_start with node_min_pfn.

[Lee.Schermerhorn@hp.com: fix spurious BUG_ON() in mark_bootmem()]
Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
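An illustrative (hypothetical) call-site conversion showing what the rename buys:

/* before: callers derived the node's first PFN from a physical address */
start_pfn = bdata->node_boot_start >> PAGE_SHIFT;

/* after: the bootmem descriptor stores the PFN directly */
start_pfn = bdata->node_min_pfn;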
-
Committed by Andi Kleen

Straightforward extensions for huge pages located in the PUD instead of PMDs.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Andi Kleen

The goal of this patchset is to support multiple hugetlb page sizes. This is achieved by introducing a new struct hstate structure, which encapsulates the important hugetlb state and constants (e.g. huge page size, number of huge pages currently allocated, etc). The hstate structure is then passed around to the code which requires these fields; callers will do the right thing regardless of the exact hstate they are operating on.

This patch adds the hstate structure, with a single global instance of it (default_hstate), and does the basic work of converting hugetlb to use the hstate. Future patches will add more hstate structures to allow different hugetlbfs mounts to have different page sizes.

[akpm@linux-foundation.org: coding-style fixes]
Acked-by: Adam Litke <agl@us.ibm.com>
Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
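A rough sketch of the hstate idea described above; the field names here are illustrative assumptions rather than the exact patch contents.

struct hstate {
        unsigned int order;             /* huge page size = PAGE_SIZE << order */
        unsigned long nr_huge_pages;
        unsigned long free_huge_pages;
        /* ... per-size counters and free lists ... */
};

static struct hstate default_hstate;    /* the single global instance */

static inline unsigned long huge_page_size(struct hstate *h)
{
        return (unsigned long)PAGE_SIZE << h->order;
}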
-
Committed by Jan Beulich
The double indirection here is not needed anywhere and hence (at least) confusing.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Johannes Weiner
There are a lot of places that define either a single bootmem descriptor or an array of them. Use only one central array with MAX_NUMNODES items instead.

Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Kyle McMartin <kyle@parisc-linux.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 22 July 2008 (1 commit)
-
-
Committed by Andi Kleen

This allows attributes to be generated dynamically and show/store functions to be shared between attributes. Right now most attributes are generated by special macros, with lots of duplicated code. With the attribute passed in, it is instead possible to attach some data to the attribute and then use that in shared low-level functions to do different things.

I need this for the dynamically generated bank attributes in the x86 machine check code, but it will allow some further cleanups.

I converted all in-tree users to the new show/store prototype. It's a single huge patch to avoid unbisectable sections.

Runtime tested: x86-32, x86-64
Compiled only: ia64, powerpc
Not compile tested/only grep converted: sh, arm, avr32

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
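For orientation, the shape of the new prototype is sketched below (assumed, not verbatim); the key point is that the attribute itself is now passed to show/store.

struct sysdev_attribute {
        struct attribute attr;
        ssize_t (*show)(struct sys_device *dev,
                        struct sysdev_attribute *attr, char *buf);
        ssize_t (*store)(struct sys_device *dev,
                         struct sysdev_attribute *attr,
                         const char *buf, size_t count);
};

/* a shared handler can recover per-attribute data with container_of()
 * on 'attr' instead of needing one macro-generated function per attribute */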
-
- 21 July 2008 (1 commit)
-
-
Committed by Alan Cox

Noted by Tony Luck, although I've done the patches differently and also removed some other bogus oddments.

Signed-off-by: Alan Cox <alan@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 20 July 2008 (4 commits)
-
-
Committed by Marcelo Tosatti
Flush the shadow mmu before removing regions to avoid stale entries.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Laurent Vivier

This patch enables coalesced MMIO for the ia64 architecture. It defines KVM_MMIO_PAGE_OFFSET and KVM_CAP_COALESCED_MMIO and enables the compilation of coalesced_mmio.c.

[akpm: fix compile error on ia64]
Signed-off-by: Laurent Vivier <Laurent.Vivier@bull.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Laurent Vivier

Modify the in_range() member of struct kvm_io_device to pass the length and the type of the I/O (write or read). This modification makes it possible to use kvm_io_device with coalesced MMIO.

Signed-off-by: Laurent Vivier <Laurent.Vivier@bull.net>
Signed-off-by: Avi Kivity <avi@qumranet.com>
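A sketch of the extended callback; member names other than in_range() are assumptions for illustration.

struct kvm_io_device {
        void (*read)(struct kvm_io_device *this,
                     gpa_t addr, int len, void *val);
        void (*write)(struct kvm_io_device *this,
                      gpa_t addr, int len, const void *val);
        /* now told how many bytes are accessed and whether it is a write,
         * so a coalesced-MMIO device can claim whole ranges */
        int (*in_range)(struct kvm_io_device *this,
                        gpa_t addr, int len, int is_write);
        void *private;
};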
-
Committed by Avi Kivity
Obsoleted by the vmx-specific per-cpu list.

Signed-off-by: Avi Kivity <avi@qumranet.com>
-
- 18 July 2008 (5 commits)
-
-
Committed by Bernhard Walle

This patch removes the experimental status of kdump on IA64. kdump has been available on IA64 for more than a year now and has proven to be stable.

Signed-off-by: Bernhard Walle <bwalle@suse.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Alex Chiang
acpi_map_lsapic tries to stuff a long into ia64_cpu_to_sapicid[], which can only hold ints, so let's fix that. We need to update the signature of acpi_map_cpu2node() too.

Signed-off-by: Alex Chiang <achiang@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Akiyama, Nobuyuki

module_free() refers to its first parameter before checking it, but it is called as below (in kernel/kprobes.c) with the first parameter always NULL. This happens when many probe points (>1024) are set by kprobes. I encountered this while using SystemTap, which can set many probes easily.

static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)
{
	...
	if (kip->nused == 0) {
		hlist_del(&kip->hlist);
		if (hlist_empty(&kprobe_insn_pages)) {
			...
		} else {
			module_free(NULL, kip->insns); //<<< 1st param always NULL
			kfree(kip);
		}
		return 1;
	}
	return 0;
}

Signed-off-by: Akiyama, Nobuyuki <akiyama.nobuyuk@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Denis V. Lunev
When dprintk is enabled the following warnings are generated:

arch/ia64/kernel/cpufreq/acpi-cpufreq.c: In function 'processor_set_pstate':
arch/ia64/kernel/cpufreq/acpi-cpufreq.c:54: warning: format '%x' expects type 'unsigned int', but argument 3 has type 's64'
arch/ia64/kernel/cpufreq/acpi-cpufreq.c: In function 'processor_get_pstate':
arch/ia64/kernel/cpufreq/acpi-cpufreq.c:76: warning: format '%x' expects type 'unsigned int', but argument 2 has type 's64'

Signed-off-by: Denis V. Lunev <den@openvz.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
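A typical fix for this class of warning (an illustrative snippet, not the literal patch) is to use a format that matches the 64-bit type, for example by casting:

/* hypothetical: 'value' has type s64, which is 'long' on ia64 */
dprintk("value = 0x%lx\n", (unsigned long)value);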
-
Committed by Takashi Iwai

Fix calls of smp_call_function*() in arch/ia64/kvm for recent API changes.

  CC [M]  arch/ia64/kvm/kvm-ia64.o
arch/ia64/kvm/kvm-ia64.c: In function 'handle_global_purge':
arch/ia64/kvm/kvm-ia64.c:398: error: too many arguments to function 'smp_call_function_single'
arch/ia64/kvm/kvm-ia64.c: In function 'kvm_vcpu_kick':
arch/ia64/kvm/kvm-ia64.c:1696: error: too many arguments to function 'smp_call_function_single'

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 17 July 2008 (2 commits)
-
-
Committed by Zhao Yakui
"idle=nomwait" disables the use of the MWAIT instruction from both C1 (C1_FFH) and deeper (C2C3_FFH) C-states. When MWAIT is unavailable, the BIOS and OS generally negotiate to use the HALT instruction for C1, and use IO accesses for deeper C-states. This option is useful for power and performance comparisons, and also to work around BIOS bugs where broken MWAIT support is advertised. http://bugzilla.kernel.org/show_bug.cgi?id=10807 http://bugzilla.kernel.org/show_bug.cgi?id=10914Signed-off-by: NZhao Yakui <yakui.zhao@intel.com> Signed-off-by: NLi Shaohua <shaohua.li@intel.com> Signed-off-by: NLen Brown <len.brown@intel.com> Signed-off-by: NAndi Kleen <ak@linux.intel.com>
-
Committed by Zhao Yakui
"idle=halt" limits the idle loop to using the halt instruction. No MWAIT, no IO accesses, no C-states deeper than C1. If something is broken in the idle code, "idle=halt" is a less severe workaround than "idle=poll" which disables all power savings. Signed-off-by: NZhao Yakui <yakui.zhao@intel.com> Signed-off-by: NLen Brown <len.brown@intel.com> Signed-off-by: NAndi Kleen <ak@linux.intel.com>
-
- 01 July 2008 (2 commits)
-
-
Committed by Doug Chapman

The symbol account_system_vtime is used by the kvm module but not exported. This breaks the build with CONFIG_VIRT_CPU_ACCOUNTING and CONFIG_KVM=m.

Signed-off-by: Doug Chapman <doug.chapman@hp.com>
Acked-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
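The fix presumably amounts to exporting the symbol so that a modular KVM can link against it; a sketch (the exact file and the GPL-only choice are assumptions):

/* next to the definition of account_system_vtime() in the ia64
 * virtual cpu accounting code */
#include <linux/module.h>

EXPORT_SYMBOL_GPL(account_system_vtime);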
-
Committed by Tony Luck

On a system where there are no hot-pluggable cpus, "additional_cpus" is still set to -1 at the point where we call per_cpu_scan_finalize(). If we didn't find an SRAT table, and so pick the default "32" for the number of cpus, when we get to:

	high_cpu = min(high_cpu + reserve_cpus, NR_CPUS);

we will end up initializing for just 31 cpus ... and so we will die horribly when bringing up cpu#32.

Problem introduced by: 2c6e6db4 "Minimize per_cpu reservations."

Acked-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 26 June 2008 (3 commits)
-
-
Committed by Jens Axboe
It's not even passed on to smp_call_function() anymore, since that was removed. So kill it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Committed by Jens Axboe

It's never used and the comments refer to nonatomic and retry interchangeably. So get rid of it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Committed by Jens Axboe
This converts ia64 to use the new helpers for smp_call_function() and friends, and adds support for smp_call_function_single().

Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 25 June 2008 (3 commits)
-
-
Committed by Julia Lawall

As noted by Akinobu Mita, alloc_bootmem and related functions never return NULL and always return a zeroed region of memory. Thus a NULL test or memset after calls to these functions is unnecessary.

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Tony Luck <tony.luck@intel.com>
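An illustrative (hypothetical) call site, showing what the cleanup looks like:

/* alloc_bootmem() panics on failure and returns zeroed memory, so
 * neither a NULL check nor a memset() is needed afterwards */
struct foo *table = alloc_bootmem(ntables * sizeof(*table));
/* previously: if (!table) panic(...);  memset(table, 0, ...); */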
-
Committed by Cliff Wickman
The fix applied in e0c6d97c "security hole in sn2_ptc_proc_write" didn't take into account the case where count==0 (which results in a buffer underrun when adding the trailing '\0'). Thanks to Andi Kleen for pointing this out.

Signed-off-by: Cliff Wickman <cpw@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
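A minimal sketch of the kind of guard the fix needs, with an assumed buffer size and the option parsing elided; this is not the literal sn2 code.

static ssize_t sn2_ptc_proc_write(struct file *file, const char __user *user,
                                  size_t count, loff_t *data)
{
        char opt[16];

        /* count == 0 would make opt[count - 1] underrun the buffer */
        if (count == 0 || count > sizeof(opt))
                return -EINVAL;
        if (copy_from_user(opt, user, count))
                return -EFAULT;
        opt[count - 1] = '\0';          /* safe: count >= 1 */

        /* ... parse opt ... */
        return count;
}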
-
Committed by Jes Sorensen
Call check_sal_cache_flush() after platform_setup(), as check_sal_cache_flush() now relies on being able to call platform vector code.

Problem was introduced by: 3463a93d "Update check_sal_cache_flush to use platform_send_ipi()"

Signed-off-by: Jes Sorensen <jes@sgi.com>
Tested-by: Alex Chiang <achiang@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 21 June 2008 (1 commit)
-
-
Committed by Cliff Wickman

Security hole in sn2_ptc_proc_write: it is possible to overrun a buffer with a write to this /proc file.

Signed-off-by: Cliff Wickman <cpw@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 17 June 2008 (1 commit)
-
-
Committed by Jack Steiner
Fix build error in the CONFIG_IA64_SGI_UV config. (GENERIC builds are ok.)

Signed-off-by: Jack Steiner <steiner@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 12 June 2008 (1 commit)
-
-
Committed by Alex Chiang
check_sal_cache_flush is used to detect broken firmware that drops pending interrupts.

The old implementation schedules a timer interrupt for itself in the future by getting the current value of the Interval Timer Counter + 1000 cycles, waits for the interrupt to be pended, calls SAL_CACHE_FLUSH, and finally checks to see if the interrupt is still pending. This implementation can cause problems for virtual machine code if the process of scheduling the timer interrupt takes more than 1000 cycles; the virtual machine can end up sleeping for several hundred years while waiting for the ITC to wrap around.

The fix is to use platform_send_ipi. The processor will still send an interrupt to itself, using the IA64_IPI_DM_INT delivery mode, which causes the IPI to look like an external interrupt. The rest of the SAL_CACHE_FLUSH + checking to see if the interrupt is still pending remains unchanged.

This fix has been boot tested successfully on:
	- intel tiger2
	- hp rx6600
	- hp rx5670

The rx5670 has known buggy firmware, where SAL_CACHE_FLUSH drops pending interrupts. A boot test on this machine showed this message on the console:

	SAL: SAL_CACHE_FLUSH drops interrupts; PAL_CACHE_FLUSH will be used instead

Which proves that the self-inflicted IPI approach is viable. And as expected, the other tested platforms correctly did not display the warning.

Signed-off-by: Alex Chiang <achiang@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-