- 26 August 2008, 1 commit
-
-
Committed by Linus Torvalds
This fixes a regression that was indirectly caused by commit 1184dc2f ("x86: modify Kconfig to allow up to 4096 cpus"). Allowing 4k CPUs is not practical at this time, because we still have a number of places that keep several 'cpumask_t's on the stack, and a 4k-bit cpumask is 512 bytes of stack space for each such variable. This literally caused functions like 'smp_call_function_mask' to have a 2.5kB stack frame, and several functions to have 2kB stack frames. With an 8kB stack total, smashing the stack was simply much too likely. At least bugzilla entry http://bugzilla.kernel.org/show_bug.cgi?id=11342 was due to this. The earlier commit to not inline load_module() into sys_init_module() fixed the particular symptoms that Alan Brunelle saw in that bugzilla entry, but the huge stack waste by cpumask_t's was the more direct cause. Some day we'll have allocation helpers that allocate large CPU masks dynamically, but in the meantime we simply cannot allow cpumasks this large.
Cc: Alan D. Brunelle <Alan.Brunelle@hp.com>
Cc: Mike Travis <travis@sgi.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
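The dynamic allocation helpers this commit anticipates took roughly the shape below in later kernels. This is a hedged illustration of the stack-versus-heap trade-off, not part of the patch; cpumask_var_t, alloc_cpumask_var() and friends are the names that eventually landed and are assumed here.

    #include <linux/cpumask.h>
    #include <linux/gfp.h>

    /* Before: a 4096-bit cpumask_t on the stack costs 512 bytes per variable. */
    static int mark_cpu_on_stack_mask(int cpu)
    {
            cpumask_t allowed = CPU_MASK_NONE;      /* 512 bytes of stack with NR_CPUS=4096 */

            cpumask_set_cpu(cpu, &allowed);
            /* ... use 'allowed' ... */
            return 0;
    }

    /* After (the helpers that later landed): with CONFIG_CPUMASK_OFFSTACK=y
     * the mask lives on the heap and only a pointer sits on the stack. */
    static int mark_cpu_on_heap_mask(int cpu)
    {
            cpumask_var_t allowed;

            if (!alloc_cpumask_var(&allowed, GFP_KERNEL))
                    return -ENOMEM;
            cpumask_clear(allowed);
            cpumask_set_cpu(cpu, allowed);
            /* ... use 'allowed' ... */
            free_cpumask_var(allowed);
            return 0;
    }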
-
- 15 August 2008, 1 commit
-
-
Committed by Pavel Machek
Adjust experimental tags in Kconfig, and update the config to note that i386/x86_64 is now a single architecture.
Signed-off-by: Pavel Machek <pavel@suse.cz>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 12 August 2008, 1 commit
-
-
Committed by Rusty Russell
Move the get_user_pages_fast fallback implementation out of line, make it a weak symbol, and get rid of CONFIG_HAVE_GET_USER_PAGES_FAST. Export the symbol to modules so lguest can use it.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
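For illustration, a weak, out-of-line fallback of roughly this shape lets architectures without a fast path still satisfy callers. This is a sketch assuming the 2008-era get_user_pages() signature; the actual mm/util.c code may differ in detail.

    #include <linux/mm.h>
    #include <linux/sched.h>
    #include <linux/module.h>

    /* Weak default: architectures with a real fast path override this symbol. */
    int __weak get_user_pages_fast(unsigned long start, int nr_pages,
                                   int write, struct page **pages)
    {
            struct mm_struct *mm = current->mm;
            int ret;

            /* Slow but correct: take mmap_sem and walk via get_user_pages(). */
            down_read(&mm->mmap_sem);
            ret = get_user_pages(current, mm, start, nr_pages,
                                 write, 0, pages, NULL);
            up_read(&mm->mmap_sem);

            return ret;
    }
    EXPORT_SYMBOL_GPL(get_user_pages_fast);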
-
- 27 July 2008, 3 commits
-
-
Committed by Nick Piggin
Implement get_user_pages_fast without locking in the fastpath on x86. Do an optimistic lockless pagetable walk, without taking mmap_sem or any page table locks. Page table existence is guaranteed by turning interrupts off (combined with the fact that we're always looking up the current mm, this means we can do the lockless page table walk within the constraints of the TLB shootdown design). Basically we can do this lockless pagetable walk in a similar manner to the way the CPU's pagetable walker does not have to take any locks to find present ptes.

This patch (combined with the subsequent ones to convert direct IO to use it) was found to give about a 10% performance improvement on a 2-socket, 8-core Intel Xeon system running an OLTP workload on DB2 v9.5:

    "To test the effects of the patch, an OLTP workload was run on an IBM
    x3850 M2 server with 2 processors (quad-core Intel Xeon processors at
    2.93 GHz) using IBM DB2 v9.5 running a Linux 2.6.24rc7 kernel.
    Comparing runs with and without the patch resulted in an overall
    performance benefit of ~9.8%. Correspondingly, oprofiles showed that
    samples from the __up_read and __down_read routines that are seen
    during thread contention for system resources were reduced from 2.8%
    down to .05%. Monitoring the /proc/vmstat output from the patched run
    showed that the counter for fast_gup contained a very high number
    while the fast_gup_slow value was zero."

(fast_gup is the old name for get_user_pages_fast; fast_gup_slow is a counter we had for the number of times the slowpath was invoked).

The main reason for the improvement is that DB2 has multiple threads each issuing direct-IO. Direct-IO uses get_user_pages, and thus the threads contend the mmap_sem cacheline, and can also contend on page table locks. I would anticipate larger performance gains on larger systems; however, I think DB2 uses an adaptive mix of threads and processes, so it could be that thread contention remains pretty constant as machine size increases. In which case, we're stuck with "only" a 10% gain.

The downside of using get_user_pages_fast is that if there is not a pte with the correct permissions for the access, we end up falling back to get_user_pages, and so get_user_pages_fast is a bit of extra work. However this should not be the common case in most performance-critical code.

[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: Kconfig fix]
[akpm@linux-foundation.org: Makefile fix/cleanup]
[akpm@linux-foundation.org: warning fix]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Dave Kleikamp <shaggy@austin.ibm.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Badari Pulavarty <pbadari@us.ibm.com>
Cc: Zach Brown <zach.brown@oracle.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Reviewed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
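A hedged sketch of how a direct-IO-style caller would use the new interface; the helper names and error handling below are hypothetical, and only get_user_pages_fast() and put_page() are real kernel calls.

    #include <linux/mm.h>

    /* Pin the user buffer backing an O_DIRECT-style request. */
    static int pin_user_buffer(unsigned long uaddr, int nr_pages, int write,
                               struct page **pages)
    {
            int pinned;

            /* Fast path: no mmap_sem, no page table locks; the x86 code
             * falls back internally if a pte lacks the right permissions. */
            pinned = get_user_pages_fast(uaddr, nr_pages, write, pages);
            if (pinned < 0)
                    return pinned;
            if (pinned < nr_pages) {
                    /* Partial pin: release what we got and report failure. */
                    while (pinned-- > 0)
                            put_page(pages[pinned]);
                    return -EFAULT;
            }
            return 0;
    }

    /* The caller must drop the references when the IO completes. */
    static void unpin_user_buffer(struct page **pages, int nr_pages)
    {
            int i;

            for (i = 0; i < nr_pages; i++)
                    put_page(pages[i]);
    }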
-
Committed by Huang Ying
This patch implements device state save/restore before and after kexec. This patch, together with the features in the kexec_jump patch, can be used for the following:

- A simple hibernation implementation without ACPI support. You can kexec a hibernating kernel, save the memory image of the original system and shut down the system. When resuming, you restore the memory image of the original system via an ordinary kexec load, then jump back.
- Kernel/system debugging through system snapshots. You can make a system snapshot, jump back, do something, and make another system snapshot.
- Cooperative multi-kernel/system. With kexec jump, you can switch between several kernels/systems quickly without the boot process, except for the first time. This appears like swapping a whole kernel/system out/in.
- A general method to call a program in physical mode (paging turned off). This can be used to invoke BIOS code under Linux.

The following user-space tools can be used with kexec jump:

- kexec-tools needs to be patched to support kexec jump. The patches and the precompiled kexec can be downloaded from the following URLs:
    source: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec-tools-src_git_kh10.tar.bz2
    patches: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec-tools-patches_git_kh10.tar.bz2
    binary: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec_git_kh10
- makedumpfile with patches is used as the memory image saving tool; it can exclude free pages from the original kernel memory image file. The patches and the precompiled makedumpfile can be downloaded from the following URLs:
    source: http://khibernation.sourceforge.net/download/release_v10/makedumpfile/makedumpfile-src_cvs_kh10.tar.bz2
    patches: http://khibernation.sourceforge.net/download/release_v10/makedumpfile/makedumpfile-patches_cvs_kh10.tar.bz2
    binary: http://khibernation.sourceforge.net/download/release_v10/makedumpfile/makedumpfile_cvs_kh10
- An initramfs image can be used as the root file system of the kexeced kernel. An initramfs image built with "BuildRoot" can be downloaded from the following URL:
    initramfs image: http://khibernation.sourceforge.net/download/release_v10/initramfs/rootfs_cvs_kh10.gz
  All the user-space tools above are included in the initramfs image.

Usage example of simple hibernation:

1. Compile and install the patched kernel with the following options selected:
    CONFIG_X86_32=y
    CONFIG_RELOCATABLE=y
    CONFIG_KEXEC=y
    CONFIG_CRASH_DUMP=y
    CONFIG_PM=y
    CONFIG_HIBERNATION=y
    CONFIG_KEXEC_JUMP=y
2. Build an initramfs image containing kexec-tools and makedumpfile, or download the pre-built initramfs image, called rootfs.gz in the following text.
3. Prepare a partition to save the memory image of the original kernel, called the hibernating partition in the following text.
4. Boot the kernel compiled in step 1 (kernel A).
5. In kernel A, load the kernel compiled in step 1 (kernel B) with /sbin/kexec. The shell command line can be as follows:
    /sbin/kexec --load-preserve-context /boot/bzImage --mem-min=0x100000 --mem-max=0xffffff --initrd=rootfs.gz
6. Boot kernel B with the following shell command line:
    /sbin/kexec -e
7. Kernel B will boot as a normal kexec. In kernel B the memory image of kernel A can be saved into the hibernating partition as follows:
    jump_back_entry=`cat /proc/cmdline | tr ' ' '\n' | grep kexec_jump_back_entry | cut -d '='`
    echo $jump_back_entry > kexec_jump_back_entry
    cp /proc/vmcore dump.elf
   Then you can shut down the machine as normal.
8. Boot the kernel compiled in step 1 (kernel C). Use rootfs.gz as the root file system.
9. In kernel C, load the memory image of kernel A as follows:
    /sbin/kexec -l --args-none --entry=`cat kexec_jump_back_entry` dump.elf
10. Jump back to kernel A as follows:
    /sbin/kexec -e
    Then kernel A is resumed.

Implementation point: to support jumping between two kernels, before jumping to (executing) the new kernel and before jumping back to the original kernel, the devices are put into a quiescent state and the state of the devices and CPU is saved. After jumping back from the kexeced kernel and after jumping to the new kernel, the state of the devices and CPU is restored accordingly. The device/CPU state save/restore code of software suspend is called to implement the corresponding functionality.

Known issues:
- Because the number of segments supported by sys_kexec_load is limited, a hibernation image with many segments may not be loadable. This is planned to be eliminated by adding a new flag to sys_kexec_load so that an image can be loaded with multiple sys_kexec_load invocations.

Now, only the i386 architecture is supported.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Nigel Cunningham <nigel@nigel.suspend2.net>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Huang Ying
This patch provides an enhancement to kexec/kdump. It implements the following features:
- Backup/restore of memory used by the original kernel before/after kexec.
- Save/restore of CPU state before/after kexec.

The features of this patch can be used as a general method to call a program in physical mode (paging turned off). This can be used to call BIOS code under Linux.

kexec-tools needs to be patched to support kexec jump. The patches and the precompiled kexec can be downloaded from the following URLs:
    source: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec-tools-src_git_kh10.tar.bz2
    patches: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec-tools-patches_git_kh10.tar.bz2
    binary: http://khibernation.sourceforge.net/download/release_v10/kexec-tools/kexec_git_kh10

Usage example of calling some physical mode code and returning:
1. Compile and install the patched kernel with the following options selected:
    CONFIG_X86_32=y
    CONFIG_KEXEC=y
    CONFIG_PM=y
    CONFIG_KEXEC_JUMP=y
2. Build the patched kexec-tools or download the pre-built one.
3. Build a physical mode executable, named for example "phy_mode".
4. Boot the kernel compiled in step 1.
5. Load the physical mode executable with /sbin/kexec. The shell command line can be as follows:
    /sbin/kexec --load-preserve-context --args-none phy_mode
6. Call the physical mode executable with the following shell command line:
    /sbin/kexec -e

Implementation point: to support jumping without reserving memory, one shadow backup page (source page) is allocated for each page used by the kexeced code image (destination page). During kexec_load, the image of the kexeced code is loaded into the source pages, and before executing, the destination pages and the source pages are swapped, so the contents of the destination pages are backed up. Before jumping to the kexeced code image and after jumping back to the original kernel, the destination pages and the source pages are swapped again. The C ABI (calling convention) is used as the communication protocol between the kernel and the called code. A flag named KEXEC_PRESERVE_CONTEXT for sys_kexec_load is added to indicate that the loaded kernel image is used for jumping back.

Now, only the i386 architecture is supported.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Nigel Cunningham <nigel@nigel.suspend2.net>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
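As a hedged userspace sketch (not from the patch), loading an image for jumping back boils down to passing the new flag to the kexec_load syscall; the flag value and segment layout are assumptions to be checked against linux/kexec.h.

    #include <stddef.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #ifndef KEXEC_PRESERVE_CONTEXT
    #define KEXEC_PRESERVE_CONTEXT 0x00000002       /* assumed value; verify in your headers */
    #endif

    struct kexec_segment {                          /* assumed to mirror the kernel's layout */
            const void *buf;
            size_t bufsz;
            const void *mem;
            size_t memsz;
    };

    /* Ask the kernel to keep enough state so that "/sbin/kexec -e"
     * can jump into the image and later jump back. */
    static long kexec_load_preserve(unsigned long entry,
                                    unsigned long nr_segments,
                                    struct kexec_segment *segments)
    {
            return syscall(SYS_kexec_load, entry, nr_segments, segments,
                           KEXEC_PRESERVE_CONTEXT);
    }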
-
- 26 July 2008, 3 commits
-
-
Committed by Ingo Molnar
First step to adding RDC321x support to the default PC architecture.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Michael Buesch
This patch adds functionality to the gpio-lib subsystem to make it possible to enable the gpio-lib code even if the architecture code didn't request to get it built in. The architecture code does still need to implement the gpiolib accessor functions in its asm/gpio.h file. This patch adds the implementations for x86 and PPC. With these changes it is possible to run generic GPIO expansion cards on every architecture that implements the trivial wrapper functions. Support for more architectures can easily be added.
Signed-off-by: Michael Buesch <mb@bu3sch.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: David Brownell <david-b@pacbell.net>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jean Delvare <khali@linux-fr.org>
Cc: Samuel Ortiz <sameo@openedhand.com>
Cc: Kumar Gala <galak@gate.crashing.org>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
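The "trivial wrapper functions" an architecture must provide look roughly like this sketch of an asm/gpio.h, assuming gpiolib's __gpio_* helpers; details vary per architecture.

    /* arch/<arch>/include/asm/gpio.h (sketch) */
    #include <asm-generic/gpio.h>

    /* Hand the generic accessors straight to gpiolib. */
    static inline int gpio_get_value(unsigned int gpio)
    {
            return __gpio_get_value(gpio);
    }

    static inline void gpio_set_value(unsigned int gpio, int value)
    {
            __gpio_set_value(gpio, value);
    }

    static inline int gpio_cansleep(unsigned int gpio)
    {
            return __gpio_cansleep(gpio);
    }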
-
Committed by Johannes Berg
In many cases, especially in networking, it can be beneficial to know at compile time whether the architecture can do unaligned accesses efficiently. This patch introduces a new Kconfig symbol, HAVE_EFFICIENT_UNALIGNED_ACCESS, for that purpose and adds it to the powerpc and x86 architectures. It also adds some documentation about alignment and networking, and especially one intended use of this symbol.
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Acked-by: Ingo Molnar <mingo@elte.hu> [x86 architecture part]
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
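A hedged example of the intended use: the helper below is hypothetical, but it shows how code can branch on the new symbol at compile time and fall back to the kernel's get_unaligned() accessor elsewhere.

    #include <linux/types.h>
    #include <asm/byteorder.h>
    #include <asm/unaligned.h>

    /* Read a big-endian 32-bit field at an arbitrary (possibly unaligned)
     * offset in a packet buffer. */
    static inline u32 read_be32_maybe_unaligned(const void *p)
    {
    #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
            /* x86/powerpc: a plain unaligned load is cheap. */
            return be32_to_cpu(*(const __be32 *)p);
    #else
            /* Other architectures: byte-wise access avoids traps/fixups. */
            return be32_to_cpu(get_unaligned((const __be32 *)p));
    #endif
    }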
-
- 25 July 2008, 1 commit
-
-
Committed by Rik van Riel
In order to be able to debug things like the X server and programs using the PPC Cell SPUs, the debugger needs to be able to access device memory through ptrace and /proc/pid/mem. This patch adds the generic_access_phys access function and puts the hooks in place to allow access_process_vm to access device or PPC Cell SPU memory.
[riel@redhat.com: Add documentation for the vm_ops->access function]
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Dave Airlie <airlied@linux.ie>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
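A sketch of how a device driver could opt in: the driver name, mmap handler and protection choices below are hypothetical; only generic_access_phys and the vm_ops->access hook come from this patch, and the hook is only useful on architectures that provide ioremap_prot().

    #include <linux/fs.h>
    #include <linux/mm.h>

    static struct vm_operations_struct mydev_vm_ops = {
            /* Lets access_process_vm (ptrace, /proc/pid/mem) read and write
             * the device pages backing this mapping. */
            .access = generic_access_phys,
    };

    static int mydev_mmap(struct file *file, struct vm_area_struct *vma)
    {
            vma->vm_ops = &mydev_vm_ops;
            vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
            return io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
                                      vma->vm_end - vma->vm_start,
                                      vma->vm_page_prot);
    }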
-
- 18 July 2008, 1 commit
-
-
Committed by Yinghai Lu
Only supports memory below max_low_pfn.
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 15 July 2008, 1 commit
-
-
Committed by Thomas Gleixner
Set the default to n for MEMTEST and MTRR_SANITIZER and fix the help texts.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 11 July 2008, 7 commits
-
-
Committed by Ingo Molnar
Fix:
    arch/x86/pci/built-in.o: In function `pci_subsys_init':
    visws.c:(.init.text+0xfc5): undefined reference to `pci_direct_conf1'
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Ingo Molnar
Fix:
    arch/x86/kernel/built-in.o: In function `visws_early_detect':
    : undefined reference to `mach_get_smp_config_quirk'
    arch/x86/kernel/built-in.o: In function `visws_early_detect':
    : undefined reference to `mach_find_smp_config_quirk'
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Ingo Molnar
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Ingo Molnar
Remove the VISWS Kconfig complications, now that it's supported by the generic architecture.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Ingo Molnar
This is the big move: flip VISWS over to generic arch support. From this commit on, CONFIG_X86_VISWS is just another (default-disabled) option that turns on certain quirks - no other complications.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Ingo Molnar
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Ingo Molnar
Add early init quirks for VisWS. This gradually turns the VISWS subarch into a generic PC architecture.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 10 July 2008, 2 commits
-
-
Committed by Ingo Molnar
These two sub-architectures want PCI to be default-on, not default-off.
Reported-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by FUJITA Tomonori
AMD_IOMMU should depend on IOMMU_HELPER, since it uses the IOMMU helper functions. SWIOTLB requires IOMMU_HELPER, so declaring that AMD_IOMMU depends on SWIOTLB properly fixes the problem.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 08 July 2008, 11 commits
-
-
Committed by Yinghai Lu
Use ACPI_NUMA directly and move srat_32.c to mm/.
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yinghai Lu
We already have the same SRAT handling interface for 32-bit.
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Jeremy Fitzhardinge
Rather than just jumping to 0 when there's a missing operation, raise a BUG.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
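A minimal sketch of the pattern (the stub name is hypothetical): point unimplemented operation slots at a stub that BUG()s, so a failure produces a backtrace naming the stub instead of an opaque jump to address 0.

    #include <linux/kernel.h>
    #include <linux/bug.h>

    /* Used in place of a NULL operation pointer. */
    static void xen_unimplemented_op(void)
    {
            BUG();  /* crash loudly, with a backtrace pointing here */
    }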
-
Committed by Yinghai Lu
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Bernhard Walle
I would suggest removing the "experimental" status from kdump. Kdump has been in the kernel for a long time now and is used by enterprise distributions. I don't think that "experimental" is true any more.
Signed-off-by: Bernhard Walle <bwalle@suse.de>
Cc: vgoyal@redhat.com
Cc: kexec@lists.infradead.org
Cc: Bernhard Walle <bwalle@suse.de>
Cc: akpm@linux-foundation.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Mike Travis
* Introduce a new PER_CPU macro called "EARLY_PER_CPU". This is used by some per_cpu variables that are initialized and accessed before per_cpu areas are allocated. ["Early" in respect to per_cpu variables means "earlier than the per_cpu areas have been set up".]

  This patchset adds these new macros:

      DEFINE_EARLY_PER_CPU(_type, _name, _initvalue)
      EXPORT_EARLY_PER_CPU_SYMBOL(_name)
      DECLARE_EARLY_PER_CPU(_type, _name)
      early_per_cpu_ptr(_name)
      early_per_cpu_map(_name, _idx)
      early_per_cpu(_name, _cpu)

  The DEFINE macro defines the per_cpu variable as well as the early map and pointer. It also initializes the per_cpu variable and map elements to "_initvalue". The early_* macros provide access to the initial map (usually set up during system init) and the early pointer. This pointer is initialized to point to the early map but is then NULL'ed when the actual per_cpu areas are set up. After that the per_cpu variable is the correct way to access the variable.

  The early_per_cpu() macro is not very efficient but does show how to access the variable if you have a function that can be called both "early" and "late". It tests whether the early ptr is non-NULL, in which case it is still valid; otherwise, the per_cpu variable is used instead:

      #define early_per_cpu(_name, _cpu)                \
              (early_per_cpu_ptr(_name) ?               \
                      early_per_cpu_ptr(_name)[_cpu] :  \
                      per_cpu(_name, _cpu))

  A better method is to actually check the pointer manually. In the case below, numa_set_node can be called both "early" and "late":

      void __cpuinit numa_set_node(int cpu, int node)
      {
              int *cpu_to_node_map = early_per_cpu_ptr(x86_cpu_to_node_map);

              if (cpu_to_node_map)
                      cpu_to_node_map[cpu] = node;
              else
                      per_cpu(x86_cpu_to_node_map, cpu) = node;
      }

* Add a flag "arch_provides_topology_pointers" that indicates pointers to topology cpumask_t maps are available. Otherwise, use the function returning the cpumask_t value. This is useful if the cpumask_t set size is very large, to avoid copying data onto/off of the stack.

* The coverage of CONFIG_DEBUG_PER_CPU_MAPS has been increased while the non-debug case has been optimized a bit.

* Remove an "unreferenced variable" compiler warning in drivers/base/topology.c.

* Clean up an #ifdef in setup.c.

For inclusion into the sched-devel/latest tree.

Based on:
    git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
    + sched-devel/latest .../mingo/linux-2.6-sched-devel.git

Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
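A short usage sketch assembled from the macros described above; the variable name and the BAD_APICID initial value are borrowed from the x86 conversion purely for illustration.

    /* Definition (file scope): the per_cpu variable plus an early
     * boot-time map, both initialized to BAD_APICID. */
    DEFINE_EARLY_PER_CPU(u16, x86_cpu_to_apicid, BAD_APICID);
    EXPORT_EARLY_PER_CPU_SYMBOL(x86_cpu_to_apicid);

    /* In a header, so other files can reference it: */
    DECLARE_EARLY_PER_CPU(u16, x86_cpu_to_apicid);

    /* A reader that may run either before or after the per_cpu areas
     * exist; early_per_cpu() picks the early map or the real variable. */
    static u16 cpu_apicid(int cpu)
    {
            return early_per_cpu(x86_cpu_to_apicid, cpu);
    }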
-
Committed by Mike Travis
* Increase the limit of NR_CPUS to 4096 and introduce a boolean called "MAXSMP" which, when set (e.g. by "allyesconfig"), will set NR_CPUS = 4096 and NODES_SHIFT = 9 (512).
* Change the max setting for NODES_SHIFT from 15 to 9 to accurately reflect the real limit.
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Yinghai Lu
"Maciej W. Rozycki" <macro@linux-mips.org> said:

> Given X86_64 selects X86_LOCAL_APIC I am not sure the redundancy seen
> above does not actually obscure the logic behind... I think:
>
>     depends on X86_LOCAL_APIC && !X86_VISWS
>
> would be clearer and get the same.

Suggested-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Len Brown <lenb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yinghai Lu
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yinghai Lu
v2: separate "fix for compiling when MPPARSE is not set" into another patch; make X86_MPPARSE selectable only when ACPI is set, and have X86_MPPARSE be set if ACPI is not set.
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Len Brown <lenb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yinghai Lu
We already have summit and others depending on genericarch, so use genericarch only.
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 04 July 2008, 1 commit
-
-
Committed by Joerg Roedel
This patch replaces the short description text for AMD IOMMU in Kconfig with a more verbose one.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Cc: iommu@lists.linux-foundation.org
Cc: bhavna.sarathy@amd.com
Cc: robert.richter@amd.com
Cc: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 01 July 2008, 1 commit
-
-
Committed by Thomas Gleixner
Commit 43238382 ("x86: change size of node ids from u8 to s16") set the range for NODES_SHIFT to 1..15. The possible range is 1..9.
Fixes Bugzilla #10726.
Reported-by: Dave Jones <davej@codemonkey.org.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 30 June 2008, 1 commit
-
-
Committed by Dmitry Baryshkov
Signed-off-by: Dmitry Baryshkov <dbaryshkov@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 27 June 2008, 3 commits
-
-
Committed by Ingo Molnar
Fix a typo causing:
    arch/x86/kernel/built-in.o: In function `__unmap_single':
    amd_iommu.c:(.text+0x17771): undefined reference to `iommu_area_free'
    arch/x86/kernel/built-in.o: In function `__map_single':
    amd_iommu.c:(.text+0x1797a): undefined reference to `iommu_area_alloc'
    amd_iommu.c:(.text+0x179a2): undefined reference to `iommu_area_alloc'
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Ingo Molnar
Fix:
    arch/x86/kernel/amd_iommu_init.c:247: warning: 'struct acpi_table_header' declared inside parameter list
    arch/x86/kernel/amd_iommu_init.c:247: warning: its scope is only this definition or declaration, which is probably not what you want
    arch/x86/kernel/amd_iommu_init.c: In function 'find_last_devid_acpi':
    arch/x86/kernel/amd_iommu_init.c:257: error: dereferencing pointer to incomplete type
    arch/x86/kernel/amd_iommu_init.c:265: error: dereferencing pointer to incomplete type
The AMD IOMMU code depends on ACPI facilities.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Joerg Roedel
This patch adds the Kconfig entry for the AMD IOMMU driver.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Cc: iommu@lists.linux-foundation.org
Cc: bhavna.sarathy@amd.com
Cc: Sebastian.Biemueller@amd.com
Cc: robert.richter@amd.com
Cc: joro@8bytes.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 26 June 2008, 1 commit
-
-
Committed by Jens Axboe
This converts x86, x86-64, and Xen to use the new helpers for smp_call_function() and friends, and adds support for smp_call_function_single().
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
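A hedged usage sketch of smp_call_function_single(); the exact signature changed around this time, so the four-argument generic-helpers form is assumed here, and the helper functions themselves are hypothetical.

    #include <linux/smp.h>

    /* Runs on the target CPU, in IPI context with interrupts disabled. */
    static void bump_counter(void *info)
    {
            (*(unsigned long *)info)++;
    }

    /* Run bump_counter() on one specific CPU and wait for it to finish. */
    static unsigned long bump_on_cpu(int cpu)
    {
            unsigned long counter = 0;

            /* wait=1: do not return until the function has run on 'cpu'. */
            smp_call_function_single(cpu, bump_counter, &counter, 1);
            return counter;
    }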
-
- 25 June 2008, 1 commit
-
-
Committed by Gerd Hoffmann
This patch updates the KVM host code to use the pvclock structs and functions, thereby making it compatible with Xen. The patch also fixes an initialization bug: on SMP systems the per-cpu area has two different locations, early at boot and after CPU bringup. kvmclock must take that into account when registering the physical address within the host.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
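For orientation, the shared pvclock per-vCPU time structure has roughly the layout below; the field names follow the Xen/KVM pvclock ABI as I understand it, so treat the exact padding and sizes as indicative rather than authoritative.

    #include <linux/types.h>

    struct pvclock_vcpu_time_info {
            u32 version;            /* even: stable; odd: update in progress */
            u32 pad0;
            u64 tsc_timestamp;      /* host TSC value at the last update */
            u64 system_time;        /* nanoseconds since boot at tsc_timestamp */
            u32 tsc_to_system_mul;  /* scale factor: guest TSC delta -> ns */
            s8  tsc_shift;          /* shift applied before the multiply */
            u8  pad[3];
    } __attribute__((__packed__));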
-