- 15 April 2009, 1 commit

Committed by Hugh Dickins
Now that shmem's divisions by zero and SHMEM_MAX_BYTES are fixed, let powerpc 256kB pages coexist with CONFIG_SHMEM again.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>

- 7 April 2009, 1 commit

Committed by Geert Uytterhoeven
Commit 28794d34 ("powerpc/kconfig: Kill PPC_MULTIPLATFORM") broke KEXEC by making it depend on BOOK3S, while the symbol is actually called PPC_BOOK3S.

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
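The repair amounts to a one-word change in the dependency expression. A minimal Kconfig sketch of the corrected entry (the prompt text is illustrative, not the exact hunk from the commit):

    config KEXEC
        bool "kexec system call"
        # BOOK3S is not a symbol the powerpc Kconfig defines; the
        # "Server" architecture variant is spelled PPC_BOOK3S, so the
        # dependency must name that symbol for KEXEC to remain visible.
        depends on PPC_BOOK3S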

- 1 April 2009, 1 commit

Committed by Akinobu Mita
CONFIG_DEBUG_PAGEALLOC is now supported by x86, powerpc, sparc64, and s390. This patch implements it for the remaining architectures by filling pages with poison byte patterns after free_pages() and verifying those patterns before alloc_pages(). This generic implementation cannot detect an invalid page access immediately, but an invalid read may later cause a bad dereference through the poisoned memory, and an invalid write can be detected after a long delay, when the poison is checked.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 31 March 2009, 1 commit

Committed by Josh Boyer
The recent addition of CONFIG_LOWMEM_CAM_BOOL and CONFIG_LOWMEM_CAM_NUM causes the latter to show up during 'make oldconfig' in configs that do not need it. Make LOWMEM_CAM_NUM depend on FSL_BOOKE.

Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
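A minimal sketch of the guarded option; the prompt wording is illustrative, and the default of 3 is taken from the 29 January commit below, which says the mapping code historically used up to three CAM entries:

    config LOWMEM_CAM_NUM
        int "Number of CAM entries used to map low memory"
        # Only FSL BookE parts map lowmem through CAM entries, so hide
        # the question from every other platform during 'make oldconfig'.
        depends on FSL_BOOKE
        default 3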

- 30 March 2009, 1 commit

Committed by Matt LaPlante
Signed-off-by: Matt LaPlante <kernel1@cyberdogtech.com>
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>

- 11 March 2009, 1 commit

Committed by Benjamin Herrenschmidt
CONFIG_PPC_MULTIPLATFORM is a remnant of the pre-powerpc days and isn't really meaningful anymore. It was basically equivalent to PPC64 || 6xx. This removes it along with the following changes:

- 32-bit platforms that relied on PPC32 && PPC_MULTIPLATFORM now rely on 6xx, which is what they want anyway.
- A new symbol, PPC_BOOK3S, is defined to represent compliance with the "Server" variant of the architecture. It is set when either 6xx or PPC64 is set, and opens the door for a future 64-bit BOOK3E (sketched below).
- 64-bit platforms that relied on PPC64 && PPC_MULTIPLATFORM now use PPC64 && PPC_BOOK3S.
- A separate and selectable CONFIG_PPC_OF_BOOT_TRAMPOLINE option is now used to control the use of prom_init.c.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
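A rough Kconfig sketch of the two new symbols as the commit describes them; the prompt, default, and exact placement are illustrative assumptions, not the actual hunk:

    config PPC_BOOK3S
        # "Book3S" is the Server variant of the architecture: true for
        # classic 32-bit (6xx) parts and for all current 64-bit kernels,
        # which is what PPC_MULTIPLATFORM used to approximate.
        def_bool y
        depends on 6xx || PPC64

    config PPC_OF_BOOT_TRAMPOLINE
        bool "Support booting from Open Firmware or yaboot"
        # Controls whether prom_init.c is built into the kernel image.
        default y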

- 23 February 2009, 5 commits

Committed by Ilya Yanok
This patch rewrites the consistent DMA allocation support to use the vmalloc layer to allocate virtual memory space from the vmalloc pool, and gets rid of CONFIG_CONSISTENT_{START,SIZE}. This greatly simplifies the code by effectively removing a custom allocator we had for virtual space.

Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

Committed by Steven Rostedt
This patch gets function graph tracing working with dynamic function tracer on PowerPC32.

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

Committed by Steven Rostedt
This patch ports the function graph tracer for PowerPC, but only for static function tracing.

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

Committed by Steven Rostedt
This is the port of the function graph tracer to PowerPC with dynamic tracing. Geoff Levand tested it on PS3.

Tested-by: Geoff Levand <geoffrey.levand@am.sony.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

Committed by Steven Rostedt
This is a port of the function graph tracer that was written by Frederic Weisbecker for x86. This only works for PPC64 at the moment and only for static tracing. PPC32 and dynamic function graph tracing support will come later.

The tracer produces a visual trace of function calls:

    # tracer: function_graph
    #
    # CPU  DURATION                  FUNCTION CALLS
    # |     |   |                     |   |   |   |
     0)   2.224 us    |                          }
     0) ! 271.024 us  |                        }
     0) ! 320.080 us  |                      }
     0) ! 324.656 us  |                    }
     0) ! 329.136 us  |                  }
     0)               |                  .put_prev_task_fair() {
     0)               |                    .update_curr() {
     0)   2.240 us    |                      .update_min_vruntime();
     0)   6.512 us    |                    }
     0)   2.528 us    |                    .__enqueue_entity();
     0) + 15.536 us   |                  }
     0)               |                  .pick_next_task_fair() {
     0)   2.032 us    |                    .__pick_next_entity();
     0)   2.064 us    |                    .__clear_buddies();
     0)               |                    .set_next_entity() {
     0)   2.672 us    |                      .__dequeue_entity();
     0)   6.864 us    |                    }

Geoff Levand tested it on PS3.

Tested-by: Geoff Levand <geoffrey.levand@am.sony.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

- 15 February 2009, 1 commit

Committed by Yuri Tikhonov
This patch adds support for 256KB pages on ppc44x-based boards.

To simplify the implementation with 256KB pages we still assume 2-level paging. As a side effect this wastes some of the memory reserved for PTE tables: only 1/4 of the pages allocated for PTEs are actually used. But this may be an acceptable trade-off to achieve the high performance we get with big PAGE_SIZEs in some applications (e.g. RAID).

Also, with a 256KB PAGE_SIZE we increase THREAD_SIZE up to 32KB to minimize the risk of stack overflows in the case of on-stack arrays whose size depends on the page size (e.g. multipage BIOs, NTFS, etc.).

With a 256KB PAGE_SIZE we need to decrease PKMAP_ORDER at least down to 9, otherwise all high memory (2^10 * PAGE_SIZE == 256MB) will be occupied by PKMAP addresses, leaving no place for vmalloc. We do not separate PKMAP_ORDER for 256K from the 16K/64K PAGE_SIZE here; the value of 10 in the 16K/64K support had been selected rather intuitively anyway. Thus now, for all page sizes on ppc44x (including the default 4KB one), we have 512 pages for PKMAP.

Because the ELF standard supports only page sizes up to 64K, you should use binutils later than 2.17.50.0.3 with '-zmax-page-size' set to 256K when building applications that are to run on a 256KB-page kernel. If using older binutils, patch them as follows:

    --- binutils/bfd/elf32-ppc.c.orig
    +++ binutils/bfd/elf32-ppc.c
    -#define ELF_MAXPAGESIZE 0x10000
    +#define ELF_MAXPAGESIZE 0x40000

One more restriction we currently have with 256KB page sizes is the inability to use shmem safely, so, for now, 256KB pages are available only if you turn the CONFIG_SHMEM option off (another variant is to use BROKEN). If you do need shmem with 256KB pages, you can always remove the !SHMEM dependency in 'config PPC_256K_PAGES' and use the workaround available here: http://lkml.org/lkml/2008/12/19/20

Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
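The SHMEM restriction shows up directly in the page-size option; a minimal Kconfig sketch of the guarded entry (the real entry sits inside the page-size choice and may carry additional terms, so treat the prompt and exact expression as illustrative):

    config PPC_256K_PAGES
        bool "256k page size"
        # shmem is not yet safe with a 256KB PAGE_SIZE, so the option is
        # only offered on 44x kernels built without CONFIG_SHMEM; drop
        # the !SHMEM term if you apply the workaround referenced above.
        depends on 44x && !SHMEM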

- 29 January 2009, 3 commits

Committed by Kumar Gala
The FSL PCI code depends on PCI quirks being enabled to function properly. We can ensure this by selecting PCI_QUIRKS from Kconfig.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
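A minimal sketch of the select, assuming the Freescale PCI support is gated by a symbol such as FSL_PCI (the symbol name and surrounding attributes are illustrative):

    config FSL_PCI
        bool
        # The Freescale PCI support relies on fixups that are compiled
        # out when CONFIG_PCI_QUIRKS=n, so force quirks on whenever this
        # code is built.
        select PCI_QUIRKS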

Committed by Trent Piepho
On BookE processors, the code that maps low memory only uses up to three CAM entries, even though there are sixteen and nothing else uses them. Make this number configurable in the advanced options menu, along with the max low memory size. If one wants 1 GB of lowmem, it is typically necessary to have four CAM entries.

Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>

Committed by Trent Piepho
The code that maps kernel low memory would only use page sizes up to 256 MB. On E500v2, pages up to 4 GB are supported. However, a page must be aligned to a multiple of the page's size, i.e. 256 MB pages must be aligned to a 256 MB boundary. This was enforced by a requirement that the physical and virtual addresses of the start of lowmem be aligned to 256 MB. Clearly, requiring 1 GB or 4 GB alignment to allow pages of that size isn't acceptable.

To solve this, I simply have adjust_total_lowmem() take alignment into account when it decides what size pages to use. Give it PAGE_OFFSET = 0x7000_0000, PHYSICAL_START = 0x3000_0000, and 2 GB of RAM, and it will map pages like this:

    PA 0x3000_0000  VA 0x7000_0000  Size 256 MB
    PA 0x4000_0000  VA 0x8000_0000  Size 1 GB
    PA 0x8000_0000  VA 0xC000_0000  Size 256 MB
    PA 0x9000_0000  VA 0xD000_0000  Size 256 MB
    PA 0xA000_0000  VA 0xE000_0000  Size 256 MB

Because the lowmem mapping code now takes alignment into account, PHYSICAL_ALIGN can be lowered from 256 MB to 64 MB. Even lower might be possible. The lowmem code will work down to 4 kB, but it's possible some of the boot code will fail before then. Poor alignment will force small pages to be used, which, combined with the limited number of TLB1 pages available, will result in very little memory getting mapped. So alignments less than 64 MB probably aren't very useful anyway.

Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>

- 28 January 2009, 1 commit

Committed by Josh Boyer
Remove some leftover cruft from the arch/ppc days.

Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

- 14 January 2009, 1 commit

Committed by Benjamin Herrenschmidt
This enables the use of syscall wrappers to do proper sign extension for 64-bit programs.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>

- 8 January 2009, 2 commits

Committed by Steven Rostedt
This patch enables dynamic ftrace. The PowerPC port was dependent on other code not yet in mainline. Now that the code is in, we can let PowerPC compile with dynamic ftrace.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

Committed by Mohan Kumar M
Enable the RELOCATABLE option when the user selects the CRASH_DUMP option. Without this patch, the user has to select RELOCATABLE first and only then can enable CRASH_DUMP.

Signed-off-by: M. Mohan Kumar <mohan@in.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
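A minimal sketch of the relationship described above, expressed as a select rather than the earlier depends; the prompt and any other CRASH_DUMP dependencies are omitted as illustrative:

    config CRASH_DUMP
        bool "Build a kdump crash kernel"
        # Selecting RELOCATABLE here spares the user from having to
        # enable it by hand before CRASH_DUMP becomes usable.
        select RELOCATABLE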

- 29 December 2008, 1 commit

Committed by Ilya Yanok
This adds support for 16k and 64k page sizes on PowerPC 44x processors.

The PGDIR table is much smaller than a page when using 16k or 64k pages (512 and 32 bytes respectively), so we allocate the PGDIR with kzalloc() instead of __get_free_pages(). One PTE table covers rather a large memory area when using 16k or 64k pages (32MB or 512MB respectively), so we can easily put FIXMAP and PKMAP in the area covered by one PTE table.

Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Vladimir Panfilov <pvr@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>

- 23 December 2008, 1 commit

Committed by Dale Farnsworth
Wire up the trampoline code for ppc32 to relay exceptions from the vectors at address 0 to vectors at address 32MB, and modify Kconfig to enable Kdump support for all classic powerpcs.

Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>

- 3 December 2008, 1 commit

Committed by Becky Bruce
We need to be able to swap the dma_sync_*_for_* ops out once we start using swiotlb, so add them to dma_ops. Create the CONFIG_PPC_NEED_DMA_SYNC_OPS Kconfig option; it is currently enabled automatically if we are CONFIG_NOT_COHERENT_CACHE. In the future, it will also be enabled for builds that need swiotlb. If PPC_NEED_DMA_SYNC_OPS is not defined, the dma_sync_*_for_* ops compile to nothing. Otherwise, they go through the dma_ops pointers for the sync ops.

This patch also changes dma_sync_single_range_* to actually sync the range - previously it was using a generous dma_sync_single. dma_sync_single_* is now implemented as a dma_sync_single_range with an offset of 0.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
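The commit describes the new symbol as derived rather than user-visible; a minimal Kconfig sketch consistent with that description (placement and exact form are assumptions):

    config PPC_NEED_DMA_SYNC_OPS
        def_bool y
        # Non-coherent-cache platforms always need the sync hooks today;
        # swiotlb users are expected to be added to this expression later.
        depends on NOT_COHERENT_CACHE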

- 11 November 2008, 1 commit

Committed by Ingo Molnar
Impact: cleanup, change .config option name

We had this ugly config name for a long time for hysteric raisons. Rename it to a saner name. We still cannot get rid of it completely, until /proc/<pid>/stack usage replaces WCHAN usage for good. We'll be able to do that in the v2.6.29/v2.6.30 timeframe.

Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 23 October 2008, 1 commit

Committed by Steven Rostedt
The ftrace daemon is complex and can cause nasty races if something goes wrong. Since it affects all of the kernel, this patch disables dynamic ftrace on any arch that depends on the daemon. Until these archs are ported over to the new MCOUNT_RECORD method, I am disabling dynamic ftrace for them.

Note: I am leaving the arch/<arch>/kernel/ftrace.c code alone, since it can be used when the arch is ported to MCOUNT_RECORD. To port an arch to MCOUNT_RECORD, scripts/recordmcount.pl needs to be updated. I will make that easier to do for 2.6.29. For 2.6.28, we will keep these archs disabled.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 22 October 2008, 1 commit

Committed by Mohan Kumar M
This adds relocatable kernel support for kdump. With this, one can use the same regular kernel to capture the kdump. A signature (0xfeed1234) is passed in r6 from the panic code to the next kernel through the kexec_sequence and purgatory code. The signature is used to differentiate between kdump and non-kdump kernels. The purgatory code compares the signature and sets the __kdump_flag in head_64.S. During boot-up, kernel code checks __kdump_flag and, if it is set, the kernel behaves as a relocatable kdump kernel. This kernel will boot at the address where it was loaded by kexec-tools, i.e. at the address reserved through the crashkernel boot parameter.

CONFIG_CRASH_DUMP now depends on the CONFIG_RELOCATABLE option so that the kdump kernel is built relocatable, and the same kernel can be used as both the production and the kdump kernel.

This patch incorporates the changes suggested by Paul Mackerras to avoid GOT use and to avoid two copies of the code.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

- 21 October 2008, 2 commits

Committed by Kumar Gala
There are no in-tree users of PPC_MERGE, so we can get rid of it. It was a holdover from the arch/ppc days.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

Committed by Steven Rostedt
Due to confusion between the ftrace infrastructure and the gcc profiling tracer "ftrace", this patch renames the config option from FTRACE to FUNCTION_TRACER. The other two names that are offspring of FTRACE, DYNAMIC_FTRACE and FTRACE_MCOUNT_RECORD, will stay the same. This patch was generated mostly by script, and partially by hand.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

- 20 October 2008, 1 commit

Committed by Matt Helsley
This patch implements a new freezer subsystem in the control groups framework. It provides a way to stop and resume execution of all tasks in a cgroup by writing to the cgroup filesystem.

The freezer subsystem in the container filesystem defines a file named freezer.state. Writing "FROZEN" to the state file will freeze all tasks in the cgroup. Subsequently writing "RUNNING" will unfreeze the tasks in the cgroup. Reading will return the current state.

Examples of usage:

    # mkdir /containers/freezer
    # mount -t cgroup -ofreezer freezer /containers
    # mkdir /containers/0
    # echo $some_pid > /containers/0/tasks

to get the status of the freezer subsystem:

    # cat /containers/0/freezer.state
    RUNNING

to freeze all tasks in the container:

    # echo FROZEN > /containers/0/freezer.state
    # cat /containers/0/freezer.state
    FREEZING
    # cat /containers/0/freezer.state
    FROZEN

to unfreeze all tasks in the container:

    # echo RUNNING > /containers/0/freezer.state
    # cat /containers/0/freezer.state
    RUNNING

This is the basic mechanism which should do the right thing for user space tasks in a simple scenario.

It's important to note that freezing can be incomplete. In that case we return EBUSY. This means that some tasks in the cgroup are busy doing something that prevents us from completely freezing the cgroup at this time. After EBUSY, the cgroup will remain partially frozen -- reflected by freezer.state reporting "FREEZING" when read. The state will remain "FREEZING" until one of these things happens:

1) Userspace cancels the freezing operation by writing "RUNNING" to the freezer.state file
2) Userspace retries the freezing operation by writing "FROZEN" to the freezer.state file (writing "FREEZING" is not legal and returns EIO)
3) The tasks that blocked the cgroup from entering the "FROZEN" state disappear from the cgroup's set of tasks.

[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: export thaw_process]
Signed-off-by: Cedric Le Goater <clg@fr.ibm.com>
Signed-off-by: Matt Helsley <matthltc@us.ibm.com>
Acked-by: Serge E. Hallyn <serue@us.ibm.com>
Tested-by: Matt Helsley <matthltc@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 7 October 2008, 1 commit

Committed by Johannes Berg
powerpc uses CONFIG_FORCE_MAX_ZONEORDER, and some things depend on it being at least 10 when 64k pages are not configured (notably the dart iommu code with CONFIG_PM). The defaults are fine, but when going from a 64K-page config to one without 64K pages, MAX_ORDER stays at 9, which is too low for 4K pages. This patch makes the Kconfig enforce at least the defaults.

Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Acked-by: Timur Tabi <timur@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
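One way to express "enforce at least the defaults" in Kconfig is to pair each conditional default with a range whose lower bound matches it; in the sketch below only the 64K-page value of 9 comes from the commit text, the other bound and conditions are illustrative assumptions:

    config FORCE_MAX_ZONEORDER
        int "Maximum zone order"
        # Without the ranges, 'make oldconfig' carries the 64K-page
        # value (9) into a 4K-page build, where MAX_ORDER ends up too
        # small for users such as the dart iommu code with CONFIG_PM.
        range 9 64 if PPC_64K_PAGES
        default 9 if PPC_64K_PAGES
        range 13 64 if !PPC_64K_PAGES
        default 13 if !PPC_64K_PAGES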

- 16 September 2008, 1 commit

Committed by Paul Mackerras
This implements CONFIG_RELOCATABLE for 64-bit by making the kernel a position-independent executable (PIE) when that option is set. This involves processing the dynamic relocations in the image in the early stages of booting, even if the kernel is being run at the address it is linked at, since the linker does not necessarily fill in words in the image for which there are dynamic relocations. (In fact the linker does fill in such words for 64-bit executables, though not for 32-bit executables, so in principle we could avoid calling relocate() entirely when we're running a 64-bit kernel at the linked address.)

The dynamic relocations are processed by a new function relocate(addr), where the addr parameter is the virtual address where the image will be run. In fact we call it twice: once before calling prom_init, and again when starting the main kernel. This means that reloc_offset() returns 0 in prom_init (since it has been relocated to the address it is running at), which necessitated a few adjustments.

This also changes __va and __pa to use an equivalent definition that is simpler. With the relocatable kernel, PAGE_OFFSET and MEMORY_START are constants (for 64-bit) whereas PHYSICAL_START is a variable (and KERNELBASE ideally should be too, but isn't yet).

With this, relocatable kernels still copy themselves down to physical address 0 and run there.

Signed-off-by: Paul Mackerras <paulus@samba.org>

- 14 September 2008, 1 commit

Committed by Jeremy Fitzhardinge
Add a kernel-wide "phys_addr_t" which is guaranteed to be able to hold any physical address. By default it equals the word size of the architecture, but a 32-bit architecture can set ARCH_PHYS_ADDR_T_64BIT if it needs a 64-bit phys_addr_t.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
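On the Kconfig side this is just a capability symbol the architecture turns on; a sketch of how powerpc might set it, where the condition shown (PPC64, or 36-bit physical addressing via PHYS_64BIT) is an assumption rather than a quote from the patch:

    config ARCH_PHYS_ADDR_T_64BIT
        # A 32-bit kernel can still address more than 32 bits of
        # physical memory (e.g. parts built with PHYS_64BIT), in which
        # case phys_addr_t must be a u64 rather than unsigned long.
        def_bool PPC64 || PHYS_64BIT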

- 12 August 2008, 1 commit

Committed by Rusty Russell
Move the get_user_pages_fast fallback implementation out of line, make it a weak symbol, and get rid of CONFIG_HAVE_GET_USER_PAGES_FAST. Export the symbol to modules so lguest can use it.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

- 30 July 2008, 1 commit

Committed by Nick Piggin
Implement lockless get_user_pages_fast for 64-bit powerpc. Page table existence is guaranteed with RCU, and speculative page references are used to take a reference to the pages without having a prior existence guarantee on them.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

- 28 July 2008, 1 commit

Committed by Roland McGrath
The powerpc arch code has all the prerequisites, so set HAVE_ARCH_TRACEHOOK.

Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

- 26 July 2008, 2 commits

Committed by Michael Buesch
This patch adds functionality to the gpio-lib subsystem to make it possible to enable the gpio-lib code even if the architecture code didn't request to get it built in. The architecture code does still need to implement the gpiolib accessor functions in its asm/gpio.h file. This patch adds the implementations for x86 and PPC.

With these changes it is possible to run generic GPIO expansion cards on every architecture that implements the trivial wrapper functions. Support for more architectures can easily be added.

Signed-off-by: Michael Buesch <mb@bu3sch.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: David Brownell <david-b@pacbell.net>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jean Delvare <khali@linux-fr.org>
Cc: Samuel Ortiz <sameo@openedhand.com>
Cc: Kumar Gala <galak@gate.crashing.org>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Committed by Johannes Berg
In many cases, especially in networking, it can be beneficial to know at compile time whether the architecture can do unaligned accesses efficiently. This patch introduces a new Kconfig symbol HAVE_EFFICIENT_UNALIGNED_ACCESS for that purpose and adds it to the powerpc and x86 architectures. Also add some documentation about alignment and networking, and especially one intended use of this symbol.

Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Acked-by: Ingo Molnar <mingo@elte.hu> [x86 architecture part]
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
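The symbol itself is just an invisible capability flag that each architecture opts into with a select; a minimal sketch (the help text is paraphrased, not quoted from the patch):

    config HAVE_EFFICIENT_UNALIGNED_ACCESS
        bool
        help
          Selected by architectures (here powerpc and x86) whose loads
          and stores handle unaligned addresses cheaply, so portable
          code such as the networking stack can pick a faster code
          path at compile time instead of always copying byte-by-byte.

    # arch/powerpc/Kconfig then simply adds, under its top-level symbol:
    #   select HAVE_EFFICIENT_UNALIGNED_ACCESS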

- 25 July 2008, 2 commits

Committed by Benjamin Herrenschmidt
This adds ioremap_prot() and pte_pgprot() so that one can extract the protection bits from a PTE and use them with ioremap_prot() (in order to support ptrace of VM_IO | VM_PFNMAP mappings, as per Rik's patch). This moves a couple of flag checks around in the ioremap implementations of arch/powerpc.

There's a side effect of allowing non-cacheable and non-guarded mappings on ppc32, which before would always have _PAGE_GUARDED set whenever _PAGE_NO_CACHE was. (Standard ioremap will still set _PAGE_GUARDED, but ioremap_prot will be capable of setting such a non-guarded mapping.)

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Dave Airlie <airlied@linux.ie>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Committed by David Brownell
Flag platforms as HAVE_CLK (or not) in Kconfig, based on whether they support <linux/clk.h> calls, so that otherwise portable drivers which need those calls can list that dependency.

Something like this is a prerequisite for merging the musb_hdrc driver, currently used on platforms including Davinci, OMAP2430, OMAP3xx ... and the discrete TUSB6010 chip, which doesn't have a natural platform dependency. (Used with OMAP 2420 in current Nokia N8x0 tablets.)

Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Cc: Russell King <rmk@arm.linux.org.uk>
Acked-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
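A minimal sketch of the pattern; HAVE_CLK comes from the commit, while the driver entry shown depending on it is a hypothetical example with an assumed prompt:

    config HAVE_CLK
        bool
        help
          Selected by platforms that provide the <linux/clk.h> API, so
          otherwise portable drivers that call clk_get()/clk_enable()
          can state the dependency.

    # A driver needing the clk API would then say (hypothetical entry):
    config USB_MUSB_HDRC
        tristate "Inventra Highspeed Dual Role Controller"
        depends on HAVE_CLK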

- 24 July 2008, 1 commit

Committed by Jason Wessel
This patch removes the old kgdb remnants from ARCH=powerpc and implements the new-style arch-specific stub for the common kgdb core interface.

It is possible to have xmon and kgdb in the same kernel, but you cannot use both at the same time because there is only one set of debug hooks. The arch-specific kgdb implementation saves the previous state of the debug hooks and restores them if you unconfigure the kgdb I/O driver. Kgdb should have no impact on a kernel that has no kgdb I/O driver configured.

Signed-off-by: Jason Wessel <jason.wessel@windriver.com>

- 17 July 2008, 1 commit

Committed by John Rigby
Choosing PCI or not at config time is allowed on some platforms via an if expression in arch/powerpc/Kconfig. To add a new platform with PCI support selectable at config time, you must change the if expression. This patch makes this easier by changing:

    bool "PCI support" if <long expression>

to:

    bool "PCI support" if PPC_PCI_CHOICE

and adding "select PPC_PCI_CHOICE" to all the config nodes that were previously in the PCI if expression. Platforms with unconditional PCI support continue to just select PCI in their config nodes.

Signed-off-by: John Rigby <jrigby@freescale.com>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
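With this scheme, a new platform that wants user-selectable PCI no longer touches the PCI prompt at all; it just selects the helper symbol. A short sketch, with the platform name being hypothetical:

    config PPC_PCI_CHOICE
        bool

    # Hypothetical new platform whose users may enable or disable PCI:
    config PPC_NEWBOARD
        bool "NewBoard reference platform"
        select PPC_PCI_CHOICE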