- 15 October 2015, 12 commits
-
-
Committed by Sam Bobroff

The paca display is already more than 24 lines, which can be problematic if you have an old school 80x24 terminal, or more likely you are on a virtual terminal which does not scroll for whatever reason. This patch adds a new command "#", which takes a single (hex) numeric argument: lines per page. It will cause the output of "dp" and "dpa" to be broken into pages, if necessary. Sample output:

    0:mon> # 10
    0:mon> dp1
    paca for cpu 0x1 @ c00000000fdc0480:
     possible         = yes
     present          = yes
     online           = yes
     lock_token       = 0x8000               (0x8)
     paca_index       = 0x1                  (0xa)
     kernel_toc       = 0xc000000000eb2400   (0x10)
     kernelbase       = 0xc000000000000000   (0x18)
     kernel_msr       = 0xb000000000001032   (0x20)
     emergency_sp     = 0xc00000003ffe8000   (0x28)
     mc_emergency_sp  = 0xc00000003ffe4000   (0x2e0)
     in_mce           = 0x0                  (0x2e8)
     data_offset      = 0x7f170000           (0x30)
     hw_cpu_id        = 0x8                  (0x38)
     cpu_start        = 0x1                  (0x3a)
     kexec_state      = 0x0                  (0x3b)
    [Hit a key (a:all, q:truncate, any:next page)]
    0:mon>
     __current        = 0xc00000007e696620   (0x290)
     kstack           = 0xc00000007e6ebe30   (0x298)
     stab_rr          = 0xb                  (0x2a0)
     saved_r1         = 0xc00000007ef37860   (0x2a8)
     trap_save        = 0x0                  (0x2b8)
     soft_enabled     = 0x0                  (0x2ba)
     irq_happened     = 0x1                  (0x2bb)
     io_sync          = 0x0                  (0x2bc)
     irq_work_pending = 0x0                  (0x2bd)
     nap_state_lost   = 0x0                  (0x2be)
    0:mon>

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
[mpe: Use bool, make some variables static]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Christophe Jaillet

of_get_next_parent() can be used to simplify the while() loop and avoid the need for a temp variable.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
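For context, a rough before/after sketch of what this kind of cleanup looks like; `dn` is a struct device_node pointer here and the exact hunk in the patch may differ:

    /* Before: walking up the tree needs a temporary to drop the old reference. */
    while (dn) {
        struct device_node *parent = of_get_parent(dn);

        of_node_put(dn);
        dn = parent;
    }

    /* After: of_get_next_parent() returns the parent (with a reference held)
     * and drops the reference on the node it was given. */
    while (dn)
        dn = of_get_next_parent(dn);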
-
Committed by Christophe Jaillet

of_get_next_parent() can be used to simplify the while() loop and avoid the need for a temp variable.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Paul Gortmaker

In commit 3c8464a9 ("powerpc: Delete old PrPMC 280/2800 support") we got rid of most of the C code, and the Makefile/Kconfig hooks, but it seems I left the platform's DTS file orphaned in the tree as well as the boot code. Here we get rid of them both.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Stephen Rothwell

.exit.text is discarded at run time and there are some references from that to .exit.data, so we need to discard .exit.data at run time as well. Fixes these errors:

    `.exit.data' referenced in section `.exit.text' of drivers/built-in.o: defined in discarded section `.exit.data' of drivers/built-in.o
    `.exit.data' referenced in section `.exit.text' of drivers/built-in.o: defined in discarded section `.exit.data' of drivers/built-in.o

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Gavin Shan

No need to have two atomic operations (update and fetch/check) when decreasing a PE's number of passed devices, as one atomic operation is enough.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
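As a hedged illustration of the point (the field name pass_dev_cnt and the helper release_pe_resources() are assumptions for this sketch, not copied from the EEH patch):

    /* Two atomic operations: update, then a separate fetch to check the result. */
    atomic_dec(&pe->pass_dev_cnt);
    if (atomic_read(&pe->pass_dev_cnt) == 0)
        release_pe_resources(pe);

    /* One atomic operation covers both the decrement and the test, and also
     * avoids the window between the update and the check. */
    if (atomic_dec_return(&pe->pass_dev_cnt) == 0)
        release_pe_resources(pe);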
-
Committed by Andrew Donnellan

Export pcibios_free_controller(), so it can be used by the cxl module to free virtual PHBs.

Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Sam Bobroff

This patch provides individual system call numbers for the following System V IPC system calls, on PowerPC, so that they do not need to be multiplexed:

  * semop, semget, semctl, semtimedop
  * msgsnd, msgrcv, msgget, msgctl
  * shmat, shmdt, shmget, shmctl

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

Now that pseries selects PCI_MSI && PCI, EEH will always be true, and therefore CONFIG_PSERIES_MSI will always be true. So drop it, and move msi.o to obj-y.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

Make it entirely clear in the Makefile that we always build the pci related files by moving them to obj-y. Note that CONFIG_EEH is now always enabled on pseries, because it depends on PSERIES && PCI.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

Now that we always have CONFIG_PCI=y for pseries, we can stop guarding code with CONFIG_PCI ifdefs.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

The pseries build with PCI=n looks to have been broken for at least 5 years, and no one's noticed or cared. Following the obvious breakages backward, the first commit I can find that builds is the parent of 2eb4afb6 ("powerpc/pci: Move pseries code into pseries platform specific area") from April 2009.

A distro would never ship a PCI=n kernel, so it is only useful for folks building custom kernels. Also on KVM the virtio devices appear on PCI, so it would only be useful if you were building kernels specifically to run on PowerVM and with no PCI devices. The added code complexity, and testing load (which we've clearly not been doing), is not justified by the small reduction in kernel size for such a niche use case. So just make PCI non-optional on pseries.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 12 October 2015, 2 commits
-
-
Committed by Aneesh Kumar K.V

We need to properly identify whether a hugepage is an explicit or a transparent hugepage in follow_huge_addr(). We used to depend on the hugepage shift argument to do that. But in some cases that can give wrong results. For example: on finding a transparent hugepage we set the hugepage shift to PMD_SHIFT. But we can end up clearing the thp pte, via pmdp_huge_get_and_clear. We do prevent reusing the pfn page via the usage of kick_all_cpus_sync(). But that happens after we updated the pte to 0. Hence in follow_huge_addr() we can find the hugepage shift set, but the transparent huge page check fails for a thp pte.

NOTE: We fixed a variant of this race against thp split in commit 691e95fd ("powerpc/mm/thp: Make page table walk safe against thp split/collapse").

Without this patch, we may hit the BUG_ON(flags & FOLL_GET) in follow_page_mask occasionally. In the long term, we may want to switch the ppc64 64k page size config to enable CONFIG_ARCH_WANT_GENERAL_HUGETLB.

Reported-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Aneesh Kumar K.V

After commit e2b3d202 ("powerpc: Switch 16GB and 16MB explicit hugepages to a different page table format"), we don't need to support is_hugepd() for 64K page size.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 09 October 2015, 2 commits
-
-
Committed by Colin Ian King

pi_buff is being memset before it is sanity checked. Move the memset after the NULL pi_buff sanity check to avoid an oops.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
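The fix follows a common ordering pattern; the snippet below is a hedged sketch of that pattern (the buffer size and error value are assumptions), not the exact hunk:

    /* Before: clearing the buffer before checking it can oops on NULL. */
    memset(pi_buff, 0, size);
    if (!pi_buff)
        return -ENOMEM;

    /* After: sanity check first, then it is safe to clear. */
    if (!pi_buff)
        return -ENOMEM;
    memset(pi_buff, 0, size);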
-
Committed by Aneesh Kumar K.V

This avoids errors like:

    unsigned int usize = 1 << 30;
    int size = 1 << 30;
    unsigned long addr = 64UL << 30;

    value = _ALIGN_DOWN(addr, usize); -> 0
    value = _ALIGN_DOWN(addr, size);  -> 0x1000000000

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
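A standalone userspace demonstration of the underlying integer-promotion problem, assuming the classic mask-based definition of _ALIGN_DOWN(); this is a sketch for illustration, not the kernel header:

    #include <stdio.h>

    /* Assumed definition, for illustration only. */
    #define _ALIGN_DOWN(addr, size)  ((addr) & ~((size) - 1))

    int main(void)
    {
        unsigned int usize = 1U << 30;   /* mask stays 32-bit and is zero-extended */
        int size = 1 << 30;              /* mask is sign-extended to 64 bits */
        unsigned long addr = 64UL << 30;

        printf("%#lx\n", _ALIGN_DOWN(addr, usize)); /* prints 0: high bits of addr are lost */
        printf("%#lx\n", _ALIGN_DOWN(addr, size));  /* prints 0x1000000000 as intended */
        return 0;
    }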
-
- 06 October 2015, 2 commits
-
-
Committed by Samuel Mendoza-Jonas

Always include a timeout when waiting for secondary cpus to enter OPAL in the kexec path, rather than only when crashing.

Signed-off-by: Samuel Mendoza-Jonas <sam.mj@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Christophe Leroy

show_interrupts() expects the irq_chip name to be at most 8 characters, otherwise everything gets misaligned:

    # cat /proc/interrupts
               CPU0
     17:          0   CPM PIC   0 Level     error
     19:          0  MPC8XX SIU  15 Level     tbint
     20:         90   CPM PIC   4 Level     cpm_uart
     38:      29746  MPC8XX SIU   5 Level     fs_enet-mac
     39:          0  MPC8XX SIU   7 Level     fs_enet-mac
     47:        401   CPM PIC   5 Level     fsl_spi
     68:          1  MPC8XX SIU   2 Level     phy_interrupt, phy_interrupt, phy_interrupt
    LOC:    7225485   Local timer interrupts for timer event device
    LOC:          9   Local timer interrupts for others
    SPU:          0   Spurious interrupts
    PMI:          0   Performance monitoring interrupts
    MCE:          0   Machine check exceptions

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 05 October 2015, 6 commits
-
-
Committed by Denis Kirjanov

During the MSI bitmap test on boot kmemleak spews the following trace:

    unreferenced object 0xc00000016e86c900 (size 64):
      comm "swapper/0", pid 1, jiffies 4294893173 (age 518.024s)
      hex dump (first 32 bytes):
        00 00 01 ff 7f ff 7f 37 00 00 00 00 00 00 00 00  .......7........
        ff ff ff ff ff ff ff ff 01 ff ff ff ff ff ff ff  ................
      backtrace:
        [<c00000000003eebc>] .zalloc_maybe_bootmem+0x3c/0x380
        [<c000000000042d6c>] .msi_bitmap_alloc+0x3c/0xb0
        [<c000000000a9aff8>] .msi_bitmap_selftest+0x30/0x2b4
        [<c0000000000090f4>] .do_one_initcall+0xd4/0x270
        [<c000000000a8e250>] .kernel_init_freeable+0x1a0/0x280
        [<c000000000009b5c>] .kernel_init+0x1c/0x120
        [<c000000000007fbc>] .ret_from_kernel_thread+0x58/0x9c

Add a flag to msi_bitmap for tracking allocations from slab and memblock so we can properly free/handle memory in msi_bitmap_free().

Signed-off-by: Denis Kirjanov <kda@linux-powerpc.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
[mpe: Reword changelog & use bitmap_from_slab in the if]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
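A hedged sketch of the approach: record where the bitmap came from so msi_bitmap_free() knows whether kfree() is appropriate. The bitmap_from_slab name comes from the changelog note above; the rest of the struct is abbreviated and the body is not the exact patch:

    struct msi_bitmap {
        /* ... existing fields ... */
        unsigned long *bitmap;
        bool           bitmap_from_slab;  /* set when the slab allocator was used */
    };

    void msi_bitmap_free(struct msi_bitmap *bmp)
    {
        /* Only slab allocations can be freed here; early memblock/bootmem
         * allocations are simply left in place. */
        if (bmp->bitmap_from_slab)
            kfree(bmp->bitmap);
        bmp->bitmap = NULL;
    }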
-
Committed by Andy Shevchenko

derive_parent() has similar semantics to what we have in the newly introduced of_helpers module. The replacement reduces the code base and propagates the actual error code to the caller.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Andy Shevchenko

In case we have a node without '/', strrchr() returns NULL, which might lead to a crash. Replace strrchr() with kbasename() and modify the condition to avoid such behaviour.

Suggested-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
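A hedged sketch of the strrchr()-to-kbasename() change (variable names are illustrative, not the exact hunk):

    /* Before: with no '/' in the path, strrchr() returns NULL and the
     * unconditional '+ 1' leaves a bogus pointer that crashes when used. */
    const char *name = strrchr(path, '/') + 1;

    /* After: kbasename() returns the component after the last '/', or the
     * whole string when there is no '/', so it never yields a NULL here. */
    const char *name = kbasename(path);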
-
Committed by Andy Shevchenko

The helper kstrndup() will do the same in one line.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
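Roughly the shape of the simplification, as a sketch rather than the exact hunk:

    /* Before: open-coded allocate, bounded copy and terminate. */
    path = kmalloc(len + 1, GFP_KERNEL);
    if (path) {
        strncpy(path, name, len);
        path[len] = '\0';
    }

    /* After: one helper does the bounded duplication. */
    path = kstrndup(name, len, GFP_KERNEL);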
-
Committed by Andy Shevchenko

In case we have a full node name like /foo/bar and /foo is not found, parent_path is left unfreed. So, free the memory before returning to the caller.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Andy Shevchenko

Extract a new module to share the code between other modules. There is no functional change.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 02 October 2015, 2 commits
-
-
Committed by Christophe Jaillet

'nvram_create_os_partition' should be 'nvram_create_partition'. Use __func__ to have it right, as done elsewhere in this file.

Signed-off-by: Christophe Jaillet <christophe.jaillet@wanadoo.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
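The fix amounts to letting the compiler supply the function name instead of hard-coding a stale one; a hedged sketch (the message text here is illustrative, not taken from the patch):

    /* Before: the prefix names a function that no longer exists. */
    pr_err("nvram_create_os_partition: failed to write header\n");

    /* After: __func__ always expands to the enclosing function's name. */
    pr_err("%s: failed to write header\n", __func__);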
-
Committed by Christophe Jaillet

If 'nvram_write_header' fails, then 'new_part' should be freed, otherwise there is a memory leak.

Signed-off-by: Christophe Jaillet <christophe.jaillet@wanadoo.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
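A hedged sketch of the leak-fix pattern (allocation flavour, return-value convention and message text are assumptions, not the exact nvram hunk):

    new_part = kzalloc(sizeof(*new_part), GFP_KERNEL);
    if (!new_part)
        return -ENOMEM;
    /* ... initialise new_part ... */

    rc = nvram_write_header(new_part);
    if (rc <= 0) {
        pr_err("%s: nvram_write_header failed (%d)\n", __func__, rc);
        kfree(new_part);   /* previously missing: new_part leaked here */
        return rc;
    }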
-
- 01 October 2015, 6 commits
-
-
Committed by Michael Ellerman

Based directly on ppc64_defconfig using merge_config.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Aneesh Kumar K.V

This adds a helper virt_to_pfn() and removes the open-coded usage of the same.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
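The helper is presumably a thin wrapper along these lines (a sketch; the exact definition in the patch may differ):

    #define virt_to_pfn(kaddr)  (__pa(kaddr) >> PAGE_SHIFT)

    /* which turns open-coded sites like this ... */
    pfn = __pa(addr) >> PAGE_SHIFT;
    /* ... into this: */
    pfn = virt_to_pfn(addr);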
-
Committed by Michael Neuling

powerpc has a link register (lr) used for calling functions. We "bl <func>" to call a function, and "blr" to return back to the call site. The lr is only a single register, so if we call another function from inside this function (ie. nested calls), software must save away the lr on the software stack before calling the new function. Before returning (ie. before the "blr"), the lr is restored by software from the software stack.

This makes branch prediction quite difficult for the processor as it will only know the branch target just before the "blr". To help with this, modern powerpc processors keep a (non-architected) hardware stack of lr called a "link stack". When a "bl <func>" is run, the lr is pushed onto this stack. When a "blr" is called, the branch predictor pops the lr value from the top of the link stack, and uses it to predict the branch target. Hence the processor pipeline knows the branch target a lot earlier.

This works great but there are some cases where you call "bl" but without a matching "blr". One such case is when trying to determine the program counter (which can't be read directly). Here you "bl+4; mflr" to get the program counter. If you do this, the link stack will get out of sync with reality, causing the branch predictor to mis-predict subsequent function returns. To avoid this, modern micro-architectures have a special case of bl: using the form "bcl 20,31,+4" ensures the processor doesn't push to the link stack.

The 32 and 64 bit variants of __get_datapage() use a "bl; mflr" to determine the loaded address of the VDSO. The current versions of these attempt to use this special bl variant. Unfortunately they use +8 rather than the required +4. Hence the current code results in the link stack getting out of sync with reality and hence the resulting performance degradation. This patch moves it to bcl+4 by moving __kernel_datapage_offset out of __get_datapage().

With this patch, running a gettimeofday() (which uses __get_datapage()) microbenchmark we get a decent bump in performance on POWER7/8. For the benchmark in tools/testing/selftests/powerpc/benchmarks/gettimeofday.c:

    POWER8:
      64bit gets ~4% improvement
      32bit gets ~9% improvement
    POWER7:
      64bit gets ~7% improvement

Signed-off-by: Michael Neuling <mikey@neuling.org>
Reported-by: Aaron Sawdey <sawdey@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
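For illustration, the PC-read idiom looks roughly like the inline-assembly sketch below (the VDSO itself is pure assembly, so this is only an analogy, and the function name is made up). The key point is that a bcl 20,31 targeting the very next instruction is the form the hardware treats as "do not push the link stack":

    static inline unsigned long current_ip(void)
    {
        unsigned long ip;

        /* bcl 20,31,1f branches to the next instruction without pushing a
         * return address onto the hardware link stack; mflr then reads the
         * address of that instruction. */
        asm volatile("bcl 20,31,1f\n"
                     "1: mflr %0"
                     : "=r" (ip) : : "lr");
        return ip;
    }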
-
Committed by Michael Ellerman

For no reason other than it looks ugly.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual

This patch defines an enum for the three bolted SLB indexes we use, and switches the functions that take the indexes as an argument to use the enum.

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
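A hedged sketch of the shape of such an enum (the identifiers in the actual patch may differ):

    enum slb_index {
        LINEAR_INDEX  = 0,  /* bolted kernel linear mapping */
        VMALLOC_INDEX = 1,  /* bolted kernel virtual (vmalloc/ioremap) mapping */
        KSTACK_INDEX  = 2,  /* bolted kernel stack mapping */
    };

Call sites then pass a named index such as KSTACK_INDEX instead of a bare 2, which is what makes the change worthwhile.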
-
Committed by Michael Ellerman

Andy Lutomirski says: Some dynamic loaders may be slightly faster if a GNU hash is available. This is unlikely to have any measurable effect on the time it takes to resolve vdso symbols (since there are so few of them). In some contexts, it can be a win for a different reason: if every DSO has a GNU hash section, then libc can avoid calculating SysV hashes at all. Both musl and glibc appear to have this optimization.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 29 September 2015, 2 commits
-
-
Committed by Geoff Levand

Refresh and remove obsolete CONFIG_EXT3_FS.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Boqun Feng

Currently, little endian is only supported on powernv and pseries; however, the Kconfigs still allow us to include other platforms in an LE kernel. This may result in wasted space or even build errors if some BE-only platforms always assume they are built for a BE kernel. So just modify the Kconfigs of BE-only platforms to remove them from being built for an LE kernel.

For 32-bit only platforms, nothing needs to be done, because CPU_LITTLE_ENDIAN depends on PPC64. For platforms that support 64-bit, add CPU_BIG_ENDIAN to their dependencies explicitly, so that these platforms will be disabled for LE [Suggested-by: Cédric Le Goater <clg@fr.ibm.com>].

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Acked-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 25 September 2015, 1 commit
-
-
Committed by David Hildenbrand

We observed some performance degradation on s390x with dynamic halt polling. Until we can provide a proper fix, let's enable halt_poll_ns as default only for supported architectures. Architectures are now free to set their own halt_poll_ns default value.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 21 September 2015, 4 commits
-
-
Committed by Michael Ellerman

The selftest passes on 64-bit LE & BE, and 32-bit.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Thomas Huth

Access to kvm->buses (such as with the kvm_io_bus_read() and -write() functions) has to be protected via the kvm->srcu lock. The kvmppc_h_logical_ci_load() and -store() functions are missing this lock so far, so let's add it there, too. This fixes the problem that the kernel reports "suspicious RCU usage" when lock debugging is enabled.

Cc: stable@vger.kernel.org # v4.1+
Fixes: 99342cf8
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
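The required pattern, shown as a hedged sketch rather than the exact kvmppc hunk (addr, size and val stand in for the real arguments):

    int idx, ret;

    /* kvm->buses is SRCU-protected, so any kvm_io_bus_*() call must sit
     * inside an srcu read-side critical section. */
    idx = srcu_read_lock(&vcpu->kvm->srcu);
    ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, addr, size, &val);
    srcu_read_unlock(&vcpu->kvm->srcu, idx);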
-
Committed by Gautham R. Shenoy

In guest_exit_cont we call kvmhv_commence_exit, which expects the trap number as the argument. However r3 doesn't contain the trap number at this point, and as a result we would be calling the function with a spurious trap number. Fix this by copying r12 into r3 before calling kvmhv_commence_exit, as r12 contains the trap number.

Cc: stable@vger.kernel.org # v4.1+
Fixes: eddb60fb
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Paul Mackerras

This fixes a bug which results in stale vcore pointers being left in the per-cpu preempted vcore lists when a VM is destroyed. The result of the stale vcore pointers is usually either a crash or a lockup inside collect_piggybacks() when another VM is run. A typical lockup message looks like:

    [  472.161074] NMI watchdog: BUG: soft lockup - CPU#24 stuck for 22s! [qemu-system-ppc:7039]
    [  472.161204] Modules linked in: kvm_hv kvm_pr kvm xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 tun ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ses enclosure shpchp rtc_opal i2c_opal powernv_rng binfmt_misc dm_service_time scsi_dh_alua radeon i2c_algo_bit drm_kms_helper ttm drm tg3 ptp pps_core cxgb3 ipr i2c_core mdio dm_multipath [last unloaded: kvm_hv]
    [  472.162111] CPU: 24 PID: 7039 Comm: qemu-system-ppc Not tainted 4.2.0-kvm+ #49
    [  472.162187] task: c000001e38512750 ti: c000001e41bfc000 task.ti: c000001e41bfc000
    [  472.162262] NIP: c00000000096b094 LR: c00000000096b08c CTR: c000000000111130
    [  472.162337] REGS: c000001e41bff520 TRAP: 0901   Not tainted  (4.2.0-kvm+)
    [  472.162399] MSR: 9000000100009033 <SF,HV,EE,ME,IR,DR,RI,LE>  CR: 24848844  XER: 00000000
    [  472.162588] CFAR: c00000000096b0ac SOFTE: 1
    GPR00: c000000000111170 c000001e41bff7a0 c00000000127df00 0000000000000001
    GPR04: 0000000000000003 0000000000000001 0000000000000000 0000000000874821
    GPR08: c000001e41bff8e0 0000000000000001 0000000000000000 d00000000efde740
    GPR12: c000000000111130 c00000000fdae400
    [  472.163053] NIP [c00000000096b094] _raw_spin_lock_irqsave+0xa4/0x130
    [  472.163117] LR [c00000000096b08c] _raw_spin_lock_irqsave+0x9c/0x130
    [  472.163179] Call Trace:
    [  472.163206] [c000001e41bff7a0] [c000001e41bff7f0] 0xc000001e41bff7f0 (unreliable)
    [  472.163295] [c000001e41bff7e0] [c000000000111170] __wake_up+0x40/0x90
    [  472.163375] [c000001e41bff830] [d00000000efd6fc0] kvmppc_run_core+0x1240/0x1950 [kvm_hv]
    [  472.163465] [c000001e41bffa30] [d00000000efd8510] kvmppc_vcpu_run_hv+0x5a0/0xd90 [kvm_hv]
    [  472.163559] [c000001e41bffb70] [d00000000e9318a4] kvmppc_vcpu_run+0x44/0x60 [kvm]
    [  472.163653] [c000001e41bffba0] [d00000000e92e674] kvm_arch_vcpu_ioctl_run+0x64/0x170 [kvm]
    [  472.163745] [c000001e41bffbe0] [d00000000e9263a8] kvm_vcpu_ioctl+0x538/0x7b0 [kvm]
    [  472.163834] [c000001e41bffd40] [c0000000002d0f50] do_vfs_ioctl+0x480/0x7c0
    [  472.163910] [c000001e41bffde0] [c0000000002d1364] SyS_ioctl+0xd4/0xf0
    [  472.163986] [c000001e41bffe30] [c000000000009260] system_call+0x38/0xd0
    [  472.164060] Instruction dump:
    [  472.164098] ebc1fff0 ebe1fff8 7c0803a6 4e800020 60000000 60000000 60420000 8bad02e2
    [  472.164224] 7fc3f378 4b6a57c1 60000000 7c210b78 <e92d0000> 89290009 792affe3 40820070

The bug is that kvmppc_run_vcpu does not correctly handle the case where a vcpu task receives a signal while its guest vcpu is executing in the guest as a result of being piggy-backed onto the execution of another vcore. In that case we need to wait for the vcpu to finish executing inside the guest, and then remove this vcore from the preempted vcores list. That way, we avoid leaving this vcpu's vcore on the preempted vcores list when the vcpu gets interrupted.

Fixes: ec257165
Reported-by: Thomas Huth <thuth@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 17 September 2015, 1 commit
-
-
Committed by LEROY Christophe

memset() uses the dcbz instruction to speed up clearing by not wasting time loading a cache line with data that will be overwritten. Some platforms like mpc52xx do not have the cache active at startup and can therefore not use memset(). Although no part of the code explicitly uses memset(), GCC may make calls to it.

This patch modifies memset() such that at startup, memset() unconditionally skips the optimised block that uses the dcbz instruction. Once the initial MMU is set up, in machine_init() we patch memset() by replacing this unconditional jump with a NOP.

Tested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
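A hedged sketch of the runtime patching step described above; the symbol name and the exact call are assumptions based on the changelog, not copied from the patch:

    /* The assembly memset() starts with an unconditional branch that skips
     * the dcbz-based fast path.  Once the MMU/caches are usable,
     * machine_init() overwrites that branch with a nop. */
    extern unsigned int memset_nocache_branch;

    void __init machine_init(u64 dt_ptr)
    {
        /* ... existing early setup ... */
        patch_instruction(&memset_nocache_branch, PPC_INST_NOP);
        /* ... */
    }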
-