- 27 April 2011, 2 commits
-
-
Committed by Matt Evans

Some of the 64-bit PPC CPU features are MMU-related, so this patch moves them to MMU_FTR_ bits. All cpu_has_feature()-style tests are moved to mmu_has_feature(), and seven feature bits are freed as a result.

Signed-off-by: Matt Evans <matt@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
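A minimal sketch of the two test styles after this split, assuming the era's cur_cpu_spec layout (the real definitions live in cputable.h and mmu.h):

    /* CPU-side tests keep using the cpu_features word... */
    static inline int cpu_has_feature(unsigned long feature)
    {
        return (cur_cpu_spec->cpu_features & feature) != 0;
    }

    /* ...while MMU-related tests, e.g. mmu_has_feature(MMU_FTR_SLB),
     * now read the separate mmu_features word */
    static inline int mmu_has_feature(unsigned long feature)
    {
        return (cur_cpu_spec->mmu_features & feature) != 0;
    }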
-
Committed by Benjamin Herrenschmidt

Add the cputable entry, regs and setup & restore entries for the PowerPC A2 core.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 20 April 2011, 1 commit
-
-
Committed by Benjamin Herrenschmidt

This bit indicates that we are operating in hypervisor mode on a CPU compliant with architecture 2.06 or later (currently server only). We set it on POWER7 and have a boot-time CPU setup function that clears it if MSR:HV isn't set (booting under a hypervisor).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
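A C rendering of that boot-time check, as a sketch (the real code is assembly in the POWER7 CPU setup path; the feature-bit name follows this commit):

    /* booted under a hypervisor: MSR:HV is clear, so the kernel must
     * not claim hypervisor-mode features */
    static void __init check_hv_mode(void)
    {
        if (!(mfmsr() & MSR_HV))
            cur_cpu_spec->cpu_features &= ~CPU_FTR_HVMODE_206;
    }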
-
- 12 April 2011, 2 commits
-
-
Committed by Scott Wood

e500mc does not support the HID0/MSR mechanism that is used by e500_idle (and there are also issues with waking on certain types of interrupts). Further, even if napping is never actually enabled, just having CPU_FTR_CAN_NAP will cause machine_init() to overwrite the board's supplied ppc_md.power_save(). We drop CPU_FTR_MAYBE_CAN_DOZE because we should use 'wait' instead on e500mc.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Committed by Kumar Gala

The CPU_FTRS_POSSIBLE and CPU_FTRS_ALWAYS defines did not encompass e5500 CPU features when built for 64-bit. This causes issues with cpu_has_feature(), as it utilizes the POSSIBLE & ALWAYS defines as part of its check. Create a unique CPU_FTRS_E5500 (as it's different from CPU_FTRS_E500MC), create a new group for 64-bit Book3e based CPUs, and add CPU_FTRS_E5500 to that group.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
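For context, a sketch of why the POSSIBLE and ALWAYS masks matter, following the era's cpu_has_feature() definition: both masks are compile-time constants folded into the test, so the compiler can discard tests that can never (or must always) be true for the kernel being built.

    static inline int cpu_has_feature(unsigned long feature)
    {
        return (CPU_FTRS_ALWAYS & feature) ||
               (CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature);
    }

If CPU_FTRS_POSSIBLE omits an e5500 feature bit, the test constant-folds to false even though the running CPU sets the bit, which is the breakage this commit fixes.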
-
- 02 February 2011, 1 commit
-
-
Committed by Dave Kleikamp

The DD2 core still has some instability. Define CPU_FTR_476_DD2 to enable workarounds in later patches. This is based on an earlier, unreleased patch for DD1 by Ben Herrenschmidt.

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-
- 29 November 2010, 1 commit
-
-
Committed by Anton Blanchard

POWER5 added popcntb, and POWER7 added popcntw and popcntd. As a first step this patch does all the work out of line, but it would be nice to implement them as inlines with an out-of-line fallback.

The performance issue with hweight was noticed when disabling SMT on a large (192 thread) POWER7 box. The patch improves that testcase by about 8%.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
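A C sketch of the dispatch the patch implements out of line in assembly (feature-bit name per this commit; __sw_hweight64() is the generic software fallback in lib/hweight.c):

    static inline unsigned long hweight64_sketch(unsigned long w)
    {
        unsigned long res;

        if (!cpu_has_feature(CPU_FTR_POPCNTD))
            return __sw_hweight64(w);   /* generic popcount fallback */
        asm("popcntd %0,%1" : "=r" (res) : "r" (w));
        return res;
    }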
-
- 02 September 2010, 1 commit
-
-
Committed by Anton Blanchard

The POWER architecture does not require stcx to check that it is operating on the same address as the larx. This means it is possible for an exception handler to execute a larx, get a reservation, decide not to do the stcx and then return back with an active reservation. If the interrupted code was in the middle of a larx/stcx sequence, the stcx could incorrectly succeed.

All recent POWER CPUs check the address before letting the stcx succeed, so we can create a CPU feature and nop it out. As Ben suggested, we can only do this in our syscall path because there is a remote possibility some kernel code gets interrupted by an exception that ends up operating on the same cacheline. Thanks to Paul Mackerras and Derek Williams for the idea.

To test this I used a very simple null syscall (actually getppid) testcase at http://ozlabs.org/~anton/junkcode/null_syscall.c

I tested against 2.6.35-git10 with the following changes against the pseries_defconfig:

    CONFIG_VIRT_CPU_ACCOUNTING=n
    CONFIG_AUDIT=n
    CONFIG_PPC_4K_PAGES=n
    CONFIG_PPC_64K_PAGES=y
    CONFIG_FORCE_MAX_ZONEORDER=9
    CONFIG_PPC_SUBPAGE_PROT=n
    CONFIG_FUNCTION_TRACER=n
    CONFIG_FUNCTION_GRAPH_TRACER=n
    CONFIG_IRQSOFF_TRACER=n
    CONFIG_STACK_TRACER=n

to remove the overhead of virtual CPU accounting, syscall auditing and the ftrace mcount tracers. 64kB pages were enabled to minimise TLB misses.

POWER6: +8.2%
POWER7: +7.0%

Another suggestion was to use a larx to something in the L1 instead of a stcx. This was almost as fast as removing the larx on POWER6, but only 3.5% faster on POWER7. We can use this to speed up the reservation clear in our exception exit code.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
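A C sketch of the reservation clear being made conditional, assuming the feature name from this commit (in the kernel the store-conditional is nop'ed out at boot by the feature-fixup machinery rather than tested at runtime as here):

    static inline void clear_reservation(unsigned long *scratch)
    {
        if (!cpu_has_feature(CPU_FTR_STCX_CHECKS_ADDRESS))
            /* dummy store-conditional: it may succeed or fail, but
             * either way any stale reservation is dropped */
            asm volatile("stdcx. %0,0,%1"
                         : : "r" (0UL), "r" (scratch)
                         : "cr0", "memory");
    }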
-
- 22 June 2010, 1 commit
-
-
Committed by K.Prasad

Implement perf-events based hw-breakpoint interfaces for PowerPC 64-bit server (Book III S) processors. This allows access to a given location to be used as an event that can be counted or profiled by the perf_events subsystem.

This is done using the DABR (data breakpoint register), which can also be used for process debugging via ptrace. When perf_event hw_breakpoint support is configured in, the perf_event subsystem manages the DABR and arbitrates access to it, and ptrace then creates a perf_event when it is requested to set a data breakpoint.

[Adopted suggestions from Paul Mackerras <paulus@samba.org> to
- emulate_step() all system-wide breakpoints and single-step only the per-task breakpoints
- perform arch-specific cleanup before unregistration through arch_unregister_hw_breakpoint()]

Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
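A hypothetical userspace sketch of what this enables: asking perf to count writes to an address via a hardware breakpoint (standard perf_event_open() usage; error handling omitted):

    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>
    #include <linux/hw_breakpoint.h>

    static int watch_writes(void *addr)
    {
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size    = sizeof(attr);
        attr.type    = PERF_TYPE_BREAKPOINT;
        attr.bp_type = HW_BREAKPOINT_W;        /* break on write */
        attr.bp_addr = (unsigned long)addr;
        attr.bp_len  = HW_BREAKPOINT_LEN_8;    /* doubleword, DABR-sized */
        /* count on the calling task, any CPU */
        return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    }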
-
- 09 June 2010, 1 commit
-
-
Committed by Michael Neuling

The POWER7 core has dynamic SMT mode switching which is controlled by the hypervisor. There are 3 SMT modes:

    SMT1 uses thread  0
    SMT2 uses threads 0 & 1
    SMT4 uses threads 0, 1, 2 & 3

When in any particular SMT mode, all threads have the same performance as each other (ie. at any moment in time, all threads perform the same).

The SMT mode switching works such that when linux has threads 2 & 3 idle and 0 & 1 active, it will cede (H_CEDE hypercall) threads 2 and 3 in the idle loop and the hypervisor will automatically switch to SMT2 for that core (independent of other cores). The opposite is not true, so if threads 0 & 1 are idle and 2 & 3 are active, we will stay in SMT4 mode. Similarly, if thread 0 is active and threads 1, 2 & 3 are idle, we'll go into SMT1 mode.

If we can get the core into a lower SMT mode (SMT1 is best), the threads will perform better (since they share less core resources). Hence when we have idle threads, we want them to be the higher ones.

This adds a feature bit for asymmetric packing to powerpc and then enables it on POWER7.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linuxppc-dev@ozlabs.org
LKML-Reference: <20100608045702.31FB5CC8C7@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
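The feature bit reaches the scheduler through an arch hook; a sketch consistent with this commit's description (SD_ASYM_PACKING tells the scheduler to pack runnable tasks onto the lowest-numbered threads):

    int arch_sd_sibling_asym_packing(void)
    {
        if (cpu_has_feature(CPU_FTR_ASYM_SMT))
            return SD_ASYM_PACKING;
        return 0;
    }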
-
- 21 May 2010, 1 commit
-
-
Committed by Scott Wood

Most of the MCSR bit assignments are different in e500mc versus e500, and they are now write-one-to-clear. Some e500mc machine check conditions are made recoverable (as long as they aren't stuck on), most notably L1 instruction cache parity errors.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 05 May 2010, 2 commits
-
-
Committed by Dave Kleikamp

The 47x core's MCSR varies from 44x, so it needs its own machine check handler.

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-
Committed by Dave Kleikamp

This patch adds the base support for the 476 processor. The code was primarily written by Ben Herrenschmidt and Torez Smith, but I've been maintaining it for a while. The goal is to have a single binary that will run on 44x and 47x, but we still have some details to work out. The biggest is that the L1 cache line size differs on the two platforms, but it's currently a compile-time option.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Torez Smith <lnxtorez@linux.vnet.ibm.com>
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-
- 17 February 2010, 1 commit
-
-
Committed by Anton Blanchard

Nick Piggin discovered that lwsync barriers around locks were faster than isync on 970. That was a long time ago and I completely dropped the ball in testing his patches across other ppc64 processors. Turns out the idea helps on other chips.

Using a microbenchmark that uses a lot of threads to contend on a global pthread mutex (and therefore a global futex), POWER6 improves 8% and POWER7 improves 2%. I checked POWER5 and while I couldn't measure an improvement, there was no regression.

This patch uses the lwsync patching code to replace the isyncs with lwsyncs on CPUs that support the instruction. We were marking POWER3 and RS64 as lwsync capable, but in reality they treat it as a full sync (i.e. slow). Remove the CPU_FTR_LWSYNC bit from these CPUs so they continue to use the faster isync method.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
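For illustration, a simplified lock-acquire sequence showing where the barrier in question sits (a sketch, not the kernel's exact spinlock; on CPUs with CPU_FTR_LWSYNC the trailing isync is patched to lwsync at boot):

    static inline void spin_lock_sketch(volatile unsigned int *lock)
    {
        unsigned int tmp;

        asm volatile(
        "1: lwarx   %0,0,%1\n\t"    /* load lock word, get reservation */
        "   cmpwi   0,%0,0\n\t"
        "   bne-    1b\n\t"         /* spin while held */
        "   stwcx.  %2,0,%1\n\t"    /* try to claim it */
        "   bne-    1b\n\t"         /* lost the reservation: retry */
        "   isync"                  /* acquire barrier (the patch site) */
        : "=&r" (tmp)
        : "r" (lock), "r" (1)
        : "cr0", "memory");
    }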
-
- 17 March 2009, 1 commit
-
-
Committed by Piotr Ziecik

BestComm, a DMA engine in the MPC52xx SoC, requires snooping when CPU caches are enabled to work properly. Adding CPU_FTR_NEED_COHERENT fixes NFS problems on MPC52xx machines introduced by 'powerpc/mm: Fix handling of _PAGE_COHERENT in BAT setup code' (sha1: 4c456a67).

Signed-off-by: Piotr Ziecik <kosmo@semihalf.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
- 23 February 2009, 1 commit
-
-
Committed by Kumar Gala

The e500mc supports the new msgsnd/doorbell mechanisms that were added in the Power ISA 2.05 architecture. We use the normal level doorbell for doing SMP IPIs at this point.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 21 December 2008, 2 commits
-
-
Committed by Benjamin Herrenschmidt

We're soon running out of CPU features, and I need to add some new ones for various MMU-related bits, so this patch separates the MMU features from the CPU features. I moved over the 32-bit MMU-related ones, added base features for MMU type families, but didn't move over any 64-bit only feature yet.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Benjamin Herrenschmidt

This adds support for "extended" DCR addressing via the indirect mfdcrx/mtdcrx instructions supported by some 4xx cores (440H6 and later). I enabled the feature for now only on AMCC 460 chips.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
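A sketch of the indirect accessors (cf. the native DCR helpers): with mfdcrx/mtdcrx the DCR number comes from a GPR instead of being encoded in the instruction, so it can be computed at runtime.

    static inline unsigned int mfdcrx_sketch(unsigned int dcrn)
    {
        unsigned int val;

        asm volatile("mfdcrx %0,%1" : "=r" (val) : "r" (dcrn));
        return val;
    }

    static inline void mtdcrx_sketch(unsigned int dcrn, unsigned int val)
    {
        asm volatile("mtdcrx %0,%1" : : "r" (dcrn), "r" (val));
    }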
-
- 16 December 2008, 1 commit
-
-
Committed by Benjamin Herrenschmidt

We were missing the CPU_FTR_NOEXECUTE bit in our cputable for all these processors. The result is that update_mmu_cache() would flush the cache for all pages mapped to userspace, which is totally unnecessary on those processors since we already handle flushing on execute in the page fault path. This should provide a nice speed up ;-)

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 05 November 2008, 1 commit
-
-
Committed by Mark Nelson

Add a new CPU feature bit, CPU_FTR_UNALIGNED_LD_STD, to be added to the 64-bit powerpc chips that can do unaligned load double and store double without any performance hit. This is added to POWER6 and Cell, and will be used in the next commit to disable the code that gets the destination address aligned on those CPUs where doing that doesn't improve performance.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 16 September 2008, 1 commit
-
-
Committed by Mark Nelson

Add a new CPU feature bit, CPU_FTR_CP_USE_DCBTZ, to be added to the 64-bit powerpc chips that benefit from having dcbt and dcbz instructions used in their memory copy routines. This will be used in a subsequent patch that updates copy_4K_page(). The new bit is added to Cell, PPC970 and POWER4 because they show better performance with the new copy_4K_page() when dcbt and dcbz instructions are used.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 20 August 2008, 1 commit
-
-
Committed by Benjamin Herrenschmidt

The file arch/powerpc/kernel/sysfs.c is currently only compiled for 64-bit kernels. It contains code to register CPU sysdevs in sysfs and add various properties such as cache topology and raw access by root to performance monitor counters (PMCs). A lot of that can be re-used as-is on 32-bit. This makes the file be built for both, with appropriate ifdef'ing for the few bits that are really 64-bit specific, and adds some support for the raw PMCs for 75x and 74xx processors.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 04 August 2008, 1 commit
-
-
Committed by Stephen Rothwell

Move the headers from include/asm-powerpc to arch/powerpc/include/asm. This is the result of:

    mkdir arch/powerpc/include/asm
    git mv include/asm-powerpc/* arch/powerpc/include/asm

followed by a few documentation/comment fixups and a couple of places where <asm-powerpc/...> was being used explicitly. Of the latter only one was outside the arch code, and it is a driver only built for powerpc.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 25 July 2008, 1 commit
-
-
Committed by Nathan Lynch

Stash the first platform string matched by identify_cpu() in powerpc_base_platform, and supply that to the ELF loader for the value of AT_BASE_PLATFORM.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
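From userspace the value shows up in the aux vector; a hypothetical test program (getauxval() is the modern glibc accessor; code contemporary with this commit would parse /proc/self/auxv instead):

    #include <stdio.h>
    #include <sys/auxv.h>

    int main(void)
    {
        const char *base = (const char *)getauxval(AT_BASE_PLATFORM);

        printf("base platform: %s\n", base ? base : "(not supplied)");
        return 0;
    }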
-
- 15 July 2008, 1 commit
-
-
Committed by Nathan Lynch

Background from Maynard Johnson: As of POWER6, a set of 32 common events is defined that must be supported on all future POWER processors. The main impetus for this compat set is the need to support partition migration, especially from processor P(n) to processor P(n+1), where performance software that's running in the new partition may not be knowledgeable about processor P(n+1). If a performance tool determines it does not support the physical processor, but is told (via the PPC_FEATURE_PSERIES_PERFMON_COMPAT bit) that the processor supports the notion of the PMU compat set, then the performance tool can surface just those events to the user of the tool.

PPC_FEATURE_PSERIES_PERFMON_COMPAT indicates that the PMU supports at least this basic subset of events, which is compatible across POWER processor lines.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
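A hypothetical userspace check for the new bit, which on powerpc is reported through the AT_HWCAP aux vector entry:

    #include <stdio.h>
    #include <sys/auxv.h>
    #include <asm/cputable.h>   /* PPC_FEATURE_PSERIES_PERFMON_COMPAT */

    int main(void)
    {
        unsigned long hwcap = getauxval(AT_HWCAP);

        if (hwcap & PPC_FEATURE_PSERIES_PERFMON_COMPAT)
            printf("PMU compat event set is available\n");
        return 0;
    }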
-
- 09 July 2008, 1 commit
-
-
Committed by Dave Kleikamp

Add the CPU feature bit for the new Strong Access Ordering facility of POWER7.

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 03 July 2008, 1 commit
-
-
Committed by Kumar Gala

To allow for a single kernel image on e500 v1/v2/mc we need to fix up lwsync at runtime. On e500v1/v2 lwsync causes an illop, so we need to patch up the code. We default to 'sync' since that is always safe, and if the CPU is capable we will replace 'sync' with 'lwsync'. We introduce CPU_FTR_LWSYNC as a way to determine at runtime if this is needed. This flag could be moved elsewhere since we don't really use it for the normal CPU_FTR purpose. Finally, we only store the relative offset in the fixup section to keep it as small as possible, rather than using a full fixup_entry.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
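A sketch of the boot-time patcher this describes, with names and mechanics assumed (cf. lib/feature-fixups.c): walk the table of relative offsets and overwrite the always-safe 'sync' with 'lwsync' when the CPU supports it.

    void do_lwsync_fixups_sketch(unsigned long features,
                                 long *start, long *end)
    {
        unsigned int *insn;

        if (!(features & CPU_FTR_LWSYNC))
            return;                 /* keep the safe 'sync' */

        for (; start < end; start++) {
            /* each table entry is an offset relative to itself */
            insn = (unsigned int *)((void *)start + *start);
            *insn = 0x7c2004ac;     /* lwsync opcode */
            flush_icache_range((unsigned long)insn,
                               (unsigned long)insn + 4);
        }
    }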
-
- 01 July 2008, 3 commits
-
-
Committed by Michael Neuling

Add a VSX CPU feature. Also add code to detect if VSX is available from the device tree.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Michael Ellerman

The CPU and firmware feature fixup macros are currently spread across three files: firmware.h, cputable.h and asm-compat.h. Consolidate them into their own file, feature-fixups.h.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Adrian Bunk

asm/asm-compat.h doesn't seem to be intended for userspace usage.

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 30 June 2008, 1 commit
-
-
Committed by Michael Neuling

Add a cputable entry for the POWER7 processor. Also tell firmware that we know about POWER7.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 26 June 2008, 2 commits
-
-
Committed by Kumar Gala

If we have an L2CSR register (e500mc) we need to flush the L2 before going to nap. We use the HW flush mechanism provided in that register. The code reuses the CPU_FTR_604_PERF_MON bit as it is no longer used by any code in the kernel. Additionally, we didn't reuse the existing L2CR feature bit, as that is intended for the 7xxx L2CR register while L2CSR is part of the new Freescale "Book-E" registers.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Committed by Kumar Gala

The e500 core enters the DOZE/NAP power-saving modes when it goes into the cpu_idle routine. The default power management running mode is DOZE; if the user echoes 1 to /proc/sys/kernel/powersave-nap, the system will change to NAP running mode.

Signed-off-by: Dave Liu <daveliu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 19 June 2008, 1 commit
-
-
Committed by Kumar Gala

The new e500mc core from Freescale is based on the e500v2 but with the following changes:

    * Supports only the Enhanced Debug Architecture (DSRR0/1, etc)
    * Floating Point
    * No SPE
    * Supports lwsync
    * Doorbell Exceptions
    * Hypervisor
    * Cache line size is now 64 bytes (e500v1/v2 have a 32-byte cache line)

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 06 February 2008, 1 commit
-
-
Committed by Andy Fleming

Some of the more recent e300 cores have the same performance monitor implementation as the e500. e300 isn't Book-E, so the name isn't really appropriate. In preparation for e300 support, rename a bunch of fsl_booke things to say fsl_emb (Freescale Embedded Performance Monitors).

Signed-off-by: Andy Fleming <afleming@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 24 December 2007, 1 commit
-
-
Committed by Benjamin Herrenschmidt

This adds a cputable function pointer for the CPU-side machine check handling. The semantics are still the same as the old one: the handler in ppc_md overrides the one in cputable, though ultimately we'll want to change that so the CPU one gets checked first.

This removes CONFIG_440A, which was a problem for multiplatform kernels, and instead fixes up the IVOR at runtime from a setup_cpu function. The "A" version of the machine check also tweaks the regs->trap value to differentiate the 2 versions at the C level.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-
- 13 November 2007, 1 commit
-
-
Committed by Becky Bruce

The context switch code in the kernel issues a dummy stwcx. to clear the reservation, as recommended by the architecture. However, some processors can have issues if this stwcx. to address A occurs while the reservation is already held to a different address B. To avoid this problem, the dummy stwcx. needs to be paired with a dummy lwarx to the same address. This adds the dummy lwarx, and creates a CPU feature bit to indicate which CPUs are affected. Tested on mpc8641_hpcn_defconfig in arch/powerpc; build tested in arch/ppc.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
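A sketch of the paired clear (the real code lives in the assembly context-switch path, and the extra lwarx is only compiled in under the new feature bit):

    static inline void clear_reservation_paired(unsigned int *scratch)
    {
        unsigned int tmp;

        asm volatile(
            "lwarx  %0,0,%1\n\t"    /* take a reservation on scratch... */
            "stwcx. %0,0,%1"        /* ...so the dummy stwcx. targets the
                                     * same address and clears it safely */
            : "=&r" (tmp)
            : "r" (scratch)
            : "cr0", "memory");
    }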
-
- 17 October 2007, 1 commit
-
-
Committed by Olof Johansson

PA6T has a bug where the slbie instruction does not honor the large segment bit. As a result, we have to always use slbia when switching context.

We don't have to worry about changing the slbie's during fault processing, since they should never be replacing one VSID with another using the same ESID. I.e. there's no risk of inserting duplicate entries due to a failed slbie of the old entry. So as long as we clear it out on context switch we should be fine.

Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 12 October 2007, 1 commit
-
-
Committed by Paul Mackerras

This makes the kernel use 1TB segments for all kernel mappings and for user addresses of 1TB and above, on machines which support them (currently POWER5+, POWER6 and PA6T). We detect that the machine supports 1TB segments by looking at the ibm,processor-segment-sizes property in the device tree.

We don't currently use 1TB segments for user addresses < 1T, since that would effectively prevent 32-bit processes from using huge pages unless we also had a way to revert to using 256MB segments. That would be possible but would involve extra complications (such as keeping track of which segment size was used when HPTEs were inserted) and is not addressed here.

Parts of this patch were originally written by Ben Herrenschmidt.

Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 11 October 2007, 1 commit
-
-
Committed by Paul Mackerras

Some IBM machines supply a "logical" PVR (processor version register) value in the device tree in the cpu nodes rather than the real PVR. This is used for instance to indicate that the processors in a POWER6 partition have been configured by the hypervisor to run in POWER5+ mode rather than POWER6 mode. To cope with this, we call identify_cpu a second time with the logical PVR value (the first call is with the real PVR value in the very early setup code).

However, POWER5+ machines can also supply a logical PVR value, and use the same value (the value that indicates a v2.04 architecture compliant processor). This causes problems for code that uses the performance monitor (such as oprofile), because the PMU registers are different in POWER6 (even in POWER5+ mode) from the real POWER5+.

This change works around this problem by taking out the PMU information from the cputable entries for the logical PVR values, and changing identify_cpu so that the second call to it won't overwrite the PMU information that was established by the first call (the one with the real PVR), but does update the other fields. Specifically, if the cputable entry for the logical PVR value has num_pmcs == 0, none of the PMU-related fields get used.

So that we can create a mixed cputable entry, we now make cur_cpu_spec point to a single static struct cpu_spec, and copy stuff from cpu_specs[i] into it. This has the side-effect that we can now make cpu_specs[] be initdata.

Ultimately it would be good to move the PMU-related fields out to a separate structure, pointed to by the cputable entries, and change identify_cpu so that it saves the PMU info pointer, copies the whole structure, and restores the PMU info pointer, rather than identify_cpu having to list all the fields that are *not* PMU-related.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
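A sketch of the merge described above, with field names assumed from the era's struct cpu_spec:

    static struct cpu_spec the_cpu_spec;    /* what cur_cpu_spec points to */

    static void merge_cpu_spec(struct cpu_spec *s)
    {
        struct cpu_spec old = the_cpu_spec;

        the_cpu_spec = *s;      /* second call: take all the new fields... */
        if (old.num_pmcs && !s->num_pmcs) {
            /* ...but a logical-PVR entry carrying no PMU info must not
             * clobber what the real-PVR match already established */
            the_cpu_spec.num_pmcs = old.num_pmcs;
            the_cpu_spec.pmc_type = old.pmc_type;
        }
        cur_cpu_spec = &the_cpu_spec;
    }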
-