- 24 July 2013, 1 commit
-
-
Committed by Denis Kirjanov
Commit 801eb73f introduced a bug in the PTE flag checks: we have to drop the _PAGE_COHERENT flag when _PAGE_NO_CACHE is set and the cache update policy is not write-through (i.e. _PAGE_WRITETHRU is not set). Signed-off-by: Denis Kirjanov <kda@linux-powerpc.org> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> CC: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
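A minimal sketch of the rule described above, assuming the kernel's powerpc _PAGE_* flag macros; this is an illustration of the flag logic, not the actual hash-setup code:

```c
/* Illustrative only: clear M (coherent) for a non-cacheable mapping unless
 * the mapping is write-through, as the commit message above describes. */
static unsigned long fixup_cache_flags(unsigned long flags)
{
	if ((flags & _PAGE_NO_CACHE) && !(flags & _PAGE_WRITETHRU))
		flags &= ~_PAGE_COHERENT;	/* drop M when I is set and W is not */
	return flags;
}
```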
-
- 01 July 2013, 1 commit
-
-
Committed by Michael Ellerman
On LPAR systems we need to inform the hypervisor that we are using the EBB registers. We do this by setting a bit in the Virtual Processor Area (VPA), formerly known as the lppaca. For now we do this always, i.e. we do not dynamically enable/disable it. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 21 June 2013, 2 commits
-
-
Committed by Aneesh Kumar K.V
Hugepage invalidation involves invalidating multiple HPTE entries. Optimize the operation by using H_BULK_REMOVE on LPAR platforms. On native, reduce the number of TLB flushes. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Aneesh Kumar K.V
If a hash bucket gets full, we "evict" a more or less random entry from it. When we do that we don't invalidate the TLB (hpte_remove) because we assume the old translation is still technically "valid". This implies that when we invalidate or update a PTE, we should do a TLB invalidate even if the HPTE entry is not valid. With hugepages, we need to pass the correct actual page size value for the TLB invalidation. This change updates patch 0608d692 "powerpc/mm: Always invalidate tlb on hpte invalidate and update" to handle transparent hugepages correctly. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 30 April 2013, 2 commits
-
-
Committed by Aneesh Kumar K.V
We look at both the segment base page size and the actual page size, and store the PTE-LP encodings in an array per base page size. We also update all relevant functions to take an actual page size argument so that we can use the correct PTE LP encoding in the HPTE. This also provides basic Multiple Page Size per Segment (MPSS) support, which is needed to enable THP on ppc64. [Fixed PR KVM build --BenH] Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
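A rough sketch of the data-structure change described above, with illustrative names (struct and array names here are stand-ins, not necessarily the kernel's): instead of one LP encoding per segment base page size, each base page size carries an array of encodings indexed by the actual page size used inside the segment.

```c
#define MMU_PAGE_COUNT	8	/* illustrative: number of supported page sizes */

struct mmu_psize_def_sketch {
	unsigned int shift;		/* log2 of the page size */
	int penc[MMU_PAGE_COUNT];	/* LP encoding per actual page size, -1 if unsupported */
};

static struct mmu_psize_def_sketch psize_defs[MMU_PAGE_COUNT];

/* Building an HPTE now needs both the segment base page size and the actual
 * page size of the mapping (e.g. a 16M THP inside a 64K base segment). */
static int hpte_actual_penc(int base_psize, int actual_psize)
{
	return psize_defs[base_psize].penc[actual_psize];
}
```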
-
Committed by Aneesh Kumar K.V
PAPR defines these errors as negative values, so print them accordingly for easier debugging. Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
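A hedged illustration of the point: PAPR hcall return codes such as H_HARDWARE or H_PARAMETER are negative, so printing them as signed decimals gives readable diagnostics instead of huge unsigned numbers. The helper name is made up for this example.

```c
/* Print hcall failures with a signed format so "-2" shows up as -2,
 * not 18446744073709551614. */
static void report_hcall_error(const char *name, long rc)
{
	if (rc != H_SUCCESS)
		pr_err("%s failed: rc = %ld\n", name, rc);
}
```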
-
- 08 April 2013, 1 commit
-
-
Committed by Michael Wolf
powerpc: pSeries_lpar_hpte_remove fails when the adjunct partition test is performed before the ANDCOND test. Some versions of pHyp will perform the adjunct partition test before the ANDCOND test. The result is that H_RESOURCE can be returned and trigger the BUG_ON condition, and the HPTE is not removed. So add a check for H_RESOURCE: it is OK if this HPTE is not removed, because pSeries_lpar_hpte_remove is looking for any HPTE to remove rather than a specific one, so it is fine to just move on to the next slot and try again. Cc: stable@vger.kernel.org Signed-off-by: Michael Wolf <mjw@linux.vnet.ibm.com> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
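A hedged sketch of the idea (not the actual pSeries_lpar_hpte_remove): while scanning a hash group for a victim entry, treat H_RESOURCE like H_NOT_FOUND and move on, instead of tripping the BUG_ON. try_remove_hpte() is a stand-in for the real H_REMOVE/ANDCOND hcall wrapper.

```c
extern long try_remove_hpte(unsigned long slot);	/* stand-in for the H_REMOVE/ANDCOND hcall */

static long hpte_remove_sketch(unsigned long hpte_group)
{
	int i;

	for (i = 0; i < HPTES_PER_GROUP; i++) {
		long rc = try_remove_hpte(hpte_group + i);

		if (rc == H_SUCCESS)
			return i;	/* evicted an entry from this slot */

		/* H_NOT_FOUND: nothing removable here; H_RESOURCE: some pHyp
		 * versions refuse because the slot belongs to an adjunct
		 * partition. Either way, just try the next slot. */
		BUG_ON(rc != H_NOT_FOUND && rc != H_RESOURCE);
	}
	return -1;	/* no removable entry in this group */
}
```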
-
- 17 September 2012, 1 commit
-
-
Committed by Aneesh Kumar K.V
This patch converts various functions to take a virtual page number instead of a virtual address. The virtual page number is the virtual address shifted right by VPN_SHIFT (12) bits. This enables an address range of up to 76 bits. Reviewed-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
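A small worked illustration of the relationship described above: storing addresses as VPNs rather than VAs means a 64-bit unsigned long can name any page in a virtual address space of up to 64 + 12 = 76 bits.

```c
#define VPN_SHIFT	12

/* Convert a virtual address to a virtual page number; the low VPN_SHIFT
 * bits (the offset within the smallest page) are dropped. */
static inline unsigned long va_to_vpn(unsigned long va)
{
	return va >> VPN_SHIFT;
}
```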
-
- 05 September 2012, 1 commit
-
-
Committed by Michael Ellerman
It's empty now, apart from other includes. Fix up a few files that were getting things via this header. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 21 March 2012, 1 commit
-
-
Committed by Stephen Rothwell
This is no longer selectable, so just remove all the dependent code. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 11 January 2012, 1 commit
-
-
Committed by Anton Blanchard
Tracepoints should not be called inside an rcu_idle_enter/rcu_idle_exit region. Since pSeries calls H_CEDE in the idle loop, we were violating this rule. Commit a7b152d5 ("powerpc: Tell RCU about idle after hcall tracing") tried to work around it by delaying rcu_idle_enter until after we called the hcall tracepoint, but there are a number of issues with that approach. The hcall tracepoint trampoline code is called conditionally, only when the tracepoint is enabled; if the tracepoint is not enabled we never call rcu_idle_enter. The idle_uses_rcu check was also done at compile time, which breaks multiplatform builds. The simple fix is to avoid tracing H_CEDE and rely on other tracepoints and the hypervisor dispatch trace log to work out whether we called H_CEDE. This fixes a hang during boot on pSeries. Signed-off-by: Anton Blanchard <anton@samba.org> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 03 January 2012, 1 commit
-
-
Committed by Li Zhong
Unpaired calls of probe_hcall_entry and probe_hcall_exit can happen as follows, which leads to an incorrect preempt count:
    __trace_hcall_entry => trace_hcall_entry -> probe_hcall_entry => get_cpu_var => preempt_disable
    __trace_hcall_exit  => trace_hcall_exit  -> probe_hcall_exit  => put_cpu_var => preempt_enable
where "A => B" means A calls B by name, so B is definitely called, and "A -> B" means A calls B through a function pointer, so B might not be called if the pointer is not set. An error therefore occurs when only one of probe_hcall_entry and probe_hcall_exit gets called during an hcall. This patch moves the preempt count operations from probe_hcall_entry and probe_hcall_exit into their callers. Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com> Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> CC: stable@kernel.org [v2.6.32+] Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
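A hedged sketch of the fix, not the exact kernel code, assuming the hcall tracepoint declarations from the powerpc trace header: the probes touch per-cpu data, so the preempt disable/enable pair is done in the caller, which always runs, rather than being split across probe_hcall_entry()/probe_hcall_exit(), which only run when the tracepoint is enabled.

```c
void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
{
	preempt_disable();			/* always balanced below, probe or not */
	trace_hcall_entry(opcode, args);	/* probe, if attached, may use per-cpu stats */
	preempt_enable();
}

void __trace_hcall_exit(long opcode, unsigned long retval, unsigned long *retbuf)
{
	preempt_disable();
	trace_hcall_exit(opcode, retval, retbuf);
	preempt_enable();
}
```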
-
- 12 December 2011, 1 commit
-
-
Committed by Paul E. McKenney
The PowerPC pSeries platform (CONFIG_PPC_PSERIES=y) enables hypervisor-call tracing for CONFIG_TRACEPOINTS=y kernels. One of the hypervisor calls that is traced is the H_CEDE call in the idle loop, which tells the hypervisor that this OS instance no longer needs the current CPU. However, tracing uses RCU, so this combination of kernel configuration variables needs to avoid telling RCU about the current CPU's idleness until after the H_CEDE-entry tracing completes on the one hand, and must tell RCU that the current CPU is no longer idle before the H_CEDE-exit tracing starts. In all other cases, it suffices to inform RCU of CPU idleness upon idle-loop entry and exit. This commit makes the required adjustments. Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
- 01 November 2011, 1 commit
-
-
Committed by Paul Gortmaker
With module.h being implicitly included everywhere via device.h, the absence of an explicit include for EXPORT_SYMBOL went unnoticed. Since we are heading towards fixing that up and cleaning module.h out of the device.h file, we need to explicitly include these files now. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 05 August 2011, 2 commits
-
-
Committed by Anton Blanchard
Make the VPA, SLB shadow and DTL registration and deregistration functions print consistent messages on error. I needed the firmware error code while chasing a kexec bug, but we weren't printing it. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
On a box with 8TB of RAM the MMU hash table is 64GB in size, which means we have 4G PTEs. pSeries_lpar_hptab_clear was using a signed int to store the index, which overflows at 2G. Signed-off-by: Anton Blanchard <anton@samba.org> Cc: <stable@kernel.org> Acked-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
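A hedged illustration of the bug class: with 4G (2^32) hash PTEs, a signed 32-bit loop index overflows at 2^31, so the slot index must be a wider unsigned type. clear_hpte_slot() is a stand-in for the real H_REMOVE-based clearing.

```c
extern void clear_hpte_slot(unsigned long slot);	/* stand-in for the real per-slot clear */

static void hptab_clear_sketch(unsigned long hpte_count)
{
	unsigned long i;		/* was "int i", which overflows at 2G entries */

	for (i = 0; i < hpte_count; i++)
		clear_hpte_slot(i);
}
```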
-
- 29 June 2011, 2 commits
-
-
Committed by Benjamin Herrenschmidt
On pseries machines, consoles are provided by the hypervisor using a low level get_chars/put_chars type interface. However, this is really just a transport to the service processor, which implements them either as "raw" consoles (networked consoles, HMC, ...) or as "hvsi" serial ports. The latter is a simple packet protocol on top of the raw character interface that is supposed to convey additional "serial port" style semantics. In practice, however, all it does is provide a way to read the CD line and set/clear our DTR line, that's it.

We currently implement the "raw" protocol as an hvc console backend (/dev/hvcN) and the "hvsi" protocol using a separate tty driver (/dev/hvsi0). However this is quite impractical. The arbitrary difference between the two types of devices has been a major source of user (and distro) confusion. There is also a mini-hvsi implementation in the pseries platform code for our low level debug console and early boot kernel messages, which means code duplication, though that low level variant is impractical as it's incapable of doing the initial protocol negotiation to establish the link to the FSP.

This essentially replaces the dedicated hvsi driver and the platform udbg code completely by extending the existing hvc_vio backend used in "raw" mode so that:
- it now supports HVSI as well,
- the hvc backend gains support for tiocm{get,set},
- it also provides a udbg interface for early debug and boot console.

This is overall less code, though that will only be obvious once we remove the old "hvsi" driver, which is still available for now. When the old driver is enabled, the new code still kicks in for the low level udbg console, replacing the old mini implementation in the platform code; it just doesn't provide the higher level "hvc" interface.

In addition to producing generally simpler code, this has several benefits over our current situation:
- The user/distro only has to deal with /dev/hvcN for the hypervisor console, avoiding all sorts of confusion that have plagued us in the past.
- The tty, kernel and low level debug consoles all use the same code base, which supports the full protocol establishment process, so the console is available much earlier than it used to be with the old HVSI driver. The kernel console works much earlier and udbg is available much earlier too. Hackers can also enable a hard-coded very-early debug console that works with HVSI (previously that was only supported for the "raw" mode).

I've tried to keep the same semantics as hvsi with respect to how I react to things like CD changes, with some subtle differences:
- I clear DTR on close if HUPCL is set.
- The current hvsi triggers a hangup if it detects an up->down transition on CD (you can still open a console with CD down). My new implementation triggers a hangup if the link to the FSP is severed, and severs it upon detecting an up->down transition on CD.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
When CONFIG_PPC_EARLY_DEBUG is set, call register_early_udbg_console() early from generic code. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 04 May 2011, 1 commit
-
-
Committed by Brian King
Adds support for page coalescing, a feature on IBM Power servers that allows identical pages to be coalesced between logical partitions. Hint text pages as coalesce candidates, since they are the pages most likely to be coalescible between partitions. This patch also exports some page coalescing statistics available from firmware via lparcfg. [BenH: Moved a couple of things around to fix compile problems] Signed-off-by: Brian King <brking@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 27 April 2011, 1 commit
-
-
Committed by Matt Evans
Some of the 64-bit PPC CPU features are MMU-related, so this patch moves them to MMU_FTR_ bits. All cpu_has_feature()-style tests are moved to mmu_has_feature(), and seven feature bits are freed as a result. Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
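A hedged before/after illustration of the conversion, using the 1T-segment feature bit as the example; setup_1T_segments() is a stand-in for whatever the caller does with the result.

```c
extern void setup_1T_segments(void);	/* illustrative callee */

static void old_way(void)
{
	if (cpu_has_feature(CPU_FTR_1T_SEGMENT))	/* before: CPU feature API */
		setup_1T_segments();
}

static void new_way(void)
{
	if (mmu_has_feature(MMU_FTR_1T_SEGMENT))	/* after: MMU feature API */
		setup_1T_segments();
}
```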
-
- 07 February 2011, 1 commit
-
-
Committed by Anton Blanchard
Spinlocks on shared processor partitions use H_YIELD to notify the hypervisor that we are waiting on another virtual CPU. Unfortunately this means the hcall tracepoints can recurse. The patch below adds a per-cpu depth counter and checks it in both the entry and exit hcall tracepoints. Signed-off-by: Anton Blanchard <anton@samba.org> Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> CC: stable@kernel.org
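A hedged sketch of the recursion guard (simplified from the real code): a per-cpu depth counter is bumped around the tracepoint so that an hcall made while tracing, such as the spinlock yield path, does not re-enter the tracing code.

```c
static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);

void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
{
	unsigned int *depth = &get_cpu_var(hcall_trace_depth);

	if (*depth == 0) {			/* only trace the outermost hcall */
		(*depth)++;
		trace_hcall_entry(opcode, args);
		(*depth)--;
	}

	put_cpu_var(hcall_trace_depth);
}
```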
-
- 29 November 2010, 1 commit
-
-
Committed by Will Schmidt
This introduces a pair of kernel parameters that can be used to disable the MULTITCE and BULK_REMOVE h-calls. By default those hcalls are enabled, active, and good for throughput and performance. The ability to disable them will be useful for some of the PREEMPT_RT-related investigation and work occurring on Power. Signed-off-by: Will Schmidt <will_schmidt@vnet.ibm.com> Cc: Olof Johansson <olof@lixom.net> Cc: Anton Blanchard <anton@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
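A hedged sketch of one plausible mechanism; the parameter name shown is illustrative, not necessarily the one this patch uses. An early __setup() handler clears the corresponding firmware-feature bit so later code falls back to the single-entry hcalls.

```c
static int __init disable_bulk_remove(char *str)
{
	if (strcmp(str, "off") == 0 &&
	    firmware_has_feature(FW_FEATURE_BULK_REMOVE)) {
		pr_info("Disabling BULK_REMOVE firmware feature\n");
		powerpc_firmware_features &= ~FW_FEATURE_BULK_REMOVE;
	}
	return 1;
}
__setup("bulk_remove=", disable_bulk_remove);	/* illustrative parameter name */
```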
-
- 02 September 2010, 2 commits
-
-
Committed by Paul Mackerras
Currently, when CONFIG_VIRT_CPU_ACCOUNTING is enabled, we use the PURR register for measuring the user and system time used by processes, as well as other related times such as hardirq and softirq times. This turns out to be quite confusing for users because it means that a program will often be measured as taking less time when run on a multi-threaded processor (SMT2 or SMT4 mode) than it does when run on a single-threaded processor (ST mode), even though the program takes longer to finish. The discrepancy is accounted for as stolen time, which is also confusing, particularly when there are no other partitions running.

This changes the accounting to use the timebase instead, meaning that the reported user and system times are the actual number of real-time seconds that the program was executing on the processor thread, regardless of which SMT mode the processor is in. Thus a program will generally show greater user and system times when run on a multi-threaded processor than on a single-threaded processor.

On pSeries systems on POWER5 or later processors, we measure the stolen time (time when this partition wasn't running) using the hypervisor dispatch trace log. We check for new entries in the log on every entry from user mode and on every transition from kernel process context to soft or hard IRQ context (i.e. when account_system_vtime() gets called). So that we can correctly distinguish time stolen from user time and time stolen from system time, without having to check the log on every exit to user mode, we store separate timestamps for exit to user mode and entry from user mode.

On systems that have a SPURR (POWER6 and POWER7), we read the SPURR in account_system_vtime() (as before), and then apportion the SPURR ticks since the last time we read it between scaled user time and scaled system time according to the relative proportions of user time and system time over the same interval. This avoids having to read the SPURR on every kernel entry and exit. On systems that have a PURR but not a SPURR (i.e. POWER5), we do the same using the PURR rather than the SPURR.

This disables the DTL user interface in /sys/kernel/debug/powerpc/dtl for now since it conflicts with the use of the dispatch trace log by the time accounting code.

Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Paul Mackerras
Currently we have the lppaca structs as a simple array of NR_CPUS entries, taking up space in the data section of the kernel image. In future we would like to allocate them dynamically, so this abstracts out the accesses to the array, making it easier to change how we locate the lppaca for a given cpu in future. Specifically, lppaca[cpu] changes to lppaca_of(cpu). Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
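A hedged sketch of the accessor abstraction: callers stop indexing the array directly, so a later change (dynamic allocation, or locating the lppaca via the paca) only has to touch the accessor. The field used in the example is illustrative.

```c
extern struct lppaca lppaca[];		/* today: static NR_CPUS-sized array */

#define lppaca_of(cpu)	(lppaca[cpu])	/* tomorrow: can become any lookup */

/* Call sites change from lppaca[cpu].field to lppaca_of(cpu).field, e.g.: */
static inline int cpu_is_shared_proc(int cpu)
{
	return lppaca_of(cpu).shared_proc;	/* field name illustrative */
}
```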
-
- 21 May 2010, 1 commit
-
-
Committed by Michael Neuling
Currently, for kexec, the PTE tear-down on 1TB segment systems normally requires 3 hcalls for each PTE removal. On a machine with 32GB of memory it can take around a minute to remove all the PTEs. This optimises the path so that we only remove PTEs that are valid. It also uses the "read 4 PTEs at once" hcall. For the common case where a PTE is invalid in a 1TB segment, this turns the 3 hcalls per PTE into 1 hcall per 4 PTEs. This gives a more than 10x speedup in kexec times on PHYP, taking a 32GB machine from around 1 minute down to a few seconds. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 28 October 2009, 2 commits
-
-
Committed by Anton Blanchard
While most users of the hcall tracepoints will only want the opcode and return code, some will want all the arguments. To avoid the complexity of using varargs, we pass a pointer to the register save area, which contains all the arguments. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Anton Blanchard
Add hcall_entry and hcall_exit tracepoints. This replaces the inline assembly HCALL_STATS code and converts it to use the new tracepoints. To keep the disabled case as quick as possible, we embed a status word in the TOC so we can get at it with a single load, keeping the overhead at a minimum. Time taken for a null hcall:
    No tracepoint code:    135.79 cycles
    Disabled tracepoints:  137.95 cycles
For reference, before this patch enabling HCALL_STATS resulted in a null hcall of 201.44 cycles! Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
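A hedged C rendering of the fast-path idea (the real check is a single load in the hcall assembly stubs, and the symbol name here is illustrative): a global enable word lives somewhere cheap to reach, such as the TOC, and the expensive tracing path is only taken when it is non-zero.

```c
extern unsigned long hcall_trace_enabled;	/* illustrative name for the TOC-resident status word */

static inline void maybe_trace_hcall_entry(unsigned long opcode, unsigned long *args)
{
	if (unlikely(hcall_trace_enabled))	/* one load in the common, disabled case */
		__trace_hcall_entry(opcode, args);
}
```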
-
- 08 July 2009, 1 commit
-
-
Committed by Michael Ellerman
pr_debug() can now result in code being generated even when DEBUG is not defined. That's not really desirable in some places. In particular, pSeries_lpar_hpte_insert() goes from 185 instructions down to 77 instructions as a result of this patch. Luckily that code isn't called very often ... With CONFIG_DYNAMIC_DEBUG=y, size before:
    text    data    bss     dec     hex  filename
    7284    1552    296     9132   23ac  platforms/pseries/lpar.o
size after:
    text    data    bss     dec     hex  filename
    5806    1096    296     7198   1c1e  platforms/pseries/lpar.o
Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
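A hedged sketch of the usual workaround for this situation (macro name and helper are illustrative): with CONFIG_DYNAMIC_DEBUG=y every pr_debug() call site keeps its format string and control structure, so a hot path can instead use a locally defined macro that compiles away entirely unless DEBUG is defined.

```c
#ifdef DEBUG
#define DBG_LOW(fmt...)	udbg_printf(fmt)
#else
#define DBG_LOW(fmt...)	do { } while (0)
#endif

static long hpte_insert_sketch(unsigned long vpn, unsigned long pa)
{
	DBG_LOW("insert(vpn=%016lx, pa=%016lx)\n", vpn, pa);	/* costs nothing when !DEBUG */
	/* ... actual insert elided ... */
	return 0;
}
```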
-
- 21 May 2009, 1 commit
-
-
Committed by Robert Jennings
Adds support for the "unused" page hint, which can be used in shared memory partitions to flag pages not in use; these will then be stolen before active pages by the hypervisor when memory needs to be moved to LPARs in need of additional memory. Failure to mark pages as 'unused' makes the LPAR slower to give up unused memory to other partitions. This adds the kernel parameter 'cmo_free_hint' to disable this functionality. Signed-off-by: Brian King <brking@linux.vnet.ibm.com> Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 15 July 2008, 1 commit
-
-
Committed by Dave Kleikamp
It is okay for both _PAGE_GUARDED and _PAGE_COHERENT (G and M) to be set in the same PTE. In fact, even if that were not the case, there doesn't seem to be any place where G is set without also setting I (_PAGE_NO_CACHE), so the test for I is sufficient as a condition for clearing _PAGE_COHERENT when filling the hash table. Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 09 July 2008, 1 commit
-
-
Committed by Dave Kleikamp
The current low level hash code on LPAR configurations clears _PAGE_COHERENT (M) when either _PAGE_GUARDED (G) or _PAGE_NO_CACHE (I) is set. This conflicts with _PAGE_SAO, which has the M, I and W bits set at once (normally an invalid combination) to indicate the new SAO attribute. This changes the code to allow that case. Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
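A hedged sketch of the rule described above, assuming the kernel's powerpc _PAGE_* flag macros (the helper itself is illustrative): only drop M for a plain non-cacheable mapping, and leave the W|I|M combination alone, because that pattern encodes SAO (Strong Access Ordering) rather than a real cache-attribute request.

```c
static unsigned long adjust_coherent_bit(unsigned long flags)
{
	unsigned long sao = _PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT;
	int is_sao = (flags & sao) == sao;	/* W, I and M all set: SAO encoding */

	if ((flags & _PAGE_NO_CACHE) && !is_sao)
		flags &= ~_PAGE_COHERENT;	/* ordinary I mapping: M must be clear */

	return flags;
}
```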
-
- 14 May 2008, 1 commit
-
-
Committed by Michael Ellerman
Don't return void in pseries/iommu.c. Make mce_data_buf static in pseries/ras.c. Make things static in pseries/rtasd.c. Make things static in pseries/setup.c. vtermno may as well be static in platforms/pseries/lpar.c. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 24 April 2008, 2 commits
-
-
Committed by Michael Ellerman
In pseries/lpar.c, fix some printf specifier mismatches and add a newline to one printk. In pseries/rtasd.c, add "rtasd" to some messages to make it clear where they're coming from. In pseries/scanlog.c, remove the hand-rolled runtime debugging support; this file has been largely unchanged for eons, and if we need to debug it in future we can recompile. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Michael Ellerman
On pseries LPAR we can call the udbg routines and the udbg console very early. So mark the udbg console as safe to call early in boot, and register the udbg console as soon as the udbg routines are hooked up. This allows platforms/pseries code to use printk() and pr_debug() rather than needing to call udbg_printf() directly for early debugging. This is nice because a) it's standard, b) it goes via the printk buffer, and c) you get printk time stamps. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 17 April 2008, 2 commits
-
-
Committed by Michael Ellerman
There is logic in platforms/pseries/lpar.c which checks whether the user has specified a console on the command line, and refrains from adding a preferred console entry for the hvc/hvsi console if they have. This trips up if you use "netconsole=foo" on the command line, with the result that you get only the netconsole, because the hvc device is never added as a preferred console. Worse still, if you get the netconsole configuration wrong somehow, you end up with no console at all. As it turns out, we don't need to worry about checking the command line: if the user has specified "console=foo", then foo will be set as the preferred console when the command line is parsed in start_kernel(), much later than the pseries code, so the latter setting will take effect. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Michael Ellerman
Move the prototype for find_udbg_vterm() into pseries.h, removing it from setup.c. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 26 February 2008, 1 commit
-
-
Committed by Badari Pulavarty
For memory remove, we need to clean up htab mappings for the section of memory we are removing. This implements support for removing htab bolted mappings for pSeries logical partitions. Other sub-archs may need to implement similar functionality for hotplug memory remove to work on them. Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 15 January 2008, 1 commit
-
-
Committed by Paul Mackerras
Commit 473980a9 added a call to clear the SLB shadow buffer before registering it. Unfortunately this means that we clear out the entries that slb_initialize has previously set in there. On POWER6, the hypervisor uses the SLB shadow buffer when doing partition switches, which means that after the next partition switch each non-boot CPU has no SLB entries to map the kernel text and data, causing it to crash. This fixes it by reverting most of 473980a9 and instead clearing the 3rd entry explicitly in slb_initialize. This fixes the problem that 473980a9 was trying to solve, but without breaking POWER6. Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 11 January 2008, 1 commit
-
-
Committed by Michael Neuling
Before we register the SLB shadow buffer, we need to invalidate the entries in the buffer; otherwise we can end up with stale entries from when we previously offlined the CPU. This does this invalidation, as well as unregistering the buffer with PHYP, before we offline the cpu. Tested, and fixes crashes seen, on 970MP (thanks to tonyb) and POWER5. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 12 October 2007, 1 commit
-
-
Committed by Paul Mackerras
This makes the kernel use 1TB segments for all kernel mappings and for user addresses of 1TB and above, on machines which support them (currently POWER5+, POWER6 and PA6T). We detect that the machine supports 1TB segments by looking at the ibm,processor-segment-sizes property in the device tree. We don't currently use 1TB segments for user addresses below 1TB, since that would effectively prevent 32-bit processes from using huge pages unless we also had a way to revert to using 256MB segments. That would be possible but would involve extra complications (such as keeping track of which segment size was used when HPTEs were inserted) and is not addressed here. Parts of this patch were originally written by Ben Herrenschmidt. Signed-off-by: Paul Mackerras <paulus@samba.org>
-