- 18 Dec 2012, 1 commit
-
-
Committed by Catalin Marinas

This function is used by sparc, powerpc, tile and arm64 for compat support. The patch adds a generic implementation with a wrapper for PowerPC to do the u32->int sign extension. The reason for a single patch covering powerpc, tile, sparc and arm64 is to keep it bisectable, otherwise kernel building may fail with mismatched function declarations.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Chris Metcalf <cmetcalf@tilera.com> [for tile]
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
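A minimal C sketch of the wrapper pattern described above; the names compat_sys_example and compat_sys_example_wrapper are hypothetical, since the commit text does not name the syscall:

#include <linux/compat.h>
#include <linux/linkage.h>

/* Generic implementation shared by sparc, powerpc, tile and arm64
 * (illustrative signature only). */
asmlinkage long compat_sys_example(int arg, void __user *uptr);

/* PowerPC wrapper: the register carries a zero-extended 32-bit value,
 * so an int argument must be explicitly sign-extended from the low
 * 32 bits before calling the generic implementation. */
asmlinkage long compat_sys_example_wrapper(u32 arg, void __user *uptr)
{
	return compat_sys_example((int)(s32)arg, uptr);
}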
-
- 12 Dec 2012, 1 commit
-
-
Committed by Wen Congyang

We use a static array to store struct node. In many cases we don't have that many nodes, so some of that memory goes unused. Convert it to per-device, dynamically allocated memory.

Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Jiang Liu <liuj97@gmail.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
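A minimal sketch of the conversion described above (the registration helper name is illustrative, not taken from the patch): the static array of struct node becomes an array of pointers, and each entry is allocated only when the corresponding node is registered.

#include <linux/node.h>
#include <linux/numa.h>
#include <linux/slab.h>

/* Before: one struct node per possible node, allocated statically:
 *	static struct node node_devices[MAX_NUMNODES];
 * After: only the pointers are static; each device is allocated on
 * demand when its node is brought up. */
static struct node *node_devices[MAX_NUMNODES];

static int example_register_node(int nid)
{
	struct node *node = kzalloc(sizeof(*node), GFP_KERNEL);

	if (!node)
		return -ENOMEM;

	node_devices[nid] = node;	/* real code also registers the device */
	return 0;
}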
-
- 06 Dec 2012, 1 commit
-
-
Committed by Paul Mackerras

When we change or remove a HPT (hashed page table) entry, we can do either a global TLB invalidation (tlbie) that works across the whole machine, or a local invalidation (tlbiel) that only affects this core. Currently we do local invalidations if the VM has only one vcpu or if the guest requests it with the H_LOCAL flag, though the guest Linux kernel currently doesn't ever use H_LOCAL. Then, to cope with the possibility that vcpus moving around to different physical cores might expose stale TLB entries, there is some code in kvmppc_hv_entry to flush the whole TLB of entries for this VM if either this vcpu is now running on a different physical core from where it last ran, or if this physical core last ran a different vcpu.

There are a number of problems on POWER7 with this as it stands:

- The TLB invalidation is done per thread, whereas it only needs to be done per core, since the TLB is shared between the threads.
- With the possibility of the host paging out guest pages, the use of H_LOCAL by an SMP guest is dangerous since the guest could possibly retain and use a stale TLB entry pointing to a page that had been removed from the guest.
- The TLB invalidations that we do when a vcpu moves from one physical core to another are unnecessary in the case of an SMP guest that isn't using H_LOCAL.
- The optimization of using local invalidations rather than global should apply to guests with one virtual core, not just one vcpu.

(None of this applies on PPC970, since there we always have to invalidate the whole TLB when entering and leaving the guest, and we can't support paging out guest memory.)

To fix these problems and simplify the code, we now maintain a simple cpumask of which cpus need to flush the TLB on entry to the guest. (This is indexed by cpu, though we only ever use the bits for thread 0 of each core.) Whenever we do a local TLB invalidation, we set the bits for every cpu except the bit for thread 0 of the core that we're currently running on. Whenever we enter a guest, we test and clear the bit for our core, and flush the TLB if it was set. On initial startup of the VM, and when resetting the HPT, we set all the bits in the need_tlb_flush cpumask, since any core could potentially have stale TLB entries from the previous VM to use the same LPID, or from the previous contents of the HPT.

Then, we maintain a count of the number of online virtual cores, and use that when deciding whether to use a local invalidation rather than the number of online vcpus. The code to make that decision is extracted out into a new function, global_invalidates(). For multi-core guests on POWER7 (i.e. when we are using mmu notifiers), we now never do local invalidations regardless of the H_LOCAL flag.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
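A condensed sketch of the decision logic described above; the field names need_tlb_flush and online_vcores follow the commit text, but the other details (headers, helper names) are assumptions rather than the actual patch:

#include <linux/cpumask.h>
#include <linux/kvm_host.h>
#include <linux/smp.h>
#include <asm/cputhreads.h>
#include <asm/hvcall.h>

/* Return true if a global invalidation (tlbie) is required; otherwise a
 * core-local tlbiel is sufficient, and every other core is flagged as
 * needing a flush before it next enters this guest. */
static bool global_invalidates(struct kvm *kvm, unsigned long flags)
{
	bool global;

	/* Local invalidation is only trusted for a single virtual core
	 * requesting H_LOCAL; multi-core guests using mmu notifiers always
	 * get a global flush. */
	if (kvm->arch.online_vcores == 1 && (flags & H_LOCAL))
		global = false;
	else
		global = true;

	if (!global) {
		/* Any other core might now hold stale TLB entries for this
		 * LPID: set every bit except thread 0 of our own core, so
		 * the flush happens on the next guest entry there. */
		cpumask_setall(&kvm->arch.need_tlb_flush);
		cpumask_clear_cpu(cpu_first_thread_sibling(raw_smp_processor_id()),
				  &kvm->arch.need_tlb_flush);
	}
	return global;
}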
-
- 29 Nov 2012, 3 commits
-
-
Committed by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Bill Pemberton

Remove conditional code based on CONFIG_HOTPLUG being false. It's always on now, in preparation for it going away as an option.

Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: Rob Herring <rob.herring@calxeda.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 20 Nov 2012, 1 commit
-
-
Committed by Frederic Weisbecker

System time accounting APIs such as vtime_account_system() and vtime_account_idle() need to be irqsafe. Current callers include irq entry, exit and kvm, all of which have been checked against that requirement. Now it's better to back that up with an automatic check, in case we gain further callers or missed something.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
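A minimal sketch of the kind of automatic check described; where exactly it lands in the real patch may differ:

#include <linux/bug.h>
#include <linux/irqflags.h>
#include <linux/sched.h>

void vtime_account_system(struct task_struct *tsk)
{
	/* These APIs must run with interrupts disabled; warn once if a
	 * future caller forgets. */
	WARN_ON_ONCE(!irqs_disabled());

	/* ... architecture-specific system time accounting ... */
}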
-
- 19 Nov 2012, 4 commits
-
-
Committed by Frederic Weisbecker

On ia64 and powerpc, the vtime context switch only consists of flushing pending system and user time, plus a little arch housekeeping. Consolidate that into a generic implementation. s390 is a special case because pending user and system time accounting there is hard to dissociate, so it keeps its own implementation.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
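A hedged sketch of what such a generic implementation can look like; arch_vtime_task_switch() is assumed to be the name of the per-arch housekeeping hook, and s390 would override the whole function:

#include <linux/kernel_stat.h>
#include <linux/sched.h>

/* Flush the pending system (or idle) and user time of the task being
 * switched out, then let the architecture finish its housekeeping. */
void vtime_task_switch(struct task_struct *prev)
{
	if (is_idle_task(prev))
		vtime_account_idle(prev);
	else
		vtime_account_system(prev);

	vtime_account_user(prev);
	arch_vtime_task_switch(prev);
}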
-
Committed by Frederic Weisbecker

All vtime implementations just flush the user time on process tick. Consolidate that in generic code by calling a user time accounting helper. This avoids an indirect call on ia64 and prepares for also consolidating the vtime context switch code.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
-
Committed by Frederic Weisbecker

Prepending the irq-unsafe vtime APIs with underscores was actually a bad idea, as the result is a big mess in the API namespace that is only waiting to be further extended. Also, these helpers are always called from irq-safe callers except kvm. Just provide a vtime_account_system_irqsafe() for this specific case so that we can remove the underscore prefix on the other vtime functions.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
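A minimal sketch of the wrapper: just the obvious irq save/restore around the irq-unsafe call, for the one caller (kvm) that cannot guarantee interrupts are already off:

#include <linux/irqflags.h>
#include <linux/sched.h>

void vtime_account_system_irqsafe(struct task_struct *tsk)
{
	unsigned long flags;

	local_irq_save(flags);
	vtime_account_system(tsk);
	local_irq_restore(flags);
}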
-
Committed by Adam Buchbinder

"Whether" is misspelled in various comments across the tree; this fixes them. No code changes.

Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 15 Nov 2012, 29 commits
-
-
Committed by Michael Neuling

This turns on the MMU for exceptions via the AIL field in the LPCR.

Signed-off-by: Matt Evans <matt@ozlabs.org>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling

We want to change what's initially set in the LPCR, so start by taking the move from the LPCR out of the function and into the caller.

Signed-off-by: Matt Evans <matt@ozlabs.org>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling

POWER8/v2.07 allows exceptions to be taken with the MMU still on. A new set of exception vectors is added at 0xc000_0000_0000_4xxx. When the HW takes us here, MSR IR/DR will be set already and we no longer need a costly RFID to turn the MMU back on again.

The original 0x0 based exception vectors remain for when the HW can't leave the MMU on. Examples of this are when we can't trust the current MMU mappings, like when we are changing from guest to hypervisor (HV 0 -> 1) or when the MMU was off already. In these cases the HW will take us to the original 0x0 based exception vectors with the MMU off as before.

This uses the new macros added previously to implement these new exception vectors at 0xc000_0000_0000_4xxx. We exit these exception vectors using mflr/blr (rather than mtspr SRR0/RFID), since we don't need the costly MMU switch anymore.

This moves the __end_interrupts marker down past these new 0x4000 vectors since they will need to be copied down to 0x0 when the kernel is not at 0x0.

Signed-off-by: Matt Evans <matt@ozlabs.org>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling

POWER8/v2.07 allows exceptions to be taken with the MMU still on. A new set of exception vectors is added at 0xc000_0000_0000_4xxx. When the HW takes us here, MSR IR/DR will be set already and we no longer need a costly RFID to turn the MMU back on again.

The original 0x0 based exception vectors remain for when the HW can't leave the MMU on. Examples of this are when we can't trust the current MMU mappings, like when we are changing from guest to hypervisor (HV 0 -> 1) or when the MMU was off already. In these cases the HW will take us to the original 0x0 based exception vectors with the MMU off as before.

The below macros are copies of the macros used at the 0x0 offset but modified to handle the MMU being on. In these macros we use the link register to jump to the secondary handlers rather than using RFID (RFID was also used to turn on the MMU).

Signed-off-by: Matt Evans <matt@ozlabs.org>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling

This turns the syscall handler into macros, as we are going to want to reuse them again later.

Signed-off-by: Matt Evans <matt@ozlabs.org>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling

If we change load_hander() to use an ori instead of an addi, we can load handlers up to 64k away, provided we are still 64k aligned (ori takes an unsigned 16-bit immediate, whereas addi sign-extends its immediate and so only reaches 32k).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling

This removes the large gap between 0x1800 and 0x3000.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling

Remove redundant spaces and make tab usage consistent.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard

If we build a kernel with CONFIG_RELOCATABLE=y and CONFIG_CRASH_DUMP=n, the kernel fails when we run at a non-zero offset. It turns out we were incorrectly wrapping some of the relocatable kernel code with CONFIG_CRASH_DUMP.

Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: <stable@kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling

A PVR of 0x0F000004 means we are arch v2.07 compliant, i.e. POWER8.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling

Update ibm,architecture.vec for POWER8, allowing us to support more than one partition per core.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Aravinda Prasad

On powerpc, ptrace will disable the hardware breakpoint request once the breakpoint is hit. It is the responsibility of the caller to set it again. However, when the caller sets the hardware breakpoint again using ptrace(PTRACE_SET_DEBUGREG, child_pid, 0, addr), the hardware breakpoint is not enabled. While gdb's approach is to unregister and re-register the hardware breakpoint every time the breakpoint is hit - which is working fine - this could affect other programs trying to re-register a hardware breakpoint without unregistering it first. This patch enables the hardware breakpoint if the caller is re-registering.

Signed-off-by: Aravinda Prasad <aravinda@linux.vnet.ibm.com>
Acked-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
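A hedged user-space illustration of the re-registration pattern this patch makes work (error handling trimmed; the watch address is illustrative, and PTRACE_SET_DEBUGREG comes from the powerpc <asm/ptrace.h>):

#include <sys/ptrace.h>
#include <sys/types.h>
#include <asm/ptrace.h>		/* powerpc: PTRACE_SET_DEBUGREG */
#include <stdio.h>

/* After the child stops on a breakpoint hit, the kernel has disabled the
 * breakpoint; simply setting it again should re-enable it, without the
 * unregister/re-register dance gdb does. */
static void rearm_watchpoint(pid_t child, unsigned long addr)
{
	if (ptrace(PTRACE_SET_DEBUGREG, child, 0, addr) == -1)
		perror("PTRACE_SET_DEBUGREG");

	ptrace(PTRACE_CONT, child, 0, 0);
}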
-
Committed by Akinobu Mita

- Calculate the bitmap size with BITS_TO_LONGS()
- Use bitmap_empty() to verify that all bits are cleared

This also includes a printk to pr_warn() conversion.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
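A small sketch of the two idioms, under assumed sizes and names:

#include <linux/bitmap.h>
#include <linux/printk.h>
#include <linux/slab.h>

#define EXAMPLE_NBITS 256	/* assumed bitmap size */

static int example(void)
{
	unsigned long *mask;

	/* BITS_TO_LONGS() yields the number of longs needed for the bitmap,
	 * so the allocation is sized correctly on both 32-bit and 64-bit. */
	mask = kcalloc(BITS_TO_LONGS(EXAMPLE_NBITS), sizeof(*mask), GFP_KERNEL);
	if (!mask)
		return -ENOMEM;

	/* ... set and clear bits ... */

	/* bitmap_empty() replaces an open-coded loop over the words. */
	if (!bitmap_empty(mask, EXAMPLE_NBITS))
		pr_warn("example: bitmap not fully cleared\n");

	kfree(mask);
	return 0;
}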
-
Committed by Michael Neuling

Fix the global symbol name to match the actual denorm_exception_hv label.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling

Just a copy of POWER7 for now. Will update with new code later.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling

We are going to reuse this in POWER8, so make the name generic.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling

s/intruction/instruction/

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by K.Prasad

PPC_PTRACE_GETHWDBGINFO, PPC_PTRACE_SETHWDEBUG and PPC_PTRACE_DELHWDEBUG are PowerPC-specific ptrace flags that use the watchpoint register. While they are targeted primarily towards BookE users, user-space applications such as GDB have started using them for BookS too. This patch enables the use of the generic hardware breakpoint interfaces for these new flags.

Apart from the usual benefits of using the generic hw-breakpoint interfaces, these changes allow debuggers (such as GDB) to use a common set of ptrace flags for their watchpoint needs and allow more precise breakpoint specification (the length of the variable can be specified).

Mikey added: rebased and added dbginfo.features around #ifdef CONFIG_HAVE_HW_BREAKPOINT

Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
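A hedged user-space sketch of setting a watchpoint through these flags; the struct and constants are the usual ones from the powerpc <asm/ptrace.h>, but treat the exact field usage as illustrative and query PPC_PTRACE_GETHWDBGINFO for the supported features first:

#include <sys/ptrace.h>
#include <sys/types.h>
#include <asm/ptrace.h>		/* powerpc: ppc_hw_breakpoint, PPC_PTRACE_* */
#include <string.h>

/* Set an exact-address write watchpoint on the traced child; the return
 * value is a slot id that can later be passed to PPC_PTRACE_DELHWDEBUG. */
static long set_watchpoint(pid_t child, unsigned long addr)
{
	struct ppc_hw_breakpoint bp;

	memset(&bp, 0, sizeof(bp));
	bp.version        = PPC_DEBUG_CURRENT_VERSION;
	bp.trigger_type   = PPC_BREAKPOINT_TRIGGER_WRITE;
	bp.addr_mode      = PPC_BREAKPOINT_MODE_EXACT;
	bp.condition_mode = PPC_BREAKPOINT_CONDITION_NONE;
	bp.addr           = addr;	/* addr2 would bound a ranged watch */

	return ptrace(PPC_PTRACE_SETHWDEBUG, child, 0, &bp);
}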
-
Committed by Michael Ellerman

The last user of ppc_md.idle_loop() was removed when we dropped the legacy iSeries code, in commit 8ee3e0d6.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Li Zhong

This patch tries to fix the following BUG report:

[    0.012313] BUG: MAX_STACK_TRACE_ENTRIES too low!
[    0.012318] turning off the locking correctness validator.
[    0.012321] Call Trace:
[    0.012330] [c00000017666f6d0] [c000000000012128] .show_stack+0x78/0x184 (unreliable)
[    0.012339] [c00000017666f780] [c0000000000b6348] .save_trace+0x12c/0x14c
[    0.012345] [c00000017666f800] [c0000000000b7448] .mark_lock+0x2bc/0x710
[    0.012351] [c00000017666f8b0] [c0000000000bb198] .__lock_acquire+0x748/0xaec
[    0.012357] [c00000017666f9b0] [c0000000000bb684] .lock_acquire+0x148/0x194
[    0.012365] [c00000017666fa80] [c00000000069371c] .mutex_lock_nested+0x84/0x4ec
[    0.012372] [c00000017666fb90] [c000000000096998] .smpboot_register_percpu_thread+0x3c/0x10c
[    0.012380] [c00000017666fc30] [c0000000009ba910] .spawn_ksoftirqd+0x28/0x48
[    0.012386] [c00000017666fcb0] [c00000000000a98c] .do_one_initcall+0xd8/0x1d0
[    0.012392] [c00000017666fd60] [c00000000000b1f8] .kernel_init+0x120/0x398
[    0.012398] [c00000017666fe30] [c000000000009ad4] .ret_from_kernel_thread+0x5c/0x64
[    0.012404] [c00000017666fa00] [c00000017666fb20] 0xc00000017666fb20
[    0.012410] [c00000017666fa80] [c00000000069371c] .mutex_lock_nested+0x84/0x4ec
[    0.012416] [c00000017666fb90] [c000000000096998] .smpboot_register_percpu_thread+0x3c/0x10c
[    0.012422] [c00000017666fc30] [c0000000009ba910] .spawn_ksoftirqd+0x28/0x48
[    0.012427] [c00000017666fcb0] [c00000000000a98c] .do_one_initcall+0xd8/0x1d0
[    0.012433] [c00000017666fd60] [c00000000000b1f8] .kernel_init+0x120/0x398
[    0.012439] [c00000017666fe30] [c000000000009ad4] .ret_from_kernel_thread+0x5c/0x64
.......

The reason is that the back chain of c00000017666fe30 (ret_from_kernel_thread) contains some invalid value, which might form a loop.

Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
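Not the actual fix, but a sketch of the loop guard the symptom calls for: when walking frames via the back chain, each successive stack pointer has to stay inside the thread's stack and move strictly towards its base, otherwise a corrupt frame (as in ret_from_kernel_thread above) can send the walker in circles until MAX_STACK_TRACE_ENTRIES overflows. The LR save slot offset used here is the usual ppc64 ABI convention and is an assumption.

#include <linux/sched.h>
#include <linux/thread_info.h>

static void example_walk_stack(struct task_struct *tsk, unsigned long sp)
{
	unsigned long low  = (unsigned long)task_stack_page(tsk);
	unsigned long high = low + THREAD_SIZE;

	while (sp >= low && sp < high - 32 && !(sp & 0xf)) {
		unsigned long next = ((unsigned long *)sp)[0]; /* back chain */
		unsigned long lr   = ((unsigned long *)sp)[2]; /* assumed LR save slot */

		/* ... record lr in the trace ... */
		(void)lr;

		/* Refuse back chains that don't move towards the stack base,
		 * so a corrupted frame cannot form a loop. */
		if (next <= sp)
			break;
		sp = next;
	}
}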
-
Committed by Benjamin Herrenschmidt

OPAL provides the firmware base/entry in registers at boot time for debugging purposes. We had a bug in the code trying to stash these into the appropriate kernel globals (a line of code was probably dropped by accident back when this was merged).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Julia Lawall

The function initialize_flash_pde_data is only called four times. All four calls are in the function rtas_flash_init, and on the failure of any of the calls, remove_flash_pde is called on the third argument of each of the calls. There is thus no need for initialize_flash_pde_data to call remove_flash_pde on the same argument. remove_flash_pde kfrees the data field of its argument, and does not clear that field, so this amounts to a possible double free.

A simplified version of the semantic match that finds this problem is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@r@
identifier f,free,a;
parameter list[n] ps;
type T;
expression e;
@@

f(ps,T a,...) {
  ... when any
      when != a = e
  if(...) { ... free(a); ... return ...; }
  ... when any
}

@@
identifier r.f,r.free;
expression x,a;
expression list[r.n] xs;
@@

* x = f(xs,a,...);
  if (...) { ... free(a); ... return ...; }
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Ellerman

The last user of udbg_read() was removed in 2005, in commit fca5dcd4 "Simplify and clean up the xmon terminal I/O". Given we haven't needed it for 7 years, we can probably drop it.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Tony Breeds

Since "Disintegrate asm/system.h for PowerPC" (ae3a197e), this has been failing when DEBUG is #defined.

Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Nathan Fontenot

Remove the pSeries_reconfig.h header file. At this point there is only one definition in the file, pSeries_coalesce_init(), which can be moved to rtas.h.

Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Nathan Fontenot

Rename the prom_*_property routines of the generic OF code to of_*_property. This brings them in line with the naming used by the rest of the OF code.

Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Acked-by: Geoff Levand <geoff@infradead.org>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Nathan Fontenot

This patch moves the notification chain for updates to the device tree from the powerpc/pseries code to the base OF code. This makes this functionality available to all architectures.

Additionally, the notification chain is updated to allow notifications for property add/remove/update. To make this work, a pointer to a new struct (of_prop_reconfig) is passed to the routines in the notification chain. The of_prop_reconfig structure contains a pointer to the node containing the property and a pointer to the property itself. In the case of property updates, the property pointer refers to the new property.

Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
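A hedged sketch of the shape described above; the field names and the registration helper are assumptions based on this description rather than the patch itself:

#include <linux/notifier.h>
#include <linux/of.h>
#include <linux/printk.h>

/* Mirror of the structure the patch introduces, for illustration only:
 * data handed to reconfig notifiers on property add/remove/update. */
struct of_prop_reconfig {
	struct device_node	*dn;	/* node owning the property */
	struct property		*prop;	/* on update: the new property */
};

static int example_reconfig_notify(struct notifier_block *nb,
				   unsigned long action, void *data)
{
	struct of_prop_reconfig *pr = data;

	pr_info("property %s changed on %s\n", pr->prop->name, pr->dn->full_name);
	return NOTIFY_OK;
}

static struct notifier_block example_nb = {
	.notifier_call = example_reconfig_notify,
};

/* Registration would go through the chain now living in the base OF code,
 * e.g. an of_reconfig_notifier_register(&example_nb)-style helper. */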
-