- 01 May 2016, 12 commits
-
-
By Aneesh Kumar K.V
PTE_RPN_SHIFT is actually page size dependent. Even though PowerISA 3.0 expects only the lower 12 bits to be zero, we will always find the pages to be PAGE_SHIFT aligned. In the hash config, this also allows us to use the additional 3 bits to track pte-specific information. We need to make sure we use these bits only for hash-specific pte flags. For both the 4K and 64K configs, the pte can now hold a 57-bit address. In order to keep things simple, drop PTE_RPN_SHIFT and PTE_RPN_SIZE and specify the 57-bit detail explicitly. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
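A minimal sketch of the resulting encoding, with the 57-bit limit spelled out explicitly (treat the macro spelling as illustrative of the upstream header):

    /*
     * The pte holds a 57-bit real address; the real page number is
     * everything between PAGE_SHIFT and bit 57. Bits above 57 stay
     * free for hash-specific software flags.
     */
    #define PTE_RPN_MASK    (((1UL << 57) - 1) & PAGE_MASK)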
-
By Aneesh Kumar K.V
_PAGE_PRIVILEGED means the page can be accessed only by the kernel. This is done to keep the pte bits similar to the PowerISA 3.0 Radix PTE format. User pages are now marked by clearing the _PAGE_PRIVILEGED bit. Previously we allowed the kernel to have a privileged page in the lower address range (USER_REGION). With this patch such access is denied. We also prevent kernel access to a non-privileged page in the higher address range (ie, REGION_ID != 0). Both of the above access scenarios should never happen. Cc: Arnd Bergmann <arnd@arndb.de> Cc: Jeremy Kerr <jk@ozlabs.org> Cc: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Acked-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
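With the bit's sense inverted, the user-page test becomes a check for the absence of the flag; a sketch of the book3s helper after this change:

    /* Sketch: a pte maps a user page iff _PAGE_PRIVILEGED is clear. */
    static inline bool pte_user(pte_t pte)
    {
        return !(pte_val(pte) & _PAGE_PRIVILEGED);
    }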
-
By Aneesh Kumar K.V
We have a common declaration in pte-common.h. Add a book3s-specific one and switch to pte_user() in callchain.c. In a subsequent patch we will switch _PAGE_USER to _PAGE_PRIVILEGED in the book3s version only. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Michael Ellerman
In a subsequent patch we want to add a second definition of pte_user(). Before we do that, make the signature clear, ie. it takes a pte_t and returns bool. We move it up inside the existing #ifndef __ASSEMBLY__ block, but otherwise it's a straight conversion. Convert the call in settlbcam(), which passes an unsigned long, to pass a pte_t. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
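The clarified signature, sketched under the pre-_PAGE_PRIVILEGED bit layout:

    /* Sketch: explicit pte_t-in, bool-out signature. */
    static inline bool pte_user(pte_t pte)
    {
        return !!(pte_val(pte) & _PAGE_USER);
    }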
-
By Aneesh Kumar K.V
Subpage protection used to depend on the _PAGE_USER bit to implement no-access mode. This patch switches that to use _PAGE_RWX. We now clear Read, Write and Execute access from the pte instead of clearing _PAGE_USER. This was done so that we can switch to _PAGE_PRIVILEGED in a later patch. subpage_protection() returns the pte bits that need to be cleared. Instead of updating the interface to handle no-access in a separate way, it appears simpler to clear RWX access to indicate no access. We still don't insert hash ptes for no access implied by !_PAGE_RWX. Hence we should not get a PROT_FAULT with this change. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Aneesh Kumar K.V
This splits the _PAGE_RW bit into _PAGE_READ and _PAGE_WRITE. It also removes the dependency on _PAGE_USER for implying read only. A few things to note here: read permission is implied by write and execute permission, hence we should always find _PAGE_READ set on a hash pte fault. We still can't switch PROT_NONE to !(_PAGE_RWX), since automatic NUMA balancing depends on marking a prot-none pte _PAGE_WRITE. (For more details look at b191f9b1 "mm: numa: preserve PTE write permissions across a NUMA hinting fault") Cc: Arnd Bergmann <arnd@arndb.de> Cc: Jeremy Kerr <jk@ozlabs.org> Cc: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Acked-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
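A sketch of the split bit layout (the values mirror the commit's book3s definitions, but treat them as illustrative):

    #define _PAGE_EXEC      0x00001 /* execute permission */
    #define _PAGE_WRITE     0x00002 /* write access allowed */
    #define _PAGE_READ      0x00004 /* read access allowed */
    /* Convenience combinations; write and execute imply read. */
    #define _PAGE_RW        (_PAGE_READ | _PAGE_WRITE)
    #define _PAGE_RWX       (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)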
-
By Michael Ellerman
We can avoid doing endian conversions by using pte_raw() in pxx_same(). The swap of the constant (_PAGE_HPTEFLAGS) should be done at compile time by the compiler. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
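Roughly, the comparison becomes the following (a sketch; the real change covers the pte/pmd/pgd variants alike):

    static inline int pte_same(pte_t pte_a, pte_t pte_b)
    {
        /* cpu_to_be64() of a constant folds away at compile time. */
        return ((pte_raw(pte_a) ^ pte_raw(pte_b)) &
                ~cpu_to_be64(_PAGE_HPTEFLAGS)) == 0;
    }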
-
By Aneesh Kumar K.V
Traditionally Power server machines have used the Hashed Page Table MMU mode. In this mode Linux manages its own tree of nested page tables, aka. "the Linux page tables", which are not used by the hardware directly, and software loads translations into the hash page table for use by the hardware. Power ISA 3.0 defines a new MMU mode, known as Radix Tree Translation, where the hardware can directly operate on the Linux page tables. However the hardware requires that the page tables be in big endian format. To accommodate this, switch the pgtable types to __be64 and add appropriate endian conversions. Because we will be supporting a single kernel binary that boots using either radix or hash mode, we always store the Linux page tables big endian, even in hash mode where they are not actually used by the hardware. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> [mpe: Fix sparse errors, flesh out change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
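A sketch of the accessor pair this implies, assuming the strict-types pte_t wrapper:

    typedef struct { __be64 pte; } pte_t;

    /* Raw big-endian value, for endian-neutral compares. */
    static inline __be64 pte_raw(pte_t pte)
    {
        return pte.pte;
    }

    /* CPU-endian value, for bit arithmetic. */
    static inline unsigned long pte_val(pte_t pte)
    {
        return be64_to_cpu(pte.pte);
    }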
-
By Michael Ellerman
We have five locations in the 64-bit hash MMU code that do a cmpxchg() of a PTE. Currently doing it inline is OK, but in a future patch we will be converting the PTEs to __be64 in some configs. In that case we will need casts at every cmpxchg() site in order to keep sparse happy. So move the logic into a helper; this is a reasonably nice cleanup on its own. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
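A hedged sketch of such a helper (the upstream name is pte_xchg(), if memory serves; treat the details as illustrative):

    static inline bool pte_xchg(pte_t *ptep, pte_t old, pte_t new)
    {
        unsigned long *p = (unsigned long *)ptep;

        /* cmpxchg() returns the previous value; equality means success. */
        return pte_val(old) == cmpxchg(p, pte_val(old), pte_val(new));
    }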
-
By Aneesh Kumar K.V
pmd_hugepage_update() is inside #ifdef CONFIG_TRANSPARENT_HUGEPAGE. THP can only be enabled if PPC_BOOK3S_64=y && PPC_64K_PAGES=y, aka. hash64. On hash64 we always define PTE_ATOMIC_UPDATES to 1, meaning the #ifdef in pmd_hugepage_update() is unnecessary, so drop it. That is also the only use of PTE_ATOMIC_UPDATES in any of the hash code, meaning we no longer need to #define it at all in the hash headers. Note it's still #defined and used in the nohash code. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Michael Ellerman
Testing done by Paul Mackerras has shown that with a modern compiler there is no negative effect on code generation from enabling STRICT_MM_TYPECHECKS. So remove the option, and always use the strict type definitions. Acked-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Aneesh Kumar K.V
The driver was requesting a writethrough mapping. But with those flags we will end up with an SAO mapping, because we now have memory coherence always enabled; ie, the existing mapping will end up with a WIMG value of 0b1110, which is Strong Access Order. Update this to use a cache-inhibited guarded mapping. Cc: Doug Ledford <dledford@redhat.com> Cc: Sean Hefty <sean.hefty@intel.com> Cc: Hal Rosenstock <hal.rosenstock@gmail.com> Cc: linux-rdma@vger.kernel.org Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 27 Apr 2016, 6 commits
-
-
By Madhavan Srinivasan
Minor cleanup patch to replace the raw event hex values in power8-pmu.c with #defines. Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Thiago Jung Bauermann
In the ppc64 big endian ABI, function symbols point to function descriptors. The symbols which point to the function entry points have a dot in front of the function name. Consequently, when the ftrace filter mechanism searches for the symbol corresponding to an entry point address, it gets the dot symbol. As a result, ftrace filter users have to be aware of this ABI detail on ppc64 and prepend a dot to the function name when setting the filter. The perf probe command insulates the user from this by ignoring the dot in front of the symbol name when matching function names to symbols, but the sysfs interface does not. This patch makes the ftrace filter mechanism do the same when searching symbols. It fixes the following failure in ftracetest's kprobe_ftrace.tc: .../kprobe_ftrace.tc: line 9: echo: write error: Invalid argument That failure is on this line of kprobe_ftrace.tc: echo _do_fork > set_ftrace_filter This is because there's no _do_fork entry in the functions list: # cat available_filter_functions | grep _do_fork ._do_fork This change introduces no regressions in the perf and ftracetest testsuite results. Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
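The arch hook this implies is tiny; roughly (a sketch of the idea — when the filter pattern has no leading dot, match against the symbol name with its dot stripped):

    static inline const char *
    arch_ftrace_match_adjust(const char *str, const char *search)
    {
        /* "._do_fork" should match a filter of "_do_fork". */
        if (str[0] == '.' && search[0] != '.')
            return str + 1;
        return str;
    }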
-
By Daniel Axtens
Sparse doesn't seem to be passing -maltivec around properly, leading to lots of errors: .../include/altivec.h:34:2: error: Use the "-maltivec" flag to enable PowerPC AltiVec support arch/powerpc/lib/xor_vmx.c:27:16: error: Expected ; at end of declaration arch/powerpc/lib/xor_vmx.c:27:16: error: got signed arch/powerpc/lib/xor_vmx.c:60:9: error: No right hand side of '*'-expression arch/powerpc/lib/xor_vmx.c:60:9: error: Expected ; at end of statement arch/powerpc/lib/xor_vmx.c:60:9: error: got v1_in ... arch/powerpc/lib/xor_vmx.c:87:9: error: too many errors Only include the altivec.h header for non-__CHECKER__ builds. For builds with __CHECKER__, make up some stubs instead, as suggested by Balbir. (The vector size of 16 is arbitrary.) Suggested-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Daniel Axtens <dja@axtens.net> Tested-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
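The shape of the workaround, sketched with an assumed stub type (the real stub in xor_vmx.c may be spelled differently):

    #ifndef __CHECKER__
    #include <altivec.h>
    typedef vector signed char unative_t;
    #else
    /*
     * Sparse can't parse <altivec.h>, so give it any 16-byte type that
     * type-checks; this code is only analysed, never run, by sparse.
     */
    typedef struct { signed char c[16]; } unative_t;
    #endif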
-
By Chris Smart
The copy paste facility introduced in POWER9 provides an optimised mechanism for a userspace application to copy a cacheline. This is provided by a pair of instructions, copy and paste, while a third, cp_abort (copy paste abort), cleans up the state in case of a failure. The copy instruction will read a 128-byte cacheline and store it in an internal buffer. The subsequent paste instruction will store this internal buffer to memory and set a CR field if the paste succeeds. Since the state of the copy paste buffer is internal (and not architecturally visible), in the unlikely event of a context switch the state cannot be stored and the paste should therefore fail. The cp_abort instruction exists to fail and clean up any such interrupted copy paste sequence, and is to be called by the kernel as part of the context switch. Doing so prevents data from a preceding copy in one process leaking into the paste of another. This code enables use of the cp_abort instruction if a supported processor is detected. NOTE: this is for userspace only, not in kernel, and does not deal with KVM guests. Patch created with much assistance from Michael Neuling <mikey@neuling.org> Signed-off-by: Chris Smart <chris@distroguy.com> Reviewed-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
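Conceptually the hook in the context-switch path is tiny; something along these lines (feature-test and opcode macro names follow the POWER9/ISA 3.0 convention, but treat this as a sketch):

    /* In the context-switch path: kill any in-flight copy-paste so the
     * buffered cacheline can't leak into the next process's paste. */
    if (cpu_has_feature(CPU_FTR_ARCH_300))
        asm volatile(PPC_CP_ABORT);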
-
By Andrew Donnellan
mpic_init_sys() currently doesn't check whether subsys_system_register() succeeded or not. Check the return code of subsys_system_register() and clean up if there's an error. Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
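The shape of the fix, sketched (error paths and helper names are illustrative):

    static int __init mpic_init_sys(void)
    {
        int rc;

        rc = subsys_system_register(&mpic_subsys, NULL);
        if (rc)
            return rc;  /* previously the return code was ignored */

        /* ... register per-mpic devices, calling bus_unregister()
         *     on &mpic_subsys if any later step fails ... */
        return 0;
    }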
-
By Andrew Donnellan
Found by smatch. Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Acked-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 26 Apr 2016, 1 commit
-
-
By Aneesh Kumar K.V
The current code sets _PAGE_USER in the access flags for any fault address, because the ~ operation is true for every address we take a fault on. But setting _PAGE_USER also means that the fault will be handled only if the page table has _PAGE_USER set; hence there is no security hole with the current code. For a user space access, the change in this patch doesn't really have an impact, because (!ctx->kernel) is true and we take the if condition anyway. But a fault taken by a kernel context on an address in the kernel range will result in a fault loop, because we will not insert the hash pte due to the access and pte permission mismatch. This patch fixes the above issue. Fixes: f204e0b8 ("cxl: Driver code for powernv PCIe based cards for userspace access") Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Acked-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
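A hedged sketch of the bug and the fix (variable names approximate):

    /* Before: ~(dar & (1ULL << 63)) is non-zero for *every* address,
     * so _PAGE_USER was requested even for kernel-range faults. */
    if ((!ctx->kernel) || ~(dar & (1ULL << 63)))
        access |= _PAGE_USER;

    /* After: only genuine user-region addresses get _PAGE_USER. */
    if (!ctx->kernel || (REGION_ID(dar) == USER_REGION_ID))
        access |= _PAGE_USER;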
-
- 22 Apr 2016, 2 commits
-
-
By Frederic Barrat
PSL designers recommend a larger value for the mmio hang pulse: 256 us instead of 1 us. The CAIA architecture states that it needs to be smaller than 1/2 of the RTOS timeout set in the PHB for outbound non-posted transactions, which is still (easily) the case here. Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Acked-by: Ian Munsie <imunsie@au1.ibm.com> Tested-by: Frank Haverkamp <haver@linux.vnet.ibm.com> Tested-by: Manoj Kumar <manoj@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Frederic Barrat
Failure to synchronize the PSL timebase currently prevents the initialization of the cxl card, thus rendering the card useless. This is too extreme for a feature which is rarely used, if at all; no hardware AFU or software currently uses the PSL timebase. This patch still tries to synchronize the PSL timebase when the card is initialized, but ignores the error if it can't. Instead, it reports a status via /sys. Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Acked-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 21 Apr 2016, 6 commits
-
-
By Madhavan Srinivasan
Add a sample_reg_mask array with the pt_regs registers. This is needed for printing the supported regs (the -I? option). Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
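On the perf tools side this is a table built with the SMPL_REG helper; a sketch:

    /* Sketch (tools side): name/bit pairs for each supported register. */
    const struct sample_reg sample_reg_masks[] = {
        SMPL_REG(r0, PERF_REG_POWERPC_R0),
        SMPL_REG(r1, PERF_REG_POWERPC_R1),
        /* ... r2-r31, nip, msr, ctr, link, xer, ccr, ... */
        SMPL_REG_END
    };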
-
By Anju T
Map the ID values to the corresponding register names. These names are then displayed when the user runs perf record with the -I option, followed by perf report/script with the -D option. To test this patchset, e.g.: $ perf record -I ls # record machine state at interrupt $ perf script -D # read the perf.data file Sample output obtained with this patch looks as follows: 496768515470 0x1988 [0x188]: PERF_RECORD_SAMPLE(IP, 0x1): 4522/4522: 0xc0000000001e538c period: 1 addr: 0 ... intr regs: mask 0x7ffffffffff ABI 64-bit .... r0 0xc0000000001e5e34 .... r1 0xc000000fe733f9a0 .... r2 0xc000000001523100 .... r3 0xc000000ffaadeb60 .... r4 0xc000000003456800 .... r5 0x73a9b5e000 .... r6 0x1e000000 .... r7 0x0 .... r8 0x0 .... r9 0x0 .... r10 0x1 .... r11 0x0 .... r12 0x24022822 .... r13 0xc00000000feec180 .... r14 0x0 .... r15 0xc000001e4be18800 .... r16 0x0 .... r17 0xc000000ffaac5000 .... r18 0xc000000fe733f8a0 .... r19 0xc000000001523100 .... r20 0xc00000000009fd1c .... r21 0xc000000fcaa69000 .... r22 0xc0000000001e4968 .... r23 0xc000000001523100 .... r24 0xc000000fe733f850 .... r25 0xc000000fcaa69000 .... r26 0xc000000003b8fcf0 .... r27 0xfffffffffffffead .... r28 0x0 .... r29 0xc000000fcaa69000 .... r30 0x1 .... r31 0x0 .... nip 0xc0000000001dd320 .... msr 0x9000000000009032 .... orig_r3 0xc0000000001e538c .... ctr 0xc00000000009d550 .... link 0xc0000000001e5e34 .... xer 0x0 .... ccr 0x84022882 .... softe 0x0 .... trap 0xf01 .... dar 0x0 .... dsisr 0xf00040060000004 ... thread: :4522:4522 ...... dso: /root/.debug/.build-id/b0/ef11b1a1629e62ac9de75199117ee5ef9469e9 :4522 4522 496.768515: 1 cycles: c0000000001e538c .perf_event_context_sched_in (/boot/vmlinux) Signed-off-by: Anju T <anju@linux.vnet.ibm.com> Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Anju T
The perf infrastructure uses a bit mask to find out the valid registers to display. Define a register mask for the supported registers defined in uapi/asm/perf_regs.h. The bit positions also correspond to the register IDs that the perf infrastructure uses to fetch the register values. CONFIG_HAVE_PERF_REGS enables sampling of the interrupted machine state. Signed-off-by: Anju T <anju@linux.vnet.ibm.com> [mpe: Add license, use CONFIG_PPC64, fix 32-bit build] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
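Because the enum IDs are contiguous, the mask is a one-liner and the value fetch reduces to an offset lookup; a sketch (helper and table names assumed):

    /* Every register ID below PERF_REG_POWERPC_MAX is supported. */
    #define PERF_REGS_MASK  ((1ULL << PERF_REG_POWERPC_MAX) - 1)

    u64 perf_reg_value(struct pt_regs *regs, int idx)
    {
        if (WARN_ON_ONCE(idx >= PERF_REG_POWERPC_MAX))
            return 0;
        /* pt_regs_offset[] maps an ID to its offset in struct pt_regs. */
        return regs_get_register(regs, pt_regs_offset[idx]);
    }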
-
By Anju T
The enum definition assigns an 'id' to each register in arch/powerpc's "struct pt_regs". The order of the values in the enum definition is based on the order of the members in pt_regs. Signed-off-by: Anju T <anju@linux.vnet.ibm.com> [mpe: Rename LNK to LINK, use _UAPI_ASM for include guards] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
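An abridged sketch of the uapi enum:

    /* Excerpt; IDs follow the member order of struct pt_regs. */
    enum perf_event_powerpc_regs {
        PERF_REG_POWERPC_R0,
        PERF_REG_POWERPC_R1,
        /* ... r2 through r31 ... */
        PERF_REG_POWERPC_NIP,
        PERF_REG_POWERPC_MSR,
        /* ... orig_r3, ctr, link, xer, ccr, softe, trap, dar, dsisr ... */
        PERF_REG_POWERPC_MAX,
    };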
-
By Hari Bathini
The __end_handlers marker was intended to mark the end of the code that gets called from exception prologs. But it hasn't kept pace with code changes. Case in point: slb_miss_realmode is called from exception prolog code but isn't below the __end_handlers marker. So the __end_handlers marker is as good as a comment, but could be misleading when it isn't in sync with the code, as is the case now. Let us avoid this confusion by having a better comment and removing the __end_handlers marker altogether. Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Hari Bathini
Some of the interrupt vectors on 64-bit POWER server processors are only 32 bytes long (8 instructions), which is not enough for the full first-level interrupt handler. For these we need to branch to an out-of-line (OOL) handler. But when we are running a relocatable kernel, interrupt vectors up to the __end_interrupts marker are copied down to real address 0x100. So branching to labels (ie. OOL handlers) outside this section must be handled differently for a relocatable kernel (see LOAD_HANDLER()), which needs at least 4 instructions. However, branching from an interrupt vector means that we corrupt the CFAR (come-from address register) on POWER7 and later processors, as mentioned in commit 1707dd16. So EXCEPTION_PROLOG_0 (6 instructions), which contains the part up to the point where the CFAR is saved in the PACA, should be part of the short interrupt vector before we branch out to the OOL handler. But as mentioned already, there are interrupt vectors on 64-bit POWER server processors that are only 32 bytes long (like vectors 0x4f00, 0x4f20, etc.), which cannot accommodate both of the above at the same time owing to the space constraint. Currently, in these interrupt vectors, we simply branch out to the OOL handlers without using LOAD_HANDLER(), which leaves us vulnerable when running a relocatable kernel (eg. the kdump case). While this has been the case for some time now and kdump is used widely, we were fortunate not to see any problems so far, for three reasons: 1. In almost all cases, the production kernel (relocatable) is used for kdump as well, which means that the crashed kernel's OOL handler would be at the same place where we end up branching to from the short interrupt vector of the kdump kernel. 2. Also, the OOL handler was unlikely to be the reason for the crash in almost all kdump scenarios, which meant we had a sane OOL handler from the crashed kernel that we branched to. 3. On most 64-bit POWER server processors, the page size is large enough that marking interrupt vector code as executable (see commit 429d2e83) also marks the OOL handler code from the crashed kernel, which sits right below the interrupt vector code from the kdump kernel, as executable. Let us fix this by moving the __end_interrupts marker down past the OOL handlers, to make sure that we also copy the OOL handlers to real address 0x100 when running a relocatable kernel. This fix has been tested successfully in the kdump scenario, on an LPAR with 4K page size, using different default/production and kdump kernels. Also tested by manually corrupting the OOL handlers in the first kernel and then kdump'ing, and then causing the OOL handlers to fire - mpe. Fixes: c1fb6816 ("powerpc: Add relocation on exception vector handlers") Cc: stable@vger.kernel.org Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com> Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 18 Apr 2016, 1 commit
-
-
By Michael Ellerman
Merge the support for live patching on ppc64le using mprofile-kernel. This branch has also been merged into the livepatching tree for v4.7.
-
- 14 Apr 2016, 5 commits
-
-
By Michael Ellerman
Add the kconfig logic & assembly support for handling live patched functions. This depends on DYNAMIC_FTRACE_WITH_REGS, which in turn depends on the new -mprofile-kernel ftrace ABI, which is currently only supported on ppc64le. Live patching is handled by a special ftrace handler. This means it runs from ftrace_caller(). The live patch handler modifies the NIP so as to redirect the return from ftrace_caller() to the new patched function. However there is one particularly tricky case we need to handle. If a function A calls another function B, and it is known at link time that they share the same TOC, then A will not save or restore its TOC, and will call the local entry point of B. When we live patch B, we replace it with a new function C, which may not have the same TOC as A. At live patch time it's too late to modify A to do the TOC save/restore, so the live patching code must interpose itself between A and C, and do the TOC save/restore that A omitted. An additional complication is that the livepatch code cannot create a stack frame in order to save the TOC. That is because if C takes > 8 arguments, or is varargs, A will have written the arguments for C in A's stack frame. To solve this, we introduce a "livepatch stack" which grows upward from the base of the regular stack, and is used to store the TOC & LR when calling a live patched function. When the patched function returns, we retrieve the real LR & TOC from the livepatch stack, restore them, and pop the livepatch "stack frame". Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Torsten Duwe <duwe@suse.de> Reviewed-by: Balbir Singh <bsingharora@gmail.com>
-
By Michael Ellerman
In order to support live patching we need to maintain an alternate stack of TOC & LR values. We use the base of the stack for this, and store the "live patch stack pointer" in struct thread_info. Unlike the other fields of thread_info, we cannot statically initialise that value, so it must be done at run time. This patch just adds the code to support that; it is not enabled until the next patch, which actually adds live patch support. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Balbir Singh <bsingharora@gmail.com>
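The run-time initialisation amounts to pointing the livepatch stack pointer just past the thread_info at the base of the stack; roughly:

    #ifdef CONFIG_LIVEPATCH
    static inline void klp_init_thread_info(struct thread_info *ti)
    {
        /* + 1 to skip over the STACK_END_MAGIC word */
        ti->livepatch_sp = (unsigned long *)(ti + 1) + 1;
    }
    #else
    static inline void klp_init_thread_info(struct thread_info *ti) { }
    #endif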
-
By Michael Ellerman
Add the powerpc-specific livepatch definitions. In particular we provide a non-default implementation of klp_get_ftrace_location(). This is required because the location of the mcount call is not constant when using -mprofile-kernel (which we always do for live patching). Signed-off-by: Torsten Duwe <duwe@suse.de> Signed-off-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
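With -mprofile-kernel the mcount call sits within the first few instructions of the function, so the override just widens the search; a sketch:

    static inline unsigned long klp_get_ftrace_location(unsigned long faddr)
    {
        /*
         * Live patching works only with -mprofile-kernel on powerpc,
         * where the ftrace location is always within the first 16 bytes.
         */
        return ftrace_location_range(faddr, faddr + 16);
    }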
-
By Michael Ellerman
When livepatch tries to patch a function it takes the function address and asks ftrace to install the livepatch handler at that location. ftrace will look for an mcount call site at that exact address. On powerpc the mcount location is not the first instruction of the function, and in fact it's not at a constant offset from the start of the function. To accommodate this, add a hook which arch code can override to customise the behaviour. Signed-off-by: Torsten Duwe <duwe@suse.de> Signed-off-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
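The generic fallback is the identity mapping; a sketch of the hook pattern (the exact override mechanism is an assumption here):

    #ifndef klp_get_ftrace_location
    static unsigned long klp_get_ftrace_location(unsigned long faddr)
    {
        /* By default, assume the mcount site is at the function address. */
        return faddr;
    }
    #endif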
-
By Michael Ellerman
In order to support live patching on powerpc we would like to call ftrace_location_range(), so make it global. Signed-off-by: Torsten Duwe <duwe@suse.de> Signed-off-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 12 Apr 2016, 5 commits
-
-
By Markus Elfring
The kfree() function tests whether its argument is NULL and then returns immediately. Thus the test around the call is not needed. This issue was detected by using the Coccinelle software. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Acked-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
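The pattern being removed, shown with an illustrative field name:

    /* Before: redundant NULL check. */
    if (ctx->irq_bitmap)
        kfree(ctx->irq_bitmap);

    /* After: kfree(NULL) is already a no-op. */
    kfree(ctx->irq_bitmap);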
-
By Aaro Koskinen
Fix the bogus memsets pointed out by sparse: linux-v4.3/drivers/macintosh/rack-meter.c:157:15: warning: memset with byte count of 0 linux-v4.3/drivers/macintosh/rack-meter.c:158:15: warning: memset with byte count of 0 The "&" is probably a mistyped "*"; use ARRAY_SIZE to make this safer. Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
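The suspected typo and its repair, sketched with assumed buffer names:

    /* Before: '&' where '*' was meant, so the byte count computes to 0. */
    memset(rdu->buf1, 0, SAMPLE_COUNT & sizeof(u32));

    /* After: ARRAY_SIZE avoids hand-computed byte counts entirely. */
    memset(rdu->buf1, 0, ARRAY_SIZE(rdu->buf1) * sizeof(rdu->buf1[0]));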
-
By Aaro Koskinen
Limit idle ticks to total ticks. This prevents the annoying fully ON/OFF blinking state of the rackmeter LEDs that happens on fully idle G5 Xserve systems. Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
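The fix is essentially a clamp; a sketch:

    /* Idle time can slightly exceed wall time due to sampling skew;
     * clamp it so the computed load never goes negative. */
    if (idle_ticks > total_ticks)
        idle_ticks = total_ticks;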
-
By Daniel Axtens
Sometimes when sparse warns about undefined symbols, it isn't because they should have 'static' added; it's because they're overriding __weak symbols defined elsewhere, and the header has been missed. Fix a few of them by adding appropriate headers. Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
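The pattern, illustrated with hypothetical names:

    /* header.h: the shared prototype. */
    void arch_setup(void);

    /* generic.c: the overridable default. */
    void __weak arch_setup(void) { }

    /* arch.c: without including header.h, sparse flags this definition
     * as an undeclared symbol instead of a legitimate override. */
    #include "header.h"
    void arch_setup(void) { /* platform-specific setup */ }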
-
By Daniel Axtens
As sparse suggests, these should be made static. Signed-off-by: Daniel Axtens <dja@axtens.net> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Reviewed-by: Stewart Smith <stewart@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 11 Apr 2016, 2 commits
-
-
By Philippe Bergheaud
The POWER8NVL chip has two CAPI ports. Configure the PSL to route data to the port corresponding to the CAPP unit. Signed-off-by: Philippe Bergheaud <felix@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Philippe Bergheaud
Signed-off-by: Philippe Bergheaud <felix@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-