- 03 June 2018, 31 commits
-
-
By Aneesh Kumar K.V
In a later patch we will update them, which requires them to be moved to pgtable-radix.c. Keeping the function in radix.h results in the build errors below. ./arch/powerpc/include/asm/book3s/64/radix.h: In function ‘radix__ptep_set_access_flags’: ./arch/powerpc/include/asm/book3s/64/radix.h:196:28: error: dereferencing pointer to incomplete type ‘struct vm_area_struct’ struct mm_struct *mm = vma->vm_mm; ^~ ./arch/powerpc/include/asm/book3s/64/radix.h:204:6: error: implicit declaration of function ‘atomic_read’; did you mean ‘__atomic_load’? [-Werror=implicit-function-declaration] atomic_read(&mm->context.copros) > 0) { ^~~~~~~~~~~ __atomic_load ./arch/powerpc/include/asm/book3s/64/radix.h:204:21: error: dereferencing pointer to incomplete type ‘struct mm_struct’ atomic_read(&mm->context.copros) > 0) { Instead of fixing the header dependencies, we move the function to pgtable-radix.c. The function is also now too large to be a static inline. Doing the move in a separate patch helps review. No functional change in this patch, only code movement. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Aneesh Kumar K.V
In a later patch, we want to update __ptep_set_access_flags to take a page size argument. This makes ptep_set_access_flags work only with mmu_virtual_psize. To simplify the code, make huge_ptep_set_access_flags call __ptep_set_access_flags directly so that we can compute the hugetlb page size in the hugetlb function. Now that ptep_set_access_flags won't be called for hugetlb, remove the is_vm_hugetlb_page() check and add the assert of the pte lock unconditionally. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Alastair D'Silva
Signed-off-by: Alastair D'Silva <alastair@d-silva.org> Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Alastair D'Silva
In order for a userspace AFU driver to call the POWER9-specific OCXL_IOCTL_ENABLE_P9_WAIT, it needs to verify that it can actually make that call. Signed-off-by: Alastair D'Silva <alastair@d-silva.org> Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Alastair D'Silva
In order to successfully issue as_notify, an AFU needs to know the TID to notify, which in turn means that this information should be available in userspace so it can be communicated to the AFU. Signed-off-by: Alastair D'Silva <alastair@d-silva.org> Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Alastair D'Silva
The function removes the process element from the NPU cache. Signed-off-by: Alastair D'Silva <alastair@d-silva.org> Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Alastair D'Silva
The current implementation of TID allocation, using a global IDR, may result in an errant process starving the system of available TIDs. Instead, use task_pid_nr(), as mentioned by the original author. The scenario described that prevented its use is not applicable, as set_thread_tidr can only be called after the task struct has been populated. In the unlikely event that two threads share the TID and are waiting, all potential outcomes have been determined safe. Signed-off-by: Alastair D'Silva <alastair@d-silva.org> Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
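A minimal sketch of the allocation scheme described above, assuming the thread.tidr field and the CPU_FTR_P9_TIDR feature bit used elsewhere in this series; it illustrates the idea rather than reproducing the exact upstream diff:

int set_thread_tidr(struct task_struct *t)
{
	if (!cpu_has_feature(CPU_FTR_P9_TIDR))
		return -EINVAL;

	if (t != current)
		return -EINVAL;

	if (t->thread.tidr)
		return 0;

	/* No global IDR: derive the TID from the pid, which an errant
	 * process cannot starve. */
	t->thread.tidr = (u16)task_pid_nr(t);
	mtspr(SPRN_TIDR, t->thread.tidr);

	return 0;
}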
-
By Alastair D'Silva
Switch the use of TIDR to depend on its CPU feature, rather than assuming it is available based on the architecture. Signed-off-by: Alastair D'Silva <alastair@d-silva.org> Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Alastair D'Silva
This patch adds a CPU feature bit to show whether the CPU has the TIDR register available, enabling as_notify/wait in userspace. Signed-off-by: Alastair D'Silva <alastair@d-silva.org> Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Nicholas Piggin
Using irq_work for processing OPAL event interrupts is not necessary. irq_work is typically used to schedule work from NMI context; a softirq may be more appropriate. However, OPAL events are not particularly performance or latency critical, so they can all be invoked by kopald. This patch removes the irq_work queueing and instead wakes up kopald when there is an event to be processed. kopald processes interrupts individually, enabling irqs and calling cond_resched between each one to minimise latencies. Event handlers themselves should still use threaded handlers, workqueues, etc. as necessary to avoid high interrupts-off latencies within any single interrupt. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Nicholas Piggin
Although it is often possible to recover a CPU that was interrupted from OPAL with a system reset NMI, it's undesirable to interrupt them for a few reasons. Firstly, because dump/debug code itself needs to call firmware, it could hang on a lock or possibly corrupt a per-cpu data structure if it or another CPU was interrupted from OPAL. Secondly, the kexec crash dump code will not return from interrupt to unwind the OPAL call. Call OPAL_QUIESCE with QUIESCE_HOLD before sending an NMI IPI to another CPU, which waits for it to leave firmware (or times out), to avoid this problem in normal conditions. Firmware bugs may still result in a timeout and interrupting OPAL, but that is the best option (it stops the CPU, and possibly allows firmware to be debugged). Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
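A hedged sketch of the calling pattern described above, assuming the OPAL_QUIESCE wrappers (opal_quiesce(), QUIESCE_HOLD, QUIESCE_RESUME) introduced for this purpose; the real call sites live in the NMI IPI and crash paths and may differ in detail:

	bool quiesced = false;

	/* Ask firmware to hold new entries and wait for CPUs to leave OPAL
	 * (or time out) before interrupting them with a system reset NMI. */
	if (firmware_has_feature(FW_FEATURE_OPAL))
		quiesced = (opal_quiesce(QUIESCE_HOLD, -1) == OPAL_SUCCESS);

	/* ... send the NMI IPI and collect register state here ... */

	if (quiesced)
		opal_quiesce(QUIESCE_RESUME, -1);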
-
By Nicholas Piggin
When the soft enabled flag was changed to a soft disable mask, the xmon and register dump code was not updated to reflect that, which is confusing ('SOFTE: 1' previously meant interrupts were soft enabled; now it means the opposite, that the general interrupt type has been disabled). Fix this by using the name irqmask, and print it in hex. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Acked-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Nicholas Piggin
When soft enabled was changed to the irq disabled mask, this test missed being converted (although the equivalent book3s test was converted). The PMU drivers consider it an NMI when they take a PMI while general interrupts are disabled. This change restores that behaviour. Fixes: 01417c6c ("powerpc/64: Change soft_enabled from flag to bitmask") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Nicholas Piggin
These are not local timer interrupts but IPIs. It's good to be able to see how timer offloading is behaving, so split these out into their own category. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Nicholas Piggin
Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Nicholas Piggin
Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Nicholas Piggin
Large decrementers (e.g., POWER9) can take a very long time to wrap, so when the timer interrupt handler sets the decrementer to max so as to avoid taking another decrementer interrupt when hard enabling interrupts before running timers, it effectively disables the soft NMI coverage for timer interrupts. Fix this by using the traditional 31-bit value instead, which wraps after a few seconds. The masked interrupt code does the same thing, and in normal operation neither of these paths would ever wrap even the 31-bit value. Note: the SMP watchdog should catch timer interrupt lockups, but it is preferable for the local soft-NMI to catch them, mainly to avoid the IPI. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Nicholas Piggin
The broadcast tick recipient can call tick_receive_broadcast rather than re-running the full timer interrupt. It does not have to check for the next event time, because the sender already determined the timer has expired. It does not have to test irq_work_pending, because that's a direct decrementer interrupt and does not go through the clock events subsystem. And it does not have to read PURR, because that was removed with the previous patch. This results in no code size change, but both the decrementer and broadcast path lengths are reduced. Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
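A hedged sketch of the receive side, assuming the existing tick_broadcast_ipi_handler() entry point in the powerpc timer code; the upstream handler also keeps the per-CPU next-event bookkeeping around this call:

void tick_broadcast_ipi_handler(void)
{
	/* The sender already determined the timer has expired; hand the
	 * event straight to the clockevents core instead of re-running
	 * the full decrementer interrupt. */
	tick_receive_broadcast();
}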
-
By Nicholas Piggin
For SPLPAR, lparcfg provides a sum of PURR registers for all CPUs. Currently this is done by reading PURR in the context switch and timer interrupt paths, and storing that into a per-CPU variable. These are summed to provide the value. This does not work with all timer schemes (e.g., NO_HZ_FULL), and it is sub-optimal for performance because it reads the PURR register on every context switch, although that has been difficult to distinguish from noise in the context_switch microbenchmark. This patch implements the sum by calling a function on each CPU, reading and adding the PURR value of each CPU. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
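A minimal sketch of the per-CPU summation described above (the helper names are illustrative, not the exact upstream symbols):

static void cpu_get_purr(void *arg)
{
	atomic64_t *sum = arg;

	atomic64_add(mfspr(SPRN_PURR), sum);
}

static unsigned long get_purr_sum(void)
{
	atomic64_t purr = ATOMIC64_INIT(0);

	/* Run on every online CPU and wait for completion, so no per-CPU
	 * accounting is needed in the context switch or timer paths. */
	on_each_cpu(cpu_get_purr, &purr, 1);

	return atomic64_read(&purr);
}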
-
By Nicholas Piggin
These fields are only written to. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Nicholas Piggin
The Book3S minimum supported ISA version now requires mtmsrd L=1. This instruction does not require bits other than RI and EE to be supplied, so __hard_irq_enable() and __hard_irq_disable() do not have to read the kernel_msr from the paca. Interrupt entry code already relies on L=1 support. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
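Concretely, the hard enable/disable helpers can then be reduced to something like the following sketch, under the assumption stated above that mtmsrd L=1 is always available:

/* mtmsrd L=1 only updates EE and RI, so the full kernel MSR value from
 * the paca is no longer needed here. */
#define __hard_irq_enable()	__mtmsrd(MSR_EE | MSR_RI, 1)
#define __hard_irq_disable()	__mtmsrd(MSR_RI, 1)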
-
By Nicholas Piggin
This check does not catch IRQ soft mask bugs, but this option is slightly more suitable than TRACE_IRQFLAGS. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Nicholas Piggin
irq_work_raise should not cause a decrementer exception unless it is called from NMI context. Doing so often just results in an immediate masked decrementer interrupt: <...>-550 90d... 4us : update_curr_rt <-dequeue_task_rt <...>-550 90d... 5us : dbs_update_util_handler <-update_curr_rt <...>-550 90d... 6us : arch_irq_work_raise <-irq_work_queue <...>-550 90d... 7us : soft_nmi_interrupt <-soft_nmi_common <...>-550 90d... 7us : printk_nmi_enter <-soft_nmi_interrupt <...>-550 90d.Z. 8us : rcu_nmi_enter <-soft_nmi_interrupt <...>-550 90d.Z. 9us : rcu_nmi_exit <-soft_nmi_interrupt <...>-550 90d... 9us : printk_nmi_exit <-soft_nmi_interrupt <...>-550 90d... 10us : cpuacct_charge <-update_curr_rt The soft_nmi_interrupt here is the call into the watchdog, due to the decrementer interrupt firing with irqs soft-disabled. This is harmless, but sub-optimal. When it's not called from NMI context or with interrupts enabled, mark the decrementer pending in the irq_happened mask directly, rather than having the masked decrementer interrupt handler do it. This will be replayed at the next local_irq_enable. See the comment for details. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
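A hedged sketch of the resulting logic in arch_irq_work_raise() on 64-bit Book3S; the exact condition and hard-disable ordering in the upstream change may differ slightly:

void arch_irq_work_raise(void)
{
	preempt_disable();
	set_irq_work_pending_flag();

	if (IS_ENABLED(CONFIG_BOOKE) || !irqs_disabled() || in_nmi()) {
		/* NMI context, or interrupts enabled: take the real
		 * decrementer exception so the work runs promptly. */
		set_dec(1);
	} else {
		/* Interrupts are soft-disabled: mark the decrementer as
		 * pending so it is replayed at the next local_irq_enable(),
		 * instead of bouncing through the masked handler. */
		hard_irq_disable();
		local_paca->irq_happened |= PACA_IRQ_DEC;
	}

	preempt_enable();
}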
-
By Alexey Kardashevskiy
When IODA2 creates a PE, it creates an IOMMU table with it_ops::free set to pnv_ioda2_table_free(), which calls pnv_pci_ioda2_table_free_pages(). Since iommu_tce_table_put() calls it_ops::free when the last reference to the table is released, an explicit call to pnv_pci_ioda2_table_free_pages() is not needed, so let's remove it. This should fix a double free in the case of PCI hotplug, as pnv_pci_ioda2_table_free_pages() resets neither iommu_table::it_base nor ::it_size. This was not exposed by SRIOV as it uses a different code path via pnv_pcibios_sriov_disable(). IODA1 does not initialize it_ops::free so it does not have this issue. Fixes: c5f7700b ("powerpc/powernv: Dynamically release PE") Cc: stable@vger.kernel.org # v4.8+ Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Yisheng Xie
match_string() returns the index of the array entry that matches a given string, and can be used instead of the open-coded variant. Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
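For illustration, the conversion looks roughly like this (the table and function names are made up for the example):

static const char * const state_names[] = { "off", "standby", "running" };

static int state_index(const char *str)
{
	/* Instead of an open-coded loop over state_names[], match_string()
	 * returns the matching index, or a negative errno if not found. */
	return match_string(state_names, ARRAY_SIZE(state_names), str);
}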
-
By Christophe Leroy
GCC 8.1 emits warnings such as the following. As arch/powerpc code is built with -Werror, this breaks the build with GCC 8.1. In file included from arch/powerpc/kernel/pci_64.c:23: ./include/linux/syscalls.h:233:18: error: 'sys_pciconfig_iobase' alias between functions of incompatible types 'long int(long int, long unsigned int, long unsigned int)' and 'long int(long int, long int, long int)' [-Werror=attribute-alias] asmlinkage long sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)) \ ^~~ ./include/linux/syscalls.h:222:2: note: in expansion of macro '__SYSCALL_DEFINEx' __SYSCALL_DEFINEx(x, sname, __VA_ARGS__) This patch inhibits those warnings. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [mpe: Trim change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Christophe Leroy
GCC 8.1 warns about possible string truncation: arch/powerpc/kernel/nvram_64.c:1042:2: error: 'strncpy' specified bound 12 equals destination size [-Werror=stringop-truncation] strncpy(new_part->header.name, name, 12); arch/powerpc/platforms/ps3/repository.c:106:2: error: 'strncpy' output truncated before terminating nul copying 8 bytes from a string of the same length [-Werror=stringop-truncation] strncpy((char *)&n, text, 8); Fix it by using memcpy(). To make that safe we need to ensure the destination is pre-zeroed, so use kzalloc() in the nvram code and initialise the u64 to zero in the ps3 code. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [mpe: Use kzalloc() in the nvram code, flesh out change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
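A hedged sketch of the resulting pattern at the two call sites (field and variable names simplified):

/* nvram: kzalloc() pre-zeroes the partition header, so copying only the
 * string bytes leaves the 12-byte name field NUL-padded. */
new_part = kzalloc(sizeof(*new_part), GFP_KERNEL);
if (!new_part)
	return -ENOMEM;
memcpy(new_part->header.name, name,
       strnlen(name, sizeof(new_part->header.name)));

/* ps3: the destination is a u64, so zero-initialise it and copy at most
 * the 8 bytes that fit. */
u64 n = 0;
memcpy(&n, text, strnlen(text, sizeof(n)));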
-
By Michael Ellerman
This is a branch with a mixture of mm, x86 and powerpc commits all relating to some minor cross-arch pkeys consolidation. The x86/mm changes have been reviewed by Ingo & Dave Hansen and the tree has been in linux-next for some weeks without issue.
-
By Michael Ellerman
We ended up with an ugly conflict between fixes and next in ftrace.h involving multiple nested ifdefs, and the automatic resolution is wrong. So merge fixes into next so we can fix it up.
-
By Michael Ellerman
Merge in some commits we're sharing with the kbuild tree.
-
By Michael Ellerman
Merge in some commits we're sharing with the kvm-ppc tree.
-
- 01 June 2018, 7 commits
-
-
By Aneesh Kumar K.V
Fix the below crash on Book3E 64. pgtable_page_dtor() expects a struct page * argument. Also call the destructor on non-book3s platforms correctly. This frees up the split PTL locks correctly if we had allocated them before. Call Trace: .kmem_cache_free+0x9c/0x44c (unreliable) .ptlock_free+0x1c/0x30 .tlb_remove_table+0xdc/0x224 .free_pgd_range+0x298/0x500 .shift_arg_pages+0x10c/0x1e0 .setup_arg_pages+0x200/0x25c .load_elf_binary+0x450/0x16c8 .search_binary_handler.part.11+0x9c/0x248 .do_execveat_common.isra.13+0x868/0xc18 .run_init_process+0x34/0x4c .try_to_run_init_process+0x1c/0x68 .kernel_init+0xdc/0x130 .ret_from_kernel_thread+0x58/0x7c Fixes: 70234676 ("powerpc/mm/nohash: Remove pte fragment dependency from nohash") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Mathieu Malaterre
In commit eae5f709 ("powerpc: Add __printf verification to prom_printf") the __printf attribute was added to prom_printf(), which means GCC started warning about type/format mismatches. As part of that commit we changed some "%lx" formats to "%llx" where the type is actually unsigned long long. Unfortunately prom_printf() doesn't know how to print "%llx", it just prints a literal "lx", eg: reserved memory map: lx - lx lx - lx prom_printf() also doesn't know how to print "%u" (only "%lu"), it just prints a literal "u", eg: Max number of cores passed to firmware: u (NR_CPUS = 2048) Instead of: Max number of cores passed to firmware: 2048 (NR_CPUS = 2048) This commit adds support for the missing formatters. Fixes: eae5f709 ("powerpc: Add __printf verification to prom_printf") Reported-by: Michael Ellerman <mpe@ellerman.id.au> Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Mathieu Malaterre <malat@debian.org> Tested-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Vaibhav Jain
APC virtual machines aren't used on POWER9 chips and are already disabled in the on-chip CAPP. They also need to be disabled on the PSL via the 'PSL Data Send Control Register' by setting bit(47). This forces the PSL to send commands to CAPP with queue.id == 0. Fixes: 56328743 ("cxl: Add support for POWER9 DD2") Cc: stable@vger.kernel.org # v4.15+ Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com> Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Reviewed-by: Alastair D'Silva <alastair@d-silva.org> Reviewed-by: Christophe Lombard <clombard@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Vaibhav Jain
Currently we see a kernel oops reported on POWER9 while attaching a context to an AFU, with radix mode and the sysfs attr 'prefault_mode' set to anything other than 'none'. The backtrace of the oops is of this form: Unable to handle kernel paging request for data at address 0x00000080 Faulting instruction address: 0xc00800000bcf3b20 cpu 0x1: Vector: 300 (Data Access) at [c00000037f003800] pc: c00800000bcf3b20: cxl_load_segment+0x178/0x290 [cxl] lr: c00800000bcf39f0: cxl_load_segment+0x48/0x290 [cxl] sp: c00000037f003a80 msr: 9000000000009033 dar: 80 dsisr: 40000000 current = 0xc00000037f280000 paca = 0xc0000003ffffe600 softe: 3 irq_happened: 0x01 pid = 3529, comm = afp_no_int <snip> cxl_prefault+0xfc/0x248 [cxl] process_element_entry_psl9+0xd8/0x1a0 [cxl] cxl_attach_dedicated_process_psl9+0x44/0x130 [cxl] native_attach_process+0xc0/0x130 [cxl] afu_ioctl+0x3f4/0x5e0 [cxl] do_vfs_ioctl+0xdc/0x890 ksys_ioctl+0x68/0xf0 sys_ioctl+0x40/0xa0 system_call+0x58/0x6c The issue arises because on POWER8 the AFU attr 'prefault_mode' was used to improve initial storage fault performance by prefaulting process segments. However, on POWER9 with radix mode we don't have storage segments that we can prefault, and prefaulting process pages would be too costly and fine-grained. Hence, since the prefaulting mechanism doesn't make sense for radix mode, this patch updates prefault_mode_store() to not allow any value other than CXL_PREFAULT_NONE when radix mode is enabled. Fixes: f24be42a ("cxl: Add psl9 specific code") Cc: stable@vger.kernel.org # v4.12+ Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
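A hedged sketch of the guard added to prefault_mode_store(); the sysfs strings come from the change log, while the enum names (CXL_PREFAULT_WED, CXL_PREFAULT_ALL) are assumptions about the existing cxl code, and the exact upstream diff may differ:

static ssize_t prefault_mode_store(struct device *device,
				   struct device_attribute *attr,
				   const char *buf, size_t count)
{
	struct cxl_afu *afu = to_cxl_afu(device);
	enum prefault_modes mode = CXL_PREFAULT_NONE;

	if (!radix_enabled()) {
		/* hash mode: segment prefaulting still makes sense */
		if (!strncmp(buf, "work_element_descriptor", 23))
			mode = CXL_PREFAULT_WED;
		else if (!strncmp(buf, "all", 3))
			mode = CXL_PREFAULT_ALL;
	} else {
		/* radix has no storage segments to prefault */
		if (strncmp(buf, "none", 4))
			return -EINVAL;
	}

	afu->prefault_mode = mode;
	return count;
}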
-
By Nicholas Piggin
The powerpc toolchain can compile combinations of 32/64-bit and big/little-endian, so it's convenient to consider, e.g., `CC -m64 -mbig-endian` to be the C compiler for the purpose of invoking it to build target artifacts, and overriding the CC variable to include these flags works for this purpose. Unfortunately, that is not compatible with the way the proposed new Kconfig macro language will work. After previous patches in this series, these flags can instead be carefully passed in using the flags variables. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Nicholas Piggin
Switch the VDSO32 build over to use CROSS32_COMPILE directly, and have it pass in -m32 after the standard c_flags. This allows the endianness overrides to be removed and the endian and bitness flags moved into the standard flags variables. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Nicholas Piggin
Some 64-bit toolchains use the wrong ISA variant for compiling 32-bit kernels, even with -m32. Debian's powerpc64le is one such case, because it is built with --with-cpu=power8. So when cross compiling a 32-bit kernel with a 64-bit toolchain, set -mcpu=powerpc initially, which is the generic 32-bit powerpc machine type and scheduling model. CPU and platform code can override this with subsequent -mcpu flags if necessary. This is not done for 32-bit toolchains, otherwise it would override their defaults, which are presumably set appropriately for the environment (more so than a 64-bit cross compiler's). This fixes a lot of build failures due to incompatible assembly when compiling a 32-bit kernel with the Debian powerpc64le 64-bit toolchain. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 29 May 2018, 1 commit
-
-
By Aneesh Kumar K.V
arch/powerpc/kernel/stacktrace.c: In function ‘save_stack_trace_tsk_reliable’: arch/powerpc/kernel/stacktrace.c:176:28: error: ‘kretprobe_trampoline’ undeclared if (ip == (unsigned long)kretprobe_trampoline) ^~~~~~~~~~~~~~~~~~~~ Fixes: df78d3f6 ("powerpc/livepatch: Implement reliable stack tracing for the consistency model") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 28 May 2018, 1 commit
-
-
By Thiago Jung Bauermann
This test verifies that the AMR, IAMR and UAMOR are being written to a process's core file. Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com> [mpe: Simplify make rule] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-