- 14 December 2015, 4 commits
-
-
Authored by Aneesh Kumar K.V

This further makes a copy of the pte defines in book3s/64/hash*.h, removing the dependency on pgtable-ppc64-4k.h and pgtable-ppc64-64k.h.

Acked-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Aneesh Kumar K.V

In this patch we do:

    cp pgtable-ppc32.h book3s/32/pgtable.h
    cp pgtable-ppc64.h book3s/64/pgtable.h

This enables us to make further changes to the hash-specific config. We will change the page table format for 64-bit hash in later patches.

Acked-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Aneesh Kumar K.V

Acked-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Aneesh Kumar K.V

This is the same bug we fixed as part of 09567e7f ("powerpc/mm: Check paca psize is up to date for huge mappings"); see that commit for details. The difference here is that the faults were happening on a 4K page at an address previously mapped by hugetlb.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 10 December 2015, 3 commits
-
-
Authored by Anton Blanchard

Two DSCR tests have a hack in them:

    /*
     * XXX: Force a context switch out so that DSCR
     * current value is copied into the thread struct
     * which is required for the child to inherit the
     * changed value.
     */
    sleep(1);

We should not be working around this in the testcase; it is a kernel bug. Fix it by copying the current DSCR to the child, instead of what we had in the thread struct at the last context switch.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
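A minimal sketch of the fix described above, assuming the usual powerpc helpers (mfspr/SPRN_DSCR); the wrapper function is illustrative, not the literal patch hunk:

    /* At fork time, read the parent's live DSCR from the SPR instead of the
     * value cached in the thread struct at the last context switch. */
    static void inherit_dscr_from_parent(struct task_struct *p)
    {
        if (cpu_has_feature(CPU_FTR_DSCR))
            p->thread.dscr = mfspr(SPRN_DSCR);  /* was: current->thread.dscr */
    }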
-
Authored by Anton Blanchard

Commit 152d523e ("powerpc: Create context switch helpers save_sprs() and restore_sprs()") moved the restore of SPRs after the call to _switch(). There is an issue with this approach: new tasks do not return through _switch(); they are set up by copy_thread() to return directly through ret_from_fork() or ret_from_kernel_thread(). This means restore_sprs() is not getting called for new tasks. Fix this by moving restore_sprs() before _switch().

Fixes: 152d523e ("powerpc: Create context switch helpers save_sprs() and restore_sprs()")
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

Commit a0e72cf1 ("powerpc: Create msr_check_and_{set,clear}()") removed a call to check_if_tm_restore_required() in the enable_kernel_*() functions. Add them back in.

Fixes: a0e72cf1 ("powerpc: Create msr_check_and_{set,clear}()")
Reported-by: Rashmica Gupta <rashmicy@gmail.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
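A sketch of the repaired enable_kernel_fp() shape implied by the commit text and the surrounding series (not quoted from the patch): the TM check has to run before the user FP state is given up on behalf of the kernel.

    void enable_kernel_fp(void)
    {
        WARN_ON(preemptible());

        msr_check_and_set(MSR_FP);

        if (current->thread.regs && (current->thread.regs->msr & MSR_FP)) {
            check_if_tm_restore_required(current);
            __giveup_fpu(current);
        }
    }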
-
- 02 December 2015, 4 commits
-
-
Authored by Anton Blanchard

Remove a bunch of unnecessary fallback functions and group things in a more logical way.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

Most of __switch_to() is housekeeping: TLB batching, timekeeping, etc. Move these away from the more complex and critical context switching code.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

Create a single function that flushes everything (FP, VMX, VSX, SPE). Doing this all at once means we only do one MSR write.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

Create a single function that gives everything up (FP, VMX, VSX, SPE). Doing this all at once means we only do one MSR write.

A context switch microbenchmark using yield():

    http://ozlabs.org/~anton/junkcode/context_switch2.c
    ./context_switch2 --test=yield --fp --altivec --vector 0 0

shows an improvement of 3% on POWER8.

Signed-off-by: Anton Blanchard <anton@samba.org>
[mpe: giveup_all() needs to be EXPORT_SYMBOL'ed]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
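A sketch of the single give-up path this describes; identifiers follow the commit text and related patches in this series, and the CONFIG_ALTIVEC/VSX/SPE guards and TM handling are omitted for brevity.

    void giveup_all(struct task_struct *tsk)
    {
        unsigned long usermsr;

        if (!tsk->thread.regs)
            return;

        usermsr = tsk->thread.regs->msr;
        if (!(usermsr & (MSR_FP | MSR_VEC | MSR_VSX | MSR_SPE)))
            return;

        /* One MSR write enables every facility we are about to touch. */
        msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX | MSR_SPE);

        if (usermsr & MSR_FP)
            __giveup_fpu(tsk);
        if (usermsr & MSR_VEC)
            __giveup_altivec(tsk);
        if (usermsr & MSR_SPE)
            __giveup_spe(tsk);
    }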
-
- 01 December 2015, 16 commits
-
-
Authored by Anton Blanchard

More consolidation of our MSR available bit handling.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

Add a boot option that strictly manages the MSR unavailable bits. This catches kernel uses of FP/Altivec/SPE that would otherwise corrupt user state.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

The enable_kernel_*() functions leave the relevant MSR bits enabled until we exit the kernel sometime later. Create disable versions that wrap the kernel use of FP, Altivec, VSX or SPE.

While we don't want to disable it normally for performance reasons (MSR writes are slow), it will be used for a debug boot option that does this and catches bad uses in other areas of the kernel.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
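A sketch of how a kernel FP section would be bracketed once the disable wrappers exist; the usage pattern is inferred from the commit text, not a quoted hunk. disable_kernel_fp() is expected to be a no-op in normal operation and only issue the extra MSR clear under the strict debug boot option.

    static void do_fp_work(void)
    {
        preempt_disable();
        enable_kernel_fp();
        /* ... kernel code that uses the FP registers ... */
        disable_kernel_fp();
        preempt_enable();
    }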
-
Authored by Anton Blanchard

Create helper functions to set and clear MSR bits after first checking if they are already set. Grouping them will make it easy to avoid the MSR writes in a subsequent optimisation.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
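A sketch of the helper shape described above: read the MSR once, compute the new value, and only write it back when it actually changes. mtmsrd_isync() is the write helper named elsewhere in this series; the VSX special-casing in the real code is omitted, so treat this as illustrative.

    static unsigned long msr_check_and_set(unsigned long bits)
    {
        unsigned long oldmsr = mfmsr();
        unsigned long newmsr = oldmsr | bits;

        if (newmsr != oldmsr)
            mtmsrd_isync(newmsr);   /* skip the slow MSR write when unchanged */

        return newmsr;
    }

    static void msr_check_and_clear(unsigned long bits)
    {
        unsigned long oldmsr = mfmsr();
        unsigned long newmsr = oldmsr & ~bits;

        if (newmsr != oldmsr)
            mtmsrd_isync(newmsr);
    }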
-
Authored by Anton Blanchard

With the recent change to enable_kernel_vsx(), we no longer need to call enable_kernel_fp() and enable_kernel_altivec().

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

Move the MSR modification into C. Removing it from the assembly function will allow us to avoid costly MSR writes by batching them up.

Check the FP and VMX bits before calling the relevant giveup_*() function. This makes giveup_vsx() and flush_vsx_to_thread() perform more like their sister functions, and allows us to use flush_vsx_to_thread() in the signal code.

Move the check_if_tm_restore_required() check in.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

Move the MSR modification into new C functions. Removing it from the low level functions will allow us to avoid costly MSR writes by batching them up.

Move the check_if_tm_restore_required() check into these new functions.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

We used to allow giveup_*() to be called with a NULL task struct pointer. Now that those cases are handled in the caller, we can remove the checks. We can also remove giveup_altivec_notask(), which is also unused.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

mtmsrd_isync() will do an mtmsrd followed by an isync on older processors. On newer processors we avoid the isync via a feature fixup.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

Instead of having multiple giveup_*_maybe_transactional() functions, separate out the TM check into a new function called check_if_tm_restore_required(). This will make it easier to optimise the giveup_*() functions in a subsequent patch.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

The UP-only lazy floating point and vector optimisations were written back when SMP was not common, and neither glibc nor gcc used vector instructions. Now SMP is very common, glibc aggressively uses vector instructions and gcc autovectorises. We want to add new optimisations that apply to both UP and SMP, but in preparation for that remove these UP-only optimisations.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

No need to execute mflr twice.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

Move all our context switch SPR save and restore code into two helpers. We do a few optimisations:

- Group all mfsprs and all mtsprs. In many cases an mtspr sets a scoreboarding bit that an mfspr waits on, so the current practice of mfspr A; mtspr A; mfspr B; mtspr B is the worst scheduling we can do.

- SPR writes are slow, so check that the value is changing before writing it.

A context switch microbenchmark using yield():

    http://ozlabs.org/~anton/junkcode/context_switch2.c
    ./context_switch2 --test=yield 0 0

shows an improvement of almost 10% on POWER8.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
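A sketch of the restore side of the helpers described above, with DSCR and TAR as illustrative per-thread SPRs (feature and config guards are simplified): each mtspr is skipped when the incoming task's value matches the outgoing one.

    static inline void restore_sprs(struct thread_struct *old_thread,
                                    struct thread_struct *new_thread)
    {
        if (cpu_has_feature(CPU_FTR_DSCR) &&
            old_thread->dscr != new_thread->dscr)
            mtspr(SPRN_DSCR, new_thread->dscr);

        if (cpu_has_feature(CPU_FTR_ARCH_207S) &&
            old_thread->tar != new_thread->tar)
            mtspr(SPRN_TAR, new_thread->tar);
    }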
-
Authored by Anton Blanchard

Similar to the non-TM load_up_*() functions, don't disable the MSR bits on the way out.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Anton Blanchard

Writing the MSR is slow, so we want to avoid it whenever possible. A subsequent patch will add a debug option that strictly manages the FP/VMX/VSX unavailable bits. For now just remove it, matching what we do in other areas of the kernel (eg enable_kernel_altivec()).

A context switch microbenchmark using yield():

    http://ozlabs.org/~anton/junkcode/context_switch2.c
    ./context_switch2 --test=yield --fp 0 0

shows an improvement of almost 3% on POWER8.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Paul Mackerras

Currently, if HV KVM is configured but PR KVM isn't, we don't include a test to see whether we were interrupted in KVM guest context for the set of interrupts which get delivered directly to the guest by hardware if they occur in the guest. This includes things like program interrupts.

However, the recent bug where userspace could set the MSR for a VCPU to have an illegal value in the TS field, and thus cause a TM Bad Thing type of program interrupt on the hrfid that enters the guest, showed that we can never be completely sure that these interrupts can never occur in the guest entry/exit code. If one of these interrupts does happen and we have HV KVM configured but not PR KVM, then we end up trying to run the handler in the host with the MMU set to the guest MMU context, which generally ends badly.

Thus, for robustness it is better to have the test in every interrupt vector, so that if some way is found to trigger some interrupt in the guest entry/exit path, we can handle it without immediately crashing the host.

This means that the distinction between KVMTEST and KVMTEST_PR goes away. Thus we delete KVMTEST_PR and associated macros and use KVMTEST everywhere that we previously used either KVMTEST_PR or KVMTEST. It also means that SOFTEN_TEST_HV_201 becomes the same as SOFTEN_TEST_PR, so we delete SOFTEN_TEST_HV_201 and use SOFTEN_TEST_PR instead.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 26 November 2015, 8 commits
-
-
Authored by Luis de Bethencourt

This platform driver has an OF device ID table but the OF module alias information is not created, so module autoloading won't work.

Signed-off-by: Luis de Bethencourt <luisbg@osg.samsung.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
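This commit and the next one fix the same autoloading gap. A minimal sketch of the usual fix, with an invented driver and compatible string rather than the ones touched by the patch: MODULE_DEVICE_TABLE(of, ...) emits the module alias that lets modprobe match the device-tree uevent.

    #include <linux/module.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    static const struct of_device_id foo_of_match[] = {
        { .compatible = "vendor,foo-device" },  /* illustrative */
        { /* sentinel */ }
    };
    /* Without this line the .ko carries no "of:..." alias, so the driver
     * is never autoloaded when the matching node is present. */
    MODULE_DEVICE_TABLE(of, foo_of_match);

    static int foo_probe(struct platform_device *pdev)
    {
        return 0;
    }

    static struct platform_driver foo_driver = {
        .probe  = foo_probe,
        .driver = {
            .name           = "foo",
            .of_match_table = foo_of_match,
        },
    };
    module_platform_driver(foo_driver);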
-
Authored by Luis de Bethencourt

This platform driver has an OF device ID table but the OF module alias information is not created, so module autoloading won't work.

Signed-off-by: Luis de Bethencourt <luisbg@osg.samsung.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Rashmica Gupta

It is common practice with powerpc to use 'rN' to refer to register 'N'. However, when using the pt_regs_offset table we have to use 'gprN'. So add aliases such that both 'rN' and 'gprN' can be used.

For example, we can currently do:

    $ su -
    $ cd /sys/kernel/debug/tracing
    $ echo "p:probe/sys_fchownat sys_fchownat %gpr3:s32 +0(%gpr4):string %gpr5:s32 %gpr6:s32 %gpr7:s32" > kprobe_events
    $ echo 1 > events/probe/sys_fchownat/enable
    $ touch /tmp/foo
    $ chown root /tmp/foo
    $ echo 0 > events/enable
    $ cat trace
    chown-2925 [014] d... 76.160657: sys_fchownat: (SyS_fchownat+0x8/0x1a0) arg1=-100 arg2="/tmp/foo" arg3=0 arg4=-1 arg5=0

Instead we'd like to be able to use:

    $ echo "p:probe/sys_fchownat sys_fchownat %r3:s32 +0(%r4):string %r5:s32 %r6:s32 %r7:s32" > kprobe_events

Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
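A sketch of how the alias can be expressed in the pt_regs_offset table (macro and table names are illustrative; the real patch in arch/powerpc/kernel/ptrace.c may differ): both spellings resolve to the same gpr[] slot.

    #define GPR_OFFSET_NAME(num)                                              \
        { .name = "r" #num,   .offset = offsetof(struct pt_regs, gpr[num]) }, \
        { .name = "gpr" #num, .offset = offsetof(struct pt_regs, gpr[num]) }

    static const struct pt_regs_offset regoffset_table[] = {
        GPR_OFFSET_NAME(0),
        GPR_OFFSET_NAME(1),
        /* ... r2 through r31 ... */
        { .name = NULL, .offset = -1 }  /* sentinel */
    };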
-
Authored by Rashmica Gupta

Most architectures use NR_syscalls as the #define for the number of syscalls. We use __NR_syscalls, and then define NR_syscalls as __NR_syscalls. __NR_syscalls is not used outside arch code, whereas NR_syscalls is. So, since NR_syscalls must be defined and __NR_syscalls need not be, replace __NR_syscalls with NR_syscalls.

Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Rashmica Gupta

This function has been unused since commit 14cf11af ("powerpc: Merge enough to start building in arch/powerpc."), so remove it.

Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by Laurent Vivier

If the language is not English, the objdump output is not parsed correctly and format ends up as "". Later, "ld -m $format" fails. This patch adds "LANG=C" to force English output from objdump.

Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Authored by John Ogness

Since gpio1 is not a timer, it also should not use IRQF_TIMER. Similar to commit ba461f09 ("powerpc: Use IRQF_NO_SUSPEND not IRQF_TIMER for non-timer interrupts").

Signed-off-by: John Ogness <john.ogness@linutronix.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
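A sketch of the flag substitution described above; the irq number, handler and device names are invented for illustration. IRQF_NO_SUSPEND keeps the interrupt armed across suspend without mislabelling it as a timer interrupt.

    ret = request_irq(virq, gpio1_interrupt,
                      IRQF_NO_SUSPEND /* was IRQF_TIMER */,
                      "gpio1", dev);
    if (ret)
        pr_err("gpio1: failed to request irq %d\n", virq);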
-
Authored by John Ogness

The xmon and cascade irq handlers must not run as threads. pmac_pic_lock is already a raw_spinlock, but the irq flag IRQF_NO_THREAD needs to be set as well.

Signed-off-by: John Ogness <john.ogness@linutronix.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
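A sketch of marking such a handler non-threadable; the identifiers are invented. With forced irq threading the handler would otherwise run in a kernel thread, which a cascade (chained) handler cannot tolerate, so IRQF_NO_THREAD keeps it in hard interrupt context.

    ret = request_irq(cascade_irq, pmac_cascade_handler,
                      IRQF_NO_THREAD, "cascade", NULL);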
-
- 24 November 2015, 3 commits
-
-
Authored by Krzysztof Kozlowski

A platform_driver does not need to set an owner because platform_driver_register() will set it.

Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
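A sketch of the cleanup pattern (driver name invented): the explicit .owner assignment is redundant because platform_driver_register() is a macro that passes THIS_MODULE and fills the field in itself.

    static int foo_probe(struct platform_device *pdev)
    {
        return 0;
    }

    static struct platform_driver foo_driver = {
        .probe  = foo_probe,
        .driver = {
            .name = "foo",
            /* .owner = THIS_MODULE,  -- dropped; set by platform_driver_register() */
        },
    };
    module_platform_driver(foo_driver);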
-
Authored by Andrew Donnellan

When writing a value to config space, cxl_pcie_write_config() calls cxl_pcie_config_info() to obtain a mask and shift value, shifts the new value accordingly, then uses the mask to combine the shifted value with the existing value at the address as part of a read-modify-write pattern.

Currently, we use a logical OR operator rather than a bitwise OR operator, which means any use of this function results in an incorrect value being written. Replace the logical OR operator with a bitwise OR operator so the value is written correctly.

Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: stable@vger.kernel.org
Fixes: 6f7f0b3d ("cxl: Add AFU virtual PHB and kernel API")
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
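An illustration of the bug class described above, with made-up names rather than the actual cxl code: in a read-modify-write, a logical OR makes the whole expression evaluate to 0 or 1, clobbering the register instead of updating one field.

    static u32 rmw_field(u32 reg, u32 mask, u32 shift, u32 new_val)
    {
        /* buggy:   return (reg & ~mask) || (new_val << shift);   -> always 0 or 1 */
        return (reg & ~mask) | (new_val << shift);  /* correct: merge the field */
    }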
-
Authored by Vaibhav Jain

An idr warning is reported when a context is released after the capi card is unbound from the cxl driver via sysfs. Below are the steps to reproduce:

1. Create multiple afu contexts in a user-space application using libcxl.
2. Unbind the capi card from cxl using a command of the form:
   echo <capi-card-pci-addr> > /sys/bus/pci/drivers/cxl-pci/unbind
3. Exit/kill the application owning the afu contexts.

After the above steps a warning message is usually seen in the kernel logs of the form "idr_remove called for id=<context-id> which is not allocated."

This is caused by the function cxl_release_afu, which destroys the contexts_idr table. So when a context is released, no entry for the context pe is found in the contexts_idr table and the idr code prints this warning.

This patch fixes the issue by increasing and decreasing the ref-count on the afu device when a context is initialized or freed, respectively. This prevents the afu from being released until all the afu contexts have been released. The patch introduces two new functions, cxl_afu_get/put, that manage the ref-count on the afu device. It also removes the code inside cxl_dev_context_init that increases the ref on the afu device, as the afu is guaranteed to be alive during this function.

Reported-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Acked-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
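A sketch of the get/put helpers described above, based on the commit text (the real definitions in the cxl driver may differ): pinning the struct device embedded in the afu keeps the afu, and with it the contexts_idr table, alive until the last context drops its reference.

    static inline struct cxl_afu *cxl_afu_get(struct cxl_afu *afu)
    {
        get_device(&afu->dev);
        return afu;
    }

    static inline void cxl_afu_put(struct cxl_afu *afu)
    {
        put_device(&afu->dev);
    }

    /* Taken when a context is initialised, dropped when it is freed:
     *     ctx->afu = cxl_afu_get(afu);   ...   cxl_afu_put(ctx->afu);
     */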
-
- 23 November 2015, 2 commits
-
-
Authored by Linus Torvalds
-
Authored by Linus Torvalds

Merge slub bulk allocator updates from Andrew Morton:

 "This missed the merge window because I was waiting for some repairs to come in. Nothing actually uses the bulk allocator yet and the changes to other code paths are pretty small. And the net guys are waiting for this so they can start merging the client code"

More comments from Jesper Dangaard Brouer:

 "The kmem_cache_alloc_bulk() call, in mm/slub.c, were included in previous kernel. The present version contains a bug. Vladimir Davydov noticed it contained a bug, when kernel is compiled with CONFIG_MEMCG_KMEM (see commit 03ec0ed5: "slub: fix kmem cgroup bug in kmem_cache_alloc_bulk"). Plus the mem cgroup counterpart in kmem_cache_free_bulk() were missing (see commit 03374518 "slub: add missing kmem cgroup support to kmem_cache_free_bulk").

  I don't consider the fix stable-material because there are no in-tree users of the API. But with known bugs (for memcg) I cannot start using the API in the net-tree"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
  slab/slub: adjust kmem_cache_alloc_bulk API
  slub: add missing kmem cgroup support to kmem_cache_free_bulk
  slub: fix kmem cgroup bug in kmem_cache_alloc_bulk
  slub: optimize bulk slowpath free by detached freelist
  slub: support for bulk free with SLUB freelists
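A sketch of how a client might use the bulk API once it lands; the semantics are assumed from the patch titles above (alloc_bulk fills the array and returns the number of objects obtained, 0 on failure; free_bulk returns them in one call).

    #include <linux/slab.h>

    #define BULK 16

    static int demo_bulk(struct kmem_cache *cachep)
    {
        void *objs[BULK];
        int n;

        n = kmem_cache_alloc_bulk(cachep, GFP_KERNEL, BULK, objs);
        if (!n)
            return -ENOMEM;

        /* ... use objs[0 .. n-1] ... */

        kmem_cache_free_bulk(cachep, n, objs);
        return 0;
    }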
-