- 02 March 2017, 5 commits
-
-
Committed by Ingo Molnar

We are going to split <linux/sched/debug.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/debug.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
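As a reading aid, a minimal sketch of what such a placeholder header looks like (contents and guard name assumed from the description above, not quoted from the patch):

  /* include/linux/sched/debug.h - placeholder that simply maps back to
   * <linux/sched.h>, so callers can switch to the new header before the
   * real split lands. */
  #ifndef _LINUX_SCHED_DEBUG_H
  #define _LINUX_SCHED_DEBUG_H

  #include <linux/sched.h>

  #endif /* _LINUX_SCHED_DEBUG_H */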
-
Committed by Ingo Molnar

We are going to move the softlockup APIs out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files.

<linux/nmi.h> already includes <linux/sched.h>.

Include the <linux/nmi.h> header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ingo Molnar

We are going to split <linux/sched/clock.h> out of <linux/sched.h>, which will have to be picked up from other headers and .c files.

Create a trivial placeholder <linux/sched/clock.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ingo Molnar

We are going to split <linux/sched/topology.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/topology.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ingo Molnar

So the original intention of tsk_cpus_allowed() was to 'future-proof' the field - but it's pretty ineffectual at that, because half of the code uses ->cpus_allowed directly ...

Also, the wrapper makes the code longer than the original expression!

So just get rid of it. This also shrinks <linux/sched.h> a bit.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
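For context, the wrapper was a thin accessor around the ->cpus_allowed field; a hedged before/after sketch of the kind of call-site change this implies (the call site below is illustrative, not taken from the patch):

  /* the old wrapper, roughly: */
  #define tsk_cpus_allowed(tsk)	(&(tsk)->cpus_allowed)

  /* a call site before the cleanup: */
  cpumask_and(mask, cpu_online_mask, tsk_cpus_allowed(p));

  /* ... and after it, using the field directly: */
  cpumask_and(mask, cpu_online_mask, &p->cpus_allowed);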
-
- 28 February 2017, 1 commit
-
-
Committed by Vegard Nossum

Apart from adding the helper function itself, the rest of the kernel is converted mechanically using:

  git grep -l 'atomic_inc.*mm_count' | xargs sed -i 's/atomic_inc(&\(.*\)->mm_count);/mmgrab\(\1\);/'
  git grep -l 'atomic_inc.*mm_count' | xargs sed -i 's/atomic_inc(&\(.*\)\.mm_count);/mmgrab\(\&\1\);/'

This is needed for a later patch that hooks into the helper, but might be a worthwhile cleanup on its own.

(Michal Hocko provided most of the kerneldoc comment.)

Link: http://lkml.kernel.org/r/20161218123229.22952-1-vegard.nossum@oracle.com
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
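The helper itself is small; a sketch of its likely shape (kerneldoc omitted), assuming it simply wraps the mm_count increment that the sed expressions above replace:

  /* pin the mm_struct itself by bumping mm_count */
  static inline void mmgrab(struct mm_struct *mm)
  {
          atomic_inc(&mm->mm_count);
  }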
-
- 25 February 2017, 1 commit
-
-
Committed by Hugh Dickins

Remove the prototypes for shmem_mapping() and shmem_zero_setup() from linux/mm.h, since they are already provided in linux/shmem_fs.h. But shmem_fs.h must then provide the inline stub for shmem_mapping() when CONFIG_SHMEM is not set, and a few more .c files now need to #include it.

Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1702081658250.1549@eggly.anvils
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
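The arrangement described above commonly takes the following shape in <linux/shmem_fs.h> (a sketch, not quoted from the patch):

  #ifdef CONFIG_SHMEM
  extern bool shmem_mapping(struct address_space *mapping);
  #else
  static inline bool shmem_mapping(struct address_space *mapping)
  {
          return false;
  }
  #endif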
-
- 23 February 2017, 3 commits
-
-
Committed by Frédéric Weisbecker

This type conversion is a leftover that got ignored during the kcpustat conversion to nanosecs, resulting in build breakage with configs having CONFIG_NO_HZ_FULL=y:

  arch/powerpc/kernel/time.c: In function 'running_clock':
  arch/powerpc/kernel/time.c:712:2: error: implicit declaration of function 'cputime_to_nsecs' [-Werror=implicit-function-declaration]
    return local_clock() - cputime_to_nsecs(kcpustat_this_cpu->cpustat[CPUTIME_STEAL]);

All we need is to remove it.

Fixes: e7f340ca ("powerpc, sched/cputime: Remove unused cputime definitions")
Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
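In other words, the fix in running_clock() amounts to dropping the now-removed conversion, since the kcpustat fields are already kept in nanoseconds; roughly:

  /* before */
  return local_clock() - cputime_to_nsecs(kcpustat_this_cpu->cpustat[CPUTIME_STEAL]);

  /* after: CPUTIME_STEAL is already accounted in nanoseconds */
  return local_clock() - kcpustat_this_cpu->cpustat[CPUTIME_STEAL];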
-
Committed by Aneesh Kumar K.V

We will set LPCR with the correct value for radix during init. This makes sure we start with a sanitized value of LPCR. In case of kexec, CPUs can have an LPCR value based on the previous translation mode we were running.

Fixes: fe036a06 ("powerpc/64/kexec: Fix MMU cleanup on radix")
Cc: stable@vger.kernel.org # v4.9+
Acked-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Naveen N. Rao

Optprobes on powerpc are limited to the kernel text area. We decided to also optimize kretprobe_trampoline since that is also in the kernel text area. However, we failed to take into consideration the fact that the same trampoline is also used to catch function returns from kernel modules. As an example:

  $ sudo modprobe kobject-example
  $ sudo bash -c "echo 'r foo_show+8' > /sys/kernel/debug/tracing/kprobe_events"
  $ sudo bash -c "echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable"
  $ sudo cat /sys/kernel/debug/kprobes/list
  c000000000041350  k  kretprobe_trampoline+0x0    [OPTIMIZED]
  d000000000e00200  r  foo_show+0x8  kobject_example
  $ cat /sys/kernel/kobject_example/foo
  Segmentation fault

With the below (trimmed) splat in dmesg:

  Unable to handle kernel paging request for data at address 0xfec40000
  Faulting instruction address: 0xc000000000041540
  Oops: Kernel access of bad area, sig: 11 [#1]
  ...
  NIP [c000000000041540] optimized_callback+0x70/0xe0
  LR [c000000000041e60] optinsn_slot+0xf8/0x10000
  Call Trace:
  [c0000000c7327850] [c000000000289af4] alloc_set_pte+0x1c4/0x860 (unreliable)
  [c0000000c7327890] [c000000000041e60] optinsn_slot+0xf8/0x10000
  --- interrupt: 700 at 0xc0000000c7327a80
      LR = kretprobe_trampoline+0x0/0x10
  [c0000000c7327ba0] [c0000000003a30d4] sysfs_kf_seq_show+0x104/0x1d0
  [c0000000c7327bf0] [c0000000003a0bb4] kernfs_seq_show+0x44/0x60
  [c0000000c7327c10] [c000000000330578] seq_read+0xf8/0x560
  [c0000000c7327cb0] [c0000000003a1e64] kernfs_fop_read+0x194/0x260
  [c0000000c7327d00] [c0000000002f9954] __vfs_read+0x44/0x1a0
  [c0000000c7327d90] [c0000000002fb4cc] vfs_read+0xbc/0x1b0
  [c0000000c7327de0] [c0000000002fd138] SyS_read+0x68/0x110
  [c0000000c7327e30] [c00000000000b8e0] system_call+0x38/0xfc

Fix this by loading up the kernel TOC before calling into the kernel. The original TOC gets restored as part of the usual pt_regs restore.

Fixes: 762df10b ("powerpc/kprobes: Optimize kprobe in kretprobe_trampoline()")
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 21 February 2017, 1 commit
-
-
Committed by Michael Roth

With the inclusion of commit 333f7b76 ("powerpc/pseries: Implement indexed-count hotplug memory add") and commit 75384347 ("powerpc/pseries: Implement indexed-count hotplug memory remove"), we now have complete handling of the RTAS hotplug event format as described by PAPR via ACR "PAPR Changes for Hotplug RTAS Events".

This capability is indicated by byte 6, bit 2 (5 in IBM numbering) of architecture option vector 5, and allows for greater control over cpu/memory/pci hot plug/unplug operations.

Existing pseries kernels will utilize this capability based on the existence of the /event-sources/hot-plug-events DT property, so we only need to advertise it via CAS and do not need a corresponding FW_FEATURE_* value to test for.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 17 February 2017, 2 commits
-
-
Committed by Gavin Shan

The CAPI driver creates a virtual PHB (vPHB) from the CAPI adapter. The vPHB's IO and memory windows aren't built from a device-tree node as we do for normal PHBs. An error message is thrown in the below path when trying to probe AFUs contained in the adapter. The error message is confusing and unnecessary.

  cxl_probe()
    pci_init_afu()
      cxl_pci_vphb_add()
        pcibios_scan_phb()
          pcibios_setup_phb_resources()

This removes the error message. We might have the case where the first memory window on a real PHB isn't populated properly because of an error in the "ranges" property in the device-tree node. We can check the device-tree instead for that.

This also removes one unnecessary blank line in the function.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Russell Currey

A PVR value of 0x0F000005 means we are arch v3.00 compliant (i.e. POWER9).

Acked-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Russell Currey <ruscur@russell.cc>
[mpe: Don't set num_pmcs, so we keep the PMU fields from the raw entry]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 15 February 2017, 4 commits
-
-
Committed by Rashmica Gupta

There are quite a few entries in asm-offsets.c which look like:

  DEFINE(REG, STACK_FRAME_OVERHEAD+offsetof(struct pt_regs, reg));

So define a macro to do it once.

Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
[mpe: Rename to STACK_PT_REGS_OFFSET for excruciating explicitness]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
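The helper macro presumably looks something like this (a sketch based on the pattern above, with a GPR0 entry used as a hypothetical before/after example):

  #define STACK_PT_REGS_OFFSET(sym, val) \
          DEFINE(sym, STACK_FRAME_OVERHEAD + offsetof(struct pt_regs, val))

  /* so an entry such as */
  DEFINE(GPR0, STACK_FRAME_OVERHEAD+offsetof(struct pt_regs, gpr[0]));
  /* becomes */
  STACK_PT_REGS_OFFSET(GPR0, gpr[0]);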
-
Committed by Rashmica Gupta

A lot of entries in asm-offsets.c look like this:

  DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));

But there is a common macro, OFFSET, which makes this cleaner:

  OFFSET(TI_flags, thread_info, flags)

So use it.

Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

WARN_ONCE() takes a condition and a format string. We were passing a constant string as the condition, and the function name as the format string. It would work, but the message would be just the function name.

Fix it by just using WARN_ONCE() directly instead of if (x) WARN_ONCE().

Noticed-by: Geliang Tang <geliangtang@163.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
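As a purely illustrative sketch of the pattern being fixed (the condition, names and message below are placeholders, not the actual call site):

  /* before: the "condition" slot holds a string literal, so the warning
   * fires whenever this point is reached and the printed message is just
   * the name passed in the format slot */
  if (err)
          WARN_ONCE("unexpected error", "some_function");

  /* after: hand WARN_ONCE() the real condition and a proper format string */
  WARN_ONCE(err, "some_function: unexpected error\n");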
-
Committed by Ravi Bangoria

Currently the xmon data-breakpoint feature is broken.

Whenever a watchpoint match occurs, hw_breakpoint_handler() will be called by do_break() via the notifier chain mechanism. If the watchpoint was registered by xmon, hw_breakpoint_handler() won't find any associated perf_event and returns immediately with NOTIFY_STOP. Similarly, do_break() also returns without notifying xmon.

Solve this by returning NOTIFY_DONE when hw_breakpoint_handler() does not find any perf_event associated with the matched watchpoint, rather than NOTIFY_STOP, which tells the core code to continue calling the other breakpoint handlers, including the xmon one.

Cc: stable@vger.kernel.org
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
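The shape of such a fix inside hw_breakpoint_handler() is roughly the following (a sketch; the per-CPU variable name is assumed):

  struct perf_event *bp = __this_cpu_read(bp_per_reg);

  if (!bp) {
          /* was NOTIFY_STOP: returning NOTIFY_DONE lets the remaining
           * handlers on the notifier chain (e.g. xmon's) see the hit */
          rc = NOTIFY_DONE;
          goto out;
  }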
-
- 10 February 2017, 4 commits
-
-
Committed by Naveen N. Rao

... as the generic weak variant will do.

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anju T

A kprobe placed on kretprobe_trampoline() during boot time can be optimized, since the instruction at the probe point is a 'nop'.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anju T

The current kprobe infrastructure uses an unconditional trap instruction to probe a running kernel. Optprobes allow a kprobe to replace the trap with a branch instruction to a detour buffer. The detour buffer contains instructions to create an in-memory pt_regs. It also has a call to optimized_callback(), which in turn calls the pre_handler(). After the execution of the pre-handler, a call is made for instruction emulation. The NIP is determined in advance through dummy instruction emulation, and a branch instruction to that NIP is created at the end of the trampoline.

To address the range limitation of branch instructions on the POWER architecture, the detour buffer slot is allocated from a reserved area. For the time being, 64KB is reserved in memory for this purpose.

Instructions which can be emulated using analyse_instr() are the candidates for optimization. Before optimization, ensure that the address range between the allocated detour buffer and the instruction being probed is within +/- 32MB.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by David Gibson

The hypervisor needs to know a guest is capable of using the HPT resizing PAPR extension in order to take full advantage of it for memory hotplug.

If the hypervisor knows the guest is HPT resize aware, it can size the initial HPT based on the initial guest RAM size, relying on the guest to resize the HPT when more memory is hot-added. Without this, the hypervisor must size the HPT for the maximum possible guest RAM, which can lead to a huge waste of space if the guest never actually expands to that maximum size.

This patch advertises the guest's support for HPT resizing via the ibm,client-architecture-support OF interface. We use bit 5 of byte 6 of option vector 5 for this purpose, as defined in the PAPR ACR "HPT resizing option".

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 07 February 2017, 4 commits
-
-
Committed by Benjamin Herrenschmidt

All entry points already read the MSR, so they can easily do the right thing.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Nicholas Piggin

The branch from hmi_exception_early to hmi_exception_realmode must use a "relocatable-style" branch, because it is branching from unrelocated exception code to beyond __end_interrupts.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Nicholas Piggin

start,size has the benefit of being easier to search for (start,end usually gives you the preceding vector from the one you want, as the first result).

Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Nicholas Piggin

Somewhere along the line, search/replace left some naming garbled, and untidy alignment (aka. mpe stuffed it up). Might as well fix them all up now, while the git blame history doesn't extend too far.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 06 February 2017, 8 commits
-
-
Committed by Benjamin Herrenschmidt

This adds AUX vectors for the L1 I/D, L2 and L3 cache levels, providing for each cache level the size of the cache in bytes and the geometry (line size and number of ways).

We chose not to use the existing alpha/sh definition, which packs all the information in a single entry per cache level, as it is too restricted to represent some of the geometries used on POWER.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
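From userspace these entries can be read with getauxval(); a small sketch, assuming your libc headers already define the powerpc AT_L1D_CACHESIZE/AT_L1D_CACHEGEOMETRY constants and that the geometry value packs the line size in the low 16 bits with the associativity above it:

  #include <stdio.h>
  #include <sys/auxv.h>

  int main(void)
  {
          unsigned long size = getauxval(AT_L1D_CACHESIZE);
          unsigned long geo  = getauxval(AT_L1D_CACHEGEOMETRY);

          if (!size)      /* 0 means the kernel/firmware didn't provide it */
                  return 1;

          printf("L1D: %lu bytes, line size %lu, %lu-way\n",
                 size, geo & 0xffff, geo >> 16);
          return 0;
  }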
-
Committed by Benjamin Herrenschmidt

All shipping firmware versions have it wrong in the device-tree.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Benjamin Herrenschmidt

Retrieved from the device-tree when available.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Benjamin Herrenschmidt

We have two sets of identical struct members for the I and D sides, and mostly identical bunches of code to parse the device-tree to populate them. Instead, make a ppc_cache_info structure with one copy for I and one for D.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
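The consolidated structure plausibly looks something like the sketch below (the exact field set is assumed from the surrounding commits in this series, which also go on to add sets/associativity fields and L2/L3 copies):

  struct ppc_cache_info {
          u32 size;               /* total cache size in bytes */
          u32 line_size;          /* performance-hint line size */
          u32 block_size;         /* size covered by cache management insns */
          u32 log_block_size;
          u32 blocks_per_page;
  };

  struct ppc64_caches {
          struct ppc_cache_info l1d;
          struct ppc_cache_info l1i;
          /* later patches in the series add l2 and l3 copies here */
  };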
-
Committed by Benjamin Herrenschmidt

It will be used to calculate the associativity.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Benjamin Herrenschmidt

In a number of places we called "cache line size" what is actually the cache block size, which in the powerpc architecture means the effective size to use with cache management instructions (it can be different from the actual cache line size). We fix the naming across the board and properly retrieve both pieces of information when available in the device-tree.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Benjamin Herrenschmidt

We don't patch instructions based on the cache line or block sizes these days.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Benjamin Herrenschmidt

The variables are defined twice, in setup_32.c and setup_64.c; define them once in setup-common.c instead.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 04 February 2017, 1 commit
-
-
Committed by Ard Biesheuvel

The modversion symbol CRCs are emitted as ELF symbols, which allows us to easily populate the kcrctab sections by relying on the linker to associate each kcrctab slot with the correct value.

This has a couple of downsides:

 - Given that the CRCs are treated as memory addresses, we waste 4 bytes for each CRC on 64 bit architectures,

 - On architectures that support runtime relocation, a R_<arch>_RELATIVE relocation entry is emitted for each CRC value, which identifies it as a quantity that requires fixing up based on the actual runtime load offset of the kernel. This results in corrupted CRCs unless we explicitly undo the fixup (and this is currently being handled in the core module code),

 - Such runtime relocation entries take up 24 bytes of __init space each, resulting in a x8 overhead in [uncompressed] kernel size for CRCs.

Switching to explicit 32 bit values on 64 bit architectures fixes most of these issues, given that 32 bit values are not treated as quantities that require fixing up based on the actual runtime load offset. Note that on some ELF64 architectures [such as PPC64], these 32-bit values are still emitted as [absolute] runtime relocatable quantities, even if the value resolves to a build time constant. Since relative relocations are always resolved at build time, this patch enables MODULE_REL_CRCS on powerpc when CONFIG_RELOCATABLE=y, which turns the absolute CRC references into relative references into .rodata where the actual CRC value is stored.

So redefine all CRC fields and variables as u32, and redefine the __CRC_SYMBOL() macro for 64 bit builds to emit the CRC reference using inline assembler (which is necessary since 64-bit C code cannot use 32-bit types to hold memory addresses, even if they are ultimately resolved using values that do not exceed 0xffffffff). To avoid potential problems with legacy 32-bit architectures using legacy toolchains, the equivalent C definition of the kcrctab entry is retained for 32-bit architectures.

Note that this mostly reverts commit d4703aef ("module: handle ppc64 relocating kcrctabs when CONFIG_RELOCATABLE=y").

Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 February 2017, 1 commit
-
-
Committed by Andrew Donnellan

Commit 38addce8 ("gcc-plugins: Add latent_entropy plugin") excludes certain powerpc early boot code from the latent entropy plugin by adding appropriate CFLAGS. It looks like this was supposed to cover prom_init.o, but ended up saying init.o (which doesn't exist) instead. Fix the typo.

Fixes: 38addce8 ("gcc-plugins: Add latent_entropy plugin")
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 02 February 2017, 1 commit
-
-
Committed by John Allen

Extend the existing PRRN infrastructure to perform the actual affinity updating for cpus and memory, in addition to the device tree updating. For cpus, dynamic affinity updating already appears to exist in the kernel in the form of arch_update_cpu_topology(). For memory, we must place a READD operation on the hotplug queue for any phandle included in the PRRN event that is determined to be an LMB.

Signed-off-by: John Allen <jallen@linux.vnet.ibm.com>
Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 01 February 2017, 4 commits
-
-
Committed by Frederic Weisbecker

Since the core doesn't deal with cputime_t anymore, most of these APIs have been left unused. Let's remove them.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-33-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Frederic Weisbecker

This is one more step toward converting cputime accounting to pure nsecs.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-25-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Frederic Weisbecker

This is one more step toward converting cputime accounting to pure nsecs.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-24-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Frederic Weisbecker

This is one more step toward converting cputime accounting to pure nsecs.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wanpeng Li <wanpeng.li@hotmail.com>
Link: http://lkml.kernel.org/r/1485832191-26889-23-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-