- 25 September 2014 (19 commits)
-
Committed by Zhouyi Zhou
CONFIG_JUMP_LABEL does not guarantee HAVE_JUMP_LABEL; when the latter is absent, use the maintainer's own mutex to guard modification of the global values. Signed-off-by: Zhouyi Zhou <yizhouzhou@ict.ac.cn> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
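A minimal sketch of that pattern, assuming hypothetical names (val_mutex, global_val, update_global_val are illustrations, not the patched code):

    #include <linux/mutex.h>
    #include <linux/jump_label.h>

    static DEFINE_MUTEX(val_mutex);   /* hypothetical guard */
    static int global_val;            /* hypothetical shared state */

    static void update_global_val(int v)
    {
    #ifdef HAVE_JUMP_LABEL
            global_val = v;           /* static-key path synchronises elsewhere */
    #else
            /* no jump labels available: serialise updates with our own mutex */
            mutex_lock(&val_mutex);
            global_val = v;
            mutex_unlock(&val_mutex);
    #endif
    }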
-
Committed by Pranith Kumar
Fix a build error caused by a missing export: ERROR: "dcr_ind_lock" [drivers/net/ethernet/ibm/emac/ibm_emac.ko] undefined! Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
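The usual fix for this class of link error is to export the symbol next to its definition. A sketch only; whether the real patch used EXPORT_SYMBOL or EXPORT_SYMBOL_GPL is an assumption:

    #include <linux/spinlock.h>
    #include <linux/export.h>

    /* in the file that defines the lock, next to the definition */
    DEFINE_SPINLOCK(dcr_ind_lock);
    EXPORT_SYMBOL(dcr_ind_lock);   /* makes the symbol visible to modules */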
-
Committed by Anton Blanchard
A recent patch added a function prototype for htab_remove_mapping in C code. Fix it. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anton Blanchard
There were a number of prototypes for functions that no longer exist. Remove them. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anton Blanchard
Fix a number of places where global functions did not include their prototypes. This ensures the prototype and the function match. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
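A sketch of the hygiene being enforced, with hypothetical file and function names: the file defining a global function includes the header that declares it, so the compiler flags any mismatch.

    /* foo.h: the one declaration everyone uses */
    void foo(int x);

    /* foo.c */
    #include "foo.h"   /* definition is now checked against the prototype */

    void foo(int x)
    {
            /* ... */
    }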
-
Committed by Anton Blanchard
Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anton Blanchard
Simplify things considerably by moving all the ppc32-specific symbol exports into their own file. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anton Blanchard
Move the lib symbol exports closer to their function definitions. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anton Blanchard
Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anton Blanchard
Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anton Blanchard
Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anton Blanchard
Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Neuling
Check that the OPAL_DUMP_READ token exists before initialising the dump infrastructure. This avoids littering the OPAL console with: "OPAL: Called with bad token 91" Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Neuling
Check that the OPAL_ELOG_READ token exists before initialising the elog infrastructure. This avoids littering the OPAL console with: "OPAL: Called with bad token 74" Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Neuling
Check that the OPAL_RTC_READ token exists before we use the OPAL RTC. Refactors the code a little to merge error paths. This avoids littering the OPAL console with: "OPAL: Called with bad token 3". Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Neuling
Currently there is no way to generically check if an OPAL call exists or not from the host kernel. This adds an OPAL call opal_check_token() which tells you if the given token is present in OPAL or not. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
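A sketch of how the three commits above use the new call to guard optional facilities; the surrounding init function here is hypothetical:

    #include <linux/init.h>
    #include <linux/errno.h>
    #include <linux/printk.h>
    #include <asm/opal.h>

    static int __init example_rtc_init(void)   /* hypothetical caller */
    {
            /* skip setup when firmware lacks the call, instead of
             * probing blindly and triggering "bad token" console spam */
            if (!opal_check_token(OPAL_RTC_READ)) {
                    pr_warn("opal: OPAL_RTC_READ absent, skipping RTC setup\n");
                    return -ENODEV;
            }
            /* ... safe to issue opal_rtc_read() from here on ... */
            return 0;
    }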
-
Committed by Pranith Kumar
Fix the ppc32 build failure reported here: http://kisskb.ellerman.id.au/kisskb/buildresult/11663513/ The error is as follows: arch/powerpc/include/asm/floppy.h:142:20: error: 'isa_bridge_pcidev' undeclared (first use in this function) This happens since floppy.o is enabled by BLK_DEV_FD, which depends on ARCH_MAY_HAVE_PC_FDC, which is in turn enabled if PPC_PSERIES=n. The fix changes the dependency so that ARCH_MAY_HAVE_PC_FDC depends exclusively on PCI, since otherwise it will not compile. Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> CC: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
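In Kconfig terms, the described dependency change amounts to something like the following fragment; the exact hunk is an assumption, not quoted from the patch:

    config ARCH_MAY_HAVE_PC_FDC
            bool
            default PCI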
-
Committed by Tony Breeds
In commit 29f1aff2 (powerpc: Copy bootable images in the default install script) we changed to copying all the built boot targets, based on the assumption that it is backwards compatible. It turns out that Debian-derived installkernel scripts will barf if not given exactly 4 args. This change reverts make install to just install the vmlinux (we can change the default in a separate patch) and introduces a new make zInstall, which works with a more flexible installkernel script. Cc: Grant Likely <grant.likely@secretlab.ca> Signed-off-by: Tony Breeds <tony@bakeyournoodle.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Vasant Hegde
Presently we only support initiating a Service Processor dump from the host. Hence update the sysfs message, along with a couple of other error/info messages. Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 23 September 2014 (1 commit)
-
Committed by Himangi Saraogi
A continue is not needed at the bottom of a loop. The Coccinelle semantic patch implementing this change is:

    @@
    @@

    for (...;...;...) {
      ...
      if (...) {
        ...
    -   continue;
      }
    }

Signed-off-by: Himangi Saraogi <himangi774@gmail.com> Acked-by: Julia Lawall <julia.lawall@lip6.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 09 September 2014 (5 commits)
-
Committed by Pranith Kumar
This patch wires up three new syscalls for powerpc. The three new syscalls are seccomp, getrandom and memfd_create. Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
-
Committed by Cyril Bur
CONFIG_FHANDLE is a requirement for systemd, and with the increasing uptake of systemd within distros it makes sense for 64-bit defconfigs to include it. Signed-off-by: Cyril Bur <cyril.bur@au1.ibm.com>
-
Committed by Li Zhong
Since opal_message_init() uses machine_early_initcall(powernv, ), and opal_hmi_handler_init() depends on that early initcall, the latter also needs to use machine_* to check the machine_id. Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
-
Committed by Anton Blanchard
ABIv2 kernels are failing to backtrace through the kernel. An example:

    39.30%  readseek2_proce  [kernel.kallsyms]  [k] find_get_entry
            |
            --- find_get_entry
                __GI___libc_read

The problem is in valid_next_sp() where we check that the new stack pointer is at least STACK_FRAME_OVERHEAD below the previous one. ABIv1 has a minimum stack frame size of 112 bytes, consisting of a 48-byte fixed area and 64 bytes of parameter save area. ABIv2 changes that to 32 bytes with no parameter save area. STACK_FRAME_OVERHEAD is in theory the minimum stack frame size, but we have over 240 uses of it, some of which assume that it includes space for the parameter area. We need to work through all our stack defines and rationalise them, but let's fix perf now by creating STACK_FRAME_MIN_SIZE and using it in valid_next_sp(). This fixes the issue:

    30.64%  readseek2_proce  [kernel.kallsyms]  [k] find_get_entry
            |
            --- find_get_entry
                pagecache_get_page
                generic_file_read_iter
                new_sync_read
                vfs_read
                sys_read
                syscall_exit
                __GI___libc_read

Cc: stable@vger.kernel.org # 3.16+ Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Anton Blanchard <anton@samba.org>
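A sketch of the corrected check, simplified from the description above; the real perf callchain code also validates the stack bounds, so treat this as illustrative:

    /* minimum frame: 32 bytes on ABIv2, 112 on ABIv1; comparing against
     * the larger STACK_FRAME_OVERHEAD rejected valid ABIv2 frames and
     * truncated backtraces */
    #define STACK_FRAME_MIN_SIZE 32

    static int valid_next_sp(unsigned long sp, unsigned long prev_sp)
    {
            if (sp & 0xf)   /* stack pointers are 16-byte aligned */
                    return 0;
            return sp >= prev_sp + STACK_FRAME_MIN_SIZE;
    }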
-
Committed by Thomas Falcon
Values acquired from Open Firmware are in 32-bit big endian format and need to be handled on little endian architectures. This patch ensures values are in cpu endian when hotplugging memory. Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
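The conversion pattern in question, sketched with a hypothetical helper:

    #include <linux/types.h>
    #include <asm/byteorder.h>

    /* Open Firmware cells are 32-bit big endian; be32_to_cpu() is a
     * no-op on big endian hosts and a byteswap on little endian ones */
    static u32 of_cell_to_cpu(__be32 raw)   /* hypothetical helper */
    {
            return be32_to_cpu(raw);
    }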
-
- 03 September 2014 (1 commit)
-
Committed by Laurent Dufour
fc95ca72 introduces a memset in kvmppc_alloc_hpt since the general CMA doesn't clear the memory it allocates. However, the size argument passed to memset is computed from a signed value, and its sign bit is extended by the cast the compiler is doing. This leads to an extremely large size value when dealing with order values >= 31, and almost all the memory following the allocated space is cleared. As a consequence, the system panics and may even fail to spawn the kdump kernel. This fix makes use of an unsigned value for memset's size argument to avoid sign extension. Along with this fix, another shift operation which may also lead to a sign-extended value is fixed. Cc: Alexey Kardashevskiy <aik@ozlabs.ru> Cc: Paul Mackerras <paulus@samba.org> Cc: Alexander Graf <agraf@suse.de> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
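A minimal standalone illustration of the bug class (not the actual KVM code):

    #include <stdio.h>

    int main(void)
    {
            int order = 31;
            /* (1 << 31) overflows int (in practice INT_MIN); converting
             * the negative result to size_t sign-extends to a huge length */
            size_t bad  = (size_t)(1 << order);   /* 0xffffffff80000000 on 64-bit */
            size_t good = (size_t)1 << order;     /* 0x0000000080000000 */
            printf("bad=%#zx good=%#zx\n", bad, good);
            return 0;
    }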
-
- 30 August 2014 (1 commit)
-
Committed by Vivek Goyal
The new system call depends on crypto. As it did not have a separate config option, CONFIG_KEXEC was modified to select CRYPTO and CRYPTO_SHA256. But now that the previous patch has introduced a new config option for the new syscall, CONFIG_KEXEC no longer requires crypto. Remove that dependency. Signed-off-by: Vivek Goyal <vgoyal@redhat.com> Cc: Eric Biederman <ebiederm@xmission.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Shaun Ruffell <sruffell@digium.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 19 August 2014 (1 commit)
-
Committed by Alexey Kardashevskiy
fc95ca72 claims that there is no functional change, but this is not true: it calls get_order() (which takes bytes) where it should have called order_base_2(), and the kernel stops on VM_BUG_ON(). This replaces get_order() with order_base_2() (the round-up version of ilog2). Suggested-by: Paul Mackerras <paulus@samba.org> Cc: Alexander Graf <agraf@suse.de> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
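The distinction between the two helpers, sketched (the demo function is hypothetical; the values follow the helpers' definitions):

    #include <linux/log2.h>
    #include <linux/sizes.h>
    #include <asm/page.h>

    static void order_demo(void)   /* hypothetical illustration */
    {
            /* order_base_2(n): round-up log2 of a plain number */
            int lg  = order_base_2(SZ_64K);   /* 16: log2 of 65536 */
            /* get_order(n): allocation order for a BYTE count, i.e. it
             * divides by PAGE_SIZE first; mixing the two up is off by
             * PAGE_SHIFT */
            int ord = get_order(SZ_64K);      /* 4 with 4K pages: 16 pages */
            (void)lg; (void)ord;
    }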
-
- 13 August 2014 (12 commits)
-
Committed by Aneesh Kumar K.V
Add a tracepoint to track hugepage invalidation. This helps us debug difficult-to-track bugs. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Aneesh Kumar K.V
On ppc64 we support 4K hash ptes with a 64K page size. That requires us to track the hash pte slot information on a per-4K basis. We do that by storing the slot details in the second half of the pte page. The pte bit _PAGE_COMBO is used to indicate whether the second half needs to be looked at while building the real_pte. We need to use a read memory barrier while doing that so that the load of hidx is not reordered w.r.t. the _PAGE_COMBO check. On the store side we already do a lwsync in __hash_page_4K. CC: <stable@vger.kernel.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
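A sketch of the reader-side ordering requirement; the wrapper function is hypothetical, though the names follow the description above:

    static unsigned long combo_hidx(pte_t pte, pte_t *ptep)   /* hypothetical */
    {
            unsigned long hidx = 0;

            if (pte_val(pte) & _PAGE_COMBO) {
                    /* don't let the hidx load be reordered before the
                     * _PAGE_COMBO check; pairs with the lwsync done on
                     * the store side in __hash_page_4K */
                    smp_rmb();
                    hidx = pte_val(*(ptep + PTRS_PER_PTE));   /* second half */
            }
            return hidx;
    }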
-
Committed by Aneesh Kumar K.V
We would get wrong results if the compiler recomputed old_pmd. Avoid that by using ACCESS_ONCE. CC: <stable@vger.kernel.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
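The pattern in question, sketched as a one-line fragment:

    /* force a single load of the pmd; without ACCESS_ONCE the compiler
     * may legally re-read *pmdp later and observe a changed value */
    pmd_t old_pmd = ACCESS_ONCE(*pmdp);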
-
Committed by Aneesh Kumar K.V
As per the ISA, for a 4K base page size we compare bits 14..65 of the VA specified with the entry VA in the TLB. That implies we need to make sure we do a tlbie with all the possible 4K VAs we used to access the 16MB hugepage. With a 64K base page size we compare bits 14..57 of the VA. Hence we cannot ignore the lower 24 bits of the VA while doing a tlbie. We also cannot TLB-invalidate a 16MB entry with just one tlbie instruction, because we don't track which VA was used to instantiate the TLB entry. CC: <stable@vger.kernel.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Aneesh Kumar K.V
If we change the base page size of the segment, either via sub_page_protect or via remap_4k_pfn, we do a demote_segment which doesn't flush the hash table entries. We then do a lazy hash page table flush for all mapped pages in the demoted segment; this happens when we handle a hash page fault for these pages. We use the _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether a pte is backed by a 4K hash pte. If we find _PAGE_COMBO not set on the pte, that implies we could possibly have older 64K hash pte entries in the hash page table, and we need to invalidate those entries. Use _PAGE_COMBO to determine the page size with which we should invalidate the hash table entries on unmap. CC: <stable@vger.kernel.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Aneesh Kumar K.V
If we change the base page size of the segment, either via sub_page_protect or via remap_4k_pfn, we do a demote_segment which doesn't flush the hash table entries. We then do a lazy hash page table flush for all mapped pages in the demoted segment; this happens when we handle a hash page fault for these pages. We use the _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether a pte is backed by a 4K hash pte. If we find _PAGE_COMBO not set on the pte, that implies we could possibly have older 64K hash pte entries in the hash page table, and we need to invalidate those entries. Handle this correctly for 16M pages. CC: <stable@vger.kernel.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Aneesh Kumar K.V
The segment identifier and segment size will remain the same in the loop, so we can compute them outside. We also change the hugepage_invalidate interface so that we can use it in a later patch. CC: <stable@vger.kernel.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Aneesh Kumar K.V
With hugepages, we store the hpte valid information in the pte page whose address is stored in the second half of the PMD. Use a write barrier to make sure that clearing the pmd busy bit and updating the hpte valid info are ordered properly. CC: <stable@vger.kernel.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
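The store-side counterpart of the ordering described two entries above, sketched; the names and exact sequence are illustrative, not quoted from the patch:

    /* writer: make the hpte valid info visible before the busy bit is
     * cleared, so a reader that sees !_PAGE_BUSY also sees valid info */
    mark_hpte_slot_valid(hpte_slot_array, index, slot);
    smp_wmb();
    *pmdp = __pmd(new_pmd & ~_PAGE_BUSY);   /* publish: drop the busy bit */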
-
Committed by Nishanth Aravamudan
There is an issue currently where NUMA information is used on powerpc (and possibly ia64) before it has been read from the device-tree, which leads to large slab consumption with CONFIG_SLUB and memoryless nodes. NUMA powerpc non-boot CPU's cpu_to_node/cpu_to_mem is only accurate after start_secondary(), similar to ia64, which is invoked via smp_init(). Commit 6ee0578b ("workqueue: mark init_workqueues() as early_initcall()") made init_workqueues() be invoked via do_pre_smp_initcalls(), which is obviously before the secondary processors are online. Additionally, the following commits changed init_workqueues() to use cpu_to_node to determine the node to use for kthread_create_on_node: bce90380 ("workqueue: add wq_numa_tbl_len and wq_numa_possible_cpumask[]") and f3f90ad4 ("workqueue: determine NUMA node of workers accourding to the allowed cpumask"). Therefore, when init_workqueues() runs, it sees all CPUs as being on Node 0. On LPARs or KVM guests where Node 0 is memoryless, this leads to a high number of slab deactivations (http://www.spinics.net/lists/linux-mm/msg67489.html). Fix this by initializing the powerpc-specific CPU<->node/local memory node mapping as early as possible, which on powerpc is do_init_bootmem(). Currently that function initializes the mapping for the boot CPU, but we extend it to set up the mapping for all possible CPUs. Then, in smp_prepare_cpus(), we can correspondingly set the per-cpu values for all possible CPUs. That ensures that before the early_initcalls run (and really as early as possible), the per-cpu NUMA mapping is accurate. While testing memoryless nodes on PowerKVM guests with a fix to the workqueue logic to use cpu_to_mem() instead of cpu_to_node(), with a guest topology of:

    available: 2 nodes (0-1)
    node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
    node 0 size: 0 MB
    node 0 free: 0 MB
    node 1 cpus: 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
    node 1 size: 16336 MB
    node 1 free: 15329 MB
    node distances:
    node   0   1
      0:  10  40
      1:  40  10

the slab consumption decreases from

    Slab:             932416 kB
    SUnreclaim:       902336 kB

to

    Slab:             395264 kB
    SUnreclaim:       359424 kB

And we see a corresponding increase in the slab efficiency from

    slab                    mem     objs    slabs
                           used    active   active
    ------------------------------------------------------------
    kmalloc-16384       337 MB   11.28%  100.00%
    task_struct         288 MB    9.93%  100.00%

to

    slab                    mem     objs    slabs
                           used    active   active
    ------------------------------------------------------------
    kmalloc-16384        37 MB  100.00%  100.00%
    task_struct          31 MB  100.00%  100.00%

Powerpc didn't support memoryless nodes until recently (64bb80d8 "powerpc/numa: Enable CONFIG_HAVE_MEMORYLESS_NODES" and 8c272261 "powerpc/numa: Enable USE_PERCPU_NUMA_NODE_ID"). Those commits also helped improve memory consumption with these kinds of environments. Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Himangi Saraogi
Free memory allocated with kmem_cache_zalloc using kmem_cache_free rather than kfree. The Coccinelle semantic patch that makes this change is as follows:

    // <smpl>
    @@
    expression x,E,c;
    @@
    x = \(kmem_cache_alloc\|kmem_cache_zalloc\|kmem_cache_alloc_node\)(c,...)
    ... when != x = E
        when != &x
    ?-kfree(x)
    +kmem_cache_free(c,x)
    // </smpl>

Signed-off-by: Himangi Saraogi <himangi774@gmail.com> Acked-by: Julia Lawall <julia.lawall@lip6.fr> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Thomas Falcon
A buffer returned by H_VTERM_PARTNER_INFO contains device information in big endian format, causing problems for little endian architectures. This patch ensures that they are in cpu endian. Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
xmon only soft disables interrupts. This seems like a bad idea: we certainly don't want decrementer and PMU exceptions going off when we are debugging something inside xmon. This issue was uncovered when the hard lockup detector went off inside xmon. To ensure we won't get a spurious hard lockup warning, I also call touch_nmi_watchdog() when exiting xmon. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
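A sketch of the pattern, simplified; xmon's real entry and exit paths do considerably more than this:

    unsigned long flags;

    local_irq_save(flags);
    hard_irq_disable();     /* really mask decrementer/PMU, not just soft-disable */

    /* ... sit in the debugger ... */

    touch_nmi_watchdog();   /* avoid a spurious hard-lockup warning on exit */
    local_irq_restore(flags);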
-