- 18 December 2013 (2 commits)
-
Submitted by David S. Miller

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Francesco Fusco

We introduce a new hashing library that is meant to be used in contexts where speed is more important than uniformity of the hashed values. The hash library leverages architecture-specific implementations to achieve high performance and falls back to jhash() for the generic case.

On Intel-based x86 architectures, the library can exploit the crc32l instruction, part of the Intel SSE4.2 instruction set, if the instruction is supported by the processor. This implementation is twice as fast as the jhash() implementation on an i7 processor. Additional architectures, such as ARM64, provide instructions for accelerating the computation of CRC, so they could be added as well in follow-up work.

Signed-off-by: Francesco Fusco <ffusco@redhat.com>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Thomas Graf <tgraf@redhat.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
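A minimal user-space sketch of the idea described above: use the hardware CRC32 instruction when the CPU advertises SSE4.2, otherwise fall back to a generic hash. The function names and the multiplicative fallback standing in for jhash() are illustrative assumptions, not the actual kernel API.

```c
#include <stdint.h>
#include <stddef.h>
#include <nmmintrin.h>   /* _mm_crc32_u32, needs SSE4.2 */

/* Generic fallback: a simple multiplicative hash standing in for jhash(). */
static uint32_t generic_hash(const uint32_t *data, size_t words, uint32_t seed)
{
    uint32_t h = seed;
    while (words--)
        h = (h ^ *data++) * 0x9e3779b1u;  /* golden-ratio multiplier */
    return h;
}

/* Fast path: feed each word through the SSE4.2 CRC32 instruction. */
__attribute__((target("sse4.2")))
static uint32_t crc32_hash(const uint32_t *data, size_t words, uint32_t seed)
{
    uint32_t h = seed;
    while (words--)
        h = _mm_crc32_u32(h, *data++);
    return h;
}

/* Dispatch at runtime: "use crc32 if supported, else fall back to jhash()". */
uint32_t fast_hash(const uint32_t *data, size_t words, uint32_t seed)
{
    if (__builtin_cpu_supports("sse4.2"))
        return crc32_hash(data, words, seed);
    return generic_hash(data, words, seed);
}
```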
-
- 01 December 2013 (6 commits)
-
Submitted by Richard Weinberger

On UML, SUBARCH can be x86, x86_64 or i386, and if it is x86 we use uname -m to select a defconfig. Therefore we can no longer apply -mcmodel=large only when SUBARCH is x86_64.

Reported-and-tested-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
-
Submitted by Richard Weinberger

We cannot use print_stack_trace because the name conflicts with linux/stacktrace.h.

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Richard Weinberger <richard@nod.at>
-
Submitted by Fabio Estevam

Currently mx53 (Cortex-A8) running at 1GHz reports:

    Calibrating delay loop... 663.55 BogoMIPS (lpj=3317760)

Tom Evans verified that alignments of 0x0 and 0x8 run the two instructions of __loop_delay in one clock cycle (1 clock/loop), while alignments of 0x4 and 0xc take 3 clocks to run the loop twice (1.5 clocks/loop).

The original object code looks like this:

    00000010 <__loop_const_udelay>:
      10: e3e01000  mvn   r1, #0
      14: e51f201c  ldr   r2, [pc, #-28]  ; 0 <__loop_udelay-0x8>
      18: e5922000  ldr   r2, [r2]
      1c: e0800921  add   r0, r0, r1, lsr #18
      20: e1a00720  lsr   r0, r0, #14
      24: e0822b21  add   r2, r2, r1, lsr #22
      28: e1a02522  lsr   r2, r2, #10
      2c: e0000092  mul   r0, r2, r0
      30: e0800d21  add   r0, r0, r1, lsr #26
      34: e1b00320  lsrs  r0, r0, #6
      38: 01a0f00e  moveq pc, lr

    0000003c <__loop_delay>:
      3c: e2500001  subs  r0, r0, #1
      40: 8afffffe  bhi   3c <__loop_delay>
      44: e1a0f00e  mov   pc, lr

After adding the 'align 3' directive to __loop_delay (align to 8 bytes):

    00000010 <__loop_const_udelay>:
      10: e3e01000  mvn   r1, #0
      14: e51f201c  ldr   r2, [pc, #-28]  ; 0 <__loop_udelay-0x8>
      18: e5922000  ldr   r2, [r2]
      1c: e0800921  add   r0, r0, r1, lsr #18
      20: e1a00720  lsr   r0, r0, #14
      24: e0822b21  add   r2, r2, r1, lsr #22
      28: e1a02522  lsr   r2, r2, #10
      2c: e0000092  mul   r0, r2, r0
      30: e0800d21  add   r0, r0, r1, lsr #26
      34: e1b00320  lsrs  r0, r0, #6
      38: 01a0f00e  moveq pc, lr
      3c: e320f000  nop   {0}

    00000040 <__loop_delay>:
      40: e2500001  subs  r0, r0, #1
      44: 8afffffe  bhi   40 <__loop_delay>
      48: e1a0f00e  mov   pc, lr
      4c: e320f000  nop   {0}

which now reports:

    Calibrating delay loop... 996.14 BogoMIPS (lpj=4980736)

Some more test results: on mx31 (ARM1136) running at 532 MHz, before the patch:

    Calibrating delay loop... 351.43 BogoMIPS (lpj=1757184)

On mx31 (ARM1136) running at 532 MHz, after the patch:

    Calibrating delay loop... 528.79 BogoMIPS (lpj=2643968)

Also tested on mx6 (Cortex-A9) and on mx27 (ARM926), which show the same BogoMIPS value before and after this patch.

Reported-by: Tom Evans <tom_usenet@optusnet.com.au>
Suggested-by: Tom Evans <tom_usenet@optusnet.com.au>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Dave Martin

Copying a function with memcpy() and then trying to execute the result isn't trivially portable to Thumb. This patch modifies the kexec soft restart code to copy its assembler trampoline relocate_new_kernel() using fncpy() instead, so that relocate_new_kernel can be in the same ISA as the rest of the kernel without problems.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reported-by: Taras Kondratiuk <taras.kondratiuk@linaro.org>
Tested-by: Taras Kondratiuk <taras.kondratiuk@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
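A rough sketch of why fncpy() matters on ARM/Thumb: a plain memcpy() of a function produces a byte-identical copy but loses the Thumb bit (bit 0 of the function address), so jumping to the copy can land in the wrong instruction set. The snippet below shows the general pattern only; the buffer, its size symbol, and the surrounding helper are simplified assumptions, not the real arch/arm kexec code.

```c
#include <linux/types.h>
#include <linux/string.h>
#include <asm/fncpy.h>          /* ARM's ISA-aware function-copy helper */

/* Assumed symbols, for illustration only. */
extern void relocate_new_kernel(void);
extern unsigned long relocate_new_kernel_size;

/* fncpy() requires an 8-byte-aligned destination buffer. */
static u8 reboot_code_buffer[4096] __aligned(8);

static void (*copied_code)(void);

static void copy_reboot_code(void)
{
    /*
     * A bare memcpy() would copy the bytes but lose the Thumb bit, so
     * calling the copy directly may execute in the wrong ISA:
     *
     *   memcpy(reboot_code_buffer, relocate_new_kernel,
     *          relocate_new_kernel_size);
     *
     * fncpy() copies the code and hands back a function pointer whose
     * low bit encodes the correct instruction set for the copy.
     */
    copied_code = fncpy(reboot_code_buffer, &relocate_new_kernel,
                        relocate_new_kernel_size);
}
```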
-
Submitted by Victor Kamensky

After the "ARM: signal: sigreturn_codes should be endian neutral to work in BE8" commit, Thumb-only platforms, like armv7m, fail to compile sigreturn_codes.S. The reason is that for such architectures the '.arm' directive and ARM opcodes are not allowed. The fix conditionally enables ARM opcodes only if CONFIG_CPU_THUMBONLY is not defined, and it uses .org directives to keep the sigreturn_codes layout.

Suggested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Russell King

- The LEDs register is write-only: it can't be read-modify-written.
- The LEDs are write-1-for-off, not 0.
- The check for the platform was inverted.

Fixes: cf6856d6 ("ARM: mach-footbridge: retire custom LED code")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: stable@vger.kernel.org
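The usual way to drive a write-only, active-low LED register like the one described above is to keep a software shadow of the last value written and invert the logic on the way out. This is a generic sketch under assumed register and bit names, not the footbridge driver itself.

```c
#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Assumed bit layout; the register itself is ioremapped elsewhere. */
#define LED_GREEN   (1 << 0)
#define LED_AMBER   (1 << 1)
#define LED_RED     (1 << 2)

static void __iomem *led_reg;            /* write-only hardware register */
static DEFINE_SPINLOCK(led_lock);

/* Shadow of the last value written: the hardware cannot be read back. */
static u8 led_shadow = LED_GREEN | LED_AMBER | LED_RED;   /* all off (1 = off) */

static void led_set(u8 mask, bool on)
{
    unsigned long flags;

    spin_lock_irqsave(&led_lock, flags);
    if (on)
        led_shadow &= ~mask;    /* writing 0 turns the LED on */
    else
        led_shadow |= mask;     /* writing 1 turns the LED off */
    writeb(led_shadow, led_reg);    /* no read-modify-write possible */
    spin_unlock_irqrestore(&led_lock, flags);
}
```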
-
- 30 November 2013 (3 commits)
-
Submitted by Russell King

It's no good setting vga_base after the VGA console has been initialised, because if we do that we get this:

    Unable to handle kernel paging request at virtual address 000b8000
    pgd = c0004000
    [000b8000] *pgd=07ffc831, *pte=00000000, *ppte=00000000
    Internal error: Oops: 5017 [#1] ARM
    Modules linked in:
    CPU: 0 PID: 0 Comm: swapper Not tainted 3.12.0+ #49
    task: c03e2974 ti: c03d8000 task.ti: c03d8000
    PC is at vgacon_startup+0x258/0x39c
    LR is at request_resource+0x10/0x1c
    pc : [<c01725d0>]    lr : [<c0022b50>]    psr: 60000053
    sp : c03d9f68  ip : 000b8000  fp : c03d9f8c
    r10: 000055aa  r9 : 4401a103  r8 : ffffaa55
    r7 : c03e357c  r6 : c051b460  r5 : 000000ff  r4 : 000c0000
    r3 : 000b8000  r2 : c03e0514  r1 : 00000000  r0 : c0304971
    Flags: nZCv  IRQs on  FIQs off  Mode SVC_32  ISA ARM  Segment kernel

which is an access to 0xb8000 without the PCI offset required to make it work.

Fixes: cc22b4c1 ("ARM: set vga memory base at run-time")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: <stable@vger.kernel.org>
-
Submitted by Russell King

Commit f6f91b0d ("ARM: allow kuser helpers to be removed from the vector page") required two pages for the vectors code. Although the code setting up the initial page tables was updated, the code which allocates page tables for new processes wasn't, and neither was the code which tears down the mappings. Fix this.

Fixes: f6f91b0d ("ARM: allow kuser helpers to be removed from the vector page")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: <stable@vger.kernel.org>
-
Submitted by Russell King

Some buses have negative offsets, which causes the DMA mask checks to falsely fail. Fix this by using the actual amount of memory fitted in the system.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 November 2013 (4 commits)
-
Submitted by Catalin Marinas

PTE_PROT_NONE means that a pte is present but does not have any read/write attributes. However, setting the memory type like pgprot_writecombine() is allowed, and such bits overlap with PTE_PROT_NONE. This causes mmap/munmap issues in drivers that change the vma->vm_page_prot on PROT_NONE mappings.

This patch reverts the PTE_FILE/PTE_PROT_NONE shift in commit 59911ca4 ("ARM64: mm: Move PTE_PROT_NONE bit") and moves PTE_PROT_NONE together with the other software bits.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Steve Capper <steve.capper@linaro.org>
Cc: <stable@vger.kernel.org> # 3.11+
-
Submitted by Catalin Marinas

This provides better performance compared to Device GRE and also allows unaligned accesses. Such memory is intended to be used with standard RAM (e.g. framebuffers) and not I/O.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Matthew Leach

The current breakpoint instruction checking code for A32 is not endian clean. Fix this with appropriate byte-swapping when retrieving instructions.

Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
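For context: A32 instructions are stored little-endian in memory regardless of the kernel's data endianness, so on a big-endian kernel the fetched word must be byte-swapped before it is compared against an opcode. A generic sketch of that pattern, with the opcode mask/value and helper name chosen for illustration (this is not the actual kernel routine):

```c
#include <linux/types.h>
#include <linux/uaccess.h>
#include <asm/byteorder.h>

/* A32 BKPT encoding: cond 0001 0010 imm12 0111 imm4 (illustrative check). */
#define A32_BKPT_MASK   0xfff000f0
#define A32_BKPT_VAL    0xe1200070

static bool insn_is_bkpt(const u32 __user *pc)
{
    __le32 raw;
    u32 insn;

    /* The instruction stream is little-endian even on a BE kernel. */
    if (get_user(raw, (const __le32 __user *)pc))
        return false;

    insn = le32_to_cpu(raw);    /* no-op on LE, byte swap on BE kernels */
    return (insn & A32_BKPT_MASK) == A32_BKPT_VAL;
}
```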
-
Submitted by Matthew Leach

On a BE system the wrong half of the X registers is retrieved/written when attempting to get/set the value of aarch32 registers through ptrace. Ensure that types are the correct width so that the relevant casting occurs.

Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 26 November 2013 (7 commits)
-
Submitted by Stephen Warren

The I2C controller node needs #address-cells and #size-cells properties, but these are currently missing. Add them. This allows child nodes to be parsed correctly.

Cc: stable@vger.kernel.org
Signed-off-by: Stephen Warren <swarren@wwwdotorg.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
-
Submitted by Doug Anderson

Without the interrupt you'll get problems if you enable CONFIG_RTC_DRV_MAX77686. Set up the interrupt properly in the device tree.

Signed-off-by: Doug Anderson <dianders@chromium.org>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
Cc: stable@vger.kernel.org
-
Submitted by Dave Martin

This patch implements the power_down_finish() method for TC2, to enable the kernel to confirm when CPUs are safely powered down.

The information required for determining when a CPU is parked cannot be obtained from any single place, so a few sources of information must be combined:

* mcpm_cpu_power_down() must be pending for the CPU, so that we don't get confused by false STANDBYWFI positives arising from CPUidle. This is detected by waiting for the tc2_pm use count for the target CPU to reach 0.

* Either the SPC must report that the CPU has asserted STANDBYWFI, or the TC2 tile's reset control logic must be holding the CPU in reset.

Just checking for STANDBYWFI is not sufficient, because this signal is not latched when the cluster is clamped off and powered down: the relevant status bits just drop to zero. This means that STANDBYWFI status cannot be used for reliable detection of the last CPU in a cluster reaching WFI.

This patch is required in order for kexec to work with MCPM on TC2. MCPM code was changed in commit 0de0d646 ("ARM: 7848/1: mcpm: Implement cpu_kill() to synchronise on powerdown"), and since then it will hit a WARN_ON_ONCE() due to power_down_finish not being implemented on the TC2 platform.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Pawel Moll <pawel.moll@arm.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
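A simplified sketch of how such a "CPU really is down" check can combine the pieces described above: a use count that proves a power-down was requested, plus either a WFI status bit or a reset-held bit from the power controller. All helpers and the timeout below are assumptions for illustration; the real code lives in the TC2 platform and SPC drivers.

```c
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/types.h>

/* Assumed helpers standing in for the tc2_pm use counts and SPC/reset status. */
extern int  tc2_pm_use_count(unsigned int cpu, unsigned int cluster);
extern bool spc_cpu_in_wfi(unsigned int cpu, unsigned int cluster);
extern bool tile_cpu_reset_held(unsigned int cpu, unsigned int cluster);

#define POWERDOWN_TIMEOUT_MS    1000

static int example_power_down_finish(unsigned int cpu, unsigned int cluster)
{
    unsigned int waited;

    for (waited = 0; waited < POWERDOWN_TIMEOUT_MS; waited++) {
        /*
         * 1) A power-down must actually be pending for this CPU,
         *    otherwise a STANDBYWFI seen from CPUidle would fool us.
         */
        if (tc2_pm_use_count(cpu, cluster) != 0) {
            msleep(1);
            continue;
        }

        /*
         * 2) The CPU is parked if it has asserted STANDBYWFI, or if
         *    the tile's reset logic is holding it in reset (needed
         *    because STANDBYWFI is not latched across power-off).
         */
        if (spc_cpu_in_wfi(cpu, cluster) ||
            tile_cpu_reset_held(cpu, cluster))
            return 0;

        msleep(1);
    }

    return -ETIMEDOUT;
}
```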
-
Submitted by Olof Johansson

Some omap3 code is throwing a warning:

    arch/arm/mach-omap2/pm34xx.c: In function 'omap3_save_secure_ram_context':
    arch/arm/mach-omap2/pm34xx.c:123:32: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]

In reality this code will never actually execute with LPAE=y, since Cortex-A8 doesn't support it. So downcasting the __pa() is safe in this case.

Signed-off-by: Olof Johansson <olof@lixom.net>
Acked-by: Tony Lindgren <tony@atomide.com>
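The warning comes from converting a possibly 64-bit physical address (phys_addr_t under LPAE) straight to a 32-bit pointer. When the value is known to fit, the usual way to silence it is to go through an integer of pointer width first. A minimal illustration, with the function name assumed:

```c
#include <linux/types.h>

/* With CONFIG_ARM_LPAE, phys_addr_t is 64-bit even on a 32-bit kernel. */
static void *secure_ram_context_ptr(phys_addr_t pa)
{
    /*
     * "(void *)pa" would warn: "cast to pointer from integer of different
     * size". Casting through unsigned long makes the truncation explicit;
     * it is only valid because this path never runs on LPAE hardware.
     */
    return (void *)(unsigned long)pa;
}
```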
-
Submitted by Catalin Marinas

Asynchronous aborts are generally fatal for the kernel, but they can be masked via the pstate A bit. If a system error happens while in kernel mode, it won't be visible until returning to user space. This patch enables this kind of abort early to help identify the cause.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
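On AArch64 the asynchronous-abort (SError) mask is bit 2, the 'A' bit, of the DAIF flags, and it can be cleared from kernel C code with a single msr to DAIFClr. A hedged sketch of such a helper; the mainline arm64 kernel provides an equivalent in its irqflags header, and the name used here is only for illustration.

```c
/*
 * Unmask asynchronous aborts (SError) as early as possible so that a
 * pending system error is reported where it happened rather than on the
 * next return to user space. DAIF bit 2 is the 'A' mask; "daifclr, #4"
 * clears exactly that bit.
 */
static inline void async_abort_enable(void)
{
    asm volatile("msr daifclr, #4" ::: "memory");
}
```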
-
Submitted by Catalin Marinas

With the spin-table SMP booting method, secondary CPUs poll a location passed in the DT. The foundation-v8.dts file doesn't have this memory reserved and there is a risk of Linux using it before the secondary CPUs are started.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Marc Zyngier

Commit f27dde8d ("sched: Add NEED_RESCHED to the preempt_count") introduced the use of bit 31 in preempt_count for obscure scheduling purposes. This causes interrupts taken from EL0 to hit the (open coded) BUG when this flag is flipped while handling the interrupt (we compare the values before and after, and kill the kernel if they are different).

The fix is to stop messing with the preempt count entirely, as this is already being dealt with in the generic code (irq_enter/irq_exit). Tested on a dual A53 FPGA running cyclictest.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 November 2013 (9 commits)
-
Submitted by Martin Schwidefsky

Git commit 9e34f2686bb088b211b6cac8772e1f644c6180f8 "s390/mm,tlb: tlb flush on page table upgrade fixup" removed the exception handler for the asce-type exception. This is incorrect, as the user copy with MVCOS can cause asce-type exceptions in the kernel if a user pointer is too large. Those need to be handled with do_no_context to branch to the fixup in the user-copy code. The simplest fix for this problem is to call do_dat_exception for asce-type exceptions; as there is no vma for the address, the code will handle the exception correctly.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Martin Schwidefsky

Git commit 4f37a68c "s390: Use direct ktime path for s390 clockevent device" makes use of the CLOCK_EVT_FEAT_KTIME clockevent option to avoid the delta calculation with ktime_get() in clockevents_program_event and the get_tod_clock() in s390_next_event. This is based on the assumption that the difference between the internal ktime and the hardware clock is reflected in the wall_to_monotonic delta. But this is not true: the ntp corrections are applied via changes to the tk->mult multiplier, and this is not reflected in wall_to_monotonic. In theory this could be solved by using the raw monotonic clock, but it is simpler to switch back to the standard clock delta calculation.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Martin Schwidefsky

Switch to the improved update_vsyscall interface that provides sub-nanosecond precision for gettimeofday and clock_gettime.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Heiko Carstens

When translating a user space address, the address must be checked against the ASCE limit of the process. If the address is larger than the maximum address that is reachable with the ASCE, an ASCE-type exception must be generated. The current code simply ignored the higher-order bits. This resulted in an address wrap-around in user space instead of an exception in user space.

Cc: stable@vger.kernel.org # v3.9+
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Submitted by Chen Gang

For tmp_part->header.name, the terminating NUL is "required only for names < 12 chars", so the printk needs to limit it with %.12s.

Additional info: %12s limits the field width, not the output length of the original string. If the name is longer than 12 characters it is still displayed in full; if it is shorter than 12, spaces are padded before the name. %.12s truly limits the output length (precision) of the original string.

Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
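A small user-space illustration of the width-versus-precision distinction described above, using a deliberately non-NUL-terminated 12-byte name field (the values are made up for the example):

```c
#include <stdio.h>

int main(void)
{
    /* Exactly 12 bytes, no terminating NUL, like a fixed-size name field. */
    char name[12] = { 'f', 'l', 'a', 's', 'h', '0', ' ',
                      'p', 'a', 'r', 't', 'x' };

    /* "%12s" only pads to a minimum width; it keeps reading past the
     * 12 bytes until it finds a NUL, which is undefined behaviour here:
     *
     *   printf("[%12s]\n", name);
     */

    /* "%.12s" caps the number of characters printed at 12, so a
     * non-terminated fixed-size field is printed safely. */
    printf("[%.12s]\n", name);      /* prints: [flash0 partx] */

    char shortname[12] = "led";
    printf("[%12s]\n", shortname);  /* width: pads -> [         led] */
    printf("[%.12s]\n", shortname); /* precision: just [led] */

    return 0;
}
```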
-
Submitted by Michael Neuling

In a recent patch:

    commit c13f20ac
    Author: Michael Neuling <mikey@neuling.org>
    powerpc/signals: Mark VSX not saved with small contexts

we fixed an issue, but an improved solution was discussed after the patch was merged. Firstly, that patch doesn't handle the 64-bit signals case, which could also hit this issue (but has never been reported). Secondly, the original patch isn't clear what MSR VSX should be set to.

The new approach below always clears the MSR VSX bit (to indicate no VSX is in the context) and sets it only in the specific case where VSX is available (ie. when VSX has been used and the signal context passed has space to provide the state).

This reverts the original patch and replaces it with the improved solution. It also adds a 64-bit version.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Cc: stable@vger.kernel.org
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Hari Bathini

When the CONFIG_SPARSEMEM_VMEMMAP option is used in the kernel, makedumpfile fails to filter the vmcore dump because it cannot do vmemmap translations. So far, dump filtering on ppc64 never had to deal with vmemmap addresses separately, as vmemmap regions were mapped in zone normal. But with the inclusion of the CONFIG_SPARSEMEM_VMEMMAP config option in the kernel, this vmemmap address translation support becomes necessary for dump filtering.

For vmemmap address translation, a few kernel symbols are needed by the dump filtering tool. This patch adds those symbols to vmcoreinfo, which a dump filtering tool can use for filtering the kernel dump. Tested these changes successfully with a makedumpfile tool that supports vmemmap-to-physical address translation outside zone normal.

[ Removed unneeded #ifdef as suggested by Michael Ellerman --BenH ]
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
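Exporting extra symbols to vmcoreinfo is done with the VMCOREINFO_* macros from the kexec headers, typically from the architecture's arch_crash_save_vmcoreinfo(). The sketch below shows the general pattern only; the particular symbol names are assumptions and not necessarily the exact set this patch exported.

```c
#include <linux/kexec.h>    /* VMCOREINFO_SYMBOL(), arch_crash_save_vmcoreinfo() */

void arch_crash_save_vmcoreinfo(void)
{
#ifdef CONFIG_SPARSEMEM_VMEMMAP
    /*
     * Symbols a dump-filtering tool (e.g. makedumpfile) can use to walk
     * the vmemmap mappings; the names here are illustrative assumptions.
     */
    VMCOREINFO_SYMBOL(vmemmap_list);
    VMCOREINFO_SYMBOL(mmu_psize_defs);
    VMCOREINFO_SYMBOL(mmu_vmemmap_psize);
#endif
}
```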
-
Submitted by Anton Blanchard

Stephen reported a failure in an allyesconfig build: CONFIG_CPU_LITTLE_ENDIAN=y gets set but his toolchain is not new enough to support little endian. We really want to default to a big endian build; Ben suggested using a choice which defaults to CPU_BIG_ENDIAN.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Michael Neuling

Currently if I cross build TAGS or cscope from x86 I get this:

    % make ARCH=powerpc TAGS
    gcc-4.8.real: error: unrecognized command line option ‘-mbig-endian’
      GEN     TAGS
    %

I'm not setting CROSS_COMPILE=, as logically I shouldn't need to and I haven't needed to in the past when building TAGS or cscope. Also, the above completes correctly, as the error is not fatal to the build.

This was caused by:

    commit d72b0801
    Author: Ian Munsie <imunsie@au1.ibm.com>
    powerpc: Add ability to build little endian kernels

The fix below tests for the -mbig-endian option before adding it. I've not done the same thing in the little endian case: if -mlittle-endian doesn't exist, we probably want to fail quickly, as you probably have an old big endian compiler.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 23 November 2013 (4 commits)
-
Submitted by Scott Wood

And in flush_hugetlb_page(), don't check whether vma is NULL after we've already dereferenced it.

This was found by Dan using static analysis as described here: https://lists.ozlabs.org/pipermail/linuxppc-dev/2013-November/113161.html

We currently get away with this because the callers that currently pass NULL for vma seem to be 32-bit-only (e.g. highmem, and CONFIG_DEBUG_PGALLOC in pgtable_32.c). Hugetlb is currently 64-bit only, so we never saw a NULL vma here.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
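The bug class here is the classic "dereference before NULL check": once a pointer has been dereferenced, a later NULL test is dead code, and it is exactly the kind of contradiction static analysers flag. A generic before/after sketch, with the struct simplified and the helper names assumed:

```c
#include <stddef.h>

/* Simplified stand-in for the kernel's struct, for illustration only. */
struct vm_area_struct { unsigned long vm_flags; };

extern void flush_range(unsigned long addr);
extern void flush_huge_range(struct vm_area_struct *vma, unsigned long addr);

/* Buggy shape: vma is dereferenced first, so the NULL check never helps. */
void flush_page_buggy(struct vm_area_struct *vma, unsigned long addr)
{
    unsigned long flags = vma->vm_flags;    /* crashes here if vma == NULL */

    if (vma == NULL || !(flags & 0x1))
        flush_range(addr);
    else
        flush_huge_range(vma, addr);
}

/* Fixed shape: test the pointer before touching anything behind it. */
void flush_page_fixed(struct vm_area_struct *vma, unsigned long addr)
{
    if (vma == NULL || !(vma->vm_flags & 0x1)) {
        flush_range(addr);
        return;
    }
    flush_huge_range(vma, addr);
}
```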
-
Submitted by Adam Borowski

These lines were inoperative for four years, which puts some doubt on their importance, and it's possible the fixed version will regress; but at the very least they should be removed rather than left as they are.

Signed-off-by: Adam Borowski <kilobyte@angband.pl>
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Submitted by LEROY Christophe

Commit beb2dc0a breaks the MPC8xx, which seems not to support using mfspr SPRN_TBRx instead of mftb/mftbu, despite what is written in the reference manual. This patch reverts to the use of mftb/mftbu when CONFIG_8xx is selected.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Submitted by Tiejun Chen

If CONFIG_ALTIVEC is enabled for CoreNet64, and if we also select CONFIG_E{5,6}500_CPU, this may introduce -mcpu=e500mc64 into $CFLAGS. But the AltiVec option is not allowed with e500mc64, so compile errors like the following occur:

    CC      arch/powerpc/lib/xor_vmx.o
    arch/powerpc/lib/xor_vmx.c:1:0: error: AltiVec not supported in this target
    make[1]: *** [arch/powerpc/lib/xor_vmx.o] Error 1
    make: *** [arch/powerpc/lib] Error 2

So we should restrict e500mc64 in the AltiVec scenario.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
- 22 November 2013 (1 commit)
-
Submitted by Kirill A. Shutemov

There are two code paths through which a page with a pmd page table can be freed: pmd_free() and pmd_free_tlb(). I missed the second one and didn't add the page table destructor call there. This leads to a leak of page->ptl for pmd page tables, if dynamically allocated page->ptl is in use.

The patch adds the missed destructor and modifies the documentation accordingly.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Andrey Vagin <avagin@openvz.org>
Tested-by: Andrey Vagin <avagin@openvz.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
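The invariant being restored is that every pmd page-table page that went through the constructor (which may allocate a split page-table lock) must go through the matching destructor on every free path, including the TLB-batched one. A schematic sketch with the call sites simplified; the function names example_* are assumptions, only pgtable_pmd_page_dtor(), tlb_remove_page() and friends are real kernel interfaces.

```c
#include <linux/mm.h>
#include <asm/tlb.h>

/*
 * Simplified shape of the two pmd-freeing paths; both must call the
 * destructor so that a dynamically allocated page->ptl is released.
 */
static void example_pmd_free(pmd_t *pmd)
{
    struct page *page = virt_to_page(pmd);

    pgtable_pmd_page_dtor(page);    /* frees the split ptl, if any */
    __free_page(page);
}

static void example_pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
{
    struct page *page = virt_to_page(pmd);

    /* The call that was missing: without it, page->ptl leaks. */
    pgtable_pmd_page_dtor(page);
    tlb_remove_page(tlb, page);     /* deferred, TLB-batched free */
}
```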
-
- 21 November 2013 (4 commits)
-
Submitted by Michael Neuling

The VSX MSR bit in the user context indicates if the context contains VSX state. Currently we set this when the process has touched VSX at any stage. Unfortunately, if the user has not provided enough space to save the VSX state, we can't save it, but we currently still set the MSR VSX bit. This patch changes this to clear the MSR VSX bit when the user doesn't provide enough space. This indicates that there is no valid VSX state in the user context.

This is needed to support get/set/make/swapcontext for applications that use VSX but only provide a small context. For example, getcontext in glibc provides a smaller context since the VSX registers don't need to be saved over the glibc function call. But since the program calling getcontext may have used VSX, the kernel currently says the VSX state is valid when it's not. If the returned context is then used in setcontext (ie. a small context without VSX but with MSR VSX set), the kernel will refuse the context. This situation has been reported by the glibc community.

Based on a patch from Carlos O'Donell.

Tested-by: Haren Myneni <haren@linux.vnet.ibm.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Cc: stable@vger.kernel.org
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
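The rule the patch enforces can be stated in a few lines of C: start from "no VSX in this context", and only advertise VSX when the task has actually used it and the supplied context is large enough to hold the state. A schematic sketch with the helper and size names assumed (MSR_VSX itself is the real powerpc MSR bit):

```c
#include <linux/types.h>
#include <asm/reg.h>        /* MSR_VSX */

/* Assumed size/helper, for illustration only. */
#define FULL_CONTEXT_SIZE   1696    /* placeholder: a size that includes VSX */
extern bool task_used_vsx(void);

static unsigned long context_msr(unsigned long msr, size_t ctx_size)
{
    /* Default: no valid VSX state in the user context. */
    msr &= ~MSR_VSX;

    /* Advertise VSX only if it was used AND there is room to save it. */
    if (task_used_vsx() && ctx_size >= FULL_CONTEXT_SIZE)
        msr |= MSR_VSX;

    return msr;
}
```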
-
Submitted by Michael Ellerman

In commit a489043f "Implement arch_get_random_long() based on H_RANDOM" I broke the SMP=n build. We were getting plpar_wrappers.h via spinlock.h, which breaks when SMP=n.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Michael Ellerman

Up until now we have only used cpu_to_chip_id() in the topology code, which is only used on SMP builds. However my recent commit a4da0d50 "Implement arch_get_random_long/int() for powernv" added a usage when SMP=n, breaking the build.

Move cpu_to_chip_id() into prom.c so it is available for SMP=n builds. We would move the extern to prom.h, but that breaks the include in topology.h. Instead we leave it in smp.h, but move it out of the CONFIG_SMP #ifdef. We also need to include asm/smp.h in rng.c, because the linux version skips asm/smp.h on UP. What a mess.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Li Zhong

I encountered the following issue:

    [ 0.283035] ibmvscsi 30000015: couldn't initialize event pool
    [ 5.688822] ibmvscsi: probe of 30000015 failed with error -1

which prevents the storage from being recognized, and the machine from booting.

After some digging, it seems that it is caused by commit 4886c399, as the dma_mask pointer in viodev->dev is not set, so in dma_set_mask_and_coherent(), dma_set_coherent_mask() is not called because dma_set_mask(), which is dma_set_mask_pSeriesLP(), returned EIO. Before the commit, dma_set_coherent_mask() was always called.

I tried to replace dma_set_mask_and_coherent() with dma_coerce_mask_and_coherent(), and the machine could boot again.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
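For context on the two helpers mentioned above: dma_set_mask_and_coherent() fails when dev->dma_mask does not point at valid storage (the streaming-mask set bails out, so the coherent mask is never touched), while dma_coerce_mask_and_coherent() first forces dev->dma_mask to point at the device's coherent mask and then sets both. A hedged sketch of how a bus or driver might fall back to the latter for a device whose dma_mask pointer was never wired up; the surrounding function is an assumption, the DMA API calls are real.

```c
#include <linux/device.h>
#include <linux/dma-mapping.h>

static int example_setup_dma(struct device *dev)
{
    int ret;

    /*
     * dma_set_mask_and_coherent() requires dev->dma_mask to point at
     * valid storage; if the bus never set it up, dma_set_mask() fails
     * and the coherent mask is left untouched.
     */
    ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
    if (ret) {
        /*
         * dma_coerce_mask_and_coherent() points dev->dma_mask at
         * dev->coherent_dma_mask before setting both masks, which
         * papers over a missing dma_mask pointer.
         */
        ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64));
    }

    return ret;
}
```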
-