- 17 Feb 2016, 4 commits
-
Committed by Kevin Cernekee

We can still override these settings via mach/memory.h, but let's provide sensible defaults so that SPARSEMEM is available in the multiplatform kernels.

Two platforms currently use SECTION_SIZE_BITS < 28, but are expected to work with 28 (albeit slightly less efficiently if not all banks are populated):

 - mach-rpc: uses 26 bits. Based on mach/hardware.h it looks like this platform puts RAM at 0x1000_0000 - 0x1fff_ffff, and I/O below 0x1000_0000.

 - mach-sa1100: uses 27 bits. mach/memory.h indicates that RAM occupies the entire range of 0xc000_0000 - 0xdfff_ffff.

But Arnd says that rpc and sa1100 will never have to use the default, since they cannot be part of a multiplatform kernel, and that is unlikely to change.

Several platforms need MAX_PHYSMEM_BITS >= 36, so we'll pick that as the minimum. Anything higher and we'll fail the SECTIONS_WIDTH + NODES_WIDTH + ZONES_WIDTH test in <linux/mm.h>.

Some analysis from Russell King at http://lists.infradead.org/pipermail/linux-arm-kernel/2014-October/298957.html:

  I think this is fine as far as it goes - this means we end up with 256 entries in the mem_section array, which means it occupies one page, which I think is acceptable overhead.

  The other thing to be aware of here is the obvious:

    #if (MAX_ORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS
    #error Allocator MAX_ORDER exceeds SECTION_SIZE
    #endif

  which means that with 28 bits of section, the maximum allocator order is 16. We appear to allow FORCE_MAX_ZONEORDER to be set up to 64 in the case of shmobile, which doesn't seem like a sensible upper limit - and certainly isn't when sparsemem is enabled.

  Given this, I think that FORCE_MAX_ZONEORDER's help text and dependencies could do with some improvement to make the issues more transparent.

[gregory.0xf0: added notes from Arnd and Russell]

Signed-off-by: Kevin Cernekee <cernekee@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Gregory Fong <gregory.0xf0@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
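A minimal sketch of what such overridable defaults could look like in an ARM header (the exact file and guard style are assumptions based on the description above):

    /* Sensible SPARSEMEM defaults; mach/memory.h may still override them. */
    #ifndef SECTION_SIZE_BITS
    #define SECTION_SIZE_BITS	28	/* 256 MB sections */
    #endif

    #ifndef MAX_PHYSMEM_BITS
    #define MAX_PHYSMEM_BITS	36	/* up to 64 GB of physical address space */
    #endif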
-
Committed by Masahiro Yamada

These two targets were introduced by commit 13d5fadf ("[ARM] Make 'i' and 'zi' targets work") to short-circuit the dependencies for 'install' and 'zinstall'. After that, commit 19514fc6 ('arm, kbuild: make "make install" not depend on vmlinux') eventually made "(z)install" equivalent to "(z)i".

It is true that 'i' and 'zi' might still be useful as shorthands, but the original intention has already been lost. They do not even show up in "make ARCH=arm help", so I hope this deletion does not have much impact.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Masahiro Yamada

"PHONY += FORCE" is already taken care of by scripts/Makefile.build, from which this file is included.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Jean-Philippe Brucker

ARMv8 introduces system registers for the Generic Interrupt Controller's CPU and virtual interfaces. When GICv3 is implemented, EL2 needs to allow the kernel to use those registers, by changing the value of ICC_HSRE.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 11 Feb 2016, 10 commits
-
Committed by Kees Cook

When rodata is large enough that it crosses a section boundary after the kernel text, mark the rest NX. This is as close to full NX of rodata as we can get without splitting page tables or doing section alignment via CONFIG_DEBUG_ALIGN_RODATA.

When the config is:

    CONFIG_DEBUG_RODATA=y
    # CONFIG_DEBUG_ALIGN_RODATA is not set

Before:

    ---[ Kernel Mapping ]---
    0x80000000-0x80100000       1M  RW NX SHD
    0x80100000-0x80a00000       9M  ro x  SHD
    0x80a00000-0xa0000000     502M  RW NX SHD

After:

    ---[ Kernel Mapping ]---
    0x80000000-0x80100000       1M  RW NX SHD
    0x80100000-0x80700000       6M  ro x  SHD
    0x80700000-0x80a00000       3M  ro NX SHD
    0x80a00000-0xa0000000     502M  RW NX SHD

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Chris Brandt

For an XIP build, _etext does not represent the end of the binary image that needs to stay mapped into the MODULES_VADDR area. Years ago, data came before text in the memory map. However, now that the order is text/init/data, an XIP_KERNEL needs to map up to the data location in order to keep from cutting off parts of the kernel that are needed.

We only map up to the beginning of data because data has already been copied, so there's no reason to keep it around anymore.

A new symbol is created to make it clear what it is we are referring to.

This fixes the bug where you might lose the end of your kernel area after page table setup is complete.

Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
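The commit text above doesn't name the new symbol, so the names below are purely hypothetical; the idea is a linker-provided marker at the start of the already-copied data, used by the mapping code in place of _etext:

    /* Hypothetical linker-provided symbol: end of the XIP image that must
     * stay mapped (text + init), i.e. the start of the copied data. */
    extern char _end_of_xip_image[];

    /* Sketch: size the ROM mapping from the new symbol, not from _etext
     * ('map' stands in for a struct map_desc being filled in). */
    map.length = (unsigned long)_end_of_xip_image - map.virtual;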
-
Committed by Ard Biesheuvel

Commit b9b32bf7 ("ARM: use linker magic for vectors and vector stubs") updated the linker script to emit the .vectors and .stubs sections into a VMA range that is zero based and disjoint from the normal static kernel region. The reason for that was that this way, the sections can be placed exactly 4 KB apart, while the payload of the .vectors section is only 32 bytes.

Since the symbols that are part of the .stubs section are emitted into the kallsyms table, they appear with zero based addresses as well, e.g.,

    00001004 t vector_rst
    00001020 t vector_irq
    000010a0 t vector_dabt
    00001120 t vector_pabt
    000011a0 t vector_und
    00001220 t vector_addrexcptn
    00001240 t vector_fiq
    00001240 T vector_fiq_offset

As this confuses perf when it accesses the kallsyms tables, commit 7122c3e9 ("scripts/link-vmlinux.sh: only filter kernel symbols for arm") implemented a somewhat ugly special case for ARM, where the value of CONFIG_PAGE_OFFSET is passed to scripts/kallsyms, and symbols whose addresses are below it are filtered out. Note that this special case only applies to CONFIG_XIP_KERNEL=n, not because the issue the patch addresses exists only in that case, but because finding a limit below which to apply the filtering is not entirely straightforward.

Since the .vectors and .stubs sections contain position independent code that is never executed in place, we can emit it at its most likely runtime VMA (for more recent CPUs), which is 0xffff0000 for the vector table and 0xffff1000 for the stubs. Not only does this fix the perf issue with kallsyms, allowing us to drop the special case in scripts/kallsyms entirely, it also gives debuggers a more realistic view of the address space, and setting breakpoints or single stepping through code in the vector table or the stubs is more likely to work as expected on CPUs that use a high vector address. E.g.,

    00001240 A vector_fiq_offset
    ...
    c0c35000 T __init_begin
    c0c35000 T __vectors_start
    c0c35020 T __stubs_start
    c0c35020 T __vectors_end
    c0c352e0 T _sinittext
    c0c352e0 T __stubs_end
    ...
    ffff1004 t vector_rst
    ffff1020 t vector_irq
    ffff10a0 t vector_dabt
    ffff1120 t vector_pabt
    ffff11a0 t vector_und
    ffff1220 t vector_addrexcptn
    ffff1240 T vector_fiq

(Note that vector_fiq_offset is now an absolute symbol, which kallsyms already ignores by default.)

The LMA footprint is identical with or without this change, only the VMAs are different:

Before:

    Idx Name          Size      VMA       LMA       File off  Algn
    ...
     14 .notes        00000024  c0c34020  c0c34020  00a34020  2**2
                      CONTENTS, ALLOC, LOAD, READONLY, CODE
     15 .vectors      00000020  00000000  c0c35000  00a40000  2**1
                      CONTENTS, ALLOC, LOAD, READONLY, CODE
     16 .stubs        000002c0  00001000  c0c35020  00a41000  2**5
                      CONTENTS, ALLOC, LOAD, READONLY, CODE
     17 .init.text    0006b1b8  c0c352e0  c0c352e0  00a452e0  2**5
                      CONTENTS, ALLOC, LOAD, READONLY, CODE
    ...

After:

    Idx Name          Size      VMA       LMA       File off  Algn
    ...
     14 .notes        00000024  c0c34020  c0c34020  00a34020  2**2
                      CONTENTS, ALLOC, LOAD, READONLY, CODE
     15 .vectors      00000020  ffff0000  c0c35000  00a40000  2**1
                      CONTENTS, ALLOC, LOAD, READONLY, CODE
     16 .stubs        000002c0  ffff1000  c0c35020  00a41000  2**5
                      CONTENTS, ALLOC, LOAD, READONLY, CODE
     17 .init.text    0006b1b8  c0c352e0  c0c352e0  00a452e0  2**5
                      CONTENTS, ALLOC, LOAD, READONLY, CODE
    ...

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Chris Brandt <chris.brandt@renesas.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Ard Biesheuvel

Commit b9b32bf7 ("ARM: use linker magic for vectors and vector stubs") introduced new global definitions of __vectors_start and __stubs_start, and changed the existing ones to have internal linkage only. However, these symbols are still visible to kallsyms, and due to the way the .vectors and .stubs sections are emitted at the base of the VMA space, these duplicate definitions have conflicting values.

    $ nm -n vmlinux | grep -E '__vectors|__stubs'
    00000000 t __vectors_start
    00001000 t __stubs_start
    c0e77000 T __vectors_start
    c0e77020 T __stubs_start

This is completely harmless by itself, since the wrong values are local symbols that cannot be referenced by other object files directly. However, since these symbols are also listed in the kallsyms symbol table in some cases (i.e., CONFIG_KALLSYMS_ALL=y and CONFIG_XIP_KERNEL=y), having these conflicting values can be confusing. So either remove them, or make them strictly local.

Acked-by: Chris Brandt <chris.brandt@renesas.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Chris Brandt

When building an XIP kernel, the linker script needs to be much different from a conventional kernel's script. Over time, it has been difficult to maintain both XIP and non-XIP layouts in one linker script. Therefore, this patch separates the two procedures into two completely different files.

The new linker script is essentially a straight copy of the current script with all the non-CONFIG_XIP_KERNEL portions removed. Additionally, all CONFIG_XIP_KERNEL portions have been removed from the existing linker script...never to return again.

It should be noted that this does not fix any current XIP issues, but rather is the first move in fixing them properly with subsequent patches.

Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Lorenzo Pieralisi

The ARM64 PSCI kernel interfaces that initialize idle states and implement the suspend API to enter them are generic and can be shared with the ARM architecture. To achieve that goal, this patch moves the ARM64 PSCI idle management code to drivers/firmware, so that the interface to initialize and enter idle states can actually be shared by the ARM and ARM64 arch back-ends.

The ARM generic CPUidle implementation also requires the definition of a cpuidle_ops section entry for the kernel to initialize the CPUidle operations at boot based on the enable-method (i.e. ARM64 has the statically initialized cpu_ops counterparts for that purpose); therefore this patch also adds the required section entry on CONFIG_ARM for PSCI so that the kernel can initialize the PSCI CPUidle back-end when PSCI is the probed enable-method.

On ARM64 this patch provides no functional change.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arch/arm64]
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jisheng Zhang <jszhang@marvell.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
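A sketch of the kind of section entry this adds on ARM, assuming the CPUIDLE_METHOD_OF_DECLARE() registration macro; the ops and callback names here are illustrative, not confirmed by the text above:

    /* Register PSCI CPUidle ops keyed on the "psci" enable-method so the
     * ARM generic CPUidle code can pick them up at boot. */
    static const struct cpuidle_ops psci_cpuidle_ops = {
            .suspend = psci_cpuidle_suspend,   /* assumed callback names */
            .init    = psci_cpuidle_init,
    };
    CPUIDLE_METHOD_OF_DECLARE(psci, "psci", &psci_cpuidle_ops);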
-
Committed by Lorenzo Pieralisi

The code enabled by the ARM_CPU_SUSPEND config option is used by kernel subsystems for purposes that go beyond system suspend, so its config entry should be augmented to take more default options into account, and its selection should not be forced in a way that overrides its dependencies.

To achieve this goal, this patch reworks the ARM_CPU_SUSPEND config entry and updates its default config value (by adding the BL_SWITCHER option to it) and its dependencies (ARCH_SUSPEND_POSSIBLE), so that the symbol is still selected by default by the subsystems requiring it, while at the same time enforcing the dependencies correctly.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Doug Anderson

If we know that TLB efficiency will not be an issue when memory is accessed, then it's not terribly important to allocate big chunks of memory. The whole point of allocating the big chunks was to make TLB usage efficient. As Marek Szyprowski indicated:

    Please note that mapping memory with larger pages significantly
    improves performance, especially when IOMMU has a little TLB
    cache. This can be easily observed when multimedia devices do
    processing of RGB data with 90/270 degree rotation.

Image rotation is distinctly an operation that needs to bounce around through memory, so it makes sense that TLB efficiency is important there. Video decoding, on the other hand, is a fairly sequential operation. During video decoding it's not expected that we'll be jumping all over memory. Decoding video is also pretty heavy, and the TLB misses aren't a huge deal. Presumably most HW video acceleration users of dma-mapping will not care about huge pages and will set DMA_ATTR_ALLOC_SINGLE_PAGES.

Allocating big chunks of memory is quite expensive, especially if we're doing it repeatedly and memory is full. In one (out of tree) usage model it is common that arm_iommu_alloc_attrs() is called 16 times in a row, each call trying to allocate 4 MB of memory. This happens whenever the system encounters a new video, which could easily occur while the memory system is stressed out. In fact, on certain social media websites that auto-play video and have infinite scrolling, it's quite common to see not just one of these 16x4MB allocations but 2 or 3 right after another. Asking the system to do even a small amount of extra work to give us big chunks in this case is just not a good use of time.

Allocating big chunks of memory is also expensive indirectly. Even if we ask the system not to do ANY extra work to allocate _our_ memory, we're still potentially eating up all the big chunks in the system. Presumably there are other users in the system that aren't quite as flexible and that actually need these big chunks. By eating all the big chunks we're causing extra work for the rest of the system. We also may start making other memory allocations fail. While the system may be robust to such failures (as is the case with dwc2 USB trying to allocate buffers for Ethernet data and with WiFi trying to allocate buffers for WiFi data), it is yet another big performance hit.

Signed-off-by: Douglas Anderson <dianders@chromium.org>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
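A sketch of how a caller might pass the hint, using the struct dma_attrs API of that era; the device and size are illustrative:

    #include <linux/dma-attrs.h>
    #include <linux/dma-mapping.h>

    DEFINE_DMA_ATTRS(attrs);
    dma_addr_t dma_handle;
    void *buf;

    /* Tell the IOMMU mapper that TLB efficiency doesn't matter for this
     * buffer, so it shouldn't burn time hunting for big contiguous chunks. */
    dma_set_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, &attrs);
    buf = dma_alloc_attrs(dev, SZ_4M, &dma_handle, GFP_KERNEL, &attrs);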
-
Committed by Doug Anderson

The __iommu_alloc_buffer() function is expected to be called to allocate pretty sizeable buffers. Upon simple tests of video I saw it trying to allocate 4,194,304 bytes. The function tries to allocate large chunks in order to optimize IOMMU TLB usage.

The current function is very, very slow. One problem is the way it keeps trying and trying to allocate big chunks. Imagine a very fragmented memory that has 4M free but no contiguous pages at all. Further imagine allocating 4M (1024 pages). We'll do the following memory allocations:

 - For page 1:
   - Try to allocate order 10 (no retry)
   - Try to allocate order 9 (no retry)
   - ...
   - Try to allocate order 0 (with retry, but not needed)
 - For page 2:
   - Try to allocate order 9 (no retry)
   - Try to allocate order 8 (no retry)
   - ...
   - Try to allocate order 0 (with retry, but not needed)
 - ...
 - ...

The total number of calls to alloc() for this case is:

    sum(int(math.log(i, 2)) + 1 for i in range(1, 1025)) => 9228

The above is obviously the worst case, but given how slow alloc can be, we really want to try to avoid even somewhat bad cases. I timed the old code with a device under memory pressure and it wasn't hard to see it take more than 120 seconds to allocate 4 megs of memory! (NOTE: testing was done on kernel 3.14, so possibly mainline would behave differently.)

A second problem is that allocating big chunks under memory pressure when we don't need them is just not a great idea anyway. We can make do pretty well with smaller chunks, so it's probably wise to leave the bigger chunks for other users once memory pressure is on.

Let's adjust the allocation like this (see the sketch below):

 1. If a big chunk fails, stop trying so hard and bump down to lower order allocations.
 2. Don't try useless orders. The whole point of big chunks is to optimize the TLB, and it can really only make use of 2M, 1M, 64K and 4K sizes.

We'll still tend to eat up a bunch of big chunks, but that might be the right answer for some users. A future patch could possibly add a new DMA_ATTR that would let the caller decide that TLB optimization isn't important and that we should use smaller chunks. Presumably this would be a sane strategy for some callers.

Signed-off-by: Douglas Anderson <dianders@chromium.org>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Tested-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
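A sketch of the resulting strategy; on a 4K-page system the 2M/1M/64K/4K sizes named above are orders 9, 8, 4 and 0, but the function shape and names here are illustrative, not the actual kernel code:

    /* Only the orders the IOMMU TLB can actually exploit, largest first. */
    static const unsigned int iommu_order_array[] = { 9, 8, 4, 0 };

    static int alloc_sketch(struct page **pages, size_t count, gfp_t gfp)
    {
            unsigned int order_idx = 0, i = 0;

            while (count) {
                    unsigned int order = iommu_order_array[order_idx];
                    struct page *page;

                    /* Don't reclaim for big chunks; fall back instead. */
                    page = alloc_pages(gfp | (order ? __GFP_NORETRY : 0), order);
                    if (!page) {
                            if (order) {
                                    order_idx++;    /* never retry a bigger order */
                                    continue;
                            }
                            return -ENOMEM;         /* even order 0 failed */
                    }
                    /* Real code would also clamp 'order' to the remaining
                     * count and split_page() the result. */
                    pages[i++] = page;
                    count -= min_t(size_t, count, 1UL << order);
            }
            return 0;
    }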
-
Committed by Nicolas Pitre

The tmp variable is used twice: first to pose as a register containing a value of zero, and then to provide a temporary register that is initially zero and gets some value added to it. But somehow gcc decides to split those two usages into different registers.

Example code:

    u64 div64const1000(u64 x)
    {
            u32 y = 1000;
            do_div(x, y);
            return x;
    }

Result:

    div64const1000:
            push    {r4, r5, r6, r7, lr}
            mov     lr, #0
            mov     r6, r0
            mov     r7, r1
            adr     r5, .L8
            ldrd    r4, [r5]
            mov     r1, lr
            umull   r2, r3, r4, r6
            cmn     r2, r4
            adcs    r3, r3, r5
            adc     r2, lr, #0
            umlal   r3, r2, r5, r6
            umlal   r3, r1, r4, r7
            mov     r3, #0
            adds    r2, r1, r2
            adc     r3, r3, #0
            umlal   r2, r3, r5, r7
            lsr     r0, r2, #9
            lsr     r1, r3, #9
            orr     r0, r0, r3, lsl #23
            pop     {r4, r5, r6, r7, pc}
            .align  3
    .L8:
            .word   -1924145349
            .word   -2095944041

Full kernel build size:

       text    data     bss     dec     hex filename
    13663814 1553940  351368 15569122  ed90e2 vmlinux

Here the two instances of 'tmp' are assigned to r1 and lr.

To avoid that, let's mark the first 'tmp' usage in __arch_xprod_64() with a "+r" constraint even if the register is not written to, so as to create a dependency for the second usage, with the effect of enforcing a single temporary register throughout.

Result:

    div64const1000:
            push    {r4, r5, r6, r7}
            movs    r3, #0
            adr     r5, .L8
            ldrd    r4, [r5]
            umull   r6, r7, r4, r0
            cmn     r6, r4
            adcs    r7, r7, r5
            adc     r6, r3, #0
            umlal   r7, r6, r5, r0
            umlal   r7, r3, r4, r1
            mov     r7, #0
            adds    r6, r3, r6
            adc     r7, r7, #0
            umlal   r6, r7, r5, r1
            lsr     r0, r6, #9
            lsr     r1, r7, #9
            orr     r0, r0, r7, lsl #23
            pop     {r4, r5, r6, r7}
            bx      lr
            .align  3
    .L8:
            .word   -1924145349
            .word   -2095944041

       text    data     bss     dec     hex filename
    13663438 1553940  351368 15568746  ed8f6a vmlinux

This time 'tmp' is assigned to r3 and used throughout. However, by being assigned to r3, that blocks usage of the r2-r3 double register slot for 64-bit values, forcing more registers to be spilled on the stack.

Let's try to help it by forcing 'tmp' to the caller-saved ip register.

Result:

    div64const1000:
            stmfd   sp!, {r4, r5}
            mov     ip, #0
            adr     r5, .L8
            ldrd    r4, [r5]
            umull   r2, r3, r4, r0
            cmn     r2, r4
            adcs    r3, r3, r5
            adc     r2, ip, #0
            umlal   r3, r2, r5, r0
            umlal   r3, ip, r4, r1
            mov     r3, #0
            adds    r2, ip, r2
            adc     r3, r3, #0
            umlal   r2, r3, r5, r1
            mov     r0, r2, lsr #9
            mov     r1, r3, lsr #9
            orr     r0, r0, r3, asl #23
            ldmfd   sp!, {r4, r5}
            bx      lr
            .align  3
    .L8:
            .word   -1924145349
            .word   -2095944041

       text    data     bss     dec     hex filename
    13662838 1553940  351368 15568146  ed8d12 vmlinux

We could make the code marginally smaller yet by forcing 'tmp' to lr instead, but that would have a negative impact on branch prediction for which "bx lr" is optimal.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 08 Feb 2016, 3 commits
-
Committed by Kees Cook

The use of CONFIG_DEBUG_RODATA is generally seen as an essential part of kernel self-protection:

    http://www.openwall.com/lists/kernel-hardening/2015/11/30/13

Additionally, its name has grown to mean things beyond just rodata. To get ARM closer to this, we ought to rearrange the names of the configs that control how the kernel protects its memory. What was called CONFIG_ARM_KERNMEM_PERMS is really doing the work that other architectures call CONFIG_DEBUG_RODATA.

This redefines CONFIG_DEBUG_RODATA to actually do the bulk of the ROing (and NXing). In the place of the old CONFIG_DEBUG_RODATA, use CONFIG_DEBUG_ALIGN_RODATA, since that's what the option does: it adds section alignment for making rodata explicitly NX, as ARM does not split the page tables like arm64 does without _ALIGN_RODATA.

This also adds human-readable names to the sections so I could more easily debug my typos, and makes CONFIG_DEBUG_RODATA default "y" for CPU_V7.

Results in /sys/kernel/debug/kernel_page_tables for each config state:

    # CONFIG_DEBUG_RODATA is not set
    # CONFIG_DEBUG_ALIGN_RODATA is not set

    ---[ Kernel Mapping ]---
    0x80000000-0x80900000       9M  RW x  SHD
    0x80900000-0xa0000000     503M  RW NX SHD

    CONFIG_DEBUG_RODATA=y
    CONFIG_DEBUG_ALIGN_RODATA=y

    ---[ Kernel Mapping ]---
    0x80000000-0x80100000       1M  RW NX SHD
    0x80100000-0x80700000       6M  ro x  SHD
    0x80700000-0x80a00000       3M  ro NX SHD
    0x80a00000-0xa0000000     502M  RW NX SHD

    CONFIG_DEBUG_RODATA=y
    # CONFIG_DEBUG_ALIGN_RODATA is not set

    ---[ Kernel Mapping ]---
    0x80000000-0x80100000       1M  RW NX SHD
    0x80100000-0x80a00000       9M  ro x  SHD
    0x80a00000-0xa0000000     502M  RW NX SHD

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King

Code run via soft_restart() is run with the MMU disabled, so we need to pass the identity map physical address rather than the address obtained from virt_to_phys(). Therefore, replace virt_to_phys() with virt_to_idmap() for all callers of soft_restart().

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
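The pattern, sketched with a hypothetical reset entry point:

    /* cpu_reset_entry is illustrative. soft_restart() runs with the MMU
     * off, so it must be handed an identity-mapped address, not the
     * result of virt_to_phys(). */
    soft_restart(virt_to_idmap((unsigned long)cpu_reset_entry));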
-
Committed by Russell King

Make virt_to_idmap() return an unsigned long rather than phys_addr_t. Returning phys_addr_t here makes no sense, because the definition of virt_to_idmap() is that it shall return a physical address which maps identically with the virtual address. Since virtual addresses are limited to 32 bits, identity-mapped physical addresses are as well.

Almost all users already had an implicit narrowing cast to unsigned long, so let's make this official and part of the interface.

Tested-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 27 Jan 2016, 3 commits
-
Committed by Andiii

arm: irq: l2c: do not print error in case of missing l2c from dtb

On some architectures the L2 cache controller is integrated in the processor's block itself and doesn't use an external cache controller. This means that a board dtb entry for the l2c is not necessary.

Distinguish between error codes and do not print anything when l2x0_of_init() doesn't find any L2C entry in the DTB and returns -ENODEV. This patch mutes the following error message:

    L2C: failed to init: -19

on boards like odroid-xu4 (Cortex A7/A15), which don't have an external cache controller.

Signed-off-by: Andi Shyti <andi.shyti@samsung.com>
Reported-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Tested-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
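A minimal sketch of the distinction being drawn, assuming the check sits next to the l2x0_of_init() call:

    int ret = l2x0_of_init(0, ~0);

    /* -ENODEV just means the DT has no L2C node (the cache is internal);
     * stay quiet then, and warn only on genuine failures. */
    if (ret && ret != -ENODEV)
            pr_err("L2C: failed to init: %d\n", ret);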
-
Committed by Juri Lelli

Instead of looping through all CPUs calling set_capacity_scale, we can initialise the cpu_scale per-cpu variables to SCHED_CAPACITY_SCALE with their definition.

Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
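That is, roughly (a sketch of the initialise-at-definition form):

    /* Every CPU starts at full capacity; no boot-time loop needed. */
    static DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;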
-
Committed by Russell King

Implement an ARM delay timer to be used for udelay() on orion legacy platforms. This allows us to skip the delay loop calibration at boot. It also means that udelay() will be unaffected by CPU frequency changes when cpufreq is enabled on these platforms.

Tested-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 24 Jan 2016, 20 commits
-
Committed by Alban Bedel

As most platforms implement the PROM serial interface prom_putchar(), add a simple bridge to allow re-using this code for zboot.

Signed-off-by: Alban Bedel <albeu@free.fr>
Cc: Alex Smith <alex.smith@imgtec.com>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: Wu Zhangjin <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11811/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
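The bridge can be a one-line wrapper; a sketch, assuming zboot prints through a putc() routine as on other MIPS compressed-boot setups:

    /* zboot's output path funnels through putc(); route it to the
     * platform's existing PROM hook. */
    void putc(char c)
    {
            prom_putchar(c);
    }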
-
Committed by Alban Bedel

Add dummy.o to the targets list, and fill targets automatically from $(vmlinuzobjs) to avoid having to maintain two lists.

When building with XZ compression, copy ashldi3.c to the build directory so that a different object file is used for the kernel and for zboot. Without this, the same object file would need to be built with different flags, which causes a rebuild on every run.

Signed-off-by: Alban Bedel <albeu@free.fr>
Cc: linux-mips@linux-mips.org
Cc: Alex Smith <alex.smith@imgtec.com>
Cc: Wu Zhangjin <wuzhangjin@gmail.com>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11810/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Florian Fainelli

Allow BMIPS_GENERIC supported platforms to build GPIO controller drivers.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Dragan Stancevic <dragan.stancevic@gmail.com>
Cc: cernekee@gmail.com
Cc: jaedon.shin@gmail.com
Cc: gregory.0xf0@gmail.com
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12019/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Simon Arlott

Remove bcm63xx_nvram_get_psi_size() as it now has no users.

Signed-off-by: Simon Arlott <simon@fire.lp0.eu>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Brian Norris <computersforpeace@gmail.com>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Jonas Gorski <jogo@openwrt.org>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Cc: MIPS Mailing List <linux-mips@linux-mips.org>
Cc: MTD Mailing List <linux-mtd@lists.infradead.org>
Patchwork: https://patchwork.linux-mips.org/patch/11836/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Simon Arlott

Move the Broadcom BCM963xx image tag data structure to include/linux/ so that drivers outside of mach-bcm63xx can use it.

Signed-off-by: Simon Arlott <simon@fire.lp0.eu>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Brian Norris <computersforpeace@gmail.com>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Jonas Gorski <jogo@openwrt.org>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Cc: MIPS Mailing List <linux-mips@linux-mips.org>
Cc: MTD Mailing List <linux-mtd@lists.infradead.org>
Patchwork: https://patchwork.linux-mips.org/patch/11832/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Simon Arlott

Use the common definition of the nvram structure from the header file include/linux/bcm963xx_nvram.h instead of maintaining a separate copy. Read the version 5 size of the nvram data from memory and then call the new checksum verification function from the header file.

Signed-off-by: Simon Arlott <simon@fire.lp0.eu>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Brian Norris <computersforpeace@gmail.com>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Jonas Gorski <jogo@openwrt.org>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Cc: MIPS Mailing List <linux-mips@linux-mips.org>
Cc: MTD Mailing List <linux-mtd@lists.infradead.org>
Patchwork: https://patchwork.linux-mips.org/patch/11831/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
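A sketch of the resulting call; the helper's name and out-parameters are assumptions based on the description above, not a verified API:

    #include <linux/bcm963xx_nvram.h>

    u32 expected, actual;

    /* Verify the checksum over the version-appropriate length; warn but
     * carry on, since the contents may still be usable. */
    if (bcm963xx_nvram_checksum(&nvram, &expected, &actual))
            pr_warn("nvram checksum failed (expected %08x, got %08x)\n",
                    expected, actual);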
-
Committed by James Hogan

Add the missing newline to the end of the kvm_err string printed when the guest PMAP couldn't be allocated.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/11896/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
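The whole fix is the trailing "\n"; a sketch (the message text itself is an assumption):

    /* Without the '\n', the next printk gets glued onto this line. */
    kvm_err("Failed to allocate guest PMAP\n");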
-
Committed by James Hogan

The header arch/mips/kvm/opcode.h defines a few extra opcodes which aren't in arch/mips/include/uapi/asm/inst.h. There's nothing KVM-specific about them, so let's move them into inst.h where they belong and delete the header.

Note that mfmcz_op is renamed to mfmc0_op to match the instruction set manual, and wait_op was already added to inst.h in commit b0a3eae2 ("MIPS: inst.h: define COP0 wait op"), merged in v3.16-rc1.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11895/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

Drop the custom cache operation code definitions used by KVM for emulating guest CACHE instructions, and switch to using the existing definitions in <asm/cacheops.h>.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: kvm@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/11893/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

Most of the cache op codes defined in cacheops.h split into a 2-bit cache identifier and a 3-bit cache op code which does largely the same thing semantically regardless of the cache identifier. To allow these definitions to be used by KVM for decoding cache ops, break the definitions down into parts where it makes sense to do so, and add masks for the Cache and Op fields within the cache op.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: kvm@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11892/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
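A sketch of the split; the field positions follow the MIPS CACHE instruction encoding (cache selector in bits [1:0], operation in bits [4:2]), though the exact macro names here are assumptions:

    /* Masks for taking a 5-bit cache op code apart. */
    #define CacheOp_Cache        0x03   /* bits [1:0]: which cache */
    #define CacheOp_Op           0x1c   /* bits [4:2]: which operation */

    #define Cache_I              0x00   /* primary instruction cache */
    #define Cache_D              0x01   /* primary data cache */
    #define Cache_S              0x03   /* secondary cache */

    #define Index_Writeback_Inv  0x00
    #define Hit_Invalidate       0x10

    /* A complete op code is just cache | op, e.g.: */
    #define Index_Writeback_Inv_D   (Cache_D | Index_Writeback_Inv)
    #define Hit_Invalidate_I        (Cache_I | Hit_Invalidate)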
-
Committed by James Hogan

The first argument to set_except_vector() is the ExcCode, which we now have definitions for. Let's make use of them.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11894/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

Add a few missing trap codes.

[ralf@linux-mips.org: Drop removal of exception codes. I don't care what the incomplete architecture spec says; it can't change existing hardware, and VCEI is indeed supported.]

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: kvm@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11890/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

Move the Cause.ExcCode trap code definitions from kvm_host.h to mipsregs.h, since they describe architectural bits rather than KVM-specific constants, and change the prefix from T_ to EXCCODE_.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11891/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
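For illustration, a few of the architectural Cause.ExcCode values under the new prefix (the values are the standard MIPS encodings; the exact set moved by the patch isn't shown above):

    #define EXCCODE_INT     0   /* interrupt pending */
    #define EXCCODE_MOD     1   /* TLB modified fault */
    #define EXCCODE_TLBL    2   /* TLB miss on load or ifetch */
    #define EXCCODE_TLBS    3   /* TLB miss on store */
    #define EXCCODE_ADEL    4   /* address error on load or ifetch */
    #define EXCCODE_ADES    5   /* address error on store */
    #define EXCCODE_SYS     8   /* system call */
    #define EXCCODE_BP      9   /* breakpoint */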
-
Committed by James Hogan

The module init and exit functions have no need to be global, so make them static.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: kvm@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/11889/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
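The change amounts to roughly the following (a sketch; the function bodies and names are assumptions):

    /* Only referenced via module_init()/module_exit(), so internal
     * linkage is enough. */
    static int __init kvm_mips_init(void)
    {
            return kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
    }

    static void __exit kvm_mips_exit(void)
    {
            kvm_exit();
    }

    module_init(kvm_mips_init);
    module_exit(kvm_mips_exit);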
-
Committed by James Hogan

When calculating the offsets into the commpage for dynamically translated mtc0/mfc0 guest instructions, multiple offsetof()s are added together to find the offset of the specific register in the mips_coproc, within the commpage. Simplify each of these cases to a single offsetof() to find the offset of the specific register within the commpage.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11888/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
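The simplification, sketched (struct and field names are assumptions based on the description of a commpage embedding the coprocessor state):

    /* Before: two offsets added together to reach a register. */
    off = offsetof(struct kvm_mips_commpage, cop0) +
          offsetof(struct mips_coproc, reg[rd][sel]);

    /* After: offsetof() can reach through the embedded member directly. */
    off = offsetof(struct kvm_mips_commpage, cop0.reg[rd][sel]);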
-
Committed by James Hogan

Export symbols only to GPL modules, to match the other KVM symbols in virt/kvm/ and arch/*/kvm/.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11887/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

The function kvm_mips_host_tlb_inv_index() is unused, so drop it completely.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11886/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

The CAUSEB_DC and CAUSEF_DC definitions used by KVM are defined in asm/kvm_host.h, but all the other Cause register field definitions live in asm/mipsregs.h. Let's reunite the DC bit definitions with their friends in mipsregs.h.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11885/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

Some definitions in the MIPS asm/kvm_host.h are completely unused, so let's drop them. MS_TO_NS is no longer used since commit e30492bb ("MIPS: KVM: Rewrite count/compare timer emulation"). The others don't appear ever to have been used.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: kvm@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/11884/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by James Hogan

A bunch of miscellaneous whitespace and style fixes within arch/mips/kvm/.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: kvm@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/11883/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-