- 15 April 2015, 11 commits
-
-
Committed by Russell King
This errata covers all r1 variants of Cortex-A8; it is not limited to just r1p0..r1p2. Update the documentation to reflect this. The code already applies the workaround to all r1p* A8 CPUs.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
We have recently had an example of someone wanting to use a 90kHz timer for the software delay loop. udelay() needs to have at least microsecond resolution to allow drivers access to a delay mechanism with a reasonable chance of delaying the period they requested within at least a 50% margin of error, especially for small delays. Discussion about the udelay() accuracy can be found at:
https://lkml.org/lkml/2011/1/9/37
Reject timers which are unable to supply this level of resolution.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
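The resolution requirement comes down to simple arithmetic: a 90 kHz timer ticks only once every ~11.1 microseconds, so udelay(1) could be off by an order of magnitude. The following is a minimal, illustrative C sketch of the kind of check described above; the function name and the 1 MHz threshold are assumptions for illustration, not the kernel's actual register_current_timer_delay() code.

    #include <stdbool.h>
    #include <stdio.h>

    /* A timer is usable as the udelay() reference only if one tick is no
     * coarser than a microsecond, i.e. the timer runs at >= 1 MHz. */
    static bool delay_timer_has_usable_resolution(unsigned long freq_hz)
    {
            return freq_hz >= 1000000UL;
    }

    int main(void)
    {
            /* 90 kHz: one tick is ~11111 ns, far too coarse for udelay(1). */
            printf("90 kHz usable: %d\n", delay_timer_has_usable_resolution(90000));
            /* 24 MHz: one tick is ~42 ns, comfortably sub-microsecond. */
            printf("24 MHz usable: %d\n", delay_timer_has_usable_resolution(24000000));
            return 0;
    }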
-
Committed by Russell King
Switch ARM to use the generic show_mem() implementation, which displays the statistics from the mm zone rather than walking the page arrays.
Acked-by: Mel Gorman <mgorman@suse.de>
Tested-by: Gregory Fong <gregory.0xf0@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Avoid the errata 430973 workaround for non-Cortex-A8 CPUs. Having this workaround enabled introduces an additional branch target buffer flush into the context switching path, something we wish to avoid. To allow this errata to be enabled in multiplatform kernels while reducing its impact, rearrange the Cortex-A8 CPU support to avoid impacting other ARMv7 CPUs.
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
The effects of not having ARM errata 643719 enabled on affected CPUs can be very confusing and hard to debug. Rather than leave this to chance, enable this workaround by default. Now that we have rearranged the code, it should have a low impact on the majority of CPUs.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Eliminate one unnecessary instruction from this test by pre-shifting the Cortex-A9 ID: we can shift the actual ID in the teq instruction, thereby losing the pX bit of the ID at no cost.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Optimise the branches such that, for the majority of unaffected devices, we avoid needing to execute the errata work-around code path by branching to start_flush_levels early.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Both v7_flush_cache_louis and v7_flush_dcache_all begin the flush_levels loop with r10 initialised to zero. In each case, this is done immediately prior to entering the loop. Branch to this instruction in v7_flush_dcache_all from v7_flush_cache_louis and eliminate the unnecessary initialisation in v7_flush_cache_louis.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Rather than have code which masks and then shifts, such as:

    mrc     p15, 1, r0, c0, c0, 1
    ALT_SMP(ands    r3, r0, #7 << 21)
    ALT_UP( ands    r3, r0, #7 << 27)
    ALT_SMP(mov     r3, r3, lsr #20)
    ALT_UP( mov     r3, r3, lsr #26)

re-arrange this as a shift and then mask. The mask is the same for each field which we want to extract, so this allows the mask to be shared amongst the code paths:

    mrc     p15, 1, r0, c0, c0, 1
    ALT_SMP(mov     r3, r0, lsr #20)
    ALT_UP( mov     r3, r0, lsr #26)
    ands    r3, r3, #7 << 1

Use this method for the LoUIS, LoUU and LoC fields.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
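The equivalence of the two instruction sequences is easier to see in plain C. The worked example below mirrors the SMP case (LoUIS in CLIDR bits [23:21]) and is purely illustrative, not kernel code; note that both forms deliberately leave the field doubled (parked in bits [3:1]), exactly as the assembly does.

    #include <assert.h>
    #include <stdint.h>

    /* Mask-then-shift, as in the original sequence: isolate CLIDR[23:21],
     * then move it down into bits [3:1]. */
    static uint32_t louis_mask_then_shift(uint32_t clidr)
    {
            return (clidr & (7u << 21)) >> 20;
    }

    /* Shift-then-mask: shift first, then apply a mask (7 << 1) that is the
     * same for every field being extracted, so it can be shared. */
    static uint32_t louis_shift_then_mask(uint32_t clidr)
    {
            return (clidr >> 20) & (7u << 1);
    }

    int main(void)
    {
            for (uint32_t v = 0; v < 8; v++) {
                    uint32_t clidr = (v << 21) | 0x09000023u; /* arbitrary other bits */
                    assert(louis_mask_then_shift(clidr) == louis_shift_then_mask(clidr));
            }
            return 0;
    }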
-
Committed by Russell King
We always build cache-v7.S for ARMv7, so we can use the ARMv7 16-bit move instructions (movw/movt) to load large constants, rather than using constants in a literal pool.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Allow ALT_UP() to cope with a 16-bit Thumb instruction by automatically inserting a following nop instruction. This allows us to care less about getting the assembler to emit a 32-bit Thumb instruction.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 02 April 2015, 7 commits
-
-
Committed by Paul Walmsley
Documentation: DT bindings: Tegra AHB: require the legacy base address for existing chips
Per Stephen Warren, note in the Tegra AHB DT binding documentation that we specifically deprecate any attempt to use the IP block's actual hardware base address, and advocate the use of the legacy "off-by-four" address in the 'regs' property, for Tegra chips with existing upstream Linux DT files that include a Tegra AHB node. This patch updates the documentation accordingly.
Changing the existing kernel DT data isn't under consideration because Linux kernel DT data policy is to preserve compatibility between newer DT data files and older kernels. However, this additional step of changing the documentation should discourage others from sending kernel patches that try to change the legacy kernel DT data. Furthermore, for out-of-tree software (such as bootloaders or other operating systems) that may rely on Linux kernel DT binding documentation as an ABI (but not on the Linux kernel DT data itself), such a change may allow future convergence with the Linux kernel DT data without additional code changes.
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Cc: Paul Walmsley <pwalmsley@nvidia.com>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: Alexandre Courbot <gnurou@gmail.com>
Cc: Eduardo Valentin <edubezval@gmail.com>
Cc: Ian Campbell <ijc+devicetree@hellion.org.uk>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: devicetree@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Paul Walmsley
amba: tegra-ahb: detect and correct bogus base address
From a hardware SoC integration point of view, the starting address of this IP block in the existing Tegra SoC DT files is off by 4 bytes from the actual base address. Since we attempt to make old DT files forward-compatible with newer kernels, we cannot fix the IP block base address in old DT data. This patch works around the problem by detecting the four-byte base address offset in the driver code, and correcting it if it's detected. (In general, IP block base addresses almost always have a null low byte.)
Future SoC DT data for Tegra AHB should use the correct Tegra AHB base address, in cases where there is no DT data backward compatibility requirement.
This patch is a revision of the patch originally titled "amba: tegra-ahb: use correct base address for future chip support". This revision implements changes requested by Russell King:
http://marc.info/?l=linux-tegra&m=142658851825062&w=2
http://marc.info/?l=linux-tegra&m=142658873925178&w=2
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Cc: Paul Walmsley <pwalmsley@nvidia.com>
Cc: Alexandre Courbot <gnurou@gmail.com>
Cc: Hiroshi DOYU <hdoyu@nvidia.com>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: linux-kernel@vger.kernel.org
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
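For illustration only, here is a hedged sketch of what such a probe-time correction could look like. The 0x4 test, the helper name tegra_ahb_map_regs_sketch and the exact fix-up are assumptions made for this example; they are not the actual drivers/amba/tegra-ahb.c code.

    #include <linux/io.h>
    #include <linux/ioport.h>
    #include <linux/platform_device.h>

    static void __iomem *tegra_ahb_map_regs_sketch(struct platform_device *pdev)
    {
            struct resource *res;
            resource_size_t phys, size;

            res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
            if (!res)
                    return IOMEM_ERR_PTR(-ENODEV);

            phys = res->start;
            size = resource_size(res);

            /* IP block base addresses normally have a null low byte; a base
             * ending in 0x4 is the legacy off-by-four DT encoding, so pull
             * the mapping back to the real hardware base. */
            if (phys & 0x4) {
                    dev_info(&pdev->dev, "correcting legacy bogus base address\n");
                    phys -= 4;
                    size += 4;
            }

            return devm_ioremap(&pdev->dev, phys, size);
    }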
-
Committed by Paul Walmsley
amba: tegra-ahb: fix register offsets in the macros
From a hardware SoC integration point of view, the offsets of the Tegra AHB registers that are currently defined in tegra-ahb.c macros are all off by four bytes. Similarly, the starting address of this IP block in our existing DT files is also off by four bytes. Since we attempt to make old DT files forward-compatible with newer kernels, we cannot fix the IP block base address in old DT data. However, we can fix the offsets in the driver so that they are correct with respect to the hardware, which is what this patch does. A subsequent patch will allow the offset to be removed for DT 'compatible' strings used in future DT files for newer Tegra chips that the kernel does not yet support.
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Cc: Paul Walmsley <pwalmsley@nvidia.com>
Cc: Alexandre Courbot <gnurou@gmail.com>
Cc: Hiroshi DOYU <hdoyu@nvidia.com>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: linux-kernel@vger.kernel.org
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Geert Uytterhoeven
Several interrupt controllers support both edge and level interrupts, so it's useful to provide that information in /proc/interrupts.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
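As a hedged illustration of where that information comes from: genirq already tracks the configured trigger type for every irq_data, so a printout only needs to map it to a label. The helper below is a sketch of the idea, not the patch itself.

    #include <linux/irq.h>

    /* Map the genirq trigger type of an interrupt to a short label suitable
     * for printing alongside the usual /proc/interrupts columns. */
    static const char *irq_trigger_label(struct irq_data *d)
    {
            switch (irqd_get_trigger_type(d)) {
            case IRQ_TYPE_EDGE_RISING:      return "edge-rising";
            case IRQ_TYPE_EDGE_FALLING:     return "edge-falling";
            case IRQ_TYPE_EDGE_BOTH:        return "edge-both";
            case IRQ_TYPE_LEVEL_HIGH:       return "level-high";
            case IRQ_TYPE_LEVEL_LOW:        return "level-low";
            default:                        return "unknown";
            }
    }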
-
Committed by Geert Uytterhoeven
When trying to kexec into a new kernel on a platform where multiple CPU cores are present, but no SMP bringup code is available yet, the kexec_load system call fails with:

    kexec_load failed: Invalid argument

The SMP test added to machine_kexec_prepare() in commit 2103f6cb ("ARM: 7807/1: kexec: validate CPU hotplug support") wants to prohibit kexec on SMP platforms where it cannot disable secondary CPUs. However, this test is too strict: if the secondary CPUs couldn't be enabled in the first place, there's no need to disable them later at kexec time. Hence skip the test in the absence of SMP bringup code.
This allows all CPU cores to be added to the DTS from the beginning, without having to implement SMP bringup first, improving DT compatibility.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
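The shape of the resulting check, as a hedged sketch: only insist on CPU hotplug capability when the platform could actually have brought secondary CPUs online. The helper names platform_can_secondary_boot() and platform_can_cpu_hotplug() are treated as assumptions here, as is the overall structure; this is not the exact machine_kexec_prepare() code.

    #include <linux/cpumask.h>
    #include <linux/errno.h>

    /* ARM SMP helpers assumed for this sketch. */
    extern bool platform_can_secondary_boot(void);
    extern bool platform_can_cpu_hotplug(void);

    static int kexec_smp_check_sketch(void)
    {
            /* No SMP bringup code registered: secondaries never came up,
             * so there is nothing to offline and kexec can proceed. */
            if (!platform_can_secondary_boot())
                    return 0;

            /* Secondaries may be online but cannot be unplugged: refuse. */
            if (num_online_cpus() > 1 && !platform_can_cpu_hotplug())
                    return -EINVAL;

            return 0;
    }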
-
Committed by Russell King
Move shutdown and reboot related code to a separate file, out of process.c. This helps avoid polluting process.c with non-process-related code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Normally, when a CPU wants to clear a cache line to zero in the external L2 cache, it would generate bus cycles to write each word as it would do with any other data access. However, a Cortex-A9 connected to an L2C-310 has a specific feature where the CPU can detect this operation, and signal that it wants to zero an entire cache line. This feature, known as Full Line of Zeros (FLZ), involves a non-standard AXI signalling mechanism which only the L2C-310 can properly interpret. There are separate enable bits in both the L2C-310 and the Cortex-A9: the L2C-310 needs to be enabled and have the FLZ enable bit set in the auxiliary control register before the Cortex-A9 has this feature enabled.
Unfortunately, the suspend code was not respecting this - it's not obvious from the code:

    swsusp_arch_suspend()
     cpu_suspend()          /* saves the Cortex-A9 auxiliary control register */
     arch_save_image()
     soft_restart()         /* turns off FLZ in Cortex-A9, and disables L2C */
      cpu_resume()          /* restores the Cortex-A9 registers, inc auxcr */

At this point, we end up with the L2C disabled, but the Cortex-A9 with FLZ enabled - which means any memset() or zeroing of a full cache line will fail to take effect.
A similar issue exists in the resume path, but it's slightly more complex:

    swsusp_arch_suspend()
     cpu_suspend()          /* saves the Cortex-A9 auxiliary control register */
     arch_save_image()      /* image with A9 auxcr saved */
    ...
    swsusp_arch_resume()
     call_with_stack()
      arch_restore_image()  /* restores image with A9 auxcr saved above */
      soft_restart()        /* turns off FLZ in Cortex-A9, and disables L2C */
       cpu_resume()         /* restores the Cortex-A9 registers, inc auxcr */

Again, here we end up with the L2C disabled, but Cortex-A9 FLZ enabled.
There's no need to turn off the L2C in either of these two paths; there are benefits from not doing so - for example, the page copies will be faster with the L2C enabled. Hence, fix this by providing a variant of soft_restart() which can be used without turning the L2 cache controller off, and use it in both of these paths to keep the L2C enabled across the respective resume transitions.
Fixes: 8ef418c7 ("ARM: l2c: trial at enabling some Cortex-A9 optimisations")
Reported-by: Sean Cross <xobs@kosagi.com>
Tested-by: Sean Cross <xobs@kosagi.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 30 March 2015, 7 commits
-
-
Committed by Ard Biesheuvel
This code calls cpu_resume() using a straight branch (b), so now that we have moved cpu_resume() back to .text, this should be moved there as well.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Ard Biesheuvel
This code calls cpu_resume() using a straight branch (b), so now that we have moved cpu_resume() back to .text, this should be moved there as well. Any direct references to symbols that will remain in the .data section are replaced with explicit PC-relative references.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Ard Biesheuvel
Move cpu_resume() to the .text section where it belongs. Change the adr reference to sleep_save_sp to an explicit PC-relative reference so sleep_save_sp itself can remain in .data. This helps prevent linker failure on large kernels, as code in the .data section may be too far away to be in range for normal b/bl instructions.
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Ard Biesheuvel
When building a very large kernel, it is up to the linker to decide when and where to insert stubs that allow calls to functions which are out of range for the ordinary b/bl instructions. However, since the kernel is built as a position-dependent binary, these stubs (aka veneers) may contain absolute addresses, which will break far calls performed with the MMU off.
For instance, the call from __enable_mmu() in the .head.text section to __turn_mmu_on() in the .idmap.text section may be turned into something like this:

    c0008168 <__enable_mmu>:
    c0008168:  f020 0002   bic.w   r0, r0, #2
    c000816c:  f420 5080   bic.w   r0, r0, #4096
    c0008170:  f000 b846   b.w     c0008200 <____turn_mmu_on_veneer>
    [...]
    c0008200 <____turn_mmu_on_veneer>:
    c0008200:  4778        bx      pc
    c0008202:  46c0        nop
    c0008204:  e59fc000    ldr     ip, [pc]
    c0008208:  e12fff1c    bx      ip
    c000820c:  c13dfae1    teqgt   sp, r1, ror #21
    [...]
    c13dfae0 <__turn_mmu_on>:
    c13dfae0:  4600        mov     r0, r0
    [...]

After adding --pic-veneer to the LDFLAGS, the veneer is emitted like this instead:

    c0008200 <____turn_mmu_on_veneer>:
    c0008200:  4778        bx      pc
    c0008202:  46c0        nop
    c0008204:  e59fc004    ldr     ip, [pc, #4]
    c0008208:  e08fc00c    add     ip, pc, ip
    c000820c:  e12fff1c    bx      ip
    c0008210:  013d7d31    teqeq   sp, r1, lsr sp
    c0008214:  00000000    andeq   r0, r0, r0

Note that this particular example is best addressed by moving .head.text and .idmap.text closer together, but this issue could potentially affect any code that needs to execute with the MMU off.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Ard Biesheuvel
This moves all fixup snippets to the .text.fixup section, which is a special section that gets emitted along with the .text section for each input object file, i.e., the snippets are kept much closer to the code they refer to, which helps prevent linker failure on large kernels.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Ard Biesheuvel
This introduces a new .text.fixup input section that gets emitted together with the .text section for each input object file. Note that

    *(.text)
    *(.text.fixup)

is not the same as

    *(.text .text.fixup)

and we are looking for the latter, to ensure that fixup snippets which are assembled into a separate section in the object file do not end up out of range for the relative branch instructions they contain if the .text section itself grows very large. This helps prevent linker failures on large ARM kernels.
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Mark Rutland
arm64 builds with GCC 5 have caused the __asmeq assertions in the PSCI calling code to fire, so move the ARM PSCI calls out of line into their own assembly file for consistency and to safeguard against the same issue occurring with the 32-bit toolchain.
[will: brought into line with arm64 implementation]
Reported-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 March 2015, 2 commits
-
-
Committed by Uwe Kleine-König
When the patch for e16343c4 ("ARM: 8160/1: drop warning about return_address not using unwind tables") was created, there was still more code in said branch. Probably this simplification was just missed during conflict resolution when the patch was applied.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Joachim Eastwood
This patch makes it possible to enter zImage in Thumb mode for ARMv7-M (Cortex-M) CPUs that do not support ARM mode. The kernel entry is also made in Thumb mode.
[ukl: fix spelling in commit log, return early in call_cache_fn]
Signed-off-by: Joachim Eastwood <manabian@gmail.com>
Tested-by: Stefan Agner <stefan@agner.ch>
Tested-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Tested-by: Chanwoo Choi <cw00.choi@samsung.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 28 March 2015, 5 commits
-
-
Committed by Ard Biesheuvel
When running the 32-bit ARM kernel on ARMv8-capable bare metal (e.g., 32-bit Android userland and kernel on a Cortex-A53), or as a KVM guest on a 64-bit host, we should advertise the availability of the Crypto instructions, so that userland libraries such as OpenSSL may use them. (Support for the v8 Crypto instructions in the 32-bit build was added to OpenSSL more than six months ago.)
This adds the ID feature bit detection, and sets elf_hwcap2 accordingly.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
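A hedged sketch of what the detection amounts to: read ID_ISAR5, pull out the per-extension fields, and translate non-zero values into HWCAP2 bits. The field positions follow the ARM ARM's ID_ISAR5 layout and the HWCAP2_* values mirror the ARM uapi header; treat the exact mapping as an assumption to verify against the real arch/arm setup code rather than a copy of it.

    #include <stdint.h>

    /* HWCAP2_* values as defined in arch/arm/include/uapi/asm/hwcap.h. */
    #define HWCAP2_AES      (1u << 0)
    #define HWCAP2_PMULL    (1u << 1)
    #define HWCAP2_SHA1     (1u << 2)
    #define HWCAP2_SHA2     (1u << 3)
    #define HWCAP2_CRC32    (1u << 4)

    /* Extract an unsigned 4-bit field from an ID register. */
    static unsigned int id_field(uint32_t reg, int shift)
    {
            return (reg >> shift) & 0xf;
    }

    /* Derive elf_hwcap2 bits from ID_ISAR5: AES[7:4], SHA1[11:8],
     * SHA2[15:12], CRC32[19:16]; AES level 2 additionally means PMULL. */
    static unsigned int hwcap2_from_id_isar5(uint32_t isar5)
    {
            unsigned int hwcap2 = 0;

            if (id_field(isar5, 4) >= 1)
                    hwcap2 |= HWCAP2_AES;
            if (id_field(isar5, 4) >= 2)
                    hwcap2 |= HWCAP2_PMULL;
            if (id_field(isar5, 8) >= 1)
                    hwcap2 |= HWCAP2_SHA1;
            if (id_field(isar5, 12) >= 1)
                    hwcap2 |= HWCAP2_SHA2;
            if (id_field(isar5, 16) >= 1)
                    hwcap2 |= HWCAP2_CRC32;

            return hwcap2;
    }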
-
Committed by Ard Biesheuvel
The various CPU feature registers consist of 4-bit blocks that represent signed quantities, whose positive values represent incremental features and whose negative values are reserved. To improve forward compatibility, update the feature detection code to take possible future higher values into account, but ignore negative values.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
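The arithmetic here is just a sign-extending field extract; the small illustrative C version below (not the kernel's helper) makes the rule concrete: positive field values mean "feature present at this level or better", negative values mean "reserved, treat as absent".

    #include <assert.h>
    #include <stdint.h>

    /* Extract the signed 4-bit field starting at bit 'shift' of an ID
     * register, sign-extending from bit shift+3. */
    static int32_t id_feature_field(uint32_t reg, unsigned int shift)
    {
            return (int32_t)(reg << (28 - shift)) >> 28;
    }

    /* A feature is usable when its field is at least the required positive
     * level; reserved (negative) encodings are ignored. */
    static int id_feature_present(uint32_t reg, unsigned int shift, int min)
    {
            return id_feature_field(reg, shift) >= min;
    }

    int main(void)
    {
            assert(id_feature_field(0x00000020u, 4) == 2);   /* future higher value: still OK */
            assert(id_feature_field(0x000000f0u, 4) == -1);  /* reserved: treated as absent  */
            assert(id_feature_present(0x00000020u, 4, 1));
            assert(!id_feature_present(0x000000f0u, 4, 1));
            return 0;
    }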
-
Committed by Ard Biesheuvel
This moves the .idmap.text section closer to .head.text, so that relative branches are less likely to go out of range if the kernel text gets bigger.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Ard Biesheuvel
This patch replaces the 'branch to setup()' instructions embedded in the PROCINFO structs with the offset of that setup function relative to the base of the struct. This preserves the position-independent nature of that field, but uses a data item rather than an instruction. This is mainly done to prevent linker failures on large kernels, where the setup function is out of reach for the branch.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Occasionally, there's a question about the method we use to find the start of physical memory. Add some documentation so we don't have to keep repeating ourselves on the mailing list.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 18 March 2015, 1 commit
-
-
Committed by Will Deacon
When using the IOMMU-backed DMA ops for a device, we store a pointer to the dma_iommu_mapping structure (used to keep track of the address space) in the archdata.mapping field of the struct device. Rather than access this field directly, use the to_dma_iommu_mapping helper in dma-mapping, so that we don't really care where the mapping information is held.
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
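In practice the change is a mechanical substitution; a hedged before/after sketch is shown below. At the time of this commit the ARM helper is essentially an accessor around dev->archdata.mapping, so both functions return the same pointer; the point is that callers stop depending on where the pointer lives.

    #include <linux/dma-mapping.h>
    #include <asm/dma-iommu.h>

    /* Before: reach directly into the arch-specific field. */
    static struct dma_iommu_mapping *get_mapping_old(struct device *dev)
    {
            return dev->archdata.mapping;
    }

    /* After: go through the accessor, so callers no longer care where the
     * mapping pointer is actually stored. */
    static struct dma_iommu_mapping *get_mapping_new(struct device *dev)
    {
            return to_dma_iommu_mapping(dev);
    }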
-
- 10 March 2015, 1 commit
-
-
Committed by Florian Fainelli
Make sure that we can read the "cache-level" property from the L2 cache controller node, and ensure its value is 2.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
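The kind of sanity check being described, as a hedged sketch using the standard OF property accessor (the function name and error handling are illustrative, not the exact cache controller code):

    #include <linux/errno.h>
    #include <linux/of.h>

    /* Return 0 only if the L2 cache controller node carries a "cache-level"
     * property and that property reads as 2, i.e. an outer (L2) cache. */
    static int l2_check_cache_level(struct device_node *np)
    {
            u32 level;

            if (of_property_read_u32(np, "cache-level", &level))
                    return -EINVAL;

            if (level != 2)
                    return -EINVAL;

            return 0;
    }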
-
- 25 February 2015, 1 commit
-
-
Committed by Russell King
SMP_ON_UP has been around for a while and seems to be well-proven by now. Drop the EXPERIMENTAL tag from the option.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 23 February 2015, 5 commits
-
-
Committed by Carlo Caione
Even without an IOMMU, NO_KERNEL_MAPPING is still convenient to save kernel address space in places where we don't need a kernel mapping. Implement support for it in the two places where we're creating an expensive mapping. __alloc_from_pool uses an internal pool from which we already have virtual addresses, so it's not relevant, and __alloc_simple_buffer uses alloc_pages, which will always return a lowmem page that is already mapped into kernel space, so we can't prevent a mapping for it in that case.
Signed-off-by: Jasper St. Pierre <jstpierre@mecheye.net>
Signed-off-by: Carlo Caione <carlo@caione.org>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Daniel Drake <dsd@endlessm.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
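From a driver's point of view the attribute is requested at allocation time; below is a hedged usage sketch using the modern dma_alloc_attrs() signature (kernels of this era passed a struct dma_attrs instead of a flags word, so adjust for your tree). Note that with DMA_ATTR_NO_KERNEL_MAPPING the returned "CPU address" may be an opaque cookie rather than a pointer the kernel can dereference.

    #include <linux/dma-mapping.h>

    /* Allocate a DMA buffer that the kernel itself will never touch through
     * a virtual address (e.g. a device-only scratch buffer), so no kernel
     * mapping needs to be created for it. */
    static void *alloc_device_only_buffer(struct device *dev, size_t size,
                                          dma_addr_t *dma_handle)
    {
            return dma_alloc_attrs(dev, size, dma_handle, GFP_KERNEL,
                                   DMA_ATTR_NO_KERNEL_MAPPING);
    }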
-
Committed by Lorenzo Pieralisi
The pci_mmap_page_range() API should be written to expect offset values representing PCI memory resource addresses as seen by user space, through the pci_resource_to_user() API. ARM relies on the standard implementation of pci_resource_to_user(), which is actually an identity map and exports to user space PCI memory resources as they are stored in PCI device resource structures; these represent CPU physical addresses (fixed up using bus-to-CPU address conversions), not PCI bus addresses.
Therefore, on ARM platforms where the mapping between CPU and bus addresses is not 1:1, the current pci_mmap_page_range() implementation is erroneous, in that an additional shift is applied to an already fixed-up offset passed from userspace. Hence, this patch removes mem_offset from the pgoff calculation, since the offset as passed from user space already represents the CPU physical address corresponding to the resource to be mapped, i.e. no additional offset should be applied.
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
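The net effect on the mmap path, sketched and hedged: use the user-supplied offset as the CPU physical offset directly, without adding mem_offset again. This is an illustration of the idea, not the exact arch/arm/kernel/bios32.c code.

    #include <linux/mm.h>
    #include <linux/pci.h>

    /* vma->vm_pgoff, as passed from user space via pci_resource_to_user(),
     * already encodes the CPU physical address of the BAR in page units,
     * so it can be handed to remap_pfn_range() as the pfn directly. */
    static int pci_mmap_resource_sketch(struct vm_area_struct *vma)
    {
            vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

            return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
                                   vma->vm_end - vma->vm_start,
                                   vma->vm_page_prot);
    }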
-
Committed by Ard Biesheuvel
Currently, interworking calls on module boundaries are not supported, and are handled by the same error-handling code path as non-interworking calls whose targets are simply out of range. Before modifying the handling of those out-of-range jump and call relocations in a subsequent patch, move the handling of interworking restrictions out of it.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Linus Torvalds
.. after extensive statistical analysis of my G+ polling, I've come to the inescapable conclusion that internet polls are bad. Big surprise. But "Hurr durr I'ma sheep" trounced "I like online polls" by a 62-to-38% margin, in a poll that people weren't even supposed to participate in. Who can argue with solid numbers like that? 5,796 votes from people who can't even follow the most basic directions? In contrast, "v4.0" beat out "v3.20" by a slimmer margin of 56-to-44%, but with a total of 29,110 votes right now. Now, arguably, that vote spread is only about 3,200 votes, which is less than the almost six thousand votes that the "please ignore" poll got, so it could be considered noise. But hey, I asked, so I'll honor the votes.
-
Committed by Linus Torvalds (merge of tag 'ext4_for_linus' from git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4)
Pull ext4 fixes from Ted Ts'o: "Ext4 bug fixes. We also reserved code points for encryption and read-only images (for which the implementation is mostly just the reserved code point for a read-only feature :-)"

* tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
  ext4: fix indirect punch hole corruption
  ext4: ignore journal checksum on remount; don't fail
  ext4: remove duplicate remount check for JOURNAL_CHECKSUM change
  ext4: fix mmap data corruption in nodelalloc mode when blocksize < pagesize
  ext4: support read-only images
  ext4: change to use setup_timer() instead of init_timer()
  ext4: reserve codepoints used by the ext4 encryption feature
  jbd2: complain about descriptor block checksum errors
-