- 07 Jan 2015, 4 commits
-
-
Committed by Paul Walmsley
On next-20150105, defconfig compilation breaks with:

  ./arch/arm64/include/asm/processor.h:47:32: error: ‘PHYS_MASK’ undeclared (first use in this function)

Fix by including asm/pgtable-hwdef.h, where PHYS_MASK is defined. This second version incorporates a comment from Mark Rutland <mark.rutland@arm.com> to keep the includes in alphabetical order by filename.

Signed-off-by: Paul Walmsley <paul@pwsan.com>
Cc: Paul Walmsley <pwalmsley@nvidia.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
We don't currently check a number of registers exposed to AArch32 guests (MVFR{0,1,2}_EL1 and ID_DFR0_EL1), despite the fact these describe AArch32 feature support exposed to userspace and KVM guests similarly to AArch64 registers which we do check. We do not expect these registers to vary across a set of CPUs.

This patch adds said registers to the cpuinfo framework and sanity checks. No sanity check failures have been observed on a current ARMv8 big.LITTLE platform (Juno).

Cc: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Tobias Klauser
prepare_to_copy() was removed from all architectures supported at that time in commit 55ccf3fe ("fork: move the real prepare_to_copy() users to arch_dup_task_struct()"). Remove it from arm64 as well.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
Commit 97b56be1 (arm64: compat: Enable bpf syscall) made the usual mistake of forgetting to update __NR_compat_syscalls. Due to this, when el0_sync_compat calls el0_svc_naked, the test against sc_nr (__NR_compat_syscalls) will fail, and we'll call ni_sys, returning -ENOSYS to userspace.

This patch bumps __NR_compat_syscalls appropriately, enabling the use of the bpf syscall from compat tasks.

Due to the reorganisation of unistd{,32}.h as part of commit f3e5c847 (arm64: Add __NR_* definitions for compat syscalls) it is not currently possible to include both headers and sanity-check the value of __NR_compat_syscalls at build-time to prevent this from happening again. Additional rework is required to make such niceties a possibility.

Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
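A minimal sketch of the shape of the fix described above; the numeric values here are illustrative examples, not taken from the tree:

  /*
   * Illustrative only: el0_svc_naked compares the syscall number against
   * __NR_compat_syscalls, so the count must sit above every compat
   * syscall that is actually wired up, including the new bpf entry.
   */
  #define __NR_compat_bpf         386   /* example value */
  #define __NR_compat_syscalls    387   /* must be > __NR_compat_bpf */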
-
- 24 Dec 2014, 1 commit
-
-
Committed by Jungseok Lee
This patch adds a pgd_page definition in order to keep supporting the HAVE_GENERIC_RCU_GUP configuration. In addition, it changes the pud_page expression to align with pmd_page for readability.

The introduction of pgd_page resolves the following build breakage under the 4KB + 4-level memory management combo:

  mm/gup.c: In function 'gup_huge_pgd':
  mm/gup.c:889:2: error: implicit declaration of function 'pgd_page' [-Werror=implicit-function-declaration]
    head = pgd_page(orig);
    ^
  mm/gup.c:889:7: warning: assignment makes pointer from integer without a cast
    head = pgd_page(orig);

Cc: Will Deacon <will.deacon@arm.com>
Cc: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
[catalin.marinas@arm.com: remove duplicate pmd_page definition]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
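For reference, a sketch of the kind of definition such a fix adds, assuming pgd_val() yields a physical address plus attribute bits; not necessarily the exact upstream macro:

  #define pgd_page(pgd)  pfn_to_page(__phys_to_pfn(pgd_val(pgd) & PHYS_MASK))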
-
- 22 Dec 2014, 1 commit
-
-
Committed by Catalin Marinas
Commit a3a60f81 (dma-mapping: replace set_arch_dma_coherent_ops with arch_setup_dma_ops) changes the of_dma_configure() arch dma_ops callback to arch_setup_dma_ops but only the arch/arm code is updated. Subsequent commit 97890ba9 (dma-mapping: detect and configure IOMMU in of_dma_configure) changes the arch_setup_dma_ops() prototype further to handle iommu. The patch makes the corresponding arm64 changes.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Will Deacon <will.deacon@arm.com>
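Roughly the shape of the hook after that prototype change, reconstructed from memory of the generic dma-mapping rework; treat the exact parameter list as an assumption:

  void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
                          struct iommu_ops *iommu, bool coherent);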
-
- 18 Dec 2014, 1 commit
-
-
Committed by Christian Borntraeger
ACCESS_ONCE does not work reliably on non-scalar types. For example gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145).

Change the spinlock code to replace ACCESS_ONCE with READ_ONCE.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
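A minimal sketch of the pattern being converted, assuming the arm64 ticket-lock layout with owner/next halves; illustrative rather than the exact upstream hunk:

  static inline int sketch_spin_is_locked(arch_spinlock_t *lock)
  {
      /* Copy the whole lock word once; READ_ONCE() handles the aggregate
       * type, where ACCESS_ONCE() may lose its volatile qualifier under
       * gcc's SRA pass. */
      arch_spinlock_t lockval = READ_ONCE(*lock);   /* was ACCESS_ONCE(*lock) */

      return lockval.owner != lockval.next;
  }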
-
- 13 Dec 2014, 4 commits
-
-
Committed by Christoffer Dall
Introduce a new function to unmap user RAM regions in the stage2 page tables. This is needed on reboot (or when the guest turns off the MMU) to ensure we fault in pages again and make the dcache, RAM, and icache coherent.

Using unmap_stage2_range for the whole guest physical range does not work, because that unmaps IO regions (such as the GIC) which will not be recreated or in the best case faulted in on a page-by-page basis.

Call this function on secondary and subsequent calls to the KVM_ARM_VCPU_INIT ioctl so that a reset VCPU will detect the guest Stage-1 MMU is off when faulting in pages and make the caches coherent.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Christoffer Dall
When a vcpu calls SYSTEM_OFF or SYSTEM_RESET with PSCI v0.2, the vcpus should really be turned off for the VM adhering to the suggestions in the PSCI spec, and it's the sane thing to do.

Also, clarify the behavior and expectations for exits to user space with the KVM_EXIT_SYSTEM_EVENT case.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Christoffer Dall
It is not clear that this ioctl can be called multiple times for a given vcpu. Userspace already does this, so clarify the ABI.

Also specify that userspace is expected to always make secondary and subsequent calls to the ioctl with the same parameters for the VCPU as the initial call (which userspace also already does).

Add code to check that userspace doesn't violate that ABI in the future, and move the kvm_vcpu_set_target() function which is currently duplicated between the 32-bit and 64-bit versions in guest.c to a common static function in arm.c, shared between both architectures.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
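From the userspace side, the clarified ABI boils down to something like the sketch below (error handling omitted; the helper name is made up):

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /* KVM_ARM_VCPU_INIT may be issued again on an already-initialised vcpu
   * (e.g. to reset it), but 'init' must carry the same target and
   * features as the first call. */
  static int reinit_vcpu(int vcpu_fd, const struct kvm_vcpu_init *init)
  {
      return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init);
  }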
-
Committed by Christoffer Dall
When userspace resets the vcpu using KVM_ARM_VCPU_INIT, we should also reset the HCR, because we now modify the HCR dynamically to enable/disable trapping of guest accesses to the VM registers.

This is crucial for making VM reboot work, since otherwise we will not be doing the necessary cache maintenance operations when faulting in pages with the guest MMU off.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 12 Dec 2014, 1 commit
-
-
Committed by Alexander Duyck
There are a number of situations where the mandatory barriers rmb() and wmb() are used to order memory/memory operations in the device drivers, and those barriers are much heavier than they actually need to be. For example in the case of PowerPC wmb() calls the heavy-weight sync instruction when for coherent memory operations all that is really needed is an lwsync or eieio instruction.

This commit adds a coherent-only version of the mandatory memory barriers rmb() and wmb(). In most cases this should result in the barrier being the same as the SMP barriers for the SMP case, however in some cases we use a barrier that is somewhere in between rmb() and smp_rmb(). For example on ARM the rmb barriers break down as follows:

  Barrier     Call        Explanation
  ---------   --------    ----------------------------------
  rmb()       dsb()       Data synchronization barrier - system
  dma_rmb()   dmb(osh)    data memory barrier - outer sharable
  smp_rmb()   dmb(ish)    data memory barrier - inner sharable

These new barriers are not as safe as the standard rmb() and wmb(). Specifically they do not guarantee ordering between coherent and incoherent memories. The primary use case for these would be to enforce ordering of reads and writes when accessing coherent memory that is shared between the CPU and a device.

It may also be noted that there is no dma_mb(). Most architectures don't provide a good mechanism for performing a coherent-only full barrier without resorting to the same mechanism used in mb(). As such there isn't much to be gained in trying to define such a function.

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Miller <davem@davemloft.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
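A sketch of the intended driver-side use: ordering CPU reads of a coherent, DMA-visible descriptor ring behind the device's completion flag without paying for a full rmb(). The descriptor layout and names are made up for illustration:

  #include <linux/compiler.h>   /* READ_ONCE() */
  #include <linux/types.h>
  #include <asm/barrier.h>      /* dma_rmb() */

  struct sketch_rx_desc {
      u32 status;               /* written by the device */
      u32 len;
      u64 addr;
  };

  static bool sketch_rx_desc_done(struct sketch_rx_desc *desc, u32 done_bit)
  {
      if (!(READ_ONCE(desc->status) & done_bit))
          return false;

      /* Coherent memory shared with the device, so dma_rmb() is enough to
       * order the status read against the reads of the remaining
       * descriptor fields that follow. */
      dma_rmb();
      return true;
  }

The write side is symmetric: fill in the descriptor fields, issue dma_wmb(), then set the ownership/valid bit the device polls.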
-
- 11 Dec 2014, 2 commits
-
-
Committed by Kirill A. Shutemov
As with the small zero page, the huge zero page should not be accounted in the smaps report as a normal page.

For small pages we rely on vm_normal_page() to filter out the zero page, but vm_normal_page() is not designed to handle pmds. We only get here due to a hackish cast of pmd to pte in smaps_pte_range() -- the pte and pmd formats are not necessarily compatible on each and every architecture.

Let's add a separate codepath to handle pmds. follow_trans_huge_pmd() will detect the huge zero page for us. We would need a pmd_dirty() helper to do this properly. The patch adds it to THP-enabled architectures which don't yet have one.

[akpm@linux-foundation.org: use do_div to fix 32-bit build]
Signed-off-by: "Kirill A. Shutemov" <kirill@shutemov.name>
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Tested-by: Fengwei Yin <yfw.kernel@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Daniel Borkmann
As there are now no remaining users of arch_fast_hash(), let's kill it entirely.

This basically reverts commit 71ae8aac ("lib: introduce arch optimized hash library") and follow-up work, that is, e.g., commit 23721754 ("lib: hash: follow-up fixups for arch hash"), commit e3fec2f7 ("lib: Add missing arch generic-y entries for asm-generic/hash.h") and last but not least commit 6a02652d ("perf tools: Fix include for non x86 architectures").

Cc: Francesco Fusco <fusco@ntop.org>
Cc: Thomas Graf <tgraf@suug.ch>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 05 Dec 2014, 1 commit
-
-
Committed by Sonny Rao
This is a bug fix for using physical arch timers when the arch_timer_use_virtual boolean is false. It restores the arch_counter_get_cntpct() function after its removal in 0d651e4e "clocksource: arch_timer: use virtual counters".

We need this on certain ARMv7 systems which are architected like this:

* The firmware doesn't know and doesn't care about hypervisor mode and we don't want to add the complexity of a hypervisor there.
* The firmware isn't involved in SMP bringup or resume.
* The arch timer comes up with an uninitialized offset between the virtual and physical counters. Each core gets a different random offset.
* The device boots in "Secure SVC" mode.
* Nothing has touched the reset value of CNTHCTL.PL1PCEN or CNTHCTL.PL1PCTEN (both default to 1 at reset).

One example of such a system is RK3288, where it is much simpler to use the physical counter since there's nobody managing the offset and each time a core goes down and comes back up it will get reinitialized to some other random value.

Fixes: 0d651e4e ("clocksource: arch_timer: use virtual counters")
Cc: stable@vger.kernel.org
Signed-off-by: Sonny Rao <sonnyrao@chromium.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
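For orientation, an illustrative reconstruction of what such a physical-counter read helper looks like on the 32-bit side (the AArch64 equivalent reads cntpct_el0 instead); written from memory, so treat the details as assumptions:

  static inline u64 sketch_counter_get_cntpct(void)
  {
      u64 cval;

      isb();
      /* CNTPCT: 64-bit CP15 read of the physical counter. */
      asm volatile("mrrc p15, 0, %Q0, %R0, c14" : "=r" (cval));
      return cval;
  }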
-
- 04 Dec 2014, 6 commits
-
-
Committed by Stefano Stabellini
Merge xen/mm32.c into xen/mm.c. As a consequence the code gets compiled on arm64 too.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Stefano Stabellini
dev_addr is the machine address of the page. The new parameter can be used by the ARM and ARM64 implementations of xen_dma_map_page to find out if the page is a local page (pfn == mfn) or a foreign page (pfn != mfn).

dev_addr could be retrieved again from the physical address, using pfn_to_mfn, but it requires accessing an rbtree. Since we already have the dev_addr in our hands at the call site there is no need to get the mfn twice.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Committed by Stefano Stabellini
Introduce a boolean flag and an accessor function to check whether a device is dma_coherent. Set the flag from set_arch_dma_coherent_ops.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
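The rough shape of such a flag and accessor, assuming it lives in the per-device archdata; the field placement is an assumption, not the verified upstream layout:

  /* Set from set_arch_dma_coherent_ops() when the device is declared
   * coherent (e.g. via the dma-coherent DT property). */
  static inline void sketch_set_dma_coherent(struct device *dev, bool coherent)
  {
      dev->archdata.dma_coherent = coherent;
  }

  static inline bool sketch_is_device_dma_coherent(struct device *dev)
  {
      return dev->archdata.dma_coherent;
  }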
-
Committed by Andre Przywara
Currently the kernel patches all necessary instructions once at boot time, so modules are not covered by this.

Change the apply_alternatives() function to take a beginning and an end pointer and introduce a new variant (apply_alternatives_all()) to cover the existing use case for the static kernel image section.

Add a module_finalize() function to arm64 to check for an alternatives section in a module and patch only the instructions from that specific area. Since that module code is not touched before the module initialization has ended, we don't need to halt the machine before doing the patching in the module's code.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Chunyan Zhang
If I include asm/irq.h at the top of my code and set ARCH=arm64, I'll get a compile warning; details are below:

  warning: ‘struct pt_regs’ declared inside parameter list [enabled by default]

This patch is suggested by Arnd, see:
http://lists.infradead.org/pipermail/linux-arm-kernel/2014-December/308270.html

Signed-off-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
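The conventional fix for this class of warning, assuming the header only needs the type name and not its layout, is a forward declaration ahead of any prototype that uses it; a sketch:

  /* A forward declaration is enough for prototypes that only take a
   * pointer to the struct; no need to pull in the full definition. */
  struct pt_regs;

  /* Example of a prototype that would otherwise trigger the warning. */
  extern void (*handle_arch_irq)(struct pt_regs *);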
-
Committed by Fabio Estevam
Building arm64.allmodconfig leads to the following warning:

  usb/gadget/function/f_ncm.c:203:0: warning: "NCAPS" redefined
   #define NCAPS (USB_CDC_NCM_NCAP_ETH_FILTER | USB_CDC_NCM_NCAP_CRC_MODE)
   ^
  In file included from /home/build/work/batch/arch/arm64/include/asm/io.h:32:0,
    from /home/build/work/batch/include/linux/clocksource.h:19,
    from /home/build/work/batch/include/clocksource/arm_arch_timer.h:19,
    from /home/build/work/batch/arch/arm64/include/asm/arch_timer.h:27,
    from /home/build/work/batch/arch/arm64/include/asm/timex.h:19,
    from /home/build/work/batch/include/linux/timex.h:65,
    from /home/build/work/batch/include/linux/sched.h:19,
    from /home/build/work/batch/arch/arm64/include/asm/compat.h:25,
    from /home/build/work/batch/arch/arm64/include/asm/stat.h:23,
    from /home/build/work/batch/include/linux/stat.h:5,
    from /home/build/work/batch/include/linux/module.h:10,
    from /home/build/work/batch/drivers/usb/gadget/function/f_ncm.c:19:
  arch/arm64/include/asm/cpufeature.h:27:0: note: this is the location of the previous definition
   #define NCAPS 2

So add an ARM64 prefix to avoid this problem.

Reported-by: Olof's autobuilder <build@lixom.net>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
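The resulting rename is essentially a namespacing change; illustratively (the value 2 comes from the warning quoted above):

  /* was: #define NCAPS 2, which collided with the NCAPS macro in f_ncm.c */
  #define ARM64_NCAPS  2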
-
- 03 Dec 2014, 1 commit
-
-
Committed by Jungseok Lee
By grouping read-mostly data together, we can avoid unnecessary cache line bouncing. Other architectures, such as ARM and x86, adopted the same idea.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
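The mechanism is the __read_mostly section attribute; a minimal sketch (the variable name is invented):

  #include <linux/cache.h>

  /* Written once during boot, read on hot paths afterwards; the attribute
   * places it in .data..read_mostly so it does not share cache lines with
   * frequently written data. */
  static unsigned long sketch_boot_feature_mask __read_mostly;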
-
- 01 Dec 2014, 1 commit
-
-
Committed by Vladimir Murzin
Update handling of cacheflush syscall with changes made in arch/arm counterpart:

- return error to userspace when flushing syscall fails
- split user cache-flushing into interruptible chunks
- don't bother rounding to nearest vma

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
[will: changed internal return value from -EINTR to 0 to match arch/arm/]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 28 Nov 2014, 3 commits
-
-
Committed by AKASHI Takahiro
secure_computing() is called first in syscall_trace_enter() so that a system call will be aborted quickly, without doing the subsequent syscall tracing, if seccomp rules want to deny that system call.

On compat tasks, syscall numbers for system calls allowed in seccomp mode 1 are different from those on normal tasks, and so the __NR_seccomp_xxx_32 values need to be redefined.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by AKASHI Takahiro
SIGSYS is primarily used in secure computing to notify a tracer of syscall events. This patch allows a signal handler on a compat task to get correct information with SA_SIGINFO specified when this signal is delivered.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by AKASHI Takahiro
This patch allows a compat task to issue the seccomp() system call.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 26 Nov 2014, 2 commits
-
-
Committed by Marc Zyngier
In order to support CONFIG_GENERIC_MSI_IRQ_DOMAIN, we need to define msi_alloc_info_t. As the generic version exposed in asm-generic/msi.h is perfectly convenient, import this file as asm/msi.h.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Link: https://lkml.kernel.org/r/1416839720-18400-2-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
Committed by Laura Abbott
Every other architecture with permanent fixed addresses has FIX_HOLE as the first entry. This seems to be designed as a debugging aid, but there are a couple of side effects of not having FIX_HOLE:

- If the first fixed address is 0, fix_to_virt -> virt_to_fix triggers a BUG_ON for the virtual address being equal to FIXADDR_TOP
- fix_to_virt may return a value outside of FIXADDR_START and FIXADDR_TOP which may look like a bug to a developer.

Match up with other architectures and make everything clearer by adding FIX_HOLE.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
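A sketch of what the change amounts to: reserving index 0 so that no real slot's fix_to_virt() result equals FIXADDR_TOP. The entries other than FIX_HOLE are illustrative placeholders, not the real arm64 list:

  enum fixed_addresses {
      FIX_HOLE,                         /* index 0 deliberately unused */
      FIX_EXAMPLE_EARLYCON,             /* placeholder permanent entry */
      __end_of_permanent_fixed_addresses,

      /* temporary early_ioremap slots would follow here */
      __end_of_fixed_addresses
  };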
-
- 25 Nov 2014, 9 commits
-
-
Committed by Laura Abbott
The fixmap API was originally added for arm64 for early_ioremap purposes. It can be used for other purposes too, so move the initialization from ioremap to somewhere more generic. This makes it obvious where the fixmap is being set up and allows for a cleaner implementation of __set_fixmap.

Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Laura Abbott
handle_arch_irq isn't actually text, it's just a function pointer. It doesn't need to be stored in the text section and doing so causes problems if we ever want to make the kernel text read only. Declare handle_arch_irq as a proper function pointer stored in the data section.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
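Roughly the end state being described, following the generic set_handle_irq() pattern; a sketch rather than the exact arm64 hunk:

  #include <linux/init.h>

  struct pt_regs;

  /* An ordinary function pointer living in .data, not a slot emitted into
   * .text from assembly. */
  void (*handle_arch_irq)(struct pt_regs *);

  void __init set_handle_irq(void (*handle_irq)(struct pt_regs *))
  {
      if (handle_arch_irq)
          return;               /* keep the first registration */
      handle_arch_irq = handle_irq;
  }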
-
Committed by Mark Rutland
While we currently expect self-hosted debug support to be identical across CPUs, we don't currently sanity check this.

This patch adds logging of the ID_AA64DFR{0,1}_EL1 values and associated sanity checking code.

It's not clear to me whether we need to check PMUVer, TraceVer, and DebugVer, as we don't currently rely on these fields at all.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Andre Przywara
The ARM erratum 832075 applies to certain revisions of Cortex-A57; one of the workarounds is to change device loads into using load-acquire semantics. This is achieved using the alternatives framework.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Andre Przywara
The ARM errata 819472, 826319, 827319 and 824069 define the same workaround for these hardware issues in certain Cortex-A53 parts. Use the new alternatives framework and the CPU MIDR detection to patch "cache clean" into "cache clean and invalidate" instructions if an affected CPU is detected at runtime.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[will: add __maybe_unused to squash gcc warning]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Laszlo Ersek
To allow handling of incoherent memslots in a subsequent patch, this patch adds a parameter 'ipa_uncached' to cache_coherent_guest_page() so that we can instruct it to flush the page's contents to DRAM even if the guest has caching globally enabled.

Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Andre Przywara
After each CPU has been started, we iterate through a list of CPU features or bugs to detect CPUs which need (or could benefit from) kernel code patches.

For each feature/bug there is a function which checks if that particular CPU is affected. We will later provide some more generic functions for common things like testing for certain MIDR ranges. We do this for every CPU to cover big.LITTLE systems properly as well.

If a certain feature/bug has been detected, the capability bit will be set, so that later the call to apply_alternatives() will trigger the actual code patching.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Andre Przywara
With a blatant copy of some x86 bits we introduce the alternative runtime patching "framework" to arm64. This is quite basic for now and we only provide the functions we need at this time. This is connected to the newly introduced feature bits.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Andre Przywara
For taking note if at least one CPU in the system needs a bug workaround or would benefit from a code optimization, we create a new bitmap to hold (artificial) feature bits. Since elf_hwcap is part of the userland ABI, we keep it alone and introduce a new data structure for that (along with some accessors).

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
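The accessors described above plausibly look something like the sketch below; names carry a sketch_ prefix to mark them as illustrative, and the capacity constant reuses the value quoted in the NCAPS warning earlier in this log:

  #include <linux/bitops.h>

  #define SKETCH_ARM64_NCAPS  2

  static DECLARE_BITMAP(sketch_cpu_hwcaps, SKETCH_ARM64_NCAPS);

  static inline bool sketch_cpus_have_cap(unsigned int num)
  {
      if (num >= SKETCH_ARM64_NCAPS)
          return false;
      return test_bit(num, sketch_cpu_hwcaps);
  }

  static inline void sketch_cpus_set_cap(unsigned int num)
  {
      if (num < SKETCH_ARM64_NCAPS)
          set_bit(num, sketch_cpu_hwcaps);
  }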
-
- 21 Nov 2014, 3 commits
-
-
Committed by Punit Agrawal
The CP15 barrier instructions (CP15ISB, CP15DSB and CP15DMB) are deprecated in the ARMv7 architecture, superseded by the ISB, DSB and DMB instructions respectively. Some implementations may provide the ability to disable the CP15 barriers by disabling the CP15BEN bit in SCTLR_EL1. If not enabled, the encodings for these instructions become undefined.

To support legacy software using these instructions, this patch registers hooks to:

* emulate CP15 barriers and warn the user about their use
* toggle CP15BEN in SCTLR_EL1

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Punit Agrawal
The SWP instruction was deprecated in the ARMv6 architecture. The ARMv7 multiprocessing extensions mandate that SWP/SWPB instructions are treated as undefined from reset, with the ability to enable them through the System Control Register SW bit. With ARMv8, the option to enable these instructions through the System Control Register was dropped as well.

To support legacy applications using these instructions, port the emulation of the SWP and SWPB instructions from the arm port to arm64.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Punit Agrawal
Port support for AArch32 instruction condition code checking from arm to arm64.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-