- 16 January 2015, 3 commits
-
-
By Mario Smarduch

This patch enables ARMv8 dirty page logging support. It plugs ARMv8 into the generic layer through a Kconfig symbol, and drops the earlier ARM64 constraints so that logging is enabled at the architecture layer.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
-
By Mario Smarduch

This patch adds support for the arm64 hyp interface to flush all TLBs associated with a VMID.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
-
By Mario Smarduch

This patch adds arm64 helpers to write protect pmds/ptes and retrieve permissions while logging dirty pages. It also adds a prototype to write protect a memory slot, and a pmd define to check for read-only pmds.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
-
- 15 January 2015, 1 commit
-
-
By Wei Huang

arm64 uses its own copy of the exit handler (arm64/kvm/handle_exit.c). Currently this file doesn't hook up with any trace points; as a result, users might not see certain events (e.g. HVC and WFI) while using ftrace with arm64 KVM. This patch fixes the issue by adding a new trace file and defining two trace events (one of which is shared by wfi and wfe) for arm64. The new trace points are then linked with the related functions in handle_exit.c.

Signed-off-by: Wei Huang <wei@redhat.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 11 January 2015, 1 commit
-
-
By Eric Auger

Since the advent of VGIC dynamic initialization, the VGIC is initialized quite late: on the first vcpu run, or "on demand" when injecting an IRQ or when the guest sets its registers. This initialization could be initiated explicitly much earlier by user space, as soon as it has provided the required dimensioning parameters.

This patch adds a new entry to the VGIC KVM device that allows the user to manually request the VGIC init:
- a new KVM_DEV_ARM_VGIC_GRP_CTRL group is introduced;
- its first attribute is KVM_DEV_ARM_VGIC_CTRL_INIT.

The rationale behind introducing a group is to be able to add other controls later on, if needed.

Signed-off-by: Eric Auger <eric.auger@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
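As a sketch, user space would trigger the early init through the standard KVM device-attribute ioctl; the group/attribute names come from the commit above, while the file-descriptor handling around it is assumed:

```c
#include <linux/kvm.h>
#include <sys/ioctl.h>

/* vgic_fd: fd returned by KVM_CREATE_DEVICE for the VGIC (assumed to be
 * already created and dimensioned by the caller). */
static int vgic_request_init(int vgic_fd)
{
	struct kvm_device_attr attr = {
		.group = KVM_DEV_ARM_VGIC_GRP_CTRL,
		.attr  = KVM_DEV_ARM_VGIC_CTRL_INIT,	/* no payload needed */
	};

	return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
}
```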
-
- 24 December 2014, 3 commits
-
-
By Jungseok Lee

This patch adds a pgd_page definition in order to keep supporting the HAVE_GENERIC_RCU_GUP configuration. In addition, it changes the pud_page expression to align with pmd_page for readability.

The introduction of pgd_page resolves the following build breakage under the 4KB + 4-level memory management combo:

mm/gup.c: In function 'gup_huge_pgd':
mm/gup.c:889:2: error: implicit declaration of function 'pgd_page' [-Werror=implicit-function-declaration]
  head = pgd_page(orig);
  ^
mm/gup.c:889:7: warning: assignment makes pointer from integer without a cast
  head = pgd_page(orig);

Cc: Will Deacon <will.deacon@arm.com>
Cc: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
[catalin.marinas@arm.com: remove duplicate pmd_page definition]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
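A minimal sketch of the kind of definition being added, assuming it mirrors the existing pmd_page/pud_page pattern in the arm64 headers (the exact mask is not quoted in the log):

```c
/* Sketch only: convert the entry's physical address to its struct page,
 * mirroring pmd_page(); PHYS_MASK per the arch/arm64 headers. */
#define pgd_page(pgd)	pfn_to_page(__phys_to_pfn(pgd_val(pgd) & PHYS_MASK))
```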
-
By Will Deacon

The usual defconfig tweaks, this time:
- FHANDLE and AUTOFS4_FS to keep systemd happy
- PID_NS, QUOTA and KEYS to keep LTP happy
- Disable DEBUG_PREEMPT, as this *really* hurts performance

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Lorenzo Pieralisi

On arm64 the TTBR0_EL1 register is set either to the reserved TTBR0 page tables on boot, or to the active_mm mappings belonging to user space processes; it must never be set to the swapper_pg_dir page table mappings. When a CPU is booted, its active_mm is set to init_mm even though its TTBR0_EL1 points at the reserved TTBR0 page mappings. This implies that when __cpu_suspend is triggered, the active_mm can point at init_mm even if the current TTBR0_EL1 register contains the reserved TTBR0_EL1 mappings.

Therefore, the mm save and restore executed in __cpu_suspend might turn out to be erroneous: if current->active_mm corresponds to init_mm, on resume from low power the code ends up restoring into TTBR0_EL1 the init_mm mappings, which are global and can cause speculation of TLB entries that end up being propagated to user space.

This patch fixes the issue by checking the active_mm pointer before restoring the TTBR0 mappings. If current->active_mm == &init_mm, the code sets TTBR0_EL1 to the reserved TTBR0 mapping instead of switching back to the active_mm, which is the expected behaviour corresponding to the TTBR0_EL1 settings when __cpu_suspend was entered.

Fixes: 95322526 ("arm64: kernel: cpu_{suspend/resume} implementation")
Cc: <stable@vger.kernel.org> # 3.14+: 18ab7db6
Cc: <stable@vger.kernel.org> # 3.14+: 714f5992
Cc: <stable@vger.kernel.org> # 3.14+: c3684fbb
Cc: <stable@vger.kernel.org> # 3.14+
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
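A minimal sketch of the resume-path check the text describes, with helper names assumed from the surrounding arm64 code:

```c
/* Sketch: on resume, never restore init_mm's global mappings into
 * TTBR0_EL1; fall back to the reserved tables instead. */
static void restore_ttbr0(void)
{
	struct mm_struct *mm = current->active_mm;

	if (mm == &init_mm)
		cpu_set_reserved_ttbr0();	/* reserved (empty) TTBR0 tables */
	else
		cpu_switch_mm(mm->pgd, mm);	/* restore the user mappings */
}
```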
-
- 22 December 2014, 1 commit
-
-
By Catalin Marinas

Commit a3a60f81 (dma-mapping: replace set_arch_dma_coherent_ops with arch_setup_dma_ops) changes the of_dma_configure() arch dma_ops callback to arch_setup_dma_ops, but only the arch/arm code is updated. Subsequent commit 97890ba9 (dma-mapping: detect and configure IOMMU in of_dma_configure) changes the arch_setup_dma_ops() prototype further to handle the IOMMU. This patch makes the corresponding arm64 changes.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Will Deacon <will.deacon@arm.com>
-
- 18 December 2014, 1 commit
-
-
By Christian Borntraeger

ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145).

Change the spinlock code to replace ACCESS_ONCE with READ_ONCE.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
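The problem shows up exactly where a whole lock word (a small aggregate) is loaded at once; a sketch of the pattern, assuming the arm64 ticket-lock layout with owner/next fields:

```c
/* arch_spinlock_t is a small struct (ticket lock), i.e. non-scalar:
 * ACCESS_ONCE(*lock) may lose its volatile semantics under gcc's SRA. */
static inline int spin_is_locked_sketch(arch_spinlock_t *lock)
{
	arch_spinlock_t lockval = READ_ONCE(*lock);	/* was ACCESS_ONCE(*lock) */

	return lockval.owner != lockval.next;
}
```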
-
- 14 December 2014, 1 commit
-
-
By Riku Voipio

Following the suggestions from Andrew Morton and Stephen Rothwell, don't expand the ARCH list in kernel/gcov/Kconfig. Instead, define an ARCH_HAS_GCOV_PROFILE_ALL bool which architectures can enable. Set ARCH_HAS_GCOV_PROFILE_ALL on the architectures where it was previously allowed, plus ARM64, which I tested.

Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
Cc: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 13 December 2014, 4 commits
-
-
By Christoffer Dall

Introduce a new function to unmap user RAM regions in the stage2 page tables. This is needed on reboot (or when the guest turns off the MMU) to ensure we fault in pages again and make the dcache, RAM, and icache coherent. Using unmap_stage2_range for the whole guest physical range does not work, because that unmaps IO regions (such as the GIC) which will not be recreated or, in the best case, will be faulted in on a page-by-page basis.

Call this function on secondary and subsequent calls to the KVM_ARM_VCPU_INIT ioctl so that a reset VCPU will detect that the guest Stage-1 MMU is off when faulting in pages and make the caches coherent.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
By Christoffer Dall

When a vcpu calls SYSTEM_OFF or SYSTEM_RESET with PSCI v0.2, the vcpus should really be turned off for the VM, adhering to the suggestions in the PSCI spec, and it's the sane thing to do. Also, clarify the behavior and expectations for exits to user space with the KVM_EXIT_SYSTEM_EVENT case.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
By Christoffer Dall

It is not clear that this ioctl can be called multiple times for a given vcpu. Userspace already does this, so clarify the ABI. Also specify that userspace is expected to always make secondary and subsequent calls to the ioctl with the same parameters for the VCPU as the initial call (which userspace also already does).

Add code to check that userspace doesn't violate that ABI in the future, and move the kvm_vcpu_set_target() function, which is currently duplicated between the 32-bit and 64-bit versions in guest.c, to a common static function in arm.c, shared between both architectures.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
By Christoffer Dall

When userspace resets the vcpu using KVM_ARM_VCPU_INIT, we should also reset the HCR, because we now modify the HCR dynamically to enable/disable trapping of guest accesses to the VM registers. This is crucial for reboot of VMs to work, since otherwise we will not be doing the necessary cache maintenance operations when faulting in pages with the guest MMU off.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 12 December 2014, 1 commit
-
-
By Alexander Duyck

There are a number of situations where the mandatory barriers rmb() and wmb() are used to order memory/memory operations in device drivers, and those barriers are much heavier than they actually need to be. For example, in the case of PowerPC, wmb() calls the heavy-weight sync instruction when for coherent memory operations all that is really needed is an lsync or eieio instruction.

This commit adds coherent-only versions of the mandatory memory barriers rmb() and wmb(). In most cases this should result in the barrier being the same as the SMP barriers for the SMP case; however, in some cases we use a barrier that is somewhere in between rmb() and smp_rmb(). For example, on ARM the rmb barriers break down as follows:

Barrier     Call        Explanation
---------   ---------   --------------------------------------
rmb()       dsb()       data synchronization barrier - system
dma_rmb()   dmb(osh)    data memory barrier - outer shareable
smp_rmb()   dmb(ish)    data memory barrier - inner shareable

These new barriers are not as safe as the standard rmb() and wmb(). Specifically, they do not guarantee ordering between coherent and incoherent memories. The primary use case for these would be to enforce ordering of reads and writes when accessing coherent memory that is shared between the CPU and a device.

It may also be noted that there is no dma_mb(). Most architectures don't provide a good mechanism for performing a coherent-only full barrier without resorting to the same mechanism used in mb(). As such, there isn't much to be gained in trying to define such a function.

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Miller <davem@davemloft.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
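A sketch of the intended usage in a driver polling a descriptor ring kept in coherent DMA memory; the descriptor layout and the "done" bit are hypothetical:

```c
#include <linux/compiler.h>
#include <linux/types.h>
#include <asm/barrier.h>

struct ring_desc {		/* hypothetical device descriptor */
	u32 status;		/* bit 0: descriptor done (DD) */
	u32 len;
};

static bool desc_ready(struct ring_desc *d)
{
	if (!(READ_ONCE(d->status) & 1))
		return false;

	/* Order the status read before reading the payload fields.
	 * dma_rmb() suffices: both live in coherent memory shared with
	 * the device, so no system-wide dsb()-class barrier is needed. */
	dma_rmb();
	return true;
}
```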
-
- 11 December 2014, 5 commits
-
-
By Mark Rutland

If the final page table entry we walk is a valid mapping, the page table dumping code will not log the region this entry is part of, as the final note_page call in ptdump_show will trigger an early return. Luckily this isn't seen on contemporary systems, as they typically don't have enough RAM to extend the linear mapping right to the end of the address space.

In note_page, we log a region when we reach its end (i.e. we hit an entry immediately afterwards which has different prot bits or is invalid). The final entry has no subsequent entry, so we will not log it immediately. We try to cater for this with a subsequent call to note_page in ptdump_show, but this returns early as 0 < LOWEST_ADDR, and hence we will skip a valid mapping if it spans to the final entry we note.

Unlike 32-bit ARM, the pgd with the kernel mapping is never shared with user mappings, so we do not need the check to ensure we don't log user page tables. Due to the way addr is constructed in the walk_* functions, it can never be less than LOWEST_ADDR when walking the page tables, so it is not necessary to avoid dereferencing invalid table addresses. The existing checks for st->current_prot and st->marker[1].start_address are sufficient to ensure we will not print and/or dereference garbage when trying to log information.

This patch removes the unnecessary check against LOWEST_ADDR, ensuring we log all regions in the kernel page table, including those which span right to the end of the address space.

Cc: Kees Cook <keescook@chromium.org>
Acked-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Mark Rutland

When building with 48-bit VAs, it's possible to get the following warning when building the arm64 page table dumping code:

arch/arm64/mm/dump.c: In function 'walk_pgd':
arch/arm64/mm/dump.c:266:2: warning: right shift count >= width of type
  pgd_t *pgd = pgd_offset(mm, 0);
  ^

As pgd_offset is a macro and the second argument is not cast to any particular type, the zero will be given integer type by the compiler. As pgd_offset passes the argument on to pgd_index, we then try to shift the 32-bit integer by at least 39 bits (for 4k pages).

Elsewhere, pgd_offset is passed a second argument of unsigned long type, so let's do the same here by passing '0UL' rather than '0'.

Cc: Kees Cook <keescook@chromium.org>
Acked-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
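The one-line shape of the fix, as described above:

```c
pgd_t *pgd = pgd_offset(mm, 0UL);	/* was: pgd_offset(mm, 0) */
```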
-
By Krzysztof Kozlowski

Fix a build failure of defconfig when PM_SLEEP is disabled (e.g. by disabling SUSPEND) and CPU_IDLE is enabled:

arch/arm64/kernel/psci.c:543:2: error: unknown field 'cpu_suspend' specified in initializer
  .cpu_suspend = cpu_psci_cpu_suspend,
  ^
arch/arm64/kernel/psci.c:543:2: warning: initialization from incompatible pointer type [enabled by default]
arch/arm64/kernel/psci.c:543:2: warning: (near initialization for 'cpu_psci_ops.cpu_prepare') [enabled by default]
make[1]: *** [arch/arm64/kernel/psci.o] Error 1

The cpu_operations.cpu_suspend field exists only if ARM64_CPU_SUSPEND is defined, not CPU_IDLE.

Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Kirill A. Shutemov

Like the small zero page, the huge zero page should not be accounted as a normal page in the smaps report. For small pages we rely on vm_normal_page() to filter out the zero page, but vm_normal_page() is not designed to handle pmds. We only get here due to a hackish cast from pmd to pte in smaps_pte_range() -- the pte and pmd formats are not necessarily compatible on each and every architecture. Let's add a separate codepath to handle pmds. follow_trans_huge_pmd() will detect the huge zero page for us.

We would need a pmd_dirty() helper to do this properly. The patch adds it to the THP-enabled architectures which don't yet have one.

[akpm@linux-foundation.org: use do_div to fix 32-bit build]
Signed-off-by: "Kirill A. Shutemov" <kirill@shutemov.name>
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Tested-by: Fengwei Yin <yfw.kernel@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
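On arm64 the new helper can be built from the existing pte accessors; a sketch, assuming the pmd-to-pte conversion pattern used by the other THP helpers:

```c
/* Sketch: reuse the pte-level dirty test via the pmd_pte() conversion,
 * as the other arm64 THP helpers do. */
#define pmd_dirty(pmd)	pte_dirty(pmd_pte(pmd))
```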
-
By Daniel Borkmann

As there are now no remaining users of arch_fast_hash(), let's kill it entirely. This basically reverts commit 71ae8aac ("lib: introduce arch optimized hash library") and follow-up work, that is, e.g., commit 23721754 ("lib: hash: follow-up fixups for arch hash"), commit e3fec2f7 ("lib: Add missing arch generic-y entries for asm-generic/hash.h") and, last but not least, commit 6a02652d ("perf tools: Fix include for non x86 architectures").

Cc: Francesco Fusco <fusco@ntop.org>
Cc: Thomas Graf <tgraf@suug.ch>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 05 December 2014, 3 commits
-
-
By Ding Tianhong

Commit 3690951f (arm64: Use swiotlb late initialisation) switched the DMA mapping code to swiotlb_late_init_with_default_size(), so arm64_swiotlb_init() is no longer used; remove it.

Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Sonny Rao

This is a bug fix for using physical arch timers when the arch_timer_use_virtual boolean is false. It restores the arch_counter_get_cntpct() function after its removal in 0d651e4e ("clocksource: arch_timer: use virtual counters").

We need this on certain ARMv7 systems which are architected like this:

* The firmware doesn't know and doesn't care about hypervisor mode, and we don't want to add the complexity of a hypervisor there.
* The firmware isn't involved in SMP bringup or resume.
* The arch timers come up with an uninitialized offset between the virtual and physical counters. Each core gets a different random offset.
* The device boots in "Secure SVC" mode.
* Nothing has touched the reset value of CNTHCTL.PL1PCEN or CNTHCTL.PL1PCTEN (both default to 1 at reset).

One example of such a system is RK3288, where it is much simpler to use the physical counter, since there's nobody managing the offset and each time a core goes down and comes back up it will get reinitialized to some other random value.

Fixes: 0d651e4e ("clocksource: arch_timer: use virtual counters")
Cc: stable@vger.kernel.org
Signed-off-by: Sonny Rao <sonnyrao@chromium.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
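A sketch of the restored ARMv7 physical-counter read, using the CP15 encoding for CNTPCT; the isb() placement follows the usual counter-read pattern, so treat the details as an approximation of the actual header:

```c
static inline u64 arch_counter_get_cntpct(void)
{
	u64 cval;

	isb();	/* keep the counter read from being speculated early */
	asm volatile("mrrc p15, 0, %Q0, %R0, c14" : "=r" (cval));
	return cval;
}
```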
-
By Suravee Suthikulpanit

Since PCIe is behind an SMMUv1, which only supports 15-bit stream IDs, only a 7-bit PCI bus id is used to specify the stream ID. Therefore, we limit the PCI bus range to 0x7f.

Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 04 December 2014, 8 commits
-
-
By Stefano Stabellini

Merge xen/mm32.c into xen/mm.c. As a consequence, the code gets compiled on arm64 too.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Stefano Stabellini

dev_addr is the machine address of the page. The new parameter can be used by the ARM and ARM64 implementations of xen_dma_map_page to find out whether the page is a local page (pfn == mfn) or a foreign page (pfn != mfn). dev_addr could be retrieved again from the physical address, using pfn_to_mfn, but that requires accessing an rbtree. Since we already have dev_addr in our hands at the call site, there is no need to look up the mfn twice.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
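The check the extra parameter makes cheap, sketched with names assumed from the Xen/ARM code:

```c
/* Sketch: a page is local to this domain when its machine frame number
 * (derived from dev_addr) equals its pseudo-physical frame number. */
static bool page_is_local(struct page *page, dma_addr_t dev_addr)
{
	return PFN_DOWN(dev_addr) == page_to_pfn(page);
}
```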
-
By Stefano Stabellini

Introduce a boolean flag and an accessor function to check whether a device is dma_coherent. Set the flag from set_arch_dma_coherent_ops.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Andre Przywara

Currently the kernel patches all necessary instructions once at boot time, so modules are not covered by this. Change the apply_alternatives() function to take a beginning and an end pointer, and introduce a new variant (apply_alternatives_all()) to cover the existing use case for the static kernel image section.

Add a module_finalize() function to arm64 to check for an alternatives section in a module and patch only the instructions from that specific area. Since that module code is not touched before the module initialization has ended, we don't need to halt the machine before doing the patching in the module's code.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
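A sketch of what such a module_finalize() hook looks like; the section name ".altinstructions" and the exact apply_alternatives() signature (start plus size here) are assumptions based on the text:

```c
#include <linux/elf.h>
#include <linux/module.h>
#include <linux/string.h>

int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
		    struct module *me)
{
	const Elf_Shdr *s, *end = sechdrs + hdr->e_shnum;
	const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;

	for (s = sechdrs; s < end; s++) {
		/* Patch only the module's own alternatives section, if any. */
		if (!strcmp(".altinstructions", secstrs + s->sh_name))
			apply_alternatives((void *)s->sh_addr, s->sh_size);
	}
	return 0;
}
```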
-
By Daniel Thompson

If the overflow threshold for a counter is set above or near the 0xffffffff boundary, then the kernel may lose track of the overflow, causing only events that occur *after* the overflow to be recorded. Specifically, the problem occurs when the value of the performance counter overtakes its original programmed value due to wrap-around.

Typical solutions to this problem are either to avoid programming values likely to be overtaken, or to treat the overflow bit as the 33rd bit of the counter. It's somewhat fiddly to refactor the code to correctly handle the 33rd bit during irqsave sections (context switches, for example), so instead we take the simpler approach of avoiding values likely to be overtaken.

We set the limit to half of max_period because this matches the limit imposed in __hw_perf_event_init(). This causes a doubling of the interrupt rate for large threshold values; however, even with a very fast counter ticking at 4GHz, the interrupt rate would only be ~1Hz.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
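The clamp described above reduces to a one-line guard when the event period is programmed; a sketch with a hypothetical helper name:

```c
#include <linux/kernel.h>

/* Never program a threshold the free-running counter could overtake:
 * cap the period at half the counter's maximum, per the commit text. */
static u64 clamp_event_period(u64 left, u64 max_period)
{
	return min(left, max_period >> 1);
}
```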
-
By Chunyan Zhang

If I include asm/irq.h at the top of my code and set ARCH=arm64, I'll get a compile warning; details below:

warning: 'struct pt_regs' declared inside parameter list [enabled by default]

This patch was suggested by Arnd, see:
http://lists.infradead.org/pipermail/linux-arm-kernel/2014-December/308270.html

Signed-off-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Fabio Estevam

Building arm64.allmodconfig leads to the following warning:

usb/gadget/function/f_ncm.c:203:0: warning: "NCAPS" redefined
#define NCAPS (USB_CDC_NCM_NCAP_ETH_FILTER | USB_CDC_NCM_NCAP_CRC_MODE)
^
In file included from /home/build/work/batch/arch/arm64/include/asm/io.h:32:0,
                 from /home/build/work/batch/include/linux/clocksource.h:19,
                 from /home/build/work/batch/include/clocksource/arm_arch_timer.h:19,
                 from /home/build/work/batch/arch/arm64/include/asm/arch_timer.h:27,
                 from /home/build/work/batch/arch/arm64/include/asm/timex.h:19,
                 from /home/build/work/batch/include/linux/timex.h:65,
                 from /home/build/work/batch/include/linux/sched.h:19,
                 from /home/build/work/batch/arch/arm64/include/asm/compat.h:25,
                 from /home/build/work/batch/arch/arm64/include/asm/stat.h:23,
                 from /home/build/work/batch/include/linux/stat.h:5,
                 from /home/build/work/batch/include/linux/module.h:10,
                 from /home/build/work/batch/drivers/usb/gadget/function/f_ncm.c:19:
arch/arm64/include/asm/cpufeature.h:27:0: note: this is the location of the previous definition
#define NCAPS 2

So add an ARM64 prefix to avoid such problems.

Reported-by: Olof's autobuilder <build@lixom.net>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
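The shape of the fix, per the commit text: the too-generic macro name gets an architecture prefix so it can no longer collide with driver-local NCAPS definitions.

```c
/* arch/arm64/include/asm/cpufeature.h, sketch of the rename: */
#define ARM64_NCAPS 2	/* was: #define NCAPS 2 */
```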
-
By Zi Shen Lim

The earlier implementation assumed the last instruction is BPF_EXIT. Since this is no longer a restriction in eBPF, we remove this limitation. Per Alexei Starovoitov [1]:

> classic BPF has a restriction that last insn is always BPF_RET.
> eBPF doesn't have BPF_RET instruction and this restriction.
> It has BPF_EXIT insn which can appear anywhere in the program
> one or more times and it doesn't have to be last insn.

[1] https://lkml.org/lkml/2014/11/27/2

Fixes: e54bcde3 ("arm64: eBPF JIT compiler")
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 03 December 2014, 1 commit
-
-
By Jungseok Lee

By putting data which is mostly read together, we can avoid unnecessary cache line bouncing. Other architectures, such as ARM and x86, have adopted the same idea.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
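For illustration, this is the kind of annotation such a section enables; the variable here is hypothetical:

```c
#include <linux/cache.h>

/* Written once at boot, read on every fault: group it with other
 * read-mostly data so writers elsewhere don't bounce its cache line. */
static unsigned long va_bits __read_mostly;
```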
-
- 01 December 2014, 1 commit
-
-
By Vladimir Murzin

Update the handling of the cacheflush syscall with the changes made in its arch/arm counterpart:
- return an error to userspace when the flushing syscall fails
- split user cache-flushing into interruptible chunks (see the sketch below)
- don't bother rounding to the nearest vma

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
[will: changed internal return value from -EINTR to 0 to match arch/arm/]
Signed-off-by: Will Deacon <will.deacon@arm.com>
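A sketch of the chunked, interruptible flush described above; the PAGE_SIZE chunk and helper names are assumptions based on the arm64 code:

```c
static int do_cache_flush_user_range(unsigned long start, unsigned long end)
{
	int ret;

	if (end < start)
		return -EINVAL;

	do {
		unsigned long chunk = min(PAGE_SIZE, end - start);

		/* Interruptible: bail out quietly if the task is dying
		 * (internal return value 0, per Will's note above). */
		if (fatal_signal_pending(current))
			return 0;

		ret = __flush_cache_user_range(start, start + chunk);
		if (ret)
			return ret;	/* propagate the fault to userspace */

		start += chunk;
	} while (start < end);

	return 0;
}
```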
-
- 29 November 2014, 1 commit
-
-
By Liviu Dudau

The Cortex-A5x TRM states in paragraph "9.2 Generic Timer functional description" that the generic timers provide an active-LOW interrupt output. Fix the device trees to correctly describe this. While doing this, update the CPU mask to match the number of described CPUs as well.

Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 28 November 2014, 5 commits
-
-
By Suravee Suthikulpanit

Initial revision of the device tree for the AMD Seattle Development platform.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Thomas Lendacky <Thomas.Lendacky@amd.com>
Signed-off-by: Joel Schopp <Joel.Schopp@amd.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
By AKASHI Takahiro

secure_computing() is called first in syscall_trace_enter() so that a system call will be aborted quickly, without doing the succeeding syscall tracing, if seccomp rules want to deny that system call. On compat tasks, the syscall numbers for system calls allowed in seccomp mode 1 are different from those on native tasks, and so the __NR_seccomp_xxx_32 values need to be redefined.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
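A sketch of the ordering described above; the exact hook signature and return convention are assumptions based on the text:

```c
asmlinkage int syscall_trace_enter(struct pt_regs *regs)
{
	/* Seccomp runs first: a denied syscall is aborted before any
	 * of the (more expensive) tracing below happens. */
	if (secure_computing() == -1)
		return -1;		/* skip the system call */

	/* ... ptrace, audit and ftrace syscall-entry tracing ... */

	return regs->syscallno;
}
```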
-
By AKASHI Takahiro

SIGSYS is primarily used in secure computing to notify a tracer of syscall events. This patch allows a signal handler on a compat task to get correct information, with SA_SIGINFO specified, when this signal is delivered.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By AKASHI Takahiro

This patch allows a compat task to issue the seccomp() system call.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By AKASHI Takahiro

If a tracer modifies a syscall number to -1, the traced system call should be skipped, with a return value specified in x0. This patch implements these semantics.

Please note: syscall entry tracing and syscall exit tracing (ftrace tracepoint and audit) are always executed, if enabled, even when skipping a system call (that is, number -1). In this way, we can avoid a potential bug where audit_syscall_entry() might be called without audit_syscall_exit() for the previous system call, which would cause an oops in audit_syscall_entry().

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
[will: fixed up conflict with blr rework]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-