- 09 Nov 2013, 1 commit

By Stefano Stabellini

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

- 25 Oct 2013, 2 commits

By Stefano Stabellini

This is similar to what is done on x86: biovecs are prevented from merging, otherwise every DMA request would be forced to bounce through the swiotlb buffer.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Changes in v7:
- remove the extra autotranslate check in biomerge.c.

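A minimal sketch of the check described above, assuming the usual swiotlb-xen helper names (pfn_to_mfn, __BIOVEC_PHYS_MERGEABLE); the exact condition in the patch may differ:

```c
#include <linux/types.h>
#include <linux/bio.h>
#include <linux/mm.h>
#include <xen/page.h>

/*
 * Only merge two biovecs when the machine frames behind them are really
 * adjacent.  On an autotranslated guest, addresses that look contiguous in
 * (pseudo-)physical space can be backed by scattered machine frames, and a
 * merged request spanning such a gap would have to bounce through swiotlb.
 */
bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
			       const struct bio_vec *vec2)
{
	unsigned long mfn1 = pfn_to_mfn(page_to_pfn(vec1->bv_page));
	unsigned long mfn2 = pfn_to_mfn(page_to_pfn(vec2->bv_page));

	return __BIOVEC_PHYS_MERGEABLE(vec1, vec2) &&
	       ((mfn1 == mfn2) || ((mfn1 + 1) == mfn2));
}
```
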
By Stefano Stabellini

Introduce xen_dma_map_page, xen_dma_unmap_page, xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device. They have empty implementations on x86 and ia64, but they call the corresponding platform dma_ops function on arm and arm64.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v9:
- xen_dma_map_page return void, avoid page_to_phys.

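A hedged sketch of the arm-side wrappers; the exact prototypes follow the dma_map_ops of that era and are assumptions. On x86 and ia64 the same helpers would simply be empty static inlines:

```c
#include <linux/dma-mapping.h>

static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
				    unsigned long offset, size_t size,
				    enum dma_data_direction dir,
				    struct dma_attrs *attrs)
{
	/* Defer to the native dma_ops so CPU cache maintenance still happens. */
	__generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs);
}

static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
					       dma_addr_t handle, size_t size,
					       enum dma_data_direction dir)
{
	__generic_dma_ops(hwdev)->sync_single_for_cpu(hwdev, handle, size, dir);
}
```
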
- 10 Oct 2013, 1 commit

By Stefano Stabellini

xen_swiotlb_alloc_coherent needs to allocate a coherent buffer for cpu and devices. On native x86 it is sufficient to call __get_free_pages in order to get a coherent buffer, while on ARM (and potentially ARM64) we need to call the native dma_ops->alloc implementation. Introduce xen_alloc_coherent_pages to abstract the arch-specific buffer allocation. Similarly, introduce xen_free_coherent_pages to free a coherent buffer: on x86 it is simply a call to free_pages, while on ARM and ARM64 it is arm_dma_ops.free.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v7:
- rename __get_dma_ops to __generic_dma_ops;
- call __generic_dma_ops(hwdev)->alloc/free on arm64 too.

Changes in v6:
- call __get_dma_ops to get the native dma_ops pointer on arm.

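Roughly what the two flavours look like, as a sketch; the two definitions live in different arch headers and the exact prototypes are assumptions:

```c
/* x86: a plain page allocation is already coherent. */
static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
					     dma_addr_t *dma_handle,
					     gfp_t flags,
					     struct dma_attrs *attrs)
{
	void *vstart = (void *)__get_free_pages(flags, get_order(size));

	*dma_handle = virt_to_phys(vstart);
	return vstart;
}

/* arm/arm64: delegate to the native dma_ops ->alloc hook instead. */
static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
					     dma_addr_t *dma_handle,
					     gfp_t flags,
					     struct dma_attrs *attrs)
{
	return __generic_dma_ops(hwdev)->alloc(hwdev, size, dma_handle,
					       flags, attrs);
}
```
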
- 19 Oct 2013, 1 commit

By Stefano Stabellini

We can't simply override arm_dma_ops with xen_dma_ops because devices are allowed to have their own dma_ops, and those take precedence over arm_dma_ops. When running on Xen as the initial domain, we always want xen_dma_ops to be the one in use. We introduce __generic_dma_ops to allow xen_dma_ops functions to call back into the native implementation.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
CC: will.deacon@arm.com

Changes in v7:
- return xen_dma_ops only if we are the initial domain;
- rename __get_dma_ops to __generic_dma_ops.

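A sketch of the resulting dispatch in the arm dma-mapping header, using the names described in the message; details may differ from the final patch:

```c
static inline struct dma_map_ops *__generic_dma_ops(struct device *dev)
{
	/* Per-device dma_ops still take precedence over arm_dma_ops. */
	if (dev && dev->archdata.dma_ops)
		return dev->archdata.dma_ops;
	return &arm_dma_ops;
}

static inline struct dma_map_ops *get_dma_ops(struct device *dev)
{
	/* In the initial domain every DMA master goes through xen_dma_ops,
	 * which can still reach the native implementation through
	 * __generic_dma_ops() when it needs to. */
	if (xen_initial_domain())
		return xen_dma_ops;
	return __generic_dma_ops(dev);
}
```
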
- 10 Oct 2013, 1 commit

By Stefano Stabellini

Xen on arm and arm64 needs SWIOTLB_XEN: when running on Xen we need to program the hardware with mfns rather than pfns for dma addresses. Remove the SWIOTLB_XEN dependency on X86 and PCI and make XEN select SWIOTLB_XEN on arm and arm64.

At the moment we always rely on swiotlb-xen, but when Xen starts supporting hardware IOMMUs we'll be able to avoid it conditionally on the presence of an IOMMU on the platform.

Implement xen_create_contiguous_region on arm and arm64: for the moment we assume that dom0 has been mapped 1:1 (physical addresses == machine addresses), therefore we don't need to call XENMEM_exchange. Simply return the physical address as the dma address.

Initialize the xen-swiotlb from xen_early_init (before the native dma_ops are initialized) and set xen_dma_ops to &xen_swiotlb_dma_ops.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v8:
- assume dom0 is mapped 1:1, no need to call XENMEM_exchange.

Changes in v7:
- call __set_phys_to_machine_multi from xen_create_contiguous_region and xen_destroy_contiguous_region to update the P2M;
- don't call XENMEM_unpin, it has been removed;
- call XENMEM_exchange instead of XENMEM_exchange_and_pin;
- set nr_exchanged to 0 before calling the hypercall.

Changes in v6:
- introduce and export xen_dma_ops;
- call xen_mm_init as an arch_initcall.

Changes in v4:
- remove redefinition of DMA_ERROR_CODE;
- update the code to use XENMEM_exchange_and_pin and XENMEM_unpin;
- add a note about hardware IOMMU in the commit message.

Changes in v3:
- code style changes;
- warn on XENMEM_put_dma_buf failures.

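Under the 1:1 assumption the contiguous-region hooks collapse to almost nothing. A sketch, with prototypes assumed from the description above:

```c
int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
				 unsigned int address_bits,
				 dma_addr_t *dma_handle)
{
	/*
	 * dom0 is mapped 1:1, so the buffer is already machine-contiguous;
	 * no XENMEM_exchange is required and the physical address can be
	 * handed back as the dma address directly.
	 */
	*dma_handle = pstart;
	return 0;
}

void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
{
	/* Nothing to undo while the 1:1 mapping assumption holds. */
}
```
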
- 18 Oct 2013, 1 commit

By Stefano Stabellini

Introduce physical-to-machine and machine-to-physical tracking mechanisms based on rbtrees for arm/xen and arm64/xen.

We need them because all guests on ARM are autotranslated guests, therefore a physical address is potentially different from a machine address. When programming a device to do DMA, we need to be extra careful to use machine addresses rather than physical addresses to program the device. Therefore we need to know the physical-to-machine mappings.

For the moment we assume that dom0 starts with a 1:1 physical-to-machine mapping, in other words physical addresses correspond to machine addresses. However, when mapping a foreign grant reference, the 1:1 model obviously doesn't work anymore, so at the very least we need to be able to track grant mappings. We need locking to protect accesses to the two trees.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v8:
- move pfn_to_mfn and mfn_to_pfn to page.h as static inline functions;
- no need to walk the tree if phys_to_mach.rb_node is NULL;
- correctly handle multipage p2m entries;
- substitute the spin_lock with a rwlock.

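An illustrative sketch of the physical-to-machine lookup. The entry layout and helper name are assumptions drawn from the description; the real code also maintains a mirror machine-to-physical tree:

```c
#include <linux/rbtree.h>
#include <linux/spinlock.h>

struct xen_p2m_entry {
	unsigned long pfn;		/* start of the (pseudo-)physical range */
	unsigned long mfn;		/* corresponding machine frame */
	unsigned long nr_pages;		/* multipage entries are supported */
	struct rb_node rbnode_phys;
};

static DEFINE_RWLOCK(p2m_lock);
static struct rb_root phys_to_mach = RB_ROOT;

unsigned long __pfn_to_mfn(unsigned long pfn)
{
	struct rb_node *n;
	unsigned long mfn = pfn;	/* assume 1:1 unless a mapping is recorded */
	unsigned long irqflags;

	read_lock_irqsave(&p2m_lock, irqflags);
	n = phys_to_mach.rb_node;	/* empty tree: nothing to walk */
	while (n) {
		struct xen_p2m_entry *entry =
			rb_entry(n, struct xen_p2m_entry, rbnode_phys);

		if (pfn >= entry->pfn && pfn < entry->pfn + entry->nr_pages) {
			mfn = entry->mfn + (pfn - entry->pfn);
			break;
		}
		n = pfn < entry->pfn ? n->rb_left : n->rb_right;
	}
	read_unlock_irqrestore(&p2m_lock, irqflags);

	return mfn;
}
```
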
- 15 Oct 2013, 1 commit

By Stefano Stabellini

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
CC: will.deacon@arm.com

Changes in v8:
- cast to dma_addr_t before returning.

- 03 Oct 2013, 1 commit

By Stephen Boyd

This config item already exists generically in lib/Kconfig.debug. Remove the duplicate config in arm64.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 30 Sep 2013, 2 commits

By Ramkumar Ramachandra

Currently, development on arm64 is aided by a Foundation_v8 emulator distributed by ARM [1]. To run their kernels, users will execute:

    $ ./Foundation_v8 --image linux-system.axf --block-device raring-rootfs

To mount the raring-rootfs filesystem, the kernel parameters should typically include:

    root=/dev/vda

For this device to be present, the kernel must be compiled with VIRTIO_{MMIO,BLK}. To make this work out of the box, make it part of the default configuration.

[1]: https://silver.arm.com/browse/FM00A

Cc: Will Deacon <will.deacon@arm.com>
Cc: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Ramkumar Ramachandra

Most readily available root filesystems are formatted as EXT4 these days. For example, see the raring rootfs that the Debian folks are preparing [1].

[1]: http://people.debian.org/~wookey/bootstrap/rootfs/

Cc: Will Deacon <will.deacon@arm.com>
Cc: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 28 Sep 2013, 1 commit

By Jiang Liu

If a context switch happens while executing fpsimd_flush_thread(), stale values in the FPSIMD registers will be saved into the current thread's fpsimd_state by fpsimd_thread_switch(). That may cause an invalid initialization state for the new process, so disable preemption while executing fpsimd_flush_thread().

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

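The shape of the fix, as a sketch; the reset/reload body is condensed, the point is the preemption bracket around it:

```c
void fpsimd_flush_thread(void)
{
	preempt_disable();
	/* Reset this thread's saved FPSIMD state ... */
	memset(&current->thread.fpsimd_state, 0, sizeof(struct fpsimd_state));
	/* ... and load it into the registers before a context switch can
	 * save stale register contents on top of it. */
	fpsimd_load_state(&current->thread.fpsimd_state);
	preempt_enable();
}
```
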
- 25 Sep 2013, 2 commits

By Matthew Leach

The ASID is represented as an unsigned int in mm_context_t, and we currently use the mmid assembler macro to access this element of the struct. It should be accessed with a register of 32-bit width. If the incorrect register width is used, the ASID will be returned in bits [32:63] of the register when running under big-endian.

Fix a use of the mmid macro in tlb.S to use a 32-bit access.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By AKASHI Takahiro

get_user() is defined as a function-like macro on arm64, and trace_get_user() calls it as follows:

    get_user(ch, ptr++);

Since the second parameter occurs twice in the definition, 'ptr++' is unexpectedly evaluated twice and trace_get_user() will generate a bogus string from the user-provided one. As a result, some ftrace sysfs operations, like "echo FUNCNAME > set_ftrace_filter", hit this case and eventually fail. This patch fixes the issue in both get_user() and put_user().

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
[catalin.marinas@arm.com: added __user type annotation and s/optr/__p/]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

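The standard cure for this class of macro bug is to evaluate the pointer argument exactly once into a local. A simplified sketch, not the literal arm64 definition:

```c
/* Buggy shape: 'ptr' expands twice, so get_user(ch, ptr++) advances the
 * pointer by two per call. */
#define get_user_buggy(x, ptr)						\
	(access_ok(VERIFY_READ, (ptr), sizeof(*(ptr))) ?		\
		__get_user((x), (ptr)) :				\
		((x) = 0, -EFAULT))

/* Fixed shape: the argument is captured once in __p, and only __p is used
 * from then on, so side effects in the argument happen exactly once. */
#define get_user_fixed(x, ptr)						\
({									\
	__typeof__(*(ptr)) __user *__p = (ptr);				\
	access_ok(VERIFY_READ, __p, sizeof(*__p)) ?			\
		__get_user((x), __p) :					\
		((x) = 0, -EFAULT);					\
})
```
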
- 20 Sep 2013, 3 commits

By Steve Capper

Under arm64, elf_hwcap is a 32-bit quantity, but it is stored in a 64-bit auxiliary ELF field and glibc reads hwcap as 64 bit. This patch widens elf_hwcap to be 64 bit.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Catalin Marinas

When a task crashes and we print debugging information, ensure that compat tasks show the actual AArch32 LR and SP registers rather than the AArch64 ones.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Catalin Marinas

This function is only called from arch/arm64/mm/fault.c.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 13 Sep 2013, 3 commits

By Martin Schwidefsky

After the last architecture switched to generic hard irqs, the config options HAVE_GENERIC_HARDIRQS & GENERIC_HARDIRQS and the related code for !CONFIG_GENERIC_HARDIRQS can be removed.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

By Johannes Weiner

Unlike global OOM handling, memory cgroup code will invoke the OOM killer in any OOM situation because it has no way of telling faults occurring in kernel context - which could be handled more gracefully - from user-triggered faults. Pass a flag that identifies faults originating in user space from the architecture-specific fault handlers to generic code so that memcg OOM handling can be improved.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

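A condensed sketch of the per-architecture side of the change; the helper name fault_flags() is hypothetical, real fault handlers set the flag inline:

```c
static unsigned int fault_flags(struct pt_regs *regs)
{
	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

	/* Tell generic mm code that this fault came from user space, so
	 * memcg OOM handling can treat it differently from kernel faults. */
	if (user_mode(regs))
		flags |= FAULT_FLAG_USER;

	return flags;
}
```
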
By Johannes Weiner

Kernel faults are expected to handle OOM conditions gracefully (gup, uaccess etc.), so they should never invoke the OOM killer. Reserve this for faults triggered in user context when it is the only option. Most architectures already do this; fix up the remaining few.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 12 Sep 2013, 1 commit

By Naoya Horiguchi

Currently hugepage migration works well only for pmd-based hugepages (mainly due to lack of testing), so we had better not enable migration of other levels of hugepages until we are ready for it.

Some users of hugepage migration (mbind, move_pages, and migrate_pages) do a page table walk and check pud/pmd_huge() there, so they are safe. But the other users (soft offline and memory hot-remove) don't do this, so without this patch they can try to migrate unexpected types of hugepages.

To prevent this, we introduce hugepage_migration_support() as an architecture-dependent check of whether hugepages are implemented on a pmd basis or not. On some architectures multiple sizes of hugepages are available, so hugepage_migration_support() also checks the hugepage size.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Hillf Danton <dhillf@gmail.com>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Rik van Riel <riel@redhat.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

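A rough sketch of the generic helper; the arch predicate name is invented here purely for illustration, and the real hook and its spelling may differ:

```c
static inline int hugepage_migration_support(struct hstate *h)
{
	/* Migratable only if this architecture implements hugepages at the
	 * pmd level and this particular hstate is the pmd-sized one. */
	return arch_hugepages_are_pmd_based() &&
	       huge_page_shift(h) == PMD_SHIFT;
}
```
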
- 03 Sep 2013, 1 commit

By Will Deacon

TCR.TBI0 can be used to cause hardware address translation to ignore the top byte of userspace virtual addresses. Whilst not especially useful in standard C programs, this can be used by JITs to "tag" pointers with various pieces of metadata.

This patch enables this bit for AArch64 Linux and adds a new file to Documentation/arm64/ which describes some potential caveats when using tagged virtual addresses.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

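A small userspace illustration of what top-byte-ignore permits (AArch64 Linux with this patch; pointers passed back to the kernel should remain untagged):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	int *p = malloc(sizeof(*p));

	if (!p)
		return 1;
	*p = 42;

	/* Stash an 8-bit tag in bits 63:56; hardware ignores it on access. */
	uintptr_t tagged = (uintptr_t)p | ((uintptr_t)0xAB << 56);
	int *q = (int *)tagged;

	printf("via tagged pointer: %d\n", *q);	/* prints 42 */

	free(p);	/* use the untagged pointer for the allocator/kernel */
	return 0;
}
```
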
- 02 Sep 2013, 2 commits

By Dan Aloni

Signed-off-by: Dan Aloni <alonid@stratoscale.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Catalin Marinas

This string has been moved to arch/arm64/kernel/cputable.c.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 31 Aug 2013, 1 commit

By Will Deacon

We always use a timer-backed delay loop for arm64, so don't bother reporting a bogomips value, which appears to confuse some people.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 29 Aug 2013, 1 commit

By Grant Likely

Most architectures use the same implementation. Collapse the common ones into a single weak function that can be overridden.

Signed-off-by: Grant Likely <grant.likely@linaro.org>

- 28 Aug 2013, 2 commits

By Catalin Marinas

The map_mem() function limits the current memblock limit to PGDIR_SIZE (the initial swapper_pg_dir mapping) to avoid create_mapping() allocating memory from unmapped areas. However, if the first block is within PGDIR_SIZE and does not end on a PMD_SIZE boundary, when the 4K page configuration is enabled, create_mapping() will try to allocate a pte page. Such a page may be returned by memblock_alloc() from the end of that bank (or any subsequent bank within PGDIR_SIZE) which is not mapped yet.

The patch limits the current memblock limit to the aligned end of the first bank and gradually increases it as more memory is mapped. It also ensures that the start of the first bank is aligned to PMD_SIZE to avoid pte page allocation for this mapping.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com>
Tested-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com>

By Mark Salter

The current vmlinux.lds.S places the notes sections between the end of rw data and the start of bss. This means that _edata doesn't really point to the end of data. Since notes are read-only, this patch moves them to the read-only segment so that _edata does point to the end of initialized rw data.

Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 22 Aug 2013, 3 commits

By Catalin Marinas

do_undefinstr() has to be called with interrupts disabled since it may read the instruction from the user address space, which could lead to a data abort and a subsequent might_sleep() warning in do_page_fault().

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Roy Franz

Expand the arm64 image header to allow for coexistence with the PE/COFF header required by the EFI stub. The PE/COFF format requires the "MZ" header to be at offset 0, and the offset to the PE/COFF header to be at offset 0x3c. The image header is expanded to allow two instructions at the beginning, to accommodate a benign instruction at offset 0 that includes the "MZ" header, a magic number, and the offset to the PE/COFF header.

Signed-off-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Chen Gang

We need to include "asm/types.h", just like arm does, otherwise compilation fails with the following error:

    In file included from arch/arm64/include/asm/page.h:37:0,
                     from drivers/staging/lustre/include/linux/lnet/linux/lib-lnet.h:42,
                     from drivers/staging/lustre/include/linux/lnet/lib-lnet.h:44,
                     from drivers/staging/lustre/lnet/lnet/api-ni.c:38:
    arch/arm64/include/asm/pgtable-2level-types.h:19:1: error: unknown type name ‘u64’
    arch/arm64/include/asm/pgtable-2level-types.h:20:1: error: unknown type name ‘u64’

Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 20 Aug 2013, 5 commits

By Ard Biesheuvel

Add <asm/neon.h> containing the kernel_neon_begin/kernel_neon_end function declarations, and corresponding definitions in fpsimd.c. These are needed to wrap uses of NEON in kernel mode. The names are identical to the ones used in arm/, so code using intrinsics or vectorized by GCC can be shared between arm and arm64.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

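Typical usage, sketched; do_neon_memcpy() is a hypothetical stand-in for any NEON-intrinsic or GCC-vectorized routine:

```c
#include <asm/neon.h>
#include <linux/types.h>

static void crunch_buffer(void *dst, const void *src, size_t len)
{
	kernel_neon_begin();		/* enter a kernel-mode NEON region */
	do_neon_memcpy(dst, src, len);	/* NEON registers may be used here */
	kernel_neon_end();		/* leave the region */
}
```
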
By Will Deacon

This is a port of f2fe09b0 ("ARM: 7663/1: perf: fix ARMv7 EVTYPE_MASK to include NSH bit") to arm64, which fixes the broken evtype mask to include the NSH bit, allowing profiling at EL2.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Will Deacon

This is a port of cb2d8b34 ("ARM: 7698/1: perf: fix group validation when using enable_on_exec") to arm64, which fixes the event validation checking so that events in the OFF state are still considered when enable_on_exec is true.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Will Deacon

This is a port of c95eb318 ("ARM: 7809/1: perf: fix event validation for software group leaders") to arm64, which fixes a panic in the arm64 perf backend found as a result of Vince's fuzzing tool.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Will Deacon

This is a port of d9f96635 ("ARM: 7810/1: perf: Fix array out of bounds access in armpmu_map_hw_event()") to arm64, which fixes an oops in the arm64 perf backend found as a result of Vince's fuzzing tool.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 16 Aug 2013, 1 commit

By Linus Torvalds

Ben Tebulin reported:

    "Since v3.7.2 on two independent machines a very specific Git repository fails in 9/10 cases on git-fsck due to an SHA1/memory failures. This only occurs on a very specific repository and can be reproduced stably on two independent laptops. Git mailing list ran out of ideas and for me this looks like some very exotic kernel issue"

and bisected the failure to the backport of commit 53a59fc6 ("mm: limit mmu_gather batching to fix soft lockups on !CONFIG_PREEMPT").

That commit itself is not actually buggy, but what it does is to make it much more likely to hit the partial TLB invalidation case, since it introduces a new case in tlb_next_batch() that previously only ever happened when running out of memory.

The real bug is that the TLB gather virtual memory range setup is subtly buggered. It was introduced in commit 597e1c35 ("mm/mmu_gather: enable tlb flush range in generic mmu_gather"), and the range handling was already fixed at least once in commit e6c495a9 ("mm: fix the TLB range flushed when __tlb_remove_page() runs out of slots"), but that fix was not complete.

The problem with the TLB gather virtual address range is that it isn't set up by the initial tlb_gather_mmu() initialization (which didn't get the TLB range information), but it is set up ad-hoc later by the functions that actually flush the TLB. And so any such case that forgot to update the TLB range entries would potentially miss TLB invalidates.

Rather than try to figure out exactly which particular ad-hoc range setup was missing (I personally suspect it's the hugetlb case in zap_huge_pmd(), which didn't have the same logic as zap_pte_range() did), this patch just gets rid of the problem at the source: make the TLB range information available to tlb_gather_mmu(), and initialize it when initializing all the other tlb gather fields.

This makes the patch larger, but conceptually much simpler. And the end result is much more understandable; even if you want to play games with partial ranges when invalidating the TLB contents in chunks, now the range information is always there, and anybody who doesn't want to bother with it won't introduce subtle bugs.

Ben verified that this fixes his problem.

Reported-bisected-and-tested-by: Ben Tebulin <tebulin@googlemail.com>
Build-testing-by: Stephen Rothwell <sfr@canb.auug.org.au>
Build-testing-by: Richard Weinberger <richard.weinberger@gmail.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

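The interface change in prototype form, as a condensed sketch; the exact old signature is approximated:

```c
/* Before: the gather learns nothing about the range at setup time, so the
 * range fields had to be filled in ad hoc by whoever flushed. */
void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
		    bool fullmm);

/* After: callers pass the virtual address range being torn down, and the
 * range is initialized together with the other mmu_gather fields. */
void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
		    unsigned long start, unsigned long end);
```
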
- 09 Aug 2013, 3 commits

By Chen Gang

'target' will be set to '-1' in kvm_arch_vcpu_init(), and kvm_vcpu_initialized() needs to check whether 'target' is less than zero. So 'target' needs to be defined as 'int' instead of 'u32', just like ARM has done. The related warning:

    arch/arm64/kvm/../../../arch/arm/kvm/arm.c:497:2: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]

Signed-off-by: Chen Gang <gang.chen@asianux.com>
[Marc: reformatted the Subject line to fit the series]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

By Marc Zyngier

When performing a Stage-2 TLB invalidation, it is necessary to make sure the write to the page tables is observable by all CPUs. For this purpose, add dsb instructions to __kvm_tlb_flush_vmid_ipa and __kvm_flush_vm_context before doing the TLB invalidation itself.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

By Marc Zyngier

Not saving PAR_EL1 is an unfortunate oversight. If the guest performs an AT* operation and gets scheduled out before reading the result of the translation from PAR_EL1, it could become corrupted by another guest or the host.

Saving this register is made slightly more complicated as KVM also uses it on the permission fault handling path, leading to an ugly "stash and restore" sequence. Fortunately, this is already a slow path so we don't really care. Also, Linux doesn't do any AT* operation, so Linux guests are not impacted by this bug.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>