- 23 Oct 2015, 5 commits
-
-
Committed by Stefano Stabellini
Build cpu_hotplug for ARM and ARM64 guests. Rename arch_(un)register_cpu to xen_(un)register_cpu and provide an empty implementation on ARM and ARM64. On x86 just call arch_(un)register_cpu as we are already doing. Initialize cpu_hotplug on ARM.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
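A minimal sketch of the renamed hooks, assuming the names given in the message (the real bodies may differ; ARM/ARM64 get empty implementations, x86 forwards to the existing arch helpers):

    /* Sketch only: names follow the commit message, bodies are assumed. */
    #ifdef CONFIG_X86
    static inline int xen_register_cpu(int cpu)
    {
            return arch_register_cpu(cpu);  /* existing x86 behaviour */
    }

    static inline void xen_unregister_cpu(int cpu)
    {
            arch_unregister_cpu(cpu);
    }
    #else   /* ARM and ARM64: no ACPI-style CPU devices to register */
    static inline int xen_register_cpu(int cpu)
    {
            return 0;
    }

    static inline void xen_unregister_cpu(int cpu)
    {
    }
    #endif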
-
Committed by Julien Grall

Swiotlb is used on ARM64 to support DMA on platforms where devices are not protected by an SMMU. Furthermore, it is only enabled for DOM0.

While Xen always uses 4KB page granularity in the stage-2 page table, Linux ARM64 may use either 4KB or 64KB. This means that a Linux page can span multiple Xen pages. The swiotlb code has to validate that the buffer used for DMA is physically contiguous in memory. As a Linux page can't be shared between local memory and a foreign page by design (the balloon code always removes a Linux page in its entirety), the changes in the code are very minimal, because we only need to check the first Xen PFN.

Note that it may be possible to optimize the function check_page_physically_contiguous to avoid looping over every Xen PFN for local memory, although I will leave this optimization for a follow-up.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
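A hedged sketch of the contiguity check described above (helper naming taken from the message; pfn_to_bfn and the XEN_PAGE_* macros are the 4KB Xen-frame helpers from the same series):

    /* Verify that every 4KB Xen frame backing the buffer directly
     * follows the previous one in machine memory. */
    static bool check_page_physically_contiguous(unsigned long xen_pfn,
                                                 unsigned int offset,
                                                 size_t length)
    {
            unsigned long next_bfn = pfn_to_bfn(xen_pfn) + 1;
            unsigned int i;
            unsigned int nr = (offset + length + XEN_PAGE_SIZE - 1)
                              >> XEN_PAGE_SHIFT;

            for (i = 1; i < nr; i++)
                    if (pfn_to_bfn(++xen_pfn) != next_bfn++)
                            return false;
            return true;
    }

Because a Linux page is either entirely local or entirely foreign, checking only the first Xen PFN of each Linux page would suffice for local memory, which is the follow-up optimization the message mentions.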
-
Committed by Julien Grall
With 64KB page granularity support, the frame number will be different. It will be easier to modify the behavior in a single place rather than in each caller.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Committed by Julien Grall

The hypercall interface always uses 4KB page granularity. This requires using the Xen page definition macros whenever we deal with hypercalls.

Note that pfn_to_gfn works with a Xen pfn (i.e. 4KB). We may want to rename pfn_to_gfn to make this explicit.

We also allocate a 64KB page for the shared page even though only the first 4KB is used. I don't think this is really important for now, as it helps to have the pointer 4KB aligned (XENMEM_add_to_physmap takes a Xen PFN).

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
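The granularity mismatch is easy to see with concrete numbers; here is a small, runnable illustration (constants chosen for the 64KB-page case, not taken from the kernel):

    #include <stdio.h>

    #define XEN_PAGE_SHIFT   12     /* hypercall ABI is always 4KB */
    #define LINUX_PAGE_SHIFT 16     /* example: 64KB kernel pages  */

    int main(void)
    {
            unsigned long pfn = 42;  /* an arbitrary Linux frame number */
            unsigned long first = pfn << (LINUX_PAGE_SHIFT - XEN_PAGE_SHIFT);
            unsigned long per_page = 1UL << (LINUX_PAGE_SHIFT - XEN_PAGE_SHIFT);

            printf("Linux pfn %lu covers Xen pfns %lu..%lu (%lu frames)\n",
                   pfn, first, first + per_page - 1, per_page);
            return 0;
    }

With these constants, one Linux page maps to 16 consecutive 4KB Xen frames, which is why hypercalls must use the Xen page macros rather than the Linux ones.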
-
Committed by Julien Grall

They are not used in common code except in one place in balloon.c, which is only compiled when Linux is using the PV MMU. That is not the case on ARM. Rather than worrying about how to handle the 64KB case, drop them.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
- 09 Sep 2015, 3 commits
-
-
Committed by Julien Grall

Based on include/xen/mm.h [1], Linux is mistakenly using MFN when GFN is meant; I suspect this is because the first Xen support was for PV. This resulted in some incorrect implementations of helpers on ARM and confused developers about the expected behavior.

For instance, with pfn_to_mfn, we expect to get an MFN based on the name. However, if we look at the implementation on x86, it returns a GFN.

For clarity and to avoid new confusion, replace any reference to mfn with gfn in any helpers used by PV drivers. The x86 code will still keep some references to pfn_to_mfn, which may be used by all kinds of guests.

No changes have been made to the hypercall fields, even though they may be invalid, in order to stay in line with the definition in the Xen repository.

Note that page_to_mfn has been renamed to xen_page_to_gfn to avoid a name too close to the KVM function gfn_to_page.

Also take the opportunity to simplify simple constructions such as pfn_to_mfn(page_to_pfn(page)) into xen_page_to_gfn. More complex clean-ups will come in follow-up patches.

[1] http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=e758ed14f390342513405dd766e874934573e6cb

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
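A sketch of the renamed helper, assuming it stays a thin wrapper as the simplification implies:

    /* Collapses the common pfn_to_mfn(page_to_pfn(page)) pattern; the
     * xen_ prefix avoids a clash with KVM's gfn_to_page. */
    static inline unsigned long xen_page_to_gfn(struct page *page)
    {
            return pfn_to_gfn(page_to_pfn(page));
    }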
-
Committed by Julien Grall

After the commit introducing conversion between DMA and guest addresses, all the callers of pfn_to_mfn expect to get a GFN (Guest Frame Number). On ARM, all guests are auto-translated, so the GFN is equal to the Linux PFN (Pseudo-physical Frame Number).

The current implementation may return an MFN if the caller passes a PFN associated with a mapped foreign grant. In practice, I haven't seen the problem on a running guest, but we should fix it for the sake of correctness.

Correct the implementation by always returning the pfn passed as parameter. A follow-up patch will take care of renaming pfn_to_mfn to a suitable name.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
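The corrected ARM helper reduces to an identity function; a sketch:

    /* ARM guests are auto-translated, so the GFN callers expect is
     * simply the Linux pfn, never a foreign MFN. */
    static inline unsigned long pfn_to_mfn(unsigned long pfn)
    {
            return pfn;
    }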
-
Committed by Julien Grall

The swiotlb is required when programming a DMA address on ARM when a device is not protected by an IOMMU. In this case, the DMA address should always be equal to the machine address. For DOM0 memory, Xen ensures this by having an identity mapping between the guest address and the host address. However, when mapping a foreign grant reference, the 1:1 model doesn't work.

For ARM guests, most callers of pfn_to_mfn expect to get a GFN (Guest Frame Number), i.e. a PFN (Page Frame Number) from the Linux point of view, given that all ARM guests are auto-translated. Even though the name pfn_to_mfn is misleading, we need to ensure that those callers get a GFN and not, by mistake, an MFN. In practice, I haven't seen errors related to this, but we should fix it for the sake of correctness.

In order to fix the implementation of pfn_to_mfn on ARM in a follow-up patch, we have to introduce new helpers to return the DMA address from a PFN, and the inverse. On x86, the new helpers will be aliases of pfn_to_mfn and mfn_to_pfn.

The helpers will be used in swiotlb and xen_biovec_phys_mergeable. This is necessary in the latter because we have to ensure that the biovec code will not try to merge a biovec using a foreign page with another using Linux memory.

Lastly, the helper mfn_to_local_pfn has been renamed to bfn_to_local_pfn, given that its only usage was in swiotlb.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
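On x86 the new helpers are plain aliases, as the message states; a sketch ("bfn" standing for the bus/DMA frame number):

    #define pfn_to_bfn(pfn)         pfn_to_mfn(pfn)
    #define bfn_to_pfn(bfn)         mfn_to_pfn(bfn)
    #define bfn_to_local_pfn(bfn)   mfn_to_local_pfn(bfn)

On ARM the bfn helpers can then keep returning the real machine frame needed for DMA, while pfn_to_mfn is fixed in the follow-up to return the pfn.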
-
- 20 Aug 2015, 2 commits
-
-
Committed by Julien Grall

ARM guests are always HVM. The current implementation assumes a 1:1 mapping, which is only true for DOM0 and may not hold at all in the future.

Furthermore, all the helpers but arbitrary_virt_to_machine are used in x86-specific code (or are only compiled for it). The helper arbitrary_virt_to_machine is only used in PV-specific code, therefore we should never call the function. Add a BUG() in this helper and drop all the others.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
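A sketch of the resulting ARM stub:

    /* Only reachable from PV-specific code, which never runs on ARM,
     * so getting here is a bug. */
    static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
    {
            BUG();
    }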
-
Committed by Julien Grall

Currently, the event channel rebind code is gated on the presence of the vector callback.

The virtual interrupt controller on ARM has the concept of a per-CPU interrupt (PPI), which allows us to support per-VCPU event channels. Therefore there is no need for a vector callback on ARM. Xen is already using a free PPI to notify the guest VCPU of an event. Furthermore, the Xen initialization code in Linux (see arch/arm/xen/enlighten.c) correctly requests a per-CPU IRQ.

Introduce a new helper, xen_support_evtchn_rebind, to let the architecture decide whether rebinding an event is supported or not. It will always return true on ARM and keep the same behavior on x86. This also allows us to drop the usage of xen_have_vector_callback entirely in the ARM code.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
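A sketch of the helper on each side, following the split described in the message (the x86 condition mirrors the vector-callback gating it replaces):

    /* ARM / ARM64: per-CPU PPIs make rebinding always possible. */
    static inline bool xen_support_evtchn_rebind(void)
    {
            return true;
    }

    /* x86: keep the existing behaviour. */
    static inline bool xen_support_evtchn_rebind(void)
    {
            return !xen_hvm_domain() || xen_have_vector_callback;
    }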
-
- 17 Jun 2015, 1 commit
-
-
Committed by Julien Grall
Signed-off-by: Julien Grall <julien.grall@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
- 28 May 2015, 1 commit
-
-
Committed by Stefano Stabellini

Currently, Xen is initialized/discovered in an initcall. This doesn't allow us to support earlyprintk or choosing the preferred console when running on Xen.

The current function xen_guest_init is now split into two parts:
- xen_early_init: check if there is a Xen node in the device tree and set up the domain type
- xen_guest_init: retrieve the information from the device node and initialize Xen (grant table, shared page...)

The former is called in setup_arch, while the latter is an initcall.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Will Deacon <will.deacon@arm.com>
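A heavily abridged sketch of the two-stage initialization ("xen,xen" is the device-tree compatible string Xen advertises; the bodies here are assumptions):

    void __init xen_early_init(void)
    {
            /* Stage 1, called from setup_arch(): discovery only. */
            if (!of_find_compatible_node(NULL, NULL, "xen,xen"))
                    return;
            xen_domain_type = XEN_HVM_DOMAIN;
            /* earlyprintk and console selection can now key off this */
    }

    static int __init xen_guest_init(void)
    {
            /* Stage 2, an initcall: grant table, shared page, events. */
            if (!xen_domain())
                    return 0;
            /* ... read DT properties and map the shared page ... */
            return 0;
    }
    core_initcall(xen_guest_init);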
-
- 06 May 2015, 1 commit
-
-
Committed by Stefano Stabellini
Make sure that xen_swiotlb_init allocates buffers that are DMA-capable when at least one memblock is available below 4G. Otherwise we assume that all devices on the SoC can cope with >4G addresses. We do this on ARM and ARM64, where dom0 is mapped 1:1, so pfn == mfn in this case.

No functional changes on x86.

From: Chen Baozi <baozich@gmail.com>
Signed-off-by: Chen Baozi <baozich@gmail.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-by: Chen Baozi <baozich@gmail.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
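A hedged sketch of the allocation policy (the helper name and memblock walk are my reading of the description):

    /* Request DMA-capable (<4GB) pages only when some RAM actually
     * lives below 4GB; otherwise any address will do. */
    unsigned long xen_get_swiotlb_free_pages(unsigned int order)
    {
            struct memblock_region *reg;
            gfp_t flags = __GFP_NOWARN;

            for_each_memblock(memory, reg) {
                    if (reg->base < (phys_addr_t)0xffffffff) {
                            flags |= __GFP_DMA;
                            break;
                    }
            }
            return __get_free_pages(flags, order);
    }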
-
- 28 Jan 2015, 1 commit
-
-
Committed by David Vrabel
When unmapping grants, instead of converting the kernel map ops to unmap ops on the fly, pre-populate the set of unmap ops. This allows the grant unmap for the kernel mappings to be trivially batched in the future.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
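An illustrative fragment of the idea, reusing the existing gnttab_set_unmap_op helper (the surrounding loop and variable names are assumptions, not the patch itself):

    /* At map time, fill in the matching unmap op for every successful
     * map, so the later unmap is a pure batched hypercall. */
    for (i = 0; i < count; i++) {
            if (map_ops[i].status != GNTST_okay)
                    continue;
            gnttab_set_unmap_op(&kunmap_ops[i], map_ops[i].host_addr,
                                GNTMAP_host_map, map_ops[i].handle);
    }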
-
- 04 Dec 2014, 4 commits
-
-
Committed by Stefano Stabellini
Introduce an arch-specific function to find out whether a particular dma mapping operation needs to bounce on the swiotlb buffer.

On ARM and ARM64, if the page involved is a foreign page and the device is not coherent, we need to bounce, because at unmap time we cannot execute any required cache maintenance operations (we don't know how to find the pfn from the mfn).

No change of behaviour for x86.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
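A sketch of the ARM-side decision, restating the rule above in code (signature assumed):

    /* Bounce only when the page is foreign (pfn != mfn is possible
     * because dom0 is otherwise mapped 1:1) and the device is not
     * cache-coherent. */
    bool xen_arch_need_swiotlb(struct device *dev,
                               unsigned long pfn,
                               unsigned long mfn)
    {
            return (pfn != mfn) && !is_device_dma_coherent(dev);
    }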
-
Committed by Stefano Stabellini

In xen_dma_map_page, if the page is a local page, call the native map_page dma_ops. If the page is foreign, call __xen_dma_map_page, which issues any required cache maintenance operations via hypercall.

The reason for doing this is that the native dma_ops map_page could allocate buffers that need to be freed. If the page is foreign we don't call the native unmap_page dma_ops function, resulting in a memory leak.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
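A condensed sketch of the dispatch (the dom0 1:1 mapping makes pfn == mfn the test for a local page; __generic_dma_ops is the native dma_ops accessor named elsewhere in this series):

    static inline void xen_dma_map_page(struct device *hwdev,
                    struct page *page, dma_addr_t dev_addr,
                    unsigned long offset, size_t size,
                    enum dma_data_direction dir, struct dma_attrs *attrs)
    {
            if (PFN_DOWN(dev_addr) == page_to_pfn(page))
                    /* local: the native op may allocate state we can free */
                    __generic_dma_ops(hwdev)->map_page(hwdev, page, offset,
                                                       size, dir, attrs);
            else
                    /* foreign: cache maintenance via hypercall only */
                    __xen_dma_map_page(hwdev, page, dev_addr, offset,
                                       size, dir, attrs);
    }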
-
Committed by Stefano Stabellini
dev_addr is the machine address of the page. The new parameter can be used by the ARM and ARM64 implementations of xen_dma_map_page to find out if the page is a local page (pfn == mfn) or a foreign page (pfn != mfn).

dev_addr could be retrieved again from the physical address, using pfn_to_mfn, but that requires accessing an rbtree. Since we already have the dev_addr in our hands at the call site, there is no need to get the mfn twice.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Committed by Stefano Stabellini
Remove code duplication in mm32.c by calling the native dma_ops if the page is a local page (not a foreign page). Use a simple pfn_valid(pfn) check to figure out if the page is local, exploiting the fact that dom0 is mapped 1:1; therefore pfn_valid always returns false when called on a foreign mfn.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 12 Sep 2014, 2 commits
-
-
Committed by Stefano Stabellini
Remove the rbtree used to keep track of machine-to-physical mappings: the frontend can grant the same page multiple times, leading to errors when inserting or removing entries from the mach_to_phys tree.

Linux only needed to know the physical address corresponding to a given machine address in swiotlb-xen. Now that swiotlb-xen can call the xen_dma_* functions passing the machine address directly, we can remove it.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-by: Denis Schneider <v1ne2go@gmail.com>
-
Committed by Stefano Stabellini

xen_dma_unmap_page, xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device are currently implemented by calling into the corresponding generic ARM implementations of these functions. In order to do this, the dma_addr_t handle, which on Xen is a machine address, first needs to be translated into a physical address. The operation is expensive and inaccurate, given that a single machine address can correspond to multiple physical addresses in one domain, because the same page can be granted multiple times by the frontend.

To avoid this problem, we introduce a Xen-specific implementation of xen_dma_unmap_page, xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device that can operate on machine addresses directly.

The new implementation relies on the fact that the hypervisor creates a second p2m mapping of any grant pages at physical address == machine address of the page for dom0. Therefore we can access memory at physical address == dma_addr_t handle and perform the cache flushing there. Some cache maintenance operations require a virtual address. Instead of using ioremap_cache, which is not safe in interrupt context, we allocate a per-cpu PAGE_KERNEL scratch page and we manually update the pte for it.

arm64 doesn't need cache maintenance operations on unmap for now.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-by: Denis Schneider <v1ne2go@gmail.com>
-
- 13 May 2014, 1 commit
-
-
Committed by Stefano Stabellini
Introduce HYPERVISOR_suspend() and a few additional empty stubs for Xen arch-specific functions called by drivers/xen/manage.c.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
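A sketch of the suspend stub via the standard SCHEDOP_shutdown route:

    static inline int HYPERVISOR_suspend(unsigned long start_info_mfn)
    {
            struct sched_shutdown r = { .reason = SHUTDOWN_suspend };

            /* start_info_mfn is unused on ARM */
            return HYPERVISOR_sched_op(SCHEDOP_shutdown, &r);
    }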
-
- 28 Apr 2014, 1 commit
-
-
Committed by Julien Grall

virt_to_pfn has been defined in asm/memory.h by commit e26a9e00 "ARM: Better virt_to_page() handling". This results in a compilation warning when CONFIG_XEN is enabled:

    arch/arm/include/asm/xen/page.h:80:0: warning: "virt_to_pfn" redefined [enabled by default]
     #define virt_to_pfn(v) (PFN_DOWN(__pa(v)))
     ^
    In file included from arch/arm/include/asm/page.h:163:0,
                     from arch/arm/include/asm/xen/page.h:4,
                     from include/xen/page.h:4,
                     from arch/arm/xen/grant-table.c:33:

The definition in memory.h is nearly the same (it directly expands PFN_DOWN), so we can safely drop virt_to_pfn from the Xen include.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
- 24 Apr 2014, 2 commits
-
-
Committed by Julien Grall

virt_to_pfn has been defined in asm/memory.h by commit e26a9e00 "ARM: Better virt_to_page() handling". This results in a compilation warning when CONFIG_XEN is enabled:

    arch/arm/include/asm/xen/page.h:80:0: warning: "virt_to_pfn" redefined [enabled by default]
     #define virt_to_pfn(v) (PFN_DOWN(__pa(v)))
     ^
    In file included from arch/arm/include/asm/page.h:163:0,
                     from arch/arm/include/asm/xen/page.h:4,
                     from include/xen/page.h:4,
                     from arch/arm/xen/grant-table.c:33:

The definition in memory.h is nearly the same (it directly expands PFN_DOWN), so we can safely drop virt_to_pfn from the Xen include.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Committed by Ian Campbell
As part of this, make the usual change to xen_ulong_t in place of unsigned long. This change has no impact on x86.

The Linux definition of struct multicall_entry.result differs from the Xen definition, I think for good reasons, and used a long rather than an unsigned long. Therefore introduce a xen_long_t, which is a long on x86 architectures and a signed 64-bit integer on ARM.

Use uint32_t nr_calls on x86 for consistency with the ARM definition.

Build tested on amd64 and i386 builds. Runtime tested on ARM.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
- 18 Mar 2014, 1 commit
-
-
Committed by Zoltan Kiss

The grant mapping API does m2p_override unnecessarily: only gntdev needs it; for blkback and the future netback patches it just causes lock contention, as those pages never go to userspace.

Therefore this series does the following:
- the bulk of the original function (everything after the mapping hypercall) is moved to the arch-dependent set/clear_foreign_p2m_mapping
- the "if (xen_feature(XENFEAT_auto_translated_physmap))" branch goes to ARM
- therefore the ARM function could be much smaller; the m2p_override stubs could also be removed
- on x86 the set_phys_to_machine calls were moved up to this new function from the m2p_override functions
- and the m2p_override functions are only called when there is a kmap_ops param

It also removes a stray space from arch/x86/include/asm/xen/page.h.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: Anthony Liguori <aliguori@amazon.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
- 06 Jan 2014, 2 commits
-
-
Committed by Konrad Rzeszutek Wilk

The 'xen_hvm_resume_frames' used to be an 'unsigned long' and contain the virtual address of the grants. That was OK for most architectures (PVHVM, ARM) where the grants are contiguous in memory. That however is not the case for PVH, in which case we will have to do a lookup for each virtual address for the PFN.

Instead of doing that, let's make it a structure which will contain the array of PFNs, the virtual address and the count of said PFNs. Also provide generic functions, gnttab_setup_auto_xlat_frames and gnttab_free_auto_xlat_frames, to populate said structure with appropriate values for PVHVM and ARM.

To round it off, change the name from 'xen_hvm_resume_frames' to a more descriptive one: 'xen_auto_xlat_grant_frames'. For PVH, in the patch "xen/pvh: Piggyback on PVHVM for grant driver", we will populate the 'xen_auto_xlat_grant_frames' by ourselves.

v2 moves the xen_remap into gnttab_setup_auto_xlat_frames and also introduces xen_unmap for gnttab_free_auto_xlat_frames.

Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
[v3: Based on top of 'asm/xen/page.h: remove redundant semicolon']
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
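A sketch of the structure and helpers named in the message:

    struct grant_frames {
            xen_pfn_t *pfn;         /* array of grant-frame PFNs      */
            unsigned int count;     /* number of said PFNs            */
            void *vaddr;            /* virtual address of the mapping */
    };
    extern struct grant_frames xen_auto_xlat_grant_frames;

    int gnttab_setup_auto_xlat_frames(phys_addr_t addr);
    void gnttab_free_auto_xlat_frames(void);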
-
Committed by Wei Liu
Signed-off-by: Wei Liu <liuw@liuw.name>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
- 05 Jan 2014, 1 commit
-
-
Committed by Rob Herring
ioremap_cache is more aligned with other architectures. There are only 2 users of this in the kernel: pxa2xx-flash and Xen. This fixes Xen build failures on arm64:

    drivers/tty/hvc/hvc_xen.c:233:2: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
    drivers/xen/grant-table.c:1174:3: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
    drivers/xen/xenbus/xenbus_probe.c:778:4: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 Dec 2013, 1 commit
-
-
Committed by Rob Herring
ioremap_cache is more aligned with other architectures. There are only 2 users of this in the kernel: pxa2xx-flash and Xen. This fixes Xen build failures on arm64 caused by commit c04e8e2f (arm64: allow ioremap_cache() to use existing RAM mappings):

    drivers/tty/hvc/hvc_xen.c:233:2: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
    drivers/xen/grant-table.c:1174:3: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
    drivers/xen/xenbus/xenbus_probe.c:778:4: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 Nov 2013, 1 commit
-
-
Committed by Stefano Stabellini
Some common Xen drivers, like balloon.c, call pfn_to_mfn and mfn_to_pfn even for autotranslate guests, expecting the argument back. The following commit broke these drivers by changing the behavior of pfn_to_mfn and mfn_to_pfn:

    commit 4a19138c
    Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Date:   Thu Oct 17 16:22:27 2013 +0000

        arm/xen,arm64/xen: introduce p2m

They now return INVALID_P2M_ENTRY if Linux doesn't actually know what is the mfn backing a pfn or what is the pfn corresponding to an mfn. Fix the regression by switching to the old behavior.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Tested-by: Ian Campbell <ian.campbell@citrix.com>
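A sketch of the restored behaviour (assuming the __pfn_to_mfn tree lookup from the p2m commit):

    static inline unsigned long pfn_to_mfn(unsigned long pfn)
    {
            unsigned long mfn = __pfn_to_mfn(pfn);

            /* Autotranslated callers such as balloon.c expect the
             * argument back when no mapping is tracked. */
            if (mfn == INVALID_P2M_ENTRY)
                    return pfn;
            return mfn;
    }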
-
- 25 Oct 2013, 1 commit
-
-
Committed by Stefano Stabellini
Introduce xen_dma_map_page, xen_dma_unmap_page, xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device. They have empty implementations on x86 and ia64 but they call the corresponding platform dma_ops function on arm and arm64.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v9:
- xen_dma_map_page return void, avoid page_to_phys.
-
- 10 Oct 2013, 2 commits
-
-
Committed by Stefano Stabellini

xen_swiotlb_alloc_coherent needs to allocate a coherent buffer for cpu and devices. On native x86 it is sufficient to call __get_free_pages in order to get a coherent buffer, while on ARM (and potentially ARM64) we need to call the native dma_ops->alloc implementation. Introduce xen_alloc_coherent_pages to abstract the arch-specific buffer allocation.

Similarly introduce xen_free_coherent_pages to free a coherent buffer: on x86 it is simply a call to free_pages, while on ARM and ARM64 it is arm_dma_ops.free.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v7:
- rename __get_dma_ops to __generic_dma_ops;
- call __generic_dma_ops(hwdev)->alloc/free on arm64 too.

Changes in v6:
- call __get_dma_ops to get the native dma_ops pointer on arm.
-
Committed by Stefano Stabellini

Xen on arm and arm64 needs SWIOTLB_XEN: when running on Xen we need to program the hardware with mfns rather than pfns for dma addresses. Remove the SWIOTLB_XEN dependency on X86 and PCI and make XEN select SWIOTLB_XEN on arm and arm64.

At the moment we always rely on swiotlb-xen, but when Xen starts supporting hardware IOMMUs we'll be able to avoid it conditionally on the presence of an IOMMU on the platform.

Implement xen_create_contiguous_region on arm and arm64: for the moment we assume that dom0 has been mapped 1:1 (physical addresses == machine addresses), therefore we don't need to call XENMEM_exchange. Simply return the physical address as the dma address.

Initialize the xen-swiotlb from xen_early_init (before the native dma_ops are initialized) and set xen_dma_ops to &xen_swiotlb_dma_ops.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v8:
- assume dom0 is mapped 1:1, no need to call XENMEM_exchange.

Changes in v7:
- call __set_phys_to_machine_multi from xen_create_contiguous_region and xen_destroy_contiguous_region to update the P2M;
- don't call XENMEM_unpin, it has been removed;
- call XENMEM_exchange instead of XENMEM_exchange_and_pin;
- set nr_exchanged to 0 before calling the hypercall.

Changes in v6:
- introduce and export xen_dma_ops;
- call xen_mm_init as arch_initcall.

Changes in v4:
- remove redefinition of DMA_ERROR_CODE;
- update the code to use XENMEM_exchange_and_pin and XENMEM_unpin;
- add a note about hardware IOMMU in the commit message.

Changes in v3:
- code style changes;
- warn on XENMEM_put_dma_buf failures.
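With the 1:1 assumption, the implementation collapses to something like this sketch:

    /* dom0 is assumed mapped 1:1, so the region is already machine-
     * contiguous: report the physical address as the DMA address. */
    int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
                                     unsigned int address_bits,
                                     dma_addr_t *dma_handle)
    {
            if (!xen_initial_domain())
                    return -EINVAL;
            *dma_handle = pstart;
            return 0;
    }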
-
- 18 Oct 2013, 1 commit
-
-
Committed by Stefano Stabellini

Introduce physical-to-machine and machine-to-physical tracking mechanisms based on rbtrees for arm/xen and arm64/xen.

We need them because all guests on ARM are autotranslated guests, therefore a physical address is potentially different from a machine address. When programming a device to do DMA, we need to be extra careful to use machine addresses rather than physical addresses to program the device. Therefore we need to know the physical-to-machine mappings.

For the moment we assume that dom0 starts with a 1:1 physical-to-machine mapping; in other words, physical addresses correspond to machine addresses. However, when mapping a foreign grant reference, the 1:1 model obviously doesn't work anymore. So at the very least we need to be able to track grant mappings. We need locking to protect accesses to the two trees.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v8:
- move pfn_to_mfn and mfn_to_pfn to page.h as static inline functions;
- no need to walk the tree if phys_to_mach.rb_node is NULL;
- correctly handle multipage p2m entries;
- substitute the spin_lock with a rwlock.
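A condensed sketch of the physical-to-machine side (field and lock names are assumptions; the real code also maintains the inverse tree and handles insertion and removal):

    struct xen_p2m_entry {
            unsigned long pfn;
            unsigned long mfn;
            unsigned long nr_pages;
            struct rb_node rbnode_phys;
    };

    static struct rb_root phys_to_mach = RB_ROOT;
    static DEFINE_RWLOCK(p2m_lock);

    unsigned long __pfn_to_mfn(unsigned long pfn)
    {
            struct rb_node *n;
            unsigned long irqflags, mfn;

            read_lock_irqsave(&p2m_lock, irqflags);
            n = phys_to_mach.rb_node;
            while (n) {
                    struct xen_p2m_entry *e =
                            rb_entry(n, struct xen_p2m_entry, rbnode_phys);
                    if (e->pfn <= pfn && e->pfn + e->nr_pages > pfn) {
                            mfn = e->mfn + (pfn - e->pfn);
                            read_unlock_irqrestore(&p2m_lock, irqflags);
                            return mfn;
                    }
                    n = pfn < e->pfn ? n->rb_left : n->rb_right;
            }
            read_unlock_irqrestore(&p2m_lock, irqflags);
            return INVALID_P2M_ENTRY;
    }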
-
- 04 Jul 2013, 1 commit
-
-
Committed by Stefano Stabellini
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
- 05 Jun 2013, 1 commit
-
-
Committed by Stefano Stabellini

Define xen_remap as ioremap_cache (MT_MEMORY and MT_DEVICE_CACHED end up having the same AttrIndx encoding). Remove the include of asm/mach/map.h, which is no longer needed.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
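The resulting definition is a one-liner:

    #define xen_remap(cookie, size) ioremap_cache((cookie), (size))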
-
- 26 Apr 2013, 1 commit
-
-
Committed by Stefano Stabellini
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
-
- 12 Mar 2013, 1 commit
-
-
Committed by Ian Campbell

Rob Herring has observed that c81611c4 "xen: event channel arrays are xen_ulong_t and not unsigned long" introduced a compile failure when building without CONFIG_AEABI:

    /tmp/ccJaIZOW.s: Assembler messages:
    /tmp/ccJaIZOW.s:831: Error: even register required -- `ldrexd r5,r6,[r4]'

Will Deacon pointed out that this is because OABI does not require even base registers for 64-bit values. We can avoid this by simply using the existing atomic64_xchg operation and the same container_of trick as used by the cmpxchg macros. However, since this code is used on memory which is shared with the hypervisor, we require proper atomic instructions and cannot use the generic atomic64 callbacks (which are based on spinlocks); therefore add a dependency on !GENERIC_ATOMIC64. Since we already depend on !CPU_V6 there isn't much downside to this.

While thinking about this we also observed that OABI has different struct alignment requirements to EABI, which is a problem for hypercall argument structs that are shared with the hypervisor and must be in EABI layout. Since I don't expect people to want to run OABI kernels on Xen, depend on CONFIG_AEABI explicitly too (although it also happens to be enforced by the !GENERIC_ATOMIC64 requirement).

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Rob Herring <robherring2@gmail.com>
Acked-by: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
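A sketch of the resulting macro, using the container_of trick the message describes (atomic64_t stores its value in a field named counter):

    /* The shared event words must be updated with real atomic
     * instructions, which is why the spinlock-based GENERIC_ATOMIC64
     * fallback is excluded by the new Kconfig dependency. */
    #define xchg_xen_ulong(ptr, val)                                    \
            atomic64_xchg(container_of((ptr), atomic64_t, counter),     \
                          (val))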
-
- 20 Feb 2013, 2 commits
-
-
Committed by Ian Campbell
On ARM we want these to be the same size on 32- and 64-bit. This is an ABI change on ARM. X86 does not change.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Jan Beulich <JBeulich@suse.com>
Cc: Keir (Xen.org) <keir@xen.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: xen-devel@lists.xen.org
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Committed by Stefano Stabellini
ioremap can't be used to map ring pages on ARM because it uses device memory caching attributes (MT_DEVICE*). Introduce a Xen-specific abstraction to map ring pages, called xen_remap, that is defined as ioremap on x86 (no behavioral changes). On ARM it explicitly calls __arm_ioremap with the right caching attributes: MT_MEMORY.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
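A sketch of the per-architecture definitions, per the message (the ARM mapping type is the cacheable MT_MEMORY):

    /* x86: no behavioural change */
    #define xen_remap(cookie, size) ioremap((cookie), (size))

    /* ARM: normal cacheable memory rather than device memory */
    #define xen_remap(cookie, size) __arm_ioremap((cookie), (size), MT_MEMORY)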
-