- 14 December 2013, 1 commit
-
-
Submitted by Stefano Stabellini

The auto-xlat logic vs the non-xlat logic means that for auto-xlat guests (like PVH, HVM or ARM) we don't need to:
- use the P2M
- use scratch pages.

However, the code in increase_reservation does modify the p2m for auto_translated guests, but decrease_reservation does not. Fix that by avoiding any p2m modifications in both increase_reservation and decrease_reservation for auto_translated guests, and also avoid allocating or using scratch pages for auto_translated guests. Lastly, since !auto-xlat is really another way of saying 'xen_pv', remove the redundant 'xen_pv_domain' check.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
[v2: Updated the description]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
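A rough illustration of the guard this implies, as a minimal sketch; the helper name is an assumption, while xen_feature() and XENFEAT_auto_translated_physmap are the real kernel interfaces:

```c
#include <xen/features.h>   /* xen_feature(), XENFEAT_auto_translated_physmap */

/* Hypothetical helper: PV guests keep a P2M that the balloon driver must
 * update; auto-translated guests (PVH, HVM, ARM) skip both the P2M update
 * and the scratch-page handling entirely. */
static bool balloon_needs_p2m_update(void)
{
	return !xen_feature(XENFEAT_auto_translated_physmap);
}
```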
-
- 07 December 2013, 2 commits
-
-
Submitted by Lv Zheng

Replace direct inclusions of <acpi/acpi.h>, <acpi/acpi_bus.h> and <acpi/acpi_drivers.h>, which are incorrect, with <linux/acpi.h> inclusions and remove some inclusions of those files that aren't necessary.

First of all, <acpi/acpi.h>, <acpi/acpi_bus.h> and <acpi/acpi_drivers.h> should not be included directly from any files that are built for CONFIG_ACPI unset, because that generally leads to build warnings about undefined symbols in !CONFIG_ACPI builds. For CONFIG_ACPI set, <linux/acpi.h> includes those files and for CONFIG_ACPI unset it provides stub ACPI symbols to be used in that case.

Second, there are ordering dependencies between those files that always have to be met. Namely, it is required that <acpi/acpi_bus.h> be included prior to <acpi/acpi_drivers.h> so that the acpi_pci_root declarations the latter depends on are always there. And <acpi/acpi.h>, which provides basic ACPICA type declarations, should always be included prior to any other ACPI headers in CONFIG_ACPI builds. That also is taken care of by including <linux/acpi.h> as appropriate.

Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com> (drivers/pci stuff)
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> (Xen stuff)
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
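For reference, the substitution being described looks like this (illustrative file, not a specific driver):

```c
/*
 * Before: direct inclusion, which breaks CONFIG_ACPI=n builds and
 * silently depends on a fragile include ordering.
 *
 *   #include <acpi/acpi.h>
 *   #include <acpi/acpi_bus.h>
 *   #include <acpi/acpi_drivers.h>
 *
 * After: one include that pulls the ACPI headers in the right order for
 * CONFIG_ACPI=y and provides stubs for CONFIG_ACPI=n.
 */
#include <linux/acpi.h>
```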
-
Submitted by Ian Campbell

This failure represents a hypervisor issue, but if it does occur then nothing good can come of returning pages which still refer to a foreign-owned page into the general allocation pool. Instead we are forced to leak them. Log that we have done so. The potential for failure only exists for autotranslated guests (e.g. ARM and x86 PVH).

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
-
- 04 December 2013, 1 commit
-
-
Submitted by Eric Trudeau

From: Eric Trudeau <etrudeau@broadcom.com>

xen_hvm_resume_frames stores the physical address of the grant table. enlighten.c was incorrectly setting it as if it were a page frame number. This caused the table to be mapped into the guest at an unexpected physical address. Additionally, a warning is improved to include the grant table address which failed in xen_remap.

Signed-off-by: Eric Trudeau <etrudeau@broadcom.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
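A hedged sketch of the distinction (the helper below is hypothetical; only struct resource, phys_addr_t and PAGE_SHIFT are real kernel symbols):

```c
#include <linux/ioport.h>   /* struct resource */
#include <linux/types.h>    /* phys_addr_t */

/* xen_hvm_resume_frames wants the grant table's *physical address*;
 * shifting it down to a frame number maps the table at the wrong place. */
static phys_addr_t grant_table_base(const struct resource *res)
{
	/* wrong: return res->start >> PAGE_SHIFT;  (that would be a PFN) */
	return res->start;   /* right: keep the physical address as-is */
}
```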
-
- 27 November 2013, 1 commit
-
-
Submitted by Matt Wilson

Commit f62805f1 introduced a bug where lazy MMU mode isn't exited if a m2p_add/remove_override call fails.

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Anthony Liguori <aliguori@amazon.com>
Cc: xen-devel@lists.xenproject.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Matt Wilson <msw@amazon.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: stable@vger.kernel.org
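The shape of the fix, sketched with a hypothetical override helper; arch_enter_lazy_mmu_mode()/arch_leave_lazy_mmu_mode() are the real primitives (their header location varies by architecture):

```c
#include <asm/pgtable.h>    /* arch_enter/leave_lazy_mmu_mode() (arch-dependent) */

extern int do_override(void);   /* hypothetical stand-in for m2p_add/remove_override */

static int override_with_lazy_mmu(void)
{
	int ret;

	arch_enter_lazy_mmu_mode();
	ret = do_override();
	/* Bug pattern: an early "return ret;" here would leave lazy MMU
	 * mode enabled on the failure path. */
	arch_leave_lazy_mmu_mode();   /* executed on success and failure alike */

	return ret;
}
```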
-
- 23 November 2013, 1 commit
-
-
Submitted by Rafael J. Wysocki

Modify the ACPI namespace scanning code to register a struct acpi_device object for every namespace node representing a device, processor and so on, even if the device represented by that namespace node is reported to be not present and not functional by _STA.

There are multiple reasons to do that.

First of all, it avoids quite a lot of overhead when struct acpi_device objects are deleted every time acpi_bus_trim() is run and then added again by a subsequent acpi_bus_scan() for the same scope, although the namespace objects they correspond to stay in memory all the time (which always is the case on a vast majority of systems).

Second, it will allow user space to see that there are namespace nodes representing devices that are not present at the moment and may be added to the system. It will also allow user space to evaluate _SUN for those nodes to check what physical slots the "missing" devices may be put into, and it will make sense to add a sysfs attribute for _STA evaluation after this change (that will be useful for thermal management on some systems).

Next, it will help to consolidate the ACPI hotplug handling among subsystems by making it possible to store hotplug-related information in struct acpi_device objects in a standard common way.

Finally, it will help to avoid a race condition related to the deletion of ACPI namespace nodes. Namely, namespace nodes may be deleted as a result of a table unload triggered by _EJ0 or _DCK. If a hotplug notification for one of those nodes is triggered right before the deletion and it executes a hotplug callback via acpi_hotplug_execute(), the ACPI handle passed to that callback may be stale when the callback actually runs. One way to work around that is to always pass struct acpi_device pointers to hotplug callbacks after doing a get_device() on the objects in question, which eliminates the use-after-free possibility (the ACPI handles in those objects are invalidated by acpi_scan_drop_device(), so they will trigger ACPICA errors on attempts to use them).

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Tested-by: Mika Westerberg <mika.westerberg@linux.intel.com>
-
- 15 November 2013, 2 commits
-
-
Submitted by Stefano Stabellini

swiotlb-xen is missing a xen_dma_map_page call in xen_swiotlb_map_sg_attrs, in the bounce buffer path.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Submitted by Rafael J. Wysocki

Since DEVICE_ACPI_HANDLE() is now literally identical to ACPI_HANDLE(), replace it with the latter everywhere and drop its definition from include/acpi.h.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
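The conversion itself is mechanical; a minimal sketch, assuming a CONFIG_ACPI build:

```c
#include <linux/acpi.h>
#include <linux/device.h>

/* Illustrative only: DEVICE_ACPI_HANDLE(dev) and ACPI_HANDLE(dev) were
 * already the same thing, so callers simply switch to the shorter name. */
static acpi_handle example_handle(struct device *dev)
{
	/* before: return DEVICE_ACPI_HANDLE(dev); */
	return ACPI_HANDLE(dev);
}
```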
-
- 09 November 2013, 3 commits
-
-
Submitted by Paul Gortmaker

Commit 6efa20e4 ("xen: Support 64-bit PV guest receiving NMIs") and commit cd9151e2 ("xen/balloon: set a mapping for ballooned out pages") added new instances of __cpuinit usage. We removed this a couple of versions ago; we now want to remove the compat no-op stubs. Introducing new users is not what we want to see at this point in time, as it will break once the stubs are gone.

Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
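The cleanup itself is a one-line annotation drop, e.g. (function name is illustrative, not the actual Xen code):

```c
/* Before (adds a new __cpuinit user that will break once the stubs go):
 *
 *   static void __cpuinit setup_cpu_notify(int cpu) { ... }
 *
 * After: no annotation at all. */
static void setup_cpu_notify(int cpu)
{
	/* per-CPU bring-up work would go here */
	(void)cpu;
}
```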
-
Submitted by Boris Ostrovsky

Currently the balloon's initial value is set to max_pfn, which includes non-RAM ranges such as the MMIO hole. As a result, the initial memory target (specified by the guest's configuration file) will appear smaller than what the balloon driver perceives to be the current number of available pages. Thus it will balloon down "extra" pages, decreasing the amount of available memory for no good reason.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Submitted by Konrad Rzeszutek Wilk

The PCI MMCONFIG area is usually reserved via the E820, so the Xen hypervisor is aware of these regions. But they can also be enumerated in the ACPI DSDT, which means the hypervisor won't know of them until the initial domain informs it via PHYSDEVOP_pci_mmcfg_reserved. This is what this patch does for all of the MCFG regions that the initial domain is aware of (E820-enumerated and ACPI).

Reported-by: Santosh Jodh <Santosh.Jodh@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: David Vrabel <david.vrabel@citrix.com>
CC: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
[v1: Redid it a bit]
[v2: Dropped the P2M 1-1 setting]
[v3: Check for Xen in case we are running on baremetal]
[v4: Wrap with CONFIG_PCI_MMCONFIG]
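A hedged sketch of reporting one MCFG region to the hypervisor. The wrapper function is hypothetical; the struct and hypercall come from the Xen physdev interface, and the exact flag name is an assumption to be checked against the headers:

```c
#include <xen/interface/physdev.h>   /* struct physdev_pci_mmcfg_reserved */
#include <asm/xen/hypercall.h>       /* HYPERVISOR_physdev_op() */

static int report_mcfg_region(u64 address, u16 segment, u8 start_bus, u8 end_bus)
{
	struct physdev_pci_mmcfg_reserved r = {
		.address   = address,                 /* base of the MMCONFIG window */
		.segment   = segment,
		.start_bus = start_bus,
		.end_bus   = end_bus,
		.flags     = XEN_PCI_MMCFG_RESERVED,  /* assumed flag name */
	};

	return HYPERVISOR_physdev_op(PHYSDEVOP_pci_mmcfg_reserved, &r);
}
```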
-
- 07 November 2013, 1 commit
-
-
Submitted by Michael Opdenacker

This patch removes the IRQF_DISABLED flag from drivers/xen code. It has been a no-op since 2.6.35 and it will be removed one day. Note that architecture-dependent fixes for IRQF_DISABLED were already submitted through separate patches.

Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
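What the change looks like at a call site (handler and names are illustrative):

```c
#include <linux/interrupt.h>

static irqreturn_t example_isr(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int example_setup_irq(int irq, void *dev_id)
{
	/* before: request_irq(irq, example_isr, IRQF_DISABLED, "xen-example", dev_id);
	 * IRQF_DISABLED has been a no-op since 2.6.35, so just pass 0. */
	return request_irq(irq, example_isr, 0, "xen-example", dev_id);
}
```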
-
- 29 October 2013, 1 commit
-
-
Submitted by Stefano Stabellini

map_sg returns the number of elements mapped, not a dma_addr_t. In case of error it should return 0, not DMA_ERROR_CODE.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
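The return convention, sketched; the signature is simplified and the worker is hypothetical (the real xen_swiotlb_map_sg_attrs also takes a direction and attrs):

```c
#include <linux/device.h>
#include <linux/scatterlist.h>

extern int do_map_entries(struct device *dev, struct scatterlist *sgl,
			  int nelems);   /* hypothetical worker */

static int example_map_sg(struct device *dev, struct scatterlist *sgl, int nelems)
{
	if (do_map_entries(dev, sgl, nelems) < 0)
		return 0;        /* error: zero entries mapped, not DMA_ERROR_CODE */

	return nelems;           /* success: number of elements mapped */
}
```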
-
- 25 October 2013, 4 commits
-
-
Submitted by Stefano Stabellini

swiotlb-xen: static inline xen_phys_to_bus, xen_bus_to_phys, xen_virt_to_bus and range_straddles_page_boundary

These functions are small and called frequently. Make them static inline.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
Submitted by Stefano Stabellini

When mapping/unmapping grant refs, call set_phys_to_machine to update the P2M with the new mappings for autotranslate guests. This is (almost) a nop on x86.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v9:
- add in-code comments.
-
Submitted by Stefano Stabellini

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v7:
- use dev_warn instead of pr_warn.
-
Submitted by Stefano Stabellini

Call xen_dma_map_page, xen_dma_unmap_page, xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device from swiotlb-xen to ensure cpu/device coherency of the pages used for DMA, including the ones belonging to the swiotlb buffer.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
- 24 October 2013, 1 commit
-
-
Submitted by Thierry Reding

Tracepoints are only created when Xen support is enabled, but they are also referenced within lib/swiotlb.c. So unless Xen support is enabled the tracepoints will be missing, therefore causing builds to fail. Fix this by moving the tracepoint creation to lib/swiotlb.c, which works nicely because the Xen swiotlb support selects the generic swiotlb support.

Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
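"Moving the tracepoint creation" comes down to choosing which translation unit defines the tracepoint bodies; a sketch of the pattern that ends up in the always-built file (assuming the usual CREATE_TRACE_POINTS mechanism and the existing swiotlb trace header):

```c
/* Exactly one always-built file may define the tracepoint bodies.
 * lib/swiotlb.c is built whenever swiotlb-xen is (the latter selects it),
 * so the definition lives there and swiotlb-xen only *uses* the event. */
#define CREATE_TRACE_POINTS
#include <trace/events/swiotlb.h>
```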
-
- 17 October 2013, 1 commit
-
-
Submitted by Greg Kroah-Hartman

The dev_attrs field of struct bus_type is going away soon; dev_groups should be used instead. This converts the xenbus code to use the correct field.

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: <xen-devel@lists.xenproject.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
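A hedged sketch of the dev_attrs to dev_groups conversion; the attribute and bus names are illustrative, not the real xenbus ones:

```c
#include <linux/device.h>
#include <linux/sysfs.h>

static ssize_t nodename_show(struct device *dev,
			     struct device_attribute *attr, char *buf)
{
	return sprintf(buf, "example\n");          /* placeholder value */
}
static DEVICE_ATTR_RO(nodename);

static struct attribute *xenbus_example_attrs[] = {
	&dev_attr_nodename.attr,
	NULL,
};
ATTRIBUTE_GROUPS(xenbus_example);                  /* emits xenbus_example_groups[] */

static struct bus_type xenbus_example_bus = {
	.name       = "xenbus-example",
	.dev_groups = xenbus_example_groups,       /* replaces the old .dev_attrs */
};
```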
-
- 10 October 2013, 4 commits
-
-
Submitted by Stefano Stabellini

Use xen_alloc_coherent_pages and xen_free_coherent_pages to allocate or free coherent pages.

We need to be careful handling the pointer returned by xen_alloc_coherent_pages, because on ARM the pointer is not equal to phys_to_virt(*dma_handle). In fact virt_to_phys only works for kernel direct-mapped RAM memory. In the ARM case the pointer could be an ioremap address, therefore passing it to virt_to_phys would give you another physical address that doesn't correspond to it.

Make xen_create_contiguous_region take a phys_addr_t as start parameter to avoid the virt_to_phys calls which would be incorrect.

Changes in v6:
- remove extra spaces.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Submitted by Stefano Stabellini

Implement xen_swiotlb_set_dma_mask, and use it for set_dma_mask on arm.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
Submitted by Stefano Stabellini

Xen on arm and arm64 needs SWIOTLB_XEN: when running on Xen we need to program the hardware with mfns rather than pfns for dma addresses. Remove the SWIOTLB_XEN dependency on X86 and PCI and make XEN select SWIOTLB_XEN on arm and arm64.

At the moment we always rely on swiotlb-xen, but when Xen starts supporting hardware IOMMUs we'll be able to avoid it conditionally on the presence of an IOMMU on the platform.

Implement xen_create_contiguous_region on arm and arm64: for the moment we assume that dom0 has been mapped 1:1 (physical addresses == machine addresses), therefore we don't need to call XENMEM_exchange. Simply return the physical address as dma address.

Initialize the xen-swiotlb from xen_early_init (before the native dma_ops are initialized), and set xen_dma_ops to &xen_swiotlb_dma_ops.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Changes in v8:
- assume dom0 is mapped 1:1, no need to call XENMEM_exchange.

Changes in v7:
- call __set_phys_to_machine_multi from xen_create_contiguous_region and xen_destroy_contiguous_region to update the P2M;
- don't call XENMEM_unpin, it has been removed;
- call XENMEM_exchange instead of XENMEM_exchange_and_pin;
- set nr_exchanged to 0 before calling the hypercall.

Changes in v6:
- introduce and export xen_dma_ops;
- call xen_mm_init as an arch_initcall.

Changes in v4:
- remove redefinition of DMA_ERROR_CODE;
- update the code to use XENMEM_exchange_and_pin and XENMEM_unpin;
- add a note about hardware IOMMU in the commit message.

Changes in v3:
- code style changes;
- warn on XENMEM_put_dma_buf failures.
-
Submitted by Stefano Stabellini

Modify xen_create_contiguous_region to return the dma address of the newly contiguous buffer.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>

Changes in v4:
- use virt_to_machine instead of virt_to_bus.
-
- 03 October 2013, 1 commit
-
-
Submitted by Zoltan Kiss

Ftrace is currently not able to detect when SWIOTLB has to do double buffering. Under Xen you can only see it indirectly in function_graph, when xen_swiotlb_map_page() doesn't stop after range_straddles_page_boundary(), but calls spinlock functions, memcpy() and xen_phys_to_bus() as well. This patch introduces the swiotlb:swiotlb_bounced event, which also prints out the following information to help you find out why bouncing happened:

dev_name: 0000:08:00.0 dma_mask=ffffffffffffffff dev_addr=9149f000 size=32768 swiotlb_force=0

If you use Xen, and (dev_addr + size + 1) > dma_mask, the buffer is out of the device's DMA range. If swiotlb_force == 1, you should really change the kernel parameters. Otherwise, the buffer is not contiguous in mfn space.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
[v1: Don't print 'swiotlb_force=X', just print swiotlb_force if it is enabled]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
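A sketch of the two checks the trace line lets you redo by hand, using the standard last-byte-vs-mask form of the DMA range test; the helper is illustrative, not the swiotlb-xen code itself:

```c
#include <linux/device.h>
#include <linux/dma-mapping.h>

static bool would_bounce(struct device *dev, dma_addr_t dev_addr, size_t size,
			 bool straddles_mfn_boundary)
{
	/* Outside the device's DMA range? (compare the last byte to the mask) */
	if (dev_addr + size - 1 > dma_get_mask(dev))
		return true;

	/* Otherwise bouncing happens because the buffer is not
	 * machine-contiguous (it straddles an mfn boundary). */
	return straddles_mfn_boundary;
}
```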
-
- 25 September 2013, 1 commit
-
-
Submitted by David Vrabel

get_balloon_scratch_page() disables preemption, so we cannot call alloc_page() in between get/put_balloon_scratch_page(). Shuffle bits around in decrease_reservation() to avoid this.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 12 September 2013, 2 commits
-
-
Submitted by Wei Liu

The BUG_ON in increase_reservation is wrong: the P2M entry of a ballooned-out page is set to the balloon scratch page, so the page might have a valid P2M entry at that point.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
Submitted by David Vrabel

In decrease_reservation(), if the kernel is preempted between updating the mapping and updating the p2m then they may end up using different scratch pages. Use get_balloon_scratch_page() and put_balloon_scratch_page(), which use get_cpu_var() and put_cpu_var() to correctly disable preemption.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-by: Sander Eikelenboom <linux@eikelenboom.it>
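A minimal sketch of the get_cpu_var()/put_cpu_var() pattern being relied on (variable and function names are illustrative):

```c
#include <linux/percpu.h>
#include <linux/mm_types.h>

static DEFINE_PER_CPU(struct page *, example_scratch_page);

static void with_scratch_page(void)
{
	/* get_cpu_var() disables preemption until put_cpu_var(), so the
	 * mapping update and the p2m update both see this CPU's page. */
	struct page *scratch = get_cpu_var(example_scratch_page);

	/* ... update mapping and p2m using "scratch" ... */
	(void)scratch;

	put_cpu_var(example_scratch_page);
}
```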
-
- 31 August 2013, 1 commit
-
-
Submitted by Wei Liu

In commit cd9151e2 ("xen/balloon: set a mapping for ballooned out pages") we have the ballooned-out page's mapping set to a scratch page. That commit also sets the P2M entry of the ballooned-out page to the scratch page's MFN. This is necessary for PV guests but not for HVM guests. On the other hand, setting the P2M entry would trigger the BUG_ON in __set_phys_to_machine. The correct thing to do here is to avoid calling __set_phys_to_machine for auto-translated guests.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
- 30 August 2013, 2 commits
-
-
Submitted by Dan Carpenter

The call to del_evtchn() frees "evtchn", so it must not be used after that point.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Submitted by Andres Lagar-Cavilla

When a foreign mapper attempts to map guest frames that are paged out, the mapper receives an ENOENT response and will have to try again while a helper process pages the target frame back in.

Gating checks on PRIVCMD_MMAPBATCH* ioctl args were preventing retries of mapping calls. Permit subsequent calls to update a sub-range of the VMA, iff nothing is yet mapped in that range.

Since it is now valid to call PRIVCMD_MMAPBATCH* multiple times, only set vma->vm_private_data if the parameters are valid and (if necessary) the pages for the auto_translated_physmap case have been allocated. This prevents subsequent calls from incorrectly entering the 'retry' path when there are no pages allocated etc.

Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 20 August 2013, 5 commits
-
-
Submitted by Stefano Stabellini

GNTTABOP_unmap_grant_ref unmaps a grant and replaces it with a 0 mapping instead of reinstating the original mapping. Doing so separately would be racy. To unmap a grant and reinstate the original mapping atomically we use GNTTABOP_unmap_and_replace.

GNTTABOP_unmap_and_replace doesn't work with GNTMAP_contains_pte, so don't use it for kmaps. GNTTABOP_unmap_and_replace zeroes the mapping passed in new_addr, so we have to reinstate it; however, that is a per-cpu mapping only used for balloon scratch pages, so we can be sure that it's not going to be accessed while the mapping is not valid.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: alex@alex.org.uk
CC: dcrisan@flexiant.com
[v1: Konrad fixed up the conflicts]

Conflicts:
	arch/x86/xen/p2m.c
-
Submitted by Stefano Stabellini

The following commit:

commit 6efa20e4
Author: Konrad Rzeszutek Wilk <konrad@kernel.org>
Date:   Fri Jul 19 11:51:31 2013 -0400

    xen: Support 64-bit PV guest receiving NMIs

breaks the Xen ARM build:

  CC      drivers/xen/events.o
drivers/xen/events.c: In function 'xen_send_IPI_one':
drivers/xen/events.c:1218:6: error: 'XEN_NMI_VECTOR' undeclared (first use in this function)

Simply ifdef the undeclared symbol in the code.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
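The fix amounts to compiling the NMI special case only where the symbol exists; a sketch (the surrounding function is illustrative, not the actual xen_send_IPI_one):

```c
static void example_send_ipi(unsigned int cpu, int vector)
{
#ifdef XEN_NMI_VECTOR                   /* defined on x86 only */
	if (vector == XEN_NMI_VECTOR) {
		/* x86-specific NMI delivery path */
		return;
	}
#endif
	/* common event-channel IPI path, used on ARM and x86 alike */
	(void)cpu;
}
```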
-
Submitted by David Vrabel

The original comment on the scanning of the start word on the 2nd pass did not reflect the actual behaviour (the code was incorrectly masking bit_idx instead of the pending word itself). The documented behaviour is not actually required since, if events were pending in the MSBs, they would be immediately scanned anyway as we go through the loop again. Update the documentation to reflect this (instead of trying to change the behaviour).

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
-
Submitted by David Vrabel

When an event is being bound to a VCPU there is a window between the EVTCHNOP_bind_vcpu call and the adjustment of the local per-cpu masks where an event may be lost. The hypervisor upcalls the new VCPU but the kernel thinks that event is still bound to the old VCPU and ignores it.

There is even a problem when the event is being bound to the same VCPU, as there is a small window between the clear_bit() and set_bit() calls in bind_evtchn_to_cpu(). When scanning for pending events, the kernel may read the bit when it is momentarily clear and ignore the event.

Avoid this by masking the event during the whole bind operation.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
CC: stable@vger.kernel.org
-
Submitted by David Vrabel

The sizeof() argument in init_evtchn_cpu_bindings() is incorrect, resulting in only the first 64 (or 32 in 32-bit guests) ports having their bindings initialized to VCPU 0.

In most cases this does not cause a problem, as request_irq() will set the irq affinity, which will set the correct local per-cpu mask. However, if request_irq() is called on a VCPU other than 0, there is a window between the unmasking of the event and the affinity being set where an event may be lost because it is not locally unmasked on any VCPU. If request_irq() is called on VCPU 0 then local irqs are disabled during the window and the race does not occur.

Fix this by initializing all NR_EVENT_CHANNELS bits in the local per-cpu masks.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: stable@vger.kernel.org
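The underlying pitfall, sketched with illustrative names: sizeof applied to a dereferenced per-cpu array gives the size of one element, not the whole bitmap.

```c
#include <linux/bitops.h>
#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/string.h>

#define EXAMPLE_NR_PORTS 4096   /* stand-in for NR_EVENT_CHANNELS */

static DEFINE_PER_CPU(unsigned long [EXAMPLE_NR_PORTS / BITS_PER_LONG],
		      example_port_mask);

static void example_init_bindings(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		/* Buggy form: sizeof(*per_cpu(example_port_mask, cpu)) is
		 * sizeof(unsigned long), i.e. only the first 64 (or 32) bits. */
		memset(per_cpu(example_port_mask, cpu),
		       cpu == 0 ? ~0 : 0,
		       sizeof(per_cpu(example_port_mask, cpu)));   /* whole array */
	}
}
```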
-
- 09 August 2013, 5 commits
-
-
Submitted by Stefano Stabellini

swiotlb-xen has an implicit dependency on CONFIG_NEED_SG_DMA_LENGTH. Remove it by replacing dma_length with sg_dma_len.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
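What the replacement looks like in iteration code (illustrative helper; for_each_sg() and sg_dma_len() are the real scatterlist APIs):

```c
#include <linux/scatterlist.h>

static unsigned int example_total_dma_len(struct scatterlist *sgl, int nelems)
{
	struct scatterlist *sg;
	unsigned int total = 0;
	int i;

	for_each_sg(sgl, sg, nelems, i)
		total += sg_dma_len(sg);   /* instead of sg->dma_length, which
					    * only exists with NEED_SG_DMA_LENGTH */

	return total;
}
```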
-
Submitted by Stefano Stabellini

Currently ballooned-out pages are mapped to 0 and have INVALID_P2M_ENTRY in the p2m. These ballooned-out pages are used to map foreign grants by gntdev and blkback (see alloc_xenballooned_pages).

Allocate a page per cpu and map all the ballooned-out pages to the corresponding mfn. Set the p2m accordingly. This way reading from a ballooned-out page won't cause a kernel crash (see http://lists.xen.org/archives/html/xen-devel/2012-12/msg01154.html).

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
CC: alex@alex.org.uk
CC: dcrisan@flexiant.com
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Submitted by David Vrabel

The global array of port users and the port_user_lock limit the scalability of the evtchn device. Instead of the global array lookup, use a per-user (per-fd) tree of event channels bound by that user and protect the tree with a per-user lock.

This is also a prerequisite for extending the number of supported event channels, by removing the fixed-size, per-event-channel array.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Submitted by Jingoo Han

The usage of strict_strtoul() is not preferred, because strict_strtoul() is obsolete. Thus, kstrtoul() should be used.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
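The replacement at a typical sysfs-store call site (illustrative wrapper):

```c
#include <linux/kernel.h>   /* kstrtoul() */

static int example_parse_target(const char *buf, unsigned long *target)
{
	/* before: if (strict_strtoul(buf, 10, target)) return -EINVAL; */
	return kstrtoul(buf, 10, target);   /* 0 on success, -errno otherwise */
}
```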
-
Submitted by Roger Pau Monne

With the current implementation, the callback in the tail of the list can be added twice, because the check done in gnttab_request_free_callback is bogus: callback->next can be NULL if it is the last callback in the list. If we add the same callback twice we end up with an infinite loop, where callback == callback->next.

Replace this check with a proper one that iterates over the list to see if the callback has already been added.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Matt Wilson <msw@amazon.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
CC: stable@vger.kernel.org
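A hedged sketch of the corrected membership test; struct gnttab_free_callback is the real list node type, while the helper itself is illustrative:

```c
#include <linux/types.h>
#include <xen/grant_table.h>   /* struct gnttab_free_callback */

static bool example_callback_queued(struct gnttab_free_callback *head,
				    struct gnttab_free_callback *cb)
{
	struct gnttab_free_callback *cur;

	/* Walk the whole list: checking only cb->next is wrong because the
	 * tail element has next == NULL even when it is already queued. */
	for (cur = head; cur; cur = cur->next)
		if (cur == cb)
			return true;

	return false;
}
```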
-