- 11 Mar 2015, 1 commit
-
-
Submitted by Jan Beulich

Otherwise the guest can abuse that control to cause e.g. PCIe Unsupported Request responses by disabling memory and/or I/O decoding and subsequently causing (CPU side) accesses to the respective address ranges, which (depending on system configuration) may be fatal to the host. Note that to alter any of the bits collected together as PCI_COMMAND_GUEST, permissive mode is now required to be enabled globally or on the specific device. This is CVE-2015-2150 / XSA-120.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
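A minimal sketch of the kind of check described above (the helper name and the guest-writable mask are illustrative, not the pciback driver's actual identifiers): a write that would change bits outside an always-allowed set, such as the memory/I/O decode enables, is refused unless permissive mode is on.

#include <linux/errno.h>
#include <linux/pci.h>

/* Illustrative set of PCI_COMMAND bits a guest may always toggle. */
#define DEMO_COMMAND_GUEST_BITS (PCI_COMMAND_INTX_DISABLE | PCI_COMMAND_SERR | \
                                 PCI_COMMAND_PARITY | PCI_COMMAND_INVALIDATE)

/* Sketch: filter a guest write to the command register. */
static int demo_command_write(struct pci_dev *dev, u16 new_value, bool permissive)
{
    u16 cur;

    pci_read_config_word(dev, PCI_COMMAND, &cur);

    /* Bits outside the always-allowed set (e.g. memory/I/O decode)
     * may only be changed when permissive mode is enabled. */
    if (!permissive && ((cur ^ new_value) & ~DEMO_COMMAND_GUEST_BITS))
        return -EPERM;

    return pci_write_config_word(dev, PCI_COMMAND, new_value);
}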
-
- 06 Mar 2015, 1 commit
-
-
Submitted by Juergen Gross

Using the pvops kernel a NULL pointer dereference was detected on a large machine (144 processors) when booting as dom0 in evtchn_fifo_unmask() during assignment of a pirq. The event channel in question was the first to need a new entry in event_array[] in events_fifo.c. Unfortunately xen_irq_info_pirq_setup() is called with evtchn being 0 for a new pirq and the real event channel number is assigned to the pirq only during __startup_pirq(). It is mandatory to call xen_evtchn_port_setup() after assigning the event channel number to the pirq to make sure all memory needed for the event channel is allocated.

Signed-off-by: Juergen Gross <jgross@suse.com>
Cc: <stable@vger.kernel.org> # 3.14+
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
- 24 Feb 2015, 2 commits
-
-
Submitted by Juergen Gross

A request in the ring buffer mustn't be read after it has been marked as consumed. Otherwise it might already have been reused by the frontend without violating the ring protocol. To avoid inconsistencies in the backend, only work on a private copy of the request. This ensures a malicious guest cannot bypass the backend's consistency checks by modifying an active request.

Signed-off-by: Juergen Gross <jgross@suse.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
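A minimal sketch of the copy-before-use pattern in plain C (the ring and request types are generic stand-ins, not the scsiback structures; a real Xen backend uses the macros from xen/interface/io/ring.h): the request is copied out of the shared ring into backend-private memory before any field is validated or used.

#include <string.h>

/* Illustrative request/ring layout shared with an untrusted frontend. */
struct demo_request {
    unsigned int op;
    unsigned int nr_segments;
};

struct demo_ring {
    struct demo_request ring[32];   /* shared memory */
    unsigned int req_cons;
};

static int handle_next_request(struct demo_ring *shared)
{
    struct demo_request req;

    /* Snapshot the slot first; every later check and use operates on
     * this private copy, so the frontend cannot change it under us. */
    memcpy(&req, &shared->ring[shared->req_cons % 32], sizeof(req));
    shared->req_cons++;

    if (req.nr_segments > 8)        /* validate the *copy*, not the ring */
        return -1;

    /* ... process req ... */
    return 0;
}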
-
Submitted by David Vrabel

Hypercalls submitted by user space tools via the privcmd driver can take a long time (potentially many 10s of seconds) if the hypercall has many sub-operations. A fully preemptible kernel may deschedule such a task in any upcall called from a hypercall continuation. However, in a kernel with voluntary or no preemption, hypercall continuations in Xen allow event handlers to be run but the task issuing the hypercall will not be descheduled until the hypercall is complete and the ioctl returns to user space. These long running tasks may also trigger the kernel's soft lockup detection. Add xen_preemptible_hcall_begin() and xen_preemptible_hcall_end() to bracket hypercalls that may be preempted. Use these in the privcmd driver. When returning from an upcall, call xen_maybe_preempt_hcall() which adds a schedule point if the current task was within a preemptible hypercall. Since _cond_resched() can move the task to a different CPU, clear and set xen_in_preemptible_hcall around the call.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
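A simplified model of the mechanism described above (the per-CPU flag and helper names come from the commit text; the body is a sketch, not the exact mainline code): the flag is only honoured on return from an upcall, and it is cleared around the schedule point because _cond_resched() may migrate the task.

#include <linux/percpu.h>
#include <linux/sched.h>

static DEFINE_PER_CPU(bool, xen_in_preemptible_hcall);

static inline void xen_preemptible_hcall_begin(void)
{
    __this_cpu_write(xen_in_preemptible_hcall, true);
}

static inline void xen_preemptible_hcall_end(void)
{
    __this_cpu_write(xen_in_preemptible_hcall, false);
}

/* Called on return from an upcall (event channel interrupt). */
void xen_maybe_preempt_hcall(void)
{
    if (unlikely(__this_cpu_read(xen_in_preemptible_hcall)) && need_resched()) {
        /* _cond_resched() may move us to another CPU, so clear the
         * per-CPU flag before and set it again after the schedule point. */
        __this_cpu_write(xen_in_preemptible_hcall, false);
        _cond_resched();
        __this_cpu_write(xen_in_preemptible_hcall, true);
    }
}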
-
- 06 Feb 2015, 1 commit
-
-
Submitted by Ross Lagerwall

Commit 61a734d3 ("xen/manage: Always freeze/thaw processes when suspend/resuming") ensured that userspace processes were always frozen before suspending to reduce interaction issues when resuming devices. However, freeze_processes() does not freeze kernel threads. Freeze kernel threads as well to prevent deadlocks with the khubd thread when resuming devices. This is what native suspend and resume does.

Example deadlock:
[ 7279.648010] [<ffffffff81446bde>] ? xen_poll_irq_timeout+0x3e/0x50
[ 7279.648010] [<ffffffff81448d60>] xen_poll_irq+0x10/0x20
[ 7279.648010] [<ffffffff81011723>] xen_lock_spinning+0xb3/0x120
[ 7279.648010] [<ffffffff810115d1>] __raw_callee_save_xen_lock_spinning+0x11/0x20
[ 7279.648010] [<ffffffff815620b6>] ? usb_control_msg+0xe6/0x120
[ 7279.648010] [<ffffffff81747e50>] ? _raw_spin_lock_irq+0x50/0x60
[ 7279.648010] [<ffffffff8174522c>] wait_for_completion+0xac/0x160
[ 7279.648010] [<ffffffff8109c520>] ? try_to_wake_up+0x2c0/0x2c0
[ 7279.648010] [<ffffffff814b60f2>] dpm_wait+0x32/0x40
[ 7279.648010] [<ffffffff814b6eb0>] device_resume+0x90/0x210
[ 7279.648010] [<ffffffff814b7d71>] dpm_resume+0x121/0x250
[ 7279.648010] [<ffffffff8144c570>] ? xenbus_dev_request_and_reply+0xc0/0xc0
[ 7279.648010] [<ffffffff814b80d5>] dpm_resume_end+0x15/0x30
[ 7279.648010] [<ffffffff81449fba>] do_suspend+0x10a/0x200
[ 7279.648010] [<ffffffff8144a2f0>] ? xen_pre_suspend+0x20/0x20
[ 7279.648010] [<ffffffff8144a1d0>] shutdown_handler+0x120/0x150
[ 7279.648010] [<ffffffff8144c60f>] xenwatch_thread+0x9f/0x160
[ 7279.648010] [<ffffffff810ac510>] ? finish_wait+0x80/0x80
[ 7279.648010] [<ffffffff8108d189>] kthread+0xc9/0xe0
[ 7279.648010] [<ffffffff8108d0c0>] ? flush_kthread_worker+0x80/0x80
[ 7279.648010] [<ffffffff8175087c>] ret_from_fork+0x7c/0xb0
[ 7279.648010] [<ffffffff8108d0c0>] ? flush_kthread_worker+0x80/0x80
[ 7441.216287] INFO: task khubd:89 blocked for more than 120 seconds.
[ 7441.219457] Tainted: G X 3.13.11-ckt12.kz #1
[ 7441.222176] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 7441.225827] khubd D ffff88003f433440 0 89 2 0x00000000
[ 7441.229258] ffff88003ceb9b98 0000000000000046 ffff88003ce83000 0000000000013440
[ 7441.232959] ffff88003ceb9fd8 0000000000013440 ffff88003cd13000 ffff88003ce83000
[ 7441.236658] 0000000000000286 ffff88003d3e0000 ffff88003ceb9bd0 00000001001aa01e
[ 7441.240415] Call Trace:
[ 7441.241614] [<ffffffff817442f9>] schedule+0x29/0x70
[ 7441.243930] [<ffffffff81743406>] schedule_timeout+0x166/0x2c0
[ 7441.246681] [<ffffffff81075b80>] ? call_timer_fn+0x110/0x110
[ 7441.249339] [<ffffffff8174357e>] schedule_timeout_uninterruptible+0x1e/0x20
[ 7441.252644] [<ffffffff81077710>] msleep+0x20/0x30
[ 7441.254812] [<ffffffff81555f00>] hub_port_reset+0xf0/0x580
[ 7441.257400] [<ffffffff81558465>] hub_port_init+0x75/0xb40
[ 7441.259981] [<ffffffff814bb3c9>] ? update_autosuspend+0x39/0x60
[ 7441.262817] [<ffffffff814bb4f0>] ? pm_runtime_set_autosuspend_delay+0x50/0xa0
[ 7441.266212] [<ffffffff8155a64a>] hub_thread+0x71a/0x1750
[ 7441.268728] [<ffffffff810ac510>] ? finish_wait+0x80/0x80
[ 7441.271272] [<ffffffff81559f30>] ? usb_port_resume+0x670/0x670
[ 7441.274067] [<ffffffff8108d189>] kthread+0xc9/0xe0
[ 7441.276305] [<ffffffff8108d0c0>] ? flush_kthread_worker+0x80/0x80
[ 7441.279131] [<ffffffff8175087c>] ret_from_fork+0x7c/0xb0
[ 7441.281659] [<ffffffff8108d0c0>] ? flush_kthread_worker+0x80/0x80

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Cc: stable@vger.kernel.org
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
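A minimal sketch of the freeze/thaw ordering described above (the function and error labels are illustrative, not the exact do_suspend() code): kernel threads are frozen after user space and thawed in the reverse order around the suspend.

#include <linux/freezer.h>

static int demo_do_suspend(void)
{
    int err;

    err = freeze_processes();          /* user space first */
    if (err)
        return err;

    err = freeze_kernel_threads();     /* then kernel threads (e.g. khubd) */
    if (err)
        goto out_thaw_user;

    /* ... suspend devices, suspend the domain, resume devices ... */

    thaw_kernel_threads();
out_thaw_user:
    thaw_processes();
    return err;
}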
-
- 05 Feb 2015, 1 commit
-
-
Submitted by Jennifer Herbert

If Xenstore sends back a XS_ERROR for TRANSACTION_END, the driver BUGs because it cannot find the matching transaction in the list. For TRANSACTION_START, it leaks memory. Check the message as returned from xenbus_dev_request_and_reply(), and clean up for TRANSACTION_START or discard the error for TRANSACTION_END.

Signed-off-by: Jennifer Herbert <Jennifer.Herbert@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
- 28 Jan 2015, 9 commits
-
-
Submitted by David Vrabel

For a PV guest, use the find_special_page op to find the right page. To handle VMAs being split, remember the start of the original VMA so the correct index in the pages array can be calculated.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
Submitted by David Vrabel

In an x86 PV guest, get_user_pages_fast() on a userspace address range containing foreign mappings does not work correctly because the M2P lookup of the MFN from a userspace PTE may return the wrong page. Force get_user_pages_fast() to fail on such addresses by marking the PTEs as special. If Xen has XENFEAT_gnttab_map_avail_bits (available since at least 4.0), we can do so efficiently in the grant map hypercall. Otherwise, it needs to be done afterwards. This is both inefficient and racy (the mapping is visible to the task before we fix up the PTEs), but will be fine for well-behaved applications that do not use the mapping until after the mmap() system call returns. Guests with XENFEAT_auto_translated_physmap (ARM and x86 HVM or PVH) do not need this since get_user_pages() has always worked correctly for them.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
Submitted by Jennifer Herbert

Use gnttab_unmap_refs_async() to wait until the mapped pages are no longer in use before unmapping them. This allows userspace programs to safely use Direct I/O and AIO to a network filesystem which may retain refs to pages in queued skbs after the filesystem I/O has completed.

Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by David Vrabel

Unmapping may require sleeping and we unmap while holding priv->lock, so convert it to a mutex.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
Submitted by Jennifer Herbert

Introduce gnttab_unmap_refs_async() that can be used to safely unmap pages that may be in use (ref count > 1). If the pages are in use the unmap is deferred and retried later. This polling is not very clever but it should be good enough if the cases where the delay is necessary are rare. The initial delay is 5 ms and is increased linearly on each subsequent retry (to reduce load if the page is in use for a long time). This is needed to allow block backends using grant mapping to safely use network storage (block or filesystem based such as iSCSI or NFS). The network storage driver may complete a block request whilst there is a queued network packet retry (because the ack from the remote end races with deciding to queue the retry). The pages for the retried packet would be grant unmapped and the network driver (or hardware) would access the unmapped page.

Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
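A minimal sketch of the deferral scheme described above (structure and function names are illustrative; the real API passes a callback-carrying queue-data structure): if any page still has extra references, the whole batch is requeued with a linearly growing delay, starting at 5 ms.

#include <linux/jiffies.h>
#include <linux/mm.h>
#include <linux/workqueue.h>

struct demo_unmap_work {
    struct delayed_work dwork;
    struct page **pages;
    unsigned int count;
    unsigned int age;               /* number of retries so far */
};

static void demo_do_unmap(struct page **pages, unsigned int count)
{
    /* ... issue GNTTABOP_unmap_grant_ref for all pages ... */
}

static void demo_unmap_work_fn(struct work_struct *work)
{
    struct demo_unmap_work *w =
        container_of(work, struct demo_unmap_work, dwork.work);
    unsigned int i;

    for (i = 0; i < w->count; i++) {
        if (page_count(w->pages[i]) > 1) {
            /* Still in use (e.g. referenced by a queued skb):
             * retry later, backing off linearly from 5 ms. */
            unsigned long delay = msecs_to_jiffies(5 * (w->age + 1));

            w->age++;
            schedule_delayed_work(&w->dwork, delay);
            return;
        }
    }

    demo_do_unmap(w->pages, w->count);  /* safe: no extra users left */
}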
-
Submitted by Jennifer Herbert

Use the "foreign" page flag to mark pages that have a grant map. Use page->private to store information of the grant (the granting domain and the grant reference).

Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
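A minimal sketch of the page->private bookkeeping described above (helper names are illustrative and it assumes a 64-bit unsigned long, so both values fit): the granting domain and grant reference are packed into and unpacked from the page's private field.

#include <linux/mm.h>

static inline void demo_set_grant_info(struct page *page, u16 domid, u32 gref)
{
    /* Assumes 64-bit unsigned long: domid in the high half, gref low. */
    set_page_private(page, ((unsigned long)domid << 32) | gref);
}

static inline u16 demo_grant_domid(struct page *page)
{
    return page_private(page) >> 32;
}

static inline u32 demo_grant_ref(struct page *page)
{
    return page_private(page) & 0xffffffffUL;
}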
-
Submitted by David Vrabel

Add gnttab_alloc_pages() and gnttab_free_pages() to allocate/free pages suitable for grant mapping.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
Submitted by David Vrabel

The scratch frame mappings for ballooned pages and the m2p override are broken. Remove them in preparation for replacing them with simpler mechanisms that work. The scratch pages did not ensure that the page was not in use. In particular, the foreign page could still be in use by hardware. If the guest reused the frame the hardware could read or write that frame. The m2p override did not handle the same frame being granted by two different grant references. Trying an M2P override lookup in this case is impossible. With the m2p override removed, the grant map/unmap for the kernel mappings (for x86 PV) can be easily batched in set_foreign_p2m_mapping() and clear_foreign_p2m_mapping().

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
Submitted by David Vrabel

When unmapping grants, instead of converting the kernel map ops to unmap ops on the fly, pre-populate the set of unmap ops. This allows the grant unmap for the kernel mappings to be trivially batched in the future.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
- 26 Jan 2015, 1 commit
-
-
Submitted by Lv Zheng

struct acpi_resource_address and struct acpi_resource_extended_address64 share substructs, just at different offsets. To unify the parsing functions, OSPMs like Linux need a new ACPI_ADDRESS64_ATTRIBUTE as their substruct, so they can extract the shared data. This patch also synchronizes the structure changes to the Linux kernel. The usages are searched by matching the following keywords:
1. acpi_resource_address
2. acpi_resource_extended_address
3. ACPI_RESOURCE_TYPE_ADDRESS
4. ACPI_RESOURCE_TYPE_EXTENDED_ADDRESS
And we found and fixed the usages in the following files:
arch/ia64/kernel/acpi-ext.c
arch/ia64/pci/pci.c
arch/x86/pci/acpi.c
arch/x86/pci/mmconfig-shared.c
drivers/xen/xen-acpi-memhotplug.c
drivers/acpi/acpi_memhotplug.c
drivers/acpi/pci_root.c
drivers/acpi/resource.c
drivers/char/hpet.c
drivers/pnp/pnpacpi/rsparser.c
drivers/hv/vmbus_drv.c
Build tests are passed with defconfig/allnoconfig/allyesconfig and defconfig+CONFIG_ACPI=n.

Original-by: Thomas Gleixner <tglx@linutronix.de>
Original-by: Jiang Liu <jiang.liu@linux.intel.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 23 Jan 2015, 1 commit
-
-
Submitted by Jan Beulich

As a module_init() function, this should have been this way from the beginning.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
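The change appears to be about annotating an initialisation function; a minimal sketch of that general pattern, with illustrative names: a function registered via module_init() can be marked __init so its text is placed in the discardable init section and freed after initialisation completes.

#include <linux/init.h>
#include <linux/module.h>

/* Marked __init: only runs once at load/boot time, then may be discarded. */
static int __init demo_driver_init(void)
{
    /* ... registration work ... */
    return 0;
}

static void __exit demo_driver_exit(void)
{
    /* ... teardown ... */
}

module_init(demo_driver_init);
module_exit(demo_driver_exit);
MODULE_LICENSE("GPL");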
-
- 09 Jan 2015, 1 commit
-
-
Submitted by Hannes Reinecke

Instead of having constants.c littered with ifdef statements, we should be moving dummy functions into the header and conditionally compiling constants.c if selected. Also update the Kconfig description to reflect the actual size difference.

Suggested-by: Christoph Hellwig <hch@infradead.org>
Tested-by: Robert Elliott <elliott@hp.com>
Reviewed-by: Robert Elliott <elliott@hp.com>
Signed-off-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 11 Dec 2014, 1 commit
-
-
Submitted by David Vrabel

This reverts commit 2c3fc8d2. This commit broke on x86 PV because entries in the generic SWIOTLB are indexed using (pseudo-)physical address, not DMA address, and these are not the same in an x86 PV guest.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
- 10 Dec 2014, 1 commit
-
-
Submitted by David Vrabel

This reverts commit 2c3fc8d2. This commit broke on x86 PV because entries in the generic SWIOTLB are indexed using (pseudo-)physical address, not DMA address, and these are not the same in an x86 PV guest.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
- 04 Dec 2014, 14 commits
-
-
Submitted by Jan Beulich

When a PF driver unloads, it may find it necessary to leave the VFs around simply because of pciback having marked them as assigned to a guest. Utilize a suitable notification to let go of the VFs, thus allowing the PF to go back into the state it was in before its driver loaded (which in particular allows the driver to be loaded again with it being able to create the VFs anew, but which also allows to then pass through the PF instead of the VFs). Don't do this however for any VFs currently in active use by a guest.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
[v2: Removed the switch statement, moved it about]
[v3: Redid it a bit differently]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Konrad Rzeszutek Wilk

The commit "xen/pciback: Don't deadlock when unbinding." was using the version of pci_reset_function which would lock the device lock. That is no good as we can deadlock. As such we swapped to using the lock-less version and requiring that the callers of 'pcistub_put_pci_dev' take the device lock. And as such this bug got exposed. Using the lock-less version is OK, except that we tried to use 'pci_restore_state' after the lock-less version of __pci_reset_function_locked - which won't work as 'state_saved' is set to false. Said 'state_saved' is a toggle boolean that is to be used by the sequence of a) pci_save_state/pci_restore_state or b) pci_load_and_free_saved_state/pci_restore_state. We don't want to use a) as the guest might have messed up the PCI configuration space and we want it to revert to the state when the PCI device was bound to us. Therefore we pick b) to restore the configuration space. We restore from our 'golden' version of the PCI configuration space when:
- the device is unbound from pciback, or
- the device is detached from a guest.

Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
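A minimal sketch of sequence b) above, with the surrounding driver state reduced to illustrative stubs: the configuration space captured at bind time is reloaded and written back to the device when it is released, discarding whatever the guest left behind.

#include <linux/pci.h>

/* Illustrative per-device state; the real driver keeps something similar. */
struct demo_dev_data {
    struct pci_saved_state *pci_saved_state;
};

/* At bind time: capture a "golden" copy of the config space. */
static void demo_capture_state(struct pci_dev *dev, struct demo_dev_data *d)
{
    pci_save_state(dev);
    d->pci_saved_state = pci_store_saved_state(dev);
}

/* At unbind/detach time: reload the golden copy and write it back,
 * ignoring whatever the guest wrote into the config space. */
static void demo_restore_state(struct pci_dev *dev, struct demo_dev_data *d)
{
    if (d->pci_saved_state) {
        pci_load_and_free_saved_state(dev, &d->pci_saved_state);
        pci_restore_state(dev);
    }
}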
-
Submitted by Konrad Rzeszutek Wilk

A little cleanup. No functional difference.

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Konrad Rzeszutek Wilk

We had been printing it only if the device was built with debug enabled. But this information is useful in the field for troubleshooting.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Konrad Rzeszutek Wilk

Clean up the function a bit - also include the id of the domain that is using the device.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Konrad Rzeszutek Wilk

Instead of open-coding it in drivers that want to double-check that their functions are indeed holding the device lock.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
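A minimal sketch of the idea (the helper and caller names here are illustrative stand-ins for the driver-core wrapper): the check is reduced to a single lockdep assertion on the per-device mutex, which callers invoke instead of open-coding it.

#include <linux/device.h>
#include <linux/lockdep.h>

/* Sketch: assert (via lockdep) that the caller holds the device lock. */
static inline void demo_device_lock_assert(struct device *dev)
{
    lockdep_assert_held(&dev->mutex);
}

/* Example use in a function that must be called with the device lock held. */
static void demo_release_pci_dev(struct device *dev)
{
    demo_device_lock_assert(dev);
    /* ... safe to touch state protected by the device lock ... */
}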
-
Submitted by Konrad Rzeszutek Wilk

As commit 0a9fd015 'xen/pciback: Document the entry points for 'pcistub_put_pci_dev'' explained, there are four entry points in this function. Two of them are when the user fiddles in SysFS to unbind a device which might be in use by a guest or not. Both 'unbind' states will cause a deadlock as the PCI lock has already been taken, which then pci_device_reset tries to take. We can simplify this by requiring that all callers of pcistub_put_pci_dev MUST hold the device lock. And then we can just call the lockless version of pci_device_reset. To make it even simpler we will modify xen_pcibk_release_pci_dev to qualify whether it should take a lock or not - as it ends up calling xen_pcibk_release_pci_dev and needs to hold the lock.

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Stefano Stabellini

We need to pass the pointer within the swiotlb internal buffer to the swiotlb library; in the case of xen_unmap_single that is dev_addr, not paddr.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: stable@vger.kernel.org
-
Submitted by Stefano Stabellini

In xen_swiotlb_sync_single we always call xen_dma_sync_single_for_cpu, even when we should call xen_dma_sync_single_for_device. Fix that.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: stable@vger.kernel.org
-
Submitted by Stefano Stabellini

On x86 truncation cannot occur because config XEN depends on X86_64 || (X86_32 && X86_PAE). On ARM truncation can occur without CONFIG_ARM_LPAE, when the dma operation involves foreign grants. However, in that case the physical address returned by xen_bus_to_phys is actually invalid (there is no mfn to pfn tracking for foreign grants on ARM) and it is not used.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: stable@vger.kernel.org
-
Submitted by Stefano Stabellini

xen_dma_unmap_page and xen_dma_sync_single_for_cpu take a dma_addr_t handle as argument, not a physical address.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: stable@vger.kernel.org
-
Submitted by Stefano Stabellini

Introduce an arch-specific function to find out whether a particular dma mapping operation needs to bounce on the swiotlb buffer. On ARM and ARM64, if the page involved is a foreign page and the device is not coherent, we need to bounce because at unmap time we cannot execute any required cache maintenance operations (we don't know how to find the pfn from the mfn). No change of behaviour for x86.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Submitted by Stefano Stabellini

dev_addr is the machine address of the page. The new parameter can be used by the ARM and ARM64 implementations of xen_dma_map_page to find out if the page is a local page (pfn == mfn) or a foreign page (pfn != mfn). dev_addr could be retrieved again from the physical address, using pfn_to_mfn, but that requires accessing an rbtree. Since we already have the dev_addr in our hands at the call site there is no need to get the mfn twice.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Submitted by Christoph Hellwig

For SPI drivers use the message definitions from scsi.h, and for target drivers introduce a new TCM_*_TAG namespace.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
-
- 12 Nov 2014, 1 commit
-
-
Submitted by Hannes Reinecke

We should be using the standard dev_printk() variants for sense code printing.
[hch: remove __scsi_print_sense call in xen-scsiback, Acked by Juergen]
[hch: folded bracing fix from Dan Carpenter]

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Robert Elliott <elliott@hp.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 05 Nov 2014, 1 commit
-
-
Submitted by Ard Biesheuvel

This adds support to the UEFI side for detecting the presence of a SMBIOS 3.0 64-bit entry point. This allows the actual SMBIOS structure table to reside at a physical offset over 4 GB, which cannot be supported by the legacy SMBIOS 32-bit entry point. Since the firmware can legally provide both entry points, store the SMBIOS 3.0 entry point in a separate variable, and let the DMI decoding layer decide which one will be used.

Tested-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
- 23 Oct 2014, 2 commits
-
-
Submitted by Boris Ostrovsky

physdev_pci_device_add's optarr[] is a zero-sized array and therefore a reference to add.optarr[0] accesses memory that does not belong to the 'add' variable.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
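A minimal sketch of the general pattern for fixing this kind of bug (the struct here is an illustrative stand-in for the physdev interface, not its real layout): the variable is declared inside a wrapper struct that provides real storage behind the trailing zero-sized array, so optarr[0] stays within the object.

#include <stdint.h>

/* Stand-in for a hypercall argument ending in a GNU zero-length array. */
struct demo_pci_device_add {
    uint16_t seg;
    uint8_t bus, devfn;
    uint32_t flags;
    uint32_t optarr[0];             /* optional data, no storage of its own */
};

void demo_report_device(uint16_t seg, uint8_t bus, uint8_t devfn, uint32_t pxm)
{
    /* Wrap the struct so there is real storage where optarr[0] lands. */
    struct {
        struct demo_pci_device_add add;
        uint32_t pxm;
    } add_ext = {
        .add.seg = seg,
        .add.bus = bus,
        .add.devfn = devfn,
    };

    add_ext.add.optarr[0] = pxm;    /* now backed by add_ext.pxm's storage */

    /* ... issue the call with &add_ext.add ... */
}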
-
Submitted by Boris Ostrovsky

Commit 3dcf6367 ("xen/balloon: cancel ballooning if adding new memory failed") makes reserve_additional_memory() return BP_ECANCELED when an error is encountered. This error, however, is ignored by the caller (balloon_process()) since it is overwritten by a subsequent call to update_schedule(). This results in continuous attempts to add more memory, all of which are likely to fail again. We should stop trying to schedule the next iteration of ballooning when the current one has failed.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
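A simplified model of the control flow being fixed (balloon_process() is reduced to its scheduling decision; the BP_* handling and credit accounting are stand-ins, not the driver's code): once an iteration reports a cancel, the work item simply is not rescheduled.

#include <linux/jiffies.h>
#include <linux/workqueue.h>

enum demo_bp_state { DEMO_BP_DONE, DEMO_BP_EAGAIN, DEMO_BP_ECANCELED };

static struct delayed_work demo_balloon_worker;
static long demo_credit = 3;            /* stand-in for current_credit() */

static enum demo_bp_state demo_balloon_step(void)
{
    demo_credit--;                      /* pretend one chunk was ballooned */
    return DEMO_BP_DONE;                /* or EAGAIN / ECANCELED on failure */
}

static void demo_balloon_process(struct work_struct *work)
{
    enum demo_bp_state state = DEMO_BP_DONE;

    do {
        if (demo_credit)
            state = demo_balloon_step();
        /* Loop only while there is outstanding credit *and* the last
         * step succeeded; EAGAIN/ECANCELED terminate the loop. */
    } while (demo_credit && state == DEMO_BP_DONE);

    /* Reschedule only for a transient failure; a cancelled operation
     * must not keep re-arming itself. */
    if (state == DEMO_BP_EAGAIN)
        schedule_delayed_work(&demo_balloon_worker, HZ);
}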
-
- 06 Oct 2014, 2 commits
-
-
Submitted by David Vrabel

The DEFINE_XENBUS_DRIVER() macro looks a bit weird and causes sparse errors. Replace the uses with standard structure definitions instead. This is similar to pci and usb device registration.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
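A minimal sketch of the "standard structure definition" style referred to above, for a hypothetical frontend called demo (ids and callbacks are stubs): the driver is a plain struct initializer, analogous to pci_driver/usb_driver, instead of being expanded from a macro.

#include <linux/init.h>
#include <linux/module.h>
#include <xen/xenbus.h>

static const struct xenbus_device_id demo_ids[] = {
    { "demo" },
    { "" }
};

static int demo_probe(struct xenbus_device *dev,
                      const struct xenbus_device_id *id)
{
    return 0;
}

static int demo_remove(struct xenbus_device *dev)
{
    return 0;
}

/* Plain struct definition instead of DEFINE_XENBUS_DRIVER(). */
static struct xenbus_driver demo_driver = {
    .ids = demo_ids,
    .probe = demo_probe,
    .remove = demo_remove,
};

static int __init demo_init(void)
{
    return xenbus_register_frontend(&demo_driver);
}
module_init(demo_init);
MODULE_LICENSE("GPL");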
-
Submitted by Chen Gang

xenbus_va_dev_error() is for printing errors, so when the error string is too long and gets truncated, there is no need to BUG_ON(); returning the truncated string is fine.

Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
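A minimal sketch of the behaviour described (buffer name and size are illustrative): format into a fixed buffer and simply accept vsnprintf()'s truncation instead of treating it as a fatal condition.

#include <linux/kernel.h>

#define DEMO_ERR_LEN 256

/* Sketch: format an error string for reporting; truncation is harmless,
 * so there is no BUG_ON() for the "too long" case. */
static void demo_dev_error(const char *fmt, va_list ap)
{
    char buf[DEMO_ERR_LEN];
    int len;

    len = vsnprintf(buf, sizeof(buf), fmt, ap);
    if (len >= (int)sizeof(buf)) {
        /* Message was truncated; the truncated string is still used
         * rather than triggering a BUG(). */
    }

    /* ... hand buf on to the device's error node / the log ... */
}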
-