- 25 September 2018, 2 commits
-
-
Submitted by Robin Murphy
The external interface to get/set window attributes is already abstracted behind iommu_domain_{get,set}_attr(), so there's no real reason for the internal interface to be different. Since we only have one window-based driver anyway, clean up the core code by just moving the DOMAIN_ATTR_WINDOWS handling directly into the PAMU driver.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Robin Murphy
While iommu_get_domain_for_dev() is the robust way for arbitrary IOMMU API callers to retrieve the domain pointer, for DMA ops domains it doesn't scale well for large systems and multi-queue devices, since the momentary refcount adjustment will lead to exclusive cacheline contention when multiple CPUs are operating in parallel on different mappings for the same device. In the case of DMA ops domains, however, this refcounting is actually unnecessary, since they already imply that the group exists and is managed by platform code and IOMMU internals (by virtue of iommu_group_get_for_dev()) such that a reference will already be held for the lifetime of the device. Thus we can avoid the bottleneck by providing a fast lookup specifically for the DMA code to retrieve the default domain it already knows it has set up - a simple read-only dereference plays much nicer with cache-coherency protocols.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
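As a rough illustration of the fast path this describes, the core-side helper boils down to a plain dereference; the sketch below follows my reading of the change and is not a verbatim excerpt (struct iommu_group is private to drivers/iommu/iommu.c, so a helper of this shape has to live in the core).

```c
/*
 * Sketch of the fast lookup described above: for a DMA ops domain the
 * group reference is held for the lifetime of the device, so no refcount
 * traffic is needed - just a read-only dereference of the default domain.
 */
struct iommu_domain *example_get_dma_domain(struct device *dev)
{
	return dev->iommu_group->default_domain;
}
```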
-
- 08 August 2018, 1 commit
-
-
Submitted by Christoph Hellwig
All iommu drivers use the default_iommu_map_sg implementation, and there is no good reason to ever override it. Just expose it as iommu_map_sg directly and remove the indirection, especially in our post-Spectre world where indirect calls are horribly expensive.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
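For context, a minimal caller-side sketch of the now-direct iommu_map_sg() is shown below; the helper name is made up, and the prototype shown is the one from this era of the API (later kernels changed it), so treat it as an illustration rather than a definitive reference.

```c
#include <linux/errno.h>
#include <linux/iommu.h>
#include <linux/scatterlist.h>

/*
 * Hypothetical caller: map an already-built scatterlist into an IOMMU
 * domain. At this point in history iommu_map_sg() returned the number of
 * bytes mapped, with 0 signifying failure.
 */
static int example_map_sg(struct iommu_domain *domain, unsigned long iova,
			  struct scatterlist *sg, unsigned int nents)
{
	size_t mapped = iommu_map_sg(domain, iova, sg, nents,
				     IOMMU_READ | IOMMU_WRITE);

	return mapped ? 0 : -ENOMEM;
}
```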
-
- 27 July 2018, 2 commits
-
-
Submitted by Olof Johansson
This allows the default behavior to be controlled by a kernel config option instead of changing the command line for the kernel to include "iommu.passthrough=on" or "iommu=pt" on machines where this is desired. Likewise, for machines where this config option is enabled, it can be disabled at boot time with "iommu.passthrough=off" or "iommu=nopt". Also corrected the iommu=pt documentation for IA-64, since it has no code that parses iommu= at all.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
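A sketch of the idea is below: the default domain type is chosen at build time from the config option rather than hard-coded. The exact symbol names (CONFIG_IOMMU_DEFAULT_PASSTHROUGH, the variable name) are assumptions based on the commit description, not a verbatim excerpt of the patch.

```c
#include <linux/kconfig.h>
#include <linux/iommu.h>

/*
 * Sketch: pick the boot-time default between an identity (passthrough)
 * domain and a translated DMA domain from a Kconfig option. The command
 * line parameters mentioned above can still override this at boot.
 */
static unsigned int iommu_def_domain_type =
	IS_ENABLED(CONFIG_IOMMU_DEFAULT_PASSTHROUGH) ?
		IOMMU_DOMAIN_IDENTITY : IOMMU_DOMAIN_DMA;
```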
-
Submitted by Olof Johansson
While we could print it at setup time, this is an easier way to match each device to its default IOMMU allocation type.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 06 July 2018, 1 commit
-
-
Submitted by Gary R Hook
Provide base enablement for using debugfs to expose internal data of an IOMMU driver. When called, create the /sys/kernel/debug/iommu directory and emit a strong warning at boot time to indicate that this feature is enabled. This function is called from iommu_init and creates the initial debugfs directory. Drivers may then call iommu_debugfs_new_driver_dir() to instantiate a device-specific directory in which to expose internal data. It returns a pointer to the new dentry created in /sys/kernel/debug/iommu, or NULL in the event of a failure. Since the IOMMU driver cannot be removed from the running system, there is no need for an "off" function.
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
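A hypothetical driver-side usage sketch follows; the "my-iommu" directory name and the counter are made up for illustration, and only iommu_debugfs_new_driver_dir() comes from the interface described above.

```c
#include <linux/debugfs.h>
#include <linux/iommu.h>

/* Made-up statistic that the driver wants to expose. */
static u32 my_iommu_fault_count;

/*
 * Sketch: ask the IOMMU core for a per-driver directory under
 * /sys/kernel/debug/iommu and expose one read-only counter in it.
 */
static void my_iommu_debugfs_init(void)
{
	struct dentry *dir = iommu_debugfs_new_driver_dir("my-iommu");

	if (dir)
		debugfs_create_u32("fault_count", 0444, dir,
				   &my_iommu_fault_count);
}
```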
-
- 15 May 2018, 2 commits
-
-
Submitted by Lu Baolu
The @name parameter has been removed.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Andy Shevchenko
strtobool() already checks for a NULL parameter, so there is no need to repeat the check. While here, switch to kstrtobool() and unshadow the actual error code (which is still -EINVAL). No functional change intended.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
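The pattern this describes looks roughly like the sketch below (variable and function names are made up): kstrtobool() already rejects a NULL or malformed string, so the caller simply forwards its return value instead of shadowing it with a hand-rolled -EINVAL.

```c
#include <linux/kernel.h>	/* kstrtobool(); newer kernels have <linux/kstrtox.h> */

static bool example_flag;

/*
 * Sketch of a boot-parameter handler after the conversion: no manual NULL
 * check, and the error code from kstrtobool() is passed straight through.
 */
static int __init example_param_setup(char *str)
{
	return kstrtobool(str, &example_flag);
}
```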
-
- 14 February 2018, 1 commit
-
-
Submitted by Suravee Suthikulpanit
Currently, iommu_unmap, iommu_unmap_fast and iommu_map_sg return size_t. However, some of the return values are error codes (< 0), which can be misinterpreted as large sizes. Therefore, return size 0 instead to signify failure to map/unmap.
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
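A caller-side sketch of the resulting convention, with a made-up wrapper name:

```c
#include <linux/errno.h>
#include <linux/iommu.h>

/*
 * With the convention described above, a zero return from iommu_unmap()
 * means the unmap failed, not that zero bytes happened to be unmapped.
 */
static int example_unmap(struct iommu_domain *domain, unsigned long iova,
			 size_t size)
{
	size_t unmapped = iommu_unmap(domain, iova, size);

	if (!unmapped)
		return -EINVAL;	/* nothing was unmapped */

	return 0;
}
```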
-
- 21 December 2017, 1 commit
-
-
Submitted by Jordan Crouse
The result of iommu_group_get() was being used blindly in both attach and detach, which results in a NULL dereference when trying to work with an unknown device.
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
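The defensive pattern this implies is sketched below (the wrapper is hypothetical): iommu_group_get() returns NULL for a device that is not behind an IOMMU, so check before using the group and drop the reference afterwards.

```c
#include <linux/errno.h>
#include <linux/iommu.h>

static int example_attach(struct device *dev, struct iommu_domain *domain)
{
	struct iommu_group *group = iommu_group_get(dev);
	int ret;

	/* Unknown / non-IOMMU device: bail out instead of dereferencing NULL. */
	if (!group)
		return -ENODEV;

	ret = iommu_attach_group(domain, group);
	iommu_group_put(group);
	return ret;
}
```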
-
- 31 August 2017, 1 commit
-
-
Submitted by Joerg Roedel
With the current IOMMU-API the hardware TLBs have to be flushed in every iommu_ops->unmap() call-back. For unmapping large amounts of address space, as happens when a KVM domain with assigned devices is destroyed, this causes thousands of unnecessary TLB flushes in the IOMMU hardware because the unmap call-back runs for every unmapped physical page. With the TLB Flush Interface and the new iommu_unmap_fast() function introduced here, the need to clean the hardware TLBs is removed from the unmapping code-path. Users of iommu_unmap_fast() have to explicitly call the TLB-Flush functions to sync the page-table changes to the hardware. Three functions for TLB-Flushes are introduced:
* iommu_flush_tlb_all() - Flushes all TLB entries associated with that domain. TLB entries are flushed when this function returns.
* iommu_tlb_range_add() - Adds a given range to the flush queue for this domain.
* iommu_tlb_sync() - Flushes all queued ranges from the hardware TLBs. Returns when the flush is finished.
The semantics of this interface are intentionally similar to the iommu_gather_ops from the io-pgtable code.
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
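A usage sketch of the intended flow, per the description above: batch the unmaps, queue the ranges, and flush the hardware TLBs once at the end. The interface shown is the one introduced here; later kernels replaced it with iommu_iotlb_gather, and the caller shown is hypothetical.

```c
#include <linux/iommu.h>

static void example_unmap_many(struct iommu_domain *domain,
			       const unsigned long *iovas,
			       const size_t *sizes, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		size_t unmapped = iommu_unmap_fast(domain, iovas[i], sizes[i]);

		/* Queue the range; no hardware flush happens yet. */
		if (unmapped)
			iommu_tlb_range_add(domain, iovas[i], unmapped);
	}

	/* One flush for the whole batch; returns when it has completed. */
	iommu_tlb_sync(domain);
}
```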
-
- 18 August 2017, 1 commit
-
-
Submitted by Robin Murphy
The recently-removed FIXME in iommu_get_domain_for_dev() turns out to have been a little misleading, since that check is still worthwhile even when groups *are* universal. We have a few IOMMU-aware drivers which only care whether their device is already attached to an existing domain or not, for which the previous behaviour of iommu_get_domain_for_dev() was ideal, and which now crash if their device does not have an IOMMU. With IOMMU groups now serving as a reliable indicator of whether a device has an IOMMU or not (barring false positives from VFIO no-IOMMU mode), drivers could arguably do this:
group = iommu_group_get(dev);
if (group) {
	domain = iommu_get_domain_for_dev(dev);
	iommu_group_put(group);
}
However, rather than duplicate that code across multiple callsites, particularly when it's still only the domain they care about, let's skip straight to the next step and factor out the check into the common place it applies - in iommu_get_domain_for_dev() itself. Sure, it ends up looking rather familiar, but now it's backed by the reasoning of having a robust API able to do the expected thing for all devices regardless.
Fixes: 05f80300 ("iommu: Finish making iommu_group support mandatory")
Reported-by: Shawn Lin <shawn.lin@rock-chips.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 16 August 2017, 1 commit
-
-
Submitted by Baoquan He
This new call-back will be used to check whether the domain attach needs to be deferred for now. If so, the domain attach/detach will return directly.
Signed-off-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 10 August 2017, 1 commit
-
-
Submitted by Robin Murphy
Now that all the drivers properly implementing the IOMMU API support groups (I'm ignoring the etnaviv GPU MMUs, which seemingly only do just enough to convince the ARM DMA mapping ops), we can remove the FIXME workarounds from the core code. In the process, it also seems logical to make the .device_group callback non-optional for drivers calling iommu_group_get_for_dev() - the current callers all implement it anyway, and it doesn't make sense for any future callers not to either.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 28 June 2017, 2 commits
-
-
Submitted by Joerg Roedel
This callback should never return NULL. Print a warning if that happens so that we notice and can fix it.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Joerg Roedel
The generic device_group call-backs in iommu.c return NULL in case of error. Since they are getting ERR_PTR values from iommu_group_alloc(), just pass those up instead.
Reported-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 27 April 2017, 1 commit
-
-
Submitted by Joerg Roedel
The function is not in any fast-path, so there is no need for it to be a static inline in a header file. This also removes the need to include the iommu trace-points in iommu.h.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 20 April 2017, 1 commit
-
-
Submitted by zhichang.yuan
In iommu_bus_notifier(), when the action is BUS_NOTIFY_ADD_DEVICE, it returns 'ops->add_device(dev)' directly. But ops->add_device() may return an error value such as -ENODEV, and such values stop notifier_call_chain() from traversing the remaining nodes in the struct notifier_block list. This patch revises iommu_bus_notifier() to return NOTIFY_DONE when ops->add_device() fails.
Signed-off-by: zhichang.yuan <yuanzhichang@hisilicon.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 06 April 2017, 1 commit
-
-
Submitted by Will Deacon
The IOMMU core currently initialises the default domain for each group to IOMMU_DOMAIN_DMA, under the assumption that devices will use IOMMU-backed DMA ops by default. However, in some cases it is desirable for the DMA ops to bypass the IOMMU for performance reasons, reserving use of translation for subsystems such as VFIO that require it for enforcing device isolation. Rather than modify each IOMMU driver to provide different semantics for DMA domains, instead we introduce a command line parameter that can be used to change the type of the default domain. Passthrough can then be specified using "iommu.passthrough=1" on the kernel command line.
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 22 March 2017, 1 commit
-
-
Submitted by Robin Murphy
The introduction of reserved regions has left a couple of rough edges which we could do with sorting out sooner rather than later. Since we are not yet addressing the potential dynamic aspect of software-managed reservations and presenting them at arbitrary fixed addresses, it is incongruous that we end up displaying hardware vs. software-managed MSI regions to userspace differently, especially since ARM-based systems may actually require one or the other, or even potentially both at once (which iommu-dma currently has no hope of dealing with at all). Let's resolve the former user-visible inconsistency ASAP before the ABI has been baked into a kernel release, in a way that also lays the groundwork for the latter shortcoming to be addressed by follow-up patches. For clarity, rename the software-managed type to IOMMU_RESV_SW_MSI, use IOMMU_RESV_MSI to describe the hardware type, and document everything a little bit. Since the x86 MSI remapping hardware falls squarely under this meaning of IOMMU_RESV_MSI, apply that type to their regions as well, so that we tell the same story to userspace across all platforms. Secondly, as the various region types require quite different handling, and it really makes little sense to ever try combining them, convert the bitfield-esque #defines to a plain enum in the process before anyone gets the wrong impression.
Fixes: d30ddcaa ("iommu: Add a new type field in iommu_resv_region")
Reviewed-by: Eric Auger <eric.auger@redhat.com>
CC: Alex Williamson <alex.williamson@redhat.com>
CC: David Woodhouse <dwmw2@infradead.org>
CC: kvm@vger.kernel.org
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
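The rough shape of the resulting type set is sketched below, based on the description above rather than a verbatim header excerpt (later kernels added further values).

```c
/* Sketch of the plain enum that replaces the bitfield-style #defines. */
enum iommu_resv_type {
	IOMMU_RESV_DIRECT,	/* memory that must be identity-mapped */
	IOMMU_RESV_RESERVED,	/* region that must not be mapped at all */
	IOMMU_RESV_MSI,		/* hardware MSI region (untranslated) */
	IOMMU_RESV_SW_MSI,	/* software-managed MSI translation window */
};
```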
-
- 10 February 2017, 4 commits
-
-
Submitted by Joerg Roedel
And also move its remaining functionality to iommu_device_register() and 'struct iommu_device'.
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: devicetree@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Joerg Roedel
This struct represents one hardware iommu in the iommu core code. For now it only has the iommu-ops associated with it, but that will be extended soon. The register/unregister interface is also added, as well as making use of it in the Intel and AMD IOMMU drivers.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
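A hypothetical driver-side registration sketch follows; names prefixed with my_ are made up, and the two-step set_ops/register form shown is the API as introduced here (later kernels folded the ops into iommu_device_register()).

```c
#include <linux/iommu.h>

struct my_iommu {
	struct iommu_device iommu;	/* handle registered with the core */
	/* ... driver-private state ... */
};

static const struct iommu_ops my_iommu_ops = {
	/* driver callbacks would go here */
};

/* Sketch: embed struct iommu_device, attach the ops, register it. */
static int my_iommu_register(struct my_iommu *mi)
{
	iommu_device_set_ops(&mi->iommu, &my_iommu_ops);
	return iommu_device_register(&mi->iommu);
}
```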
-
Submitted by Joerg Roedel
The struct is used to link devices to iommu-groups, so 'struct group_device' is a better name. Further, this makes the name iommu_device available for a struct representing hardware iommus.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Joerg Roedel
Rename the function to iommu_ops_from_fwnode(), because that name is much more descriptive of what the function actually does.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 06 February 2017, 2 commits
-
-
Submitted by Eric Auger
In case the device reserved region list is empty, the return value of iommu_insert_device_resv_regions() is uninitialized. Let's return 0 in that case. This fixes commit 6c65fb31 ("iommu: iommu_get_group_resv_regions").
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Zhen Lei
Move the assignment statement into the if branch above, which is the only place it is needed.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 23 January 2017, 5 commits
-
-
Submitted by Eric Auger
A new iommu-group sysfs attribute file is introduced. It contains the list of reserved regions for the iommu-group. Each reserved region is described on a separate line:
- the first field is the start IOVA address,
- the second is the end IOVA address,
- the third is the type.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Bharat Bhushan <bharat.bhushan@nxp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Eric Auger
Introduce iommu_get_group_resv_regions, whose role consists in enumerating all devices from the group and collecting their reserved regions. The list is sorted, and overlaps between regions of the same type are handled by merging the regions.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Bharat Bhushan <bharat.bhushan@nxp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
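A consumer-side sketch of the new helper, with a made-up caller: collect the merged, sorted regions, walk them, and free the entries afterwards (the caller owns the list).

```c
#include <linux/iommu.h>
#include <linux/list.h>
#include <linux/printk.h>
#include <linux/slab.h>

static void example_dump_resv(struct iommu_group *group)
{
	struct iommu_resv_region *region, *next;
	LIST_HEAD(regions);

	if (iommu_get_group_resv_regions(group, &regions))
		return;

	list_for_each_entry(region, &regions, list)
		pr_info("resv: %pa + 0x%zx, type %d\n",
			&region->start, region->length, region->type);

	/* The caller is responsible for freeing the collected entries. */
	list_for_each_entry_safe(region, next, &regions, list)
		kfree(region);
}
```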
-
Submitted by Eric Auger
As we introduced new reserved region types which do not require mapping, let's make sure we only map directly mapped regions.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Bharat Bhushan <bharat.bhushan@nxp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Eric Auger
Introduce a new helper whose purpose is to allocate a reserved region. It will be used in IOMMU drivers implementing the reserved-region callbacks.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Bharat Bhushan <bharat.bhushan@nxp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
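A sketch of a driver's get_resv_regions() callback using the helper follows; the base address, size and prot flags are illustrative values only, and the four-argument form shown is the one from this era of the API.

```c
#include <linux/iommu.h>
#include <linux/list.h>
#include <linux/sizes.h>

/* Sketch: report one software-managed MSI window as a reserved region. */
static void my_iommu_get_resv_regions(struct device *dev,
				      struct list_head *head)
{
	struct iommu_resv_region *region;

	region = iommu_alloc_resv_region(0x8000000, SZ_1M,
					 IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO,
					 IOMMU_RESV_SW_MSI);
	if (region)
		list_add_tail(&region->list, head);
}
```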
-
Submitted by Eric Auger
We want to extend the callbacks used for dm regions and use them for reserved regions. Reserved regions can be:
- directly mapped regions
- regions that cannot be iommu mapped (PCI host bridge windows, ...)
- MSI regions (because they belong to another address space or because they are not translated by the IOMMU and need special handling)
So let's rename the struct and also the callbacks.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Bharat Bhushan <bharat.bhushan@nxp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 17 January 2017, 1 commit
-
-
Submitted by Robin Murphy
We wouldn't normally expect ops->attach_dev() to fail, but on IOMMUs with limited hardware resources, or generally misconfigured systems, it is certainly possible. We report failure correctly from the external iommu_attach_device() interface, but do not do so in iommu_group_add() when attaching to the default domain. The result of failure there is that the device, group and domain all get left in a broken, part-configured state which leads to weird errors and misbehaviour down the line when IOMMU API calls sort-of-but-don't-quite work. Check the return value of __iommu_attach_device() on the default domain, and refactor the error handling paths to cope with its failure and clean up correctly in such cases.
Fixes: e39cb8a3 ("iommu: Make sure a device is always attached to a domain")
Reported-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 29 November 2016, 1 commit
-
-
Submitted by Lorenzo Pieralisi
The of_iommu_{set/get}_ops() API is used to associate a device tree node with a specific set of IOMMU operations. The same kernel interface is required on systems booting with ACPI, where devices are not associated with a device tree node, so the interface requires generalization. The struct device fwnode member represents the fwnode token associated with the device, and the struct it points at is firmware specific; regardless, it is initialized on both ACPI and DT systems and makes an ideal candidate for associating a set of IOMMU operations with a given device, through its struct device.fwnode member pointer, paving the way for representing per-device iommu_ops (ie an iommu instance associated with a device). Convert the DT-specific of_iommu_{set/get}_ops() interface to use struct device.fwnode as a look-up token, making the interface usable on ACPI systems, and rename the data structures and the registration API so that they represent their usage more clearly.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Nowicki <tn@semihalf.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Tomasz Nowicki <tn@semihalf.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 15 November 2016, 1 commit
-
-
Submitted by Robin Murphy
iommu_group_get_for_dev() expects the IOMMU driver's device_group callback to return a group with a reference held for the given device. Whilst allocating a new group is fine, and pci_device_group() correctly handles reusing an existing group, there is no general means for IOMMU drivers doing their own group lookup to take additional references on an existing group pointer without having to also store device pointers or resort to elaborate trickery. Add an IOMMU-driver-specific function to fill the hole.
Acked-by: Sricharan R <sricharan@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
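A sketch of the use case this enables, in a driver's device_group() callback; the driver-private lookup is hypothetical and stubbed out here.

```c
#include <linux/iommu.h>

/* Hypothetical driver-private lookup; stubbed out for this sketch. */
static struct iommu_group *my_find_existing_group(struct device *dev)
{
	return NULL;
}

/*
 * device_group() must return the group with a reference held for the
 * caller, whether it reuses an existing group or allocates a new one.
 */
static struct iommu_group *my_device_group(struct device *dev)
{
	struct iommu_group *group = my_find_existing_group(dev);

	if (group)
		return iommu_group_ref_get(group);	/* +1 ref for the caller */

	return iommu_group_alloc();
}
```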
-
- 16 September 2016, 1 commit
-
-
Submitted by Robin Murphy
Introduce a common structure to hold the per-device firmware data that most IOMMU drivers need to keep track of. This enables us to configure much of that data from common firmware code, and consolidate a lot of the equivalent implementations, device look-up tables, etc. which are currently strewn across IOMMU drivers. This will also enable us to address the outstanding "multiple IOMMUs on the platform bus" problem by tweaking IOMMU API calls to prefer dev->fwspec->ops before falling back to dev->bus->iommu_ops, and thus gracefully handle those troublesome systems which we currently cannot. As the first user, hook up the OF IOMMU configuration mechanism. The driver-defined nature of DT cells means that we still need the drivers to translate and add the IDs themselves, but future users such as the much less free-form ACPI IORT will be much simpler and self-contained.
CC: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
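A sketch of the driver side of this, assuming the helper names as introduced here (they may differ in later kernels): the core allocates the per-device fwspec, and the driver's of_xlate() callback translates its DT cells and adds the IDs.

```c
#include <linux/iommu.h>
#include <linux/of.h>

/*
 * Hypothetical of_xlate() callback: the DT cell layout is driver-defined,
 * so here we simply treat the first cell as the stream/device ID and add
 * it to the device's fwspec.
 */
static int my_iommu_of_xlate(struct device *dev, struct of_phandle_args *args)
{
	u32 id = args->args[0];

	return iommu_fwspec_add_ids(dev, &id, 1);
}
```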
-
- 13 July 2016, 3 commits
-
-
Submitted by Joerg Roedel
This new call-back will be used by the iommu driver to reserve the given dm_region in its iova space before the mapping is created. The call-back is temporary until the dma-ops implementation is part of the common iommu code.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Submitted by Heiner Kallweit
Ida handling can be much simplified by using the ida_simple_.. functions. This change also fixes a bug: the previous error checking on ida_get_new() was incomplete, since ida_get_new() can return errors other than EAGAIN, e.g. ENOSPC, and that case wasn't handled.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
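The simplified pattern looks roughly like the sketch below (identifiers are made up; ida_simple_get()/ida_simple_remove() were the current API at the time and are deprecated in newer kernels).

```c
#include <linux/gfp.h>
#include <linux/idr.h>

static DEFINE_IDA(example_group_ida);

/*
 * ida_simple_get() returns the new ID or a negative errno (-ENOMEM,
 * -ENOSPC, ...), so a single sign check covers every error case instead
 * of special-casing -EAGAIN as with the old ida_get_new().
 */
static int example_alloc_group_id(void)
{
	return ida_simple_get(&example_group_ida, 0, 0, GFP_KERNEL);
}

static void example_free_group_id(int id)
{
	ida_simple_remove(&example_group_ida, id);
}
```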
-
Submitted by Heiner Kallweit
iommu_group_ida and iommu_group_mutex can be initialized statically. There's no need to do this dynamically in the init function.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 09 May 2016, 1 commit
-
-
Submitted by Robin Murphy
Many IOMMUs support multiple page table formats, meaning that any given domain may only support a subset of the hardware page sizes presented in iommu_ops->pgsize_bitmap. There are also certain use-cases where the creator of a domain may want to control which page sizes are used, for example to force the use of hugepage mappings to reduce pagetable walk depth. To this end, add a per-domain pgsize_bitmap to represent the subset of page sizes actually in use, to make it possible for domains with different requirements to coexist.
Signed-off-by: Will Deacon <will.deacon@arm.com>
[rm: hijacked and rebased original patch with new commit message]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
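A sketch of the use case mentioned above: a domain creator narrowing the per-domain pgsize_bitmap to hugepage sizes before establishing any mappings. Whether a given driver supports that exact combination is hardware-dependent, so treat this as an illustration of the field rather than a guaranteed recipe.

```c
#include <linux/device.h>
#include <linux/iommu.h>
#include <linux/sizes.h>

static struct iommu_domain *alloc_hugepage_domain(struct bus_type *bus)
{
	struct iommu_domain *domain = iommu_domain_alloc(bus);

	/* Restrict this domain to 2M and 1G mappings only. */
	if (domain)
		domain->pgsize_bitmap &= (SZ_2M | SZ_1G);

	return domain;
}
```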
-
- 12 April 2016, 1 commit
-
-
Submitted by Jacek Lawrynowicz
Solve IOMMU support issues with PCIe non-transparent bridges (NTBs) that use Requester ID look-up tables (RID-LUT), e.g., the PEX8733. The NTB connects devices in two independent PCI domains. Devices separated by the NTB are not able to discover each other. A PCI packet being forwarded from one domain to another has to have its RID modified so it appears on the correct bus, and completions are forwarded back to the original domain through the NTB. The RID is translated using a preprogrammed table (LUT) and the PCI packet propagates upstream away from the NTB. If the destination system has an IOMMU enabled, the packet will be discarded because the new RID is unknown to the IOMMU. Adding a DMA alias for the new RID allows the IOMMU to properly recognize the packet. Each device behind the NTB has a unique RID assigned in the RID-LUT. The current DMA alias implementation supports only a single alias, so it's not possible to support multiple devices behind the NTB when the IOMMU is enabled. Enable all possible aliases on a given bus (256), stored in a bitset; the alias devfn is directly translated to a bit number. The bitset is not allocated for devices that have no need for DMA aliases. More details can be found in the following article: http://www.plxtech.com/files/pdf/technical/expresslane/RTC_Enabling%20MulitHostSystemDesigns.pdf
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: David Woodhouse <David.Woodhouse@intel.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
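On the consumer side, a quirk or NTB-aware driver would register one DMA alias per RID-LUT entry so the IOMMU accepts the translated requester IDs. The sketch below assumes the single-devfn form of pci_add_dma_alias() from this era (later kernels added a range variant), and the LUT walk itself is hypothetical.

```c
#include <linux/pci.h>

/*
 * Register an alias for every devfn programmed into the NTB's RID-LUT, so
 * DMA arriving with a translated RID is matched to this device's group.
 */
static void example_add_ntb_aliases(struct pci_dev *pdev,
				    const u8 *lut_devfns, int count)
{
	int i;

	for (i = 0; i < count; i++)
		pci_add_dma_alias(pdev, lut_devfns[i]);
}
```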
-