Commit 6eae81a5 authored by Linus Torvalds

Merge tag 'iommu-updates-v4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU updates from Joerg Roedel:
 "This time with bigger changes than usual:

   - A new IOMMU driver for the ARM SMMUv3.

     This IOMMU is pretty different from SMMUv1 and v2 in that it is
     configured through in-memory structures and not through the MMIO
     register region.  The ARM SMMUv3 also supports IO demand paging for
     PCI devices with PRI/PASID capabilities, but this is not
     implemented in the driver yet.

   - Lots of cleanups and device-tree support for the Exynos IOMMU
     driver.  This is part of the effort to bring Exynos DRM support
     upstream.

   - Introduction of default domains into the IOMMU core code.

      The rationale behind this is to move functionality out of the IOMMU
     drivers to common code to get to a unified behavior between
     different drivers.  The patches here introduce a default domain for
     iommu-groups (isolation groups).

     A device will now always be attached to a domain, either the
     default domain or another domain handled by the device driver.  The
      IOMMU drivers have to be modified to make use of that feature.  So
      far only the AMD IOMMU driver has been converted, with others to
      follow (a minimal sketch follows this message).

   - Patches for the Intel VT-d driver to fix DMAR faults that happen
     when a kdump kernel boots.

     When the kdump kernel boots it re-initializes the IOMMU hardware,
     which destroys all mappings from the crashed kernel.  As this
     happens before the endpoint devices are re-initialized, any
     in-flight DMA causes a DMAR fault.  These faults cause PCI master
     aborts, which some devices can't handle properly and go into an
     undefined state, so that the device driver in the kdump kernel
     fails to initialize them and the dump fails.

      This is now fixed by copying over the mapping structures (only
      context tables and interrupt remapping tables) from the old kernel
      and keeping the old mappings in place until the device driver of the
      new kernel takes over.  This emulates the behavior without an
      IOMMU as closely as possible (a simplified sketch follows the
      shortlog below).

   - A couple of other small fixes and cleanups"
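As a rough illustration of the default-domain change described above: after this merge the IOMMU core attaches every device to its group's default domain, and a converted driver only asks the core what it ended up with. The sketch below is modeled on the AMD driver conversion further down in this diff and is not the exact upstream code; my_iommu_add_device() and my_dma_ops are illustrative names, while iommu_group_get_for_dev(), iommu_get_domain_for_dev() and IOMMU_DOMAIN_IDENTITY are the core APIs that conversion uses (dev->archdata.dma_ops and nommu_dma_ops are x86 specifics taken from the AMD driver).

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/iommu.h>

static struct dma_map_ops my_dma_ops;	/* stand-in for the driver's DMA-API ops */

/* Illustrative sketch of a converted add_device callback. */
static int my_iommu_add_device(struct device *dev)
{
	struct iommu_domain *domain;
	struct iommu_group *group;

	/* The core allocates (or finds) the iommu group and attaches the
	 * device to the group's default domain. */
	group = iommu_group_get_for_dev(dev);
	if (IS_ERR(group))
		return PTR_ERR(group);
	iommu_group_put(group);

	/* Have a look at what the core ended up with for this device. */
	domain = iommu_get_domain_for_dev(dev);
	if (domain->type == IOMMU_DOMAIN_IDENTITY)
		dev->archdata.dma_ops = &nommu_dma_ops;	/* passthrough */
	else
		dev->archdata.dma_ops = &my_dma_ops;	/* translated DMA */

	return 0;
}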

* tag 'iommu-updates-v4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (69 commits)
  iommu/amd: Handle large pages correctly in free_pagetable
  iommu/vt-d: Don't disable IR when it was previously enabled
  iommu/vt-d: Make sure copied over IR entries are not reused
  iommu/vt-d: Copy IR table from old kernel when in kdump mode
  iommu/vt-d: Set IRTA in intel_setup_irq_remapping
  iommu/vt-d: Disable IRQ remapping in intel_prepare_irq_remapping
  iommu/vt-d: Move QI initialization to intel_setup_irq_remapping
  iommu/vt-d: Move EIM detection to intel_prepare_irq_remapping
  iommu/vt-d: Enable Translation only if it was previously disabled
  iommu/vt-d: Don't disable translation prior to OS handover
  iommu/vt-d: Don't copy translation tables if RTT bit needs to be changed
  iommu/vt-d: Don't do early domain assignment if kdump kernel
  iommu/vt-d: Allocate si_domain in init_dmars()
  iommu/vt-d: Mark copied context entries
  iommu/vt-d: Do not re-use domain-ids from the old kernel
  iommu/vt-d: Copy translation tables from old kernel
  iommu/vt-d: Detect pre enabled translation
  iommu/vt-d: Make root entry visible for hardware right after allocation
  iommu/vt-d: Init QI before root entry is allocated
  iommu/vt-d: Cleanup log messages
  ...
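The kdump fix (the "Copy translation tables from old kernel" patches, whose intel-iommu.c part is collapsed in this view) follows the same pattern the interrupt-remapping copy below shows in iommu_load_old_irte(): read the hardware register that still points at the crashed kernel's tables, map them, and copy rather than reprogramming the IOMMU with empty tables. The following is a deliberately simplified sketch of that idea for the root table only; the real code also copies the context tables and defers cleanup until the new kernel's drivers take over, and sketch_copy_old_root_table() is an invented name.

#include <linux/errno.h>
#include <linux/intel-iommu.h>
#include <linux/io.h>
#include <linux/string.h>

/* Simplified illustration: DMAR_RTADDR_REG still holds the old kernel's
 * root table address, so copy that table instead of starting from an
 * empty one, which would make in-flight DMA fault. */
static int sketch_copy_old_root_table(struct intel_iommu *iommu)
{
	u64 rtaddr = dmar_readq(iommu->reg + DMAR_RTADDR_REG);
	phys_addr_t old_rt_phys = rtaddr & VTD_PAGE_MASK;
	void *old_rt;

	if (!old_rt_phys)
		return -EINVAL;

	old_rt = ioremap_cache(old_rt_phys, VTD_PAGE_SIZE);
	if (!old_rt)
		return -ENOMEM;

	memcpy(iommu->root_entry, old_rt, VTD_PAGE_SIZE);
	__iommu_flush_cache(iommu, iommu->root_entry, VTD_PAGE_SIZE);
	iounmap(old_rt);

	return 0;
}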
* ARM SMMUv3 Architecture Implementation
The SMMUv3 architecture is a significant departure from previous
revisions, replacing the MMIO register interface with in-memory command
and event queues and adding support for the ATS and PRI components of
the PCIe specification.
** SMMUv3 required properties:

- compatible        : Should include:
                      * "arm,smmu-v3" for any SMMUv3 compliant
                        implementation. This entry should be last in the
                        compatible list.

- reg               : Base address and size of the SMMU.

- interrupts        : Non-secure interrupt list describing the wired
                      interrupt sources corresponding to entries in
                      interrupt-names. If no wired interrupts are
                      present then this property may be omitted.

- interrupt-names   : When the interrupts property is present, should
                      include the following:
                      * "eventq"    - Event Queue not empty
                      * "priq"      - PRI Queue not empty
                      * "cmdq-sync" - CMD_SYNC complete
                      * "gerror"    - Global Error activated

** SMMUv3 optional properties:

- dma-coherent      : Present if DMA operations made by the SMMU (page
                      table walks, stream table accesses etc) are cache
                      coherent with the CPU.

                      NOTE: this only applies to the SMMU itself, not
                      masters connected upstream of the SMMU.
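Binding documents of this vintage usually close with an example node. The one below is purely illustrative and simply exercises the properties listed above: the unit address, register window and interrupt specifiers are invented, and the GIC_SPI/IRQ_TYPE_EDGE_RISING macros assume <dt-bindings/interrupt-controller/arm-gic.h>.

        smmu@2b400000 {
                compatible = "arm,smmu-v3";
                reg = <0x0 0x2b400000 0x0 0x20000>;
                interrupts = <GIC_SPI 74 IRQ_TYPE_EDGE_RISING>,
                             <GIC_SPI 75 IRQ_TYPE_EDGE_RISING>,
                             <GIC_SPI 77 IRQ_TYPE_EDGE_RISING>,
                             <GIC_SPI 79 IRQ_TYPE_EDGE_RISING>;
                interrupt-names = "eventq", "priq", "cmdq-sync", "gerror";
                dma-coherent;
        };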
......@@ -1637,11 +1637,12 @@ F: drivers/i2c/busses/i2c-cadence.c
F: drivers/mmc/host/sdhci-of-arasan.c
F: drivers/edac/synopsys_edac.c
ARM SMMU DRIVER
ARM SMMU DRIVERS
M: Will Deacon <will.deacon@arm.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/iommu/arm-smmu.c
F: drivers/iommu/arm-smmu-v3.c
F: drivers/iommu/io-pgtable-arm.c
ARM64 PORT (AARCH64 ARCHITECTURE)
......
......@@ -339,6 +339,7 @@ config SPAPR_TCE_IOMMU
Enables bits of IOMMU API required by VFIO. The iommu_ops
is not implemented as it is not necessary for VFIO.
# ARM IOMMU support
config ARM_SMMU
bool "ARM Ltd. System MMU (SMMU) Support"
depends on (ARM64 || ARM) && MMU
......@@ -352,4 +353,16 @@ config ARM_SMMU
Say Y here if your SoC includes an IOMMU device implementing
the ARM SMMU architecture.
config ARM_SMMU_V3
bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support"
depends on ARM64 && PCI
select IOMMU_API
select IOMMU_IO_PGTABLE_LPAE
help
Support for implementations of the ARM System MMU architecture
version 3 providing translation support to a PCIe root complex.
Say Y here if your system includes an IOMMU device implementing
the ARM SMMUv3 architecture.
endif # IOMMU_SUPPORT
......@@ -9,6 +9,7 @@ obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o msm_iommu_dev.o
obj-$(CONFIG_AMD_IOMMU) += amd_iommu.o amd_iommu_init.o
obj-$(CONFIG_AMD_IOMMU_V2) += amd_iommu_v2.o
obj-$(CONFIG_ARM_SMMU) += arm-smmu.o
obj-$(CONFIG_ARM_SMMU_V3) += arm-smmu-v3.o
obj-$(CONFIG_DMAR_TABLE) += dmar.o
obj-$(CONFIG_INTEL_IOMMU) += intel-iommu.o
obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
......
......@@ -65,10 +65,6 @@
static DEFINE_RWLOCK(amd_iommu_devtable_lock);
/* A list of preallocated protection domains */
static LIST_HEAD(iommu_pd_list);
static DEFINE_SPINLOCK(iommu_pd_list_lock);
/* List of all available dev_data structures */
static LIST_HEAD(dev_data_list);
static DEFINE_SPINLOCK(dev_data_list_lock);
......@@ -120,7 +116,7 @@ struct iommu_cmd {
struct kmem_cache *amd_iommu_irq_cache;
static void update_domain(struct protection_domain *domain);
static int __init alloc_passthrough_domain(void);
static int alloc_passthrough_domain(void);
/****************************************************************************
*
......@@ -235,31 +231,38 @@ static bool pdev_pri_erratum(struct pci_dev *pdev, u32 erratum)
}
/*
* In this function the list of preallocated protection domains is traversed to
* find the domain for a specific device
* This function actually applies the mapping to the page table of the
* dma_ops domain.
*/
static struct dma_ops_domain *find_protection_domain(u16 devid)
static void alloc_unity_mapping(struct dma_ops_domain *dma_dom,
struct unity_map_entry *e)
{
struct dma_ops_domain *entry, *ret = NULL;
unsigned long flags;
u16 alias = amd_iommu_alias_table[devid];
if (list_empty(&iommu_pd_list))
return NULL;
spin_lock_irqsave(&iommu_pd_list_lock, flags);
u64 addr;
list_for_each_entry(entry, &iommu_pd_list, list) {
if (entry->target_dev == devid ||
entry->target_dev == alias) {
ret = entry;
break;
}
for (addr = e->address_start; addr < e->address_end;
addr += PAGE_SIZE) {
if (addr < dma_dom->aperture_size)
__set_bit(addr >> PAGE_SHIFT,
dma_dom->aperture[0]->bitmap);
}
}
/*
* Inits the unity mappings required for a specific device
*/
static void init_unity_mappings_for_device(struct device *dev,
struct dma_ops_domain *dma_dom)
{
struct unity_map_entry *e;
u16 devid;
spin_unlock_irqrestore(&iommu_pd_list_lock, flags);
devid = get_device_id(dev);
return ret;
list_for_each_entry(e, &amd_iommu_unity_map, list) {
if (!(devid >= e->devid_start && devid <= e->devid_end))
continue;
alloc_unity_mapping(dma_dom, e);
}
}
/*
......@@ -291,11 +294,23 @@ static bool check_device(struct device *dev)
static void init_iommu_group(struct device *dev)
{
struct dma_ops_domain *dma_domain;
struct iommu_domain *domain;
struct iommu_group *group;
group = iommu_group_get_for_dev(dev);
if (!IS_ERR(group))
iommu_group_put(group);
if (IS_ERR(group))
return;
domain = iommu_group_default_domain(group);
if (!domain)
goto out;
dma_domain = to_pdomain(domain)->priv;
init_unity_mappings_for_device(dev, dma_domain);
out:
iommu_group_put(group);
}
static int __last_alias(struct pci_dev *pdev, u16 alias, void *data)
......@@ -435,64 +450,15 @@ static void iommu_uninit_device(struct device *dev)
/* Unlink from alias, it may change if another device is re-plugged */
dev_data->alias_data = NULL;
/* Remove dma-ops */
dev->archdata.dma_ops = NULL;
/*
* We keep dev_data around for unplugged devices and reuse it when the
* device is re-plugged - not doing so would introduce a ton of races.
*/
}
void __init amd_iommu_uninit_devices(void)
{
struct iommu_dev_data *dev_data, *n;
struct pci_dev *pdev = NULL;
for_each_pci_dev(pdev) {
if (!check_device(&pdev->dev))
continue;
iommu_uninit_device(&pdev->dev);
}
/* Free all of our dev_data structures */
list_for_each_entry_safe(dev_data, n, &dev_data_list, dev_data_list)
free_dev_data(dev_data);
}
int __init amd_iommu_init_devices(void)
{
struct pci_dev *pdev = NULL;
int ret = 0;
for_each_pci_dev(pdev) {
if (!check_device(&pdev->dev))
continue;
ret = iommu_init_device(&pdev->dev);
if (ret == -ENOTSUPP)
iommu_ignore_device(&pdev->dev);
else if (ret)
goto out_free;
}
/*
* Initialize IOMMU groups only after iommu_init_device() has
* had a chance to populate any IVRS defined aliases.
*/
for_each_pci_dev(pdev) {
if (check_device(&pdev->dev))
init_iommu_group(&pdev->dev);
}
return 0;
out_free:
amd_iommu_uninit_devices();
return ret;
}
#ifdef CONFIG_AMD_IOMMU_STATS
/*
......@@ -1464,94 +1430,6 @@ static unsigned long iommu_unmap_page(struct protection_domain *dom,
return unmapped;
}
/*
* This function checks if a specific unity mapping entry is needed for
* this specific IOMMU.
*/
static int iommu_for_unity_map(struct amd_iommu *iommu,
struct unity_map_entry *entry)
{
u16 bdf, i;
for (i = entry->devid_start; i <= entry->devid_end; ++i) {
bdf = amd_iommu_alias_table[i];
if (amd_iommu_rlookup_table[bdf] == iommu)
return 1;
}
return 0;
}
/*
* This function actually applies the mapping to the page table of the
* dma_ops domain.
*/
static int dma_ops_unity_map(struct dma_ops_domain *dma_dom,
struct unity_map_entry *e)
{
u64 addr;
int ret;
for (addr = e->address_start; addr < e->address_end;
addr += PAGE_SIZE) {
ret = iommu_map_page(&dma_dom->domain, addr, addr, e->prot,
PAGE_SIZE);
if (ret)
return ret;
/*
* if unity mapping is in aperture range mark the page
* as allocated in the aperture
*/
if (addr < dma_dom->aperture_size)
__set_bit(addr >> PAGE_SHIFT,
dma_dom->aperture[0]->bitmap);
}
return 0;
}
/*
* Init the unity mappings for a specific IOMMU in the system
*
* Basically iterates over all unity mapping entries and applies them to
* the default domain DMA of that IOMMU if necessary.
*/
static int iommu_init_unity_mappings(struct amd_iommu *iommu)
{
struct unity_map_entry *entry;
int ret;
list_for_each_entry(entry, &amd_iommu_unity_map, list) {
if (!iommu_for_unity_map(iommu, entry))
continue;
ret = dma_ops_unity_map(iommu->default_dom, entry);
if (ret)
return ret;
}
return 0;
}
/*
* Inits the unity mappings required for a specific device
*/
static int init_unity_mappings_for_device(struct dma_ops_domain *dma_dom,
u16 devid)
{
struct unity_map_entry *e;
int ret;
list_for_each_entry(e, &amd_iommu_unity_map, list) {
if (!(devid >= e->devid_start && devid <= e->devid_end))
continue;
ret = dma_ops_unity_map(dma_dom, e);
if (ret)
return ret;
}
return 0;
}
/****************************************************************************
*
* The next functions belong to the address allocator for the dma_ops
......@@ -1705,14 +1583,16 @@ static unsigned long dma_ops_area_alloc(struct device *dev,
unsigned long next_bit = dom->next_address % APERTURE_RANGE_SIZE;
int max_index = dom->aperture_size >> APERTURE_RANGE_SHIFT;
int i = start >> APERTURE_RANGE_SHIFT;
unsigned long boundary_size;
unsigned long boundary_size, mask;
unsigned long address = -1;
unsigned long limit;
next_bit >>= PAGE_SHIFT;
boundary_size = ALIGN(dma_get_seg_boundary(dev) + 1,
PAGE_SIZE) >> PAGE_SHIFT;
mask = dma_get_seg_boundary(dev);
boundary_size = mask + 1 ? ALIGN(mask + 1, PAGE_SIZE) >> PAGE_SHIFT :
1UL << (BITS_PER_LONG - PAGE_SHIFT);
for (;i < max_index; ++i) {
unsigned long offset = dom->aperture[i]->offset >> PAGE_SHIFT;
......@@ -1870,9 +1750,15 @@ static void free_pt_##LVL (unsigned long __pt) \
pt = (u64 *)__pt; \
\
for (i = 0; i < 512; ++i) { \
/* PTE present? */ \
if (!IOMMU_PTE_PRESENT(pt[i])) \
continue; \
\
/* Large PTE? */ \
if (PM_PTE_LEVEL(pt[i]) == 0 || \
PM_PTE_LEVEL(pt[i]) == 7) \
continue; \
\
p = (unsigned long)IOMMU_PTE_PAGE(pt[i]); \
FN(p); \
} \
......@@ -2009,7 +1895,6 @@ static struct dma_ops_domain *dma_ops_domain_alloc(void)
goto free_dma_dom;
dma_dom->need_flush = false;
dma_dom->target_dev = 0xffff;
add_domain_to_list(&dma_dom->domain);
......@@ -2374,110 +2259,67 @@ static void detach_device(struct device *dev)
dev_data->ats.enabled = false;
}
/*
* Find out the protection domain structure for a given PCI device. This
* will give us the pointer to the page table root for example.
*/
static struct protection_domain *domain_for_device(struct device *dev)
{
struct iommu_dev_data *dev_data;
struct protection_domain *dom = NULL;
unsigned long flags;
dev_data = get_dev_data(dev);
if (dev_data->domain)
return dev_data->domain;
if (dev_data->alias_data != NULL) {
struct iommu_dev_data *alias_data = dev_data->alias_data;
read_lock_irqsave(&amd_iommu_devtable_lock, flags);
if (alias_data->domain != NULL) {
__attach_device(dev_data, alias_data->domain);
dom = alias_data->domain;
}
read_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
}
return dom;
}
static int device_change_notifier(struct notifier_block *nb,
unsigned long action, void *data)
static int amd_iommu_add_device(struct device *dev)
{
struct dma_ops_domain *dma_domain;
struct protection_domain *domain;
struct iommu_dev_data *dev_data;
struct device *dev = data;
struct iommu_domain *domain;
struct amd_iommu *iommu;
unsigned long flags;
u16 devid;
int ret;
if (!check_device(dev))
if (!check_device(dev) || get_dev_data(dev))
return 0;
devid = get_device_id(dev);
iommu = amd_iommu_rlookup_table[devid];
dev_data = get_dev_data(dev);
switch (action) {
case BUS_NOTIFY_ADD_DEVICE:
devid = get_device_id(dev);
iommu = amd_iommu_rlookup_table[devid];
iommu_init_device(dev);
init_iommu_group(dev);
ret = iommu_init_device(dev);
if (ret) {
if (ret != -ENOTSUPP)
pr_err("Failed to initialize device %s - trying to proceed anyway\n",
dev_name(dev));
/*
* dev_data is still NULL and
* got initialized in iommu_init_device
*/
dev_data = get_dev_data(dev);
iommu_ignore_device(dev);
dev->archdata.dma_ops = &nommu_dma_ops;
goto out;
}
init_iommu_group(dev);
if (iommu_pass_through || dev_data->iommu_v2) {
dev_data->passthrough = true;
attach_device(dev, pt_domain);
break;
}
dev_data = get_dev_data(dev);
domain = domain_for_device(dev);
BUG_ON(!dev_data);
/* allocate a protection domain if a device is added */
dma_domain = find_protection_domain(devid);
if (!dma_domain) {
dma_domain = dma_ops_domain_alloc();
if (!dma_domain)
goto out;
dma_domain->target_dev = devid;
spin_lock_irqsave(&iommu_pd_list_lock, flags);
list_add_tail(&dma_domain->list, &iommu_pd_list);
spin_unlock_irqrestore(&iommu_pd_list_lock, flags);
}
if (dev_data->iommu_v2)
iommu_request_dm_for_dev(dev);
/* Domains are initialized for this device - have a look what we ended up with */
domain = iommu_get_domain_for_dev(dev);
if (domain->type == IOMMU_DOMAIN_IDENTITY) {
dev_data->passthrough = true;
dev->archdata.dma_ops = &nommu_dma_ops;
} else {
dev->archdata.dma_ops = &amd_iommu_dma_ops;
break;
case BUS_NOTIFY_REMOVED_DEVICE:
iommu_uninit_device(dev);
default:
goto out;
}
out:
iommu_completion_wait(iommu);
out:
return 0;
}
static struct notifier_block device_nb = {
.notifier_call = device_change_notifier,
};
void amd_iommu_init_notifier(void)
static void amd_iommu_remove_device(struct device *dev)
{
bus_register_notifier(&pci_bus_type, &device_nb);
struct amd_iommu *iommu;
u16 devid;
if (!check_device(dev))
return;
devid = get_device_id(dev);
iommu = amd_iommu_rlookup_table[devid];
iommu_uninit_device(dev);
iommu_completion_wait(iommu);
}
/*****************************************************************************
......@@ -2496,28 +2338,20 @@ void amd_iommu_init_notifier(void)
static struct protection_domain *get_domain(struct device *dev)
{
struct protection_domain *domain;
struct dma_ops_domain *dma_dom;
u16 devid = get_device_id(dev);
struct iommu_domain *io_domain;
if (!check_device(dev))
return ERR_PTR(-EINVAL);
domain = domain_for_device(dev);
if (domain != NULL && !dma_ops_domain(domain))
return ERR_PTR(-EBUSY);
if (domain != NULL)
return domain;
io_domain = iommu_get_domain_for_dev(dev);
if (!io_domain)
return NULL;
/* Device not bound yet - bind it */
dma_dom = find_protection_domain(devid);
if (!dma_dom)
dma_dom = amd_iommu_rlookup_table[devid]->default_dom;
attach_device(dev, &dma_dom->domain);
DUMP_printk("Using protection domain %d for device %s\n",
dma_dom->domain.id, dev_name(dev));
domain = to_pdomain(io_domain);
if (!dma_ops_domain(domain))
return ERR_PTR(-EBUSY);
return &dma_dom->domain;
return domain;
}
static void update_device_table(struct protection_domain *domain)
......@@ -3013,54 +2847,6 @@ static int amd_iommu_dma_supported(struct device *dev, u64 mask)
return check_device(dev);
}
/*
* The function for pre-allocating protection domains.
*
* If the driver core informs the DMA layer if a driver grabs a device
* we don't need to preallocate the protection domains anymore.
* For now we have to.
*/
static void __init prealloc_protection_domains(void)
{
struct iommu_dev_data *dev_data;
struct dma_ops_domain *dma_dom;
struct pci_dev *dev = NULL;
u16 devid;
for_each_pci_dev(dev) {
/* Do we handle this device? */
if (!check_device(&dev->dev))
continue;
dev_data = get_dev_data(&dev->dev);
if (!amd_iommu_force_isolation && dev_data->iommu_v2) {
/* Make sure passthrough domain is allocated */
alloc_passthrough_domain();
dev_data->passthrough = true;
attach_device(&dev->dev, pt_domain);
pr_info("AMD-Vi: Using passthrough domain for device %s\n",
dev_name(&dev->dev));
}
/* Is there already any domain for it? */
if (domain_for_device(&dev->dev))
continue;
devid = get_device_id(&dev->dev);
dma_dom = dma_ops_domain_alloc();
if (!dma_dom)
continue;
init_unity_mappings_for_device(dma_dom, devid);
dma_dom->target_dev = devid;
attach_device(&dev->dev, &dma_dom->domain);
list_add_tail(&dma_dom->list, &iommu_pd_list);
}
}
static struct dma_map_ops amd_iommu_dma_ops = {
.alloc = alloc_coherent,
.free = free_coherent,
......@@ -3071,76 +2857,16 @@ static struct dma_map_ops amd_iommu_dma_ops = {
.dma_supported = amd_iommu_dma_supported,
};
static unsigned device_dma_ops_init(void)
{
struct iommu_dev_data *dev_data;
struct pci_dev *pdev = NULL;
unsigned unhandled = 0;
for_each_pci_dev(pdev) {
if (!check_device(&pdev->dev)) {
iommu_ignore_device(&pdev->dev);
unhandled += 1;
continue;
}
dev_data = get_dev_data(&pdev->dev);
if (!dev_data->passthrough)
pdev->dev.archdata.dma_ops = &amd_iommu_dma_ops;
else
pdev->dev.archdata.dma_ops = &nommu_dma_ops;
}
return unhandled;
}
/*
* The function which clues the AMD IOMMU driver into dma_ops.
*/
void __init amd_iommu_init_api(void)
int __init amd_iommu_init_api(void)
{
bus_set_iommu(&pci_bus_type, &amd_iommu_ops);
return bus_set_iommu(&pci_bus_type, &amd_iommu_ops);
}
int __init amd_iommu_init_dma_ops(void)
{
struct amd_iommu *iommu;
int ret, unhandled;
/*
* first allocate a default protection domain for every IOMMU we
* found in the system. Devices not assigned to any other
* protection domain will be assigned to the default one.
*/
for_each_iommu(iommu) {
iommu->default_dom = dma_ops_domain_alloc();
if (iommu->default_dom == NULL)
return -ENOMEM;
iommu->default_dom->domain.flags |= PD_DEFAULT_MASK;
ret = iommu_init_unity_mappings(iommu);
if (ret)
goto free_domains;
}
/*
* Pre-allocate the protection domains for each device.
*/
prealloc_protection_domains();
iommu_detected = 1;
swiotlb = 0;
/* Make the driver finally visible to the drivers */
unhandled = device_dma_ops_init();
if (unhandled && max_pfn > MAX_DMA32_PFN) {
/* There are unhandled devices - initialize swiotlb for them */
swiotlb = 1;
}
amd_iommu_stats_init();
if (amd_iommu_unmap_flush)
......@@ -3149,14 +2875,6 @@ int __init amd_iommu_init_dma_ops(void)
pr_info("AMD-Vi: Lazy IO/TLB flushing enabled\n");
return 0;
free_domains:
for_each_iommu(iommu) {
dma_ops_domain_free(iommu->default_dom);
}
return ret;
}
/*****************************************************************************
......@@ -3223,7 +2941,7 @@ static struct protection_domain *protection_domain_alloc(void)
return NULL;
}
static int __init alloc_passthrough_domain(void)
static int alloc_passthrough_domain(void)
{
if (pt_domain != NULL)
return 0;
......@@ -3241,30 +2959,46 @@ static int __init alloc_passthrough_domain(void)
static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
{
struct protection_domain *pdomain;
struct dma_ops_domain *dma_domain;
/* We only support unmanaged domains for now */
if (type != IOMMU_DOMAIN_UNMANAGED)
return NULL;
pdomain = protection_domain_alloc();
if (!pdomain)
goto out_free;
switch (type) {
case IOMMU_DOMAIN_UNMANAGED:
pdomain = protection_domain_alloc();
if (!pdomain)
return NULL;
pdomain->mode = PAGE_MODE_3_LEVEL;
pdomain->pt_root = (void *)get_zeroed_page(GFP_KERNEL);
if (!pdomain->pt_root)
goto out_free;
pdomain->mode = PAGE_MODE_3_LEVEL;
pdomain->pt_root = (void *)get_zeroed_page(GFP_KERNEL);
if (!pdomain->pt_root) {
protection_domain_free(pdomain);
return NULL;
}
pdomain->domain.geometry.aperture_start = 0;
pdomain->domain.geometry.aperture_end = ~0ULL;
pdomain->domain.geometry.force_aperture = true;
pdomain->domain.geometry.aperture_start = 0;
pdomain->domain.geometry.aperture_end = ~0ULL;
pdomain->domain.geometry.force_aperture = true;
return &pdomain->domain;
break;
case IOMMU_DOMAIN_DMA:
dma_domain = dma_ops_domain_alloc();
if (!dma_domain) {
pr_err("AMD-Vi: Failed to allocate\n");
return NULL;
}
pdomain = &dma_domain->domain;
break;
case IOMMU_DOMAIN_IDENTITY:
pdomain = protection_domain_alloc();
if (!pdomain)
return NULL;
out_free:
protection_domain_free(pdomain);
pdomain->mode = PAGE_MODE_NONE;
break;
default:
return NULL;
}
return NULL;
return &pdomain->domain;
}
static void amd_iommu_domain_free(struct iommu_domain *dom)
......@@ -3414,6 +3148,47 @@ static bool amd_iommu_capable(enum iommu_cap cap)
return false;
}
static void amd_iommu_get_dm_regions(struct device *dev,
struct list_head *head)
{
struct unity_map_entry *entry;
u16 devid;
devid = get_device_id(dev);
list_for_each_entry(entry, &amd_iommu_unity_map, list) {
struct iommu_dm_region *region;
if (devid < entry->devid_start || devid > entry->devid_end)
continue;
region = kzalloc(sizeof(*region), GFP_KERNEL);
if (!region) {
pr_err("Out of memory allocating dm-regions for %s\n",
dev_name(dev));
return;
}
region->start = entry->address_start;
region->length = entry->address_end - entry->address_start;
if (entry->prot & IOMMU_PROT_IR)
region->prot |= IOMMU_READ;
if (entry->prot & IOMMU_PROT_IW)
region->prot |= IOMMU_WRITE;
list_add_tail(&region->list, head);
}
}
static void amd_iommu_put_dm_regions(struct device *dev,
struct list_head *head)
{
struct iommu_dm_region *entry, *next;
list_for_each_entry_safe(entry, next, head, list)
kfree(entry);
}
static const struct iommu_ops amd_iommu_ops = {
.capable = amd_iommu_capable,
.domain_alloc = amd_iommu_domain_alloc,
......@@ -3424,6 +3199,10 @@ static const struct iommu_ops amd_iommu_ops = {
.unmap = amd_iommu_unmap,
.map_sg = default_iommu_map_sg,
.iova_to_phys = amd_iommu_iova_to_phys,
.add_device = amd_iommu_add_device,
.remove_device = amd_iommu_remove_device,
.get_dm_regions = amd_iommu_get_dm_regions,
.put_dm_regions = amd_iommu_put_dm_regions,
.pgsize_bitmap = AMD_IOMMU_PGSIZES,
};
......
......@@ -226,6 +226,7 @@ static enum iommu_init_state init_state = IOMMU_START_STATE;
static int amd_iommu_enable_interrupts(void);
static int __init iommu_go_to_state(enum iommu_init_state state);
static void init_device_table_dma(void);
static inline void update_last_devid(u16 devid)
{
......@@ -1389,9 +1390,15 @@ static int __init amd_iommu_init_pci(void)
break;
}
ret = amd_iommu_init_devices();
init_device_table_dma();
for_each_iommu(iommu)
iommu_flush_all_caches(iommu);
ret = amd_iommu_init_api();
print_iommu_info();
if (!ret)
print_iommu_info();
return ret;
}
......@@ -1829,8 +1836,6 @@ static bool __init check_ioapic_information(void)
static void __init free_dma_resources(void)
{
amd_iommu_uninit_devices();
free_pages((unsigned long)amd_iommu_pd_alloc_bitmap,
get_order(MAX_DOMAIN_ID/8));
......@@ -2023,27 +2028,10 @@ static bool detect_ivrs(void)
static int amd_iommu_init_dma(void)
{
struct amd_iommu *iommu;
int ret;
if (iommu_pass_through)
ret = amd_iommu_init_passthrough();
return amd_iommu_init_passthrough();
else
ret = amd_iommu_init_dma_ops();
if (ret)
return ret;
init_device_table_dma();
for_each_iommu(iommu)
iommu_flush_all_caches(iommu);
amd_iommu_init_api();
amd_iommu_init_notifier();
return 0;
return amd_iommu_init_dma_ops();
}
/****************************************************************************
......
......@@ -30,7 +30,7 @@ extern void amd_iommu_reset_cmd_buffer(struct amd_iommu *iommu);
extern int amd_iommu_init_devices(void);
extern void amd_iommu_uninit_devices(void);
extern void amd_iommu_init_notifier(void);
extern void amd_iommu_init_api(void);
extern int amd_iommu_init_api(void);
/* Needed for interrupt remapping */
extern int amd_iommu_prepare(void);
......
......@@ -447,8 +447,6 @@ struct aperture_range {
* Data container for a dma_ops specific protection domain
*/
struct dma_ops_domain {
struct list_head list;
/* generic protection domain information */
struct protection_domain domain;
......@@ -463,12 +461,6 @@ struct dma_ops_domain {
/* This will be set to true when TLB needs to be flushed */
bool need_flush;
/*
* if this is a preallocated domain, keep the device for which it was
* preallocated in this variable
*/
u16 target_dev;
};
/*
......@@ -553,9 +545,6 @@ struct amd_iommu {
/* if one, we need to send a completion wait command */
bool need_sync;
/* default dma_ops domain for that IOMMU */
struct dma_ops_domain *default_dom;
/* IOMMU sysfs device */
struct device *iommu_dev;
......
This diff is collapsed.
......@@ -202,8 +202,7 @@
#define ARM_SMMU_CB_S1_TLBIVAL 0x620
#define ARM_SMMU_CB_S2_TLBIIPAS2 0x630
#define ARM_SMMU_CB_S2_TLBIIPAS2L 0x638
#define ARM_SMMU_CB_ATS1PR_LO 0x800
#define ARM_SMMU_CB_ATS1PR_HI 0x804
#define ARM_SMMU_CB_ATS1PR 0x800
#define ARM_SMMU_CB_ATSR 0x8f0
#define SCTLR_S1_ASIDPNE (1 << 12)
......@@ -247,7 +246,7 @@
#define FSYNR0_WNR (1 << 4)
static int force_stage;
module_param_named(force_stage, force_stage, int, S_IRUGO | S_IWUSR);
module_param_named(force_stage, force_stage, int, S_IRUGO);
MODULE_PARM_DESC(force_stage,
"Force SMMU mappings to be installed at a particular stage of translation. A value of '1' or '2' forces the corresponding stage. All other values are ignored (i.e. no stage is forced). Note that selecting a specific stage will disable support for nested translation.");
......@@ -1229,18 +1228,18 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
void __iomem *cb_base;
u32 tmp;
u64 phys;
unsigned long va;
cb_base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
if (smmu->version == 1) {
u32 reg = iova & ~0xfff;
writel_relaxed(reg, cb_base + ARM_SMMU_CB_ATS1PR_LO);
} else {
u32 reg = iova & ~0xfff;
writel_relaxed(reg, cb_base + ARM_SMMU_CB_ATS1PR_LO);
reg = ((u64)iova & ~0xfff) >> 32;
writel_relaxed(reg, cb_base + ARM_SMMU_CB_ATS1PR_HI);
}
/* ATS1 registers can only be written atomically */
va = iova & ~0xfffUL;
#ifdef CONFIG_64BIT
if (smmu->version == ARM_SMMU_V2)
writeq_relaxed(va, cb_base + ARM_SMMU_CB_ATS1PR);
else
#endif
writel_relaxed(va, cb_base + ARM_SMMU_CB_ATS1PR);
if (readl_poll_timeout_atomic(cb_base + ARM_SMMU_CB_ATSR, tmp,
!(tmp & ATSR_ACTIVE), 5, 50)) {
......
......@@ -26,7 +26,7 @@
* These routines are used by both DMA-remapping and Interrupt-remapping
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt /* has to precede printk.h */
#define pr_fmt(fmt) "DMAR: " fmt
#include <linux/pci.h>
#include <linux/dmar.h>
......@@ -555,7 +555,7 @@ static int dmar_walk_remapping_entries(struct acpi_dmar_header *start,
break;
} else if (next > end) {
/* Avoid passing table end */
pr_warn(FW_BUG "record passes table end\n");
pr_warn(FW_BUG "Record passes table end\n");
ret = -EINVAL;
break;
}
......@@ -802,7 +802,7 @@ int __init dmar_table_init(void)
ret = parse_dmar_table();
if (ret < 0) {
if (ret != -ENODEV)
pr_info("parse DMAR table failure.\n");
pr_info("Parse DMAR table failure.\n");
} else if (list_empty(&dmar_drhd_units)) {
pr_info("No DMAR devices found\n");
ret = -ENODEV;
......@@ -847,7 +847,7 @@ dmar_validate_one_drhd(struct acpi_dmar_header *entry, void *arg)
else
addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
if (!addr) {
pr_warn("IOMMU: can't validate: %llx\n", drhd->address);
pr_warn("Can't validate DRHD address: %llx\n", drhd->address);
return -EINVAL;
}
......@@ -921,14 +921,14 @@ static int map_iommu(struct intel_iommu *iommu, u64 phys_addr)
iommu->reg_size = VTD_PAGE_SIZE;
if (!request_mem_region(iommu->reg_phys, iommu->reg_size, iommu->name)) {
pr_err("IOMMU: can't reserve memory\n");
pr_err("Can't reserve memory\n");
err = -EBUSY;
goto out;
}
iommu->reg = ioremap(iommu->reg_phys, iommu->reg_size);
if (!iommu->reg) {
pr_err("IOMMU: can't map the region\n");
pr_err("Can't map the region\n");
err = -ENOMEM;
goto release;
}
......@@ -952,13 +952,13 @@ static int map_iommu(struct intel_iommu *iommu, u64 phys_addr)
iommu->reg_size = map_size;
if (!request_mem_region(iommu->reg_phys, iommu->reg_size,
iommu->name)) {
pr_err("IOMMU: can't reserve memory\n");
pr_err("Can't reserve memory\n");
err = -EBUSY;
goto out;
}
iommu->reg = ioremap(iommu->reg_phys, iommu->reg_size);
if (!iommu->reg) {
pr_err("IOMMU: can't map the region\n");
pr_err("Can't map the region\n");
err = -ENOMEM;
goto release;
}
......@@ -1014,14 +1014,14 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
return -ENOMEM;
if (dmar_alloc_seq_id(iommu) < 0) {
pr_err("IOMMU: failed to allocate seq_id\n");
pr_err("Failed to allocate seq_id\n");
err = -ENOSPC;
goto error;
}
err = map_iommu(iommu, drhd->reg_base_addr);
if (err) {
pr_err("IOMMU: failed to map %s\n", iommu->name);
pr_err("Failed to map %s\n", iommu->name);
goto error_free_seq_id;
}
......@@ -1045,8 +1045,8 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
iommu->node = -1;
ver = readl(iommu->reg + DMAR_VER_REG);
pr_info("IOMMU %d: reg_base_addr %llx ver %d:%d cap %llx ecap %llx\n",
iommu->seq_id,
pr_info("%s: reg_base_addr %llx ver %d:%d cap %llx ecap %llx\n",
iommu->name,
(unsigned long long)drhd->reg_base_addr,
DMAR_VER_MAJOR(ver), DMAR_VER_MINOR(ver),
(unsigned long long)iommu->cap,
......@@ -1646,13 +1646,13 @@ int dmar_set_interrupt(struct intel_iommu *iommu)
if (irq > 0) {
iommu->irq = irq;
} else {
pr_err("IOMMU: no free vectors\n");
pr_err("No free IRQ vectors\n");
return -EINVAL;
}
ret = request_irq(irq, dmar_fault, IRQF_NO_THREAD, iommu->name, iommu);
if (ret)
pr_err("IOMMU: can't request irq\n");
pr_err("Can't request irq\n");
return ret;
}
......
This diff is collapsed.
This diff is collapsed.
#define pr_fmt(fmt) "DMAR-IR: " fmt
#include <linux/interrupt.h>
#include <linux/dmar.h>
#include <linux/spinlock.h>
......@@ -9,6 +12,7 @@
#include <linux/intel-iommu.h>
#include <linux/acpi.h>
#include <linux/irqdomain.h>
#include <linux/crash_dump.h>
#include <asm/io_apic.h>
#include <asm/smp.h>
#include <asm/cpu.h>
......@@ -74,8 +78,28 @@ static struct hpet_scope ir_hpet[MAX_HPET_TBS];
static DEFINE_RAW_SPINLOCK(irq_2_ir_lock);
static struct irq_domain_ops intel_ir_domain_ops;
static void iommu_disable_irq_remapping(struct intel_iommu *iommu);
static int __init parse_ioapics_under_ir(void);
static bool ir_pre_enabled(struct intel_iommu *iommu)
{
return (iommu->flags & VTD_FLAG_IRQ_REMAP_PRE_ENABLED);
}
static void clear_ir_pre_enabled(struct intel_iommu *iommu)
{
iommu->flags &= ~VTD_FLAG_IRQ_REMAP_PRE_ENABLED;
}
static void init_ir_status(struct intel_iommu *iommu)
{
u32 gsts;
gsts = readl(iommu->reg + DMAR_GSTS_REG);
if (gsts & DMA_GSTS_IRES)
iommu->flags |= VTD_FLAG_IRQ_REMAP_PRE_ENABLED;
}
static int alloc_irte(struct intel_iommu *iommu, int irq,
struct irq_2_iommu *irq_iommu, u16 count)
{
......@@ -93,8 +117,7 @@ static int alloc_irte(struct intel_iommu *iommu, int irq,
}
if (mask > ecap_max_handle_mask(iommu->ecap)) {
printk(KERN_ERR
"Requested mask %x exceeds the max invalidation handle"
pr_err("Requested mask %x exceeds the max invalidation handle"
" mask value %Lx\n", mask,
ecap_max_handle_mask(iommu->ecap));
return -1;
......@@ -268,7 +291,7 @@ static int set_ioapic_sid(struct irte *irte, int apic)
up_read(&dmar_global_lock);
if (sid == 0) {
pr_warning("Failed to set source-id of IOAPIC (%d)\n", apic);
pr_warn("Failed to set source-id of IOAPIC (%d)\n", apic);
return -1;
}
......@@ -295,7 +318,7 @@ static int set_hpet_sid(struct irte *irte, u8 id)
up_read(&dmar_global_lock);
if (sid == 0) {
pr_warning("Failed to set source-id of HPET block (%d)\n", id);
pr_warn("Failed to set source-id of HPET block (%d)\n", id);
return -1;
}
......@@ -359,11 +382,59 @@ static int set_msi_sid(struct irte *irte, struct pci_dev *dev)
return 0;
}
static int iommu_load_old_irte(struct intel_iommu *iommu)
{
struct irte *old_ir_table;
phys_addr_t irt_phys;
unsigned int i;
size_t size;
u64 irta;
if (!is_kdump_kernel()) {
pr_warn("IRQ remapping was enabled on %s but we are not in kdump mode\n",
iommu->name);
clear_ir_pre_enabled(iommu);
iommu_disable_irq_remapping(iommu);
return -EINVAL;
}
/* Check whether the old ir-table has the same size as ours */
irta = dmar_readq(iommu->reg + DMAR_IRTA_REG);
if ((irta & INTR_REMAP_TABLE_REG_SIZE_MASK)
!= INTR_REMAP_TABLE_REG_SIZE)
return -EINVAL;
irt_phys = irta & VTD_PAGE_MASK;
size = INTR_REMAP_TABLE_ENTRIES*sizeof(struct irte);
/* Map the old IR table */
old_ir_table = ioremap_cache(irt_phys, size);
if (!old_ir_table)
return -ENOMEM;
/* Copy data over */
memcpy(iommu->ir_table->base, old_ir_table, size);
__iommu_flush_cache(iommu, iommu->ir_table->base, size);
/*
* Now check the table for used entries and mark those as
* allocated in the bitmap
*/
for (i = 0; i < INTR_REMAP_TABLE_ENTRIES; i++) {
if (iommu->ir_table->base[i].present)
bitmap_set(iommu->ir_table->bitmap, i, 1);
}
return 0;
}
static void iommu_set_irq_remapping(struct intel_iommu *iommu, int mode)
{
unsigned long flags;
u64 addr;
u32 sts;
unsigned long flags;
addr = virt_to_phys((void *)iommu->ir_table->base);
......@@ -380,10 +451,16 @@ static void iommu_set_irq_remapping(struct intel_iommu *iommu, int mode)
raw_spin_unlock_irqrestore(&iommu->register_lock, flags);
/*
* global invalidation of interrupt entry cache before enabling
* interrupt-remapping.
* Global invalidation of interrupt entry cache to make sure the
* hardware uses the new irq remapping table.
*/
qi_global_iec(iommu);
}
static void iommu_enable_irq_remapping(struct intel_iommu *iommu)
{
unsigned long flags;
u32 sts;
raw_spin_lock_irqsave(&iommu->register_lock, flags);
......@@ -449,6 +526,37 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
ir_table->base = page_address(pages);
ir_table->bitmap = bitmap;
iommu->ir_table = ir_table;
/*
* If the queued invalidation is already initialized,
* shouldn't disable it.
*/
if (!iommu->qi) {
/*
* Clear previous faults.
*/
dmar_fault(-1, iommu);
dmar_disable_qi(iommu);
if (dmar_enable_qi(iommu)) {
pr_err("Failed to enable queued invalidation\n");
goto out_free_bitmap;
}
}
init_ir_status(iommu);
if (ir_pre_enabled(iommu)) {
if (iommu_load_old_irte(iommu))
pr_err("Failed to copy IR table for %s from previous kernel\n",
iommu->name);
else
pr_info("Copied IR table for %s from previous kernel\n",
iommu->name);
}
iommu_set_irq_remapping(iommu, eim_mode);
return 0;
out_free_bitmap:
......@@ -457,6 +565,9 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
__free_pages(pages, INTR_REMAP_PAGE_ORDER);
out_free_table:
kfree(ir_table);
iommu->ir_table = NULL;
return -ENOMEM;
}
......@@ -534,17 +645,17 @@ static void __init intel_cleanup_irq_remapping(void)
}
if (x2apic_supported())
pr_warn("Failed to enable irq remapping. You are vulnerable to irq-injection attacks.\n");
pr_warn("Failed to enable irq remapping. You are vulnerable to irq-injection attacks.\n");
}
static int __init intel_prepare_irq_remapping(void)
{
struct dmar_drhd_unit *drhd;
struct intel_iommu *iommu;
int eim = 0;
if (irq_remap_broken) {
printk(KERN_WARNING
"This system BIOS has enabled interrupt remapping\n"
pr_warn("This system BIOS has enabled interrupt remapping\n"
"on a chipset that contains an erratum making that\n"
"feature unstable. To maintain system stability\n"
"interrupt remapping is being disabled. Please\n"
......@@ -560,7 +671,7 @@ static int __init intel_prepare_irq_remapping(void)
return -ENODEV;
if (parse_ioapics_under_ir() != 1) {
printk(KERN_INFO "Not enabling interrupt remapping\n");
pr_info("Not enabling interrupt remapping\n");
goto error;
}
......@@ -569,10 +680,34 @@ static int __init intel_prepare_irq_remapping(void)
if (!ecap_ir_support(iommu->ecap))
goto error;
/* Do the allocations early */
for_each_iommu(iommu, drhd)
if (intel_setup_irq_remapping(iommu))
/* Detect remapping mode: lapic or x2apic */
if (x2apic_supported()) {
eim = !dmar_x2apic_optout();
if (!eim) {
pr_info("x2apic is disabled because BIOS sets x2apic opt out bit.");
pr_info("Use 'intremap=no_x2apic_optout' to override the BIOS setting.\n");
}
}
for_each_iommu(iommu, drhd) {
if (eim && !ecap_eim_support(iommu->ecap)) {
pr_info("%s does not support EIM\n", iommu->name);
eim = 0;
}
}
eim_mode = eim;
if (eim)
pr_info("Queued invalidation will be enabled to support x2apic and Intr-remapping.\n");
/* Do the initializations early */
for_each_iommu(iommu, drhd) {
if (intel_setup_irq_remapping(iommu)) {
pr_err("Failed to setup irq remapping for %s\n",
iommu->name);
goto error;
}
}
return 0;
......@@ -606,68 +741,13 @@ static int __init intel_enable_irq_remapping(void)
struct dmar_drhd_unit *drhd;
struct intel_iommu *iommu;
bool setup = false;
int eim = 0;
if (x2apic_supported()) {
eim = !dmar_x2apic_optout();
if (!eim)
pr_info("x2apic is disabled because BIOS sets x2apic opt out bit. You can use 'intremap=no_x2apic_optout' to override the BIOS setting.\n");
}
for_each_iommu(iommu, drhd) {
/*
* If the queued invalidation is already initialized,
* shouldn't disable it.
*/
if (iommu->qi)
continue;
/*
* Clear previous faults.
*/
dmar_fault(-1, iommu);
/*
* Disable intr remapping and queued invalidation, if already
* enabled prior to OS handover.
*/
iommu_disable_irq_remapping(iommu);
dmar_disable_qi(iommu);
}
/*
* check for the Interrupt-remapping support
*/
for_each_iommu(iommu, drhd)
if (eim && !ecap_eim_support(iommu->ecap)) {
printk(KERN_INFO "DRHD %Lx: EIM not supported by DRHD, "
" ecap %Lx\n", drhd->reg_base_addr, iommu->ecap);
eim = 0;
}
eim_mode = eim;
if (eim)
pr_info("Queued invalidation will be enabled to support x2apic and Intr-remapping.\n");
/*
* Enable queued invalidation for all the DRHD's.
*/
for_each_iommu(iommu, drhd) {
int ret = dmar_enable_qi(iommu);
if (ret) {
printk(KERN_ERR "DRHD %Lx: failed to enable queued, "
" invalidation, ecap %Lx, ret %d\n",
drhd->reg_base_addr, iommu->ecap, ret);
goto error;
}
}
/*
* Setup Interrupt-remapping for all the DRHD's now.
*/
for_each_iommu(iommu, drhd) {
iommu_set_irq_remapping(iommu, eim);
if (!ir_pre_enabled(iommu))
iommu_enable_irq_remapping(iommu);
setup = true;
}
......@@ -678,9 +758,9 @@ static int __init intel_enable_irq_remapping(void)
set_irq_posting_cap();
pr_info("Enabled IRQ remapping in %s mode\n", eim ? "x2apic" : "xapic");
pr_info("Enabled IRQ remapping in %s mode\n", eim_mode ? "x2apic" : "xapic");
return eim ? IRQ_REMAP_X2APIC_MODE : IRQ_REMAP_XAPIC_MODE;
return eim_mode ? IRQ_REMAP_X2APIC_MODE : IRQ_REMAP_XAPIC_MODE;
error:
intel_cleanup_irq_remapping();
......@@ -905,6 +985,7 @@ static int reenable_irq_remapping(int eim)
/* Set up interrupt remapping for iommu.*/
iommu_set_irq_remapping(iommu, eim);
iommu_enable_irq_remapping(iommu);
setup = true;
}
......@@ -1169,7 +1250,6 @@ static void intel_free_irq_resources(struct irq_domain *domain,
struct irq_2_iommu *irq_iommu;
unsigned long flags;
int i;
for (i = 0; i < nr_irqs; i++) {
irq_data = irq_domain_get_irq_data(domain, virq + i);
if (irq_data && irq_data->chip_data) {
......@@ -1317,28 +1397,12 @@ static int dmar_ir_add(struct dmar_drhd_unit *dmaru, struct intel_iommu *iommu)
/* Setup Interrupt-remapping now. */
ret = intel_setup_irq_remapping(iommu);
if (ret) {
pr_err("DRHD %Lx: failed to allocate resource\n",
iommu->reg_phys);
ir_remove_ioapic_hpet_scope(iommu);
return ret;
}
if (!iommu->qi) {
/* Clear previous faults. */
dmar_fault(-1, iommu);
iommu_disable_irq_remapping(iommu);
dmar_disable_qi(iommu);
}
/* Enable queued invalidation */
ret = dmar_enable_qi(iommu);
if (!ret) {
iommu_set_irq_remapping(iommu, eim);
} else {
pr_err("DRHD %Lx: failed to enable queued invalidation, ecap %Lx, ret %d\n",
iommu->reg_phys, iommu->ecap, ret);
pr_err("Failed to setup irq remapping for %s\n",
iommu->name);
intel_teardown_irq_remapping(iommu);
ir_remove_ioapic_hpet_scope(iommu);
} else {
iommu_enable_irq_remapping(iommu);
}
return ret;
......
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.