1. 01 Nov 2013: 2 commits
  2. 15 Aug 2013: 1 commit
  3. 20 Jun 2013: 1 commit
    • iommu/{vt-d,amd}: Remove multifunction assumption around grouping · c14d2690
      Authored by Alex Williamson
      If a device is multifunction and does not have ACS enabled then we
      assume that the entire package lacks ACS and use function 0 as the
      base of the group.  The PCIe spec however states that components are
      permitted to implement ACS on some, none, or all of their applicable
      functions.  It's therefore conceivable that function 0 may be fully
      independent and support ACS while other functions do not.  Instead
      use the lowest function of the slot that does not have ACS enabled
      as the base of the group.  This may be the current device, which is
      intentional.  So long as we use a consistent algorithm, all the
      non-ACS functions will be grouped together and ACS functions will
      get separate groups.
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      Signed-off-by: Joerg Roedel <joro@8bytes.org>
      c14d2690
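      A minimal sketch of the selection rule described above, assuming a
      helper of this shape (the function name and the exact walk are
      illustrative, not the driver's literal code; pci_get_slot() and
      pci_acs_enabled() are real kernel helpers):
      
      	/*
      	 * Pick the lowest function in the slot that lacks ACS as the
      	 * IOMMU-group "base" device: all non-ACS siblings then share a
      	 * group, while fully isolated functions get their own groups.
      	 */
      	static struct pci_dev *group_base_for_slot(struct pci_dev *pdev,
      						   u16 acs_flags)
      	{
      		struct pci_dev *tmp;
      		int fn;
      
      		/* single-function or ACS-capable: the device stands alone */
      		if (!pdev->multifunction || pci_acs_enabled(pdev, acs_flags))
      			return pci_dev_get(pdev);
      
      		for (fn = 0; fn < 8; fn++) {
      			tmp = pci_get_slot(pdev->bus,
      					   PCI_DEVFN(PCI_SLOT(pdev->devfn), fn));
      			if (!tmp)
      				continue;
      			if (!pci_acs_enabled(tmp, acs_flags))
      				return tmp;	/* lowest non-ACS function */
      			pci_dev_put(tmp);
      		}
      		return pci_dev_get(pdev);	/* fallback: pdev itself lacks ACS */
      	}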
  4. 23 Apr 2013: 2 commits
    • iommu: Move swap_pci_ref function to drivers/iommu/pci.h. · 61e015ac
      Authored by Varun Sethi
      The swap_pci_ref function is used by the IOMMU API code for
      swapping pci device pointers, while determining the iommu
      group for the device.
      Until now this function was being implemented separately by the
      different IOMMU drivers.  This patch moves the function to a new
      file, drivers/iommu/pci.h, so that the implementation can be
      shared across the various IOMMU drivers.
      Signed-off-by: Varun Sethi <Varun.Sethi@freescale.com>
      Signed-off-by: Joerg Roedel <joro@8bytes.org>
      61e015ac
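      The helper in question is small; based on the per-driver copies being
      consolidated, the shared header contains roughly this (a sketch, not a
      verbatim quote of the new file):
      
      	/* drop the reference to the old device, take over the new one */
      	static inline void swap_pci_ref(struct pci_dev **from,
      					struct pci_dev *to)
      	{
      		pci_dev_put(*from);
      		*from = to;
      	}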
    • iommu/vt-d: Disable translation if already enabled · 3a93c841
      Authored by Takao Indoh
      This patch disables translation (DMA remapping) before its initialization
      if it is already enabled.
      
      This is needed for kexec/kdump boot. If DMA remapping is enabled in the
      first kernel, it needs to be disabled before its page tables are
      initialized during the second kernel's boot. Wei Hu also reported that
      this is needed when the second kernel boots with intel_iommu=off.
      
      Basically iommu->gcmd is used to know whether translation is enabled or
      disabled, but it is always zero at boot time even when translation is
      enabled, since iommu->gcmd is initialized without considering such a
      case. Therefore this patch synchronizes the iommu->gcmd value with the
      global command register when the iommu structure is allocated.
      Signed-off-by: Takao Indoh <indou.takao@jp.fujitsu.com>
      Signed-off-by: Joerg Roedel <joro@8bytes.org>
      3a93c841
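      A condensed sketch of the synchronization step described above (register
      and flag names follow the VT-d code of that era; the function name is
      illustrative):
      
      	static void sync_gcmd_with_hw(struct intel_iommu *iommu)
      	{
      		u32 sts = readl(iommu->reg + DMAR_GSTS_REG);
      
      		/* translation left enabled by the previous (crashed) kernel? */
      		if (sts & DMA_GSTS_TES)
      			iommu->gcmd |= DMA_GCMD_TE;
      	}
      
      With iommu->gcmd now reflecting the hardware state, translation can be
      cleanly disabled before the second kernel builds its own page tables.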
  5. 03 Apr 2013: 1 commit
  6. 20 Feb 2013: 1 commit
  7. 28 Jan 2013: 1 commit
  8. 23 Jan 2013: 1 commit
  9. 04 Jan 2013: 1 commit
    • Drivers: iommu: remove __dev* attributes. · d34d6517
      Authored by Greg Kroah-Hartman
      CONFIG_HOTPLUG is going away as an option.  As a result, the __dev*
      markings need to be removed.
      
      This change removes the use of __devinit, __devexit_p, __devinitdata,
      and __devexit from these drivers.
      
      Based on patches originally written by Bill Pemberton, but redone by me
      in order to handle some of the coding style issues better, by hand.
      
      Cc: Bill Pemberton <wfp5p@virginia.edu>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Ohad Ben-Cohen <ohad@wizery.com>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Omar Ramirez Luna <omar.luna@linaro.org>
      Cc: Mauro Carvalho Chehab <mchehab@redhat.com>
      Cc: Hiroshi Doyu <hdoyu@nvidia.com>
      Cc: Stephen Warren <swarren@wwwdotorg.org>
      Cc: Bharat Nihalani <bnihalani@nvidia.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      d34d6517
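      The conversion is purely mechanical; a hypothetical probe/remove pair
      changes like this (foo_iommu_* is a made-up example, not a driver in
      the tree):
      
      	/* before: section annotations tied to CONFIG_HOTPLUG */
      	static int __devinit foo_iommu_probe(struct platform_device *pdev);
      	static int __devexit foo_iommu_remove(struct platform_device *pdev);
      
      	/* after: the annotations are simply dropped */
      	static int foo_iommu_probe(struct platform_device *pdev);
      	static int foo_iommu_remove(struct platform_device *pdev);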
  10. 21 Dec 2012: 1 commit
    • intel-iommu: Free old page tables before creating superpage · 6491d4d0
      Authored by David Woodhouse
      The dma_pte_free_pagetable() function will only free a page table page
      if it is asked to free the *entire* 2MiB range that it covers. So if a
      page table page was used for one or more small mappings, it's likely to
      end up still present in the page tables... but with no valid PTEs.
      
      This was fine when we'd only be repopulating it with 4KiB PTEs anyway
      but the same virtual address range can end up being reused for a
      *large-page* mapping. And in that case we were trying to insert the
      large page into the second-level page table, and getting a complaint
      from the sanity check in __domain_mapping() because there was already a
      corresponding entry. This was *relatively* harmless; it led to a memory
      leak of the old page table page, but no other ill-effects.
      
      Fix it by calling dma_pte_clear_range() (hopefully redundant) and
      dma_pte_free_pagetable() before setting up the new large page.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Tested-by: Ravi Murty <Ravi.Murty@intel.com>
      Tested-by: Sudeep Dutt <sudeep.dutt@intel.com>
      Cc: stable@kernel.org [3.0+]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6491d4d0
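      A paraphrased sketch of the fix inside __domain_mapping() (helper names
      follow the surrounding intel-iommu code, but this is not an exact quote
      of the patch):
      
      	if (largepage_lvl > 1) {
      		unsigned long nr_pages = lvl_to_nr_pages(largepage_lvl);
      
      		/*
      		 * Clear (hopefully redundant) and free any stale
      		 * lower-level page-table page covering this range before
      		 * installing the superpage PTE.
      		 */
      		dma_pte_clear_range(domain, iov_pfn,
      				    iov_pfn + nr_pages - 1);
      		dma_pte_free_pagetable(domain, iov_pfn,
      				       iov_pfn + nr_pages - 1);
      	}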
  11. 21 Nov 2012: 1 commit
  12. 17 Nov 2012: 1 commit
  13. 18 Sep 2012: 1 commit
    • intel-iommu: Default to non-coherent for domains unattached to iommus · 2e12bc29
      Authored by Alex Williamson
      domain_update_iommu_coherency() currently defaults to setting domains
      as coherent when the domain is not attached to any iommus.  This
      allows for a window in domain_context_mapping_one() where such a
      domain can update context entries non-coherently, and only afterwards
      update the domain capability to clear iommu_coherency.
      
      This can be seen using KVM device assignment on VT-d systems that
      do not support coherency in the ecap register.  When a device is
      added to a guest, a domain is created (iommu_coherency = 0), the
      device is attached, and ranges are mapped.  If we then hot unplug
      the device, the coherency is updated and set to the default (1)
      since no iommus are attached to the domain.  A subsequent attach
      of a device reuses the same dmar domain (now marked coherent),
      updates context entries with coherency enabled, and only disables
      coherency as the last step in the process.
      
      To fix this, switch domain_update_iommu_coherency() to use the
      safer, non-coherent default for domains not attached to iommus.
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      Tested-by: Donald Dutile <ddutile@redhat.com>
      Acked-by: Donald Dutile <ddutile@redhat.com>
      Acked-by: Chris Wright <chrisw@sous-sol.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
      2e12bc29
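      A sketch of the fixed update logic (field and helper names are
      approximations of the intel-iommu code of that time):
      
      	static void domain_update_iommu_coherency(struct dmar_domain *domain)
      	{
      		bool found = false;
      		int i;
      
      		domain->iommu_coherency = 1;
      
      		for_each_set_bit(i, domain->iommu_bmp, g_num_of_iommus) {
      			found = true;
      			if (!ecap_coherent(g_iommus[i]->ecap)) {
      				domain->iommu_coherency = 0;
      				break;
      			}
      		}
      
      		/* no iommus attached yet: assume non-coherent, the safe default */
      		if (!found)
      			domain->iommu_coherency = 0;
      	}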
  14. 23 Aug 2012: 1 commit
  15. 07 Aug 2012: 1 commit
  16. 03 Aug 2012: 1 commit
  17. 11 Jul 2012: 1 commit
  18. 25 Jun 2012: 3 commits
    • intel-iommu: Make use of DMA quirks and ACS checks in IOMMU groups · 783f157b
      Authored by Alex Williamson
      Work around broken devices and adhere to ACS support when determining
      IOMMU grouping.
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
      783f157b
    • intel-iommu: Support IOMMU groups · abdfdde2
      Authored by Alex Williamson
      Add IOMMU group support to the Intel VT-d code.  This driver sets up
      devices on demand, so make use of the add_device/remove_device
      callbacks in the IOMMU API to manage setting up the groups.
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
      abdfdde2
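      A sketch of the add_device/remove_device pattern (example_iommu_* is
      illustrative; error handling and the ACS-based group lookup are
      omitted):
      
      	static int example_iommu_add_device(struct device *dev)
      	{
      		struct iommu_group *group;
      		int ret;
      
      		group = iommu_group_alloc();	/* or reuse a sibling's group */
      		if (IS_ERR(group))
      			return PTR_ERR(group);
      
      		ret = iommu_group_add_device(group, dev);
      		iommu_group_put(group);		/* device now holds the reference */
      		return ret;
      	}
      
      	static void example_iommu_remove_device(struct device *dev)
      	{
      		iommu_group_remove_device(dev);
      	}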
    • iommu: IOMMU Groups · d72e31c9
      Authored by Alex Williamson
      IOMMU device groups are currently a rather vague associative notion
      with assembly required by the user or user level driver provider to
      do anything useful.  This patch intends to grow the IOMMU group concept
      into something a bit more consumable.
      
      To do this, we first create an object representing the group, struct
      iommu_group.  This structure is allocated (iommu_group_alloc) and
      filled (iommu_group_add_device) by the iommu driver.  The iommu driver
      is free to add devices to the group using its own set of policies.
      This allows inclusion of devices based on physical hardware or topology
      limitations of the platform, as well as soft requirements, such as
      multi-function trust levels or peer-to-peer protection of the
      interconnects.  Each device may only belong to a single iommu group,
      which is linked from struct device.iommu_group.  IOMMU groups are
      maintained using kobject reference counting, allowing for automatic
      removal of empty, unreferenced groups.  It is the responsibility of
      the iommu driver to remove devices from the group
      (iommu_group_remove_device).
      
      IOMMU groups also include a userspace representation in sysfs under
      /sys/kernel/iommu_groups.  When allocated, each group is given a
      dynamically assigned ID (int).  The ID is managed by the core IOMMU group
      code to support multiple heterogeneous iommu drivers, which could
      potentially collide in group naming/numbering.  This also keeps group
      IDs to small, easily managed values.  A directory is created under
      /sys/kernel/iommu_groups for each group.  A further subdirectory named
      "devices" contains links to each device within the group.  The iommu_group
      file in the device's sysfs directory, which formerly contained a group
      number when read, is now a link to the iommu group.  Example:
      
      $ ls -l /sys/kernel/iommu_groups/26/devices/
      total 0
      lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:00:1e.0 ->
      		../../../../devices/pci0000:00/0000:00:1e.0
      lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.0 ->
      		../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.0
      lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.1 ->
      		../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.1
      
      $ ls -l  /sys/kernel/iommu_groups/26/devices/*/iommu_group
      [truncating perms/owner/timestamp]
      /sys/kernel/iommu_groups/26/devices/0000:00:1e.0/iommu_group ->
      					../../../kernel/iommu_groups/26
      /sys/kernel/iommu_groups/26/devices/0000:06:0d.0/iommu_group ->
      					../../../../kernel/iommu_groups/26
      /sys/kernel/iommu_groups/26/devices/0000:06:0d.1/iommu_group ->
      					../../../../kernel/iommu_groups/26
      
      Groups also include several exported functions for use by user level
      driver providers, for example VFIO.  These include:
      
      iommu_group_get(): Acquires a reference to a group from a device
      iommu_group_put(): Releases reference
      iommu_group_for_each_dev(): Iterates over group devices using callback
      iommu_group_[un]register_notifier(): Allows notification of device add
              and remove operations relevant to the group
      iommu_group_id(): Return the group number
      
      This patch also extends the IOMMU API to allow attaching groups to
      domains.  This is currently a simple wrapper for iterating through
      devices within a group, but it's expected that the IOMMU API may
      eventually make groups a more integral part of domains.
      
      Groups intentionally do not try to manage group ownership.  A user
      level driver provider must independently acquire ownership for each
      device within a group before making use of the group as a whole.
      This may change in the future if group usage becomes more pervasive
      across both DMA and IOMMU ops.
      
      Groups intentionally do not provide a mechanism for driver locking
      or otherwise manipulating driver matching/probing of devices within
      the group.  Such interfaces are generic to devices and beyond the
      scope of IOMMU groups.  If implemented, user level providers have
      ready access via iommu_group_for_each_dev and group notifiers.
      
      iommu_device_group() is removed here as it has no users.  The
      replacement is:
      
      	group = iommu_group_get(dev);
      	id = iommu_group_id(group);
      	iommu_group_put(group);
      
      AMD-Vi & Intel VT-d support re-added in following patches.
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
      d72e31c9
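      A hedged sketch of how a user level driver provider might consume the
      exported calls (the function names here are made up for illustration):
      
      	static int count_one(struct device *dev, void *data)
      	{
      		(*(int *)data)++;
      		return 0;
      	}
      
      	static int inspect_group_of(struct device *dev)
      	{
      		struct iommu_group *group = iommu_group_get(dev);
      		int ndevs = 0;
      
      		if (!group)
      			return -ENODEV;		/* device is not in any group */
      
      		pr_info("device is in iommu group %d\n", iommu_group_id(group));
      		iommu_group_for_each_dev(group, &ndevs, count_one);
      		pr_info("group has %d device(s)\n", ndevs);
      
      		iommu_group_put(group);		/* drop the reference from _get() */
      		return 0;
      	}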
  19. 14 Jun 2012: 1 commit
  20. 26 May 2012: 2 commits
  21. 07 May 2012: 3 commits
  22. 28 Mar 2012: 1 commit
  23. 06 Mar 2012: 2 commits
  24. 06 Feb 2012: 1 commit
  25. 17 Dec 2011: 1 commit
    • iommu: Export intel_iommu_enabled to signal when iommu is in use · 8bc1f85c
      Authored by Eugeni Dodonov
      In the i915 driver we do not enable either rc6 or semaphores on SNB when dmar
      is enabled. The new 'intel_iommu_enabled' variable signals when the
      iommu code is in operation.
      
      Cc: Ted Phelps <phelps@gnusto.com>
      Cc: Peter <pab1612@gmail.com>
      Cc: Lukas Hejtmanek <xhejtman@fi.muni.cz>
      Cc: Andrew Lutomirski <luto@mit.edu>
      CC: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Eugeni Dodonov <eugeni.dodonov@intel.com>
      Signed-off-by: Keith Packard <keithp@keithp.com>
      8bc1f85c
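      A sketch of the intended use in a consumer such as i915 (the condition
      is simplified; the flag was declared in include/linux/dma_remapping.h
      in kernels of that era):
      
      	#include <linux/dma_remapping.h>
      
      	static bool can_enable_rc6_and_semaphores(void)
      	{
      		/* VT-d active: keep rc6/semaphores off on SNB */
      		if (intel_iommu_enabled)
      			return false;
      		return true;
      	}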
  26. 09 Dec 2011: 1 commit
    • memblock: Kill early_node_map[] · 0ee332c1
      Authored by Tejun Heo
      Now all ARCH_POPULATES_NODE_MAP archs select HAVE_MEMBLOCK_NODE_MAP -
      there's no user of early_node_map[] left.  Kill early_node_map[] and
      replace ARCH_POPULATES_NODE_MAP with HAVE_MEMBLOCK_NODE_MAP.  Also,
      relocate for_each_mem_pfn_range() and helper from mm.h to memblock.h
      as page_alloc.c would no longer host an alternative implementation.
      
      This change is ultimately a one-to-one mapping and shouldn't cause any
      observable difference; however, after the recent changes, there are
      some functions which now would fit memblock.c better than page_alloc.c
      and dependency on HAVE_MEMBLOCK_NODE_MAP instead of HAVE_MEMBLOCK
      doesn't make much sense on some of them.  Further cleanups for
      functions inside HAVE_MEMBLOCK_NODE_MAP in mm.h would be nice.
      
      -v2: Fix compile bug introduced by mis-spelling
       CONFIG_HAVE_MEMBLOCK_NODE_MAP to CONFIG_MEMBLOCK_HAVE_NODE_MAP in
       mmzone.h.  Reported by Stephen Rothwell.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Chen Liqin <liqin.chen@sunplusct.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      0ee332c1
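      A small illustrative use of the iterator that moved from mm.h to
      memblock.h (the function itself is made up):
      
      	/* sum the page frames memblock reports for one NUMA node */
      	static unsigned long pages_on_node(int nid)
      	{
      		unsigned long start_pfn, end_pfn, pages = 0;
      		int i, range_nid;
      
      		for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, &range_nid)
      			pages += end_pfn - start_pfn;
      
      		return pages;
      	}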
  27. 06 Dec 2011: 1 commit
  28. 15 Nov 2011: 2 commits
  29. 10 Nov 2011: 2 commits
    • iommu/intel: announce supported page sizes · 6d1c56a9
      Authored by Ohad Ben-Cohen
      Let the IOMMU core know we support arbitrary page sizes (as long as
      their size is a power-of-two multiple of 4KiB).
      
      This way the IOMMU core will retain the existing behavior we're used to;
      it will let us map regions that:
      - are a power-of-two multiple of 4KiB in size
      - are naturally aligned
      
      Note: Intel IOMMU hardware doesn't support arbitrary page sizes,
      but the driver does (it splits arbitrary-sized mappings into
      the pages supported by the hardware).
      
      To make everything simpler for now, though, this patch effectively tells
      the IOMMU core to keep giving this driver the same memory regions it did
      before, so nothing is changed as far as it's concerned.
      
      At this point, the page sizes announced remain static within the IOMMU
      core. To correctly utilize the pgsize-splitting of the IOMMU core by
      this driver, it seems that some core changes should still be done,
      because Intel's IOMMU page size capabilities seem to have the potential
      to be different between different DMA remapping devices.
      Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
      6d1c56a9
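      Condensed (and slightly simplified) from the patch, the announcement
      amounts to one bitmap: every set bit marks a supported mapping size,
      and ~0xFFFUL sets every bit from 4KiB upward, i.e. any power-of-two
      multiple of 4KiB:
      
      	#define INTEL_IOMMU_PGSIZES	(~0xFFFUL)
      
      	static struct iommu_ops intel_iommu_ops = {
      		/* ... map/unmap and the other callbacks ... */
      		.pgsize_bitmap	= INTEL_IOMMU_PGSIZES,
      	};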
    • iommu/core: stop converting bytes to page order back and forth · 5009065d
      Authored by Ohad Ben-Cohen
      Express sizes in bytes rather than in page order, to eliminate the
      size->order->size conversions we have whenever the IOMMU API is calling
      the low level drivers' map/unmap methods.
      
      Adopt all existing drivers.
      Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
      Cc: David Brown <davidb@codeaurora.org>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Joerg Roedel <Joerg.Roedel@amd.com>
      Cc: Stepan Moskovchenko <stepanm@codeaurora.org>
      Cc: KyongHo Cho <pullip.cho@samsung.com>
      Cc: Hiroshi DOYU <hdoyu@nvidia.com>
      Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
      5009065d
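      The interface change can be summarized by comparing the map/unmap
      prototypes in struct iommu_ops before and after (shown here as
      illustrative excerpt structs, simplified from that era's header):
      
      	struct iommu_ops_excerpt_before {
      		int (*map)(struct iommu_domain *domain, unsigned long iova,
      			   phys_addr_t paddr, int gfp_order, int prot);
      		int (*unmap)(struct iommu_domain *domain, unsigned long iova,
      			     int gfp_order);
      	};
      
      	struct iommu_ops_excerpt_after {
      		int (*map)(struct iommu_domain *domain, unsigned long iova,
      			   phys_addr_t paddr, size_t size, int prot);
      		size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
      				size_t size);
      	};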
  30. 01 Nov 2011: 1 commit