Commit e1ba8459 authored by Linus Torvalds

Merge tag 'pci-v3.14-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
 "PCI changes for the v3.14 merge window:

  Resource management
    - Change pci_bus_region addresses to dma_addr_t (Bjorn Helgaas)
    - Support 64-bit AGP BARs (Bjorn Helgaas, Yinghai Lu)
    - Add pci_bus_address() to get bus address of a BAR (Bjorn Helgaas)
    - Use pci_resource_start() for CPU address of AGP BARs (Bjorn Helgaas)
    - Enforce bus address limits in resource allocation (Yinghai Lu)
    - Allocate 64-bit BARs above 4G when possible (Yinghai Lu)
    - Convert pcibios_resource_to_bus() to take pci_bus, not pci_dev (Yinghai Lu)

  PCI device hotplug
    - Major rescan/remove locking update (Rafael J. Wysocki)
    - Make ioapic builtin only (not modular) (Yinghai Lu)
    - Fix release/free issues (Yinghai Lu)
    - Clean up pciehp (Bjorn Helgaas)
    - Announce pciehp slot info during enumeration (Bjorn Helgaas)

  MSI
    - Add pci_msi_vec_count(), pci_msix_vec_count() (Alexander Gordeev)
    - Add pci_enable_msi_range(), pci_enable_msix_range() (Alexander Gordeev)
    - Deprecate "tri-state" interfaces: fail/success/fail+info (Alexander Gordeev)
    - Export MSI mode using attributes, not kobjects (Greg Kroah-Hartman)
    - Drop "irq" param from *_restore_msi_irqs() (DuanZhenzhong)

  SR-IOV
    - Clear NumVFs when disabling SR-IOV in sriov_init() (ethan.zhao)

  Virtualization
    - Add support for save/restore of extended capabilities (Alex Williamson)
    - Add Virtual Channel to save/restore support (Alex Williamson)
    - Never treat a VF as a multifunction device (Alex Williamson)
    - Add pci_try_reset_function(), et al (Alex Williamson)

  AER
    - Ignore non-PCIe error sources (Betty Dall)
    - Support ACPI HEST error sources for domains other than 0 (Betty Dall)
    - Consolidate HEST error source parsers (Bjorn Helgaas)
    - Add a TLP header print helper (Borislav Petkov)

  Freescale i.MX6
    - Remove unnecessary code (Fabio Estevam)
    - Make reset-gpio optional (Marek Vasut)
    - Report "link up" only after link training completes (Marek Vasut)
    - Start link in Gen1 before negotiating for Gen2 mode (Marek Vasut)
    - Fix PCIe startup code (Richard Zhu)

  Marvell MVEBU
    - Remove duplicate of_clk_get_by_name() call (Andrew Lunn)
    - Drop writes to bridge Secondary Status register (Jason Gunthorpe)
    - Obey bridge PCI_COMMAND_MEM and PCI_COMMAND_IO bits (Jason Gunthorpe)
    - Support a bridge with no IO port window (Jason Gunthorpe)
    - Use max_t() instead of max(resource_size_t,) (Jingoo Han)
    - Remove redundant of_match_ptr (Sachin Kamat)
    - Call pci_ioremap_io() at startup instead of dynamically (Thomas Petazzoni)

  NVIDIA Tegra
    - Disable Gen2 for Tegra20 and Tegra30 (Eric Brower)

  Renesas R-Car
    - Add runtime PM support (Valentine Barshak)
    - Fix rcar_pci_probe() return value check (Wei Yongjun)

  Synopsys DesignWare
    - Fix crash in dw_msi_teardown_irq() (Bjørn Erik Nilsen)
    - Remove redundant call to pci_write_config_word() (Bjørn Erik Nilsen)
    - Fix missing MSI IRQs (Harro Haan)
    - Add dw_pcie prefix before cfg_read/write (Pratyush Anand)
    - Fix I/O transfers by using CPU (not realio) address (Pratyush Anand)
    - Whitespace cleanup (Jingoo Han)

  EISA
    - Call put_device() if device_register() fails (Levente Kurusa)
    - Revert EISA initialization breakage (Bjorn Helgaas)

  Miscellaneous
    - Remove unused code, including PCIe 3.0 interfaces (Stephen Hemminger)
    - Prevent bus conflicts while checking for bridge apertures (Bjorn Helgaas)
    - Stop clearing bridge Secondary Status when setting up I/O aperture (Bjorn Helgaas)
    - Use dev_is_pci() to identify PCI devices (Yijing Wang)
    - Deprecate DEFINE_PCI_DEVICE_TABLE (Joe Perches)
    - Update documentation 00-INDEX (Erik Ekman)"

* tag 'pci-v3.14-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (119 commits)
  Revert "EISA: Initialize device before its resources"
  Revert "EISA: Log device resources in dmesg"
  vfio-pci: Use pci "try" reset interface
  PCI: Check parent kobject in pci_destroy_dev()
  xen/pcifront: Use global PCI rescan-remove locking
  powerpc/eeh: Use global PCI rescan-remove locking
  PCI: Fix pci_check_and_unmask_intx() comment typos
  PCI: Add pci_try_reset_function(), pci_try_reset_slot(), pci_try_reset_bus()
  MPT / PCI: Use pci_stop_and_remove_bus_device_locked()
  platform / x86: Use global PCI rescan-remove locking
  PCI: hotplug: Use global PCI rescan-remove locking
  pcmcia: Use global PCI rescan-remove locking
  ACPI / hotplug / PCI: Use global PCI rescan-remove locking
  ACPI / PCI: Use global PCI rescan-remove locking in PCI root hotplug
  PCI: Add global pci_lock_rescan_remove()
  PCI: Cleanup pci.h whitespace
  PCI: Reorder so actual code comes before stubs
  PCI/AER: Support ACPI HEST AER error sources for PCI domains other than 0
  ACPICA: Add helper macros to extract bus/segment numbers from HEST table.
  PCI: Make local functions static
  ...
......@@ -70,18 +70,15 @@ Date: September, 2011
Contact: Neil Horman <nhorman@tuxdriver.com>
Description:
The /sys/devices/.../msi_irqs directory contains a variable set
of sub-directories, with each sub-directory being named after a
corresponding msi irq vector allocated to that device. Each
numbered sub-directory N contains attributes of that irq.
Note that this directory is not created for device drivers which
do not support msi irqs
of files, with each file being named after a corresponding msi
irq vector allocated to that device.
What: /sys/bus/pci/devices/.../msi_irqs/<N>/mode
What: /sys/bus/pci/devices/.../msi_irqs/<N>
Date: September 2011
Contact: Neil Horman <nhorman@tuxdriver.com>
Description:
This attribute indicates the mode that the irq vector named by
the parent directory is in (msi vs. msix)
the file is in (msi vs. msix)
What: /sys/bus/pci/devices/.../remove
Date: January 2009
......
......@@ -2,12 +2,12 @@
- this file
MSI-HOWTO.txt
- the Message Signaled Interrupts (MSI) Driver Guide HOWTO and FAQ.
PCI-DMA-mapping.txt
- info for PCI drivers using DMA portably across all platforms
PCIEBUS-HOWTO.txt
- a guide describing the PCI Express Port Bus driver
pci-error-recovery.txt
- info on PCI error recovery
pci-iov-howto.txt
- the PCI Express I/O Virtualization HOWTO
pci.txt
- info on the PCI subsystem for device driver authors
pcieaer-howto.txt
......
......@@ -82,93 +82,111 @@ Most of the hard work is done for the driver in the PCI layer. It simply
has to request that the PCI layer set up the MSI capability for this
device.
4.2.1 pci_enable_msi
4.2.1 pci_enable_msi_range
int pci_enable_msi(struct pci_dev *dev)
int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
A successful call allocates ONE interrupt to the device, regardless
of how many MSIs the device supports. The device is switched from
pin-based interrupt mode to MSI mode. The dev->irq number is changed
to a new number which represents the message signaled interrupt;
consequently, this function should be called before the driver calls
request_irq(), because an MSI is delivered via a vector that is
different from the vector of a pin-based interrupt.
This function allows a device driver to request any number of MSI
interrupts within the specified range from 'minvec' to 'maxvec'.
4.2.2 pci_enable_msi_block
If this function returns a positive number, it indicates the number of
MSI interrupts that have been successfully allocated. In this case
the device is switched from pin-based interrupt mode to MSI mode and
dev->irq is updated to the lowest of the new interrupts assigned to it.
The other interrupts assigned to the device are in the range dev->irq
to dev->irq + returned value - 1. The device driver can use the returned
number of successfully allocated MSI interrupts to further allocate
and initialize device resources.
int pci_enable_msi_block(struct pci_dev *dev, int count)
If this function returns a negative number, it indicates an error and
the driver should not attempt to request any more MSI interrupts for
this device.
This variation on the above call allows a device driver to request multiple
MSIs. The MSI specification only allows interrupts to be allocated in
powers of two, up to a maximum of 2^5 (32).
This function should be called before the driver calls request_irq(),
because MSI interrupts are delivered via vectors that are different
from the vector of a pin-based interrupt.
If this function returns 0, it has succeeded in allocating at least as many
interrupts as the driver requested (it may have allocated more in order
to satisfy the power-of-two requirement). In this case, the function
enables MSI on this device and updates dev->irq to be the lowest of
the new interrupts assigned to it. The other interrupts assigned to
the device are in the range dev->irq to dev->irq + count - 1.
It is ideal if drivers can cope with a variable number of MSI interrupts;
there are many reasons why the platform may not be able to provide the
exact number that a driver asks for.
If this function returns a negative number, it indicates an error and
the driver should not attempt to request any more MSI interrupts for
this device. If this function returns a positive number, it is
less than 'count' and indicates the number of interrupts that could have
been allocated. In neither case is the irq value updated or the device
switched into MSI mode.
The device driver must decide what action to take if
pci_enable_msi_block() returns a value less than the number requested.
For instance, the driver could still make use of fewer interrupts;
in this case the driver should call pci_enable_msi_block()
again. Note that it is not guaranteed to succeed, even when the
'count' has been reduced to the value returned from a previous call to
pci_enable_msi_block(). This is because there are multiple constraints
on the number of vectors that can be allocated; pci_enable_msi_block()
returns as soon as it finds any constraint that doesn't allow the
call to succeed.
4.2.3 pci_enable_msi_block_auto
int pci_enable_msi_block_auto(struct pci_dev *dev, unsigned int *count)
This variation on pci_enable_msi() call allows a device driver to request
the maximum possible number of MSIs. The MSI specification only allows
interrupts to be allocated in powers of two, up to a maximum of 2^5 (32).
If this function returns a positive number, it indicates that it has
succeeded and the returned value is the number of allocated interrupts. In
this case, the function enables MSI on this device and updates dev->irq to
be the lowest of the new interrupts assigned to it. The other interrupts
assigned to the device are in the range dev->irq to dev->irq + returned
value - 1.
There could be devices that cannot operate with just any number of MSI
interrupts within a range. See chapter 4.3.1.3 for an idea of how to
handle such devices for MSI-X - the same logic applies to MSI.
If this function returns a negative number, it indicates an error and
the driver should not attempt to request any more MSI interrupts for
this device.
4.2.1.1 Maximum possible number of MSI interrupts
The typical usage of MSI interrupts is to allocate as many vectors as
possible, likely up to the limit returned by pci_msi_vec_count() function:
static int foo_driver_enable_msi(struct pci_dev *pdev, int nvec)
{
return pci_enable_msi_range(pdev, 1, nvec);
}
Note that the value of the 'minvec' parameter is 1. As 'minvec' is
inclusive, a value of 0 would be meaningless and could result in an error.
If the device driver needs to know the number of interrupts the device
supports it can pass the pointer count where that number is stored. The
device driver must decide what action to take if pci_enable_msi_block_auto()
succeeds, but returns a value less than the number of interrupts supported.
If the device driver does not need to know the number of interrupts
supported, it can set the pointer count to NULL.
Some devices have a minimum limit on the number of MSI interrupts.
In this case the function could look like this:
4.2.4 pci_disable_msi
static int foo_driver_enable_msi(struct pci_dev *pdev, int nvec)
{
return pci_enable_msi_range(pdev, FOO_DRIVER_MINIMUM_NVEC, nvec);
}
4.2.1.2 Exact number of MSI interrupts
If a driver is unable or unwilling to deal with a variable number of MSI
interrupts it could request a particular number of interrupts by passing
that number to pci_enable_msi_range() function as both 'minvec' and 'maxvec'
parameters:
static int foo_driver_enable_msi(struct pci_dev *pdev, int nvec)
{
return pci_enable_msi_range(pdev, nvec, nvec);
}
4.2.1.3 Single MSI mode
The most common example of the request type described above is
enabling single MSI mode for a device. It can be done by passing
two 1s as 'minvec' and 'maxvec':
static int foo_driver_enable_single_msi(struct pci_dev *pdev)
{
return pci_enable_msi_range(pdev, 1, 1);
}
4.2.2 pci_disable_msi
void pci_disable_msi(struct pci_dev *dev)
This function should be used to undo the effect of pci_enable_msi() or
pci_enable_msi_block() or pci_enable_msi_block_auto(). Calling it restores
dev->irq to the pin-based interrupt number and frees the previously
allocated message signaled interrupt(s). The interrupt may subsequently be
assigned to another device, so drivers should not cache the value of
dev->irq.
This function should be used to undo the effect of pci_enable_msi_range().
Calling it restores dev->irq to the pin-based interrupt number and frees
the previously allocated MSIs. The interrupts may subsequently be assigned
to another device, so drivers should not cache the value of dev->irq.
Before calling this function, a device driver must always call free_irq()
on any interrupt for which it previously called request_irq().
Failure to do so results in a BUG_ON(), leaving the device with
MSI enabled and thus leaking its vector.
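As a minimal sketch of the required ordering, assuming a hypothetical foo
driver that obtained 'nvec' vectors via pci_enable_msi_range() and passed
'adapter' as the dev_id to request_irq():
static void foo_driver_teardown_msi(struct foo_adapter *adapter, int nvec)
{
	int i;
	/* free every vector obtained earlier... */
	for (i = 0; i < nvec; i++)
		free_irq(adapter->pdev->irq + i, adapter);
	/* ...and only then disable MSI */
	pci_disable_msi(adapter->pdev);
}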
4.2.3 pci_msi_vec_count
int pci_msi_vec_count(struct pci_dev *dev)
This function could be used to retrieve the number of MSI vectors the
device requested (via the Multiple Message Capable register). The MSI
specification only allows the returned value to be a power of two,
up to a maximum of 2^5 (32).
If this function returns a negative number, it indicates the device is
not capable of sending MSIs.
If this function returns a positive number, it indicates the maximum
number of MSI interrupt vectors that could be allocated.
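A minimal sketch that ties this together with pci_enable_msi_range() and
the usage in 4.2.1.1, again using a hypothetical foo driver:
static int foo_driver_enable_all_msi(struct pci_dev *pdev)
{
	int nvec;
	/* upper bound advertised by the Multiple Message Capable field */
	nvec = pci_msi_vec_count(pdev);
	if (nvec < 0)
		return nvec;	/* device is not MSI capable */
	return pci_enable_msi_range(pdev, 1, nvec);
}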
4.3 Using MSI-X
The MSI-X capability is much more flexible than the MSI capability.
......@@ -188,26 +206,31 @@ in each element of the array to indicate for which entries the kernel
should assign interrupts; it is invalid to fill in two entries with the
same number.
4.3.1 pci_enable_msix
4.3.1 pci_enable_msix_range
int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nvec)
int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
int minvec, int maxvec)
Calling this function asks the PCI subsystem to allocate 'nvec' MSIs.
Calling this function asks the PCI subsystem to allocate any number of
MSI-X interrupts within the specified range from 'minvec' to 'maxvec'.
The 'entries' argument is a pointer to an array of msix_entry structs
which should be at least 'nvec' entries in size. On success, the
device is switched into MSI-X mode and the function returns 0.
The 'vector' member in each entry is populated with the interrupt number;
which should be at least 'maxvec' entries in size.
On success, the device is switched into MSI-X mode and the function
returns the number of MSI-X interrupts that have been successfully
allocated. In this case the 'vector' member in entries numbered from
0 to the returned value - 1 is populated with the interrupt number;
the driver should then call request_irq() for each 'vector' that it
decides to use. The device driver is responsible for keeping track of the
interrupts assigned to the MSI-X vectors so it can free them again later.
The device driver can use the returned number of successfully allocated MSI-X
interrupts to further allocate and initialize device resources.
If this function returns a negative number, it indicates an error and
the driver should not attempt to allocate any more MSI-X interrupts for
this device. If it returns a positive number, it indicates the maximum
number of interrupt vectors that could have been allocated. See example
below.
this device.
This function, in contrast with pci_enable_msi(), does not adjust
This function, in contrast with pci_enable_msi_range(), does not adjust
dev->irq. The device will not generate interrupts for this interrupt
number once MSI-X is enabled.
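A minimal sketch of the request_irq() step, assuming the hypothetical foo
driver used in the examples below, a hypothetical foo_interrupt() handler,
and 'nvec' interrupts returned by pci_enable_msix_range():
static int foo_driver_request_msix_irqs(struct foo_adapter *adapter, int nvec)
{
	int i, rc;
	for (i = 0; i < nvec; i++) {
		rc = request_irq(adapter->msix_entries[i].vector,
				 foo_interrupt, 0, "foo", adapter);
		if (rc)
			goto undo;
	}
	return 0;
undo:
	/* roll back the vectors already requested */
	while (--i >= 0)
		free_irq(adapter->msix_entries[i].vector, adapter);
	return rc;
}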
......@@ -218,28 +241,103 @@ It is ideal if drivers can cope with a variable number of MSI-X interrupts;
there are many reasons why the platform may not be able to provide the
exact number that a driver asks for.
A request loop to achieve that might look like:
There could be devices that cannot operate with just any number of MSI-X
interrupts within a range. For example, a network adapter might need
four vectors per queue it provides, so the number of MSI-X interrupts
allocated must be a multiple of four. In this case the interface
pci_enable_msix_range() cannot be used alone to request MSI-X interrupts
(since it can allocate any number within the range, with no notion of the
multiple-of-four constraint) and the device driver has to implement custom
logic to request the required number of MSI-X interrupts.
4.3.1.1 Maximum possible number of MSI-X interrupts
The typical usage of MSI-X interrupts is to allocate as many vectors as
possible, likely up to the limit returned by pci_msix_vec_count() function:
static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
{
while (nvec >= FOO_DRIVER_MINIMUM_NVEC) {
rc = pci_enable_msix(adapter->pdev,
adapter->msix_entries, nvec);
if (rc > 0)
nvec = rc;
else
return rc;
return pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
1, nvec);
}
Note that the value of the 'minvec' parameter is 1. As 'minvec' is
inclusive, a value of 0 would be meaningless and could result in an error.
Some devices have a minimum limit on the number of MSI-X interrupts.
In this case the function could look like this:
static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
{
return pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
FOO_DRIVER_MINIMUM_NVEC, nvec);
}
4.3.1.2 Exact number of MSI-X interrupts
If a driver is unable or unwilling to deal with a variable number of MSI-X
interrupts it could request a particular number of interrupts by passing
that number to pci_enable_msix_range() function as both 'minvec' and 'maxvec'
parameters:
static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
{
return pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
nvec, nvec);
}
4.3.1.3 Specific requirements to the number of MSI-X interrupts
As noted above, there could be devices that cannot operate with just any
number of MSI-X interrupts within a range. For example, assume a device
that is only capable of sending a number of MSI-X interrupts which is a
power of two. A routine that enables MSI-X mode for such a device might
look like this:
/*
* Assume 'minvec' and 'maxvec' are non-zero
*/
static int foo_driver_enable_msix(struct foo_adapter *adapter,
int minvec, int maxvec)
{
int rc;
minvec = roundup_pow_of_two(minvec);
maxvec = rounddown_pow_of_two(maxvec);
if (minvec > maxvec)
return -ERANGE;
retry:
rc = pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
maxvec, maxvec);
/*
* -ENOSPC is the only error code allowed to be analyzed
*/
if (rc == -ENOSPC) {
if (maxvec == 1)
return -ENOSPC;
maxvec /= 2;
if (minvec > maxvec)
return -ENOSPC;
goto retry;
}
return -ENOSPC;
return rc;
}
Note how the pci_enable_msix_range() return value is analyzed for a
fallback - any error code other than -ENOSPC indicates a fatal error and
should not be retried.
4.3.2 pci_disable_msix
void pci_disable_msix(struct pci_dev *dev)
This function should be used to undo the effect of pci_enable_msix(). It frees
the previously allocated message signaled interrupts. The interrupts may
This function should be used to undo the effect of pci_enable_msix_range().
It frees the previously allocated MSI-X interrupts. The interrupts may
subsequently be assigned to another device, so drivers should not cache
the value of the 'vector' elements over a call to pci_disable_msix().
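As with MSI, a minimal teardown sketch for the hypothetical foo driver,
freeing each requested vector before disabling MSI-X:
static void foo_driver_teardown_msix(struct foo_adapter *adapter, int nvec)
{
	int i;
	for (i = 0; i < nvec; i++)
		free_irq(adapter->msix_entries[i].vector, adapter);
	pci_disable_msix(adapter->pdev);
}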
......@@ -255,18 +353,32 @@ MSI-X Table. This address is mapped by the PCI subsystem, and should not
be accessed directly by the device driver. If the driver wishes to
mask or unmask an interrupt, it should call disable_irq() / enable_irq().
4.3.4 pci_msix_vec_count
int pci_msix_vec_count(struct pci_dev *dev)
This function could be used to retrieve the number of entries in the
device MSI-X table.
If this function returns a negative number, it indicates the device is
not capable of sending MSI-Xs.
If this function returns a positive number, it indicates the maximum
number of MSI-X interrupt vectors that could be allocated.
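A minimal sketch that uses this value to size the msix_entry array for the
hypothetical foo driver (the kcalloc() allocation and entry numbering are
assumptions for illustration):
static int foo_driver_alloc_msix_entries(struct foo_adapter *adapter)
{
	int i, nvec;
	nvec = pci_msix_vec_count(adapter->pdev);
	if (nvec < 0)
		return nvec;	/* device is not MSI-X capable */
	adapter->msix_entries = kcalloc(nvec, sizeof(struct msix_entry),
					GFP_KERNEL);
	if (!adapter->msix_entries)
		return -ENOMEM;
	/* request one slot in the MSI-X table per entry */
	for (i = 0; i < nvec; i++)
		adapter->msix_entries[i].entry = i;
	return nvec;
}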
4.4 Handling devices implementing both MSI and MSI-X capabilities
If a device implements both MSI and MSI-X capabilities, it can
run in either MSI mode or MSI-X mode, but not both simultaneously.
This is a requirement of the PCI spec, and it is enforced by the
PCI layer. Calling pci_enable_msi() when MSI-X is already enabled or
pci_enable_msix() when MSI is already enabled results in an error.
If a device driver wishes to switch between MSI and MSI-X at runtime,
it must first quiesce the device, then switch it back to pin-interrupt
mode, before calling pci_enable_msi() or pci_enable_msix() and resuming
operation. This is not expected to be a common operation but may be
useful for debugging or testing during development.
PCI layer. Calling pci_enable_msi_range() when MSI-X is already
enabled or pci_enable_msix_range() when MSI is already enabled
results in an error. If a device driver wishes to switch between MSI
and MSI-X at runtime, it must first quiesce the device, then switch
it back to pin-interrupt mode, before calling pci_enable_msi_range()
or pci_enable_msix_range() and resuming operation. This is not expected
to be a common operation but may be useful for debugging or testing
during development.
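A minimal sketch of the usual preference order (MSI-X first, then MSI,
finally pin-based INTx), again using the hypothetical foo driver with an
already initialized msix_entries array:
static int foo_driver_setup_interrupts(struct foo_adapter *adapter, int nvec)
{
	int rc;
	rc = pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
				   1, nvec);
	if (rc > 0)
		return rc;	/* running in MSI-X mode, 'rc' vectors */
	rc = pci_enable_msi_range(adapter->pdev, 1, nvec);
	if (rc > 0)
		return rc;	/* running in MSI mode, 'rc' vectors */
	return 0;		/* fall back to pin-based INTx via dev->irq */
}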
4.5 Considerations when using MSIs
......@@ -381,5 +493,5 @@ or disabled (0). If 0 is found in any of the msi_bus files belonging
to bridges between the PCI root and the device, MSIs are disabled.
It is also worth checking the device driver to see whether it supports MSIs.
For example, it may contain calls to pci_enable_msi(), pci_enable_msix() or
pci_enable_msi_block().
For example, it may contain calls to pci_enable_msi_range() or
pci_enable_msix_range().
......@@ -123,8 +123,10 @@ initialization with a pointer to a structure describing the driver
The ID table is an array of struct pci_device_id entries ending with an
all-zero entry; use of the macro DEFINE_PCI_DEVICE_TABLE is the preferred
method of declaring the table. Each entry consists of:
all-zero entry. Definitions with static const are generally preferred.
Use of the deprecated macro DEFINE_PCI_DEVICE_TABLE should be avoided.
Each entry consists of:
vendor,device Vendor and device ID to match (or PCI_ANY_ID)
......
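A minimal sketch of such a table for a hypothetical foo driver (the device
ID value below is a made-up placeholder):
static const struct pci_device_id foo_pci_table[] = {
	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x1234) },	/* hypothetical device ID */
	{ }	/* terminating all-zero entry */
};
MODULE_DEVICE_TABLE(pci, foo_pci_table);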
......@@ -19,6 +19,8 @@ Required properties:
to define the mapping of the PCIe interface to interrupt
numbers.
- num-lanes: number of lanes to use
Optional properties:
- reset-gpio: gpio pin number of power good signal
Optional properties for fsl,imx6q-pcie
......
......@@ -83,7 +83,7 @@ static int pci_mmap_resource(struct kobject *kobj,
if (iomem_is_exclusive(res->start))
return -EINVAL;
pcibios_resource_to_bus(pdev, &bar, res);
pcibios_resource_to_bus(pdev->bus, &bar, res);
vma->vm_pgoff += bar.start >> (PAGE_SHIFT - (sparse ? 5 : 0));
mmap_type = res->flags & IORESOURCE_MEM ? pci_mmap_mem : pci_mmap_io;
......@@ -139,7 +139,7 @@ static int sparse_mem_mmap_fits(struct pci_dev *pdev, int num)
long dense_offset;
unsigned long sparse_size;
pcibios_resource_to_bus(pdev, &bar, &pdev->resource[num]);
pcibios_resource_to_bus(pdev->bus, &bar, &pdev->resource[num]);
/* All core logic chips have 4G sparse address space, except
CIA which has 16G (see xxx_SPARSE_MEM and xxx_DENSE_MEM
......
......@@ -325,7 +325,7 @@ pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size,
/* Helper for generic DMA-mapping functions. */
static struct pci_dev *alpha_gendev_to_pci(struct device *dev)
{
if (dev && dev->bus == &pci_bus_type)
if (dev && dev_is_pci(dev))
return to_pci_dev(dev);
/* Assume that non-PCI devices asking for DMA are either ISA or EISA,
......
......@@ -257,7 +257,7 @@ static int it8152_needs_bounce(struct device *dev, dma_addr_t dma_addr, size_t s
*/
static int it8152_pci_platform_notify(struct device *dev)
{
if (dev->bus == &pci_bus_type) {
if (dev_is_pci(dev)) {
if (dev->dma_mask)
*dev->dma_mask = (SZ_64M - 1) | PHYS_OFFSET;
dev->coherent_dma_mask = (SZ_64M - 1) | PHYS_OFFSET;
......@@ -268,7 +268,7 @@ static int it8152_pci_platform_notify(struct device *dev)
static int it8152_pci_platform_notify_remove(struct device *dev)
{
if (dev->bus == &pci_bus_type)
if (dev_is_pci(dev))
dmabounce_unregister_dev(dev);
return 0;
......
......@@ -326,7 +326,7 @@ static int ixp4xx_needs_bounce(struct device *dev, dma_addr_t dma_addr, size_t s
*/
static int ixp4xx_pci_platform_notify(struct device *dev)
{
if(dev->bus == &pci_bus_type) {
if (dev_is_pci(dev)) {
*dev->dma_mask = SZ_64M - 1;
dev->coherent_dma_mask = SZ_64M - 1;
dmabounce_register_dev(dev, 2048, 4096, ixp4xx_needs_bounce);
......@@ -336,9 +336,9 @@ static int ixp4xx_pci_platform_notify(struct device *dev)
static int ixp4xx_pci_platform_notify_remove(struct device *dev)
{
if(dev->bus == &pci_bus_type) {
if (dev_is_pci(dev))
dmabounce_unregister_dev(dev);
}
return 0;
}
......
......@@ -255,7 +255,7 @@ static u64 prefetch_spill_page;
#endif
#ifdef CONFIG_PCI
# define GET_IOC(dev) (((dev)->bus == &pci_bus_type) \
# define GET_IOC(dev) ((dev_is_pci(dev)) \
? ((struct ioc *) PCI_CONTROLLER(to_pci_dev(dev))->iommu) : NULL)
#else
# define GET_IOC(dev) NULL
......
......@@ -34,7 +34,7 @@
*/
static int sn_dma_supported(struct device *dev, u64 mask)
{
BUG_ON(dev->bus != &pci_bus_type);
BUG_ON(!dev_is_pci(dev));
if (mask < 0x7fffffff)
return 0;
......@@ -50,7 +50,7 @@ static int sn_dma_supported(struct device *dev, u64 mask)
*/
int sn_dma_set_mask(struct device *dev, u64 dma_mask)
{
BUG_ON(dev->bus != &pci_bus_type);
BUG_ON(!dev_is_pci(dev));
if (!sn_dma_supported(dev, dma_mask))
return 0;
......@@ -85,7 +85,7 @@ static void *sn_dma_alloc_coherent(struct device *dev, size_t size,
struct pci_dev *pdev = to_pci_dev(dev);
struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);
BUG_ON(dev->bus != &pci_bus_type);
BUG_ON(!dev_is_pci(dev));
/*
* Allocate the memory.
......@@ -143,7 +143,7 @@ static void sn_dma_free_coherent(struct device *dev, size_t size, void *cpu_addr
struct pci_dev *pdev = to_pci_dev(dev);
struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);
BUG_ON(dev->bus != &pci_bus_type);
BUG_ON(!dev_is_pci(dev));
provider->dma_unmap(pdev, dma_handle, 0);
free_pages((unsigned long)cpu_addr, get_order(size));
......@@ -187,7 +187,7 @@ static dma_addr_t sn_dma_map_page(struct device *dev, struct page *page,
dmabarr = dma_get_attr(DMA_ATTR_WRITE_BARRIER, attrs);
BUG_ON(dev->bus != &pci_bus_type);
BUG_ON(!dev_is_pci(dev));
phys_addr = __pa(cpu_addr);
if (dmabarr)
......@@ -223,7 +223,7 @@ static void sn_dma_unmap_page(struct device *dev, dma_addr_t dma_addr,
struct pci_dev *pdev = to_pci_dev(dev);
struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);
BUG_ON(dev->bus != &pci_bus_type);
BUG_ON(!dev_is_pci(dev));
provider->dma_unmap(pdev, dma_addr, dir);
}
......@@ -247,7 +247,7 @@ static void sn_dma_unmap_sg(struct device *dev, struct scatterlist *sgl,
struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);
struct scatterlist *sg;
BUG_ON(dev->bus != &pci_bus_type);
BUG_ON(!dev_is_pci(dev));
for_each_sg(sgl, sg, nhwentries, i) {
provider->dma_unmap(pdev, sg->dma_address, dir);
......@@ -284,7 +284,7 @@ static int sn_dma_map_sg(struct device *dev, struct scatterlist *sgl,
dmabarr = dma_get_attr(DMA_ATTR_WRITE_BARRIER, attrs);
BUG_ON(dev->bus != &pci_bus_type);
BUG_ON(!dev_is_pci(dev));
/*
* Setup a DMA address for each entry in the scatterlist.
......@@ -323,26 +323,26 @@ static int sn_dma_map_sg(struct device *dev, struct scatterlist *sgl,
static void sn_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir)
{
BUG_ON(dev->bus != &pci_bus_type);
BUG_ON(!dev_is_pci(dev));
}
static void sn_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
size_t size,
enum dma_data_direction dir)
{
BUG_ON(dev->bus != &pci_bus_type);
BUG_ON(!dev_is_pci(dev));
}
static void sn_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
int nelems, enum dma_data_direction dir)
{
BUG_ON(dev->bus != &pci_bus_type);
BUG_ON(!dev_is_pci(dev));
}
static void sn_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
int nelems, enum dma_data_direction dir)
{
BUG_ON(dev->bus != &pci_bus_type);
BUG_ON(!dev_is_pci(dev));
}
static int sn_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
......
......@@ -282,18 +282,6 @@ find_pa_parent_type(const struct parisc_device *padev, int type)
return NULL;
}
#ifdef CONFIG_PCI
static inline int is_pci_dev(struct device *dev)
{
return dev->bus == &pci_bus_type;
}
#else
static inline int is_pci_dev(struct device *dev)
{
return 0;
}
#endif
/*
* get_node_path fills in @path with the firmware path to the device.
* Note that if @node is a parisc device, we don't fill in the 'mod' field.
......@@ -306,7 +294,7 @@ static void get_node_path(struct device *dev, struct hardware_path *path)
int i = 5;
memset(&path->bc, -1, 6);
if (is_pci_dev(dev)) {
if (dev_is_pci(dev)) {
unsigned int devfn = to_pci_dev(dev)->devfn;
path->mod = PCI_FUNC(devfn);
path->bc[i--] = PCI_SLOT(devfn);
......@@ -314,7 +302,7 @@ static void get_node_path(struct device *dev, struct hardware_path *path)
}
while (dev != &root) {
if (is_pci_dev(dev)) {
if (dev_is_pci(dev)) {
unsigned int devfn = to_pci_dev(dev)->devfn;
path->bc[i--] = PCI_SLOT(devfn) | (PCI_FUNC(devfn)<< 5);
} else if (dev->bus == &parisc_bus_type) {
......@@ -695,7 +683,7 @@ static int check_parent(struct device * dev, void * data)
if (dev->bus == &parisc_bus_type) {
if (match_parisc_device(dev, d->index, d->modpath))
d->dev = dev;
} else if (is_pci_dev(dev)) {
} else if (dev_is_pci(dev)) {
if (match_pci_device(dev, d->index, d->modpath))
d->dev = dev;
} else if (dev->bus == NULL) {
......@@ -753,7 +741,7 @@ struct device *hwpath_to_device(struct hardware_path *modpath)
if (!parent)
return NULL;
}
if (is_pci_dev(parent)) /* pci devices already parse MOD */
if (dev_is_pci(parent)) /* pci devices already parse MOD */
return parent;
else
return parse_tree_node(parent, 6, modpath);
......@@ -772,7 +760,7 @@ void device_to_hwpath(struct device *dev, struct hardware_path *path)
padev = to_parisc_device(dev);
get_node_path(dev->parent, path);
path->mod = padev->hw_path;
} else if (is_pci_dev(dev)) {
} else if (dev_is_pci(dev)) {
get_node_path(dev, path);
}
}
......
......@@ -369,7 +369,9 @@ static void *eeh_rmv_device(void *data, void *userdata)
edev->mode |= EEH_DEV_DISCONNECTED;
(*removed)++;
pci_lock_rescan_remove();
pci_stop_and_remove_bus_device(dev);
pci_unlock_rescan_remove();
return NULL;
}
......@@ -416,10 +418,13 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus)
* into pcibios_add_pci_devices().
*/
eeh_pe_state_mark(pe, EEH_PE_KEEP);
if (bus)
if (bus) {
pci_lock_rescan_remove();
pcibios_remove_pci_devices(bus);
else if (frozen_bus)
pci_unlock_rescan_remove();
} else if (frozen_bus) {
eeh_pe_dev_traverse(pe, eeh_rmv_device, &removed);
}
/* Reset the pci controller. (Asserts RST#; resets config space).
* Reconfigure bridges and devices. Don't try to bring the system
......@@ -429,6 +434,8 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus)
if (rc)
return rc;
pci_lock_rescan_remove();
/* Restore PE */
eeh_ops->configure_bridge(pe);
eeh_pe_restore_bars(pe);
......@@ -462,6 +469,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus)
pe->tstamp = tstamp;
pe->freeze_count = cnt;
pci_unlock_rescan_remove();
return 0;
}
......@@ -618,8 +626,11 @@ static void eeh_handle_normal_event(struct eeh_pe *pe)
eeh_pe_dev_traverse(pe, eeh_report_failure, NULL);
/* Shut down the device drivers for good. */
if (frozen_bus)
if (frozen_bus) {
pci_lock_rescan_remove();
pcibios_remove_pci_devices(frozen_bus);
pci_unlock_rescan_remove();
}
}
static void eeh_handle_special_event(void)
......@@ -692,6 +703,7 @@ static void eeh_handle_special_event(void)
if (rc == 2 || rc == 1)
eeh_handle_normal_event(pe);
else {
pci_lock_rescan_remove();
list_for_each_entry_safe(hose, tmp,
&hose_list, list_node) {
phb_pe = eeh_phb_pe_get(hose);
......@@ -703,6 +715,7 @@ static void eeh_handle_special_event(void)
eeh_pe_dev_traverse(pe, eeh_report_failure, NULL);
pcibios_remove_pci_devices(bus);
}
pci_unlock_rescan_remove();
}
}
......
......@@ -835,7 +835,7 @@ static void pcibios_fixup_resources(struct pci_dev *dev)
* at 0 as unset as well, except if PCI_PROBE_ONLY is also set
* since in that case, we don't want to re-assign anything
*/
pcibios_resource_to_bus(dev, &reg, res);
pcibios_resource_to_bus(dev->bus, &reg, res);
if (pci_has_flag(PCI_REASSIGN_ALL_RSRC) ||
(reg.start == 0 && !pci_has_flag(PCI_PROBE_ONLY))) {
/* Only print message if not re-assigning */
......@@ -886,7 +886,7 @@ static int pcibios_uninitialized_bridge_resource(struct pci_bus *bus,
/* Job is a bit different between memory and IO */
if (res->flags & IORESOURCE_MEM) {
pcibios_resource_to_bus(dev, &region, res);
pcibios_resource_to_bus(dev->bus, &region, res);
/* If the BAR is non-0 then it's probably been initialized */
if (region.start != 0)
......
......@@ -111,7 +111,7 @@ static void of_pci_parse_addrs(struct device_node *node, struct pci_dev *dev)
res->name = pci_name(dev);
region.start = base;
region.end = base + size - 1;
pcibios_bus_to_resource(dev, res, &region);
pcibios_bus_to_resource(dev->bus, res, &region);
}
}
......@@ -280,7 +280,7 @@ void of_scan_pci_bridge(struct pci_dev *dev)
res->flags = flags;
region.start = of_read_number(&ranges[1], 2);
region.end = region.start + size - 1;
pcibios_bus_to_resource(dev, res, &region);
pcibios_bus_to_resource(dev->bus, res, &region);
}
sprintf(bus->name, "PCI Bus %04x:%02x", pci_domain_nr(bus),
bus->number);
......
......@@ -407,8 +407,8 @@ int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
struct msi_msg msg;
int rc;
if (type != PCI_CAP_ID_MSIX && type != PCI_CAP_ID_MSI)
return -EINVAL;
if (type == PCI_CAP_ID_MSI && nvec > 1)
return 1;
msi_vecs = min(nvec, ZPCI_MSI_VEC_MAX);
msi_vecs = min_t(unsigned int, msi_vecs, CONFIG_PCI_NR_MSI);
......
......@@ -392,7 +392,7 @@ static void apb_fake_ranges(struct pci_dev *dev,
res->flags = IORESOURCE_IO;
region.start = (first << 21);
region.end = (last << 21) + ((1 << 21) - 1);
pcibios_bus_to_resource(dev, res, &region);
pcibios_bus_to_resource(dev->bus, res, &region);
pci_read_config_byte(dev, APB_MEM_ADDRESS_MAP, &map);
apb_calc_first_last(map, &first, &last);
......@@ -400,7 +400,7 @@ static void apb_fake_ranges(struct pci_dev *dev,
res->flags = IORESOURCE_MEM;
region.start = (first << 29);
region.end = (last << 29) + ((1 << 29) - 1);
pcibios_bus_to_resource(dev, res, &region);
pcibios_bus_to_resource(dev->bus, res, &region);
}
static void pci_of_scan_bus(struct pci_pbm_info *pbm,
......@@ -491,7 +491,7 @@ static void of_scan_pci_bridge(struct pci_pbm_info *pbm,
res->flags = flags;
region.start = GET_64BIT(ranges, 1);
region.end = region.start + size - 1;
pcibios_bus_to_resource(dev, res, &region);
pcibios_bus_to_resource(dev->bus, res, &region);
}
after_ranges:
sprintf(bus->name, "PCI Bus %04x:%02x", pci_domain_nr(bus),
......
......@@ -104,7 +104,7 @@ extern void pci_iommu_alloc(void);
struct msi_desc;
int native_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
void native_teardown_msi_irq(unsigned int irq);
void native_restore_msi_irqs(struct pci_dev *dev, int irq);
void native_restore_msi_irqs(struct pci_dev *dev);
int setup_msi_irq(struct pci_dev *dev, struct msi_desc *msidesc,
unsigned int irq_base, unsigned int irq_offset);
#else
......@@ -125,7 +125,6 @@ int setup_msi_irq(struct pci_dev *dev, struct msi_desc *msidesc,
/* generic pci stuff */
#include <asm-generic/pci.h>
#define PCIBIOS_MAX_MEM_32 0xffffffff
#ifdef CONFIG_NUMA
/* Returns the node based on pci bus */
......
......@@ -181,7 +181,7 @@ struct x86_msi_ops {
u8 hpet_id);
void (*teardown_msi_irq)(unsigned int irq);
void (*teardown_msi_irqs)(struct pci_dev *dev);
void (*restore_msi_irqs)(struct pci_dev *dev, int irq);
void (*restore_msi_irqs)(struct pci_dev *dev);
int (*setup_hpet_msi)(unsigned int irq, unsigned int id);
u32 (*msi_mask_irq)(struct msi_desc *desc, u32 mask, u32 flag);
u32 (*msix_mask_irq)(struct msi_desc *desc, u32 flag);
......
......@@ -1034,9 +1034,7 @@ static int mp_config_acpi_gsi(struct device *dev, u32 gsi, int trigger,
if (!acpi_ioapic)
return 0;
if (!dev)
return 0;
if (dev->bus != &pci_bus_type)
if (!dev || !dev_is_pci(dev))
return 0;
pdev = to_pci_dev(dev);
......
......@@ -136,9 +136,9 @@ void arch_teardown_msi_irq(unsigned int irq)
x86_msi.teardown_msi_irq(irq);
}
void arch_restore_msi_irqs(struct pci_dev *dev, int irq)
void arch_restore_msi_irqs(struct pci_dev *dev)
{
x86_msi.restore_msi_irqs(dev, irq);
x86_msi.restore_msi_irqs(dev);
}
u32 arch_msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
{
......
......@@ -337,7 +337,7 @@ static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
return ret;
}
static void xen_initdom_restore_msi_irqs(struct pci_dev *dev, int irq)
static void xen_initdom_restore_msi_irqs(struct pci_dev *dev)
{
int ret = 0;
......
......@@ -599,7 +599,9 @@ static int acpi_pci_root_add(struct acpi_device *device,
pci_assign_unassigned_root_bus_resources(root->bus);
}
pci_lock_rescan_remove();
pci_bus_add_devices(root->bus);
pci_unlock_rescan_remove();
return 1;
end:
......@@ -611,6 +613,8 @@ static void acpi_pci_root_remove(struct acpi_device *device)
{
struct acpi_pci_root *root = acpi_driver_data(device);
pci_lock_rescan_remove();
pci_stop_root_bus(root->bus);
device_set_run_wake(root->bus->bridge, false);
......@@ -618,6 +622,8 @@ static void acpi_pci_root_remove(struct acpi_device *device)
pci_remove_root_bus(root->bus);
pci_unlock_rescan_remove();
kfree(root);
}
......
......@@ -1148,26 +1148,40 @@ static inline void ahci_gtf_filter_workaround(struct ata_host *host)
{}
#endif
static int ahci_init_interrupts(struct pci_dev *pdev, struct ahci_host_priv *hpriv)
static int ahci_init_interrupts(struct pci_dev *pdev, unsigned int n_ports,
struct ahci_host_priv *hpriv)
{
int rc;
unsigned int maxvec;
int rc, nvec;
if (!(hpriv->flags & AHCI_HFLAG_NO_MSI)) {
rc = pci_enable_msi_block_auto(pdev, &maxvec);
if (rc > 0) {
if ((rc == maxvec) || (rc == 1))
return rc;
/*
* Assume that advantage of multipe MSIs is negated,
* so fallback to single MSI mode to save resources
*/
pci_disable_msi(pdev);
if (!pci_enable_msi(pdev))
return 1;
}
}
if (hpriv->flags & AHCI_HFLAG_NO_MSI)
goto intx;
rc = pci_msi_vec_count(pdev);
if (rc < 0)
goto intx;
/*
* If number of MSIs is less than number of ports then Sharing Last
* Message mode could be enforced. In this case assume that advantage
* of multipe MSIs is negated and use single MSI mode instead.
*/
if (rc < n_ports)
goto single_msi;
nvec = rc;
rc = pci_enable_msi_block(pdev, nvec);
if (rc)
goto intx;
return nvec;
single_msi:
rc = pci_enable_msi(pdev);
if (rc)
goto intx;
return 1;
intx:
pci_intx(pdev, 1);
return 0;
}
......@@ -1328,10 +1342,6 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
hpriv->mmio = pcim_iomap_table(pdev)[ahci_pci_bar];
n_msis = ahci_init_interrupts(pdev, hpriv);
if (n_msis > 1)
hpriv->flags |= AHCI_HFLAG_MULTI_MSI;
/* save initial config */
ahci_pci_save_initial_config(pdev, hpriv);
......@@ -1386,6 +1396,10 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
*/
n_ports = max(ahci_nr_ports(hpriv->cap), fls(hpriv->port_map));
n_msis = ahci_init_interrupts(pdev, n_ports, hpriv);
if (n_msis > 1)
hpriv->flags |= AHCI_HFLAG_MULTI_MSI;
host = ata_host_alloc_pinfo(&pdev->dev, ppi, n_ports);
if (!host)
return -ENOMEM;
......
......@@ -239,6 +239,7 @@ long compat_agp_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
/* Chipset independent registers (from AGP Spec) */
#define AGP_APBASE 0x10
#define AGP_APERTURE_BAR 0
#define AGPSTAT 0x4
#define AGPCMD 0x8
......
......@@ -85,8 +85,8 @@ static int ali_configure(void)
pci_write_config_dword(agp_bridge->dev, ALI_TLBCTRL, ((temp & 0xffffff00) | 0x00000010));
/* address to map to */
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
#if 0
if (agp_bridge->type == ALI_M1541) {
......
......@@ -11,7 +11,7 @@
#include <linux/slab.h>
#include "agp.h"
#define AMD_MMBASE 0x14
#define AMD_MMBASE_BAR 1
#define AMD_APSIZE 0xac
#define AMD_MODECNTL 0xb0
#define AMD_MODECNTL2 0xb2
......@@ -126,7 +126,6 @@ static int amd_create_gatt_table(struct agp_bridge_data *bridge)
unsigned long __iomem *cur_gatt;
unsigned long addr;
int retval;
u32 temp;
int i;
value = A_SIZE_LVL2(agp_bridge->current_size);
......@@ -149,8 +148,7 @@ static int amd_create_gatt_table(struct agp_bridge_data *bridge)
* used to program the agp master not the cpu
*/
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
addr = pci_bus_address(agp_bridge->dev, AGP_APERTURE_BAR);
agp_bridge->gart_bus_addr = addr;
/* Calculate the agp offset */
......@@ -207,6 +205,7 @@ static int amd_irongate_fetch_size(void)
static int amd_irongate_configure(void)
{
struct aper_size_info_lvl2 *current_size;
phys_addr_t reg;
u32 temp;
u16 enable_reg;
......@@ -214,9 +213,8 @@ static int amd_irongate_configure(void)
if (!amd_irongate_private.registers) {
/* Get the memory mapped registers */
pci_read_config_dword(agp_bridge->dev, AMD_MMBASE, &temp);
temp = (temp & PCI_BASE_ADDRESS_MEM_MASK);
amd_irongate_private.registers = (volatile u8 __iomem *) ioremap(temp, 4096);
reg = pci_resource_start(agp_bridge->dev, AMD_MMBASE_BAR);
amd_irongate_private.registers = (volatile u8 __iomem *) ioremap(reg, 4096);
if (!amd_irongate_private.registers)
return -ENOMEM;
}
......
......@@ -269,7 +269,6 @@ static int agp_aperture_valid(u64 aper, u32 size)
*/
static int fix_northbridge(struct pci_dev *nb, struct pci_dev *agp, u16 cap)
{
u32 aper_low, aper_hi;
u64 aper, nb_aper;
int order = 0;
u32 nb_order, nb_base;
......@@ -295,9 +294,7 @@ static int fix_northbridge(struct pci_dev *nb, struct pci_dev *agp, u16 cap)
apsize |= 0xf00;
order = 7 - hweight16(apsize);
pci_read_config_dword(agp, 0x10, &aper_low);
pci_read_config_dword(agp, 0x14, &aper_hi);
aper = (aper_low & ~((1<<22)-1)) | ((u64)aper_hi << 32);
aper = pci_bus_address(agp, AGP_APERTURE_BAR);
/*
* On some sick chips APSIZE is 0. This means it wants 4G
......
......@@ -12,7 +12,7 @@
#include <asm/agp.h>
#include "agp.h"
#define ATI_GART_MMBASE_ADDR 0x14
#define ATI_GART_MMBASE_BAR 1
#define ATI_RS100_APSIZE 0xac
#define ATI_RS100_IG_AGPMODE 0xb0
#define ATI_RS300_APSIZE 0xf8
......@@ -196,12 +196,12 @@ static void ati_cleanup(void)
static int ati_configure(void)
{
phys_addr_t reg;
u32 temp;
/* Get the memory mapped registers */
pci_read_config_dword(agp_bridge->dev, ATI_GART_MMBASE_ADDR, &temp);
temp = (temp & 0xfffff000);
ati_generic_private.registers = (volatile u8 __iomem *) ioremap(temp, 4096);
reg = pci_resource_start(agp_bridge->dev, ATI_GART_MMBASE_BAR);
ati_generic_private.registers = (volatile u8 __iomem *) ioremap(reg, 4096);
if (!ati_generic_private.registers)
return -ENOMEM;
......@@ -211,18 +211,18 @@ static int ati_configure(void)
else
pci_write_config_dword(agp_bridge->dev, ATI_RS300_IG_AGPMODE, 0x20000);
/* address to map too */
/* address to map to */
/*
pci_read_config_dword(agp_bridge.dev, AGP_APBASE, &temp);
agp_bridge.gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
agp_bridge.gart_bus_addr = pci_bus_address(agp_bridge.dev,
AGP_APERTURE_BAR);
printk(KERN_INFO PFX "IGP320 gart_bus_addr: %x\n", agp_bridge.gart_bus_addr);
*/
writel(0x60000, ati_generic_private.registers+ATI_GART_FEATURE_ID);
readl(ati_generic_private.registers+ATI_GART_FEATURE_ID); /* PCI Posting.*/
/* SIGNALED_SYSTEM_ERROR @ NB_STATUS */
pci_read_config_dword(agp_bridge->dev, 4, &temp);
pci_write_config_dword(agp_bridge->dev, 4, temp | (1<<14));
pci_read_config_dword(agp_bridge->dev, PCI_COMMAND, &temp);
pci_write_config_dword(agp_bridge->dev, PCI_COMMAND, temp | (1<<14));
/* Write out the address of the gatt table */
writel(agp_bridge->gatt_bus_addr, ati_generic_private.registers+ATI_GART_BASE);
......@@ -385,8 +385,7 @@ static int ati_create_gatt_table(struct agp_bridge_data *bridge)
* This is a bus address even on the alpha, b/c its
* used to program the agp master not the cpu
*/
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
addr = pci_bus_address(agp_bridge->dev, AGP_APERTURE_BAR);
agp_bridge->gart_bus_addr = addr;
/* Calculate the agp offset */
......
......@@ -128,7 +128,6 @@ static void efficeon_cleanup(void)
static int efficeon_configure(void)
{
u32 temp;
u16 temp2;
struct aper_size_info_lvl2 *current_size;
......@@ -141,8 +140,8 @@ static int efficeon_configure(void)
current_size->size_value);
/* address to map to */
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
/* agpctrl */
pci_write_config_dword(agp_bridge->dev, INTEL_AGPCTRL, 0x2280);
......
......@@ -1396,8 +1396,8 @@ int agp3_generic_configure(void)
current_size = A_SIZE_16(agp_bridge->current_size);
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
/* set aperture size */
pci_write_config_word(agp_bridge->dev, agp_bridge->capndx+AGPAPSIZE, current_size->size_value);
......
......@@ -118,7 +118,6 @@ static void intel_8xx_cleanup(void)
static int intel_configure(void)
{
u32 temp;
u16 temp2;
struct aper_size_info_16 *current_size;
......@@ -128,8 +127,8 @@ static int intel_configure(void)
pci_write_config_word(agp_bridge->dev, INTEL_APSIZE, current_size->size_value);
/* address to map to */
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
/* attbase - aperture base */
pci_write_config_dword(agp_bridge->dev, INTEL_ATTBASE, agp_bridge->gatt_bus_addr);
......@@ -148,7 +147,7 @@ static int intel_configure(void)
static int intel_815_configure(void)
{
u32 temp, addr;
u32 addr;
u8 temp2;
struct aper_size_info_8 *current_size;
......@@ -167,8 +166,8 @@ static int intel_815_configure(void)
current_size->size_value);
/* address to map to */
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
pci_read_config_dword(agp_bridge->dev, INTEL_ATTBASE, &addr);
addr &= INTEL_815_ATTBASE_MASK;
......@@ -208,7 +207,6 @@ static void intel_820_cleanup(void)
static int intel_820_configure(void)
{
u32 temp;
u8 temp2;
struct aper_size_info_8 *current_size;
......@@ -218,8 +216,8 @@ static int intel_820_configure(void)
pci_write_config_byte(agp_bridge->dev, INTEL_APSIZE, current_size->size_value);
/* address to map to */
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
/* attbase - aperture base */
pci_write_config_dword(agp_bridge->dev, INTEL_ATTBASE, agp_bridge->gatt_bus_addr);
......@@ -239,7 +237,6 @@ static int intel_820_configure(void)
static int intel_840_configure(void)
{
u32 temp;
u16 temp2;
struct aper_size_info_8 *current_size;
......@@ -249,8 +246,8 @@ static int intel_840_configure(void)
pci_write_config_byte(agp_bridge->dev, INTEL_APSIZE, current_size->size_value);
/* address to map to */
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
/* attbase - aperture base */
pci_write_config_dword(agp_bridge->dev, INTEL_ATTBASE, agp_bridge->gatt_bus_addr);
......@@ -268,7 +265,6 @@ static int intel_840_configure(void)
static int intel_845_configure(void)
{
u32 temp;
u8 temp2;
struct aper_size_info_8 *current_size;
......@@ -282,9 +278,9 @@ static int intel_845_configure(void)
agp_bridge->apbase_config);
} else {
/* address to map to */
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
agp_bridge->apbase_config = temp;
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
agp_bridge->apbase_config = agp_bridge->gart_bus_addr;
}
/* attbase - aperture base */
......@@ -303,7 +299,6 @@ static int intel_845_configure(void)
static int intel_850_configure(void)
{
u32 temp;
u16 temp2;
struct aper_size_info_8 *current_size;
......@@ -313,8 +308,8 @@ static int intel_850_configure(void)
pci_write_config_byte(agp_bridge->dev, INTEL_APSIZE, current_size->size_value);
/* address to map to */
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
/* attbase - aperture base */
pci_write_config_dword(agp_bridge->dev, INTEL_ATTBASE, agp_bridge->gatt_bus_addr);
......@@ -332,7 +327,6 @@ static int intel_850_configure(void)
static int intel_860_configure(void)
{
u32 temp;
u16 temp2;
struct aper_size_info_8 *current_size;
......@@ -342,8 +336,8 @@ static int intel_860_configure(void)
pci_write_config_byte(agp_bridge->dev, INTEL_APSIZE, current_size->size_value);
/* address to map to */
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
/* attbase - aperture base */
pci_write_config_dword(agp_bridge->dev, INTEL_ATTBASE, agp_bridge->gatt_bus_addr);
......@@ -361,7 +355,6 @@ static int intel_860_configure(void)
static int intel_830mp_configure(void)
{
u32 temp;
u16 temp2;
struct aper_size_info_8 *current_size;
......@@ -371,8 +364,8 @@ static int intel_830mp_configure(void)
pci_write_config_byte(agp_bridge->dev, INTEL_APSIZE, current_size->size_value);
/* address to map to */
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
/* attbase - aperture base */
pci_write_config_dword(agp_bridge->dev, INTEL_ATTBASE, agp_bridge->gatt_bus_addr);
......@@ -390,7 +383,6 @@ static int intel_830mp_configure(void)
static int intel_7505_configure(void)
{
u32 temp;
u16 temp2;
struct aper_size_info_8 *current_size;
......@@ -400,8 +392,8 @@ static int intel_7505_configure(void)
pci_write_config_byte(agp_bridge->dev, INTEL_APSIZE, current_size->size_value);
/* address to map to */
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
/* attbase - aperture base */
pci_write_config_dword(agp_bridge->dev, INTEL_ATTBASE, agp_bridge->gatt_bus_addr);
......
......@@ -55,8 +55,8 @@
#define INTEL_I860_ERRSTS 0xc8
/* Intel i810 registers */
#define I810_GMADDR 0x10
#define I810_MMADDR 0x14
#define I810_GMADR_BAR 0
#define I810_MMADR_BAR 1
#define I810_PTE_BASE 0x10000
#define I810_PTE_MAIN_UNCACHED 0x00000000
#define I810_PTE_LOCAL 0x00000002
......@@ -113,9 +113,9 @@
#define INTEL_I850_ERRSTS 0xc8
/* intel 915G registers */
#define I915_GMADDR 0x18
#define I915_MMADDR 0x10
#define I915_PTEADDR 0x1C
#define I915_GMADR_BAR 2
#define I915_MMADR_BAR 0
#define I915_PTE_BAR 3
#define I915_GMCH_GMS_STOLEN_48M (0x6 << 4)
#define I915_GMCH_GMS_STOLEN_64M (0x7 << 4)
#define G33_GMCH_GMS_STOLEN_128M (0x8 << 4)
......
......@@ -64,7 +64,7 @@ static struct _intel_private {
struct pci_dev *pcidev; /* device one */
struct pci_dev *bridge_dev;
u8 __iomem *registers;
phys_addr_t gtt_bus_addr;
phys_addr_t gtt_phys_addr;
u32 PGETBL_save;
u32 __iomem *gtt; /* I915G */
bool clear_fake_agp; /* on first access via agp, fill with scratch */
......@@ -172,7 +172,7 @@ static void i8xx_destroy_pages(struct page *page)
#define I810_GTT_ORDER 4
static int i810_setup(void)
{
u32 reg_addr;
phys_addr_t reg_addr;
char *gtt_table;
/* i81x does not preallocate the gtt. It's always 64kb in size. */
......@@ -181,8 +181,7 @@ static int i810_setup(void)
return -ENOMEM;
intel_private.i81x_gtt_table = gtt_table;
pci_read_config_dword(intel_private.pcidev, I810_MMADDR, &reg_addr);
reg_addr &= 0xfff80000;
reg_addr = pci_resource_start(intel_private.pcidev, I810_MMADR_BAR);
intel_private.registers = ioremap(reg_addr, KB(64));
if (!intel_private.registers)
......@@ -191,7 +190,7 @@ static int i810_setup(void)
writel(virt_to_phys(gtt_table) | I810_PGETBL_ENABLED,
intel_private.registers+I810_PGETBL_CTL);
intel_private.gtt_bus_addr = reg_addr + I810_PTE_BASE;
intel_private.gtt_phys_addr = reg_addr + I810_PTE_BASE;
if ((readl(intel_private.registers+I810_DRAM_CTL)
& I810_DRAM_ROW_0) == I810_DRAM_ROW_0_SDRAM) {
......@@ -608,9 +607,8 @@ static bool intel_gtt_can_wc(void)
static int intel_gtt_init(void)
{
u32 gma_addr;
u32 gtt_map_size;
int ret;
int ret, bar;
ret = intel_private.driver->setup();
if (ret != 0)
......@@ -636,10 +634,10 @@ static int intel_gtt_init(void)
intel_private.gtt = NULL;
if (intel_gtt_can_wc())
intel_private.gtt = ioremap_wc(intel_private.gtt_bus_addr,
intel_private.gtt = ioremap_wc(intel_private.gtt_phys_addr,
gtt_map_size);
if (intel_private.gtt == NULL)
intel_private.gtt = ioremap(intel_private.gtt_bus_addr,
intel_private.gtt = ioremap(intel_private.gtt_phys_addr,
gtt_map_size);
if (intel_private.gtt == NULL) {
intel_private.driver->cleanup();
......@@ -660,14 +658,11 @@ static int intel_gtt_init(void)
}
if (INTEL_GTT_GEN <= 2)
pci_read_config_dword(intel_private.pcidev, I810_GMADDR,
&gma_addr);
bar = I810_GMADR_BAR;
else
pci_read_config_dword(intel_private.pcidev, I915_GMADDR,
&gma_addr);
intel_private.gma_bus_addr = (gma_addr & PCI_BASE_ADDRESS_MEM_MASK);
bar = I915_GMADR_BAR;
intel_private.gma_bus_addr = pci_bus_address(intel_private.pcidev, bar);
return 0;
}
......@@ -787,16 +782,15 @@ EXPORT_SYMBOL(intel_enable_gtt);
static int i830_setup(void)
{
u32 reg_addr;
phys_addr_t reg_addr;
pci_read_config_dword(intel_private.pcidev, I810_MMADDR, &reg_addr);
reg_addr &= 0xfff80000;
reg_addr = pci_resource_start(intel_private.pcidev, I810_MMADR_BAR);
intel_private.registers = ioremap(reg_addr, KB(64));
if (!intel_private.registers)
return -ENOMEM;
intel_private.gtt_bus_addr = reg_addr + I810_PTE_BASE;
intel_private.gtt_phys_addr = reg_addr + I810_PTE_BASE;
return 0;
}
......@@ -1108,12 +1102,10 @@ static void i965_write_entry(dma_addr_t addr,
static int i9xx_setup(void)
{
u32 reg_addr, gtt_addr;
phys_addr_t reg_addr;
int size = KB(512);
pci_read_config_dword(intel_private.pcidev, I915_MMADDR, &reg_addr);
reg_addr &= 0xfff80000;
reg_addr = pci_resource_start(intel_private.pcidev, I915_MMADR_BAR);
intel_private.registers = ioremap(reg_addr, size);
if (!intel_private.registers)
......@@ -1121,15 +1113,14 @@ static int i9xx_setup(void)
switch (INTEL_GTT_GEN) {
case 3:
pci_read_config_dword(intel_private.pcidev,
I915_PTEADDR, &gtt_addr);
intel_private.gtt_bus_addr = gtt_addr;
intel_private.gtt_phys_addr =
pci_resource_start(intel_private.pcidev, I915_PTE_BAR);
break;
case 5:
intel_private.gtt_bus_addr = reg_addr + MB(2);
intel_private.gtt_phys_addr = reg_addr + MB(2);
break;
default:
intel_private.gtt_bus_addr = reg_addr + KB(512);
intel_private.gtt_phys_addr = reg_addr + KB(512);
break;
}
......
......@@ -106,6 +106,7 @@ static int nvidia_configure(void)
{
int i, rc, num_dirs;
u32 apbase, aplimit;
phys_addr_t apbase_phys;
struct aper_size_info_8 *current_size;
u32 temp;
......@@ -115,9 +116,8 @@ static int nvidia_configure(void)
pci_write_config_byte(agp_bridge->dev, NVIDIA_0_APSIZE,
current_size->size_value);
/* address to map to */
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &apbase);
apbase &= PCI_BASE_ADDRESS_MEM_MASK;
/* address to map to */
apbase = pci_bus_address(agp_bridge->dev, AGP_APERTURE_BAR);
agp_bridge->gart_bus_addr = apbase;
aplimit = apbase + (current_size->size * 1024 * 1024) - 1;
pci_write_config_dword(nvidia_private.dev_2, NVIDIA_2_APBASE, apbase);
......@@ -153,8 +153,9 @@ static int nvidia_configure(void)
pci_write_config_dword(agp_bridge->dev, NVIDIA_0_APSIZE, temp | 0x100);
/* map aperture */
apbase_phys = pci_resource_start(agp_bridge->dev, AGP_APERTURE_BAR);
nvidia_private.aperture =
(volatile u32 __iomem *) ioremap(apbase, 33 * PAGE_SIZE);
(volatile u32 __iomem *) ioremap(apbase_phys, 33 * PAGE_SIZE);
if (!nvidia_private.aperture)
return -ENOMEM;
......
......@@ -50,13 +50,12 @@ static void sis_tlbflush(struct agp_memory *mem)
static int sis_configure(void)
{
u32 temp;
struct aper_size_info_8 *current_size;
current_size = A_SIZE_8(agp_bridge->current_size);
pci_write_config_byte(agp_bridge->dev, SIS_TLBCNTRL, 0x05);
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
pci_write_config_dword(agp_bridge->dev, SIS_ATTBASE,
agp_bridge->gatt_bus_addr);
pci_write_config_byte(agp_bridge->dev, SIS_APSIZE,
......
......@@ -43,16 +43,15 @@ static int via_fetch_size(void)
static int via_configure(void)
{
u32 temp;
struct aper_size_info_8 *current_size;
current_size = A_SIZE_8(agp_bridge->current_size);
/* aperture size */
pci_write_config_byte(agp_bridge->dev, VIA_APSIZE,
current_size->size_value);
/* address to map too */
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
/* address to map to */
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
/* GART control register */
pci_write_config_dword(agp_bridge->dev, VIA_GARTCTRL, 0x0000000f);
......@@ -132,9 +131,9 @@ static int via_configure_agp3(void)
current_size = A_SIZE_16(agp_bridge->current_size);
/* address to map too */
pci_read_config_dword(agp_bridge->dev, AGP_APBASE, &temp);
agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
/* address to map to */
agp_bridge->gart_bus_addr = pci_bus_address(agp_bridge->dev,
AGP_APERTURE_BAR);
/* attbase - aperture GATT base */
pci_write_config_dword(agp_bridge->dev, VIA_AGP3_ATTBASE,
......
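The AGP hunks drop the same fragile idiom for a second reason: pci_read_config_dword(dev, AGP_APBASE, ...) only sees the low 32 bits of the aperture BAR, so masking with PCI_BASE_ADDRESS_MEM_MASK silently truncates a 64-bit BAR. An illustration of what a raw read would actually have to do (the core does this once at enumeration; drivers should just use pci_resource_start()/pci_bus_address()):

static u64 read_raw_bar(struct pci_dev *pdev, int bar)
{
	int pos = PCI_BASE_ADDRESS_0 + bar * 4;
	u32 lo, hi = 0;

	pci_read_config_dword(pdev, pos, &lo);
	if ((lo & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_MEMORY &&
	    (lo & PCI_BASE_ADDRESS_MEM_TYPE_MASK) == PCI_BASE_ADDRESS_MEM_TYPE_64)
		pci_read_config_dword(pdev, pos + 4, &hi);

	/* Still a bus address, and ignores any firmware/host translation. */
	return ((u64)hi << 32) | (lo & PCI_BASE_ADDRESS_MEM_MASK);
}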
......@@ -232,8 +232,10 @@ static int __init eisa_init_device(struct eisa_root_device *root,
static int __init eisa_register_device(struct eisa_device *edev)
{
int rc = device_register(&edev->dev);
if (rc)
if (rc) {
put_device(&edev->dev);
return rc;
}
rc = device_create_file(&edev->dev, &dev_attr_signature);
if (rc)
......@@ -275,18 +277,19 @@ static int __init eisa_request_resources(struct eisa_root_device *root,
}
if (slot) {
edev->res[i].name = NULL;
edev->res[i].start = SLOT_ADDRESS(root, slot)
+ (i * 0x400);
edev->res[i].end = edev->res[i].start + 0xff;
edev->res[i].flags = IORESOURCE_IO;
} else {
edev->res[i].name = NULL;
edev->res[i].start = SLOT_ADDRESS(root, slot)
+ EISA_VENDOR_ID_OFFSET;
edev->res[i].end = edev->res[i].start + 3;
edev->res[i].flags = IORESOURCE_IO | IORESOURCE_BUSY;
}
dev_printk(KERN_DEBUG, &edev->dev, "%pR\n", &edev->res[i]);
if (request_resource(root->res, &edev->res[i]))
goto failed;
}
......@@ -326,19 +329,20 @@ static int __init eisa_probe(struct eisa_root_device *root)
return -ENOMEM;
}
if (eisa_init_device(root, edev, 0)) {
if (eisa_request_resources(root, edev, 0)) {
dev_warn(root->dev,
"EISA: Cannot allocate resource for mainboard\n");
kfree(edev);
if (!root->force_probe)
return -ENODEV;
return -EBUSY;
goto force_probe;
}
if (eisa_request_resources(root, edev, 0)) {
dev_warn(root->dev,
"EISA: Cannot allocate resource for mainboard\n");
if (eisa_init_device(root, edev, 0)) {
eisa_release_resources(edev);
kfree(edev);
if (!root->force_probe)
return -EBUSY;
return -ENODEV;
goto force_probe;
}
......@@ -361,11 +365,6 @@ static int __init eisa_probe(struct eisa_root_device *root)
continue;
}
if (eisa_init_device(root, edev, i)) {
kfree(edev);
continue;
}
if (eisa_request_resources(root, edev, i)) {
dev_warn(root->dev,
"Cannot allocate resource for EISA slot %d\n",
......@@ -374,6 +373,12 @@ static int __init eisa_probe(struct eisa_root_device *root)
continue;
}
if (eisa_init_device(root, edev, i)) {
eisa_release_resources(edev);
kfree(edev);
continue;
}
if (edev->state == (EISA_CONFIG_ENABLED | EISA_CONFIG_FORCED))
enabled_str = " (forced enabled)";
else if (edev->state == EISA_CONFIG_FORCED)
......
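The EISA changes encode two driver-core rules. First, once device_register() has been called, the device holds a reference even on failure, so the error path must use put_device() (letting the release callback free the object) rather than kfree(). Second, resources are now requested before eisa_init_device() and released if that later step fails, so nothing is left half-registered. A sketch of the first rule in isolation (illustrative, not the EISA code verbatim):

static int register_my_device(struct device *dev)
{
	int rc = device_register(dev);	/* device_initialize() + device_add() */

	if (rc) {
		/* Drop the reference; do NOT kfree() the containing object here. */
		put_device(dev);
		return rc;
	}
	return 0;
}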
......@@ -1265,14 +1265,14 @@ static int ggtt_probe_common(struct drm_device *dev,
size_t gtt_size)
{
struct drm_i915_private *dev_priv = dev->dev_private;
phys_addr_t gtt_bus_addr;
phys_addr_t gtt_phys_addr;
int ret;
/* For Modern GENs the PTEs and register space are split in the BAR */
gtt_bus_addr = pci_resource_start(dev->pdev, 0) +
gtt_phys_addr = pci_resource_start(dev->pdev, 0) +
(pci_resource_len(dev->pdev, 0) / 2);
dev_priv->gtt.gsm = ioremap_wc(gtt_bus_addr, gtt_size);
dev_priv->gtt.gsm = ioremap_wc(gtt_phys_addr, gtt_size);
if (!dev_priv->gtt.gsm) {
DRM_ERROR("Failed to map the gtt page table\n");
return -ENOMEM;
......
......@@ -346,7 +346,7 @@ static int mpt_remove_dead_ioc_func(void *arg)
if ((pdev == NULL))
return -1;
pci_stop_and_remove_bus_device(pdev);
pci_stop_and_remove_bus_device_locked(pdev);
return 0;
}
......
......@@ -105,9 +105,10 @@ config PCI_PASID
If unsure, say N.
config PCI_IOAPIC
tristate "PCI IO-APIC hotplug support" if X86
bool "PCI IO-APIC hotplug support" if X86
depends on PCI
depends on ACPI
depends on X86_IO_APIC
default !X86
config PCI_LABEL
......
......@@ -4,7 +4,7 @@
obj-y += access.o bus.o probe.o host-bridge.o remove.o pci.o \
pci-driver.o search.o pci-sysfs.o rom.o setup-res.o \
irq.o vpd.o setup-bus.o
irq.o vpd.o setup-bus.o vc.o
obj-$(CONFIG_PROC_FS) += proc.o
obj-$(CONFIG_SYSFS) += slot.o
......
......@@ -380,30 +380,6 @@ int pci_vpd_pci22_init(struct pci_dev *dev)
return 0;
}
/**
* pci_vpd_truncate - Set available Vital Product Data size
* @dev: pci device struct
* @size: available memory in bytes
*
* Adjust size of available VPD area.
*/
int pci_vpd_truncate(struct pci_dev *dev, size_t size)
{
if (!dev->vpd)
return -EINVAL;
/* limited by the access method */
if (size > dev->vpd->len)
return -EINVAL;
dev->vpd->len = size;
if (dev->vpd->attr)
dev->vpd->attr->size = size;
return 0;
}
EXPORT_SYMBOL(pci_vpd_truncate);
/**
* pci_cfg_access_lock - Lock PCI config reads/writes
* @dev: pci device struct
......
......@@ -234,27 +234,6 @@ void pci_disable_pri(struct pci_dev *pdev)
}
EXPORT_SYMBOL_GPL(pci_disable_pri);
/**
* pci_pri_enabled - Checks if PRI capability is enabled
* @pdev: PCI device structure
*
* Returns true if PRI is enabled on the device, false otherwise
*/
bool pci_pri_enabled(struct pci_dev *pdev)
{
u16 control;
int pos;
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI);
if (!pos)
return false;
pci_read_config_word(pdev, pos + PCI_PRI_CTRL, &control);
return (control & PCI_PRI_CTRL_ENABLE) ? true : false;
}
EXPORT_SYMBOL_GPL(pci_pri_enabled);
/**
* pci_reset_pri - Resets device's PRI state
* @pdev: PCI device structure
......@@ -282,67 +261,6 @@ int pci_reset_pri(struct pci_dev *pdev)
return 0;
}
EXPORT_SYMBOL_GPL(pci_reset_pri);
/**
* pci_pri_stopped - Checks whether the PRI capability is stopped
* @pdev: PCI device structure
*
* Returns true if the PRI capability on the device is disabled and the
* device has no outstanding PRI requests, false otherwise. The device
* indicates this via the STOPPED bit in the status register of the
* capability.
* The device internal state can be cleared by resetting the PRI state
* with pci_reset_pri(). This can force the capability into the STOPPED
* state.
*/
bool pci_pri_stopped(struct pci_dev *pdev)
{
u16 control, status;
int pos;
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI);
if (!pos)
return true;
pci_read_config_word(pdev, pos + PCI_PRI_CTRL, &control);
pci_read_config_word(pdev, pos + PCI_PRI_STATUS, &status);
if (control & PCI_PRI_CTRL_ENABLE)
return false;
return (status & PCI_PRI_STATUS_STOPPED) ? true : false;
}
EXPORT_SYMBOL_GPL(pci_pri_stopped);
/**
* pci_pri_status - Request PRI status of a device
* @pdev: PCI device structure
*
* Returns negative value on failure, status on success. The status can
* be checked against status-bits. Supported bits are currently:
* PCI_PRI_STATUS_RF: Response failure
* PCI_PRI_STATUS_UPRGI: Unexpected Page Request Group Index
* PCI_PRI_STATUS_STOPPED: PRI has stopped
*/
int pci_pri_status(struct pci_dev *pdev)
{
u16 status, control;
int pos;
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI);
if (!pos)
return -EINVAL;
pci_read_config_word(pdev, pos + PCI_PRI_CTRL, &control);
pci_read_config_word(pdev, pos + PCI_PRI_STATUS, &status);
/* Stopped bit is undefined when enable == 1, so clear it */
if (control & PCI_PRI_CTRL_ENABLE)
status &= ~PCI_PRI_STATUS_STOPPED;
return status;
}
EXPORT_SYMBOL_GPL(pci_pri_status);
#endif /* CONFIG_PCI_PRI */
#ifdef CONFIG_PCI_PASID
......
......@@ -98,41 +98,54 @@ void pci_bus_remove_resources(struct pci_bus *bus)
}
}
/**
* pci_bus_alloc_resource - allocate a resource from a parent bus
* @bus: PCI bus
* @res: resource to allocate
* @size: size of resource to allocate
* @align: alignment of resource to allocate
* @min: minimum /proc/iomem address to allocate
* @type_mask: IORESOURCE_* type flags
* @alignf: resource alignment function
* @alignf_data: data argument for resource alignment function
*
* Given the PCI bus a device resides on, the size, minimum address,
* alignment and type, try to find an acceptable resource allocation
* for a specific device resource.
static struct pci_bus_region pci_32_bit = {0, 0xffffffffULL};
#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
static struct pci_bus_region pci_64_bit = {0,
(dma_addr_t) 0xffffffffffffffffULL};
static struct pci_bus_region pci_high = {(dma_addr_t) 0x100000000ULL,
(dma_addr_t) 0xffffffffffffffffULL};
#endif
/*
* @res contains CPU addresses. Clip it so the corresponding bus addresses
* on @bus are entirely within @region. This is used to control the bus
* addresses of resources we allocate, e.g., we may need a resource that
* can be mapped by a 32-bit BAR.
*/
int
pci_bus_alloc_resource(struct pci_bus *bus, struct resource *res,
static void pci_clip_resource_to_region(struct pci_bus *bus,
struct resource *res,
struct pci_bus_region *region)
{
struct pci_bus_region r;
pcibios_resource_to_bus(bus, &r, res);
if (r.start < region->start)
r.start = region->start;
if (r.end > region->end)
r.end = region->end;
if (r.end < r.start)
res->end = res->start - 1;
else
pcibios_bus_to_resource(bus, res, &r);
}
static int pci_bus_alloc_from_region(struct pci_bus *bus, struct resource *res,
resource_size_t size, resource_size_t align,
resource_size_t min, unsigned int type_mask,
resource_size_t (*alignf)(void *,
const struct resource *,
resource_size_t,
resource_size_t),
void *alignf_data)
void *alignf_data,
struct pci_bus_region *region)
{
int i, ret = -ENOMEM;
struct resource *r;
resource_size_t max = -1;
int i, ret;
struct resource *r, avail;
resource_size_t max;
type_mask |= IORESOURCE_IO | IORESOURCE_MEM;
/* don't allocate too high if the pref mem doesn't support 64bit*/
if (!(res->flags & IORESOURCE_MEM_64))
max = PCIBIOS_MAX_MEM_32;
pci_bus_for_each_resource(bus, r, i) {
if (!r)
continue;
......@@ -147,15 +160,74 @@ pci_bus_alloc_resource(struct pci_bus *bus, struct resource *res,
!(res->flags & IORESOURCE_PREFETCH))
continue;
avail = *r;
pci_clip_resource_to_region(bus, &avail, region);
if (!resource_size(&avail))
continue;
/*
* "min" is typically PCIBIOS_MIN_IO or PCIBIOS_MIN_MEM to
* protect badly documented motherboard resources, but if
* this is an already-configured bridge window, its start
* overrides "min".
*/
if (avail.start)
min = avail.start;
max = avail.end;
/* Ok, try it out.. */
ret = allocate_resource(r, res, size,
r->start ? : min,
max, align,
alignf, alignf_data);
ret = allocate_resource(r, res, size, min, max,
align, alignf, alignf_data);
if (ret == 0)
break;
return 0;
}
return ret;
return -ENOMEM;
}
/**
* pci_bus_alloc_resource - allocate a resource from a parent bus
* @bus: PCI bus
* @res: resource to allocate
* @size: size of resource to allocate
* @align: alignment of resource to allocate
* @min: minimum /proc/iomem address to allocate
* @type_mask: IORESOURCE_* type flags
* @alignf: resource alignment function
* @alignf_data: data argument for resource alignment function
*
* Given the PCI bus a device resides on, the size, minimum address,
* alignment and type, try to find an acceptable resource allocation
* for a specific device resource.
*/
int pci_bus_alloc_resource(struct pci_bus *bus, struct resource *res,
resource_size_t size, resource_size_t align,
resource_size_t min, unsigned int type_mask,
resource_size_t (*alignf)(void *,
const struct resource *,
resource_size_t,
resource_size_t),
void *alignf_data)
{
#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
int rc;
if (res->flags & IORESOURCE_MEM_64) {
rc = pci_bus_alloc_from_region(bus, res, size, align, min,
type_mask, alignf, alignf_data,
&pci_high);
if (rc == 0)
return 0;
return pci_bus_alloc_from_region(bus, res, size, align, min,
type_mask, alignf, alignf_data,
&pci_64_bit);
}
#endif
return pci_bus_alloc_from_region(bus, res, size, align, min,
type_mask, alignf, alignf_data,
&pci_32_bit);
}
void __weak pcibios_resource_survey_bus(struct pci_bus *bus) { }
......@@ -176,6 +248,7 @@ int pci_bus_add_device(struct pci_dev *dev)
*/
pci_fixup_device(pci_fixup_final, dev);
pci_create_sysfs_dev_files(dev);
pci_proc_attach_device(dev);
dev->match_driver = true;
retval = device_attach(&dev->dev);
......
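The reworked allocator in drivers/pci/bus.c converts the candidate window to bus addresses, clips it against a pci_bus_region (pci_32_bit, pci_64_bit, or pci_high for above 4G), and only then calls allocate_resource(); pci_bus_alloc_resource() tries the above-4G region first for IORESOURCE_MEM_64 resources, falls back to the full 64-bit range, and keeps everything else below 4G. A simplified, self-contained sketch of the clipping step (it deliberately ignores the CPU-to-bus offset that pcibios_resource_to_bus()/pcibios_bus_to_resource() handle in the real code):

struct region { u64 start, end; };	/* inclusive bounds, like struct pci_bus_region */

static void clip_to_region(struct region *res, const struct region *limit)
{
	if (res->start < limit->start)
		res->start = limit->start;
	if (res->end > limit->end)
		res->end = limit->end;
	if (res->end < res->start)
		res->end = res->start - 1;	/* empty: end - start + 1 wraps to 0 */
}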
......@@ -9,22 +9,19 @@
#include "pci.h"
static struct pci_bus *find_pci_root_bus(struct pci_dev *dev)
static struct pci_bus *find_pci_root_bus(struct pci_bus *bus)
{
struct pci_bus *bus;
bus = dev->bus;
while (bus->parent)
bus = bus->parent;
return bus;
}
static struct pci_host_bridge *find_pci_host_bridge(struct pci_dev *dev)
static struct pci_host_bridge *find_pci_host_bridge(struct pci_bus *bus)
{
struct pci_bus *bus = find_pci_root_bus(dev);
struct pci_bus *root_bus = find_pci_root_bus(bus);
return to_pci_host_bridge(bus->bridge);
return to_pci_host_bridge(root_bus->bridge);
}
void pci_set_host_bridge_release(struct pci_host_bridge *bridge,
......@@ -40,10 +37,10 @@ static bool resource_contains(struct resource *res1, struct resource *res2)
return res1->start <= res2->start && res1->end >= res2->end;
}
void pcibios_resource_to_bus(struct pci_dev *dev, struct pci_bus_region *region,
void pcibios_resource_to_bus(struct pci_bus *bus, struct pci_bus_region *region,
struct resource *res)
{
struct pci_host_bridge *bridge = find_pci_host_bridge(dev);
struct pci_host_bridge *bridge = find_pci_host_bridge(bus);
struct pci_host_bridge_window *window;
resource_size_t offset = 0;
......@@ -68,10 +65,10 @@ static bool region_contains(struct pci_bus_region *region1,
return region1->start <= region2->start && region1->end >= region2->end;
}
void pcibios_bus_to_resource(struct pci_dev *dev, struct resource *res,
void pcibios_bus_to_resource(struct pci_bus *bus, struct resource *res,
struct pci_bus_region *region)
{
struct pci_host_bridge *bridge = find_pci_host_bridge(dev);
struct pci_host_bridge *bridge = find_pci_host_bridge(bus);
struct pci_host_bridge_window *window;
resource_size_t offset = 0;
......
......@@ -468,7 +468,7 @@ static int exynos_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
int ret;
exynos_pcie_sideband_dbi_r_mode(pp, true);
ret = cfg_read(pp->dbi_base + (where & ~0x3), where, size, val);
ret = dw_pcie_cfg_read(pp->dbi_base + (where & ~0x3), where, size, val);
exynos_pcie_sideband_dbi_r_mode(pp, false);
return ret;
}
......@@ -479,7 +479,8 @@ static int exynos_pcie_wr_own_conf(struct pcie_port *pp, int where, int size,
int ret;
exynos_pcie_sideband_dbi_w_mode(pp, true);
ret = cfg_write(pp->dbi_base + (where & ~0x3), where, size, val);
ret = dw_pcie_cfg_write(pp->dbi_base + (where & ~0x3),
where, size, val);
exynos_pcie_sideband_dbi_w_mode(pp, false);
return ret;
}
......
......@@ -44,10 +44,18 @@ struct imx6_pcie {
void __iomem *mem_base;
};
/* PCIe Root Complex registers (memory-mapped) */
#define PCIE_RC_LCR 0x7c
#define PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN1 0x1
#define PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN2 0x2
#define PCIE_RC_LCR_MAX_LINK_SPEEDS_MASK 0xf
/* PCIe Port Logic registers (memory-mapped) */
#define PL_OFFSET 0x700
#define PCIE_PHY_DEBUG_R0 (PL_OFFSET + 0x28)
#define PCIE_PHY_DEBUG_R1 (PL_OFFSET + 0x2c)
#define PCIE_PHY_DEBUG_R1_XMLH_LINK_IN_TRAINING (1 << 29)
#define PCIE_PHY_DEBUG_R1_XMLH_LINK_UP (1 << 4)
#define PCIE_PHY_CTRL (PL_OFFSET + 0x114)
#define PCIE_PHY_CTRL_DATA_LOC 0
......@@ -59,6 +67,9 @@ struct imx6_pcie {
#define PCIE_PHY_STAT (PL_OFFSET + 0x110)
#define PCIE_PHY_STAT_ACK_LOC 16
#define PCIE_LINK_WIDTH_SPEED_CONTROL 0x80C
#define PORT_LOGIC_SPEED_CHANGE (0x1 << 17)
/* PHY registers (not memory-mapped) */
#define PCIE_PHY_RX_ASIC_OUT 0x100D
......@@ -209,15 +220,9 @@ static int imx6_pcie_assert_core_reset(struct pcie_port *pp)
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
IMX6Q_GPR1_PCIE_TEST_PD, 1 << 18);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX6Q_GPR12_PCIE_CTL_2, 1 << 10);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
IMX6Q_GPR1_PCIE_REF_CLK_EN, 0 << 16);
gpio_set_value(imx6_pcie->reset_gpio, 0);
msleep(100);
gpio_set_value(imx6_pcie->reset_gpio, 1);
return 0;
}
......@@ -261,6 +266,12 @@ static int imx6_pcie_deassert_core_reset(struct pcie_port *pp)
/* allow the clocks to stabilize */
usleep_range(200, 500);
/* Some boards don't have PCIe reset GPIO. */
if (gpio_is_valid(imx6_pcie->reset_gpio)) {
gpio_set_value(imx6_pcie->reset_gpio, 0);
msleep(100);
gpio_set_value(imx6_pcie->reset_gpio, 1);
}
return 0;
err_pcie_axi:
......@@ -299,11 +310,90 @@ static void imx6_pcie_init_phy(struct pcie_port *pp)
IMX6Q_GPR8_TX_SWING_LOW, 127 << 25);
}
static void imx6_pcie_host_init(struct pcie_port *pp)
static int imx6_pcie_wait_for_link(struct pcie_port *pp)
{
int count = 200;
while (!dw_pcie_link_up(pp)) {
usleep_range(100, 1000);
if (--count)
continue;
dev_err(pp->dev, "phy link never came up\n");
dev_dbg(pp->dev, "DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n",
readl(pp->dbi_base + PCIE_PHY_DEBUG_R0),
readl(pp->dbi_base + PCIE_PHY_DEBUG_R1));
return -EINVAL;
}
return 0;
}
static int imx6_pcie_start_link(struct pcie_port *pp)
{
int count = 0;
struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp);
uint32_t tmp;
int ret, count;
/*
* Force Gen1 operation when starting the link. In case the link is
* started in Gen2 mode, there is a possibility the devices on the
* bus will not be detected at all. This happens with PCIe switches.
*/
tmp = readl(pp->dbi_base + PCIE_RC_LCR);
tmp &= ~PCIE_RC_LCR_MAX_LINK_SPEEDS_MASK;
tmp |= PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN1;
writel(tmp, pp->dbi_base + PCIE_RC_LCR);
/* Start LTSSM. */
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX6Q_GPR12_PCIE_CTL_2, 1 << 10);
ret = imx6_pcie_wait_for_link(pp);
if (ret)
return ret;
/* Allow Gen2 mode after the link is up. */
tmp = readl(pp->dbi_base + PCIE_RC_LCR);
tmp &= ~PCIE_RC_LCR_MAX_LINK_SPEEDS_MASK;
tmp |= PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN2;
writel(tmp, pp->dbi_base + PCIE_RC_LCR);
/*
* Start Directed Speed Change so the best possible speed both link
* partners support can be negotiated.
*/
tmp = readl(pp->dbi_base + PCIE_LINK_WIDTH_SPEED_CONTROL);
tmp |= PORT_LOGIC_SPEED_CHANGE;
writel(tmp, pp->dbi_base + PCIE_LINK_WIDTH_SPEED_CONTROL);
count = 200;
while (count--) {
tmp = readl(pp->dbi_base + PCIE_LINK_WIDTH_SPEED_CONTROL);
/* Test if the speed change finished. */
if (!(tmp & PORT_LOGIC_SPEED_CHANGE))
break;
usleep_range(100, 1000);
}
/* Make sure link training is finished as well! */
if (count)
ret = imx6_pcie_wait_for_link(pp);
else
ret = -EINVAL;
if (ret) {
dev_err(pp->dev, "Failed to bring link up!\n");
} else {
tmp = readl(pp->dbi_base + 0x80);
dev_dbg(pp->dev, "Link up, Gen=%i\n", (tmp >> 16) & 0xf);
}
return ret;
}
static void imx6_pcie_host_init(struct pcie_port *pp)
{
imx6_pcie_assert_core_reset(pp);
imx6_pcie_init_phy(pp);
......@@ -312,33 +402,41 @@ static void imx6_pcie_host_init(struct pcie_port *pp)
dw_pcie_setup_rc(pp);
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX6Q_GPR12_PCIE_CTL_2, 1 << 10);
imx6_pcie_start_link(pp);
}
while (!dw_pcie_link_up(pp)) {
usleep_range(100, 1000);
count++;
if (count >= 200) {
dev_err(pp->dev, "phy link never came up\n");
dev_dbg(pp->dev,
"DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n",
readl(pp->dbi_base + PCIE_PHY_DEBUG_R0),
readl(pp->dbi_base + PCIE_PHY_DEBUG_R1));
break;
}
}
static void imx6_pcie_reset_phy(struct pcie_port *pp)
{
uint32_t temp;
pcie_phy_read(pp->dbi_base, PHY_RX_OVRD_IN_LO, &temp);
temp |= (PHY_RX_OVRD_IN_LO_RX_DATA_EN |
PHY_RX_OVRD_IN_LO_RX_PLL_EN);
pcie_phy_write(pp->dbi_base, PHY_RX_OVRD_IN_LO, temp);
usleep_range(2000, 3000);
return;
pcie_phy_read(pp->dbi_base, PHY_RX_OVRD_IN_LO, &temp);
temp &= ~(PHY_RX_OVRD_IN_LO_RX_DATA_EN |
PHY_RX_OVRD_IN_LO_RX_PLL_EN);
pcie_phy_write(pp->dbi_base, PHY_RX_OVRD_IN_LO, temp);
}
static int imx6_pcie_link_up(struct pcie_port *pp)
{
u32 rc, ltssm, rx_valid, temp;
u32 rc, ltssm, rx_valid;
/* link is debug bit 36, debug register 1 starts at bit 32 */
rc = readl(pp->dbi_base + PCIE_PHY_DEBUG_R1) & (0x1 << (36 - 32));
if (rc)
return -EAGAIN;
/*
* Test if the PHY reports that the link is up and also that
* the link training finished. It might happen that the PHY
* reports the link is already up, but the link training bit
* is still set, so make sure to check the training is done
* as well here.
*/
rc = readl(pp->dbi_base + PCIE_PHY_DEBUG_R1);
if ((rc & PCIE_PHY_DEBUG_R1_XMLH_LINK_UP) &&
!(rc & PCIE_PHY_DEBUG_R1_XMLH_LINK_IN_TRAINING))
return 1;
/*
* From L0, initiate MAC entry to gen2 if EP/RC supports gen2.
......@@ -358,21 +456,7 @@ static int imx6_pcie_link_up(struct pcie_port *pp)
dev_err(pp->dev, "transition to gen2 is stuck, reset PHY!\n");
pcie_phy_read(pp->dbi_base,
PHY_RX_OVRD_IN_LO, &temp);
temp |= (PHY_RX_OVRD_IN_LO_RX_DATA_EN
| PHY_RX_OVRD_IN_LO_RX_PLL_EN);
pcie_phy_write(pp->dbi_base,
PHY_RX_OVRD_IN_LO, temp);
usleep_range(2000, 3000);
pcie_phy_read(pp->dbi_base,
PHY_RX_OVRD_IN_LO, &temp);
temp &= ~(PHY_RX_OVRD_IN_LO_RX_DATA_EN
| PHY_RX_OVRD_IN_LO_RX_PLL_EN);
pcie_phy_write(pp->dbi_base,
PHY_RX_OVRD_IN_LO, temp);
imx6_pcie_reset_phy(pp);
return 0;
}
......@@ -426,30 +510,19 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
"imprecise external abort");
dbi_base = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!dbi_base) {
dev_err(&pdev->dev, "dbi_base memory resource not found\n");
return -ENODEV;
}
pp->dbi_base = devm_ioremap_resource(&pdev->dev, dbi_base);
if (IS_ERR(pp->dbi_base)) {
ret = PTR_ERR(pp->dbi_base);
goto err;
}
if (IS_ERR(pp->dbi_base))
return PTR_ERR(pp->dbi_base);
/* Fetch GPIOs */
imx6_pcie->reset_gpio = of_get_named_gpio(np, "reset-gpio", 0);
if (!gpio_is_valid(imx6_pcie->reset_gpio)) {
dev_err(&pdev->dev, "no reset-gpio defined\n");
ret = -ENODEV;
}
ret = devm_gpio_request_one(&pdev->dev,
imx6_pcie->reset_gpio,
GPIOF_OUT_INIT_LOW,
"PCIe reset");
if (ret) {
dev_err(&pdev->dev, "unable to get reset gpio\n");
goto err;
if (gpio_is_valid(imx6_pcie->reset_gpio)) {
ret = devm_gpio_request_one(&pdev->dev, imx6_pcie->reset_gpio,
GPIOF_OUT_INIT_LOW, "PCIe reset");
if (ret) {
dev_err(&pdev->dev, "unable to get reset gpio\n");
return ret;
}
}
imx6_pcie->power_on_gpio = of_get_named_gpio(np, "power-on-gpio", 0);
......@@ -460,7 +533,7 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
"PCIe power enable");
if (ret) {
dev_err(&pdev->dev, "unable to get power-on gpio\n");
goto err;
return ret;
}
}
......@@ -472,7 +545,7 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
"PCIe wake up");
if (ret) {
dev_err(&pdev->dev, "unable to get wake-up gpio\n");
goto err;
return ret;
}
}
......@@ -484,7 +557,7 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
"PCIe disable endpoint");
if (ret) {
dev_err(&pdev->dev, "unable to get disable-ep gpio\n");
goto err;
return ret;
}
}
......@@ -493,32 +566,28 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
if (IS_ERR(imx6_pcie->lvds_gate)) {
dev_err(&pdev->dev,
"lvds_gate clock select missing or invalid\n");
ret = PTR_ERR(imx6_pcie->lvds_gate);
goto err;
return PTR_ERR(imx6_pcie->lvds_gate);
}
imx6_pcie->sata_ref_100m = devm_clk_get(&pdev->dev, "sata_ref_100m");
if (IS_ERR(imx6_pcie->sata_ref_100m)) {
dev_err(&pdev->dev,
"sata_ref_100m clock source missing or invalid\n");
ret = PTR_ERR(imx6_pcie->sata_ref_100m);
goto err;
return PTR_ERR(imx6_pcie->sata_ref_100m);
}
imx6_pcie->pcie_ref_125m = devm_clk_get(&pdev->dev, "pcie_ref_125m");
if (IS_ERR(imx6_pcie->pcie_ref_125m)) {
dev_err(&pdev->dev,
"pcie_ref_125m clock source missing or invalid\n");
ret = PTR_ERR(imx6_pcie->pcie_ref_125m);
goto err;
return PTR_ERR(imx6_pcie->pcie_ref_125m);
}
imx6_pcie->pcie_axi = devm_clk_get(&pdev->dev, "pcie_axi");
if (IS_ERR(imx6_pcie->pcie_axi)) {
dev_err(&pdev->dev,
"pcie_axi clock source missing or invalid\n");
ret = PTR_ERR(imx6_pcie->pcie_axi);
goto err;
return PTR_ERR(imx6_pcie->pcie_axi);
}
/* Grab GPR config register range */
......@@ -526,19 +595,15 @@ static int __init imx6_pcie_probe(struct platform_device *pdev)
syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr");
if (IS_ERR(imx6_pcie->iomuxc_gpr)) {
dev_err(&pdev->dev, "unable to find iomuxc registers\n");
ret = PTR_ERR(imx6_pcie->iomuxc_gpr);
goto err;
return PTR_ERR(imx6_pcie->iomuxc_gpr);
}
ret = imx6_add_pcie_port(pp, pdev);
if (ret < 0)
goto err;
return ret;
platform_set_drvdata(pdev, imx6_pcie);
return 0;
err:
return ret;
}
static const struct of_device_id imx6_pcie_of_match[] = {
......
......@@ -150,6 +150,11 @@ static inline u32 mvebu_readl(struct mvebu_pcie_port *port, u32 reg)
return readl(port->base + reg);
}
static inline bool mvebu_has_ioport(struct mvebu_pcie_port *port)
{
return port->io_target != -1 && port->io_attr != -1;
}
static bool mvebu_pcie_link_up(struct mvebu_pcie_port *port)
{
return !(mvebu_readl(port, PCIE_STAT_OFF) & PCIE_STAT_LINK_DOWN);
......@@ -300,7 +305,8 @@ static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
/* Are the new iobase/iolimit values invalid? */
if (port->bridge.iolimit < port->bridge.iobase ||
port->bridge.iolimitupper < port->bridge.iobaseupper) {
port->bridge.iolimitupper < port->bridge.iobaseupper ||
!(port->bridge.command & PCI_COMMAND_IO)) {
/* If a window was configured, remove it */
if (port->iowin_base) {
......@@ -313,6 +319,12 @@ static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
return;
}
if (!mvebu_has_ioport(port)) {
dev_WARN(&port->pcie->pdev->dev,
"Attempt to set IO when IO is disabled\n");
return;
}
/*
* We read the PCI-to-PCI bridge emulated registers, and
* calculate the base address and size of the address decoding
......@@ -330,14 +342,13 @@ static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
mvebu_mbus_add_window_remap_by_id(port->io_target, port->io_attr,
port->iowin_base, port->iowin_size,
iobase);
pci_ioremap_io(iobase, port->iowin_base);
}
static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port)
{
/* Are the new membase/memlimit values invalid? */
if (port->bridge.memlimit < port->bridge.membase) {
if (port->bridge.memlimit < port->bridge.membase ||
!(port->bridge.command & PCI_COMMAND_MEMORY)) {
/* If a window was configured, remove it */
if (port->memwin_base) {
......@@ -426,9 +437,12 @@ static int mvebu_sw_pci_bridge_read(struct mvebu_pcie_port *port,
break;
case PCI_IO_BASE:
*value = (bridge->secondary_status << 16 |
bridge->iolimit << 8 |
bridge->iobase);
if (!mvebu_has_ioport(port))
*value = bridge->secondary_status << 16;
else
*value = (bridge->secondary_status << 16 |
bridge->iolimit << 8 |
bridge->iobase);
break;
case PCI_MEMORY_BASE:
......@@ -490,8 +504,19 @@ static int mvebu_sw_pci_bridge_write(struct mvebu_pcie_port *port,
switch (where & ~3) {
case PCI_COMMAND:
{
u32 old = bridge->command;
if (!mvebu_has_ioport(port))
value &= ~PCI_COMMAND_IO;
bridge->command = value & 0xffff;
if ((old ^ bridge->command) & PCI_COMMAND_IO)
mvebu_pcie_handle_iobase_change(port);
if ((old ^ bridge->command) & PCI_COMMAND_MEMORY)
mvebu_pcie_handle_membase_change(port);
break;
}
case PCI_BASE_ADDRESS_0 ... PCI_BASE_ADDRESS_1:
bridge->bar[((where & ~3) - PCI_BASE_ADDRESS_0) / 4] = value;
......@@ -505,7 +530,6 @@ static int mvebu_sw_pci_bridge_write(struct mvebu_pcie_port *port,
*/
bridge->iobase = (value & 0xff) | PCI_IO_RANGE_TYPE_32;
bridge->iolimit = ((value >> 8) & 0xff) | PCI_IO_RANGE_TYPE_32;
bridge->secondary_status = value >> 16;
mvebu_pcie_handle_iobase_change(port);
break;
......@@ -656,7 +680,9 @@ static int mvebu_pcie_setup(int nr, struct pci_sys_data *sys)
struct mvebu_pcie *pcie = sys_to_pcie(sys);
int i;
pci_add_resource_offset(&sys->resources, &pcie->realio, sys->io_offset);
if (resource_size(&pcie->realio) != 0)
pci_add_resource_offset(&sys->resources, &pcie->realio,
sys->io_offset);
pci_add_resource_offset(&sys->resources, &pcie->mem, sys->mem_offset);
pci_add_resource(&sys->resources, &pcie->busn);
......@@ -707,9 +733,9 @@ static resource_size_t mvebu_pcie_align_resource(struct pci_dev *dev,
* aligned on their size
*/
if (res->flags & IORESOURCE_IO)
return round_up(start, max((resource_size_t)SZ_64K, size));
return round_up(start, max_t(resource_size_t, SZ_64K, size));
else if (res->flags & IORESOURCE_MEM)
return round_up(start, max((resource_size_t)SZ_1M, size));
return round_up(start, max_t(resource_size_t, SZ_1M, size));
else
return start;
}
......@@ -757,12 +783,17 @@ static void __iomem *mvebu_pcie_map_registers(struct platform_device *pdev,
#define DT_CPUADDR_TO_ATTR(cpuaddr) (((cpuaddr) >> 48) & 0xFF)
static int mvebu_get_tgt_attr(struct device_node *np, int devfn,
unsigned long type, int *tgt, int *attr)
unsigned long type,
unsigned int *tgt,
unsigned int *attr)
{
const int na = 3, ns = 2;
const __be32 *range;
int rlen, nranges, rangesz, pna, i;
*tgt = -1;
*attr = -1;
range = of_get_property(np, "ranges", &rlen);
if (!range)
return -EINVAL;
......@@ -832,16 +863,15 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
}
mvebu_mbus_get_pcie_io_aperture(&pcie->io);
if (resource_size(&pcie->io) == 0) {
dev_err(&pdev->dev, "invalid I/O aperture size\n");
return -EINVAL;
}
pcie->realio.flags = pcie->io.flags;
pcie->realio.start = PCIBIOS_MIN_IO;
pcie->realio.end = min_t(resource_size_t,
IO_SPACE_LIMIT,
resource_size(&pcie->io));
if (resource_size(&pcie->io) != 0) {
pcie->realio.flags = pcie->io.flags;
pcie->realio.start = PCIBIOS_MIN_IO;
pcie->realio.end = min_t(resource_size_t,
IO_SPACE_LIMIT,
resource_size(&pcie->io));
} else
pcie->realio = pcie->io;
/* Get the bus range */
ret = of_pci_parse_bus_range(np, &pcie->busn);
......@@ -900,12 +930,12 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
continue;
}
ret = mvebu_get_tgt_attr(np, port->devfn, IORESOURCE_IO,
&port->io_target, &port->io_attr);
if (ret < 0) {
dev_err(&pdev->dev, "PCIe%d.%d: cannot get tgt/attr for io window\n",
port->port, port->lane);
continue;
if (resource_size(&pcie->io) != 0)
mvebu_get_tgt_attr(np, port->devfn, IORESOURCE_IO,
&port->io_target, &port->io_attr);
else {
port->io_target = -1;
port->io_attr = -1;
}
port->reset_gpio = of_get_named_gpio_flags(child,
......@@ -954,14 +984,6 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
mvebu_pcie_set_local_dev_nr(port, 1);
port->clk = of_clk_get_by_name(child, NULL);
if (IS_ERR(port->clk)) {
dev_err(&pdev->dev, "PCIe%d.%d: cannot get clock\n",
port->port, port->lane);
iounmap(port->base);
continue;
}
port->dn = child;
spin_lock_init(&port->conf_lock);
mvebu_sw_pci_bridge_init(port);
......@@ -969,6 +991,10 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
}
pcie->nports = i;
for (i = 0; i < (IO_SPACE_LIMIT - SZ_64K); i += SZ_64K)
pci_ioremap_io(i, pcie->io.start + i);
mvebu_pcie_msi_enable(pcie);
mvebu_pcie_enable(pcie);
......@@ -988,8 +1014,7 @@ static struct platform_driver mvebu_pcie_driver = {
.driver = {
.owner = THIS_MODULE,
.name = "mvebu-pcie",
.of_match_table =
of_match_ptr(mvebu_pcie_of_match_table),
.of_match_table = mvebu_pcie_of_match_table,
/* driver unloading/unbinding currently not supported */
.suppress_bind_attrs = true,
},
......
......@@ -17,6 +17,7 @@
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
/* AHB-PCI Bridge PCI communication registers */
......@@ -77,6 +78,7 @@
#define RCAR_PCI_NR_CONTROLLERS 3
struct rcar_pci_priv {
struct device *dev;
void __iomem *reg;
struct resource io_res;
struct resource mem_res;
......@@ -169,8 +171,11 @@ static int __init rcar_pci_setup(int nr, struct pci_sys_data *sys)
void __iomem *reg = priv->reg;
u32 val;
pm_runtime_enable(priv->dev);
pm_runtime_get_sync(priv->dev);
val = ioread32(reg + RCAR_PCI_UNIT_REV_REG);
pr_info("PCI: bus%u revision %x\n", sys->busnr, val);
dev_info(priv->dev, "PCI: bus%u revision %x\n", sys->busnr, val);
/* Disable Direct Power Down State and assert reset */
val = ioread32(reg + RCAR_USBCTR_REG) & ~RCAR_USBCTR_DIRPD;
......@@ -276,8 +281,8 @@ static int __init rcar_pci_probe(struct platform_device *pdev)
cfg_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
reg = devm_ioremap_resource(&pdev->dev, cfg_res);
if (!reg)
return -ENODEV;
if (IS_ERR(reg))
return PTR_ERR(reg);
mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!mem_res || !mem_res->start)
......@@ -301,6 +306,7 @@ static int __init rcar_pci_probe(struct platform_device *pdev)
priv->irq = platform_get_irq(pdev, 0);
priv->reg = reg;
priv->dev = &pdev->dev;
return rcar_pci_add_controller(priv);
}
......
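The R-Car probe fix follows the standard devm_ioremap_resource() contract: the helper validates the resource itself, prints its own error message, and never returns NULL; on failure it returns an ERR_PTR(), so callers must test with IS_ERR() and propagate PTR_ERR(). Typical usage looks like this (illustrative):

static int my_probe(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *base;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	base = devm_ioremap_resource(&pdev->dev, res);	/* copes with res == NULL */
	if (IS_ERR(base))
		return PTR_ERR(base);

	/* ... use base ... */
	return 0;
}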
......@@ -805,7 +805,7 @@ static int tegra_pcie_enable_controller(struct tegra_pcie *pcie)
afi_writel(pcie, value, AFI_PCIE_CONFIG);
value = afi_readl(pcie, AFI_FUSE);
value &= ~AFI_FUSE_PCIE_T0_GEN2_DIS;
value |= AFI_FUSE_PCIE_T0_GEN2_DIS;
afi_writel(pcie, value, AFI_FUSE);
/* initialize internal PHY, enable up to 16 PCIE lanes */
......
......@@ -74,7 +74,7 @@ static inline struct pcie_port *sys_to_pcie(struct pci_sys_data *sys)
return sys->private_data;
}
int cfg_read(void __iomem *addr, int where, int size, u32 *val)
int dw_pcie_cfg_read(void __iomem *addr, int where, int size, u32 *val)
{
*val = readl(addr);
......@@ -88,7 +88,7 @@ int cfg_read(void __iomem *addr, int where, int size, u32 *val)
return PCIBIOS_SUCCESSFUL;
}
int cfg_write(void __iomem *addr, int where, int size, u32 val)
int dw_pcie_cfg_write(void __iomem *addr, int where, int size, u32 val)
{
if (size == 4)
writel(val, addr);
......@@ -126,7 +126,8 @@ static int dw_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
if (pp->ops->rd_own_conf)
ret = pp->ops->rd_own_conf(pp, where, size, val);
else
ret = cfg_read(pp->dbi_base + (where & ~0x3), where, size, val);
ret = dw_pcie_cfg_read(pp->dbi_base + (where & ~0x3), where,
size, val);
return ret;
}
......@@ -139,8 +140,8 @@ static int dw_pcie_wr_own_conf(struct pcie_port *pp, int where, int size,
if (pp->ops->wr_own_conf)
ret = pp->ops->wr_own_conf(pp, where, size, val);
else
ret = cfg_write(pp->dbi_base + (where & ~0x3), where, size,
val);
ret = dw_pcie_cfg_write(pp->dbi_base + (where & ~0x3), where,
size, val);
return ret;
}
......@@ -167,11 +168,13 @@ void dw_handle_msi_irq(struct pcie_port *pp)
while ((pos = find_next_bit(&val, 32, pos)) != 32) {
irq = irq_find_mapping(pp->irq_domain,
i * 32 + pos);
dw_pcie_wr_own_conf(pp,
PCIE_MSI_INTR0_STATUS + i * 12,
4, 1 << pos);
generic_handle_irq(irq);
pos++;
}
}
dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12, 4, val);
}
}
......@@ -209,6 +212,23 @@ static int find_valid_pos0(struct pcie_port *pp, int msgvec, int pos, int *pos0)
return 0;
}
static void clear_irq_range(struct pcie_port *pp, unsigned int irq_base,
unsigned int nvec, unsigned int pos)
{
unsigned int i, res, bit, val;
for (i = 0; i < nvec; i++) {
irq_set_msi_desc_off(irq_base, i, NULL);
clear_bit(pos + i, pp->msi_irq_in_use);
/* Disable corresponding interrupt on MSI controller */
res = ((pos + i) / 32) * 12;
bit = (pos + i) % 32;
dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
val &= ~(1 << bit);
dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
}
}
static int assign_irq(int no_irqs, struct msi_desc *desc, int *pos)
{
int res, bit, irq, pos0, pos1, i;
......@@ -242,18 +262,25 @@ static int assign_irq(int no_irqs, struct msi_desc *desc, int *pos)
if (!irq)
goto no_valid_irq;
i = 0;
while (i < no_irqs) {
/*
* irq_create_mapping (called from dw_pcie_host_init) pre-allocates
* descs so there is no need to allocate descs here. We can therefore
* assume that if irq_find_mapping above returns non-zero, then the
* descs are also successfully allocated.
*/
for (i = 0; i < no_irqs; i++) {
if (irq_set_msi_desc_off(irq, i, desc) != 0) {
clear_irq_range(pp, irq, i, pos0);
goto no_valid_irq;
}
set_bit(pos0 + i, pp->msi_irq_in_use);
irq_alloc_descs((irq + i), (irq + i), 1, 0);
irq_set_msi_desc(irq + i, desc);
/*Enable corresponding interrupt in MSI interrupt controller */
res = ((pos0 + i) / 32) * 12;
bit = (pos0 + i) % 32;
dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
val |= 1 << bit;
dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
i++;
}
*pos = pos0;
......@@ -266,7 +293,7 @@ static int assign_irq(int no_irqs, struct msi_desc *desc, int *pos)
static void clear_irq(unsigned int irq)
{
int res, bit, val, pos;
unsigned int pos, nvec;
struct irq_desc *desc;
struct msi_desc *msi;
struct pcie_port *pp;
......@@ -281,18 +308,15 @@ static void clear_irq(unsigned int irq)
return;
}
/* undo what was done in assign_irq */
pos = data->hwirq;
nvec = 1 << msi->msi_attrib.multiple;
irq_free_desc(irq);
clear_bit(pos, pp->msi_irq_in_use);
clear_irq_range(pp, irq, nvec, pos);
/* Disable corresponding interrupt on MSI interrupt controller */
res = (pos / 32) * 12;
bit = pos % 32;
dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
val &= ~(1 << bit);
dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
/* all irqs cleared; reset attributes */
msi->irq = 0;
msi->msi_attrib.multiple = 0;
}
static int dw_msi_setup_irq(struct msi_chip *chip, struct pci_dev *pdev,
......@@ -320,10 +344,10 @@ static int dw_msi_setup_irq(struct msi_chip *chip, struct pci_dev *pdev,
if (irq < 0)
return irq;
msg_ctr &= ~PCI_MSI_FLAGS_QSIZE;
msg_ctr |= msgvec << 4;
pci_write_config_word(pdev, desc->msi_attrib.pos + PCI_MSI_FLAGS,
msg_ctr);
/*
* write_msi_msg() will update PCI_MSI_FLAGS so there is
* no need to explicitly call pci_write_config_word().
*/
desc->msi_attrib.multiple = msgvec;
msg.address_lo = virt_to_phys((void *)pp->msi_data);
......@@ -394,6 +418,7 @@ int __init dw_pcie_host_init(struct pcie_port *pp)
+ global_io_offset);
pp->config.io_size = resource_size(&pp->io);
pp->config.io_bus_addr = range.pci_addr;
pp->io_base = range.cpu_addr;
}
if (restype == IORESOURCE_MEM) {
of_pci_range_to_resource(&range, np, &pp->mem);
......@@ -419,7 +444,6 @@ int __init dw_pcie_host_init(struct pcie_port *pp)
pp->cfg0_base = pp->cfg.start;
pp->cfg1_base = pp->cfg.start + pp->config.cfg0_size;
pp->io_base = pp->io.start;
pp->mem_base = pp->mem.start;
pp->va_cfg0_base = devm_ioremap(pp->dev, pp->cfg0_base,
......@@ -551,11 +575,13 @@ static int dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
if (bus->parent->number == pp->root_bus_nr) {
dw_pcie_prog_viewport_cfg0(pp, busdev);
ret = cfg_read(pp->va_cfg0_base + address, where, size, val);
ret = dw_pcie_cfg_read(pp->va_cfg0_base + address, where, size,
val);
dw_pcie_prog_viewport_mem_outbound(pp);
} else {
dw_pcie_prog_viewport_cfg1(pp, busdev);
ret = cfg_read(pp->va_cfg1_base + address, where, size, val);
ret = dw_pcie_cfg_read(pp->va_cfg1_base + address, where, size,
val);
dw_pcie_prog_viewport_io_outbound(pp);
}
......@@ -574,18 +600,19 @@ static int dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
if (bus->parent->number == pp->root_bus_nr) {
dw_pcie_prog_viewport_cfg0(pp, busdev);
ret = cfg_write(pp->va_cfg0_base + address, where, size, val);
ret = dw_pcie_cfg_write(pp->va_cfg0_base + address, where, size,
val);
dw_pcie_prog_viewport_mem_outbound(pp);
} else {
dw_pcie_prog_viewport_cfg1(pp, busdev);
ret = cfg_write(pp->va_cfg1_base + address, where, size, val);
ret = dw_pcie_cfg_write(pp->va_cfg1_base + address, where, size,
val);
dw_pcie_prog_viewport_io_outbound(pp);
}
return ret;
}
static int dw_pcie_valid_config(struct pcie_port *pp,
struct pci_bus *bus, int dev)
{
......@@ -679,7 +706,7 @@ static int dw_pcie_setup(int nr, struct pci_sys_data *sys)
if (global_io_offset < SZ_1M && pp->config.io_size > 0) {
sys->io_offset = global_io_offset - pp->config.io_bus_addr;
pci_ioremap_io(sys->io_offset, pp->io.start);
pci_ioremap_io(global_io_offset, pp->io_base);
global_io_offset += SZ_64K;
pci_add_resource_offset(&sys->resources, &pp->io,
sys->io_offset);
......
......@@ -66,8 +66,8 @@ struct pcie_host_ops {
void (*host_init)(struct pcie_port *pp);
};
int cfg_read(void __iomem *addr, int where, int size, u32 *val);
int cfg_write(void __iomem *addr, int where, int size, u32 val);
int dw_pcie_cfg_read(void __iomem *addr, int where, int size, u32 *val);
int dw_pcie_cfg_write(void __iomem *addr, int where, int size, u32 val);
void dw_handle_msi_irq(struct pcie_port *pp);
void dw_pcie_msi_init(struct pcie_port *pp);
int dw_pcie_link_up(struct pcie_port *pp);
......
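The DesignWare MSI hunks rely on the controller's register layout: vectors are grouped 32 per bank, and each bank's INTR*_ENABLE/MASK/STATUS registers follow the previous bank's at a 12-byte stride (three 32-bit registers), which is where the (vec / 32) * 12 and vec % 32 arithmetic in clear_irq_range()/assign_irq() comes from. A hypothetical helper spelling the arithmetic out (PCIE_MSI_INTR0_ENABLE is the bank-0 enable register used above):

static void dw_msi_toggle_vector(struct pcie_port *pp, unsigned int vec,
				 bool enable)
{
	unsigned int res = (vec / 32) * 12;	/* byte offset of this vector's bank */
	unsigned int bit = vec % 32;		/* bit within the bank */
	u32 val;

	dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
	if (enable)
		val |= 1 << bit;
	else
		val &= ~(1 << bit);
	dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
}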
......@@ -77,6 +77,8 @@ struct acpiphp_bridge {
/* PCI-to-PCI bridge device */
struct pci_dev *pci_dev;
bool is_going_away;
};
......@@ -150,6 +152,7 @@ struct acpiphp_attention_info
/* slot flags */
#define SLOT_ENABLED (0x00000001)
#define SLOT_IS_GOING_AWAY (0x00000002)
/* function flags */
......@@ -169,7 +172,7 @@ void acpiphp_unregister_hotplug_slot(struct acpiphp_slot *slot);
typedef int (*acpiphp_callback)(struct acpiphp_slot *slot, void *data);
int acpiphp_enable_slot(struct acpiphp_slot *slot);
int acpiphp_disable_and_eject_slot(struct acpiphp_slot *slot);
int acpiphp_disable_slot(struct acpiphp_slot *slot);
u8 acpiphp_get_power_status(struct acpiphp_slot *slot);
u8 acpiphp_get_attention_status(struct acpiphp_slot *slot);
u8 acpiphp_get_latch_status(struct acpiphp_slot *slot);
......
......@@ -156,7 +156,7 @@ static int disable_slot(struct hotplug_slot *hotplug_slot)
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
/* disable the specified slot */
return acpiphp_disable_and_eject_slot(slot->acpi_slot);
return acpiphp_disable_slot(slot->acpi_slot);
}
......
......@@ -432,6 +432,7 @@ static void cleanup_bridge(struct acpiphp_bridge *bridge)
pr_err("failed to remove notify handler\n");
}
}
slot->flags |= SLOT_IS_GOING_AWAY;
if (slot->slot)
acpiphp_unregister_hotplug_slot(slot);
}
......@@ -439,6 +440,8 @@ static void cleanup_bridge(struct acpiphp_bridge *bridge)
mutex_lock(&bridge_mutex);
list_del(&bridge->list);
mutex_unlock(&bridge_mutex);
bridge->is_going_away = true;
}
/**
......@@ -757,6 +760,10 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
{
struct acpiphp_slot *slot;
/* Bail out if the bridge is going away. */
if (bridge->is_going_away)
return;
list_for_each_entry(slot, &bridge->slots, node) {
struct pci_bus *bus = slot->bus;
struct pci_dev *dev, *tmp;
......@@ -827,6 +834,8 @@ void acpiphp_check_host_bridge(acpi_handle handle)
}
}
static int acpiphp_disable_and_eject_slot(struct acpiphp_slot *slot);
static void hotplug_event(acpi_handle handle, u32 type, void *data)
{
struct acpiphp_context *context = data;
......@@ -856,6 +865,9 @@ static void hotplug_event(acpi_handle handle, u32 type, void *data)
} else {
struct acpiphp_slot *slot = func->slot;
if (slot->flags & SLOT_IS_GOING_AWAY)
break;
mutex_lock(&slot->crit_sect);
enable_slot(slot);
mutex_unlock(&slot->crit_sect);
......@@ -871,6 +883,9 @@ static void hotplug_event(acpi_handle handle, u32 type, void *data)
struct acpiphp_slot *slot = func->slot;
int ret;
if (slot->flags & SLOT_IS_GOING_AWAY)
break;
/*
* Check if anything has changed in the slot and rescan
* from the parent if that's the case.
......@@ -900,9 +915,11 @@ static void hotplug_event_work(void *data, u32 type)
acpi_handle handle = context->handle;
acpi_scan_lock_acquire();
pci_lock_rescan_remove();
hotplug_event(handle, type, context);
pci_unlock_rescan_remove();
acpi_scan_lock_release();
acpi_evaluate_hotplug_ost(handle, type, ACPI_OST_SC_SUCCESS, NULL);
put_bridge(context->func.parent);
......@@ -1070,12 +1087,19 @@ void acpiphp_remove_slots(struct pci_bus *bus)
*/
int acpiphp_enable_slot(struct acpiphp_slot *slot)
{
pci_lock_rescan_remove();
if (slot->flags & SLOT_IS_GOING_AWAY)
return -ENODEV;
mutex_lock(&slot->crit_sect);
/* configure all functions */
if (!(slot->flags & SLOT_ENABLED))
enable_slot(slot);
mutex_unlock(&slot->crit_sect);
pci_unlock_rescan_remove();
return 0;
}
......@@ -1083,10 +1107,12 @@ int acpiphp_enable_slot(struct acpiphp_slot *slot)
* acpiphp_disable_and_eject_slot - power off and eject slot
* @slot: ACPI PHP slot
*/
int acpiphp_disable_and_eject_slot(struct acpiphp_slot *slot)
static int acpiphp_disable_and_eject_slot(struct acpiphp_slot *slot)
{
struct acpiphp_func *func;
int retval = 0;
if (slot->flags & SLOT_IS_GOING_AWAY)
return -ENODEV;
mutex_lock(&slot->crit_sect);
......@@ -1104,9 +1130,18 @@ int acpiphp_disable_and_eject_slot(struct acpiphp_slot *slot)
}
mutex_unlock(&slot->crit_sect);
return retval;
return 0;
}
int acpiphp_disable_slot(struct acpiphp_slot *slot)
{
int ret;
pci_lock_rescan_remove();
ret = acpiphp_disable_and_eject_slot(slot);
pci_unlock_rescan_remove();
return ret;
}
/*
* slot enabled: 1
......@@ -1117,7 +1152,6 @@ u8 acpiphp_get_power_status(struct acpiphp_slot *slot)
return (slot->flags & SLOT_ENABLED);
}
/*
* latch open: 1
* latch closed: 0
......@@ -1127,7 +1161,6 @@ u8 acpiphp_get_latch_status(struct acpiphp_slot *slot)
return !(get_slot_status(slot) & ACPI_STA_DEVICE_UI);
}
/*
* adapter presence : 1
* absence : 0
......
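The acpiphp changes above and the cpci, cpqhp, ibmphp, and pciehp hunks below (plus the mpt change earlier) all apply the rule introduced by the rescan/remove locking rework: any path that scans, adds, or removes PCI devices at runtime must hold the rescan/remove lock, either by bracketing the section with pci_lock_rescan_remove()/pci_unlock_rescan_remove() or by calling the pci_stop_and_remove_bus_device_locked() wrapper. The general shape, as an illustrative sketch rather than any one driver's code:

static int hotplug_rescan_slot(struct pci_bus *bus, int devfn)
{
	int num;

	pci_lock_rescan_remove();
	num = pci_scan_slot(bus, devfn);	/* enumerate the functions in the slot */
	if (num)
		pci_bus_add_devices(bus);	/* register what was found with the driver core */
	pci_unlock_rescan_remove();

	return num ? 0 : -ENODEV;
}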
......@@ -254,9 +254,12 @@ int __ref cpci_configure_slot(struct slot *slot)
{
struct pci_dev *dev;
struct pci_bus *parent;
int ret = 0;
dbg("%s - enter", __func__);
pci_lock_rescan_remove();
if (slot->dev == NULL) {
dbg("pci_dev null, finding %02x:%02x:%x",
slot->bus->number, PCI_SLOT(slot->devfn), PCI_FUNC(slot->devfn));
......@@ -277,7 +280,8 @@ int __ref cpci_configure_slot(struct slot *slot)
slot->dev = pci_get_slot(slot->bus, slot->devfn);
if (slot->dev == NULL) {
err("Could not find PCI device for slot %02x", slot->number);
return -ENODEV;
ret = -ENODEV;
goto out;
}
}
parent = slot->dev->bus;
......@@ -294,8 +298,10 @@ int __ref cpci_configure_slot(struct slot *slot)
pci_bus_add_devices(parent);
out:
pci_unlock_rescan_remove();
dbg("%s - exit", __func__);
return 0;
return ret;
}
int cpci_unconfigure_slot(struct slot* slot)
......@@ -308,6 +314,8 @@ int cpci_unconfigure_slot(struct slot* slot)
return -ENODEV;
}
pci_lock_rescan_remove();
list_for_each_entry_safe(dev, temp, &slot->bus->devices, bus_list) {
if (PCI_SLOT(dev->devfn) != PCI_SLOT(slot->devfn))
continue;
......@@ -318,6 +326,8 @@ int cpci_unconfigure_slot(struct slot* slot)
pci_dev_put(slot->dev);
slot->dev = NULL;
pci_unlock_rescan_remove();
dbg("%s - exit", __func__);
return 0;
}
......@@ -86,6 +86,8 @@ int cpqhp_configure_device (struct controller* ctrl, struct pci_func* func)
struct pci_bus *child;
int num;
pci_lock_rescan_remove();
if (func->pci_dev == NULL)
func->pci_dev = pci_get_bus_and_slot(func->bus,PCI_DEVFN(func->device, func->function));
......@@ -100,7 +102,7 @@ int cpqhp_configure_device (struct controller* ctrl, struct pci_func* func)
func->pci_dev = pci_get_bus_and_slot(func->bus, PCI_DEVFN(func->device, func->function));
if (func->pci_dev == NULL) {
dbg("ERROR: pci_dev still null\n");
return 0;
goto out;
}
}
......@@ -113,6 +115,8 @@ int cpqhp_configure_device (struct controller* ctrl, struct pci_func* func)
pci_dev_put(func->pci_dev);
out:
pci_unlock_rescan_remove();
return 0;
}
......@@ -123,6 +127,7 @@ int cpqhp_unconfigure_device(struct pci_func* func)
dbg("%s: bus/dev/func = %x/%x/%x\n", __func__, func->bus, func->device, func->function);
pci_lock_rescan_remove();
for (j=0; j<8 ; j++) {
struct pci_dev* temp = pci_get_bus_and_slot(func->bus, PCI_DEVFN(func->device, j));
if (temp) {
......@@ -130,6 +135,7 @@ int cpqhp_unconfigure_device(struct pci_func* func)
pci_stop_and_remove_bus_device(temp);
}
}
pci_unlock_rescan_remove();
return 0;
}
......
......@@ -718,6 +718,8 @@ static void ibm_unconfigure_device(struct pci_func *func)
func->device, func->function);
debug("func->device << 3 | 0x0 = %x\n", func->device << 3 | 0x0);
pci_lock_rescan_remove();
for (j = 0; j < 0x08; j++) {
temp = pci_get_bus_and_slot(func->busno, (func->device << 3) | j);
if (temp) {
......@@ -725,7 +727,10 @@ static void ibm_unconfigure_device(struct pci_func *func)
pci_dev_put(temp);
}
}
pci_dev_put(func->dev);
pci_unlock_rescan_remove();
}
/*
......@@ -780,6 +785,8 @@ static int ibm_configure_device(struct pci_func *func)
int flag = 0; /* this is to make sure we don't double scan the bus,
for bridged devices primarily */
pci_lock_rescan_remove();
if (!(bus_structure_fixup(func->busno)))
flag = 1;
if (func->dev == NULL)
......@@ -789,7 +796,7 @@ static int ibm_configure_device(struct pci_func *func)
if (func->dev == NULL) {
struct pci_bus *bus = pci_find_bus(0, func->busno);
if (!bus)
return 0;
goto out;
num = pci_scan_slot(bus,
PCI_DEVFN(func->device, func->function));
......@@ -800,7 +807,7 @@ static int ibm_configure_device(struct pci_func *func)
PCI_DEVFN(func->device, func->function));
if (func->dev == NULL) {
err("ERROR... : pci_dev still NULL\n");
return 0;
goto out;
}
}
if (!(flag) && (func->dev->hdr_type == PCI_HEADER_TYPE_BRIDGE)) {
......@@ -810,6 +817,8 @@ static int ibm_configure_device(struct pci_func *func)
pci_bus_add_devices(child);
}
out:
pci_unlock_rescan_remove();
return 0;
}
......
......@@ -43,7 +43,6 @@
extern bool pciehp_poll_mode;
extern int pciehp_poll_time;
extern bool pciehp_debug;
extern bool pciehp_force;
#define dbg(format, arg...) \
do { \
......@@ -140,15 +139,15 @@ struct controller *pcie_init(struct pcie_device *dev);
int pcie_init_notification(struct controller *ctrl);
int pciehp_enable_slot(struct slot *p_slot);
int pciehp_disable_slot(struct slot *p_slot);
int pcie_enable_notification(struct controller *ctrl);
void pcie_enable_notification(struct controller *ctrl);
int pciehp_power_on_slot(struct slot *slot);
int pciehp_power_off_slot(struct slot *slot);
int pciehp_get_power_status(struct slot *slot, u8 *status);
int pciehp_get_attention_status(struct slot *slot, u8 *status);
void pciehp_power_off_slot(struct slot *slot);
void pciehp_get_power_status(struct slot *slot, u8 *status);
void pciehp_get_attention_status(struct slot *slot, u8 *status);
int pciehp_set_attention_status(struct slot *slot, u8 status);
int pciehp_get_latch_status(struct slot *slot, u8 *status);
int pciehp_get_adapter_status(struct slot *slot, u8 *status);
void pciehp_set_attention_status(struct slot *slot, u8 status);
void pciehp_get_latch_status(struct slot *slot, u8 *status);
void pciehp_get_adapter_status(struct slot *slot, u8 *status);
int pciehp_query_power_fault(struct slot *slot);
void pciehp_green_led_on(struct slot *slot);
void pciehp_green_led_off(struct slot *slot);
......
......@@ -41,7 +41,7 @@
bool pciehp_debug;
bool pciehp_poll_mode;
int pciehp_poll_time;
bool pciehp_force;
static bool pciehp_force;
#define DRIVER_VERSION "0.4"
#define DRIVER_AUTHOR "Dan Zink <dan.zink@compaq.com>, Greg Kroah-Hartman <greg@kroah.com>, Dely Sy <dely.l.sy@intel.com>"
......@@ -160,7 +160,8 @@ static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n",
__func__, slot_name(slot));
return pciehp_set_attention_status(slot, status);
pciehp_set_attention_status(slot, status);
return 0;
}
......@@ -192,7 +193,8 @@ static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n",
__func__, slot_name(slot));
return pciehp_get_power_status(slot, value);
pciehp_get_power_status(slot, value);
return 0;
}
static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
......@@ -202,7 +204,8 @@ static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n",
__func__, slot_name(slot));
return pciehp_get_attention_status(slot, value);
pciehp_get_attention_status(slot, value);
return 0;
}
static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
......@@ -212,7 +215,8 @@ static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n",
__func__, slot_name(slot));
return pciehp_get_latch_status(slot, value);
pciehp_get_latch_status(slot, value);
return 0;
}
static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
......@@ -222,7 +226,8 @@ static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n",
__func__, slot_name(slot));
return pciehp_get_adapter_status(slot, value);
pciehp_get_adapter_status(slot, value);
return 0;
}
static int reset_slot(struct hotplug_slot *hotplug_slot, int probe)
......
......@@ -158,11 +158,8 @@ static void set_slot_off(struct controller *ctrl, struct slot * pslot)
{
/* turn off slot, turn on Amber LED, turn off Green LED if supported*/
if (POWER_CTRL(ctrl)) {
if (pciehp_power_off_slot(pslot)) {
ctrl_err(ctrl,
"Issue of Slot Power Off command failed\n");
return;
}
pciehp_power_off_slot(pslot);
/*
* After turning power off, we must wait for at least 1 second
* before taking any action that relies on power having been
......@@ -171,16 +168,8 @@ static void set_slot_off(struct controller *ctrl, struct slot * pslot)
msleep(1000);
}
if (PWR_LED(ctrl))
pciehp_green_led_off(pslot);
if (ATTN_LED(ctrl)) {
if (pciehp_set_attention_status(pslot, 1)) {
ctrl_err(ctrl,
"Issue of Set Attention Led command failed\n");
return;
}
}
pciehp_green_led_off(pslot);
pciehp_set_attention_status(pslot, 1);
}
/**
......@@ -203,8 +192,7 @@ static int board_added(struct slot *p_slot)
return retval;
}
if (PWR_LED(ctrl))
pciehp_green_led_blink(p_slot);
pciehp_green_led_blink(p_slot);
/* Check link training status */
retval = pciehp_check_link_status(ctrl);
......@@ -227,9 +215,7 @@ static int board_added(struct slot *p_slot)
goto err_exit;
}
if (PWR_LED(ctrl))
pciehp_green_led_on(p_slot);
pciehp_green_led_on(p_slot);
return 0;
err_exit:
......@@ -243,7 +229,7 @@ static int board_added(struct slot *p_slot)
*/
static int remove_board(struct slot *p_slot)
{
int retval = 0;
int retval;
struct controller *ctrl = p_slot->ctrl;
retval = pciehp_unconfigure_device(p_slot);
......@@ -251,13 +237,8 @@ static int remove_board(struct slot *p_slot)
return retval;
if (POWER_CTRL(ctrl)) {
/* power off slot */
retval = pciehp_power_off_slot(p_slot);
if (retval) {
ctrl_err(ctrl,
"Issue of Slot Disable command failed\n");
return retval;
}
pciehp_power_off_slot(p_slot);
/*
* After turning power off, we must wait for at least 1 second
* before taking any action that relies on power having been
......@@ -267,9 +248,7 @@ static int remove_board(struct slot *p_slot)
}
/* turn off Green LED */
if (PWR_LED(ctrl))
pciehp_green_led_off(p_slot);
pciehp_green_led_off(p_slot);
return 0;
}
......@@ -305,7 +284,7 @@ static void pciehp_power_thread(struct work_struct *work)
break;
case POWERON_STATE:
mutex_unlock(&p_slot->lock);
if (pciehp_enable_slot(p_slot) && PWR_LED(p_slot->ctrl))
if (pciehp_enable_slot(p_slot))
pciehp_green_led_off(p_slot);
mutex_lock(&p_slot->lock);
p_slot->state = STATIC_STATE;
......@@ -372,11 +351,8 @@ static void handle_button_press_event(struct slot *p_slot)
"press.\n", slot_name(p_slot));
}
/* blink green LED and turn off amber */
if (PWR_LED(ctrl))
pciehp_green_led_blink(p_slot);
if (ATTN_LED(ctrl))
pciehp_set_attention_status(p_slot, 0);
pciehp_green_led_blink(p_slot);
pciehp_set_attention_status(p_slot, 0);
queue_delayed_work(p_slot->wq, &p_slot->work, 5*HZ);
break;
case BLINKINGOFF_STATE:
......@@ -389,14 +365,11 @@ static void handle_button_press_event(struct slot *p_slot)
ctrl_info(ctrl, "Button cancel on Slot(%s)\n", slot_name(p_slot));
cancel_delayed_work(&p_slot->work);
if (p_slot->state == BLINKINGOFF_STATE) {
if (PWR_LED(ctrl))
pciehp_green_led_on(p_slot);
pciehp_green_led_on(p_slot);
} else {
if (PWR_LED(ctrl))
pciehp_green_led_off(p_slot);
pciehp_green_led_off(p_slot);
}
if (ATTN_LED(ctrl))
pciehp_set_attention_status(p_slot, 0);
pciehp_set_attention_status(p_slot, 0);
ctrl_info(ctrl, "PCI slot #%s - action canceled "
"due to button press\n", slot_name(p_slot));
p_slot->state = STATIC_STATE;
......@@ -456,10 +429,8 @@ static void interrupt_event_handler(struct work_struct *work)
case INT_POWER_FAULT:
if (!POWER_CTRL(ctrl))
break;
if (ATTN_LED(ctrl))
pciehp_set_attention_status(p_slot, 1);
if (PWR_LED(ctrl))
pciehp_green_led_off(p_slot);
pciehp_set_attention_status(p_slot, 1);
pciehp_green_led_off(p_slot);
break;
case INT_PRESENCE_ON:
case INT_PRESENCE_OFF:
......@@ -482,14 +453,14 @@ int pciehp_enable_slot(struct slot *p_slot)
int rc;
struct controller *ctrl = p_slot->ctrl;
rc = pciehp_get_adapter_status(p_slot, &getstatus);
if (rc || !getstatus) {
pciehp_get_adapter_status(p_slot, &getstatus);
if (!getstatus) {
ctrl_info(ctrl, "No adapter on slot(%s)\n", slot_name(p_slot));
return -ENODEV;
}
if (MRL_SENS(p_slot->ctrl)) {
rc = pciehp_get_latch_status(p_slot, &getstatus);
if (rc || getstatus) {
pciehp_get_latch_status(p_slot, &getstatus);
if (getstatus) {
ctrl_info(ctrl, "Latch open on slot(%s)\n",
slot_name(p_slot));
return -ENODEV;
......@@ -497,8 +468,8 @@ int pciehp_enable_slot(struct slot *p_slot)
}
if (POWER_CTRL(p_slot->ctrl)) {
rc = pciehp_get_power_status(p_slot, &getstatus);
if (rc || getstatus) {
pciehp_get_power_status(p_slot, &getstatus);
if (getstatus) {
ctrl_info(ctrl, "Already enabled on slot(%s)\n",
slot_name(p_slot));
return -EINVAL;
......@@ -518,15 +489,14 @@ int pciehp_enable_slot(struct slot *p_slot)
int pciehp_disable_slot(struct slot *p_slot)
{
u8 getstatus = 0;
int ret = 0;
struct controller *ctrl = p_slot->ctrl;
if (!p_slot->ctrl)
return 1;
if (!HP_SUPR_RM(p_slot->ctrl)) {
ret = pciehp_get_adapter_status(p_slot, &getstatus);
if (ret || !getstatus) {
pciehp_get_adapter_status(p_slot, &getstatus);
if (!getstatus) {
ctrl_info(ctrl, "No adapter on slot(%s)\n",
slot_name(p_slot));
return -ENODEV;
......@@ -534,8 +504,8 @@ int pciehp_disable_slot(struct slot *p_slot)
}
if (MRL_SENS(p_slot->ctrl)) {
ret = pciehp_get_latch_status(p_slot, &getstatus);
if (ret || getstatus) {
pciehp_get_latch_status(p_slot, &getstatus);
if (getstatus) {
ctrl_info(ctrl, "Latch open on slot(%s)\n",
slot_name(p_slot));
return -ENODEV;
......@@ -543,8 +513,8 @@ int pciehp_disable_slot(struct slot *p_slot)
}
if (POWER_CTRL(p_slot->ctrl)) {
ret = pciehp_get_power_status(p_slot, &getstatus);
if (ret || !getstatus) {
pciehp_get_power_status(p_slot, &getstatus);
if (!getstatus) {
ctrl_info(ctrl, "Already disabled on slot(%s)\n",
slot_name(p_slot));
return -EINVAL;
......
This diff has been collapsed.
......@@ -39,22 +39,26 @@ int pciehp_configure_device(struct slot *p_slot)
struct pci_dev *dev;
struct pci_dev *bridge = p_slot->ctrl->pcie->port;
struct pci_bus *parent = bridge->subordinate;
int num;
int num, ret = 0;
struct controller *ctrl = p_slot->ctrl;
pci_lock_rescan_remove();
dev = pci_get_slot(parent, PCI_DEVFN(0, 0));
if (dev) {
ctrl_err(ctrl, "Device %s already exists "
"at %04x:%02x:00, cannot hot-add\n", pci_name(dev),
pci_domain_nr(parent), parent->number);
pci_dev_put(dev);
return -EINVAL;
ret = -EINVAL;
goto out;
}
num = pci_scan_slot(parent, PCI_DEVFN(0, 0));
if (num == 0) {
ctrl_err(ctrl, "No new device found\n");
return -ENODEV;
ret = -ENODEV;
goto out;
}
list_for_each_entry(dev, &parent->devices, bus_list)
......@@ -73,12 +77,14 @@ int pciehp_configure_device(struct slot *p_slot)
pci_bus_add_devices(parent);
return 0;
out:
pci_unlock_rescan_remove();
return ret;
}
int pciehp_unconfigure_device(struct slot *p_slot)
{
int ret, rc = 0;
int rc = 0;
u8 bctl = 0;
u8 presence = 0;
struct pci_dev *dev, *temp;
......@@ -88,9 +94,9 @@ int pciehp_unconfigure_device(struct slot *p_slot)
ctrl_dbg(ctrl, "%s: domain:bus:dev = %04x:%02x:00\n",
__func__, pci_domain_nr(parent), parent->number);
ret = pciehp_get_adapter_status(p_slot, &presence);
if (ret)
presence = 0;
pciehp_get_adapter_status(p_slot, &presence);
pci_lock_rescan_remove();
/*
* Stopping an SR-IOV PF device removes all the associated VFs,
......@@ -126,5 +132,6 @@ int pciehp_unconfigure_device(struct slot *p_slot)
pci_dev_put(dev);
}
pci_unlock_rescan_remove();
return rc;
}
......@@ -354,10 +354,15 @@ int dlpar_remove_pci_slot(char *drc_name, struct device_node *dn)
{
struct pci_bus *bus;
struct slot *slot;
int ret = 0;
pci_lock_rescan_remove();
bus = pcibios_find_pci_bus(dn);
if (!bus)
return -EINVAL;
if (!bus) {
ret = -EINVAL;
goto out;
}
pr_debug("PCI: Removing PCI slot below EADS bridge %s\n",
bus->self ? pci_name(bus->self) : "<!PHB!>");
......@@ -371,7 +376,8 @@ int dlpar_remove_pci_slot(char *drc_name, struct device_node *dn)
printk(KERN_ERR
"%s: unable to remove hotplug slot %s\n",
__func__, drc_name);
return -EIO;
ret = -EIO;
goto out;
}
}
......@@ -382,7 +388,8 @@ int dlpar_remove_pci_slot(char *drc_name, struct device_node *dn)
if (pcibios_unmap_io_space(bus)) {
printk(KERN_ERR "%s: failed to unmap bus range\n",
__func__);
return -ERANGE;
ret = -ERANGE;
goto out;
}
/* Remove the EADS bridge device itself */
......@@ -390,7 +397,9 @@ int dlpar_remove_pci_slot(char *drc_name, struct device_node *dn)
pr_debug("PCI: Now removing bridge device %s\n", pci_name(bus->self));
pci_stop_and_remove_bus_device(bus->self);
return 0;
out:
pci_unlock_rescan_remove();
return ret;
}
/**
......
......@@ -398,7 +398,9 @@ static int enable_slot(struct hotplug_slot *hotplug_slot)
return retval;
if (state == PRESENT) {
pci_lock_rescan_remove();
pcibios_add_pci_devices(slot->bus);
pci_unlock_rescan_remove();
slot->state = CONFIGURED;
} else if (state == EMPTY) {
slot->state = EMPTY;
......@@ -418,7 +420,9 @@ static int disable_slot(struct hotplug_slot *hotplug_slot)
if (slot->state == NOT_CONFIGURED)
return -EINVAL;
pci_lock_rescan_remove();
pcibios_remove_pci_devices(slot->bus);
pci_unlock_rescan_remove();
vm_unmap_aliases();
slot->state = NOT_CONFIGURED;
......
......@@ -80,7 +80,9 @@ static int enable_slot(struct hotplug_slot *hotplug_slot)
goto out_deconfigure;
pci_scan_slot(slot->zdev->bus, ZPCI_DEVFN);
pci_lock_rescan_remove();
pci_bus_add_devices(slot->zdev->bus);
pci_unlock_rescan_remove();
return rc;
......@@ -98,7 +100,7 @@ static int disable_slot(struct hotplug_slot *hotplug_slot)
return -EIO;
if (slot->zdev->pdev)
pci_stop_and_remove_bus_device(slot->zdev->pdev);
pci_stop_and_remove_bus_device_locked(slot->zdev->pdev);
rc = zpci_disable_device(slot->zdev);
if (rc)
......
......@@ -459,12 +459,15 @@ static int enable_slot(struct hotplug_slot *bss_hotplug_slot)
acpi_scan_lock_release();
}
pci_lock_rescan_remove();
/* Call the driver for the new device */
pci_bus_add_devices(slot->pci_bus);
/* Call the drivers for the new devices subordinate to PPB */
if (new_ppb)
pci_bus_add_devices(new_bus);
pci_unlock_rescan_remove();
mutex_unlock(&sn_hotplug_mutex);
if (rc == 0)
......@@ -540,6 +543,7 @@ static int disable_slot(struct hotplug_slot *bss_hotplug_slot)
acpi_scan_lock_release();
}
pci_lock_rescan_remove();
/* Free the SN resources assigned to the Linux device.*/
list_for_each_entry_safe(dev, temp, &slot->pci_bus->devices, bus_list) {
if (PCI_SLOT(dev->devfn) != slot->device_num + 1)
......@@ -550,6 +554,7 @@ static int disable_slot(struct hotplug_slot *bss_hotplug_slot)
pci_stop_and_remove_bus_device(dev);
pci_dev_put(dev);
}
pci_unlock_rescan_remove();
/* Remove the SSDT for the slot from the ACPI namespace */
if (SN_ACPI_BASE_SUPPORT() && ssdt_id) {
......
......@@ -40,7 +40,9 @@ int __ref shpchp_configure_device(struct slot *p_slot)
struct controller *ctrl = p_slot->ctrl;
struct pci_dev *bridge = ctrl->pci_dev;
struct pci_bus *parent = bridge->subordinate;
int num;
int num, ret = 0;
pci_lock_rescan_remove();
dev = pci_get_slot(parent, PCI_DEVFN(p_slot->device, 0));
if (dev) {
......@@ -48,13 +50,15 @@ int __ref shpchp_configure_device(struct slot *p_slot)
"at %04x:%02x:%02x, cannot hot-add\n", pci_name(dev),
pci_domain_nr(parent), p_slot->bus, p_slot->device);
pci_dev_put(dev);
return -EINVAL;
ret = -EINVAL;
goto out;
}
num = pci_scan_slot(parent, PCI_DEVFN(p_slot->device, 0));
if (num == 0) {
ctrl_err(ctrl, "No new device found\n");
return -ENODEV;
ret = -ENODEV;
goto out;
}
list_for_each_entry(dev, &parent->devices, bus_list) {
......@@ -75,7 +79,9 @@ int __ref shpchp_configure_device(struct slot *p_slot)
pci_bus_add_devices(parent);
return 0;
out:
pci_unlock_rescan_remove();
return ret;
}
int shpchp_unconfigure_device(struct slot *p_slot)
......@@ -89,6 +95,8 @@ int shpchp_unconfigure_device(struct slot *p_slot)
ctrl_dbg(ctrl, "%s: domain:bus:dev = %04x:%02x:%02x\n",
__func__, pci_domain_nr(parent), p_slot->bus, p_slot->device);
pci_lock_rescan_remove();
list_for_each_entry_safe(dev, temp, &parent->devices, bus_list) {
if (PCI_SLOT(dev->devfn) != p_slot->device)
continue;
......@@ -108,6 +116,8 @@ int shpchp_unconfigure_device(struct slot *p_slot)
pci_stop_and_remove_bus_device(dev);
pci_dev_put(dev);
}
pci_unlock_rescan_remove();
return rc;
}
......@@ -113,6 +113,10 @@ static struct pci_driver ioapic_driver = {
.remove = ioapic_remove,
};
module_pci_driver(ioapic_driver);
static int __init ioapic_init(void)
{
return pci_register_driver(&ioapic_driver);
}
module_init(ioapic_init);
MODULE_LICENSE("GPL");
......@@ -84,6 +84,7 @@ static int virtfn_add(struct pci_dev *dev, int id, int reset)
virtfn->dev.parent = dev->dev.parent;
virtfn->physfn = pci_dev_get(dev);
virtfn->is_virtfn = 1;
virtfn->multifunction = 0;
for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
res = dev->resource + PCI_IOV_RESOURCES + i;
......@@ -441,6 +442,7 @@ static int sriov_init(struct pci_dev *dev, int pos)
found:
pci_write_config_word(dev, pos + PCI_SRIOV_CTRL, ctrl);
pci_write_config_word(dev, pos + PCI_SRIOV_NUM_VF, 0);
pci_read_config_word(dev, pos + PCI_SRIOV_VF_OFFSET, &offset);
pci_read_config_word(dev, pos + PCI_SRIOV_VF_STRIDE, &stride);
if (!offset || (total > 1 && !stride))
......
......@@ -116,7 +116,7 @@ void __weak arch_teardown_msi_irqs(struct pci_dev *dev)
return default_teardown_msi_irqs(dev);
}
void default_restore_msi_irqs(struct pci_dev *dev, int irq)
static void default_restore_msi_irq(struct pci_dev *dev, int irq)
{
struct msi_desc *entry;
......@@ -134,9 +134,9 @@ void default_restore_msi_irqs(struct pci_dev *dev, int irq)
write_msi_msg(irq, &entry->msg);
}
void __weak arch_restore_msi_irqs(struct pci_dev *dev, int irq)
void __weak arch_restore_msi_irqs(struct pci_dev *dev)
{
return default_restore_msi_irqs(dev, irq);
return default_restore_msi_irqs(dev);
}
static void msi_set_enable(struct pci_dev *dev, int enable)
......@@ -262,6 +262,15 @@ void unmask_msi_irq(struct irq_data *data)
msi_set_mask_bit(data, 0);
}
void default_restore_msi_irqs(struct pci_dev *dev)
{
struct msi_desc *entry;
list_for_each_entry(entry, &dev->msi_list, list) {
default_restore_msi_irq(dev, entry->irq);
}
}
void __read_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
{
BUG_ON(entry->dev->current_state != PCI_D0);
......@@ -363,6 +372,9 @@ void write_msi_msg(unsigned int irq, struct msi_msg *msg)
static void free_msi_irqs(struct pci_dev *dev)
{
struct msi_desc *entry, *tmp;
struct attribute **msi_attrs;
struct device_attribute *dev_attr;
int count = 0;
list_for_each_entry(entry, &dev->msi_list, list) {
int i, nvec;
......@@ -398,6 +410,22 @@ static void free_msi_irqs(struct pci_dev *dev)
list_del(&entry->list);
kfree(entry);
}
if (dev->msi_irq_groups) {
sysfs_remove_groups(&dev->dev.kobj, dev->msi_irq_groups);
msi_attrs = dev->msi_irq_groups[0]->attrs;
list_for_each_entry(entry, &dev->msi_list, list) {
dev_attr = container_of(msi_attrs[count],
struct device_attribute, attr);
kfree(dev_attr->attr.name);
kfree(dev_attr);
++count;
}
kfree(msi_attrs);
kfree(dev->msi_irq_groups[0]);
kfree(dev->msi_irq_groups);
dev->msi_irq_groups = NULL;
}
}
static struct msi_desc *alloc_msi_entry(struct pci_dev *dev)
......@@ -430,7 +458,7 @@ static void __pci_restore_msi_state(struct pci_dev *dev)
pci_intx_for_msi(dev, 0);
msi_set_enable(dev, 0);
arch_restore_msi_irqs(dev, dev->irq);
arch_restore_msi_irqs(dev);
pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control);
msi_mask_irq(entry, msi_capable_mask(control), entry->masked);
......@@ -455,8 +483,8 @@ static void __pci_restore_msix_state(struct pci_dev *dev)
control |= PCI_MSIX_FLAGS_ENABLE | PCI_MSIX_FLAGS_MASKALL;
pci_write_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, control);
arch_restore_msi_irqs(dev);
list_for_each_entry(entry, &dev->msi_list, list) {
arch_restore_msi_irqs(dev, entry->irq);
msix_mask_irq(entry, entry->masked);
}
......@@ -471,94 +499,95 @@ void pci_restore_msi_state(struct pci_dev *dev)
}
EXPORT_SYMBOL_GPL(pci_restore_msi_state);
#define to_msi_attr(obj) container_of(obj, struct msi_attribute, attr)
#define to_msi_desc(obj) container_of(obj, struct msi_desc, kobj)
struct msi_attribute {
struct attribute attr;
ssize_t (*show)(struct msi_desc *entry, struct msi_attribute *attr,
char *buf);
ssize_t (*store)(struct msi_desc *entry, struct msi_attribute *attr,
const char *buf, size_t count);
};
static ssize_t show_msi_mode(struct msi_desc *entry, struct msi_attribute *atr,
static ssize_t msi_mode_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "%s\n", entry->msi_attrib.is_msix ? "msix" : "msi");
}
static ssize_t msi_irq_attr_show(struct kobject *kobj,
struct attribute *attr, char *buf)
{
struct msi_attribute *attribute = to_msi_attr(attr);
struct msi_desc *entry = to_msi_desc(kobj);
if (!attribute->show)
return -EIO;
return attribute->show(entry, attribute, buf);
}
static const struct sysfs_ops msi_irq_sysfs_ops = {
.show = msi_irq_attr_show,
};
static struct msi_attribute mode_attribute =
__ATTR(mode, S_IRUGO, show_msi_mode, NULL);
struct pci_dev *pdev = to_pci_dev(dev);
struct msi_desc *entry;
unsigned long irq;
int retval;
static struct attribute *msi_irq_default_attrs[] = {
&mode_attribute.attr,
NULL
};
retval = kstrtoul(attr->attr.name, 10, &irq);
if (retval)
return retval;
static void msi_kobj_release(struct kobject *kobj)
{
struct msi_desc *entry = to_msi_desc(kobj);
pci_dev_put(entry->dev);
list_for_each_entry(entry, &pdev->msi_list, list) {
if (entry->irq == irq) {
return sprintf(buf, "%s\n",
entry->msi_attrib.is_msix ? "msix" : "msi");
}
}
return -ENODEV;
}
static struct kobj_type msi_irq_ktype = {
.release = msi_kobj_release,
.sysfs_ops = &msi_irq_sysfs_ops,
.default_attrs = msi_irq_default_attrs,
};
static int populate_msi_sysfs(struct pci_dev *pdev)
{
struct attribute **msi_attrs;
struct attribute *msi_attr;
struct device_attribute *msi_dev_attr;
struct attribute_group *msi_irq_group;
const struct attribute_group **msi_irq_groups;
struct msi_desc *entry;
struct kobject *kobj;
int ret;
int ret = -ENOMEM;
int num_msi = 0;
int count = 0;
pdev->msi_kset = kset_create_and_add("msi_irqs", NULL, &pdev->dev.kobj);
if (!pdev->msi_kset)
return -ENOMEM;
/* Determine how many msi entries we have */
list_for_each_entry(entry, &pdev->msi_list, list) {
++num_msi;
}
if (!num_msi)
return 0;
/* Dynamically create the MSI attributes for the PCI device */
msi_attrs = kzalloc(sizeof(void *) * (num_msi + 1), GFP_KERNEL);
if (!msi_attrs)
return -ENOMEM;
list_for_each_entry(entry, &pdev->msi_list, list) {
kobj = &entry->kobj;
kobj->kset = pdev->msi_kset;
pci_dev_get(pdev);
ret = kobject_init_and_add(kobj, &msi_irq_ktype, NULL,
"%u", entry->irq);
if (ret)
goto out_unroll;
count++;
char *name = kmalloc(20, GFP_KERNEL);
msi_dev_attr = kzalloc(sizeof(*msi_dev_attr), GFP_KERNEL);
if (!msi_dev_attr)
goto error_attrs;
sprintf(name, "%d", entry->irq);
sysfs_attr_init(&msi_dev_attr->attr);
msi_dev_attr->attr.name = name;
msi_dev_attr->attr.mode = S_IRUGO;
msi_dev_attr->show = msi_mode_show;
msi_attrs[count] = &msi_dev_attr->attr;
++count;
}
msi_irq_group = kzalloc(sizeof(*msi_irq_group), GFP_KERNEL);
if (!msi_irq_group)
goto error_attrs;
msi_irq_group->name = "msi_irqs";
msi_irq_group->attrs = msi_attrs;
msi_irq_groups = kzalloc(sizeof(void *) * 2, GFP_KERNEL);
if (!msi_irq_groups)
goto error_irq_group;
msi_irq_groups[0] = msi_irq_group;
ret = sysfs_create_groups(&pdev->dev.kobj, msi_irq_groups);
if (ret)
goto error_irq_groups;
pdev->msi_irq_groups = msi_irq_groups;
return 0;
out_unroll:
list_for_each_entry(entry, &pdev->msi_list, list) {
if (!count)
break;
kobject_del(&entry->kobj);
kobject_put(&entry->kobj);
count--;
error_irq_groups:
kfree(msi_irq_groups);
error_irq_group:
kfree(msi_irq_group);
error_attrs:
count = 0;
msi_attr = msi_attrs[count];
while (msi_attr) {
msi_dev_attr = container_of(msi_attr, struct device_attribute, attr);
kfree(msi_attr->name);
kfree(msi_dev_attr);
++count;
msi_attr = msi_attrs[count];
}
return ret;
}
......@@ -729,7 +758,7 @@ static int msix_capability_init(struct pci_dev *dev,
ret = arch_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX);
if (ret)
goto error;
goto out_avail;
/*
* Some devices require MSI-X to be enabled before we can touch the
......@@ -742,10 +771,8 @@ static int msix_capability_init(struct pci_dev *dev,
msix_program_entries(dev, entries);
ret = populate_msi_sysfs(dev);
if (ret) {
ret = 0;
goto error;
}
if (ret)
goto out_free;
/* Set MSI-X enabled bits and unmask the function */
pci_intx_for_msi(dev, 0);
......@@ -756,7 +783,7 @@ static int msix_capability_init(struct pci_dev *dev,
return 0;
error:
out_avail:
if (ret < 0) {
/*
* If we had some success, report the number of irqs
......@@ -773,6 +800,7 @@ static int msix_capability_init(struct pci_dev *dev,
ret = avail;
}
out_free:
free_msi_irqs(dev);
return ret;
......@@ -823,6 +851,31 @@ static int pci_msi_check_device(struct pci_dev *dev, int nvec, int type)
return 0;
}
/**
* pci_msi_vec_count - Return the number of MSI vectors a device can send
* @dev: device to report about
*
* This function returns the number of MSI vectors a device requested via
* Multiple Message Capable register. It returns a negative errno if the
* device is not capable of sending MSI interrupts. Otherwise, the call succeeds
* and returns a power of two, up to a maximum of 2^5 (32), according to the
* MSI specification.
**/
int pci_msi_vec_count(struct pci_dev *dev)
{
int ret;
u16 msgctl;
if (!dev->msi_cap)
return -EINVAL;
pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl);
ret = 1 << ((msgctl & PCI_MSI_FLAGS_QMASK) >> 1);
return ret;
}
EXPORT_SYMBOL(pci_msi_vec_count);
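As a side note for readers, a hedged usage sketch that is not part of this diff: a driver could call the new pci_msi_vec_count() to learn how many vectors the function advertises before enabling any of them. The helper name demo_report_msi_vectors() is invented for illustration only.
#include <linux/pci.h>
/* Illustrative helper (assumed, not from this commit): report how many
 * MSI vectors a PCI function advertises via its Multiple Message Capable
 * field, or the negative errno if MSI is not supported at all. */
static int demo_report_msi_vectors(struct pci_dev *pdev)
{
	int nvec = pci_msi_vec_count(pdev);	/* power of two, or negative errno */
	if (nvec < 0) {
		dev_info(&pdev->dev, "MSI not supported (%d)\n", nvec);
		return nvec;
	}
	dev_info(&pdev->dev, "device advertises %d MSI vector(s)\n", nvec);
	return nvec;
}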
/**
* pci_enable_msi_block - configure device's MSI capability structure
* @dev: device to configure
......@@ -836,16 +889,16 @@ static int pci_msi_check_device(struct pci_dev *dev, int nvec, int type)
* updates the @dev's irq member to the lowest new interrupt number; the
* other interrupt numbers allocated to this device are consecutive.
*/
int pci_enable_msi_block(struct pci_dev *dev, unsigned int nvec)
int pci_enable_msi_block(struct pci_dev *dev, int nvec)
{
int status, maxvec;
u16 msgctl;
if (!dev->msi_cap || dev->current_state != PCI_D0)
if (dev->current_state != PCI_D0)
return -EINVAL;
pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl);
maxvec = 1 << ((msgctl & PCI_MSI_FLAGS_QMASK) >> 1);
maxvec = pci_msi_vec_count(dev);
if (maxvec < 0)
return maxvec;
if (nvec > maxvec)
return maxvec;
......@@ -867,31 +920,6 @@ int pci_enable_msi_block(struct pci_dev *dev, unsigned int nvec)
}
EXPORT_SYMBOL(pci_enable_msi_block);
int pci_enable_msi_block_auto(struct pci_dev *dev, unsigned int *maxvec)
{
int ret, nvec;
u16 msgctl;
if (!dev->msi_cap || dev->current_state != PCI_D0)
return -EINVAL;
pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl);
ret = 1 << ((msgctl & PCI_MSI_FLAGS_QMASK) >> 1);
if (maxvec)
*maxvec = ret;
do {
nvec = ret;
ret = pci_enable_msi_block(dev, nvec);
} while (ret > 0);
if (ret < 0)
return ret;
return nvec;
}
EXPORT_SYMBOL(pci_enable_msi_block_auto);
void pci_msi_shutdown(struct pci_dev *dev)
{
struct msi_desc *desc;
......@@ -925,25 +953,29 @@ void pci_disable_msi(struct pci_dev *dev)
pci_msi_shutdown(dev);
free_msi_irqs(dev);
kset_unregister(dev->msi_kset);
dev->msi_kset = NULL;
}
EXPORT_SYMBOL(pci_disable_msi);
/**
* pci_msix_table_size - return the number of device's MSI-X table entries
* pci_msix_vec_count - return the number of device's MSI-X table entries
* @dev: pointer to the pci_dev data structure of MSI-X device function
*/
int pci_msix_table_size(struct pci_dev *dev)
* This function returns the number of device's MSI-X table entries and
* therefore the number of MSI-X vectors device is capable of sending.
* It returns a negative errno if the device is not capable of sending MSI-X
* interrupts.
**/
int pci_msix_vec_count(struct pci_dev *dev)
{
u16 control;
if (!dev->msix_cap)
return 0;
return -EINVAL;
pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &control);
return msix_table_size(control);
}
EXPORT_SYMBOL(pci_msix_vec_count);
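Another illustrative sketch, assumed rather than taken from the commit: pci_msix_vec_count() gives a natural upper bound for sizing the msix_entry array a driver hands to the MSI-X setup calls. The name demo_alloc_msix_entries() and the kcalloc-based allocation are assumptions.
#include <linux/pci.h>
#include <linux/slab.h>
/* Hypothetical helper: allocate one msix_entry per vector the device's
 * MSI-X table can hold, requesting table entries 0..n-1. */
static struct msix_entry *demo_alloc_msix_entries(struct pci_dev *pdev, int *out_n)
{
	struct msix_entry *entries;
	int i, n = pci_msix_vec_count(pdev);	/* table size, or negative errno */
	if (n < 0)
		return NULL;			/* no MSI-X capability */
	entries = kcalloc(n, sizeof(*entries), GFP_KERNEL);
	if (!entries)
		return NULL;
	for (i = 0; i < n; i++)
		entries[i].entry = i;
	*out_n = n;
	return entries;
}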
/**
* pci_enable_msix - configure device's MSI-X capability structure
......@@ -972,7 +1004,9 @@ int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nvec)
if (status)
return status;
nr_entries = pci_msix_table_size(dev);
nr_entries = pci_msix_vec_count(dev);
if (nr_entries < 0)
return nr_entries;
if (nvec > nr_entries)
return nr_entries;
......@@ -1023,8 +1057,6 @@ void pci_disable_msix(struct pci_dev *dev)
pci_msix_shutdown(dev);
free_msi_irqs(dev);
kset_unregister(dev->msi_kset);
dev->msi_kset = NULL;
}
EXPORT_SYMBOL(pci_disable_msix);
......@@ -1079,3 +1111,77 @@ void pci_msi_init_pci_dev(struct pci_dev *dev)
if (dev->msix_cap)
msix_set_enable(dev, 0);
}
/**
* pci_enable_msi_range - configure device's MSI capability structure
* @dev: device to configure
* @minvec: minimal number of interrupts to configure
* @maxvec: maximum number of interrupts to configure
*
* This function tries to allocate a maximum possible number of interrupts in a
* range between @minvec and @maxvec. It returns a negative errno if an error
* occurs. If it succeeds, it returns the actual number of interrupts allocated
* and updates the @dev's irq member to the lowest new interrupt number;
* the other interrupt numbers allocated to this device are consecutive.
**/
int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
{
int nvec = maxvec;
int rc;
if (maxvec < minvec)
return -ERANGE;
do {
rc = pci_enable_msi_block(dev, nvec);
if (rc < 0) {
return rc;
} else if (rc > 0) {
if (rc < minvec)
return -ENOSPC;
nvec = rc;
}
} while (rc);
return nvec;
}
EXPORT_SYMBOL(pci_enable_msi_range);
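A minimal, hypothetical caller of this new range interface might look like the fragment below. The handler, the helper name demo_setup_irqs() and the upper bound of 8 vectors are assumptions; the consecutive pdev->irq + i layout follows the pci_enable_msi_block() documentation earlier in this file.
#include <linux/pci.h>
#include <linux/interrupt.h>
static irqreturn_t demo_irq(int irq, void *data)
{
	return IRQ_HANDLED;			/* placeholder handler */
}
/* Hypothetical probe fragment: accept anywhere from 1 to 8 MSI vectors
 * and wire a handler to each consecutive IRQ number. */
static int demo_setup_irqs(struct pci_dev *pdev)
{
	int i, err, nvec;
	nvec = pci_enable_msi_range(pdev, 1, 8);	/* 8 is an assumed maximum */
	if (nvec < 0)
		return nvec;
	for (i = 0; i < nvec; i++) {
		err = request_irq(pdev->irq + i, demo_irq, 0, "demo", pdev);
		if (err) {
			while (--i >= 0)
				free_irq(pdev->irq + i, pdev);
			pci_disable_msi(pdev);
			return err;
		}
	}
	return 0;
}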
/**
* pci_enable_msix_range - configure device's MSI-X capability structure
* @dev: pointer to the pci_dev data structure of MSI-X device function
* @entries: pointer to an array of MSI-X entries
* @minvec: minimum number of MSI-X irqs requested
* @maxvec: maximum number of MSI-X irqs requested
*
* Setup the MSI-X capability structure of device function with a maximum
* possible number of interrupts in the range between @minvec and @maxvec
* upon its software driver call to request for MSI-X mode enabled on its
* hardware device function. It returns a negative errno if an error occurs.
* If it succeeds, it returns the actual number of interrupts allocated and
* indicates the successful configuration of MSI-X capability structure
* with new allocated MSI-X interrupts.
**/
int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
int minvec, int maxvec)
{
int nvec = maxvec;
int rc;
if (maxvec < minvec)
return -ERANGE;
do {
rc = pci_enable_msix(dev, entries, nvec);
if (rc < 0) {
return rc;
} else if (rc > 0) {
if (rc < minvec)
return -ENOSPC;
nvec = rc;
}
} while (rc);
return nvec;
}
EXPORT_SYMBOL(pci_enable_msix_range);
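And a corresponding MSI-X sketch, again assumed rather than taken from the commit: ask for between 2 and DEMO_MAX_MSIX vectors and use whatever count is granted. The caller is expected to pass an entries array with at least DEMO_MAX_MSIX elements.
#include <linux/pci.h>
#define DEMO_MAX_MSIX 4			/* assumed upper bound for this sketch */
/* Hypothetical setup: request 2..DEMO_MAX_MSIX MSI-X vectors; on success,
 * entries[0..nvec-1].vector hold the Linux IRQ numbers to request. */
static int demo_setup_msix(struct pci_dev *pdev, struct msix_entry *entries)
{
	int i, nvec;
	for (i = 0; i < DEMO_MAX_MSIX; i++)
		entries[i].entry = i;
	nvec = pci_enable_msix_range(pdev, entries, 2, DEMO_MAX_MSIX);
	if (nvec < 0)
		return nvec;		/* e.g. -ENOSPC if fewer than 2 were available */
	return nvec;
}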
......@@ -361,7 +361,7 @@ static void pci_acpi_cleanup(struct device *dev)
static bool pci_acpi_bus_match(struct device *dev)
{
return dev->bus == &pci_bus_type;
return dev_is_pci(dev);
}
static struct acpi_bus_type acpi_pci_bus = {
......
......@@ -34,21 +34,7 @@
#define DEVICE_LABEL_DSM 0x07
#ifndef CONFIG_DMI
static inline int
pci_create_smbiosname_file(struct pci_dev *pdev)
{
return -1;
}
static inline void
pci_remove_smbiosname_file(struct pci_dev *pdev)
{
}
#else
#ifdef CONFIG_DMI
enum smbios_attr_enum {
SMBIOS_ATTR_NONE = 0,
SMBIOS_ATTR_LABEL_SHOW,
......@@ -156,31 +142,20 @@ pci_remove_smbiosname_file(struct pci_dev *pdev)
{
sysfs_remove_group(&pdev->dev.kobj, &smbios_attr_group);
}
#endif
#ifndef CONFIG_ACPI
static inline int
pci_create_acpi_index_label_files(struct pci_dev *pdev)
{
return -1;
}
#else
static inline int
pci_remove_acpi_index_label_files(struct pci_dev *pdev)
pci_create_smbiosname_file(struct pci_dev *pdev)
{
return -1;
}
static inline bool
device_has_dsm(struct device *dev)
static inline void
pci_remove_smbiosname_file(struct pci_dev *pdev)
{
return false;
}
#endif
#else
#ifdef CONFIG_ACPI
static const char device_label_dsm_uuid[] = {
0xD0, 0x37, 0xC9, 0xE5, 0x53, 0x35, 0x7A, 0x4D,
0x91, 0x17, 0xEA, 0x4D, 0x19, 0xC3, 0x43, 0x4D
......@@ -364,6 +339,24 @@ pci_remove_acpi_index_label_files(struct pci_dev *pdev)
sysfs_remove_group(&pdev->dev.kobj, &acpi_attr_group);
return 0;
}
#else
static inline int
pci_create_acpi_index_label_files(struct pci_dev *pdev)
{
return -1;
}
static inline int
pci_remove_acpi_index_label_files(struct pci_dev *pdev)
{
return -1;
}
static inline bool
device_has_dsm(struct device *dev)
{
return false;
}
#endif
void pci_create_firmware_label_files(struct pci_dev *pdev)
......
......@@ -297,7 +297,6 @@ msi_bus_store(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RW(msi_bus);
static DEFINE_MUTEX(pci_remove_rescan_mutex);
static ssize_t bus_rescan_store(struct bus_type *bus, const char *buf,
size_t count)
{
......@@ -308,10 +307,10 @@ static ssize_t bus_rescan_store(struct bus_type *bus, const char *buf,
return -EINVAL;
if (val) {
mutex_lock(&pci_remove_rescan_mutex);
pci_lock_rescan_remove();
while ((b = pci_find_next_bus(b)) != NULL)
pci_rescan_bus(b);
mutex_unlock(&pci_remove_rescan_mutex);
pci_unlock_rescan_remove();
}
return count;
}
......@@ -342,9 +341,9 @@ dev_rescan_store(struct device *dev, struct device_attribute *attr,
return -EINVAL;
if (val) {
mutex_lock(&pci_remove_rescan_mutex);
pci_lock_rescan_remove();
pci_rescan_bus(pdev->bus);
mutex_unlock(&pci_remove_rescan_mutex);
pci_unlock_rescan_remove();
}
return count;
}
......@@ -354,11 +353,7 @@ static struct device_attribute dev_rescan_attr = __ATTR(rescan,
static void remove_callback(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
mutex_lock(&pci_remove_rescan_mutex);
pci_stop_and_remove_bus_device(pdev);
mutex_unlock(&pci_remove_rescan_mutex);
pci_stop_and_remove_bus_device_locked(to_pci_dev(dev));
}
static ssize_t
......@@ -395,12 +390,12 @@ dev_bus_rescan_store(struct device *dev, struct device_attribute *attr,
return -EINVAL;
if (val) {
mutex_lock(&pci_remove_rescan_mutex);
pci_lock_rescan_remove();
if (!pci_is_root_bus(bus) && list_empty(&bus->devices))
pci_rescan_bus_bridge_resize(bus->self);
else
pci_rescan_bus(bus);
mutex_unlock(&pci_remove_rescan_mutex);
pci_unlock_rescan_remove();
}
return count;
}
......
This diff has been collapsed.
......@@ -6,7 +6,6 @@
#define PCI_CFG_SPACE_SIZE 256
#define PCI_CFG_SPACE_EXP_SIZE 4096
extern const unsigned char pcix_bus_speed[];
extern const unsigned char pcie_link_speed[];
/* Functions internal to the PCI core code */
......@@ -68,7 +67,6 @@ void pci_power_up(struct pci_dev *dev);
void pci_disable_enabled_device(struct pci_dev *dev);
int pci_finish_runtime_suspend(struct pci_dev *dev);
int __pci_pme_wakeup(struct pci_dev *dev, void *ign);
void pci_wakeup_bus(struct pci_bus *bus);
void pci_config_pm_runtime_get(struct pci_dev *dev);
void pci_config_pm_runtime_put(struct pci_dev *dev);
void pci_pm_init(struct pci_dev *dev);
......
This diff has been collapsed.
......@@ -984,18 +984,6 @@ void pcie_no_aspm(void)
}
}
/**
* pcie_aspm_enabled - is PCIe ASPM enabled?
*
* Returns true if ASPM has not been disabled by the command-line option
* pcie_aspm=off.
**/
int pcie_aspm_enabled(void)
{
return !aspm_disabled;
}
EXPORT_SYMBOL(pcie_aspm_enabled);
bool pcie_aspm_support_enabled(void)
{
return aspm_support_enabled;
......
This diff has been collapsed.
This diff has been collapsed.
......@@ -339,7 +339,7 @@ static void quirk_io_region(struct pci_dev *dev, int port,
/* Convert from PCI bus to resource space */
bus_region.start = region;
bus_region.end = region + size - 1;
pcibios_bus_to_resource(dev, res, &bus_region);
pcibios_bus_to_resource(dev->bus, res, &bus_region);
if (!pci_claim_resource(dev, nr))
dev_info(&dev->dev, "quirk: %pR claimed by %s\n", res, name);
......
This diff has been collapsed.
......@@ -31,7 +31,7 @@ int pci_enable_rom(struct pci_dev *pdev)
if (!res->flags)
return -1;
pcibios_resource_to_bus(pdev, &region, res);
pcibios_resource_to_bus(pdev->bus, &region, res);
pci_read_config_dword(pdev, pdev->rom_base_reg, &rom_addr);
rom_addr &= ~PCI_ROM_ADDRESS_MASK;
rom_addr |= region.start | PCI_ROM_ADDRESS_ENABLE;
......
This diff has been collapsed.
......@@ -52,7 +52,7 @@ void pci_update_resource(struct pci_dev *dev, int resno)
if (res->flags & IORESOURCE_PCI_FIXED)
return;
pcibios_resource_to_bus(dev, &region, res);
pcibios_resource_to_bus(dev->bus, &region, res);
new = region.start | (res->flags & PCI_REGION_FLAG_MASK);
if (res->flags & IORESOURCE_IO)
......
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
......@@ -608,7 +608,7 @@ static int i82092aa_set_mem_map(struct pcmcia_socket *socket, struct pccard_mem_
enter("i82092aa_set_mem_map");
pcibios_resource_to_bus(sock_info->dev, &region, mem->res);
pcibios_resource_to_bus(sock_info->dev->bus, &region, mem->res);
map = mem->map;
if (map > 4) {
......
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
......@@ -128,7 +128,7 @@ static int mpt2sas_remove_dead_ioc_func(void *arg)
pdev = ioc->pdev;
if ((pdev == NULL))
return -1;
pci_stop_and_remove_bus_device(pdev);
pci_stop_and_remove_bus_device_locked(pdev);
return 0;
}
......
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.