1. 22 Sep 2012 (1 commit)
  2. 23 Jun 2012 (2 commits)
  3. 14 Jun 2012 (2 commits)
  4. 08 May 2012 (1 commit)
  5. 01 May 2012 (5 commits)
  6. 02 Mar 2012 (1 commit)
  7. 29 Feb 2012 (1 commit)
  8. 24 Feb 2012 (1 commit)
    • PCI: fix memleak when ACPI _CRS is not used. · 1cc1c96c
      Committed by Yinghai Lu
      warning:
      unreferenced object 0xffff8801f6914200 (size 512):
        comm "swapper/0", pid 1, jiffies 4294893643 (age 2664.644s)
        hex dump (first 32 bytes):
          00 00 c0 fe 00 00 00 00 ff ff ff ff 00 00 00 00  ................
          60 58 2f f6 03 88 ff ff 00 02 00 00 00 00 00 00  `X/.............
        backtrace:
          [<ffffffff81c2408c>] kmemleak_alloc+0x26/0x43
          [<ffffffff8113764f>] __kmalloc+0x121/0x183
          [<ffffffff81ca8d93>] get_current_resources+0x5a/0xc6
          [<ffffffff81c5bedd>] pci_acpi_scan_root+0x13c/0x21c
          [<ffffffff81c2a745>] acpi_pci_root_add+0x1e1/0x421
          [<ffffffff81408f50>] acpi_device_probe+0x50/0x190
          [<ffffffff8149edc7>] really_probe+0x99/0x126
          [<ffffffff8149ef83>] driver_probe_device+0x3b/0x56
          [<ffffffff8149effd>] __driver_attach+0x5f/0x82
          [<ffffffff8149d860>] bus_for_each_dev+0x5c/0x88
          [<ffffffff8149eb87>] driver_attach+0x1e/0x20
          [<ffffffff8149e7cc>] bus_add_driver+0xca/0x21d
          [<ffffffff8149f47b>] driver_register+0x91/0xfe
          [<ffffffff81409d09>] acpi_bus_register_driver+0x43/0x45
          [<ffffffff8278bdc9>] acpi_pci_root_init+0x20/0x28
          [<ffffffff810001e7>] do_one_initcall+0x57/0x134
      
      The system has _CRS for its root buses, but they are not used because the
      machine's BIOS date is before the cutoff date for _CRS usage.
      
      Try to free those unused resource arrays and names.
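
      A minimal sketch of the idea (the structure layout and helper name below
      are assumptions for illustration, not the actual patch): when the _CRS
      data is not going to be used, both the kmalloc()'d window array and the
      allocated name string have to be released.

          #include <linux/ioport.h>
          #include <linux/slab.h>

          /* Hypothetical stand-in for the per-root-bridge bookkeeping. */
          struct pci_root_info_sketch {
                  char *name;             /* e.g. "PCI Bus 0000:00", kmalloc()'d */
                  struct resource *res;   /* kmalloc()'d array of bridge windows */
                  int res_num;
          };

          /* Release the data gathered from _CRS when it will not be used. */
          static void free_unused_crs_info(struct pci_root_info_sketch *info)
          {
                  kfree(info->name);
                  kfree(info->res);
                  info->name = NULL;
                  info->res = NULL;
                  info->res_num = 0;
          }
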
      Reviewed-by: Bjorn Helgaas <bhelgaas@google.com>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      1cc1c96c
  9. 15 Feb 2012 (1 commit)
  10. 07 Jan 2012 (5 commits)
  11. 07 Oct 2011 (1 commit)
    • x86/PCI: use host bridge _CRS info on ASUS M2V-MX SE · 29cf7a30
      Committed by Paul Menzel
      In summary, this DMI quirk uses the _CRS info by default for the ASUS
      M2V-MX SE by turning on `pci=use_crs` and is similar to the quirk
      added by commit 2491762c ("x86/PCI: use host bridge _CRS info on
      ASRock ALiveSATA2-GLAN") whose commit message should be read for further
      information.
      
      Since commit 3e3da00c ("x86/pci: AMD one chain system to use pci
      read out res") Linux gives the following oops:
      
          parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
          HDA Intel 0000:20:01.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
          HDA Intel 0000:20:01.0: setting latency timer to 64
          BUG: unable to handle kernel paging request at ffffc90011c08000
          IP: [<ffffffffa0578402>] azx_probe+0x3ad/0x86b [snd_hda_intel]
          PGD 13781a067 PUD 13781b067 PMD 1300ba067 PTE 800000fd00000173
          Oops: 0009 [#1] SMP
          last sysfs file: /sys/module/snd_pcm/initstate
          CPU 0
          Modules linked in: snd_hda_intel(+) snd_hda_codec snd_hwdep snd_pcm_oss snd_mixer_oss snd_pcm snd_seq_midi snd_rawmidi snd_seq_midi_event tpm_tis tpm snd_seq tpm_bios psmouse parport_pc snd_timer snd_seq_device parport processor evdev snd i2c_viapro thermal_sys amd64_edac_mod k8temp i2c_core soundcore shpchp pcspkr serio_raw asus_atk0110 pci_hotplug edac_core button snd_page_alloc edac_mce_amd ext3 jbd mbcache sha256_generic cryptd aes_x86_64 aes_generic cbc dm_crypt dm_mod raid1 md_mod usbhid hid sg sd_mod crc_t10dif sr_mod cdrom ata_generic uhci_hcd sata_via pata_via libata ehci_hcd usbcore scsi_mod via_rhine mii nls_base [last unloaded: scsi_wait_scan]
          Pid: 1153, comm: work_for_cpu Not tainted 2.6.37-1-amd64 #1 M2V-MX SE/System Product Name
          RIP: 0010:[<ffffffffa0578402>]  [<ffffffffa0578402>] azx_probe+0x3ad/0x86b [snd_hda_intel]
          RSP: 0018:ffff88013153fe50  EFLAGS: 00010286
          RAX: ffffc90011c08000 RBX: ffff88013029ec00 RCX: 0000000000000006
          RDX: 0000000000000000 RSI: 0000000000000246 RDI: 0000000000000246
          RBP: ffff88013341d000 R08: 0000000000000000 R09: 0000000000000040
          R10: 0000000000000286 R11: 0000000000003731 R12: ffff88013029c400
          R13: 0000000000000000 R14: 0000000000000000 R15: ffff88013341d090
          FS:  0000000000000000(0000) GS:ffff8800bfc00000(0000) knlGS:00000000f7610ab0
          CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
          CR2: ffffc90011c08000 CR3: 0000000132f57000 CR4: 00000000000006f0
          DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
          DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
          Process work_for_cpu (pid: 1153, threadinfo ffff88013153e000, task ffff8801303c86c0)
          Stack:
           0000000000000005 ffffffff8123ad65 00000000000136c0 ffff88013029c400
           ffff8801303c8998 ffff88013341d000 ffff88013341d090 ffff8801322d9dc8
           ffff88013341d208 0000000000000000 0000000000000000 ffffffff811ad232
          Call Trace:
           [<ffffffff8123ad65>] ? __pm_runtime_set_status+0x162/0x186
           [<ffffffff811ad232>] ? local_pci_probe+0x49/0x92
           [<ffffffff8105afc5>] ? do_work_for_cpu+0x0/0x1b
           [<ffffffff8105afc5>] ? do_work_for_cpu+0x0/0x1b
           [<ffffffff8105afd0>] ? do_work_for_cpu+0xb/0x1b
           [<ffffffff8105fd3f>] ? kthread+0x7a/0x82
           [<ffffffff8100a824>] ? kernel_thread_helper+0x4/0x10
           [<ffffffff8105fcc5>] ? kthread+0x0/0x82
           [<ffffffff8100a820>] ? kernel_thread_helper+0x0/0x10
          Code: f4 01 00 00 ef 31 f6 48 89 df e8 29 dd ff ff 85 c0 0f 88 2b 03 00 00 48 89 ef e8 b4 39 c3 e0 8b 7b 40 e8 fc 9d b1 e0 48 8b 43 38 <66> 8b 10 66 89 14 24 8b 43 14 83 e8 03 83 f8 01 77 32 31 d2 be
          RIP  [<ffffffffa0578402>] azx_probe+0x3ad/0x86b [snd_hda_intel]
           RSP <ffff88013153fe50>
          CR2: ffffc90011c08000
          ---[ end trace 8d1f3ebc136437fd ]---
      
      Trusting the ACPI _CRS information (`pci=use_crs`) fixes this problem.
      
          $ dmesg | grep -i crs # with the quirk
          PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
      
      The match has to be against the DMI board entries, though, since the vendor entries are not populated.
      
          DMI: System manufacturer System Product Name/M2V-MX SE, BIOS 0304    10/30/2007
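
      A sketch of what such a quirk entry can look like, using the board name
      and BIOS vendor strings quoted in this message (the table and callback
      names are assumptions, not the literal patch):

          #include <linux/dmi.h>

          static bool pci_use_crs;        /* assumed flag behind "pci=use_crs" */

          static int __init set_use_crs(const struct dmi_system_id *id)
          {
                  pci_use_crs = true;
                  return 0;
          }

          /* Hypothetical quirk table matched against the DMI board data. */
          static const struct dmi_system_id pci_use_crs_table[] __initconst = {
                  {
                          .callback = set_use_crs,
                          .ident = "ASUS M2V-MX SE",
                          .matches = {
                                  DMI_MATCH(DMI_BOARD_NAME, "M2V-MX SE"),
                                  DMI_MATCH(DMI_BIOS_VENDOR, "American Megatrends Inc."),
                          },
                  },
                  {}
          };

      Early PCI setup would then run dmi_check_system(pci_use_crs_table) so the
      callback fires only on a matching board.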
      
      This quirk should be removed when `pci=use_crs` is enabled for machines
      from 2006 or earlier, or when some other solution is implemented.
      
      Using coreboot [1] with this board the problem does not exist, and this
      quirk does not affect it either. To be safe, though, the check is
      tightened to take effect only when the BIOS from American Megatrends is
      used.
      
              15:13 < ruik> but coreboot does not need that
              15:13 < ruik> because i have there only one root bus
              15:13 < ruik> the audio is behind a bridge
      
              $ sudo dmidecode
              BIOS Information
                      Vendor: American Megatrends Inc.
                      Version: 0304
                      Release Date: 10/30/2007
      
      [1] http://www.coreboot.org/
      
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=30552
      
      Cc: stable@kernel.org (2.6.34)
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: x86@kernel.org
      Signed-off-by: Paul Menzel <paulepanter@users.sourceforge.net>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      29cf7a30
  12. 10 Sep 2011 (1 commit)
  13. 02 Aug 2011 (1 commit)
    • PCI: Set PCI-E Max Payload Size on fabric · b03e7495
      Committed by Jon Mason
      On a given PCI-E fabric, each device, bridge, and root port can have a
      different PCI-E maximum payload size.  There is a sizable performance
      boost for having the largest possible maximum payload size on each PCI-E
      device.  However, if improperly configured, fatal bus errors can occur.
      Thus, it is important to ensure that PCI-E payloads sent by a device
      are never larger than the MPS setting of any device on the path to the
      destination.
      
      This can be achieved in two ways:
      
      - A conservative approach is to use the smallest common denominator of
        the entire tree below a root complex for every device on that fabric.
      
      This means, for example, that a USB controller with a 128-byte MPS on one
      leg of a switch will dramatically reduce the performance of a video card
      or 10GE adapter on another leg of that same switch.
      
      It also means that any hierarchy supporting hotplug slots (presumably
      including ExpressCard or Thunderbolt) will have to be entirely clamped
      to 128 bytes, since we cannot predict what will be plugged into those
      slots, and we cannot change the MPS on a "live" system.
      
      - A more optimal way is possible, if it falls within a couple of
        constraints:
      * The top-level host bridge will never generate packets larger than the
        smallest TLP (or if it can be controlled independently from its MPS at
        least)
      * The device will never generate packets larger than MPS (which can be
        configured via MRRS)
      * No support of direct PCI-E <-> PCI-E transfers between devices without
        some additional code to specifically deal with that case
      
      Then we can use an approach that basically ignores downstream requests
      and focuses exclusively on upstream requests. In that case, all we need
      to care about is that a device's MPS is no larger than its parent's MPS,
      which allows us to keep all switches/bridges at the maximum MPS supported
      by their parent, and eventually by the PHB.
      
      In this case, your USB controller would no longer "starve" your 10GE
      Ethernet and your hotplug slots won't affect your global MPS.
      Additionally, the hotplugged devices themselves can be configured to a
      larger MPS up to the value configured in the hotplug bridge.
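
      A minimal sketch of that rule, assuming pcie_get_mps()/pcie_set_mps()
      style accessors (an illustration of the policy, not the patch itself):

          #include <linux/kernel.h>
          #include <linux/pci.h>

          /*
           * Clamp a device's MPS to its parent bridge's MPS.  Applying this
           * while walking down the bus keeps every device at or below the
           * value programmed into its upstream bridge.
           */
          static void pcie_clamp_mps_to_parent(struct pci_dev *dev)
          {
                  struct pci_dev *parent = dev->bus->self;

                  if (!parent)    /* directly on the root bus: nothing to clamp to */
                          return;

                  pcie_set_mps(dev, min(pcie_get_mps(dev), pcie_get_mps(parent)));
          }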
      
      To choose between the two available options, two PCI kernel boot
      arguments have been added.  "pcie_bus_safe" provides the former
      behavior, while "pcie_bus_perf" provides the latter.
      By default, the latter behavior is used.
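
      Either policy can then be requested explicitly by appending one of the
      two options to the kernel command line (illustrative entry; the image
      path and root device are placeholders):

          linux /boot/vmlinuz root=/dev/sda1 ro pcie_bus_safe

      Replacing "pcie_bus_safe" with "pcie_bus_perf" selects the
      parent-clamped behavior instead.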
      
      NOTE: due to the location of the enablement, each arch will need to add
      calls to this function.  This patch only enables x86.
      
      This patch includes a number of changes recommended by Benjamin
      Herrenschmidt.
      
      Tested-by: Jordan_Hargrave@dell.com
      Signed-off-by: Jon Mason <mason@myri.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      b03e7495
  14. 22 Jul 2011 (1 commit)
  15. 02 Jun 2011 (1 commit)
    • x86/PCI/ACPI: fix type mismatch · 6e33a852
      Committed by Márton Németh
      The flags field of struct resource from linux/ioport.h is "unsigned
      long". Change the "type" parameter of the coalesce_windows() function
      to match that field. This fixes the following warning messages when
      compiling with "make C=1 W=1 bzImage modules":
      
      arch/x86/pci/acpi.c: In function ‘coalesce_windows’:
      arch/x86/pci/acpi.c:198: warning: conversion to ‘long unsigned int’ from ‘int’ may change the sign of the result
      arch/x86/pci/acpi.c:203: warning: conversion to ‘long unsigned int’ from ‘int’ may change the sign of the result
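
      In effect the fix is a signature change along these lines (a before/after
      sketch, not the literal diff):

          /* before: "type" is int, but it is compared against res->flags,
           * which is unsigned long, hence the sign-conversion warnings */
          static void coalesce_windows(struct pci_root_info *info, int type);

          /* after: match the type of the flags field in struct resource */
          static void coalesce_windows(struct pci_root_info *info, unsigned long type);
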
      Signed-off-by: Márton Németh <nm127@freemail.hu>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      6e33a852
  16. 12 Nov 2010 (1 commit)
    • x86/PCI: coalesce overlapping host bridge windows · 4723d0f2
      Committed by Bjorn Helgaas
      Some BIOSes provide PCI host bridge windows that overlap, e.g.,
      
          pci_root PNP0A03:00: host bridge window [mem 0xb0000000-0xffffffff]
          pci_root PNP0A03:00: host bridge window [mem 0xafffffff-0xdfffffff]
          pci_root PNP0A03:00: host bridge window [mem 0xf0000000-0xffffffff]
      
      If we simply insert these as children of iomem_resource, the second window
      fails because it conflicts with the first, and the third is inserted as a
      child of the first, i.e.,
      
          b0000000-ffffffff PCI Bus 0000:00
            f0000000-ffffffff PCI Bus 0000:00
      
      When we claim PCI device resources, this can cause collisions like this
      if we put them in the first window:
      
          pci 0000:00:01.0: address space collision: [mem 0xff300000-0xff4fffff] conflicts with PCI Bus 0000:00 [mem 0xf0000000-0xffffffff]
      
      Host bridge windows are top-level resources by definition, so it doesn't
      make sense to make the third window a child of the first.  This patch
      coalesces any host bridge windows that overlap.  For the example above,
      the result is this single window:
      
          pci_root PNP0A03:00: host bridge window [mem 0xafffffff-0xffffffff]
      
      This fixes a 2.6.34 regression.
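
      The coalescing step itself is ordinary interval merging; a self-contained
      illustration (not the kernel code) using the three windows above:

          #include <stdio.h>
          #include <stdlib.h>

          struct window { unsigned long long start, end; };

          static int cmp_start(const void *a, const void *b)
          {
                  const struct window *wa = a, *wb = b;

                  return (wa->start > wb->start) - (wa->start < wb->start);
          }

          /* Merge any windows whose [start, end] ranges overlap; returns the
           * number of windows left after coalescing. */
          static int coalesce(struct window *w, int n)
          {
                  int i, out = 0;

                  if (!n)
                          return 0;
                  qsort(w, n, sizeof(*w), cmp_start);
                  for (i = 1; i < n; i++) {
                          if (w[i].start <= w[out].end) {         /* overlap */
                                  if (w[i].end > w[out].end)
                                          w[out].end = w[i].end;
                          } else {
                                  w[++out] = w[i];
                          }
                  }
                  return out + 1;
          }

          int main(void)
          {
                  struct window w[] = {
                          { 0xb0000000, 0xffffffff },
                          { 0xafffffff, 0xdfffffff },
                          { 0xf0000000, 0xffffffff },
                  };
                  int i, n = coalesce(w, 3);

                  for (i = 0; i < n; i++)
                          printf("host bridge window [mem %#llx-%#llx]\n",
                                 w[i].start, w[i].end);
                  return 0;
          }

      Running it prints the single merged window [mem 0xafffffff-0xffffffff],
      matching the result quoted above.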
      
      Reference: https://bugzilla.kernel.org/show_bug.cgi?id=17011
      Reported-and-tested-by: Anisse Astier <anisse@astier.eu>
      Reported-and-tested-by: Pramod Dematagoda <pmd.lotr.gandalf@gmail.com>
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      4723d0f2
  17. 31 Jul 2010 (1 commit)
    • x86/PCI: use host bridge _CRS info on ASRock ALiveSATA2-GLAN · 2491762c
      Committed by Bjorn Helgaas
      This DMI quirk turns on "pci=use_crs" for the ALiveSATA2-GLAN because
      amd_bus.c doesn't handle this system correctly.
      
      The system has a single HyperTransport I/O chain, but has two PCI host
      bridges to buses 00 and 80.  amd_bus.c learns the MMIO range associated
      with buses 00-ff and that this range is routed to the HT chain hosted at
      node 0, link 0:
      
          bus: [00, ff] on node 0 link 0
          bus: 00 index 1 [mem 0x80000000-0xfcffffffff]
      
      This includes the address space for both bus 00 and bus 80, and amd_bus.c
      assumes it's all routed to bus 00.
      
      We find device 80:01.0, which BIOS left in the middle of that space, but
      we don't find a bridge from bus 00 to bus 80, so we conclude that 80:01.0
      is unreachable from bus 00, and we move it from the original, working,
      address to something outside the bus 00 aperture, which does not work:
      
          pci 0000:80:01.0: reg 10: [mem 0xfebfc000-0xfebfffff 64bit]
          pci 0000:80:01.0: BAR 0: assigned [mem 0xfd00000000-0xfd00003fff 64bit]
      
      The BIOS told us everything we need to know to handle this correctly,
      so we're better off if we just pay attention, which lets us leave the
      80:01.0 device at the original, working, address:
      
          ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
          pci_root PNP0A03:00: host bridge window [mem 0x80000000-0xff37ffff]
          ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 80-ff])
          pci_root PNP0A08:00: host bridge window [mem 0xfebfc000-0xfebfffff]
      
      This was a regression between 2.6.33 and 2.6.34.  In 2.6.33, amd_bus.c
      was used only when we found multiple HT chains.  3e3da00c, which
      enabled amd_bus.c even on systems with a single HT chain, caused this
      failure.
      
      This quirk was written by Graham.  If we ever enable "pci=use_crs" for
      machines from 2006 or earlier, this quirk should be removed.
      
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=16007
      
      Cc: stable@kernel.org
      Reported-by: Graham Ramsey <ramsey.graham@ntlworld.com>
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      2491762c
  18. 25 May 2010 (1 commit)
  19. 29 Apr 2010 (1 commit)
  20. 23 Apr 2010 (1 commit)
  21. 09 Apr 2010 (1 commit)
  22. 04 Apr 2010 (1 commit)
  23. 30 Mar 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Committed by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the following
      script is used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h (a minimal illustration follows
        this list).
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It is put in the include block which contains
        core kernel includes, in the same order that the rest are ordered:
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
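
      For a typical .c file, the automatic edit amounts to something like this
      (illustrative only):

          /* Before the sweep, kmalloc()/kfree() here compiled only because
           * slab.h leaked in indirectly via percpu.h; now it is explicit. */
          #include <linux/errno.h>
          #include <linux/slab.h>         /* kmalloc()/kfree(); pulls in gfp.h */

          static int example_alloc(void)
          {
                  char *buf = kmalloc(64, GFP_KERNEL);

                  if (!buf)
                          return -ENOMEM;
                  kfree(buf);
                  return 0;
          }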
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others it was more appropriate
         to add it to an implementation .h or embedding .c file.  This step
         added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of the
      specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
  24. 26 Mar 2010 (2 commits)
  25. 24 Feb 2010 (2 commits)
  26. 20 Feb 2010 (2 commits)
    • x86: Add pci_init_irq to x86_init · ab3b3793
      Committed by Thomas Gleixner
      Moorestown wants to reuse pcibios_init_irq but needs to provide its
      own implementation of pci_enable_irq. After we disentangled the init we
      can move the init_irq call to x86_init and remove the pci_enable_irq
      != NULL check in pcibios_init_irq. pci_enable_irq is compile-time
      initialized to pirq_enable_irq, and the special cases which override it
      (visws and acpi) set the x86_init function pointer to noop. That
      allows MRST (Moorestown) to override pci_enable_irq and otherwise run
      pcibios_init_irq unmodified.
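
      A sketch of the pointer arrangement being described (only pci_enable_irq,
      pirq_enable_irq and pcibios_init_irq are named by the text; the struct
      and field names below are assumptions):

          #include <linux/init.h>
          #include <linux/pci.h>

          int pirq_enable_irq(struct pci_dev *dev);       /* legacy PIRQ path */
          void pcibios_init_irq(void);                    /* shared init, per the text */
          void x86_init_noop(void);                       /* generic "do nothing" hook */

          /* Default IRQ-enable hook; MRST can override this pointer alone. */
          int (*pci_enable_irq)(struct pci_dev *dev) = pirq_enable_irq;

          /* Hypothetical stand-in for the x86_init.pci slot added here. */
          struct x86_init_pci_sketch {
                  void (*init_irq)(void);
          };

          static struct x86_init_pci_sketch x86_init_pci = {
                  .init_irq = pcibios_init_irq,
          };

          /* visws/acpi-style platforms simply skip the step: */
          static void __init skip_pci_irq_init(void)
          {
                  x86_init_pci.init_irq = x86_init_noop;
          }
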
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <43F901BD926A4E43B106BF17856F07559FB80CFF@orsmsx508.amr.corp.intel.com>
      Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      Signed-off-by: Jacob Pan <jacob.jun.pan@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      ab3b3793
    • x86: Move pci init function to x86_init · b72d0db9
      Committed by Thomas Gleixner
      The PCI initialization in pci_subsys_init() is a mess. pci_numaq_init,
      pci_acpi_init, pci_visws_init and pci_legacy_init are called, and each
      implementation checks and possibly modifies the global variable
      pcibios_scanned.
      
      x86_init functions allow us to do this more elegantly. The pci.init
      function pointer is preset to pci_legacy_init; numaq, acpi and visws
      can modify the pointer in their early setup functions. The functions
      return 0 when they did the full initialization including the bus scan.
      A non-zero return value indicates that pci_legacy_init needs to be
      called, either because the selected function failed or because it wants
      the generic bus scan in pci_legacy_init to happen (e.g. visws).
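
      A sketch of the resulting dispatch (the x86_init.pci.init field name is
      assumed to match the description; the init functions are the ones named
      above):

          /*
           * The preset pointer (pci_legacy_init by default, possibly replaced
           * with pci_acpi_init, pci_numaq_init or pci_visws_init during early
           * platform setup) runs first; a non-zero return means the generic
           * legacy bus scan still has to happen.
           */
          static int __init pci_subsys_init_sketch(void)
          {
                  if (x86_init.pci.init())
                          pci_legacy_init();

                  return 0;
          }
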
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <43F901BD926A4E43B106BF17856F07559FB80CFE@orsmsx508.amr.corp.intel.com>
      Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      Signed-off-by: Jacob Pan <jacob.jun.pan@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      b72d0db9
  27. 07 Nov 2009 (1 commit)