1. 20 Nov, 2008 1 commit
  2. 12 Nov, 2008 1 commit
      PCI: ignore bit0 of _OSC return code · 2485b867
      Committed by Kenji Kaneshige
      Currently acpi_run_osc() checks all the bits in the _OSC result code
      (the first DWORD in the capabilities buffer) to detect error
      conditions. But bit 0, which doesn't indicate any error, must be
      ignored.
      
      Bit 0 is used as the query flag at _OSC invocation time. Some
      platforms clear it during _OSC evaluation, but others don't. On the
      latter platforms, the current acpi_run_osc() mis-detects an error
      when _OSC is evaluated with the query flag set, because it doesn't
      ignore bit 0. Because of this, __acpi_query_osc() always fails on
      such platforms.
      
      This is the cause of the problem that pci_osc_control_set() has not
      worked since commit 4e39432f, which changed pci_osc_control_set() to
      use __acpi_query_osc().
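      A minimal sketch of the masking this patch describes: real errors in
      the _OSC return code live in bits other than bit 0, so bit 0 must be
      masked off before checking for failure. The constant and function
      names below are illustrative, not the kernel's actual identifiers.

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Hypothetical constant mirroring the _OSC return-code layout:
       * bit 0 of the first DWORD is the query flag, not an error bit. */
      #define OSC_QUERY_BIT (1u << 0)

      /* Return only the real error bits, ignoring the query flag. */
      static uint32_t osc_errors(uint32_t first_dword)
      {
          return first_dword & ~OSC_QUERY_BIT;
      }

      int main(void)
      {
          /* A platform that leaves the query flag set but reports no
           * error must not be treated as a failure. */
          assert(osc_errors(0x1) == 0);

          /* A genuine error bit (e.g. bit 1, _OSC failure) is still
           * detected, with or without the query flag set. */
          assert(osc_errors(0x2) != 0);
          assert(osc_errors(0x3) == 0x2);
          return 0;
      }
      ```

      Checking `first_dword != 0` directly, as the pre-patch code
      effectively did, fails on platforms that leave the query flag set.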
      Tested-by: Tomasz Czernecki <czernecki@gmail.com>
      Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
  3. 04 Nov, 2008 3 commits
  4. 25 Oct, 2008 2 commits
  5. 24 Oct, 2008 8 commits
  6. 23 Oct, 2008 25 commits