1. 06 January 2015, 1 commit
  2. 23 December 2014, 1 commit
  3. 17 December 2014, 1 commit
    • cpuidle / ACPI: remove unused CPUIDLE_FLAG_TIME_INVALID · 62c4cf97
      Authored by Len Brown
      CPUIDLE_FLAG_TIME_INVALID is no longer checked
      by menu or ladder cpuidle governors, so don't
      bother setting or defining it.
      
      It was originally invented to account for the fact that
      acpi_safe_halt() enables interrupts to invoke HLT.
      That would allow interrupt service routines to be included
      in the last_idle duration measurements made in cpuidle_enter_state(),
      potentially returning a duration much larger than reality.
      
      But menu and ladder can gracefully handle erroneously large duration
      intervals without checking for CPUIDLE_FLAG_TIME_INVALID.
      Further, if they don't check CPUIDLE_FLAG_TIME_INVALID, they
      can also benefit from the instances when the duration interval
      is not erroneously large.
      Signed-off-by: Len Brown <len.brown@intel.com>
      Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      62c4cf97
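      A minimal sketch of the acpi_safe_halt() behaviour referred to above, which is what
      originally skewed the idle-time measurements (simplified; the exact kernel code
      differs in detail):

      #include <linux/irqflags.h>      /* safe_halt(), local_irq_disable() on x86 */
      #include <linux/thread_info.h>   /* tif_need_resched() */

      static void acpi_safe_halt(void)
      {
              if (!tif_need_resched()) {
                      /* safe_halt() is "sti; hlt" on x86: interrupts are enabled
                       * before halting, so an interrupt handler may run and get
                       * accounted into the idle duration measured by
                       * cpuidle_enter_state(). */
                      safe_halt();
                      local_irq_disable();
              }
      }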
  4. 16 December 2014, 3 commits
    • x86, irq: Keep balance of IOAPIC pin reference count · cffe0a2b
      Authored by Jiang Liu
      To keep the IOAPIC pin reference count balanced, we need to protect
      pirq_enable_irq(), acpi_pci_irq_enable() and intel_mid_pci_irq_enable()
      against reentrance. There are two cases that can cause reentrance.
      
      The first case is caused by suspend/hibernation. If pcibios_disable_irq
      is called while suspending/hibernating, we don't release the assigned
      IRQ number, because doing so may break the suspend/hibernation. So later,
      when pcibios_enable_irq is called during resume, we shouldn't allocate
      the IRQ number again.
      
      The second case is that acpi_pci_irq_enable() may be called twice for
      PCI devices present at boot time, as below:
      1) pci_acpi_init()
      	--> acpi_pci_irq_enable() if pci_routeirq is true
      2) pci_enable_device()
      	--> pcibios_enable_device()
      		--> acpi_pci_irq_enable()
      We can't kill the kernel parameter pci_routeirq yet because it's still
      needed for debugging purposes.
      
      So the irq_managed flag is introduced to track whether the IRQ number
      has been assigned by the OS and to protect pirq_enable_irq(),
      acpi_pci_irq_enable() and intel_mid_pci_irq_enable() against reentrance.
      Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Len Brown <lenb@kernel.org>
      Link: http://lkml.kernel.org/r/1414387308-27148-13-git-send-email-jiang.liu@linux.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      cffe0a2b
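      A minimal sketch of the reentrance guard described above, using the irq_managed
      flag; the surrounding IRQ lookup is elided and the shape is illustrative rather
      than the literal kernel diff:

      #include <linux/pci.h>

      int acpi_pci_irq_enable(struct pci_dev *dev)
      {
              /* IRQ number already assigned by the OS (e.g. via pci_routeirq in
               * pci_acpi_init()): skip, keeping the pin refcount balanced. */
              if (dev->irq_managed && dev->irq > 0)
                      return 0;

              /* ... look up the interrupt link / GSI and program dev->irq ... */

              dev->irq_managed = 1;   /* remember that the OS owns this IRQ */
              return 0;
      }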
    • ACPI: Fix minor syntax issues in processor_core.c · 13ca62b2
      Authored by Jiang Liu
      Fix minor syntax issues in processor_core.c to follow coding styles.
      Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Len Brown <lenb@kernel.org>
      Link: http://lkml.kernel.org/r/1414387308-27148-7-git-send-email-jiang.liu@linux.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      13ca62b2
    • ACPI: Correct return value of acpi_dev_resource_address_space() · 6658c739
      Authored by Jiang Liu
      Change acpi_dev_resource_address_space() to return failure if the
      acpi_resource structure can't be converted to an ACPI address64
      structure, so that callers can correctly detect the failure.
      Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
      Acked-by: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Len Brown <lenb@kernel.org>
      Link: http://lkml.kernel.org/r/1414387308-27148-6-git-send-email-jiang.liu@linux.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      6658c739
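      A hedged sketch of the corrected error handling; the real function also fills in
      the struct resource from the address64 data, which is elided here:

      #include <linux/acpi.h>
      #include <linux/ioport.h>

      bool acpi_dev_resource_address_space(struct acpi_resource *ares,
                                           struct resource *res)
      {
              struct acpi_resource_address64 addr;

              /* Propagate a failed conversion instead of silently succeeding,
               * so the caller can detect the failure. */
              if (ACPI_FAILURE(acpi_resource_to_address64(ares, &addr)))
                      return false;

              /* ... translate addr into *res (type, start, end, flags) ... */
              return true;
      }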
  5. 15 December 2014, 2 commits
  6. 13 December 2014, 3 commits
  7. 11 December 2014, 1 commit
  8. 04 December 2014, 2 commits
  9. 03 December 2014, 1 commit
  10. 02 December 2014, 2 commits
    • ACPI / sleep: Drain outstanding events after disabling multiple GPEs · c52fa70c
      Authored by Rafael J. Wysocki
      After multiple GPEs have been disabled at the low level in one go,
      like when acpi_disable_all_gpes() is called, we should always drain
      all of the outstanding events from them, or interesting races become
      possible.
      
      For this reason, call acpi_os_wait_events_complete() after
      acpi_enable_all_wakeup_gpes() and acpi_disable_all_gpes() in
      acpi_freeze_prepare() and acpi_power_off_prepare(), respectively.
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      c52fa70c
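      The ordering described above, as a minimal sketch (both functions are heavily
      abbreviated; only the placement of the drain is the point):

      #include <linux/acpi.h>

      static int acpi_freeze_prepare(void)
      {
              acpi_enable_all_wakeup_gpes();
              acpi_os_wait_events_complete();  /* drain outstanding GPE events */
              /* ... enable wakeup devices and arm the SCI for wakeup ... */
              return 0;
      }

      static void acpi_power_off_prepare(void)
      {
              /* ... */
              acpi_disable_all_gpes();
              acpi_os_wait_events_complete();  /* drain outstanding GPE events */
      }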
    • ACPICA: Save current masks of enabled GPEs after enable register writes · c50f13c6
      Authored by Rafael J. Wysocki
      There is a race condition between acpi_hw_disable_all_gpes() or
      acpi_enable_all_wakeup_gpes() and acpi_ev_asynch_enable_gpe() such
      that if the latter wins the race, it may mistakenly enable a GPE
      disabled by the former.  This may lead to premature system wakeups
      during system suspend and potentially to more serious consequences.
      
      The source of the problem is how acpi_hw_low_set_gpe() works when
      passed ACPI_GPE_CONDITIONAL_ENABLE as the second argument.  In that
      case, the GPE will be enabled if the corresponding bit is set in the
      enable_for_run mask of the GPE enable register containing that bit.
      However, acpi_hw_disable_all_gpes() and acpi_enable_all_wakeup_gpes()
      don't modify the enable_for_run masks of GPE registers when writing
      to them.  In consequence, if acpi_ev_asynch_enable_gpe(), which
      eventually calls acpi_hw_low_set_gpe() with the second argument
      equal to ACPI_GPE_CONDITIONAL_ENABLE, is executed in parallel with
      one of these functions, it may reverse changes made by them.
      
      To fix the problem, introduce a new enable_mask field in struct
      acpi_gpe_register_info in which to store the current mask of
      enabled GPEs and modify acpi_hw_low_set_gpe() to take this
      mask into account instead of enable_for_run when its second
      argument is equal to ACPI_GPE_CONDITIONAL_ENABLE.  Also modify
      the low-level routines called by acpi_hw_disable_all_gpes(),
      acpi_enable_all_wakeup_gpes() and acpi_enable_all_runtime_gpes()
      to update the enable_mask masks of GPE registers after all
      (successful) writes to those registers.
      Acked-by: Lv Zheng <lv.zheng@intel.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      c50f13c6
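      A hypothetical helper illustrating the decision described above (the name and
      shape are illustrative, not ACPICA code): the conditional enable is decided
      against the enable_mask snapshot rather than enable_for_run.

      #include <linux/acpi.h>    /* acpi_status, AE_OK, AE_BAD_PARAMETER */
      #include <linux/types.h>

      static acpi_status gpe_conditional_enable(u8 enable_mask, u8 register_bit)
      {
              /* Refuse to re-enable a GPE that the last successful write to the
               * enable register left disabled, e.g. by acpi_hw_disable_all_gpes()
               * or acpi_enable_all_wakeup_gpes(). */
              if (!(register_bit & enable_mask))
                      return AE_BAD_PARAMETER;

              return AE_OK;   /* caller proceeds with the normal enable path */
      }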
  11. 01 December 2014, 1 commit
  12. 28 November 2014, 5 commits
  13. 27 November 2014, 5 commits
  14. 26 November 2014, 1 commit
  15. 25 November 2014, 1 commit
  16. 24 November 2014, 1 commit
  17. 20 November 2014, 1 commit
    • ACPI / PM: Ignore wakeup setting if the ACPI companion can't wake up · 78579b7c
      Authored by Rafael J. Wysocki
      As reported by Dmitry, on some Chromebooks there are devices with
      corresponding ACPI objects and with unusual system wakeup
      configuration.  Namely, they technically are wakeup-capable, but the
      wakeup is handled via a platform-specific out-of-band mechanism and
      the ACPI PM layer has no information on the wakeup capability.  As
      a result, device_may_wakeup(dev) called from acpi_dev_suspend_late()
      returns 'true' for those devices, but the wakeup.flags.valid flag is
      unset for the corresponding ACPI device objects, so acpi_device_wakeup()
      reproducibly fails for them causing acpi_dev_suspend_late() to return
      an error code.  The entire system suspend is then aborted and the
      machines in question cannot suspend at all.
      
      Address the problem by ignoring the device_may_wakeup(dev) return
      value in acpi_dev_suspend_late() if the ACPI companion of the device
      being handled has wakeup.flags.valid unset (in which case it is clear
      that the wakeup is supposed to be handled by other means).
      
      This fixes a regression introduced by commit a76e9bd8 (i2c:
      attach/detach I2C client device to the ACPI power domain) as the
      affected systems could suspend and resume successfully before that
      commit.
      
      Fixes: a76e9bd8 (i2c: attach/detach I2C client device to the ACPI power domain)
      Reported-by: Dmitry Torokhov <dtor@chromium.org>
      Reviewed-by: Dmitry Torokhov <dtor@chromium.org>
      Cc: 3.13+ <stable@vger.kernel.org> # 3.13+
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      78579b7c
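      A sketch of the check described above, under the simplified shape below (the
      real acpi_dev_suspend_late() also selects the target low-power state, which is
      elided):

      #include <linux/acpi.h>
      #include <linux/pm_wakeup.h>

      static int acpi_dev_suspend_late(struct device *dev)
      {
              struct acpi_device *adev = ACPI_COMPANION(dev);
              bool wakeup;

              if (!adev)
                      return 0;

              /* Ignore device_may_wakeup() when ACPI has no wakeup information
               * for this device; its wakeup is handled by out-of-band means. */
              wakeup = adev->wakeup.flags.valid && device_may_wakeup(dev);

              /* ... pass 'wakeup' to acpi_device_wakeup() and put the device
               *     into the appropriate low-power state as before ... */
              return 0;
      }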
  18. 18 November 2014, 3 commits
  19. 13 November 2014, 1 commit
    • cpuidle: Invert CPUIDLE_FLAG_TIME_VALID logic · b82b6cca
      Authored by Daniel Lezcano
      The only place where the time is invalid is when the ACPI_CSTATE_FFH entry
      method is not set. Otherwise, for all the drivers, the time can be measured
      correctly.
      
      Instead of duplicating the CPUIDLE_FLAG_TIME_VALID flag in all the drivers
      for all the states, invert the logic by replacing it with a
      CPUIDLE_FLAG_TIME_INVALID flag: set the new flag only for the ACPI idle
      driver, remove the former flag from all the drivers, and invert the check
      in the governors.
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      b82b6cca
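      The inverted check, governor-side, in sketch form (the helper name is
      illustrative; the menu and ladder governors perform the equivalent test inline):

      #include <linux/cpuidle.h>

      static bool idle_time_is_measurable(struct cpuidle_driver *drv, int index)
      {
              /* Before: trust the measurement only if CPUIDLE_FLAG_TIME_VALID was
               * set on the state.  After: trust it unless the state explicitly
               * carries CPUIDLE_FLAG_TIME_INVALID (only the ACPI idle driver
               * sets it). */
              return !(drv->states[index].flags & CPUIDLE_FLAG_TIME_INVALID);
      }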
  20. 12 November 2014, 4 commits
    • ACPI / OSL: Add IRQ handler flushing support in the OSL. · 90253a79
      Authored by Lv Zheng
      It is possible that a GPE handler or a fixed event handler is still running
      after the handlers have been removed by invoking acpi_remove_gpe_handler()
      or acpi_remove_fixed_event_handler(), and this can crash the OSPM after a
      module removal. In the Linux kernel, though all other GPE drivers are not
      modules, the IPMI_SI driver (ipmi_si_intf.c) can be compiled as a module,
      so we still need to consider a solution for this issue when that driver
      switches to ACPI_GPE_RAW_HANDLER mode in order to invoke the GPE APIs.
      
      ACPICA expects acpi_os_wait_events_complete() to be invoked after GPE
      disabling so that the OSPM can ensure all running GPE handlers have exited.
      But currently acpi_os_wait_events_complete() can only flush the _Lxx/_Exx
      evaluation work queue, and this approach cannot work for drivers that have
      installed a dedicated GPE handler.
      
      The only way to protect a callback is to maintain some state (a reference
      count, a state machine) before invoking the callback. This issue can
      therefore only be fixed by one of the following means:
      1. Flush the GPE in ACPICA before invoking the GPE handler. But currently,
         there is no such implementation in acpi_ev_gpe_dispatch().
      2. Flush the GPE in the ACPICA OSL before invoking the SCI handler. But
         currently, there is no such implementation in acpi_irq().
      3. Flush the IRQ in the OSPM IRQ layer before invoking the IRQ handler.
         In the Linux kernel, this can be done by synchronize_irq().
      4. Flush scheduling in the OSPM vector entry layer before invoking the
         vector. In Linux, this can be done by synchronize_sched().
      
      ACPICA expects the GPE handlers to be flushed either by the ACPICA OSL or
      by the GPE drivers themselves. If it were implemented by the GPE drivers,
      we should see synchronize_irq()/synchronize_sched() invoked in such
      drivers. If it is implemented by the ACPICA OSL, ACPICA currently provides
      the acpi_os_wait_events_complete() hook to achieve this. After the
      following commit:
        Commit: 69c841b6
        Author: Lv Zheng <lv.zheng@intel.com>
        Subject: ACPICA: Update use of acpi_os_wait_events_complete interface.
      the OSL acpi_os_wait_events_complete() is invoked after a GPE handler is
      removed by acpi_remove_gpe_handler() or a fixed event handler is removed
      by acpi_remove_fixed_event_handler(). Thus it is now possible to implement
      GPE handler flushing using this ACPICA OSL hook, so solution 1 is
      currently not taken into account.
      
      By examining the IPMI_SI driver, we noticed that the driver:
      1. Uses free_irq() to flush non-GPE-based IRQ handlers; in free_irq(),
         synchronize_irq() is invoked, and
      2. Uses acpi_remove_gpe_handler() to flush GPE-based IRQ handlers; for
         such IRQ handlers, no synchronize_irq() is invoked.
      Since there is no synchronize_sched() implemented in this driver, from the
      driver's perspective acpi_remove_gpe_handler() should have properly
      flushed the GPE handlers for it. And since the driver doesn't invoke
      synchronize_irq() itself, solution 3 is not what the driver expects.
      
      This patch implements solution 2. However, given that the GPEs are managed
      inside ACPICA and that flushing a GPE would require re-implementing the
      whole GPE management code in the OSL, this patch flushes the IRQ in
      acpi_os_wait_events_complete() instead of flushing the GPE. The flushing
      could last longer than strictly necessary: although the target GPE/fixed
      event being removed can be flushed quickly, other GPEs/fixed events can
      still be raised during the flushing period.
      
      The flushing is done by invoking synchronize_hardirq() in
      acpi_os_wait_events_complete(). The reason we don't invoke
      synchronize_irq() is that ACPICA is currently not threaded-IRQ capable,
      and the only difference between synchronize_irq() and synchronize_hardirq()
      is that the former also flushes threaded IRQ handlers. Thus using
      synchronize_hardirq() helps to reduce the overall synchronization time for
      the current ACPICA implementation.
      Signed-off-by: Lv Zheng <lv.zheng@intel.com>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Robert Moore <robert.moore@intel.com>
      Cc: Corey Minyard <minyard@acm.org>
      Cc: linux-acpi@vger.kernel.org
      Cc: devel@acpica.org
      Cc: openipmi-developer@lists.sourceforge.net
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      90253a79
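      Roughly the shape this gives acpi_os_wait_events_complete() (simplified from
      the drivers/acpi/osl.c context; treat the guard and workqueue names as
      illustrative):

      #include <linux/acpi.h>
      #include <linux/interrupt.h>
      #include <linux/workqueue.h>

      void acpi_os_wait_events_complete(void)
      {
              /* Make sure no GPE or fixed-event handler is still running inside
               * the SCI handler on another CPU after its removal.
               * synchronize_hardirq() is sufficient because ACPICA does not use
               * threaded IRQ handlers. */
              if (acpi_irq_handler)
                      synchronize_hardirq(acpi_gbl_FADT.sci_interrupt);

              /* Keep flushing the _Lxx/_Exx evaluation and notify work queues. */
              flush_workqueue(kacpid_wq);
              flush_workqueue(kacpi_notify_wq);
      }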
    • ACPI / LPSS: introduce a 'proxy' device to power on LPSS for DMA · 6c17ee44
      Authored by Andy Shevchenko
      The LPSS DMA controller does not have _PS0 and _PS3 methods. Moreover, it
      can be powered off automatically whenever the last LPSS device goes down.
      When the power is off, any access to the DMA controller hangs the system.
      The behaviour has been reproduced on some HP laptops based on Intel Bay
      Trail [1] as well as on the Asus T100 Transformer.
      
      This patch introduces a so-called 'proxy' device that has the knobs to
      control the power of the LPSS island. When the system needs to program
      the DMA controller, the ACPI LPSS power domain callbacks wake or suspend
      this 'proxy' device.
      
      [1] http://www.spinics.net/lists/dmaengine/msg01514.html
      Suggested-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Tested-by: Scott Ashcroft <scott.ashcroft@talk21.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      6c17ee44
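      A hypothetical sketch of the 'proxy' idea (the lpss_proxy_dev pointer and the
      function names below are illustrative, not the actual acpi_lpss.c symbols):
      the power-domain callbacks runtime-resume a proxy device so the LPSS island
      is powered before the DMA controller is touched.

      #include <linux/device.h>
      #include <linux/pm_runtime.h>

      static struct device *lpss_proxy_dev;  /* illustrative: set when the island is enumerated */

      static int lpss_power_on_for_dma(void)
      {
              /* Waking the proxy powers the island, so DMA register access is safe. */
              return pm_runtime_get_sync(lpss_proxy_dev);
      }

      static void lpss_power_off_after_dma(void)
      {
              pm_runtime_put(lpss_proxy_dev);  /* allow the island to power down again */
      }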
    • ACPI / LPSS: allow to use specific PM domain during ->probe() · 01ac170b
      Authored by Andy Shevchenko
      The LPSS DMA controller needs to use the specific PM domain callbacks at
      an early stage, namely in ->probe(). This patch moves the specific PM
      domain assignment earlier so that it is available during the whole
      lifetime of the device in the system.
      Suggested-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Tested-by: Scott Ashcroft <scott.ashcroft@talk21.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      01ac170b
    • ACPI / LPSS: add all LPSS devices to the specific power domain · cb39dcdd
      Authored by Andy Shevchenko
      Currently the LPSS devices are placed in different power domains depending
      on the LPSS_SAVE_CTX flag. We would like to use the specific power domain
      for all LPSS devices.
      
      The LPSS DMA controller has no knobs to control its power state, and the
      specific power domain implementation will handle this case. This patch is
      a preparation for that.
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Tested-by: Scott Ashcroft <scott.ashcroft@talk21.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      cb39dcdd