Commit 39ed853a authored by Linus Torvalds

Merge tag 'pm+acpi-4.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management and ACPI fixes from Rafael Wysocki:
 "These are fixes for recent regressions (ACPI resources management,
  suspend-to-idle), stable-candidate fixes (ACPI backlight), fixes
  related to the wakeup IRQ management changes made in v3.18, other
  fixes (suspend-to-idle, cpufreq ppc driver) and a couple of cleanups
  (suspend-to-idle, generic power domains, ACPI backlight).

  Specifics:

   - Fix ACPI resources management problems introduced by the recent
     rework of the code in question (Jiang Liu) and a build issue
     introduced by those changes (Joachim Nilsson).

   - Fix a recent suspend-to-idle regression on systems where entering
     idle states causes local timers to stop, prevent suspend-to-idle
     from crashing in restricted configurations (no cpuidle driver,
     cpuidle disabled etc.) and clean up the idle loop somewhat while at
     it (Rafael J Wysocki).

   - Fix build problem in the cpufreq ppc driver (Geert Uytterhoeven).

   - Allow the ACPI backlight driver module to be loaded if ACPI is
     disabled which helps the i915 driver in those configurations
     (stable-candidate) and change the code to help debug unusual use
     cases (Chris Wilson).

   - Wakeup IRQ management changes in v3.18 caused some drivers on the
     at91 platform to trigger a warning from the IRQ core related to an
     unexpected combination of interrupt action handler flags.  However,
     on at91 a timer IRQ is shared with some other devices (including
     system wakeup ones) and that leads to the unusual combination of
     flags in question.

     To make it possible to avoid the warning introduce a new interrupt
     action handler flag (which can be used by drivers to indicate the
     special case to the core) and rework the problematic at91 drivers
     to use it and work as expected during system suspend/resume.  From
     Boris Brezillon, Rafael J Wysocki and Mark Rutland.

   - Clean up the generic power domains subsystem's debugfs interface
     (Kevin Hilman)"

* tag 'pm+acpi-4.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  genirq / PM: describe IRQF_COND_SUSPEND
  tty: serial: atmel: rework interrupt and wakeup handling
  watchdog: at91sam9: request the irq with IRQF_NO_SUSPEND
  cpuidle / sleep: Use broadcast timer for states that stop local timer
  clk: at91: implement suspend/resume for the PMC irqchip
  rtc: at91rm9200: rework wakeup and interrupt handling
  rtc: at91sam9: rework wakeup and interrupt handling
  PM / wakeup: export pm_system_wakeup symbol
  genirq / PM: Add flag for shared NO_SUSPEND interrupt lines
  ACPI / video: Propagate the error code for acpi_video_register
  ACPI / video: Load the module even if ACPI is disabled
  PM / Domains: cleanup: rename gpd -> genpd in debugfs interface
  cpufreq: ppc: Add missing #include <asm/smp.h>
  x86/PCI/ACPI: Relax ACPI resource descriptor checks to work around BIOS bugs
  x86/PCI/ACPI: Ignore resources consumed by host bridge itself
  cpuidle: Clean up fallback handling in cpuidle_idle_call()
  cpuidle / sleep: Do sanity checks in cpuidle_enter_freeze() too
  idle / sleep: Avoid excessive disabling and enabling interrupts
  PCI: versatile: Update for list_for_each_entry() API change
  genirq / PM: better describe IRQF_NO_SUSPEND semantics
Documentation/power/suspend-and-interrupts.txt
@@ -40,8 +40,10 @@ but also to IPIs and to some other special-purpose interrupts.
 
 The IRQF_NO_SUSPEND flag is used to indicate that to the IRQ subsystem when
 requesting a special-purpose interrupt. It causes suspend_device_irqs() to
-leave the corresponding IRQ enabled so as to allow the interrupt to work all
-the time as expected.
+leave the corresponding IRQ enabled so as to allow the interrupt to work as
+expected during the suspend-resume cycle, but does not guarantee that the
+interrupt will wake the system from a suspended state -- for such cases it is
+necessary to use enable_irq_wake().
 
 Note that the IRQF_NO_SUSPEND flag affects the entire IRQ and not just one
 user of it. Thus, if the IRQ is shared, all of the interrupt handlers installed
@@ -110,8 +112,9 @@ any special interrupt handling logic for it to work.
 IRQF_NO_SUSPEND and enable_irq_wake()
 -------------------------------------
 
-There are no valid reasons to use both enable_irq_wake() and the IRQF_NO_SUSPEND
-flag on the same IRQ.
+There are very few valid reasons to use both enable_irq_wake() and the
+IRQF_NO_SUSPEND flag on the same IRQ, and it is never valid to use both for the
+same device.
 
 First of all, if the IRQ is not shared, the rules for handling IRQF_NO_SUSPEND
 interrupts (interrupt handlers are invoked after suspend_device_irqs()) are
@@ -120,4 +123,13 @@ handlers are not invoked after suspend_device_irqs()).
 Second, both enable_irq_wake() and IRQF_NO_SUSPEND apply to entire IRQs and not
 to individual interrupt handlers, so sharing an IRQ between a system wakeup
-interrupt source and an IRQF_NO_SUSPEND interrupt source does not make sense.
+interrupt source and an IRQF_NO_SUSPEND interrupt source does not generally
+make sense.
+
+In rare cases an IRQ can be shared between a wakeup device driver and an
+IRQF_NO_SUSPEND user. In order for this to be safe, the wakeup device driver
+must be able to discern spurious IRQs from genuine wakeup events (signalling
+the latter to the core with pm_system_wakeup()), must use enable_irq_wake() to
+ensure that the IRQ will function as a wakeup source, and must request the IRQ
+with IRQF_COND_SUSPEND to tell the core that it meets these requirements. If
+these requirements are not met, it is not valid to use IRQF_COND_SUSPEND.
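
Read together, the rules above amount to a fixed pattern on the driver side. The sketch below is an editor-added illustration, not code from this series: the foo_* names and the status-register check are hypothetical stand-ins for whatever device-specific logic identifies a genuine event.

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/suspend.h>

struct foo_dev {
        void __iomem *regs;
        spinlock_t lock;        /* protects "suspended" against the handler */
        bool suspended;         /* set by the suspend callback */
        int irq;
};

static irqreturn_t foo_irq_handler(int irq, void *dev_id)
{
        struct foo_dev *foo = dev_id;
        irqreturn_t ret = IRQ_NONE;

        spin_lock(&foo->lock);
        /* The line is shared: check that the interrupt is really ours. */
        if (readl(foo->regs /* hypothetical status register */) & 0x1) {
                if (foo->suspended)
                        pm_system_wakeup();     /* genuine wakeup event */
                /* otherwise normal event handling would go here */
                ret = IRQ_HANDLED;
        }
        spin_unlock(&foo->lock);

        return ret;
}

static int foo_probe(struct foo_dev *foo)
{
        /* The line is shared with a NO_SUSPEND user, so IRQF_COND_SUSPEND
         * tells the IRQ core that this handler copes with being called
         * while the system is suspending. */
        return request_irq(foo->irq, foo_irq_handler,
                           IRQF_SHARED | IRQF_COND_SUSPEND, "foo", foo);
}

static int foo_suspend(struct foo_dev *foo)
{
        unsigned long flags;

        enable_irq_wake(foo->irq);      /* make the line a wakeup source */

        spin_lock_irqsave(&foo->lock, flags);
        foo->suspended = true;
        spin_unlock_irqrestore(&foo->lock, flags);

        return 0;
}

The at91 RTC, PMC and serial changes below all follow this shape, with a per-device "suspended" flag flipped in the suspend/resume callbacks.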
arch/x86/pci/acpi.c
@@ -331,7 +331,7 @@ static void probe_pci_root_info(struct pci_root_info *info,
                                struct list_head *list)
 {
        int ret;
-       struct resource_entry *entry;
+       struct resource_entry *entry, *tmp;
 
        sprintf(info->name, "PCI Bus %04x:%02x", domain, busnum);
        info->bridge = device;
@@ -345,8 +345,13 @@ static void probe_pci_root_info(struct pci_root_info *info,
                dev_dbg(&device->dev,
                        "no IO and memory resources present in _CRS\n");
        else
-               resource_list_for_each_entry(entry, list)
-                       entry->res->name = info->name;
+               resource_list_for_each_entry_safe(entry, tmp, list) {
+                       if ((entry->res->flags & IORESOURCE_WINDOW) == 0 ||
+                           (entry->res->flags & IORESOURCE_DISABLED))
+                               resource_list_destroy_entry(entry);
+                       else
+                               entry->res->name = info->name;
+               }
 }
 
 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
......
drivers/acpi/resource.c
@@ -42,8 +42,10 @@ static bool acpi_dev_resource_len_valid(u64 start, u64 end, u64 len, bool io)
         * CHECKME: len might be required to check versus a minimum
         * length as well. 1 for io is fine, but for memory it does
         * not make any sense at all.
+        * Note: some BIOSes report incorrect length for ACPI address space
+        * descriptor, so remove check of 'reslen == len' to avoid regression.
         */
-       if (len && reslen && reslen == len && start <= end)
+       if (len && reslen && start <= end)
                return true;
 
        pr_debug("ACPI: invalid or unassigned resource %s [%016llx - %016llx] length [%016llx]\n",
......
drivers/acpi/video.c
@@ -2110,7 +2110,8 @@ static int __init intel_opregion_present(void)
 
 int acpi_video_register(void)
 {
-       int result = 0;
+       int ret;
+
        if (register_count) {
                /*
                 * if the function of acpi_video_register is already called,
@@ -2122,9 +2123,9 @@ int acpi_video_register(void)
        mutex_init(&video_list_lock);
        INIT_LIST_HEAD(&video_bus_head);
 
-       result = acpi_bus_register_driver(&acpi_video_bus);
-       if (result < 0)
-               return -ENODEV;
+       ret = acpi_bus_register_driver(&acpi_video_bus);
+       if (ret)
+               return ret;
 
        /*
         * When the acpi_video_bus is loaded successfully, increase
@@ -2176,6 +2177,17 @@ EXPORT_SYMBOL(acpi_video_unregister_backlight);
 
 static int __init acpi_video_init(void)
 {
+       /*
+        * Let the module load even if ACPI is disabled (e.g. due to
+        * a broken BIOS) so that i915.ko can still be loaded on such
+        * old systems without an AcpiOpRegion.
+        *
+        * acpi_video_register() will report -ENODEV later as well due
+        * to acpi_disabled when i915.ko tries to register itself afterwards.
+        */
+       if (acpi_disabled)
+               return 0;
+
        dmi_check_system(video_dmi_table);
 
        if (intel_opregion_present())
......
drivers/base/power/domain.c
@@ -2242,7 +2242,7 @@ static void rtpm_status_str(struct seq_file *s, struct device *dev)
 }
 
 static int pm_genpd_summary_one(struct seq_file *s,
-                               struct generic_pm_domain *gpd)
+                               struct generic_pm_domain *genpd)
 {
        static const char * const status_lookup[] = {
                [GPD_STATE_ACTIVE] = "on",
@@ -2256,26 +2256,26 @@ static int pm_genpd_summary_one(struct seq_file *s,
        struct gpd_link *link;
        int ret;
 
-       ret = mutex_lock_interruptible(&gpd->lock);
+       ret = mutex_lock_interruptible(&genpd->lock);
        if (ret)
                return -ERESTARTSYS;
 
-       if (WARN_ON(gpd->status >= ARRAY_SIZE(status_lookup)))
+       if (WARN_ON(genpd->status >= ARRAY_SIZE(status_lookup)))
                goto exit;
-       seq_printf(s, "%-30s  %-15s ", gpd->name, status_lookup[gpd->status]);
+       seq_printf(s, "%-30s  %-15s ", genpd->name, status_lookup[genpd->status]);
 
        /*
         * Modifications on the list require holding locks on both
         * master and slave, so we are safe.
-        * Also gpd->name is immutable.
+        * Also genpd->name is immutable.
         */
-       list_for_each_entry(link, &gpd->master_links, master_node) {
+       list_for_each_entry(link, &genpd->master_links, master_node) {
                seq_printf(s, "%s", link->slave->name);
-               if (!list_is_last(&link->master_node, &gpd->master_links))
+               if (!list_is_last(&link->master_node, &genpd->master_links))
                        seq_puts(s, ", ");
        }
 
-       list_for_each_entry(pm_data, &gpd->dev_list, list_node) {
+       list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
                kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL);
                if (kobj_path == NULL)
                        continue;
@@ -2287,14 +2287,14 @@ static int pm_genpd_summary_one(struct seq_file *s,
        seq_puts(s, "\n");
 exit:
-       mutex_unlock(&gpd->lock);
+       mutex_unlock(&genpd->lock);
 
        return 0;
 }
 
 static int pm_genpd_summary_show(struct seq_file *s, void *data)
 {
-       struct generic_pm_domain *gpd;
+       struct generic_pm_domain *genpd;
        int ret = 0;
 
        seq_puts(s, "    domain                      status         slaves\n");
@@ -2305,8 +2305,8 @@ static int pm_genpd_summary_show(struct seq_file *s, void *data)
        if (ret)
                return -ERESTARTSYS;
 
-       list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
-               ret = pm_genpd_summary_one(s, gpd);
+       list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
+               ret = pm_genpd_summary_one(s, genpd);
                if (ret)
                        break;
        }
......
drivers/base/power/wakeup.c
@@ -730,6 +730,7 @@ void pm_system_wakeup(void)
        pm_abort_suspend = true;
        freeze_wake();
 }
+EXPORT_SYMBOL_GPL(pm_system_wakeup);
 
 void pm_wakeup_clear(void)
 {
......
drivers/clk/at91/pmc.c
@@ -89,12 +89,29 @@ static int pmc_irq_set_type(struct irq_data *d, unsigned type)
        return 0;
 }
 
+static void pmc_irq_suspend(struct irq_data *d)
+{
+       struct at91_pmc *pmc = irq_data_get_irq_chip_data(d);
+
+       pmc->imr = pmc_read(pmc, AT91_PMC_IMR);
+       pmc_write(pmc, AT91_PMC_IDR, pmc->imr);
+}
+
+static void pmc_irq_resume(struct irq_data *d)
+{
+       struct at91_pmc *pmc = irq_data_get_irq_chip_data(d);
+
+       pmc_write(pmc, AT91_PMC_IER, pmc->imr);
+}
+
 static struct irq_chip pmc_irq = {
        .name = "PMC",
        .irq_disable = pmc_irq_mask,
        .irq_mask = pmc_irq_mask,
        .irq_unmask = pmc_irq_unmask,
        .irq_set_type = pmc_irq_set_type,
+       .irq_suspend = pmc_irq_suspend,
+       .irq_resume = pmc_irq_resume,
 };
 
 static struct lock_class_key pmc_lock_class;
@@ -224,7 +241,8 @@ static struct at91_pmc *__init at91_pmc_init(struct device_node *np,
                goto out_free_pmc;
 
        pmc_write(pmc, AT91_PMC_IDR, 0xffffffff);
-       if (request_irq(pmc->virq, pmc_irq_handler, IRQF_SHARED, "pmc", pmc))
+       if (request_irq(pmc->virq, pmc_irq_handler,
+                       IRQF_SHARED | IRQF_COND_SUSPEND, "pmc", pmc))
                goto out_remove_irqdomain;
 
        return pmc;
......
drivers/clk/at91/pmc.h
@@ -33,6 +33,7 @@ struct at91_pmc {
        spinlock_t lock;
        const struct at91_pmc_caps *caps;
        struct irq_domain *irqdomain;
+       u32 imr;
 };
 
 static inline void pmc_lock(struct at91_pmc *pmc)
......
drivers/cpufreq/ppc-corenet-cpufreq.c
@@ -22,6 +22,8 @@
 #include <linux/smp.h>
 #include <sysdev/fsl_soc.h>
 
+#include <asm/smp.h>   /* for get_hard_smp_processor_id() in UP configs */
+
 /**
  * struct cpu_data - per CPU data struct
  * @parent: the parent node of cpu clock
......
drivers/cpuidle/cpuidle.c
@@ -44,6 +44,12 @@ void disable_cpuidle(void)
        off = 1;
 }
 
+bool cpuidle_not_available(struct cpuidle_driver *drv,
+                          struct cpuidle_device *dev)
+{
+       return off || !initialized || !drv || !dev || !dev->enabled;
+}
+
 /**
  * cpuidle_play_dead - cpu off-lining
  *
@@ -66,14 +72,8 @@ int cpuidle_play_dead(void)
        return -ENODEV;
 }
 
-/**
- * cpuidle_find_deepest_state - Find deepest state meeting specific conditions.
- * @drv: cpuidle driver for the given CPU.
- * @dev: cpuidle device for the given CPU.
- * @freeze: Whether or not the state should be suitable for suspend-to-idle.
- */
-static int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
-                                     struct cpuidle_device *dev, bool freeze)
+static int find_deepest_state(struct cpuidle_driver *drv,
+                             struct cpuidle_device *dev, bool freeze)
 {
        unsigned int latency_req = 0;
        int i, ret = freeze ? -1 : CPUIDLE_DRIVER_STATE_START - 1;
@@ -92,6 +92,17 @@ static int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
        return ret;
 }
 
+/**
+ * cpuidle_find_deepest_state - Find the deepest available idle state.
+ * @drv: cpuidle driver for the given CPU.
+ * @dev: cpuidle device for the given CPU.
+ */
+int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
+                              struct cpuidle_device *dev)
+{
+       return find_deepest_state(drv, dev, false);
+}
+
 static void enter_freeze_proper(struct cpuidle_driver *drv,
                                struct cpuidle_device *dev, int index)
 {
@@ -113,15 +124,14 @@ static void enter_freeze_proper(struct cpuidle_driver *drv,
 /**
  * cpuidle_enter_freeze - Enter an idle state suitable for suspend-to-idle.
+ * @drv: cpuidle driver for the given CPU.
+ * @dev: cpuidle device for the given CPU.
  *
  * If there are states with the ->enter_freeze callback, find the deepest of
- * them and enter it with frozen tick.  Otherwise, find the deepest state
- * available and enter it normally.
+ * them and enter it with frozen tick.
  */
-void cpuidle_enter_freeze(void)
+int cpuidle_enter_freeze(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 {
-       struct cpuidle_device *dev = __this_cpu_read(cpuidle_devices);
-       struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
        int index;
 
        /*
@@ -129,24 +139,11 @@ void cpuidle_enter_freeze(void)
         * that interrupts won't be enabled when it exits and allows the tick to
         * be frozen safely.
         */
-       index = cpuidle_find_deepest_state(drv, dev, true);
-       if (index >= 0) {
-               enter_freeze_proper(drv, dev, index);
-               return;
-       }
-
-       /*
-        * It is not safe to freeze the tick, find the deepest state available
-        * at all and try to enter it normally.
-        */
-       index = cpuidle_find_deepest_state(drv, dev, false);
+       index = find_deepest_state(drv, dev, true);
        if (index >= 0)
-               cpuidle_enter(drv, dev, index);
-       else
-               arch_cpu_idle();
+               enter_freeze_proper(drv, dev, index);
 
-       /* Interrupts are enabled again here. */
-       local_irq_disable();
+       return index;
 }
 
 /**
@@ -205,12 +202,6 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
  */
 int cpuidle_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 {
-       if (off || !initialized)
-               return -ENODEV;
-
-       if (!drv || !dev || !dev->enabled)
-               return -EBUSY;
-
        return cpuidle_curr_governor->select(drv, dev);
 }
......
drivers/pci/host/pci-versatile.c
@@ -80,7 +80,7 @@ static int versatile_pci_parse_request_of_pci_ranges(struct device *dev,
        if (err)
                return err;
 
-       resource_list_for_each_entry(win, res, list) {
+       resource_list_for_each_entry(win, res) {
                struct resource *parent, *res = win->res;
 
                switch (resource_type(res)) {
......
drivers/rtc/rtc-at91rm9200.c
@@ -31,6 +31,7 @@
 #include <linux/io.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
+#include <linux/suspend.h>
 #include <linux/uaccess.h>
 
 #include "rtc-at91rm9200.h"
@@ -54,6 +55,10 @@ static void __iomem *at91_rtc_regs;
 static int irq;
 static DEFINE_SPINLOCK(at91_rtc_lock);
 static u32 at91_rtc_shadow_imr;
+static bool suspended;
+static DEFINE_SPINLOCK(suspended_lock);
+static unsigned long cached_events;
+static u32 at91_rtc_imr;
 
 static void at91_rtc_write_ier(u32 mask)
 {
@@ -290,7 +295,9 @@ static irqreturn_t at91_rtc_interrupt(int irq, void *dev_id)
        struct rtc_device *rtc = platform_get_drvdata(pdev);
        unsigned int rtsr;
        unsigned long events = 0;
+       int ret = IRQ_NONE;
 
+       spin_lock(&suspended_lock);
        rtsr = at91_rtc_read(AT91_RTC_SR) & at91_rtc_read_imr();
        if (rtsr) {             /* this interrupt is shared!  Is it ours? */
                if (rtsr & AT91_RTC_ALARM)
@@ -304,14 +311,22 @@ static irqreturn_t at91_rtc_interrupt(int irq, void *dev_id)
 
                at91_rtc_write(AT91_RTC_SCCR, rtsr);    /* clear status reg */
 
-               rtc_update_irq(rtc, 1, events);
+               if (!suspended) {
+                       rtc_update_irq(rtc, 1, events);
 
-               dev_dbg(&pdev->dev, "%s(): num=%ld, events=0x%02lx\n", __func__,
-                       events >> 8, events & 0x000000FF);
+                       dev_dbg(&pdev->dev, "%s(): num=%ld, events=0x%02lx\n",
+                               __func__, events >> 8, events & 0x000000FF);
+               } else {
+                       cached_events |= events;
+                       at91_rtc_write_idr(at91_rtc_imr);
+                       pm_system_wakeup();
+               }
 
-               return IRQ_HANDLED;
+               ret = IRQ_HANDLED;
        }
-       return IRQ_NONE;                /* not handled */
+       spin_unlock(&suspended_lock);
+
+       return ret;
 }
 
 static const struct at91_rtc_config at91rm9200_config = {
@@ -401,8 +416,8 @@ static int __init at91_rtc_probe(struct platform_device *pdev)
                                        AT91_RTC_CALEV);
 
        ret = devm_request_irq(&pdev->dev, irq, at91_rtc_interrupt,
-                              IRQF_SHARED,
-                              "at91_rtc", pdev);
+                              IRQF_SHARED | IRQF_COND_SUSPEND,
+                              "at91_rtc", pdev);
        if (ret) {
                dev_err(&pdev->dev, "IRQ %d already in use.\n", irq);
                return ret;
@@ -454,8 +469,6 @@ static void at91_rtc_shutdown(struct platform_device *pdev)
 
 /* AT91RM9200 RTC Power management control */
 
-static u32 at91_rtc_imr;
-
 static int at91_rtc_suspend(struct device *dev)
 {
        /* this IRQ is shared with DBGU and other hardware which isn't
@@ -464,21 +477,42 @@ static int at91_rtc_suspend(struct device *dev)
        at91_rtc_imr = at91_rtc_read_imr()
                        & (AT91_RTC_ALARM|AT91_RTC_SECEV);
 
        if (at91_rtc_imr) {
-               if (device_may_wakeup(dev))
+               if (device_may_wakeup(dev)) {
+                       unsigned long flags;
+
                        enable_irq_wake(irq);
-               else
+
+                       spin_lock_irqsave(&suspended_lock, flags);
+                       suspended = true;
+                       spin_unlock_irqrestore(&suspended_lock, flags);
+               } else {
                        at91_rtc_write_idr(at91_rtc_imr);
+               }
        }
        return 0;
 }
 
 static int at91_rtc_resume(struct device *dev)
 {
+       struct rtc_device *rtc = dev_get_drvdata(dev);
+
        if (at91_rtc_imr) {
-               if (device_may_wakeup(dev))
+               if (device_may_wakeup(dev)) {
+                       unsigned long flags;
+
+                       spin_lock_irqsave(&suspended_lock, flags);
+
+                       if (cached_events) {
+                               rtc_update_irq(rtc, 1, cached_events);
+                               cached_events = 0;
+                       }
+
+                       suspended = false;
+
+                       spin_unlock_irqrestore(&suspended_lock, flags);
+
                        disable_irq_wake(irq);
-               else
-                       at91_rtc_write_ier(at91_rtc_imr);
+               }
+
+               at91_rtc_write_ier(at91_rtc_imr);
        }
        return 0;
 }
......
drivers/rtc/rtc-at91sam9.c
@@ -23,6 +23,7 @@
 #include <linux/io.h>
 #include <linux/mfd/syscon.h>
 #include <linux/regmap.h>
+#include <linux/suspend.h>
 #include <linux/clk.h>
 
 /*
@@ -77,6 +78,9 @@ struct sam9_rtc {
        unsigned int            gpbr_offset;
        int                     irq;
        struct clk              *sclk;
+       bool                    suspended;
+       unsigned long           events;
+       spinlock_t              lock;
 };
 
 #define rtt_readl(rtc, field) \
@@ -271,14 +275,9 @@ static int at91_rtc_proc(struct device *dev, struct seq_file *seq)
        return 0;
 }
 
-/*
- * IRQ handler for the RTC
- */
-static irqreturn_t at91_rtc_interrupt(int irq, void *_rtc)
+static irqreturn_t at91_rtc_cache_events(struct sam9_rtc *rtc)
 {
-       struct sam9_rtc *rtc = _rtc;
        u32 sr, mr;
-       unsigned long events = 0;
 
        /* Shared interrupt may be for another device.  Note: reading
         * SR clears it, so we must only read it in this irq handler!
@@ -290,18 +289,54 @@ static irqreturn_t at91_rtc_interrupt(int irq, void *_rtc)
 
        /* alarm status */
        if (sr & AT91_RTT_ALMS)
-               events |= (RTC_AF | RTC_IRQF);
+               rtc->events |= (RTC_AF | RTC_IRQF);
 
        /* timer update/increment */
        if (sr & AT91_RTT_RTTINC)
-               events |= (RTC_UF | RTC_IRQF);
+               rtc->events |= (RTC_UF | RTC_IRQF);
 
-       rtc_update_irq(rtc->rtcdev, 1, events);
+       return IRQ_HANDLED;
+}
+
+static void at91_rtc_flush_events(struct sam9_rtc *rtc)
+{
+       if (!rtc->events)
+               return;
+
+       rtc_update_irq(rtc->rtcdev, 1, rtc->events);
+       rtc->events = 0;
 
        pr_debug("%s: num=%ld, events=0x%02lx\n", __func__,
-                events >> 8, events & 0x000000FF);
+                rtc->events >> 8, rtc->events & 0x000000FF);
+}
 
-       return IRQ_HANDLED;
+/*
+ * IRQ handler for the RTC
+ */
+static irqreturn_t at91_rtc_interrupt(int irq, void *_rtc)
+{
+       struct sam9_rtc *rtc = _rtc;
+       int ret;
+
+       spin_lock(&rtc->lock);
+
+       ret = at91_rtc_cache_events(rtc);
+
+       /* We're called in suspended state */
+       if (rtc->suspended) {
+               /* Mask irqs coming from this peripheral */
+               rtt_writel(rtc, MR,
+                          rtt_readl(rtc, MR) &
+                          ~(AT91_RTT_ALMIEN | AT91_RTT_RTTINCIEN));
+               /* Trigger a system wakeup */
+               pm_system_wakeup();
+       } else {
+               at91_rtc_flush_events(rtc);
+       }
+
+       spin_unlock(&rtc->lock);
+
+       return ret;
 }
 
 static const struct rtc_class_ops at91_rtc_ops = {
@@ -421,7 +456,8 @@ static int at91_rtc_probe(struct platform_device *pdev)
 
        /* register irq handler after we know what name we'll use */
        ret = devm_request_irq(&pdev->dev, rtc->irq, at91_rtc_interrupt,
-                              IRQF_SHARED, dev_name(&rtc->rtcdev->dev), rtc);
+                              IRQF_SHARED | IRQF_COND_SUSPEND,
+                              dev_name(&rtc->rtcdev->dev), rtc);
        if (ret) {
                dev_dbg(&pdev->dev, "can't share IRQ %d?\n", rtc->irq);
                return ret;
@@ -482,7 +518,12 @@ static int at91_rtc_suspend(struct device *dev)
        rtc->imr = mr & (AT91_RTT_ALMIEN | AT91_RTT_RTTINCIEN);
        if (rtc->imr) {
                if (device_may_wakeup(dev) && (mr & AT91_RTT_ALMIEN)) {
+                       unsigned long flags;
+
                        enable_irq_wake(rtc->irq);
+                       spin_lock_irqsave(&rtc->lock, flags);
+                       rtc->suspended = true;
+                       spin_unlock_irqrestore(&rtc->lock, flags);
                        /* don't let RTTINC cause wakeups */
                        if (mr & AT91_RTT_RTTINCIEN)
                                rtt_writel(rtc, MR, mr & ~AT91_RTT_RTTINCIEN);
@@ -499,10 +540,18 @@ static int at91_rtc_resume(struct device *dev)
        u32 mr;
 
        if (rtc->imr) {
+               unsigned long flags;
+
                if (device_may_wakeup(dev))
                        disable_irq_wake(rtc->irq);
                mr = rtt_readl(rtc, MR);
                rtt_writel(rtc, MR, mr | rtc->imr);
+
+               spin_lock_irqsave(&rtc->lock, flags);
+               rtc->suspended = false;
+               at91_rtc_cache_events(rtc);
+               at91_rtc_flush_events(rtc);
+               spin_unlock_irqrestore(&rtc->lock, flags);
        }
 
        return 0;
......
drivers/tty/serial/atmel_serial.c
@@ -47,6 +47,7 @@
 #include <linux/gpio/consumer.h>
 #include <linux/err.h>
 #include <linux/irq.h>
+#include <linux/suspend.h>
 
 #include <asm/io.h>
 #include <asm/ioctls.h>
@@ -173,6 +174,12 @@ struct atmel_uart_port {
        bool                    ms_irq_enabled;
        bool                    is_usart;       /* usart or uart */
        struct timer_list       uart_timer;     /* uart timer */
+
+       bool                    suspended;
+       unsigned int            pending;
+       unsigned int            pending_status;
+       spinlock_t              lock_suspended;
+
        int (*prepare_rx)(struct uart_port *port);
        int (*prepare_tx)(struct uart_port *port);
        void (*schedule_rx)(struct uart_port *port);
@@ -1179,12 +1186,15 @@ static irqreturn_t atmel_interrupt(int irq, void *dev_id)
 {
        struct uart_port *port = dev_id;
        struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
-       unsigned int status, pending, pass_counter = 0;
+       unsigned int status, pending, mask, pass_counter = 0;
        bool gpio_handled = false;
 
+       spin_lock(&atmel_port->lock_suspended);
+
        do {
                status = atmel_get_lines_status(port);
-               pending = status & UART_GET_IMR(port);
+               mask = UART_GET_IMR(port);
+               pending = status & mask;
                if (!gpio_handled) {
                        /*
                         * Dealing with GPIO interrupt
@@ -1206,11 +1216,21 @@ static irqreturn_t atmel_interrupt(int irq, void *dev_id)
                if (!pending)
                        break;
 
+               if (atmel_port->suspended) {
+                       atmel_port->pending |= pending;
+                       atmel_port->pending_status = status;
+                       UART_PUT_IDR(port, mask);
+                       pm_system_wakeup();
+                       break;
+               }
+
                atmel_handle_receive(port, pending);
                atmel_handle_status(port, pending, status);
                atmel_handle_transmit(port, pending);
        } while (pass_counter++ < ATMEL_ISR_PASS_LIMIT);
 
+       spin_unlock(&atmel_port->lock_suspended);
+
        return pass_counter ? IRQ_HANDLED : IRQ_NONE;
 }
@@ -1742,7 +1762,8 @@ static int atmel_startup(struct uart_port *port)
        /*
         * Allocate the IRQ
         */
-       retval = request_irq(port->irq, atmel_interrupt, IRQF_SHARED,
+       retval = request_irq(port->irq, atmel_interrupt,
+                            IRQF_SHARED | IRQF_COND_SUSPEND,
                             tty ? tty->name : "atmel_serial", port);
        if (retval) {
                dev_err(port->dev, "atmel_startup - Can't get irq\n");
@@ -2513,8 +2534,14 @@ static int atmel_serial_suspend(struct platform_device *pdev,
 
        /* we can not wake up if we're running on slow clock */
        atmel_port->may_wakeup = device_may_wakeup(&pdev->dev);
-       if (atmel_serial_clk_will_stop())
+       if (atmel_serial_clk_will_stop()) {
+               unsigned long flags;
+
+               spin_lock_irqsave(&atmel_port->lock_suspended, flags);
+               atmel_port->suspended = true;
+               spin_unlock_irqrestore(&atmel_port->lock_suspended, flags);
                device_set_wakeup_enable(&pdev->dev, 0);
+       }
 
        uart_suspend_port(&atmel_uart, port);
@@ -2525,6 +2552,18 @@ static int atmel_serial_resume(struct platform_device *pdev)
 {
        struct uart_port *port = platform_get_drvdata(pdev);
        struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+       unsigned long flags;
+
+       spin_lock_irqsave(&atmel_port->lock_suspended, flags);
+       if (atmel_port->pending) {
+               atmel_handle_receive(port, atmel_port->pending);
+               atmel_handle_status(port, atmel_port->pending,
+                                   atmel_port->pending_status);
+               atmel_handle_transmit(port, atmel_port->pending);
+               atmel_port->pending = 0;
+       }
+       atmel_port->suspended = false;
+       spin_unlock_irqrestore(&atmel_port->lock_suspended, flags);
 
        uart_resume_port(&atmel_uart, port);
        device_set_wakeup_enable(&pdev->dev, atmel_port->may_wakeup);
@@ -2593,6 +2632,8 @@ static int atmel_serial_probe(struct platform_device *pdev)
        port->backup_imr = 0;
        port->uart.line = ret;
 
+       spin_lock_init(&port->lock_suspended);
+
        ret = atmel_init_gpios(port, &pdev->dev);
        if (ret < 0)
                dev_err(&pdev->dev, "%s",
......
drivers/watchdog/at91sam9_wdt.c
@@ -208,7 +208,8 @@ static int at91_wdt_init(struct platform_device *pdev, struct at91wdt *wdt)
 
        if ((tmp & AT91_WDT_WDFIEN) && wdt->irq) {
                err = request_irq(wdt->irq, wdt_interrupt,
-                                 IRQF_SHARED | IRQF_IRQPOLL,
+                                 IRQF_SHARED | IRQF_IRQPOLL |
+                                 IRQF_NO_SUSPEND,
                                  pdev->name, wdt);
                if (err)
                        return err;
......
include/linux/cpuidle.h
@@ -126,6 +126,8 @@ struct cpuidle_driver {
 
 #ifdef CONFIG_CPU_IDLE
 extern void disable_cpuidle(void);
+extern bool cpuidle_not_available(struct cpuidle_driver *drv,
+                                 struct cpuidle_device *dev);
 
 extern int cpuidle_select(struct cpuidle_driver *drv,
                          struct cpuidle_device *dev);
@@ -150,11 +152,17 @@ extern void cpuidle_resume(void);
 extern int cpuidle_enable_device(struct cpuidle_device *dev);
 extern void cpuidle_disable_device(struct cpuidle_device *dev);
 extern int cpuidle_play_dead(void);
-extern void cpuidle_enter_freeze(void);
+extern int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
+                                     struct cpuidle_device *dev);
+extern int cpuidle_enter_freeze(struct cpuidle_driver *drv,
+                               struct cpuidle_device *dev);
 extern struct cpuidle_driver *cpuidle_get_cpu_driver(struct cpuidle_device *dev);
 #else
 static inline void disable_cpuidle(void) { }
+static inline bool cpuidle_not_available(struct cpuidle_driver *drv,
+                                        struct cpuidle_device *dev)
+{return true; }
 static inline int cpuidle_select(struct cpuidle_driver *drv,
                                 struct cpuidle_device *dev)
 {return -ENODEV; }
@@ -183,7 +191,12 @@ static inline int cpuidle_enable_device(struct cpuidle_device *dev)
 {return -ENODEV; }
 static inline void cpuidle_disable_device(struct cpuidle_device *dev) { }
 static inline int cpuidle_play_dead(void) {return -ENODEV; }
-static inline void cpuidle_enter_freeze(void) { }
+static inline int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
+                                            struct cpuidle_device *dev)
+{return -ENODEV; }
+static inline int cpuidle_enter_freeze(struct cpuidle_driver *drv,
+                                      struct cpuidle_device *dev)
+{return -ENODEV; }
 static inline struct cpuidle_driver *cpuidle_get_cpu_driver(
                                        struct cpuidle_device *dev) {return NULL; }
 #endif
......
include/linux/interrupt.h
@@ -52,11 +52,17 @@
  * IRQF_ONESHOT - Interrupt is not reenabled after the hardirq handler finished.
  *                Used by threaded interrupts which need to keep the
  *                irq line disabled until the threaded handler has been run.
- * IRQF_NO_SUSPEND - Do not disable this IRQ during suspend
+ * IRQF_NO_SUSPEND - Do not disable this IRQ during suspend. Does not guarantee
+ *                   that this interrupt will wake the system from a suspended
+ *                   state. See Documentation/power/suspend-and-interrupts.txt
  * IRQF_FORCE_RESUME - Force enable it on resume even if IRQF_NO_SUSPEND is set
  * IRQF_NO_THREAD - Interrupt cannot be threaded
  * IRQF_EARLY_RESUME - Resume IRQ early during syscore instead of at device
  *                     resume time.
+ * IRQF_COND_SUSPEND - If the IRQ is shared with a NO_SUSPEND user, execute this
+ *                     interrupt handler after suspending interrupts. For system
+ *                     wakeup devices users need to implement wakeup detection in
+ *                     their interrupt handlers.
  */
 #define IRQF_DISABLED          0x00000020
 #define IRQF_SHARED            0x00000080
@@ -70,6 +76,7 @@
 #define IRQF_FORCE_RESUME      0x00008000
 #define IRQF_NO_THREAD         0x00010000
 #define IRQF_EARLY_RESUME      0x00020000
+#define IRQF_COND_SUSPEND      0x00040000
 
 #define IRQF_TIMER             (__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD)
......
include/linux/irqdesc.h
@@ -78,6 +78,7 @@ struct irq_desc {
 #ifdef CONFIG_PM_SLEEP
        unsigned int            nr_actions;
        unsigned int            no_suspend_depth;
+       unsigned int            cond_suspend_depth;
        unsigned int            force_resume_depth;
 #endif
 #ifdef CONFIG_PROC_FS
......
kernel/irq/manage.c
@@ -1474,8 +1474,13 @@ int request_threaded_irq(unsigned int irq, irq_handler_t handler,
         * otherwise we'll have trouble later trying to figure out
         * which interrupt is which (messes up the interrupt freeing
         * logic etc).
+        *
+        * Also IRQF_COND_SUSPEND only makes sense for shared interrupts and
+        * it cannot be set along with IRQF_NO_SUSPEND.
         */
-       if ((irqflags & IRQF_SHARED) && !dev_id)
+       if (((irqflags & IRQF_SHARED) && !dev_id) ||
+           (!(irqflags & IRQF_SHARED) && (irqflags & IRQF_COND_SUSPEND)) ||
+           ((irqflags & IRQF_NO_SUSPEND) && (irqflags & IRQF_COND_SUSPEND)))
                return -EINVAL;
 
        desc = irq_to_desc(irq);
......
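
As a quick illustration of the combinations this extended check accepts and rejects (editor-added sketch; irq, handler and dev are placeholders, not names from this series):

#include <linux/interrupt.h>

static int foo_request_examples(unsigned int irq, irq_handler_t handler,
                                void *dev)
{
        int ret;

        /* -EINVAL: IRQF_COND_SUSPEND only makes sense on shared lines. */
        ret = request_irq(irq, handler, IRQF_COND_SUSPEND, "foo", dev);

        /* -EINVAL: NO_SUSPEND and COND_SUSPEND are mutually exclusive. */
        ret = request_irq(irq, handler,
                          IRQF_SHARED | IRQF_NO_SUSPEND | IRQF_COND_SUSPEND,
                          "foo", dev);

        /* Accepted: shared line whose handler does wakeup detection,
         * as in the at91 drivers in this series. */
        ret = request_irq(irq, handler, IRQF_SHARED | IRQF_COND_SUSPEND,
                          "foo", dev);
        return ret;
}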
kernel/irq/pm.c
@@ -43,9 +43,12 @@ void irq_pm_install_action(struct irq_desc *desc, struct irqaction *action)
 
        if (action->flags & IRQF_NO_SUSPEND)
                desc->no_suspend_depth++;
+       else if (action->flags & IRQF_COND_SUSPEND)
+               desc->cond_suspend_depth++;
 
        WARN_ON_ONCE(desc->no_suspend_depth &&
-                    desc->no_suspend_depth != desc->nr_actions);
+                    (desc->no_suspend_depth +
+                     desc->cond_suspend_depth) != desc->nr_actions);
 }
 
 /*
@@ -61,6 +64,8 @@ void irq_pm_remove_action(struct irq_desc *desc, struct irqaction *action)
 
        if (action->flags & IRQF_NO_SUSPEND)
                desc->no_suspend_depth--;
+       else if (action->flags & IRQF_COND_SUSPEND)
+               desc->cond_suspend_depth--;
 }
 
 static bool suspend_device_irq(struct irq_desc *desc, int irq)
......
kernel/sched/idle.c
@@ -82,6 +82,7 @@ static void cpuidle_idle_call(void)
        struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
        int next_state, entered_state;
        unsigned int broadcast;
+       bool reflect;
 
        /*
         * Check if the idle task must be rescheduled. If it is the
@@ -105,6 +106,9 @@ static void cpuidle_idle_call(void)
         */
        rcu_idle_enter();
 
+       if (cpuidle_not_available(drv, dev))
+               goto use_default;
+
        /*
         * Suspend-to-idle ("freeze") is a system state in which all user space
         * has been frozen, all I/O devices have been suspended and the only
@@ -115,30 +119,24 @@ static void cpuidle_idle_call(void)
         * until a proper wakeup interrupt happens.
         */
        if (idle_should_freeze()) {
-               cpuidle_enter_freeze();
-               local_irq_enable();
-               goto exit_idle;
-       }
+               entered_state = cpuidle_enter_freeze(drv, dev);
+               if (entered_state >= 0) {
+                       local_irq_enable();
+                       goto exit_idle;
+               }
 
-       /*
-        * Ask the cpuidle framework to choose a convenient idle state.
-        * Fall back to the default arch idle method on errors.
-        */
-       next_state = cpuidle_select(drv, dev);
-       if (next_state < 0) {
-use_default:
-               /*
-                * We can't use the cpuidle framework, let's use the default
-                * idle routine.
-                */
-               if (current_clr_polling_and_test())
-                       local_irq_enable();
-               else
-                       arch_cpu_idle();
-
-               goto exit_idle;
-       }
+               reflect = false;
+               next_state = cpuidle_find_deepest_state(drv, dev);
+       } else {
+               reflect = true;
+
+               /*
+                * Ask the cpuidle framework to choose a convenient idle state.
+                */
+               next_state = cpuidle_select(drv, dev);
+       }
+
+       /* Fall back to the default arch idle method on errors. */
+       if (next_state < 0)
+               goto use_default;
 
        /*
         * The idle task must be scheduled, it is pointless to
@@ -183,7 +181,8 @@ static void cpuidle_idle_call(void)
        /*
         * Give the governor an opportunity to reflect on the outcome
         */
-       cpuidle_reflect(dev, entered_state);
+       if (reflect)
+               cpuidle_reflect(dev, entered_state);
 
 exit_idle:
        __current_set_polling();
@@ -196,6 +195,19 @@ static void cpuidle_idle_call(void)
        rcu_idle_exit();
        start_critical_timings();
+
+       return;
+
+use_default:
+       /*
+        * We can't use the cpuidle framework, let's use the default
+        * idle routine.
+        */
+       if (current_clr_polling_and_test())
+               local_irq_enable();
+       else
+               arch_cpu_idle();
+
+       goto exit_idle;
 }
 
 /*
......