Commit 1af447bd authored by Thomas Gleixner

Merge branch 'clockevents/3.17' of git://git.linaro.org/people/daniel.lezcano/linux into timers/core

Pull clockevents from Daniel Lezcano:
 * New timer driver for the Cirrus Logic CLPS711X SoC
 * New driver for the Mediatek SoCs, which includes:
   - a new OF helper, of_io_request_and_map(), acked by Rob Herring
     (a usage sketch follows this list)
 * Move the PXA driver to drivers/clocksource, add DT support
 * Optimization of the exynos_mct driver
 * DT support for the Renesas timers family (CMT, MTU2, TMU)
 * Some Kconfig and driver fixlets
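As a quick illustration of the new OF helper mentioned above: of_io_request_and_map() requests and maps a "reg" entry of a device node in a single call and returns an ERR_PTR() value on failure. The real user in this pull is the Mediatek driver further down; the skeleton below is a hypothetical sketch only ("acme,timer", acme_timer_init() and acme_timer_base are invented names).

/*
 * Hypothetical example only: how a clocksource driver can use the new
 * of_io_request_and_map() helper added in this series.
 */
#include <linux/clocksource.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>

static void __iomem *acme_timer_base;

static void __init acme_timer_init(struct device_node *np)
{
    /* Request and map the first "reg" entry in one step. */
    acme_timer_base = of_io_request_and_map(np, 0, "acme-timer");
    if (IS_ERR(acme_timer_base)) {
        pr_err("%s: unable to map timer registers\n", np->name);
        return;
    }

    /* ... register the clocksource/clockevent using acme_timer_base ... */
}
CLOCKSOURCE_OF_DECLARE(acme_timer, "acme,timer", acme_timer_init);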
* Cirrus Logic CLPS711X Timer Counter
Required properties:
- compatible: Shall contain "cirrus,clps711x-timer".
- reg : Address and length of the register set.
- interrupts: The interrupt number of the timer.
- clocks : phandle of timer reference clock.
Note: Each timer should have an alias correctly numbered in the "aliases" node.
Example:
aliases {
timer0 = &timer1;
timer1 = &timer2;
};
timer1: timer@80000300 {
compatible = "cirrus,ep7312-timer", "cirrus,clps711x-timer";
reg = <0x80000300 0x4>;
interrupts = <8>;
clocks = <&clks 5>;
};
timer2: timer@80000340 {
compatible = "cirrus,ep7312-timer", "cirrus,clps711x-timer";
reg = <0x80000340 0x4>;
interrupts = <9>;
clocks = <&clks 6>;
};
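The aliases above are not cosmetic: the CLPS711X driver added by this commit uses the alias index to decide which timer instance becomes the free-running clocksource and which becomes the clockevent (see the switch in clps711x_timer_init() further down). A simplified, illustrative-only sketch of that decision:

/*
 * Illustrative sketch only; the real logic is in clps711x_timer_init()
 * later in this commit. Alias index 0 is used as the clocksource,
 * alias index 1 as the clockevent.
 */
#include <linux/of.h>

static const char *clps711x_timer_role(struct device_node *np)
{
    switch (of_alias_get_id(np, "timer")) {
    case 0:
        return "clocksource";   /* timer0 alias */
    case 1:
        return "clockevent";    /* timer1 alias */
    default:
        return "unused";
    }
}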
Mediatek MT6577, MT6572 and MT6589 Timers
-----------------------------------------
Required properties:
- compatible: Should be "mediatek,mt6577-timer"
- reg: Should contain the location and length of the timer's registers.
- interrupts: Should contain the interrupt for the timer.
- clocks: Clocks driving the timer hardware. This list should include two
  clocks, the system clock first and the RTC clock second.
Examples:
timer@10008000 {
compatible = "mediatek,mt6577-timer";
reg = <0x10008000 0x80>;
interrupts = <GIC_SPI 113 IRQ_TYPE_LEVEL_LOW>;
clocks = <&system_clk>, <&rtc_clk>;
};
* Renesas R-Car Compare Match Timer (CMT)
The CMT is a multi-channel 16/32/48-bit timer/counter with configurable clock
inputs and programmable compare match.
Channels share hardware resources but their counter and compare match values
are independent. A particular CMT instance can implement only a subset of the
channels supported by the CMT model. Channel indices represent the hardware
position of the channel in the CMT and don't match the channel numbers in the
datasheets.
Required Properties:
- compatible: must contain one of the following.
- "renesas,cmt-32" for the 32-bit CMT
(CMT0 on sh7372, sh73a0 and r8a7740)
- "renesas,cmt-32-fast" for the 32-bit CMT with fast clock support
(CMT[234] on sh7372, sh73a0 and r8a7740)
- "renesas,cmt-48" for the 48-bit CMT
(CMT1 on sh7372, sh73a0 and r8a7740)
- "renesas,cmt-48-gen2" for the second generation 48-bit CMT
(CMT[01] on r8a73a4, r8a7790 and r8a7791)
- reg: base address and length of the registers block for the timer module.
- interrupts: interrupt-specifier for the timer, one per channel.
- clocks: a list of phandle + clock-specifier pairs, one for each entry
in clock-names.
- clock-names: must contain "fck" for the functional clock.
- renesas,channels-mask: bitmask of the available channels.
Example: R8A7790 (R-Car H2) CMT0 node
CMT0 on R8A7790 implements hardware channels 5 and 6 only and names
them channels 0 and 1 in the documentation.
cmt0: timer@ffca0000 {
compatible = "renesas,cmt-48-gen2";
reg = <0 0xffca0000 0 0x1004>;
interrupts = <0 142 IRQ_TYPE_LEVEL_HIGH>,
<0 142 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7790_CLK_CMT0>;
clock-names = "fck";
renesas,channels-mask = <0x60>;
};
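For reference, the renesas,channels-mask value in this example decodes as 0x60 = (1 << 5) | (1 << 6), i.e. hardware channels 5 and 6, matching the note above. The sh_cmt driver changes later in this commit walk the mask similarly (using ffs()); the standalone userspace sketch below just prints the decoded channel indices.

/* Standalone sketch, not driver code: decode a channels-mask value. */
#include <stdio.h>

int main(void)
{
    unsigned int mask = 0x60;   /* renesas,channels-mask from the example */
    unsigned int hwidx;

    for (hwidx = 0; mask; mask >>= 1, hwidx++)
        if (mask & 1)
            printf("hardware channel %u implemented\n", hwidx);
    return 0;
}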
* Renesas R-Car Multi-Function Timer Pulse Unit 2 (MTU2)
The MTU2 is a multi-purpose, multi-channel timer/counter with configurable
clock inputs and programmable compare match.
Channels share hardware resources but their counter and compare match values
are independent. The MTU2 hardware supports five channels indexed from 0 to 4.
Required Properties:
- compatible: must contain "renesas,mtu2"
- reg: base address and length of the registers block for the timer module.
- interrupts: interrupt specifiers for the timer, one for each entry in
interrupt-names.
- interrupt-names: must contain one entry named "tgi?a" for each enabled
channel, where "?" is the channel index expressed as one digit from "0" to
"4".
- clocks: a list of phandle + clock-specifier pairs, one for each entry
in clock-names.
- clock-names: must contain "fck" for the functional clock.
Example: R7S72100 (RZ/A1H) MTU2 node
mtu2: timer@fcff0000 {
compatible = "renesas,mtu2";
reg = <0xfcff0000 0x400>;
interrupts = <0 139 IRQ_TYPE_LEVEL_HIGH>,
<0 146 IRQ_TYPE_LEVEL_HIGH>,
<0 150 IRQ_TYPE_LEVEL_HIGH>,
<0 154 IRQ_TYPE_LEVEL_HIGH>,
<0 159 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "tgi0a", "tgi1a", "tgi2a", "tgi3a", "tgi4a";
clocks = <&mstp3_clks R7S72100_CLK_MTU2>;
clock-names = "fck";
};
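The "tgi?a" interrupt-names convention above is consumed by the sh_mtu2 driver changes later in this commit: the channel index is formatted into the name and the interrupt is looked up by it. A simplified sketch of that lookup (the helper name is invented; the real code lives in sh_mtu2_setup_channel()):

/* Simplified sketch of the per-channel interrupt lookup by name. */
#include <linux/kernel.h>
#include <linux/platform_device.h>

static int mtu2_channel_irq(struct platform_device *pdev, unsigned int index)
{
    char name[6];

    sprintf(name, "tgi%ua", index);     /* "tgi0a" ... "tgi4a" */
    return platform_get_irq_byname(pdev, name);
}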
* Renesas R-Car Timer Unit (TMU)
The TMU is a 32-bit timer/counter with configurable clock inputs and
programmable compare match.
Channels share hardware resources but their counter and compare match values
are independent. The TMU hardware supports up to three channels.
Required Properties:
- compatible: must contain "renesas,tmu"
- reg: base address and length of the registers block for the timer module.
- interrupts: interrupt-specifier for the timer, one per channel.
- clocks: a list of phandle + clock-specifier pairs, one for each entry
in clock-names.
- clock-names: must contain "fck" for the functional clock.
Optional Properties:
- #renesas,channels: number of channels implemented by the timer, must be 2
or 3 (if not specified the value defaults to 3).
Example: R8A7779 (R-Car H1) TMU0 node
tmu0: timer@ffd80000 {
compatible = "renesas,tmu";
reg = <0xffd80000 0x30>;
interrupts = <0 32 IRQ_TYPE_LEVEL_HIGH>,
<0 33 IRQ_TYPE_LEVEL_HIGH>,
<0 34 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp0_clks R8A7779_CLK_TMU0>;
clock-names = "fck";
#renesas,channels = <3>;
};
Documentation/devicetree/bindings/vendor-prefixes.txt:
@@ -77,6 +77,7 @@ lsi	LSI Corp. (LSI Logic)
 lltc	Linear Technology Corporation
 marvell	Marvell Technology Group Ltd.
 maxim	Maxim Integrated Products
+mediatek	MediaTek Inc.
 micrel	Micrel Inc.
 microchip	Microchip Technology Inc.
 mosaixtech	Mosaix Technologies, Inc.
...
arch/arm/Kconfig:
@@ -634,6 +634,7 @@ config ARCH_PXA
     select AUTO_ZRELADDR
     select CLKDEV_LOOKUP
     select CLKSRC_MMIO
+    select CLKSRC_OF
     select GENERIC_CLOCKEVENTS
     select GPIO_PXA
     select HAVE_IDE
...
arch/arm/mach-pxa/Makefile:
@@ -4,7 +4,7 @@
 # Common support (must be linked before board specific support)
 obj-y += clock.o devices.o generic.o irq.o \
-        time.o reset.o
+        reset.o
 obj-$(CONFIG_PM) += pm.o sleep.o standby.o
 # Generic drivers that other drivers may depend upon
...
arch/arm/mach-pxa/generic.c:
@@ -25,11 +25,13 @@
 #include <asm/mach/map.h>
 #include <asm/mach-types.h>
+#include <mach/irqs.h>
 #include <mach/reset.h>
 #include <mach/smemc.h>
 #include <mach/pxa3xx-regs.h>
 #include "generic.h"
+#include <clocksource/pxa.h>
 void clear_reset_status(unsigned int mask)
 {
@@ -56,6 +58,15 @@ unsigned long get_clock_tick_rate(void)
 }
 EXPORT_SYMBOL(get_clock_tick_rate);
+/*
+ * For non device-tree builds, keep legacy timer init
+ */
+void pxa_timer_init(void)
+{
+    pxa_timer_nodt_init(IRQ_OST0, io_p2v(0x40a00000),
+                        get_clock_tick_rate());
+}
 /*
  * Get the clock frequency as reflected by CCCR and the turbo flag.
  * We assume these values have been applied via a fcs.
...
drivers/clocksource/Kconfig:
@@ -127,6 +127,7 @@ config CLKSRC_METAG_GENERIC
 config CLKSRC_EXYNOS_MCT
     def_bool y if ARCH_EXYNOS
+    depends on !ARM64
     help
       Support for Multi Core Timer controller on Exynos SoCs.
@@ -151,6 +152,11 @@ config VF_PIT_TIMER
 config SYS_SUPPORTS_SH_CMT
     bool
+config MTK_TIMER
+    select CLKSRC_OF
+    select CLKSRC_MMIO
+    bool
 config SYS_SUPPORTS_SH_MTU2
     bool
@@ -175,7 +181,7 @@ config SH_TIMER_MTU2
     default SYS_SUPPORTS_SH_MTU2
     help
       This enables build of a clockevent driver for the Multi-Function
-      Timer Pulse Unit 2 (TMU2) hardware available on SoCs from Renesas.
+      Timer Pulse Unit 2 (MTU2) hardware available on SoCs from Renesas.
       This hardware comes with 16 bit-timer registers.
 config SH_TIMER_TMU
@@ -189,7 +195,7 @@ config SH_TIMER_TMU
 config EM_TIMER_STI
     bool "Renesas STI timer driver" if COMPILE_TEST
-    depends on GENERIC_CLOCKEVENTS
+    depends on GENERIC_CLOCKEVENTS && HAS_IOMEM
     default SYS_SUPPORTS_EM_STI
     help
       This enables build of a clocksource and clockevent driver for
...
drivers/clocksource/Makefile:
@@ -16,9 +16,11 @@ obj-$(CONFIG_CLKSRC_DBX500_PRCMU) += clksrc-dbx500-prcmu.o
 obj-$(CONFIG_ARMADA_370_XP_TIMER) += time-armada-370-xp.o
 obj-$(CONFIG_ORION_TIMER) += time-orion.o
 obj-$(CONFIG_ARCH_BCM2835) += bcm2835_timer.o
+obj-$(CONFIG_ARCH_CLPS711X) += clps711x-timer.o
 obj-$(CONFIG_ARCH_MARCO) += timer-marco.o
 obj-$(CONFIG_ARCH_MOXART) += moxart_timer.o
 obj-$(CONFIG_ARCH_MXS) += mxs_timer.o
+obj-$(CONFIG_ARCH_PXA) += pxa_timer.o
 obj-$(CONFIG_ARCH_PRIMA2) += timer-prima2.o
 obj-$(CONFIG_ARCH_U300) += timer-u300.o
 obj-$(CONFIG_SUN4I_TIMER) += sun4i_timer.o
@@ -34,6 +36,7 @@ obj-$(CONFIG_CLKSRC_SAMSUNG_PWM) += samsung_pwm_timer.o
 obj-$(CONFIG_FSL_FTM_TIMER) += fsl_ftm_timer.o
 obj-$(CONFIG_VF_PIT_TIMER) += vf_pit_timer.o
 obj-$(CONFIG_CLKSRC_QCOM) += qcom-timer.o
+obj-$(CONFIG_MTK_TIMER) += mtk_timer.o
 obj-$(CONFIG_ARM_ARCH_TIMER) += arm_arch_timer.o
 obj-$(CONFIG_ARM_GLOBAL_TIMER) += arm_global_timer.o
...
/*
* Cirrus Logic CLPS711X clocksource driver
*
* Copyright (C) 2014 Alexander Shiyan <shc_work@mail.ru>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/clk.h>
#include <linux/clockchips.h>
#include <linux/clocksource.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/sched_clock.h>
#include <linux/slab.h>
enum {
CLPS711X_CLKSRC_CLOCKSOURCE,
CLPS711X_CLKSRC_CLOCKEVENT,
};
static void __iomem *tcd;
static u64 notrace clps711x_sched_clock_read(void)
{
return ~readw(tcd);
}
static int __init _clps711x_clksrc_init(struct clk *clock, void __iomem *base)
{
unsigned long rate;
if (!base)
return -ENOMEM;
if (IS_ERR(clock))
return PTR_ERR(clock);
rate = clk_get_rate(clock);
tcd = base;
clocksource_mmio_init(tcd, "clps711x-clocksource", rate, 300, 16,
clocksource_mmio_readw_down);
sched_clock_register(clps711x_sched_clock_read, 16, rate);
return 0;
}
static irqreturn_t clps711x_timer_interrupt(int irq, void *dev_id)
{
struct clock_event_device *evt = dev_id;
evt->event_handler(evt);
return IRQ_HANDLED;
}
static void clps711x_clockevent_set_mode(enum clock_event_mode mode,
struct clock_event_device *evt)
{
}
static int __init _clps711x_clkevt_init(struct clk *clock, void __iomem *base,
unsigned int irq)
{
struct clock_event_device *clkevt;
unsigned long rate;
if (!irq)
return -EINVAL;
if (!base)
return -ENOMEM;
if (IS_ERR(clock))
return PTR_ERR(clock);
clkevt = kzalloc(sizeof(*clkevt), GFP_KERNEL);
if (!clkevt)
return -ENOMEM;
rate = clk_get_rate(clock);
/* Set Timer prescaler */
writew(DIV_ROUND_CLOSEST(rate, HZ), base);
clkevt->name = "clps711x-clockevent";
clkevt->rating = 300;
clkevt->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_C3STOP;
clkevt->set_mode = clps711x_clockevent_set_mode;
clkevt->cpumask = cpumask_of(0);
clockevents_config_and_register(clkevt, HZ, 0, 0);
return request_irq(irq, clps711x_timer_interrupt, IRQF_TIMER,
"clps711x-timer", clkevt);
}
void __init clps711x_clksrc_init(void __iomem *tc1_base, void __iomem *tc2_base,
unsigned int irq)
{
struct clk *tc1 = clk_get_sys("clps711x-timer.0", NULL);
struct clk *tc2 = clk_get_sys("clps711x-timer.1", NULL);
BUG_ON(_clps711x_clksrc_init(tc1, tc1_base));
BUG_ON(_clps711x_clkevt_init(tc2, tc2_base, irq));
}
#ifdef CONFIG_CLKSRC_OF
static void __init clps711x_timer_init(struct device_node *np)
{
unsigned int irq = irq_of_parse_and_map(np, 0);
struct clk *clock = of_clk_get(np, 0);
void __iomem *base = of_iomap(np, 0);
switch (of_alias_get_id(np, "timer")) {
case CLPS711X_CLKSRC_CLOCKSOURCE:
BUG_ON(_clps711x_clksrc_init(clock, base));
break;
case CLPS711X_CLKSRC_CLOCKEVENT:
BUG_ON(_clps711x_clkevt_init(clock, base, irq));
break;
default:
break;
}
}
CLOCKSOURCE_OF_DECLARE(clps711x, "cirrus,clps711x-timer", clps711x_timer_init);
#endif
drivers/clocksource/exynos_mct.c:
@@ -94,7 +94,7 @@ static void exynos4_mct_write(unsigned int value, unsigned long offset)
     u32 mask;
     u32 i;
-    __raw_writel(value, reg_base + offset);
+    writel_relaxed(value, reg_base + offset);
     if (likely(offset >= EXYNOS4_MCT_L_BASE(0))) {
         stat_addr = (offset & ~EXYNOS4_MCT_L_MASK) + MCT_L_WSTAT_OFFSET;
@@ -144,8 +144,8 @@ static void exynos4_mct_write(unsigned int value, unsigned long offset)
     /* Wait maximum 1 ms until written values are applied */
     for (i = 0; i < loops_per_jiffy / 1000 * HZ; i++)
-        if (__raw_readl(reg_base + stat_addr) & mask) {
-            __raw_writel(mask, reg_base + stat_addr);
+        if (readl_relaxed(reg_base + stat_addr) & mask) {
+            writel_relaxed(mask, reg_base + stat_addr);
             return;
         }
@@ -157,28 +157,51 @@ static void exynos4_mct_frc_start(void)
 {
     u32 reg;
-    reg = __raw_readl(reg_base + EXYNOS4_MCT_G_TCON);
+    reg = readl_relaxed(reg_base + EXYNOS4_MCT_G_TCON);
     reg |= MCT_G_TCON_START;
     exynos4_mct_write(reg, EXYNOS4_MCT_G_TCON);
 }
-static cycle_t notrace _exynos4_frc_read(void)
+/**
+ * exynos4_read_count_64 - Read all 64-bits of the global counter
+ *
+ * This will read all 64-bits of the global counter taking care to make sure
+ * that the upper and lower half match. Note that reading the MCT can be quite
+ * slow (hundreds of nanoseconds) so you should use the 32-bit (lower half
+ * only) version when possible.
+ *
+ * Returns the number of cycles in the global counter.
+ */
+static u64 exynos4_read_count_64(void)
 {
     unsigned int lo, hi;
-    u32 hi2 = __raw_readl(reg_base + EXYNOS4_MCT_G_CNT_U);
+    u32 hi2 = readl_relaxed(reg_base + EXYNOS4_MCT_G_CNT_U);
     do {
         hi = hi2;
-        lo = __raw_readl(reg_base + EXYNOS4_MCT_G_CNT_L);
-        hi2 = __raw_readl(reg_base + EXYNOS4_MCT_G_CNT_U);
+        lo = readl_relaxed(reg_base + EXYNOS4_MCT_G_CNT_L);
+        hi2 = readl_relaxed(reg_base + EXYNOS4_MCT_G_CNT_U);
     } while (hi != hi2);
     return ((cycle_t)hi << 32) | lo;
 }
+/**
+ * exynos4_read_count_32 - Read the lower 32-bits of the global counter
+ *
+ * This will read just the lower 32-bits of the global counter. This is marked
+ * as notrace so it can be used by the scheduler clock.
+ *
+ * Returns the number of cycles in the global counter (lower 32 bits).
+ */
+static u32 notrace exynos4_read_count_32(void)
+{
+    return readl_relaxed(reg_base + EXYNOS4_MCT_G_CNT_L);
+}
 static cycle_t exynos4_frc_read(struct clocksource *cs)
 {
-    return _exynos4_frc_read();
+    return exynos4_read_count_32();
 }
 static void exynos4_frc_resume(struct clocksource *cs)
@@ -190,21 +213,23 @@ struct clocksource mct_frc = {
     .name = "mct-frc",
     .rating = 400,
     .read = exynos4_frc_read,
-    .mask = CLOCKSOURCE_MASK(64),
+    .mask = CLOCKSOURCE_MASK(32),
     .flags = CLOCK_SOURCE_IS_CONTINUOUS,
     .resume = exynos4_frc_resume,
 };
 static u64 notrace exynos4_read_sched_clock(void)
 {
-    return _exynos4_frc_read();
+    return exynos4_read_count_32();
 }
 static struct delay_timer exynos4_delay_timer;
 static cycles_t exynos4_read_current_timer(void)
 {
-    return _exynos4_frc_read();
+    BUILD_BUG_ON_MSG(sizeof(cycles_t) != sizeof(u32),
+                     "cycles_t needs to move to 32-bit for ARM64 usage");
+    return exynos4_read_count_32();
 }
 static void __init exynos4_clocksource_init(void)
@@ -218,14 +243,14 @@ static void __init exynos4_clocksource_init(void)
     if (clocksource_register_hz(&mct_frc, clk_rate))
         panic("%s: can't register clocksource\n", mct_frc.name);
-    sched_clock_register(exynos4_read_sched_clock, 64, clk_rate);
+    sched_clock_register(exynos4_read_sched_clock, 32, clk_rate);
 }
 static void exynos4_mct_comp0_stop(void)
 {
     unsigned int tcon;
-    tcon = __raw_readl(reg_base + EXYNOS4_MCT_G_TCON);
+    tcon = readl_relaxed(reg_base + EXYNOS4_MCT_G_TCON);
     tcon &= ~(MCT_G_TCON_COMP0_ENABLE | MCT_G_TCON_COMP0_AUTO_INC);
     exynos4_mct_write(tcon, EXYNOS4_MCT_G_TCON);
@@ -238,14 +263,14 @@ static void exynos4_mct_comp0_start(enum clock_event_mode mode,
     unsigned int tcon;
     cycle_t comp_cycle;
-    tcon = __raw_readl(reg_base + EXYNOS4_MCT_G_TCON);
+    tcon = readl_relaxed(reg_base + EXYNOS4_MCT_G_TCON);
     if (mode == CLOCK_EVT_MODE_PERIODIC) {
         tcon |= MCT_G_TCON_COMP0_AUTO_INC;
         exynos4_mct_write(cycles, EXYNOS4_MCT_G_COMP0_ADD_INCR);
     }
-    comp_cycle = exynos4_frc_read(&mct_frc) + cycles;
+    comp_cycle = exynos4_read_count_64() + cycles;
     exynos4_mct_write((u32)comp_cycle, EXYNOS4_MCT_G_COMP0_L);
     exynos4_mct_write((u32)(comp_cycle >> 32), EXYNOS4_MCT_G_COMP0_U);
@@ -327,7 +352,7 @@ static void exynos4_mct_tick_stop(struct mct_clock_event_device *mevt)
     unsigned long mask = MCT_L_TCON_INT_START | MCT_L_TCON_TIMER_START;
     unsigned long offset = mevt->base + MCT_L_TCON_OFFSET;
-    tmp = __raw_readl(reg_base + offset);
+    tmp = readl_relaxed(reg_base + offset);
     if (tmp & mask) {
         tmp &= ~mask;
         exynos4_mct_write(tmp, offset);
@@ -349,7 +374,7 @@ static void exynos4_mct_tick_start(unsigned long cycles,
     /* enable MCT tick interrupt */
     exynos4_mct_write(0x1, mevt->base + MCT_L_INT_ENB_OFFSET);
-    tmp = __raw_readl(reg_base + mevt->base + MCT_L_TCON_OFFSET);
+    tmp = readl_relaxed(reg_base + mevt->base + MCT_L_TCON_OFFSET);
     tmp |= MCT_L_TCON_INT_START | MCT_L_TCON_TIMER_START |
            MCT_L_TCON_INTERVAL_MODE;
     exynos4_mct_write(tmp, mevt->base + MCT_L_TCON_OFFSET);
@@ -401,7 +426,7 @@ static int exynos4_mct_tick_clear(struct mct_clock_event_device *mevt)
     exynos4_mct_tick_stop(mevt);
     /* Clear the MCT tick interrupt */
-    if (__raw_readl(reg_base + mevt->base + MCT_L_INT_CSTAT_OFFSET) & 1) {
+    if (readl_relaxed(reg_base + mevt->base + MCT_L_INT_CSTAT_OFFSET) & 1) {
         exynos4_mct_write(0x1, mevt->base + MCT_L_INT_CSTAT_OFFSET);
         return 1;
     } else {
...
/*
* Mediatek SoCs General-Purpose Timer handling.
*
* Copyright (C) 2014 Matthias Brugger
*
* Matthias Brugger <matthias.bgg@gmail.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/clk.h>
#include <linux/clockchips.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqreturn.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/slab.h>
#define GPT_IRQ_EN_REG 0x00
#define GPT_IRQ_ENABLE(val) BIT((val) - 1)
#define GPT_IRQ_ACK_REG 0x08
#define GPT_IRQ_ACK(val) BIT((val) - 1)
#define TIMER_CTRL_REG(val) (0x10 * (val))
#define TIMER_CTRL_OP(val) (((val) & 0x3) << 4)
#define TIMER_CTRL_OP_ONESHOT (0)
#define TIMER_CTRL_OP_REPEAT (1)
#define TIMER_CTRL_OP_FREERUN (3)
#define TIMER_CTRL_CLEAR (2)
#define TIMER_CTRL_ENABLE (1)
#define TIMER_CTRL_DISABLE (0)
#define TIMER_CLK_REG(val) (0x04 + (0x10 * (val)))
#define TIMER_CLK_SRC(val) (((val) & 0x1) << 4)
#define TIMER_CLK_SRC_SYS13M (0)
#define TIMER_CLK_SRC_RTC32K (1)
#define TIMER_CLK_DIV1 (0x0)
#define TIMER_CLK_DIV2 (0x1)
#define TIMER_CNT_REG(val) (0x08 + (0x10 * (val)))
#define TIMER_CMP_REG(val) (0x0C + (0x10 * (val)))
#define GPT_CLK_EVT 1
#define GPT_CLK_SRC 2
struct mtk_clock_event_device {
void __iomem *gpt_base;
u32 ticks_per_jiffy;
struct clock_event_device dev;
};
static inline struct mtk_clock_event_device *to_mtk_clk(
struct clock_event_device *c)
{
return container_of(c, struct mtk_clock_event_device, dev);
}
static void mtk_clkevt_time_stop(struct mtk_clock_event_device *evt, u8 timer)
{
u32 val;
val = readl(evt->gpt_base + TIMER_CTRL_REG(timer));
writel(val & ~TIMER_CTRL_ENABLE, evt->gpt_base +
TIMER_CTRL_REG(timer));
}
static void mtk_clkevt_time_setup(struct mtk_clock_event_device *evt,
unsigned long delay, u8 timer)
{
writel(delay, evt->gpt_base + TIMER_CMP_REG(timer));
}
static void mtk_clkevt_time_start(struct mtk_clock_event_device *evt,
bool periodic, u8 timer)
{
u32 val;
/* Acknowledge interrupt */
writel(GPT_IRQ_ACK(timer), evt->gpt_base + GPT_IRQ_ACK_REG);
val = readl(evt->gpt_base + TIMER_CTRL_REG(timer));
/* Clear 2 bit timer operation mode field */
val &= ~TIMER_CTRL_OP(0x3);
if (periodic)
val |= TIMER_CTRL_OP(TIMER_CTRL_OP_REPEAT);
else
val |= TIMER_CTRL_OP(TIMER_CTRL_OP_ONESHOT);
writel(val | TIMER_CTRL_ENABLE | TIMER_CTRL_CLEAR,
evt->gpt_base + TIMER_CTRL_REG(timer));
}
static void mtk_clkevt_mode(enum clock_event_mode mode,
struct clock_event_device *clk)
{
struct mtk_clock_event_device *evt = to_mtk_clk(clk);
mtk_clkevt_time_stop(evt, GPT_CLK_EVT);
switch (mode) {
case CLOCK_EVT_MODE_PERIODIC:
mtk_clkevt_time_setup(evt, evt->ticks_per_jiffy, GPT_CLK_EVT);
mtk_clkevt_time_start(evt, true, GPT_CLK_EVT);
break;
case CLOCK_EVT_MODE_ONESHOT:
/* Timer is enabled in set_next_event */
break;
case CLOCK_EVT_MODE_UNUSED:
case CLOCK_EVT_MODE_SHUTDOWN:
default:
/* No more interrupts will occur as source is disabled */
break;
}
}
static int mtk_clkevt_next_event(unsigned long event,
struct clock_event_device *clk)
{
struct mtk_clock_event_device *evt = to_mtk_clk(clk);
mtk_clkevt_time_stop(evt, GPT_CLK_EVT);
mtk_clkevt_time_setup(evt, event, GPT_CLK_EVT);
mtk_clkevt_time_start(evt, false, GPT_CLK_EVT);
return 0;
}
static irqreturn_t mtk_timer_interrupt(int irq, void *dev_id)
{
struct mtk_clock_event_device *evt = dev_id;
/* Acknowledge timer0 irq */
writel(GPT_IRQ_ACK(GPT_CLK_EVT), evt->gpt_base + GPT_IRQ_ACK_REG);
evt->dev.event_handler(&evt->dev);
return IRQ_HANDLED;
}
static void mtk_timer_global_reset(struct mtk_clock_event_device *evt)
{
/* Disable all interrupts */
writel(0x0, evt->gpt_base + GPT_IRQ_EN_REG);
/* Acknowledge all interrupts */
writel(0x3f, evt->gpt_base + GPT_IRQ_ACK_REG);
}
static void
mtk_timer_setup(struct mtk_clock_event_device *evt, u8 timer, u8 option)
{
writel(TIMER_CTRL_CLEAR | TIMER_CTRL_DISABLE,
evt->gpt_base + TIMER_CTRL_REG(timer));
writel(TIMER_CLK_SRC(TIMER_CLK_SRC_SYS13M) | TIMER_CLK_DIV1,
evt->gpt_base + TIMER_CLK_REG(timer));
writel(0x0, evt->gpt_base + TIMER_CMP_REG(timer));
writel(TIMER_CTRL_OP(option) | TIMER_CTRL_ENABLE,
evt->gpt_base + TIMER_CTRL_REG(timer));
}
static void mtk_timer_enable_irq(struct mtk_clock_event_device *evt, u8 timer)
{
u32 val;
val = readl(evt->gpt_base + GPT_IRQ_EN_REG);
writel(val | GPT_IRQ_ENABLE(timer),
evt->gpt_base + GPT_IRQ_EN_REG);
}
static void __init mtk_timer_init(struct device_node *node)
{
struct mtk_clock_event_device *evt;
struct resource res;
unsigned long rate = 0;
struct clk *clk;
evt = kzalloc(sizeof(*evt), GFP_KERNEL);
if (!evt) {
pr_warn("Can't allocate mtk clock event driver struct");
return;
}
evt->dev.name = "mtk_tick";
evt->dev.rating = 300;
evt->dev.features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT;
evt->dev.set_mode = mtk_clkevt_mode;
evt->dev.set_next_event = mtk_clkevt_next_event;
evt->dev.cpumask = cpu_possible_mask;
evt->gpt_base = of_io_request_and_map(node, 0, "mtk-timer");
if (IS_ERR(evt->gpt_base)) {
pr_warn("Can't get resource\n");
return;
}
evt->dev.irq = irq_of_parse_and_map(node, 0);
if (evt->dev.irq <= 0) {
pr_warn("Can't parse IRQ");
goto err_mem;
}
clk = of_clk_get(node, 0);
if (IS_ERR(clk)) {
pr_warn("Can't get timer clock");
goto err_irq;
}
if (clk_prepare_enable(clk)) {
pr_warn("Can't prepare clock");
goto err_clk_put;
}
rate = clk_get_rate(clk);
if (request_irq(evt->dev.irq, mtk_timer_interrupt,
IRQF_TIMER | IRQF_IRQPOLL, "mtk_timer", evt)) {
pr_warn("failed to setup irq %d\n", evt->dev.irq);
goto err_clk_disable;
}
evt->ticks_per_jiffy = DIV_ROUND_UP(rate, HZ);
mtk_timer_global_reset(evt);
/* Configure clock source */
mtk_timer_setup(evt, GPT_CLK_SRC, TIMER_CTRL_OP_FREERUN);
clocksource_mmio_init(evt->gpt_base + TIMER_CNT_REG(GPT_CLK_SRC),
node->name, rate, 300, 32, clocksource_mmio_readl_up);
/* Configure clock event */
mtk_timer_setup(evt, GPT_CLK_EVT, TIMER_CTRL_OP_REPEAT);
mtk_timer_enable_irq(evt, GPT_CLK_EVT);
clockevents_config_and_register(&evt->dev, rate, 0x3,
0xffffffff);
return;
err_clk_disable:
clk_disable_unprepare(clk);
err_clk_put:
clk_put(clk);
err_irq:
irq_dispose_mapping(evt->dev.irq);
err_mem:
iounmap(evt->gpt_base);
of_address_to_resource(node, 0, &res);
release_mem_region(res.start, resource_size(&res));
}
CLOCKSOURCE_OF_DECLARE(mtk_mt6577, "mediatek,mt6577-timer", mtk_timer_init);
...@@ -15,14 +15,30 @@ ...@@ -15,14 +15,30 @@
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/clk.h>
#include <linux/clockchips.h> #include <linux/clockchips.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/sched_clock.h> #include <linux/sched_clock.h>
#include <asm/div64.h> #include <asm/div64.h>
#include <asm/mach/irq.h>
#include <asm/mach/time.h> #define OSMR0 0x00 /* OS Timer 0 Match Register */
#include <mach/regs-ost.h> #define OSMR1 0x04 /* OS Timer 1 Match Register */
#include <mach/irqs.h> #define OSMR2 0x08 /* OS Timer 2 Match Register */
#define OSMR3 0x0C /* OS Timer 3 Match Register */
#define OSCR 0x10 /* OS Timer Counter Register */
#define OSSR 0x14 /* OS Timer Status Register */
#define OWER 0x18 /* OS Timer Watchdog Enable Register */
#define OIER 0x1C /* OS Timer Interrupt Enable Register */
#define OSSR_M3 (1 << 3) /* Match status channel 3 */
#define OSSR_M2 (1 << 2) /* Match status channel 2 */
#define OSSR_M1 (1 << 1) /* Match status channel 1 */
#define OSSR_M0 (1 << 0) /* Match status channel 0 */
#define OIER_E0 (1 << 0) /* Interrupt enable channel 0 */
/* /*
* This is PXA's sched_clock implementation. This has a resolution * This is PXA's sched_clock implementation. This has a resolution
...@@ -33,9 +49,14 @@ ...@@ -33,9 +49,14 @@
* calls to sched_clock() which should always be the case in practice. * calls to sched_clock() which should always be the case in practice.
*/ */
#define timer_readl(reg) readl_relaxed(timer_base + (reg))
#define timer_writel(val, reg) writel_relaxed((val), timer_base + (reg))
static void __iomem *timer_base;
static u64 notrace pxa_read_sched_clock(void) static u64 notrace pxa_read_sched_clock(void)
{ {
return readl_relaxed(OSCR); return timer_readl(OSCR);
} }
...@@ -47,8 +68,8 @@ pxa_ost0_interrupt(int irq, void *dev_id) ...@@ -47,8 +68,8 @@ pxa_ost0_interrupt(int irq, void *dev_id)
struct clock_event_device *c = dev_id; struct clock_event_device *c = dev_id;
/* Disarm the compare/match, signal the event. */ /* Disarm the compare/match, signal the event. */
writel_relaxed(readl_relaxed(OIER) & ~OIER_E0, OIER); timer_writel(timer_readl(OIER) & ~OIER_E0, OIER);
writel_relaxed(OSSR_M0, OSSR); timer_writel(OSSR_M0, OSSR);
c->event_handler(c); c->event_handler(c);
return IRQ_HANDLED; return IRQ_HANDLED;
...@@ -59,10 +80,10 @@ pxa_osmr0_set_next_event(unsigned long delta, struct clock_event_device *dev) ...@@ -59,10 +80,10 @@ pxa_osmr0_set_next_event(unsigned long delta, struct clock_event_device *dev)
{ {
unsigned long next, oscr; unsigned long next, oscr;
writel_relaxed(readl_relaxed(OIER) | OIER_E0, OIER); timer_writel(timer_readl(OIER) | OIER_E0, OIER);
next = readl_relaxed(OSCR) + delta; next = timer_readl(OSCR) + delta;
writel_relaxed(next, OSMR0); timer_writel(next, OSMR0);
oscr = readl_relaxed(OSCR); oscr = timer_readl(OSCR);
return (signed)(next - oscr) <= MIN_OSCR_DELTA ? -ETIME : 0; return (signed)(next - oscr) <= MIN_OSCR_DELTA ? -ETIME : 0;
} }
...@@ -72,15 +93,15 @@ pxa_osmr0_set_mode(enum clock_event_mode mode, struct clock_event_device *dev) ...@@ -72,15 +93,15 @@ pxa_osmr0_set_mode(enum clock_event_mode mode, struct clock_event_device *dev)
{ {
switch (mode) { switch (mode) {
case CLOCK_EVT_MODE_ONESHOT: case CLOCK_EVT_MODE_ONESHOT:
writel_relaxed(readl_relaxed(OIER) & ~OIER_E0, OIER); timer_writel(timer_readl(OIER) & ~OIER_E0, OIER);
writel_relaxed(OSSR_M0, OSSR); timer_writel(OSSR_M0, OSSR);
break; break;
case CLOCK_EVT_MODE_UNUSED: case CLOCK_EVT_MODE_UNUSED:
case CLOCK_EVT_MODE_SHUTDOWN: case CLOCK_EVT_MODE_SHUTDOWN:
/* initializing, released, or preparing for suspend */ /* initializing, released, or preparing for suspend */
writel_relaxed(readl_relaxed(OIER) & ~OIER_E0, OIER); timer_writel(timer_readl(OIER) & ~OIER_E0, OIER);
writel_relaxed(OSSR_M0, OSSR); timer_writel(OSSR_M0, OSSR);
break; break;
case CLOCK_EVT_MODE_RESUME: case CLOCK_EVT_MODE_RESUME:
...@@ -94,12 +115,12 @@ static unsigned long osmr[4], oier, oscr; ...@@ -94,12 +115,12 @@ static unsigned long osmr[4], oier, oscr;
static void pxa_timer_suspend(struct clock_event_device *cedev) static void pxa_timer_suspend(struct clock_event_device *cedev)
{ {
osmr[0] = readl_relaxed(OSMR0); osmr[0] = timer_readl(OSMR0);
osmr[1] = readl_relaxed(OSMR1); osmr[1] = timer_readl(OSMR1);
osmr[2] = readl_relaxed(OSMR2); osmr[2] = timer_readl(OSMR2);
osmr[3] = readl_relaxed(OSMR3); osmr[3] = timer_readl(OSMR3);
oier = readl_relaxed(OIER); oier = timer_readl(OIER);
oscr = readl_relaxed(OSCR); oscr = timer_readl(OSCR);
} }
static void pxa_timer_resume(struct clock_event_device *cedev) static void pxa_timer_resume(struct clock_event_device *cedev)
...@@ -113,12 +134,12 @@ static void pxa_timer_resume(struct clock_event_device *cedev) ...@@ -113,12 +134,12 @@ static void pxa_timer_resume(struct clock_event_device *cedev)
if (osmr[0] - oscr < MIN_OSCR_DELTA) if (osmr[0] - oscr < MIN_OSCR_DELTA)
osmr[0] += MIN_OSCR_DELTA; osmr[0] += MIN_OSCR_DELTA;
writel_relaxed(osmr[0], OSMR0); timer_writel(osmr[0], OSMR0);
writel_relaxed(osmr[1], OSMR1); timer_writel(osmr[1], OSMR1);
writel_relaxed(osmr[2], OSMR2); timer_writel(osmr[2], OSMR2);
writel_relaxed(osmr[3], OSMR3); timer_writel(osmr[3], OSMR3);
writel_relaxed(oier, OIER); timer_writel(oier, OIER);
writel_relaxed(oscr, OSCR); timer_writel(oscr, OSCR);
} }
#else #else
#define pxa_timer_suspend NULL #define pxa_timer_suspend NULL
...@@ -142,21 +163,65 @@ static struct irqaction pxa_ost0_irq = { ...@@ -142,21 +163,65 @@ static struct irqaction pxa_ost0_irq = {
.dev_id = &ckevt_pxa_osmr0, .dev_id = &ckevt_pxa_osmr0,
}; };
void __init pxa_timer_init(void) static void pxa_timer_common_init(int irq, unsigned long clock_tick_rate)
{ {
unsigned long clock_tick_rate = get_clock_tick_rate(); timer_writel(0, OIER);
timer_writel(OSSR_M0 | OSSR_M1 | OSSR_M2 | OSSR_M3, OSSR);
writel_relaxed(0, OIER);
writel_relaxed(OSSR_M0 | OSSR_M1 | OSSR_M2 | OSSR_M3, OSSR);
sched_clock_register(pxa_read_sched_clock, 32, clock_tick_rate); sched_clock_register(pxa_read_sched_clock, 32, clock_tick_rate);
ckevt_pxa_osmr0.cpumask = cpumask_of(0); ckevt_pxa_osmr0.cpumask = cpumask_of(0);
setup_irq(IRQ_OST0, &pxa_ost0_irq); setup_irq(irq, &pxa_ost0_irq);
clocksource_mmio_init(OSCR, "oscr0", clock_tick_rate, 200, 32, clocksource_mmio_init(timer_base + OSCR, "oscr0", clock_tick_rate, 200,
clocksource_mmio_readl_up); 32, clocksource_mmio_readl_up);
clockevents_config_and_register(&ckevt_pxa_osmr0, clock_tick_rate, clockevents_config_and_register(&ckevt_pxa_osmr0, clock_tick_rate,
MIN_OSCR_DELTA * 2, 0x7fffffff); MIN_OSCR_DELTA * 2, 0x7fffffff);
}
static void __init pxa_timer_dt_init(struct device_node *np)
{
struct clk *clk;
int irq;
/* timer registers are shared with watchdog timer */
timer_base = of_iomap(np, 0);
if (!timer_base)
panic("%s: unable to map resource\n", np->name);
clk = of_clk_get(np, 0);
if (IS_ERR(clk)) {
pr_crit("%s: unable to get clk\n", np->name);
return;
}
clk_prepare_enable(clk);
/* we are only interested in OS-timer0 irq */
irq = irq_of_parse_and_map(np, 0);
if (irq <= 0) {
pr_crit("%s: unable to parse OS-timer0 irq\n", np->name);
return;
}
pxa_timer_common_init(irq, clk_get_rate(clk));
}
CLOCKSOURCE_OF_DECLARE(pxa_timer, "marvell,pxa-timer", pxa_timer_dt_init);
/*
* Legacy timer init for non device-tree boards.
*/
void __init pxa_timer_nodt_init(int irq, void __iomem *base,
unsigned long clock_tick_rate)
{
struct clk *clk;
timer_base = base;
clk = clk_get(NULL, "OSTIMER0");
if (clk && !IS_ERR(clk))
clk_prepare_enable(clk);
else
pr_crit("%s: unable to get clk\n", __func__);
pxa_timer_common_init(irq, clock_tick_rate);
} }
...@@ -24,6 +24,7 @@ ...@@ -24,6 +24,7 @@
#include <linux/ioport.h> #include <linux/ioport.h>
#include <linux/irq.h> #include <linux/irq.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/pm_domain.h> #include <linux/pm_domain.h>
#include <linux/pm_runtime.h> #include <linux/pm_runtime.h>
...@@ -114,14 +115,15 @@ struct sh_cmt_device { ...@@ -114,14 +115,15 @@ struct sh_cmt_device {
struct platform_device *pdev; struct platform_device *pdev;
const struct sh_cmt_info *info; const struct sh_cmt_info *info;
bool legacy;
void __iomem *mapbase_ch;
void __iomem *mapbase; void __iomem *mapbase;
struct clk *clk; struct clk *clk;
raw_spinlock_t lock; /* Protect the shared start/stop register */
struct sh_cmt_channel *channels; struct sh_cmt_channel *channels;
unsigned int num_channels; unsigned int num_channels;
unsigned int hw_channels;
bool has_clockevent; bool has_clockevent;
bool has_clocksource; bool has_clocksource;
...@@ -301,14 +303,12 @@ static unsigned long sh_cmt_get_counter(struct sh_cmt_channel *ch, ...@@ -301,14 +303,12 @@ static unsigned long sh_cmt_get_counter(struct sh_cmt_channel *ch,
return v2; return v2;
} }
static DEFINE_RAW_SPINLOCK(sh_cmt_lock);
static void sh_cmt_start_stop_ch(struct sh_cmt_channel *ch, int start) static void sh_cmt_start_stop_ch(struct sh_cmt_channel *ch, int start)
{ {
unsigned long flags, value; unsigned long flags, value;
/* start stop register shared by multiple timer channels */ /* start stop register shared by multiple timer channels */
raw_spin_lock_irqsave(&sh_cmt_lock, flags); raw_spin_lock_irqsave(&ch->cmt->lock, flags);
value = sh_cmt_read_cmstr(ch); value = sh_cmt_read_cmstr(ch);
if (start) if (start)
...@@ -317,7 +317,7 @@ static void sh_cmt_start_stop_ch(struct sh_cmt_channel *ch, int start) ...@@ -317,7 +317,7 @@ static void sh_cmt_start_stop_ch(struct sh_cmt_channel *ch, int start)
value &= ~(1 << ch->timer_bit); value &= ~(1 << ch->timer_bit);
sh_cmt_write_cmstr(ch, value); sh_cmt_write_cmstr(ch, value);
raw_spin_unlock_irqrestore(&sh_cmt_lock, flags); raw_spin_unlock_irqrestore(&ch->cmt->lock, flags);
} }
static int sh_cmt_enable(struct sh_cmt_channel *ch, unsigned long *rate) static int sh_cmt_enable(struct sh_cmt_channel *ch, unsigned long *rate)
...@@ -792,7 +792,7 @@ static int sh_cmt_register_clockevent(struct sh_cmt_channel *ch, ...@@ -792,7 +792,7 @@ static int sh_cmt_register_clockevent(struct sh_cmt_channel *ch,
int irq; int irq;
int ret; int ret;
irq = platform_get_irq(ch->cmt->pdev, ch->cmt->legacy ? 0 : ch->index); irq = platform_get_irq(ch->cmt->pdev, ch->index);
if (irq < 0) { if (irq < 0) {
dev_err(&ch->cmt->pdev->dev, "ch%u: failed to get irq\n", dev_err(&ch->cmt->pdev->dev, "ch%u: failed to get irq\n",
ch->index); ch->index);
...@@ -863,33 +863,26 @@ static int sh_cmt_setup_channel(struct sh_cmt_channel *ch, unsigned int index, ...@@ -863,33 +863,26 @@ static int sh_cmt_setup_channel(struct sh_cmt_channel *ch, unsigned int index,
* Compute the address of the channel control register block. For the * Compute the address of the channel control register block. For the
* timers with a per-channel start/stop register, compute its address * timers with a per-channel start/stop register, compute its address
* as well. * as well.
*
* For legacy configuration the address has been mapped explicitly.
*/ */
if (cmt->legacy) { switch (cmt->info->model) {
ch->ioctrl = cmt->mapbase_ch; case SH_CMT_16BIT:
} else { ch->ioctrl = cmt->mapbase + 2 + ch->hwidx * 6;
switch (cmt->info->model) { break;
case SH_CMT_16BIT: case SH_CMT_32BIT:
ch->ioctrl = cmt->mapbase + 2 + ch->hwidx * 6; case SH_CMT_48BIT:
break; ch->ioctrl = cmt->mapbase + 0x10 + ch->hwidx * 0x10;
case SH_CMT_32BIT: break;
case SH_CMT_48BIT: case SH_CMT_32BIT_FAST:
ch->ioctrl = cmt->mapbase + 0x10 + ch->hwidx * 0x10; /*
break; * The 32-bit "fast" timer has a single channel at hwidx 5 but
case SH_CMT_32BIT_FAST: * is located at offset 0x40 instead of 0x60 for some reason.
/* */
* The 32-bit "fast" timer has a single channel at hwidx ch->ioctrl = cmt->mapbase + 0x40;
* 5 but is located at offset 0x40 instead of 0x60 for break;
* some reason. case SH_CMT_48BIT_GEN2:
*/ ch->iostart = cmt->mapbase + ch->hwidx * 0x100;
ch->ioctrl = cmt->mapbase + 0x40; ch->ioctrl = ch->iostart + 0x10;
break; break;
case SH_CMT_48BIT_GEN2:
ch->iostart = cmt->mapbase + ch->hwidx * 0x100;
ch->ioctrl = ch->iostart + 0x10;
break;
}
} }
if (cmt->info->width == (sizeof(ch->max_match_value) * 8)) if (cmt->info->width == (sizeof(ch->max_match_value) * 8))
...@@ -900,12 +893,7 @@ static int sh_cmt_setup_channel(struct sh_cmt_channel *ch, unsigned int index, ...@@ -900,12 +893,7 @@ static int sh_cmt_setup_channel(struct sh_cmt_channel *ch, unsigned int index,
ch->match_value = ch->max_match_value; ch->match_value = ch->max_match_value;
raw_spin_lock_init(&ch->lock); raw_spin_lock_init(&ch->lock);
if (cmt->legacy) { ch->timer_bit = cmt->info->model == SH_CMT_48BIT_GEN2 ? 0 : ch->hwidx;
ch->timer_bit = ch->hwidx;
} else {
ch->timer_bit = cmt->info->model == SH_CMT_48BIT_GEN2
? 0 : ch->hwidx;
}
ret = sh_cmt_register(ch, dev_name(&cmt->pdev->dev), ret = sh_cmt_register(ch, dev_name(&cmt->pdev->dev),
clockevent, clocksource); clockevent, clocksource);
...@@ -938,75 +926,65 @@ static int sh_cmt_map_memory(struct sh_cmt_device *cmt) ...@@ -938,75 +926,65 @@ static int sh_cmt_map_memory(struct sh_cmt_device *cmt)
return 0; return 0;
} }
static int sh_cmt_map_memory_legacy(struct sh_cmt_device *cmt) static const struct platform_device_id sh_cmt_id_table[] = {
{ { "sh-cmt-16", (kernel_ulong_t)&sh_cmt_info[SH_CMT_16BIT] },
struct sh_timer_config *cfg = cmt->pdev->dev.platform_data; { "sh-cmt-32", (kernel_ulong_t)&sh_cmt_info[SH_CMT_32BIT] },
struct resource *res, *res2; { "sh-cmt-32-fast", (kernel_ulong_t)&sh_cmt_info[SH_CMT_32BIT_FAST] },
{ "sh-cmt-48", (kernel_ulong_t)&sh_cmt_info[SH_CMT_48BIT] },
/* map memory, let mapbase_ch point to our channel */ { "sh-cmt-48-gen2", (kernel_ulong_t)&sh_cmt_info[SH_CMT_48BIT_GEN2] },
res = platform_get_resource(cmt->pdev, IORESOURCE_MEM, 0); { }
if (!res) { };
dev_err(&cmt->pdev->dev, "failed to get I/O memory\n"); MODULE_DEVICE_TABLE(platform, sh_cmt_id_table);
return -ENXIO;
}
cmt->mapbase_ch = ioremap_nocache(res->start, resource_size(res));
if (cmt->mapbase_ch == NULL) {
dev_err(&cmt->pdev->dev, "failed to remap I/O memory\n");
return -ENXIO;
}
/* optional resource for the shared timer start/stop register */
res2 = platform_get_resource(cmt->pdev, IORESOURCE_MEM, 1);
/* map second resource for CMSTR */
cmt->mapbase = ioremap_nocache(res2 ? res2->start :
res->start - cfg->channel_offset,
res2 ? resource_size(res2) : 2);
if (cmt->mapbase == NULL) {
dev_err(&cmt->pdev->dev, "failed to remap I/O second memory\n");
iounmap(cmt->mapbase_ch);
return -ENXIO;
}
/* identify the model based on the resources */
if (resource_size(res) == 6)
cmt->info = &sh_cmt_info[SH_CMT_16BIT];
else if (res2 && (resource_size(res2) == 4))
cmt->info = &sh_cmt_info[SH_CMT_48BIT_GEN2];
else
cmt->info = &sh_cmt_info[SH_CMT_32BIT];
return 0; static const struct of_device_id sh_cmt_of_table[] __maybe_unused = {
} { .compatible = "renesas,cmt-32", .data = &sh_cmt_info[SH_CMT_32BIT] },
{ .compatible = "renesas,cmt-32-fast", .data = &sh_cmt_info[SH_CMT_32BIT_FAST] },
{ .compatible = "renesas,cmt-48", .data = &sh_cmt_info[SH_CMT_48BIT] },
{ .compatible = "renesas,cmt-48-gen2", .data = &sh_cmt_info[SH_CMT_48BIT_GEN2] },
{ }
};
MODULE_DEVICE_TABLE(of, sh_cmt_of_table);
static void sh_cmt_unmap_memory(struct sh_cmt_device *cmt) static int sh_cmt_parse_dt(struct sh_cmt_device *cmt)
{ {
iounmap(cmt->mapbase); struct device_node *np = cmt->pdev->dev.of_node;
if (cmt->mapbase_ch)
iounmap(cmt->mapbase_ch); return of_property_read_u32(np, "renesas,channels-mask",
&cmt->hw_channels);
} }
static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev) static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev)
{ {
struct sh_timer_config *cfg = pdev->dev.platform_data; unsigned int mask;
const struct platform_device_id *id = pdev->id_entry; unsigned int i;
unsigned int hw_channels;
int ret; int ret;
memset(cmt, 0, sizeof(*cmt)); memset(cmt, 0, sizeof(*cmt));
cmt->pdev = pdev; cmt->pdev = pdev;
raw_spin_lock_init(&cmt->lock);
if (IS_ENABLED(CONFIG_OF) && pdev->dev.of_node) {
const struct of_device_id *id;
id = of_match_node(sh_cmt_of_table, pdev->dev.of_node);
cmt->info = id->data;
if (!cfg) { ret = sh_cmt_parse_dt(cmt);
if (ret < 0)
return ret;
} else if (pdev->dev.platform_data) {
struct sh_timer_config *cfg = pdev->dev.platform_data;
const struct platform_device_id *id = pdev->id_entry;
cmt->info = (const struct sh_cmt_info *)id->driver_data;
cmt->hw_channels = cfg->channels_mask;
} else {
dev_err(&cmt->pdev->dev, "missing platform data\n"); dev_err(&cmt->pdev->dev, "missing platform data\n");
return -ENXIO; return -ENXIO;
} }
cmt->info = (const struct sh_cmt_info *)id->driver_data;
cmt->legacy = cmt->info ? false : true;
/* Get hold of clock. */ /* Get hold of clock. */
cmt->clk = clk_get(&cmt->pdev->dev, cmt->legacy ? "cmt_fck" : "fck"); cmt->clk = clk_get(&cmt->pdev->dev, "fck");
if (IS_ERR(cmt->clk)) { if (IS_ERR(cmt->clk)) {
dev_err(&cmt->pdev->dev, "cannot get clock\n"); dev_err(&cmt->pdev->dev, "cannot get clock\n");
return PTR_ERR(cmt->clk); return PTR_ERR(cmt->clk);
...@@ -1016,28 +994,13 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev) ...@@ -1016,28 +994,13 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev)
if (ret < 0) if (ret < 0)
goto err_clk_put; goto err_clk_put;
/* /* Map the memory resource(s). */
* Map the memory resource(s). We need to support both the legacy ret = sh_cmt_map_memory(cmt);
* platform device configuration (with one device per channel) and the
* new version (with multiple channels per device).
*/
if (cmt->legacy)
ret = sh_cmt_map_memory_legacy(cmt);
else
ret = sh_cmt_map_memory(cmt);
if (ret < 0) if (ret < 0)
goto err_clk_unprepare; goto err_clk_unprepare;
/* Allocate and setup the channels. */ /* Allocate and setup the channels. */
if (cmt->legacy) { cmt->num_channels = hweight8(cmt->hw_channels);
cmt->num_channels = 1;
hw_channels = 0;
} else {
cmt->num_channels = hweight8(cfg->channels_mask);
hw_channels = cfg->channels_mask;
}
cmt->channels = kzalloc(cmt->num_channels * sizeof(*cmt->channels), cmt->channels = kzalloc(cmt->num_channels * sizeof(*cmt->channels),
GFP_KERNEL); GFP_KERNEL);
if (cmt->channels == NULL) { if (cmt->channels == NULL) {
...@@ -1045,35 +1008,21 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev) ...@@ -1045,35 +1008,21 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev)
goto err_unmap; goto err_unmap;
} }
if (cmt->legacy) { /*
ret = sh_cmt_setup_channel(&cmt->channels[0], * Use the first channel as a clock event device and the second channel
cfg->timer_bit, cfg->timer_bit, * as a clock source. If only one channel is available use it for both.
cfg->clockevent_rating != 0, */
cfg->clocksource_rating != 0, cmt); for (i = 0, mask = cmt->hw_channels; i < cmt->num_channels; ++i) {
unsigned int hwidx = ffs(mask) - 1;
bool clocksource = i == 1 || cmt->num_channels == 1;
bool clockevent = i == 0;
ret = sh_cmt_setup_channel(&cmt->channels[i], i, hwidx,
clockevent, clocksource, cmt);
if (ret < 0) if (ret < 0)
goto err_unmap; goto err_unmap;
} else {
unsigned int mask = hw_channels;
unsigned int i;
/* mask &= ~(1 << hwidx);
* Use the first channel as a clock event device and the second
* channel as a clock source. If only one channel is available
* use it for both.
*/
for (i = 0; i < cmt->num_channels; ++i) {
unsigned int hwidx = ffs(mask) - 1;
bool clocksource = i == 1 || cmt->num_channels == 1;
bool clockevent = i == 0;
ret = sh_cmt_setup_channel(&cmt->channels[i], i, hwidx,
clockevent, clocksource,
cmt);
if (ret < 0)
goto err_unmap;
mask &= ~(1 << hwidx);
}
} }
platform_set_drvdata(pdev, cmt); platform_set_drvdata(pdev, cmt);
...@@ -1082,7 +1031,7 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev) ...@@ -1082,7 +1031,7 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev)
err_unmap: err_unmap:
kfree(cmt->channels); kfree(cmt->channels);
sh_cmt_unmap_memory(cmt); iounmap(cmt->mapbase);
err_clk_unprepare: err_clk_unprepare:
clk_unprepare(cmt->clk); clk_unprepare(cmt->clk);
err_clk_put: err_clk_put:
...@@ -1132,22 +1081,12 @@ static int sh_cmt_remove(struct platform_device *pdev) ...@@ -1132,22 +1081,12 @@ static int sh_cmt_remove(struct platform_device *pdev)
return -EBUSY; /* cannot unregister clockevent and clocksource */ return -EBUSY; /* cannot unregister clockevent and clocksource */
} }
static const struct platform_device_id sh_cmt_id_table[] = {
{ "sh_cmt", 0 },
{ "sh-cmt-16", (kernel_ulong_t)&sh_cmt_info[SH_CMT_16BIT] },
{ "sh-cmt-32", (kernel_ulong_t)&sh_cmt_info[SH_CMT_32BIT] },
{ "sh-cmt-32-fast", (kernel_ulong_t)&sh_cmt_info[SH_CMT_32BIT_FAST] },
{ "sh-cmt-48", (kernel_ulong_t)&sh_cmt_info[SH_CMT_48BIT] },
{ "sh-cmt-48-gen2", (kernel_ulong_t)&sh_cmt_info[SH_CMT_48BIT_GEN2] },
{ }
};
MODULE_DEVICE_TABLE(platform, sh_cmt_id_table);
static struct platform_driver sh_cmt_device_driver = { static struct platform_driver sh_cmt_device_driver = {
.probe = sh_cmt_probe, .probe = sh_cmt_probe,
.remove = sh_cmt_remove, .remove = sh_cmt_remove,
.driver = { .driver = {
.name = "sh_cmt", .name = "sh_cmt",
.of_match_table = of_match_ptr(sh_cmt_of_table),
}, },
.id_table = sh_cmt_id_table, .id_table = sh_cmt_id_table,
}; };
......
...@@ -23,6 +23,7 @@ ...@@ -23,6 +23,7 @@
#include <linux/ioport.h> #include <linux/ioport.h>
#include <linux/irq.h> #include <linux/irq.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/pm_domain.h> #include <linux/pm_domain.h>
#include <linux/pm_runtime.h> #include <linux/pm_runtime.h>
...@@ -37,7 +38,6 @@ struct sh_mtu2_channel { ...@@ -37,7 +38,6 @@ struct sh_mtu2_channel {
unsigned int index; unsigned int index;
void __iomem *base; void __iomem *base;
int irq;
struct clock_event_device ced; struct clock_event_device ced;
}; };
...@@ -48,15 +48,14 @@ struct sh_mtu2_device { ...@@ -48,15 +48,14 @@ struct sh_mtu2_device {
void __iomem *mapbase; void __iomem *mapbase;
struct clk *clk; struct clk *clk;
raw_spinlock_t lock; /* Protect the shared registers */
struct sh_mtu2_channel *channels; struct sh_mtu2_channel *channels;
unsigned int num_channels; unsigned int num_channels;
bool legacy;
bool has_clockevent; bool has_clockevent;
}; };
static DEFINE_RAW_SPINLOCK(sh_mtu2_lock);
#define TSTR -1 /* shared register */ #define TSTR -1 /* shared register */
#define TCR 0 /* channel register */ #define TCR 0 /* channel register */
#define TMDR 1 /* channel register */ #define TMDR 1 /* channel register */
...@@ -162,12 +161,8 @@ static inline unsigned long sh_mtu2_read(struct sh_mtu2_channel *ch, int reg_nr) ...@@ -162,12 +161,8 @@ static inline unsigned long sh_mtu2_read(struct sh_mtu2_channel *ch, int reg_nr)
{ {
unsigned long offs; unsigned long offs;
if (reg_nr == TSTR) { if (reg_nr == TSTR)
if (ch->mtu->legacy) return ioread8(ch->mtu->mapbase + 0x280);
return ioread8(ch->mtu->mapbase);
else
return ioread8(ch->mtu->mapbase + 0x280);
}
offs = mtu2_reg_offs[reg_nr]; offs = mtu2_reg_offs[reg_nr];
...@@ -182,12 +177,8 @@ static inline void sh_mtu2_write(struct sh_mtu2_channel *ch, int reg_nr, ...@@ -182,12 +177,8 @@ static inline void sh_mtu2_write(struct sh_mtu2_channel *ch, int reg_nr,
{ {
unsigned long offs; unsigned long offs;
if (reg_nr == TSTR) { if (reg_nr == TSTR)
if (ch->mtu->legacy) return iowrite8(value, ch->mtu->mapbase + 0x280);
return iowrite8(value, ch->mtu->mapbase);
else
return iowrite8(value, ch->mtu->mapbase + 0x280);
}
offs = mtu2_reg_offs[reg_nr]; offs = mtu2_reg_offs[reg_nr];
@@ -202,7 +193,7 @@ static void sh_mtu2_start_stop_ch(struct sh_mtu2_channel *ch, int start)
 	unsigned long flags, value;

 	/* start stop register shared by multiple timer channels */
-	raw_spin_lock_irqsave(&sh_mtu2_lock, flags);
+	raw_spin_lock_irqsave(&ch->mtu->lock, flags);
 	value = sh_mtu2_read(ch, TSTR);

 	if (start)
@@ -211,7 +202,7 @@ static void sh_mtu2_start_stop_ch(struct sh_mtu2_channel *ch, int start)
 		value &= ~(1 << ch->index);

 	sh_mtu2_write(ch, TSTR, value);
-	raw_spin_unlock_irqrestore(&sh_mtu2_lock, flags);
+	raw_spin_unlock_irqrestore(&ch->mtu->lock, flags);
 }

 static int sh_mtu2_enable(struct sh_mtu2_channel *ch)
@@ -331,7 +322,6 @@ static void sh_mtu2_register_clockevent(struct sh_mtu2_channel *ch,
					const char *name)
 {
 	struct clock_event_device *ced = &ch->ced;
-	int ret;

 	ced->name = name;
 	ced->features = CLOCK_EVT_FEAT_PERIODIC;
@@ -344,24 +334,12 @@ static void sh_mtu2_register_clockevent(struct sh_mtu2_channel *ch,
 	dev_info(&ch->mtu->pdev->dev, "ch%u: used for clock events\n",
		 ch->index);
 	clockevents_register_device(ced);
-
-	ret = request_irq(ch->irq, sh_mtu2_interrupt,
-			  IRQF_TIMER | IRQF_IRQPOLL | IRQF_NOBALANCING,
-			  dev_name(&ch->mtu->pdev->dev), ch);
-	if (ret) {
-		dev_err(&ch->mtu->pdev->dev, "ch%u: failed to request irq %d\n",
-			ch->index, ch->irq);
-		return;
-	}
 }

-static int sh_mtu2_register(struct sh_mtu2_channel *ch, const char *name,
-			    bool clockevent)
+static int sh_mtu2_register(struct sh_mtu2_channel *ch, const char *name)
 {
-	if (clockevent) {
-		ch->mtu->has_clockevent = true;
-		sh_mtu2_register_clockevent(ch, name);
-	}
+	ch->mtu->has_clockevent = true;
+	sh_mtu2_register_clockevent(ch, name);

 	return 0;
 }
@@ -372,40 +350,32 @@ static int sh_mtu2_setup_channel(struct sh_mtu2_channel *ch, unsigned int index,
 	static const unsigned int channel_offsets[] = {
		0x300, 0x380, 0x000,
 	};
-	bool clockevent;
+	char name[6];
+	int irq;
+	int ret;

 	ch->mtu = mtu;

-	if (mtu->legacy) {
-		struct sh_timer_config *cfg = mtu->pdev->dev.platform_data;
-
-		clockevent = cfg->clockevent_rating != 0;
-
-		ch->irq = platform_get_irq(mtu->pdev, 0);
-		ch->base = mtu->mapbase - cfg->channel_offset;
-		ch->index = cfg->timer_bit;
-	} else {
-		char name[6];
-
-		clockevent = true;
-
-		sprintf(name, "tgi%ua", index);
-		ch->irq = platform_get_irq_byname(mtu->pdev, name);
-		ch->base = mtu->mapbase + channel_offsets[index];
-		ch->index = index;
-	}
-
-	if (ch->irq < 0) {
+	sprintf(name, "tgi%ua", index);
+	irq = platform_get_irq_byname(mtu->pdev, name);
+	if (irq < 0) {
 		/* Skip channels with no declared interrupt. */
-		if (!mtu->legacy)
-			return 0;
-
-		dev_err(&mtu->pdev->dev, "ch%u: failed to get irq\n",
-			ch->index);
-		return ch->irq;
+		return 0;
+	}
+
+	ret = request_irq(irq, sh_mtu2_interrupt,
+			  IRQF_TIMER | IRQF_IRQPOLL | IRQF_NOBALANCING,
+			  dev_name(&ch->mtu->pdev->dev), ch);
+	if (ret) {
+		dev_err(&ch->mtu->pdev->dev, "ch%u: failed to request irq %d\n",
+			index, irq);
+		return ret;
 	}

-	return sh_mtu2_register(ch, dev_name(&mtu->pdev->dev), clockevent);
+	ch->base = mtu->mapbase + channel_offsets[index];
+	ch->index = index;
+
+	return sh_mtu2_register(ch, dev_name(&mtu->pdev->dev));
 }

 static int sh_mtu2_map_memory(struct sh_mtu2_device *mtu)
@@ -422,46 +392,21 @@ static int sh_mtu2_map_memory(struct sh_mtu2_device *mtu)
 	if (mtu->mapbase == NULL)
 		return -ENXIO;

-	/*
-	 * In legacy platform device configuration (with one device per channel)
-	 * the resource points to the channel base address.
-	 */
-	if (mtu->legacy) {
-		struct sh_timer_config *cfg = mtu->pdev->dev.platform_data;
-
-		mtu->mapbase += cfg->channel_offset;
-	}
-
 	return 0;
 }

-static void sh_mtu2_unmap_memory(struct sh_mtu2_device *mtu)
-{
-	if (mtu->legacy) {
-		struct sh_timer_config *cfg = mtu->pdev->dev.platform_data;
-
-		mtu->mapbase -= cfg->channel_offset;
-	}
-
-	iounmap(mtu->mapbase);
-}
-
 static int sh_mtu2_setup(struct sh_mtu2_device *mtu,
			 struct platform_device *pdev)
 {
-	struct sh_timer_config *cfg = pdev->dev.platform_data;
-	const struct platform_device_id *id = pdev->id_entry;
 	unsigned int i;
 	int ret;

 	mtu->pdev = pdev;
-	mtu->legacy = id->driver_data;

-	if (mtu->legacy && !cfg) {
-		dev_err(&mtu->pdev->dev, "missing platform data\n");
-		return -ENXIO;
-	}
+	raw_spin_lock_init(&mtu->lock);

 	/* Get hold of clock. */
-	mtu->clk = clk_get(&mtu->pdev->dev, mtu->legacy ? "mtu2_fck" : "fck");
+	mtu->clk = clk_get(&mtu->pdev->dev, "fck");
 	if (IS_ERR(mtu->clk)) {
 		dev_err(&mtu->pdev->dev, "cannot get clock\n");
 		return PTR_ERR(mtu->clk);
@@ -479,10 +424,7 @@ static int sh_mtu2_setup(struct sh_mtu2_device *mtu,
 	}

 	/* Allocate and setup the channels. */
-	if (mtu->legacy)
-		mtu->num_channels = 1;
-	else
-		mtu->num_channels = 3;
+	mtu->num_channels = 3;

 	mtu->channels = kzalloc(sizeof(*mtu->channels) * mtu->num_channels,
				GFP_KERNEL);
@@ -491,16 +433,10 @@ static int sh_mtu2_setup(struct sh_mtu2_device *mtu,
 		goto err_unmap;
 	}

-	if (mtu->legacy) {
-		ret = sh_mtu2_setup_channel(&mtu->channels[0], 0, mtu);
-		if (ret < 0)
-			goto err_unmap;
-	} else {
-		for (i = 0; i < mtu->num_channels; ++i) {
-			ret = sh_mtu2_setup_channel(&mtu->channels[i], i, mtu);
-			if (ret < 0)
-				goto err_unmap;
-		}
+	for (i = 0; i < mtu->num_channels; ++i) {
+		ret = sh_mtu2_setup_channel(&mtu->channels[i], i, mtu);
+		if (ret < 0)
+			goto err_unmap;
 	}

 	platform_set_drvdata(pdev, mtu);
@@ -509,7 +445,7 @@ static int sh_mtu2_setup(struct sh_mtu2_device *mtu,
 err_unmap:
 	kfree(mtu->channels);
-	sh_mtu2_unmap_memory(mtu);
+	iounmap(mtu->mapbase);
 err_clk_unprepare:
 	clk_unprepare(mtu->clk);
 err_clk_put:
@@ -560,17 +496,23 @@ static int sh_mtu2_remove(struct platform_device *pdev)
 }

 static const struct platform_device_id sh_mtu2_id_table[] = {
-	{ "sh_mtu2", 1 },
 	{ "sh-mtu2", 0 },
 	{ },
 };
 MODULE_DEVICE_TABLE(platform, sh_mtu2_id_table);

+static const struct of_device_id sh_mtu2_of_table[] __maybe_unused = {
+	{ .compatible = "renesas,mtu2" },
+	{ }
+};
+MODULE_DEVICE_TABLE(of, sh_mtu2_of_table);
+
 static struct platform_driver sh_mtu2_device_driver = {
	.probe		= sh_mtu2_probe,
	.remove		= sh_mtu2_remove,
	.driver		= {
		.name	= "sh_mtu2",
+		.of_match_table = of_match_ptr(sh_mtu2_of_table),
	},
	.id_table	= sh_mtu2_id_table,
 };
......
@@ -24,6 +24,7 @@
 #include <linux/ioport.h>
 #include <linux/irq.h>
 #include <linux/module.h>
+#include <linux/of.h>
 #include <linux/platform_device.h>
 #include <linux/pm_domain.h>
 #include <linux/pm_runtime.h>
@@ -32,7 +33,6 @@
 #include <linux/spinlock.h>

 enum sh_tmu_model {
-	SH_TMU_LEGACY,
	SH_TMU,
	SH_TMU_SH3,
 };
@@ -62,6 +62,8 @@ struct sh_tmu_device {
	enum sh_tmu_model model;

+	raw_spinlock_t lock; /* Protect the shared start/stop register */
+
	struct sh_tmu_channel *channels;
	unsigned int num_channels;
@@ -69,8 +71,6 @@ struct sh_tmu_device {
	bool has_clocksource;
 };

-static DEFINE_RAW_SPINLOCK(sh_tmu_lock);
-
 #define TSTR -1 /* shared register */
 #define TCOR 0 /* channel register */
 #define TCNT 1 /* channel register */
@@ -91,8 +91,6 @@ static inline unsigned long sh_tmu_read(struct sh_tmu_channel *ch, int reg_nr)
 	if (reg_nr == TSTR) {
 		switch (ch->tmu->model) {
-		case SH_TMU_LEGACY:
-			return ioread8(ch->tmu->mapbase);
 		case SH_TMU_SH3:
			return ioread8(ch->tmu->mapbase + 2);
 		case SH_TMU:
@@ -115,8 +113,6 @@ static inline void sh_tmu_write(struct sh_tmu_channel *ch, int reg_nr,
 	if (reg_nr == TSTR) {
 		switch (ch->tmu->model) {
-		case SH_TMU_LEGACY:
-			return iowrite8(value, ch->tmu->mapbase);
 		case SH_TMU_SH3:
			return iowrite8(value, ch->tmu->mapbase + 2);
 		case SH_TMU:
@@ -137,7 +133,7 @@ static void sh_tmu_start_stop_ch(struct sh_tmu_channel *ch, int start)
 	unsigned long flags, value;

 	/* start stop register shared by multiple timer channels */
-	raw_spin_lock_irqsave(&sh_tmu_lock, flags);
+	raw_spin_lock_irqsave(&ch->tmu->lock, flags);
 	value = sh_tmu_read(ch, TSTR);

 	if (start)
@@ -146,7 +142,7 @@ static void sh_tmu_start_stop_ch(struct sh_tmu_channel *ch, int start)
 		value &= ~(1 << ch->index);

 	sh_tmu_write(ch, TSTR, value);
-	raw_spin_unlock_irqrestore(&sh_tmu_lock, flags);
+	raw_spin_unlock_irqrestore(&ch->tmu->lock, flags);
 }

 static int __sh_tmu_enable(struct sh_tmu_channel *ch)
@@ -476,27 +472,12 @@ static int sh_tmu_channel_setup(struct sh_tmu_channel *ch, unsigned int index,
 		return 0;

 	ch->tmu = tmu;
+	ch->index = index;

-	if (tmu->model == SH_TMU_LEGACY) {
-		struct sh_timer_config *cfg = tmu->pdev->dev.platform_data;
-
-		/*
-		 * The SH3 variant (SH770x, SH7705, SH7710 and SH7720) maps
-		 * channel registers blocks at base + 2 + 12 * index, while all
-		 * other variants map them at base + 4 + 12 * index. We can
-		 * compute the index by just dividing by 12, the 2 bytes or 4
-		 * bytes offset being hidden by the integer division.
-		 */
-		ch->index = cfg->channel_offset / 12;
-		ch->base = tmu->mapbase + cfg->channel_offset;
-	} else {
-		ch->index = index;
-
-		if (tmu->model == SH_TMU_SH3)
-			ch->base = tmu->mapbase + 4 + ch->index * 12;
-		else
-			ch->base = tmu->mapbase + 8 + ch->index * 12;
-	}
+	if (tmu->model == SH_TMU_SH3)
+		ch->base = tmu->mapbase + 4 + ch->index * 12;
+	else
+		ch->base = tmu->mapbase + 8 + ch->index * 12;

 	ch->irq = platform_get_irq(tmu->pdev, index);
 	if (ch->irq < 0) {
@@ -526,46 +507,53 @@ static int sh_tmu_map_memory(struct sh_tmu_device *tmu)
 	if (tmu->mapbase == NULL)
 		return -ENXIO;

-	/*
-	 * In legacy platform device configuration (with one device per channel)
-	 * the resource points to the channel base address.
-	 */
-	if (tmu->model == SH_TMU_LEGACY) {
-		struct sh_timer_config *cfg = tmu->pdev->dev.platform_data;
-
-		tmu->mapbase -= cfg->channel_offset;
-	}
-
 	return 0;
 }

-static void sh_tmu_unmap_memory(struct sh_tmu_device *tmu)
+static int sh_tmu_parse_dt(struct sh_tmu_device *tmu)
 {
-	if (tmu->model == SH_TMU_LEGACY) {
-		struct sh_timer_config *cfg = tmu->pdev->dev.platform_data;
-
-		tmu->mapbase += cfg->channel_offset;
+	struct device_node *np = tmu->pdev->dev.of_node;
+
+	tmu->model = SH_TMU;
+	tmu->num_channels = 3;
+
+	of_property_read_u32(np, "#renesas,channels", &tmu->num_channels);
+
+	if (tmu->num_channels != 2 && tmu->num_channels != 3) {
+		dev_err(&tmu->pdev->dev, "invalid number of channels %u\n",
+			tmu->num_channels);
+		return -EINVAL;
 	}

-	iounmap(tmu->mapbase);
+	return 0;
 }

 static int sh_tmu_setup(struct sh_tmu_device *tmu, struct platform_device *pdev)
 {
-	struct sh_timer_config *cfg = pdev->dev.platform_data;
-	const struct platform_device_id *id = pdev->id_entry;
 	unsigned int i;
 	int ret;

-	if (!cfg) {
+	tmu->pdev = pdev;
+
+	raw_spin_lock_init(&tmu->lock);
+
+	if (IS_ENABLED(CONFIG_OF) && pdev->dev.of_node) {
+		ret = sh_tmu_parse_dt(tmu);
+		if (ret < 0)
+			return ret;
+	} else if (pdev->dev.platform_data) {
+		const struct platform_device_id *id = pdev->id_entry;
+		struct sh_timer_config *cfg = pdev->dev.platform_data;
+
+		tmu->model = id->driver_data;
+		tmu->num_channels = hweight8(cfg->channels_mask);
+	} else {
 		dev_err(&tmu->pdev->dev, "missing platform data\n");
 		return -ENXIO;
 	}

-	tmu->pdev = pdev;
-	tmu->model = id->driver_data;
-
 	/* Get hold of clock. */
-	tmu->clk = clk_get(&tmu->pdev->dev,
-			   tmu->model == SH_TMU_LEGACY ? "tmu_fck" : "fck");
+	tmu->clk = clk_get(&tmu->pdev->dev, "fck");
 	if (IS_ERR(tmu->clk)) {
 		dev_err(&tmu->pdev->dev, "cannot get clock\n");
 		return PTR_ERR(tmu->clk);
@@ -583,11 +571,6 @@ static int sh_tmu_setup(struct sh_tmu_device *tmu, struct platform_device *pdev)
 	}

 	/* Allocate and setup the channels. */
-	if (tmu->model == SH_TMU_LEGACY)
-		tmu->num_channels = 1;
-	else
-		tmu->num_channels = hweight8(cfg->channels_mask);
-
 	tmu->channels = kzalloc(sizeof(*tmu->channels) * tmu->num_channels,
				GFP_KERNEL);
 	if (tmu->channels == NULL) {
@@ -595,23 +578,15 @@ static int sh_tmu_setup(struct sh_tmu_device *tmu, struct platform_device *pdev)
 		goto err_unmap;
 	}

-	if (tmu->model == SH_TMU_LEGACY) {
-		ret = sh_tmu_channel_setup(&tmu->channels[0], 0,
-					   cfg->clockevent_rating != 0,
-					   cfg->clocksource_rating != 0, tmu);
+	/*
+	 * Use the first channel as a clock event device and the second channel
+	 * as a clock source.
+	 */
+	for (i = 0; i < tmu->num_channels; ++i) {
+		ret = sh_tmu_channel_setup(&tmu->channels[i], i,
+					   i == 0, i == 1, tmu);
 		if (ret < 0)
 			goto err_unmap;
-	} else {
-		/*
-		 * Use the first channel as a clock event device and the second
-		 * channel as a clock source.
-		 */
-		for (i = 0; i < tmu->num_channels; ++i) {
-			ret = sh_tmu_channel_setup(&tmu->channels[i], i,
-						   i == 0, i == 1, tmu);
-			if (ret < 0)
-				goto err_unmap;
-		}
 	}

 	platform_set_drvdata(pdev, tmu);
@@ -620,7 +595,7 @@ static int sh_tmu_setup(struct sh_tmu_device *tmu, struct platform_device *pdev)
 err_unmap:
 	kfree(tmu->channels);
-	sh_tmu_unmap_memory(tmu);
+	iounmap(tmu->mapbase);
 err_clk_unprepare:
 	clk_unprepare(tmu->clk);
 err_clk_put:
@@ -671,18 +646,24 @@ static int sh_tmu_remove(struct platform_device *pdev)
 }

 static const struct platform_device_id sh_tmu_id_table[] = {
-	{ "sh_tmu", SH_TMU_LEGACY },
	{ "sh-tmu", SH_TMU },
	{ "sh-tmu-sh3", SH_TMU_SH3 },
	{ }
 };
 MODULE_DEVICE_TABLE(platform, sh_tmu_id_table);

+static const struct of_device_id sh_tmu_of_table[] __maybe_unused = {
+	{ .compatible = "renesas,tmu" },
+	{ }
+};
+MODULE_DEVICE_TABLE(of, sh_tmu_of_table);
+
 static struct platform_driver sh_tmu_device_driver = {
	.probe		= sh_tmu_probe,
	.remove		= sh_tmu_remove,
	.driver		= {
		.name	= "sh_tmu",
+		.of_match_table = of_match_ptr(sh_tmu_of_table),
	},
	.id_table	= sh_tmu_id_table,
 };
......
@@ -260,6 +260,9 @@ static void __init sirfsoc_marco_timer_init(struct device_node *np)
 	clk = of_clk_get(np, 0);
 	BUG_ON(IS_ERR(clk));
+	BUG_ON(clk_prepare_enable(clk));
 	rate = clk_get_rate(clk);

 	BUG_ON(rate < MARCO_CLOCK_FREQ);
......
@@ -200,6 +200,9 @@ static void __init sirfsoc_prima2_timer_init(struct device_node *np)
 	clk = of_clk_get(np, 0);
 	BUG_ON(IS_ERR(clk));
+	BUG_ON(clk_prepare_enable(clk));
 	rate = clk_get_rate(clk);

 	BUG_ON(rate < PRIMA2_CLOCK_FREQ);
......
@@ -702,6 +702,42 @@ void __iomem *of_iomap(struct device_node *np, int index)
 }
 EXPORT_SYMBOL(of_iomap);

+/*
+ * of_io_request_and_map - Requests a resource and maps the memory mapped IO
+ *			   for a given device_node
+ * @device: the device whose io range will be mapped
+ * @index: index of the io range
+ * @name: name of the resource
+ *
+ * Returns a pointer to the requested and mapped memory or an ERR_PTR() encoded
+ * error code on failure. Usage example:
+ *
+ *	base = of_io_request_and_map(node, 0, "foo");
+ *	if (IS_ERR(base))
+ *		return PTR_ERR(base);
+ */
+void __iomem *of_io_request_and_map(struct device_node *np, int index,
+				    char *name)
+{
+	struct resource res;
+	void __iomem *mem;
+
+	if (of_address_to_resource(np, index, &res))
+		return IOMEM_ERR_PTR(-EINVAL);
+
+	if (!request_mem_region(res.start, resource_size(&res), name))
+		return IOMEM_ERR_PTR(-EBUSY);
+
+	mem = ioremap(res.start, resource_size(&res));
+	if (!mem) {
+		release_mem_region(res.start, resource_size(&res));
+		return IOMEM_ERR_PTR(-ENOMEM);
+	}
+
+	return mem;
+}
+EXPORT_SYMBOL(of_io_request_and_map);
+
 /**
  * of_dma_get_range - Get DMA range info
  * @np: device node to get DMA range info
......
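Note: of_io_request_and_map() simply chains of_address_to_resource(), request_mem_region() and ioremap(), and reports failure through an ERR_PTR()-encoded __iomem pointer, so the claimed region also becomes visible in /proc/iomem under the given name. A minimal sketch of how a DT timer init callback could use it; the "foo" names and the panic-on-failure policy are illustrative assumptions, not part of this series:

#include <linux/err.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/of_address.h>

static void __iomem *foo_timer_base;

static void __init foo_timer_init(struct device_node *np)
{
	/* Map the first "reg" entry of the node and claim the region. */
	foo_timer_base = of_io_request_and_map(np, 0, "foo-timer");
	if (IS_ERR(foo_timer_base))
		panic("%s: unable to map timer registers", np->full_name);

	/* ... clocksource/clockevent registration would follow here ... */
}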
/*
* PXA clocksource, clockevents, and OST interrupt handlers.
*
* Copyright (C) 2014 Robert Jarzmik
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License.
*
*/
#ifndef _CLOCKSOURCE_PXA_H
#define _CLOCKSOURCE_PXA_H
extern void pxa_timer_nodt_init(int irq, void __iomem *base,
unsigned long clock_tick_rate);
#endif
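The header above gives non-DT platforms an entry point into the relocated PXA timer driver. A rough sketch of how legacy board code might call it from its time init; the include path, register base, IRQ number and tick rate below are placeholders to be taken from the SoC's datasheet and mach headers, not values defined by this series:

#include <linux/bug.h>
#include <linux/init.h>
#include <linux/io.h>
#include "pxa_timer.h"			/* the header added above; install path not shown here */

#define BOARD_OST_PHYS		0x40a00000	/* placeholder OS-timer base */
#define BOARD_OST_IRQ		26		/* placeholder OST interrupt */
#define BOARD_OST_TICK_RATE	3250000UL	/* placeholder timer input clock rate */

static void __init board_init_time(void)
{
	void __iomem *base = ioremap(BOARD_OST_PHYS, 0x100);

	if (WARN_ON(!base))
		return;

	/* Hand the interrupt, registers and tick rate to the common driver. */
	pxa_timer_nodt_init(BOARD_OST_IRQ, base, BOARD_OST_TICK_RATE);
}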
@@ -58,6 +58,8 @@ static inline void devm_ioport_unmap(struct device *dev, void __iomem *addr)
 }
 #endif

+#define IOMEM_ERR_PTR(err) (__force void __iomem *)ERR_PTR(err)
+
 void __iomem *devm_ioremap(struct device *dev, resource_size_t offset,
			   unsigned long size);
 void __iomem *devm_ioremap_nocache(struct device *dev, resource_size_t offset,
......
@@ -109,7 +109,12 @@ static inline bool of_dma_is_coherent(struct device_node *np)
 extern int of_address_to_resource(struct device_node *dev, int index,
				  struct resource *r);
 void __iomem *of_iomap(struct device_node *node, int index);
+void __iomem *of_io_request_and_map(struct device_node *device,
+				    int index, char *name);
 #else
+#include <linux/io.h>
+
 static inline int of_address_to_resource(struct device_node *dev, int index,
					 struct resource *r)
 {
@@ -120,6 +125,12 @@ static inline void __iomem *of_iomap(struct device_node *device, int index)
 {
	return NULL;
 }
+
+static inline void __iomem *of_io_request_and_map(struct device_node *device,
+						  int index, char *name)
+{
+	return IOMEM_ERR_PTR(-EINVAL);
+}
 #endif

 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_PCI)
......
@@ -2,11 +2,6 @@
 #define __SH_TIMER_H__

 struct sh_timer_config {
-	char *name;
-	long channel_offset;
-	int timer_bit;
-	unsigned long clockevent_rating;
-	unsigned long clocksource_rating;
	unsigned int channels_mask;
 };
......
@@ -86,8 +86,6 @@ void devm_iounmap(struct device *dev, void __iomem *addr)
 }
 EXPORT_SYMBOL(devm_iounmap);

-#define IOMEM_ERR_PTR(err) (__force void __iomem *)ERR_PTR(err)
-
 /**
  * devm_ioremap_resource() - check, request region, and ioremap resource
  * @dev: generic device to handle the resource for
......