Commit e1ddf67c authored by Simon Glass, committed by Bin Meng

timer: Allow delays with a 32-bit microsecond timer

The current get_timer_us() uses 64-bit arithmetic on 32-bit machines. For
microsecond-level timeouts, 32 bits is plenty. Add a new function that
uses an unsigned long. On 64-bit machines this is still 64 bits wide, so
there is no penalty; on 32-bit machines it is more efficient.
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Parent ce04a902
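As a quick sanity check on the "32 bits is plenty" claim: a 32-bit microsecond counter wraps every 2^32 µs, roughly 71.6 minutes, and unsigned modular subtraction still yields the correct elapsed time across a single wrap. The standalone sketch below (not part of the commit; it models the counter with uint32_t for illustration) demonstrates this:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	/*
	 * Model a 32-bit microsecond counter: it wraps every 2^32 us,
	 * about 71.6 minutes. Unsigned modular subtraction still gives
	 * the correct delta across a single wrap.
	 */
	uint32_t base = 0xfffffff0u;	/* reading just before the wrap */
	uint32_t now = 0x00000010u;	/* reading just after the wrap */

	printf("elapsed = %" PRIu32 " us\n", now - base);	/* prints 32 */

	return 0;
}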
include/time.h
@@ -17,6 +17,17 @@ unsigned long get_timer(unsigned long base);
 unsigned long timer_get_us(void);
 uint64_t get_timer_us(uint64_t base);
 
+/**
+ * get_timer_us_long() - Get the number of elapsed microseconds
+ *
+ * This uses 32-bit arithmetic on 32-bit machines, which is enough to handle
+ * delays of over an hour. For 64-bit machines it uses a 64-bit value.
+ *
+ * @base: Base time to consider
+ * @return elapsed time since @base
+ */
+unsigned long get_timer_us_long(unsigned long base);
+
 /*
  * timer_test_add_offset()
  *
lib/time.c
@@ -152,6 +152,11 @@ uint64_t __weak get_timer_us(uint64_t base)
 	return tick_to_time_us(get_ticks()) - base;
 }
 
+unsigned long __weak get_timer_us_long(unsigned long base)
+{
+	return timer_get_us() - base;
+}
+
 unsigned long __weak notrace timer_get_us(void)
 {
 	return tick_to_time(get_ticks() * 1000);
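For context, here is a minimal sketch of how a caller might use the new helper for a microsecond-scale timeout loop. Note that wait_for_ready(), ready() and TIMEOUT_US are hypothetical names invented for this illustration; only get_timer_us_long() comes from this commit:

#define TIMEOUT_US	100000UL	/* 100 ms, chosen for illustration */

/* ready() is a hypothetical polling predicate, not part of the commit */
static int wait_for_ready(void)
{
	/* Take the current time as the base for the timeout */
	unsigned long start = get_timer_us_long(0);

	while (!ready()) {
		/* Unsigned subtraction stays correct across wrap-around */
		if (get_timer_us_long(start) > TIMEOUT_US)
			return -ETIMEDOUT;
	}

	return 0;
}

Taking the base with get_timer_us_long(0) and passing it back on each poll mirrors the common get_timer() idiom; on a 32-bit build the comparison compiles down to plain 32-bit arithmetic, which is the point of the new function.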