Commit c6083cd6 authored by David Brownell, committed by Haavard Skinnemoen

[AVR32] faster avr32 unaligned access

Use a more conventional implementation for unaligned access, and include
an AT32AP-specific optimization:  the CPU will handle unaligned words.

The result is always faster and smaller for 8-, 16-, and 32-bit values.
For 64-bit quantities, it's presumably larger.
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Parent 8b4a4080
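
The optimization is easiest to see in miniature. Below is a minimal userspace sketch of the same sizeof-based dispatch, assuming a CPU that (like the AT32AP core) tolerates unaligned word loads; the names demo_get_bytes and demo_get_unaligned are invented for illustration and are not the kernel's. A 32-bit access becomes a plain load, while other sizes fall back to a byte-wise copy that is safe at any alignment.

/*
 * Sketch of the size dispatch behind this patch (illustrative names,
 * not kernel code).  Assumes hardware that handles unaligned word
 * loads; a strict-alignment CPU would trap on the direct dereference.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Generic fallback: byte-wise copy, safe for any alignment and size. */
static uint64_t demo_get_bytes(const void *p, size_t size)
{
	uint64_t val = 0;

	memcpy(&val, p, size);
	return val;
}

/*
 * Compile-time dispatch on sizeof, mirroring the shape of the patch's
 * ___get_unaligned(): four-byte values are read with a plain
 * dereference (the hardware absorbs the misalignment), everything
 * else goes through the byte-wise fallback.  The direct dereference
 * of a misaligned pointer is undefined behavior in ISO C; the kernel
 * macro relies on knowing exactly what the AP core does with it.
 */
#define demo_get_unaligned(ptr)					\
	((sizeof(*(ptr)) == 4)					\
		? *(ptr)					\
		: (__typeof__(*(ptr)))demo_get_bytes((ptr), sizeof(*(ptr))))

int main(void)
{
	unsigned char buf[8] = { 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88 };

	/* 32-bit read at an odd address: takes the direct-load branch. */
	uint32_t w = demo_get_unaligned((const uint32_t *)(buf + 1));
	/* 16-bit read at an odd address: takes the byte-wise fallback. */
	uint16_t h = demo_get_unaligned((const uint16_t *)(buf + 3));

	printf("w=0x%08x h=0x%04x\n", (unsigned)w, (unsigned)h);
	return 0;
}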
--- a/include/asm-avr32/unaligned.h
+++ b/include/asm-avr32/unaligned.h
@@ -6,20 +6,31 @@
  * implementation.  The AVR32 AP implementation can handle unaligned
  * words, but halfwords must be halfword-aligned, and doublewords must
  * be word-aligned.
- *
- * TODO: Make all this CPU-specific and optimize.
  */
 
-#include <linux/string.h>
+#include <asm-generic/unaligned.h>
 
-/* Use memmove here, so gcc does not insert a __builtin_memcpy. */
+#ifdef CONFIG_CPU_AT32AP7000
+
+/* REVISIT calling memmove() may be smaller for 64-bit values ... */
 
+#undef get_unaligned
 #define get_unaligned(ptr) \
-  ({ __typeof__(*(ptr)) __tmp; memmove(&__tmp, (ptr), sizeof(*(ptr))); __tmp; })
+	___get_unaligned(ptr, sizeof((*ptr)))
+#define ___get_unaligned(ptr, size) \
+	((size == 4) ? *(ptr) : __get_unaligned(ptr, size))
 
-#define put_unaligned(val, ptr) \
-  ({ __typeof__(*(ptr)) __tmp = (val); \
-     memmove((ptr), &__tmp, sizeof(*(ptr))); \
-     (void)0; })
+#undef put_unaligned
+#define put_unaligned(val, ptr) \
+	___put_unaligned((__u64)(val), ptr, sizeof((*ptr)))
+#define ___put_unaligned(val, ptr, size)	\
+do {						\
+	if (size == 4)				\
+		*(ptr) = (val);			\
+	else					\
+		__put_unaligned(val, ptr, size);\
+} while (0)
+
+#endif
 
 #endif /* __ASM_AVR32_UNALIGNED_H */
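
A note on the mechanism: <asm-generic/unaligned.h> supplies the byte-wise __get_unaligned()/__put_unaligned() helpers that the new macros call, along with default get_unaligned()/put_unaligned() definitions, which is why the AT32AP7000 block must #undef those entry points before reinstalling versions that short-circuit the four-byte case into a plain load or store. The (__u64) cast in put_unaligned() widens the value once so ___put_unaligned() can hand any size through to the generic helper; in the four-byte branch, the plain assignment *(ptr) = (val) narrows it back to the pointed-to type. Per the REVISIT comment, 64-bit quantities still take the generic byte-wise path, which is what the commit message hedges about.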