    include/uapi/linux/byteorder, swab: force inlining of some byteswap operations · bc27fb68
    Committed by Denys Vlasenko
    Sometimes gcc mysteriously doesn't inline
    very small functions we expect to be inlined. See
    
        https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66122
    
    With this .config:
    http://busybox.net/~vda/kernel_config_OPTIMIZE_INLINING_and_Os,
    the following functions get deinlined many times.
    Examples of disassembly:
    
    <get_unaligned_be16> (12 copies, 51 calls):
           66 8b 07                mov    (%rdi),%ax
           55                      push   %rbp
           48 89 e5                mov    %rsp,%rbp
           86 e0                   xchg   %ah,%al
           5d                      pop    %rbp
           c3                      retq
    
    <get_unaligned_be32> (12 copies, 135 calls):
           8b 07                   mov    (%rdi),%eax
           55                      push   %rbp
           48 89 e5                mov    %rsp,%rbp
           0f c8                   bswap  %eax
           5d                      pop    %rbp
           c3                      retq
    
    <get_unaligned_be64> (2 copies, 20 calls):
           48 8b 07                mov    (%rdi),%rax
           55                      push   %rbp
           48 89 e5                mov    %rsp,%rbp
           48 0f c8                bswap  %rax
           5d                      pop    %rbp
           c3                      retq
    
    <__swab16p> (16 copies, 146 calls):
           55                      push   %rbp
           89 f8                   mov    %edi,%eax
           86 e0                   xchg   %ah,%al
           48 89 e5                mov    %rsp,%rbp
           5d                      pop    %rbp
           c3                      retq
    
    <__swab32p> (43 copies, ~560 calls):
           55                      push   %rbp
           89 f8                   mov    %edi,%eax
           0f c8                   bswap  %eax
           48 89 e5                mov    %rsp,%rbp
           5d                      pop    %rbp
           c3                      retq
    
    <__swab64p> (21 copies, 119 calls):
           55                      push   %rbp
           48 89 f8                mov    %rdi,%rax
           48 0f c8                bswap  %rax
           48 89 e5                mov    %rsp,%rbp
           5d                      pop    %rbp
           c3                      retq
    
    <__swab32s> (6 copies, 47 calls):
           8b 07                   mov    (%rdi),%eax
           55                      push   %rbp
           48 89 e5                mov    %rsp,%rbp
           0f c8                   bswap  %eax
           89 07                   mov    %eax,(%rdi)
           5d                      pop    %rbp
           c3                      retq
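
    In C terms, each of these helpers is a one-liner; a hedged sketch of the
    last listing above (the name example_swab32s is illustrative, not the
    swab.h source) shows how little real work each out-of-line copy does:

        #include <stdint.h>

        /*
         * Illustrative stand-in for __swab32s(): byte-reverse a 32-bit
         * value in place.  The useful work corresponds to only three of
         * the instructions in the <__swab32s> listing above
         * (mov (%rdi),%eax; bswap %eax; mov %eax,(%rdi)); the
         * push/mov/pop of %rbp plus the call at every call site are
         * overhead that inlining removes.
         */
        static void example_swab32s(uint32_t *p)
        {
                *p = __builtin_bswap32(*p);
        }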
    
    This patch fixes this via s/inline/__always_inline/.
    Code size decrease after the patch is ~4.5k:
    
        text     data      bss       dec     hex filename
    92202377 20826112 36417536 149446025 8e85d89 vmlinux
    92197848 20826112 36417536 149441496 8e84bd8 vmlinux5_swap_after
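
    For reference, a sketch of the shape of the change (not necessarily the
    literal swab.h hunk): the plain "inline" keyword is only a hint, and with
    CONFIG_OPTIMIZE_INLINING=y and -Os gcc may still emit out-of-line copies,
    whereas __always_inline forces expansion at every call site:

        /* Before: gcc may ignore the hint and emit standalone copies. */
        static inline __u32 __swab32p(const __u32 *p)
        {
                return __swab32(*p);
        }

        /* After: __always_inline (gcc's always_inline attribute) makes
         * the body get expanded inline at every call site.
         */
        static __always_inline __u32 __swab32p(const __u32 *p)
        {
                return __swab32(*p);
        }
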
    Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
    Acked-by: Ingo Molnar <mingo@kernel.org>
    Cc: Thomas Graf <tgraf@suug.ch>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>