x86/asm: Reorder early variables
Moving early_recursion_flag (4 bytes) after early_top_pgt (4k) and
early_dynamic_pgts (256k) saves 4k which are used for alignment of
early_top_pgt after early_recursion_flag.

The real improvement is merely on the source code side. Previously it
was:
 * __INITDATA + .balign
 * early_recursion_flag variable
 * a ton of CPP MACROS
 * __INITDATA (again)
 * early_top_pgt and early_dynamic_pgts variables
 * .data

Now, it is a bit simpler:
 * a ton of CPP MACROS
 * __INITDATA + .balign
 * early_top_pgt and early_dynamic_pgts variables
 * early_recursion_flag variable
 * .data

On the binary level the change looks like this:

Before:
 (sections)
  12 .init.data    00042000  0000000000000000  0000000000000000  00008000  2**12

 (symbols)
  000000       4 OBJECT  GLOBAL DEFAULT   22 early_recursion_flag
  001000    4096 OBJECT  GLOBAL DEFAULT   22 early_top_pgt
  002000 0x40000 OBJECT  GLOBAL DEFAULT   22 early_dynamic_pgts

After:
 (sections)
  12 .init.data    00041004  0000000000000000  0000000000000000  00008000  2**12

 (symbols)
  000000    4096 OBJECT  GLOBAL DEFAULT   22 early_top_pgt
  001000 0x40000 OBJECT  GLOBAL DEFAULT   22 early_dynamic_pgts
  041000       4 OBJECT  GLOBAL DEFAULT   22 early_recursion_flag

So the resulting vmlinux is smaller by 4k with my toolchain, as many
other variables can be placed after early_recursion_flag to fill the
rest of the page.

Note that this is only .init data, so it is freed right after boot
anyway. On-disk savings are none -- compressing zeros is easy, so the
size of bzImage is the same before and after the change.

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20191003095238.29831-1-jslaby@suse.cz
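
For illustration only, here is a minimal GNU as sketch of where the ~4k
of alignment padding comes from and why reordering removes it. The
old_*/new_* labels, the plain .section/.balign/.fill directives and the
512*64 fill count are stand-ins for the kernel's SYM_DATA,
NEXT_PGD_PAGE/NEXT_PAGE macros and EARLY_DYNAMIC_PAGE_TABLES; this is a
simplified, assumption-based sketch, not the actual head_64.S contents:

	# Old ordering (simplified): the 4-byte flag comes first, so the
	# .balign in front of the page tables inserts ~4k of zero padding.
	.section .init.data, "aw"
	.balign	4
old_early_recursion_flag:
	.long	0
	.balign	4096			# pads from offset 4 to the next 4k boundary
old_early_top_pgt:
	.fill	512, 8, 0		# one 4k page of PGD entries
old_early_dynamic_pgts:
	.fill	512*64, 8, 0		# 64 pages = 256k (assumed EARLY_DYNAMIC_PAGE_TABLES=64)

	# New ordering (simplified): the page-aligned tables come first and
	# the 4-byte flag follows, so no realignment padding is needed and
	# later .init.data can fill the rest of the flag's page.
	.balign	4096
new_early_top_pgt:
	.fill	512, 8, 0
new_early_dynamic_pgts:
	.fill	512*64, 8, 0
new_early_recursion_flag:
	.long	0

In the old ordering the assembler has to pad from offset 4 up to the
next 4k boundary before early_top_pgt; in the new ordering that padding
disappears, which matches the .init.data shrink from 0x42000 to 0x41004
shown above.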