Commit 6c8c17cc authored by Mathias Krause, committed by Herbert Xu

crypto: x86/sha1 - fix stack alignment of AVX2 variant

The AVX2 implementation might waste up to a page of stack memory because
of a wrong alignment calculation. This will, in the worst case, increase
the stack usage of sha1_transform_avx2() alone to 5.4 kB -- way too big
for a kernel function. Even worse, it might also allocate *fewer* bytes
than needed if the stack pointer is already aligned, because in that case
the 'sub %rbx, %rsp' is effectively moving the stack pointer upwards,
not downwards.
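
To make the arithmetic concrete, here is a minimal user-space C sketch
(illustrative only, not kernel code; the sample stack addresses are made up)
that models the removed instruction sequence and shows both failure modes:

  #include <stdint.h>
  #include <stdio.h>

  /*
   * Models the removed sequence:
   *   mov %rsp, %rbx
   *   and $(0x1000-1), %rbx
   *   sub $(8+32), %rbx
   *   sub %rbx, %rsp
   * The amount subtracted from %rsp depends on the page offset of the old
   * stack pointer, so it ranges from almost a page (wasted stack) down to a
   * negative value (the pointer moves *up*) when %rsp is already page aligned.
   */
  static uintptr_t old_adjusted_rsp(uintptr_t rsp)
  {
  	long rbx = (long)(rsp & (0x1000 - 1)) - (8 + 32);

  	return rsp - rbx;	/* 'sub %rbx, %rsp' */
  }

  int main(void)
  {
  	uintptr_t high = 0x7fffffffeff8ul;	/* offset 0xff8 within the page */
  	uintptr_t aligned = 0x7fffffffe000ul;	/* already page aligned */

  	/* Wastes 0xff8 - 40 = 4048 bytes of stack. */
  	printf("%#lx -> %#lx\n", (unsigned long)high,
  	       (unsigned long)old_adjusted_rsp(high));
  	/* rbx is -40, so the "allocation" moves the pointer 40 bytes up. */
  	printf("%#lx -> %#lx\n", (unsigned long)aligned,
  	       (unsigned long)old_adjusted_rsp(aligned));
  	return 0;
  }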

Fix those issues by changing and simplifying the alignment calculation
to use a 32-byte alignment, which is the alignment actually needed.
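
For comparison, a small sketch of the new calculation (again user-space C,
purely illustrative): rounding the stack pointer down to a 32-byte boundary
wastes at most 31 bytes and never moves the pointer upwards, and saving and
restoring the original pointer via push/pop replaces the delta bookkeeping
in %rbx:

  #include <assert.h>
  #include <stdint.h>

  /*
   * Mirrors the arithmetic of the replacement:
   *   mov  %rsp, %rbx          save the original stack pointer
   *   and  $~(0x20-1), %rsp    round %rsp down to a 32-byte boundary
   *   push %rbx                remember the old value ...
   *   ...
   *   pop  %rsp                ... and restore it on the way out
   */
  static uintptr_t align_down_32(uintptr_t rsp)
  {
  	return rsp & ~(uintptr_t)(0x20 - 1);	/* 'and $~(0x20-1), %rsp' */
  }

  int main(void)
  {
  	uintptr_t rsp = 0x7fffffffeff8ul;	/* arbitrary example value */
  	uintptr_t aligned = align_down_32(rsp);

  	assert(aligned % 32 == 0);		/* 32-byte aligned */
  	assert(rsp - aligned < 32);		/* at most 31 bytes of slack */
  	return 0;
  }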

Cc: Chandramouli Narayanan <mouli@linux.intel.com>
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Reviewed-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Parent 6ca5afb8
...
@@ -636,9 +636,7 @@ _loop3:
 
 	/* Align stack */
 	mov	%rsp, %rbx
-	and	$(0x1000-1), %rbx
-	sub	$(8+32), %rbx
-	sub	%rbx, %rsp
+	and	$~(0x20-1), %rsp
 	push	%rbx
 
 	sub	$RESERVE_STACK, %rsp
@@ -665,8 +663,7 @@ _loop3:
 	avx2_zeroupper
 
 	add	$RESERVE_STACK, %rsp
-	pop	%rbx
-	add	%rbx, %rsp
+	pop	%rsp
 
 	pop	%r15
 	pop	%r14
...