The ARM intrinsics for AES deviate from the x86 ones in how they cover
the different stages of each round, so mapping one onto the other is
not entirely straightforward. However, with a bit of care, we can still
use the x86 intrinsics to emulate the ARM ones, which makes the
emulation constant time (an important property in crypto) and
substantially more efficient.
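As a hedged sketch of the general technique (an illustration, not the code
in this patch; `emulated_aese` is a hypothetical helper name): ARM's AESE
does AddRoundKey, then SubBytes and ShiftRows, while x86's AESENCLAST does
ShiftRows and SubBytes, then AddRoundKey, and skips MixColumns. Since
SubBytes operates byte-wise and ShiftRows only permutes bytes, the two
commute, so XOR-ing the round key in up front and handing AESENCLAST an
all-zero round key reproduces AESE:

```c
#pragma GCC target ("aes")

#include <stdint.h>
#include <string.h>
#include <wmmintrin.h>

/*
 * Hypothetical illustration of the mapping (not the code in this patch):
 *
 *     AESE(d, k) == AESENCLAST(d ^ k, 0)
 *
 * because AESENCLAST with a zero round key reduces to
 * ShiftRows(SubBytes(x)), which equals the SubBytes+ShiftRows pair that
 * AESE applies after its initial AddRoundKey (XOR with k).
 */
static __m128i emulated_aese(__m128i state, __m128i key)
{
    return _mm_aesenclast_si128(_mm_xor_si128(state, key),
                                _mm_setzero_si128());
}
```

For an all-zero state and key this yields sixteen copies of 0x63, the AES
S-box entry for 0x00, which makes for an easy sanity check.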
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Alex Bennée <alex.bennee@linaro.org>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
Suggestions welcome on how to make this more generic across targets and
compilers etc.
target/arm/tcg/crypto_helper.c | 43 ++++++++++++++++++++
1 file changed, 43 insertions(+)
diff --git a/target/arm/tcg/crypto_helper.c b/target/arm/tcg/crypto_helper.c
index d28690321f..961112b6bd 100644
--- a/target/arm/tcg/crypto_helper.c
+++ b/target/arm/tcg/crypto_helper.c
@@ -18,10 +18,32 @@
 #include "crypto/sm4.h"
 #include "vec_internal.h"
+#ifdef __x86_64__
+#pragma GCC target ("aes")
+#include <cpuid.h>
+#include <wmmintrin.h>
+
+static bool have_aes(void)
+{
+    static int cpuid_have_aes = -1;
+
+    if (cpuid_have_aes == -1) {
+        unsigned int eax, ebx, ecx, edx;
+        int ret = __get_cpuid(0x1, &eax, &ebx, &ecx, &edx);
+
+        cpuid_have_aes = ret && (ecx & bit_AES);
+    }
+    return cpuid_have_aes > 0;
+}
+#endif