Source code
// https://godbolt.org/ [[[
#include <stdlib.h>
#include <stdint.h>    // uint64_t needed
#include <string.h>    // memset
#include <smmintrin.h> // SSE4.1 intrinsics
#include <wmmintrin.h>
#include <immintrin.h> // AVX intrinsics
//#include <zmmintrin.h> // AVX2 intrinsics, definitions and declarations for use with 512-bit compiler intrinsics.

void SlowCopy128bit (const char *SOURCE, char *TARGET) { _mm_storeu_si128((__m128i *)(TARGET), _mm_loadu_si128((const __m128i *)(SOURCE))); }

unsigned char DDAES[16];
// https://godbolt.org/ ]]]

//static const uint8_t VectorsNeedNonVAriable1[256] __attribute__((aligned(16))) =
static const uint8_t VectorsNeedNonVAriable1[256] =
{
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0xFF,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0xFF,0xFF,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0xFF,0xFF,0xFF,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0xFF,0xFF,0xFF,0xFF,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0xFF,0xFF,0xFF,0xFF,0xFF,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0x00,0x00,0x00,0x00,0x00,0x00,
0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0x00,0x00,0x00,0x00,0x00,
0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0x00,0x00,0x00,0x00,
0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0x00,0x00,0x00,
0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0x00,0x00,
0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0x00
};
static const __m128i *Mumbotron = (__m128i *) VectorsNeedNonVAriable1;

//static const uint8_t VectorsNeedNonVAriable2[256] __attribute__((aligned(16))) =
static const uint8_t VectorsNeedNonVAriable2[256] =
{
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0x00,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,
0x00,0x00,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,
0x00,0x00,0x00,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,
0x00,0x00,0x00,0x00,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,
0x00,0x00,0x00,0x00,0x00,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,
0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFF,0xFF,0xFF,0xFF,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFF,0xFF,0xFF,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFF,0xFF,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFF,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF
};
static const __m128i *Jumbotron = (__m128i *) VectorsNeedNonVAriable2;
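// Illustrative sketch only, not used below: how the two tables above are meant to be read.
// Mumbotron[n] is an AND mask that keeps the first n bytes of a 16-byte block, while
// Jumbotron[n] keeps the remaining 16-n bytes; ORing the two masked values pads the
// over-read tail bytes with filler instead of zeros, which is what the tail handler at
// the end of DoubleDeuceAES_Gumbotron_YMM does. The helper name here is made up.
static inline __m128i MaskTail16_sketch (__m128i block, __m128i filler, size_t n) // n = 1..15
{
    __m128i kept = _mm_and_si128(block, Mumbotron[n]);  // the valid leading n bytes survive
    __m128i pad  = _mm_and_si128(filler, Jumbotron[n]); // the other 16-n bytes come from filler
    return _mm_or_si128(kept, pad);
}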
void DoubleDeuceAES_Gumbotron_YMM(const uint8_t *buffer, size_t length)
{
    size_t i, Cycles;
    __m128i hashA = _mm_set_epi64x(0x6c62272e07bb0142, 0x62b821756295c58d); // 0x6c62272e07bb014262b821756295c58d // _mm_setzero_si128();
    __m128i hashB = _mm_set_epi64x(0xdd268dbcaac55036, 0x2d98c384c4e576cc); // 0xdd268dbcaac550362d98c384c4e576ccc8b1536847b6bbb31023b4c8caee0535 // FNV offset basis
    __m128i hashC = _mm_set_epi64x(0xc8b1536847b6bbb3, 0x1023b4c8caee0535); // 0xdd268dbcaac550362d98c384c4e576ccc8b1536847b6bbb31023b4c8caee0535 // FNV offset basis
    __m128i hashD = _mm_setzero_si128();
    __m128i a0,a1,a2,a3;
    // Instead of this chunkenization, ZMM houses the 4 XMMs, if there is shuffle across all the 512bits, use it. There is, but __m256i _mm256_shuffle_epi8(__m256i a, __m256i b) is more handy.
    __m256i a0YMM,a2YMM;
    __m128i b0,b1,b2,b3;
    __m256i b0YMM,b2YMM;
    __m128i c0,c1,c2,c3;
    __m256i c0YMM,c2YMM;
    __m128i d0,d1,d2,d3;
    __m256i d0YMM,d2YMM;
    __m128i tmp0,tmp1,tmp2,tmp3;
    __m256i tmp0YMM,tmp2YMM;
    __m128i ReverseMask = _mm_set_epi8(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15);
    __m256i ReverseMaskYMM = _mm256_set_epi8(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31);
    __m128i PartialInterleavingMask1 = _mm_set_epi8(0x80,7,0x80,6,0x80,5,0x80,4,0x80,3,0x80,2,0x80,1,0x80,0);
    __m128i PartialInterleavingMask2 = _mm_set_epi8(0x80,0xf,0x80,0xe,0x80,0xd,0x80,0xc,0x80,0xb,0x80,0xa,0x80,9,0x80,8);
    __m128i PartialInterleavingMask3 = _mm_set_epi8(7,0x80,6,0x80,5,0x80,4,0x80,3,0x80,2,0x80,1,0x80,0,0x80);
    __m128i PartialInterleavingMask4 = _mm_set_epi8(0xf,0x80,0xe,0x80,0xd,0x80,0xc,0x80,0xb,0x80,0xa,0x80,9,0x80,8,0x80);
    __m256i PartialInterleavingMask1YMM = _mm256_set_epi8(0x80,0xf,0x80,0xe,0x80,0xd,0x80,0xc,0x80,0xb,0x80,0xa,0x80,9,0x80,8,0x80,7,0x80,6,0x80,5,0x80,4,0x80,3,0x80,2,0x80,1,0x80,0);
    // __m256i PartialInterleavingMask2YMM = _mm256_set_epi8(0x80,0xf+16,0x80,0xe+16,0x80,0xd+16,0x80,0xc+16,0x80,0xb+16,0x80,0xa+16,0x80,9+16,0x80,8+16,0x80,7+16,0x80,6+16,0x80,5+16,0x80,4+16,0x80,3+16,0x80,2+16,0x80,1+16,0x80,0+16);
    __m256i PartialInterleavingMask3YMM = _mm256_set_epi8(0xf,0x80,0xe,0x80,0xd,0x80,0xc,0x80,0xb,0x80,0xa,0x80,9,0x80,8,0x80,7,0x80,6,0x80,5,0x80,4,0x80,3,0x80,2,0x80,1,0x80,0,0x80);
    // __m256i PartialInterleavingMask4YMM = _mm256_set_epi8(0xf+16,0x80,0xe+16,0x80,0xd+16,0x80,0xc+16,0x80,0xb+16,0x80,0xa+16,0x80,9+16,0x80,8+16,0x80,7+16,0x80,6+16,0x80,5+16,0x80,4+16,0x80,3+16,0x80,2+16,0x80,1+16,0x80,0+16,0x80);
    const __m128i *ptr128a, *ptr128b, *ptr128c, *ptr128d;
    __m128i AgainstRules, GumbotronREVER, GumbotronINTER, Gumbotron, GumbotronREVERINTER;
    const __m128i *ptr128;
    __m128i InterleaveMask = _mm_set_epi8(15,7,14,6,13,5,12,4,11,3,10,2,9,1,8,0);
    uint8_t vector[32];

    if (length >= 64) {
        Cycles = length/64;
        for(; Cycles--; buffer += 64) {
            //a0 = _mm_loadu_si128((__m128i *)(buffer+0*16));
            //a1 = _mm_loadu_si128((__m128i *)(buffer+1*16));
            //a2 = _mm_loadu_si128((__m128i *)(buffer+2*16));
            //a3 = _mm_loadu_si128((__m128i *)(buffer+3*16));
            a0YMM = _mm256_loadu_si256((__m256i *)(buffer+0*16));
            a2YMM = _mm256_loadu_si256((__m256i *)(buffer+2*16));
            //b0 = _mm_shuffle_epi8 (a3, ReverseMask);
            //b1 = _mm_shuffle_epi8 (a2, ReverseMask);
            //b2 = _mm_shuffle_epi8 (a1, ReverseMask);
            //b3 = _mm_shuffle_epi8 (a0, ReverseMask);
            b0YMM = _mm256_shuffle_epi8 (a2YMM, ReverseMaskYMM); // Caution: the stupid intrinsic works on 128bit not on 256bit! b0YMM = b1+b0 not b0+b1
            b2YMM = _mm256_shuffle_epi8 (a0YMM, ReverseMaskYMM);
            // Should swap:
            // https://godbolt.org/z/dY74zv1Ph
            b0YMM = _mm256_permute4x64_epi64(b0YMM, 0b01001110); //# ymm0 = ymm0[2,3,0,1]
            b2YMM = _mm256_permute4x64_epi64(b2YMM, 0b01001110); //# ymm0 = ymm0[2,3,0,1]
            /*
            __m256i _mm256_permute4x64_epi64 (__m256i a, const int imm8)
            #include <immintrin.h>
            Instruction: vpermq ymm, ymm, imm8
            CPUID Flags: AVX2
            Description
            Shuffle 64-bit integers in a across lanes using the control in imm8, and store the results in dst.
            Operation
            DEFINE SELECT4(src, control) {
                CASE(control[1:0]) OF
                0: tmp[63:0] := src[63:0]
                1: tmp[63:0] := src[127:64]
                2: tmp[63:0] := src[191:128]
                3: tmp[63:0] := src[255:192]
                ESAC
                RETURN tmp[63:0]
            }
            dst[63:0] := SELECT4(a[255:0], imm8[1:0])
            dst[127:64] := SELECT4(a[255:0], imm8[3:2])
            dst[191:128] := SELECT4(a[255:0], imm8[5:4])
            dst[255:192] := SELECT4(a[255:0], imm8[7:6])
            dst[MAX:256] := 0
            */
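            // Equivalent one-step sketch (comments only, not wired in; assumes AVX2):
            // the two vpermq lines above merely swap the 128-bit halves, so a full
            // 32-byte reverse of a 256-bit value x could also be spelled as
            //   __m256i rev = _mm256_shuffle_epi8(x, ReverseMaskYMM);   // reverse bytes inside each 128-bit lane
            //   rev = _mm256_permute2x128_si256(rev, rev, 0x01);        // swap the two lanes -> full 256-bit reverse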
            //a8C7E8G7H5G3H1F2H3G1E2C1A2B4A6B8D7F8H7G5F7H8G6H4G2E1C2A1B3A5B7D8C6A7C8E7G8H6G4H2F1D2B1A3B5D6F5D4F3E5C4B2D3F4E6C5A4B6D5F6E4C3D1E3
            //                              a0]                              a2]                              a0]                              a2]
            //a8C7E8G7H5G3H1F2H3G1E2C1A2B4A6B8 D7F8H7G5F7H8G6H4G2E1C2A1B3A5B7D8 C6A7C8E7G8H6G4H2F1D2B1A3B5D6F5D4 F3E5C4B2D3F4E6C5A4B6D5F6E4C3D1E3
            //a0: 61 38 43 37 | 45 38 47 37 | 48 35 47 33 | 48 31 46 32 ! 48 33 47 31 | 45 32 43 31 | 41 32 42 34 | 41 36 42 38
            //a2: 44 37 46 38 | 48 37 47 35 | 46 37 48 38 | 47 36 48 34 ! 47 32 45 31 | 43 32 41 31 | 42 33 41 35 | 42 37 44 38
            //b0: 34 48 36 47 | 38 48 37 46 | 35 47 37 48 | 38 46 37 44 ! 38 44 37 42 | 35 41 33 42 | 31 41 32 43 | 31 45 32 47
            //b2: 32 46 31 48 | 33 47 35 48 | 37 47 38 45 | 37 43 38 61 ! 38 42 36 41 | 34 42 32 41 | 31 43 32 45 | 31 47 33 48
            //tmp0 = _mm_shuffle_epi8 (a0, PartialInterleavingMask1);
            //tmp1 = _mm_shuffle_epi8 (a0, PartialInterleavingMask2);
            //tmp2 = _mm_shuffle_epi8 (a2, PartialInterleavingMask3);
            //tmp3 = _mm_shuffle_epi8 (a2, PartialInterleavingMask4);
            //c0 = _mm_or_si128 (tmp0, tmp2);
            //c1 = _mm_or_si128 (tmp1, tmp3);
            //tmp0 = _mm_shuffle_epi8 (a1, PartialInterleavingMask1);
            //tmp1 = _mm_shuffle_epi8 (a1, PartialInterleavingMask2);
            //tmp2 = _mm_shuffle_epi8 (a3, PartialInterleavingMask3);
            //tmp3 = _mm_shuffle_epi8 (a3, PartialInterleavingMask4);
            //c2 = _mm_or_si128 (tmp0, tmp2);
            //c3 = _mm_or_si128 (tmp1, tmp3);
            // c0: 00 20 01 21 | 02 22 03 23 | 04 24 05 25 | 06 26 07 27
            // c1: 08 28 09 29 | 0a 2a 0b 2b | 0c 2c 0d 2d | 0e 2e 0f 2f
            // c2: 10 30 11 31 | 12 32 13 33 | 14 34 15 35 | 16 36 17 37
            // c3: 18 38 19 39 | 1a 3a 1b 3b | 1c 3c 1d 3d | 1e 3e 1f 3f
            /*
            __m256i _mm256_unpacklo_epi8 (__m256i a, __m256i b)
            Synopsis
            __m256i _mm256_unpacklo_epi8 (__m256i a, __m256i b)
            #include <immintrin.h>
            Instruction: vpunpcklbw ymm, ymm, ymm
            CPUID Flags: AVX2
            Description
            Unpack and interleave 8-bit integers from the low half of each 128-bit lane in a and b, and store the results in dst.
            Operation
            DEFINE INTERLEAVE_BYTES(src1[127:0], src2[127:0]) {
                dst[7:0] := src1[7:0]
                dst[15:8] := src2[7:0]
                dst[23:16] := src1[15:8]
                dst[31:24] := src2[15:8]
                dst[39:32] := src1[23:16]
                dst[47:40] := src2[23:16]
                dst[55:48] := src1[31:24]
                dst[63:56] := src2[31:24]
                dst[71:64] := src1[39:32]
                dst[79:72] := src2[39:32]
                dst[87:80] := src1[47:40]
                dst[95:88] := src2[47:40]
                dst[103:96] := src1[55:48]
                dst[111:104] := src2[55:48]
                dst[119:112] := src1[63:56]
                dst[127:120] := src2[63:56]
                RETURN dst[127:0]
            }
            dst[127:0] := INTERLEAVE_BYTES(a[127:0], b[127:0])
            dst[255:128] := INTERLEAVE_BYTES(a[255:128], b[255:128])
            dst[MAX:256] := 0
            */
            //__m128i _mm_unpacklo_epi8 (__m128i a, __m128i b)
            c0YMM = _mm256_unpacklo_epi8 (a0YMM, a2YMM);
            c2YMM = _mm256_unpackhi_epi8 (a0YMM, a2YMM);
            // Above two lines gave:
            /*
                [ 0]          [ 1]          [ 2]          [ 3]
            a0: 61 38 43 37 | 45 38 47 37 | 48 35 47 33 | 48 31 46 32 ! 48 33 47 31 | 45 32 43 31 | 41 32 42 34 | 41 36 42 38
                [ 4]          [ 5]          [ 6]          [ 7]
            a2: 44 37 46 38 | 48 37 47 35 | 46 37 48 38 | 47 36 48 34 ! 47 32 45 31 | 43 32 41 31 | 42 33 41 35 | 42 37 44 38
                [ 0+4]                                                  [ 2+6]
            c0: 61 44 38 37 | 43 46 37 38 | 45 48 38 37 | 47 47 37 35 ! 48 47 33 32 | 47 45 31 31 | 45 43 32 32 | 43 41 31 31
                [ 1+5]                                                  [ 3+7]
            c2: 48 46 35 37 | 47 48 33 38 | 48 47 31 36 | 46 48 32 34 ! 41 42 32 33 | 42 41 34 35 | 41 42 36 37 | 42 44 38 38
            */
            // But I need:
            // c0,c1,c2,c3 not c0,c2,c1,c3
            // 0+4,1+5,2+6,3+7 not 0+4,2+6,1+5,3+7
            // as in XMM:
            // c0: 00 20 01 21 | 02 22 03 23 | 04 24 05 25 | 06 26 07 27
            // c1: 08 28 09 29 | 0a 2a 0b 2b | 0c 2c 0d 2d | 0e 2e 0f 2f
            // c2: 10 30 11 31 | 12 32 13 33 | 14 34 15 35 | 16 36 17 37
            // c3: 18 38 19 39 | 1a 3a 1b 3b | 1c 3c 1d 3d | 1e 3e 1f 3f
            //a0:      61 38 43 37 | 45 38 47 37 | 48 35 47 33 | 48 31 46 32 ! 48 33 47 31 | 45 32 43 31 | 41 32 42 34 | 41 36 42 38
            //a2:      44 37 46 38 | 48 37 47 35 | 46 37 48 38 | 47 36 48 34 ! 47 32 45 31 | 43 32 41 31 | 42 33 41 35 | 42 37 44 38
            //tmp0YMM: 61 00 38 00 | 43 00 37 00 | 45 00 38 00 | 47 00 37 00 ! 41 00 32 00 | 42 00 34 00 | 41 00 36 00 | 42 00 38 00
            //tmp2YMM: 00 44 00 37 | 00 46 00 38 | 00 48 00 37 | 00 47 00 35 ! 00 42 00 33 | 00 41 00 35 | 00 42 00 37 | 00 44 00 38
            // tmp0YMM = _mm256_shuffle_epi8 (a0YMM, PartialInterleavingMask1YMM);
            // tmp2YMM = _mm256_shuffle_epi8 (a2YMM, PartialInterleavingMask3YMM);
            // c0YMM = _mm256_or_si256 (tmp0YMM, tmp2YMM);
            // tmp0YMM = _mm256_shuffle_epi8 (a0YMM, PartialInterleavingMask1YMM);
            // tmp2YMM = _mm256_shuffle_epi8 (a2YMM, PartialInterleavingMask3YMM);
            // c2YMM = _mm256_or_si256 (tmp0YMM, tmp2YMM);
            //tmp0 = _mm_shuffle_epi8 (b0, PartialInterleavingMask1);
            //tmp1 = _mm_shuffle_epi8 (b0, PartialInterleavingMask2);
            //tmp2 = _mm_shuffle_epi8 (b2, PartialInterleavingMask3);
            //tmp3 = _mm_shuffle_epi8 (b2, PartialInterleavingMask4);
            //d0 = _mm_or_si128 (tmp0, tmp2);
            //d1 = _mm_or_si128 (tmp1, tmp3);
            //tmp0 = _mm_shuffle_epi8 (b1, PartialInterleavingMask1);
            //tmp1 = _mm_shuffle_epi8 (b1, PartialInterleavingMask2);
            //tmp2 = _mm_shuffle_epi8 (b3, PartialInterleavingMask3);
            //tmp3 = _mm_shuffle_epi8 (b3, PartialInterleavingMask4);
            //d2 = _mm_or_si128 (tmp0, tmp2);
            //d3 = _mm_or_si128 (tmp1, tmp3);
            // d0: 3f 1f 3e 1e | 3d 1d 3c 1c | 3b 1b 3a 1a | 39 19 38 18
            // d1: 37 17 36 16 | 35 15 34 14 | 33 13 32 12 | 31 11 30 10
            // d2: 2f 0f 2e 0e | 2d 0d 2c 0c | 2b 0b 2a 0a | 29 09 28 08
            // d3: 27 07 26 06 | 25 05 24 04 | 23 03 22 02 | 21 01 20 00
            // [[[ Next 6 lines are identical to simply REVERSE C vector - which is in 2 lines
            /*
            tmp0YMM = _mm256_shuffle_epi8 (b0YMM, PartialInterleavingMask1YMM);
            tmp2YMM = _mm256_shuffle_epi8 (b2YMM, PartialInterleavingMask3YMM);
            d0YMM = _mm256_or_si256 (tmp0YMM, tmp2YMM);
            tmp0YMM = _mm256_shuffle_epi8 (b0YMM, PartialInterleavingMask2YMM);
            tmp2YMM = _mm256_shuffle_epi8 (b2YMM, PartialInterleavingMask4YMM);
            d2YMM = _mm256_or_si256 (tmp0YMM, tmp2YMM);
            */
            // ]]] Next 6 lines are identical to simply REVERSE C vector - which is in 2 lines, but I don't have it, so {{{
            d0YMM = _mm256_unpacklo_epi8 (b0YMM, b2YMM);
            d2YMM = _mm256_unpackhi_epi8 (b0YMM, b2YMM);
            // }}}
            // For above code (the last thing to fix: b should be REVERSED a, not b1,b0,b3,b2):
            //a0: 61 38 43 37 | 45 38 47 37 | 48 35 47 33 | 48 31 46 32 ! 48 33 47 31 | 45 32 43 31 | 41 32 42 34 | 41 36 42 38
            //a2: 44 37 46 38 | 48 37 47 35 | 46 37 48 38 | 47 36 48 34 ! 47 32 45 31 | 43 32 41 31 | 42 33 41 35 | 42 37 44 38
            //
            //b0: 34 48 36 47 | 38 48 37 46 | 35 47 37 48 | 38 46 37 44 ! 38 44 37 42 | 35 41 33 42 | 31 41 32 43 | 31 45 32 47
            //b2: 32 46 31 48 | 33 47 35 48 | 37 47 38 45 | 37 43 38 61 ! 38 42 36 41 | 34 42 32 41 | 31 43 32 45 | 31 47 33 48
            //
            //c0: 61 44 38 37 | 43 46 37 38 | 45 48 38 37 | 47 47 37 35 ! 48 47 33 32 | 47 45 31 31 | 45 43 32 32 | 43 41 31 31
            //c2: 48 46 35 37 | 47 48 33 38 | 48 47 31 36 | 46 48 32 34 ! 41 42 32 33 | 42 41 34 35 | 41 42 36 37 | 42 44 38 38
            //
            //d0: 34 32 48 46 | 36 31 47 48 | 38 33 48 47 | 37 35 46 48 ! 38 38 44 42 | 37 36 42 41 | 35 34 41 42 | 33 32 42 41
            //d2: 35 37 47 47 | 37 38 48 45 | 38 37 46 43 | 37 38 44 61 ! 31 31 41 43 | 32 32 43 45 | 31 31 45 47 | 32 33 47 48
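            // Sketch (comments only, assumes AVX2): because vpunpck works per 128-bit lane,
            // the four XMM-order pieces end up laid out in the YMM registers as
            //   c0 = low(c0YMM), c1 = low(c2YMM), c2 = high(c0YMM), c3 = high(c2YMM)
            // (and the same pattern for d). They could be pulled out explicitly, e.g.
            //   __m128i c1x = _mm256_extracti128_si256(c2YMM, 0);
            //   __m128i c2x = _mm256_extracti128_si256(c0YMM, 1);
            // which is exactly what the swapped aesenc feeding order further down accounts for.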
            /*
            _mm256_storeu_si256((__m256i*)vector, a0YMM);
            printf("a0: %02x %02x %02x %02x | %02x %02x %02x %02x | %02x %02x %02x %02x | %02x %02x %02x %02x ! %02x %02x %02x %02x | %02x %02x %02x %02x | %02x %02x %02x %02x | %02x %02x %02x %02x\n",
            vector[0], vector[1], vector[2], vector[3], vector[4], vector[5], vector[6], vector[7], vector[8], vector[9], vector[10], vector[11], vector[12], vector[13], vector[14], vector[15],
            vector[0+16], vector[1+16], vector[2+16], vector[3+16], vector[4+16], vector[5+16], vector[6+16], vector[7+16], vector[8+16], vector[9+16], vector[10+16], vector[11+16], vector[12+16], vector[13+16], vector[14+16], vector[15+16]);
            (the same store+printf pair repeats for a2YMM, b0YMM, b2YMM, c0YMM, c2YMM, d0YMM and d2YMM)
            printf("\n");
            */
            /*
            G:\Lookupperorama_r13\COLLISION_Hashliner>Hashliner_DDAES_dump5byteshash.exe 1.KnightTours.txt
            394e907d43

            G:\Lookupperorama_r13\COLLISION_Hashliner>Hashliner_DDAES-XMM_dump5byteshash.exe 1.KnightTours.txt
            f4b027e3ab

            G:\Lookupperorama_r13\COLLISION_Hashliner>d:
            D:\>g:
            G:\Lookupperorama_r13\COLLISION_Hashliner>Hashliner_DDAES_dump5byteshash.exe 1.KnightTours.txt
            a0: 61 38 43 37 | 45 38 47 37 | 48 35 47 33 | 48 31 46 32 ! 48 33 47 31 | 45 32 43 31 | 41 32 42 34 | 41 36 42 38
            a2: 44 37 46 38 | 48 37 47 35 | 46 37 48 38 | 47 36 48 34 ! 47 32 45 31 | 43 32 41 31 | 42 33 41 35 | 42 37 44 38
            c0: 61 44 38 37 | 43 46 37 38 | 45 48 38 37 | 47 47 37 35 ! 41 42 32 33 | 42 41 34 35 | 41 42 36 37 | 42 44 38 38
            c2: 61 44 38 37 | 43 46 37 38 | 45 48 38 37 | 47 47 37 35 ! 41 42 32 33 | 42 41 34 35 | 41 42 36 37 | 42 44 38 38
            a0: 43 36 41 37 | 43 38 45 37 | 47 38 48 36 | 47 34 48 32 ! 46 31 44 32 | 42 31 41 33 | 42 35 44 36 | 46 35 44 34
            a2: 46 33 45 35 | 43 34 42 32 | 44 33 46 34 | 45 36 43 35 ! 41 34 42 36 | 44 35 46 36 | 45 34 43 33 | 44 31 45 33
            c0: 43 46 36 33 | 41 45 37 35 | 43 43 38 34 | 45 42 37 32 ! 42 45 35 34 | 44 43 36 33 | 46 44 35 31 | 44 45 34 33
            c2: 43 46 36 33 | 41 45 37 35 | 43 43 38 34 | 45 42 37 32 ! 42 45 35 34 | 44 43 36 33 | 46 44 35 31 | 44 45 34 33
            394e907d43

            G:\Lookupperorama_r13\COLLISION_Hashliner>type 1.KnightTours.txt
            a8C7E8G7H5G3H1F2H3G1E2C1A2B4A6B8D7F8H7G5F7H8G6H4G2E1C2A1B3A5B7D8C6A7C8E7G8H6G4H2F1D2B1A3B5D6F5D4F3E5C4B2D3F4E6C5A4B6D5F6E4C3D1E3
            G:\Lookupperorama_r13\COLLISION_Hashliner>

                                            a0]                              a2]                              a0]                              a2]
            a8C7E8G7H5G3H1F2H3G1E2C1A2B4A6B8 D7F8H7G5F7H8G6H4G2E1C2A1B3A5B7D8 C6A7C8E7G8H6G4H2F1D2B1A3B5D6F5D4 F3E5C4B2D3F4E6C5A4B6D5F6E4C3D1E3
            */
            /*
            _mm_storeu_si128((__m128i*)vector, *(__m128i *)(&a0YMM));
            printf("a0lo: %02x %02x %02x %02x | %02x %02x %02x %02x | %02x %02x %02x %02x | %02x %02x %02x %02x\n",
            vector[0], vector[1], vector[2], vector[3], vector[4], vector[5], vector[6], vector[7], vector[8], vector[9], vector[10], vector[11], vector[12], vector[13], vector[14], vector[15]);
            _mm_storeu_si128((__m128i*)vector, *((__m128i *)(&a0YMM)+1));
            printf("a0hi: %02x %02x %02x %02x | %02x %02x %02x %02x | %02x %02x %02x %02x | %02x %02x %02x %02x\n",
            vector[0], vector[1], vector[2], vector[3], vector[4], vector[5], vector[6], vector[7], vector[8], vector[9], vector[10], vector[11], vector[12], vector[13], vector[14], vector[15]);
            printf("Above two should equal a0YMM\n");
            */
            //void SlowCopy128bit (const char *SOURCE, char *TARGET) { _mm_storeu_si128((__m128i *)(TARGET), _mm_loadu_si128((const __m128i *)(SOURCE))); }
            // Okay, the final dump (shows XMM and YMM variants produce the same hash):
            /*
            D:\Lookupperorama_r13\COLLISION_Hashliner>Hashliner_DDAES_dump5byteshash.exe 1.KnightTours.txt
            a0: 61 38 43 37 | 45 38 47 37 | 48 35 47 33 | 48 31 46 32
            a1: 48 33 47 31 | 45 32 43 31 | 41 32 42 34 | 41 36 42 38
            a2: 44 37 46 38 | 48 37 47 35 | 46 37 48 38 | 47 36 48 34
            a3: 47 32 45 31 | 43 32 41 31 | 42 33 41 35 | 42 37 44 38
            b0: 38 44 37 42 | 35 41 33 42 | 31 41 32 43 | 31 45 32 47
            b1: 34 48 36 47 | 38 48 37 46 | 35 47 37 48 | 38 46 37 44
            b2: 38 42 36 41 | 34 42 32 41 | 31 43 32 45 | 31 47 33 48
            b3: 32 46 31 48 | 33 47 35 48 | 37 47 38 45 | 37 43 38 61
            c0: 61 44 38 37 | 43 46 37 38 | 45 48 38 37 | 47 47 37 35
            c1: 48 46 35 37 | 47 48 33 38 | 48 47 31 36 | 46 48 32 34
            c2: 48 47 33 32 | 47 45 31 31 | 45 43 32 32 | 43 41 31 31
            c3: 41 42 32 33 | 42 41 34 35 | 41 42 36 37 | 42 44 38 38
            d0: 38 38 44 42 | 37 36 42 41 | 35 34 41 42 | 33 32 42 41
            d1: 31 31 41 43 | 32 32 43 45 | 31 31 45 47 | 32 33 47 48
            d2: 34 32 48 46 | 36 31 47 48 | 38 33 48 47 | 37 35 46 48
            d3: 35 37 47 47 | 37 38 48 45 | 38 37 46 43 | 37 38 44 61
            (the second 64-byte block is dumped in the same a0..d3 layout)
            f4b027e3ab
            a0: 61 38 43 37 | 45 38 47 37 | 48 35 47 33 | 48 31 46 32 ! 48 33 47 31 | 45 32 43 31 | 41 32 42 34 | 41 36 42 38
            a2: 44 37 46 38 | 48 37 47 35 | 46 37 48 38 | 47 36 48 34 ! 47 32 45 31 | 43 32 41 31 | 42 33 41 35 | 42 37 44 38
            b0: 38 44 37 42 | 35 41 33 42 | 31 41 32 43 | 31 45 32 47 ! 34 48 36 47 | 38 48 37 46 | 35 47 37 48 | 38 46 37 44
            b2: 38 42 36 41 | 34 42 32 41 | 31 43 32 45 | 31 47 33 48 ! 32 46 31 48 | 33 47 35 48 | 37 47 38 45 | 37 43 38 61
            c0: 61 44 38 37 | 43 46 37 38 | 45 48 38 37 | 47 47 37 35 ! 48 47 33 32 | 47 45 31 31 | 45 43 32 32 | 43 41 31 31
            c2: 48 46 35 37 | 47 48 33 38 | 48 47 31 36 | 46 48 32 34 ! 41 42 32 33 | 42 41 34 35 | 41 42 36 37 | 42 44 38 38
            d0: 38 38 44 42 | 37 36 42 41 | 35 34 41 42 | 33 32 42 41 ! 34 32 48 46 | 36 31 47 48 | 38 33 48 47 | 37 35 46 48
            d2: 31 31 41 43 | 32 32 43 45 | 31 31 45 47 | 32 33 47 48 ! 35 37 47 47 | 37 38 48 45 | 38 37 46 43 | 37 38 44 61
            a0lo: 61 38 43 37 | 45 38 47 37 | 48 35 47 33 | 48 31 46 32
            a0hi: 48 33 47 31 | 45 32 43 31 | 41 32 42 34 | 41 36 42 38
            Above two should equal a0YMM
            (the second 64-byte block follows in the same a0/a2..d0/d2 layout, again with matching a0lo/a0hi)
            f4b027e3ab

            D:\Lookupperorama_r13\COLLISION_Hashliner>
            */
            //In YMM should swap c1 and c2, also d1 and d2
            //hashA = _mm_aesenc_si128(hashA, a0);
            hashA = _mm_aesenc_si128(hashA, *(__m128i *)(&a0YMM));
            //hashB = _mm_aesenc_si128(hashB, b0);
            hashB = _mm_aesenc_si128(hashB, *(__m128i *)(&b0YMM));
            //hashB = _mm_aesenc_si128(hashB, *((__m128i *)(&b0YMM)+1));
            //hashC = _mm_aesenc_si128(hashC, c0);
            hashC = _mm_aesenc_si128(hashC, *(__m128i *)(&c0YMM));
            //hashD = _mm_aesenc_si128(hashD, d0);
            hashD = _mm_aesenc_si128(hashD, *(__m128i *)(&d0YMM));
            //hashA = _mm_aesenc_si128(hashA, a1);
            hashA = _mm_aesenc_si128(hashA, *((__m128i *)(&a0YMM)+1));
            //hashB = _mm_aesenc_si128(hashB, b1);
            hashB = _mm_aesenc_si128(hashB, *((__m128i *)(&b0YMM)+1));
            //hashB = _mm_aesenc_si128(hashB, *(__m128i *)(&b0YMM));
            //hashC = _mm_aesenc_si128(hashC, c1);
            hashC = _mm_aesenc_si128(hashC, *((__m128i *)(&c2YMM)));
            //hashD = _mm_aesenc_si128(hashD, d1);
            hashD = _mm_aesenc_si128(hashD, *((__m128i *)(&d2YMM)));
            //hashA = _mm_aesenc_si128(hashA, a2);
            hashA = _mm_aesenc_si128(hashA, *(__m128i *)(&a2YMM));
            //hashB = _mm_aesenc_si128(hashB, b2);
            hashB = _mm_aesenc_si128(hashB, *(__m128i *)(&b2YMM));
            //hashB = _mm_aesenc_si128(hashB, *((__m128i *)(&b2YMM)+1));
            //hashC = _mm_aesenc_si128(hashC, c2);
            hashC = _mm_aesenc_si128(hashC, *((__m128i *)(&c0YMM)+1));
            //hashD = _mm_aesenc_si128(hashD, d2);
            hashD = _mm_aesenc_si128(hashD, *((__m128i *)(&d0YMM)+1));
            //hashA = _mm_aesenc_si128(hashA, a3);
            hashA = _mm_aesenc_si128(hashA, *((__m128i *)(&a2YMM)+1));
            //hashB = _mm_aesenc_si128(hashB, b3);
            hashB = _mm_aesenc_si128(hashB, *((__m128i *)(&b2YMM)+1));
            //hashB = _mm_aesenc_si128(hashB, *(__m128i *)(&b2YMM));
            //hashC = _mm_aesenc_si128(hashC, c3);
            hashC = _mm_aesenc_si128(hashC, *((__m128i *)(&c2YMM)+1));
            //hashD = _mm_aesenc_si128(hashD, d3);
            hashD = _mm_aesenc_si128(hashD, *((__m128i *)(&d2YMM)+1));
            hashA = _mm_aesenc_si128(hashA, hashB);
            hashA = _mm_aesenc_si128(hashA, hashC);
            hashA = _mm_aesenc_si128(hashA, hashD);
            length = length - 64;
        }
    }
    ptr128 = (__m128i *)buffer;
    if (length >= 16) {
        Cycles = length/16;
        for(; Cycles--; buffer += 16) {
            AgainstRules = _mm_loadu_si128(ptr128++);
            GumbotronREVER = _mm_shuffle_epi8 (AgainstRules, ReverseMask);
            GumbotronINTER = _mm_shuffle_epi8 (AgainstRules, InterleaveMask);
            GumbotronREVERINTER = _mm_shuffle_epi8 (GumbotronREVER, InterleaveMask);
            hashA = _mm_aesenc_si128(hashA, AgainstRules);
            hashB = _mm_aesenc_si128(hashB, GumbotronREVER);
            hashC = _mm_aesenc_si128(hashC, GumbotronINTER);
            hashD = _mm_aesenc_si128(hashD, GumbotronREVERINTER);
            hashA = _mm_aesenc_si128(hashA, hashB);
            hashA = _mm_aesenc_si128(hashA, hashC);
            hashA = _mm_aesenc_si128(hashA, hashD);
            length = length - 16;
        }
    }
    // In here using Pippip's approach to read past the end ("the dirty" sentinel like style, or more like padding):
    if (length&(16-1)) {
        AgainstRules = _mm_loadu_si128(ptr128);
        //AgainstRules = _mm_srli_si128 (AgainstRules, 16-length); // catastrophic error: Intrinsic parameter must be an immediate value
        AgainstRules = _mm_and_si128 (AgainstRules, Mumbotron[length]);
        //Gumbotron = _mm_slli_si128 (Gumbotron, 16-length); // catastrophic error: Intrinsic parameter must be an immediate value
        Gumbotron = _mm_and_si128 (hashB, Jumbotron[length]);
        AgainstRules = _mm_or_si128 (AgainstRules, Gumbotron);
        GumbotronREVER = _mm_shuffle_epi8 (AgainstRules, ReverseMask);
        GumbotronINTER = _mm_shuffle_epi8 (AgainstRules, InterleaveMask);
        GumbotronREVERINTER = _mm_shuffle_epi8 (GumbotronREVER, InterleaveMask);
        hashA = _mm_aesenc_si128(hashA, AgainstRules);
        hashB = _mm_aesenc_si128(hashB, GumbotronREVER);
        hashC = _mm_aesenc_si128(hashC, GumbotronINTER);
        hashD = _mm_aesenc_si128(hashD, GumbotronREVERINTER);
        hashA = _mm_aesenc_si128(hashA, hashB);
        hashA = _mm_aesenc_si128(hashA, hashC);
        hashA = _mm_aesenc_si128(hashA, hashD);
    }
    SlowCopy128bit( (const char *)(&hashA), (char *)&DDAES[0]);
}
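The function above writes its 128-bit result into the global DDAES buffer rather than returning it. Below is a minimal, hypothetical driver (not part of the original listing) showing one way to call it; it assumes the code above sits in the same translation unit and is built with AVX2 and AES-NI enabled (for example -O3 -mavx2 -maes), and it prints the first 5 bytes of DDAES in the same style as the 5-byte hashes quoted in the comments.

#include <stdio.h>

int main(void)
{
    // 128-byte sample taken from the 1.KnightTours.txt dump in the comments above.
    const char *msg = "a8C7E8G7H5G3H1F2H3G1E2C1A2B4A6B8D7F8H7G5F7H8G6H4G2E1C2A1B3A5B7D8"
                      "C6A7C8E7G8H6G4H2F1D2B1A3B5D6F5D4F3E5C4B2D3F4E6C5A4B6D5F6E4C3D1E3";
    DoubleDeuceAES_Gumbotron_YMM((const uint8_t *)msg, strlen(msg));
    for (int i = 0; i < 5; i++) printf("%02x", DDAES[i]); // first 5 bytes of the 128-bit state
    printf("\n");
    return 0;
}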