Thanks for using Compiler Explorer
c++ source #1
Source code
/* * VectorizedKernel.h * * Created on: Apr 16, 2022 * Author: tugrul */ #ifndef VECTORIZEDKERNEL_H_ #define VECTORIZEDKERNEL_H_ #include <vector> #include <iostream> #include <string> #include <functional> #include <cmath> #include <chrono> #include <thread> #include <atomic> namespace Vectorization { #define CREATE_PRAGMA(x) _Pragma (#x) #if defined(__INTEL_COMPILER) #define VECTORIZED_KERNEL_METHOD __attribute__((always_inline)) #define VECTORIZED_KERNEL_LOOP CREATE_PRAGMA(simd) #elif defined(__clang__) #define VECTORIZED_KERNEL_METHOD inline #define VECTORIZED_KERNEL_LOOP CREATE_PRAGMA(clang loop vectorize(assume_safety) vectorize_width(Simd)) #elif defined(__GNUC__) || defined(__GNUG__) #define VECTORIZED_KERNEL_METHOD inline #define VECTORIZED_KERNEL_LOOP #elif defined(_MSC_VER) #define VECTORIZED_KERNEL_METHOD __declspec(inline) #define VECTORIZED_KERNEL_LOOP CREATE_PRAGMA(loop( ivdep )) #elif #define VECTORIZED_KERNEL_METHOD inline #define VECTORIZED_KERNEL_LOOP CREATE_PRAGMA(notoptimized) #endif class Bench { public: Bench(size_t * targetPtr) { target=targetPtr; t1 = std::chrono::duration_cast< std::chrono::nanoseconds >(std::chrono::high_resolution_clock::now().time_since_epoch()); } ~Bench() { t2 = std::chrono::duration_cast< std::chrono::nanoseconds >(std::chrono::high_resolution_clock::now().time_since_epoch()); if(target) { *target= t2.count() - t1.count(); } else { std::cout << (t2.count() - t1.count())/1000000000.0 << " seconds" << std::endl; } } private: size_t * target; std::chrono::nanoseconds t1,t2; }; template<typename Type, int Simd> struct KernelData { alignas(64) Type data[Simd]; VECTORIZED_KERNEL_METHOD KernelData(){} VECTORIZED_KERNEL_METHOD KernelData(const Type & broadcastedInit) noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { data[i] = broadcastedInit; } } VECTORIZED_KERNEL_METHOD KernelData(const KernelData<Type,Simd> & vectorizedIit) noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { data[i] = 
vectorizedIit.data[i]; } } VECTORIZED_KERNEL_METHOD KernelData(KernelData&& dat) noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { data[i] = dat.data[i]; } } VECTORIZED_KERNEL_METHOD KernelData& operator=(const KernelData& dat) noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { data[i] = dat.data[i]; } return *this; }; VECTORIZED_KERNEL_METHOD KernelData& operator=(KernelData&& dat) noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { data[i] = dat.data[i]; } return *this; }; VECTORIZED_KERNEL_METHOD ~KernelData() noexcept { }; // contiguous read element by element starting from beginning of ptr VECTORIZED_KERNEL_METHOD void readFrom(const Type * const __restrict__ ptr) noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { data[i] = ptr[i]; } } // contiguous read element by element starting from beginning of ptr // masked read operation: if mask lane is set then read. if not set then don't read template<typename TypeMask> VECTORIZED_KERNEL_METHOD void readFromMasked(const Type * const __restrict__ ptr, const KernelData<TypeMask,Simd> & mask) noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { data[i] = mask.data[i]?ptr[i]:data[i]; } } // contiguous write element by element starting from beginning of ptr VECTORIZED_KERNEL_METHOD void writeTo(Type * const __restrict__ ptr) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { ptr[i] = data[i]; } } // contiguous write element by element starting from beginning of ptr // masked write: if mask lane is set then write, if not set then don't write template<typename TypeMask> VECTORIZED_KERNEL_METHOD void writeToMasked(Type * const __restrict__ ptr, const KernelData<TypeMask,Simd> & mask) const noexcept { for(int i=0;i<Simd;i++) { if(mask.data[i]) ptr[i] = data[i]; } } // does scatter operation (every element writes its own targeted ptr element, decided by elements of id) VECTORIZED_KERNEL_METHOD void writeTo(Type * const __restrict__ ptr, const KernelData<int,Simd> & id) 
const noexcept { for(int i=0;i<Simd;i++) { ptr[id.data[i]] = data[i]; } } // does scatter operation (every element writes its own targeted ptr element, decided by elements of id) // masked write: if mask lane is set then write, if not set then don't write template<typename TypeMask> VECTORIZED_KERNEL_METHOD void writeToMasked(Type * const __restrict__ ptr, const KernelData<int,Simd> & id, const KernelData<TypeMask,Simd> & mask) const noexcept { for(int i=0;i<Simd;i++) { if(mask.data[i]) ptr[id.data[i]] = data[i]; } } // uses only first item of id to compute the starting point of target ptr element. // writes Simd number of elements to target starting from ptr + id.data[0] VECTORIZED_KERNEL_METHOD void writeToContiguous(Type * const __restrict__ ptr, const KernelData<int,Simd> & id) const noexcept { const int idx = id.data[0]; VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { ptr[idx+i] = data[i]; } } // uses only first item of id to compute the starting point of target ptr element. // writes Simd number of elements to target starting from ptr + id.data[0] // masked write: if mask lane is set then writes, if not set then does not write template<typename TypeMask> VECTORIZED_KERNEL_METHOD void writeToContiguousMasked(Type * const __restrict__ ptr, const KernelData<int,Simd> & id, const KernelData<TypeMask,Simd> & mask) const noexcept { const int idx = id.data[0]; for(int i=0;i<Simd;i++) { if(mask.data[i]) ptr[idx+i] = data[i]; } } // does gather operation (every element reads its own sourced ptr element, decided by elements of id) VECTORIZED_KERNEL_METHOD void readFrom(Type * const __restrict__ ptr, const KernelData<int,Simd> & id) noexcept { for(int i=0;i<Simd;i++) { data[i] = ptr[id.data[i]]; } } // does gather operation (every element reads its own sourced ptr element, decided by elements of id) // masked operation: if mask lane is set, then it reads from pointer+id.data[i], if not set then it does not read anything template<typename TypeMask> 
VECTORIZED_KERNEL_METHOD void readFromMasked(Type * const __restrict__ ptr, const KernelData<int,Simd> & id, const KernelData<TypeMask,Simd> & mask) noexcept { for(int i=0;i<Simd;i++) { data[i] = mask.data[i]?ptr[id.data[i]]:data[i]; } } // uses only first item of id to compute the starting point of source ptr element. // reads Simd number of elements from target starting from ptr + id.data[0] VECTORIZED_KERNEL_METHOD void readFromContiguous(Type * const __restrict__ ptr, const KernelData<int,Simd> & id) noexcept { const int idx = id.data[0]; VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { data[i] = ptr[idx+i]; } } // uses only first item of id to compute the starting point of source ptr element. // reads Simd number of elements from target starting from ptr + id.data[0] // masked operation: if mask lane is set, then it reads from pointer+id.data[0], if not set then it does not read anything template<typename TypeMask> VECTORIZED_KERNEL_METHOD void readFromContiguousMasked(Type * const __restrict__ ptr, const KernelData<int,Simd> & id, const KernelData<TypeMask,Simd> & mask) noexcept { const int idx = id.data[0]; VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { data[i] = mask.data[i]?ptr[idx+i]:data[i]; } } template<typename F> VECTORIZED_KERNEL_METHOD void idCompute(const int id, const F & f) noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { data[i] = f(id+i); } } // bool template<typename TypeMask> VECTORIZED_KERNEL_METHOD void lessThan(const KernelData<Type,Simd> & vec, KernelData<TypeMask,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i]<vec.data[i]; } } // bool template<typename TypeMask> VECTORIZED_KERNEL_METHOD void lessThanOrEquals(const KernelData<Type,Simd> & vec, KernelData<TypeMask,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i]<=vec.data[i]; } } // bool template<typename TypeMask> VECTORIZED_KERNEL_METHOD void 
lessThanOrEquals(const Type val, KernelData<TypeMask,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i]<=val; } } // bool template<typename TypeMask> VECTORIZED_KERNEL_METHOD void greaterThan(const KernelData<Type,Simd> & vec, KernelData<TypeMask,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i]>vec.data[i]; } } // bool template<typename TypeMask> VECTORIZED_KERNEL_METHOD void greaterThan(const Type val, KernelData<TypeMask,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i]>val; } } // bool template<typename TypeMask> VECTORIZED_KERNEL_METHOD void equals(const KernelData<Type,Simd> & vec, KernelData<TypeMask,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] == vec.data[i]; } } // bool template<typename TypeMask> VECTORIZED_KERNEL_METHOD void equals(const Type val, KernelData<TypeMask,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] == val; } } // bool template<typename TypeMask> VECTORIZED_KERNEL_METHOD void notEqual(const KernelData<Type,Simd> & vec, KernelData<TypeMask,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] != vec.data[i]; } } // bool template<typename TypeMask> VECTORIZED_KERNEL_METHOD void notEqual(const Type val, KernelData<TypeMask,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] != val; } } // bool template<typename TypeMask> VECTORIZED_KERNEL_METHOD void logicalAnd(const KernelData<TypeMask,Simd> vec, KernelData<TypeMask,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] && vec.data[i]; } } // bool template<typename TypeMask> VECTORIZED_KERNEL_METHOD void logicalOr(const KernelData<TypeMask,Simd> 
vec, KernelData<TypeMask,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] || vec.data[i]; } } VECTORIZED_KERNEL_METHOD bool areAllTrue() const noexcept { int result = 0; for(int i=0;i<Simd;i++) { result = result + (data[i]>0); } return result==Simd; } VECTORIZED_KERNEL_METHOD bool isAnyTrue() const noexcept { int result = 0; for(int i=0;i<Simd;i++) { result = result + (data[i]>0); } return result>0; } template<typename ComparedType> VECTORIZED_KERNEL_METHOD void ternary(const KernelData<ComparedType,Simd> vec1, const KernelData<ComparedType,Simd> vec2, KernelData<ComparedType,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i]?vec1.data[i]:vec2.data[i]; } } template<typename ComparedType> VECTORIZED_KERNEL_METHOD void ternary(const ComparedType val1, const KernelData<ComparedType,Simd> vec2, KernelData<ComparedType,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i]?val1:vec2.data[i]; } } template<typename ComparedType> VECTORIZED_KERNEL_METHOD void ternary(const KernelData<ComparedType,Simd> vec1, const ComparedType val2, KernelData<ComparedType,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i]?vec1.data[i]:val2; } } template<typename ComparedType> VECTORIZED_KERNEL_METHOD void ternary(const ComparedType val1, const ComparedType val2, KernelData<ComparedType,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i]?val1:val2; } } VECTORIZED_KERNEL_METHOD void broadcast(const Type val) noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { data[i] = val; } } // not an in-place operation, so the result variable must be different from the current variable // gets value from a so-called thread (a lane) in the current SIMD // for main body of kernel launch, lane must not overflow Simd // for the tail the
number of lanes is 1 so the only available lane is 0, which is itself // lane value[i] = lane value [id.data[i]] // this is a gather operation within the SIMD unit template<typename IntegerType> VECTORIZED_KERNEL_METHOD void gatherFromLane(const KernelData<IntegerType,Simd> & id, KernelData<Type,Simd> & result) const noexcept { for(int i=0;i<Simd;i++) { result.data[i] = data[id.data[i]]; } } // similar to gatherFromLane but with constant index values for faster operation VECTORIZED_KERNEL_METHOD void transposeLanes(const int widthTranspose, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<widthTranspose;i++) for(int j=0;j<widthTranspose;j++) { result.data[i*widthTranspose+j] = data[j*widthTranspose+i]; } } // not an in-place operation, so the result variable must be different from the current variable // shifts lanes (wraps around) left n times out-of-place // writes result to another result variable // Simd must be a power of two for the wrap-around mask to work template<typename IntegerType> VECTORIZED_KERNEL_METHOD void lanesLeftShift(const IntegerType & n, KernelData<Type,Simd> & result) const noexcept { for(int i=0;i<Simd;i++) { const int j = (i+n)&(Simd-1); result.data[i] = data[j]; } } // not an in-place operation, so the result variable must be different from the current variable // shifts lanes (wraps around) right n times out-of-place // writes result to another result variable // n must not be greater than Simd*2 template<typename IntegerType> VECTORIZED_KERNEL_METHOD void lanesRightShift(const IntegerType & n, KernelData<Type,Simd> & result) const noexcept { for(int i=0;i<Simd;i++) { const int j = (i+2*Simd-n)&(Simd-1); result.data[i] = data[j]; } } // shifts lanes (wraps around) left n times in-place template<typename IntegerType> VECTORIZED_KERNEL_METHOD void lanesLeftShift(const IntegerType & n) noexcept { alignas(64) Type tmp[Simd]; for(int i=0;i<Simd;i++) { tmp[i] = data[i]; } for(int i=0;i<Simd;i++) { const int j = (i+n)&(Simd-1); data[i] = tmp[j]; } } // shifts lanes (wraps around) right n times in-place // n must not be greater than Simd*2 template<typename IntegerType> VECTORIZED_KERNEL_METHOD void lanesRightShift(const IntegerType & n) noexcept { alignas(64) Type tmp[Simd]; for(int i=0;i<Simd;i++) { tmp[i] = data[i]; } for(int i=0;i<Simd;i++) { const int j = (i+2*Simd-n)&(Simd-1); data[i] = tmp[j]; } } // gets value from a so-called thread in the current SIMD // for main body of kernel launch, lane must not overflow Simd // for the tail the number of lanes is 1 so the only available lane is 0, which is itself VECTORIZED_KERNEL_METHOD void broadcastFromLane(const int lane) noexcept { const Type bcast = data[lane]; VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { data[i] = bcast; } } // same as broadcastFromLane(lane) but the copy target is a result vector VECTORIZED_KERNEL_METHOD void broadcastFromLaneToVector(const int lane, KernelData<Type,Simd> & result) noexcept { const Type bcast = data[lane]; VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = bcast; } } VECTORIZED_KERNEL_METHOD void readFrom(const KernelData<Type,Simd> & vec) noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { data[i] = vec.data[i]; } } template<typename NewType> VECTORIZED_KERNEL_METHOD void castAndCopyTo(KernelData<NewType,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = (NewType)data[i]; } } template<typename NewType> VECTORIZED_KERNEL_METHOD void castBitwiseAndCopyTo(KernelData<NewType,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = *reinterpret_cast<const NewType*>(&data[i]); } } VECTORIZED_KERNEL_METHOD void sqrt(KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = std::sqrt(data[i]); } } VECTORIZED_KERNEL_METHOD void rsqrt(KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = 1.0f/std::sqrt(data[i]); } }
VECTORIZED_KERNEL_METHOD void add(const KernelData<Type,Simd> & vec, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] + vec.data[i]; } } VECTORIZED_KERNEL_METHOD void add(const Type & val, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] + val; } } VECTORIZED_KERNEL_METHOD void sub(const KernelData<Type,Simd> & vec, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] - vec.data[i]; } } VECTORIZED_KERNEL_METHOD void sub(const Type & val, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] - val; } } VECTORIZED_KERNEL_METHOD void div(const KernelData<Type,Simd> & vec, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] / vec.data[i]; } } VECTORIZED_KERNEL_METHOD void div(const Type & val, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] / val; } } VECTORIZED_KERNEL_METHOD void fusedMultiplyAdd(const KernelData<Type,Simd> & vec1, const KernelData<Type,Simd> & vec2, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = (data[i]* vec1.data[i]+ vec2.data[i]); } } VECTORIZED_KERNEL_METHOD void fusedMultiplyAdd(const KernelData<Type,Simd> & vec1, const Type & val2, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = (data[i]* vec1.data[i]+ val2); } } VECTORIZED_KERNEL_METHOD void fusedMultiplyAdd(const Type & val1, const KernelData<Type,Simd> & vec2, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = (data[i]* val1+ vec2.data[i]); } } VECTORIZED_KERNEL_METHOD 
void fusedMultiplyAdd(const Type & val1, const Type & val2, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = (data[i]* val1+ val2); } } VECTORIZED_KERNEL_METHOD void fusedMultiplySub(const KernelData<Type,Simd> & vec1, const KernelData<Type,Simd> & vec2, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = (data[i]* vec1.data[i] -vec2.data[i]); } } VECTORIZED_KERNEL_METHOD void fusedMultiplySub(const Type & val1, const KernelData<Type,Simd> & vec2, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = (data[i]* val1 -vec2.data[i]); } } VECTORIZED_KERNEL_METHOD void fusedMultiplySub(const KernelData<Type,Simd> & vec1, const Type & val2, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = (data[i]* vec1.data[i] -val2); } } VECTORIZED_KERNEL_METHOD void fusedMultiplySub(const Type & val1, const Type & val2, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = (data[i]* val1 -val2); } } VECTORIZED_KERNEL_METHOD void mul(const KernelData<Type,Simd> & vec, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] * vec.data[i]; } } VECTORIZED_KERNEL_METHOD void mul(const Type & val, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] * val; } } VECTORIZED_KERNEL_METHOD void modulus(const KernelData<Type,Simd> & vec, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] % vec.data[i]; } } VECTORIZED_KERNEL_METHOD void modulus(const Type & val, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = 
data[i] % val; } } VECTORIZED_KERNEL_METHOD void leftShift(const KernelData<Type,Simd> & vec, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] << vec.data[i]; } } VECTORIZED_KERNEL_METHOD void leftShift(const Type & val, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] << val; } } VECTORIZED_KERNEL_METHOD void rightShift(const KernelData<Type,Simd> & vec, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] >> vec.data[i]; } } VECTORIZED_KERNEL_METHOD void rightShift(const Type & val, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] >> val; } } // this function is not accelerated. use it sparingly. VECTORIZED_KERNEL_METHOD void pow(const KernelData<Type,Simd> & vec, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = std::pow(data[i],vec.data[i]); } } // this function is not accelerated. use it sparingly. // computes x^y when x.pow(y,result) is called VECTORIZED_KERNEL_METHOD void pow(const Type & val, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = std::pow(data[i],val); } } // this function is not accelerated. use it sparingly. // computes y^x when x.powFrom(y,result) is called VECTORIZED_KERNEL_METHOD void powFrom(const Type & val, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = std::pow(val,data[i]); } } // this function is not accelerated. use it sparingly. VECTORIZED_KERNEL_METHOD void exp(KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = std::exp(data[i]); } } // this function is not accelerated. use it sparingly. VECTORIZED_KERNEL_METHOD void log(KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = std::log(data[i]); } } // this function is not accelerated. use it sparingly. VECTORIZED_KERNEL_METHOD void log2(KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = std::log2(data[i]); } } VECTORIZED_KERNEL_METHOD void bitwiseXor(const KernelData<Type,Simd> & vec, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] ^ vec.data[i]; } } VECTORIZED_KERNEL_METHOD void bitwiseXor(const Type & val, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] ^ val; } } VECTORIZED_KERNEL_METHOD void bitwiseAnd(const KernelData<Type,Simd> & vec, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] & vec.data[i]; } } VECTORIZED_KERNEL_METHOD void bitwiseAnd(const Type & val, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] & val; } } VECTORIZED_KERNEL_METHOD void bitwiseOr(const KernelData<Type,Simd> & vec, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] | vec.data[i]; } } VECTORIZED_KERNEL_METHOD void bitwiseOr(const Type & val, KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = data[i] | val; } } VECTORIZED_KERNEL_METHOD void bitwiseNot(KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = ~data[i]; } } VECTORIZED_KERNEL_METHOD void logicalNot(KernelData<Type,Simd> & result) const noexcept { VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = !data[i]; } } VECTORIZED_KERNEL_METHOD
void factorial(KernelData<Type,Simd> & result) const noexcept { alignas(64) Type tmpC[Simd]; alignas(64) Type tmpD[Simd]; alignas(64) Type tmpE[Simd]; VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { tmpC[i]=data[i]; } VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { tmpD[i]=data[i]-(Type)1; } int mask[Simd]; int anyWorking = true; while(anyWorking) { anyWorking = false; VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { mask[i] = (tmpD[i]>0); } VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { anyWorking += mask[i]; } VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { tmpE[i] = tmpC[i] * tmpD[i]; } VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { tmpC[i] = mask[i] ? tmpE[i] : tmpC[i]; } VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { tmpD[i]--; } } VECTORIZED_KERNEL_LOOP for(int i=0;i<Simd;i++) { result.data[i] = tmpC[i]?tmpC[i]:1; } } }; template<typename Type,int Simd,int ArraySize> struct KernelDataArray { KernelData<Type,Simd> arr[ArraySize]; KernelData<Type,Simd> & operator[](const int index) { return arr[index]; } }; template<int CurrentSimd> struct KernelDataFactory { KernelDataFactory():width(CurrentSimd) { } template<typename Type> inline KernelData<Type,CurrentSimd> generate() const { return KernelData<Type,CurrentSimd>(); } template<typename Type> inline KernelData<Type,CurrentSimd> generate(const KernelData<Type,CurrentSimd> & vec) const { return KernelData<Type,CurrentSimd>(vec); } // size has to be compile-time known otherwise it won't work template<typename Type,int Size> inline KernelDataArray<Type,CurrentSimd,Size> generateArray() const { return KernelDataArray<Type,CurrentSimd,Size>(); } const int width; }; template<class...Args> struct KernelArgs {}; template<int SimdWidth, typename F, typename... Args> class Kernel { public: Kernel(F&& kernelPrm):kernel(std::move(kernelPrm)) { } void run(int n, Args... 
args) { const int nLoop = (n/SimdWidth); const KernelDataFactory<SimdWidth> factory; auto id = factory.template generate<int>(); for(int i=0;i<nLoop;i++) { id.idCompute(i*SimdWidth,[](const int prm){ return prm;}); kernel(factory, id, args...); } if((n/SimdWidth)*SimdWidth != n) { const KernelDataFactory<1> factoryLast; const int m = n%SimdWidth; auto id = factoryLast.template generate<int>(); for(int i=0;i<m;i++) { id.idCompute(nLoop*SimdWidth+i,[](const int prm){ return prm;}); kernel(factoryLast, id, args...); } } } template<int numThreads, int loadBalanceResolution = 100> void runMultithreadedLoadBalanced(int n, Args... args) { const int nLoop = (n/SimdWidth); const KernelDataFactory<SimdWidth> factory; // simple work scheduling. std::vector<std::thread> threads; const int nChunk = 1 + (nLoop/numThreads)/loadBalanceResolution; std::atomic<int> index; index.store(0); for(int ii=0;ii<numThreads;ii++) { threads.emplace_back([&,ii](){ bool work = true; while(work) { work = false; const int curIndex = index.fetch_add(nChunk); work = (curIndex<nLoop); for(int j=0;j<nChunk;j++) { const int i = curIndex+j; if(i>=nLoop) break; auto id = factory.template generate<int>(); id.idCompute(i*SimdWidth,[](const int prm){ return prm;}); kernel(factory, id, args...); } } }); } for(int i=0;i<threads.size();i++) { threads[i].join(); // this is a synchronization point for the data changes } // then do the tail computation serially (assume simd is not half of a big work) if((n/SimdWidth)*SimdWidth != n) { const KernelDataFactory<1> factoryLast; const int m = n%SimdWidth; auto id = factoryLast.template generate<int>(); for(int i=0;i<m;i++) { id.idCompute(nLoop*SimdWidth+i,[](const int prm){ return prm;}); kernel(factoryLast, id, args...); } } } template<int numThreads> void runMultithreaded(int n, Args... 
args) { const int nLoop = (n/SimdWidth); const KernelDataFactory<SimdWidth> factory; #ifdef _OPENMP // distribute to threads by openmp (the pragma does not require omp.h) #pragma omp parallel for num_threads(numThreads) for(int i=0;i<nLoop;i++) { auto id = factory.template generate<int>(); id.idCompute(i*SimdWidth,[](const int prm){ return prm;}); kernel(factory, id, args...); } #else // simple work scheduling. std::vector<std::thread> threads; const int nChunk = numThreads>0?(1 + nLoop/numThreads):nLoop; for(int ii=0;ii<nLoop;ii+=nChunk) { threads.emplace_back([&,ii](){ for(int j=0;j<nChunk;j++) { const int i = ii+j; if(i>=nLoop) break; auto id = factory.template generate<int>(); id.idCompute(i*SimdWidth,[](const int prm){ return prm;}); kernel(factory, id, args...); } }); } for(int i=0;i<threads.size();i++) { threads[i].join(); // this is a synchronization point for the data changes } #endif // then do the tail computation serially (assume simd is not half of a big work) if((n/SimdWidth)*SimdWidth != n) { const KernelDataFactory<1> factoryLast; const int m = n%SimdWidth; auto id = factoryLast.template generate<int>(); for(int i=0;i<m;i++) { id.idCompute(nLoop*SimdWidth+i,[](const int prm){ return prm;}); kernel(factoryLast, id, args...); } } } private: F kernel; std::vector<double> threadPerformances; std::vector<double> threadPerformancesOld; }; template<int SimdWidth, typename F, class...Args> auto createKernel(F&& kernelPrm, KernelArgs<Args...> const& _prm_) { return Kernel<SimdWidth, F, Args...>(std::forward<F>(kernelPrm)); } } #endif /* VECTORIZEDKERNEL_H_ */ #include <iostream> int main() { // stride between the four input chunks: bufferIn must hold at least stride*3 + 1000*simd elements
constexpr int stride = 100000; constexpr int simd = 16; auto kernel = Vectorization::createKernel<simd>( [&](auto & factory, auto & idThread, unsigned char * bufferIn, unsigned int * bufferOut ) { const int currentSimdWidth = factory.width; auto in1 = factory.template generate<unsigned char>(); auto in2 = factory.template generate<unsigned char>(); auto in3 = factory.template generate<unsigned char>(); auto in4 = factory.template generate<unsigned char>(); auto index1 = factory.template generate<int>(); auto index2 = factory.template generate<int>(); auto index3 = factory.template generate<int>(); auto index4 = factory.template generate<int>(); auto integer1 = factory.template generate<unsigned int>(); auto integer2 = factory.template generate<unsigned int>(); auto integer3 = factory.template generate<unsigned int>(); auto integer4 = factory.template generate<unsigned int>(); for(int i=0;i<1000;i++) { // calculate read indices for input idThread.add(i*currentSimdWidth,index1); idThread.add(i*currentSimdWidth+stride,index2); idThread.add(i*currentSimdWidth+stride*2,index3); idThread.add(i*currentSimdWidth+stride*3,index4); // read in chunks = memory latency hidden in1.readFrom(bufferIn,index1); in2.readFrom(bufferIn,index2); in3.readFrom(bufferIn,index3); in4.readFrom(bufferIn,index4); // 4 chars --> 1 int (16 at a time) in1.template castAndCopyTo<unsigned int>(integer1); in2.template castAndCopyTo<unsigned int>(integer2); in3.template castAndCopyTo<unsigned int>(integer3); in4.template castAndCopyTo<unsigned int>(integer4); // do the math, with speedup integer2.mul(256,integer2); integer3.mul(256*256,integer3); integer4.mul(256*256*256,integer4); integer1.add(integer2,integer1); integer1.add(integer3,integer1); integer1.add(integer4,integer1); integer1.writeTo(i*currentSimdWidth+bufferOut,idThread); } }, /* defining kernel parameter types */ Vectorization::KernelArgs<unsigned char*,unsigned int*>{}); alignas(64) unsigned char input[1000000]; alignas(64) unsigned 
int output[100000]; for(int i=0;i<1000000;i++) { input[i]=1; } size_t time1; { Vectorization::Bench bench(&time1); kernel.run(simd,input,output); } std::cout<<"simd*1000 operations took "<<time1<<" nanoseconds"<<std::endl; for(int j=0;j<10;j++) { std::cout<<output[j]<<std::endl; } return 0; }
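The compare-then-`ternary` methods in the class above implement the classic mask/blend idiom: a comparison fills a per-lane integer mask, and the mask selects between two sources. A minimal standalone sketch of the same pattern on plain arrays (the names and signatures here are illustrative, not the library's API):

```cpp
#include <cassert>

// Scalar model of the compare + ternary (blend) idiom used by KernelData:
// a per-lane mask produced by a comparison selects between two sources.
// Compilers can lower both loops to vector compare and blend instructions.
template<int Simd>
void lessThanMask(const float* a, const float* b, int* mask)
{
    for(int i = 0; i < Simd; i++)
        mask[i] = a[i] < b[i];                       // 1 where a<b, else 0
}

template<int Simd>
void blend(const int* mask, const float* onTrue, const float* onFalse, float* out)
{
    for(int i = 0; i < Simd; i++)
        out[i] = mask[i] ? onTrue[i] : onFalse[i];   // per-lane select
}
```

The branchless select keeps every lane doing the same work, which is what lets the auto-vectorizer emit a single blend instead of a branch.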
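The lane-shift helpers rely on `(i+n)&(Simd-1)` for the wrap-around, which equals `(i+n)%Simd` only when Simd is a power of two. A small sketch of the rotation with that precondition made explicit (illustrative, not the library's API):

```cpp
#include <cassert>

// Rotate lanes left by n with wrap-around. The bitmask (Simd-1) computes
// (i+n) % Simd only for power-of-two Simd; e.g. Simd=8 gives mask 0b111.
template<typename T, int Simd>
void rotateLanesLeft(const T* in, int n, T* out)
{
    static_assert((Simd & (Simd-1)) == 0, "Simd must be a power of two");
    for(int i = 0; i < Simd; i++)
        out[i] = in[(i + n) & (Simd - 1)];
}
```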
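The `factorial` method above uses a branch-free masked loop: every lane multiplies on every round, and a mask freezes lanes whose countdown has finished, so the loop runs until the slowest lane is done. A scalar sketch of that technique on plain arrays (names are illustrative):

```cpp
#include <cassert>

// Masked iterative factorial, as in KernelData::factorial: all lanes run
// the same rounds; finished lanes keep their value via the mask, and the
// final select maps the untouched 0 input to 0! = 1.
template<int Simd>
void maskedFactorial(const int* in, long long* out)
{
    long long acc[Simd];
    int down[Simd];
    for(int i=0;i<Simd;i++) { acc[i] = in[i]; down[i] = in[i]-1; }
    int anyWorking = 1;
    while(anyWorking)
    {
        anyWorking = 0;
        int mask[Simd];
        for(int i=0;i<Simd;i++) { mask[i] = down[i] > 0; anyWorking += mask[i]; }
        for(int i=0;i<Simd;i++) acc[i] = mask[i] ? acc[i]*down[i] : acc[i];
        for(int i=0;i<Simd;i++) down[i]--;
    }
    for(int i=0;i<Simd;i++) out[i] = acc[i] ? acc[i] : 1;
}
```

Keeping all lanes in lock-step trades a few wasted multiplies on finished lanes for loops the vectorizer can handle.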
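`runMultithreadedLoadBalanced` hands out chunks of iterations from a shared atomic counter, so faster threads simply fetch more chunks. A standalone sketch of that scheduling scheme with a trivial stand-in kernel (names and the summing workload are illustrative):

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Chunked dynamic scheduling as in runMultithreadedLoadBalanced: a shared
// atomic counter hands out chunks; each thread loops until the counter
// passes the end of the range. The "kernel" here just sums indices into a
// per-thread partial, avoiding any shared mutable state besides the counter.
long long parallelSum(int n, int numThreads, int chunk)
{
    std::atomic<int> next(0);
    std::vector<long long> partial(numThreads, 0);
    std::vector<std::thread> threads;
    for(int t = 0; t < numThreads; t++)
    {
        threads.emplace_back([&, t]{
            while(true)
            {
                const int begin = next.fetch_add(chunk);
                if(begin >= n) break;                    // no work left
                const int end = begin + chunk < n ? begin + chunk : n;
                for(int i = begin; i < end; i++)
                    partial[t] += i;                     // stands in for kernel(i)
            }
        });
    }
    for(auto & th : threads) th.join();                  // synchronization point
    long long sum = 0;
    for(long long p : partial) sum += p;
    return sum;
}
```

The chunk size trades scheduling overhead (many `fetch_add`s) against load balance, which is what the `loadBalanceResolution` parameter tunes.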
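The kernel in `main()` packs four consecutive bytes into one 32-bit integer by scaling with 256, 256*256, 256*256*256 and summing, i.e. little-endian byte packing. The same arithmetic as a scalar sketch (the function name is illustrative):

```cpp
#include <cstdint>

// Little-endian packing as done by the example kernel:
// b0 + b1*256 + b2*256^2 + b3*256^3.
uint32_t packBytes(uint8_t b0, uint8_t b1, uint8_t b2, uint8_t b3)
{
    return static_cast<uint32_t>(b0)
         + static_cast<uint32_t>(b1) * 256u
         + static_cast<uint32_t>(b2) * 256u * 256u
         + static_cast<uint32_t>(b3) * 256u * 256u * 256u;
}
```

With the all-ones input buffer of the example, every packed value is 0x01010101 = 16843009, which is what the printed `output[j]` values should show.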