Thanks for using Compiler Explorer
Source code
%use smartalign
alignmode p6, 64

gb:
    mov     esi, edi
.outer:
.find_unset:                    ; do{
    lodsb                       ;   AL = a[i++]
    cmp     [esi], al           ;   cmp a[i], a[i-1]
    ja      .find_unset         ; }while( a[i] > a[i-1] );  // while increasing

    test    al, al              ; fixme: test byte [esi], 0xff
    jz      .found_terminator   ; FIXME: check that last unsetting?
    xor     al, [esi]           ; AL = bit that was just unset
    xchg    edx, eax

;; verify that it was the oldest bit to be set
.find_set:
    mov     al, [edi+1]
    scasb                       ; cmp [edi+1], [edi] / edi++
    jb      .find_set
    xor     al, [edi-1]         ; AL = bit that was just set
    xor     al, dl              ; non-zero = retval for non-Beckett
    jz      .outer              ; }while(set order matches unset order)
;   jnz     .error_return
.found_terminator:              ; return AL = 0
.error_return:                  ; return AL != 0
    ret

align 32
gb_avx512:                      ; AVX512VL + AVX512BW
    ;cmpps  xmm1, xmm1, 0x0F    ; same 4 bytes, probably illegal anyway.
;   vpcmpeqw xmm2, xmm2         ; -1  ; first iter resets to all 0xFF, first code is 0
.loop:
    lodsb
;   kmovb   k1, al              ; [esi]
;   vpaddw  xmm2{k1}, xmm2, xmm2 ; shift in a zero in each set element, making it smaller (unsigned)
    vpaddw  xmm2, xmm2, xmm2    ; no mask because it's right before resetting zero elements
    mov     edx, eax
    not     edx
    kmovb   k3, edx             ; knotb k3, k1
    vpternlogd xmm2{k3}, xmm2, xmm2, 0xFF ; 0 elements: reset counter to 0xFF. (EVEX vpcmpeqw writes a mask register, not XMM)
    ;vpaddusw xmm2{k3}, xmm2, xmm2 ; 0 elements reset counter to 0xFF, assuming it hasn't overflowed to 0
;; FIXME: may need to only add -1 to handle bits that stay set for more than 16 codes

    cmp     [esi], al
    jnb     .not_unset          ; next step is an unset
    vphminposuw xmm0, xmm2      ; non-VEX also 5 bytes
    vpextrw edx, xmm0, 1        ; take the min index (vphminposuw leaves it in word 1 of xmm0)
    btr     eax, edx            ; unset oldest set bit in current mask
    xor     al, [esi]           ; and see if it matches the next code
    jnz     .mismatch_exit
    ;kmovb  k2, al
    ;kxorb  k2, k1, [esi]       ; only allows kreg operands
;   vpbroadcastw xmm0, xmm0     ; broadcast min value
.not_unset:
    loop    .loop
;   xor     eax, eax            ; success.  AL already zero from last successful unset?
.mismatch_exit:
    ret

;align 64
;global _start
;_start:
;...
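Judging from the comments ("bit that was just unset", "non-zero = retval for non-Beckett", "set order matches unset order"), the routine verifies the Beckett–Gray property of a code sequence: consecutive codes differ by exactly one bit, and any bit that is cleared must be the bit that has been set the longest (FIFO order). A minimal Python reference model of that check, as a sketch of the intended semantics rather than a transcription of the assembly (the function name and the exact termination convention are mine, not from the source):

```python
from collections import deque

def is_beckett_gray(codes):
    """Check that consecutive codes flip exactly one bit, and that every
    cleared bit is the oldest currently-set bit (FIFO discipline)."""
    fifo = deque()                            # one-hot bit values, oldest first
    for prev, cur in zip(codes, codes[1:]):
        diff = prev ^ cur
        if diff == 0 or diff & (diff - 1):    # must flip exactly one bit
            return False
        if cur & diff:                        # that bit was set: remember it
            fifo.append(diff)
        elif not fifo or fifo.popleft() != diff:
            return False                      # cleared bit wasn't the oldest
    return True

# 00 -> 01 -> 11 -> 10 -> 00 clears bits in the order they were set:
print(is_beckett_gray([0b00, 0b01, 0b11, 0b10, 0b00]))
```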