This answer is an example of a compiler recognizing that a complex expression is equivalent to a single operation:
    uint8_t popcnt64(uint64_t n)
    {
        n = n - ((n >> 1) & 0x5555555555555555ULL);
        n = (n & 0x3333333333333333ULL) + ((n >> 2) & 0x3333333333333333ULL);
        n = (n + (n >> 4)) & 0x0F0F0F0F0F0F0F0FULL;
        return (n * 0x0101010101010101ULL) >> 56;
    }
    popcnt64:
        xor     eax, eax
        popcnt  rax, rdi
        ret
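(For context: the output above is what I get from an optimizing build targeting a CPU that has the POPCNT instruction. The exact flags are illustrative, but something like the following reproduces it with a recent Clang; other compilers or versions may or may not perform the same transformation:

    clang -O3 -mpopcnt -S popcnt64.c
)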
The compiler recognizes that all of that code, constants and all, boils down to counting the number of set bits in its argument.
But how would a compiler be able to figure that out, other than by hardcoding many possible implementations and comparing against them until one matches? Do compilers have a way of 'simulating' the possible inputs and outputs of a function and determining that they match the behavior of a specific machine instruction? Or can compilers reduce any function to some 'canonical form' and then compare that with the 'canonical form' of a candidate instruction?
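To make the 'canonical form' part of the question concrete, here is a much smaller example of what I mean (function names are just illustrative): three different source-level spellings of doubling an integer, which mainstream optimizing compilers typically normalize to one internal form and therefore compile to identical machine code:

    /* Three spellings of the same operation. At -O2, a typical x86-64
       compiler emits the same instruction sequence (e.g. a single
       lea/add) for all three. */
    unsigned mul2_a(unsigned x) { return x * 2; }
    unsigned mul2_b(unsigned x) { return x + x; }
    unsigned mul2_c(unsigned x) { return x << 1; }

Is recognizing the popcount idiom essentially a scaled-up version of this kind of normalization plus pattern matching, or is a different mechanism involved?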