loongarch64: wrong implementation of `xvldi` instruction
Host environment
- Operating system: Debian
- QEMU flavor: qemu-user-static
- QEMU version: qemu-loongarch64 version 9.2.2 (Debian 1:9.2.2+ds-1+b2)
Emulated/Virtualized environment
- Architecture: loongarch64 (default one)
Description of problem
Consider this sample program.
#include <cstdio>
#include <cstdint>
#include <lsxintrin.h>
#include <lasxintrin.h>

void dump_u32(__m256i x) {
    uint32_t tmp[32 / 4];
    __lasx_xvst(x, tmp, 0);
    putchar('[');
    for (int i = 0; i < 32 / 4; i++) {
        if (i > 0) {
            putchar(' ');
        }
        printf("%08x", tmp[i]);
    }
    puts("]");
}

int main() {
    __m256i const1 = __lasx_xvldi(-3832);
    dump_u32(const1);
}
The magic constant here means: replicate in each 32-bit word the byte imm[7:0] (0x08) shifted left by 8. We should get a vector of 32-bit words equal to 0x800, and indeed, the program run on real hardware prints the expected result:
[00000800 00000800 00000800 00000800 00000800 00000800 00000800 00000800]
The same program run under QEMU prints:
[08000800 00000000 08000800 00000000 08000800 00000000 08000800 00000000]
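Decoding the immediate by hand shows where the expected value comes from. Below is a small standalone sketch (plain C++, not QEMU code; the variable names are mine and the field split follows the vldi description as I understand it) that splits -3832 into its mode and byte fields and prints the 64-bit pattern the hardware replicates:

#include <cstdio>
#include <cstdint>

int main() {
    // 13-bit vldi immediate, taken modulo 2^13 (hand decoding, for illustration).
    uint32_t imm  = static_cast<uint32_t>(-3832) & 0x1fff;  // 0x1108
    uint32_t mode = (imm >> 8) & 0xf;                       // imm[11:8] = 1 (imm[12] is set)
    uint64_t t    = imm & 0xff;                             // imm[7:0]  = 0x08

    // Mode 1 places imm[7:0] at bits [15:8] of every 32-bit element,
    // so the 64-bit pattern that gets broadcast should be:
    uint64_t expected = (t << (32 + 8)) | (t << 8);         // 0x0000080000000800

    printf("imm = 0x%04x, mode = %u, byte = 0x%02x\n",
           imm, (unsigned)mode, (unsigned)t);
    printf("64-bit pattern = 0x%016llx\n", (unsigned long long)expected);
}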
Additional information
I grabbed the latest sources; there appears to be a bug in target/loongarch/tcg/insn_trans/trans_vec.c.inc, in the function vldi_get_value():
    case 1:
        /* data: {2{16'0, imm[7:0], 8'0}} */
        data = (t << 24) | (t << 8);
        break;
It should be data = (t << (32 + 8)) | (t << 8); so that the byte lands in bits [15:8] of both 32-bit halves of the 64-bit pattern.
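For comparison, here is a minimal sketch (again plain C++, not QEMU code; t = imm[7:0] = 0x08 as above) showing that the current expression reproduces the wrong QEMU output while the proposed one matches the hardware:

#include <cstdio>
#include <cstdint>

int main() {
    uint64_t t = 0x08;                              // imm[7:0] for xvldi(-3832)

    uint64_t current  = (t << 24) | (t << 8);       // 0x0000000008000800
    uint64_t proposed = (t << (32 + 8)) | (t << 8); // 0x0000080000000800

    // Broadcast across the 256-bit register and dumped as little-endian
    // 32-bit words, these correspond to the two outputs shown above.
    printf("current : 0x%016llx -> [08000800 00000000 ...]\n",
           (unsigned long long)current);
    printf("proposed: 0x%016llx -> [00000800 00000800 ...]\n",
           (unsigned long long)proposed);
}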