https://godbolt.org/z/PqG76bKcq
I believe that
define <8 x i16> @narrow_manual(<4 x i32> %a, <4 x i32> %b) unnamed_addr {
bb2:
  %0 = shufflevector <4 x i32> %a, <4 x i32> %b, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
  %1 = tail call <8 x i32> @llvm.smax.v8i32(<8 x i32> %0, <8 x i32> splat (i32 -32768))
  %2 = tail call <8 x i32> @llvm.smin.v8i32(<8 x i32> %1, <8 x i32> splat (i32 32767))
  %3 = trunc nsw <8 x i32> %2 to <8 x i16>
  ret <8 x i16> %3
}
declare <8 x i32> @llvm.smin.v8i32(<8 x i32>, <8 x i32>) #2
declare <8 x i32> @llvm.smax.v8i32(<8 x i32>, <8 x i32>) #2
should optimize into a single vec_packs (vpksf). In the linked godbolt, we see that x86_64 and aarch64 are able to perform this optimization. (wasm32 is not; that will be its own issue.)
Instead, on SystemZ we get:
narrow_manual:
        vrepif  %v0, -32768
        vmxf    %v1, %v24, %v0
        vmxf    %v0, %v26, %v0
        vrepif  %v2, 32767
        vmnf    %v0, %v2, %v0
        vmnf    %v1, %v2, %v1
        vpkf    %v24, %v1, %v0
        br      %r14
narrow_builtin:
        vpksf   %v24, %v24, %v26
        br      %r14
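
For reference, narrow_builtin presumably corresponds to a direct use of the vec_packs builtin, while narrow_manual open-codes the same saturating pack as clamp-then-truncate. A rough C sketch of the two (hypothetical reconstruction, not the actual godbolt source; built with something like clang -march=z13 -mzvector):

#include <vecintrin.h>

/* Hypothetical reconstruction: clamp each 32-bit lane to the i16 range,
   then pack. Semantically this is exactly a saturating pack. */
vector signed short narrow_manual(vector signed int a, vector signed int b) {
    vector signed int lo = vec_min(vec_max(a, vec_splats(-32768)), vec_splats(32767));
    vector signed int hi = vec_min(vec_max(b, vec_splats(-32768)), vec_splats(32767));
    return vec_pack(lo, hi);  /* plain (modulo) pack of already-clamped lanes */
}

/* Direct use of the saturating pack builtin. */
vector signed short narrow_builtin(vector signed int a, vector signed int b) {
    return vec_packs(a, b);
}

The two compute the same result for all inputs, so ideally both should lower to the single vpksf shown for narrow_builtin.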
cc @uweigand