[LV] Vectorize maxnum/minnum w/o fast-math flags. #148239
Conversation
@llvm/pr-subscribers-github-workflow @llvm/pr-subscribers-vectorizers

Author: Florian Hahn (fhahn)

Changes

Update LV to vectorize maxnum/minnum reductions without fast-math flags, by adding an extra check in the loop to detect whether any inputs to maxnum/minnum are NaN. If any input is NaN:

* exit the vector loop,
* compute the reduction result up to the vector iteration that contained NaN inputs, and
* resume in the scalar loop.

New recurrence kinds are added for reductions using maxnum/minnum without fast-math flags. The new recurrence kinds are not supported in the code that generates IR to perform the reductions, to prevent accidental misuse. Users need to add the required checks ensuring no NaN inputs, and convert to regular FMin/FMax recurrence kinds.

Patch is 35.27 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/148239.diff

13 Files Affected:
diff --git a/llvm/include/llvm/Analysis/IVDescriptors.h b/llvm/include/llvm/Analysis/IVDescriptors.h
index b985292ccee40..e28901c072e29 100644
--- a/llvm/include/llvm/Analysis/IVDescriptors.h
+++ b/llvm/include/llvm/Analysis/IVDescriptors.h
@@ -47,6 +47,8 @@ enum class RecurKind {
FMul, ///< Product of floats.
FMin, ///< FP min implemented in terms of select(cmp()).
FMax, ///< FP max implemented in terms of select(cmp()).
+ FMinNumNoFMFs, ///< FP min with llvm.minnum semantics and no fast-math flags.
+ FMaxNumNoFMFs, ///< FP max with llvm.maxnum semantics and no fast-math flags.
FMinimum, ///< FP min with llvm.minimum semantics
FMaximum, ///< FP max with llvm.maximum semantics
FMinimumNum, ///< FP min with llvm.minimumnum semantics
diff --git a/llvm/lib/Analysis/IVDescriptors.cpp b/llvm/lib/Analysis/IVDescriptors.cpp
index 39f74beca082f..d8923ab7bc908 100644
--- a/llvm/lib/Analysis/IVDescriptors.cpp
+++ b/llvm/lib/Analysis/IVDescriptors.cpp
@@ -941,10 +941,28 @@ RecurrenceDescriptor::InstDesc RecurrenceDescriptor::isRecurrenceInstr(
m_Intrinsic<Intrinsic::minimumnum>(m_Value(), m_Value())) ||
match(I, m_Intrinsic<Intrinsic::maximumnum>(m_Value(), m_Value()));
};
- if (isIntMinMaxRecurrenceKind(Kind) ||
- (HasRequiredFMF() && isFPMinMaxRecurrenceKind(Kind)))
+ if (isIntMinMaxRecurrenceKind(Kind))
return isMinMaxPattern(I, Kind, Prev);
- else if (isFMulAddIntrinsic(I))
+ if (isFPMinMaxRecurrenceKind(Kind)) {
+ if (HasRequiredFMF())
+ return isMinMaxPattern(I, Kind, Prev);
+ // For FMax/FMin reductions using maxnum/minnum intrinsics with non-NaN
+ // start value, we may be able to vectorize with extra checks ensuring the
+ // inputs are not NaN.
+ auto *StartV = dyn_cast<ConstantFP>(
+ OrigPhi->getIncomingValueForBlock(L->getLoopPredecessor()));
+ if (StartV && !StartV->getValue().isNaN() &&
+ isMinMaxPattern(I, Kind, Prev).isRecurrence()) {
+ if (((Kind == RecurKind::FMax &&
+ match(I, m_Intrinsic<Intrinsic::maxnum>(m_Value(), m_Value()))) ||
+ Kind == RecurKind::FMaxNumNoFMFs))
+ return InstDesc(I, RecurKind::FMaxNumNoFMFs);
+ if (((Kind == RecurKind::FMin &&
+ match(I, m_Intrinsic<Intrinsic::minnum>(m_Value(), m_Value()))) ||
+ Kind == RecurKind::FMinNumNoFMFs))
+ return InstDesc(I, RecurKind::FMinNumNoFMFs);
+ }
+ } else if (isFMulAddIntrinsic(I))
return InstDesc(Kind == RecurKind::FMulAdd, I,
I->hasAllowReassoc() ? nullptr : I);
return InstDesc(false, I);
diff --git a/llvm/lib/Transforms/Utils/LoopUtils.cpp b/llvm/lib/Transforms/Utils/LoopUtils.cpp
index 200d1fb854155..0cf7bc7af878c 100644
--- a/llvm/lib/Transforms/Utils/LoopUtils.cpp
+++ b/llvm/lib/Transforms/Utils/LoopUtils.cpp
@@ -938,8 +938,10 @@ constexpr Intrinsic::ID llvm::getReductionIntrinsicID(RecurKind RK) {
case RecurKind::UMin:
return Intrinsic::vector_reduce_umin;
case RecurKind::FMax:
+ case RecurKind::FMaxNumNoFMFs:
return Intrinsic::vector_reduce_fmax;
case RecurKind::FMin:
+ case RecurKind::FMinNumNoFMFs:
return Intrinsic::vector_reduce_fmin;
case RecurKind::FMaximum:
return Intrinsic::vector_reduce_fmaximum;
diff --git a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
index f3de24aa4c3d1..ffd9d5c22cc13 100644
--- a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
+++ b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
@@ -4346,8 +4346,15 @@ bool LoopVectorizationPlanner::isCandidateForEpilogueVectorization(
ElementCount VF) const {
// Cross iteration phis such as reductions need special handling and are
// currently unsupported.
- if (any_of(OrigLoop->getHeader()->phis(),
- [&](PHINode &Phi) { return Legal->isFixedOrderRecurrence(&Phi); }))
+ if (any_of(OrigLoop->getHeader()->phis(), [&](PHINode &Phi) {
+ if (Legal->isReductionVariable(&Phi)) {
+ RecurKind RK =
+ Legal->getRecurrenceDescriptor(&Phi).getRecurrenceKind();
+ return RK == RecurKind::FMinNumNoFMFs ||
+ RK == RecurKind::FMaxNumNoFMFs;
+ }
+ return Legal->isFixedOrderRecurrence(&Phi);
+ }))
return false;
// Phis with uses outside of the loop require special handling and are
@@ -8808,6 +8815,9 @@ VPlanPtr LoopVectorizationPlanner::tryToBuildVPlanWithVPRecipes(
// Adjust the recipes for any inloop reductions.
adjustRecipesForReductions(Plan, RecipeBuilder, Range.Start);
+ if (!VPlanTransforms::runPass(
+ VPlanTransforms::handleMaxMinNumReductionsWithoutFastMath, *Plan))
+ return nullptr;
// Transform recipes to abstract recipes if it is legal and beneficial and
// clamp the range for better cost estimation.
diff --git a/llvm/lib/Transforms/Vectorize/VPlan.h b/llvm/lib/Transforms/Vectorize/VPlan.h
index 9a6e4b36397b3..96eced547d0ba 100644
--- a/llvm/lib/Transforms/Vectorize/VPlan.h
+++ b/llvm/lib/Transforms/Vectorize/VPlan.h
@@ -1326,20 +1326,22 @@ class LLVM_ABI_FOR_TEST VPWidenRecipe : public VPRecipeWithIRFlags,
unsigned Opcode;
public:
- VPWidenRecipe(unsigned Opcode, ArrayRef<VPValue *> Operands,
- const VPIRFlags &Flags, DebugLoc DL)
- : VPRecipeWithIRFlags(VPDef::VPWidenSC, Operands, Flags, DL),
- Opcode(Opcode) {}
-
VPWidenRecipe(Instruction &I, ArrayRef<VPValue *> Operands)
: VPRecipeWithIRFlags(VPDef::VPWidenSC, Operands, I), VPIRMetadata(I),
Opcode(I.getOpcode()) {}
+ VPWidenRecipe(unsigned Opcode, ArrayRef<VPValue *> Operands,
+ const VPIRFlags &Flags, const VPIRMetadata &Metadata,
+ DebugLoc DL)
+ : VPRecipeWithIRFlags(VPDef::VPWidenSC, Operands, Flags, DL),
+ VPIRMetadata(Metadata), Opcode(Opcode) {}
+
~VPWidenRecipe() override = default;
VPWidenRecipe *clone() override {
- auto *R = new VPWidenRecipe(*getUnderlyingInstr(), operands());
- R->transferFlags(*this);
+ auto *R =
+ new VPWidenRecipe(getOpcode(), operands(), *this, *this, getDebugLoc());
+ R->setUnderlyingValue(getUnderlyingValue());
return R;
}
diff --git a/llvm/lib/Transforms/Vectorize/VPlanAnalysis.cpp b/llvm/lib/Transforms/Vectorize/VPlanAnalysis.cpp
index 92db9674ef42b..971e86c3c7b8c 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanAnalysis.cpp
+++ b/llvm/lib/Transforms/Vectorize/VPlanAnalysis.cpp
@@ -85,6 +85,7 @@ Type *VPTypeAnalysis::inferScalarTypeForRecipe(const VPInstruction *R) {
return ResTy;
}
case Instruction::ICmp:
+ case Instruction::FCmp:
case VPInstruction::ActiveLaneMask:
assert(inferScalarType(R->getOperand(0)) ==
inferScalarType(R->getOperand(1)) &&
diff --git a/llvm/lib/Transforms/Vectorize/VPlanConstruction.cpp b/llvm/lib/Transforms/Vectorize/VPlanConstruction.cpp
index 37e15326e01f9..005392eaaea76 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanConstruction.cpp
+++ b/llvm/lib/Transforms/Vectorize/VPlanConstruction.cpp
@@ -628,3 +628,111 @@ void VPlanTransforms::attachCheckBlock(VPlan &Plan, Value *Cond,
Term->addMetadata(LLVMContext::MD_prof, BranchWeights);
}
}
+
+static VPValue *getMinMaxCompareValue(VPSingleDefRecipe *MinMaxOp,
+ VPReductionPHIRecipe *RedPhi) {
+ auto *RepR = dyn_cast<VPReplicateRecipe>(MinMaxOp);
+ if (!isa<VPWidenIntrinsicRecipe>(MinMaxOp) &&
+ !(RepR && (isa<IntrinsicInst>(RepR->getUnderlyingInstr()))))
+ return nullptr;
+
+ if (MinMaxOp->getOperand(0) == RedPhi)
+ return MinMaxOp->getOperand(1);
+ return MinMaxOp->getOperand(0);
+}
+
+bool VPlanTransforms::handleMaxMinNumReductionsWithoutFastMath(VPlan &Plan) {
+ VPRegionBlock *LoopRegion = Plan.getVectorLoopRegion();
+ VPValue *AnyNaN = nullptr;
+ VPReductionPHIRecipe *RedPhiR = nullptr;
+ VPRecipeWithIRFlags *MinMaxOp = nullptr;
+ for (auto &R : LoopRegion->getEntryBasicBlock()->phis()) {
+ auto *Cur = dyn_cast<VPReductionPHIRecipe>(&R);
+ if (!Cur)
+ continue;
+ if (RedPhiR)
+ return false;
+ if (Cur->getRecurrenceKind() != RecurKind::FMaxNumNoFMFs &&
+ Cur->getRecurrenceKind() != RecurKind::FMinNumNoFMFs)
+ continue;
+
+ RedPhiR = Cur;
+
+ MinMaxOp = dyn_cast<VPRecipeWithIRFlags>(
+ RedPhiR->getBackedgeValue()->getDefiningRecipe());
+ if (!MinMaxOp)
+ return false;
+ VPValue *In = getMinMaxCompareValue(MinMaxOp, RedPhiR);
+ if (!In)
+ return false;
+
+ auto *IsNaN =
+ new VPInstruction(Instruction::FCmp, {In, In}, {CmpInst::FCMP_UNO}, {});
+ IsNaN->insertBefore(MinMaxOp);
+ AnyNaN = new VPInstruction(VPInstruction::AnyOf, {IsNaN});
+ AnyNaN->getDefiningRecipe()->insertAfter(IsNaN);
+ }
+
+ if (!AnyNaN)
+ return true;
+
+ auto *MiddleVPBB = Plan.getMiddleBlock();
+ auto *RdxResult = cast<VPSingleDefRecipe>(&MiddleVPBB->front());
+ auto *ScalarPH = Plan.getScalarPreheader();
+ // Update the resume phis in the scalar preheader. They all must either resume
+ // from the reduction result or the canonical induction. Bail out if there are
+ // other resume phis.
+ for (auto &R : ScalarPH->phis()) {
+ auto *ResumeR = cast<VPPhi>(&R);
+ VPValue *VecV = ResumeR->getOperand(0);
+ VPValue *BypassV = ResumeR->getOperand(ResumeR->getNumOperands() - 1);
+ if (VecV != RdxResult && VecV != &Plan.getVectorTripCount())
+ return false;
+ ResumeR->setOperand(
+ 1, VecV == &Plan.getVectorTripCount() ? Plan.getCanonicalIV() : VecV);
+ ResumeR->addOperand(BypassV);
+ }
+
+ // Create a new reduction phi recipe with either FMin/FMax, replacing
+ // FMinNumNoFMFs/FMaxNumNoFMFs.
+ RecurKind NewRK = RedPhiR->getRecurrenceKind() != RecurKind::FMinNumNoFMFs
+ ? RecurKind::FMin
+ : RecurKind::FMax;
+ auto *NewRedPhiR = new VPReductionPHIRecipe(
+ cast<PHINode>(RedPhiR->getUnderlyingValue()), NewRK,
+ *RedPhiR->getStartValue(), RedPhiR->isInLoop(), RedPhiR->isOrdered());
+ NewRedPhiR->addOperand(RedPhiR->getOperand(1));
+ NewRedPhiR->insertBefore(RedPhiR);
+ RedPhiR->replaceAllUsesWith(NewRedPhiR);
+ RedPhiR->eraseFromParent();
+
+ // Update the loop exit condition to exit if either any of the inputs is NaN
+ // or the vector trip count is reached.
+ VPBasicBlock *LatchVPBB = LoopRegion->getExitingBasicBlock();
+ VPBuilder Builder(LatchVPBB->getTerminator());
+ auto *LatchExitingBranch = cast<VPInstruction>(LatchVPBB->getTerminator());
+ assert(LatchExitingBranch->getOpcode() == VPInstruction::BranchOnCount &&
+ "Unexpected terminator");
+ auto *IsLatchExitTaken =
+ Builder.createICmp(CmpInst::ICMP_EQ, LatchExitingBranch->getOperand(0),
+ LatchExitingBranch->getOperand(1));
+ auto *AnyExitTaken =
+ Builder.createNaryOp(Instruction::Or, {AnyNaN, IsLatchExitTaken});
+ Builder.createNaryOp(VPInstruction::BranchOnCond, AnyExitTaken);
+ LatchExitingBranch->eraseFromParent();
+
+ // Split the middle block and introduce a new block, branching to the scalar
+ // preheader to resume iteration in the scalar loop if any NaNs have been
+ // encountered.
+ MiddleVPBB->splitAt(std::prev(MiddleVPBB->end()));
+ Builder.setInsertPoint(MiddleVPBB, MiddleVPBB->begin());
+ auto *NewSel =
+ Builder.createSelect(AnyNaN, NewRedPhiR, RdxResult->getOperand(1));
+ RdxResult->setOperand(1, NewSel);
+ Builder.setInsertPoint(MiddleVPBB);
+ Builder.createNaryOp(VPInstruction::BranchOnCond, AnyNaN);
+ VPBlockUtils::connectBlocks(MiddleVPBB, ScalarPH);
+ MiddleVPBB->swapSuccessors();
+ std::swap(ScalarPH->getPredecessors()[1], ScalarPH->getPredecessors().back());
+ return true;
+}
diff --git a/llvm/lib/Transforms/Vectorize/VPlanRecipes.cpp b/llvm/lib/Transforms/Vectorize/VPlanRecipes.cpp
index 75ade13b09d9c..ea360fb63f69e 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanRecipes.cpp
+++ b/llvm/lib/Transforms/Vectorize/VPlanRecipes.cpp
@@ -585,6 +585,7 @@ Value *VPInstruction::generate(VPTransformState &State) {
Value *Op = State.get(getOperand(0), vputils::onlyFirstLaneUsed(this));
return Builder.CreateFreeze(Op, Name);
}
+ case Instruction::FCmp:
case Instruction::ICmp: {
bool OnlyFirstLaneUsed = vputils::onlyFirstLaneUsed(this);
Value *A = State.get(getOperand(0), OnlyFirstLaneUsed);
@@ -595,8 +596,11 @@ Value *VPInstruction::generate(VPTransformState &State) {
llvm_unreachable("should be handled by VPPhi::execute");
}
case Instruction::Select: {
- bool OnlyFirstLaneUsed = vputils::onlyFirstLaneUsed(this);
- Value *Cond = State.get(getOperand(0), OnlyFirstLaneUsed);
+ bool OnlyFirstLaneUsed =
+ State.VF.isScalar() || vputils::onlyFirstLaneUsed(this);
+ Value *Cond =
+ State.get(getOperand(0),
+ OnlyFirstLaneUsed || vputils::isSingleScalar(getOperand(0)));
Value *Op1 = State.get(getOperand(1), OnlyFirstLaneUsed);
Value *Op2 = State.get(getOperand(2), OnlyFirstLaneUsed);
return Builder.CreateSelect(Cond, Op1, Op2, Name);
@@ -858,7 +862,7 @@ Value *VPInstruction::generate(VPTransformState &State) {
Value *Res = State.get(getOperand(0));
for (VPValue *Op : drop_begin(operands()))
Res = Builder.CreateOr(Res, State.get(Op));
- return Builder.CreateOrReduce(Res);
+ return State.VF.isScalar() ? Res : Builder.CreateOrReduce(Res);
}
case VPInstruction::FirstActiveLane: {
if (getNumOperands() == 1) {
@@ -1031,6 +1035,7 @@ bool VPInstruction::opcodeMayReadOrWriteFromMemory() const {
switch (getOpcode()) {
case Instruction::ExtractElement:
case Instruction::Freeze:
+ case Instruction::FCmp:
case Instruction::ICmp:
case Instruction::Select:
case VPInstruction::AnyOf:
@@ -1066,6 +1071,7 @@ bool VPInstruction::onlyFirstLaneUsed(const VPValue *Op) const {
return Op == getOperand(1);
case Instruction::PHI:
return true;
+ case Instruction::FCmp:
case Instruction::ICmp:
case Instruction::Select:
case Instruction::Or:
@@ -1098,6 +1104,7 @@ bool VPInstruction::onlyFirstPartUsed(const VPValue *Op) const {
switch (getOpcode()) {
default:
return false;
+ case Instruction::FCmp:
case Instruction::ICmp:
case Instruction::Select:
return vputils::onlyFirstPartUsed(this);
@@ -1782,7 +1789,7 @@ bool VPIRFlags::flagsValidForOpcode(unsigned Opcode) const {
return Opcode == Instruction::ZExt;
break;
case OperationType::Cmp:
- return Opcode == Instruction::ICmp;
+ return Opcode == Instruction::FCmp || Opcode == Instruction::ICmp;
case OperationType::Other:
return true;
}
diff --git a/llvm/lib/Transforms/Vectorize/VPlanTransforms.h b/llvm/lib/Transforms/Vectorize/VPlanTransforms.h
index 870b1bb68b79a..fe590e9410941 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanTransforms.h
+++ b/llvm/lib/Transforms/Vectorize/VPlanTransforms.h
@@ -99,6 +99,11 @@ struct VPlanTransforms {
/// not valid.
static bool adjustFixedOrderRecurrences(VPlan &Plan, VPBuilder &Builder);
+ /// Check if \p Plan contains any FMaxNumNoFMFs or FMinNumNoFMFs reductions.
+ /// If they do, try to update the vector loop to exit early if any input is
+ /// NaN and resume executing in the scalar loop to handle the NaNs there.
+ static bool handleMaxMinNumReductionsWithoutFastMath(VPlan &Plan);
+
/// Clear NSW/NUW flags from reduction instructions if necessary.
static void clearReductionWrapFlags(VPlan &Plan);
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/fmax-without-fast-math-flags.ll b/llvm/test/Transforms/LoopVectorize/AArch64/fmax-without-fast-math-flags.ll
index 77b40dabae1e1..2cc456627f6b0 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/fmax-without-fast-math-flags.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/fmax-without-fast-math-flags.ll
@@ -42,18 +42,56 @@ define float @fmaxnum(ptr %src, i64 %n) {
; CHECK-LABEL: define float @fmaxnum(
; CHECK-SAME: ptr [[SRC:%.*]], i64 [[N:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*]]:
+; CHECK-NEXT: [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 [[N]], 8
+; CHECK-NEXT: br i1 [[MIN_ITERS_CHECK]], label %[[SCALAR_PH:.*]], label %[[VECTOR_PH:.*]]
+; CHECK: [[VECTOR_PH]]:
+; CHECK-NEXT: [[N_MOD_VF:%.*]] = urem i64 [[N]], 8
+; CHECK-NEXT: [[N_VEC:%.*]] = sub i64 [[N]], [[N_MOD_VF]]
+; CHECK-NEXT: br label %[[VECTOR_BODY:.*]]
+; CHECK: [[VECTOR_BODY]]:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], %[[VECTOR_BODY]] ]
+; CHECK-NEXT: [[VEC_PHI:%.*]] = phi <4 x float> [ <float -1.000000e+07, float 0xFFF8000000000000, float 0xFFF8000000000000, float 0xFFF8000000000000>, %[[VECTOR_PH]] ], [ [[TMP7:%.*]], %[[VECTOR_BODY]] ]
+; CHECK-NEXT: [[VEC_PHI1:%.*]] = phi <4 x float> [ splat (float 0xFFF8000000000000), %[[VECTOR_PH]] ], [ [[TMP8:%.*]], %[[VECTOR_BODY]] ]
+; CHECK-NEXT: [[GEP_SRC:%.*]] = getelementptr inbounds nuw float, ptr [[SRC]], i64 [[IV]]
+; CHECK-NEXT: [[TMP1:%.*]] = getelementptr inbounds nuw float, ptr [[GEP_SRC]], i32 0
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr inbounds nuw float, ptr [[GEP_SRC]], i32 4
+; CHECK-NEXT: [[WIDE_LOAD:%.*]] = load <4 x float>, ptr [[TMP1]], align 4
+; CHECK-NEXT: [[WIDE_LOAD2:%.*]] = load <4 x float>, ptr [[TMP2]], align 4
+; CHECK-NEXT: [[TMP3:%.*]] = fcmp uno <4 x float> [[WIDE_LOAD]], [[WIDE_LOAD]]
+; CHECK-NEXT: [[TMP4:%.*]] = fcmp uno <4 x float> [[WIDE_LOAD2]], [[WIDE_LOAD2]]
+; CHECK-NEXT: [[TMP5:%.*]] = or <4 x i1> [[TMP3]], [[TMP4]]
+; CHECK-NEXT: [[TMP6:%.*]] = call i1 @llvm.vector.reduce.or.v4i1(<4 x i1> [[TMP5]])
+; CHECK-NEXT: [[TMP7]] = call <4 x float> @llvm.maxnum.v4f32(<4 x float> [[VEC_PHI]], <4 x float> [[WIDE_LOAD]])
+; CHECK-NEXT: [[TMP8]] = call <4 x float> @llvm.maxnum.v4f32(<4 x float> [[VEC_PHI1]], <4 x float> [[WIDE_LOAD2]])
+; CHECK-NEXT: [[INDEX_NEXT]] = add nuw i64 [[IV]], 8
+; CHECK-NEXT: [[TMP9:%.*]] = icmp eq i64 [[INDEX_NEXT]], [[N_VEC]]
+; CHECK-NEXT: [[TMP10:%.*]] = or i1 [[TMP6]], [[TMP9]]
+; CHECK-NEXT: br i1 [[TMP10]], label %[[MIDDLE_BLOCK:.*]], label %[[VECTOR_BODY]], !llvm.loop [[LOOP0:![0-9]+]]
+; CHECK: [[MIDDLE_BLOCK]]:
+; CHECK-NEXT: [[TMP11:%.*]] = select i1 [[TMP6]], <4 x float> [[VEC_PHI]], <4 x float> [[TMP7]]
+; CHECK-NEXT: [[TMP12:%.*]] = select i1 [[TMP6]], <4 x float> [[VEC_PHI1]], <4 x float> [[TMP8]]
+; CHECK-NEXT: [[RDX_MINMAX_CMP:%.*]] = fcmp olt <4 x float> [[TMP11]], [[TMP12]]
+; CHECK-NEXT: [[RDX_MINMAX_SELECT:%.*]] = select <4 x i1> [[RDX_MINMAX_CMP]], <4 x float> [[TMP11]], <4 x float> [[TMP12]]
+; CHECK-NEXT: [[TMP13:%.*]] = call float @llvm.vector.reduce.fmin.v4f32(<4 x float> [[RDX_MINMAX_SELECT]])
+; CHECK-NEXT: [[CMP_N:%.*]] = icmp eq i64 [[N]], [[N_VEC]]
+; CHECK-NEXT: br i1 [[TMP6]], label %[[SCALAR_PH]], label %[[MIDDLE_BLOCK_SPLIT:.*]]
+; CHECK: [[MIDDLE_BLOCK_SPLIT]]:
+; CHECK-NEXT: br i1 [[CMP_N]], label %[[EXIT:.*]], label %[[SCALAR_PH]]
+; CHECK: [[SCALAR_PH]]:
+; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ [[N_VEC]], %[[MIDDLE_BLOCK_SPLIT]] ], [ [[IV]], %[[MIDDLE_BLOCK]] ], [ 0, %[[ENTRY]] ]
+; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi float [ [[TMP13]], %[[MIDDLE_BLOCK_SPLIT]] ], [ [[TMP13]], %[[MIDDLE_BLOCK]] ], [ -1.000000e+07, %[[ENTRY]] ]
; CHECK-NEXT: br label %[[LOOP:.*]]
; CHECK: [[LOOP]]:
-; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
-; CHECK-NEXT: [[MAX:%.*]] = phi float [ -1.000000e+07, %[[ENTRY]] ], [ [[MAX_NEXT:%.*]], %[[LOOP]] ]
-; CHECK-NEXT: [[GEP_SRC:%.*]] = getelementptr inbounds nuw float, ptr [[SRC]], i64 [[IV]]
-; CHECK-NEXT: [[L:%.*]] = load float, ptr [[GEP_SRC]], align 4
+; CHECK-NEXT: [[IV1:%.*]] = phi i64 [ [[BC_RESUME_VAL]], %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[MAX:%.*]] = phi float [ [[BC_MERGE_RDX]], %[[SCALAR_PH]] ], [ [[MAX_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[GEP_SRC1:%.*]] = getelementptr inbounds nuw float, ptr [[SRC]], i64 [[IV1]]
+; CHECK-NEXT: [[L:%.*]] = load float, ptr [[GEP_SRC1]], align 4
; CHECK-NEXT: [[MAX_NEXT]] = call float @llvm.maxnum.f32(float [[MAX]], float [[L]])
-; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV1]], 1
; CHECK-NEXT: [[EC:%.*]] = icmp eq i64 [[IV_N...
[truncated]
Force-pushed from 85eb08f to ffdf95f.
✅ With the latest revision this PR passed the C/C++ code formatter.
Force-pushed from ffdf95f to aa19ff3.
Some of those reductions can be vectorized with extra checks. Extra tests for #148239 and follow-ups.
Ping :)
Force-pushed from aa19ff3 to de30701.
Some of those reductions can be vectorized with extra checks. Extra tests for llvm/llvm-project#148239 and follow-ups.
Looks ok to me, but I don't know this too well.
Overall approach looks very nice! Raises several thoughts.
auto *R =
    new VPWidenRecipe(getOpcode(), operands(), *this, *this, getDebugLoc());
R->setUnderlyingValue(getUnderlyingValue());
Can be done independently?
Yup, done in 2cdcc4f, thanks
FMinNumNoFMFs, ///< FP min with llvm.minnum semantics and no fast-math flags.
FMaxNumNoFMFs, ///< FP max with llvm.maxnum semantics and no fast-math flags.
Suggested change:

FMinNum, ///< FP min with llvm.minnum semantics including NaNs.
FMaxNum, ///< FP max with llvm.maxnum semantics including NaNs.
This is the general support, more specific cases relying on FMF should be the ones to be designated as such?
Updated, thanks!
llvm/lib/Analysis/IVDescriptors.cpp (Outdated)
if (HasRequiredFMF())
  return isMinMaxPattern(I, Kind, Prev);
// We may be able to vectorize FMax/FMin reductions using maxnum/minnum
// intrinsics with extra checks ensuring the inputs are not NaN.
Suggested change:

// intrinsics with extra checks ensuring the vector loop handles only non-NaN inputs.
Updated, thanks
llvm/lib/Analysis/IVDescriptors.cpp (Outdated)
if (StartV && !StartV->getValue().isNaN() &&
    isMinMaxPattern(I, Kind, Prev).isRecurrence()) {
Early return if any of these are not?
Simplified some checks and updated to use early exit, thanks!
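As an aside on why the non-NaN start value matters here: maxnum/minnum (like libm's fmax/fmin) silently drop a NaN operand, so a NaN start value would be absorbed without ever tripping the in-loop checks, which only guard values loaded inside the loop. A minimal standalone illustration of that rule (not part of the patch):

```cpp
#include <cmath>
#include <cstdio>

int main() {
  // fmax returns the non-NaN operand when exactly one operand is NaN,
  // mirroring the llvm.maxnum semantics this transform relies on.
  std::printf("%f\n", std::fmax(NAN, 1.0f)); // prints 1.000000
  std::printf("%f\n", std::fmax(1.0f, NAN)); // prints 1.000000
  return 0;
}
```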
llvm/lib/Analysis/IVDescriptors.cpp (Outdated)
if (((Kind == RecurKind::FMin &&
      match(I, m_Intrinsic<Intrinsic::minnum>(m_Value(), m_Value()))) ||
     Kind == RecurKind::FMinNumNoFMFs))
  return InstDesc(I, RecurKind::FMinNumNoFMFs);
Complete this with the default answer for isFPMinMaxRecurrenceKind(Kind)?
Added return of empty InstDesc here, thanks
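Putting both resolved comments together, the recognition logic presumably ends up in an early-exit shape along these lines (a condensed sketch using this revision's kind names, not the committed code):

```cpp
if (HasRequiredFMF())
  return isMinMaxPattern(I, Kind, Prev);
// Without FMFs, require a known non-NaN start value and a min/max pattern.
auto *StartV = dyn_cast<ConstantFP>(
    OrigPhi->getIncomingValueForBlock(L->getLoopPredecessor()));
if (!StartV || StartV->getValue().isNaN() ||
    !isMinMaxPattern(I, Kind, Prev).isRecurrence())
  return InstDesc(false, I);
if ((Kind == RecurKind::FMax &&
     match(I, m_Intrinsic<Intrinsic::maxnum>(m_Value(), m_Value()))) ||
    Kind == RecurKind::FMaxNumNoFMFs)
  return InstDesc(I, RecurKind::FMaxNumNoFMFs);
if ((Kind == RecurKind::FMin &&
     match(I, m_Intrinsic<Intrinsic::minnum>(m_Value(), m_Value()))) ||
    Kind == RecurKind::FMinNumNoFMFs)
  return InstDesc(I, RecurKind::FMinNumNoFMFs);
return InstDesc(false, I); // default answer for FP min/max kinds
```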
; CHECK-NEXT: [[TMP7]] = call <4 x float> @llvm.maxnum.v4f32(<4 x float> [[VEC_PHI]], <4 x float> [[WIDE_LOAD]])
; CHECK-NEXT: [[TMP8]] = call <4 x float> @llvm.maxnum.v4f32(<4 x float> [[VEC_PHI1]], <4 x float> [[WIDE_LOAD2]])
If this were to follow the vectorization of ordinary early-exit loops, code below the early exit would be predicated (e.g., masking stores), or would select which of TMP7/8 or WIDE_LOAD/2 to feed back into VEC_PHI/1. As this effectively takes place only once, in the last iteration, the selection is sunk into the middle block.
Another option is to always have the scalar loop re-execute one vector iteration, as in requires-scalar-epilog, even if no NaNs are found, effectively having only VEC_PHI/1 as live out of the vector loop, rather than also TMP7/8, BROADCAST_SPLAT/TMP6, and branch from middle block unconditionally to scalar loop rather than checking both trip count and isNaN (in a new middle block split).
Yep, always requiring a scalar epilogue would also be an option, but that would slightly pessimize the common case (i.e. no NaNs in the input, which in practice is probably more likely).
; CHECK-NEXT: [[CMP_N:%.*]] = icmp eq i64 [[N]], [[N_VEC]]
; CHECK-NEXT: br i1 [[TMP6]], label %[[SCALAR_PH]], label %[[MIDDLE_BLOCK_SPLIT:.*]]
; CHECK: [[MIDDLE_BLOCK_SPLIT]]:
; CHECK-NEXT: br i1 [[CMP_N]], label %[[EXIT:.*]], label %[[SCALAR_PH]]
Easier to branch conditionally to scalar PH here if (TMP6 OR NOT CMP_N) else to exit, similar to the vector loop latch branch, leaving its potential split into multiple compare-branch blocks to subsequent/BE passes?
Yep, updated to have a single middle block and update the induction resume phis via selects, thanks
; CHECK-NEXT: [[TMP12:%.*]] = select <4 x i1> [[BROADCAST_SPLAT]], <4 x float> [[VEC_PHI1]], <4 x float> [[TMP8]]
; CHECK-NEXT: [[RDX_MINMAX_CMP:%.*]] = fcmp olt <4 x float> [[TMP11]], [[TMP12]]
; CHECK-NEXT: [[RDX_MINMAX_SELECT:%.*]] = select <4 x i1> [[RDX_MINMAX_CMP]], <4 x float> [[TMP11]], <4 x float> [[TMP12]]
; CHECK-NEXT: [[TMP13:%.*]] = call float @llvm.vector.reduce.fmin.v4f32(<4 x float> [[RDX_MINMAX_SELECT]])
Should this be fmin or fmax, coming from maxnum? (Consistently below, as commented in code above).
Has this been (or can it be) execution-tested to confirm the results are OK?
Yep, this got flipped during moving things around/refactoring. Should be fixed now.
I've added a set of extensive runtime tests to llvm-test-suite, which should cover various interesting combinations of NaNs/signed zeros in the input data: llvm/llvm-test-suite#266
; CHECK-NEXT: [[TMP7]] = call <4 x float> @llvm.maxnum.v4f32(<4 x float> [[VEC_PHI]], <4 x float> [[WIDE_LOAD]])
; CHECK-NEXT: [[TMP8]] = call <4 x float> @llvm.maxnum.v4f32(<4 x float> [[VEC_PHI1]], <4 x float> [[WIDE_LOAD2]])
maxnum continues to be used by the vector loop to compute its reduction, rather than potentially faster cmp/sel, even though it effectively operates on non-NaNs, in order for it to continue and treat -0 as less than +0?
Yep, for now it just keeps using the maxnum/minnum intrinsics, to preserve the handling of -0 and +0.
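To see the signed-zero hazard a naive cmp/sel lowering would introduce: IEEE comparisons treat -0.0 and +0.0 as equal, so a compare-and-select returns whichever zero happens to sit in the untaken slot, making the result depend on operand order. A small standalone illustration (plain C++ floats, not the generated IR):

```cpp
#include <cstdio>

int main() {
  float a = -0.0f, b = +0.0f;
  // -0.0 > +0.0 is false, so the select yields b (+0):
  std::printf("%g\n", a > b ? a : b); // prints 0
  // +0.0 > -0.0 is also false, so the select yields a (-0):
  std::printf("%g\n", b > a ? b : a); // prints -0
  return 0;
}
```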
RecurKind NewRK = RedPhiR->getRecurrenceKind() != RecurKind::FMinNumNoFMFs
                      ? RecurKind::FMin
                      : RecurKind::FMax;
Set to FMax if FMinNum, and to FMin if FMaxNum?
Argh yes, should be flipped of course, updated
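For reference, the flipped mapping would presumably read (a sketch of the fix, using this revision's kind names):

```cpp
// FMinNumNoFMFs is a min reduction, so it must lower to FMin
// (and FMaxNumNoFMFs to FMax):
RecurKind NewRK = RedPhiR->getRecurrenceKind() == RecurKind::FMinNumNoFMFs
                      ? RecurKind::FMin
                      : RecurKind::FMax;
```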
Force-pushed from de30701 to 9e59945.
Update VPWidenRecipe::clone() to use the constructor w/o mandatory Instruction, to facilitate cloning VPWidenRecipe without underlying instructions. Split off from #148239.
Update LV to vectorize maxnum/minnum reductions without fast-math flags, by adding an extra check in the loop to detect whether any inputs to maxnum/minnum are NaN. If any input is NaN:

* exit the vector loop,
* compute the reduction result up to the vector iteration that contained NaN inputs, and
* resume in the scalar loop.

New recurrence kinds are added for reductions using maxnum/minnum without fast-math flags. The new recurrence kinds are not supported in the code that generates IR to perform the reductions, to prevent accidental misuse. Users need to add the required checks ensuring no NaN inputs, and convert to regular FMin/FMax recurrence kinds.
Force-pushed from 9e59945 to de2d6f2.
…instr (NFC). Update VPWidenRecipe::clone() to use the constructor w/o mandatory Instruction, to facilitate cloning VPWidenRecipe without underlying instructions. Split off from llvm/llvm-project#148239.
Fixed the PR to remove various changes added by accident.
Thanks, now it LGTM!
Update LV to vectorize maxnum/minnum reductions without fast-math flags, by adding an extra check in the loop to detect whether any inputs to maxnum/minnum are NaN, due to maxnum/minnum behavior w.r.t. signaling NaNs. Signed zeros are already handled consistently by maxnum/minnum. If any input is NaN:

* exit the vector loop,
* compute the reduction result up to the vector iteration that contained NaN inputs, and
* resume in the scalar loop.

New recurrence kinds are added for reductions using maxnum/minnum without fast-math flags. PR: llvm/llvm-project#148239
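To make the transformed control flow concrete, here is a rough scalar C++ emulation of what the vectorized loop now computes: the inner chunk loop stands in for one vector iteration, a NaN-containing chunk's partial result is discarded (matching the select sunk into the middle block), and the scalar loop re-processes from the start of that chunk. The chunk size, names, and start value are illustrative; this is not the generated code.

```cpp
#include <cmath>
#include <cstddef>

float fmaxnum_reduction(const float *src, size_t n) {
  const size_t VF = 4;   // stand-in for the vectorization factor
  float acc = -1e7f;     // non-NaN start value (required by the transform)
  size_t i = 0;
  // "Vector" loop: process whole chunks, exit early on any NaN input.
  for (; i + VF <= n; i += VF) {
    bool any_nan = false;
    float chunk_acc = acc;
    for (size_t l = 0; l < VF; ++l) {
      any_nan |= std::isnan(src[i + l]);
      chunk_acc = std::fmax(chunk_acc, src[i + l]);
    }
    if (any_nan)
      break;            // discard chunk_acc; i still points at chunk start
    acc = chunk_acc;
  }
  // Scalar loop: handles the remainder, including any NaN inputs, with
  // fmax's NaN-dropping semantics.
  for (; i < n; ++i)
    acc = std::fmax(acc, src[i]);
  return acc;
}
```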