
[IR] Add CallBr intrinsics support #133907

Open

wants to merge 2 commits into base: users/ro-i/callbr-amdgpu_2
27 changes: 21 additions & 6 deletions llvm/docs/LangRef.rst
@@ -9624,8 +9624,12 @@ The '``callbr``' instruction causes control to transfer to a specified
function, with the possibility of control flow transfer to either the
'``fallthrough``' label or one of the '``indirect``' labels.

This instruction should only be used to implement the "goto" feature of gcc
style inline assembly. Any other usage is an error in the IR verifier.
This instruction can currently only be used

#. to implement the "goto" feature of gcc style inline assembly or
#. to call selected intrinsics.

Any other usage is an error in the IR verifier.
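For instance (an illustrative sketch, not part of the patch — ``@f`` and ``%cont`` are invented names), a ``callbr`` whose callee is an ordinary function is neither inline asm nor a supported intrinsic and would be rejected by the IR verifier:

```llvm
; Invalid: the callee is a plain function, not inline asm or a
; supported intrinsic, so the IR verifier reports an error.
callbr void @f() to label %cont []
```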

Note that in order to support outputs along indirect edges, LLVM may need to
split critical edges, which may require synthesizing a replacement block for
@@ -9674,7 +9678,7 @@ This instruction requires several arguments:
indicates the function accepts a variable number of arguments, the
extra arguments can be specified.
#. '``fallthrough label``': the label reached when the inline assembly's
execution exits the bottom.
execution exits the bottom / the intrinsic call terminates.
Contributor suggested change:

- execution exits the bottom / the intrinsic call terminates.
+ execution exits the bottom / the intrinsic call returns.

#. '``indirect labels``': the labels reached when a callee transfers control
to a location other than the '``fallthrough label``'. Label constraints
refer to these destinations.
@@ -9692,9 +9696,12 @@ flow goes after the call.
The output values of a '``callbr``' instruction are available both in the
'``fallthrough``' block, and any '``indirect``' block(s).

The only use of this today is to implement the "goto" feature of gcc inline
assembly where additional labels can be provided as locations for the inline
assembly to jump to.
The only current uses of this are:

#. to implement the "goto" feature of gcc inline assembly, where additional
   labels can be provided as locations for the inline assembly to jump to, and
#. to support selected intrinsics which manipulate control flow and should
   be chained to specific terminators, such as '``unreachable``'.

Example:
""""""""
@@ -9709,6 +9716,14 @@ Example:
<result> = callbr i32 asm "", "=r,r,!i"(i32 %x)
to label %fallthrough [label %indirect]

; intrinsic which should be followed by unreachable (the order of the
; blocks after the callbr instruction doesn't matter)
callbr void @llvm.amdgcn.kill(i1 %c) to label %cont [label %kill]
cont:
...
kill:
unreachable

.. _i_resume:

'``resume``' Instruction
6 changes: 6 additions & 0 deletions llvm/include/llvm/CodeGen/GlobalISel/IRTranslator.h
@@ -297,6 +297,10 @@ class IRTranslator : public MachineFunctionPass {
/// \pre \p U is a call instruction.
bool translateCall(const User &U, MachineIRBuilder &MIRBuilder);

bool translateTargetIntrinsic(
const CallBase &CB, Intrinsic::ID ID, MachineIRBuilder &MIRBuilder,
TargetLowering::IntrinsicInfo *TgtMemIntrinsicInfo = nullptr);

/// When an invoke or a cleanupret unwinds to the next EH pad, there are
/// many places it could ultimately go. In the IR, we have a single unwind
/// destination, but in the machine CFG, we enumerate all the possible blocks.
@@ -313,6 +317,8 @@
bool translateInvoke(const User &U, MachineIRBuilder &MIRBuilder);

bool translateCallBr(const User &U, MachineIRBuilder &MIRBuilder);
bool translateCallBrIntrinsic(const CallBrInst &I,
MachineIRBuilder &MIRBuilder);

bool translateLandingPad(const User &U, MachineIRBuilder &MIRBuilder);

97 changes: 73 additions & 24 deletions llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
@@ -2789,20 +2789,35 @@ bool IRTranslator::translateCall(const User &U, MachineIRBuilder &MIRBuilder) {
if (translateKnownIntrinsic(CI, ID, MIRBuilder))
return true;

TargetLowering::IntrinsicInfo Info;
// TODO: Add a GlobalISel version of getTgtMemIntrinsic.
bool IsTgtMemIntrinsic = TLI->getTgtMemIntrinsic(Info, CI, *MF, ID);
Comment on lines +2793 to +2794

Contributor: Nothing is preventing use of the current one?

Contributor Author: I couldn't find one, tbh. How is it called?
(I kept this todo because it was in the original code. It has been added in https://reviews.llvm.org/D33724.)


return translateTargetIntrinsic(CI, ID, MIRBuilder,
IsTgtMemIntrinsic ? &Info : nullptr);
}

/// Translate a call or callbr to a target intrinsic.
/// Depending on whether TLI->getTgtMemIntrinsic() is true, TgtMemIntrinsicInfo
/// is a pointer to the correspondingly populated IntrinsicInfo object.
/// Otherwise, this pointer is null.
bool IRTranslator::translateTargetIntrinsic(
const CallBase &CB, Intrinsic::ID ID, MachineIRBuilder &MIRBuilder,
TargetLowering::IntrinsicInfo *TgtMemIntrinsicInfo) {
ArrayRef<Register> ResultRegs;
if (!CI.getType()->isVoidTy())
ResultRegs = getOrCreateVRegs(CI);
if (!CB.getType()->isVoidTy())
ResultRegs = getOrCreateVRegs(CB);

// Ignore the callsite attributes. Backend code is most likely not expecting
// an intrinsic to sometimes have side effects and sometimes not.
MachineInstrBuilder MIB = MIRBuilder.buildIntrinsic(ID, ResultRegs);
if (isa<FPMathOperator>(CI))
MIB->copyIRFlags(CI);
if (isa<FPMathOperator>(CB))
MIB->copyIRFlags(CB);

for (const auto &Arg : enumerate(CI.args())) {
for (const auto &Arg : enumerate(CB.args())) {
// If this is required to be an immediate, don't materialize it in a
// register.
if (CI.paramHasAttr(Arg.index(), Attribute::ImmArg)) {
if (CB.paramHasAttr(Arg.index(), Attribute::ImmArg)) {
if (ConstantInt *CI = dyn_cast<ConstantInt>(Arg.value())) {
// imm arguments are more convenient than cimm (and realistically
// probably sufficient), so use them.
@@ -2831,29 +2846,32 @@ bool IRTranslator::translateCall(const User &U, MachineIRBuilder &MIRBuilder) {
}

// Add a MachineMemOperand if it is a target mem intrinsic.
TargetLowering::IntrinsicInfo Info;
// TODO: Add a GlobalISel version of getTgtMemIntrinsic.
if (TLI->getTgtMemIntrinsic(Info, CI, *MF, ID)) {
Align Alignment = Info.align.value_or(
DL->getABITypeAlign(Info.memVT.getTypeForEVT(F->getContext())));
LLT MemTy = Info.memVT.isSimple()
? getLLTForMVT(Info.memVT.getSimpleVT())
: LLT::scalar(Info.memVT.getStoreSizeInBits());
if (TgtMemIntrinsicInfo) {
const Function *F = CB.getCalledFunction();

Align Alignment = TgtMemIntrinsicInfo->align.value_or(DL->getABITypeAlign(
TgtMemIntrinsicInfo->memVT.getTypeForEVT(F->getContext())));
LLT MemTy =
TgtMemIntrinsicInfo->memVT.isSimple()
? getLLTForMVT(TgtMemIntrinsicInfo->memVT.getSimpleVT())
: LLT::scalar(TgtMemIntrinsicInfo->memVT.getStoreSizeInBits());

// TODO: We currently just fallback to address space 0 if getTgtMemIntrinsic
// didn't yield anything useful.
MachinePointerInfo MPI;
if (Info.ptrVal)
MPI = MachinePointerInfo(Info.ptrVal, Info.offset);
else if (Info.fallbackAddressSpace)
MPI = MachinePointerInfo(*Info.fallbackAddressSpace);
if (TgtMemIntrinsicInfo->ptrVal)
MPI = MachinePointerInfo(TgtMemIntrinsicInfo->ptrVal,
TgtMemIntrinsicInfo->offset);
else if (TgtMemIntrinsicInfo->fallbackAddressSpace)
MPI = MachinePointerInfo(*TgtMemIntrinsicInfo->fallbackAddressSpace);
MIB.addMemOperand(MF->getMachineMemOperand(
MPI, Info.flags, MemTy, Alignment, CI.getAAMetadata(),
/*Ranges=*/nullptr, Info.ssid, Info.order, Info.failureOrder));
MPI, TgtMemIntrinsicInfo->flags, MemTy, Alignment, CB.getAAMetadata(),
/*Ranges=*/nullptr, TgtMemIntrinsicInfo->ssid,
TgtMemIntrinsicInfo->order, TgtMemIntrinsicInfo->failureOrder));
}

if (CI.isConvergent()) {
if (auto Bundle = CI.getOperandBundle(LLVMContext::OB_convergencectrl)) {
if (CB.isConvergent()) {
if (auto Bundle = CB.getOperandBundle(LLVMContext::OB_convergencectrl)) {
auto *Token = Bundle->Inputs[0].get();
Register TokenReg = getOrCreateVReg(*Token);
MIB.addUse(TokenReg, RegState::Implicit);
@@ -3006,10 +3024,41 @@ bool IRTranslator::translateInvoke(const User &U,
return true;
}

/// The intrinsics currently supported by callbr are implicit control flow
/// intrinsics such as amdgcn.kill.
bool IRTranslator::translateCallBr(const User &U,
MachineIRBuilder &MIRBuilder) {
// FIXME: Implement this.
return false;
if (containsBF16Type(U))
return false; // see translateCall

const CallBrInst &I = cast<CallBrInst>(U);
MachineBasicBlock *CallBrMBB = &MIRBuilder.getMBB();

// FIXME: inline asm not yet supported for callbr in GlobalISel As soon as we
Contributor suggested change:

- // FIXME: inline asm not yet supported for callbr in GlobalISel As soon as we
+ // FIXME: inline asm is not yet supported for callbr in GlobalISel. As soon as we

// add support, we need to handle the indirect asm targets, see
// SelectionDAGBuilder::visitCallBr().
Intrinsic::ID IID = I.getIntrinsicID();
if (I.isInlineAsm())
return false;
if (IID == Intrinsic::not_intrinsic)
return false;
if (!translateTargetIntrinsic(I, IID, MIRBuilder))
return false;

// Retrieve successors.
SmallPtrSet<BasicBlock *, 8> Dests = {I.getDefaultDest()};
MachineBasicBlock *Return = &getMBB(*I.getDefaultDest());

// Update successor info.
addSuccessorWithProb(CallBrMBB, Return, BranchProbability::getOne());
// TODO: For most of the cases where there is an intrinsic callbr, we're
// having exactly one indirect target, which will be unreachable. As soon as
// this changes, we might need to enhance
// Target->setIsInlineAsmBrIndirectTarget or add something similar for
// intrinsic indirect branches.
CallBrMBB->normalizeSuccProbs();

return true;
}
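The ``translateCallBr`` path above handles callbr-on-intrinsic IR of roughly this shape (a sketch assuming an AMDGPU target, matching the LangRef example in this patch; the function and block names are invented):

```llvm
; Sketch: a control-flow intrinsic called via callbr. The indirect
; target %kill holds the chained terminator (unreachable), while %cont
; is the fallthrough successor that gets probability one.
declare void @llvm.amdgcn.kill(i1)

define amdgpu_ps void @demo(i1 %c) {
entry:
  callbr void @llvm.amdgcn.kill(i1 %c)
      to label %cont [label %kill]
cont:
  ret void
kill:
  unreachable
}
```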

bool IRTranslator::translateLandingPad(const User &U,