Writing an LLVM Pass
Introduction — What is a pass?
The LLVM Pass Framework is an important part of the LLVM system, because LLVM passes are where most of the interesting parts of the compiler exist. Passes perform the transformations and optimizations that make up the compiler, they build the analysis results that are used by these transformations, and they are, above all, a structuring technique for compiler code.
All LLVM passes are subclasses of the Pass class, which implement functionality by overriding virtual methods inherited from Pass. Depending on how your pass works, you should inherit from the ModulePass, CallGraphSCCPass, FunctionPass, LoopPass, or RegionPass class, which gives the system more information about what your pass does, and how it can be combined with other passes. One of the main features of the LLVM Pass Framework is that it schedules passes to run in an efficient way based on the constraints that your pass meets (which are indicated by which class they derive from).
We start by showing you how to construct a pass, everything from setting up the code, to compiling, loading, and executing it. After the basics are down, more advanced features are discussed.
Quick Start — Writing hello world
Here we describe how to write the “hello world” of passes. The “Hello” pass is designed to simply print out the name of non-external functions that exist in the program being compiled. It does not modify the program at all, it just inspects it. The source code and files for this pass are available in the LLVM source tree in the lib/Transforms/Hello directory.
Setting up the build environment
First, configure and build LLVM. Next, you need to create a new directory somewhere in the LLVM source base. For this example, we’ll assume that you made lib/Transforms/Hello. Finally, you must set up a build script that will compile the source code for the new pass. To do this, copy the following into CMakeLists.txt:
```cmake
add_llvm_library( LLVMHello MODULE
  Hello.cpp

  PLUGIN_TOOL
  opt
  )
```
and the following line into lib/Transforms/CMakeLists.txt:
```cmake
add_subdirectory(Hello)
```
(Note that there is already a directory named Hello with a sample “Hello” pass; you may play with it – in which case you don’t need to modify any CMakeLists.txt files – or, if you want to create everything from scratch, use another name.)
This build script specifies that the Hello.cpp file in the current directory is to be compiled and linked into a shared object $(LEVEL)/lib/LLVMHello.so that can be dynamically loaded by the opt tool via its -load option. If your operating system uses a suffix other than .so (such as Windows or macOS), the appropriate extension will be used.
Now that we have the build scripts set up, we just need to write the code for the pass itself.
Basic code required
Now that we have a way to compile our new pass, we just have to write it. Start out with:
- #include "llvm/Pass.h"
- #include "llvm/IR/Function.h"
- #include "llvm/Support/raw_ostream.h"
Which are needed because we are writing a Pass, we are operating on Functions, and we will be doing some printing.
Next we have:
```cpp
using namespace llvm;
```
… which is required because the functions from the include files live in the llvm namespace.
Next we have:
```cpp
namespace {
```
… which starts out an anonymous namespace. Anonymous namespaces are to C++ what the “static” keyword is to C (at global scope). It makes the things declared inside of the anonymous namespace visible only to the current file. If you’re not familiar with them, consult a decent C++ book for more information.
Next, we declare our pass itself:
```cpp
struct Hello : public FunctionPass {
```
This declares a “Hello” class that is a subclass of FunctionPass. The different builtin pass subclasses are described in detail later, but for now, know that FunctionPass operates on a function at a time.
```cpp
  static char ID;
  Hello() : FunctionPass(ID) {}
```
This declares the pass identifier used by LLVM to identify the pass. This allows LLVM to avoid using expensive C++ runtime information.
```cpp
  bool runOnFunction(Function &F) override {
    errs() << "Hello: ";
    errs().write_escaped(F.getName()) << '\n';
    return false;
  }
}; // end of struct Hello
}  // end of anonymous namespace
```
We declare a runOnFunction method, which overrides an abstract virtual method inherited from FunctionPass. This is where we are supposed to do our thing, so we just print out our message with the name of each function.
```cpp
char Hello::ID = 0;
```
We initialize the pass ID here. LLVM uses the ID’s address to identify the pass, so the initialization value is not important.
```cpp
static RegisterPass<Hello> X("hello", "Hello World Pass",
                             false /* Only looks at CFG */,
                             false /* Analysis Pass */);
```
Lastly, we register our class Hello, giving it a command line argument “hello”, and a name “Hello World Pass”. The last two arguments describe its behavior: if a pass walks the CFG without modifying it then the third argument is set to true; if a pass is an analysis pass, for example a dominator tree pass, then true is supplied as the fourth argument.
If we want to register the pass as a step of an existing pipeline, some extension points are provided, e.g. PassManagerBuilder::EP_EarlyAsPossible to apply our pass before any optimization, or PassManagerBuilder::EP_FullLinkTimeOptimizationLast to apply it after Link Time Optimizations.
```cpp
static llvm::RegisterStandardPasses Y(
    llvm::PassManagerBuilder::EP_EarlyAsPossible,
    [](const llvm::PassManagerBuilder &Builder,
       llvm::legacy::PassManagerBase &PM) { PM.add(new Hello()); });
```
As a whole, the .cpp file looks like:
- #include "llvm/Pass.h"
- #include "llvm/IR/Function.h"
- #include "llvm/Support/raw_ostream.h"
- #include "llvm/IR/LegacyPassManager.h"
- #include "llvm/Transforms/IPO/PassManagerBuilder.h"
- using namespace llvm;
- namespace {
- struct Hello : public FunctionPass {
- static char ID;
- Hello() : FunctionPass(ID) {}
- bool runOnFunction(Function &F) override {
- errs() << "Hello: ";
- errs().write_escaped(F.getName()) << '\n';
- return false;
- }
- }; // end of struct Hello
- } // end of anonymous namespace
- char Hello::ID = 0;
- static RegisterPass<Hello> X("hello", "Hello World Pass",
- false /* Only looks at CFG */,
- false /* Analysis Pass */);
- static RegisterStandardPasses Y(
- PassManagerBuilder::EP_EarlyAsPossible,
- [](const PassManagerBuilder &Builder,
- legacy::PassManagerBase &PM) { PM.add(new Hello()); });
Now that it’s all together, compile the file with a simple “gmake” command from the top level of your build directory and you should get a new file “lib/LLVMHello.so”. Note that everything in this file is contained in an anonymous namespace — this reflects the fact that passes are self contained units that do not need external interfaces (although they can have them) to be useful.
Running a pass with opt
Now that you have a brand new shiny shared object file, we can use the opt command to run an LLVM program through your pass. Because you registered your pass with RegisterPass, you will be able to use the opt tool to access it, once loaded.
To test it, follow the example at the end of the Getting Started with the LLVM System to compile “Hello World” to LLVM. We can now run the bitcode file (hello.bc) for the program through our transformation like this (of course, any bitcode file will work):
```console
$ opt -load lib/LLVMHello.so -hello < hello.bc > /dev/null
Hello: __main
Hello: puts
Hello: main
```
The -load option specifies that opt should load your pass as a shared object, which makes “-hello” a valid command line argument (which is one reason you need to register your pass). Because the Hello pass does not modify the program in any interesting way, we just throw away the result of opt (sending it to /dev/null).
To see what happened to the other string you registered, try running opt with the -help option:
```console
$ opt -load lib/LLVMHello.so -help
OVERVIEW: llvm .bc -> .bc modular optimizer and analysis printer

USAGE: opt [subcommand] [options] <input bitcode file>

OPTIONS:
  Optimizations available:
    ...
    -guard-widening           - Widen guards
    -gvn                      - Global Value Numbering
    -gvn-hoist                - Early GVN Hoisting of Expressions
    -hello                    - Hello World Pass
    -indvars                  - Induction Variable Simplification
    -inferattrs               - Infer set function attributes
    ...
```
The pass name gets added as the information string for your pass, giving some documentation to users of opt. Now that you have a working pass, you would go ahead and make it do the cool transformations you want. Once you get it all working and tested, it may become useful to find out how fast your pass is. The PassManager provides a nice command line option (-time-passes) that allows you to get information about the execution time of your pass along with the other passes you queue up. For example:
```console
$ opt -load lib/LLVMHello.so -hello -time-passes < hello.bc > /dev/null
Hello: __main
Hello: puts
Hello: main
===-------------------------------------------------------------------------===
                      ... Pass execution timing report ...
===-------------------------------------------------------------------------===
  Total Execution Time: 0.0007 seconds (0.0005 wall clock)

   ---User Time---   --User+System--   ---Wall Time---  --- Name ---
   0.0004 ( 55.3%)   0.0004 ( 55.3%)   0.0004 ( 75.7%)  Bitcode Writer
   0.0003 ( 44.7%)   0.0003 ( 44.7%)   0.0001 ( 13.6%)  Hello World Pass
   0.0000 (  0.0%)   0.0000 (  0.0%)   0.0001 ( 10.7%)  Module Verifier
   0.0007 (100.0%)   0.0007 (100.0%)   0.0005 (100.0%)  Total
```
As you can see, our implementation above is pretty fast. The additional passes listed are automatically inserted by the opt tool to verify that the LLVM emitted by your pass is still valid and well formed LLVM, which hasn’t been broken somehow.
Now that you have seen the basics of the mechanics behind passes, we can talk about some more details of how they work and how to use them.
Pass classes and requirements
One of the first things that you should do when designing a new pass is to decide what class you should subclass for your pass. The Hello World example uses the FunctionPass class for its implementation, but we did not discuss why or when this should occur. Here we talk about the classes available, from the most general to the most specific.
When choosing a superclass for your Pass, you should choose the most specific class possible, while still being able to meet the requirements listed. This gives the LLVM Pass Infrastructure information necessary to optimize how passes are run, so that the resultant compiler isn’t unnecessarily slow.
The ImmutablePass class
The most plain and boring type of pass is the “ImmutablePass” class. This pass type is used for passes that do not have to be run, do not change state, and never need to be updated. This is not a normal type of transformation or analysis, but can provide information about the current compiler configuration.
Although this pass class is very infrequently used, it is important for providing information about the current target machine being compiled for, and other static information that can affect the various transformations.
ImmutablePasses never invalidate other transformations, are never invalidated, and are never “run”.
The ModulePass class
The ModulePass class is the most general of all superclasses that you can use. Deriving from ModulePass indicates that your pass uses the entire program as a unit, referring to function bodies in no predictable order, or adding and removing functions. Because nothing is known about the behavior of ModulePass subclasses, no optimization can be done for their execution.
A module pass can use function level passes (e.g. dominators) using the getAnalysis interface getAnalysis<DominatorTree>(llvm::Function *) to provide the function to retrieve the analysis result for, if the function pass does not require any module or immutable passes. Note that this can only be done for functions for which the analysis ran, e.g. in the case of dominators you should only ask for the DominatorTree for function definitions, not declarations.
To write a correct ModulePass subclass, derive from ModulePass and overload the runOnModule method with the following signature:
The runOnModule method
```cpp
virtual bool runOnModule(Module &M) = 0;
```
The runOnModule method performs the interesting work of the pass. It should return true if the module was modified by the transformation and false otherwise.
The CallGraphSCCPass class
The CallGraphSCCPass is used bypasses that need to traverse the program bottom-up on the call graph (calleesbefore callers). Deriving from CallGraphSCCPass
provides some mechanicsfor building and traversing the CallGraph
, but also allows the system tooptimize execution of CallGraphSCCPass
es. If your pass meets therequirements outlined below, and doesn’t meet the requirements of aFunctionPass, you should derive fromCallGraphSCCPass
.
TODO
: explain briefly what SCC, Tarjan’s algo, and B-U mean.
To be explicit, CallGraphSCCPass subclasses are:
- … not allowed to inspect or modify any Functions other than those in the current SCC and the direct callers and direct callees of the SCC.
- … required to preserve the current CallGraph object, updating it to reflect any changes made to the program.
- … not allowed to add or remove SCC’s from the current Module, though they may change the contents of an SCC.
- … allowed to add or remove global variables from the current Module.
- … allowed to maintain state across invocations of runOnSCC (including global data).
Implementing a CallGraphSCCPass is slightly tricky in some cases because it has to handle SCCs with more than one node in it. All of the virtual methods described below should return true if they modified the program, or false if they didn’t.
The doInitialization(CallGraph &) method
```cpp
virtual bool doInitialization(CallGraph &CG);
```
The doInitialization method is allowed to do most of the things that CallGraphSCCPasses are not allowed to do. They can add and remove functions, get pointers to functions, etc. The doInitialization method is designed to do simple initialization type of stuff that does not depend on the SCCs being processed. The doInitialization method call is not scheduled to overlap with any other pass executions (thus it should be very fast).
The runOnSCC method
```cpp
virtual bool runOnSCC(CallGraphSCC &SCC) = 0;
```
The runOnSCC method performs the interesting work of the pass, and should return true if the module was modified by the transformation, false otherwise.
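A minimal sketch of a CallGraphSCCPass subclass (again with an invented class name and command line argument) that just prints the functions in each SCC as it is visited:
```cpp
#include "llvm/Analysis/CallGraph.h"
#include "llvm/Analysis/CallGraphSCCPass.h"
#include "llvm/Pass.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
// Hypothetical example: print the members of each strongly connected
// component in bottom-up order.
struct PrintSCCMembers : public CallGraphSCCPass {
  static char ID;
  PrintSCCMembers() : CallGraphSCCPass(ID) {}

  bool runOnSCC(CallGraphSCC &SCC) override {
    errs() << "SCC:";
    for (CallGraphNode *Node : SCC)
      if (Function *F = Node->getFunction())
        errs() << " " << F->getName();
    errs() << "\n";
    return false; // Neither the call graph nor the program was changed.
  }
};
} // end of anonymous namespace

char PrintSCCMembers::ID = 0;
static RegisterPass<PrintSCCMembers>
    X("print-scc-members", "Print call graph SCC members", false, true);
```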
The doFinalization(CallGraph &) method
```cpp
virtual bool doFinalization(CallGraph &CG);
```
The doFinalization method is an infrequently used method that is called when the pass framework has finished calling runOnSCC for every SCC in the program being compiled.
The FunctionPass class
In contrast to ModulePass subclasses, FunctionPass subclasses do have a predictable, local behavior that can be expected by the system. All FunctionPasses execute on each function in the program independent of all of the other functions in the program. FunctionPasses do not require that they are executed in a particular order, and FunctionPasses do not modify external functions.
To be explicit, FunctionPass subclasses are not allowed to:
- Inspect or modify a Function other than the one currently being processed.
- Add or remove Functions from the current Module.
- Add or remove global variables from the current Module.
- Maintain state across invocations of runOnFunction (including global data).
Implementing a FunctionPass is usually straightforward (See the Hello World pass for example). FunctionPasses may overload three virtual methods to do their work. All of these methods should return true if they modified the program, or false if they didn’t.
The doInitialization(Module &) method
```cpp
virtual bool doInitialization(Module &M);
```
The doInitialization method is allowed to do most of the things that FunctionPasses are not allowed to do. They can add and remove functions, get pointers to functions, etc. The doInitialization method is designed to do simple initialization type of stuff that does not depend on the functions being processed. The doInitialization method call is not scheduled to overlap with any other pass executions (thus it should be very fast).
A good example of how this method should be used is the LowerAllocations pass. This pass converts malloc and free instructions into platform dependent malloc() and free() function calls. It uses the doInitialization method to get a reference to the malloc and free functions that it needs, adding prototypes to the module if necessary.
The runOnFunction method
```cpp
virtual bool runOnFunction(Function &F) = 0;
```
The runOnFunction method must be implemented by your subclass to do the transformation or analysis work of your pass. As usual, a true value should be returned if the function is modified.
The doFinalization(Module &) method
```cpp
virtual bool doFinalization(Module &M);
```
The doFinalization method is an infrequently used method that is called when the pass framework has finished calling runOnFunction for every function in the program being compiled.
The LoopPass class
All LoopPasses execute on each loop in the function independent of all of the other loops in the function. LoopPass processes loops in loop nest order such that the outermost loop is processed last.
LoopPass subclasses are allowed to update the loop nest using the LPPassManager interface. Implementing a loop pass is usually straightforward. LoopPasses may overload three virtual methods to do their work. All these methods should return true if they modified the program, or false if they didn’t.
A LoopPass subclass which is intended to run as part of the main loop pass pipeline needs to preserve all of the same function analyses that the other loop passes in its pipeline require. To make that easier, a getLoopAnalysisUsage function is provided by LoopUtils.h. It can be called within the subclass’s getAnalysisUsage override to get consistent and correct behavior. Analogously, INITIALIZE_PASS_DEPENDENCY(LoopPass) will initialize this set of function analyses.
The doInitialization(Loop *, LPPassManager &) method
```cpp
virtual bool doInitialization(Loop *, LPPassManager &LPM);
```
The doInitialization method is designed to do simple initialization type of stuff that does not depend on the functions being processed. The doInitialization method call is not scheduled to overlap with any other pass executions (thus it should be very fast). The LPPassManager interface should be used to access Function or Module level analysis information.
The runOnLoop method
```cpp
virtual bool runOnLoop(Loop *, LPPassManager &LPM) = 0;
```
The runOnLoop method must be implemented by your subclass to do the transformation or analysis work of your pass. As usual, a true value should be returned if the function is modified. The LPPassManager interface should be used to update the loop nest.
The doFinalization() method
```cpp
virtual bool doFinalization();
```
The doFinalization method is an infrequently used method that is called when the pass framework has finished calling runOnLoop for every loop in the program being compiled.
The RegionPass class
RegionPass is similar to LoopPass, but executes on each single entry single exit region in the function. RegionPass processes regions in nested order such that the outermost region is processed last.
RegionPass subclasses are allowed to update the region tree by using the RGPassManager interface. You may overload three virtual methods of RegionPass to implement your own region pass. All these methods should return true if they modified the program, or false if they did not.
The doInitialization(Region *, RGPassManager &) method
```cpp
virtual bool doInitialization(Region *, RGPassManager &RGM);
```
The doInitialization method is designed to do simple initialization type of stuff that does not depend on the functions being processed. The doInitialization method call is not scheduled to overlap with any other pass executions (thus it should be very fast). The RGPassManager interface should be used to access Function or Module level analysis information.
The runOnRegion method
```cpp
virtual bool runOnRegion(Region *, RGPassManager &RGM) = 0;
```
The runOnRegion method must be implemented by your subclass to do the transformation or analysis work of your pass. As usual, a true value should be returned if the region is modified. The RGPassManager interface should be used to update the region tree.
The doFinalization() method
```cpp
virtual bool doFinalization();
```
The doFinalization method is an infrequently used method that is called when the pass framework has finished calling runOnRegion for every region in the program being compiled.
The MachineFunctionPass class
A MachineFunctionPass is a part of the LLVM code generator that executes on the machine-dependent representation of each LLVM function in the program.
Code generator passes are registered and initialized specially by TargetMachine::addPassesToEmitFile and similar routines, so they cannot generally be run from the opt or bugpoint commands.
A MachineFunctionPass is also a FunctionPass, so all the restrictions that apply to a FunctionPass also apply to it. MachineFunctionPasses also have additional restrictions. In particular, MachineFunctionPasses are not allowed to do any of the following:
- Modify or create any LLVM IR Instructions, BasicBlocks, Arguments, Functions, GlobalVariables, GlobalAliases, or Modules.
- Modify a MachineFunction other than the one currently being processed.
- Maintain state across invocations of runOnMachineFunction (including global data).
The runOnMachineFunction(MachineFunction &MF) method
```cpp
virtual bool runOnMachineFunction(MachineFunction &MF) = 0;
```
runOnMachineFunction can be considered the main entry point of a MachineFunctionPass; that is, you should override this method to do the work of your MachineFunctionPass.
The runOnMachineFunction method is called on every MachineFunction in a Module, so that the MachineFunctionPass may perform optimizations on the machine-dependent representation of the function. If you want to get at the LLVM Function for the MachineFunction you’re working on, use MachineFunction’s getFunction() accessor method — but remember, you may not modify the LLVM Function or its contents from a MachineFunctionPass.
Pass registration
In the Hello World example pass we illustrated how pass registration works, and discussed some of the reasons that it is used and what it does. Here we discuss how and why passes are registered.
As we saw above, passes are registered with the RegisterPass template. The template parameter is the name of the pass that is to be used on the command line to specify that the pass should be added to a program (for example, with opt or bugpoint). The first argument is the name of the pass, which is to be used for the -help output of programs, as well as for debug output generated by the --debug-pass option.
If you want your pass to be easily dumpable, you should implement the virtual print method:
The print method
```cpp
virtual void print(llvm::raw_ostream &O, const Module *M) const;
```
The print method must be implemented by “analyses” in order to print a human readable version of the analysis results. This is useful for debugging an analysis itself, as well as for other people to figure out how an analysis works. Use the opt -analyze argument to invoke this method.
The llvm::raw_ostream parameter specifies the stream to write the results on, and the Module parameter gives a pointer to the top level module of the program that has been analyzed. Note however that this pointer may be NULL in certain circumstances (such as calling the Pass::dump() from a debugger), so it should only be used to enhance debug output, it should not be depended on.
Specifying interactions between passes
One of the main responsibilities of the PassManager is to make sure that passes interact with each other correctly. Because PassManager tries to optimize the execution of passes it must know how the passes interact with each other and what dependencies exist between the various passes. To track this, each pass can declare the set of passes that are required to be executed before the current pass, and the passes which are invalidated by the current pass.
Typically this functionality is used to require that analysis results are computed before your pass is run. Running arbitrary transformation passes can invalidate the computed analysis results, which is what the invalidation set specifies. If a pass does not implement the getAnalysisUsage method, it defaults to not having any prerequisite passes, and invalidating all other passes.
The getAnalysisUsage method
```cpp
virtual void getAnalysisUsage(AnalysisUsage &Info) const;
```
By implementing the getAnalysisUsage method, the required and invalidated sets may be specified for your transformation. The implementation should fill in the AnalysisUsage object with information about which passes are required and not invalidated. To do this, a pass may call any of the following methods on the AnalysisUsage object:
The AnalysisUsage::addRequired<> and AnalysisUsage::addRequiredTransitive<> methods
If your pass requires a previous pass to be executed (an analysis for example), it can use one of these methods to arrange for it to be run before your pass. LLVM has many different types of analyses and passes that can be required, spanning the range from DominatorSet to BreakCriticalEdges. Requiring BreakCriticalEdges, for example, guarantees that there will be no critical edges in the CFG when your pass has been run.
Some analyses chain to other analyses to do their job. For example, an AliasAnalysis implementation is required to chain to other alias analysis passes. In cases where analyses chain, the addRequiredTransitive method should be used instead of the addRequired method. This informs the PassManager that the transitively required pass should be alive as long as the requiring pass is.
The AnalysisUsage::addPreserved<> method
One of the jobs of the PassManager is to optimize how and when analyses are run. In particular, it attempts to avoid recomputing data unless it needs to. For this reason, passes are allowed to declare that they preserve (i.e., they don’t invalidate) an existing analysis if it’s available. For example, a simple constant folding pass would not modify the CFG, so it can’t possibly affect the results of dominator analysis. By default, all passes are assumed to invalidate all others.
The AnalysisUsage class provides several methods which are useful in certain circumstances that are related to addPreserved. In particular, the setPreservesAll method can be called to indicate that the pass does not modify the LLVM program at all (which is true for analyses), and the setPreservesCFG method can be used by transformations that change instructions in the program but do not modify the CFG or terminator instructions.
addPreserved is particularly useful for transformations like BreakCriticalEdges. This pass knows how to update a small set of loop and dominator related analyses if they exist, so it can preserve them, despite the fact that it hacks on the CFG.
Example implementations of getAnalysisUsage
```cpp
// This example modifies the program, but does not modify the CFG
void LICM::getAnalysisUsage(AnalysisUsage &AU) const {
  AU.setPreservesCFG();
  AU.addRequired<LoopInfoWrapperPass>();
}
```
The getAnalysis<> and getAnalysisIfAvailable<> methods
The Pass::getAnalysis<> method is automatically inherited by your class, providing you with access to the passes that you declared that you required with the getAnalysisUsage method. It takes a single template argument that specifies which pass class you want, and returns a reference to that pass. For example:
```cpp
bool LICM::runOnFunction(Function &F) {
  LoopInfo &LI = getAnalysis<LoopInfoWrapperPass>().getLoopInfo();
  //...
}
```
This method call returns a reference to the pass desired. You may get a runtime assertion failure if you attempt to get an analysis that you did not declare as required in your getAnalysisUsage implementation. This method can be called by your run method implementation, or by any other local method invoked by your run method.
A module level pass can use function level analysis info using this interface. For example:
```cpp
bool ModuleLevelPass::runOnModule(Module &M) {
  //...
  DominatorTree &DT = getAnalysis<DominatorTree>(Func);
  //...
}
```
In the above example, runOnFunction for DominatorTree is called by the pass manager before returning a reference to the desired pass.
If your pass is capable of updating analyses if they exist (e.g., BreakCriticalEdges, as described above), you can use the getAnalysisIfAvailable method, which returns a pointer to the analysis if it is active. For example:
```cpp
if (DominatorSet *DS = getAnalysisIfAvailable<DominatorSet>()) {
  // A DominatorSet is active.  This code will update it.
}
```
Implementing Analysis Groups
Now that we understand the basics of how passes are defined, how they are used, and how they are required from other passes, it’s time to get a little bit fancier. All of the pass relationships that we have seen so far are very simple: one pass depends on one other specific pass to be run before it can run. For many applications, this is great, for others, more flexibility is required.
In particular, some analyses are defined such that there is a single simple interface to the analysis results, but multiple ways of calculating them. Consider alias analysis for example. The most trivial alias analysis returns “may alias” for any alias query. The most sophisticated analysis is a flow-sensitive, context-sensitive interprocedural analysis that can take a significant amount of time to execute (and obviously, there is a lot of room between these two extremes for other implementations). To cleanly support situations like this, the LLVM Pass Infrastructure supports the notion of Analysis Groups.
Analysis Group Concepts
An Analysis Group is a single simple interface that may be implemented by multiple different passes. Analysis Groups can be given human readable names just like passes, but unlike passes, they need not derive from the Pass class. An analysis group may have one or more implementations, one of which is the “default” implementation.
Analysis groups are used by client passes just like other passes are: the AnalysisUsage::addRequired() and Pass::getAnalysis() methods. In order to resolve this requirement, the PassManager scans the available passes to see if any implementations of the analysis group are available. If none is available, the default implementation is created for the pass to use. All standard rules for interaction between passes still apply.
Although Pass Registration is optional for normal passes, all analysis group implementations must be registered, and must use the INITIALIZE_AG_PASS template to join the implementation pool. Also, a default implementation of the interface must be registered with RegisterAnalysisGroup.
As a concrete example of an Analysis Group in action, consider the AliasAnalysis analysis group. The default implementation of the alias analysis interface (the basicaa pass) just does a few simple checks that don’t require significant analysis to compute (such as: two different globals can never alias each other, etc). Passes that use the AliasAnalysis interface (for example the gvn pass), do not care which implementation of alias analysis is actually provided, they just use the designated interface.
From the user’s perspective, commands work just like normal. Issuing the command opt -gvn … will cause the basicaa class to be instantiated and added to the pass sequence. Issuing the command opt -somefancyaa -gvn … will cause the gvn pass to use the somefancyaa alias analysis (which doesn’t actually exist, it’s just a hypothetical example) instead.
Using RegisterAnalysisGroup
The RegisterAnalysisGroup template is used to register the analysis group itself, while the INITIALIZE_AG_PASS is used to add pass implementations to the analysis group. First, an analysis group should be registered, with a human readable name provided for it. Unlike registration of passes, there is no command line argument to be specified for the Analysis Group Interface itself, because it is “abstract”:
```cpp
static RegisterAnalysisGroup<AliasAnalysis> A("Alias Analysis");
```
Once the analysis is registered, passes can declare that they are valid implementations of the interface by using the following code:
```cpp
namespace {
// Declare that we implement the AliasAnalysis interface
INITIALIZE_AG_PASS(FancyAA, AliasAnalysis, "somefancyaa",
                   "A more complex alias analysis implementation",
                   false,  // Is CFG Only?
                   true,   // Is Analysis?
                   false); // Is default Analysis Group implementation?
}
```
This just shows a class FancyAA that uses the INITIALIZE_AG_PASS macro both to register and to “join” the AliasAnalysis analysis group. Every implementation of an analysis group should join using this macro.
```cpp
namespace {
// Declare that we implement the AliasAnalysis interface
INITIALIZE_AG_PASS(BasicAA, AliasAnalysis, "basicaa",
                   "Basic Alias Analysis (default AA impl)",
                   false, // Is CFG Only?
                   true,  // Is Analysis?
                   true); // Is default Analysis Group implementation?
}
```
Here we show how the default implementation is specified (using the final argument to the INITIALIZE_AG_PASS template). There must be exactly one default implementation available at all times for an Analysis Group to be used. Only the default implementation can derive from ImmutablePass. Here we declare that the BasicAliasAnalysis pass is the default implementation for the interface.
Pass Statistics
The Statistic class is designed to be an easy way to expose various success metrics from passes. These statistics are printed at the end of a run, when the -stats command line option is enabled on the command line. See the Statistics section in the Programmer’s Manual for details.
What PassManager does
The PassManager class takes a list of passes, ensures their prerequisites are set up correctly, and then schedules passes to run efficiently. All of the LLVM tools that run passes use the PassManager for execution of these passes.
The PassManager does two main things to try to reduce the execution time of a series of passes:
- Share analysis results. The PassManager attempts to avoid recomputing analysis results as much as possible. This means keeping track of which analyses are available already, which analyses get invalidated, and which analyses are needed to be run for a pass. An important part of work is that the PassManager tracks the exact lifetime of all analysis results, allowing it to free memory allocated to holding analysis results as soon as they are no longer needed.
- Pipeline the execution of passes on the program. The PassManager attempts to get better cache and memory usage behavior out of a series of passes by pipelining the passes together. This means that, given a series of consecutive FunctionPasses, it will execute all of the FunctionPasses on the first function, then all of the FunctionPasses on the second function, etc… until the entire program has been run through the passes.
This improves the cache behavior of the compiler, because it is only touching the LLVM program representation for a single function at a time, instead of traversing the entire program. It reduces the memory consumption of the compiler, because, for example, only one DominatorSet needs to be calculated at a time. This also makes it possible to implement some interesting enhancements in the future.
The effectiveness of the PassManager is influenced directly by how much information it has about the behaviors of the passes it is scheduling. For example, the “preserved” set is intentionally conservative in the face of an unimplemented getAnalysisUsage method. Not implementing when it should be implemented will have the effect of not allowing any analysis results to live across the execution of your pass.
The PassManager class exposes a --debug-pass command line option that is useful for debugging pass execution, seeing how things work, and diagnosing when you should be preserving more analyses than you currently are. (To get information about all of the variants of the --debug-pass option, just type “opt -help-hidden”).
By using the --debug-pass=Structure option, for example, we can see how our Hello World pass interacts with other passes. Let’s try it out with the gvn and licm passes:
```console
$ opt -load lib/LLVMHello.so -gvn -licm --debug-pass=Structure < hello.bc > /dev/null
ModulePass Manager
  FunctionPass Manager
    Dominator Tree Construction
    Basic Alias Analysis (stateless AA impl)
    Function Alias Analysis Results
    Memory Dependence Analysis
    Global Value Numbering
    Natural Loop Information
    Canonicalize natural loops
    Loop-Closed SSA Form Pass
    Basic Alias Analysis (stateless AA impl)
    Function Alias Analysis Results
    Scalar Evolution Analysis
    Loop Pass Manager
      Loop Invariant Code Motion
    Module Verifier
  Bitcode Writer
```
This output shows us when passes are constructed. Here we see that GVN uses dominator tree information to do its job. The LICM pass uses natural loop information, which uses dominator tree as well.
After the LICM pass, the module verifier runs (which is automatically added by the opt tool), which uses the dominator tree to check that the resultant LLVM code is well formed. Note that the dominator tree is computed once, and shared by three passes.
Let’s see how this changes when we run the Hello World pass in between the two passes:
```console
$ opt -load lib/LLVMHello.so -gvn -hello -licm --debug-pass=Structure < hello.bc > /dev/null
ModulePass Manager
  FunctionPass Manager
    Dominator Tree Construction
    Basic Alias Analysis (stateless AA impl)
    Function Alias Analysis Results
    Memory Dependence Analysis
    Global Value Numbering
    Hello World Pass
    Dominator Tree Construction
    Natural Loop Information
    Canonicalize natural loops
    Loop-Closed SSA Form Pass
    Basic Alias Analysis (stateless AA impl)
    Function Alias Analysis Results
    Scalar Evolution Analysis
    Loop Pass Manager
      Loop Invariant Code Motion
    Module Verifier
  Bitcode Writer
Hello: __main
Hello: puts
Hello: main
```
Here we see that the Hello World pass has killed the Dominator Tree pass, even though it doesn’t modify the code at all! To fix this, we need to add the following getAnalysisUsage method to our pass:
```cpp
// We don't modify the program, so we preserve all analyses.
void getAnalysisUsage(AnalysisUsage &AU) const override {
  AU.setPreservesAll();
}
```
Now when we run our pass, we get this output:
```console
$ opt -load lib/LLVMHello.so -gvn -hello -licm --debug-pass=Structure < hello.bc > /dev/null
Pass Arguments:  -gvn -hello -licm
ModulePass Manager
  FunctionPass Manager
    Dominator Tree Construction
    Basic Alias Analysis (stateless AA impl)
    Function Alias Analysis Results
    Memory Dependence Analysis
    Global Value Numbering
    Hello World Pass
    Natural Loop Information
    Canonicalize natural loops
    Loop-Closed SSA Form Pass
    Basic Alias Analysis (stateless AA impl)
    Function Alias Analysis Results
    Scalar Evolution Analysis
    Loop Pass Manager
      Loop Invariant Code Motion
    Module Verifier
  Bitcode Writer
Hello: __main
Hello: puts
Hello: main
```
Which shows that we don’t accidentally invalidate dominator information anymore, and therefore do not have to compute it twice.
The releaseMemory method
```cpp
virtual void releaseMemory();
```
The PassManager automatically determines when to compute analysis results, and how long to keep them around for. Because the lifetime of the pass object itself is effectively the entire duration of the compilation process, we need some way to free analysis results when they are no longer useful. The releaseMemory virtual method is the way to do this.
If you are writing an analysis or any other pass that retains a significant amount of state (for use by another pass which “requires” your pass and uses the getAnalysis method) you should implement releaseMemory to, well, release the memory allocated to maintain this internal state. This method is called after the run method for the class, before the next call of run in your pass.
Building pass plugins
As an alternative to using PLUGIN_TOOL, LLVM provides a mechanism to automatically register pass plugins within clang, opt and bugpoint. One first needs to create an independent project and add it to either tools/ or, using the MonoRepo layout, at the root of the repo alongside other projects. This project must contain the following minimal CMakeLists.txt:
```cmake
add_llvm_pass_plugin(Name source0.cpp)
```
The pass must provide two entry points for the new pass manager, one for static registration and one for dynamically loaded plugins:
```cpp
llvm::PassPluginLibraryInfo get##Name##PluginInfo();
extern "C" ::llvm::PassPluginLibraryInfo llvmGetPassPluginInfo() LLVM_ATTRIBUTE_WEAK;
```
Pass plugins are compiled and linked dynamically by default, but it’s possible to set the following variables to change this behavior:
- LLVM_${NAME}_LINK_INTO_TOOLS, when set to ON, turns the project into a statically linked extension
When building a tool that uses the new pass manager, one can use the following snippet to include statically linked pass plugins:
```cpp
// fetch the declaration
#define HANDLE_EXTENSION(Ext) llvm::PassPluginLibraryInfo get##Ext##PluginInfo();
#include "llvm/Support/Extension.def"

[...]

// use them, PB is an llvm::PassBuilder instance
#define HANDLE_EXTENSION(Ext) get##Ext##PluginInfo().RegisterPassBuilderCallbacks(PB);
#include "llvm/Support/Extension.def"
```
Registering dynamically loaded passes
Size matters when constructing production quality tools using LLVM, both for the purposes of distribution, and for regulating the resident code size when running on the target system. Therefore, it becomes desirable to selectively use some passes, while omitting others and maintain the flexibility to change configurations later on. You want to be able to do all this, and, provide feedback to the user. This is where pass registration comes into play.
The fundamental mechanisms for pass registration are the MachinePassRegistry class and subclasses of MachinePassRegistryNode.
An instance of MachinePassRegistry is used to maintain a list of MachinePassRegistryNode objects. This instance maintains the list and communicates additions and deletions to the command line interface.
An instance of a MachinePassRegistryNode subclass is used to maintain information provided about a particular pass. This information includes the command line name, the command help string and the address of the function used to create an instance of the pass. A global static constructor of one of these instances registers with a corresponding MachinePassRegistry, the static destructor unregisters. Thus a pass that is statically linked in the tool will be registered at start up. A dynamically loaded pass will register on load and unregister at unload.
Using existing registries
There are predefined registries to track instruction scheduling (RegisterScheduler) and register allocation (RegisterRegAlloc) machine passes. Here we will describe how to register a register allocator machine pass.
Implement your register allocator machine pass. In your register allocator .cpp file add the following include:
```cpp
#include "llvm/CodeGen/RegAllocRegistry.h"
```
Also in your register allocator .cpp file, define a creator function in the form:
```cpp
FunctionPass *createMyRegisterAllocator() {
  return new MyRegisterAllocator();
}
```
Note that the signature of this function should match the type of RegisterRegAlloc::FunctionPassCtor. In the same file add the “installing” declaration, in the form:
```cpp
static RegisterRegAlloc myRegAlloc("myregalloc",
                                   "  my register allocator help string",
                                   createMyRegisterAllocator);
```
Note that the two spaces prior to the help string produce a tidy result on the -help query.
```console
$ llc -help
  ...
  -regalloc                    - Register allocator to use (default=linearscan)
    =linearscan                -   linear scan register allocator
    =local                     -   local register allocator
    =simple                    -   simple register allocator
    =myregalloc                -   my register allocator help string
  ...
```
And that’s it. The user is now free to use -regalloc=myregalloc as an option. Registering instruction schedulers is similar except use the RegisterScheduler class. Note that the RegisterScheduler::FunctionPassCtor is significantly different from RegisterRegAlloc::FunctionPassCtor.
To force the load/linking of your register allocator into the llc/lli tools, add your creator function’s global declaration to Passes.h and add a “pseudo” call line to llvm/CodeGen/LinkAllCodegenComponents.h.
Creating new registries
The easiest way to get started is to clone one of the existing registries; we recommend llvm/CodeGen/RegAllocRegistry.h. The key things to modify are the class name and the FunctionPassCtor type.
Then you need to declare the registry. Example: if your pass registry is RegisterMyPasses then define:
```cpp
MachinePassRegistry RegisterMyPasses::Registry;
```
And finally, declare the command line option for your passes. Example:
```cpp
cl::opt<RegisterMyPasses::FunctionPassCtor, false,
        RegisterPassParser<RegisterMyPasses>>
MyPassOpt("mypass",
          cl::init(&createDefaultMyPass),
          cl::desc("my pass option help"));
```
Here the command option is “mypass”, with createDefaultMyPass as the default creator.
Using GDB with dynamically loaded passes
Unfortunately, using GDB with dynamically loaded passes is not as easy as it should be. First of all, you can’t set a breakpoint in a shared object that has not been loaded yet, and second of all there are problems with inlined functions in shared objects. Here are some suggestions for debugging your pass with GDB.
For the sake of discussion, I’m going to assume that you are debugging a transformation invoked by opt, although nothing described here depends on that.
Setting a breakpoint in your pass
First thing you do is start gdb on the opt process:
```console
$ gdb opt
GNU gdb 5.0
Copyright 2000 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "sparc-sun-solaris2.6"...
(gdb)
```
Note that opt has a lot of debugging information in it, so it takes time to load. Be patient. Since we cannot set a breakpoint in our pass yet (the shared object isn’t loaded until runtime), we must execute the process, and have it stop before it invokes our pass, but after it has loaded the shared object. The most foolproof way of doing this is to set a breakpoint in PassManager::run and then run the process with the arguments you want:
```console
(gdb) break llvm::PassManager::run
Breakpoint 1 at 0x2413bc: file Pass.cpp, line 70.
(gdb) run test.bc -load $(LLVMTOP)/llvm/Debug+Asserts/lib/[libname].so -[passoption]
Starting program: opt test.bc -load $(LLVMTOP)/llvm/Debug+Asserts/lib/[libname].so -[passoption]
Breakpoint 1, PassManager::run (this=0xffbef174, M=@0x70b298) at Pass.cpp:70
70      bool PassManager::run(Module &M) { return PM->run(M); }
(gdb)
```
Once opt stops in the PassManager::run method, you are now free to set breakpoints in your pass so that you can trace through execution or do other standard debugging stuff.
Miscellaneous Problems
Once you have the basics down, there are a couple of problems that GDB has, some with solutions, some without.
- Inline functions have bogus stack information. In general, GDB does a pretty good job getting stack traces and stepping through inline functions. When a pass is dynamically loaded however, it somehow completely loses this capability. The only solution I know of is to de-inline a function (move it from the body of a class to a .cpp file).
- Restarting the program breaks breakpoints. After following the information above, you have succeeded in getting some breakpoints planted in your pass. Next thing you know, you restart the program (i.e., you type “run” again), and you start getting errors about breakpoints being unsettable. The only way I have found to “fix” this problem is to delete the breakpoints that are already set in your pass, run the program, and re-set the breakpoints once execution stops in PassManager::run.
Hopefully these tips will help with common case debugging situations. If you’d like to contribute some tips of your own, just contact Chris.
Future extensions planned
Although the LLVM Pass Infrastructure is very capable as it stands, and does some nifty stuff, there are things we’d like to add in the future. Here is where we are going:
Multithreaded LLVM
Multiple CPU machines are becoming more common and compilation can never be fast enough: obviously we should allow for a multithreaded compiler. Because of the semantics defined for passes above (specifically they cannot maintain state across invocations of their run* methods), a nice clean way to implement a multithreaded compiler would be for the PassManager class to create multiple instances of each pass object, and allow the separate instances to be hacking on different parts of the program at the same time.
This implementation would prevent each of the passes from having to implement multithreaded constructs, requiring only the LLVM core to have locking in a few places (for global resources). Although this is a simple extension, we simply haven’t had time (or multiprocessor machines, thus a reason) to implement this. Despite that, we have kept the LLVM passes SMP ready, and you should too.