10.3. Optimisation (code improvement)

The -O* options specify convenient “packages” of optimisation flags; the -f* options described later on specify individual optimisations to be turned on/off; the -m* options specify machine-specific optimisations to be turned on/off.

Most of these options are boolean and come in both “on” and “off” forms (the “off” form beginning with the prefix no-). For instance, while -fspecialise enables specialisation, -fno-specialise disables it. When multiple flags for the same option appear on the command line they are evaluated from left to right. For instance, -fno-specialise -fspecialise will enable specialisation.

It is important to note that the -O* flags are roughly equivalent to combinations of -f* flags. For this reason, the effect of the -O* and -f* flags is dependent upon the order in which they occur on the command line.

For instance, take the example of -fno-specialise -O1. Despite -fno-specialise appearing in the command line, specialisation will still be enabled. This is because -O1 implies -fspecialise, overriding the previous flag. By contrast, -O1 -fno-specialise will compile without specialisation, as one would expect.

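The same point written out as full command lines (Foo.hs stands for any module you are compiling):

    ghc -c -fno-specialise -O1 Foo.hs    # specialisation is still enabled (-O1 comes later)
    ghc -c -O1 -fno-specialise Foo.hs    # specialisation is disabled
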
10.3.1. -O*: convenient “packages” of optimisation flags.

There are many options that affect the quality of code produced by GHC. Most people only have a general goal, something like “Compile quickly” or “Make my program run like greased lightning.” The following “packages” of optimisations (or lack thereof) should suffice.
Note that higher optimisation levels cause more cross-module optimisation to be performed, which can have an impact on how much of your program needs to be recompiled when you change something. This is one reason to stick to no-optimisation when developing code.
No -O*-type option specified: This is taken to mean “Please compile quickly; I’m not over-bothered about compiled-code quality.” So, for example:

    ghc -c Foo.hs

-O0
Means “turn off all optimisation”, reverting to the same settings as if no -O options had been specified. Saying -O0 can be useful if e.g. make has inserted a -O on the command line already.

-O, -O1
Means: “Generate good-quality code without taking too long about it.” Thus, for example:

    ghc -c -O Main.lhs

-O2
Means: “Apply every non-dangerous optimisation, even if it means significantly longer compile times.”

The avoided “dangerous” optimisations are those that can make runtime or space worse if you’re unlucky. They are normally turned on or off individually.

We don’t use a -O* flag for day-to-day work. We use -O to get respectable speed; e.g., when we want to measure something. When we want to go for broke, we tend to use -O2 (and we go for lots of coffee breaks).

The easiest way to see what -O (etc.) “really mean” is to run with -v, then stand back in amazement.

10.3.2. -f*: platform-independent flags

These flags turn on and off individual optimisations. Flags marked as on by default are enabled by -O, and as such you shouldn’t need to set any of them explicitly. A flag -fwombat can be negated by saying -fno-wombat.

-fcase-merge
Default: on

Merge immediately-nested case expressions that scrutinise the same variable. For example,

    case x of
       Red -> e1
       _   -> case x of
                Blue  -> e2
                Green -> e3

is transformed to

    case x of
       Red   -> e1
       Blue  -> e2
       Green -> e3

-fcase-folding
Default: on

Allow constant folding in case expressions that scrutinise some primops. For example,

    case x `minusWord#` 10## of
       10## -> e1
       20## -> e2
       v    -> e3

is transformed to

    case x of
       20## -> e1
       30## -> e2
       _    -> let v = x `minusWord#` 10## in e3

-fcall-arity
Default: on

Enable call-arity analysis.

-fexitification
Default: on

Enables the floating of exit paths out of recursive functions.

-fcmm-elim-common-blocks
Default: on

Enables the common block elimination optimisation in the code generator. This optimisation attempts to find identical Cmm blocks and eliminate the duplicates.

-fcmm-sink
Default: on

Enables the sinking pass in the code generator. This optimisation attempts to move variable bindings closer to their usage sites. It also inlines simple expressions like literals or registers.

-fasm-shortcutting
Default: off

This enables shortcutting at the assembly stage of the code generator. In simpler terms, shortcutting means that if a block of instructions A consists only of an unconditional jump, we replace all jumps to A by jumps to the successor of A.

This is mostly done during Cmm passes. However, this can miss corner cases, so at -O2 the pass is run again at the assembly stage to catch these.

-fcpr-anal
Default: on

Turn on CPR analysis in the demand analyser.

-fcse
Default: on

Enables the common-sub-expression elimination optimisation. Switching this off can be useful if you have some unsafePerformIO expressions that you don’t want commoned-up.

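An illustrative sketch of the kind of code this -fcse caveat is about (names invented here, not taken from the manual): the two right-hand sides below are textually identical, so with -fcse they may be commoned up into one shared computation and the counter bumped only once, even though the programmer intended two separate effects.

    import Data.IORef
    import System.IO.Unsafe (unsafePerformIO)

    counter :: IORef Int
    counter = unsafePerformIO (newIORef 0)
    {-# NOINLINE counter #-}

    -- Textually identical right-hand sides: candidates for CSE.
    tick1, tick2 :: Int
    tick1 = unsafePerformIO (modifyIORef counter (+1) >> readIORef counter)
    tick2 = unsafePerformIO (modifyIORef counter (+1) >> readIORef counter)

    main :: IO ()
    main = print (tick1, tick2)  -- may print (1,1) with CSE; (1,2) or (2,1) without
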
-fstg-cse
Default: on

Enables the common-sub-expression elimination optimisation on the STG intermediate language, where it is able to common up some subexpressions that differ in their types, but not their representation.

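A small illustrative example of “same representation, different types” (invented here): after newtype erasure, Just x and Just (Age x) build identical heap objects, so STG CSE may share a single allocation for both, which ordinary Core CSE cannot do because their types differ.

    newtype Age = Age Int

    agePair :: Int -> (Maybe Int, Maybe Age)
    agePair x = (Just x, Just (Age x))  -- same STG representation, different Core types
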
-fdicts-cheap
Default: off

A very experimental flag that makes dictionary-valued expressions seem cheap to the optimiser.

-fdicts-strict
Default: off

Make dictionaries strict.

-fdmd-tx-dict-sel
Default: on

Use a special demand transformer for dictionary selectors.

-fdo-eta-reduction
Default: on

Eta-reduce lambda expressions, if doing so gets rid of a whole group of lambdas.

-fdo-lambda-eta-expansion
Default: on

Eta-expand let-bindings to increase their arity.

-feager-blackholing
Default: off

Usually GHC black-holes a thunk only when it switches threads. This flag makes it do so as soon as the thunk is entered. See Haskell on a shared-memory multiprocessor.

See Compile-time options for SMP parallelism for a discussion of its use.

-fexcess-precision
Default: off

When this option is given, intermediate floating point values can have a greater precision/range than the final type. Generally this is a good thing, but some programs may rely on the exact precision/range of Float/Double values and should not use this option for their compilation.

Note that the 32-bit x86 native code generator only supports excess-precision mode, so neither -fexcess-precision nor -fno-excess-precision has any effect. This is a known bug, see Bugs in GHC.

-fexpose-all-unfoldings
Default: off

An experimental flag to expose all unfoldings, even for very large or recursive functions. This allows all functions to be considered for inlining, whereas GHC would usually avoid inlining larger functions.

-ffloat-in
Default: on

Float let-bindings inwards, nearer their binding site. See Let-floating: moving bindings to give faster programs (ICFP’96).

This optimisation moves let bindings closer to their use site. The benefit here is that this may avoid unnecessary allocation if the branch the let is now on is never executed. It also enables other optimisation passes to work more effectively as they have more information locally.

This optimisation isn’t always beneficial though (so GHC applies some heuristics to decide when to apply it). The details get complicated, but a simple example is that it is often beneficial to move let bindings outwards so that multiple let bindings can be grouped into a larger single let binding, effectively batching their allocation and helping the garbage collector and allocator.

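A rough hand-written sketch of the inward direction (names invented, not GHC output): before float-in the thunk for y is allocated whether or not b is True; afterwards it only exists on the branch that uses it.

    f :: Bool -> Int -> Int
    f b x = let y = x * x + x          -- allocated on both branches
            in if b then y + 1 else 0

    -- roughly what float-in produces:
    f' :: Bool -> Int -> Int
    f' b x = if b then (let y = x * x + x in y + 1)
                  else 0
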
-ffull-laziness
Default: on

Run the full laziness optimisation (also known as let-floating), which floats let-bindings outside enclosing lambdas, in the hope that they will thereby be computed less often. See Let-floating: moving bindings to give faster programs (ICFP’96). Full laziness increases sharing, which can lead to increased memory residency.

Note: GHC doesn’t implement complete full laziness. When optimisation is on, and -fno-full-laziness is not given, some transformations that increase sharing are performed, such as extracting repeated computations from a loop. These are the same transformations that a fully lazy implementation would do; the difference is that GHC doesn’t consistently apply full laziness, so don’t rely on it.

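A hand-written sketch of the kind of floating meant here (not actual GHC output): length xs does not depend on y, so it may be floated out of the lambda and computed once per call of g rather than once per element of ys.

    g :: [Int] -> [Int] -> [Int]
    g xs ys = map (\y -> y + length xs) ys

    -- roughly becomes:
    g' :: [Int] -> [Int] -> [Int]
    g' xs ys = let n = length xs       -- now shared across the whole map
               in map (\y -> y + n) ys
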
-ffun-to-thunk
Default: off

Worker-wrapper removes unused arguments, but usually we do not remove them all, lest it turn a function closure into a thunk, thereby perhaps creating a space leak and/or disrupting inlining. This flag allows worker/wrapper to remove all value lambdas.

-fignore-asserts
Default: on

Causes GHC to ignore uses of the function Exception.assert in source code (in other words, rewriting Exception.assert p e to e); see Assertions.

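For instance (an illustrative sketch with an invented function name):

    import Control.Exception (assert)

    -- With -fignore-asserts the emptiness check is rewritten away and only
    -- 'head xs' remains; without it, a False predicate raises an assertion
    -- failure that reports the source location of the call.
    firstElem :: [a] -> a
    firstElem xs = assert (not (null xs)) (head xs)
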
-fignore-interface-pragmas
Default: off

Tells GHC to ignore all inessential information when reading interface files. That is, even if M.hi contains unfolding or strictness information for a function, GHC will ignore that information.

-flate-dmd-anal
Default: off

Run demand analysis again, at the end of the simplification pipeline. We found some opportunities for discovering strictness that were not visible earlier; and optimisations like -fspec-constr can create functions with unused arguments which are eliminated by late demand analysis. Improvements are modest, but so is the cost. See notes on the Trac wiki page.

-fliberate-case
Default: off but enabled with -O2.

Turn on the liberate-case transformation. This unrolls a recursive function once in its own RHS, to avoid repeated case analysis of free variables. It’s a bit like the call-pattern specialiser (-fspec-constr) but for free variables rather than arguments.

-fliberate-case-threshold=⟨n⟩
Default: 2000

Set the size threshold for the liberate-case transformation.

-floopification
Default: on

When this optimisation is enabled the code generator will turn all self-recursive saturated tail calls into local jumps rather than function calls.

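An illustrative example (invented here): go only ever calls itself saturated and in tail position, so the code generator can compile the recursion as a local jump, i.e. a loop.

    {-# LANGUAGE BangPatterns #-}

    sumTo :: Int -> Int
    sumTo n = go 0 1
      where
        -- self-recursive, saturated tail calls: a candidate for loopification
        go !acc i
          | i > n     = acc
          | otherwise = go (acc + i) (i + 1)
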
-fllvm-pass-vectors-in-regs
Default: on

Instructs GHC to use the platform’s native vector registers to pass vector arguments during function calls. As with all vector support, this requires -fllvm.

-fmax-inline-alloc-size=⟨n⟩
Default: 128

Set the maximum size of inline array allocations to ⟨n⟩ bytes. GHC will allocate non-pinned arrays of statically known size in the current nursery block if they’re no bigger than ⟨n⟩ bytes, ignoring GC overhead. This value should be quite a bit smaller than the block size (typically: 4096).

-fmax-inline-memcpy-insns=⟨n⟩
Default: 32

Inline memcpy calls if they would generate no more than ⟨n⟩ pseudo-instructions.

-fmax-inline-memset-insns=⟨n⟩
Default: 32

Inline memset calls if they would generate no more than ⟨n⟩ pseudo-instructions.

-fmax-relevant-binds=⟨n⟩
Default: 6

The type checker sometimes displays a fragment of the type environment in error messages, but only up to some maximum number, set by this flag. Turning it off with -fno-max-relevant-bindings gives an unlimited number. Syntactically top-level bindings are also usually excluded (since they may be numerous), but -fno-max-relevant-bindings includes them too.

-fmax-uncovered-patterns=⟨n⟩
Default: 4

Maximum number of unmatched patterns to be shown in warnings generated by -Wincomplete-patterns and -Wincomplete-uni-patterns.

-fmax-simplifier-iterations=⟨n⟩
Default: 4

Sets the maximal number of iterations for the simplifier.

-fmax-worker-args=⟨n⟩
Default: 10

If a worker has that many arguments, none will be unpacked anymore.

-fno-opt-coercion
Default: coercion optimisation enabled

Turn off the coercion optimiser.

-fno-pre-inlining
Default: pre-inlining enabled

Turn off pre-inlining.

-fno-state-hack
Default: state hack is enabled

Turn off the “state hack” whereby any lambda with a State# token as argument is considered to be single-entry, hence it is considered okay to inline things inside it. This can improve performance of IO and ST monad code, but it runs the risk of reducing sharing.

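A sketch of the sharing risk (illustrative names; whether sharing is actually lost depends on the surrounding optimisations): act is built once but run twice. Under the state hack the State# lambda inside the IO action is treated as one-shot, so the computation of x may be pushed inside it and repeated on every run; -fno-state-hack keeps it shared.

    slowLength :: Int -> Int
    slowLength n = length [1 .. n]

    mkAction :: Int -> IO ()
    mkAction n = let x = slowLength n   -- intended to be computed at most once
                 in print x

    main :: IO ()
    main = do
      let act = mkAction 10000000
      act   -- may recompute slowLength here ...
      act   -- ... and again here, under the state hack
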
-fomit-interface-pragmas
Default: Implied by -O0, otherwise off.

Tells GHC to omit all inessential information from the interface file generated for the module being compiled (say M). This means that a module importing M will see only the types of the functions that M exports, but not their unfoldings, strictness info, etc. Hence, for example, no function exported by M will be inlined into an importing module. The benefit is that modules that import M will need to be recompiled less often (only when M’s exports change their type, not when they change their implementation).

-fomit-yields
Default: yield points enabled

Tells GHC to omit heap checks when no allocation is being performed. While this improves binary sizes by about 5%, it also means that threads running in tight non-allocating loops will not get preempted in a timely fashion. If it is important to always be able to interrupt such threads, you should turn this optimisation off. Consider also recompiling all libraries with this optimisation turned off, if you need to guarantee interruptibility.

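An illustrative example of the kind of loop the caveat is about (invented here): under optimisation this typically compiles to a tight loop that allocates nothing, so with -fomit-yields a thread stuck in it has no heap-check points at which it could be preempted.

    spin :: Int -> Int
    spin 0 = 0
    spin n = spin (n - 1)   -- non-allocating, so no yield points with -fomit-yields
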
-fpedantic-bottoms
Default: off

Make GHC be more precise about its treatment of bottom (but see also -fno-state-hack). In particular, stop GHC eta-expanding through a case expression, which is good for performance, but bad if you are using seq on partial applications.

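An illustrative example (invented here): as written, f False is bottom, so the seq below should raise the error. If GHC eta-expands f through the case (the default behaviour), f False becomes a lambda, the seq succeeds, and the observable behaviour changes; -fpedantic-bottoms prevents that eta-expansion.

    f :: Bool -> Int -> Int
    f b = case b of
            True  -> \x -> x + 1
            False -> error "f: called with False"

    main :: IO ()
    main = print (f False `seq` ())
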
-fregs-graph
Default: off due to a performance regression bug (Trac #7679)

Only applies in combination with the native code generator. Use the graph colouring register allocator for register allocation in the native code generator. By default, GHC uses a simpler, faster linear register allocator. The downside is that the linear register allocator usually generates worse code.

Note that the graph colouring allocator is a bit experimental and may fail when faced with code with high register pressure (Trac #8657).

-fregs-iterative
Default: off

Only applies in combination with the native code generator. Use the iterative coalescing graph colouring register allocator for register allocation in the native code generator. This is the same register allocator as the -fregs-graph one but also enables iterative coalescing during register allocation.

-fsimplifier-phases=⟨n⟩
Default: 2

Set the number of phases for the simplifier. Ignored with -O0.

-fsimpl-tick-factor=⟨n⟩
Default: 100

GHC’s optimiser can diverge if you write rewrite rules (Rewrite rules) that don’t terminate, or (less satisfactorily) if you code up recursion through data types (Bugs in GHC). To avoid making the compiler fall into an infinite loop, the optimiser carries a “tick count” and stops inlining and applying rewrite rules when this count is exceeded. The limit is set as a multiple of the program size, so bigger programs get more ticks. The -fsimpl-tick-factor flag lets you change the multiplier. The default is 100; numbers larger than 100 give more ticks, and numbers smaller than 100 give fewer.

If the tick count expires, GHC summarises what simplifier steps it has done; you can use -ddump-simpl-stats to generate a much more detailed list. Usually that identifies the loop quite accurately, because some numbers are very large.

-fspec-constr
Default: off but enabled by -O2.

Turn on call-pattern specialisation; see Call-pattern specialisation for Haskell programs.

This optimisation specialises recursive functions according to their argument “shapes”. This is best explained by example, so consider:

    last :: [a] -> a
    last []       = error "last"
    last (x : []) = x
    last (x : xs) = last xs

In this code, once we pass the initial check for an empty list we know that in the recursive case this pattern match is redundant. As such -fspec-constr will transform the above code to:

    last :: [a] -> a
    last []       = error "last"
    last (x : xs) = last' x xs
      where
        last' x []       = x
        last' x (y : ys) = last' y ys

As well as avoiding unnecessary pattern matching it also helps avoid unnecessary allocation. This applies when an argument is strict in the recursive call to itself but not on the initial entry. A strict recursive branch of the function is created, similar to the above example.

It is also possible for library writers to instruct GHC to perform call-pattern specialisation extremely aggressively. This is necessary for some highly optimised libraries, where we may want to specialise regardless of the number of specialisations, or the size of the code. As an example, consider a simplified use-case from the vector library:

    import GHC.Types (SPEC(..))

    foldl :: (a -> b -> a) -> a -> Stream b -> a
    {-# INLINE foldl #-}
    foldl f z (Stream step s _) = foldl_loop SPEC z s
      where
        foldl_loop !sPEC z s = case step s of
                                 Yield x s' -> foldl_loop sPEC (f z x) s'
                                 Skip       -> foldl_loop sPEC z s
                                 Done       -> z

Here, after GHC inlines the body of foldl to a call site, it will perform call-pattern specialisation very aggressively on foldl_loop due to the use of SPEC in the argument of the loop body. SPEC from GHC.Types is specifically recognised by the compiler.

(NB: it is extremely important you use seq or a bang pattern on the SPEC argument!)

In particular, after inlining this will expose f to the loop body directly, allowing heavy specialisation over the recursive cases.

-fspec-constr-keen
Default: off

If this flag is on, call-pattern specialisation will specialise a call (f (Just x)) with an explicit constructor argument, even if the argument is not scrutinised in the body of the function. This is sometimes beneficial; e.g. the argument might be given to some other function that can itself be specialised.

-fspec-constr-count=⟨n⟩
Default: 3

Set the maximum number of specialisations that will be created for any one function by the SpecConstr transformation.

-fspec-constr-threshold=⟨n⟩
Default: 2000

Set the size threshold for the SpecConstr transformation.

-fspecialise
Default: on

Specialise each type-class-overloaded function defined in this module for the types at which it is called in this module. If -fcross-module-specialise is set, imported functions that have an INLINABLE pragma (INLINABLE pragma) will be specialised as well.

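As an illustrative sketch (invented names): sumSquares is overloaded on Num a and is called at type Double in this module, so with -fspecialise GHC generates, roughly, a monomorphic copy of sumSquares for Double with the dictionary argument removed.

    sumSquares :: Num a => [a] -> a
    sumSquares = foldr (\x acc -> x * x + acc) 0

    useIt :: [Double] -> Double
    useIt xs = sumSquares xs   -- this call drives a specialisation at Double
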
-fspecialise-aggressively
Default: off

By default only type class methods and methods marked INLINABLE or INLINE are specialised. This flag will specialise any overloaded function regardless of size if its unfolding is available. This flag is not included in any optimisation level as it can massively increase code size. It can be used in conjunction with -fexpose-all-unfoldings if you want to ensure all calls are specialised.

-fcross-module-specialise
Default: on

Specialise INLINABLE (INLINABLE pragma) type-class-overloaded functions imported from other modules for the types at which they are called in this module. Note that specialisation must be enabled (by -fspecialise) for this to have any effect.

-flate-specialise
Default: off

Runs another specialisation pass towards the end of the optimisation pipeline. This can catch specialisation opportunities which arose from the previous specialisation pass or other inlining.

You might want to use this if you have a type class method which returns a constrained type. For example, a type class where one of the methods implements a traversal.

-fsolve-constant-dicts
Default: on

When solving constraints, try to eagerly solve super classes using available dictionaries.

For example:

    class M a b where m :: a -> b

    type C a b = (Num a, M a b)

    f :: C Int b => b -> Int -> Int
    f _ x = x + 1

The body of f requires a Num Int instance. We could solve this constraint from the context because we have C Int b and that provides us a solution for Num Int. However, we can often produce much better code by directly solving for an available Num Int dictionary we might have at hand. This removes potentially many layers of indirection and crucially allows other optimisations to fire as the dictionary will be statically known and selector functions can be inlined.

The optimisation also works for GADTs which bind dictionaries. If we statically know which class dictionary we need then we will solve it directly rather than indirectly using the one passed in at run time.

-fstatic-argument-transformation
Default: off

Turn on the static argument transformation, which turns a recursive function into a non-recursive one with a local recursive loop. See Chapter 7 of Andre Santos’s PhD thesis.

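A hand-written sketch of the transformation (not actual GHC output; names invented): f is a static argument of myMap, passed unchanged to every recursive call, so the transformation rewrites myMap as a non-recursive wrapper around a local recursive loop that no longer carries f.

    myMap :: (a -> b) -> [a] -> [b]
    myMap f []       = []
    myMap f (x : xs) = f x : myMap f xs

    -- roughly becomes:
    myMap' :: (a -> b) -> [a] -> [b]
    myMap' f = go
      where
        go []       = []
        go (x : xs) = f x : go xs
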
-fstrictness
Default: on

Switch on the strictness analyser. The implementation is described in the paper “Theory and Practice of Demand Analysis in Haskell” (https://www.microsoft.com/en-us/research/wp-content/uploads/2017/03/demand-jfp-draft.pdf).

The strictness analyser figures out when arguments and variables in a function can be treated ‘strictly’ (that is, they are always evaluated in the function at some point). This allows GHC to apply certain optimisations, such as unboxing, that otherwise don’t apply as they change the semantics of the program when applied to lazy arguments.

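For example (an illustrative sketch, invented here): go is strict in its accumulator acc, because if acc were bottom the result would be too, so the demand analyser records it as strict and worker/wrapper can then pass it unboxed instead of building a chain of (acc + x) thunks.

    sumList :: [Int] -> Int
    sumList = go 0
      where
        go acc []       = acc
        go acc (x : xs) = go (acc + x) xs
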
-fstrictness-before=⟨n⟩
Run an additional strictness analysis before simplifier phase ⟨n⟩.

-funbox-small-strict-fields
Default: on

This option causes all constructor fields which are marked strict (i.e. “!”) and whose representation is smaller than or equal to the size of a pointer to be unpacked, if possible. It is equivalent to adding an UNPACK pragma (see UNPACK pragma) to every strict constructor field that fulfils the size restriction.

For example, the constructor fields in the following data types

    data A = A !Int
    data B = B !A
    newtype C = C B
    data D = D !C

would all be represented by a single Int# (see Unboxed types and primitive operations) value with -funbox-small-strict-fields enabled.

This option is less of a sledgehammer than -funbox-strict-fields: it should rarely make things worse. If you use -funbox-small-strict-fields to turn on unboxing by default you can disable it for certain constructor fields using the NOUNPACK pragma (see NOUNPACK pragma).

Note that for consistency Double, Word64, and Int64 constructor fields are unpacked on 32-bit platforms, even though they are technically larger than a pointer on those platforms.

-funbox-strict-fields
Default: off

This option causes all constructor fields which are marked strict (i.e. !) to be unpacked if possible. It is equivalent to adding an UNPACK pragma to every strict constructor field (see UNPACK pragma).

This option is a bit of a sledgehammer: it might sometimes make things worse. Selectively unboxing fields by using UNPACK pragmas might be better. An alternative is to use -funbox-strict-fields to turn on unboxing by default but disable it for certain constructor fields using the NOUNPACK pragma (see NOUNPACK pragma).

Alternatively you can use -funbox-small-strict-fields to only unbox strict fields which are “small”.

-funfolding-creation-threshold=⟨n⟩
Default: 750

Governs the maximum size that GHC will allow a function unfolding to be. (An unfolding has a “size” that reflects the cost in terms of “code bloat” of expanding (aka inlining) that unfolding at a call site. A bigger function would be assigned a bigger cost.)

Consequences:

- nothing larger than this will be inlined (unless it has an INLINE pragma)
- nothing larger than this will be spewed into an interface file

Increasing this figure is more likely to result in longer compile times than faster code. The -funfolding-use-threshold=⟨n⟩ is more useful.

-funfolding-dict-discount=⟨n⟩
Default: 30

How eager should the compiler be to inline dictionaries?

-funfolding-fun-discount=⟨n⟩
Default: 60

How eager should the compiler be to inline functions?

-funfolding-keeness-factor=⟨n⟩
Default: 1.5

How eager should the compiler be to inline functions?

-funfolding-use-threshold=⟨n⟩
Default: 60

This is the magic cut-off figure for unfolding (aka inlining): below this size, a function definition will be unfolded at the call-site, any bigger and it won’t. The size computed for a function depends on two things: the actual size of the expression minus any discounts that apply depending on the context into which the expression is to be inlined.

The difference between this and -funfolding-creation-threshold=⟨n⟩ is that this one determines if a function definition will be inlined at a call site. The other option determines if a function definition will be kept around at all for potential inlining.