The -O* options specify convenient “packages” of optimisation flags; the -f* options described later on specify individual optimisations to be turned on/off; the -m* options specify machine-specific optimisations to be turned on/off.
There are many options that affect the quality of code produced by GHC. Most people only have a general goal, something like “Compile quickly” or “Make my program run like greased lightning.” The following “packages” of optimisations (or lack thereof) should suffice.
Note that higher optimisation levels cause more cross-module optimisation to be performed, which can have an impact on how much of your program needs to be recompiled when you change something. This is one reason to stick to no-optimisation when developing code.
No -O*-type option specified:
This is taken to mean: “Please compile quickly; I'm not over-bothered about compiled-code quality.” So, for example: ghc -c Foo.hs
-O0:
Means “turn off all optimisation”, reverting to the same settings as if no -O options had been specified. Saying -O0 can be useful if e.g. make has inserted a -O on the command line already.
-O or -O1:
Means: “Generate good-quality code without taking too long about it.” Thus, for example: ghc -c -O Main.lhs
-O2:
Means: “Apply every non-dangerous optimisation, even if it means significantly longer compile times.”
The avoided “dangerous” optimisations are those that can make runtime or space worse if you're unlucky. They are normally turned on or off individually.
At the moment, -O2 is unlikely to produce better code than -O.
-Ofile <file>:
(NOTE: not supported since GHC 4.x. Please ask if you're interested in this.)
For those who need absolute control over exactly what options are used (e.g., compiler writers, sometimes :-), a list of options can be put in a file and then slurped in with -Ofile.
In that file, comments are of the #-to-end-of-line variety; blank lines and most whitespace are ignored.
Please ask if you are baffled and would like an example of -Ofile!
We don't use a -O* flag for day-to-day work. We use -O to get respectable speed; e.g., when we want to measure something. When we want to go for broke, we tend to use -O2 -fvia-C (and we go for lots of coffee breaks).
The easiest way to see what -O (etc.) “really mean” is to run with -v, then stand back in amazement.
These flags turn on and off individual optimisations. They are normally set via the -O options described above, and as such, you shouldn't need to set any of them explicitly (indeed, doing so could lead to unexpected results). However, there are one or two that may be of interest:
-fexcess-precision:
When this option is given, intermediate floating-point values can have a greater precision/range than the final type. Generally this is a good thing, but some programs may rely on the exact precision/range of Float/Double values and should not use this option for their compilation.
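As a hedged illustration (the exact behaviour depends on the platform and code generator), the following comparison can change its answer when intermediates are kept at extended precision:

    -- Sketch: with strict Double rounding, (1 + 1e16) rounds back to
    -- 1e16 and the result is 0.0; if the intermediate sum is kept in an
    -- extended-precision register it may instead be 1.0.  Code that
    -- relies on one particular answer should avoid -fexcess-precision.
    roundTripped :: Double -> Double
    roundTripped x = (x + 1e16) - 1e16

    main :: IO ()
    main = print (roundTripped 1)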
-fignore-asserts:
Causes GHC to ignore uses of the function Exception.assert in source code (in other words, rewriting Exception.assert p e to e; see Section 8.11, “Assertions”). This flag is turned on by -O.
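A minimal sketch of what gets dropped (safeDiv is just an illustrative name):

    import Control.Exception (assert)

    -- assert tests its first argument and aborts with a source-location
    -- message if it is False.  With -fignore-asserts (or -O) the call is
    -- rewritten to its second argument, so the check vanishes entirely.
    safeDiv :: Int -> Int -> Int
    safeDiv x y = assert (y /= 0) (x `div` y)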
-fno-cse:
Turns off the common-sub-expression elimination optimisation. Can be useful if you have some unsafePerformIO expressions that you don't want commoned-up.
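For example (a sketch of the usual global-variable idiom; the names are illustrative), without -fno-cse the two textually identical right-hand sides below could be commoned up, leaving both names pointing at a single IORef:

    import Data.IORef (IORef, newIORef)
    import System.IO.Unsafe (unsafePerformIO)

    -- Two distinct mutable counters are intended here.  CSE could merge
    -- the identical unsafePerformIO (newIORef 0) expressions; -fno-cse
    -- (together with NOINLINE) keeps them separate.
    {-# NOINLINE counterA #-}
    counterA :: IORef Int
    counterA = unsafePerformIO (newIORef 0)

    {-# NOINLINE counterB #-}
    counterB :: IORef Int
    counterB = unsafePerformIO (newIORef 0)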
-fno-strictness:
Turns off the strictness analyser; sometimes it eats too many cycles.
-fno-full-laziness:
Turns off the full laziness optimisation (also known as let-floating). Full laziness increases sharing, which can lead to increased memory residency.
NOTE: GHC doesn't implement complete full laziness. When optimisation is on, and -fno-full-laziness is not given, some transformations that increase sharing are performed, such as extracting repeated computations from a loop. These are the same transformations that a fully lazy implementation would do; the difference is that GHC doesn't consistently apply full laziness, so don't rely on it.
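A sketch of the kind of code that is affected (the names are illustrative):

    -- 'table' does not mention 'x', so full laziness may float it out of
    -- the \x lambda and compute it once, shared by every call to 'f'.
    -- That saves recomputation but keeps the whole list alive for as
    -- long as 'f' is reachable, increasing memory residency.
    f :: Int -> Int
    f x = table !! x
      where
        table = map (\n -> n * n) [0 .. 100000]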
-fno-state-hack:
Turns off the “state hack” whereby any lambda with a State# token as argument is considered to be single-entry, hence it is considered OK to inline things inside it. This can improve performance of IO and ST monad code, but it runs the risk of reducing sharing.
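A rough sketch of when this matters (illustrative names; whether the recomputation actually happens depends on what the optimiser does with the code):

    -- 'act' is one IO action run many times.  Under the state hack the
    -- lambda over the hidden State# token is assumed single-entry, so
    -- the expensive sum may be pushed inside it and recomputed on every
    -- run; -fno-state-hack makes it more likely to be computed once.
    main :: IO ()
    main = do
      let act = return (sum [1 .. 1000000 :: Integer]) :: IO Integer
      mapM_ (\_ -> act >>= print) [1 .. 10 :: Int]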
-fomit-interface-pragmas:
Tells GHC to omit all inessential information from the interface file generated for the module being compiled (say M). This means that a module importing M will see only the types of the functions that M exports, but not their unfoldings, strictness info, etc. Hence, for example, no function exported by M will be inlined into an importing module. The benefit is that modules that import M will need to be recompiled less often (only when M's exports change their type, not when they change their implementation).
-fignore-interface-pragmas:
Tells GHC to ignore all inessential information when reading interface files. That is, even if M.hi contains unfolding or strictness information for a function, GHC will ignore that information.
-funbox-strict-fields:
This option causes all constructor fields which are marked strict (i.e. “!”) to be unboxed or unpacked if possible. It is equivalent to adding an UNPACK pragma to every strict constructor field (see Section 8.12.10, “UNPACK pragma”).
This option is a bit of a sledgehammer: it might sometimes make things worse. Selectively unboxing fields by using UNPACK pragmas might be better.
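For example (illustrative types), these two declarations behave the same when the module is compiled with -funbox-strict-fields:

    -- With -funbox-strict-fields every strict ('!') field is unpacked
    -- where possible, so P1 stores its Int and Double unboxed.
    data P1 = P1 !Int !Double

    -- Without the flag, the same layout can be requested field by field
    -- with explicit UNPACK pragmas, which is often the safer choice.
    data P2 = P2 {-# UNPACK #-} !Int {-# UNPACK #-} !Double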
-funfolding-creation-threshold=n:
(Default: 45) Governs the maximum size that GHC will allow a function unfolding to be. (An unfolding has a “size” that reflects the cost in terms of “code bloat” of expanding that unfolding at a call site. A bigger function would be assigned a bigger cost.)
Consequences: (a) nothing larger than this will be inlined (unless it has an INLINE pragma); (b) nothing larger than this will be spewed into an interface file.
Increasing this figure is more likely to result in longer compile times than faster code. The next option is more useful:
-funfolding-use-threshold=n:
(Default: 8) This is the magic cut-off figure for unfolding: below this size, a function definition will be unfolded at the call-site; any bigger and it won't. The size computed for a function depends on two things: the actual size of the expression minus any discounts that apply (see -funfolding-con-discount).
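As noted above, an INLINE pragma takes precedence over these size limits; a small sketch (the function itself is purely illustrative):

    -- A definition carrying an INLINE pragma keeps its unfolding in the
    -- interface file and can be inlined at call sites even if its size
    -- exceeds the -funfolding-creation-threshold /
    -- -funfolding-use-threshold limits.
    {-# INLINE dot3 #-}
    dot3 :: Double -> Double -> Double
         -> Double -> Double -> Double -> Double
    dot3 a b c x y z = a * x + b * y + c * z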