4.14. Running a compiled program

To make an executable program, the GHC system compiles your code and then links it with a non-trivial runtime system (RTS), which handles storage management, profiling, etc.

You have some control over the behaviour of the RTS, by giving special command-line arguments to your program.

When your Haskell program starts up, its RTS extracts command-line arguments bracketed between +RTS and -RTS as its own. For example:

% ./a.out -f +RTS -p -S -RTS -h foo bar

The RTS will snaffle -p -S for itself, and the remaining arguments -f -h foo bar will be handed to your program if/when it calls System.getArgs.
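
For instance, the following minimal sketch (an illustrative program, not part of GHC) prints exactly the arguments that survive RTS processing; run as in the example above, it should print ["-f","-h","foo","bar"]:

  -- Print the command-line arguments the program itself receives.
  -- The RTS has already removed +RTS -p -S -RTS before we get here.
  module Main where

  import System.Environment (getArgs)  -- the hierarchical name for System.getArgs

  main :: IO ()
  main = getArgs >>= print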

No -RTS option is required if the runtime-system options extend to the end of the command line, as in this example:

% hls -ltr /usr/etc +RTS -A5m

If you absolutely positively want all the rest of the options in a command line to go to the program (and not the RTS), use a --RTS.

As always, for RTS options that take sizes: if the last character of size is a K or k, multiply by 1000; if an M or m, by 1,000,000; if a G or g, by 1,000,000,000. (And any wraparound in the counters is your fault!)

Giving a +RTS -f option will print out the RTS options actually available in your program (which vary, depending on how you compiled).

NOTE: since GHC is itself compiled by GHC, you can change RTS options in the compiler using the normal +RTS ... -RTS combination. For example, to increase the maximum heap size for a compilation to 128M, you would add +RTS -M128m -RTS to the command line.

4.14.1. Setting global RTS options

RTS options are also taken from the environment variable GHCRTS. For example, to set the maximum heap size to 128M for all GHC-compiled programs (using an sh-like shell):

   GHCRTS='-M128m'
   export GHCRTS

RTS options taken from the GHCRTS environment variable can be overridden by options given on the command line.
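
For example, with GHCRTS set to '-M128m' as above, running

% ./a.out +RTS -M256m -RTS

would use a maximum heap size of 256M for that run, because the command-line flag overrides the GHCRTS setting.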

4.14.2. Miscellaneous RTS options

-Vsecs

Sets the interval that the RTS clock ticks at. The runtime uses a single timer signal to count ticks; this timer signal is used to control the context switch timer (Section 4.11, “Using Concurrent Haskell”) and the heap profiling timer (Section 5.4.1, “RTS options for heap profiling”). Also, the time profiler uses the RTS timer signal directly to record time profiling samples.

Normally, setting the -V option directly is not necessary: the resolution of the RTS timer is adjusted automatically if a short interval is requested with the -C or -i options. However, setting -V is required in order to increase the resolution of the time profiler.

Using a value of zero disables the RTS clock completely, and has the effect of disabling timers that depend on it: the context switch timer and the heap profiling timer. Context switches will still happen, but deterministically and at a rate much faster than normal. Disabling the interval timer is useful for debugging, because it eliminates a source of non-determinism at runtime.

--install-signal-handlers=yes|no

If yes (the default), the RTS installs signal handlers to catch things like ctrl-C. This option is primarily useful when you are using the Haskell code as a DLL and want to set your own signal handlers.
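
For example, if you pass +RTS --install-signal-handlers=no -RTS but still want to react to ctrl-C from the Haskell side on a POSIX system, a minimal sketch using the unix package might look like this (the handler body is purely illustrative):

  module Main where

  import Control.Concurrent (threadDelay)
  import System.Posix.Signals (Handler (Catch), installHandler, sigINT)

  main :: IO ()
  main = do
    -- Install our own handler for SIGINT rather than relying on the RTS's.
    _ <- installHandler sigINT (Catch (putStrLn "caught SIGINT")) Nothing
    -- Keep the program alive for a while so there is time to press ctrl-C.
    threadDelay 10000000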

-xmaddress

WARNING: this option is for working around memory allocation problems only. Do not use unless GHCi fails with a message like “failed to mmap() memory below 2Gb”. If you need to use this option to get GHCi working on your machine, please file a bug.

On 64-bit machines, the RTS needs to allocate memory in the low 2Gb of the address space. Support for this across different operating systems is patchy, and sometimes fails. This option is there to give the RTS a hint about where it should be able to allocate memory in the low 2Gb of the address space. For example, +RTS -xm20000000 -RTS would hint that the RTS should allocate starting at the 0.5Gb mark. The default is to use the OS's built-in support for allocating memory in the low 2Gb if available (e.g. mmap with MAP_32BIT on Linux), or otherwise -xm40000000.

4.14.3. RTS options to control the garbage collector

There are several options to give you precise control over garbage collection. Hopefully, you won't need any of these in normal operation, but there are several things that can be tweaked for maximum performance.

-Asize

[Default: 256k] Set the allocation area size used by the garbage collector. The allocation area (actually generation 0 step 0) is fixed and is never resized (unless you use -H, below).

Increasing the allocation area size may or may not give better performance (a bigger allocation area means worse cache behaviour but fewer garbage collections and less promotion).

With only 1 generation (-G1) the -A option specifies the minimum allocation area, since the allocation area will be resized according to the amount of data in the heap (see -F, below).

-c

Use a compacting algorithm for collecting the oldest generation. By default, the oldest generation is collected using a copying algorithm; this option causes it to be compacted in-place instead. The compaction algorithm is slower than the copying algorithm, but the savings in memory use can be considerable.

For a given heap size (using the -H option), compaction can in fact reduce the GC cost by allowing fewer GCs to be performed. This is more likely when the ratio of live data to heap size is high, say >30%.

NOTE: compaction doesn't currently work when a single generation is requested using the -G1 option.

-cn

[Default: 30] Automatically enable compacting collection when the live data exceeds n% of the maximum heap size (see the -M option). Note that the maximum heap size is unlimited by default, so this option has no effect unless the maximum heap size is set with -Msize.

-Ffactor

[Default: 2] This option controls the amount of memory reserved for the older generations (and in the case of a two space collector the size of the allocation area) as a factor of the amount of live data. For example, if there was 2M of live data in the oldest generation when we last collected it, then by default we'll wait until it grows to 4M before collecting it again.

The default seems to work well here. If you have plenty of memory, it is usually better to use -Hsize than to increase -Ffactor.

The -F setting will be automatically reduced by the garbage collector as the maximum heap size (the -Msize setting) is approached.

-Ggenerations

[Default: 2] Set the number of generations used by the garbage collector. The default of 2 seems to be good, but the garbage collector can support any number of generations. Anything larger than about 4 is probably not a good idea unless your program runs for a long time, because the oldest generation will hardly ever get collected.

Specifying 1 generation with +RTS -G1 gives you a simple 2-space collector, as you would expect. In a 2-space collector, the -A option (see above) specifies the minimum allocation area size, since the allocation area will grow with the amount of live data in the heap. In a multi-generational collector the allocation area is a fixed size (unless you use the -H option, see below).

-gthreads

[Default: 1] [new in GHC 6.10] Set the number of threads to use for garbage collection. This option is only accepted when the program was linked with the -threaded option; see Section 4.10.7, “Options affecting linking”.

The garbage collector is able to work in parallel when given more than one OS thread. Experiments have shown that this usually results in a performance improvement given 3 cores or more; with 2 cores it may or may not be beneficial, depending on the workload. Bigger heaps work better with parallel GC, so set your -H value high (3 or more times the maximum residency). Look at the timing stats with +RTS -s to see whether you're getting any benefit from parallel GC or not. If you find parallel GC is significantly slower (in elapsed time) than sequential GC, please report it as a bug.

This value is set automatically when the -N option is used, so the only reason to use -g would be if you wanted to use a different number of threads for GC than for execution. For example, if your program is strictly single-threaded but you still want to benefit from parallel GC, then it might make sense to use -g rather than -N.

-Hsize

[Default: 0] This option provides a “suggested heap size” for the garbage collector. The garbage collector will use about this much memory until the program residency grows and the heap size needs to be expanded to retain reasonable performance.

By default, the heap will start small, and grow and shrink as necessary. This can be bad for performance, so if you have plenty of memory it's worthwhile supplying a big -Hsize. For improving GC performance, using -Hsize is usually a better bet than -Asize.

-Iseconds

[Default: 0.3] In the threaded and SMP versions of the RTS (see -threaded, Section 4.10.7, “Options affecting linking”), a major GC is automatically performed if the runtime has been idle (no Haskell computation has been running) for a period of time. The amount of idle time which must pass before a GC is performed is set by the -Iseconds option. Specifying -I0 disables the idle GC.

For an interactive application, it is probably a good idea to use the idle GC, because this will allow finalizers to run and deadlocked threads to be detected in the idle time when no Haskell computation is happening. Also, it will mean that a GC is less likely to happen when the application is busy, and so responsiveness may be improved. However, if the amount of live data in the heap is particularly large, then the idle GC can cause a significant delay, and too small an interval could adversely affect interactive responsiveness.

This is an experimental feature, please let us know if it causes problems and/or could benefit from further tuning.

-ksize

[Default: 1k] Set the initial stack size for new threads. Thread stacks (including the main thread's stack) live on the heap, and grow as required. The default value is good for concurrent applications with lots of small threads; if your program doesn't fit this model then increasing this option may help performance.

The main thread is normally started with a slightly larger stack to cut down on unnecessary stack growth while the program is starting up.

-Ksize

[Default: 8M] Set the maximum stack size for an individual thread to size bytes. This option is there purely to stop the program eating up all the available memory in the machine if it gets into an infinite loop.

-mn

Minimum percentage n of the heap which must be available for allocation. The default is 3%.

-Msize

[Default: unlimited] Set the maximum heap size to size bytes. The heap normally grows and shrinks according to the memory requirements of the program. The only reason for having this option is to stop the heap growing without bound and filling up all the available swap space, which at the least will result in the program being summarily killed by the operating system.

The maximum heap size also affects other garbage collection parameters: when the amount of live data in the heap exceeds a certain fraction of the maximum heap size, compacting collection will be automatically enabled for the oldest generation, and the -F parameter will be reduced in order to avoid exceeding the maximum heap size.

-t[file] , -s[file] , -S[file] , --machine-readable

These options produce runtime-system statistics, such as the amount of time spent executing the program and in the garbage collector, the amount of memory allocated, the maximum size of the heap, and so on. The three variants give different levels of detail: -t produces a single line of output in the same format as GHC's -Rghc-timing option, -s produces a more detailed summary at the end of the program, and -S additionally produces information about each and every garbage collection.

The output is placed in file. If file is omitted, then the output is sent to stderr.

If you use the -t flag then, when your program finishes, you will see something like this:

<<ghc: 36169392 bytes, 69 GCs, 603392/1065272 avg/max bytes residency (2 samples), 3M in use, 0.00 INIT (0.00 elapsed), 0.02 MUT (0.02 elapsed), 0.07 GC (0.07 elapsed) :ghc>>

This tells you:

  • The total bytes allocated by the program over the whole run. This is a cumulative figure: memory that is freed by the garbage collector and then reused is counted each time it is allocated, so the total can be much larger than the peak memory use.

  • The total number of garbage collections that occurred.

  • The average and maximum space used by your program. This is only checked during major garbage collections, so it is only an approximation; the number of samples tells you how many times it is checked.

  • The peak memory the RTS has allocated from the OS.

  • The amount of CPU time and elapsed wall clock time while initialising the runtime system (INIT), running the program itself (MUT, the mutator), and garbage collecting (GC).

You can also get this in a more future-proof, machine readable format, with -t --machine-readable:

 [("bytes allocated", "36169392")
 ,("num_GCs", "69")
 ,("average_bytes_used", "603392")
 ,("max_bytes_used", "1065272")
 ,("num_byte_usage_samples", "2")
 ,("peak_megabytes_allocated", "3")
 ,("init_cpu_seconds", "0.00")
 ,("init_wall_seconds", "0.00")
 ,("mutator_cpu_seconds", "0.02")
 ,("mutator_wall_seconds", "0.02")
 ,("GC_cpu_seconds", "0.07")
 ,("GC_wall_seconds", "0.07")
 ]
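
Since this output is a valid Haskell expression, it is easy to post-process. The following minimal sketch assumes the statistics were written to a file, for example by running the program with +RTS -tstats.txt --machine-readable -RTS (the file name is just a placeholder):

  module Main where

  -- Read the machine-readable statistics back in as an association list
  -- and look up a couple of the fields shown above.
  main :: IO ()
  main = do
    s <- readFile "stats.txt"
    let stats = read s :: [(String, String)]
    print (lookup "max_bytes_used" stats)
    print (lookup "GC_cpu_seconds" stats)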

If you use the -s flag then, when your program finishes, you will see something like this (the exact details will vary depending on what sort of RTS you have, e.g. you will only see profiling data if your RTS is compiled for profiling):

      36,169,392 bytes allocated in the heap
       4,057,632 bytes copied during GC
       1,065,272 bytes maximum residency (2 sample(s))
          54,312 bytes maximum slop
               3 MB total memory in use (0 MB lost due to fragmentation)

  Generation 0:    67 collections,     0 parallel,  0.04s,  0.03s elapsed
  Generation 1:     2 collections,     0 parallel,  0.03s,  0.04s elapsed

  INIT  time    0.00s  (  0.00s elapsed)
  MUT   time    0.01s  (  0.02s elapsed)
  GC    time    0.07s  (  0.07s elapsed)
  EXIT  time    0.00s  (  0.00s elapsed)
  Total time    0.08s  (  0.09s elapsed)

  %GC time      89.5%  (75.3% elapsed)

  Alloc rate    4,520,608,923 bytes per MUT second

  Productivity  10.5% of total user, 9.1% of total elapsed
  • The "bytes allocated in the heap" is the total bytes allocated by the program. This may be less than the peak memory use, as some may be freed.

  • GHC uses a copying garbage collector. "bytes copied during GC" tells you how many bytes it had to copy during garbage collection.

  • The maximum space actually used by your program is the "bytes maximum residency" figure. This is only checked during major garbage collections, so it is only an approximation; the number of samples tells you how many times it is checked.

  • The "bytes maximum slop" tells you the most space that is ever wasted due to the way GHC packs data into so-called "megablocks".

  • The "total memory in use" tells you the peak memory the RTS has allocated from the OS.

  • Next there is information about the garbage collections done. For each generation it says how many garbage collections were done, how many of those collections used multiple threads, the total CPU time used for garbage collecting that generation, and the total wall clock time elapsed while garbage collecting that generation.

  • Next there is the CPU time and wall clock time elapsed, broken down by what the runtime system was doing at the time. INIT is the runtime system initialisation. MUT is the mutator time, i.e. the time spent actually running your code. GC is the time spent doing garbage collection. RP is the time spent doing retainer profiling. PROF is the time spent doing other profiling. EXIT is the runtime system shutdown time. And finally, Total is, of course, the total.

    %GC time tells you what percentage GC is of Total. "Alloc rate" tells you the "bytes allocated in the heap" divided by the MUT CPU time. "Productivity" tells you what percentage of the Total CPU and wall clock elapsed times are spent in the mutator (MUT).

The -S flag, as well as giving the same output as the -s flag, prints information about each GC as it happens:

    Alloc    Copied     Live    GC    GC     TOT     TOT  Page Flts
    bytes     bytes     bytes  user  elap    user    elap
   528496     47728    141512  0.01  0.02    0.02    0.02    0    0  (Gen:  1)
[...]
   524944    175944   1726384  0.00  0.00    0.08    0.11    0    0  (Gen:  0)

For each garbage collection, we print:

  • How many bytes we allocated this garbage collection.

  • How many bytes we copied this garbage collection.

  • How many bytes are currently live.

  • How long this garbage collection took (CPU time and elapsed wall clock time).

  • How long the program has been running (CPU time and elapsed wall clock time).

  • How many page faults occurred during this garbage collection.

  • How many page faults occurred since the end of the last garbage collection.

  • Which generation is being garbage collected.

4.14.4. RTS options for concurrency and parallelism

The RTS options related to concurrency are described in Section 4.11, “Using Concurrent Haskell”, and those for parallelism in Section 4.12.1, “Options for SMP parallelism”.

4.14.5. RTS options for profiling

Most profiling runtime options are only available when you compile your program for profiling (see Section 5.2, “Compiler options for profiling”, and Section 5.4.1, “RTS options for heap profiling” for the runtime options). However, there is one profiling option that is available for ordinary non-profiled executables:

-hT

Generates a basic heap profile, in the file prog.hp. To produce the heap profile graph, use hp2ps (see Section 5.5, “hp2ps––heap profile to PostScript”). The basic heap profile is broken down by data constructor, with other types of closures (functions, thunks, etc.) grouped into broad categories (e.g. FUN, THUNK). To get a more detailed profile, use the full profiling support (Chapter 5, Profiling).
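
For example, if the program is called prog, something along these lines should produce a PostScript graph of the basic heap profile in prog.ps:

% ./prog +RTS -hT -RTS
% hp2ps prog.hp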

4.14.6. RTS options for hackers, debuggers, and over-interested souls

These RTS options might be used (a) to avoid a GHC bug, (b) to see “what's really happening”, or (c) because you feel like it. Not recommended for everyday use!

-B

Sound the bell at the start of each (major) garbage collection.

Oddly enough, people really do use this option! Our pal in Durham (England), Paul Callaghan, writes: “Some people here use it for a variety of purposes—honestly!—e.g., confirmation that the code/machine is doing something, infinite loop detection, gauging cost of recently added code. Certain people can even tell what stage [the program] is in by the beep pattern. But the major use is for annoying others in the same office…”

-Dnum

An RTS debugging flag; varying quantities of output depending on which bits are set in num. Only works if the RTS was compiled with the DEBUG option.

-rfile

Produce “ticky-ticky” statistics at the end of the program run. The file business works just like on the -S RTS option (above).

“Ticky-ticky” statistics are counts of various program actions (updates, enters, etc.) The program must have been compiled using -ticky (a.k.a. “ticky-ticky profiling”), and, for it to be really useful, linked with suitable system libraries. Not a trivial undertaking: consult the installation guide on how to set things up for easy “ticky-ticky” profiling. For more information, see Section 5.7, “Using “ticky-ticky” profiling (for implementors)”.

-xc

(Only available when the program is compiled for profiling.) When an exception is raised in the program, this option causes the current cost-centre-stack to be dumped to stderr.

This can be particularly useful for debugging: if your program is complaining about a head [] error and you haven't got a clue which bit of code is causing it, compiling with -prof -auto-all and running with +RTS -xc -RTS will tell you exactly the call stack at the point the error was raised.

The output contains one line for each exception raised in the program (the program might raise and catch several exceptions during its execution), where each line is of the form:

< cc1, ..., ccn >

Each cci is a cost centre in the program (see Section 5.1, “Cost centres and cost-centre stacks”), and the sequence represents the “call stack” at the point the exception was raised. The leftmost item is the innermost function in the call stack, and the rightmost item is the outermost function.
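
As an illustration, consider the following deliberately broken sketch (a made-up program, used here only to show the output):

  module Main where

  firstNumber :: [Int] -> Int
  firstNumber xs = head xs      -- fails when xs is empty

  main :: IO ()
  main = print (firstNumber [])

Compiled with -prof -auto-all and run with +RTS -xc -RTS, this program should print a cost-centre stack along the lines of < Main.firstNumber, Main.main, ... > to stderr before the usual empty-list error from head, pointing straight at firstNumber.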

-Z

Turn off “update-frame squeezing” at garbage-collection time. (There's no particularly good reason to turn it off, except to ensure the accuracy of certain data collected regarding thunk entry counts.)

4.14.7. “Hooks” to change RTS behaviour

GHC lets you exercise rudimentary control over the RTS settings for any given program, by compiling in a “hook” that is called by the run-time system. The RTS contains stub definitions for all these hooks, but by writing your own version and linking it on the GHC command line, you can override the defaults.

Owing to the vagaries of DLL linking, these hooks don't work under Windows when the program is built dynamically.

The hook ghc_rts_opts lets you set RTS options permanently for a given program. A common use for this is to give your program a default heap and/or stack size that is greater than the default. For example, to set -H128m -K1m, place the following definition in a C source file:

char *ghc_rts_opts = "-H128m -K1m";

Compile the C file, and include the object file on the command line when you link your Haskell program.
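
For example, assuming the definition above is saved in a file called hooks.c (the file name is only an illustration), the build might look like:

% ghc -c hooks.c
% ghc Main.hs hooks.o -o myprog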

These flags are interpreted first, before any RTS flags from the GHCRTS environment variable and any flags on the command line.

You can also change the messages printed when the runtime system “blows up,” e.g., on stack overflow. The hooks for these are as follows:

void OutOfHeapHook (unsigned long, unsigned long)

The heap-overflow message.

void StackOverflowHook (long int)

The stack-overflow message.

void MallocFailHook (long int)

The message printed if malloc fails.

For examples of the use of these hooks, see GHC's own versions in the file ghc/compiler/parser/hschooks.c in a GHC source tree.

4.14.8. Getting information about the RTS

It is possible to ask the RTS to give some information about itself. To do this, use the --info flag, e.g.

$ ./a.out +RTS --info
 [("GHC RTS", "YES")
 ,("GHC version", "6.7")
 ,("RTS way", "rts_p")
 ,("Host platform", "x86_64-unknown-linux")
 ,("Host architecture", "x86_64")
 ,("Host OS", "linux")
 ,("Host vendor", "unknown")
 ,("Build platform", "x86_64-unknown-linux")
 ,("Build architecture", "x86_64")
 ,("Build OS", "linux")
 ,("Build vendor", "unknown")
 ,("Target platform", "x86_64-unknown-linux")
 ,("Target architecture", "x86_64")
 ,("Target OS", "linux")
 ,("Target vendor", "unknown")
 ,("Word size", "64")
 ,("Compiler unregisterised", "NO")
 ,("Tables next to code", "YES")
 ]
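
The information is formatted so that it can be read in as a Haskell value of type [(String, String)]. For example, the following minimal sketch captures and decodes it from another program (it assumes the process package, uses ./a.out as a placeholder executable name, and assumes that --info writes its table to standard output):

  module Main where

  import System.Process (readProcess)

  main :: IO ()
  main = do
    -- Run the program with +RTS --info and parse the resulting table.
    out <- readProcess "./a.out" ["+RTS", "--info"] ""
    let info = read out :: [(String, String)]
    print (lookup "RTS way" info)
    print (lookup "GHC version" info)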

Currently the following fields are present:

GHC RTS

Is this program linked against the GHC RTS? (always "YES").

GHC version

The version of GHC used to compile this program.

RTS way

The variant (“way”) of the runtime. The most common values are rts (vanilla), rts_thr (threaded runtime, i.e. linked using the -threaded option) and rts_p (profiling runtime, i.e. linked using the -prof option). Other variants include debug (linked using -debug), t (ticky-ticky profiling) and dyn (the RTS is linked in dynamically, i.e. a shared library, rather than statically linked into the executable itself). These can be combined, e.g. you might have rts_thr_debug_p.

Target platform, Target architecture, Target OS, Target vendor

These fields describe the platform that the program is compiled to run on.

Build platform, Build architecture, Build OS, Build vendor

These fields describe the platform on which the program was built (that is, the target platform of GHC itself). Ordinarily this is identical to the target platform, but it could differ when cross-compiling.

Host platform, Host architecture, Host OS, Host vendor

These fields describe the platform on which GHC itself was compiled. Again, this would normally be identical to the build and target platforms.

Word size

Either "32" or "64", reflecting the word size of the target platform.

Compiler unregisterised

Was this program compiled with an “unregisterised” version of GHC? (I.e., a version of GHC that has no platform-specific optimisations compiled in, usually because this is a currently unsupported platform.) This value will usually be "NO", unless you're using an experimental build of GHC.

Tables next to code

Putting info tables directly next to entry code is a useful performance optimisation that is not available on all platforms. This field tells you whether the program has been compiled with this optimisation. (Usually yes, except on unusual platforms.)