
5. GHC Language Features

As with all known Haskell systems, GHC implements some extensions to the language. To use them, you'll need to give the -fglasgow-exts option.

Virtually all of the Glasgow extensions serve to give you access to the underlying facilities with which we implement Haskell. Thus, you can get at the Raw Iron, if you are willing to write some non-standard code at a more primitive level. You need not be ``stuck'' on performance because of the implementation costs of Haskell's ``high-level'' features---you can always code ``under'' them. In an extreme case, you can write all your time-critical code in C, and then just glue it together with Haskell!

Executive summary of our extensions:

Unboxed types and primitive operations:

You can get right down to the raw machine types and operations; included in this are ``primitive arrays'' (direct access to Big Wads of Bytes). Please see Section Unboxed types and following.

Multi-parameter type classes:

GHC's type system supports extended type classes with multiple parameters. Please see Section Multi-parameter type classes.

Local universal quantification:

GHC's type system supports explicit universal quantification in constructor fields and function arguments. This is useful for things like defining runST from the state-thread world. See Section Local universal quantification.

Existential quantification in data types:

Some or all of the type variables in a datatype declaration may be existentially quantified. More details in Section Existential Quantification.

Scoped type variables:

Scoped type variables enable the programmer to supply type signatures for some nested declarations, where this would not be legal in Haskell 98. Details in Section Scoped Type Variables.

Calling out to C:

Just what it sounds like. We provide lots of rope that you can dangle around your neck. Please see Section Calling C directly from Haskell.

Pragmas:

Pragmas are special instructions to the compiler placed in the source file. The pragmas GHC supports are described in Section Pragmas.

Rewrite rules:

The programmer can specify rewrite rules as part of the source program (in a pragma). GHC applies these rewrite rules wherever it can. Details in Section Rewrite Rules.

Before you get too carried away working at the lowest level (e.g., sloshing MutableByteArray#s around your program), you may wish to check if there are system libraries that provide a ``Haskellised veneer'' over the features you want. See Section GHC Prelude and libraries.

5.1 Unboxed types

These types correspond to the ``raw machine'' types you would use in C: Int# (long int), Double# (double), Addr# (void *), etc. The primitive operations (PrimOps) on these types are what you might expect; e.g., (+#) is addition on Int#s, and is the machine-addition that we all know and love---usually one instruction.

There are some restrictions on the use of unboxed types, the main one being that you can't pass an unboxed value to a polymorphic function or store one in a polymorphic data type. This rules out things like [Int#] (i.e., lists of unboxed integers). The reason for this restriction is that polymorphic arguments and constructor fields are assumed to be pointers: if an unboxed integer is stored in one of these, the garbage collector would attempt to follow it, leading to unpredictable space leaks. Or a seq operation on the polymorphic component may attempt to dereference the pointer, with disastrous results. Even worse, the unboxed value might be larger than a pointer (Double# for instance).

Nevertheless, a numerically-intensive program using unboxed types can go a lot faster than its ``standard'' counterpart---we saw a threefold speedup on one example.
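
As a small illustration, here is a minimal sketch of a function that works directly on Int#s. It assumes the GlaExts interface of this era (which exports the boxing constructor I# together with the primitive operations) and the -fglasgow-exts flag; the function name is purely illustrative.

import GlaExts

-- Unbox both arguments, do the arithmetic on raw Int#s, then rebox the result.
addMul :: Int -> Int -> Int
addMul (I# x) (I# y) = I# ((x +# y) *# y)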

Please see Section The module PrelGHC: really primitive stuff for the details of unboxed types and the operations on them.

5.2 Primitive state-transformer monad

This monad underlies our implementation of arrays, mutable and immutable, and our implementation of I/O, including ``C calls''.

The ST library, which provides access to the ST monad, is a GHC/Hugs extension library and is described in the separate GHC/Hugs Extension Libraries document.
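
As a minimal sketch of the kind of code the ST monad supports (assuming the STRef operations newSTRef, readSTRef and writeSTRef exported by that library; the function name is illustrative):

import ST

-- Sum a list using an in-place mutable accumulator, then seal the
-- state thread with runST to obtain a pure result.
sumST :: [Int] -> Int
sumST xs = runST (do
    acc <- newSTRef 0
    mapM_ (\x -> readSTRef acc >>= \v -> writeSTRef acc (v + x)) xs
    readSTRef acc)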

5.3 Primitive arrays, mutable and otherwise

GHC knows about quite a few flavours of Large Swathes of Bytes.

First, GHC distinguishes between primitive arrays of (boxed) Haskell objects (type Array# obj) and primitive arrays of bytes (type ByteArray#).

Second, it distinguishes between...

Immutable:

Arrays that do not change (as with ``standard'' Haskell arrays); you can only read from them. Obviously, they do not need the care and attention of the state-transformer monad.

Mutable:

Arrays that may be changed or ``mutated.'' All the operations on them live within the state-transformer monad and the updates happen in-place.

``Static'' (in C land):

A C routine may pass an Addr# pointer back into Haskell land. There are then primitive operations with which you may merrily grab values over in C land, by indexing off the ``static'' pointer.

``Stable'' pointers:

If, for some reason, you wish to hand a Haskell pointer (i.e., not an unboxed value) to a C routine, you first make the pointer ``stable,'' so that the garbage collector won't forget that it exists. That is, GHC provides a safe way to pass Haskell pointers to C.

Please see Section Subverting automatic unboxing with ``stable pointers'' for more details.

``Foreign objects'':

A ``foreign object'' is a safe way to pass an external object (a C-allocated pointer, say) to Haskell and have Haskell do the Right Thing when it no longer references the object. So, for example, C could pass a large bitmap over to Haskell and say ``please free this memory when you're done with it.''

Please see Section Pointing outside the Haskell heap for more details.

Section The GHC Prelude and Libraries gives more details on all these ``primitive array'' types and the operations on them. Some of these extensions are also supported by Hugs, and the supporting libraries are described in the GHC/Hugs Extension Libraries document.

5.4 Calling C directly from Haskell

GOOD ADVICE: Because this stuff is not Entirely Stable as far as names and things go, you would be well-advised to keep your C-callery corralled in a few modules, rather than sprinkled all over your code. It will then be quite easy to update later on.

_ccall_ and _casm_: an introduction

The simplest way to use a simple C function

double fooC( FILE *in, char c, int i, double d, unsigned int u )

is to provide a Haskell wrapper:

fooH :: Char -> Int -> Double -> Word -> IO Double
fooH c i d w = _ccall_ fooC (``stdin''::Addr) c i d w

The function fooH will unbox all of its arguments, call the C function fooC and box the corresponding return value.

One of the annoyances about _ccall_s is when the C types don't quite match the Haskell compiler's ideas. For this, the _casm_ variant may be just the ticket (NB: no chance of such code going through a native-code generator):

import Addr
import CString

oldGetEnv name
  = _casm_ ``%r = getenv((char *) %0);'' name >>= \ litstring ->
    return (
        if (litstring == nullAddr) then
            Left ("Fail:oldGetEnv:"++name)
        else
            Right (unpackCString litstring)
    )

The first literal-literal argument to a _casm_ is like a printf format: %r is replaced with the ``result,'' %0--%n-1 are replaced with the 1st--nth arguments. As you can see above, it is an easy way to do simple C casting. Everything said about _ccall_ goes for _casm_ as well.

The use of _casm_ in your code does pose a problem to the compiler when it comes to generating an interface file for a freshly compiled module. Included in an interface file is the unfolding (if any) of a declaration. However, if a declaration's unfolding happens to contain a _casm_, its unfolding will not be emitted into the interface file, even if it qualifies by all the other criteria. The reason the compiler prevents this from happening is that unfolding _casm_s into an interface file unduly constrains how code that imports your module has to be compiled. If an imported declaration is unfolded and it contains a _casm_, you now have to be using a compiler backend capable of dealing with it (i.e., the C compiler backend). Even if you are using the C compiler backend, the unfolded _casm_ may still cause you problems, since the C code snippet it contains may mention CPP symbols that were in scope when compiling the original module but are not when compiling the importing module.

If you're willing to put up with the drawbacks of doing cross-module inlining of C code (GHC - A Better C Compiler :-), the option -funfold-casms-in-hi-file will turn off the default behaviour.

Literal-literals

The literal-literal argument to _casm_ can be made use of separately from the _casm_ construct itself. Indeed, we've already used it:

fooH :: Char -> Int -> Double -> Word -> IO Double
fooH c i d w = _ccall_ fooC (``stdin''::Addr) c i d w

The first argument that's passed to fooC is given as a literal-literal, that is, a literal chunk of C code that will be inserted into the generated .hc code at the right place.

A literal-literal is restricted to having a type that's an instance of the CCallable class, see CCallable for more information.

Notice that literal-literals are by their very nature unfriendly to native code generators, so exercise judgement about whether or not to make use of them in your code.

Using function headers

When generating C (using the -fvia-C directive), one can assist the C compiler in detecting type errors by using the -#include directive to provide .h files containing function headers.

For example,

typedef unsigned long *StgForeignObj;
typedef long StgInt;

void          initialiseEFS (StgInt size);
StgInt        terminateEFS (void);
StgForeignObj emptyEFS(void);
StgForeignObj updateEFS (StgForeignObj a, StgInt i, StgInt x);
StgInt        lookupEFS (StgForeignObj a, StgInt i);

You can find appropriate definitions for StgInt, StgForeignObj, etc. (when using gcc on your architecture) by consulting ghc/includes/StgTypes.h. The following table summarises the relationship between Haskell types and C types.


  C type name     Haskell Type
  StgChar         Char#
  StgInt          Int#
  StgWord         Word#
  StgAddr         Addr#
  StgFloat        Float#
  StgDouble       Double#
  StgArray        Array#
  StgByteArray    ByteArray#
  StgArray        MutableArray#
  StgByteArray    MutableByteArray#
  StgStablePtr    StablePtr#
  StgForeignObj   ForeignObj#

Note that this approach is only essential for returning floats (or if sizeof(int) != sizeof(int *) on your architecture) but is a Good Thing for anyone who cares about writing solid code. You're crazy not to do it.
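
On the Haskell side, each of these C functions would typically be wrapped in a small _ccall_, as described earlier. Here is a minimal sketch for the two Int-only functions above (the Haskell names simply mirror the C names and are illustrative):

initialiseEFS :: Int -> IO ()
initialiseEFS size = _ccall_ initialiseEFS size

terminateEFS :: IO Int
terminateEFS = _ccall_ terminateEFS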

Subverting automatic unboxing with ``stable pointers''

The arguments of a _ccall_ are automatically unboxed before the call. There are two reasons why this is usually the Right Thing to do:

It is possible to subvert the unboxing process by creating a ``stable pointer'' to a value and passing the stable pointer instead. For example, to pass/return an integer lazily to C functions storeC and fetchC, one might write:

storeH :: Int -> IO ()
storeH x = makeStablePtr x              >>= \ stable_x ->
           _ccall_ storeC stable_x

fetchH :: IO Int
fetchH = _ccall_ fetchC                 >>= \ stable_x ->
           deRefStablePtr stable_x      >>= \ x ->
           freeStablePtr stable_x       >>
           return x

The garbage collector will refrain from throwing a stable pointer away until you explicitly call one of the following from C or Haskell.

void freeStablePointer( StgStablePtr stablePtrToToss )
freeStablePtr :: StablePtr a -> IO ()

As with the use of free in C programs, GREAT CARE SHOULD BE EXERCISED to ensure these functions are called at the right time: too early and you get dangling references (and, if you're lucky, an error message from the runtime system); too late and you get space leaks.

And to force evaluation of the argument within fooC, one would call one of the following C functions (according to the type of the argument).

void     performIO  ( StgStablePtr stableIndex /* StablePtr s (IO ()) */ );
StgInt   enterInt   ( StgStablePtr stableIndex /* StablePtr s Int */ );
StgFloat enterFloat ( StgStablePtr stableIndex /* StablePtr s Float */ );

Nota Bene: _ccall_GC_ must be used if any of these functions are used.

Foreign objects: pointing outside the Haskell heap

There are two types that ghc programs can use to reference (heap-allocated) objects outside the Haskell world: Addr and ForeignObj.

If you use Addr, it is up to you, the programmer, to arrange allocation and deallocation of the objects.

If you use ForeignObj, ghc's garbage collector will call upon the user-supplied finaliser function to free the object when the Haskell world can no longer access it. (An object is associated with a finaliser function when the abstract Haskell type ForeignObj is created.) The finaliser function is expressed in C and is passed the object as its argument:

void foreignFinaliser ( StgForeignObj fo )

Since ForeignObjs only get released when a garbage collection occurs, we provide ways of triggering a garbage collection from within C and from within Haskell.

void GarbageCollect()
performGC :: IO ()

More information on the programmers' interface to ForeignObj can be found in the library documentation.

Avoiding monads

The _ccall_ construct is part of the IO monad because 9 out of 10 uses will be to call imperative functions with side effects such as printf. Use of the monad ensures that these operations happen in a predictable order in spite of laziness and compiler optimisations.

To avoid having to be in the monad to call a C function, it is possible to use unsafePerformIO, which is available from the IOExts module. There are three situations where one might like to call a C function from outside the IO world.
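
Whatever the situation, the mechanics look the same. Here is a minimal sketch; the C function sin is merely illustrative, and the technique is only safe when the foreign function is pure and free of side effects:

import IOExts (unsafePerformIO)

-- A pure-looking wrapper around a side-effect-free C function.
cSin :: Double -> Double
cSin x = unsafePerformIO (_ccall_ sin x)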

C-calling ``gotchas'' checklist

And some advice, too.

5.5 Multi-parameter type classes

This section documents GHC's implementation of multi-parameter type classes. There's lots of background in the paper Type classes: exploring the design space (Simon Peyton Jones, Mark Jones, Erik Meijer).

I'd like to thank people who reported shortcomings in the GHC 3.02 implementation. Our default decisions were all conservative ones, and the experience of these heroic pioneers has given useful concrete examples to support several generalisations. (These appear below as design choices not implemented in 3.02.)

I've discussed these notes with Mark Jones, and I believe that Hugs will migrate towards the same design choices as I outline here. Thanks to him, and to many others who have offered very useful feedback.

Types

There are the following restrictions on the form of a qualified type:

  forall tv1..tvn (c1, ...,cn) => type

(Here, I write the "foralls" explicitly, although the Haskell source language omits them; in Haskell 1.4, all the free type variables of an explicit source-language type signature are universally quantified, except for the class type variables in a class declaration. However, in GHC, you can give the foralls if you want. See Section Explicit universal quantification).

  1. Each universally quantified type variable tvi must be mentioned (i.e. appear free) in type. The reason for this is that a value with a type that does not obey this restriction could not be used without introducing ambiguity. Here, for example, is an illegal type:
      forall a. Eq a => Int
    
    When a value with this type was used, the constraint Eq tv would be introduced where tv is a fresh type variable, and (in the dictionary-translation implementation) the value would be applied to a dictionary for Eq tv. The difficulty is that we can never know which instance of Eq to use because we never get any more information about tv.
  2. Every constraint ci must mention at least one of the universally quantified type variables tvi. For example, this type is OK because C a b mentions the universally quantified type variable b:
      forall a. C a b => burble
    
    The next type is illegal because the constraint Eq b does not mention a:
      forall a. Eq b => burble
    
    The reason for this restriction is milder than the other one. The excluded types are never useful or necessary (because the offending context doesn't need to be witnessed at this point; it can be floated out). Furthermore, floating them out increases sharing. Lastly, excluding them is a conservative choice; it leaves a patch of territory free in case we need it later.

These restrictions apply to all types, whether declared in a type signature or inferred.

Unlike Haskell 1.4, constraints in types do not have to be of the form (class type-variables). Thus, these type signatures are perfectly OK

  f :: Eq (m a) => [m a] -> [m a]
  g :: Eq [a] => ...

This choice recovers principal types, a property that Haskell 1.4 does not have.

Class declarations

  1. Multi-parameter type classes are permitted. For example:
      class Collection c a where
        union :: c a -> c a -> c a
        ...etc..
    
  2. The class hierarchy must be acyclic. However, the definition of "acyclic" involves only the superclass relationships. For example, this is OK:
      class C a where { 
        op :: D b => a -> b -> b
      }
    
      class C a => D a where { ... }
    
    Here, C is a superclass of D, but it's OK for a class operation op of C to mention D. (It would not be OK for D to be a superclass of C.)
  3. There are no restrictions on the context in a class declaration (which introduces superclasses), except that the class hierarchy must be acyclic. So these class declarations are OK:
      class Functor (m k) => FiniteMap m k where
        ...
    
      class (Monad m, Monad (t m)) => Transform t m where
        lift :: m a -> (t m) a
    
  4. In the signature of a class operation, every constraint must mention at least one type variable that is not a class type variable. Thus:
      class Collection c a where
        mapC :: Collection c b => (a->b) -> c a -> c b
    
    is OK because the constraint (Collection c b) mentions b, even though it also mentions the class variable c. On the other hand:
      class C a where
        op :: Eq a => (a,b) -> (a,b)
    
    is not OK because the constraint (Eq a) mentions only the class type variable a, and not b. However, any such example is easily fixed by moving the offending context up to the superclass context:
      class Eq a => C a where
        op ::(a,b) -> (a,b)
    
    A yet more relaxed rule would allow the context of a class-op signature to mention only class type variables. However, that conflicts with the second restriction on types, above.
  5. The type of each class operation must mention all of the class type variables. For example:
      class Coll s a where
        empty  :: s
        insert :: s -> a -> s
    
    is not OK, because the type of empty doesn't mention a. This rule is a consequence of the first restriction on types, above, and has the same motivation. Sometimes, offending class declarations exhibit misunderstandings. For example, Coll might be rewritten
      class Coll s a where
        empty  :: s a
        insert :: s a -> a -> s a
    
    which makes the connection between the type of a collection of a's (namely (s a)) and the element type a. Occasionally this really doesn't work, in which case you can split the class like this:
      class CollE s where
        empty  :: s
    
      class CollE s => Coll s a where
        insert :: s -> a -> s
    

Instance declarations

  1. Instance declarations may not overlap. The two instance declarations
      instance context1 => C type1 where ...
      instance context2 => C type2 where ...

    "overlap" if type1 and type2 unify. However, if you give the command line option -fallow-overlapping-instances then two overlapping instance declarations are permitted, subject to certain further conditions. Regrettably, GHC doesn't guarantee to detect overlapping instance declarations if they appear in different modules. GHC can "see" the instance declarations in the transitive closure of all the modules imported by the one being compiled, so it can "see" all instance decls when it is compiling Main. However, it currently chooses not to look at ones that can't possibly be of use in the module currently being compiled, in the interests of efficiency. (Perhaps we should change that decision, at least for Main.)
  2. There are no restrictions on the types in an instance head, except that at least one of them must not be a type variable. The instance "head" is the bit after the "=>" in an instance decl. For example, these are OK:
      instance C Int a where ...
    
      instance D (Int, Int) where ...
    
      instance E [[a]] where ...
    
    Note that instance heads may contain repeated type variables. For example, this is OK:
      instance Stateful (ST s) (MutVar s) where ...
    
    The "at least one not a type variable" restriction is to ensure that context reduction terminates: each reduction step removes one type constructor. For example, the following would make the type checker loop if it wasn't excluded:
      instance C a => C a where ...
    
    There are two situations in which the rule is a bit of a pain. First, if one allows overlapping instance declarations then it's quite convenient to have a "default instance" declaration that applies if something more specific does not:
      instance C a where
        op = ... -- Default
    
    Second, sometimes you might want to use the following to get the effect of a "class synonym":
      class (C1 a, C2 a, C3 a) => C a where { }
    
      instance (C1 a, C2 a, C3 a) => C a where { }
    
    This allows you to write shorter signatures:
      f :: C a => ...
    
    instead of
      f :: (C1 a, C2 a, C3 a) => ...
    
    I'm on the lookout for a simple rule that preserves decidability while allowing these idioms. The experimental flag -fallow-undecidable-instances lifts this restriction, allowing all the types in an instance head to be type variables.
  3. Unlike Haskell 1.4, instance heads may use type synonyms. As always, using a type synonym is just shorthand for writing the RHS of the type synonym definition. For example:
      type Point = (Int,Int) 
      instance C Point   where ...
      instance C [Point] where ...
    
    is legal. However, if you added
      instance C (Int,Int) where ...
    
    as well, then the compiler will complain about the overlapping (actually, identical) instance declarations. As always, type synonyms must be fully applied. You cannot, for example, write:
      type P a = [[a]]
      instance Monad P where ...
    
    This design decision is independent of all the others, and easily reversed, but it makes sense to me.
  4. The types in an instance-declaration context must all be type variables. Thus
      instance C a b => Eq (a,b) where ...
    
    is OK, but
      instance C Int b => Foo b where ...
    
    is not OK. Again, the intent here is to make sure that context reduction terminates. Voluminous correspondence on the Haskell mailing list has convinced me that it's worth experimenting with a more liberal rule. If you use the flag -fallow-undecidable-instances you can use arbitrary types in an instance context. Termination is ensured by having a fixed-depth recursion stack. If you exceed the stack depth you get a sort of backtrace, and the opportunity to increase the stack depth with -fcontext-stackN.
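
To see several of these rules in action at once, here is a minimal sketch of an instance for the Collection class introduced earlier (the class is repeated, cut down to a single method, so the fragment stands alone; the list instance and its Eq context are purely illustrative). The head Collection [] a contains a non-variable type, and the context Eq a mentions only type variables, so rules 2 and 4 above are both satisfied:

  class Collection c a where
    union :: c a -> c a -> c a

  instance Eq a => Collection [] a where
    union xs ys = xs ++ [ y | y <- ys, not (y `elem` xs) ]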

5.6 Explicit universal quantification

GHC now allows you to write explicitly quantified types. GHC's syntax for this now agrees with Hugs's, namely:

        forall a b. (Ord a, Eq  b) => a -> b -> a

The context is, of course, optional. You can't use forall as a type variable any more!

Haskell type signatures are implicitly quantified. The forall allows us to say exactly what this means. For example:

        g :: b -> b

means this:

        g :: forall b. (b -> b)

The two are treated identically.

Universally-quantified data type fields

In a data or newtype declaration one can quantify the types of the constructor arguments. Here are several examples:

data T a = T1 (forall b. b -> b -> b) a

data MonadT m = MkMonad { return :: forall a. a -> m a,
                          bind   :: forall a b. m a -> (a -> m b) -> m b
                        }

newtype Swizzle = MkSwizzle (Ord a => [a] -> [a])

The constructors now have so-called rank 2 polymorphic types, in which there is a for-all in the argument types:

T1 :: forall a. (forall b. b -> b -> b) -> a -> T a
MkMonad :: forall m. (forall a. a -> m a)
                  -> (forall a b. m a -> (a -> m b) -> m b)
                  -> MonadT m
MkSwizzle :: (Ord a => [a] -> [a]) -> Swizzle

Notice that you don't need to use a forall if there's an explicit context. For example in the first argument of the constructor MkSwizzle, an implicit "forall a." is prefixed to the argument type. The implicit forall quantifies all type variables that are not already in scope, and are mentioned in the type quantified over.

As for type signatures, implicit quantification happens for non-overloaded types too. So if you write this:

  data T a = MkT (Either a b) (b -> b)
it's just as if you had written this:
  data T a = MkT (forall b. Either a b) (forall b. b -> b)
That is, since the type variable b isn't in scope, it's implicitly universally quantified. (Arguably, it would be better to require explicit quantification on constructor arguments where that is what is wanted. Feedback welcomed.)

Construction

You construct values of types T, MonadT, and Swizzle by applying the constructor to suitable values, just as usual. For example,

(T1 (\x y->x) 3) :: T Int

(MkSwizzle sort)    :: Swizzle
(MkSwizzle reverse) :: Swizzle

(let r x = Just x
     b m k = case m of
                Just y -> k y
                Nothing -> Nothing
  in
  MkMonad r b) :: MonadT Maybe

The type of the argument can, as usual, be more general than the type required, as (MkSwizzle reverse) shows. (reverse does not need the Ord constraint.)

Pattern matching

When you use pattern matching, the bound variables may now have polymorphic types. For example:

        f :: T a -> a -> (a, Char)
        f (T1 f k) x = (f k x, f 'c' 'd')

        g :: (Ord a, Ord b) => Swizzle -> [a] -> (a -> b) -> [b]
        g (MkSwizzle s) xs f = s (map f (s xs))

        h :: MonadT m -> [m a] -> m [a]
        h m [] = return m []
        h m (x:xs) = bind m x           $ \y ->
                      bind m (h m xs)   $ \ys ->
                      return m (y:ys)

In the function h we use the record selectors return and bind to extract the polymorphic bind and return functions from the MonadT data structure, rather than using pattern matching.

You cannot pattern-match against an argument that is polymorphic. For example:

        newtype TIM s a = TIM (ST s (Maybe a))

        runTIM :: (forall s. TIM s a) -> Maybe a
        runTIM (TIM m) = runST m

Here the pattern-match fails, because you can't pattern-match against an argument of type (forall s. TIM s a). Instead you must bind the variable and pattern match in the right hand side:

        runTIM :: (forall s. TIM s a) -> Maybe a
        runTIM tm = case tm of { TIM m -> runST m }
The tm on the right hand side is (invisibly) instantiated, like any polymorphic value at its occurrence site, and now you can pattern-match against it.

The partial-application restriction

There is really only one way in which data structures with polymorphic components might surprise you: you must not partially apply them. For example, this is illegal:

        map MkSwizzle [sort, reverse]

The restriction is this: every subexpression of the program must have a type that has no for-alls, except that in a function application (f e1 ... en) the partial applications are not subject to this rule. The restriction makes type inference feasible.

In the illegal example, the sub-expression MkSwizzle has the polymorphic type (Ord b => [b] -> [b]) -> Swizzle and is not a sub-expression of an enclosing application. On the other hand, this expression is OK:

        map (T1 (\a b -> a)) [1,2,3]

even though it involves a partial application of T1, because the sub-expression T1 (\a b -> a) has type Int -> T Int.

Type signatures

Once you have data constructors with universally-quantified fields, or constants such as runST that have rank-2 types, it isn't long before you discover that you need more! Consider:

  mkTs f x y = [T1 f x, T1 f y]

mkTs is a function that constructs some values of type T, using some pieces passed to it. The trouble is that since f is a function argument, Haskell assumes that it is monomorphic, so we'll get a type error when applying T1 to it. This is a rather silly example, but the problem really bites in practice. Lots of people trip over the fact that you can't make "wrapper functions" for runST for exactly the same reason. In short, it is impossible to build abstractions around functions with rank-2 types.

The solution is fairly clear. We provide the ability to give a rank-2 type signature for ordinary functions (not only data constructors), thus:

  mkTs :: (forall b. b -> b -> b) -> a -> a -> [T a]
  mkTs f x y = [T1 f x, T1 f y]

This type signature tells the compiler to attribute f with the polymorphic type (forall b. b -> b -> b) when type checking the body of mkTs, so now the application of T1 is fine.

There are two restrictions:

5.7 Existentially quantified data constructors

The idea of using existential quantification in data type declarations was suggested by Laufer (I believe, though doubtless someone will correct me), and implemented in Hope+. It's been in Lennart Augustsson's hbc Haskell compiler for several years, and proved very useful. Here's the idea. Consider the declaration:

  data Foo = forall a. MkFoo a (a -> Bool)
           | Nil

The data type Foo has two constructors with types:

  MkFoo :: forall a. a -> (a -> Bool) -> Foo
  Nil   :: Foo

Notice that the type variable a in the type of MkFoo does not appear in the data type itself, which is plain Foo. For example, the following expression is fine:

  [MkFoo 3 even, MkFoo 'c' isUpper] :: [Foo]

Here, (MkFoo 3 even) packages an integer with a function even that maps an integer to Bool; and MkFoo 'c' isUpper packages a character with a compatible function. These two things are each of type Foo and can be put in a list.

What can we do with a value of type Foo? In particular, what happens when we pattern-match on MkFoo?

  f (MkFoo val fn) = ???

Since all we know about val and fn is that they are compatible, the only (useful) thing we can do with them is to apply fn to val to get a boolean. For example:

  f :: Foo -> Bool
  f (MkFoo val fn) = fn val

What this allows us to do is to package heterogeneous values together with a bunch of functions that manipulate them, and then treat that collection of packages in a uniform manner. You can express quite a bit of object-oriented-like programming this way.
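
Here is a minimal self-contained sketch of that idea, reusing the Foo type above (it assumes -fglasgow-exts; the helper names are illustrative):

  import Char (isUpper)

  data Foo = forall a. MkFoo a (a -> Bool)

  apply :: Foo -> Bool
  apply (MkFoo val fn) = fn val

  demo :: [Bool]
  demo = map apply [MkFoo (3 :: Int) even, MkFoo 'c' isUpper]
  -- demo evaluates to [False, False]: 3 is not even, and 'c' is not upper case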

Why existential?

What has this to do with existential quantification? Simply that MkFoo has the (nearly) isomorphic type

  MkFoo :: (exists a . (a, a -> Bool)) -> Foo

But Haskell programmers can safely think of the ordinary universally quantified type given above, thereby avoiding adding a new existential quantification construct.

Type classes

An easy extension (implemented in hbc) is to allow arbitrary contexts before the constructor. For example:

  data Baz = forall a. Eq a => Baz1 a a
           | forall b. Show b => Baz2 b (b -> b)

The two constructors have the types you'd expect:

  Baz1 :: forall a. Eq a => a -> a -> Baz
  Baz2 :: forall b. Show b => b -> (b -> b) -> Baz

But when pattern matching on Baz1 the matched values can be compared for equality, and when pattern matching on Baz2 the first matched value can be converted to a string (as well as applying the function to it). So this program is legal:

  f :: Baz -> String
  f (Baz1 p q) | p == q    = "Yes"
               | otherwise = "No"
  f (Baz2 v fn)            = show (fn v)

Operationally, in a dictionary-passing implementation, the constructors Baz1 and Baz2 must store the dictionaries for Eq and Show respectively, and extract them on pattern matching.

Notice the way that the syntax fits smoothly with that used for universal quantification earlier.

Restrictions

There are several restrictions on the ways in which existentially-quantified constructors can be used.

5.8 Assertions

If you want to make use of assertions in your standard Haskell code, you could define a function like the following:

assert :: Bool -> a -> a
assert False x = error "assertion failed!"
assert _     x = x

which works, but gives you back a less than useful error message -- an assertion failed, but which and where?

One way out is to define an extended assert function which also takes a descriptive string to include in the error message and perhaps combine this with the use of a pre-processor which inserts the source location where assert was used.

Ghc offers a helping hand here, doing all of this for you. For every use of assert in the user's source:

kelvinToC :: Double -> Double
kelvinToC k = assert (k >= 0.0) (k-273.15)

Ghc will rewrite this to also include the source location where the assertion was made,

assert pred val ==> assertError "Main.hs|15" pred val

The rewrite is only performed by the compiler when it spots applications of Exception.assert, so you can still define and use your own versions of assert, should you so wish. If not, import Exception to make use of assert in your code.

To have the compiler ignore uses of assert, use the compiler option -fignore-asserts. That is, expressions of the form assert pred e will be rewritten to e.

Assertion failures can be caught; see the documentation for the Hugs/GHC Exception library for information on how.

5.9 Scoped Type Variables

A pattern type signature can introduce a scoped type variable. For example

f (xs::[a]) = ys ++ ys
           where
              ys :: [a]
              ys = reverse xs 

The pattern (xs::[a]) includes a type signature for xs. This brings the type variable a into scope; it scopes over all the patterns and right hand sides for this equation for f. In particular, it is in scope at the type signature for ys.

At ordinary type signatures, such as that for ys, any type variables mentioned in the type signature that are not in scope are implicitly universally quantified. (If there are no type variables in scope, all type variables mentioned in the signature are universally quantified, which is just as in Haskell 98.) In this case, since a is in scope, it is not universally quantified, so the type of ys is the same as that of xs. In Haskell 98 it is not possible to declare a type for ys; a major benefit of scoped type variables is that it becomes possible to do so.

Scoped type variables are implemented in both GHC and Hugs. Where the implementations differ from the specification below, those differences are noted.

So much for the basic idea. Here are the details.

Scope and implicit quantification

Polymorphism

Result type signatures

Result type signatures are not yet implemented in Hugs.

Pattern signatures on other constructs

Such degenerate function bindings do not fall under the monomorphism restriction. Thus:

  g :: a -> a -> Bool = \x y -> x==y

Here g has type forall a. Eq a => a -> a -> Bool, just as if g had a separate type signature. Lacking a type signature, g would get a monomorphic type.

Existentials

5.10 Pragmas

GHC supports several pragmas, or instructions to the compiler placed in the source code. Pragmas don't affect the meaning of the program, but they might affect the efficiency of the generated code.

INLINE pragma

GHC (with -O, as always) tries to inline (or ``unfold'') functions/values that are ``small enough,'' thus avoiding the call overhead and possibly exposing other more-wonderful optimisations.

You will probably see these unfoldings (in Core syntax) in your interface files.

Normally, if GHC decides a function is ``too expensive'' to inline, it will not do so, nor will it export that unfolding for other modules to use.

The sledgehammer you can bring to bear is the INLINE pragma, used thusly:

key_function :: Int -> String -> (Bool, Double) 

#ifdef __GLASGOW_HASKELL__
{-# INLINE key_function #-}
#endif
(You don't need to do the C pre-processor carry-on unless you're going to stick the code through HBC---it doesn't like INLINE pragmas.)

The major effect of an INLINE pragma is to declare a function's ``cost'' to be very low. The normal unfolding machinery will then be very keen to inline it.

An INLINE pragma for a function can be put anywhere its type signature could be put.

INLINE pragmas are a particularly good idea for the then/return (or bind/unit) functions in a monad. For example, in GHC's own UniqueSupply monad code, we have:

#ifdef __GLASGOW_HASKELL__
{-# INLINE thenUs #-}
{-# INLINE returnUs #-}
#endif

NOINLINE pragma

The NOINLINE pragma does exactly what you'd expect: it stops the named function from being inlined by the compiler. You shouldn't ever need to do this, unless you're very cautious about code size.

SPECIALIZE pragma

(UK spelling also accepted.) For key overloaded functions, you can create extra versions (NB: more code space) specialised to particular types. Thus, if you have an overloaded function:

hammeredLookup :: Ord key => [(key, value)] -> key -> value

If it is heavily used on lists with Widget keys, you could specialise it as follows:

{-# SPECIALIZE hammeredLookup :: [(Widget, value)] -> Widget -> value #-}

To get very fancy, you can also specify a named function to use for the specialised value, by adding = blah, as in:

{-# SPECIALIZE hammeredLookup :: ...as before... = blah #-}
It's Your Responsibility to make sure that blah really behaves as a specialised version of hammeredLookup!!!

NOTE: the =blah feature isn't implemented in GHC 4.xx.

An example in which the = blah form will Win Big:

toDouble :: Real a => a -> Double
toDouble = fromRational . toRational

{-# SPECIALIZE toDouble :: Int -> Double = i2d #-}
i2d (I# i) = D# (int2Double# i) -- uses Glasgow prim-op directly
The i2d function is virtually one machine instruction; the default conversion---via an intermediate Rational---is obscenely expensive by comparison.

By using the US spelling, your SPECIALIZE pragma will work with HBC, too. Note that HBC doesn't support the = blah form.

A SPECIALIZE pragma for a function can be put anywhere its type signature could be put.

SPECIALIZE instance pragma

Same idea, except for instance declarations. For example:

instance (Eq a) => Eq (Foo a) where { ... usual stuff ... }

{-# SPECIALIZE instance Eq (Foo [(Int, Bar)]) #-}
Compatible with HBC, by the way.

LINE pragma

This pragma is similar to C's #line pragma, and is mainly for use in automatically generated Haskell code. It lets you specify the line number and filename of the original code; for example

{-# LINE 42 "Foo.vhs" #-}

if you'd generated the current file from something called Foo.vhs and this line corresponds to line 42 in the original. GHC will adjust its error messages to refer to the line/file named in the LINE pragma.

RULES pragma

The RULES pragma lets you specify rewrite rules. It is described in Section Rewrite Rules.

5.11 Rewrite rules

The programmer can specify rewrite rules as part of the source program (in a pragma). GHC applies these rewrite rules wherever it can.

Here is an example:

  {-# RULES
        "map/map"       forall f g xs.  map f (map g xs) = map (f.g) xs
  #-}

Syntax

From a syntactic point of view:

Semantics

From a semantic point of view:

Controlling what's going on

5.12 Concurrent and Parallel Haskell

Concurrent and Parallel Haskell are Glasgow extensions to Haskell which let you structure your program as a group of independent `threads'.

Concurrent and Parallel Haskell have very different purposes.

Concurrent Haskell is for applications which have an inherent structure of interacting, concurrent tasks (i.e. `threads'). Threads in such programs may be required. For example, if a concurrent thread has been spawned to handle a mouse click, it isn't optional---the user wants something done!

A Concurrent Haskell program implies multiple `threads' running within a single Unix process on a single processor.

You will find at least one paper about Concurrent Haskell hanging off of Simon Peyton Jones's Web page.

Parallel Haskell is about speed---spawning threads onto multiple processors so that your program will run faster. The `threads' are always advisory---if the runtime system thinks it can get the job done more quickly by sequential execution, then fine.

A Parallel Haskell program implies multiple processes running on multiple processors, under a PVM (Parallel Virtual Machine) framework.

Parallel Haskell is still relatively new; it is more about ``research fun'' than about ``speed.'' That will change.

Again, check Simon's Web page for publications about Parallel Haskell (including ``GUM'', the key bits of the runtime system).

Some details about Concurrent and Parallel Haskell follow.

Language features specific to Concurrent Haskell

The Concurrent interface (recommended)

GHC provides a Concurrent module, a common interface to a collection of useful concurrency abstractions, including those mentioned in the ``concurrent paper''.

Just add the flag -syslib concurrent to your GHC command line and put import Concurrent into your modules, and away you go. To create a ``required thread'':

forkIO :: IO () -> IO ThreadId

where ThreadId is an abstract type representing a handle to the newly created thread. Threads may also be killed:

killThread :: ThreadId -> IO ()

this terminates the given thread. Any work already done by the thread isn't lost: the computation is suspended until required by another thread. The memory used by the thread will be garbage collected if it isn't referenced from anywhere else.

More generally, an arbitrary exception may be raised in any thread for which we have a ThreadId, with raiseInThread:

raiseInThread :: ThreadId -> Exception -> IO ()

Actually killThread just raises the ThreadKilled exception in the target thread, the normal action of which is to just terminate the thread. The target thread will stop whatever it was doing (even if it was blocked on an MVar or other computation) and handle the exception.

The ThreadId for the current thread can be obtained with myThreadId:

myThreadId :: IO ThreadId

NOTE: if you have a ThreadId, you essentially have a pointer to the thread itself. This means the thread itself can't be garbage collected until you drop the ThreadId. This misfeature will hopefully be corrected at a later date.

The Concurrent interface also provides access to ``M-Vars'', which are synchronising variables.

MVars are rendezvous points, mostly for concurrent threads. They begin either empty or full, and any attempt to read an empty MVar blocks. When an MVar is written, a single blocked thread may be freed. Reading an MVar toggles its state from full back to empty. Therefore, any value written to an MVar may only be read once. Multiple reads and writes are allowed, but there must be at least one read between any two writes. Interface:

newEmptyMVar :: IO (MVar a)
newMVar      :: a -> IO (MVar a)
takeMVar     :: MVar a -> IO a
putMVar      :: MVar a -> a -> IO ()
readMVar     :: MVar a -> IO a
swapMVar     :: MVar a -> a -> IO a
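
As a minimal sketch of forkIO and MVars working together (compile with -syslib concurrent; the message is illustrative), the parent thread blocks on takeMVar until the child fills the MVar:

import Concurrent

main :: IO ()
main = do
  done <- newEmptyMVar
  forkIO (putStrLn "hello from the child thread" >> putMVar done ())
  takeMVar done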

A channel variable (CVar) is a one-element channel, as described in the paper:

data CVar a
newCVar :: IO (CVar a)
putCVar :: CVar a -> a -> IO ()
getCVar :: CVar a -> IO a

A Channel is an unbounded channel:

data Chan a 
newChan         :: IO (Chan a)
putChan         :: Chan a -> a -> IO ()
getChan         :: Chan a -> IO a
dupChan         :: Chan a -> IO (Chan a)
unGetChan       :: Chan a -> a -> IO ()
getChanContents :: Chan a -> IO [a]
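
A small sketch of a producer and a consumer sharing a Chan, again assuming -syslib concurrent and the operations listed above (the numbers are illustrative):

import Concurrent

main :: IO ()
main = do
  ch <- newChan
  forkIO (mapM_ (putChan ch) [1..10 :: Int])
  xs <- sequence (replicate 10 (getChan ch))
  print xs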

General and quantity semaphores:

data QSem
newQSem     :: Int   -> IO QSem
waitQSem    :: QSem  -> IO ()
signalQSem  :: QSem  -> IO ()

data QSemN
newQSemN    :: Int   -> IO QSemN
signalQSemN :: QSemN -> Int -> IO ()
waitQSemN   :: QSemN -> Int -> IO ()

Merging streams---binary and n-ary:

mergeIO  :: [a]   -> [a] -> IO [a]
nmergeIO :: [[a]] -> IO [a]

A Sample variable (SampleVar) is slightly different from a normal MVar:

type SampleVar a = MVar (Int, MVar a)

emptySampleVar :: SampleVar a -> IO ()
newSampleVar   :: IO (SampleVar a)
readSample     :: SampleVar a -> IO a
writeSample    :: SampleVar a -> a -> IO ()

Finally, there are operations to delay a concurrent thread, and to make one wait:

threadDelay     :: Int -> IO () -- delay rescheduling for N microseconds
threadWaitRead  :: Int -> IO () -- wait for input on specified file descriptor
threadWaitWrite :: Int -> IO () -- (read and write, respectively).

Features specific to Parallel Haskell

The Parallel interface (recommended)

GHC provides two functions for controlling parallel execution, through the Parallel interface:

interface Parallel where
infixr 0 `par`
infixr 1 `seq`

par :: a -> b -> b
seq :: a -> b -> b

The expression (x `par` y) sparks the evaluation of x (to weak head normal form) and returns y. Sparks are queued for execution in FIFO order, but are not executed immediately. At the next heap allocation, the currently executing thread will yield control to the scheduler, and the scheduler will start a new thread (until reaching the active thread limit) for each spark which has not already been evaluated to WHNF.

The expression (x `seq` y) evaluates x to weak head normal form and then returns y. The seq primitive can be used to force evaluation of an expression beyond WHNF, or to impose a desired execution sequence for the evaluation of an expression.

For example, consider the following parallel version of our old nemesis, nfib:

import Parallel

nfib :: Int -> Int
nfib n | n <= 1 = 1
       | otherwise = par n1 (seq n2 (n1 + n2 + 1))
                     where n1 = nfib (n-1) 
                           n2 = nfib (n-2)

For values of n greater than 1, we use par to spark a thread to evaluate nfib (n-1), and then we use seq to force the parent thread to evaluate nfib (n-2) before going on to add together these two subexpressions. In this divide-and-conquer approach, we only spark a new thread for one branch of the computation (leaving the parent to evaluate the other branch). Also, we must use seq to ensure that the parent will evaluate n2 before n1 in the expression (n1 + n2 + 1). It is not sufficient to reorder the expression as (n2 + n1 + 1), because the compiler may not generate code to evaluate the addends from left to right.

Underlying functions and primitives

The functions par and seq are wired into GHC, and unfold into uses of the par# and seq# primitives, respectively. If you'd like to see this with your very own eyes, just run GHC with the -ddump-simpl option. (Anything for a good time...)

Scheduling policy for concurrent/parallel threads

Runnable threads are scheduled in round-robin fashion. Context switches are signalled by the generation of new sparks or by the expiry of a virtual timer (the timer interval is configurable with the -C[<num>] RTS option). However, a context switch doesn't really happen until the current heap block is full. You can't get any faster context switching than this.

When a context switch occurs, pending sparks which have not already been reduced to weak head normal form are turned into new threads. However, there is a limit to the number of active threads (runnable or blocked) which are allowed at any given time. This limit can be adjusted with the -t<num> RTS option (the default is 32). Once the thread limit is reached, any remaining sparks are deferred until some of the currently active threads are completed.

5.13 Haskell 98 vs. Glasgow Haskell: language non-compliance

This section lists Glasgow Haskell infelicities in its implementation of Haskell 98. See also the ``when things go wrong'' section (Section What to do when something goes wrong) for information about crashes, space leaks, and other undesirable phenomena.

The limitations here are listed in Haskell-Report order (roughly).

Expressions and patterns

Very long String constants:

May not go through. If you add a ``string gap'' every few thousand characters, then the strings can be as long as you like.

Bear in mind that string gaps and the -cpp option don't mix very well (see Section The C pre-processor).

Very long literal lists:

These may tickle a ``yacc stack overflow'' error in the parser. (It depends on the Yacc used to build your parser.)

Single quotes in module names:

It might work, but it's just begging for trouble.

Declarations and bindings

Derived instances for records:

Hmmm.

Module system and interface files

Namespace pollution

Several modules internal to GHC are visible in the standard namespace. All of these modules begin with Prel, so the rule is: don't use any modules beginning with Prel in your program, or you will be comprehensively screwed.

Numbers, basic types, and built-in classes

Unchecked arithmetic:

Arguably not an infelicity, but... Bear in mind that operations on Int, Float, and Double numbers are unchecked for overflow, underflow, and other sad occurrences. (Note, however, that some architectures trap floating-point overflow and loss-of-precision and report a floating-point exception.)

Use Integer, Rational, etc., numeric types if this stuff keeps you awake at night.
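
For example, this minimal sketch prints a wrapped-around value rather than failing (the exact value is machine-dependent):

main :: IO ()
main = print (maxBound + 1 :: Int)   -- silently wraps around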

Multiply-defined array elements---not checked:

This code fragment should elicit a fatal error, but it does not:

import Array
main = print (array (1,1) [ (1,2), (1,3) ])

In Prelude support

Arbitrary-sized tuples:

Plain old tuples of arbitrary size do work.

HOWEVER: standard instances for tuples (Eq, Ord, Bounded, Ix, Read, and Show) are available only up to 5-tuples.

These limitations are easily subvertible, so please ask if you get stuck on them.

Unicode character set:

Haskell 98 embraces the Unicode character set, but GHC doesn't handle it. Yet.

