Dynamic Graphical Models

Graphical models

BAli-Phy has been extended to express generic models written as graphical models, similar to other projects such as RevBayes, BEAST, and OpenBUGS/JAGS. These projects use languages like XML or R to express generic models flexibly and to construct them in a modular fashion [1]. BAli-Phy currently performs inference using MCMC, via both Metropolis-Hastings and slice sampling. We have not yet added any support for SMC (sequential Monte Carlo).

Dynamic graphical models

BAli-Phy attempts to extend this paradigm by implementing dynamic graphical models via probabilistic programming. The approach is similar to Church and Venture in that it uses execution traces to represent the dynamic graph and is Turing-complete. The approach:

• allows a single model to express a changing graph.
• treats functions as first-class objects and allows recursive functions.
• allows the use of data-structures with random fields.
• will (eventually) allow random numbers of random variables.
The modelling framework is under rapid development, and I haven't written much documentation yet.

Modelling language

We use Haskell as the modelling language. This has a number of benefits, including lazy evaluation, static type checking, a module system, and so on. We currently use monads and do-notation to represent sampling random variables:

module Demo1 where
{
import Distributions;

main = do
{
p <- beta 10.0 1.0;
Log "p" p;

n <- geometric p;
Log "n" n;
}
}


Unfortunately, not all Haskell language features are available yet. In particular, type checking is not implemented yet, and explicit curly braces and semicolons are required, as in the example above.
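To see how do-notation composes distributions, here is a toy discrete-probability monad in standard Haskell. This is only an illustrative sketch, not BAli-Phy's machinery: the Dist type, bernoulli, and twoFlips below are hypothetical names invented for this example.

```haskell
module Main where

-- A distribution is a list of (value, probability) pairs.
newtype Dist a = Dist { runDist :: [(a, Double)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [(f x, p) | (x, p) <- xs]

instance Applicative Dist where
  pure x = Dist [(x, 1.0)]
  Dist fs <*> Dist xs = Dist [(f x, p * q) | (f, p) <- fs, (x, q) <- xs]

instance Monad Dist where
  -- Binding multiplies probabilities along each path through the program.
  Dist xs >>= k = Dist [(y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x)]

bernoulli :: Double -> Dist Int
bernoulli p = Dist [(1, p), (0, 1 - p)]

-- do-notation chains two coin flips and returns the distribution of their sum.
twoFlips :: Dist Int
twoFlips = do
  a <- bernoulli 0.5
  b <- bernoulli 0.5
  return (a + b)

main :: IO ()
main = print (runDist twoFlips)
```

Each `<-` in the do-block reads as "sample from this distribution", which is the same surface syntax that the BAli-Phy examples below use.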

Some more example files are here. For example, you might run bali-phy -m CoalMine.hs --iter=1000 to perform a Poisson regression.

Random data structures

We can also sample random data structures. For example, the iid distribution returns a random list. We can apply functions such as map and sum to such lists, for example to compute a sum of squares.

module Demo2 where
{
import Distributions;

main = do
{
xs <- iid 10 (normal 0.0 1.0);
Log "xs" xs;

let {ys = map (\x -> x*x) xs};
Log "ys" ys;

Log "sum" (sum ys);
}
}

Here (\x -> x*x) is an anonymous (lambda) function that takes an argument x and returns x*x.
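The map/sum pipeline above is ordinary Haskell, so it can be checked on a fixed list outside BAli-Phy. A minimal sketch, using hypothetical fixed values in place of the sampled xs:

```haskell
module Main where

-- Square each element of a fixed list, as map (\x -> x*x) does to xs in Demo2.
squares :: [Double]
squares = map (\x -> x * x) [1.0, 2.0, 3.0]

main :: IO ()
main = do
  print squares        -- [1.0,4.0,9.0]
  print (sum squares)  -- 14.0
```

In BAli-Phy, the only difference is that the input list is random, so the sum of squares is itself a random quantity.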

Dynamic graphs I: if-then-else

The graph in a traditional graphical model never changes. Changing the graph involves changing the model. In contrast, dynamic graphical models use a single model to specify a changing graph. One thing that leads to a changing graph is control-flow statements like if-then-else:

module Demo3 where
{
import Distributions;

main = do
{
i <- bernoulli 0.5;
y <- normal 0.0 1.0;
z <- exponential 0.1;
let {x = if (i==1) then y else z};
Log "x" x;
}
}

While x always depends on i, it depends on either y or z, but not both. The dynamic graphical model can represent this by updating the graph to handle the cases i=0 and i=1. In contrast, a traditional graphical model makes x always depend on everything that it might depend on.
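The reason only one branch becomes a dependency is Haskell's lazy evaluation: the branch that is not selected is never forced. A small standard-Haskell sketch (select is a hypothetical name for this example) makes the point by putting an error in the untaken branch:

```haskell
module Main where

-- Mirrors the let-binding of x in Demo3: pick y or z depending on i.
select :: Int -> Double -> Double -> Double
select i y z = if i == 1 then y else z

main :: IO ()
main = do
  -- The untaken branch is never evaluated, so it may even be an error.
  print (select 1 3.14 (error "z is never forced"))
  print (select 0 (error "y is never forced") 2.71)
```

In the dynamic graph, this means the edge from x runs to y when i=1 and to z when i=0, never to both at once.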

Dynamic graphs II: arrays with random subscripts

Traditional graphical modelling languages, like BUGS, allow arrays of random variables. However, they do not allow selecting a random element of these arrays. Dynamic graphs allow random subscripts. This can be used to divide observations into categories. Here, different elements of ys will be exactly equal if they belong to the same category:

module Demo4 where
{
import Distributions;

main = do
{
xs <- iid 10 (normal 0.0 1.0);

categories <- iid 10 (categorical (replicate 10 0.1));

let {ys = [xs!!(categories!!i) | i <- [0..9]]};
Log "ys" ys;
}
}

In Haskell, we use the !! operator to subscript a list. The C equivalent of xs!!(categories!!i) would be something like xs[categories[i]].
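The subscripting itself can be checked in standard Haskell with hypothetical fixed values standing in for the sampled xs and categories. When two positions share a category index, the corresponding elements of ys are exactly equal:

```haskell
module Main where

-- Fixed stand-ins for the random xs and categories of Demo4.
xs :: [Double]
xs = [10.0, 20.0, 30.0, 40.0]

categories :: [Int]
categories = [2, 0, 2, 1]

-- xs !! (categories !! i) is the C-style xs[categories[i]].
ys :: [Double]
ys = [xs !! (categories !! i) | i <- [0 .. 3]]

main :: IO ()
main = print ys  -- positions 0 and 2 share category 2, so ys!!0 == ys!!2
```

In Demo4 the indices themselves are random, so which elements of ys coincide changes as the MCMC sampler moves the categories around.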

Random sampling via recursive functions: Brownian Bridge

We can use recursive functions to randomly sample lists of random values. Here, we define a function random_walk that produces a list of values starting from x0, where each value is sampled conditional on the previous one.

module Demo5 where
{
import Distributions;

random_walk x0 0 _ = return [x0];
random_walk x0 n f = do {
x1 <- f x0;
xs <- random_walk x1 (n-1) f;
return (x0:xs);
};

main = do
{
zs <- random_walk 0.0 10 (\mu -> normal mu 1.0);
Log "zs" zs;

Observe 2.0 (normal (zs!!10) 1.0);
}
}

The argument f is a function. In Haskell, we write f x instead of f(x) to apply a function. Here, f x gives the distribution of the point after x.

The Observe command specifies observed data. Here we observe a value of 2.0 for the point sampled after element 10 of zs, the last point of the walk. This constrains the random walk to end near 2.0, creating a Brownian Bridge.
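The recursion in random_walk is ordinary monadic Haskell, generic over any monad. As a sketch, it can be exercised with a deterministic step function in the Identity monad in place of the normal distribution (randomWalk here is a standard-Haskell rendering of Demo5's random_walk, not BAli-Phy's sampler):

```haskell
module Main where

import Data.Functor.Identity (Identity, runIdentity)

-- Same recursion as random_walk in Demo5: n steps starting from x0,
-- where f gives the (monadic) next point after the current one.
randomWalk :: Monad m => a -> Int -> (a -> m a) -> m [a]
randomWalk x0 0 _ = return [x0]
randomWalk x0 n f = do
  x1 <- f x0
  xs <- randomWalk x1 (n - 1) f
  return (x0 : xs)

main :: IO ()
main = do
  -- A deterministic "+1" step stands in for (\mu -> normal mu 1.0).
  let zs = runIdentity (randomWalk (0 :: Int) 10 (\mu -> return (mu + 1)))
  print zs          -- 11 values: the start plus 10 steps
  print (zs !! 10)  -- the last point, index 10
```

Note that a walk of n steps yields n+1 values, which is why zs!!10 is the final point of the 10-step walk in Demo5.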

comments and suggestions: benjamin . redelings * gmail + com