For the link between brightness and size: if the input size is phi1 and the input brightness is phi0, then the brightness actually scales by a factor of phi0*sqrt(2*pi*phi1^2).
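In code, that factor works out as follows (a one-line sketch; phi0 and phi1 are just the names used above):

```python
import math

def brightness_scale(phi0, phi1):
    """Factor by which brightness scales for input brightness phi0 and size phi1.

    Directly implements the formula above: phi0 * sqrt(2 * pi * phi1^2).
    """
    return phi0 * math.sqrt(2.0 * math.pi * phi1 ** 2)
```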

3B is different to FIONA because it models the behaviour of many fluorophores simultaneously, allowing data to be used where the fluorophores overlap a lot. FIONA is really designed to find the positions of single molecules very accurately. STORM, PALM, fPALM etc. all use single-molecule localisation methods to produce super-resolution images; 3B differs only in the density of fluorophores imaged in a single frame that the analysis method can cope with. There are other related methods, e.g. Selvin has one based on localising bleaching events.

The basic idea behind our method is that we are comparing two hypotheses. For example, one hypothesis might be that a fluorophore is present at a certain point with a particular brightness and size, whereas the second hypothesis could be that the data arises from noise. For each hypothesis we calculate the model evidence, and the hypothesis for which the model evidence is greater is the most likely model of those two. By doing repeated comparisons of this type you can get to what is the most likely model of your data.
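As a toy illustration of that comparison (the log-evidence values here are made up purely for the example):

```python
import math

# Made-up log-evidence values for the two hypotheses being compared.
log_ev_fluorophore = -1043.2   # fluorophore present with a given brightness and size
log_ev_noise = -1050.7         # the same pixels explained by noise alone

# Neither number is a probability; only the comparison between them is meaningful.
log_bayes_factor = log_ev_fluorophore - log_ev_noise
print("fluorophore hypothesis favoured" if log_bayes_factor > 0
      else "noise hypothesis favoured")
```

Working in log evidence avoids underflow, since the evidences themselves are typically astronomically small numbers.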

When you model fluorophores there are two types of variables. Continuous variables, such as brightness and size, can take any value. Discrete variables can only take a certain number of possible values; for example, a fluorophore can be emitting light, not emitting, or bleached (of course the reality is more complex, but this is our approximation). The continuous variables can be integrated out using Laplace's approximation, which assumes the distribution is Gaussian, so you just need to know the position of the peak, its height and its width. For the discrete variables, we first tried to use the forward algorithm. This just adds up the probabilities of all possible state sequences to give you the model evidence. It is important to note that the model evidence is just a number, not a probability between 0 and 1. It is only significant when compared to another value for model evidence calculated in the same way.
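The forward algorithm itself is short. Here is a minimal sketch for a single fluorophore with the three states mentioned above; the transition matrix and initial distribution are illustrative placeholders, not the values 3B actually uses:

```python
import numpy as np

# States: 0 = emitting, 1 = dark (not emitting), 2 = bleached (absorbing).
# These numbers are illustrative placeholders only.
A = np.array([[0.6, 0.3, 0.1],
              [0.4, 0.6, 0.0],
              [0.0, 0.0, 1.0]])   # each row sums to 1; bleached never recovers
pi = np.array([0.5, 0.5, 0.0])    # initial state distribution

def forward_evidence(obs_lik):
    """Model evidence P(data) for one fluorophore's hidden Markov model.

    obs_lik[t, s] is the likelihood of frame t's data given state s.
    Summing over all state sequences this way costs O(T * S^2)
    instead of enumerating all S^T sequences.
    """
    alpha = pi * obs_lik[0]
    for t in range(1, len(obs_lik)):
        alpha = (alpha @ A) * obs_lik[t]
    return float(alpha.sum())
```

With all observation likelihoods set to 1, the result is exactly 1, since the algorithm is then just summing the probabilities of every possible state sequence.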

So, if we could have used the forward algorithm, everything would have been fine. Unfortunately it is exponential in the number of fluorophores: for example, a calculation for 15 fluorophores took a week, and one for 18 fluorophores would have taken years. So we had to find an approximation. What we used was Markov chain Monte Carlo (MCMC) sampling, which draws state sequences from a distribution (we used Gibbs sampling for this), so you only have to do the calculation for a certain number of state sequences. Under a certain set of assumptions, MCMC guarantees that the samples you draw are representative of the distribution. This still leaves a problem, because MCMC is a much noisier method, making it difficult to optimise (slides 12 and 13). So what we did was take one fluorophore at a time, run the forward algorithm for it, and use MCMC for the other fluorophores.
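A single Gibbs sweep over one fluorophore's state sequence might look something like this. It is a schematic sketch using the same toy HMM conventions as above (a transition matrix A and per-frame observation likelihoods), not the 3B implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(states, A, obs_lik):
    """One Gibbs sweep over a single fluorophore's state sequence.

    Resamples each frame's state conditioned on the neighbouring states
    and that frame's observation likelihood:
        P(s_t | rest) ∝ A[s_{t-1}, s_t] * obs_lik[t, s_t] * A[s_t, s_{t+1}]
    Schematic only: assumes the conditional probabilities never all vanish.
    """
    T, S = obs_lik.shape
    for t in range(T):
        p = obs_lik[t].copy()
        if t > 0:
            p *= A[states[t - 1]]       # transition into frame t
        if t < T - 1:
            p *= A[:, states[t + 1]]    # transition out of frame t
        states[t] = rng.choice(S, p=p / p.sum())
    return states
```

Repeating such sweeps yields a set of sampled state sequences whose statistics approximate the full sum that the forward algorithm would compute exactly.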

Each iteration of the algorithm then involves changing the model in some way (e.g. by adding a fluorophore), seeing if the model evidence improves, and, if it does, optimising all the fluorophores and then changing the model again.
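That loop can be caricatured as follows, with a made-up one-dimensional "evidence" standing in for the real calculation (here the model is reduced to just a fluorophore count, and the toy evidence peaks at 3):

```python
# Toy stand-in for the model evidence: peaks when the model has 3 fluorophores.
# Purely illustrative; the real evidence comes from the calculations above.
def log_evidence(n_fluorophores):
    return -abs(n_fluorophores - 3)

def greedy_search(start=0, max_iter=20):
    """Caricature of the iteration described above: propose adding or
    removing a fluorophore, and keep the change only if the model
    evidence improves."""
    n, best = start, log_evidence(start)
    for _ in range(max_iter):
        for candidate in (n + 1, n - 1):
            if candidate >= 0 and log_evidence(candidate) > best:
                n, best = candidate, log_evidence(candidate)
    return n
```

In the real method each accepted change is followed by re-optimising all the fluorophores' continuous parameters, which the toy version has no analogue for.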

If you want some more detailed background about Hidden Markov Models at a less breakneck speed than the paper, I would recommend:

http://ieeexplore.ieee.org/xpls/abs_all ... mber=18626
http://neuro.bstu.by/ai/To-dom/My_resea ... doblel.pdf
(both point to the same paper, a really good tutorial paper on HMMs)

and David MacKay's book

http://www.inference.phy.cam.ac.uk/mackay/itila/ which covers things like Gibbs sampling very clearly.