3B Image processing

Bayesian analysis of blinking and bleaching

3B Image processing

Postby Fan » Mon Mar 04, 2013 10:05 pm

Hi, I have some questions regarding 3B image processing:

1. What is the most important limiting factor for the image processing speed? The CPU, the RAM, or the read/write speed of the hard disk?

2. We have a Dell workstation equipped with two quad-core 2.66 GHz Xeon CPUs, making it an 8-core machine. I noticed that when I was running the 3B ImageJ plugin, only 1 of the 8 cores (12%) was utilized. Is there any way we could fully utilize all 8 cores?

3. Can we process multiple image stacks at the same time? Would it increase the processing time proportionately?

4. How does 3B handle out-of-focus light? If we take a time-lapse movie with a z-stack, should we process each z plane separately for the 3B reconstruction, or project the z planes first and then proceed to the reconstruction?

Thanks!
Fan
 
Posts: 3
Joined: Fri Mar 01, 2013 7:25 pm

Re: 3B Image processing

Postby edrosten » Tue Mar 05, 2013 12:05 pm

Fan wrote:Hi, I have some questions regarding 3B image processing:

1. What is the most important limiting factor for the image processing speed? The CPU, the RAM, or the read/write speed of the hard disk?


The CPU. The images are read once at the beginning (which takes a few seconds), then processed for a long time. Also, 3B doesn't generally take up a huge amount of memory for each run.

2. We have a Dell workstation equipped with two quad-core 2.66 GHz Xeon CPUs, making it an 8-core machine. I noticed that when I was running the 3B ImageJ plugin, only 1 of the 8 cores (12%) was utilized. Is there any way we could fully utilize all 8 cores?

3. Can we process multiple image stacks at the same time? Would it increase the processing time proportionately?


It's easier to answer these two together. A single run of 3B will only use one core at a time; however, you can perform multiple runs simultaneously. They can be multiple areas from the same stack, or different stacks entirely. Note that processing multiple different stacks using the ImageJ plugin will use more memory, since ImageJ will need to load all of the stacks. The standalone program is not affected in this way.

Since you have 8 cores, running 8 different areas at once will not substantially increase the running time. Moving from 8 to 16 simultaneous runs would double the running time. You will generally get the best performance by making the number of simultaneous runs equal to the number of cores.
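To make that arithmetic concrete, here is a minimal Python sketch (not part of 3B; the per-run time is an illustrative assumption) estimating the wall-clock time for a batch of independent single-core runs:

```python
# Each 3B run occupies one core, so n_runs independent runs on
# n_cores cores finish in roughly ceil(n_runs / n_cores) "batches".
import math

def estimated_wall_time(n_runs, n_cores, hours_per_run):
    batches = math.ceil(n_runs / n_cores)
    return batches * hours_per_run

# 8 areas on an 8-core machine: about the same as a single run.
print(estimated_wall_time(8, 8, 6.0))   # 6.0
# 16 areas on 8 cores: roughly double.
print(estimated_wall_time(16, 8, 6.0))  # 12.0
```

In practice you would launch the runs with whatever job control you prefer (e.g. one standalone 3B process per region), keeping the number of simultaneous processes equal to the number of cores.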
edrosten
 
Posts: 42
Joined: Wed Jul 18, 2012 2:54 pm

Re: 3B Image processing

Postby susancox » Tue Mar 05, 2013 12:14 pm

With regard to point 4:

3B treats out-of-focus light as a smoothly varying background, so it is removed in the reconstructed superresolution image. However, if the level of out-of-focus light is high, the background on which blinking and bleaching events must be fitted will also be high, and the fit will be less good. This will degrade the resolution of the reconstructed image. If a very high level of out-of-focus light is present, the algorithm may be unable to identify blinking events at all, and so will fail.

Because of this disadvantage of increased background, I would always analyse each z plane separately. In addition, either a maximum projection or an average projection will certainly change, and may even remove, blinking and bleaching events. It is therefore very likely that a fit to a z-projection would not reflect the underlying structure.
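As a concrete sketch of the per-plane approach: if the acquisition saves frames interleaved by z (t0z0, t0z1, t1z0, ...), the stack can be split into one time series per plane before analysis. A minimal Python illustration (the interleaved ordering is an assumption about your acquisition software):

```python
# Split an interleaved time-lapse z-stack into one time series per
# z plane, so each plane can be analysed separately rather than
# projected. 'frames' is any sequence ordered t0z0, t0z1, t1z0, ...
def split_z_planes(frames, n_z):
    # Every n_z-th frame, starting at offset z, belongs to plane z.
    return [frames[z::n_z] for z in range(n_z)]

# Toy example: 3 time points, 2 z planes, frames labelled (t, z).
frames = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)]
print(split_z_planes(frames, 2))
# [[(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]]
```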

I would note that another problem quite a few people have encountered is drift: some systems, even ones that are relatively stable in xy, can suffer from z-drift. If you are not sure how stable your system is, it may be worth checking with a bead sample.
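One simple way to check lateral stability with a bead sample is to track the intensity-weighted centroid of a bright, isolated bead through the movie; a steady shift indicates drift (z-drift would need a focus metric instead, but the idea is similar). A toy pure-Python sketch:

```python
# Track a bead's intensity-weighted centroid to estimate xy drift.
# Frames are small background-subtracted crops around one bead,
# given as lists of rows of pixel values.

def centroid(frame):
    total = ty = tx = 0.0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            total += v
            ty += v * y
            tx += v * x
    return (ty / total, tx / total)

def drift(frames):
    # Drift over the movie: last centroid minus first, in pixels.
    y0, x0 = centroid(frames[0])
    y1, x1 = centroid(frames[-1])
    return (y1 - y0, x1 - x0)

# Toy example: a bead that moves one pixel to the right.
first = [[0, 0, 0], [0, 10, 0], [0, 0, 0]]
last = [[0, 0, 0], [0, 0, 10], [0, 0, 0]]
print(drift([first, last]))  # (0.0, 1.0)
```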
susancox
 
Posts: 14
Joined: Wed Jul 18, 2012 3:34 pm

Re: 3B Image processing

Postby Fan » Tue Mar 05, 2013 6:43 pm

susancox wrote:With regard to point 4:

3B treats out-of-focus light as a smoothly varying background, so it is removed in the reconstructed superresolution image. However, if the level of out-of-focus light is high, the background on which blinking and bleaching events must be fitted will also be high, and the fit will be less good. This will degrade the resolution of the reconstructed image. If a very high level of out-of-focus light is present, the algorithm may be unable to identify blinking events at all, and so will fail.

Because of this disadvantage of increased background, I would always analyse each z plane separately. In addition, either a maximum projection or an average projection will certainly change, and may even remove, blinking and bleaching events. It is therefore very likely that a fit to a z-projection would not reflect the underlying structure.

I would note that another problem quite a few people have encountered is drift: some systems, even ones that are relatively stable in xy, can suffer from z-drift. If you are not sure how stable your system is, it may be worth checking with a bead sample.


Hi edrosten and susancox, thanks for your explanations.

Regarding the image acquisition: to remove as much out-of-focus light as possible, would it be better to use confocal microscopy instead of conventional widefield microscopy?

In addition, to maximize the number of bleaching (blinking) events, would it be better to use high-intensity excitation light and bleach the sample by the end of the movie?
Fan
 
Posts: 3
Joined: Fri Mar 01, 2013 7:25 pm

Re: 3B Image processing

Postby susancox » Tue Mar 05, 2013 7:01 pm

The issue with confocal is that it is a scanning technique, so each point is only illuminated for a very short time. This changes the fluorophore blinking dynamics a lot; in practice, what seems to happen is that you don't see blinking in confocal images. The dynamics we have built into the model all assume widefield illumination.

If you did really want to try a sectioning technique, I think it's possible that spinning disk might work: it is confocal, but much more of the sample is illuminated at the same time.

With regard to whether you want to bleach at the end, it depends on what information you want to get out. For example, in a recent experiment I got some really nice data by illuminating with high intensity, but because I needed to take several images at different time intervals I settled for a lower intensity, which didn't give such good resolution. Are you going to need multiple images of the sample?
In general, I would suggest trying several different illumination levels. I have previously had good results where the sample looks substantially bleached at the end (an intensity loss of at least 50%) but is still visible (and, of course, where you can see blinking!), though if you have bleached to this level you will not be able to get another superresolution image of the same area.
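If it helps to quantify "substantially bleached", here is a small sketch (my own check, not part of 3B) computing the fractional intensity loss between the first and last frames of a movie:

```python
# Estimate how bleached a movie is: fractional loss of mean intensity
# between the first and last frames. Around 0.5 or more, with the
# structure still visible, matched the good results described above.

def mean_intensity(frame):
    pixels = [v for row in frame for v in row]
    return sum(pixels) / len(pixels)

def bleach_fraction(first_frame, last_frame):
    return 1.0 - mean_intensity(last_frame) / mean_intensity(first_frame)

# Toy 2x2 frames: mean drops from 115 to 52.5, i.e. ~54% bleached.
first = [[100, 120], [110, 130]]
last = [[45, 55], [50, 60]]
print(round(bleach_fraction(first, last), 2))  # 0.54
```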
susancox
 
Posts: 14
Joined: Wed Jul 18, 2012 3:34 pm

Re: 3B Image processing

Postby sajjad123 » Tue Dec 09, 2014 5:57 am

Hi All
When I take a 25 by 25 pixel square from an image with a pixel size of 80 nm and reconstruct with a pixel size of 10 nm, I expect to get a 200 by 200 pixel image. Instead, I get an image which is 193 by 193 pixels.

This happens for other input image sizes, and changing the reconstruction pixel size doesn't help.

Any thoughts on what's happening here? Or on how to go about overlaying the reconstruction exactly on the original image? I could be as much as 70 nm out here, and that's a bit too much!
sajjad123
 
Posts: 1
Joined: Tue Dec 09, 2014 5:44 am

Re: 3B Image processing

Postby edrosten » Sat Dec 13, 2014 5:58 pm

sajjad123 wrote:Hi All
When I take a 25 by 25 pixel square from an image with a pixel size of 80 nm and reconstruct with a pixel size of 10 nm, I expect to get a 200 by 200 pixel image. Instead, I get an image which is 193 by 193 pixels.

This happens for other input image sizes, and changing the reconstruction pixel size doesn't help.

Any thoughts on what's happening here? Or on how to go about overlaying the reconstruction exactly on the original image? I could be as much as 70 nm out here, and that's a bit too much!


Hi,

There might be a minor bug in the reconstruction code to do with how it computes the reconstructed image size, which could result in the image being very slightly cropped. Can you send me the data and the results of the analysis so I can have a look at what's going wrong in your case?

The reconstruction itself will be correct, but the bug will make it hard to overlay it in the correct position. Pixel (0,0), i.e. the top left, will be aligned correctly with the top-left of a pixel in the original image.
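For reference, the expected geometry is just the arithmetic from the question above, sketched here in Python:

```python
# Expected reconstruction size: a 25 x 25 pixel region at 80 nm/pixel
# reconstructed at 10 nm/pixel should be 25 * 80 / 10 = 200 pixels
# per side; the observed 193 is 7 reconstruction pixels (70 nm) short.

camera_pixels = 25
camera_nm_per_pixel = 80.0
recon_nm_per_pixel = 10.0

expected_side = int(camera_pixels * camera_nm_per_pixel / recon_nm_per_pixel)
print(expected_side)  # 200

# To overlay despite the cropping, anchor both images at the top-left
# corner and scale the original up by this factor.
scale = camera_nm_per_pixel / recon_nm_per_pixel
print(scale)  # 8.0
```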

Regards

-Ed
edrosten
 
Posts: 42
Joined: Wed Jul 18, 2012 2:54 pm

