This document is the official manual of the rapidSTORM software. It is generated deterministically during the build process and can be identified and cited by the version number on the front cover, which serves as the edition number. This document was generated automatically and is valid without a signature.
Copyright © 2014 Steve Wolter, Sven Proppert
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".
January 15, 2014
Table of Contents
The rapidSTORM software is a single-molecule localization microscopy evaluation software. Its aim is to identify single fluorophores in sequences of images of sparsely emitting samples. Such images are generated by techniques such as dSTORM, STORM, PALM and FPALM and are used there to achieve super-resolution imaging. Since a sequence usually consists of several thousand images and nanometre precision is necessary for super-resolution, computational speed and precision are both extremely important. Delivering these attributes is the goal of rapidSTORM.
rapidSTORM aims to support both three-dimensional imaging and spectral unmixing applications. Three-dimensional imaging refers to the computation of individual fluorophores' Z positions from the observation of shape changes in the point spread function. Both astigmatic and multi-layer approaches are supported here. Spectral unmixing refers to distinguishing multiple fluorophore types by their differential emission strength in multiple optical detection paths. For example, a dichroic mirror might be used to project the red part of the fluorescence onto one camera and the blue part onto a second camera.
rapidSTORM is not meant as an all-purpose evaluation tool for super-resolution microscopy or as a camera driver. It is, however, designed to play nicely with other systems and has a good number of evaluation options built in. You are encouraged to interface the direct camera input driver with a camera system or use the different output options for necessary control systems.
You can find discussions, help, advice and announcements for rapidSTORM on the rapidstorm-discuss mailing list. The source code is available from the GitHub repository. Bugs can be reported at the GitHub issue tracker.
It is recommended to read Chapter 1, User interface first to get a broad overview over the user interface. Further information can be found in the relevant reference sections, which can be studied in any order.
rapidSTORM's user interface is based on a single window (the main window) with multiple tabs. Each tab represents either a job configuration or a running job. When rapidSTORM is started, a single job configuration for a localization job is opened by default.
Jobs can be started by clicking the Run button at the bottom of a job configuration. Further job configurations can be found in the menu.
This chapter will first exemplify basic usage and the process of opening jobs in a tutorial. Then, the general behaviour of input elements and the different kinds of job configurations will be explained.
Start an evaluation
Manage an evaluation
Job configurations consist of a series of input fields and control elements that configure the job. Most of them are standard UI tools like buttons and checkboxes, and we assume you are familiar with their use. However, there are a number of specialities, and we will introduce them in this section.
The visibility of many input fields in rapidSTORM is dependent on previous choices, i.e. the value of fields that are displayed above the current field. Therefore, changes in text fields are not immediately committed while you type. While a field is not committed, its values do not take effect, and its background is red. You can commit a textfield by pressing Enter or switching the keyboard focus away from the field, e.g. with the Tab key or clicking another field.
Text fields in rapidSTORM are often part of a matrix. You can recognize matrices by the presence of multiple text fields with one description. Next to a matrix field, a button with a chain is shown.
The chain button is used to "chain" the matrix elements, that is, to change all elements at once (chained) or individually (unchained). When text fields are chained, entering text in one of the fields immediately changes the text in all fields. When text fields are unchained, each field is changed and committed individually. Leading chained text fields in a matrix do not commit their values when they lose focus; only the last text field in the matrix behaves in this way.
Some values in rapidSTORM are optional, that is, they can have no value at all. These fields have a checkbox in front of them. The checkbox controls the presence of a value, and the text field is not displayed while the checkbox is not checked. When you check the checkbox, the text field will be uncommitted and has to be committed in the usual way to take effect.
When a text field in rapidSTORM requires a file name, you can enter the file name directly, select it interactively with a dialog by clicking the Select button next to the field, or drag & drop the file onto the input field.
Localization jobs and replay jobs organize their output in an output options tree. This tree is displayed with a tree element on the left, which shows the structure of the tree and allows selecting a single node, and the node detail field, which shows the detailed options for the tree node selected on the left-hand side.
Each node in the output tree represents an output module, which will perform some action with or display information about the output. Some of the output modules such as the Expression filter modify their input and can have subordinate output modules. The subordinate output modules will act on the localizations that were modified by their parent output modules.
The first, root node (called "dSTORM engine output") symbolizes the output of the Section 7.2, “dSTORM engine”. Each module connected to it receives all localizations found by the engine without filtering or processing.
For an example, consider the tree shown in the graphic: The first Localizations file, Count localizations, Display progress, Filter localizations and Average images receive the unmodified output from the Section 7.2, “dSTORM engine”. The localizations received by the second Localizations file and for Image display are those modified by the Filter localizations module.
Clicking on an output module in the tree view shows this module's configuration in the right part of the display. Each module has specific configuration options, which are documented together with the module's description in the Chapter 5, Output options chapter. You can manage the suboutputs by the following standard control elements:
The job menu is used to open and save job configurations.
You can open new jobs by selecting any of the template entries in one of the submenus. The submenu selects the type of the new job configuration, e.g. Localization for localizing fluorophores in images, Replay for reading a localizations file, or Alignment fitter for fitting a linear alignment matrix to two localization files.
You can save the configuration of the current job tab by clicking on Save .... You can open a saved configuration by using the From file ... menu item in the submenu with the same job type (e.g. Localization for a saved localization configuration).
The user level menu is used to limit the number of available input fields. At low user levels, only the most important input fields are displayed. You can change the user level by clicking on the appropriate menu entry.
When a job is started with the Image display output module active, a window showing an image with the already computed localizations is displayed.
The image display window shows one or two keys on its right side. If there is one key, it shows the mapping of intensity values, i.e. accumulated emission intensities, to colors. The numbers are suffixed with SI unit prefixes, and the unit is displayed at the top of the key.
If the image is hued by coordinate, a second key appears. The left key is in black and white and shows the image intensity, while the right key shows a color spectrum and displays the values indicated by the colours. The two text boxes at the bottom of the right key allow setting the range of the hueing. If the right key is completely black, the range is unknown and must be entered.
Record a bead calibration data file
We assume that you have an appropriate cylindrical lens in the detection path and are using an objective piezo (e.g. PIfoc, Physik Instrumente). The sample should be a thinly coated TetraSpeck bead sample. Beads are preferable because of their superior brightness and, in the case of TetraSpeck beads, their sub-resolution size. You should know the size of the input pixels.
Set the piezo to a sawtooth or triangular pattern that scans your whole planned axial localization range. Exceeding the localization range is not critical; the excess measurements can be cropped later. As an example, we used the following settings:
Produce calibration curve
8 nm/fr * frame
psffwhmx - 25 nm
psffwhmy - 25 nm
foo.txt (columns 6 and 7) against the Z truth (column 3) and check for local maxima and outlier points
posz > 2000 nm && posz < 6000 nm. Go back to Step 11.
foo-sigma-table.txt (see Section 7.7, “Z calibration file” for details about the file format).
Make 3D super-resolved image colour-coded by Z
Make 3D super-resolved Z stack
This usage example shows how to produce two-color images from spectrally unmixed data sets. It was written for an Alexa647/Alexa700 measurement on the Würzburg 1 biplane setup as documented in [Aufmkolk2012]. The first two tasks in this example produce prerequisite knowledge for the image generation, the alignment information (Produce linear alignment matrix) and the F2 ratios, i.e. the relative intensity of fluorophores between the channels.
Produce linear alignment matrix
We assume you have two input data files X1.tif and X2.tif showing 2 spectrally overlapping fluorophores. The images in both files are assumed to be synchronous, spectrally different views of the same sample area.
Check Ignore libtiff warnings
We use Andor SOLIS for recording, which records broken TIFF files.
Set Size of input pixel to precalibrated value (107 nm)
Enable the Minimum localization strength field and set its value very high, adjusting it until the second counter shows approximately as many localizations as the acquisition has frames.
This ensures a sparse population of multi-fluorophore localizations in the output, which can easily be paired through the time coordinate. This is the "bead of opportunity" technique.
Analyze two-colour acquisition
We assume you have the same two input data files as in Produce linear alignment matrix
and have a linear alignment matrix
Select tab Channel 2 and set Input file to
It is crucial to keep the channel naming
Set Size of input pixel (
107 nm) and
PSF FWHM (370 and 390 nm, respectively) in both Input layer tabs
Are the F2 ratios of the fluorophores already known?
Produce a single two-colour image from two-colour localizations file
We assume that you have a localizations file with assigned colors
(fluo == 0) ? amp * 1.5 : amp, varying the
Produce two spectrally separated images from two-colour localizations file
We assume that you have a localizations file with assigned colors
fluo == 0
fluo == 1
This value is a wild guess. It should denote how much wider a fairly large object like a TetraSpeck bead looks than a fluorophore.
The 3D PSF model denotes the functional form of the PSF's change with respect to the emitter's Z position.
This input driver can read the Andor SIF file format produced by Andor Technology software such as SOLIS.
SIF files are stored in an uncompressed binary format with a simple text header. Because reading SIF files cannot be implemented in a forward-compatible way (reading new SIF files with old software), this driver might be unable to open the file; in this case, an error message is shown indicating the maximum known and the encountered version of the Andor SIF structure. Please obtain a newer version of this software in this case.
A bUnwarpJ raw transformation file is given in this field to characterize the channel alignment.
This option gives the mean intensity each photon generates on the camera. In other terms, a value of 5 in this field assumes that the value of a pixel goes up by 5 (on average) every time the pixel is hit by a photon.
This option gives the dark intensity of the camera, i.e. the mean pixel value for a pixel that is hit by no photons during the integration time.
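The two calibration values above define a simple linear camera model: subtract the dark intensity, then divide by the per-photon response. A minimal sketch of this conversion (the function and variable names are illustrative, not rapidSTORM identifiers):

```python
def pixel_to_photons(pixel_value, response_per_photon, dark_intensity):
    """Estimate the photon count behind a raw camera pixel value.

    response_per_photon: mean A/D counts added per detected photon
    dark_intensity: mean pixel value when no photons arrive
    """
    # Values below the dark level are clipped to zero photons.
    return max(pixel_value - dark_intensity, 0) / response_per_photon

# A pixel reading 1000 counts on a camera with dark intensity 100 and a
# response of 5 counts per photon corresponds to (1000 - 100) / 5 = 180 photons.
```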
This option allows splitting an image that contains two logical layers. For example, if the reflected beam of a beam splitter is projected onto the left side of the camera and the transmitted beam onto the right side, this option allows separating them into two different layers that can be used for 3D estimation.
The options are None, which indicates no splitting at all, Left and right, which splits the image at X = Width/2, and Top and bottom, which splits the image at Y = Height/2.
The dual view output always splits images exactly in the middle. Use plane alignment to configure other shifts.
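The splitting rule described above can be sketched as follows (a NumPy sketch; the function and mode names are illustrative, not rapidSTORM identifiers):

```python
import numpy as np

def split_dual_view(image, mode):
    """Split a dual-view frame into two layers, always exactly in the
    middle; other shifts require plane alignment."""
    h, w = image.shape
    if mode == "left-right":          # split at X = Width/2
        return image[:, : w // 2], image[:, w // 2 :]
    if mode == "top-bottom":          # split at Y = Height/2
        return image[: h // 2, :], image[h // 2 :, :]
    return (image,)                   # "None": no splitting at all

frame = np.arange(24).reshape(4, 6)
left, right = split_dual_view(frame, "left-right")
# Each layer is 4x3: the reflected and the transmitted view of the sample.
```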
This choice for Input method indicates that a file is used for input instead of generating the input stochastically or reading directly from a camera.
This option allows skipping the initial frames of the input. The frames are indexed sequentially from 0, and the value given here is inclusive: processing starts at the given frame. Consequently, the default value of 0 in this field causes all images to be loaded, while a value of 5 would skip the first 5 images.
See Also Last image.
The location of the input data is selected in this field. You can put any existing file here. Its interpretation (file type) and the file-type-specific input options are set in the File type field. This field is only available when you have selected File as Input type.
This field is used to select the type of data that are present in the input file. Usually, this is auto-detected, and when the Input file location is changed, this field is updated with the auto-determined file type for the selected input file.
You can enforce the file type instead of auto-detection by explicitly selecting a choice in this field after entering a file name.
See Also File.
The piecewise cubic 3D model (see Section 7.6, “Piecewise cubic model”) determines PSF widths from the Z coordinate.
When multiple channels are selected, this field is used to declare how these multiple channels are combined into a single image or dataset. The available options are:
If this option is enabled, it allows limiting the length of the computation. All images past the given frame number, counted from 0, are skipped; the given frame itself is still computed. For example, a value of 3 would cause the four images with indices 0, 1, 2 and 3 to be computed.
See Also First image.
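Taken together, the First image and Last image fields crop the processed frame range with inclusive bounds. A sketch of the resulting selection (the function name is illustrative):

```python
def select_frames(num_frames, first_image=0, last_image=None):
    """Return the frame indices rapidSTORM would process.

    Both bounds are inclusive and counted from 0, matching the
    First image and Last image fields."""
    last = num_frames - 1 if last_image is None else min(last_image, num_frames - 1)
    return list(range(first_image, last + 1))

# First image = 5 skips the first five frames of a ten-frame stack;
# Last image = 3 computes the four images with indices 0, 1, 2 and 3.
```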
Naive pixel coordinates are computed identically to No alignment and then transformed by an affine transformation matrix.
This input driver can be used to read the files written by the Localizations file output module. The file format is documented with the output module.
The PSF model will be considered valid up to this distance from the point of sharpest Z. Any considered Z coordinate further away than this value will be immediately discarded.
If this option is enabled, each input image is flipped such that the topmost row in each image becomes the bottommost row.
The PSF width is constant, and no Z coordinate is considered.
The upper left pixel is assumed to be at (0,0) nanometers. The Size of one input pixel field gives the offset of each pixel to the next. For example, if the pixel sizes are 100 nm in X and 110 nm in Y, the pixel at (10,15) is at (1,1.65) μm.
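The No alignment mapping is a plain per-axis multiplication; the example from the text works out as follows (function name illustrative):

```python
def pixel_to_sample_nm(px, py, pixel_size_x_nm, pixel_size_y_nm):
    """Map a pixel index to sample-space coordinates under No alignment:
    the upper left pixel sits at (0, 0), and the Size of one input pixel
    field gives the offset of each pixel to the next."""
    return px * pixel_size_x_nm, py * pixel_size_y_nm

# The example from the text: 100 nm x 110 nm pixels, pixel (10, 15)
x_nm, y_nm = pixel_to_sample_nm(10, 15, 100, 110)
# -> (1000 nm, 1650 nm), i.e. (1, 1.65) micrometers
```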
This field is used to declare how many Section 7.1, “Input channels” should be made available. New input channel configuration tabs are added or removed as needed when this field is changed.
This is the location and filename prefix for rapidSTORM output files. More precisely, the filenames for rapidSTORM's output files are produced by adding a file-specific suffix to the value of this field.
This field is automatically set when a new input file is selected.
This option allows describing the mapping from input pixels to sample space coordinates. For the majority of single-layer applications, the default choice of No alignment is appropriate.
A plain text file containing a 3x3 affine matrix for linear alignment, with the translation given in meters. The matrix is assumed to yield the aligned coordinate positions if multiplied with a vector (x, y, 1), where x and y are the unaligned coordinates. The upper left 2x2 part of the matrix is a classic rotation/scaling matrix, the elements (0,2) and (1,2) give a translation. For now, the bottom row must be (0,0,1), i.e. not a projective transform.
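The matrix convention described above can be sketched numerically; the example matrix below is illustrative (identity rotation/scaling with a translation of 1e-7 m and 2e-7 m in the (0,2) and (1,2) elements), not data from an actual alignment fit:

```python
import numpy as np

def apply_linear_alignment(matrix, x, y):
    """Apply a 3x3 linear alignment matrix to unaligned coordinates (in
    meters). The matrix multiplies the column vector (x, y, 1); its bottom
    row must be (0, 0, 1), i.e. no projective component."""
    assert np.allclose(matrix[2], [0, 0, 1])
    xa, ya, _ = matrix @ np.array([x, y, 1.0])
    return xa, ya

M = np.array([[1.0, 0.0, 1e-7],   # upper left 2x2: rotation/scaling
              [0.0, 1.0, 2e-7],   # (0,2) and (1,2): translation in meters
              [0.0, 0.0, 1.0]])   # bottom row fixed to (0, 0, 1)
```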
The Z coordinate of the focal plane (the plane where the PSF has the lowest width). You can indicate astigmatism by giving different positions for the two dimensions of one layer, or biplane by giving different positions for two layers.
The polynomial 3D model (see Section 7.8, “Polynomial model”) determines PSF widths from the Z coordinate.
This option allows selecting a single layer from a multi-layer acquisition. The default option, All planes, has no effect. All other options select a single layer from the available input layers, which will be the only layer that is processed.
The full width at half maximum of the optical point spread function. More precisely, the typical width of an emitter's image should be entered here, including fluorophore size and camera pixelation effects. rapidSTORM will fit spots in the images with a Gaussian with the same FWHM as given here. If the PSF is unknown, it can be determined semi-automatically by using the Estimate PSF form output.
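Since rapidSTORM fits a Gaussian with the FWHM entered here, the corresponding standard deviation follows from the standard relation FWHM = 2 sqrt(2 ln 2) σ. A minimal conversion sketch:

```python
import math

def fwhm_to_sigma(fwhm_nm):
    """Standard deviation of a Gaussian with the given full width at
    half maximum: FWHM = 2 * sqrt(2 * ln 2) * sigma ~ 2.3548 * sigma."""
    return fwhm_nm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

# A 370 nm FWHM (a typical value in the two-colour example above)
# corresponds to a sigma of roughly 157 nm.
```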
This field gives the size of the sample part that is imaged in a single camera pixel. Typically, this value should be on the order of 100 nm. See [Thompson2002] for a discussion of ideal values.
Naive pixel coordinates are computed identically to No alignment. A nonlinear transformation image is read from a file. The naive coordinates are projected into the source image of the nonlinear transformation, and the transformation is applied with linear interpolation.
This input driver reads a multipage TIFF stack. All images in the TIFF stack must have the same size, and be greyscale images of up to 64 bit depth. Both integer and floating point data are allowed, even though all data are converted internally to 16 bit unsigned format.
The resolution (pixel size) of the file given in bUnwarpJ transformation. This resolution is not necessarily the same as the value of Size of one input pixel. For example, you can super-resolve two images with an easily aligned structure such as the nuclear pore complex ([Loeschberger2012]), which will result in a raw transformation with a 10 nm resolution.
There is one transmission coefficient field for each layer and fluorophore. The fields give the relative intensities of each fluorophore in each layer, and will be used as scaling factors for the intensity for multi-colour inference or for biplane imaging.
As an example, values of 0.1 in layer 1, fluorophore 0 and 0.9 in layer 2, fluorophore 0 would indicate that 90% of the photons arrive on the second camera and 10% on the first.
These entries give the speed of PSF growth. They have to be determined experimentally, and we know of no reliable method to do so. For more information, see Section 7.8, “Polynomial model”.
The filename of a Z calibration file (Section 7.7, “Z calibration file”) should be given in this option. The calibration file will be used to determine PSF widths as a function of the Z coordinate during fitting.
Employ the computational optimization of separating the X and Y dimensions of the Gaussian for computing the function's derivatives. This optimization is only performed if the alignment is set to No alignment, but can drastically improve computation time for large fit windows.
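The reason this separation works: an axis-aligned Gaussian factorizes into an outer product of two 1-D profiles, so values (and derivatives) can be computed from two short vectors instead of evaluating every pixel of the fit window. A sketch of this identity (illustrative, not rapidSTORM code):

```python
import numpy as np

x = np.arange(-3, 4)          # a 7-pixel fit window along one axis
sigma = 1.5

# Separated form: one 1-D profile per axis, combined by outer product.
g1d = np.exp(-x**2 / (2 * sigma**2))
separable = np.outer(g1d, g1d)                 # O(n) exponentials

# Direct form: one exponential per pixel of the 2-D window.
gx, gy = np.meshgrid(x, x, indexing="ij")
direct = np.exp(-(gx**2 + gy**2) / (2 * sigma**2))  # O(n^2) exponentials
# Both constructions yield the same window.
```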
The full width of a square structuring element for the background averaging.
Perform the Section 7.11, “Two-kernel analysis” computation, which sets the two kernel improvement field.
Fit the emission intensity in each plane independently. This option can be useful if the number and nature of fluorophore populations in the sample is unknown. However, it will break multi-colour inference, and all Transmission of fluorophore N fields should be set to 1.
Smooth by applying a morphological erosion with a square structuring element of the specified size.
The full width of a square erosion mask.
The nonlinear fit process for a localization attempt is stopped after this number of iterations.
The fit judging method controls the decision whether a set of fitted PSF parameters is a localization or just background noise.
See Also Fixed global threshold.
All pixels within this radius of a spot are used for fitting. The selected pixels form the data points for the nonlinear fitting routine, and the PSF is fitted to their intensities.
A larger value here allows more precise fitting at the cost of slower computation.
This fit judging method judges parameter sets by their intensity. If the intensity surpasses the threshold, the parameter set is counted as a localization, and discarded otherwise.
See Also Intensity threshold.
The full width of a square structuring element for the foreground averaging.
After successfully fitting a spot with a least squares error model, improve the fitted position using a maximum likelihood error model. This improves precision, especially for low photon counts, in exchange for a considerable increase in computation time. The Camera response to photon and Dark intensity fields must be set if this option is used.
The usual lambda factor of Levenberg-Marquardt fitting controls the size of the trust region for Gauss-Newton steps. Refer to a good textbook for its meaning, e.g. [Recipes].
Minimum fitted emission intensity necessary for a spot to be considered a localization. If the fitted position has an intensity lower than this value, it is discarded as an artifact.
See Also Fixed global threshold.
Fit the lateral emitter position (x,y) in each plane independently. This option can mitigate small errors in alignment at the cost of reduced precision.
The nonlinear fit process for a localization attempt is continued while the lateral mean position (x,y) changes absolutely by more than this parameter.
Currently, Levenberg-Marquardt fitting is the only implementation of a spot fitter, i.e. a routine that localizes a fluorescence emission to subpixel precision. The LM fitter works by building a PSF model (in most cases a Gaussian function), estimating crude initial guesses for the parameters of this model, and then optimizing the distance between the data in the immediate surroundings of the spot and the theoretical model. The parameters of the model then give the location of the emitter and its intensity.
This fit judging method judges parameter sets by their intensity and the local background. Both values are the estimations from fitting the PSF model to the data. If the ratio of intensity to square root of local background surpasses a threshold, the parameter set is counted as a localization, and discarded otherwise. The square root of the background is used because it estimates the standard deviation of a Poisson-distributed background. The Dark intensity and the Camera response to photon should be set to use this option.
See Also Signal-to-noise ratio.
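The acceptance rule described above can be sketched as a single comparison (function name illustrative; rapidSTORM applies this internally to the fitted parameter sets):

```python
import math

def passes_snr_threshold(intensity, local_background, threshold):
    """Local relative threshold: accept a parameter set if the ratio of
    fitted intensity to the square root of the fitted local background
    (an estimate of the Poisson background's standard deviation)
    surpasses the threshold."""
    if local_background <= 0:
        return intensity > 0
    return intensity / math.sqrt(local_background) > threshold

# An intensity of 900 over a background of 100 gives a ratio of 900/10 = 90.
```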
The minimum distance between local maxima of the smoothed image that will be used as fit start locations. In other terms, this entry gives the size of a local maximum suppression window that selects the spots.
High values will ensure a higher noise tolerance at the cost of some missed localizations.
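The local maximum suppression window mentioned above can be sketched directly: only pixels that are the maximum of their surrounding window survive as candidate fit start locations (a naive O(n²·w²) sketch for clarity, not the engine's implementation):

```python
import numpy as np

def spot_candidates(smoothed, window):
    """Keep only pixels that are the maximum of a square suppression
    window of the given width, as candidate fit start locations."""
    h, w = smoothed.shape
    r = window // 2
    candidates = []
    for y in range(h):
        for x in range(w):
            y0, x0 = max(y - r, 0), max(x - r, 0)
            patch = smoothed[y0 : y + r + 1, x0 : x + r + 1]
            if smoothed[y, x] > 0 and smoothed[y, x] == patch.max():
                candidates.append((y, x))
    return candidates

img = np.zeros((7, 7))
img[2, 2] = 5.0   # a bright smoothed spot
img[2, 3] = 4.0   # a neighbouring shoulder, suppressed by the window
cands = spot_candidates(img, window=3)
```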
Smooth the input by performing a morphological fillhole transformation (using reconstruction by dilation) followed by a rectangular erosion.
Treat the PSF widths as variables in the fit process rather than as constants. The estimated or fixed standard deviation parameters act as initial values for the estimation when the free covariance matrix is selected.
Enabling this option will drastically reduce the localization precision and increase the number of noise localizations, but it is useful when the PSF width varies between spots (e.g. in 3D estimation). If the width is merely unknown, you should prefer the Estimate PSF form output.
See also: PSF model
The nonlinear fit process for a localization attempt is continued while any parameter (except the lateral means x and y) changes relatively by more than this parameter.
Minimum ratio of emission intensity to square root of background signal intensity necessary for a spot to be considered a localization.
See Also Local relative threshold.
Smooth by applying a square moving window average filter.
Smooth by applying a square moving window average filter. Then subtract the result of a wider square moving window average filter, which estimates the local background and can thereby deal with uneven backgrounds.
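This difference-of-averages scheme can be sketched with two box filters, each implemented as two 1-D passes (the function names are illustrative; the foreground/background widths correspond to the foreground and background averaging fields above):

```python
import numpy as np

def box_average(img, width):
    """Square moving-window average, computed as two 1-D passes
    (the separable form of a Spalttiefpass)."""
    kernel = np.ones(width) / width
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def average_minus_background(img, fg_width, bg_width):
    """Narrow box average minus a wider one: the wide average estimates
    the local background, so uneven backgrounds are removed."""
    return box_average(img, fg_width) - box_average(img, bg_width)
```

On a flat image, foreground and background averages agree and the result vanishes away from the borders, as expected for background removal.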
Smooth by applying a square median filter of the specified width.
Smooth the input images with a Gaussian kernel of the specified width. This kernel can be set to the PSF size or specified independently. Gaussian smoothing is often suboptimal, see [WolterDiplomarbeit] for details.
The standard deviation (σ) of a Gaussian smoothing kernel.
Select a smoothing method to be employed before selecting local maxima as spot candidates. The standard method here is smoothing with an average mask (Spalttiefpass), which gives good performance for most images. Median smoothing is slower, but sometimes more accurate and blurs less. Erosion (also known as local minimum filter) is faster than the median filter and gives similar results for small spots (standard deviation close to 1), while the fillhole transformation followed by erosion is better for large spots. For a complete discussion and quantitative comparison, see [WolterDiplomarbeit] and [Wolter2010].
The spot fitting method is the method for converting suspected fluorophore positions (spots) into localizations. There is currently only one useful choice, Levenberg-Marquardt fitter.
The rapidSTORM engine uses dynamic thresholding, i.e. it fits the spots at the most intense positions in the smoothed image first and continues in order of decreasing intensity. Fitting is aborted when a number of spots equal to the motivation has been rejected by the Fit judging method. This parameter controls the motivation.
Higher values in this field will cause more localizations to be found, albeit at the cost of more false positives.
Set the psffwhmx and psffwhmy fields of localizations to the widths used in computation. If this field is checked, the localization output files will contain PSF width information, and all outputs working with localization widths depend on this checkbox.
When performing two-kernel analysis (see Section 7.11, “Two-kernel analysis”), any double-kernel fit with the two kernels further apart than this number is immediately discarded, resulting in a two-kernel improvement of 0.
This parameter ensures that large fitting windows and two-kernel analysis can cooperate.
See Also Compute two kernel improvement.
Compute values and derivatives of the PSF with 64 bit wide floating point numbers instead of 32 bit. This ensures higher reliability and precision, at a small speed cost.
This chapter is dedicated to the description of the different output modules that can be inserted into the output tree. Each of these modules is defined by a general description of its purpose and its mode of operation, a table of the configuration options it displays in the job options window (if any), and a table of the configuration options it displays in the job status window (if any).
The localizations file module stores received localizations in an ASCII text file. This text file is line-based, with one line per localization and one header line. Both the header line and the localization lines consist of at least 4 space-delimited fields. For the localizations, the fields denote the X and Y positions in pixels, the image number of the localization and the strength of the localization (Gaussian fit amplitude) in camera A/D counts. The header consists of the maximum allowed X/Y coordinates and image number and of a 0 for the amplitude.
More fields may be present, but they are not documented here. Forward compatibility can be achieved by ignoring all but the first four fields.
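A forward-compatible reader therefore only needs the first four fields. A minimal parsing sketch over a hypothetical example file (the values below are invented for illustration):

```python
import io
import numpy as np

# Hypothetical example in the documented format: a header with the maximum
# X/Y coordinates, maximum image number and a 0 for the amplitude, then
# one line per localization (X px, Y px, frame number, amplitude in counts).
text = """255 255 1000 0
12.5 40.25 0 1500
13.0 41.00 0 900
"""

f = io.StringIO(text)
header = f.readline().split()
data = np.loadtxt(f, usecols=(0, 1, 2, 3))  # ignore any extra fields
x, y, frame, amplitude = data.T
```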
The image display module displays all received localizations in an image and optionally saves this image to a file.
The image construction is performed with the algorithm documented in [WolterDiplomarbeit]. Summarizing this algorithm, a localization density map is constructed by linearly interpolating and accumulating the localizations, weighted by their amplitude, on a pixel lattice. This density map is discretized with a very high depth to generate a high dynamic range image; this high-dynamic range image is reduced to a displayable range via weighted histogram equalization.
The histogram equalization operation modifies the absolute brightness differences between image areas to optimize contrast. This means that a pixel in the result image with a brightness of 200 did not necessarily receive twice as many localizations as a pixel with brightness 100. Histogram equalization guarantees only that a brighter pixel represents at least as many localizations as a dimmer pixel.
You can change the extent to which histogram equalization is performed by changing the histogram equalization power between 0 and 1. 0 means no histogram equalization: pixel values are linear in localization density. 1 means full equalization: all brightness values appear equally often in the image. While images without histogram equalization applied often suffer in contrast because a few very bright pixels suppress the normal structures, too much histogram equalization overemphasises regions with weak signals and background noise. The default value for the histogram power is usually a good compromise.
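The effect of the power parameter can be illustrated on a 1-D brightness array: power 0 keeps values linear in density, power 1 makes every brightness level equally frequent, and intermediate powers blend the two. The blending rule below is an illustrative simplification, not the exact rapidSTORM weighting:

```python
import numpy as np

def equalize(brightness, power):
    """Sketch of histogram equalization with an adjustable power.

    power = 0: output linear in localization density
    power = 1: full equalization (all levels equally often)
    """
    linear = brightness / brightness.max()
    ranks = np.argsort(np.argsort(brightness))
    equalized = ranks / (len(brightness) - 1)   # uniform brightness levels
    return (1 - power) * linear + power * equalized
```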
Several different coloring schemes are available for the resulting image. All of these operate on the histogram-normalized brightness, but display the brightness in various ways to enhance information depth, produce a pretty-looking image or give information about the time coordinate. All of these color schemes show brighter colors to indicate more localizations; to invert this meaning and show, for example, black localizations on a white background, use the "Invert colours" option.
The black and white colour scheme is the fastest colour code. It displays the equalized brightness directly on a scale ranging from black (no localizations) to white (maximum amount).
The black, red, yellow and white colour scheme offers a higher dynamic range by displaying the lowest third of the brightness values on a scale ranging from black to red, the middle third on a scale from red to yellow and the highest third on a scale from yellow to white. In total, about 760 brightness levels are displayed.
The constant colour colour scheme is similar to the black-and-white scheme, but uses an arbitrary colour instead of white. You can use the "Select colour hue" and "Select saturation" fields to choose the colour.
The Vary hue by time coordinate colour scheme colour-codes each localization by its time coordinate, that is, by the number of the image it occurred in. The code starts at the hue selected in "Select colour hue" and then follows the colour circle of the HSV colour model, that is, ranges from red over yellow, green, cyan, blue to violet. If multiple localizations contribute to the same pixel, hue and saturation are interpreted as angle and radius on a plane, converted to Cartesian coordinates, averaged arithmetically and converted back to hue and saturation. For example, if red (hue 0, saturation 1) and yellow-green (hue 0.25, saturation 1) are present with amplitudes 1 and 4 on a pixel, they are converted to the points (1,0) and (0,1), averaged to (0.2,0.8) and transformed back to (hue 0.21, saturation 0.82), which is a slightly pale yellow. Observe that a pixel with localizations equally distributed over a long range of images tends to have a low saturation, that is, appear white.
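The angle-and-radius averaging described above can be reproduced in a few lines; this sketch mirrors the manual's worked example:

```python
import math

def average_hue_saturation(pixels):
    """Average (hue, saturation, weight) triples: interpret hue as an angle
    and saturation as a radius, convert to Cartesian coordinates, take the
    weighted mean and convert back to hue and saturation."""
    sx = sy = total = 0.0
    for hue, saturation, weight in pixels:
        angle = 2 * math.pi * hue
        sx += weight * saturation * math.cos(angle)
        sy += weight * saturation * math.sin(angle)
        total += weight
    sx /= total
    sy /= total
    hue = (math.atan2(sy, sx) / (2 * math.pi)) % 1.0
    return hue, math.hypot(sx, sy)

# The manual's example: red (hue 0) with amplitude 1 and yellow-green
# (hue 0.25) with amplitude 4 average to a slightly pale yellow.
hue, sat = average_hue_saturation([(0.0, 1.0, 1.0), (0.25, 1.0, 4.0)])
print(round(hue, 2), round(sat, 2))  # → 0.21 0.82
```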
If a Repeater service is available through a parent module, most image display parameters can be changed even after the job is started.
Select one of the color schemes given in the section called “Color coding”.
If this checkbox is checked, the result image will be shown online in an image window. Disable for faster computation.
Invert the color display. Each color is inverted along the three primary color axes (red, green, blue), turning red into cyan and black into white.
Set the aspect ratio of source (camera detector) pixels to target pixels. If set to 10, for example, the result image will have 10 times more pixels in X direction and 10 times more pixels in Y direction.
If a filename is given here, the final result image will be saved to the given file. The file extension determines the file type, and all common file formats (GIF, JPG, PNG, TIF) are supported.
Set the hue for constant color coding or the starting hue for variable hueing. Ranges from 0 (red) over 1/6 (yellow), 1/3 (green), 1/2 (cyan), 2/3 (blue) and 5/6 (violet) to 1 (red again).
Set the saturation for constant color coding. Ranges from 0 (no colors at all, only grey) to 1 (fully saturated colors).
Interactively change the intensity scale of the result image from a linear scale (value 0) to a contrast-enhanced image (value 1).
Shows the current resolution enhancement (see job options table for a definition) and allows, if a Repeater service is present, to dynamically change it.
Save the image at the current state of computation to the file given by Save image to.
Shows and changes the file name where the result image should be written to. The file extension determines the file type, and all common file formats (GIF, JPG, PNG, TIF) are supported.
Control element for zooming in (positive values) or out (negative values) in the displayed image.
This output module stores all its input localizations in the computer's main memory. It provides a Repeater service to its submodules, enabling features like interactive range selection in the image viewer. However, it does not store the source images, which prevents modules like Estimate PSF form from working as submodules of this module.
Slice localization set — Split the input along the time axis to make a movie
This output module can automatically slice one input acquisition into a number of output acquisitions, each of which is processed separately.
After determining the number of slices, this module makes one copy of its children output modules for each slice. Each copy will receive only the localizations in its slice. To keep memory usage low, the copies will be made on demand, when the first localization of a slice arrives, and will be closed as soon as the last localization of its slice has been processed.
Pattern for output file basenames. The output files (for example for images or localization files) will be generated by this pattern, where %i will be replaced with the slice number.
Each slice will have as many images as specified here. Suppose slices in an acquisition 1000 images long start at 0, 300, 600 and 900 and this parameter is set to 500; then there will be the four slices 0-499, 300-799, 600-999 and 900-999.
One slice will be started at each image number divisible by this number. Suppose 1000 images are in an acquisition and this parameter is set to 300; then slices will be started at images 0, 300, 600 and 900.
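The interplay of slice length and slice distance can be sketched directly from the two examples above:

```python
def slice_ranges(n_images, slice_length, slice_distance):
    """Compute the [first, last] image ranges of the slices, following the
    manual's examples: a slice starts at every multiple of slice_distance,
    and each slice covers slice_length images (clipped at the end of the
    acquisition)."""
    return [(start, min(start + slice_length, n_images) - 1)
            for start in range(0, n_images, slice_distance)]

print(slice_ranges(1000, 500, 300))  # → [(0, 499), (300, 799), (600, 999), (900, 999)]
```

Note that slices may overlap (length 500, distance 300) and that the last slice is truncated at the end of the acquisition.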
This module does not process localizations, but rather averages all source images it receives to produce one averaged image. This image can be used to see how the specimen would have looked without dSTORM processing.
Show a progress bar to indicate state of computation.
Estimate PSF form
Assume that the polynomial 3D widening parameters are the same in all layers, and fit them as common coordinates.
Allow the X and Y focal planes to differ. If this option is unchecked, both planes are set to the same value and fitted together. Checking this option allows fitting data on an astigmatic 3D setup, e.g. with a cylindrical lens.
Assume that the PSF has the same parameters in X and Y, and fit these parameters in common. Enhances estimation precision and stability at the cost of flexibility and truthful replication of the PSF. This option applies to PSF width and 3D widening factors, but not to the focus plane coordinates, which are controlled by Allow astigmatism.
When fitting multiple layers, fit intensities in all layers independently. This allows fitting a multi-plane data set without knowing the prefactors or making assumptions about fluorophore populations, at the cost of reduced precision and the inability to fit transmission coefficients.
When this option is checked, the Z position of the best-focused planes (i.e. the Z positions at which the X and Y PSF FWHMs are smallest, respectively) will be fitted to the data.
When this option is checked, the width of the PSF will be fitted to the data. Otherwise, it will be treated as ground truth and not changed. If the PSF FWHM has been established reliably on other measurements, disabling this option enhances estimation stability and reliability.
When this option is checked, the fluorophore transmission factors are fitted to the data. This allows for straightforward multi-color calibration on live data.
Include all pixels into the fit window that are closer than this radius to the selected spot. This is currently an L1 distance.
When fitting multiple layers, fit fluorophore positions in all layers independently. This can mitigate an imprecise plane alignment at the cost of reduced precision.
If this option is checked, the selection window (as described above) is shown. Otherwise, all eligible spots are used in estimation.
This option configures the total number of spots that are used to fit the PSF form. Localizations are scanned and presented until this number of spots is selected.
This option configures the maximum number of spots that are eligible for estimation in each source image. The spots with the highest amplitudes are selected; the rest are discarded and not shown in the selection window. A fractional number n can be entered in this field, translating to one spot being picked every 1/n frames. For example, a value of 0.25 selects one spot every fourth frame.
Assume that the PSF FWHM is the same in all layers, and fit it as a common coordinate. This option is related, but orthogonal to Assume circular PSF.
Optimize for maximum likelihood (ML) of the data with the optimized model instead of minimizing the squared deviations. You should use the same distance model for fitting the PSF as you configured in the engine.
Treat the fluorophore's Z position as ground truth, i.e. do not fit it. This option is useful when calibrating polynomial 3D and in conjunction with setting a ground-truth Z position in the Expression filter output.
Look up 3D via sigma difference
This output sets the Z position on its input localizations by computing the difference between the X and Y PSF widths and looking up the difference in the provided Section 7.7, “Z calibration file”. Thereby, a Z coordinate can be estimated on data that have been fitted with free PSF widths. For more information, see [Henriques2010].
It is preferable to provide the Section 7.7, “Z calibration file” directly to the fitting module by choosing the Interpolated 3D as 3D PSF model. Thereby, a degree of fitting freedom is avoided and precision is enhanced.
Input file name for the calibration table in which the difference is looked up. The format is described in Section 7.7, “Z calibration file”.
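The lookup performed by this output can be sketched as follows. The nearest-row search is a simplification; the real module interpolates in the calibration table, and the toy calibration data are invented for illustration:

```python
def lookup_z(sigma_x, sigma_y, calibration):
    """Estimate Z from the difference of the fitted X and Y PSF widths by
    finding the calibration row whose width difference is closest.
    calibration is a list of (z, sigma_x, sigma_y) rows; a real
    implementation would interpolate between rows rather than pick the
    nearest one."""
    difference = sigma_x - sigma_y
    return min(calibration, key=lambda row: abs((row[1] - row[2]) - difference))[0]

# Toy calibration table: the width difference grows linearly with Z.
calibration = [(z, 0.2 + 0.0001 * z, 0.2 - 0.0001 * z) for z in range(-400, 401, 100)]
print(lookup_z(0.23, 0.17, calibration))  # nearest row to a width difference of 0.06
```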
3D PSF width calibration table
This output stores a Section 7.7, “Z calibration file” for the relationship between the Z position and the PSF widths. It assumes that the Z position and PSF widths are set on its input localizations, orders the input localizations, smoothes them with cubic B splines and stores the values at the control points in the calibration output file.
Output file name for calibration table. The format is described in Section 7.7, “Z calibration file”.
Number of breakpoints to use for cubic B-spline interpolation. The number of breakpoints determines the number of cubic pieces used to approximate the measured PSF width curve. This number can be viewed as the adaptability of the sigma curve to the measured PSF widths.
This output allows filtering on and changing individual localization fields. It is distributed into two parts: The simple input fields for minimum localization strength, linear drift correction and maximum two kernel improvement give easy access to often-needed manipulations, while the more complex test-based expressions give full access to all localization fields.
The number of Value to assign to and Expression to assign from fields in this output is variable and can be changed with the Number of expressions field. These fields form pairs called "actions", and all of the actions take effect. The order of application is top-to-bottom.
This text field gives an arithmetic expression for the value of the action. An expression consists of variables explained in Value to assign to and constants, linked by the usual arithmetic operators. Constants are given as numbers with units; the units m, ADC and fr denote meters, camera A/D counts and frames, respectively, and can be prefixed with the usual SI prefixes (e.g. u for μ). The available operators are given in Table 5.4, “Operators in expressions”. Examples for complete actions are listed in Table 5.5, “Example actions”.
If this field is enabled, all localizations are shifted in space by an amount proportional to the time coordinate of their detection. This is used to implement linear drift correction.
If this field is checked, all localizations with a two-kernel improvement (quotient of residues with two kernels and residues with one kernel) greater than the given value are discarded. This value is used for artifact suppression and is only available when Compute two kernel improvement has been checked.
If this field is enabled, all localizations with a strength lower than its value are discarded and not displayed or stored in the suboutputs. The localization strength denotes the total signal detected in the PSF, i.e. the integral of the point spread function.
This option box determines the effect of the action. It can either take the value of Filter, in which case Expression to assign from must give a boolean result, or a variable name, in which case Expression to assign from must give a numerical result with matching dimensions. The possible variable names are formed by applying the modifiers from Table 5.2, “Variable name modifiers for expressions” to Table 5.1, “Variable base names for expressions”, with examples in Table 5.3, “Variable example names for expressions”.
Table 5.1. Variable base names for expressions
|Variable base name||Dimension||Type||Meaning|
|pos||m||3D vector||Position in sample space|
|amp||ADC||scalar||Intensity of emission|
|frame||fr||scalar||Time of emission|
|psffwhm||m||2D vector||PSF model's full width at half maximum|
|chisq||dimensionless||scalar||Fit residues. For least-squares fitting, this value gives the sum of squared deviations between PSF model and data in photons squared. For MLE fitting, this value gives the total likelihood score. In both cases, the value is not normed in any way and unlikely to be comparable between measurements.|
|fluo||dimensionless||scalar||Fluorophore type (0 for first fluorophore, 1 for second, etc.)|
|bg||ADC||scalar||Local background intensity|
Table 5.2. Variable name modifiers for expressions
||… in X dimension|
||… in Y dimension|
||… in Z dimension|
||uncertainty of …|
||lower bound of …|
||upper bound of …|
Table 5.3. Variable example names for expressions
|Example variable name||Meaning|
|posx||Localization's X coordinate|
|posminy||Lowest possible Y localization coordinate|
Table 5.4. Operators in expressions
||numerical||grouping of numerical expressions|
||numerical||choice operator (if
||boolean||comparison for equality, inequality, equality (same as
Table 5.5. Example actions
|Value to assign to||Expression to assign from||Intention|
||Only show localizations from the first fluorophore|
||Only show localizations right of a line at 1 μm|
||Discard emissions with intensities above 5000 counts|
||Limit region of interest to a single bead early in the acquisition to calibrate 3D|
||Set a Z ground truth raising 8 nanometre per frame|
||Make all localizations from the first fluorophore type more intense by a half|
Before starting a computation job, each filter output module must have at least one output assigned. Filter outputs are those that can have other outputs as children; in other words, output modules that have the "Add output" control element.
You can fix this error by assigning an output to each filter output. If you just want to test something and are concerned about file overwriting, choose a simple output like Count localizations.
Table of Contents
The dSTORM engine is the collective term for the core of the rapidSTORM software that is not part of the input or output drivers. The dSTORM engine is responsible for converting a vector of input images into a set of localizations, performing the steps of spot detection, spot fitting and spot judging defined in [WolterDiplomarbeit]. While the exact algorithms are out of the scope of this manual, a short summary of the engine operation can be given:
First, the input images are smoothed to reduce the amount of noise present. The local maxima of these noise-reduced images are located and stored as spot candidates, that is, positions where spots are likely to be present. The candidates are sorted with the strongest values first in the list and then nonlinearly fitted with the PSF model in the order established by that sorting. Once three successive candidates have failed to be fitted as localizations, the fitting process for the image is aborted.
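The candidate search and the abort-after-three-failures rule can be sketched as follows. The box filter and 3x3 window are illustrative assumptions; the engine's actual smoothing filter and fit routine are configurable and more elaborate:

```python
import numpy as np

def smooth(image, size=3):
    """Box-average smoothing, an illustrative stand-in for the engine's
    configurable noise reduction filter."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / size ** 2

def spot_candidates(image, size=3):
    """Find local maxima of the smoothed image, sorted strongest-first."""
    s = smooth(image, size)
    candidates = []
    for y in range(1, s.shape[0] - 1):
        for x in range(1, s.shape[1] - 1):
            if s[y, x] >= s[y - 1:y + 2, x - 1:x + 2].max():
                candidates.append((s[y, x], y, x))
    candidates.sort(reverse=True)
    return [(y, x) for _, y, x in candidates]

def fit_image(image, try_fit, max_failures=3):
    """Fit candidates in order of strength; abort once three successive
    candidates fail, as the engine does. try_fit stands in for the
    nonlinear PSF fit and returns a localization or None."""
    localizations, failures = [], 0
    for y, x in spot_candidates(image):
        fit = try_fit(image, y, x)
        if fit is None:
            failures += 1
            if failures >= max_failures:
                break
        else:
            failures = 0
            localizations.append(fit)
    return localizations
```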
A localization denotes the position (space and time) and the strength of a suspected fluorophore emission. It denotes only a suspected position because the high noise conditions in photoswitching microscopy introduce false localizations, either through background noise or multiple close-by emitters.
Localization coordinates are not given on a lattice as pixels are, but are rather subpixel-accurate. The accuracy is mostly given by the emission strength and the background noise as described by [Thompson2002].
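The accuracy estimate of [Thompson2002] can be written out directly. The sketch below implements the precision formula from that paper; the example numbers are invented for illustration:

```python
import math

def thompson_precision(s, a, N, b):
    """Localization precision (standard deviation, in the same length unit
    as s and a) following Thompson et al. (2002): s is the PSF standard
    deviation, a the pixel size, N the collected photon count and b the
    background noise per pixel."""
    variance = (s ** 2 / N                              # photon shot noise
                + a ** 2 / (12 * N)                     # pixelation noise
                + 8 * math.pi * s ** 4 * b ** 2 / (a ** 2 * N ** 2))  # background
    return math.sqrt(variance)

# E.g. a PSF of 130 nm standard deviation on 100 nm pixels,
# 1000 photons and background noise 10 (invented example values):
print(round(thompson_precision(130, 100, 1000, 10), 1))  # → 9.5
```

As the formula shows, precision improves with photon count and degrades quickly with background noise, which is why localization strength filtering matters.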
rapidSTORM can fit both astigmatic and biplane 3D data by introducing an explicit Z parameter into the point spread function. It is assumed that the 2D Gaussian model describes the PSF and that the widths of the PSF in X and Y direction are functions of the emitter's Z coordinate. Two different models are supported for this function, a set of piecewise cubic functions or a single quartic function (polynomial model).
The points in a Section 7.7, “Z calibration file” are interpolated with a cubic B-spline. This approach is described in depth in [Proppert2014]. A short overview of the theoretical background is available from Sven Proppert's description included with this package.
A Z calibration file is a plain text file with three whitespace-separated columns. The first column gives an emitter's Z coordinate in nanometres, and the second and third columns give the standard deviation of a Gaussian describing the PSF for an emitter at the given Z coordinate. They describe the X and Y dimension of the camera, respectively, and are given in micrometers.
The range of the Z calibration file (lowest and highest Z coordinate) gives the working depth of the approximation.
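A minimal parser for this three-column format might look as follows (the function name is illustrative, not part of rapidSTORM):

```python
def read_z_calibration(path):
    """Parse a Z calibration file: three whitespace-separated columns per
    line, giving the emitter's Z coordinate in nanometres and the X and Y
    PSF standard deviations in micrometers."""
    table = []
    with open(path) as handle:
        for line in handle:
            fields = line.split()
            if not fields:
                continue  # tolerate blank lines
            z_nm, sigma_x_um, sigma_y_um = map(float, fields[:3])
            table.append((z_nm, sigma_x_um, sigma_y_um))
    return table
```

The first and last Z values of the returned table bound the working depth of the calibration.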
rapidSTORM can fit both astigmatic and biplane 3D data by introducing an explicit Z parameter into the point spread function. The Z parameter modifies the width parameters of the PSF according to Equation 7.1, “Polynomial 3D PSF Width”. In other words, we model the variance of the PSF in the lateral directions (σx) as a polynomial of the axial offset from the best-focused plane. The necessary parameters are the axial positions of the best-focused planes (zx and zy), the standard deviation of the PSF in the best-focused plane (σ0,x and σ0,y) and the effective focus depths for the polynomial terms (Δσi,x and Δσi,y). The point spread function model has been adapted from [Huang2008] and expanded with the natural linear term. However, rapidSTORM improves upon it by fitting the Z coordinate directly instead of using the complicated variance-space distance determination presented in the paper.
These parameters are normally determined externally from calibration samples. For astigmatic imaging, the best-focused planes zx and zy are set to different values. While the distance between the planes is crucially important for 3D localization, the absolute values and relative sign of the best-focused plane coordinates determine the direction and offset of the Z axis in the results. For biplane imaging, zx and zy are set equal to each other, but take different values for each plane.
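The shape of the width function can be sketched for one lateral direction. The exact normalization in Equation 7.1 may differ; this sketch assumes the widening is a sum of powers of the axial offset scaled by the per-order focus depths:

```python
def psf_width(z, sigma_0, z_best, delta):
    """Sketch of the polynomial 3D PSF width for one lateral direction.
    sigma_0 is the best-focus standard deviation, z_best the best-focused
    plane and delta maps polynomial orders (1 to 4 for the quartic model
    with the added linear term) to effective focus depths. The exact
    normalization of Equation 7.1 is an assumption here."""
    offset = z - z_best
    widening = sum((offset / d) ** order for order, d in delta.items())
    return sigma_0 * (1.0 + widening) ** 0.5

# A purely quadratic widening with a 400 nm focus depth (invented values):
delta = {2: 400.0}
print(psf_width(0.0, 0.2, 0.0, delta))    # best focus: width equals sigma_0
print(psf_width(400.0, 0.2, 0.0, delta))  # one focus depth away: wider
```

For astigmatic imaging, the same function is evaluated with different z_best for X and Y, so the width difference encodes Z.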
Traditionally, rapidSTORM supported two PSF models called "Parabolic 3D" and "Holtzer 3D". Both of these models are subsets of the polynomial model, and their parameters can be converted. For the Holtzer model, only the second derivative needs to be given, expressed in terms of the Holtzer widening constant ω; for the parabolic model, the second and fourth derivatives must be given in terms of the parabolic widening constant ω.
A repeater is any output module that stores all received localizations in memory and can repeat them if necessary. While this costs, naturally, roughly 32 bytes of memory per localization, it allows changing many processing parameters even after computation has started.
However, repeaters are not able to store the input images used in computation because doing so would quickly exhaust the available memory. Therefore, output modules that need access to source images may not be used as children of repeater modules.
The point spread function (PSF) is modeled for rapidSTORM purposes as a two-dimensional Gaussian function added to a background signal. This function has the parameters amplitude, background signal, center position and covariance matrix. We can assume the covariance matrix to be constant for any acquisition, and so only the amplitude, the background signal and the center position are fitted by the engine, while the covariance matrix is estimated iteratively by a second fitting process or given a-priori.
While this Gaussian model does not match the point spread functions of real systems exactly, it is a good approximation with easily computed derivatives. Studies such as Thomann et al. have shown that the approximation is good enough for practical purposes.
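The fitted model can be written down directly. This sketch assumes a diagonal covariance matrix (independent X and Y widths) for brevity; the engine's model supports a general covariance matrix:

```python
import math

def psf_model(x, y, amplitude, background, x0, y0, sigma_x, sigma_y):
    """Evaluate the Gaussian PSF model at pixel (x, y): an elliptical
    two-dimensional Gaussian centered at (x0, y0) plus a constant
    background. Amplitude, background and center are the parameters the
    engine fits; the widths come from the covariance estimate."""
    exponent = ((x - x0) ** 2 / (2 * sigma_x ** 2)
                + (y - y0) ** 2 / (2 * sigma_y ** 2))
    return background + amplitude * math.exp(-exponent)

# At the center the model reaches background + amplitude:
print(psf_model(5.0, 5.0, 1000.0, 10.0, 5.0, 5.0, 1.3, 1.3))  # → 1010.0
```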
When a preceding analysis step indicates a likely double emission, this hypothesis is tested by fitting a model consisting of the sum of two Gaussian functions with a common background to the data. This is called two-kernel analysis. When two-kernel analysis produces a two-kernel fit with two nonnegligible kernels and with significantly smaller residues than the normal one-kernel fit, the hypothesis of a double emission is deemed confirmed.
Instabilities in the experimental setup can, despite all experimental effort, lead to a slow, creeping shift (called drift) of the specimen's image on the camera detector. In this case, the quality of the resulting image is greatly degraded because structures appear smeared in the direction of the drift.
In most cases, the drift is small and approximately linear over the course of the acquisition. Such a drift can be corrected by subtracting the drift velocity times the time elapsed since acquisition start from each localization.
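The correction itself is a one-liner per localization; this sketch assumes localizations given as (x, y, frame) tuples and a drift velocity in sample-space units per frame:

```python
def correct_drift(localizations, vx, vy, t0=0):
    """Linear drift correction: subtract the drift velocity times the time
    elapsed since acquisition start from every position. localizations are
    (x, y, frame) tuples; vx and vy are in position units per frame."""
    return [(x - vx * (t - t0), y - vy * (t - t0), t)
            for x, y, t in localizations]

# A structure drifting by 0.1/frame in X and 0.05/frame in Y:
locs = [(100.0, 50.0, 0), (101.0, 50.5, 10), (102.0, 51.0, 20)]
print(correct_drift(locs, vx=0.1, vy=0.05))
# → all three localizations collapse back onto (100.0, 50.0)
```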
A convenient way to fine-tune drift correction is to add a time-hued image display as a child of the localization filter and optimize the settings for a mostly white or, at least, largely color-uncorrelated image.
[WolterDiplomarbeit] An accurate and efficient algorithm for real-time localisation of photoswitchable fluorophores. Diploma thesis, Bielefeld University, March 2009.
[Wolter2010] Real-Time Computation of Subdiffraction-Resolution Fluorescence Images. Journal of Microscopy, (1), 12-22, 2010.
[Thompson2002] Precise Nanometer Localization Analysis for Individual Fluorescent Probes. Biophysical Journal, 82(5), 2775-2783, 2002. http://www.biophysj.org/cgi/content/abstract/82/5/2775
[Huang2008] Three-Dimensional Super-Resolution Imaging by Stochastic Optical Reconstruction Microscopy. Science, (5864), 810-813, 8 February 2008. ISSN 1095-9203. doi:10.1126/science.1153529
[Henriques2010] QuickPALM: 3D real-time photoactivation nanoscopy image processing in ImageJ. Nature Methods, (5), 339-340, 1 May 2010. Nature Publishing Group. ISSN 1548-7091. doi:10.1038/nmeth0510-339
[Aufmkolk2012] Hochauflösende Mehrfarben-Fluoreszenzmikroskopie [High-resolution multi-colour fluorescence microscopy]. Julius-Maximilians-Universität Würzburg, March 2012.
[Loeschberger2012] Super-resolution imaging visualizes the eightfold symmetry of gp210 proteins around the nuclear pore complex and resolves the central channel with nanometer resolution. Journal of Cell Science, 125(3), 570-575, 2012. doi:10.1242/jcs.098822
[Proppert2014] Cubic B-spline calibration for 3D-Superresolution measurements. Optics Express, OSA, 2014.