API Reference

piscat.Analysis

class piscat.Analysis.ReadProteinAnalysis(camera_Name='Photonfocus.json')[source]

This class reads the results of video analyses produced by the protein_analysis function and provides methods to display histograms or save histogram results.

Parameters:

camera_Name (str) – Defines the name of the camera configuration JSON file that will be used for reading the pixel size.

plot_hist(his_setting)[source]

This method plots histograms for different contrast extraction methods for black PSFs, white PSFs and all together.

Parameters:

his_setting (dict) –

This dictionary defines the histogram plotting settings. Example:

his_setting = {'bins': None, 'lower_limitation': -7e-4, 'upper_limitation': 7e-4, 'Flag_GMM_fit': True, 'max_n_components': 3, 'step_range': 1e-6, 'face': 'g', 'edge': 'k', 'scale': 1e1, 'external_GMM': False}

plot_hist_2Dfit(his_setting)[source]

This method plots histograms for 2D Gaussian fitting contrast for black PSFs, white PSFs and all together.

Parameters:

his_setting (dict) –

This dictionary defines the histogram plotting settings. Example:

his_setting = {'bins': None, 'lower_limitation': -7e-4, 'upper_limitation': 7e-4, 'Flag_GMM_fit': True, 'max_n_components': 3, 'step_range': 1e-6, 'face': 'g', 'edge': 'k'}

plot_localization_heatmap(pixelSize=None, unit='um', flag_in_time=False, time_delay=0.1, dir_name=None)[source]

This method plots a heatmap of particle localizations. The size of each disk depicts the movement of each particle during tracking.

Parameters:
  • pixelSize (float) – Camera pixel size.

  • unit (str) – The axis unit.

  • flag_in_time (bool) – If True, binding and unbinding events are shown in time.

  • time_delay (float) – Defines the time delay between binding and unbinding event frames. This only works when flag_in_time is set to True.

  • dir_name (str) – If a save path is specified, the time-lapse frames are saved there.

save_hist_data(dirName, name_dir, his_setting)[source]

This method saves the histogram data in HDF5 format.

Parameters:
  • dirName (str) – Path for saving data.

  • name_dir (str) – Name used for saving the data.

  • his_setting (dict) –

    This dictionary defines the histogram plotting settings. Example:

    his_setting = {'lower_limitation': -7e-4, 'upper_limitation': 7e-4, 'Flag_GMM_fit': True, 'max_n_components': 3}

class piscat.Analysis.PlotProteinHistogram(intersection_display_flag=False, imgSizex=5, imgSizey=5, flag_localization_filter=False, radius=None)[source]

This class uses video analysis data ('HDF5', 'Matlab') to plot histograms.

Parameters:
  • intersection_display_flag (bool) – This flag can be used when we want to see the result of the intersection on top of the v-shape (please see Tutorial 3).

  • imgSizex (int) – The width of the histogram figure.

  • imgSizey (int) – The height of the histogram figure.

  • flag_localization_filter (bool) – This flag is used to define a mask and filter PSF depending on the localization map.

  • radius (int) – This parameter is used to define the radius of a circular mask that has the same center as the localization map and filters out PSFs at the edges and borders.

localization_data()[source]

This method generates a particle localization heatmap. Every disk’s size represents the movement of each particle during tracking.

Parameters:
  • pixel_size (float) – Camera pixel size.

  • unit (str) – Unit of the axes.

plot_fit_histogram(bins=None, upper_limitation=1, lower_limitation=-1, step_range=1e-07, face='g', edge='y', Flag_GMM_fit=True, max_n_components=3, imgSizex=20, imgSizey=20, font_size=12, scale=10.0, external_GMM=False)[source]

This method plots histograms for 2D Gaussian fitting contrast for black PSFs, white PSFs and all together.

Parameters:
  • bins (int) – Number of histogram bins.

  • upper_limitation (float) – The upper limit for trimming histogram.

  • lower_limitation (float) – The lower limit for trimming histogram.

  • step_range (float) – The resolution that is used for GMM plotting.

  • face (str) – Face color of the histogram.

  • edge (str) – Edge color of the histogram.

  • Flag_GMM_fit (bool) – Activate/Deactivate GMM.

  • max_n_components (int) – The maximum number of components that the GMM uses for AIC and BIC tests. This helps to find an optimal number of mixture components.

  • imgSizex (int) – The width of the histogram figure.

  • imgSizey (int) – The height of the histogram figure.

  • font_size (float) – The font size of the text in the table information.

  • scale (float) – This value multiplies the full range for x-axis plotting.

  • external_GMM (bool) – This flag modifies GMM’s visualization. Only the external border is visible if it is set to True.

plot_histogram(bins=None, upper_limitation=1, lower_limitation=-1, step_range=1e-07, face='g', edge='y', Flag_GMM_fit=True, max_n_components=3, imgSizex=20, imgSizey=20, font_size=12, scale=10.0, external_GMM=False)[source]

This method plots histograms for different contrast extraction methods for black PSFs, white PSFs and all together.

Parameters:
  • bins (int) – Number of histogram bins.

  • upper_limitation (float) – The upper limit for trimming histogram.

  • lower_limitation (float) – The lower limit for trimming histogram.

  • step_range (float) – The resolution that is used for GMM plotting.

  • face (str) – Face color of the histogram.

  • edge (str) – Edge color of the histogram.

  • Flag_GMM_fit (bool) – Activate/Deactivate GMM.

  • max_n_components (int) – The maximum number of components that the GMM uses for AIC and BIC tests. This helps to find an optimal number of mixture components.

  • imgSizex (int) – The width of the histogram figure.

  • imgSizey (int) – The height of the histogram figure.

  • font_size (float) – The font size of the text in the table information.

  • scale (float) – This value multiplies the full range for x-axis plotting.

  • external_GMM (bool) – This flag modifies GMM’s visualization. Only the external border is visible if it is set to True.
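The trimming and binning parameters above can be illustrated with a plain numpy sketch (independent of PiSCAT; the synthetic contrast values are illustrative):

```python
import numpy as np

# Synthetic contrast values standing in for the extracted PSF contrasts.
rng = np.random.default_rng(0)
contrasts = rng.normal(loc=0.0, scale=3e-4, size=1000)

lower_limitation = -7e-4
upper_limitation = 7e-4

# Trim the data range exactly as lower/upper_limitation would.
trimmed = contrasts[(contrasts >= lower_limitation) & (contrasts <= upper_limitation)]

# bins=None falls back to an automatic rule; an int fixes the bin count.
counts, edges = np.histogram(trimmed, bins=50)

print(len(edges))  # 51 edges for 50 bins
```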

plot_localization_heatmap(pixel_size, unit, flag_in_time=False, time_delay=0.1, dir_name=None)[source]

This method generates a particle localization heatmap. Every disk’s size represents the movement of each particle during tracking.

Parameters:
  • pixel_size (float) – Camera pixel size.

  • unit (str) – The axis unit.

  • flag_in_time (bool) – If True, binding and unbinding events are shown in time.

  • time_delay (float) – Defines the time delay between binding and unbinding event frames. This only works when flag_in_time is set to True.

  • dir_name (str) – If a save path is specified, the time-lapse frames are saved there.

save_hist_data(dirName, name, upper_limitation=1, lower_limitation=-1, Flag_GMM_fit=True, max_n_components=3)[source]

This method saves the histogram data in HDF5 format.

Parameters:
  • dirName (str) – Path for saving data.

  • name (str) – Name used for saving the data.

  • upper_limitation (float) – The upper limit for trimming histogram.

  • lower_limitation (float) – The lower limit for trimming histogram.

  • Flag_GMM_fit (bool) – Activate/Deactivate GMM.

  • max_n_components (int) – The maximum number of components that the GMM uses for AIC and BIC tests. This helps to find an optimal number of mixture components.

piscat.Analysis.protein_analysis(paths, video_names, hyperparameters, flags, name_mkdir)[source]

This function analyzes several videos based on the settings that the user defines in hyperparameters and flags.

Parameters:
  • paths (list) – List of all paths for videos.

  • video_names (list) – List of all video names.

  • hyperparameters (dict) –

    This dictionary defines the different analysis parameters. Example:

    hyperparameters = {'function': 'dog', 'batch_size': 3000, 'min_V_shape_width': 1500, 'threshold_max': 6000, 'search_range': 2, 'memory': 20, 'min_sigma': 1.3, 'max_sigma': 3, 'sigma_ratio': 1.1, 'PSF_detection_thr': 4e-5, 'overlap': 0, 'outlier_frames_thr': 20, 'Mode_PSF_Segmentation': 'BOTH', 'symmetric_PSFs_thr': 0.6, 'mode_FPN': name_mkdir, 'select_correction_axis': 1, 'im_size_x': 72, 'im_size_y': 72, 'image_format': '<u2'}

  • flags (dict) –

    This dictionary activates/deactivates different parts of the analysis pipeline. Example:

    flags = {'PN': True, 'FPNc': True, 'outlier_frames_filter': True, 'Dense_Filter': True, 'symmetric_PSFs_Filter': True, 'FFT_flag': True}

  • name_mkdir (str) – Defines the name of the folder that is automatically created next to each video to save the analysis results and settings history.

Returns:

  • hyperparameters

  • flags

  • Number of particles and PSFs after each step

  • All extracted trajectories in 'HDF5' and 'Matlab' formats.

    • The MATLAB save array contains the following information for each particle:

    [intensity_horizontal, intensity_vertical, particle_center_intensity, particle_center_intensity_follow, particle_frame, particle_sigma, particle_X, particle_Y, particle_ID, optional(fit_intensity, fit_x, fit_y, fit_X_sigma, fit_Y_sigma, fit_Bias, fit_intensity_error, fit_x_error, fit_y_error, fit_X_sigma_error, fit_Y_sigma_error, fit_Bias_error)]
    • The HDF5 file saves a dictionary similar to the following structure:

    {"#0": {'intensity_horizontal': …, 'intensity_vertical': …, …, 'particle_ID': …}, "#1": {}, …}
  • Table of PSFs information

    • particles: pandas dataframe

    Saving the clean data frame (x, y, frame, sigma, particle, …)

Return type:

The following information will be saved

piscat.BackgroundCorrection

class piscat.BackgroundCorrection.DifferentialRollingAverage(video=None, batchSize=500, flag_GUI=False, object_update_progressBar=None, mode_FPN='mFPN', FPN_flag_GUI=False, gui_select_correction_axis=1)[source]

Differential Rolling Average (DRA).

Parameters:
  • video (NDArray) – The video is 3D-numpy (number of frames, width, height).

  • batchSize (int) – The number of frames in each batch.

  • mode_FPN ({'cpFPN', 'mFPN', 'wFPN', 'fFPN'}, optional) –

    Flag that defines method of FPNc.

    • mFPN: Median fixed pattern noise correction

    • cpFPN: Median fixed pattern noise correction

    • wFPN: Wavelet FPNc

    • fFPN: FFT2D_Wavelet FPNc

  • optional_1 (GUI) –

    These flags are used when GUI calls this method.

    • flag_GUI: bool

      This flag is defined as True when GUI calls this method.

    • FPN_flag_GUI: bool

      This flag is defined as True when the GUI calls this method while FPNc is to be activated.

    • gui_select_correction_axis: int (0/1), ‘Both’

      This parameter is used only when FPN_flag_GUI is True, otherwise it will be ignored.

    • object_update_progressBar: object

      Object that updates the progress bar in GUI.

differential_rolling(FPN_flag=False, select_correction_axis=1, FFT_flag=False, inter_flag_parallel_active=True, max_iterations=10, FFT_widith=1)[source]

To apply DRA, call the differential_rolling method.

Parameters:
  • FPN_flag (bool) – This flag activates the fixed pattern noise correction function if it is set to True.

  • select_correction_axis (int (0/1), 'Both') –

    This parameter is used only when FPN_flag is True, otherwise it will be ignored.

    • 0: FPN will be applied row-wise.

    • 1: FPN will be applied column-wise.

    • 'Both': FPN will be applied on both axes.

  • FFT_flag (bool) – If it is True, DRA will be performed in parallel to improve the time performance.

  • inter_flag_parallel_active (bool) – This flag activates/deactivates parallel computation of wFPNc.

  • max_iterations (int) – This parameter is used when fFPN is selected; it defines the total number of filtering iterations.

  • FFT_widith (int) – This parameter is used when fFPN is selected; it defines the frequency mask’s width.

Returns:

  • output (NDArray) – Returns DRA video.

  • gainMap1D_ (NDArray) – Returns the projection on each frame based on the correction axis.
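The rolling-average differencing that DRA performs can be sketched in plain numpy (a simplified stand-in for PiSCAT's implementation: no FPN correction, no parallelism, and the normalization may differ in detail from the library's):

```python
import numpy as np

def differential_rolling_average(video, batch_size):
    """Difference of the means of two consecutive sliding batches,
    normalized by the leading batch mean (simplified DRA sketch)."""
    n = video.shape[0]
    n_out = n - 2 * batch_size + 1
    # A cumulative sum along time gives each batch mean in O(1) per frame.
    csum = np.cumsum(video, axis=0, dtype=np.float64)
    csum = np.concatenate([np.zeros((1,) + video.shape[1:]), csum], axis=0)
    batch_mean = (csum[batch_size:] - csum[:-batch_size]) / batch_size
    m1 = batch_mean[:n_out]                          # leading batch
    m2 = batch_mean[batch_size:batch_size + n_out]   # trailing batch
    return (m2 - m1) / m1

video = np.ones((20, 4, 4)) * 100.0
dra = differential_rolling_average(video, batch_size=5)
# A constant video has zero differential contrast.
print(dra.shape)  # (11, 4, 4)
```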

run(self) → None[source]
class piscat.BackgroundCorrection.NoiseFloor(video, list_range, FPN_flag=False, mode_FPN='mFPN', select_correction_axis=1, n_jobs=None, inter_flag_parallel_active=False, max_iterations=10, FFT_widith=1, mode='mode_temporal')[source]

This class measures the noise floor for various batch sizes.

Parameters:
  • video (NDArray) – The video is 3D-numpy (number of frames, width, height).

  • list_range (list) – List of all batch sizes for which DRA should be calculated.

  • FPN_flag (bool) – This flag activates the fixed pattern noise correction function if it is set to True.

  • mode_FPN ({'cpFPN', 'mFPN', 'wFPN', 'fFPN'}, optional) –

    Flag that defines method of FPNc.

    • mFPN: Median fixed pattern noise correction

    • cpFPN: Median fixed pattern noise correction

    • wFPN: Wavelet FPNc

    • fFPN: FFT2D_Wavelet FPNc

  • select_correction_axis (int (0/1), 'Both') –

    This parameter is used only when FPN_flag is True, otherwise it will be ignored.

    • 0: FPN will be applied row-wise.

    • 1: FPN will be applied column-wise.

    • 'Both': FPN will be applied on both axes.

  • max_iterations (int) – This parameter is used when fFPN is selected; it defines the total number of filtering iterations.

  • FFT_widith (int) – This parameter is used when fFPN is selected; it defines the frequency mask’s width.

plot_result(flag_log=True)[source]

The result of the noise floor is plotted when this function is called.

Parameters:

flag_log (bool) – The log-log plot style is enabled by this parameter.
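The noise-floor measurement can be sketched as the temporal standard deviation of the differential signal for each batch size in list_range (a simplified stand-in; PiSCAT's exact definition may differ, but shot-noise-limited data should show the floor decreasing with batch size):

```python
import numpy as np

def noise_floor(video, list_range):
    """Std of the normalized differential signal per batch size (sketch)."""
    floors = []
    for b in list_range:
        n_out = video.shape[0] - 2 * b + 1
        means = np.array([video[i:i + b].mean(axis=0)
                          for i in range(video.shape[0] - b + 1)])
        dra = (means[b:b + n_out] - means[:n_out]) / means[:n_out]
        floors.append(dra.std())
    return floors

rng = np.random.default_rng(1)
video = 100.0 + rng.normal(0, 1, size=(400, 8, 8))
floors = noise_floor(video, [5, 10, 20])
# Larger batches average more frames, so the noise floor decreases.
print(floors[0] > floors[1] > floors[2])  # True
```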

piscat.InputOutput

class piscat.InputOutput.CameraParameters(name, quantum_efficiency=0.3, electron_well_depth=180000.0, max_electron_well_depth=200000.0, bit_depth=12, pixelSize=0.66)[source]

Based on the camera features, this class generates a JSON file. This JSON file is used by other functions and methods to set certain parameters, such as pixel size.

Parameters:
  • name (str) – Name of camera.

  • quantum_efficiency (float) – Quantum efficiency of camera.

  • electron_well_depth (float) – Electron well depth of camera.

  • max_electron_well_depth (float) – Maximum electron well depth of the camera.

  • bit_depth (float) – Bit depth of the camera.

  • pixelSize (float) – Pixel size of camera.
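A sketch of the kind of JSON file this class could generate, using only the constructor parameters documented above (the exact layout and file naming PiSCAT uses are an assumption here):

```python
import json
import os
import tempfile

# Fields taken from the documented constructor parameters.
camera = {
    "name": "Photonfocus",
    "quantum_efficiency": 0.3,
    "electron_well_depth": 180000.0,
    "max_electron_well_depth": 200000.0,
    "bit_depth": 12,
    "pixelSize": 0.66,
}

path = os.path.join(tempfile.mkdtemp(), "Photonfocus.json")
with open(path, "w") as f:
    json.dump(camera, f, indent=2)

# Other functions can then read the pixel size back from the JSON file.
with open(path) as f:
    loaded = json.load(f)
print(loaded["pixelSize"])  # 0.66
```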

class piscat.InputOutput.CPUConfigurations(n_jobs=-1, backend='multiprocessing', verbose=0, parallel_active=True, threshold_for_parallel_run=None, flag_report=False)[source]

This class generates a JSON file based on the parallel-loop settings on the CPU that the user prefers. This JSON file is used by other functions and methods to set hyperparameters in parallel loops. For parallelization, PiSCAT uses Joblib.

Parameters:
  • n_jobs (int) – The maximum number of workers that can work at the same time. If -1, all CPU cores are available for use.

  • backend (str) –

    Specify the implementation of the parallelization backend. The following backends are supported:

    • “loky”:

      It can induce some communication and memory overhead when exchanging input and output data with the worker Python processes.

    • “multiprocessing”:

      The previous process-based backend, based on multiprocessing.Pool. Less robust than loky.

    • “threading”:

      It is a very low-overhead backend, but it suffers from the Python Global Interpreter Lock if the called function relies a lot on Python objects. “threading” is mostly useful when the execution bottleneck is a compiled extension that explicitly releases the GIL (for instance a Cython loop wrapped in a “with nogil” block or an expensive call to a library such as NumPy).

  • verbose (int, optional) – The verbosity level; if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported.

  • parallel_active (bool) – Functions will run the parallel implementation if it is True.

  • threshold_for_parallel_run (float) – Reserved for the next generation of PiSCAT.

  • flag_report (bool) – Set this flag if you need to see the values that will be used for the CPU configuration.

read_cpu_setting(flag_report=False)[source]

Parameters:

flag_report (bool) – Whether you need to see the values that will be used for the CPU configuration.
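A minimal joblib sketch of the options described above (joblib's Parallel/delayed API; the worker function is illustrative, and the "threading" backend is chosen here only so the sketch runs anywhere without pickling workers):

```python
from joblib import Parallel, delayed

def square(x):
    # A toy task; a CPU-bound pure-Python task would be GIL-limited
    # under "threading", which is why "loky"/"multiprocessing" exist.
    return x * x

# n_jobs sets the maximum number of concurrent workers (-1 = all cores);
# backend selects the parallelization strategy.
results = Parallel(n_jobs=2, backend="threading")(
    delayed(square)(i) for i in range(5)
)
print(results)  # [0, 1, 4, 9, 16]
```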

class piscat.InputOutput.Image2Video(path, file_format, width_size, height_size, image_type, reader_type)[source]

This class reads images of a particular kind from a folder and concatenates them into a single NumPy array.

Parameters:
  • path (str) – The directory path that includes images.

  • file_format (str) – Postfix of image names.

  • width_size (int) – For binary images, it specifies the image width.

  • height_size (int) – For binary images, it specifies the image height.

  • image_type (str) –

    • “i” (signed) integer, “u” unsigned integer, “f” floating-point.

    • ”<” active little-endian.

    • ”1” 8-bit, “2” 16-bit, “4” 32-bit, “8” 64-bit.

  • reader_type (str) –

    Specify the video/image format to be loaded.

    • ’binary’: use this flag to load binary

    • ’tif’: use this flag to load tif

    • ’avi’: use this flag to load avi

    • ’png’: use this flag to load png

class piscat.InputOutput.StatusLine(video)[source]

This class reads a status line from an image frame. The last line of the Photonfocus camera image is the status line. All data is returned in the form of a struct, together with the cut frame without the status line.

Parameters:

video (NDArray) – Numpy array with the following form should be used for video (number of frame, width, height).

find_status_line()[source]
Returns:

  • self.out_video (NDArray) – Video without status line.

  • self.camera_info (dict) – The dictionary containing the information obtained from the status line.

read_status_line(frame)[source]
Parameters:

frame (NDArray) – First frame in video.

piscat.InputOutput.save_mat(data, path, name='')[source]

This function saves the array in MATLAB format.

Parameters:
  • data (list) – List or array.

  • path (str) – Path of the directory where the data is saved.

  • name (str) – Name of the save file.

piscat.InputOutput.read_mat(path, name='')[source]

This function reads an array in MATLAB format.

Parameters:
  • path (str) – Path of the directory that the data is read from.

  • name (str) – Name of the file.

piscat.InputOutput.save_dic_to_hdf5(dic_data, path, name)[source]

This function writes the dictionary data in HDF5 format.

Parameters:
  • dic_data (dict) – Dictionary data.

  • path (str) – Path of the directory where the data is saved.

  • name (str) – Name of the save file.

piscat.InputOutput.save_list_to_hdf5(list_data, path, name)[source]

This function writes the list data in HDF5 format.

Parameters:
  • list_data (list) – List data.

  • path (str) – Path of the directory where the data is saved.

  • name (str) – Name of the save file.

piscat.InputOutput.load_dict_from_hdf5(filename)[source]

This function reads an HDF5 file and converts it to a dictionary.

Parameters:

filename (str) – Path and name of the hdf5 file.

piscat.InputOutput.save_df2csv(df, path, name='')[source]

This function writes a pandas DataFrame to a CSV file.

Parameters:
  • df (dataframe) – Pandas DataFrame.

  • path (str) – Path of the directory where the data is saved.

  • name (str) – Name of the save file.

piscat.InputOutput.save_dic2json(data_dictionary, path, name='')[source]

This function writes the dictionary data to a JSON file.

Parameters:
  • data_dictionary (dict) – Dictionary data.

  • path (str) – Path of the directory where the data is saved.

  • name (str) – Name of the save file.

piscat.InputOutput.read_json2dic(path, name='')[source]

This function reads a JSON file and converts it to a dictionary.

Parameters:
  • path (str) – Path of the directory that the data is loaded from.

  • name (str) – Name of the JSON file.

piscat.InputOutput.video_reader(file_name, type='binary', img_width=128, img_height=128, image_type=dtype('float64'), s_frame=0, e_frame=-1)[source]

This is a wrapper that can be used to call various video/image readers.

Parameters:
  • file_name (str) – Path of video and file name, e.g. test.jpg.

  • type (str) –

    Define the video/image format to be loaded.

    • ’binary’: use this flag to load binary

    • ’tif’: use this flag to load tif

    • ’avi’: use this flag to load avi

    • ’png’: use this flag to load png

    • ’fits’: use this flag to load fits

    • ’fli’: use this flag to load fli

optional_parameters:

These parameters are used when the video type is defined as 'binary'.

img_width: int

For binary images, it specifies the image width.

img_height: int

For binary images, it specifies the image height.

image_type: str

Numpy.dtype('<u2') -> video with uint16 pixel data type

  • “i” (signed) integer, “u” unsigned integer, “f” floating-point

  • “<” active little-endian

  • “1” 8-bit, “2” 16-bit, “4” 32-bit, “8” 64-bit

s_frame: int

Video reads from this frame. This is used for cropping a video.

e_frame: int

Video reads until this frame. This is used for cropping a video.

Returns:

The video/image.

Return type:

NDArray

piscat.InputOutput.read_binary(file_name, img_width=128, img_height=128, image_type=dtype('float64'), s_frame=0, e_frame=-1)[source]

This function reads binary video.

Parameters:
  • file_name (str) – Path and name of binary video.

  • img_width (int) – It specifies the image width.

  • img_height (int) – It specifies the image height.

  • image_type (str) –

    • “i” (signed) integer, “u” unsigned integer, “f” floating-point

    • ”<” active little-endian

    • ”1” 8-bit, “2” 16-bit, “4” 32-bit, “8” 64-bit

  • s_frame (int) – Video reads from this frame. This is used for cropping a video.

  • e_frame (int) – Video reads until this frame. This is used for cropping a video.

Returns:

The video is 3D-numpy (number of frames, width, height).

Return type:

NDArray
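The binary layout these parameters describe can be sketched with numpy alone (a stand-in for read_binary, not its actual implementation; the dtype string '<u2' means little-endian unsigned 16-bit, as in the bullets above):

```python
import os
import tempfile
import numpy as np

n_frames, img_width, img_height = 4, 8, 6

# Write a raw binary "video" of little-endian uint16 pixels ('<u2').
video = np.arange(n_frames * img_width * img_height, dtype="<u2")
video = video.reshape(n_frames, img_width, img_height)
path = os.path.join(tempfile.mkdtemp(), "video.bin")
video.tofile(path)

# Reading back is a flat fromfile plus a reshape to (frames, width, height).
raw = np.fromfile(path, dtype="<u2").reshape(-1, img_width, img_height)

# s_frame/e_frame crop the video along the time axis.
s_frame, e_frame = 1, 3
cropped = raw[s_frame:e_frame]
print(cropped.shape)  # (2, 8, 6)
```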

piscat.InputOutput.read_tif(filename)[source]

Reading image/video with TIF format.

Parameters:

filename (str) – Path and name of the TIF image/video.

Returns:

The video is 3D-numpy (number of frames, width, height).

Return type:

NDArray

piscat.InputOutput.read_avi(filename)[source]

Reading video with AVI format.

Parameters:

filename (str) – Path and name of the AVI video.

Returns:

video – The video is 3D-numpy (number of frames, width, height).

Return type:

NDArray

piscat.InputOutput.read_png(filename)[source]

Reading image with PNG format.

Parameters:

filename (str) – Path and name of the PNG image.

Returns:

The image is 2D-numpy (width, height).

Return type:

NDArray

piscat.InputOutput.read_fits(filename)[source]

Reading image/video with fits format.

Parameters:

filename (str) – Path and name of the FITS image/video.

Returns:

The video as 3D-numpy with shape (number of frames, width, height).

Return type:

NDArray

class piscat.InputOutput.DirectoryType(dirName, type_file)[source]

Based on the file type description, this class generates a dataframe containing 'Directory', 'Folder', and 'File' for all files below the defined directory.

Parameters:
  • dirName (str) – A directory that is used to look for files of a particular type file.

  • type_file (str) – The type of files that the user is looking for.

return_df()[source]

Based on the file type specification, this function returns a pandas data frame containing 'Directory', 'Folder', and 'File' for all files below the defined directory.

Return type:

The data frame contains ('Directory', 'Folder', 'File')
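A stdlib sketch of the same idea: walk a directory tree and collect one ('Directory', 'Folder', 'File') record per matching file (illustrative, not PiSCAT's code; PiSCAT returns a pandas data frame rather than a list of dicts):

```python
import os
import tempfile

def list_files(dir_name, type_file):
    """Collect a record for every file under dir_name ending in type_file."""
    records = []
    for root, _dirs, files in os.walk(dir_name):
        for f in files:
            if f.endswith(type_file):
                records.append({
                    "Directory": root,
                    "Folder": os.path.basename(root),
                    "File": f,
                })
    return records

# Build a tiny tree with one matching and one non-matching file.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "run1"))
open(os.path.join(base, "run1", "video.bin"), "w").close()
open(os.path.join(base, "run1", "notes.txt"), "w").close()

records = list_files(base, ".bin")
print(len(records), records[0]["File"])  # 1 video.bin
```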

piscat.InputOutput.write_binary(dir_path, file_name, data, type='original')[source]

This function writes video as a binary.

Parameters:
  • dir_path (str) – Path of the directory where the video is saved.

  • file_name (str) – Name of the save file.

  • data (NDArray) – Video with numpy format.

  • type (str or bin_type) – 'original' leaves the video bin type unchanged, but the user can convert it (e.g. float -> int16).

Return type:

The path to the new folder that was created to save the video is returned.

piscat.InputOutput.write_MP4(dir_path, file_name, data, jump=0)[source]

This function writes video as a MP4.

Parameters:
  • dir_path (str) – Path of the directory where the video is saved.

  • file_name (str) – Name of the save file.

  • data (NDArray) – Video with numpy format.

  • jump (int) – Define stride between frames.

Return type:

The path to the new folder that was created to save the video is returned.

piscat.InputOutput.write_GIF(dir_path, file_name, data, jump=0)[source]

This function writes video as a GIF.

Parameters:
  • dir_path (str) – Path of the directory where the video is saved.

  • file_name (str) – Name of the save file.

  • data (NDArray) – Video with numpy format.

  • jump (int) – Define stride between frames.

Return type:

The path to the new folder that was created to save the video is returned.

piscat.Localization

class piscat.Localization.RadialCenter[source]

A Python implementation of the RadialCenter localization algorithm.

References

The Radial Center localization algorithm code has been adapted from this paper.

Parthasarathy, R. Rapid, accurate particle tracking by calculation of radial symmetry centers. Nat Methods 9, 724–726 (2012). https://doi.org/10.1038/nmeth.2071

class piscat.Localization.PSFsExtraction(video, flag_transform=False, flag_GUI=False, **kwargs)[source]

This class employs a variety of PSF localization methods, including DoG/LoG/DoH/RS/RVT.

It returns a list containing the following details:

[[frame number, center y, center x, sigma], [frame number, center y, center x, sigma], …]

Parameters:
  • video (NDArray) – Numpy 3D video array.

  • flag_transform (bool) – If True, the input video is already transformed, so this task does not need to run during localization.

  • flag_GUI (bool) – While the GUI is calling this class, it is true.

dog(image)[source]

PSF localization using DoG.

Parameters:

image (NDArray) – The input image as a numpy array.

Returns:

tmp – [y, x, sigma]

Return type:

list

doh(image)[source]

PSF localization using DoH.

Parameters:

image (NDArray) – The input image as a numpy array.

Returns:

tmp – [y, x, sigma]

Return type:

list

fit_Gaussian2D_wrapper(PSF_List, scale=5, internal_parallel_flag=False)[source]

PSF localization using fit_Gaussian2D.

Parameters:
  • PSF_List (pandas dataframe) – The data frame contains PSF locations (x, y, frame, sigma).

  • scale (int) – The ROI around PSFs is defined using this scale, which is based on their sigmas.

  • internal_parallel_flag (bool) – Internal flag for activating parallel computation.

Returns:

df – The data frame contains PSF locations ('y', 'x', 'frame', 'center_intensity', 'sigma', 'Sigma_ratio') and the fitting information. fit_params is a list including ('Fit_Amplitude', 'Fit_X-Center', 'Fit_Y-Center', 'Fit_X-Sigma', 'Fit_Y-Sigma', 'Fit_Bias', 'Fit_errors_Amplitude', 'Fit_errors_X-Center', 'Fit_errors_Y-Center', 'Fit_errors_X-Sigma', 'Fit_errors_Y-Sigma', 'Fit_errors_Bias').

Return type:

pandas dataframe
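The 2D Gaussian fitting step can be sketched with scipy.optimize.curve_fit (a generic stand-in; the parameter names are illustrative, and PiSCAT additionally reports the error columns listed above, which correspond to the square roots of the covariance diagonal):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, bias):
    """2D Gaussian with a constant background, flattened for curve_fit."""
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                           + (y - y0) ** 2 / (2 * sy ** 2))) + bias).ravel()

# Synthetic PSF: a Gaussian spot on a constant background (the ROI
# around a coarse localization, scaled by sigma as described above).
y, x = np.mgrid[0:21, 0:21]
true = dict(amp=1.0, x0=10.3, y0=9.7, sx=2.0, sy=2.5, bias=0.1)
img = gauss2d((x, y), **true).reshape(21, 21)

p0 = [0.5, 10, 10, 1.5, 1.5, 0.0]  # initial guess from coarse localization
popt, pcov = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
perr = np.sqrt(np.diag(pcov))  # 1-sigma parameter errors

print(round(popt[1], 2), round(popt[2], 2))  # fitted subpixel center
```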

frst_one_PSF(image)[source]

This function returns the lateral position of PSFs with subpixel resolution. Because it only works when there is a single PSF in the field of view, it is typically used after coarse localization to extract the fine localization of each PSF.

Parameters:

image (NDArray) – image is an input numpy array.

Returns:

[y, x, sigma]

Return type:

list

References

[1] Parthasarathy, R. Rapid, accurate particle tracking by calculation of radial symmetry centers. Nat Methods 9, 724–726 (2012). https://doi.org/10.1038/nmeth.2071

improve_localization_with_frst(df_PSFs, scale, flag_preview=False)[source]

This method extracts subpixel localizations from the initial pixel localizations by applying the frst_one_PSF method to all detected PSFs.

Parameters:
  • df_PSFs (pandas dataframe) – The data frame contains PSF locations (x, y, frame, sigma).

  • scale (int) – The ROI around PSFs is defined using this scale, which is based on their sigmas.

  • flag_preview (bool) – When the GUI calls these functions, this flag is set as True.

Returns:

sub_pixel_localization – The data frame contains subpixel PSF locations (x, y, frame, sigma).

Return type:

pandas dataframe

log(image)[source]

PSF localization using LoG.

Parameters:

image (NDArray) – The input image as a numpy array.

Returns:

[y, x, sigma]

Return type:

list

psf_detection(function, min_sigma=1, max_sigma=2, sigma_ratio=1.1, threshold=0, overlap=0, min_radial=1, max_radial=2, radial_step=0.1, alpha=2, beta=1, stdFactor=1, rvt_kind='basic', highpass_size=None, upsample=1, rweights=None, coarse_factor=1, coarse_mode='add', pad_mode='constant', mode='BOTH', flag_GUI_=False)[source]

This function is a wrapper for calling various PSF localization methods.

Parameters:
  • function (str) – PSF localization algorithm which should be selected from : ('dog', 'log', 'doh', 'frst_one_psf', 'RVT')

  • mode (str) – Defines which PSFs will be detected ('BRIGHT', 'DARK', or 'BOTH').

  • flag_GUI (bool) – Only used when the GUI calls this function.

  • optional_1

    These parameters are used when 'dog', 'log', 'doh' are defined as function.

    • min_sigma: float, list of floats

      This is the minimum standard deviation for the kernel. The lower the value, the smaller the blobs that can be detected. The standard deviations of the filter are given for each axis as a sequence, or as a single number that applies to both axes.

    • max_sigma: float, list of floats

      This is the maximum standard deviation for the kernel. The higher the value, the bigger the blobs that can be detected. The standard deviations of the filter are given for each axis as a sequence, or as a single number that applies to both axes.

    • sigma_ratio: float
      • The ratio between the standard deviations of the kernels used for computing the DoG and LoG.

      • The number of intermediate values of standard deviations between min_sigma and max_sigma for computing the DoH.

    • threshold_min: float

      The absolute lower bound for scale-space maxima. Local maxima smaller than this threshold are ignored. Reduce this to detect blobs with lower intensities.

    • overlap: float

      A value between 0 and 1. If the areas of two blobs overlap by a fraction greater than this value, the smaller blob is eliminated.

  • optional_2

    These parameters are used when "RVT" is defined as function.

    • min_radial:

      minimal radius (inclusive)

    • max_radial:

      maximal radius (inclusive)

    • rvt_kind:

      either "basic" (only VoM), or "normalized" (VoM/MoV); normalized version increases subpixel bias, but it works better at lower SNR

    • highpass_size:

      size of the high-pass filter; None means no filter (effectively, infinite size)

    • upsample: int

      integer image upsampling factor; rmin and rmax are adjusted automatically (i.e., they refer to the non-upsampled image); if upsample>1, the resulting image size is multiplied by upsample

    • rweights:

      relative weights of different radial kernels; must be a 1D array of length (rmax-rmin+1)//coarse_factor

    • coarse_factor:

      the reduction factor for the number ring kernels; can be used to speed up calculations at the expense of precision

    • coarse_mode:

      the reduction method; can be "add" (add coarse_factor rings in a row to get a thicker ring, which works better for smooth features), or "skip" (only keep on in coarse_factor rings, which works better for very fine features)

    • pad_mode:

      edge padding mode for convolutions; can be either one of modes accepted by np.pad (such as "constant", "reflect", or "edge"), or "fast", which means faster no-padding (a combination of "wrap" and "constant" padding depending on the image size); "fast" mode works faster for smaller images and larger rmax, but the border pixels (within rmax from the edge) are less reliable; note that the image mean is subtracted before processing, so pad_mode="constant" (default) is equivalent to padding with a constant value equal to the image mean

Returns:

df_PSF – The dataframe for PSFs that contains the [‘x’, ‘y’, ‘frame number’, ‘sigma’] for each PSF.

Return type:

pandas dataframe

psf_detection_preview(function, min_sigma=1, max_sigma=2, sigma_ratio=1.1, threshold=0, overlap=0, min_radial=1, max_radial=2, radial_step=0.1, alpha=2, beta=1, stdFactor=1, rvt_kind='basic', highpass_size=None, upsample=1, rweights=None, coarse_factor=1, coarse_mode='add', pad_mode='constant', mode='BOTH', frame_number=0, median_filter_flag=False, color='gray', imgSizex=5, imgSizey=5, IntSlider_width='500px', title='')[source]

This function is a preview wrapper for calling various PSF localization methods.

Parameters:
  • function (str) – PSF localization algorithm, which should be selected from: ('dog', 'log', 'doh', 'frst_one_psf')

  • mode (str) – Defines which PSFs will be detected ('BRIGHT', 'DARK', or 'BOTH').

  • frame_number (int) – The frame number on which PSF detection is applied.

  • optional_1

    These parameters are used when 'dog', 'log', 'doh' are defined as function.

    • min_sigma: float, list of floats

      This is the minimum standard deviation for the kernel. The lower the value, the smaller the blobs that can be detected. The standard deviations of the filter are given for each axis as a sequence, or as a single number which is applied to both axes.

    • max_sigma: float, list of floats

      This is the maximum standard deviation for the kernel. The higher the value, the larger the blobs that can be detected. The standard deviations of the filter are given for each axis as a sequence, or as a single number which is applied to both axes.

    • sigma_ratio: float

      • The ratio between the standard deviations of the kernels used for computing the DoG and LoG.

      • The number of intermediate standard-deviation values between min_sigma and max_sigma used for computing the DoH.

    • threshold_min: float

      The absolute lower bound for scale-space maxima. Local maxima smaller than this value are ignored. Reduce it to detect blobs with lower intensities.

    • overlap: float

      A value between 0 and 1. If the areas of two blobs overlap by a fraction greater than this value, the smaller blob is eliminated.

  • optional_2

    These parameters are used when 'RVT' is defined as function.

    • min_radial:

      minimal radius (inclusive)

    • max_radial:

      maximal radius (inclusive)

    • rvt_kind:

      either "basic" (only VoM), or "normalized" (VoM/MoV); normalized version increases subpixel bias, but it works better at lower SNR

    • highpass_size:

      size of the high-pass filter; None means no filter (effectively, infinite size)

    • upsample: int

      integer image upsampling factor; rmin and rmax are adjusted automatically (i.e., they refer to the non-upsampled image); if upsample>1, the resulting image size is multiplied by upsample

    • rweights:

      relative weights of the different radial kernels; must be a 1D array of length (rmax-rmin+1)//coarse_factor

    • coarse_factor:

      the reduction factor for the number of ring kernels; can be used to speed up calculations at the expense of precision

    • coarse_mode:

      the reduction method; can be "add" (add coarse_factor rings in a row to get a thicker ring, which works better for smooth features), or "skip" (only keep one in every coarse_factor rings, which works better for very fine features)

    • pad_mode:

      edge padding mode for convolutions; can be either one of the modes accepted by np.pad (such as "constant", "reflect", or "edge"), or "fast", which means faster no-padding (a combination of "wrap" and "constant" padding depending on the image size); "fast" mode works faster for smaller images and larger rmax, but the border pixels (within rmax of the edge) are less reliable; note that the image mean is subtracted before processing, so pad_mode="constant" (the default) is equivalent to padding with a constant value equal to the image mean

Returns:

df_PSF – The dataframe for PSFs that contains the [‘x’, ‘y’, ‘frame number’, ‘sigma’] for each PSF

Return type:

pandas dataframe
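To make the roles of min_sigma, max_sigma, and threshold concrete, here is a toy difference-of-Gaussians detector for a single frame. This is an illustrative sketch only, not PiSCAT's implementation; the library's 'dog' option is considerably more elaborate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_detect(frame, min_sigma=1.0, max_sigma=2.0, threshold=1e-3):
    """Toy difference-of-Gaussians detector: band-pass the frame with two
    Gaussian blurs, then keep local maxima above the threshold."""
    dog = gaussian_filter(frame, min_sigma) - gaussian_filter(frame, max_sigma)
    local_max = (dog == maximum_filter(dog, size=5)) & (dog > threshold)
    ys, xs = np.nonzero(local_max)
    return list(zip(ys.tolist(), xs.tolist()))

# Synthetic frame with a single PSF-like spot at (15, 15).
yy, xx = np.mgrid[0:32, 0:32]
frame = np.exp(-((yy - 15) ** 2 + (xx - 15) ** 2) / (2 * 1.5 ** 2))
peaks = dog_detect(frame)
```

Raising max_sigma lets the band-pass respond to larger blobs; raising threshold suppresses dimmer ones.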

class piscat.Localization.SpatialFilter[source]

The SpatialFilter class in PiSCAT allows users to filter out outlier frames (frames with strong vibration or a particle flying by), dense PSFs, and non-symmetric PSFs that do not properly resemble the iPSF expected from the experimental setup. The threshold parameter of each filter determines its sensitivity.

dense_PSFs(df_PSFs, threshold=0)[source]

Remove PSFs from the dataframe that have an overlap greater than the specified portion.

Parameters:
  • df_PSFs (pandas DataFrame) – The dataframe containing PSF locations (x, y, frame, sigma).

  • threshold (float) – The threshold specifies the maximum allowable portion of overlap between two PSFs. It should be a value between 0 and 1.

Returns:

  • filter_df_PSFs (pandas DataFrame) – The filtered dataframe containing PSF locations (x, y, frame, sigma).

  • Equation:

    • The radii of PSF1 and PSF2 are calculated as sqrt(2) * sigma1 and sqrt(2) * sigma2, respectively.

    • The distance between two PSFs is calculated as d = sqrt((x1 - x2)^2 + (y1 - y2)^2).

    • The minimum acceptable distance (without overlap) is calculated as l = sqrt(2) * (sigma1 + sigma2).

    • PSFs are removed if d <= l * (1 - threshold).
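The distance criterion above can be sketched directly in NumPy. This is a minimal illustration of the equation, not the library's implementation, and for brevity it ignores the frame column (real filtering is done per frame):

```python
import numpy as np
import pandas as pd

def dense_psf_mask(df, threshold=0.0):
    """Boolean mask of PSFs that violate d <= l * (1 - threshold)."""
    x, y, s = df["x"].to_numpy(), df["y"].to_numpy(), df["sigma"].to_numpy()
    d = np.hypot(x[:, None] - x[None, :], y[:, None] - y[None, :])
    l = np.sqrt(2) * (s[:, None] + s[None, :])
    close = d <= l * (1 - threshold)
    np.fill_diagonal(close, False)   # a PSF never overlaps itself
    return close.any(axis=1)         # True where the PSF has a dense neighbour

df = pd.DataFrame({"x": [0.0, 1.0, 50.0], "y": [0.0, 0.0, 50.0],
                   "sigma": [1.5, 1.5, 1.5], "frame": [0, 0, 0]})
mask = dense_psf_mask(df)            # first two PSFs overlap, third is isolated
```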

outlier_frames(df_PSFs, threshold=20)[source]

This method discards all PSFs detected in frames whose total PSF count exceeds the threshold value. It reduces spurious PSFs detected in unstable frames.

Parameters:
  • df_PSFs (pandas dataframe) – The dataframe containing PSF locations (x, y, frame, sigma)

  • threshold (int) – Maximum number of PSFs in one frame.

Returns:

filter_df_PSFs – The filtered dataframe containing PSF locations (x, y, frame, sigma)

Return type:

pandas dataframe
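The frame-count criterion reduces to a short pandas idiom (an illustrative equivalent, not the package's code):

```python
import pandas as pd

def drop_outlier_frames(df, threshold=20):
    """Drop every PSF belonging to a frame with more than
    `threshold` detections."""
    counts = df.groupby("frame")["frame"].transform("size")
    return df[counts <= threshold]

df = pd.DataFrame({"frame": [0, 0, 0, 1],
                   "x": [1, 2, 3, 4], "y": [1, 2, 3, 4]})
out = drop_outlier_frames(df, threshold=2)   # frame 0 has 3 PSFs and is dropped
```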

remove_side_lobes_artifact(df_PSFs, threshold=0)[source]

This filter removes false detections on the side lobes of PSFs, produced by the localization algorithm, by comparing the center intensity contrast.

Parameters:
  • df_PSFs (pandas dataframe) – The dataframe containing PSF locations (x, y, frame, sigma, center_intensity)

  • threshold (float) – Specifies the portion of overlap that two PSFs must have in order to be removed from the list.

Returns:

filter_df_PSFs – The filtered dataframe containing PSF locations (x, y, frame, sigma, center_intensity)

Return type:

pandas dataframe

symmetric_PSFs(df_PSFs, threshold=0.7)[source]

This filter removes PSFs affected by astigmatism.

Parameters:
  • df_PSFs (pandas dataframe) – The dataframe containing PSF locations (x, y, frame, sigma)

  • threshold (float) – The smallest acceptable sigma ratio (sigma_max/sigma_min).

Returns:

df_PSF_thr – The filtered dataframe containing PSF locations (x, y, frame, sigma)

Return type:

pandas dataframe

class piscat.Localization.DirectionalIntensity[source]
bin_by(x, y, nbins=30)[source]

Bin x by y, given paired observations of x & y. Returns the binned “x” values and the left edges of the bins.

index_coords(data, origin=None)[source]

Creates x & y coordinates for the indices of a numpy array "data". "origin" defaults to the center of the image. Specify origin=(0,0) to set the origin to the lower-left corner of the image.

interpolate_pixels_along_line(x0, y0, x1, y1)[source]

Uses Xiaolin Wu’s line algorithm to interpolate all of the pixels along a straight line, given two points (x0, y0) and (x1, y1)

References

Adapted from:

[1] https://stackoverflow.com/questions/3798333/image-information-along-a-polar-coordinate-system

[2] Wikipedia article containing the pseudocode on which this function is based: http://en.wikipedia.org/wiki/Xiaolin_Wu’s_line_algorithm

plot_directional_intensity(data, origin=None)[source]

Makes a circular histogram showing the average intensity binned by direction from "origin" for each band in "data" (a 3D numpy array). "origin" defaults to the center of the image.

plot_polar_image(data, origin=None)[source]

Plots an image reprojected into polar coordinates with the origin at "origin" (a tuple of (x0, y0); defaults to the center of the image)

reproject_image_into_polar(data, origin=None)[source]

Reprojects a 3D numpy array (“data”) into a polar coordinate system. “origin” is a tuple of (x0, y0) and defaults to the center of the image.

piscat.Localization.gaussian_2d(xy_mesh, amp, xc, yc, sigma_x, sigma_y, b)[source]
piscat.Localization.fit_2D_Gaussian_varAmp(image, sigma_x, sigma_y, display_flag=False)[source]

This function uses non-linear least squares to fit a 2D Gaussian.

Parameters:
  • image (NDArray) – 2D numpy array, image.

  • sigma_x (float) – Initial value for sigma along x.

  • sigma_y (float) – Initial value for sigma along y.

  • display_flag (bool) – This flag is used to display the result of fitting for each PSF.

Returns:

[sigma_ratio, fit_params, fit_errors]

Return type:

(list)
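The fit can be sketched with SciPy's curve_fit. The model below follows the gaussian_2d signature documented above; the exact functional form (amplitude, elliptical Gaussian, constant offset b) is an assumption for illustration, not PiSCAT's verbatim code:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(xy_mesh, amp, xc, yc, sigma_x, sigma_y, b):
    """Elliptical 2D Gaussian with amplitude amp, center (xc, yc), offset b."""
    x, y = xy_mesh
    g = amp * np.exp(-((x - xc) ** 2 / (2 * sigma_x ** 2)
                       + (y - yc) ** 2 / (2 * sigma_y ** 2))) + b
    return g.ravel()

# Synthetic PSF with known parameters, then a non-linear least-squares fit.
X, Y = np.meshgrid(np.arange(21), np.arange(21))
image = gaussian_2d((X, Y), 1.0, 10.0, 10.0, 2.0, 3.0, 0.1).reshape(21, 21)
p0 = [0.8, 9.0, 9.0, 1.5, 2.5, 0.0]              # initial guess
popt, pcov = curve_fit(gaussian_2d, (X, Y), image.ravel(), p0=p0)
fit_errors = np.sqrt(np.diag(pcov))
sigma_ratio = max(abs(popt[3]), abs(popt[4])) / min(abs(popt[3]), abs(popt[4]))
```

sigma_ratio is the quantity used by symmetric_PSFs above: a value near 1 indicates a round, astigmatism-free PSF.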

piscat.Localization.feature2df(feature_position, videos)[source]
piscat.Localization.list2dataframe(feature_position, video)[source]

This function converts the output of the particle_localization.PSFsExtraction method from a list to a dataframe.

Parameters:
  • feature_position (list) – List of PSF positions (x, y, frame, sigma)

  • video (NDArray) – The video is 3D-numpy (number of frames, width, height).

Returns:

df_features – PSF positions are stored in the data frame. ( ‘y’, ‘x’, ‘frame’, ‘center_intensity’, ‘sigma’, ‘Sigma_ratio’, …).

Return type:

pandas dataframe

piscat.Preproccessing

class piscat.Preproccessing.FFT2D(video)[source]

This class computes the 2D spectrum of video.

Parameters:

video (NDArray) – The video is 3D-numpy (number of frames, width, height).

run(self) None[source]
class piscat.Preproccessing.Filters(video, inter_flag_parallel_active=True)[source]

This class generates a list of video/image filters. To improve performance on large video files, some of them have a parallel implementation.

Parameters:
  • video (NDArray) – The video is 3D-numpy (number of frames, width, height).

  • inter_flag_parallel_active (bool) – This flag enables or disables parallel CPU processing for this class.

flat_field(sigma)[source]

This method corrects the video background by creating a synthetic flat fielding version of the background.

Parameters:

sigma (float) – Sigma of the Gaussian filter used to blur the video.

Returns:

flat_field_video – The background corrected video as 3D-numpy

Return type:

NDArray

gaussian(sigma)[source]

This function applies a 2D Gaussian filter on each frame.

Parameters:

sigma (float) – Sigma of the Gaussian filter used to blur the video.

Returns:

self.blur_video – The filtered video as 3D-numpy.

Return type:

NDArray

median(size)[source]

This function applies a 2D median filter on each frame.

Parameters:

size (int) – Kernel size of the median filter.

Returns:

self.blur_video – The filtered video as 3D-numpy.

Return type:

NDArray

temporal_median()[source]

The background is corrected by subtracting the temporal median of each pixel.

Returns:

The background-corrected video as 3D-numpy

Return type:

NDArray
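Temporal-median background correction amounts to one NumPy operation, sketched below under the same (number of frames, width, height) layout. Subtraction is shown here as one plausible form; the library may normalize differently:

```python
import numpy as np

def temporal_median_correction(video):
    """Subtract each pixel's median over time, removing the static background."""
    background = np.median(video, axis=0)        # (width, height) median image
    return video - background[None, :, :]

rng = np.random.default_rng(0)
video = np.full((50, 8, 8), 100.0) + rng.normal(0, 1, (50, 8, 8))
corrected = temporal_median_correction(video)    # residual is near zero
```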

class piscat.Preproccessing.RadialVarianceTransform(inter_flag_parallel_active=True)[source]

Efficient Python implementation of Radial Variance Transform.

The main function is rvt() at the bottom of the file, which applies the transform to a single image (2D numpy array).

Compared to the vanilla convolution implementation, there are two speed-ups:

1) Pre-calculating and caching the kernel FFTs; this way, only one inverse FFT is calculated per convolution, and a single forward FFT of the image is used for all convolutions

2) When finding MoV, calculate np.mean(rsqmeans) in a single convolution by averaging all kernels first

Parameters:

inter_flag_parallel_active (bool) – This flag activates or deactivates parallel CPU processing for this class.

References

The Radial Variance Transform code has been adapted from the GitHub repository mentioned below.

[1] Kashkanova, Anna D., et al. “Precision single-particle localization using radial variance transform.” Optics Express 29.7 (2021): 11070-11083.

[2] https://github.com/SandoghdarLab/rvt

convolve_fft(sp1, sp2, s1, s2, fshape, fast_mode=False)[source]

Calculate the convolution from the Fourier transforms of the original image and the kernel, trimming the result if necessary.

gen_r_kernel(r, rmax)[source]

Generate a ring kernel with radius r and size 2*rmax+1

generate_all_kernels(rmin, rmax, coarse_factor=1, coarse_mode='add')[source]

Generate a set of kernels with radii between rmin and rmax and sizes 2*rmax+1.

coarse_factor and coarse_mode determine whether the number of kernels is reduced, by either skipping or adding them (see rvt() for a more detailed explanation).

get_fshape(s1, s2, fast_mode=False)[source]

Get the required shape of the transformed image given the shape of the original image and the kernel.

high_pass(img, size)[source]

Perform Gaussian high-pass filter on the image

prepare_fft(inp, fshape, pad_mode='constant')[source]

Prepare the image for a convolution by taking its Fourier transform, applying padding if necessary.

rvt(img, rmin, rmax, kind='basic', highpass_size=None, upsample=1, rweights=None, coarse_factor=1, coarse_mode='add', pad_mode='constant')[source]

Perform Radial Variance Transform (RVT) of an image.

Parameters:
  • img (NDArray) – source image (2D numpy array)

  • rmin – minimal radius (inclusive)

  • rmax – maximal radius (inclusive)

  • kind – either "basic" (only VoM), or "normalized" (VoM/MoV); normalized version increases subpixel bias, but it works better at lower SNR

  • highpass_size – size of the high-pass filter; None means no filter (effectively, infinite size)

  • upsample (int) – integer image upsampling factor; rmin and rmax are adjusted automatically (i.e., they refer to the non-upsampled image); if upsample>1, the resulting image size is multiplied by upsample

  • rweights – relative weights of the different radial kernels; must be a 1D array of length (rmax-rmin+1)//coarse_factor

  • coarse_factor – the reduction factor for the number of ring kernels; can be used to speed up calculations at the expense of precision

  • coarse_mode – the reduction method; can be "add" (add coarse_factor rings in a row to get a thicker ring, which works better for smooth features), or "skip" (only keep one in every coarse_factor rings, which works better for very fine features)

  • pad_mode – edge padding mode for convolutions; can be either one of the modes accepted by np.pad (such as "constant", "reflect", or "edge"), or "fast", which means faster no-padding (a combination of "wrap" and "constant" padding depending on the image size); "fast" mode works faster for smaller images and larger rmax, but the border pixels (within rmax of the edge) are less reliable; note that the image mean is subtracted before processing, so pad_mode="constant" (the default) is equivalent to padding with a constant value equal to the image mean

Return type:

Transformed source image
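The "basic" (VoM) variant can be sketched in plain NumPy to make rmin, rmax, and kind concrete. This is a naive version with periodic borders via np.roll; the class above uses cached FFT convolutions and the padding modes described in pad_mode instead:

```python
import numpy as np

def ring_mean(img, r):
    """Mean intensity over the ring of radius r around every pixel."""
    dy, dx = np.mgrid[-r:r + 1, -r:r + 1]
    mask = np.round(np.hypot(dy, dx)) == r
    acc = np.zeros_like(img, dtype=float)
    for sy, sx in zip(dy[mask], dx[mask]):
        acc += np.roll(np.roll(img, -sy, axis=0), -sx, axis=1)
    return acc / mask.sum()

def rvt_basic(img, rmin, rmax):
    """'basic' kind: variance of the ring means (VoM) across radii rmin..rmax."""
    means = np.stack([ring_mean(img, r) for r in range(rmin, rmax + 1)])
    return np.var(means, axis=0)

# A radially symmetric spot produces a sharp VoM maximum at its center.
yy, xx = np.mgrid[0:41, 0:41]
spot = np.exp(-((yy - 20) ** 2 + (xx - 20) ** 2) / (2 * 2.0 ** 2))
out = rvt_basic(spot, rmin=2, rmax=6)
peak = np.unravel_index(np.argmax(out), out.shape)
```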

rvt_core(img, rmin, rmax, kind='basic', rweights=None, coarse_factor=1, coarse_mode='add', pad_mode='constant')[source]

Perform core part of Radial Variance Transform (RVT) of an image.

Parameters:
  • img (NDArray) – source image (2D numpy array)

  • rmin (float) – minimal radius (inclusive)

  • rmax (float) – maximal radius (inclusive)

  • kind (str, ("basic", "normalized")) – either "basic" (only VoM), or "normalized" (VoM/MoV); normalized version increases subpixel bias, but it works better at lower SNR

  • rweights – relative weights of the different radial kernels; must be a 1D array of length (rmax-rmin+1)//coarse_factor

  • coarse_factor – the reduction factor for the number of ring kernels; can be used to speed up calculations at the expense of precision

  • coarse_mode (str, ("add", "skip")) – The reduction method; can be "add" (add coarse_factor rings in a row to get a thicker ring, which works better for smooth features), or "skip" (only keep one in every coarse_factor rings, which works better for very fine features)

  • pad_mode (str, ("constant", "reflect", "edge", "fast")) – Edge padding mode for convolutions; can be either one of the modes accepted by np.pad (such as "constant", "reflect", or "edge"), or "fast", which means faster no-padding (a combination of "wrap" and "constant" padding depending on the image size); "fast" mode works faster for smaller images and larger rmax, but the border pixels (within rmax of the edge) are less reliable; note that the image mean is subtracted before processing, so pad_mode="constant" (the default) is equivalent to padding with a constant value equal to the image mean

rvt_kernel(frame_num, rmin, rmax, kind='basic', highpass_size=None, upsample=1, rweights=None, coarse_factor=1, coarse_mode='add', pad_mode='constant')[source]

Perform Radial Variance Transform (RVT) of an image.

Parameters:
  • frame_num (int) – frame number

  • rmin – minimal radius (inclusive)

  • rmax – maximal radius (inclusive)

  • kind – either "basic" (only VoM), or "normalized" (VoM/MoV); normalized version increases subpixel bias, but it works better at lower SNR

  • highpass_size – size of the high-pass filter; None means no filter (effectively, infinite size)

  • upsample (int) – integer image upsampling factor; rmin and rmax are adjusted automatically (i.e., they refer to the non-upsampled image); if upsample>1, the resulting image size is multiplied by upsample

  • rweights – relative weights of the different radial kernels; must be a 1D array of length (rmax-rmin+1)//coarse_factor

  • coarse_factor – the reduction factor for the number of ring kernels; can be used to speed up calculations at the expense of precision

  • coarse_mode – the reduction method; can be "add" (add coarse_factor rings in a row to get a thicker ring, which works better for smooth features), or "skip" (only keep one in every coarse_factor rings, which works better for very fine features)

  • pad_mode – edge padding mode for convolutions; can be either one of the modes accepted by np.pad (such as "constant", "reflect", or "edge"), or "fast", which means faster no-padding (a combination of "wrap" and "constant" padding depending on the image size); "fast" mode works faster for smaller images and larger rmax, but the border pixels (within rmax of the edge) are less reliable; note that the image mean is subtracted before processing, so pad_mode="constant" (the default) is equivalent to padding with a constant value equal to the image mean

Return type:

Transformed source image

rvt_video(video, rmin, rmax, kind='basic', highpass_size=None, upsample=1, rweights=None, coarse_factor=1, coarse_mode='add', pad_mode='constant')[source]

This is an RVT wrapper that processes a whole video in parallel.

Parameters:
  • video (NDArray) – The video is 3D-numpy (number of frames, width, height).

  • rmin – minimal radius (inclusive)

  • rmax – maximal radius (inclusive)

  • kind – either "basic" (only VoM), or "normalized" (VoM/MoV); normalized version increases subpixel bias, but it works better at lower SNR

  • highpass_size – size of the high-pass filter; None means no filter (effectively, infinite size)

  • upsample (int) – integer image upsampling factor; rmin and rmax are adjusted automatically (i.e., they refer to the non-upsampled image); if upsample>1, the resulting image size is multiplied by upsample

  • rweights – relative weights of the different radial kernels; must be a 1D array of length (rmax-rmin+1)//coarse_factor

  • coarse_factor – the reduction factor for the number of ring kernels; can be used to speed up calculations at the expense of precision

  • coarse_mode – the reduction method; can be "add" (add coarse_factor rings in a row to get a thicker ring, which works better for smooth features), or "skip" (only keep one in every coarse_factor rings, which works better for very fine features)

  • pad_mode – edge padding mode for convolutions; can be either one of the modes accepted by np.pad (such as "constant", "reflect", or "edge"), or "fast", which means faster no-padding (a combination of "wrap" and "constant" padding depending on the image size); "fast" mode works faster for smaller images and larger rmax, but the border pixels (within rmax of the edge) are less reliable; note that the image mean is subtracted before processing, so pad_mode="constant" (the default) is equivalent to padding with a constant value equal to the image mean

Return type:

Transformed source video.

class piscat.Preproccessing.FastRadialSymmetryTransform[source]

Implementation of fast radial symmetry transform in pure Python using OpenCV and numpy.

References

The fast radial symmetry transform code has been adopted from the GitHub repository mentioned below.

[1] Loy, G., & Zelinsky, A. (2002). A fast radial symmetry transform for detecting points of interest. Computer Vision, ECCV 2002.

[2] https://github.com/Xonxt/frst

class piscat.Preproccessing.GuidedFilter(I, radius, eps)[source]

This class builds a guided filter according to the number of channels of the guided input. The guided input can be a gray image, a color image, or a multi-dimensional feature map.

Parameters:
  • I (NDArray) – Guided image or guided feature map

  • radius (int) – Radius of filter

  • eps (float) – Value controlling sharpness

References

[1] K.He, J.Sun, and X.Tang. Guided Image Filtering. TPAMI’12.

filter(p)[source]
Parameters:

p (NDArray) – Filtering input which is 2D or 3D with format HW or HWC

Returns:

ret – Filtering output with the same shape as the input

Return type:

NDArray

class piscat.Preproccessing.GrayGuidedFilter(I, radius, eps)[source]

Specific guided filter for gray guided image.

Parameters:
  • I (NDArray) – 2D guided image

  • radius (int) – Radius of filter

  • eps (float) – Value controlling sharpness

References

The guided filter code for gray guided images has been adapted from an external GitHub repository.

filter(p)[source]
Parameters:

p (NDArray) – Filtering input of 2D

Returns:

q – Filtering output of 2D

Return type:

NDArray

class piscat.Preproccessing.MultiDimGuidedFilter(I, radius, eps)[source]

Specific guided filter for color guided image or multi-dimensional feature map.

Parameters:
  • I (NDArray) – Image.

  • radius (int) – Radius of filter

  • eps (float) – Value controlling sharpness

References

The guided filter code for color guided images has been adapted from an external GitHub repository.

filter(p)[source]
Parameters:

p (NDArray) – Filtering input of 2D

Returns:

q – Filtering output of 2D

Return type:

NDArray

class piscat.Preproccessing.MedianProjectionFPNc(video, select_correction_axis, flag_GUI=False)[source]

This class uses a heuristic procedure called Median Projection FPN (mFPN) to reduce fixed pattern noise (FPN).

References

[1] Mirzaalian Dastjerdi, Houman, et al. “Optimized analysis for sensitive detection and analysis of single proteins via interferometric scattering microscopy.” Journal of Physics D: Applied Physics (2021). (http://iopscience.iop.org/article/10.1088/1361-6463/ac2f68)

Parameters:
  • video (NDArray) – The video is 3D-numpy (number of frames, width, height).

  • select_correction_axis (int (0/1)) –

    • 0: FPN will be applied row-wise.

    • 1: FPN will be applied column-wise.

mFPNc(select_correction_axis)[source]

Apply the mFPN approach to the video.

Parameters:

select_correction_axis (int (0/1)) –

  • 0: FPN will be applied row-wise.

  • 1: FPN will be applied column-wise.

Returns:

output – Video after using the mFPNc technique.

Return type:

NDArray
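The core idea of median projection can be sketched as follows. This is a simplified illustration of the row/column convention above, assuming a plain median subtraction; the published mFPN procedure [1] is more elaborate:

```python
import numpy as np

def median_projection_fpnc(video, select_correction_axis=0):
    """Subtract, per frame, the median taken along the complementary axis,
    suppressing row- (0) or column-correlated (1) fixed pattern noise."""
    # Frames have shape (width, height) = video axes (1, 2); correcting
    # axis 0 (row-wise) means taking the median along axis 2, and vice versa.
    proj = np.median(video, axis=2 - select_correction_axis, keepdims=True)
    return video - proj

video = np.zeros((3, 4, 5))
video[:, 1, :] += 5.0                        # a stripe along one row in every frame
clean = median_projection_fpnc(video, 0)     # stripe removed
```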

run(self) None[source]
class piscat.Preproccessing.ColumnProjectionFPNc(video, select_correction_axis, flag_GUI=False)[source]

This class uses a heuristic procedure called Column Projection FPN (cpFPN) to reduce fixed pattern noise (FPN).

Parameters:
  • video (NDArray) – The video is 3D-numpy (number of frames, width, height).

  • select_correction_axis (int (0/1)) –

    • 0: FPN will be applied row-wise.

    • 1: FPN will be applied column-wise.

cpFPNc(select_correction_axis)[source]

Apply the cpFPN approach to the video.

Parameters:

select_correction_axis (int (0/1)) –

  • 0: FPN will be applied row-wise.

  • 1: FPN will be applied column-wise.

Returns:

output – Video after using the cpFPNc technique.

Return type:

NDArray

run(self) None[source]
class piscat.Preproccessing.FrequencyFPNc(video, inter_flag_parallel_active=True)[source]

This class corrects FPN using two well-known frequency domain techniques from the literature.

References

[1] Cao, Yanlong, et al. “A multi-scale non-uniformity correction method based on wavelet decomposition and guided filtering for uncooled long wave infrared camera.” Signal Processing: Image Communication 60 (2018): 13-21.

[2] Zeng, Qingjie, et al. “Single infrared image-based stripe non-uniformity correction via a two-stage filtering method.” Sensors 18.12 (2018): 4299.

Parameters:
  • video (NDArray) – The video is 3D-numpy (number of frames, width, height).

  • inter_flag_parallel_active (bool) – This flag enables or disables parallel CPU execution of this class.

update_fFPN(direction='Horizontal', max_iterations=10, width=1)[source]

This method corrects the FPNc by using FFT [2].

Parameters:
  • direction (str) –

    The axis along which FPN correction is applied.

    • 'Horizontal'

    • 'Vertical'

  • max_iterations (int) – Total number of filtering iterations.

  • width (int) – The frequency mask’s width.

Returns:

n_video – Video after using the fFPNc technique.

Return type:

NDArray

update_wFPN(direction='Horizontal')[source]

This method corrects the FPNc by using wavelets [1].

Parameters:

direction (str) –

The axis along which FPN correction is applied.

  • 'Horizontal'

  • 'Vertical'

Returns:

n_video – Video after using the wFPNc technique.

Return type:

NDArray

piscat.Trajectory

class piscat.Trajectory.Linking[source]
To obtain the temporal activity of each iPSF, we use the Trackpy package's algorithm.

References

[1] http://soft-matter.github.io/trackpy/v0.4.2/

The temporal activity of each iPSF is obtained.

Parameters:
  • psf_position (pandas data frame) – The dataframe containing PSF locations (x, y, frame, sigma, …)

  • search_range (float or tuple) – The maximum distance features can move between frames, optionally per dimension.

  • memory (int) – The maximum number of frames during which a feature can vanish, then reappear nearby, and be considered the same particle. 0 by default.

Returns:

df_PSF – The input dataframe with the ‘particle’ ID column appended (x, y, frame, sigma, particle, …).

Return type:

pandas dataframe
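The linking idea can be illustrated with a toy greedy nearest-neighbour pass. Trackpy's actual algorithm is far more robust and also implements the memory parameter, which this sketch omits:

```python
import numpy as np
import pandas as pd

def link_nearest(df, search_range):
    """Assign a 'particle' ID by matching each PSF to the closest PSF
    in the previous frame within search_range (toy version, no memory)."""
    df = df.sort_values("frame").reset_index(drop=True)
    particle = np.full(len(df), -1)
    next_id, prev_idx = 0, []
    for f in sorted(df["frame"].unique()):
        cur_idx = df.index[df["frame"] == f].tolist()
        used = set()
        for i in cur_idx:
            best, best_d = None, search_range
            for j in prev_idx:
                d = np.hypot(df.at[i, "x"] - df.at[j, "x"],
                             df.at[i, "y"] - df.at[j, "y"])
                if d <= best_d and j not in used:
                    best, best_d = j, d
            if best is None:                 # no match: start a new trajectory
                particle[i], next_id = next_id, next_id + 1
            else:                            # match: inherit the previous ID
                particle[i] = particle[best]
                used.add(best)
        prev_idx = cur_idx
    return df.assign(particle=particle)

# Two particles drifting slowly over three frames keep their identities.
df = pd.DataFrame({"frame": [0, 0, 1, 1, 2, 2],
                   "x": [0.0, 10.0, 1.0, 10.0, 2.0, 10.0],
                   "y": [0.0, 10.0, 0.0, 11.0, 0.0, 12.0]})
linked = link_nearest(df, search_range=3.0)
```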

sorting_linking(df_PSFs)[source]

This function uses trajectory lengths to sort particles in a dataframe.

Parameters:

df_PSFs (pandas dataframe) – The dataframe containing PSF locations (x, y, frame, sigma, particle, …)

Returns:

total_sort_df_PSFs – The sorted version of the dataframe containing PSF locations (x, y, frame, sigma, particle, …)

Return type:

pandas dataframe

trajectory_counter(df_PSFs)[source]

This function counts the number of unique particles in the data frame.

Parameters:

df_PSFs (pandas dataframe) – The dataframe containing PSF locations (x, y, frame, sigma, particle, …)

Returns:

unique_list – The number of unique particles in the dataframe

Return type:

int
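Both of the helpers above reduce to short pandas idioms (illustrative equivalents, not the package's code):

```python
import pandas as pd

df = pd.DataFrame({"particle": [0, 0, 0, 1, 1, 2],
                   "frame": [0, 1, 2, 0, 1, 0]})

# trajectory_counter: count the unique particle IDs.
n_particles = df["particle"].nunique()

# sorting_linking: order rows so the longest trajectories come first.
lengths = df.groupby("particle")["particle"].transform("size")
sorted_df = (df.assign(_len=lengths)
               .sort_values("_len", ascending=False)
               .drop(columns="_len"))
```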

class piscat.Trajectory.TemporalFilter(video, batchSize)[source]

Filters to be applied to temporal features are included in this class.

Parameters:
  • video (NDArray) – Input video.

  • batchSize (int) – Batch size that is used for DRA.

filter_tarj_base_length(df_PSFs, threshold_min, threshold_max)[source]

This function removes particles from the dataframe whose temporal length is smaller than threshold_min or larger than threshold_max.

Parameters:
  • df_PSFs (pandas dataframe) – The dataframe containing PSF locations and IDs (x, y, frame, sigma, particle, …)

  • threshold_min (int) – The minimum acceptable temporal length of one particle.

  • threshold_max (int) – The maximum acceptable temporal length of one particle.

Returns:

  • particles (pandas dataframe) – The filter data frame (x, y, frame, sigma, particle, …)

  • his_all_particles (list) – List of the trajectory-length statistics of all particles.

v_profile(df_PSFs, window_size=2000)[source]

The V-shape trajectories and their extended versions are calculated.

Parameters:

window_size (int) – The maximum number of the frames that follow the V-Shape contrast from both sides.

Returns:

all_trajectories – An array containing the following information for each particle:

[intensity_horizontal, intensity_vertical, particle_center_intensity, particle_center_intensity_follow, particle_frame, particle_sigma, particle_X, particle_Y, particle_ID, optional(fit_intensity, fit_x, fit_y, fit_X_sigma, fit_Y_sigma, fit_Bias, fit_intensity_error, fit_x_error, fit_y_error, fit_X_sigma_error, fit_Y_sigma_error, fit_Bias_error)]

Return type:

List of list

v_trajectory(df_PSFs, threshold_min, threshold_max)[source]

This function extracts the V-shape trajectories of particles whose temporal length is between threshold_min and threshold_max.

Parameters:
  • df_PSFs (pandas dataframe) – The dataframe containing PSF locations and IDs (x, y, frame, sigma, particle, …).

  • threshold_min (int) – The minimum acceptable temporal length of one particle.

  • threshold_max (int) – The maximum acceptable temporal length of one particle.

Returns:

  • all_trajectories (List of lists) – List of extracted data for each particle:

    [intensity_horizontal, intensity_vertical, particle_center_intensity, particle_center_intensity_follow, particle_frame, particle_sigma, particle_X, particle_Y, particle_ID, optional(fit_intensity, fit_x, fit_y, fit_X_sigma, fit_Y_sigma, fit_Bias, fit_intensity_error, fit_x_error, fit_y_error, fit_X_sigma_error, fit_Y_sigma_error, fit_Bias_error)]

  • particles (pandas DataFrame) – The data frame after applying the temporal filter (x, y, frame, sigma, particle, …)

  • his_all_particles (list) – List showing the statistics of particle temporal lengths.

piscat.Trajectory.protein_trajectories_list2dic(v_shape_list)[source]

This function converts the list output of TemporalFilter.v_trajectory to dictionary format.

Parameters:

v_shape_list (List of list) –

[intensity_horizontal, intensity_vertical, particle_center_intensity, particle_center_intensity_follow, particle_frame, particle_sigma, particle_X, particle_Y, particle_ID, optional(fit_intensity, fit_x, fit_y, fit_X_sigma, fit_Y_sigma, fit_Bias, fit_intensity_error, fit_x_error, fit_y_error, fit_X_sigma_error, fit_Y_sigma_error, fit_Bias_error)]

Returns:

dic_all – Dictionary with a structure similar to the following:

{“#0”: {‘intensity_horizontal’: …, ‘intensity_vertical’: …, …, ‘particle_ID’: …}, “#1”: {}, …}

Return type:

dic
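
The conversion can be sketched as follows (a minimal illustration of the assumed mapping from list positions to field names, using the field order documented above; this is not the piscat source, and the toy trajectory values are hypothetical):

```python
# Field names in the order documented for v_trajectory's output
# (the optional fit_* fields are omitted in this sketch).
FIELDS = ["intensity_horizontal", "intensity_vertical",
          "particle_center_intensity", "particle_center_intensity_follow",
          "particle_frame", "particle_sigma",
          "particle_X", "particle_Y", "particle_ID"]

def trajectories_list2dic(v_shape_list):
    """Convert [[field0, field1, ...], ...] into {"#0": {...}, "#1": {...}}."""
    dic_all = {}
    for i, traj in enumerate(v_shape_list):
        dic_all["#%d" % i] = dict(zip(FIELDS, traj))
    return dic_all

dic_all = trajectories_list2dic(
    [[0.1, 0.2, 0.3, 0.4, [5], 1.1, 10.0, 12.0, 7]])
```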

piscat.Visualization

class piscat.Visualization.ContrastAdjustment(video)[source]

This class is used in the GUI to change the visualization’s brightness and contrast.

Parameters:

video (NDArray) – Input video.

auto_pixelTransforms(image)[source]

Automatically adjusts the brightness and contrast of the current image.

Parameters:

image (NDArray) – Input image (2D-Numpy).

Returns:

image_ – Output image (2D-Numpy)

Return type:

NDArray

pixel_transforms(image, alpha, beta, min_intensity, max_intensity)[source]

Adjusts the brightness and contrast of the current image using the given hyperparameter values.

Parameters:
  • image (NDArray) – Input image (2D-Numpy).

  • alpha (float) – Scale factor.

  • beta (float) – Delta added to the scaled values.

  • min_intensity (float) – Min intensity values of output image.

  • max_intensity (float) – Max intensity values of output image.

Returns:

image_ – Output image (2D-Numpy)

Return type:

NDArray
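
A plausible sketch of such a linear pixel transform, assuming the common image_ = alpha * image + beta form followed by clipping to [min_intensity, max_intensity] (an illustration of the parameter roles, not the piscat source):

```python
def pixel_transform(image, alpha, beta, min_intensity, max_intensity):
    """Linearly rescale pixel values (alpha = scale, beta = offset)
    and clip the result to the allowed intensity range."""
    out = []
    for row in image:
        out.append([min(max(alpha * p + beta, min_intensity), max_intensity)
                    for p in row])
    return out

img = [[0.0, 1.0],
       [2.0, 3.0]]
bright = pixel_transform(img, alpha=2.0, beta=1.0,
                         min_intensity=0.0, max_intensity=5.0)
# 3.0 maps to 7.0 before clipping, so it is capped at max_intensity.
```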

class piscat.Visualization.Display(video, step=0, color='gray', time_delay=0, median_filter_flag=False)[source]

This class displays the video.

Parameters:
  • video (NDArray) – Input video.

  • median_filter_flag (bool) – If True, a median filter of size 3 is applied to remove the hot-pixel effect.

  • color (str) – It defines the colormap for visualization.

  • time_delay (float) – Delay between frames in milliseconds.

  • step (int) – Stride between visualization frames.

class piscat.Visualization.DisplayDataFramePSFsLocalization(video, df_PSFs, time_delay=0.1, GUI_progressbar=False, *args, **kwargs)[source]

This class displays video while highlighting PSFs.

Parameters:
  • video (NDArray) – Input video.

  • df_PSFs (pandas DataFrame) – Data frame that contains the locations of PSFs.

  • time_delay (float) – Delay between frames in milliseconds.

  • GUI_progressbar (bool) – Activates the GUI progress bar.

run(self) → None[source]

class piscat.Visualization.DisplayPSFs_subplotLocalizationDisplay(list_videos, list_df_PSFs, list_titles, numRows, numColumns, color='gray', median_filter_flag=False, imgSizex=5, imgSizey=5, time_delay=0.1)[source]

This class shows several videos (with the same number of frames) at once while highlighting localized PSFs.

Parameters:
  • list_videos (list of NDArray) – List of videos.

  • list_df_PSFs (list of pandas DataFrame) – List of data frames that contain the locations of PSFs for each video.

  • numRows (int) – It defines the number of rows in the sub-display.

  • numColumns (int) – It defines the number of columns in the sub-display.

  • list_titles (list of str) – List of titles for each subplot.

  • median_filter_flag (bool) – If True, a median filter of size 3 is applied to remove the hot-pixel effect.

  • color (str) – It defines the colormap for visualization.

  • imgSizex (int) – Image length size.

  • imgSizey (int) – Image width size.

  • time_delay (float) – Delay between frames in milliseconds.

class piscat.Visualization.DisplaySubplot(list_videos, numRows, numColumns, step=0, median_filter_flag=False, color='gray')[source]

This class shows several videos (with the same number of frames) at once.

Parameters:
  • list_videos (list of NDArray) – List of videos.

  • numRows (int) – It defines the number of rows in the sub-display.

  • numColumns (int) – It defines the number of columns in the sub-display.

  • median_filter_flag (bool) – If True, a median filter of size 3 is applied to remove the hot-pixel effect.

  • color (str) – It defines the colormap for visualization.

  • step (int) – Stride between visualization frames.

class piscat.Visualization.JupyterDisplay(video, median_filter_flag=False, color='gray', title=None, xlabel=None, ylabel=None, imgSizex=5, imgSizey=5, extent=None, IntSlider_width='500px', step=1)[source]

This class displays the video in jupyter notebook.

Parameters:
  • video (NDArray) – Input video.

  • median_filter_flag (bool) – If True, a median filter of size 3 is applied to remove the hot-pixel effect.

  • color (str) – It defines the colormap for visualization.

  • title (list) – A list of string titles with a length equal to the number of frames in the video.

  • xlabel (str) – The label text for x-axis.

  • ylabel (str) – The label text for y-axis.

  • imgSizex (int) – Image length size.

  • imgSizey (int) – Image width size.

  • extent – The bounding box in data coordinates that the image will fill, specified as (left, right, bottom, top).

  • IntSlider_width (str) – Size of slider

  • step (int) – Stride between visualization frames.

class piscat.Visualization.JupyterDisplay_StatusLine(video, median_filter_flag=False, color='gray', imgSizex=5, imgSizey=5, IntSlider_width='500px', step=1, value=0)[source]

This class displays the video in the Jupyter notebook interactively while highlighting the status line.

Parameters:
  • video (NDArray) – Input video.

  • median_filter_flag (bool) – If True, a median filter of size 3 is applied to remove the hot-pixel effect.

  • color (str) – It defines the colormap for visualization.

  • imgSizex (int) – Image length size.

  • imgSizey (int) – Image width size.

  • IntSlider_width (str) – Size of slider.

  • step (int) – Stride between visualization frames.

  • value (int) – Initial frame value for visualization

class piscat.Visualization.JupyterPSFs_localizationDisplay(video, df_PSFs, median_filter_flag=False, color='gray', imgSizex=5, imgSizey=5, IntSlider_width='500px', step=1, value=0)[source]

This class displays the video in the Jupyter notebook interactively while highlighting PSFs.

Parameters:
  • video (NDArray) – Input video.

  • df_PSFs (pandas DataFrame) – Data frame that contains the locations of PSFs.

  • median_filter_flag (bool) – If True, a median filter of size 3 is applied to remove the hot-pixel effect.

  • color (str) – It defines the colormap for visualization.

  • imgSizex (int) – Image length size.

  • imgSizey (int) – Image width size.

  • IntSlider_width (str) – Size of slider.

  • step (int) – Stride between visualization frames.

  • value (int) – Initial frame value for visualization

class piscat.Visualization.JupyterPSFs_localizationPreviewDisplay(video, df_PSFs, title='', frame_num=None, median_filter_flag=False, color='gray', imgSizex=5, imgSizey=5, IntSlider_width='500px', step=1)[source]

This class displays the video in the Jupyter notebook interactively while highlighting PSFs.

Parameters:
  • video (NDArray) – Input video.

  • df_PSFs (pandas DataFrame) – Data frame that contains the locations of PSFs.

  • title (str) – It defines the title of the plot.

  • frame_num (list) – List of frames for which to preview the localization.

  • median_filter_flag (bool) – If True, a median filter of size 3 is applied to remove the hot-pixel effect.

  • color (str) – It defines the colormap for visualization.

  • imgSizex (int) – Image length size.

  • imgSizey (int) – Image width size.

  • IntSlider_width (str) – Size of slider.

  • step (int) – Stride between visualization frames.

class piscat.Visualization.JupyterPSFs_subplotLocalizationDisplay(list_videos, list_df_PSFs, numRows, numColumns, list_titles=None, median_filter_flag=False, color='gray', imgSizex=5, imgSizey=5, IntSlider_width='500px', step=1, value=0)[source]

This class shows several videos (with the same number of frames) at once in the Jupyter notebook interactively while highlighting localized PSFs.

Parameters:
  • list_videos (list of NDArray) – List of videos.

  • list_df_PSFs (list of pandas DataFrame) – List of data frames that contain the locations of PSFs for each video.

  • numRows (int) – It defines the number of rows in the sub-display.

  • numColumns (int) – It defines the number of columns in the sub-display.

  • list_titles (list of str) – List of titles for each subplot.

  • median_filter_flag (bool) – If True, a median filter of size 3 is applied to remove the hot-pixel effect.

  • color (str) – It defines the colormap for visualization.

  • imgSizex (int) – Image length size.

  • imgSizey (int) – Image width size.

  • IntSlider_width (str) – Size of slider

  • step (int) – Stride between visualization frames.

  • value (int) – Initial frame value for visualization

class piscat.Visualization.JupyterPSFs_TrackingDisplay(video, df_PSFs, median_filter_flag=False, step=1, color='gray', imgSizex=5, imgSizey=5, value=0)[source]

This class displays video in the Jupyter notebook interactively while highlighting PSFs with trajectories.

Parameters:
  • video (NDArray) – Input video.

  • df_PSFs (pandas DataFrame) – Data frame that contains the locations of PSFs.

  • median_filter_flag (bool) – If True, a median filter of size 3 is applied to remove the hot-pixel effect.

  • color (str) – It defines the colormap for visualization.

  • imgSizex (int) – Image length size.

  • imgSizey (int) – Image width size.

  • IntSlider_width (str) – Size of slider.

  • step (int) – Stride between visualization frames.

  • value (int) – Initial frame value for visualization

class piscat.Visualization.JupyterSelectedPSFs_localizationDisplay(video, particles, particles_num='#0', frame_extend=0, median_filter_flag=False, flag_fit2D=False, color='gray', imgSizex=10, imgSizey=10, IntSlider_width='500px', step=1, value=0)[source]

This class interactively shows video in a Jupyter notebook while highlighting PSFs based on ID.

Parameters:
  • video (NDArray) – Input video.

  • particles (dic) –

    Dictionary similar to the following structures.:

    {“#0”: {‘intensity_horizontal’: …, ‘intensity_vertical’: …, …, ‘particle_ID’: …}, “#1”: {}, …}

  • particles_num (str) – Choose the corresponding key in the particles dictionary.

  • frame_extend (int) – Display the particle for frame_extend frames before and after the segmented ones. If there are not enough frames before/after, only the existing frames are shown.

  • flag_fit2D (bool) – If True, a 2D Gaussian fit is applied to extract fitting information for the selected PSF.

  • median_filter_flag (bool) – If True, a median filter of size 3 is applied to remove the hot-pixel effect.

  • color (str) – It defines the colormap for visualization.

  • imgSizex (int) – Image length size.

  • imgSizey (int) – Image width size.

  • IntSlider_width (str) – Size of slider.

  • step (int) – Stride between visualization frames.

  • value (int) – Initial frame value for visualization
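
The frame_extend clamping described above can be sketched as follows (a hypothetical helper illustrating the documented behavior, not part of the piscat API):

```python
def extended_frame_range(first_frame, last_frame, frame_extend, n_frames):
    """Extend a particle's segmented frame range by frame_extend on each
    side, clipped to the frames that actually exist in the video
    (indices 0 .. n_frames - 1)."""
    start = max(first_frame - frame_extend, 0)
    stop = min(last_frame + frame_extend, n_frames - 1)
    return start, stop

# Particle segmented in frames 2..5 of an 8-frame video (0..7):
# extending by 4 on each side is clipped to the existing frames.
rng = extended_frame_range(2, 5, frame_extend=4, n_frames=8)
```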

class piscat.Visualization.JupyterPSFs_2_modality_subplotLocalizationDisplay(list_videos, list_df_PSFs_1, list_df_PSFs_2, numRows, numColumns, list_titles=None, median_filter_flag=False, color='gray', imgSizex=5, imgSizey=5, IntSlider_width='500px', step=1, value=0, edgecolor_1='r', edgecolor_2='g')[source]

This class interactively sub-displays multiple videos in a Jupyter notebook while highlighting PSFs determined in two modalities.

Parameters:
  • list_videos (list of NDArray) – List of videos.

  • list_df_PSFs_1 (list) – List of data frames that contain the locations of PSFs in the first modality.

  • list_df_PSFs_2 (list) – List of data frames that contain the locations of PSFs in the second modality.

  • numRows (int) – It defines the number of rows in the sub-display.

  • numColumns (int) – It defines the number of columns in the sub-display.

  • list_titles (list of str) – List of titles for each subplot.

  • median_filter_flag (bool) – If True, a median filter of size 3 is applied to remove the hot-pixel effect.

  • color (str) – It defines the colormap for visualization.

  • imgSizex (int) – Image length size.

  • imgSizey (int) – Image width size.

  • IntSlider_width (str) – Size of slider

  • step (int) – Stride between visualization frames.

  • value (int) – Initial frame value for visualization

  • edgecolor_1 (str) – The color of the circle that annotates the first modality on the display.

  • edgecolor_2 (str) – The color of the circle that annotates the second modality on the display.

class piscat.Visualization.JupyterFPNcDisplay(list_videos, list_titles=None, correction_axis=0, numRows=1, numColumns=2, imgSizex=20, imgSizey=20, IntSlider_width='500px', median_filter_flag=False, color='gray', step=1, value=0)[source]

This class interactively sub-displays several FPNc videos in the Jupyter notebook; the 1D projection along the correction axis is shown below each subplot.

Parameters:
  • list_videos (list of NDArray) – List of videos.

  • correction_axis (0/1) – The axis along which the FPNc correction is visualized.

  • numRows (int) – It defines the number of rows in the sub-display.

  • numColumns (int) – It defines the number of columns in the sub-display.

  • list_titles (list of str) – List of titles for each subplot.

  • median_filter_flag (bool) – If True, a median filter of size 3 is applied to remove the hot-pixel effect.

  • color (str) – It defines the colormap for visualization.

  • imgSizex (int) – Image length size.

  • imgSizey (int) – Image width size.

  • IntSlider_width (str) – Size of slider

  • step (int) – Stride between visualization frames.

  • value (int) – Initial frame value for visualization
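
The 1D projection along the correction axis can be sketched as a mean projection (a plain-Python illustration of the plot shown below each subplot; the actual piscat implementation may differ):

```python
def projection(frame, axis):
    """Mean-project a 2D frame along the chosen axis:
    axis=0 averages over rows (one value per column),
    axis=1 averages over columns (one value per row)."""
    if axis == 0:
        n_rows = len(frame)
        return [sum(row[c] for row in frame) / n_rows
                for c in range(len(frame[0]))]
    return [sum(row) / len(row) for row in frame]

frame = [[1.0, 3.0],
         [3.0, 5.0]]
col_profile = projection(frame, axis=0)  # per-column mean
row_profile = projection(frame, axis=1)  # per-row mean
```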

piscat.Visualization.plot2df(df_PSFs, pixel_size=1, scale='(um)', title='', flag_label=False)[source]

2D localization plotting with color code for each particle.

Parameters:
  • df_PSFs (pandas DataFrame) – Data frame that contains PSF localizations and IDs.

  • pixel_size – The camera’s pixel size is used to scale the results of the localization.

  • scale (str) – Measurement scale, e.g. (‘nm’).

  • title (str) – The title of the figure.

  • flag_label (bool) –

piscat.Visualization.plot3(X, Y, Z, scale='(um)', title='')[source]

3D localization plotting

Parameters:
  • X (list) – X-position of PSFs.

  • Y (list) – Y-position of PSFs.

  • Z (list) – Z-position of PSFs.

  • scale (str) – Measurement scale, e.g. (‘nm’).

  • title (str) – The title of the figure.

piscat.Visualization.plot_histogram(data, title, fc='C1', ec='k')[source]