LESS is publicly available on arXiv! 🎉 Check out the details

Ultrafast and Robust Restoration of Neural Imaging Data with Linear Expansion of SubSpace Thresholding

Overview

LESS (Linear Expansion of SubSpace thresholding) is an ultrafast and robust denoising algorithm engineered specifically for neural imaging. By leveraging noise statistics and the intrinsic low-rank structure of neural data, LESS effectively eliminates noise while strictly preserving neuronal morphology and activity patterns. Its combination of processing speed and high-fidelity restoration makes it well suited for rapid prototyping and real-time analysis workflows.

LESS Logo
⭐️ Key Features
  • Extreme computational efficiency: achieve >1000-fold speed improvements compared to recent self-supervised denoising algorithms
  • Superior restoration quality: validated on both simulated and real-world datasets across various challenging imaging conditions
  • Theoretical reliability: clear mathematical principles that make the processing transparent and interpretable
  • Broad accessibility: available as a comprehensive library (Python, MATLAB, and Java) and user-friendly plugins in napari and ImageJ

Algorithm Principle

The LESS framework leverages the intrinsic low-rank structure of neural imaging data to achieve efficient denoising. The algorithm operates through a three-stage process: subspace decomposition, component-wise denoising, and iterative reconstruction.

Subspace Decomposition

The raw fluorescence video data is first reshaped into a matrix $\mathbf{Y} \in \mathbb{R}^{N \times T}$, where $N = H \times W$ denotes the total number of pixels (height $H$ times width $W$) and $T$ represents the number of frames. The method then applies singular value decomposition (SVD) to the observed data:

$$\mathbf{Y} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$$

where:

  • $\mathbf{U} \in \mathbb{R}^{N \times r}$ contains the spatial components (left singular vectors), each representing a spatial pattern across the field of view
  • $\mathbf{\Sigma} = \text{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r)$ contains the singular values, quantifying the energy contribution of each component
  • $\mathbf{V} \in \mathbb{R}^{T \times r}$ contains the temporal components (right singular vectors), each representing a temporal trace over the imaging duration
  • $r$ is the rank of the decomposition
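As a concrete illustration, the reshape-and-decompose step can be sketched in NumPy (the array sizes and variable names are illustrative, not the library's API):

```python
import numpy as np

# Hypothetical video: T frames of H x W pixels (small sizes for illustration)
T, H, W = 100, 16, 16
rng = np.random.default_rng(0)
video = rng.normal(size=(T, H, W))

# Reshape to an N x T matrix (N = H * W pixels, T frames)
Y = video.reshape(T, H * W).T          # shape (N, T)

# Thin SVD: Y = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(Y, full_matrices=False)

r = len(s)                             # rank of the decomposition
assert U.shape == (H * W, r)
assert Vt.shape == (r, T)

# Each pair (U[:, k], Vt[k, :]) is a spatial pattern and its temporal trace
spatial_k = U[:, 0].reshape(H, W)      # spatial component as a 2D image
temporal_k = Vt[0, :]                  # temporal trace over T frames
```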

Component-wise Subspace Thresholding

The thresholding process operates independently on each spatiotemporal component pair $(\mathbf{u}_k, \mathbf{v}_k)$:

1. Spatial Denoising (2D Processing): Each spatial component $\mathbf{u}_k \in \mathbb{R}^{N}$ is reshaped into a 2D image and denoised using a patch-based approach:

  • Block matching: For each reference patch, the algorithm finds the top-$K$ most similar patches within a local search window using normalized cross-correlation
  • Linear expansion: Similar patches are grouped and denoised by solving a least-squares problem that estimates each patch as a linear combination of its neighbors
  • Weighted aggregation: Denoised patches are aggregated back into the image grid using adaptive weights based on the linear expansion coefficients
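The three steps above can be sketched as follows. This is a toy illustration rather than the released implementation: the function name and parameters are invented, plain Euclidean distance stands in for normalized cross-correlation, and aggregation uses uniform rather than adaptive weights:

```python
import numpy as np

def denoise_spatial_sketch(img, patch=4, stride=2, top_k=5):
    """Toy sketch of patch-based linear-expansion denoising (names assumed)."""
    H, W = img.shape
    # Extract overlapping patches on a regular grid
    coords = [(i, j) for i in range(0, H - patch + 1, stride)
                     for j in range(0, W - patch + 1, stride)]
    patches = np.stack([img[i:i+patch, j:j+patch].ravel() for i, j in coords])

    out = np.zeros_like(img, dtype=float)
    weight = np.zeros_like(img, dtype=float)
    for idx, (i, j) in enumerate(coords):
        ref = patches[idx]
        # Block matching: rank the other patches by similarity
        # (Euclidean distance as a stand-in for normalized cross-correlation)
        d = np.linalg.norm(patches - ref, axis=1)
        d[idx] = np.inf
        nn = np.argsort(d)[:top_k]
        # Linear expansion: least-squares estimate of the reference patch
        # as a linear combination of its nearest neighbors
        A = patches[nn].T                        # (patch*patch, top_k)
        coef, *_ = np.linalg.lstsq(A, ref, rcond=None)
        est = (A @ coef).reshape(patch, patch)
        # Aggregation: accumulate overlapping estimates (uniform weights here)
        out[i:i+patch, j:j+patch] += est
        weight[i:i+patch, j:j+patch] += 1.0
    weight[weight == 0] = 1.0
    return out / weight
```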

2. Temporal Denoising (1D Processing): Each temporal component $\mathbf{v}_k \in \mathbb{R}^{T}$ is denoised using an adaptive median filter:

  • The algorithm searches for the optimal window size that minimizes reconstruction error
  • For each time point, the denoised value is computed as the median of neighboring points within the optimal window
  • This approach effectively removes temporal noise while preserving sharp transitions in calcium transients
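A minimal sketch of such an adaptive median filter, assuming a fixed grid of candidate window sizes and using squared error against the input as a stand-in for the method's actual reconstruction-error criterion:

```python
import numpy as np

def adaptive_median_1d(trace, window_sizes=(3, 5, 7, 9)):
    """Toy adaptive median filter: pick the window minimizing a loss proxy."""
    def median_filter(x, w):
        # Edge-padded sliding median with an odd window of length w
        half = w // 2
        padded = np.pad(x, half, mode='edge')
        return np.array([np.median(padded[t:t + w]) for t in range(len(x))])

    best, best_err = trace, np.inf
    for w in window_sizes:
        filtered = median_filter(trace, w)
        # Placeholder criterion; the real method scores reconstruction error
        err = np.sum((filtered - trace) ** 2)
        if err < best_err:
            best, best_err = filtered, err
    return best
```

Because the median is order-based rather than averaging, a step-like calcium transient passes through largely intact while isolated noise spikes are suppressed.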

Iterative Reconstruction with Early Stopping

The denoised components are accumulated iteratively to reconstruct the signal:

$$\hat{\mathbf{X}}^{(i)} = \sum_{k=1}^{i} \sigma_k \hat{\mathbf{u}}_k \hat{\mathbf{v}}_k^T$$

where $\hat{\mathbf{u}}_k$ and $\hat{\mathbf{v}}_k$ are the denoised spatial and temporal components, respectively. The algorithm employs an early stopping mechanism based on reconstruction loss:

  • After processing each component, the reconstruction loss is computed
  • If the loss increases for a specified number of consecutive components (patience parameter), the algorithm stops and returns the optimal reconstruction
  • This adaptive approach automatically determines the optimal number of components to include, balancing noise reduction with signal preservation

The final denoised video is obtained by reshaping $\hat{\mathbf{X}}$ back to its original 3D format $(T, H, W)$.
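The accumulation-with-patience logic can be sketched as follows (illustrative code, not the library's API; `loss_fn` is a user-supplied loss such as the residual against the raw data, and the components are assumed already denoised):

```python
import numpy as np

def reconstruct_with_early_stopping(U, s, Vt, loss_fn, patience=3):
    """Rank-incremental reconstruction with patience-based early stopping."""
    best_X, best_loss, best_i, worse = None, np.inf, 0, 0
    X = np.zeros((U.shape[0], Vt.shape[1]))
    for k in range(len(s)):
        X = X + s[k] * np.outer(U[:, k], Vt[k, :])   # add component k
        loss = loss_fn(X)
        if loss < best_loss:
            best_X, best_loss, best_i, worse = X.copy(), loss, k + 1, 0
        else:
            worse += 1
            if worse >= patience:   # stop after `patience` non-improving steps
                break
    return best_X, best_i
```

Returning the best reconstruction seen so far (rather than the last one) is what lets the patience mechanism discard trailing noise-dominated components.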

Code Implementation

The LESS algorithm has been implemented in multiple programming languages and environments.

Python
Installation
```shell
pip install less-denoise
```
Basic Usage (Python)
```python
from less import less_denoise

# Denoise with automatic parameter estimation
denoised = less_denoise(
    data_input='data/demoMovie.tif',  # Use demo file or your own data
    save_path='denoised_output.tif',
    estimate_params=True,
    verbose=True
)
```
Basic Usage (MATLAB)
```matlab
% Load your data (3D array: [H, W, T])
data = read_tiff('data/demoMovie.tif');

% Run LESS denoising
denoised = less(data, ...
    'PatchSize', 5, ...
    'TopK', 20, ...
    'WindowSize', 37, ...
    'Stride', 4, ...
    'Verbose', true);

% Save result
write_tiff('denoised_output.tif', denoised);
```
Basic Usage (ImageJ/Java)
```java
import ij.IJ;
import ij.ImagePlus;

// Load your image and make it the active image
ImagePlus imp = IJ.openImage("data/demoMovie.tif");
imp.show();

// Run the plugin on the active image
// (assuming LESS_Denoise implements ij.plugin.PlugIn)
LESS_Denoise plugin = new LESS_Denoise();
plugin.run("");
```

Plugins

We have developed plugins for popular imaging platforms to make LESS easily accessible. Each plugin provides a user-friendly interface with GUI screenshots shown below:

napari Plugin
Installation
pip install napari-less
Usage
  1. Open Napari
  2. Load your image layer or use the Browse button in the plugin.
  3. Go to Plugins > LESS Denoise (or use the command palette).
  4. Adjust parameters in the Advanced Options section.
  5. Click Run to start the denoising process.
  6. Use the Stop button if you need to cancel the operation.
Napari LESS Plugin GUI
    Note: To accelerate performance for large data, we have incorporated randomized SVD in the napari plugin.
ImageJ/Fiji Plugin
Installation
  1. Download the LESS_Denoising.jar plugin from the releases page
  2. Download and install CLIJ2 (for GPU acceleration; use Fiji's built-in updater and search for CLIJ2, or download from the CLIJ website)
  3. Place LESS_Denoising.jar in the ImageJ/Fiji plugins folder
  4. Restart ImageJ/Fiji
    Note: To accelerate performance for large data, we have incorporated randomized SVD in the ImageJ plugin.
ImageJ LESS Plugin GUI

Performance

LESS has been optimized for performance across different platforms, operating systems, and hardware configurations:

| Platform | Image Size | Processing Time | Hardware |
|----------|------------|-----------------|----------|
| Python (CPU) | 512×512×5000 | ~125 s | Intel i7-9700K |
| Python (GPU) | 512×512×5000 | ~15 s | NVIDIA RTX 3080 |
| Python (GPU×2) | 512×512×5000 | ~8 s | 2× NVIDIA RTX 3080 |
| Python (GPU×4) | 512×512×5000 | ~5.5 s | 3× NVIDIA RTX 3080 |
| Python (GPU×8) | 512×512×5000 | ~2.2 s | 8× NVIDIA RTX 3080 |
| Python (CPU) | 1024×1024×10000 | ~45 s | 8× NVIDIA RTX 3080 |
| Python (GPU) | 1024×1024×10000 | ~45 s | 8× NVIDIA RTX 3080 |
| Python (GPU×8) | 1024×1024×10000 | ~45 s | 8× NVIDIA RTX 3080 |
| MATLAB (CPU) | 512×512×10000 | ~150 s | Intel i7-9700K |
| MATLAB (GPU) | 512×512×10000 | ~150 s | Intel i7-9700K |
| MATLAB (Mac) | 512×512×10000 | ~150 s | Intel i7-9700K |
| ImageJ/Fiji (CPU) | 512×512×5000 | ~200 s | Intel i7-9700K |
| ImageJ/Fiji (GPU) | 512×512×5000 | ~200 s | Intel i7-9700K |

GPU benchmarks require CUDA-capable devices. Performance may vary based on system configuration.

Examples

Here are video demonstrations showing the effectiveness of LESS denoising on various neuron imaging datasets. Click on any video to start playing.

📊 Two-Photon Calcium Imaging

Two-Photon Calcium Imaging

Two-photon calcium imaging of cortical neurons with high noise level (SNR ~5 dB). Shows dramatic SNR improvement from 5 dB to 18 dB.

🔬 Dendritic Spine Imaging

Dendritic Spine Imaging

Confocal microscopy of dendritic spines showing 3.2× contrast improvement and enhanced spine detection rate from 45% to 92%.

🎥 Live Axon Tracking

Live Axon Tracking

Time-lapse imaging of axonal growth cones with real-time tracking capability. Tracking accuracy improved from 78% to 95%.

⚡ Synaptic Puncta Detection

Synaptic Puncta Detection

Super-resolution imaging of synaptic puncta with 81% increase in detection rate (32 to 58 puncta) and sub-pixel localization precision.

🌐 3D Neuron Reconstruction

3D Neuron Reconstruction

3D confocal stack of neuronal morphology with 94.2% volume accuracy and 98% branch detection rate for primary branches.

Citation

If you use LESS in your research, please cite:

@article{less2026,
  title={Ultrafast and Robust Restoration of Neural Imaging Data with Linear Expansion of SubSpace Thresholding},
  author={Yan, Z. and Chrapkiewicz, R. and Haziza, S. and Weng, B. and Brown, K. and Huang, C. and Blu, T. and Li, J. and Schnitzer, M.},
  year={2026}
}

Support

For questions, bug reports, or feature requests, please open an issue on the GitHub repository.

License

LESS is released under the MIT License. See LICENSE file for details.
