Neuroscience data processing and analysis

2021 Open Call project, total developer time: 4 months

Contact person: Dr. Rebecca Mease, Institute of Physiology and Pathophysiology

Outline

Voltage recordings from experiments in live mouse brains produce terabytes of raw data, which then require post-processing and a variety of further analyses. Three main bottlenecks in the existing analysis setup limit the possible size and length of experiments:

  • Constraints in the processing/analysis software limit the number of probes and the density of experimental data that can be used
  • RAM constraints limit the length of experiments (see the chunked-processing sketch after this list)
  • Spike sorting is curated manually, which is time-consuming and does not scale well to larger experiments
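One standard way to lift the RAM constraint, sketched below, is to memory-map the raw binary recording and stream it in chunks so that only the active chunk is resident in memory. The file name, channel count, sampling rate, and data type here are hypothetical placeholders, not the lab's actual acquisition settings.

```python
import numpy as np

N_CHANNELS = 384        # hypothetical probe channel count
CHUNK_SAMPLES = 30_000  # e.g. one second at an assumed 30 kHz sampling rate

# Memory-map the raw file (assumed flat int16 binary) so only the slice
# currently being processed is loaded into RAM.
data = np.memmap("recording.bin", dtype=np.int16, mode="r")
data = data.reshape(-1, N_CHANNELS)

# Example pass: accumulate per-channel means without holding the full
# recording in memory; real preprocessing steps would slot in here.
sums = np.zeros(N_CHANNELS, dtype=np.float64)
n_samples = 0
for start in range(0, data.shape[0], CHUNK_SAMPLES):
    chunk = np.asarray(data[start:start + CHUNK_SAMPLES], dtype=np.float64)
    sums += chunk.sum(axis=0)
    n_samples += chunk.shape[0]
channel_means = sums / n_samples
```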

The goal of the project is to overcome these limitations by improving the processing scripts, moving the analysis pipeline onto HPC resources, adopting standardized data formats and tools, and replacing the manual curation step with an automated comparison of multiple spike-sorting algorithms.
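A minimal sketch of such an automated comparison, using the SpikeInterface package as one possible standardized tool (not necessarily the project's final choice): the sorter names, input format, and agreement threshold below are assumptions, and argument names may differ slightly between SpikeInterface versions.

```python
import spikeinterface.extractors as se
import spikeinterface.sorters as ss
import spikeinterface.comparison as sc

# Load the raw recording; SpikeGLX output is assumed here purely for
# illustration and should be swapped for the lab's actual format.
recording = se.read_spikeglx("session_folder/")

# Run several independently installed sorters on the same recording.
sortings = {
    name: ss.run_sorter(name, recording, output_folder=f"sorting_{name}")
    for name in ["kilosort2", "spykingcircus", "tridesclous"]
}

# Keep only units that at least two sorters agree on, replacing manual
# curation with a consensus criterion.
comparison = sc.compare_multiple_sorters(
    sorting_list=list(sortings.values()),
    name_list=list(sortings.keys()),
)
agreement = comparison.get_agreement_sorting(minimum_agreement_count=2)
print(agreement.get_unit_ids())
```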

SSC Role

  • Improve/expand the existing Python implementation for data preprocessing and standardization to Neurodata Without Borders (NWB); a minimal conversion sketch follows this list.
  • Guide a student in optimizing Generalized Linear Model (GLM) code for batch processing (second sketch below).
  • Robustify the existing Generalized Linear Model Cross-Correlation (GLMCC) connectivity algorithm (third sketch below).
  • Knowledge transfer to our group on sustainable development practices and version control, including assistance/mentorship in making existing analysis scripts more robust and NWB-compatible.
  • Time permitting: develop a Python implementation of the Demixed Principal Component Analysis (dPCA) algorithm that runs on large datasets (final sketch below).
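For the NWB standardization item, a minimal pynwb writing sketch; all metadata and data below are placeholders, and real extracellular recordings would use an ElectricalSeries tied to an electrode table rather than a plain TimeSeries.

```python
from datetime import datetime, timezone

import numpy as np
from pynwb import NWBFile, NWBHDF5IO, TimeSeries

# Container with the minimum required session metadata (placeholder values).
nwbfile = NWBFile(
    session_description="example in vivo voltage recording",
    identifier="session-001",
    session_start_time=datetime.now(timezone.utc),
)

# Synthetic stand-in for a voltage trace: 1 s of 4-channel data at 30 kHz.
voltage = TimeSeries(
    name="raw_voltage",
    data=np.random.randn(30_000, 4),
    unit="volts",
    rate=30_000.0,
)
nwbfile.add_acquisition(voltage)

# Write the standardized file to disk.
with NWBHDF5IO("session-001.nwb", "w") as io:
    io.write(nwbfile)
```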
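For the GLM batch-processing item, one natural structure, sketched here with hypothetical shapes and scikit-learn's PoissonRegressor, is to fit one GLM per unit in an embarrassingly parallel loop that maps directly onto HPC array jobs.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)

# Hypothetical inputs: a design matrix of stimulus/behavioural regressors
# and binned spike counts for many simultaneously recorded units.
X = rng.normal(size=(10_000, 12))              # time bins x regressors
counts = rng.poisson(1.0, size=(10_000, 300))  # time bins x units

# Fit one Poisson GLM per unit; each iteration is independent, so the loop
# can be split into one HPC job per block of units.
weights = np.empty((counts.shape[1], X.shape[1]))
for unit in range(counts.shape[1]):
    model = PoissonRegressor(alpha=1e-3, max_iter=300)
    model.fit(X, counts[:, unit])
    weights[unit] = model.coef_
```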
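GLMCC estimates synaptic connectivity by fitting a GLM to the cross-correlogram of two units' spike trains; the sketch below covers only the cross-correlogram input (with synthetic spike times), since the GLM fit itself is the part of the existing code to be robustified.

```python
import numpy as np

def cross_correlogram(spikes_pre, spikes_post, window=0.05, bin_size=0.001):
    """Histogram of spike-time differences (post minus pre), +/- window seconds."""
    diffs = []
    for t in spikes_pre:
        # Restrict to post-synaptic spikes inside the window around t.
        lo = np.searchsorted(spikes_post, t - window)
        hi = np.searchsorted(spikes_post, t + window)
        diffs.append(spikes_post[lo:hi] - t)
    diffs = np.concatenate(diffs) if diffs else np.array([])
    edges = np.arange(-window, window + bin_size, bin_size)
    counts, _ = np.histogram(diffs, bins=edges)
    return counts, edges

# Synthetic example: two independent Poisson trains over a 10-minute session.
rng = np.random.default_rng(1)
pre = np.sort(rng.uniform(0, 600, 3_000))
post = np.sort(rng.uniform(0, 600, 3_000))
ccg, edges = cross_correlogram(pre, post)
```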
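For the dPCA item (Kobak et al., eLife 2016), the numpy/scikit-learn sketch below illustrates only the marginalization idea that demixing builds on, with hypothetical array shapes; the full algorithm additionally fits separate regularized encoder and decoder matrices, which is where the large-dataset engineering effort would go.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Hypothetical trial-averaged data: neurons x stimuli x time bins.
X = rng.normal(size=(100, 6, 50))
X = X - X.mean(axis=(1, 2), keepdims=True)  # center each neuron

# Marginalize: split activity into a stimulus-independent time course and
# the residual stimulus-dependent part.
time_marg = X.mean(axis=1, keepdims=True)   # average over stimuli
stim_marg = X - time_marg                   # stimulus contribution

# PCA on each marginalization yields components dominated by one factor.
pca_time = PCA(n_components=5).fit(time_marg.reshape(100, -1).T)
pca_stim = PCA(n_components=5).fit(stim_marg.reshape(100, -1).T)
```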