SpikeInterface: A project to facilitate reproducible next generation electrophysiology

SpikeInterface Logo

Anyone who has worked with extracellular recordings will know that extracting the spike trains of single neurons from raw data, a task known as spike sorting, is a difficult endeavor (to put it mildly). Traditionally, manual curation of sorted data, or even fully manual sorting, was feasible because data sets were reasonably small. While keeping a human in the loop allows common failures of sorting algorithms to be resolved, two people analysing the same data set will likely come up with different results. Beyond the unreliability of manual intervention, the influx of large-scale data, generated by novel, dense arrays with hundreds or even thousands of recording channels, makes having a human operator impractical. To exploit these amazing new data sets, automated, reproducible approaches are essential.

Common problems

Relying on automated (or semi-automated) spike sorting algorithms, however, brings about a slew of issues that must be addressed. First, many approaches are now available for general use. While most of these algorithms have been assessed on synthetic or recorded ground-truth data, it is difficult to predict how they will generalize to new data sets given the limitations of current validation datasets and metrics. Second, each algorithm implements its own pipeline for storing, preparing, processing, sorting, and curating extracellular recordings. This lack of standardization leads to entire workflows being built around a single spike sorting package and creates a barrier to collaboration among experimental labs. Third, even two labs using the same sorting algorithm can generate wildly different results depending on how they choose to run the underlying sorter. Tuning a sorter's adjustable parameters, given the recording setup and experimental design, is more a matter of intuition than standard procedure. Fourth, despite the availability of modern sorting solutions, many labs rely on home-grown solutions that are implemented by a capable lab member and tailored to a specific experimental set-up. These personalized sorting algorithms are difficult to adapt to novel experiments and, again, make collaboration among labs more difficult. Given this wide range of issues, how can spike sorting be approached in a way that is standardized yet flexible, sophisticated yet reproducible?

A new framework

Here we present a new approach to solving these problems, based on a common spike sorting workflow that can be adapted to the variety of preexisting sorting approaches. This common workflow consists of three steps: first, the raw data is preprocessed and prepared for sorting; second, the prepared data is passed through the spike sorting software, generating the preliminary results of the analysis; third, the preliminary results are curated, either automatically or manually, and the final results are stored. A unified framework should thus provide standardized access to both the raw data and the results of a sorting algorithm, be capable of running any spike sorter, and allow for the creation of standardized functions that operate on both raw and sorted data. Unlike other approaches to standardized analysis, we avoid creating a new, unified data format or simple converters between formats. Instead, we provide a standardized API for accessing, sorting, and curating extracellular data regardless of the underlying file format.
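
To make this concrete, here is a minimal sketch of the three-step workflow in Python. The extractor classes, filter settings, and sorter wrapper named below are illustrative assumptions chosen for this sketch (and the sorting step assumes MountainSort4 is installed), not a definitive listing of the API.

    import spikeextractors as se
    import spiketoolkit as st

    # Step 1: access the raw data through an extractor and preprocess it.
    # The extractor class and bandpass settings are illustrative assumptions.
    recording = se.MdaRecordingExtractor('path/to/recording')
    recording_f = st.preprocessing.bandpass_filter(recording, freq_min=300,
                                                   freq_max=6000)

    # Step 2: run a spike sorter on the prepared data; the result is a
    # sorting object exposing the preliminary spike trains.
    sorting = st.sorters.run_mountainsort4(recording_f)

    # Step 3: curate the preliminary results (manually or with automatic
    # quality thresholds) and write the final output to a supported format.
    se.MdaSortingExtractor.write_sorting(sorting, 'path/to/output')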

The possibilities

This approach has significant advantages over previous solutions and opens up exciting new possibilities, for instance:

  1. Spike sorting a data set takes just two lines of code, regardless of the sorting algorithm or underlying file format (see the sketch after this list).
  2. Supporting new file formats or embedding new spike sorting algorithms in our framework is incredibly easy.
  3. Fully reproducible analysis is now a possibility as we continue to develop automatic logging tools that can record all parameters and methods used in an analysis (a process computer scientists call provenance capture).
  4. The same data set can be sorted with different sorters, and the results can be compared in a comprehensive and standardized way.
  5. Post-sorting curation can be performed using a single, agreed upon tool, and is fully reproducible.
  6. The implementation of a graphical user interface (GUI), designed to run any spike sorter with the click of a button, is now a possibility and is already in development.
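
As an illustration of points 1 and 4, the sketch below loads and sorts a recording in two lines, then runs a second sorter on the same data and compares the two outputs. As above, the extractor and sorter wrappers are assumptions for illustration, and the comparison call is likewise hedged, based on the comparison functions in SpikeToolkit rather than an exact signature.

    import spikeextractors as se
    import spiketoolkit as st

    # Point 1: spike sorting in two lines (extractor and sorter assumed).
    recording = se.MdaRecordingExtractor('path/to/recording')
    sorting_ms4 = st.sorters.run_mountainsort4(recording)

    # Point 4: sort the same recording with a second algorithm and compare
    # the two outputs unit-by-unit in a standardized way.
    sorting_klusta = st.sorters.run_klusta(recording)
    comparison = st.comparison.compare_two_sorters(sorting_ms4, sorting_klusta)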

Make use of it and get involved

The Python-based software framework we created is called SpikeInterface. The core components are SpikeExtractors, which provides standardized access to raw and sorted data in any supported file format, and SpikeToolkit, which provides functions to preprocess data, run spike sorters, compare algorithms, and curate results. Additional repositories covering a variety of applications of our framework are in development, including (but not limited to) visualization tools, ground-truth comparisons, and a GUI for running sorting pipelines.

This project has now reached a stage where we can release the main components of our API. SpikeExtractors version 0.5.0 and SpikeToolkit version 0.3.0 have now been released for general use. Both can be installed using pip and can be referenced by their version numbers when used for data analysis. All changes that add or alter the functionality of our API will be released under new version numbers, following a predefined release schedule. Analysis done with our API is therefore fully reproducible.
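
As a simple sketch of what version-pinned, reproducible analysis can look like (assuming the packages expose __version__, as most pip-installable packages do), one could record the releases used alongside the results:

    import json
    import spikeextractors
    import spiketoolkit

    # Store the exact releases used (e.g. installed with
    # "pip install spikeextractors==0.5.0 spiketoolkit==0.3.0") next to the
    # analysis outputs so the run can be repeated against the same versions.
    provenance = {
        'spikeextractors': spikeextractors.__version__,
        'spiketoolkit': spiketoolkit.__version__,
    }
    with open('provenance.json', 'w') as f:
        json.dump(provenance, f, indent=2)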

Do get in touch with us if you are interested, either as a user or a contributor.

Acknowledgements

This is a joint project with major contributions from:

  • Alessio Paolo Buccino - Center for Integrative Neuroplasticity (CINPLA), Departments of Biosciences, Physics, and Informatics, University of Oslo, Oslo, Norway
  • Samuel Garcia - Centre de Recherche en Neuroscience de Lyon (CRNL), Lyon, France
  • Matthias Hennig - The Institute for Adaptive and Neural Computation (ANC), University of Edinburgh, Edinburgh, Scotland
  • Cole Hurwitz - The Institute for Adaptive and Neural Computation (ANC), University of Edinburgh, Edinburgh, Scotland
  • Jeremy Magland - Center for Computational Biology (CCB), Flatiron Institute, New York, United States

  • Mariyana Cholakova made the logo for the project.

We are grateful to the Wellcome Trust for funding this work, and to the University of Pennsylvania for a Thouron award to Cole.