Poster
ActSort: An active-learning accelerated cell sorting algorithm for large-scale calcium imaging datasets
Yiqi Jiang · Hakki Akengin · Ji Zhou · Mehmet Aslihak · Yang Li · Oscar Hernandez · Sadegh Ebrahimi · Omar Jaidar · Yanping Zhang · Hakan Inan · Christopher Miranda · Fatih Dinc · Marta Pozo · Mark Schnitzer
Recent advances in calcium imaging allow simultaneous recording of up to a million neurons in behaving animals. The resulting datasets are too large for experimental neuroscientists to apply traditional quality-control methods based on human visual inspection. However, essentially all automated algorithms created to date for extracting neurons from calcium videos require some degree of curation of the putative cells to cull false positives. To speed this massive curation effort, we introduce a semi-supervised active-learning algorithm, ActSort, which drastically reduces the need for human input in cell sorting. ActSort integrates domain-expert features with novel active-learning frameworks to optimize the sorting process while using minimal computational resources. Our discriminative-confidence active-learning algorithm strategically selects outlier cells near the decision boundary and outperforms conventional cell-sorting strategies used in the field. To benchmark ActSort and facilitate its wide adoption, we developed user-friendly, custom software for cell sorting. We also describe a large-scale benchmarking study (roughly 160,000 candidate cells) involving six domain experts. Our empirical results suggest that semi-automation reduces the need for human annotation to only about 1%–5% of the cell candidates, while also improving curation accuracy by mitigating annotator bias. As a robust tool that is validated under various experimental conditions and applicable across animal subjects, ActSort addresses the primary bottleneck in processing large-scale calcium videos and paves the way toward fully automated preprocessing of neural imaging datasets in modern systems neuroscience research.
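To make the query strategy concrete, below is a minimal, hypothetical sketch of how a discriminative-confidence style query might rank unlabeled cell candidates: it favors candidates that lie near the current classifier's decision boundary and that are atypical (outliers) relative to the already-labeled pool. The function name `query_boundary_outliers`, the additive ranking criterion, and all parameters are illustrative assumptions, not the published ActSort implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def query_boundary_outliers(features, labeled_idx, labels, n_query=50):
    """Rank unlabeled cell candidates for human annotation.

    Illustrative sketch (not the ActSort criterion): combines
    (i) proximity to the current classifier's decision boundary with
    (ii) an outlier score measuring distance from labeled candidates,
    so annotation effort is spent on ambiguous, atypical cells.
    """
    # Fit a simple discriminative model on the curated candidates so far.
    clf = LogisticRegression(max_iter=1000).fit(features[labeled_idx], labels)

    unlabeled_idx = np.setdiff1d(np.arange(len(features)), labeled_idx)
    X_u = features[unlabeled_idx]

    # (i) Uncertainty: a small |decision_function| value means the
    # candidate sits close to the decision boundary.
    margin = np.abs(clf.decision_function(X_u))

    # (ii) Outlierness: distance to the nearest labeled candidate in
    # feature space (larger = more atypical).
    dists = np.linalg.norm(
        X_u[:, None, :] - features[labeled_idx][None, :, :], axis=-1
    )
    outlier_score = dists.min(axis=1)

    # Rank by low margin and high outlierness (simple additive criterion,
    # chosen here only for illustration).
    rank = np.argsort(margin - outlier_score)
    return unlabeled_idx[rank[:n_query]]
```

In practice, the selected candidates would be shown to an annotator, their labels added to the pool, and the classifier refit, repeating until performance plateaus; the abstract's 1%–5% annotation figure refers to how few such queries were needed in the reported benchmarks.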