Hadoop and PySpark for reproducibility and scalability of genomic sequencing studies

Nicholas R. Wheeler, Penelope Benchek, Brian W. Kunkle, Kara L. Hamilton-Nelson, Mike Warfe, Jeremy R. Fondran, Jonathan L. Haines, William S. Bush

Research output: Contribution to journal › Conference article › peer-review


Modern genomic studies are rapidly growing in scale, and the analytical approaches used to analyze genomic data are increasing in complexity. Genomic data management poses logistic and computational challenges, and analyses are increasingly reliant on genomic annotation resources that create their own data management and versioning issues. As a result, genomic datasets are increasingly handled in ways that limit the rigor and reproducibility of many analyses. In this work, we examine the use of the Spark infrastructure for the management, access, and analysis of genomic data, in comparison to traditional genomic workflows on typical cluster environments. We validate the framework by reproducing previously published results from the Alzheimer's Disease Sequencing Project. Using this framework, with analyses designed in Jupyter notebooks, Spark provides improved workflows, reduces user-driven data partitioning, and enhances the portability and reproducibility of distributed analyses required for large-scale genomic studies.

Original language: English (US)
Pages (from-to): 523-534
Number of pages: 12
Journal: Pacific Symposium on Biocomputing
Issue number: 2020
State: Published - 2020
Event: 25th Pacific Symposium on Biocomputing, PSB 2020 - Big Island, United States
Duration: Jan 3 2020 - Jan 7 2020


Keywords

  • Big Data
  • Rare variants
  • Spark
  • Whole-genome sequence

ASJC Scopus subject areas

  • Biomedical Engineering
  • Computational Theory and Mathematics


