Item Type: | Preprint |
---|---|
Title: | Portability of scientific workflows in NGS data analysis: a case study |
Creators Name: | Schiefer, C., Bux, M., Brandt, J., Messerschmidt, C., Reinert, K., Beule, D. and Leser, U. |
Abstract: | The analysis of next-generation sequencing (NGS) data requires complex computational workflows consisting of dozens of autonomously developed yet interdependent processing steps. Whenever large amounts of data need to be processed, these workflows must be executed on parallel and/or distributed systems to ensure reasonable runtime. To simplify the development and parallel execution of workflows, researchers rely on existing services such as distributed file systems, specialized workflow languages, resource managers, or workflow scheduling tools. Systems that cover some or all of these functionalities are categorized under labels like scientific workflow management systems, big data processing frameworks, or batch-queuing systems. Porting a workflow developed for a particular system on a particular hardware infrastructure to another system or to another infrastructure is non-trivial, which poses a major impediment to the scientific necessities of workflow reproducibility and workflow reusability. In this work, we describe our efforts to port a state-of-the-art workflow for the detection of specific variants in whole-exome sequencing of mice. The workflow was originally developed in the scientific workflow system snakemake for execution on a high-performance cluster controlled by Sun Grid Engine. In the project, we ported it to the scientific workflow system SaasFee, which can execute workflows on (multi-core) stand-alone servers or on clusters of arbitrary size using the Hadoop cluster management software. The purpose of this port was to enable owners of the low-cost hardware infrastructures for which Hadoop was designed to use the workflow as well. Although both the source and the target system are called scientific workflow systems, they differ in numerous aspects, ranging from the workflow languages to the scheduling mechanisms and the file access interfaces. 
These differences resulted in various problems, some expected and others unexpected, that had to be resolved before the workflow could be run with equal semantics. As a side effect, we also report cost/runtime ratios for a state-of-the-art NGS workflow on very different hardware platforms: a comparably cheap stand-alone server (80 threads), a mid-cost, mid-sized cluster (552 threads), and a high-end HPC system (3784 threads). |
Source: | arXiv |
Publisher: | Cornell University |
Article Number: | 2006.03104 |
Date: | 4 June 2020 |
Official Publication: | https://arxiv.org/abs/2006.03104 |