Damming the genomic data flood using a comprehensive analysis and storage data structure

Database (Oxford). 2010 Dec 15:2010:baq029. doi: 10.1093/database/baq029. Print 2010.

Abstract

Data generation, driven by rapid advances in genomic technologies, is fast outpacing our analysis capabilities. Faced with this flood of data, more hardware and software resources are added to accommodate data sets whose structure has not been specifically designed for analysis. This leads to unnecessarily long processing times and excessive data handling and storage costs. Current efforts to address this problem have centered on developing new indexing schemas and analysis algorithms, whereas the root of the problem lies in the format of the data itself. We have developed a new data structure for storing and analyzing genotype and phenotype data. By leveraging data normalization techniques, database management system capabilities, and a novel multi-table, multidimensional database structure, we have eliminated the following: (i) unnecessarily large data set sizes due to high levels of redundancy, (ii) sequential access to these data sets and (iii) common bottlenecks in analysis times. The resulting data structure divides the data horizontally to circumvent the problems traditionally associated with using databases for very large genomic data sets. Compared to a standard approach, the resulting data set required 86% less disk space and performed analytical calculations 6248 times faster, without any loss of information. Database URL: http://castor.pharmacogenomics.ca.
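To make the abstract's two main ideas concrete, the sketch below illustrates (a) normalization, where subjects and markers are stored once in dimension tables rather than repeated with every genotype call, and (b) horizontal partitioning, where genotype calls are split across several smaller tables so a query scans only the partition it needs. This is a minimal, hypothetical SQLite example; the table names, the modulo-based partitioning rule, and the 0/1/2 allele encoding are assumptions for illustration and are not the authors' CASTOR schema.

```python
# Minimal sketch of a normalized, horizontally partitioned genotype store.
# All names (subject, marker, genotype_p*) and the partitioning rule are
# hypothetical; they are NOT taken from the CASTOR implementation.
import sqlite3

N_PARTITIONS = 4  # assumed number of horizontal partitions

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized dimension tables: each subject and marker is stored once,
# removing the redundancy of a flat "one row per genotype call" file.
cur.execute("CREATE TABLE subject (subject_id INTEGER PRIMARY KEY, label TEXT)")
cur.execute("CREATE TABLE marker  (marker_id  INTEGER PRIMARY KEY, rsid  TEXT)")

# Horizontal partitioning: genotype calls are split across several smaller
# tables so an analysis touches only the relevant partition instead of
# scanning one monolithic table sequentially.
for p in range(N_PARTITIONS):
    cur.execute(
        f"CREATE TABLE genotype_p{p} ("
        "  subject_id   INTEGER REFERENCES subject(subject_id),"
        "  marker_id    INTEGER REFERENCES marker(marker_id),"
        "  allele_count INTEGER,"  # assumed encoding: 0/1/2 copies of the minor allele
        "  PRIMARY KEY (subject_id, marker_id))"
    )

def partition_of(marker_id: int) -> int:
    """Route a marker to a partition (simple modulo hash, an assumption)."""
    return marker_id % N_PARTITIONS

def insert_call(subject_id: int, marker_id: int, allele_count: int) -> None:
    """Write one genotype call into its partition."""
    p = partition_of(marker_id)
    cur.execute(
        f"INSERT INTO genotype_p{p} VALUES (?, ?, ?)",
        (subject_id, marker_id, allele_count),
    )

# Toy data: two subjects, two markers.
cur.executemany("INSERT INTO subject VALUES (?, ?)", [(1, "S1"), (2, "S2")])
cur.executemany("INSERT INTO marker VALUES (?, ?)", [(10, "rs100"), (11, "rs101")])
for sid in (1, 2):
    for mid in (10, 11):
        insert_call(sid, mid, (sid + mid) % 3)

# A per-marker aggregate (here, a crude minor-allele frequency estimate)
# reads only one small partition rather than the whole data set.
p = partition_of(10)
cur.execute(
    f"SELECT AVG(allele_count) / 2.0 FROM genotype_p{p} WHERE marker_id = ?", (10,)
)
print("estimated minor-allele frequency for rs100:", cur.fetchone()[0])
conn.close()
```

The design choice the abstract argues for is visible here: because redundancy lives only in the small integer keys and each partition is a fraction of the full data set, both storage footprint and per-query scan cost shrink relative to a single flat table.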

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Database Management Systems*
  • Databases, Genetic*
  • Genomics / methods*
  • Genotype
  • Humans
  • Information Storage and Retrieval / methods*
  • Phenotype
  • Sequence Analysis, DNA