Abstract
Translation of traumatic brain injury (TBI) research findings from bench to bedside involves aligning multi-species data across diverse data types, including imaging and molecular biomarkers, histopathology, behavior, and functional outcomes. In this review we argue that TBI translation should be acknowledged for what it is: a problem of big data that can be addressed using modern data science approaches. We review the history of the term "big data," tracing its origins in Internet technology, where it describes data that are "big" according to the "4Vs" of volume, velocity, variety, and veracity, and discuss how the term has transitioned into the mainstream of biomedical research. We argue that the problem of TBI translation fundamentally centers on data variety and that solutions to this problem can be found in modern machine learning and other cutting-edge analytical approaches. Throughout our discussion we highlight the need to pull data from diverse sources, including unpublished data ("dark data") and "long-tail data" (small, specialty TBI datasets undergirding the published literature). We review a few early examples of published articles in both the pre-clinical and clinical TBI research literature to demonstrate how data reuse can drive new discoveries leading to translational therapies. Making TBI data resources more Findable, Accessible, Interoperable, and Reusable (FAIR) through better data stewardship has great potential to accelerate discovery and translation for the silent epidemic of TBI.
Original language | English (US) |
---|---|
Pages (from-to) | 2414-2423 |
Number of pages | 10 |
Journal | Journal of Neurotrauma |
Volume | 37 |
Issue number | 22 |
DOIs | |
State | Published - Nov 15, 2020 |
Externally published | Yes |
Keywords
- analytics
- big data
- data sharing
- FAIR principles
- traumatic brain injury
ASJC Scopus subject areas
- Clinical Neurology