AUTHOR=Fassina Lorenzo, Faragli Alessandro, Lo Muzio Francesco Paolo, Kelle Sebastian, Campana Carlo, Pieske Burkert, Edelmann Frank, Alogna Alessio TITLE=A Random Shuffle Method to Expand a Narrow Dataset and Overcome the Associated Challenges in a Clinical Study: A Heart Failure Cohort Example JOURNAL=Frontiers in Cardiovascular Medicine VOLUME=7 YEAR=2020 URL=https://www.frontiersin.org/journals/cardiovascular-medicine/articles/10.3389/fcvm.2020.599923 DOI=10.3389/fcvm.2020.599923 ISSN=2297-055X ABSTRACT=

Heart failure (HF) affects at least 26 million people worldwide, so predicting adverse events in HF patients represents a major target of clinical data science. However, achieving large sample sizes is sometimes challenging owing to difficulties in patient recruitment and long follow-up times, which also aggravate the problem of missing data. Population-enhancing algorithms are therefore crucial to overcome the issue of narrow dataset cardinality (in a clinical dataset, the cardinality is the number of patients in that dataset). The aim of this study was to design a random shuffle method that enhances the cardinality of an HF dataset in a statistically legitimate way, without requiring specific hypotheses or regression models. The cardinality enhancement was validated against an established random repeated-measures method with regard to the correctness of predicting clinical conditions and endpoints. In particular, machine learning and regression models were employed to highlight the benefits of the enhanced datasets. The proposed random shuffle method enhanced the HF dataset cardinality (711 patients before dataset preprocessing) approximately 10-fold, and approximately 21-fold when followed by a random repeated-measures approach. We believe that the random shuffle method could be used in the cardiovascular field and in other data science problems in which missing data and narrow dataset cardinality are an issue.
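The abstract does not spell out the shuffling procedure itself, which is defined in the full paper. As a loose illustration only, the sketch below shows one generic way per-feature random shuffling could multiply the cardinality of a tabular patient dataset; the function name shuffle_expand, the stratify_by parameter, and the "endpoint" column are hypothetical and are not taken from the paper.

```python
import numpy as np
import pandas as pd


def shuffle_expand(df: pd.DataFrame, n_copies: int = 10,
                   stratify_by: str = None, seed: int = 0) -> pd.DataFrame:
    """Return the original rows plus (n_copies - 1) synthetic copies in which
    each feature column is independently permuted across patients, optionally
    within strata defined by `stratify_by` (e.g. an endpoint label).

    This is an illustrative interpretation of a shuffle-based expansion,
    not the procedure published in the cited study.
    """
    rng = np.random.default_rng(seed)

    def permute(block: pd.DataFrame) -> pd.DataFrame:
        out = block.copy()
        for col in out.columns:
            if col != stratify_by:
                out[col] = rng.permutation(out[col].to_numpy())
        return out

    copies = [df]
    for _ in range(n_copies - 1):
        if stratify_by is None:
            copies.append(permute(df))
        else:
            copies.append(
                pd.concat([permute(g) for _, g in df.groupby(stratify_by)],
                          ignore_index=True))
    return pd.concat(copies, ignore_index=True)


# Example (hypothetical data): expand a 711-patient table roughly 10-fold,
# keeping the endpoint label fixed so synthetic rows stay in their stratum.
# expanded = shuffle_expand(hf_patients, n_copies=10, stratify_by="endpoint")
```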