Mind the gap: performance metric evaluation in brain-age prediction

Ann-Marie G. de Lange, Melis Anatürk, Jaroslav Rokicki, Laura K.M. Han, Katja Franke, Dag Alnæs, Klaus P. Ebmeier, Bogdan Draganski, Tobias Kaufmann, Lars T. Westlye, Tim Hahn, James H. Cole



Estimating age based on neuroimaging-derived data has become a popular approach to developing markers for brain integrity and health. While a variety of machine-learning algorithms can provide accurate predictions of age based on brain characteristics, there is significant variation in model accuracy reported across studies. We predicted age based on neuroimaging data in two population-based datasets, and assessed the effects of age range, sample size, and age-bias correction on the model performance metrics r, R2, Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). The results showed that these metrics vary considerably depending on cohort age range; r and R2 values are lower when measured in samples with a narrower age range. RMSE and MAE are also lower in samples with a narrower age range, owing to smaller errors (brain-age delta values) when predictions are closer to the mean age of the group. Across subsets with different age ranges, performance metrics improve with increasing sample size. Performance metrics further vary depending on prediction variance as well as the mean age difference between training and test sets, and age-bias corrected metrics indicate high accuracy, even for models showing poor initial performance. In conclusion, performance metrics used for evaluating age prediction models depend on cohort and study-specific data characteristics, and cannot be directly compared across different studies. Since age-bias corrected metrics in general indicate high accuracy, even for poorly performing models, inspection of uncorrected model results provides important information about underlying model attributes such as prediction variance.
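The metrics discussed above, and the kind of linear age-bias correction the abstract refers to, can be sketched as follows. This is a minimal NumPy illustration under common assumptions (the correction shown is the widely used linear rescaling, fitting predicted = a·age + b and taking (predicted − b)/a), not the authors' exact pipeline; the function names are hypothetical:

```python
import numpy as np

def performance_metrics(age, predicted):
    """Compute r, R^2, RMSE, and MAE for an age-prediction model."""
    r = np.corrcoef(age, predicted)[0, 1]
    ss_res = np.sum((age - predicted) ** 2)
    ss_tot = np.sum((age - np.mean(age)) ** 2)
    return {
        "r": r,
        "R2": 1 - ss_res / ss_tot,
        "RMSE": np.sqrt(np.mean((age - predicted) ** 2)),
        "MAE": np.mean(np.abs(age - predicted)),
    }

def age_bias_correct(age, predicted):
    """Linear age-bias correction: fit predicted = a*age + b on the
    chronological ages, then rescale predictions as (predicted - b)/a."""
    a, b = np.polyfit(age, predicted, deg=1)
    return (predicted - b) / a

# Illustrative example (simulated data): predictions biased toward the
# sample mean, as happens with regression to the mean.
rng = np.random.default_rng(0)
age = rng.uniform(20, 80, 500)
predicted = 0.5 * age + 25 + rng.normal(0, 3, 500)

before = performance_metrics(age, predicted)
after = performance_metrics(age, age_bias_correct(age, predicted))
```

Note that in this sketch the correction parameters are estimated and applied on the same sample; metrics computed after such a correction will look accurate almost regardless of the underlying model, which is precisely why the abstract recommends also inspecting the uncorrected results.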

Read the preprint here.

Published May 21, 2021 9:19 AM - Last modified May 21, 2021 9:19 AM