Machine learning-based risk prediction for major adverse cardiovascular events in a Brazilian hospital: Development, external validation, and interpretability

by Gilson Yuuji Shimizu, Michael Schrempf, Elen Almeida Romão, Stefanie Jauk, Diether Kramer, Peter P. Rainer, José Abrão Cardeal da Costa, João Mazzoncini de Azevedo-Marques, Sandro Scarpelini, Katia Mitiko Firmino Suzuki, Hilton Vicente César, Paulo Mazzoncini de Azevedo-Marques

Background

Studies of cardiovascular disease risk prediction with machine learning algorithms often do not assess how well the resulting models generalize to other populations, and few include an analysis of the interpretability of individual predictions. This manuscript addresses the development and the internal and external validation of predictive models for assessing the risk of major adverse cardiovascular events (MACE). Global and local interpretability analyses of the predictions were conducted to improve the reliability of the MACE models and to support tailored preventive interventions.

Methods

The models were trained and validated on a retrospective cohort using data from the Ribeirão Preto Medical School (RPMS), University of São Paulo, Brazil. Data from the Beth Israel Deaconess Medical Center (BIDMC), USA, were used for external validation. A balanced sample of 6,000 MACE cases and 6,000 non-MACE cases from RPMS was created for training and internal validation, and a further balanced sample of 8,000 MACE cases and 8,000 non-MACE cases from BIDMC was used for external validation. Eight machine learning algorithms, namely Penalized Logistic Regression, Random Forest, XGBoost, Decision Tree, Support Vector Machine, k-Nearest Neighbors, Naive Bayes, and Multi-Layer Perceptron, were trained to predict the 5-year risk of major adverse cardiovascular events, and their predictive performance was evaluated in terms of accuracy, the receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC). LIME and Shapley values were applied to gain insight into model interpretability.
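
As a concrete illustration of this kind of pipeline, the sketch below trains one of the eight evaluated learners (Random Forest) on placeholder data and computes the reported metrics plus Shapley values. The feature set, preprocessing, and hyperparameters here are assumptions for illustration only, since the abstract does not specify them.

```python
# Minimal sketch of the described workflow; the authors' actual features,
# preprocessing, and hyperparameters are not reported in the abstract.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the balanced RPMS sample
# (tabular predictors X, binary 5-year MACE outcome y).
rng = np.random.default_rng(0)
X = rng.normal(size=(12_000, 20))
y = rng.integers(0, 2, size=12_000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# One of the eight evaluated learners; the others would be fit the same way.
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# Evaluation with accuracy, the ROC curve, and the AUC.
proba = model.predict_proba(X_test)[:, 1]
print("Accuracy:", accuracy_score(y_test, (proba >= 0.5).astype(int)))
print("AUC:", roc_auc_score(y_test, proba))
fpr, tpr, _ = roc_curve(y_test, proba)

# Global and local interpretability (SHAP shown; LIME would be applied analogously).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test[:100])  # local explanations for 100 patients
```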

Findings

Random Forest showed the best predictive performance in both internal validation (AUC = 0.871 (0.859–0.882); accuracy = 0.794 (0.782–0.808)) and external validation (AUC = 0.786 (0.778–0.792); accuracy = 0.710 (0.704–0.717)). Compared with LIME, Shapley values produced explanations that were more consistent with the exploratory analysis and with the feature importance results.
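
The abstract does not state how the confidence intervals above were obtained; a common choice is a percentile bootstrap over the validation set, sketched below as an assumption rather than the authors' actual procedure.

```python
# Percentile-bootstrap confidence interval for the AUC (illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:              # AUC needs both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y_true, y_score), (lo, hi)

# Example (reusing y_test and proba from the previous sketch):
# auc, (lower, upper) = bootstrap_auc_ci(y_test, proba)
```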

Conclusions

Among the machine learning algorithms evaluated, Random Forest showed the best generalization ability, both internally and externally. Shapley values were more informative for local interpretability than LIME, in line with our exploratory analysis and with the global interpretation of the final model. Machine learning algorithms that generalize well and are accompanied by interpretability analyses are recommended for assessing individual cardiovascular disease risk and for developing personalized preventive actions.