External validation of risk prediction models for post-stroke mortality in Berlin
Prediction models for post-stroke mortality can support medical decision-making. Although numerous models have been developed, external validation studies determining the models’ transportability beyond the original settings are lacking. We aimed to assess the performance of two prediction models for post-stroke mortality in Berlin, Germany.
We used data from the Berlin-SPecific Acute Treatment in Ischaemic or hAemorrhagic stroke with Long-term follow-up (B-SPATIAL) registry.
Multicentre stroke registry in Berlin, Germany.
Adult patients admitted within 6 hours of symptom onset with an International Classification of Diseases, 10th revision (ICD-10) discharge diagnosis of ischaemic stroke, haemorrhagic stroke or transient ischaemic attack at one of 15 hospitals with stroke units between 1 January 2016 and 31 January 2021.
We evaluated calibration (calibration-in-the-large, intercept, slope and plot) and discrimination performance (c-statistic) of Bray et al’s 30-day mortality and Smith et al’s in-hospital mortality prediction models. Information on mortality was supplemented by Berlin city registration office records.
For the validation of Bray et al’s model, we included 7879 patients (median age 75; 55.0% men). We observed 763 (9.7%) deaths within 30 days of stroke compared with 680 (8.6%) predicted. The model’s c-statistic was 0.865 (95% CI: 0.851 to 0.879). For Smith et al’s model, we performed the validation among 1931 patients (median age 75; 56.2% men), observing 105 (5.4%) in-hospital deaths compared with the 92 (4.8%) predicted. The c-statistic was 0.891 (95% CI: 0.864 to 0.918). The calibration plots of both models revealed an underestimation of the mortality risk for high-risk patients.
Among Berlin stroke patients, both models showed good calibration performance for low and medium-risk patients and high discrimination while underestimating risk among high-risk patients. The acceptable performance of Bray et al’s model in Berlin illustrates how a small number of routinely collected variables can be sufficient for valid prediction of post-stroke mortality.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
In 2019, stroke was the second leading cause of death and the third leading cause of combined death and disability worldwide.1 In the context of stroke aftercare, prediction models have been developed to predict functional outcomes and mortality risk after acute stroke. These tools support (shared) clinical decision-making by providing information about likely prognosis to health professionals, patients and their families.2 3 Yet, before implementing prediction models in routine clinical practice, their transportability from the original development population to the population of interest should be assessed; models that perform poorly in the setting of interest may generate inaccurate predictions and lead to suboptimal decisions.4
According to the systematic review by Fahey et al, 38 prediction models for post-stroke mortality had been developed prior to September 2015.2 Despite the abundance of existing prediction models for post-stroke outcomes, only a small fraction have been externally validated.2 Among the most frequently used predictors were demographic characteristics (eg, age and sex), stroke severity as measured by the National Institutes of Health Stroke Scale (NIHSS), stroke type and comorbidities.2 The NIHSS alone has shown high predictive performance for early mortality after acute stroke5 and is often used in prediction models for post-stroke mortality.6 7
Two prediction models for post-stroke mortality including the NIHSS and other routinely collected variables were developed by Bray et al using data from the Sentinel Stroke National Audit Programme (SSNAP) in the UK3 and by Smith et al using data from the Get With the Guidelines (GWTG) Stroke Program in the USA.8 Both models have been subjected to validation studies in their respective countries of origin;9 beyond these settings, however, they have to date only undergone external validation in the China National Stroke Registry.10–12 Our aim was to conduct an external validation4 study to assess the calibration and discrimination performance of Bray et al’s3 and Smith et al’s8 prediction models for post-stroke mortality among Berlin stroke patients.
We used data from the Berlin-SPecific Acute Treatment in Ischaemic or hAemorrhagic stroke with Long-term follow-up (B-SPATIAL) registry (Clinicaltrials.gov identifier: NCT03027453), a multicentre registry for adult stroke patients in Berlin. Data were collected from patients aged 18 years or older, admitted within 6 hours after symptom onset and with discharge diagnoses according to the 10th revision of the International Classification of Diseases (ICD-10) of ischaemic stroke (I63/I64), haemorrhagic stroke (I61), non-traumatic subdural haemorrhage (I62) or transient ischaemic attack (TIA; G45.0–G45.3 and G45.5–G45.9) at one of the 15 hospitals with stroke units in Berlin, Germany, between 1 January 2016 and 31 January 2021. Patients with no symptoms on the arrival of emergency medical services and without neurological symptoms at hospital arrival were not included in the registry. In this external validation study, we did not include patients for whom a mobile stroke unit was dispatched as part of the B_PROUD interventional study,13 which was linked to the registry. We further excluded patients who opted out of data collection.14
We evaluated the performance of Bray et al’s model A including the full NIHSS (all items) for 30-day all-cause mortality3 (hereafter: Bray et al’s model) and Smith et al’s model including the NIHSS for in-hospital mortality.8
Bray et al’s model included the following predictors: age group (<60, 60–69, 70–79, 80–89, ≥90 years), stroke type (ischaemic or haemorrhagic), atrial fibrillation and NIHSS at admission.3 In the original development study, all variables were directly entered in a secure web portal by clinical teams in accordance with the SSNAP registry.3 In our external validation using the B-SPATIAL registry data, stroke type was determined using available ICD-10 codes (I63 or I64 for ischaemic, I61 for haemorrhagic). Atrial fibrillation was considered present if the patient had a known history of atrial fibrillation or if atrial fibrillation was diagnosed by the emergency medical service or at admission.
Smith et al’s model included the following predictors: age as a continuous variable, sex (male vs non-male), NIHSS at admission, atrial fibrillation, history of stroke or TIA, coronary artery disease, diabetes mellitus and dyslipidaemia.8 Additionally, the model included the mode of hospital arrival as a predictor, categorised as arrival by private transport, arrival by ambulance or arrival not via the emergency department (ED) (eg, direct admission from the hospital ward). In the GWTG registry, used in the development study, clinicians used an internet-based tool for data entry.8 In our external validation, we assumed a prior history of stroke or TIA if indicated by imaging performed while in hospital or if documentation of an ischaemic stroke or TIA was available. We defined the presence of coronary artery disease as documented previous myocardial infarction, coronary stent placement or a corresponding diagnostic coronary angiography result. In the B-SPATIAL registry, diabetes was defined as a documented history of diabetes, the use of anti-diabetic medication, a measured A1C level above 6.5%, or blood glucose above 200 mg/dL (non-fasting) or 126 mg/dL (fasting). We defined dyslipidaemia as a reported history of the condition, a measured low-density lipoprotein (LDL) cholesterol level above 130 mg/dL or a total cholesterol level above 220 mg/dL. In line with the original development study, in cases of missing documentation or an unknown mode of arrival, we assumed arrival by private transport. For the few cases with documented secondary transfer but no documentation of transfer from an external hospital, we assumed the patient was internally transferred within the same hospital and thus did not arrive via the ED.
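To make these operational definitions concrete, the following is a minimal R sketch of how such predictor variables could be derived. All column names (eg, hba1c_pct, glucose_mgdl) are hypothetical placeholders for illustration, not the actual B-SPATIAL field names.

```r
# Hedged sketch of predictor derivation following the definitions above.
# All column names are hypothetical placeholders, not B-SPATIAL field names.
library(dplyr)

patients <- patients %>%
  mutate(
    # atrial fibrillation: known history, or diagnosed by EMS or at admission
    atrial_fibrillation = af_history | af_ems | af_admission,
    # diabetes: history, medication, A1C > 6.5%, or glucose above threshold
    diabetes = diabetes_history | antidiabetic_medication |
      hba1c_pct > 6.5 |
      (fasting & glucose_mgdl > 126) | (!fasting & glucose_mgdl > 200),
    # dyslipidaemia: history, LDL > 130 mg/dL or total cholesterol > 220 mg/dL
    dyslipidaemia = dyslip_history | ldl_mgdl > 130 | total_chol_mgdl > 220,
    # assume arrival by private transport when mode of arrival is missing
    arrival_mode = if_else(is.na(arrival_mode), "private", arrival_mode)
  )
```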
Smith et al’s model predicted in-hospital mortality. We defined in-hospital mortality as death documented as the discharge reason or a modified Rankin Scale (mRS) score of 6 at discharge. In cases where both the documented discharge reason and the mRS at discharge were missing, we assumed patients were alive at discharge. Bray et al’s model used 30-day all-cause mortality as the outcome. To create the 30-day all-cause mortality variable, we counted patients who died in hospital during a stay of ≤30 days as well as patients whose date of death fell within 30 days of hospital admission. We obtained information about the date of death from the Berlin city registration office at 2 and 4 months after stroke.14
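As an illustration, the two outcome variables could be constructed along the following lines; again, this is a sketch using hypothetical variable names rather than the registry’s actual field names.

```r
# Hedged sketch of outcome construction; variable names are illustrative.
library(dplyr)

patients <- patients %>%
  mutate(
    # in-hospital death: discharge reason 'death' or mRS of 6 at discharge;
    # patients missing both are assumed to be alive at discharge
    death_in_hospital = coalesce(discharge_reason == "death", FALSE) |
      coalesce(mrs_discharge == 6, FALSE),
    # 30-day all-cause mortality: in-hospital death during a stay of <= 30
    # days, or a registered date of death within 30 days of admission
    death_30d = (death_in_hospital & los_days <= 30) |
      coalesce(as.numeric(date_of_death - admission_date) <= 30, FALSE)
  )
```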
For Bray et al’s model, we included patients with either an acute ischaemic or haemorrhagic stroke diagnosis. For Smith et al’s model, we included only ischaemic stroke patients, in line with the inclusion criteria of the original publication (which retrospectively identified patients using ICD-9 codes). Since the predictors history of stroke or TIA, coronary artery disease and dyslipidaemia were only routinely recorded in three of the B-SPATIAL registry hospitals, we excluded patients from the remaining hospitals in the validation of Smith et al’s model. For both models, we excluded patients who were transferred from a hospital not participating in the B-SPATIAL registry, and, for the main analysis, we also excluded patients with missing values for any of the predictors. When information about transfer status was missing, we assumed the patient was not transferred.
Analyses of the data from the B-SPATIAL registry, including the external validation of clinical risk scores, were approved by the ethics committee of the Charité - Universitätsmedizin Berlin (EA1/208/21). The B-SPATIAL registry used an opt-out mechanism for patient inclusion. Two months after their index event, patients were informed in writing about the inclusion of their record in the B-SPATIAL registry and had multiple opportunities to opt out.14
We used the prediction models’ published formulas to calculate risk of 30-day all-cause mortality3 or in-hospital mortality8 for each included individual (see online supplemental R Code).
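In generic terms, applying such a published logistic model amounts to evaluating the linear predictor and transforming it with the inverse logit. The sketch below uses placeholder coefficients for illustration, not the published Bray et al or Smith et al values (those appear in the online supplemental R Code).

```r
# Generic sketch of applying a published logistic prediction model.
# The intercept and coefficients below are placeholders only; the actual
# published coefficients are given in the online supplemental R Code.
lp <- with(patients,
  -5.0 +                        # placeholder intercept
  0.15 * nihss_admission +      # placeholder NIHSS coefficient
  0.70 * atrial_fibrillation    # placeholder atrial fibrillation coefficient
)
patients$predicted_risk <- plogis(lp)  # inverse logit: exp(lp) / (1 + exp(lp))
```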
To assess model calibration, we evaluated the calibration-in-the-large by comparing the actual (‘observed’) number of deaths with the number predicted by the model (‘expected’) using observed-to-expected (O/E) event ratios. We then used a calibration plot to graphically compare the observed mortality risk with the mean predicted risk within decile groups of predicted risk. We estimated 95% CIs for the observed risk using the binomial exact method. Furthermore, we calculated the calibration intercept and slope using the logistic recalibration framework.15 We assessed the discrimination ability of the two prediction models by calculating the concordance statistic (c-statistic) with corresponding 95% CIs and visualising the receiver operating characteristic (ROC) curve.
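Assuming a data frame patients with the binary outcome y and the model-based predicted_risk, these performance measures could be computed roughly as follows; this is a sketch of the described approach, not the authors’ exact supplemental code.

```r
# Sketch of the validation metrics described above.
library(pROC)

# Calibration-in-the-large: observed-to-expected (O/E) event ratio
oe_ratio <- sum(patients$y) / sum(patients$predicted_risk)

# Logistic recalibration framework:
# slope = coefficient of logit(predicted risk);
# intercept = estimated with the linear predictor as a fixed offset
lp <- qlogis(patients$predicted_risk)
cal_slope     <- coef(glm(y ~ lp, family = binomial, data = patients))[2]
cal_intercept <- coef(glm(y ~ offset(lp), family = binomial,
                          data = patients))[1]

# Calibration plot inputs: observed vs mean predicted risk per risk decile
decile <- cut(patients$predicted_risk,
              quantile(patients$predicted_risk, probs = seq(0, 1, 0.1)),
              include.lowest = TRUE)
obs_vs_pred <- aggregate(cbind(observed = y, predicted = predicted_risk)
                         ~ decile, data = patients, FUN = mean)

# Discrimination: c-statistic with 95% CI (pROC defaults, as in the paper)
roc_obj <- roc(patients$y, patients$predicted_risk, quiet = TRUE)
ci.auc(roc_obj)
```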
In addition, we assessed the discrimination ability of the NIHSS alone for both outcomes and computed the c-statistic for Bray et al’s model predicting in-hospital mortality and Smith et al’s model predicting 30-day mortality. In a subgroup analysis, we evaluated the models’ performances in terms of calibration and discrimination separately by sex.
For both models, we conducted multiple sensitivity analyses. For Smith et al’s model, to assess the robustness of our assumption of arrival by private transport when the mode of arrival was missing or unknown, we reran our analysis excluding patients with unknown or missing mode of arrival. In the original prediction model development studies, Smith et al explicitly excluded patients with TIA, and Bray et al did not specify how these patients were handled in terms of eligibility.3 8 However, at the time of admission, TIA patients presenting with neurological symptoms compatible with stroke are not distinguishable from ischaemic stroke patients. Therefore, in an additional sensitivity analysis, we investigated the performance of both models when classifying all patients with final diagnosis of TIA as ischaemic stroke patients. Finally, we assessed calibration and discrimination of both models after imputing the predictors’ missing values using Multiple Imputation by Chained Equations. Specifically, for each model’s validation, we imputed five datasets using only the model-specific predictors and outcome in the imputation.
All statistical analyses were performed using R v4.2.1 and RStudio 2022.07.1. The pROC package was used for the calculation of the c-statistic and ROC curve, and the c-statistic’s confidence intervals were derived using the package’s ci.auc function with default settings. The mice package and the miceafter package were used for the multiple imputation and pooling of results.
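A condensed sketch of the imputation workflow is given below, under the assumptions noted in the comments: predicted_risk_fun is a hypothetical stand-in for applying the published model formula, and the simple averaging of c-statistics is a crude substitute for the pooling implemented in miceafter.

```r
# Hedged sketch of the multiple-imputation sensitivity analysis: five
# imputed datasets using only the model-specific predictors and outcome.
library(mice)
library(pROC)

# keep only the model-specific predictors and the outcome (illustrative names)
model_vars <- patients[, c("y", "age", "nihss_admission",
                           "atrial_fibrillation", "stroke_type")]

imp <- mice(model_vars, m = 5, seed = 1, printFlag = FALSE)

# c-statistic in each completed dataset; predicted_risk_fun() is a
# hypothetical stand-in for the published model formula
aucs <- sapply(seq_len(imp$m), function(i) {
  d <- complete(imp, i)
  as.numeric(auc(roc(d$y, predicted_risk_fun(d), quiet = TRUE)))
})
mean(aucs)  # crude average; Rubin's-rules pooling (as in miceafter) preferred
```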
Patients and/or the public were not involved in the design, conduct, reporting or dissemination plans of this study.
We included 7879 stroke patients in the external validation of Bray et al’s model (figure 1). The median age of the B-SPATIAL patients included in this validation was 75 years and 55.0% were male (table 1). A final diagnosis of ischaemic stroke was considerably more common (92.4%) than haemorrhagic stroke. Median NIHSS at admission was 5 (IQR: 2–11). In total, 763 (9.7%) of included patients died within 30 days of admission. We found that Bray et al’s model underestimated the mortality risk, predicting an average mortality of 8.6%, corresponding to 680 deaths. The resulting observed-to-expected ratio was 1.12, indicating that 12% more deaths were observed in our sample than were predicted by the model. The calibration plot showed good alignment of predicted and observed mortality for eight decile groups (online supplemental table S1, figure 2). The model underestimated the observed mortality risk for the two decile groups with the highest predicted mortality. The calibration intercept was 0.34 (95% CI: 0.19 to 0.49) and the slope was 1.10 (95% CI: 1.03 to 1.17). Figure 2 shows the ROC curve illustrating the discrimination ability of the model in our sample. The c-statistic for 30-day mortality, the model’s intended outcome, was 0.865 (95% CI: 0.851 to 0.879). For comparison, the NIHSS alone showed a c-statistic of 0.838 (95% CI: 0.823 to 0.853). When instead using Bray et al’s model to predict in-hospital mortality in this validation dataset, we obtained a c-statistic of 0.873 (95% CI: 0.858 to 0.888). In the subgroup analysis by sex, Bray et al’s model showed similar performance for male and non-male patients with regard to calibration (online supplemental figure S1) and discrimination (c-statistic of 0.858 (95% CI: 0.836 to 0.880) for males and 0.865 (95% CI: 0.847 to 0.883) for non-males; online supplemental figure S2).
Figure 1
Flow chart showing eligibility criteria applied to the B-SPATIAL study population for the external validation of Bray et al’s model (2014) for post-stroke 30-day all-cause mortality3 (left) and Smith et al’s model (2010) for post-stroke in-hospital mortality8 (right). *Patients who opted out or for whom a mobile stroke unit was dispatched as part of the B_PROUD study were not included. B-SPATIAL, Berlin-SPecific Acute Treatment in Ischaemic or hAemorrhagic stroke with Long-term follow-up.
Figure 2
External validation of Bray et al’s model for post-stroke 30-day all-cause mortality in the B-SPATIAL registry (main analysis). Panel (a) shows the calibration plot and Panel (b) the receiver operating characteristic (ROC) curve. B-SPATIAL, Berlin-SPecific Acute Treatment in Ischaemic or hAemorrhagic stroke with Long-term follow-up.
Table 1
Characteristics of study population of stroke patients from the B-SPATIAL registry included in the external validation of Bray et al’s model for post-stroke 30-day all-cause mortality, stratified by outcome status
In the sensitivity analysis for Bray et al’s model, we added 1932 patients who were ultimately diagnosed with TIA to the validation sample. Among the 9811 ischaemic stroke/TIA patients, the observed 30-day mortality was 7.9%, very similar to the 7.4% predicted by Bray et al’s model. The calibration plot, calibration intercept (0.37 (95% CI: 0.22 to 0.52)) and slope (1.15 (95% CI: 1.08 to 1.21)) from this sensitivity analysis were similar to those of the main analysis, and the c-statistic was 0.880 (95% CI: 0.867 to 0.893; online supplemental figure S3).
In a second sensitivity analysis, in which we used multiple imputation, a total of 8366 stroke patients were included, of whom 951 (11.4%) died within 30 days. The observed mortality was higher compared with the main analysis, but the conclusions did not fundamentally change. The model underestimated 30-day mortality in the highest-risk individuals, to a slightly greater extent than in the main analysis (online supplemental figure S4). The model’s calibration intercept was 0.52 (95% CI: 0.37 to 0.67) and the calibration slope was 1.12 (95% CI: 1.04 to 1.19). The c-statistic obtained after multiple imputation was 0.870 (95% CI: 0.855 to 0.884), similar to the main analysis.
For the external validation of Smith et al’s prediction model, we included 1931 ischaemic stroke patients (figure 1). The median age in this sample was 75 years, and 56.2% of patients were male (table 2). The median NIHSS was 4 (IQR: 2–10) and most patients arrived by ambulance (81.1%). In total, 105 (5.4%) ischaemic stroke patients died during the hospital stay.
Table 2
Characteristics of study population of ischaemic stroke patients from the B-SPATIAL registry included in the external validation of Smith et al’s model for post-stroke in-hospital mortality, stratified by outcome status
Smith et al’s model predicted an average risk of in-hospital mortality of 4.8%, corresponding to 92 deaths. The observed-to-expected ratio of 1.14 indicated an underestimation of in-hospital mortality by 14%. The calibration plot revealed that the model underestimated the mortality risk in the decile groups with the highest predicted mortality. For decile groups with low and medium risk, the observed and predicted risks were well aligned, although with high uncertainty, as only a few deaths were observed in these groups (online supplemental table S2, figure 3). The corresponding calibration intercept was 1.20 (95% CI: 0.66 to 1.76), and the slope was 1.43 (95% CI: 1.22 to 1.66). We depicted the discrimination ability of the model predicting in-hospital mortality as a ROC curve (figure 3). The corresponding c-statistic was 0.891 (95% CI: 0.864 to 0.918). For comparison, the c-statistic for in-hospital mortality of the NIHSS alone was 0.868 (95% CI: 0.833 to 0.903). When instead using Smith et al’s model to predict 30-day mortality in this validation dataset, the c-statistic was 0.873 (95% CI: 0.847 to 0.899). Compared with non-male patients, the calibration of Smith et al’s model seemed slightly better among male patients, as the underestimation of the predicted risk was smaller in the highest risk decile groups (online supplemental figure S5). Discrimination ability seemed higher for male patients, with a c-statistic of 0.914 (95% CI: 0.881 to 0.946), compared with non-male patients (c-statistic: 0.867 (95% CI: 0.825 to 0.908)) (online supplemental figure S6).
Figure 3
External validation of Smith et al’s model for post-stroke in-hospital mortality in the B-SPATIAL registry (main analysis). Panel (a) shows the calibration plot and Panel (b) the receiver operating characteristic (ROC) curve. B-SPATIAL, Berlin-SPecific Acute Treatment in Ischaemic or hAemorrhagic stroke with Long-term follow-up.
In the sensitivity analysis excluding patients with unknown or missing mode of arrival for Smith et al’s model, the observed in-hospital mortality was 5.6% compared with 5.0% predicted by the model. The calibration plot (online supplemental figure S7), calibration intercept (1.11 (95% CI: 0.56 to 1.67)) and slope (1.41 (95% CI: 1.19 to 1.64)), as well as the c-statistic of 0.883 (95% CI: 0.854 to 0.912), were comparable to those estimated in the main analysis.
In the second sensitivity analysis for Smith et al’s model, we additionally included 597 TIA patients. Among the 2528 included ischaemic stroke/TIA patients, 4.3% died in hospital compared with a mortality of 4.0% predicted by the model. The calibration plot (online supplemental figure S8), calibration intercept (1.24 (95% CI: 0.71 to 1.77)) and slope (1.47 (95% CI: 1.27 to 1.68)), as well as the c-statistic (0.902 (95% CI: 0.875 to 0.929)) obtained in this second sensitivity analysis deviated only slightly from the main analysis results.
In a further sensitivity analysis, in which we used multiple imputation, a total of 2052 ischaemic stroke patients were included, of whom 117 (5.7%) died in hospital. After imputation, the model’s calibration intercept (1.26 (95% CI: 0.70 to 1.81)), slope (1.44 (95% CI: 1.22 to 1.66)) and calibration plot (online supplemental figure S9), as well as the c-statistic (0.893 (95% CI: 0.864 to 0.917)), were similar to the main analysis.
In this study, we externally validated two prognostic prediction models for mortality after stroke: Bray et al’s model for 30-day all-cause mortality and Smith et al’s model for in-hospital mortality, using data from a multicentre registry of adult stroke patients presenting to 15 stroke units in Berlin, Germany.
Bray et al’s prediction model was originally developed in the UK in 2014 using data from the SSNAP, the national registry for acute stroke in England and Wales.3 The original publication included an external validation study using data from the South London Stroke Register, which showed good calibration performance and high discrimination of the model, with a c-statistic of 0.87.3 The model was later externally validated in two other studies. The first was a temporal validation study using SSNAP data from a different time period, which found a slightly worse discrimination ability (c-statistic of 0.774).9 The second was conducted in the China National Stroke Registry; despite substantial differences in the study population’s composition, the model showed good discrimination ability (c-statistic of 0.80) and good calibration in this setting.11 Our findings add to this body of evidence, providing an external validation study from Germany. In the Berlin setting, we found good alignment between the predicted and observed 30-day mortality in low and medium-risk individuals; however, Bray et al’s model underestimated risk among high-risk patients. Compared with other external validations, we observed a higher discrimination performance of the model in our setting (c-statistic of 0.865). Overall, our conclusions did not differ after multiple imputation or when stratifying by sex.
Smith et al’s prediction model was originally developed using data from the GWTG Stroke Program in the USA.8 Thereafter, two external validation studies using different cohorts from the China National Stroke Registry found good calibration and high discrimination ability of this model, with c-statistics of 0.86712 and 0.86,10 similar to the results of the internal validation in the development study (c-statistic of 0.85).8 In the Berlin setting, we observed higher discrimination performance (c-statistic of 0.891); however, Smith et al’s model underestimated the risk in high-risk individuals, showing suboptimal calibration in our setting. Uncertainty was higher for this validation owing to the smaller sample size. The findings of the external validation did not differ substantially after multiple imputation. We observed slightly better model performance for male stroke patients.
The calibration of both models showed an underestimation of the mortality risk for high-risk patients in Berlin, which may be attributable to several factors. In our analysis, we excluded patients who opted out of study participation. Opting out was only possible for patients who survived the stroke, which may have led to an over-representation of fatal strokes in our sample.
For the validation of both models, we excluded patients with a final diagnosis of TIA in our main analysis, since in the original publications and validation studies, TIA patients were either explicitly excluded8 9 or their inclusion was not specified.3 10–12 Within the Berlin setting, we found that both models performed similarly or even better after the inclusion of TIA patients. Different definitions of TIA exist, and this diagnostic ambiguity may explain why researchers hesitate to include patients with TIA in prediction model development studies. However, since patients with a TIA present with symptoms comparable to an ischaemic stroke on admission, from a clinical and methodological perspective, we believe future work should consider including TIA patients in the development and validation of post-stroke prediction models as well.
A systematic review of prediction models for post-stroke outcomes found that models with a high number of predictors do not necessarily show better performance.2 Our results underscore the high predictive ability of the NIHSS, which as a single predictor attained a c-statistic of more than 0.83 for both mortality outcomes, comparable to previous studies.7 16 Even models with few variables, such as Bray et al’s model with only four predictors, showed high discrimination.3 11 Models that perform sufficiently well with fewer, routinely measured variables should be preferred over models with many predictors, since they are more likely to be used in practice. For this reason, as has also been argued for other clinical applications,17 18 we believe that future prediction model studies in the context of post-stroke outcomes should compare newly developed models’ performance with that of well-established models, or preferably focus on the external validation or updating of existing models rather than developing new ones.
The strengths of this study include the prospective, multicentre design of the B-SPATIAL registry, with coverage of all 15 Berlin stroke units over a 5-year period. The registry can therefore be considered representative of the population of adult stroke patients in Berlin; it comprises detailed information on demographics and clinical characteristics, with low loss to follow-up, especially for mortality endpoints.14 The recording of vital status during follow-up is considered particularly reliable because the information was supplemented by city registration office records.
Some limitations should be considered when interpreting our results. As Berlin is a densely populated city with several stroke units, the availability of stroke care may differ from that in other regional settings in Germany and Central Europe. Therefore, our results may not generalise to different settings, such as rural areas. Furthermore, the B-SPATIAL registry only contains information on patients with hospital arrival within 6 hours of symptom onset, since this was the eligibility window for reperfusion treatments when the registry commenced. However, we acknowledge that a non-trivial proportion of stroke patients present to hospitals later than 6 hours after onset,19 20 and the performance of these prediction models might differ for these patients.
Only three of the 15 registry hospitals routinely documented history of hyperlipidaemia, coronary artery disease, and history of stroke or TIA. Therefore, we could only validate Smith et al’s model in this subsample, which comprised 30% of the full validation sample. Furthermore, as the overall in-hospital mortality risk was low in our setting, only 105 in-hospital deaths were observed in this subsample, which reduced the power of the analysis and somewhat limits the interpretation of the calibration plot owing to higher uncertainty. For both models, in the main analysis, we excluded patients with missing information on at least one of the predictors (except mode of arrival). However, the sensitivity analysis in which the predictors’ missing values were imputed showed similar calibration and discrimination to the main analysis for both models.
Despite being developed outside of Germany, both Smith et al’s and Bray et al’s models for post-stroke mortality demonstrated good calibration for low and medium-risk stroke patients in our external validation using a large stroke registry in Berlin. Both models showed high discrimination ability but underestimated risk in high-risk patients. The performance of Bray et al’s model indicated an overall acceptable transportability to the Berlin setting and illustrates that a small number of variables routinely obtained at hospital admission can suffice for valid prediction of post-stroke mortality.
Data are available upon reasonable request. B-SPATIAL registry data can be made available in a de-identified manner to researchers who provide a methodologically sound proposal (to the extent allowed by the registry’s data protection agreement). Data access requests should be directed to jessica.rohmann (at) charite.de.
This work builds directly on the Master’s thesis project of Dr. Reitzle (supervised by Dr. Piccininni and Dr. Rohmann) and was conducted in accordance with the registered research proposal, which was approved by the Master of Science in Epidemiology Program at the Berlin School of Public Health, Charité – Universitätsmedizin Berlin. We are grateful to all collaborating hospitals and the study nurses for their engagement and thank Jakob Beilstein for assistance with data management.