
INTRODUCTION

The field of interventional cardiology has evolved considerably since the introduction of balloon angioplasty by Andreas Gruntzig,1 and current procedures achieve high success rates with low rates of periprocedural complications.2 Percutaneous coronary intervention (PCI) and transcatheter aortic valve replacement (TAVR) are now preferred in many high-risk subgroups in whom they were previously contraindicated.3-7

Evaluation of the risk-benefit ratio and risk stratification are important elements in optimizing care for an individual undergoing PCI or TAVR. Risk prediction models can help physicians, patients, and their families better understand the attendant risks and provide an objective basis for selecting the most suitable treatment option. It is paramount for clinicians to become familiar with the available risk scores and to apply them in clinical practice. This chapter focuses on the strengths and weaknesses, ease of applicability, and use of risk prediction models for estimating the risk of mortality and major adverse cardiac events (MACE) in the current environment of interventional cardiology.

DEVELOPMENT OF A RISK PREDICTION MODEL: STATISTICAL CONSIDERATIONS

A good predictive model should be accurate and able to discriminate between different levels of risk. An accurate model, on average, is not biased toward over- or underprediction. If the true average risk of an event in a population is 5%, one could achieve accuracy by predicting 5% risk for every patient. Such estimates would lack precision, however, because the same prognosis is given for both low- and high-risk patients. Discriminatory ability is related to precision (distinguishing high- from low-risk patients).

In the context of this chapter, a “prediction” is the probability that an event will occur. A high-risk patient may have a predicted risk of 0.50, but he or she will not suffer a half-event. This may be why assessment of these models tends to focus more on discriminatory ability than accuracy.
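The distinction between accuracy and discrimination can be made concrete numerically. The short Python sketch below is illustrative only and uses synthetic data (the cohort size, risk distribution, and variable names are assumptions, not data from this chapter): a flat 5% prediction for every patient is unbiased on average but cannot separate high- from low-risk patients, whereas individualized predictions achieve a higher c-statistic.

# Illustrative sketch with synthetic data: accuracy (mean bias) versus
# discrimination (c-statistic) for two sets of predicted risks.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
true_risk = rng.beta(1, 19, size=2000)        # hypothetical individual risks, mean ~5%
events = rng.binomial(1, true_risk)           # observed binary outcomes

flat_pred = np.full_like(true_risk, 0.05)     # same 5% prediction for every patient
individual_pred = true_risk                   # idealized individualized predictions

for name, pred in [("flat 5%", flat_pred), ("individualized", individual_pred)]:
    mean_bias = pred.mean() - events.mean()   # accuracy: average over- or underprediction
    c_stat = roc_auc_score(events, pred) if np.unique(pred).size > 1 else 0.5
    print(f"{name:15s} mean bias = {mean_bias:+.3f}   c-statistic = {c_stat:.2f}")

Both prediction sets show essentially zero bias, but only the individualized predictions discriminate, with a c-statistic well above 0.5.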

Logistic regression is the standard statistical analysis employed for binary outcomes.8 Harrell and colleagues9 recommend that the number of explanatory variables considered should not exceed one-tenth of the number of events. This set of variables should be determined without knowledge of their relationship to the outcome in the modeling data set. Clinical expertise is essential at this stage of model development. After candidate explanatory variables are chosen, the final model may be determined using automatic selection9 or bootstrap10 methods. These methods are intended not only to simplify the model, but also to avoid overfitting. Overfitting occurs when a model reflects anomalous associations specific only to the model-building data, resulting in suboptimal performance in other data sets.
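As a concrete, hypothetical illustration of these steps, the sketch below fits a logistic regression to a synthetic data set using the statsmodels library and checks the rule of thumb that the number of candidate predictors should not exceed roughly one-tenth of the number of events; the predictors and coefficients are invented for the example and do not correspond to any model described in this chapter.

# Minimal sketch on synthetic data: fit a logistic regression for a binary
# outcome and check the events-per-variable rule of thumb.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 4))                            # 4 hypothetical candidate predictors
linear_pred = -3.0 + X @ np.array([0.8, 0.5, 0.0, -0.4])
y = rng.binomial(1, 1 / (1 + np.exp(-linear_pred)))    # binary outcome (e.g., in-hospital mortality)

n_events = int(y.sum())
print(f"events = {n_events}, candidate predictors = {X.shape[1]}, "
      f"rule-of-thumb limit ~ {n_events // 10}")

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)    # fitted coefficients are log-odds
print(model.summary())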

Once a final model is chosen, the Hosmer-Lemeshow goodness-of-fit test may be used to determine if the model adequately reflects the observed data.11 A significant test result indicates an inadequate fit. Accuracy may be internally validated using data-splitting, cross-validation, or bootstrap methods.12 However, external validation is more valuable than these methods. Comparison ...
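The Hosmer-Lemeshow test is not always available as a single library routine; the sketch below is one common decile-of-risk implementation, written for illustration and assuming NumPy and SciPy. The function name hosmer_lemeshow and the synthetic data are choices made here, not material from the chapter.

# Illustrative implementation of the decile-based Hosmer-Lemeshow
# goodness-of-fit test; a small p-value indicates inadequate fit.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, pred, groups=10):
    order = np.argsort(pred)
    y, pred = np.asarray(y)[order], np.asarray(pred)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), groups):    # roughly equal-sized risk groups
        obs = y[idx].sum()                  # observed events in the group
        exp = pred[idx].sum()               # expected events in the group
        n_g = len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n_g))   # chi-square contribution
    return stat, chi2.sf(stat, df=groups - 2)

# Example with synthetic predictions and outcomes:
rng = np.random.default_rng(2)
pred = rng.uniform(0.01, 0.30, size=500)
y = rng.binomial(1, pred)
print(hosmer_lemeshow(y, pred))             # (statistic, p-value)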
