INTRODUCTION

The modern era of arrhythmology has seen an explosive expansion of discovery and technology and an increasing number of cardiologists subspecializing in clinical electrophysiology. It is not surprising that credentialing has become necessary; hence, testing boards have sprung up in many international jurisdictions to certify that the test taker has fulfilled the cognitive requirements of the subspecialty. We (ENP and GJK) have both served as board members and chairmen of the arrhythmia panel of the American Board of Internal Medicine (ABIM) and try to bring this experience to bear in providing a few strategies for the test taker. We suspect that the basic issues will remain the same for other boards internationally, although we can both attest to the care and rigor exerted by the ABIM in ensuring that the test questions are unambiguous, the graphics are clear, and the results discriminate the qualified from the unqualified.

Newer testing methods incorporate case scenarios and the like, with realistic graphics and multiple potential management and treatment paths, but the core for most boards remains some variant of the classic multiple-choice question. Of all the categories tested, we have found that candidates still perform relatively poorly on the core ECG and electrogram tracing questions, the emphasis of this book.

In the process, committees of experts are formed and meet to formulate and vet questions. The core question type is the single best answer among 4 or 5 choices. The incorrect choices, or “distractors,” are chosen to be somewhat credible, and nonsense choices are discouraged. Cryptic and confusing multiple-choice formats, such as “1 and 3 are correct, 2 and 4 are false,” are excluded. The correct answer may be black and white or more nuanced, especially with clinical management issues, but the committee has to hammer away until there is agreement on the correct answer. There exists a historical bank of questions, and new questions are added each session. Before a new question “goes live,” it may be inserted into the test to judge its performance without counting toward the candidate’s final score. This evaluation is largely statistical, utilizing psychometric techniques. For example, the top-tier candidates by overall score should answer a given question correctly more often than the lowest tier. If too many candidates get a question right or wrong, it goes back to the drawing board to be revised or eliminated, since it is of no value in “discriminating” between candidates.
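To make the notion of “discrimination” concrete, the following short sketch computes a classic item discrimination index: the difference between the proportion of top-scoring and bottom-scoring candidates who answered a given question correctly. This is an illustrative simplification only, not the ABIM’s actual psychometric procedure; the 27% upper/lower split and all names in the code are our own assumptions.

    # Illustrative sketch (not ABIM methodology): a classic item discrimination
    # index compares how often the top- and bottom-scoring groups of candidates
    # answered a particular question correctly.

    def discrimination_index(total_scores, item_correct, group_fraction=0.27):
        """total_scores: overall exam score per candidate.
        item_correct: 1/0 per candidate for the question under review.
        group_fraction: conventional 27% upper/lower split (an assumption here)."""
        ranked = sorted(range(len(total_scores)),
                        key=lambda i: total_scores[i], reverse=True)
        n = max(1, int(len(ranked) * group_fraction))
        upper, lower = ranked[:n], ranked[-n:]
        p_upper = sum(item_correct[i] for i in upper) / n
        p_lower = sum(item_correct[i] for i in lower) / n
        # A value near zero (or negative) flags a question that fails to
        # separate stronger from weaker candidates.
        return p_upper - p_lower

    # Hypothetical example: 10 candidates; this item separates high and low scorers well.
    scores = [95, 90, 88, 84, 80, 75, 70, 65, 60, 55]
    answers = [1, 1, 1, 1, 1, 0, 1, 0, 0, 0]
    print(round(discrimination_index(scores, answers), 2))

A question that nearly everyone answers correctly (or incorrectly) yields an index near zero, which is why such items are revised or dropped.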

There are many potential “hints” for the test taker, but the following may prove useful:

  1. The introduction to the question, or “stem,” introduces the case and may be very succinct or lengthy. Regardless, it should be read CAREFULLY, as the context may well influence the choice of answer; a well-constructed question does not include much extraneous information.

  2. We might recommend an initial ...