The heart is characterized by a complex electromechanical activity essential for the sustenance of body function. Cardiac disease is the leading cause of morbidity and mortality in the industrialized world,1 imposing a major burden on health care systems. Current therapies rely, to a large extent, on implantable devices, administration of drugs, or the ablation of tissue. Although these therapies may improve a patient's condition significantly, they are palliative rather than curative, and undesired adverse effects of varying degrees of severity are quite common.2-5 In the quest of devising novel, safer, and more effective therapies to reduce medical costs and treatment duration, developing a comprehensive understanding of cardiac structure and function in health and disease is the strategy most likely to succeed and, thus, is a central focus of basic and clinical heart research. Traditionally, experimental work in conjunction with clinical studies was the dominant, if not exclusive, approach for inquiries into physiological function. Today, in the postgenomic era, a wider portfolio of techniques is employed, with computational modeling being another accepted approach, either as a complement to experimental work or as a stand-alone tool for exploratory testing of hypotheses. The need for computational modeling is mainly driven by the difficulty in dealing with the vast amount of available data, obtained from various subsystems of the heart, at different hierarchical levels of organization from different species, and with the complexity involved in the dynamics of interactions between subsystems within and across levels of organization. Computational modeling plays a pivotal, if not indispensable, role in harnessing these data for further advancing our mechanistic understanding of cardiac function.

Over the past decade, impressive advances in experimental technology have generated a wealth of information across all levels of biological organization for various species, including humans, ranging from the genetic scale of the cardiac system up to the entire organ. However, attempts to translate these large quantities of experimental data into safer and more effective therapies to the benefit of patients have largely proved to be elusive. In no small part, this results from the complexity of biological systems under study. They invariably comprise multiple subsystems, interacting with each other within and across levels of organization. Although biological systems can be dissected into many subcomponents, there is now a clear recognition that the interaction within and across levels of organization may produce emergent properties that simply cannot be predicted from "reductionist" analysis. Because these emerging properties are often not intuitive and not predictable from analyzing the subsystems in isolation, detailed understanding of individual subsystems may provide little mechanistic insight into functions at the organ level.6 Moreover, due to the highly complex nonlinear relationship between a rapidly increasing amount of disparate experimental data on an increasingly larger number of biological subsystems involved, any attempts to gain new mechanistic insights at the level of a system by deriving a qualitative understanding of all the simultaneous interactions within and between subsystems simply by reasoning are futile. Due to the complexity of the system, conclusions may even be misleading and, most likely, will not withstand thorough quantitative scrutiny. Integrative research approaches are required to address these issues, and computational modeling is a key technology to achieve such integration.

Furthermore, computer modeling can aid in overcoming the numerous limitations of current experimental techniques by complementing in vivo or in vitro experiments with matching in silico* models. The most common experimental limitations include the following. (1) Current experimental techniques cannot resolve, with sufficient accuracy, electrical events confined to the depth of the myocardial walls, which limits observations to the surfaces of the heart. (2) Selectively changing a single parameter in a well-controlled manner without affecting any other important parameters of cardiac function is difficult because the technology required is not available or because sufficiently selective substances have not been developed yet. (3) Certain experimental techniques require modification of the substrate or the knock-out of other physiological functions. For instance, to optically map cardiac electrical activity, first, photometric dyes are required, which modify membrane physiology and are known to be cytotoxic, and second, electromechanical uncouplers are administered to suppress mechanical contraction, further modulating physiological function. Caution is advised when interpreting such data because potentially important regulatory mechanisms are excluded.7 (4) Experimental studies are mainly performed in animals, but species differences in the distribution and kinetics of ion channels are significant. As a result, drugs, for instance, may have significantly different effects in humans than in other species. (5) Due to the invasive nature, most advanced experimental methods are limited to animal studies and cannot be applied clinically to humans. Hence, available data from human studies are very limited, and available animal models do not adequately reflect the evolution of cardiovascular diseases in people.

*In silico refers to work performed on a computer, ie, via computer simulation.

Biophysically detailed cell modeling was introduced as early as 1952 in neuroscience, through the seminal modeling work of Hodgkin and Huxley on the squid giant axon.8 The first models of cardiac cellular activity were pioneered by Noble in the 1960s,9 but the first computer models for studying bioelectric activity at the tissue level did not emerge until the 1980s.10 By today's standards, the level of detail in early modeling work was very limited. Computational performance was poor, efficient numerical techniques had not been fully developed, and descriptions of cellular dynamics were lacking many details. To keep simulations tractable, the cardiac geometry was approximated by geometrically very simple structures such as one-dimensional (1D) strands, two-dimensional (2D) sheets,11,12 and three-dimensional (3D) slabs.13 Although geometrically simple models continue to be used, owing to their simplicity, which facilitates the dissection of mechanisms, it was quickly recognized, perhaps unsurprisingly, that observations made at the organ level cannot be reproduced without accounting for organ anatomy. The combined effects of structural complexity and the various functional heterogeneities in a realistic heart led to complex behavior that is hard to predict or investigate with overly simplified models. Besides computational restrictions, simple geometries were also popular because electric circuitry analogs could be used to represent the tissue as a lattice of interconnected resistors or a set of transversely interconnected core-conductor models.11 Thus, well-established techniques for electric circuit analysis could be applied, and simple discretization schemes based on the finite difference method became a standard due to their ease of implementation.
With the advent of continuum formulations,14 derived by homogenization of discrete tissue components,15 and the adoption of more advanced spatial discretization techniques such as refined variants of the finite difference method,16,17 the finite element method,18,19 and the finite volume method,20,21 the stage was set for anatomically realistic models of the heart. During the past few years, anatomically realistic computer simulations of electrical activity in ventricular slices,22-24 whole ventricles,25-28 and atria29-31 have become accessible. Such models are considered state of the art; nonetheless, many simplifications persist:

- The geometry of the organ is represented in a stylized fashion ("one heart geometry fits all" approach)25,26; only parts of the heart are modeled, such as slices across the ventricles.22
- The computational mesh discretization is coarser than the size of a single cell (~100 μm) to reduce the number of degrees of freedom in the model. This approach leads to (1) underrepresentation, or complete omission, of the finer details of the cardiac anatomy, such as endocardial trabeculations or papillary muscles, and (2) the necessity to adjust, in an ad hoc manner, the tissue conductivity tensors in order to avoid artificial scaling of the wavelength, thus compensating for the dependence of conduction velocity on grid granularity. That is, as the grid is coarsened with all other model parameters remaining unchanged, conduction velocity is reduced, and thus, the wavelength is diminished, with conduction block occurring above a certain spatial discretization limit.
- The myocardial mass is treated as a homogeneous continuum, without representing intramyocardial discontinuities such as vascularization, cleavage planes, or patches of fat or collagen.
- The specialized cardiac conduction system (ie, the sinoatrial node, the atrioventricular node, and the Purkinje system [PS]) is typically not represented in the whole organ simulations, although a few exceptions exist.32-34
- Functional gradients in the transmural35 and apicobasal directions,36 as well as differences between left ventricle (LV) and right ventricle (RV),37 are often ignored.
- The conductivity tensors in the ventricles are orthotropic due to the presence of laminae.38 Most, if not all, whole ventricular models do not account for this property. Besides, unlike the transmural rotation of fibers, the laminar arrangement cannot be described as easily by a rule because transmural variation of sheet angles appears to be spatially heterogeneous.39
- Myocardial membrane ion transport kinetics is modeled in a simplified fashion.18,40 Reduced models preserve salient features such as excitability, refractoriness, and electrical restitution; these are usually represented phenomenologically at the scale of the cell (so that they can be easily manipulated).
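As a concrete illustration of the last point, a reduced phenomenological model can be stated and integrated in a handful of lines. The following Python sketch integrates the classic FitzHugh-Nagumo two-variable model with forward Euler; the parameter values are a standard textbook choice, not those of any specific reduced cardiac model cited above, but the trajectory exhibits the salient features mentioned: excitability (a suprathreshold stimulus elicits a full excursion) and return to a stable resting state.

```python
import numpy as np

def fitzhugh_nagumo(t_end=400.0, dt=0.01, i_stim=0.5, stim_dur=5.0):
    """Forward-Euler integration of the FitzHugh-Nagumo model.

    v: fast excitation variable; w: slow recovery variable.
    Parameters are illustrative; real cardiac cell models are tuned
    to species-specific experimental data.
    """
    a, b, eps = 0.7, 0.8, 0.08
    v, w = -1.2, -0.6                 # approximate resting state
    n = int(t_end / dt)
    vs = np.empty(n)
    for i in range(n):
        stim = i_stim if i * dt < stim_dur else 0.0
        dv = v - v**3 / 3.0 - w + stim
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        vs[i] = v
    return vs

vs = fitzhugh_nagumo()
print("peak v: %.2f, final v: %.2f" % (vs.max(), vs[-1]))
```

The single suprathreshold stimulus produces one action potential-like excursion, after which the trajectory relaxes back to rest, which is why such two-variable models suffice when only excitability, refractoriness, and restitution need to be captured.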

As with geometrically simplified models, this fairly long list of simplifications does not negate the utility of such models. In recent years, computational whole heart modeling has become an essential approach in the quest for integration of knowledge, as evidenced by numerous contributions that have led to novel insights into the electromechanical function of the heart.

Despite the utility of current models, the underlying concepts and technologies are evolving at an increasingly faster pace with the aim of building the basis for comprehensive and detailed (high-dimensional) models that realistically represent as many of the known biological interactions as possible. Although this is clearly a goal worth pursuing, one has to keep in mind that accounting for more and more mechanisms does not necessarily lead to deeper insights into systems behavior. Rather than accounting for all known details, modeling environments that enable researchers to flexibly choose the subsystems relevant for a particular question, as well as the level of detail for each subsystem, are sought. This flexibility helps to better balance the computational effort and to focus on specific aspects of a problem without being overwhelmed and distracted by the myriad of details. That is, some subsystems will be represented by high-dimensional models, whereas others are represented by reduced low-dimensional models, which often use nonlinear dynamics approaches to capture essential emergent behaviors phenomenologically. Although such flexibility is desirable, the technical challenges involved are formidable. Software engineering considerations begin to carry more weight in reconciling flexibility, reusability, and efficiency of increasingly complex software stacks. The time scales and efforts involved in developing simulation tools such as those presented here are substantial, on the order of tens of man-years, which complicates implementation within an academic environment considerably and also raises sustainability issues that are not straightforward to address.

Impressive advances in imaging modalities, which provide 3D or even four-dimensional (4D) imaging data at an unprecedented resolution and quality, have led to an increased interest in using anatomically detailed, tomographically reconstructed models. Not only can anatomic parameters be quantitatively assessed with current imaging technology, but functional parameters can also be assessed. Growing access to such data sets has motivated the development of computational techniques that aim at providing a framework for the integration of data obtained from various imaging modalities into a single comprehensive computer model of an organ, which can be used for in silico experimentation. Quite often, more than one imaging modality has to be used and combined with electrocardiographic mapping techniques to complete the diagnostic picture. Although a multimodal approach complicates the model construction process, because various data sources have to be fused to precisely overlap in the same space, for the sake of biophysically accurate parametrization of in silico models, potentially relevant information is added that may improve the predictive power of the model. Further, interrelating multimodal diagnostic parameters, interpreting their relationships, and predicting the impact of procedures that modify these parameters can be challenging tasks, even for the most experienced physicians. In those cases where suitable mechanistic frameworks have been developed already to describe the relationship between diagnostic parameters and their dependency on parameters of cardiac function, computer modeling can be used to quantitatively evaluate these relationships. There is a legitimate hope that computational modeling may contribute to better inform clinical diagnosis and guide the selection of appropriate therapies in a patient-specific manner.

Currently, the number of tomographically reconstructed anatomic models used in cardiac modeling is fairly small. This has to be attributed, to a large extent, to the technical challenges involved in image-based grid generation, which render model generation an elaborate and time-consuming task. A further obstacle is the small number, and lack of general availability, of simulators that are sufficiently flexible, robust, and efficient to support in silico experiments using such computationally expensive models. However, recent technological advances have simplified the process, facilitating model generation on a time scale approaching hours, rather than days or weeks. The speed and ease of use of image-based model generation pipelines are about to lower the barrier to using individualized models. Although heart models based on a single individual or stylized models derived from a single data set may be well suited for studying generic questions, it is important to critically question the insights gained because there is clearly a danger of introducing a bias by relying on a single model that may or may not be representative of a species. Knowing that intersubject variability is quite substantial, and with the ultimate aspiration of personalized medicine in mind, individualized models may prove to be of higher predictive value for specific questions because they allow representation of anatomical and functional information on a case-by-case, subject-specific basis.

Compared with basic research, time constraints in a clinical context are more restrictive, and the cycle of model definition, parametrization, and execution of an in silico experiment has to be seen against the background of a clinical workflow. There are obstacles to overcome that currently prevent the tightening of this cycle. First, the biophysically detailed parametrization of a subject-specific in silico heart is a nontrivial problem and will require major research efforts. Second, the computational expense of conducting in silico experiments is substantial, which may pose a problem when fast turnaround times are key to diagnostic value. Although current simulators cannot yet deliver results in real time, dramatic improvements in numerical techniques and modeling approaches suggest that near real-time performance of computer simulations of cardiac electromechanical activity may indeed be achievable within a time horizon of only a few years. This optimistic view is based on current developments in the high-performance computing (HPC) industry, which promise to deliver enormously increased computational power at a much higher packing density, and on recent algorithmic developments that promise optimal scalability, a key feature for taking advantage of future HPC architectures.

The goal of this chapter is to present an overview of current state-of-the-art advances toward using anatomically detailed, tomographically reconstructed, in silico models for predictive computational modeling of the heart as developed recently by the authors of this chapter. We first outline the methodology for constructing electrophysiological models derived from imaging data sets. We then present the mathematical and numerical underpinnings required for building advanced in silico cardiac simulators, along with practical aspects of the modeling cycle in electrophysiological studies. Finally, we provide three examples that demonstrate the use of these models, focusing on optimization of defibrillation therapy, the role of the specialized conduction system in the formation of arrhythmias, and a highly detailed model of myocardial infarction for studying arrhythmogenesis in the canine ventricles.

To construct geometrically realistic models of cardiac anatomy, anatomical information must first be obtained via various imaging modalities and then processed and used in model construction. Depending on the specific application of the model, there is generally not a single ideal imaging modality with which to generate the most complete model, and fusion of image data obtained from different modalities is commonplace. This can involve combining structural and functional imaging data, often from modalities that probe such information at a range of scales from the subcellular to the whole organ.

In the past decade or so, efforts have been focused on developing techniques to construct 3D computational cardiac models directly from magnetic resonance (MR) data, due to the nondestructive nature of that technique.41 However, the detail and anatomic complexity of the resulting models have been limited by the inherent resolution limits of MR data sets, often 100 μm or worse. In the last few years, the advent of stronger magnets and refined scanning protocols has significantly increased the resolution of anatomical MR scans, such that small mammalian hearts can now be imaged with MR voxel dimensions of approximately 20 to 25 μm.42,43 An example of a high-resolution anatomic MR scan of a rabbit heart with voxel resolution of approximately 25 μm isotropic is shown in Fig. 14–1A.

###### Figure 14–1.

**A.** Magnetic resonance (MR)-to-model pipeline. **Left.** Longitudinal slice from a high-resolution MR image of the rabbit still heart with resolution of approximately 25 μm isotropic.43 **Center.** Segmentation of the high-resolution MR data set to produce a binary segmented voxel image stack with atria removed. **Right.** Generation of a high-resolution tetrahedral finite element mesh directly from the segmented voxel stack. The highlighted region shows the high quality of the mesh representation of intricate endocardial surface structures. **B.** Visualization of tagged structures within the ventricular model showing papillary muscles (green) and valves (blue), generated via manually created masks prior to mesh generation. **C.** Visualization of fiber vectors incorporated into the finite element mesh using the minimum distance algorithm described in the "Assignment of Fibrous Architecture" section.

As a result of this recent increase in attainable resolution, anatomical MR imaging (MRI) is now capable of providing a wealth of information regarding fine-scaled cardiac structural complexity. Such MR data are currently allowing accurate identification of microscopic features such as the coronary vasculature, extracellular cleft spaces, and the free-running PS, as well as macroscopic structures such as trabeculations and papillary muscles. However, differentiation of different tissue types is often problematic in anatomical MR, due to the very similar signal intensities recorded from functionally different tissue regions of the preparation. Furthermore, information regarding the organization of cardiomyocytes into cardiac fibers44 and the laminar structure of the myocardial wall45 is unattainable with normal anatomic MRI.

Myocardial structural information is more commonly obtained using a variation on the conventional anatomical MR protocol, called *diffusion tensor MRI* (DT-MRI). In DT-MRI, the measurement of the preferential direction of water molecule diffusion can be directly related to the underlying fiber architecture of the sample. It has been shown that the primary eigenvector from the diffusion tensor data (representing the direction of greatest water diffusion) corresponds faithfully with the cardiomyocyte direction (commonly referred to as the fiber direction), being validated with histological measurements.46 Similar measurements also suggest that the tertiary eigenvector represents the sheet-normal direction, although this correlation is still controversially debated.47
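The mapping from a measured diffusion tensor to fiber and sheet-normal directions reduces to an eigendecomposition of the symmetric 3 × 3 tensor. The following Python sketch illustrates this with a synthetic tensor (the numerical values are hypothetical, not measured DT-MRI data):

```python
import numpy as np

def fiber_and_sheet_normal(D):
    """Return (primary, tertiary) eigenvectors of a symmetric diffusion tensor.

    The primary eigenvector (largest eigenvalue) approximates the local
    fiber direction; the tertiary eigenvector (smallest eigenvalue) is
    commonly interpreted as the sheet-normal direction.
    """
    evals, evecs = np.linalg.eigh(D)   # eigh returns ascending eigenvalues
    return evecs[:, -1], evecs[:, 0]

# Synthetic tensor (mm^2/s, illustrative): strongest diffusion along x,
# weakest along z, mimicking a fiber aligned with the x axis.
D = np.diag([1.7e-3, 0.6e-3, 0.2e-3])
fiber, sheet_normal = fiber_and_sheet_normal(D)
print(np.abs(fiber), np.abs(sheet_normal))
```

Eigenvector signs are arbitrary, which is why practical pipelines compare directions up to sign when validating against histological fiber measurements.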

Although recent advances in DT-MRI protocols have reduced voxel resolutions to approximately 100 μm isotropic for small mammalian hearts, this is still approximately an order of magnitude coarser than corresponding anatomical scans,43 which can cause problems when combining information from the two modalities. Due to this relatively low resolution, partial volume effects at tissue boundaries can also result in significant inaccuracies in predicted fiber orientation in these regions, particularly evident at epicardial/endocardial surfaces. Such effects occur when edge voxels contain a significant proportion of nontissue (where diffusion is generally isotropic) within their boundaries, meaning that the signal from these voxels does not accurately represent the underlying tissue structure in these boundary regions. This can also prove problematic close to intramural blood vessel cavities and extracellular cleft regions. Furthermore, traditional DT-MRI techniques have difficulty resolving more than one prevailing cell alignment direction in any one voxel, which can cause inaccuracies in regions such as the papillary muscle insertion site on the endocardium or where the RV and LV meet close to the septum, sites where there is known to be an abrupt change in fiber orientation. However, novel DT-MRI protocols, such as Q-ball imaging,48 have been shown in recent preliminary studies to successfully locate these regions of "mixed" fiber vectors, building on the success of such techniques used previously in brain DT-MRI acquisition to delineate regions of neural fiber crossing.

Following the acquisition of high-resolution structural and functional imaging data sets, described earlier, such information must then be processed and transformed into a usable format to facilitate the generation of anatomically detailed computational cardiac models. One important aspect of this is that all relevant information regarding gross cardiac anatomy, histoarchitectural heterogeneity, and structural anisotropy acquired in the images must be successfully translated over to the final model, so far as resolution requirements permit. Second, the computational pipeline of technology established for the analysis and processing of the images, and the generation and parameterization of the models from the imaging data should be as automated as possible, minimizing manual interaction. The development of such an automated pipeline will facilitate its use alongside imaging atlas data sets to investigate the causal effect of intrapopulation heterogeneities on simulations of cardiac function.

Anatomical MR data are the most common imaging data currently being used to construct the geometric basis of computational cardiac models. The first stage of the MR-to-model pipeline is to faithfully extract all of the complex geometric information present in the MRIs, which can then be carried over and used in the computational model. This is done by the process of segmentation. Segmentation is a broad and widely used technique within medical image analysis that involves labeling voxels based on their association with different regions, objects, or boundaries within the image. In certain cases, this can be done manually, for example, a highly experienced physician may segment by hand a region of tumorous tissue within an MRI or computed tomography (CT) scan. Alternatively, computational algorithms can be developed and used, based on a priori knowledge of the image features, to automatically segment regions of interest within the image, ideally with little or no manual input. To generate a computational model from cardiac MRIs, our initial goal is simply to identify those voxels in the MR data set that belong to cardiac "tissue" and those that represent nontissue or "background." The background medium is generally identified in an MR scan as the lighter areas within the ventricular and atrial cavities, blood vessel lumens, the extracellular cleft spaces, and the medium on the exterior of the heart surrounding the preparation. Successful discrimination between background and tissue translates the gray scale MRI data set into a binary black/white (0/1) image mask.
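The tissue/background discrimination described above can be illustrated with a simple global intensity threshold applied to a synthetic grayscale stack. This is only a minimal sketch; the level-set approach actually used for the rabbit data set, described next, is far more robust:

```python
import numpy as np

def binary_mask(stack, threshold):
    """Label voxels as tissue (1) or background (0).

    Brighter voxels are treated as background (cavities, vessel lumens,
    surrounding medium), matching the convention described in the text.
    A single global threshold is an illustrative simplification.
    """
    return (stack < threshold).astype(np.uint8)

# Synthetic 3-slice "scan": bright background (~200) with a dark
# 4x4 block (~50) standing in for myocardium.
stack = np.full((3, 8, 8), 200.0)
stack[:, 2:6, 2:6] = 50.0
mask = binary_mask(stack, threshold=120.0)
print(mask.sum())   # number of voxels classified as tissue
```

The resulting 0/1 mask plays the role of the "binary black/white image mask" referred to above, and is the input to all downstream tagging and meshing steps.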

Segmentation of the high-resolution rabbit MR data set shown in Fig. 14–1A, left,43 was conducted using a level-set segmentation approach. Level-set methods are based on a numerical regimen for tracking the evolution of contours and surfaces.49 The contour is embedded as the zero level set of a higher dimensional function Ψ(**X**, t), which can be evolved under the control of a differential equation. Extracting the zero level set Γ(**X**, t) = {**X** : Ψ(**X**, t) = 0} allows the contour to be obtained at any time. A generic level-set equation to compute the update of the solution Ψ is given by:

∂Ψ/∂t = −α *A*(**X**) · ∇Ψ − β *P*(**X**)|∇Ψ| + γ *Z*(**X**) κ|∇Ψ|  (14–1)

(κ denotes the local curvature of the evolving interface)

where A, *P*, and *Z* are the advection, propagation, and expansion terms, respectively. The scalar constants α, β, and γ weight the relative influence of each of the terms on the movement of the surface, and thus, Equation 14–1 represents a generic structure that can be adapted to the particular characteristics of the problem. Different level-set filters use different features of an image to govern the contour propagation (eg, explicit voxel intensity or local gradient values) and can thus be combined to provide robust segmentation algorithms. Combining level-set filters in a sequential pipeline can be beneficial to provide a good initialization for filters that are prone to get trapped in local minima of the related cost function in the iterative solution to Equation 14–1. Figure 14–1A shows the final segmented voxel image stack of the rabbit MR data set following segmentation using a level-set approach.43
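To make the roles of the terms in Equation 14–1 concrete, the following Python sketch evolves only the propagation term on a signed-distance function; the advection and curvature terms are omitted, and the central-difference gradient is an illustrative shortcut (production level-set codes use upwind differencing for stability):

```python
import numpy as np

def propagate(psi, P, beta, dt):
    """One explicit update of the propagation term of Eq. 14-1:
    psi_t = -beta * P * |grad psi|  (advection and curvature omitted).
    """
    gy, gx = np.gradient(psi)
    return psi - dt * beta * P * np.sqrt(gx**2 + gy**2)

# Signed distance to a circle of radius 10 on a 64x64 grid:
# psi < 0 inside the contour, psi > 0 outside.
y, x = np.mgrid[0:64, 0:64]
psi = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 10.0
area0 = (psi < 0).sum()
for _ in range(20):
    psi = propagate(psi, P=1.0, beta=1.0, dt=0.2)
area1 = (psi < 0).sum()
print(area0, area1)   # enclosed area grows as the contour expands
```

With a positive propagation speed the zero level set expands outward; in a segmentation filter, P would instead be derived from image features (eg, intensity), slowing or reversing the front at tissue boundaries.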

The final stage in the segmentation process involves the delineation of the ventricles from the segmented binary voxel image stack to facilitate the generation of a purely ventricular computational model. This step is required because an insulating layer of connective tissue separating the ventricles from the atria (the annulus fibrosus) precludes direct electrical conduction between atrial and ventricular tissue. As a result, ventricular computational models can be developed to allow the study of ventricular electrical function in isolation from the atria. Segmentation of the ventricles was performed by manual selection of 20 to 30 points in every fifth long-axis slice through the MR data set along the line believed to represent the atrioventricular border. Identification of this line was usually possible because the connective tissue of the annulus fibrosus appeared darker than the surrounding myocardium in the MR data. A smooth surface was then fitted through the landmark points to define a 2D surface dissecting the image volume; all tissue voxels residing above the surface were subsequently removed, leaving only the ventricular tissue beneath. Figure 14–1A (center) shows the final segmented ventricular voxel image stack with atrial tissue removed.
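The surface-fitting and voxel-removal steps can be sketched as follows; for simplicity, a least-squares plane stands in for the smooth fitted surface, and the landmark coordinates and toy volume are hypothetical:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through landmark points."""
    x, y, z = points.T
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return a, b, c

def remove_above_plane(mask, a, b, c):
    """Zero out tissue voxels lying above the plane (here: the 'atria')."""
    z, y, x = np.indices(mask.shape)
    return np.where(z > a * x + b * y + c, 0, mask)

# Hypothetical atrioventricular-border landmarks, all in the plane z = 5.5
pts = np.array([[0.0, 0.0, 5.5], [10.0, 0.0, 5.5],
                [0.0, 10.0, 5.5], [10.0, 10.0, 5.5]])
a, b, c = fit_plane(pts)
tissue = np.ones((10, 10, 10), dtype=np.uint8)   # toy all-tissue volume
ventricles = remove_above_plane(tissue, a, b, c)
print(ventricles.sum())   # only voxels at or below the plane remain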

Following discrimination between tissue and nontissue in the segmentation process described earlier, it is further necessary to classify tissue voxels. Classification is based either on a priori knowledge or on additional complementary image sources or features that can be used to further differentiate functional or anatomical properties, both between different tissue types and also within a particular tissue type. Identifying such differences allows variations in electrical, structural, and/or mechanical parameters to be assigned on a per-region basis to tissue throughout the model, which is vital for realistic simulations of heart function. Each voxel can have one or more classifications, and for each classification, a numerical "tag" is assigned to the voxel. Importantly, these numerical tags within the voxel image stack are retained throughout the meshing process (see later section "Mesh Generation"), becoming associated with the individual finite elements that make up the computational model.
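The per-voxel tagging can be sketched as follows; the tag values and region names are hypothetical illustrations, not the encoding used in the actual pipeline:

```python
import numpy as np

# Illustrative tag values (hypothetical encoding, not the chapter's)
TAG_MYOCARDIUM, TAG_PAPILLARY, TAG_PURKINJE = 1, 2, 3

def tag_regions(tissue_mask, region_masks):
    """Combine the binary segmentation with manually created region masks.

    tissue_mask: 0/1 voxel array from segmentation.
    region_masks: list of (mask, tag) pairs; later masks override earlier
    ones where they overlap. Untagged tissue keeps TAG_MYOCARDIUM.
    """
    tags = tissue_mask.astype(np.int32) * TAG_MYOCARDIUM
    for mask, tag in region_masks:
        tags[(tissue_mask == 1) & (mask == 1)] = tag
    return tags

tissue = np.ones((4, 4), dtype=np.uint8)         # toy all-tissue slice
papillary = np.zeros_like(tissue)
papillary[0:2, 0:2] = 1                          # manual papillary mask
purkinje = np.zeros_like(tissue)
purkinje[3, 3] = 1                               # manual Purkinje mask
tags = tag_regions(tissue, [(papillary, TAG_PAPILLARY),
                            (purkinje, TAG_PURKINJE)])
print((tags == TAG_PAPILLARY).sum(), (tags == TAG_PURKINJE).sum())
```

Because the tags are carried through mesh generation, each finite element inherits the tag of its parent voxels, and region-specific membrane models or fiber rules can later be selected by a simple tag lookup.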

Regions may be tagged in order to assign different cellular electrical properties to different tissue types. For example, the free-running Purkinje fibers are known to have very different electrical membrane dynamics compared with normal ventricular cells, requiring the use of specialized Purkinje cell models within these structures. In addition, tags can be used to assign different structural properties to distinct regions of tissue. For example, the papillary muscles and larger trabeculae (in places where they are detached from the endocardial wall) have a very different fiber architecture than myocytes lying within the bulk of the myocardial wall, tending to align parallel to the axis of these cylindrical structures. In all of these cases, it is often not possible to discern these different regions directly from the MR data because there is usually little or no difference in voxel intensities or intensity gradients between them. Although it is feasible that specialized shape and appearance models may be developed to segment some of these regions (eg, the papillary muscles and free-running Purkinje fibers have very distinctive morphologies), often manually created masks are produced and combined with the segmented image stack to numerically tag individual regions. Such a procedure was adopted to manually delineate structures such as the papillary muscles/larger trabeculae and valves from the segmented rabbit MR data, as well as the specialized Purkinje conduction system. Figure 14–1B shows tagged regions in the finite element rabbit ventricular model of Fig. 14–1A (right), delineating the papillary muscles (green) and the valves (blue). In addition, Fig. 14–2A shows the manual identification of Purkinje fibers and their respective junctions within the MRI stack.
Figures 14–2B and C show the incorporation of this information into the ventricular model, allowing the tagged Purkinje fibers to be identified with corresponding specialized electrical properties.

###### Figure 14–2.

**A.** Manual identification of branching points within the free-running Purkinje system (Purkinje-Purkinje junctions [PPJ]) and points where the Purkinje fibers enter the myocardium (Purkinje-myocardial junctions [PMJ]) within the magnetic resonance image stack. **B.** and **C.** Corresponding delineation of the Purkinje system within the computational ventricular model. LV, left ventricle; RV, right ventricle.

Furthermore, for certain applications, it may be required to delineate regions of diseased tissue, such as infarct, from the imaging data sets in order to assign the appropriate electrophysiological properties associated with such pathological conditions.50 Infarct tissue consists of both an inexcitable scar core and a border zone, which is assumed to contain excitable but pathologically remodeled tissue. Previous experimental studies have confirmed that myocardial necrosis, the influx of inflammatory cells with a more spherical morphology, and the deposition of collagen result in less anisotropic water diffusion within the infarct region as compared with healthy myocardium.51 Although scars cannot be discriminated from healthy myocardium in anatomical MR scans, the changes in diffusion anisotropy secondary to the structural remodeling are reflected in differences in fractional anisotropy values, which can be detected with DT-MRI scans.

The image processing procedures to generate a model of an infarcted region are presented in Fig. 14–3 using ex vivo MRI and DT-MRI scans of a canine heart approximately 4 weeks after infarction.50 Using similar techniques to those described earlier,49,52 the myocardium was separated from the surrounding suspension media and segmented to obtain an accurate representation of the anatomy. Similarly, the ventricles were separated from the rest of the myocardium to account for the presence of the electrically insulating annulus fibrosus as well as to assign different electrophysiological properties (atrial vs ventricular) to the tissue on either side of this boundary. After the delineation of the ventricles, the infarct tissue is labeled. From the interpolated DT-MRI, the fractional anisotropy (FA) image is generated by computing the FA of the diffusion tensor at each voxel.53 The FA of a diffusion tensor is defined as:

$$\mathrm{FA} = \sqrt{\frac{1}{2}}\;\frac{\sqrt{(\lambda_1-\lambda_2)^2 + (\lambda_2-\lambda_3)^2 + (\lambda_3-\lambda_1)^2}}{\sqrt{\lambda_1^2+\lambda_2^2+\lambda_3^2}}$$

where λ1, λ2, and λ3 are the eigenvalues of the tensor. The FA is a measure of the directional diffusivity of water, and its values range from 0 to 1. A value of 0 indicates perfectly isotropic diffusion, and 1 indicates perfectly anisotropic diffusion. The infarct region is characterized by lower anisotropy, and therefore lower FA values, compared with the healthy myocardium51 because of the myocardial disorganization in the infarct. Based on this difference in FA values, the infarct region is separated from the normal myocardium by applying a level-set segmentation to the 3D FA image. Next, the infarct region is subdivided into two areas. The MRIs show that the infarct region has heterogeneous image intensities, reflecting the presence of two distinct regions within the infarct: a core scar, which is assumed to contain inexcitable scar tissue, and a peri-infarct zone (PZ), which is assumed to contain excitable but pathologically remodeled tissue. The two regions are separated by thresholding the structural MRI based on the intensity values of the voxels; regions of high (>75%) and low (<25%) gray-level intensities were segmented as core scar, whereas tissue of medium intensity was segmented as PZ.54,55 Figure 14–3 shows the final segmentation; orange identifies healthy myocardium, purple identifies the scar, and red identifies the PZ.
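The FA computation and the intensity-based scar/PZ split described above can be sketched as follows. The function names are hypothetical, and intensities are assumed normalized to [0, 1] so that the text's 25%/75% gray-level thresholds apply directly:

```python
import numpy as np

def fractional_anisotropy(eigvals):
    """FA of a diffusion tensor from its three eigenvalues; 0 for isotropic
    diffusion, approaching 1 for strongly anisotropic diffusion."""
    l1, l2, l3 = eigvals
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return np.sqrt(0.5 * num / den)

print(fractional_anisotropy((1.0, 1.0, 1.0)))   # isotropic → 0.0
print(fractional_anisotropy((1.0, 0.01, 0.01)))  # strongly anisotropic, near 1

def classify_infarct(intensity, in_infarct):
    """Split an infarct voxel into core scar vs peri-infarct zone (PZ) by
    gray-level intensity, per the 25%/75% thresholds in the text.
    `intensity` is assumed normalized to [0, 1]."""
    if not in_infarct:
        return "healthy"
    if intensity > 0.75 or intensity < 0.25:
        return "scar"
    return "PZ"
```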

Alternatively, instead of a numerical tag, regions of the myocardium can be assigned quantitative information in the form of the value of a function at that particular point within the model. For example, the normalized transmural distance within the myocardial wall can be used to assign myocardial fiber orientation within the bulk of the myocardial wall (described later), as well as to modulate transmural gradients in electrophysiological cell membrane properties, with similar normalized distances used to modulate apico-basal electrophysiological cell membrane parameter gradients. Such functional parametrizations can be assigned to the voxel image stack, but are more commonly assigned on a per-node or per-element basis within the finite element mesh as a postprocessing step following mesh generation, discussed later in the "Postprocessing" section.

Simulating the heart's electrical or mechanical function requires that solutions be sought to the different systems of equations used to model these aspects of cardiac behavior. In the case of tissue or whole organ simulations, the cardiac geometry in question must be discretized into a set of grid or mesh points to facilitate a numerical solution approach to these equation systems. The majority of leading cardiac electrophysiology simulators (such as the one described later under "Simulating Cardiac Bioelectric Activity at the Tissue and Organ Level") use finite element meshes as input.56 However, the construction of finite element meshes representative of the particular cardiac tissue/organ preparation is a highly nontrivial task. Previously, direct experimental measurements from ventricular preparations have provided sets of evenly spaced node points representing the external tissue surfaces, with finite elements subsequently being assigned to join up these points. Such procedures were used in the construction of the widely used Auckland pig57 and canine58 ventricular models. Recently, finite element meshes have been constructed directly from anatomic imaging modalities, such as MR. Although the exceptionally high resolution of the data sets currently being obtained can provide unprecedented insight regarding intact cardiac anatomical structure, faithfully transferring this information into a finite element mesh that is both of good quality and computationally tractable is a significant challenge.

The mesh generation software of choice used by our group is Tarantula (http://www.meshing.at; CAE-Software Solutions, Eggenburg, Austria). Tarantula is an octree-based mesh generator that generates unstructured, boundary fitted, locally refined, conformal finite element meshes and has been custom designed for cardiac modeling. The underlying algorithm is similar to a recently published image-based unstructured mesh generation technique.59 Briefly, the method uses a modified dual mesh of an octree applied directly to segmented 3D image stacks. The algorithm operates fully automatically with no requirements for interactivity and generates accurate volume-preserving representations of arbitrarily complex geometries with smooth surfaces. The smooth nature of the surfaces ensures general applicability of the meshes generated, in particular for studies involving the application of strong external stimuli, because the smooth, unstructured grids lack jagged boundaries that can introduce spurious currents due to tip effects, as is the case for structured grids. To reduce the overall computational load of the meshes, unstructured grids can be generated adaptively such that the spatial resolution varies throughout the domain. Fine discretizations with little adaptivity can be used to model the myocardium, thus minimizing undesired effects of grid granularity on propagation velocity, while coarser elements that grow in size with distance from myocardial surfaces are generated to represent a surrounding volume conductor (eg, tissue bath or torso). Using adaptive mesh generation techniques facilitates the execution of bidomain simulations with a minimum of overhead due to the discretization of a surrounding volume conductor. 
Importantly, with respect to model construction, Tarantula directly accepts the segmented voxel image stacks as input, which removes the need to generate tessellated representations of the cardiac surface, as required by Delaunay-based mesh generators.60 Avoiding this additional step is a distinct advantage in the development of a high-throughput cardiac model generation pipeline.43

The final segmented and postprocessed data set was used to generate a tetrahedral finite element mesh using the meshing software Tarantula, shown in Fig. 14–1A. In addition to meshing the myocardial volume, a mesh of the volume conductor surrounding the ventricular mesh was also produced. This extracellular mesh not only consists of the space within the ventricular cavities and the vessels, but also defines an extended bath region outside of the heart, which can be used to model an in vitro situation where the heart is surrounded by a conductive bath, such as during optical mapping experiments.61 The total mesh produced by Tarantula shown in Fig. 14–1A consisted of 6,985,108 node points, making up 41,489,283 tetrahedral elements. The intracellular mesh consisted of 4,306,770 node points making up 24,199,055 tetrahedral elements with 1,061,480 bounding triangular faces. The myocardial mesh had a mean tetrahedral edge length of 125.7 μm. The mesh took 39 minutes to generate using two Intel Xeon processors, clocked at 2.66 GHz.43

Whenever possible, classification and tagging operations are carried out at the preprocessing stage prior to mesh generation. In general, it is much simpler to devise algorithms that operate on perfectly regular image data sets than on fully unstructured finite element meshes. Further, due to the large and very active medical imaging community, a vast number of powerful image-processing techniques are readily available through specialized toolkits.52 However, a subset of operations is better applied directly to the finite element mesh, because this is the computational grid that will be used for solving the biophysical problems. Although the objects, as defined by the binarized image stack on one hand and by the computational grid on the other hand, overlap fairly accurately, inevitable differences arise along the boundaries. The boundaries in the finite element mesh are smoothed representations of the object's jagged boundaries in the voxel stack. Tags assigned on a per-voxel basis prior to mesh generation have to be mapped over to the finite element mesh. This processing step, referred to as voxel mapping, is nonstandard and, as such, typically not an integral part of standard mesh generation software. Further, by using the finite element mesh, more complex tagging strategies can be conceived based on the use of the solution of partial differential equations (PDEs). An example is given in the following section, where a Laplace problem is solved on the finite element grid. The harmonic solution to the problem, φ, is then used to provide a reference frame for assigning tags. The scalar values of φ can be used as a measure of distance, for instance, the distance to the endocardium, and ∇φ may provide a reference that indicates the transmural direction at every node in the grid. An example of a postprocessing tag assignment is illustrated in Fig. 14–4.
Here, it is shown how different regions within a ventricular finite element model may be tagged based on their locations relative to epicardial/endocardial surfaces, allowing epicardial, endocardial, and mid-myocardial regions to be defined. Depending on the tags held by different nodes within the mesh, different cellular ionic properties may be assigned to each region replicating the experimentally observed disparity in action potential dynamics seen within subepicardial/subendocardial/mid-myocardial regions of the ventricular walls. Another important postprocessing application is the regularization of DT-MRI data along the organ surfaces. DT-MRI data are very noisy and, due to partial volume effects, tend to be inaccurate close to surfaces. Such artifacts can be corrected easily by subtracting the projection of each fiber in surface elements onto the surface normal of the element from the fiber direction assigned to the element to ensure planarity of the fiber with the surface.
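The surface regularization step described above — removing the component of each fiber vector along the element's surface normal so the fiber lies in the surface plane — amounts to a simple vector projection. A minimal sketch, with illustrative vectors:

```python
import numpy as np

def regularize_surface_fiber(fiber, normal):
    """Subtract the projection of the fiber onto the surface normal,
    leaving only the in-plane component, then re-normalize to unit length."""
    n = normal / np.linalg.norm(normal)
    f = fiber - np.dot(fiber, n) * n
    return f / np.linalg.norm(f)

# Noisy fiber tilted out of an x-y surface plane (surface normal = z)
f = regularize_surface_fiber(np.array([1.0, 0.0, 0.3]), np.array([0.0, 0.0, 1.0]))
print(f)  # → [1. 0. 0.]
```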

###### Figure 14–4.

Tagging, consolidating, and assignment steps involved in the parametrization of biophysically detailed in silico models. Analysis of a ventricular finite element model (bottom left) provides information regarding a specialized intramural distance parameter (top left), which is then consolidated to delineate and tag within the mesh regions of the myocardial wall as subepicardial (1), mid-myocardial (2), and subendocardial (3) (top right). Such regions are then assigned different cellular ionic properties to allow experimentally observed variations in action potential dynamics to be reproduced (bottom right). This concept applies in a wider context and not only to the definition of transmural functional heterogeneity. Note that tags are assigned in a redundant manner to increase flexibility during the consolidation phase where new regions can be formed on the fly as a function of input parameters, allowing one to reuse in silico setups quickly while avoiding the time-consuming and error-prone tagging procedures. Parameters may be assigned on a per-node basis (as shown here for assigning models of cellular dynamics) or on a per-element basis (as suited for, eg, assigning tissue conductivities).

The specific arrangement of cardiac myocytes within the myocardium is known to strongly influence the electromechanical functioning of the heart. Specifically, the orientation of cardiac fibers is responsible for the anisotropic conduction throughout the myocardium, with electrical conduction occurring more readily along the axis of the myofiber than in the plane normal to it. Furthermore, the preferential alignment of cardiomyocytes influences the mechanical functioning of the heart, with the development of active tension that causes contraction occurring solely parallel to the fiber direction. Therefore, a realistic representation of the fibrous architecture throughout the model is paramount to faithfully simulating cardiac behavior. From histological44 and DT-MRI46 analysis of cardiac fiber architecture, it is known that cardiac fibers run primarily in a circumferential (or latitudinal) direction through the myocardial wall, with an additional inclination in the apico-basal (or longitudinal) direction. The inclination, or helix, angle (α) varies transmurally by approximately 120° from epi- to endocardium, being approximately zero in the mid wall.27,44 In the following sections, we describe the experimental techniques used to measure fiber architecture within the heart, as well as mathematical rule-based methods for incorporating fiber information into cardiac models in the absence of experimentally measured data.

Information regarding cardiac fiber orientation can be obtained experimentally from either histological analysis42,62 or DT-MRI.46 Although the exceptional resolution of the histology data sets (up to approximately 1 μm in plane43) allows fiber orientation information to be readily extracted from the images through the application of simple gradient-based image analysis filters,42 a number of problems exist. First, the physical process of cutting the sample introduces significant nonrigid deformations to the tissue. Although it is possible to use the intact geometric information obtained from a prior anatomic MR scan to register the histology slices together to recover the correct intact anatomy,63 it is still unclear whether fully 3D information regarding cardiac fiber architecture can also be recovered from these registered data sets.

More commonly, myocardial structural information is obtained from DT-MRI recordings. Despite the potential drawbacks of DT-MRI, discussed earlier, the technique is widely used to incorporate cardiac fiber orientation into computational models. There are two main ways in which DT-MRI information can be incorporated into an anatomical computational model. The first case is where the computational geometric model has been generated directly from a particular component of the DT-MRI data set; usually the intensity-based information, such as the FA or apparent diffusion coefficient values, is used. In these cases, the DT-MRI fiber vectors can simply be mapped across to the geometric finite element model using a nearest neighbor approach.64 However, the second case involves instances in which a computational model has been generated from a previously obtained anatomical scan of the same sample, usually following reembedding in a medium with a different contrast agent required for the anatomical scan. Alternatively, a computational model is used that has been generated from a different preparation entirely and, in some cases, from a different species. Here, the two data sets need to be geometrically registered together prior to mapping.
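A minimal sketch of the nearest neighbor mapping: each mesh node receives the fiber vector of its closest DT-MRI voxel center. A brute-force search is used here for clarity (a k-d tree would be used in practice for large data sets), and all arrays are toy data:

```python
import numpy as np

def map_fibers_nearest(dtmri_points, dtmri_fibers, mesh_nodes):
    """Assign each mesh node the fiber vector of the nearest DT-MRI voxel
    centre. Brute-force O(n_nodes * n_voxels) distance search."""
    # squared distance matrix: (n_nodes, n_voxels)
    d2 = ((mesh_nodes[:, None, :] - dtmri_points[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return dtmri_fibers[nearest]

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # voxel centres
fib = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # fiber vector per voxel
nodes = np.array([[0.1, 0.0, 0.0], [0.9, 0.1, 0.0]])  # mesh node positions
print(map_fibers_nearest(pts, fib, nodes))
```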

However, in certain instances, such DT-MRI data sets are unavailable for a particular heart preparation. In these cases, fiber architecture can be incorporated into the model through the use of rule-based approaches drawing on a priori knowledge of cardiac fiber structure within the heart of a particular species. The two most commonly used rule-based methods are the minimum distance algorithm and the Laplace-Dirichlet method. Both methods involve defining a smoothly varying function throughout the tissue domain and then using information regarding the variation of the function to define fiber orientation vectors. Each method is now briefly described.

The minimum distance algorithm, developed by Potse et al,27 involves computing the minimum distance of every point within the myocardium to the epicardial (*d*epi) and endocardial (*d*endo) surfaces, respectively. This is normally performed on a node-by-node basis over a finite element mesh describing the tissue geometry. Within the septum, in the absence of an epicardial surface, *d*epi can be taken to be the minimum distance to the endocardial surface of the RV, and *d*endo can be taken to be the minimum distance to the endocardial surface of the LV, replicating the apparent structural continuity of the LV and septum seen in previous imaging studies.46 With knowledge of *d*epi and *d*endo, a normalized thickness parameter *e* is computed as:

$$e = \frac{d_{\text{endo}}}{d_{\text{endo}} + d_{\text{epi}}}$$

To avoid sudden discontinuities in *e* at boundaries between regions, the value of *e* at each node is then averaged with all of its immediate neighbors (each central node of a structured mesh would have 26 neighbors). Calculation of the gradient of *e* within each tetrahedral element (∇*e*) then defines the transmural direction, u. The local circumferential direction, v, is then imposed to be perpendicular to u and the global apico-basal direction, which defines the "default" fiber vector direction with zero helix angle. To account for the widely reported transmural variation of fiber helix angle through the myocardial wall,44 a rotation of α is then made to the fiber vector about the u-axis. The most widely used functional representations of the variation of α through the wall are a linear or a cubic27 variation, which is the method adopted here, giving:

$$\alpha = R\,(1 - 2e_{\text{av}})^3$$

where *e*av was taken to be the average of the nodal values of *e* making up the tetrahedron. Due to the reportedly less pronounced variation of fiber angles in the RV compared with the LV, *R* is set equal to π/3 for the LV and septum and π/4 for the RV.27 Figure 14–1C shows the fiber vectors incorporated into the ventricular finite element model of Fig. 14–1A using the minimum distance approach.43
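The cubic helix-angle rule can be sketched as below. The definition of the normalized depth *e* = *d*endo/(*d*endo + *d*epi) is reconstructed here from the boundary behavior described in the text (α ≈ +R at the endocardium, 0 at mid wall, −R at the epicardium) and should be checked against the original reference:

```python
import numpy as np

R_LV = np.pi / 3  # helix-angle range for LV and septum (pi/4 for the RV)

def helix_angle(d_endo, d_epi, R=R_LV):
    """Cubic transmural variation of the helix angle: alpha = R*(1 - 2e)^3,
    with e the normalized wall depth (0 at endocardium, 1 at epicardium).
    NOTE: the exact form of e is an assumption reconstructed from the text."""
    e = d_endo / (d_endo + d_epi)
    return R * (1.0 - 2.0 * e) ** 3

print(np.degrees(helix_angle(0.0, 1.0)))  # endocardium: about +60 deg
print(np.degrees(helix_angle(0.5, 0.5)))  # mid wall: 0 deg
print(np.degrees(helix_angle(1.0, 0.0)))  # epicardium: about -60 deg
```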

The Laplace-Dirichlet method involves computing the solution of an electric potential φ within the tissue between two electrodes:

$$\nabla^2 \varphi = 0$$

where isotropic conduction within the tissue is assumed. Dirichlet boundary conditions specified at the electrode positions (one ground and the other fixed voltage) produce a smoothly varying potential field between the electrodes within the tissue. The resulting potential gradient vectors can then be used to assign smoothly aligned fiber vectors that align with the underlying global and local structure of the tissue. Smoothness along the boundaries is ensured because the electric field lines only originate/terminate at electrodes due to the zero-flux boundary condition specified on all other external tissue boundaries to simulate electrical isolation; thus, the field has no component perpendicular to the tissue boundary.

However, the Laplace-Dirichlet method is highly dependent on electrode configuration with respect to the specific tissue preparation. Electrodes can be placed to produce global apico-basal (longitudinal), transmural, or circumferential (latitudinal) field directions. Computation of the gradients of such fields within each finite element of the domain gives the local orthonormal basis vectors, from which the fiber vector can be assigned in a similar manner to the minimum distance approach discussed earlier.
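The Laplace-Dirichlet idea can be illustrated on a 1D transmural line: fix the potential at two "electrode" endpoints, relax Laplace's equation, and read off the gradient as the transmural direction. The Jacobi iteration below is a toy solver for illustration, not the finite element solve used in practice:

```python
import numpy as np

def laplace_dirichlet_1d(n, phi0=0.0, phi1=1.0, iters=5000):
    """Jacobi relaxation of Laplace's equation on a 1D line of n points with
    Dirichlet values at the two endpoints (the 'electrodes'). The converged
    field varies linearly between the electrodes; its gradient then provides
    a smoothly varying transmural reference direction."""
    phi = np.zeros(n)
    phi[0], phi[-1] = phi0, phi1
    for _ in range(iters):
        # numpy evaluates the RHS before assignment -> true Jacobi update
        phi[1:-1] = 0.5 * (phi[:-2] + phi[2:])
    return phi

phi = laplace_dirichlet_1d(11)
print(np.round(phi, 3))               # linear ramp from 0 to 1
print(np.round(np.gradient(phi), 3))  # uniform gradient along the line
```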

In cases where vastly more complex geometric representations of the ventricles are constructed, simple rule-based methods cannot be used to faithfully assign fiber structure to all regions of the model. Such methods have been developed based on histological analysis of fiber vector architecture within the bulk of the ventricular free wall and therefore cannot be used to define fiber vectors in structures such as the papillary muscles or trabeculations, which are included within the next generation of cardiac models (as described earlier). Within such "tubular" endocardial structures, cardiac myocytes are known to align preferentially along the axis of the structure, because mechanically, this is the direction that facilitates the most efficient contraction. In such cases, the structure tensor method, a robust method for determining localized structure based on neighboring gradient fields, can be used to define local fiber orientation.

Cardiac tissue is made up of myocytes of roughly cylindrical geometry. A myocyte is approximately 100 μm long and 10 to 20 μm in diameter. The intracellular spaces of adjacent myocytes are interconnected by specialized channels, formed by connexin proteins, referred to as gap junctions.65 Connexin expression over the cell is heterogeneous, with a higher density of gap junctions at the intercalated discs located at the cell ends (along the long axis of the cell) and a lower density along the lateral boundaries,66,67 and different connexins of varying conductance are expressed in different regions of the heart.68 As a consequence of the elongated cellular geometry and the directionally varying gap junction density, current flows more readily along the longitudinal axis of the cells than transverse to it. This property is referred to as anisotropy. Cardiac tissue is composed of two spaces: an intracellular space formed by the interconnected network of myocytes, and the cleft space between myocytes referred to as interstitial space, which is made up of the extracellular matrix and the interstitial fluid. The extracellular matrix consists of networks of collagen fibers, which determine the passive mechanical properties of the myocardium. Electrically, it is assumed that the preferred directions are co-aligned between the two spaces but that the conductivity ratios between the principal axes are unequal between the two domains.38,69-71 All parameters influencing the electrical properties of the tissue, such as the density and conductance of gap junctions, cellular geometry, orientation and cell packing density, and the composition of the interstitial space, are heterogeneous and may vary at all size scales, from the cellular level up to the organ. As a consequence, the direction and speed of a propagating electric wave are constantly modified by interactions with discontinuous spatial variations in material properties at various size scales.
At a finer size scale below the space constant, λ, of the tissue (ie, <1 mm), the tissue is best characterized as a discrete network in which the electrical impulse propagates in a discontinuous manner.72,73 At a larger more macroscopic size scale (>> λ), the tissue behaves as a continuous functional syncytium where the effects of small-scale discontinuities are assumed to play a minor role.

Theoretically, the idea of modeling an entire heart by using models of a single cell as the basic building block is conceivable. However, the associated computational costs are prohibitive with current computing hardware because a single heart consists of roughly 5 billion cells. Although a few high-resolution modeling studies have been conducted where small tissue preparations were discretized at a subcellular resolution,38,72,74 in general, the spatial discretization steps were chosen based on the spatial extent of electrical wave fronts and not on the size scales of the tissue's microstructures. For this reason, cardiac tissue is treated as a continuum for which appropriate material parameters have to be determined that translate the discrete cellular matrix into an electrically analogous macroscopic representation. In principle, this is achieved by averaging material properties over suitable length scales such that both potential and current solutions match between the homogenized and discrete representations. A rigorous mathematical framework for this procedure is provided by homogenization theory, which has been applied by several authors to the bidomain problem.15,75-77 Homogenization is a two-step process: first, the intracellular and interstitial domains are each homogenized; then, the two respective domains are spread out and overlapped to fill the entire tissue domain. This concept of interpenetrating domains states that the intracellular space, the extracellular space, and the cellular membrane coexist everywhere within the entire myocardial volume.

The bidomain equations78 state that currents that enter the intracellular or extracellular spaces by crossing the cell membrane represent the sources for the intracellular, Φ*i*, and extracellular potentials, Φ*e*:

$$\nabla \cdot (\sigma_i \nabla \Phi_i) = \beta I_m$$

$$\nabla \cdot (\sigma_e \nabla \Phi_e) = -\beta I_m - I_e$$

$$I_m = C_m \frac{\partial V_m}{\partial t} + I_{\text{ion}}(V_m, \eta) - I_{\text{tr}}$$

where σ*i* and σ*e* are the intracellular and extracellular conductivity tensors, respectively; β is the membrane surface-to-volume ratio; *I**m* is the transmembrane current density; *I*tr is the current density of the transmembrane stimulus; *I**e* is the extracellularly applied current density; *C**m* is the membrane capacitance per unit area; *V**m* is the transmembrane voltage; and *I*ion is the density of the total current flowing through the membrane ionic channels, pumps, and exchangers, which in turn depends on the transmembrane voltage and a set of state variables η. At the tissue boundaries, electrical isolation is assumed, which is accounted for by imposing no-flux boundary conditions on Φ*e* and Φ*i*.

In those cases where cardiac tissue is surrounded by a conductive medium, such as blood in the ventricular cavities or a perfusing saline solution, an additional Poisson equation has to be solved:

$$\nabla \cdot (\sigma_b \nabla \Phi_e) = -I_e$$

where σ*b* is the isotropic conductivity of the conductive medium and *I**e* is stimulation current injected into the conductive medium. In this case, no-flux boundary conditions are assumed at the boundaries of the conductive medium, whereas continuity of the normal component of the extracellular current and continuity of Φ*e* are enforced at the tissue-bath interface. The no-flux boundary condition for Φ*i* remains unchanged.

Equations 14–6 and 14–7 are referred to as the parabolic-parabolic form of the bidomain equations; however, the majority of solution schemes are based on the elliptic-parabolic form, which is found by adding Equations 14–6 and 14–7 and replacing Φ*i* by *V**m* + Φ*e*.79 This yields:

$$\nabla \cdot \left[(\sigma_i + \sigma_e)\,\nabla \Phi_e\right] = -\nabla \cdot (\sigma_i \nabla V_m) - I_e$$

$$\nabla \cdot (\sigma_i \nabla V_m) = -\nabla \cdot (\sigma_i \nabla \Phi_e) + \beta I_m$$

where *V**m* and Φ*e*, the quantities of primary concern, are retained as the independent variables. Note that Equation 14–12 treats the bath as an extension of the interstitial space.

Although the use of a bidomain formulation is preferred when effects due to extracellular stimulation,80 defibrillation,81,82 or bath loading83,84 are under investigation, for studying wave propagation phenomena in cardiac tissue, the monodomain equations are by far the more popular choice due to the substantially lower computational costs. Differences between the two formulations are, relative to the inaccuracies of the overall system, fairly small.27 Assuming that σ*i* and σ*e* are related by a scalar, *v*:

$$\sigma_e = v\,\sigma_i$$

the bidomain equations can be reduced to the monodomain equations:

$$\nabla \cdot (\sigma_m \nabla V_m) = \beta\left(C_m \frac{\partial V_m}{\partial t} + I_{\text{ion}} - I_{\text{tr}}\right), \qquad \sigma_m = \frac{v}{1+v}\,\sigma_i$$

$$\nabla \cdot \left[(\sigma_i + \sigma_e)\,\nabla \Phi_e\right] = -\nabla \cdot (\sigma_i \nabla V_m)$$

where Equation 14–16 does not need to be solved for simulating wave propagation. However, it is widely accepted that the assumption of Equation 14–14 does not reflect biological reality. There is a large variation in reported measurements of anisotropy, ranging between 5.7 and 10.8 for α*i* = σil/σit in the intracellular domain and between 1.5 and 2.6 for α*e* = σel/σet in the extracellular domain.71 However, all studies unanimously agree that the anisotropy ratios of the two domains differ, with α*i*/α*e* ranging from 3.5 to 6.3 (ie, there is no experimental support for Equation 14–14). Furthermore, the existence of unequal anisotropy ratios is indirectly supported by the experimental observation of virtual electrodes, for which the presence of unequal anisotropy ratios, according to bidomain theory, is a necessary condition. For instance, it was theoretically postulated that the delivery of a strong unipolar point-like stimulus85 leads to the formation of a very peculiar polarization pattern referred to as a "dog bone." This prediction was experimentally confirmed by numerous labs only a few years later.

Further insights into the relationship between the two formulations are gained by considering a 1D strand. In this case, the conductivity tensors degenerate to scalar values, and bidomain and monodomain formulations are fully equivalent with *v* = σeζ/σiζ, where ζ denotes an eigendirection of the conductivity tensor.38 This immediately reveals that *V**m* and Φ*e* are linked by the voltage divider relationship:

$$\Phi_e = -\frac{\sigma_{i\zeta}}{\sigma_{i\zeta} + \sigma_{e\zeta}}\,V_m = -\frac{V_m}{1+v}$$

Using the same choice of *v* = σeζ/σiζ in a 3D model along the direction ζ leads to monodomain conductivities that are the harmonic mean between the two spaces along ζ. That is:

$$\sigma_{m\zeta} = \frac{\sigma_{i\zeta}\,\sigma_{e\zeta}}{\sigma_{i\zeta} + \sigma_{e\zeta}}$$

In this case, monodomain and bidomain formulations are equivalent in the sense that both models yield the same conduction velocities along the principal axes, which, in turn, results in very similar activation patterns. Minor deviations are expected when wave fronts are propagating in any other direction, which results in slightly different activation patterns. Differences between monodomain and bidomain simulations can be further reduced by finding an optimal *v* at each time step.86
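The harmonic-mean rule and the 1D voltage divider can be illustrated numerically. The conductivity values below are illustrative bulk values in S/m, not measurements quoted in this chapter:

```python
def monodomain_conductivity(sigma_i, sigma_e):
    """Harmonic mean of intra- and extracellular conductivities along one
    eigendirection; this choice makes the monodomain conduction velocity
    match the bidomain one along that axis."""
    return sigma_i * sigma_e / (sigma_i + sigma_e)

def phi_e_1d(v_m, sigma_i, sigma_e):
    """Voltage-divider relationship on a 1D strand: Phi_e is a fixed
    negative fraction of Vm set by the conductivity ratio."""
    return -v_m * sigma_i / (sigma_i + sigma_e)

# Illustrative (hypothetical) conductivities along/across fibers, in S/m
sig_il, sig_it = 0.17, 0.019
sig_el, sig_et = 0.62, 0.24
print(round(monodomain_conductivity(sig_il, sig_el), 4))  # longitudinal
print(round(monodomain_conductivity(sig_it, sig_et), 4))  # transverse
print(round(phi_e_1d(100.0, sig_il, sig_el), 2))          # Phi_e for Vm = 100 mV
```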

The His-Purkinje system (PS) is responsible for the rapid activation of the ventricles. The PS starts at the atrioventricular node as the bundle of His, extends toward the apex, and divides into the left and right Tawara branches. The branches emerge on the endocardium, continue toward the apex, and then turn and travel back toward the base (Fig. 14–5). Along the way, branches extend into the myocardium and become finer, being called Purkinje fibers at that point. The Purkinje fibers travel a certain transmural distance, which is species dependent,87 before terminating in electrical junctions with the myocardium called Purkinje-myocyte junctions (PMJs). Fibers of the PS are covered in a collagenous sheath that electrically isolates the PS from the myocardium along or through which it runs.88 Subendocardially, the PS may form a mesh-like network such that it cannot be described as an arborized structure.88 Although the structure may appear dense, the PMJs are discrete and more sparsely distributed.88

Conduction within the PS is very rapid, being about four times faster than that through myocardium. Anterograde propagation of impulses from the PS to the myocardium is much slower than retrograde propagation, which has also been shown to occur.

To construct the Purkinje network, a conformal approach was undertaken, meaning that the nodes representing the Purkinje network were a subset of those defining the ventricles. The ventricular endocardial surfaces were extracted, resulting in two triangularly meshed surfaces. The surfaces were flattened onto a plane using a multidimensional scaling technique,89 which attempted to preserve geodesic distances, as computed by the classic Dijkstra algorithm.90 A conduction system was manually laid out on each of the endocardial surfaces, based on schematics and pictures of the conduction system, text descriptions of heart anatomy, and excitation mappings of the heart,34,91-94 taking care to preserve major features.
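The geodesic distances that the flattening step tries to preserve are computed with Dijkstra's algorithm over the edge graph of the triangulated endocardial surface. A minimal sketch on a toy graph (edge lengths are illustrative):

```python
import heapq

def dijkstra(adj, source):
    """Classic Dijkstra shortest-path search over a weighted graph given as
    {node: [(neighbour, edge_length), ...]}; returns geodesic distances from
    the source to every reachable node."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Tiny triangle-mesh edge graph: the direct 0-2 edge is longer than 0-1-2
adj = {0: [(1, 1.0), (2, 2.5)], 1: [(0, 1.0), (2, 1.0)], 2: [(0, 2.5), (1, 1.0)]}
print(dijkstra(adj, 0))  # → {0: 0.0, 1: 1.0, 2: 2.0}
```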

For each ventricle, a single cable started at the base and proceeded apically. Daughter branches were added, which ran toward the areas of early activation. The major bifurcation points were chosen to roughly correspond to published descriptions. More cables were added to provide better coverage for activation. The 2D representation of the Purkinje network was then mapped back to 3D space, the left and right Purkinje networks joined at a common point of origin, and a segment representing the bundle of His was added.

The end points of the Purkinje networks (ie, those points that make electrical contact with, and activate, the ventricular myocardium) were inserted into the ventricular wall. Each cable end was extended into the myocardium by appending elements. The new elements made an angle of approximately 45° with the surface tangent, to avoid sharp discontinuities, and continued for several hundred micrometers into the tissue.

Assume that the extracellular potential, Φ*e*, is due to the bulk myocardium only (ie, we ignore the field produced by the PS).

The governing parabolic equation is given by:

where σ*i**P* is the equivalent intracellular conductivity that homogenizes cytoplasmic and gap junctional resistances. The DiFrancesco-Noble model of the Purkinje cell95 was used, with the maximum sodium conductance increased by a factor of 3 compared with the published values. Although studies usually increase the sodium conductance by a factor of 1.5,96,97 it was further augmented here to increase the propagation velocity and obtain an upstroke that better agrees with experimental measurements.98 Alternative ionic models have recently been developed.99

The current flowing longitudinally along each cell is given by:

where ρ is the radius of the Purkinje fiber, and σ*i* is the intracellular conductivity. To determine *V**m*, we recognize that the cells are connected by gap junctions, which are represented as resistors (Fig. 14–6). Φ*i* is defined as the potential midway along the gap junction, and *V**m* at either end of the junction can be expressed using the longitudinal current:

where *r**gj* is the gap junction resistance. Thus, after determining *V**m* at the ends of each cell along the cable, we split the operator to isolate the ionic current term and update *V**m* based on ionic current flow. Φ*i* and *i**L* were then recomputed at each junction from:

where Φ*e* was computed on the myocardial grid.

###### Figure 14–6.

The intracellular potential in the gap junction is denoted by Φ*i* at the midpoint, Φ*i*− on the left, and Φ*i*+ on the right. The gap junction length is so small that the extracellular potential, Φ*e*, is constant across the junction. The current flowing through the junction of resistance *r**gj* is *i**L*.
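The quantities in Fig. 14–6 can be written down directly: with Φ*e* constant across the short junction, the junction current follows from Ohm's law and the midpoint potential from a linear potential drop. A small sketch; the sign convention is an assumption:

```python
def junction_update(phi_i_minus, phi_i_plus, phi_e, r_gj):
    """Gap junction quantities of Fig. 14-6: Ohmic current through the
    junction, midpoint intracellular potential (linear drop), and the
    transmembrane potentials at either end. Signs are an assumed
    convention (positive i_L flows left to right)."""
    i_L = (phi_i_minus - phi_i_plus) / r_gj
    phi_i_mid = 0.5 * (phi_i_minus + phi_i_plus)
    vm_minus = phi_i_minus - phi_e   # Vm = phi_i - phi_e, with phi_e
    vm_plus = phi_i_plus - phi_e     # constant across the short junction
    return i_L, phi_i_mid, vm_minus, vm_plus

i_L, phi_mid, vm_m, vm_p = junction_update(10.0, 4.0, -2.0, 3.0)
```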

To properly take spatial gradients into account, a 1D cubic Hermite formulation was used. This assumed a radially constant solution, which is justified given how small the radius is compared with the space constant. By using Equation 14–20, gradients were converted to currents. We also required the spatial and temporal derivatives of the extracellular potential. At each bifurcation, current conservation was explicitly enforced. The PS is coupled to the bulk myocardium through PMJs, which are represented as fixed resistances (*R*PMJ). At the cable ends, current flow through the PMJ is enforced as a boundary condition. For a cable end at *x**e*:

where Φ*i**M*(*x*) is an intracellular potential in the bulk myocardium, and *k* is the set of indices of the myocytes to which the Purkinje cell is connected. From the perspective of the myocardium, the current flowing through the end of the PMJ was an intracellular current injection.

Various temporal discretization schemes have been applied to solve the cardiac bidomain equations. Initially, fully explicit schemes were popular due to their ease of implementation, but their utility was limited by the severe restrictions imposed on the maximum possible time step, Δ*t*, particularly when finer spatial discretizations, *h* < 100 μm, were considered. The maximum stable time step as a function of *h* is dictated by the Courant-Friedrichs-Lewy (CFL) condition. An approximation of the CFL condition can be derived by assuming equal anisotropy ratios (σ*i* = νσ*e*) and straight fibers:

where σ☆ = σ*i*☆σ*e*☆/(σ*i*☆ + σ*e*☆), (☆ = *l, t*). Despite this limitation, fully explicit schemes are still widely used, particularly with coarser meshes where accuracy, rather than stability, is the limiting factor. Moreover, fully explicit methods are quite attractive in the context of large-scale simulations where parallel scalability plays an important role: forward methods require only matrix-vector products, which scale very well up to high processor counts.
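The practical consequence of the CFL-type restriction, with Δ*t* scaling as *h*², is easy to demonstrate on a 1D diffusion problem solved with forward Euler; the diffusivity and grid below are arbitrary illustrative values:

```python
import numpy as np

def diffuse_fe(u0, D, h, dt, steps):
    """Forward Euler for u_t = D u_xx with zero-flux ends, to illustrate
    the CFL-type stability limit dt <= h**2 / (2*D)."""
    u = u0.copy()
    for _ in range(steps):
        lap = np.empty_like(u)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        lap[0] = u[1] - u[0]       # zero-flux boundaries
        lap[-1] = u[-2] - u[-1]
        u = u + dt * D * lap / h**2
    return u

h, D = 0.01, 1.0
dt_max = h**2 / (2.0 * D)          # stability limit for this stencil
u0 = np.zeros(50); u0[25] = 1.0
stable = diffuse_fe(u0, D, h, 0.9 * dt_max, 400)
unstable = diffuse_fe(u0, D, h, 1.1 * dt_max, 400)
```

With the time step 10% above the limit, the highest spatial modes are amplified at every step and the solution blows up, whereas just below the limit the discrete maximum principle keeps the solution bounded.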

Fully implicit methods lift restrictions on time stepping imposed by stability constraints, allowing much larger time steps than explicit methods. With this class of methods, the choice of time step is governed by accuracy considerations. Several authors suggested fully implicit numerical schemes for solving the bidomain equations100; however, only simplified membrane models were used. Although it is perfectly feasible to use fully implicit time discretizations with detailed physiological cellular models, the increase in overall system size and complexity is dramatic, and the solvers required are extremely expensive due to the large nonlinear system arising at each time step. An alternative approach to circumvent this problem has been recently proposed,101 where the ordinary differential equation (ODE) gating variables were decoupled from the PDE. The scheme was classified as fully implicit with decoupling. The underlying rationale is based on the observation that the Jacobian of a fully implicit discretization at one node of the mesh is strongly dominated by the diagonal entries, suggesting that the coupling between gating variables and potentials is weak.

Implicit-explicit (IMEX) schemes are the most widely used discretizations because they represent, from a practical point of view, a very good trade-off between stability, ease of implementation, and computational efficiency. Quite often, IMEX methods are combined with operator-splitting techniques, which is appealing from a software engineering perspective because the problem can be broken down into smaller subproblems. Moreover, on theoretical grounds, it is expected that solving two smaller problems of size *N* is computationally more efficient than solving a larger problem of size 2*N*. In real implementations, this is not necessarily the case though.102 The scheme preferred by our labs103 is based on discretizing Equations 14–12 and 14–13. An operator-splitting technique is applied to separate the ODE system from the PDEs (Equations 14–12 and 14–13) using a θ-rule.104-106 Depending on the choice of θ, either a first-order accurate Godunov scheme or a second-order accurate Strang splitting scheme can be derived.107 Solutions of the PDEs are then found by leap-frogging between the decoupled components where either *V**m* in Equation 14–12 or Φ*e* in Equation 14–13 is considered as constant. This leads to a three-step scheme, which involves solving a parabolic PDE, an elliptic PDE, and a nonlinear system of ODEs at each time step:

where *A*ζ is the discretized −∇·(σζ∇)/(β*Cm*) operator, with ζ being either *i* or *e*; Δ*t* is the time step; and *V**k*, Φ*e**k*, and η*k* are the temporal discretizations of *V**m*, Φ*e*, and η, respectively, at the time instant *k*Δ*t*.
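The three-step scheme can be sketched in one dimension. The sketch below uses simplified cubic (FitzHugh-Nagumo-like) kinetics in place of a physiological ionic model and illustrative conductivities; the elliptic operator is regularized because Φ*e* is defined only up to an additive constant:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Grid, time step, and illustrative conductivities (arbitrary units)
n, h, dt = 200, 0.01, 0.005
si, se, beta_cm = 1.0, 2.0, 1.0

# 1D Laplacian with zero-flux ends; A_zeta = -sigma_zeta * lap / (beta * Cm)
main = -2.0 * np.ones(n); main[0] = main[-1] = -1.0
lap = sp.diags([np.ones(n - 1), main, np.ones(n - 1)], [-1, 0, 1], format='csr') / h**2
Ai = -(si / beta_cm) * lap
Ae = -(se / beta_cm) * lap
I = sp.identity(n, format='csr')

# Prefactored solvers: parabolic system and regularized elliptic system
solve_par = spla.factorized((I + dt * Ai).tocsc())
solve_ell = spla.factorized((Ai + Ae + 1e-8 * I).tocsc())

vm = -1.0 * np.ones(n); vm[:20] = 1.0    # stimulated region
w = np.zeros(n)                           # recovery variable
phie = np.zeros(n)

for _ in range(200):
    # 1) parabolic step: implicit in Vm, phi_e held constant
    vm = solve_par(vm - dt * Ai.dot(phie))
    # 2) elliptic step: (Ai + Ae) phi_e = -Ai Vm, Vm held constant
    phie = solve_ell(-Ai.dot(vm))
    # 3) ODE step: forward Euler on the cubic surrogate kinetics
    ion = vm - vm**3 / 3.0 - w
    w = w + dt * 0.1 * (vm + 0.5 - 0.8 * w)
    vm = vm + dt * ion
```

Prefactoring both linear systems outside the time loop amortizes the factorization cost, which is practical here because the operators are constant in time.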

Recent models of cellular dynamics account for increasingly more physiological detail, which increases both the overall number of state variables and the stiffness of the nonlinear system of ODEs. Evaluating the vector-valued function *g*(*Vm*,η), where η typically comprises several tens of state variables, and solving the set of ODEs robustly and efficiently remains an error-prone challenge in any cardiac modeling endeavor. Equation 14–28 is written in a semidiscretized form as a symbolic indication that the state vector η is being updated. A host of methods exists, and the choice of the optimal method depends largely on the cellular model under consideration. Standard integration techniques include explicit (or forward) and implicit (or backward) methods. Explicit methods are popular because they are easy to implement; however, the basic forward method is only first-order accurate, which often results in insufficient accuracy. Approaches to overcome this weakness include the use of several previously computed solutions (multistep methods) or additional intermediate solutions in the interval [*t*, *t* + Δ*t*] (Runge-Kutta methods). The more sophisticated implicit backward methods, such as the backward differentiation formula (BDF) or implicit Runge-Kutta methods (eg, Rosenbrock methods108), have superior stability properties and allow larger time steps; however, they are computationally expensive, and in general, robust implementation of these methods is difficult. BDF methods require the iterative solution of a system of equations, by either fixed-point methods or variants of the Newton-Raphson method, which requires that Jacobian matrices are either analytically determined and repeatedly evaluated or numerically approximated.109 Many currently used integrators incorporate advanced features such as variable time stepping and error control, where a time step is chosen such that the local error per step is below a prescribed tolerance level.

Moreover, apart from the choice of integration rule, two main approaches can be distinguished: (1) standard ODE integration techniques that update the entire state vector η in one step by solving a nonlinear system; or (2) a component-wise integration approach, where the state vector η is split into its components and each single state variable η*i* is updated sequentially, with all other state variables η*j* (*j ≠ i*) held constant. Many of the ODEs comprising an ionic model, typically all gating equations in Hodgkin-Huxley–type models, but also some ODEs in Markov-state formulations, can be written in the form:

where the remaining state variables form a subvector that is held constant when updating η*i*. When using a component-wise integration approach, all variables belonging to this subvector are considered constant during an integration step. Hence, the nonlinear ODE reduces to a linear ODE with constant coefficients, for which an analytic solution is given by:

Numerical analysis revealed that using this analytical expression to evolve the solution to the next time step, *t* + Δ*t*, offers significant stability and accuracy benefits over the forward Euler method.110 Although many of the integration techniques summarized above have been used for solving Equation 14–28 in single cell modeling studies, in the context of tissue level simulations the technique of using the analytical solution, originally proposed by Rush and Larsen,111 is most widely used. Those portions *g**i* of the vector function *g*(*Vm*,η) that cannot be written in a linear form need to be integrated in the fully nonlinear form. In the original Rush-Larsen formulation, the forward Euler method was applied to solve the fully nonlinear form. The numerical realization of Equation 14–28 following the Rush-Larsen approach can be written as:

where η*f* is the linear and η*s* the nonlinear portion of the state vector η. In general, the very fast acting variables that render the system stiff, such as gating variables, are in the subvector η*f*, whereas slower acting variables, such as ionic fluxes, are in the subvector η*s*. The Rush-Larsen scheme can be implemented to achieve high computational efficiency using various acceleration techniques. Because the physiological range of variables is well known a priori, expensive terms in the analytical solution can be conveniently precomputed and then simply looked up when required. With this scheme, one state variable can be updated with one table lookup operation, one multiplication, and one sum operation only.112
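A minimal sketch of the table-accelerated Rush-Larsen update for a single Hodgkin-Huxley-type gate; the rate functions below are illustrative, not taken from a published ionic model. Writing the exact exponential update as η ← *b*(*Vm*)·η + *a*(*Vm*), with *b* = exp(−Δ*t*/τ) and *a* = η∞(1 − *b*), reduces each update to one table lookup, one multiplication, and one addition:

```python
import numpy as np

def eta_inf(vm):
    """Steady-state value of the gate (illustrative sigmoid)."""
    return 1.0 / (1.0 + np.exp(-(vm + 40.0) / 7.0))

def tau(vm):
    """Voltage-dependent time constant in ms (illustrative)."""
    return 0.1 + 1.0 / (1.0 + np.exp((vm + 40.0) / 10.0))

dt = 0.02                                   # ms
v_table = np.arange(-100.0, 60.0, 0.1)      # physiological Vm range, mV
b_table = np.exp(-dt / tau(v_table))        # precomputed exp(-dt/tau)
a_table = eta_inf(v_table) * (1.0 - b_table)

def rush_larsen_step(eta, vm):
    """Exact exponential update eta <- b*eta + a: one lookup, one
    multiplication, one addition per state variable."""
    idx = np.clip(np.rint((vm - v_table[0]) / 0.1).astype(int), 0, len(v_table) - 1)
    return b_table[idx] * eta + a_table[idx]

# Relax the gate at a clamped Vm; it converges to eta_inf(Vm)
eta = np.zeros(1)
vm = np.full(1, -20.0)
for _ in range(2000):
    eta = rush_larsen_step(eta, vm)
```

Because *b* ∈ (0, 1), the update is unconditionally stable and η remains within [0, 1] for any Δ*t*, in contrast to forward Euler, which becomes unstable for Δ*t* > 2τ.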

Several suggestions have been made to further improve the method.110 First, for those models where fast-acting variables are in η*s*, potential stability issues can be dealt with by using implicit methods.113 Second, the integration in the component-wise form introduces an approximation error that renders the Rush-Larsen technique first-order accurate only. Recently, an important extension has been proposed that overcomes this limitation by applying a local linearization of nonlinear terms in combination with the analytical solution of the quasi-linear ODEs to obtain a second-order accurate numerical scheme.114

Despite the numerous benefits and the computational efficiency of accelerated Rush-Larsen techniques, for some more recent ionic models,115 the performance may be insufficient for organ-level simulations due to the extreme stiffness of cellular subsystems such as the interactions of L-type calcium currents with ryanodine receptors. With these models, the range of time scales involved is vast, from processes occurring within a fraction of a microsecond up to processes occurring at a scale of tens of seconds. The wide range of time scales involved can be exploited in a systematic manner to devise more efficient integration schemes, where different components of the state vector are decoupled and grouped as a function of the multiple time scales involved. Each group is then integrated with the method and the time step that is appropriate for the respective time scale and stiffness of the group. Processes occurring at the tissue- and organ-level scale are unaffected by the temporal multiscale decoupling at the ODE solver stage because global quantities that link the two scales are updated synchronously at the time step chosen at the tissue level to fulfill stability and accuracy requirements.112
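The multirate idea can be illustrated with a toy two-variable system: a stiff fast variable is sub-stepped inside each global step of a slow variable, so the global step is set by the slow dynamics rather than by the stiffest time constant. The grouping and time constants are illustrative assumptions:

```python
import numpy as np

# Illustrative two-time-scale system: d(fast)/dt = (slow - fast)/tau_f,
# d(slow)/dt = -slow/tau_s, with tau_f << tau_s.
tau_f, tau_s = 0.01, 10.0      # ms
dt, substeps = 0.1, 20         # global step 0.1 ms; fast group uses dt/substeps

def step(fast, slow, dt, substeps):
    for _ in range(substeps):               # fast group: small explicit steps
        fast += (dt / substeps) * (slow - fast) / tau_f
    slow += dt * (-slow / tau_s)            # slow group: one large step
    return fast, slow

fast, slow = 0.0, 1.0
for _ in range(100):                        # advance to t = 10 ms
    fast, slow = step(fast, slow, dt, substeps)
```

A single-rate explicit scheme for this system would be limited to Δ*t* < 2τ*f* = 0.02 ms for the entire state vector; the multirate loop advances globally at 0.1 ms while only the fast group is sub-stepped.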

Various spatial discretization techniques have been applied to the cardiac bidomain problem, most notably the finite difference method (FDM),27,116 the finite volume method (FVM),20,21 and the finite element method (FEM),18,19 although other nonstandard techniques, such as the interconnected cable model, have been used successfully as well.11,117 In general, the FDM is easiest to implement, but the method does not accommodate complex boundaries as naturally as the FEM and the FVM do. Although suggestions were made to overcome this limitation by using the phase-field approach16 or other generalizations,17,118 the FDM loses its most appealing advantage, the ease of implementation. FEM and FVM are both very well suited for spatial discretizations of complex geometries with smooth representations of the boundaries, which is a key feature when polarization patterns induced via extracellularly applied currents are to be studied. Both FVM and FEM have been used to model electrical activity in anatomically realistic models of the atria29-31,119 and the ventricles.26-28,43 Mesh generation requirements are similar for both techniques; that is, the domain of interest has to be tessellated into a set of nonoverlapping and conformal geometric primitives. With the FVM, quadrilaterals in 2D20 and hexahedral elements in 3D21,29 have been preferred, whereas with the FEM, triangles and quadrilaterals were used in 2D and tetrahedral43 or hexahedral elements were used in 3D.31,101 Typically, monolithic meshes consisting of one element type only were used with one exception,59 where hybrid meshes consisting of tetrahedra, hexahedra, pyramids, and prisms were used. Further, most FEM studies relied on the Galerkin FEM where linear test functions with tetrahedral elements,19,120,121 isoparametric trilinear test functions with hexahedral elements,101 or cubic Hermite hexahedral elements18,122 were used.

Independently of the spatial discretization technique, the choice of space step, *h*, is of major importance. It has been known since very early modeling studies that the solution of the bidomain equations depends, to a certain degree, on *h*, even with very fine spatial discretizations.123 This sensitivity has to be attributed to the nonlinearity and stiffness of the reaction term, which entails an extremely fast upstroke of the cardiac action potential, lasting approximately 1 ms only. When propagating, a fast upstroke in time translates into a steep wave front in space. Depending on tissue conductivity and cellular excitability, physiological conduction velocities range between 0.2 and 0.7 m/s within the myocardium, which translates an upstroke duration of 1 ms into a wave front that extends 200 to 700 μm in space. Under pathological conditions where tissue conductivity and/or excitability is reduced, conduction velocity may be substantially slower, leading to wave fronts whose spatial extent may be even below 100 μm. The spatial extent of a wave front along a direction ζ is proportional to the space constant, λζ:

It has been shown that for sufficiently fine effective discretizations, *H*ζ = *h*ζ/λζ < 0.15, solutions converge with deviations in conduction velocity of <1%.123 In practice, a trade-off has to be made between accuracy and computational tractability when choosing *h*, or an average discretization for unstructured grids. With coarse discretizations, *h* > 500 μm, and physiologically realistic models of cellular dynamics, simulations deviate substantially from results obtained at finer resolutions. Conduction velocities on such coarse grids are underestimated by 25% or more, and conduction block may even occur as a numerical artifact of spatial undersampling.
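The resolution requirement follows from simple arithmetic: the wave front spans roughly the conduction velocity times the upstroke duration, and *h* must resolve a small fraction of that extent. A back-of-the-envelope check:

```python
def wavefront_extent_um(cv_m_per_s, upstroke_ms=1.0):
    """Spatial extent of the depolarization wave front in micrometers:
    conduction velocity (m/s) times upstroke duration (ms)."""
    return cv_m_per_s * upstroke_ms * 1e3

# Physiological conduction velocities of 0.2-0.7 m/s and a ~1 ms upstroke
# give fronts of 200-700 um; slow conduction narrows the front further.
fast_front = wavefront_extent_um(0.7)
slow_front = wavefront_extent_um(0.2)
pathologic_front = wavefront_extent_um(0.05)  # illustrative slow conduction
```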

Current research in bidomain modeling aims at enabling electrophysiologists to perform in silico experiments with biophysically detailed models of the heart in the same way as in vitro experiments are performed in the wet lab. Due to the nature of the phenomenon, which requires fine spatiotemporal resolutions and long observation periods, in silico experimentation is computationally quite challenging. Typically, systems of 1 to 50 million degrees of freedom are considered, and the solution has to be evolved over several tens of thousands of time steps. Although bidomain simulations can be executed easily on current standard desktop computers, the problem sizes that can be tackled are restricted to smaller preparations, with 10⁶ degrees of freedom being roughly the upper limit beyond which simulations quickly become intractable. Anatomically detailed models of the entire heart, when discretized, lead to much larger problem sizes. The ultimate goal is to perform fine-grained exploration of parameter spaces with in silico experiments, where trade-offs between anatomical and functional detail in the model on the one hand and execution times on the other can be relaxed. Extremely efficient and complex software stacks are required to deliver the computational performance that enables this kind of research. The Cardiac Arrhythmia Research Package (CARP) simulator is considered one of the most efficient and versatile codes for biophysically detailed simulations of cardiac electrophysiology; nonetheless, performance lags real time by roughly a factor of 10³ or 10⁴ when running monodomain or bidomain simulations, respectively. This is clearly a limiting factor for certain applications, for instance, when simulations have to be integrated into a clinical workflow.

Performance improvements by at least one to two orders of magnitude are sought to leverage computational modeling as a complement to experimental or clinical techniques by enabling quick simulation and data analysis cycles. Two strategies are currently being investigated to provide the required performance boost. One school of thought is to implement spatiotemporally adaptive algorithms that dramatically reduce the computational burden by reducing the overall degrees of freedom in the system and by increasing time steps in low-gradient regions where tissue is quiescent or in the plateau and repolarization phases. This is achieved by dynamically refining the grid as a function of current activity, with very fine resolution along the propagating wave fronts and rather coarse resolution elsewhere. The alternative approach is parallelization, where domain decomposition techniques are used to split the overall problem into smaller subdomains and each subdomain is solved for in parallel. In the ideal case, computations are sped up by the number of computational cores, *N**p*, engaged. Unfortunately, it is not straightforward to achieve such strong scalability (ie, scalability where the execution time is reduced linearly with *N**p*).

From an algorithmic point of view, weak scalability is an important requirement for achieving good parallel performance with increasing *Np*. During a weak scalability test, the problem size is increased with *Np* such that the computational load per core remains constant. With optimal methods, weak scaling can be achieved, meaning that the execution time remains constant while problem size and *Np* increase. Multigrid and multilevel preconditioners with weak scalability properties have been published, and the scalability has been demonstrated in implementations for problem sizes of up to 135 million degrees of freedom on 2048 cores.101 Weak scalability does not necessarily imply strong scalability, because computational science aspects, which strongly depend on the specifications of the particular hardware, play an important role. It can be expected that strong scalability will always break down at a sufficiently high *N**p*, when the ratio between computational load per core and the inevitable communication costs becomes less favorable; however, weakly scaling implementations are expected to scale strongly as well over a certain *Np* range.

Current codes scale well up to a moderate number of cores, in the range of 64 to 256, on current HPC hardware, and the available memory spaces are huge, in the range of terabytes. The current trend in the computing industry, which is moving toward many-core architectures at a fast pace, favors the parallelization approach, with classical sequential desktop computing, with or without spatiotemporal adaptivity, likely to lose relevance. This trend is poised to have a major impact on the design of future algorithms because parallelization and cache performance aspects are of primary concern. Future desktop computers will be very similar to HPC architectures, and implementations and algorithms that do not account for this paradigm shift will not deliver the high performance required for future in silico modeling studies.

The choice of methods depends largely on the problem sizes considered and the available computational resources. In general, independently of a particular algorithm or implementation, the main computational burden associated with solving the bidomain equations can be attributed to the solution of the elliptic PDE when operator splitting is applied to the diffusion part of the equation and to the set of ODEs. Typically, with simple ionic models, the elliptic problem amounts to more than 90% of the overall workload, whereas with recent ionic models involving very stiff ODEs,115 the ODE solver time may be comparable to the elliptic solver time or even dominate computations. The ODE problem is alleviated in a parallel computing context. State variables in ionic models do not diffuse, which qualifies the ODE solve as an embarrassingly parallel problem. No communication between processors is required, and thus, the parallel scaling of the ODE portion is linear. Solving the parabolic problem is typically less of a concern. On coarser meshes, where time steps are limited by the reaction term, simple forward Euler steps can be used that do not add significantly to the overall computational expense. On finer grids, where IMEX schemes tend to perform better than explicit schemes, a linear system has to be solved. However, due to the diagonal dominance of the system, even with relatively cheap iterative solvers, the parabolic PDE is solved efficiently at a fraction of the cost of the elliptic PDE.

Traditionally, with sequential computers, the linear systems of small grid problems are most efficiently solved with direct methods.19,124 With increasing problem size, memory demands increase quickly due to fill-in arising during matrix factorization, which, in turn, significantly increases the required number of operations per solver step. For larger systems, on the order of several hundreds of thousands up to tens of millions of unknowns, direct methods are inefficient or even not feasible due to excessive memory demands, and iterative solver techniques are clearly the better choice. Furthermore, large-scale problems typically rely on parallel computing approaches to provide a sufficiently large memory space and to keep execution times reasonably short. When simulations are executed on parallel computers, iterative methods are the method of choice. Although direct methods have been implemented to perform well in parallel environments,125,126 they are typically not competitive when using a large number of cores due to the required fine-grained parallelism that limits their parallel scalability. Among the iterative methods, Krylov subspace methods, such as the conjugate gradient (CG) method, with various preconditioners have been established as the standard technique in the field. CG scales very well in parallel, although the utility of the method depends mostly on performance and scalability of the preconditioner.

The most challenging problem is the solution of the elliptic PDE. Using an incomplete LU (ILU) or incomplete Cholesky (ICC) preconditioner for CG has been a standard choice for bidomain simulations.19 Although the scalability of an ILU-CG iterative solver is reasonably good, at least up to a moderate count of 64 cores,124 the method typically takes several hundred iterations to converge. This makes the elliptic solve substantially more expensive than the parabolic solve, which converges with the same solver configuration in fewer than 10 iterations. The reason for the slow convergence is well studied: standard iterative methods are very efficient at removing the high-frequency modes from the residual, but less efficient with the low-frequency modes. Multigrid techniques have proved very effective in tackling this weakness.127 The basic idea is to project the residual onto a coarser space where the low-frequency components can be dealt with more efficiently. This has been demonstrated for the bidomain problem in several recent studies.124,128,129 Multigrid preconditioners for CG methods significantly improve the overall performance and show reasonable parallel efficiency (better than 80%) for up to 128 processors. A generally applicable algebraic multigrid (AMG) preconditioner in conjunction with an iterative Krylov solver reduces the number of iterations per solver step by almost two orders of magnitude compared with ILU-CG. Although a single iteration with AMG is significantly more expensive than with ILU, the reduction in the number of iterations clearly favors a multilevel approach; in Plank et al,124 a six-fold speedup was reported. AMG-CG is, to date, the most efficient method for solving the elliptic portion of the bidomain equations.

The use of computer simulations to perform in silico experiments has become established over the last decade, either as a complement to experimental or clinical studies38,56,130-132 or as a stand-alone tool for quantitative testing of hypotheses.26,133 Two typical scenarios are considered with in silico experiments:

A subset of physiological signals has been experimentally recorded at a given spatiotemporal resolution in a subdomain of interest. An in silico experiment is designed to match the in vitro setup, and model parameters are tuned to match the data. Assuming that the underlying model is mechanistically sound and that in vitro and in silico data match well, the model can be trusted with some confidence. Then the obtained results can be dissected to understand the genesis of the physiological signals recorded in vitro. The main advantages of the in silico approach are that there is almost no limitation in terms of spatiotemporal resolution and that all quantities of interest are easily accessible and can be analyzed, including those for which no suitable experimental recording technique has been devised yet. Further, it can be assumed that in silico data can be trusted in regions beyond the experimental field of view where no data could be recorded at all. For instance, optical mapping can provide high-resolution data on myocardial activation patterns, but recordings are confined to the surfaces of the heart. Such in vitro data can be complemented with in silico data on electrical activity occurring deep in the myocardial wall where experimental techniques cannot provide data at a sufficiently high spatiotemporal resolution.

No in vitro experimental data are available that directly match the in silico experiment. This is, by a wide margin, the more frequently encountered scenario in modeling studies. In this case, model parameters are tuned during the initial phase of a study to ensure consistency with a priori known averaged physiological observations such as conduction velocities or isochronal activation and repolarization patterns.

Although differences between experiments and simulations continue to shrink, attempting to achieve a 1:1 match is a futile effort. In fact, 1:1 matches are hardly achievable even between two subsequent identical in vitro experiments. Rather than striving to achieve the impossible, in silico techniques focus on reproducing the salient features of the system as a whole instead of creating a 1:1 digital replica of a real heart. The main discrepancies between in vitro and in silico results can be attributed to the following factors:

- *Uncertainty in model parameters*: Although very detailed measurements of many cardiac parameters are found in the literature, the variance in the reported data is quite large, which renders a detailed model parametrization a daunting task. The reasons for the large variance are multifactorial, including technical factors, such as differences in measurement setups and experimental procedures, and actual biological variability, such as intersubject and interspecies variability, regional heterogeneity, sex, age, type and state of disease, and other effects secondary to cardiac memory and pacing history. The uncertainty in most relevant biological parameters is, at best, 10%, but variances of 100% or more are not uncommon. An accurate determination of all relevant model parameters for a particular in vitro experiment is virtually impossible.
- *Anatomical and structural differences*: Recent progress in imaging and modeling techniques allows the generation of microanatomically realistic models of the heart at a paracellular resolution.43 Such models are geometrically very detailed, accounting for fine endocardial structures and interstitial cleft spaces due to embedded connective tissue secondary to fibrosis or deposition of fat. Despite this geometric detail, current nondestructive 3D imaging techniques cannot resolve fiber and laminar arrangement at sufficiently high resolution, and alternative techniques such as serial histology are destructive and too time consuming for many applications. Moreover, imaging techniques provide data only on the eigendirections of the tissue, not on the electrical conductivities along them.
- *Unaccounted mechanisms*: Despite the high level of structural and functional detail in current high-end modeling studies, many mechanisms remain unaccounted for because they are either unknown or poorly understood, or the available data are insufficient to support the formulation of a sound model. In vitro models, by contrast, include all mechanisms, whether known or not; quite often, however, some of these mechanisms have to be knocked out first to facilitate experimental observation (eg, in optical mapping studies, electromechanical coupling has to be suppressed to avoid movement artifacts).
- *Numerical inaccuracies*: Stiffness and nonlinearity of the bidomain equations render their numerical solution a computationally expensive task. Trade-offs have to be made between spatiotemporal resolution, functional and anatomical detail included, and computing time. This is particularly true for whole heart simulations, where the computational burden of high-resolution models may become prohibitive. Currently, depending on the species, approximately 100 to 200 μm is the highest resolution that can be used in silico in the context of whole heart simulations. In the case of a human heart, a discretization at 200 μm results in approximately 25 million degrees of freedom, which is currently the upper limit that can be handled efficiently with turnaround cycles fast enough to support parameter studies. However, within this range of resolutions, the solution of the bidomain equations is not only a function of the model parameters; there is also, to some extent, a dependency on the spatiotemporal discretization.123 With discretization schemes that rely on spatially fixed grids, *h* and *dt* are chosen to keep the numerical uncertainty below a set tolerance, typically 5% or less. Relative to the uncertainty of model parameters, numerical uncertainty does not constitute a problem per se. Numerical uncertainties are more problematic for comparisons between modeling studies than for comparing in silico and in vitro results. Further, for simple pacing simulations where the same sequence is repeated, numerical deviations are negligible due to the synchronizing effect of the stimulus. However, deviations are to be expected when reentrant activation patterns are simulated over prolonged periods of time, on the order of seconds or longer.
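
The discretization trade-off above is typically handled with a grid-refinement check: the observable of interest (eg, conduction velocity in a strand) is recomputed at successively halved spacings until it changes by less than the tolerance. A minimal sketch, with a hypothetical closed-form stand-in for the strand simulation:

```python
def refine_until_converged(measure, h0, tol=0.05, max_halvings=5):
    """Halve the grid spacing h until the observable changes by less than
    tol between successive resolutions. `measure(h)` is a hypothetical
    stand-in for a strand simulation returning, eg, conduction velocity."""
    h, prev = h0, measure(h0)
    for _ in range(max_halvings):
        h /= 2.0
        cur = measure(h)
        if abs(cur - prev) / abs(cur) <= tol:
            return h, cur
        prev = cur
    raise RuntimeError("no convergence within max_halvings")

# toy stand-in: CV is underestimated on coarse grids, approaching 0.6 m/s
cv_model = lambda h: 0.6 * (1.0 - 0.5 * h)  # hypothetical, h in mm
h_ok, cv = refine_until_converged(cv_model, h0=0.4)
```

In practice, `measure` would wrap a full strand run at spacing *h*; the loop structure and the 5% default tolerance mirror the criterion described in the text.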

The multitude of factors and their complex interplay render the performance of in silico experiments a challenging task that requires a great deal of electrophysiological and theoretical expertise to determine a set of parameters suitable for a particular study. Current models involve tens to hundreds of parameters, each uncertain within a fairly large range. The sensitivity of simulation outcomes to these parameters is not well known and, due to the nonlinearity of the system, not easily predictable either. Strategies to determine an optimal parameter set vary from lab to lab; no commonly accepted standard has been established yet for this challenging and important problem. Thus far, in the majority of modeling studies, this problem has not been addressed at all; instead, default parameter values are taken directly from published reports to parametrize a model.
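
The sensitivity problem described above can at least be screened locally with a one-at-a-time perturbation of each parameter. The sketch below is a deliberately simple, hypothetical stand-in (a wavelength-like output with a square-root conductivity dependence), not an actual ionic or tissue model:

```python
import math

def one_at_a_time_sensitivity(f, params, rel_step=0.1):
    """Crude local sensitivity screen: perturb each parameter by +/-rel_step
    and report the normalized response (approximately d ln f / d ln p).
    A stand-in for the unstandardized parameter-selection problem described
    in the text; real studies involve tens to hundreds of parameters."""
    base = f(params)
    sens = {}
    for k, v in params.items():
        up = dict(params, **{k: v * (1 + rel_step)})
        dn = dict(params, **{k: v * (1 - rel_step)})
        sens[k] = (f(up) - f(dn)) / (2 * rel_step * base)
    return sens

# hypothetical output: wavelength = APD * CV, with CV ~ sqrt(sigma)
model = lambda p: p["apd"] * 0.6 * math.sqrt(p["sigma"] / 0.1)
s = one_at_a_time_sensitivity(model, {"apd": 250.0, "sigma": 0.1})
```

For this toy model, the screen recovers the expected exponents (≈1 for APD, ≈0.5 for the conductivity); for a nonlinear tissue model, such a screen only identifies locally influential parameters and cannot capture interactions.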

Despite the lack of an established protocol, parametrization procedures can be broadly subdivided into two stages: first, a cellular stage, where membrane kinetics are adjusted, and second, a tissue stage, where tissue-level parameters are adjusted to match conduction velocities and activation and repolarization patterns. The two stages are performed independently and sequentially, although it is known that there is a nonnegligible bidirectional interplay between the single-cell and tissue levels. That is, cellular parameters, such as upstroke velocity or the shape and duration of the action potential, may change substantially when cells are coupled in tissue, as a function of cellular location relative to stimulus site and tissue boundaries,134 as well as due to electrotonic loading in the presence of cellular heterogeneity.135

During the first stage, cellular parameters are modified to fit a particular scenario under a given pacing protocol. In more complex studies where cellular heterogeneities are accounted for, the procedure has to be applied to several cells to obtain a representative cell for each heterogeneous region included in the tissue/organ model. In a subsequent step, tissue-level bidomain parameters are adjusted to arrive at a good match between in silico results and experimentally observed activation and repolarization patterns. This tuning is a more involved procedure. A minimum prerequisite for any comparison between model and experiment is to match the wavelength, *L*, defined as the product of action potential duration (APD) and conduction velocity, ϑ:

*L* = APD · ϑ

Since *L* is the decisive factor that determines the presence and distribution of excitable gap and refractory tissue, wavelength matching is of key importance when comparing theoretical and experimental observations. This is particularly important for whole heart studies, where conduction velocities cannot be estimated as easily as in geometrically simplified models. Using the cellular kinetics set up in the first stage and neglecting the dependency of cellular APD on tissue parameters, conduction velocity has to be adjusted first. Due to the orthotropic tissue properties and the complex fiber and laminar arrangements in organ-level models, auxiliary simulations are performed in thin strand models to determine conductivities that yield the desired conduction velocities, ϑ*l*, ϑ*t*, and ϑ*n*, along the main axes of the tissue (ie, along the fibers, e*l*, transverse to the fibers within a sheet, e*t*, and along the sheet orthogonal direction, e*n*). Conduction velocity ϑζ depends on the tissue conductivities and the surface-to-volume ratio, β:

ϑζ ∝ √(σbζ / β)

where σbζ, the harmonic mean of the intracellular and extracellular conductivities along ζ, is the bulk conductivity of the tissue. For tuning ϑζ, the bulk conductivities σbζ have to be modified, because β is a global scaling factor that influences ϑ in a direction-independent way. Auxiliary simulations are run in a sufficiently long strand of length > 10·λζ, where one end of the strand is stimulated to initiate propagation and ϑζ is measured at the center of the strand. To minimize differences in ϑ due to discretization errors between auxiliary simulations and the organ model, it is advisable to use the same average spatial discretization *h* in both the strand and the organ model. The squared ratio between desired, ϑζ,desired, and measured conduction velocity, ϑζ,measured, then rescales the conductivity:

σbζ = (ϑζ,desired / ϑζ,measured)² · σ0bζ

where σbζ is the adjusted conductivity parameter that will be plugged into the organ model and σ0bζ is a default initial value taken from the literature. For setting up monodomain studies, the choice of σ0bζ does not matter; any of the reported values can be used, because the final σbζ as determined via this procedure is independent of σ0bζ.
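
Because ϑ scales approximately with the square root of the bulk conductivity, this tuning loop converges in very few iterations. A minimal sketch, with a hypothetical closed-form stand-in for the strand simulation:

```python
import math

def tune_conductivity(measure_cv, cv_target, sigma0, tol=0.01, max_iter=10):
    """Iteratively rescale a bulk conductivity until a strand simulation
    (stand-in `measure_cv`) yields the desired conduction velocity.
    Relies on the approximate relation CV ~ sqrt(sigma): each step scales
    sigma by the squared velocity ratio; iteration absorbs the error."""
    sigma = sigma0
    for _ in range(max_iter):
        cv = measure_cv(sigma)
        if abs(cv - cv_target) / cv_target <= tol:
            return sigma
        sigma *= (cv_target / cv) ** 2
    raise RuntimeError("conduction velocity did not converge")

# toy stand-in for the strand run (hypothetical): CV = 2.0 * sqrt(sigma)
strand = lambda s: 2.0 * math.sqrt(s)
sigma_tuned = tune_conductivity(strand, cv_target=0.6, sigma0=0.2)
```

In a real pipeline, `measure_cv` would launch the auxiliary strand simulation described above; because the square-root relation is only approximate in discretized tissue, the iteration, rather than the single-step rescaling, provides the robustness.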

All reports on measured bidomain conductivities seem to be fairly consistent in terms of bulk conductivities, with σbl ranging between 0.09 and 0.13 S/m and σbt between 0.017 and 0.034 S/m.69,70,136 The larger discrepancy reported for σbt may be partially attributed to the fact that, unlike in recent studies,38 transverse conductivities were estimated without knowledge of the laminar structure of the heart. Using the extremal values of bulk conductivities in a computer simulation would lead to differences in conduction velocity of 23% and 76% in the longitudinal and transverse directions, respectively.

For bidomain studies, the situation is substantially more complex. Not only do the bulk conductivities matter in this case, but also the individual bidomain conductivities σ*i*ζ and σ*e*ζ and, in particular, the anisotropies α*i* and α*e*, as well as the anisotropy ratio between the two spaces, α. However, unlike the bulk conductivities, the individual bidomain conductivities vary considerably between experiments. For instance, the interstitial longitudinal conductivity σel differs by a factor of 5.2 between Clerc69 and Roberts and Scher,70 and the ratio between the longitudinal conductivities, σil/σel, differs between the lowest and highest reported values by a factor of 10.71 Reported anisotropies range from 5.7 to 10.8 for α*i* in the intracellular space and from 1.5 to 2.6 for α*e* in the interstitial space. In terms of the anisotropy ratio α between the domains, these values translate into a range between 3.5 and 6.3. The anisotropy ratio α is a key factor in all bidomain modeling studies, and it is desirable to change α without changing the bulk conductivities. A systematic approach to adjust the bidomain conductivities for a desired α has been described by Roth71 for transversely isotropic tissue. A more general recipe that also covers orthotropic conductivities has not been reported yet.

An accurate parametrization of an organ model with bidomain conductivities remains an unresolved problem. Most likely, conductivities are not constant throughout the heart. Conductivities as used in the bidomain equations are the result of a homogenization procedure where the discrete intracellular and interstitial conductivities are averaged to obtain syncytial conductivities that can be used in a continuum representation. The homogenization results depend on type and density of gap junction expression, cellular geometry, cell arrangement, packing density, and tissue composition, such as deposition of collagen or fat, within the volume over which one is averaging. Because all of these factors vary throughout the heart, not only the eigendirections of the tensor, but also the bidomain conductivities, are a function of space.

Accurate measurements of tissue conductivities are difficult to perform even for small tissue samples, let alone measurements of their spatial variation. As a result, conductivities cannot be faithfully identified by experimental means. To circumvent this, activation and repolarization isochrones at the tissue level can be matched between simulations and experiments. The parameter of utmost importance is conduction velocity, because most phenomena depend on the ratio between wavelength and the size of the heart.137 No systematic matching procedures have been reported yet; currently, research relies on ad hoc adjustments of bulk conductivities. A more systematic strategy would require the solution of an inverse problem in which experimental data serve as a constraint. Such an approach has been suggested for estimating space constants in simplified eikonal diffusion models130; for biophysically detailed bidomain models, however, no attempt has been reported yet.

Preparation of an in silico experiment normally proceeds via the following steps, described here and summarized in Fig. 14–7.

###### Figure 14–7.

Basic concept of a model generation and parametrization pipeline. The image processing stage provides structured voxel stacks in which each voxel has been binarized by a segmentation procedure (Seg) and classified by a feature extraction procedure (FE), and a fiber direction has been assigned, either from diffusion tensor magnetic resonance imaging or on a rule basis. Optionally, a laminar direction can be assigned as well. During the mesh generation (MG) stage, the binarized structured voxel stack is converted to an unstructured finite element mesh, and all voxel classifications are mapped over to the mesh by the voxel mapper (VM). Electrode configurations and stimulation protocols are added to define the basic setup of an in silico experiment. An initial parametrization (PRM) is found by (1) using a given set of tissue conductivities σζi and σζe and (2) using a set of states for the respective tagged regions to properly initialize the models of cellular dynamics, where appropriate initial states are found in a preprocessing step by pacing single cells at a given basic cycle length. A priori knowledge (APK) of experimentally observed conduction velocities and anisotropy ratios α serves as input for auxiliary simulations, using simple strand geometries that match the average discretization of the organ model, which are used to tune default conductivities in an automatic iterative procedure until the correct conduction velocities are observed in the auxiliary model. Simulations are then performed with the organ model, and the resulting activation patterns are compared against experimental observations (EXP). In the case of major deviations, parameters may be iteratively adjusted and simulations repeated until the required match is found.

- *Definition of model anatomy*: The geometric model over which the simulations are to be performed must first be defined. This involves the extraction of anatomical information of the preparation from an imaging modality (usually MR) via the process of segmentation, as described in the "Image Processing" section. The mesh generation stage then tessellates the geometry defined in the binarized segmented image stack to produce an unstructured finite element grid, as described in the "Mesh Generation" section, where jagged boundaries of the structured stack are replaced by smooth representations of the cardiac surfaces. Structurally or functionally distinct anatomical features that are either manually or automatically identified and "tagged" in the segmentation stage are also mapped across to the finite element mesh.
- *Inclusion of fine-scale anatomical detail*: Cardiac fiber directions are assigned to each finite element within the mesh, either directly from DT-MRI measurements or from a set of rules. In the case of DT-MRI, a voxel mapping method is used to map voxel-based DT-MRI eigenvectors onto the centroids of the finite elements. For rule-based methods, prior computation of a smoothly varying field within the mesh is used to define fiber vectors based on a priori knowledge, as described in the "Assignment of Fibrous Architecture" section. Orthotropy within the model can be accounted for by including a laminar direction to simulate the effects of cardiac sheets.
- *Inclusion of electrophysiological detail*: The relevant ionic model of cellular membrane dynamics, matching the species of the anatomical model if possible, must be defined within the model, usually on a per-node basis. Gradients in cellular model parameters may be included based on experimentally recorded transmural or apico-basal modulations. In addition, different cell models may be assigned to different tagged regions of the mesh to represent their differing electrophysiological behavior (eg, ventricles, atria, Purkinje fibers).
- *Initialization of tissue conductivities*: Electrical conductivity values, associated with cardiac fiber and/or sheet/sheet-normal directions, are usually assigned experimentally measured values obtained from the literature. However, only a small number of robust measurements exist for a limited number of species, and large variations between these literature values are often seen.71 Further, simulated activation patterns also depend, to some extent, on the chosen spatial discretization. To compensate for these numerical inaccuracies, additional simulations on auxiliary grids are conducted to tune the default conductivities until simulations reproduce experimentally observed conduction velocities.
- *Definition of experimental protocol*: Similar to an in vitro experimental setup, electrode geometry and location have to be defined. Electrodes usually include a pacing electrode (eg, located at the apex), in addition to external plate electrodes (eg, at the exterior boundaries of a surrounding conductive bath) through which stronger shocks may be delivered. Stimulation protocols are assigned to each electrode to closely replicate experimental procedures.
- *Definition of the initial state*: During in vitro experiments, preparations are paced for several minutes to arrive at a steady state before the actual experiment commences. In an in silico study, this protocol can be duplicated, but the computational cost of pacing a tissue preparation over several minutes may be prohibitive. In this case, one can resort to applying the desired protocol to a single cell and subsequently populating the tissue model with the final state of the single-cell experiment. The steady state at the tissue level will differ, but the overall system is close to steady state, which reduces computing time vastly because only a few further beats need to be simulated to arrive at the organ-level steady state.
- *Performing simulations and comparison with experimental data*: If available, comparisons with activation patterns from experiments (usually optical mapping) should be performed in an iterative manner, whereby conductivity parameters undergo repeated refinements to bring simulations into closer agreement with experimental observations. Sometimes, comparison may involve inclusion of the potential distortion effects of the experimental measurement technique in question.131
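
The steps above can be sketched as a simple staged pipeline. Each stage below is a placeholder callable with hypothetical keys and labels, not a real segmentation, meshing, or parametrization tool:

```python
def run_pipeline(image_stack, stages):
    """Apply the preparation stages in order; each stage takes and returns
    the growing model description (a dict). The stage functions below are
    placeholders, not real segmentation, meshing, or tuning tools."""
    model = {"image": image_stack}
    for stage in stages:
        model = stage(model)
    return model

# minimal stand-in stages mirroring the steps above (hypothetical labels)
stages = [
    lambda m: dict(m, geometry="segmented"),       # model anatomy
    lambda m: dict(m, mesh="FE grid"),             # mesh generation
    lambda m: dict(m, fibers="rule-based"),        # fine-scale detail
    lambda m: dict(m, ionic_model="per-node"),     # electrophysiology
    lambda m: dict(m, conductivities="tuned"),     # tissue conductivities
    lambda m: dict(m, protocol="apex pacing"),     # experimental protocol
    lambda m: dict(m, state="single-cell limit"),  # initial state
]
model = run_pipeline("MR voxel stack", stages)
```

The ordering matters: each stage consumes the outputs of the previous one, which is why, for example, conductivity tuning cannot precede mesh generation.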

Several multicenter clinical trials have provided consistent evidence that implantable defibrillation therapy prolongs patient life. This convincing demonstration of efficacy has led to a nearly exponential growth, over the last decade, in the number of patients receiving implantable devices. Currently, approximately 0.2 million implantable cardioverter-defibrillators (ICDs) are implanted every year throughout the world. Although ICD therapy has proven to be effective and reliable in preventing sudden cardiac death,138 with success rates clearly superior to other therapeutic options such as pharmacological antiarrhythmia therapy,139 it is far from ideal. There are several known adverse effects secondary to the administration of electrical shocks; the most prominent are linked to electroporation140 (ie, the formation of pores in the cellular membrane that allow the free and indiscriminate redistribution of ions, enzymes, and large molecules between intracellular and interstitial space) and its aftereffects, which are indirectly caused by the high field strengths required to terminate arrhythmias such as ventricular fibrillation (VF) with sufficiently high probability. More importantly, psychological effects on patients play a nonnegligible role. Conscious patients may perceive shock delivery as extremely painful, which leads to traumatization and reduction in quality of life. Although pain may be tolerable in those cases where shock delivery terminates an otherwise lethal arrhythmia, this is less likely in those cases where inadequate shocks were delivered due to high-voltage component malfunctions of the device. A recent meta-analysis of industrial reports3 concluded that such malfunctions are much more frequent than expected, with thousands of patients being affected.
Further, clinical data from ICD trials suggested that six of seven shocks delivered can be classified as inadequate, indicating that the amount of overtreatment in the ICD population is significant.139

A substantial reduction in shock energy can only be achieved through a full appreciation of the mechanisms by which a shock interacts with the heart and by devising novel therapeutic approaches on that basis. However, despite major advances in both experimental technology and computational modeling, our understanding of the biophysical basis of defibrillation remains incomplete. Further progress has been hampered by the inability of current experimental techniques to resolve electrical events in 3D during and after shock delivery with sufficiently high spatiotemporal resolution. Current mapping techniques record electrical activity from cardiac surfaces only and thus are incapable of detecting electrical events in the depth of the myocardium, which may exist there without any signature at the surfaces.26 Computer models were introduced as a means to overcome these experimental limitations, allowing the observation of electrical events within the depth of the myocardium. Initially, monodomain models were used, but theory and simulations predicted shock-induced changes in transmembrane voltage, Δ*V**m*, only along tissue boundaries and conductive discontinuities in the heart. These predictions contradicted experimental studies that had established that, for a shock to be effective, a critical mass of approximately 95% of the tissue141,142 has to be affected by a sufficiently strong gradient of >5 V/cm.143,144 Later, the "missing link"145 that could explain shock-induced polarizations in the far field was discovered with the advent of the bidomain model, which predicted the existence of "virtual electrodes" (ie, polarizations that occur far from any physical electrode).85

Conceptually, defibrillation can be considered to be a two-step process. First, the applied shock drives currents that traverse the myocardium and cause complex polarization changes in transmembrane potential distribution.81 Second, postshock active membrane reactions are invoked that eventually result either in termination of fibrillation in the case of shock success or in reinitiation of fibrillatory activity in the case of shock failure. Using computer models to analyze the etiology of "virtual electrode polarization" (VEP) patterns during the shock application phase revealed that shape, location, polarity, and intensity of shock-induced VEP are determined by both the cardiac tissue structure and the configuration of the applied field.81,146,147 Based on theoretical considerations, VEPs can be classified either as "surface VEP," which penetrates the ventricular wall over a few cell layers, or as "bulk VEP," where polarizations arise throughout the ventricular wall.148,149 Analysis of the bidomain equations revealed that a necessary condition for the existence of the bulk VEP is the presence of unequal anisotropies in the myocardium. Sufficient conditions include either spatial nonuniformity in applied electric field or nonuniformity in tissue architecture, such as fiber curvature, fiber rotation, fiber branching and anastomosis, and local changes in tissue conductivity due to resistive heterogeneities. A mathematical rationale supporting these notions is given in the "Theoretical Considerations for Low-Voltage Defibrillation" section.

The cellular response depends on VEP magnitude and polarity as well as on preshock state of the tissue. APD can be either extended (by positive VEP) or shortened (by negative VEP) to a degree that depends on VEP magnitude and shock timing, with strong negative VEP completely abolishing (de-exciting) the action potential, thus creating postshock excitable gaps. As demonstrated in bidomain modeling studies,26,150 the postshock VEP pattern is also the major determinant of the origin of postshock activations. In those regions where shock-induced virtual anodes and virtual cathodes are in close proximity, a "break" excitation at shock-end (ie, the "break" of the shock) can be elicited. The virtual cathode serves as an electrical stimulus eliciting a regenerative depolarization and a propagating wave in the newly created excitable area. Whether or not break excitations arise depends on whether the transmembrane potential gradient across the border spans the threshold for regenerative depolarization.151 The finding of break excitations, combined with the fact that positive VEP can result in "make" excitations (where "make" refers to the onset of a shock) in regions where tissue is at or near diastole, resulted in a novel understanding of how a strong stimulus can trigger the development of new activations.

According to VEP theory, mechanisms for shock success or failure are multifactorial depending mainly on postshock distribution of *V**m* as well as timing and speed of propagation of shock-induced wave fronts. Whether the depolarization of the postshock excitable gap is achieved in time critically depends on number and conduction velocity of postshock activations, as well as the available time window that is bounded by the instant at which refractory boundaries enclosing the excitable regions recover excitability. All factors ultimately depend on shock strength. Increasing shock strength results in higher voltage gradients across borders between regions of opposite polarity, leading to more break excitations,151 which then start to traverse the postshock excitable gap earlier116 and at a faster velocity,151 as well as extending refractoriness to a larger degree.152

Although ICD therapy has improved over the years, no major breakthrough has been achieved that would allow a substantial lowering of the defibrillation threshold (DFT). Incremental technical refinements have been implemented that led to smaller, longer-lasting devices and less invasive implantation procedures. Many parameters, such as size, geometry, and location of the coils and the can relative to the heart, as well as the waveform of the delivered pulse and the timing of the shock, play an important role in determining the DFT. The large number of parameters and their nontrivial relationship render optimizing an ICD configuration a challenging task. A large body of research deals with the optimization of shock waveforms. It has been demonstrated that biphasic153-155 or multiphasic156-158 waveforms defibrillate at a lower threshold than monophasic waveforms and that truncated exponential pulses further increase efficiency.159 In addition to shock waveform optimization, lead placement optimization has also increased defibrillation efficacy. However, despite these many optimizations, shock energies delivered by current ICDs remain more than an order of magnitude too high to render defibrillation painless.

Although VEP theory provides a sound framework for describing the mechanisms underlying the success and failure of defibrillation shocks in a high-voltage regime, the theory does not lend itself easily to deriving strategies that would facilitate defibrillation in a low-voltage regime. Approaches to lower the DFT by using more creative protocols are under examination; however, so far, only antitachycardia pacing (ATP) has gained clinical relevance. ATP is clinically applied with high success rates of 78% to 91% for ventricular tachycardias (VTs) in the range of 188 to 200 beats/min and with similar success rates for faster VTs (200-250 beats/min).2 Although the underlying mechanisms are not fully understood, the therapy aims at eliciting new wave fronts by pacing the excitable gap instead of trying to reset the tissue via a strong shock. For ATP to work, it is assumed that the organizing center of a reentry is accessible from a chosen pacing site. With reentries characterized by a fairly stable cycle length, a proper timing for a pacing pulse can be chosen by delivering a series of pulses at a pacing frequency higher than the intrinsic frequency of the circuit, such that each pacing pulse is delivered progressively closer to the wave back of the reentry. Once a stimulus falls sufficiently close to the wave back, unidirectional block occurs, and extinction of the arrhythmia is accomplished by collision with the approaching wave front. Empirically, it has been shown that success rates are highest at a pacing cycle length of approximately 88% of the intrinsic cycle length for ventricular rates <250 beats/min.2 For faster activation rates or for more complex arrhythmias, ATP is more likely to fail, and a high-energy defibrillation shock has to be delivered, even if the arrhythmia has not degenerated into VF.
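
The ATP timing rule described above, a burst delivered at a fixed fraction of the tachycardia cycle length, can be sketched as follows; the pulse count and units are illustrative:

```python
def atp_train(intrinsic_cl_ms, n_pulses, fraction=0.88):
    """Schedule an antitachycardia pacing burst: n pulses delivered at a
    cycle length equal to `fraction` of the tachycardia cycle length, so
    each pulse lands progressively closer to the reentrant wave back.
    The 88% coupling is the empirically most successful value (see text);
    the pulse count is illustrative."""
    paced_cl = fraction * intrinsic_cl_ms
    return [round(i * paced_cl, 3) for i in range(n_pulses)]

# hypothetical VT at 200 beats/min -> intrinsic cycle length 300 ms
train = atp_train(300.0, 8)
```

Each successive pulse arrives 12% of a cycle earlier relative to the reentrant wave, which is how the burst "walks" into the excitable gap.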

An alternative approach to avoid high-energy shocks has been suggested by Ripplinger et al,160 which targets arrhythmias driven by reentrant cores attached to anatomic obstacles. The proposed therapy is based on destabilizing such a reentry by unpinning the reentrant core from the anatomic obstacle. The unpinned reentry either self-terminates when encountering a tissue boundary or repins and anchors to another heterogeneity. The unpinning mechanism relies on the formation of VEPs of opposite polarity in the far field in response to an applied electric field. As predicted by VEP theory, areas of depolarization and hyperpolarization form around tissue heterogeneities including those that anchor the core of a reentry. Depending on the timing of the pulse, the reentrant core either shifts its phase or detaches from the obstacle. Because unpinning relies on the VEP mechanism of excitation, simultaneous excitation of all reentrant cores can be achieved, independently of a chosen electrode location. In vitro studies in rabbits demonstrated that unpinning could be achieved with shock strengths ≤2.4 V/cm.160 Success rates in terminating reentry depend critically on the timing of the unpinning pulse relative to the phase of the reentry. When unpinning shocks were applied uniformly throughout all phases of reentry, the success rate was only 13.1%. Hence, choosing the optimal phase for shock delivery is important. However, the phase of reentry is difficult to establish in vivo, and in the presence of multiple reentries, the timing of the unpinning pulse cannot be optimal for all reentrant circuits. Besides, in a whole heart model where heterogeneity is omnipresent, immediate repinning after detachment is not unlikely, which requires the repeated application of the therapy.

An alternative approach to terminate arrhythmias sustained by a reentrant mechanism is to use a feedback-driven pacing protocol to control and eliminate reentrant cores by moving them until they hit inexcitable obstacles, or each other, and annihilate. The mechanism used for influencing the direction of drift of the reentrant core relies on the phenomenon of resonant drift161,162 (ie, the drift of reentrant waves when periodic, low-energy shocks are applied in resonance with the period of the reentry). A feedback algorithm163 is required to maintain the resonance, implemented via a sensing electrode from which trigger signals for the pacing electrodes are derived. A recent theoretical study, using a realistic anisotropic bidomain model of cardiac tissue with microscopic heterogeneities and realistic cellular kinetics, confirmed that resonant drift pacing can indeed be used to move the organizing centers of arrhythmias,133 corroborating earlier experimental and theoretical reports that were based on oversimplified models of cardiac tissue (such as the Belousov-Zhabotinsky reaction or monodomain models using FitzHugh-Nagumo kinetics). Simulations showed that termination can be achieved with high probability and within a sufficiently short time frame at a fraction of the conventional single-shock defibrillation strength. The direction of drift can be controlled by choosing a delay between detection of the trigger signal at the sensing electrode and delivery of a pacing pulse. For arrhythmias where the organizing center is anatomically anchored, unpinning is required before drift can be induced via resonant drift pacing.

Neither ATP nor unpinning or resonance drift is likely to be efficient with more complex activation patterns (ie, in the presence of multiple anatomical or functional reentrant circuits or during fibrillatory activity). Recently, Fenton et al132 proposed a strategy for such a scenario. A train of low-voltage field stimuli is applied at a fast rate to cause changes in polarizations, Δ*V**m*, in the far field along conductive discontinuities of the myocardium. Discontinuities within excitable regions at which Δ*V**m* is sufficiently large to be suprathreshold act as virtual electrodes, which become sites of wave-front emission. With sufficiently high field strengths, many virtual electrodes are formed, which progressively entrain the tissue until synchrony is achieved everywhere. The efficiency of the method has been demonstrated experimentally in vitro for thin-walled atrial preparations at very low field strengths of ≤1.6 V/cm. Whether the method can be applied with similarly low field strengths to ventricular arrhythmias where reentrant activity may occur within the thick ventricular walls has not yet been established.

At the very core of any attempt to understand the mechanisms underlying defibrillation shock success and failure is the mechanistic link by which externally applied electric fields, *E* = −∇Φ*e*, transduce into changes in membrane polarization. Surface VEPs arise due to current redistribution near boundaries separating myocardium from blood in cavities or vessels or from a surrounding bath. Because myocytes tend to be aligned parallel to the organ surfaces, the attenuation of surface VEPs with distance from a boundary is governed by the transverse space constants λ*t* and λ*n*. That is, within a few space constants, typically <1 mm, surface VEP drops off to zero, leaving tissue unaffected by the shock, despite the presence of an extracellular potential gradient ∇Φ*e*. Hence, surface VEPs alone are insufficient to affect a critical mass of the tissue, and defibrillation success depends critically on the formation of bulk VEPs. According to bidomain theory, bulk VEPs exist only in the presence of unequal anisotropy ratios. Sufficient conditions for the existence of bulk VEPs are most easily understood by rewriting Equation 14–6 with an explicit source term *S*:

∇·(σ*i*∇*V**m*) + *S* = β(*C**m* ∂*V**m*/∂*t* + *I*ion), with *S* = ∇·(σ*i*∇Φ*e*)

which reveals that the term *S*, referred to as the activating function,81,164 acts to induce changes in *V**m*. As shown by Sobie et al,81 *S* can be decomposed into:

$$S = \underbrace{\left(\nabla \cdot \sigma_i\right) \cdot \nabla \Phi_e}_{S_1} + \underbrace{\sigma_i : \nabla\left(\nabla \Phi_e\right)}_{S_2} \qquad \text{(14–16)}$$
That is, field-induced changes in *V**m* are driven either by the component *S*1, the spatial variation in the intracellular conductivities (∇·σ*i*) weighted by the applied electric field ∇Φ*e*, or by the component *S*2, the spatial variation in the applied electric field ∇(∇Φ*e*) weighted by the intracellular conductivity tensor σ*i*. By inspecting Equations 14–15 and 14–16, it becomes immediately evident that unequal anisotropy ratios are a necessary condition for bulk VEPs: with equal anisotropy ratios, *S* can be eliminated from the parabolic part of the bidomain equations, leaving the bulk of the tissue unaffected by the applied field.
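
The two components of the activating function can be illustrated with a one-dimensional finite-difference sketch (all conductivity, field, and grid values below are arbitrary illustrative numbers, not tissue parameters): a spatial step in σ*i* under a uniform field produces a nonzero *S* through *S*1, whereas a fully uniform setting makes *S* vanish, consistent with the absence of bulk VEPs.

```python
import numpy as np

# 1D sketch of the activating function
#   S = d/dx( sigma_i * dPhi_e/dx ) = S1 + S2,
# with S1 = (d sigma_i/dx)(d Phi_e/dx) and S2 = sigma_i * d^2 Phi_e/dx^2.
# All numerical values are illustrative, not fitted to cardiac tissue.

x = np.linspace(0.0, 1.0, 1001)   # cm
dx = x[1] - x[0]

def activating_function(sigma_i, phi_e):
    d = lambda f: np.gradient(f, dx)
    S1 = d(sigma_i) * d(phi_e)        # conductivity-variation term
    S2 = sigma_i * d(d(phi_e))        # field-variation term
    return S1 + S2, S1, S2

phi_e = -5.0 * x                              # uniform 5-V/cm applied field
sigma_step = np.where(x < 0.5, 1.0, 0.5)      # jump in sigma_i at x = 0.5
sigma_unif = np.ones_like(x)                  # homogeneous conductivity

S_step, S1_step, _ = activating_function(sigma_step, phi_e)
S_unif, _, _ = activating_function(sigma_unif, phi_e)

print("max |S|, conductivity step:", np.abs(S_step).max())  # nonzero, via S1
print("max |S|, fully uniform    :", np.abs(S_unif).max())  # zero: no bulk VEP
```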

Recently proposed low-voltage defibrillation strategies such as unpinning160 or antifibrillatory pacing132 rely on the presence of minimum field strengths either to unpin an anatomically anchored core or to elicit a sufficiently large number of activations via VEP mechanisms that act to entrain and synchronize the tissue. With traditional single-shock defibrillation strategies, it is assumed that a minimum gradient of 5 V/cm is required to facilitate successful defibrillation.165 With low-voltage strategies, the required minimum gradient seems to be lower, in the range between 1.6 and 2.4 V/cm. Designing an electrode configuration that ensures the required minimum gradients in an in vitro setup is easily achieved with large plate electrodes, where the field strength is constant throughout the tissue. This is clearly not the case when ICD configurations for in vivo use are under consideration. For technical reasons, the size, shape, and location of the electrodes are subject to limitations. Standard ICD configurations use fairly small coils, which lead to highly heterogeneous fields in their immediate vicinity. Further, due to the small coil size, field strength drops off rapidly with distance, leading to very high gradients in the near field, but the bulk of the tissue experiences much weaker field strengths below 10 V/cm,166,167 which is also below the electroporation threshold.168
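
The rapid drop-off around a small coil can be sketched by crudely idealizing the coil as a point current source in a uniform volume conductor, for which the gradient magnitude falls as 1/*r*²; the current and conductivity values below are hypothetical, chosen only to illustrate the scaling.

```python
import math

# Point-source idealization of a small ICD coil in a uniform conductor:
#   |grad(Phi_e)| = I / (4 * pi * sigma * r^2).
# I and sigma are hypothetical illustrative values, not device data.

def field_strength_V_per_cm(I_amp, sigma_S_per_m, r_cm):
    """Gradient magnitude (V/cm) at distance r_cm from a point source."""
    r_m = r_cm / 100.0
    return I_amp / (4.0 * math.pi * sigma_S_per_m * r_m ** 2) / 100.0

I, sigma = 0.5, 0.2   # A; S/m (illustrative)
for r in (0.5, 1.0, 2.0, 4.0):
    print(f"r = {r:3.1f} cm: {field_strength_V_per_cm(I, sigma, r):7.1f} V/cm")
# Doubling the distance quarters the gradient: fields are very high in the
# near field but quickly fall below the 5-V/cm single-shock criterion.
```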

Quantitative investigation of shock-tissue interaction within the heart and how this depends on specific electrode configurations could lead to important advances in ICD technologies and protocols with direct clinical relevance.169 However, predicting the exact electrical potential gradient distribution or activating function for determining changes in polarization in the presence of applied extracellular fields in vitro or in vivo in 3D at a sufficiently high spatiotemporal resolution is currently beyond the capabilities of any available clinical or experimental modality. In this scenario, computer modeling is the only choice for acquiring such information.

Indeed, the importance of predicting such factors has been recognized in earlier combined experimental and theoretical studies.146 However, the applied experimental technique provided only a fairly small epicardial field of view, and consistent with this experimental setup, the computational model represented the cardiac structure as a 2D patch that matched the experimental field of view. More recent computational studies to quantify DFT within 3D models have been highly simplified,169 fully relying on the critical mass hypothesis and ignoring the current view based on VEP theory. Further, these models are monodomain and disregard most structural and functional details that are known to be key determinants of shock outcome. Nonetheless, such models have been applied in a clinical study to provide metrics for relative comparison of electrode performance.169

Recent developments in computational cardiac modeling, as described earlier in this chapter, have significantly advanced our ability to simulate the electrical behavior of myocardial tissue within anatomically detailed whole ventricular models. Not only do such models provide us with knowledge of the response of both the extracellular and transmembrane potentials to an electrical stimulus throughout the fully 3D volume of the ventricles, but, due to the inherent detail and complexity of the models, they also allow us to assess the mechanisms by which fine-scale anatomical structures within the heart interact with strong electrical fields. Such micro-anatomically realistic models are currently being used by our groups to quantitatively predict DFT and tissue damage for clinically relevant electrode configurations and shock waveforms to rationalize placement and geometry of ICD cans and coils. Preliminary results from these investigations are presented in the following section.

The anatomically highly detailed MR-based computational model of the rabbit ventricles, described previously in the "Construction of Models of the Cardiac Anatomy" section, was used to simulate a typical ICD setup, as shown in Fig. 14–8A. The stimulus catheter is inserted into the RV, applying a stimulus to the endocardial surface of the RV close to the apex. A grounding electrode is placed close to the base of the LV, in line with typical clinical configurations. A strong electrical stimulus of variable strength, in the form of a current injection, is applied to the tissue via the catheter.

###### Figure 14–8.

**A.** Shown is the electrode configuration corresponding to an implantable cardioverter-defibrillator (ICD) setup, with a grounding (GND) electrode at the base of the left ventricle and a catheter inserted into the right ventricular (RV) cavity, applying a stimulus directly to the endocardial surface at the apex of the RV. **B.** Shown is the effect of applying a current injection stimulus of 50 × 10⁸ A/μm³ via the electrode configuration shown in A, with the left image showing the gradient of the extracellular potential (∇Φ*e*) within the ventricular cavities, where the color scale has been chosen to saturate at 4 V/cm; the center image showing the extracellular potential (Φ*e*); and the right image showing the transmembrane potential (*V**m*).

Figure 14–8B shows the resulting distribution of extracellular potential gradient ∇Φ*e* (left), extracellular potential Φ*e* (center), and transmembrane potential *V**m* (right) throughout the volume of the ventricular model following the application of a 50 × 10⁸ A/μm³ stimulus.

As can be seen from Fig. 14–8B, detailed information can be obtained regarding the prediction of voltage gradients for suprathreshold activations. These preliminary simulation results demonstrate, perhaps unsurprisingly, that the critical field strength isosurface (shown as 4 V/cm in Fig. 14–8B, left) is very close to the site of shock delivery, even though gradients in the immediate vicinity of the electrode are fairly high. The *V**m* panel also demonstrates how the model can be used to predict anatomical locations at which VEP-triggered activations arise.

Although preliminary, these initial simulations demonstrate the utility of such a modeling approach to provide a detailed analysis of electric potential distributions within the ventricular tissue during ICD protocols and to assess their dependency on specific lead placements. The flexibility of the model used here will also facilitate future investigations of novel ICD configurations to be tested in silico, providing the potential to optimize ICD configurations and protocols to improve success while reducing shock energy and tissue damage. Our initial results suggest how alterations in electrode placement may result in significant changes in distribution and heterogeneity of electric fields and activating function within the myocardial mass, which can thus be a major determinant of the DFT as well as spatial distribution and severity of tissue damage secondary to adverse shock effects. The bidomain nature of our model also allows for the computation of vulnerability grids for arrhythmia induction following shock application in order to accurately determine DFT, as well as allowing computation of the activating function to quantify shock-tissue interaction as a function of electrode configuration. Finally, the high level of anatomical detail contained within the model will provide knowledge of how anatomical heterogeneity within the ventricles influences the shock response, which is of great importance in the development of patient-specific therapies.

Electrophysiological properties of PS cells are distinct from those of ventricular myocytes, with prominent variations in numerous ionic currents.170-172 PS fibers respond to electrical fields differently from the endocardium upon which they run because of cellular differences173 and because they are oriented in different directions than the myocardial fibers. Because the PS is a network of 1D cables, it should also be more prone to field excitation than 3D tissue.32 Moreover, the myocardial response resulting from a particular shock depends on the orientation of the stimulating field; thus, different shocks may elicit distinct contributions from the PS, which could have implications for clinical techniques, where field direction is constrained by physical limitations.
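
The susceptibility of a 1D cable to field stimulation can be estimated from linear cable theory: a long fiber aligned with a uniform field *E* develops a steady-state polarization of roughly ±λ*E* at its sealed ends. The parameter values in the sketch below (λ of about 1 mm, a 5-V/cm field) are illustrative assumptions, not measured Purkinje data.

```python
# Rough cable-theory estimate of why a long 1D strand is readily excited
# by a uniform field: the steady-state polarization at a sealed fiber end
# is approximately lambda * E. Parameter values are illustrative only.

def end_polarization_mV(lambda_mm, E_V_per_cm):
    """Approximate sealed-end polarization (mV) of a field-aligned fiber."""
    return (lambda_mm / 10.0) * E_V_per_cm * 1000.0   # cm * V/cm -> V -> mV

# lambda ~ 1 mm and a 5-V/cm shock field give a polarization of ~500 mV,
# far above the few tens of millivolts needed to reach threshold from rest.
print(end_polarization_mV(1.0, 5.0))
```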

Despite ever-accumulating evidence that the PS promotes and sustains arrhythmias174 and is the source of postshock activations that may contribute to the failure of defibrillation,175 no studies have directly observed the PS response to normal-strength stimuli at the organ level and the associated impact on shock outcome. Studies of field effects on papillary-Purkinje preparations173 have used strong shocks to study electroporation but have disregarded the effects of weaker shocks on the PS-ventricular system as a whole, which are certainly clinically relevant. Measurements of PS electrical activity are difficult to obtain experimentally because fibers are fine and penetrate into the myocardium where the PMJs are situated. Optical mapping is problematic because the PS signal is overwhelmed by the myocardial signal at the organ scale. Computer modeling offers a noninvasive alternative to experimentation for ascertaining contributions of the PS to the whole heart response to defibrillation-strength shocks.

In this section, we demonstrate how the computer model developed as described in the preceding sections can be used to assess the role of the PS during defibrillation. Using our model, we will investigate the various issues raised. By applying large electric fields to the ventricles and observing behavior of the system with and without the PS, we will clearly identify its role.

We hypothesize that the PS is easily activated by electrical fields, which leads to far-field myocardial depolarizations emanating from PMJs during defibrillation. This is a distinct phenomenon from VEP effects but would produce complementary results. The strength and orientation of the applied field are expected to determine the postshock excitable gap and affect transmission characteristics; thus, shock features will mediate PS contributions. A computer model of 3D rabbit ventricles was used to study the response of quiescent tissue to various defibrillation shocks, with and without a detailed representation of the PS.

The angle of incidence of an electric field on a fiber determines the strength of the interaction, so it was presumed that the His/Purkinje network, with its variously oriented branches, would respond differently to various fields. The ventricles with PS were subjected to 1-ms constant electric fields of 5 V/cm in the three principal directions. As expected, the excitation patterns for each of the fields were quite distinct (Fig. 14–9). The myocardium was excited in a small number of regions. For transverse shocks, the cathodal epicardial wall, the septal wall nearest the cathode, and the endocardial wall nearest the anode were depolarized. For shocks along the major axis, the basal epicardium and apical endocardium were excited. Looking at the PS, many small regions were affected strongly. These regions corresponded to either endings or sharp curves, whereas straight regions were relatively unaffected, demonstrating once again that saw-tooth effects are not strong under constant electric fields. Excitation occurred downfield where there were abrupt changes in conductivities. Such changes encompassed bends in the branches and endings. Conversely, de-excitation occurred upfield, again at abrupt changes in geometry. Thus, looking at a field in the *x*-direction (see Fig. 14–9, left), the bottom ends of the network are hyperpolarized, whereas the tops are depolarized. For the other directions, similar patterns hold in the appropriate directions. A field along the *x*-direction appeared to activate more of the network than fields oriented along the other directions. There were more excitation regions in the PS than in the myocardium.

The time for the entire network to activate is given in Fig. 14–10. This time varied with the direction of the field and decreased with increasing gap junction resistance. Decreasing *r*gj below 0.5 MΩ had essentially no effect. The total activation time was determined by the lengths of the branches, which were unaffected by the stimuli. The vast majority of the network was excited within 6 ms, with a particular branch being responsible for the remaining time. Such branches relied on activity to propagate into them and took considerably longer to activate than branches that were field stimulated at both ends. The activation time was reduced drastically for small increases in coupling resistance if that change in resistance resulted in field activation of a new branch; otherwise, the decrease in activation time was small with an increase in *r*gj. Fields along the *y*-direction took the longest time to activate the network, whereas fields in the *x*-direction excited the entire network faster. For the most part, increasing the gap junction resistance had a gradual effect, reducing the activation time by slightly increasing the regions depolarized by the field: the saw-tooth potential was enough to bring portions of the membrane that were already near threshold above threshold. With an *x*-directed field, there was a drastic decrease in excitation time as the resistance was increased from 1 to 2 MΩ. This corresponded to a case where the saw-tooth effect became large enough to trigger a branch that was raised to near threshold by the field. If the field failed to trigger the branch, the branch would be activated by activity propagating through the Purkinje network.
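
The interplay between sealed-end polarization and the saw-tooth effect at gap junctions can be reproduced with a minimal steady-state linear cable in an imposed uniform field. This is a deliberately simplified sketch, not the model used in the study, and all parameters (conductances, membrane leak, junction spacing) are arbitrary illustrative values; it shows the saw-tooth component of *V*m growing as the junction resistance *r*gj increases.

```python
import numpy as np

# Steady-state linear cable in a uniform extracellular field, with a gap
# junction inserted every `cells` nodes. Solves the nodal current balance
#   sum_j g_ij (phi_j - phi_i) = g_m (phi_i - phi_e_i),  V_m = phi_i - phi_e,
# with sealed ends. All parameters are illustrative, not Purkinje data.

def steady_state_vm(n=400, dx=0.01, E=5.0, g_axial=1.0, r_gj=0.0,
                    cells=20, g_m=0.5):
    phi_e = -E * np.arange(n) * dx                       # imposed field
    g = np.full(n - 1, g_axial)                          # internodal links
    g[cells - 1::cells] = 1.0 / (1.0 / g_axial + r_gj)   # junction links
    A = np.zeros((n, n))
    for i in range(n - 1):                               # weighted Laplacian
        A[i, i] -= g[i]; A[i + 1, i + 1] -= g[i]
        A[i, i + 1] += g[i]; A[i + 1, i] += g[i]
    A -= g_m * np.eye(n)                                 # membrane leak
    phi_i = np.linalg.solve(A, -g_m * phi_e)             # intracellular phi
    return phi_i - phi_e                                 # V_m

for r_gj in (0.0, 2.0, 10.0):
    vm = steady_state_vm(r_gj=r_gj)
    core = vm[100:300]                                   # away from the ends
    print(f"r_gj = {r_gj:5.1f}: saw-tooth p-p = {core.max() - core.min():.4f}")
```

With no junction resistance the interior stays essentially unpolarized (only the sealed ends deviate); raising *r*gj increases the interior saw-tooth amplitude, mirroring the enlarged field-depolarized regions described above.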

Figure 14–11 compares the effect of shocks oriented along the long axis with and without a PS. Without a PS, initial depolarization of tissue at the ventricular base was a result of proximity to the anodal plane, which raised *V**m* above threshold by decreasing local Φ*e*. The second source of activation was due to a VEP on the endocardial surface of the left ventricular apex, a consequence of hyperpolarization on the apical endocardium induced by the cathodal plane. Following the end of the shock, the pair of resultant wave fronts propagated gradually across the ventricles, completing activation in 59.5 ms.

###### Figure 14–11.

The Purkinje system (PS) alters shock-induced ventricular activation. A 5-V/cm shock was applied to quiescent ventricles without (top) and with (bottom) PS. With a PS, far-field excitations on the endocardial surface due to activations from the PS abbreviated total activation time by 30% compared with no PS.

Although ventricular activation immediately following the shock (0 ms) was independent of the PS, strong gradients were induced within segments of the PS. Ensuing activations emanating from these excited segments markedly affected the shock response. Initial effects of the PS on myocardial activation sequence were visible shortly after the end of the shock (5 ms); numerous ventricular locations were indirectly activated by the far field as a result of propagation from excited PMJs in PS sections excited by the shock. The resultant wave fronts propagated through the myocardial wall, giving rise to epicardial breakthroughs (right ventricular free wall, 17 ms) that did not occur in the absence of a PS, because the only sources of depolarization were wave fronts propagating from the apex and base. Later still (25 ms), several regions that remained inactive in the no PS case were completely activated in the presence of a PS. Complete activation of the ventricles with PS occurred in 39.0 ms, whereas a thick band of unexcited tissue between wave fronts remained when no PS was present (40 ms).

Full details of this study are provided in the article by Boyle et al.176

We examined how the PS contributed to postshock arrhythmogenesis. A transmembrane pacing pulse was applied to the apex of the ventricles. At varying coupling intervals, cross-shocks were applied in the *y*-direction to induce reentry. When a PS was present compared with no PS, the window of vulnerability was narrower (10 vs 15 ms) because the ventricles depolarized and, hence, repolarized more quickly. The minimum shock strength to induce reentry was lower with a PS (3.3 V/cm vs 4 V/cm).

The PS was active throughout reentry and exhibited both anterograde and retrograde conduction at PMJs. PS activity contributed to reentry dynamics in three ways (Fig. 14–12). First, PS end points conducted intramural activity retrogradely, exciting distant endocardium ahead of the wave front and effectively accelerating propagation when the original wave front merged with the new wave front. Second, retrograde activity provided an escape route for wave fronts terminating due to refractory tissue, thereby prolonging activity. Third, refractory regions surrounding PS entry points caused fractionation in wave fronts. Because PS cells have a longer intrinsic APD, this sort of wave front splitting occurred frequently.

###### Figure 14–12.

Purkinje system (PS) contributions during reentry. **A.** Wave fronts can be accelerated as activity is rapidly shunted ahead because of retrograde propagation through the PS, which causes breakthroughs (black arrow tips) that eventually merge with the primary front. **B.** Activity from wave fronts about to be extinguished due to refractoriness (white arrows) can "escape" annihilation by entering the PS retrogradely. **C.** Large areas of refractoriness surrounding PS end points create functional obstacles, which can cause wave fronts to fractionate (black arrows) and provide anchoring points for reentry.

To assess how the PS contributed to the maintenance and stabilization of reentry as time progressed, the PS was disconnected at various postshock instants. Isolating the PS immediately after the shock extinguished activity. PS disconnection at 200 ms led to reentry termination at 555 ms. In contrast, PS disconnection at later stages (≥1000 ms) did not terminate reentry. Thus, once meandering wave fronts on the epicardium and the endocardium converged into stable rotors, the PS did not appear to play a direct role.

Full details of this study are provided in the article by Deo et al.177

These simulations demonstrate that the PS is a very important component in determining the response of the ventricles to defibrillation shocks and in the propagation of electrical activity in the ventricles. Thus, it is vital that the PS is included in organ-level models if fibrillation and defibrillation are to be studied.

First, the PS runs in all directions, so it will be activated in several places by electric fields regardless of the orientation of the field. The PS can be considered 1D, so it is easier to directly excite than the myocardium. Activity in the PS is widespread following shocks and spreads quickly into the myocardium to accelerate the activation of the ventricles. Compared with the myocardium, more regions were excited in the PS. In many regions where the PS was depolarized, the ventricles were not affected. The gap junction resistance played a role in determining how long it took for the entire PS to activate. Higher gap junction resistances in the PS lead to a small saw-tooth effect, which effectively increases the spatial extent of cathode make excitation.

The PS contributed to the establishment of postshock reentry. We identified several mechanisms by which this occurred. PMJs conduct both anterogradely and retrogradely, offering entry and exit points for propagating wave fronts. They effectively increase the dimensionality of the organ, leading to longer reentry times. Activity arriving through the PS was manifest on the surface as breakthroughs, which often reestablished activity after it had died out in a region. Conversely, wave fronts that appeared to be heading toward collision with refractory tissue survived by retrograde propagation. A more subtle effect led to wave-front speed up as PS wave fronts raced ahead of a myocardial wave front and emerged slightly ahead of the myocardial wave front. The longer action potential of the PS led to increased refractoriness at the PMJs, which could cause wave-front fractionation as wide wave fronts split when passing through a PMJ.

Propagation through the ventricles is complicated by the presence of the PS. Furthermore, the effects of the PS may not always be obvious or directly observable. Modeling provides a way to gauge the importance of the PS in light of experimental difficulties in measuring both global ventricular and PS behavior.

Complex myocardial remodeling that occurs in postinfarcted hearts has been shown to give rise to substrates that can initiate and anchor ventricular tachycardia (VT) reentrant activity. The degree of myocardial injury in the infarcted region depends on the tissue's proximity to the site of occlusion. Tissue that experiences zero perfusion undergoes cellular necrosis and formation of scar tissue. Infarct shape analysis has demonstrated that strands of viable tissue within electrically passive scar tissue can provide alternate pathways for propagation. In addition, partial perfusion in the adjacent PZ tissue results in ion channel and gap junction remodeling that has been shown to result in slowed conduction and altered action potential morphology. The complexity of tissue remodeling within the infarct has made it difficult to elucidate the specific mechanisms that give rise to postinfarction VT and its morphology. This section outlines simulation studies that use an image-based 3D model of an infarcted canine heart incorporating accurate infarct geometry and composition.

The ionic kinetics in the normal and PZ myocardium were represented by the Luo-Rudy dynamic model.178 Membrane kinetics in the PZ were modified based on data from the literature. Previous studies of the PZ in infarcted canine hearts have reported a reduction in peak sodium current to 38% of the normal value179; a reduction in peak L-type calcium current to 31% of normal180; and reductions in the peak potassium currents *I*Kr and *I*Ks to 30% and 20% of their maxima,181 respectively. These modifications result in longer APD and decreased excitability compared with the normal myocardium. The mathematical description of current flow in cardiac tissue was based on the monodomain representation. To examine the arrhythmogenic propensity of the infarct substrate, an aggressive pacing protocol was delivered from the apex, similar to protocols used for clinical evaluation of patients with myocardial infarction. Pacing commenced at a basic cycle length of 250 ms for five beats (S1); 450 ms after the last S1, six stimuli were delivered at progressively shorter coupling intervals, starting at 190 ms and decreasing in steps of 10 ms. The induced activity was monitored for an additional 2.5 seconds.
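
For concreteness, the stimulus timing implied by this protocol can be sketched as follows; treating 190 ms as the interval between the first and second premature beats (rather than the interval preceding the first) is an interpretation on our part.

```python
# Stimulus schedule for the aggressive pacing protocol described above:
# five S1 beats at a 250-ms cycle length, then, 450 ms after the last S1,
# six premature stimuli at coupling intervals starting at 190 ms and
# shortened by 10 ms each step. The placement of the 190-ms interval is
# our interpretation of the protocol text.

def pacing_schedule(n_s1=5, cl_s1=250, delay=450, n_prem=6,
                    ci_start=190, ci_step=10):
    times = [i * cl_s1 for i in range(n_s1)]   # S1 train: 0, 250, 500, ...
    t = times[-1] + delay                      # first premature stimulus
    times.append(t)
    ci = ci_start
    for _ in range(n_prem - 1):                # remaining premature stimuli
        t += ci
        times.append(t)
        ci -= ci_step
    return times

print(pacing_schedule())
# -> [0, 250, 500, 750, 1000, 1450, 1640, 1820, 1990, 2150, 2300]
```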

Figure 14–13 illustrates the events that lead to VT induction. It depicts isochrones of activation times for time periods during the fourth stimulus of the aggressive pacing protocol and during the resulting VT. Images on the right present the intramural activation pattern on a slice through the heart, the location of which is indicated by the dashed white line on the epicardium. When the propagating wave front from the pacing site reaches the PZ, conduction significantly slows compared with the surrounding normal tissue. Faster wave fronts from the normal myocardium converge into the PZ laterally (white arrows) activating the entire PZ. The transmural view shows late activation of the PZ due to the wave front propagating from the normal myocardium. Because the PZ has a longer APD, it remains refractory, whereas the surrounding myocardium is fully recovered. As the pacing rate is increased, the wave front encounters refractory tissue, resulting in conduction block. This region of block later becomes the conduit for wave front propagation from the intramural PZ toward the surface. When pacing is completed, the activation from within the PZ tissue develops into an epicardial quatrefoil reentry. The reentry core remains within the PZ and is sustained throughout the simulation with a rotation frequency of 5 Hz.

###### Figure 14–13.

**A.** Canine ventricular model of infarction containing scar (purple) and peri-infarct zone (PZ; red) regions. **B.** Isochrones of activation times during the fourth pacing stimulus between 2.24 and 2.54 seconds (left) and 2.75 and 3.05 seconds (right) shown on the epicardial surface and within an intramural slice.

Previous experimental studies of infarcted canine hearts have reported the induction of VT with epicardial reentry morphology.182,183 The simulations revealed that decreased excitability, longer APD, and reduced conduction velocity throughout the PZ promoted conduction block and wave break that develops into epicardial reentry. Furthermore, the simulation showed that wave break and reentry formation occurred in both the epicardial and intramural portions of the PZ. Thus, this study showcased the utility of image-based computational modeling in predicting sites of reentry formation and maintenance.

The use of modeling techniques to complement experimental work has proved to be fruitful and has become an almost indispensable tool in many studies. Despite the many limitations, which enforce trade-offs to keep simulation studies tractable, it has become widely accepted that cardiac modeling is a viable approach to gain mechanistic insights into the electrophysiological function of the heart in health and disease. Recent advancements in various disciplines that are key to further developing modeling technology will allow many of these restrictions to be lifted. New image acquisition techniques, such as high-field MRI and DT-MRI, provide very detailed geometrical and structural descriptions of the heart at a paracellular resolution. Image processing techniques and fully automatic mesh generation techniques are available that are capable of generating micro-anatomically accurate models of the heart directly from image stacks. The introduction of optimal multigrid and multilevel preconditioning techniques improves the performance and parallel scalability of cardiac simulators, rendering large-scale parameter studies with anatomically realistic and biophysically detailed models feasible. Many of these advanced solver techniques have been integrated into stable, mature, and easy-to-use toolkits.184 Current developments in HPC hardware, which aim at overcoming limitations of the current CPU-centric paradigm by using accelerator technologies such as general-purpose graphics processing units, promise a tremendous boost in performance at a substantially lower price. Combining all these key technologies in a robust framework that supports researchers in performing in silico experiments with ease will require major research efforts, but the basic building blocks for such simulation tools are already available today.
Although current whole heart simulators lag real time by a factor of 10³ to 10⁴, next-generation HPC hardware in conjunction with novel simulation technologies will enable researchers to perform in silico experiments with near real-time performance, lagging real time only by a factor of 10¹ to 10². Such high-performance simulation tools open new and exciting applications that are not feasible today, such as the integration of in silico techniques into a clinical work flow to support clinicians in making better informed decisions.

*Circulation*. 2004;110:2591-2596. [PubMed: 15492306]

*JAMA*. 2006;295:1929-1934. [PubMed: 16639052]

*Lancet*. 1996;348:7-12. [PubMed: 8691967]

*JAMA*. 1993;270:2451-2455. [PubMed: 8230622]

*Circulation*. 2008;118:1202-1211. [PubMed: 18779456]

*Prog Biophys Mol Biol*. 2003;82:3-9. [PubMed: 12732264]

*J Physiol*. 1952;117:500-544. [PubMed: 12991237]

*Nature*. 1960;188:495-497. [PubMed: 13729365]

*IEEE Trans Biomed Eng*. 1985;32:743-755. [PubMed: 2414207]

*Circ Res*. 1991;69:378-395. [PubMed: 1860179]

*Biophys J*. 1998;75:1-14. [PubMed: 9649363]

*Ann Biomed Eng*. 1999;27:160-170. [PubMed: 10199692]

*Prog Biophys Mol Biol*. 1988;52:101-164. [PubMed: 3076684]

*Crit Rev Biomed Eng*. 1993;1:1-77.

*Chaos*. 2005;15:13502. [PubMed: 15836267]

*Ann Biomed Eng*. 2003;31:577-588. [PubMed: 12757201]

*IEEE Trans Biomed Eng*. 1994;41:743-757. [PubMed: 7927397]

*IEEE Trans Biomed Eng*. 2002;49:1260-1269. [PubMed: 12450356]

*Ann Biomed Eng*. 1997;25:315-334. [PubMed: 9084837]

*Ann Biomed Eng*. 2005;33:590-602. [PubMed: 15981860]

*Am J Physiol Heart Circ Physiol*. 2004;286:H909-H917.

*J Cardiovasc Electrophysiol*. 2002;13:1253-1262. [PubMed: 12521342]

*Chaos*. 2002;12:962-972. [PubMed: 12779620]

*J Clin Invest*. 2004;113:686-693. [PubMed: 14991066]

*Circ Res*. 2008;102:737-745. [PubMed: 18218982]

*IEEE Trans Biomed Eng*. 2006;53:2425-2435. [PubMed: 17153199]

*Circ Res*. 2007;100:e87-e101.

*Circ Res*. 2000;87:E25-E36.

*Heart Rhythm*. 2004;1:334-344. [PubMed: 15851180]

*Philos Transact A Math Phys Eng Sci*. 2006;364:1465-1481. [PubMed: 16766355]

*IEEE Trans Biomed Eng*. 2007;54:389-399. [PubMed: 17355050]

*Prog Biophys Mol Biol*. 2008;96:152-170. [PubMed: 17910889]

*Circ Res*. 1998;82:1063-1077. [PubMed: 9622159]

*Circulation*. 1998;98:1921-1927. [PubMed: 9799214]

*Cardiovasc Res*. 2005;65:851-860. [PubMed: 15721865]

*Am J Physiol*. 1996;271:H548-H561.

*Circ Res*. 2007;101:e103-e112.

*Invest Radiol*. 2007;42:777-789. [PubMed: 18030201]

*Am J Physiol Heart Circ Physiol*. 2006;291:H1088-H1100.

*Ann Biomed Eng*. 1998;26:1022-1035. [PubMed: 9846940]

*Ann NY Acad Sci*. 2006;1380:301-319.

*Philos Transact A Math Phys Eng Sci*. 2009;367:2257-2292. [PubMed: 19414455]

*Circ Res*. 1969;24:339-347. [PubMed: 5766515]

*Am J Physiol*. 1995;269:H571-H582.

*Ann Biomed Eng*. 2000;28:934-944. [PubMed: 11144678]

*Ann NY Acad Sci*. 2005;1047:296-307. [PubMed: 16093505]

*FIMH 2009: Proceedings of the 5th International Conference on Functional Imaging and Modeling of the Heart*. Berlin, Germany: Springer-Verlag; 2009:495-504.

*Level Set Methods and Fast Marching Methods*. Cambridge, United Kingdom: Cambridge University Press; 2002.

*J Electrocardiol*. 2009;42:157.e1-e10. [PubMed: 19181330]

*Am J Physiol Heart Circ Physiol*. 2003;285:H946-H954.

*The Insight Segmentation and Registration Toolkit (version 1.4)*. New York, NY: Kitware Inc.; 2003.

*J Magn Reson B*. 1996;111:209-219. [PubMed: 8661285]

*Circulation*. 2007;115:2006-2014. [PubMed: 17389270]

*Circulation*. 2006;114:1036-1045. [PubMed: 16940196]

*Rev Biomed Eng*. 2003;5:147. [PubMed: 14527312]

*J Biomech*. 2003;36:737-748. [PubMed: 12695004]

*Am J Physiol*. 1991;260:1365-1378.

*IEEE Trans Biomed Eng*. 2009;56:1318-1330. [PubMed: 19203877]

*Prog Biophys Molec Biol*. 1998;69:157-183. [PubMed: 9785937]

*FIMH 2009: Proceedings of the 5th International Conference on Functional Imaging and Modeling of the Heart*. Berlin, Germany: Springer-Verlag; 2009:87-96.

*J Membr Biol*. 2007;218:13-28. [PubMed: 17661127]

*J Cell Sci*. 1991;99:41-55. [PubMed: 1661743]

*Circ Res*. 1989;64:563-574. [PubMed: 2645060]

*Cardiovasc Res*. 2008;80:9-19. [PubMed: 18519446]

*J Physiol*. 1976;255:335-346. [PubMed: 1255523]

*Circ Res*. 1982;50:342-351. [PubMed: 7060230]

*IEEE Trans Biomed Eng*. 1997;44:326-328. [PubMed: 9125816]

*Circ Res*. 1995;76:366-380. [PubMed: 7859383]

*Physiol Rev*. 2004;84:431-488. [PubMed: 15044680]

*Biophys J*. 2008;95:3724-3737. [PubMed: 18641070]

*Crit Rev Biomed Eng*. 1993;21:137-199. [PubMed: 8243090]

*SIAM J Math Anal*. 2006;37:1333-1370.

*Bull Math Biol*. 2009;71:1707-1726. [PubMed: 19412638]

*A Bi-Domain Model for Describing Ischemic Myocardial D-C Potentials* [doctoral dissertation]. Cambridge, MA: MIT; 1978.

*Crit Rev Biomed Eng*. 1992;20:171-210. [PubMed: 1478091]

*IEEE Trans Biomed Eng*. 1994;41:232-240. [PubMed: 8045575]

*Biophys J*. 1997;73:1410-1423. [PubMed: 9284308]

*Prog Biophys Mol Biol*. 1998;69:387-403. [PubMed: 9785947]

*J Cardiovasc Electrophysiol*. 1996;7:424-444. [PubMed: 8722588]

*Ann Biomed Eng*. 1997;25:783-792. [PubMed: 9300102]

*Biophys J*. 1989;55:987-999. [PubMed: 2720084]

*Appl Math Comput*. 2007;184:276-290.

*Eur J Cardiothorac Surg*. 2006;29(Suppl 1):S178-S187.

*Circ Res*. 1993;73:473-481. [PubMed: 8394223]

*IEEE Trans Vis Comput Graph*. 2002;8:1-10.

*Circulation*. 1970;41:899-912. [PubMed: 5482907]

*IEEE Trans Biomed Eng*. 1990;37:1173-1185. [PubMed: 2289791]

*Int J Bioelectromag*. 2001;3:51-58.

*Philos Trans R Soc Lond B Biol Sci*. 1985;307:353-398. [PubMed: 2578676]

*Am J Physiol*. 1998;274:H1163-H1173.

*Ann Biomed Eng*. 2000;28:1343-1351. [PubMed: 11212952]

*J Cardiovasc Electrophysiol*. 2001;12:93-102. [PubMed: 11204092]

*Biophys J*. 2009;97:20-39. [PubMed: 19580741]

*Numer Linear Algebra Appl*. 2004;11:261-277.

*SIAM J Sci Comput*. 2009;31:3861-3883.

*IEEE Trans Biomed Eng*. 2009;56:2404-2412. [PubMed: 19457741]

*J Electrocardiol*. 2003;36(Suppl):69-74.

*Chaos*. 1998;8:234-241. [PubMed: 12779724]

*IEEE Trans Biomed Eng*. 1999;46:1166-1168. [PubMed: 10493080]

*Math Biosci*. 2005;194:233-248. [PubMed: 15854678]

*SIAM J Numer Anal*. 1968;5:506-517.

*Comput J*. 1963:329-331.

*Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems*. 2nd ed. Springer Series in Computational Mathematics. New York, NY: Springer; 2004.

*Comput Methods Biomech Biomed Engin*. 2007;10:317-326. [PubMed: 17852182]

*IEEE Trans Biomed Eng*. 1978;25:389-392. [PubMed: 689699]

*Philos Trans A Math Phys Eng Sci*. 2008;366:3381-3409. [PubMed: 18603526]

*IEEE Trans Biomed Eng*. 2006;53:2139-2147. [PubMed: 17073318]

*IEEE Trans Biomed Eng*. 2009;56:2546-2548. [PubMed: 19237339]

*Biophys J*. 2006;91:1564-1589. [PubMed: 16679365]

*J Cardiovasc Electrophysiol*. 2000;11:785-796. [PubMed: 10921796]

*Ann Biomed Eng*. 1996;24:662-674. [PubMed: 8923986]

*Math Biosci*. 2005;198:169-189. [PubMed: 16140344]

*Chaos*. 2002;12:754-763. [PubMed: 12779604]

*SIAM J Sci Comput*. 2006;28:942-962.

*Ann Biomed Eng*. 2006;34:1088-1097. [PubMed: 16773461]

*Circ Res*. 2004;95:1216-1224. [PubMed: 15528464]

*Circ Res*. 1993;72:744-756. [PubMed: 8443866]

*IEEE Trans Biomed Eng*. 2007;54:585-596. [PubMed: 17405366]

*ACM Trans Math Softw (TOMS)*. 2003;29:110-140.

*Para 2000: Proceedings of the 5th International Workshop on Applied Parallel Computing, New Paradigms for HPC in Industry and Academia*. London, United Kingdom: Springer-Verlag; 2001:121-130.

*A Multigrid Tutorial*. Philadelphia, PA: SIAM Publications; 2000.

*IEEE Trans Biomed Eng*. 2004;51:1960-1968.

*IEEE Trans Biomed Eng*. 2006;53:1265-1272. [PubMed: 16830931]

*Heart Fail Clin*. 2008;4:289-301. [PubMed: 18598981]

*Biophys J*. 2007;93:3714-3726. [PubMed: 17978166]

*Circulation*. 2009;120:467-476. [PubMed: 19635972]

*Biophys J*. 2009;96:1364-1373. [PubMed: 19217854]

*Am J Physiol Heart Circ Physiol*. 2005;289:H350-H360.

*Chaos*. 2002;12:819-828. [PubMed: 12779610]

*Circ Res*. 1979;44:701-712. [PubMed: 428066]

*Heart Rhythm*. 2006;3:862-864. [PubMed: 16818223]

*Circulation*. 1993;87:1152-1168. [PubMed: 8462144]

*N Engl J Med*. 1997;337:1576-1583.

*Ann Biomed Eng*. 1998;26:584-596.

*Pacing Clin Electrophysiol*. 1991;14:227-240. [PubMed: 1706508]

*Heart*. 2005;91:118-125. [PubMed: 15604356]

*J Clin Invest*. 1989;83:1039-1052. [PubMed: 2921316]

*Circ Res*. 1993;72:145-160. [PubMed: 8417837]

*Chaos*. 1998;8:204-220. [PubMed: 12779722]

*Biophys J*. 1999;77:1404-1417. [PubMed: 10465752]

*Circ Res*. 2005;97:168-175. [PubMed: 15976315]

*Chaos*. 1998;8:221-233. [PubMed: 12779723]

*IEEE Trans Biomed Eng*. 1999;46:260-270. [PubMed: 10097461]

*IEEE Trans Biomed Eng*. 1995;42:1174-1184. [PubMed: 8550059]

*Circ Res*. 1999;85:1056-1066. [PubMed: 10571537]

*Am J Physiol*. 1994;266:H2348-H2358.

*Pacing Clin Electrophysiol*. 1994;17:207-212. [PubMed: 7513406]

*J Cardiovasc Electrophysiol*. 1995;6:737-750. [PubMed: 8556194]

*Pacing Clin Electrophysiol*. 1997;20:2154-2162. [PubMed: 9309738]

*J Am Coll Cardiol*. 2003;42:568-575. [PubMed: 12906990]

*J Cardiovasc Electrophysiol*. 2004;15:802-808. [PubMed: 15250866]

*Resuscitation*. 2006;68:251-258. [PubMed: 16325983]

*IEEE Trans Biomed Eng*. 1971;18:410-415. [PubMed: 5115142]

*Am J Physiol Heart Circ Physiol*. 2006;291:H184-H192.

*Radiofizika*. 1988;31:574-582.

*JETP Lett*. 1987;45:767-770.

*J Theor Biol*. 1994;169:101-112. [PubMed: 7934075]

*IEEE Trans Biomed Eng*. 1986;33:974-977. [PubMed: 3770787]

*Pacing Clin Electrophysiol*. 1995;18:512-525. [PubMed: 7777416]

*Am J Cardiol*. 1999;83:24D-33D.

*J Cardiovasc Electrophysiol*. 1997;8:537-547. [PubMed: 9160230]

*Methods Mol Biol*. 2008;423:433-448. [PubMed: 18370220]

*Heart Rhythm*. 2008;5:565-572. [PubMed: 18362024]

*Am J Physiol Heart Circ Physiol*. 2000;279:H466-H474.

*Circ Res*. 2002;91:790-797. [PubMed: 12411393]

*Am J Physiol Heart Circ Physiol*. 2002;283:H2495-H2503.

*J Am Coll Cardiol*. 1993;22:607-614. [PubMed: 8335836]

*Am J Physiol Heart Circ Physiol*. 2008;295:H883-H889.

*Heart Rhythm*. 2007;4:758-765. [PubMed: 17556199]

*Ann Biomed Eng*. 2010;38:456-468. [PubMed: 19876737]

*Heart Rhythm*. 2009;6:1782-1789. [PubMed: 19959130]

*Circ Res*. 1994;74:1071-1096. [PubMed: 7514509]

*Circ Res*. 1997;81:110-119. [PubMed: 9201034]

*Am J Physiol Heart Circ Physiol*. 2004;287:H1046-H1054.

*Cardiovasc Res*. 2000;48:34-43. [PubMed: 11033106]

*Circ Res*. 2007;101:939-947. [PubMed: 17916777]

*Heart Rhythm*. 2007;4:1034-1045. [PubMed: 17675078]