# Biomod/2013/NanoUANL/Simulations

### From OpenWetWare

**SIMULATIONS**

**Capsid simulations (Multiscale Modeling)**

A viral capsid is a system of many proteins, each with many atoms. Simulating such a system therefore faces many obstacles, mainly the resolution of the information and the computational time required, so efforts have focused on simplifying the system. One approach recently proposed to model and simulate such systems is Multiscale Modeling.

This method has been successfully applied to simplify capsid models, but it compromises the detail and resolution with which behavior and functionality are captured. It is a good method for preliminary structural analysis; however, for specific, local phenomena such as charge inversion or electrostatic double layers, useful information may be lost in the simplification.

For this reason we did not rely on this approach, although it would be a good alternative, and a powerful way to save computational effort, if local phenomena turned out to have little impact on a first detailed observation.

**Molecular Dynamics**

**Theory**

Molecular dynamics (MD) emerged as one of the first simulation methods from the pioneering applications to the dynamics of liquids by Alder and Wainwright and by Rahman in the late 1950s and early 1960s. There are two main approaches to MD: classical and quantum.

In the ‘classical’ mechanics approach to MD simulations, molecules are treated as classical objects, closely resembling the ‘ball and stick’ model: atoms correspond to soft balls, and elastic sticks (more accurately, springs) correspond to bonds. The laws of classical mechanics (Newton’s differential equations of motion) define the dynamics of the system.

Quantum MD simulations represent an important improvement over the classical approach for small-scale systems, and they provide information on a number of biological problems. The approach mainly involves finding the electronic wave function by solving the Schrödinger equation. However, it requires far more computational resources, and often demands great oversimplification of the system to keep the effort tractable, losing detail in the results. At present only classical MD is practical for simulating biomolecular systems of many thousands of atoms over nanosecond time scales without oversimplifying them.

**Idea**

In order to study the role of diffusion, electrostatics and particle dynamics, our idea was to simulate the entire system of capsid, solvent, enzyme and ions, with full detail (full atomistic resolution), in a Molecular Dynamics simulation.

The rate of ion movement across the capsid would give us the diffusion constant of the ions, and by observing the concentration inside the capsid and near the enzyme, we could calculate the local concentration that feeds the inputs for a further kinetic study of the enzyme. We could also observe the ion distribution around the enzyme and capsid walls due to electrostatic interactions.

Because of the limited accessible time range, we would not be able to simulate the processes of nucleation and the enzymatic reaction, but we could simulate several small windows of time, covering “photographs” of the whole phenomenon.

Complicated and surprising events may occur, but this method shows the correct dynamical evolution of the system for a given potential and boundary conditions. If an accurate description of the atomic interactions is taken into account, it will be a very accurate representation of the real physical system.

**Why is it difficult? Computational Limitations**

The time limitation is the most severe problem in MD simulations. Relevant time scales for biologically important processes extend over many orders of magnitude; protein folding, for example, may take minutes. While longer simulations may become feasible in the near future, the nanosecond time scale for biosystems comprising several tens of thousands of atoms is the current domain of standard MD simulations.

Molecular dynamics is only one of a number of computer methods available to molecular modelers. Global optimization techniques, free energy methods and alternative approaches to conformational analysis enable applications to a much broader range of problems.

In our case, not only were these problems of great relevance, but so were the correct incorporation and optimization of the simulation parameters: accounting for the right particle interactions, correct charge assignment, and refinement of computational settings. These tasks take a great deal of time and require many trials and tests of the system, which in our case is very large and would take even longer. As this was not affordable within the length of the project, this approach was postponed.

**Potentials**

**Potential energy functions**

One important step in simulations (both for Molecular Dynamics and Monte Carlo Simulations) is the evaluation of the system potential. It involves summing the following potential terms for all the particles:

\begin{equation} U(\vec r) = \sum U_{bonded}(\vec r) + \sum U_{nonbonded}(\vec r) \end{equation}

The bonded potential terms involve 2-, 3-, and 4-body interactions of covalently bonded atoms, with \(O(N)\) terms in the summation. The non-bonded potential terms involve interactions between all pairs of atoms, with \(O(N^2)\) terms in the summation, although fast evaluation techniques such as the particle-mesh Ewald method are used to compute good approximations to their contribution to the potential at \(O(N)\) or \(O(N \log{N})\) computational cost.

**Bonded Potential Energy**

To account for the 2-body interactions, a spring bond potential describes the harmonic vibrational motion between a pair of covalently bonded atoms,

\begin{equation} U_{bond} = k (r_{ij}-r_0)^2 \end{equation}

where \(r_{ij}=||\vec r_j-\vec r_i||\) is the distance between the atoms, \(r_0\) is the equilibrium distance, and \(k\) is the spring constant.
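As a minimal sketch, the 2-body term can be evaluated directly; the function name and the sample constants below are illustrative, not taken from any particular force field:

```python
import math

def bond_energy(k, r0, ri, rj):
    """Harmonic 2-body bond energy U = k * (r_ij - r0)^2 for atoms at ri and rj."""
    r_ij = math.dist(ri, rj)          # ||r_j - r_i||
    return k * (r_ij - r0) ** 2

# Illustrative values: k in kcal/(mol*A^2), distances in Angstroms
u = bond_energy(k=340.0, r0=1.09, ri=(0.0, 0.0, 0.0), rj=(1.19, 0.0, 0.0))
```

Note that the energy is zero at the equilibrium distance and grows quadratically with the stretch, as expected of a spring.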

The 3-body angular bond potential describes the angular vibrational motion occurring between a triple of covalently bonded atoms,

\begin{equation} U_{angle} = k_{\theta} (\theta - \theta_0)^2 + k_{ub} (r_{ik} - r_{ub})^2 \end{equation}

where, in the first term, \(\theta\) is the angle in radians between vectors \(r_{ij}=\vec r_j-\vec r_i\) and \(r_{kj}=\vec r_j-\vec r_k\), \(\theta_0\) is the equilibrium angle, and \(k_{\theta}\) is the angle constant. The second term is the Urey-Bradley term used to describe a (non-covalent) spring between the outer atoms, active when constant \(k_{ub} \neq 0\) , where \(r_{ik}=||\vec r_k-\vec r_i||\) gives the distance between the pair of atoms and \(r_{ub}\) is the equilibrium distance.
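A sketch of this 3-body term (the function name is ours; angles in radians, with the Urey-Bradley spring disabled when \(k_{ub}=0\)):

```python
import math

def angle_energy(k_theta, theta0, k_ub, r_ub, ri, rj, rk):
    """3-body angle term plus the Urey-Bradley 1-3 spring (disabled when k_ub = 0).

    ri, rj, rk are the positions of atoms i, j, k, with j the central atom."""
    a = [p - q for p, q in zip(ri, rj)]      # vector from j to i
    b = [p - q for p, q in zip(rk, rj)]      # vector from j to k
    cos_t = sum(x * y for x, y in zip(a, b)) / (math.hypot(*a) * math.hypot(*b))
    theta = math.acos(max(-1.0, min(1.0, cos_t)))  # clamp for rounding safety
    u = k_theta * (theta - theta0) ** 2
    if k_ub != 0.0:                          # Urey-Bradley spring between i and k
        u += k_ub * (math.dist(ri, rk) - r_ub) ** 2
    return u
```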

The 4-body torsion angle (dihedral angle) potential describes the angular spring between the planes formed by the first three and last three atoms of a consecutively bonded quadruple of atoms,

\begin{equation} U_{tors} = \begin{cases} k\,(1+\cos(n\Psi - \phi)) & \text{if } n > 0 \\ k\,(\Psi - \phi)^2 & \text{if } n = 0 \end{cases} \end{equation}

where \(\Psi\) is the angle in radians between the first-triple plane and the last-triple plane. The integer constant \(n\) is nonnegative and indicates the periodicity. For \(n > 0\), \(\phi\) is the phase shift angle and \(k\) is the multiplicative constant. For \(n = 0\), \(\phi\) acts as an equilibrium angle and the units of \(k\) change correspondingly. A given quadruple of atoms may contribute multiple terms to the potential, each with its own parameterization. The use of multiple terms for a torsion angle allows for complex angular variation of the potential, effectively a truncated Fourier series.
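The multi-term evaluation can be sketched as follows (the \((k, n, \phi)\) values are made up for illustration, not fitted parameters):

```python
import math

def torsion_energy(k, n, phi, psi):
    """One dihedral term: periodic cosine form for n > 0, harmonic for n = 0.

    psi is the dihedral angle (radians) between the two planes."""
    if n > 0:
        return k * (1.0 + math.cos(n * psi - phi))
    return k * (psi - phi) ** 2

# A single quadruple may carry several terms, a truncated Fourier series.
terms = [(1.4, 1, 0.0), (0.3, 3, math.pi)]   # illustrative (k, n, phi) triples
u = sum(torsion_energy(k, n, phi, psi=math.radians(60.0)) for k, n, phi in terms)
```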

**Non-bonded potential energy**

The non-bonded potential terms involve interactions between all non-bonded pairs of atoms. Even using a fast evaluation method the cost of computing the non-bonded potentials dominates the work required for each time step of an MD simulation. The Lennard-Jones potential accounts for the weak dipole attraction between distant atoms and the hard core repulsion as atoms become close,

\begin{equation} U_{LJ} = \epsilon \left[\left(\frac{\sigma}{r_{ij}}\right)^{12}-2 \left(\frac{\sigma}{r_{ij}}\right)^{6}\right] \end{equation}

The parameter \(\epsilon\) sets the depth of the potential well: the minimum value of the term, \(-\epsilon\), occurs at \(r_{ij} = \sigma\). The Lennard-Jones potential approaches 0 rapidly as \(r_{ij}\) increases, so it is usually truncated (and smoothly shifted) to 0 past a cutoff radius, requiring \(O(N)\) computational cost.
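A sketch of the truncated-and-shifted evaluation in the conventional \((\epsilon, \sigma)\) form whose minimum is \(-\epsilon\) at \(r = \sigma\) (reduced units; the function name is ours):

```python
def lj_energy(eps, sigma, r, r_cut):
    """Lennard-Jones pair energy, truncated and shifted so that the
    potential goes continuously to zero at the cutoff radius r_cut."""
    if r >= r_cut:
        return 0.0
    def u(d):
        s6 = (sigma / d) ** 6
        return eps * (s6 * s6 - 2.0 * s6)   # (sigma/r)^12 - 2 (sigma/r)^6
    return u(r) - u(r_cut)                  # shift removes the jump at r_cut
```

The shift `u(r) - u(r_cut)` is what makes the truncation smooth: without it, the energy would jump discontinuously as a pair crosses the cutoff.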

The electrostatic potential is repulsive for atomic charges with the same sign and attractive for atomic charges with opposite signs,

\begin{equation} U_{elec} = \epsilon_{14} \frac{1}{4 \pi \epsilon_0 \kappa}\frac{q_i q_j}{r_{ij}} \end{equation}

where \(q_i\) and \(q_j\) are the charges on the respective atoms. The dielectric constant \(\kappa\) is fixed for all electrostatic interactions. The parameter \(\epsilon_{14}\) is a unitless scaling factor whose value is 1, except for a modified 1-4 interaction, where the pair of atoms is separated by a sequence of three covalent bonds. Although the electrostatic potential may be computed with a cutoff like the Lennard-Jones potential, the \(\frac{1}{r}\) potential approaches 0 much more slowly than the \(\frac{1}{r^6}\) potential, so neglecting the long range electrostatic terms can degrade qualitative results, especially for highly charged systems.
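A sketch of the pairwise Coulomb term (the numeric prefactor, \(1/4\pi\epsilon_0\) expressed in kcal·Å/(mol·e²), and the function name are assumptions of this sketch):

```python
# Coulomb prefactor 1/(4*pi*eps0) in kcal*Angstrom/(mol*e^2), approximately:
COULOMB_K = 332.064

def coulomb_energy(qi, qj, r_ij, kappa=1.0, scale14=1.0):
    """Pairwise electrostatic energy; kappa is the fixed dielectric constant
    and scale14 equals 1 except for modified 1-4 pairs."""
    return scale14 * COULOMB_K * qi * qj / (kappa * r_ij)
```

Like charges give a positive (repulsive) energy, opposite charges a negative (attractive) one, and the slow \(1/r\) decay is why a plain cutoff is risky here.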

**Simulation Considerations**

**Radius and Volume Exclusion**

One thing we needed to include in our simulations was volume exclusion: particles are not treated as dimensionless points. Instead, each particle is given a real radius, which changes the dynamic restrictions and boundary conditions of the problem. This also means we need to specify the \(\sigma\) parameter of the Lennard-Jones potential according to which elements participate in the interaction.

**Fixed atoms**

To reduce computational cost, we need to fix certain groups of atoms within the molecule. This reduces the resolution of information concerning the real dynamics of the problem, but gives us a simpler system. The sets of atoms to be fixed are the viral capsid (centered in the simulation box) and the enzyme (at the center of the capsid).

**Explicit Water and Ions**

To observe full detail of electrostatic phenomena such as charge inversion and double layers, we need to keep our water molecules and ion particles explicit. This means we need to solvate our system before the simulation, adding water molecules to the simulation box to match the required water density, and adding ions to the box until the overall charge of the system is neutralized (all of these steps are already handled by the plugins incorporated in VMD/NAMD).

**Electrostatic focus**

Previous literature shows that the electrostatic environment inside the capsid plays an important role in nucleation and in reactions generally. A small test of charge distribution was done for our capsid, taking into account the Sub-E mutation, which replaces some amino acids of the N-terminal region of the coat protein, changing the overall internal charge to negative. This is expected to enhance the reactions that occur inside the capsid.

It is clear that the capsid's internal charge has changed significantly (it has become negative); the change can be observed around 120 to 130 Å, the location of the inner N-terminal region.

**Monte Carlo**

**Theory**

The Monte Carlo (MC) technique was developed even before MD. The method is, essentially, a statistical approach to the study of differential equations or, more generally, of integro-differential equations that occur in various branches of the natural sciences. A long series of random moves is generated, with only some of them accepted as “correct”. Monte Carlo methods are mainly used for three distinct problems: optimization, numerical integration, and generating samples from a probability distribution.

**Idea**

Similar to the idea behind the MD simulations, our approach with MC simulations is to describe “photographs” of the whole phenomenon, but now looking for the most thermodynamically stable states (lowest-energy conformations) in each one.

Of course, this required code optimization, incorporation of appropriate modules to collect the required information, refinement of the interaction parameters, incorporation of the Lennard-Jones and Coulomb electrostatic potentials, and adjustment of both potentials' units, among other minor steps.

**LJ code**

We followed the basic code structure of a Monte Carlo simulation of Lennard-Jones particles in an NVT ensemble (constant number of particles, volume, and temperature). It integrates the equations of motion using the velocity Verlet algorithm, while temperature is maintained using the Andersen thermostat.

**Basic code modifications**

Several code modifications were made in order to have proper data collection, analysis, and simulation. This mainly involved incorporating collection of particle positions and energies, radii (volume exclusion), charges (and electrostatic interactions), periodic boundary conditions, and the trajectory of the simulation.
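As a rough illustration of the kind of loop such a code runs, here is a generic Metropolis NVT move for Lennard-Jones particles in reduced units, with periodic boundaries via the minimum-image convention. This is a sketch, not our production code; all names and parameters are ours:

```python
import math
import random

def metropolis_lj(positions, box, beta, sigma, eps, max_disp, n_steps):
    """Minimal Metropolis NVT sweep over LJ particles in a cubic periodic box.

    Returns the acceptance ratio. Illustrative sketch only."""
    def pair_u(ri, rj):
        d2 = 0.0
        for a, b in zip(ri, rj):
            dx = a - b
            dx -= box * round(dx / box)       # minimum-image convention
            d2 += dx * dx
        s6 = (sigma * sigma / d2) ** 3
        return eps * (s6 * s6 - 2.0 * s6)

    def energy_of(i, pos):
        return sum(pair_u(pos[i], pos[j]) for j in range(len(pos)) if j != i)

    accepted = 0
    for _ in range(n_steps):
        i = random.randrange(len(positions))  # pick a random particle
        old = positions[i]
        new = [(c + random.uniform(-max_disp, max_disp)) % box for c in old]
        du = -energy_of(i, positions)         # energy change of the trial move
        positions[i] = new
        du += energy_of(i, positions)
        if du <= 0.0 or random.random() < math.exp(-beta * du):
            accepted += 1                     # accept the move
        else:
            positions[i] = old                # reject: restore old position
    return accepted / n_steps
```

The acceptance rule `min(1, exp(-beta*du))` is what makes the chain sample the canonical (NVT) distribution.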

**KMC**

**Theory**

Metropolis Monte Carlo samples configurational space and generates configurations according to the desired statistical-mechanics distribution. However, there is no time in Metropolis MC and the method cannot be used to study evolution of the system or kinetics. An alternative computational technique that can be used to study kinetics of slow processes is the kinetic Monte Carlo (KMC) method.

Also known as the dynamic MC method or the Gillespie algorithm, KMC is a computer simulation method intended to reproduce the time evolution of processes occurring in nature, typically processes that occur with given, known rates. It is important to understand that these rates are inputs to the KMC algorithm; the method itself cannot predict them.

Before starting a KMC simulation we have to make a list of all possible events that can occur during the simulation and calculate a rate for each event. When the rate constants of all processes are known, we can perform a KMC simulation in the time domain. In the case of a single process, the reciprocal of the rate of the process determines the time required for the reaction to occur, and this quantity can be set equal to the KMC time. In the case of a many-particle, multi-process system, however, the introduction of time is less straightforward, and several modifications of the algorithm exist.
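One common way to introduce time is the Gillespie-style step: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed increment. A sketch (function name ours):

```python
import math
import random

def gillespie_step(rates, t):
    """One KMC/Gillespie step: choose event i with probability rates[i]/total,
    then advance time by dt drawn from an exponential with mean 1/total."""
    total = sum(rates)
    x = random.random() * total
    event = len(rates) - 1            # fallback for rounding at the upper edge
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if x < acc:
            event = i
            break
    dt = -math.log(1.0 - random.random()) / total   # exponential waiting time
    return event, t + dt
```

With a single process of rate \(k\), the mean of `dt` is \(1/k\), recovering the reciprocal-rate time described above.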

**Idea**

Our approach was to simulate the behavior of the enzyme and nucleation reactions through a KMC simulation, in order to elucidate the most probable size distribution of reduced and nucleated silver inside the capsid and to provide better inputs for each of our simulations. This method has since been replaced by the formulation and solution of the differential equations describing the kinetics of the single enzyme confined inside the capsid.

**Simulink**

**Theory**

In numerical analysis, the Runge–Kutta methods are an important family of implicit and explicit iterative methods for the approximation of solutions of ordinary differential equations.

This is a robust family of algorithms for solving differential equations, and one of the best options within MATLAB. Relying on it, and on MATLAB's simple Simulink interface, it is possible to study and simulate a system of differential equations to better describe our phenomenon.
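To illustrate the method itself (independently of MATLAB), here is a sketch of one classical 4th-order Runge-Kutta step, applied to simple exponential decay as a stand-in for the kinetic equations:

```python
def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Toy problem standing in for the kinetic ODEs: dy/dt = -y, y(0) = 1,
# whose exact solution at t = 1 is exp(-1).
f = lambda t, y: -y
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(f, t, y, h)
    t += h
```

The local error of each step is \(O(h^5)\), which is why a modest step size already reproduces the exact solution to high accuracy.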

**Idea**

With MATLAB and Simulink, we can solve the differential equations describing the confined single-enzyme kinetics, to gain a better understanding of the complex phenomenon taking place.

**Method, Steps**

Once the equations that adequately describe our phenomenon are obtained, and any needed parameters or restriction equations are proposed, the next step is to translate everything into Simulink and run the model for different parameter sets, to understand how the variables change with time and how the unknown parameters affect this behavior.