Multiscale approach to equilibrating model polymer melts

We present an effective and simple multiscale method for equilibrating Kremer-Grest model polymer melts of varying stiffness. In our approach, we progressively equilibrate the melt structure above the tube scale, inside the tube, and finally at the monomeric scale. We make use of models designed to be computationally effective at each scale. Density fluctuations in the melt structure above the tube scale are minimized through a Monte Carlo simulated annealing of a lattice polymer model. Subsequently the melt structure below the tube scale is equilibrated via the Rouse dynamics of a force-capped Kremer-Grest model that allows chains to partially interpenetrate. Finally the Kremer-Grest force field is introduced to freeze the topological state and enforce correct monomer packing. We generate 15 melts of 500 chains of 10,000 beads for varying chain stiffness, as well as a number of melts with 1,000 chains of 15,000 monomers. To validate the equilibration process we study the time evolution of bulk, collective and single-chain observables at the monomeric, mesoscopic and macroscopic length scales. Extension of the present method to longer, branched or polydisperse chains and/or larger system sizes is straightforward.


I. INTRODUCTION
Computer simulations of polymer melts and networks allow unprecedented insights into the relation between microscopic molecular structure and macroscopic material properties such as the viscoelastic response to deformation, see e.g. [1][2][3][4]. Such simulation studies rely on very large model systems to reliably estimate material properties, and an important obstacle is the generation of large, well-equilibrated model systems for long entangled polymer chains.
What do we mean by equilibrium in the case of a linear homopolymer melt? 1) Polymeric liquids have bulk moduli comparable to that of water, and they are nearly incompressible. Hence in equilibrium, we expect model melt states without significant density fluctuations. 2) Single chains in a melt adopt self-similar random walk statistics because excluded volume interactions are screened at length scales sufficiently large compared to monomeric scales. Hence in equilibrium, we expect model states without significant deviations from random walk statistics. 3) At mesoscopic scales, many polymer chains pervade the same volume, such that chains are strongly topologically entangled. This gives rise to the well-known plateau modulus. [5] Hence in equilibrium, we also require model melts that achieve the correct entanglement density. 4) Finally, at the monomeric scale we require the correct monomeric packing, such that we can expect to run long stable simulations where topology is preserved. Taken together, these constraints couple the conformations on scales that range from monomeric to macroscopic length scales. This makes the problem of generating equilibrated model melts particularly difficult, and the problem is exacerbated in the case of heteropolymers or branched polymers, which are of significant industrial interest.
Brute force equilibration of model polymer materials is typically not feasible. Polymer materials display dynamics over a huge range of time scales. Even for polymers of moderate size, the largest conformational relaxation times are many orders of magnitude beyond what is currently accessible via brute force simulation. Monomeric motion takes place on picosecond time scales, whereas conformational relaxation times can easily reach up to macroscopic time scales. For long linear polymer chains the dominant relaxation mechanism is reptation [6][7][8], which gives rise to relaxation times $\tau \sim N^3$, where $N$ is the number of monomers. [7] In the case of star-shaped polymers, reptation is not possible and the dominant relaxation mechanism becomes contour length fluctuations [9], in which case the relaxation times are $\tau \sim \exp(N_{arm})$, where $N_{arm}$ is the number of monomers in an arm. [10] To our knowledge, there are three major strategies for equilibrating model polymer melts that address the challenges raised above: a) algorithms that attempt to construct equilibrium model melts with the correct large-scale single-chain statistics, b) algorithms that utilize unphysical Monte Carlo (MC) moves to accelerate the dynamics compared to brute force Molecular Dynamics (MD), which simulates the real physical polymer dynamics, and c) algorithms using different models, e.g. utilizing softer potentials and a push-off process that allows chains to cross, to accelerate the relaxation process. In the present approach we combine all of these strategies, but before presenting our approach we review examples of them found in the literature.
It is easy to generate single chain conformations with the desired large scale chain statistics. Equilibration procedures following this approach typically place the resulting chains randomly into the simulation domain. However, when monomer packing is introduced, the presence of density fluctuations in the initial state causes significant local chain stretching and compression. Brown et al. [11] were the first to recognize the importance of such density fluctuations. This was analyzed in detail by Auhl et al. [12], who made two proposals for how to resolve the density fluctuations: either to accelerate the relaxation utilizing double-bridging moves, or to pre-pack the chains in space to avoid density fluctuations. The latter was done using Monte Carlo simulated annealing, accepting only moves that reduce density fluctuations. [12] A completely different approach has been proposed by Gao [13]. The idea is to start from an equilibrated liquid of monomers and then to create bonds between the monomers corresponding to a melt of polymers. This completely sidesteps the issue of density fluctuations, since the monomeric liquid is also incompressible. However, the problem becomes how to identify a set of potential bonds that corresponds to a monodisperse melt of long linear or branched polymers. To reach near complete conversion, Gao had to increase the search distance for the last bonds, and to remove the last monomers that could not be bonded. Whereas Gao performed instantaneous bonding on a frozen monomer liquid, Barrat and coworkers [14] extended the method by allowing the monomers to move during bonding.
This has the effect of enhancing the search distance for bonding. The method still has issues with producing monodisperse melts; Barrat and coworkers solved the problem by aborting the bonding procedure when 80% of the monomers are linked into monodisperse chains, and then removing the remaining 20% of monomers. The resulting states were then compressed to the target pressure, which globally deforms the chain statistics.
Monte Carlo (MC) techniques have the advantage that unphysical moves can be used to accelerate the relaxation dynamics compared to MD techniques, which follow the physical dynamics. A key contribution to the equilibration of polymer melts was the complex MC moves developed by Theodorou and co-workers. [15,16] End-bridging moves work by identifying the end monomer of one chain and an internal monomer of another chain, where the two monomers are in close proximity. The move is performed by cutting the second chain at the internal monomer and attaching the severed part to the end of the first chain. [16,17] Double-bridging moves work by identifying two pairs of bonded monomers in spatial proximity. The move is performed by replacing the two intramolecular bonds by two intermolecular bonds. The result of these moves is a melt conformation with a new chemical connectivity. Compared with end-bridging moves, double bridging preserves the chain length when equivalent bead pairs along the polymer contours are chosen.
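As an illustration, the connectivity swap at the heart of a double-bridging move can be sketched as follows. This is our own simplified stand-in, not the actual algorithm of Refs. [15,16]: the candidate search below simply requires that both proposed new bonds be short, and the swap is made at the same contour position on both chains so that monodispersity is preserved.

```python
import numpy as np

def bridge_candidates(pos_i, pos_j, rcut):
    """Contour indices a where both proposed new bonds would be short:
    bead a of chain i close to bead a+1 of chain j, and bead a of
    chain j close to bead a+1 of chain i."""
    d1 = np.linalg.norm(pos_i[:-1] - pos_j[1:], axis=1)
    d2 = np.linalg.norm(pos_j[:-1] - pos_i[1:], axis=1)
    return np.nonzero((d1 < rcut) & (d2 < rcut))[0]

def double_bridge(chain_i, chain_j, a):
    """Replace the intramolecular bonds (a, a+1) on both chains by two
    intermolecular bonds.  Swapping at the same contour position a on
    both chains preserves the chain lengths (monodispersity)."""
    return (chain_i[:a + 1] + chain_j[a + 1:],
            chain_j[:a + 1] + chain_i[a + 1:])
```

In an actual MC scheme the proposed connectivity would additionally be accepted or rejected, e.g. with a Metropolis criterion on the change in bonded energy.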
Double-bridging moves are the best way currently known to accelerate the polymer dynamics, but the method suffers from two major problems: 1) as the chain length is increased, the density of potential sites for double bridging drops, and 2) the newly proposed connectivity can have a high configurational energy, necessitating further tricks to relax the conformation in order to ensure a reasonable acceptance rate. For instance, it was proposed to not just reconnect monomers, but to grow small bridge segments in order to reduce the conformational energy of the proposed new state. [15] These methods have been used to equilibrate linear melts of polyethylene up to 1000 monomers [15,18], and polydisperse polyethylene melts up to 5000 monomers. [19] They have also been applied to branched molecules [20][21][22] and grafted polymers [23,24].
The first multiscale approach was introduced by Subramanian [25,26], who applied it to linear and branched melts. His idea was to start by equilibrating a coarse representation of the polymer, and to successively rescale the simulation domain while doubling the number of beads in the polymer model. In this way polymer conformations are successively equilibrated on smaller and smaller length scales. A more sophisticated version of hierarchical equilibration has been studied by Zhang et al. [27], where a range of blob chain models were successively fine-grained with force fields that depend on the scale of fine-graining. The most recently proposed equilibration method is that of Moreira et al. [28], who develop the Auhl method further by applying a warm-up procedure in which pair interactions are slowly introduced via a cap on the maximal force as well as a cut-off distance of the pair interactions that is progressively raised, using an elaborate feedback control mechanism during the equilibration process.
Equilibrated melts of atomistic polymer models can be obtained via fine-graining from a coarse-grained polymer model. Theodorou and Suter [29,30] studied polymer melts with atomistic models, which they prepared by growing atomistic polymer chains bond-by-bond in the simulation domain using a Metropolis acceptance criterion, taking non-bonded interactions into account when choosing bond angles. The resulting states were then energy minimized. Carbone et al. [31] produce atomistic polymer melts by generating continuous (non-packed) random walks and fine-graining them using an atomistic polymer model.
For each continuous random walk, a corresponding atomistic polymer chain is created by confining the configuration to follow the continuous random walk, and intra-chain monomeric packing is slowly introduced through a simulation with a soft push-off potential. In a second step, the atomistic polymer chains are placed in the simulation domain, and a second push-off procedure is performed to introduce inter-chain monomeric packing. A similar approach was used by Kotelyanskii et al., but using self-avoiding random walks on a cubic lattice for the initial random walks, which resolves the packing problem. [32] Recently, Sliozberg et al. equilibrated a one-million-atom system of polyethylene using a united-atom model. [33] Similar to Theodorou and Suter, the polymers are grown in the simulation domain, taking chemical structure into account to some extent. The resulting melt conformations are then simulated with a soft DPD-inspired potential to gently introduce excluded volume interactions, until they can be switched to the final united-atom force field.
In the present paper, our aim is to present a new general, simple, and computationally effective method for rapidly generating very large equilibrated polymer melts. We illustrate the method by creating equilibrated monodisperse linear Kremer-Grest (KG) [34] polymer models. This is the standard model for Molecular Dynamics simulations of polymers. The KG model is generic and describes universal polymer properties without attempting to model chemical details of specific polymer species. Chemical detail can be introduced in the KG model by varying the effective chain stiffness, which allows us to use this model for studying universal properties of specific polymer types. [35] Here we follow Auhl et al. [12] and study how to produce equilibrated melts for a wide range of chain stiffnesses. The typical size of the melts we generate in this study comprises 5-15 × 10^6 beads, with up to 15,000 beads or 200 entanglements per chain. These numbers are chosen to be about a factor of five above the state of the art, e.g. [27,28]. However, we are by no means pushing the limits of the present equilibration approach.
We borrow ideas from many of the approaches described above, but with a few twists and improvements. The most important is that we use different polymer models at the different scales. At the tube length scale, we model the polymer as a chain of entanglement blobs on a lattice and minimize spatial density fluctuations using Monte Carlo simulated annealing. Here we use double-bridge moves, which are easy to identify and always accepted.
The lattice conformations are mapped onto a bead-spring model. Subsequently we equilibrate the chain structure at and below the tube length scale using a force-capped force field inspired by dissipative particle dynamics [36,37]. The resulting Rouse dynamics allows chains to pass through each other, and hence to equilibrate the local chain structure. Finally we switch to the Kremer-Grest force field and thermalize the conformations to produce the correct local bead packing. Each of these steps is fast because we use computationally efficient models at each scale. The evolution of an example configuration is shown in Fig. 1 for a melt with M = 1,000 chains of length N_b = 15,000 beads.
The paper is structured as follows: in the short theory section we introduce the basic concepts and quantities characterizing polymer melts. In Sect. III, we define and characterize the three polymer models that we use in the paper. In Sect. IV, we proceed to characterize the equilibration process in terms of single-chain, collective, and bulk observables at microscopic, mesoscopic and macroscopic scales. Finally, we present our conclusions.

II. CHARACTERISTICS OF POLYMER MELTS
Below we introduce the characteristic spatial and temporal scales associated with polymer conformations and their dynamics. At the molecular scale, we can characterize the single-chain statistics in a polymer melt as an ideal random walk, since excluded volume interactions are approximately screened. [38,39] We can characterize chain statistics either in terms of the number of carbon atoms in the backbone or the number of monomers; however, since our target here is the Kremer-Grest bead-spring model, we express conformations in terms of the number of beads $N_b$ per chain. The mean-square end-to-end distance of a chain of $N_b$ beads is then given by

$\langle R^2 \rangle = c_b l_b^2 N_b$,

where $l_b$ is the average bond length, and $c_b$ is the chain stiffness due to bead packing and local chain structure. For $N_b \gg 1$ the chain stiffness is given by

$c_b = \frac{1 + \langle\cos\theta\rangle}{1 - \langle\cos\theta\rangle}$,

where $\theta$ denotes the angle between subsequent bonds. At the Kuhn scale (denoted by subscript "K") the chain statistics becomes particularly simple: it is described by a random walk with contour length $L_K = l_K N_K = l_b N_b$, where the walk consists of $N_K$ Kuhn segments that are statistically independent, i.e. $c_K = 1$ at and above the Kuhn scale.
The Kuhn length can be estimated using

$l_K = c_b l_b \quad\mathrm{with}\quad c_b = 1 + 2\sum_{n>0} \langle\cos\theta(n)\rangle$,

where we have expressed the mean-square end-to-end distance in terms of the bond correlation function $\langle\cos\theta(n)\rangle$ between bonds separated by $n$ bonds along the chain. This correlation function characterizes over how many bonds correlations between bond directions persist. The bond correlation function is easy to sample from simulations.
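The estimator above is straightforward to apply to sampled conformations. The following is a minimal single-chain sketch; in practice one would average the bond correlation function over all chains in the melt and truncate the sum once correlations have decayed.

```python
import numpy as np

def kuhn_length(positions):
    """Estimate the stiffness c_b = 1 + 2*sum_{n>0} <cos theta(n)> by
    summing the bond correlation function, and the Kuhn length
    l_K = c_b * l_b with l_b the mean bond length.
    positions has shape (N_b, 3) for a single chain."""
    bonds = np.diff(positions, axis=0)
    l_b = np.linalg.norm(bonds, axis=1).mean()
    u = bonds / np.linalg.norm(bonds, axis=1, keepdims=True)
    n_bonds = len(u)
    corr = [np.einsum('ij,ij->i', u[:n_bonds - n], u[n:]).mean()
            for n in range(1, n_bonds)]
    c_b = 1.0 + 2.0 * np.sum(corr)
    return c_b, c_b * l_b
```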
To define a mesoscopic length scale due to collective chain effects, we can look at the most characteristic macroscopic material property of a polymer melt - the plateau modulus. Since polymers cannot move through each other, thermal fluctuations are topologically constrained.
This leads to a localization of the thermal fluctuations inside a tube-like region of typical size $d_T$. [40] Each topological entanglement contributes a free energy of $k_B T$, and the plateau modulus is the corresponding free energy density

$G_N = \frac{4}{5}\frac{\rho_K k_B T}{N_{eK}}$.

Here $\rho_K = \rho_b/c_b$ is the number density of Kuhn segments, $\rho_b$ the number density of beads, $k_B$ the Boltzmann constant, and $T$ the temperature. The entanglement length $N_{eK}$ is a measure of the contour length between topological entanglements along the chain. Note that we specify it in Kuhn units, not in beads between entanglements. In the present paper, we generally report results in terms of Kuhn units rather than numbers specific to the Kremer-Grest model. This is to simplify comparisons with theory and experiment, since in Kuhn units a real chemical molecule and one of our model molecules would be characterized by exactly the same numbers, independent of the chosen polymer model. The 4/5 pre-factor is due to the entanglements lost as the stretched chains initially retract into the tube to reestablish their equilibrium contour length. [5] We can relate the length of a tube segment $d_T$ to the number of Kuhn units it contains as $d_T^2 = l_K^2 N_{eK}$, and define $Z = N_K/N_{eK}$ as the number of entanglements/tube segments per chain. Since the tube is a coarse representation of the chain it contains, the large-scale tube and chain statistics must coincide, while below the tube length scale the tube is straight and the chain performs a random walk. In particular, the chain end-to-end distance matches the end-to-end distance of the tube, $\langle R^2\rangle = d_T^2 Z = l_K^2 N_K$. The dynamics of short unentangled polymer melts is described by the Rouse model, [5,41] which also describes the local dynamics of long entangled melts. In this model, a chain is represented by a flexible string of non-interacting units connected by harmonic springs, i.e. each unit represents one Kuhn segment of the polymer.
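The relations above amount to simple arithmetic; a small helper can collect them (a sketch in reduced units with kT = 1; the function name and argument names are ours):

```python
def tube_parameters(rho_b, c_b, l_b, N_eK, N_K, kT=1.0):
    """Entanglement-related quantities: Kuhn density rho_K = rho_b/c_b,
    plateau modulus G_N = (4/5) rho_K kT / N_eK, tube diameter
    d_T = l_K sqrt(N_eK), and entanglements per chain Z = N_K/N_eK."""
    l_K = c_b * l_b
    rho_K = rho_b / c_b
    G_N = 0.8 * rho_K * kT / N_eK
    d_T = l_K * N_eK ** 0.5
    Z = N_K / N_eK
    return G_N, d_T, Z
```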
Besides the forces that arise due to connectivity, each unit also receives a stochastic kick and is affected by a friction force, i.e. the Rouse model is endowed with Langevin dynamics. The combined effect of these two forces is to model the presence of the other chains in the melt. The Rouse model can be solved exactly by transforming it to a mode representation, see e.g. [5]. In particular, the Rouse model predicts the chain centre-of-mass diffusion coefficient $D_{cm}$ and its relation to the Kuhn friction $\zeta_K$ as

$D_{cm} = \frac{k_B T}{N_K \zeta_K}$,

which has the form of a fluctuation-dissipation theorem. This relation can be inverted to derive the Kuhn friction from a measured diffusion coefficient. The fastest dynamics is that associated with the diffusive motion of an individual Kuhn segment over one Kuhn length, i.e. $\tau_K \sim l_K^2 D_K^{-1} \sim \zeta_K l_K^2/(k_B T)$. A more careful derivation using Rouse theory provides the prefactor as $\tau_K = \zeta_K l_K^2/(3\pi^2 k_B T)$. In the case of entangled melts, we can define the entanglement time, which is the characteristic time it takes an entangled chain segment to diffuse the length of a tube segment,

$\tau_e \sim \tau_K N_{eK}^2$,

and with prefactors from Rouse theory

$\tau_e = \frac{\zeta_K l_K^2 N_{eK}^2}{3\pi^2 k_B T} = \tau_K N_{eK}^2$.

The entanglement time is typically much larger than the fundamental Kuhn time. The conformational relaxation times due to reptation (linear polymers) or contour length fluctuations (star polymers) are again typically much larger than the entanglement time.
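The hierarchy of time scales above can be collected in a small helper. The prefactor conventions follow the reconstruction given in the text and should be treated as a sketch:

```python
import math

def rouse_scales(zeta_K, l_K, N_K, N_eK, kT=1.0):
    """Rouse centre-of-mass diffusion coefficient, Kuhn time and
    entanglement time:  D_cm = kT/(N_K zeta_K),
    tau_K = zeta_K l_K^2 / (3 pi^2 kT),  tau_e = tau_K N_eK^2."""
    D_cm = kT / (N_K * zeta_K)
    tau_K = zeta_K * l_K ** 2 / (3 * math.pi ** 2 * kT)
    tau_e = tau_K * N_eK ** 2
    return D_cm, tau_K, tau_e
```

Inverting the first relation, a measured $D_{cm}$ yields the Kuhn friction $\zeta_K = kT/(N_K D_{cm})$, as noted in the text.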
The Kuhn length is a microscopic single-chain property, and the tube diameter is a collective mesoscale property that is typically associated with pair-wise entanglements. [42] In order to characterize bulk large-scale melt properties, and in particular density fluctuations, we use the structure factor, defined as

$S(q) = \frac{1}{M N_b}\left\langle \Big|\sum_{j=1}^{M}\sum_{k=1}^{N_b} e^{i\mathbf{q}\cdot\mathbf{R}_{jk}}\Big|^2 \right\rangle$,

where $\mathbf{q}$ is the momentum transfer in the scattering process, $M$ denotes the number of polymers, and $\mathbf{R}_{jk}$ the position of the k'th bead in the j'th polymer. We assume for notational simplicity that all polymers have the same number of beads. When performing simulations with periodic boundary conditions, we are limited to momentum transfers on the reciprocal lattice of the simulation box, i.e. $\mathbf{q}$ vectors of the form $\mathbf{q} = (2\pi n_x/L, 2\pi n_y/L, 2\pi n_z/L)$, where $L$ denotes the box size. Since the melts are isotropic, we average and bin the structure factor based on the magnitude of the momentum transfer vector, denoted $q = |\mathbf{q}|$. For small q values the structure factor converges to $\lim_{q\to 0} S(q) = \chi_T \rho k_B T$, where $\chi_T$ is the isothermal compressibility of the melt. For a further discussion of density fluctuations and compressibility, we refer to the more detailed derivations in the appendix.
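A direct evaluation of S(q) on the reciprocal lattice, binned by |q|, might look as follows. This is a sketch; production analyses of melts with millions of beads would typically restrict the set of q vectors or use FFT-based density grids.

```python
import numpy as np

def structure_factor(R, L, nmax=3):
    """S(q) = |sum_jk exp(i q.R_jk)|^2 / (M N_b), evaluated on the
    reciprocal lattice q = 2 pi (nx, ny, nz)/L of a cubic box of side L
    and averaged over q vectors of equal magnitude.
    R has shape (M, N_b, 3): M chains of N_b beads."""
    M, N, _ = R.shape
    flat = R.reshape(-1, 3)
    acc = {}
    rng = range(-nmax, nmax + 1)
    for nx in rng:
        for ny in rng:
            for nz in rng:
                if nx == ny == nz == 0:
                    continue  # skip q = 0 (forward scattering)
                q = 2 * np.pi * np.array([nx, ny, nz]) / L
                rho_q = np.exp(1j * flat @ q).sum()
                key = round(float(np.linalg.norm(q)), 9)
                acc.setdefault(key, []).append(abs(rho_q) ** 2 / (M * N))
    qs = sorted(acc)
    return np.array(qs), np.array([np.mean(acc[k]) for k in qs])
```

For uncorrelated (ideal-gas) bead positions S(q) averages to one, which provides a quick sanity check.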

III. POLYMER MODELS
In the following, we define and characterize the three polymer models employed in the present study: we begin with the Kremer-Grest model (Sec. III A), then introduce a force-capped variant of the Kremer-Grest model (fcKG) (Sec. III B), and finally introduce a model where chains are represented as strings of entanglement blobs on a lattice (Sec. III C).
We also characterize the Kuhn length for both the KG and fcKG models (sec. III D), the tube diameter for the Kremer-Grest model (sec. III E), and finally the Kuhn friction of the fcKG model (sec. III F). These relations are required to transfer melt conformations between the different polymer models, and to determine how long a Rouse simulation is required for the equilibration process.

A. Kremer-Grest polymer model
The end goal of the present equilibration procedure is to produce an equilibrated Kremer-Grest model melt. [34,43] This is a generic bead-spring polymer model, where all beads interact via the purely repulsive Weeks-Chandler-Andersen (WCA) potential

$U_{WCA}(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} + \frac{1}{4}\right]$ for $r < 2^{1/6}\sigma$ and zero otherwise,

while springs are modeled by the finitely extensible nonlinear elastic (FENE) potential

$U_{FENE}(r) = -\frac{kR^2}{2}\ln\left(1 - \frac{r^2}{R^2}\right)$,

where we choose $\epsilon$ and $\sigma$ as the units of energy and distance, respectively. The unit of time is $\tau = \sigma\sqrt{m_b/\epsilon}$, where $m_b$ denotes the mass of a bead. We add an additional bending interaction given by

$U_{bend}(\Theta) = \kappa\epsilon\left(1 - \cos\Theta\right)$,

where $\Theta$ denotes the angle between subsequent bonds. The bending potential was introduced by Faller and Müller-Plathe. [44][45][46] The KG models are simulated using Langevin dynamics, which couples all beads to a thermostat and allows long simulations at constant temperature to be performed with reasonably large time steps. The Langevin dynamics is given by the conservative force due to pair and bond interactions, as well as a friction term and a stochastic force term:

$m_b \ddot{\mathbf{R}}_n = -\nabla_n U - \Gamma\dot{\mathbf{R}}_n + \boldsymbol{\xi}_n(t)$,

where the stochastic force obeys $\langle\boldsymbol{\xi}_n\rangle = 0$ and $\langle\boldsymbol{\xi}_n(t)\cdot\boldsymbol{\xi}_m(t')\rangle = 6kT\Gamma\delta(t-t')\delta_{nm}$. The standard choice for the FENE bonds is $R = 1.5\sigma$ and $k = 30\epsilon\sigma^{-2}$, which produces a bond length of $l_b = 0.965\sigma$ [25]. Hence the number of beads per Kuhn unit is $c_b = l_K(\kappa)/l_b$. The standard value for the thermostat coupling is $\Gamma = 0.5 m_b\tau^{-1}$. KG model melts are typically simulated at a bead density of $\rho_b = 0.85\sigma^{-3}$. We use a time step of $\Delta t = 0.01\tau$. For integrating the dynamics of our force field, we utilize the Grønbech-Jensen/Farago Langevin integration algorithm [47,48] implemented in the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). [49] The KG model preserves topological entanglements via a kinetic barrier of about 75kT for chain pairs to move through each other. [50] This is due to the strong repulsive pair interaction in combination with a strongly attractive bond potential that diverges when bonds are stretched towards the maximal distance R. Preserving topological entanglements is essential for reproducing the plateau modulus.
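The KG interactions can be written down directly. The sketch below (in reduced units ε = σ = 1; function names are ours) reproduces the standard potentials and can be used to verify, e.g., that the combined bond potential has its minimum near the quoted bond length:

```python
import numpy as np

def u_wca(r, eps=1.0, sigma=1.0):
    """Purely repulsive WCA pair potential, truncated and shifted to
    zero at its minimum r = 2^(1/6) sigma."""
    sr6 = (sigma / np.asarray(r, dtype=float)) ** 6
    u = 4 * eps * (sr6 ** 2 - sr6 + 0.25)
    return np.where(np.asarray(r) < 2 ** (1 / 6) * sigma, u, 0.0)

def u_fene(r, k=30.0, R=1.5):
    """Attractive FENE spring, diverging as the bond length approaches R."""
    return -0.5 * k * R ** 2 * np.log(1.0 - (np.asarray(r) / R) ** 2)

def u_bend(theta, kappa):
    """Cosine bending potential controlling the chain stiffness kappa."""
    return kappa * (1.0 - np.cos(theta))
```

With the standard parameters k = 30 and R = 1.5, the sum u_wca + u_fene acting on bonded beads has its minimum near r ≈ 0.96σ, consistent with the bond lengths quoted in the text.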
B. Force-capped Kremer-Grest polymer model

The lattice melt configurations have the correct large-scale chain statistics, but as we will show later, the density of entanglements is much too low; hence directly switching from a lattice configuration to a topology-preserving KG polymer model would produce model melts with a wrong entanglement density. We therefore need a computationally effective model to introduce the correct random walk statistics below the tube diameter, and hence produce the correct entanglement density, before switching to the KG model.
The force-capped Kremer-Grest model (fcKG) solves this problem by 1) performing Rouse-like dynamics to introduce local random-walk chain statistics, 2) preventing the growth of density fluctuations, 3) avoiding the numerical instabilities due to short pair distances or long bonds, which can occur in the lattice melt state or during the Rouse dynamics of the fcKG model, and finally 4) approximating the ground state of the KG force field such that we can switch to that force field with a minimum of computational effort.
Inspired by dissipative particle dynamics [36,37] and previous equilibration methods [27,28], we apply a force cap to the WCA potential as follows:

$U^{cap}_{WCA}(r) = \begin{cases} U_{WCA}(r_c) - (r - r_c)\, U'_{WCA}(r_c) & r \le r_c \\ U_{WCA}(r) & r > r_c \end{cases}$

The inner cutoff distance $r_c$ determines the potential at overlap. We choose $U^{cap}_{WCA}(r=0) = 5\epsilon$, which corresponds to an inner cutoff of $r_c = 0.9558 \times 2^{1/6}\sigma$. For the bond potential, we choose a fourth-order Taylor expansion of the sum of the original WCA and FENE bond potentials around the equilibrium distance ($r_0 = 0.9609\sigma$). The resulting bond potential is

$U_{bond}(r) = U(r_0) + \sum_{n=2}^{4} \frac{U^{(n)}(r_0)}{n!}\,(r - r_0)^n \quad\mathrm{with}\quad U = U_{WCA} + U_{FENE}$.

Finally, we retain the bending potential and simulate the fcKG model with exactly the same Langevin dynamics as the full KG model.
We avoid numerical instabilities by using the Taylor-expanded bond potential in the fcKG model rather than the FENE and WCA potentials between bonded beads of the KG model. As a result, the numerical stability of the force-capped model is considerably improved for both very short and very long bonds. We can simulate the lattice melt states directly (after a simple energy minimization) without requiring any elaborate push-off or warm-up procedures to gradually change the force field. Since the force-capped model also approximates the ground state of the full KG model, we can likewise switch force-capped melt configurations to the full KG force field using simple energy minimization, again avoiding a delicate push-off or warm-up procedure for this change of force field. Furthermore, we expect an increased bead mobility while the local single-chain structure remains mostly unaffected. Note that in the KG model the WCA interaction is applied between all bead pairs; in the fcKG model, however, the WCA potential is already included in the Taylor-expanded bond potential above, hence the pair interaction is limited to non-bonded beads. Compared to the KG model, the force cap reduces the energy barrier for chain crossing from 75kT down to 7.5kT.
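The linear force cap can be sketched as follows (our own reading of the construction: the potential and its derivative are continued linearly below r_c, so the repulsive force saturates at its value at r_c):

```python
def wca_uf(r, eps=1.0, sigma=1.0):
    """WCA energy and force magnitude F = -dU/dr at separation r
    (valid for r below the WCA cutoff 2^(1/6) sigma)."""
    sr6 = (sigma / r) ** 6
    u = 4 * eps * (sr6 ** 2 - sr6 + 0.25)
    f = 24 * eps * (2 * sr6 ** 2 - sr6) / r
    return u, f

def u_wca_capped(r, rc, eps=1.0, sigma=1.0):
    """Below the inner cutoff rc the potential is continued linearly,
    U(r) = U(rc) + (rc - r) F(rc), capping the maximal pair force at
    F(rc) and keeping the energy at full overlap (r = 0) finite."""
    u_rc, f_rc = wca_uf(rc, eps, sigma)
    if r >= rc:
        return wca_uf(r, eps, sigma)[0]
    return u_rc + (rc - r) * f_rc
```

With the inner cutoff r_c = 0.9558 × 2^(1/6)σ quoted above, this construction indeed yields U_cap(0) ≈ 5ε.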

C. Lattice blob model
We coarse-grain space into a lattice with spacing a corresponding to the tube segment length d_T. The polymers become random walks on this lattice. Since multiple chains pervade an entanglement volume, multiple blobs can occupy the same lattice site. We utilize the recently published lattice polymer model of Wang [52]. This model has the computational advantage that the Hamiltonian does not include pair interactions, which makes it computationally very effective. The Hamiltonian comprises an incompressibility term and a configuration term, schematically of the form

$H = \frac{\chi}{2\bar{n}}\sum_c \left(n_c - \bar{n}\right)^2 + \sum_{\mathrm{polymers}}\sum_{\mathrm{bends}} E(\theta)$,

where the first term is a sum over all sites, while the second is a sum over all polymers. $n_c$ denotes the blob occupation number at site c, while $\bar{n} \approx n_e$ is the average number of blobs per site.
The parameter χ plays the role of a compressibility [53,54] and hence allows us to introduce incompressibility gradually to remove large-scale density fluctuations. In the configuration term we sum over bond angles in the chains; the three possible energies represent anti-parallel, orthogonal, and parallel successive bonds with their respective penalties. The Hamiltonian requires meaningful site occupation numbers $n_c \gg 1$. When the opposite limit $n_c \approx 1$ is approached, the incompressibility constraint converges to an excluded volume constraint and blobs to single monomers. Matching the lattice spacing to the tube diameter produces $\bar{n} \approx 19$, which offers a reasonable compromise.
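A schematic implementation of such a Hamiltonian is given below. This is our own simplified stand-in for Wang's model, with hypothetical energy parameters e_anti, e_orth, e_par for the three bend classes; for brevity only occupied sites enter the incompressibility sum here.

```python
from collections import Counter
import numpy as np

def lattice_energy(chains, chi, nbar, e_anti, e_orth, e_par):
    """Incompressibility term penalizing occupation fluctuations on the
    occupied sites, plus a configuration term assigning an energy to
    each pair of successive bonds (anti-parallel, orthogonal or
    parallel).  chains is a list of lists of integer lattice sites."""
    occ = Counter(tuple(site) for ch in chains for site in ch)
    e_incomp = chi / (2.0 * nbar) * sum((n - nbar) ** 2 for n in occ.values())
    e_conf = 0.0
    for ch in chains:
        bonds = np.diff(np.asarray(ch), axis=0)
        for b1, b2 in zip(bonds[:-1], bonds[1:]):
            d = int(np.dot(b1, b2))  # unit lattice steps: -1, 0 or +1
            e_conf += {-1: e_anti, 0: e_orth, 1: e_par}[d]
    return e_incomp + e_conf
```

In a simulated annealing run, only the energy difference of a proposed move enters the Metropolis criterion, so the two terms would be evaluated incrementally rather than from scratch.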

D. Kuhn lengths of both KG models
In order to have the same chain statistics, and in particular the same Kuhn length, for the force-capped and full KG models, we need to estimate how the Kuhn length of each model changes with stiffness.
Theoretically predicting the Kuhn length of a polymer model with pair interactions is a highly non-trivial problem. While excluded volume interactions are approximately screened in melts (the Flory ideality hypothesis [38,39]), melts deviate from polymers in Θ-solutions due to their incompressibility. The incompressibility constraint creates a correlation hole, which leads to a long-range net repulsive interaction between polymer blobs along the chain; this effectively renormalizes the chain stiffness upwards. [56][57][58][59][60] To circumvent this problem, we have brute-force equilibrated medium-length entangled melts with M = 2000 chains of length N_b = 400 beads while systematically varying the stiffness parameter for both the KG and fcKG models. Each initial melt conformation was simulated for at least 2 × 10^5 τ while performing double-bridging hybrid MC/MD simulations [15,18,20,24] using the bond/swap fix in LAMMPS [61]. Ten to twenty configurations from the last 5 × 10^4 τ of the trajectory were used to estimate the Kuhn length.
We choose the chain length as a compromise between having as many Kuhn segments as possible and having an acceptable double-bridging acceptance rate. While double-bridging moves are very efficient at removing correlations between the chain conformations, the acceptance rate drops significantly with chain length, since the potential cross-over points are progressively diluted when requiring that the melt remains monodisperse. The Kuhn lengths were derived using Eq. 2.
The resulting Kuhn lengths are shown in Fig. 3. As expected, the Kuhn length grows as the stiffness parameter is increased. The stiffness of the fcKG and KG models differs slightly, due to the additional stiffness introduced by the WCA pair interaction between next-nearest neighbors along the chain, which is absent in the force-capped model. Using the extrapolations shown in Fig. 3, we can numerically solve for the force-capped model stiffness κ_fc required to reproduce an equivalent KG model with stiffness parameter κ. The result is shown in Fig. 3 and is given by an empirical relationship valid for κ ∈ [−1, 2.5].
This corresponds to using not entanglement blobs, but rather blobs with a fixed number of beads (100), independent of chain stiffness.
We have performed primitive-path analysis (PPA) of the melt states. [4] During the PPA, a melt conformation is converted into the topologically equivalent primitive-path meshwork characterizing the tube structure. We have performed a version of the PPA which preserves self-entanglements by only disabling pair interactions between beads within a chemical distance of 2c_b N_eK bonds. [50] The minimization was performed using the steepest-descent algorithm implemented in LAMMPS, followed by dampened Langevin dynamics as described in Ref. [4].
valid within the range κ_fc = −1, . . . , 2.5. The entanglement time varies from 1900τ down to 160τ as chains get stiffer. To simplify the notation, we define τ_e(κ) ≡ τ_e(κ_fc(κ)) below. This does not lead to confusion, since in the present paper entanglement times always refer to Rouse simulations of the force-capped model, and the aim of running the fcKG model is to equilibrate the corresponding KG model with a specific stiffness κ. After the lattice annealing, the Rouse simulation should introduce random chain structure at progressively larger and larger scales. Before starting the Rouse simulation, the energy of the final lattice conformation is minimized with the force-capped force field. The melt configuration after 0.1τ_e is shown in Fig. 1; at this stage the lattice structure is still visible.
However, after 1τ_e of Rouse simulation the polymers appear to have adopted random-walk conformations on the tube scale, and no sign of the lattice structure remains. After having simulated Rouse dynamics with the fcKG model for a number of entanglement times τ_e, we expect that the correct chain statistics have been established on all scales.
To switch a fcKG melt conformation to the KG force field, it is first energy minimized using the full WCA pair interaction, but retaining the fcKG bond potential. Subsequently, the bond potential is changed to the sum of the FENE and WCA potentials, and the energy is minimized again. The resulting melt conformation is then simulated for 5 × 10^4 MD steps at T = 1 with the full KG force field to equilibrate the local bead packing. We denote this procedure the KG warm-up. The resulting KG configuration is also shown in Fig. 1; it shows no discernible difference compared to the fcKG melt state at 1τ_e. For comparison, the largest melts produced in Refs. [28,63] were 1,000 chains of length 2,000 beads. We produced eight melts using the full lattice Hamiltonian described above, five melts without the incompressibility term, and three melts without the configuration term. With these variations of the annealing procedure, we can illustrate why both the incompressibility and configuration terms are required. The lattice states were simulated with the same Rouse simulation, but we have also performed the KG warm-up at different times during the Rouse simulation to study how this impacts the resulting KG melts. Below we characterize the 1,000 × 15,000 melt states unless specifying a chain stiffness κ, in which case the observables are calculated for the 500 × 10,000 melt states. During the annealing, the acceptance probability shows a clear step structure. Below the transition temperature, the equilibration slows down considerably and the step-like structure of the acceptance probability is lost. The acceptance rate remains clearly above 20% even below the transition temperature. This is primarily due to the end-bridging moves, which are attempted with 20% probability.
The local chain dynamics becomes frozen while the global chain state remains dynamic, since double-bridge moves are still accepted even below the transition temperature. Figure 8 shows how the chain conformations evolve during the annealing process. We describe the large-scale properties by the ratio of the mean-square end-to-end distance to the mean-square radius of gyration, which for a random walk should be about six. [39] We observe that at large scales the melt conformations remain random-walk like during the whole annealing process.
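The value of about six quoted above for the ratio $\langle R_e^2\rangle/\langle R_g^2\rangle$ of an ideal random walk is easy to verify numerically; a minimal sketch with Gaussian steps:

```python
import numpy as np

rng = np.random.default_rng(0)
n_chains, n_steps = 2000, 200

# generate ideal random walks with unit Gaussian steps in 3D
steps = rng.normal(size=(n_chains, n_steps, 3))
pos = np.cumsum(steps, axis=1)

# squared end-to-end distance per chain
re2 = np.sum((pos[:, -1] - pos[:, 0]) ** 2, axis=-1)

# squared radius of gyration per chain
com = pos.mean(axis=1, keepdims=True)
rg2 = np.mean(np.sum((pos - com) ** 2, axis=-1), axis=-1)

ratio = re2.mean() / rg2.mean()  # approaches 6 for long ideal chains
```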
Furthermore, the scatter of the curves below the transition temperature again shows that the MC moves keep generating new conformations in the search for a better minimum.
The chain stiffness $c_L$ characterizes the blob chain angle statistics at the tube scale. It should be unity for random walks, where subsequent steps are statistically uncorrelated. For melts with the configuration term this is seen to be the case after some transients around the transition temperature; however, we see a slight but systematic increase in the chain stiffness for melts without the configuration term. This could be due to the incompressibility constraint acting as a weak excluded volume even at occupation numbers of $n \approx 19$, and hence leading to a small degree of swelling. Alternatively, it is known that the Flory ideality hypothesis is only approximately true even for dense melts: the incompressibility constraint leads to a correlation hole of density fluctuations, which has been shown to give rise to an effective weakly repulsive intra-molecular interaction. [56][57][58][59][60] Both effects lead to swelling, and the severity of the swelling is likely to depend on the rate at which the simulated annealing process is quenched. We have opted for adding the additional configuration term to the lattice Hamiltonian to ensure that the lattice conformations show the desired random walk statistics at the smallest scales.

Figure 9 shows the impact of incompressibility on the lattice melt conformations. We have averaged the structure factor over several statistically independent melts to improve the statistics. At the lowest $q$ values, the structure factor characterizes density fluctuations on the scale of the whole simulation domain, whereas the highest $q$ values reflect density fluctuations at the smallest scales.

In the approach of Auhl et al. [12], the push-off produced a peak in the MSID due to local chain stretching caused by density fluctuations, which was mitigated by the introduction of a pre-packing procedure. Here our fcKG model has been designed to perform this pre-packing on scales below the tube diameter during the Rouse simulation.
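As a sketch of how such a structure factor can be measured from bead coordinates, the following computes $S(\mathbf{q}) = \langle|\rho(\mathbf{q})|^2\rangle/N$ for wave vectors commensurate with a cubic periodic box. The function name, the cutoff `n_max`, and the sorting by $|\mathbf{q}|$ are illustrative choices, not the exact analysis code used in this work.

```python
import numpy as np

def structure_factor(positions, box_length, n_max=6):
    """Compute S(q) = |rho(q)|^2 / N for the wave vectors q = 2*pi*k/L
    commensurate with a cubic periodic box, sorted by magnitude |q|."""
    n = positions.shape[0]
    # integer wave vector grid, excluding q = 0
    grid = np.arange(-n_max, n_max + 1)
    kvecs = np.array([(i, j, k) for i in grid for j in grid for k in grid
                      if (i, j, k) != (0, 0, 0)])
    qvecs = 2.0 * np.pi * kvecs / box_length
    # rho(q) = sum_j exp(i q . r_j), evaluated for all q at once
    phases = positions @ qvecs.T              # shape (N, n_q)
    rho_q = np.exp(1j * phases).sum(axis=0)
    s_q = np.abs(rho_q) ** 2 / n
    q_mag = np.linalg.norm(qvecs, axis=1)
    order = np.argsort(q_mag)
    return q_mag[order], s_q[order]
```

For an ideal gas of uncorrelated points, $S(q) \approx 1$ for all $q \neq 0$, which provides a convenient sanity check; density correlations show up as deviations from this baseline.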
The MSID is a single-chain observable; we can also take melt configurations at various times along the Rouse dynamics simulation and submit them to primitive path analysis (PPA) to estimate the topological evolution of the melt. The entanglement length has been shown to be quite sensitive to the equilibration procedure, since chain stretching during the equilibration of badly prepared samples artificially increases the entanglement density. [64] The result is shown in

Finally, we perform a fast warm-up to the Kremer-Grest force field to establish the correct local bead packing. We have characterized the involved models in order to transfer melt states between them for varying chain stiffnesses. By measuring the Rouse friction of the fcKG model, we have also estimated the simulation time required for the equilibration of chain structure inside the tube, which was shown to be strongly dependent on chain stiffness.
We have also characterized and validated the equilibration process in terms of 1) single chain observables such as mean-square internal distances, and 2) collective mesoscopic melt properties such as the evolution of the entanglement length during the Rouse simulation.

The present method is essentially independent of e.g. chain length and polymer architecture. These only impact the lattice annealing stage of the equilibration procedure, which is the fastest part of the equilibration process. The Rouse simulation and KG warm-up are completely independent of the chain structure and composition. Hence the present method can be used directly to equilibrate e.g. polymer melts of stars or mixtures of different polymer architectures. Furthermore, the equilibrated KG melt configurations produced by the present approach can be fine-grained further to act as starting points for atomistic simulations of polymer melts.
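The mean-square internal distances used as a single-chain observable above can be computed from unwrapped chain coordinates as follows; this is a generic sketch, not the exact analysis code used in this work:

```python
import numpy as np

def mean_square_internal_distances(chains):
    """MSID(n) = <(R_{i+n} - R_i)^2> / n, averaged over all chains and all
    monomer pairs separated by n bonds. For an ideal chain this plateaus
    at the squared effective bond length for large n."""
    n_chains, n_beads, _ = chains.shape
    msid = np.empty(n_beads - 1)
    for n in range(1, n_beads):
        d = chains[:, n:] - chains[:, :-n]          # all pairs (i, i+n)
        msid[n - 1] = np.mean(np.sum(d ** 2, axis=-1)) / n
    return msid
```

Deviations from the plateau at intermediate $n$ are exactly the signature of local chain stretching or compression that the pre-packing discussed above is designed to avoid.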
With simple and computationally efficient equilibration approaches such as the one presented here, access to well-equilibrated melts is no longer the computational bottleneck for studies of material properties; rather, the limitation is the computational effort required to estimate the material properties of these huge systems.

VI. ACKNOWLEDGEMENTS
Computation/simulation for the work described in this paper was supported by the DeiC National HPC Center at University of Southern Denmark.

VII. APPENDIX
Below we derive predictions for the structure factor due to the density fluctuations created by randomly inserting polymers in the simulation box, and the structure factor after equilibration of density fluctuations and its relation to the compressibility.
We define the microscopic density field as
$$\rho(\mathbf{R}) = \sum_{j=1}^{M} \sum_{k=1}^{N_b} \delta(\mathbf{R} - \mathbf{R}_{jk}),$$
where $\delta$ denotes the Dirac delta function. The Fourier transform of the density field is
$$\rho(\mathbf{q}) = \sum_{j=1}^{M} \sum_{k=1}^{N_b} \exp(i\mathbf{q} \cdot \mathbf{R}_{jk}),$$
such that the structure factor can be written in terms of $\langle|\rho(\mathbf{q})|^2\rangle$.

To derive the structure factor after equilibration of density fluctuations, we start by expressing the structure factor in terms of spatially varying densities. From the right-hand side of eq. 12 we get
$$S(q) = \int d\mathbf{R}_1\, d\mathbf{R}_2\, \langle\rho(\mathbf{R}_1)\rho(\mathbf{R}_2)\rangle \exp(i\mathbf{q}\cdot(\mathbf{R}_1 - \mathbf{R}_2)) = \int d\mathbf{R}_1\, d\mathbf{R}_2\, \exp(i\mathbf{q}\cdot(\mathbf{R}_1 - \mathbf{R}_2))\, \langle\delta\rho(\mathbf{R}_1)\delta\rho(\mathbf{R}_2)\rangle = \int d\mathbf{R}\, \exp(i\mathbf{q}\cdot\mathbf{R})\, \langle\delta\rho(0)\delta\rho(\mathbf{R})\rangle \quad (13)$$
where in the second equality we have replaced $\rho(\mathbf{R}) \to \delta\rho(\mathbf{R}) = \rho(\mathbf{R}) - \langle\rho\rangle$. The constant average density gives rise to a contribution proportional to a Dirac delta function, which can be neglected for $q > 0$. In the third equality we have furthermore assumed translational invariance.
Let us assume a local Hamiltonian for density fluctuations, $H(\delta\rho) = \delta\rho^2/(2\chi_\rho)$, for a particular position analogous to a site in the lattice model. The Boltzmann probability of a given density fluctuation is $P(\delta\rho) \propto \exp(-H/kT)$, and hence $\langle\delta\rho^2\rangle = \chi_\rho kT$ by the equipartition theorem. We assume that the density fluctuations at different sites are statistically independent, which in practice is valid for sufficiently large distances, i.e. small values of $q$. The density fluctuation correlation function is then $\langle\delta\rho(0)\delta\rho(\mathbf{R})\rangle = \chi_\rho kT\, \delta(\mathbf{R})$.
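As a numerical sanity check, the equipartition result $\langle\delta\rho^2\rangle = \chi_\rho kT$ for this quadratic Hamiltonian can be verified by Metropolis sampling of the Boltzmann weight; the value of $\chi_\rho kT$ below is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(42)
chi_kT = 0.25      # assumed value of chi_rho * kT for this sketch
n_steps = 100_000

# Metropolis sampling of P(drho) ~ exp(-H/kT) with H = drho^2 / (2 chi_rho)
samples = np.empty(n_steps)
x = 0.0
for i in range(n_steps):
    trial = x + rng.uniform(-0.5, 0.5)
    # accept with probability min(1, exp(-(H_trial - H_old)/kT))
    if rng.random() < np.exp(-(trial ** 2 - x ** 2) / (2.0 * chi_kT)):
        x = trial
    samples[i] = x

var = samples.var()  # equipartition predicts var -> chi_rho * kT
```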
Inserting this into eq. 13, we obtain the prediction that the structure factor is independent of $q$ and proportional to the compressibility: $S(q) = \chi_\rho kT$.
We can also predict the structure factor for polymers randomly inserted into the simulation domain, using the approach in Refs. [66,67]. By introducing an origin of the coordinate system for each polymer, $\mathbf{R}^o_j$, e.g. one of its ends, we can rewrite eq. 6 as a sum of two terms: the first describes the single polymer scattering due to pairs of scattering sites on the same polymer, while the second is the interference contribution between scattering sites on different polymers. Configurations of different polymers are generated independently of each other, and the starting points of the polymers are chosen randomly. Hence the three terms in the parenthesis in the second exponential are sampled from statistically independent distributions, and the average of the interference contribution factorizes exactly into a product of three averages, where the first and third depend only on single-chain statistics, while the second depends only on the distance distribution between randomly chosen points.

For notational simplicity, we identify the average form factor $F(\mathbf{q}) = \langle\exp(i\mathbf{q}\cdot(\mathbf{R}_{jk_1} - \mathbf{R}_{jk_2}))\rangle_{j,k_1,k_2}$ with the average single polymer scattering, $A(\mathbf{q}) = \langle\exp(i\mathbf{q}\cdot(\mathbf{R}_{jk_1} - \mathbf{R}^o_j))\rangle_{j,k_1}$ with the average form factor amplitude relative to an end, and the average phase factor between different ends $\Psi(\mathbf{q}) = \langle\exp(i\mathbf{q}\cdot(\mathbf{R}^o_{j_1} - \mathbf{R}^o_{j_2}))\rangle_{j_1 \neq j_2}$. Note that all these factors are normalized such that $F(\mathbf{q}), A(\mathbf{q}), \Psi(\mathbf{q}) \to 1$ for $q \to 0$. With these simplifications, the structure factor reduces to the much shorter expression
$$S(\mathbf{q}) = N_b\left[F(\mathbf{q}) + (M-1)\,A(\mathbf{q})A(-\mathbf{q})\,\Psi(\mathbf{q})\right].$$
This expression is exact: it was derived without any assumptions about the detailed structure of the objects inserted in the simulation domain, and results only from the assumption of statistical independence of the internal conformations and positions of the objects. [66]

For polymers modeled as ideal random walks, the expressions for the form factor and the form factor amplitude are well known [68,69]: since random walks are on average isotropic, these functions depend only on the magnitude of the scattering vector $q = |\mathbf{q}|$. The asymptotic behavior is realized for $qR_g \gg 1$, where $R_g = \sqrt{l_K^2 N_k/6}$ is the radius of gyration of the polymer. Furthermore, because polymer pairs are placed randomly in the box, their starting positions are statistically independent, hence $\Psi(\mathbf{q}) = 1$. For randomly inserted polymers, the asymptotic behavior of the structure factor describing the resulting density fluctuation correlations then follows by inserting these asymptotic forms into the expression above.
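For reference, the well-known expressions alluded to above are the Debye function for the ideal random walk form factor and the corresponding amplitude relative to a chain end, both functions of $x = q^2 R_g^2$ [68,69]. A minimal sketch, including their limiting behavior in the comments:

```python
import numpy as np

def debye(x):
    """Debye form factor F of an ideal random walk, with x = q^2 Rg^2.
    Limits: F -> 1 for x -> 0 and F -> 2/x for x >> 1."""
    x = np.asarray(x, dtype=float)
    return 2.0 * (np.exp(-x) + x - 1.0) / x ** 2

def amplitude(x):
    """Form factor amplitude A relative to a chain end, x = q^2 Rg^2.
    Limits: A -> 1 for x -> 0 and A -> 1/x for x >> 1."""
    x = np.asarray(x, dtype=float)
    return (1.0 - np.exp(-x)) / x
```

The large-$x$ limits show the characteristic $q^{-2}$ scattering of random walks, which sets the asymptotic behavior of the randomly-inserted-polymer structure factor discussed above.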