Description of Our 3D MHD Computations

Our preliminary and final predictions were run on a 3D mesh of 186 × 231 × 560 points (about 24 million cells) in spherical coordinates (r, θ, φ). Several versions of this calculation, with different parameters, were run on two supercomputers: Ranger, a massively parallel supercomputer at the Texas Advanced Computing Center (TACC), and Pleiades, the new massively parallel supercomputer at NASA's Advanced Supercomputing Division (NAS).
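
As a quick sanity check on the mesh size, here is a minimal Fortran 90 sketch (the dimension names nr, nt, and np are ours for illustration, not taken from the MAS source):

    program mesh_size
      implicit none
      ! r, theta, phi mesh points, as quoted above; names are illustrative
      integer, parameter :: nr = 186, nt = 231, np = 560
      ! 186 * 231 * 560 = 24,060,960, i.e., about 24 million cells
      print '(a,i0)', 'total mesh points: ', nr * nt * np
    end program mesh_size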

Pleiades is a high-performance computer with 126,720 processor cores and a theoretical peak speed of 1,240 teraflops. In June 2012 it was ranked as the 11th fastest supercomputer in the world. Pleiades is an SGI Altix system that uses a combination of 2.6 GHz Intel Xeon E5-2670 (Sandy Bridge) processors, 2.93 GHz Intel Xeon X5670 and 3.06 GHz Intel Xeon X5675 (Westmere) processors, 2.93 GHz Intel Xeon X5570 (Nehalem) processors, and 3 GHz Intel Xeon E5472 (Harpertown) processors, with an InfiniBand interconnect. The support of the staff at NAS helped us run our code successfully under the real-time constraints required for a prediction.

Ranger is a high-performance computer with 62,976 processor cores and a theoretical peak speed of 580 teraflops. In June 2009 it was ranked as the eighth fastest supercomputer in the world. Ranger is a Sun Blade Linux cluster that uses quad-core 2.3 GHz AMD Opteron (Barcelona) processors with an InfiniBand interconnect.

We used our 3D spherical magnetohydrodynamic (MHD) code, MAS, which integrates the MHD equations using semi-implicit (Alfvén and sound waves), fully implicit (diffusive terms), and explicit (flow terms) schemes. We solve the very large sparse matrix equations generated by these algorithms with a preconditioned conjugate gradient (PCG) iterative solver. As a boundary condition, we specify the radial component of the magnetic field at the base of the corona. This field is deduced from measurements by various space- and ground-based observatories, including the HMI magnetograph aboard NASA's Solar Dynamics Observatory (SDO) spacecraft, which measures the line-of-sight component of the photospheric magnetic field from space. Our code is written in Fortran 90 and uses the Message Passing Interface (MPI) for interprocessor communication. It scales very well on many high-performance computer systems; we have demonstrated essentially linear scaling with processor count up to about 4,096 processors.
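
To make the solver concrete, here is a minimal Jacobi-preconditioned conjugate gradient iteration in Fortran 90, applied to a small 1D Laplacian system that stands in for the large sparse matrices produced by the implicit operators. This is an illustrative sketch under our own simplifying assumptions (the matrix, preconditioner, problem size, and tolerance are ours), not the solver used in MAS:

    program pcg_demo
      implicit none
      integer, parameter :: n = 200, maxit = 2000
      real(8) :: x(n), b(n), r(n), z(n), p(n), q(n), diag(n)
      real(8) :: rz, rz_old, alpha, beta, tol
      integer :: it

      b = 1.0d0                      ! right-hand side
      x = 0.0d0                      ! initial guess
      diag = 2.0d0                   ! Jacobi preconditioner: the matrix diagonal
      tol = 1.0d-10                  ! absolute residual tolerance
      r = b - apply_a(x)             ! initial residual r = b - A*x
      z = r / diag                   ! apply preconditioner
      p = z
      rz = dot_product(r, z)
      do it = 1, maxit
         q = apply_a(p)
         alpha = rz / dot_product(p, q)
         x = x + alpha * p
         r = r - alpha * q
         if (sqrt(dot_product(r, r)) < tol) exit
         z = r / diag                ! apply preconditioner
         rz_old = rz
         rz = dot_product(r, z)
         beta = rz / rz_old
         p = z + beta * p
      end do
      print '(a,i0,a,es10.2)', 'stopped after ', it, ' iterations; residual norm ', &
           sqrt(dot_product(r, r))
    contains
      ! Matrix-vector product for a 1D Laplacian (tridiagonal, symmetric
      ! positive definite), standing in for the implicit MHD operators.
      function apply_a(v) result(w)
        real(8), intent(in) :: v(n)
        real(8) :: w(n)
        integer :: i
        w(1) = 2.0d0*v(1) - v(2)
        do i = 2, n - 1
           w(i) = -v(i-1) + 2.0d0*v(i) - v(i+1)
        end do
        w(n) = -v(n-1) + 2.0d0*v(n)
      end function apply_a
    end program pcg_demo

In MAS the systems are far larger and the work is distributed across processors with MPI, but the basic PCG iteration has this structure: a sparse matrix-vector product, a preconditioning step, and a few dot products per iteration.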

Our calculation for the preliminary eclipse prediction ran for about 40,000 time steps, relaxing the corona in time (for about 3.3 days of solar time) toward a steady state that approximates the state of the solar corona. The time step in the computation was about 7.2 seconds. The run used 3,360 processors on Pleiades and ran continuously for about 2.1 days.
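(As a consistency check: 40,000 steps × 7.2 s per step ≈ 288,000 s, which is indeed about 3.3 days of solar time.)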

Unfortunately, the final eclipse calculation crashed because of a hardware problem on Pleiades, and much of the data it produced was lost. Fortunately, we had saved some of the data on our local machines, so the loss was not total, and we were still able to produce a prediction. Our calculation for the final eclipse prediction ran for about 23,000 time steps, again relaxing the corona in time (for about 1.9 days of solar time) toward a steady state. The time step in the computation was about 7 seconds. The run used 3,360 processors on Pleiades and ran continuously for about 1.7 days.
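(Again as a check: 23,000 steps × 7 s per step ≈ 161,000 s, or about 1.9 days of solar time.)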

We are very grateful for the assistance provided to us by the dedicated staff at NAS and TACC. Our prediction would not have been possible without these resources.