Description of Our 3D MHD Computations

Simulation Run Details

To obtain the approximate state of the solar corona, we integrate our Magnetohydrodynamic Algorithm outside a Sphere (MAS) model towards a steady-state solution. As a boundary condition, we impose the radial component of the magnetic field at the base of the corona. This field is derived from observations by the HMI magnetograph aboard NASA's SDO spacecraft, which measures the line-of-sight component of the photospheric magnetic field from space. An overview of the MAS model and its numerical methods can be found here.
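As a rough illustration of how such a boundary map can be prepared, the sketch below reads a synoptic radial-field map from a FITS file and block-averages it down to the angular resolution of the medium-resolution grid (148 x 315). The file name, HDU index, and smoothing method are assumptions for illustration only; the production pipeline applies its own flux-preserving smoothing and pole filling.

```python
# Minimal sketch (not the production pipeline): read a synoptic radial-field
# map from a FITS file and smooth it to the resolution of the coronal
# boundary mesh. The file path and HDU index are placeholders.
import numpy as np
from astropy.io import fits

def load_boundary_br(fits_path, ntheta=148, nphi=315):
    """Return a block-averaged radial-field map Br(theta, phi)."""
    with fits.open(fits_path) as hdul:
        br_full = np.nan_to_num(hdul[0].data)   # full-resolution synoptic map
    # Crude block averaging down to the simulation boundary resolution.
    ny, nx = br_full.shape
    ty, tx = ny // ntheta, nx // nphi
    trimmed = br_full[:ntheta * ty, :nphi * tx]
    return trimmed.reshape(ntheta, ty, nphi, tx).mean(axis=(1, 3))

# Example usage (file name is hypothetical):
# br_boundary = load_boundary_br("hmi_synoptic_map.fits")
```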

Numerous medium-resolution (269 x 148 x 315) simulations were performed to refine parameter choices, most notably those for the new two-temperature formulation of the wave-turbulence-driven (WTD) heating model. Each of these simulations relaxed the corona for ~ 80 hours of physical time and took approximately 15-30 wall-clock hours to run, depending on the number of cores used.
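As a back-of-the-envelope illustration of the cost of such a survey, the snippet below converts the quoted per-run timings into core-hours; the run count and core count are hypothetical inputs, since only the 15-30 wall-clock-hour range per run is stated above.

```python
# Illustrative cost estimate for the parameter survey. Only the 15-30
# wall-clock-hour range per run comes from the text; n_runs and
# cores_per_run are hypothetical.
def survey_core_hours(n_runs, cores_per_run, hours_low=15.0, hours_high=30.0):
    return n_runs * cores_per_run * hours_low, n_runs * cores_per_run * hours_high

low, high = survey_core_hours(n_runs=20, cores_per_run=1024)
print(f"Estimated survey cost: {low:,.0f} - {high:,.0f} core-hours")
```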

The final prediction was performed in stages, using a mesh with 288 x 327 x 699 points (66 million cells). In this approach, we first relaxed the corona for about 72 physical hours using the final WTD heating parametrization. At that point we inserted the energized fields and relaxed the final computation for another 6.4 physical hours. All told, the computations required a few days running on a few thousand processors.
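The cell count follows directly from the mesh dimensions, as the quick check below shows; the memory estimate is an illustrative assumption about the number of primary variables stored per cell, not the actual MAS memory layout.

```python
# Sanity check of the final-grid size quoted above, plus a rough memory
# estimate. The variables-per-cell and byte counts are assumptions.
nr, nt, nphi = 288, 327, 699
cells = nr * nt * nphi
print(f"{cells:,} cells")                 # 65,829,024, i.e. ~66 million

vars_per_cell = 9                         # assumed: rho, two temperatures, v (3), B (3)
bytes_per_value = 8                       # double precision
print(f"~{cells * vars_per_cell * bytes_per_value / 1e9:.1f} GB for the primary fields")
```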

The energized fields were developed separately with a series of simplified "zero-beta" MHD simulations. These calculations incorporate time-dependent electric fields at the inner boundary, designed to add shear and poloidal flux along selected filament channels. They were performed at a resolution of 126 x 262 x 629, and the field from the best case was remeshed onto the final high-resolution grid for insertion.
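The remeshing step amounts to interpolating each field component from the energization grid onto the final grid. The sketch below shows the basic idea with trilinear interpolation on placeholder uniform coordinates; the actual MAS grids are nonuniform and staggered, and the production remeshing also preserves the solenoidal constraint on B.

```python
# Minimal remeshing sketch: interpolate a scalar component from the
# energization grid (126 x 262 x 629) onto the final grid (288 x 327 x 699).
# Coordinate ranges below are placeholders; real MAS grids are nonuniform.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def remesh(field, src_coords, dst_coords):
    """Trilinearly interpolate field(r, theta, phi) onto a new grid."""
    interp = RegularGridInterpolator(src_coords, field,
                                     bounds_error=False, fill_value=None)
    R, T, P = np.meshgrid(*dst_coords, indexing="ij")
    pts = np.stack([R.ravel(), T.ravel(), P.ravel()], axis=-1)
    return interp(pts).reshape(R.shape)   # in practice this would be chunked

src = (np.linspace(1.0, 2.5, 126),        # r (solar radii, assumed range)
       np.linspace(0.0, np.pi, 262),      # theta
       np.linspace(0.0, 2 * np.pi, 629))  # phi
dst = (np.linspace(1.0, 2.5, 288),
       np.linspace(0.0, np.pi, 327),
       np.linspace(0.0, 2 * np.pi, 699))
br_low = np.random.rand(126, 262, 629)    # stand-in for a zero-beta field component
br_high = remesh(br_low, src, dst)
```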


High-Performance Computing (HPC) Resources

The size and scope of the simulations require the use of HPC resources. We have been granted allocations on the following HPC systems and have used each of them in running the simulations. We are very grateful for the assistance provided by the dedicated staff at NAS and SDSC. Our prediction would not have been possible without these resources.

NASA HECC Systems (Pleiades, Electra, Aitken)


NASA's High-End Computing Capability (HECC) project provides massively parallel supercomputers through the NASA Advanced Supercomputing (NAS) Division. These systems are SGI/HPE machines linked by InfiniBand in a partial hypercube topology. The compute nodes span a variety of processor types, including Intel Xeon E5-2670 2.6 GHz Sandy Bridge (16-core), Xeon E5-2680v2 2.8 GHz Ivy Bridge (20-core), Xeon E5-2680v3 2.5 GHz Haswell (24-core), Xeon E5-2680v4 2.4 GHz Broadwell (28-core), Xeon Gold 6148 2.4 GHz Skylake (40-core), and Xeon Gold 6248 2.5 GHz Cascade Lake (40-core). All three systems (Pleiades, Electra, and Aitken) are connected to a single distributed file system.

The MAS code displays essentially linear scaling on all of these processor types up to ~ 5,000 cores. For this prediction, our largest calculations were run on the newer Electra and Aitken systems using about 4,200 cores.
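For reference, a strong-scaling statement like the one above is usually summarized as speedup and parallel efficiency relative to a baseline core count; the sketch below shows that bookkeeping with purely illustrative timings, not measurements from these runs.

```python
# Strong-scaling speedup and parallel efficiency relative to the smallest
# core count. The wall-clock times below are illustrative placeholders.
def strong_scaling(core_counts, wall_times):
    ref_cores, ref_time = core_counts[0], wall_times[0]
    for n, t in zip(core_counts, wall_times):
        speedup = ref_time / t
        efficiency = speedup * ref_cores / n
        print(f"{n:>6} cores: speedup {speedup:5.1f}x, efficiency {efficiency:6.1%}")

strong_scaling([512, 1024, 2048, 4096], [100.0, 50.5, 25.8, 13.4])
```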

Our allocation was provided by NASA's Advanced Supercomputing Division.

Expanse


Expanse is a massively parallel supercomputer at the San Diego Supercomputer Center (SDSC). It is a Dell machine consisting of dual-socket AMD EPYC 7742 compute nodes (128 cores per node) linked by Mellanox High Data Rate (HDR) InfiniBand. Presented as "Computing without Boundaries", Expanse is designed for simulations that run on up to ~ 4,000 cores in under 2 days.

The MAS code displays good scaling up to the maximum of 32 nodes (4,096 cores) available for a single run. This is more than enough compute power for the "medium" and "energization" calculations described above. The large core count of the compute nodes also makes them extremely useful for post-processing and visualization. This was particularly true for the volume-rendered squashing factor Q, which requires well over a billion field-line tracings to be computed quickly.
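The squashing factor Q is conventionally defined from the Jacobian of the field-line footpoint mapping (Titov et al. 2002). The sketch below evaluates that definition by finite differences on a regular grid of traced footpoints; it illustrates the quantity being computed, not the parallel production tool used for the volume rendering.

```python
# Squashing factor Q for a field-line mapping (x, y) -> (X(x, y), Y(x, y))
# between two boundary planes: Q = (a^2 + b^2 + c^2 + d^2) / |ad - bc|,
# where a, b, c, d are the elements of the mapping Jacobian.
import numpy as np

def squashing_factor(X, Y, dx, dy):
    a = np.gradient(X, dx, axis=0)   # dX/dx
    b = np.gradient(X, dy, axis=1)   # dX/dy
    c = np.gradient(Y, dx, axis=0)   # dY/dx
    d = np.gradient(Y, dy, axis=1)   # dY/dy
    norm2 = a**2 + b**2 + c**2 + d**2
    det = np.abs(a * d - b * c)
    return norm2 / np.maximum(det, 1e-12)   # guard against a vanishing determinant
```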

Our allocation was provided by the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562.


