Instructions for running CORHEL-AMCG on AWS
All activities on the AWS GPU instance should be done as the "psi" user. To become "psi", use the command: "sudo -u psi -s"
The PSI CORHEL environment will automatically be loaded when becoming the "psi" user (through the script /data/psi/load_psi_gpu_env, which is sourced in the "psi" user's .bashrc).
In the following instructions, [NEXT] means click the "Next" button.
1) Load interface web site in browser:
https://corhel-dev.ccmc.mysmce.com/thermo_webapps/thermo_designer
Enter name, e-mail, and daily session number (use 1)
[NEXT]
2) On Step Overview page, click "Enter Step 1"
3) In this guide, we are assuming there is no pre-existing MHD background run for the event under study, so on the "Map Selection" page, select "Create a new map..."
4) Select/enter the date and time of the event, then select the map source (HMI is the default)
[NEXT]
5) Select the event location (active region (AR)) by drawing a box around it
[NEXT]
============= ZERO BETA FLUX ROPES =================================
6) In "Step Overview", now click "Enter Step
2"
7) Click "Active Region 1"
8) After a brief calculation, the polarity inversion lines
(PILs) for the AR should be shown
9) Select a point at the desired start location of the PIL, then select a second point at the desired end location.
[NEXT]
10) Use the tool tips as a guide to design/construct the RBSL flux rope (This is a long step)
[NEXT]
11) Now you are back to the page from step (7), but "Active Region 1" should now be green.
[NEXT]
12) The "Zero-Beta Simulation" page will
automatically start generating the run tar file for download.
Wait for it to complete (the "Download Zero-Beta
Archive" button will appear).
-
Click the button to download the run tar
ball. You are free to rename the file.
-
Also click the "Download Saveset" button to save the saveset
tar ball (you are also free to rename this file).
-
The saveset tar file
should be downloadable to the user.
-
The run tar ball should be sent to the GPU AWS
instance (either through sending it to the /data drive or placing it in the
/share drive which is accessible to the GPU instance.) In this test case, I
placed the run tar ball into a folder /data/psi/corhel_amcg_runs/<runtag>/
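For example, assuming the run tar ball was downloaded to a machine that can reach the GPU instance over ssh (the host name here is a placeholder):
ssh psi@<gpu-instance> "mkdir -p /data/psi/corhel_amcg_runs/<runtag>"
scp <ARCHIVE_NAME>_zb.tar psi@<gpu-instance>:/data/psi/corhel_amcg_runs/<runtag>/
Alternatively, if the /share drive is mounted where the file was downloaded, a plain cp onto /share followed by a cp into /data on the GPU instance works just as well.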
13) Once the run tar ball is on the AWS GPU instance,
- navigate to that directory (as the psi user) and launch the run with the command:
nohup corunpy archive_file=<ARCHIVE_NAME>_zb.tar nprocs=8 num_procs_node=8 wc_limit=06:00:00 overwrite=1 > <ARCHIVE_NAME>_zb.log &
- After launching the job, the <ARCHIVE_NAME>_zb.log should look like:
CORUNPY is launching the run...
Running CORHEL with command: corhel -cr_ut 2022-01-29T23:32 -res custom -name caplanr_at_predsci.com_20230208_10_zb_multi -model cme -cme_model zb -cme_nprocs 8 -cme_wc_limit 06:00:00 -cme_archive_dir /data/psi/corhel/run/ut202201292332-custom/caplanr_at_predsci.com_20230208_10_zb_multi_launch/archive_zb
- After a brief setup time, the run should be seen to be running on the GPUs with the "nvidia-smi" command. For example:
Processes:
GPU   GI ID   CI ID   PID     Type   Process name                   GPU Memory Usage
0     N/A     N/A     21795   C      .../tools/ps_fortran/bin/mas   910MiB
1     N/A     N/A     21796   C      .../tools/ps_fortran/bin/mas   910MiB
2     N/A     N/A     21797   C      .../tools/ps_fortran/bin/mas   908MiB
3     N/A     N/A     21798   C      .../tools/ps_fortran/bin/mas   908MiB
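To keep an eye on a run interactively, one option (assuming the standard watch utility is available on the instance) is to refresh nvidia-smi periodically:
watch -n 10 nvidia-smi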
14) When the run is done, the <ARCHIVE_NAME>_zb.log file should contain something like:
CORHEL run completed!
Submission directory:
/data/psi/corhel/run/ut202201292332-custom/caplanr_at_predsci.com_20230208_10_zb_multi_launch
Simulation run directory:
/data/psi/corhel/run/ut202201292332-custom/caplanr_at_predsci.com_20230208_10_zb_multi
- One could use the line "CORHEL run completed!" as a way of detecting when the run is completed, or one can check that the "corunpy" process has finished.
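For example, either check can be scripted with standard tools (the log file name is the one used in step 13):
grep -q 'CORHEL run completed!' <ARCHIVE_NAME>_zb.log && echo "Run finished"
pgrep -f corunpy > /dev/null || echo "corunpy is no longer running"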
15) The zero-beta run report tar ball is located in the "Simulation run directory" listed in the log file.
- Copy the report tar ball to a location on the /share drive (possibly renaming it as needed). For example:
cp /data/psi/corhel/run/ut202201292332-custom/caplanr_at_predsci.com_20230208_10_zb_multi/caplanr_at_predsci.com_20230208_10_zb_multi_zero_beta_report.tar.gz /share/psi/corhel_runs/
- This file needs to be unpacked and the html made accessible for the user to view through the CCMC website.
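For example (the destination directory name is just an illustration; use whatever location the CCMC website serves the html from):
mkdir -p /share/psi/corhel_runs/<runtag>_zb_report
tar -xzf /share/psi/corhel_runs/caplanr_at_predsci.com_20230208_10_zb_multi_zero_beta_report.tar.gz -C /share/psi/corhel_runs/<runtag>_zb_report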
16) Pack up the full simulation run directory into a tar file, and transfer it to S3.
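For example, using the AWS CLI (the S3 bucket and prefix below are placeholders; use the project's actual archive bucket):
cd /data/psi/corhel/run/ut202201292332-custom
tar -czf caplanr_at_predsci.com_20230208_10_zb_multi.tar.gz caplanr_at_predsci.com_20230208_10_zb_multi
aws s3 cp caplanr_at_predsci.com_20230208_10_zb_multi.tar.gz s3://<archive-bucket>/corhel_runs/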
17) Clean up the CORHEL run folder on /data (delete all folders in /data/psi/corhel/run/*)
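For example (only after confirming the S3 upload from step 16 completed):
rm -rf /data/psi/corhel/run/*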
18) After viewing the report, the user will likely want to change the rope and try again.
- Go to the interface website and start a new daily session number (in this case, 2), but this time select the saveset tar ball in the "Restore a previously saved session" section.
19) Click through step 1 again (all previously selected parameters are already set), and modify the AR region if desired.
20) Go to step 2 as above, and make changes to the rope.
21) As before, at the end of step 2, download/store the new run tar ball, and download the new saveset tar ball (which the user should receive).
22) As before, run the new zero-beta simulation on the AWS
GPU instance, transfer the report tar file to /share, provide the report to the
user, save the simulation run data to S3, and then clean up the CORHEL run
folder.
23) Repeat steps (18)-(22) as often as the user desires.
============= THERMODYNAMIC BACKGROUND =================================
24) Load the interface web site in a browser, set a new session number, and select the most recent (or otherwise chosen) flux rope saveset
[NEXT]
[NEXT]
[NEXT]
25) On Step Overview page, click "Enter Step 3"
26) Select either Heating model 1 or Heating model 2
[NEXT]
27) On the "Background Simulation" page, the run tar ball will be generated. When it is done (the "Download Background Archive" button appears),
- download/store the run tar ball
- download the saveset and provide it to the user.
28) Once the run tar ball is on the AWS GPU instance (in the same manner as the zero-beta runs above),
- navigate to that directory (as the psi user) and launch the run with the command:
nohup corunpy archive_file=<ARCHIVE_NAME>_bg.tar nprocs=8 num_procs_node=8 wc_limit=48:00:00 overwrite=1 > <ARCHIVE_NAME>_bg.log &
- After launching the job, the <ARCHIVE_NAME>_bg.log should look like:
CORUNPY is launching the run...
Running CORHEL with command: corhel -cr_ut 2022-01-29T23:32 -cor_mas_slund 5000.0000 -cor_mas_visc 0.0020 -model corhel -res custom -cor_mas_nprocs 8 -cor_mas_mesh_file thermo/mesh_header.dat -cor_bc_br0_file thermo/br_final_tp.hdf -cor_bc_br0_file_origin thermo/br_parent_tp.hdf -cor_bc_obs hmi -cor_model mast -cor_mas_heat_model 2 -hel_masip_nprocs 8 -hel_masip_nr 501 -hel_masip_nt 181 -hel_masip_np 361 -hel_bc_type mhdfull -hel_masip_r0 28.0 -name caplanr_at_predsci.com_20230207_1_bg -cme_archive_dir /data/psi/corhel/run/ut202201292332-custom/caplanr_at_predsci.com_20230207_1_bg_launch/thermo
- After a brief setup time, the run should be seen to be running on the GPUs with the "nvidia-smi" command as above.
29) When the run is done, the <ARCHIVE_NAME>_bg.log file should contain something like:
CORHEL run completed!
Submission directory:
/data/psi/corhel/run/ut202201292332-custom/caplanr_at_predsci.com_20230207_1_bg_launch
Simulation run directory:
/data/psi/corhel/run/ut202201292332-custom/caplanr_at_predsci.com_20230207_1_bg
The run has completed, please use the following command to package the run into a tar file to upload to the interface for the next step:
corhel_make_tar_thermo_ui.sh -run_dir /data/psi/corhel/run/ut202201292332-custom/caplanr_at_predsci.com_20230207_1_bg
31) The thermodynamic background run needs to be tarred up to be uploaded to the interface.
- To do this, follow the command in the log file. For example:
corhel_make_tar_thermo_ui.sh -run_dir /data/psi/corhel/run/ut202201292332-custom/caplanr_at_predsci.com_20230207_1_bg
- The tar file is created in the location where the above command is run.
- Transfer this tar file to the /share folder where the background runs will "live".
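For example (the name of the tar file produced by corhel_make_tar_thermo_ui.sh depends on the run, so <BG_RUN_TAR> is a placeholder, and the destination folder below is only a suggestion):
mkdir -p /share/psi/corhel_runs/backgrounds
mv <BG_RUN_TAR> /share/psi/corhel_runs/backgrounds/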
32) The background run report tar ball is located in the "Simulation run directory" listed in the log file.
- Copy the report tar ball to a location on the /share drive (possibly renaming it as needed). For example:
cp /data/psi/corhel/run/ut202201292332-custom/caplanr_at_predsci.com_20230207_1_bg/caplanr_at_predsci.com_20230207_1_bg_thermobg_report.tar.gz /share/psi/corhel_runs/
- This file needs to be unpacked and the html made accessible for the user to view through the CCMC website.
33) Pack up the full simulation run directory into a tar file, and transfer it to S3.
34) Clean up the CORHEL run folder on /data (delete all folders in /data/psi/corhel/run/*)
35) Go back to the interface, select a new session number, and select the saveset tar file from step (27)
36) Enter step 1, and now select "Load map from an existing...."
[NEXT]
37) Go to "Upload my own CORHEL Run" and select the packaged background run tar file from step (31)
- Note that this step may change, since the file is already in the /share folder. Some development is needed to allow the interface to auto-detect the background run, or to provide a manual script that unpacks and lists it.
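As a rough sketch only (the location the interface would scan for background runs is an assumption, not confirmed behavior), such a manual script could simply unpack the tar from step (31) into a known folder:
mkdir -p /share/psi/corhel_runs/backgrounds/<runtag>
tar -xf /share/psi/corhel_runs/backgrounds/<BG_RUN_TAR> -C /share/psi/corhel_runs/backgrounds/<runtag>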
38) After the background run tar file is uploaded (or loaded), it should show up in the list of "Select Existing CORHEL Run"
============= THERMODYNAMIC CME SIMULATION ================================
39) Load the interface web site in a browser, select a new session number, and load the latest saveset tar file
[NEXT]
40) Go to step 1 and select "Load map from an existing...."
[NEXT]
41) In the list of runs, select the thermo background run uploaded in step (38)
[NEXT]
[NEXT]
42) Go to Step 2, click into the active region designer and go through each page with [NEXT] until you
arrive at the "Step Overview" page again.
43) Go to Step 4. On the "CME Simulation" page, the run tar ball will be generated. When it is done (the "Download CME Archive" button appears),
- download/store the run tar ball to /share
- download the saveset and provide it to the user.
44) Once the run tar ball is on the AWS GPU instance (in the same manner as the runs above),
- navigate to that directory (as the psi user) and launch the run with the command:
nohup corunpy archive_file=<ARCHIVE_NAME>_cme.tar nprocs=8 num_procs_node=8 wc_limit=90:00:00 overwrite=1 > <ARCHIVE_NAME>_cme.log &
- After launching the job, the <ARCHIVE_NAME>_cme.log should look like:
CORUNPY is launching the run...
Running CORHEL with command: corhel -cr_ut 2022-01-29T23:32 -res custom -name caplanr_at_predsci.com_20230208_10_cme -model cme -cme_model thermo -cme_nprocs 8 -cme_wc_limit 90:00:00 -cme_archive_dir /data/psi/corhel/run/ut202201292332-custom/caplanr_at_predsci.com_20230208_10_cme_launch/archive_cme
45) When the run is done (~20-40 hours), the <ARCHIVE_NAME>_cme.log file should contain something like:
CORHEL run completed!
Submission directory:
/data/psi/corhel/run/ut202201292332-custom/caplanr_at_predsci.com_20230208_10_cme_launch
Simulation run directory:
/data/psi/corhel/run/ut202201292332-custom/caplanr_at_predsci.com_20230208_10_cme
CORUNPY done!
46) The CME run report tar ball is located in the "Simulation run directory" listed in the log file.
- Copy the report tar ball to a location on the /share drive (possibly renaming it as needed). For example:
cp /data/psi/corhel/run/ut202201292332-custom/caplanr_at_predsci.com_20230208_10_cme/caplanr_at_predsci.com_20230208_10_cme_thermocme_report.tar.gz /share/psi/corhel_runs/
- This file needs to be unpacked and the html made accessible for the user to view through the CCMC website.
47) Pack up the full simulation run directory into a tar file, and transfer it to S3.
48) Clean up the CORHEL run folder on /data (delete all folders in /data/psi/corhel/run/*)