Evolve a population of stars
You can find this application in the demos
folder of your Jupyter notebook environment.
- inlist
- inlist_create_zams
- inlist_hcoreburning
- multi_star_evolution.ipynb
In this tutorial, you use Camber to generate stellar tracks with Modules for Experiments in Stellar Astrophysics (MESA). Watch a population of stars evolve from the pre-main-sequence (pMS) to the zero-age main sequence (ZAMS), and then evolve them once more to the end of their hydrogen core-burning phase.
To execute the workload efficiently, this tutorial uses the `create_scatter_job` function to run many simulations in parallel.
In short, the flow is as follows:
- Initialize a set of inlist files: MESA configuration files that define the parameters to evolve one star.
- Execute one job for each of these inlists in parallel.
- Plot the output of each evolution in one chart.
Set up MESA environment
First, import Camber and the libraries for calculation and visualization:
import camber
import pandas as pd
import matplotlib.pyplot as plt
import mesa_reader as mr
from camber.job import CamberJob
import os
import json
import shutil
from pathlib import Path
from string import Template
from itertools import zip_longest
from typing import Dict, List, Optional, Union, Any
from concurrent.futures import ThreadPoolExecutor
%matplotlib inline
Then, define a function that creates an ensemble of stellar tracks, ranging in mass from `minMass` to `maxMass` with an interval size `dm`.
def create_table(minMass, maxMass, dm):
    # Calculate the number of mass intervals
    num_intervals = int((maxMass - minMass) / dm) + 1
    # Generate the mass values based on the intervals
    values = [minMass + i * dm for i in range(num_intervals)]
    # Generate the ZAMS and H-core model file names
    zams_file = [f"{round(val, 1)}Msun_zams.mod" for val in values]
    hcore_file = [f"{round(val, 1)}Msun_hcore.mod" for val in values]
    # Create a DataFrame with one row per track
    df = pd.DataFrame({'mass': values, 'zams_file': zams_file, 'hcore_file': hcore_file})
    return df
Let's range from 1.0 to 4.5 Msun with an interval of 0.5 Msun.
minMass = 1.0
maxMass = 4.5
dm = 0.5

df = create_table(minMass, maxMass, dm)
df
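For these values, the resulting DataFrame should look something like this, with eight tracks from 1.0 to 4.5 Msun:
   mass         zams_file         hcore_file
0   1.0  1.0Msun_zams.mod  1.0Msun_hcore.mod
1   1.5  1.5Msun_zams.mod  1.5Msun_hcore.mod
2   2.0  2.0Msun_zams.mod  2.0Msun_hcore.mod
3   2.5  2.5Msun_zams.mod  2.5Msun_hcore.mod
4   3.0  3.0Msun_zams.mod  3.0Msun_hcore.mod
5   3.5  3.5Msun_zams.mod  3.5Msun_hcore.mod
6   4.0  4.0Msun_zams.mod  4.0Msun_hcore.mod
7   4.5  4.5Msun_zams.mod  4.5Msun_hcore.mod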
Next, define the `create_scatter_job` function. To run it, you supply a list of inlist files that contain templating variables, plus a `param_set` to fill in each template. When Camber executes a job, it combines the two inputs to generate the actual `inlist` files used by MESA.
With the same template files, this function lets you execute multiple MESA jobs by supplying a list of `param_set`s. Each `param_set` is used to generate a new set of `inlist` files nested in its own job-specific directory, and the MESA job executes on top of that directory. Each job directory also receives a `meta.json` file, which contains job info alongside the `param_set` used to template the inlists. In short, a `param_sets` list of length n creates n jobs.
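To make the templating step concrete, here is a minimal sketch of what the function does to each inlist. The namelist fragment below is hypothetical; the real templates ship with the demo files listed above.
from string import Template

# Hypothetical fragment of a templated inlist file
inlist_template = """
&star_job
  save_model_filename = $OUTPUT_MOD
/
&controls
  initial_mass = $INIT_MASS
/
"""

param_set = {"OUTPUT_MOD": '"1.0Msun_zams.mod"', "INIT_MASS": "1.0"}
# safe_substitute fills in known placeholders and leaves unknown ones untouched
print(Template(inlist_template).safe_substitute(param_set))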
def create_scatter_job(
    param_sets: List[Dict[str, Any]],
    inlist_files: List[str],
    model_files: Optional[Union[str, List[str]]] = None,
    command: str = "./mk && ./rn",
    module: str = "star",
    sync_mesa_scripts: bool = True,
    engine_size: str = "XSMALL",
    mesa_version: Optional[str] = "r23.05.1",
) -> List[CamberJob]:
    """
    Executes multiple parallel MESA jobs on CamberCloud

    Args:
        param_sets: list of parameter sets
        inlist_files: list of inlist files with template parameters
        model_files: list of model files or a single model file to be copied to each job
            directory. If a list is provided, it must have the same length as `param_sets`.
            Otherwise, the single model file is used for all jobs.
        command: the command to execute the MESA job
        module: the module used to run the simulation (star, binary, astero, etc.), defaults to "star"
        sync_mesa_scripts: whether to sync MESA scripts to the job directory, defaults to True
        engine_size: the size of the engine, defaults to XSMALL
        mesa_version: the MESA version to use, defaults to "r23.05.1"

    Returns:
        List of Camber jobs
    """
    # Broadcast a single model file to all jobs before validating lengths
    if isinstance(model_files, str):
        model_files = [model_files] * len(param_sets)
    if model_files is not None and len(param_sets) != len(model_files):
        print("`param_sets` and `model_files` must have the same length")
        return None
    model_files = model_files or []

    cwd = Path.cwd()

    def _start_mesa_job(idx: int, params: Dict[str, Any], model: Optional[str] = None) -> CamberJob:
        # Each job gets its own zero-padded directory, e.g. 0001, 0002, ...
        dir_name = f"{idx:04d}"
        job_dir = cwd / dir_name
        job_dir.mkdir(parents=True, exist_ok=True)

        # Fill in each inlist template and write the result into the job directory
        for template_inlist_file in inlist_files:
            # Resolve the template path relative to the current working directory
            template_inlist_file = Path(template_inlist_file).absolute().relative_to(cwd)
            inlist_file = job_dir / template_inlist_file
            with open(str(template_inlist_file), "r") as f_template, open(str(inlist_file), "w") as f_output:
                template = f_template.read()
                f_output.write(Template(template).safe_substitute(params))
                f_output.flush()

        # Copy the input model file into the job directory, if applicable
        if model:
            input_model_file = Path(model).absolute().relative_to(cwd)
            shutil.copy(input_model_file, job_dir / input_model_file.name)

        # The create_job API expects a path relative to the user's home directory
        job = camber.mesa.create_job(
            command=command,
            module=module,
            sync_mesa_scripts=sync_mesa_scripts,
            engine_size=engine_size,
            mount_dir=dir_name,
            mesa_version=mesa_version,
        )

        # Record the job ID and parameters alongside the generated inlists
        with open(os.path.join(str(job_dir), "meta.json"), "w") as f:
            meta = {
                "CamberJobId": job.job_id,
                "CamberJobParams": params,
            }
            json.dump(meta, f, indent=2)
            f.flush()
        return job

    with ThreadPoolExecutor(max_workers=2) as executor:
        futures = [
            executor.submit(_start_mesa_job, idx, param_set, model)
            for idx, (param_set, model) in enumerate(zip_longest(param_sets, model_files), 1)
        ]
        # Collect results in submission order so jobs align with param_sets
        jobs = [future.result() for future in futures]
    return jobs
Generate inlist files
Now generate ZAMS stars from pMS collapse. To generate a ZAMS star, specify an inlist that sets the control `create_pre_main_sequence_model = .true.`. This inlist is called `inlist_create_zams`.
Note that the `param_sets` variable is passed as an argument to the `create_scatter_job` function defined above. These parameters fill in the templates for the jobs that execute in parallel.
jobs = create_scatter_job(
    param_sets=[
        {
            "INLIST": "'inlist_create_zams'",
            "OUTPUT_MOD": '"' + str(row['zams_file']) + '"',  # Add extra quotes around the value
            "INIT_MASS": str(row['mass'])
        }
        for _, row in df.iterrows()
    ],
    inlist_files=['inlist', 'inlist_create_zams'],
    engine_size='SMALL'
)
This may take a few minutes on a small engine. To view the job status, you can inspect the `jobs` object:
jobs
Once all statuses are COMPLETED, you've generated your ZAMS files! Camber writes a final ZAMS model, whose name ends in `zams.mod`, in each of the respective job directories.
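Each directory also contains the `meta.json` written by `create_scatter_job`. For the 1.0 Msun job it should look roughly like this (the job ID is illustrative):
{
  "CamberJobId": "example-job-id",
  "CamberJobParams": {
    "INLIST": "'inlist_create_zams'",
    "OUTPUT_MOD": "\"1.0Msun_zams.mod\"",
    "INIT_MASS": "1.0"
  }
}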
These ZAMS `.mod` files are the requisite inputs to evolve our stars from ZAMS to the next stage of evolution (in our case, H-core burning).
Let's copy these files to our main directory:
!find $(pwd) -maxdepth 1 -type d -name "00*" -exec sh -c 'cp "$1"/*.mod $(pwd)' sh {} \;
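If you prefer to stay in Python, an equivalent copy step is:
from pathlib import Path
import shutil

# Copy every generated .mod file from the numbered job directories into the current directory
for mod_file in Path.cwd().glob("00*/*.mod"):
    shutil.copy(mod_file, Path.cwd() / mod_file.name)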
Now evolve each of our ZAMS stellar tracks along the H-core burning sequence. To parameterize this, define another table of `param_sets`:
param_sets = [
    {
        "INLIST": "'inlist_hcoreburning'",
        "INPUT_MOD": '"' + str(row['zams_file']) + '"',  # Add extra quotes around the value
        "INIT_MASS": str(row['mass']),
        "OUTPUT_MOD": '"' + str(row['hcore_file']) + '"',  # Add extra quotes around the value
    }
    for _, row in df.iterrows()
]
param_sets
# Lexicographic order matches ascending mass for these file names, so the list aligns with df rows
zams_mod_files = sorted(file for file in os.listdir("./") if file.endswith("zams.mod"))
Run another scatter job with the H-core burning inlist files:
jobs = create_scatter_job(
    param_sets=param_sets,
    inlist_files=['inlist', 'inlist_hcoreburning'],
    model_files=zams_mod_files,
    engine_size='SMALL'
)
This too takes just a few minutes to complete, but you can start plotting as soon as the first output files appear. Check the status with:
for job in jobs:
    print(job.job_id, ":", job.status)
Once the statuses are RUNNING, you can start to plot results.
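If you would rather wait for every track to finish before plotting, a simple polling loop works; this sketch assumes the status strings shown above:
import time

# Block until every job reports COMPLETED
while any(job.status != "COMPLETED" for job in jobs):
    time.sleep(30)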
Plot results
fig = plt.figure(figsize=(4, 6))

# Axis limits; the x-axis is inverted, as is conventional for HR diagrams
maxx = 4.4
minx = 3.5
miny = -0.5
maxy = 4.5

dirs = sorted(file for file in os.listdir("./") if file.startswith("00"))
for d in dirs:
    h = mr.MesaData(str(d) + '/LOGS/history.data')
    label = str(round(h.star_mass[0], 4)) + r' $M_{\odot}$'
    plt.plot(h.log_Teff, h.log_L, linewidth=0.75, zorder=1, label=label, color='black')
    plt.annotate(label, (max(h.log_Teff) + 0.12, h.log_L[0]), fontsize=8)

plt.xlim(maxx, minx)  # inverted Teff axis
plt.ylim(miny, maxy)
plt.grid(alpha=0.25)
plt.xlabel(r'$\log$(Teff)')
plt.ylabel(r'$\log$(L)')
plt.show()
It's always good practice to clean up after yourself. Go ahead and clear the jobs:
for job in jobs:
    job.clear()
Follow-up question
What would the plot look like if you changed the value of `minMass` to `4` and the value of `maxMass` to `5`?
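To find out, regenerate the table with the new range and rerun the workflow from the first scatter job:
minMass = 4.0
maxMass = 5.0
df = create_table(minMass, maxMass, dm)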