Evolve a population of stars

You can find this application in the demos folder of your Jupyter notebook environment. It includes the following files:

    • inlist
    • inlist_create_zams
    • inlist_hcoreburning
    • multi_star_evolution.ipynb
    In this tutorial, you use Camber to generate a set of Modules for Experiments in Stellar Astrophysics (MESA) stellar tracks. Watch a population of stars evolve from the pre-main sequence (pMS) to the zero-age main sequence (ZAMS), then evolve them once more to the end of their hydrogen core-burning phase.

    To execute the workload efficiently, this tutorial uses a create_scatter_job function to run many simulations in parallel. In short, the flow is as follows:

    1. Initialize a set of inlist files; an inlist is essentially a configuration file that defines the parameters to evolve one star (see the templating sketch after this list).
    2. Execute one job for each of these inlists in parallel.
    3. Plot the output of each evolution in one chart.
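
    To make the templating concrete, here is a minimal sketch of how a placeholder-bearing inlist is filled in with Python's string.Template, the same mechanism create_scatter_job uses below. The inlist contents shown here are hypothetical; the real templates ship with the demo.

    from string import Template

    # Hypothetical excerpt of a templated inlist; $OUTPUT_MOD and $INIT_MASS
    # are placeholders filled in per star.
    template_text = """
    &star_job
      create_pre_main_sequence_model = .true.
      save_model_filename = $OUTPUT_MOD
    /
    &controls
      initial_mass = $INIT_MASS
    /
    """

    params = {"OUTPUT_MOD": '"1.0Msun_zams.mod"', "INIT_MASS": "1.0"}
    print(Template(template_text).safe_substitute(params))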

    Set up MESA environment

    First, import Camber and the libraries for calculation and visualization:

    import camber
    import pandas as pd 
    import matplotlib.pyplot as plt
    import mesa_reader as mr
    from camber.job import CamberJob
    
    import os
    import json
    import shutil
    from pathlib import Path
    from string import Template
    from itertools import zip_longest
    from typing import Dict, List, Optional, Union, Any
    from concurrent.futures import ThreadPoolExecutor, as_completed
    
    %matplotlib inline

    Then define a function that creates an ensemble of stellar tracks, ranging in mass from minMass to maxMass in steps of dm.

    def create_table(minMass, maxMass, dm):
        # Number of masses between minMass and maxMass (inclusive) at spacing dm
        num_intervals = int((maxMass - minMass) / dm) + 1
        # Generate the mass values
        values = [minMass + i * dm for i in range(num_intervals)]
        # Generate the ZAMS and H-core model file names for each mass
        zams_file = [f"{round(val, 1)}Msun_zams.mod" for val in values]
        hcore_file = [f"{round(val, 1)}Msun_hcore.mod" for val in values]
        # Collect everything in a DataFrame
        df = pd.DataFrame({'mass': values, 'zams_file': zams_file, 'hcore_file': hcore_file})
        return df

    Let’s range from 1 to 4.5 Msun with an interval of 0.5 Msun:

    minMass = 1.0
    maxMass = 4.5
    dm = 0.5
    df = create_table(minMass, maxMass, dm)
    df
       mass         zams_file         hcore_file
    0   1.0  1.0Msun_zams.mod  1.0Msun_hcore.mod
    1   1.5  1.5Msun_zams.mod  1.5Msun_hcore.mod
    2   2.0  2.0Msun_zams.mod  2.0Msun_hcore.mod
    3   2.5  2.5Msun_zams.mod  2.5Msun_hcore.mod
    4   3.0  3.0Msun_zams.mod  3.0Msun_hcore.mod
    5   3.5  3.5Msun_zams.mod  3.5Msun_hcore.mod
    6   4.0  4.0Msun_zams.mod  4.0Msun_hcore.mod
    7   4.5  4.5Msun_zams.mod  4.5Msun_hcore.mod

    Next, define the `create_scatter_job` function to execute multiple MESA jobs in parallel on CamberCloud.

    To run this function, you supply a list of inlist files that contain templating variables, plus a param_set that fills in those variables. When Camber executes a job, it combines the two inputs to generate the actual inlist files used by MESA.

    With the same template files, this function lets you execute multiple MESA jobs by supplying a list of param_sets. Each param_set generates a new set of inlist files nested in its own job-specific directory, and the MESA job executes on top of that directory. Each job directory also receives a meta.json file containing job info alongside the param_set used to template the inlist files.

    This means a param_sets list of length n creates n jobs.

    def create_scatter_job(
        param_sets: List[Dict[str, Any]],
        inlist_files: List[str],
        model_files: Optional[Union[str, List[str]]] = None,
        command: str = "./mk && ./rn",
        module: str = "star",
        sync_mesa_scripts: bool = True,
        engine_size: str = "XSMALL",
        mesa_version: Optional[str] = "r23.05.1",
    ) -> List[CamberJob]:
        """
        Executes multiple parallel MESA jobs on CamberCloud
    
        Args:
            param_sets: list of parameter sets
            inlist_files: list of inlist files with template parameters
        model_files: list of model files, or a single model file, to be copied to each job
            directory. A list must have the same length as `param_sets`; a single model file
            is reused for all jobs.
            command: the command to execute the MESA job
            module: the module to be used for running simulation (star, binary, astero, etc.), defaults to "star"
            sync_mesa_scripts: Specify whether to sync MESA scripts to the job directory, defaults to True
            engine_size: Supply the size of the engine, defaults to XSMALL
            mesa_version: the MESA version to be used, defaults to "r23.05.1"
        
        Returns:
            List of camber jobs
        """
        # Expand a single model file into one per param_set before validating lengths
        if isinstance(model_files, str):
            model_files = [model_files] * len(param_sets)
        if model_files is not None and len(param_sets) != len(model_files):
            print("`param_sets` and `model_files` must have the same length")
            return None
        model_files = model_files or []
    
        cwd = Path.cwd()
    
        def _start_mesa_job(idx: int = None, params: Dict[str, Any] = None, model: str = None) -> CamberJob:
            dir_name = f"{idx:04d}"
            job_dir = cwd / dir_name
            job_dir.mkdir(parents=True, exist_ok=True)
    
            for template_inlist_file in inlist_files:
                # Ensure we get the path of the template inlist file relative to current working directory
                template_inlist_file = Path(template_inlist_file).absolute().relative_to(Path.cwd())
                inlist_file = job_dir / template_inlist_file
                with open(str(template_inlist_file), "r") as f_template, open(str(inlist_file), "w") as f_output:
                    template = f_template.read()
                    f_output.write(Template(template).safe_substitute(params))
                    f_output.flush()
    
            # Copy model file to job directory, if applicable
            if model:
                input_model_file = Path(model).absolute().relative_to(Path.cwd())
                shutil.copy(input_model_file, job_dir / input_model_file.name)
    
            # create_job API expects relative path to user's home directory
            job = camber.mesa.create_job(
                command=command,
                module=module,
                sync_mesa_scripts=sync_mesa_scripts,
                engine_size=engine_size,
                mount_dir=dir_name,
                mesa_version=mesa_version,
            )
    
            with open(os.path.join(str(job_dir), "meta.json"), "w") as f:
                meta = {
                    "CamberJobId": job.job_id,
                    "CamberJobParams": params,
                }
                json.dump(meta, f, indent=2)
                f.flush()
    
            return job
    
        # Launch the jobs concurrently and collect them as they complete
        with ThreadPoolExecutor(max_workers=2) as executor:
            futures = [
                executor.submit(_start_mesa_job, idx, param_set, model)
                for idx, (param_set, model) in enumerate(zip_longest(param_sets, model_files), 1)
            ]
            jobs = [future.result() for future in as_completed(futures)]
            return jobs
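
    Each job directory also receives a meta.json file. As a quick check (a minimal sketch, assuming the zero-padded directory names the function writes), you can map each job ID back to the parameters that produced it:

    # Pair each job ID with the param_set recorded in its meta.json
    for meta_path in sorted(Path.cwd().glob("00*/meta.json")):
        with open(meta_path) as f:
            meta = json.load(f)
        print(meta_path.parent.name, meta["CamberJobId"], meta["CamberJobParams"])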

    Generate inlist files

    Now generate ZAMS stars from pMS collapse. To generate a ZAMS star, specify an inlist that includes the setting create_pre_main_sequence_model = .true.; this inlist is called inlist_create_zams. Note that the param_sets variable is passed as an argument to the create_scatter_job function defined above. These parameter sets fill in the inlist templates for the jobs that execute in parallel.

    jobs = create_scatter_job(
        param_sets=[
            {
                "INLIST": "'inlist_create_zams'",
                "OUTPUT_MOD": '"' + str(row['zams_file']) + '"',  # inlist string values need explicit quotes
                "INIT_MASS": str(row['mass'])
            }
            for _, row in df.iterrows()
        ],
        inlist_files=['inlist', 'inlist_create_zams'],
        engine_size='SMALL'
    )

    This may take a few minutes on a small engine. To view the job status, you can return the jobs object:

    jobs
    [CamberJob({"job_id": 6955, "status": "COMPLETED", "engine_size": "SMALL", "engine_type": "MESA", "command": "./mk && ./rn", "with_gpu": false}),
     CamberJob({"job_id": 6953, "status": "COMPLETED", "engine_size": "SMALL", "engine_type": "MESA", "command": "./mk && ./rn", "with_gpu": false}),
     CamberJob({"job_id": 6948, "status": "COMPLETED", "engine_size": "SMALL", "engine_type": "MESA", "command": "./mk && ./rn", "with_gpu": false}),
     CamberJob({"job_id": 6951, "status": "COMPLETED", "engine_size": "SMALL", "engine_type": "MESA", "command": "./mk && ./rn", "with_gpu": false}),
     CamberJob({"job_id": 6949, "status": "COMPLETED", "engine_size": "SMALL", "engine_type": "MESA", "command": "./mk && ./rn", "with_gpu": false}),
     CamberJob({"job_id": 6952, "status": "COMPLETED", "engine_size": "SMALL", "engine_type": "MESA", "command": "./mk && ./rn", "with_gpu": false}),
     CamberJob({"job_id": 6954, "status": "COMPLETED", "engine_size": "SMALL", "engine_type": "MESA", "command": "./mk && ./rn", "with_gpu": false}),
     CamberJob({"job_id": 6950, "status": "COMPLETED", "engine_size": "SMALL", "engine_type": "MESA", "command": "./mk && ./rn", "with_gpu": false})]

    Once all statuses are COMPLETED, you’ve generated your ZAMS files! Camber writes a final ZAMS model, with a filename ending in zams.mod, in each of the respective directories. These ZAMS .mod files are the required inputs to evolve our stars from ZAMS to the next stage of evolution (in our case, H-core burning).
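
    If you prefer to block until everything finishes, a simple polling loop works. This is a minimal sketch that assumes job.status reflects the job's current state each time it is read:

    import time

    # Wait until every job reaches a terminal state
    while any(job.status not in ("COMPLETED", "FAILED") for job in jobs):
        time.sleep(30)
    print([job.status for job in jobs])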

    Let’s move these files to our main directory:

    !find $(pwd) -maxdepth 1 -type d -name "00*" -exec sh -c 'cp "$1"/*.mod $(pwd)' sh {} \;
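
    If you prefer to stay in Python, this sketch does the same copy with pathlib and shutil, assuming the zero-padded job directory names used above:

    # Copy every .mod file from the job directories into the working directory
    for mod_file in Path.cwd().glob("00*/*.mod"):
        shutil.copy(mod_file, Path.cwd() / mod_file.name)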

    Now evolve each of our ZAMS stellar tracks along the H-core burning sequence. To parameterize this, define another table of param_sets:

    param_sets = [
        {
            "INLIST": "'inlist_hcoreburning'",
            "INPUT_MOD": '"' + str(row['zams_file']) + '"',  # inlist string values need explicit quotes
            "INIT_MASS": str(row['mass']),
            "OUTPUT_MOD": '"' + str(row['hcore_file']) + '"',
        }
        for _, row in df.iterrows()
    ]
    param_sets
    [{'INLIST': "'inlist_hcoreburning'",
      'INPUT_MOD': '"1.0Msun_zams.mod"',
      'INIT_MASS': '1.0',
      'OUTPUT_MOD': '"1.0Msun_hcore.mod"'},
     {'INLIST': "'inlist_hcoreburning'",
      'INPUT_MOD': '"1.5Msun_zams.mod"',
      'INIT_MASS': '1.5',
      'OUTPUT_MOD': '"1.5Msun_hcore.mod"'},
     {'INLIST': "'inlist_hcoreburning'",
      'INPUT_MOD': '"2.0Msun_zams.mod"',
      'INIT_MASS': '2.0',
      'OUTPUT_MOD': '"2.0Msun_hcore.mod"'},
     {'INLIST': "'inlist_hcoreburning'",
      'INPUT_MOD': '"2.5Msun_zams.mod"',
      'INIT_MASS': '2.5',
      'OUTPUT_MOD': '"2.5Msun_hcore.mod"'},
     {'INLIST': "'inlist_hcoreburning'",
      'INPUT_MOD': '"3.0Msun_zams.mod"',
      'INIT_MASS': '3.0',
      'OUTPUT_MOD': '"3.0Msun_hcore.mod"'},
     {'INLIST': "'inlist_hcoreburning'",
      'INPUT_MOD': '"3.5Msun_zams.mod"',
      'INIT_MASS': '3.5',
      'OUTPUT_MOD': '"3.5Msun_hcore.mod"'},
     {'INLIST': "'inlist_hcoreburning'",
      'INPUT_MOD': '"4.0Msun_zams.mod"',
      'INIT_MASS': '4.0',
      'OUTPUT_MOD': '"4.0Msun_hcore.mod"'},
     {'INLIST': "'inlist_hcoreburning'",
      'INPUT_MOD': '"4.5Msun_zams.mod"',
      'INIT_MASS': '4.5',
      'OUTPUT_MOD': '"4.5Msun_hcore.mod"'}]
    Collect the ZAMS model files generated by the first scatter job:

    zams_mod_files = sorted(file for file in os.listdir("./") if file.endswith("zams.mod"))
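
    Because create_scatter_job pairs each param_set with a model file by position, the sorted file list must line up with the DataFrame rows. A quick sanity check (a hypothetical assertion, not part of the original demo):

    # The i-th model file must match the i-th row's expected ZAMS file name
    assert zams_mod_files == list(df['zams_file']), "ZAMS model files are out of order"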

    Run another scatter job with the H-core burning inlist files, reusing the param_sets defined above:

    jobs = create_scatter_job(
        param_sets=param_sets,
        inlist_files=['inlist', 'inlist_hcoreburning'],
        model_files=zams_mod_files,
        engine_size='SMALL'
    )

    This too takes just a few minutes to complete, but you can start plotting as soon as output files begin to appear. Check the statuses with:

    for job in jobs:
        print(job.job_id, ": ", job.status)
    6967 :  RUNNING
    6970 :  RUNNING
    6966 :  RUNNING
    6965 :  RUNNING
    6964 :  RUNNING
    6969 :  RUNNING
    6968 :  RUNNING
    6971 :  RUNNING
    

    Once the statuses are RUNNING, you can start to plot results.
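
    Each job writes LOGS/history.data inside its directory as it runs, so you can check which tracks have data to plot so far (a small sketch, assuming the 00-prefixed directory names):

    # List job directories that already contain a history file
    ready = sorted(p.parent.parent.name for p in Path.cwd().glob("00*/LOGS/history.data"))
    print(ready)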

    Plot results

    fig = plt.figure(figsize=(4, 6))
    maxx = 4.4
    minx = 3.5
    miny = -0.5
    maxy = 4.5

    # Each job directory contains a LOGS/history.data file with the track's evolution
    dirs = sorted(file for file in os.listdir("./") if file.startswith("00"))
    for d in dirs:
        h = mr.MesaData(str(d) + '/LOGS/history.data')
        plt.plot(h.log_Teff, h.log_L, linewidth=0.75, zorder=1,
                 label=str(round(h.star_mass[0], 4)) + r' $M_{\odot}$', color='black')
        plt.annotate(str(round(h.star_mass[0], 4)) + r' $M_{\odot}$',
                     (max(h.log_Teff) + 0.12, h.log_L[0]), fontsize=8)

    # Reverse the x-axis so effective temperature decreases to the right, HR-diagram style
    plt.xlim(maxx, minx)
    plt.ylim(miny, maxy)
    plt.grid(alpha=0.25)
    plt.xlabel(r'$\log$(Teff)')
    plt.ylabel(r'$\log$(L)')
    plt.show()
    (The resulting Hertzsprung-Russell diagram shows one track per star: log L versus log Teff, each labeled with its mass.)

    It’s always good practice to clean up after yourself. Go ahead and clear the jobs:

    for job in jobs:
        job.clear()
    Fin!

    Follow-up question

    What would the plot look like if you changed the value of minMass to 4 and the value of maxMass to 5?