Equation of state (EOS) benchmark on WBM structures (2.1)

The compiled WBM structures are available as an ASE DB file here
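
As a rough sketch, the database can be read with ase.db; the file name wbm_structures.db below is only a placeholder for the downloaded file:

    from ase.db import connect

    # Placeholder path; substitute the actual downloaded ASE DB file.
    db = connect("wbm_structures.db")

    # Iterate over the compiled WBM structures as ASE Atoms objects.
    for row in db.select():
        atoms = row.toatoms()
        print(row.id, atoms.get_chemical_formula())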

Run

This benchmark requires parallel orchestration with Prefect. To run the benchmark, modify the SLURM cluster settings in run.py (or switch to another job queuing system following the dask-jobqueue documentation).

    from dask_jobqueue import SLURMCluster

    # Resources requested per SLURM allocation
    nodes_per_alloc = 1
    gpus_per_alloc = 1
    ntasks = 1

    cluster_kwargs = dict(
        cores=1,
        memory="64 GB",
        processes=1,
        shebang="#!/bin/bash",
        account="matgen",
        walltime="00:30:00",
        # job_cpu=128,
        job_mem="0",
        job_script_prologue=[
            "source ~/.bashrc",
            "module load python",
            "source activate /pscratch/sd/c/cyrusyc/.conda/dev",
        ],
        # Skip the directives dask-jobqueue would add automatically;
        # the GPU/queue directives are supplied explicitly below.
        job_directives_skip=["-n", "--cpus-per-task", "-J"],
        job_extra_directives=[
            "-J c2db",
            "-q regular",
            f"-N {nodes_per_alloc}",
            "-C gpu",
            f"-G {gpus_per_alloc}",
        ],
    )

    cluster = SLURMCluster(**cluster_kwargs)
    print(cluster.job_script())  # inspect the generated SLURM batch script
    cluster.adapt(minimum_jobs=25, maximum_jobs=50)  # scale job count with the workload
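
The cluster above is then handed to Prefect for orchestration. The snippet below is only a sketch of that wiring, not the exact contents of run.py; the task and flow names (fit_eos, eos_benchmark) are placeholders:

    from dask.distributed import Client
    from prefect import flow, task
    from prefect_dask import DaskTaskRunner

    client = Client(cluster)  # attach to the adaptive SLURM cluster defined above

    @task
    def fit_eos(structure_id: str):
        # Placeholder: strain one WBM structure and record energy vs. volume.
        ...

    @flow(task_runner=DaskTaskRunner(address=client.scheduler.address))
    def eos_benchmark(structure_ids: list[str]):
        # Each task is submitted to the Dask scheduler and runs on a SLURM worker.
        for sid in structure_ids:
            fit_eos.submit(sid)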

Analysis

To gather and analyze the results and generate a summary, run analyze.py. To plot the EOS curves, run plot.py.
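
For the plotting step, ASE's equation-of-state fitter is one way such curves can be produced. The snippet below is a generic sketch assuming volume/energy arrays have already been collected for a structure; plot.py itself may differ:

    import numpy as np
    from ase.eos import EquationOfState

    # Placeholder data: volumes (Å^3) and energies (eV) sampled around equilibrium.
    volumes = np.array([13.7, 14.1, 14.5, 14.9, 15.3])
    energies = np.array([-56.4, -56.6, -56.7, -56.6, -56.4])

    eos = EquationOfState(volumes, energies, eos="birchmurnaghan")
    v0, e0, B = eos.fit()  # equilibrium volume, energy, and bulk modulus
    eos.plot("eos.png")    # save the fitted EOS curve as a figure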
