# Equation of state (EOS) benchmark on WBM structures (2.1)
The compiled WBM structures are available as an ASE database file [here](../wbm_structures.db).
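For reference, a minimal sketch of loading structures from this database with ASE (the empty `db.select()` iterates over all rows; filter as needed for your run):

```python
from ase.db import connect

# Connect to the compiled WBM structures database.
db = connect("../wbm_structures.db")

# Iterate over all rows and convert each to an ASE Atoms object.
for row in db.select():
    atoms = row.toatoms()
    print(row.id, atoms.get_chemical_formula())
```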
## Run
This benchmark requires parallel orchestration with Prefect. To run the benchmark, edit the SLURM cluster settings in [run.py](run.py), or switch to your preferred job queuing system following the [dask-jobqueue documentation](https://jobqueue.dask.org/en/latest/).
```python
from dask_jobqueue import SLURMCluster

nodes_per_alloc = 1
gpus_per_alloc = 1
ntasks = 1

cluster_kwargs = dict(
    cores=1,
    memory="64 GB",
    processes=1,
    shebang="#!/bin/bash",
    account="matgen",
    walltime="00:30:00",
    # job_cpu=128,
    job_mem="0",
    job_script_prologue=[
        "source ~/.bashrc",
        "module load python",
        "source activate /pscratch/sd/c/cyrusyc/.conda/dev",
    ],
    job_directives_skip=["-n", "--cpus-per-task", "-J"],
    job_extra_directives=[
        "-J c2db",  # SLURM job name
        "-q regular",
        f"-N {nodes_per_alloc}",
        "-C gpu",
        f"-G {gpus_per_alloc}",
    ],
)

cluster = SLURMCluster(**cluster_kwargs)
print(cluster.job_script())  # inspect the generated SLURM batch script
cluster.adapt(minimum_jobs=25, maximum_jobs=50)  # autoscale between 25 and 50 jobs
```
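For orientation, the sketch below shows one way a Prefect flow can farm tasks out to such a cluster via `prefect_dask.DaskTaskRunner`. The task and flow names are illustrative placeholders, not the actual definitions in [run.py](run.py):

```python
from prefect import flow, task
from prefect_dask import DaskTaskRunner

@task
def run_eos(row_id: int) -> dict:
    # Placeholder: relax the structure, scan volumes, and fit an EOS here.
    return {"id": row_id}

# Connect the task runner to the already-running SLURM-backed Dask cluster.
@flow(task_runner=DaskTaskRunner(address=cluster.scheduler_address))
def eos_benchmark(row_ids: list[int]):
    # Each submitted task becomes a future executed on a SLURM worker.
    return [run_eos.submit(i) for i in row_ids]
```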
## Analysis
To gather and analyze the results, run [analyze.py](analyze.py) to generate the summary. To plot the EOS curves, run [plot.py](plot.py).
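For context, EOS fitting and plotting with ASE look roughly like the following. This is a hedged sketch with toy `volumes`/`energies` arrays; the actual post-processing lives in [analyze.py](analyze.py) and [plot.py](plot.py):

```python
import numpy as np
from ase.eos import EquationOfState
from ase.units import GPa

# Toy volume scan data (volumes in Å^3, energies in eV); the real values
# come from the benchmark calculations.
volumes = np.linspace(35, 45, 7)
energies = 0.05 * (volumes - 40.0) ** 2 / 40.0 - 10.0

eos = EquationOfState(volumes, energies, eos="birchmurnaghan")
v0, e0, B = eos.fit()  # equilibrium volume, minimum energy, bulk modulus (eV/Å^3)
print(f"V0 = {v0:.2f} Å^3, E0 = {e0:.3f} eV, B = {B / GPa:.1f} GPa")
eos.plot("eos.png")  # write the fitted EOS curve to a figure
```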