| Column | Type | Values / range |
| ------------------ | ------------- | -------------- |
| `problem` | stringclasses | 36 values |
| `Checkpoint number` | int64 | 1 to 8 |
| `language` | stringclasses | 1 value |
| `difficulty` | stringclasses | 3 values |
| `tags` | listlengths | 7 to 15 |
| `instruction` | stringlengths | 515 to 15.3k |
| `tests` | stringlengths | 5.59k to 5.79k |
problem_0001
1
cpp
Easy
[ "cli-tools", "config-management", "argument-parsing", "schema-validation", "http-integration", "watch-mode", "type-system", "deterministic-output" ]
## cfgpipe - Core Resolution ## 1. Purpose `cfgpipe` is a command-line configuration resolver. It reads a JSON schema document from disk, resolves each declared parameter from local sources, and writes the resolved configuration to stdout as JSON. Every resolved leaf value is a string. ## 2. Invocation ```text ./cf...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0001 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain cfgpipe.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
problem_0001
2
cpp
Easy
[ "cli-tools", "config-management", "argument-parsing", "schema-validation", "http-integration", "watch-mode", "type-system", "deterministic-output" ]
## cfgpipe - Primary Store ## 1. Feature Increment Adds a network-backed primary store to the resolution pipeline. Global flags may now precede the schema path. Resolved leaf values remain strings. ```text ./cfgpipe [global-flags...] <schema-file> [arg-candidates...] ``` ## 2. Primary-Store Configuration `--primar...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0001 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain cfgpipe.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
problem_0001
3
cpp
Easy
[ "cli-tools", "config-management", "argument-parsing", "schema-validation", "http-integration", "watch-mode", "type-system", "deterministic-output" ]
## cfgpipe - Groups and Custom Types ## 1. Feature Increment Introduces parameter groups, custom parameter types, and stricter schema validation. Parameters may appear under nested object containers. Every output path and every error path must use the full dotted path. Resolved leaf values remain strings. ## 2. Defi...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0001 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain cfgpipe.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
problem_0001
4
cpp
Easy
[ "cli-tools", "config-management", "argument-parsing", "schema-validation", "http-integration", "watch-mode", "type-system", "deterministic-output" ]
## cfgpipe - Watch Mode and Change Events ## 1. Feature Increment Adds watch mode, structured change events, primary-store runtime updates, and a poll-based secondary store. Resolved configuration leaf values remain strings. The built-in custom type `port` parses a base-10 integer string, accepts values 0-65535 inclu...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0001 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain cfgpipe.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
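The checkpoint-4 instruction above defines a built-in custom type `port` that parses a base-10 integer string and accepts 0-65535 inclusive. A minimal validator sketch follows; the function name `parse_port` and the handling of leading zeros are assumptions, since the full spec is truncated.

```cpp
#include <cassert>
#include <cctype>
#include <optional>
#include <string>

// Hedged sketch of the checkpoint-4 "port" custom type: parse a base-10
// integer string, accept 0-65535 inclusive, reject everything else.
// The name parse_port and leading-zero tolerance are assumptions.
std::optional<int> parse_port(const std::string& s) {
    if (s.empty()) return std::nullopt;
    long v = 0;
    for (char c : s) {
        if (!std::isdigit(static_cast<unsigned char>(c))) return std::nullopt;
        v = v * 10 + (c - '0');
        if (v > 65535) return std::nullopt;  // reject early; also bounds v
    }
    return static_cast<int>(v);
}
```

Returning `std::optional` keeps the "invalid value" path explicit, which matches the spec's emphasis on deterministic error handling.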
problem_0001
5
cpp
Easy
[ "cli-tools", "config-management", "argument-parsing", "schema-validation", "http-integration", "watch-mode", "type-system", "deterministic-output" ]
## cfgpipe - Extended Type System ## 1. Feature Increment Expands the type system and switches resolved configuration output from string-only leaves to type-native JSON serialization. Change events continue using string representations. Watch mode, grouped output, and both store integrations remain in effect. ## 2. ...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0001 / checkpoint_5. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain cfgpipe.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
problem_0001
6
cpp
Easy
[ "cli-tools", "config-management", "argument-parsing", "schema-validation", "http-integration", "watch-mode", "type-system", "deterministic-output" ]
## cfgpipe - Store Refinements ## 1. Feature Increment Refines store access and runtime fault handling. Primary store gains folder-prefix composition. Secondary store gains dynamic key prefixing. Secondary-store monitoring isolates per-key operational failures. Grouped schemas, watch mode, type-native resolved output...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0001 / checkpoint_6. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain cfgpipe.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
problem_0002
1
cpp
Medium
[ "cli-tools", "parsing", "digital-circuits", "boolean-logic", "optimization", "graphviz", "equivalence-checking", "truth-tables" ]
# Part 1 — CLI basics, scalar `.circ` format, and `check` ## What this part adds The executable with `--help`, `--version`, JSON output mode, and the first command `check` for validating scalar `.circ` circuit files. --- ## Definitions ### Exit codes | Code | Category | Description | |------|----------|----------...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0002 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain circopt.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
problem_0002
2
cpp
Medium
[ "cli-tools", "parsing", "digital-circuits", "boolean-logic", "optimization", "graphviz", "equivalence-checking", "truth-tables" ]
# Part 2 — Scalar 2-valued evaluation: `eval` ## Error types | Error type | Exit code | |------------|-----------| | `MissingInputError` | 1 | | `UnknownInputError` | 1 | | `InputValueParseError` | 2 | --- ## `eval` command ```bash ./circopt eval <file.circ> [--set name=value ...] [--default 0|1] [--allow-extra] [...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0002 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain circopt.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
problem_0002
3
cpp
Medium
[ "cli-tools", "parsing", "digital-circuits", "boolean-logic", "optimization", "graphviz", "equivalence-checking", "truth-tables" ]
# Part 3 — Vectors, richer expressions, and `--radix` ## What this part adds Vector signals in `.circ`, width checking, slicing/indexing/concatenation, richer literal formats, new operators (`MUX`, reductions, `EQ`), and `--radix` output formatting. --- ## New error types These errors use exit code 3 (semantic err...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0002 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain circopt.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
problem_0002
4
cpp
Medium
[ "cli-tools", "parsing", "digital-circuits", "boolean-logic", "optimization", "graphviz", "equivalence-checking", "truth-tables" ]
# Part 4 — 3-valued evaluation (`--mode 3val`) ## What this part adds The `eval` command now supports 3-valued logic with `X` (unknown/don't-care) as an input value. Circuit files still cannot contain literal `X` values. --- ## New error types | Error type | Exit code | Description | |------------|-----------|----...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0002 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain circopt.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
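The 3-valued `eval` mode above treats `X` as unknown/don't-care. One common semantics for gate evaluation with unknowns is Kleene logic, sketched below; the benchmark's exact truth tables are truncated above, so this is an assumed convention, not the spec's definition.

```cpp
#include <cassert>

// Kleene 3-valued logic sketch for X-propagation: a controlling input
// (0 for AND, 1 for OR) dominates an unknown; otherwise X propagates.
// This is a common convention, assumed here for illustration.
enum class V3 { F, T, X };

V3 and3(V3 a, V3 b) {
    if (a == V3::F || b == V3::F) return V3::F;  // 0 dominates unknown
    if (a == V3::T && b == V3::T) return V3::T;
    return V3::X;
}

V3 or3(V3 a, V3 b) {
    if (a == V3::T || b == V3::T) return V3::T;  // 1 dominates unknown
    if (a == V3::F && b == V3::F) return V3::F;
    return V3::X;
}
```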
problem_0002
5
cpp
Medium
[ "cli-tools", "parsing", "digital-circuits", "boolean-logic", "optimization", "graphviz", "equivalence-checking", "truth-tables" ]
# Part 5 — JSON and BENCH input formats ## What this part adds The `check` and `eval` commands can now read `.json` and `.bench` circuit formats, controlled by the `--format` flag. --- ## New error types Format-specific errors follow existing conventions: parse/syntax errors exit `2`, validation/semantic errors ex...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0002 / checkpoint_5. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain circopt.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
problem_0002
6
cpp
Medium
[ "cli-tools", "parsing", "digital-circuits", "boolean-logic", "optimization", "graphviz", "equivalence-checking", "truth-tables" ]
# Part 6 — Analysis commands: `stats`, `lint`, and `dot` ## What this part adds Three analysis commands: * `stats` — Circuit metrics and statistics * `lint` — Warnings about potential issues * `dot` — Graphviz DOT export All commands accept `--format` and work with all circuit formats. --- ## `stats` command ```b...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0002 / checkpoint_6. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain circopt.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
problem_0002
7
cpp
Medium
[ "cli-tools", "parsing", "digital-circuits", "boolean-logic", "optimization", "graphviz", "equivalence-checking", "truth-tables" ]
# Part 7 — Subcircuit extraction and equivalence: `cone`, `truth-table`, `equiv` ## What this part adds * `cone` — Extract subcircuit and write deterministically * `truth-table` — Exhaustive enumeration of outputs * `equiv` — Equivalence checking between circuits * `--seed` flag for reproducible randomness --- ## N...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0002 / checkpoint_7. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain circopt.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
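The `truth-table` command described above implies exhaustive enumeration of all input assignments. A sketch of that enumeration follows; the bit order (input 0 as least-significant bit) and the callback shape are assumptions, since the output format is truncated above.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Enumerate all 2^n input assignments and collect f's output for each.
// Assumption: input i is taken from bit i of the counter (LSB-first);
// the actual command's row ordering is not shown in the truncated spec.
std::vector<int> truth_table(int n,
                             const std::function<int(const std::vector<int>&)>& f) {
    std::vector<int> out;
    for (unsigned m = 0; m < (1u << n); ++m) {
        std::vector<int> in(n);
        for (int i = 0; i < n; ++i) in[i] = (m >> i) & 1;
        out.push_back(f(in));
    }
    return out;
}
```

For an n-input circuit this is O(2^n) rows, which is why exhaustive enumeration is only practical for small input counts.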
problem_0002
8
cpp
Medium
[ "cli-tools", "parsing", "digital-circuits", "boolean-logic", "optimization", "graphviz", "equivalence-checking", "truth-tables" ]
# Part 8 — Optimization: `opt` ## What this part adds The `opt` command: a circuit optimizer with configurable passes, deterministic output, optional equivalence verification, and BENCH export support. --- ## `opt` command ```bash ./circopt opt <circuit_file> -o <out_file> [--format ...] [--out-format circ|jso...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0002 / checkpoint_8. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain circopt.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
problem_0003
1
cpp
Easy
[ "cli-tools", "ast-parsing", "multi-language", "pattern-matching", "metavariables", "code-transformation", "refactoring", "static-analysis", "regex", "code-search" ]
Build a command-line code searcher for Python codebases. It takes a directory of source files and a set of **rules**, then prints **one JSON object per match** (JSON Lines) to **STDOUT**. Supported language (by file extension only): **Python** (`.py`). Supported rule types (exactly these two): **exact match** and **va...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0003 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain code_search.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid #...
problem_0003
2
cpp
Easy
[ "cli-tools", "ast-parsing", "multi-language", "pattern-matching", "metavariables", "code-transformation", "refactoring", "static-analysis", "regex", "code-search" ]
Extend your code searcher to support **JavaScript** and **C++** source files. --- ## New Requirements ### File type → language Scan these extensions: | Language | Extensions | | ---------- | -------------------------------------------- | | Python | `.py` ...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0003 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain code_search.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid #...
problem_0003
3
cpp
Easy
[ "cli-tools", "ast-parsing", "multi-language", "pattern-matching", "metavariables", "code-transformation", "refactoring", "static-analysis", "regex", "code-search" ]
Extend your code searcher to support **structure-aware patterns** with metavariables. Supported rule kinds: **`exact`**, **`regex`**, and **`pattern`**. # Deliverable Your existing executable **`code_search.cpp`** is extended to understand `kind: "pattern"` in the rules file: ``` ./code_search <root_dir> --rules <r...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0003 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain code_search.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid #...
problem_0003
4
cpp
Easy
[ "cli-tools", "ast-parsing", "multi-language", "pattern-matching", "metavariables", "code-transformation", "refactoring", "static-analysis", "regex", "code-search" ]
Extend your code searcher to support **auto-fixes** generated from matches. Rules may include a **`fix`** object that turns a match into a text replacement. The CLI gains **`--apply-fixes`** (write changes to disk) and **`--dry-run`** (preview changes only). # Deliverable Your existing executable **`code_search.cpp`*...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0003 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain code_search.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid #...
problem_0003
5
cpp
Easy
[ "cli-tools", "ast-parsing", "multi-language", "pattern-matching", "metavariables", "code-transformation", "refactoring", "static-analysis", "regex", "code-search" ]
You’re extending the tool to support **four more languages** end-to-end: **Rust, Java, Go, and Haskell**. All previously specified behavior (rule kinds `exact`/`regex`/`pattern`/`selector`, `captures`, sorting, fixes with `--dry-run` / `--apply-fixes`) remains the same. This addendum only defines what’s new or widened....
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0003 / checkpoint_5. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain code_search.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid #...
problem_0004
1
cpp
Hard
[ "workflow-orchestration", "dag", "cli-tools", "parser", "toml", "dependency-resolution", "caching", "task-scheduling" ]
Build a CLI tool that executes pipelines from a pipeline file and an optional TOML config file. # Pipelines ``` task <task name> { // <comment string> params: { <parameters> } run: { <command to run> } success: { <success criteria 1>: { <expression> } ... <success criteria n>: ...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0004 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain launch.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clon...
problem_0004
2
cpp
Hard
[ "workflow-orchestration", "dag", "cli-tools", "parser", "toml", "dependency-resolution", "caching", "task-scheduling" ]
Your pipeline executor is being used by a data science team for their ML workflows. They've noticed that many pipelines call the same expensive data preprocessing tasks multiple times - for example, a feature extraction task might be needed by three different model training tasks. Currently, this preprocessing runs thr...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0004 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain launch.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clon...
problem_0004
3
cpp
Hard
[ "workflow-orchestration", "dag", "cli-tools", "parser", "toml", "dependency-resolution", "caching", "task-scheduling" ]
We are going to extend the caching system to now allow "dynamic" caching in the `requires` expression. The syntax for this is: ``` <task> <variable name> = CachedTask(...) ``` Where `CachedTask` takes in any of the cache specific parameters as a keyword argument. If a parameter is NOT specified in the invocation, then...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0004 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain launch.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clon...
problem_0005
1
cpp
Medium
[ "cli-tools", "database", "sqlite", "schema-migration", "data-transformation", "transaction-management", "foreign-keys", "constraints", "rollback", "dependency-resolution", "graph-algorithms" ]
# Database Migration Tool - Checkpoint 1: Basic Schema Migrations Build a CLI tool that applies database schema migrations to SQLite databases. This checkpoint focuses on basic schema operations: creating tables, adding columns, and modifying table structures. ## Task Implement a CLI tool that reads migration specif...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0005 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain migration_tool.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoi...
problem_0005
2
cpp
Medium
[ "cli-tools", "database", "sqlite", "schema-migration", "data-transformation", "transaction-management", "foreign-keys", "constraints", "rollback", "dependency-resolution", "graph-algorithms" ]
# Database Migration Tool - Checkpoint 2: Data Migrations Extend the migration tool from Checkpoint 1 to support data transformations during migrations. This checkpoint adds the ability to migrate existing data when schema changes occur. ## New Operations ### Operation Type: `transform_data` Transforms data in a ta...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0005 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain migration_tool.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoi...
problem_0005
3
cpp
Medium
[ "cli-tools", "database", "sqlite", "schema-migration", "data-transformation", "transaction-management", "foreign-keys", "constraints", "rollback", "dependency-resolution", "graph-algorithms" ]
# Database Migration Tool - Checkpoint 3: Foreign Keys, Indexes, and Constraints Extend the migration tool to support foreign key relationships, custom indexes, and advanced constraints. This checkpoint focuses on managing relational integrity and query performance. ## New Operations ### Operation Type: `add_foreign...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0005 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain migration_tool.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoi...
problem_0005
4
cpp
Medium
[ "cli-tools", "database", "sqlite", "schema-migration", "data-transformation", "transaction-management", "foreign-keys", "constraints", "rollback", "dependency-resolution", "graph-algorithms" ]
# Database Migration Tool - Checkpoint 4: Rollback Support Extend the migration tool to support rolling back migrations. This checkpoint adds the ability to undo migrations, either individually or in batches, with proper dependency handling. ## New Command: Rollback Add a new command to rollback migrations: ```bash...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0005 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain migration_tool.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoi...
problem_0005
5
cpp
Medium
[ "cli-tools", "database", "sqlite", "schema-migration", "data-transformation", "transaction-management", "foreign-keys", "constraints", "rollback", "dependency-resolution", "graph-algorithms" ]
# Database Migration Tool - Checkpoint 5: Dependency Management and Migration Chains Extend the migration tool to support migration dependencies, ensuring migrations are applied in the correct order and detecting circular dependencies. This checkpoint adds dependency resolution and validation. ## Migration Dependenci...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0005 / checkpoint_5. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain migration_tool.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoi...
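Checkpoint 5 above requires ordering migrations by dependency and detecting circular dependencies. One standard way to implement both is Kahn's algorithm, sketched below; the tie-breaking (lexicographic) and the error-reporting shape are assumptions, since the tool's actual output format is truncated.

```cpp
#include <cassert>
#include <map>
#include <queue>
#include <string>
#include <vector>

// Kahn's algorithm sketch: deps maps a migration to the migrations it
// depends on. Returns a valid application order; sets cycle=true when a
// circular dependency leaves some nodes unprocessed. Lexicographic
// tie-breaking is an assumption for determinism.
std::vector<std::string> order_migrations(
    const std::map<std::string, std::vector<std::string>>& deps, bool& cycle) {
    std::map<std::string, int> indeg;
    std::map<std::string, std::vector<std::string>> dependents;
    for (const auto& [m, ds] : deps) {
        indeg[m];  // ensure every node has an entry
        for (const auto& d : ds) {
            indeg[d];
            indeg[m]++;
            dependents[d].push_back(m);
        }
    }
    std::priority_queue<std::string, std::vector<std::string>,
                        std::greater<std::string>> q;
    for (const auto& [m, d] : indeg)
        if (d == 0) q.push(m);
    std::vector<std::string> out;
    while (!q.empty()) {
        std::string m = q.top();
        q.pop();
        out.push_back(m);
        for (const auto& n : dependents[m])
            if (--indeg[n] == 0) q.push(n);
    }
    cycle = out.size() != indeg.size();  // leftovers imply a cycle
    return out;
}
```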
problem_0006
1
cpp
Easy
[ "rest-api", "csv", "spreadsheet", "ingestion", "query-engine", "filtering", "export", "caching", "configuration", "access-control" ]
## datagate `datagate` starts with: ``` ./datagate start --port <port> --address <address> ``` `--port` default `8001`. `--address` default `127.0.0.1`. ### Ingestion: `GET /convert` Query parameters: | Parameter | Required | Description | | --------- | -------- | ----------- | | `source` | yes | URL of the...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0006 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain datagate.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cl...
problem_0006
2
cpp
Easy
[ "rest-api", "csv", "spreadsheet", "ingestion", "query-engine", "filtering", "export", "caching", "configuration", "access-control" ]
## datagate — Pagination, Sorting, and Response Controls `GET /datasets/<id>` accepts control parameters for pagination, sorting, and shape. ### `total` Responses include integer `total` for the row count before pagination. ### Pagination `_size` (positive integer, default `100`) limits returned rows. If it exceed...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0006 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain datagate.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cl...
problem_0006
3
cpp
Easy
[ "rest-api", "csv", "spreadsheet", "ingestion", "query-engine", "filtering", "export", "caching", "configuration", "access-control" ]
## datagate — Column-Level Filtering `GET /datasets/<id>` accepts filter params in this form: ``` <column>__<comparator>=<value> ``` Control params (names beginning with `_`) are not filters. ### Comparators - `exact`: case-sensitive string equality. - `contains`: case-sensitive substring. - `less`: numeric strict...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0006 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain datagate.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cl...
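The filtering checkpoint above uses query parameters of the form `<column>__<comparator>=<value>`, with control params (leading `_`) excluded. A sketch of splitting such a key follows; splitting on the last `__` (to tolerate column names containing underscores) is an assumption, as the truncated spec does not say how ambiguous names resolve.

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <utility>

// Split "<column>__<comparator>" into its parts. Control params (leading
// underscore) are rejected as non-filters, per the checkpoint-3 text.
// Assumption: the last "__" separates column from comparator.
std::optional<std::pair<std::string, std::string>>
split_filter_key(const std::string& key) {
    if (!key.empty() && key[0] == '_') return std::nullopt;  // control param
    auto pos = key.rfind("__");
    if (pos == std::string::npos || pos == 0 || pos + 2 >= key.size())
        return std::nullopt;  // no comparator present
    return std::make_pair(key.substr(0, pos), key.substr(pos + 2));
}
```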
problem_0006
4
cpp
Easy
[ "rest-api", "csv", "spreadsheet", "ingestion", "query-engine", "filtering", "export", "caching", "configuration", "access-control" ]
## Export, Upload, and Multi-Format Support `GET /datasets/<id>/export` returns CSV bytes with: - `Content-Type: text/csv` - `Content-Disposition: attachment; filename="<dataset-id>.csv"` The CSV uses source column order and applies the same filters, sort, and pagination as `/datasets/<id>`. `_shape`, `_rowid`, and ...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0006 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain datagate.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cl...
problem_0006
5
cpp
Easy
[ "rest-api", "csv", "spreadsheet", "ingestion", "query-engine", "filtering", "export", "caching", "configuration", "access-control" ]
## Caching and Cache Controls `/convert` caches results by default. ### Cache behavior With caching enabled, repeated `/convert` requests for the same source URL can return the cached dataset id without re-downloading. Cache responses are identical to fresh parse responses. ### `CACHE_ENABLED` `CACHE_ENABLED` acce...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0006 / checkpoint_5. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain datagate.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cl...
problem_0006
6
cpp
Easy
[ "rest-api", "csv", "spreadsheet", "ingestion", "query-engine", "filtering", "export", "caching", "configuration", "access-control" ]
## datagate — Configuration, Size Limits, and Access Control Service configuration sources, in order of precedence: 1. Built-in defaults 2. `DATAGATE_CONFIG` file 3. Direct environment variables The config file is `KEY=VALUE` lines, with commas for list values, blank/comment lines ignored. Invalid config or read fai...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0006 / checkpoint_6. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain datagate.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cl...
problem_0006
7
cpp
Easy
[ "rest-api", "csv", "spreadsheet", "ingestion", "query-engine", "filtering", "export", "caching", "configuration", "access-control" ]
## Optional Dataset Enrichment `/convert` supports optional ingestion enrichment with `enrich=yes`. ### Enrichment trigger Only exact `enrich=yes` enables enrichment. All other states keep enrichment off. Success response remains: ```json {"ok": true, "endpoint": "/datasets/<id>"} ``` ### Enriched query output Fo...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0006 / checkpoint_7. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain datagate.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cl...
problem_0007
1
cpp
Hard
[ "code-generation", "metaprogramming", "streaming-algorithms", "multi-language", "transform-inference", "stateful-processing", "window-functions", "compiler-design" ]
Write a **code generator** called `dynamic_buffer.cpp`. Given: * a **module name** (e.g. `dyna`), * a `--sample` directory containing: * `input.{ext}` * `output.{ext}` * a target language (`--python` or `--javascript`), * and an `--output` directory, it must: 1. **Infer a deterministic transformation** from `i...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0007 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain dynamic_buffer.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoi...
problem_0007
2
cpp
Hard
[ "code-generation", "metaprogramming", "streaming-algorithms", "multi-language", "transform-inference", "stateful-processing", "window-functions", "compiler-design" ]
The inferred transformations can now depend on **neighboring rows and evolving state**: * An output row depends on the **previous N input rows** (e.g., running sum/mean over a window, cumulative total). * Output includes a column that reflects **state transitions** over time (e.g., segment counters, “session_id” label...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0007 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain dynamic_buffer.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoi...
problem_0007
3
cpp
Hard
[ "code-generation", "metaprogramming", "streaming-algorithms", "multi-language", "transform-inference", "stateful-processing", "window-functions", "compiler-design" ]
Extends **dynamic_buffer.cpp** to generate **C++ and Rust** modules, with the *same* existing semantics: * Sample-based inference from `input.{ext} → output.{ext}`. * Per-row transforms **and** all neighborhood/stateful transforms (prefix, sliding window, state machines, neighbor-based filtering). * Streaming with `bu...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0007 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain dynamic_buffer.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoi...
problem_0007
4
cpp
Hard
[ "code-generation", "metaprogramming", "streaming-algorithms", "multi-language", "transform-inference", "stateful-processing", "window-functions", "compiler-design" ]
**Expand the transform space** to include **advanced window functions** and **richer, multi-dimensional state**. Your program must now be able to **infer and generate** code for transforms where outputs depend on: * **Centered / symmetric windows** (rows before and after current row). * **Advanced aggregations** with...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0007 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain dynamic_buffer.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoi...
problem_0008
1
cpp
Medium
[ "rest-api", "versioning", "config-management", "json-schema", "yaml", "approval-workflow", "policy-enforcement", "deep-merge", "dependency-resolution" ]
Design and implement a small REST service that stores JSON configuration objects with **immutable versions**, supports **scoping**, **rollback** to any earlier version, and **imports/inheritance** across configs. Treat this like a platform primitive a larger system would depend on. ## Deliverable A network service (s...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0008 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain config_server.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid...
problem_0008
2
cpp
Medium
[ "rest-api", "versioning", "config-management", "json-schema", "yaml", "approval-workflow", "policy-enforcement", "deep-merge", "dependency-resolution" ]
Extend your service to support a **schema registry** (with versioning), **binding schemas to configs**, **validation at create/resolve**, and **ingesting raw config strings** in YAML/TOML/JSON that are parsed into canonical JSON internally. > This spec **adds endpoints and rules**; all Part 1 behavior remains required...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0008 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain config_server.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid...
problem_0008
3
cpp
Medium
[ "rest-api", "versioning", "config-management", "json-schema", "yaml", "approval-workflow", "policy-enforcement", "deep-merge", "dependency-resolution" ]
Introduce a change-management workflow for configuration activation. Every new config version begins as a draft, proposals gather human reviews, and activation requires meeting the configured quorum. Each proposal must include deterministic diffs to help reviewers understand the change. > All behavior from Checkpoints...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0008 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain config_server.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid...
problem_0008
4
cpp
Medium
[ "rest-api", "versioning", "config-management", "json-schema", "yaml", "approval-workflow", "policy-enforcement", "deep-merge", "dependency-resolution" ]
Introduce an organization-level guardrail layer that runs policy bundles against resolved configurations and their surrounding graph. Policies must block unsafe proposals and merges with deterministic violation details that are distinct from schema errors. > All behavior from Checkpoints 1–3 remains required unless ex...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0008 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain config_server.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid...
problem_0009
1
cpp
Easy
[ "cli-tools", "config-management", "planning", "dry-run", "dependency-graph", "profile-composition", "script-generation", "yaml", "json" ]
# System Provisioning Planner MVP ## Overview Build `rig.cpp` as a dry-run CLI. It discovers rig config files in a directory, validates them, and generates a deterministic plan for package installs, app installs, and file actions without mutating the host. ## CLI ### `./rig validate <dir>` Validate discovered rig ...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0009 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain rig.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloning...
problem_0009
2
cpp
Easy
[ "cli-tools", "config-management", "planning", "dry-run", "dependency-graph", "profile-composition", "script-generation", "yaml", "json" ]
# System Preferences, Dock Layout, and Conflict Detection ## Overview Generate the JSON execution plan, describing system preferences, Dock layout, and destination conflicts. Read a directory tree, validate config files before planning, and print one plan on success. The system supports a `preferences` schema and a ...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0009 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain rig.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloning...
problem_0009
3
cpp
Easy
[ "cli-tools", "config-management", "planning", "dry-run", "dependency-graph", "profile-composition", "script-generation", "yaml", "json" ]
# Developer Environments and Module Dependencies ## Overview Produce a deterministic plan shape, including language runtime setup plus dependency-aware module ordering. The `environments` schema describes runtimes, plugins, and virtual environments. Any rig config can declare `depends_on`, and those dependencies ord...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0009 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain rig.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloning...
problem_0009
4
cpp
Easy
[ "cli-tools", "config-management", "planning", "dry-run", "dependency-graph", "profile-composition", "script-generation", "yaml", "json" ]
# Profile Composition ## Overview Describe named machine setups as profile compositions. A root-level manifest defines profiles, each profile resolves to a set of modules, and `plan` can target one of those profiles. A `list-profiles` command inspects the resolved profile contents without generating a full plan. ##...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0009 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain rig.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloning...
problem_0009
5
cpp
Easy
[ "cli-tools", "config-management", "planning", "dry-run", "dependency-graph", "profile-composition", "script-generation", "yaml", "json" ]
# Standalone Build Script Generation ## Overview Turn the plan into a portable build artifact so we can move it to another machine and run one installer there without needing `rig` installed. A `build` command generates self-contained installer packages. Each package includes exactly one executable installer entrypo...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0009 / checkpoint_5. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain rig.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloning...
problem_0010
1
cpp
Easy
[ "cli-tools", "json", "data-transformation", "expression-evaluation", "validation", "etl", "schema-validation", "parser-design", "modular-composition" ]
# 1. Parse Basic ETL Pipeline Build a CLI program that reads an ETL pipeline specification from STDIN, validates and normalizes it, then outputs a structured JSON response or error. ## Deliverables * A CLI program `etl_pipeline.cpp` that reads JSON from STDIN and writes JSON to STDOUT * Exit code `0` on success, `1`...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0010 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain etl_pipeline.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid ...
problem_0010
2
cpp
Easy
[ "cli-tools", "json", "data-transformation", "expression-evaluation", "validation", "etl", "schema-validation", "parser-design", "modular-composition" ]
# 2. Linear ETL Executor Execute a normalized ETL pipeline over an in-memory dataset. ## Deliverables * Extend the CLI with `--execute` flag (default `false`) * When `false`: behave like checkpoint 1 (return `normalized`) * When `true`: execute the pipeline and return `data` and `metrics` (no `normalized` field)...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0010 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain etl_pipeline.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid ...
problem_0010
3
cpp
Easy
[ "cli-tools", "json", "data-transformation", "expression-evaluation", "validation", "etl", "schema-validation", "parser-design", "modular-composition" ]
# 3. Branching Transformations Add a `branch` step that conditionally routes rows to sub-pipelines. ## Deliverables * Extend parser/normalizer and executor to support a `branch` step ## Normalized Branch Step ```json { "op": "branch", "branches": [ { "id": "A", "when": "<expr>", "steps": [ /* steps */ ] },...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0010 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain etl_pipeline.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid ...
problem_0010
4
cpp
Easy
[ "cli-tools", "json", "data-transformation", "expression-evaluation", "validation", "etl", "schema-validation", "parser-design", "modular-composition" ]
# 4. Reusable Sub-Pipelines Add named sub-pipelines defined at the top level and invoked with `call` steps. ## Deliverables * Extend parser/executor to support top-level `defs` and a `call` step with parameter binding ## Request Format ```json { "defs": { "Normalize": { "steps":[ /* steps */ ] }, "KeepCo...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0010 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain etl_pipeline.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid ...
problem_0010
5
cpp
Easy
[ "cli-tools", "json", "data-transformation", "expression-evaluation", "validation", "etl", "schema-validation", "parser-design", "modular-composition" ]
# 5. Composable Pipeline Library Package reusable step sequences into a library and build pipelines by composing library references and inline fragments. ## Deliverables * Support top-level `library` with namespaced definitions * Support top-level `compose` for building pipelines from library references and inline s...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0010 / checkpoint_5. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain etl_pipeline.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid ...
problem_0011
1
cpp
Hard
[ "cli-tools", "csv-parsing", "game-data", "manufacturing-planning", "complex-formulas", "bom-generation", "optimization" ]
You're building a command-line tool that helps EVE Online industrialists plan builds from the official Static Data Export (SDE). Parse the SDE and emit a deterministic recipe report for a target product or blueprint. Your CLI can print anything you like before it, but it **must end with** a canonical block. The SDE di...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0011 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain industry.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cl...
problem_0011
2
cpp
Hard
[ "cli-tools", "csv-parsing", "game-data", "manufacturing-planning", "complex-formulas", "bom-generation", "optimization" ]
You're extending the CLI to plan **invention** (Tech II) and **reverse engineering** (Tech III) jobs from the EVE SDE. Parse the SDE to compute the **chance to succeed per run** and emit a deterministic invention report for a target product or blueprint. The CLI may print anything you like before it, but it **must end ...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0011 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain industry.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cl...
problem_0011
3
cpp
Hard
[ "cli-tools", "csv-parsing", "game-data", "manufacturing-planning", "complex-formulas", "bom-generation", "optimization" ]
Now we are going to extend the `recipe` command to take in the **optional** flags: * `--material-efficiency`/`-me`: Integer in range [0,10]. Default is 0. * `--time-efficiency`/`-te`: Even integer in range [0,20]. Default is 0. * `--display-waste`/`-waste`: Bool flag to enable displaying the waste due to ME not being m...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0011 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain industry.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cl...
problem_0011
4
cpp
Hard
[ "cli-tools", "csv-parsing", "game-data", "manufacturing-planning", "complex-formulas", "bom-generation", "optimization" ]
Now we are going to extend the CLI with a `plan` command that creates a build plan for a given item. Extend **`industry.cpp`** to implement: ``` ./industry plan <Build string> --config <YAML build config file> --sde <SDE Directory> --output <Output directory> ``` The `<Build string>` can be in one of 3 forms: * Item ...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0011 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain industry.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cl...
problem_0011
5
cpp
Hard
[ "cli-tools", "csv-parsing", "game-data", "manufacturing-planning", "complex-formulas", "bom-generation", "optimization" ]
Now you will add a `--build-all` flag to the `plan` command that assumes you are building **ALL BUILDABLE** items in the tree. However, we now need to split jobs based on their total duration. The config now has the following new fields: ``` # Minimum number of DAYS a job must take before being split. split_days: 3.0 # <flo...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0011 / checkpoint_5. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain industry.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cl...
problem_0011
6
cpp
Hard
[ "cli-tools", "csv-parsing", "game-data", "manufacturing-planning", "complex-formulas", "bom-generation", "optimization" ]
Extend your planner to incorporate `--stocks` and `--targets`. These will point to two text files that describe what we _currently_ have on hand, and what _extra_ we want to build to maintain a certain level. ## Stockpile The `--stocks` file is a list of items we already have built/purchased that we can use in manufa...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0011 / checkpoint_6. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain industry.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cl...
problem_0012
1
cpp
Medium
[ "cli-tools", "pathfinding", "graph-algorithms", "spatial-algorithms", "data-processing", "csv-parsing", "multi-criteria-optimization", "domain-modeling", "game-mechanics" ]
Write a command-line tool that plans **Jump Freighter** (**JF**) logistics runs. The tool plans out our jumps from `start` to `end`. It calculates the Isotopes needed along with the total fatigue. ```sh ./did_he_say_jump \ --start <start station> \ --end <end station> \ --sde <path to SDE dir> \ [--r...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0012 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain did_he_say_jump.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avo...
problem_0012
2
cpp
Medium
[ "cli-tools", "pathfinding", "graph-algorithms", "spatial-algorithms", "data-processing", "csv-parsing", "multi-criteria-optimization", "domain-modeling", "game-mechanics" ]
There is a way to get better ranges through warping to a gate, taking it, then jumping immediately from cloak. It is a very safe method that can be used in ANY place. The only catch is we can only do this one time after undocking. Jumping then docking allows you to do it again. To support this behavior, add in the fla...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0012 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain did_he_say_jump.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avo...
problem_0012
3
cpp
Medium
[ "cli-tools", "pathfinding", "graph-algorithms", "spatial-algorithms", "data-processing", "csv-parsing", "multi-criteria-optimization", "domain-modeling", "game-mechanics" ]
We are going to extend the `plan` command to support High Security (HS) destinations. Recall that we cannot jump _directly_ to HS space. We also don't want to be warping around in non-HS systems. This means we want to find a non-HS system, with a station, that connects to an HS system. So the process of **returning** ...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0012 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain did_he_say_jump.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avo...
problem_0013
1
cpp
Hard
[ "rest-api", "data-processing", "optimization", "market-analysis", "csv-parsing", "stateful-service", "financial-calculations", "constraint-solving", "domain-modeling", "json" ]
Industrialists want deep market tooling: ingest raw orders, build price books, query quotes, value inventories, and reason about depth and spreads across markets. Build an API to provide this. ## Runtime & Data - Launch with `market_tools.cpp` and flags `--port`, `--address`, `--sde` (required). - SDE files in the `...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0013 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain market_tools.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid ...
problem_0013
2
cpp
Hard
[ "rest-api", "data-processing", "optimization", "market-analysis", "csv-parsing", "stateful-service", "financial-calculations", "constraint-solving", "domain-modeling", "json" ]
We are going to add in reprocessing to our API. ## Reprocessing model - Items are processed in whole portions (`portionSize` in the SDE). Only full portions are reprocessed; leftover units are returned unchanged in `products` with their own volume contribution. - Category to efficiency mapping: - Ore efficiency...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0013 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain market_tools.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid ...
problem_0013
3
cpp
Hard
[ "rest-api", "data-processing", "optimization", "market-analysis", "csv-parsing", "stateful-service", "financial-calculations", "constraint-solving", "domain-modeling", "json" ]
Extend your API with `POST /v1/compress` to find the optimal ores to buy and reprocess to meet material targets while **minimizing cost**. It MUST respect actual market order volumes. ```json { "materials": { "<material_name>": <int quantity>, ... }, "hubs": ["jita", "amarr"], // Case insensitive "cost_pe...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0013 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain market_tools.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid ...
problem_0013
4
cpp
Hard
[ "rest-api", "data-processing", "optimization", "market-analysis", "csv-parsing", "stateful-service", "financial-calculations", "constraint-solving", "domain-modeling", "json" ]
Extend your API with four profit-finding endpoints. All endpoints must incorporate **sales tax** on sell transactions and include **competition analysis** metrics. Sales tax occurs on both buying and selling. **SELL/BUY PRICE** can at most deviate 4 digits (left hand side) from the nearest order. So to undercut an ord...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0013 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain market_tools.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid ...
problem_0014
1
cpp
Medium
[ "pathfinding", "graph-algorithms", "optimization", "physics-simulation", "logistics", "cli-tools", "data-processing", "constraint-satisfaction", "game-mechanics" ]
You're building a CLI application with tools related to travel planning in New Eden using the EVE Static Data Export (SDE). You need to support planning a route from `start` to `end`. Write `hauling.cpp` that will be called via: ```sh ./hauling <start> <end> \ --align <float seconds, >0> \ --top-speed <float m/s, >...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0014 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain hauling.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
problem_0014
2
cpp
Medium
[ "pathfinding", "graph-algorithms", "optimization", "physics-simulation", "logistics", "cli-tools", "data-processing", "constraint-satisfaction", "game-mechanics" ]
Building on the existing CLI, extend your tool to support cargo hauling operations with manifest files and ship information supplied via a config file. The new CLI is: ```sh ./hauling <start> <end> --manifest <manifest yaml> --config <config yaml> --ship <ship name from config> --sde <path to sde dir> ``` The co...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0014 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain hauling.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
problem_0014
3
cpp
Medium
[ "pathfinding", "graph-algorithms", "optimization", "physics-simulation", "logistics", "cli-tools", "data-processing", "constraint-satisfaction", "game-mechanics" ]
We are now going to extend the planner to factor in contracts via a new `contracts` command that takes an input `JSONL` file. All of the existing planning functionality will go into a command called `plan`, while this new command will be called via: ```sh ./hauling contracts <start> <Path to open contracts file> \ --config <...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0014 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain hauling.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # clo...
problem_0015
1
cpp
Easy
[ "http-server", "process-execution", "state-management", "concurrency", "caching", "job-scheduling", "file-io", "glob-patterns", "dependency-graphs" ]
You’re building a tiny “local runner” web API used by internal CI to run quick checks against a throwaway working directory. Runs are **unsandboxed local executions** (your process spawns child processes directly). Each run receives a minimal **working directory image**, an optional **environment**, optional **STDIN**,...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0015 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain execution_server.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to av...
problem_0015
2
cpp
Easy
[ "http-server", "process-execution", "state-management", "concurrency", "caching", "job-scheduling", "file-io", "glob-patterns", "dependency-graphs" ]
You’re extending the “local runner” web API to **optionally track output files** produced during a run. Callers may provide a list of **POSIX globs** (relative to the ephemeral working directory) under `track`. After the command completes (success, failure, or timeout), your service must resolve the globs, **read the m...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0015 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain execution_server.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to av...
problem_0015
3
cpp
Easy
[ "http-server", "process-execution", "state-management", "concurrency", "caching", "job-scheduling", "file-io", "glob-patterns", "dependency-graphs" ]
You're extending the API so the input `files` map can carry structured data, not just raw text strings. # Deliverable The `/v1/execute` endpoint now accepts structured JSON values in the `files` object. **Supported file extensions:** * `.json`, `.yaml`, `.yml` * `.jsonl`, `.ndjson`, `.csv`, `.tsv` (valid formats are...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0015 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain execution_server.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to av...
problem_0015
4
cpp
Easy
[ "http-server", "process-execution", "state-management", "concurrency", "caching", "job-scheduling", "file-io", "glob-patterns", "dependency-graphs" ]
You're extending the API to support **command chains**—sequences of commands that execute in the same working directory. CI workflows often require multi-step builds where later steps depend on earlier artifacts, but certain cleanup or reporting steps must run regardless of build success. All previous behavior (file t...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0015 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain execution_server.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to av...
problem_0015
5
cpp
Easy
[ "http-server", "process-execution", "state-management", "concurrency", "caching", "job-scheduling", "file-io", "glob-patterns", "dependency-graphs" ]
You're extending the API to **cache execution results** in memory. CI runners often execute identical test suites or builds repeatedly; caching can eliminate redundant work. When a request matches a prior execution exactly, return the cached result instead of re-running the command. All previous behavior (command chai...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0015 / checkpoint_5. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain execution_server.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to av...
problem_0015
6
cpp
Easy
[ "http-server", "process-execution", "state-management", "concurrency", "caching", "job-scheduling", "file-io", "glob-patterns", "dependency-graphs" ]
You're extending the API to support **persistent environments with configurable concurrency modes**—named, reusable filesystem states that survive across executions. Environments can be configured to handle concurrent access in different ways: strictly exclusive, allowing parallel executions from a base state, or allow...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0015 / checkpoint_6. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain execution_server.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to av...
problem_0016
1
cpp
Easy
[ "cli-tools", "scheduling", "file-io", "incremental-backup", "yaml", "jsonl", "glob-patterns", "tar-archives", "checksums", "event-driven" ]
Build a single-run, CLI-driven backup scheduler that parses a YAML file, decides which jobs are due “now”, runs them, applies exclusion rules, and emits event history as JSON Lines. For now, we are _only_ simulating the backup. # Deliverables Extend `backup_scheduler.cpp` so that it reads a YAML schedule, evaluates what...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0016 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain backup_scheduler.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to av...
problem_0016
2
cpp
Easy
[ "cli-tools", "scheduling", "file-io", "incremental-backup", "yaml", "jsonl", "glob-patterns", "tar-archives", "checksums", "event-driven" ]
Extend your scheduler to support backup strategies: jobs can now perform full backups, create tar archives, verify files, or dry-run. # Deliverables Extend `backup_scheduler.cpp` to support strategy-driven execution. --- ## YAML Schedule Changes ```yaml version: 1 timezone: "UTC" jobs: - id: "daily-home" enab...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0016 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain backup_scheduler.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to av...
problem_0016
3
cpp
Easy
[ "cli-tools", "scheduling", "file-io", "incremental-backup", "yaml", "jsonl", "glob-patterns", "tar-archives", "checksums", "event-driven" ]
Add destination directories and incremental backups. Jobs now specify where backups are stored, and the system skips files that haven't changed since the last backup. # Deliverables **New CLI flag:** - `--backup`: Path to the backup destination directory (it may not exist yet). If not provided, assume no existin...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0016 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain backup_scheduler.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to av...
problem_0016
4
cpp
Easy
[ "cli-tools", "scheduling", "file-io", "incremental-backup", "yaml", "jsonl", "glob-patterns", "tar-archives", "checksums", "event-driven" ]
Extend the pack strategy to support incremental backups by reading existing pack files from the destination. # Deliverables Extend `backup_scheduler.cpp` to support reading pack files from the destination. **Note:** The `strategy.options.max_pack_bytes` constraint still applies when updating packs. --- ## Destinat...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0016 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain backup_scheduler.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to av...
problem_0017
1
cpp
Medium
[ "csv", "tsv", "jsonl", "parquet", "cli-tools", "schema-resolution", "type-inference", "external-sorting", "memory-constrained", "partitioning", "nested-types", "etl-pipeline" ]
# CSV Merger and Sorter A command-line tool that ingests multiple CSVs, aligns their schemas, and produces one sorted CSV output. # Deliverables Write `merge_files.cpp` with the following interface. **Usage** ``` ./merge_files --output <PATH|-> \ --key <col>[,<col>...] \ [--desc] \ [--sc...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0017 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain merge_files.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid #...
problem_0017
2
cpp
Medium
[ "csv", "tsv", "jsonl", "parquet", "cli-tools", "schema-resolution", "type-inference", "external-sorting", "memory-constrained", "partitioning", "nested-types", "etl-pipeline" ]
# Multi-Format CSV Merger Extend your tool to support heterogeneous inputs (CSV, TSV, JSON Lines, Parquet) with schema reconciliation. # Deliverables Extend `merge_files.cpp` with the following additions. **Usage** ``` ./merge_files --output <PATH|-> \ --key <col>[,<col>...] \ [--desc] \ ...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0017 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain merge_files.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid #...
problem_0017
3
cpp
Medium
[ "csv", "tsv", "jsonl", "parquet", "cli-tools", "schema-resolution", "type-inference", "external-sorting", "memory-constrained", "partitioning", "nested-types", "etl-pipeline" ]
# Partitioned Output Extend your tool to write partitioned outputs by field values (Hive-style) or by file size/row count. # Deliverables Extend with the following additions. **Usage** ``` ./merge_files --output <PATH|-> \ --key <col>[,<col>...] \ [--partition-by <col>[,<col>...]] \ [--m...
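The Hive-style layout referenced above encodes partition column values into the directory path. A small sketch of building such a relative path from (column, value) pairs; the function name and exact trailing-slash convention are illustrative assumptions:

```cpp
#include <string>
#include <vector>
#include <utility>
#include <cassert>

// Build a Hive-style relative path such as "year=2023/region=eu/"
// from ordered (column, value) pairs, one path segment per
// --partition-by column.
std::string hive_path(
    const std::vector<std::pair<std::string, std::string>>& parts) {
    std::string out;
    for (const auto& p : parts) {
        out += p.first + "=" + p.second + "/";
    }
    return out;
}
```

A real implementation would also need to escape characters that are not path-safe and define a sentinel for null values.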
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0017 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain merge_files.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid #...
problem_0017
4
cpp
Medium
[ "csv", "tsv", "jsonl", "parquet", "cli-tools", "schema-resolution", "type-inference", "external-sorting", "memory-constrained", "partitioning", "nested-types", "etl-pipeline" ]
# Nested Types Support Add support for nested types (`struct`, `array<T>`, `map<string,T>`) in the provided schema. Output remains CSV, with nested columns serialized as canonical JSON. # Deliverables **Usage Additions** ``` ./merge_files --output <PATH|-> \ --key <fieldPath>[,<fieldPath>...] \ [--partition-by <fieldPath>[,<...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0017 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain merge_files.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid #...
problem_0018
1
cpp
Medium
[ "sql", "sql-engine", "cli-tools", "file-io", "query-processing", "multi-format", "window-functions", "cte", "csv", "parquet", "json" ]
A data team wants a lightweight CLI to run **read-only SQL queries** across a folder of CSVs with matching headers. Your script takes in an SQL query and the input directory, then returns the result of executing the query on the input data. The user can optionally specify an output path to save the result to. The SQL o...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0018 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain query_files.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid #...
problem_0018
2
cpp
Medium
[ "sql", "sql-engine", "cli-tools", "file-io", "query-processing", "multi-format", "window-functions", "cte", "csv", "parquet", "json" ]
Now extend your current tool to handle multiple file formats and compressed files (either `bz2` or `gz`). The formats you need to support are: * Parquet (`.parquet`) * TSV (`.tsv`) * JSON (`.json`) / JSONL (`.jsonl`) If two files have the same name but different suffixes/compressions, your code must check that they have the...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0018 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain query_files.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid #...
problem_0018
3
cpp
Medium
[ "sql", "sql-engine", "cli-tools", "file-io", "query-processing", "multi-format", "window-functions", "cte", "csv", "parquet", "json" ]
A common pattern that occurs in stored data is to **shard** the files into multiple smaller files. Extend the CLI program to support a `--sharded` flag that can be specified multiple times. For instance: ``` --sharded transactions=data/transactions/shard-*.jsonl ``` Will merge all files matching the glob pattern `data/...
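The repeatable `--sharded table=glob` form above splits cleanly on the first `=`. This sketch parses one such argument into a (table, pattern) pair; error handling is deliberately minimal and the function name is an assumption:

```cpp
#include <string>
#include <utility>
#include <stdexcept>
#include <cassert>

// Split a --sharded argument of the form "table=globPattern" into
// its table name and glob pattern. Splitting on the first '='
// keeps any later '=' characters inside the pattern intact.
std::pair<std::string, std::string> parse_sharded(const std::string& arg) {
    auto eq = arg.find('=');
    if (eq == std::string::npos) {
        throw std::runtime_error("expected table=pattern");
    }
    return {arg.substr(0, eq), arg.substr(eq + 1)};
}
```

Expanding the returned glob against the filesystem (e.g. with POSIX `glob(3)`) and merging the matched shards is the remaining work.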
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0018 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain query_files.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid #...
problem_0018
4
cpp
Medium
[ "sql", "sql-engine", "cli-tools", "file-io", "query-processing", "multi-format", "window-functions", "cte", "csv", "parquet", "json" ]
Building on the existing file query tool, now extend the SQL engine to support **window functions** and **advanced analytics**. Window functions allow calculations across sets of table rows that are related to the current row, enabling powerful analytical queries without requiring self-joins or complex subqueries. The...
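The core of a window aggregate like `SUM(x) OVER (ORDER BY ...)` with an unbounded-preceding-to-current-row frame is a running accumulation over the ordered column. A conceptual sketch only; partitioning and other frame specifications are not modeled here:

```cpp
#include <vector>
#include <cassert>

// Running sum over an ordered column: out[i] holds the sum of
// col[0..i], i.e. the frame "rows between unbounded preceding
// and current row".
std::vector<double> running_sum(const std::vector<double>& col) {
    std::vector<double> out;
    out.reserve(col.size());
    double acc = 0.0;
    for (double v : col) {
        acc += v;
        out.push_back(acc);
    }
    return out;
}
```

Unlike a `GROUP BY` aggregate, this produces one output value per input row, which is what distinguishes window functions from ordinary aggregation.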
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0018 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain query_files.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid #...
problem_0018
5
cpp
Medium
[ "sql", "sql-engine", "cli-tools", "file-io", "query-processing", "multi-format", "window-functions", "cte", "csv", "parquet", "json" ]
Building on the existing file query tool with window functions, now extend the SQL engine to support **Common Table Expressions (CTEs)** and **advanced subqueries**. CTEs provide a way to write auxiliary statements for use in a larger query, acting like temporary tables that exist only for the duration of the query. T...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0018 / checkpoint_5. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain query_files.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid #...
problem_0019
1
cpp
Easy
[ "resource-management", "lifecycle", "versioning", "access-control", "json", "cli-tools", "crud", "state-persistence", "schema-validation", "argument-parsing" ]
# Forge Forge is a command-line tool that manages blueprints — named templates describing hardware resource requirements for allocation in a compute cluster. Forge persists all state to a local data directory and produces structured JSON output for every operation. ## Invocation ``` ./forge --data-dir <path> bluepri...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0019 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain forge.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloni...
problem_0019
2
cpp
Easy
[ "resource-management", "lifecycle", "versioning", "access-control", "json", "cli-tools", "crud", "state-persistence", "schema-validation", "argument-parsing" ]
## Allocation Creation and Discovery Previously defined behavior is unchanged unless stated here. Allocations are resource requests derived from blueprints. Creating an allocation references a blueprint by name and expands its requirement sets into individual allocation records, each tracking a binding lifecycle. ##...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0019 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain forge.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloni...
problem_0019
3
cpp
Easy
[ "resource-management", "lifecycle", "versioning", "access-control", "json", "cli-tools", "crud", "state-persistence", "schema-validation", "argument-parsing" ]
## Allocation Binding and Removal Previously defined behavior is unchanged unless stated here. Allocations support constrained binding to assignments and two modes of deletion with different idempotency guarantees. ### Binding `forge allocation bind --data-dir <path>` reads a JSON payload from stdin that binds or u...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0019 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain forge.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloni...
problem_0019
4
cpp
Easy
[ "resource-management", "lifecycle", "versioning", "access-control", "json", "cli-tools", "crud", "state-persistence", "schema-validation", "argument-parsing" ]
## Forge — Unit Inventory Previously defined behavior is unchanged unless stated here. ## Unit Resource A unit record contains: - `uuid` — auto-assigned identifier (v4, lowercase hyphenated). - `category` — hardware classification. - `manufacturer` — hardware manufacturer name. - `host` — host machine where the uni...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0019 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain forge.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloni...
problem_0019
5
cpp
Easy
[ "resource-management", "lifecycle", "versioning", "access-control", "json", "cli-tools", "crud", "state-persistence", "schema-validation", "argument-parsing" ]
## Forge — Unit State Transitions Previously defined behavior is unchanged unless stated here. ### New Unit Fields All unit responses (`get`, `list`, `activate`, `deactivate`) now include two additional fields: - **`capabilities`**: an array of strings representing the unit's capability set, sorted ascending lexico...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0019 / checkpoint_5. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain forge.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloni...
problem_0019
6
cpp
Easy
[ "resource-management", "lifecycle", "versioning", "access-control", "json", "cli-tools", "crud", "state-persistence", "schema-validation", "argument-parsing" ]
## Module Operations and Metadata Tags Previously defined behavior is unchanged unless stated here. `forge` gains two resource types: modules and tags. Modules are programmable hardware components associated with units; they support filtered listing, individual retrieval, and firmware flashing. Tags are key-value met...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0019 / checkpoint_6. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain forge.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloni...
problem_0019
7
cpp
Easy
[ "resource-management", "lifecycle", "versioning", "access-control", "json", "cli-tools", "crud", "state-persistence", "schema-validation", "argument-parsing" ]
## Forge — Revision Negotiation and Feature Gating Previously defined behavior is unchanged unless stated here. `forge` supports revision negotiation: an optional `--revision` flag on every command controls which revision of the tool's behavior is active. ### Version Discovery `forge version` returns the supported ...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0019 / checkpoint_7. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain forge.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloni...
problem_0019
8
cpp
Easy
[ "resource-management", "lifecycle", "versioning", "access-control", "json", "cli-tools", "crud", "state-persistence", "schema-validation", "argument-parsing" ]
## Access Control and Administrative Operations Previously defined behavior is unchanged unless stated here. `forge` gains policy-based access control for protected operations, an administrative readiness check, schema lifecycle commands, and startup-time plugin configuration validation. ### Access Control Blueprin...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0019 / checkpoint_8. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain forge.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloni...
problem_0020
1
cpp
Easy
[ "cli-tools", "latex", "markdown", "text-processing", "parsing", "conversion", "code-generation", "file-io", "format-translation" ]
# Basic LaTeX-to-Markdown Converter You are building `l2m.cpp`, a command-line tool that converts one LaTeX source file into KaTeX-compatible Markdown. ## Entry Point Invoke the tool as `./l2m INPUT_FILE [-o OUTPUT_FILE]`. When `-o` is omitted, write output in the current working directory using the input basename...
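The "when `-o` is omitted" rule above amounts to deriving an output name from the input path. A sketch under the assumption that the default extension is `.md` and that paths use `/` separators; the function name is hypothetical:

```cpp
#include <string>
#include <cassert>

// Derive the default output filename: strip any directory prefix,
// drop the input extension, and append ".md", so the file lands in
// the current working directory under the input basename.
std::string default_output(const std::string& input) {
    auto slash = input.find_last_of('/');
    std::string base =
        (slash == std::string::npos) ? input : input.substr(slash + 1);
    auto dot = base.find_last_of('.');
    if (dot != std::string::npos) base = base.substr(0, dot);
    return base + ".md";
}
```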
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0020 / checkpoint_1. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain l2m.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloning...
problem_0020
2
cpp
Easy
[ "cli-tools", "latex", "markdown", "text-processing", "parsing", "conversion", "code-generation", "file-io", "format-translation" ]
# Enumerate and List Conversion Extend the converter with `enumerate` and `itemize` environment handling while preserving prior feature behavior. Only `enumerate` and `itemize` are supported list environments. ## Enumerate Conversion When parsing `\begin{enumerate}...\end{enumerate}`: - Convert each `\item` to a n...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0020 / checkpoint_2. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain l2m.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloning...
problem_0020
3
cpp
Easy
[ "cli-tools", "latex", "markdown", "text-processing", "parsing", "conversion", "code-generation", "file-io", "format-translation" ]
# Macro Expansion and Command Processing Extend the converter with preamble macro expansion and additional command processing while preserving prior feature behavior. ## Macro Expansion When `\begin{document}` is present, inspect only the preamble section (text before `\begin{document}`) for parameter-free macro de...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0020 / checkpoint_3. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain l2m.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloning...
problem_0020
4
cpp
Easy
[ "cli-tools", "latex", "markdown", "text-processing", "parsing", "conversion", "code-generation", "file-io", "format-translation" ]
# Admonition Environments Extend the converter with mode-based rendering for recognized theorem-like environments and `solution` blocks while preserving prior feature behavior. ## CLI Flags Command shape: `./l2m INPUT_FILE [-o OUTPUT_FILE] [--docusaurus | --mkdocs]` `--docusaurus` and `--mkdocs` are mutually exclu...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0020 / checkpoint_4. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain l2m.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloning...
problem_0020
5
cpp
Easy
[ "cli-tools", "latex", "markdown", "text-processing", "parsing", "conversion", "code-generation", "file-io", "format-translation" ]
# Exercice Blocks, Theorem Titles, and Alternate Enumerate Mode Extend the converter with numbered `exercice` handling, optional theorem-like titles, and a simple enumerate rendering mode while preserving prior feature behavior. ## `exercice` Environment For `\begin{exercice}...\end{exercice}`, render according to ...
#!/usr/bin/env bash set -euo pipefail # Benchmark test runner for problem_0020 / checkpoint_5. # Usage: # bash test.sh /path/to/submission [extra pytest args...] # # The submission directory should contain l2m.cpp and, optionally, # requirements.txt. Set BENCH_PROBLEMS_REPO to an existing checkout to avoid # cloning...