# CaaLM/CaaLM-v1

## What is this?
CaaLM (Code as a Language Model) is a 1.5B-parameter model that predicts the output of code, without a compiler, runtime, or interpreter.
You give it code. It tells you what it would print.
The interesting part: it was never trained on a fixed set of languages. Instead, it was trained on real languages (Python, JavaScript, Lua, COBOL) alongside 200 synthetically generated fake programming languages, each with randomized syntax but consistent semantics. The goal was to teach the model what execution means, not what any specific language looks like.
This means it can predict the output of languages it has never seen before.
## Performance

Overall: 96.2% (50/52 tests)
| Category | Accuracy | Passed/Total |
|---|---|---|
| Real: Python | 100% | 10/10 |
| Real: JavaScript | 100% | 8/8 |
| Real: Lua | 100% | 6/6 |
| Real: COBOL | 75% | 3/4 |
| Novel Fake: Tier 1 (assign + print) | 100% | 8/8 |
| Novel Fake: Tier 2 (conditionals) | 86% | 6/7 |
| Novel Fake: Tier 3 (loops) | 100% | 4/4 |
| Edge Cases | 100% | 5/5 |
The novel fake language tests use languages that were never seen during training: completely invented syntax like `SCRIBBLE @x BECOMES 7` or `WONDER n > 10`. The model infers the semantics from context and gets them right.
## Known Failures

Two failures in the benchmark, both explainable:

- COBOL zero-padding: predicted `08` instead of `0008`. Got the value right, missed the `PIC 9(4)` padding format. Data consistency issue.
- If-without-else: when a conditional has no else branch and the condition is false, the correct output is empty. The model predicted `NO`, hallucinating an else branch. Most training data had if/else pairs, so it defaulted to that pattern.
## How It Works

Input format:

```
Code:
<your code here>

Output:
```

The model completes the `Output:` section with the predicted stdout.
### Example: Real Language

```
Code:
a = 10
b = 20
print(a + b)

Output:
30
```
### Example: Novel Fake Language (never seen during training)

```
Code:
SCRIBBLE @x BECOMES 7
SCRIBBLE @y BECOMES 3
YELL @x + @y

Output:
10
```

```
Code:
BIND n TO 15
WONDER n > 10
SHOUT YES
STOP

Output:
YES
```
## Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "CaaLM/CaaLM-v1",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("CaaLM/CaaLM-v1")
model.eval()

def predict_output(code: str) -> str:
    prompt = f"Code:\n{code}\n\nOutput:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=128,
            do_sample=False,
            pad_token_id=tokenizer.eos_token_id,
        )
    return tokenizer.decode(
        outputs[0][inputs.input_ids.shape[1]:],
        skip_special_tokens=True
    ).strip()

# Real language
print(predict_output("a = 6\nb = 7\nprint(a * b)"))
# → 42

# Novel fake language
print(predict_output("STORE X := 10\nSTORE Y := 5\nSPEAK X + Y"))
# → 15
```
## Training

### Data

Training data was split between real and synthetic languages.

Real languages (8,000 examples total, 2,000 each):

- Python: clean semantics, baseline
- JavaScript: type coercion, implicit behaviors
- Lua: minimal syntax, sparse
- COBOL: verbose, English-like, no conventional syntax markers
Synthetic languages (120,000 examples total):

- 200 procedurally generated fake languages
- Each language has randomized keywords, operators, variable styles, and block delimiters
- Semantics are consistent within each language, but syntax varies wildly across all 200
- Programs generated via a Python simulator, so outputs are ground truth from actual execution
- Three complexity tiers: assign+print (30%), conditionals (40%), loops (30%)

The spec for each fake language is discarded after data generation. The model only ever sees (code, output) pairs; it never gets a syntax guide.
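The generation idea can be sketched in a few lines of Python. This is not the released pipeline; the keyword pools, spec format, and helper names below are all hypothetical, chosen only to mirror the fake-language examples above (Tier 1, assign + print):

```python
import random

# Hypothetical sketch of Tier-1 (assign + print) data generation.
# Keyword pools, spec format, and ranges are assumptions, not the real pipeline.
ASSIGN_KW = ["SCRIBBLE", "BIND", "STORE", "TAKE"]
PRINT_KW = ["YELL", "SHOUT", "SPEAK", "HOLLER"]
VAR_STYLE = ["{name}", "@{name}", "${name}"]
ASSIGN_OP = ["BECOMES", "TO", ":=", "<-"]

def make_language(rng: random.Random) -> dict:
    """Sample one fake-language spec: random surface syntax, fixed semantics."""
    return {
        "assign": rng.choice(ASSIGN_KW),
        "print": rng.choice(PRINT_KW),
        "var": rng.choice(VAR_STYLE),
        "op": rng.choice(ASSIGN_OP),
    }

def make_example(spec: dict, rng: random.Random) -> tuple:
    """Render a tiny assign+print program, then execute its semantics in
    Python so the target output is ground truth from actual execution."""
    a, b = rng.randint(1, 20), rng.randint(1, 20)
    x = spec["var"].format(name="x")
    y = spec["var"].format(name="y")
    code = "\n".join([
        f'{spec["assign"]} {x} {spec["op"]} {a}',
        f'{spec["assign"]} {y} {spec["op"]} {b}',
        f'{spec["print"]} {x} + {y}',
    ])
    return code, str(a + b)

rng = random.Random(0)
spec = make_language(rng)          # the spec itself is discarded afterwards
code, output = make_example(spec, rng)
print(code)
print("Output:", output)
```

Only the `(code, output)` pair would be kept as a training example; the spec dictionary is thrown away, matching the "no syntax guide" setup described above.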
### Configuration
- Base model: Qwen/Qwen2.5-1.5B (base, not instruct)
- Training method: Full fine-tuning (no LoRA)
- Loss masking: Loss computed on output tokens only, not prompt
- Precision: BF16
- Optimizer: AdamW (lr=2e-5, weight_decay=0.01)
- Scheduler: Cosine with 3% warmup
- Batch size: 8 per device × 4 gradient accumulation = 32 effective
- Epochs: 3
- Max sequence length: 512 tokens
- Hardware: NVIDIA A100 SXM4 40GB
- Training time: 66.5 minutes
- Training cost: ~$0.82
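The loss-masking entry above is a standard recipe: label positions belonging to the prompt are set to -100, the index that PyTorch's cross-entropy loss ignores, so gradients come only from the output tokens. A minimal, list-based sketch (the token IDs and the `build_labels` helper are illustrative, not taken from the training code):

```python
IGNORE_INDEX = -100  # ignored by PyTorch's CrossEntropyLoss by default

def build_labels(input_ids: list, prompt_len: int) -> list:
    """Copy input_ids, but mask every prompt token so the loss is
    computed only on the tokens after the 'Output:\n' marker."""
    return [IGNORE_INDEX] * prompt_len + input_ids[prompt_len:]

# Hypothetical token IDs: the first 5 encode "Code:\n...\n\nOutput:\n",
# the last 2 encode the expected stdout.
ids = [11, 52, 93, 7, 4, 880, 881]
labels = build_labels(ids, prompt_len=5)
print(labels)  # → [-100, -100, -100, -100, -100, 880, 881]
```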
## Supported Operations
The model reliably handles:
- Variable assignment and arithmetic
- Print / output statements
- Conditionals (if/else)
- While loops with accumulator patterns
- String output
- Basic error behavior (empty output when conditions not met)
It does not handle: functions, recursion, file I/O, complex data structures, pipes, or multi-line string manipulation. These may work in real languages due to Qwen's pretraining knowledge but are not guaranteed.
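To make the while-loop accumulator pattern concrete, here is a program in a hypothetical fake language (invented for this example, not from the benchmark) alongside the Python equivalent that defines its ground-truth output:

```python
# Hypothetical fake-language program (Tier 3: loops):
#
#   TAKE total AS 0
#   TAKE i AS 1
#   SPIN WHILE i <= 3
#     TAKE total AS total + i
#     TAKE i AS i + 1
#   END
#   HOLLER total
#
# Python equivalent, which yields the ground-truth stdout:
total = 0
i = 1
while i <= 3:
    total += i
    i += 1
print(total)  # → 6
```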
## Limitations
- No actual code execution โ outputs are predictions, not guarantees
- If-without-else edge cases can produce hallucinated else branches
- COBOL numeric padding format is inconsistent
- Long programs (many steps) may degrade in accuracy as state complexity grows
- Novel fake languages with very unusual execution models (non-linear control flow, stack-based semantics) are untested
- Context window limits programs to ~512 tokens
## Why
The original motivation was to ask: can a language model learn what execution means as an abstract concept, independent of any specific language's syntax?
The novel fake language results suggest yes, at least for basic programs. The model sees `WONDER x > 10` for the first time and figures out it's a conditional. It sees `SCRIBBLE @x BECOMES 7` and figures out it's assignment. It doesn't know these keywords; it infers their roles from the structure of the code and the patterns it learned during training.
Whether this scales to more complex programs, more alien execution models, or larger languages is an open question.
## Model Lineage

CaaLM-v1 is the first model in the CaaLM series, and a spiritual successor to the LaaLM project.

- LaaLM-v1: T5-base fine-tuned to simulate Linux shell commands (external state)
- LaaLM-exp-v1: Qwen 3B fine-tuned for conversational Linux terminal emulation (internal state)
- CaaLM-v1: Qwen 1.5B fine-tuned for language-agnostic code output prediction (current)
## License
Apache 2.0 (inherited from Qwen 2.5 base model)