Add model card for Re:Form

#1
by nielsr - opened
Files changed (1)
  1. README.md +79 -0
README.md ADDED
@@ -0,0 +1,79 @@
---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- qwen2
- code-generation
- formal-verification
- dafny
- reinforcement-learning
---

# Re:Form: Reducing Human Priors in Scalable Formal Software Verification with RL in LLMs

This repository contains a model from the **Re:Form** framework, presented in the paper [Re:Form -- Reducing Human Priors in Scalable Formal Software Verification with RL in LLMs: A Preliminary Study on Dafny](https://huggingface.co/papers/2507.16331).

**Re:Form** grounds LLMs in a rigorous formal system, the Dafny formal language, so that their reasoning processes and outcomes can be verified automatically and with mathematical guarantees. This addresses the unreliability and poor scalability of verifying LLMs that operate in informal natural language, and it is pivotal for achieving large-scale, reliable formal software verification.

- 📄 [Paper](https://huggingface.co/papers/2507.16331)
- 🌐 [Project Page](https://veri-code.github.io/ReForm-page)
- 💻 [GitHub Repository](https://github.com/Veri-Code/ReForm)

## Overview

The `Re:Form` pipeline reduces reliance on human priors in complex programming tasks through an automatic, scalable data curation pipeline, combined with careful Reinforcement Learning (RL) designs that incorporate feedback from a formal language verifier. With this recipe, even small models (e.g., 0.5B parameters) generate syntactically valid and verifiable Dafny code, surpass proprietary models, and generalize more strongly to out-of-domain tasks, including the challenging DafnyComp benchmark.

![Overall Pipeline](https://github.com/Veri-Code/ReForm/raw/main/assets/pipeline.png)
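
To make the verifier-in-the-loop idea concrete, here is a minimal sketch of a binary reward signal driven by the Dafny verifier. This is only an illustration, not the paper's exact reward design; it assumes the Dafny CLI (https://github.com/dafny-lang/dafny) is installed and on your PATH.

```python
import subprocess
import tempfile

def dafny_verifier_reward(code: str, timeout_s: float = 60.0) -> float:
    """Illustrative reward: 1.0 if the Dafny verifier accepts the program, else 0.0.

    A minimal sketch of verifier feedback for RL, not the authors' exact design.
    Assumes the `dafny` CLI is installed and on PATH.
    """
    # Write the candidate program to a temporary .dfy file.
    with tempfile.NamedTemporaryFile(mode="w", suffix=".dfy", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        # `dafny verify` exits with code 0 only when verification succeeds.
        result = subprocess.run(
            ["dafny", "verify", path],
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        # Treat verifier timeouts as failures.
        return 0.0
```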

## Usage

You can load and use the `Re:Form` models with the Hugging Face `transformers` library; they are based on the `Qwen2ForCausalLM` architecture. Here's an example of generating Dafny code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Replace "Veri-Code/sft_7B" with the specific checkpoint you want to load,
# e.g. "Veri-Code/sft_0.5B", "Veri-Code/sft_1.5B", "Veri-Code/sft_3B",
# "Veri-Code/sft_14B", or "Veri-Code/14B-RL-entropy".
model_id = "Veri-Code/sft_7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # use torch.float16 on GPUs without bfloat16 support
    device_map="auto",
)

# Example: ask the model to complete a Dafny function that returns the
# absolute value, with a postcondition the verifier must discharge.
dafny_prompt = """function Abs(x: int): int
ensures Abs(x) >= 0
{
"""

input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": dafny_prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=100)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generated_text)
```
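
Since the target language is Dafny, a natural next step is to check the generated program with the Dafny verifier. The snippet below is a minimal sketch that assumes the Dafny CLI is installed; depending on the checkpoint's response format, you may first need to strip the prompt or chat markup from `generated_text` and extract only the Dafny code.

```python
import subprocess
import tempfile

# Write the generated program to a .dfy file and run the Dafny verifier on it.
# If the decoded text still contains the prompt or chat markup, extract the
# Dafny code first.
with tempfile.NamedTemporaryFile(mode="w", suffix=".dfy", delete=False) as f:
    f.write(generated_text)  # from the generation example above
    dfy_path = f.name

result = subprocess.run(["dafny", "verify", dfy_path], capture_output=True, text=True)
print(result.stdout)
print("Verified!" if result.returncode == 0 else "Verification failed.")
```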

For more detailed usage, including training and evaluation scripts, please refer to the [official GitHub repository](https://github.com/Veri-Code/ReForm).

## Citation

If you use this work in your research, please cite:

```bibtex
@misc{yan2025reformreducinghuman,
    title={Re:Form -- Reducing Human Priors in Scalable Formal Software Verification with RL in LLMs: A Preliminary Study on Dafny},
    author={Chuanhao Yan and Fengdi Che and Xuhan Huang and Xu Xu and Xin Li and Yizhi Li and Xingwei Qu and Jingzhe Shi and Zhuangzhuang He and Chenghua Lin and Yaodong Yang and Binhang Yuan and Hang Zhao and Yu Qiao and Bowen Zhou and Jie Fu},
    year={2025},
    eprint={2507.16331},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2507.16331},
}
```