---
license: apache-2.0
datasets:
- TheFinAI/FinCoT
language:
- en
base_model:
- Qwen/Qwen3-14B
pipeline_tag: text-generation
tags:
- finance
---

# 🦙 Fin-o1-14B

**Fin-o1-14B** is a fine-tuned version of **Qwen3-14B**, designed to improve performance on **financial reasoning tasks**. The model was trained with **SFT** and **GRPO** on **TheFinAI/FinCoT**, enhancing its capabilities in financial reasoning. See our paper https://arxiv.org/abs/2502.08127 for more details.

## 📌 Model Details

- **Model Name**: `Fin-o1-14B`
- **Base Model**: `Qwen3-14B`
- **Fine-Tuned On**: `TheFinAI/FinCoT`, derived from the FinQA, TATQA, DocMath-Eval, Econ-Logic, BizBench-QA, and DocFinQA datasets
- **Training Method**: SFT and GRPO
- **Objective**: Enhance performance on financial mathematical reasoning tasks
- **Tokenizer**: Inherited from `Qwen3-14B`

## 📊 Training Configuration

- **Training Hardware**: GPU: `[e.g., 8xA100]`
- **Batch Size**: `[e.g., 16]`
- **Learning Rate**: `[e.g., 2e-5]`
- **Epochs**: `[e.g., 3]`
- **Optimizer**: `[e.g., AdamW, LAMB]`

## 🔧 Usage

To use `Fin-o1-14B` with Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TheFinAI/Fin-o1-14B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

input_text = "What is the result of 3 - 5?"
inputs = tokenizer(input_text, return_tensors="pt")

output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

## 💡 Citation

If you use this model in your research, please cite:

```bibtex
@article{qian2025fino1,
  title={Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance},
  author={Qian, Lingfei and Zhou, Weipeng and Wang, Yan and Peng, Xueqing and Huang, Jimin and Xie, Qianqian},
  journal={arXiv preprint arXiv:2502.08127},
  year={2025}
}
```
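The snippet in the Usage section above feeds raw text to the model. Because the base model is a Qwen3 chat model, prompting through the tokenizer's chat template is usually more reliable. The following is a minimal sketch of that pattern, assuming the fine-tuned model inherits the standard Qwen3 chat template; the financial question, `device_map`, and dtype settings are illustrative, not part of the official card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TheFinAI/Fin-o1-14B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Illustrative hardware settings; adjust dtype/device placement to your setup.
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# Wrap the question as a chat message and render it with the model's chat
# template (assumed to be inherited from the Qwen3 base model).
messages = [
    {
        "role": "user",
        "content": "A portfolio grew from $120,000 to $138,000 in one year. "
                   "What was the annual return in percent?",
    }
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
generated = output[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```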