# FoxBrain: Foxconn's Industrial-Grade Reasoning LLM for Domain-Specific Applications
FoxBrain is a large language model (LLM) independently developed by Foxconn, representing a major milestone in the company's long-term strategy to create AI that deeply understands industrial knowledge and high-reliability domains.
Currently at version 1.0, FoxBrain delivers strong performance in Chinese language understanding and generation, and is evolving to offer increasingly domain-specialized capabilities, with a focus on reasoning and decision-making in complex industrial contexts.

Official GitHub: FoxBrain_LLMs
## Preview Notice
The current version of the FoxBrain model is in the preview stage, and access is temporarily limited to collaborating partners for early testing and feedback. The official release is scheduled for late June to early July, at which point the model will be publicly available and fully licensed for general use.
Thank you for your interest and support!
## Key Features
- **First Reasoning-Capable Industrial LLM**: Designed for multi-step thinking, logical inference, and complex task understanding in domain-specific environments.
- **Industrial-Grade Performance**: Built for the precision, consistency, and robustness required in mission-critical industrial applications.
- **Optimized for Traditional Chinese**: Fine-tuned on high-quality Taiwanese Traditional Chinese datasets for superior linguistic alignment.
- **Fluent and Structured Output**: Generates grammatically sound, coherent, and stylistically natural responses.
- **Fast Inference with vLLM**: Easily deployable on 2 to 8 H100 GPUs with low latency.
FoxBrain is not just another general-purpose LLM: it is a domain-adapted, reasoning-focused, industrial-grade AI foundation, purpose-built for the challenges of real-world, high-reliability environments.
## Academic & Human Evaluation Benchmarks
### Taiwan MMLU+ (Academic Benchmark)
FoxBrain was evaluated on Taiwan MMLU+, a domain-adjusted version of the Massive Multitask Language Understanding benchmark. Results show that the model is competitive in reasoning and general knowledge across a range of academic and technical fields, demonstrating strong capability in domain-specific contexts.

### MT-Bench (Human Preference Evaluation)
FoxBrain was also benchmarked using MT-Bench, a multi-turn human evaluation benchmark. It was compared against:
- Meta-LLaMA3.1-70B-Instruct
- Taiwan-LLaMA-70B-Instruct
Evaluation metrics:
- Pairwise Comparison
- Voting Preference
- Overall Rating Score (Human & LLM-Judge)

FoxBrain demonstrated favorable results in pairwise human judgment, highlighting its strength in multi-turn instruction-following and industrial dialogue scenarios.
### News Rewriting Task
To showcase FoxBrain's strengths, we evaluated it against Claude and ChatGPT on a Traditional Chinese news rewriting task (100 samples).
| Model | Score | Semantic Sim. | Lexical Diversity | Syntactic Complexity | BLEU | ROUGE | Length Ratio |
|---|---|---|---|---|---|---|---|
| FoxBrain | 68.50 | 0.9568 | 0.5203 | 0.8910 | 0.3641 | 0.3853 | 0.5119 |
| Claude | 62.26 | 0.9207 | 0.6567 | 0.7385 | 0.0388 | 0.0142 | 0.3270 |
| ChatGPT | 61.19 | 0.9450 | 0.5666 | 0.6683 | 0.0684 | 0.0703 | 0.3299 |
This task is one example use case. FoxBrain also performs well in summarization, Q&A, dialogue, and instruction-following tasks.
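As a rough illustration of what the surface columns above measure, here is a minimal Python sketch of similar metrics for a single source/rewrite pair. This is an assumption-laden approximation, not the evaluation pipeline used for the table: BLEU comes from the `sacrebleu` package with its built-in Chinese tokenizer, and lexical diversity and length ratio are simple character-level ratios.

```python
import sacrebleu


def rewrite_metrics(source: str, rewrite: str) -> dict:
    """Rough surface metrics for a Traditional Chinese rewrite (illustrative only)."""
    # Overlap between the rewrite and the source, via sacrebleu's Chinese tokenizer.
    bleu = sacrebleu.corpus_bleu([rewrite], [[source]], tokenize="zh").score / 100
    # Lexical diversity as a character-level type-token ratio.
    chars = [c for c in rewrite if not c.isspace()]
    lexical_diversity = len(set(chars)) / max(len(chars), 1)
    # Length of the rewrite relative to the source.
    length_ratio = len(rewrite) / max(len(source), 1)
    return {
        "bleu": bleu,
        "lexical_diversity": lexical_diversity,
        "length_ratio": length_ratio,
    }


# Toy example: a short source sentence and a paraphrase of it.
print(rewrite_metrics("鴻海今日發表自行研發的大型語言模型。",
                      "鴻海今天公開了自主研發的大型語言模型。"))
```

Semantic similarity and syntactic complexity require heavier tooling (e.g., sentence embeddings and a parser) and are omitted from this sketch.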
## Quickstart: Inference with vLLM
### Environment Requirements
- Python 3.8+
- CUDA-compatible environment
- 2 to 8 × H100 GPUs
- `vllm` installed
### Install vLLM

```bash
pip install vllm==0.8.4
```
### Launch Inference API

```bash
# Note: with `vllm serve`, the model path is a positional argument.
vllm serve FoxBrain_70B_fp16 \
  --api-key foxbrain-cit \
  --port 8800 \
  --enable-auto-tool-choice \
  --tool-call-parser llama3_json \
  --chat-template llama31_chattemplate.jinja \
  --max-model-len 32768 \
  --tensor-parallel-size 2 \
  --gpu-memory-utilization 0.98 \
  --enforce-eager
```
Key flags:
- `FoxBrain_70B_fp16`: Path to the model weights (positional argument)
- `--chat-template`: Prompt formatting template (Jinja)
- `--tensor-parallel-size`: Number of GPUs (2/4/8)
- `--max-model-len`: Token limit (32K to 128K depending on GPU count)
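The vLLM server exposes an OpenAI-compatible API, so once it is up you can query it with the standard `openai` Python client. The sketch below is a minimal example, not part of the official docs; the host, port, API key, and model name follow the serve command above, so adjust them to match your deployment.

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server started above.
client = OpenAI(
    base_url="http://localhost:8800/v1",  # matches --port 8800
    api_key="foxbrain-cit",               # matches --api-key
)

response = client.chat.completions.create(
    # vLLM registers the model under the name/path it was launched with.
    model="FoxBrain_70B_fp16",
    messages=[
        # "Briefly introduce FoxBrain in Traditional Chinese."
        {"role": "user", "content": "請用繁體中文簡單介紹 FoxBrain。"},
    ],
    max_tokens=512,
    temperature=0.7,
)
print(response.choices[0].message.content)
```

Because the server was launched with `--enable-auto-tool-choice` and `--tool-call-parser llama3_json`, the same endpoint also accepts OpenAI-style `tools` definitions for function calling.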
Reference: FoxBrain vLLM Deployment Guide
Colab Demo: FoxBrain API Notebook
## Suggested Use Cases
FoxBrain is suitable for a wide range of Traditional Chinese NLP tasks and industrial applications:
- General-purpose Chinese language generation and summarization
- News rewriting and technical report generation
- Dialogue systems and Q&A agents
- Assisting with code generation and explanation
- Future versions will support manufacturing process optimization, industrial knowledge Q&A, and smart city scenarios
## Roadmap

- Version 1.0: Focused on strong Chinese language proficiency across general domains
- Version 2.x+: Gradually enhanced with industrial knowledge, domain-specific reasoning, and smart manufacturing expertise
- Long-Term Goal: Build an LLM that truly understands manufacturing, automation, and smart city ecosystems

FoxBrain is more than a language model: it is Foxconn's commitment to building next-generation AI for the future of smart industries.
## License
This model is released under the Llama 3.1 Community License Agreement.
## Contributors
- AI Research Center of Hon Hai Research Institute (model training, deployment & evaluation)
- Meta-Llama (base model)
## Contact
For support or partnership inquiries: harry.sh.liu@foxconn.com
Model tree for FoxconnAI/Llama_3.1-FoxBrain-70B:
- Base model: meta-llama/Llama-3.1-70B