AntiFraud-SFT

AntiFraud-SFT is a supervised fine-tuned (SFT) audio-text fraud-detection model built on Qwen2-Audio for Chinese telecom fraud analysis.

Overview

This model is trained on the TeleAntiFraud-28k dataset and is designed for:

  • telecom fraud detection from call audio
  • scene understanding from audio-text conversational inputs
  • fraud-related reasoning over Chinese phone-call content

The current release is intended as a research model checkpoint for reproduction and further study.

Model Details

  • Base model: Qwen/Qwen2-Audio-7B-Instruct
  • Architecture: Qwen2AudioForConditionalGeneration
  • Parameters: ~8B
  • Tensor dtype: BF16
  • Framework: PyTorch
  • Weight format: safetensors
  • License: Apache License 2.0

Usage

This repository contains the model weights and tokenizer / processor files required for inference with transformers.

Example loading code:

from transformers import AutoProcessor, Qwen2AudioForConditionalGeneration

model_id = "JimmyMa99/AntiFraud-SFT"

processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2AudioForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype="auto",   # load the BF16 weights in their native precision
    device_map="auto",
)
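Building on the loading snippet above, an end-to-end inference call might look like the sketch below. The conversation format follows the standard Qwen2-Audio chat template in transformers; the audio file name, prompt text, and generation settings are illustrative assumptions, not values prescribed by this release.

```python
def build_fraud_check_conversation(audio_path: str, question: str):
    """Build a Qwen2-Audio style chat message pairing one audio clip
    with a text question (paths and wording are illustrative)."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "audio", "audio_url": audio_path},
                {"type": "text", "text": question},
            ],
        }
    ]


def run_inference(model, processor, audio_path: str, question: str) -> str:
    """Run one audio-text turn through the model and return the decoded reply."""
    import librosa  # assumed available for audio loading/resampling

    conversation = build_fraud_check_conversation(audio_path, question)
    text = processor.apply_chat_template(
        conversation, add_generation_prompt=True, tokenize=False
    )
    # Resample to the rate the feature extractor expects (16 kHz for Qwen2-Audio).
    audio, _ = librosa.load(audio_path, sr=processor.feature_extractor.sampling_rate)
    inputs = processor(text=text, audios=[audio], return_tensors="pt", padding=True)
    inputs = inputs.to(model.device)
    generated = model.generate(**inputs, max_new_tokens=256)
    # Strip the prompt tokens so only the model's reply is decoded.
    generated = generated[:, inputs.input_ids.size(1):]
    return processor.batch_decode(generated, skip_special_tokens=True)[0]
```

A call such as `run_inference(model, processor, "call.wav", "Is this call fraudulent? Explain your reasoning.")` would then return the model's fraud analysis as text.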

Notes

  • This release focuses on model weights for research and benchmarking.
  • For evaluation scripts and LM-as-judge utilities, see the evaluation/ directory in the TeleAntiFraud repository.
  • For the end-to-end reinforcement-learning follow-up paper, see SAFE-QAQ.

Citation

@inproceedings{ma2025teleantifraud,
  title={TeleAntiFraud-28k: An Audio-Text Slow-Thinking Dataset for Telecom Fraud Detection},
  author={Ma, Zhiming and Wang, Peidong and Huang, Minhua and Wang, Jinpeng and Wu, Kai and Lv, Xiangzhao and Pang, Yachun and Yang, Yin and Tang, Wenjie and Kang, Yuchen},
  booktitle={Proceedings of the 33rd ACM International Conference on Multimedia},
  pages={5853--5862},
  year={2025}
}

@article{wang2026safe,
  title={SAFE-QAQ: End-to-End Slow-Thinking Audio-Text Fraud Detection via Reinforcement Learning},
  author={Wang, Peidong and Ma, Zhiming and Dai, Xin and Liu, Yongkang and Feng, Shi and Yang, Xiaocui and Hu, Wenxing and Wang, Zhihao and Pan, Mingjun and Yuan, Li and others},
  journal={arXiv preprint arXiv:2601.01392},
  year={2026}
}