TorchSight Beam q4_K_M

A cybersecurity document classifier: a LoRA fine-tune of Qwen 3.5 27B, quantized to q4_K_M (approximately 17 GB as GGUF).

Recommended hardware: 32 GB unified memory (e.g. M-series Mac) or 24 GB GPU.

This is the default quantization for the TorchSight system, released alongside:

Dobrovolskyi, I. Security Document Classification with a Fine-Tuned Local Large Language Model: Benchmark Data and an Open-Source System. Journal of Information Security and Applications, 2026.

Benchmark results

All models were evaluated under identical methodology (Alpaca-style prompt, Ollama /api/generate, temperature = 0, num_predict = 2048) on the companion dataset torchsight/cybersecurity-classification-benchmark. Canonical numbers live in that repo's BENCHMARK_NUMBERS.md.

Primary: eval-1000-synthetic (n = 1,000)

| Model | Type | Cat. acc [95% CI] | Subcat. acc |
|---|---|---|---|
| Beam q4_K_M | Local (LoRA) | 95.0% [93.5, 96.2] | 48.2% |
| Beam f16 | Local (LoRA) | 93.2% [91.5, 94.6] | 51.1% |
| Beam q8_0 | Local (LoRA) | 93.0% [91.2, 94.4] | 51.4% |
| Claude Sonnet 4 | Commercial API | 79.9% [77.3, 82.3] | 23.0% |
| Claude Opus 4 | Commercial API | 79.9% [77.3, 82.3] | 22.5% |
| GPT-5 | Commercial API | 76.9% [74.2, 79.4] | 11.6% |
| Gemini 2.5 Pro | Commercial API | 75.4% [72.6, 78.0] | 21.0% |
| Qwen 3.5 27B base | Local (no LoRA) | 86.3% [84.0, 88.3] | 19.0% |
| Regex (48 patterns) | Rule-based | 52.7% [49.6, 55.8] | n/a |

95% confidence intervals are Wilson score intervals. Beam q4_K_M's advantage over every commercial baseline is significant under pairwise McNemar's tests after Bonferroni correction (α = 0.05).
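The Wilson score interval is easy to reproduce; a minimal sketch that recovers the Beam q4_K_M row of the primary table (950 of 1,000 correct):

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z = 1.96 for 95%)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_ci(950, 1000)  # Beam q4_K_M on eval-1000-synthetic
print(f"95.0% [{lo:.1%}, {hi:.1%}]")  # -> 95.0% [93.5%, 96.2%]
```

The same function reproduces the other rows, e.g. 799/1,000 gives [77.3%, 82.3%] for the Claude Sonnet 4 row.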

External: eval-500-external (n = 500)

| Model | Cat. acc [95% CI] | Δ vs. primary |
|---|---|---|
| Beam q4_K_M | 93.8% [91.3, 95.6] | −1.2 pp |
| Beam f16 | 91.2% [88.4, 93.4] | −2.0 pp |
| Beam q8_0 | 91.2% [88.4, 93.4] | −1.8 pp |
| Claude Sonnet 4 | 86.4% [83.1, 89.1] | +6.5 pp |
| Gemini 2.5 Pro | 82.0% [78.4, 85.1] | +6.6 pp |
| Qwen 3.5 27B base | 86.6% [83.3, 89.3] | +0.3 pp |
| GPT-5 | 65.8% [61.5, 69.8] | −11.1 pp |
| Regex baseline | 29.6% [25.8, 33.7] | −23.1 pp |

Usage with Ollama

ollama pull torchsight/beam-q4_K_M
ollama run torchsight/beam-q4_K_M

Or use the TorchSight CLI for the full document-scanning workflow:

./install.sh
torchsight /path/to/scan
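The model can also be queried programmatically through Ollama's HTTP API. A minimal sketch: the endpoint, model name, and temperature/num_predict values come from this card, while the Alpaca-style instruction wording is a placeholder, not the exact prompt used in the paper.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(document_text: str) -> dict:
    """Payload mirroring the benchmark settings on this card
    (temperature = 0, num_predict = 2048)."""
    return {
        "model": "torchsight/beam-q4_K_M",
        "prompt": (
            "### Instruction:\nClassify the following document into a "
            "cybersecurity category and subcategory.\n\n"
            f"### Input:\n{document_text}\n\n### Response:\n"
        ),
        "stream": False,
        "options": {"temperature": 0, "num_predict": 2048},
    }

def classify(document_text: str) -> str:
    """Send one classification request to a locally running Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(document_text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()
```

This requires a local Ollama server with the model pulled as shown above; consult the benchmark repo for the exact prompt template if you need to reproduce the published numbers.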

Training

  • Base: Qwen 3.5 27B (dense)
  • Method: LoRA (r = 128, α = 256), bf16, 5 epochs
  • Dataset: 78,358 balanced samples (see torchsight/beam-training-data)
  • Hardware: 8× NVIDIA A100 80GB SXM4, 10.5 hours
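For a sense of what r = 128 means in parameters: a LoRA pair on a d_out × d_in weight trains r · (d_in + d_out) values, and the update is scaled by α/r. A toy calculation, where the 5,120 hidden dimension is a hypothetical example for illustration, not the actual model width:

```python
def lora_trainable_params(d_in: int, d_out: int, r: int) -> int:
    """Parameters in one LoRA pair: A is (r x d_in), B is (d_out x r)."""
    return r * (d_in + d_out)

r, alpha = 128, 256   # values from this card
d = 5120              # hypothetical hidden size, for illustration only
full = d * d          # parameters in the frozen square weight matrix
lora = lora_trainable_params(d, d, r)

print(f"scaling alpha/r = {alpha / r}")  # -> scaling alpha/r = 2.0
print(f"LoRA params per matrix: {lora:,} ({lora / full:.1%} of frozen)")
```

Even at a comparatively large r = 128, each adapted matrix trains only a few percent of the frozen weight's parameter count, which is what makes the 10.5-hour run above feasible.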

License

Apache 2.0. The base model (Qwen 3.5 27B) carries its own license; consult upstream terms for use.
