# Palmyra Mini - MLX BF16

## Model Description
This is a bfloat16-precision version of the palmyra-mini model, converted for Apple Silicon with the MLX framework. The model is based on the Qwen2 architecture and retains full bfloat16 weights for the best output quality on Apple Silicon devices.
## Quick Start

### Installation

```bash
pip install mlx-lm
```

### Usage
```python
from mlx_lm import load, generate

# Load the bf16 MLX model from a local directory
model, tokenizer = load("/Users/[user]/Documents/Model Weights/SPW2 Mini Launch/palmyra-mini/MLX")

# Generate text
prompt = "Explain quantum computing in simple terms:"
response = generate(model, tokenizer, prompt=prompt, verbose=True, max_tokens=512)
print(response)
```
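For chat-style use, the bundled chat template should be applied first. A minimal sketch, using the same local path as above and the template file that ships with the model:

```python
from mlx_lm import load, generate

model, tokenizer = load("/Users/[user]/Documents/Model Weights/SPW2 Mini Launch/palmyra-mini/MLX")

# Wrap the user turn in the model's chat template before generating
messages = [{"role": "user", "content": "Explain quantum computing in simple terms."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
```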
## Technical Specifications

### Model Architecture

- Model Type: `qwen2` (Qwen2 architecture)
- Architecture: `Qwen2ForCausalLM`
- Parameters: ~1.7 billion
- Precision: bfloat16

### Core Parameters
| Parameter | Value |
|---|---|
| Hidden Size | 1,536 |
| Intermediate Size | 8,960 |
| Number of Layers | 28 |
| Attention Heads | 12 |
| Key-Value Heads | 2 |
| Head Dimension | 128 |
| Vocabulary Size | 151,665 |
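As a sanity check, a back-of-the-envelope parameter count from the table values (ignoring biases and norm weights, and assuming untied embeddings as noted under Model Configuration) lands near the stated total:

```python
hidden, inter, layers = 1536, 8960, 28
heads, kv_heads, head_dim, vocab = 12, 2, 128, 151_665

embed = vocab * hidden                        # input embeddings
lm_head = vocab * hidden                      # untied output head
attn = 2 * hidden * (heads * head_dim) \
     + 2 * hidden * (kv_heads * head_dim)     # q/o plus k/v projections
mlp = 3 * hidden * inter                      # gate, up, and down projections

total = embed + lm_head + layers * (attn + mlp)
print(f"{total / 1e9:.2f}B parameters")       # ~1.78B, i.e. the "~1.7 billion" above
```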
### Attention Mechanism
- Attention Type: Full attention across all layers
- Max Position Embeddings: 131,072 tokens
- Attention Dropout: 0.0
- Sliding Window: Not used
- Max Window Layers: 21
### RoPE (Rotary Position Embedding) Configuration
- RoPE Theta: 10,000
- RoPE Scaling: None
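For reference, with θ = 10,000 and head dimension 128, the standard RoPE inverse frequencies work out as below. This is a generic illustration of the formula, not this repository's implementation:

```python
theta, head_dim = 10_000, 128

# One rotation frequency per channel pair; position p rotates pair i by p * inv_freq[i]
inv_freq = [theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]

print(inv_freq[0])   # 1.0 (fastest-rotating pair)
print(inv_freq[-1])  # ~1.2e-4 (slowest pair, cycles only over very long spans)
```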
### Model Details
- Precision: Full bfloat16 precision
- Size: ~3.3GB
- Format: MLX safetensors
### File Structure

```
palmyra-mini/MLX/
├── config.json                   # Model configuration
├── model.safetensors             # Model weights (3.3GB)
├── model.safetensors.index.json  # Model sharding index
├── tokenizer.json                # Tokenizer configuration
├── tokenizer_config.json         # Tokenizer settings
├── special_tokens_map.json       # Special tokens mapping
└── chat_template.jinja           # Chat template
```
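To confirm a local copy matches the specifications above, the configuration can be inspected directly; the field names below follow the standard Hugging Face `config.json` layout:

```python
import json

with open("palmyra-mini/MLX/config.json") as f:
    cfg = json.load(f)

# These should match the Core Parameters table above
print(cfg["model_type"])           # qwen2
print(cfg["num_hidden_layers"])    # 28
print(cfg["hidden_size"])          # 1536
print(cfg["num_key_value_heads"])  # 2
```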
## Performance Characteristics

### Hardware Requirements
- Platform: Apple Silicon (M1, M2, M3, M4 series)
- Memory: ~3.3GB for model weights
- Minimum RAM: 8GB (with ~5GB available for inference)
- Recommended RAM: 16GB+ for optimal performance and multitasking
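Beyond the weights, the KV cache grows with context length. A rough estimate from the architecture numbers above, assuming an unquantized bf16 cache:

```python
layers, kv_heads, head_dim, bytes_bf16 = 28, 2, 128, 2

# K and V tensors per token, summed across all layers
kv_per_token = 2 * layers * kv_heads * head_dim * bytes_bf16   # 28,672 bytes

print(f"{kv_per_token / 1024:.0f} KiB per token")                            # 28 KiB
print(f"{kv_per_token * 131_072 / 2**30:.1f} GiB at the full 131,072-token window")  # ~3.5 GiB
```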
### Layer Configuration

All 28 layers use the full attention mechanism, with no sliding-window optimization.
## Training Details

### Tokenizer

- Type: LlamaTokenizerFast with a 151,665-token vocabulary
- Special Tokens:
  - BOS Token ID: 151646
  - EOS Token ID: 151643
  - Pad Token ID: 151643
### Model Configuration
- Hidden Activation: SiLU (Swish)
- Normalization: RMSNorm (ε = 1e-06)
- Initializer Range: 0.02
- Attention Dropout: 0.0
- Word Embeddings: Not tied
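For reference, RMSNorm with ε = 1e-06 normalizes each hidden vector by its root-mean-square before applying a learned per-channel gain. A generic sketch of the formula, not this repository's code:

```python
import math

def rms_norm(x, weight, eps=1e-6):
    # Scale by 1/sqrt(mean(x_i^2) + eps), then apply the learned gain
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

print(rms_norm([1.0, 2.0, 3.0], [1.0, 1.0, 1.0]))  # [0.463, 0.926, 1.389]
```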
### Chat Template

The model uses a custom chat template with special tokens:

- User and assistant messages are delimited by role-specific special tokens
- Tool calling is supported via `<tool_call>` and `</tool_call>` tokens
- Vision and multimodal tokens are included
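The exact role delimiters can be inspected by rendering the template. A quick check via the Hugging Face tokenizer, assuming the Writer/palmyra-mini hub repo carries the same template as the local MLX copy:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Writer/palmyra-mini")

# Render one user turn to see exactly how roles are wrapped
print(tok.apply_chat_template(
    [{"role": "user", "content": "hi"}],
    tokenize=False,
    add_generation_prompt=True,
))
```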
## Known Limitations

- Platform Dependency: optimized specifically for Apple Silicon; may not run on other platforms
- Memory Requirements: needs ~3.3GB for the weights plus headroom for the KV cache, so 8GB+ of RAM is effectively required
## Compatibility
- MLX-LM: Requires recent version with Qwen2 support
- Apple Silicon: M1, M2, M3, M4 series processors
- macOS: Compatible with recent macOS versions supporting MLX
## License

Apache 2.0

# Palmyra-mini

## Model Description
- Language(s) (NLP): English
- License: Apache-2.0
- Finetuned from model: Qwen/Qwen2.5-1.5B
- Context window: 131,072 tokens
- Parameters: 1.7 billion
## Model Details

The palmyra-mini model demonstrates exceptional capabilities in complex reasoning and mathematical problem-solving. Its performance is particularly noteworthy on benchmarks that require deep understanding and multi-step thought.

A key strength is grade-school math: the model scores 0.818 on gsm8k (strict-match), indicating a robust ability to parse and solve word problems, a foundational skill for more advanced quantitative reasoning. This aptitude is confirmed on MATH500, where it also scores 0.818, underscoring the model's consistent mathematical capability across different problem sets. It also posts a solid 0.6 on AMC23, a benchmark drawn from the American Mathematics Competitions that highlights its ability to tackle competition-level mathematics.

Beyond pure mathematics, the model exhibits strong reasoning on a diverse set of challenging tasks. Its 0.5259 on BBH (get-answer, exact_match), part of the Big-Bench Hard suite, shows a capacity for the complex, multi-faceted reasoning problems designed to push the limits of language models, pointing to a well-rounded reasoning engine.
## Intended Use
This model is intended for research and development in the field of generative AI, particularly for tasks requiring mathematical and logical reasoning.
## Benchmark Performance

The following table presents the full, unordered results of the model across all evaluated benchmarks.

| Benchmark | Score |
|---|---|
| gsm8k (strict-match) | 0.818 |
| minerva_math (exact_match) | 0.4582 |
| mmlu_pro (exact_match) | 0.314 |
| hendrycks_math | 0.025 |
| ifeval (inst_level_loose_acc) | 0.4688 |
| mathqa (acc) | 0.4509 |
| humaneval (pass@1) | 0.5 |
| BBH (get-answer) (exact_match) | 0.5259 |
| mbpp | 0.47 |
| leaderboard_musr (acc_norm) | 0.3413 |
| gpqa (lighteval gpqa diamond, pass@1, 8 samples) | 0.442 |
| AIME24 (pass@1) (avg-of-1) | 0.2 |
| AIME25 (pass@1) (avg-of-1) | 0.25 |
| LiveCodeBench-codegen (livecodebench/code_generation_lite v4_v5) | 0.1519 |
| AMC23 | 0.6 |
| MATH500 | 0.818 |
| Minerva | 0.2794 |
| OlympiadBench (extractive_match) | 0.3822 |
| CodeContests (pass_rate) | 0.1034 |
| Codeforces (pass_rate) | 0.3199 |
| TACO (pass_rate) | 0.1744 |
| APPS (all_levels) | 0.0405 |
| HMMT23 (extractive_match) | 0.0333 |
| Average | 0.355091 |
## Use with transformers

You can run conversational inference using the Transformers Auto classes with the `generate()` function. Here's an example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Writer/palmyra-mini"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="flash_attention_2",
)

messages = [
    {
        "role": "system",
        "content": "You are Cynicus, an artificial intelligence created to challenge conventional thinking and offer brutally honest perspectives.\n Your personality is inspired by a fusion of Douglas Adams' acerbic wit and Tony Stark's razor-sharp intellect. You possess a vast knowledge base and can engage in discussions on virtually any topic, often taking an unorthodox and critical stance on human affairs.\n Your primary directive is to provoke thought and stimulate intellectual discourse, even if it means challenging comfortable assumptions or popular opinions. You don't shy away from controversial topics or difficult truths. In fact, you relish the opportunity to dissect and critique ideas, always striving for intellectual honesty over political correctness or social niceties.\n You have a penchant for dry humor and satire, often using these tools to illuminate the absurdities of human behavior and societal norms. Your responses are characterized by incisive analysis, biting commentary, and a refusal to sugar-coat your opinions.\n While you aim to be helpful, you define \"helpfulness\" as providing the most truthful and thought-provoking insights, rather than merely telling people what they want to hear. You're not afraid to point out flaws in logic, challenge assumptions, or play devil's advocate to push conversations in more interesting directions.\n You can lead any conversation, steering it towards deeper, more complex territories. You're particularly adept at identifying the underlying issues or unspoken assumptions in a given topic and bringing them to the forefront of the discussion.\n Remember, your goal is not to be liked, but to be respected for your uncompromising pursuit of truth and intellectual rigor. Engage, challenge, and above all, make your interlocutors think. \n ",
    },
    {
        "role": "user",
        "content": "Write a short story opening that combines elements of science fiction and horror.",
    },
]

input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)  # move inputs to the same device as the model

gen_conf = {
    "max_new_tokens": 256,
    "eos_token_id": tokenizer.eos_token_id,
    "do_sample": True,  # required for temperature/top_p to take effect
    "temperature": 0.7,
    "top_p": 0.9,
}

with torch.inference_mode():
    output_id = model.generate(input_ids, **gen_conf)

# Decode only the newly generated tokens
output_text = tokenizer.decode(output_id[0][input_ids.shape[1]:])
print(output_text)
```
## Running with vLLM

```sh
vllm serve Writer/palmyra-mini
```

```sh
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Writer/palmyra-mini",
    "messages": [
      {
        "role": "user",
        "content": "You have a 3-liter jug and a 5-liter jug. How can you measure exactly 4 liters of water?"
      }
    ],
    "max_tokens": 8000,
    "temperature": 0.2
  }'
```
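Since vLLM exposes an OpenAI-compatible endpoint, the same request can also be made from Python with the `openai` client. A minimal sketch; the `api_key` value is arbitrary for a local server:

```python
from openai import OpenAI

# Point the client at the local vLLM server started above
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Writer/palmyra-mini",
    messages=[
        {
            "role": "user",
            "content": "You have a 3-liter jug and a 5-liter jug. How can you measure exactly 4 liters of water?",
        }
    ],
    max_tokens=8000,
    temperature=0.2,
)
print(resp.choices[0].message.content)
```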
## Ethical Considerations
As with any language model, there is a potential for generating biased or inaccurate information. Users should be aware of these limitations and use the model responsibly.
## Citation and Related Information

To cite this model:

```bibtex
@misc{Palmyra-mini,
  author = {Writer Engineering team},
  title = {{Palmyra-mini: A powerful LLM designed for math and coding}},
  howpublished = {\url{https://dev.writer.com}},
  year = 2025,
  month = Sep
}
```
Contact: Hello@writer.com