---
base_model: agentrl/ReSearch-Qwen-7B
datasets:
- RUC-NLPIR/FlashRAG_datasets
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher, Manojb
pipeline_tag: text-generation
tags:
- function-calling
- tool-calling
- codex
- local-llm
- gguf
- 6gb-vram
- ollama
- code-assistant
- api-tools
- openai-alternative
---
This is a packaged Q8_0-only build of https://huggingface.co/mradermacher/ReSearch-Qwen-7B-GGUF that runs on 9-12 GB of VRAM with effectively no quality loss.
Weighted/imatrix quants are available at https://huggingface.co/mradermacher/ReSearch-Qwen-7B-i1-GGUF.
This is a base model: do not apply a chat-completion template.
## Setup
Install Ollama:
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
Go into your working folder and download the model:
```bash
# make sure you have Python 3.8+
# apt-get update && apt-get install -y libcurl4-openssl-dev build-essential curl
pip install huggingface-hub ollama
huggingface-cli download Manojb/Qwen-7B-toolcalling-ReSearch-gguf-Q8_0 --local-dir Qwen-7B-toolcalling-ReSearch-gguf-Q8_0
cd "$(find . -type d -iname '*Qwen-7B-toolcalling-ReSearch-gguf-Q8_0*' | head -n 1)"
source run_model.sh
```
Or
```bash
# Build the Ollama model from the downloaded GGUF and run it
ollama create qwen-7b:toolcall -f ModelFile
ollama run qwen-7b:toolcall  # base model: no chat-completion template applied
```
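If the downloaded folder does not already contain a `ModelFile`, a minimal one can be generated as sketched below. The GGUF filename, context length, and temperature are assumptions, not values shipped with this repository; adjust them to the file you actually downloaded.
```python
# Sketch: write a minimal Ollama Modelfile and build the model from a local GGUF.
# The GGUF filename and the parameter values below are assumptions.
import subprocess
from pathlib import Path

modelfile = """\
FROM ./Qwen-7B-toolcalling-ReSearch-gguf-Q8_0.gguf
PARAMETER num_ctx 8192
PARAMETER temperature 0.7
"""

Path("ModelFile").write_text(modelfile)

# Equivalent to: ollama create qwen-7b:toolcall -f ModelFile
subprocess.run(["ollama", "create", "qwen-7b:toolcall", "-f", "ModelFile"], check=True)
```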
### Basic Function Calling
For the base model (this model):
```bash
curl http://localhost:11434/api/generate -H "Content-Type: application/json" -d '{
"model": "qwen-7b:toolcall",
"prompt": "Get the current weather in San Francisco and convert to Celsius",
"stream": false
}'
```
```python
# Query the Ollama generate endpoint with a raw prompt (no chat template)
import requests

response = requests.post('http://localhost:11434/api/generate', json={
    'model': 'qwen-7b:toolcall',
    'prompt': 'Get the current weather in San Francisco and convert to Celsius',
    'stream': False
})
print(response.json()['response'])
```
For instruct models (via the chat endpoint):
```bash
curl http://localhost:11434/api/chat -d '{
"model": "llama3.2",
"stream": false,
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Why is the sky blue?"}
]
}'
```
```python
from ollama import chat
# Your custom model name here
model_name = "qwen-7b:toolcall"
messages = [
    {"role": "system", "content": "You are an instruct model."},
    {"role": "user", "content": "Explain how to use this custom model in Python."}
]
response = chat(model=model_name, messages=messages)
print(response.message.content)
```
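Since this card is tagged for function/tool calling, here is a hedged sketch of passing a tool schema through the Ollama chat API. The `get_weather` tool and its schema are illustrative only, and whether the model returns structured `tool_calls` (rather than describing the call in plain text) depends on the model and template in use.
```python
# Sketch: tool calling via the Ollama Python client (chat endpoint).
# The get_weather tool and its JSON schema are hypothetical examples.
from ollama import chat

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"],
        },
    },
}]

response = chat(
    model="qwen-7b:toolcall",
    messages=[{"role": "user", "content": "What is the weather in San Francisco?"}],
    tools=tools,
)

# Structured tool calls, if any; otherwise inspect the plain text output.
print(response.message.tool_calls or response.message.content)
```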
## About ReSearch
***ReSearch*** is a novel framework that trains LLMs to ***Re***ason with ***Search*** via reinforcement learning, without using any supervised data on reasoning steps. The approach treats search operations as integral components of the reasoning chain: when and how to perform searches is guided by text-based thinking, and search results in turn influence further reasoning.
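As a rough illustration of that loop (not the official ReSearch inference code), the sketch below stops generation when the model opens a search tag, runs a placeholder retrieval step, and feeds the result back into the prompt. The `<search>`/`<result>` tag names, the stop string, and `run_search()` are assumptions for demonstration only.
```python
# Sketch of a reason-with-search loop against the Ollama generate endpoint.
# The <search>/<result> tags, stop string, and run_search() are assumptions,
# not the exact protocol used by the ReSearch training setup.
import requests

def run_search(query: str) -> str:
    # Placeholder retriever: plug in your own search backend here.
    return f"(no retriever configured; query was: {query})"

def generate(prompt: str) -> str:
    resp = requests.post("http://localhost:11434/api/generate", json={
        "model": "qwen-7b:toolcall",
        "prompt": prompt,
        "stream": False,
        "options": {"stop": ["</search>"]},
    })
    return resp.json()["response"]

prompt = (
    "Question: Who wrote the paper that introduced the Transformer?\n"
    "Think step by step, and use <search>query</search> when you need external information.\n"
)

for _ in range(4):  # limit the number of search rounds
    chunk = generate(prompt)
    prompt += chunk
    if "<search>" in chunk:
        query = chunk.rsplit("<search>", 1)[1].strip()
        prompt += f"</search>\n<result>{run_search(query)}</result>\n"
    else:
        break

print(prompt)
```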
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
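For multi-part GGUF files, the parts can simply be concatenated in order. This Q8_0 package ships as a single file, so this is usually not needed here; the part filenames below are hypothetical.
```python
# Sketch: concatenate split GGUF parts into one file (byte-for-byte append).
# The part filenames are hypothetical; this repo's Q8_0 is a single file.
import shutil

parts = [
    "ReSearch-Qwen-7B.Q8_0.gguf.part1of2",
    "ReSearch-Qwen-7B.Q8_0.gguf.part2of2",
]

with open("ReSearch-Qwen-7B.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```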
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ReSearch-Qwen-7B-GGUF/resolve/main/ReSearch-Qwen-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/ReSearch-Qwen-7B-GGUF/resolve/main/ReSearch-Qwen-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/ReSearch-Qwen-7B-GGUF/resolve/main/ReSearch-Qwen-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ReSearch-Qwen-7B-GGUF/resolve/main/ReSearch-Qwen-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/ReSearch-Qwen-7B-GGUF/resolve/main/ReSearch-Qwen-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ReSearch-Qwen-7B-GGUF/resolve/main/ReSearch-Qwen-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReSearch-Qwen-7B-GGUF/resolve/main/ReSearch-Qwen-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReSearch-Qwen-7B-GGUF/resolve/main/ReSearch-Qwen-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/ReSearch-Qwen-7B-GGUF/resolve/main/ReSearch-Qwen-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/ReSearch-Qwen-7B-GGUF/resolve/main/ReSearch-Qwen-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ReSearch-Qwen-7B-GGUF/resolve/main/ReSearch-Qwen-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ReSearch-Qwen-7B-GGUF/resolve/main/ReSearch-Qwen-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
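To fetch one specific quant from the table above instead of this repo's Q8_0 package, something like the following should work; the repo id and filename are taken from the Q4_K_M link in the table.
```python
# Sketch: download a single quant from the mradermacher repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/ReSearch-Qwen-7B-GGUF",
    filename="ReSearch-Qwen-7B.Q4_K_M.gguf",  # pick any file from the table above
    local_dir=".",
)
print(path)
```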
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
