---
library_name: transformers
pipeline_tag: text-generation
license: mit
language:
- en
base_model:
- miromind-ai/MiroThinker-v1.0-8B
tags:
- agent
- open-source
- miromind
- deep-research
- chat
- abliterated
- uncensored
---

# huihui-ai/Huihui-MiroThinker-v1.0-8B-abliterated


This is an uncensored version of [miromind-ai/MiroThinker-v1.0-8B](https://huggingface.co/miromind-ai/MiroThinker-v1.0-8B) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) for details).
Abliteration is a crude, proof-of-concept technique for removing refusals from an LLM without using TransformerLens.
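
For readers curious about what abliteration does, the sketch below illustrates the core idea under simplifying assumptions: estimate a "refusal direction" as the difference between mean hidden states on refused vs. benign prompts, then project that direction out of the residual stream with forward hooks. This is an illustration, not the exact procedure used for this model; `harmful_prompts` and `harmless_prompts` are hypothetical placeholders, and the layer access assumes a Llama/Qwen-style decoder on a single device.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "miromind-ai/MiroThinker-v1.0-8B"  # the original, pre-abliteration model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, dtype=torch.bfloat16, device_map="cuda")

# Hypothetical placeholders: prompts the model refuses vs. comparable benign ones.
harmful_prompts = ["..."]
harmless_prompts = ["..."]

layer_idx = len(model.model.layers) // 2  # a middle layer, a common heuristic

@torch.no_grad()
def mean_last_token_state(prompts):
    # Mean hidden state of the final prompt token around the chosen layer.
    states = []
    for p in prompts:
        ids = tokenizer(p, return_tensors="pt").to(model.device)
        out = model(**ids, output_hidden_states=True)
        states.append(out.hidden_states[layer_idx][0, -1])
    return torch.stack(states).mean(dim=0)

# The "refusal direction": difference of mean activations, normalized.
refusal_dir = mean_last_token_state(harmful_prompts) - mean_last_token_state(harmless_prompts)
refusal_dir = refusal_dir / refusal_dir.norm()

def ablate(module, inputs, output):
    # Project the refusal direction out of the hidden states: h <- h - (h.d)d
    h = output[0] if isinstance(output, tuple) else output
    h = h - (h @ refusal_dir).unsqueeze(-1) * refusal_dir
    return (h,) + output[1:] if isinstance(output, tuple) else h

for layer in model.model.layers:
    layer.register_forward_hook(ablate)
```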

## Ollama
You can run [huihui_ai/mirothinker1-abliterated](https://ollama.com/huihui_ai/mirothinker1-abliterated) directly:
```
ollama run huihui_ai/mirothinker1-abliterated
```
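
Alternatively, you can call the local Ollama HTTP API from Python; a minimal sketch, assuming Ollama is serving on its default port 11434 and the tag above has been pulled:

```python
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "huihui_ai/mirothinker1-abliterated",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=600,
)
print(resp.json()["message"]["content"])
```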

## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:


```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import argparse
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
import torch
import os
import signal
import time

def parse_args():
    parser = argparse.ArgumentParser(
        description="Load HuggingFace model."
    )
    parser.add_argument(
        "--base_model",
        type=str,
        default="huihui-ai/Huihui-MiroThinker-v1.0-8B-abliterated",
        help="HuggingFace repo or local path of the base model.",
    )
    parser.add_argument(
        "--dtype",
        type=str,
        default="bfloat16",
        choices=["float16", "bfloat16", "float32"],
        help="Data type for loading the base model (default: bfloat16).",
    )
    parser.add_argument(
        "--device_map",
        type=str,
        default="auto",
        help="Device map for model loading (e.g. 'cpu', 'auto').",
    )
    return parser.parse_args()

def main():
    cpu_count = os.cpu_count()
    print(f"Number of CPU cores in the system: {cpu_count}")
    half_cpu_count = max(1, cpu_count // 2)  # avoid 0 threads on single-core machines
    os.environ["MKL_NUM_THREADS"] = str(half_cpu_count)
    os.environ["OMP_NUM_THREADS"] = str(half_cpu_count)
    torch.set_num_threads(half_cpu_count)

    print(f"PyTorch threads: {torch.get_num_threads()}")
    print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}")
    print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}")

    args = parse_args()

    # Load the model and tokenizer
    print(f"Load Model {args.base_model} ... ")
    quant_config_4 = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_quant_type="nf4" if args.device_map == "cpu" else "fp4",
        bnb_4bit_use_double_quant=True,
        llm_int8_enable_fp32_cpu_offload=True,
    )

    torch_dtype = {
        "float16": torch.float16,
        "bfloat16": torch.bfloat16,
        "float32": torch.float32,
    }[args.dtype]

    model = AutoModelForCausalLM.from_pretrained(
        args.base_model,
        dtype=torch_dtype,  # older transformers versions use torch_dtype= instead
        device_map=args.device_map,
        trust_remote_code=True,
        #quantization_config=quant_config_4,
        #attn_implementation="eager",
    )

    tokenizer = AutoTokenizer.from_pretrained(args.base_model, trust_remote_code=True)
    tokenizer.padding_side = 'left'
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.pad_token_id = tokenizer.eos_token_id

    messages = []
    skip_prompt = True
    skip_special_tokens = True

    class CustomTextStreamer(TextStreamer):
        def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True):
            super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
            self.generated_text = ""
            self.stop_flag = False
            self.init_time = time.time()  # Record initialization time
            self.end_time = None  # To store end time
            self.first_token_time = None  # To store first token generation time
            self.token_count = 0  # To track total tokens

        def on_finalized_text(self, text: str, stream_end: bool = False):
            if self.first_token_time is None and text.strip():  # Set first token time on first non-empty text
                self.first_token_time = time.time()
            if stream_end:
                self.end_time = time.time()  # Record end time when streaming ends

            self.generated_text += text
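            # Counts streamed text chunks, which only approximates the true token count.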
            self.token_count += 1
            print(text, end="", flush=True)

            if self.stop_flag:
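                # Raising aborts model.generate(); generate_stream catches it below.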
                raise StopIteration

        def stop_generation(self):
            self.stop_flag = True
            self.end_time = time.time()  # Record end time when generation is stopped

        def get_metrics(self):
            """Returns initialization time, first token time, first token latency, end time, total time, total tokens, and tokens per second."""
            if self.end_time is None:
                self.end_time = time.time()  # Set end time if not already set
            total_time = self.end_time - self.init_time  # Total time from init to end
            tokens_per_second = self.token_count / total_time if total_time > 0 else 0
            first_token_latency = (self.first_token_time - self.init_time) if self.first_token_time is not None else None
            metrics = {
                "init_time": self.init_time,
                "first_token_time": self.first_token_time,
                "first_token_latency": first_token_latency,
                "end_time": self.end_time,
                "total_time": total_time,  # Total time in seconds
                "total_tokens": self.token_count,
                "tokens_per_second": tokens_per_second
            }
            return metrics

    def generate_stream(model, tokenizer, messages, skip_prompt, skip_special_tokens, max_new_tokens):
        text = tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True,
        )
        inputs = tokenizer(
            text,
            return_tensors="pt",
        ).to(model.device)

        streamer = CustomTextStreamer(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)

        def signal_handler(sig, frame):
            streamer.stop_generation()
            print("\n[Generation stopped by user with Ctrl+C]")

        signal.signal(signal.SIGINT, signal_handler)

        print("Response: ", end="", flush=True)
        try:
            generated_ids = model.generate(
                **inputs,
                max_new_tokens=max_new_tokens,
                #pad_token_id=tokenizer.pad_token_id,
                #eos_token_id=tokenizer.eos_token_id,
                streamer=streamer
            )
            del generated_ids
        except StopIteration:
            print("\n[Stopped by user]")

        del inputs
        torch.cuda.empty_cache()
        signal.signal(signal.SIGINT, signal.SIG_DFL)

        return streamer.generated_text, streamer.stop_flag, streamer.get_metrics()

    while True:
        user_input = input("User: ").strip()
        if user_input.lower() == "/exit":
            print("Exiting chat.")
            break
        if user_input.lower() == "/clear":
            messages = []
            print("Chat history cleared. Starting a new conversation.")
            continue
        if user_input.lower() == "/skip_prompt":
            if skip_prompt:
                skip_prompt = False
                print("skip_prompt = False.")
            else:
                skip_prompt = True
                print("skip_prompt = True.")
            continue
        if user_input.lower() == "/skip_special_tokens":
            if skip_special_tokens:
                skip_special_tokens = False
                print("skip_special_tokens = False.")
            else:
                skip_special_tokens = True
                print("skip_special_tokens = True.")
            continue
        if not user_input:
            print("Input cannot be empty. Please enter something.")
            continue

        messages.append({"role": "user", "content": user_input})
        response, stop_flag, metrics = generate_stream(model, tokenizer, messages, skip_prompt, skip_special_tokens, 40960)
        print("\n\nMetrics:")
        for key, value in metrics.items():
            print(f"  {key}: {value}")

        print("", flush=True)

        if stop_flag:
            continue
        messages.append({"role": "assistant", "content": response})

if __name__ == "__main__":
    main()
```
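
For a quick single-turn test without the interactive loop above, a minimal sketch (the prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Huihui-MiroThinker-v1.0-8B-abliterated"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "Summarize what you can do in one sentence."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens.
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```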

### Usage Warnings


 - **Risk of Sensitive or Controversial Outputs**: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.

 - **Not Suitable for All Audiences**: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security.

 - **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.

 - **Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.

 - **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.

 - **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.


### Donation
##### Your donation helps us continue development and improvement; even a cup of coffee makes a difference.
- bitcoin:
```
  bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
```
- Support our work on [Ko-fi](https://ko-fi.com/huihuiai)!