Join our Discord! https://discord.gg/BeaverAI
More than 8,000 members strong! A hub for users and makers alike!
Drummer is open for new opportunities: https://linktr.ee/thelocaldrummer
Thank you to everyone who subscribed through Patreon. Your support helps me chug along in this brave new world.
FAQ for those out-of-the-loop
Who is Drummer?
Hi! I'm Drummer. I'm a Software Engineer with experience in JavaScript, Golang, Python, and generally engineering the crap out of things.
Why I'm in the AI space:
- Exploration: Everyone is trying to figure out how AI works and what it's capable of. I am too - just not in creating the smartest, safest model at all costs.
- Upskill: The world is headed towards AI. It is here to stay. This has been my way of brushing up on this new kind of computing challenge.
- Value: I yearn to create value. I feel satisfaction and fulfillment in providing something meaningful for others.
- Fun: It's just fun using and making models. It's also fun coming up with theories and realizing them in practice (training AI).
I started my tuning venture back in mid-2024, when I set out to improve a model's literary capabilities. I've come a long way since then, and I have branched out and specialized. Foundational models today are optimized for non-creative uses, and I believe there is a place for AI in creativity and entertainment.
I am here to take the road less traveled by.
What are my models like?
Bottomline: My models are usually geared towards creativity, usability, and entertainment!
While intelligence, correctness, and problem solving are not my priority, they are still among the qualities I want in my models.
The primary goal is to enhance the experience for users looking to use models for creative purposes, and for other use cases that require no alignment.
In an effort to make it clear to myself and to others what I'm aiming for, I've identified certain qualities that my users often want:
Creativity
- Writing: Does it string together words and sentences in a pleasant & effective way? Does it feel like a writer?
- Dynamism: How good is the AI at being compelling and intriguing in its storytelling?
- Imagination: Can the AI navigate through a plethora of possibilities? Can it skirt incoherence and rise up to absolute coherence at the end of it?
(Dis)alignment
- Attitude: Does it refuse in both soft and hard ways? Does it lean towards certain corporate/religious/political ethics & beliefs? How does it see the user and itself?
- Morality: Does it know ethics? Is its language infected with forced positivity? If not, can it still moralize over difficult & dubious themes?
- Formatting: How stubborn is it with its established formatting? Can it create effective and novel formats to answer the prompt?
Intelligence
- Adherence: Can it follow instructions? Is it sticking to the prompt? Can it understand you?
- Knowledge: Does it know about the world in both fictional and non-fictional ways?
- Perception: Can it handle nuance, complexity, and logic?
If it doesn't excel in one of these qualities, or if it's overall mediocre for its size, then I will most likely iterate until I get something right.
Philosophy
A person is defined by the language they use. Not whether they speak in English or German, but in how they perceive reality.
Just as we think of a serial killer as a mind that can't map 'murder' to 'evil', an innocent person is a mind that simply can't imagine 'murder' and gets confused when forced to deal with such subjects.
An AI's use of language speaks volumes about its 'perception' of reality. If a language model has been skewed and limited to a positive perception, then its ability to imagine is also limited.
Finetuning is an opportunity to adjust and broaden the language. Corporations use it to achieve safety and compliance. I'm here to ACK-
TheDrummer proudly presents...
Precog 123B v1
Description
Precog is a 'reasoning' model that thinks in a different way. Instead of breaking down the question and working toward a solution, it provides a short overview, or draft, of the response it is about to write.
The intention is to have the model write a simple, digestible draft and then use that draft to write the actual, complicated response. The draft can be prefilled and edited to influence the actual response. I've always wondered what would happen if the model had a solid basis for what it's about to write (as an actual response).
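Since the draft lives in an ordinary think block, a frontend can split it out before showing the response. A minimal sketch (the helper name and the sample strings are illustrative, not part of the release), assuming the draft sits in a single <think>...</think> block at the start of the output:

```python
import re

def split_draft(raw: str) -> tuple[str, str]:
    """Split a model response into (draft, final_response).

    Assumes the draft/overview is wrapped in one <think>...</think>
    block at the start of the output, as described above.
    """
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", raw, re.DOTALL)
    if match is None:
        return "", raw.strip()  # model skipped the draft entirely
    draft, response = match.groups()
    return draft.strip(), response.strip()

raw_output = "<think>Short outline: greet, then set the scene.</think>The tavern door creaked open..."
draft, response = split_draft(raw_output)
print(draft)     # Short outline: greet, then set the scene.
print(response)  # The tavern door creaked open...
```

The same split point is where you would pause, edit the draft, and feed it back as a prefill if you want to steer the final response.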
Strengths
- Improved narrative flow and storytelling.
- Provides a short draft you can inspect / modify before the AI writes the actual response, saving time and effort.
- Allows the model to plan out the response without spending too many tokens on it.
- Possibly better prompt adherence / instruction following.
- Uses the standard <think> format, meaning it's plug & play for most frontends once configured for reasoning.
Weaknesses
- The model's 'reasoning' / draft might not always align with the actual response.
- Conventional reasoning like Behemoth R1 / Cydonia R1 might do better in detecting and portraying nuances.
Usage
- Mistral v7 Non-Tekken + [SYSTEM_PROMPT]
- Prefill <think> in case it doesn't think.
- Make sure any additional prefills inside <think> work well with the new reasoning pattern.
- Like my other reasoning tunes, you can invent new think tags like <evil_think> or <slowburn_think> to influence how it should write.
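One way to do the prefill steps above is to start the assistant turn with an open think tag and a partial draft. A sketch of such a request payload (the system/user text and the edited draft are illustrative; how a prefilled assistant message gets continued depends on your frontend and backend, so treat the whole shape as an assumption):

```python
import json

# Illustrative prefill: an open <think> tag plus the start of a draft.
# A custom tag like <slowburn_think> can be swapped in to steer the style.
draft_prefill = (
    "<think>\n"
    "Slow-burn opening, stay in character, end on a cliffhanger.\n"
)

payload = {
    "model": "TheDrummer/Precog-123B-v1",
    "messages": [
        {"role": "system", "content": "You are a creative co-writer."},
        {"role": "user", "content": "Continue the story from the tavern scene."},
        # Leaving the think tag open asks the model to finish the draft
        # on your terms before writing the actual response.
        {"role": "assistant", "content": draft_prefill},
    ],
}

print(json.dumps(payload, indent=2))
```

Nothing here is sent anywhere; it only shows where the prefill lives in a chat-style message list.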
Links
- Original: https://huggingface.co/TheDrummer/Precog-123B-v1
- GGUF: https://huggingface.co/TheDrummer/Precog-123B-v1-GGUF
- iMatrix (recommended): https://huggingface.co/bartowski/TheDrummer_Precog-123B-v1-GGUF
- EXL3: https://huggingface.co/ArtusDev/TheDrummer_Precog-123B-v1-EXL3
Feedback
This is really good. Adherence is good and creativity is there. I liked this one better than X and ReduX. Maybe on par with R1, but the short thinks really work for me. It's now my go-to for when I want something a bit more creative than GLM-4.6.
The smarts are insane on this. Creativity is very good.
This model is really awesome. Just like my previous experience with Behemoth, it actually keeps a flowing story narrative with character development and good dialogue and pacing.
I am not an expert, but I am popping up to say this is quite good. It makes smart decisions; feels like it's logicing + emotioning!
Seems good. Overall it feels like an upgrade over BX for me, subjectively, in terms of writing style; very few derps so far, and characters stay in character. Logic and pacing are generally sound. It occasionally had issues with not closing think tags and writing the entire response inside think, but that was fixable with a regen in most cases... aside from the one test where I had it continue a story from another (non-thinking) model at 37k context (that one took multiple regens to get working, but the response was very good, better than BX). It also wrote a solid continuation to that story even without think, so I'd consider it a win regardless. Seems like short think on large models was a good call.